arXiv:2307.03944v1 [quant-ph] (8 July 2023). Enhanced Strong Coupling between Spin Ensemble and non-Hermitian Topological Edge States. Jie Qian, Jie Li, Shi-Yao Zhu, J. Q. You, and Yi-Pu Wang. Categories: quant-ph, cond-mat.mes-hall, physics.optics.
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China
Hefei National Laboratory, Hefei 230088, China
[email protected]
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China
[email protected]
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China
Light-matter interaction is crucial to both understanding fundamental phenomena and developing versatile applications. Strong coupling, robustness, and controllability are the three most important aspects in realizing light-matter interactions. Topological and non-Hermitian photonics have provided frameworks for robustness and extensive control freedom, respectively. How to engineer the properties of edge states, such as the photonic density of states and scattering parameters, through non-Hermitian engineering while ensuring topological protection has not been fully studied. Here we construct a parity-time-symmetric dimerized photonic lattice and generate complex-valued edge states via spontaneous PT-symmetry breaking. The enhanced strong coupling between the topological photonic edge mode and the magnon mode in a ferromagnetic spin ensemble is demonstrated. Our research reveals the subtle non-Hermitian topological edge states and provides strategies for realizing and engineering topological light-matter interactions.
Enhanced Strong Coupling between Spin Ensemble and non-Hermitian Topological Edge States
Yi-Pu Wang
August 12, 2023
========================================================================================
Introduction.—Topology has evolved as a powerful governing principle for predicting and harnessing the robust propagation of currents in various systems, including condensed matter systems <cit.>, acoustics <cit.>, mechanics <cit.>, and photonics <cit.>. In topological photonics, a topological invariant ensures robust localization or propagation of electromagnetic waves <cit.>. On the other hand, non-Hermitian photonics <cit.> has also flourished in recent years, not only due to the ubiquitous non-Hermiticity in nature <cit.>, but also because non-Hermiticity provides additional degrees of freedom to manipulate wave behaviors. In pursuit of simultaneous robustness and greater control flexibility, as well as out of interest in fundamental research, non-Hermitian topological physics <cit.> has received considerable attention and undergone substantial development. Researchers are investigating new paradigms <cit.> and exploring potential applications in this interdisciplinary territory <cit.>.
A coupled system can have two forms of non-Hermiticity. One kind is generated when there is asymmetric interaction between the sites, which leads to the non-Hermitian skin effect <cit.>. The other type, which is caused by on-site loss, can lead to intriguing phenomena associated with parity-time (PT) symmetry. PT-symmetric systems have received special attention because they were proved to possess real spectra <cit.>. A series of studies have investigated topologically protected bound (defect) states in PT-symmetric topological systems <cit.>, where the defect states are real in the PT-symmetry unbroken phase. Moreover, a number of studies have investigated whether topological edge states exist in PT-symmetric systems <cit.>, concluding that since the edge state is not an eigenstate of the PT operator, an imaginary eigenvalue is obtained along with the spontaneous PT-symmetry breaking. In this case, a non-Hermitian edge state is obtained. We find that these imaginary edge states in the PT-symmetric system are actually topologically protected by the particle-hole symmetry <cit.>. In the one-dimensional (1D) non-Hermitian PT-symmetric Su-Schrieffer-Heeger (SSH) model <cit.>, the chiral symmetry of the system is broken and the topological ℤ invariant is lost, but the particle-hole symmetry is preserved and the system possesses a topological ℤ_2 invariant. In the presence of perturbations that do not violate the particle-hole symmetry, the real parts of the eigenvalues of the edge modes remain zero, reflecting their topologically protected character. In this situation, the topological photonic mode with robust properties can be further manipulated by non-Hermiticity, which is highly desirable for investigating light-matter interactions <cit.>.
To investigate the interaction between topological photonic modes and matter <cit.>, we employ the photon-magnon coupling system <cit.>, which offers flexible tunability and room-temperature operation. In this Letter, we use a set of lossy microwave resonators to build 1D non-Hermitian SSH photonic lattices. By coupling a ferromagnetic spin ensemble (FSE) to Hermitian and non-Hermitian SSH chains and monitoring the strength of the coupling between the photonic modes and the magnon mode in the FSE, we verify the topological edge states and bulk states. Non-Hermiticity introduced by on-site alternating losses breaks the passive PT-symmetry of the zero-energy modes and results in two complex-valued edge states, which localize exponentially at the opposite ends of the chain [Fig. <ref>(b)]. Furthermore, the photonic density of states (PDOS) at the boundaries is larger than in the Hermitian case [Fig. <ref>(a)], which strengthens the coupling between the topological photonic mode and the magnon mode. Our experiment demonstrates the potential of manipulating the interaction between topological photonic states and matter by exploiting non-Hermiticity.
System and model.—The SSH chain consists of six unit cells [Figs. <ref>(a) and <ref>(b)], in which each unit contains two split-ring resonators (SRRs) fabricated on an F4B substrate [Fig. <ref>(a)]. In the experiment, the SRR exhibits a resonance at ω_0/2π=5.62 GHz with an intrinsic loss of γ_0/2π=24.42 MHz, and the topological property is unaltered by uniform losses along the chain <cit.>. Therefore, SRRs with the same loss can be used to build the Hermitian SSH model. Two neighboring SRRs are separated by staggered spacings to realize the intracell and intercell coupling rates, v and w. Edge states appear in the finite chain when the bulk winding number of the Hermitian Hamiltonian is 𝒲_h=1 <cit.>. The effective Hermitian SSH chain is designed in the topologically nontrivial phase (v/2π=216.5 MHz, w/2π=341 MHz) and the Hamiltonian is written as <cit.>:
ℋ_h/ħ=∑_s=1^2N(ω_0-iγ_0)â_s^†â_s+∑_s=1^2N-2(vâ_sâ_s+1^†+wâ_s+1â_s+2^†),
where â_s^† (â_s) is the photon creation (annihilation) operator of the s-th SRR. The uniform losses of the units merely give all eigenvalues of the chain the same imaginary component iγ_0. The eigenvalues of the coupled SRRs are plotted in the complex plane, as shown in Fig. <ref>(c). A pair of zero-energy modes (Re(ω_m=6,7)-ω_0=0, green dots) appear in the band gap (gray area), which are the edge modes. The measured transmission spectrum of the chain is shown in Fig. <ref>(d), where the peaks correspond to the resonances of the eigenmodes. By simulating the field distribution at the edge-mode frequency ω_0/2π=5.62 GHz, we find that the electromagnetic field tends to localize at both edges of the chain, as predicted by the wave-function distribution <cit.>. In the low-frequency region, the measured spectrum [Fig. <ref>(d), solid line] displays an amplitude deviation from that in the high-frequency region. This is due to the residual dissipative coupling between SRRs <cit.>.
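To make the mode structure concrete, the following minimal numpy sketch (not the authors' code; parameters taken from the text) builds the 12-site tight-binding Hamiltonian with uniform on-site loss and staggered nearest-neighbor hoppings v and w, and diagonalizes it; the two eigenvalues pinned at Re(ω)=ω_0 are the zero-energy edge modes inside the gap.

```python
import numpy as np

# Minimal sketch (not the authors' code): tight-binding model of the 12-site
# SSH chain with uniform on-site loss gamma_0 and staggered hoppings
# v (intracell) and w (intercell); all rates are given as omega/2pi in GHz.
N_cells = 6
n_sites = 2 * N_cells
omega0, gamma0 = 5.62, 0.02442
v, w = 0.2165, 0.341

H = np.diag(np.full(n_sites, omega0 - 1j * gamma0))
for s in range(n_sites - 1):
    t = v if s % 2 == 0 else w          # v inside a unit cell, w between cells
    H[s, s + 1] = H[s + 1, s] = t

evals = np.linalg.eigvals(H)
detuning = evals.real - omega0
order = np.argsort(np.abs(detuning))
# The two eigenvalues closest to omega_0 are the zero-energy edge modes; all
# eigenvalues share the same imaginary part -gamma_0 (uniform loss).
print("edge-mode detunings (GHz):", np.round(detuning[order][:2], 4))
print("edge-mode losses (GHz):   ", np.round(-evals[order][:2].imag, 4))
```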
Then, on-site non-Hermiticity is added to the SSH chain. As depicted in Fig. <ref>(a), resistors R_A=0.1 Ω and R_B=2.7 Ω are integrated into the odd and even sites of the chain, respectively, which induce alternating losses of γ_A/2π=36 MHz and γ_B/2π=73 MHz. The Hamiltonian becomes <cit.>:
ℋ_nh/ħ= ∑_s∈ X(ω_0-iγ_A)â_s^†â_s+∑_s∈ Y(ω_0-iγ_B)â_s^†â_s
+∑_s=1^2N-2(vâ_sâ_s+1^†+wâ_s+1â_s+2^†),
where X={1, 3, 5, ..., 2N-1}, Y={2, 4, 6, ..., 2N}, and N=6. The integrated resistors shift ω_0/2π to 5.48 GHz, and the hopping rates shift to v/2π=208.5 MHz and w/2π=335.5 MHz. The alternating losses make the system a passive PT-symmetric one. Spontaneous PT-symmetry breaking occurs in the zero-energy modes, resulting in a splitting of their imaginary parts, as shown in Fig. <ref>(e). The one with low loss, Im(ω_m=6)/2π=40.42 MHz (Edge_1, blue dot), localizes at the left boundary of the chain, and the one with high loss, Im(ω_m=7)/2π=68.58 MHz (Edge_2, red dot), localizes at the right, as schematically shown in Fig. <ref>(b). The bulk Hamiltonian still preserves the PT-symmetry when δγ/2<|w-v|, where δγ=γ_B-γ_A. In this regime, the topological property is still determined by the generalized integer winding number 𝒲_nh <cit.>, and 𝒲_nh=1 guarantees the existence of two non-Hermitian topological edge modes.
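A sketch of the same diagonalization with the alternating losses quoted above (values assumed from the text) illustrates the spontaneous PT-symmetry breaking of the zero modes: their real parts stay pinned at ω_0 while their imaginary parts split into a low-loss and a high-loss branch.

```python
import numpy as np

# Sketch with the alternating on-site losses quoted in the text (assumed
# parameters; rates as omega/2pi in GHz).
N_cells, omega0 = 6, 5.48
v, w = 0.2085, 0.3355
gamma_A, gamma_B = 0.036, 0.073

n = 2 * N_cells
loss = np.where(np.arange(n) % 2 == 0, gamma_A, gamma_B)   # sites 1,3,... -> gamma_A
H = np.diag(omega0 - 1j * loss)
for s in range(n - 1):
    H[s, s + 1] = H[s + 1, s] = v if s % 2 == 0 else w

evals = np.linalg.eigvals(H)
edge = evals[np.argsort(np.abs(evals.real - omega0))[:2]]
# Spontaneous PT-symmetry breaking of the zero modes: the real parts stay
# pinned at omega_0 (protected by particle-hole symmetry) while the imaginary
# parts split into a low-loss and a high-loss edge state.
print("Re(edge) - omega0 (GHz):", np.round(edge.real - omega0, 4))
print("-Im(edge) (GHz):        ", np.round(-edge.imag, 4))
```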
Experimental results.—To investigate the edge modes engineered by the non-Hermiticity, we measure the PDOS and linewidths of the edge and bulk modes in both the Hermitian and non-Hermitian cases. Notably, conventional detection of the PDOS relies on near-field radiation <cit.>, but in the non-Hermitian situation the local gain and loss diminish its reliability. Using the spin ensemble as a probe, we can directly detect the PDOS. In addition, it allows us to study the strong coherent interaction between the topological photonic modes and magnons.
In the experiment, the spin ensemble employed to couple with the chain is a 1-mm diameter yttrium iron garnet (YIG) sphere. The magnon mode in the sphere interacts with the local photonic modes, with a coupling strength g proportional to ηχ√(nSħω_r/2V) <cit.>, where η≤1 describes the spatial overlap and polarization matching between the photonic mode and the magnon mode, χ is the gyromagnetic ratio, n is the total number of spins, S=5/2 is the spin number of the ground state Fe^3+ ion in YIG, ω_r is the resonance frequency, and V is the photonic mode volume. Consequently, the square of the coupling strength g^2 directly reflects the PDOS at the coupling location. Firstly, we move the YIG sphere to each site (labeled as s, s=1,2,3,...,12) of the Hermitian chain, and obtain the PDOS distribution of the m-th eigenmode by analyzing the transmission spectra. The bias magnetic field is perpendicular to the device plane, and mappings of transmission spectra are measured versus electromagnet current and probe frequency. Figures <ref>(b) and <ref>(e), for instance, show the mappings when the YIG sphere is placed at site-1 and site-12, respectively. The coupling strength between m-th eigenmode of the chain and the magnon mode at the s-th site is defined as g_m,s, which can be obtained by fitting the level repulsion with:
ω_m,s^±=1/2[ω̃_n+ω̃_m±√((ω̃_n-ω̃_m)^2+4g_m,s^2)],
where ω̃_n=ω_n-iγ_n and ω̃_m=ω_m-i(γ_m+κ_m) are the complex eigenvalues of the uncoupled magnon mode and the m-th eigenmode of the chain, respectively. γ_n is the total loss rate of the magnon mode, γ_m is the intrinsic loss rate of the m-th eigenmode, and κ_m is the extrinsic loss rate of the m-th eigenmode to the input/output ports <cit.>. Coupling strengths between the magnon mode and the edge modes (m=6,7) at site-1 and site-12 are obtained by fitting the level repulsion depicted in Figs. <ref>(b) and <ref>(e), which gives g_edge,1/2π=g_edge,12/2π=80 MHz. Similarly, coupling strengths between the magnon mode and the bulk mode (m=8) at site-1 and site-12 are obtained as g_bulk,1/2π=g_bulk,12/2π=37 MHz. g_m,s^2 as a function of the site index s is illustrated in Figs. <ref>(c) and <ref>(d), denoted by blue (m=8) and red (m=6,7) dots, respectively. The observed g_m,s^2 are in good agreement with the intensity distributions of the wave functions |φ_m,s|^2 (gray bar diagram).
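As an illustration of the fitting model of Eq. (<ref>), the short sketch below evaluates the two hybridized branches for assumed (hypothetical) mode parameters; at zero detuning the real-part splitting approaches 2g when g exceeds the loss rates, which is how the quoted coupling strengths are extracted from the measured anti-crossings.

```python
import numpy as np

# Sketch of the level-repulsion model used for fitting (hypothetical values):
# omega_n, omega_m are the complex eigenvalues of the uncoupled magnon mode
# and of the m-th chain eigenmode; g is the coupling strength (all in GHz).
def hybridized_modes(omega_n, omega_m, g):
    """Return the two coupled-mode eigenvalues omega_+ and omega_-."""
    mean = 0.5 * (omega_n + omega_m)
    split = 0.5 * np.sqrt((omega_n - omega_m) ** 2 + 4.0 * g ** 2 + 0j)
    return mean + split, mean - split

# Example: sweep the magnon frequency through the edge-mode resonance and read
# off the anti-crossing; at zero detuning the real-part splitting is about 2*g.
omega_m = 5.62 - 0.040j          # edge mode (assumed loss 40 MHz)
g = 0.080                        # 80 MHz, as fitted in the Hermitian case
for f in (5.50, 5.62, 5.74):     # assumed magnon frequencies in GHz
    omega_n = f - 0.010j         # assumed magnon linewidth 10 MHz
    wp, wm = hybridized_modes(omega_n, omega_m, g)
    print(f"f_magnon={f:.2f} GHz -> Re(split)={np.real(wp - wm):.3f} GHz")
```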
Then, we couple the spin ensemble to the non-Hermitian SSH chain, as shown in Fig. <ref>(a). Figures <ref>(b) and <ref>(e) display the mappings when the YIG sphere is placed at site-1 and site-12, respectively. The mappings show a similar amount of level repulsion but reflect very different linewidths of the edge modes. Using Eq. (<ref>), the loss of the edge mode at site-1 is fitted to be γ_edge,1/2π=41.1 MHz, which is a weighted contribution of the two edge modes (m=6,7). The relation is γ_edge,s=[Im(ω_m=6)·|φ_6,s|^2+Im(ω_m=7)·|φ_7,s|^2]/(|φ_6,s|^2+|φ_7,s|^2), and the wave functions of the edge modes |φ_m,s|^2 are displayed as the bar diagram in Fig. <ref>(d). Similarly, we obtain γ_edge,12/2π=67.9 MHz. More interestingly, the coupling strengths between the magnon mode and the edge modes at site-1 and site-12 are observed to be g_edge,1/2π=g_edge,12/2π=112 MHz, which is larger than in the Hermitian case (80 MHz). We plot g_m,s^2 versus the site index s for m=8 and m=6, 7 in Figs. <ref>(c) and <ref>(d), respectively. It can be seen that the bulk mode remains extended, similar to the Hermitian bulk mode. However, as shown in Fig. <ref>(d), the low-loss edge state (Edge_1) accumulates at the left boundary, while the high-loss edge state (Edge_2) accumulates at the right edge. The introduction of on-site loss thus increases the PDOS at the boundaries. The mechanism can be interpreted as follows: when the PT-symmetry of the edge states is broken, the energy flow between adjacent resonators is partly blocked <cit.>. The low-loss (high-loss) edge state becomes more localized at the low-loss (high-loss) site, which corresponds to the left (right) boundary of the chain, as shown in Figs. <ref>(b) and <ref>(a).
It is also intriguing to detect the properties of the non-Hermitian topological edge states from spectroscopic measurements. In the PT-symmetry unbroken phase, the two topological edge states cannot be distinguished via spectroscopic measurement, as shown in Fig. <ref>(a). The absorptivity spectrum A_1, measured when the microwave drive is loaded into port 1, coincides exactly with A_2, measured when the drive is loaded into port 2. In the symmetry-broken phase, the two topological edge states can be distinguished in the spectra, as shown in Fig. <ref>(b). The spectrum A_1 exhibits the low-loss state with a relatively narrow bandwidth, while the spectrum A_2 reveals the high-loss state.
Finally, we discuss some additional characteristics of the exceptional points (EPs) in the non-Hermitian chain. The dimensionless eigenvalues are defined as β_real+iβ_imag, where β_real=[Re(ω)-ω_0]/(v+w), β_imag=[|Im(ω)|-γ̅]/(v+w), and γ̅=(γ_A+γ_B)/2. In a finite SSH chain, a series of exceptional points are gradually reached as the non-Hermitian parameter δγ/2(v+w) increases [Figs. <ref>(c) and <ref>(d)]. The EP of the edge modes is distinctly separated from the EPs of the bulk modes. The edge modes undergo spontaneous PT-symmetry breaking (SPTB) at EP_1, where δγ/2(v+w) is only about 0.02. As the chain length increases, the non-Hermiticity needed for SPTB in the edge modes decreases exponentially; in the limit N≫1, any finite δγ leads to SPTB of the edge modes <cit.>. In contrast, SPTB of the bulk modes requires at least δγ/2=|w-v|, which corresponds to a value much larger than 0.02. Additional analysis is provided in the Supplemental Material.
Conclusion.—We have implemented the PT-symmetric non-Hermitian topological SSH model with microwave resonators and achieved control of the topological edge states using on-site non-Hermiticity. Through spontaneous PT-symmetry breaking, we obtain non-Hermitian edge modes whose photonic mode densities are enhanced at both ends of the chain. We realize strong coupling between the edge modes and the magnon mode in both the Hermitian and non-Hermitian cases, and experimentally verify that the coupling strength between the non-Hermitian edge states and the spin ensemble is stronger than in the Hermitian situation. Our research illustrates non-Hermiticity-engineered topological edge states and paves the way for studying strong coherent interactions between topological photonic modes and matter.
This work is supported by the National Key Research and Development Program of China (No. 2022YFA1405200), the National Natural Science Foundation of China (No. 92265202, No. 11934010, No. U1801661, and No. 12174329), and the Fundamental Research Funds for the Central Universities (No. 2021FZZX001-02).
Burkov-16
A. A. Burkov, Topological semimetals, Nature Materials 15, 1145 (2016).
Hasan-10
M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010).
Zhaoju-15
Z. Yang, F. Gao, X. Shi, X. Lin, Z. Gao, Y. Chong, and B. Zhang, Topological Acoustics, Phys. Rev. Lett. 114, 114301 (2015).
Ma-19
G. Ma, M. Xiao and C. T. Chan, Topological phases in acoustic and mechanical systems, Nat. Rev. Phys. 1, 281 (2019).
Yihao-22
H. Xue, Y. Yang, B. Zhang, Topological acoustics, Nature Reviews Materials 7, 974 (2022).
Huber-16
S. D. Huber, Topological mechanics, Nat. Phys. 12, 621 (2016).
Haldane-08
F. D. M. Haldane and S. Raghu, Possible realization of directional optical waveguides in photonic crystals with broken time-reversal symmetry, Phys. Rev. Lett. 100, 013904 (2008).
Wang-09
Z. Wang, Y. Chong, J. D. Joannopoulos, and M. Soljačić, Observation of unidirectional backscattering-immune topological electromagnetic states, Nature 461, 772 (2009).
Lu-14
L. Lu, J. D. Joannopoulos, and M. Soljačić, Topological photonics, Nat. Photon. 8, 821 (2014).
Ozawa-19
T. Ozawa et al., Topological photonics, Rev. Mod. Phys. 91, 015006 (2019).
Blanco-Redondo-18
A. Blanco-Redondo, B. Bell, D. Oren, B. J. Eggleton and M. Segev, Topological protection of biphoton states, Science 362, 568 (2018).
Yang-18
B. Yang et al., Ideal Weyl points and helicoid surface states in artificial photonic crystal structures, Science 359, 1013 (2018).
Klembt-18
S. Klembt et al., Exciton-polariton topological insulator, Nature, 562, 552 (2018).
Feng-17
L. Feng, R. El-Ganainy, and L. Ge, Non-Hermitian photonics based on parity–time symmetry, Nat. Photon. 11, 752 (2017).
EI-Ganainy-18
R. El-Ganainy et al., Non-Hermitian physics and PT symmetry, Nat. Phys. 14, 11 (2018).
Longhi-18
S. Longhi, Parity-time symmetry meets photonics: A new twist in non-Hermitian optics, Europhysics Letters 120, 64001 (2018).
Bender-07
C. M. Bender, Making sense of non-Hermitian Hamiltonians, Reports on Progress in Physics 70, 947 (2007).
Ashida-20
Y. Ashida, Z. P. Gong, and M. Ueda, Non-Hermitian physics, Adv. Phys. 69, 249 (2020).
Coulais-21
C. Coulais, R. Fleury, and J. Van Wezel, Topology and broken Hermiticity, Nat. Phys. 17, 9 (2021).
Bergholtz-21
E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Exceptional topology of non-Hermitian systems, Rev. Mod. Phys. 93, 015005 (2021).
Yao-18
S. Yao and Z. Wang, Edge States and Topological Invariants of Non-Hermitian Systems, Phys. Rev. Lett. 121, 086803 (2018).
Yokomizo-19
K. Yokomizo and S. Murakami, Non-Bloch band theory of non-Hermitian systems, Phys. Rev. Lett. 123, 066404 (2019).
CHL-20
C. H. Lee, L. Li, R. Thomale, and J. Gong, Unraveling non-Hermitian pumping: Emergent spectral singularities and anomalous responses, Phys. Rev. B 102, 085151 (2020).
Helbig-20
T. Helbig et al., Generalized bulk–boundary correspondence in non-Hermitian topolectrical circuits. Nat. Phys. 16, 747 (2020).
Xue-20
L. Xiao, T. Deng, K. Wang, G. Zhu, Z. Wang, W. Yi, and P. Xue, Non-Hermitian bulk–boundary correspondence in quantum dynamics, Nat. Phys. 16, 761 (2020).
Zhao-19
H. Zhao et al., Non-Hermitian topological light steering, Science 365, 1163 (2019).
St-Jean-17
P. St-Jean et al., Lasing in topological edge states of a one-dimensional lattice, Nat. Photon. 11, 651 (2017).
Parto-18
M. Parto et al., Edge-Mode Lasing in 1D Topological Active Arrays, Phys. Rev. Lett. 120, 113901 (2018).
Hu-21
B. Hu et al., Non-Hermitian topological whispering gallery, Nature 597, 655 (2021).
Alvarez-18
V. M. Martinez Alvarez, J. E. Barrios Vargas, and L. E. F. Foa Torres, Non-Hermitian robust edge states in one dimension: Anomalous localization and eigenspace condensation at exceptional points, Phys. Rev. B 97, 121401(R) (2018).
Okuma-20
N. Okuma, K. Kawabata, K. Shiozaki, and M. Sato, Topological Origin of Non-Hermitian Skin Effects, Phys. Rev. Lett. 124, 086801 (2020).
Bender-98
C. M. Bender and S. Boettcher, Real Spectra in Non-Hermitian Hamiltonians Having PT Symmetry, Phys. Rev. Lett. 80, 5243 (1998).
Schomerus-13
H. Schomerus, Topologically protected midgap states in complex photonic lattices, Opt. Lett. 38, 1912 (2013).
Malzard-15
S. Malzard, C. Poli, and H. Schomerus, Topologically Protected Defect States in Open Photonic Systems with Non-Hermitian Charge-Conjugation and Parity-Time Symmetry, Phys. Rev. Lett. 115, 200402 (2015).
Weimann-17
S. Weimann et al., Topologically protected bound states in photonic parity-time-symmetric crystals, Nat. Mater. 16, 433-438 (2017).
Stegmaier-21
A. Stegmaier et al., Topological Defect Engineering and PT Symmetry in Non-Hermitian Electrical Circuits, Phys. Rev. Lett. 126, 215302 (2021).
Esaki-11
K. Esaki, M. Sato, K. Hasebe, and M. Kohmoto, Edge states and topological phases in non-Hermitian systems, Phys. Rev. B 84, 205128 (2011).
Hu-11
Y. C. Hu and T. L. Hughes, Absence of topological insulator phases in non-Hermitian PT-symmetric Hamiltonians, Phys. Rev. B 84, 153101 (2011).
Xue-17
L. Xiao, X. Zhan, Z. H. Bian, K. K. Wang, X. Zhang, X. P. Wang, J. Li, K. Mochizuki, D. Kim, N. Kawakami, W. Yi, H. Obuse, B. C. Sanders, P. Xue, Observation of topological edge states in parity–time-symmetric quantum walks, Nature Physics 13, 1117 (2017).
Cheng-22
D. Cheng et al., Truncation-dependent PT phase transition for the edge states of a two-dimensional non-Hermitian system, Phys. Rev. B 105, L201105 (2022).
SM
See Supplementary Materials at ... for device details, Hamiltonian and topological invariant analysis, additional transmission mappings, and the experimental measurement details, which includes Refs. <cit.>.
Su-79
W. P. Su, J. R. Schrieffer and A. J. Heeger, Solitons in Polyacetylene, Phys. Rev. Lett. 42, 1698 (1979).
Gutzler-21
R. Gutzler, M. Garg, C. R. Ast, K. Kuhnke, and K. Kern, Light–matter interaction at atomic scales, Nat. Rev. Phys. 3, 441 (2021).
Ruggenthaler-18
M. Ruggenthaler, N. Tancogne-Dejean, J. Flick, H. Appel, and A. Rubio, From a quantum-electrodynamical light–matter description to novel spectroscopies, Nat. Rev. Chem. 2, 0118 (2018).
Kockum-19
A. F. Kockum, A. Miranowicz, S. De Liberato, S. Savasta, and F. Nori, Ultrastrong coupling between light and matter, Nat. Rev. Phys. 1, 19 (2019).
Kim-21
E. Kim et al., Quantum Electrodynamics in a Topological Waveguide, Phys. Rev. X 11, 011015 (2021).
Huebl-PRL-2013
H. Huebl, C. W. Zollitsch, J. Lotze, F. Hocke, M. Greifenstein, A. Marx, R. Gross, and S. T. B. Goennenwein, High Cooperativity in Coupled Microwave Resonator Ferrimagnetic Insulator Hybrids, Phys. Rev. Lett. 111, 127003 (2013).
Tabuchi-PRL-2013
Y. Tabuchi, S. Ishino, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, Hybridizing Ferromagnetic Magnons and Microwave Photons in the Quantum Limit, Phys. Rev. Lett. 113, 083603 (2014).
Zhang-PRL-2014
X. Zhang, C.-L. Zou, L. Jiang, and H. X. Tang, Strongly Coupled Magnons and Cavity Microwave Photons, Phys. Rev. Lett. 113, 156401 (2014).
Tobar-PRApp-2014
M. Goryachev, W. G. Farr, D. L. Creedon, Y. Fan, M. Kostylev, and M. E. Tobar, High-Cooperativity Cavity QED with Magnons at Microwave Frequencies, Phys. Rev. Applied 2, 054002 (2014).
You-npj-2015
D. Zhang, X.-M. Wang, T.-F. Li, X.-Q. Luo, W. Wu, F. Nori, J. Q. You, Cavity quantum electrodynamics with ferromagnetic magnons in a small yttrium-iron-garnet sphere, npj Quantum Information 1, 15014 (2015).
Wang-2019
Y.-P. Wang, J. W. Rao, Y. Yang, P.-C. Xu, Y. S. Gui, B. M. Yao, J. Q. You, and C.-M. Hu, Nonreciprocity and Unidirectional Invisibility in Cavity Magnonics, Phys. Rev. Lett. 123, 127202 (2019).
Wang-2020
Y.-P. Wang and C.-M. Hu, Dissipative couplings in cavity magnonics, Journal of Applied Physics 127, 130901 (2020).
Rameshti-22
B. Z. Rameshti, S. V. Kusminskiy, J. A. Haigh, K. Usami, D. Lachance-Quirion, Y. Nakamura, C. Hu, H. X. Tang, G. E. W. Bauer and Y. M. Blanter, Cavity Magnonics, Physics Reports 979, 1-60 (2022).
Yuan-22
H. Y. Yuan, Y. Cao, A. Kamra, P. Yan, and R. A. Duine, Quantum magnonics: when magnon spintronics meets quantum information science, Physics Reports 965, 1 (2022).
Bellec-13
M. Bellec, U. Kuhl, G. Montambaux, and F. Mortessagne, Tight-binding couplings in microwave artificial graphene, Phys. Rev. B 88, 115437 (2013).
Peng-14
B. Peng, Ş. K. Özdemir, F. Lei, F. Monifi, M. Gianfreda, G. Long, S. Fan, F. Nori, C. M. Bender and L. Yang, Parity-time-symmetric whispering-gallery microcavities, Nat. Phys. 10, 394 (2014).
arXiv:2307.03998v1 [cs.CV] (8 July 2023). Lightweight Improved Residual Network for Efficient Inverse Tone Mapping. Liqi Xue, Tianyi Xu, Yongbao Song, Yan Liu, Lei Zhang, Xiantong Zhen, and Jun Xu. Categories: cs.CV, eess.IV.
Lightweight Improved Residual Network for Efficient Inverse Tone Mapping
Liqi Xue, Tianyi Xu, Yongbao Song, Yan Liu, Lei Zhang, Xiantong Zhen, and Jun Xu
This work was sponsored by the National Natural Science Foundation of China (No. 62002176, 62176068, and 12101334), the CAAI-Huawei MindSpore Open Fund, the Natural Science Foundation of Tianjin (No. 21JCQNJC00030), and the Fundamental Research Funds for the Central Universities. Corresponding authors: Xiantong Zhen ([email protected]) and Jun Xu ([email protected]).
Liqi Xue, Tianyi Xu, Yan Liu, and Jun Xu are with the School of Statistics and Data Science, Nankai University, Tianjin 300071, China.
Yongbao Song is with the School of Mathematical Science, Nankai University, Tianjin 300071, China.
Lei Zhang and Xiantong Zhen are with the Computer Science College, Guangdong University of Petrochemical Technology, Maoming 525000, China.
August 12, 2023
========================================================================
Display devices such as HDR10 televisions are increasingly prevalent in our daily life for visualizing high dynamic range (HDR) images. But the majority of media images on the internet remain in 8-bit standard dynamic range (SDR) format. Therefore, converting SDR images to HDR ones by inverse tone mapping (ITM) is crucial to unlock the full potential of abundant media images. However, existing ITM methods are usually developed with complex network architectures requiring huge computational costs. In this paper, we propose a lightweight Improved Residual Network (IRNet) that enhances the power of the popular residual block for efficient ITM. Specifically, we propose a new Improved Residual Block (IRB) to extract and fuse multi-layer features for fine-grained HDR image reconstruction. Experiments on three benchmark datasets demonstrate that our IRNet achieves state-of-the-art performance on both the ITM and joint SR-ITM tasks. The code, models, and data will be publicly available at <https://github.com/ThisisVikki/ITM-baseline>.
Inverse tone mapping, improved residual block, lightweight network, inference efficiency.
§ INTRODUCTION
High dynamic range (HDR) images defined in Rec.2020 <cit.> exhibit clearer details in highlights and shadows, as well as smoother transitions in brightness and color, than standard dynamic range (SDR) images with 8-bit color depth defined in Rec.709 <cit.>. Owing to these benefits, manufacturers of televisions and mobile devices are pushing to bring HDR content to demanding consumers. Although HDR display devices can present more visually pleasing content in HDR images through Dolby Vision, HDR10, and HDR10+ technologies <cit.>, SDR images in 8-bit depth appear featureless when directly broadcast on HDR display devices <cit.>. To present SDR images closer to human perception on HDR display devices, it is essential to convert SDR images into comfortable HDR ones without color or information loss. This challenging problem is known as inverse tone mapping (ITM) <cit.>, which has been studied in a more general sense rather than merely expanding the luminance range of camera raw image files in linear color space <cit.>.
Early image ITM methods mainly resort to global or local image processing operators for promising performance. Global ITM operators <cit.> usually utilize reverse tone mapping functions to extend the dynamic range of image pixels, but this brings distorted details and uneven transitions between neighboring pixels at different brightness levels. Local ITM operators <cit.> expand the image bit depth in a spatially varying manner; unfortunately, these methods fail to preserve the global consistency of luminance ranges across an image. Recently, deep neural networks have been employed to tackle the ITM task from a data-driven perspective <cit.>. These networks usually contain strong backbones with complex architectures, which may require huge computational costs for promising ITM performance. Besides, the methods of <cit.> simultaneously tackle the joint image super-resolution (SR) and ITM (joint SR-ITM) task by separating the base and detail components of the input image with extra image decomposition <cit.>. However, this further increases the model complexity and computational costs of current joint SR-ITM methods over previous ITM ones.
Despite their promising performance, the above-mentioned ITM methods suffer from two main limitations. Firstly, the complex model architectures obscure the core of the ITM problem, that is, “expanding the luminance range of a low dynamic range image to produce a higher dynamic range image” <cit.>. This problem of extending the luminance range or color bit depth is similar to the tasks of image super-resolution <cit.> and video frame prediction <cit.>, all of which aim to increase the highly correlated information of the input image in different aspects. Therefore, it is possible to tackle the ITM problem with simple and lightweight neural networks, as inspired by concurrent works in image super-resolution <cit.> and video frame prediction <cit.>. Secondly, the huge computational costs also prevent ITM methods from being deployed on edge devices. For example, to perform ITM on a 4K-resolution (3840×2160) image, Deep SR-ITM <cit.> needs 2.50M parameters, ∼1.05×10^4G FLOPs, and a running time of 777.95ms, while HDRTVNet <cit.> needs 37.20M parameters, ∼1.41×10^4G FLOPs, and a running time of 1513.43ms.
In this paper, we leverage the popular residual learning recipe <cit.> to develop a simple and lightweight Improved Residual Network (IRNet) for efficient ITM. Specifically, we propose an Improved Residual Block (IRB) with simple modifications of the residual block <cit.> for fine-grained feature extraction and fusion. For the network design, we also adopt the plain residual learning framework to avoid complex multi-branch architectures <cit.>. Experiments on three benchmark datasets, including our newly collected one, show that our IRNet is very efficient and outperforms previous ITM methods. As shown in Figure <ref>, our IRNet only needs ∼0.13M parameters and ∼0.22×10^4G FLOPs, with a running time of 398.33ms, to process a 4K-resolution image, and it outperforms state-of-the-art methods on the ITM task. On the HDRTV1K dataset <cit.>, our IRNet exceeds AGCM+LE <cit.> in visual quality and in PSNR by 0.59dB, while having only one tenth of its parameters (∼0.13M vs. ∼1.41M). Besides, our IRNet also achieves performance superior to Deep SR-ITM <cit.> and JSI-GAN <cit.> on joint SR-ITM.
In summary, our main contributions are three-fold:
* We develop a lightweight Improved Residual Network (IRNet) for efficient image inverse tone mapping (ITM). Our IRNet is built upon a new Improved Residual Block (IRB) customized from the popular residual block for fine-grained feature extraction and fusion.
* We collect a new test set for ITM, i.e., ITM-4K, which has 160 4K-resolution images of versatile scenes with ground-truth HDR images. It serves as a good supplement to HDRTV1K <cit.>, which has 117 test images.
* Experiments on the HDRTV1K dataset <cit.>, our new ITM-4K test set, and the test set in <cit.> show that our lightweight IRNet is efficient and achieves impressive quantitative and qualitative results on the ITM and joint SR-ITM tasks. Comprehensive ablation studies also validate the effectiveness of our model design.
The rest of this paper is organized as follows. In <ref>, we summarize the related work. In <ref>, we present the proposed Improved Residual Network (IRNet). In <ref>, we perform experiments to validate the efficiency of our IRNet on ITM and joint SR-ITM. In <ref>, we conclude this paper.
§ RELATED WORK
§.§ Inverse Tone Mapping
The inverse tone mapping (ITM) task aims to transform a standard dynamic range (SDR, usually in 8-bit) image into a high dynamic range (HDR, usually in 16-bit) image. This problem is ill-posed due to the information loss in the luminance ranges of SDR images. Early explorations on the ITM task can be divided into global and local ITM operators. While the global ITM operators equally apply linear expansion <cit.>, cross-bilateral filtering <cit.>, or a gamma-based expansion <cit.> to all the pixels or patches of an input SDR image, the local ITM operators <cit.> reconstruct highlight regions or expand the luminance ranges of each pixel or patch according to the local information around it. Previous works show that global ITM operators <cit.> could avoid undesired artifacts, but result in rough details and unnatural transitions due to the ignorance of local detail reconstruction. On the contrary, local ITM operators <cit.> implemented adaptively on small areas would fail to capture the global consistency of luminance ranges.
To deal with the issues of locally undesired artifacts and global luminance consistency raised by the early methods mentioned above, many recent ITM methods <cit.> shift to utilize the advancements of deep convolutional neural networks (CNNs).
Early CNN-based methods <cit.> merge low dynamic range (LDR) images captured under multiple exposure settings to produce an HDR image. Meanwhile, the work of <cit.> presents a multi-branch CNN to implement ITM from both global and local perspectives. Then, the method of <cit.> introduces a feature masking strategy to address the problem of undesired artifacts emerging during image reconstruction. Recently, the physical principle of HDR image formation has also been incorporated into the design of ITM CNNs <cit.>. For example, HDRTVNet <cit.> consists of an adaptive global color mapping network, a local enhancement network, and a highlight generation network.
Despite their promising performance, most of these methods require huge parameter amounts and computational costs, which hinders them from being deployed into resource-constrained edge devices. In this paper, we aim to develop a lightweight yet efficient ITM network.
§.§ Joint Super-Resolution and Inverse Tone Mapping
Joint Super-Resolution and Inverse Tone Mapping (joint SR-ITM) aims to simultaneously increase the spatial resolution and dynamic range of an input low-resolution and standard dynamic range (LR-SDR) image. Deep convolutional neural networks have also been applied to tackle the joint SR-ITM task <cit.>. Considering that the luminance ranges of different image areas should be expanded adaptively, the method of <cit.> first decomposes an SDR image into a low-frequency structure component and a high-frequency detail component, and then processes the two components by two different but correlated network branches. The separation is implemented by guided filtering <cit.>, which is widely used in image smoothing <cit.>. This framework is also employed in the subsequent work of JSI-GAN <cit.>. To tackle multi-frame SDR inputs, Lecouat et al. <cit.> reformulated the joint SR-ITM task as an optimization problem that fuses multiple LR-SDR raw image bursts at different exposures into an HR-HDR image. Tan et al. <cit.> developed a two-branch network to fuse a series of LR-LDR dynamic images into an HR-HDR one by estimating the motion cues with a deformable module <cit.>.
Though these methods achieve appealing performance, the image-decomposition-based approaches usually require multi-branch network architectures for the joint SR-ITM task, which implies a considerable growth in parameter amounts and computational burden to handle parallel feature extraction and elaborate interaction. In this paper, we propose a lightweight ITM network for inference efficiency, inspired by the merits of lightweight image super-resolution networks <cit.>.
§.§ Efficient Image Restoration
For the goal of inference efficiency, network compression and acceleration techniques are exploited to reduce the computational burden and memory consumption of image restoration methods <cit.>.
One popular solution is employing a Laplacian pyramid <cit.> to decompose the input image into a low-resolution base layer consuming the majority of computations and several high-resolution detail layers requiring few computations <cit.>. Bilateral grid learning <cit.> is also utilized to learn approximate operators on downsampled images and apply the learned operators to the original images. Other inference strategies such as recursive learning <cit.> and look-up tables <cit.> are also exploited to accelerate image restoration networks.
Instead of developing new methods, some recent works accelerate existing restoration networks by model slimming <cit.> or input-adaptive inference <cit.>.
In this paper, we develop an efficient ITM network that can well process a 4K-resolution SDR image with only ∼134K parameters in ∼0.4 seconds.
§ PROPOSED METHOD
§.§ Motivation
In the scene-referred workflow <cit.>, an HDR raw image in camera color space (usually in 16-bit color depth) will be tone mapped to an SDR RGB image in display-referred color space (usually in 8-bit color depth). This process is usually implemented in a camera imaging pipeline containing multiple image processing operations, during which different pixels usually undergo different compression strengths on dynamic ranges to produce visually pleasing image contrasts <cit.>.
The task of inverse tone mapping (ITM) aims to increase the dynamic range of light intensity (or luminance) in an SDR image. An SDR image with 8-bit depth can display a maximum of around 16.7 million shades of color, while an HDR image with 10-bit depth can display a maximum of around 1.07 billion shades of color, allowing it to exhibit more colors with better visual quality <cit.>. To better understand the luminance difference between SDR and HDR images, in Figure <ref> (a), we visualize the maximum and minimum luminance values of 117 HDR images from the test set of <cit.>, as well as the luminance values at the corresponding positions of the paired SDR images. We observe that there are obvious gaps between the maximum values of the HDR images and the values at the corresponding positions of the paired SDR images, whilst only slight differences between the minimum values of the HDR images and the values at the corresponding positions of the paired SDR images. This indicates that high-luminance values change much more than low-luminance ones. Besides, the luminance values of different HDR images also show distinct gaps when compared to the values at the corresponding positions in the paired SDR images.
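As a simple illustration of this comparison (not the authors' analysis code), the sketch below reads one hypothetical SDR/HDR image pair, computes a rough luma channel for each, and compares the code values at the pixels where the HDR luma is maximal and minimal; the file paths, bit depths, and the choice of luma weights are assumptions.

```python
import cv2
import numpy as np

# Sketch (assumed file layout): compare a 16-bit HDR frame with its paired
# 8-bit SDR frame at the positions of the HDR luma extrema, as in Fig. (a).
hdr = cv2.imread("hdr/028.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 65535.0
sdr = cv2.imread("sdr/028.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 255.0

def luma(img, weights):
    b, g, r = cv2.split(img)              # OpenCV loads channels in BGR order
    kr, kg, kb = weights
    return kr * r + kg * g + kb * b

y_hdr = luma(hdr, (0.2627, 0.6780, 0.0593))   # Rec.2020 luma weights (assumed)
y_sdr = luma(sdr, (0.2126, 0.7152, 0.0722))   # Rec.709 luma weights (assumed)

i_max, i_min = np.argmax(y_hdr), np.argmin(y_hdr)
print("HDR max / SDR value at same pixel:", y_hdr.flat[i_max], y_sdr.flat[i_max])
print("HDR min / SDR value at same pixel:", y_hdr.flat[i_min], y_sdr.flat[i_min])
```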
For promising ITM performance, existing ITM methods <cit.> rely on complex network backbones with huge parameter amounts and computational costs. To implement efficient ITM, in this paper we develop a simple, lightweight, and efficient Improved Residual Network (IRNet) by slightly modifying the residual block <cit.>. As shown in Figure <ref> (b), we concatenate the intermediate feature map F_1 after the LeakyReLU in the residual block with the fused feature of F_in and F_2 (more details will be presented in <ref>). On the image “028” from the HDRTV1K test set <cit.>, our IRNet shows clear improvements over the variant without the feature map F_1, especially in the bright area near the sun, as shown in Figure <ref> (b). Along the highlighted lines, the green line of our IRNet approximates the blue line of the “Ground Truth” HDR image more closely than the red line of our IRNet without the intermediate feature F_1 (denoted as “IRNet w/o F_1”). In Figure <ref> (c), we plot the ratios of the luminance values along the highlighted lines by our IRNet and “IRNet w/o F_1”, which also validates that our IRNet achieves a better approximation to the “Ground Truth” than the IRNet without F_1. This validates the effectiveness of our IRB over the residual block for ITM.
Adaptive luminance extension is also important for the ITM task. To this end, many joint SR-ITM methods <cit.> perform image or feature decomposition to extract and fuse multi-scale feature maps. However, these ITM networks with decomposition techniques often suffer from complex network structures with heavy computational costs (Table <ref>). For efficiency, we design our IRNet as a simple and lightweight network by employing the popular residual block <cit.> as a proper backbone. The promising results in Figure <ref> (b) obtained by our IRNet without using F_1 motivate us to further improve our IRNet for better ITM performance.
§.§ Proposed Improved Residual Network
Our IRNet first extracts the initial feature map using a 1×1 convolution layer (instead of a 3×3 one, to reduce the parameter amount). Then we cascade n of the proposed Improved Residual Blocks (IRBs) for fine-grained feature extraction and fusion. The details of the IRB block will be introduced later. To boost the ITM performance, each IRB is followed by a Contrast-aware Channel Attention (CCA) layer <cit.>. We also use a skip connection to sum the feature maps before the IRB block and after the CCA layer.
Improved Residual Block (IRB). The proposed IRB block is built upon the residual block <cit.>, which has achieved great success in many computer vision tasks <cit.>. As shown in Figure <ref> (a), the residual block <cit.> contains two 3×3 convolution layers with an activation function between them (here we replace ReLU with LeakyReLU); the output feature is added to the input feature F_in and activated by another LeakyReLU function.
Built upon the residual block, our IRB block is designed to keep our IRNet as simple as possible with better ITM performance. This is feasible by fully exploiting the multi-layer feature maps within the IRB block. To this end, given the input feature F_in∈ℝ^H× W× C, our IRB first refines it by a 3×3 convolution layer and a LeakyReLU activation function. The extracted feature F_1∈ℝ^H× W× C/2 is further refined in our IRB by a second 3×3 convolution layer to output the feature F_2∈ℝ^H× W× C:
F_1 = LeakyReLU(Conv_3 × 3(F_in)),
F_2 = Conv_3× 3(F_1 ).
Then our IRB uses a skip connection and a Conv_1×1 to fuse F_in and F_2 and obtain the fusion feature F_fuse:
F_fuse = Conv_1×1(F_in+F_2).
Finally, different from the residual block, our IRB explicitly concatenates the intermediate feature F_1 with the fusion feature F_fuse to produce the output feature F_out as follows:
F_out=Conv_1×1(Concat(F_fuse,F_1)).
We visualize the structure of our IRB block in Figure <ref> (b).
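A minimal PyTorch sketch of the IRB following the equations for F_1, F_2, F_fuse, and F_out is given below; the channel width of F_fuse and the LeakyReLU negative slope are not specified in the text and are assumed here.

```python
import torch
import torch.nn as nn

class IRB(nn.Module):
    """Sketch of the Improved Residual Block (F_1, F_2, F_fuse, F_out above).
    Channel widths not fully specified in the text (e.g. F_fuse) are assumed."""
    def __init__(self, channels: int = 64, negative_slope: float = 0.1):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels // 2, 3, padding=1)  # F_in -> F_1 (C/2)
        self.act = nn.LeakyReLU(negative_slope, inplace=True)
        self.conv2 = nn.Conv2d(channels // 2, channels, 3, padding=1)  # F_1 -> F_2 (C)
        self.fuse = nn.Conv2d(channels, channels, 1)                   # F_in + F_2 -> F_fuse
        self.out = nn.Conv2d(channels + channels // 2, channels, 1)    # [F_fuse, F_1] -> F_out

    def forward(self, f_in: torch.Tensor) -> torch.Tensor:
        f1 = self.act(self.conv1(f_in))
        f2 = self.conv2(f1)
        f_fuse = self.fuse(f_in + f2)
        return self.out(torch.cat([f_fuse, f1], dim=1))

# Quick shape check on a dummy feature map.
x = torch.randn(1, 64, 32, 32)
print(IRB(64)(x).shape)   # torch.Size([1, 64, 32, 32])
```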
Compared with the original residual block, our IRB better extracts and utilizes the multi-layer features, which correspond to spatially adaptive luminance areas for ITM. As shown in Figure <ref> (a), compared with IRNet w/o F_1, our IRNet restores the luminance of the HDR image closer to the ground truth, especially in the highlight regions. Even though popular encoder-decoder frameworks like U-Net <cit.> or Uformer <cit.> could be utilized here to extract strong multi-scale features, this would bring a significant growth in parameter amounts and computational costs <cit.>. Through a simple modification of the residual block, the proposed IRB serves as a lightweight building block in our IRNet for efficient ITM.
The mean feature map along the channel dimension reflects the luminance information of that feature <cit.>. In Figure <ref> (b), we visualize the mean feature maps of F_in, F_1, F_2, F_fuse, and F_out extracted by our IRNet and “IRNet w/o F_1”. One can see that the mean feature map of F_1 extracted by our IRNet exhibits higher luminance in the sky area around the sun than that of “IRNet w/o F_1”. Due to the lack of luminance information from the intermediate feature F_1, “IRNet w/o F_1” produces stronger contrasts in the input feature F_in of the IRB blocks and darker luminance around the sun in the output feature F_out than our IRNet, which uses F_1 in its IRB blocks.
Contrast-aware Channel Attention (CCA). To preserve image details, we utilize a CCA layer <cit.> after each IRB block. As shown in Figure <ref> (c), the CCA layer consists of a contrast computation, two 1×1 convolution layers interleaved with a ReLU function, a sigmoid function, and a skip connection between the input and output features to help gradient propagation. Given the input X=[x_1,...,x_C]∈ℝ^H× W× C, the contrast is computed as follows:
z_c = H_GC(x_c) = √(1/HW∑_(i,j)∈ x_c(x_c^i,j-1/HW∑_(i,j)∈ x_c x_c^i,j)^2) + 1/HW∑_(i,j)∈ x_c x_c^i,j,  c=1,...,C.
After the i-th (i=1,...,n) IRB block and CCA layer, the output feature is added to the input feature F_in^i via a skip connection, and F_in^n+1 is the final feature that is fed into the subsequent convolution layers:
F_in^i+1=F_in^i+CCA(IRB(F_in^i)).
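The contrast descriptor above is the per-channel standard deviation plus the mean. A hedged PyTorch sketch of the CCA layer is shown below; the channel-reduction ratio of the two 1×1 convolutions is an assumption, and the additive skip connection is applied outside the module, as in the stacking rule above.

```python
import torch
import torch.nn as nn

class CCALayer(nn.Module):
    """Sketch of contrast-aware channel attention: the per-channel descriptor
    is the standard deviation plus the mean (the contrast z_c above). The
    reduction ratio is an assumption; the additive skip around IRB+CCA is
    applied outside this module, as in F_in^{i+1} = F_in^i + CCA(IRB(F_in^i))."""
    def __init__(self, channels: int = 64, reduction: int = 4):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mean = x.mean(dim=(2, 3), keepdim=True)
        std = x.std(dim=(2, 3), keepdim=True)
        z = std + mean                     # contrast descriptor, shape (B, C, 1, 1)
        return x * self.attn(z)            # channel-wise re-weighting

print(CCALayer(64)(torch.randn(2, 64, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])
```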
After extracting n scales of fine-grained feature maps, we concatenate them for multi-scale feature fusion, which is implemented by a sequence of a 1×1 convolution layer, a LeakyReLU activation function, and a 3×3 convolution layer. Finally, we reconstruct the output HDR image using a 3×3 convolution layer. The overall architecture of the proposed IRNet is shown in Figure <ref> (d).
To apply the proposed IRNet to the joint SR-ITM task, we further add a Pixel Shuffle operation <cit.> after the final 3×3 convolution layer of our IRNet to make it feasible for super-resolution. The Pixel-Shuffle contains two 3×3 convolution layers interleaved with a ReLU function. The first convolution layer reduces the channel dimension of the feature map from C to 3s^2, where s is the upsampling factor, while the second convolution layer reconstructs the 3-channel HR-HDR image via upsampling the feature map by a factor of s.
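Putting the pieces together, the following sketch assembles the IRB and CCALayer modules above into the overall IRNet; the exact channel bookkeeping of the multi-scale fusion and the layout of the pixel-shuffle tail are assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class IRNet(nn.Module):
    """Sketch of the overall IRNet (reuses the IRB and CCALayer sketches above);
    fusion widths and the SR tail layout are assumptions."""
    def __init__(self, channels: int = 64, n_blocks: int = 2, scale: int = 1):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 1)
        self.blocks = nn.ModuleList(
            [nn.Sequential(IRB(channels), CCALayer(channels)) for _ in range(n_blocks)]
        )
        self.fusion = nn.Sequential(
            nn.Conv2d(n_blocks * channels, channels, 1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        if scale == 1:                                   # plain ITM
            self.tail = nn.Conv2d(channels, 3, 3, padding=1)
        else:                                            # joint SR-ITM (assumed layout)
            self.tail = nn.Sequential(
                nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.PixelShuffle(scale),                  # upsample by factor s
                nn.Conv2d(3, 3, 3, padding=1),           # reconstruct the HR-HDR image
            )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.head(x)
        stage_outs = []
        for block in self.blocks:
            feat = feat + block(feat)                    # F^{i+1} = F^i + CCA(IRB(F^i))
            stage_outs.append(feat)
        return self.tail(self.fusion(torch.cat(stage_outs, dim=1)))

print(IRNet(64, n_blocks=2)(torch.randn(1, 3, 64, 64)).shape)           # ITM
print(IRNet(64, n_blocks=5, scale=4)(torch.randn(1, 3, 64, 64)).shape)  # joint SR-ITM
```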
§.§ Implementation Details
Here, we set the channel dimension of the feature map F_in as C=64. The number of IRB blocks n is set as n=2 for the ITM task and n=5 for the joint SR-ITM task. We use Kaiming initialization <cit.> to initialize the parameters of our IRNet.
To optimize these parameters, we adopt the Adam optimizer <cit.> with β_1 = 0.9 and β_2 = 0.999 to minimize an ℓ_1 loss function. The learning rate η is initialized as 5×10^-4 and decays to 1×10^-11 via a cosine annealing schedule with warm restarts <cit.> every 60 epochs. The batch size is set to 16. We train the models of our IRNet for 200 epochs on an NVIDIA V100 GPU with 32GB memory.
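A sketch of this training configuration in PyTorch is given below; the dummy data loader and the single-cycle restart multiplier are placeholders standing in for the real SDR/HDR patch loader (batches of 16 crops of size 256×256).

```python
import torch
from torch import nn, optim

# Training-setup sketch following the text: Adam, L1 loss, cosine annealing
# with warm restarts every 60 epochs from 5e-4 down to 1e-11. The loader below
# is a dummy stand-in for the real SDR/HDR patch loader.
model = IRNet(64, n_blocks=2)
criterion = nn.L1Loss()
optimizer = optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999))
scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=60, eta_min=1e-11)

train_loader = [(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))]  # dummy batch

for epoch in range(200):
    for sdr, hdr in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(sdr), hdr)   # L1 loss between prediction and HDR target
        loss.backward()
        optimizer.step()
    scheduler.step()                        # step the warm-restart schedule per epoch
```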
§ EXPERIMENTS
In this section, we evaluate the performance of the comparison methods and our IRNet on the ITM and joint SR-ITM tasks. We first introduce the datasets and metrics used. Then we present the comparison results on ITM and joint SR-ITM, respectively. Finally, we conduct a series of ablation experiments to study the components of our IRNet.
§.§ Dataset and Metrics
Training set. In our experiments, we use the recently published HDRTV1K dataset <cit.> to evaluate the comparison methods. This dataset contains 1,235 pairs of 8-bit SDR and 10-bit HDR images for training and 117 pairs of images for testing. We crop each image in the training set into 30 image patches of size 256×256. For data augmentation, we randomly flip the cropped patches horizontally or vertically, or rotate them by 90°, 180°, or 270°.
To perform joint SR-ITM on the HDRTV1K dataset, which is originally developed only for ITM, we downsample the SDR images by a factor of s=4 to obtain the low-resolution (LR) SDR images, similar to <cit.>. The high-resolution (HR) and HDR images from the HDRTV1K dataset can still be used as the training targets.
Test sets. On the ITM task, we evaluate the comparison methods on three datasets: the test set of HDRTV1K <cit.>, our newly collected ITM-4K dataset (for high-resolution images), and the test set in <cit.>. On the joint SR-ITM task, we evaluate the comparison methods on the test set of HDRTV1K <cit.>. The details of these test sets are summarized as follows:
* HDRTV1K <cit.> contains 117 test SDR images of size 3840×2160×3, with paired HDR images. For joint SR-ITM, we downsample the SDR images by a factor of 4 to generate the LR-SDR test images.
* ITM-4K contains 160 pairs of SDR and HDR images of size 3840×2160×3. These images are extracted from 9 HDR10 videos collected from https://4kmedia.org4kmedia.org. The corresponding SDR videos are generated through YouTube similar to <cit.>. We display 12 typical scenes from the 160 test images in Figure <ref>. In Figure <ref>, we also visualize the distribution of the 160 SDR images in our ITM-4K dataset and the 117 SDR test images in HDRTV1K <cit.> using t-SNE <cit.>. One can see that our ITM-4K dataset contains diverse scenes similar yet supplementary to the test set of HDRTV1K <cit.>.
* The test set in <cit.>. This dataset contains 28 test images, 12 of which overlap with the training set of HDRTV1K <cit.> and the test set of our ITM-4K. Thus, we use the remaining 16 images to evaluate the ITM methods. Note that although this dataset is originally used for the joint SR-ITM task, its test set provides SDR images of the same size as the corresponding HDR images, so it can be used to evaluate ITM methods. We do not use this test set for the joint SR-ITM task due to its overlap with the training set of HDRTV1K <cit.>.
Metrics. We evaluate the performance of different methods on ITM and joint SR-ITM in terms of PSNR, SSIM <cit.>, LPIPS <cit.>, and HDR-VDP3 <cit.>. PSNR is used to evaluate the closeness of the output image to the corresponding ground truth image. SSIM <cit.> and LPIPS <cit.> evaluate the structural and perceptual similarity, respectively, of the output image to the corresponding ground truth image. HDR-VDP3 <cit.> is a widely used metric to evaluate the quality of HDR images <cit.>, and we use its prediction of “quality” (Q) here.
§.§ Results on Inverse Tone Mapping
Comparison methods.
For our IRNet, we set n=2 and C=64, and denote it as “IRNet-2 (64c)”.
We compare it with four ITM methods: HDRNet <cit.>, CSRNet <cit.>, Ada-3DLUT <cit.>, and HDRTVNet <cit.>. The methods of Pixel2Pixel <cit.> and CycleGAN <cit.> are also evaluated as two generative baselines for ITM. As suggested in <cit.>, we also adapt the joint SR-ITM methods Deep SR-ITM <cit.> and JSI-GAN <cit.> to the ITM task by setting the stride of the first convolution layer to 2. This reduces their computational costs without degrading the ITM performance.
Objective results. The comparison results on the test set of HDRTV1K <cit.> are summarized in Table <ref>. One can see that our IRNet-2 (64c) outperforms the second-best method, i.e., AGCM+LE, by 0.59dB, 0.0011, and 0.3 in terms of PSNR, SSIM, and LPIPS, respectively. Note that our IRNet-2 (64c) has 134.73K parameters, fewer than all the other comparison methods except CSRNet (36.49K) and AGCM (35.25K); but these two methods suffer from a clear performance gap to our IRNet-2 (64c) on all evaluation metrics. On HDR-VDP3, our method is slightly (0.03) lower than the best method AGCM+LE. But AGCM+LE requires 1410K parameters, 6228.31G FLOPs, and 3114.09G MACs to process a 4K-resolution SDR image at a speed of 691.30ms, much larger than those of our IRNet-2 (64c).
Besides, our IRNet-1 (48c), i.e., the IRNet with a single IRB block and C=48, only needs 49.3K parameters to achieve results competitive with the second-best method AGCM+LE.
We further evaluate our IRNet-2 and other methods on our ITM-4K dataset and the 16 SDR images in the test set of <cit.>. As shown in Table <ref>, our IRNet-2 (64c) still achieves better results than other comparison methods on PSNR and HDR-VDP3. In summary, our IRNet achieves efficient ITM performance with a lightweight backbone.
Visual quality is an important criterion for evaluating ITM methods, since humans are the final judges of image quality. For visualization purposes, the HDR images are generated from HDR10 videos and stored in 16-bit PNG format. The visual comparisons of different methods on the three test sets are shown in Figure <ref>. We observe that most comparison methods suffer from a certain degree of color bias, especially near the light source. Our IRNet achieves results closer to the ground-truth images than the other methods, with more accurate colors and color contrasts. In addition, our IRNet achieves better PSNR and SSIM results than the other comparison methods. All these results demonstrate that our IRNet is very effective for ITM.
Running speed measures the actual wall-clock time of processing SDR images and directly reflects model efficiency. We calculate the running time of the comparison methods on 4K-resolution (3840×2160×3) images. As shown in Table <ref>, our IRNet-2 (64c) is faster than the second- and third-best methods, i.e., AGCM+LE and HDRTVNet, by 292.97ms and 1115.10ms, respectively. Meanwhile, IRNet-1 (48c) reduces the running time of IRNet-2 (64c) from 398.33ms to 166.91ms with guaranteed performance. Although faster than our IRNet-2, the methods of HDRNet, CSRNet, Ada-3DLUT, and AGCM suffer from obvious performance degradation on quantitative metrics.
§.§ Results on Joint SR-ITM
Comparison methods. Here, we set n=5 and C=64 in our IRNet, and denote it as “IRNet-5 (64c)”. We compare it with two SR methods, i.e., EDSR <cit.> and RFDN <cit.>, two cascaded two-stage SR-ITM methods, i.e., “HDRTVNet+RFDN” (sequentially performing ITM by HDRTVNet and SR by RFDN) and “RFDN+HDRTVNet” (vice versa), and two joint SR-ITM methods, i.e., Deep SR-ITM <cit.> and JSI-GAN <cit.>. For the cascaded SR-ITM methods, we choose RFDN <cit.> and HDRTVNet <cit.> since they are representative methods for SR and ITM, respectively.
Objective results. The comparison of numerical results is summarized in Table <ref>. It can be seen that the two SR methods still achieve reasonable performance in terms of objective metrics. By first performing SR and then ITM, the cascaded method achieves better results on image quality metrics, but requires heavy computational costs, e.g., 14783.55G FLOPs and 7391.58G MACs to process an LR-SDR image of size 960×540. Of course, first performing ITM and then SR significantly reduces the computational costs, but the performance on evaluation metrics suffers a huge degradation as well. Besides, compared with Deep SR-ITM and JSI-GAN, our IRNet-5 (64c) achieves the best PSNR results (0.38dB higher than the second-best method “RFDN+HDRTVNet”) and comparable results on the other metrics, while requiring the least parameters, computational costs, and inference time. These observations demonstrate that our IRNet is a lightweight and efficient backbone that achieves promising performance on the joint SR-ITM task.
Visual quality. In Figure <ref>, we qualitatively compare the visual results of different methods on the HDRTV1K test set <cit.> modified for joint SR-ITM (please refer to <ref> A). One can see that all these methods obtain promising visual results on the presented scenes. The method of “HDRTVNet+RFDN” produces blurry edges around the lighting area. Besides, the images output by “HDRTVNet+RFDN”, “RFDN+HDRTVNet”, Deep SR-ITM <cit.> and JSI-GAN <cit.> suffer from the color shift problem to some extent. By fully exploiting multi-layer features for fine-grained image reconstruction, our IRNet-5 (64c) not only accurately restores the image colors, but also well increases the image details during the SR process. These results validate that, though being lightweight with the fewest parameter amounts and computational costs, the proposed IRNet is very efficient on the joint SR-ITM task.
Running speed. The comparison results of running speed on the downsampled images (960×540×3) are summarized in Table <ref>. It can be seen that our IRNet is faster than other comparison methods. Note that when comparing with “RFDN+HDRTVNet”, our IRNet-5 achieves comparable performance with only 4.08% of its running time. These results validate the efficiency of our IRNet on joint SR-ITM.
§.§ Ablation Study
To study in detail the working mechanism of our IRNet, we present comprehensive ablation experiments of our IRNet on ITM. Specifically, we assess:
1) how to extract the intermediate feature F_1 in our IRB?
2) how does the number of IRB blocks affect our IRNet?
3) how does the channel dimension C in IRB influence our IRNet?
4) how does the CCA layer boost our IRNet?
All variants of our IRNet are trained and evaluated on the training set and test set of HDRTV1K <cit.>, respectively.
1) How to extract the intermediate feature F_1 in our IRB? The IRB in our IRNet is modified from the residual block (RB). To validate the effectiveness of our IRB, we first evaluate our IRNet by replacing the IRB blocks by the RB blocks (using LeakyReLU instead of ReLU for fair comparison). The results listed in the first two rows of Table <ref> show that our IRNet with the IRB block achieves much better performance than our IRNet with the original RB block.
Besides, we design several variants of our IRB block (“IRB”) and study how they influence our IRNet on ITM. We first remove the intermediate feature F_1 to verify its importance in our IRB; this variant is denoted as “IRB w/o F_1”. Then we study where to extract the intermediate feature F_1, which can be taken before the first convolution layer (take F_1 as F_in), after the activation layer (our IRB), or before the addition operation (take F_1 as F_2). The results are summarized in Table <ref>. One can see that our IRNet with the original IRB achieves the best PSNR and SSIM results. By removing the feature F_1, the variant of our IRNet shows a clear drop on PSNR and SSIM, but similar LPIPS and HDR-VDP3 results. If we use the input feature F_in of the IRB or the feature after the second convolution layer F_2 as the intermediate feature F_1, the variants of our IRNet suffer a clear drop on PSNR, with little difference on SSIM and LPIPS. All these results validate the effectiveness of using the feature after the activation function as the intermediate feature for our IRB to achieve promising ITM performance.
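To make the block structure concrete, below is a minimal PyTorch sketch of an IRB as described above, with F_1 taken right after the activation and reused together with the residual output. The concatenation followed by a 1×1 fusion convolution is our assumption about how the multi-layer features are combined; it is meant as an illustration, not a reproduction of the exact architecture.

import torch
import torch.nn as nn

class IRB(nn.Module):
    """Sketch of an Improved Residual Block: the intermediate feature F_1
    (taken after the activation) is fused with the residual output."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.LeakyReLU(inplace=True)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # fusion choice is assumed

    def forward(self, f_in):
        f1 = self.act(self.conv1(f_in))      # intermediate feature F_1
        f2 = self.conv2(f1)                  # feature F_2
        out = f_in + f2                      # residual connection
        return self.fuse(torch.cat([out, f1], dim=1))  # multi-layer feature fusion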
2) How does the number of IRB blocks affect our IRNet? In our IRNet, we use two IRB blocks for ITM and five IRB blocks for joint SR-ITM. Here, we vary the number of IRB blocks to study how it influences our IRNet. The results are listed in Tables <ref> and <ref>, respectively. It can be seen that our IRNet achieves promising performance with 1∼4 IRB blocks on SSIM, LPIPS, and HDR-VDP3. Our IRNet with two IRB blocks achieves the best PSNR results among all choices. Similarly, our IRNet with five IRB blocks achieves the best PSNR and SSIM results on joint SR-ITM, while that with six IRB blocks achieves the best LPIPS and HDR-VDP3 results. To reduce the parameter amounts, we use two and five IRB blocks in our IRNet for ITM and joint SR-ITM, respectively.
3) How does the channel dimension C in IRB influence our IRNet? To answer this question, we perform experiments on our IRNet with different number of channels in the IRB block. The results of our IRNet-1 and IRNet-2 on ITM and those of our IRNet-5 on joint SR-ITM are shown in the Table <ref>, Table <ref> and Table <ref>, respectively.
For ITM, our IRNet-1 using one IRB achieves the best PSNR and SSIM results when C=48 and with 49.30K parameters, while our IRNet-2 using two IRBs achieves the best PSNR and SSIM results when C=64 and with 134.73K parameters. For joint SR-ITM, our IRNet-5 using five IRBs achieves the best PSNR results when C=64 and with 468.19K parameters. Our IRNet-5 with C=96 achieves better SSIM, LPIPS, and HDR-VDP3 results, but suffers from a huge growth of parameter amounts. Thus, we set C=48 and C=64 in our IRNet-1 and IRNet-2, respectively for ITM, and C=64 in our IRNet-5 for joint SR-ITM.
4) How does the CCA layer boost our IRNet? Our IRNet uses one CCA layer after each IRB block to refine the feature maps. We remove the first CCA layer between two IRB blocks in our IRNet-2. The results on ITM are shown in Table <ref>. One can see that our IRNet-2 without the first CCA layer suffers from a clear performance drop on PSNR. This demonstrates that the CCA layer is important to our IRNet-2 on ITM.
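As a reference for what the CCA layer computes, the sketch below follows the usual contrast-aware channel attention design, in which each channel is re-weighted by a contrast statistic (per-channel standard deviation plus mean) rather than plain average pooling. The reduction ratio of 16 is an illustrative assumption, not a value taken from the paper.

import torch.nn as nn

class CCALayer(nn.Module):
    """Contrast-aware channel attention: channel weights from a contrast descriptor."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        mean = x.mean(dim=(2, 3), keepdim=True)
        std = x.std(dim=(2, 3), keepdim=True)
        weights = self.fc(mean + std)   # contrast descriptor -> per-channel weights
        return x * weights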
§ CONCLUSION
In this paper, we developed a lightweight and efficient inverse tone mapping (ITM) network. The proposed Improved Residual Network (IRNet) mainly consists of Improved Residual Blocks (IRB) modified from the popular residual block and Contrast-aware Channel Attention (CCA) layers. The proposed IRB block is able to fuse multi-layer features extracted by different convolution layers for fine-grained ITM. We also collected a new ITM-4K test set containing 160 versatile 4K-resolution SDR images. Experiments on three benchmark datasets demonstrated that our IRNet outperforms the state-of-the-art methods on the ITM task with only ∼0.13M parameters and ∼0.22×10^4G FLOPs per 4K image. Further experiments on the joint SR-ITM task also showed the advantages of our IRNet over the comparison methods on the objective metrics, the computational efficiency, and, most importantly, the image quality, such as color depth restoration.
|
http://arxiv.org/abs/2307.04724v1 | 20230710173408 | The individual abundance distributions of disc stars across birth radii in GALAH | [
"Kaile Wang",
"Andreia Carrillo",
"Melissa K. Ness",
"Tobias Buck"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Individual abundances in the Milky Way disc record stellar birth properties (e.g. age, birth radius R_birth) and capture the diversity of the star-forming environments over time. Assuming an analytical relationship between ([Fe/H], [α/Fe]) and R_birth, we examine the distributions of individual abundances [X/Fe] of elements C, O, Mg, Si, Ca (α), Al (odd-z), Mn (iron-peak), Y, and Ba (neutron-capture) for stars in the Milky Way. We want to understand how these elements might differentiate environments across the disc. We assign tracks of R_birth in the [α/Fe] vs. [Fe/H] plane as informed by expectations from simulations for ∼ 59,000 GALAH stars in the solar neighborhood (R∼7-9 kpc) which also have inferred ages. Our formalism for R_birth shows that older stars (∼10 Gyrs) have a R_birth distribution with smaller mean values (i.e., R̅_birth∼5±0.8 kpc) compared to younger stars (∼6 Gyrs; R̅_birth∼10±1.5 kpc), for a given [Fe/H], consistent with inside-out growth. The α-, odd-z, and iron-peak element abundances decrease as a function of R_birth, whereas the neutron-capture abundances increase. The R_birth-[Fe/H] gradient we measure is steeper compared to the present-day gradient (-0.067 dex/kpc vs -0.058 dex/kpc), which we also find true for R_birth-[X/Fe] gradients. These results (i) showcase the feasibility of relating the birth radius of stars to their element abundances, (ii) show that the abundance gradients across R_birth are steeper than those over current radius, and (iii) offer an observational comparison to expectations on element abundance distributions from hydrodynamical simulations.
Galaxy: abundances – Galaxy: disc – Galaxy: evolution
§ INTRODUCTION
Recovering the birth conditions of the stars is one of the main goals of Galactic archaeology. However, stars deviate from their birth orbits, such that their guiding-center radius can change over their lifetime, without leaving any signature of this change. These orbital excursions are due to processes such as the interaction with the spiral structure as well as external perturbations from infalling satellites (e.g. ). Although we cannot directly probe the initial orbital properties of disc stars at birth, they exhibit atmospheric abundances that - to first order - reflect the abundance distribution of the gas from which the stars were born, with exceptions (e.g. ).
We can therefore assume that most element abundances of stars, in particular within narrow regions of evolutionary state, are time-invariant. With stellar death, elements created within the stars and during explosive nucleosynthesis are returned to the interstellar medium. This enriches the environment where newer stars are formed, in a cyclic process. The element abundances for a given star are therefore a record of the nucleosynthetic history of the star-forming environment, at that particular time and place. The time invariance of element abundances and their effective barcode of a star's birth environment has been foundational to the idea of chemical tagging, via which individual molecular cloud stellar birth sites in the disc might be reconstructed using abundances alone <cit.>. However, the current data appear to demonstrate that this goal is prohibited by the low-dimensionality of what appears to be a very correlated abundance space
<cit.>. A more feasible goal with current spectroscopic data is the inference of the time and overall radius at which stars formed in the disc.
Different types of stars and production mechanisms produce elements across the periodic table with different yields, at different rates, and at different points in time (see ). Additionally, it is widely accepted that galaxies, like the Milky Way, formed inside-out, with star formation starting in the deepest part of the potential and proceeding outwards (e.g. ). Combining nucleosynthesis timescales with the inside-out formation of the Milky Way, the element abundances of the stars encode the temporal enrichment of the Galaxy and reveal stars' birth properties in terms of age and spacial location. We are now able to have a clearer picture of this as the field of Galactic archaeology has greatly expanded due to large multi-object stellar surveys, such as the Apache Point Observatory Galactic Evolution Experiment (APOGEE; ), the Galactic Archaeology with HERMES (GALAH; ), Gaia-European Southern Observatory (ESO) survey (Gaia-ESO; ), and the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST; ). These surveys enable the detailed study of the element abundance for >10^5 stars in the Galaxy.
In addition to element abundances, another fundamental and time invariant property of stars is their age. Age tells us when during the evolution of the galaxy a star was formed. In fact, numerous studies have explored the relationship between stellar age and element abundances, or age-[X/Fe] relations (). These studies have shown that just by knowing a star's age and metallicity, [Fe/H], the element abundance, [X/Fe], can be predicted up to a precision of 0.02 dex for many elements. Indeed, a star's age does prove to be a key link to understanding the nucleosynthetic history and evolution of the Galaxy.
The only missing link now is where in the galaxy a star was born. If we know a star's individual abundances, age, and birth site, we can begin to unravel the formation of the Milky Way disc with utmost detail.
To this effect, the element abundances of stars should again prove very useful. Earlier works have demonstrated the feasibility to infer birth radius, R_birth. For example, <cit.> presented a largely model-independent approach for estimating R_birth for Milky Way disc stars, using [Fe/H] and age estimates from the local HARPS sample <cit.>. The assumptions relied on are (1) the interstellar medium (ISM) is well mixed at a given radius, (2) there exists a negative radial metallicity gradient in the ISM for most of the disc lifetime, (3) stars younger than 1 Gyr are expected to have little migration, and (4) the Milky Way formed inside-out. Utilizing the R_birth derived in their work, they find that the ISM radial metallicity gradient in the Milky Way disc flattens with time. As noted in this study, processes like radial migration can blur R_birth signatures. With this in mind, <cit.> developed a model to derive R_birth by quantifying the radial migration in the Milky Way disc, using the ages and [Fe/H] of low-α disc stars. In this work, it was assumed that (i) the metallicity of the ISM has negligible variations azimuthally, (ii) the Milky Way had a relatively quiescent life for the past 8 Gyr, and (iii) radial orbit migration is the only mechanism responsible for the scatter in age–metallicity at a given radius. Their model reproduced the observed data well and further found that the radial orbit migration efficiency in the Milky Way is strong. Recently, <cit.> proposed an empirical method to derive birth radii from age and metallicity measurements, with the assumptions that gas is well mixed in Galactic azimuth, the Milky Way formed inside-out, and there is a well-defined linear relation between metallicity and birth radius. Such in-depth studies to derive R_birth have been shown to be successful, with the help of various physically-motivated assumptions and modeling. It is therefore worth asking if R_birth could be similarly derived with different assumptions, and specifically with a model that does not directly use present-day radius measurements. In addition, as detailed element abundances have been shown to have a direct link to ages, it is interesting to explore how detailed element abundances can potentially trace stars back to their birth sites.
Fortunately, the correlations between the birth radius and stellar properties have also been shown from cosmological hydrodynamical simulations, allowing methods of recovering the birth radius of stars to be explored. For example, <cit.> examined the reliability of inferring birth radii from the assumed linear relationship between the ISM metallicity and radius, using four zoom-in cosmological hydrodynamic simulations from the NIHAO-UHD project (). They found that precise stellar birth radii can be obtained for stars with age < 10 Gyr, as the stellar disc starts to form and the linear correlation between the ISM metallicity and radius increases. Also with the simulations from the NIHAO-UHD project, <cit.> showed the direct correlation between element abundances (specifically, [O/Fe] and [Fe/H]) and the birth location of stars.
In this work, we want to recover the birth radius of stars simply based on their [Fe/H] and [α/Fe] abundances, as shown in simulation works (e.g. ). Instead of performing complex Galactic chemical evolution modeling, we assign each of the stars a birth radius based on their [Fe/H] and [α/Fe] abundances and examine the validity of this birth radius assignment. To do this, we explore the individual abundance distribution [X/Fe] across birth radii with the disc stars in GALAH DR3 data <cit.>. In Section <ref>, we describe the observational data we used in this study. In Section <ref>, we discuss our birth radius assignment and the simulation work we are motivated by. In Section <ref>, we present our age-birth radius relation in two thin metallicity bins, and in Section <ref> we show the distribution of individual element abundances [X/Fe] across birth radii. The results presented in these two sections validate our birth radius assignment based on element abundances. Lastly, we summarize and discuss the results in Section <ref>.
§ OBSERVATIONAL DATA
We take advantage of the Galactic Archaeology with HERMES (GALAH) survey data release 3 (DR3, ) which measures up to 30 element abundance ratios for elements in different groups: α, light/odd-z, iron-peak, and neutron-capture. The GALAH survey uses the HERMES instrument, a high-resolution (R ∼ 28,000) four channel fibre-fed spectrograph (covering 4713–4903 Å, 5648–5873 Å, 6478–6737 Å, and 7585–7887 Å) on the Anglo-Australian Telescope <cit.>. The catalogue contains 588,571 stars, with the stellar parameters determined using the modified version of the spectrum synthesis code Spectroscopy Made Easy (SME: ; ) and 1D MARCS model atmospheres. After the stellar parameters were estimated and fixed, one abundance was fitted at a time for the different lines/elements in the GALAH wavelength range <cit.>. In this work, we aim to study the distribution of individual abundances [X/Fe], which we take to be X = α, C, O, Mg, Al, Si, Ca, Mn, Y, and Ba, spanning the different groups of elements.
In addition to the main catalogue, we also use the GALAH DR3 value-added catalogue that contains stellar ages, Galactic kinematics, and dynamics. The stellar ages were determined by the Bayesian Stellar Parameter Estimation code (BSTEP), an isochrone-based
scheme that provides a Bayesian estimate of intrinsic stellar parameters from observed parameters by making use of stellar isochrones, adopting a flat prior on age and metallicity <cit.>. The Galactic dynamic information was calculated using galpy <cit.>. In the calculations, the best fitting axisymmetric potential by <cit.> was used with a Solar radius of 8.21 kpc.
We assemble a parent sample of qualified GALAH DR3 disc stars according to the following criteria:
* flag_sp=0, flag_fe_h=0, flag_X_fe=0
* -2 < [Fe/H] < 0.5, -1 < [X/Fe] < 6
* 3500 < T_eff < 6250 K, SNR = snr_c3_iraf > 40
* 7 < R < 9, and |z| < 2
where X = α, C, O, Mg, Al, Si, Ca, Mn, Y, and Ba. We set the cut in element abundance to avoid extreme values. The flag_sp, flag_fe_h, and flag_X_fe are set to select stars with reliable stellar parameters and element abundance determination. In addition, we limit the T_eff range such that the abundances are not affected by systematic temperature trends. This selection produces agreement between the T_eff values from GALAH+ DR3 and from angular diameter-based measurements (e.g. ) for Gaia benchmark stars <cit.>. We show in the Appendix in Figure <ref> that there are only small slopes between element abundances [X/Fe] and T_eff. These may be real or systematics inherited from stellar models. We employ a signal-to-noise ratio cut of SNR > 40 for the red band (CCD 3) to ensure good quality spectra, as well as cuts in Galactocentric radius (R) and height from the disc plane (z) to select for disc stars. This results in a sample of 59,124 stars, and the stellar parameters are shown in Figure <ref>. The stars span a range of 0.004 to 13 Gyrs in age, with a median age = 5.7 Gyrs. The 16th and 84th percentiles of age are 3.7 Gyrs and 8.6 Gyrs respectively. Figure <ref> shows the density plots of the parent sample on the [Fe/H]-[X/Fe] plane, for elements C, O, Mg, Al, Si, Ca, Mn, Y, and Ba.
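The cuts above can be expressed compactly as a filter on the merged GALAH DR3 main and value-added catalogues. The sketch below assumes the data are loaded into a pandas DataFrame; the column names (including those for Galactocentric R and z) follow common GALAH conventions but should be treated as assumptions.

import pandas as pd

elements = ["alpha", "C", "O", "Mg", "Al", "Si", "Ca", "Mn", "Y", "Ba"]

def select_disc_sample(df):
    """Quality, abundance, and disc-geometry cuts for the parent sample."""
    mask = (
        (df["flag_sp"] == 0) & (df["flag_fe_h"] == 0)
        & df["fe_h"].between(-2.0, 0.5)
        & df["teff"].between(3500.0, 6250.0)
        & (df["snr_c3_iraf"] > 40)
        & df["R_gal"].between(7.0, 9.0)   # Galactocentric radius [kpc] (assumed column)
        & (df["z_gal"].abs() < 2.0)       # height above the plane [kpc] (assumed column)
    )
    for el in elements:                   # per-element abundance quality flags
        mask &= (df[f"flag_{el}_fe"] == 0)
    return df[mask]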
§ METHOD
We aim to determine, given [Fe/H] and [α/Fe] abundances, the distribution of [X/Fe] across different birth radii (R_birth), under an assumed relation between [Fe/H]-[α/Fe] and R_birth. Cosmological simulations (e.g. ) demonstrate clear birth radius tracks on the [O/Fe] vs. [Fe/H] abundance plane. Figure <ref> is a reproduction of Figure 3 in <cit.> for the galaxy g2.79e12, showing the [O/Fe] vs. [Fe/H] plane at solar radius (7<R<9 kpc). The zoom-in simulation of g2.79e12 analyzed in <cit.> is taken from the Numerical Investigation of a Hundred Astronomical Objects (NIHAO) simulation suite of cosmological hydrodynamical simulations of Milky Way mass galaxies (). The total virial mass, total stellar mass, and the disc scale length of g2.79e12 are 3.13×10^12M_⊙, 15.9×10^10M_⊙, and 5.57 kpc, respectively. Figure <ref> panels are colored by (a) birth radius, (b) age, (c) birth radius dispersion, and (d) stellar mass. In Figure <ref> panel (a), stars with high [O/Fe] (>0.3) are seen to mostly originate from the inner Galaxy, while stars with low [O/Fe] (< 0.2) are distributed across a wider range of birth radii, where larger birth radii are offset to lower metallicity. Panel (b) shows clear horizontal age gradients with older ages associated with higher [O/Fe]. In panel (c), there is high birth radius dispersion around [Fe/H] = -1.0, [O/Fe] = 0.3, as well as in the lower-right region of the [O/Fe] vs. [Fe/H] plane towards high metallicity. The stellar mass is also higher in the lower-right region, as shown in panel (d).
Motivated by the results from the <cit.> simulations, specifically the birth radius-element abundance trends, we lay down seven tracks (2, 4, 6, 8, 10, 12, and 14 kpc), as shown in Figure <ref> panel (a), in the [α/Fe] vs. [Fe/H] plane from GALAH data. These tracks can be described by the following equation
R_birth = -40×([α/Fe] + 0.80×exp(0.4×[Fe/H]) - 0.81) + 8
which was obtained by fitting birth radius tracks similar to Figure <ref> panel (a). We further assign every star in our sample a birth radius according to the equation, with known [Fe/H] and [α/Fe]. The number of stars in bins between each track is as follows: 3804 (2-4 kpc), 9328 (4-6 kpc), 15720 (6-8 kpc), 18628 (8-10 kpc), 9565 (10-12 kpc), 1306 (12-14 kpc). Stars with assigned R_birth < 0 kpc are removed (170 stars, or 0.29% of all qualified disc stars).
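The assignment itself is a one-line mapping; a minimal sketch implementing the equation above is shown here, with the column names of the input arrays left as assumptions.

import numpy as np

def birth_radius(fe_h, alpha_fe):
    """Assigned birth radius (kpc) from [Fe/H] and [alpha/Fe], following the equation above."""
    return -40.0 * (alpha_fe + 0.80 * np.exp(0.4 * fe_h) - 0.81) + 8.0

# example usage on a catalogue table (column names assumed):
# r_birth = birth_radius(df["fe_h"], df["alpha_fe"])
# df = df[r_birth > 0]          # drop the few unphysical, negative assignments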
Instead of using the oxygen abundance [O/Fe], we choose to use the α-element abundance [α/Fe] because (1) it is better measured, as the mean uncertainty in [α/Fe] is smaller than that of [O/Fe], and (2) in the simulations performed by <cit.>, [O/Fe] is intended more as a tracer of α-elements than of the specific element O. The absolute values of each radial track are not calibrated to match the Milky Way, but the range is consistent with the birth radius range used in other studies, e.g. . We adopt this form of the relation between [Fe/H]-[α/Fe]-birth radius and examine the overall effect of the birth radius variations on the element abundance distributions, and on the birth radius at fixed age, if such a relation holds in the Milky Way.
The birth radius increases as [α/Fe] decreases (from top to bottom), and the y-axis spacing between two neighboring curves is around 0.05 dex. The distribution of the parent sample stars on the [α/Fe] vs. [Fe/H] plane is also shown in Figure <ref> panel (a), colored by age, with the median ([Fe/H], [α/Fe]) shown as a black circle. We lay down the tracks such that the middle track goes over the median ([Fe/H], [α/Fe]) point because most of the stars are located near the origin, i.e. ([Fe/H], [α/Fe]) = (0, 0) (see the density plot in Figure <ref> panel (b)). Furthermore, the birth radius for the majority of stars roughly follows a similar distribution as their current Galactocentric radii <cit.>, which is around 8 kpc. Additionally, the distribution of stellar ages exhibits a decreasing trend going towards lower [α/Fe] and higher [Fe/H], as shown in Figure <ref> panel (a).
As shown in Figure <ref> panel (b), the stellar population density is very non-uniformly distributed in the [Fe/H]-[α/Fe] plane. We wish to carry out an analysis of how the age and individual abundance distributions of stars change with birth radius, given our model-assigned R_birth in the [Fe/H]-[α/Fe] plane. Therefore, the varying density distribution of stars in this plane is not information we wish to propagate. To eliminate the impact of the uneven density of stars in the [Fe/H]-[α/Fe] plane for this analysis, we use a grid of evenly spaced representative populations in [α/Fe] vs. [Fe/H]. Along the x and y axis, the grid spacing is 0.051 and 0.019, respectively. Bins with N < 20 stars are removed, mostly on the edges, because we want our binned data to be representative of the neighboring star population on the abundance plane.
The remaining sample of 231 binned data points, including 57,858 stars, is summarised in Figure <ref> panel (c), colored by mean birth radius. We use these binned data points, which give us an even sampling of R_birth across the [Fe/H]-[α/Fe] plane, for further analysis.
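A minimal sketch of this even-grid binning is given below; it assumes the selected sample is a pandas DataFrame with the assigned birth radius already attached, and the column names are again assumptions.

import numpy as np
import pandas as pd

def bin_abundance_plane(df, dx=0.051, dy=0.019, min_stars=20):
    """Bin stars on an even [Fe/H]-[alpha/Fe] grid and keep bins with >= min_stars,
    so the later analysis is not driven by the uneven density in the plane."""
    df = df.copy()
    df["x_bin"] = np.floor(df["fe_h"] / dx)
    df["y_bin"] = np.floor(df["alpha_fe"] / dy)
    grouped = df.groupby(["x_bin", "y_bin"])
    binned = grouped[["fe_h", "alpha_fe", "age", "r_birth"]].mean()
    binned["n_stars"] = grouped.size()
    return binned[binned["n_stars"] >= min_stars].reset_index()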
§ BIRTH RADIUS DISTRIBUTIONS WITH AGE AND METALLICITY
We explore how the birth radius distribution of stars in the [Fe/H]-[α/Fe] plane, as shown in Figure 4 (c), changes as a function of age and metallicity. In Figure <ref>, we show the birth radius distribution for a high metallicity (-0.25<[Fe/H]<0, top panel) and low metallicity (-0.5<[Fe/H]<-0.25, bottom panel) sample. Within the same metallicity bin, the sample is broken down into three stellar age bins. These are shown separately with different colors in the sub-panels of Figure <ref>, with the lightest to darkest color for the youngest to oldest stars, respectively.
The mean birth radius values for the three age bins lie at 10.1 kpc (4-6 Gyr bin), 8.2 kpc (6-8 Gyr bin), and 5.0 kpc (8-10 Gyr bin) for the high metallicity sample, and at 10.7 kpc (6-8 Gyr bin), 8.1 kpc (8-10 Gyr bin), and 5.2 kpc (10-12 Gyr bin) for the low metallicity sample. For both the high and low metallicity samples, the birth radius distribution for older stars generally peaks at a smaller birth radius compared to younger stars, exhibiting an inside-out formation trend similar to other studies (e.g. ). Furthermore, the width of the birth radius distributions also has a correlation with age, in which the width decreases with increasing age. The median absolute deviations (MAD) of the three high metallicity age bins are 1.4 kpc (4-6 Gyr bin), 1.2 kpc (6-8 Gyr bin), and 0.8 kpc (8-10 Gyr bin), and the values for the low metallicity sample are 1.5 kpc (6-8 Gyr bin), 1.2 kpc (8-10 Gyr bin), and 0.8 kpc (10-12 Gyr bin). Here we choose MAD to describe the dispersion because our sample distribution is non-Gaussian, and MAD is less sensitive to extreme values. Under this assumed model between birth radius and the - plane, this is consistent with an inside-out formation of the Milky Way; the older stars are more concentrated in the inner Galaxy. The younger stars on the other hand show mean distributions at larger radii and with wider distributions across Galactic radii.
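The per-bin summary statistics quoted above (means and median absolute deviations) can be reproduced with a short routine such as the sketch below; the age bin edges are those used in the text, and the input arrays are assumed to be numpy arrays for one metallicity sample.

import numpy as np

def summarize_rbirth(r_birth, ages, age_edges=(4, 6, 8, 10)):
    """Mean and median absolute deviation of assigned birth radii in age bins."""
    for lo, hi in zip(age_edges[:-1], age_edges[1:]):
        sel = (ages >= lo) & (ages < hi)
        r = r_birth[sel]
        mad = np.median(np.abs(r - np.median(r)))
        print(f"{lo}-{hi} Gyr: mean R_birth = {r.mean():.1f} kpc, MAD = {mad:.1f} kpc")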
Interestingly, we do not see any obvious age-R_birth trends when examining the data across all [Fe/H], i.e. without looking at different metallicity bins. This signal is erased, as the mean age distribution is a function of [Fe/H], so this age gradient, which is consistent with the idea of `inside-out' formation, is only seen when looking at the distribution of stellar ages in small ranges of [Fe/H] in our sample. In the pre-binned data (shown in Figure 3, panel (b)), there is a clear density peak in the distribution in the [Fe/H]-[α/Fe] plane; this non-uniform density would presumably enable signatures in age and radius, which are correlated with this plane, without metallicity binning, as the majority of stars are at one particular metallicity already. The overall age gradient seen when examining all stars in the Milky Way (e.g. ) is similarly presumably sensitive to the underlying density distribution of stars as a function of metallicity. This is an example of the Yule-Simpson paradox, a phenomenon in which a trend appears in several groups of data, but disappears or reverses when the groups of data are combined. Examples of Yule-Simpson's paradox in Galactic archaeology can be found in <cit.>. Additionally, samples with different metallicity are dominated by stars of different ages. As shown in <cit.> Figure 2, the distribution of current radius R at low metallicity ([Fe/H] = -0.75) is dominated by 7-10 Gyr old stars, while the 1-3 Gyr old star population becomes the majority at high metallicity ([Fe/H] = 0). This change in age dominance with [Fe/H] also appears in Figure <ref>. For the high metallicity sample, we are able to make a bin for 4-6 Gyr stars but not for 10-12 Gyr due to having too few old stars in the sample, and the opposite holds in the low metallicity sample. Therefore, we have to make bins according to [Fe/H] to account for the differing dominant age populations. In addition, this allows us to see inside-out growth in the level of chemical enrichment for mono-age populations. Comparing the distributions of the two 6-8 Gyr age bins (colored light pink) in both the high and low metallicity samples, we find that the low metallicity sample peaks at a larger R_birth. A similar trend also exists in the distributions for age = 8-10 Gyr stars (colored red). By selecting narrow metallicity bins, we show that the inside-out formation holds for different metallicities.
We summarise the birth radius-age relation, as shown in the top-left panel of Figure <ref>. Overall, as the birth radius increases, the stellar age decreases. Similarly, as birth radius increases, the mean metallicity, [Fe/H], decreases, as shown in the bottom-left panel of Figure <ref>. The top-right panel of Figure <ref> shows the age dispersion as a function of birth radius. We see that small birth radii correspond to the highest age dispersions. Similarly, in the bottom-right panel of Figure <ref>, we see that the [Fe/H] dispersion is highest at the smallest radii.
§ INDIVIDUAL ABUNDANCE DISTRIBUTIONS AT DIFFERENT BIRTH RADII
We investigate the abundance distributions for the elements C, O, Mg, Al, Si, Ca, Mn, Fe, Y, and Ba, spanning the α, odd-z, iron-peak, and neutron-capture groups of elements, at different birth radii. These [X/Fe] distributions are shown in Figure <ref>. The number of data points from Figure 4 (c) in each of the birth radius bins is 84 (2-6 kpc), 84 (6-10 kpc), and 53 (10-14 kpc).
We find a bimodal distribution towards small birth radius bins. High precision observational measurements of [α/Fe]-[Fe/H] in the solar neighborhood show a bimodality termed the `low' and `high' alpha discs (e.g. ). Across a wider Galactic radius range these change in their density contribution; the high-alpha sequence is concentrated to the inner Galaxy and the low-alpha sequence extends to the outer Galaxy (e.g. ). The sampling we use for our analysis is evenly spaced across the full [Fe/H]-[α/Fe] plane, as shown in Figure <ref> (c). However, when we examine the individual abundance distributions, a bimodality appears in a number of individual elements at the smallest birth radii. This is presumably due to the contribution from both the high and low alpha discs at fixed birth radius in the inner Galaxy. In effect, this is a strong prediction of our model: that the disc is bimodal in elements at small birth radius. Furthermore, most of the elements show that the [X/Fe] distribution changes from wide (2-6 kpc) to narrow (10-14 kpc) as the birth radius increases.
Metallicity, [Fe/H]: The metallicity distribution at a small birth radius has a higher mean value. A decreasing mean metallicity gradient is observed with present-day guiding radius from the Milky Way center to the outer region (e.g. ). This is inherited from a birth gradient in the gas metallicity (e.g. ) but has presumably been weakened by radial migration over time (e.g. ).
Carbon: Carbon is mainly produced in massive stars, followed by low-mass AGB stars <cit.>. Therefore, carbon distributions should be similar to those of the α-elements, as the majority of the α-elements are produced in massive stars. The age-abundance relation for carbon in other observational works (e.g. ) shows a positive gradient, indicating that [C/Fe] is larger for older stars. In this study, the carbon abundance [C/Fe] has little relation with R_birth. We see a weak and opposite trend where there is a slight shift in peak position (i.e. larger R_birth bins have greater peak [C/Fe]). However, carbon changes over the evolution of the star due to dredge-up, so perhaps this is representative of the impact of the intrinsic evolution of the element rather than extrinsic (ISM) enrichment.
Oxygen, Magnesium, Silicon, and Calcium (α-elements): For the α-elements Mg, Si, and Ca, the distributions peak at a smaller mean [X/Fe] as the birth radius increases. The α-elements are mainly produced through Type II supernovae, and their relative ISM contribution is diluted by the increasing supernova Ia iron-peak pollution. Therefore, we expect the abundance of α-elements, as a function of iron, to be lower in younger stars. We note that the oxygen abundance [O/Fe] shows the smallest evolution across different birth radii. The distribution is wider at smaller birth radii, and each of the distributions overlaps significantly. We see little variation in [O/Fe] with R_birth, which contradicts the progression found in other works (e.g. ).
Manganese (iron-peak): The iron-peak element Mn has a higher mean [Mn/Fe] value toward smaller birth radii. The iron-peak elements like Mn are generally synthesized in Type Ia supernovae and also in collapse supernovae. At the center of the Milky Way, younger stars are formed from more enriched gas compared to the outskirts of the Galaxy. As [Mn/Fe] increases with [Fe/H] (e.g. ), [Mn/Fe] is expected to be higher in the Galactic center compared to that in the outskirts. In the age-abundance trends of Mn examined by <cit.> and <cit.>, we see that both studies reveal a relatively flat but still positive age-abundance slope. In general, our result agrees with those from the previous studies.
Aluminum (Odd-z): The odd-z element Al also has a higher mean abundance at smaller birth radii. Based on the prediction from the chemical evolution model of the Milky Way, [Al/Fe] decreases with time for stars with age 12 Gyrs and younger ( Figure 2). Since the majority of our sample stars are younger than 12 Gyrs, we expect our sample to behave similarly (i.e. decreasing [Al/Fe] with time). Moreover, <cit.> examined the age-abundance relation of stars at solar metallicity and discovered a positive relation wherein [Al/Fe] increases with increasing age. Such a trend is also seen by <cit.> in their analysis of the Sun-like stars in the solar neighborhood. Thus, [Al/Fe] is expected to increase with decreasing birth radii, as predicted by both the chemical evolution model and the age-[Al/Fe] relation, and as shown in our results. Interestingly, the 2-6 birth radius bin does not follow the general trend of increasing dispersion with smaller birth radii. However, this is because, for all the binned data points with Al abundance available, the ones that fall in the 2-6 kpc birth radius bin do not span a wide range in [Fe/H], and thus the dispersion of the bin is smaller.
Barium & Yttrium (Neutron-capture): The two neutron-capture elements, Ba and Y, though centered on different values, have similar abundance distributions for stars at different birth radii; that is, the distribution peaks at a larger [X/Fe] value as birth radius increases. They exhibit an opposite trend as the aforementioned elements C, O, Mg, Al, Si, Ca, and Mn. This trend is consistent with the age-abundance relation for the neutron-capture elements from the literature (e.g. ). According to the negative age-[Ba/Fe] relation <cit.> as well as the age-birth radius relation (Figure <ref> top-left panel), the older population were born at smaller mean birth radii with a lower [Ba/Fe] value. It is reassuring that Ba and Y have abundance distributions that behave similarly to birth radius, as both are considered s-process elements.
Furthermore, we calculate and tabulate the R_birth-[X/Fe] gradients for the low-α stars. In Figure <ref>, we present the [X/Fe] vs. R_birth plots for the low-α stars in GALAH DR3, with the black lines representing the best-fit gradients and colored by log density. The vertical error bar reflects the MAD of [X/Fe] in small bins with bin width = 2 kpc. The gradient results are summarized in Table <ref> column 3. The reason we focus on the low-α population is that they exhibit the strongest change in element abundances across radius, while for the high-α stars there is no obvious abundance trend associated with radius (e.g. ). Adopting <cit.> cuts for low-α stars ([Mg/Fe]>0.12-0.13[Fe/H] if [Fe/H]<0; [Mg/Fe]>0.12 if [Fe/H]>0), the number of the low-α stars in our sample is ∼56,000. The inner-most bin (i.e. R_birth < 5 kpc) seems to be an outlier to the general trend (referring to Figure <ref>). Therefore, to justify a linear fit and gradient metric, we excluded the inner-most data points in our gradient calculations. The gradients are calculated over a range of 5-13 kpc. The largest abundance gradient with R_birth is seen in [Fe/H] at -0.067 dex/kpc, followed by the individual element [O/Fe] with an [X/Fe]-R_birth slope of 0.029±0.0002 dex/kpc.
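The gradient measurement amounts to a linear fit over the 5-13 kpc range; a minimal sketch is given below, where the inputs are numpy arrays for the low-α sample and the uncertainty estimation (e.g. via bootstrapping) is left out for brevity.

import numpy as np

def rbirth_gradient(r_birth, x_fe, r_min=5.0, r_max=13.0):
    """Linear [X/Fe]-R_birth gradient (dex/kpc) over 5-13 kpc, excluding the innermost bin."""
    sel = (r_birth >= r_min) & (r_birth <= r_max) & np.isfinite(x_fe)
    slope, intercept = np.polyfit(r_birth[sel], x_fe[sel], deg=1)
    return slope, intercept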
We emphasize that in the GALAH sample we use, our present-day radius is limited to the solar neighborhood, with a mean present-day radius of 8.14±0.35 kpc. However, as stars migrate from birth, this survey still gives us access to stars born all over the disc, as parameterized in our model of R_birth (from 2-14 kpc). In APOGEE, the survey spans a present-day Galactocentric radius of 0.01-20 kpc, so we can directly compare and contrast our results for birth radius to the present-day radius with APOGEE. For example, Table <ref> column 1 shows the abundance gradients for APOGEE DR16 low-α disc stars (i.e. [α/M]<0.12, |z|<1) with current radius in the range of 5-13 kpc, obtained from <cit.> Figure 7.
We show the seven elements [X/Fe] (where X = C, O, Mg, Al, Si, Ca, and Mn) in APOGEE DR17 <cit.> that are in common with the elements used in this study. We also calculate gradients for the element abundances [X/Fe] independently, using ∼ 63,000 APOGEE DR17 low-α stars. We adopt similar cuts as <cit.> (i.e. 4800 K < T_eff < 5800 K, log g < 3.6, [α/M]<0.12, and |z|<1). The APOGEE gradients are summarized in Table <ref> columns 1 and 2. In column 4, since GALAH covers a narrow range in current radius compared to APOGEE, the present-day radius-abundance gradients for GALAH low-α stars around the solar neighborhood only (7<R<9 kpc) are shown.
§ DISCUSSION
In this work, we explore the element abundance distributions of stars as a function of birth radius which we inferred from the [Fe/H]-[α/Fe] plane alone, as motivated by cosmological simulations. We now discuss the validity of our assigned tracks and the implications of our estimates on the star formation history of the Galaxy.
We test two other models for assigning the birth radius. We lay down horizontal and vertical tracks on the [α/Fe] vs. [Fe/H] plane. From these alternate tracks, we produce [X/Fe] distributions of these stars with different R_birth, similar to Figure <ref>. In the horizontal assignment, we see that the mean [Mn/Fe] and [Fe/H] values increase with increasing R_birth. This contradicts the observed [Fe/H] gradient (i.e. higher [Fe/H] at the center) of the Galaxy due to inside-out formation and therefore its longer history of star formation. In addition, there is no obvious trend in the dispersion across different R_birth bins for C, O, Al, Mn, Y, and Ba. As for the vertical assignment, the mean abundances of all four α-elements, O, Mg, Si, and Ca, increase with R_birth, which does not agree with what is observed with the present-day guiding radius. Observations show that as radius increases the low-α populations dominate, and in the inner Galaxy the high-α population has the highest density (e.g. ). Therefore, the alternative models we propose result in [X/Fe] distributions that are inconsistent with observations of the present-day guiding radius. However, in general the assignments motivated by the NIHAO-UHD simulations give rise to trends in the individual abundances [X/Fe] that are consistent with observations of element abundance distributions with present-day guiding radius. We have the expectation that the element abundance gradients and dispersions as a function of birth radius will be higher amplitude than those of the present-day guiding radius due to the impact of radial migration. Therefore, this gives us a better insight into the element abundance distributions at stellar birth place and time in the Milky Way disc.
Due to radial migration <cit.>, we expect gradients in [X/Fe]-R_birth to be weakened over time. Therefore, abundance gradients across R_birth should be steeper than present-day gradients. This is indeed what we find for most elements.
Using the APOGEE DR16 data, <cit.> report negative present-day gradients across radius in the low-α disc (i.e. [α/M]<0.12, |z|<1) for [Fe/H], as well as as the individual elements [X/Fe] where X = C, Al, Mn. For the elements X = O, Mg, Si, Ca they report positive gradients with Galactic radius. These gradients are summarised in Table <ref> column 1. In column 2 of this table, we report the present-day abundance gradients we calculate with APOGEE. We find good agreement with the <cit.> analysis with the exception of a few elements. We note that the [Mg/Fe] and [Al/Fe] present-day abundance gradients are opposite in sign compared to <cit.> gradients. However, the gradients for these two elements are very shallow. Some differences are not unexpected as we use the ASPCAP abundances from APOGEE and the <cit.> paper uses a data-driven approach to report calibrated abundances that these gradients are based on. Similarly, we report the present-day gradients in GALAH (column 4) for the low-alpha stars (adopting cuts). Note that the GALAH present-day gradients are over a restricted radius range, compared to APOGEE. Again there are some differences, and the GALAH gradients are shallower than APOGEE gradients.
The present-day element abundance gradients with radius in columns 1, 2, and 4 of Table 1 serve as a comparison to our calculated birth radius gradients (column 3). We find that the GALAH birth radius gradients are steeper than both the GALAH present-day local gradient (column 4) and APOGEE present-day gradients (with wider present-day radius range; column 1 & 2). The magnitude of the change in gradients varies with elements.
We can therefore infer from our comparisons between columns 1 and 3 that gradients between elements and radius flatten over time. The element [Fe/H] shows the steepest gradient of -0.067 dex/kpc across birth radius. This flattens by on the order of 13 percent, to -0.058 dex/kpc, from birth to present-day radius, well in agreement with recent theoretical predictions <cit.>. The elements [X/Fe] where X = O, C, Mn, and Al all have the next steepest gradients, from -0.021 dex/kpc to 0.029 dex/kpc with birth radius. These flatten by between ≈ 0.02-0.03 dex/kpc such that the present-day gradients for these elements vary between ≈ -0.014-0.002 dex/kpc. We also note that some of the gradients change sign between birth and present-day radius (i.e. C, Mg, Ca, and Y). A similar flattening of [X/H] radial gradients over time is also observed in <cit.>, in which they used an empirical approach from <cit.> to derive R_birth estimates for APOGEE DR17 red giant stars based on their age and [Fe/H].
The individual abundances of stars as a function of birth radius record the star-forming environment at that location and time in the disc. A recent study by <cit.> employed chemical evolution modeling <cit.> to use ages and individual abundances of GALAH stars to infer environmental parameters (i.e. high-mass slope of the IMF (α_IMF), number of SN Ia exploding per solar mass over 15 Gyr (log_10(SNIa))). Their analysis assumed a link between using small bins in [Fe/H]-[Mg/Fe]-[Ba/Fe]-age for the chemical evolution model, as representative of linking to the interstellar medium conditions at different birth radii. They subsequently examined the model parameter gradients across present-day radius. They found that the abundances give rise to a gradient in the high-mass end of the disc's initial mass function. They report that this is more top-heavy towards the inner disc, and more bottom-heavy in the outer disc. Using our birth radius assignment, it would be possible to directly infer the environmental parameters as a function of birth radius and compare the conditions at different birth places and times in the star-forming disc directly.
§ CONCLUSION
This work examines the distribution of individual abundances [X/Fe] of elements C, O, Mg, Al, Si, Ca, Mn, Y, and Ba for disc stars at different birth radii. To do this, we assumed seven birth radius tracks across the [α/Fe] vs. [Fe/H] plane of ∼ 59,000 GALAH DR3 disc stars and assigned each star a birth radius. This formalism is based on the NIHAO-UHD simulations <cit.> (see Figures <ref> and <ref>). We emphasize that our adopted model of birth radius is not calibrated to quantitatively map a location in the [Fe/H]-[α/Fe] plane to the birth radius. Rather, this serves as a tool to trace the element abundance and age distribution of stars across the disc from their origin. Via this approach, we can map variations in time of birth and in individual channels of enrichment to differences in the star-forming environment over time and radius. Below we summarize our main results:
* The R_birth distribution as a function of age supports an inside-out growth for the Milky Way disc (Figure <ref>). There is a larger mean value in R_birth for the younger population (i.e., ∼10 kpc) compared to the older population (i.e., ∼4 kpc). This result is consistent with a number of earlier studies (e.g. ).
* The R_birth distribution dispersions change with age as well, i.e., the median absolute deviation changes from 0.8 kpc to 1.5 kpc going from older to younger stellar populations, as the Milky Way disc grows with time and therefore has star formation over a larger region.
* There is a clear progression in the median [X/Fe] trend with R_birth: Mg, Si, Ca, Mn, and Al all decrease, while C, O, Y, and Ba all increase with increasing R_birth.
* For the low-α population, the abundance gradients are steeper in birth radius compared to present-day radius. The [Fe/H]-R_birth gradient measures -0.067±0.0002 dex/kpc compared to the [Fe/H] present-day gradient of -0.058 dex/kpc. The [O/Fe] abundance is the next strongest indicator of R_birth; it exhibits the steepest [X/Fe]-R_birth slope of the [X/Fe] measurements (see Table <ref>) and is 0.029±0.0002 dex/kpc in R_birth and 0.002 dex/kpc in present-day radius.
We tested two other birth radius assignments based on stars' location on the [α/Fe] vs. [Fe/H] plane, but neither returns physically plausible [X/Fe] distributions across radius. Furthermore, our model adopted from the simulation gives sensible results that are aligned with expectations. For example, according to radial migration, we expect birth radius gradients to be steeper in the past, which we find. Therefore, the adopted model for birth radius appears physically plausible and presumably gives insight into the relative distribution of individual abundances across the disc as it formed. Our model uses no direct information about the present-day radius and is therefore also a useful comparison to models that do assume a relationship with the present-day radius.
In summary, aided by tracks inspired by a cosmological hydrodynamical simulation of a Milky Way-like galaxy and assumptions constrained to the [α/Fe] vs. [Fe/H] plane, we are able to recover the inside-out growth of the Milky Way disc and the spatial evolution in its chemical abundance distributions. This work serves as a proof of concept of the legitimacy of this modeling approach, and in future it can be applied to additional large spectroscopic survey data. This includes data that covers a larger area of the disc, such as SDSS-V Milky Way Mapper. In addition, chemical evolution modeling would add another dimension in investigating the validity of these assignments (e.g. ). Nonetheless, this work shows that assigning birth radii to stars in the Milky Way and studying the element abundance distributions over time and birth place is very promising. This is demonstrative of the utility of using ensembles of individual abundances to trace the formation of the Milky Way disc.
§ ACKNOWLEDGEMENTS
AC acknowledges support from the Science and
Technology Facilities Council (STFC) [grant number
ST/T000244/1] and the Leverhulme Trust.
TB's contribution to this project was made possible by funding from the Carl Zeiss Stiftung.
§ DATA AVAILABILITY
The GALAH DR3 data used in this article are available at https://www.galah-survey.org/dr3/the_catalogues. The APOGEE DR17 data used in this article are available at https://www.sdss4.org/dr17. Simulation data from the NIHAO-UHD project is available at https://tobias-buck.de/#sim_data. Other data used in this article can be made available upon reasonable request to the corresponding authors.
§ ELEMENT ABUNDANCE VS. EFFECTIVE TEMPERATURE
We include here the element abundance [X/Fe] vs. effective temperature plot to show that there is no strong trend associated with [X/Fe] and T_eff; see Figure <ref>.
|
http://arxiv.org/abs/2307.05327v1 | 20230711151910 | Conservative binary dynamics from gravitational tail emission processes | [
"Gabriel Luz Almeida",
"Alan Müller",
"Stefano Foffa",
"Riccardo Sturani"
] | gr-qc | [
"gr-qc",
"hep-th"
] |
|
http://arxiv.org/abs/2307.04053v1 | 20230708220300 | How is Fatherhood Framed Online in Singapore? | [
"Tran Hien Van",
"Abhay Goyal",
"Muhammad Siddique",
"Lam Yin Cheung",
"Nimay Parekh",
"Jonathan Y Huang",
"Keri McCrickerd",
"Edson C Tandoc Jr.",
"Gerard Chung",
"Navin Kumar"
] | cs.CL | [
"cs.CL"
] |
How is Fatherhood Framed Online in Singapore?
The proliferation of discussion about fatherhood in Singapore attests to its significance, indicating the need for an exploration of how fatherhood is framed, aiding policy-making around fatherhood in Singapore. Sound and holistic policy around fatherhood in Singapore may reduce stigma and apprehension around being a parent, critical to improving the nation's flagging birth rate. We analyzed 15,705 articles and 56,221 posts to study how fatherhood is framed in Singapore across a range of online platforms (news outlets, parenting forums, Twitter). We used NLP techniques to understand these differences. While fatherhood was framed in a range of ways on the Singaporean online environment, it did not seem that fathers were framed as central to the Singaporean family unit. A strength of our work is how the different techniques we have applied validate each other.
Keywords: fatherhood, singapore, social media
§ INTRODUCTION
Fatherhood is now an unprecedentedly visible cultural phenomenon in Singapore. This increased attention is related to the inaugural nationwide fatherhood movement, Dads for Life, the continual development of parenting magazines and the recent emergence of fatherhood blogs within the Singapore internet sphere. In recent times, various fatherhood-related initiatives in Singapore have collaborated with government agencies, business corporations, and community organizations on initiatives to create awareness of the importance of the father’s role, develop commitment to good fathering, and encourage fathers to spend time with their children. In Singapore, the introduction of paternity leave and encouragement for fathers to play a bigger role in childcare and child-raising suggest that the government is sympathetic to the pursuit of gender equality. However, there is a gap between the perception of the importance of fathers and the actual involvement of fathers in their children’s lives. In addition, the role of fathers continues to be recognized primarily as that of a breadwinner. Yet fathers want to do more and experience parenthood as a very fulfilling experience, to which they are highly committed <cit.>. The proliferation of discussion about fatherhood in Singapore attests to its significance as a commercial, ideological, and cultural subject, indicating the need for an exploration of how fatherhood is framed, aiding policy-making around fatherhood in Singapore. While there has been research around how fatherhood is framed in the Singapore context, there is limited analysis of how fatherhood is framed on social media, news outlets, or online forums. Such platforms are where opinions or news on fatherhood are forwarded, people get parenting information, or get quick answers to fatherhood questions. Studying how fatherhood is framed in the online Singaporean context is central to crafting progressive and effective policy around parenting in Singapore, as well as managing the media landscape. Sound and holistic policy around fatherhood in Singapore may reduce stigma and apprehension around being a parent, critical to improving the nation's flagging birth rate. Policies developed in Singapore around fatherhood may then be implemented in nearby East Asian countries, which have similarly low birth rates, to mitigate a rapidly aging society and a shrinking taxpayer base. In this paper, we demonstrate how fatherhood in Singapore is framed on multiple online platforms (news outlets, parenting forums, Twitter). Our main research question (RQ) is as follows: How is fatherhood in Singapore framed on various online platforms? Our findings suggested that while fatherhood was framed in a multiplicity of forms online, it did not seem that fathers were core to the family.
§ RELATED WORK
Fatherhood Framing Online
Work on fatherhood in Singapore is limited. Recent work proposed the concept of Confucian masculinity to explain how the depiction of active fatherhood reinforced the ubiquitous normal family that upholds patriarchal ideology and perpetuates patriarchal power, obscuring the contradictions of class, race, and sexuality that exist in Singapore <cit.>. Other work examined the fatherhood discourses in new dad ads; feature articles from Today’s Parents, a parenting magazine; articles from Life Dads, a government electronic newsletter on fatherhood; and blog entries from three fatherhood blogs <cit.>. The study employed critical discourse analysis, and proposed a Hegemonic Fatherhood Discourse Schema to postulate that the new father/man and traditional father/man ideology is the hegemonic fatherhood in Singapore, ultimately serving the interests of the Singapore state. While past work detailed framing around fatherhood in Singapore, previous research did not compare framing across online platforms, or provide an overview of fatherhood framing to develop policy or informational tools. While there was limited fatherhood research in the Singapore context, there was relatively more research on fatherhood framing online in other contexts. For example, recent work <cit.> used discussion threads from two Web-based parenting communities, r/Daddit and r/PreDaddit from Reddit. Results demonstrated that men used web-based communities to share the joys and challenges of the fatherhood experience.
§ DATA AND METHOD
Data We first selected three content experts who had published at least ten peer-reviewed articles in the last three years around fatherhood. We ensured the content experts were either from Singapore or conducted research on fatherhood/parenthood in Singapore. Given the wide disciplinary focus of fatherhood research, we sought to select a range of experts across disciplines. We recruited one expert from each of these disciplines: public policy, social work, and computational social science. Selecting experts from a range of fields allows results to be contextualized to fields where fatherhood research is concentrated, allowing for findings to be drawn on by stakeholders in public policy, social work, and computational social science. The content experts separately developed lists of online platforms most relevant to fatherhood in Singapore. Each expert developed a list of ten platforms independently, and we selected only platforms common to all three experts' lists. For each online platform, experts also provided up to 10 examples, where applicable, of websites or forums, and we selected examples common to all experts' lists. The final list of platforms is as follows: Singapore news outlets (Straits Times, Channel NewsAsia, TODAYonline), parenting forums (singaporemotherhood.com, singaporeparents.com.sg/forum, forums.hardwarezone.com.sg/threads/welcome-to-hwzs-parenting-kids-early-learning-forum.5684416, mummysg.com/forums), Twitter (filtering only posts related to Singapore). Examples of platforms not selected: Facebook, Instagram, Reddit, LinkedIn. We were not able to collect Facebook and Instagram data as there was limited support for CrowdTangle, the main mode of Facebook/Instagram data collection. Similarly, the pushshift.io Reddit API had limited support and the Reddit data collected was incomplete. LinkedIn had limited fatherhood posts, and posts were mostly centered on non-family content. To capture fatherhood-related text on these platforms, we used queries based on a related systematic review, e.g., father* OR dad* OR patern* OR paternal OR paternity OR stepdad* OR stepfather* OR step-dad* OR Step-father* OR papa. We used only English-language keywords as most of the discussion in the Singapore internet environment is in English. English is also the major language of communication in Singapore. For forums, we used automated scraping techniques (Beautiful Soup) to obtain forum posts from 2010 to 2023, with the same set of keywords. We ran a search querying the keywords in the title of the forum post or replies to the forum post. We collected all posts that contained these keywords within the forum posts and replies. Regarding Twitter, we used the Twitter API and the indicated keywords to collect tweets from 2011 to 2023. Finally, for news articles, we used Nexis to obtain news archives from 1992 to 2023. To prepare the data for analysis, English stop words such as the, a, an were removed, along with abbreviations, and terms were stemmed using Porter’s stemming algorithm. Stemming converts words with the same stem or root (e.g., innovative and innovator) to a single word type (e.g., innovate). We organized data into three streams for analysis: Twitter (tweets), news (news articles), forums (forum posts).
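The keyword query above can be applied as a simple pattern match on each scraped title, post, or reply; the sketch below is one plausible implementation of that filter and is not taken from the study's actual code.

import re

# keyword query adapted from the systematic-review search string described above
FATHER_PATTERN = re.compile(
    r"\b(father\w*|dad\w*|patern\w*|stepdad\w*|stepfather\w*|"
    r"step-dad\w*|step-father\w*|papa)\b",
    flags=re.IGNORECASE,
)

def is_fatherhood_related(text):
    """True if a post, tweet, or article title/body matches the fatherhood query."""
    return bool(FATHER_PATTERN.search(text or ""))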
Sentiment
Sentiment analysis can aid us in comprehending how sentiment around fatherhood is expressed in the online arena. As an example, forums may be more likely to have lower sentiment compared to news. DistilBERT was used for sentiment analysis, applied separately to the data from each platform. The model assigns a sentiment score to each article or post on a -1 to 1 scale, where values <0 are negative sentiment, values >0 are positive sentiment, and values close to 0 are neutral. To stay within the permitted input size of the model, the text length (title + body text) was clipped to 512 tokens.
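A minimal sketch of this step using the Hugging Face pipeline API follows; the SST-2 DistilBERT checkpoint named in the code is an assumption for illustration, since the exact fine-tuned checkpoint is not specified above.

```python
# Sketch of the sentiment step: a DistilBERT classifier applied per article/post,
# with the input clipped to 512 tokens and the label mapped to a [-1, 1] score.
# The SST-2 checkpoint below is an assumed stand-in for the unspecified model.
from transformers import AutoTokenizer, pipeline

CHECKPOINT = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
classifier = pipeline("sentiment-analysis", model=CHECKPOINT, tokenizer=tokenizer)

def sentiment_score(title: str, body: str) -> float:
    """Clip title + body to 512 tokens and return a signed score in [-1, 1]."""
    ids = tokenizer.encode(title + " " + body, truncation=True, max_length=512)
    clipped = tokenizer.decode(ids, skip_special_tokens=True)
    result = classifier(clipped)[0]
    return result["score"] if result["label"] == "POSITIVE" else -result["score"]

print(sentiment_score("Dads for Life",
                      "Being a father and raising children is one of the most fulfilling experiences."))
```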
Emotion Recognition
Emotion recognition can help us understand how emotions are expressed across various platforms, indicating differences in how fatherhood is framed in Singapore. For example, forums may be more likely to contain anger compared to news. We used DistilBERT for emotion recognition, applied separately to the data from each platform. The model assigns one of six emotions (anger, fear, joy, love, sadness, surprise) to each article or post. To stay within the permitted input size of the model, we clipped the length of the text (title + body text) to 512 tokens.
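A sketch of this step is shown below. The named checkpoint is one publicly available DistilBERT model fine-tuned on the six emotions listed above; treating it as the model used here is an assumption, since the study does not name its exact checkpoint.

```python
# Sketch of the emotion-recognition step with a DistilBERT emotion classifier.
# Long inputs are clipped to 512 tokens as in the sentiment step.
from transformers import pipeline

emotion_clf = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-emotion",  # assumed checkpoint
)

posts = [
    "Happy Father's Day daddy, love you so much!",
    "My husband is totally against the idea of employing a helper.",
]
for post in posts:
    top = emotion_clf(post)[0]   # highest-scoring emotion for this post
    print(f"{top['label']:>8s} ({top['score']:.2f})  {post}")
```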
We provided an overview of the data in Table <ref>. Two reviewers independently examined 10% of the articles or posts within each dataset to confirm salience with our research question. The reviewers then discussed their findings and highlighted items deemed relevant across both lists. We noted the following relevance proportions: News outlets (82%), Twitter (90%), Parenting forums (78%).
§ RESULTS
Overview
We first explored sample posts across platforms. News outlets generally mentioned fatherhood in the context of providing demographic data about interviewees, with excerpts such as So the 40-year-old eye specialist and father of three had to wrap up his work at the hospital quickly, or when interviewees were referring to their fathers with no specific reference to fatherhood e.g., Mr Lee, whose father founded the clan association, rents out its third floor to a small media firm. Broadly, news outlets did not seem to focus on the experience of fatherhood, with the bulk of articles mentioning fathers as a demographic indicator. Twitter posts focused on people recounting incidents, often humorous or heart-warming, with their fathers e.g., My dad was telling me something serious and he hit his leg against the table and I burst out laughing so he had no choice but to laugh, Dad brought back homemade fresh horfun (noodles) from the temple. It's delicious. Twitter seemed to have a greater focus on fathers playing a core function in the Singapore family unit. Posts from forums were very diverse topically. Several posts were about hiring a helper for a young child: My husband is totally against the idea of employing a helper, as he does not like a stranger living with us; I am a father of a newborn baby girl. I recently engaged a confinement lady by the name of Auntie Judy. Such posts suggest the significant role domestic helpers play in the Singaporean family, and how a portion of a father's role is perhaps to oversee the hiring of the domestic helper. Other posts were about suspected infidelity e.g., So my Wife of 2 years has been cheating on me with another male colleague, perhaps indicative of the strain parenting is related to within some Singaporean families.
We then provided word clouds in Figure <ref> as an overview of the data. Across all datasets, words such as time, work, now were prominent, perhaps indicative of how work and likely limited time are central to fatherhood in Singapore. Most common trigrams for news articles centered on leaders of Singapore, who were father and son: Lee Kwan Yew and Lee Hsien Loong. This may indicate that the mainstream news media discussion around fatherhood had little to do with fathers' role in a family, but simply around familial relationships within major news stories. In 1992 - 2003, common trigrams in the news were engineer success story and pressure parent counting. From 2004 - 2019, common trigrams were two baby boy, first new baby, and first time parent. From 2020 - 2022, common trigrams were generation grit family, and grit family love. Broadly, news trigrams may detail how the initial focus was on children bringing pride and wealth to their families, with a transition toward celebrating new births. In more recent years, forums tended to focus on how the family unit could overcome struggles. The most common trigrams in Twitter focused on celebrating fathers through specific events such as Father's Day and birthdays: happy father's day, happy birthday daddy. Such phrases indicated that Twitter may be used to celebrate fathers, but only in relation to pre-defined events, instead of fathers being celebrated for time put toward caregiving etc. Common trigrams in 2011 - 2020 were love u dad, dad love love. 2021 onwards, popular trigrams were feel fulfilling husband, and last nite daddy. Twitter data demonstrated a shift from declaring love for one's father, to fathers indicating how they were fulfilled in their role. Unlike other datasets, there appears to be a shift towards a more active form of fatherhood in Singapore, where fathers describe pride in their role. Trigrams in forums centered on perceived marital infidelity, such as wife unfaithful husband, and assisted reproductive technologies, such as ivf mommy toben, and cousin egg donor. Forums seemed to be platforms where people sought support around spousal infidelity and assisted reproductive technologies, rather than discuss fathers' role in the family unit. The most common trigrams in forums changed over time, with phrases such as gave birth daughter, and first time dad in 2010 - 2019, but with phrases such as happen file divorce, and judged urged divorcing in 2020. In 2021, common trigrams were conceiving single women, while in 2022, trigrams such as crave physical intimacy, and physicial intimacy normal were popular. Forums, while initially around celebrating birth, may have become places where people sought information around divorce, assisted reproductive technologies, and physical intimacy. Broadly, descriptive data indicated shifting framing around fatherhood, but a limited focus on fathers as core to the Singapore family.
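The trigram counts reported above can be reproduced with a simple n-gram counter over the preprocessed token lists; a minimal sketch follows, with made-up example documents standing in for the real corpora.

```python
# Minimal trigram counter of the kind behind the descriptive statistics above,
# assuming `docs` already holds preprocessed (stemmed, stop-word-free) token lists.
from collections import Counter

def trigrams(tokens):
    return zip(tokens, tokens[1:], tokens[2:])

docs = [
    ["happi", "father", "day", "love", "u", "dad"],
    ["happi", "birthday", "daddi", "love", "u", "dad"],
]
counts = Counter(tri for doc in docs for tri in trigrams(doc))
print(counts.most_common(3))
```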
Sentiment
We presented sentiment analysis results across each platform in Table <ref>. News and Twitter had higher proportions of positive sentiment (53.7% and 57.0% respectively) compared to forums (27.2%). Forums had the highest proportion of negative sentiment (65.9%), compared to news and Twitter (43.8% and 33.8% respectively). We then presented sentiment analysis results over time for each platform in Figure <ref>. News data exhibited several fluctuations but had the greatest rise in positive sentiment post-2009. The nationwide fatherhood movement, Dads for Life, started in 2009, may explain the increase in positive sentiment. Examples of news article content with positive sentiment were as follows: A group of prominent figures from various organisations and businesses have banded together to start up the Fathers Action Network. The network aims to kick-start a movement called Dads for Life to get fathers more involved with their families, especially in their childrens' lives. This follows a fatherhood perception survey conducted in April and May this year by a Ministry. Most felt that being a father and raising children is one of the most fulfilling experiences a man can have.; Work is work and family is family. Our ultimate goal is still our family. Work is just a means to get the money so we should be very clear about it. And that is the sort of spirit that the Dads for Life movement wants to inspire. After 2017, positive sentiment declined over time, and was overtaken by negative sentiment. Forums had broadly negative sentiment 2015 onward, reaching a peak in 2017, followed by a steady decline. Twitter exhibited mostly positive sentiment 2013 onward with a steady decline after. We suggest that the high proportion of positive sentiment in the news may be related to governmental initiatives and the high proportion of negative sentiment in forums may be related to a more frank discussion of the stresses of parenting.
Emotion Recognition
We presented emotion recognition results across each platform in Table <ref>. News had the highest proportion of joyous (61.3%) and loving (34.2%) posts, perhaps reflecting governmental initiatives around fatherhood. While Twitter and forums had similar levels of joyous posts (56.6% and 44.2% respectively), they were still not as high as news. Similarly, loving posts on Twitter and forums (2.4% and 4.1% respectively) were far lower than news outlets. We suggest that the emotion in the news reflects pro-fatherhood governmental initiatives, but these do not always filter successfully to other media. We then presented emotion recognition results over time for each platform in Figure <ref>. News data exhibited several fluctuations but had the steepest rise post-2009. Dads for Life, started in 2009, may explain the uptick in news articles, especially around joy. Examples of news article content that were coded as joy: It's a happy Father's Day for SAFRA, as it is set to receive funds from the "Dads for Life" movement to pump up father-friendly activities for its members over the next two years.; He will be running alongside his daughter in the Dads For Life 800m Father and Child Challenge, a new category in the annual SAFRA Singapore Bay Run and Army Half-Marathon. Mr Shariff, who was born without part of his left leg, said: I signed us up because I want to show her how running can make her happy. Both Twitter and forum posts saw a sudden spike post-2013 onward, mostly around joy. We suggest that the shift in emotion may be due to a delayed reaction to Dads for Life. Broadly, we forward that the 2009 Dads for Life movement and other similar policies may have catalyzed emotional reactions around fatherhood in the Singapore online arena. However, the rises in emotion were not sustained and seemed to decline by 2023, perhaps indicative that new policy levers may need to be rolled out.
§ DISCUSSION
Our RQ was to explore how fatherhood in Singapore is framed on various online platforms. A strength of our work is how the different techniques we applied validate each other as well as reveal differences across platforms. While fatherhood was framed in a range of ways on the Singaporean online environment, it did not seem that fathers were framed as central to the Singaporean family unit. Results also indicated that governmental initiatives may have some effect on altering the framing of fatherhood, but are not lasting in effect. The concordance in our results suggests the veracity of our findings and we hope that results can add to research and policy around fatherhood in Singapore. Our evidence adds to previous research, where we provided data on how governmental initiatives may initially buttress framing around fatherhood, but needs to be sustained to provide broad and lasting support for fathers. Key to how fatherhood is framed in Singapore is the inclusion of fathers' viewpoints when writing news articles on fatherhood. Where possible, fathers themselves should be consulted on articles about fatherhood. For example, a panel staffed by fathers can comment on fatherhood-related online news articles, providing suggestions on how articles can more accurately represent fathers' concerns <cit.>. Our findings relied on the validity of data collected with our search terms. We used a range of established techniques to search for all articles/posts relevant to fatherhood, and our data contained text aligned with how fatherhood is framed. We were thus confident in the comprehensiveness of our data. We only used English-language text but will include other languages in future work. Given the token limits for the emotion recognition technique, we were not able to use emotion recognition for the entirety of longer news articles. We note that the recall of the search string was not tested. We note that our data may not be generalizable to how fatherhood is framed globally. Our goal was not to identify who was doing the framing around fatherhood e.g., family members or government. Future studies will seek to identify which stakeholders were likely involved in the framing.
|
http://arxiv.org/abs/2307.04595v2 | 20230710143631 | Singling out SO(10) GUT models using recent PTA results | [
"Stefan Antusch",
"Kevin Hinze",
"Shaikh Saad",
"Jonathan Steiner"
] | hep-ph | [
"hep-ph"
] |
Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel, Switzerland
In this work, we construct promising model building routes towards SO(10) GUT inflation and examine their ability to explain the recent PTA results hinting at a stochastic gravitational wave (GW) background at nanohertz frequencies. We consider a supersymmetric framework within which the so-called doublet-triplet splitting problem is solved without introducing fine-tuning. Additionally, realistic fermion masses and mixings, gauge coupling unification, and cosmic inflation are incorporated by utilizing superfields with representations no higher than the adjoint representation. Among the three possible scenarios, two of these cases require a single adjoint Higgs field, and do not lead to cosmic strings. In contrast, the third scenario featuring two adjoints, can lead to a network of metastable cosmic strings that generates a GW background contribution compatible with the recent PTA findings and testable by various ongoing and upcoming GW observatories.
Singling out SO(10) GUT models using recent PTA results
Jonathan Steiner
February 2023
=========================================================
Introduction:–
Global collaboration among pulsar timing arrays (PTAs) (NANOGrav <cit.>, PPTA <cit.>, EPTA <cit.>, and IPTA <cit.>) previously revealed evidence of common-spectrum noise at nanohertz frequencies. Recent analysis, including CPTA <cit.>, EPTA <cit.>, NANOGrav <cit.>, and PPTA <cit.>, identified spatial correlations (Hellings-Downs effect <cit.>), providing strong support for a stochastic gravitational-wave background (SGWB). Although the mergers of supermassive black hole binaries (SMBHBs) are natural astrophysical sources of the SGWB at nanohertz frequencies, the new data somewhat disfavors SMBHBs in explaining the observed PTA SGWB signal <cit.>. Therefore, the SGWB likely points toward new physics beyond the Standard Model (SM). One of the explanations that fits well with the data is a metastable cosmic string network (CSN) <cit.>. Since such cosmic strings (CSs) can arise from the multi-step spontaneous breaking of the symmetry group of a Grand Unified Theory (GUT) after cosmic inflation, this raises the question of what can be learned about GUTs from this finding.
GUTs <cit.>, combined with SUSY, offer an appealing framework for a more fundamental theory beyond the SM of elementary particles. GUTs unify the three fundamental forces of the SM, while SUSY provides a natural solution to the gauge hierarchy problem and a potential weakly interacting dark matter candidate when R-parity or matter-parity ensures its stability. SO(10)-based GUTs are particularly interesting as they unify all SM fermions of each family into a single irreducible 16-dimensional representation. This 16-dimensional representation also includes a SM singlet right-handed neutrino, which, through the type-I seesaw mechanism <cit.>, generates tiny masses for the SM neutrinos.
Promising GUT models must satisfy proton decay bounds and achieve successful gauge coupling unification. In SUSY GUT models, the d=5 proton decay operators are induced by color-triplet exchange, necessitating the superheavy nature of color-triplet states compared to their doublet partners, known as the doublet-triplet splitting (DTS) problem <cit.>. A desirable GUT model should solve the DTS problem without fine-tuning parameters. Since GUTs generate the Yukawa matrices out of joint GUT operators, leading to
constraints on the flavor structure, a further challenge
consists in realizing viable fermion masses and mixings.
Cosmic inflation <cit.> that solves the horizon and flatness problems of the standard Big Bang cosmology, and explains the origin of structure formation of the observable Universe, could have a deep connection to SUSY GUT models. In addition to the similarity of the scales of inflation and gauge coupling unification, inflation is also crucial to dilute away unwanted topological defects <cit.> like monopoles which generically form at some stage of GUT symmetry breaking. Furthermore, supersymmetric theories typically possess many flat directions, providing an attractive framework for realizing inflation. While monopoles have to be diluted by inflation, other topological defects, like (metastable) CSs <cit.> that form after inflation can leave an observable signature in the SGWB.
In this work, we explore supersymmetric SO(10) GUTs that naturally solve the DTS problem, generate realistic fermion masses, and achieve successful gauge coupling unification and inflation. We focus on lower-dimensional field representations and investigate scenarios with Higgs fields no higher than the adjoint representation. Three promising routes for SO(10) GUT model building are identified: two cases use a single adjoint Higgs field, while the third scenario requires two copies. In the latter case, the intermediate symmetry contains two Abelian factors crucial for CSN formation. For the first time, we construct a realistic SUSY SO(10) GUT scenario (particularly the third scenario) that satisfies the mentioned criteria and leads to metastable CSs capable of explaining the recent PTA results for a stochastic GW background at nanohertz frequencies.
SO(10) model building:–
Two major guiding principles in building realistic models in our framework are the natural DTS <cit.> (see also <cit.>) and employing smaller dimensional representations. In achieving this, we utilize 45_H and 16_H+16_H Higgs representations to break the GUT symmetry down to the SM, which is subsequently broken by 10_H (and possibly by 16_H+16_H). The fundamental representation contains weak-doublet and color-triplet states,
10_H =(2_H+3_H)+(2_H+3_H)
=(1,2,1/2)+(3,1,-1/3)+c.c..
The VEV of the adjoint, ⟨ 45_H⟩∝ iτ_2⊗diag(a_1,a_2,a_3,a_4,a_5) that breaks the GUT symmetry is expected to provide superheavy masses to both these components. With this setup, one can construct three classes of models:
* a single adjoint Higgs with ⟨ 45_H⟩∝ B-L generator,
* a single adjoint Higgs with ⟨ 45_H⟩∝ I_3R generator,
* two adjoint Higgses, one with ⟨ 45_H⟩∝ B-L generator and another with ⟨ 45_H^'⟩∝ I_3R generator.
For each model, the superpotential takes the form,
W= W_GUT-breaking+ W_Inflation+W_Mixed+W_DTS+W_Yukawa ,
where W_Inflation+W_Mixed together constitute W_Intermediate-breaking. Terms in W_GUT-breaking and W_Intermediate-breaking lead to a consistent symmetry breaking of the GUT symmetry down to the SM gauge group. Terms in W_DTS realize DTS without fine-tuning, and the W_Inflation part of the superpotential leads to an inflationary period.
∙ B-L-case: The symmetry breaking chain in this scenario is given by
SO(10) → SU(3)_C× SU(2)_L× SU(2)_R× U(1)_B-L → SU(3)_C× SU(2)_L× U(1)_Y .
The GUT scale symmetry breaking is achieved via
W_GUT-breaking ⊃m_45/2Tr[45_H^2]+λ/4ΛTr[45_H^4],
with the VEV ⟨ 45_H⟩∝ iτ_2⊗diag(a,a,a,0,0).
Note that breaking the GUT symmetry gives rise to superheavy monopoles that must be inflated away. Therefore inflation must take place after the formation of the monopoles. A straightforward option is to utilize hybrid <cit.> inflation (an alternative option is tribrid inflation <cit.>) at the last intermediate symmetry breaking stage, which we achieve via employing 16_H+16_H that acquire VEVs [As a result, the appearance of automatic R-parity from within the SO(10) group is no longer possible. However a discrete symmetry, such as a Z_2 symmetry (matter parity), can readily be imposed.] in the right-handed neutrino direction. Then the relevant superpotential term contributing to inflation takes the following form,
W_Inflation⊃κ S(16_H16_H-m_16^2),
which fixes the magnitude of the VEVs ⟨ 16_H16_H⟩ = m^2_16.
Here, S is a GUT singlet superfield, the scalar component of which plays the role of the inflaton.
Since 45_H and 16_H+16_H have component fields that share the same quantum numbers,
45_H, 16_H, 16_H⊃ (1,1,1)+ (3,2,1/6)+ (3,1,-2/3) +c.c.,
to avoid additional would-be Goldstone bosons, which would ruin gauge coupling unification, these fields must have non-trivial mixing terms. The simplest possible interaction term, 16_H 45_H 16_H, is not welcome since it would destabilize the VEV of 45_H from the desired “Dimopoulos-Wilczek form”.
To circumvent this issue, we introduce a second copy of spinorial representations, 16_H^'+16_H^', which do not acquire a VEV in the right-handed neutrino direction. Then a consistent symmetry breaking without additional would-be Goldstone bosons can be achieved via the addition of the following terms in the superpotential:
W_Mixed⊃
16_H(λ_1 45_H+λ^'_1 1_H)16_H^' +16_H^' (λ_2 45_H+λ^'_2 1^'_H)16_H.
Here, we introduced the “sliding singlets” 1^(')_H, which are assumed to have no other terms in the superpotential that could fix their VEVs. They are needed to allow for vanishing F-terms corresponding to 16_H^', 16_H^'.
Concerning DTS, remarkably, the specific VEV structure of the 45_H provides masses to only the color-triplets, while the weak-doublets remain massless, schematically
10_1H⟨ 45_H⟩ 10_2H = 0 · 2_1H 2_2H + 0 · 2_2H 2_1H + 3_1H 3_2H + 3_2H 3_1H .
However, if only the above term is added to the superpotential, then the low energy spectrum would contain four light doublets instead of the usual two doublets of the MSSM. This would spoil the successful gauge coupling unification of the MSSM. To avoid extra light states, we allow a direct mass term for 10_2H, i.e.,
10_2H10_2H= 2_2H 2_2H
+3_2H 3_2H.
Then, the terms in the superpotential relevant for providing the masses of the doublets and triplets and naturally realizing their splittings are
W_DTS⊃γ 10_1H 45_H 10_2H +m_10 10_2H 10_2H.
A crucial remark is in order. Assuming that only 10_1H couples to the fermions, the term in Eq. (<ref>) by itself does not induce proton decay. Once the term in Eq. (<ref>) is also introduced, together they allow the proton to decay via color-triplet Higgses, since now an effective mass term linking 3_1H and 3_1H can be written down after integrating out 3_2H and 3_2H. This can be understood schematically as follows:
3_1H⟨ 45_H⟩ 3_2H 3_2H m_10 3_2H 3_2H⟨ 45_H⟩ 3_1H , with the heavy 3_2H, 3_2H pairs contracted and integrated out.
With a sufficiently large effective triplet mass ∼ M^2_GUT/m_10, the d=5 proton decay is suppressed.
∙ I_3R-case: The symmetry breaking chain in this scenario is given by
SO(10) → SU(4)_C× SU(2)_L× U(1)_R → SU(3)_C× SU(2)_L× U(1)_Y ,
which is obtained by ⟨ 45_H⟩∝ iτ_2⊗diag(0,0,0,b,b). Although the W_GUT-breaking and W_Intermediate-breaking parts of the superpotential are identical to the B-L case, W_DTS takes a different form, which we discuss in the following.
Due to ⟨ 45_H⟩∝ I_3R, we now have the opposite situation compared to the previous case, namely
10_1H⟨ 45_H⟩ 10_2H = 2_1H 2_2H + 2_2H 2_1H + 0 · 3_1H 3_2H + 0 · 3_2H 3_1H .
Therefore, a different strategy must be implemented to obtain light doublets and superheavy color-triplets. By noting that
16_H^'⊃2_H^' is a SU(2)_R singlet, and, on the contrary, 16_H^'⊃3_H^' resides in a SU(2)_R doublet, one obtains a mass only for the color-triplet, and not for the weak doublet, i.e.,
16_H^'⟨ 45_H⟩ 16_H^' = 0 · 2_H^' 2_H^' + 3_H^' 3_H^' .
If only the above term is included in the superpotential, then a pair of triplets will remain massless in addition to one pair of doublets. To provide large masses to all the color-triplets, we add two more terms
W_DTS⊃
λ_316_H^' 45_H 16^'_H
+λ_4 10_H 16_H 16_H+λ_5 10_H 16_H 16_H .
As for the d=5 proton decay,
assuming the SM fermion masses are coming from their coupling to the 10_H (i.e. neglecting all contributions from the 16_H), the effective triplet mass m_T is approximately given by
m_T=-λ_3λ_4λ_5⟨ 16_H⟩⟨16_H⟩/2λ_1λ_2 ⟨ 45_H⟩.
Choosing somewhat small λ_1,λ_2 allows having m_T≳ 10^19 GeV, which is required by proton decay constraints.
∙ B-L & I_3R-case: Depending on the values of the VEVs of the two adjoints, various symmetry breaking chains may arise in this scenario, examples of which are (a) ⟨ 45_H⟩ > ⟨ 45_H^'⟩ > ⟨ 16_H⟩, ⟨16_H⟩:
SO(10) → SU(3)_C× SU(2)_L× SU(2)_R× U(1)_B-L → SU(3)_C× SU(2)_L× U(1)_R× U(1)_B-L → SU(3)_C× SU(2)_L× U(1)_Y ,
(b) ⟨ 45_H^'⟩ > ⟨ 45_H⟩ > ⟨ 16_H⟩, ⟨16_H⟩:
SO(10) → SU(4)_C× SU(2)_L× U(1)_R → SU(3)_C× SU(2)_L× U(1)_R× U(1)_B-L → SU(3)_C× SU(2)_L× U(1)_Y ,
(c) ⟨ 45_H⟩ = ⟨ 45_H^'⟩ > ⟨ 16_H⟩, ⟨16_H⟩:
SO(10) → SU(3)_C× SU(2)_L× U(1)_R× U(1)_B-L → SU(3)_C× SU(2)_L× U(1)_Y .
In this scenario, for each of the adjoints, the GUT symmetry breaking superpotential consists of the terms given in Eq. (<ref>). Since ⟨ 45_H⟩ and ⟨ 45_H^'⟩ break SO(10) to the left-right symmetry and quark-lepton symmetry, respectively, the first and the second break the generators in (3,2,+1/6)+(3,2,-5/6)+(3,1,2/3)+c.c and (3,2,+1/6)+(3,2,-5/6)+(1,1,+1)+c.c, respectively. Consequently, there would be additional massless states. To avoid such massless states, we add the following mixing term in the superpotential,
W_GUT-breaking ⊃η/Λ Tr[45_H.45_H.45_H^' .45_H^'].
As before, one requires non-trivial interactions between the spinorial representations and the adjoints to give masses to the would-be Goldstones. For the two adjoints, we now introduce two sets of additional spinorial representations, 16_H^'+16_H^' and 16_H^''+16_H^'', and add the following terms, such that the VEVs of the adjoints are not destabilized:
W_Mixed⊃
16_H(λ_1 45_H+λ_1^' 1_H)16_H^' +16_H^' (λ_2 45_H+λ_2^' 1_H^')16_H
+ 16_H(λ_3 45_H^'+λ_3^' 1_H^'')16_H^'' +16_H^'' (λ_4 45_H^'+λ_4^' 1_H^''')16_H .
For the DTS, we include the term 10_1H45_H10_2H. However, here we can construct an example model which does not lead to proton decay at leading order via d=5 operators. To this end, we forbid the direct mass term 10_2H10_2H. Instead, we include a higher dimensional operator, 10_2H. 45^' 2.10_2H, such that an effective triplet mass for 3_1H and 3_1H cannot be written down, since,
10_2H ( 45_H^') ^2 10_2H = 2_2H 2_2H + 2_2H 2_2H + 0 · 3_2H 3_2H + 0 · 3_2H 3_2H .
With the inclusion of the above two terms, still one pair of color-triplets and an additional pair of weak doublets remain massless. We cure this by adding a term of the form 16^''_H 16^'_H to the superpotential,
W_DTS⊃
γ_1 10_1H 45_H 10_2H +γ_2/Λ 10_2H45_H^' 210_2H
+ω_1616^''_H 16^'_H ,
that leads to a single pair of light doublets, as desired.
It is important to note that all the scenarios discussed above can successfully reproduce correct charged fermion masses and mixings by incorporating suitable higher-dimensional operators. The light neutrinos acquire masses through the standard type-I seesaw mechanism. The Majorana masses for the right-handed neutrinos are generated by the following higher-dimensional operator:
W_Yukawa⊃ Y_R 16_i 16_j 16_H16_H/Λ∼ Y_R v^2_R/Λν^cν^c .
Gravitational wave signals:–
In some of the models we consider, breaking e.g. a simple group into a subgroup that contains a U(1) factor leads to monopole creation. To prevent overclosing the universe, inflation must get rid of the monopoles. At some later stage, once the left-over Abelian symmetry is broken, strings appear (we assume the ideal Nambu-Goto string approximation, where the dominant radiation emission of CSs is into GWs <cit.>). If these two scales are very close, Schwinger nucleation of monopole-antimonopole pairs <cit.> on the string cuts it into pieces and makes it decay. How quickly these metastable strings decay depends on a parameter κ_m <cit.>,
κ_m= m^2/μ∼8π/g^2( v_m/v_R)^2,
where m is the mass of the monopole and v_m (v_R) is the monopole (string) creation scale. The network behaves like a stable-string network for κ_m^1/2≫ 10.
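As a rough numerical illustration of this condition, the sketch below evaluates √κ_m for a few ratios v_m/v_R; the unified coupling g ≈ 0.7 is an assumed value used only for this estimate.

```python
# Rough numerical illustration of the metastability parameter defined above:
# kappa_m ~ (8*pi/g^2) (v_m/v_R)^2. The unified coupling g ~ 0.7 is an assumed
# illustrative value. sqrt(kappa_m) ~ 8 requires nearly degenerate monopole and
# string scales, while well-separated scales give an effectively stable network.
import math

def sqrt_kappa_m(v_m_over_v_R, g=0.7):
    return math.sqrt(8.0 * math.pi / g**2) * v_m_over_v_R

for ratio in (1.0, 1.05, 1.1, 1.5, 3.0):
    print(f"v_m/v_R = {ratio:4.2f}  ->  sqrt(kappa_m) ~ {sqrt_kappa_m(ratio):5.2f}")
```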
Metastable CSNs provide an intriguing explanation for the newly released PTA data <cit.>. The data indicates string tension (μ) values in the range Gμ∼ 10^-8-10^-5 for κ_m^1/2∼ 7.7-8.3 (with a strong correlation, cf. Fig. 10 of <cit.>), consistent with CMB bounds. Notably, the 68% credible region in the Gμ-κ_m^1/2 parameter plane overlaps with the third advanced LIGO–Virgo–KAGRA (LVK) bound, while major parts of the 95% credible region are compatible, preferring Gμ≲ 10^-7 and κ_m^1/2∼ 8 <cit.>, as shown in Fig.<ref>. However, it should be remarked that the computation of the GW spectrum from metastable CSs carries significant uncertainty <cit.>. Furthermore, various possible effects are not included in the above shown GW spectrum, for instance, an extended matter domination phase after inflation <cit.> or the change of degrees of freedom below the SUSY breaking scale <cit.>. Nevertheless, observing a higher frequency SGWB signal in the next LIGO–Virgo–KAGRA rounds would be a fascinating confirmation of the scenario.
Interestingly, Gμ∼ 10^-7 corresponds roughly to v_R∼ 10^15 GeV, which is fully consistent with the type-I seesaw contribution to neutrino masses and corresponds to the right scale for inflation. On the other hand, stable CSs are disfavored by the recent PTA data[Stable cosmic strings, however, were consistent with the previous PTA data. For works on GWs, in light of NANOGrav12.5 data, arising from cosmic strings within GUTs, c.f., <cit.>.].
The first (and second) model studied, the B-L- (and I_3R-) case, leads to embedded strings, which are generally unstable <cit.>. Interestingly, all three models in the B-L & I_3R-case have the potential to produce metastable strings for nearly degenerate monopole and string formation scales: M_I∼ M_II for cases (a) and (b), and M_GUT∼ M_I for case (c). However, in case (c), a lower GUT scale ∼ 10^15 GeV would have to be arranged that requires suppression of d=6 proton decay utilizing the freedom in the Yukawa sector, which makes this case somewhat less appealing.
We like to point out that the class of promising SO(10) models we considered in this work may or may not lead to the formation of CSs, contrary to the class of models considered in <cit.>, where the appearance of CSs is unavoidable.
Before concluding, we discuss the gauge coupling unification for an example scenario that leads to metastable CSs (specifically, we choose case (a) within B-L & I_3R). To achieve metastable strings, the monopole and string formation scales must nearly coincide. Therefore, we effectively have three scales: the GUT scale, the monopole/string formation scale, and the SUSY breaking scale (fixed at 3 TeV). To simplify the analysis, we assume that the fields breaking a symmetry are degenerate with the corresponding scale, while the remaining states have GUT scale masses. This minimal number of free parameters allows us to find a wide range for the monopole/string formation scale, approximately M_I∼ M_II∼ [10^9-10^17] GeV (with 10^16 GeV ≤ M_GUT≤ 10^18 GeV and M_GUT>M_I), while still being consistent with gauge coupling unification. Our analysis considers a 1% uncertainty on the measured values of the gauge couplings to account for GUT threshold uncertainties.
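A minimal cross-check of this picture can be made with naive one-loop MSSM running; the sketch below uses standard central input values, ignores the SM-to-MSSM matching at the SUSY scale and all thresholds, and is not a substitute for the threshold analysis described above.

```python
# One-loop running of the inverse gauge couplings,
# alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i/(2 pi) ln(mu/M_Z),
# with MSSM coefficients b = (33/5, 1, -3), run naively from M_Z.
# Input values are standard central values, used only for illustration.
import numpy as np

M_Z = 91.19                                   # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.45])   # GUT-normalised U(1)_Y, SU(2)_L, SU(3)_C
b = np.array([33.0 / 5.0, 1.0, -3.0])

def alpha_inv(mu):
    return alpha_inv_MZ - b / (2.0 * np.pi) * np.log(mu / M_Z)

for mu in (1e14, 1e16, 2e16, 1e17):
    a1, a2, a3 = alpha_inv(mu)
    print(f"mu = {mu:8.1e} GeV :  {a1:5.1f}  {a2:5.1f}  {a3:5.1f}")
```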
A comprehensive analysis encompassing gauge coupling unification, fermion masses and mixings, proton decay, GW signal, and the mass spectrum of the component fields from the superpotential terms will be presented in a forthcoming publication.
Conclusions:–
We explored promising model-building routes for SO(10) GUT inflation in light of the recent PTA results suggesting the presence of a SGWB at nanohertz frequencies. Our investigation focused on a supersymmetric SO(10) framework with small dimensional representations, effectively solving the doublet-triplet splitting problem without fine-tuning. This approach enables realistic fermion masses, gauge coupling unification, and simple options for embedding cosmic inflation. Among the three model classes studied, one involves two adjoint fields capable of generating a network of metastable cosmic strings. This network generates a SGWB background contribution that can explain the recent PTA data, and will be tested by various upcoming GW observatories.
Note added: As we were completing this work, several papers appeared that also discussed the
impact of the new PTA results on new physics scenarios <cit.>.
|
http://arxiv.org/abs/2307.04125v1 | 20230709083942 | Bounced Model of Droplet on Moving Substrate | [
"Chengwu Liu"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
https://orcid.org/0000-0001-9067-1892
School of Physics, Shandong University, Jinan 250100, China.
School of Physics, Shandong University, Jinan 250100, China.
First, we derive the completely bouncing criterion Cr for a droplet on a moving substrate; the condition for bouncing without splashing is Cr>1. We then study the effect of a wind field on the droplet and obtain the completely bouncing criterion Cr_wind for a droplet in the presence of wind.
Finally, we obtain the contact angle of a droplet on the moving substrate and solve the time-independent Reynolds equation with constant ρ and μ.
Bounced Model of Droplet on Moving Substrate
Chengwu Liu
August 12, 2023
============================================
§ INTRODUCTION
The problem of a droplet on a surface is governed by the interactions at the interface.
There is a micrometer-scale gas film at the interface between the liquid and the solid. This gas film was first observed by means of snapshots <cit.>. The evolution of the gas film at the moment of contact was first observed with X-ray technology <cit.>: the gas film evolves into a bubble within microseconds. E. Sawaguchi <cit.> found that the thickness distribution of a droplet on a moving surface is similar to a saddle surface. In addition, the hydrophobicity of a droplet on a moving surface is enhanced, similarly to the Leidenfrost effect <cit.>. Therefore, the interaction between liquid and solid is affected by the motion of the surface. We discuss how these parameters affect the hydrophobicity in section 2.
Ted Mao <cit.> assumed a critical bounced state to treat the question of bouncing on a motionless surface and obtained a critical bouncing criterion E_ERE^*. However, the gas film on a motionless surface differs from the gas film on a moving surface, so the liquid-solid interaction on a motionless surface also differs slightly from the one on a moving surface. We discuss this question in section 3.
A droplet may also splash on a solid surface, and several models describe splashing <cit.><cit.><cit.><cit.><cit.>. In this paper, we discuss how extra wind affects the splashing and bouncing of a droplet in section 4.
§ DROPLET ON THE MOVING SUBSTRATE
The final states of a droplet after it impacts on a moving substrate are various. We refer to bouncing without splashing as completely bouncing, and to bouncing with splashing as partially bouncing. We observed many physical phenomena for droplets on a moving substrate in a large number of experiments (the experimental conditions are given in the supplemental material <cit.>). For instance, there is a critical speed at which the final state shifts between bouncing and retention. We study the completely bouncing condition and the critical speed in depth below.
§.§ Completely Bouncing Criteria for droplet on Moving Substrate
In this part we study the phenomena without splashing.
A droplet will spread, retract, and then bounce or remain after impacting on the moving substrate. According to the different interaction modes at the solid-liquid interface, the interface can be divided into three regions, as shown in figure 2(a). The first is the gas film (thickness h∼ 10μm); the cohesive force of the gas film acts like a normal capillary force and inhibits bouncing. The second is the solid-liquid interface, where the interaction of liquid and solid is the Van der Waals force. The third is the molecular region, where the interaction mainly includes the Derjaguin disjoining pressure Π( h) =Π_vdW+Π_EL+Π_struc.+Π_steric. The inhibition and promotion effects for bouncing in these regions are studied in detail below.
§.§.§ Inhibition Effects
Firstly, by energy conservation the maximal viscous dissipation of the droplet can be approximated as D≤ V_G=mgh∼ 10^-3 J (we take the surface of the substrate as the zero gravitational potential surface). Next, dissipation of the droplet is also caused by the microstructure on the surface of the substrate. The roughness can generally be evaluated by the size of the roll angle, and we can ignore this part of the dissipation for a small roll angle (smooth surface); indeed, we use substrates with small roll angles in our experiments (more details in the supplemental material <cit.>). Then we consider the interaction between liquid and solid in the gas film and at the solid-liquid interface. The attraction is mainly the Van der Waals force, which is a potential force and only plays an appreciable role on the micrometer scale; therefore, the vdW force between solid and liquid has no effect on bouncing. However, some gas is carried into the gas film during the falling process of the droplet. This gas film leaves the system constituted by the droplet and the solid substrate during the bouncing process, and the capillary force of the gas film dissipates the energy of the droplet. So the cohesive force of the gas film inhibits bouncing. Next, we explain this in detail.
First of all, this process is different from that of the previous section: here the pressure p of the gas film changes continuously over time, and it is relatively large. The kinetic equation can be obtained by analysing the droplet as shown in figure 2(b):
( p-p_⊖) S=md^2H/dt^2+mg
where p_⊖ is the atmospheric pressure, H is the vertical positional coordinate of the droplet, and S is the contact area of liquid and solid. From the experimental results we find that H and S change over time, so p should also change over time, as shown in figure 2. The dissipation due to the cohesive force of the gas film is considered below.
The gas film is divided into two parts g_1 and g_2, each of thickness h, as shown in figure 2(d), separated by a distance d∼ 10^-9 m (d is the intermolecular distance). Hence, the vdW potential per unit area between g_1 and g_2 is
w_g = -A/12π[ 1/d^2 + 1/( d+2h) ^2 - 2/( d+h) ^2 ] ≈ -A/( 12π d^2) for h≫ d ,
where A∼ 10^-19 J is the Hamaker constant. We assume that the thickness of the gas film changes from h_0 to h between the state of maximum spreading and the moment of bouncing, and that τ' is the time interval of this process. Then the energy dissipation during the retraction process is
Q_cap=| ∫_h_0^h-∂ w_g/∂ dSdh| =| ∫_0^τ'AS/6π d^3dh/dtdt |
Then we expand h to linear order in time and use the ideal-gas relation:
dh/dt = ( h-h_0) /t , d^3 = kT/p .
We could give the energy dissipation due to capillary force with equation <ref>.
Q_cap ≈ | ∫_0^τ' -ApS( h-h_0) /( 6π d^3 t) · S dt | = | ∫_0^τ' Ap( V-V_0) /( 6π kT t) dt | = Aν R/( 6π k) | ∫_0^τ' ( 1-p/p_0) /t dt | ,
where the second equality uses Sh=V, Sh_0=V_0 and d^3=kT/p, and the third uses pV=ν RT and p_0V_0=ν RT.
This is an improper integral. For the improper integral to converge, we must have
lim_t→ 0^+ ( 1-p/p_0) /t = lim_t→ 0^+ ( -1/p_0) dp/dt = a ⇒ dp/dt = -ap_0 .
Therefore, under the above approximation the pressure p changes linearly with time. Substituting equation <ref> into equation <ref> gives
Q_cap = Aν R/( 6π k) | ∫_p_0^p^⊖ dp/p_0 | = Aν N_A/( 6π) · ( p_0-p^⊖) /p_0 = Aν N_A/( 6π) · aτ' ∼ 10^-3 J ,
where ν is the amount of substance of the gas film and N_A is the Avogadro constant. We conclude that the energy dissipation due to the capillary force is of the same order as the gravitational potential energy, so it cannot be ignored.
Lastly, let's consider the viscous dissipation. It's so difficult to calculate that we have to estimate it. Let's consider an undemanding toy model.
Firstly, the viscous dissipation should be related to the spreading factor by some function. When a droplet impacts on a smooth motionless substrate, the viscous dissipation is related to ( d_m/D)^2 <cit.> during the spreading process and to ( d_m/D)^2.3 <cit.> during the retraction process. Hence, we associate a spring-oscillator model with two degrees of freedom with this question (two springs placed perpendicular to each other horizontally on a smooth substrate). We assume that the viscous dissipation is E_diss∼ QE_p max, where E_p max is the maximum spreading "elastic potential energy" of the first spreading and Q is similar to a quality factor.
E_diss∼α k_n ( D_n max-D_0)^2 +β k_t ( D_t max-D_0) ^2
Considering that the effect of tangential mainly due to surface tension and viscous shear stress T_ν, the effect of normal mainly due to surface tension. We could give the k_n and k_t using above model.
k_t ∼( aγ D_t max+bT_ν) /( D_t max-D_0)
k_n ∼ a' γ D_n max/( D_n max-D_0)
where α, β , a, b, a' is constant, T_ν∼η VD_n maxD_t max/δ is the viscous shear stress<cit.>, δ is the thickness of gas film estimated by LLD's law<cit.>, V is the speed of substrate, the maximum tangential spreading diameter on moving substrate<cit.> and the maximum spreading diameter on motionless substrate<cit.> are respectively D_t max/D_0∼We^1/4Ca^1/6 and D_max/D_0∼We^1/4. And the substrate speed has no effect on normal maximum spreading diameter. Therefore, D_n max∼We^1/4. Then, we could estimate the viscous dissipation with scaling law.
§.§.§ Promotion Effects
Firstly, one of the promotion effects is the initial kinetic energy E_total=1/2ρ( 1/6π D_0^3) U^2, where U is the speed at which the droplet impacts the moving substrate. Next, as shown in figure 2(c), the motion of the solid substrate causes the air to flow, because air is a viscous fluid. Hence, the pressure around the substrate is decreased by the wind field, which produces a lift force F_L∼ρ_gU_t^2h on the droplet, where ρ_g=1.185 kg/m^3 is the density of air (25℃), ρ=997 kg/m^3 is the density of water (25℃), U_t∼ 1 m/s is the wind speed around the substrate, h∼ 10^-3 m is the maximum spreading thickness of the droplet, and D∼ 10^-3 m is the initial diameter of the droplet. So we have
ρ DU^2/( ρ_gU_t^2h) ∼ 10^3 ,
so the lift force can be ignored when the substrate speed is small (U∼ 1 m/s). The situation of large wind speed is discussed in section 4.
§.§.§ Completely Bouncing Criteria for droplet on Moving Substrate
Hence, we conclude that the initial kinetic energy E_total promotes bouncing, while the viscous dissipation E_diss and the energy dissipation due to the capillary force Q_cap inhibit bouncing.
Consider an imaginary state in which the droplet just barely bounces. Then we can obtain the condition for bouncing using energy conservation. We have
E_total+mgD/2>E_Diss+Q_cap+mgD/2
So we obtain the completely bouncing criterion for a droplet on a moving substrate:
Cr = π We^3/2√(γ/( D_0ρ)) / { 2Aν N_A/( π D_0^2γ) · ( p_0-p^⊖) /p_0 + 12[ ( βWe^1/4Ca^1/6+β'D_0We^1/2Ca^1/2/l_c) ( We^1/4Ca^1/6-1) +αWe^1/4( We^1/4-1) ] }
where l_c=( γ/ρ g) ^1/2 is the capillary length of water, ρ is the density of the liquid, γ is the liquid-gas surface tension, and α, β, β' are constants to be determined. The numerator represents the effect of the initial kinetic energy, the left term of the denominator the effect of the capillary force in the gas film, and the right term of the denominator the effect of the viscous dissipation. The condition for bouncing without splashing is
Cr>1
We also obtain a completely bouncing criterion Cr_V for a droplet on a moving substrate that depends only on the substrate speed:
Cr_V=A/B+( CV^1/6+C'V^1/2) ( DV^1/6-1)
where A, B, C, C', D are independent of the substrate speed V. The droplet remains on the substrate if Cr_V≤ 1 and bounces off the substrate if Cr_V>1.
In a word, the completely bouncing criterion for a droplet on a moving substrate is related to the capillary number Ca and the Weber number We, which are the initial-state parameters of the droplet and the moving substrate, as shown in figure 2(e). This also explains the experimental observation that the final state of the droplet shifts with changing substrate speed, as shown in figure 2(f), and allows us to identify the critical speed at which the shift between bouncing and retention occurs. These results show that the droplet may undergo the transformation "retention to bounced" or "retention to bounced to retention". However, we did not consider the dissipation due to the microstructure on the surface of the substrate, which is also a complex question.
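A numerical illustration of the speed-only criterion is given below; the constants A, B, C, C', D are not determined in this section, so the values used are placeholders chosen only to show how Cr_V can cross unity at a critical substrate speed.

```python
# Illustration of the speed-only criterion Cr_V defined above. The constants are
# material- and droplet-dependent and not fixed in this section; the numbers below
# are purely illustrative placeholders. With these values Cr_V drops below 1
# between V = 10 and 30 m/s (a bouncing-to-retention transition); other choices
# reproduce the other observed patterns.
import numpy as np

A, B, C, Cp, D = 2.0, 1.0, 0.4, 0.015, 1.6     # assumed placeholder values

def Cr_V(V):
    return A / (B + (C * V**(1 / 6) + Cp * np.sqrt(V)) * (D * V**(1 / 6) - 1.0))

for V in (0.2, 1.0, 3.0, 10.0, 30.0):
    cr = Cr_V(V)
    print(f"V = {V:5.1f} m/s   Cr_V = {cr:5.2f}   -> {'bounce' if cr > 1 else 'retain'}")
```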
§ THE EFFECT OF WIND FIELD FOR DROPLET
In the discussion of the previous section, we found that the lift force due to the wind field cannot be ignored at high substrate speed; the lift force promotes bouncing. We even observed splashing of the droplet in further experiments. We therefore designed an experiment (more details are in the supplemental material <cit.>) to illustrate the importance of the wind field for the final state of the droplet, as shown in figure 3(a)(b).
We conclude from figure 3(b) that the final state of the droplet is affected by both the initial height of the droplet (We) and the speed of the wind (Ca). The state of the droplet shifts to splashing and partially bouncing when both the wind speed and the initial height increase. The state shifts to splashing when only the initial height increases. The state of a droplet with a small initial height shifts to splashing and partially bouncing when only the wind speed increases. The state can also shift from retention to completely bouncing, or from completely bouncing to retention. From figure 3(c) we conclude that the spreading factor and related quantities change with the wind speed. Next, we investigate the causes of these phenomena.
§.§ The Transition Between Retention and Completely Bouncing
Firstly, we assume that (1) all flows can be regarded as isentropic flows; (2) the boundary conditions between liquid and gas obey Navier boundary conditions, i.e., V⃗_droplet=V⃗_air; (3) the wind field is a 2D incompressible laminar flow, i.e., ∇·V⃗=0, V_z=const.
We take two circuits C_droplet and C_air around the interface between liquid and gas. The velocity circulations respectively are
Γ_droplet=∮_C_dropletV⃗_droplet·dl⃗=Γ_air=∮_C_airV⃗_air·dl⃗=Γ
These velocity circulations Γ are constant because of assumption 2. The interaction of the wind field with the droplet can be divided into vertical and horizontal components; the bouncing state mainly depends on the vertical interaction.
L⃗=ρ_aV⃗_∞×Γ⃗_air=-ρ_aΓV⃗_∞×k⃗
It can be seen from assumption 3 that
∇×L⃗=-ρ_aΓ[ ( k⃗·∇)V⃗_∞+( ∇·k⃗) V⃗_∞-( V⃗_∞·∇) k⃗-( ∇·V⃗_∞) k⃗] =0
So the lift force L⃗ is a potential force. We assume that the initial height of the droplet is h_0 and that the lift-force potential at the initial position is 0. Then the lift-force potential is
V_L( y) =-∫ -ρ_aΓV⃗_∞×k⃗·j⃗dy=-ρ_aΓ∫ V_∞xdy=ρ_aΓ∫_h_0^yV_∞xdy
So the completely bouncing criterion for the droplet is
Cr_wind = [ ρ gπ D_0^3( h_0-0.5D_0) /6 ] / [ E_Diss+Q_cap+ρ_aΓ∫_h_0^D_0/2V_∞xdy ] .
The bouncing without splashing condition is
Cr_wind>1
The final state of the droplet can shift between bouncing and retention because Γ∫_h_0^D_0/2V_∞xdy can be either larger or smaller than 0.
§.§ The Transition Between Splashing and without Splashing
Then, consider the transition between splashing and no splashing. The front of the droplet may generate a liquid finger during the spreading process. The liquid finger is affected not only by the lubrication force of the bottom gas and the attraction of the top gas <cit.>, F_L=K_lν_gV_t+K_uρ_gV_t^2H_t, but also by the wind field, as shown in figure 3(e). The effect of the extra wind field appears as a lift force F_wind,L( V), where V is the wind speed. Hence, we can introduce F_wind,L( V) into the R&G model.
β^2_Wind=F_L+F_wind,L/2γ
So the state of the droplet may undergo a transition between splashing and no splashing in the presence of the wind field.
§ THE BALANCE OF DROPLET ON THE MOVING SUBSTRATE
§.§ The Contact Angle of Droplet
The bottom of a droplet generates a very thin gas film when it impacts on a substrate <cit.>. On a moving substrate, the gas film stably takes a saddle shape <cit.>. The pressure of the gas film is p∼ 10 Pa <cit.> and the gas film thickness is δ∼ 10 μm. We can also estimate the Knudsen number of the gas film as K_n=λ/h∼ 10^-4, so the gas film can be regarded as a continuum flow. If we assume that the pressure is constant in the direction perpendicular to the moving substrate, the thickness and pressure of the lubrication gas obey the Reynolds equation:
∂/∂ x(h^3/μ∂ p/∂ x)+∂/∂ y(h^3/μ∂ p/∂ y)=6U∂ h/∂ x
where U is the speed of the moving substrate and μ is the dynamic viscosity of the lubrication gas. This equation relates the thickness distribution to the pressure distribution: given a measured thickness distribution, the pressure distribution can be computed.
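As an illustration of how the pressure distribution can be computed from a given thickness distribution, the following sketch solves the Reynolds equation numerically by finite differences. The saddle-shaped film profile, the boundary condition p = 0 on the rim, and all parameter values are assumptions made only for this example.

```python
# Finite-difference (Jacobi-iteration) sketch for the steady Reynolds equation,
# d/dx(h^3/mu dp/dx) + d/dy(h^3/mu dp/dy) = 6 U dh/dx,
# with gauge pressure p = 0 on the rim of the film. The thickness profile and all
# parameter values are assumed order-of-magnitude stand-ins for measured data.
import numpy as np

U = 1.0            # substrate speed [m/s]
mu = 1.8e-5        # dynamic viscosity of air [Pa s]
L = 2.0e-3         # lateral extent of the film [m]
N = 81
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

h = 10e-6 * (1.0 + 0.1 * (X**2 - Y**2) / (L / 2) ** 2)   # saddle-shaped film, ~10 um

a = h**3 / mu                                  # mobility h^3/mu
rhs = 6.0 * U * np.gradient(h, dx, axis=0)     # 6 U dh/dx
aE = 0.5 * (a[2:, 1:-1] + a[1:-1, 1:-1])       # face-averaged mobilities
aW = 0.5 * (a[:-2, 1:-1] + a[1:-1, 1:-1])
aN = 0.5 * (a[1:-1, 2:] + a[1:-1, 1:-1])
aS = 0.5 * (a[1:-1, :-2] + a[1:-1, 1:-1])

p = np.zeros_like(h)                           # p = 0 on the boundary (gauge pressure)
for _ in range(20000):                         # plain Jacobi sweeps
    num = (aE * p[2:, 1:-1] + aW * p[:-2, 1:-1]
           + aN * p[1:-1, 2:] + aS * p[1:-1, :-2]) - rhs[1:-1, 1:-1] * dx**2
    p[1:-1, 1:-1] = num / (aE + aW + aN + aS)

print(f"gauge pressure range in the film: {p.min():.1f} to {p.max():.1f} Pa")
```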
Then we study the contact angle on a moving surface. Since the difference between the gas film on a motionless substrate and on a moving substrate originates from the motion of the substrate, we focus on this element and analyse the question with the minimum-energy principle.
Let the area of the gas film be D and let O be the center of the area D, as shown in figure 1. We consider an infinitesimal area D_i with infinitesimal angle and radial length R_i. The area D_i is filled with air and saturated vapour, and the vapour pressure obeys the Clapeyron equation p=p_0exp(-L_v,m/RT). The molecular numbers and pressures of the two components (saturated vapour and air) obey
1=N_air/N+N_H_2O/N
, 1=p_air/p+p_H_2O/p
The mean kinetic energy of two components can be given that e̅_air=5/2kT, e̅_H_2O=3kT, if air is regarded as diatomic molecule. So we could give the internal energy of gas film
E_k=∑_i=1^n∬_D_inhe̅dσ=∑_i=1^n∬_D_i5/2ph+hp_0/2exp(-L_v,m/RT)dσ
Hence, assumed that the front of droplet has a infinitesimal virtual displacement δ R_i. Approximately, the interfacial energy between solid and gas remain unchanged because of the gas film in the solid-gas interface. So the variation in energy of system is δ E_i=(Δ L_icosθ_L γ_LG+Δ L_iγ_SL)δ R_i+δ E_ki. And a stabilized system must obey that
δ E_i/δ R_i=0
A combination of equation <ref> and equation <ref> leads to
cosθ_Li=cosθ_0-1/γ_LG[ 5/2ph+hp_0/2exp( -L_v,m/RT) ]-γ_SG/γ_LG
where p=p(R_icosθ_Δ L_i,R_isinθ_Δ L_i), h=h(R_icosθ_Δ L_i,R_isinθ_Δ L_i), i.e., p and h are the pressure and thickness at the solid-liquid interface boundary, respectively.
Hence, we can obtain the mean contact angle by integrating the contact angle cosθ_Li along the boundary:
cosθ_L=cosθ_0-1/Lγ_LG∮_L[ 5/2ph+hp_0/2exp( -L_v,m/RT) ]dl-γ_SG/γ_LG
where θ_0 is the contact angle obeying the Young Equation, L is the circumference of solid-liquid interface boundary, L_v, m is the latent heat of phase transition from liquid to gas phase. T is the temperature of gas film. The element on the right side of equation <ref> is the influence of gas film. The element on the middle of equation <ref> is the influence of moving substrate.
We conclude that the contact angle on a moving substrate is approximately 8^∘ larger than the one obeying the Young equation at room temperature. So the hydrophobicity is reinforced on the moving substrate, and the contact time <cit.><cit.>, the spreading factor <cit.>, the bouncing behaviour, etc., change with the change of hydrophobicity between the droplet and the substrate. We elucidate this in detail below.
§.§ The analytical solution for Reynolds Equation
Firstly, we could get another equation which describes the gas film on the moving surface from Reynolds transport equation:
∂ h/∂ t+∇·( h𝐮) =0, ∂ h/∂ t=0
where 𝐮 could be seen as the surface speed U𝐢+V𝐣 because of the Navier Boundary Conditions.
Then, the problem is solving the differential equations:
h∂^2p/∂ x^2+h∂^2p/∂ y^2+3( ∂ h/∂ x∂ p/∂ x+∂ h/∂ y∂ p/∂ y) =0 ,
U∂ h/∂ x+V∂ h/∂ y=0 .
Then, we order that h( x,y) =h_X( x) h_Y( y), p( x,y) =p_X( x) p_Y( y). We could get equation <ref> through bringing these to equation <ref>.
1/p_X^2d^2p_X/dx^2+1/p_Y^2d^2p_Y/dx^2+3/p_X^2p_Yh_Xdh_X/dxdp_X/dx+3/p_Y^2p_Xh_Ydh_Y/dydp_X/dy=0
Then, finding the derivative of equation <ref> with respect to x and y in turn. We can get
d^2p_X/dx^2-Cdp_X/dx+C^' p_X^2=0
dp_Y/dydh_Y/dy-Cp_Y^2h_Y/3=C^'p_Yh_Y/3
where C, C^' are constants. We can easily find the solution of equation <ref>:
h_X=C_h_1exp( -λ/Ux) , h_Y=C_h_2exp( λ/Vy)
bring them to equation <ref>, we can get
∫dp_Y/C_2^' p_Y+C_2p_Y=y
So the p_Y is
p_Y=C_2/-C_2^' +exp( -y+C_2^''/C_2)
Then, we solve the equation <ref> with series method. Considering the series solution p_X=∑_n=0^∞a_nx^n. Then, we can get
a_n+2( n+2) ( n+1) -Ca_n+1( n+1) +2C^'( a_0a_n+a_1a_n-1+⋯ +a_n/2a_n/2) =0, for even n,
a_n+2( n+2) ( n+1) -Ca_n+1( n+1) +2C^'( a_0a_n+a_1a_n-1+⋯ +a_( n-1) /2a_( n+1) /2) =0, for odd n.
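The coefficients can be generated numerically from this recurrence. The sketch below implements the power-series solution of the p_X equation above, writing the quadratic term with the full Cauchy product Σ_{k≤n} a_k a_{n-k}; the values of a_0, a_1, C and C' are illustrative inputs.

```python
# Power-series solution of the p_X equation above, p_X'' - C p_X' + C' p_X^2 = 0,
# with p_X = sum_n a_n x^n. The quadratic term uses the full Cauchy product
# sum_{k<=n} a_k a_{n-k}; a_0, a_1, C and C' are illustrative inputs.
import numpy as np

def series_coefficients(a0, a1, C, Cp, n_max=12):
    a = np.zeros(n_max + 1)
    a[0], a[1] = a0, a1
    for n in range(n_max - 1):
        conv = sum(a[k] * a[n - k] for k in range(n + 1))   # coeff of x^n in p_X^2
        a[n + 2] = (C * (n + 1) * a[n + 1] - Cp * conv) / ((n + 2) * (n + 1))
    return a

a = series_coefficients(a0=1.0, a1=0.5, C=1.0, Cp=0.2)
print("a_0..a_5 :", np.round(a[:6], 5))
print("p_X(0.3) ~", np.polyval(a[::-1], 0.3))   # evaluate the truncated series
```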
And the radius of convergence R obey that
lim_n→∞[ n+2+2C^'/n+1( a_0R^2+a_1R^3+⋯ +a_n/2R^n/2+2) ]-CR=0, n is an even.
lim_n→∞[ n+2+2C^'/n+1( a_0R^2+a_1R^3+⋯ +a_( n-1) /2R^( n+3) /2) ]-CR=0, n is an odd.
In addition, we can get equation <ref> when R=1.
lim_n→∞[ n+2+2C^' q( n/2+1) /n+1-C] ≤ lim_n→∞[ n+2+2C^'/n+1( a_0R^2+a_1R^3+⋯ +a_n/2R^n/2+2) ]-CR
≤lim_n→∞[ n+2+2C^' q^'( n/2+1) /n+1-C], n is an even.
lim_n→∞[ n+2+2C^' w( ( n+1) /2) /n+1-C] ≤ lim_n→∞[ n+2+2C^'/n+1( a_0R^2+a_1R^3+⋯ +a_( n-1) /2R^( n+3) /2) ]-CR
≤lim_n→∞[ n+2+2C^' w^'(( n+1) /2) /n+1-C], n is an odd.
where q=min{ a_0, a_1,⋯, a_n/2}, q^'=max{ a_0, a_1,⋯, a_n/2} and w=min{ a_0, a_1,⋯, a_( n-1) /2}, w^'=max{ a_0, a_1,⋯, a_( n-1) /2}.
If the equation <ref> is right, the equation <ref> and <ref> would be wrong. So the R is either ∞ or 1<R<∞.
So p( x,y) is
p=( a_0+a_1x+Ca_1-C^' a_0^2/2x^2+⋯) ( C_2/-C_2^' +exp( -y+C_2^''/C_2) ) , 1<x≤∞
So h( x, y) is
h( x, y) =h_X· h_Y=C_hexp( -λ/Ux+ λ/Vy)
§ CONCLUSION
In section 2, we show how the moving substrate affects the hydrophobicity of the droplet and we discuss the analytical solution of the Reynolds equation. We can therefore obtain the contact angle on a moving substrate analytically, although boundary conditions such as h( x, y)|_droplet boundary=H( x, y) and p( x, y)|_droplet boundary=P( x, y) are needed to obtain the full solution. In section 3, we identify promotion and inhibition effects for the bouncing question and obtain a completely bouncing criterion Cr for a droplet on a moving substrate, with which several phenomena can be predicted. In section 4, we study the effect of an extra wind field on the droplet and find that it can change the final state of the droplet. In addition, we obtain a completely bouncing criterion Cr_Wind for a droplet on a moving substrate with extra wind by introducing the lift-force potential, and we obtain the splashing criterion β^2_Wind using the R&G model.
However, we do not obtain the criterion fully analytically because of the complex viscous dissipation; we only estimate it with a simple model. We also do not consider the energy dissipation due to the roughness of the substrate, which is so important that it cannot be ignored for substrates with large roughness.
We thank Shangqian Sun, Hongwang Lu, Jingcheng Hao and Ying Ma for their support of this work.
|
http://arxiv.org/abs/2307.07433v1 | 20230714155757 | An Approximation Algorithm for Multi Allocation Hub Location Problems | [
"Niklas Jost"
] | cs.DM | [
"cs.DM",
"math.OC"
] |
[
R. Walder
August 12, 2023
===================
The multi allocation p-hub median problem (MApHM), the multi allocation uncapacitated hub location problem (MAuHLP) and the multi allocation p-hub location problem (MApHLP) are common hub location problems with several practical applications. HLPs aim to construct a network for routing tasks between different locations. Specifically, a set of hubs must be chosen and each routing must be performed using one or two hubs as stopovers. The costs between two hubs are discounted by a parameter α. The objective is to minimize the total transportation cost in the MApHM and additionally to minimize the set-up costs for the hubs in the MAuHLP and MApHLP. In this paper, an approximation algorithm to solve these problems is developed, which improves the approximation bound for MApHM to 3.451, for MAuHLP to 2.173 and for MApHLP to 4.552 when combined with the algorithm of <cit.>.
The proposed algorithm is capable of solving much bigger instances than any exact algorithm in the literature. New benchmark instances have been created and published for evaluation, such that HLP algorithms can be tested and compared on huge instances. The proposed algorithm performs on most instances better than the algorithm of <cit.>, which was the only known approximation algorithm for these problems by now.
Keywords: Hub Location Problem, Approximation Algorithm, Combinatorial Optimization
§ INTRODUCTION
Hub location problems (HLPs) frequently appear for logistics service providers. They must decide where to open depots such that different locations are connected as efficiently as possible. Often tours start with a pre-carriage milk run to collect multiple parcels in an area to bring them to a local depot or branch. These parcels are delivered to the destination branch in the main carriage. In the on-carriage, the parcels are again delivered by a milk run.
In this paper, the focus is on optimizing the main carriage step. Instead of direct transports between any pair of branches, the parcels are delivered to central warehouses, transshipment points or hubs in between. This has two main benefits: First, many parcels can be transported together, although they have different destinations. This results in consolidation effects such as lower costs and a better network structure. The second advantage is that the mode of transport can be changed and multimodal transportation can be used, which is also more efficient.
The task is to identify hubs for building an efficient transport network. The transportation costs between hubs are reduced to model the consolidation benefits of multimodal transportation. In addition, one or two hubs must be chosen as stopovers for any transport.
The hub location model has been introduced by <cit.>. Later, <cit.> made integer programming formulations for various HLPs as the p-hub median problem (pHM) or the uncapacitated hub location problem (uHLP), which are the most common HLPs. Reviews of HLPs can be found in <cit.> and <cit.>.
Many algorithms were developed for the multi allocation p-hub median problem (MApHM) to solve large-scale instances efficiently. A greedy-interchange heuristic was presented by <cit.>. Two years later, an efficient mathematical formulation was created by <cit.>. A special case where only one hub can be chosen between origin and destination was considered by <cit.>.
For the multi allocation uncapacitated hub location problem (MAuHLP) branch and bound algorithms were developed by <cit.> and <cit.>. Later, <cit.> presented a dual-ascent branch and bound heuristic.
The multi allocation hub location problem (MAHLP) is a combination and therefore a generalization of MApHM and MAuHLP.
To the best of the authors' knowledge, only <cit.> have given an approximation algorithm for these problems. They have already constructed a sophisticated approximation algorithm for the single allocation variants, resulting in a 6.35 approximation for the SApHM, 2.48 for the SAuHLP and 8.47 for the SApHLP, such that this paper focuses on the multi allocation variants. For the latter, they have shown a 3.68 approximation algorithm for the MApHM, 2.49 for the MAuHLP and 4.74 for the MAHLP.
A typical multi allocation problem is designing a transport network. Hence, this problem appears for any logistics service provider. For establishing a complex network structure with hundreds of branches and potential hubs, the discussed exact algorithms will not be able to give a solution in reasonable time due to the complexity of the problem. In this scenario it is crucial to have a faster algorithm, such as the proposed one. Further applications are telecommunication networks, postal services and aviation.
In the next section, ILP formulations of the problems are given. The reduction-based algorithm is explained in section <ref>. In section <ref> the approximation bound is proven; the proofs of the lemmata used are deferred to section <ref>. In the last section (<ref>), the quality of the algorithm's solutions is tested on several instances.
§ MATHEMATICAL MODEL
This paper focuses on the multi allocation variant of this strategic, offline problem. Unlike the single allocation variant, each delivery task can be planned individually. In the single allocation variant, each delivery task starting at the same branch must use the same first hub. As an example, consider three branches ℬ={B_1,B_2,B_3}, five hubs ℋ={H_1,...,H_5} and given delivery tasks 𝒯={(B_1,B_2),(B_1,B_3)}. Possible solutions for a given distance function d are illustrated in Figure <ref>.
In the following, a sequence of an origin branch, one or two hubs and a destination branch is called a tour for that pair of branches. In the multi allocation variant, it could be reasonable to use the tour B_1→ H_1 → H_3 → B_2 and B_1→ H_2 → H_5 → B_3 (black arrows). Alternatively, if fewer hubs should be opened, it might also be reasonable to route everything over H_1 and H_4 instead of H_3 or H_5 (red arrows). In the single allocation variant, it is not possible to connect B_1 to both H_1 and H_2. Instead, one hub must be used for both deliveries, as in the red solution. Notice that a single allocation solution is always a feasible multi allocation solution. Furthermore, an optimal multi allocation routing can be computed fast for a given set of open hubs by testing every possible combination of hubs for every tour.
In the model, a graph G=(V,E) with edge weights d_ij∈ℝ_0^+ for i,j∈ V, delivery tasks 𝒯, a set of branches ℬ⊆ V and a set of potential hubs ℋ⊆ V is given. Any vertex is a branch, a hub or both such that V=ℬ∪ℋ. The decision variable X_ijkm∈{0,1} indicates if the tour from i to m over j and k with j,k∈ℋ, meaning the tour i→ j → k → m is used. As discussed, there are consolidation effects between two hubs, such that using the connection between them is cheaper or faster. This is modeled by a given discount factor α with 0≤α≤ 1. The costs for the tour i→ j → k → m therefore are
d_ij+α d_jk+d_km.
In the example above, for a given α=1/2, the cost of any red tour is 1+1/2· 3+3=5.5 and the cost of any black tour is 1+1/2· 5+1=4.5.
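As a quick sanity check of the cost formula, the short Python sketch below recomputes the red and black tour costs for α = 1/2; no coordinates are given in the example, so the tours are identified only by their quoted leg lengths.

def tour_cost(d_branch_hub1, d_hub1_hub2, d_hub2_branch, alpha):
    # cost of the tour i -> j -> k -> m with the discounted hub-to-hub leg
    return d_branch_hub1 + alpha * d_hub1_hub2 + d_hub2_branch

alpha = 0.5
print(tour_cost(1, 3, 3, alpha))  # red tour:   1 + 0.5*3 + 3 = 5.5
print(tour_cost(1, 5, 1, alpha))  # black tour: 1 + 0.5*5 + 1 = 4.5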
In the next subsection, some restrictions on the distance function are given. For instance, d_ii=0 will be assumed ((<ref>)). Consequently, the case where only one hub is used in a tour can be modelled by setting j=k without having costs for the α d_jk term.
Furthermore, the binary decision variable Y_i indicates whether hub i is open, and only open hubs can be used for the routing. Since it is expensive to open hubs, opening them is limited. In the pHM, the number of open hubs is limited by a given integer p∈ℕ_+. The objective is to minimize the summed transportation costs. In the uHLP, the hubs have opening costs c_h_1,c_h_2,...,c_h_|ℋ|. In the pHLP the number of hubs is limited and opening costs need to be considered.
To simplify the notation, let T_b,b'=1 iff (b,b')∈𝒯 is a given delivery task and T_b,b'=0 otherwise.
The ILP variables are as follows:
* p∈ℕ: maximum number of open hubs (for the pHM and pHLP)
* c_h_1,c_h_2,...,c_h_|ℋ|: set up costs of the hubs (for the uHLP and pHLP)
* ℬ: a finite set of branches
* ℋ: a finite set of potential hubs
* d_i,j: a non negative distance function for any i,j∈ℬ∪ℋ
* τ={(B_i,B_j),....}: a set of delivery tasks
In addition, the following decision variables are used:
* Y_i∈{0,1 }: deciding if hub i is opened
* X_bhh'b'∈{0,1 }: deciding if the corresponding tour is applied
Then the ILP for the p-hub median problem is:
min ∑_(b,b')∈τ∑_h∈ℋ∑_h'∈ℋ (d_bh+α· d_hh'+d_h'b')· X_bhh'b'
s.t. ∑_h∈ℋY_h≤ p
∑_h∈ℋ∑_h'∈ℋX_bhh'b'=T_b,b' ∀ b,b'∈ℬ
X_bhh'b'≤ Y_h ∀ b,b'∈ℬ; ∀ h,h'∈ℋ
X_bhh'b'≤ Y_h' ∀ b,b'∈ℬ; ∀ h,h'∈ℋ
Y_h∈{0,1} ∀ h∈ℋ
X_bhh'b'∈{0,1} ∀ b,b'∈ℬ; ∀ h,h'∈ℋ
Constraint (<ref>) ensures that at most p hubs are opened. The existence of exactly one routing for any pair of branches in 𝒯 is ensured by (<ref>) and (<ref>). Constraints (<ref>) and (<ref>) restrict tours to open hubs only. Lastly, by constraint (<ref>), every hub is either open or closed.
In the uncapacitated hub location problem, constraint (<ref>) is not applied and the objective function is replaced by
min∑_(b,b')∈τ∑_h∈ℋ∑_h'∈ℋ (d_b,h+α· d_hh'+d_h'b')· X_bhh'b'+∑_h∈ℋY_h· c_h.
As mentioned the p-hub location problem is a generalization of both problems, such that any constraint from the pHM together with the objective of the uHLP is applied. Setting c_f=0 for any facility would result in the pHM and setting p=|ℋ| would result in the uHLP.
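To make the ILP concrete, the following sketch builds the pHM model for a tiny instance with the PuLP library; the instance data (coordinates, tasks, α, p) and all identifiers are hypothetical and only illustrate the formulation above, they are not taken from the paper.

import itertools
import math
import pulp

branches = {"B1": (0.0, 0.0), "B2": (1.0, 0.0), "B3": (0.5, 1.0)}  # assumed toy data
hubs     = {"H1": (0.2, 0.1), "H2": (0.8, 0.1), "H3": (0.5, 0.6)}
tasks    = [("B1", "B2"), ("B1", "B3")]
alpha, p = 0.5, 2
pts = {**branches, **hubs}

def d(u, v):
    return math.dist(pts[u], pts[v])  # Euclidean distances, for illustration only

prob = pulp.LpProblem("pHM", pulp.LpMinimize)
Y = {h: pulp.LpVariable(f"Y_{h}", cat=pulp.LpBinary) for h in hubs}
routes = [(b1, h1, h2, b2) for (b1, b2) in tasks
          for h1, h2 in itertools.product(hubs, repeat=2)]
X = {r: pulp.LpVariable(f"X_{i}", cat=pulp.LpBinary) for i, r in enumerate(routes)}

# objective: summed tour costs d(b,h) + alpha*d(h,h') + d(h',b')
prob += pulp.lpSum((d(b1, h1) + alpha * d(h1, h2) + d(h2, b2)) * X[(b1, h1, h2, b2)]
                   for (b1, h1, h2, b2) in routes)
prob += pulp.lpSum(Y.values()) <= p                    # at most p open hubs
for (b1, b2) in tasks:                                 # exactly one tour per task
    prob += pulp.lpSum(X[(b1, h1, h2, b2)]
                       for h1, h2 in itertools.product(hubs, repeat=2)) == 1
for (b1, h1, h2, b2) in routes:                        # tours may only use open hubs
    prob += X[(b1, h1, h2, b2)] <= Y[h1]
    prob += X[(b1, h1, h2, b2)] <= Y[h2]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("objective:", pulp.value(prob.objective))
print("open hubs:", [h for h in hubs if Y[h].value() > 0.5])

Single-hub tours are covered implicitly by the choices with h1 = h2, since d(h,h) = 0.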
The problems will be reduced to the corresponding facility location problem (FLP); namely the k-median problem, the uncapacitated facility location problem (uFLP) and the k-facility location problem (k-FLP). Then the solution of a k-median/facility location algorithm will be used as the hub location solution. To improve readability, statements about FLPs are meant to include the k-median problem as well.
§.§ Facility location problems
For a given set of cities 𝒞 and facilities ℱ, the task is to open facilities and connect any city to exactly one open facility. The objective is to minimize the summed distances (added to the set-up costs for uFLP and k-FLP). Different notation is used to clarify when talking about the HLP and when about the FLP.
In the FLPs, the following variables exist:
* k∈ℕ: maximum number of open facilities (for the k-median and k-FLP)
* c_F_1,F_2,...F_|ℱ|: set up costs (for the uFLP and k-FLP)
* 𝒞: a finite set of cities
* ℱ: a finite set of potential facilities
* Γ_i,j: a non negative distance function for any i,j∈𝒞∪ℱ
In addition, the following decision variables are used:
* Y_f: deciding if facility f is opened
* X_c,f: deciding if city c is connected to facility f
The ILP for the k-FLP is:
min ∑_c∈𝒞∑_f∈ℱΓ_c,f· X_c,f+∑_f∈ℱY_f· c_f
s.t. ∑_f∈ℱY_f≤ k
∑_f∈ℱX_c,f=1 ∀ c∈𝒞
X_c,f≤ Y_f ∀ c∈𝒞; ∀ f∈ℱ
Y_f∈{0,1} ∀ f∈ℱ
X_c,f∈{0,1} ∀ c∈𝒞; ∀ f∈ℱ.
By constraint (<ref>) at most k facilities are opened. Any customer is served due to constraint (<ref>) and only open facilities are used by (<ref>). In the uFLP, constraint (<ref>) is not applied. In the k-median problem, the objective function is replaced by
min∑_c∈𝒞∑_f∈ℱΓ_c,f· X_c,f.
Notice that an FLP instance can easily be modelled as HLP by adding a city C' and a facility F' with distances Γ_C',F' =0, Γ_C',F=∞, Γ_C,F'=∞ for any F≠ F' and C≠ C'. Let any city be a branch and any facility a hub. Furthermore, set τ= {(C_1,C'),(C_2,C'),... }, α=0 and c_F'=0. Any tour will be connected to C' using F' and the first hub can be interpreted as the facility connected to the corresponding city. Then the k-median problem is directly transferred into the (p+1)HM, the uFLP into the uHLP and the k-FLP into the (p+1)HLP.
For the FLP, many algorithms were established, such as the 2.675+ ϵ-approximation algorithm of <cit.> for the k-median problem, the 2+√(3)+ϵ-approximation algorithm of <cit.> for the k-FLP, a primal-dual algorithm by <cit.> and a 1.488 approximation by <cit.> for the uFLP. A survey on FLPs can be found in <cit.>.
§.§ Distance function
Since most FLP algorithms consider metric distance functions, it needs to be assured that Γ forms a metric.
A nonnegative distance function Γ is metric if the following three conditions hold:
∀ i: Γ_i,i=0 (Definite)
∀ i,j: Γ_i,j=Γ_j,i (Symmetry)
∀ i,j,k: Γ_i,j+Γ_j,k≥Γ_i,k (Triangle inequality)
In addition, to prove the approximation bound, the distance function d of the HLP must be induced by a norm, which is a special case of a metric. However, it is not necessary that a specific norm is given. For any vectors x, y and any scalar α, a norm fulfills the following three conditions:
||x||=0⇒ x=0 (Definite)
||α· x||=|α|· ||x|| (Homogeneity)
||x+y||≤ ||x||+||y|| (Triangle inequality)
In a norm, the distance between two points x and y can be written as d_x,y=||x-y||.
Notice that any norm induces a metric. However, not any metric can be induced by a norm. For example, consider the discrete metric where d(x,y)=1 for any x≠ y. This is obviously a metric, but for x≠0 it holds ||2· x||=1≠ 2= 2· ||x||.
A p-norm is a special norm whose induced metric is defined for two d-dimensional points X=(x_1,x_2,...,x_d) and Y=(y_1,y_2,...,y_d) as Γ_X,Y=||X-Y||_p=( ∑_i=1^d |x_i-y_i|^p) ^1/p.
§ THE ALGORITHM
In this section, a new approximation algorithm for the metric multi allocation p-hub median problem (MApHM), the metric multi allocation uncapacitated hub location problem (MAuHLP) and the metric multi allocation p-hub location problem (MApHLP) is established. This is done by reducing it to the corresponding FLP, where a 2.675+ϵ-approximation algorithm for the k-median problem by <cit.>, a 2+√(3)+ϵ-approximation algorithm for the k-FLP by <cit.> and a 1.488-approximation algorithm for the uFLP by <cit.> exist.
To motivate the idea of the algorithm, consider the task to route one parcel from branch B_1 to branch B_2 using exactly two of the four possible hubs as in Figure <ref>. To reduce the problem to facility location, the decision for the first hub (the hub of B_1) must be independent of the second hub (the hub of B_2). However, ignoring the destination might lead to suboptimal results. In Figure <ref> hub H_1 and H_1' are equally far away from B_1. Moreover, H_2 and H_2' are equally far away from B_2. Since the task is to get from B_1 to B_2, using H_1 and H_2' to reduce the hub-to-hub distance makes sense. These two hubs are especially good since they are in the direction of the destination. Involving the destination for the FLP decision is the contribution of this work and it improves the solution.
The difficulty is to find good hubs without involving the hub decision of the second hub. The problem can be divided into two parts by adding a mid-point to the problem M_B_1,B_2, which is halfway between B_1 and B_2. Since only one tour is considered most of the time, this point is called M for simplicity. One task is to find a short path to the mid-point and one is to find a path from the mid-point to the destination as in Figure <ref>.
By the triangle inequality, the tour of Figure <ref> involving point M can not be shorter than the direct tour as in Figure <ref>. The main part of the paper is to bound the detour of the second tour. The proposed algorithm using this idea is described as follows:
§.§ Proposed algorithm (PA)
1. Add for each delivery task (B_1,B_2) a node M_B_1,B_2 in the HLP instance and define the distances as
* d_M,M=0,
* d_M_B_1,B_2,v=d_v,M_B_1,B_2=1/2d_B_1,v+1/2d_B_2,v for v∈ V\{M_B_1,B_2}.
2. Built an FLP instance from the HLP instance in the following manner:
* Use the k as p (for the pHM/pHLP) and/or c as facility costs (for the uHLP/pHLP)
* Let 𝒞=∅ and add for any tour T_B_1,B_2=1 two cities 𝒞←𝒞∪{ C_B_1,B_2}∪{ C_B_2,B_1}
* Use the potential hubs as potential facilities: ℱ←ℋ. To clarify, when talking about facilities or hubs, the facilities will be called F_1,F_2,... and the hubs H_1,H_2,..., although F_i directly refers to H_i.
* Define the distance function Γ_i,j as
Γ_(C_B_1,B_2),(C_B_3,B_4):=d_B_1B_3+αd_(M_B_1,B_2)(M_B_3,B_4) for branch to branch distances
Γ_(C_B_1,B_2),F_1:=d_B_1H_1+αd_(M_B_1,B_2)H_1 for branch to hub distances
Γ_F_1,F_2:=d_H_1H_2(1+α) for hub to hub distances
Notice that only the branch-to-hub distance is necessary for the algorithm. However, the other distances need to be defined to show that Γ forms a metric.
3. Use a γ approximation of a (metric) FLP algorithm
4. Apply this solution to MApHM/MAuHLP/MAHLP by:
* Opening a hub if the corresponding facility is opened in the FLP
* Solving the routing optimally
Notice that c_B_1,B_2 and c_B_2,B_1 are different vertices and they only have in common that they belong to the same tour and consider the same mid-point in the distance function. Moreover, notice that c_B_1,B_2 and c_B_1,B_3 are also different vertices, although both represent a tour starting in B_1. If a vector space is given the mid-point can also be set to
M_B_1,B_2=B_1+1/2(B_2-B_1)
such that d_M_B_1,B_2v=d_vM_B_1,B_2=||B_1+1/2(B_2-B_1)-v||. This definition directly follows the motivation and works as well. In both cases we directly have
d_B_1M=1/2d_B_1B_2=d_MB_2.
This algorithm produces a valid solution. The pHM and the pHLP open at most p hubs, since the k-median problem/ k-FLP does so ((<ref>),(<ref>)). Additionally, only open hubs are used for tours since they are opened facilities in the FLP ((<ref>),(<ref>),(<ref>)). Furthermore, since metric FLP algorithms are used it is necessary that Γ forms a metric. This is shown in the Appendix <ref>.
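The whole procedure can be sketched compactly in Python. The snippet below is a self-contained illustration of steps 1-4 for the MApHM on Euclidean instances: the mid-point distances follow the definition above, the simple greedy k-median only stands in for an arbitrary γ-approximate FLP subroutine, and all names and the random test data are assumptions, not the paper's implementation.

import itertools
import math
import random

def pa_maphm(branches, hubs, tasks, alpha, p):
    """branches, hubs: dicts name -> (x, y); tasks: list of branch pairs."""
    d = math.dist

    # step 1: distances to the mid-point M_{B1,B2}:
    # d(M_{B1,B2}, v) = 1/2 d(B1, v) + 1/2 d(B2, v)
    def d_mid(b1, b2, v):
        return 0.5 * d(branches[b1], v) + 0.5 * d(branches[b2], v)

    # step 2: FLP instance with one city per direction of every task and
    # Gamma(C_{B1,B2}, H) = d(B1, H) + alpha * d(M_{B1,B2}, H)
    cities = [(b1, b2) for (b1, b2) in tasks] + [(b2, b1) for (b1, b2) in tasks]

    def gamma(city, h):
        b1, b2 = city
        return d(branches[b1], hubs[h]) + alpha * d_mid(b1, b2, hubs[h])

    # step 3: a simple greedy k-median as the FLP subroutine
    open_hubs = []
    for _ in range(p):
        best = min((h for h in hubs if h not in open_hubs),
                   key=lambda h: sum(min(gamma(c, g) for g in open_hubs + [h])
                                     for c in cities))
        open_hubs.append(best)

    # step 4: open the chosen hubs and route every task optimally over them
    total = 0.0
    for (b1, b2) in tasks:
        total += min(d(branches[b1], hubs[h1]) + alpha * d(hubs[h1], hubs[h2])
                     + d(hubs[h2], branches[b2])
                     for h1, h2 in itertools.product(open_hubs, repeat=2))
    return open_hubs, total

random.seed(0)
branches = {f"B{i}": (random.random(), random.random()) for i in range(6)}
hubs = {f"H{i}": (random.random(), random.random()) for i in range(10)}
tasks = [("B0", "B1"), ("B2", "B3"), ("B4", "B5")]
print(pa_maphm(branches, hubs, tasks, alpha=0.5, p=3))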
To visualize the algorithm's idea, consider two branches B_1 and B_2 with coordinates (0,0) and (2,0) and α=0.5 in a vector space. Hence, M=(1,0). A potential hub H_1=(x,y) on the way from B_1 to M has costs ( |x|^p+|y|^p) ^1/p+α( |1-x|^p+|y|^p) ^1/p in a p-norm. In Figure <ref>, any point having exactly cost 0.9 is shown for different norms.
This figure illustrates that the proposed cost function considers both intuitions. On the one hand, close hubs are preferred over far away hubs; on the other hand, hubs on the way to the destination B_2 are preferred over hubs in other directions. For instance, the hubs H'=(-0.26̅,0) and H”=(0.8,0) have both costs of 0.9, meaning they are considered equally good first hubs. H' has the advantage of being close to B_1 and H” has the advantage of being in the destination's direction. If α is reduced, it is more important to reduce the branch-to-hub distance; if α is enlarged, it is more important to reduce the hub-to-hub distance by preferring hubs close to the mid-point. For α=0, this graph would be the (scaled) unit circle for the different norms, which makes sense since the hub-to-hub distance can be neglected.
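A short numeric check of this example (2-norm, α = 0.5, B_1=(0,0), B_2=(2,0), hence M=(1,0)) confirms that H' and H'' are indeed equally good first hubs under the proposed cost; the helper name below is of course only illustrative.

import math

B1, M, alpha = (0.0, 0.0), (1.0, 0.0), 0.5

def first_hub_cost(h):
    # proposed cost of choosing h as the first hub: d(B1, h) + alpha * d(M, h)
    return math.dist(B1, h) + alpha * math.dist(M, h)

print(first_hub_cost((-4 / 15, 0.0)))  # H'  = (-0.2666..., 0) -> approx. 0.9
print(first_hub_cost((0.8, 0.0)))      # H'' = (0.8, 0)        -> approx. 0.9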
In the next section, the algorithm is bounded.
§ APPROXIMATION GUARANTEE
To obtain the claimed bound, the proposed algorithm (PA) as well as the algorithm of <cit.> (BaP) will be used, and the solution with the smaller cost is chosen. The PA obtains better results than BaP for smaller values of α, while BaP in theory obtains better results for larger values of α.
Applying the PA and BaP with the k-median algorithm of <cit.> is a 3.451 approximation algorithm for the MApHM. For the MAuHLP applying the algorithm of <cit.> for uFLP gives a 2.173 approximation algorithm. Applying the algorithm of <cit.> is a 4.552 approximation algorithm for the MApHLP.
In any proof, the bounds are shown for one fixed delivery task. Since the proofs hold for any delivery task, the bound also holds for the whole solution.
Proof: Since the PA computes an optimal routing in step four, any bound obtained for a specific routing strategy holds as well. Therefore, two routing strategies similar to BaP are considered. Let B_1,B_2 ∈ℬ and H_1,H_2∈ℋ such that C_B_1,B_2 is connected to facility F_1 and C_B_2,B_1 to F_2. In other words, for the routing from B_1 to B_2 the facility location algorithm connected the corresponding cities to F_1 and F_2, respectively.
Routing strategy 1: A routing is performed using the hubs corresponding to the city-facility connections. In other words B_1→ H_1 → H_2 → B_2 is used.
Routing strategy 2: Only one hub is used, namely one that is connected to one of the two branches by a city-facility connection. This hub is then connected to both branches. In other words, B_1→ H_1 → B_2 or B_1→ H_2 → B_2 is used.
Figure <ref> visualizes both strategies. Strategy 1 is especially good for low values of α since the hub-to-hub connection is discounted. Strategy 2 is good for α close to 1. Notice that for α=1, by the triangle inequality, strategy 1 cannot outperform strategy 2.
In the following, an approximation bound is established for both strategies.
The proposed algorithm with routing strategy 1 is a (1+α)γ approximation algorithm.
The proposed algorithm with routing strategy 2 is a γ+1/(1+α)α approximation algorithm.
In BaP, the idea of using the presented routing strategies is used as well, reaching the guarantees 1/α and γ+α(1+γ), respectively, which together yield a 1+γ bound. In Table <ref> the different bounds are shown.
Notice that BaP_1≤ PA_1 and PA_2≤ BaP_2. Hence, if both algorithms are used and the best solution is applied, the guarantee can be decreased to
min( (1+α)γ,1/α).
In Figure <ref> all bounds are visualized using γ=2.675, which is the k-median guarantee of the algorithm of <cit.> (for the MApHM). Using BaP alone guarantees the approximation bound at the intersection of the black and blue curves (3.675), which was the best known bound so far. Running the proposed algorithm alone guarantees the bound at the intersection of the orange and red curves (4.011). Running both algorithms and taking the better result guarantees each of these bounds simultaneously; therefore, the bound at the intersection of the black and red curves (3.451) can be obtained with this method.
For the MAuHLP this is done analogously using the γ=1.488 bound of <cit.>, as visualized in Figure <ref>. In Figure <ref> this is visualized with the γ=2+√(3) bound of <cit.> for the MApHLP. In Table <ref> the decreased bounds are presented.
As a result, the approximation bound of the MApHM is improved to 3.451, the approximation bound of the MAuHLP is improved to 2.173 and the bound of the MApHLP is improved to 4.552.
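These bounds can be reproduced numerically: the worst case of min((1+α)γ, 1/α) over α is attained where the two curves cross, i.e. at the positive root of γα^2 + γα - 1 = 0. The short computation below is only a sketch; the γ values of the cited k-median and k-FLP algorithms carry an additional ε which is ignored here, so the printed values match the stated 3.451, 2.173 and 4.552 up to that rounding.

import math

def combined_bound(gamma):
    # alpha* solves (1 + alpha) * gamma = 1 / alpha
    alpha_star = (-gamma + math.sqrt(gamma * gamma + 4 * gamma)) / (2 * gamma)
    return 1 / alpha_star  # equals (1 + alpha_star) * gamma at the crossing

print(combined_bound(2.675))             # MApHM  -> approx. 3.45
print(combined_bound(1.488))             # MAuHLP -> approx. 2.173
print(combined_bound(2 + math.sqrt(3)))  # MApHLP -> approx. 4.552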
§ PROOFS
In this section Lemma <ref> and Lemma <ref> are proven. Let OPT_HLP be the optimal objective value of the HLP instance, OPT_FLP that of the FLP instance, and ALG_HLP, ALG_FLP the corresponding objective values of the algorithm's solutions.
The lemmata are shown by using the additional following lemma:
It holds ALG_FLP≤γ(1+α) OPT_HLP.
§.§ Proof of Lemma <ref>
The proposed algorithm with routing strategy 1 is a (1+α)γ approximation algorithm.
Proof: In strategy 1 as in Figure <ref> the routing of the solution is done according to the facility location connections. For any FLP solution B_1→ H_1 and B_2→ H_2 with mid-point M holds
ALG_FLP=d_B_1H_1+α d_H_1M+α d_MH_2+d_ H_2B_2
≥_(<ref>) d_B_1H_1+α d_H_1H_2+d_H_2B_2=ALG_HLP.
The set-up costs were neglected since they are equal for ALG_FLP and ALG_HLP by the definition of the strategy.
Together with Lemma <ref> directly
ALG_HLP≤ (1+α)γ OPT_HLP
follows.
§.§ Proof of Lemma <ref>
The proposed algorithm with routing strategy 2 is a γ+1/(1+α)α approximation algorithm.
Proof: In strategy 2 as in Figure <ref> only one hub is used in the HLP routing.
As before, the set-up costs for the MAuHLP can be neglected since they are the same in ALG_FLP and ALG_HLP.
W.l.o.g. let
d_B_1H_1+α d_H_1M≤ d_B_2H_2+α d_MH_2.
According to strategy 2 the routing is B_1→ H_1→ B_2.
This gives:
ALG_HLP≤ d_B_1H_1+d_H_1B_2
≤_(<ref>) d_B_1H_1+d_H_1M+d_MB_2
≤_(<ref>) d_B_1H_1+2α/1+αd_H_1M+1-α/1+α(d_B_1M+d_B_1H_1)+d_MB_2
= 2/1+α(d_B_1H_1+α d_H_1M)+1-α/1+α· d_B_1M+d_MB_2
≤_(<ref>)2/1+α·1/2(d_B_1H_1+α d_H_1M+d_B_2H_2+α d_MH_2)+1-α/1+α· d_B_1M+d_MB_2
=1/1+αALG_FLP+1-α/1+α· d_B_1M+d_MB_2
=_(<ref>)1/1+αALG_FLP+ 2/1+α· d_B_1B_2·1/2
≤_<ref>1/1+α(1+α)γOPT_HLP+ 1/1+α· d_B_1B_2
≤( γ+ 1/1+α·1/α) ·OPT_HLP
=( γ+1/(1+α)α) ·OPT_HLP
In the last inequality it is used that any solution costs at least the discounted direct connection between B_1 and B_2, such that OPT_HLP≥α· d_B_1B_2.
§.§ Proof of Lemma <ref>
It holds ALG_FLP≤γ(1+α) OPT_HLP.
Proof: Since a γ-approximation is used ALG_FLP≤γOPT_FLP holds.
It is left to show that OPT_FLP≤ (1+α)· OPT_HLP. Let an optimal HLP solution be given. An FLP solution I can be constructed by using the connections according to the HLP solution. The set-up costs can be neglected since they are the same in I and OPT_HLP. Since I is a valid FLP solution OPT_FLP≤ cost(I). Again fix a tour from B_1 via H_1 and H_2 to B_2 in the optimal hub location solution.
OPT_FLP≤ cost(I)
=d_B_1H_1+α d_H_1M+α d_MH_2+ d_H_2B_2
=d_B_1H_1+α( 1/2d_B_1H_1+1/2d_B_2H_1) +α( 1/2d_B_1H_2+1/2d_B_2H_2)+ d_H_2B_2
=d_B_1H_1+α( d_B_1H_1+1/2( d_B_2H_1 -d_H_2B_2)+ 1/2( d_B_1H_2-d_B_1H_1) +d_B_2H_2)+ d_H_2B_2
≤_(<ref>)d_B_1H_1+α( d_B_1H_1+1/2( d_H_1H_2)+ 1/2( d_H_1H_2) +d_B_2H_2)+ d_H_2B_2
≤ (1+α) ( d_B_1H_1+α d_H_1H_2+d_H_2B_2)
=(1+α)· OPT_HLP
The proof for the alternative vector space definition of M is in the appendix at <ref>.
§ COMPUTATIONAL RESULTS
In this section, the computational results of the proposed algorithm are shown. The described algorithm is compared to BaP.
The algorithms differ only by the definition of the distance between a branch and hub in step 2 of the algorithm. Instead of defining
Γ_(C_B_1,B_2),F_1:=d_B_1H_1+α d_(M_B_1,B_2)H_1
for branch to hub distances, BaP defines the distance as
Γ_(C_B_1,B_2),F_1:=d_B_1H_1.
For both algorithms an optimal routing and the same FLP algorithms are applied. A simple greedy algorithm for the k-median problem was used to obtain reasonable results for huge instances quickly. The algorithm starts with an empty set of facilities and in each of the k steps it adds the facility that reduces the cost of the FLP the most in this iteration.
Similarly, for the uncapacitated FLP, the algorithm of <cit.> is used, which defines all cities as uncovered in the beginning. In each iteration, it greedily covers a set of uncovered cities C̃ by a facility f minimizing c_f+∑_c∈C̃Γ_f,c/|C̃|.
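The following sketch shows one straightforward reading of this greedy rule; it is not the authors' implementation. For a fixed facility the best set C̃ is always a prefix of the uncovered cities ordered by distance, and an already opened facility is reused at set-up cost 0.

import math

def greedy_uflp(cities, facilities, setup_cost, dist):
    """cities/facilities: lists of ids; setup_cost: dict; dist(c, f): distance."""
    uncovered, opened, assignment = set(cities), set(), {}
    while uncovered:
        best = None  # (ratio, facility, chosen cities)
        for f in facilities:
            cost_f = 0.0 if f in opened else setup_cost[f]
            ranked = sorted(uncovered, key=lambda c: dist(c, f))
            acc = cost_f
            for k, c in enumerate(ranked, start=1):
                acc += dist(c, f)          # (c_f + sum of distances) for the prefix
                if best is None or acc / k < best[0]:
                    best = (acc / k, f, ranked[:k])
        _, f, chosen = best
        opened.add(f)
        for c in chosen:                   # cover the chosen cities by f
            assignment[c] = f
            uncovered.discard(c)
    return opened, assignment

pts_c = {"C1": (0, 0), "C2": (1, 0), "C3": (0, 1)}   # hypothetical toy instance
pts_f = {"F1": (0.1, 0.1), "F2": (0.9, 0.1)}
print(greedy_uflp(list(pts_c), list(pts_f), {"F1": 0.3, "F2": 0.3},
                  lambda c, f: math.dist(pts_c[c], pts_f[f])))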
Benchmark instances exist at <cit.> or from the Australian post at <cit.>. However, only a few instances were considered, and each instance is small. Hence, new instances were created to ensure enough test cases.
Three sets of test instances were created:
Small-sized instances with 1,000 delivery tasks, 50 branches, 100 hubs and 1,000 samples.
Medium-sized instances with 5,000 delivery tasks, 100 branches, 200 hubs and 200 samples.
Big instances with 20,000 delivery tasks, 1,000 branches, 400 hubs and 100 samples.
All instances were too huge to get results from an optimal solver in a reasonable time.
The locations were drawn uniformly from [0,1] in both dimensions. For the MAuHLP, the first 100 small-sized instances were considered. Additionally, the set-up costs were either uniformly drawn from [0,1.2] or set to 1. The test instances and objective values of every run can be obtained at <http://dx.doi.org/10.17877/DE290R-23200>. The instances additionally contain volumes, which are neglected for these problem statements; an extension would be to weight tours accordingly.
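Instances of the described sizes could, for example, be generated as sketched below; only the uniform sampling from [0,1] in both coordinates and the two set-up cost variants follow the text, while the function name, the seeding and the choice of distinct task endpoints are assumptions.

import random

def make_instance(n_tasks, n_branches, n_hubs, uniform_costs=True, seed=0):
    rng = random.Random(seed)
    branches = [(rng.random(), rng.random()) for _ in range(n_branches)]
    hubs = [(rng.random(), rng.random()) for _ in range(n_hubs)]
    tasks = [tuple(rng.sample(range(n_branches), 2)) for _ in range(n_tasks)]
    costs = [rng.uniform(0, 1.2) if uniform_costs else 1.0 for _ in range(n_hubs)]
    return branches, hubs, tasks, costs

small = make_instance(1_000, 50, 100)     # "small-sized" instances
medium = make_instance(5_000, 100, 200)   # "medium-sized" instances
big = make_instance(20_000, 1_000, 400)   # "big" instances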
Table <ref> shows median values for the 2-norm MApHM.
In every test case, the proposed algorithm significantly improves the result compared to BaP.
The results for MAuHLP are presented in Table <ref>.
For the MAuHLP, the PA outperforms BaP on all but one test instance. Moreover, the tests suggest that the difference between both algorithms is more significant for the MApHM. For the MApHLP the objective values depend on the relation between k and the set-up costs: for large k the MAuHLP solutions were obtained, and for small k results similar to the MApHM solutions.
In all but one test case, PA outperforms BaP. Hence, beyond the theoretical improvement, the tests indicate that the proposed algorithm is superior from a practical point of view and should be used when the instance size is too large or the time budget too small for exact algorithms.
§ ACKNOWLEDGMENTS
Special thanks to Anna Schroeter, Dorothee Henke and Nele Pommerening for the helpful discussions. Furthermore, thanks to Aleksandra "Ola" Grochala for helping with the implementation.
§ APPENDIX
§ Γ FORMS A METRIC
Metric FLP algorithms can only be used if the created FLP instance defines a metric. Hence, it needs to be shown that Γ forms a metric for the FLPs.
As described, Γ refers to the distances in the created graph for the metric FLPs and d refers to the distances of the input graph. In the following, let C_B_1,B_2, C_B_3,B_4, C_B_5,B_6∈𝒞 be arbitrary cities in the FLP, where B_1,B_2,B_3,B_4,B_5,B_6∈ℬ are the corresponding branches and M_B_1,B_2, M_B_3,B_4, M_B_5,B_6 the corresponding mid-points of the tours in the HLP. In addition, let F_1,F_2,F_3∈ℱ be potential facilities in the FLP. Definiteness (<ref>), symmetry (<ref>) and the triangle inequality (<ref>) must be proven for each combination of cities and facilities as defined in the algorithm.
1. Definiteness holds due to
Γ_(C_B_1,B_2),(C_B_1,B_2)=d_B_1B_1+α d_(M_B_1,B_2)(M_B_1,B_2)=0
and
Γ_F_1,F_1=d_H_1H_1(1+α)=0
2. Symmetry directly follows from the definition.
3. The triangle inequality will be shown for every case. We distinguish case 1 between two cities, case 2 between a city and a facility, and case 3 between two facilities. Each case is further split according to whether the intermediate vertex is a city (.1) or a facility (.2). Every derivation uses that d forms a metric, so that the triangle inequality for d can be applied.
1.1:
Γ_(C_B_1,B_2),(C_B_3,B_4)=d_B_1B_3+α d_(M_B_1,B_2)(M_B_3,B_4)
≤ d_B_1B_5+d_B_5B_3+α d_(M_B_1,B_2)(M_B_5,B_6)+α d_(M_B_5,B_6)(M_B_3,B_4)
=Γ_(C_B_1,B_2),(C_B_5,B_6)+Γ_(C_B_5,B_6),(C_B_3,B_4)
1.2:
Γ_(C_B_1,B_2),(C_B_3,B_4)=d_B_1B_3+α d_(M_B_1,B_2)(M_B_3,B_4)
≤ d_B_1H_1+d_H_1B_3+α d_(M_B_1,B_2)H_1+α d_H_1(M_B_3,B_4)
=Γ_(C_B_1,B_2),F_1+Γ_F_1,(C_B_3,B_4)
2.1:
Γ_(C_B_1,B_2),F_1=d_B_1H_1+α d_(M_B_1,B_2)H_1
≤ d_B_1B_3+d_B_3H_1+α d_(M_B_1,B_2)(M_B_3,B_4)+α d_(M_B_3,B_4)H_1
=Γ_(C_B_1,B_2),(C_B_3,B_4)+Γ_(C_B_3,B_4),F_1
2.2:
Γ_(C_B_1,B_2),F_1=d_B_1H_1+α d_(M_B_1,B_2)H_1
≤ d_B_1H_2+d_H_2H_1+α d_(M_B_1,B_2)H_2+α d_H_2H_1
=Γ_(C_B_1,B_2),F_2+Γ_F_2,F_1
3.1:
Γ_F_1,F_2=d_H_1,H_2(1+α)
≤ d_H_1B_1+d_B_1H_2+α d_H_1(M_B_1,B_2)+α d_(M_B_1,B_2)H_2
=Γ_F_1,(C_B_1,B_2)+Γ_(C_B_1,B_2),F_2
3.2:
Γ_F_1,F_2=d_H_1H_2(1+α)
≤ (d_H_1H_3+d_H_3H_2)(1+α)
=Γ_F_1,F_3+Γ_F_3,F_2
This proves that Γ forms a metric.
§ PROOF OF LEMMA <REF> FOR THE ALTERNATIVE DEFINITION OF M
W.l.o.g. let M be at the origin, such that ||M||=0. Furthermore, let ||H_1||≥ ||H_2||. Since M is the mid-point, ||B_1||=||-B_2 ||. Then
ALG_HLP≤ ||B_1-H_1||+||H_1-B_2||
≤_(<ref>) ||B_1-H_1||+||H_1||+||B_2||
≤_(<ref>) ||B_1-H_1||+2α/1+α||H_1||+1-α/1+α(||B_1||+||B_1-H_1||)+||B_2||
= 2/1+α(||B_1-H_1||+α||H_1||)+1-α/1+α·||B_1||+||B_2||
≤_(<ref>)2/1+α·1/2(||B_1-H_1||+α||H_1||+||B_2-H_2||+α||H_2||)+1-α/1+α·||B_1||+||B_2||
=1/1+αALG_FLP+1-α/1+α·||B_1||+||B_2||
=1/1+αALG_FLP+ 2/1+α· ||B_1||
≤_<ref>1/1+α(1+α)γOPT_HLP+ 2/1+α· ||B_1||
≤( γ+ 2/1+α·1/2α) ·OPT_HLP
=( γ+1/(1+α)α) ·OPT_HLP
§ NOT WORKING ANYMORE
In this section we consider a case distinction to show that ALG_FLP≤ (1+2α)· ALG_HLP. Therefore, we compare the HLP solution costs for one tour to the corresponding FLP solution and show that it is at most (1+2α) times more expensive. Hence, fix a tour and the costs of the HLP for this tour are
d_B_1,H_1+α d_H_1,H_2+d_H_2,B_2
while the cost of the FLP solution choosing these hubs is
Γ_(C_B_1,B_2),F_1+Γ_(C_B_2,B_1),F_2=d_B_1,H_1+α d_(M_B_1,B_2),H_1+d_B_2,H_2+α d_(M_B_2,B_1),H_2.
§ APPROXIMATION GUARANTEE FOR METRICS
In general metrics, only a γ(1+2α)-approximation can be guaranteed. A mid-point can be included as a node on the direct edge from B_1 to B_2 with distance d_B_1,M=d_B_2,M=1/2· d_B_1,B_2. Any other distance can be set to the shortest path distance between M and the corresponding node. Clearly, this is a metric.
For the proof, fix one tour B_1 to B_2 with mid-point M. W.l.o.g. assume that the fixed route is the only one.
OPT_HLP≤OPT_FLP
≤γ· ALG_FLP
= γ(Γ_B_1,H_1+Γ_H_2,B_2+α(Γ_H_1,M+Γ_M,H_2))
≤γ( Γ_B_1,H_1+Γ_H_2,B_2+α(Γ_B_1,H_1+Γ_B_1,M+Γ_B_2,H_2+Γ_M,B_2))
= γ( Γ_B_1,H_1+Γ_H_2,B_2+α(Γ_B_1,H_1+Γ_B_2,H_2+Γ_B_1,B_2))
= γ(1+α)( Γ_B_1,H_1+Γ_B_2,H_2)+γ·α·Γ_B_1,B_2
≤γ(1+2α) ALG_HLP
This is indeed sharp for general metrics. Consider γ=1 and the following instance with two possible routes H_1,H_2 and H̃_1,H̃_2 as in Figure <ref>.
Both routes have an FLP solution of 1+2α for each connection. However, the top route has a cost of 2+4α and the bottom one of 2. Hence, the 1+2α bound is indeed sharp.
Figure <ref> visualizes the situation. The HLP solution directly connects H_1 to H_2 (green edges and red edge) according to <ref>. The corresponding FLP solution instead uses a route via M and uses the blue edges instead of the red one. This intuition exactly matches equality <ref>, where two terms are the green edges and the other two are the α-discounted blue edges.
We construct an orthogonal line o to B_1,B_2 going through M as in Figure <ref>.
|
http://arxiv.org/abs/2307.04270v1 | 20230709215839 | A Complete Finite Equational Axiomatisation of the Fracterm Calculus for Common Meadows | [
"Jan A Bergstra",
"John V Tucker"
] | cs.LO | [
"cs.LO"
] |
Informatics Institute, University of Amsterdam, Science Park 900, 1098 XH, Amsterdam,
The Netherlands
[email protected] Department of Computer Science, Swansea University, Bay Campus, Fabian Way,
Swansea, SA1 8EN, United Kingdom
[email protected]
A Complete Finite Equational Axiomatisation of the Fracterm Calculus for Common Meadows
Jan A Bergstra1 John V Tucker2
August 12, 2023
===========================================================================================
We analyse abstract data types that model numerical structures with a concept of error. Specifically, we focus on arithmetic data types that contain an error flag whose main purpose is to always return a
value for division. To rings and fields we add a division operator x/y and study a class of algebras called common meadows wherein x/0 = ⊥.
The set of equations true in all common meadows is named the fracterm calculus of common meadows. We give a finite equational axiomatisation of the fracterm calculus of common meadows and prove that it is complete and that the fracterm calculus is decidable.
arithmetical data type, division by zero, error flag, common meadow, fracterm, fracterm calculus.
§ INTRODUCTION
Arithmetical structures have deep mathematical theories exploring their abstract axiomatisations, concrete representations, comparisons by homomorphisms, use in constructions, methods of equation solving, etc. For example, the naturals form commutative semirings, the integers form commutative rings, and the rationals, reals and complex numbers form fields. However, for computing, their classical algebraic theories have some shortcomings. Computing with arithmetical structures requires us to make abstract data types with extra algebraic properties that arise from the semantics of algorithms and programs. In practical computation, the application of an operator must return a value, i.e., must be a total operator. For this reason arithmetical structures in computing can have various special elements that flag special behaviour; the most obvious examples are error flags, such as a pocket calculator displays when trying to compute 1/0 or when having an overflow. Floating point arithmetics employ several more flags, such as infinities +∞, -∞ and `not a number' 𝖭𝖺𝖭. Surprisingly, not much is known about the algebraic theories of these augmented structures whose semantical features are deemed practically essential for arithmetical abstract data types. What has been known, at least since von Neumann and Goldstine's 1947 analysis of numerics, is that computer arithmetics do not satisfy the beautiful axioms of classical algebra <cit.>.
§.§ Common meadows
In <cit.>, we began to investigate semantic aspects of computer arithmetic using the theory of abstract data types. Using the equational methods characteristic of the theory, we have studied several semantic options for undefined operators and overflows, often focussing on data types of rational numbers (we sketch some of this programme later, in section <ref>).
In this paper we consider the class of all arithmetical data types called common meadows, which have the form
(F ∪{⊥} | 0, 1, ⊥, x+y, -x, x · y, x/y)
where F is a field and is an element that behaves like an error flag. Following <cit.>, we use the term meadow for any field equipped with
division, or inverse, as an operation. The idea of a common meadow was introduced in <cit.>. The class of all common meadows is denoted 𝖢𝖬.
Common meadows are built from fields by adding error and division, as follows. Given any field F, we extend its domain with a new element ⊥ which is absorptive, which means for all x ∈ F,
x + ⊥ = ⊥, x ·⊥ = ⊥, and -⊥ = ⊥.
This gives us the enlarged field-like structure 𝖤𝗇𝗅_⊥(F), using the general methods of <cit.>. The addition of ⊥ disturbs the classical algebra of fields as standard properties can fail, e.g.,
x - x ≠ 0 because ⊥ - ⊥ = ⊥ and x · 0 ≠ 0 because ⊥· 0 = ⊥.
We will explore the effect of ⊥ and show that, surprisingly, many familiar laws can be preserved or rescued.
With ⊥ installed, we can extend 𝖤𝗇𝗅_⊥(F) with a total division function x/y, also written x/y, and defined by:
x/y = ⊥ if y=0, y = ⊥ or x = ⊥; otherwise,
x/y = x · y^' where y^'∈ F is the unique element for which y · y^'= 1 in F.
This algebra we denote 𝖤𝗇𝗅_⊥(F(_/_)) and it is a common meadow.
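As an illustration of this construction (not code from the paper), the following sketch realises the common meadow of rationals as a small data type: exact rational arithmetic extended with an absorptive error element, written BOT in the code, and a total division with x/0 = BOT.

from fractions import Fraction

BOT = object()  # the absorptive error flag

def add(x, y):
    return BOT if BOT in (x, y) else x + y

def neg(x):
    return BOT if x is BOT else -x

def mul(x, y):
    return BOT if BOT in (x, y) else x * y

def div(x, y):
    # total division: errors and division by zero both yield BOT
    if x is BOT or y is BOT or y == 0:
        return BOT
    return x / y

q = Fraction(1, 3)
print(div(q, Fraction(0)) is BOT)          # x / 0 is the error element  -> True
print(add(q, div(q, Fraction(0))) is BOT)  # absorption: x + BOT = BOT   -> True
print(add(q, neg(q)))                      # x + (-x) = 0 still holds on the rationals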
With these constructions introduced, we can now turn to the main theorem of the paper, for which we need to be very precise about the syntax of rings, fields and common meadows. The syntax is determined by choosing signatures that contain names for the constants and operations. We need several: Σ_r for rings and fields; Σ_r,⊥ for rings and fields with ⊥; Σ_m for meadows; and Σ_cm for common meadows. We will use terms and equations over these signatures.
§.§ Fracterm calculus for common meadows
The importance of the field of rational numbers for computing influences our use rings and fields in developing data types. In addition to focussing on division as a total function, we highlight the idea of a fraction – the primary representation of rationals in practice – adapting it to the abstract setting of meadows. Although fractions are not well-defined notions, the idea can be made perfectly precise using the syntax of the signature containing division. In general, a fracterm is a term over the meadow signature Σ_m whose leading function symbol is division. Fracterms were introduced in <cit.>, and a full motivation for the use of this syntax and terminology is given in <cit.>.
The equational theory of common meadows is the set
FC(𝖢𝖬) = { e | ∀ A ∈𝖢𝖬. A ⊨ e }
of all equations over Σ_cm that are true in all common meadows; we call the this the fracterm calculus for common meadows.
The objective of the paper is to develop enough theory to prove the following new result (Theorem <ref> below).
Theorem. There is a finite equational axiomatisation E_𝖿𝗍𝖼-𝖼𝗆 that is sound for the class 𝖢𝖬 of common meadows and complete w.r.t. equational logic for the fracterm calculus FC(𝖢𝖬) for common meadows, i.e., for any equation e over Σ_cm,
E_𝖿𝗍𝖼-𝖼𝗆⊢ e if, and only if, e ∈ FC(𝖢𝖬).
In the language of logic, the equational theory of commmon meadows is finitely based. Furthermore:
Corollary. The fracterm calculus for common meadows is algorithmically decidable.
The class of all fields is classically definable by finitely many first order axioms, allowing negation; but it is not definable by any set of equations or conditional equations as they do not form a variety in the sense of Birkhoff's Theorem, or a quasivariety in the sense of Mal'tsev's Theorem (e.g., they are not closed under products) <cit.>. Equations, and conditional equations, are the preferred forms of axioms for data types, especially if they have good term rewriting properties <cit.>; they are a basic component for specification and verification tools. Seeking equational axiomatisations of arithmetical data types is a technical programme for which this paper is something of a milestone. Common meadows have emerged as a mathematically sound and tractable data types semantics for computer arithmetic. Our theorem improves on earlier axiomatisations and on a partial completeness result for common meadows given in <cit.>, based on fields with characteristic 0.
Complementing our theorem is the fact that our axiomatisation E_𝖿𝗍𝖼-𝖼𝗆 does not prove all conditional equations even for characteristic 0, <cit.>:
Question. Does the conditional equational theory of common meadows have a sound and complete finite conditional equation axiomatization?
§.§ Structure of the paper
We begin with preliminaries that recall basic ideas about abstract data types in Section <ref>, and rings, fields and common meadows in Section <ref>.
Polynomials play a central role in arithmetical structures and so transitions between standard polynomials and syntactic polynomials for fields and common meadows are established in Section <ref>. In Section <ref> we use the ideas and results we have accumulated to prove the theorems. We discuss technical matters arising and some background to the study of totalisation in Section <ref>.
§ PRELIMINARIES ON DATA TYPES
The theory of abstract data types starts from four basic concepts as follows. An implementation of a data type is modelled by a many-sorted algebra A of signature Σ. A signature Σ is an interface to some (model of an) implementation of the data type, and the constants and operations declared in Σ provide the only means of access to the data for the programmer. Axiomatisations of the operations in a signature define a range of implementations and provide the only means for the programmer to reason about the data. Two implementations of an interface are equivalent if, and only if, their algebraic models are isomorphic. The theory of arithmetic data types we are developing here is shaped by these and the following general concepts.
§.§ Terms and equations
That signatures model interfaces establishes an essential role for the syntax of terms and equations in the theory abstract data types.
Let Σ be any signature. Let X be any countable set of variables. Let T(Σ) and T(Σ, X)
be the algebras of all closed or ground terms over Σ, and open terms with variables in X, respectively. Given a Σ-algebra A, and a valuation σ for variables in a term t ∈ T(Σ, X), the result of evaluating t in A using σ is denoted t _σ.
An equation is a formula of the form
e ≡ t(x_1, … , x_k) = t'(x_1, … , x_k)
where t(x_1, … , x_k), t'(x_1, … , x_k) are terms over Σ with variables from the list x_1, … , x_k ∈ X of all the variables in e – the terms t and t' need not have the same variables. Set Eqn(Σ,X) to be the set of all equations over Σ with variables taken from X.
An equation e ≡ t = t' ∈ Eqn(Σ,X) is valid in the algebra A, written A ⊨ e, if for all valuations σ of variables of e, t _σ = t' _σ. The equation e is valid in a class 𝖪 of Σ-algebras, written 𝖪 ⊨ e, if it is valid in every algebra in 𝖪.
Given E ⊂ Eqn(Σ,X), we use equational logic for reasoning and write E ⊢ e if e can be deduced from E.
Let 𝖪 be a class of Σ-algebras. An axiomatisation of the algebras of 𝖪 by a set E of equations is sound w.r.t. equational logic for 𝖪 if for all e ∈ Eqn(Σ,X), if E ⊢ e then 𝖪 ⊨ e.
Conversely, the axiomatisation E is complete w.r.t. equational logic for 𝖪 if for all e ∈ Eqn(Σ,X), if 𝖪 ⊨ e then E ⊢ e.
In trying to axiomatise a given class 𝖪 of structures soundness is necessary. However, for a given class 𝖪 of structures completeness is more complicated and, in fact, rare; for example, if the class 𝖪 is a particular algebra (unique up to isomorphism), such as some algebra of rational numbers. In algebraic practice, many classes studied consist of just the algebras that satisfy an interesting set of axioms. However, these classes that are defined by axioms arise from encounters with structures, and the search for an axiomatisation and its study is a method for discovering the essential properties of these structures.
Let 𝖪 be a class of Σ-algebras. The set
EqnThy(𝖪) = { e | ∀ A ∈𝖪 . A ⊨ e }
is called the equational theory of 𝖪.
§.§ Data types and their enlargements by
The properties of interest to abstract data types are isomorphism invariants – typical examples are properties that are definable by first order formulae and forms of computability. This means that if a property is true of any data type A, and is an isomorphism invariant, then the property will be true of its abstract data type. For more of the general theory of abstract data types see <cit.>.
Our algebras will be single-sorted and have a non-empty carrier so we will use a simple notation for data types. For instance,
(A | c_1, …, c_k, f_1,…, f_l)
denotes a data type with domain A and constants c_1, ...,c_k from A, and functions f_1,..,f_k, where it is assumed that arities for the functions on A are known from the context.
A Σ-algebra A is Σ-minimal if it is generated by the constants and operations named in its signature Σ. A data type is a Σ-minimal algebra. An abstract data type is an isomorphism class of a data type.
Algebras can be expanded by adding new constants and operations to their signature.
Algebras can be extended by adding new elements to their carriers. Combining expansions and extensions in some order
comprises what we call enlargements of an algebra.
Consider the following general method of enlarging an algebra with .
Consider the algebra
(A | c_1, …, c_k, f_1,…, f_l)
of signature Σ. Suppose ∉ A and let
Enl_(A) = (A ∪{} | c_1, …, c_k, , f_1,…, f_l)
wherein is
(i) absortive, i.e., if is an argument to an operation f then the result is ; and
(ii) totalising, i.e., if any operation f is undefined in A then it returns a value in Enl_(A).
Let Σ_ = Σ∪{} be the signature of Enl_(A).
If the algebra A is total then f returns if, and only if, one of its arguments is .
We can adapt some equational axioms true of A to accommodate by using this idea:
An equation t = t' is a balanced equation if the terms t and t' have the same variables.
Their key property is this:
Let A be a Σ algebra and let t = t' be a balanced equation. Then,
A ⊨ t = t' if, and only if, Enl_⊥(A) ⊨ t = t'.
§ PRELIMINARIES ON ARITHMETIC STRUCTURES
In the arguments that follow, we will move between the algebra of rings, fields (with and without ⊥) and common meadows.
§.§ Rings and fields and common meadows
We start from the theory of commutative rings and fields. A commutative ring with 1 is an algebra of the form
(R | 0, 1, x+y, -x, x · y).
A field F is a commutative ring with 1 in which each non-zero element x ∈ F has an inverse y ∈ F, i.e., x · y = 1. Note rings and fields have the same three operations. Let Σ_r be a signature for rings and fields. All our rings will be commutative with 1.
Let ℤ be a ring of integers and let ℚ be a field of rational numbers containing the subring ℤ.
We add ⊥ to a ring R by applying the enlargement of Definition <ref> to make the algebra
Enl_⊥(R) = (R ∪{⊥} | 0, 1, ⊥, x+y, -x, x · y)
with signature Σ_r,⊥. The same construction applied to a field F yields Enl_⊥(F). The point of adding ⊥ is to manage division.
§.§ Meadows and common meadows
To fields we add division to make a meadow.
A meadow is a partial algebra F( _/_)
obtained as an expansion of a field with a
division function _/_ that works as usual on non-zero elements of the domain of F. Let Σ_m = Σ_r∪{_/_}.
To totalise division, we add ⊥ to a meadow F( _/_) by applying the enlargement of Definition <ref>:
A common meadow is a total algebra
Enl_⊥(F( _/_)) = (F ∪{⊥} | 0, 1, ⊥, x+y, -x, x · y, x/y)
with signature Σ_cm = Σ_m,⊥.
Thus, we have a field F equipped with a division function _/_ that has been made total by having x/0 = ⊥ for all x, including ⊥.
Equivalent designs for meadows and common meadows can be based on inverse as a primitive, an approach that was taken in <cit.>.
Recall that to qualify as a data type, an algebra must be minimal, i.e., generated by its constants and operations. Now, if F is a finite prime field then 𝖤𝗇𝗅_⊥(F) is minimal, while for all other fields F – especially the rationals – the algebra is non-minimal and is not a data type for that reason. Division is needed to make the classical field of rational numbers a data type:
The common meadow Enl_⊥(ℚ( _/_)) of rationals is Σ_cm-minimal and hence qualifies as a data type.
Recalling an observation made in <cit.>, we summarise the construction:
Every field F can be enlarged to a common meadow Enl_⊥(F( _/_)) that is unique with respect to isomorphisms that fix the field F.
If F is a computable field then Enl_⊥(F( _/_)) is a computable common meadow.
It is easy to see that the extension of F by ⊥ is computable. Division is partial on F, but its set { (x, 0) | x ∈ F } of undefined arguments is computable, for which the value ⊥ for divisions can be computed.
See, e.g., <cit.> for methods to express this argument in detail.
Applying the definitions of equations in Section <ref> we have:
The fracterm calculus of common meadows is the set
FC(𝖢𝖬) = { e ∈ Eqn(Σ_cm) | ∀ A ∈𝖢𝖬. A ⊨ e }
of all equations made of Σ_cm-terms that are true in all common meadows.
§.§ Polynomial sumterms
For the next steps in preparing for the proof, we need some syntactic theory of polynomials adapted to the presence of ⊥ in rings and fields
and, later, to working with division in common meadows.
A sumterm is a Σ_r term with _+_ as its leading function symbol.
A pure product term is a Σ_r term containing only multiplications _·_.
A flat sumterm is a sum of pure product terms, where sums may have an arbitrary length (assuming the associativity of addition).
Let Eqn(Σ_r) denote the set of equations made from terms over Σ_r. Now since
Σ_r⊂Σ_r,⊥⊂Σ_cm
these ring terms and equations are destined to play a special role in the theory of common meadows: they are the simple terms and equations over Σ_cm that do not involve ⊥ or division.
Let SumEqn(Σ_r) ⊂ Eqn(Σ_r) be the set of all equations whose terms are sumterms.
The sumterm calculus of common meadows is the set
SumC(𝖢𝖬) = { e ∈ SumEqn(Σ_r) | ∀ A ∈𝖢𝖬. A ⊨ e }
of all sumterm equations true in all common meadows.
§.§ Equational specifications with
Consider the set E_𝗐𝖼𝗋,⊥ of equational axioms over Σ_r,⊥ in Table <ref>.
(x+y)+z = x + (y + z)
x+y = y+x
x+0 = x
x + (-x) = 0 · x
x · (y · z) = (x · y) · z
x · y = y · x
1 · x = x
x · (y+ z) = (x · y) + (x · z)
-(-x) = x
0 · (x + y) = 0 · (x · y)
x + ⊥ = ⊥
E_𝗐𝖼𝗋,⊥: equational axioms for weak commutative rings with ⊥
Notice these equations are close to the equational axioms of commutative rings.
The eight equations for commutative rings in Table <ref> that are intact are balanced equations (Lemma <ref>). The two axioms (4) and (10) are adjusted to the presence of ⊥. For example, the unbalanced equation x + (-x) = 0 is replaced by the balanced x + (-x) = 0 · x, which is valid for x = ⊥. Axiom (11) introduces ⊥; the absorption axioms for · and - can then be derived from E_𝗐𝖼𝗋,⊥. We call these axioms for weak commutative rings.
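As a brute-force illustration of the soundness of these axioms (independent of the completeness argument below), the following sketch checks all eleven equations of Table <ref> over the enlargement of the prime field GF(5), with None standing for the absorptive element; since the axioms do not mention division, a finite field suffices.

P = 5
ELEMS = list(range(P)) + [None]  # GF(5) together with the error element

def add(x, y): return None if None in (x, y) else (x + y) % P
def neg(x):    return None if x is None else (-x) % P
def mul(x, y): return None if None in (x, y) else (x * y) % P

axioms = [
    lambda x, y, z: add(add(x, y), z) == add(x, add(y, z)),
    lambda x, y, z: add(x, y) == add(y, x),
    lambda x, y, z: add(x, 0) == x,
    lambda x, y, z: add(x, neg(x)) == mul(0, x),
    lambda x, y, z: mul(x, mul(y, z)) == mul(mul(x, y), z),
    lambda x, y, z: mul(x, y) == mul(y, x),
    lambda x, y, z: mul(1, x) == x,
    lambda x, y, z: mul(x, add(y, z)) == add(mul(x, y), mul(x, z)),
    lambda x, y, z: neg(neg(x)) == x,
    lambda x, y, z: mul(0, add(x, y)) == mul(0, mul(x, y)),
    lambda x, y, z: add(x, None) is None,
]
print(all(ax(x, y, z) for ax in axioms
          for x in ELEMS for y in ELEMS for z in ELEMS))  # prints True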
The equations E_𝗐𝖼𝗋,⊥ in Table <ref> are a finite axiomatisation that is complete for the
(i) sumterm calculus for rings equipped with ⊥;
(ii) sumterm calculus for fields equipped with ⊥; and
(iii) sumterm calculus for common meadows.
The validity of these axioms in all structures of the form 𝖤𝗇𝗅_⊥(F( _/_)), for a field F, is easy to check by inspection. Hence, the axioms are sound for SumC(𝖢𝖬).
By Proposition 2.3 of <cit.>, the equations E_𝗐𝖼𝗋,⊥ of
Table <ref> provide a complete axiomatisation of the
equational theory of the class of structures obtained as
𝖤𝗇𝗅_⊥(R)
for some ring R. It is an immediate corollary of the proof of Proposition 2.3 in <cit.> that
contemplating a smaller class of structures by requiring that
R is a field allows the conclusion to be drawn for 𝖤𝗇𝗅_⊥(F): In the final lines of that proof,
instead of considering a ring of integers one may use, to the same effect, a field of rationals. Since the sumterms and equations do not involve division, completeness holds for 𝖤𝗇𝗅_⊥(F( _/_)).
In Section <ref>, we build the equations of common meadows by axiomatising _/_ on top of this set E_𝗐𝖼𝗋,⊥.
§ STANDARD POLYNOMIALS AS SYNTACTIC TERMS OVER COMMON MEADOWS
Working with standard polynomials over rings and fields does not involve syntax or ⊥.
Here we collect some results on standard polynomials over fields and, in particular, (i) formalise syntactic terms for standard polynomials and (ii) establish a two-way transformation between standard polynomials and their formal syntactic counterparts.
§.§ Properties of standard polynomials
Consider the polynomial rings ℤ[X_1,…,X_n] ⊆ℚ[X_1,…,X_n]. We need to distinguish specific types of multivariate polynomials. In particular, each value s ∈ℚ,
including 0, counts as a polynomial. However, coefficients of polynomials must be non-zero, so 0 · X_1 · X_2
will not be considered a polynomial in ℚ[X_1,X_2].
A polynomial p in ℤ[X_1,…,X_n] is primitive if the
gcd of its coefficients equals 1.
Let ℚ̄ be an arbitrary but fixed algebraic closure of the field ℚ.
Suppose p and q are polynomials in ℚ[X_1,…,X_n]
which take value 0 at the same argument vectors in ℚ̄^n,
then p and q have the same irreducible polynomials as factors (up to constant factors in ℚ),
in the ring ℚ[X_1,…,X_n].
This follows by repeated application of the Nullstellensatz (e.g., <cit.>, Ch. IX, Theorem 1.5) and unique factorization (e.g., <cit.>, Ch. IV, Corollary. 2.4).
(Lemma of Gauss.)
Consider a polynomial p ∈ℤ[X_1,…,X_n].
Suppose that p is non-zero and has a factorisation p = r_1 · r_2 in ℚ[X_1,…,X_n].
Then for some rational numbers c_1,c_2 ∈ℚ,
p = c_1 · r_1 · c_2 · r_2
and the polynomials c_1 · r_1 and c_2 · r_2 are in
ℤ[X_1,…,X_n].
Suppose that a non-zero primitive polynomial p ∈ℤ[X_1,…,X_n] has a factorisation
p = r_1 ·…· r_m with r_1,…,r_m irreducible polynomials in ℤ[X_1,…,X_n]. Then the
multiset {r_1, …, r_m} of polynomials, modulo the sign thereof, is unique.
Suppose α and β are primitive non-zero polynomials in the ring ℤ[X_1,…,X_n] with the property that α and β take value 0 on the same argument vectors in ℚ̄^n. Then there are primitive
irreducible polynomials
γ_1,…,γ_m ∈ℤ[X_1,…,X_n] and positive natural numbers a_1,…,a_m, b_1,…,b_m
such that in ℤ[X_1,…,X_n],
α = γ_1^a_1·…·γ_m^a_m and β =
γ_1^b_1·…·γ_m^b_m.
If α and β vanish on the same arguments in ℚ̄, then both have the same irreducible factors, say γ_1,…,γ_m,
over ℚ[X_1,…,X_n]. Using Proposition <ref> these irreducible polynomials may be chosen in
ℤ[X_1,…,X_n], and with Proposition <ref> one finds that, viewed as a set, said collection of polynomials
is unique modulo the sign of each polynomial.
In the proof below only Proposition <ref> will be used.
§.§ Polynomial sumterms in the setting of common meadows
The step from the ordinary algebra of rings and fields to working in common meadows is not difficult, but it involves some details. The key syntactic idea is a special sumterm called a polynomial sumterm over Σ_r, and hence over our other signatures, which will work like a standard polynomial in conventional algebra.
To replicate in syntax the various standard polynomials, we begin with choosing sets of numerals, which are closed terms for denoting the naturals, integers and rationals. Numerals for natural numbers are: 0,1,2, 3,… where 2≡ 1+1, 3≡2 +1, … In general: n+1≡n+1. (The precise definition of numerals is somewhat arbitrary and other choices are equally useful.)
For integers we will have terms of the form -n with n>0. We
will use the notation n for an arbitrary integer, thus 0≡ 0, 1≡ 1
and for positive n, -n≡ - (n).
For rational numbers, we have terms of the form n/m and -n/m with n>0, m>0 and (n,m) = 1.
In this way, for each a ∈ℚ we have a unique numeral t_a such that t_a = a in ℚ.
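The numeral convention can be mimicked programmatically; the sketch below renders numerals as plain strings (the underlining used for numerals is dropped) and is only meant to illustrate the chosen representatives.

from fractions import Fraction

def numeral_nat(n):
    return "0" if n == 0 else "1" if n == 1 else "(" + numeral_nat(n - 1) + "+1)"

def numeral_int(n):
    return numeral_nat(n) if n >= 0 else "-" + numeral_nat(-n)

def numeral_rat(q):
    q = Fraction(q)  # Fraction reduces to lowest terms, so gcd(n, m) = 1
    if q.denominator == 1:
        return numeral_int(q.numerator)
    return numeral_int(q.numerator) + "/" + numeral_nat(q.denominator)

print(numeral_nat(3))                # ((1+1)+1)
print(numeral_rat(Fraction(-2, 3)))  # -(1+1)/((1+1)+1)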
We build the polynomial sumterms in stages.
A pure monomial is a non-empty product of variables (understood modulo associativity and commutativity of multiplication).
A monomial is a product c · p with c a non-zero numeral for a rational number and p a pure monomial.
We will assume that pure monomials are written in a uniform manner mentioning the variables in the order inherited from the infinite listing X_1,X_2,… with powers expressed as positive natural numbers (where power 1 is conventionally omitted). Recalling Definition <ref> of sumterms:
A polynomial sumterm p is a flat sumterm for which
(i) all summands involve pairwise different pure monomials,
and
(ii) none of the coefficients is 0, unless p ≡ 0.
Clearly, 0 is a polynomial sumterm while ⊥ is not a polynomial sumterm as polynomial sumterms are terms over Σ_r:
Given polynomial sumterms p and q,
Enl_⊥(ℚ) ⊨ p=q if, and only if, E_𝗐𝖼𝗋,⊥⊢ p = q.
This is an immediate corollary of the proof of Theorem 2.1 in <cit.>.
§.§ Transitions between standard polynomials and polynomial sumterms
We now turn to the relationship between standard polynomials and polynomial sumterms. Upon evaluation of the numerals that serve as its coefficients, a polynomial sumterm
p with variables in X_1,…,X_n can be understood as a standard polynomial p' in the ring
ℚ[X_1,…,X_n]. Thus, we have the translation:
p ↦ p'.
Conversely, a polynomial α∈ℚ[X_1,…,X_n] can be written as a polynomial sumterm α^⋆ by
turning all coefficients in ℚ into the corresponding numerals. Thus, we have the translation:
α↦α^⋆.
Given polynomial sumterms p and q involving the same variables and a ring R,
the following equivalence holds:
Enl_⊥(R) ⊨ p = q if, and only if, p' = q' in R.
Moreover, the following observation can be made, which, however, critically depends on the assumption that all coefficients of a polynomial are non-zero.
For all polynomials
α and β:
α = β in R if, and only if, Enl_⊥(R) ⊨α^⋆ = β^⋆.
For all polynomials
α and β:
α = β in ℚ if, and only if, Enl_⊥(ℚ) ⊨α^⋆ = β^⋆
if, and only if,
E_𝗐𝖼𝗋,⊥⊢α^⋆ = β^⋆.
This follows by combining Proposition <ref> with Proposition <ref>.
Properties of polynomial sumterms and standard polynomials correspond as follows:
(i) p is non-zero ⟺ p' is non-zero,
(ii) p has degree n ⟺ p' has degree n,
(iii) p is irreducible ⟺ p' is irreducible,
(iv) p is primitive ⟺ p' is primitive,
(v) q is a factor of p ⟺ q' is a factor of p',
(vi) any polynomial sumterm p can be written as a· q for a non-zero integer a and a primitive polynomial sumterm q.
§.§ Quasi-polynomial sumterms
Consider, for instance, the Σ-terms
x and x + 0 · y.
On evaluating in a commutative ring R, these terms over Σ_r define the same functions, but they do not do so in the enlargement Enl_⊥(R),
as they take different values upon choosing x=0, y = ⊥. Thus, the terms need to be distinguished, since 0 usefully occurs as a coefficient in a polynomial when working with ⊥. We will work with a second kind of polynomial sumterm in order to make these issues explicit.
A quasi-polynomial sumterm p is either
(i) a polynomial sumterm, or
(ii) a monomial of the form
0 · r with r a pure monomial with all its variables in r occurring in the first power only,
or
(iii) the sum q + 0 · r of a polynomial sumterm
q and a monomial of the kind in (ii) and such that no variables occur both in q and in r.
The following proposition provides a rationale for the specific form of quasi-polynomial sumterms as just defined.
Given a sumterm p which contains at least one variable, a pure monomial q can be found with variables occurring with power 1 only such that 0 · p = 0 · q.
Use the following rewrite rules, working modulo commutativity and associativity of addition and multiplication, each of which are sound w.r.t. E_𝗐𝖼𝗋,:
x+ 0 → x,
0 · (x · y) → (0 · x) + (0 · y),
0 · (x+ y) → (0 · x) + (0 · y),
0 ·n→ 0,
(0 · x) + (0 · x) → 0 · x
until no further rewrites with these rules can be performed and finally use the rule
0 · x + 0 · y → 0 · (x · y)
to arrive at the required form.
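As a small illustration of the reduction just described, the normal form 0 · q can be computed directly from the variables occurring in p, instead of applying the rewrite rules step by step. The sketch below is illustrative only and uses sympy expressions as stand-ins for sumterms.

```python
# Sketch of the reduction above: for a sumterm p containing at least one
# variable, 0*p rewrites to 0*q with q the product of the variables of p,
# each to the first power.  This shortcut computes the normal form directly.
from sympy import symbols, Mul

x, y, z = symbols('x y z')

def zero_times_pure_monomial(p):
    """Return the pure monomial q such that 0*p rewrites to 0*q (None if p has no variables)."""
    variables = sorted(p.free_symbols, key=lambda s: s.name)
    if not variables:
        return None          # 0 * numeral rewrites to plain 0
    return Mul(*variables)   # each variable occurs with power 1 only

# 0 * (3*x*y**2 + z) reduces to 0 * (x*y*z):
print(zero_times_pure_monomial(3*x*y**2 + z))   # x*y*z
```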
The sum of two polynomial sumterms need not be provably equal by E_𝗐𝖼𝗋, to a polynomial sumterm.
Indeed, x + (-x) = 0 · x is merely a quasi-polynomial sumterm. However, conversely:
A product r = p · q of two non-zero polynomial sumterms p and q
is provably equal to a polynomial sumterm by E_𝗐𝖼𝗋,.
We write t=_𝗐𝖼𝗋, r for E_𝗐𝖼𝗋,⊢ t=r. First, notice that p · q is provably equal to a quasi-polynomial sumterm.
For example, consider p ≡ x+1, q ≡ x-1, then
r = p · q =_𝗐𝖼𝗋,
(x^2 + 0 · x) + (-1)=_𝗐𝖼𝗋, (x · (x + 0)) + (-1) =_𝗐𝖼𝗋, x^2 + (-1).
More generally, if a variable x occurs in either p or q then as a function p · q depends on x,
from which it follows that in the polynomial α with α = p' · q',
x must occur at least once in a monomial of α of which the coefficient is non-zero.
This implies that an additional summand 0 · x is unnecessary in the
quasi-polynomial sumterm α^⋆, which for that reason is provably equal with E_𝗐𝖼𝗋, to
a polynomial sumterm.
Let p and q be integer polynomial sumterms, both with non-zero degree, with variables among X_1,…,X_n and such that p' as well as q' are primitive polynomials.
Suppose that in Enl_⊥(ℚ) both p and q have value 0 on the same argument
vectors in Enl_⊥(ℚ)^n. Then, there are
(i) a positive natural number m,
(ii) integer polynomial sumterms r_1,…,r_m with non-zero degree, such that r_1',…,r_m' are primitive polynomials, and
(iii) non-zero natural numbers a_1,…,a_m, b_1,…,b_m
such that
E_𝗐𝖼𝗋,⊢ p = r_1^a_1·…· r_m^a_m and
E_𝗐𝖼𝗋,⊢ q = r_1^b_1·…· r_m^b_m.
Let p and q be as assumed in the statement of the Proposition.
Now p and q evaluate to 0 for the same argument vectors in Enl_⊥(ℚ)^n. It follows that p and q must contain precisely the same variables. To see this, assume otherwise that, say,
variable x occurs in p and not in q (the other case works similarly), and
then choose a valuation for the other variables in ℚ
which solves q=0; by additionally assigning the value ⊥ to x, a valuation is obtained for which q=0 and p = ⊥, thereby contradicting the assumptions on p and q.
Both p' and q' then
have non-zero degree and are non-zero polynomials with, using Proposition <ref>, the same zeroes in
()^n.
Now Proposition <ref> can be applied with α≡ p', β≡ q', thus
finding polynomials γ_1,…,γ_m and numbers a_1,…,a_m, b_1,…,b_m such that, as polynomials:
p' = α = γ_1^a_1·…·γ_m^a_m and q' = β = γ_1^b_1·…·γ_m^b_m.
Now choose:
r_1≡γ_1^⋆,…,r_m≡γ_m^⋆. It follows that
Enl_⊥(ℚ) ⊨ p = r_1^a_1·…· r_m^a_m and Enl_⊥(ℚ) ⊨ q = r_1^b_1·…· r_m^b_m.
Moreover, with Proposition <ref>, we know that r_1^a_1·…· r_m^a_m is provably equal to a polynomial
sumterm, say P (by E_𝗐𝖼𝗋,) and that
r_1^b_1·…· r_m^b_m is provably equal to a polynomial sumterm, say Q. So we find
Enl_⊥(ℚ) ⊨ p = P and Enl_⊥(ℚ) ⊨ q = Q.
Lastly, using Proposition <ref>,
E_𝗐𝖼𝗋,⊢ p = P and E_𝗐𝖼𝗋,⊢ q = Q from which one finds that
E_𝗐𝖼𝗋,⊢ p = r_1^a_1·…· r_m^a_m and
E_𝗐𝖼𝗋,⊢ q = r_1^b_1·…· r_m^b_m thereby completing the proof.
The quasi-polynomial sumterm introduces extra variables via a linear monomial. In <cit.>, extra variables were introduced using a linear sum 0 · (x_1 + … + x_n), which takes the same values. From <cit.> we take the following information concerning sumterms:
Let t be a Σ_r,⊥-term; then either
(i) E_𝗐𝖼𝗋,⊢ t = ⊥; or
(ii) there is a quasi-polynomial sumterm p such that E_𝗐𝖼𝗋,⊢ t = p.
In each case the reduction is computable.
§ EQUATIONAL AXIOMS FOR COMMON MEADOWS
We now add to the equational axioms E_𝗐𝖼𝗋, in Table <ref> to make a set of equational axioms for
common meadows:
E_𝖿𝗍𝖼-𝖼𝗆 in Table <ref>. These equations have been presented in
a different but equivalent form
in <cit.>.
By inspection, one can validate soundness:
(Soundness of E_𝖿𝗍𝖼-𝖼𝗆.)
𝖢𝖬 ⊨ E_𝖿𝗍𝖼-𝖼𝗆.
import E_𝗐𝖼𝗋,⊥
x = x/1
-(x/y) = (-x)/y
(x/y) · (u/v) = (x · u)/(y · v)
x/y + u/v = ((x · v) + (y · u))/(y · v)
x/(u/v) = (x · (v · v))/(u · v)
(x/y) + 0 · z = (x + 0 · z)/y
⊥ = 1/0
E_𝖿𝗍𝖼-𝖼𝗆: Equational axioms for the fracterm calculus for common meadows
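As an informal sanity check, the intended model behind these axioms can be simulated by computing in the rationals enlarged with an absorptive error element. The sketch below is not part of the paper: it represents the error element by None and spot-checks two of the axioms above on randomly chosen arguments.

```python
# Naive model of the enlarged rationals: an absorptive error element (None)
# and division totalised by x/0 = error.  Used to spot-check two axioms.
from fractions import Fraction
import random

BOT = None  # the absorptive error element

def add(a, b): return BOT if a is BOT or b is BOT else a + b
def mul(a, b): return BOT if a is BOT or b is BOT else a * b
def div(a, b):
    if a is BOT or b is BOT or b == 0:
        return BOT
    return a / b

def rand_elem():
    if random.random() < 0.2:
        return BOT
    return Fraction(random.randint(-5, 5), random.randint(1, 5))

random.seed(0)
for _ in range(1000):
    x, y, u, v = (rand_elem() for _ in range(4))
    # x/y + u/v = ((x*v) + (y*u)) / (y*v)
    assert add(div(x, y), div(u, v)) == div(add(mul(x, v), mul(y, u)), mul(y, v))
    # (x/y) + 0*z = (x + 0*z)/y   (with z := u)
    zero_z = mul(Fraction(0), u)
    assert add(div(x, y), zero_z) == div(add(x, zero_z), y)
print("both axioms hold on all sampled values")
```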
§.§ On fracterms and flattening
The introduction of division or a unary inverse introduces fractional expressions.
The theory of fractions is by no means clear-cut if the lack of consensus on their nature is anything to go by <cit.>.
However, in abstract data type theory, fractions can be given a clear formalisation as a
syntactic object – as a term over a signature containing _/_ or -^-1 with a certain form.
Rather than fraction we will speak of a fracterm,
following the terminology of <cit.> (item 25 of 4.2).
A fracterm is a term over Σ_cm whose leading function symbol is division _/_. A flat fracterm is a fracterm with only one division operator.
Thus, fracterms have form p/q, and flat fracterms have the form p/q in which p and q do not involve any occurrence of division. Note that fracterms are generally defined as terms of the signature Σ_m of meadows, but we will use them only over the Σ_cm of common meadows (and its subsignatures). The following simplification process is a fundamental property of working with fracterms.
(Fracterm flattening <cit.>.)
For each term t over Σ_cm there exist terms p and q over Σ_r, i.e., both not involving ⊥ or division, such that
E_𝖿𝗍𝖼-𝖼𝗆⊢ t = p/q,
i.e., t is provably equal to a flat fracterm. Furthermore, the transformation is computable.
Immediate by structural induction on the structure of t, noting that any occurrence of ⊥ can be replaced by 1/0.
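The structural induction behind the flattening theorem translates directly into a recursive procedure. The sketch below is illustrative only: terms are nested tuples, the result is a pair (p, q) standing for the flat fracterm p/q, and the clauses follow the usual identities for adding, multiplying and dividing fractions, with the error element translated as 1/0.

```python
# Sketch: flattening a term over the signature of common meadows into a single
# fracterm p/q with p, q free of division and of the error element.
ONE, ZERO = ('num', 1), ('num', 0)

def mul(a, b): return ('mul', a, b)
def add(a, b): return ('add', a, b)

def flatten(t):
    """Return (p, q) such that t = p/q, with p and q division-free."""
    tag = t[0]
    if tag in ('num', 'var'):
        return t, ONE
    if tag == 'bot':                       # error element, provably equal to 1/0
        return ONE, ZERO
    if tag == 'neg':
        p, q = flatten(t[1]); return ('neg', p), q
    p1, q1 = flatten(t[1]); p2, q2 = flatten(t[2])
    if tag == 'add':                       # p1/q1 + p2/q2 = (p1*q2 + q1*p2)/(q1*q2)
        return add(mul(p1, q2), mul(q1, p2)), mul(q1, q2)
    if tag == 'mul':                       # (p1/q1)*(p2/q2) = (p1*p2)/(q1*q2)
        return mul(p1, p2), mul(q1, q2)
    if tag == 'div':                       # (p1/q1)/(p2/q2) = (p1*q2)/(q1*p2*q2)
        return mul(p1, q2), mul(mul(q1, p2), q2)
    raise ValueError(tag)

# Example: flatten  x/(y + 1/z)
term = ('div', ('var', 'x'), ('add', ('var', 'y'), ('div', ('num', 1), ('var', 'z'))))
print(flatten(term))
```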
The set E_𝖿𝗍𝖼-𝖼𝗆 of equational axioms for common meadows
has been designed so that the proof of
fracterm flattening is straightforward; it also allows
other results of use for this paper to be obtained easily.
More compact but logically equivalent axiomatisations can be found. In <cit.>, using inverse rather than division, a set of logically independent axioms for common meadows is given, from which
fracterm flattening is shown, the proof of which then is correspondingly harder.
From now on we will omit brackets where possible, thanks to the associativity and commutativity of addition and multiplication.
§.§ Completeness
We prove that the equations E_𝖿𝗍𝖼-𝖼𝗆 are complete for the fracterm calculus 𝖢𝖬 of common meadows, i.e., for the equational theory of the class of common meadows:
For any equation t=r over Σ_cm the following holds:
E_𝖿𝗍𝖼-𝖼𝗆⊢ t=r if, and only if, t=r is valid in all common meadows.
The soundness of E_𝖿𝗍𝖼-𝖼𝗆 was noted in Proposition <ref>.
For completeness, suppose that t=r is valid in all common meadows, i.e., 𝖢𝖬 ⊨ t=r. In what follows, for brevity, we will write ⊢ e for
E_𝖿𝗍𝖼-𝖼𝗆⊢ e.
By the fracterm flattening Theorem <ref>, we can find Σ_r terms p,q,u,v such that
⊢ t = p/q and ⊢ r = u/v.
By Proposition <ref>, each of these four terms can be written in the form of a quasi-polynomial sumterm:
⊢ p = s_p + 0 · h_p, ⊢ q = s_q + 0 · h_q, ⊢ u = s_u + 0 · h_u, ⊢ v = s_v + 0 · h_v
with s_p, s_q, s_u, s_v polynomial sumterms and h_p, h_q,
h_u and h_v linear monomials.
Substituting these quasi-polynomial sumterms for p,q,u,v and applying axiom 17 of E_𝖿𝗍𝖼-𝖼𝗆, we get
⊢ p/q = ((s_p + 0 · h_p) + 0 · h_q)/s_q and ⊢ u/v = ((s_u + 0 · h_u) + 0 · h_v)/s_v.
So, to prove ⊢ t = r we need to prove
⊢ ((s_p + 0 · h_p) + 0 · h_q)/s_q = ((s_u + 0 · h_u) + 0 · h_v)/s_v
assuming its validity in all common meadows.
Now, notice that in all common meadows s_q and s_v must produce 0 on precisely the same non-⊥ valuations of the variables occurring in either of the two expressions. Six cases will be distinguished, of which the first five are straightforward to deal with:
(i) s_q ≡ 0 and s_v ≡ 0. Here, trivially
⊢ ((s_p + 0 · h_p) + 0 · h_q)/s_q = ⊥ = ((s_u + 0 · h_u) + 0 · h_v)/s_v.
(ii) s_q ≡ 0 and s_v ≢0. This is not possible, because s_q and s_v must produce 0 on the same valuations of the variables, and if, for a polynomial sumterm h, h ≢0, then there are a common meadow Enl_⊥(G(_/_)) and a valuation σ for which h does not evaluate to 0.
(iii) The symmetric case s_q ≢0 and s_v ≡ 0 is not possible for corresponding reasons.
(iv) s_q and s_v are both non-zero numerals, say s_q = a and s_v = b. Now, the factorisations of a and b both contain the same prime numbers. To see this, assume otherwise that, say, a prime c is a divisor of a while c
is not a divisor of b. Then, working in the prime field F_c of characteristic c, s_q takes value 0 while s_v does not. The symmetric case that b has a prime factor c which is not a divisor of a works in the same way.
(v) One of s_q and s_v is a non-zero numeral, while the other one contains one or more variables, i.e., has degree 1 or higher. This situation is impossible, because in that case the polynomial sumterm of non-zero degree takes both the value zero and non-zero values on appropriate arguments in a suitable common meadow (for instance over an algebraically closed field), and hence on appropriate non-⊥ valuations, whereas the non-zero numeral never takes the value 0 there.
(vi) Lastly, we are left with the main case that both s_q and s_v are polynomials with non-zero degree. It suffices to prove
⊢s_p + 0 · (h_p + h_q)/s_q =s_u + 0 · (h_u + h_v)/s_v
from its validity in all common meadows.
Now, as a first step, chose non-zero integers a and b as follows:
a is the of the coefficients of s_q and b is the of the coefficients of s_v.
Further, choose polynomial sumterms ŝ_q and ŝ_v such that
⊢ s_q = a·ŝ_q and ⊢ s_v = b·ŝ_v.
Next, we show that a and b must have the same prime factors. If not, say c is a prime factor of a but not of b. In the algebraic closure F_c of the prime field F_c of characteristic c a solution – i.e., a valuation σ – exists for the equation s_v -1 = 0; this equation must be of non-zero degree as s_v is of non-zero degree.
We find that F_c,σc = 0 so that
F_c,σa = 0, which implies F_c,σ s_q = 0. Furthermore,
F_c,σb≠ 0 and F_c,σŝ_v = 1 so that
F_c,σ s_v = b·ŝ_v ≠ 0, which contradicts the assumptions made above.
Without loss of generality, we may assume that a and b are both positive, and we take an increasing sequence of prime factors c_1,…,c_k with respective positive powers
e_1,…,e_k and f_1,…,f_k such that
a = c_1^e_1·…· c_k^e_k and b = c_1^f_1·…· c_k^f_k.
The next step is to notice that ŝ_q and ŝ_v must have the same zero's in and to
apply Proposition <ref> on the polynomial sumterms ŝ_q and ŝ_v,
thereby obtaining a sequence of irreducible and primitive polynomials r_1,…,r_m with positive powers a_1,…,a_m and b_1,…,b_m such that
⊢ŝ_q = r_1^a_1·…· r_m^a_m and⊢ŝ_v = r_1^b_1·…· r_m^b_m.
By substitution, we now know that
(s_p + 0 · (h_p + h_q))/(c_1^e_1·…· c_k^e_k· r_1^a_1·…· r_m^a_m) = (s_u + 0 · (h_u+ h_v))/(c_1^f_1·…· c_k^f_k· r_1^b_1·…· r_m^b_m).
It suffices to prove the same equation from E_𝖿𝗍𝖼-𝖼𝗆 and to that end
we proceed in the following manner.
First, notice by the usual rules of calculation, available from E_𝖿𝗍𝖼-𝖼𝗆,
1/x = (1+0)/x = 1/x + 0/x = (x + 0 · x)/(x · x) =
((1+0) · x)/(x · x) = x/(x · x).
Then, let K_max be the maximum of e_1,…,e_k,f_1,…,f_k,a_1, …,a_m,
b_1,…,b_m, and let K = K_max+1.
Now, we make repeated use the validity of
(x + 0 · w)/(y · z^g) = ((x · z^h) + 0 · w)/(y · z^(g+h)) (⋆)
for positive integers g and h (in this case for g + h = K) in order to transform the above equation into another, but equivalent,
equation between flat fracterms with the same denominator. The identity (⋆) is a consequence of the validity of the equations
1/x = x/(x · x) and (x + (0 · y)) · z = (x · z) + (0 · y).
Let
t̂ ≡ (s_p + 0 · (h_p + h_q))/(c_1^e_1·…· c_k^e_k· r_1^a_1·…· r_m^a_m) and r̂ ≡ (s_u + 0 · (h_u+ h_v))/(c_1^f_1·…· c_k^f_k· r_1^b_1·…· r_m^b_m).
Moreover, let
t̂̂̂ ≡ ((s_p · c_1^(K-e_1)·…· c_k^(K-e_k)· r_1^(K-a_1)·…· r_m^(K-a_m)) + 0 · (h_p + h_q))/(c_1^K·…· c_k^K· r_1^K·…· r_m^K)
and
r̂̂̂ ≡ ((s_u · c_1^(K-f_1)·…· c_k^(K-f_k)· r_1^(K-b_1)·…· r_m^(K-b_m)) + 0 · (h_u+ h_v))/(c_1^K·…· c_k^K· r_1^K·…· r_m^K).
Here it is assumed that the variables in h_q do not occur elsewhere in t̂̂̂ and that the variables of h_u
do not occur elsewhere in r̂̂̂. With repeated use of the identity (⋆) we find that
⊢t̂ = t̂̂̂ and ⊢r̂ = r̂̂̂.
Summarizing the above, we have established that
⊢ t = t̂ = t̂̂̂, ⊢ r = r̂ = r̂̂̂ and Enl_() t̂̂̂ = r̂̂̂.
Consider the numerators and let
H_t = s_p · c_1^(K-e_1)·…· c_k^(K-e_k)· r_1^(K-a_1)·…· r_m^(K-a_m)
and
H_r = s_u · c_1^(K-f_1)·…· c_k^(K-f_k)· r_1^(K-b_1)·…· r_m^(K-b_m).
Then, from Enl_⊥(ℚ) ⊨ t̂̂̂ = r̂̂̂, it follows that,
working in Enl_⊥(ℚ), for all non-⊥ rational substitutions σ, if Enl_⊥(ℚ),σ ⊨
c_1^K·…· c_k^K · r_1^K·…· r_m^K ≠ 0, then it must be the case that Enl_⊥(ℚ),σ ⊨ H_t = H_r.
So, for all non-⊥ valuations σ,
Enl_⊥(ℚ), σ ⊨ (c_1^K·…· c_k^K· r_1^K·…· r_m^K) · (H_t-H_r) = 0.
Rings of polynomials over have no zero divisors
and the polynomial sumterm c_1^K·…· c_k^K· r_1^K·…· r_m^K is non-zero.
Thus, it follows that, H_t-H_r = 0 as polynomials so that ⊢ H_t = H_r.
Finally, we complete the proof by noticing that
⊢ H_t + 0 · (h_p + h_q) = H_r +0 · (h_u+ h_v)
because otherwise both terms contain different variables which cannot be the case.
To see this latter point, notice that if, say, x occurs in H_t + 0 · (h_p + h_q) and not in H_r +0 · (h_u+ h_v), then, because H_t = H_r, a contradiction with Enl_⊥(ℚ) ⊨ t̂̂̂ = r̂̂̂ arises: contemplate any valuation σ that satisfies
c_1^K·…· c_k^K · r_1^K·…· r_m^K - 1 = 0, a requirement which is independent of x. Indeed, now one side of the equation depends on x while the other does not, which is a contradiction, thereby completing the proof.
The fracterm calculus of common meadows is decidable.
Given an equation e, if it is true in all common meadows then it is provable from E_𝖿𝗍𝖼-𝖼𝗆. The equations provable from this finite set E_𝖿𝗍𝖼-𝖼𝗆 are computably enumerable. Thus, the true equations of the fracterm calculus of common meadows are computably enumerable.
If e is not true in all common meadows then e fails in an algebraic closure of some prime field or F_p for some prime p. These fields are computable and can be computably enumerated uniformly <cit.>, and a computable search for a counterexample to e attempted. Thus, the false equations of the fracterm calculus of common meadows are computably enumerable.
In consequence, the fracterm calculus of common meadows is decidable.
Of course, this enumeration argument for decidability is crude. However, we note that the completeness proof for Theorem <ref> is effective because the transformations which are used are all computable – including the earlier necessary lemmas such as flattening (Theorem <ref>) and reductions to quasi-polynomials (Proposition <ref>). From these transformations, which map the provability of equations to the identity of terms, an alternate proof of decidability can be constructed that offers an algorithm for the provability and validity of equations and invites a further independent analysis.
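The counterexample search mentioned above can be illustrated in miniature. The toy search below only checks the prime fields F_p themselves for small p (with division by zero yielding an error element), so it can refute some false equations but is not the full procedure over algebraic closures.

```python
# Toy counterexample search over prime fields F_p (illustrative only).
from itertools import product

ERR = object()   # stands for the error element

def f_div(a, b, p):
    if a is ERR or b is ERR or b % p == 0:
        return ERR
    return (a * pow(b, p - 2, p)) % p      # Fermat inverse in F_p

def check(eq, nvars, primes=(2, 3, 5, 7, 11)):
    """eq(xs, p) -> (lhs, rhs); return a falsifying (p, xs) or None."""
    for p in primes:
        for xs in product(range(p), repeat=nvars):
            lhs, rhs = eq(xs, p)
            if lhs != rhs:
                return p, xs
    return None

# Candidate equation:  x / x = 1  (fails at x = 0, where x/x is the error element)
def candidate(xs, p):
    (x,) = xs
    return f_div(x, x, p), 1 % p

print(check(candidate, 1))   # (2, (0,)): 0/0 is the error element, not 1
```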
§ CONCLUDING REMARKS
§.§ Matters arising
The completeness result distinguishes the axioms in E_𝖿𝗍𝖼-𝖼𝗆 as an abstract characterisation of fields with a simple, workable error flag, i.e., the common meadows. Being close to the axioms for commutative rings, the axioms E_𝖿𝗍𝖼-𝖼𝗆 are not unfamiliar and hopefully memorable; they establish a firm platform for the algebraic and logical study of an attractive practical semantics for reasoning about arithmetical data types.
The equational axiomatisation E_𝖿𝗍𝖼-𝖼𝗆 has been optimised for ease of use in the paper (e.g., especially flattening), and we have not paid attention to the logical independence of the various axioms. Some of the axioms of E_𝖿𝗍𝖼-𝖼𝗆 are redundant, given the other ones. Given their arithmetic purpose, the relationships between axiomatisations of common meadows and axiomatisations of rings and fields are of mathematical interest and practical value. Finding attractive sets of axioms which are
also minimal is a topic worthy of investigation in its own right. In the revision of <cit.> the same equational theory, though equipped with inverse rather than with division, is given an axiomatisation with logically independent axioms.
Three open questions stand out from the results in this paper:
(i) Is the fracterm calculus of the common meadow Enl_⊥(ℚ(_/_)) of rationals decidable?
(ii) Can a finite basis for the fracterm calculus of common meadows with orderings
be found?
(iii) Can the fracterm calculus of common meadows be axiomatised by means of a specification which constitutes a complete term rewriting system?
In the matter of (ii), this was done in the setting of 1/0 = 0 using a sign function in <cit.>.
In the matter of (iii), a negative result in a simplified case was obtained in <cit.>.
Notwithstanding these open questions, we consider common meadows to provide an attractive basis for the formal specification of arithmetics for computation.
§.§ Background to the problem of division by zero
Completely central to quantification and computation are the rational numbers ℚ. When we measure the world using a system of units and subunits, we use the rational numbers. Today's computers calculate only within subsets of the rational numbers. An early objective for our theory is to design and analyse abstract data types for the rational numbers. Designing a data type for the rationals requires algebraic minimality, which can be obtained by introducing either division or inverse as an operation. Thus, division of rational numbers is essential and must be total, which requires choosing a value for 1/0.
Using various semantical flags to be found in practical computations to totalise division – such as 𝖾𝗋𝗋𝗈𝗋, ∞, NaN, the last standing for `not a number' – we have constructed equational specifications (under initial algebra semantics) for the following data types of rational numbers:
Involutive meadows, where an element of the meadow's domain is used for totalisation, in particular 1/0 = 0, <cit.>.
Common meadows, the subject of this paper, where a new external element ⊥ that is `absorptive' is used for totalisation, 1/0 = ⊥ <cit.>;
Wheels, where one external element ∞ is used for totalisation, 1/0 = ∞ = -1/0, together with an additional external error element to help control the side effects of infinity <cit.>;
Transrationals, where besides the error element two external signed infinities are added, one positive and one negative, so that
division is totalised by setting 1/0 = ∞ and -1/0 = -∞ <cit.>;
Symmetric transrationals, where the error element ⊥, two external signed infinities +∞, -∞, and two infinitesimals +ι, -ι are added, so that
division is totalised by setting 1/0 = ⊥, as with common meadows, and the other elements are used to manage overflows and underflows <cit.>; specifically, totality is separated from overflows and underflows.
In practice, the first four of these models are based on data type conventions to be found in theorem provers, common calculators, exact numerical computation and, of course, floating point computation, respectively. The last, the symmetric transrationals, we developed to extend the scope and improve the algebra of the transrationals.
For some historical remarks on division by zero, we mention <cit.>, and for a survey we mention <cit.>.
Of these five semantical options it may be helpful to compare the common meadows with one of the above. The simplest choice appears to be the involutive meadows, which have been deployed in logical arguments and has its advocates <cit.>.
In our <cit.>, to create an equational specification for the rational numbers we introduced totality by setting 0^-1=0. This led us to the study of involutive meadows <cit.>, and subsequently to the broad programme of work cited above.
An explicit logical discussion of the proposal to adopt 0^-1 = 0 dates back at least to Suppes <cit.>, and led to theoretical work of Ono <cit.>. A completeness result was shown by Ono <cit.>. In <cit.>, the fracterm calculus of involutive meadows was introduced. Completeness for the Suppes-Ono fracterm calculus is shown with a different proof in <cit.>. An advantage of the latter approach to completeness is that it generalises to the case of ordered meadows, see also <cit.>.
Although the flattening property is quite familiar from the school algebra of rational numbers, it stands in marked contrast with the abstract situation for involutive meadows.
In <cit.> it is shown that, with the axioms for involutive meadows, terms are provably equal to only finite sums of flat fracterms; and in <cit.>, it is shown that arbitrarily large numbers of summands may be needed for that purpose. Thus, the involutive meadows run into difficulties that the common meadows do not.
Our results here and elsewhere point to the fact that arithmetical abstract data types with error flags are theoretically superior among the many practical conventions we have studied. This design decision is attractive semantically since ⊥ as an error flag can have a number of different interpretations in computations.
Furthermore, much of the algebra we have encountered for common meadows is intimately and agreeably connected with the theories of rings and fields; and it serves rather well the theory of data types of rational numbers, which must be a starting point for theorising.
tocsectionReferences
99
AndersonVA2007
James A. Anderson, Norbert Völker, and Andrew A. Adams. 2007.
Perspecx Machine VIII, axioms of transreal arithmetic.
In J. Latecki, D. M. Mount and A. Y. Wu (eds), Proc. SPIE 6499. Vision Geometry XV, 649902, 2007.
AndersonB2021
James A. Anderson and Jan A. Bergstra. 2021.
Review of Suppes 1957 proposals for division by zero.
Transmathematica, (2021).
<https://doi.org/10.36285/tm.53>.
Bergstra2019b
Jan A. Bergstra.
Division by zero, a survey of options.
Transmathematica, (2019).
<https://doi.org/10.36285/tm.v0i0.17>.
Bergstra2020
Jan A. Bergstra. 2020.
Arithmetical data types, fracterms, and the fraction definition problem.
Transmathematica, (2020).
< https://doi.org/10.36285/tm.33>.
BergstraBP2013
Jan A. Bergstra, Inge Bethke and Alban Ponse. 2013.
Cancellation meadows: a generic basis theorem and some applications.
The Computer Journal, 56 (1) (2013), 3–14.
Also <arxiv.org/abs/0803.3969>.
BergstraBP2015
Jan A. Bergstra, I. Bethke, and A. Ponse.
Equations for formally real meadows.
Journal of Applied Logic, 13 (2) (2015), 1–23.
BergstraHT2009
Jan A. Bergstra, Yoram Hirshfeld, and John V. Tucker.
Meadows and the equational specification of division.
Theoretical Computer Science, 410 (12) (2009), 1261–1271.
BergstraM2015
Jan A. Bergstra and C.A. Middelburg.
Division by zero in non-involutive meadows.
Journal of Applied Logic, 13(1): 1–12 (2015).
<https://doi.org/10.1016/j.jal.2014.10.001>
BergstraM2016a
Jan A. Bergstra and Cornelis A. Middelburg.
Transformation of fractions into simple fractions in divisive meadows. 2015.
Journal of Applied Logic, 16 (2015), 92–110. Also <https://arxiv.org/abs/1510.06233>.
BergstraP2015
Jan A. Bergstra and Alban Ponse. 2015.
Division by zero in common meadows.
In R. de Nicola and R. Hennicker (eds), Software, Services, and Systems: Wirsing Festschrift,
Lecture Notes in Computer Science 8950, Springer, 2015, 46–61.
For an improved version (2021), see: .
BergstraP2016
Jan A. Bergstra and Alban Ponse. 2016.
Fracpairs and fractions over a reduced commutative ring.
Indigationes Mathematicae,
27, (2016), 727–748.
Also <https://arxiv.org/abs/1411.4410>.
BergstraT1995
Jan A. Bergstra and J.V. Tucker.
Equational specifications, complete term rewriting systems, and computable and
semicomputable algebras.
Journal of the ACM, Vol. 42 (6), 1194-1230 (1995).
BergstraT2007
Jan A. Bergstra and John V. Tucker. 2007.
The rational numbers as an abstract data type.
Journal of the ACM, 54 (2) (2007), Article 7.
BergstraT2020
Jan A. Bergstra and John V. Tucker. 2020.
The transrational numbers as an abstract data type.
Transmathematica, (2020).
<https://doi.org/10.36285/tm.47>.
BergstraT2021a
Jan A. Bergstra and John V. Tucker. 2021.
The wheel of rational numbers as an abstract data type.
In Roggenbach M. (editor), Recent Trends in Algebraic Development Techniques. WADT 2020.
Lecture Notes in Computer Science 12669, Springer, 2021, 13–30.
BergstraT2022b
Jan A. Bergstra and John V. Tucker. 2022.
On the axioms of common meadows: Fracterm calculus, flattening and incompleteness.
The Computer Journal. Online first, 8pp.
<https://doi.org/10.1093/comjnl/bxac026>
BergstraT2021c
Jan A. Bergstra and J.V. Tucker.
Partial Arithmetical Data Types of Rational Numbers and their Equational Specification
Journal of Logical and Algebraic Methods in Programming, 128, August 2022, 100797.
<https://doi.org/10.1016/j.jlamp.2022.100797>
BergstraT2021b
Jan A. Bergstra and John V. Tucker. 2022.
Totalising partial algebras: Teams and splinters.
Transmathematica,
<https://doi.org/10.36285/tm.57>
BergstraT2022c
Jan A. Bergstra and John V. Tucker,
Symmetric Transrationals: The Data Type and the Algorithmic Degree of its Equational Theory,
In N. Jansen et al. (eds.) A Journey From Process Algebra via Timed Automata to Model Learning - A Festschrift Dedicated to Frits Vaandrager on the Occasion of His 60th Birthday, Lecture Notes in Computer Science 13560, 63-80.
dosReisGA2016
Tiago S. dos Reis, Walter Gomide, and James A. Anderson. 2016.
Construction of the transreal numbers and algebraic transfields.
IAENG International Journal of Applied Mathematics,
46 (1) (2016), 11–23. <http://www.iaeng.org/IJAM/issues_v46/issue_1/IJAM_46_1_03.pdf>
EhrichWL1997
Hans-Dieter Ehrich, Markus Wolf, and Jacques Loeckx.
Specification of Abstract Data Types.
Vieweg Teubner, 1997.
EhrigMahr1985
H. Ehrig and B. Mahr.
Fundamentals of Algebraic Specification 1: Equations und Initial Semantics, EATCS Monographs on Theoretical Computer Science, Vol. 6, Springer, 1985.
ShepherdsonF1956
Albrecht Fröhlich and John C. Shepherdson
Effective procedures in field theory, 1956
Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences 248, 407-432
<http://doi.org/10.1098/rsta.1956.0003>
NeumannGoldstine1947
John von Neumann and Hermann Goldstine.
Numerical inverting of matrices of high order. 1947.
Bulletin American Mathematical Society, 53 (11), 1021-1099.
Lang2002
Serge Lang.
Algebra.
Graduate Texts in Mathematics, Vol. 211, Third revised edition. Springer.
Mal'tsev1973
A.I. Mal'tsev.
Algebraic systems, Springer, 1973.
MeinkeTucker92
K. Meinke and J. V. Tucker.
Universal Algebra.
In S Abramsky and D Gabbay and T Maibaum,
Handbook of Logic for Computer Science,
Oxford University Press, 1992, 189–411.
Ono1983
Hiroakira Ono. 1983.
Equational theories and universal theories of fields.
Journal of the Mathematical Society of Japan, 35 (2) (1983), 289-306.
OkumuraSM2017
Hiroshi Okumura, Saburou Saitoh and Tsutomu Matsuura. 2017
Relations of zero and ∞.
Journal of Technology and Social Science, (2017) 1 (1).
Okumura2018
Hiroshi Okumura.
Is it really impossible to divide by zero?
Biostatistics and Biometrics Open Acc. J. 7 (1)
555703. DOI: 10.19080/BBOJ.2018.07.555703, (2018)
Setzer1997
Anton Setzer. 1997.
Wheels (Draft), Unpublished. 1997.
StoltenbergTucker1999
Viggo Stoltenberg-Hansen and John V. Tucker. 1999.
Computable rings and fields,
in Edward Griffor (ed), Handbook of Computability Theory,
Elsevier, 1999, 363-447.
Suppes1957
Patrick Suppes. 1957.
Introduction to Logic.
Van Nostrand Reinhold, 1957.
Tucker2022
John V Tucker. 2022.
Unfinished Business: Abstract data types and computer arithmetic.
BCS FACS FACTS, The Newsletter of the Formal Aspects of Computing Science
BCS Specialist Group, Issue 2022-1, February 2022, 60-68.
<https://www.bcs.org/media/8289/facs-jan22.pdf>
Wechler1992
Wolfgang Wechler.
Universal Algebra for Computer Scientists.
Springer-Verlag, 1992.
> |
http://arxiv.org/abs/2307.06240v1 | 20230712152826 | DSSE: a drone swarm search environment | [
"Manuel Castanares",
"Luis F. S. Carrete",
"Enrico F. Damiani",
"Leonardo D. M. de Abreu",
"José Fernando B. Brancalion",
"Fabrício J. Barth"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.RO",
"cs.SY",
"eess.SY",
"I.2.6; I.6.7"
] |
New Three and Four-Dimensional Toric and Burst-Error-Correcting Quantum Codes
Cibele Cristina TrincaThe author is with the Department of Biotechnology and Bioprocess Engineering, Federal University of Tocantins, Gurupi-TO, Brazil (e-mail: [email protected])., Reginaldo Palazzo Jr.The author is with the School of Electrical and Computer Engineering, State University of Campinas, Brazil (e-mail: [email protected])., Ricardo Augusto WatanabeThe author is with the School of Mathematics, Statistics and Scientific Computing, State University of Campinas, Brazil (e-mail: [email protected]).,
Clarice Dias de AlbuquerqueThe author is with the Science and Technology Center, Federal University of Cariri, Juazeiro do Norte, Brazil (e-mail: [email protected])., J. Carmelo InterlandoThe author is with the Department of Mathematics and Statistics, San Diego State University, San Diego, CA, USA (e-mail: [email protected]). and
Antonio Aparecido de AndradeThe author is with the Department of Mathematics, São Paulo State University, Brazil (e-mail: [email protected]).
August 12, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The Drone Swarm Search project is an environment, based on PettingZoo, that is to be used in conjunction with multi-agent (or single-agent) reinforcement learning algorithms. It is an environment in which the agents (drones), have to find the targets (shipwrecked people). The agents do not know the position of the target and do not receive rewards related to their own distance to the target(s). However, the agents receive the probabilities of the target(s) being in a certain cell of the map. The aim of this project is to aid in the study of reinforcement learning algorithms that require dynamic probabilities as inputs.
§ INTRODUCTION
Every year, vast bodies of water worldwide claim numerous missing individuals. According to the World Health Organization (WHO), there are an estimated 236,000 annual drowning deaths worldwide, making it the third leading cause of unintentional injury death worldwide and accounting for 7% of all injury-related deaths <cit.>. With over 71% of the earth's surface covered by oceans, according to the U.S. Geological Survey (USGS) <cit.>, finding these missing individuals is no easy task, due to the complexity of oceanic environments and the vastness of the search areas. However, drone swarms have emerged as a promising tool for searching for missing individuals.
The use of drones in rescue operations has resulted in successfully saving 940 people while being utilized in 551 rescue incidents so far <cit.>. The capacity of drones to reach difficult terrain and inaccessible areas, as well as their ability to capture real-time images and videos, has proved to be helpful in search and rescue missions.
The accuracy of search and rescue missions is believed to be significantly increased by the incorporation of Artificial Intelligence (AI) technology <cit.>, as it can leverage probabilistic models based on the ocean’s behaviors, as well as the last known location of the people being rescued.
Several solutions have been proposed in the last years to solve this problem <cit.>, in special, using reinforcement learning algorithms. The reinforcement algorithm does not work alone, it needs an environment to guide it. Because of that, all of the articles cited below have created their own environment to recreate the real-world scenario in a way that the algorithm could understand what was happening and what has to be done.
However, none of the papers that claimed to use personalized environments made them available to the public; all of the tools were developed for internal use only.
The fact of not publishing those tools as public resources can be seen as a limitation in terms of reproducibility, transparency, and collaboration in the field of reinforcement learning. It restricts the ability of other researchers and practitioners to build upon or validate the work conducted in those papers. For this reason, the goal of this paper is to provide a tool for everyone, so that interested parties can search for better solutions to the problem of finding shipwrecked people using a drone swarm.
The rest of the paper is organized as follows. The next section presents the simplifications adopted in order to build the environment. Section <ref> presents the theory related to the development of the probability matrix. Section <ref> describes the target's movement algorithm. Sections <ref> and <ref> describe the reward function and the implementation of the environment as a python library, respectively. Section <ref> gives details about the relation between this tool and the real world. Finally, Section <ref> offers conclusions and directions for future work.
§ ADOPTED SIMPLIFICATIONS
To achieve what was proposed, a simulation of the real-world situation had to be created, but since the real world is incredibly complex, a few premises had to be set to simplify the overall scenario. First of all, there will only be one shipwrecked person, since adding multiple people would increase the complexity of the algorithm. It is considered that once the drone is over the person and executes the action of search, it will identify the person. How the drone identifies the person is not considered, since this is only a simulated environment. The drone will be able to move only in the area delimited by a grid that covers the region where the shipwrecked person is located. The drone can only execute five different actions: moving up, down, left, right, and searching. The search was defined as an action because, in this scenario, the drone will be flying at a high altitude so that it can visualize a bigger area; once it identifies a possible target, it will descend and verify that it is in fact a person, therefore the search action was included to represent this process. The drone also does not move diagonally, to simplify the model. The environment will not simulate the wind or any natural disaster which may affect the drone's flight.
The drones have a restriction: their battery life. They can only perform a certain number of steps before their battery runs out. The person's movement in the ocean will also not be defined by complex ocean modeling, but by a simple vector that will force the person to drift away over time. Finally, the drone will be placed in a grid similar to the one below (image <ref>), where each cell has a probability representing the chances of the shipwrecked person being located in that area.
§ UNDERSTANDING THE PROBABILITY MATRIX
Based on previous studies <cit.> a probability matrix will be created to demonstrate the chances of the shipwrecked person being in a given cell. The matrix has the same dimensions as the position matrix, and it is the primary piece of information used by the agent. Researchers with similar areas of study <cit.> used multiple metrics in order to define the values of the matrix. For example, the wind and flow of the ocean can greatly impact the trajectory of a shipwrecked person. This type of data can vary depending on the place, day, and time. However, since the modeling of the ocean is not a priority of the project, a directional vector will act as the ocean’s current, which will subsequently drag the shipwrecked person to different places on the map. This will, therefore, change the value of each cell in the probability matrix. Said directional vector along with the initial position of the shipwrecked person are inputs that can be defined by the user. This allows the simulation of a scenario in which the user has knowledge of the ocean’s current, along with the shipwrecked person’s last known localization.
Using the directional vector and the initial position of the shipwrecked person, the probability matrix can be created. In the first state, a probability of one hundred percent is placed in the cell where the person was last seen (image <ref>). As time progresses, the supposed position of the person moves according to the directional vector. Once the assumed position of the person moves, according to the vector, the probabilities are distributed around this new cell. Additionally, a circumference is placed around the cell in which the person is assumed to be. This circumference dictates the area in which the person could be (probability greater than zero). The radius of this circle is increased as time goes by, to represent the growing uncertainty of the position of the shipwrecked person. All of the cells that are inside of the circle receive a probability, which is calculated using the bi-dimensional Gaussian function (equation <ref>).
f(x,y) = A × e^(-((x-x_0)^2/(2σ_x^2)+(y-y_0)^2/(2σ_y^2)))
where A is the amplitude, x_0 and y_0 give the supposed position of the person, and σ_x, σ_y define how the function is stretched along the matrix. Except for the supposed position of the shipwrecked person, all of these parameters are inputs. Furthermore, this formula will create a Gaussian distribution whose maximum is A. In order to transform the values into probabilities, the value of each cell is divided by the sum of all of the values returned by the function.
As time transpires, the probability matrix will gradually change, because of the movement of the current, as well as the increase in uncertainty relating to the position of the shipwrecked person. This eventually creates a probability matrix in which multiple cells contain probabilities (image <ref>). The matrix is then used to determine the proper position of the shipwrecked person, which will not necessarily be in the cell with the highest probability.
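A simplified sketch of this construction is given below. The drift vector, the amplitude, the spreads and the rate at which the radius grows are illustrative choices and not necessarily those used in the actual DSSE implementation.

```python
# Simplified sketch of the probability-matrix construction described above.
import numpy as np

def probability_matrix(grid_size, last_seen, drift, step,
                       A=1.0, sigma_x=1.5, sigma_y=1.5, base_radius=1.0):
    x0 = last_seen[0] + drift[0] * step       # supposed position drifts with the current
    y0 = last_seen[1] + drift[1] * step
    radius = base_radius + step               # uncertainty grows with time
    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    gauss = A * np.exp(-(((xs - x0) ** 2) / (2 * sigma_x ** 2)
                         + ((ys - y0) ** 2) / (2 * sigma_y ** 2)))
    gauss[(xs - x0) ** 2 + (ys - y0) ** 2 > radius ** 2] = 0.0   # zero outside the circle
    total = gauss.sum()
    return gauss / total if total > 0 else gauss

matrix = probability_matrix(grid_size=10, last_seen=(2, 2), drift=(0.5, 0.2), step=4)
print(matrix.round(3))
```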
§ UNDERSTANDING TARGET'S MOVEMENT
Since the goal of this library is not to simulate in depth the ocean's movements or the person's movement in the ocean, the person's movement in the grid will be created using the probability matrix described above. In the simulation, the person will start in a cell chosen by the user, where the probability of the person being there is 100%. In the next step, the probability will disperse, as described above. Considering the dispersed probability matrix, the person will look at all the adjacent cells' probabilities and will decide either to move or to stay in its current spot; this decision is based on the probabilities of the adjacent cells. Therefore it is safe to assume that most of the time the person will choose to go to the highest-probability cell, making its movement follow the high-probability area throughout the simulation.
This movement strategy was adapted to simulate the target's decision-making, when searching for a person in the ocean, it is doubtful they will stay in the same place, they will constantly be trying to make decisions to survive, meaning, they would most likely move around. Although in a real situation, a shipwrecked person may not move as fast as the target in the simulation, the movement is also designed to simulate the uncertainty of a person being in a cell. Even though the person may not be in a high-probability cell, the agent still must search the cell, because the person will most probably be located in one of the other high-probability cells.
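One plausible reading of this movement rule is sketched below: at each step the target compares the probabilities of its own cell and of the four adjacent cells and picks one of them, favouring higher probabilities. Sampling proportionally to probability is an assumption made here for illustration; the package may implement the choice differently.

```python
# Sketch of a probability-driven movement step for the target.
import numpy as np

def move_target(position, prob_matrix, rng):
    rows, cols = prob_matrix.shape
    r, c = position
    candidates = [(r, c)] + [(r + dr, c + dc)
                             for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                             if 0 <= r + dr < rows and 0 <= c + dc < cols]
    weights = np.array([prob_matrix[cell] for cell in candidates])
    if weights.sum() == 0:                       # nowhere attractive: stay put
        return position
    choice = rng.choice(len(candidates), p=weights / weights.sum())
    return candidates[choice]

rng = np.random.default_rng(0)
uniform = np.full((10, 10), 0.01)
print(move_target((2, 2), uniform, rng))
```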
§ ENVIRONMENT REWARDS
The reward is a simple concept: the agent is penalized if it does something that it is not supposed to do and rewarded if it does something that leads it to its goal. Any reinforcement learning algorithm works in such a way that it will always try to maximize the agent's rewards, so if the agent does something it is not supposed to do, it will receive a massive negative reward so that it learns not to do it again.
In this environment the agent receives a reward of 1 per action by default; this is because the drone needs to be incentivized to walk and explore the grid. The drone (agent) will receive a reward of -100000 in case it leaves the grid. This is because in early experiments, when the reward for leaving the grid was -1000, the agent would learn that leaving the grid instantly would give a better reward than searching and not finding the target: if it left the grid the reward would be only -1000, while if it searched and did not find the target it would be about -2000. Therefore the reward for leaving the grid was raised to -100000 so that the agent quickly learns not to leave the grid.
The agent will receive a reward of -1000 if it does not find the target. This is because the agent must be penalized for not finding the target but it can’t be as big as leaving the grid since the agent must still be incentivized to look for the target. The agent will receive a reward of -2000 in case of collision. This is because the reward needs to be lower than the case in which the agent does not find the target, otherwise, the agents would learn to crash so that they don't get a worse reward.
In case the agent searches, it will receive a reward according to the probability of the cell, so if the drone searches in a cell with a probability of 80% the drone will receive a reward equal to 80, this is because the agent needs to learn that it is better to search in higher probability cells rather than waste time searching in the lower probability areas.
Finally, if the drone finds the target it will receive a reward according to the equation <ref>. This is because the agent needs to be incentivized to find the target in the fewest moves possible. So if the total timestep is 500 and it finds the target in the timestep 480, it will receive a reward of 200. Still, if the agent finds in 100 steps, it will receive a reward of 4000, greatly incentivizing him to find the target in the quickest way possible.
r = 10000 + 10000 * (1-timestep/timestep_limit)
The variable timestep represents the index of the action that is taking place. For example, if an agent has executed 50 actions, the timestep is equal to 50. timestep_limit is the maximum number of actions that an agent can perform in an episode. Table <ref> summarizes the environment rewards.
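The reward scheme described above can be summarised in a single function. The sketch below mirrors the values quoted in the text and the equation above; it is a simplified reconstruction with illustrative event names, not the actual DSSE code.

```python
# Simplified reconstruction of the reward scheme described above.
def reward(event, cell_probability=0.0, timestep=0, timestep_limit=500):
    if event == "move":                    # default reward per action
        return 1
    if event == "leave_grid":
        return -100000
    if event == "collision":
        return -2000
    if event == "not_found":               # the target is not found
        return -1000
    if event == "search":                  # reward proportional to the cell probability
        return 100 * cell_probability      # e.g. an 80% cell gives 80
    if event == "found":                   # earlier finds pay more
        return 10000 + 10000 * (1 - timestep / timestep_limit)
    raise ValueError(event)

print(reward("search", cell_probability=0.8))   # 80.0
```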
§ ENVIRONMENT IMPLEMENTATION
The implementation of Reinforcement Learning algorithms implies the necessity of an environment, which the agents can act upon. This environment also provides multiple mechanics that are crucial for reinforcement learning. For example, the reward that each agent receives is determined by the environment. Moreover, the actions and their consequences are all embedded inside this structure. All of these aspects, and more, are necessary for the development of any reinforcement learning algorithm. Because of such dependency, it is important to maintain a certain structure and standard that allows these algorithms to be implemented in different environments, with small adaptations. The step function can be used as an example to understand the dynamic previously explained. This function is used whenever the algorithm wants the agents to perform an action. For example, in this project, the step function is responsible for moving the drones and performing the search action, whenever the algorithm wants them to. In addition, after the action is performed, its reward is calculated and returned by the same function, along with other important information. The inputs and outputs of this function, and their respective data structures (lists, dictionaries, variables, etc) all have to be in line with a certain norm. This way, when a reinforcement learning algorithm is implemented on top of this environment, the programmer can be certain that the step function’s inputs and outputs will have the same structure as other environments. The same can be said for the other functions that these algorithms require.
For this library, it was decided that the environment would follow the norms of a project called PettingZoo <cit.>, which makes available an array of different environments. PettingZoo is maintained by the Farama Foundation, alongside other tools developed to support reinforcement learning research such as Gymnasium, Minari, and several others. This library does not include training algorithms, as its sole purpose is to deliver specific environments. PettingZoo contains environments for multiple Atari games, such as Space Invaders, Pong, Mario Bros, and many more. This way, reinforcement learning algorithms can be created on top of these video games. These environments can be understood as a shell that can fit many different training algorithms, so that users can study and improve different training algorithms without having to worry about recreating the environments.
Finally, a python package[https://pypi.org/project/DSSE/https://pypi.org/project/DSSE/] with this environment was created, with the intention of making it available for future studies. The source code for this package is also publicly available on GitHub[https://github.com/PFE-Embraer/drone-swarm-searchhttps://github.com/PFE-Embraer/drone-swarm-search]. This repository contains a detailed documentation of the environment, including installation instructions, thorough descriptions of functions and variables, and an example of how this environment is to be used.
§ ENVIRONMENT AND THE REAL WORLD
For the environment to be useful in a real scenario, the dimensions of the environment need to be determined in relation to the real world. For example, if the environment were to be used in a real-life scenario, how would the grid size be defined? For that, two pieces of information will be needed, the search zone size and the cell size. The search zone size is an independent variable that will change with every scenario, but most importantly, the cell size must be defined as a constant. Given that drones are only allowed to fly at about 120 meters <cit.>, because of aerial space interference, and considering the Wiris ProSc camera <cit.>, which is a camera used for environmental research, archeological and geological research, and so on, the drone’s field of view will be about 16900m^2 at an altitude of 120m.
The environment considers that the cell is defined by what the drone can view, so whenever the drone is in a cell it can scan the whole area of the cell for the target. Therefore it is safe to assume that the size of each cell will also be 130m × 130m considering the camera and altitude above. Thus in a real-world case where the search zone is 1km × 1km, the environment will need to be created with a grid size equal to 8, so that the grid in the simulation represents an area of 1.04km × 1.04km. It is important to note that this cell size is defined by the camera and altitude chosen, changing these parameters will change the cell size.
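The mapping from a real search zone to a grid size therefore reduces to a one-line calculation, sketched below with the 130 m cell derived from the camera and altitude discussed above.

```python
# Mapping a real search zone to a grid size, using the 130 m x 130 m cell
# that follows from the camera footprint at 120 m altitude.
import math

CELL_SIZE_M = 130.0

def grid_size_for(search_zone_m):
    return math.ceil(search_zone_m / CELL_SIZE_M)

n = grid_size_for(1000.0)    # a 1 km x 1 km search zone
print(n, n * CELL_SIZE_M)    # 8 cells -> the grid spans 1040 m x 1040 m
```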
§ CONCLUSION AND FUTURE WORK
The environment, published as a python package[https://pypi.org/project/DSSE/https://pypi.org/project/DSSE/], was designed with the intention of allowing external researchers to utilize and modify it as needed. This open approach encourages others to build upon the project, potentially achieving even more remarkable results than those demonstrated here. By engaging a wider community, the utilization of this environment has the potential to drive further improvements, thereby influencing future algorithmic advancements.
One of the biggest limitations of this environment is the fact that the shipwrecked person will not leave the grid if it reaches its edge, but will simply move around in the corner until the episode is complete. Second, the drones' actions are discrete, which does not represent the real world, where the drone is free to move in any direction using a continuous space. Finally, there is also a limitation with the ocean’s simulation and the target’s movements. Although it was sufficient for the first delivery, it may not be for a real-life situation, so it would be interesting to add a more sophisticated way to calculate the probability matrix as well as a more complex simulation for the target's movement.
|
http://arxiv.org/abs/2307.04168v1 | 20230709133056 | Possible open charm molecular pentaquarks from $Λ_cK^{(*)}/Σ_cK^{(*)}$ interactions | [
"Rui Chen",
"Qi Huang"
] | hep-ph | [
"hep-ph"
] |
[email protected]
[email protected]
^1Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China
^2Department of Physics, Nanjing Normal University, Nanjing 210023, China
In this work, we adopt the one-boson-exchange model to study the Y_cK^(*) (Y_c=Λ_c, Σ_c) interactions. After considering both of the S-D wave mixing effects and the coupled channel effects, we can predict several possible open-charm molecular pentaquarks, i.e., the single Σ_cK^* molecular states with I(J^P)=1/2(1/2^-), 1/2(3/2^-) and 3/2(1/2^-), the coupled Λ_cK^*/Σ_cK^* molecular states with 1/2(1/2^-) and 1/2(3/2^-), and the coupled Σ_cK/Λ_cK^*/Σ_cK^* molecular state with 1/2(1/2^-). Meanwhile, we extend our study to the Y_cK̅^(*) interactions, our results suggest the Σ_cK̅ system with I(J^P)=1/2(1/2^-), the Σ_cK̅^* systems with 1/2(1/2^-), 1/2(3/2^-), and 3/2(3/2^-), the coupled Λ_cK̅^*/Σ_cK̅^* system with 1/2(1/2^-), and the Σ_cK̅/Λ_cK̅^*/Σ_cK̅^* system with 1/2(1/2^-) can be the prime molecular candidates.
12.39.Pn, 14.20.Pt, 13.75.Jz
Possible open charm molecular pentaquarks from Λ_cK^(*)/Σ_cK^(*) interactions
Qi Huang^2[Corresponding author]
August 12, 2023
=============================================================================
§ INTRODUCTION
In the past decades, the observations of X/Y/Z/P_c/T_cc structures have stimulated theorists' extensive interest in
exploring the properties of exotic states. Among the possible configurations, the hadronic molecular state, which is composed of color-singlet hadrons, plays an important role in explaining the observed exotic structures. The main reason for introducing such a configuration is that many observed X/Y/Z/P_c/T_cc structures are near specific mass thresholds of hadron pairs, which naturally raises the question of whether these observations can be explained under the framework of the molecular state (one can see Refs. <cit.> for a detailed review). Thus, carrying out the study of the hadronic molecular state has become an active and important research field in hadron physics. It is not only helpful to reveal the underlying structures of these near-threshold X/Y/Z/P_c/T_cc structures, but can also improve our knowledge of the non-perturbative behavior of quantum chromodynamics (QCD).
Very recently, the LHCb collaboration continued to report their observations of two open heavy flavor multiquark candidates, T_cs̅^a0(2900) and T_cs̅^a++(2900), where the superscript a means that their quantum numbers are both I(J^P)=1(0^+) <cit.>. For the T_cs̅^a0(2900), the discovered channel is D_s^+ π^-, the mass and width are 2892 ± 14 ± 15 MeV and 119 ± 26 ± 12 MeV, respectively, while for the T_cs̅^a++(2900), the discovered channel, the mass, and the width are D_s^+ π^+, 2921 ± 17 ± 19 MeV and 137 ± 32 ± 14 MeV, respectively. According to their channels, mass positions and quantum numbers, it is easy to guess that the T_cs̅^a0(2900) and T_cs̅^a++(2900) belong to the same isovector triplet. Furthermore, the LHCb collaboration also determined their averaged masses and decay widths, which are 2908 ± 11 ± 20 MeV and 136 ± 23 ± 11 MeV, respectively.
Due to the charged property of T_cs̅^a0(++)(2900), their minimal valence quark components are naturally inferred to be cs̅qq̅ (q=u, d). Since they are very close to the D^*K^* mass threshold, it is natural to conjecture whether the T_cs̅^a0(++)(2900) states can be the isovector D^*K^* molecules with J^P=0^+. In fact, in our former work <cit.>, we could not only reproduce the D_s0^∗(2317) and D_s1(2460) in the S-wave DK and D^*K molecular scenario, but also find that the one-boson-exchange (OBE) effective potentials are strong enough to form loosely bound molecular states for the D^*K^* systems with I(J^P)=0(0^+, 1^+, 2^+), and 1(0^+). Therefore, the D^*K^* hadronic molecular explanations for the T_cs̅^a0(++)(2900) states cannot be excluded. In addition, there are other theoretical explanations of the T_cs̅^a0(++)(2900) states, like the compact open-charm pentaquark <cit.> and the D^*ρ molecule <cit.>.
Besides the T_cs̅^a0(++)(2900), another two open-charm states X_0(2900) and X_1(2900), which were observed by the LHCb collaboration in the D^-K^+ final states of the B^+→ D^+D^-K^+ decay process <cit.>, are also interesting. Their spin-parities J^P are 0^+ and 1^+, respectively. Because their mass positions are very close to the D̅^*K^* and D̅_1K mass thresholds, respectively, many theorists propose the X_0(2900) and X_1(2900) states as the hadronic molecular states <cit.>. At present, the inner structures for the T_cs̅^a0(++)(2900) and X_0,1(2900) are still on discussion (one can see Refs. <cit.>).
As is well known, the light diquark in the heavy baryons Y_c=(Λ_c, Σ_c) has the same color structure 3̅_c as the light anti-quark in the heavy meson Qq̅ <cit.>. If the T_cs̅^a0(++)(2900) can be assigned as loosely bound hadronic molecular states composed of a charmed meson and a kaon, it is natural to conjecture whether there exist possible open-charm molecular pentaquark counterparts of the T_cs̅^a0(++)(2900), which are near the thresholds of the Λ_cK^(*) and Σ_cK^(*), respectively. In this work, we search for such open-charm molecular partners composed of Λ_cK^(*) and Σ_cK^(*), which can not only enrich the family of exotic states, but also help us to understand the nature of the newly observed T_cs̅^a0(++)(2900).
Apart from searching for possible Λ_cK^(*) and Σ_cK^(*) molecular states, in this work, we also study the interactions between the S-wave charmed baryon Y_c=(Λ_c, Σ_c) and the anti-strange meson K̅^(*) by adopting the OBE model and considering both of the S-D mixing effects and the coupled channel effects. After solving the coupled channel Schrödinger equations, we can search for the possible charmed-strange molecular pentaquarks counterpart of the X_0,1(2900). Our study will not only provide valuable information to experimental search for exotic open charm hadronic molecular pentaquarks, but also give indirect test of molecular state picture for the T_cs̅^a0(++)(2900) and X_0,1(2900).
This paper is organized as follows. After this introduction, we introduce the relevant effective Lagrangians and the OBE model in Sec. <ref>. In Sec. <ref>, we present the OBE effective potentials and the corresponding numerical results. The paper ends with a summary in Sec. <ref>.
§ LAGRANGIANS AND OBE MODEL
In this work, we deduce the OBE effective potentials for the Y_cK^(*) systems by employing the effective Lagrangian approach at the hadronic level. The relevant Lagrangians describing the interactions between the heavy baryons and light mesons are constructed in terms of the heavy quark limit and chiral symmetry <cit.>, i.e.,
ℒ_ℬ_3̅ = l_B⟨ℬ̅_3̅σℬ_3̅⟩
+iβ_B⟨ℬ̅_3̅v^μ(𝒱_μ-ρ_μ)ℬ_3̅⟩,
ℒ_ℬ_6 = l_S⟨𝒮̅_μσ𝒮^μ⟩
-3/2g_1ε^μνλκv_κ⟨𝒮̅_μA_ν𝒮_λ⟩
+iβ_S⟨𝒮̅_μv_α(𝒱_ab^α-ρ_ab^α) 𝒮^μ⟩
+λ_S⟨𝒮̅_μF^μν(ρ)𝒮_ν⟩,
ℒ_ℬ_3̅ℬ_6 = ig_4⟨𝒮̅^̅μ̅A_μℬ_3̅⟩
+iλ_Iε^μνλκv_μ⟨𝒮̅_νF_λκℬ_3̅⟩+h.c..
Here, v=(1,0) is the four velocity, ρ_ba^μ=ig_VV_ba^μ/√(2), and F^μν(ρ)=∂^μρ^ν-∂^νρ^μ
+[ρ^μ,ρ^ν]. A_μ and 𝒱_μ stand for the axial current and vector current, respectively. They can be written as
A_μ = 1/2(ξ^†∂_μξ-ξ∂_μξ^†)=i/f_π∂_μP+…,
𝒱_μ = 1/2(ξ^†∂_μξ+ξ∂_μξ^†)
=i/2f_π^2[P,∂_μP]+…,
respectively. Here, ξ=exp(iP/f_π) and f_π=132 MeV. ℬ_3̅ and 𝒮_μ =-√(1/3)(γ_μ+v_μ)γ^5ℬ_6+ℬ_6μ^* denote the ground heavy baryons multiplets with their light quarks in the 3̅ and 6 flavor representation, respectively. The matrices ℬ_3̅, ℬ_6, P, and V read as
.[ ℬ_3̅ = ([ 0 Λ_c^+; -Λ_c^+ 0 ]), ℬ_6 = ([ Σ_c^++ Σ_c^+/√(2); Σ_c^+/√(2) Σ_c^0 ]),; P = ([ π^0/√(2)+η/√(6) π^+; π^- -π^0/√(2)+η/√(6) ]), V = ([ ρ^0/√(2)+ω/√(2) ρ^+; ρ^- -ρ^0/√(2)+ω/√(2) ]). ].
The effective Lagrangians describing the interactions between the strange mesons and light mesons are constructed under SU(3) symmetry <cit.>, i.e.,
ℒ_PPV = ig/2√(2)⟨∂^μP(PV_μ-V_μP)⟩,
ℒ_VVP = g_VVP/√(2)ϵ^μναβ⟨∂_μV_ν∂_αV_βP⟩,
ℒ_VVV = ig/2√(2)⟨∂^μV^ν(V_μV_ν-V_νV_μ)⟩.
After expanding Eqs. (<ref>)-(<ref>), we can further obtain
ℒ_σ = l_B⟨ℬ̅_3̅σℬ_3̅⟩
-l_S⟨ℬ̅_6σℬ_6⟩,
ℒ_P =
ig_1/2f_πε^μνλκv_κ⟨ℬ̅_6
γ_μγ_λ∂_νPℬ_6⟩
-√(1/3)g_4/f_π⟨ℬ̅_6γ^5
(γ^μ+v^μ)∂_μPℬ_3̅⟩+h.c.,
ℒ_V = 1/√(2)β_Bg_V⟨ℬ̅_3̅v·Vℬ_3̅⟩
-β_Sg_V/√(2)⟨ℬ̅_6v·Vℬ_6⟩
-λ_Ig_V/√(6)ε^μνλκv_μ⟨ℬ̅_6γ^5γ_ν(∂_λV_κ-∂_κV_λ)ℬ_3̅⟩+h.c.
-iλ g_V/3√(2)⟨ℬ̅_6γ_μγ_ν(∂^μV^ν-∂^νV^μ)
ℬ_6⟩,
ℒ_K^(*)K^(*)σ = g_σm_KK̅ Kσ-g_σm_K^*K̅^*· K^*σ,
ℒ_P KK^* = ig/4[(K̅^*μ K-K̅ K^*μ)(τ·∂_μπ+∂_μη/√(3)).
.+(∂_μK̅ K^*μ-K̅^*μ∂_μK)(τ·π+η/√(3))],
ℒ_V KK = ig/4[K̅∂_μK
-∂_μK̅K](τ·ρ^μ+ω^μ),
ℒ_V K^*K^* = ig/4[(K̅_μ^*∂^μK^*ν-∂^μK̅^*ν K_μ^*)(τ·ρ_ν+ω_ν).
.+(∂^μK̅^*νK_ν^*-K̅_ν^*∂^μK^*ν)
(τ·ρ_μ+ω_μ).
.+(K̅_ν^* K^*_μ-K̅_μ^*K^*_ν)
(τ·∂^μρ^ν+∂^μω^ν)],
ℒ_P K^*K^* = g_VVPε_μναβ∂^μK̅^*ν∂^αK^*β(τ·π+η/√(3)),
ℒ_V KK^* = g_VVPε_μναβ(∂^μK̅^*νK+K̅∂^μK^*ν)
(τ·∂^αρ^β+∂^αω^β).
The coupling constants in the above Lagrangians are estimated with the quark model <cit.>: l_S=-2l_B=7.3, g_1=(√(8)/3)g_4=1.0, β_Sg_V=-2β_Bg_V=12.0, λ_Sg_V=-2√(2)λ_Ig_V=19.2 GeV^-1, g_σ=-3.65, g=12.00, and g_VVP=3g^2/(32√(2)π^2f_π) <cit.>.
With these prepared effective Lagrangians, we can easily write down the scattering amplitudes for the B_1M_2→ B_3M_4 processes in the t-channel, where B_1 and B_3 stand for the initial and final baryons, respectively, and M_2 and M_4 stand for the initial and final mesons, respectively. The corresponding effective potentials can be related to the scattering amplitudes by the Breit approximation,
𝒱_E^B_1M_2→ B_3M_4(q) =
-ℳ(B_1M_2→ B_3M_4)/4√(m_B_1m_M_2m_B_3m_M_4).
Here, m_i is the mass of the interacting hadron i. ℳ(B_1M_2→ B_3M_4) denotes the scattering amplitude for the B_1M_2→ B_3M_4 process obtained by exchanging the light mesons (σ, π, η, ρ, and ω). Next, we perform the Fourier transformation to obtain the effective potentials in coordinate space, 𝒱(r),
𝒱_E(r) =
∫d^3q/(2π)^3e^iq·r𝒱_E(q)ℱ^2(q^2,m_E^2).
In order to compensate for the off-shell effect of the exchanged meson, we introduce a monopole form factor ℱ(q^2,m_E^2)= (Λ^2-m_E^2)/(Λ^2-q^2) at every interaction vertex, where Λ, m_E, and q are the cutoff parameter, the mass, and the four-momentum of the exchanged meson, respectively. In our numerical calculations, we vary the cutoff value in the range 0.8≤Λ≤5.0 GeV. According to the experience with the deuteron <cit.>, a reasonable cutoff value is around 1.00 GeV. In the following discussion, a loosely bound state obtained with a cutoff value around 1.00 GeV is recommended as a prime hadronic molecular candidate.
For the Λ_cK^(*) systems, the flavor wave function |I,I_3⟩ can be expressed as |1/2,1/2⟩=|Λ_c^+K^(*)+⟩ and |1/2,-1/2⟩=|Λ_c^+K^(*)0⟩. For the Σ_cK^(*) systems, their isospin I can be taken as 1/2 or 3/2. The corresponding flavor wave functions |I,I_3⟩ are
[ |1/2,1/2⟩ =
√(2/3)|Σ_c^++K^(*)0⟩
-1/√(3)|Σ_c^+K^(*)+⟩,; |1/2,-1/2⟩ =
1/√(3)|Σ_c^+K^(*)0⟩
-√(2/3)|Σ_c^0K^(*)+⟩, ]
[ |3/2,3/2⟩ = |Σ_c^++K^(*)+⟩,; |3/2,1/2⟩ =
1/√(3)|Σ_c^++K^(*)0⟩+
√(2/3)|Σ_c^+K^(*)+⟩,; |3/2,-1/2⟩ =√(2/3)|Σ_c^+K^(*)0⟩
+1/√(3)|Σ_c^0K^(*)+⟩,; |3/2,-3/2⟩ =
|Σ_c^0K^(*)0⟩, ]
respectively. When we consider the S-D wave mixing effects, the spin-orbit wave functions |^2S+1L_J⟩ are
.[ Y_cK[J^P=1/2^-]: |^2S_1/2⟩,; Y_cK^*[J^P=1/2^-]: |^2S_1/2⟩, |^4D_1/2⟩,; Y_cK^*[J^P=3/2^-]: |^4S_3/2⟩, |^2D_3/2⟩, |^4D_3/2⟩. ].
The general expressions of the spin-orbit wave functions |^2S+1L_J⟩ for the Y_cK^(*) systems read as
Y_cK: |^2S+1L_J⟩ = ∑_m_S,m_LC^J,M_1/2m_S,Lm_Lχ_1/2m|Y_L,m_L⟩,
Y_cK^*: |^2S+1L_J⟩ = ∑_m,m'^m_S,m_LC^S,m_S_1/2m,1m'C^J,M_Sm_S,Lm_Lχ_1/2mϵ^m'|Y_L,m_L⟩.
Here, C^J,M_1/2m_S,Lm_L, C^S,m_S_1/2m,1m', and C^J,M_Sm_S,Lm_L are the Clebsch-Gordan coefficients. χ_1/2m and Y_L,m_L stand for the spin wave function and the spherical harmonics function, respectively. ϵ is the polarization vector for the vector meson with ϵ_±^m=∓1/√(2)(ϵ_x^m±iϵ_y^m) and ϵ_0^m=ϵ_z^m, which satisfies ϵ_±1= 1/√(2)(0,±1,i,0) and ϵ_0 =(0,0,0,-1).
§ THE OBE EFFECTIVE POTENTIALS AND THE NUMERICAL RESULTS
Following the above procedures, we can deduce the concrete OBE effective potentials for the Y_cK^(*) systems with different quantum configurations. After that, we adopt the obtained OBE effective potentials to solve the coupled channel Schrödinger equations and search for bound state solutions. A system with reasonable bound state solutions can be recommended as a good hadronic molecular candidate, where the binding energy ranges from several MeV to several tens of MeV, and the root-mean-square (RMS) radius is a few fm or larger.
§.§ The Λ_cK^(*) systems
The total OBE effective potentials for the single Λ_cK system can be written as
V_Λ_cK→Λ_cK = l_Bg_σχ_3^†χ_1Y(Λ,m_σ,r)
+β_Bg_Vg/4χ_3^†χ_1Y(Λ,m_ω,r).
Here, we define
Y(Λ,m,r) = 1/4π r(e^-mr-e^-Λ
r)-Λ^2-m^2/8πΛe^-Λ r.
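For completeness, we note that this closed form can be rederived (in the static limit, where q^2 denotes the squared three-momentum) from a partial-fraction decomposition of the propagator multiplied by the squared monopole form factor, (Λ^2-m^2)^2/[(q^2+m^2)(q^2+Λ^2)^2] = 1/(q^2+m^2) - 1/(q^2+Λ^2) - (Λ^2-m^2)/(q^2+Λ^2)^2, together with the term-by-term Fourier transforms ∫d^3q/(2π)^3 e^{iq·r}/(q^2+m^2) = e^{-mr}/4π r and ∫d^3q/(2π)^3 e^{iq·r}/(q^2+Λ^2)^2 = e^{-Λ r}/8πΛ, which together reproduce Y(Λ,m,r).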
As shown in Eq. (<ref>), there exist the σ exchange and ω exchange interactions, which contribute in the intermediate range and the short range, respectively. The σ exchange provides an attractive interaction, whereas the ω exchange interaction is repulsive. Here, the ρ exchange interaction is strongly suppressed, as it is isospin forbidden in the Λ_c-Λ_c-ρ coupling. Since the KKπ(η) coupling is forbidden by spin-parity conservation, the pseudoscalar meson (π/η) exchange interactions are also strongly suppressed.
After solving the Schrödinger equation, we do not find bound state solutions in the cutoff region 0.8≤Λ≤5.0 GeV. Thus, the OBE effective potentials for the Λ_cK system are not strong enough to form a bound state.
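To make the numerical bound-state search concrete, the following is a minimal, purely illustrative Python sketch (not the code used in this work) of a single-channel S-wave calculation: it implements the function Y(Λ,m,r) above in MeV/fm units and diagonalizes a finite-difference radial Hamiltonian for a toy attractive potential of strength C. The reduced mass, the coupling C, and the cutoff below are placeholder values; the realistic analysis uses the full OBE potentials and coupled channels described in the text.

import numpy as np

HBARC = 197.327          # MeV*fm
MU = 406.0               # illustrative reduced mass of a Lambda_c K-like system, MeV

def Y(lam, m, r):
    """Regularized Yukawa function Y(Lambda, m, r): masses in MeV, r in fm, result in MeV."""
    xm, xl = m * r / HBARC, lam * r / HBARC
    return (HBARC / (4 * np.pi * r)) * (np.exp(-xm) - np.exp(-xl)) \
        - (lam**2 - m**2) / (8 * np.pi * lam) * np.exp(-xl)

# Toy attractive sigma-exchange-like potential; C is an arbitrary illustrative strength.
C, m_sigma, cutoff = 20.0, 600.0, 1000.0          # dimensionless, MeV, MeV
N, rmax = 1500, 30.0                              # grid points, fm
r = np.linspace(rmax / N, rmax, N)                # radial grid, avoiding r = 0
h = r[1] - r[0]
V = -C * Y(cutoff, m_sigma, r)                    # potential in MeV

# Finite-difference Hamiltonian for the reduced S-wave radial wave function u(r),
# with boundary conditions u(0) = u(rmax) = 0.
kin = HBARC**2 / (2.0 * MU)                       # MeV*fm^2
H = (np.diag(2.0 * kin / h**2 + V)
     + np.diag(-kin / h**2 * np.ones(N - 1), 1)
     + np.diag(-kin / h**2 * np.ones(N - 1), -1))
E = np.linalg.eigvalsh(H)
print("lowest eigenvalue (MeV):", E[0])           # a negative value would signal a bound state

In the coupled channel case, the same construction is applied to a block matrix of potentials, with the channel thresholds added on the diagonal.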
For the single S-wave Λ_cK^* systems with J^P=1/2^- and 3/2^-, their OBE effective potentials are the same, i.e.,
V_Λ_cK^*→Λ_cK^* = l_Bg_σ(ϵ_2·ϵ_4^†)χ_3^†χ_1Y(Λ,m_σ,r)
+β_Bg_Vg/4(ϵ_2·ϵ_4^†)χ_3^†χ_1Y(Λ,m_ω,r).
When we consider the S-D wave mixing effects, the operator ϵ_2·ϵ_4^† is replaced by the unit matrix ℐ=⟨^2S'+1L'_J'|ϵ_2·ϵ_4^†|^2S+1L_J⟩ in the numerical calculations, which indicates that the OBE effective potentials are exactly the same as those for the Λ_cK system with 1/2^-. In the cutoff region 0.8≤Λ≤5.0 GeV, we cannot find bound state solutions, either.
In this work, we further perform a coupled channel analysis of the Λ_cK^*/Σ_cK^* interactions; the corresponding OBE effective potentials are
V_Λ_cK^*^C = ([ V_Λ_cK^*→Λ_cK^* V_Σ_cK^*→Λ_cK^*; V_Λ_cK^*→Σ_cK^* V_Σ_cK^*→Σ_cK^* ]),
with
V_Λ_cK^*→Σ_cK^* = 1/6g_4g_VVP/f_πℱ_1(r,σ,iϵ_2×ϵ_4^†)
Y(Λ_0,m_π0,r)
-1/6√(2)λ_Ig_Vg/m_K^*ℱ_2(r,σ,iϵ_2×ϵ_4^†)
Y(Λ_0,m_ρ0,r),
V_Σ_cK^*→Σ_cK^* = 1/2l_Sg_σχ_3^†χ_1
ϵ_2·ϵ_3^†Y(Λ,m_σ,r)
-g_1g_VVP/6√(2)f_πℱ_1(r,σ,iϵ_2×ϵ_4^†)
𝒢(I)Y(Λ,m_π,r)
-g_1g_VVP/18√(2)f_πℱ_1(r,σ,iϵ_2×ϵ_4^†)
Y(Λ,m_η,r)
+1/8β_Sg_Vgχ_3^†χ_1
ϵ_2·ϵ_3^†𝒢(I)Y(Λ,m_ρ,r)
+λ_Sg_Vg/8√(3)m_Σ_cχ_3^†χ_1
ϵ_2·ϵ_3^†𝒢(I)∇^2Y(Λ,m_ρ,r)
-λ_Sg_Vg/24√(3)m_K^*ℱ_2(r,σ,iϵ_2×ϵ_4^†)
𝒢(I)Y(Λ,m_ρ,r)
+1/8β_Sg_Vgχ_3^†χ_1
ϵ_2·ϵ_3^†Y(Λ,m_ω,r)
+λ_Sg_Vg/8√(3)m_Σ_cχ_3^†χ_1
ϵ_2·ϵ_3^†∇^2Y(Λ,m_ω,r)
-λ_Sg_Vg/24√(3)m_K^*ℱ_2(r,σ,iϵ_2×ϵ_4^†)
Y(Λ,m_ω,r).
Here, the value of the isospin factor 𝒢(I) is taken as 𝒢(I=1/2)=-2, 𝒢(I=3/2)=1. The variables in Eq. (<ref>) are Λ_0^2 =Λ^2-q_0^2, m_π0^2=m_π^2-q_0^2, m_ρ0^2=m_ρ^2-q_0^2 with q_0 =
M_Σ_c^2-M_Λ_c^2/2(M_Σ_c+M_K^*). We also define the useful operators, i.e.,
ℱ_1(r,a,b) = χ_3^†(a·b∇^2
+S(r̂,a,b)
r∂/∂ r1/r∂/∂ r)χ_1,
ℱ_2(r,a,b) = χ_3^†(2a·b∇^2
-S(r̂,a,b)
r∂/∂ r1/r∂/∂ r)χ_1.
Here, a·b and S(r̂,a,b) stand for the spin-spin interaction and the tensor force operators, respectively. The corresponding matrix elements can be obtained by sandwiching these operators between the spin-orbit wave functions presented in Eq. (<ref>), i.e.,
iσ·(ϵ_2×ϵ_4^†) ↦ {[ ([ -2 0; 0 1 ]), J^P=1/2^-; ([ 1 0 0; 0 -2 0; 0 0 1 ]), J^P=3/2^- ].
S(r̂,σ,iϵ_2×ϵ_4^†) ↦ {[ ([ 0 -√(2); -√(2) -2 ]), J^P=1/2^-; ([ 0 1 2; 1 0 -1; 2 -1 0 ]), J^P=3/2^- ].
With these deduced effective potentials, we search for the bound state solutions for the coupled Λ_cK^*/Σ_cK^* systems in the cutoff range 0.8≤Λ≤5.0 GeV. In Table <ref>, we collect the corresponding numerical results, which include the cutoff dependence of the binding energy E, the root-mean-square radius r_RMS, and the probabilities P_i(%) for all the discussed channels.
For the coupled Λ_cK^*/Σ_cK^* system with I(J^P)=1/2(1/2^-), there exist four channels, the Λ_cK^*(^2S_1/2, ^4D_1/2) and Σ_cK^*(^2S_1/2, ^4D_1/2) channels, after considering both the S-D wave mixing effects and the coupled channel effects. As presented in Table <ref>, when the cutoff is taken as 1.56 GeV, the binding energy is -0.14 MeV, the RMS radius is 6.11 fm, and the probability for the Λ_cK^*(^2S_1/2) channel is 98.82%. As the cutoff Λ increases to 1.62 GeV, the binding energy becomes -11.57 MeV, the RMS radius becomes 1.12 fm, and the Λ_cK^*(^2S_1/2) is still the dominant channel, with a probability around 93.18%. According to the current numerical results, for cutoffs around 1.60 GeV we obtain a weakly bound state with reasonable loosely bound state solutions, and the dominant channel is the Λ_cK^*(^2S_1/2) with a probability over 90%. Since this cutoff value is close to the empirical value Λ∼ 1.00 GeV for the deuteron <cit.>, we conclude that the coupled Λ_cK^*/Σ_cK^* system with I(J^P)=1/2(1/2^-) can be recommended as a good hadronic molecular candidate.
For the coupled Λ_cK^*/Σ_cK^* system with I(J^P)=1/2(3/2^-), the relevant channels include the Λ_cK^*(^4S_3/2, ^2D_3/2, ^4D_3/2) and Σ_cK^*(^4S_3/2, ^2D_3/2, ^4D_3/2) channels when we consider both the coupled channel effects and the S-D wave mixing effects. As shown in Table <ref>, we can obtain loosely bound state solutions for cutoffs larger than 1.34 GeV, where the binding energy ranges from several MeV to ten MeV, the RMS radius is larger than 1.00 fm, and the dominant channel is the Λ_cK^*(^4S_3/2) channel. As the cutoff value increases, the Σ_cK^*(^4S_3/2) channel becomes more and more important; when the cutoff is 1.40 GeV, the probability of the S-wave Σ_cK^* component reaches 27.27%. If we still adopt the experience of the deuteron <cit.>, the coupled Λ_cK^*/Σ_cK^* system with 1/2(3/2^-) can be a good hadronic molecular candidate; it is mainly composed of the Λ_cK^*(^4S_3/2) channel, followed by the Σ_cK^*(^4S_3/2) channel.
In addition, we find that the coupled channel effects play an important role in forming these Λ_cK^*/Σ_cK^* bound states with 1/2(1/2^-, 3/2^-), since there are no bound state solutions for the single Λ_cK^* systems.
§.§ The Σ_cK^(*) systems
The OBE effective potential for the single Σ_cK system is
V_Σ_cK→Σ_cK = 1/2l_Sg_σχ_3^†χ_1Y(Λ,m_σ,r)
+𝒢(I)/8β_Sg_Vgχ_3^†χ_1Y(Λ,m_ρ,r)
-𝒢(I)/24m_Σ_cλ_Sg_Vgχ_3^†χ_1
∇^2Y(Λ,m_ρ,r)
+1/8β_Sg_Vgχ_3^†χ_1Y(Λ,m_ω,r)
-1/24m_Σ_cλ_Sg_Vgχ_3^†χ_1∇^2Y(Λ,m_ω,r).
Here, compared to the Λ_cK system, there exists an extra ρ exchange interaction, which provides an attractive force for the Σ_cK system with I=1/2 and a repulsive force for I=3/2. Therefore, it is possible to find bound state solutions for the Σ_cK system with I=1/2 owing to the stronger attractive OBE effective potentials. After solving the coupled channel Schrödinger equation, our results show that there exist no bound state solutions for the iso-quartet Σ_cK system. For the iso-doublet Σ_cK system, as presented in Table <ref>, we can obtain reasonable loosely bound state solutions when the cutoff Λ is larger than 2.00 GeV.
We further perform the coupled Σ_cK/Λ_cK^*/Σ_cK^* analysis. Here, the π-exchange interaction is allowed for both the Λ_cK^*→Σ_cK and Σ_cK^*→Σ_cK processes, and it plays a very important role in binding the deuteron. The corresponding OBE effective potentials can be expressed as
V_Σ_cK^C = ([ V_Σ_cK→Σ_cK V_Λ_cK^*→Σ_cK V_Σ_cK^*→Σ_cK; V_Σ_cK→Λ_cK^* V_Λ_cK^*→Λ_cK^* V_Σ_cK^*→Λ_cK^*; V_Σ_cK→Σ_cK^* V_Λ_cK^*→Σ_cK^* V_Σ_cK^*→Σ_cK^* ]),
with
V_Λ_cK^*→Σ_cK = -1/6g_4g/f_π√(m_Km_K^*)ℱ_1(r,σ,ϵ_2)
U(Λ_1,m_π1,r)
-λ_Ig_Vg_VVP/3√(2)√(m_K^*/m_K)ℱ_2(r,σ,ϵ_2)
Y(Λ_1,m_ρ1,r),
V_Σ_cK^*→Σ_cK = g_1gℱ_1(r,σ,ϵ_2)/24√(2)f_π√(m_Km_K^*)𝒢(I)Y(Λ_2,m_π2,r)
+g_1g/72√(2)f_π√(m_Km_K^*)ℱ_1(r,σ,ϵ_2)
Y(Λ_2,m_η2,r)
+λ_Sg_Vg_VVP/6√(3)√(m_K^*/m_K)ℱ_2(r,σ,ϵ_2)
𝒢(I)Y(Λ_2,m_ρ2,r)
+λ_Sg_Vg_VVP/6√(3)√(m_K^*/m_K)ℱ_2(r,σ,ϵ_2)
Y(Λ_2,m_ω2,r).
Here, we define a useful function in Eq. (<ref>), i.e.,
U(Λ,m,r) = 1/4π r(cos(mr)-e^-Λ r)-Λ^2+m^2/8πΛe^-Λ r.
The variables in the above effective potentials (<ref>)-(<ref>) are defined as
q_1=M_Λ_c^2+M_K^2-M_Σ_c^2-M_K^*^2/2(M_Σ_c+M_K), Λ_1^2=Λ^2-q_1^2, m_π1^2=q_1^2-m_π^2, m_ρ1^2=m_ρ^2-q_1^2, q_2=M_K^*^2-M_K^2/2(M_Σ_c+M_K), Λ_2^2=Λ^2-q_2^2, m_π2^2=m_π^2-q_2^2, m_η2^2=m_η^2-q_2^2, m_ρ2^2=m_ρ^2-q_2^2, m_ω2^2=m_ω^2-q_2^2. After considering the S-D wave mixing effects, the matrix elements for the spin-spin interaction and tensor force operators read as σ·ϵ_2↦ ([ √(3) 0 ] ) and S(r̂,σ,ϵ_2)↦ ([ 0 -√(6) ] ), respectively.
In Table <ref>, we collect the bound state solutions (the binding energy E, the root-mean-square radius r_RMS, and the probabilities P_i(%) for all the discussed channels) for the coupled Σ_cK/Λ_cK^*/Σ_cK^* systems with I(J^P)=0,1(1/2^-).
For the Σ_cK/Λ_cK^*/Σ_cK^* system with I(J^P)=1/2(1/2^-), there exist the Σ_cK(^2S_1/2) channel, the Λ_cK^*(^2S_1/2,^4D_1/2) channels, and the Σ_cK^*(^2S_1/2,^4D_1/2) channels. Reasonable loosely bound state solutions emerge at the cutoff Λ=0.90 GeV, where the binding energy is -0.36 MeV, the RMS radius is 4.78 fm, and the dominant channel is the Σ_cK(^2S_1/2) with the probability P=98.85%. When the cutoff increases to 1.05 GeV, this bound state becomes more deeply bound: the binding energy is -18.44 MeV, the RMS radius decreases to 1.42 fm, and the Σ_cK(^2S_1/2) channel is still dominant with a probability around 95%. For the remaining channels, the probabilities are very tiny. Compared to the bound state properties in the single channel case, the cutoff is very close to the reasonable value Λ∼1.00 GeV. Therefore, the coupled Σ_cK/Λ_cK^*/Σ_cK^* system with I(J^P)=1/2(1/2^-) can be a prime molecular candidate, and the coupled channel effects play an important role in the formation of this bound state.
For the Σ_cK/Σ_cK^* system with I(J^P)=3/2(1/2^-), the relevant channels include the Σ_cK(^2S_1/2) channel and the Σ_cK^*(^2S_1/2,^4D_1/2) channels. We find a weakly bound state at the cutoff Λ=1.28 GeV, where the binding energy is E=-2.58 MeV, the RMS radius is r_RMS=2.85 fm, and the dominant channel is the Σ_cK(^2S_1/2) with the probability P=92.77%. As the cutoff value increases, the Σ_cK^* channel becomes more and more important. When the cutoff increases to 1.31 GeV, the binding energy becomes -48.10 MeV, the RMS radius decreases to 0.61 fm, and the probability for the Σ_cK^*(^2S_1/2) is 24.29%. However, the binding energy depends very sensitively on the cutoff. Thus, we cannot draw a definite conclusion that the Σ_cK/Σ_cK^* system with I(J^P)=3/2(1/2^-) is a good hadronic molecular candidate.
For the Σ_cK^* systems, the isospin and spin-parity configurations I(J^P) include 1/2(1/2^-), 1/2(3/2^-), 3/2(1/2^-), and 3/2(3/2^-) after considering the S-D wave mixing effects. The relevant OBE effective potentials are presented in Eq. (<ref>). Our results indicate that reasonable loosely bound state solutions exist for the Σ_cK^* states with I(J^P)=1/2(1/2^-), 1/2(3/2^-), and 3/2(1/2^-) in the cutoff range 0.80≤Λ≤5.00 GeV. As shown in Table <ref>, for the Σ_cK^* systems with 1/2(3/2^-) and 3/2(1/2^-), binding energies of several to several tens of MeV and RMS radii of a few fm appear at cutoffs around 1.00 GeV, which is comparable to the value for the deuteron <cit.>. Therefore, these two states can be suggested as good hadronic molecular candidates. For the Σ_cK^* system with 1/2(1/2^-), loosely bound state solutions appear when the cutoff is larger than 1.70 GeV, which is slightly far from the empirical value for the deuteron <cit.>; nevertheless, we cannot exclude the Σ_cK^* system with 1/2(1/2^-) as a suitable molecular candidate.
In summary, our results predict several possible open charm molecular pentaquarks: the coupled Λ_cK^*/Σ_cK^* molecular states with I(J^P)=1/2(1/2^-,3/2^-), the coupled Σ_cK/Λ_cK^*/Σ_cK^* molecular state with I(J^P)=1/2(1/2^-), and the single Σ_cK^* states with I(J^P)=1/2(1/2^-,3/2^-) and 3/2(1/2^-). The coupled channel effects play a very important role in generating these coupled channel molecular candidates.
The study of the strong decay behaviors is very helpful for the search for these predicted open flavor molecular pentaquarks. According to the conservation of quantum numbers and the constraints of the phase space, we collect the important strong decay channels as follows, i.e.,
Σ_cK/Λ_cK^*/Σ_cK^*[1/2(1/2^-)] → {D_sN, Λ_cK},
Λ_cK^*/Σ_cK^*[1/2(1/2^-)] → {D_s^(*)N, Λ_cK, Σ_cK},
Λ_cK^*/Σ_cK^*[1/2(3/2^-)] → {D_s^*N},
Σ_cK^*[1/2(1/2^-)] → {D_s^(*)N, Λ_cK^(*), Σ_cK},
Σ_cK^*[1/2(3/2^-)] → {D_s^*N, Λ_cK^*, Σ_c^*K},
Σ_cK^*[3/2(1/2^-)] → {D_s^*Δ, Σ_cK}.
§.§ The predictions of the possible Y_cK̅^(*) molecular states
In this work, we further extend our study to the Λ_cK̅^(*) and Σ_cK̅^(*) systems; the corresponding OBE effective potentials can be related to those for the Λ_cK^(*) and Σ_cK^(*) systems by the G-parity rule <cit.>, i.e.,
V_B_1M̅_2→ B_3M̅_4 = (-1)^G_EV_B_1M_2→ B_3M_4,
where G_E stands for the G-parity of the exchanged meson in the B_1M_2→ B_3M_4 process, and the notations M̅_i and M_i correspond to the anti-mesons and mesons, respectively. Therefore, the effective potentials from the ω and π exchanges have opposite signs between the Y_cK^(*) and the Y_cK̅^(*) systems.
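As a concrete illustration of this rule (added here for clarity, using the standard G-parities of the exchanged mesons), the σ, η, and ρ exchange terms are unchanged while the π and ω exchange terms flip sign: V_σ,η,ρ^Y_cK̅^(*) = +V_σ,η,ρ^Y_cK^(*) since G_σ=G_η=G_ρ=+1, whereas V_π,ω^Y_cK̅^(*) = -V_π,ω^Y_cK^(*) since G_π=G_ω=-1.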
In the following, we also perform the single channel analysis and the coupled channel analysis for the Y_cK̅^(*) systems. We summarize the corresponding numerical results in Table <ref> and Table <ref>, respectively.
In Table <ref>, we collect the bound state properties for the single Y_cK̅^(*) systems. In the cutoff region 0.80≤Λ≤5.00 GeV, we can obtain five loosely bound states: the Σ_cK̅ bound state with I(J^P)=1/2(1/2^-) and the Σ_cK̅^* states with I(J^P)=1/2(1/2^-), 1/2(3/2^-), 3/2(1/2^-), and 3/2(3/2^-). Among these five bound states, we cannot recommend the Σ_cK̅^* state with 3/2(1/2^-) as a good hadronic molecular candidate, as the required cutoff value is too far from the empirical value Λ∼1.00 GeV. For the remaining four bound states, we conclude that they can be prime hadronic molecular candidates when we adopt the same cutoff criterion as for the deuteron.
When we perform the coupled channel analysis, we find four weakly bound states by varying the cutoff from 0.80 GeV to 5.00 GeV, as shown in Table <ref>. For the Λ_cK̅^*/Σ_cK̅^* coupled bound state with I(J^P)=1/2(1/2^-) and the Σ_cK̅/Λ_cK̅^*/Σ_cK̅^* coupled bound state with I(J^P)=1/2(1/2^-), we can obtain reasonable loosely bound state properties with the cutoff taken around 1.00 GeV; the dominant channels are the Λ_cK̅^*(^2S_1/2) and Σ_cK̅(^2S_1/2) channels, respectively. Thus, these two coupled bound states can be prime hadronic molecular candidates, which are mainly composed of the Λ_cK̅^* and Σ_cK̅ states, respectively.
Compared to the bound state solutions for the single Λ_cK̅^* and Σ_cK̅ systems with 1/2(1/2^-), we also find that the coupled channel effects play an important role in generating the Λ_cK̅^* state with 1/2(1/2^-). However, they contribute very little to the Σ_cK̅ state with 1/2(1/2^-). Thus, the Σ_cK̅/Λ_cK̅^*/Σ_cK̅^* coupled bound state with I(J^P)=1/2(1/2^-) predicted here is not a new bound state but has a close relation with the single Σ_cK̅ molecule with 1/2(1/2^-).
For the Λ_cK̅^*/Σ_cK̅^* coupled system with I(J^P)=1/2(3/2^-), the dominant channel is the Σ_cK̅^*(^4S_3/2). As shown in Table <ref>, its size is much smaller than that of the coupled channel bound states dominated by the lowest channel. Since the dominant channel is the Σ_cK̅^*(^4S_3/2), this bound state has a close relation to the Σ_cK̅^* molecule with 1/2(3/2^-).
For the Σ_cK̅/Σ_cK̅^* coupled system with I(J^P)=3/2(1/2^-), we can only obtain a bound state solution when the cutoff reaches 3.80 GeV. Obviously, the cutoff applied here deviates from the reasonable value of 1.00 GeV, so this system cannot be a good molecular candidate.
All in all, our results can predict five Y_cK̅^(*) type hadronic molecular candidates, the coupled Λ_cK̅^*/Σ_cK̅^* molecule with 1/2(1/2^-), the Σ_cK̅/Λ_cK̅^*/Σ_cK̅^* molecule with 1/2(1/2^-), the Σ_cK̅^* molecules with 1/2(1/2^-,3/2^-), and 3/2(3/2^-), where the coupled channel effects play a vital role in binding the coupled Λ_cK̅^*/Σ_cK̅^* state with 1/2(1/2^-). Their important two-body strong decay channels are summarized as follows, i.e.,
Σ_cK̅[1/2(1/2^-)] → {Λ_cK̅, Ξ_c^(')π},
Λ_cK̅^*/Σ_cK̅^*[1/2(1/2^-)] → {Λ_cK̅, Σ_cK̅, DΛ, DΣ, Ξ_c^(')π, Ξ_c^(')η},
Σ_cK̅^*[1/2(1/2^-)] → {Λ_cK̅^(*), Σ_cK̅, D^(*)Λ, D^(*)Σ, .
. Ξ_c^(')π, Ξ_c^(')η, Ξ_cρ, Ξ_cω},
Σ_cK̅^*[1/2(3/2^-)] → {Λ_cK̅^*, Σ_c^*K̅, D^*Λ, D^*Σ,.
. Ξ_cρ, Ξ_cω, Ξ_c^*π, Ξ_c^*η},
Σ_cK̅^*[3/2(3/2^-)] → {Σ_c^*K̅, D^*Σ, Ξ_cρ, Ξ_c^*π}.
§ SUMMARY
The study of exotic states is an important and interesting issue in hadron physics. Searching for hadronic molecular states can not only enrich the family of exotic states, but also help us to understand the essential hadron-hadron interactions. Very recently, the LHCb collaboration observed two open heavy flavor multiquark states T_cs̅^a0(++). Their near-threshold behavior inspires isovector D^*K^* molecular explanations for them. In our former paper, we found that the D^*K^* state with I(J^P)=1(0^+) can be a possible molecular candidate by adopting the OBE effective potentials <cit.>.
In this work, we extend our study to the interactions between the S-wave charmed baryons Y_c=(Λ_c,Σ_c) and the strange mesons K^(*) by using the OBE model, and we consider both the S-D wave mixing effects and the coupled channel effects. As shown in Figure <ref>, our results indicate that the single Σ_cK^* states with I(J^P)=1/2(1/2^-), 1/2(3/2^-), and 3/2(1/2^-) can be good open charm molecular candidates. When we further consider the coupled channel effects, we can predict another three prime open charm molecular candidates, i.e., the coupled Λ_cK^*/Σ_cK^* molecular states with 1/2(1/2^-) and 1/2(3/2^-), and the coupled Σ_cK/Λ_cK^*/Σ_cK^* molecular state with 1/2(1/2^-), where the dominant channels correspond to the Λ_cK^*(^2S_1/2), Λ_cK^*(^4S_3/2), and Σ_cK(^2S_1/2), respectively. The coupled channel effects play an essential role in binding these three coupled channel molecular candidates.
As a byproduct, we further study the Y_cK̅^(*) interactions within the same model. As shown in Figure <ref>, we predict the existence of the Y_cK̅^(*) type hadronic molecular states, i.e., the Σ_cK̅ molecule with I(J^P)=1/2(1/2^-), the Σ_cK̅^* molecules with 1/2(1/2^-), 1/2(3/2^-), and 3/2(3/2^-), the coupled Λ_cK̅^*/Σ_cK̅^* molecule with 1/2(1/2^-), and the Σ_cK̅/Λ_cK̅^*/Σ_cK̅^* molecule with 1/2(1/2^-). We hope that experiments will search for these predicted open charm molecular pentaquarks.
§ ACKNOWLEDGMENTS
R. C. is supported by the Xiaoxiang Scholars Programme of Hunan Normal University.
99
Chen:2016qju
H. X. Chen, W. Chen, X. Liu and S. L. Zhu,
https://linkinghub.elsevier.com/retrieve/pii/S037015731630103XPhys. Rept. 639, 1-121 (2016).
Liu:2019zoy
Y. R. Liu, H. X. Chen, W. Chen, X. Liu and S. L. Zhu,
https://linkinghub.elsevier.com/retrieve/pii/S0146641019300304Prog. Part. Nucl. Phys. 107, 237-320 (2019).
Chen:2016spr
H. X. Chen, W. Chen, X. Liu, Y. R. Liu and S. L. Zhu,
https://iopscience.iop.org/article/10.1088/1361-6633/aa6420Rept. Prog. Phys. 80, no.7, 076201 (2017).
Guo:2017jvc
F. K. Guo, C. Hanhart, U. G. Meißner, Q. Wang, Q. Zhao and B. S. Zou,
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.90.015004Rev. Mod. Phys. 90, no.1, 015004 (2018),
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.94.029901[erratum: Rev. Mod. Phys. 94, no.2, 029901 (2022)].
Chen:2022asf
H. X. Chen, W. Chen, X. Liu, Y. R. Liu and S. L. Zhu,
https://iopscience.iop.org/article/10.1088/1361-6633/aca3b6Rept. Prog. Phys. 86, no.2, 026201 (2023).
LHCb:Tcc
C. Chen and E. S. Norella,
https://indico.cern.ch/event/1176505/https://indico.cern.ch/event/1176505/.
LHCb:Qian
W. B. Qian,
https://indico.ihep.ac.cn/event/17185/https://indico.ihep.ac.cn/event/17185/.
Chen:2016ypj
R. Chen and X. Liu,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.94.034006Phys. Rev. D 94, no.3, 034006 (2016).
Chen:2017rhl
W. Chen, H. X. Chen, X. Liu, T. G. Steele and S. L. Zhu,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.95.114005Phys. Rev. D 95, no.11, 114005 (2017).
Guo:2021mja
T. Guo, J. Li, J. Zhao and L. He,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.105.054018Phys. Rev. D 105, no.5, 054018 (2022).
Cheng:2020nho
J. B. Cheng, S. Y. Li, Y. R. Liu, Y. N. Liu, Z. G. Si and T. Yao,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.101.114017Phys. Rev. D 101, no.11, 114017 (2020).
Agaev:2022duz
S. S. Agaev, K. Azizi and H. Sundu,
https://iopscience.iop.org/article/10.1088/1361-6471/acc41aJ. Phys. G 50, no.5, 055002 (2023).
LHCb:2020bls
R. Aaij et al. [LHCb],
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.242001Phys. Rev. Lett. 125, 242001 (2020).
LHCb:2020pxc
R. Aaij et al. [LHCb],
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.102.112003Phys. Rev. D 102, 112003 (2020).
Molina:2020hde
R. Molina and E. Oset,
https://www.sciencedirect.com/science/article/pii/S0370269320306730Phys. Lett. B 811, 135870 (2020).
Hu:2020mxp
M. W. Hu, X. Y. Lao, P. Ling and Q. Wang,
https://iopscience.iop.org/article/10.1088/1674-1137/abcfaaChin. Phys. C 45, no.2, 021003 (2021).
Liu:2020nil
M. Z. Liu, J. J. Xie and L. S. Geng,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.102.091502Phys. Rev. D 102, no.9, 091502 (2020).
Kong:2021ohg
S. Y. Kong, J. T. Zhu, D. Song and J. He,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.104.094012Phys. Rev. D 104, no.9, 094012 (2021).
Wang:2021lwy
B. Wang and S. L. Zhu,
https://link.springer.com/article/10.1140/epjc/s10052-022-10396-9Eur. Phys. J. C 82, no.5, 419 (2022).
Xiao:2020ltm
C. J. Xiao, D. Y. Chen, Y. B. Dong and G. W. Meng,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.103.034004Phys. Rev. D 103, no.3, 034004 (2021).
Huang:2020ptc
Y. Huang, J. X. Lu, J. J. Xie and L. S. Geng,
https://link.springer.com/article/10.1140/epjc/s10052-020-08516-4Eur. Phys. J. C 80, no.10, 973 (2020).
Chen:2020aos
H. X. Chen, W. Chen, R. R. Dong and N. Su,
https://iopscience.iop.org/article/10.1088/0256-307X/37/10/101201Chin. Phys. Lett. 37, no.10, 101201 (2020).
Agaev:2020nrc
S. S. Agaev, K. Azizi and H. Sundu,
https://iopscience.iop.org/article/10.1088/1361-6471/ac0b31J. Phys. G 48, no.8, 085012 (2021).
Qi:2021iyv
J. J. Qi, Z. Y. Wang, Z. F. Zhang and X. H. Guo,
https://link.springer.com/article/10.1140/epjc/s10052-021-09422-zEur. Phys. J. C 81, no.7, 639 (2021).
Chen:2021tad
H. Chen, H. R. Qi and H. Q. Zheng,
https://link.springer.com/article/10.1140/epjc/s10052-021-09603-wEur. Phys. J. C 81, no.9, 812 (2021).
An:2022vtg
H. T. An, Z. W. Liu, F. S. Yu and X. Liu,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.106.L111501Phys. Rev. D 106, no.11, L111501 (2022).
Liu:2011xc
Y. R. Liu and M. Oka,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.85.014015Phys. Rev. D 85, 014015 (2012).
Lin:1999ad
Z. W. Lin and C. M. Ko,
https://journals.aps.org/prc/abstract/10.1103/PhysRevC.62.034903Phys. Rev. C 62, 034903 (2000).
Nagahiro:2008mn
H. Nagahiro, L. Roca and E. Oset,
https://link.springer.com/article/10.1140/epja/i2008-10567-8Eur. Phys. J. A 36, 73-84 (2008).
Chen:2017xat
R. Chen, A. Hosaka and X. Liu,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.97.036016Phys. Rev. D 97, no.3, 036016 (2018).
Kaymakcalan:1983qq
O. Kaymakcalan, S. Rajeev and J. Schechter,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.30.594Phys. Rev. D 30, 594 (1984)..
Tornqvist:1993ng
N. A. Tornqvist,
https://link.springer.com/article/10.1007/BF01413192Z. Phys. C 61, 525-537 (1994).
Tornqvist:1993vu
N. A. Tornqvist,
https://link.springer.com/article/10.1007/BF02734018Nuovo Cim. A 107, 2471-2476 (1994).
Klempt:2002ap
E. Klempt, F. Bradamante, A. Martin and J. M. Richard,
https://www.sciencedirect.com/science/article/pii/S0370157302001448?via
http://arxiv.org/abs/2307.04481v1 | 20230710110332 | Digital Modeling for Everyone: Exploring How Novices Approach Voice-Based 3D Modeling | [
"Giuseppe Desolda",
"Andrea Esposito",
"Florian Müller",
"Sebastian Feger"
] | cs.HC | [
"cs.HC",
"cs.AI",
"H.5.2; I.2.1"
] |
Digital Modeling for Everyone
G. Desolda et al.
Department of Computer Science, University of Bari Aldo Moro, Bari, Italy
{giuseppe.desolda, andrea.esposito}@uniba.it
LMU Munich, Munich, Germany
{florian.mueller, sebastian.feger}@um.ifi.lmu.de
Digital Modeling for Everyone: Exploring How Novices Approach Voice-Based 3D Modeling
Giuseppe Desolda10000-0001-9894-2116 Andrea Esposito10000-0002-9536-3087 Florian Müller20000-0002-9621-6214
Sebastian Feger20000-0002-0287-0945
August 12, 2023
====================================================================================================================================================
Manufacturing tools like 3D printers have become accessible to the wider society, making the promise of digital fabrication for everyone seemingly reachable. While the actual manufacturing process is largely automated today, users still require knowledge of complex design applications to produce ready-designed objects and adapt them to their needs or design new objects from scratch. To lower the barrier to the design and customization of personalized 3D models, we explored novice mental models in voice-based 3D modeling by conducting a high-fidelity Wizard of Oz study with 22 participants. We performed a thematic analysis of the collected data to understand how the mental model of novices translates into voice-based 3D modeling. We conclude with design implications for voice assistants. For example, they have to: deal with vague, incomplete and wrong commands; provide a set of straightforward commands to shape simple and composite objects; and offer different strategies to select 3D objects.
§ INTRODUCTION
The digital fabrication revolution aims to democratize the way people create tangible objects <cit.>. With the widespread availability of 3D printing together with many other digital fabrication technologies such as laser cutters or CNC routers, end users are moving from passive consumers to active producers. While the actual manufacturing process is largely automated today, users are still required to have a profound knowledge of complex 3D modeling applications when they adapt models to their needs or even design new objects from scratch <cit.>. Thus, even if the introduction of technologies such as 3D printers has revolutionized the hobbyist community, lowering the barrier of entry to manufacturing even for novices (who can now take part in the process of creating artifacts without relying on third parties), we argue that the design of the 3D objects to be manufactured still requires a high level of knowledge and expertise.
These limitations have pushed researchers to investigate natural interaction techniques to simplify 3D modeling tools <cit.>. For example, research explored gestures <cit.>, virtual/augmented reality <cit.>, eye tracking <cit.>, brain-computer interfaces <cit.> and their combination <cit.> as a multimodal approach. However, their adoption is reserved for technical users and is strongly limited by hardware costs and excessive size/weight that can easily fatigue users <cit.>. As another possible solution, voice-based interaction has been explored, either to complement the traditional GUI interface (e.g., to enable shortcuts via voice commands <cit.>) or as the primary interaction paradigm (e.g., see <cit.>). Although voice-based interaction requires only a microphone, it does not yet provide adequate digital modeling support for everyone: existing solutions either do not consider final users at all <cit.>, or only target 3D experts <cit.>, and novices are not considered potential target beneficiaries of the proposed innovations.
To lower the barrier to the design and customization of personalized 3D models by exploiting the potential of voice-based interaction, this study aims to understand how the mental model of novices translates into voice-based 3D modeling. We conducted a high-fidelity WoZ study to elicit novices' mental models, for example, their expectations, beliefs, needs, and abilities. We recruited a total of 22 participants without skills in 3D modeling, who performed 14 tasks revolving around some basic concepts of 3D modeling like the creation of objects, the manipulation of objects (e.g., scaling, rotating, and/or moving objects), and the creation of composite objects. All the WoZ sessions' recordings were analyzed through thematic analysis. The findings of the study have been distilled in the form of lessons learned. For example, we found that voice assistants must: manage the corrections that novices make during and after the commands; deal with vague and incomplete commands; consider novices' prior knowledge; provide only a simplified set of operations for creating simple and composite 3D objects; follow a workflow similar to what novices would do if they were building real objects; understand chained commands; and understand commands that are relative to the users' point of view.
The contribution of this paper is two-fold. First, we report the results of our WoZ study presenting the themes that emerged from the thematic analysis. Second, based on these results, we provide a set of design implications for the future design of voice-based interaction paradigms for 3D modeling for novices.
§ BACKGROUND AND RELATED WORK
This study revolves around the concept of voice-based 3D modeling as a key factor for enabling the democratization of digital fabrication. This section starts by illustrating some of the existing solutions based on natural interaction that try to address the complexity of 3D modeling (<ref>). Next, we provide an overview of the requirements for interacting with voice assistants (<ref>). Finally, we provide a brief summary of the motivation of this study and introduce the research question that guided our work (<ref>).
§.§ Addressing the Complexity of 3D modeling
To mitigate the issues of traditional GUI-based CAD systems, researchers explored natural interaction paradigms like eye tracking <cit.>, brain-computer interface <cit.>, gestures <cit.>, virtual/augmented reality <cit.> and their combination <cit.> as a multimodal approach for 3D modeling. The goal of natural interactions with CAD systems is to increase their usability for both expert users and, especially, novice users. Specifically, they aim to:
* reduce the learning curve of the system;
* allow a more intuitive interaction process;
* enhance the design abilities of the designers <cit.>.
An example of a multimodal system is “3D Palette” by Billinghurst et al.: a mix of tablet and pen inputs, electromagnetic sensors and voice commands are used to support the digital design process <cit.>. Similarly, Nanjundaswamy et al. explored a mix of gesture-based interaction, speech recognition, and brain-computer interfaces to reduce the initial learning curve of the design system <cit.>. A complete overview of the multimodal solutions for CAD is reported by Niu et al. <cit.>. Despite these potential benefits, such multimodal techniques require the adoption of specialized hardware (e.g., depth-sensing cameras for gesture recognition, headsets to recognize brain signals), whose use can be limited by their price, size, weight, and complexity of use <cit.>. Thus, it is still hard for novice users to really adopt them in real and daily contexts <cit.>.
To overcome these limitations, researchers also investigated voice-based interaction because of its intuitive nature and the simplicity of the required hardware, i.e., a microphone, which nowadays is embedded in any laptop, tablet, or webcam <cit.>. Furthermore, considering the ubiquity of smartphones and the rise of AR and VR glasses, voice-based interaction can be generalized to technologies where other interaction modalities are not available options. Attempts at integrating voice-based interaction into CAD systems date back to 1985 <cit.>. More recent work suggests the use of voice commands to allow users either to quickly search commands by simply stating their intention <cit.>, or to annotate 3D models <cit.>. Systems where the entire modeling process is carried out by voice commands have also been explored. An example is the solution presented by Kou and Tan, where voice commands related to a CAD-specific lexicon and grammar are understood by a context-aware algorithm <cit.>. A similar example was proposed by Xue et al., which improves the previous solution by allowing free-form sentences <cit.>. Another example of a fully working system is the one presented by Grigor et al.: it follows the same ideas as the previous ones but uses AI to understand the users' inputs, thus allowing for more freedom in the commands <cit.>. Similarly, Kou et al. proposed a flexible voice-enabled CAD system where users are no longer constrained by predefined commands, exploiting a knowledge-guided approach to infer the semantics of the voice input <cit.>.
Regarding all the previous examples, it must be highlighted that their interaction paradigms were designed without any kind of involvement of the final users <cit.> or by solely involving experts in the final testing phase <cit.>. For example, the study by Nanjundaswamy et al. evaluates a multimodal system using gestures, speech and a brain-computer interface by involving a group of five skilled people <cit.>. Similarly, Khan et al. involve a total of 41 skilled users from an architecture or engineering background to elicit the requirements of a CAD system based on gestures and speech commands <cit.>. As another example, Vyas et al. test the usability of a speech-based CAD system involving 6 students with backgrounds in engineering, architecture and visualization <cit.>.
The work proposed by Cuadra et al. investigated how novices use voice assistants to design 3D objects <cit.>. They performed a WoZ study to compare voice assistants with and without the use of a video channel showing the design in progress, investigating how the two approaches impact users' accuracy and satisfaction. Cuadra et al. validate the idea of using voice assistants, as participants are more satisfied with their objects and suffer less from cognitive overload when the design process is supported by video, but their study does not provide any insight into the mental model of novices approaching the digital modeling task <cit.>.
§.§ Interacting with Voice Assistants
The first voice interaction solution implementing speech recognition dates back to 1952, when Davis et al. proposed a prototype able to recognize digits <cit.>. In recent years, the evolution of machine learning and AI fostered the spread of powerful commercial voice assistants, often based on deep neural networks trained on a plethora of data.
However, such powerful speech recognition models alone are not sufficient to build an effective voice assistant, since the interaction with such systems must be considered in the design of the whole system <cit.>. This need, together with the growing availability of commercial voice assistants, has fostered a sharp uptick of studies on user interaction with voice assistants <cit.>. Aspects like the cues that drive the conversation <cit.>, the properties that a voice assistant should have <cit.>, the user's mental model <cit.>, emotions felt during the conversation <cit.>, conversational design patterns <cit.> have been investigated. In addition, solutions to design and evaluate interaction with voice assistants are beginning to be proposed (see, for example, <cit.>). Careful consideration of these design aspects gains importance when voice assistants aim to simplify challenging or technical operations (e.g., see <cit.>). Since 3D modeling represents such a demanding task for novices, the elicitation of the novices' mental model is crucial to lower the barrier for 3D modeling.
§.§ Summary and Research Question
The analysis of the literature highlights that, to simplify 3D modeling, the existing solutions are often based on multimodal techniques such as gestures, eye tracking, or brain-computer interfaces; however, their adoption in real contexts is strongly limited by the need for specialized hardware and, overall, they target technical users.
Voice interaction seems a promising paradigm that can overcome the limitations of multimodal solutions, but the existing voice-based solutions are still lacking for three important reasons:
* users are often not considered throughout the design phase, or they are only involved too late in testing phases;
* to the best of our knowledge, novices are never considered as target users;
* the voice-based interaction is built on top of the existing CAD systems (and their complexity), instead of designing the voice paradigm and the whole system from scratch.
Considering these limitations, to really democratize digital fabrication considering novices, users should be able to access 3D modeling tools even without special skills. All these motivations pushed us to explore novices' mental model in voice-based 3D modeling, in order to reduce the cost of their entry in the digital fabrication era. This is an aspect that has never been explored before and that deserves attention to really democratize digital fabrication. Therefore, our work addresses the following research question: How does the mental model of novices translate into voice-based 3D modeling?
§ METHOD
To answer our research question, we performed a high-fidelity WoZ study <cit.> because it has been proven successful in eliciting the user's mental model for voice-based interaction (e.g., see <cit.>). Then, we carried out an inductive thematic analysis <cit.> on the qualitative data, i.e., the transcriptions of the WoZ sessions and the answers of the participants to the open questions.
§.§ Participants
A total of 22 participants (F=15, M=7) were recruited through convenience sampling <cit.> from the social circles of the authors of this article. This number of participants is in line with other similar studies (e.g., see <cit.>). Half of the participants were Italian, while the other half were German. Their mean age was 24.1 years (σ = 3.7, min = 21, max = 34). The entire study was performed in English so as to avoid results tied to a specific language, an aspect that is out of the scope of this study. To ensure that the collected data is not biased toward knowledgeable users, we only recruited participants without any kind of experience with 3D modeling. Regarding the participants' level of education, around 45.45% have a High School Diploma or a German A-level, 36.36% have a Bachelor's Degree, 13.64% have a Master's Degree, and only one participant (representing the remaining 4.55%) did not provide any information. Most participants (15 out of 22) do not have a STEM education, while 6 of the remaining 7 do not have any computational thinking skills, as they studied or worked in non-IT scientific fields (e.g., pharmaceutical and nutrition sciences). Regarding the participants' skills, they had an average level of IT knowledge (x̅ = 6.5/10; σ = 2.1), a medium-low level of knowledge of voice assistants (x̅ = 3.1/10; σ = 2.0), and very low knowledge of 3D modeling (x̅ = 1.6/10; σ = 1.1).
§.§ Tasks
A total of 14 tasks were designed by two authors of this paper, both experts in 3D modeling, taking into account the most common and useful activities that are required to create simple and composite 3D objects. The resulting tasks revolve around basic concepts of 3D modeling, like the creation of simple objects, the manipulation of objects (e.g., scaling, rotating, and/or moving objects), and the creation of composite geometries. The details of the tasks are reported in the task table in the attached appendix (the list of all the graphical tasks is available in the sub-folder tasks). To reduce the impact of the priming effect <cit.> that providing a textual description of a task would have on the participants, we chose to provide the participants with graphical tasks: each task is composed of a brief prompt and a diagram showing the participants a 3D object or a 3D transformation that should be recreated (an example of a graphical task is provided in <ref>).
§.§ Apparatus
We carried out the WoZ study remotely by using Zoom[<https://zoom.us>]. Four researchers were involved: two Italian researchers acted as conductor and wizard for the Italian participants, while two German researchers acted as conductor and wizard for the German participants. In both groups, the researchers switched roles to minimize the risk of bias introduced when conducting the test.
To create the illusion for participants that they were interacting with a real voice-based system for 3D modeling, we decided to use Blender[<https://www.blender.org>], explaining to participants that they could interact with it through voice commands. Blender was selected since it is free and open-source software that, among other features like sculpting or rendering, allows one to design and visualize 3D objects. One of the main features that made Blender the perfect choice for our WoZ study is the availability of a Python API[<https://docs.blender.org/api/current/>] that can be used inside a shell-like environment: this allows the Wizard to immediately create and modify the objects programmatically when the participants provide voice commands, thus preventing the participants from noticing anything odd and increasing the speed at which the Wizard is capable of satisfying the participants' requests. Taking advantage of this feature, we pre-defined a set of functions in a Python module to simplify the use of Blender's APIs for the purpose of this study (the module is available in the supplementary materials, sub-folder python module).
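The actual module is provided in the supplementary materials; as an illustration only, the following is a minimal sketch of what such wrapper functions could look like on top of Blender's bpy API (the function names, signatures, and defaults shown here are our own illustrative choices, not the module actually used in the study).

import bpy
from math import radians

def create_cube(size=2.0, location=(0, 0, 0), name=None):
    """Add a cube primitive and optionally rename it so it can be referred to later."""
    bpy.ops.mesh.primitive_cube_add(size=size, location=location)
    obj = bpy.context.active_object
    if name:
        obj.name = name
    return obj

def create_cylinder(radius=1.0, depth=2.0, location=(0, 0, 0), name=None):
    """Add a cylinder primitive with the given radius and depth."""
    bpy.ops.mesh.primitive_cylinder_add(radius=radius, depth=depth, location=location)
    obj = bpy.context.active_object
    if name:
        obj.name = name
    return obj

def move(obj, dx=0.0, dy=0.0, dz=0.0):
    """Translate an object relative to its current position."""
    obj.location.x += dx
    obj.location.y += dy
    obj.location.z += dz

def scale(obj, factor):
    """Uniformly scale an object around its origin."""
    obj.scale = tuple(s * factor for s in obj.scale)

def rotate_z(obj, degrees):
    """Rotate an object around the global Z axis."""
    obj.rotation_euler.z += radians(degrees)

def duplicate(obj):
    """Create an independent copy of an object and link it to the current collection."""
    copy = obj.copy()
    copy.data = obj.data.copy()
    bpy.context.collection.objects.link(copy)
    return copy

With helpers of this kind, a spoken request such as “create four cylinders” can be satisfied by the Wizard with a few short calls in the Python console (e.g., four calls to create_cylinder with different locations), keeping the response time low.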
To show the participants the task they had to complete, we overlaid the graphical tasks on the bottom-right side of the Blender window. To this aim, we used Open Broadcaster Software (or, more commonly, OBS)[<https://obsproject.com>], free and open-source software for video recording and live streaming. Using OBS, it was also possible to define animations and transitions to show when users move to the next task and to signal to the participants that the “voice assistant” (i.e., the Wizard) is listening to the user's command or is actually performing it. In particular, for each task, both the Blender window and the graphical task are visible (see <ref>). When the participants activate the Blender voice assistant by saying “Hey Blender”, the “I'm listening” label indicates that they can provide the command to solve the task (see <ref>). Then, when the voice command has been issued, a rotating icon indicates that the voice assistant is analyzing it, creating the illusion that there is a real voice assistant (see <ref>). During the loading, the Wizard writes the Python statements related to the user's commands and the result is finally shown in Blender (see <ref>).
§.§ Procedure
For each participant, when the Zoom session started, both the conductor and the Wizard were connected on Zoom, but the latter never appeared or interacted with the participant. While the conductor introduced the participant to the study, the Wizard shared his screen, in particular the window created using OBS. The sessions were recorded using Zoom's built-in recorder. Before starting the recordings, participants were asked to agree to a privacy policy (either in digital or verbal form). It is worth mentioning that our universities require approval by an ethics committee only in the case of medical and clinical studies. For other studies like ours, they require that test participants give consent in a written or digital form; thus, we informed participants about all the details of the study and asked them to agree before starting the study. All of them agreed.
As soon as the participant agreed to attend the study, the conductor invited the participant to complete a set of tasks. The webcam of the conductor was turned off during task execution to avoid disturbing the participant. To reduce the variability between sessions and between the Italian and German participants, the same introductory script was defined (available in the attached appendix, sub-folder "introductory script"). In summary, the conductor explained that the goal of the study was to validate a new voice assistant called Blender, which we created to assist novices in 3D modeling. Then, the conductor asked the participant to complete a set of tasks, explaining that, for each of them, a graphical representation would appear on the bottom-right side of their screen. The conductor also specified that the participant had to first activate the voice assistant by saying “Hey Blender” and then, once the “I'm listening” label appeared, the participant could provide the sequence of voice commands that, in their opinion, was the best to solve the task (for example, “create a cube”). No other examples of voice commands were provided to avoid introducing bias.
At the end of the session, each participant filled in a questionnaire that included questions on demographics, as well as some usability-related questions to evaluate the effectiveness of the Blender voice assistant. Furthermore, since (to the best of our knowledge) there were no previous examples of graphical tasks for a WoZ study, we also chose to add some questions to evaluate how easy it was for the user to understand the tasks (available in the attached appendix, sub-folder questionnaire). The entire procedure lasted around 30 minutes for each participant.
§.§ Data Analysis
The first analysis regarded the questionnaire answers that evaluate the choice of providing the tasks in graphical format. Specifically, we included the question “How easy was it to understand the graphical tasks?”, with answers ranging from 1 (not simple at all) to 10 (very simple). Both the median and average scores are 8.2/10, with a standard deviation of 1.0. These results seem to validate the idea of presenting the tasks graphically, but they also highlight that for some tasks (the ones with an ambiguous representation) the conductor of the study must be able to guide the participants to the right interpretation (without the use of words that may introduce a priming effect <cit.>). In our study, this issue impacted only the 11th task for four participants, and it was solved by turning the webcam on and mimicking the action depicted in the task, in case the user was showing difficulties in understanding a task or if he/she explicitly requested help.
After ensuring the quality of the graphical tasks, we analyzed the qualitative data collected during the study that helped us answer the research question, i.e., video transcriptions, questionnaire responses, and participants' comments. All the video recordings (a total of about 11 hours) were first transcribed and expanded by including annotations that identify pauses, the start and the end of the processing by the Wizard, and any errors or over-corrections by the Wizard. This dataset was completed by adding the participants' comments and the answers to the three open questions we included in the questionnaire:
* What did you like the most about the system used and the interaction with it?
* What did you like less about the system and the interaction with it? and
* Would you use a system like Blender to model in 3D? Please motivate your answer.
This data was analyzed in a systematic qualitative interpretation using Inductive Thematic Analysis <cit.>. The initial coding was conducted independently by four researchers, who are co-authors of this article and are experienced in qualitative data analysis: two of them analyzed the Italian results, while the other two analyzed the German results. The two pairs of researchers began with open coding independently. Once all the data was coded, the set of initial codes was further refined by merging the different codes. This first filtering phase allowed us to obtain a set of code groups that capture meaning at a higher level. The identified code groups were then used by each pair to extract the main themes. In the end, both the codes and the themes of the two pairs were compared to identify similarities and differences. With the exception of some minor differences related to their naming, both the codes and the themes identified by the two pairs of researchers were identical in meaning. The final themes presented here derive from a joint naming session carried out by all four researchers. Only a few small differences were identified, and they will be discussed as part of the design implications. The final codes and themes, with the relationships among them, are available in the attached appendix, sub-folder Codes and Themes.
§ RESULTS
The thematic analysis resulted in the description of five themes reported in the following sub-sections. For each theme, significant participant quotes are reported. For the sake of conciseness, we will refer to participants as “P” followed by the participant number, and to the WoZ system as simply “system”.
§.§ Basic Operations
This theme frames the interaction strategies that novices adopt when they approach the 3D modeling activities of creation and manipulation.
§.§.§ Creation.
Novices tend to provide simple commands in the form “verb + shape name”, where the verbs used are typically “create”, “draw”, and “build”, and examples of shape names are “cube”, “box”, or “cylinder”. This behavior has been observed in tasks that required the creation of simple or composite objects. Strictly related to this is object duplication. Novices usually keep the requests simple by asking the system to duplicate a specific object, as P4 did in task 12 when he said “duplicate the cube”. When novices instead face the creation of multiple identical objects without using duplication requests (for example, because there was no previous copy in the scene), they simply use a basic creation request that also provides the number of copies: this is clearly exemplified by P5 in task 14 in “create four cylinders”.
§.§.§ Manipulation
The manipulation operations used by novices during the study are translation, rotation, and scaling. It is worth mentioning that the manipulation operations require some kind of reference frame to be performed; to this aim, novices often use relative references (for more details, see theme <ref>, where the references used by the novices are discussed).
In more complex cases, novices provided commands containing both a creation request and an implicit manipulation request, where the manipulation is often expressed as a set of constraints on the final object. As an example, in task 14, P8 asked the system to “create four cylinders on the corners of the lower rectangle”: in this example, the multiple creation request is clearly visible, and it is put alongside a relative positioning request.
Finally, one of the most interesting identified open codes is the one that relates to moving objects with respect to implicit construction shapes. As an example, P4 during the last task asked “place the four cylinders at the four corners of a square.” In this example, the participant did not have a square in the scene but implicitly requested the system to create a square, place the cylinders at its corners, and delete the square once the operation was completed. This kind of operation was pretty common throughout the last task: around 45% of the participants provided a command that used a construction shape like the one previously cited.
§.§ Selection of Objects
This theme covers the strategies adopted to identify and select objects, specifically, absolute selection, relative selection, or implicit selection. In the case of absolute selection, most participants explicitly refer to the entire scene, or to a single object in a scene by using its name (the one shown in the “inspector” view in Blender, as P11 asked during task 14 by saying “should I call it Box 0001 if I want to move it?”) or by its shape (as P1 did during task 6 by saying “move the cube 20 cm downwards”). A specialization of the latter case is the reference to a shape using a 2D approximation. One example is echoed by P8 during task 14: “Hey blender, move the upper rectangle on the side of the lower one”. Here, the user referred to two 3D boxes by their 2D approximation (rectangles).
The relative selection resulted in four commonly used strategies to select objects, namely:
* their relative time of creation (e.g., P3 in task 14: “Blender, place the second box under the first”);
* their relative position (e.g., P8 in task 14: “Hey Blender, create four cylinders in the corners of the lower rectangle”);
* their dimensions (e.g., P11 in task 14: “Hey Blender, move the tallest box attaching it to the side of the other box”);
* by inverting the current selection, eventually applying additional filters (e.g., P3 in task 14: “Blender, place the other two cylinders like you placed the previous ones”).
Finally, users also often performed implicit selections of the objects in the scene, for example, by referring to a single object in the scene or by referring to the last edited object, either explicitly or implicitly (e.g., P1 in task 8 implicitly referred to the last edited object by saying “increase the volume by three times”).
It is worth remarking that novices do not differentiate nor have preferences between the various methods, and actually, often mix them to be sure that the selection is clear and precise (e.g.: in a previously shown example by P8 in task 14, “Hey blender, move the upper rectangle on the side of the lower one”, the user performs the selection by using both an absolute reference to the 2D approximation of the shape of an object, and a relative reference to the positioning of another object).
§.§ Errors
Due to the lack of geometry knowledge and/or 3D modeling expertise, novices often commit errors of which they are aware, and errors of which they are not aware. In the first case, they try to prevent or correct the errors; for this reason, we named this code group “error correction”. In the second case, when users are either not aware of an error or do not care about trying to fix it, the error simply represents a mistake made during the task execution; for this reason, we named this code group “execution errors”. We analyze the details of each case in the following paragraphs.
§.§.§ Error correction.
Different behaviors for correcting the errors have been observed, specifically during and after the command. Regarding the error correction made during the command, some novices try to prevent their own errors when they recognize one while stating the command, by providing a correction in the same command. For example, P9 during the chair construction task says “Hey blender, create a rectangle over the quadrilateral of length – I mean, height 30 centimeters, depth 5 and side 20–22...”. This command contains multiple corrections, starting from the correction of the name of the dimension that the user wants to set to 30 centimeters, and then correcting the actual size of the side of the rectangle to 22 centimeters
Regarding the corrections made after the commands, most of the participants expected some utility commands that are typically available in GUI-based software, like the “undo” and “redo” functions. As an example, P3 during task 14 provided both the command “Blender, undo the last operation”, and “place the other two cylinders as you've placed the previous ones.” This highlights how, although novices may not be familiar with the task of 3D modeling or voice-based interaction, they were able to transfer the knowledge of other software they may have used in the past, expecting that their previous experience would be applicable to the new, unknown system.
§.§.§ Execution errors.
Some of the mistakes committed by the novices are strictly related to slips of the tongue, lack of knowledge, or system shortcomings. In the case of slips of the tongue, some participants referred to shapes and objects using the wrong name (e.g., P10 tried to refer to a box by calling it “cylinder” during task 14). In the case of lack of knowledge, errors range from wrong names used for dimensions and primitives, to being unaware of the direction of the axes, perhaps because of previous knowledge obtained in school. For example, the Y axis in a 2D plane is usually the vertical one, thus some novices expect the Y axis to be the vertical one also in 3D. Finally, we identified system shortcomings, i.e. errors made by the wizard during the execution of the commands: all of these errors can be traced back to misunderstanding of the command, often due to its intrinsic vagueness (see the “Gulf of Execution” theme).
§.§ The Gulf of Execution
This theme represents the way novices translate their goals into commands. Throughout the sessions, we immediately noticed that, before providing specific commands, novices often think aloud to understand what they have to do and how to translate it into commands, as P16 did during task 14: “so, the picture has a different point of view. I should move it a little bit. Ok. Hey Blender, make the cylinder bigger.” Then, by analyzing their commands, we identified three main aspects where the gulf of execution becomes critical, specifically:
* relativity
* vagueness
* abstraction.
§.§.§ Relativity.
Here we summarize how novices think about positions, scale, rotation, and selection relative to other parts of the scene. Two main overall frames of reference are used by the novices: the axes and other objects.
To select an axis, novices adopt three approaches, namely:
* axis relative direction: a common way of selecting axes is through their relative direction (depending on the user's point of view), as echoed by P9 during task 11, by saying “move the geometric shape 20 cm to the right”;
* the axis color: as an example, during the execution of the last task (the one of creating a chair), P2 referred to the Y axis by its color stating “turn of 180 degrees the box on the green axis”;
* axis name: some novices also refer to axes by their actual name, as P19 did during the 12th task by asking the system to “move the right cube 10 centimeters along the X axis.”.
When referring to objects' dimensions, novices adopted two main approaches for selection. A first approach consists of using the dimensions' name, as P3 has done in the task of chair creation by saying “move along the y axis of a length equal to the base of the second box the last cylinder”. A second approach used a relative comparison to other dimensions; for example, P3 during task 14 selected an object by stating “move the third cylinder under the highest box [...]”.
§.§.§ Vagueness.
This aspect captures a lack of information in the commands provided to reach the goals. In general, the lack of information is caused by:
* chaining of multiple commands to describe at a high level a composite shape, as shown by P22 during the chair creation task, by asking “create four cylinders with the same distance to each other.”;
* missing data that the system needs to execute the requests; as an example, novices forget to provide some or all dimensions of a shape (e.g., P1 in task 1 stated “create a cube” without providing any dimension), they forget to specify a parameter for a transformation (e.g., P7 in task 10 asked to “rotate of 30 degrees the figure” without specifying a direction).
§.§.§ Abstraction.
We noticed two behaviors related to the abstraction of the commands. The first one relates to a general abstraction over the process to reach the desired goal, as exemplified by P2, who tried to solve task 14 by saying “create a chair using two boxes and four cylinders”. The second one refers to how novices translate the desired 3D shapes into words. For example, shapes are created by providing a general description (e.g., P10 in task 4, by saying “create a 3D rectangle 30 cm high, 20 cm deep, and long 10 cm”, referred to a box as a “3D rectangle”, thus simply describing the shape) or by approximating the desired shape with a similar 2D shape (e.g., P8 during task 4 used “rectangle” instead of “box” by saying “create a rectangle of height 30, width 20, depth 10”). Furthermore, novices (especially German participants) also refer to the 3D shapes by using similar real-world objects (e.g., P17 during task 3 stated “create a dice with an edge length of 30 centimeters”, using “dice” instead of “cube”).
§.§ Users' Requests
We collected requests and suggestions provided by the participants, which provide useful insights on novices' mental model.
Among the most common requests, participants often asked to rotate the camera and change their point of view. As an example, P11, during the last task of creating a chair, asked “can I see it from below?” and “can I see it from above” to perform some minor adjustments and corrections to the positions of the 3D objects. This behavior underlines the need to provide a way to allow novices to rotate their point of view. This functional requirement is strictly related to the “Selection of Objects” theme, as it may benefit from different interaction modalities that could be explored (e.g., using AR).
Another common request is related to the actual dimensions: when novices explicitly set a size in the command (for example, in the third task), they want to check that the system created an object of the right size. This is exemplified by P10, who explicitly asked “can I ask it to check the dimensions?” in the third task. This suggestion does not translate into an additional requirement for the AI model that recognizes users' commands, but rather provides some insights on the requirements of the whole 3D modeling tool.
Other minor suggestions regarded the customization of the axis: some participants expected the Y axis to be the “vertical” one as it usually happens in 2D drawings, rather than the Z axis as it happens in 3D modeling tools like Blender. Providing such a customization option would surely reduce the error rate in a final system, as the novices could adapt it to their own knowledge.
§ DISCUSSION AND IMPLICATIONS
Based on the findings of the WoZ study, in the following we present design implications for the development of future voice-based 3D modeling tools for novice designers and relate them to the wider research literature around voice assistants and general user experience principles.
§.§.§ Understand user corrections and adapt to them.
This requirement stems from the errors the users are aware of (see the “Errors” theme). It impacts two different facets of future voice-based digital modeling tools: the NLU layer and the conversation flow.
Regarding the NLU layer, systems must be able to intercept user corrections and aborted commands. Based on our findings, we note that recognizing uncertainty, hesitation, doubt, and error awareness early on is particularly crucial in the digital modeling context, as users displayed them frequently due to their unfamiliarity with 3D modeling <cit.>.
Regarding the conversation flow, after intercepting the error correction, it is important to design a dialog that helps users understand the error and recover from it <cit.>. Moore and Arar <cit.> provide valuable pointers through their Natural Conversation Framework which proposes a set of conversational patterns. Some of these patterns relate to user corrections and can be applied to voice-based digital modeling. An example inspired by this framework that relates to errors that users correct while they issue a 3D modeling command might be:
User: Hey blender, increase of 10 centimeters -no- of 20 centimeters the sides of the geometric figure
Agent: I'm sorry, I didn't understand. Do you mean an increase of 10 or 20 centimeters?
User: 20 centimeters.
Agent: Ok, I'm increasing of 20 centimeters the sides of the geometric figure.
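As a toy illustration of what intercepting user corrections could mean at the NLU level, the sketch below flags in-utterance repair markers in a transcribed command. The marker list, function name, and strategy of keeping only the fragment after the last marker are our own illustrative assumptions, not part of the system used in the study.

```python
import re

# Hedged sketch: flag in-utterance self-corrections such as
# "increase of 10 centimeters - no - of 20 centimeters".
# The marker list is an illustrative assumption, not an exhaustive model.
CORRECTION_MARKERS = re.compile(r"\b(no|i mean|sorry|actually|wait)\b", re.IGNORECASE)

def detect_self_correction(utterance: str):
    """Return the fragment after the last repair marker, or None."""
    matches = list(CORRECTION_MARKERS.finditer(utterance))
    if not matches:
        return None
    return utterance[matches[-1].end():].strip(" ,-.")

print(detect_self_correction(
    "increase of 10 centimeters - no - of 20 centimeters the sides of the figure"))
# -> "of 20 centimeters the sides of the figure"
```

A real NLU layer would combine such surface cues with ASR confidence and prosody before asking the clarification question shown in the dialog above.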
§.§.§ Deal with vague and incomplete commands.
We have identified numerous errors, caused by a lack of knowledge and by system shortcomings, that users were unaware of. These errors are related to misunderstandings due to the vagueness and abstraction of some commands. Self-repair strategies should be introduced to improve the interaction <cit.>. To this aim, we identified two possible solutions. The first one consists of sensible defaults: in case of a vague command, the voice assistant fixes it by selecting a relevant parameter from a list of alternatives. For example, if the user says “create a cylinder on top of the cube”, the cylinder diameter is not specified. In this case, the system can assume that the diameter is equal to the side of the cube. This solution can also benefit from the dialog context: as suggested by Jain et al., resolving and maintaining the dialog context can help select the most appropriate sensible default from a list of alternatives <cit.>. For example, if other cylinders have been previously created with a given diameter on top of cubes, the same can be applied to the new ones in case of vague commands. This allows the system to be proactive, anticipating the users' requests as suggested by Völkel et al. <cit.>.
The second solution consists of interactively guiding the user by providing the missing information. With reference to the previous command of the box and cylinder, instead of using defaults, the voice assistant can explicitly ask the user for the desired radius. The strategy adopted by the voice assistant is informed by the degree of system autonomy or desired user control. A hybrid solution can also benefit from both approaches: the selected sensible default can be used by the voice assistant to ask the user if the default is right, for example, with reference to the previous case the voice assistant can reply: “OK, I'm creating a cylinder with a diameter equal to the side of the cube. Is it OK?”
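A minimal sketch of this “sensible default plus confirmation” strategy is given below; the slot names, the context structure, and the confirmation phrasing are illustrative assumptions rather than a prescribed design.

```python
# Hedged sketch: fill missing slots from the dialog context (or a default)
# and ask the user to confirm. All names and defaults are illustrative.
def resolve_vague_command(shape: str, slots: dict, required: list, context: dict):
    defaults = {"diameter": context.get("last_diameter", "the side of the cube")}
    missing = [s for s in required if s not in slots]
    for s in missing:
        slots[s] = defaults.get(s, "unspecified")
    if missing:
        details = ", ".join(f"{s} = {slots[s]}" for s in missing)
        return slots, f"OK, I'm creating a {shape} with {details}. Is it OK?"
    return slots, f"OK, I'm creating a {shape}."

print(resolve_vague_command("cylinder", {}, ["diameter"], {})[1])
```

The degree of system autonomy discussed above would then determine whether the assistant applies the default silently, asks the confirmation question, or asks for the missing value outright.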
§.§.§ Translate interaction conventions to voice-based digital modeling.
Users commonly apply their experience with software applications to other applications or even different domains. As an example, some participants expected to execute “undo” or “redo” commands, which are common across applications and domains. This is in line with the traditional Nielsen heuristics of “user control and freedom” and “consistency and standards” <cit.>. The latter states that “users should not have to wonder whether different words, situations, or actions mean the same thing”, thus the system should “follow platform and industry conventions” (from Nielsen <cit.>). For this reason, a voice-based 3D modeling system should provide such common operations, like the aforementioned “undo” and “redo” commands. Further exploration may be required to clearly define and match the set of expected commands to voice-based digital modeling.
§.§.§ Adopt simple operations even for the creation of composite 3D models.
Based on the “Basic Operations” theme, we note that most users follow similar and simple approaches even in complex tasks. For example, by analyzing task 13 (which consisted of creating a figure having a cylinder on top of the cube), multiple approaches might be adopted, but novices used only basic operations (creation and translation) to create both a simple cube and a cylinder and then move the latter on top of the former. This highlights that, although many technical operations may be implemented in voice assistants for digital modeling, it is important to provide novices with simple operations to create and compose 3D objects, rather than prescribing more complex operations like “extrusion” and “insetting”, which are most adequate for skilled users <cit.>.
§.§.§ Match digital modeling workflows with novices' expectations and experiences from building physical objects.
This implication is also related to the “Basic Operations” theme, but focuses on the last task (the creation of a chair): we noticed that the majority of the users started by creating the base cylinders (almost all users started with a phrase like “create four cylinders”). This provides an interesting insight into how people approach the creation of composite 3D objects. By creating the base cylinders first, users are basically following an approach that starts from the bottom and proceeds upwards. This is no different from the approach that users would follow if they were composing physical shapes: by starting from the bottom, they are able to stack the various shapes without the risk of the composition “falling down”. This indication can be useful if wizard procedures are introduced to guide the creation of composite 3D objects; for example, the voice assistant can start the interaction by asking which shape, with its features, must be placed at the bottom, and then go on guiding the user to create other shapes on top of the previous ones.
§.§.§ Provide alternatives for the selection of 3D objects.
By reflecting on the “Selection of Objects” theme, we argue that it is among the most critical ones: most of 3D modeling revolves around the selection of objects to be composed. We found that several different techniques were adopted by the novices. For example, a common solution is represented by commands that select an object by referring to the entire scene, in other words in an absolute way. We also documented commands that use relative references, for example, the relative time of creation, the relative position, the dimensions, or the inversion of the current selection. The last approach is represented by the implicit selection of the objects in the scene. These strategies represent different solutions the users can adopt to select a 3D object, and thus the voice assistant should accommodate all of them. To simplify the interaction, future voice assistants can be complemented with additional interaction modalities like gestures or eye tracking, where users could simply point <cit.> or gaze <cit.> at the object or surface they want to select.
§.§.§ Understand commands that are relative to the user's point of view.
As described in the “Gulf of Execution” and “Selection of Objects” themes, users often issue commands that are relative to their point of view, in particular, to change the camera perspective, to select an axis, and to select a 3D object. In other words, we found that a common way for novices to issue commands is through the “screen” coordinate system <cit.>, as provided by some professional 3D modeling systems[<https://shorturl.at/fGLRZ>], by using common words such as “left” and “right”, as P9 did during task 11 with the command “move the geometric shape 20 cm to the right”. Furthermore, novices often provided commands relative to both their point of view and other objects (as P10 did during task 13: “insert a cylinder on top of the cube”). This implies that future voice assistants must be equipped with some way of understanding the 3D context in which the command is provided, and they must take into account the user's point of view during the intent-matching process.
§.§.§ Grant multiple ways to refer to the axes.
Users referred to the axes of the 3D scene by adopting different approaches: by indicating the axis color, by referring to their relative direction, or by using the axis name (see the “Gulf of Execution” theme); some users also preferred to treat the Y axis rather than the Z axis as the “vertical” one (see the “Users' Requests” theme). This ambiguity is also found in professional systems, as some of them use the Z axis as vertical while others use the Y axis instead <cit.>. This behavior should be considered in the design of voice assistants for 3D modeling, since this is a core activity that, if not adequately supported, might lead to ineffective user interaction.
§.§.§ Design for complex commands.
Multiple chained commands were often issued to execute various actions. In our study, it was possible to accommodate such multiple commands thanks to the WoZ setup, but voice assistants are typically restricted to simple standalone commands. Similar to what Fast et al. already proposed for complex tasks <cit.>, voice-based systems for 3D modeling should also address this requirement, which strongly impacts the design of the NLU layer that must be able to understand and execute multiple chained commands.
§.§.§ Favor explicit trigger words.
Previous work by Vtyurina et al. argued that forcing the use of explicit trigger words would constrain user interactions, suggesting the use of implicit conversation cues for driving the dialog <cit.>. On the contrary, during our experiments novices used implicit conversational cues while thinking about their workflow and as a natural reaction after a successful command execution (see the “Gulf of Execution” theme): this highlights the need for future voice-based systems to provide clear explicit activation cues and trigger words, to avoid any unintentional activation that would disrupt users' workflow.
§.§.§ Embrace diversity in naming approaches.
As novices usually have little to no knowledge of the 3D modeling domain, they often have to resort to different naming approaches when dealing with shapes for which they do not recall the “right” name. As already highlighted in the “Gulf of Execution” theme, novices can refer to shapes by providing high-level descriptions (e.g., “3D rectangle” instead of “box”), 2D approximations (“rectangle” instead of “box”), or by associating them with a real-world object (e.g., “dice” instead of “cube”). For this reason, future systems must be able to understand both analogies and descriptions of shapes. A concrete solution might be the adoption of a lexical ontology like WordNet <cit.> to infer the shape name related to the real object.
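One possible realization of this idea, sketched below with NLTK's WordNet interface, walks the hypernym paths of the uttered noun and stops at the first known primitive. The primitive list, the fallback behavior, and the assumption that the WordNet corpus is already downloaded are our own; the sketch is illustrative, not a proposal of the final system.

```python
# Hedged sketch: map a real-world object name to a 3D primitive via WordNet
# hypernyms (assumes: pip install nltk; nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

PRIMITIVES = {"cube", "box", "cylinder", "sphere", "cone"}  # illustrative set

def guess_primitive(word: str):
    for synset in wn.synsets(word, pos=wn.NOUN):
        for path in synset.hypernym_paths():
            for node in reversed(path):            # most specific synsets first
                names = {n.lower().replace("_", " ") for n in node.lemma_names()}
                hit = names & PRIMITIVES
                if hit:
                    return hit.pop()
    return None  # fall back to asking the user for a description

print(guess_primitive("dice"))  # may yield "cube", depending on WordNet coverage
```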
§ LIMITATIONS OF THE STUDY
Our study is an initial step toward understanding how novices approach voice-based 3D modeling. We have identified some limitations of our work. First, the novices' languages deserve a wider exploration: our study highlights very small culture-related differences between German and Italian participants; however, a similar study where participants use their native languages might be useful to understand how language might impact the resulting mental model. Similarly, this study does not focus on how aspects like ethnicity, socio-economic status, and age might impact the novice's mental model. Another limitation regards the tasks: the ones used in the study are representative of the most common operations to design 3D models, but digital fabrication often implies the design of objects that are more complex than a chair. In addition, the set of proposed tasks does not cover all possible operations (e.g., selecting textures and making holes). Future work may also study differences between the mental model of lay users (the target of this study) and novices in 3D modeling who are domain experts (e.g., they have expertise in sculpting or 3D world composition, but do not know how to model). Similarly, the proposed voice-based interaction approach may be compared with alternative solutions based on mouse and keyboard or multi-modal approaches, to explore the pros and cons of each solution. Finally, Blender has been selected as the 3D modeling tool because of the advantages reported in <ref>; however, its UI is designed for WIMP interaction and thus presents commands, buttons, functions, etc., that might bias or confuse novices. Despite our carefully hiding all the unnecessary parts of the Blender UI, a system purposely designed to better fit voice interaction might be adopted to elicit the mental model.
§ CONCLUSION
Voice interaction is emerging as a promising paradigm that can simplify 3D modeling for digital fabrication. However, novices' mental model is never considered when designing voice-based 3D modeling systems. In addition, voice interaction is usually built on top of WIMP systems instead of designing the voice paradigm and the whole system from scratch. This study addresses these limitations by investigating the novices' mental model in 3D modeling and contributes to the state-of-the-art by identifying a set of design implications that support the definition of voice-based interaction paradigms for the design and customization of personalized 3D models. This contribution aims to lower the barrier to 3D modeling thus supporting the wider democratization of digital fabrication.
As future work, we are now addressing the limitations reported in the previous section. We are also working on the development of a prototype of a voice assistant integrated into Blender: it is currently being developed in DialogFlow <cit.> and it has been designed considering the design implications proposed in this study. The aim is to study novices' behavior when interacting with real systems, also exploring if and how the design indications suggested in this study also accommodate the design of more complex objects in more realistic situations, for example, by proposing scenarios instead of tasks.
§.§.§ Acknowledgements
This work has been funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No. 952026 (<https://www.humane-ai.eu/>).
The research of Andrea Esposito is funded by a Ph.D. fellowship within the framework of the Italian “D.M. n. 352, April 9, 2022” - under the National Recovery and Resilience Plan, Mission 4, Component 2, Investment 3.3 - Ph.D. Project “Human-Centered Artificial Intelligence (HCAI) techniques for supporting end users interacting with AI systems”, co-supported by “Eusoft S.r.l.” (CUP H91I22000410007).
|
http://arxiv.org/abs/2307.03949v1 | 20230708103948 | Ergodic observables in non-ergodic systems: the example of the harmonic chain | [
"Marco Baldovin",
"Raffaele Marino",
"Angelo Vulpiani"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
Institute for Complex Systems - CNR, P.le Aldo Moro 2, 00185, Rome, Italy
Université Paris-Saclay, CNRS, LPTMS,530 Rue André Rivière, 91405, Orsay, France
Dipartimento di Fisica e Astronomia,
Universitá degli Studi di Firenze, Via Giovanni Sansone 1, 50019, Sesto Fiorentino, Italy
Dipartimento di Fisica, Sapienza Universitá di
Roma, P.le Aldo Moro 5, 00185, Rome, Italy
In the framework of statistical mechanics the properties of
macroscopic systems are deduced starting from the laws of their microscopic
dynamics. One of the key assumptions in this procedure is the ergodic property,
namely the equivalence between time averages and ensemble averages.
This property can be proved only for a limited number of systems; however,
as proved by Khinchin <cit.>, weak forms of it hold even in systems that
are not ergodic at the microscopic scale, provided that extensive observables are considered.
Here we show in a pedagogical way the
validity of the ergodic hypothesis, at a practical level,
in the paradigmatic case of a chain of harmonic oscillators. By
using analytical results and numerical computations, we provide evidence that
this non-chaotic integrable system shows ergodic behavior in the
limit of many degrees of freedom. In particular, the
Maxwell-Boltzmann distribution turns out to fairly describe the
statistics of the single particle velocity. A study of the
typical time-scales for relaxation is also provided.
Ergodic observables in non-ergodic systems: the example of the harmonic
chain
Angelo Vulpiani
August 12, 2023
==============================================================================
§ INTRODUCTION
Since the seminal works by Maxwell, Boltzmann and Gibbs, statistical mechanics
has been conceived as a link between the microscopic world of atoms and
molecules and the macroscopic one where everyday phenomena are
observed <cit.>. The same physical system can be
described, in the former, by an enormous number of degrees of freedom N (of
the same order of the Avogadro number) or, in the latter, in terms of just a few
thermodynamics quantities. Statistical mechanics is able to describe in a
precise way the behavior of these macroscopic observables, by exploiting the
knowledge of the laws for the microscopic dynamics and classical results from
probability theory. Paradigmatic examples of this success are, for instance, the
possibility to describe the probability distribution of the single-particle velocity
in an ideal gas <cit.>, as well as the detailed behavior
of phase transitions <cit.> and
critical phenomena <cit.>.
In some cases (Bose-Einstein condensation <cit.>, absolute negative temperature
systems <cit.>) the results
of statistical mechanics were able to predict states of the matter that were never been observed before.
In spite of the above achievements, a complete consensus about the actual reasons for such a success
has not been yet reached within the statistical mechanics community. The main
source of disagreement is the so-called “ergodic hypothesis”, stating that time
averages (the ones actually measured in physics experiments) can be computed as
ensemble averages (the ones appearing in statistical mechanics calculations).
Specifically, a system is called ergodic when the value of the time average of
any observable is the same as the one obtained by taking the average over the
energy surface, using the microcanonical distribution <cit.>. It is worth mentioning that, from a mathematical
point of view, ergodicity holds only for a small amount of physical systems: the
KAM theorem <cit.>
establishes that, strictly speaking, non-trivial dynamics cannot be ergodic.
Nonetheless, the ergodic hypothesis happens to work extremely well also for
non-ergodic systems. It provides results in perfect agreement with the numerical and experimental observations, as seen in a wealth of physical situations <cit.>.
Different explanations for this behavior have been provided. Without going into
the details of the controversy, three main points of view can be identified: (i)
the “classical” school based on the seminal works by Boltzmann and the important
contribution of Khinchin, where the main role is played by the presence of many
degrees of freedom in the considered systems
<cit.>; (ii) those, like the Prigogine school, who recognize in the chaotic
nature of the microscopic evolution the dominant ingredient <cit.>; (iii) the maximum entropy point of view, which does
not consider statistical mechanics as a physical theory but as an inference
methodology based on incomplete information <cit.>.
The main aim of the present contribution is to clarify, at a pedagogical level,
how ergodicity manifests itself for some relevant degrees of freedom, in
non-ergodic systems. We say that ergodicity occurs “at a practical level”.
To this end, a classical chain of N coupled harmonic oscillators turns out to
be an excellent case study: being an integrable system, it cannot be suspected
of being chaotic; still, “practical” ergodicity is recovered for relevant
observables, in the limit of N≫1. We believe that this kind of analysis
supports the traditional point of view of Boltzmann, which identifies the large
number of degrees of freedom as the reason for the occurrence of ergodic
behavior for physically relevant observables. Of course, these conclusions are
not new. In the works of Khinchin (and then Mazur and van der
Linden) <cit.> it is rigorously shown that the ergodic
hypothesis holds for observables that are computed as an average over a finite
fraction of the degrees of freedom, in the limit of N ≫ 1. Specifically, if
we limit our interest to this particular (but non-trivial) class of observables,
the ergodic hypothesis holds for almost all initial conditions (but for a set
whose probability goes to zero for N →∞), within arbitrary accuracy.
In addition, several numerical results for weakly non-linear systems
<cit.>, as well
as integrable systems <cit.>,
present strong indications of the poor role of chaotic behaviour, implying the
dominant relevance of the many degrees of freedom. Still, we think it may be
useful, at least from a pedagogical point of view, to analyze an explicit
example where analytical calculations can be made (to some extent), without
losing physical intuition about the model.
The rest of this paper is organized as follows. In Section <ref> we
briefly recall basic facts about the chosen model, to fix the notation and
introduce some formulae that will be useful in the following.
Section <ref> contains the main result of the paper. We present an explicit calculation of the empirical distribution of the single-particle momentum, given a system starting from out-of-equilibrium initial conditions. We show that in this case the Maxwell-Boltzmann distribution is an excellent approximation in the N→∞ limit. Section <ref> is devoted to an analysis
of the typical times at which the described ergodic behavior is expected to be
observed; a comparison with a noisy version of the model (which is ergodic by
definition) is also provided. In Section <ref> we draw our final
considerations.
§ MODEL
We are interested in the dynamics of a one-dimensional chain of N classical
harmonic oscillators of mass m.
The state of the system is described by the canonical coordinates {q_j(t),
p_j(t)} with j=1,..,N; here p_j(t) identifies the momentum of the j-th
oscillator at time t, while q_j(t) represents its position. The j-th and
the (j+1)-th particles of the chain interact through a linear force of
intensity κ|q_j+1-q_j|, where κ is the elastic constant. We will
assume that the first and the last oscillator of the chain are coupled to
virtual particles at rest, with infinite inertia (the walls), i.e. q_0≡ q_N+1≡ 0.
The Hamiltonian of the model reads therefore
ℋ(𝐪,𝐩)=∑_j=0^N p_j^2/2 m +
∑_j=0^Nm ω_0^2 /2(q_j+1 - q_j)^2,
where ω_0=√(κ/m).
Such a system is integrable and, therefore, trivially non-ergodic. This can
be easily seen by considering the normal modes of the chain, i.e. the set of
canonical coordinates
Q_k=√(2/(N+1))∑_j=1^N q_j sin(j k π/(N+1))
P_k=√(2/(N+1))∑_j=1^N p_j sin(j k π/(N+1)) ,
with k=1, ..., N. Indeed, by rewriting the Hamiltonian in terms of these new
canonical coordinates one gets
ℋ(𝐐,𝐏)=1/2∑_k=1^N (P_k^2/m + ω_k^2 Q_k^2) ,
where the frequencies of the normal modes are given by
ω_k=2 ω_0 sin(π k/(2N+2)) .
In other words, the system can be mapped into a collection of independent
harmonic oscillators with characteristic frequencies {ω_k}. This system
is clearly non-ergodic, as it admits N integrals of motion, namely the
energies
E_k=1/2(P_k^2/m + ω_k^2 Q_k^2)
associated to the normal modes.
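The transformation and the conserved mode energies written above are easy to check numerically. The following sketch maps a chain configuration to normal modes and evaluates E_k exactly as written above; the parameter values (and the unit mass, for which the expression above is conserved) are illustrative assumptions.

```python
import numpy as np

# Hedged numerical sketch of the normal-mode transformation and mode energies
# defined above; parameters are arbitrary and only for illustration.
def normal_modes(q, p, m=1.0, omega0=1.0):
    N = len(q)
    j = np.arange(1, N + 1)
    k = np.arange(1, N + 1)
    S = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(k, j) * np.pi / (N + 1))
    Q, P = S @ q, S @ p
    omega = 2.0 * omega0 * np.sin(np.pi * k / (2 * N + 2))
    E = 0.5 * (P**2 / m + omega**2 * Q**2)   # mode energies as written above
    return Q, P, omega, E

rng = np.random.default_rng(0)
q0, p0 = rng.normal(size=64), rng.normal(size=64)
Q, P, omega, E = normal_modes(q0, p0)
print(E[:5])   # these values remain constant along the chain dynamics (here m = 1)
```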
In spite of its apparent simplicity, the above system allows the investigation
of some nontrivial aspects of the ergodic hypothesis, and helps clarifying the
physical meaning of this assumption.
§ ERGODIC BEHAVIOR OF THE MOMENTA
In this section we analyze the statistics of the single-particle momenta of the
chain. We aim to show that they approximately follow a Maxwell-Boltzmann
distribution
𝒫_MB(p)=√(β/2π m)e^-β p^2/2m
in the limit of large N, where β is the inverse temperature of the
system. With the chosen initial conditions, β=N/E_tot. Firstly, extending
some classical results by Kac <cit.>, we focus on the empirical
distribution of the momentum of one particle, computed from a unique long
trajectory, namely
𝒫_e^(j)(p)=1/T∫_0^T dt δ(p -p_j(t)) .
Then we consider the marginal probability distribution
𝒫_e(p,t) computed from the momenta {p_j} of all the
particles at a specific time t, i.e.
𝒫_e(p,t)=1/N∑_j=1^N δ(p -p_j(t)) .
In both cases we assume that the system is prepared in an atypical initial
condition. More precisely, we consider the case in which Q_j(0)=0, for all
j, and the total energy E_tot, at time t=0, is equally distributed
among the momenta of the first N^⋆ normal modes, with 1 ≪ N^⋆≪ N:
P_j(0)=
√(2m E_tot/N^⋆) for 1 ≤ j ≤ N^⋆
0 for N^⋆< j ≤ N .
In this case, the dynamics of the first N^⋆ normal modes is given by
Q_k(t) =√(2 E_tot/(ω_k^2 N^⋆))sin(ω_k t)
P_k(t) =√(2 m E_tot/N^⋆)cos(ω_k t) .
§.§ Empirical distribution of single-particle momentum
Our aim is to compute the empirical distribution of the momentum of a given
particle p_j, i.e., the distribution of its values measured in time. This
analytical calculation was carried out rigorously by Mazur and Montroll in
Ref. <cit.>. Here, we provide an alternative argument that has the advantage of being more concise and intuitive, in contrast to the mathematical rigour of <cit.>. Our approach exploits the computation of
the moments of the distribution; by showing that they are the same, in the limit
of infinite measurement time, as those of a Gaussian, it is possible to conclude
that the considered momentum follows the equilibrium Maxwell-Boltzmann
distribution. The assumption N≫1 will enter explicitly the calculation.
The momentum of the j-th particle can be written as a linear combination of
the momenta of the normal modes by inverting Eq. (<ref>):
p_j(t) =√(2/(N+1))∑_k=1^N sin(j k π/(N+1)) P_k(t)
       =2√(m E_tot/((N+1)N^⋆))∑_k=1^N^⋆sin(kjπ/(N+1))cos(ω_k t) ,
where the ω_k's are defined by Eq. (<ref>), and the
dynamics (<ref>) has been taken into account. The n-th empirical moment
of the distribution is defined as the time average p_j^n of the n-th
power of p_j over a measurement time T:
p_j^n =1/T∫_0^T dt p_j^n(t)
      =1/T∫_0^T dt (C_N^⋆)^n ∏_l=1^n∑_k_l=1^N^⋆sin(k_l jπ/(N+1))cos(ω_k_l t)
      =(C_N^⋆)^n ∑_k_1=1^N^⋆…∑_k_n=1^N^⋆sin(k_1 jπ/(N+1))…sin(k_n jπ/(N+1)) 1/T∫_0^T dt cos(ω_k_1 t)…cos(ω_k_n t) ,
with
C_N^⋆=2√(m E_tot/((N+1)N^⋆)) .
We want to study the integral appearing in the last term of the above equation.
To this end it is useful to recall that
1/2 π∫_0^2πd θcos^n(θ)=
(n-1)!!/n!! for n even
0 for n odd .
As a consequence, one has
1/T∫_0^Td t cos^n(ω t)≃(n-1)!!/n!! for n even
0 for n odd .
Indeed, we are just averaging over ≃ω T/2 π periods of the
integrated function, obtaining the same result we get for a single period,
with a correction of the order O(ω T)^-1. This correction comes from the fact that T
is not, in general, an exact multiple of 2 π/ω.
If ω_1, ω_2, ..., ω_q are incommensurable (i.e., their
ratios cannot be expressed as rational numbers), provided that T is much
larger than (ω_j-ω_k)^-1 for each choice of 1 ≤ k < j ≤ q, a
well known result <cit.> assures that
1/T∫_0^T dt cos^n_1(ω_1 t)·...·cos^n_q(ω_q t) ≃ [1/T∫_0^T dt cos^n_1(ω_1 t)]·...·[1/T∫_0^T dt cos^n_q(ω_q t)]
≃ (n_1-1)!!/n_1!!· ...·(n_q-1)!!/n_q!! if all n's are even ,
where the last step is a consequence of Eq. (<ref>). Instead, if at
least one of the n's is odd, the above quantity vanishes, again with
corrections due to the finite time T. Since the smallest sfrequency is
ω_1, one has that the error is at most of the order Oq(ω_1
T)^-1≃ O(qN /ω_0 T).
Let us consider again the integral in the last term of Eq. (<ref>). The
ω_k's are, in general, incommensurable. Therefore, the integral vanishes
when n is odd, since in that case at least one of the {n_l}, l=1,...,q,
will be odd. When n is even, the considered quantity is different from zero
as soon as the k's are pairwise equal, so that n_1=...=n_q=2. In the following we
will neglect the contribution of terms containing groups of four or more equal
k's: if n≪ N^⋆, the number of these terms is indeed ∼
O(N^⋆) times less numerous than the pairings, and it can be neglected if
N^⋆≫1 (which is one of our assumptions on the initial condition).
Calling Ω_n the set of possible pairings for the vector
𝐤=(k_1,...,k_n), we then have
p_j^n≃(C_N^⋆/√(2))^n ∑_𝐤∈Ω_n∏_l=1^n sin(k_l jπ/(N+1)) ,
with an error of O(1/N^⋆) due to neglecting groups of 4, 6 and so on,
and an error O(nN/(ω_0 T)) due to the finite averaging time T, as
discussed before. The factor 2^-n/2 comes from the explicit evaluation of
Eq. (<ref>).
At fixed j, we now need to estimate the sums appearing in the above equation,
recalling that the k's are pairwise equal. If j> N/N^⋆, the
arguments of the periodic functions can be thought of as independently extracted
from a uniform distribution 𝒫(k)=1/N^⋆. One has:
sin^2(kj π/(N+1))≃∑_k=1^N^⋆1/N^⋆sin^2(kj π/(N+1))≃1/(2π)∫_-π^π dθ sin^2(θ)=1/2 ,
and
∏_l=1^n sin(k_l jπ/(N+1))≃ 2^-n/2 ,
if 𝐤∈Ω_n.
As a consequence,
p_j^n ≃(C_N^⋆/2)^n (N^⋆)^n/2 𝒩(Ω_n)≃(m E_tot/(N+1))^n/2 𝒩(Ω_n) ,
where 𝒩(Ω_n) is the number of ways in which we can choose the
pairings. These are the moments of a Gaussian distribution with zero average and
m E_tot/N+1 variance.
Summarising, it is possible to show that, if n ≪ N^⋆≪ N, the first
n moments of the distribution are those of a Maxwell-Boltzmann distribution.
In the limit of N≫1 with N^⋆/N fixed, the Gaussian distribution is
thus recovered up to an arbitrary number of moments. Let us note that the
assumption Q_j(0)=0, while allowing to make the calculations clearer, is not
really relevant. Indeed, if Q_j(0)≠ 0 we can repeat the above computation
while replacing ω_k t by ω_k t + ϕ_k, where the phases ϕ_k
take into account the initial conditions.
Fig. <ref> shows the standardized histogram of the relative
frequencies of single-particle velocities of the considered system, in the N
≫ 1 limit, with the initial conditions discussed before. As expected, the
shape of the distribution tends to a Gaussian in the large-time limit.
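This behavior can be reproduced with a few lines of code by sampling p_j(t) from the analytic normal-mode solution at many times and comparing its statistics with the Maxwell-Boltzmann prediction; the parameter choices below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the single-particle momentum statistics: p_j(t) is sampled
# from the analytic mode evolution written above; all parameters are illustrative.
m, omega0, E_tot = 1.0, 1.0, 1.0
N, N_star, j = 1000, 100, 137                 # chain size, excited modes, tagged particle
k = np.arange(1, N_star + 1)
omega = 2 * omega0 * np.sin(np.pi * k / (2 * N + 2))
amp = 2 * np.sqrt(m * E_tot / ((N + 1) * N_star)) * np.sin(k * j * np.pi / (N + 1))

t = np.random.uniform(0.0, 1e6, size=50000)   # long-time sampling
p_j = amp @ np.cos(np.outer(omega, t))        # momentum of particle j at each sampled time

beta = N / E_tot                              # inverse temperature used in the text
print("empirical variance:", p_j.var(), "  Maxwell-Boltzmann prediction:", m / beta)
```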
§.§ Distribution of momenta at a given time
A similar strategy can be used to show that, at any given time t large enough,
the histogram of the momenta is well approximated by a Gaussian distribution.
Again, the large number of degrees of freedom plays an important role.
We want to compute the empirical moments
p^n(t)=1/N∑_j=1^N p_j^n(t) ,
defined according to the distribution 𝒫_e^(j)p introduced
by Eq. (<ref>).
Using again Eq. (<ref>) we get
p^n(t)= 1/N∑_j=1^N(C_N^⋆)^n [∑_k=1^N^⋆sin(kjπ/(N+1))cos(ω_k t)]^n
      = 1/N(C_N^⋆)^n∑_k_1=1^N^⋆…∑_k_n=1^N^⋆[∏_l=1^n cos(ω_k_l t)]∑_j=1^N sin(k_1 jπ/(N+1))…sin(k_n jπ/(N+1)) .
Reasoning as before, we see that the sum over j vanishes in the large N
limit unless the k's are pairwise equal. Again, we neglect the terms including
groups of 4 or more equal k's, assuming that n≪ N^⋆, so that their
relative contribution is O(1/N^⋆). That sum selects paired values of k
for the product inside the square brackets, and we end with
p^n(t)≃1/N(C_N^⋆)^n∑_𝐤∈Ω_n∏_l=1^n cos(ω_k_l t) .
If t is “large enough” (we will come back to this point in the following
section), different values of ω_k_l lead to completely uncorrelated
values of cos(ω_k_l t). Hence, as before, we can consider the
arguments of the cosines as extracted from a uniform distribution, obtaining
p^n(t)≃(C_N^⋆/2)^n (N^⋆)^n/2 𝒩(Ω_n)≃(m E_tot/(N+1))^n/2 𝒩(Ω_n) .
These are again the moments of the equilibrium Maxwell-Boltzmann distribution.
We had to assume n ≪ N^⋆, meaning that a Gaussian distribution is recovered
only in the limit of large number of degrees of freedom.
The
empirical distribution can be compared with the Maxwell-Boltzmann by looking at
the Kullback-Leibler divergence K(𝒫_e(p,t), 𝒫_MB(p))
which provides a sort of distance between the empirical 𝒫_e(p,t) and
the Maxwell-Boltzmann:
K[𝒫_e(p,t), 𝒫_MB(p)]= - ∫𝒫_e(p,t) ln[𝒫_MB(p)/𝒫_e(p,t)] dp .
Figure <ref> shows how the Kullback-Leibler divergences
approach their equilibrium limit, for different values of N. As expected, the transition happens on a time scale that depends linearly on N.
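In practice, this divergence can be estimated from a finite sample of momenta with a simple histogram; the sketch below shows one way to do it, with the bin count and the handling of empty bins as illustrative assumptions.

```python
import numpy as np

# Hedged sketch: estimate K[P_e, P_MB] from momentum samples via a histogram.
def kl_to_maxwell_boltzmann(p_samples, m=1.0, beta=1.0, bins=100):
    hist, edges = np.histogram(p_samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p_mb = np.sqrt(beta / (2 * np.pi * m)) * np.exp(-beta * centers**2 / (2 * m))
    dp = edges[1] - edges[0]
    nz = hist > 0                               # skip empty bins
    return float(np.sum(hist[nz] * np.log(hist[nz] / p_mb[nz]) * dp))
```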
A comment is in order: even if this behaviour may look similar to the H-theorem for dilute gases, the resemblance is only superficial. Indeed, while in the case of dilute gases the approach to the Maxwell-Boltzmann distribution is due to collisions among particles that actually exchange energy and momentum, in the considered case the “thermalization” is due to a dephasing mechanism.
§ ANALYSIS OF THE TIME SCALES
In the previous section, when considering the distribution of the momenta at a
given time, we had to assume that t was “large enough” in order for our
approximations to hold. In particular we required cos(ω_k_1t) and
cos(ω_k_2t) to be uncorrelated as soon as k_1 ≠ k_2. Such a dephasing
hypothesis amounts to asking that
|ω_k_1t-ω_k_2t|> 2π c ,
where c is the number of phases by which the two oscillators have to differ before
they can be considered uncorrelated. The constant c may be much larger than 1,
but it is not expected to depend strongly on the size N of the system. In other words,
we require
t> c/|ω_k_1-ω_k_2|
for each choice of k_1 and k_2. To estimate this typical relaxation time, we
need to pick the minimum value of |ω_k_1-ω_k_2| among the
possible pairs (k_1,k_2). This term is minimized when k_1=k̃ and
k_2=k̃-1 (or vice-versa), with k̃ chosen such that
ω_k̃-ω_k̃-1
is minimum. In the large-N limit this quantity is approximated by
ω_k̃-ω_k̃-1=ω_0[sin(k̃π/(2N+2))-sin((k̃-1)π/(2N+2))]≃ω_0 cos(k̃π/(2N+2)) π/(2N+2) ,
which is minimum when k̃ is maximum, i.e. for k̃=N^⋆.
Dephasing is thus expected to occur at
t> 4cN/[ω_0 cos(N^⋆π/(2N))] ,
i.e. t>4cN/ω_0 in the N^⋆/N ≪ 1 limit.
It is instructive to compare this characteristic time with the typical
relaxation time of the “damped” version of the considered system. For doing so, we assume that our chain of oscillators is now in contact with a viscous medium which acts at the same time as a thermal bath and as a source of viscous friction. By
considering the (stochastic) effect of the medium, one gets the Klein-Kramers
stochastic process <cit.>
∂ q_j/∂ t=p_j/m
∂ p_j/∂ t=ω_0^2(q_j+1 - 2 q_j + q_j-1)
-γ p_j + √(2 γ T)ξ_j
where γ is the damping coefficient and T is the temperature of the
thermal bath (we are taking the Boltzmann constant k_B equal to 1). Here the
{ξ_j} are time-dependent, delta-correlated Gaussian noises such that
⟨ξ_j(t)ξ_k(t')⟩=δ_jkδ(t-t').
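For completeness, these stochastic equations can be integrated with a simple Euler-Maruyama scheme; the step size, parameters, and the first-order scheme below are illustrative choices, not those used to produce the figures.

```python
import numpy as np

# Hedged Euler-Maruyama sketch of the damped chain written above,
# with fixed walls q_0 = q_{N+1} = 0; all parameters are illustrative.
def simulate_damped_chain(N=128, m=1.0, omega0=1.0, gamma=0.5, T=1.0,
                          dt=1e-3, steps=200000, seed=0):
    rng = np.random.default_rng(seed)
    q, p = np.zeros(N), np.zeros(N)
    for _ in range(steps):
        ql = np.concatenate(([0.0], q[:-1]))    # left neighbours (wall at j = 0)
        qr = np.concatenate((q[1:], [0.0]))     # right neighbours (wall at j = N+1)
        force = omega0**2 * (qr - 2 * q + ql)   # elastic force as written above
        noise = np.sqrt(2 * gamma * T * dt) * rng.standard_normal(N)
        q = q + dt * p / m
        p = p + dt * (force - gamma * p) + noise
    return q, p
```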
Such a system is surely ergodic and the stationary probability distribution is
the familiar equilibrium one
𝒫_s(𝐪,𝐩) ∝
e^-H(𝐪,𝐩)/T.
Also in this case we can consider the evolution of the normal modes. By taking
into account Eqs. (<ref>) and (<ref>) one gets
Q̇_k =P_k/m
Ṗ_k =- ω_k^2 Q_k - (γ/m) P_k + √(2 γ T)ζ_k
where the {ζ_k} are again delta-correlated Gaussian noises. It is
important to notice that also in this case the motion of the modes is
independent (i.e. the friction does not couple normal modes with different
k); nonetheless, the system is ergodic, because the presence of the noise
allows it to explore, in principle, any point of the phase-space.
The Fokker-Planck equation for the evolution of the probability density function
𝒫(Q_k,P_k,t) of the k-th normal mode can be derived using
standard methods <cit.>:
∂_t𝒫=-∂_Q_k(P_k𝒫)+∂_P_k[(ω_k^2 Q_k+γ/m P_k)𝒫]+γ T∂_P_k^2 𝒫 .
The above equation allows to compute also the time dependence of the correlation
functions of the system in the stationary state. In particular one gets
d/dt⟨Q_k(t) Q_k(0)⟩=1/m⟨P_k(t)Q_k(0)⟩
and
d/dt⟨P_k(t) Q_k(0)⟩=-ω_k^2 m ⟨Q_k(t) Q_k(0)⟩-γ/m⟨P_k(t) Q_k(0)⟩ ,
which, once combined together, lead to
d^2/dt^2⟨Q_k(t) Q_k(0)⟩+γ/m d/dt⟨Q_k(t) Q_k(0)⟩+ ω_k^2⟨Q_k(t) Q_k(0)⟩=0 .
For ω_k <γ/m the solution of this equation admits two characteristic
frequencies ω̃_±, namely
ω̃_±=(γ/2m)[1 ±√(1-m^2ω_k^2/γ^2)].
In the limit ω_k ≪γ/m one has therefore
ω̃_- ≃m/4 γω_k^2 ≃m ω_0^2
π^2 k^2/γ N^2 .
Therefore, as a matter of fact, even in the damped case the system needs a time
that scales as N^2 in order to get complete relaxation for the modes. As we
discussed before, the dephasing mechanism that guarantees for “practical”
ergodicity in the deterministic version is instead expected to occur on time scales of order O(N).
§ CONCLUSIONS
The main aim of this paper was to expose, at a pedagogical level, some aspects
of the foundation of statistical mechanics, namely the role of
ergodicity for the validity of the statistical approach to the study of complex
systems.
We analyzed a chain of classical harmonic oscillators (i.e. a paradigmatic
example of integrable system, which cannot be suspected to show chaotic
behaviour). By extending some well-known results by Kac <cit.>, we showed that the
Maxwell-Boltzmann distribution approximates with arbitrary precision (in the
limit of large number of degrees of freedom) the empirical distribution of the
momenta of the system, after a dephasing time which scales with the size of the
chain. This is true also for quite pathological initial conditions, where only a
small fraction of the normal modes is excited at time t=0. The scaling of the
typical dephasing time with the number of oscillators N may appear as a limit
of our argument, since this time will diverge in the thermodynamic limit; on the
other hand one should consider, as explicitly shown before, that the damped
version of this model (which is ergodic by definition) needs times of the order
O(N^2) to reach thermalization for each normal mode.
This comparison clearly shows that the effective thermalization observed in
large systems has little to do with the mathematical concept of ergodicity, and
it is instead related to the large number of components concurring to define the
global observables that are usually taken into account (in our case, the large
number of normal modes that define the momentum of a single particle). When
these components cease to be in phase, the predictions of statistical mechanics
start to be effective; this can be observed even in integrable systems, without
need for the mathematical notion of ergodicity to hold.
In other words, we believe that the present work gives further evidence of the
idea (which had been substantiated mathematically by Khinchin, Mazur and van
der Linden) that the most relevant ingredient of statistical mechanics is the
large number of degrees of freedom, and the global nature of the observables
that are typically taken into account.
§ ACKNOWLEDGEMENTS
RM is supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006) "A Multiscale integrated approach to the study of the nervous system in health and disease" (DN. 1553 11.10.2022).
|
http://arxiv.org/abs/2307.10073v1 | 20230714125456 | Scalable Deep Learning for RNA Secondary Structure Prediction | [
"Jörg K. H. Franke",
"Frederic Runge",
"Frank Hutter"
] | cs.LG | [
"cs.LG",
"q-bio.BM"
] |
[
Scalable Deep Learning for RNA Secondary Structure Prediction
equal*
Jörg K.H. Frankeuni
Frederic Rungeuni
Frank Hutteruni
uniDepartment of Computer Science, University of Freiburg, Freiburg, Germany
Jörg [email protected]
Machine Learning, ICML
0.3in
]
The field of RNA secondary structure prediction has made significant progress with the adoption of deep learning techniques. In this work, we present the RNAformer, a lean deep learning model using axial attention and recycling in the latent space. We gain performance improvements by designing the architecture for modeling the adjacency matrix directly in the latent space and by scaling the size of the model. Our approach achieves state-of-the-art performance on the popular TS0 benchmark dataset and even outperforms methods that use external information. Further, we show experimentally that the RNAformer can learn a biophysical model of the RNA folding process.
§ INTRODUCTION
RNA molecules play a central role in many cellular processes, including regulation of transcription, translation, epigenetics, or more general differentiation and development <cit.>.
These functions strongly depend on the structure of the RNA, which is defined by the secondary structure that describes the intra-molecular basepair interactions, determined by the sequence of nucleotides. Also, the secondary structure can provide important insights into RNA behavior and guide the design of RNA-based therapeutics and nanomachines <cit.>.
Therefore, the accurate prediction of the secondary structure is very desirable and a significant problem in computational biology <cit.>.
Traditionally, the problem of secondary structure prediction is solved with dynamic programming approaches that minimize the free energy (MFE) of a structure, like the most widely used algorithm, RNAfold <cit.>. The optimization is based on thermodynamic parameters derived from UV melting experiments <cit.>.
More recently, deep-learning-based approaches have conquered the field, showing superior performance on benchmark datasets, and can further incorporate additional information e.g. embeddings from large-scale RNA sequence models <cit.>.
We present in this work a deep learning architecture that outperforms other methods on a commonly used benchmark dataset, such as TS0 provided by <cit.>, without ensembling or making use of additional information.
Our performance improvements are mainly based on an axial attention Transformer-like architecture which has a potentially high inductive bias for the prediction of an adjacency matrix. In contrast to the conventionally used CNNs, axial attention has a receptive field covering the whole pair matrix at any time and does not need to build the receptive field by depth. Further, we gain improvements by recycling to simulate a larger depth, and by classical scaling in terms of more training data, model parameters, and longer training times.
However, some work in the field recently raised concerns about the performance improvements of deep learning methods, questioning if the learned predictions are a result of similarities between training and test data, and if the algorithms really learn a biophysical model of the folding process <cit.>.
Since current datasets are typically curated with regard to sequence similarity only, the performance of models mainly assesses intra-family performance <cit.>, while inter-family evaluations are rarely reported.
Our suggestion is to show the capability to learn a biophysical model using sequences with predicted structures from the widely used, well-defined but simplified biophysical model RNAfold.
To this end, we build a dataset based on RNA family information from the Rfam <cit.> database with structure predictions from RNAfold and demonstrate that our method is capable of learning the biophysical model of the folding process.
Our main contributions are:
* We propose a novel architecture for RNA secondary structure prediction based on axial attention and recycling.
* We achieve state-of-the-art results on the commonly used benchmark dataset TS0 (Section <ref>).
* We show that our method is capable of learning the underlying folding dynamics of an MFE model in an inter-family prediction setting (Section <ref>).
§ BACKGROUND & RELATED WORK
Secondary structure prediction algorithms can be roughly divided into two classes: (1) de novo prediction methods that seek to predict the structures directly from the nucleotide sequence and (2) homology modeling methods that require a set of homologous RNA sequences for their predictions <cit.>, called an RNA family.
Predictions can then be applied either within given families (intra-family predictions) or across different families (inter-family prediction).
De novo prediction methods are typically preferred since the search for homologous sequences is time-consuming and often, there is no family information available for novel RNAs.
Until recently, the field of de novo RNA secondary structure prediction was dominated by Dynamic Programming (DP) approaches that either build on algorithms for predicting the MFE secondary structure <cit.>, or algorithms to find the most likely structure (maximum expected accuracy).
One disadvantage of these algorithms is that they are typically limited to the prediction of nested RNA secondary structures, i.e. they cannot predict Pseudoknots <cit.> out-of-the-box, which are present in around 40% of RNAs <cit.>, overrepresented in functional important regions <cit.> and known to assist folding into 3D structures <cit.>.
Only recently, deep-learning-based approaches conquered the field, which benefit from making few assumptions on the underlying biophysical folding process, while not being restricted to only predict a subset of possible base pairs <cit.>, and achieved state-of-the-art performance <cit.>.
We now briefly summarize some existing methods and refer the reader to more detailed related work in Appendix <ref>.
RNAfold <cit.> uses a DP approach for the prediction of MFE secondary structures. The version we use here is based on the energy parameters provided by the Turner nearest-neighbor model <cit.>.
SPOT-RNA <cit.> uses an ensemble of models with residual networks (ResNets) <cit.>, bidirectional LSTM <cit.>, and dilated convolution <cit.> architectures. SPOT-RNA was trained on a large set of intra-family RNA data for de novo predictions on a newly proposed test set, TS0.
ProbTransformer <cit.> uses a probabilistic enhancement of the Transformer architecture for intra-family predictions. The model is trained on a large set of available secondary structure data and evaluated on TS0.
RNA-FM <cit.> uses sequence embeddings of an RNA foundation model that is trained on 23 million RNA sequences to perform intra-family predictions of RNA secondary structures in a downstream task. The foundation model consists of a large Transformer architecture, while the downstream model uses a ResNet32 <cit.>.
§ RNAFORMER
Our model architecture is inspired by AlphaFold <cit.>, which models a multiple sequence alignment and a pair matrix in the latent space and processes them with the use of axial attention <cit.>. In our approach, which we dub RNAformer[Source code and models: https://github.com/automl/RNAformer], we simplify this architecture and only use axial attention for modeling a latent representation of the pairing between all nucleotides of the input RNA sequence. This construction leads to a potentially higher inductive bias since each layer directly refines the latent representation of the adjacency matrix. To capture the dependency between the potential pairings we use two mechanisms: (1) axial attention and (2) a convolutional layer. Axial attention is a type of attention mechanism that captures dependencies between positions along a specific axis of the input data. In our case, we apply axial attention to the rows and the columns of the latent pairing matrix to create a dependency between all potential nucleotide pairings. To improve the modeling of local structures like stem-loops, we use a convolutional neural network with a kernel size of three instead of the position-wise feed-forward layer from the vanilla Transformer <cit.>.
RNAformer embeds the RNA sequence twice with linear layers and broadcasts the embeddings, one as a row-wise and one as a column-wise representation, before adding them to the initial latent representation. We then apply multiple Transformer-like blocks, each consisting of a row-wise axial attention, a column-wise axial attention, and a two-layer convolutional network. Lastly, a final linear layer outputs the pairing matrix of the secondary structure directly.
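To make the block structure concrete, the following is a minimal PyTorch sketch of a single RNAformer-style block; the module layout, head count, and exact placement of normalisation and residual connections are illustrative assumptions and not the released implementation.

```python
import torch
import torch.nn as nn

class AxialAttentionBlock(nn.Module):
    """Sketch of one RNAformer-style block (names and details are illustrative).

    The latent pair representation has shape (batch, L, L, dim); row-wise and
    column-wise attention capture dependencies along each axis, and a small
    convolutional network replaces the position-wise feed-forward layer.
    """

    def __init__(self, dim: int, n_heads: int = 4, dropout: float = 0.1):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, n_heads, dropout=dropout, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, n_heads, dropout=dropout, batch_first=True)
        self.norm_row = nn.LayerNorm(dim)
        self.norm_col = nn.LayerNorm(dim)
        self.norm_conv = nn.LayerNorm(dim)
        # Two-layer convolution with kernel size 3 instead of a feed-forward layer.
        self.conv = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
        )

    def _axial(self, attn, norm, z, axis):
        b, L, _, d = z.shape
        zn = norm(z)  # pre-norm
        if axis == "row":
            q = zn.reshape(b * L, L, d)                   # attend along each row
        else:
            q = zn.transpose(1, 2).reshape(b * L, L, d)   # attend along each column
        out, _ = attn(q, q, q)
        out = out.reshape(b, L, L, d)
        if axis == "col":
            out = out.transpose(1, 2)
        return z + out                                    # residual connection

    def forward(self, z):
        z = self._axial(self.row_attn, self.norm_row, z, "row")
        z = self._axial(self.col_attn, self.norm_col, z, "col")
        h = self.norm_conv(z).permute(0, 3, 1, 2)         # to (batch, dim, L, L)
        z = z + self.conv(h).permute(0, 2, 3, 1)          # back to (batch, L, L, dim)
        return z
```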
Similar to AlphaFold, we apply recycling of the processed latent space to artificially increase the model depth and allow the model to reprocess and correct its own predictions. To this end, we pass the latent representation through the blocks multiple times without tracking gradients and calculate gradients only for the last recycling iteration.
We apply dropout, pre-norm, and residual connections to all layers except the embedding and generator layers. For the loss calculation, we mask 50% of the unpaired entries in the adjacency matrix before computing the mean cross-entropy loss. This helps to increase the learning signal in the heavily imbalanced adjacency matrix. Refer to Figure <ref> for an overview of our architecture; minimal sketches of the recycling and the masked loss are shown below.
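The following is a minimal sketch of the recycling loop and the masked loss, assuming a single logit per adjacency-matrix entry; the function names and the binary formulation are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def forward_with_recycling(blocks, z, n_recycles: int):
    """Latent-space recycling: only the final pass receives gradients."""
    with torch.no_grad():
        for _ in range(n_recycles - 1):
            for block in blocks:
                z = block(z)
    for block in blocks:          # last recycling iteration, with gradients
        z = block(z)
    return z

def masked_bce_loss(logits, target, mask_frac: float = 0.5):
    """Mean cross-entropy over the adjacency matrix with 50% of unpaired entries masked.

    `target` is assumed to be a 0/1 adjacency matrix of the same shape as `logits`.
    """
    unpaired = (target == 0)
    keep = torch.rand_like(logits) > mask_frac            # keep ~half of the unpaired entries
    mask = (~unpaired) | (unpaired & keep)                # always keep paired entries
    loss = F.binary_cross_entropy_with_logits(logits, target.float(), reduction="none")
    return loss[mask].mean()
```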
§ EXPERIMENTS
We evaluate the performance of our model in two settings. First, we evaluate the intra-family prediction capability based on the bpRNA dataset. Secondly, we assess the performance on inter-family predictions, as well as investigate the learning of a biophysical model by training the RNAformer on a dataset derived from Rfam database and the generated target secondary structures with RNAfold.
§.§ bpRNA Experiment
Data curation In order to generate a training dataset for intra-family predictions, we first collect a large data corpus from the following public sources: the bpRNA-1m <cit.>, the ArchiveII <cit.> and RNAStrAlign <cit.> dataset provided by <cit.>, all data from RNA-Strand <cit.>, as well as all RNA containing data from PDB.
Secondary structures for PDB samples were derived from the 3D structure information using DSSR <cit.>.
After removing duplicates, we use the exact same protocol as <cit.> to remove sequence similarities, while replacing the training set TR0 with our own data. In particular, we apply an 80% similarity cutoff between the sequences using CD-HIT <cit.> and a homology search using BLASTN <cit.> with a large e-value of 10 to further reject sequences from our training set that show homologies with the respective test sets.
Most DL methods use the TS0 dataset for evaluations.
However, similar to <cit.>, we did not cluster the training, validation, and test data internally to learn from the data diversity.
Model & Training Setup We evaluate the RNAformer in a setup with 6 blocks and with different latent dimensions of 32, 64, 128, and 256, resulting in total parameter counts of roughly 0.5M, 2M, 8M, and 32M parameters, respectively. We apply recycling (↺) with 6 iterations to the largest model and sample the number of recycling iterations during training uniformly from 2 to 6. We train all models on 8 GPUs with a batch size of 500 tokens per GPU and a maximum sequence length of 500, for 50k steps. This limit is mainly due to the large memory footprint of the two-dimensional latent space; however, we note that the same cutoff was also applied in previous work <cit.>. For optimization, we use AdamW <cit.> with learning rate warm-up, a cosine learning rate decay, weight decay, and gradient clipping. Refer to Appendix <ref> for all hyperparameter values.
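A sketch of this optimization setup is given below; the concrete hyperparameter values are placeholders, as the adopted values are those listed in Appendix <ref>.

```python
import math
import torch

def make_optimizer_and_scheduler(model, lr=1e-3, weight_decay=0.1,
                                 warmup_steps=1_000, total_steps=50_000):
    """AdamW with linear warm-up and cosine decay (values are illustrative)."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)

    def lr_lambda(step):
        if step < warmup_steps:                                  # linear warm-up
            return step / max(1, warmup_steps)
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))        # cosine decay

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched

# inside the training loop, clip gradients before the optimizer step:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```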
Results We compared RNAformer to the models in the related work and present the results in Table <ref>. For a more comprehensive comparison refer to Appendix <ref>. Our largest model with 32M parameters with the use of recycling achieves a new state-of-the-art result on the TS0 benchmark set. We solve 17.2% of the sequences completely without any mistakes. The recycling (↺) leads to a performance gain of ∼1% and a steady increase of the parameter count from 0.5M to 32M also leads to a steady performance increase. This shows that we gain performance from over-parameterization and could indicate that the inductive bias induced by the architecture is beneficial for this task.
§.§ Rfam Experiment
Data curation To evaluate the performance on inter-family predictions, as well as investigate the learning of a biophysical model, we derive a training dataset from families of the Rfam database version 14.9 <cit.>.
We first select all families with a covariance model with maximum CLEN of ≤ 500 and sample a large set of sequences for each family from the covariance models using Infernal <cit.>.
We then build a large set with two thirds of the sequences drawn from families with CLEN ≤ 200 and one third from families with CLEN > 200 to increase the number of families further.
We randomly select 25 and 30 families from this set for validation and testing, respectively, and leave all samples from the remaining families for training.
All sequences are folded using RNAfold <cit.>.
We apply a length cutoff at 200 nucleotides since we expect RNAfold predictions to be more reliable for sequences below this threshold, to save computational costs, and since all datasets of experimentally derived RNA structures from the literature show a maximum sequence length below 200 nucleotides.
<cit.> created a test set, TS-hard, in an inter-family manner similar to the data pipelines used by the Rfam database for RNA family assignments.
We follow this pipeline to remove similar sequences between our training data and the validation- and test sets provided by <cit.> using CD-HIT and BLASTN as described before.
We then build an MSA of all sequences in TS-hard with BLASTN at an e-value of 0.1 using NCBI's nt database as a reference and build covariance models from the MSAs using Infernal. However, while <cit.> used SPOT-RNA for predictions of the consensus structures of the MSA, which appears inappropriate since the method was built for de-novo predictions, we use LocARNA-P <cit.>, a commonly used tool to build MSAs based on sequence and structure-based alignments. The covariance models were then used to remove all sequences from the training data, using an e-value threshold of 0.1. We use this dataset to learn the underlying biophysical model of RNAfold, evaluated on the Rfam test data, and for evaluations on TS-hard.
Again we avoid clustering the datasets internally to keep structural diversity.
All datasets are described in more detail in Table <ref> in Appendix <ref>.
Model & Training Setup We used the same setup as in the first experiment with the difference of a maximum sequence length of 200 tokens, a batch size of 600 tokens per GPU, and a training time of 100k steps.
Results
As shown in Table <ref>, we can replicate the RNAfold algorithm increasingly better with growing model size. Our largest model achieves a mean F1 score of 94.8 on the test set and predicts 76.3% of the structures entirely correctly. This result suggests that the RNAformer can learn the underlying biophysical model of the folding process. We observe similar results regarding scaling for the TS-hard dataset, where F1 scores increase with model size, resulting in a similar performance as RNAfold, which further supports our observation on the Rfam dataset. Interestingly, our larger models even slightly outperform RNAfold on TS-hard. However, these results require further investigation and a closer look at what the RNAformer layers model in detail, before we speculate about whether these results originate from the RNAformer architecture or simply from slight deviations from the learned biophysical model.
§ CONCLUSION & FUTURE WORK
We introduced a new architecture for RNA secondary structure prediction and showed state-of-the-art performance on the TS0 benchmark set. The gain in performance is based on axial attention, a recycling of the latent space, and a larger dataset based on the same similarity criteria as used in related work. We also trained the RNAformer on a dataset derived from the Rfam database with RNAfold prediction to demonstrate that we can learn a biophysical model like RNAfold. The downside of our approach is a large memory footprint.
Our approach could be further improved by the use of additional information like MSAs <cit.> or language embeddings with additional text information. We could also improve the architecture and enhance it with a probabilistic layer to capture ambiguities <cit.>, or scale it even further. Another way to improve or adapt our model is fine-tuning, which is heavily used for large language models and could be applied to fine-tune on high-quality data.
However, besides methodological improvements, more effort in the generation and collection of high-quality data is required to achieve accurate predictions of RNA structures with deep learning.
§ ACKNOWLEDGEMENTS
This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 417962828.
Appendix
§ TRAINING DETAILS
§ RELATED WORK
As described in Section <ref>, RNA secondary structure prediction was previously dominated by dynamic programming approaches that either optimize for MFE or maximum expected accuracy (MEA) predictions.
The runtime of these approaches is 𝒪(n^3).
However, linear time approximations have been proposed <cit.>.
Besides runtime, the major disadvantage of these algorithms is that they are typically limited to the prediction of nested RNA secondary structures, which strongly limits their accuracy <cit.>.
Some work, however, used heuristic approaches to overcome this issue, again at the price of runtime <cit.>.
In this regard, deep learning approaches have strong advantages, especially when modeling the RNA secondary structure as an adjacency matrix, where all types of pairs and pseudoknots are represented identically.
We now discuss existing deep learning approaches in more detail.
SPOT-RNA <cit.> was the first algorithm using deep neural networks for end-to-end prediction of RNA secondary structures, using an ensemble of models with residual network (ResNet) <cit.>, bidirectional LSTM <cit.> (BiLSTM) <cit.>, and dilated convolution <cit.> architectures. SPOT-RNA was trained on a large set of intra-family RNA data for de novo predictions on TS0, and further fine-tuned on a small set of experimentally derived RNA structures for predictions including tertiary interactions. However, the performance for these types of base pairs was rather poor, and the currently available version of the algorithm excludes tertiary interactions from its outputs.
E2efold <cit.> uses a Transformer encoder architecture for de novo prediction of RNA secondary structures. The algorithm was trained on a dataset of homologous RNAs and showed strongly reduced performance across evaluation in multiple other publications <cit.>, which indicates strong overfitting. We use the same data as the respective work for evaluations and thus exclude E2efold from our evaluations.
MXFold2 <cit.> seeks to learn the scoring function for a subsequent DP algorithm using a CNN/BiLSTM architecture. The network is trained to predict scores close to a set of thermodynamic parameters. In contrast to the previously described methods, MXFold2 is restricted to predicting a limited set of base pairs due to the DP algorithm.
UFold <cit.> employs a UNet <cit.> architecture for de novo secondary structure prediction, additionally reporting results for predictions on data that contains tertiary interactions after fine-tuning the model.
In UFold, an RNA sequence is represented as an image of all possible base-pairing maps plus an additional map of pairing probabilities, each encoded as a square matrix.
SPOT-RNA2 <cit.> is a homology modeling method that incorporates MSA features as well as sequence profiles (PSSM) and features derived from direct coupling analysis (DCA) for the prediction of RNA secondary structures. Similar to SPOT-RNA, predictions are based on an ensemble of models, but using dilated convolutions only. Since SPOT-RNA2's predictions are based on evolutionary features and homologous sequence information, the predictions can be considered intra-family regardless of the curation of the dataset, since homologies between the evolutionary information and the training or test sets were not explicitly excluded during evaluations. Nevertheless, we use the carefully designed test set, TS-hard, proposed by <cit.> for our evaluations on inter-family predictions as described in Section <ref>.
ProbTransformer <cit.> uses a probabilistic enhancement for either an encoder or decoder transformer architecture for intra-family predictions. The model is trained on a large set of available secondary structure data and evaluated on TS0. By learning a hierarchical joint distribution in the latent, the ProbTransformer is the first learning algorithm that is capable of sampling different structures of this latent distribution, which was shown by reconstructing structure ensembles of a distinct dataset with multiple structures for a given input sequence.
RNA-FM <cit.> uses sequence embeddings of an RNA foundation model that is trained on 23 million RNA sequences from 800000 species to perform intra-family predictions of RNA secondary structures in a downstream task. The foundation model consists of a 12-layer transformer architecture, while the downstream models use a ResNet32 architecture.
REDfold <cit.> uses a residual encoder-decoder architecture inspired by the UNet architecture of UFold. Interestingly, the model input is a 146× L× L tensor, representing square matrices of all possible base pairs (10 combinations for dinucleotide pairs) and tetranucleotide combinations (136 combinations) without considering their order.
The model is trained on highly homogeneous data, reporting strong performance on 4-fold cross-validation experiments, but also reporting strong results when considering sequence similarity. However, when we evaluated REDfold on TS0, we did not observe the same performance (see Table <ref>). Together with the results on unseen families provided by <cit.>, this might indicate potential overfitting.
We note that there are other methods we do not consider here because they either showed inferior performance to methods we compare against <cit.> or because their source code is not publicly available <cit.>.
§ DATA
§ EXPERIMENTS
Uncertainty Quantification of the Virial Black Hole Mass with Conformal Prediction
Suk Yee Yong, Cheng Soon Ong
Precise measurements of the black hole mass are essential to gain insight on the black hole and host galaxy co-evolution.
A direct measure of the black hole mass is often restricted to nearest galaxies and instead, an indirect method using the single-epoch virial black hole mass estimation is used for objects at high redshifts.
However, this method is subjected to biases and uncertainties as it is reliant on the scaling relation from a small sample of local active galactic nuclei.
In this study, we propose the application of conformalised quantile regression (CQR) to quantify the uncertainties of the black hole predictions in a machine learning setting.
We compare CQR with various prediction interval techniques and demonstrate that CQR provides a more useful prediction interval indicator.
In contrast to baseline approaches for prediction interval estimation, we show that the CQR method provides prediction intervals that adjust to the black hole mass and its related properties.
That is, it yields a tighter constraint on the prediction interval (hence more certainty) for a larger black hole mass and, accordingly, for a bright source with a broad spectral line width.
Using a combination of neural network model and CQR framework, the recovered virial black hole mass predictions and uncertainties are comparable to those measured from the Sloan Digital Sky Survey.
The code is publicly available at .
black hole physics – (galaxies:) quasars: general – (galaxies:) quasars: supermassive black holes – methods: data analysis – methods: statistical
§ INTRODUCTION
At the centre of every active galactic nuclei (AGN) is a black hole <cit.>.
The black hole mass, MBH, is a crucial quantity in understanding the co-evolution between the black hole and its host galaxy <cit.>.
However, direct and accurate measurements are limited to nearby galaxies, as high spatial resolution is required <cit.>.
Beyond the local universe, the single-epoch virial mass estimation is applied to estimate the virial black hole mass, Mvir, which is calibrated empirically using reverberation mapping <cit.> samples of local AGN <cit.>.
This method assumes that the gas in the broad line region (BLR) of the AGN is in Keplerian motion and the virial black hole mass is estimated by
Mvir=Δ V^2 R/G,
where G is the gravitational constant and Δ V is the velocity dispersion of a particular broad emission line often measured by the full width at half maximum (FWHM).
Due to the intensive monitoring at high cadence over a long duration, reverberation mapping with multi-epoch observations is often carried out for only a limited number of sources <cit.>.
Nonetheless, reverberation mapping studies have also found that there is a relationship between the BLR radius R and monochromatic continuum or line luminosities L <cit.>, which is used as the basis for single-epoch virial mass estimates <cit.>.
Based on this R-L relation, the BLR size is derived for a given luminosity and then estimate the Mvir, in which case <ref> can be rewritten as:
logMvir=a + blog L + clogFWHM,
where (a,b) are the coefficients calibrated from reverberation mapping.
The coefficient c is usually set to 2 based on the virial theorem <cit.>.
Depending on the redshift of the object, different emission line widths and luminosities are used <cit.>.
For low redshift sources, this is typically the Hβ and Mgii lines, and their respective continuum luminosity measured at rest wavelengths of 5100 Å and 3000 Å.
The majority of reverberation mapping studies have been conducted using Hβ on low redshift AGN <cit.>.
Often for higher redshift, the Mgii or Civ line is utilised.
However, this involves applying additional scaling from the Hβ line to formulate the virial mass based on other lines <cit.>.
There have been efforts to establish the R-L relation for high redshift AGN <cit.>, though it is still debatable whether the single-epoch Mvir of these lines are reliable or will need further correction <cit.> since they might be affected by non-virial component due to the stratified BLR of the different lines <cit.>.
There are several limitations and sources of uncertainties in using the single-epoch method that could lead to significant error up to 0.5 dex in the virial black hole mass <cit.>.
Some of the common issues are as follows.
First, the relationship between the line width of Hβ and Mgii might be non-linear <cit.>, which is not accounted for if a constant c=2 in <ref> is applied on Mgii line-based MBH.
Second, the intrinsic scatter in the R-L relation calibrated against local reverberation mapped AGN samples using Hβ is ∼ 0.2 dex <cit.> and can be larger than 0.36 dex when using Mgii <cit.>.
The Mvir based on Mgii line have to be properly calibrated such that they match those of Hβ line <cit.>.
Various prescriptions have been proposed <cit.> to calibrate the (a,b) coefficients in <ref>, which can vary depending on which specific line is used <cit.>.
Practically, this also assumes that a single best fit line from the empirical relationship, fixed by some constant coefficients, is applicable to every source.
Third, the derived continuum and spectral line properties rely on the choice of spectral fitting process.
Mainly, this requires a consistent procedure for fitting the continuum and modelling individual spectral line component as this will substantially affect the line measurements <cit.>.
The presence of strong absorption lines and using low quality signal-to-noise ratio spectra are likely to result in unreliable measurements <cit.>.
Recently, several studies have employed machine learning and deep learning methods to predict the properties of the black hole.
<cit.> explored the MBH correlation with their host galaxy properties using Lasso regression.
They then used the extracted subset of properties to derive an empirical formula for the black hole mass and showed that it is able to retrieve the masses with a scatter of 0.5 dex.
Though only trained using a small sample available from reverberation mapping, <cit.> demonstrated that they are able to generate quasar spectra along with the associated physical properties even for missing spectral region and without requiring calibration from the R-L scaling relation.
They applied a multi-output Gaussian process latent variable model and estimated the uncertainties in the predicted MBH due to errors from measurements and input spectra, and reported a scatter of 0.4 dex in the predictions.
Similarly, <cit.> used a multi-layer perceptron regressor on a few reverberation mapped AGN samples probed in the X-ray regime and recovered the MBH within ± (2–5)%.
<cit.> employed a hybrid deep neural network model consisting of convolutional and fully connected layers on quasar light curves as an alternative to the expensive spectral data.
They predicted the MBH from the light curves within 0.37 dex scatter.
Previous studies primarily considered recovering the black hole mass from the measured MBH using light curves or calibrated based on reverberation mapping of low redshift quasars.
However, a question still remains: since all measurements of the black hole mass have intrinsic scatter, how good are the uncertainties of the black hole mass predictions? In this work, we do not attempt to build a more accurate predictor for the black hole mass.
Instead, we focus on quantifying the uncertainties of the line-based virial mass, Mvir, and address some of the aforementioned limitations and sources of uncertainty.
In particular, we employ a conformal prediction for regression framework, specifically the conformalised quantile regression <cit.>, and conduct a comparative study with several other prediction interval approaches.
The conformalised quantile regression is of particular interest as it has been shown to be flexible to any heteroscedasticity in the data and generates adaptive prediction intervals.
We present a method to quantify the uncertainty in black hole mass predictions with adaptive prediction intervals.
We separate this into two parts:
* Perform representation learning (finding a good feature encoding) using a neural network model: This effectively avoids the need to fit and obtain individual line measurements.
* Generate predictions and prediction intervals for the line-based Mvir: We examine different prediction interval methods to quantify the uncertainties in the Mvir.
The outline of the paper is as follows.
<Ref> describes the dataset utilised.
Overviews of the neural network model and the prediction interval methods employed are given in <ref>.
The results followed by discussions in <ref> and <ref>, respectively.
Finally, <ref> summarises our findings.
§ DATASET
We briefly describe the dataset used in this work and pre-processing applied on the data.
We use the recent catalogue of quasar properties <cit.> derived from the Sloan Digital Sky Survey (SDSS) Data Release 16 Quasar <cit.> catalogue.
The data[<http://quasar.astro.illinois.edu/paper_data/DR16Q/>] and tutorial[<https://github.com/QiaoyaWu/sdss4_dr16q_tutorial>] containing the description of the data and demo are publicly available online.
The details on the derived spectral line measurements are described in Section 3 of <cit.> and also in their earlier work <cit.>, which we briefly outline here.
They corrected the spectra for Galactic reddening using the dust map from <cit.> and <cit.> and the extinction curve from <cit.>.
After shifting the spectra to the rest-frame using the redshift from the SDSS DR16Q catalogue, they fitted the continuum by a power law and a third-order polynomial, and also an iron template <cit.> to several continuum fitting windows that are not affected by broad line emission.
Quasars that have peculiar continuum shapes are fitted with the additive (positive-definite) polynomial component.
They subtracted the continuum and iron fit from the spectrum to form a line-only spectrum, which is then fitted with a set of Gaussians in logarithmic wavelength space.
To minimise the effect of absorption lines from intervening absorption systems, they performed an iterative approach to mask pixels below 3-sigma of the original model fit and refit.
From the best spectral fitting parameters, <cit.> measured the continuum and emission line properties, including the spectral line peak and FWHM.
Using a Monte Carlo approach, they estimated the uncertainties in the line measurements by randomly perturb the original spectrum at each pixel with a Gaussian.
They performed for 25 iterations and took the semi-amplitude within the 16th and 84th percentile range as the error of each spectral quantity.
To calibrate the coefficient (a,b) for the single-epoch Mvir, they adopted (0.91, 0.50) for Hβ <cit.> and (0.74, 0.62) for Mgii <cit.>.
The measurement uncertainties in the Mvir are also provided in the catalogue.
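As an illustration, the catalogue's single-epoch recipe can be written as a short function. We assume here the common convention in which the continuum luminosity enters in units of 10^44 erg s^-1 and the FWHM in km s^-1; this should be checked against the catalogue paper before any quantitative use.

```python
import numpy as np

# (a, b) coefficients as adopted in the catalogue; c is fixed to 2.
SE_COEFFS = {"Hbeta": (0.91, 0.50), "MgII": (0.74, 0.62)}

def log_mvir(line: str, log_lum: np.ndarray, fwhm_kms: np.ndarray) -> np.ndarray:
    """Single-epoch virial mass, log10(Mvir/Msun), following Eq. (2).

    Assumes the luminosity is normalised to 1e44 erg/s and the FWHM is in km/s
    (a convention assumption, not stated explicitly in the text above).
    """
    a, b = SE_COEFFS[line]
    return a + b * (log_lum - 44.0) + 2.0 * np.log10(fwhm_kms)

# Example: Hbeta with log10(L5100) = 44.5 and FWHM = 4000 km/s gives ~8.4
print(log_mvir("Hbeta", np.array([44.5]), np.array([4000.0])))
```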
Their compiled catalogue has a total of 750,414 spectra, with each data file containing the original fluxes, continuum fluxes, and the spectral line fluxes with continuum subtracted.
For the sample selection, we follow the recommended quality cuts for specific emission lines in their paper, namely line flux/flux error >2 and logarithm line luminosity ranges 38–48 erg s^-1 and apply them to the Hβ and Mgii lines.
We further restrain the sample that have both Hβ and Mgii line widths and black hole masses available.
As the black hole mass is derived from the line width, we remove quasars with black hole mass errors of >0.5 dex and line width errors of >2000 km s^-1.
We also select spectra with high median signal-to-noise ratio per pixel of ≥ 10.
A summary of selection criteria along with the number of drop out after each cut is listed in <ref>.
Our final data sample consists of 13,952 spectra, and the distributions of the black hole masses with redshifts are shown in <ref>.
We split the data into 70% training, 20% validation, and 10% test sets, which are 9766, 2930, and 1256 spectra, respectively.
In training the machine learning model, we find that using the fluxes of the entire spectrum as our input data does not lead to any meaningful feature extraction.
This might be due to the noisy fluctuations and spurious spectral spikes in the fluxes.
Hence, we use the spectral line flux with continuum subtracted, which is provided in the data file from <cit.>, as the input for the training and validation of the machine learning.
The validation set is used for evaluating the model performance during the training.
Since we do not utilise the wavelength, which contains the positional information, when training the neural network, and since the line fluxes mainly cut off at ∼ 1000 pixels, we truncate the data to the first 1000 pixels.
The virial Hβ and Mgii black hole mass estimates are used as the ground truth labels.
The fluxes and labels are normalised from 0 to 1.
§ VIRIAL BLACK HOLE MASS PREDICTIONS AND UNCERTAINTIES
In this section, we detail the end-to-end pipeline implemented from training the input data using a neural network model to output the prediction intervals.
A flowchart of the pipeline is illustrated in <ref>.
The following notation is adopted.
Given n independently and identically distributed training samples with input–target pairs {(X_i,Y_i)}_i=1^n, we perform regression.
In regression analysis, the target can be represented by
Y=μ̂(X)+ϵ,
where μ̂(X) is the regression function to be estimated and ϵ is the model error.
In this case, the target Y is the virial black hole mass Mvir, and the input X is the SDSS spectra.
§.§ Construction of neural network for feature extraction
To extract the feature vectors from the spectra, we employ a supervised learning approach using a generic fully connected neural network model with similar hidden layer architecture in <cit.>.
The neural network is constructed using PyTorch <cit.>, an open source machine learning framework in Python.
The input layer consists of 1000 neurons followed by 3 hidden layers of 64, 64, and 8 neurons with rectified linear unit as activation function and dropout <cit.> of probability 0.1, then finally an output layer with 1 node and sigmoid activation function.
The outputs of the second-to-last layer of 8 neurons are saved as the features of the spectra.
As our main aim is not to find the best model, we do not attempt any optimisation or hyperparameters tuning on the model.
Following <cit.>, the stochastic gradient descent based Adam optimiser <cit.> is used with initial learning rate of 5 × 10^-4 and weight decay regularisation parameter of 10^-6.
Additionally, we apply a step learning rate scheduler that decreases the learning rate by a factor of 0.5 every 2 steps.
The model is optimised with mean squared error (MSE) as the cost function.
The model is then trained for 100 epochs with batch size of 64.
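A minimal PyTorch sketch of this feature extractor and its optimisation setup is given below; the class name is ours, and the interpretation of the learning rate schedule as a step scheduler is an assumption.

```python
import torch
import torch.nn as nn

class SpectrumNet(nn.Module):
    """Sketch of the fully connected feature extractor (1000-64-64-8-1)."""

    def __init__(self, n_pixels: int = 1000, p_drop: float = 0.1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_pixels, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 8), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.head = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, x):
        features = self.backbone(x)      # 8-dimensional features kept for the regressor
        return self.head(features), features

model = SpectrumNet()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=1e-6)
# Halve the learning rate every 2 steps (assumed StepLR-style schedule).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)
```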
There are a few main assumptions that we made in training our machine learning model.
We assume that the SDSS spectra are of good quality with reliable derived properties.
On the other hand, note that these properties are also constrained by the same assumptions used to derive them.
In particular, the derived Mvir from SDSS are dependent on various factors, including the Keplerian motion assumption into the virial theorem and the applicability of the empirical scaling relation to single-epoch mass estimates.
There are also potential systematic uncertainties that might not be completely accounted for.
Further caveats are discussed in <ref>.
To train the supervised neural network model, we use the spectra as inputs and the SDSS DR16Q derived virial Hβ and Mgii black hole mass estimates as targets to be optimised.
The uncertainties of the measured Mvir are not included when training the model.
§.§ Construction of regressor for predictions
After the feature extraction process from the neural network, we use gradient boosting for regression to make the predictions.
Depending on the uncertainty quantification method, which will be described in <ref>, the quantile loss is applied for the conformalised quantile regression, while the MSE loss is applied for the rest of the resampling techniques.
To reduce the prediction error, we optimise the regressor by performing a randomised search with 10-fold cross-validation for 100 iterations to find the best hyperparameters for the regressor.
The explored parameter space and the adopted best model are shown in <ref>.
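A sketch of this step using scikit-learn is shown below; the search space values and variable names (e.g. features_train) are placeholders, and the adopted best hyperparameters are those listed in <ref>.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

# Illustrative search space; the actual explored space is given in the table.
param_space = {
    "n_estimators": [100, 200, 500, 1000],
    "max_depth": [2, 3, 5, 8],
    "learning_rate": [0.01, 0.05, 0.1, 0.2],
    "subsample": [0.5, 0.7, 1.0],
}

# Squared-error loss for the point-prediction baselines ...
gbr = GradientBoostingRegressor(loss="squared_error")
search = RandomizedSearchCV(gbr, param_space, n_iter=100, cv=10,
                            scoring="neg_mean_squared_error", random_state=0)
search.fit(features_train, y_train)      # 8-dim features from the neural network (placeholders)
best_gbr = search.best_estimator_

# ... and quantile loss for the CQR lower/upper quantile regressors
gbr_lo = GradientBoostingRegressor(loss="quantile", alpha=0.05, **search.best_params_)
gbr_hi = GradientBoostingRegressor(loss="quantile", alpha=0.95, **search.best_params_)
```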
To check the performance of the regression model, two common evaluation metrics, the mean absolute error (MAE) and root mean squared error (RMSE), are evaluated.
MAE is the average of the absolute errors between the target value Y_i and predicted value μ̂(X_i):
MAE=1/n∑_i=1^n |Y_i-μ̂(X_i)|.
RMSE is the average of the squares of the difference between the target and predicted value:
RMSE=√(1/n∑_i=1^n[Y_i-μ̂(X_i)]^2).
MAE is more robust to outliers, while the RMSE is easier to optimise.
In both cases, the lower score the better.
Additionally, 10-fold cross validation is performed to obtain the mean and standard deviation of the respective evaluation metrics.
§.§ Assessing the performance of prediction intervals
The two criteria that are crucial to assess the performance of the prediction intervals are the coverage and the width <cit.>.
The prediction interval coverage probability (PICP) or coverage for short reflects the probability that the prediction interval will contain the target value, which is defined as
PICP=1/ntest∑_i=1^ntestc_i,
where c_i=1 if Y_i∈ [L(X_i), U(X_i)] otherwise c_i=0, L(X_i) and U(X_i) are the lower and upper bounds, respectively.
Ideally, higher PICP is better and it should be close to the nominal confidence level of (1-α).
The confidence level is set to be 90%.
Additionally, we compute the coefficient of determination, R^2, to measure the percentage of variance
between PICP and the (1-α) nominal coverage rate.
R^2=1-∑[Y_i-μ̂(X_i)]^2/∑(Y_i-Y̅)^2,
where Y̅ is the mean of Y.
The R^2 ranges 0–1 (or in percentage 0–100%), where the higher the better with 100% being a perfect fit.
The mean prediction interval width (MPIW) measures the wideness of the prediction interval and is given by the average of the width
MPIW=1/ntest∑_i=1^ntest[U(X_i)-L(X_i)],
where the prediction interval width is defined as the difference between the upper and lower bounds, which is the term in the square bracket.
The larger the width, the more uncertain.
It is desirable to have a high PICP but a narrow MPIW.
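Both metrics are straightforward to compute from the predicted bounds; a minimal sketch is given below, where y_test, y_lower, and y_upper are placeholder arrays.

```python
import numpy as np

def picp(y_true, lower, upper):
    """Prediction interval coverage probability."""
    covered = (y_true >= lower) & (y_true <= upper)
    return covered.mean()

def mpiw(lower, upper):
    """Mean prediction interval width."""
    return np.mean(upper - lower)

# For a 90% target coverage, picp(y_test, y_lower, y_upper) should be close to 0.9,
# with mpiw(y_lower, y_upper) as small as possible.
```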
§.§ Construction of prediction intervals
Various methods to construct prediction intervals have been developed and a comparison between different strategies is reviewed in <cit.>.
Using an open-source Python package called model agnostic prediction interval estimator <cit.> or MAPIE[<https://github.com/scikit-learn-contrib/MAPIE>], we explore different techniques to estimate the prediction intervals.
To estimate the prediction interval, the model error ϵ can be characterised as the conditional probability distribution of Y given X, ℙ_Y|X.
In practice, this is estimated by the difference between the label and the prediction, Y_i - μ̂(X_i).
Let (X_n+1,Y_n+1) be the input-target for a new unseen test point.
Suppose we want to construct a valid prediction interval 𝒞̂_n,α(X_n+1) for the test data.
It should satisfy
ℙ{Y_n+1∈𝒞̂(X_n+1)}≥ 1-α,
where α is the target quantile and the complementary (1-α) is the confidence level or coverage rate.
The estimator (for the prediction interval) is considered calibrated if it satisfies the inequality in <ref>.
A conformity score is a measure of how similar a sample is compared to the rest of the dataset and is used to determine the threshold for the quantile, leading to a prediction interval.
A key challenge to estimating the prediction interval is to ensure statistical consistency, and various approaches have been proposed.
Conformal prediction <cit.> offers a robust uncertainty quantification framework and a distribution-free coverage guarantee that satisfy <ref>.
As a set of baseline comparison, we compare conformal prediction against various uncertainty quantification methods from the MAPIE package, namely naive, jackknife+-after-bootstrap, cross-validation and its variations.
We briefly review the methods we use in this paper in the following.
§.§.§ “Naive” conformity score
Consider a simple or “naive” way to compute conformity score, by using the residual of the training dataset, which gives
𝒞̂_n,α^naive(X_n+1)=μ̂(X_n+1) ±q̂_n,α^+|Y_i-μ̂(X_i)|,
where q̂_n,α^+ is the (1-α) quantile of the empirical distribution.
Though this method is computationally cheap, it does not guarantee coverage and is likely to overfit, which underestimates the prediction interval widths.
§.§.§ Jackknife+-after-bootstrap
The standard jackknife is a leave-one-out cross-validation <cit.> approach.
We opt for jackknife+-after-bootstrap <cit.> as it is more computationally efficient than the standard jackknife.
The steps to infer the jackknife+ab prediction intervals are as follows:
* Bootstrap resampling from the training set with replacement K times, B_1,…,B_K.
* Fit the K regression functions μ̂_B_k on the bootstrapped dataset.
* Aggregate the estimated prediction functions over the bootstrapped datasets that exclude sample i, μ̂_φ,-i(·)=φ({μ̂_B_k(·): i ∉ B_k}), where φ is the aggregation function, usually taken to be the mean or median; we use the mean, which is the default.
Then compute the conformity score as the residual R_φ,i=|Y_i-μ̂_φ,-i(X_i)| for i=1,…,n.
* Output jackknife+ab prediction interval:
𝒞̂_n,α,B^jackknife+ab(X_n+1)=[q̂_n,α^-{μ̂_φ,-i(X_n+1) - R_φ,i}, q̂_n,α^+{μ̂_φ,-i(X_n+1) + R_φ,i}],
where q̂_n,α^- is the α quantile of the distribution and recall the (1-α) counterpart is q̂_n,α^+.
§.§.§ Cross-validation and its variations
Rather than the leave-one-out method, cross validation can be performed in K-fold to reduce computation time.
The steps to infer the CV+ prediction intervals are as follows:
* Split training set into K disjoint subsets S_1,…,S_K each of size m=n/K.
* Fit the K regression functions μ̂_-S_k on the training dataset with kth subset excluded.
* Compute the conformity score from the K-fold process as R_i^CV=|Y_i-μ̂_-S_k(i)(X_i)|, where S_k(i) denotes the subset that contains i.
* Output CV+ prediction interval:
𝒞̂_n,α,K^CV+(X_n+1)=[q̂_n,α^-{μ̂_-S_k(i)(X_n+1) - R_i^CV}, q̂_n,α^+{μ̂_-S_k(i)(X_n+1) + R_i^CV}].
The jackknife+ab and CV+ methods provide a slightly weaker coverage guarantee of (1-2α).
For standard CV, the output prediction interval is defined as
𝒞̂_n,α^CV(X_n+1)=[q̂_n,α^-{μ̂(X_n+1) - R_i^CV}, q̂_n,α^+{μ̂(X_n+1) + R_i^CV}].
Another variation of CV that is more conservative than CV+ is the CV-minmax method given by
𝒞̂_n,α^CV-minmax(X_n+1)=[min_i=1,…,nμ̂_-i(X_n+1) - q̂_n,α^+{R_i^CV}, max_i=1,…,nμ̂_-i(X_n+1) + q̂_n,α^+{R_i^CV}],
which guarantees the (1-α) coverage in <ref>.
§.§.§ Conformalised quantile regression
As the transductive or full conformal prediction is computationally heavy, the inductive or split conformal prediction <cit.> approach is applied to alleviate the issue.
In this setting, it trains the model only once, but requires data splitting for the calibration set.
For regression, the conformalised quantile regression <cit.> is built upon conformal prediction and quantile regression <cit.> to provide a two-sided prediction interval or band.
The steps to infer the CQR prediction intervals are as follows (a minimal sketch implementing them is given after the list):
* Split dataset into two disjoint subsets for training set ℐ_1 and calibration set ℐ_2.
* Fit two conditional quantile functions for the lower quantile q̂_α/2 and upper quantile q̂_1-α/2.
* Compute the conformity score for each i ∈ℐ_2 as E_i^CQR=max{q̂_α/2(X_i)-Y_i, Y_i-q̂_1-α/2(X_i)}.
* Compute Q̂_1-α(E^CQR,ℐ_2):=(1-α)(1+1/|ℐ_2|)-th empirical quantile of {E_i^CQR: i ∈ℐ_2}.
* Output CQR prediction interval:
𝒞̂_n,α^CQR(X_n+1)=[q̂_α/2(X_n+1) - Q̂_1-α(E^CQR,ℐ_2), q̂_1-α/2(X_n+1) + Q̂_1-α(E^CQR,ℐ_2)].
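A minimal sketch of these steps is given below, where q_lo_cal, q_hi_cal, q_lo_test, and q_hi_test denote the fitted α/2 and 1-α/2 quantile predictions on the calibration and test sets (placeholder names).

```python
import numpy as np

def cqr_interval(q_lo_cal, q_hi_cal, y_cal, q_lo_test, q_hi_test, alpha=0.1):
    """Split-CQR prediction band following steps 1-5 above."""
    n_cal = len(y_cal)
    # Step 3: conformity scores on the calibration set
    scores = np.maximum(q_lo_cal - y_cal, y_cal - q_hi_cal)
    # Step 4: (1 - alpha)(1 + 1/n) empirical quantile of the scores
    # (numpy >= 1.22; use interpolation="higher" for older versions)
    q_hat = np.quantile(scores, min(1.0, (1 - alpha) * (1 + 1 / n_cal)), method="higher")
    # Step 5: conformalised lower and upper bounds on the test set
    return q_lo_test - q_hat, q_hi_test + q_hat
```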
We employ CQR with inductive split using the validation set as the calibration set.
Towards the final stage of the prediction pipeline in <ref>, prediction intervals are obtained from the various uncertainty quantification methods.
Their performances are evaluated and compared using the two metrics, PICP and MPIW, as defined previously in <ref>.
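For reference, the following sketch shows how the above methods can be invoked through MAPIE; the exact interface may differ between MAPIE versions (we assume the MapieRegressor/MapieQuantileRegressor API around version 0.6), and X_train, y_train, X_valid, y_valid, and X_test are placeholders for the extracted features and targets.

```python
from sklearn.ensemble import GradientBoostingRegressor
from mapie.regression import MapieRegressor, MapieQuantileRegressor
from mapie.subsample import Subsample

alpha = 0.1  # 90% confidence level

# CV+ (method="plus"); "naive", "base", and "minmax" select the other baselines
mapie_cv = MapieRegressor(GradientBoostingRegressor(), method="plus", cv=10)
mapie_cv.fit(X_train, y_train)
y_pred, y_pis = mapie_cv.predict(X_test, alpha=alpha)  # y_pis: (n, 2, 1) lower/upper bounds

# Jackknife+-after-bootstrap via bootstrap resampling of the training set
mapie_jab = MapieRegressor(GradientBoostingRegressor(), method="plus",
                           cv=Subsample(n_resamplings=30))
mapie_jab.fit(X_train, y_train)

# Conformalised quantile regression, using the validation set as the calibration set
mapie_cqr = MapieQuantileRegressor(GradientBoostingRegressor(loss="quantile"), alpha=alpha)
mapie_cqr.fit(X_train, y_train, X_calib=X_valid, y_calib=y_valid)
y_pred_cqr, y_pis_cqr = mapie_cqr.predict(X_test)
```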
§ RESULTS
§.§ Effectiveness of neural network for feature extraction
Features extracted by a neural network are not directly interpretable, as they do not correspond to any particular physical parameters.
However, if the regressor is to perform well, the extracted features should capture meaningful aspects of the raw data.
To determine whether the extracted features from the neural network are meaningful, we use Uniform Manifold Approximation and Projection <cit.> or UMAP[<https://github.com/lmcinnes/umap>], a dimension reduction technique to project the 8-dimension features to 2-dimension parameter space.
As the purpose is purely for visualisation, we set the number of components to 2 and use the defaults for the rest of the UMAP hyperparameters.
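A minimal sketch of this projection is shown below, where features is a placeholder for the (n, 8) array of extracted features and log_mvir for the corresponding labels used to colour the points.

```python
import umap
import matplotlib.pyplot as plt

reducer = umap.UMAP(n_components=2)            # all other hyperparameters left at their defaults
embedding = reducer.fit_transform(features)    # shape (n, 2)

plt.scatter(embedding[:, 0], embedding[:, 1], c=log_mvir, s=2, cmap="viridis")
plt.colorbar(label="log Mvir")
plt.show()
```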
It can be observed in <ref> that the 2-dimensional UMAP representation is structured such that objects with smaller Mvir tend to lie on the right and shift gradually towards the left with increasing Mvir.
This affirms that the 8 features extracted are sensible to characterise the Hβ and Mgii line-based Mvir, which are used as inputs for the regressor.
§.§ Performance of regressor for predictions
Due to the various assumptions imposed on estimating the black hole mass, the black hole mass estimates can be substantially biased and uncertain <cit.>.
To avoid the need for individual spectral line fitting, we use a neural network model to extract the latent feature vectors and a regressor to predict the Mvir.
The prediction errors of the regressors are shown in <ref>.
Overall, the performances of the regressors using quantile loss and MSE loss for both Hβ and Mgii cases are similar.
The Mgii line-based Mvir prediction errors are slightly larger compared to those of Hβ.
As previously mentioned, this is likely because the Mvir based on Hβ is better calibrated <cit.>, which leads to the smaller prediction error.
As a comparison, the performances of the black hole mass predictions trained using machine learning model reported by other studies are also listed in <ref>.
It can be seen that the predictions are relatively good with low prediction errors when compared to those from other studies.
Note, however, that the datasets used in those studies differ from one another; thus, the differences in scores might also be attributed to the difficulty of the respective machine learning tasks.
§.§ Reliability of prediction intervals
A well calibrated uncertainty quantification is valuable to assess the reliability of the black hole mass predictions.
We compare several techniques to estimate the prediction intervals.
The comparison between the predicted mass Mvir,pred and actual mass Mvir from SDSS with prediction intervals at 90% confidence level is presented in <ref>.
For reference, the shaded gray regions indicate the intrinsic scatter or standard deviation about the scaling relation of 0.2 dex using Hβ <cit.> and 0.36 dex using Mgii line <cit.>.
As previously demonstrated, the neural network is able to retrieve the Mvir predictions, being comparatively close to those measured from SDSS (<ref>, identity line in grey dashed line).
All but one of the methods for the Mgii line-based Mvir dataset have PICP lower than the target 90% confidence level.
Although this is an indication that they are inadequately calibrated, their PICP remains relatively close to the nominal value.
Overall, at the 90% confidence level, the mean widths of the prediction intervals for all methods are larger than the width of the intrinsic scatter, but still well below some of the reported intrinsic scatter about the R-L relationship, which is on the order of ≳± 0.4dex <cit.>, corresponding to a scatter width of ≥ 0.8dex.
An evaluation of the performance of the prediction intervals over a range of nominal confidence levels using PICP and MPIW is presented next.
As mentioned, it is desirable to have PICP close to the target coverage and small MPIW.
<Ref> displays the difference between the PICP and nominal coverage with respect to the nominal coverage along with the coefficient of determination R^2 for each method.
For most of the ranges of nominal coverage, the PICP of the naive method is underestimated, while on the other end, the CV-minmax is overestimated.
This is also evident from the lower overall R^2.
In general, CV-minmax has the worst-performing PICP with the lowest R^2, especially when the target confidence level is small.
This is followed by the naive method.
The rest of the methods, including jackknife+ab, CV, CV+, and CQR, have comparable PICP as well as R^2, particularly towards larger nominal confidence level.
The MPIW scores for a range of nominal coverage is presented in <ref>.
There is a trade-off of larger width with increasing confidence level, as expected.
The MPIW values for CV-minmax are the largest in all ranges of nominal coverage level, while the MPIW tends to be smaller for the naive method as the nominal coverage is set to be larger.
Other methods have similar MPIW.
For the remaining of the analysis, the results are for 90% confidence level, unless otherwise stated.
<Ref> compares the degree of variations in the prediction interval widths for the different uncertainty quantification methods.
The naive, CV, and jackknife+ab methods produce constant or negligible changes in the widths of the prediction intervals.
The prediction bounds from CV+ are also mainly constant, except for a small minority of objects.
The two methods that exhibit variable widths are the CV-minmax and CQR.
However, CV-minmax generates widths that are, at the very least, larger than those from the constant-width baseline methods.
CQR, in contrast, shows greater variability and is able to yield narrower widths under certain circumstances, as presented next.
When comparing the scale of the Hβ and Mgii Mvir,pred prediction widths, those from Mgii are wider, which is consistent with it being harder to measure, for instance, due to non-virial component <cit.>.
Among the explored uncertainty quantification methods, CQR performs the best; therefore, we focus on CQR and demonstrate its adaptiveness with respect to the properties of the quasars.
<Ref> portrays the variations in the prediction interval width for selected quasar properties.
To measure the strength of the correlation, the Spearman's correlation coefficient <cit.> and its corresponding p-value are also calculated.
It is found that there is a negative correlation (statistically significant at p-value≪ 0.001%) between the prediction interval widths and Mvir, and subsequently with the black hole mass related quasar properties, including the line luminosity L and the FWHM of the broad component of the Hβ and Mgii lines.
Some other quasar properties that are also significantly correlated with the widths (not shown in the figure) are the corresponding properties measured using the whole line component.
The relationship of the two quantities, line luminosity and FWHM, with the prediction interval width is expected, as both enter the virial theorem used to estimate the Mvir.
Between the line luminosity and the FWHM, the FWHM is more strongly anti-correlated with the size of the prediction interval width, which is a consequence of the virial theorem.
For more luminous quasars with broader spectral line widths, the inferred prediction interval using CQR provides a tighter bound.
We then compare the Mvir,pred based on Hβ and Mgii, as well as their associated prediction intervals using CQR in <ref>.
Comparing multiple emission lines is recommended to obtain a better constraint on the Mvir <cit.>.
Similar analysis is commonly carried out using single-epoch Mvir calibrated with empirical scaling relation from reverberation mapping <cit.>.
As expected, the Hβ and Mgii line-based Mvir are tightly correlated, albeit the large scatter.
This is not surprising, considering that the amount of scatter from the SDSS measured Mvir is even larger, as illustrated in <ref>.
Since the errors from SDSS measurements only account for the propagated measurement errors, the median lower and upper intervals are smaller in comparison to the prediction intervals from CQR, as expected.
The retrieved Hβ and Mgii based Mvir,pred along with the prediction intervals using CQR are comparable to those measured from SDSS.
§ DISCUSSIONS
§.§ Black hole mass predictions and uncertainties
In the past decade, artificial intelligence and machine learning have witnessed increasing growth and gained popularity within the astronomy community to solve big data challenges <cit.>.
Not surprisingly, recently a number of papers have employed machine learning to predict the masses of the black hole in AGN <cit.>.
In those studies, they mainly focused on retrieving the predictions of the true black hole mass, whereby the performance in terms of prediction error is usually assessed using MAE, MSE, or RMSE.
Yet, this only evaluates the ability of the machine learning model to recover the true value, but not the reliability of the predictions.
Uncertainty quantification of the black hole mass predictions is vital, especially since the single epoch MBH estimates already suffer from a wide range of intrinsic scatter <cit.>.
In fact, the uncertainty can extend more than 0.5 dex for individual AGN <cit.> and is dependent on which emission line is used to probe the MBH <cit.>.
At the same time, the adopted machine learning pipelines introduce further uncertainties into the MBH estimation.
Without properly accounting for the uncertainties in the predicted MBH, the recovered value will be more biased than it already is.
Therefore it is more desirable to quantify the uncertainties of the black hole masses for each individual AGN rather than for the general AGN population.
Specifically, prediction intervals with variable or adaptive widths should be considered, as addressed in this study.
Subsequently, one can then attain the prediction interval and conduct analysis similar to those done in reverberation mapping studies (or other black hole mass estimation techniques).
§.§ Proposed adaptive uncertainty quantification
We recommend the need to not only assess the performance of the predictions of the black hole mass from the machine learning model, but also quantify the uncertainties for the prediction intervals.
We present an uncertainty quantification method to generate adaptive prediction intervals for the black hole mass estimation using CQR introduced by <cit.>.
In <ref>, we have shown that CQR is more informative of the model's uncertainty compared to other investigated uncertainty quantification methods.
Therefore, we propose that a variable width prediction interval method using CQR is better suited for this particular task.
In assessing the performance of the prediction intervals, it can be seen that the CQR outperforms the rest.
The other methods produce prediction interval widths that are either constant or too wide.
The CQR is more adaptive and better reflects the uncertainty of each individual object.
Additionally, we find that the width of the prediction interval is correlated with the black hole mass and its associated properties, particularly the line luminosity and FWHM.
The larger the black hole mass, the tighter the prediction interval widths.
This suggests that given a bright and broad spectral line source, we should be able to predict the black hole mass with more certainty.
We also highlighted that the virial black hole mass predictions and their corresponding prediction uncertainties, generated from the combination of the neural network and the CQR framework, are comparable in scale and magnitude to those measured from SDSS using a spectral line fitting algorithm and the reverberation mapping scaling relation with propagated measurement errors.
The choice of spectral line fitting algorithm will affect the continuum and spectral emission line width measurements, effectively biasing the black hole mass estimation <cit.>.
In that case, one can opt to predict the black hole mass and its associated uncertainties using machine learning coupled with CQR, as it offers a framework that is largely agnostic to the fitting of the individual spectral emission lines.
The uncertainty quantification methods that we presented in this study can be deployed with any base machine learning algorithm to quantify the uncertainty of the predicted MBH.
The code repository at contains Python scripts to get the data (described in <ref>), run feature extraction using neural networks and run uncertainty quantification for regression (described in <ref>).
These can be used separately or deployed to existing machine learning methods to generate prediction intervals for the black hole mass predictions (refer to MAPIE documentation for more details).
Additionally, the reproducible outputs for the analysis in this work are also provided.
We include a pre-trained model in PyTorch of the feature extraction method from the supervised neural network model that has been trained on the Hβ and Mgii line-based Mvir dataset from SDSS.
Examples of practical usage include evaluating new datasets, fine-tuning existing networks, and employing the pre-trained model in downstream tasks such as classification and anomaly detection based on the quasar properties.
The generated predictions as well as the uncertainty estimates for the different uncertainty quantification methods are included.
Supplementary Python notebooks including tutorials on usage and data analysis are also provided.
§.§ Further Experimentations and Caveats
It is apparent that the choice of dataset affects the performance of the prediction intervals.
As aforementioned, we have tested with different input spectra, including the full spectra and those with continuum subtracted, though they performed badly.
Therefore, for our input dataset from SDSS, we use the spectral line flux only that have been continuum subtracted.
This means that the input data still depend on the spectral fitting algorithm and procedure, which in this case are used to fit and subtract the continuum and to extract only the regions with the spectral lines.
As we did not visually inspect the spectra, some of them might also have been fitted poorly.
In this case, the derived properties might also be biased.
Another further constraint is that we choose to use spectra that have both Hβ and Mgii lines.
As a consequence, the predictions will not perform well in the absence of any of these lines or if other broad emission lines are present.
One obvious experiment is to evaluate on reverberation mapped samples.
For this purpose, we applied our neural network model on the Mgii reverberation-mapped SDSS objects from <cit.>.
Using a subset of their sample that contains both Hβ and Mgii lines, we found that the model is able to recover the Mgii-based Mvir reasonably well; however, predicting the Mgii reverberation mapping black hole measurements yields larger errors (see <ref>).
This is because Mvir and MBH from reverberation mapping are not directly comparable, as the former does not account for the unknown f factor, which is known to be unique to individual sources <cit.>, albeit often assumed to be constant <cit.>.
Since we have not performed a rigorous search for the best regressor, further performance improvement on prediction accuracy could be obtained with more computational resources.
Nevertheless, the basic architecture can act as a baseline and is able to obtain an effective feature extraction that leads to a reasonable prediction of the Mvir.
We have also conducted experiments using unsupervised learning approach on the same dataset.
We employed a vanilla autoencoder model consisting of a layer of 512 neurons for the encoder and decoder with 8 as the latent dimension for feature extraction.
However, it appears that this model is greatly affected by the presence or absence of other strong broad emission lines, in this case the Hα line, and thus yields higher errors compared to the supervised learning approach.
It is also important to point out that the coverages of most of the explored uncertainty quantification methods are below the intended nominal coverage, though mostly still close to it.
For our purpose, we did not attempt to further improve the coverage, which could possibly be achieved by increasing the number of calibration samples <cit.>.
<cit.> has provided an outline of the procedure and also the method to check for the correct coverage.
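As an illustration, the empirical marginal coverage and the mean interval width can be checked with a few lines of Python, assuming arrays of true values and lower/upper interval bounds:

import numpy as np

def coverage_and_width(y_true, lower, upper):
    covered = (y_true >= lower) & (y_true <= upper)
    return covered.mean(), np.mean(upper - lower)

# e.g. for a target coverage of 90% (alpha = 0.1):
# cov, width = coverage_and_width(y_test, lower, upper)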
§.§ Future prospects and avenues
We highlight some future investigations that can be carried out.
Though the availability of reverberation mapped objects is currently limited, the single-epoch black hole mass measurements from these are more precise and better constrained than those calibrated from the scaling relation using the Hβ line <cit.>.
These measurements can then be used to predict the reverberation mapping MBH.
Note that there are various systematic errors from the reverberation mapping method that could bias the black hole mass estimates by up to a factor of 3, or ∼0.5 dex <cit.>.
There are several advantages of using machine learning to perform predictions of the black hole mass.
We demonstrate that the neural network model is capable of retrieving the Mvir predictions without having to model individual emission lines of the spectrum and derive their line properties.
The estimates are also comparable to those from SDSS measurements.
Since the machine learning approach is general, one can also apply a similar pipeline to predict other properties of quasars, such as the emission line width, and quantify their uncertainties.
The development of a machine learning model that is completely independent of the spectral fitting algorithm and process would be of interest.
Another benefit that has been mentioned in <cit.> is that it avoids the use of the empirical scaling relation between the BLR size and luminosity from reverberation mapping to calibrate line-based Mvir.
With this, the induced bias from the scaling relation can be mitigated or removed.
Subsequently, the inferred Mvir for high-redshift objects using Civ might also be less biased.
However, it is noteworthy that there are additional complications when using the Civ line, such as its being dominated by outflows <cit.> rather than virial motion, which is the basis of the Mvir estimate.
Hence, it is also important to ensure that the dataset used contains reliable measurements with good signal-to-noise ratio <cit.>.
We envision that the inclusion of uncertainty quantification will provide a useful assessment of the reliability of the black hole mass predictions trained using machine learning.
The CQR as well as other uncertainty quantification methods that we explored in this work can be incorporated in conjunction with many machine learning models.
A further extension to this work is to explore various or novel uncertainty quantification techniques that would improve the coverage and widths, such as in the presence of limited data and data with large measurement errors.
A potential approach is the conformal predictive system for regression <cit.>.
Rather than a single interval, the conformal predictive system estimates the cumulative probability distribution.
In this way, it can be used to further access the trustworthiness of the uncertainty based on the difficulty of the estimates.
§ SUMMARY
Measuring an accurate black hole mass has been known to be challenging due to the induced bias from the scaling relation that is used to calibrate the virial black hole mass for high redshift sources.
A reliable tool to determine the uncertainty of the virial black hole mass is important to probe the black hole population and evolution.
In this work, we examine various prediction interval methods, including conformalised quantile regression (CQR), to quantify the uncertainties in the Hβ and Mgii line-based virial black hole mass estimation.
The code is publicly available at .
Using quasar spectra from the Sloan Digital Sky Survey, we train the data on a neural network model for feature extraction using supervised learning, which is then provided to a regressor for predictions.
Among the uncertainty quantification methods that we investigated, the CQR generates a more practical and meaningful range of probable intervals compared to other methods such as jackknife+-after-bootstrap, cross-validation and its variations.
The uncertainty intervals of the other methods are either fixed or relatively large.
Conversely, the CQR is able to provide variable width prediction intervals and the tightness of the bounds reflects the correlation with the black hole mass as well as its associated properties.
As objects increase in black hole mass, the prediction intervals become narrower.
That is, the prediction bound from CQR will be more certain given a luminous object with broad spectral line width.
Additionally, the neural network architecture coupled with the CQR framework is able to retrieve the line-based virial black hole masses and their corresponding errors as well as those estimated from the Sloan Digital Sky Survey.
The uncertainty quantification method can be deployed to any machine learning algorithm to assess the quality of the black hole mass predictions, and hence, is recommended.
§ ACKNOWLEDGEMENTS
We thank the anonymous referee for valuable suggestions on the manuscript.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is <www.sdss4.org>.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
Software: Astropy <cit.>, Jupyter <cit.>, MAPIE <cit.>, Matplotlib <cit.>, NumPy <cit.>, pandas <cit.>, PyTorch <cit.>, scikit-learn <cit.>, SciPy <cit.> UMAP <cit.>.
§ DATA AVAILABILITY
The catalogue and spectroscopic data underlying this article are available in Sloan Digital Sky Survey Data Release 16 quasar properties catalogue at <http://quasar.astro.illinois.edu/paper_data/DR16Q/>. The code repository used in this work is publicly available at .
mnras
§ DATASET SAMPLE SELECTION
From the 750,414 SDSS DR16Q spectra, we perform quality cuts, as described in <ref> of the main paper.
The selection criteria and the corresponding number of spectra after each cut are as follows.
Note that the number of spectra dropped at each cut depends on the order in which the criteria are applied, as some spectra satisfy multiple criteria.
* Hβ line flux/flux error >2: 140,172
* and Mgii line flux/flux error >2: 133,772
* and Hβ logarithm line luminosity ranges 38–48 erg s^-1: 132,543
* and Mgii logarithm line luminosity ranges 38–48 erg s^-1: 132,538
* and signal-to-noise ratio per pixel ≥ 10: 25,283
* and Hβ line width available: 25,283
* and Mgii line width available: 25,283
* and Hβ black hole mass available: 14,798
* and Mgii black hole mass available: 14,777
* and Hβ black hole mass error <0.5: 14,602
* and Mgii black hole mass error <0.5: 14,314
* and Hβ line width error <2000km s^-1: 14,124
* and Mgii line width error <2000km s^-1: 13,952
The final data sample contains 13,952 spectra. The SDSS DR16Q data along with the derived catalogue are publicly available at <http://quasar.astro.illinois.edu/paper_data/DR16Q/>.
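For illustration, the quality cuts can be expressed compactly with pandas; the file name and column names below are placeholders and do not match the DR16Q catalogue columns exactly.

import pandas as pd

cat = pd.read_parquet("dr16q_properties.parquet")   # hypothetical catalogue file

mask = (
    (cat["hb_flux"] / cat["hb_flux_err"] > 2)
    & (cat["mgii_flux"] / cat["mgii_flux_err"] > 2)
    & cat["log_l_hb"].between(38, 48)
    & cat["log_l_mgii"].between(38, 48)
    & (cat["snr_pixel"] >= 10)
    & (cat["mbh_hb_err"] < 0.5) & (cat["mbh_mgii_err"] < 0.5)
    & (cat["fwhm_hb_err"] < 2000) & (cat["fwhm_mgii_err"] < 2000)
)
sample = cat[mask].dropna(subset=["fwhm_hb", "fwhm_mgii", "mbh_hb", "mbh_mgii"])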
§ PREDICTING BLACK HOLE MASS FROM REVERBERATION MAPPING
To examine the performance of the predictions and prediction intervals on reverberation mapped black hole masses, MRM, we utilise the reverberation mapped SDSS samples from <cit.> measured using Mgii lags.
We cross-match them with the SDSS DR16Q quasar properties catalogue <cit.> and further restrict those with both Hβ and Mgii lines.
This provides a total of 14 samples, 7 of which are gold samples with the most credible Mgii lags (≤ 10% individual false positive rate).
<Ref> shows the comparison between the Mvir and MRM along with the predictions from the supervised neural network and prediction intervals from conformalised quantile regression, as outlined in <ref> of the main paper.
Half of the samples are gold samples (<ref>, gold star).
The majority of them have Mvir close to MRM within 0.5 dex, except one that also shows the largest discrepancy of ∼ 1.5 dex.
As mentioned, this inconsistency arises from the fundamental difference between Mvir and MRM.
A further suggestion is to train using MRM samples in order to predict the same quantity.
§ EXPERIMENT ON SELECTED SUBSAMPLES
The same experiment outlined in the main paper is repeated using a smaller but rather balanced dataset that evenly covers a range of Mvir measurements.
We choose to sample equally the black hole masses estimated from the Hβ and Mgii lines within the range of 10^8–10^9 M_⊙, with a bin interval of 0.2.
Our final sample is then 6070 spectra, which is split into 4249, 1274, and 547 spectra that account for 70% training, 20% validation, and 10% test sets, respectively.
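A sketch of the balanced subsampling is given below; the column name and the number of objects drawn per bin are illustrative assumptions.

import numpy as np
import pandas as pd

def balanced_subsample(df, col="log_mbh", lo=8.0, hi=9.0, step=0.2,
                       n_per_bin=1200, seed=0):
    bins = np.arange(lo, hi + step, step)
    df = df[(df[col] >= lo) & (df[col] < hi)].copy()
    df["bin"] = pd.cut(df[col], bins)
    # draw at most n_per_bin objects from every 0.2-wide mass bin
    return (df.groupby("bin", observed=True)
              .apply(lambda g: g.sample(min(len(g), n_per_bin), random_state=seed))
              .reset_index(drop=True))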
|
http://arxiv.org/abs/2307.04872v1 | 20230710194154 | The Synthesis Lab: Empowering Collaborative Learning in Higher Education through Knowledge Synthesis | [
"Xinran Zhu",
"Hong Shui",
"Bodong Chen"
] | cs.HC | [
"cs.HC",
"cs.CY"
] |
[email protected]
0003-0064-4861
University of Pennsylvania
Philadelphia
United States
[email protected]
University of Minnesota
Minneapolis
United States
[email protected]
University of Pennsylvania
Philadelphia
United States
The ability to synthesize information has emerged as a critical skill for success across various fields. However, within the field of education, there is a lack of systematic understanding and well-defined design infrastructures that address the mechanisms and processes of knowledge synthesis in collaborative learning settings. In this poster, we introduce a design innovation – The Synthesis Lab, which aims to support students in synthesizing ideas from their online discussions in higher education classrooms. The tool offers structured work-spaces for students to decompose the synthesis process into intermediate synthesis products and features two key iterative processes of knowledge synthesis in collaborative settings: categorizing peers’ ideas into conceptual building blocks and developing a synthesis of the discussions. Future implementation and evaluation of the design will make significant contributions to both research and practice.
[500]Human-centered computing Collaborative and social computing systems and tools
[500]Applied computing Collaborative learning
The Synthesis Lab: Empowering Collaborative Learning in Higher Education through Knowledge Synthesis
Bodong Chen
August 12, 2023
====================================================================================================
§ INTRODUCTION
In our ever-evolving world, where information flows incessantly amidst unprecedented technological advancements and the growth of artificial intelligence, the ability to synthesize information has emerged as a critical skill for success across various fields. Just as a scientist combines different reactants in a chemical experiment to create new substances, or a composer weaves melodies into a harmonious symphony, knowledge synthesis can be seen as both an art and a science. It involves skillfully and strategically weaving together diverse strands of information to foster conceptual innovation, generate novel knowledge, and design creative solutions <cit.>.
Knowledge synthesis is one important form of cognition in human learning and collaboration. In contrast to other cognitive processes such as interpreting and evaluating new information, synthesis-making requires efforts to rise above current levels of explanation which results in understanding phenomena on a higher plane and the creation of new concepts <cit.>. Within organizations or learning communities, the synthesis-making process becomes even more intricate when individuals engage in dynamic interactions, encountering a broad range of perspectives and resources, which all make the synthesis process challenging.
Research from various disciplines has examined processes or concepts related to knowledge synthesis in collaborative/cooperative settings from multiple perspectives. In CSCW, scholars have emphasized key components of scholarly knowledge synthesis, such as capturing context information and information reuse <cit.>. Similarly, in creativity research, researchers have used constructs that are closely related. Javadi and Fu <cit.> investigated “idea integration” in electronic brainstorming as a process for “adoption, exploitation, combination or synthesis” of multiple ideas. In information sciences, Robert et al. <cit.> refers to “knowledge integration” as “the synthesis of individual team members’ information and expertise through 'social interactions'.” These studies have shed light on the importance of effective knowledge synthesis in enhancing collaboration and improving outcomes in diverse domains.
Moreover, educational research has also touched upon concepts related to knowledge synthesis. DeSchryver <cit.> developed a framework for web-mediated knowledge synthesis which includes six strategies for individuals such as divergent keyword search, synthesis for meaning, in-the-moment insights, repurposing (e.g., engaging learners to evolve the original ideas with their own added value), reinforcement (e.g., justifying new ideas by revisiting the sources or further discussion with peers), and note-taking. Recent work framed synthesis as a “trans-disciplinary skill” that “encapsulate the ways in which creative people think.” <cit.> In CSCW’s sister field – Computer-Supported Collaborative Learning (CSCL), knowledge synthesis plays a crucial role in collaborative learning by helping students distill, connect, organize, and analyze the information to deepen their thinking. For example, the knowledge building model emphasizes the notion of “rise above” in knowledge building discourse to synthesize and build on previous ideas that leads to the development of novel knowledge <cit.>. However, there is a lack of systematic understanding regarding the mechanisms and processes of knowledge synthesis in CSCL. Important questions remain unanswered, necessitating both theoretical and empirical investigation in the field. For instance, how do students synthesize ideas generated in collaborative discourse? How can the knowledge synthesis process support learning and collaboration? And how can the synthesis be used to orchestrate various learning events? Such understanding is essential for informing the design of learning systems, technologies, and pedagogies to support effective knowledge synthesis.
To address this gap, we initiated a Design-Based Research <cit.> project, connecting theories and designs in CSCW and CSCL, to understand and support knowledge synthesis through a series of ongoing design innovations. Drawing on socio-cognitive stances, this project aims to 1) support the knowledge synthesis process in CSCL through a series of design innovations, and 2) investigate the mechanism of knowledge synthesis in collaborative learning settings through empirical research. In this poster, we showcase an early effort of this project, the design of a web application for supporting knowledge synthesis in college students’ online discussion activities – The Synthesis Lab. This application helps deconstruct the complex synthesis-making process into smaller building blocks and guides students through the key steps, including distilling, connecting, analyzing, rising above, and aggregating ideas generated from the discussions. These steps guide students to discover interrelationships between peers’ posts beyond simple reply relationships, which leads to rising above previous ideas and constructing coherent knowledge out of fragmentary information.
§ THE SYNTHESIS LAB
Informed by interdisciplinary literature, knowledge synthesis in the designed technology space is operationalized as a dynamic process encompassing the analysis and integration of ideas fostered through interactions with peers in digital environments. Situated in a collaborative setting, the overarching goal is to generate novel knowledge out of the conversation, while facilitating the orchestration of various learning activities and application scenarios. Serving as a tool for thinking, it nurtures higher order competences, such as creativity and collaboration, fostering a fertile ground for profound thinking and intellectual growth.
The Synthesis Lab (see Fig. <ref> for the interface and example user workflow) retrieves students’ online discussion data on a web annotation platform – Hypothesis (https://web.hypothes.is/), via its APIs. Hypothesis is a web annotation technology that allows users to collaboratively read, annotate, highlight, and tag on a shared document or web page. It has been widely used to support social reading in classrooms as a form of online discussion across universities <cit.>. For example, as part of their weekly routine, students engage with course readings by annotating the texts and responding to peers’ annotations prior to in-person class meetings.
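For illustration, annotations can be retrieved from the public Hypothesis search API roughly as sketched below; the group ID, document URI and API token are placeholders, and the exact retrieval logic of The Synthesis Lab may differ.

import requests

API_URL = "https://api.hypothes.is/api/search"
params = {"group": "GROUP_ID", "uri": "https://example.org/reading.pdf", "limit": 200}
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

rows = requests.get(API_URL, params=params, headers=headers).json()["rows"]
annotations = [{"user": r["user"], "text": r.get("text", ""), "tags": r.get("tags", [])}
               for r in rows]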
Drawing inspiration from previous designs (e.g., <cit.>) and incorporating insights from interdisciplinary literature (e.g., <cit.>), The Synthesis Lab offers a structural framework to guide students’ synthesis process. The workflow within the tool revolves around two primary goals: categorizing peers’ ideas into conceptual building blocks (CBBs) <cit.> and developing a synthesis of the discussions. These goals are achieved through interaction across three vertically organized workspaces: Distill, Analyze, and Synthesize. This organization provides structured workspaces for students to decompose the tasks into intermediate synthesis products: in-source annotations, per-source summaries, and cross-source syntheses <cit.>. The design encourages students to fluidly navigate between these workspaces, allowing them to revisit annotations and thoughts iteratively, recognizing that the synthesis-making process is non-linear in nature.
§.§ Categorizing Peers’ Ideas into CBBs
Once students have selected the reading for analysis, they initiate the synthesis process by browsing through the class annotations. In the Distill column, students are able to filter annotations by key words, authors and tags. Meanwhile,
students start to analyze annotations by creating Annotation Groups in the Analyze column, where they sort annotations into different categories following various strategies. For instance, some students may opt to group annotations by “applications” or “methodology”, while others may group them based on semantic meanings. This step allows students to organize ideas into CBBs, which become the metadata and contextual information for future synthesis work <cit.>. Additionally, students jot down their thoughts in the "In-the-moment Notes" box to document the contextual information surrounding their decisions. This step encourages active analysis of peers’ ideas and the meaningful integration of concepts.
§.§ Developing a Synthesis of the Discussions
Following their analysis of individual annotations, this step prompts students to shift their attention to the Annotation Groups in order to identify connections or reconsider their grouping strategies. For example, they can merge two groups into a new group (combining CBBs into a higher-level CBB) or transfer annotations from one group to another. This process encourages students to repurpose and reinforce their learning by ruminating over the categories and revisiting the annotations/notes <cit.>. Ultimately, students start the synthesis writing phase in the Synthesize column, drawing upon all their existing notes and activities to compose a comprehensive synthesis.
§ CURRENT IMPLEMENTATION AND FUTURE DIRECTIONS
Using a co-design approach, we are collaborating with instructors to develop class activities supported by The Synthesis Lab. To investigate the design enactment, we conducted a pilot study during the Spring 2023 semester in a graduate-level classroom at a large private university. Throughout the study, students actively engaged in social annotation activities on a weekly basis. Within each session, 2 to 3 students assumed the role of discussion leaders, facilitating in-class discussions. To effectively fulfill their role, the discussion leaders were required to synthesize the class annotations in preparation for the meetings. For this study, discussion leaders who volunteered to participate utilized The Synthesis Lab to support their synthesis process.
We collected various learning artifacts from the participants, including their annotations and synthesis writings. Additionally, we conducted follow-up interviews with three participants to gain further insights. Upon preliminary analysis of the collected data, we noticed that the synthesis strategies employed by students varied. For instance, one student adopted a deductive approach by initially creating Annotation Groups based on the abstract, and then assigning relevant annotations to these pre-defined groups. Conversely, another student employed an inductive approach, generating CBBs while reading annotations within the Distill space in an iterative manner. Furthermore, the analysis highlights the potential of this design in fostering a more profound understanding of the value of knowledge synthesis among students. It also demonstrates the capacity of the design to enhance students' synthesis skills and promote collaborative learning.
The first design iteration is currently in progress and is expected to be completed by Fall 2023. Our focus is on enhancing the interactivity between different workspaces, including the implementation of backlinks to maintain the connection between annotations and In-the-Moment Notes. Additionally, we are actively exploring how artificial intelligence (AI) can augment the synthesis process to further enhance the students' experience.
The expected contribution of this work will be three-fold. First, the proposed technology innovation has great potential for broader applications. Further, the investigation of the design's implementation aims to develop a framework of knowledge synthesis in collaborative learning that will make significant contributions to both research and practice. Finally, leveraging the synergies between CSCW and CSCL allows for a deeper understanding of the interplay between technology, social dynamics, and learning constructs within the knowledge synthesis process. This understanding, in turn, allows for the creation of meaningful design infrastructures that can contribute to both fields advancing our understanding of learning and collaboration processes, optimizing technology-supported interactions, and fostering creative knowledge creation.
ACM-Reference-Format
|
http://arxiv.org/abs/2307.05253v2 | 20230711133605 | Precise Image Generation on Current Noisy Quantum Computing Devices | [
"Florian Rehm",
"Sofia Vallecorsa",
"Kerstin Borras",
"Michele Grossi",
"Dirk Krücker",
"Valle Varo"
] | quant-ph | [
"quant-ph",
"hep-ex"
] |
Precise Image Generation on Noisy Quantum Computing Devices]
Precise Image Generation on Current Noisy Quantum Computing Devices
[1,2]Florian [email protected]
1]Sofia Vallecorsa
2,3]Kerstin Borras
1]Michele Grossi
3]Dirk Krücker
3]Valle Varo
*[1]CERN, Geneva, Switzerland
[2]RWTH Aachen University, Aachen, Germany
[3]DESY, Hamburg, Germany
The Quantum Angle Generator (QAG) is a new full Quantum Machine Learning model designed to generate accurate images on current Noisy Intermediate-Scale Quantum (NISQ) devices. Variational quantum circuits form the core of the QAG model, and various circuit architectures are evaluated. In combination with the so-called MERA-upsampling architecture, the QAG model achieves excellent results, which are analyzed and evaluated in detail. To our knowledge, this is the first time that a quantum model has achieved such accurate results. To explore the robustness of the model to noise, an extensive quantum noise study is performed. In this paper, it is demonstrated that the model trained on a physical quantum device learns the noise characteristics of the hardware and generates outstanding results. It is verified that even a hardware calibration change of up to 8% during training can be well tolerated. For demonstration, the model is employed in indispensable simulations in high energy physics required to measure particle energies and, ultimately, to discover unknown particles at the Large Hadron Collider at CERN.
August 12, 2023
===================
§ INTRODUCTION
Quantum computing has the potential to establish a new paradigm in future computing, accelerating tasks or even handling classically unsolvable problems <cit.>. In the current Noisy Intermediate-Scale Quantum (NISQ) era, quantum devices suffer from non-negligible hardware errors, limited connectivity and a limited number of qubits <cit.>. While practical quantum advantage is currently extremely difficult to accomplish, finding the best-suited algorithms to effectively combat the problems of NISQ-era devices remains a widely researched topic. Quantum Machine Learning (QML) is a domain which achieves acceptable results on NISQ devices due to the observed robustness against noise <cit.>.
High Energy Physics (HEP) experiments, such as those at the Large Hadron Collider (LHC) at CERN, require enormous amounts of simulated data for deriving high precision physics results <cit.>. To handle this demand, gigantic quantities of computing hardware resources are necessary, which has led to the creation of the world's largest computing grid operated by CERN <cit.>. To alleviate this strain on computational resources, Machine Learning (ML) models have been developed that exhibit remarkable speed-ups over current Monte Carlo-based simulations while maintaining the required level of accuracy. In general, QML simulations represent a promising approach to address the only further increasing simulation demands in the future <cit.>. QML employs quantum circuits which exploit the quantum properties of superposition and entanglement, which possess the potential to outperform neural networks, their classical analogue <cit.>. In addition, QML might have the advantage of learning more complex distributions with fewer parameters than classical ML due to their wider accessible phase space.
Encoding the classical data into qubit states on quantum computers is a non-trivial task <cit.>. Currently, many encoding techniques exist, each exhibiting specific advantages and disadvantages, and in practice, identifying “the best” encoding technique remains application dependent <cit.>. To achieve a potential quantum advantage over classical computing, theoretical studies suggest that at least linear scaling from qubits to features is required <cit.>. On the other side, models which employ better than linear scaling encoding techniques have drawbacks, making them unsuitable for generating precise images on NISQ devices. For example, amplitude encoding can only generate probability distributions and not absolute pixel entries, i.e. energy values.
At present, there exist several quantum generative models, for example, the Quantum Circuit Born Machine (QCBM) <cit.>, Quantum Variational Autoencoders <cit.> or variations of quantum Generative Adversarial Networks <cit.>. They all face limitations. Some models either do not scale well in terms of qubits required relative to the number of encoded features, or they do not achieve a satisfying level of fidelity. The Quantum Angle Generator (QAG) presented in this paper and first introduced in reference <cit.>, aims to overcome these problems. Employing angle encoding with linear scaling of qubits to features, it achieves extremely accurate results for a real-world problem on current physical noisy quantum devices.
The paper content is structured as follows. First, the HEP use case is motivated, and the training data is defined. Next, the QAG model and the employed angle encoding technique are presented. Then, multiple circuit architectures are compared and the advantages of the best ones are highlighted. An in-depth accuracy analysis of the model follows to highlight its excellent precision. The quantum hardware noise behavior is evaluated, including training and inference executed on real quantum devices. Lastly, conclusions are drawn and summarized.
§ HIGH ENERGY PHYSICS SIMULATIONS
Simulations remain a crucial component of HEP analysis to evaluate the results obtained by the processed data of the experiments. Currently, simulations are dominantly performed with Monte Carlo methods such as the Geant4 toolkit <cit.>. However, Monte Carlo simulations are very hardware resource demanding and occupy half of the worldwide LHC Computing Grid <cit.>. Future LHC experiments will require more simulations due to more energetic particles, simultaneous collisions and detectors constructed with higher granularity. However, the projected budget for hardware development and computing resources cannot keep pace with these increasing demands <cit.>. As a result, ML alternatives to Monte Carlo methods are being actively researched. Initial prototypes predict significant reductions in simulation time and hardware resources while retaining acceptable levels of accuracy <cit.>. This research goes one step beyond classical ML. Since HEP data sets are generally created by underlying quantum mechanical effects, performing the computations on quantum devices which likewise make use of quantum effects has the potential to substantially enhance the simulations in accuracy and in terms of sustainable computing.
Electromagnetic calorimeters are constructed as high granularity sensor grids to measure the energy of photons, electrons, and positrons through complex particle shower generation processes in space and time <cit.>. They constitute a key component of HEP detectors to measure the energy of the particles produced in the interaction process and occupy most of the simulation time <cit.>. Calorimeter outputs can be interpreted at lowest order as static and spatial 3D images, which we call “shower images”: the value of each pixel corresponds to the energy measured in a specific calorimeter cell. The initial data from reference <cit.> consists of 25 × 25 × 25 pixel images. An example of a 3D shower image is visualized in figure <ref>. To reduce the dimensionality, the images are averaged along two spatial axes (x- and y-direction), resulting in a one-dimensional representation that is further downsampled to eight pixels by averaging three contiguous pixels along the z-direction. Although the initial data set provides many different energies, for simplicity this study focuses on images recorded by particles in the energy range of [225, 275] GeV. The data set is split into a training and test set, each consisting of approximately 1 000 samples. The downsampled data is available in reference <cit.>, and an example image is illustrated later in this paper in figure <ref>.
§ THE QUANTUM ANGLE GENERATOR
The QAG represents a QML model which employs the well established technique of angle encoding <cit.> to generate extremely precise images.
It scales linearly with the number of encoded features. Thus, the generation of n features requires n qubits. In this study, the number of features corresponds to the number of pixels. A comprehensive description of the QAG model, its objective training function, and an evaluation of several quantum circuits are provided below.
§.§ Model Description
The QAG model consists of variational quantum circuits trained by an objective function. The model structure is visualized in figure <ref>. All qubits are initialized in the basis state |0⟩. The state preparation function implements a Hadamard (H) gate to create superposition, followed by a y-rotational (Ry) gate to introduce randomness so that the model can draw new samples at each execution. For this, the Ry gate angles Ω are randomly drawn from a [-1, 1] uniform distribution and pixel-wise multiplied by the pixel standard deviations present in the training data to obtain correct pixel energy variations. To account for the various primary particle energies, all angles Ω are additionally multiplied by a random value between [-0.25, 0.25].
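A minimal Qiskit sketch of this state preparation step is given below; the pixel standard deviations are placeholders, and the trainable unitary (e.g. the MERA-up ansatz discussed later) would be appended to the returned circuit.

import numpy as np
from qiskit import QuantumCircuit

n_pixels = 8
pixel_std = np.full(n_pixels, 0.05)           # placeholder standard deviations

def state_preparation(rng):
    qc = QuantumCircuit(n_pixels)
    omega = rng.uniform(-1, 1, n_pixels) * pixel_std    # random Ry angles
    omega *= rng.uniform(-0.25, 0.25)         # account for varying primary energies
    for q in range(n_pixels):
        qc.h(q)                               # superposition
        qc.ry(omega[q], q)                    # randomness for sampling
    return qc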
The unitary transformation consists of quantum circuits and constitutes the trainable part of the QAG model. Various circuit architectures were tested as documented in section <ref>.
To convert the quantum states back into classical energy values via angle encoding, the model must be executed multiple times and the quantum states measured. The number of executions is commonly denoted as the number of shots nb_shots. It is counted how often state |0⟩ is measured. The scalar intersection I of the vertical axis on the Bloch sphere (z-axis) is calculated with:
I = (2 · counts(|0⟩) / nb_shots) - 1 .
Next, the intersection I is transformed into the rotational angle θ by the trigonometric function θ = arcsin(I).
The angle θ operates in the x-z-plane of the Bloch sphere and is defined as zero in the |+⟩ state. Rotating θ clockwise leads to positive angles. The decoding process is visually illustrated in figure <ref> with an example state |Ψ⟩, its intersection I and the corresponding angle θ for a single qubit example. The angle θ can then be transformed into a pixel energy E by the linear change of ranges equation:
E = (E_max - E_min) · (θ - θ_min) / (θ_max - θ_min) .
E_max, E_min, θ_max and θ_min are defined in figure <ref>: the minimum energy E_min = 0 MeV is set to θ = -π/2 and the maximum energy E_max = 0.6 MeV is set to θ = +π/2.
With E_min=0 and θ_max= -θ_min equation <ref> can be simplified into:
E = ( E_max / (2 · θ_max) ) · (θ + θ_max) .
For multi-qubit quantum circuits: the final quantum states of all qubits are measured, and the outcomes are decoded independently. Although the qubit results are individually decoded, the state of each qubit is entangled with others due to the gates applied within the variational quantum circuit, as lined out later.
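The decoding of a single qubit can be summarized in a short function following equations (1)-(3); this is a sketch of the procedure described above, not the original implementation.

import numpy as np

E_MAX, THETA_MAX = 0.6, np.pi / 2             # E_min = 0, theta_min = -pi/2

def decode_qubit(counts_zero, n_shots):
    intersection = 2 * counts_zero / n_shots - 1            # equation (1)
    theta = np.arcsin(intersection)
    return (E_MAX / (2 * THETA_MAX)) * (theta + THETA_MAX)  # equation (3)

# e.g. a qubit measured in |0> for 384 out of 512 shots yields E = 0.4
print(decode_qubit(384, 512))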
It is worth noting that the angle θ and, therefore, the corresponding decoded energy E take discrete values. The value and resolution of θ depend on the number of shots nb_shots: the larger nb_shots, the better the achievable energy precision and resolution. Fortunately, on present quantum devices nb_shots can easily be chosen to be large; currently, the maximum number of shots on IBMQ devices is nb_shots = 100 000. For the simplified calorimeter use case, this is more than sufficient.
For comparison, a previous study on reduced-precision classical ML <cit.> demonstrated that 256 discrete energy levels are already sufficient to correctly reproduce the full-size calorimeter shower image. In that work, the parameters of the neural network are quantized from a larger number format (32-bit floating point) down to a smaller one (8-bit integer). The present study will show that 512 shots provide a sufficient resolution.
The QAG model is trained with two losses employed as objective functions. The first one is the Maximum Mean Discrepancy (MMD) loss, already successfully applied by other quantum models, e.g. the QCBM <cit.>. Training exclusively with the MMD loss resulted in good average shower distributions. However, when exploring the generated images in more detail, for example their pixel correlations, the model did not perform satisfactorily. Therefore, a second correlation (Corr) loss is added to help learn the patterns present in the training data (e.g., image pixel correlations). The Corr loss is calculated as the pixel-wise mean squared error (MSE) between the pixel correlations present in the training data and those in the generated data. The pixel-wise correlations are illustrated in figure <ref>.
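As an illustration, the Corr loss can be written in a few lines of NumPy, assuming batches of reference and generated images with shape [n_images, n_pixels]; this is a sketch of the described loss, not the original implementation.

import numpy as np

def corr_loss(real_images, generated_images):
    corr_real = np.corrcoef(real_images, rowvar=False)      # pixel-wise correlations
    corr_gen = np.corrcoef(generated_images, rowvar=False)
    return np.mean((corr_real - corr_gen) ** 2)             # pixel-wise MSE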
To train the QAG model, the Simultaneous Perturbation Stochastic Approximation (SPSA) optimizer <cit.> is employed, which only requires two optimization steps per epoch. The hyperparameters for training were found by extensive hyperparameter searches employing the Optuna <cit.> library. All tests in this study are executed in Qiskit version 0.26.2. The models are trained for 500 epochs, each containing one batch. The dynamic MMD loss weight starts at a value of one and decays linearly with -0.001 · epoch, starting from epoch 100. Conversely, the Corr loss weight starts at zero and increases by the same value. The dynamic training batch size is set to generate one image in the first hundred epochs and 20 images afterwards, to calculate the Corr loss between multiple images. Each quantum job contains 512 shots for training and inference. The generator SPSA optimizer learning rate is set to c_0=1 with an exponential learning rate decay of 0.006 starting from epoch 50. All these settings showed the best performance in our tests.
§.§ Quantum Circuit Study
The ideal circuit should contain a minimal number of parameters while still achieving a sufficient level of accuracy. Different circuit architectures were compared to each other based on characteristic numbers expressing the power of the circuit. The circuits employ trainable rotational gates and two-qubit entanglement gates. As the angle encoding primarily uses the qubits' y-axis component, we predominantly employ Ry gates. For some circuit architectures, we test whether additional z-rotational gates (Rz gates) or deeper circuits with depth 2 (denoted as d2) can further improve the results. We use two-qubit controlled-NOT gates (cx gates), which are native on IBM Quantum (IBMQ) <cit.> devices, whereas other entanglement gates are compositions of multiple native hardware gates. Keeping in mind the goal of executing the training on a real quantum device, the absolute number of decomposed gates should be kept as small as possible.
The characteristic circuit numbers used in this study are the number of trainable parameters N_p, the expressibility X and the entanglement capability E. Larger circuits with more trainable parameters are potentially capable of achieving more accurate results. However, it might be that a plateau is reached at some point where the same task can be solved with similar accuracy by a smaller circuit, which is being investigated here. The definitions for X and E are from reference <cit.>: X describes how well the circuit can represent the pure states of the representative Hilbert space. For a single qubit the expressibility exhibits how many states of the Bloch sphere can be represented. In this paper, we measure 1-X: the closer to 1 the better the expressibility of the model, while the closer to 0 the worse. The entanglement capability E is a measure that expresses the ability of a circuit to generate entangled states between the qubits. Likewise, E ranges from 0 to 1, where 1 represents the best achievable value. The circuit architectures under study are introduced in the appendix <ref>. Their corresponding characteristic numbers and theoretical potential are provided in the appendix <ref>. In the following, the circuits are evaluated for the calorimeter use case.
We start by interpreting the results displayed in figure <ref>. The MSE accuracy metric is calculated by taking the pixel-wise MSE between the average Geant4 and QAG images. The training is repeated 25 times for each circuit and the mean and standard deviations are plotted. To prevent the influence of outliers, the best and worst two trials are discarded in this analysis. The MSE is given as a function of: N_p on the left, X in the middle, and E on the right. By inspecting the plots, it can be recognized that the MSE does not correlate with any of the characteristic circuit values in the plots, neither do the characteristic values correlate among themselves, as shown in the appendix <ref>.
The MERA-up, MERA-up_d2, and MERA-up_Rz architecture perform best with the lowest MSE. This is consistent with the observation that they maintain a high X and E, as provided in the appendix <ref>. The error bars provide a hint about the training stability. It can be observed that the better the average MSE of a model, the smaller its standard deviation.
All in all, the MERA-up_Rz circuit clearly performs best considering the characteristic circuit values and the achieved accuracy in training. However, with the emphasis on a low number of N_p, the plain MERA-up circuit performs almost as well, while needing only half the number of parameters. Therefore, the following studies employ the MERA-up circuit for training the QAG model.
§ IN-DEPTH ACCURACY ANALYSIS
In this section, we analyze the results of the QAG model operating the MERA-up circuit architecture. We showcase typical accuracy metrics for the calorimeter simulation in HEP.
§.§ Training Evaluation
In a first step, the statistical trends of the objective functions during training are investigated. In figure <ref>, the unweighted loss functions (excluding loss weights) are plotted as a function of the training epochs. The mean of twenty training repetitions is visualized as a thick solid line and the standard deviation (STD) as a colored band. The Corr loss starts influencing the training only at epoch 100 because its weight is set to zero before.
The MMD loss of all training repetitions converges stable, and the STD band narrows towards the end of the training. Overall, the MMD and Corr loss converge smoothly without strong oscillations, which is a desirable characteristic for stable (Q)ML training. The MMD loss contributes far more than the Corr loss throughout the training. However, the Corr loss plays a significant role in achieving good physics accuracy in the generated shower images.
§.§ Inference Evaluation
In the following, the accuracy in inference is evaluated. The generated images of the best trained model are compared to the Geant4 test data, which consists of 980 images. Likewise, there are 980 images generated by the QAG model to create the following accuracy metrics.
1. Average calorimeter shower shape:
The first metric represents the calorimeter shower shape displayed in figure <ref>. The shower shape is perfectly reproduced by the QAG model. The MSE corresponds to 0.00059±0.00037, which is extremely close to zero, indicating a very good accuracy.
2. Pixel-wise correlation:
The second metric corresponds to the pixel-wise image correlation. The positive or negative correlation patterns between all the pixels are determined. The baseline represents the correlation from Geant4 in figure <ref>. The correlation for the generated data of the QAG model is presented in figure <ref>. It can be seen that the overall correlation pattern is accurately reproduced by the QAG model. Like the Geant4 data, it consists of a larger and more compact positively correlated group of pixels. The other pixels are negatively correlated. Inspecting the particular details, different color shades indicate some minor deviations. However, the correlation precision achieved by the QAG model is remarkable. Therefore, it can be concluded that the quantum circuits are capable of reproducing complex correlation patterns through substantial entanglement strategies, as present in the MERA-up architecture.
3. Energy sum:
The total energy contained in all pixels, calculated for each individual image, represents the third accuracy metric. In figure <ref>, the energy sum histogram of the Geant4 images reveals a Gaussian shape, which is correctly reproduced by the QAG model; this is confirmed visually as well as by the mean μ and the standard deviation σ.
4. k-means Clusters:
This metric evaluates whether the QAG model can correctly represent specific image modes. The Geant4 data is clustered with the k-means algorithm <cit.> to find four clusters or image modes, as illustrated in figure <ref>. Images in cluster 0 deposit substantially larger fractions of energy in earlier calorimeter cells than those in higher-numbered clusters. Further, the particles from clusters 0 and 1 contain a larger energy fraction, as estimated by the integral area below the curves. Here we are interested in whether the QAG model can reproduce this behavior. The four clusters of the QAG images are provided in figure <ref>. A similar structure can be observed, which indicates a good accuracy in reproducing the energy contents and image modes on average.
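For illustration, the clustering step can be reproduced with scikit-learn as sketched below, assuming the Geant4 images are stored in an array of shape [n_images, 8]; the random data is a placeholder.

import numpy as np
from sklearn.cluster import KMeans

images = np.random.rand(980, 8)               # placeholder for the Geant4 images
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(images)
for k in range(4):
    members = images[kmeans.labels_ == k]
    print(f"cluster {k}: {len(members)} images, mean profile {members.mean(axis=0)}")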
5. Pixel-wise energy distribution:
The last metric is used to examine the distributions of the energy content of each pixel, as illustrated in figure <ref>. Overall, the histograms of the QAG model match those of the Geant4 model. Even pixels with non-Gaussian energy distributions in the Geant4 model are correctly reproduced by the QAG model. For example, the longer tail towards smaller energies on the left side of pixel 4 is equally present in the histogram for the QAG model. The large histogram overlaps indicate that not only the averages are reproduced with high accuracy, but also the energy distributions returned for each individual pixel are correct.
§ QUANTUM NOISE STUDY
In the current NISQ era, relatively high hardware error levels are one of the primary limitations to effectively employing algorithms on real quantum devices. Similar to the classical case, QML models appear to be resilient to some degree of hardware noise <cit.>. In the following, the robustness of the QAG model to simulated noise is tested in inference and training. Furthermore, training and inference are executed on real quantum devices with measured noise levels and compared to the results with simulated noise.
§.§ Inference
In a first step, quantum noise is applied only to the inference of a model trained without noise. Inference is performed using three different noise configurations: simulated noise at varying levels, simulated noise derived from the later used hardware, and finally, with the real quantum hardware. In the simulated noise configurations, each qubit noise is modeled with the same readout measurement error, and each inter-qubit connection with the same two-gate (CNOT) error. In the combined noise model both are on the same level. In contrast, the hardware noise levels can vary widely for individual qubits as well as for gates.
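A sketch of such a uniform noise configuration in Qiskit is given below; modelling the two-qubit (CNOT) error as a depolarizing channel is our assumption for illustration, and the import path may differ between Qiskit versions.

from qiskit.providers.aer.noise import NoiseModel, ReadoutError, depolarizing_error

def uniform_noise_model(p_readout, p_cnot):
    noise_model = NoiseModel()
    ro_error = ReadoutError([[1 - p_readout, p_readout],
                             [p_readout, 1 - p_readout]])
    noise_model.add_all_qubit_readout_error(ro_error)       # same error on every qubit
    noise_model.add_all_qubit_quantum_error(depolarizing_error(p_cnot, 2), ["cx"])
    return noise_model

noise_model = uniform_noise_model(0.015, 0.015)             # e.g. 1.5% readout and CNOT noise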
Multiple noise configurations and error levels from zero up to 15% are tested. The MSE is utilized as the accuracy measure. The results are illustrated in figure <ref>. For each configuration, the average value of 20 generated images is plotted in dependence of the noise level as a solid line and the standard deviation as a colored band around the mean. The gray horizontal line serves as accuracy reference for the noise-free configuration.
All configurations maintain sufficient accuracy up to approximately 1.5% of noise. The configuration with readout noise only (green) is most robust and maintains a stable accuracy up to 8% of noise. CNOT noise only (blue) and readout and CNOT noise combined (orange) experience a stronger impact. As expected, the combined configuration (orange) performs worst. As a side note, current quantum devices have average noise levels below 5%, which are expected to gradually decrease further. However, as discussed in the following, the noise levels are unstable and can sometimes spike. Therefore, wider noise ranges were investigated.
Next, the inference is run by loading the real hardware noise model from into the simulator. The device consists of a Falcon r4 processor with 27 qubits. The average readout noise over the qubits employed at the time of the test is 2.51% and the average gate noise level is 0.97%. The explicit noise model, containing the entries for each qubit, is provided in the appendix in figure <ref>. The result is included in figure <ref> (blue triangle). The noise level position (x-axis) is determined as the average of readout and CNOT noise. Although the noise levels of the qubits vary strongly, the measured accuracy of the hardware noise simulation agrees well with the simulated noise in mean and standard deviation within the uncertainties. This suggests that a model trained without any noise would theoretically be able to run inference on noisy hardware without a significant drop in accuracy.
Finally, the inference is executed on the real device. The result is plotted as a red triangle in figure <ref>. The accuracy on the real hardware is worse than predicted by the simulation, as indicated by a larger MSE value. The decomposition of the circuit to the real hardware includes swap operations, which imply additional two-qubit entanglement gates for the quantum circuit. It is possible that these are not included in the hardware noise simulation and lead to higher noise influence on real hardware and thus to worse results.
§.§ Training
The noise study is repeated to include noise also during training. We study whether the QAG model can learn to compensate for noise in training, especially when running on the real quantum device. Furthermore, we investigate up to which noise levels the model can still maintain a reasonable accuracy.
The results are provided in figure <ref>. The configurations with readout noise (green) and CNOT noise (blue) maintain a similar level of accuracy until approximately 3%. The accuracy of the combination of readout and CNOT noise (orange) decreases marginally from 1% of noise onwards and further with larger noise levels. However, at 3% of noise, its accuracy is still close to the noiseless case, staying within one standard deviation. This indicates that training the model with noise makes the QAG model more robust than applying noise only in inference to a model trained without noise.
Next, two quantum devices, and , are simulated; both are 27-qubit devices, but has the more advanced Falcon r5.11 processor. Likewise, the training is repeated ten times and the results are added as blue and orange triangles in figure <ref>. It can be noted that the average accuracy of the training with simulated hardware noise is slightly worse than that with the simulated combined noise model (orange line). The strong overlap between the error from the noise simulation (orange band) and the hardware noise simulation (orange and blue error bars) indicates that the accuracy difference is statistically not significant.
Finally, the entire training was executed on the real quantum device. First, the training was carried out on . During training, around epoch 280, an unexpected, significant noise change occurred and the readout noise of one qubit increased to 8%, as shown in figure <ref>. As a result, the MMD loss spiked, as shown in figure <ref>. This negatively influenced the training. However, after the noise change, the model recovered and adapted to the new noise environment, and in the remaining epochs the loss decreased to a modest level. The training was then repeated on the best-performing device without a hardware calibration change during training; the training losses are shown in figure <ref>.
The results of both hardware trainings are included as red and green triangles in figure <ref>. The error bars correspond to the accuracy deviations within 50 generated validation images. It can be observed that the average accuracy of the hardware training (red) is visibly worse than that of the noise simulation (blue). This is most likely due to the large noise increase and the fact that the model did not have enough remaining epochs to fully recover, as also indicated by the still decreasing losses towards the end of the training. As mentioned above, the training was repeated on the more stable machine . This time, the training completed without calibration changes and with an about 1% lower hardware noise level (readout noise 0.86% and CNOT noise 0.89%). The accuracy of the hardware training on (green) is only marginally worse than that from the simulator (orange). This indicates that the simulated and real hardware results behave similarly at low hardware noise levels and fulfills the expectation derived from the pure simulation, which exhibits only statistically insignificant accuracy variations at these low noise levels.
Comparing the absolute MSE magnitude of the real hardware trainings (≈0.002 and ≈0.003) with that of the inference-only run from the previous section (≈0.005), the accuracy improved, suggesting that the QAG model is able to adapt its parameters to the noisy hardware and improve its precision. This is also confirmed by the training, where the accuracy deteriorated after the calibration change but then recovered. In the appendix, the average shower image created by the device is visualized in figure <ref>, and the one created by in figure <ref>. The shower image of agrees well with Geant4, whereas exhibits some deficiencies because of the insufficient number of training epochs remaining after the calibration change.
§ CONCLUSION
The results of this study clearly demonstrate that the newly developed QAG model is capable of generating images with good precision, as measured with a variety of validation metrics. This includes correctly reproducing average values but, most importantly, also complex pixel-wise correlations with the chosen optimal MERA-up quantum architecture. These results reveal that the QAG model with a well-entangled circuit is capable of learning intrinsic correlation patterns from the training data.
Our study exhibits the significant impact that quantum hardware noise can have on the accuracy of quantum machine learning models. The results show that training the models with noise leads to better performance (stable up to 3% noise), because the QAG model adapts to the underlying noise behavior and converges faster, in contrast to applying noise in inference only (stable up to 1.5% noise). This was also verified by the training on real hardware. Furthermore, our study shows that the QAG model is robust and can produce accurate results even under significant hardware calibration changes with up to 8% noise, as demonstrated by the training on .
§.§.§ Acknowledgment
The authors would like to express their sincere gratitude to Simon Schnake and Alexis-Harilaos Verney-Provatas for their help, advice, and proofreading, which improved the quality and outcome of this paper.
This work has been sponsored by the Wolfgang Gentner Programme of the German Federal Ministry of Education and Research and by the CERN Quantum Technology Initiative. Additionally, we would like to acknowledge partial funding from the BMBF Project "NiQ: Noise in Quantum Algorithms" within the BMBF program "Anwendungsnetzwerke für das Quantencomputing" and from the Deutsches Elektronen-Synchrotron DESY, a member of the Helmholtz Association (HGF).
Access to the IBM Quantum Services was obtained through the IBM Quantum Hub at CERN. The views expressed are those of the authors and do not reflect the official policy or position of IBM or the IBM Quantum team.
§ QUANTUM CIRCUIT ARCHITECTURES UNDER STUDY
The quantum circuit architectures investigated in this paper are summarized in figure <ref>. In general, we observed that hierarchical architectures perform best while maintaining a reasonable number of quantum gates and parameters. Specifically, we examine the Tree Tensor Network (TTN) architecture and the Multi-scale Entanglement Renormalization Ansatz (MERA) introduced in reference <cit.>. Multiple variations of these circuits are tested: 1) circuits with a depth of two (naming scheme "d2"), which contain two layers with all circuit gates placed twice, to evaluate whether deeper circuits perform better; 2) circuits with additional Rz-gates after each Ry-gate (naming scheme "Rz"), to assess whether rotations around an additional axis can improve the accuracy. Both architecture variants double the number of parameters of the initial circuit.
For the MERA architecture, further variations are analyzed in which only the right half of the original circuit is implemented, denoted as the MERA upsampling (MERA-up) circuit. In the MERA-up circuit, the information is upsampled from the central qubit and spread to all other qubits, similar to classical generative ML models such as transposed convolutional neural networks (see the sketch below). The left half of the MERA circuit, the MERA downsampling (MERA-down) architecture, is not tested, because it compresses information, as required for classification tasks, which are not considered in this paper. The last architecture uses a simple linear entanglement strategy.
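To make the upsampling idea concrete, the following is a purely illustrative Qiskit sketch of a hierarchical ansatz that rotates a central qubit and then entangles outward layer by layer; the gate placement is our own simplification and does not reproduce the exact MERA-up layout from the referenced architecture figure.

from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

def upsampling_ansatz(n_qubits: int = 4) -> QuantumCircuit:
    # toy hierarchical "upsampling": rotate the central qubit, then entangle
    # outward so the information spreads to all qubits
    qc = QuantumCircuit(n_qubits)
    center = n_qubits // 2
    qc.ry(Parameter("theta_0"), center)
    visited, frontier, k = {center}, [center], 1
    while len(visited) < n_qubits:
        next_frontier = []
        for src in frontier:
            for tgt in (src - 1, src + 1):
                if 0 <= tgt < n_qubits and tgt not in visited:
                    qc.cx(src, tgt)                      # spread entanglement outward
                    qc.ry(Parameter(f"theta_{k}"), tgt)  # trainable local rotation
                    visited.add(tgt)
                    next_frontier.append(tgt)
                    k += 1
        frontier = next_frontier
    return qc

print(upsampling_ansatz(4).draw())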
In reference <cit.>, multiple complex four-qubit architectures are compared. However, these architectures are not used in our study, mainly because most of them contain many more gates and parameters and do not scale well beyond four-qubit circuits. In addition, many of these architectures employ parameterized two-qubit rotational gates, which we found, taking their decomposition into account, to be less effective than the combination of separate rotation and entanglement gates.
§ CHARACTERISTIC CIRCUIT NUMBERS
The results of evaluating the characteristic circuit numbers (number of parameters N_p, expressibility X, and entanglement capability E) are shown in figure <ref>. In figure <ref>, E is plotted as a function of X. The MERA_Rz and MERA-up_d2_Rz architectures perform best and lie almost on top of each other in the top right corner, directly followed by MERA-up_Rz and TTN_Rz. The Linear architecture and the basic TTN perform worst. Circuits with more gates (Rz, d2) clearly perform better. N_p is studied in figure <ref> and figure <ref>. The best two circuits, MERA_Rz and MERA-up_d2_Rz, contain by far the largest number of parameters. However, a more limited number of parameters is desirable to address NISQ hardware limitations. The MERA-up_Rz and TTN_Rz architectures contain approximately half as many parameters as the MERA_Rz and MERA-up_d2_Rz circuits but perform almost as accurately; they may therefore be preferred in practice. Reducing the number of parameters further, the MERA-up architecture with only N_p=23 parameters maintains adequate values for E and X and serves as the baseline circuit for the more detailed studies. All characteristic circuit numbers measured in the circuit study are summarized in table <ref>.
§ FULL TRAINING ON QUANTUM DEVICE
The average shower images of the quantum hardware training are correctly reproduced, as shown in figure <ref>. The correlation plot for the training is provided in figure <ref>; the overall correlation pattern is correctly reproduced. All in all, the model trained on shows good performance.
False Sense of Security: Leveraging XAI to Analyze the Reasoning and True Performance of Context-less DGA Classifiers
[email protected]
RWTH Aachen University
[email protected]
RWTH Aachen University
The problem of revealing botnet activity through Domain Generation Algorithm (DGA) detection seems to be solved, considering that available deep learning classifiers achieve accuracies of over 99.9%.
However, these classifiers provide a false sense of security as they are heavily biased and allow for trivial detection bypass.
In this work, we leverage explainable artificial intelligence (XAI) methods to analyze the reasoning of deep learning classifiers and to systematically reveal such biases.
We show that eliminating these biases from DGA classifiers considerably deteriorates their performance.
Nevertheless, we are able to design a context-aware detection system that is free of the identified biases and maintains the detection rate of state-of-the-art deep learning classifiers.
In this context, we propose a visual analysis system that helps to better understand a classifier's reasoning, thereby increasing trust in and transparency of detection methods and facilitating decision-making.
[300]Security and privacy Intrusion detection systems
[300]Computing methodologies Machine learning
Arthur Drichel
Ulrike Meyer
August 12, 2023
===================
Copyright held by the owner/author(s) 2023. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive version was published in The 26th International Symposium on Research in Attacks, Intrusions and Defenses (RAID ’23), https://doi.org/10.1145/3607199.3607231
§ INTRODUCTION
In recent years, deep learning has been increasingly used as a building block for security systems incorporating classifiers that achieve high accuracies in various classification tasks.
The advantage of deep learning classifiers is that they often outperform classical machine learning approaches, can be trained in an end-to-end fashion, and automatically learn to extract relevant features for classification.
Therefore, less effort is often expended in creating such classifiers, since they seem to achieve high accuracies out-of-the-box and do not require the integration of domain knowledge as would be required to create feature-based or rule-based classifiers.
This black-box nature of deep learning classifiers is particularly dangerous in the security domain, as the classifiers operate in an adversarial environment where an attacker actively aims to avoid detection.
Since it is unclear what a classifier has learned, not only is its operation opaque, leading to trust issues, but it is also unclear whether the training data might have influenced a classifier in a way that an attacker could easily bypass the classification.
Related work <cit.> has identified and summarized common pitfalls when using machine learning in computer security, including pitfalls that make it easier for an attacker to evade detection.
These pitfalls range from sampling bias, where the data used does not adequately represent the true data distribution, over inaccurate ground-truth labels, to incorporating spurious correlations, where artifacts unrelated to the classification problem provide shortcuts for distinguishing classes.
To uncover potential classification biases introduced by these pitfalls, related work suggests using explainability techniques for machine learning.
However, it remains unclear which strategy is appropriate to mitigate identified problems.
In this work, we systematically apply explainability techniques to the use-case of Domain Generation Algorithm (DGA) detection to reveal a variety of biases in state-of-the-art deep learning classifiers.
We then evaluate the loss in classification performance induced by the elimination of these biases from the classifiers and propose a classification system that is free of the identified biases.
We focus on DGA detection because for this use-case a plethora of research exists, the state-of-the-art classifiers that achieve accuracies up to 99.9% are open source, and domains generated by different DGAs are publicly available in bulk through open source intelligence (OSINT) feeds such as DGArchive <cit.>. This allows us to replicate the results of related work before performing a critical analysis of automatic feature extraction.
To this end, we first conduct an extensive evaluation of a variety of different explainability techniques including recent developments.
Then, we demonstrate how these methods can be used to debug and improve the understanding of state-of-the-art classifiers.
In this context, we identify features and classification biases and show how this knowledge can be exploited to evade detection with ease.
To address these issues, we propose a classification system free of the identified biases combined with a visualization system that supports analysts in Security Operation Centers (SOCs), increases transparency and confidence in detection methods, and facilitates decision-making.
Finally, as a secondary contribution, we use the knowledge gained from our study to improve the state-of-the-art deep learning as well as feature-based approaches for DGA multiclass classification in terms of classification performance and efficiency.
Overall, we thus provide a systematic approach to expose biases and analyze the reasoning of deep learning classifiers for DGA detection.
While some of these biases may seem obvious and easily avoidable, they are present even in DGA detection approaches proposed at leading security conferences (e.g., <cit.>).
Moreover, these biases are rooted in subtle flaws that are rife in security research and affect many other use-cases as well <cit.>.
Thus, with this work we aim to raise awareness of potential pitfalls in state-of-the-art classifiers that allow bypassing detection, and provide helpful guidance in conducting a similar analysis also for different use-cases.
While features and biases are highly domain specific, the generation of explanations is completely independent of the underlying classification task.
Hence, the fundamental idea of leveraging XAI to improve machine learning classifiers is applicable to a variety of different use-cases (e.g., phishing detection, malware detection, vulnerability discovery, or general network intrusion detection).
§ PRELIMINARIES
The self-learned features of a deep learning classifier and thus potential biases in its classification decision are mostly use-case dependent.
It is thus fundamental to understand the specifics of the classification task at hand, including the data used by state-of-the-art classifiers and the data preprocessing applied.
§.§ Domain Generation Algorithm Detection
Domain Generation Algorithms (DGAs) are used by malware infected devices to contact the botnet master's command and control (C2) server for updates or instructions (e.g., the target IP for a distributed denial-of-service (DDoS) attack).
DGAs are pseudo-random algorithms which generate large numbers of domain names that the bots query one by one.
The advantage of this approach over using fixed IP addresses or fixed domain names is that it creates an asymmetric situation where the botnet master only needs to register one domain, but the defenders have to block all generated domains.
The botnet master knows the seed and the generation scheme and can thus register a DGA-generated domain in advance.
When the bots query this domain, they get the valid C2 server's address, while all other queries result in non-existent domain (NXD) responses.
§.§ State-of-the-Art Classifiers
To combat DGAs, binary detection approaches have been proposed in the past, capable of distinguishing benign domains from DGA-generated domains with high probability and low false-positive rates (e.g., <cit.>).
Going a step further, multiclass classifiers have been proposed that can not only separate benign domains from DGA-generated domains, but are also able to associate malicious domains with the DGA that generated them, allowing for the identification and targeted remediation of malware families (e.g., <cit.>).
In general these approaches can be divided into two groups: context-less (e.g., <cit.>) and context-aware (e.g., <cit.>) approaches.
Context-less approaches work exclusively with information that can be extracted from a single domain name, while context-aware approaches use additional information, such as statistical data from the monitored network, to further improve detection performance.
Previous studies (e.g., <cit.>) have shown that context-less approaches achieve similar or even higher performance while requiring fewer resources and being less intrusive than context-aware approaches.
Furthermore, the machine learning classifiers can additionally be divided into feature-based classifiers such as support vector machines (SVMs) or random forests (RFs) (e.g., <cit.>), and feature-less (deep learning-based) classifiers such as recurrent (RNNs), convolutional (CNNs), or residual neural networks (ResNets) (e.g., <cit.>).
Previous studies (e.g., <cit.>) have shown that feature-less approaches achieve superior classification performance.
The currently best deep learning-based classifier for binary and multiclass classification is ResNet <cit.>.
Hence, we analyze the reasoning of this particular classifier in detail.
In addition, we use the insights gained from our analysis to identify missing features in EXPLAIN <cit.>, currently the most powerful feature-based multiclass classifier, and seek to bring its classification performance up to the state-of-the-art level.
In the following, we briefly introduce both classifier types.
Detailed information on the implementations of each classifier can be found in <cit.>.
§.§.§ ResNet
Drichel et al. <cit.> proposed ResNet-based models for DGA binary and multiclass classification.
The classifiers are constructed from residual blocks containing skip connections between convolutional layers to counteract the vanishing gradient problem.
B-ResNet, the proposed binary classifier, uses only one residual block with 128 filters per convolutional layer while M-ResNet, the multiclass classifier, is more complex and composed of eleven residual blocks with 256 filters.
§.§.§ EXPLAIN
The authors of EXPLAIN <cit.> proposed several variants of their feature-based and context-less DGA multiclass classifier.
The best performing model is a one-vs.-rest variant of a RF that extracts 76 features for each domain name to be classified, which can be categorized into 51 linguistic, 19 statistical and 6 structural features.
§.§ Data
To train machine learning classifiers for DGA classification, domain names labeled with the DGA that generated them are widely available in OSINT feeds such as DGArchive <cit.>.
Benign training data can either be obtained by monitoring real networks or generated artificially based on public top sites rankings such as Tranco <cit.>.
The problem with artificial data is that it may not accurately reflect real network traffic and thus may introduce bias and lead to misleading results.
Further, the domain names included in public top sites rankings are on the resolving side of the DNS traffic because they are registered.
Since most DGA-generated domains are not registered, additional bias may be introduced when they are paired with registered benign domain names for training.
Due to these reasons, several approaches (e.g., <cit.>) focus on the classification of non-resolving DNS traffic (NX-traffic).
Moreover, the focus on NX-traffic offers a number of other advantages:
First, NX-traffic is easier to monitor because its volume is an order of magnitude smaller than the volume of full DNS traffic.
Monitoring NX-traffic still allows us to detect malware-infected machines before they are instructed to participate in malicious actions, as DGAs can usually be detected in NX-traffic long before they resolve a registered domain for their C2 server.
Second, NXDs are less privacy-sensitive compared to resolving domain names, as they generally do not contain user-generated domains, with the exception of typo domains.
Although NXDs may still contain sensitive information about an organization as a whole, the classification of NX-traffic seems better suited to a Classification-as-a-Service (CaaS) setting.
Finally, it has been shown that classifiers trained on NX-traffic are more robust against certain adversarial attacks compared to classifiers trained on resolving traffic <cit.>.
In this work, we follow the suggestions of related works and focus on the classification of NX-traffic.
In the following, we briefly describe our data sources.
§.§.§ DGArchive
We use the OSINT feed of DGArchive <cit.> to obtain DGA-labeled domains.
At the time of writing the feed contains approximately 123 million unique samples generated by 106 different DGAs.
§.§.§ University Network
We extract benign-labeled domain names from traffic recordings of the central DNS resolver of the campus network of RWTH Aachen University.
This network includes several academic and administrative networks, dormitory networks, and the network of the affiliated university hospital.
We selected a one-month recording of NXDs from mid-October 2017 until mid-November 2017 containing approximately 35 million unique NXDs for our evaluation.
We deliberately chose an older NX-traffic recording because in our study we also want to evaluate whether a classifier learns time-dependent artifacts of a specific network or whether it generalizes well to new environments and is time-robust.
We filter all NXDs from this data source using DGArchive to remove potentially malicious domains.
Although the data may still contain mislabeled samples, the only way to avoid this problem is to use artificial data which may not accurately reflect real network traffic and thus may introduce additional bias.
§.§.§ Company Network
A second source for benign-labeled data are recordings of several central DNS resolvers of Siemens AG.
Data obtained from this source is very diverse as the DNS resolvers cover the regions of Asia, Europe, and the USA.
From the company, we obtain a one-month recording of benign NXDs from April 2019 containing approximately 311 million unfiltered NXDs.
Benign data from this source is only used for the final real-world evaluation study, which is free of experimental biases, to assess whether a classifier contains any biases with respect to the network data on which it was trained and whether a classifier is time-robust.
We again filter all NXDs from this data source using DGArchive to clean the data as much as possible.
§.§.§ Ethical Considerations
Our institution does not yet have an ethics review board that could have approved this study.
However, we ensured that we do not record or use any personally identifiable information (PII) or quasi-identifiers.
When recording traffic from the university and company network, we only observe NX-traffic and store the queried domain names, omitting all other information including IP addresses that could be used as pseudonyms to correlate domain names queried by the same host.
Thereby, we only obtain a list of domain names that occurred within the recording period, with no relation to users within the network.
Additionally, we focus on NX-traffic because NXDs are less privacy-sensitive compared to resolving domain names, as they generally do not contain user-generated domains, with the exception of typo domains.
Although the NXDs may still contain sensitive information about an organization as a whole (e.g., they could indicate possible business relationships between different companies), it is questionable to what extent and with what accuracy such information can be recovered, if at all possible.
§.§ Preprocessing
It is important to understand the applied domain name preprocessing as this step can introduce significant classification biases.
The works (e.g., <cit.>) that operate on single NXDs for classification make the data used unique and filter all benign samples against OSINT feeds to remove potentially contained malicious domains before training and testing a classifier.
Other than that, they do not apply any filtering to the benign-labeled data used, since it is captured from real-world networks.
The argument for this decision is that this feeds the classifier with the queries that occur naturally in a network, and does not bias the classification performance in any direction since no filtering is applied.
While the feature-based classifiers (e.g., <cit.>) start extracting predefined features from this data, the deep learning-based approaches (e.g., <cit.>) have to convert the domain names into a numerical representation in order to be able to feed them to a neural network.
Most works (e.g., <cit.>) follow a similar approach, which mainly differs in the maximum acceptable length of a domain.
First, all characters are converted to lowercase (which is an uncritical operation as the DNS operates case-insensitive) and every character is mapped to a unique integer.
Additionally, the input is padded with zeros from the left side.
The authors of the ResNet classifier <cit.> propose padding to the maximum domain length of 253 characters in order to be able to perform training and classification on every possible NXD while using batch learning.
In this work, we follow these suggestions of related work on preprocessing.
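As an illustration, a minimal Python sketch of this preprocessing could look as follows; the exact character alphabet and index assignment used by the original classifiers are assumptions here.

import string

MAX_LEN = 253
# index 0 is reserved for padding; the concrete character set is an assumption
ALPHABET = string.ascii_lowercase + string.digits + "-._"
CHAR_TO_INT = {c: i + 1 for i, c in enumerate(ALPHABET)}

def encode_domain(domain: str, max_len: int = MAX_LEN) -> list:
    domain = domain.lower()
    encoded = [CHAR_TO_INT.get(c, len(ALPHABET) + 1) for c in domain]  # unknown chars -> extra index
    return [0] * (max_len - len(encoded)) + encoded                    # left zero-padding

print(encode_domain("www.example.com")[-20:])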
§ EVALUATION OVERVIEW
In this section, we describe our evaluation methodology, explain the decisions underlying the dataset generation process, and perform a result reproduction study of the classifiers from related work to verify our evaluation setup.
§.§ Datasets & Methodology
We create two disjoint datasets, one to train and test a set of state-of-the-art models (DSmod), and one to analyze different explainability methods and investigate biases (DSex).
For each DGA in DGArchive, we randomly select 20,000 samples.
If less than 20,000 samples are available per DGA, we select all samples.
Then we split the samples for each DGA equally between the two datasets.
For two DGAs, only five samples are available in the OSINT feed.
We constrain that at least four samples are available for training classifiers within DSmod.
Thus, for two DGAs (Dnsbenchmark and Randomloader), only one sample is contained in DSex.[We intentionally include underrepresented classes because the inclusion of a few training samples per class allows a classifier to detect various underrepresented DGAs with high probability that would otherwise be missed. At the same time, this does not affect a classifier's ability to recognize well-represented classes <cit.>.]
Thereby, we are able to perform a four-fold cross validation stratified over all included classes using DSmod, resulting in four different classifiers being trained and tested.
Finally, we select the same number of benign samples as we selected malicious samples, resulting in balanced datasets.
In binary classification experiments, we use all benign samples and use the same label for all malicious domains, regardless of which DGA generated a domain.
In multiclass classification experiments, we limit the number of benign samples to 10,000 in order to obtain a more even distribution of samples across the various classes.
Here we assign a separate label for each DGA.
In total, DSmod and DSex each contain approximately 1.2 million domains derived from 107 different classes.
We train all four classifiers in the four-fold cross validation with DSmod using early stopping with a patience of five epochs to avoid overfitting.
These classifiers are then used to analyze different explainability methods and investigate biases using samples from DSex.
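A simplified sketch of this training protocol, assuming a hypothetical build_resnet factory and already-encoded arrays X and y, could look as follows (scikit-learn for the stratified split, Keras for early stopping with a patience of five epochs):

import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.callbacks import EarlyStopping

def cross_validate(X: np.ndarray, y: np.ndarray, build_resnet):
    # stratified four-fold cross-validation over DSmod
    models = []
    skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=42)
    for train_idx, test_idx in skf.split(X, y):
        model = build_resnet()  # hypothetical model factory
        early_stop = EarlyStopping(monitor="val_loss", patience=5,
                                   restore_best_weights=True)
        model.fit(X[train_idx], y[train_idx],
                  validation_split=0.05, epochs=100,
                  callbacks=[early_stop], verbose=0)
        models.append((model, test_idx))
    return models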
This methodology allows us to conduct a study to reproduce the results of related work (using DSmod) as it replicates the classification setting used by the state of the art.
In addition, we can evaluate four classifiers and 20 explainability methods on the same unseen data (DSex) and can assess whether the classifiers converge to similar local optima and whether the explainability methods provide stable results between different models.
However, this methodology introduces spatial and temporal experimental biases <cit.>.
Spatial bias arises from using an unrealistic ratio of benign to malicious samples in the test data. For the DGA detection use-case, most queried domains within a network are benign. This significant class imbalance can lead to base-rate fallacy <cit.> where evaluation metrics such as true-positive rate (TPR) and false-positive-rate (FPR) are misleading.
Temporal bias is introduced by temporally inconsistent evaluations which integrate future knowledge about testing samples into the training phase. In the state-of-the-art classification setting, temporal bias is introduced in two ways: First, four-fold cross validation does not ensure that all training samples are strictly temporally precedent to the testing ones. Second, the benign and malicious samples in the datasets are not from the same time window (one-month real-world benign data compared to several years of DGArchive data).
Thus, we conduct an additional evaluation under real-world conditions where we mitigate all experimental biases in Section <ref>.
To this end, we make use of our second source for real-world data, the company network.
In this context, we also assess whether classifiers generalize between different networks and are time-robust.
§.§ State-of-the-Art Results Reproduction
Before conducting the actual explainability study, we reproduce the results of related work to validate our evaluation setup.
We use the same evaluation metrics as in the original papers: accuracy (ACC), true-positive rate (TPR), and false-positive rate (FPR) for the binary experiments, and f1-score, precision, and recall (which is equal to TPR) for the multiclass experiments.
As suggested in <cit.>, we use macro-averaging to calculate the overall evaluation metrics because the available samples vary widely per DGA class.
This way we do not skew the overall score towards well-represented classes.
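For reference, macro-averaging with scikit-learn simply averages the per-class scores with equal weight, e.g.:

from sklearn.metrics import f1_score, precision_score, recall_score

y_true = ["benign", "bamital", "bamital", "qadars", "benign"]
y_pred = ["benign", "bamital", "qadars", "qadars", "bamital"]

print("f1       :", f1_score(y_true, y_pred, average="macro"))
print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_true, y_pred, average="macro", zero_division=0))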
We present the averaged results of the four-fold cross validation in Table <ref>.
The upper part of the table shows the results of the binary evaluation, the lower part those of the multiclass evaluation.
By comparing these results with the values reported in the original papers, we can confirm that we were able to reproduce the results, as we arrive at very similar values.
The last row of the table shows the results for an adapted model of M-ResNet aimed at making it more explainable.
Recently, Bohle et al. <cit.> proposed a so-called B-Cos transform which, when interchanged with linear transforms of neural networks, increases the networks' explainability by promoting the alignment of weight-input during training.
The alignment pressure on the weights ensures that the model computations align with task-relevant features and therefore become explainable.
Since interchanging the linear transforms of the ResNet model with B-Cos transforms could introduce a trade-off between classification performance and explanatory fidelity, we also evaluate this model using DSmod and present the results in the last row of Table <ref>.
Indeed, this modification slightly sacrifices model performance in favor of a more explainable model compared to the M-ResNet baseline.
§ EXPLAINABILITY METHODS
As a secondary contribution to the critical analysis of automatic feature extraction for DGA detection, we conduct a comparative evaluation of different explainability methods.
In this section, we briefly introduce explainability techniques for machine learning and present the results of the comparative evaluation.
The exhaustive evaluation can be found in Appendix <ref>.
In general, explainability methods can be divided into two categories: white-box approaches, which are model-specific and use knowledge, e.g, about the internal architecture and model weights of a neural network, and black-box approaches that are model-agnostic.
In this work, we focus on white-box approaches as they have been proven to produce better results compared to black-box approaches <cit.>.
The general idea of white-box approaches to deriving local explanations for input samples is to compute the gradients from the output back to the input.
Thereby, for an input sample x, a neural network N, and a prediction y = N(x), a relevance vector r is derived which describes the relevance of each dimension of x for the predicted label y.
Thus, in terms of context-less DGA classification, an explainability method determines the relevance of each character in the context of its position for the assignment of an individual domain name to a particular class.
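As a minimal illustration of this idea, the following sketch computes a gradient-times-input relevance vector with TensorFlow; it assumes a dense (e.g., one-hot) input representation, whereas for embedding-based models the gradient would be taken with respect to the embedding output. It is not the exact computation of any of the evaluated methods.

import tensorflow as tf

def relevance_vector(model: tf.keras.Model, x_onehot) -> tf.Tensor:
    # x_onehot: one encoded domain, shape (1, 253, alphabet_size)
    x = tf.convert_to_tensor(x_onehot, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        scores = model(x)                      # shape (1, n_classes)
        top_score = tf.reduce_max(scores[0])   # score of the predicted class
    grads = tape.gradient(top_score, x)        # d(score)/d(input)
    relevance = tf.reduce_sum(grads * x, axis=-1)  # gradient x input, summed over the one-hot dimension
    return relevance[0]                        # per-position relevance, shape (253,)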
When evaluating the explainability methods, we focus on the explanations generated for the predictions of a multiclass classifier because, unlike a binary classifier, it has a variety of other prediction possibilities in addition to distinguishing between benign and malicious.
In this work, we make use of the iNNvestigate library <cit.> which implements many explainability methods and provides a common interface to evaluate 19 white-box approaches including Layer-wise Relevance Propagation (LRP) <cit.> using 12 different rules.
In addition, we also evaluate explanations generated by the recently proposed B-Cos network adjustment <cit.>.
Similarly to Warnecke et al. <cit.>, we evaluate the explainability methods based on four metrics: fidelity, sparsity, stability, and efficiency.
Since we only evaluate white-box methods that compute relevance vectors directly from the weights of a neural network, all explainability methods are complete in that they are able to compute non-degenerate explanations for every possible input.
In contrast to <cit.>, we evaluate a total of 20 white-box explainability approaches (compared to the three evaluated by Warnecke et al.) and extend the fidelity and stability metrics to be more suitable for analyzing DGA classifiers.
Based on the four metrics, we select the top five techniques (b-cos, deeptaylor, integratedgradients, lrp.alpha2beta1, and lrp.zplus) for our bias investigation study in the next section.
§ INTERPRETING THE EXPLANATIONS
Having decided on explainability methods, we can now examine the reasoning of the deep learning classifiers.
To this end, we use the classifiers trained during the four-fold cross validation on DSmod to predict all samples of DSex, and then use all selected explainability methods to compute explanations.
Subsequently, for each method and class, we use DBSCAN <cit.> to cluster the relevance vectors and group similar explanations together.
Finally, we manually review the clusters to identify potential features of the deep learning classifiers.
For each domain name and relevance vector, we visualize the importance of each character through heatmaps.
We encode positive contributions to the predicted label as green colors and negative contributions as red colors.
An example of the clustering and visualization of the relevance vectors generated by lrp.zplus for the Banjori DGA is shown in Fig. <ref>.[
Note that relevance vectors are not direct characteristics of individual inputs, but rather of the model that processes those inputs.
By clustering the relevance vectors, we can still find clusters similar to those in Fig. <ref>, but in this case it might be more appropriate to first compute clusters based on other features such as n-gram embeddings.
However, it is unclear what other features should be used to calculate such clusters (which brings us back to manual feature engineering) since, e.g., n-gram embeddings would not be useful for hex-based DGAs.
]
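A sketch of this clustering and visualization step (the DBSCAN parameters are assumptions):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import DBSCAN

def cluster_relevances(relevances: np.ndarray) -> np.ndarray:
    # relevances: (n_samples, 253) relevance vectors of one predicted class
    return DBSCAN(eps=0.5, min_samples=5).fit_predict(relevances)

def plot_heatmap(domain: str, relevance: np.ndarray) -> None:
    chars = list(domain)
    values = relevance[-len(chars):]           # relevance of the non-padding positions
    bound = np.abs(values).max() or 1.0
    fig, ax = plt.subplots(figsize=(len(chars) * 0.4, 1))
    # symmetric color range: green = positive, red = negative contribution
    ax.imshow(values[np.newaxis, :], cmap="RdYlGn", vmin=-bound, vmax=bound, aspect="auto")
    ax.set_xticks(range(len(chars)))
    ax.set_xticklabels(chars)
    ax.set_yticks([])
    plt.show()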
In the following we present our findings from this study.
We use the explainability methods to identify potential biases and then conduct various experiments to quantify the impact on classification.
While some of these biases may seem obvious and easily avoidable, they are present even in DGA detection approaches proposed at leading security conferences (e.g., <cit.>).
Moreover, these biases are rooted in subtle flaws that are rife in security research and affect many other use-cases as well <cit.>.
§.§ Revealing Biases
In this work, we mainly focus on the classification biases between the benign and the malicious class since the most severe danger in misclassification is that DGA-domains are wrongly labeled as benign.
If a certain proportion of samples is incorrectly assigned to a DGA by a multiclass classifier, this has less impact because the domains are still detected as malicious.
The main incentive for an adversary would be to exploit biases to force a detection system to classify DGA-domains as benign, allowing communication with botnets.
Therefore, we consider the threat model, which attempts to mask domains as if they were generated by another DGA, to be less reasonable.
In total, we identified five biases present in current state-of-the-art classifiers that provide a false sense of security, as they can be easily exploited to evade detection.[While we analyzed the ResNet-based classifier in detail, we verified that the identified biases are also exploitable in the LSTM-based <cit.> and the CNN-based classifier <cit.>.]
Moreover, biases inherent in a classifier can affect the classifier's ability to detect yet unknown DGAs.
§.§.§ Length Bias
Across all explainability methods and across many clusters, dots included in a domain name are often calculated as particularly important for the classification.
We reckon that the dots themselves are not important in isolation, but that the deep learning classifiers infer the features of domain length and number of subdomains from it.
To assess the importance of this feature, we conduct the following experiment:
First, we chose the Qadars DGA as it generates domains of a fixed length and is correctly attributed by M-ResNet most of the time (f1-score of 0.99400).
In detail, all domains generated by Qadars match the following regular expression (regex): , i.e., Qadars generates domains with a fixed length of 12, using only the characters a-z and 0-9, and finally adds a dot and one of four possible top-level domains (TLDs).
Then, we adapt the reimplementation of Qadars[<https://github.com/baderj/domaingenerationalgorithms>] to generate domains of all possible lengths.
Note that each domain name identifier can be a maximum of 63 characters long before it must be separated by a dot, and the full domain name can be a maximum of 253 characters long.
For each possible length and for each known seed (six in total), we generate at most 100 different domains, resulting in a dataset size of around 147,000 unique samples.
For each sample, we always fill in the highest level subdomain with characters before adding a dot.
Finally, we feed the generated domains into the M-ResNet classifier and observe the percentage of classifications assigned to Qadars, any other DGA, and the benign class depending on the domain length.
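The following sketch outlines such a length-sweep harness; generate_chars is a hypothetical stand-in for the adapted Qadars reimplementation, the single TLD is a placeholder, and the dot-insertion strategy is simplified compared to the procedure described above.

import numpy as np

def insert_dots(chars: str) -> str:
    # a single label may be at most 63 characters, so longer character streams
    # are split by dots (the real experiment fills the highest-level subdomain first)
    return ".".join(chars[i:i + 63] for i in range(0, len(chars), 63))

def length_sweep(model, encode_domain, generate_chars, tld=".com", max_len=245):
    # generate_chars(seed, n) is a hypothetical generator returning n
    # second-level-domain characters for a given seed
    fractions = {}
    for n in range(1, max_len + 1):
        domains = [insert_dots(generate_chars(seed, n)) + tld for seed in range(6)]
        X = np.array([encode_domain(d) for d in domains])
        preds = model.predict(X, verbose=0).argmax(axis=1)
        fractions[n] = np.bincount(preds, minlength=model.output_shape[-1]) / len(preds)
    return fractions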
In Fig. <ref>, we display the results of this experiment.
The percentage of classifications assigned to Qadars increases with domain length, peaking at the original domain length of 12, and then falls abruptly from there.
As the domain length increases, the percentage increases slightly because the classifier has more information to derive the correct prediction.
Most of the time, however, the classifier assigns the samples to different DGA classes.
The percentage of benign classifications increases rapidly from the length of 69, 133, and 197.
This is because at these lengths additional subdomains must be included to form a valid domain.
The more dots, the more benign classifications; in some cases, more than 50% of all classifications are assigned to the benign class.
After the dots are inserted, the benign classifications decrease with increasing domain length as more information generated by the DGA is available for prediction.
Investigating the sample length distribution of the classifiers' training set illustrates the problem that with increasing length, more domains are classified as benign.
In Fig. <ref>, we display two box plots of the domain length distribution for the benign and malicious classes.
The maximum domain length of a DGA-labeled sample within the training set is 59.
Thus, it is very likely that a classifier learns to assign a sample to the benign class with greater probability if it exceeds 59 in length.
Fortunately, this is not the only feature on which classification depends.
Since the domain length depends on the number of dots/subdomains, we examine this bias below.
§.§.§ Number of Dots/Subdomains Bias
As seen in the previous section, the number of dots/subdomains has a significant impact on the classification.
Looking at the number of dots contained in the training set separately for the benign and malicious classes, we can see that the benign class contains significantly more dots.
The average number of dots is 7.12, the median is 5, and the maximum is 35.
In comparison, the average for the malicious class is 1.08, the median is 1 and the maximum is 2.
In fact, only 19 DGAs generate domains with more than one dot and only two DGAs (Beebone and Madmax) have dots past their effective second-level domain (e2LD).
We refer to e2LD here because some DGAs use dynamic DNS services or public suffixes, which should not be counted as their generated second-level domain.
§.§.§ www. Bias
In connection to the number of dots/subdomains bias we observed during our manual review of the relevance vector clusters for the benign class, that over all explainability methods, clusters have formed which highlight the importance of the “www.” prefix.
Examining the distribution of domains with the prefix “www.” within the training set, we find that the benign class contains 3,382 (0.00288%) samples, while the malicious class contains only 183 (0.00016%) samples.
To assess the impact of this bias, we perform the following experiment:
We take the four binary classifiers of the four-fold cross validation and all the malicious samples that the classifiers have correctly classified (true-positives).
Then we prepend the “www.” prefix to all true-positives and reevaluate the models on these samples.
On average over all folds, 434,916 (74.23%) out of 585,907 true-positives became false-negatives, while only 150,991 were still correctly classified.
This shows that there is a huge bias regarding this prefix, and malware authors could exploit this issue by simply prepending “www.” to their generated domains in order to evade detection by state-of-the-art classifiers.
Although only a small fraction of all samples carries the “www.” prefix, it can introduce bias into the classification if the feature is sufficiently discriminative.
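The experiment itself can be sketched in a few lines; model and encode_domain stand in for the trained binary classifier and the preprocessing sketched earlier, and the 0.5 decision threshold matches the setting above.

import numpy as np

def www_prefix_flip_rate(model, encode_domain, true_positive_domains, threshold=0.5):
    # prepend "www." to correctly classified malicious domains and re-evaluate
    prefixed = ["www." + d for d in true_positive_domains]
    X = np.array([encode_domain(d) for d in prefixed])
    scores = model.predict(X, verbose=0).ravel()   # probability of the malicious class (assumed output)
    still_detected = int((scores >= threshold).sum())
    flipped = len(prefixed) - still_detected       # true positives turned into false negatives
    return flipped / len(prefixed)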
§.§.§ Top-Level Domain Bias
Through our study, across all explainability methods and across multiple classes, we encountered multiple occurrences of clusters that, in combination with other features, highly value the top-level domain (TLD) as a significant feature.
To assess the impact of this feature, we make use of out-of-distribution (OOD) testing, as it was identified to be one of the most effective ways to reveal biases <cit.>.
To this end, we perform a leave-one-group-out evaluation.
In detail, similarly to the four-fold cross validation, we train a classifier for every fold on the respective fold's training data of DSmod, except that we omit all samples of a particular class.
Then, we use the four trained classifiers to predict all samples of the left out class contained in DSex.
As an example, we present the results obtained on the Mirai DGA leave-one-group-out evaluation.
All samples generated by Mirai use one of these three TLDs: online, support, and tech.
In each fold all Mirai samples that use the online and tech TLD are predicted to be malicious while all samples with the support TLD are labeled as benign.
It seems that this is because the classifier tends to classify samples with never-seen TLDs into the benign class.
Omitting all Mirai samples from training has the effect of removing all samples that use the support TLD from the entire training set.
Although there appears to be enough information within the second-level domain to correctly assign a sample to the malicious class (as 100% of all online TLD samples are correctly assigned), the classifier is biased due to the unknown TLD to attribute the samples to the benign class.
Similar pictures emerge also for a variety of other DGAs.
Examination of the TLD distribution within the training set supports this statement.
There are 413 distinct TLDs in the benign data, of which 274 are unique to benign samples.
In comparison, there are only 258 different TLDs within the malicious labeled data, of which 115 are uniquely used by malicious samples.
On the other hand, all samples with the tech TLD were also correctly labeled as malicious although this TLD was completely removed from the training data.
Since all support TLD samples are misclassified and all samples use the same generation algorithm, it is unlikely that the information within the second-level domain was discriminatory enough for the tech TLD samples.
Analyzing the calculated relevance vectors for these samples revealed that the classification is significantly influenced by the “ch” suffix of the tech TLD.
Looking at the ch TLD distribution within the training data it becomes apparent why this is the case: there are 2063 ch TLDs within the malicious samples and only 51 within the benign samples.
This bias investigation delivers two results:
First, state-of-the-art classifiers heavily depend on the TLD, resulting in the fact that a malware author could simply change the TLD used to evade detection.
Second, it might be useful to encode the TLD as a one-hot encoded vector before inputting it to a classifier since it is rather a categorical feature.
In the case of the Mirai evaluation, this was a stroke of luck for the defender side.
However, since the TLD can be freely chosen, an attacker could exploit this knowledge to evade detection.
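One possible realization of this suggestion is a one-hot encoding fitted on the training TLDs, in which unseen TLDs map to the all-zero vector instead of silently pushing the prediction towards a class; this is a sketch of the idea, not the encoding of any evaluated classifier.

from sklearn.preprocessing import OneHotEncoder

train_tlds = [["com"], ["net"], ["online"], ["tech"], ["ch"]]
encoder = OneHotEncoder(handle_unknown="ignore")
encoder.fit(train_tlds)

print(encoder.transform([["tech"]]).toarray())     # known TLD -> one-hot vector
print(encoder.transform([["support"]]).toarray())  # unseen TLD -> all-zero vector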
§.§.§ Validity/Diversity Bias
During our study, we encountered several large benign clusters that contain domains that are invalid and therefore would not resolve (e.g. due to an invalid or missing TLD).
In fact, 7.64% of all benign samples within the training set are invalid, while all malicious samples are valid.
An attacker has no incentive in generating invalid samples, as they would be useless for establishing connections between bots and their C2 server.
Thus, a classifier most likely learns the shortcut to distinguish domains based on their validity.
Although this is not a true bias, since invalid domains cannot be resolved and can therefore safely be assigned to the benign class, it does have an impact on the reported FPR of state-of-the-art classifiers, as invalid samples are probably easier to classify.
While there is nothing wrong with calculating the FPR of a detection system that pre-filters invalid domains to the benign class, the classifier's real true-negative rate (TNR) is artificially inflated in this way.
Furthermore, including invalid samples in the training sets carries the additional risk of the classifier focusing on useless information and prevents the classifier from learning more complex features that might be useful in separating valid benign samples from malicious ones.
In addition, we found several benign clusters specific to the network in which the data was collected (e.g., domains including the official e2LD of the university).
Training and evaluating classifiers on this data could lead to misleadingly high results, as the classifiers may have only learned to separate network-specific domains from malicious ones, but they do not generalize between different networks.
§ MITIGATING BIASES
Now that we have identified several biases, we present strategies to mitigate them.
In addition, in various experiments, we measure the cost of avoiding biases in terms of lost classification performance, since biases are nothing more than features that appear in the training data. For instance, biases such as the TLD are perfectly valid signals for the classifier to learn based on the underlying data distribution, since such features can, to some extent, be used to distinguish between benign and malicious samples. However, this is not desirable for features that can easily be modified by an attacker, as they can be exploited (e.g., by exchanging the TLD) to evade detection.
Finally, in a real-world study, we measure the true classification performance of DGA classifiers that are free of the identified biases, and evaluate whether a classifier generalizes to different networks and is time-robust.
In other words, here we evaluate whether a classifier is free from biases that might be introduced by artifacts in specific networks and at certain times.
§.§ Mitigation Strategies
In the following, we address the individual biases and suggest how to mitigate them.
§.§.§ Number of Dots/Subdomains, www., and TLD Biases
As demonstrated in the previous section, these biases can be easily exploited by an attacker to evade detection.
Adding the “www.” prefix to malicious domains converted around 75% of true-positives into false-negatives, while selecting a TLD that was never seen by a classifier during training allows for complete bypass of detection.
Since the botmaster's authority over a domain starts with the e2LD and all other subdomains as well as the TLD can be freely selected, we suggest to perform the classification exclusively on the e2LD and to omit all other information.
Note that this does not open up any new attack vector, but may remove valuable features that could be used for classification, resulting in a decrease in overall classification performance.
Hence, in Section <ref>, we measure the trade-off between bias-reduced classification and performance.
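Reducing a FQDN to its e2LD can, for example, be done with the tldextract package, which resolves public suffixes via the Public Suffix List; using this particular package is our illustrative choice, not necessarily the tooling of the original papers.

import tldextract

def to_e2ld(fqdn: str) -> str:
    ext = tldextract.extract(fqdn)
    return ext.domain                       # e2LD only, no subdomains, no TLD/suffix

print(to_e2ld("www.foo.example.co.uk"))     # -> "example" (suffix "co.uk" is handled by the PSL)
print(to_e2ld("mail.google.com"))           # -> "google"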
§.§.§ Validity/Diversity Bias
Since invalid samples can be pre-filtered and assigned to the benign class, we choose to only train a classifier on valid domains, allowing the classifier to focus on task-relevant features.
As a result, the FPR of the classifier reported by us is likely to be larger than that reported by related work, since the classifier does not encounter easily classifiable invalid samples during testing.
Further, to mitigate the problem that a classifier only learns to separate network-specific domains from malicious ones, we focus on diverse data by training on unique e2LDs.
In doing so, we aim to train classifiers that generalize well between different networks.
Focusing solely on unique e2LDs has the effect that the underlying sample distribution changes fundamentally.
Training using this data will again increase the classifier's FPR, since an e2LD occurs only once, either in the training or the test set.
In contrast, in the state-of-the-art classification setting, a large proportion of unique domains with the same e2LD occur, which may be network-specific, such as domains that contain the university's official e2LD.
Once the classifier learns of a benign e2LD, samples with the same e2LD can be easily assigned to the benign class.
§.§.§ Length Bias
Focusing exclusively on valid and diverse e2LDs already significantly equalizes the length distribution between benign and malicious samples and almost mitigates the bias.
In Fig. <ref>, we show two box plots of the unique and valid e2LD length distributions for the benign class and malicious samples.
In comparison to the sample length distributions in the state-of-the-art classification setting (cf. Fig. <ref>), the e2LD length distributions are much more similar.
Unfortunately, this does not fully mitigate the length bias.
The classifier will probably still tend to classify longer samples towards the benign class.
However, as we saw during the length bias experiment, longer samples contain more information that helps the classifier make the correct decision.
Thus, for an adversary, increasing the domain length is more of a trade-off between exploiting length bias and providing too much information to the classifier.
Note that reducing the domain length of input samples to mitigate this bias is not a viable option, as this opens up a new attack vector in which an attacker can hide features that would otherwise have sorted a domain into the malicious class.
On the other hand, it is possible to generate additional artificial domains by adapting publicly available reimplementations of DGAs (similar to the length bias experiment) to balance the length distributions and thus mitigate the bias completely.
However, this may require oversampling of benign data and care must be taken to ensure that this does not affect classification performance on clean data.
Since the focus on valid and diverse e2LDs almost evens out the distributions, we decided against it.
§.§ Bias Mitigation Experiments
In the following, we measure the cost in terms of loss in classification performance for avoiding biases.
We expect classification performance to deteriorate because biases are nothing more than features based on the underlying distribution of the training data.
All experiments are similar to the four-fold cross validation performed in Section <ref>, except that here we focus on diverse data.
To this end, we first map all fully qualified domain names (FQDNs) to their e2LDs.
We then randomly sample the e2LDs and then select exactly one sample per unique e2LD for each evaluation scenario.
For binary and multiclass classification, we examine four scenarios each: classification on valid and diverse FQDNs, on FQDNs without TLDs (no TLDs), on FQDNs without subdomains (e2LDs + TLDs), and exclusively on e2LDs.
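The four input variants can be derived from the same decomposition, e.g. (again using tldextract as an illustrative choice; deduplication per unique e2LD can then be done on the "e2ld" field):

import tldextract

def input_variants(fqdn: str) -> dict:
    ext = tldextract.extract(fqdn)
    e2ld, tld, subs = ext.domain, ext.suffix, ext.subdomain
    return {
        "fqdn": fqdn,                                        # full FQDN
        "no_tld": ".".join(p for p in (subs, e2ld) if p),    # FQDN without TLD
        "e2ld_tld": f"{e2ld}.{tld}" if tld else e2ld,        # e2LD + TLD
        "e2ld": e2ld,                                        # e2LD only
    }

print(input_variants("mail.update.example.co.uk"))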
In the upper part of Table <ref>, we present the results for the binary setting while the lower part of the table displays the results for the multiclass setting.
For convenience we also show the performance of the classifiers in the state-of-the-art classification setting from Section <ref>.
As suspected, when only valid and diverse samples are used, the performance of the binary classifier is significantly worse, especially with respect to the FPR.
Removing the TLDs from the FQDNs has less of an impact on performance than removing all subdomains after the e2LD.
However, in both scenarios the loss in performance is tremendous, increasing the FPR to about 7.1% - 7.6%.
Classification solely on the e2LD delivers the worst results, reaching an 89.1% TPR at a 10.5% FPR for the decision threshold of 0.5.
Examining the individual TPRs for each DGA, we find that the rate drops significantly for some DGAs, while for others it remains high, even reaching 100%.
Although the average TPR drops significantly compared to the state-of-the-art setting, we expect that most DGAs could still be detected, as they query multiple domains before finally resolving a registered domain, provided that a decision is not made on the basis of a single query.
Only the DGAs Redyms and Ud3 would be completely missed as for these DGAs the TPRs are zero over all four folds.
In the multiclass setting, classification performance is not affected as much when trained on valid and diverse FQDNs.
This is because focusing on these samples mainly affects the benign class and a few DGA classes that have a small sample size and generate FQDNs that map to the same e2LD (e.g., they generate domains with the same e2LD but with different TLDs).
However, most DGAs are not affected by this.
In contrast to the binary setting, here the TLDs are more relevant for classification than the subdomains after the e2LD.
If only the e2LDs are used for classification, the performance deteriorates drastically (mainly because of the missing TLDs).
Removing all subdomains after the e2LD affects only two DGAs: Beebone and Madmax.
However, when the subdomains are removed, there is still enough information in their domain names to classify them correctly most of the time.
Beebone's f1-score drops slightly from 97.7% to 95.7%, and Madmax's from 74.9% to 60.2%.
In summary, the TLD is vital for the multiclass classification.
In the binary setting, classifying exclusively on e2LDs is as bias-free as possible, but the achieved performance does not seem acceptable.
However, the effective TPR@FPR operating point of a detection system that pre-filters invalid samples and classifies all input samples regardless of the uniqueness of their e2LD can still be acceptable.
We address this question in the next section.
§.§ Real-World Study
In this section, we perform a real-world study to assess the true performance of bias-reduced DGA binary classification.
In this context, we evaluate whether the classifiers generalize between different networks and are time-robust.
Simultaneously, we enforce that the evaluation is free of experimental biases.
In the following, we refer to classifiers that mitigate the identified biases as bias-reduced classifiers.
To this end, we train a classifier using the real-world benign e2LDs from the university network recorded from mid-October 2017 to mid-November 2017, as well as DGArchive data that was available until the end of the recording period.
In detail, DGArchive contains approximately 53 million unique domains generated by 85 different DGAs up to this point in time.
Training a classifier using a dataset which is similar to DSmod, but with the constraint that the malicious samples are from the same time window as the benign samples, mitigates one of the two experimental temporal biases included in the state-of-the-art classification setting.
To mitigate the second experimental temporal bias, which requires that all training samples are strictly temporally precedent to the testing ones, we evaluate the classifier on approximately 311 million benign e2LDs captured in the company network in April 2019 (cf. Section <ref>) and DGA-domains from DGArchive that were generated by DGAs in April 2019.
Within April 2019, 46 DGAs (four of which were unknown at the time of the training) generated approximately 1.2 million domains.
In this way, we eliminate the experimental temporal biases, and can guarantee that the benign samples come from different networks and that the time interval between the occurrence of the training and the test samples is about 17 months.
To eliminate the experimental spatial bias, it is required to approximate the true ratio of benign to malicious samples in the test data.
Since the true sample distribution is unknown, we conduct two experiments to estimate the true detection performance of bias-reduced DGA binary classification.
First, we evaluate the classifier using all 311 million benign e2LDs and gradually increase the amount of included malicious test samples generated in April 2019 from 1% to 100% for each DGA.
Thereby, the ratios between the domains generated by the different DGAs follow the true distribution.
In the following, we report the obtained results of the classifier that first checks whether a sample is invalid.
If it is invalid, the sample is ignored.
Otherwise, it is evaluated by the classifier.
In Fig. <ref>, we display the TPRs for fixed FPRs between [0.001,0.008] for the bias-reduced classifier depending on the contamination of the test set (i.e., the relative amount of included malicious test samples from April 2019).
The achieved TPRs are nearly stable for all fixed FPRs, showing that no base-rate fallacy is measurable within these ratios of benign to malicious samples.
We argue this is because the benign data heavily overshadows the malicious data even when we include 100% of all DGA-domains from April 2019.
In this experiment, the relative percentage of malicious samples varies between 0.00362% and 0.35998%, which means that in the worst case, 99.64002% of the test data is still from the benign class.
As it is unclear how many DGAs are present in a real-world network, we additionally conduct a second experiment to estimate the worst-case classification performance.
Here, for each DGA, we evaluate the classifier using all malicious samples generated in April 2019 of that particular DGA and all 311 million benign e2LDs.
In total, we thus evaluate the classifier using 46 test sets, since there are 46 DGAs that generate at least one domain in April 2019.
On average the bias-reduced classifier achieves a TPR of 0.85735 at a FPR of 0.00506 for the decision threshold of 0.5.
In Fig. <ref>, we display the receiver operating characteristic (ROC) curve averaged over all evaluation runs for the FPR range of [0,0.01].
In addition, we also show the ROC curves for the best-detected DGA (Dyre) and the worst-detected DGA (Nymaim2).
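For concreteness, the TPR at a fixed FPR used throughout this evaluation can be read off from raw classification scores as in the following sketch (scikit-learn assumed; the label and score arrays are placeholders, not the actual evaluation data):

    import numpy as np
    from sklearn.metrics import roc_curve

    def tpr_at_fpr(y_true, y_score, target_fpr):
        # y_true: 1 for DGA-generated, 0 for benign; y_score: malicious-class scores.
        fpr, tpr, _ = roc_curve(y_true, y_score)
        # Best achievable TPR over all thresholds whose FPR does not exceed the target.
        return float(tpr[fpr <= target_fpr].max())

    # Hypothetical usage:
    # for f in [0.001, 0.002, 0.004, 0.008]:
    #     print(f, tpr_at_fpr(y_true, y_score, f))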
We argue that the classifier is remarkably time-robust and generalizes well to different networks.
The temporal and spatial changes in data distribution have increased the FPR compared to the state-of-the-art setting at the decision threshold of 0.5.
However, this was to be expected as the distribution of benign samples naturally varies between networks, at least to some degree.
Moreover, the classifier achieves a slightly lower TPR than the bias-reduced e2LD classifiers from the previous section.
Surprisingly, for three of the four DGAs that were unknown at the time of training (Ccleaner, Tinynuke, Wd), the bias-reduced classifier is able to correctly classify 100% of all generated samples.
Only the Nymaim2 DGA is detected poorly, with a TPR of 14.84%, which is the main reason for the slightly lower average TPR compared to the bias-reduced e2LD classifiers from the previous section.[
We additionally evaluated the four e2LD classifiers from the previous section against the 311 million benign NXDs and all DGA-domains from DSex (which are completely disjoint with the training samples) to evaluate the performance using all 106 known DGAs.
Thereby, we arrive at very similar results.
We present the corresponding ROC curves in Appendix <ref>.
Note that this of course reintroduces experimental temporal bias.
]
At a fixed FPR of 0.008 the bias-reduced classifier achieves a TPR of about 89%.
In practice, it might be advantageous to set the threshold to a lower fixed FPR value.
Setting the FPR at 0.001 to 0.002 would still allow an approximate detection rate of about 67% to 78%.
However, how useful this is depends on what is done with the classification results.
Context-less DGA detection was never intended for single-domain based decision-making.
This evaluation assessed the true performance of bias-reduced DGA classifiers and demonstrated the limits of what is possible without contextual information.
§ BIAS-REDUCED DGA CLASSIFICATION
In this section, we use the insights gained from the bias mitigation and the real-world study to propose a classification system that (1) is as bias-free as possible and (2) does not miss entire DGA families.
Further, we propose an approach to improve visualization support to increase trust in and transparency of detection methods and facilitate decision-making.
§.§ Bias-reduced DGA Classification System
As previous evaluations have shown, bias can be easily exploited to evade detection.
Focusing exclusively on e2LD helps mitigate most identified biases.
However, this causes the classifier to lose the ability to recognize specific DGA families as a whole.
In the case of multiclass classification, we have seen that the classification relies heavily on information outside of the e2LD to correctly assign domains of multiple classes.
In the following, we present a detection system that counteracts these issues.
In Fig. <ref>, we visualize the system's architecture.
In the first step, the detection system evaluates whether the entered NXD is invalid or not.
If it is invalid, it is ignored, otherwise the input sample is passed to the binary classification step.
Here, two classifiers work in parallel: a bias-reduced classifier that classifies the e2LD of the input sample, and a full classifier that uses the FQDN.
This classification step can lead to four possible outcomes:
First, both classifiers agree on the benign label, so the detection system also outputs benign.
Second, the bias-reduced classifier outputs malicious while the full classifier predicts benign.
This is an indication that an attacker might try to exploit biases to evade detection.
Third, the bias-reduced classifier predicts benign and the full classifier malicious.
This suggests that the features outside the e2LD may be indispensable to detect the DGAs that the bias-reduced classifier would miss.
And fourth, both classifiers agree on the malicious label indicating that the input sample is very likely DGA-generated.
Regardless of the results, the input sample can be passed to a multiclass classifier trained on FQDNs to associate the sample with the DGA that most likely generated it.
Finally, we propose to pass the input sample associated with the classification results to a visualization system to understand the classifier's reasoning and to support the decision-making process.
Using this detection system, we achieve bias-reduced DGA detection and do not miss entire DGA families.
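A minimal sketch of this decision logic is given below; the classifier objects, the invalid-sample filter, and the e2LD extraction are hypothetical placeholders for the components described above, not the exact implementation:

    def classify_nxd(fqdn, is_invalid, e2ld_of, bias_reduced_clf, full_clf, multiclass_clf):
        if is_invalid(fqdn):
            return None  # invalid samples are ignored
        pred_e2ld = bias_reduced_clf.predict(e2ld_of(fqdn))  # bias-reduced classifier on the e2LD
        pred_full = full_clf.predict(fqdn)                   # full classifier on the FQDN
        if pred_e2ld == "malicious" and pred_full == "benign":
            note = "possible attempt to exploit biases to evade detection"
        elif pred_e2ld == "benign" and pred_full == "malicious":
            note = "detection relies on features outside the e2LD"
        elif pred_e2ld == pred_full == "malicious":
            note = "very likely DGA-generated"
        else:
            note = "benign"
        family = multiclass_clf.predict(fqdn)  # attribute the sample to the most likely DGA
        return {"e2ld": pred_e2ld, "full": pred_full, "note": note, "family": family}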
§.§ Visualization Support
The proposed detection system gets the most out of context-less and bias-reduced DGA classification.
In order to facilitate decision-making and to better understand the reasoning of a classifier we propose a visualization system.
In this work, we demonstrated the limits of context-less classification and showed that decision-making based on the classification result of a single query is practically insufficient.
To make a decision based on multiple classification results, the minimum information required is the mapping between the host and the queried domains.
While this information may not be available to a CaaS provider, the network operator that uses the service most likely has this knowledge.
In the following, we only use this additional knowledge to facilitate the work of SOC analysts.
Fig. <ref> shows the different views of the proposed visualization system based on mock data.
Two main view groups summarize the classification results: the global and the local views.
Both contain the queried domain names, in which the relevance of each character to the prediction is highlighted using a heatmap.
In this example, we used integratedgradients to compute the relevance vectors for the predictions of the multiclass model.
However, any other explainability method can be chosen.
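As a simple illustration, a character-level relevance vector can be turned into a textual heatmap as in the following sketch (the domain and relevance values are hypothetical; a real visualization system would render colors instead of ASCII shades):

    import numpy as np

    def char_heatmap(domain, relevance):
        # Normalize absolute relevances to [0, 1] and map them to shades (low -> high).
        r = np.abs(np.asarray(relevance[:len(domain)], dtype=float))
        if r.max() > 0:
            r = r / r.max()
        shades = " .:-=+*#%@"
        return domain + "\n" + "".join(shades[int(v * (len(shades) - 1))] for v in r)

    # print(char_heatmap("kwtoestmhzxdrbld.com", relevance_vector))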
In addition, we display the total amount of times the domain was queried as well as the classification results from the bias-reduced, full binary, and multiclass classifier.
The global view summarizes all classification results for the entire network and allows finding multiple hosts infected with the same malware.
The local view summarizes the results for a single host and allows targeted analysis of all queries performed by that host.
Local views can be accessed through the Recent Classification Results by Client view, which displays the total and relative number of domains classified as benign or malicious per host.
From both, the global and the local view, it is possible to analyze how often and which hosts queried a particular domain.
Additionally, for each domain, it is possible to analyze the clusters in which the relevance vector falls and to extract a simple regex that fits all samples within the cluster.
In this way, it may be possible to identify multiple hosts infected with the same malware.
§ ADDITIONAL UTILIZATION OF THE KNOWLEDGE GAINED
As a secondary contribution, we use the knowledge gained in the previous evaluations to improve the state-of-the-art deep learning and feature-based multiclass classifiers in terms of classification performance and efficiency.
In this section, we therefore take a step back from improving the generalization of classifiers by removing classification biases and briefly turn our attention to improving the performance and efficiency of the classifiers themselves.
§.§ Improving M-ResNet
In this work, we mainly improved the binary classifier B-ResNet by mitigating identified biases.
Now we also take a closer look at the multiclass classifier M-ResNet.
In Section <ref>, we noted that the classifier does not use the TLD as a standalone feature, but also derives additional features from the character distribution.
Since the TLD can be freely chosen by the adversary and the TLD is more of a categorical feature, we adapt the M-ResNet model to classify a domain by using the one-hot encoded vector representation of the TLD instead of the character-wise encoding.
Thereby, we aim to improve classification performance by allowing the classifier to focus on the more important part of the FQDN.
Furthermore, this has the effect that other implicit features, such as domain length, are no longer affected by the chosen TLD.
We evaluated this model using a four-fold cross validation on DSmod but could not measure any significant improvement.
As could be seen in the relevance vector cluster analysis, the original model appears to have a large enough capacity to learn the correct extraction of the TLD from the characters.
Furthermore, the characters within the TLD do not appear to significantly affect the multiclass classifier.
Since overparameterization has been associated with a higher susceptibility to learning spurious correlations <cit.>, we attempt to iteratively reduce the complexity of the adapted model.
As a result, we were able to successfully remove the last four residual blocks and reduce the number of trainable parameters by 35.5% without affecting classification performance (f1-score of 0.78691).
Thereby, we additionally improved the model's carbon footprint and reduced the required time for training and inference.
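For reference, the reported reduction in trainable parameters can be tracked with a helper of this kind (assuming a PyTorch-style model object; the concrete architecture is a placeholder):

    def trainable_parameters(model):
        # Count trainable parameters to compare the original and the reduced model.
        return sum(p.numel() for p in model.parameters() if p.requires_grad)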
§.§ Improving EXPLAIN
Now we try to improve the feature-based multiclass classifier EXPLAIN by using knowledge extracted by explainability methods applied on M-ResNet.
To this end, we cluster relevance vectors for samples which are correctly classified by M-ResNet but incorrectly by EXPLAIN, targeting the identification of features that are missing in EXPLAIN.
We attribute the performance difference between both classifiers to four findings: (1) ResNet seems to handle imbalanced data and class weighting better, (2) for some DGAs, M-ResNet is simply better at guessing, (3) M-ResNet is able to learn complex features through a series of non-linear transformations that are not easily understood by a human, and (4) both classifiers converge to different local optima and thus tend to assign similar samples to either one or the other class.
§.§.§ Imbalanced Data
Investigating the relevance vector clusters for the Redyms DGA, it is immediately apparent that for M-ResNet, the “-” character is useful for the correct classification.
Although, the feature that counts the “-” character is defined in EXPLAIN's source code, it was not selected during the feature selection process.
We reckon that this is because the feature is only important for a few classes, while other features are important for a much higher number of classes, which resulted in a lower importance score during the feature selection process.
This problem could be the reason why several classes are recognized worse by EXPLAIN, and it suggests that M-ResNet might be better with imbalanced data and class weighting in general.
In contrast to EXPLAIN's feature selection step, we assume that M-ResNet does not completely remove self-learned features, but fine-tunes the importance by adjusting the weights.
Adding the “-”-feature to EXPLAIN's feature set improves the f1-score for the Redyms DGA by 53.15% and brings the detection rate to a level similar to that of M-ResNet.
§.§.§ Random Guessing
EXPLAIN mostly confuses the samples of Ud4 with Dmsniff.
Analysis of all samples from both classes revealed that both DGAs generate 100% identical domains, so they are most likely the same DGA.
Upon inquiry to DGArchive this was confirmed and in the future the feed of Ud4 will be discontinued.
Here, M-ResNet is just better at guessing (by an f1-score of 16.48%).
§.§.§ Complex Features
We cannot exclude the possibility that M-ResNet is able to learn complex features through a series of non-linear transformations that are not easily understood by a human.
For instance, related work <cit.> suggests that the ResNet classifier may be able to distinguish, at least to some degree, between underlying pseudo-random number generators.
To improve EXPLAIN, we adapt the features related to randomness tests and add all of them to the final feature set.
In detail, we adapt the 14 randomness tests from <cit.> to include the final p-values used for the decision of whether a certain randomness test is passed instead of only the result of the test.
Reevaluating the model with all additional features, we could measure a small improvement of 0.783% in f1-score.
§.§.§ Different Optima
Most other DGAs that are confused by EXPLAIN generate similar domains, and often all domains match the same regexes.
EXPLAIN is significantly better (> 10% in f1-score) than M-ResNet in four DGAs, whereas M-ResNet is also significantly better in four other DGAs.
We reckon that both models converge to different local optima and thus tend to assign similar samples to either one or the other class.
§.§.§ Overall Results
We were able to improve EXPLAIN from an f1-score of 0.76733 to 0.77516 by adding additional features to EXPLAIN's feature set, bringing it closer to the performance of deep learning classifiers such as M-ResNet.
§ OTHER RELATED WORK
We already discussed related work on DGA detection in Section <ref>.
Consequently, we focus here on related work on explainability and bias learning prevention.
For the DGA detection use-case, there are only a few works that partially address the explainability of detection systems.
Drichel et al. <cit.> proposed the multiclass classifier EXPLAIN as a feature-based alternative to deep learning-based classifiers.
While feature-based approaches often seem inherently explainable, it is often not easy to interpret their predictions.
For instance, EXPLAIN's predictions are based on the majority vote of 360 decision trees with a maximum depth of 43 and a random mixture of 76 features that include several statistical features that are difficult for a human to analyze.
The authors of <cit.> also adopt a feature-based RF classifier based on the EXPOSURE system <cit.> and mainly use SHAP <cit.> to derive explanations.
However, their approach relies heavily on extensive tracking of DNS traffic and is unable to derive explanations in the multiclass classification setting.
None of these works investigate biases inherent in detection methods.
To the best of our knowledge, this is the first work to critically analyze the features used, focusing on their limitations and unintended consequences for the DGA use-case.
In addition, related work <cit.> has identified several general measures to mitigate bias learning that can also be applied here.
Changing the loss function <cit.> and adding regularization terms <cit.> can force a classifier to learn more complex features instead of focusing on simple biases.
Also, the learning rate of the optimizer can be adjusted to make the classifier learn either simpler or more complex features <cit.>.
Somewhat related is the issue of adversarial attacks and the robustness of classifiers.
Here, semantic gaps in the data create blind spots in classifiers which make them susceptible to small input perturbations that lead to misclassifications.
Adversarial training can be used to prevent such classification shortcuts <cit.>.
In the context of DGA detection, several works deal with this topic.
§ CONCLUSION
In this work, we showed how XAI methods can be used to debug, improve understanding, and enhance state-of-the-art DGA classifiers.
To this end, we performed a comparative evaluation of different explainability methods and used the best ones to explain the predictions of the deep learning classifiers.
Thereby, we identified biases present in state-of-the-art classifiers that can be easily exploited by an adversary to bypass detection.
To solve these issues we proposed a bias-reduced classification system that mitigates the biases, achieves state-of-the-art detection performance, generalizes well between different networks, and is time-robust.
In this context, we measured the true performance of state-of-the-art DGA classifiers, showed the limits of context-less DGA binary classification, and proposed a visualization system that facilitates decision-making and helps to understand the reasoning of deep learning classifiers.
Finally, we used the knowledge gained from our study to improve the state-of-the-art deep learning as well as feature-based approaches for DGA multiclass classification.
In future work, the usefulness of the visualization system needs to be evaluated, preferably in an operational environment.
A promising future research direction is the combination of context-less and context-aware systems to further enhance detection and decision-making.
§ AVAILABILITY
We make the source code of the machine learning models publicly available[<https://gitlab.com/rwth-itsec/explainability-analyzed-dga-models>] to encourage replication studies and facilitate future work.
The authors would like to thank Daniel Plohmann, Simon Ofner, and the Cyber Analysis & Defense department of Fraunhofer FKIE for granting us access to DGArchive as well as Siemens AG and Jens Hektor from the IT Center of RWTH Aachen University for providing NXD data.
§ EVALUATING EXPLAINABILITY METHODS
We evaluate the explainability methods using four metrics: fidelity, sparsity, stability, and efficiency following <cit.>.
Since we only evaluate white-box methods that compute relevance vectors directly from the weights of a neural network, all explainability methods are complete in that they are able to compute non-degenerate explanations for every possible input.
To evaluate the explainability methods we use the four classifiers trained on DSmod during our results reproduction study and predict all samples from DSex.
For each metric, we average the results across all classifiers.
§.§ Fidelity
The first evaluation criterion is fidelity, which measures how faithfully important features contribute to a particular prediction.
We adopt the Descriptive Accuracy (DA) metric from <cit.>, which measures for a given input sample x how removing the k-most relevant features change the original neural network's prediction .
The idea behind this metric is that as relevant features are removed, accuracy should decrease as the classifier has less information to make the correct prediction.
The better an explanation, the faster the accuracy decreases as the removed features capture more context of the predictions.
Thus, explainability methods that show a more rapid decline in DA when removing key features provide better explanations than explainability methods with a more gradual decrease.
In context-less DGA classification, removing an input feature corresponds to removing a character from a domain.
Here, we consider two scenarios: (1) removing a character and thus reducing the total domain length, and (2) replacing a character with the padding symbol and thereby retaining the original domain length.
Both approaches have drawbacks: removing a character can have a greater impact on accuracy because it also affects the implicit feature of domain length.
On the other hand, preserving the domain length by replacing the character with the padding symbol may confuse a classifier, as the classifier was never faced with such samples during training.
Hence, we calculate the average DA for both scenarios and on all samples of DSex for k∈[1,10].
To derive a single score, we compute the Area Under the Curve (AUC).
The smaller the score, the better the explanations.
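A sketch of how the DA measurement for both scenarios could be implemented is shown below; model.predict, the relevance vectors, and the padding symbol are placeholders for the components described above:

    import numpy as np

    def descriptive_accuracy(model, domains, labels, relevances, k, mode="remove", pad="~"):
        # Remove (or replace with the padding symbol) the k most relevant characters
        # of each domain and re-measure the classifier's accuracy.
        perturbed = []
        for d, r in zip(domains, relevances):
            top_k = set(np.argsort(np.abs(np.asarray(r[:len(d)])))[::-1][:k])
            if mode == "remove":
                perturbed.append("".join(c for i, c in enumerate(d) if i not in top_k))
            else:
                perturbed.append("".join(pad if i in top_k else c for i, c in enumerate(d)))
        preds = model.predict(perturbed)  # placeholder for the classifier's inference call
        return float(np.mean(np.asarray(preds) == np.asarray(labels)))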
Results:
In Table <ref>, we show the results for this criterion.
For further evaluation we choose integratedgradients as it scores best when removing the top k-features and b-cos as it achieves the best score in the second scenario.
In addition, we also select lrp.zplus since it obtains the best scores when replacing features on the unmodified M-ResNet model.
§.§ Sparsity
An explanation is only meaningful if only a limited number of features are selected as the explanation result to make it understandable for a human analyst.
To measure the sparsity of an explanation, we follow the Mass Around Zero (MAZ) criterion proposed in <cit.>.
First, for every sample, we calculate the relevance vector r = (r_0,...,r_n), normalize the absolute entries of r to the range [0,1], and fit it to a half-normalized histogram h.
Then, we calculate the MAZ by .
Finally, we compute the AUC to derive a single score.
Sparse explanations have a steep increase in MAZ around zero and are flat around one because only few features are marked as relevant.
Conversely, explanations with many relevant features have a smaller slope close to zero.
Therefore, the higher the AUC score, the sparser the explanations.
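The following sketch approximates this computation (the exact normalization of the original MAZ metric may differ slightly; numpy assumed):

    import numpy as np

    def maz_auc(relevance, bins=100):
        # Share of the (normalized, absolute) relevance mass that lies close to zero;
        # sparse explanations accumulate this mass quickly, giving a larger AUC.
        r = np.abs(np.asarray(relevance, dtype=float))
        if r.max() > 0:
            r = r / r.max()
        hist, edges = np.histogram(r, bins=bins, range=(0.0, 1.0))
        hist = hist / hist.sum()          # half-normalized histogram h
        maz = np.cumsum(hist)             # cumulative mass around zero
        return float(np.trapz(maz, edges[1:]))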
Results:
In the third column of Table <ref>, we show the results for this criterion.
We select lrp.alpha2beta1 for further evaluation as it shows the best sparsity for explanations.
However, high sparsity is only useful if the most relevant features are correctly determined.
Therefore, we also investigate Sparsity * (1-Fidelity) and display the results in the fourth column.
Depending on the fidelity, integratedgradients shows the most sparse explanations.
§.§ Stability
An explainability method is stable if it provides the same explanation for a given input over multiple runs.
Since we only evaluate white-box approaches which calculate the relevance vector deterministically, all methods are stable.
However, here we still want to evaluate the stability of the explainability methods over different model weights, i.e., whether the explainability methods calculate similar explanations via different model weights.
Assuming that all models converge to similar local optima, it is conceivable that they learn the same features that are similarly relevant to predictions of specific classes.
Note that this need not be the case as there may be multiple highly predictive features for a single class.
However, we believe this is an important criterion, as it is beneficial when deriving explanations in an operational environment that the security analyst is presented with similar explanations for the same classes after a model update, e.g., after the inclusion of a newly emerged malware family, as before the model update.
Otherwise, the new explanations would confuse rather than help the analyst.
The standard deviation of the f1-score across the four folds is low at 0.00552, which may indicate that the classifiers are converging to similar local optima.
To evaluate this criterion, we first compute the average of the standard deviation values (std) for each entry of a relevance vector across all folds for all domains.
Then, we average these values to derive a single score, with smaller values corresponding to more similar explanations across different model weights.
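A minimal sketch of this score, assuming the relevance vectors of all folds are stacked into a single array:

    import numpy as np

    def stability_score(relevances_per_fold):
        # relevances_per_fold: shape (folds, samples, features) with the relevance
        # vectors computed by the same explainability method on each fold's model.
        per_entry_std = np.std(np.asarray(relevances_per_fold, dtype=float), axis=0)
        return float(per_entry_std.mean())  # smaller values = more similar explanations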
Results:
The fifth column of Table <ref> shows the results for this criterion.
The two methods which achieve the best results by far are deeptaylor and lrp.zplus.
Both methods also achieve high fidelity scores (deeptaylor is second best in the feature remove setting and lrp.zplus is best on the unmodified M-ResNet model in the feature replace setting), which may indicate that the models learn the same most predictive features for the same classes.
On the other hand, integratedgradients achieves the best fidelity score in the feature remove setting and only performs moderately well in terms of stability.
This could be due to the fact that, in contrast to the other two methods, integratedgradients shows a significantly higher sparsity, which could indicate that there may be multiple highly predictive feature combinations for the same classes.
We add deeptaylor to the list of methods to be evaluated further.
However, the results of this criterion should be treated with caution, as they depend heavily on what a model has learned.
Since we use the same models for all explainability methods, this criterion still allows us to compare explainability methods in terms of whether they provide similar explanations through different model weights.
§.§ Efficiency
We follow the definition of efficiency in <cit.>, which states that a method is efficient if it does not delay the typical workflow of an expert.
To evaluate this criterion, we measured and averaged the times to compute the explanations during the previous experiments.
Results:
In the last column of Table <ref> we display the average time in seconds for computing a single explanation for a prediction.
All methods are sufficiently fast that we do not select any method based on this criterion.
B-cos, integratedgradients, and smoothgrad are around one order of magnitude slower than the other approaches.
For B-cos this is the case as the current implementation does not support batch calculations to derive explanations.
For integratedgradients and smoothgrad this is because we had to reduce the batch size of 2,000 samples to 200 due to higher RAM requirements of the algorithms.
Nevertheless, even without batch calculations all methods are sufficiently fast and would not delay the workflow of an expert.
§.§ Comparison of Explainability Methods
We briefly document our findings of using different explainability methods during our evaluations:
While lrp.alpha2beta1 often provides very sparse explanations, it occasionally seems to fail, sometimes just flagging features that argue against the prediction even though the classifier is very confident.
We cannot justify the loss of performance caused by the required adjustment to the state-of-the-art M-ResNet model for the explanations generated by b-cos, since the explanations are not significantly different from the other methods.
The three best performing explainability methods through our study are deeptaylor, integratedgradients, and lrp.zplus.
All three can be used to explain the predictions of deep learning classifiers for the DGA classification use-case.
However, integratedgradients seems to provide sparser explanations compared to the other two methods.
§ ADDITIONAL ROC CURVES OF THE REAL-WORLD STUDY
|
http://arxiv.org/abs/2307.06113v2 | 20230712121333 | Sublinear Time Shortest Path in Expander Graphs | ["Noga Alon", "Allan Grønlund", "Søren Fuglede Jørgensen", "Kasper Green Larsen"] | cs.DS | ["cs.DS"] |
Sublinear Time Shortest Path in Expander Graphs
Noga Alon, Allan Grønlund, Søren Fuglede Jørgensen, Kasper Green Larsen
August 12, 2023
=================================================================================
Computing a shortest path between two nodes in an undirected unweighted graph is among the most basic algorithmic tasks. Breadth first search solves this problem in linear time, which is clearly also a lower bound in the worst case. However, several works have shown how to solve this problem in sublinear time in expectation when the input graph is drawn from one of several classes of random graphs. In this work, we extend these results by giving sublinear time shortest path (and short path) algorithms for expander graphs. We thus identify a natural deterministic property of a graph (that is satisfied by typical random regular graphs) which suffices for sublinear time shortest paths. The algorithms are very simple, involving only bidirectional breadth first search and short random walks. We also complement our new algorithms by near-matching lower bounds.
§ INTRODUCTION
Computing shortest paths in an undirected unweighted graph is among the most fundamental tasks in graph algorithms. In the single source case, the textbook breadth first search (BFS) algorithm computes such shortest paths in O(m+n) time in a graph with n nodes and m edges. Linear time is clearly also a lower bound on the running time of any algorithm that is correct on all input graphs, even if we only consider computing a shortest s-t path for a pair of nodes s,t, and not the shortest path from s to all other nodes. Initial intuition might also suggest that linear time is necessary for computing the shortest path between two nodes s,t in a random graph drawn from any reasonable distribution, such as an Erdős-Rényi random graph or a random d-regular graph. However, this intuition is incorrect and there exists an algorithm with a sublinear expected running time for many classes of random graphs <cit.>. Moreover, the algorithm is strikingly simple! It is merely the popular practical heuristic of bidirectional BFS <cit.>. In bidirectional BFS, one simultaneously runs BFS from the source s and destination t, expanding the two BFS trees by one layer at a time. If the input graph is e.g. an Erdős-Rényi random graph, then it can be shown that the two BFS trees have a node in common after exploring only O(√(n)) nodes in expectation. If the node v is first to be explored in both trees, then the path from s → v → t in the two BFS trees form a shortest path between s and t. The fact that only O(√(n)) nodes need to be explored intuitively follows from the birthday paradox and the fact that the nodes nearest to s and t are uniform random in an Erdős-Rényi random graph (although not completely independent).
Note that for sublinear time graph algorithms to be meaningful, we assume that we have random access to the nodes and their neighbors. More concretely, we assume the nodes are indexed by integers [n] = {1,…,n} and that we can query for the number of nodes adjacent to a node v, as well as query for the j'th neighbor of a node v. We remark that several works have also extended the bidirectional BFS heuristic to weighted input graphs and/or setups where heuristic estimates of distances between nodes and the source or destination are known <cit.>. There are also works giving sublinear time algorithms for other natural graph problems under the assumption of a random input graph <cit.>.
A caveat of the previous works that give provable sublinear time shortest path algorithms is that they assume a random input graph. In this work, we identify "deterministic" properties of graphs that may be exploited to obtain sublinear time s-t shortest path algorithms. Concretely, we study shortest paths in expander graphs. An n-node d-regular (all nodes have degree d) graph G is an (n,d,λ)-graph if the eigenvalues λ_1 ≥⋯≥λ_n of the corresponding adjacency matrix A satisfy max_i ≠ 1|λ_i| ≤λ. Note that the eigenvalues are real since A is symmetric and real. We start by presenting a number of algorithmic results when the input graph is an expander.
Shortest s-t Path.
Our first contribution demonstrates that the simple bidirectional BFS algorithm efficiently computes the shortest path between most pairs of nodes s, t in an expander:
If G is an (n,d,λ)-graph, then for every node s ∈ G, every 0 < δ < 1, it holds for at least (1-δ)n nodes t, that bidirectional BFS between s and t, finds a shortest s-t path after visiting O((d-1)^⌈ (1/4) log_d/λ(n/δ) ⌉) nodes.
While the bound in Theorem <ref> on the number of nodes visited may appear unwieldy at first, we note that it simplifies significantly for natural values of d and λ. For instance, an (n,d,λ)-graph is Ramanujan if λ≤ 2√(d-1). For Ramanujan graphs, and more generally for graphs with λ = O(√(d)), the bound in Theorem <ref> simplifies to near-√(n):
If G is an (n,d,O(√(d)))-graph, then for every node s ∈ G, every 0 < δ < 1, it holds for at least (1-δ)n nodes t, that bidirectional BFS between s and t, finds a shortest s-t path after visiting O((n/δ)^{1/2 + O(1/ln d)}) nodes.
We also demonstrate that the bound can be tightened even further for Ramanujan graphs:
If G is a d-regular Ramanujan graph where d ≥ 3, then for every node s ∈ G, it holds for at least (1-o(1))n nodes t, that bidirectional BFS between s and t, finds a shortest s-t path after visiting O( √(n)·ln^3/2(n)) nodes.
Short s-t Path.
One drawback of bidirectional BFS in expanders is that it is only guaranteed to find a shortest path efficiently for most pairs of nodes s,t. Motivated by this shortcoming, we also present a simple randomized algorithm for finding a short, but not necessarily shortest, s-t path. For any parameter 0 < δ < 1, the algorithm starts by growing a BFS tree from s until Θ(√(n ln(1/δ))) nodes have been explored. It then performs O(√(n ln(1/δ))/log_d/λ(n)) random walks starting at t. Each of these random walks runs for O(log_d/λ(n)) steps. If any of these walks discovers a node in the BFS tree, it has found an s-t path of length O(log_d/λ(n)).
We show that this BFS + Random Walks algorithm has a high probability of finding an s-t path:
If G is an (n,d,λ)-graph with λ ≤ d/2, then for every pair of nodes s,t, every 0 < δ < 1, it holds with probability at least 1-δ, that BFS + Random Walks between s and t, finds an s-t path of length O(log_d/λ(n)) while visiting O(√(n ln(1/δ))) nodes.
Lower Bounds.
While bidirectional BFS, or BFS + Random Walks, are natural algorithms for finding s-t paths efficiently, it is not a priori clear that better strategies do not exist. One could e.g. imagine sampling multiple nodes in an input graph, growing multiple small BFS trees from the sampled nodes and somehow use this to speed up the discovery of an s-t path. To rule this approach out, we complement the algorithms presented above with lower bounds. For proving lower bounds, we consider distributions over input graphs and show that any algorithm that explores few nodes fails to find an s-t path with high probability in such a random input graph. As Erdős-Rényi random graphs (with large enough edge probability) and random d-regular graphs are both expanders with good probability, we prove lower bounds for both these random graph models. The distribution of an Erdős-Rényi random graph on n nodes is defined from a parameter 0 < p < 1. In such a random graph, each edge is present independently with probability p. A random d-regular graph on the other hand, is uniform random among all n-node graphs where every node has degree d.
Our lower bounds hold even for the problem of reporting an arbitrary path connecting a pair of nodes s, t, not just for reporting a short/shortest path. Furthermore, our lower bounds are proved in a model where we allow node-incidence queries. A node-incidence query is specified by a node index v and is returned the set of all edges incident to v. Our first lower bound holds for Erdős-Rényi random graphs:
Any (possibly randomized) algorithm for reporting an s-t path in an Erdős-Rényi random graph, where edges are present with probability p ≥ 1.5 ln(n)/n, either makes Ω(1/(p√(n))) node-incidence queries or outputs a valid path with probability at most o(1)+p.
Note that the lower bound assumes p ≥ 1.5 ln(n)/n. This is a quite natural assumption since for p ≪ln(n)/n, the input graph is disconnected with good probability. The concrete constant 1.5 is mostly for simplicity of the proof. We remark that the additive p in the success probability is tight as an algorithm always reporting the direct path consisting of the single edge (s,t) is correct with probability p. Also observe that the number of edges discovered after O(1/(p√(n))) node-incidence queries is about O(pn/(p√(n))) = O(√(n)) since each node has p(n-1) incident edges in expectation.
For the case of random d-regular graphs, we show the following lower bound for constant degree d:
Any (possibly randomized) algorithm for reporting an s-t path in a random d-regular graph with d=O(1), either makes Ω(√(n)) node-incidence queries or outputs a valid path with probability at most o(1).
We remark that a random d-regular graph is near-Ramanujan with probability 1-o(1) as proved in <cit.>, confirming a conjecture raised in <cit.>. A near-Ramanujan graph is an (n,d,λ)-expander with λ≤ 2√(d-1)+o(1). Thus our upper bounds in Theorem <ref> and Theorem <ref> nearly match this lower bound.
Overview.
In Section <ref>, we present our upper bound results and prove the claims in Theorem <ref> and Theorem <ref>. The upper bounds are all simple algorithms and also have simple proofs using well-known facts about expanders.
In Section <ref>, we prove our lower bounds. These proofs are more involved and constitute the main technical contributions of this work.
§ UPPER BOUNDS
In the following, we present and analyse simple algorithms for various s-t reachability problems in expander graphs.
§.§ Shortest Path
Let G be an (n,d,λ)-graph and consider the following bidirectional BFS algorithm for finding a shortest path between a pair of nodes s,t: grow a BFS tree T_s from s and a BFS tree T_t from t simultaneously. In each iteration, the next layer of T_s and T_t is computed and as soon as a node v appears in both trees, we have found a shortest path from s to t, namely the path s → v → t in the two BFS trees.
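A compact sketch of this procedure is given below (the adjacency structure and node labels are placeholders); it expands the two searches one full layer at a time and stitches the parent chains together at the first node discovered from both sides:

    def bidirectional_bfs(adj, s, t):
        # adj maps each node to an iterable of its neighbors.
        if s == t:
            return [s]
        parent_s, parent_t = {s: None}, {t: None}
        frontier_s, frontier_t = [s], [t]
        while frontier_s and frontier_t:
            frontier_s, meet = _expand_layer(adj, frontier_s, parent_s, parent_t)
            if meet is None:
                frontier_t, meet = _expand_layer(adj, frontier_t, parent_t, parent_s)
            if meet is not None:
                return _chain(parent_s, meet)[::-1] + _chain(parent_t, meet)[1:]
        return None  # s and t lie in different connected components

    def _expand_layer(adj, frontier, parent, other_parent):
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    if v in other_parent:      # node v appears in both trees
                        return nxt, v
                    nxt.append(v)
        return nxt, None

    def _chain(parent, v):
        nodes = []
        while v is not None:
            nodes.append(v)
            v = parent[v]
        return nodes  # v, ..., root of the corresponding BFS tree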
We show that this algorithm is efficient for most pairs of nodes s, t as claimed in Theorem <ref>.
To prove Theorem <ref>, we show that in any (n,d,λ)-graph G, it holds for every node s ∈ G that most other nodes have a small distance to s. Concretely, we show the following
If G is an (n,d,λ)-graph, then for every node s ∈ G, it holds for every 0 < δ < 1 that there are no more than δ n nodes with distance more than (1/2)log_d/λ(n/δ) from s.
Theorem <ref> now follows from Lemma <ref> by observing that for a pair of nodes s,t of distance k in an (n,d,λ)-graph, the bidirectional searches will meet after expanding for ⌈ k/2 ⌉ steps from s and t. Since each node explored during breadth first search has at most d-1 neighbors outside the previously explored tree, it follows that the total number of nodes visited is O((d-1)^⌈ k/2 ⌉). Since it holds for every s ∈ G that dist(s,t) ≤ (1/2)log_d/λ(n/δ) for a 1-δ fraction of all other nodes t, the conclusion follows.
Corollary <ref> follows from Theorem <ref> by observing that for λ = O(√(d)), we have (1/4)log_d/λ(n/δ) = (1/2)log_Ω(d)(n/δ). Noting that log_Ω(d)(n/δ) = ln(n/δ)/(ln(d) - O(1)) = (1+O(1/ln d))·log_d-1(n/δ), the conclusion follows.
What remains is to prove Lemma <ref>. While the contents of the lemma is implicit in previous works, we have not been able to find a reference explicitly stating this fact. We thus provide a simple self-contained proof building on Chung's <cit.> proof that the diameter of an (n,d,λ)-graph is bounded by ⌈log_d/λ(n)⌉.
Let A be the adjacency matrix of an (n,d,λ)-graph G. Letting d=λ_1 ≥λ_2 ≥⋯≥λ_n denote the (real-valued) eigenvalues of the real symmetric matrix A, we may write A in its spectral decomposition A=U Σ U^T with λ_1,…,λ_n being the diagonal entries of the diagonal matrix Σ. By definition, we have max{λ_2, |λ_n|} = λ.
Notice that (A^k)_s,t gives the number of length-k paths from node s to node t in G. Furthermore, we have A^k = U Σ^k U^T. Now let s be an arbitrary node of G and let Z ⊆ [n] denote the subset of columns t such that (A^k)_s,t = 0. The eigenvalues of A^k are λ_1^k,…,λ_n^k and the all-1's vector 𝟙 is an eigenvector corresponding to λ_1. Let 𝟙_Z denote the indicator vector for the set Z, i.e. the coordinates of 𝟙_Z corresponding to t ∈ Z are 1 and the remaining coordinates are 0. By definition of Z, we have that e_s^T A^k 𝟙_Z = 0. At the same time, we may write 𝟙_Z = (|Z|/n)𝟙 + β u where u is a unit length vector orthogonal to 𝟙 and β = √(|Z|-|Z|^2/n). Hence
0 = e^T_s A^k 𝟙_Z
= e^T_s A^k ((|Z|/n)𝟙 + β u)
= e^T_s λ_1^k (|Z|/n)𝟙 + β e^T_s A^k u
≥ d^k|Z|/n - β·‖e_s‖·‖A^k u‖
≥ d^k|Z|/n - βλ^k.
From this we conclude |Z| ≤ (λ/d)^k n β ≤ (λ/d)^k n √(|Z|), implying |Z| ≤ (λ/d)^2k n^2. For k = (1/2)log_d/λ(n/δ), this is |Z| ≤ δ n.
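As an illustration only (not part of the proof), the bound can be checked numerically on a random regular graph, which is typically a good expander; the parameters below are arbitrary and networkx/numpy are assumed to be available:

    import numpy as np
    import networkx as nx

    n, d, s, delta = 1000, 8, 0, 0.01
    G = nx.random_regular_graph(d, n, seed=1)
    A = nx.to_numpy_array(G)
    eigs = np.sort(np.linalg.eigvalsh(A))
    lam = max(abs(eigs[0]), eigs[-2])                     # max_{i != 1} |lambda_i|
    k = int(np.ceil(0.5 * np.log(n / delta) / np.log(d / lam)))
    Ak = np.linalg.matrix_power(A, k)
    unreached = int(np.sum(Ak[s] == 0))                   # nodes t with (A^k)_{s,t} = 0
    print(k, unreached, delta * n)                        # expect: unreached <= delta * n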
For the special case of Ramanujan graphs, Theorem <ref> claims an even stronger result than Theorem <ref>. Recall that an (n,d,λ)-graph is Ramanujan if it satisfies that λ≤ 2√(d-1). To prove Theorem <ref> we make use of the following concentration result on distances in Ramanujan graphs:
Let G be a d-regular Ramanujan graph on n nodes, where d ≥ 3. Then for every node s ∈ G it holds that
|{ t ∈ G : |dist(s,t) - log_d-1(n)| > 3log_d-1(ln n) }| = o(n).
Using Theorem <ref>, we conclude that for every node s ∈ G, it holds for (1-o(1))n choices of t that dist(s,t) ≤ log_d-1(n) + 3log_d-1(ln n). The middle node v on a shortest path from s to t thus has distance at most k = ⌈(log_d-1(n) + 3log_d-1(ln n))/2⌉ ≤ (1/2)log_d-1(n) + (3/2)log_d-1(ln n) + 1 from s and t. Since the nodes in a layer ℓ of a BFS tree in a d-regular graph G have at most d-1 neighbors in layer ℓ+1, we conclude that the two BFS trees T_s and T_t contain at most O((d-1)^k) ≤ O(√(n)·ln^3/2(n)) nodes each upon termination. Note that the same
proof shows how to find a shortest path in time n^{1/2+o(1)} between most pairs of vertices s and t in near Ramanujan graphs, as it is also proved in <cit.> that in such graphs, for every node s there are
only o(n) nodes t of distance exceeding (1+o(1))log_d-1(n) from s.
§.§ Connecting Path
In the following, we analyse our algorithm, BFS + Random Walks, for finding a short s-t path in an (n,d,λ)-graph. The algorithm is parameterised by an integer k ≥ √(n) and is as follows: First, run BFS from s until k nodes have been discovered. Call the set of discovered nodes V_s. Next, run τ = k/(3log_d/λ(n)) random walks R_1,…,R_τ from t, with each random walk having a length of 3log_d/λ(n). If any of the random walks intersects V_s, we have found an s-t path of length O(log_d/λ(n)) as the paths R_i have length O(log_d/λ(n)) and the diameter, and hence the depth of the BFS tree, in an (n,d,λ)-graph is at most ⌈log_d/λ(n)⌉ <cit.>.
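A sketch of this procedure is given below (adjacency lists, node labels and parameter handling are placeholders; note that the returned node sequence is a walk and may repeat vertices):

    import math, random

    def _back_to_s(parent, u):
        path = []
        while u is not None:
            path.append(u)
            u = parent[u]
        return path[::-1]                                   # path from s to u along BFS parents

    def bfs_plus_walks(adj, s, t, d, lam, delta=0.01):
        n = len(adj)
        k = max(int(math.sqrt(n * math.log(1 / delta))), 1)
        ell = max(int(math.log(n) / math.log(d / lam)), 1)  # roughly log_{d/lam}(n)
        parent, frontier = {s: None}, [s]
        while frontier and len(parent) < k:                 # BFS until k nodes are discovered
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in parent:
                        parent[v] = u
                        nxt.append(v)
            frontier = nxt
        if t in parent:
            return _back_to_s(parent, t)
        for _ in range(max(k // (3 * ell), 1)):             # k/(3*ell) walks of length 3*ell
            walk, u = [t], t
            for _ in range(3 * ell):
                u = random.choice(adj[u])
                walk.append(u)
                if u in parent:                             # the walk hit the BFS tree
                    return _back_to_s(parent, u) + walk[-2::-1]
        return None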
To analyse the success probability of the algorithm, we bound the probability that all paths R_i avoid V_s. For this, we use the following two results
Let G be an (n,d,λ)-graph. For any two nodes s,t in G, the probability p^k_s,t that a random walk starting in s and of length k ends in the node t, satisfies |1/n - p^k_s,t| ≤ (λ/d)^k.
Let G be an (n,d,λ)-graph and let W be a set of w vertices in G and set μ = w/n. Let P(W,k) be the total number of length k paths (k+1 nodes) that stay in W. Then
P(W,k) ≤ wd^k(μ + (λ/d) (1-μ))^k.
Now consider one of the length-3log_d/λ(n) random walks R = R_i starting in t. To show that it is likely that the path intersects V_s, we split the random walk R = (t, v_1, …, v_{3log_d/λ(n)+1}) into two parts, namely the first 2log_d/λ(n) steps R^(1) = (t, v_1, …, v_{2log_d/λ(n)+1}) and the remaining log_d/λ(n) steps R^(2) = (v_{2log_d/λ(n)+1}, …, v_{3log_d/λ(n)+1}). Note that we let the last node e(R^(1)) = v_{2log_d/λ(n)+1} in R^(1) equal the first node s(R^(2)) = v_{2log_d/λ(n)+1} in R^(2). We use R^(1) to argue that R^(2) has a near-uniform random starting node. We then argue that R^(2) intersects V_s with good probability.
By Theorem <ref>, it holds for any node r ∈ G that Pr[e(R^(1)) = r] ≤ 1/n + 1/n^2. Next, conditioned on e(R^(1)) = r, the path R^(2) is uniform random among the d^{log_d/λ(n)} length-log_d/λ(n) paths starting in r. It follows that for any fixed path p of length log_d/λ(n) in G, we have Pr[R^(2) = p] ≤ Pr[e(R^(1)) = s(p)]·d^{-log_d/λ(n)} ≤ (1/n + 1/n^2)·d^{-log_d/λ(n)}. Now by Theorem <ref> with W = V(G) ∖ V_s and assuming λ ≤ d/2, there are at most n·d^{log_d/λ(n)}·((1-k/n) + (λ/d)(k/n))^{log_d/λ(n)} ≤ n·d^{log_d/λ(n)}·(1-k/(2n))^{log_d/λ(n)} ≤ n·d^{log_d/λ(n)}·exp(-log_d/λ(n)·k/(2n)) paths in G that stay within V(G) ∖ V_s. A union bound over all of them implies that the probability that R^(2) avoids V_s is at most
(1/n+1/n^2)·d^{-log_d/λ(n)}·n·d^{log_d/λ(n)}·exp(-log_d/λ(n)·k/(2n)) ≤ exp(-log_d/λ(n)·k/(2n) + 1/n).
Since the τ = k/(3log_d/λ(n)) random walks R_1,…,R_τ are independent, we conclude that the probability they all avoid V_s is no more than
exp(-k^2/(6n) + k/(3log_d/λ(n)·n)).
Letting k = √(7 n ln(1/δ)) and assuming n is at least some sufficiently large constant, we have that at least one path R_i intersects V_s with probability at least 1-δ. This completes the proof of Theorem <ref>.
§ LOWER BOUNDS
In this section, we prove lower bounds on the number of queries made by any algorithm for computing an s-t path in a random graph. Our query model allows node-incidence queries. Here the n nodes of a graph G are assumed to be labeled by the integers [n]. A node-incidence query is specified by a node index i ∈ [n], and the query algorithm is returned the list of edges (i,j) incident to i.
We start by considering an Erdős-Rényi random graph, as it is the simplest to analyse. We then proceed to random d-regular graphs. For the lower bounds, the task is to output a path between nodes s=1 and t=n. An algorithm for finding an s-t path works as follows: In each step, the algorithm is allowed to ask one node-incidence query. We make no assumption about how the algorithm determines which query to make in each step, other than it being computable from all edges seen so far (the responses to the node-incidence queries). For randomized algorithms, the choice of query in each step is chosen randomly from a distribution over queries computable from all edges seen so far.
§.§ Erdős-Rényi
Let 𝐆 be an Erdős-Rényi random graph, where each edge is present independently with probability p ≥ 1.5 ln(n)/n, and let 𝒜^⋆ be a possibly randomized algorithm for computing an s-t path in 𝐆 when s=1 and t=n. Let α^⋆ be the probability that 𝒜^⋆ outputs a valid s-t path (all edges on the reported path are in 𝐆) and let q be the worst case number of queries made by 𝒜^⋆ (for 𝒜^⋆ making an expected q queries, we can always make it worst case O(q) queries by decreasing α by a small additive constant). Here the probability is over both the random choices of 𝒜^⋆ and the random input graph 𝐆. By linearity of expectation, we may fix the random choices of 𝒜^⋆ to obtain a deterministic algorithm 𝒜 that outputs a valid s-t path with probability α ≥ α^⋆. It thus suffices to prove an upper bound on α for such a deterministic 𝒜.
For a graph G, let π(G) denote the trace of running the deterministic 𝒜 on G. If i_1(G),…,i_q(G) denotes the sequence of queries made by 𝒜 on G and N_1(G),…,N_q(G) denotes the returned sets of edges, then
π(G) := (i_1(G), N_1(G), i_2(G), …, i_q(G), N_q(G)).
Observe that if we condition on a particular trace τ=(i_1, N_1, i_2, …, i_q, N_q), then the distribution of 𝐆 conditioned on π(𝐆)=τ is the same as if we condition on the set of edges incident to i_1,…,i_q being precisely N_1,…,N_q. This is because the algorithm 𝒜 is deterministic and the execution of 𝒜 is the same for all graphs G with the same such sets of edges incident to i_1,…,i_q. Furthermore, no graph G with a different set of incident edges for i_1,…,i_q will result in the trace τ.
For a trace τ = (i_1,N_1,…,i_q,N_q), call the trace connected if there is a path from s to t using the discovered edges
⋃_j=1^q N_j.
Otherwise, call it disconnected. Intuitively, if a trace is disconnected, then it is unlikely that 𝒜 will succeed in outputting a valid path connecting s and t as it has to guess some of the edges along such a path. Furthermore, if 𝒜 makes too few queries, then it is unlikely that the trace is connected. Letting 𝒜(G) denote the output of 𝒜 on the graph G, we have for a random graph 𝐆 that
α = Pr[𝒜(𝐆) is valid] ≤ Pr[π(𝐆) is connected] + Pr[𝒜(𝐆) is valid | π(𝐆) is disconnected].
We now bound the two quantities on the right hand side separately.
The simplest term to bound is
Pr[𝒜(𝐆) is valid | π(𝐆) is disconnected].
For this, let τ = (i_1,N_1,…,i_q,N_q) be an arbitrary disconnected trace in the support of π(𝐆) when 𝐆 is an Erdős-Rényi random graph, where each edge is present with probability p ≥ 1.5 ln(n)/n. Observe that the output of 𝒜 is determined from τ. Since τ is disconnected, the path reported by 𝒜 on τ must contain at least one edge (u,v) where neither u nor v is among ∪_j{i_j} or otherwise the output path is valid with probability 0 conditioned on τ. But conditioned on the trace τ, every edge that is not connected to {i_1,…,i_q} is present independently with probability p. We thus conclude
Pr[𝒜(𝐆) is valid | π(𝐆)=τ] ≤ p.
Since this holds for every disconnected τ, we conclude
Pr[𝒜(𝐆) is valid | π(𝐆) is disconnected] ≤ p.
Next we bound the probability that π(𝐆) is connected. For this, define for 1≤ k≤ q
π_k(G) := (i_1(G), N_1(G), i_2(G), …, i_k(G), N_k(G))
as the trace of 𝒜 on G after the first k queries. As for π(G), we say that π_k(G) is connected if there is a path from s to t using the discovered edges
E(π_k(G)) = ⋃_j=1^k N_j(G)
and that it is disconnected otherwise. We further say that π_k(G) is useless if it is both disconnected and |E(π_k(G))| ≤ 2pnk. Since
Pr[π_k(𝐆) is disconnected] ≥ Pr[π_k(𝐆) is useless]
we focus on proving that Pr[π_k(𝐆) is useless] is large. For this, we lower bound
Pr[π_k(𝐆) is useless | π_k-1(𝐆) is useless].
Note that the base case π_0(𝐆) is defined to be useless as s and t are not connected when no queries have been asked and also |E(π_0(𝐆))|=0 ≤ 2pn·0 = 0.
Let τ_k-1 = (i_1,N_1,…,i_k-1,N_k-1) be any useless trace. The query i_k = i_k(𝐆) is uniquely determined when conditioning on π_k-1(𝐆) = τ_k-1 and so is the edge set E_k-1 = E(π_k-1(𝐆)). Furthermore, we know that |E_k-1| ≤ 2pn(k-1). We now bound the probability that the query i_k discovers more than 2pn new edges. If i_k has already been queried, no new edges are discovered and the probability is 0. So assume i_k ∉{i_1,…,i_k-1}. Now observe that conditioned on π_k-1(𝐆)=τ_k-1, the edges (i_k,i) where i ∉{i_1,…,i_k-1} are independently included in 𝐆 with probability p each. The number of new edges discovered is thus a sum of m ≤ n independent Bernoullis X_1,…,X_m with success probability p. A Chernoff bound implies Pr[∑_i X_i > (1+δ)μ] < (e^δ/(1+δ)^{1+δ})^μ for any μ ≥ mp and any δ>0. Letting μ = np and δ = 1 gives
Pr[∑_i X_i > 2np] < (e/4)^{np} < e^{-np/3}.
Since we assume p > 1.5 ln (n)/n, this is at most 1/√(n).
We next bound the probability that the discovered edges N_k(𝐆) make s and t connected in E(π_k(𝐆)). For this, let V_s denote the nodes in the connected component of s in the subgraph induced by the edges E_k-1. Define V_t similarly. We split the analysis into three cases. First, if i_k ∈ V_s, then N_k(𝐆) connects s and t if and only if one of the edges in {i_k}× V_t is in 𝐆. Conditioned on π_k-1(𝐆) = τ_k-1, each such edge is in 𝐆 independently either with probability 0, or with probability p (depending on whether one of the end points is in {i_1,…,i_k-1}). A union bound implies that s and t are connected in E(π_k(𝐆)) with probability at most p|V_t|. A symmetric argument upper bounds the probability by p|V_s| in case i_k ∈ V_t. Finally, if i_k is in neither of V_s and V_t, it must have an edge to both a node in V_s and a node in V_t to connect s and t. By independence, this happens with probability at most p^2|V_t||V_s|. We thus conclude that
Pr[π_k(𝐆) is connected | π_k-1(𝐆) = τ_k-1] ≤ p·max{|V_s|,|V_t|} ≤ p(|E_k-1|+1) ≤ 2p^2 n k.
A union bound implies
Pr[π_k(𝐆) is useless | π_k-1(𝐆) is useless] ≥ 1 - 2p^2 nk - 1/√(n).
This finally implies
Pr[π(𝐆) is useless] = ∏_k=1^q Pr[π_k(𝐆) is useless | π_k-1(𝐆) is useless]
≥ ∏_k=1^q (1 - 2p^2 n k - 1/√(n))
≥ 1 - ∑_k=1^q (2p^2 n k + 1/√(n))
≥ 1 - p^2 n (q+1)^2 - q/√(n).
It follows that
Pr[π(𝐆) is connected] = 1 - Pr[π(𝐆) is disconnected] ≤ 1 - Pr[π(𝐆) is useless] ≤ p^2 n (q+1)^2 + q/√(n).
For q = o(1/(p√(n))) and p ≥ 1.5 ln(n)/n, this is o(1). Note that for the lower bound to be meaningful, we need p = O(1/√(n)) as otherwise the bound on q is less than 1. (Indeed,
for p=Ω(1/√(n)), s and t have a common neighbor with probability bounded away from 0 and if so 2 queries suffice).
This concludes the proof of Theorem <ref>.
§.§ d-Regular Graphs
We now proceed to random d-regular graphs. Assume dn is even, as otherwise a d-regular graph on n nodes does not exist. Similarly to our proof for the Erdős-Rényi random graphs, we will condition on a trace of the algorithm. Unfortunately, the resulting conditional distribution of a random d-regular graph is more cumbersome to analyse. We thus start by reducing to a slightly different problem.
Let ℳ_n,d denote the set of all graphs on nd nodes where the edges form a perfect matching on the nodes. There are thus nd/2 edges in any such graph. We think of the nodes of a graph G ∈ ℳ_n,d as partitioned into n groups of d nodes each, and we index the nodes by integer pairs (i,j) with i ∈ [n] and j ∈ [d]. Here i denotes the index of the group. For a graph G ∈ ℳ_n,d and a sequence of group indices p := s,i_1,…,i_m,t, we say that p is a valid s-t meta-path in G, if for every two consecutive indices a,b in p, there is at least one edge ((a,j_1), (b,j_2)) in G. A meta-path is thus a valid path if and only if s and t are connected in the graph resulting from contracting the nodes in each group.
Now consider the problem of finding a valid s-t meta-path in a graph drawn uniformly from ℳ_n,d (we write 𝐌 ∼ ℳ_n,d to denote such a graph) while asking group-incidence queries. A group-incidence query is specified by a group index i ∈ [n] and the answer to the query is the set of edges incident to the nodes {i}×{1,…,d}.
We start by showing that an algorithm 𝒜^⋆ for finding an s-t path in a random d-regular n-node graph, gives an algorithm for finding an s-t meta-path in a random 𝐌 ∼ ℳ_n,d using group-incidence queries.
If there is a (possibly randomized) algorithm 𝒜^⋆ that reports a valid s-t path with probability α in a random d-regular graph on n nodes while making q node-incidence queries, then there is a deterministic algorithm ℬ that reports a valid s-t meta-path with probability at least exp(-O(d^2))·α in a random graph 𝐌 ∼ ℳ_n,d while making q group-incidence queries.
Given an algorithm 𝒜^⋆ that reports a valid s-t path in a random d-regular graph on n nodes with probability α, we start by fixing its randomness to obtain a deterministic algorithm 𝒜' with the same number of queries that outputs a valid s-t path with probability at least α. Next, let 𝐌 ∼ ℳ_n,d. Let i_1 ∈ [n] be the first node that 𝒜' queries (which is independent of the input graph). Our claimed algorithm ℬ for reporting an s-t meta-path in 𝐌 starts by querying the group i_1. Upon being returned the set of edges {((i_1,1), (j_1,k_1)), …, ((i_1,d),(j_d,k_d))} incident to {i_1}×{1,…,d}, we contract the groups such that each edge ((i_1,h), (j,k)) is replaced by (i_1,j). If this creates any duplicate edges or self-edges, ℬ aborts and outputs an arbitrarily chosen s-t meta-path. Otherwise, the resulting set of edges {(i_1,j_1),…,(i_1,j_d)} is passed on to 𝒜' as the response to the first query i_1. The next query i_2 of 𝒜' is then determined and we again ask it as a group-incidence query on 𝐌 and proceed by contracting groups in the returned set of edges and passing the result to 𝒜' if there are no duplicate or self-edges. Finally, if we succeed in processing all q queries of 𝒜' without encountering duplicate or self-edges, ℬ outputs the s-t path reported by 𝒜' as the s-t meta-path.
To see that this strategy has the claimed probability of reporting a valid s-t meta-path, let 𝐌^⋆ be the graph obtained from 𝐌 by contracting all groups. Observe that if we condition on 𝐌^⋆ being a simple graph (no duplicate edges or self-edges), then the conditional distribution of 𝐌^⋆ is precisely that of a random d-regular graph on n nodes. It is well-known <cit.> that the contracted graph 𝐌^⋆ is indeed simple with probability at least exp(-O(d^2)) and the claim follows.
In light of Lemma <ref>, we thus set out to prove lower bounds for deterministic algorithms that report an s-t meta-path in a random 𝐌 ∼ ℳ_n,d using group-incidence queries.
Let 𝒜 be a deterministic algorithm making q group-incidence queries that reports a valid s-t meta-path with probability α in a random 𝐌 ∼ ℳ_n,d. Similarly to our proof for Erdős-Rényi graphs, we start by defining the trace of 𝒜 on a graph G ∈ ℳ_n,d. If i_1(G),…,i_q(G) ∈ [n] denotes the sequence of group-incidence queries made by 𝒜 on G and N_1(G),…,N_q(G) denotes the returned sets of edges, then for 1 ≤ k ≤ q, we define
π_k(G) = (i_1(G), N_1(G), …, i_k(G), N_k(G)).
We also let π(G) := π_q(G) denote the full trace. Call a trace τ_k = (i_1,N_1,…,i_k,N_k) connected if there is a sequence of group indices p:=s,i_1,…,i_m,t such that for every two consecutive indices a,b in p, there is an edge ((a,h), (b,k)) in ∪_i N_i. Otherwise, call the trace disconnected. Letting 𝒜(G) denote the output of 𝒜 on the graph G, we have
α = Pr[𝒜(𝐌) is valid] ≤ Pr[π(𝐌) is connected] + Pr[𝒜(𝐌) is valid | π(𝐌) is disconnected].
We bound the two terms separately, starting with the latter. So let τ = (i_1,N_1,…,i_q,N_q) be a disconnected trace in the support of π(𝐌). The output meta-path 𝒜(𝐌) = p = s,i_1,…,i_m,t of 𝒜 is determined from τ. Since τ is disconnected, there must be a pair of consecutive indices a,b in p such that there is no edge ((a,h), (b,k)) ∈ ∪_i N_i. Fix such a pair a,b. We now consider two cases. First, if either a or b is among i_1,…,i_q, then all edges incident to that group are among ∪_i N_i conditioned on π(𝐌)=τ. It thus follows that p is a valid s-t meta-path with probability 0 conditioned on π(𝐌)=τ. Otherwise, neither of a and b is among i_1,…,i_q. The set of edges ∪_i N_i specifies at most dq edges of the matching 𝐌. For any node whose matching edge is not specified by ∪_i N_i, the conditional distribution of its neighbor is uniform random among all other nodes whose matching edge is not in ∪_i N_i. For each of the d^2 possible edges ((a,h),(b,k)) between the groups a and b, there is thus a probability at most 1/(nd-1-2dq) that the edge is in 𝐌 conditioned on π(𝐌)=τ. A union bound over all d^2 such edges finally implies
Pr[𝒜(𝐌) is valid | π(𝐌)=τ] ≤ d^2/(nd-1-2dq).
Since this holds for every disconnected τ, we conclude
Pr[𝒜(𝐆) is valid|π(𝐆) is disconnected] ≤d^2/nd-1-2dq.
Next, to bound Pr[π(𝐆) is connected], we show that
Pr[π_k(𝐆) is disconnected|π_k-1(𝐆) is disconnected]
is large. So let τ_k-1 = (i_1,N_1,…,i_k-1, N_k-1) be a disconnected trace in the support of π_k-1(𝐆). The next query i_k = i_k(𝐆) of 𝒜 is fixed conditioned on π_k-1(𝐆) = τ_k-1. We have two cases. First, if i_k ∈{i_1,…,i_k-1} then no new edges are returned by the query and we conclude
Pr[π_k(𝐆) is disconnected|π_k-1(𝐆)=τ_k-1] = 1.
Otherwise, let V_s denote the subset of group-indices j for which there is a meta-path from s to j. Similarly, let V_t denote the subset of group-indices j for which there is a meta-path from t to j. We have V_s ∩ V_t = ∅. Now if i_k ∈ V_s, we have that π_k(𝐆) is connected only if there is an edge between a node (i_k, j) with j ∈ [d] and a node (b, k) with b ∈ V_t. Let r ∈{0,…,d} denote the number of nodes (i_k,j) with j ∈ [d] for which the corresponding matching edge is not in ∪_i N_i. Conditioned on π_k-1(𝐆) = τ_k-1, the neighbor of any such node is uniform random among all other nodes for which the corresponding matching edge is not in ∪_i N_i. There are at least nd-1 - 2d(k-1) such nodes. A union bound over at most rd|V_t| ≤ d^2 |V_t| pairs ((i_k,j),(b, k)) implies that π_k(𝐆) is connected with probability at most d^2|V_t|/(nd - 1-2d(k-1)). A symmetric argument gives an upper bound of d^2|V_s|/(nd-1-2d(k-1)) in case i_k ∈ V_t. Finally, if i_k is in neither of V_s and V_t, then for π_k(𝐆) to be connected there must still be an edge ((i_k,j), (a,k)) for a group a ∈ V_s. We thus conclude
Pr[π_k(𝐆) is connected|π_k-1(𝐆)=τ_k-1] ≤d^2max{|V_s|, |V_t|}/nd-1-2d(k-1)≤d^3 k/nd-1-2dq.
Since this holds for every disconnected trace τ_k-1, we finally conclude
Pr[π(𝐆) is disconnected] ≥∏_k=1^q (1-d^3 k/nd-1-2dq) ≥ 1 - ∑_k=1^q d^3 k/nd-1-2dq≥ 1-d^3 q^2/nd-1-2dq
and thus
Pr[π(𝐆) is connected] ≤d^3 q^2/nd-1-2dq.
For constant degree d, if q = o(√(n)), this is o(1). Together with Lemma <ref>, we have thus proved Theorem <ref>.
|
http://arxiv.org/abs/2307.04506v2 | 20230710115703 | Distributed Decisions on Optimal Load Balancing in Loss Networks | [
"Qiong Liu",
"Chehao Wang",
"Ce Zheng"
] | eess.SP | [
"eess.SP"
] |
Distributed Decisions on Optimal Load Balancing
in Loss Networks
Qiong Liu1, Chenhao Wang2, Ce Zheng1
1Télécom Paris, Institut Polytechnique de Paris, France
2Beijing Normal University, China
Email: [email protected], [email protected], [email protected]
==========================================================================================================================================================================================================================
When multiple users share a common link in direct transmission, packet loss and link congestion may occur due to the simultaneous arrival of traffic at the source node. To tackle this problem, users may resort to an indirect path: the packet flows are first relayed through a sidelink to another source node and then transmitted to the destination. This behavior raises two problems of packet routing, or load balancing: (1) how to maximize the total traffic in a collaborative way; (2) how self-interested users independently choose routing strategies to minimize their individual packet loss.
In this work, we propose a generalized mathematical framework to tackle the packet and load balancing issue in loss networks. In centralized scenarios with a planner, we provide a polynomial-time algorithm to compute the system optimum point where the total traffic rate is maximized. Conversely, in decentralized settings with autonomous users making distributed decisions, the system converges to an equilibrium where no user can reduce their loss probability through unilateral deviation. We thereby provide a full characterization of Nash equilibrium and examine the efficiency loss stemming from selfish behaviors, both theoretically and empirically. In general, the performance degradation caused by selfish behaviors is not catastrophic; however, this gap is not monotonic and can have extreme values in certain specific scenarios.
load balancing, Nash equilibria, price of anarchy, network congestion, sidelink
§ INTRODUCTION
Since the seminal work of Erlang <cit.>, loss networks have played a crucial role in analyzing and optimizing stochastic systems involving simultaneous resource utilization and non-backlogging workloads (for an extensive overview, see <cit.>). Meanwhile, in the post-5G era, cloud-enabled networks have emerged as a dominant architecture, where multiple servers collect data from users and relay it to a central hub for final processing. To guarantee network efficacy, that is, that no server is either overburdened or underutilized, load balancing strategies have been well studied, e.g., <cit.>. In this context, loss networks provide valuable mathematical frameworks for comprehending and enhancing load distribution within cloud-enabled networks.
Early load balancing research for cloud-enabled networks focused on centralized scenarios, where a centralized planner scheduled workloads to optimize aspects like performance-energy tradeoffs <cit.> and algorithmic considerations <cit.>. However, due to the stringent latency requirement for real-time decisions and the increasing signaling overhead caused by the large-scale deployment of servers and massive users, distributed decisions become a better solution. In this context, the complexity of the problem increases due to the non-cooperative and competitive behaviors among users within the system.
To address the challenges of load balancing in a distributed way, game theory provides a mathematical framework that describes and analyzes scenarios with interactive decisions <cit.>. Till now, some studies have demonstrated the efficacy of game-theoretic models in addressing load balancing problems. For instance, Mondal et al. <cit.> developed a game-theoretic model for load balancing among competitive cloudlets, while Yi et al. <cit.> investigated a similar problem, incorporating additional considerations of queue-aware strategies. In <cit.>, symmetric loss models where each source has an equal number of users are considered. However, previous studies mostly focused on limited cases of identical user strategies, which may not reflect real-world scenarios, i.e., different users may have different objectives and preferences. Therefore, further research is needed to develop game-theoretic models that can address the challenges of load balancing in a more general and realistic manner.
In this paper, we employ game theory to address load balancing in both distributed and centralized environments, where users have non-identical strategies and the number of users is not evenly distributed. Specifically, we consider load balancing in a cloud-enabled network consisting of m source nodes (servers) {s_1,…,s_m} and one destination node (central hub) d. Each source s_i has n_i users seeking service, and the traffic originating from each user is assumed to follow an independent Poisson point process with an identical rate. The nodes in the network are connected by two types of communication links, namely sidelinks that connect two sources, and direct links that connect a source and destination. The sidelink suffers independent and identically distributed (i.i.d.) random losses with a fixed probability q, and the direct link has a congestion loss that depends on the arrival rate and service rate of each server.
A user cannot split its traffic, and has to determine how to route all of its traffic from the source node it arrives at to the destination node. There are two approaches for the traffic transmission: a direct path (DP), in which the packet goes directly from the source it arrives at to the destination, and an indirect path (IP), in which the packet is first relayed to another source node and then takes the direct link from that node to the destination.
We treat the packet loss probability as the performance metric in load balancing, instead of additive costs like delay or fees in classical routing games <cit.>, resulting in a non-additive and non-convex optimization process. Each user aims to minimize its own loss probability and engages in a game by strategically selecting its own path. Eventually, the system reaches a Nash equilibrium (NE), where no user can reduce its loss probability by unilateral deviation.
§.§ Our Contributions
Our work contributes to the load balancing game in the following aspects: First, we prove two lemmas related to the optimal solution when a centralized planner exists. Based on these lemmas, a low-complexity algorithm that maximizes the total traffic is proposed.
Second, we study the decentralized environment where decisions are made by autonomous and self-interested users. The sufficient and necessary conditions on NE are derived, which depend on the number of users on direct path and each indirect path.
Moreover, since a NE may be suboptimal, we use the price of anarchy (PoA) <cit.> to measure the gap between the NE led by users' selfish behaviors and the system optimum achieved by the centralized planner.
The rest of the paper is structured as follows. The formal model and notations are presented in Section <ref>. In Section <ref>, we provide details to compute the optimal solution that maximizes the total traffic when a centralized planner exists. In Section <ref>, we study the NE in the decentralized decision-making scenarios, and analyzed the efficiency loss stemming from selfish behaviors. In Section <ref>, a fine-grained analysis is performed on the existence of NE in various network configurations for a specific scenario involving two source nodes.
Numerical results are presented and discussed in Section <ref>. Finally, Section <ref> concludes the paper and outlines some future work.
§.§ Other related works
Routing games.
As a special class of congestion games, routing games in a network are problems of routing traffic to achieve
the best possible network performance, and have been studied within various contexts
and within various communities, for example, the mathematics community <cit.>, the telecommunications community <cit.>, and the theoretical computer science community <cit.>. The above references all have in common a cost framework which is additive over links,
such as delays or tolls, and is flow conserving (the amount entering a node equals the amount leaving it). Routing games with non-additive cost in loss networks are studied in <cit.>.
Braess-like paradox in distributed systems.
The Braess-like paradox is said to occur in a network system with distributed behaviors if adding an extra link or adding communication capacity to the system leads to a worse system performance. It widely exists in transportation networks and queuing networks. Bean et al. <cit.> show that it can occur in loss networks.
Kameda et al. <cit.> consider a model similar to ours in that a job (packet) can be processed directly or indirectly; however, they do not consider the loss probability. They identify a Braess-like paradox in which adding capacity to the channel may degrade the system performance on the response time.
Kameda and Pourtallier <cit.> characterize conditions under which such paradoxical behavior occurs, and give examples in which the degradation of performance may increase without bound.
§ MODEL AND PRELIMINARIES
We abstractly model our problem using a graph. Consider a network with m source nodes S={s_1,…,s_m} and one destination node d. For each source node s_i∈ S, let N_i be the set of users arriving at s_i, and n_i=|N_i| be the number of such users. Without loss of generality, we assume n_1 ≥ n_2 ≥…≥ n_m. Denote [m]={1,…,m}. There is a total of n=∑_i∈[m]n_i users in the system, who are self-interested players in the game. We use the terms players and users interchangeably throughout this paper. Each user is identified with a flow (or traffic) of packets, which originates from the user and is assumed to form an independent Poisson process with an identical rate ϕ. See Fig. <ref> for illustration.
Each user controls its route that all its packets should follow. For a user associated with s_i∈ S, there are only two types of routes to ship these packets to the destination d: either a direct path (DP) (s_i,d), or an indirect two-hop path (IP) (s_i,s_j,d) for some s_j≠ s_i, in which the packet is first sent to another source s_j by the side link (s_i,s_j), and then passes through the direct link (s_j,d).
Strategies. For every source s_i, each user k∈ N_i decides a one-shot strategy 𝐩_k^(i)=( p_k1^(i),…,p_km^(i))^T∈ [0,1]^m with ∑_j∈[m]p_kj^(i)=1, where p_ki^(i) is the probability of routing all packets through DP, and p_kj^(i) (j∈[m],j≠ i) is the probability of routing all packets through IP (s_i,s_j,d).
When no confusion arises, we simply write the strategy 𝐩_k^(i) as 𝐩_k.
We focus on pure strategies in this paper: a strategy 𝐩_k is pure if ‖𝐩_k‖_∞=1, i.e., user k deterministically selects a route with probability 1 (for example, 𝐩_k=(0,1,0,…,0)^T).
Let 𝐩=(𝐩_1,…,𝐩_n) be the strategy profile of all users.
Loss probability and loss rate.
There are two types of losses: (1)
Losses on side links. We assume that a packet originating from node s_i and relayed to node s_j is lost with a fixed probability q for every side link (s_i,s_j), independently of any other loss. Denote by q̅=1-q the probability that a packet is successfully relayed. (2) Congestion losses on direct links. We assume that there is no buffer to store backlogged packets, so a packet will be lost when it enters a direct path that is occupied by the transmission of another packet. The transmission time of a packet on a direct link (s_i,d) is a random variable σ following a distribution 𝒳, and the transmission times are assumed to be independent and identically distributed (i.i.d.) across packets.
Given strategy profile 𝐩, user k∈ N_i continuously sends packets that follow an independent Poisson process with rate p_ki^(i)·ϕ to DP (s_i,d), and an independent Poisson process of packets with rate p_kj^(i)·ϕ to IP (s_i,s_j,d), for any s_j≠ s_i.
Since there is a random
loss on the side link (s_i,s_j), the flow of packets from user k∈ N_i that arrive at the node s_j is also a Poisson process with rate q̅p_kj^(i)ϕ.
Thus, for each source s_i∈ S, the flow over the link (s_i,d) is Poisson distributed with a
traffic rate T_i(𝐩) given by
T_i(𝐩)=∑_k∈ N_ip_ki^(i)ϕ + ∑_j∈[m]\{i}∑_k∈ N_j p_ki^(j)q̅ϕ.
When no confusion arises, we simply write T_i(𝐩) as T_i.
The probability of no congestion loss on the direct link (s_i,d) equals the probability that there is no arrival during a transmission time σ, which is given by
*𝔼_σ∼𝒳e^-T_iσ.
As usual practice, assume 𝒳 is an exponential distribution with a rate parameter μ (service rate) and mean 1/μ. Thus the probability of no congestion loss on (s_i,d) is
*𝔼_σ∼𝒳e^-T_iσ=∫_0^+∞μ e^-μσ e^-T_iσdσ=μ/T_i+μ,
and the loss probability on link (s_i,d) is T_i/T_i+μ.
Given the strategy profile 𝐩, for s_i∈ S and k∈ N_i, the loss rate of user k is defined as
LR_k(𝐩) =[ p_ki^(i)T_i/T_i +μ+( 1 - p_ki^(i)) q +
( 1 - q) ∑_j∈[m]\{i} p_kj^(i)T_j/T_j +μ] ϕ,
and the loss probability of user k is LR_k(𝐩)/ϕ.
Total traffic. Regarding the system efficiency, we measure it by the total traffic rate arriving at the destination d. Given the strategy profile 𝐩,
the total traffic rate TR(𝐩) of the system can be derived in two ways. The first expression is derived as the summation of successful transmission rates on direct links:
TR(𝐩)=∑_i∈ [m]T_i·μ/T_i+μ
=μ[ m-∑_i∈ [m]μ/∑_k∈ N_i p_ki^(i)ϕ +∑_ j∈[m] \{i}∑_k∈ N_jp_ki^(j)q̅ϕ+μ]
where T_i is the traffic rate over link (s_i,d), and μ/T_i+μ is the probability of no congestion loss on (s_i,d).
The second expression is from users' perspective:
TR(𝐩) :=∑_i∈ [m]∑_k∈ N_i(ϕ-LR_k),
where ϕ-LR_k(𝐩) is the traffic rate of user k∈ N_i that successfully arrives at d. It is not hard to see that (<ref>) and (<ref>) are equivalent.
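The equivalence of the two expressions can also be checked numerically. The following C++ sketch computes the total traffic rate both link by link (as in (<ref>)) and user by user (as in (<ref>)) for a pure-strategy profile summarized, for each link (s_i,d), by the number of its own users taking the direct path and the number of users relayed onto it; all parameter values and identifiers are ours and serve only as an illustration.

#include <cstdio>
#include <vector>

int main() {
    const double phi = 1.0;        // per-user packet arrival rate
    const double mu  = 1.0;        // service rate of every direct link
    const double q   = 0.3;        // side-link loss probability
    const double qbar = 1.0 - q;

    // For each direct link (s_i, d): u[i] own users on DP, v[i] users relayed onto it.
    std::vector<int> u = {3, 2, 1};
    std::vector<int> v = {0, 1, 2};
    const int m = static_cast<int>(u.size());

    // Link view: sum_i mu * T_i / (T_i + mu), with T_i the Poisson rate on (s_i, d).
    double trLinks = 0.0;
    for (int i = 0; i < m; ++i) {
        double T = u[i] * phi + v[i] * qbar * phi;
        trLinks += mu * T / (T + mu);
    }

    // User view: sum_k (phi - LR_k). A DP user on (s_i, d) delivers phi * mu / (T_i + mu);
    // a relayed user additionally survives the side link with probability qbar.
    double trUsers = 0.0;
    for (int i = 0; i < m; ++i) {
        double T = u[i] * phi + v[i] * qbar * phi;
        trUsers += u[i] * phi * mu / (T + mu);
        trUsers += v[i] * phi * qbar * mu / (T + mu);
    }

    std::printf("TR (link view) = %.6f, TR (user view) = %.6f\n", trLinks, trUsers);
    return 0;
}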
Nash equilibria. A Nash equilibrium (NE) is a strategy profile where no player can decrease its loss probability by unilaterally deviating to any other strategy. Formally, we give a definition.
A strategy profile 𝐩 is a Nash equilibrium, if for any source s_i∈ S and any player k∈ N_i, we have
LR_k(𝐩_k,𝐩_-k)≤ LR_k(𝐩_k',𝐩_-k),
where 𝐩_k' can be any feasible strategy of player k, and 𝐩_-k is the strategy profile of all other players.
We measure the efficiency of NEs by the price of anarchy (PoA) <cit.>, which is defined as the ratio between social efficiencies in an optimal solution and in the worst NE.
Formally, given an instance Γ of this game, we define
PoA(Γ)=TR(opt)/min_𝐩∈ℕ𝔼TR(𝐩).
where opt is an optimal solution of Γ, and ℕ𝔼 is the set of all NEs. The PoA of the whole game is defined as the maximum over all instances, that is, PoA=max_ΓPoA(Γ).
§ CENTRALIZED ANALYSIS
The main technical results of the paper are presented now. We show how to compute an optimal solution that maximizes the total traffic.
Note that the total traffic rate depends on the number of users working on each source by DP or IP, but not the users' identity.
Given a strategy profile 𝐩, let u_i=|{k∈ N_i | p_ki^(i)=1}| be the number of users working with DP (s_i,d), and let v_i=|{k∈ N_j, j ∈ [m]\{i} | p_ki^(j)=1}| be the number of users working with an IP through link (s_i,d). Define y_i=u_i+v_i as the number of users who choose source s_i (including both DP and IP).
In any optimal solution, for any source s_i, either u_i=n_i or v_i=0 or both hold.
Let 𝐩 be an optimal solution. Suppose for contradiction that u_i<n_i,v_i>0 for some source s_i. Then there exists a user (say, k) in N_i who chooses IP (say, (s_i,s_i',d) for some i'≠ i). Also, since v_i>0, there exist a source s_j≠ s_i and a user l∈ N_j who chooses IP (s_j,s_i,d). The total traffic rate is TR(𝐩)=μ T_i/T_i+μ+∑_w∈ [m]\{i}(μ T_w/T_w+μ).
Now we show that the total traffic rate can be improved by revising 𝐩. Let user k∈ N_i choose DP, and let user l∈ N_j choose IP (s_j,s_i',d). Fixing all others' strategies, denote the new strategy profile by 𝐩', and define u_i',v_i' accordingly. Note that u_i'=u_i+1,v_i'=v_i-1, and T_w(𝐩')=T_w(𝐩) for all source s_w≠ s_i. Since q>0, we have
T_i(𝐩')=(u_i+1)ϕ+(v_i-1)q̅ϕ> u_iϕ+v_iq̅ϕ=T_i(𝐩).
So TR(𝐩')>TR(𝐩), contradicting to the optimality.
Lemma <ref> indicates that if a source (say s_i) provides service to the users of other sources, then all users of s_i choose DP.
In any optimal solution, there must exist ĩ∈[m], such that v_l=0 for all l≤ĩ, and u_j=n_j for all j>ĩ.
Given an optimal solution 𝐩, suppose for contradiction that there exist i,j∈[m] (i<j) such that v_i>0 and u_j<n_j. By Lemma <ref>, we have u_i=n_i and v_j=0. There exists a source s_i' and a user k∈ N_i' selecting IP (s_i',s_i,d). There exists a source s_j' (j'≠ j) and a user k'∈ N_j selecting IP (s_j,s_j',d). Note that when i'=j and j'=i, users k and k' may coincide. The total traffic rate is
TR(𝐩) =μ T_i/T_i+μ+μ T_j/T_j+μ+∑_w∈ [m]\{i,j}(μ T_w/T_w+μ)
=μ(2-1/T_i+μ-1/T_j+μ)+∑_w∈ [m]\{i,j}(μ T_w/T_w+μ).
Now we show that the total traffic rate can be improved by revising 𝐩. Let user k choose IP (s_i',s_j',d) if i'≠ j' and choose DP (s_i',d) if i'=j'. Let user k'∈ N_j choose DP. Fixing all others’ strategies, denote the new strategy profile by 𝐩', and define u_i',v_i' accordingly. Note that v_i' = v_i-1, u_j' = u_j+1, and T_w(𝐩')=T_w(𝐩) for all other sources s_w ≠ s_i,s_j. Since i < j, it follows that n_i ≥ n_j > u_j. Therefore, we have
1/T_i+μ+1/T_j+μ=1/n_iϕ+v_iq̅ϕ+μ+1/u_jϕ+μ
> 1/n_iϕ+v_i'q̅ϕ+μ+1/u_j'ϕ+μ
=1/T_i'+μ+1/T_j'+μ,
which indicates that TR(𝐩)<TR(𝐩'), a contradiction.
Lemma <ref> shows that there exists a threshold ĩ:
1) if i>ĩ, all users from s_i choose DP;
2) if i≤ĩ, source s_i receives no relayed traffic (v_i=0), and its own users may split between DP and IP.
Now we are ready to present Algorithm <ref>. The main idea is searching for ĩ in Lemma <ref>. For each candidate of ĩ, let B be the number of users selecting IP, all of whom come from L={s_l | l≤ĩ}, and go to R={s_j | j>ĩ}. For every possible value of B, we compute the best possible way for extracting the B users from L and distributing them over R.
In Algorithm <ref>, step (a) is to make T_l (and thus the no-congestion probability μ/T_l+μ) as equal as possible for l∈[ĩ]. This can be realized by initializing u_l=n_l, and then removing players one by one from the highest u_l and updating, until B players have been removed. The goal of step (b) is to make T_j (and thus μ/T_j+μ) as even as possible. This can be realized by initializing v_j=0, then adding users one by one to the source j'∈argmin_j>ĩ{n_jϕ+v_jq̅ϕ+μ} and updating, until B players have been added. These two steps guarantee that the B loads are distributed in an optimal way to maximize the traffic rate.
Though the output of the algorithm is (u_i^*,v_i^*)_i∈ N, we can easily extend it to a corresponding strategy profile because the situation for each source s_i has been determined. Next we prove that its optimality.
Algorithm <ref> returns an optimal solution for maximizing the total traffic, and runs in O(mn^2) time.
In the first loop, we traverse all indexes in [m] to find the ĩ in Lemma <ref>. In the second loop, we traverse all possible numbers of users who select IP, and given any such a number B, we extract the B users from {s_l | l≤ĩ} and distribute them over {s_j | j>ĩ} in an optimal way to maximize the traffic rate. So all possible optimal solutions have been searched by the algorithm, giving the optimality.
For the time complexity, we have m iterations in the first loop, at most n iterations in the second loop, and the time for each iteration is O(n).
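For concreteness, the C++ sketch below is a direct, unoptimized transcription of this search (candidate thresholds ĩ, budgets B, and the two balancing steps) on a small instance; the instance numbers and all identifiers are ours and are not part of the paper.

#include <cstdio>
#include <vector>

// Delivered traffic for counts (u[i] direct users, v[i] relayed users on link (s_i,d)).
double traffic(const std::vector<int>& u, const std::vector<int>& v,
               double phi, double mu, double qbar) {
    double tr = 0.0;
    for (std::size_t i = 0; i < u.size(); ++i) {
        double T = u[i] * phi + v[i] * qbar * phi;
        tr += mu * T / (T + mu);
    }
    return tr;
}

int main() {
    // Sources sorted so that n[0] >= n[1] >= ... >= n[m-1].
    std::vector<int> n = {6, 4, 2, 1};
    const double phi = 1.0, mu = 1.0, q = 0.3, qbar = 1.0 - q;
    const int m = static_cast<int>(n.size());

    double best = -1.0;
    std::vector<int> bestU, bestV;

    // Candidate threshold: sources 1..iT may send users away (and receive none),
    // while sources iT+1..m keep all of their own users on DP.
    for (int iT = 0; iT <= m; ++iT) {
        int maxB = 0;
        for (int l = 0; l < iT; ++l) maxB += n[l];
        for (int B = 0; B <= maxB; ++B) {
            if (iT == m && B > 0) break;          // no source left to relay to
            std::vector<int> u(n.begin(), n.end()), v(m, 0);
            // Step (a): remove B users from the left part, always from the largest u_l.
            for (int r = 0; r < B; ++r) {
                int arg = 0;
                for (int l = 1; l < iT; ++l) if (u[l] > u[arg]) arg = l;
                --u[arg];
            }
            // Step (b): assign the B relayed users to the right part, always to the link
            // j > iT with the currently smallest traffic n_j*phi + v_j*qbar*phi.
            for (int r = 0; r < B; ++r) {
                int arg = iT;
                for (int j = iT + 1; j < m; ++j)
                    if (n[j] * phi + v[j] * qbar * phi < n[arg] * phi + v[arg] * qbar * phi)
                        arg = j;
                ++v[arg];
            }
            double tr = traffic(u, v, phi, mu, qbar);
            if (tr > best) { best = tr; bestU = u; bestV = v; }
        }
    }

    std::printf("maximum total traffic = %.6f\n", best);
    for (int i = 0; i < m; ++i)
        std::printf("source %d: u = %d, v = %d\n", i + 1, bestU[i], bestV[i]);
    return 0;
}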
Intuitively, when the transmission loss probability is sufficiently large, all packets should go through DP; when there is no transmission loss, the load of packets should be distributed evenly over all sources. We verify the intuition as follows.
If q=1, the unique optimal solution is that all users choose DP (i.e., u_i=n_i, v_i=0, ∀ i∈[m]). If q=0, a strategy profile 𝐩 is optimal if and only if |y_i-y_j|≤ 1 for all i,j∈ [m].
If q=1, TR=∑_i∈[m]μ u_iϕ/u_iϕ+μ is increasing with respect to every u_i. By the monotonicity, the optimum is achieved when u_i=n_i. If q=0, suppose for contradiction that there exist i,j∈ [m] in an optimal solution 𝐩 such that y_i-y_j≥ 2. The total traffic rate is TR=μ(m-∑_k∈ [m]μ/y_kϕ+μ). Consider a new strategy profile 𝐩' with y_i'=y_i-1,y_j'=y_j+1, i.e., a user who chooses source s_i deviates to s_j. Then the total traffic rate becomes TR'=μ(m-∑_k∈[m]\{i,j}μ/y_kϕ+μ-μ/y_i'ϕ+μ-μ/y_j'ϕ+μ)>TR, a contradiction.
§ DECENTRALIZED ANALYSIS
In this section, we study the Nash equilibria in the decentralized decision-making scenario where each user makes a decision on the choice of DP or IP.
§.§ Characterization of NEs
A NE should satisfy that: for a user selecting DP, its loss rate will not decrease if it deviates to any IP; for a user selecting IP, its loss rate will not decrease if it deviates to DP or another IP.
We formalize it as the following characterization.
Given an arbitrary strategy profile 𝐩 with (u_i,v_i)_i∈ [m], let i^*∈argmin_i∈[m]{u_i+v_iq̅}, and let x_ij∈{0,1} be an indicator where x_ij=1 if there exists at least one user selecting IP (s_i,s_j,d). Then, 𝐩 is a NE, if and only if the following conditions are satisfied:
(i) for all i∈[m] with u_i>0,
we have
q̅(u_i+v_iq̅)≤ u_i^*+v_i^*q̅+q̅+qμ/ϕ;
(ii) for all i,l∈[m] with x_il=1, we have
u_l + v_lq̅≤min{q̅( u_i + 1 + v_iq̅) -qμ/ϕ, u_i^*+ v_i^*q̅+q̅}
Proof sketch: Suppose 𝐩 is a NE. Consider any source s_i∈ S and user k∈ N_i.
Case 1. User k selects DP in 𝐩 (denoted as 𝐩_k^(i) where p_ki^(i)=1). If it deviates to IP (s_i,s_j,d) where j ≠ i (denoted as 𝐩'_k^(i) where p_kj^(i)=1), by Definition <ref>, we have LR_k(𝐩_k^(i),𝐩_-k)≤ LR_k(𝐩'_k^(i),𝐩_-k), which is equivalent to (<ref>).
Case 2. User k selects IP (s_i,s_j,d) in 𝐩. If it deviates to DP, we obtain the first part of (<ref>); if it deviates to another IP, we obtain the second part of (<ref>).
Suppose 𝐩 is a NE. Consider an arbitrary source s_i∈ S and arbitrary user k∈ N_i.
Case 1. User k selects DP in 𝐩 (denoted as 𝐩_k^(i) where p_ki^(i)=1). If it deviates to IP (s_i,s_j,d) where j ≠ i (denoted as 𝐩'_k^(i) where p_kj^i=1), by Definition <ref>, we have LR_k(𝐩_k^(i),𝐩_-k)≤ LR_k(𝐩'_k^(i),𝐩_-k). It is equivalent to
1-μ/u_iϕ+v_iq̅ϕ+μ≤ q+q̅·(1-μ/u_jϕ+(v_j+1)q̅ϕ+μ)
⇔ q̅/u_jϕ+(v_j+1)q̅ϕ+μ≤1/u_iϕ+v_iq̅ϕ+μ
⇔ q̅(u_i+v_iq̅)-qμ/ϕ≤ u_j+(v_j+1)q̅,
The above inequality should hold for all j≠ i, and thus is equivalent to Equation (<ref>).
Case 2. User k selects IP (s_i,s_l,d) in 𝐩. If it deviates to DP (s_i,d), by Definition <ref>, we should have LR_k(𝐩_k^(i),𝐩_-k)≤ LR_k(𝐩'_k^(i),𝐩_-k). It is equivalent to
q+q̅·(1-μ/u_lϕ+v_lq̅ϕ+μ)≤ 1-μ/(u_i+1)ϕ+v_iq̅ϕ+μ
⇔ 1/(u_i+1)ϕ+v_iq̅ϕ+μ≤q̅/u_lϕ+v_lq̅ϕ+μ
⇔ u_l+v_lq̅≤q̅(u_i+1+v_iq̅)-qμ/ϕ.
Moreover, a NE must guarantee that user k will not deviate to another IP (s_i,s_j,d), and thus we should have
1-μ/u_lϕ+v_lq̅ϕ+μ≤ 1-μ/u_jϕ+(v_j+1)q̅ϕ+μ
⇔ u_l+v_lq̅≤ u_j+(v_j+1)q̅.
Note that the above inequality should hold for all j≠ i,l. Therefore, we obtain Equation (<ref>).
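The conditions of Theorem <ref> can be checked mechanically for any given pure profile. The following C++ sketch does so from the routing counts; the matrix r recording how many users of source i are relayed through s_l, as well as every other identifier, is our own bookkeeping rather than notation from the paper.

#include <algorithm>
#include <cstdio>
#include <vector>

// u[i]    : users of source i on the direct path (s_i, d)
// r[i][l] : users of source i on the indirect path (s_i, s_l, d), with r[i][i] = 0
bool isNash(const std::vector<int>& u, const std::vector<std::vector<int>>& r,
            double phi, double mu, double q) {
    const int m = static_cast<int>(u.size());
    const double qbar = 1.0 - q, eps = 1e-12;

    std::vector<double> v(m, 0.0);                 // relayed users arriving on link (s_l, d)
    for (int i = 0; i < m; ++i)
        for (int l = 0; l < m; ++l)
            if (l != i) v[l] += r[i][l];

    int istar = 0;                                 // i* minimizing u_i + v_i * qbar
    for (int i = 1; i < m; ++i)
        if (u[i] + v[i] * qbar < u[istar] + v[istar] * qbar) istar = i;
    const double floorLoad = u[istar] + v[istar] * qbar;

    for (int i = 0; i < m; ++i) {
        // Condition (i): no DP user of s_i gains by switching to an indirect path.
        if (u[i] > 0 &&
            qbar * (u[i] + v[i] * qbar) > floorLoad + qbar + q * mu / phi + eps)
            return false;
        // Condition (ii): no user on IP (s_i, s_l, d) gains by switching to DP or another IP.
        for (int l = 0; l < m; ++l) {
            if (l == i || r[i][l] == 0) continue;
            double lhs  = u[l] + v[l] * qbar;
            double capA = qbar * (u[i] + 1 + v[i] * qbar) - q * mu / phi;
            double capB = floorLoad + qbar;
            if (lhs > std::min(capA, capB) + eps) return false;
        }
    }
    return true;
}

int main() {
    std::vector<int> u = {4, 3};
    std::vector<std::vector<int>> r = {{0, 1}, {0, 0}};   // one user of s_1 relays via s_2
    std::printf("NE: %s\n", isNash(u, r, 1.0, 1.0, 0.2) ? "yes" : "no");
    return 0;
}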
§.§ Price of Anarchy
We investigate the price of anarchy in this section, which measures the efficiency of NE. We give an upper bound on the optimal total traffic rate, and a lower bound on the total traffic rate of any NE.
In an optimal solution 𝐩,
the total traffic rate satisfies TR(𝐩)≤μ m (1-μ/n_1ϕ+μ).
It suffices to show that, in the optimal solution 𝐩, u_i+v_iq̅≤ n_1 holds for every i∈[m]; we prove this by contradiction. First, for every i with v_i=0, we have u_i+v_iq̅=u_i≤ n_i≤ n_1. Now suppose for contradiction that u_i+v_iq̅>n_1 for some i with v_i>0. Then there exists a source s_j and a user k∈ N_j that chooses the IP (s_j,s_i,d), i.e., u_j<n_j. By Lemma <ref>, it must be v_j=0, and thus T_j=u_jϕ. Denote the strategy as 𝐩 with p_ki^(j)=1.
The total traffic rate is
TR(𝐩)=μ( m - μ/T_j + μ - μ/T_i + μ - ∑_w∈[m]\{j}μ/T_w + μ)
We show that the total traffic rate can be improved with user k ∈ N_j deviating from IP (s_j, s_i, d) to DP (s_j,d).
Fixing the strategies of all others, denote by 𝐩' the new strategy profile, and define (u'_w,v'_w,T'_w)_w∈[m] accordingly. Note that u_j'=u_j+1, v_i'=v_i-1, u_i'=u_i, and T_w'=T_w for any w∈[m]\{j}. Since u_i+v_iq̅>n_1≥ n_j≥ u_j, we have
1/T_j+μ+1/T_i+μ=1/u_jϕ+μ+1/u_iϕ+v_iq̅ϕ+μ
> 1/u_j'ϕ+μ+1/u_i'ϕ+v_i'q̅ϕ+μ= 1/T_j'+μ+1/T_i'+μ.
This indicates that TR(𝐩')>TR(𝐩), a contradiction. Consequently, u_i+v_iq̅≤ n_1 for all i∈[m]. According to (<ref>), we have TR(𝐩)≤μ m(1-μ/n_1ϕ+μ).
Let z=min{n_m,n/4m-q̅-qμ/ϕ}. For every NE 𝐩, the total traffic rate satisfies TR(𝐩)≥μ(m-mμ/zϕ+μ).
Let i^*∈argmin_i∈[m]{u_i+v_iq̅}. Since TR(𝐩)≥μ m(1-μ/(u_i^*+v_i^*q̅)ϕ+μ), it suffices to prove that u_i^*+v_i^*q̅≥ z. If u_i^*+v_i^*q̅≥ n_m, we are done. We only need to consider the case when u_i^*+v_i^*q̅< n_m≤ n_i^*. In this case, there exist some users in N_i^* selecting IP. By Equation (<ref>), we have
u_i^*+v_i^*q̅≤q̅(u_i^*+1+v_i^*q̅)-qμ/ϕ.
By Theorem <ref>, for each i∈[m], if u_i>0, then q̅(u_i+v_iq̅)≤ u_i^*+v_i^*q̅+q̅+qμ/ϕ; if v_i>0, then u_i+v_iq̅≤ u_i^*+v_i^*q̅+q̅. In both cases, we obtain u_i+v_i/2≤ 2(u_i^*+v_i^*q̅+q̅+qμ/ϕ).
Summing up over all i∈[m], we have
n/2≤∑_i∈[m](u_i+v_i/2)≤ 2m (u_i^*+v_i^*q̅+q̅+qμ/ϕ),
which implies that u_i^*+v_i^*q̅≥n/4m-q̅-qμ/ϕ.
For any instance with m sources, the price of anarchy is PoA≤ 1+n_1μ/n_1zϕ+zμ, where z=min{n_m,n/4m-q̅-qμ/ϕ}.
Combining the upper bound in Lemma <ref> and the lower bound on TR(𝐩) for any NE 𝐩 in Lemma <ref>, it follows
PoA ≤m - mμ/n_1ϕ + μ/m - mμ/zϕ + μ = n_1(zϕ+μ)/z(n_1ϕ+μ) = 1 + n_1μ/n_1zϕ + zμ.
§ A PARTICULAR CASE: TWO SOURCES
In this section, we focus on the special case of m=2. That is, there are only two sources s_1 and s_2. Assume w.l.o.g. that n_1≥ n_2. For each user k ∈ N_i, there is only one IP. Accordingly, its strategy becomes 𝐩_k^(i) = (p_k1^(i),p_k2^(i)), i = 1, 2.
And we have
n_1 = u_1 + v_2;
n_2 = u_2 + v_1.
The traffic rate T_i(𝐩) in (<ref>) is rephrased as
T_i(𝐩) = ∑_k∈ N_ip_ki^(i)ϕ + ∑_k∈ N_j, j≠ i p_ki^(j)q̅ϕ = u_i ϕ + v_i q̅ϕ.
Given strategy profile 𝐩, the set N is further partitioned into 4 subsets (V_1,V_2,V_3,V_4) where V_1={k∈ N_1 | 𝐩_k^(1) =(1, 0)}, V_2={k∈ N_1 | 𝐩_k^(1) = (0,1)}, V_3={k∈ N_2 | 𝐩_k^(2)=(0,1)} and V_4={k∈ N_2 | 𝐩_k^(2)=(1,0)}.
Clearly, users in V_1 and V_3 choose DP, and users in V_2 and V_4 choose IP.
Suppose 𝐩 is a NE. We study the deviation of users in V_1,V_2,V_3,V_4, respectively.
For user k∈ V_1, the strategy is 𝐩_k^(1) = (p_k1^(1), p_k2^(1))=(1, 0), and the loss rate in (<ref>) is
LR_k( 𝐩 )
=ϕ T_1(𝐩)/T_1(𝐩)+μ = [1 - μ/ϕ/u_1 + v_1q̅ + μ/ϕ] ϕ.
When user k∈ V_1 deviates to IP, the strategy profile becomes 𝐩' = ( 𝐩'_k^(1), 𝐩_-k) where 𝐩_k'^(1) =(0, 1).
The loss rate of user k becomes
LR_i(𝐩') = qϕ+q̅ϕT_2(𝐩')/T_2(𝐩') +μ=[1 -q̅μ/ϕ/u_2 + (v_2 + 1)q̅+μ/ϕ] ϕ.
Since 𝐩 is NE, k has no incentive to deviate, and thus LP_k(𝐩)≤ LR_k(𝐩'),
which is equivalent to
t_1(u_2) : = qμ/ϕ+ u_2(1 +q̅^2) + (n_1 + 1)q̅-n_2q̅^2/2q̅≥ u_1,
where t_1(u_2) is a function with respect to variable u_2.
For user k∈ V_2 with strategy 𝐩_k^(1) =(0, 1), the loss rate is
LR_k(𝐩) = qϕ + q̅ϕT_2(𝐩)/T_2(𝐩) + μ = [1 - q̅μ/ϕ/u_2 + v_2q̅ + μ/ϕ] ϕ.
When user k∈ V_2 deviates to DP, the strategy profile becomes 𝐩' = ( 𝐩'_k^(1), 𝐩_-k) where 𝐩_k'^(1)=(1, 0).
The loss rate of k becomes
LR_k(𝐩') = ϕ T_1(𝐩')/T_1(𝐩') + μ = [ 1 - μ/ϕ/u_1+1+v_1q̅+μ/ϕ]ϕ.
Since 𝐩 is NE, we have LR_k(𝐩)≤ LR_k(𝐩'), that is,
u_1≥qμ/ϕ+ u_2(1 +q̅^2) + (n_1 - 1)q̅- n_2q̅^2/2q̅ = t_1(u_2) - 1.
Symmetrically, for each user k∈ V_3 and k∈ V_4, since 𝐩 is NE, we have
t_2(u_1)-1 ≤ u_2 ≤ t_2(u_1),
where
t_2(u_1) := qμ/ϕ + u_1(1+q̅^2)+(n_2+1)q̅ - n_1q̅^2/2q̅.
Note that Eqs. (<ref>)-(<ref>) are the sufficient and necessary conditions for an arbitrary strategy profile 𝐩 to be a NE.
Now we are ready to give a characterization of NEs.
Let 𝐩 be an arbitrary strategy profile for the game with two sources. Let u_1 and u_2 be the number of users in N_1 and N_2 who choose DP under 𝐩, respectively. We have
[1] when (a) u_1=n_1,u_2<n_2, or (b) u_1=0,u_2>0, 𝐩 cannot be a NE;
[2] when u_1∈[0,n_1),u_2∈[0,n_2), 𝐩 is NE if and only if u_1≥ t_1(u_2)-1 and u_2≥ t_2(u_1)-1;
[3] when u_1∈(0,n_1),u_2=n_2, 𝐩 is NE if and only if u_1∈[t_1(u_2)-1,t_1(u_2)];
[4] when u_1=n_1,u_2=n_2, 𝐩 is NE if and only if n_1q̅≤ qμ/ϕ+n_2+q̅.
Given 𝐩, let (V_1,V_2,V_3,V_4) be a partition of N as defined above. We discuss the four cases.
Case 1. When (a) u_1=n_1 and u_2<n_2, V_4 is nonempty. If 𝐩 is a NE, it must satisfy t_2(u_1)-1≤ u_2.
However, because q̅u_2≤ n_1,q̅u_2≤ (n_2-1)q̅ and qμ/ϕ>0, it cannot hold.
When (b) u_1=0 and u_2>0, V_2 is nonempty. If 𝐩 is a NE, it must satisfy Eq. (<ref>), that is, u_1≥ t_1(u_2)-1. It follows that 0=2q̅u_1≥ qμ/ϕ+u_2(1+q̅^2)+(n_1-1)q̅-n_2q̅^2≥ qμ/ϕ+1+(n_1-1)q̅-(n_2-1)q̅^2≥ qμ/ϕ+1>0, a contradiction.
Case 2. When u_1∈[0,n_1),u_2∈[0,n_2), V_2 and V_4 are nonempty. It is easy to see that 𝐩 is NE if and only if u_1≥ t_1(u_2)-1 and u_2≥ t_2(u_1)-1 are satisfied simultaneously.
Case 3. When u_2=n_2,u_1∈(0,n_1), V_1,V_2,V_3 are nonempty, and V_4 is empty. 𝐩 is NE if and only if t_1(u_2)-1≤ u_1≤ t_1(u_2) and u_2≤ t_2(u_1) hold simultaneously. Moreover, note that u_2≤ t_2(u_1) is implied by u_1≥ t_1(u_2)-1. Therefore, the sufficient and necessary condition for NE is u_1∈[t_1(u_2)-1,t_1(u_2)].
Case 4. When u_1=n_1,u_2=n_2, V_1,V_3 are nonempty, and V_2,V_4 are empty. 𝐩 is NE if and only if u_1≤ t_1(u_2) and u_2≤ t_2(u_1). It is easy to see that, it is equivalent to n_1q̅≤ qμ/ϕ+n_2+q̅.
Note that every situation of u_1,u_2 is included in the above four cases. So we complete a characterization.
Case 4 can be intuitively explained by considering the sidelink loss probability q over link (s_1,s_2). If q is sufficiently high, no user would prefer the indirect path, and selecting the direct path would be a NE for all users. Conversely, when there is no transmission loss over sidelink (s_1,s_2) (i.e., q=0), every user would prefer to use the source with fewer users. Therefore, the profile of all users selecting DP is a NE only if the user distribution between the two sources is as even as possible, with n_1≤ n_2+1. Based on Theorem <ref>, we give some interesting conclusions.
If a strategy profile with u_1=n_1,u_2=n_2 is optimal, then it is also a NE.
A strategy profile with u_1=0,u_2=0 is a NE, if and only if (a) n_1=n_2+1,q̅=1, or (b) n_1=n_2,n_1(1-q̅^2)≤q̅-qμ/ϕ.
Note that u_1≥ t_1(u_2)-1 and u_2≥ t_2(u_1)-1 cannot hold simultaneously when q>2/n, and u_1≥ t_1(n_2)-1 cannot hold when n_1q̅< qμ/ϕ+n_2+q̅.
When n_1q̅< qμ/ϕ+n_2+q̅ and q>2/n, the unique NE is that all users choose DP, i.e., u_1=n_1,u_2=n_2.
We end this section by proving the existence of NE.
For any game instance with two sources, there exists a NE with u_1>0 and u_2=n_2.
By Theorem <ref> (4), if n_1q̅≤ qμ/ϕ+n_2+q̅, then the strategy profile that all users choose DP (i.e., u_1=n_1,u_2=n_2) is a NE. Otherwise, n_1q̅> qμ/ϕ+n_2+q̅. Let m̃ be an integer in interval [qμ/ϕ+n_2+n_1q̅-q̅/2q̅,qμ/ϕ+n_2+n_1q̅+q̅/2q̅]=[t_1(n_2)-1,t_1(n_2)], which always admits at least one integer. Note that n_1>qμ/ϕ+n_2+n_1q̅+q̅/2q̅≥m̃>0. By Theorem <ref>, a strategy profile with u_1=m̃ and u_2=n_2 is a NE.
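Theorem <ref> also makes it straightforward to enumerate all pure NEs of a two-source instance and to compare the worst of them against the optimum, which is the quantity examined in the experiments below. The C++ sketch that follows does this by brute force; the instance parameters are arbitrary and the code is only illustrative.

#include <algorithm>
#include <cstdio>

int main() {
    const double phi = 1.0, mu = 1.0, q = 0.3, qbar = 1.0 - q;
    const int n1 = 6, n2 = 3;                      // n1 >= n2, and q < 1 so that qbar > 0

    auto t1 = [&](int u2) {
        return (q * mu / phi + u2 * (1 + qbar * qbar) + (n1 + 1) * qbar - n2 * qbar * qbar)
               / (2 * qbar);
    };
    auto t2 = [&](int u1) {
        return (q * mu / phi + u1 * (1 + qbar * qbar) + (n2 + 1) * qbar - n1 * qbar * qbar)
               / (2 * qbar);
    };
    auto TR = [&](int u1, int u2) {                // total traffic; v1 = n2-u2, v2 = n1-u1
        double T1 = u1 * phi + (n2 - u2) * qbar * phi;
        double T2 = u2 * phi + (n1 - u1) * qbar * phi;
        return mu * T1 / (T1 + mu) + mu * T2 / (T2 + mu);
    };

    double opt = 0.0, worstNE = 1e30;
    for (int u1 = 0; u1 <= n1; ++u1) {
        for (int u2 = 0; u2 <= n2; ++u2) {
            opt = std::max(opt, TR(u1, u2));
            bool ne;
            if ((u1 == n1 && u2 < n2) || (u1 == 0 && u2 > 0))
                ne = false;                                            // case [1]
            else if (u1 < n1 && u2 < n2)
                ne = (u1 >= t1(u2) - 1) && (u2 >= t2(u1) - 1);         // case [2]
            else if (u1 > 0 && u1 < n1 && u2 == n2)
                ne = (u1 >= t1(u2) - 1) && (u1 <= t1(u2));             // case [3]
            else
                ne = n1 * qbar <= q * mu / phi + n2 + qbar;            // case [4]: u1=n1, u2=n2
            if (ne) {
                std::printf("NE: u1=%d u2=%d  TR=%.4f\n", u1, u2, TR(u1, u2));
                worstNE = std::min(worstNE, TR(u1, u2));
            }
        }
    }
    std::printf("empirical PoA on this instance = %.4f\n", opt / worstNE);
    return 0;
}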
§ NUMERICAL EXPERIMENTS
Through numerical simulations, we explore the impact of traffic condition on network performance, i.e., the total traffic rate and PoA.
Recall that the traffic flow originating from each user is Poisson with rate ϕ,
the service rate of each direct link is μ, and the loss probability over each side link is q. Assume ϕ=1 for normalization.
We first present the simulation results for two-source networks. In Fig. <ref>, the PoA and the total traffic are plotted under different q, μ and n_1, showing a PoA of less than 1.08. Such a small gap between the optimal solution and the worst NE suggests that the gain of centralized decision-making over decentralized decision-making is marginal most of the time. As shown in Fig. <ref>, the total traffic decreases with the increase of q, i.e., with the increased loss rate on the sidelink. On the other hand, the PoA starts increasing from 1 at q=0, where the NE and the optimal solution coincide with u_1 = n_2 + v_1. That is, we have an equal number of users on (s_1,d) and (s_2,d), counting both IP and DP. With the increase of q, the benefit of centralized decision-making is gradually unveiled. However, when q reaches a certain value, the PoA goes down to 1 quickly. An intuitive explanation is that, when q becomes larger than the loss probability on DP, no user will choose IP in a NE, and this strategy profile is optimal as well.
In Fig. <ref>, the traffic rates grow with the increase of μ due to the increased probability of no congestion in (<ref>). That is, a high service rate helps clear collisions and relieve congestion on both DP and IP. The PoA curve indicates that, in either overloaded or lightly congested scenarios, there is little improvement from centralized decision-making. In Fig. <ref>, the increased number of users leads to an increase of the traffic rate in spite of the rise in the loss rate. Moreover, the PoA tends towards 1 for both small and large n_1, because the strategies in the optimal solution and the NE are similar for users at source s_1, i.e., DP in the less biased scenario and IP in the severely biased scenario.
In the multi-source network, while the optimal solution can be easily computed by Algorithm <ref>, it is difficult to find all NEs even given Theorem <ref>. Hence, we only consider small values of m and n (i.e., m=3). The service rate and traffic arrival rate are fixed as μ=1, ϕ=1. Results are given in Fig. <ref>, which shows trends similar to those in Fig. <ref>. First, the growth of the total traffic slows down gradually because, given the service rate, an increase of n_1 aggravates the network congestion. Second, the increase of the loss rate on the sidelink leads to an increase of the loss rate on IP. As a result, more users choose DP instead, which in turn worsens the network congestion.
Figure <ref> plots the performance for a range of q. When q=0 and q=1, the PoA is exactly 1.
The PoA converges to 1 when q goes to 1 because, when the service rate is large enough compared with the arrival rate, the congestion loss is sufficiently small and all users prefer to choose DP.
§ CONCLUSION
In this work, we give a theoretical analysis of a load balancing game in cloud-enabled networks, in which the users want to minimize the loss probability of their packets with suitable routing strategies. In the centralized analysis, an efficient algorithm for maximizing the total traffic rate is proposed, according to Lemma <ref> and Lemma <ref>. In the decentralized analysis, a characterization of Nash equilibrium is given, and the PoA is investigated. Numerical experiments show that the efficiency loss due to selfish behaviors is relatively small in most cases.
There are many future directions that are worth exploring. First, we only focus on pure strategies of players in this work, and an immediate and natural question is how the users act when mixed strategies are allowed. Second, it would be interesting to investigate heterogeneous servers (source nodes) where each s_i serves a different purpose or has a different service rate μ_i. Moreover, while we only consider direct path and one-hop indirect paths, a more general scenario where players can choose multi-hop indirect paths to the destination can be taken into consideration.
|
http://arxiv.org/abs/2307.04339v1 | 20230710043044 | Miriam: Exploiting Elastic Kernels for Real-time Multi-DNN Inference on Edge GPU | [
"Zhihe Zhao",
"Neiwen Ling",
"Nan Guan",
"Guoliang Xing"
] | cs.DC | [
"cs.DC",
"cs.AI"
] |
Many applications, such as autonomous driving and augmented reality, require the concurrent running of multiple deep neural networks (DNNs) that pose different levels of real-time performance requirements. However, coordinating multiple DNN tasks with varying levels of criticality on edge GPUs remains an area of limited study. Unlike server-level GPUs, edge GPUs are resource-limited and lack hardware-level resource management mechanisms for avoiding resource contention. Therefore, we propose Miriam, a contention-aware task coordination framework for multi-DNN inference on edge GPU. Miriam integrates two main components, an elastic-kernel generator and a runtime dynamic kernel coordinator, to support mixed-criticality DNN inference. To evaluate Miriam, we build a new DNN inference benchmark based on CUDA with diverse representative DNN workloads. Experiments on two edge GPU platforms show that Miriam can increase system throughput by 92% while incurring less than 10% latency overhead for critical tasks, compared to state-of-the-art baselines.
Miriam: Exploiting Elastic Kernels for Real-time Multi-DNN Inference on Edge GPU
Zhihe Zhao, Neiwen Ling, Nan Guan, Guoliang Xing
August 12, 2023
================================================================================
§ INTRODUCTION
Deep learning (DL) has become a catalyst for a wide range of applications running on the edge, such as augmented reality and autonomous driving. These applications typically require the concurrent execution of multiple DNN tasks that have varying levels of criticality. For example, in mobile augmented reality, DNN inference tasks are often used for gesture recognition and user behaviour analysis, which are key components in providing a seamless user experience. This presents a major challenge as mobile/edge devices are constrained by limited computational resources for running multi-DNN inference tasks in real-time.
To support multiple DNN-based applications that have different real-time requirements <cit.>, a common practice is to share an edge Graphics Processing Unit (GPU). However, this practice poses significant challenges. On the one hand, when executing multiple DNNs simultaneously, their contention over the limited onboard resources on the same edge GPU can result in a performance bottleneck <cit.>.
On the other hand, dedicating the entire GPU to latency-critical tasks to guarantee their real-time requirements results in low GPU utilization <cit.>. Meanwhile, most of the approaches that attempt to support concurrent DNN inference tasks on GPU <cit.> require runtime support from vendors like NVIDIA Multi-Process Service (MPS) and Multi-Instance GPU (MIG) <cit.>, which are unavailable on edge GPUs due to the architectural differences.
Furthermore, multi-DNN inferences present two potentially conflicting objectives. Firstly, it is imperative that critical DNN tasks are given priority over other tasks in order to minimize end-to-end latency. This necessitates that the critical tasks are treated as first-class citizens on the GPU, with no interference from other tasks. Secondly, in order to achieve high overall throughput, all co-running DNN tasks should be concurrently executed in a best effort manner. These two conflicting objectives pose a major challenge for efficiently coordinating the inferences of multiple DNN tasks on edge GPU.
In this paper, we propose a new system named Miriam which aims to support real-time multi-DNN inference on edge GPUs by addressing the latency and throughput problems of co-running multiple DNN inference tasks. The key idea of Miriam is based on the elastic kernel [Kernel here refers to a small program that is executed on a GPU to perform the specific DNN kernel computations.], which can achieve more fine-grained resource mappings on GPU. Specifically, traditional kernels are elasticized by breaking them down into smaller, more flexible units that can be dynamically scheduled and remapped to different GPU resources based on their priority and criticality. This elasticization approach enables the padding of other GPU kernels, which maximizes GPU utilization without causing significant resource contention. As a result, critical tasks can be prioritized without compromising overall system throughput, thus improving the real-time performance of the system.
Our design is based on the key observation that the latency degradation of co-running DNN kernels is mainly caused by two dominant factors, namely intra-streaming-multiprocessor (intra-SM) resource contention and inter-SM resource contention.
We leverage elastic kernels to address those two kinds of resource contention. Specifically, Miriam integrates two main components. The first component, the elastic-kernel generator, consists of an elastic grid/block generator that generates resource-controllable GPU kernels to resolve co-running DNN tasks resource contention, and a source-to-source kernel transformer that converts original GPU kernels into elastic kernels while preserving computation consistency. We also design a dynamic runtime coordinator to schedule the elastic kernels to proactively control the execution of the co-running kernel at runtime.
To evaluate the effectiveness of Miriam, we implement it as a hybrid framework based on CUDA, C++, and Python. We use a set of multi-DNN inference benchmarks for edge GPUs that include tasks with different priorities to evaluate the system's effectiveness. Our results demonstrate that, compared to existing methods, Miriam can serve significantly more requests with up to 92% throughput improvement while maintaining the inference speed for critical tasks with only a 10% increase in latency. These results highlight Miriam's superior performance in achieving efficient coordination of real-time multi-DNN inference tasks on edge GPUs.
§ RELATED WORK
To enable multi-DNN inference on edge devices, prior methods such as joint DNN model compression sacrifice a modest level of accuracy for each model to reduce the computational costs of mixed DNN workloads <cit.>. In contrast, Miriam does not compromise on accuracy and can be seen as an orthogonal approach to the above systems. Other methods address this problem through new compiling techniques. For example, Veltair <cit.> proposes to generate multiple versions of compiled DNN models with different intensities of resource contention for scheduling at runtime to accelerate multi-DNN inference. However, these methods also lead to issues such as high overhead in storage and offline profiling, making them hard to scale to more use cases.
Systems like DeepEye <cit.>, Abacus <cit.>, and Dart <cit.> have utilized the interleaving of operators with different "contention channels" (memory-bound or compute-bound). Although these methods have proven to be effective, they require time-consuming offline profiling and are cumbersome to generalize for new DNN tasks. REEF <cit.> addresses the same problem of mixed-critical multi-DNN inference coordination and achieves kernel-level preemption for critical tasks. However, the approach requires modification of the GPU driver library, which is not practical on many popular closed-source devices. Heimdall <cit.> and Band <cit.> also target resource contention in multi-DNN inference, but they have different application settings from ours.
Warped-Slicer <cit.> employs performance versus computing unit occupancy curves for selecting an optimized simultaneous kernel pattern, but the method fails to address resource contention between kernels. Works such as HSM <cit.> and <cit.> model the latency degradation of concurrent GPU kernel executions based on hardware information, but the predictors built in these works are difficult to adapt to real-world multi-DNN inference scenarios that are characterized by nondeterministic kernel overlapping <cit.>. Other works such as Smcentric <cit.> and Effisha <cit.> tackle the GPU multitasking problem from resource management perspectives in a space-multiplexing manner <cit.>, which is orthogonal to Miriam's approach.
§ BACKGROUND
In this paper, we present the design and implementation of Miriam based on the CUDA programming model for NVIDIA GPUs <cit.>. We first introduce some CUDA terminology. Fig. <ref> shows the layout of an NVIDIA Jetson TX2 GPU, which consists of two SMs, each capable of running up to a maximum number of GPU threads, and both SMs share the global memory.
CUDA Programming Model. A CUDA GPU has a number of Streaming Multiprocessor (SM). Each SM contains multiple cores, which are the processing units that execute the instructions of the threads. All cores within the same SM share the same set of registers and can communicate with each other through shared memory.
Code executed by the GPU is known as a GPU kernel <cit.>.
Threads are the smallest unit of work that can be executed in parallel on a GPU, and they are organized into blocks. Each block is a group of threads that can execute concurrently on a single SM.
A grid is a collection of blocks that are organized in a three-dimensional array.
The grid defines the overall structure of the data being processed and how it is partitioned into blocks.
GPU streams are a way of organizing and executing asynchronous tasks on the GPU. Each stream is a sequence of kernels (e.g. Conv, MemCopy) that can be executed independently of other streams. Kernels in the same stream are executed in a FIFO manner <cit.>.
Kernel Execution on GPU. When launching a kernel in CUDA, we specify the dimensions of the grid and blocks. Each block is dispatched to and executed on one SM. However, whether a block can be dispatched to an SM that already has a block executing on it depends on whether there are enough remaining resources, such as thread slots and shared memory, to accommodate the new block. If there is no available SM to accommodate a block, it has to wait in a queue in a first-in, first-out (FIFO) order.
When a kernel executes on an SM, it competes for on-SM resources, such as thread slots and shared memory, with other kernels already dispatched to and executing on the same SM. This competition greatly affects the execution time of a kernel on the SM. Thus, the varying time a block waits in the queue, in addition to the varying time it takes to execute its workload on the SM, contributes to the overall varying latency experienced by the kernel.
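To make the above terminology concrete, the toy CUDA example below launches a kernel with an explicit grid/block configuration; it is a textbook SAXPY rather than a DNN kernel, and the sizes chosen are arbitrary.

#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles one element; threads are grouped into blocks, and blocks are
// dispatched onto SMs by the hardware scheduler.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global (logical) thread index
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    dim3 block(256);                          // threads per block: the intra-SM footprint
    dim3 grid((n + block.x - 1) / block.x);   // number of blocks: the inter-SM workload
    saxpy<<<grid, block>>>(n, 2.0f, x, y);    // the launch fixes both dimensions
    cudaDeviceSynchronize();

    std::printf("y[0] = %f\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}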
§ MOTIVATION AND CHALLENGES
Miriam aims to support co-running DNN inference tasks on edge GPU for real-time applications. Tasks that have strict real-time requirements are referred to as critical tasks. For example, obstacle detection in autonomous driving must be finished by a certain deadline, allowing sufficient time for the vehicle to maneuver around obstructions. Tasks that do not have strict real-time deadlines are referred to as normal tasks. For example, monitoring human drivers' emotions and fatigue can be executed in a best-effort manner to improve the driving experience.
Miriam aims to meet the real-time requirement for latency-critical tasks while maximizing the overall throughput of co-running normal tasks in a dynamic manner. One common solution is to sequentially execute critical tasks and normal tasks, which can yield the lowest latency for critical task execution, but at the cost of significantly reduced overall throughput. An alternative solution is to directly execute multiple DNN tasks on the same edge GPU without proper contention management. However, this can cause increased latency for critical tasks.
Here we investigate performance degradation caused by the simultaneous execution of multiple DNN tasks. When running alone on an edge GPU, GPU kernel execution time for DNN inferences tends to remain consistent. However, the simultaneous execution of multiple DNN tasks on an edge GPU can significantly impact performance. To study this effect, we conducted an experiment using CUDA multi-stream on an NVIDIA RTX 2060 GPU where we launched a DNN task (i.e., ResNet50) with different co-runners in a closed-loop manner. In Fig. <ref> (left), we present the cumulative distribution function (CDF) of the ResNet50 latency with various co-running tasks. The results show that the latency of ResNet50 ranges from 4.4 ms to roughly 16.2 ms when co-running with VGG16, while the solo-running latency is 4.2 ms, yielding a significant variation. Meanwhile, the latency distribution pattern for different co-running model settings also varies a lot.
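The co-running setup in this experiment boils down to issuing independent kernels on separate CUDA streams so that the hardware is free to overlap them. The stripped-down CUDA sketch below shows only that mechanism; in the actual experiment the synthetic kernel is replaced by the kernels of ResNet50 and its co-runners.

#include <cuda_runtime.h>

__global__ void busyKernel(float* buf, int n, int iters) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float v = buf[i];
    for (int k = 0; k < iters; ++k) v = v * 1.0001f + 0.5f;   // synthetic compute load
    buf[i] = v;
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    dim3 block(256), grid((n + 255) / 256);
    // Kernels issued to different streams have no ordering constraint between them, so
    // their thread blocks may be co-scheduled on the same SMs; this is exactly where the
    // intra-/inter-SM contention discussed next comes from.
    busyKernel<<<grid, block, 0, s1>>>(a, n, 2000);
    busyKernel<<<grid, block, 0, s2>>>(b, n, 2000);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}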
The primary factor that results in these large variations in latency is the complex resource contention among the co-running tasks, which can be classified into intra-SM contention and inter-SM contention, as is shown in Fig. <ref> (right). The latency experienced by a GPU kernel depends not only on the time it takes for the workload to execute on the SM (affected by intra-SM contention) but also on the time it takes for the workload to wait to be dispatched to the SM (affected by inter-SM contention). Intra-SM contention and inter-SM contention are two types of resource contention among co-running tasks on a GPU. Intra-SM contention refers to the contention within an SM, which can occur when multiple thread blocks from different kernels are dispatched to the same SM and compete for shared resources, such as registers, shared memory, and execution units. Inter-SM contention refers to the contention among SMs, which can occur when multiple thread blocks from different kernels are dispatched to different SMs and compete for shared resources, such as global memory and memory controllers. These two types of contention can cause significant performance degradation and latency variation for co-running tasks on a GPU.
Thus, given two incoming DNN task queues for normal task τ^normal and critical task τ^critical, to maximize the overall task throughput while guaranteeing the real-time performance of critical tasks, it is crucial to carefully manage the contention that arises from multiple overlapping kernels during co-execution. Our design objective is: to mitigate the latency degradation of the critical kernel during concurrent execution with the normal kernel by resolving inter- and intra-SM contention while allocating idle SM resources to the normal kernel as much as possible.
§ MIRIAM OVERVIEW
We now introduce Miriam, a holistic kernel-level system for real-time multi-DNN inference on edge GPU. Miriam is a compiler-runtime synergistic framework that achieves fine-grained kernel-level GPU resources mapping. In this section, we first introduce the key idea of Miriam and then describe its system architecture.
§.§ Key Idea
In Section <ref>, we show that it is imperative to give careful consideration to the resource contention that arises between multiple parallel kernels. Failure to do so can result in GPU under-utilization and degradation of inference latency.
Motivated by these findings, Miriam proposes a new DNN kernel inference abstraction, the elastic kernel, which is a GPU kernel with adjustable grid size and block size. Different grid/block sizes of the elastic kernel correspond to different patterns of SM-level GPU resource usage. By transforming normal kernels into elastic kernels, Miriam can control their resource contention with the critical task, and thus maximize the overall system throughput without compromising the real-time performance of the critical kernel.
To this end, Miriam generates an elastic kernel for each normal task offline and enables kernel coordination at runtime. Specifically, Miriam employs a novel elastic kernel generator to construct an elastic kernel with adjustable GPU resource usage patterns. During the runtime phase, the coordinator will select the best implementation patterns of the elastic kernels and dynamically pad them with the critical kernels to fully utilize the GPU resource.
§.§ System Architecture
Fig. <ref> shows a bird's-eye view of Miriam. Miriam incorporates two parts: Offline Elastic Kernel Generation and Online Kernel Coordination, working at the levels of compilation, i.e., source-to-source code transformation, and kernel coordination, respectively. They collaborate to exploit elastic kernels for supporting multi-DNN inference on edge GPUs.
Miriam generates elastic kernels by transforming compiler-generated or handcrafted CUDA kernels into the elastic form. We generate elastic kernels from both the grid and the block perspectives of GPU kernels, which are called elastic grid and elastic block, respectively. These configuration knobs can achieve fine-grained control over inter- and intra-SM resources.
There are two challenges here for generating elastic kernels. First, the design space of elastic kernel implementation patterns is too large (e.g., 2874 on average for a single kernel in AlexNet <cit.>).
Hence, we shrink the design space to decrease the number of potential elastic kernel candidates by taking the hardware limitations into consideration. Second, when a kernel is launched in CUDA, the execution configuration specifies the number of threads to be launched and how they are organized into blocks and grids. Modifying the grid and block size of a DNN kernel directly can cause computation errors because it affects how threads are organized and executed on the GPU. To address this, Miriam includes a novel source-to-source kernel transformer, which transforms the GPU program of a given DNN kernel into an elastic kernel execution paradigm while ensuring the consistency of computation results.
Miriam adopts a novel dynamic kernel coordination mechanism that controls the execution of elastic and critical kernels at run-time. Specifically, Miriam will profile the SM occupancy of each elastic kernel and of the critical kernels. Then, it determines the grid size and block size of the next elastic kernel from the normal task queue at runtime. In this way, tasks with elastic kernels can maximize resource utilization without interfering with other co-running critical kernels.
A key challenge here is that an elastic kernel may be executed alone or in parallel with different critical kernels. Hence, we cannot determine the scheduling of the elastic kernel at the time of kernel launch. To address this issue, we design a dynamic kernel sharding mechanism, in which we divide an elastic kernel into several shards and determine the scheduling for each shard according to run-time resource usage.
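To convey the idea of sharding, the host-side C++ fragment below sketches one deliberately simplified policy: the next shard's grid size is chosen from whatever block slots the co-running critical kernel leaves idle. The occupancy model and every identifier here are assumptions made purely for illustration; they are not Miriam's actual data structures or scheduling policy.

#include <algorithm>
#include <cstdio>

// Remaining logical work of an elastic kernel, released shard by shard.
struct ElasticKernelState {
    int logicalBlocks;   // total logical thread blocks of the elastic kernel
    int blocksDone;      // logical blocks already covered by earlier shards
};

// Pick the grid size of the next shard so that it only pads into idle block slots.
int nextShardGridSize(const ElasticKernelState& k, int smCount,
                      int blockSlotsPerSM, int criticalResidentBlocks) {
    int freeSlots = smCount * blockSlotsPerSM - criticalResidentBlocks;
    if (freeSlots <= 0) return 0;                    // yield the GPU to the critical kernel
    int remaining = k.logicalBlocks - k.blocksDone;
    return std::min(freeSlots, remaining);           // never exceed the leftover capacity
}

int main() {
    ElasticKernelState k{1024, 0};
    // e.g., 8 SMs with 4 resident block slots each; critical kernel currently holds 20 slots
    std::printf("next shard grid size = %d\n", nextShardGridSize(k, 8, 4, 20));
    return 0;
}

A real coordinator must additionally account for per-block shared-memory and register usage, and for the fact that the co-running critical kernel changes over time, which is what the runtime profiling described above is for.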
Miriam can support a wide range of applications that need to run multiple DNNs on the edge GPU.
For instance, an obstacle detection task and a navigation task need to run in parallel to achieve autonomous driving.
The obstacle detection task is critical because it is related to driving safety, while the navigation task can be executed in a best-effort manner as a normal task.
For such a DL task set, as shown in Fig. <ref>, Miriam first divides the kernels into critical kernels and normal kernels according to their task characteristics, i.e., the criticality of the tasks. Normal kernels are compiled offline and transformed into elastic kernels by the elastic kernel generator. At run-time, the elastic sharding policy of normal kernels is determined by the dynamic kernel coordinator to maximize resource utilization while not interfering with the execution of the critical kernel.
§ GENERATION OF ELASTIC KERNELS
To support finer control over inter- and intra-SM resources of a kernel running on the edge GPU, we propose an elastic kernel generator. The design principle of Miriam is based on the insight that both the block and grid's resource allocations can be distilled from the native GPU programming model. Fig. <ref> illustrates the design of the proposed elastic kernel generator: elastic block and elastic grid. By separating resource allocation for thread blocks from the logic-level grid and thread block identity, this approach generates resource-controllable GPU kernels for further resolving co-running DNN tasks resource contention problems.
To improve the efficiency of the elastic kernel generation process, Miriam shrinks the design space of elastic kernels according to hardware limitations, as well as observations on co-running DNN kernels from the critical and normal task queues. Moreover, to preserve the correctness of the computation after the elastic transformation, we design a source-to-source kernel transformer. Our transformer converts original GPU kernels into elastic kernels while preserving computational equivalence.
§.§ Controllable Intra-SM Resource by Elastic Block
DNN kernels can be broadly categorized into memory operations (memory allocations, memory transfers, etc.) and kernel execution. To enable the execution of a single kernel on multiple GPU SMs, GPU programming divides a large kernel into multiple sub-kernels, each of which is executed by a GPU block.
The block size is determined by the computation workload of each sub-computation. Blocks with smaller sizes consume less thread usage for each instruction cycle.
Multi-DNN inference on edge GPU can cause severe intra-SM contention when multiple thread blocks from different kernels compete for the resource within the same SM. Some blocks would fail to execute or delay, which leads to a decrease in the overall throughput and an increase in the corresponding latency of the DNN inference. For this issue, one possible solution is to perform code-level optimization of the GPU kernel. This approach includes optimizing the memory access patterns and reducing unnecessary computations to decrease the intra-SM resource usage, and thus alleviates intra-SM contention. However, optimizing GPU codes for a specific DNN model is challenging and time-consuming. Different optimization techniques such as loop-tiling, loop-unrolling and parallelization naturally have different trade-offs in terms of execution performance, memory usage, and code complexity. Achieving the appropriate balance among those factors requires careful experimentation and tuning.
Adapting codes for different concurrent kernels from diverse tasks demands a significant amount of effort and may not generalize well, thereby restricting the effectiveness and applicability of the optimization techniques.
To carefully manage the resource usage of each block, Miriam adjusts the number of threads within each thread block to generate elastic blocks. We adopt the persistent thread technique <cit.>, which is capable of adjusting a kernel's resident block size on an SM. In contrast to traditional kernels, where threads terminate after completing the kernel execution, persistent threads remain active throughout the execution of a kernel function. We limit the range of each elastic block size to fall between 1 and the maximum resident block size. We also transform the default 1:1 logical-to-physical thread mapping scheme into an N:1 mapping scheme while preserving the initial program semantics.
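To make the N:1 mapping concrete, the sketch below shows how an ordinary elementwise CUDA kernel might be rewritten in the persistent-thread style; the kernel name, its arguments, and the way the logical geometry is passed in (logical_threads) are illustrative assumptions of this example, not Miriam's actual implementation.

// Minimal sketch (assumed names): an elementwise kernel in persistent-thread form.
// A small number of resident threads, the elastic grid/block chosen by the scheduler,
// iterates over all logical threads of the original kernel.
__global__ void elastic_saxpy(float a, const float* x, float* y, int n,
                              long logical_threads) {
    long stride = (long)gridDim.x * blockDim.x;            // physical threads resident on the GPU
    for (long t = (long)blockIdx.x * blockDim.x + threadIdx.x;
         t < logical_threads; t += stride) {               // N:1 logical-to-physical mapping
        int i = (int)t;                                     // logical global index
        if (i < n) y[i] = a * x[i] + y[i];
    }
}
// Launched with a runtime-chosen elastic configuration, e.g.
// elastic_saxpy<<<n_blk_be, s_blk_be>>>(2.0f, d_x, d_y, n, (long)grid0 * block0);

Because the loop covers whatever logical threads the physical threads do not map to one-to-one, the launch configuration (n_blk_be, s_blk_be) can be varied freely at run time without changing the result.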
Compared to static block fusion <cit.>, which fuses multiple thread blocks from different GPU kernels into a single one to reduce unnecessary loads and stores, our persistent thread design does not require pre-compilation of all possible combinations of kernels. This feature enables flexible SM-level resource mapping at runtime.
Our elastic kernel is designed to stay within the shared memory limit, and we achieve this by modifying the way we control the intra-SM resources, including shared memory, compared to the original kernel. This modification results in a memory occupancy that is either equal to or less than that of the original kernel.
While the persistent thread mechanism provides fine-grained control over intra-SM parallelism, it comes with nontrivial overhead. The optimal number of launched persistent threads does not always equal the maximum number of concurrently executing threads from all thread blocks that can be afforded by a single SM. Hence, we narrow the design space of elastic blocks, as will be introduced in Section <ref>.
§.§ Elastic Grid for Inter-SM Contention
While the elastic block design can resolve intra-SM thread-slot contention, inter-SM memory (e.g., DRAM, L2 cache) fetching contention can still be a severe problem if the blocks inside a kernel are launched directly. DNN kernels often use a large number of blocks to hide stall cycles due to data access; thus, when multiple DNN inference requests arrive in rapid succession, the multiple SMs allocated to execute them contend for shared resources (e.g., the memory bus) and have to wait for each other, leading to decreased execution performance.
Miriam proposes an elastic grid generator that slices the initial grid into multiple smaller grids. This approach can improve resource utilization and reduce inter-SM contention by allowing more efficient memory accesses across multiple SMs.
Elastic grid generation implies a kernel slicing plan: Given a kernel K, a slicing plan P(K) is a scheme that slices K into a sequence of n slices [s0, s1, s2,..., s_n-1] based on thread-block-granularity partitions.
Thus, given a set of kernels, the problem is to determine the optimal grid slicing policy of the initial kernel when co-running with other tasks with different workloads.
Formally, for a DNN kernel K with M thread blocks, a dichotomy-based slicing plan S(K) can be applied to K. Specifically, the candidate slicing schemes form the sequence
S(K) = (M/2^n, M/2^(n-1), …, M),   n = max{ i : M mod 2^i = 0 },
where n is the power index of 2 to be divided. By doing this, we enable normal kernels to be issued with a flexible number of thread blocks on SM, co-locating with critical kernels. By dividing the single kernel into multiples, the sliced grids can be scheduled to run independently by the GPU, allowing the GPU to interleave the execution of them with the execution of other critical kernels. The elastic grid design efficiently reduces co-locating kernels' inter-SM memory contention by improving the time-multiplexing potential of the kernel with other kernels, allowing the GPU to better balance the allocation of resources and maximize overall performance.
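A host-side sketch of the dichotomy-based slicing plan is given below; the function name and the printed example are ours and only illustrate the sequence S(K), from which the runtime later picks one granularity.

// Illustrative helper (not Miriam's API): enumerate the dichotomy-based slicing
// plan S(K) for a kernel with M thread blocks, i.e. M/2^n, M/2^(n-1), ..., M.
#include <vector>
#include <cstdio>

std::vector<int> slicing_plan(int M) {
    int n = 0;
    while (M % (1 << (n + 1)) == 0) ++n;     // largest n such that 2^n divides M
    std::vector<int> plan;
    for (int k = n; k >= 0; --k) plan.push_back(M >> k);
    return plan;
}

int main() {
    for (int s : slicing_plan(96)) std::printf("%d ", s);   // prints: 3 6 12 24 48 96
    std::printf("\n");
    return 0;
}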
§.§ Workload-balanced-guided Design Space Shrinking
We need to determine the execution parameters of the elastic kernel at run-time, which include the grid size (N_blk_be) and the block size (S_blk_be). We call each pair of execution parameters a schedule. A main challenge here is the huge number of feasible schedules, which makes it difficult to enumerate schedules or heuristically find optimal ones at run time.
The total number of feasible schedules is exponential to the number of operators in the incoming model and the size of input data. For example, an implemented AlexNet model in the Tango benchmark with an input image size of 3x224x224 can have up to 2.2 × 10^25 feasible schedules for all Conv kernels <cit.>.
To address this challenge, we shrink the design space for each kernel by removing combinations of elastic grid sizes and block sizes that may result in dispatch failure due to severe resource contention. In other words, Miriam narrows down the design space by eliminating configurations that are expected to have low performance.
When multiple kernels are co-running, thread blocks from different kernels can interleave in many possible ways, leading to SM-level contention or inefficiency. We propose two constraints to address these issues, as shown in Eq. <ref>, and the specific parameters of these factors are shown in Table 1.
N_blk_be⩽ N_SM - N_blk_rt mod N_SM
S_blk_be⩽ L_threads - blk_size_rt
The first constraint is based on the observation that workload across SMs is unbalanced. This kind of imbalance appears broadly when the number of thread blocks is not a multiple of the number of SMs inside an edge GPU. To address this issue, we prune cases where the number of thread blocks of elastic kernels exceeds the remaining available SMs after dispatching all the thread blocks from critical kernels.
The second constraint addresses intra-SM workload balance, which aims to reduce contention between thread blocks from different kernels competing for resources within an SM. It is necessary to ensure that each SM has as much workload as possible and that the workload is balanced. If the workload in an SM is too light, then the resources in that SM may be wasted. On the other hand, if the workload in an SM is too heavy, it may lead to resource contention and performance degradation. We prune cases when the working threads of an elastic kernel exceed too much of the spare intra-SM resources after being occupied by blocks from the critical kernel based on the intra-SM workload balance constraint.
To formulate these two inefficiency cases, we define WIScore as a workload imbalance metric:
WIScore = (N_blk_rt mod N_SM + N_blk_be)/N_SM × (blk_size_rt + S_blk_be)/L_threads
where the value of WIScore lies in [0,1]. Another factor we consider when shrinking the design space is the dispatch overhead of the elastic kernels. To ensure that the potential schedule generated for each elastic kernel is feasible and does not violate critical decision-making requirements, Miriam prunes infeasible cases using OScore:
OScore =
1, if ∑_i LO_blk(k_be_i) < MAX_blk and ∑_i LO_pt(k_be_i) < MAX_pt, i ∈ [1, N_shard],
0, otherwise,
where function LO() represents the launch overhead which equals the sum of the launching time for each elastic kernel fragment, subtracting the launching time for the initial normal kernel. OScore is set to 0 when the overhead exceeds the maximum acceptable bar we set, which is a constant number.
The product of the WIScore and OScore values computed for each elastic kernel candidate gives a metric that can be used as a design-space-narrowing navigator for the performance boundary. Specifically, by multiplying these two scores (WIScore * OScore), we can identify the candidates that are likely to achieve the best performance within the given design space. Miriam computes this product for every possible combination of elastic kernel implementation settings. Determining the optimal percentage of candidates to select is difficult, since it is unclear how many candidates need to be chosen to ensure that Miriam finds the best parameters within the pruned design space. Thus, we test some representative tensor operations (such as convolution in CifarNet <cit.> and matrix multiplication in GRU <cit.>) and then pick out the top 20% of combinations among all the candidates to be used in the next stage of runtime kernel coordination. Through these tests, we do not find any cases in which the model prunes the best-performing set of parameters.
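The pruning step can be pictured with the following sketch; the hard constraints mirror Eqs. (2)-(3), while the balance score used for ranking is a stand-in consistent with our reading of WIScore and should not be taken as Miriam's exact formula.

// Illustrative pruning of elastic-kernel candidates (assumed data layout and score).
#include <vector>
#include <algorithm>

struct Candidate { int n_blk_be; int s_blk_be; double score; };

std::vector<Candidate> shrink(std::vector<Candidate> all, int n_sm, int l_threads,
                              int n_blk_rt, int blk_size_rt) {
    std::vector<Candidate> kept;
    for (Candidate c : all) {
        if (c.n_blk_be > n_sm - n_blk_rt % n_sm) continue;      // inter-SM constraint, Eq. (2)
        if (c.s_blk_be > l_threads - blk_size_rt) continue;     // intra-SM constraint, Eq. (3)
        double inter = double(n_blk_rt % n_sm + c.n_blk_be) / n_sm;
        double intra = double(blk_size_rt + c.s_blk_be) / l_threads;
        c.score = inter * intra;                                // stand-in balance score
        kept.push_back(c);
    }
    std::sort(kept.begin(), kept.end(),
              [](const Candidate& a, const Candidate& b) { return a.score > b.score; });
    if (kept.size() > 5) kept.resize(kept.size() / 5);          // keep the top 20%
    return kept;
}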
With the assistance of constraint injections, we can greatly reduce the design space without sacrificing the candidate elastic kernel's performance. This feature is especially useful given the large number of possible kernel configurations in modern edge GPUs.
§.§ Source-to-Source Elastic Kernel Transformer
Before assessing the effectiveness of elastic kernel design, it is crucial to investigate whether the grid or block sizes of DNN kernels can be modified directly from the original user-developed or compiler-generated GPU programs. An experiment was conducted on the benchmarks of Tango <cit.> to evaluate the effectiveness of direct kernel transformation. The results of the experiment showed that only 7.4% of the implemented kernels in the Tango benchmarks were compatible with grid/block size adjustment without requiring modifications to computation schedules inside kernels.
This is because the block size and grid size defined in a kernel are determined by the computation schedule of the kernel: either written directly in the CUDA code or expressed through declarative loop-oriented scheduling primitives in DNN compilers, which bind symbolic-extent logical threads to physical GPU threads, as shown in Fig. <ref>. This constraint motivates us to design a source-to-source kernel transformer that can support our elastic kernel design.
Miriam rapidly and equivalently transforms a DNN kernel by injecting a piece of code at the beginning of each kernel, which checks the computation and memory offsets to determine where the kernel begins and ends after being evicted. Specifically, we compute a global thread identifier and use it as a basis for SM-level workload distribution. This identifier takes the thread ID as input and produces the corresponding index of the data element accessed by the thread. We replace references to the physical thread geometry (e.g., gridDim) and identity variables (e.g., threadIdx.x) in the original kernel code with their logical equivalents. Miriam employs two approaches for implementing the index function: computation-based and memory-based. The computation-based approach computes the index within the kernel when the thread accesses the corresponding data element. Alternatively, in the memory-based approach, the indices are pre-calculated on the host side (i.e., the CPU) prior to kernel launch and stored in shared memory for use during kernel execution.
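The effect of the rewrite can be illustrated on a toy kernel using the computation-based variant; the example below is ours, and the symbol names are assumptions. The original body indexes data via blockIdx/threadIdx, while the transformed body recomputes a logical identity for every logical block and thread that the resident threads take over.

// Original (hand-written or compiler-generated) kernel body:
//   int i = blockIdx.x * blockDim.x + threadIdx.x;
//   if (i < n) out[i] = in[i] * in[i];
// Transformed, computation-based variant: physical identities are replaced by
// logical ones, so grid and block sizes can be changed freely at launch time.
__global__ void square_elastic(const float* in, float* out, int n,
                               int logical_grid, int logical_block) {
    for (int lblk = blockIdx.x; lblk < logical_grid; lblk += gridDim.x) {        // elastic grid
        for (int lthr = threadIdx.x; lthr < logical_block; lthr += blockDim.x) { // elastic block
            int i = lblk * logical_block + lthr;   // logical global thread identifier
            if (i < n) out[i] = in[i] * in[i];
        }
    }
}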
§ RUNTIME DYNAMIC KERNEL COORDINATION
This section introduces our design for the online scheduler of elastic kernel coordination. First, we refer to each elastic kernel (i.e., one with an elastic grid and elastic block) as an elastic kernel shard. Our guidelines for designing the coordinator are two-fold: maximizing overall real-time performance and mitigating resource contention. To achieve these goals, our runtime coordinator constantly monitors the available GPU resources, both from the critical kernels and from the elastic kernels. It then determines which elastic kernel shards can co-run effectively with the critical kernels.
Execution timeline of co-running kernels. Upon receiving multiple normal task requests b1...bn, Miriam pushes all of their kernels into a normal task queue, and the kernels are dispatched to the GPU through multiple streams. Once a critical task arrives, Miriam instantly selects appropriate elastic kernel fragments of the following normal kernel in a "bin-packing" manner, considering the current intra- and inter-SM resource distributions. Once the critical kernels finish executing, the kernels from normal tasks re-occupy the GPU.
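One plausible way to realize the two queues with standard CUDA streams is sketched below; the kernels are stand-ins, and the use of stream priorities is an assumption of this example rather than a statement about Miriam's implementation.

#include <cuda_runtime.h>

__global__ void critical_kernel(float* d, int n) {                    // stand-in critical kernel
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] += 1.0f;
}
__global__ void elastic_shard(float* d, int n) {                      // stand-in elastic shard
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x) d[i] *= 2.0f;
}

void dispatch(float* d, int n, int grid_rt, int blk_rt, int n_blk_be, int s_blk_be) {
    int least, greatest;
    cudaDeviceGetStreamPriorityRange(&least, &greatest);              // greatest = highest priority
    cudaStream_t crit, norm;
    cudaStreamCreateWithPriority(&crit, cudaStreamNonBlocking, greatest);
    cudaStreamCreateWithPriority(&norm, cudaStreamNonBlocking, least);
    critical_kernel<<<grid_rt, blk_rt, 0, crit>>>(d, n);              // launched as-is
    elastic_shard<<<n_blk_be, s_blk_be, 0, norm>>>(d, n);             // padded around it
    cudaStreamSynchronize(crit);
    cudaStreamSynchronize(norm);
    cudaStreamDestroy(crit);
    cudaStreamDestroy(norm);
}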
Grid/block size determination of elastic kernels.
During runtime, a fixed elastic grid and block setting for an elastic kernel can easily become inefficient, since the optimal co-scheduled elastic kernel shards vary with the critical kernels they co-run with. For example, if one critical kernel finishes while half of the computation of the co-located elastic kernel is still unfinished, the remaining thread blocks can cause severe resource contention or under-utilization when co-located with the subsequent critical kernel. The selection policy for elastic kernel shards is therefore crucial to prevent latency interference with critical tasks. To ensure optimal performance, one approach is to build a duration prediction model for the formation of operator groups based on runtime performance events (e.g., cache misses and global memory bandwidth) <cit.>, and control the kernel overlap based on the model. However, runtime events are not supported on edge GPUs like NVIDIA Jetson devices, and the hardware events reported by tools like Nsight Systems and Nsight Compute can only be obtained with high overhead. Thus, this method cannot be applied to our problem (where kernel overlaps are not determined in advance) in a practical way.
To address these challenges, Miriam adopts a greedy scheduling policy. Specifically, when an elastic kernel partially overlaps with a critical kernel, the kernel coordinator must carefully balance the resources allocated to each kernel. In this case, the coordinator needs to ensure that the padded elastic kernel does not interfere with the execution of the critical kernel, while still using as many available resources as possible. When the padded kernel runs on its own, the kernel coordinator can allocate all of the available resources to it, since there are no other tasks running on the GPU, allowing the kernel to run as efficiently as possible without interference from other tasks. To efficiently manage the elastic kernels while achieving this goal, we propose a dynamically sized shaded binary tree approach for elastic kernel shard formation, which achieves high runtime efficiency and low resource contention across different combinations of overlapped kernels.
Our shaded binary tree structure is an abstraction for managing the elastic kernel shards, similar to a complete binary tree of shards, as shown in Fig. <ref>. The root of the tree represents the kernel from the normal task, whose initial grid size is M. Each node corresponds to a part of the computation, i.e., potential thread blocks to be dispatched inside the kernel. The shading property of each node is the elastic block size of the thread block. Directed edges indicate the potential sliced peers for the unfinished computations left over from the predecessor. The whole structure is composed of actual shards and virtual shards: the actual shards are the ultimately formed elastic kernel shards that are to be dispatched, and the virtual shards are the potential fragments of the elastic kernel that will not be dispatched.
Miriam relies on the dynamic shaded kernel binary tree structure to manipulate the elastic kernels from normal tasks and determines the elastic kernel shards with heuristics based on the number of thread blocks of the kernels from both critical and normal tasks. Fig. <ref> illustrates the life cycle of an elastic normal kernel. For elastic fragment selection from normal kernels, the policy is to pick a set of elastic blocks from the head of the shaded kernel binary tree to share SM-level resources with the co-located thread blocks of the resident critical kernels with only trivial contention. Miriam uses this policy to ensure that the elastic blocks from normal kernels only use the resources left over by the critical kernels.
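A simplified version of the greedy pick is shown below; the resource model (free SMs and free thread slots per SM) is deliberately coarse and the structure names are ours, so this is a sketch of the policy rather than the actual coordinator.

// Illustrative greedy selection of the next elastic kernel shard.
#include <vector>

struct Shard { int grid; int block; };        // candidate (elastic grid, elastic block)

// Candidates are assumed to be ordered from the head of the shaded binary tree,
// i.e., from the largest remaining shard to the smallest.
const Shard* pick_shard(const std::vector<Shard>& candidates,
                        int free_sms, int free_threads_per_sm) {
    for (const Shard& s : candidates) {
        if (s.grid <= free_sms && s.block <= free_threads_per_sm)
            return &s;          // dispatch alongside the resident critical kernel
    }
    return nullptr;             // nothing fits: defer until the critical kernel retires
}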
§ EVALUATIONS
§.§ Experiment Setup
We implemented Miriam based on NVIDIA CUDA 11.2 <cit.> for elastic kernel generation and online kernel scheduling, and Python3.6 for the source-to-source kernel transformer.
§.§.§ Implementation and Testbed.
Our experiments are conducted on an NVIDIA GeForce RTX 2060 that features 1920 CUDA cores and an NVIDIA Jetson AGX Xavier with a Pascal GPU architecture and 256 NVIDIA CUDA cores <cit.>. Note that Miriam is extensible and can work well on other GPU platforms that officially support OpenCL, HIP, or other CUDA-like programming paradigms, such as the AMD Embedded Radeon™ E9170 <cit.>.
§.§.§ DNN Workloads.
We use six popular DNN models from both the computer vision and language processing fields to evaluate Miriam. Inspired by DISB <cit.>, we build a benchmark named MDTB (Mixed-critical DNN Task Benchmarks) based on CUDA-implemented kernels to fully demonstrate the performance and generality of our framework, summarized in Table <ref>. The MDTB benchmark simulates three patterns of inference requests from clients: (1) arrival in a uniform distribution, where the client sends inference requests at a fixed frequency (e.g., 10 requests/second), simulating critical applications such as pose estimation; (2) arrival in a Poisson distribution, which simulates event-driven applications such as obstacle detection; and (3) closed-loop workloads, which simulate a client that keeps sending inference requests.
We choose six representative DNN models in MDTB, namely AlexNet <cit.>, SqueezeNet <cit.>, GRU <cit.>, LSTM <cit.>, ResNet <cit.>, and CifarNet <cit.>, all implemented in CUDA. We conduct neural network inference with a single batch of 224x224x3 images as the input to mimic inference in real applications.
§.§.§ Baselines.
We compare Miriam with multiple DNN scheduling approaches on the edge GPU. Sequential selects one model from both task queues (critical and normal) in a round-robin fashion and performs the inferences one by one. In this mode, the critical tasks run independently, occupy all GPU resources, and achieve the optimal end-to-end latency for critical tasks. GPU Multi-stream with Priority enqueues kernels from both critical and normal tasks at the same time, and the models are executed in parallel; this is the scheme adopted by NVIDIA Triton <cit.>. Inter-stream Barrier (IB) is the state-of-the-art multi-DNN operator scheduling method based on multi-streams <cit.>; it uses inter-stream barriers to manually synchronize kernel dispatch among different kernels. In this mode, the concurrency among kernels can be controlled by utilizing stream- and synchronization-based mechanisms.
§.§.§ Metrics.
We use the overall throughput, the end-to-end latency for critical tasks, and the achieved occupancy as our evaluation metrics.
End-to-end Latency of Critical Tasks. This metric measures the end-to-end inference speed of critical tasks with real-time demands.
Overall Throughput. This metric represents how many requests from users can Miriam serve on the target edge GPU.
Achieved Occupancy. By definition, achieved occupancy is the average ratio of active warps on an SM to the maximum number of active warps supported by the SM<cit.>, defined as below:
Achieved Occupancy = (Active_warps / Active_cycles) / MAX_warps_per_SM
We use this metric to evaluate the fine-grained GPU utilization of our system performance.
§.§ Overall Performance
To show that Miriam improves the overall system throughput with little sacrifice in the real-time performance of the critical tasks, we compare Miriam against the other GPU scheduling approaches under the MDTB A-D workloads on two edge GPU platforms.
We merge the discussion of the uniform and Poisson distributions of critical task requests because their workloads are comparable, which allows us to analyze their similarities more efficiently.
Closed-loop Critical Tasks (MDTB A). Workloads with closed-loop critical tasks (AlexNet) experience significant resource contention when co-running with normal tasks (CifarNet). Fig. <ref> (a)-(d) show that, compared to Sequential, Multi-stream and IB increase the critical task latency by 1.95× and 1.52× on the 2060 and by 2.02× and 1.77× on Xavier, respectively, while Miriam incurs only a 21% and 28% overhead on critical tasks. Miriam also improves the overall throughput by 64% and 83% on the two platforms, significantly outperforming the other approaches under the MDTB A workloads. We observed that IB's throughput is even worse than Sequential's because the frequent launching of critical tasks requires inserting more synchronization barriers among GPU streams to manage kernel groups, resulting in significant overhead. In terms of achieved occupancy, Fig. <ref> (e) and (f) demonstrate that Miriam achieves higher SM-level GPU utilization than the other baselines. It is important to note that achieving nearly 100% theoretical occupancy is difficult for DNN inference tasks due to their large thread blocks, which can easily lead to resource idleness or an SM's inability to cover memory access latency <cit.>.
Uniform/Poisson Critical Tasks (MDTB B, C, and D). As the launching frequency of critical workloads decreases, the overall throughput of all approaches improves to different degrees compared to vanilla Sequential due to the increased opportunities for normal tasks to share GPU resources with critical tasks. We observed that Miriam outperforms the other approaches in this scenario. For instance, using MDTB B, C, and D on Xavier, Miriam increases the overall throughput by 1.85×, 1.79×, and 1.91× over Sequential, which is much better than the other baselines. While both Multi-stream and IB also yield improved throughput compared to Sequential (1.34×-1.73×), they degrade the latency of the critical tasks by 32%-88%, whereas Miriam only incurs a latency overhead of less than 21% on these benchmarks. This improvement can be attributed to our elastic kernel design and runtime dynamic kernel coordination approach. Since the Sequential approach exhibits the shortest latency for each critical task, our comparison demonstrates that Miriam maximizes overall throughput while preserving the end-to-end latency of critical tasks. From a GPU utilization standpoint, Miriam increases the average number of active warps per cycle, resulting in better SM utilization. These results confirm the effectiveness of our elastic kernel sharding approach and demonstrate our ability to effectively pad critical kernels.
We observe that the performance improvements offered by Miriam may not always result in higher SM occupancy on the Jetson Xavier. This is because Xavier has far fewer on-board resources and a smaller number of SMs than the 2060. Additionally, the relatively low memory bandwidth of the Xavier can limit the amount of data that can be transferred between the memory and the SMs, leading to performance bottlenecks with complex models. The thermal design power of the Xavier is also relatively low compared to the 2060, which can limit the amount of power that can be consumed by the GPU and the amount of heat that can be generated. This can negatively impact the clock speed of the processor cores and the amount of parallelism that can be achieved, which in turn affects the relationship between SM occupancy and performance.
§.§ In-depth Analysis of Miriam
To better understand why Miriam performs better than other GPU scheduling approaches under severe contention, we provide an in-depth analysis in this section, with two AlexNet models co-running on a single 2060 GPU: AlexNet-C, which serves as the critical task, and AlexNet-N, which serves as the normal task. Both tasks are launched in a closed-loop manner.
In Fig. <ref>, the upper two rows show the timelines of active kernels from the two co-running DNN tasks, which demonstrate the performance difference between Miriam and Multi-stream. The figure is sketched based on real profiling results obtained from NVIDIA Nsight Systems <cit.>, in which blue represents the critical task, green represents normal tasks launched by vanilla Multi-stream, and pink represents the elastic kernels of the normal task under Miriam. As shown in the figure, there are clearly more pink blocks than green blocks, and the pink blocks are tightly padded around the blue blocks, showcasing the elastic kernel shards padded with the critical kernels. The end-to-end latency of AlexNet-C under Miriam is much lower than under Multi-stream.
We also show the corresponding achieved occupancy for this case in Fig. <ref>. The average layer-wise achieved occupancy is 65.25% for Miriam and 32.9% for Multi-stream. As mentioned, more active warps per cycle and less contention overhead are the keys to improving parallelism while preserving the speed of critical tasks.
§.§ Evaluations on Design Space Shrinking
Miriam filters out the definitely-slow cases (80%) by applying hardware limiters, as detailed in Chapter 6.3. The trade-off between elasticized scale (i.e., the dynamic shaded binary tree's depth, as discussed in Chapter 7) and scheduling granularity is a critical consideration for different implementations of elastic kernels, as shown in Fig. <ref> to guide the further shrinking process. For instance, an elastic kernel shard with elastic_grid_size=1 is flexible to accommodate other critical kernels, but launching overhead for such a shard may be too large due to the increased number of kernel shards. Fig. <ref> summarizes the pruned space of candidate elastic kernels from the models in MDTB, ranging from 84% to 95.2%. The expected pruned space may differ across candidate models due to multiple factors, such as the complexity of the models (i.e., the operator types used) and the input size.
§.§ Case Study: Autonomous Driving with LGSVL
We further use a real-world trace from an open autonomous driving platform (i.e., LG SVL <cit.>) as the workload, which provides a realistic arrival distribution of critical tasks (i.e., obstacle detection) and normal tasks (i.e., pose estimation) in autonomous driving.
The trace was collected from a 3D lidar perception module and a 2D camera perception module when running the LGSVL simulator, and we selected backbones from the models included in our MDTB benchmark: SqueezeNet to simulate pose estimation as the normal task (lidar data), and ResNet for obstacle detection as the critical task (camera data). The clients send the inference requests in a uniform distribution, with a frequency of 12.5 Hz for the normal task and 10 Hz for the critical task, as shown in Fig. <ref>. The experiment was conducted on the 2060.
Fig. <ref> demonstrates the experimental results for this real-world workload. Compared to Sequential, Multi-stream and IB increase the overall throughput by 1.41× and 1.25×, while amplifying the critical task latency by 82% and 56%, respectively. Due to the low launching frequency of both critical and normal tasks (10 and 12.5 Hz), the elastic kernels of the normal task can execute concurrently with the critical task with little eviction overhead for the elastic kernel shards. Finally, Miriam achieves an 89% improvement in overall throughput compared to Sequential, while incurring only an 11% latency overhead for the critical task. This demonstrates that Miriam can achieve a large throughput improvement through our elastic kernel design with little sacrifice in critical task latency, which is also confirmed by Miriam achieving the highest SM occupancy among all baselines, as shown in Fig. <ref> (c).
§.§ System Overhead
The scheduling overhead of Miriam mainly consists of two parts. The first part is the runtime elastic kernel shard selection, which scans the shard candidates and has a complexity of O(N). Owing to the low complexity of the scheduling mechanism in Miriam, the overall average overhead for serving each DNN model is less than 0.35 ms. The second part is the launch time overhead for critical kernels due to the padding of the elastic kernels; we evaluated this overhead and found that in most (over 80%) cases it is less than 15 us. This latency overhead mainly stems from contention on the texture cache and L2 memory, which we leave for future work.
§ DISCUSSION
Scalability. We believe that Miriam has the potential to be scaled beyond pair-wise DNN tasks co-running and can support more general tasks. However, due to the large number of co-running kernel possibilities, some additional considerations must be taken into account. These include establishing a scheduling policy for normal tasks with the same priority, as well as finding an efficient way to perform offline kernel profiling since the design space increases exponentially.
Integrated with DNN Compiler. Representative DNN compilers like TVM <cit.> can generate high-performance DNN kernels with low latency using auto-tuning <cit.>. However, DNN compilation is an offline approach with a long compilation time, and the generated kernels cannot be easily modified at runtime. This creates a gap between static compilation and dynamic scenarios in IoT applications, particularly when on-device resources become available dynamically.
To fill this gap, Miriam can serve as a post-compiling runtime to ensure that the on-device resources are fully utilized during runtime in an adaptive manner.
Orthogonal to Other Approaches. Miriam can work symbiotically with other optimized DNN execution approaches, such as model compression <cit.> and edge-cloud offloading <cit.>, to execute multi-DNN workloads effectively. Such a collaborative approach makes it possible to achieve improved runtime performance and better resource utilization in resource-constrained edge computing environments.
§ CONCLUSION
We propose a novel system named Miriam that addresses latency and throughput problems of co-running multiple DNN inference tasks on edge GPUs. The proposed system utilizes elastic kernels to facilitate fine-grained GPU resource re-mapping and a runtime dynamic kernel coordinator to support dynamic multi-DNN inference tasks. Experimental results on a benchmark we built on two types of edge GPU show that Miriam can significantly improve the overall system throughput while incurring minimal latency overhead for critical tasks, compared to dedicating the GPU to critical tasks.
|
http://arxiv.org/abs/2307.04290v1 | 20230710004342 | The Electronic Structure of the Hydrogen Molecule: A Tutorial Exercise in Classical and Quantum Computation | [
"Vincent Graves",
"Christoph Sünderhauf",
"Nick S. Blunt",
"Róbert Izsák",
"Milán Szőri"
] | physics.chem-ph | [
"physics.chem-ph",
"quant-ph"
] |
[Graphical abstract: The potential energy curves of H_2 and a simple quantum circuit associated with them.]
In this educational paper, we will discuss calculations on the hydrogen molecule both on classical and quantum computers. In the former case, we will discuss the calculation of molecular integrals that can then be used to calculate potential energy curves at the Hartree–Fock level and to correct them by obtaining the exact results for all states in the minimal basis. Some aspects of spin-symmetry will also be discussed. In the case of quantum computing, we will start out from the second-quantized Hamiltonian and qubit mappings. Using quantum phase estimation, we then provide the circuits for two different algorithms: Trotteization and qubitization. Finally, the significance of quantum error correction will be briefly discussed.
§ INTRODUCTION
It has been almost 20 years since Prof. Csizmadia, known simply to his students as IGC, gave a series of lectures at the University of Szeged about theoretical calculations and their relevance to organic chemistry. As students (R. I., M. Sz.) attending these lectures, we knew that he had studied with Slater and was a professor of international standing who had been associated with the University of Toronto for a long time. While we had already heard of the basics of quantum mechanics and quantum chemistry, we were all eager to know more about their applications to chemical problems that we had also encountered by then in the organic chemistry lab. Who better to tell us about that than IGC, who was among the pioneers of applying Gaussian orbitals to organic molecules and was among the authors of POLYATOM<cit.>, the first program package that could carry out such calculations? Despite his many scientific achievements, IGC never talked much about the past except to explain something to his students. He had a unique style that can be discerned from some of his writings<cit.> but that worked best in the classroom. We all remember his simple explanations of complicated mathematical subjects, usually accompanied by student-friendly illustrations that he simply called Mickey Mouse Figures. Apart from his knowledge on chemical calculation and his accessible lecturing style, his dedication set him apart from most teachers we had known: few people would have given a two-hour long lecture when struggling with a whooping cough that threatened to strangle him. In our contribution to this special issue commemorating his achievements, we would like to pay tribute to him as an educator by providing an educational introduction to quantum chemistry methods, using the hydrogen molecule as an important first example. While IGC would have probably preferred an organic molecule and might have given less detail about the calculation than we intend to, he would have certainly approved of our using the simplest Gaussian orbital basis possible and we hope that such a simple model calculation will help the determined student to understand the machinery underlying modern quantum chemistry calculations. It is in this spirit that we offer this contribution to his memory.
§ THEORETICAL BACKGROUND
§.§ The Hartree–Fock Method
Chemistry investigates the myriad of ways molecules may interact. With the advent of quantum mechanics, it became possible to explain these interactions in terms of those between the electrons and nuclei that make up molecules. Unfortunately, this leaves us with a large number of variables to consider if we want to describe everything that takes place in a chemist's flask. To make the problem easier to solve, several further assumptions are made beyond the axioms of quantum mechanics and special relativity needed to describe chemical systems. To start with, akin to usual practice in thermodynamics, we may divide the universe into a system and its environment. The system is simply the part of the world we are interested in, and, for chemical purposes, this might be a number of atoms and molecules. As a first approximation, we will consider only particles in the system and neglect interactions with the environment. We will further neglect relativistic effects and the time dependence of the states of the system. Within the Born-Oppenheimer approximation, the nuclear and electronic variables are separated and the electronic problem is solved for fixed nuclear coordinates. The electronic Hamiltonian then takes the form
Ĥ = -∑_i 1/2∇_i^2 + ∑_A<B Z_A Z_B/|𝐑_A-𝐑_B| - ∑_iA Z_A/|𝐫_i-𝐑_A| + ∑_i<j 1/|𝐫_i-𝐫_j|,
where the indices i,j denote electrons and A,B nuclei, 𝐫_i is the position of an electron and 𝐑_A is that of a nucleus, Z_A is its charge number. The terms in the order they appear are the kinetic energy of the electrons, the potential energy of nuclear-nuclear, nuclear-electron and electron-electron interactions.
Although the approximations so far simplify the problem considerably, the resulting quantum mechanical problem remains intractable. In the next step, the variables describing individual electrons are also separated. To fulfil the condition of antisymmetry required by the exclusion principle, an approximate many-electron wavefunction is constructed as the antisymmetrized product of functions describing a single electron
Φ = 1/√(N!) | ϕ_1(𝐱_1) ϕ_2(𝐱_1) ⋯ ϕ_N(𝐱_1); ϕ_1(𝐱_2) ϕ_2(𝐱_2) ⋯ ϕ_N(𝐱_2); ⋮ ⋮ ⋱ ⋮; ϕ_1(𝐱_N) ϕ_2(𝐱_N) ⋯ ϕ_N(𝐱_N) |
The function Φ is called the Slater determinant and the one-electron functions ϕ_p are the spin orbitals.<cit.> Since the latter are orthonormal, the norm of Φ is also 1. The energy of the system can then be obtained as the expectation value of of the Hamiltonian with respect to the Slater determinant,
E = ⟨Φ|Ĥ|Φ⟩,
implying an integration over all electronic coordinates 𝐱_p. At this point, the spin variable of an electron (s_p) can also be separated from the spatial coordinates (𝐫_p), to yield spatial orbitals φ_p,
ϕ_p(𝐱_p) = φ_p(𝐫_p)σ_p(s_p).
For labeling the orbitals, we will use the convention that i,j,… refer to orbitals occupied in the Hartree–Fock ground state, a,b,… are unoccupied and p,q,… could refer to any molecular orbital. The spin-function σ can denote a spin-up (α) or a spin-down (β) state of a single electron identified by s_p. Often, the above product is denoted simply as pσ. If necessary, spin-orbital and spatial orbital labels can be distinguished by capitalizing one of them. Here, we opt for capitalizing the spin-orbital labels, which leads to the compact notation P=pσ or P=pσ_p. Sometimes it is convenient to refer to spin orbitals in terms of their spatial component. This purpose is served by the `relative' spin notation in which the product pα is simply referred to as p and pβ as p̅. Using the fact that the spin functions α and β are orthonormal, the following expression can be obtained for the Hartree–Fock energy using the Slater-Condon rules<cit.>
E = ⟨Φ |Ĥ|Φ⟩ = E_n + 2∑_i (i|ĥ|i) + 2∑_ij(ii|jj)-∑_ij(ij|ij),
where E_n is simply the nuclear-nuclear interaction term in Eq. (<ref>). The one electron integral reads
(p|ĥ|q) = ∫φ_p^*(𝐫) ĥ(𝐫) φ_q(𝐫) d𝐫,
with ĥ containing the kinetic energy term of the electrons and the nuclear-electron interaction energy in Eq. (<ref>). Finally, the two-electron term is
(pq|rs) = ∬φ^*_p(𝐫_1)φ_q(𝐫_1)φ^*_r(𝐫_2)φ_s(𝐫_2)/|𝐫_1-𝐫_2| d𝐫_1d𝐫_2,
and represents the remaining electron-electron interaction term in Eq. (<ref>). Note that these integrals are defined in terms of spatial orbitals but the spin-orbital equivalents are easily defined as (P|ĥ|Q)=(p|ĥ|q)δ_σ_pσ_q and (PQ|RS)=(pq|rs)δ_σ_pσ_qδ_σ_rσ_s.
To find the lowest-energy determinant Φ, the energy needs to be minimized under the constraint that Φ is normalized. This is a quite involved task in general, and in a final approximation,
the molecular orbitals (MO) φ_p are expanded in terms of known atomic orbitals (AO) serving as basis functions,
φ_p = ∑_μ C_μ pχ_μ.
Here C_μ p is an element of the MO coefficient matrix 𝐂. This yields the algebraic form of the Hartree–Fock equations, sometimes called the Hartree–Fock-Roothaan-Hall equations,<cit.>
𝐅𝐂 = 𝐒𝐂𝐄,
with the elements of the Fock matrix 𝐅 and the overlap matrix 𝐒 defined as
F_μν = ∫χ_μ(𝐫)f̂(𝐫)χ_ν(𝐫) d𝐫,
S_μν = ∫χ_μ(𝐫)χ_ν(𝐫) d𝐫,
and 𝐄 being the diagonal matrix containing the molecular orbital energies. Note that from this point on we will assume that quantities are real and will not denote complex conjugation any more. As the Fock operator
f̂[{φ_i}](𝐫_1) = ĥ(𝐫_1) + ∑_j∫φ_j(𝐫_2;𝐑)(2-P̂_12)/|𝐫_1-𝐫_2| φ_j(𝐫_2;𝐑) d𝐫_2,
itself depends on the orbitals that we seek to optimize, this is a self-consistent eigenvalue problem, the solutions of which must be found in an iterative manner. The operator P̂_12 swaps the coordinate labels to account for antisymmetry. The resulting Fock matrix has the general form
F_μν = h_μν + G_μν,
where the core term h_μν itself consists of two contributions,
h_μν = T_μν + V_μν,
with
T_μν = -1/2∫χ_μ(𝐫)∇^2 χ_ν(𝐫) d𝐫,
and
V_μν = ∑_A V_μν(A),
V_μν(A) = -Z_A∫χ_μ(𝐫)χ_ν(𝐫)/|𝐫-𝐑_A| d𝐫,
while the electronic interaction term consists of a direct Coulomb term and an exchange term,
G_μν = ∑_κλP_κλ (μν|κλ)
-1/2∑_κλP_κλ (μκ|νλ)
with
(μν|κλ) = ∬χ_μ(𝐫_1)χ_ν(𝐫_1)χ_κ(𝐫_2)χ_λ(𝐫_2)/|𝐫_1-𝐫_2| d𝐫_1d𝐫_2,
where the charge-density matrix is defined as
P_κλ = 2∑_i C_κ i C_λ i.
Finally, the spin-restricted Hartree–Fock (RHF) energy can be written in terms of the AO quantities as
E_RHF = E_n + 1/2∑_μνP_μν(h_μν+F_μν).
§.§ Electron Correlation
The Hartree–Fock solution has several deficiencies that originate in the approximations made. Over the decades several methods have been devised that improve one or more of these approximations, starting from the Hartree–Fock solution.<cit.> Other than the choice of the AO basis, improving on the treatment of interelectronic interactions in the Hartree–Fock method is the most important issue in practical calculations. In particular, in the Hartree–Fock model, the electrons with parallel and anti-parallel spins are treated differently in that the probability of finding two electrons at the same place is zero in the former case (presence of a Fermi hole) and non-zero in the latter (lack of a Coulomb hole). This is a consequence of representing the many-body wavefunction using a single Slater determinant. However, the manifold of Slater determinants can be used to build an improved wavefunction Ψ as a linear combination
Ψ = ∑_I 𝒞_IΦ_I.
If the expansion contains all possible Slater determinants in a given basis, then this full configuration interaction (FCI) expansion will yield the exact solution in that basis if the normalized coefficients C_I are optimised. To find these, the following eigenproblem must be solved
𝐇𝒞=ℰ𝒞,
where 𝐇 is the matrix representation of the Hamiltonian with elements
H_IJ = ⟨Φ_I | Ĥ | Φ_J⟩.
The correlation energy E_c is then the difference between the exact energy ℰ and the Hartree–Fock energy E,
E_c = ℰ - E.
However, determinants are not the only basis in which Ψ can be expanded. Unlike determinants, configuration state functions (CSF)<cit.> are eigenfunctions of the total spin squared operator,
Θ = ∑_I D_IΦ_I,
where D_I are fixed coefficients and Θ is a CSF. For a given spin sublevel, there are in fact fewer CSFs than there are determinants. Using CSFs instead of determinants may change how many basis states have large coefficients in the FCI expansion, as does rotating the orbitals between occupied and virtual spaces.
§ THE HYDROGEN MOLECULE IN A MINIMAL BASIS
§.§ Possible States and Their Long Range Behaviour
Consider a single H_2 molecule and let χ_μ be an atomic orbital<cit.> (AO) on one of the H atoms, and χ_ν another AO on the other H atom. Because this simple problem has only two basis functions and is highly symmetric, it is possible to describe some of its properties, especially at large internuclear separations, even without solving the HF and FCI equations. In this section, we will discuss such general considerations, then move on to the actual calculations.
Due to the symmetries of H_2, we know that there are only two possibilities of combining χ_μ and χ_ν into molecular orbitals. The MO coefficients must have the same magnitude with the same or opposite signs. After normalization, this yields the bonding orbital φ_i
φ_i = 1/√(2(1+S_μν))(χ_μ + χ_ν),
while the anti-bonding orbital φ_a has the form
φ_a = 1/√(2(1-S_μν))(χ_μ - χ_ν).
In terms of the MO coefficient matrix 𝐂, this means that
𝐂 =
[ 1/√(2(1+S_μν)) 1/√(2(1-S_μν)); 1/√(2(1+S_μν)) -1/√(2(1-S_μν)); ].
For the purposes of analysing long range behaviour, the AOs can be assumed to be orthonormal, since the two AOs barely overlap at large bond lengths. Thus, for the remainder of this section, we will assume that S_μν≈ 0. The lowest-energy RHF determinant is then
Φ_0 = |ii̅|,
where
|ii̅|≡ |φ_iφ̅_i| =
1/√(2)(φ_i(1)φ̅_i(2) - φ_i(2)φ̅_i(1)).
The bar in φ̅_i denotes the fact that φ_i is occupied by a β spin orbital. Substituting Eq. (<ref>), one gets
Φ_0 = Φ_C0 + Φ_I0,
where Φ_0 consists of a covalent part
Φ_C0 = 1/2(|μν̅| + |νμ̅|),
and an ionic part
Φ_I0 = 1/2(|μμ̅| + |νν̅|).
The covalent contribution consists of AO basis determinants in which one electron is assigned to one H atom via μ and the other electron to the other H atom via ν, which is what is expected in a homolytic dissociation process. The ionic contribution on the other hand consists of AO determinants in which both electrons are assigned to one atom only. The fact that in Φ_0 the two contributions come with an equal weight leads to what is known as the dissociation catastrophe of Hartree–Fock theory. Since Φ_C0 describes a homolytic and Φ_I0 a heterolytic process and since the latter requires a much higher energy, the total dissociation curve produces an artificially large dissociation energy for the homolytic process. The customary solution is to construct the doubly excited determinant
Φ_1 = |aa̅|,
which, after following a similar procedure as above, is found to be
Φ_1 = -Φ_C0 + Φ_I0.
We may now define a two-determinant trial wavefunction
Ψ_0 = 𝒞_0Φ_0 + 𝒞_1Φ_1,
which can be written as
Ψ_0 = (𝒞_0-𝒞_1)Φ_C0 + (𝒞_0+𝒞_1)Φ_I0.
It is clear that if 𝒞_0=-𝒞_1, the ionic contribution vanishes and the covalent contribution survives. This trial function has the necessary flexibility to describe the entire curve in a qualitatively correct way: close to the equilibrium 𝒞_0≈ 1, which agrees well with the fact that HF is a good description of the H_2 molecule at the equilibrium distance. This analysis is also identical to the result obtained from a valence bond (VB) construction of the wavefunction. Similar results can be obtained for the correct heterolytic curve starting from the doubly excited determinant Φ_1.
So far we have only considered the ground state and the doubly excited state within the minimal basis. When it comes to singly excited states, it is useful to represent them using CSFs, i.e., linear combinations of determinants that are spin-eigenstates, as mentioned above. For a singlet state, this has the form
Θ_S = 1/√(2)(|ia̅| - |i̅a|),
This wavefunction can also be analyzed in terms of AO basis determinants,
|ia̅| = -Φ_C1 + Φ_I1,
|ai̅| = Φ_C1 + Φ_I1,
where the ionic and covalent contributions are
Φ_C1 = 1/2(|μν̅| - |νμ̅|),
Φ_I1 = 1/2(|μμ̅| - |νν̅|).
Therefore,
Θ_S = √(2)Φ_I1,
which means that this is a fully ionic solution at long distance. Similarly, the three degenerate triplet states,
Φ_T^+ = |ia|=-|μν|,
Θ_T = 1/√(2)(|ia̅| + |i̅a|)=√(2)Φ_C1,
Φ_T^- = |i̅a̅|=-|μ̅ν̅|,
all of which have a covalent character.
§.§ The Necessary Integrals
The calculation of the electronic energy requires the construction of the integrals in Eq. (<ref>). The simplest model that can be evaluated without the aid of a computer assumes that the basis functions χ_μ and χ_ν are simple normalized Gaussians<cit.>
χ_μ = (2α/π)^3/4e^-α(𝐫+𝐑)^2,
χ_ν = (2α/π)^3/4e^-α(𝐫-𝐑)^2,
where we have assumed that the two atoms are at an equal distance 𝐑 from the origin. Without loss of generality, we may choose the molecule to lie along the x-axis, i.e., that 𝐑=(R,0,0). Thus, the above Gaussians decompose into
(2α/π)^3/4 e^-α(𝐫±𝐑)^2 = (2α/π)^3/4 e^-α(x± R)^2 e^-α y^2 e^-α z^2.
While the multiplication of the same Gaussians is easily evaluated, when different Gaussians are multiplied, the Gaussian product theorem applies
e^-α(𝐫±𝐑)^2e^-α(𝐫∓𝐑)^2 = e^-2α R^2 e^-2α𝐫^2.
The overlap integrals are then
S_μμ = S_νν = (2α/π)^3/2∫ e^-2α(𝐫±𝐑)^2 d𝐫 = 1,
by normalization, and
S_μν = S_νμ = (2α/π)^3/2 e^-2α R^2∫ e^-2α𝐫^2 d𝐫 = e^-2α R^2.
The two unique values of the kinetic energy integral T_μν can be determined by using differentiation and partial integration techniques. The remaining integrals contain the Coulomb operator in some form. The nuclear-electron attraction term also depends on the position of the nuclei, yielding integrals of the form V_μν(A) and V_μν(B), where A and B denote the nuclei on which χ_μ and χ_ν are centered, respectively. There are altogether three unique values of these integrals, while the two-electron integrals (μν|κλ) may assume four distinct values, all listed in the Supplementary Material. We note that the evaluation of the Coulomb integrals is significantly simplified by the application of the Gaussian integral, e.g.,
1/|𝐫±𝐑| = 1/√(π)∫^+∞_-∞ e^-(𝐫±𝐑)^2 t^2 dt,
as discussed in detail elsewhere.<cit.> This introduces another Gaussian function beyond the AOs χ_μ and χ_ν, and thus the usual product rules and integration techniques apply. In particular, the change of variables of the type u^2 = t^2/(2α + t^2) simplifies the evaluation of the integrals significantly.
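As a short worked example of this technique (our own check, consistent with the atomic energy expression quoted below), the one-center attraction integral can be evaluated as
V_μμ(A) = -(2α/π)^3/2 1/√(π)∫^+∞_-∞ dt∫ e^-(2α+t^2)𝐫^2 d𝐫 = -(2α/π)^3/2 1/√(π)∫^+∞_-∞ (π/(2α+t^2))^3/2 dt = -(2α/π)^3/2 π/α = -2√(2α/π),
where we used ∫^+∞_-∞ (2α+t^2)^-3/2 dt = 1/α. Together with the kinetic energy integral T_μμ = (3/2)α, this reproduces the atomic energy E_H used below.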
Once the two-electron integrals are known, the most general form of the effective two-body term for two atomic orbitals can be written as
G_μμ(𝐏)
= 1/2 (μμ|μμ) P_μμ
+ (μμ|μν) P_μν
+ (μμ|νν) P_νν
- 1/2 (μν|μν) P_μν,
G_νν(𝐏)
= (νν|μμ) P_μμ
+ (νν|μν) P_μν
+ 1/2 (νν|νν) P_νν
- 1/2 (νμ|νμ) P_μμ,
G_μν(𝐏) = G_νμ(𝐏)
= 1/2 (μν|μμ) P_μμ
+ 3/2 (μν|μν) P_μν
+ 1/2 (μν|νν) P_νν
- 1/2 (μμ|νμ) P_μν.
Here, we have only used the fact that P_μν is symmetric.
§.§ The Orbital Exponent
These formulae can be evaluated for any R once the exponent α is known. We may determine this by assuming that the single Gaussian considered here is an STO-1G orbital, i.e., one in which a single Gaussian (1G) is used to fit a Slater type orbital (STO). The coefficient α may be obtained by maximizing the overlap<cit.>
⟨ψ_1s|χ_μ⟩ = √(ζ_0^3/π)(2α_0/π)^3/4∫ e^-ζ_0 |𝐫|e^-α_0 𝐫^2 d𝐫.
Assuming that ζ_0=1, as in the H atom, this yields α_0≈ 0.270950. As α_0 is associated with |𝐫|^2 and ζ_0 with |𝐫|, it is customary<cit.> to rescale α using
α = α_0ζ^2,
if the value of ζ is different from ζ_0=1. A change in the value of ζ would reflect the change in the STO as a result of the molecular environment. Thus, to find an optimal ζ, the energy of an H atom may be optimized as a function of ζ. This energy is simply given as
E_H = T_μμ + V_μμ(A) = (3/2)α - 2√(2α/π),
and the optimization yields
ζ = (2/3)√(2/(πα_0)),
yielding
α = 8/(9π),
which is approximately α≈ 0.282942. This choice of α yields the best energy value obtainable for the H atom using a single atom-centered Gaussian, E_H=-4/3π≈ -0.424413, still relatively far off from the exact value of -1/2 in atomic units.
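For completeness, the intermediate minimization step (our own short derivation, consistent with the values quoted above) reads
dE_H/dα = 3/2 - √(2/(πα)) = 0  ⟹  √(α) = (2/3)√(2/π)  ⟹  α = 8/(9π),
and substituting back, E_H = (3/2)·8/(9π) - 2√(16/(9π^2)) = 4/(3π) - 8/(3π) = -4/(3π) ≈ -0.424413. For orientation, with this exponent the overlap at a typical internuclear distance of D = 2R = 1.4 a.u. (an illustrative choice of ours) is S_μν = e^-2α R^2 ≈ 0.76.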
§.§ The RHF Potential Energy Curves
The energy expression in Eq. (<ref>) is particularly simple for the ground state of H_2,
E_0 = ⟨Φ_0|Ĥ|Φ_0⟩ = E_n + 2 (i|ĥ|i) + (ii|ii).
Once the intergrals are constructed and an initial guess of 𝐂 is found, the next step should be to build the Fock matrix and optimize P_μν iteratively. Fortunately, the symmetry adapted orbitals in Eq. (<ref>) and Eq. (<ref>) turn out to be the self-consistent solutions of the Hartree–Fock equations. To see this, it is enough to show that G_μμ=G_νν, since for a real symmetric two-by-two matrix with identical diagonal elements, the eigenvectors have the form (a,a) or (a,-a) for some value a, usually fixed by normalization. With these assumptions, the charge density matrix is
𝐏 = 1/1+S_μν[ 1 1; 1 1 ].
Substituting this into Eqs. (<ref>), (<ref>), (<ref>) leads to simplifications which are discussed in more detail in the Supplementary Material, the most important of which is that G_μμ(𝐏) = G_νν(𝐏). Once these quantities are available, the Fock matrix can be built as in Eq. (<ref>), while the energy can be obtained as in Eq. (<ref>), using the integrals discussed in Sec. <ref>. These steps and the final analytic formulae are discussed in more detail in the Supplementary Material.
To obtain the doubly excited state Φ_1, Eq. (<ref>) should be evaluated. Fortunately, for H_2 in the minimal basis, a simpler route is available by simply relabeling all i to a in the energy formula,
E_1 = ⟨Φ_1|Ĥ|Φ_1⟩ = E_n + 2 (a|ĥ|a) + (aa|aa).
This amounts to constructing a new density,
𝐏̅ = 1/1-S_μν[ 1 -1; -1 1; ],
which then produces a modified G(𝐏̅) matrix. The procedure from this point is very similar to the case of E_0 and is detailed in the Supplementary Material.
Finally, the HF energies, E_S and E_T, of the singly excited singlet and triplet states can be obtained from Eq. (<ref>), by using the Slater-Condon rules,<cit.>
E_S = ⟨Θ_S|Ĥ|Θ_S⟩ = E_n + (i|ĥ|i) + (a|ĥ|a) + (ii|aa) + (ia|ia),
and
E_T = ⟨Θ_T|Ĥ|Θ_T⟩ = E_n + (i|ĥ|i) + (a|ĥ|a) + (ii|aa) - (ia|ia).
The AO expressions and the final analytical formulae are again given in the Supplementary Material.
Fig. <ref> displays the dissociation curves Δ E_X = E_X - 2E_H for all the possible Hartree–Fock states in the minimal basis. Thus, E_X can be the RHF energy of the 1^1Σ^+_g singlet ground state (E_0), the doubly excited 2^1Σ^+_g singlet state (E_1), the singly excited 1^1Σ^+_u singlet state (E_S) and one of the degenerate 1^3Σ^+_u triplet states (E_T). Around the equilibrium distance, all curves behave reasonably, the ground state and the singly excited state have a minimum indicating a stable structure for H_2 in these states. As the two H atoms are pulled apart, the ground state and the doubly excited states converge. From the formulae provided in the Supplementary Material, it is easily seen that Δ E_0 and Δ E_1 both converge to the value √(α/π) as the internuclear distance D goes to infinity, while Δ E_S approaches 2√(α/π) and Δ E_T goes to 0 as D→∞. The fact that the ground state curve in particular does not approach zero is often referred to as the `dissociation catastrophe' of the RHF method.<cit.> It shows that RHF does not produce two H atoms at infinite distance, but due to the weight of the ionic contributions mentioned in Sec. <ref>, it significantly overshoots, although it should be noted that it is still well under the purely ionic limit at 2√(α/π). One way to solve this problem is to mix various states of the same spin and spatial symmetry; we will consider this approach in the next section.
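As a consistency check on these limits (our own verification, using integrals of the same family as above), note that at infinite separation all two-center quantities vanish, h_μμ→ E_H, and the only surviving two-electron contribution to E_0 is half of the on-site repulsion:
Δ E_0 = E_0 - 2E_H → 1/2(μμ|μμ) = √(α/π) ≈ 0.30,
since the self-repulsion of the normalized Gaussian charge distribution is (μμ|μμ) = 2√(α/π). The same reasoning applied to Φ_1, Θ_S and Θ_T reproduces the √(α/π), 2√(α/π) and 0 limits quoted above.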
§.§ The FCI Potential Energy Curves
To overcome the problems of the RHF method, the wavefunction can also be expanded as in Eq. (<ref>). This means that the matrix Hamiltonian in the basis of the many-electron basis states shown in Eq. (<ref>) must be diagonalized. Notice that neither of the singlet RHF states mixes with the triplet, as they have different spin symmetry, and Θ_S does not mix with Φ_0 or Φ_1, as they have different spatial symmetry. Thus, the only non-zero off-diagonal elements in Eq. (<ref>) are those between Φ_0 and Φ_1, yielding a conveniently simple two-by-two matrix
𝐇=
[ E_0 g; g E_1; ],
where g = ⟨Φ_1|Ĥ|Φ_0⟩ = (ia|ia), discussed more explicitly in the Supplementary Material.
As discussed before in Sec. <ref>, the mixture of Φ_0 and Φ_1 is enough to produce the correct ground-state solution in the minimal basis, due to the cancellation of the ionic terms. The eigenvalues of H, shown in Fig. <ref>, are
E_± = A±√(ω^2+4g^2)/2,
A = 1/2(E_0 + E_1), ω = E_1-E_0,
where A is the average RHF energy of the two states, while ω is the excitation energy. Using the formulae of the Supplementary Material, it is now easy to show that the FCI solution with the minus sign, E_-, approaches 0 as D→∞, corresponding to the correct covalent dissociation limit. Furthermore, the other solution, E_+, converges to the correct ionic limit 2√(α/π). Thus, within the minimal basis, only the triplet and the singly excited singlet states are described correctly at the RHF level; it is necessary to mix two RHF states to recover the exact solutions for the other two. As we will see in the next section, there is an alternative: breaking the spin symmetry also removes the dissociation catastrophe.
§.§ The UHF Potential Energy Curves
The RHF solution in Eq. (<ref>) has the property that C_μ i = C_ν i for the occupied MO i. The unrestricted Hartree–Fock (UHF) model differs from RHF in that there are two different sets of spatial orbitals for electrons with alpha and beta spins, i_α and i_β. Since these MOs span the same space as the RHF solution, we may represent them using the RHF orbitals as a basis,<cit.>
φ_i_α = U^α_iiφ_i + U^α_iaφ_a,
φ_i_β = U^β_iiφ_i + U^β_iaφ_a.
with U^α and U^β being the unitary transformations that yield i_α and i_β. In this case, the MO coefficients belonging to the two AOs are not fixed by symmetry and need not have the same magnitude, i.e., C_μ i_σ≠ C_ν i_σ, for σ=α, β. On the other hand, the numbers of alpha and beta electrons are equal, which is reflected in the solution: both alpha and beta orbitals can be obtained from the RHF ones by a rotation of the same angle but opposite direction, i.e., U^α=U^β†, U^α_ii=U^β_ii≡ U_ii and U^α_ia=-U^β_ia≡ U_ia. Substitution into the UHF determinant gives
Φ_UHF = |i_αi̅_β| = U_ii^2 Φ_0 - U_ia^2Φ_1 +
√(2)U_iiU_iaΘ_T,
which reveals the spin-symmetry-broken nature of the UHF wavefunction since it mixes singlet and triplet states. Evaluating the energy as an expectation value gives
E_UHF =
U_ii^4 (E_0 + E_1 +2g -2E_T) +2U^2_ii(E_T -E_1 - g) + E_1,
where the normalization condition was used to eliminate U_ia. This expression can be minimized as a function of U_ii with the result
U_ii^2 = E_1 + g -E_T/E_0 + E_1 +2g -2E_T.
Note that because of normalization, it must be true that U_ii^2≤ 1. It turns out that the above function decreases monotonically as a function of D and approaches 0.5 at infinity. Thus, we need only find out where it takes the value U_ii^2=1, which is the case if
E_0 +g -E_T = 0.
This happens at the Coulson-Fischer point at a distance of D_CF = 2.4653.
Thus, the optimized UHF energy for the ground state becomes
E_UHF =
E_0, D<D_CF,
E_1 - (E_1+g-E_T)^2/E_0+E_1+2g-2E_T, D_CF≤ D.
Fig. <ref> shows the RHF, UHF and FCI ground-state solutions. Unlike the RHF solution, the UHF curve indeed approaches the FCI limit at infinite distance. The UHF solution is often a convenient starting point for electron correlation methods since it is a much more flexible reference point than the spin-restricted alternative, which can often only achieve a qualitatively correct starting point by mixing several determinants or CSFs. While the spin-symmetry-broken character of UHF can also be problematic,<cit.> the FCI solution in Eq. (<ref>) is much harder to obtain. Consequently, many approximate approaches have been developed on classical computers to tackle this problem, and more recently the potential benefit of quantum computers in solving this problem has also been investigated. In the next section, we will continue the discussion of the H_2 molecule from the perspective of quantum computers. In doing so, we will provide an introduction to the topic of quantum algorithms for solving quantum chemistry problems, which is currently an active and growing area of research.
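The piecewise UHF result above can also be reproduced by brute force: assuming the quantities E_0, E_1, E_T and g have already been evaluated at the chosen internuclear distance D (for instance from the closed-form expressions in the Supplementary Material), one simply minimizes the quartic expression for E_UHF over the allowed range of U_ii^2. A minimal Python sketch:

import numpy as np

def uhf_energy(E0, E1, ET, g):
    # Scan E_UHF over the allowed interval 0 <= U_ii**2 <= 1; the endpoint
    # U_ii**2 = 1 recovers the RHF energy E0, while an interior minimum is
    # the symmetry-broken solution beyond the Coulson-Fischer point.
    x = np.linspace(0.0, 1.0, 100001)          # x = U_ii**2
    E = x**2 * (E0 + E1 + 2*g - 2*ET) + 2*x*(ET - E1 - g) + E1
    return E.min()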
§ THE HYDROGEN MOLECULE ON THE QUANTUM COMPUTER
§.§ The Second-Quantized Hamiltonian
Second quantization is a technique in which the evaluation of matrix elements is performed through algebraic operations. To achieve this, one switches from the Hilbert-space representation to a Fock-space representation. Within the Fock space, Slater determinants are represented as occupation number vectors (ONVs), i.e., lists of the occupation numbers of the orbitals in their canonical order. Next, fermionic creation and annihilation operators are defined that map ONVs onto other ONVs. If n_P=0,1 is the occupation number of spin-orbital P, which may be labelled using integers P=0,1,2,…, then the annihilation and creation operators are defined by their action as
â_P^† | n_0, …, n_P, …, n_N ⟩ = δ_0n_PΓ_P | n_0, …, 1_P, …, n_N ⟩,
â_P | n_0, …, n_P, …, n_N ⟩ = δ_1n_PΓ_P | n_0, …, 0_P, …, n_N ⟩,
where Γ_P=∏_Q=0^Q=P-1(-1)^n_Q is just a sign factor. These operators obey the following anti-commutation relations,
{â_P,â_Q} = 0,
{â_P^†, â_Q^†} = 0,
{â_P^†, â_Q} = δ_PQ,
and it is worth pointing out that δ_PQ=δ_pqδ_σ_pσ_q, see discussion on Eq. (<ref>). Then, the Hamiltonian can be written in terms of creation and annihilation operators, which is its second-quantized form,
Ĥ =
E_n
+ ∑_PQ (P|ĥ|Q)
â_P^†â_Q
+ 1/2∑_PQRS (PS|RQ)
â_P^†â_Q^†â_Râ_S,
where the connection with the MO integrals defined previously is (P|ĥ|Q)=(p|ĥ|q)δ_σ_pσ_q and (PS|RQ)=(ps|rq)δ_σ_pσ_sδ_σ_rσ_q.
We next again consider the H_2 example in particular. Using the minimal basis, each of the 2 electrons can be in 4 possible states, the canonical order of which is φ_iα, φ_iβ, φ_aα, φ_aβ. Relabelling these as P = 0,1,2,3, a possible two-electron state has the form |n_0, n_1, n_2, n_3 ⟩ (with the sum of occupation numbers, i.e., the number of electrons, being 2). Due to the Pauli exclusion principle, each occupation number can be equal to 0 or 1. Thus, the lowest-energy determinant is simply |1100⟩, and other determinants can be written similarly. Setting up the FCI problem would correspond to evaluating matrix elements of Ĥ with respect to these Slater determinants. As one example, calculating ⟨ 1100 |Ĥ|1100⟩ would yield the result in Eq. (<ref>). We can also write the full H_2 Hamiltonian in second-quantized form. In the H_2 case, it is obvious that some of the one- and two-body integrals vanish due to spin-integration. Other terms are zero due to spatial symmetry. After simplifications, the H_2 Hamiltonian takes the form<cit.>
Ĥ =
E_n
+ (i|ĥ|i) (a_0^† a_0 + a_1^† a_1)
+ (a|ĥ|a) (a_2^† a_2 + a_3^† a_3)
+ (ii|ii) a_0^† a_1^† a_1 a_0
+(ii|aa)(a_0^† a_2^† a_2 a_0 + a_1^† a_3^† a_3 a_1)
+(ii|aa) (a_0^† a_3^† a_3 a_0 + a_1^† a_2^† a_2 a_1)
+ (aa|aa) a_2^† a_3^† a_3 a_2
+ (ia|ia) ( a_0^† a_3^† a_1 a_2 + a_2^† a_1^† a_3 a_0 + a_0^† a_1^† a_3 a_2 + a_2^† a_3^† a_1 a_0 ),
where the antisymmetrized integral is defined as (ii|aa) = (ii|aa) - (ia|ia). All of these integrals are known from the classical calculations in the previous section.
§.§ Second-Quantized Qubit Mappings
The basic operations on a quantum computer are carried out on two-state quantum systems called qubits. A general qubit state is an arbitrary linear combination of the |0⟩ and |1⟩ states, i.e. |ψ⟩ = α |0⟩ + β|1⟩, with normalization |α|^2 + |β|^2 = 1.
Notice that, because each spin orbital in a chemical system can be in state |0⟩ or |1⟩, it seems reasonable that we can map Slater determinants, and fermionic Hamiltonians, to a qubit representation. However, the operators that act on qubits are written in terms of Pauli matrices (defined in Supplementary Material), which follow a different algebra compared to the fermionic creation and annihilation operators. Therefore, we need a way to convert the fermionic operators in the second-quantized Hamiltonian to the Pauli representation. The oldest and simplest mapping is due to Jordan and Wigner,<cit.>
a_P^† →1/2(X_P - iY_P ) ⊗_Q<P Z_Q,
a_P →1/2(X_P + iY_P) ⊗_Q<P Z_Q,
where the sub-index indicates the qubit the matrix is acting on. Here, the string of Z operators is needed to enforce the fermionic anti-commutation relations defined in Eqs. (<ref>) to (<ref>).
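Since the mapping is purely algebraic, it is easy to verify numerically. The following self-contained NumPy sketch builds the Jordan-Wigner operators as explicit 16 × 16 matrices for four spin orbitals (the minimal-basis H_2 case) and checks the anticommutation relations; the qubit ordering in the Kronecker products is an arbitrary choice made for this illustration.

import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

def annihilation(P, n_qubits):
    # Jordan-Wigner: a_P = Z x ... x Z x (X + iY)/2 x I x ... x I
    return kron_all([Z] * P + [(X + 1j * Y) / 2] + [I2] * (n_qubits - P - 1))

n = 4
a = [annihilation(P, n) for P in range(n)]
adag = [op.conj().T for op in a]

for P in range(n):
    for Q in range(n):
        assert np.allclose(a[P] @ adag[Q] + adag[Q] @ a[P],
                           np.eye(2 ** n) * (P == Q))
        assert np.allclose(a[P] @ a[Q] + a[Q] @ a[P], 0)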
Applying this mapping to Eq. (<ref>) leads to the qubit Hamiltonian H, the explicit form of which can be found in the Supplementary Material for Hamiltonians with real coefficients. While the qubit Hamiltonian is quite lengthy in the general case, it assumes a relatively simple form in the H_2 case,<cit.>
H = H_0 + H_ii [Z_0 + Z_1] + H_aa [Z_2 + Z_3]
+ 1/4(ii|ii) Z_0 Z_1
+ 1/4(ii|aa) [Z_0 Z_2 + Z_1 Z_3]
+ 1/4(ii|aa) [Z_0 Z_3 + Z_1 Z_2]
+ 1/4(aa|aa) Z_2 Z_3
- 1/4(ia|ia)
[
X_0 X_1 Y_2 Y_3
+ Y_0 Y_1 X_2 X_3
- X_0 Y_1 Y_2 X_3
- Y_0 X_1 X_2 Y_3
],
with coefficients
H_0 =
E_n + (i|ĥ|i) + (a|ĥ|a)
+ 1/4(ii|ii) + 1/2(ii|aa) + 1/4(aa|aa),
H_ii =
1/2(i|ĥ|i)
+ 1/4[(ii|ii) + (ii|aa)],
H_aa =
1/2(a|ĥ|a)
+ 1/4[(ii|aa) + (aa|aa)].
Here, the spin-summed integral is (ii|aa) = 2(ii|aa)-(ia|ia).
There have been several alternative proposals to improve on the Jordan-Wigner mapping, both in terms of the number of qubits used and in terms of the length of the Pauli strings. A simple improvement to reduce the number of qubits required is the Qubit Efficient Encoding (QEE)<cit.>. In this case, the mapping focuses on the fermionic ladder operators â^†_P â_Q, which correspond to sums of dyadic products of basis vectors of the type |𝐧⟩⟨𝐧'|, where 𝐧 and 𝐧' are sequences of occupation numbers that differ at positions P and Q (n_P=n'_Q=1, n_Q=n'_P=0). To proceed further, the basis vectors are converted into a binary form based on their ordering. In the general case, the number of qubits required is only logarithmic in the number of spin orbitals. In the H_2 case, there are six possible two-electron basis states of the type |𝐧⟩ =|n_0,n_1,n_2,n_3⟩ such that the occupation numbers add up to 2. In the occupation number representation, encoding |𝐧⟩ requires 4 qubits. However, the six possible basis states may also be labeled as 0,…,5 by some convention. Since the binary representation of the largest ordinal, 5, is 101 and this requires only three digits, all six states can also be represented as |𝐛⟩ = |b_0,b_1,b_2⟩ using only 3 qubits. The fermionic ladder operators then also have the form |𝐛⟩⟨𝐛'| which can be decomposed into direct products of |0⟩⟨ 0|, |0⟩⟨ 1|, |1⟩⟨ 0| and |1⟩⟨ 1|.
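As a small illustration of this relabelling for H_2, one can enumerate the six two-electron occupation number vectors and attach three-bit labels to them; the particular ordering used below is an arbitrary convention chosen only for this example.

from itertools import combinations

onvs = [[1 if P in occ else 0 for P in range(4)]
        for occ in combinations(range(4), 2)]

for index, onv in enumerate(onvs):
    print(onv, "->", format(index, "03b"))    # e.g. [1, 1, 0, 0] -> 000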
A separate issue with the Jordan-Wigner encoding is the long string of anti-symmetrizing Z operators that appears after the mapping, which leads to undesirable scaling. Bravyi and Kitaev proposed a new mapping which encoded this anti-symmetrization in a more efficient way<cit.>. The number of qubits required still depends on the number of spin orbitals, N, but this time, the information that is stored on these qubits depends on the qubit index, starting from 0. If the index is even then the qubit is encoded with the orbital occupation, much like in Jordan-Wigner. If the index is odd then the anti-symmetrization of a subset of orbitals is encoded. Finally, when log_2(i+1) is an integer (where i is the qubit index) then the anti-symmetrization of all orbitals with an index lower than or equal to the current index is encoded. All sums are performed in modulo 2. The Jordan-Wigner and Bravyi-Kitaev mapping have been compared in the literature for chemical calculations.<cit.> Although the Bravyi-Kitaev approach certainly has its advantages, for our purposes, the Jordan-Wigner mapping is a sufficient starting point.
§.§ The 1-Qubit Hydrogen Hamiltonian
The symmetries present in the Hamiltonian can be exploited to reduce (or “taper”) the number of qubits required for a calculation. For the case of H_2, note that only two Slater determinants, |1100⟩ and |0011⟩, can contribute to the ground-state wavefunction, due to particle number, spin and spatial symmetries. Since only two states can contribute, this suggests that the corresponding Hamiltonian can be represented by just a single qubit.
The general procedure to reduce the Hamiltonian is beyond the scope of this paper and is discussed elsewhere.<cit.> Here, it is enough to note that we are looking for a transformation of the type
H'=U^† H U,
where U is unitary. Since H and H' are unitarily equivalent, their eigenvalues are also the same. The main requirement that should make this transformation worthwhile is that H' should commute with Pauli X matrices for at least some of the qubits. If this holds, then for the purposes of determining the ground-state energy, these qubits can be replaced by the eigenvalues of the corresponding X matrices, i.e., either +1 or -1. In the H_2 case, U can be written<cit.> as U=U_1 U_2 U_3, with
U_P = 1/√(2)(X_P + Z_0 Z_P),
for P = 1, 2, 3. The transformation 𝒫'=U^†𝒫U can now be performed for each Pauli string 𝒫 in the Jordan-Wigner qubit Hamiltonian in Eq. (<ref>). The results are summarized in the Supplementary Material. As a consequence, only X or I matrices act on qubits 1, 2 and 3 in H'. For example, for 𝒫 = Z_2, 𝒫' = Z_0 X_2; although there is a Z acting on qubit 0, only X or I Paulis act on qubits 1, 2 and 3.
Before these qubits can be tapered, the corresponding eigenvalues of the X matrices should be known for the eigenstate of H' that we seek, which will be the ground state. Here, symmetry can again be exploited since, as noted above, it only allows the configurations |1100⟩ and |0011⟩ to contribute to the ground state, |Ψ⟩. Therefore, the eigenvalue of the operator Z_0 Z_1 must be equal to +1, while the eigenvalues of Z_0 Z_2 and Z_0 Z_3 will equal -1. Taking Z_0 Z_1 as an example, we may write
Z_0 Z_1 |Ψ⟩ = | Ψ⟩.
Inserting U U^† = I,
Z_0 Z_1 U U^† | Ψ⟩ = | Ψ⟩,
and applying U^† to both sides gives
( U^† Z_0Z_1 U ) U^† |Ψ⟩ = U^† | Ψ⟩.
From the above, U^† | Ψ⟩ is an eigenvector of the transformed Hamiltonian H' and based on the results shown in the Supplementary Material, U^† Z_0 Z_1 U = X_1. Therefore, the eigenstates of H' are also eigenstates of X_1 with eigenvalue +1, and all instances of X_1 in the Hamiltonian can be replaced by +1. The same argument can be worked through for X_2 and X_3, which will be replaced by eigenvalues -1.
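These transformation rules are straightforward to verify by brute force. The short NumPy sketch below builds U = U_1 U_2 U_3 as a 16 × 16 matrix and checks a few of the transformed Pauli strings listed in the Supplementary Material; the helper function op and its qubit ordering are choices made only for this illustration.

import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op(singles, positions, n=4):
    # tensor product with the given single-qubit matrices placed at `positions`
    factors = [I2] * n
    for pos, s in zip(positions, singles):
        factors[pos] = s
    out = np.array([[1.0]])
    for f in factors:
        out = np.kron(out, f)
    return out

U = np.eye(16)
for P in (1, 2, 3):
    U = U @ (op([X], [P]) + op([Z, Z], [0, P])) / np.sqrt(2)

# U is real and symmetric here, so U^dagger = U.T
assert np.allclose(U.T @ op([Z, Z], [0, 1]) @ U, op([X], [1]))   # Z0 Z1 -> X1
assert np.allclose(U.T @ op([Z], [2]) @ U, op([Z, X], [0, 2]))   # Z2   -> Z0 X2
assert np.allclose(U.T @ op([Z], [0]) @ U, op([Z], [0]))         # Z0   -> Z0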
Thus, the only operators remaining in the transformed Hamiltonian are Z_0 and X_0 acting on qubit 0, and qubits 1, 2 and 3 can be removed. The final single-qubit Hamiltonian has the form
H' = c_0 + c_1 Z_0 + c_2 X_0,
with
c_0 = H_0 + 1/4[(ii|ii) + (aa|aa)] - 1/2(ii|aa),
c_1 = 2(H_ii - H_aa),
c_2 = (ia|ia).
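Since H' is a 2 × 2 matrix, its spectrum is available in closed form: the eigenvalues are c_0 ± √(c_1^2+c_2^2), the lower of which is the ground-state energy sought. A minimal sketch (the numerical coefficients are placeholders, since the actual c_0, c_1, c_2 depend on the integrals at the chosen D):

import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

c0, c1, c2 = -0.35, -0.42, 0.18          # illustrative placeholder values
H1 = c0 * np.eye(2) + c1 * Z + c2 * X

E_ground = c0 - np.hypot(c1, c2)         # closed-form lowest eigenvalue
assert np.isclose(np.linalg.eigvalsh(H1)[0], E_ground)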
§.§ Quantum Algorithms
Once the qubit Hamiltonian is available, the question still remains of how the energy calculation is to be carried out. In the current era of noisy intermediate scale quantum (NISQ) devices, the program depth measured in terms of the number of gates in the quantum circuit must be short enough so that the program can run before device errors ruin the result. This has led to a search for algorithmic solutions that satisfy this criterion, the most important of them for chemistry being the variational quantum eigensolver (VQE) algorithm<cit.>. In VQE, the wavefunction is parametrized in a similar way as in traditional approaches of quantum chemistry, such as coupled cluster theory and variational Monte Carlo, except that the quantum implementation should be unitary. Such an approach relies on Ansätze, i.e., the wavefunction is parametrized using a reference function (typically the HF solution) and parametrized quantum gates acting on it. This leads to a linear combination of excited determinants. VQE is a hybrid classical-quantum algorithm in which the energy evaluations happen on the quantum computer, while the optimization of the wavefunction coefficients is performed on the classical computer. Although this approach is more familiar to computational chemists, and for H_2 in the minimal basis it could even yield the exact energy, it has steep scaling with system size<cit.>, and so we do not consider it further here.
Quantum phase estimation (QPE), on the other hand, is a purely quantum algorithm, first introduced by Kitaev in 1995<cit.>. The QPE method can be used to determine the eigenvalues of a unitary operator U,
U |Ψ_k⟩ = e^2π i θ_k |Ψ_k⟩,
where θ_k is the phase corresponding to the k'th eigenstate of U, |Ψ_k⟩. The quantum circuit diagram for the “textbook” QPE algorithm<cit.> is shown in Fig. <ref>. The top m qubits are ancilla qubits, which are measured at the end of the circuit to obtain the first m bits of an eigenphase θ_k of U. The bottom n qubits (represented in this circuit diagram by a single line), to which the unitary U is applied, are prepared in an initial state, |ψ⟩. Here, n is the number of qubits in the Hamiltonian, which is equal to 1 for the H_2 Hamiltonian in Eq. <ref>. The initial state |ψ⟩ should be a good approximation to the exact eigenstate |Ψ_k⟩, whose eigenphase we want to estimate. The larger the overlap, the higher the probability of measuring the desired θ_k. However, in general there is a chance that the wavefunction will collapse to an undesired |Ψ_k⟩ upon measurement.
Remember that our goal is to estimate the eigenvalues of H, but QPE provides the eigenphases of a unitary U. In order to apply QPE to the energy estimation problem, the eigenvalues of H must be encoded in the phases of U. Performing QPE with U will then allow estimation of the desired energies. The most common encoding of H in U is through the time evolution operator[Usually the time evolution operator would be e^-iHt, but the minus sign is unimportant in QPE, as the phases can be extracted regardless. We call U(t) the time evolution operator for brevity.],
U(t) = e^i H t,
where t is a scalar parameter. If the eigenvalues of H are denoted E_k, then the eigenvalues of U will be of the form e^i E_k t, and it is trivial to obtain the desired energy from the measured eigenphase. An alternative encoding is sometimes considered in more sophisticated implementations of QPE, which is discussed in Section <ref>. To begin with, we will turn our attention to the implementation of the time evolution operator in Eq. <ref>. For most instances of H for chemistry problems, this cannot be implemented exactly on a quantum computer, and we instead must consider approximate approaches such as Trotterization.
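To make the procedure concrete, the following self-contained NumPy sketch simulates textbook QPE on a dense statevector (no quantum hardware or circuit library is involved); the Hamiltonian coefficients, evolution time and number of ancilla qubits are illustrative choices.

import numpy as np

def qpe_distribution(U, psi0, m):
    # Probability of each m-bit phase readout when QPE is applied to U with
    # the system register prepared in psi0.
    M, d = 2 ** m, len(psi0)
    state = np.zeros((M, d), dtype=complex)   # (1/sqrt(M)) sum_k |k> U^k |psi0>
    Uk = np.eye(d, dtype=complex)
    for k in range(M):
        state[k] = Uk @ psi0 / np.sqrt(M)
        Uk = U @ Uk
    jj, kk = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    Finv = np.exp(-2j * np.pi * jj * kk / M) / np.sqrt(M)   # inverse QFT
    return np.sum(np.abs(Finv @ state) ** 2, axis=1)

# single-qubit demo with U = exp(i H t) and placeholder coefficients
c1, c2, t = -0.4, 0.2, 1.0
H = np.array([[c1, c2], [c2, -c1]])           # c1 Z + c2 X
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(1j * evals * t)) @ evecs.conj().T
probs = qpe_distribution(U, evecs[:, 0], m=6)
print("estimated phase:", np.argmax(probs) / 2 ** 6,
      " exact:", (evals[0] * t / (2 * np.pi)) % 1.0)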
§.§ Trotterization
As described in Section <ref>, we would like to perform QPE, the circuit diagram for which is shown in Fig. <ref>. We wish to encode the Hamiltonian in the unitary U through time evolution, as defined in Eq. <ref>.
For the case of H_2 in a minimal basis, the Hamiltonian consists of two single-qubit Pauli operators and an identity contribution, as in Eq. <ref>. We drop the constant shift c_0, so that
H = c_1 Z + c_2 X,
and
U(t) = e^i(c_1 Z + c_2 X) t.
Therefore we have to consider the question, how can this operator be implemented on a quantum computer? For a more general chemical Hamiltonian, its qubit form can be written
H = ∑_j=1^L H_j,
where each H_j consists of an n-qubit Pauli, P_j, and a coefficient c_j, so that H_j = c_j P_j, for example[More generally, each H_j might be a linear combination of commuting Paulis<cit.>; time-evolving a Hamiltonian of fully-commuting Pauli terms can be performed efficiently. Such H_j terms are called fast-forwardable Hamiltonians.].
In theory, a quantum computer is capable of implementing a general unitary operation, however the number of gates required to do so may be extremely large. In practice, a finite set of basis gates is defined, from which all other unitary operations are constructed. These basis gates ultimately correspond to operations that are performed on the physical qubits. On current quantum computers, such as a superconducting quantum processor, a common set of native operations might include Pauli rotation gates, R_P(θ) = e^-i (θ/2) P, and a CZ (controlled Z) gate. For fault-tolerant quantum computers, arbitrary rotation gates cannot be protected, and one might instead work with the Hadamard gate, the phase gates S and T, and the CNOT gate. However, in both cases, complex multi-qubit operations such as e^iHt, for a general H, cannot be performed directly.
The most common solution to approximately implement U = e^iHt for H as in Eq. <ref> is through Trotter product formulas, or Trotterization. The simplest of these is the first-order Trotter expansion
e^iHt≈ U_1(t) = ∏_j=1^L e^i H_j t.
The second-order Trotter expansion is defined by
e^iHt≈ U_2(t) = ∏_j=1^L e^i H_j t/2∏_j=L^1 e^i H_j t/2.
The benefit of these expansions is that each term e^i H_j t can now be implemented in a fairly direct manner on a quantum computer. However, these product formulas are approximate, unless all the terms in H commute with each other. In particular, the error of the p'th-order Trotter expansion is<cit.>
‖ U_p(t) - e^iHt‖ = 𝒪 (α t^p+1).
This means that the error in the first-order expansion is 𝒪(t^2), while the error in the second-order expansion is 𝒪(t^3). The value α depends on the commutator of the terms in the partitioning of H.
In order to manage this error, we split the time evolution operator into m steps, each of length t/m:
e^i ∑_l H_l t = ( e^i ∑_l H_l t/m)^m.
Each of the steps is then approximated by a Trotter formula, U_p(t/m), which will become exact in the large-m limit.
An important question is how many rotation gates of the form e^i H_j t/m are needed to perform time evolution up to time t with error ϵ (in the trace distance), which we denote N_gates(t,ϵ). This question has been studied in detail. For the first-order Trotter formula the number of required gates is<cit.> N_gates, 1(t,ϵ) = 𝒪(t^2/ϵ), while for the second-order formula<cit.> it is N_gates, 2(t,ϵ) = 𝒪(t^1.5/√(ϵ)). For a comparison of N_gates(t, ϵ) for different simulation methods, see Ref. <cit.>.
Having discussed Trotterization for general Hamiltonians, we now consider the specific case of H_2, taking the first-order Trotter expansion. Here we approximate U(t) by
U(t) ≈( e^i Z c_1 t/m e^i X c_2 t/m)^m,
so that each term is a Pauli rotation gate. In particular, the Pauli-Z rotation is defined R_Z(θ) = e^-i Z θ/2, and the Pauli-X rotation is R_X(θ) = e^-i X θ/2. Thus we have
U(t) ≈[ R_Z ( - 2 c_1 t/m) R_X ( - 2 c_2 t/m) ]^m.
Lastly, note from Fig. <ref> that the U operators must each be controlled on an ancilla qubit. The circuit diagram for the controlled-U operation is shown in Figure <ref>.
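The size of the first-order error is easy to illustrate numerically using the identity e^{iθ P} = cos(θ) I + i sin(θ) P, valid for any operator P that squares to the identity; the coefficients below are placeholders rather than the actual H_2 values.

import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def exp_i(P, theta):
    # e^{i theta P} = cos(theta) I + i sin(theta) P whenever P @ P = I
    return np.cos(theta) * I2 + 1j * np.sin(theta) * P

c1, c2, t = -0.4, 0.2, 1.0               # illustrative placeholder values
r = np.hypot(c1, c2)
U_exact = exp_i((c1 * Z + c2 * X) / r, r * t)

def trotter1(m):
    step = exp_i(Z, c1 * t / m) @ exp_i(X, c2 * t / m)
    return np.linalg.matrix_power(step, m)

for m in (1, 2, 4, 8, 16):
    print(m, np.linalg.norm(trotter1(m) - U_exact, 2))   # error decays ~ 1/m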
§.§ Qubitisation
Quantum phase estimation allows one to measure the eigenvalues of a unitary operator U. Above, this was used to determine energies by choosing U=e^iHt and implementing the exponential in a quantum circuit using the Trotter product formula.
Alternatively, a different unitary operator U can be chosen for phase estimation. In qubitisation <cit.>, U is chosen to be the walk operator
U = e^iarccos (H/λ), λ = |c_1| + |c_2|
with the subnormalisation λ being the 1-norm of the Hamiltonian coefficients[To be precise, the walk operator also has eigenvalues e^-iarccos(E_k/λ).]. Performing phase estimation on the walk operator therefore also allows one to determine the energies of H. Here we will specifically consider the H_2 Hamiltonian, and the constant shift c_0 will again be ignored throughout.
The walk operator can be constructed from circuits called PREPARE and SELECT using an additional ancilla qubit. The state of the ancilla qubit indexes the two terms c_1Z and c_2X of the Hamiltonian. (For larger Hamiltonians with more terms, more ancilla qubits would be required.)
The PREPARE operator acts on the ancilla qubit and prepares a state corresponding to the terms' coefficients c_1 and c_2:
PREPARE|0⟩ = √(|c_1|/λ)|0⟩ + √(|c_2|/λ)|1⟩= R_Y(α)|0⟩, α = 2arctan√(|c_2/c_1|).
The amplitudes are chosen such that the measurement probabilities are |c_1|/λ and |c_2|/λ (here, c_1 and c_2 have the same sign; otherwise slight adaptations are necessary below), with the subnormalisation λ required to ensure that the right-hand state is normalised. PREPARE is implemented with a single-qubit rotation gate R_Y(α).
The
SELECT operator acts on the system qubit and selects the operator for the term corresponding to the state of the ancilla qubit. It applies either Z or X on the system qubit:
SELECT|0⟩|ψ⟩ = |0⟩Z|ψ⟩, SELECT|1⟩|ψ⟩ = |1⟩X|ψ⟩.
Qubitisation theory shows that the circuit for the walk operator can be constructed from these operators together with a reflection around |0⟩⟨0| as follows:
qubit ancilla a;
qubit |ψ⟩ psi;
box U (a,psi);
text = (-);
box PREPARE a;
box SELECT (a,psi);
box PREPARE^† a;
Z a;
For usage in the QPE circuit, the walk operator must be controlled on the ancillas that allow the readout of the phase. Inserting the circuits for PREPARE and SELECT in our example, we have:
[baseline=([yshift=-1ex]current bounding box.center)]
qubit control c;
qubit ancilla a;
qubit |ψ⟩ psi;
box U (a,psi) | c;
text = (-);
box R_Y(α) a;
z psi | c a;
x psi | c, a;
box R_Y(α)^† a;
box Z a | c;
The advantage over Trotterisation is that this circuit does not suffer from a Trotterisation error. Instead, it is exact (up to the finite precision of the rotation by α). Thus, it avoids the lengthy circuit repetitions stemming from a small Δ t in the Trotter product formula, at the cost of an ancilla qubit. Many recent large-scale chemical quantum algorithms<cit.> are based on qubitisation due to the shorter circuits.
We will now explain that the walk operator U has the eigenvalues e^± i arccos(E_k/λ) by resorting to arguments that generalise to other and larger Hamiltonians.
Geometrically, a product of reflections about axes of relative angle β results in a rotation by angle 2β.
Both Z and PREPARE^†·SELECT·PREPARE are reflections, because their squares are the identity operator. Hence, their product is a rotation. In fact, this is true individually for each eigenvalue E_k of H on the two-dimensional subspace generated by |0⟩|E_k⟩. A general two-dimensional reflection matrix about an axis at inclination β has the
form
[ cos 2β -sin 2β; -sin 2β -cos 2β ].
From the top left matrix element in the relevant basis generated by |0⟩|E_k⟩, we can determine the angles of the reflection axes: cos 2β_1=⟨0|⟨E_k|Z⊗ I|0⟩|E_k⟩=+1 for Z and
cos(2β_2) =⟨0|⟨E_k|PREPARE^†·SELECT·PREPARE|0⟩|E_k⟩ = E_k/λ.
Hence, the walk operator in this basis is a rotation by angle
β_rot = 2(β_2-β_1) = arccos(E_k/λ)-arccos(+1) = arccos(E_k/λ).
Since a rotation by angle β_rot has eigenvalues e^± i β_rot, the walk operator has the eigenvalues e^± iarccos(E_k/λ) for each energy E_k of the Hamiltonian.
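This statement can be checked directly with a few lines of NumPy by assembling PREPARE, SELECT and the reflection as explicit 4 × 4 matrices and diagonalizing the resulting walk operator. The coefficients below are placeholders; since they happen to have opposite signs in this example, the signs are absorbed into SELECT, which is one of the slight adaptations mentioned above.

import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

c1, c2 = -0.4, 0.2                        # illustrative placeholder values
lam = abs(c1) + abs(c2)

alpha = 2 * np.arctan(np.sqrt(abs(c2 / c1)))
PREP = np.array([[np.cos(alpha / 2), -np.sin(alpha / 2)],
                 [np.sin(alpha / 2),  np.cos(alpha / 2)]])   # R_Y(alpha)

P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
SEL = np.kron(P0, np.sign(c1) * Z) + np.kron(P1, np.sign(c2) * X)
REFL = np.kron(Z, I2)                     # reflection about ancilla |0>

W = REFL @ np.kron(PREP.T, I2) @ SEL @ np.kron(PREP, I2)

E = np.linalg.eigvalsh(c1 * Z + c2 * X)
expected = np.sort(np.concatenate([np.arccos(E / lam), -np.arccos(E / lam)]))
assert np.allclose(np.sort(np.angle(np.linalg.eigvals(W))), expected)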
§.§ Quantum Error Correction
Quantum computers are affected by noise. For example, the latest IBM quantum computer<cit.> has a median error rate of ∼ 0.66% for CNOT gates (noise varies strongly for different gates and gate types, but this will suffice for the following back-of-the-envelope estimation). Roughly, this means that a quantum algorithm run on such a NISQ device could only use circuits with a depth of about 103 operations before the accumulated error probability exceeds 50%. However, about 103 gates is far too few for any useful quantum algorithm.
While the error rates of qubits are expected to decrease as technology progresses, they will always stay significant compared to error rates in classical computing. This is because qubits are inherently small quantum systems and even a tiny perturbation from the environment can have a disastrous effect on the qubit's state.
Luckily, quantum error correction provides a pathway to run useful longer circuits, despite the errors affecting the qubits.
To explain the concept of error correction, let us resort to the common experience of a noisy telephone line. When spelling out a name, the letters b and p can easily be confused. The error can be corrected by referring to each letter by a longer name according to the standard phonetic alphabet, like bravo for b, papa for p. This reduces the possibility of error, while increasing the length of the information transmitted.
Similarly, in quantum error correction, multiple physical qubits are used to represent one logical qubit, which has a reduced error rate compared to the physical qubit.
In quantum computing, once a qubit is measured, the wavefunction collapses and the state is destroyed. This makes it difficult to correct errors that occur in the midst of a computation. However, the theory of quantum error correction provides intricate methods to perform measurements that reveal information about errors (if any) that have occurred, without destroying the information that is encoded. This information is sufficient to correct the errors, provided there are not too many of them.
For quantum algorithms of any reasonable length, we will have to resort to quantum error correction<cit.>. While this results in an overhead in the number of physical qubits (many physical qubits encode one logical qubit) and run-time, at least it offers a chance to escape the limited fidelity of quantum computers.
§ SUMMARY
In this contribution to the memory of Prof. Csizmadia, we have provided a detailed discussion of quantum chemical calculations on the simplest diatomic molecule, H_2. Such a simple exercise is not only useful as an elucidation of the traditional methods of quantum chemistry, but it also serves as an introduction to the emerging field of quantum computing, as applied to chemistry. Thus, after providing a high-level overview of the theoretical basis of molecular calculations which led us to the Hartree–Fock model and to the notion of electron correlation, we turned to the evaluation of the necessary equations in the minimal basis. After a general discussion of the long-distance properties of the possible states in the minimal basis, the necessary integral calculations were outlined and the orbital exponent was determined by standard methods. Next, we have compared the spin-restricted Hartree–Fock and the exact solutions to discover that the exact solution removes the artifacts of the Hartree–Fock model and finds the proper covalent ground state. As a final contribution to our description of traditional methods, we have also discussed the effects of breaking spin-symmetry in the Hartree–Fock model. Next, we turned our attention to quantum computing and gave a brief discussion on second quantization in order to rewrite the Hamiltonian in terms of fermionic operators for the H_2 problem. We then used the Jordan-Wigner mapping to recast this Hamiltonian as a sum of Pauli-strings (products of Pauli spin-matrices) which can be implemented on a quantum computer. We have also made use of spatial symmetry to reduce the Hamiltonian to a form that acts on a single qubit. A discussion of quantum algorithms for chemistry followed and we decided to focus on variants of quantum phase estimation in the remainder of this paper. Trotterization and qubitization were introduced as two distinct algorithms for translating the single-qubit Hamiltonian into phase estimation circuits that can in principle be run on current quantum hardware. However, in the last section on quantum error correction we also discussed why such a calculation cannot be expected to yield accurate results without applying methods to reduce noise on quantum hardware. Quantum error correction is a very active field of research and its detailed discussion is outside the scope of the present paper, although another paper is in preparation outlining its application in the case of the hydrogen molecule<cit.>.
§ REFERENCES
[Barnett(1963)]barnett1963mechanized
Barnett, M. P. Mechanized Molecular Calculations—The POLYATOM System.
Rev. Mod. Phys. 1963, 35, 571–572
[Csizmadia et al.()Csizmadia, Harrison, Moskowitz, Seung,
Sutcliffe, and Barnett]POLYATOM
Csizmadia, I. G.; Harrison, M. C.; Moskowitz, J. W.; Seung, S.;
Sutcliffe, B. T.; Barnett, M. P. QCPE #47.1 POLYATOM – Program set for
nonempirical molecular calculations, Quantum Chemistry Exchange Program,
Indiana University, Bloomington, Indiana 47401.
[Csizmadia et al.(1966)Csizmadia, Harrison, Moskowitz, and
Sutcliffe]csizmadia1966non
Csizmadia, I. G.; Harrison, M. C.; Moskowitz, J. W.; Sutcliffe, B. T.
Non-empirical LCAO-MO-SCF-CI calculations on organic molecules with Gaussian
type functions. Theoretica chimica acta 1966, 6,
191–216
[Csizmadia(1991)]csizmadia1991some
Csizmadia, I. G. Some Fundamentals of Molecular Orbital Computations. In
Computational Advances in Organic Chemistry: Molecular Structure and
Reactivity; Springer Netherlands: Dordrecht, 1991; pp 1–165
[Mayer(2003)]mayer2003simple
Mayer, I. Simple theorems, proofs, and derivations in quantum chemistry;
Springer Science & Business Media: New York, 2003
[Szabo and Ostlund(2012)Szabo, and Ostlund]szabo2012modern
Szabo, A.; Ostlund, N. S. Modern quantum chemistry: introduction to
advanced electronic structure theory; Dover Publications: New York,
2012
[Helgaker et al.(2014)Helgaker, Jorgensen, and
Olsen]helgaker2014molecular
Helgaker, T.; Jorgensen, P.; Olsen, J. Molecular electronic-structure
theory; John Wiley & Sons: New Jersey, 2014
[Bartlett and Stanton(1994)Bartlett, and
Stanton]bartlett1994applications
Bartlett, R. J.; Stanton, J. F. Applications of Post-Hartree—Fock Methods: A
Tutorial. In Reviews in Computational Chemistry; VCH Publishers: New
York, 1994; Chapter 2, pp 65–169
[Knowles et al.(2000)Knowles, Schütz, and
Werner]knowles2000ab
Knowles, P. J.; Schütz, M.; Werner, H.-J. Ab Initio Methods for Electron
Correlation in Molecules. In Mod. Methods Algorithms Quantum Chem.;
Grotendorst, J., Ed.; NIC: Jülich, 2000; Vol. 1; pp 61–151
[Whitfield et al.(2011)Whitfield, Biamonte, and
Aspuru-Guzik]whitfield_simulation_2011
Whitfield, J. D.; Biamonte, J.; Aspuru-Guzik, A. Simulation of Electronic
Structure Hamiltonians Using Quantum Computers. Mol. Phys.
2011, 109, 735–750
[Jordan and Wigner(1928)Jordan, and Wigner]jordan1928ueber
Jordan, P.; Wigner, E. Über das Paulische Äquivalenzverbot.
Zeitschrift für Physik 1928, 47, 631–651
[Shee et al.(2022)Shee, Tsai, Hong, Cheng, and
Goan]shee_qubit-efficient_2022
Shee, Y.; Tsai, P.-K.; Hong, C.-L.; Cheng, H.-C.; Goan, H.-S. Qubit-efficient
encoding scheme for quantum simulations of electronic structure. Phys.
Rev. Research 2022, 4, 023154
[Bravyi and Kitaev(2002)Bravyi, and Kitaev]bravyi_2002
Bravyi, S. B.; Kitaev, A. Y. Fermionic Quantum Computation. Annals of
Physics 2002, 298, 210–226
[Tranter et al.(2018)Tranter, Love, Mintert, and
Coveney]tranter_comparison_2018
Tranter, A.; Love, P. J.; Mintert, F.; Coveney, P. V. A comparison of the
Bravyi-Kitaev and Jordan-Wigner transformations for the quantum simulation of
quantum chemistry. J. Chem. Theory Comput. 2018, 14,
5617–5630
[Bravyi et al.()Bravyi, Gambetta, Mezzacapo, and
Temme]bravyi_tapering_2017
Bravyi, S.; Gambetta, J. M.; Mezzacapo, A.; Temme, K. Tapering off qubits to
simulate fermionic Hamiltonians. <http://arxiv.org/abs/1701.08213>
[Setia et al.(2020)Setia, Chen, Rice, Mezzacapo, Pistoia, and
Whitfield]setia_reducing_2020
Setia, K.; Chen, R.; Rice, J. E.; Mezzacapo, A.; Pistoia, M.; Whitfield, J.
Reducing qubit requirements for quantum simulation using molecular point
group symmetries. J. Chem. Theory Comput. 2020, 16,
6091–6097
[Peruzzo et al.(2014)Peruzzo, McClean, Shadbolt, Yung, Zhou,
Love, Aspuru-Guzik, and O'Brien]vqe_2014
Peruzzo, A.; McClean, J.; Shadbolt, P.; Yung, M.-H.; Zhou, X.-Q.;
Love, P. J.; Aspuru-Guzik, A.; O'Brien, J. L. A variational eigenvalue solver
on a quantum processor. Nat. Commun. 2014, 5,
4213
[Blunt et al.(2023)Blunt, Camps, Crawford, Izsák, Leontica,
Mirani, Moylett, Scivier, Sünderhauf, Schopf, Taylor, and
Holzmann]bluntPerspectiveCurrentStateoftheArt2022
Blunt, N. S.; Camps, J.; Crawford, O.; Izsák, R.; Leontica, S.; Mirani, A.;
Moylett, A. E.; Scivier, S. A.; Sünderhauf, C.; Schopf, P.; Taylor, J. M.;
Holzmann, N. Perspective on the Current State-of-the-Art of Quantum
Computing for Drug Discovery Applications. J. Chem. Theory
Comput. 2023, 18, 7001–7023
[Kitaev(1995)]kitaev_quantum_1995
Kitaev, A. Y. Quantum measurements and the Abelian Stabilizer Problem.
arXiv:quant-ph/9511026 1995,
[Nielsen and Chuang(2010)Nielsen, and Chuang]nielsen_quantum_2010
Nielsen, M. A.; Chuang, I. L. Quantum computation and quantum
information, 10th ed.; Cambridge University Press, 2010
[Martínez-Martínez et al.(2022)Martínez-Martínez, Yen, and
Izmaylov]Luis2022
Martínez-Martínez, L. A.; Yen, T.-C.; Izmaylov, A. F. Assessment of various
Hamiltonian partitionings for the electronic structure problem on a quantum
computer using the Trotter approximation. arXiv:2210.10189 [quant-ph]
2022,
[Childs et al.(2021)Childs, Su, Tran, Wiebe, and
Zhu]Childs2021
Childs, A. M.; Su, Y.; Tran, M. C.; Wiebe, N.; Zhu, S. Theory of Trotter Error
with Commutator Scaling. Phys. Rev. X 2021, 11,
011020
[Lloyd(1996)]Lloyd1996
Lloyd, S. Universal Quantum Simulators. Science 1996,
273, 1073–1078
[Berry et al.(2006)Berry, Ahokas, Cleve, and
Sanders]Berry2006
Berry, D. W.; Ahokas, G.; Cleve, R.; Sanders, B. C. Efficient Quantum
Algorithms for Simulating Sparse Hamiltonians. Commun. Math. Phys.
2006, 270, 359
[Childs et al.(2018)Childs, Maslov, Nam, Ross, and
Su]Childs2018
Childs, A. M.; Maslov, D.; Nam, Y.; Ross, N. J.; Su, Y. Toward the first
quantum simulation with quantum speedup. PNAS 2018,
115, 9456
[Poulin et al.(2018)Poulin, Kitaev, Steiger, Hastings, and
Troyer]poulinQuantumAlgorithmSpectral2018
Poulin, D.; Kitaev, A.; Steiger, D. S.; Hastings, M. B.; Troyer, M. Quantum
Algorithm for Spectral Measurement with Lower Gate Count.
Phys. Rev. Lett. 2018, 121, 010501
[Berry et al.(2018)Berry, Kieferová, Scherer, Sanders, Low,
Wiebe, Gidney, and Babbush]berryImprovedTechniquesPreparing2018
Berry, D. W.; Kieferová, M.; Scherer, A.; Sanders, Y. R.; Low, G. H.;
Wiebe, N.; Gidney, C.; Babbush, R. Improved Techniques for Preparing
Eigenstates of Fermionic Hamiltonians. npj Quantum Inf
2018, 4, 22
[Ivanov et al.(2023)Ivanov, Sünderhauf, Holzmann, Ellaby,
Kerber, Jones, and Camps]ivanovQuantumComputationPeriodic2023a
Ivanov, A. V.; Sünderhauf, C.; Holzmann, N.; Ellaby, T.; Kerber, R. N.;
Jones, G.; Camps, J. Quantum Computation for Periodic Solids in Second
Quantization. Phys. Rev. Res. 2023, 5, 013200
[Lee et al.(2021)Lee, Berry, Gidney, Huggins, McClean, Wiebe,
and Babbush]leeEvenMoreEfficient2021
Lee, J.; Berry, D. W.; Gidney, C.; Huggins, W. J.; McClean, J. R.; Wiebe, N.;
Babbush, R. Even More Efficient Quantum Computations of Chemistry through
Tensor Hypercontraction. PRX Quantum 2021, 2,
030305
[IBM()]IBMQuantumHighest2021
IBM Quantum’s Highest Performant System, Yet.
<https://research.ibm.com/blog/eagle-quantum-error-mitigation>
[Blunt et al.()Blunt, Gehér, and Moylett]blunt2023_h2
Blunt, N. S.; Gehér, G. P.; Moylett, A. E. Compilation of a simple
chemistry application to quantum error correction primitives. In
preparation
§ FORMULAE FOR ENERGY CURVES AND INTEGRALS
The kinetic energy of the electrons T_μν is calculated as
T_μμ = T_νν = -1/2(2α/π)^3/2∫ e^-α(𝐫±𝐑)^2∇^2 e^-α(𝐫±𝐑)^2 d𝐫=3/2α,
T_μν = T_νμ = -1/2(2α/π)^3/2∫ e^-α(𝐫±𝐑)^2∇^2 e^-α(𝐫∓𝐑)^2 d𝐫=(3/2α - 2α^2R^2)e^-2α R^2.
The nuclear-electronic attraction term also depends on the position of the nuclei. Let A be the atom on which χ_μ is centered and B the center of χ_ν. Then, the total potential has the form
V_μν = V_μν(A) + V_μν(B),
where the unique contributions are
V_μμ(A) = V_νν(B) = -(2α/π)^3/2∫e^-2α(𝐫±𝐑)^2/|𝐫±𝐑| d𝐫=-2√(2α/π),
V_μμ(B) = V_νν(A) = -(2α/π)^3/2∫e^-2α(𝐫±𝐑)^2/|𝐫∓𝐑| d𝐫=-erf(2√(2α)R)/2R,
V_μν(A) = V_μν(B) = -(2α/π)^3/2 e^-2α R^2∫e^-2α𝐫^2/|𝐫±𝐑| d𝐫=-erf(√(2α)R)/Re^-2α R^2.
The two-body terms can be dealt with similarly. Since there are only two basis functions, there are only four unique integrals,
(μμ|μμ) = (νν|νν) = (2α/π)^3 ∬e^-2α(𝐫_1±𝐑)^2e^-2α(𝐫_2±𝐑)^2/|𝐫_1 - 𝐫_2| d𝐫_1d𝐫_2 = 2√(α/π),
(μμ|μν) = (νν|μν) = (2α/π)^3 e^-2α R^2∬e^-2α(𝐫_1±𝐑)^2e^-2α𝐫_2^2/|𝐫_1 - 𝐫_2| d𝐫_1d𝐫_2 = erf(√(α)R)/Re^-2α R^2,
(μμ|νν) = (νν|μμ) = (2α/π)^3 ∬e^-2α(𝐫_1±𝐑)^2e^-2α(𝐫_2∓𝐑)^2/|𝐫_1 - 𝐫_2| d𝐫_1d𝐫_2 = erf(2√(α)R)/2R,
(μν|μν) = (νμ|νμ) = (2α/π)^3 e^-4α R^2∬e^-2α𝐫_1^2e^-2α𝐫_2^2/|𝐫_1 - 𝐫_2| d𝐫_1d𝐫_2 = 2√(α/π)e^-4α R^2.
Assuming the special form of Eq. (61) for the charge-density matrix, Eqs. (52), (53), (54) of the main text take the following special form,
G_μμ(𝐏) = G_νν(𝐏) = 1/1+S_μν(1/2(μμ|μμ) + (μμ|μν) + 1/2(μμ|νν)),
G_μν(𝐏) = G_νμ(𝐏) = 1/1+S_μν((μμ|μν) + (μν|μν)),
where we have also neglected the exchange contributions in Eqs. (52), (53), (54) as they cancel some of the Coulomb terms in a system of two electrons in which the same spatial orbital is occupied by the two electrons. Putting all these results together, the Hartree-Fock energy for the hydrogen ground state can be calculated from the special case of Eq. (19),
E_0 = E_n + 1/1+S_μν(h_μμ+h_μν+F_μμ+F_μν),
leading to
E_0 = 1/D
+1/1+e^-α D^2/2[
3α - 4√(2α/π)-2 erf(2√(α)D)/D +
(3α - α^2D^2 -8 erf(√(α)D)/D)e^-α D^2/2.
.
+1/1+e^-α D^2/2(
√(α/π) +
4 erf(√(α)/2D)/De^-α D^2/2 +
erf(√(α)D)/2D +
2√(α/π)e^-α D^2)
].
Here a change of variables 2R=D was also introduced so that the expressions depend directly on the internuclear distance D.
A similar process yields the following simplified G-elements corresponding to 𝐏̅ in Eq. (63),
G_μμ(𝐏̅) = G_νν(𝐏̅) = 1/1-S_μν(1/2(μμ|μμ) - (μμ|μν) + 1/2(μμ|νν)),
G_μν(𝐏̅) = G_νμ(𝐏̅) = 1/1-S_μν((μμ|μν) - (μν|μν)),
and a new energy expression
E_1 = E_n + 1/1-S_μν(h_μμ-h_μν+F_μμ-F_μν),
and finally,
E_1 = 1/D
+1/1-e^-α D^2/2[
3α - 4√(2α/π)-2 erf(2√(α)D)/D -
(3α - α^2D^2 -8 erf(√(α)D)/D)e^-α D^2/2.
.
+1/1-e^-α D^2/2(
√(α/π) -
4 erf(√(α)/2D)/De^-α D^2/2 +
erf(√(α)D)/2D +
2√(α/π)e^-α D^2)
].
For the singly-excited singlet state, the AO basis expression has the form
E_S = ⟨Θ_S|Ĥ|Θ_S⟩ = E_n + (P_μμ+P̅_μμ)(μ|ĥ|μ) + (P_μν+P̅_μν)(μ|ĥ|ν)
+ P_μμP̅_μμ(μμ|μμ) + P_μνP̅_μν(μν|μν),
yielding
E_S = 1/D
+1/1-e^-α D^2/2(
3α - 4√(2α/π)-2 erf(2√(α)D)/D)
-e^-α D^2/2/1-e^-α D^2/2(
3α - α^2D^2 -8 erf(√(α)D)/D)
+2√(α/π).
Similarly for the triplets
E_T = ⟨Θ_T|Ĥ|Θ_T⟩ = E_n + (P_μμ+P̅_μμ)(μ|ĥ|μ) + (P_μν+P̅_μν)(μ|ĥ|ν)
+ P_μμP̅_νν(μμ|νν) + P_μνP̅_μν(μν|μν),
so that
E_T = 1/D
+1/1-e^-α D^2/2(
3α - 4√(2α/π)-2 erf(2√(α)D)/D)
-e^-α D^2/2/1-e^-α D^2/2(
3α - α^2D^2 -8 erf(√(α)D)/D)
+1/1-e^-α D^2/2(
erf(√(α)D)/D-2√(α/π)e^-α D^2/2).
Finally, the off-diagonal element in the FCI-matrix in Eq. (66) is simply given as
g = ⟨Φ_1|Ĥ|Φ_0⟩ = (ia|ia) = 1/1-e^-α D^2(
√(α/π) - erf(√(α)D)/D).
§ QUBIT MAPPINGS
Any 2 × 2 matrix can be written as a linear combination of the Pauli spin-matrices X, Y, and Z and the identity matrix I given by
X = [ 0 1; 1 0 ] Y = [ 0 -i; i 0 ]
Z = [ 1 0; 0 -1 ] I = [ 1 0; 0 1 ]
For a general chemical Hamiltonian with real coefficients, the explicit form of the qubit Hamiltonian in terms of MO integrals, after applying the Jordan-Wigner mapping, is
ℋ = E_n + 1/2[∑_P(P|ĥ|P) + 1/4∑_PQ(PP|QQ)]
-1/2∑_P [(P|ĥ|P) + 1/2∑_Q(PP|QQ)] Z_P + 1/4∑_Q<P(PP|QQ) Z_P Z_Q
+1/2∑_Q<P[(P|ĥ|Q) + 1/4∑_R(PQ|RR)] (X_P.X_Q + Y_P.Y_Q)
-1/4∑_P<Q<R(PQ|RR) Z_R(X_Q.X_P + Y_Q.Y_P)
-1/4∑_P<R<Q(PQ|RR)(X_Q.R.X_P + Y_Q.R.Y_P)
-1/4∑_R<P<Q(PQ|RR)(X_Q.X_P + Y_Q.Y_P)Z_R
-1/4∑_S<R<Q<P [(PS|QR)-(PQ|SR)] (X_P.X_Q X_R.X_S + Y_P.Y_Q Y_R.Y_S)
-1/4∑_S<R<Q<P [(PR|QS)-(PQ|RS)] (X_P.X_Q Y_R.Y_S + Y_P.Y_Q X_R.X_S)
-1/4∑_S<R<Q<P [(PS|QR)-(PR|QS)] (X_P.Y_Q Y_R.X_S + Y_P.X_Q X_R.Y_S).
Here the notation X_Q.X_P = X_Q Z_Q-1… Z_P+1 X_P indicates a product of Z_S matrices such that P<S<Q, while X_Q.R.X_P denotes a similar string, except that S≠ R.
§ THE 1-QUBIT HYDROGEN HAMILTONIAN
The transformed Pauli strings in the Jordan-Wigner Hamiltonian, after performing 𝒫→𝒫' = U^†𝒫 U as defined in the main text, are as follows:
[ Z_0; Z_1; Z_2; Z_3; Z_0Z_1; Z_0Z_2; Z_0Z_3; Z_1Z_2; Z_1Z_3; Z_2Z_3; Y_0Y_1X_2X_3; X_0Y_1Y_2X_3; Y_0X_1X_2Y_3; X_0X_1Y_2Y_3; ]→[ Z_0; Z_0X_1; Z_0X_2; Z_0X_3; X_1; X_2; X_3; X_1X_2; X_1X_3; X_2X_3; X_0X_2X_3; X_0X_3; X_0X_1X_2; X_0X_1; ]
arXiv:2307.03976v2 [cond-mat.stat-mech]
[email protected]
Institute for Theoretical Physics I,
Ruhr University Bochum, 44801 Bochum, Germany
[email protected]
ELI Beamlines Facility,
ERIC, 25241 Dolní Br̆ežany, Czech Republic
[email protected]
Racah Institute of Physics, Hebrew
University of Jerusalem, Jerusalem 91904, Israel
Using the optimal fluctuation method, we evaluate the short-time probability
distribution P (H̅, L, t=T) of the spatially averaged height H̅ = (1/L) ∫_0^L h (x, t=T) dx
of a one-dimensional interface h (x, t) governed by the Kardar–Parisi–Zhang equation
∂_th=ν∂_x^2h+λ/2(∂_xh)^2+√(D)ξ(x,t)
on a ring of length L. The process starts from a flat interface, h(x,t=0)=0.
Both at λH̅<0, and at sufficiently small positive λH̅ the optimal
(that is, the least-action) path h(x,t) of the interface, conditioned on H̅, is uniform
in space, and the distribution P (H̅, L, T) is Gaussian. However, at sufficiently
large λH̅>0 the spatially uniform solution becomes sub-optimal and gives way
to non-uniform optimal paths. We study them, and the resulting non-Gaussian distribution P (H̅, L, T),
analytically and numerically. The loss of optimality of the uniform solution occurs via a dynamical
phase transition of either first, or second order, depending on the rescaled system size
ℓ = L/√(ν T), at a critical value H̅=H̅_c(ℓ). At large but
finite ℓ the transition is of first order. Remarkably, it becomes an “accidental" second-order
transition in the limit of ℓ→∞, where a large-deviation
behavior -ln P (H̅, L, T) ≃ (L/T) f(H̅)
(in the units λ=ν=D=1) is observed. At small ℓ the transition is of second order,
while at ℓ =O(1) transitions of both types occur.
Short-time large deviations of the spatially averaged height of a KPZ interface on a ring
Timo Schorlepp, Pavel Sasorov, and Baruch Meerson
August 12, 2023
§ INTRODUCTION
Atypically large fluctuations in macroscopic systems out of
equilibrium continue to attract great interest from statistical physicists.
Although a universal description of such fluctuations is unavailable, there has been
much progress in studies of particular systems. One of the main theoretical tools
in this area is known under different names in different areas of physics:
the optimal fluctuation method
(OFM), the instanton method, the weak-noise theory, the
macroscopic fluctuation theory, etc. This method relies
on a saddle-point evaluation of the pertinent path integral
of the stochastic process, conditioned on the
large deviation. The method is based on a model-specific
small parameter (often called “weak noise"), and it brings about a
conditional variational problem. The solution of this problem – a
deterministic, and in general time-dependent, field – describes the “optimal path" of the system:
the most probable system's history which dominates the contribution of different
paths to the statistics in question.
Among multiple applications of the OFM, we focus on one set of problems which has attracted attention in the last
two decades <cit.>: short-time
large deviations of a stochastically growing interface as described by the one-dimensional Kardar–Parisi–Zhang (KPZ) equation <cit.>
∂_th=ν∂_x^2h+λ/2(∂_xh)^2+√(D)ξ(x,t) ,
where ξ(x,t) is a white noise with
⟨ξ(x,t)⟩=0 , ⟨ξ(x,t)ξ(x^',
t^')⟩=δ(x-x^')δ(t-t^') .
Here we employ the OFM to study a KPZ interface on a ring of length L, i.e. with periodic boundary
conditions at x=0 and x=L. The interface is initially flat,
h(x,t=0)=0 ,
and we are interested in evaluating
the probability density function (PDF) P(H̅, L, T)
of the spatially averaged surface height
H̅ = 1/L∫_0^L h(x,T) dx
at a final time t=T >0, which is much shorter than the characteristic nonlinear
time of Eq. (<ref>), τ_NL= ν^5/D^2 λ^4.
The short-time limit allows one to employ the OFM in a controlled
manner <cit.>, as we will
reiterate shortly. The problem, defined by Eqs. (<ref>)-(<ref>), continues the
line of studies of Refs. <cit.> of finite system-size effects (which turn out to be quite dramatic)
in large deviations of height of the KPZ interface.
Upon rescaling t → tT,
x → (ν T)^1/2 x, h →ν h / λ and ξ→(ν T^3)^-1/4ξ, Eq. (<ref>) becomes
∂_th= ∂_x^2h+1/2(∂_xh)^2
+√(ε)ξ(x,t) ,
with rescaled noise strength ε = D λ^2 T^1/2
/ ν^5/2 on a ring of rescaled length ℓ = L / √(ν T).
The PDF of the rescaled average height H̅ at final time t = 1
can then be written as a path integral
P(H̅,ℓ,ε) = ∫_h(·, 0) = 0 Dh δ(
1/ℓ∫_0^ℓ h(x,1) dx - H̅)
J[h] exp{-1/ε S[h] }
with action functional
S[h] = ∫_0^1 dt ∫_0^ℓ dx L(h, ∂_t h ) = 1/2∫_0^1 dt
∫_0^ℓ dx [∂_th - ∂_x^2h-1/2(∂_xh)^2 ]^2 ,
where L(h,∂_t h) is the Lagrangian.
The OFM assumes a weak-noise limit ε→ 0, when the path integral (<ref>) can be evaluated
by the saddle-point method, while the Jacobian J[h] does not contribute in the leading-order.
In this limit, the PDF P(H̅,ℓ,ε) is dominated by
the optimal path of the system, that is by the most likely history h(x,t) conditional on a given average height at t=1:
-ln P(H̅, ℓ, ε) ε→ 0≃ε^-1min_h(·, 0)= 0 ,
∫_0^ℓ
h(x,1)dx = ℓH̅ S[h] = ε^-1 S(H̅, ℓ) .
Hence, the PDF can be determined, up to pre-exponential factors, from the
solution of this constrained minimization problem. Here
we will solve this minimization problem numerically, for different H̅
and ℓ, and analytically in the asymptotic limits of large and small
ℓ[Note that whenever there exists a spatially
non-uniform optimal path, there are actually infinitely many possible
paths due to the translational symmetry of the problem with respect to x. Accounting for
this submanifold of degenerate solutions and for the associated zero
mode is, however, only relevant for pre-exponential factors <cit.> which
we do not address here.].
It will be convenient to present our results by setting
ν=λ=D=1[In most of the paper we assume, without
loss of generality, that λ>0. Indeed, changing λ to -λ is equivalent to changing h to -h.].
Then the weak-noise scaling (<ref>) reads
-ln P(H̅, ℓ, ε→ 0) ≃
T^-1/2 S(H̅, ℓ) .
Note that the limit ε→ 0 at fixed ℓ corresponds to
the short-time limit T → 0 and small-length limit L → 0
with L / √(T) = const.
When instead T goes to zero at L=const, one has
both ε→ 0 and ℓ→∞. The latter limit turns out to be most interesting, and it is analyzed here
in detail. It is natural to expect that for
any H̅, when ℓ→∞, the action S(H̅, ℓ) should exhibit
a large-deviation form
S(H̅,ℓ) ℓ→∞≃ℓ f(H̅) ,
leading to
-ln P(H̅, L, T→ 0) ≃
(L/T) f(H̅) ,
and this is what we indeed observe here. Less expectedly, we also find that the rate
function f(H̅) exhibits, at a critical value H̅=H̅_c(ℓ),
a dynamical phase transition (DPT) which is accidentally second-order.
By that we mean that
the rate function at the critical point becomes continuously differentiable
only in the limit of ℓ→∞. At arbitrary large but finite ℓ the
large-deviation form (<ref>) breaks down. We show, however, that the action S(H̅,ℓ) still exhibits
a DPT at a critical point H̅=H̅_c, but this DPT is of first order and the optimal
path at the critical point changes discontinuously via a subcritical bifurcation.
For small ℓ a truly second-order DPT is observed as predicted earlier <cit.>.
At intermediate values of ℓ = O(1) DPTs of both types occur. In the latter regime analytical
results are unavailable as of yet, and we present some numerical results. All the DPTs that we
found in this system occur because of a loss of optimality of a path that is uniform in space.
The loss of optimality takes the form either of a subcritical bifurcation (for the first-order DPTs),
or a supercritical bifurcation (for the true second-order DPTs).
The remainder of this paper is structured as follows. In Sec. <ref> we formulate
the OFM equations and boundary conditions, present a simple uniform solution of these equations,
previously studied in Refs. <cit.>, and
argue that it describes the optimal path of the system at all λ H<0. Supercritical
bifurcations of the uniform solution have been recently studied in Ref. <cit.>. Still,
for convenience of further discussion, we briefly rederive them in Sec. <ref>.
Section <ref> includes our results of numerical minimization of the action
functional (<ref>) in different regions of the (H̅,ℓ) phase diagram.
These numerical results provided valuable insights into the nature of optimal paths of the
interface which led us to develop asymptotic analytical solutions of the OFM problem for
large ℓ that we present in Sec. <ref>. The asymptotic solution for small ℓ
is briefly discussed in Sec. <ref>. We summarize and discuss our main results
in Sec. <ref>. A description of numerical algorithms that we use here is relegated to the Appendix.
§ OFM EQUATIONS AND UNIFORM SOLUTION
At a technical level, the main objective of this work is to determine the minimum action S(H̅, ℓ)
as a function of the rescaled average height H̅ and rescaled
system size ℓ. In this section, we present the necessary
conditions for minimizers of the action functional (<ref>) – the OFM equations and the boundary conditions.
We argue then that a simple spatially uniform solution
of the ensuing OFM problem is always optimal for H̅ < 0.
The first-order necessary conditions for a minimizer of the action
functional (<ref>) can be represented as a pair of Hamilton's equations
for the optimal history of the interface h(x,t) and the
conjugate momentum density p = ∂ L / ∂(∂_t h). These equations
were derived in many papers <cit.>, and they take the form
∂_th = ∂_x^2h+1/2(∂_xh)^2+p
,
∂_tp = -∂_x^2p+∂_x(p∂_xh)
.
The “momentum density" p(x,t) describes the (rescaled) optimal realization of
the external noise ξ(x,t) that drives the interface conditional on a specified H̅.
In the present case Eq. (<ref>) and (<ref>) should be complemented by the periodic boundary conditions
at x=0 and x = ℓ, by the initial condition
h(x,0)=0 ,
and by the final-time condition
p(x,1)=Λ= const ,
which follows from the demand that a boundary term at t=1, originating from an
integration by parts, should vanish for any h(x,1).
The parameter Λ is a Lagrange multiplier which needs
to be chosen so as to impose the rescaled final-time condition
1/ℓ∫_0^ℓ h(x,1) dx = H̅ .
Once the optimal path is determined, the action S(H̅,ℓ)
can be determined from the equation
S = 1/2∫_0^1 dt∫_0^ℓ dx p^2(x,t) ,
which follows from Eqs. (<ref>) and (<ref>).
By differentiating the action S(H̅, ℓ) = S[h(x,t;H̅,ℓ)] of
the optimal profile h = h(x,t;H̅,ℓ) with respect to H̅ using
the chain rule, one can show that Λ is related to the action via
Λ=1/ℓ ∂ S(H̅, ℓ)/∂H̅ (that is, dS=ℓΛ dH̅) .
If the action S(H̅, ℓ) is a strictly convex function of H̅,
there is a bijective relation between Λ and H̅, and it
suffices, for the purpose of calculating the action, to only
determine H̅(Λ) and use Eq. (<ref>). This shortcut is very convenient and
holds for many large-deviation calculations <cit.>.
There is an obvious exact solution of the OFM equations and the boundary conditions:
h(x,t)=H̅ t , p(x,t)=Λ , Λ = H̅ ,
S=ℓ/2H̅^2 ,
which describes a uniformly growing flat interface.
We will often call this branch of solutions branch 1. By virtue of Eq. (<ref>),
whenever the uniform solution (<ref>) is the optimal one, we have
a Gaussian PDF for H̅ up to pre-exponential factors. Of most interest, however,
are the regions of parameters H̅
and ℓ, for which the uniform solution is sub-optimal. As we will see,
the loss of optimality can occur via either a supercritical, or a subcritical bifurcation.
First of all, we can argue that, for negative H̅, the uniform
solution (<ref>) is always optimal. Using the evident conservation law
1/ℓ∫_0^ℓ p(x,t)
d x = Λ = const
of Eq. (<ref>), we can rewrite the action (<ref>) for any solution
of the OFM equations as
S = 1/2∫_0^1 dt∫_0^ℓ
dx p^2(x,t)=ℓΛ^2/2+1/2∫_0^1 dt∫_0^ℓ dx
[p(x,t)-Λ]^2 ,
Also, integrating both sides of Eq. (<ref>) with respect to t from 0 to 1 and
with respect to x over the ring, and using the periodic boundary conditions
and the conservation law (<ref>), we obtain
H̅=1/ℓ∫_0^ℓ h(x,1) dx
=Λ+1/2ℓ∫_0^1 dt∫_0^ℓ
dx [∂_xh(x,t)]^2 .
One can easily see from Eqs. (<ref>) and (<ref>) that, at negative Λ
(or H̅) any inhomogeneity in the
momentum density p both increases
the action S, and decreases the average height |H̅| in comparison to their
values for the uniform solution. Therefore, any nonuniform solution here is sub-optimal.
In contrast to this, for Λ >0 (or
H̅>0), an inhomogeneity increases both S,
and H̅ in comparison to the uniform solution. A competition
between these two opposite effects may give rise to non-uniform solutions with lesser action than
the uniform one, as we will indeed see in the following.
§ BIFURCATIONS OF THE UNIFORM SOLUTION
In this brief section we carry out a linear stability analysis of the
uniform solution (<ref>). We find that, for sufficiently
large positive H̅, the uniform solution can continuously
and supercritically bifurcate to a non-uniform solution. The first
spatial Fourier mode to become unstable as H̅ increases depends
on the rescaled system size ℓ in a nontrivial way and is determined
from Eq. (<ref>). This equation has also been obtained in Ref. <cit.>
by calculating the leading-order prefactor correction to the asymptotic
scaling in Eq. (<ref>) through Gaussian integration of
fluctuations around the uniform solution (<ref>).
At first order of a perturbation theory around the uniform
solution (<ref>) we have
p(x,t)=H̅+b(t)cos qx , h(x,t)=H̅ t + a(t)cos qx
, |a|, |b|≪ 1 .
Here the wave number q spans the set 2π m/ℓ for
m=1,2,…. Substituting the expressions (<ref>)
into Eqs. (<ref>) and (<ref>) and neglecting higher-order terms, we obtain
the following system
of linear ordinary differential equations:
ȧ=-q^2a+b , ḃ=q^2b-q^2H̅ a .
It has solutions proportional to e^iω t, where
ω=± q √(H̅-q^2) .
Using the boundary conditions (<ref>) and (<ref>), we obtain the
following relationship between q and H̅ = H̅_c(q)
at the bifurcation points:
tan(q√(H̅-q^2))=-√(H̅-q^2)/q .
Note that the trivial solution H̅=q^2 of Eq. (<ref>) does
not correspond to a valid non-uniform solution due to the boundary conditions
at t=0 and 1. The resulting dependence H̅(q) can be expressed in a
parametric form
H̅ = -2u/sin(2u) , q=√(-u cot u) ,
(2n-1)π/2<u<nπ; n=1,2,3,… ,
where, for given ℓ, only values of q = 2 π m / ℓ
with m = 1, 2, 3, … are allowed.
The first three branches of Eq. (<ref>) are shown in
Fig. <ref>. As one can see, the first instability appears for n = 1,
and a necessary condition for the instability, for any ℓ, is H̅_c≥ 4.603.
When ℓ→∞, the first instability of the
uniform solution will occur, at H̅_c≃ 4.603, for a very high mode
m ≃ 1.343 ℓ/ 2 π.
For finite ℓ, one can find the bifurcation point on the n=1 branch of Eq. (<ref>)
numerically.
Finally, for ℓ→ 0, the first instability occurs for the m = 1 mode at
H̅≃ (2 π / ℓ)^2 in
agreement with Ref. <cit.>.
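The thresholds quoted above are easy to reproduce by scanning the n = 1 branch of the parametric representation. A minimal sketch (Python, NumPy assumed):

import numpy as np

# Scan the n = 1 branch: Hbar_c(u) = -2u/sin(2u), q(u) = sqrt(-u*cot(u)), pi/2 < u < pi.
u = np.linspace(np.pi / 2 + 1e-6, np.pi - 1e-6, 200000)
Hbar_c = -2.0 * u / np.sin(2.0 * u)
q = np.sqrt(-u / np.tan(u))
i = np.argmin(Hbar_c)
print(Hbar_c[i], q[i])   # ~4.603 at q ~ 1.343, i.e. mode m ~ 1.343*ell/(2*pi) at large ell
# For a finite ring only q = 2*pi*m/ell is allowed, so the actual bifurcation point
# is obtained by restricting the scan to this discrete set of wave numbers.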
§ NUMERICAL RESULTS
Now we proceed with a numerical solution of the
minimization problem in Eq. (<ref>) for different H̅ and ℓ. The numerical methods that
we used are described in the Appendix. In addition to confirming
the supercritical bifurcations of the uniform solution that we discussed in Sec. <ref>,
we will uncover important subcritical bifurcations
and get insight into non-perturbative optimal paths which
will be studied analytically in Secs. <ref> and <ref>.
We start with the simpler case of small ℓ.
Choosing a moderately small value ℓ = π / 8 and numerically
minimizing the action (<ref>) for different Λ, we
obtain
the rate function S(H̅, ℓ) and Lagrange
multiplier Λ(H̅) shown in Fig. <ref>.
The spatially uniform solution (<ref>), corresponding
to branch 1 of the action, is seen to become unstable
close to H̅≃ (2 π / ℓ)^2 as stated in Sec. <ref>,
and there is a
continuous (second-order) DPT to a spatially
nonuniform solution. Indeed, the (m = 1)-spatial Fourier mode of the
profile becomes unstable at this point. One such spatially nonuniform solution close to the transition point
is shown in Fig. <ref>. As H̅ increases, the optimal solution
turns, for most of the time 0<t<1, into a stationary “cnoidal" solution for p which
drives an h-profile which is non-uniform in x, but is uniformly translating in the vertical direction.
The same solution appears in the problem of the one-point height distribution for the KPZ
equation on a ring <cit.>, and we use it in
Sec. <ref> to calculate the theoretical curves in
Figs. <ref> and <ref>,
which match the numerical results quite well.
Next, we turn to the more complicated and interesting case of large
ℓ.
For ℓ = 16 π the minimization of the augmented action (<ref>)
leads to the results for the rate function S(H̅) and Lagrange
multiplier Λ(H̅) shown
in Fig. <ref>. In addition to branch 1 we observe two other branches of solutions.
Branch 2 is observed to the right of a narrow
transition region close to H̅≃ 4. On this branch the action S(H̅) is
approximately a linear function, while Λ is almost constant. Further, for much larger H̅,
there is a smoothed-out second-order transition from branch 2 to a
third branch 3 with a different scaling behavior.
The optimal paths for branches 2 and 3 are shown in
Fig. <ref>. They consist of strongly localized large-amplitude stationary
solitons of p that drive an outgoing almost triangular structure of h (or two antishocks
of V(x,t) = -∂_x h(x,t), see Sec. <ref>). The solution, corresponding to branch 2,
clearly emerges via a subcritical, rather than supercritical bifurcation. Strikingly, the soliton
has a well-defined life time which is very close to 1/2. The
difference between branches 2 and 3 is that, for branch 3, the two edges
of the triangular structure of h(x,t) collide before the final time t=1 is reached,
while for branch 2 they do not.
These crucial findings will guide our stationary-soliton-based asymptotic theory for large ℓ that we develop
in Sec. <ref>. There we give an analytical description of the optimal paths
for branches 2 and 3, which are the only relevant ones for large
ℓ. There we establish a first-order transition at H̅≃ 4 for large but finite ℓ
and show that it becomes “accidentally" second order in the limit of ℓ→∞.
We also find that the smoothed-out second-order
transition from branch 2 to branch 3 occurs at H̅ = ℓ^2 / 6. The resulting
analytical predictions, indicated by the lines in
Figs. <ref> and <ref>, are in good agreement with numerics
at large, but finite ℓ.
At moderate ℓ the transition region where the spatially uniform
solution (<ref>) of branch 1 becomes sub-optimal is quite
complex, as one can appreciate from
Fig. <ref>.
We see that, in general, there are both first and second order
transitions in this region: The uniform solution becomes
linearly unstable for some m > 1, leading to second-order
transitions, but there is also a competition with the (subcritical) one-soliton
solution. The subcritical scenario clearly wins for sufficiently large ℓ. Indeed, for ℓ = 32 π
we observe only a first-order
transition from the spatially uniform to the soliton solution,
while the linear instability becomes irrelevant.
Note that, for branch 2, in addition to stationary single-soliton
solutions of the OFM equation, discussed so far, there are also stationary multi-soliton solutions
consisting of two or more (almost) non-interacting strongly localized stationary solitons
of p and corresponding expanding triangles of h. One such solution, which we observed numerically, is
shown in the top row of
Fig. <ref>. We found, however,
that such solutions always have a larger action than
the one-soliton solution for the same ℓ
and H̅. Therefore, the one-soliton solution indeed seems to provide
the optimal solution. In the limit ℓ→∞,
these multi-soliton solutions – a soliton gas – would contribute to the
pre-exponential factor for 𝒫(H̅, ℓ), but
pre-exponential factors are beyond the scope of this paper. Additionally, in the
bottom row in Fig. <ref>,
we show an optimal path for ℓ = 16 π and close
to H̅ = 4, which emerges through linear instability of
the (m = 11)-mode. Later on, however, it is overtaken by the
one-soliton solution.
§ LARGE-ℓ ASYMPTOTICS: RISE AND FALL OF THE SOLITON
§.§ General description of the solution
Guided by our numerical solutions and by the previous works on the one-point KPZ height
statistics on the line <cit.> and on a ring <cit.>, here we find approximate
asymptotic solutions of Eqs. (<ref>)-(<ref>) which give rise to two nontrivial
branches (we call them branches 2 and 3) of the large-deviation function S(H̅) for large ℓ.
As we found, for both branches the maximum one-point height of the interface H=max h(x,t=1) turns
out to be very large: H≫ 1. Therefore, in addition to the strong inequality ℓ≫ 1,
we can also use the strong inequality H≫ 1. This allows us to construct “inviscid" asymptotic
solutions in different regions of space, separated by discontinuities of proper types. Like their
numerical counterparts, the analytical solutions exhibit two distinct stages in time, with an abrupt
transition between them at some branch-dependent intermediate time 0<t=τ<1 which we will determine.
For 0<t<τ the solution has the form of a strongly localized stationary soliton of p(x,t)
and “antishock" of V(x,t)= -∂_x h(x,t) which were previously identified in the problem
of one-point height statistics on the line <cit.> and on a ring <cit.>.
The characteristic width, O(1/√(H)), of the soliton-antishock structure is much less than
unity. Outside of the soliton-antishock one has p(x,t) ≃ 0. As a result, Eq. (<ref>)
is obeyed trivially and, at distances ≳ 1 from the soliton, h(x,t) follows the deterministic KPZ dynamics
∂_th=∂_x^2h+1/2(∂_xh)^2 ,
which is equivalent to the Burgers equation
∂_tV+ V ∂_x V =∂_x^2V
for the field V(x,t) =-∂_x h(x,t). In addition, the diffusion term in Eq. (<ref>)
can also be neglected at large distances <cit.>, and one arrives at the inviscid Hopf equation
∂_tV+V∂_x V=0 .
The stationary soliton-antishock structure drives an almost triangular configuration of h(x,t)
which is expanding outwards <cit.>. The height of the triangle grows linearly with time, while
its two edges propagate with a constant speed as “ordinary" shocks of V(x,t) obeying Eq. (<ref>)
or, when treated as discontinuities, obeying Eq. (<ref>) <cit.>. The positions of these shocks
at t=1 determine the boundaries of the “impact region" of the soliton-antishock structure. When the
size of the impact region, which scales as O(√(H)) <cit.>, is shorter than the rescaled system
size ℓ (this happens when H̅ is not too large, see below), there is also an external region
where the uniform solution p(x,t)=Λ =const and V(x,t)=0 holds, see Eq. (<ref>).
The external uniform solution holds for all times 0<t<1, and it contributes to the large-deviation
function of H̅. In the inviscid limit the regions of zero and nonzero p are divided by a
stationary discontinuity. This regime corresponds to branch 2.
Branch 3 appears when, due to the periodicity of the system, the ordinary shocks of V(x,t)
collide with each other before the final time t=1 is reached. In this case the impact region
of the soliton-antishock structure extends to the whole system, and a region of the uniform solution does not appear.
For the solution to obey the boundary condition (<ref>), the p-soliton must turn into a
constant p= Λ at t=1. Remarkably, as we have seen in our numerical results for large ℓ,
the soliton rapidly decays in the vicinity of a well-defined time t=τ<1. For both branches 2 and 3,
the subsequent dynamics, at τ<t<1,
gives only a subleading contribution (which we neglect, alongside with other subleading contributions)
to the maximum one-point height H and to the action. This stage is important, however, for determining H̅.
We can qualitatively understand this nontrivial temporal structure of the solutions from the viewpoint of action
minimization: First, for 0 ≤ t ≤τ, the interface is efficiently driven upward by a stationary
p-soliton, in the same manner as for the one-point height PDF of the KPZ equation on the line <cit.>
and on a ring <cit.>. Then, quickly suppressing the soliton at an intermediate time 0<τ < 1 and
evolving the interface according to the almost free KPZ dynamics for τ < t ≤ 1 increases considerably
the average height H̅ for a negligible additional cost in terms of action. The optimal value of τ
is the one that minimizes the action for a given H̅.
As an overview, we present here the action S(H̅, ℓ) at leading order for large ℓ,
as will be derived in subsections <ref> and <ref>:
S(H̅, ℓ) ≃{[ H̅^2ℓ/2 , -∞ < H̅≤ 4 , (branch 1); (4 H̅ - 8) ℓ , 4 < H̅≤ℓ^2/6 , (branch 2); H̅^3/2Φ(H̅ / ℓ^2) , ℓ^2/6 < H̅ < ∞ , (branch 3) ].
where the function Φ(…) is defined in Eq. (<ref>) and
obeys Φ(z →∞) → 8 √(2) /3. The first line in Eq. (<ref>)
comes from the uniform solution (<ref>). The first two lines manifestly reveal the large-deviation
scaling (<ref>), while the third line does not.
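The piecewise structure above, and the matching of branches 2 and 3 at H̅=ℓ^2/6, can be checked with a few lines of code (Python, NumPy assumed; leading order only):

import numpy as np

def Phi(z):
    r = np.sqrt(18.0 * z + 1.0)
    return 2.0 * np.sqrt(2.0) * np.sqrt(9.0 * z + 1.0 + r) * (36.0 * z + 1.0 + r) / (81.0 * z ** 1.5)

def S_leading(Hbar, ell):
    if Hbar <= 4.0:                              # branch 1: uniform solution
        return 0.5 * ell * Hbar ** 2
    if Hbar <= ell ** 2 / 6.0:                   # branch 2: soliton plus uniform background
        return (4.0 * Hbar - 8.0) * ell
    return Hbar ** 1.5 * Phi(Hbar / ell ** 2)    # branch 3: system-wide impact region

assert np.isclose(Phi(1.0 / 6.0), 4.0 * np.sqrt(6.0))
assert np.isclose(Phi(1e8), 8.0 * np.sqrt(2.0) / 3.0, rtol=1e-3)
ell, Hb = 16.0 * np.pi, (16.0 * np.pi) ** 2 / 6.0
print((4.0 * Hb - 8.0) * ell, Hb ** 1.5 * Phi(1.0 / 6.0))   # both ~ 2*ell^3/3, differ at O(ell)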
Now we proceed to a more detailed description of the solutions, and we will start with branch 2.
§.§ Branch 2
Due to a translational symmetry of the problem (<ref>)-(<ref>), we can place the soliton-antishock
structure at x=0 (see Fig. <ref>) so that, to the leading order, H≃ h(0,τ).
As explained above, at H≫ 1, the p-soliton can be considered as a point-like object. We will only need
the value of its “mass", ∫ dx p(x,t) which, by virtue of Eq. (<ref>), is conserved. Using
the explicit expression for the soliton, p(x,t)=p_s(x) = 2 c cosh^-2 (√(c/2) x) <cit.>,
where c=H/τ, we obtain
∫_-∞^∞ dx p_s(x) = √(32 H/τ) .
The base of the triangular structure of the h-profile is equal to
2a(t)=√(2H/τ) t ,
while the triangle's height is
h(0,t)=Ht/τ , 0<t<τ .
Let us denote the total size of the impact region of the soliton-antishock structure
by 2a_1, where a_1 ≡ a(t=1). In the region a(t)<|x|<a_1 we have
p=h=0 .
The triangular profile of h on the interval 0<|x|<a(t) is described by the expressions <cit.>
p(x,t)=0 , h(x,t)
=H(t/τ-√(2)|x|/√(Hτ))
, and
V(x,t)=-∂_xh(x,t) = Ṽ sgn(x) ,
where
Ṽ=√(2H/τ) .
As one can see from Eqs. (<ref>) and (<ref>), the ordinary shocks propagate
with the speed Ṽ/2, as to be expected from Eq. (<ref>) or (<ref>) <cit.>.
After the rapid decay of the soliton at t=τ, the “post-soliton" solution (in the region to be determined)
can be described by the ideal hydrodynamic equations corresponding to the inviscid limit of Eqs. (<ref>)
and (<ref>):
∂_tV +V ∂_xV = -∂_x p ,
∂_tp+∂_x(pV) = 0 .
The V-antishock now plays the role of a discontinuity which undergoes a decay starting from t=τ.
In the leading order we can neglect the -∂_x p term, so that Eq. (<ref>) becomes the Hopf
equation (<ref>). Its solution is
V(x,t)=x/(t-τ) .
Plugging Eq. (<ref>) into Eq. (<ref>) and using the “final" condition (<ref>)
on p(x,t=1), we obtain
p(x,t) =Λ(1-τ)/(t-τ) .
The solution (<ref>) and (<ref>) holds at t>τ and |x|≤ a_d(t). The boundaries of this region,
x= ± a_d(t)≡Ṽ(t-τ) ,
represent weak discontinuities, moving with the speed Ṽ – that is twice as fast as
the ordinary shocks at x=± a(t), see Eq. (<ref>). Our simulations show
that the weak discontinuities catch up with the shocks at t=1. The corresponding condition can
be written as a_d(1) = a_1, and it yields τ=1/2[We also
obtained τ=1/2 analytically by solving the problem for a general τ and then minimizing the
resulting action with respect to τ. These calculations are somewhat cumbersome, and we do not show them here.]
Therefore, during the second stage of the dynamics, 1/2<t<1, V(x,t) is described by the following expressions:
V(|x|≤ a_d(t),t)=x/(t-1/2) , V(a_d(t)≤|x|≤ a(t),t)=±Ṽ , V(a(t)<|x|< a_1,t)=0 .
Using the relation V(x,t)=-∂_x h(x,t), we can obtain the h-profile at any time 1/2<t<1
by integrating Eq. (<ref>) over x. The result describes a parabolic profile of h at |x|<a_d(t),
flanked by the linear profiles at a_d(t)<|x|<a_1 corresponding to the triangular structure of h(x,t) of
the first stage the dynamics. At t=1 the parabolic profile takes over the whole interval |x|<a_1, and we obtain
h(x,t=1)=H-x^2 , |x|<a_1=√(H).
At |x|>a_1 the uniform solution holds:
h(|x|>a_1,t)=Λ t , p(|x|>a_1,t)=Λ .
Now we evaluate the contributions of the uniform solution to the action, Δ S_u, and to the average
height, ΔH̅_u, at t=1. As ℓ goes to infinity, we can neglect the difference between the
total system length ℓ and the length of the domain of uniform solution ℓ-2a_1, and obtain
Δ S_u=Λ^2ℓ/2 , ΔH̅_u=Λ .
The leading-order contribution of the soliton-antishock solution to the action is <cit.>
Δ S_s=8√(2)/3 H^3/2/√(τ)=16 H^3/2/3 .
This contribution comes from the first stage of the process, 0<t<1/2, while the second stage gives
only a subleading contribution which we neglect.
The second stage, 1/2<t<1 does contribute to H̅, however. Using Eq. (<ref>), we obtain
ΔH̅_s=4 H^3/2/3ℓ .
What remains to be done is to determine Λ, to collect the contributions to S and H̅,
and to eliminate H in favor of H̅ and ℓ.
In order to determine Λ, we use the local conservation of p(x,t) evident in Eq. (<ref>).
Because of this local conservation law,
the total soliton “mass", see Eq. (<ref>), must be equal to the integral of the solution (<ref>)
for p(x,t) over x from -a_1 to a_1. This condition yields a remarkably simple result: Λ=4,
a constant value (up to small subleading corrections).
Combining Eqs. (<ref>)-(<ref>), we obtain
H̅=4+4 H^3/2/3ℓ ,
S=8ℓ+16 H^3/2/3 .
Eliminating H, we arrive at the leading-order result for the large-deviation function of H̅
for branch 2 in the limit of large ℓ, which was announced in the second line of Eq. (<ref>):
S=(4H̅ -8) ℓ .
This expression obeys the large-deviation scaling (<ref>). As was to be expected, the actions
of branch 1 and 2 coincide at
H̅=H̅_c=4. Noticeably, their first derivatives with respect to H̅
also coincide at this point.
In addition, using Eq. (<ref>), we see that Eq. (<ref>) is consistent with Λ=4,
independently of H̅, for branch 2.
We will look into these peculiarities more carefully in Sec. <ref>.
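The elimination of H between the two relations above is elementary; for completeness, a short symbolic check (Python/SymPy assumed):

import sympy as sp

# Substitute H^(3/2) = 3*ell*(Hbar - 4)/4 into S = 8*ell + 16*H^(3/2)/3.
H32, ell, Hbar = sp.symbols('H32 ell Hbar', positive=True)   # H32 denotes H**(3/2)
S_of_Hbar = (8 * ell + sp.Rational(16, 3) * H32).subs(H32, sp.Rational(3, 4) * ell * (Hbar - 4))
assert sp.simplify(S_of_Hbar - (4 * Hbar - 8) * ell) == 0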
One applicability condition of Eq. (<ref>) is the strong inequality H≫ 1.
Using the first relation in Eq. (<ref>),
we can rewrite this strong inequality in terms of H̅ and ℓ≫ 1:
H̅-4 ≫ 1/ℓ .
This condition limits H̅ from below. A condition on H̅ from above distinguishes
branch 2 from branch 3. It demands that the ordinary shocks of V(x,t) do not collide with
each other until t=1[While deriving Eq. (<ref>) we
demanded a strong inequality 2√(H)≪ℓ. However, when H̅≫ 1, the main contribution
to S and H̅ comes from the soliton-antishock solution, rather than from the uniform one. As a
result, the strong inequality 2√(H)≪ℓ becomes unnecessary, and a simple inequality suffices.].
This condition can be written as 2√(H)<ℓ or, using Eq. (<ref>),
H̅-4<ℓ^2/6 , ℓ≫1 .
Now we proceed to a description of branch 3.
§.§ Branch 3
When the inequality (<ref>) is violated, the two outgoing ordinary shocks of V(x,t) collide
with each other and merge at x=±ℓ / 2 (which is the same point of the ring) at some t<1.
Upon the merger, a single stationary shock appears, see Fig. <ref>. Now the impact region of
the soliton-antishock is the whole system: 2a_1=ℓ, and the external region of the uniform solution,
characteristic of branch 2, does not appear here.
Most of the general formulas, derived in the context of branch 2, remain valid for branch 3.
In particular, here too τ is determined by the condition that the weak discontinuities catch
up with the ordinary shocks at t=1. The only difference is that a_1=ℓ/2 now. Solving the
equation a_d(1) = a_1, or
√(2H/τ)(1-τ) = ℓ/2 ,
we obtain
τ =1+ℓ^2/16 H-ℓ√(ℓ^2+32H)/16 H ,
so that τ depends on H and ℓ. Unsurprisingly, Eq. (<ref>) yields τ=1/2 in
the boundary case H=ℓ^2/4, when the size 2a_1 of the impact region of the soliton-antishock
in an infinite system is equal to the system size ℓ. When H goes to infinity, τ approaches 1.
We will not repeat here all expressions for h(x,t), V(x,t) and p(x,t) in different regions,
and present only the expression for h(x,1):
h(x,1)=H-x^2/[2(1-τ)] ,
with τ from Eq. (<ref>).
Using this expression, we can evaluate H̅. The action S remains the same as in the
first equality in Eq. (<ref>), and we obtain
H̅=H-ℓ^2/[24(1-τ)] ,
S=8√(2)/3 H^3/2/√(τ) .
Eliminating H from these relations and using Eq. (<ref>), we arrive at a leading-order
result for the large-deviation function S(H̅,ℓ) in the limit of large ℓ and very
large H̅, which was announced in the third line of Eq. (<ref>):
S(H̅,ℓ) = H̅^3/2Φ(H̅/ℓ^2) , where Φ(z) =2 √(2) (9 z+1+√(18z+1))^1/2(36 z+1+√(18z+1))/81 z^3/2 .
In terms of H̅, the condition H>ℓ^2/4 becomes, in the leading order, H̅>ℓ^2/6.
As a result, the function Φ(z) is defined for z≥ 1/6, and Φ(1/6) = 4 √(6).
A graph of Φ(z) is depicted in Fig. <ref>.
In the limit of H̅≫ℓ^2≫ 1 Eq. (<ref>) yields
S=8√(2)/3H̅^3/2+4/3H̅ℓ+ … .
The leading-order term of this expression coincides with the action for a single-point height H <cit.>.
This is to be expected, because for very large H̅, τ approaches 1, and the difference
between H̅ and H becomes relatively small.
The expressions in Eqs. (<ref>) and (<ref>) match in the leading order in ℓ
at the boundary H̅≃ℓ^2/6 between the branches 2 and 3, both giving (2/3) ℓ^3+O(ℓ).
For completeness, we also present the optimal transition time τ in Eq. (<ref>) in terms of H̅ and ℓ:
τ(H̅,ℓ)=1+ℓ^2/12 H̅-ℓ√(ℓ^2+18
H̅)/12 H̅ .
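As a cross-check of the algebra, the parametric relations of this subsection can be verified numerically: for any H>ℓ^2/4, the pair (H̅,S) obtained via τ(H,ℓ) lies on the curve S=H̅^3/2Φ(H̅/ℓ^2), and τ expressed through (H̅,ℓ) agrees with its (H,ℓ) form. A minimal sketch (Python, NumPy assumed):

import numpy as np

def Phi(z):
    r = np.sqrt(18.0 * z + 1.0)
    return 2.0 * np.sqrt(2.0) * np.sqrt(9.0 * z + 1.0 + r) * (36.0 * z + 1.0 + r) / (81.0 * z ** 1.5)

ell = 16.0 * np.pi
H = 2.0 * ell ** 2                                           # any H > ell^2/4 (branch 3)
tau = 1.0 + ell ** 2 / (16.0 * H) - ell * np.sqrt(ell ** 2 + 32.0 * H) / (16.0 * H)
Hbar = H - ell ** 2 / (24.0 * (1.0 - tau))
S = 8.0 * np.sqrt(2.0) / 3.0 * H ** 1.5 / np.sqrt(tau)
assert np.isclose(S, Hbar ** 1.5 * Phi(Hbar / ell ** 2))
assert np.isclose(tau, 1.0 + ell ** 2 / (12.0 * Hbar) - ell * np.sqrt(ell ** 2 + 18.0 * Hbar) / (12.0 * Hbar))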
§.§ Dynamical phase transition
In this subsection we resolve the nature of the DPT between
branches 1 and 2, which corresponds to the subcritical bifurcation from the uniform solution (<ref>)
to the leading-order soliton solution discussed in Sec. <ref>. To this end we will have to focus
on subleading corrections that we have previously ignored. We will also present the large-deviation
scaling of 𝒫(H̅,L,T) in the limit of T → 0 at fixed L, in the physical units.
As we have already noticed, the actions S_1(H̅, ℓ) and S_2(H̅, ℓ), described
by the first and second lines of Eq. (<ref>),
coincide at H̅=H̅_c=4 together with their first derivatives ∂ S_1(H̅, ℓ) /
∂H̅ and ∂ S_2(H̅, ℓ)/∂H̅
at H̅_c=4. It would be incorrect, however,
to conclude from here that the DPT between branches 1 and 2 at H̅=H̅_c
is of second order. Indeed, the supercritical first bifurcation of the uniform solution (<ref>)
to a solution with a single maximum of h(x,1) – the one with q = 2 π / ℓ
in Eq. (<ref>) – actually occurs, as ℓ→∞, at much
larger H̅≃ℓ^2 / 16 ≫ 4. Furthermore,
as follows from numerical minimization of Eq. (<ref>), instability
of any Fourier mode around the uniform solution can only occur
for H̅≥ 4.60334 (the minimum being attained at q ≃ 1.34336). It
is not surprising, therefore, that
at large but finite ℓ, and at a slightly shifted transition
point H̅_c> 4 where the actions of branches 1 and 2
are equal, the optimal paths h(x,t) for branches 1 and 2, that we found numerically,
are dramatically different, and their respective Lagrange
multipliers Λ are not equal. The latter fact means, by
virtue of Eq. (<ref>), that at large ℓ we actually observe a first-order DPT, not a second-order one.
To make sense of these facts, we recall that Eq. (<ref>)
for the action of branch 2 is merely a leading order asymptotic
at ℓ→∞. Subleading terms, so far unaccounted for, should remove
the degeneracy of the leading-order results by breaking the accidental continuity
of the first derivative ∂ S(H̅, ℓ)/∂H̅
at H̅=H̅_c, and
rendering the corresponding bifurcation subcritical and the corresponding DPT
first-order. The subleading terms should also account for a slight shift of the critical
point H̅_c to the right from its leading-order
value H̅_c=4, as observed in our numerics.
Motivated by the large-H asymptotic of the upper tail of the exact
short-time probability distribution of the one-point height h(x = 0,t = 1)=H
on the line, determined in Ref. <cit.>, we can conjecture the following
subleading terms of S_2(H̅,ℓ) at large ℓ:
S_2(H̅,ℓ)=(4H̅ -8) ℓ+B H^1/2+C H^-1/2+… ,
where B>0 and C are numerical constants O(1), which are independent
of ℓ. The condition B>0 is necessary for the equation
S_1 ( H̅_c,ℓ) =
S_2 ( H̅_c,ℓ)
to have a solution for H̅_c close to
4 at large ℓ.
To verify Eq. (<ref>), we plotted in Fig. <ref> our large-ℓ numerical results for
[S_2(H̅,ℓ) - (4H̅ -8)
ℓ]/√(H) versus H. A fair plateau at large H is observed, with B ≃ 5.3 > 0 found by fitting.
Now, keeping the first subleading term in Eq. (<ref>)
and the leading-order dependence of H on H̅ in Eq. (<ref>),
we can rewrite Eq. (<ref>) in terms of H̅ and ℓ:
S_2(H̅,ℓ)=8ℓ+4(H̅ -4) ℓ
+ (3/4)^1/3 B [(H̅-4)ℓ]^1/3
+ … ,
(H̅-4)ℓ≫ 1 .
Now Eq. (<ref>) for the critical point becomes
1/2(H̅_c-4 )^2ℓ
= (3/4)^1/3 B [ (H̅_c
-4 )ℓ]^1/3+… ,
Its approximate solution,
H̅_c = 4 + 6^1/5 B^3/5 ℓ^-2/5+… ,
describes a small ℓ-dependent positive shift of the critical point from the leading-order value 4.
This H̅_c corresponds to
H = (9/8)^2/5 B^2/5ℓ^2/5 +…
of the branch-2 solution at the critical point. We observe that, for this solution, H →∞
as ℓ→∞, guaranteeing applicability of our theory at large ℓ. Going back to the
large-deviation scaling (<ref>), we notice that there is now a small but finite jump ∼ℓ^-2/5
of the derivative ℓ^-1∂ S/∂H̅ of the effective rate function at the shifted critical
point. The transition between branches 1 and 2, therefore, is of first order.
By virtue of Eq. (<ref>), the subleading correction in Eq. (<ref>) also removes the degeneracy
of the leading-order result Λ=4 by adding to it a small ℓ-dependent correction that goes
to zero as ℓ→∞.
Using Eq. (<ref>), we plotted in Fig. <ref> the actions of branches 1 and 2, normalized
by ℓH̅^2, in the
vicinity of the H̅ = H̅_c. It is clearly seen that the subleading correction removes the degeneracy
and makes the DPT first-order. Furthermore,
the predicted H̅_c from Eq. (<ref>)
for ℓ = 32 π, which is H̅_c≃ 4.6, is close to our numerical result H̅_c≃ 4.57 for this ℓ, see
Fig. <ref>.
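This estimate is easy to reproduce by solving the critical-point balance above for x=H̅_c-4 with the fitted constant B and ℓ=32π. A small numerical sketch (Python, SciPy assumed):

import numpy as np
from scipy.optimize import brentq

B, ell = 5.3, 32.0 * np.pi
balance = lambda x: 0.5 * x ** 2 * ell - 0.75 ** (1.0 / 3.0) * B * (x * ell) ** (1.0 / 3.0)
x = brentq(balance, 1e-3, 10.0)                              # x = Hbar_c - 4
print(4.0 + x, 4.0 + 6.0 ** 0.2 * B ** 0.6 * ell ** (-0.4))  # ~4.6 from both the root and the closed form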
Note that our arguments in favor of the expansion (<ref>) are far from rigorous.
In particular, we cannot exclude a very
slow (for example, logarithmic) dependence of the coefficient B on H in Eq. (<ref>)
based only on the numerical evidence. However,
our main conclusion about the first-order DPT between branches 1 and 2
seems robust.
To conclude this section, we present our large-deviation results, described by the first two lines
of Eq. (<ref>), in
the physical units. Recall that, by taking the
limit T → 0 at fixed L,
we have both ε∝ T^1/2→ 0 and ℓ→∞. In this limit only the first
two lines of Eq. (<ref>) are relevant, and we
obtain[Note the factor of T instead of the customary weak-noise
factor T^1/2 on the left-hand side
of Eq. (<ref>).]
-lim_T→ 0 T ln P(H̅,L,T)
=ν^2/Dλ^2 L f(λH̅/ν) ,
f(w)={[ w^2/2 , w<4 ; 4w-8 , w>4 ].
As we
elaborated in this subsection, the DPT
in Eq. (<ref>) at w = 4 can be called an “accidental”
second order DPT in the sense that the optimal paths, that are responsible for the two branches in Eq. (<ref>),
transition into each other discontinuously, and that the differentiability of the rate function
at the critical point emerges only in
the limit T → 0 at fixed L.
§ SMALL-ℓ ASYMPTOTICS
We found that our numerical results on the second-order DPT at small ℓ, shown in Figs. <ref>
and <ref> and described in Sec. <ref>,
can be understood in terms of a small-ℓ asymptotic solution of the OFM equations (<ref>)
and (<ref>) which was previously found in the context of the one-point
height distribution on a ring <cit.>. In this solution
the interface is driven by a stationary dn^2 profile (see below) of p. The solution represents a finite-amplitude
generalization of a weak sinusoidal modulation with m = 1 which results from the second-order DPT from
the uniform solution. This solution is given by the following expressions[This
solution is invalid inside
narrow boundary layers in time at t=0 and t=1, but their contribution to the action is negligible.]
h(x,t) ≃ H t + 2 lndn[2 K(k) x/ℓ,
k ] ,
p(x,t) ≃ p_0(x) = [4 K(k)/ℓ]^2
dn^2 [2 K(k) x/ℓ , k] ,
where K(k) is the complete elliptic integral of the first kind
and dn(…) is one of the Jacobi elliptic functions <cit.>.
The elliptic modulus k ∈ (0,1) is determined by H via the relation
8 (2 - k^2) K^2(k)/ℓ^2 = H .
The action of this solution as a function of k is <cit.>
S(k) = 128 K^3(k)/(3 ℓ^3) [2(2-k^2) E(k)
- (1-k^2) K(k) ] .
At given ℓ≪ 1, Eqs. (<ref>) and (<ref>) determine S as a
function of H in a parametric form. The critical point H̅ = (2 π / ℓ)^2 corresponds
to k=0, when Eqs. (<ref>) and (<ref>) reduce to the uniform solution. k>0
correspond to supercritical solutions.
In order to recast this dependence in terms of S(H̅,ℓ),
we need to express H through H̅ and ℓ. Although Eq. (<ref>) is formally inapplicable
at t=1, asymptotically as ℓ→ 0 we still have
H - H̅≃ -1/ℓ∫_-ℓ /2^ℓ / 2
2 lndn[2 K(k) x/ℓ,
k ] dx= 1/2 ln[1/(1 - k^2)] .
where we have used a product formula for dn <cit.>.
Using Eqs. (<ref>) and (<ref>), we obtain
H̅(k) = 8 (2 - k^2) K^2(k)/ℓ^2 - 1/2 ln[1/(1-k^2)] .
Equations (<ref>) and (<ref>) determine S=S(H̅,ℓ) and were
used in Fig. <ref> to draw the theoretical curves for the action and
Lagrange multiplier (via Eq. (<ref>))
at ℓ = π / 8, which agree very well with the numerical action minimization results. Also shown is the
asymptotic action
S(H̅) ≃8 √(2)/3H̅^3/2
as H̅→∞, which agrees with Eq. (<ref>) and can be obtained from
Eqs. (<ref>) and (<ref>) by considering the limit k → 1
with E(k) → 1 and K(k) ≃ (1/2) ln[1/(1-k)]. As one can see from
Fig. <ref>, the asymptotic relation (<ref>)
is not yet satisfied for the moderately small ℓ = π / 8: noticeably, the solution h(x,1)
at the final time deviates from Eq. (<ref>). However, the numerically found action
is already accurately described by Eqs. (<ref>) and (<ref>), because
the difference between H and H̅ is always subleading – at most O(√(H)) – at small ℓ.
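For reference, the parametric construction of this section is straightforward to evaluate with standard elliptic-integral routines. A minimal sketch (Python, SciPy assumed; note that scipy's ellipk/ellipe take the parameter m=k^2 rather than the modulus k):

import numpy as np
from scipy.special import ellipk, ellipe

ell = np.pi / 8.0
k = np.linspace(1e-4, 1.0 - 1e-10, 2000)
K, E = ellipk(k ** 2), ellipe(k ** 2)
H = 8.0 * (2.0 - k ** 2) * K ** 2 / ell ** 2
S = 128.0 * K ** 3 * (2.0 * (2.0 - k ** 2) * E - (1.0 - k ** 2) * K) / (3.0 * ell ** 3)
Hbar = H - 0.5 * np.log(1.0 / (1.0 - k ** 2))
print(Hbar[0], (2.0 * np.pi / ell) ** 2)                  # k -> 0 reproduces the critical point
print(S[-1] / Hbar[-1] ** 1.5, 8.0 * np.sqrt(2.0) / 3.0)  # k -> 1 approaches the asymptote 8*sqrt(2)/3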
§ SUMMARY AND DISCUSSION
We applied the OFM to evaluate analytically and numerically the short-time PDF P (H̅, L, t=T),
and the optimal paths which dominate this PDF, of the KPZ interface on a ring. The short-time PDF has
the scaling form (<ref>), where ε∼ T^1/2 plays the role of the weak-noise
parameter. The phase diagram of the system
represents the (H̅, ℓ=L/√(ν T)) plane. We were especially interested in the DPTs that occur
in this system at sufficiently large positive λH̅>0. We found that, depending on ℓ, these
DPTs occur via either a supercritical, or a subcritical bifurcation of the “trivial" (uniform in space)
optimal path of the KPZ interface. The supercritical bifurcations dominate at very small ℓ, the subcritical
bifurcations dominate at very large ℓ. In these two limits we obtained asymptotic analytical solutions
for the optimal paths of the system, evaluated the resulting action, and verified the analytical results
numerically. We also found that, as T goes to zero at constant L, the PDF acquires a simple large-deviation
form (<ref>). Interestingly, the rate function f(H̅) exhibits, at a critical value
of H̅=H̅_c(ℓ), a DPT which is accidentally second-order.
In the (much more complicated) region of intermediate ℓ=O(1) we observed numerically both supercritical,
and subcritical bifurcations of the uniform solution. This region of the phase diagram is presently out of
reach of analytical theory. It would be very interesting, but challenging, to determine the complete phase
diagram of the system in this region. In particular, it would be interesting to locate, somewhere
between ℓ=16 π and ℓ = 32π, at least one critical point (H̅_*, ℓ_*) where the
second order DPT curve H̅_c^(2)(ℓ) ends when it meets the first order DPT curve H̅_c^(1)(ℓ),
as well as other possible critical points.
These tasks will become more feasible if this problem, as described by Eqs. (<ref>)-(<ref>),
joins the list of similar
large-deviation OFM problems for the KPZ equation which have been solved exactly by the inverse scattering
method (ISM) <cit.>. Indeed, as was previously found in Ref. <cit.>,
a canonical Hopf–Cole transformation brings Eqs. (<ref>) and (<ref>) into the nonlinear
Schrödinger equation in imaginary space and time. Therefore, Eqs. (<ref>) and (<ref>)
belong to a family of completely integrable models. The only problem (but potentially a big one) is to
adapt the ISM to a finite system with periodic boundaries and to accommodate the problem-specific boundary
conditions (<ref>) and (<ref>). The exact solution would also provide
a full analytic control of the subleading corrections to the action of branch 2, which are presently half-empiric.
Finally, it would be very interesting to explore the possibility of extending to the spatially averaged KPZ
interface height some of the recent “stochastic integrability" approaches, which led, for selected initial
conditions, to exact representations for the complete statistics of the one-point interface
height <cit.>.
§ ACKNOWLEDGMENTS
The authors thank Eldad Bettelheim and Naftali R. Smith for useful discussions.
This research was supported by the program
“Advanced Research Using High Intensity Laser-Produced Photons and Particles"
(ADONIS) (CZ.02.1.01/0.0/0.0/16019/0000789) of the European Regional Development Fund (ERDF) (PS),
and by the Israel Science Foundation (Grant No. 1499/20) (BM).
§ NUMERICAL METHODS
Our numerical procedure of finding solutions h and p of the
OFM problem (<ref>)-(<ref>)
can be summarized as follows:
To compute numerical solutions to the boundary-value problem
for h and p for given ℓ and H̅, we use a
refined version of the popular Chernykh–Stepanov
back-and-forth iteration algorithm <cit.> as described in detail
in Ref. <cit.>, using the language of PDE-constrained optimization.
The idea is to interpret the back-and-forth
iterations – fixing Λ and solving Eq. (<ref>) forward in time
with fixed p, and Eq. (<ref>) backward in time with fixed h until
convergence – as adjoint <cit.> gradient evaluations δ S /
δ p of the action
functional with fixed Λ,
S[p] = 1/2∫_0^1 dt ∫_0^ℓ
d x p^2(x,t) - Λ∫_0^ℓ h[p](x,1) dx ,
with the height profile h = h[p] determined for a
given p through Eq. (<ref>).
This interpretation allows us to use automatic update step-size
control (here: Armijo line search <cit.>) and
preconditioning for faster convergence (here: L-BFGS method <cit.>).
Conceptually, one fixes Λ in this formulation and obtains
the corresponding average height value H̅ a posteriori.
For large ℓ we find multiple solutions for the
same H̅, and the action S(H̅,ℓ) of the optimal solution as a
function of H̅
becomes nonconvex for some H̅. Nonconvexity of the rate
function S(H̅) is an issue because
minimizing the functional (<ref>) effectively computes the
Legendre–Fenchel transform of the rate function at Λ,
which may diverge in this case. Therefore, we add a
penalty term to the action, leading to the so-called
augmented Lagrangian formulation <cit.>
S[p] = 1/2∫_0^1 dt ∫_0^ℓ
d x p^2(x,t) - Λ(
∫_0^ℓ h[p](x,1) dx - ℓH̅)
+ μ/2(∫_0^ℓ h[p](x,1)
dx - ℓH̅)^2 ,
and solve multiple minimization problems for increasing penalty
parameters μ.
In this formulation, one can directly prescribe H̅ at the
cost of solving multiple optimization problems, and it is usable
regardless of convexity of the rate function, or in other words regardless of
bijectivity of the map between H̅ and Λ.
The formulation (<ref>) is more convenient to
trace solution branches: one initializes the optimization on an
already found solution on a given branch and slightly changes
Λ. In order to trace branches close to the transition
region for large ℓ in
the nonconvex case, we temporarily reparameterize the observable
as described in Ref. <cit.> with reparameterizations
g(z) = lnln z or g(z) = 1 - exp{-(z - 3.5) }.
Within this general framework, we use a
pseudo-spectral code with spatial resolution n_x
to solve Eqs. (<ref>)
and (<ref>), with an exact integration of the diffusion
terms through an integrating factor in Fourier space. An explicit
second-order Runge–Kutta integrator with n_t equidistant steps
is used in time. The gradient of the action functional is
evaluated exactly on a discrete level (“discretize,
then optimize”). Python source code to illustrate the optimization
methods in a simple toy problem
can be found in Ref. <cit.>.
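For readers who prefer a concrete starting point, the back-and-forth iteration is sketched below for the simplest setting of a fixed Λ, a pseudo-spectral grid, and a plain exponential-Euler time stepper (the production code uses RK2 with the same integrating factor, plus the line search and L-BFGS preconditioning described above). The OFM equations are taken here in the form ∂_t h=∂_x^2h+(∂_xh)^2/2+p (forward in time, h(x,0)=0) and ∂_t p=-∂_x^2p+∂_x(p∂_xh) (backward in time, p(x,1)=Λ), which is the form consistent with the linearization used in the bifurcation analysis above; the chosen Λ is strongly subcritical, so the iteration simply settles on the uniform branch-1 solution as a consistency check:

import numpy as np

nx, nt, ell, Lam = 64, 2000, np.pi / 8.0, 2.0
dt = 1.0 / nt
kx = 2.0 * np.pi * np.fft.rfftfreq(nx, d=ell / nx)      # spectral wave numbers
fac = np.exp(-kx ** 2 * dt)                             # exact integrating factor for the diffusion term

def dx(f):                                              # pseudo-spectral derivative
    return np.fft.irfft(1j * kx * np.fft.rfft(f), n=nx)

def step(f, rhs):                                       # one exponential-Euler step
    return np.fft.irfft(fac * np.fft.rfft(f + dt * rhs), n=nx)

p = Lam * np.ones((nt + 1, nx))                         # initial guess: uniform p
for sweep in range(20):
    h = np.zeros((nt + 1, nx))                          # forward sweep, p frozen
    for n in range(nt):
        h[n + 1] = step(h[n], 0.5 * dx(h[n]) ** 2 + p[n])
    pn = np.empty_like(p)                               # backward sweep, h frozen
    pn[nt] = Lam
    for n in range(nt, 0, -1):
        pn[n - 1] = step(pn[n], -dx(pn[n] * dx(h[n])))
    p = 0.5 * (p + pn)                                  # relaxation for stability
S = 0.5 * dt * (ell / nx) * np.sum(p[:-1] ** 2)
print(S, 0.5 * ell * Lam ** 2, np.mean(h[nt]))          # action ~ ell*Lam^2/2, Hbar ~ Lam on branch 1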
99
KK2007 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. B 75, 140201(R) (2007).
KK2008 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. B 78, 024206 (2008).
KK2009 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. E 80, 031107 (2009).
MKV B. Meerson, E. Katzav, and A. Vilenkin, Phys. Rev. Lett. 116, 070601 (2016).
KMSparabola A. Kamenev, B. Meerson, and P. V. Sasorov, Phys. Rev. E 94, 032108 (2016).
LDMRS P. Le Doussal, S. N. Majumdar, A. Rosso, and G. Schehr,
Phys. Rev. Lett. 117, 070403 (2016).
Janas2016 M. Janas, A. Kamenev, and B. Meerson, Phys. Rev. E 94, 032133 (2016).
KLD2017 A. Krajenbrink and P. Le Doussal, Phys. Rev. E 96, 020102(R)
(2017).
MeersonSchmidt2017 B. Meerson and J. Schmidt, J. Stat. Mech. (2017) P103207.
SMS2018 N. R. Smith, B. Meerson, and P. V. Sasorov, J. Stat. Mech. (2018) 023202.
SKM2018 N. R. Smith, A. Kamenev, and B. Meerson, Phys. Rev. E 97, 042130 (2018).
SmithMeerson2018 N. R. Smith and B. Meerson, Phys. Rev. E 97, 052110 (2018).
Hartmann2018 A. K. Hartmann, P. Le Doussal, S. N. Majumdar, A. Rosso,
and G. Schehr, Europhys. Lett. 121, 67004 (2018).
MV2018 B. Meerson and A. Vilenkin, Phys. Rev. E 98, 032145 (2018).
Asida2019 T. Asida, E. Livne, and B. Meerson, Phys. Rev. E 99, 042132 (2019).
SMV2019 N. R. Smith, B. Meerson, and A. Vilenkin, J. Stat. Mech. (2019)
053207.
HMS2019 A. K. Hartmann, B. Meerson, and P. Sasorov, Phys. Rev. Res. 1, 032043(R) (2019).
KLD2021 A. Krajenbrink and P. Le Doussal, Phys. Rev. Lett. 127, 064101 (2021).
HMS2021 A. K. Hartmann, B. Meerson, and P. Sasorov, Phys. Rev. E 104, 054125 (2021).
KLD2022 A. Krajenbrink and P. Le Doussal, Phys. Rev. E 105, 054142 (2022).
Lamarre P. Y. G. Lamarre, Y. Lin, L.-C. Tsai,
Probab. Theor. Rel. Fields 185, 885 (2023).
SGG T. Schorlepp, T. Grafke, and R. Grauer, J. Stat. Phys. 190, 50 (2023).
KPZ M. Kardar, G. Parisi, and Y.-C. Zhang, Phys. Rev. Lett. 56, 889
(1986).
shortcut F. D. Cunden, P. Facchi, and P. Vivo, J. Phys. A: Math. Theor. 49, 135202 (2016).
Whithambook G. B. Whitham, Linear and Nonlinear Waves (Wiley, New York, 2011).
SM18 N. Smith and B. Meerson, Phys. Rev. E 97, 052110 (2018).
Jacobi Wolfram MathWorld, https://mathworld.wolfram.com/JacobiEllipticFunctions.html
Wolf Wolfram Research, Inc., https://functions.wolfram.com/EllipticFunctions/JacobiDN/08/
SS T. Sasamoto and H. Spohn, Phys. Rev. Lett. 104, 230602 (2010).
CDR P. Calabrese, P. Le Doussal, A. Rosso, Europhys. Lett.
90, 20002 (2010).
Dotsenko V. Dotsenko, Europhys. Lett. 90, 20003 (2010).
ACQ G. Amir, I. Corwin, and J. Quastel, Comm. Pur. Appl. Math.
64, 466 (2011).
CLD11 P. Calabrese, and P. Le Doussal, Phys. Rev. Lett. 106, 250603 (2011).
CLD12 P. Le Doussal and P. Calabrese, J. Stat. Mech. (2012) P06001.
IS12 T. Imamura and T. Sasamoto, Phys. Rev. Lett. 108, 190603 (2012).
IS13 T. Imamura and T. Sasamoto, J. Stat. Phys. 150, 908 (2013).
Borodinetal A. Borodin, I. Corwin, P. L. Ferrari, and B. Vető, Math. Phys. Anal. Geom. 18, 20 (2015).
CS A. I. Chernykh and M. G. Stepanov, Phys. Rev. E 64,
026306 (2001).
SGMG T. Schorlepp, T. Grafke, S. May, and R. Grauer, Philos. Trans. Royal Soc. A 380, 20210051 (2022).
Plessix R.-E. Plessix, Geophys. J. Int. 167, 495 (2006).
Armijo L. Armijo, Pacific J. Math. 16, 1 (1966).
LN D. C. Liu and J. Nocedal, Math. Program. 45, 503 (1989).
Hestenes M. R. Hestenes, J. Optim. Theory. Appl. 4, 303 (1969).
AG M. Alqahtani and T. Grafke, J. Phys. A: Math. Theor. 54 175001 (2021).
STGS T. Schorlepp, S. Tong, T. Grafke, and G. Stadler, arXiv:2303.11919 (2023).
|
http://arxiv.org/abs/2307.04640v1 | 20230710153623 | Properties of the $η_q$ leading-twist distribution amplitude and its effects to the $B/D^+ \toη^{(\prime)}\ell^+ ν_\ell$ decays | [
"Dan-Dan Hu",
"Xing-Gang Wu",
"Hai-Bing Fu",
"Tao Zhong",
"Zai-Hui Wu",
"Long Zeng"
] | hep-ph | [
"hep-ph"
] |
[email protected]
[email protected]
Department of Physics, Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, P.R. China
[email protected]
[email protected]
[email protected]
Department of Physics, Guizhou Minzu University, Guiyang 550025, P.R. China
[email protected]
Department of Physics, Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, P.R. China
The η^(')-mesons in the quark-flavor basis are mixtures of two mesonic states |η_q⟩=|u̅ u+d̅ d⟩/√(2) and |η_s⟩=|s̅ s⟩. In a previous work, we made a detailed study of the η_s leading-twist distribution amplitude. As a sequel, in the present paper we fix the η_q leading-twist distribution amplitude by using the light-cone harmonic oscillator model for its wave function and by using the QCD sum rules within the QCD background field theory to calculate its moments. The input parameters of the η_q leading-twist distribution amplitude ϕ_2;η_q at an initial scale μ_0∼ 1 GeV are then fixed by using those moments. The sum rule for the 0_ th-order moment can also be used to fix the magnitude of the η_q decay constant, which gives f_η_q=0.141±0.005 GeV. As an application of the derived ϕ_2;η_q, we calculate the B(D)^+ →η^(') transition form factors by using the QCD light-cone sum rules up to twist-4 accuracy and by including the next-to-leading order QCD corrections to the twist-2 part, and then fix the related CKM matrix elements and the decay widths for the semi-leptonic decays B(D)^+ →η^(')ℓ^+ ν_ℓ.
13.25.Hw, 11.55.Hx, 12.38.Aw, 14.40.Be
Properties of the η_q leading-twist distribution amplitude and its effects to the B/D^+ →η^(')ℓ^+ ν_ℓ decays
Long Zeng
August 12, 2023
============================================================================================================
§ INTRODUCTION
The mixing of the η and η' mesons is essential for disentangling the standard model (SM) hadronic uncertainties from possible new physics beyond the SM. It involves the dynamics and structure of the pseudoscalar mesons, for which two mixing modes, η-η' and η-η'-G, have been considered, both of important theoretical significance. These mixings are caused by the QCD anomaly and are related to the breaking of chiral symmetry. However, since the matrix element of the anomaly operator is mainly non-perturbative, it still has not been calculated reliably. One may turn to phenomenological studies to obtain useful information on the non-perturbative QCD theory <cit.>. At present, the η-η'-G mixing mode has been studied in detail in Refs. <cit.>. As for the η-η' mixing mode, one can investigate it by using two distinct schemes, namely the singlet-octet (SO) scheme and the quark-flavor (QF) scheme. These two schemes reflect different understandings of the essential physics, and they are related by a proper rotation through an ideal mixing angle <cit.>. Practically, a dramatic simplification can be achieved by adopting the QF scheme <cit.>, especially since the decay constants in the quark-flavor basis simply follow the same pattern as the state mixing due to the OZI rule. In the QF scheme, the physical meson states |η⟩ and |η'⟩ are related to the QF basis |η_q⟩=|u̅ u+d̅ d⟩/√(2) and |η_s⟩=|s̅ s⟩ by an orthogonal transformation <cit.>,
[ |η⟩; |η'⟩ ] = [ cosϕ -sinϕ; sinϕ cosϕ ][ |η_q⟩; |η_s⟩ ],
where ϕ is the mixing angle. In the present paper, we shall adopt the QF scheme to do our analysis and to achieve a better understanding of the mixing mechanism between η and η'.
The B(D)→η^(') transitions are important, since they involve b→ u and c→ d transitions and are sensitive to the CKM matrix elements |V_ ub| and |V_ cd|. A more accurate determination of |V_ ub| and |V_ cd| would improve the stringency of the unitarity constraints on the CKM matrix and provide an improved test of the SM. Many measurements of |V_ ub| and |V_ cd| have been carried out by using various decay channels of the B(D)-mesons <cit.>. Compared with the non-leptonic B(D)-meson decays, the semi-leptonic decays D^+ →η^(')ℓ^+ ν_ℓ <cit.> and B^+ →η^(')ℓ^+ ν_ℓ <cit.> are much simpler, suffer from fewer non-perturbative effects, and can serve as helpful platforms for exploring the differences among various mechanisms.
As key components of the B(D)→η^(') semileptonic decays, the B(D)→η^(') transition form factors (TFFs) need to be precisely calculated, whose main contribution comes from the |η_q⟩-component (the |η_s⟩-component gives negligible contribution here, but will have sizable contribution for B_s (D_s) decays <cit.>). By further assuming SU_ F(3) symmetry, the TFFs f_+^B(D)→η^(') satisfy the following relation <cit.>
f_+^B(D)→η = cosϕ f_+^B(D)→η_q,
f_+^B(D)→η' = sinϕ f_+^B(D)→η_q.
The TFFs of the heavy-to-light transitions at large and intermediate momentum transfers are among the most important applications of the light-cone sum rules (LCSR) approach. In the LCSR approach, a two-point correlation function is introduced and expanded near the light cone x^2 → 0, and the resulting transition matrix elements are parameterized in terms of the light meson's light-cone distribution amplitudes (LCDAs) of increasing twist <cit.>. It is thus important to know the properties of the LCDAs.
In the present paper, we will adopt the light-cone harmonic oscillator (LCHO) model for the η_q leading-twist LCDA ϕ_2;η_q. The LCHO model is based on the Brodsky-Huang-Lepage (BHL) prescription <cit.> [The BHL prescription is obtained by connecting the equal-time wavefunction in the rest frame with the wavefunction in the infinite-momentum frame, which indicates that the LCWF should be a function of the meson's off-shell energy.] for the light-cone wavefunction (LCWF), which is composed of the spin-space LCWF and the spatial one. The LCDA can be obtained from the LCWF by integrating over the transverse momentum. The parameters of ϕ_2;η_q at an initial scale will be fixed by using the derived moments of the LCDA, which can then be run to any scale via the proper evolution equation. Its moments will be calculated by using the QCD sum rules within the framework of the background field theory (BFTSR) <cit.>. The QCD sum rules method suggests using the non-vanishing vacuum condensates to represent the non-perturbative effects <cit.>. The QCD background field approach provides a description of those vacuum condensates from the viewpoint of field theory <cit.>. It assumes that the quark and gluon fields are composed of the background fields and the quantum fluctuations around them. The vacuum expectation values of those background fields describe the non-perturbative effects, while the quantum fluctuations represent the calculable perturbative effects. As a combination, the BFTSR approach provides a clean physical picture for separating the perturbative and non-perturbative properties of QCD and a systematic way to derive the QCD sum rules for hadron phenomenology. At present, the BFTSR approach has been successfully applied to the LCDAs of various mesons; some recent examples can be found in Refs. <cit.>.
The remaining parts of the paper are organized as follows. In Sec. <ref>, we give the calculation technology for the moments of the η_q leading-twist LCDA ϕ_2;η_q by using the BFTSR approach, give a brief introduction of the LCHO model of ϕ_2;η_q, and then give the LCSR to the semi-leptonic decay B(D)^+ →η_qℓ^+ ν_ℓ. In Sec. <ref>, we first determine the parameters of ϕ_2;η_q. Finally, the TFF, the decay width and the CKM matrix element of the semi-leptonic decay B(D)^+→η^(')ℓ^+ ν_ℓ will be discussed. We will also compare our results with the experimental data and other theoretical predictions. Sec. <ref> is reserved for a summary.
§ CALCULATION TECHNOLOGY
§.§ Determination of the moments ⟨ξ _2;η_q^n⟩ of the η_q twist-2 LCDA using the BFTSR
To determine the distribution amplitude, one can first calculate its moments. The η^(') meson twist-2 LCDA is defined as <cit.>
⟨ 0|Ψ̅(z) C_i[z, - z] z γ _5Ψ ( - z)|η^(') (q)⟩
= i(z · q)f_η∫_0^1 dxe^i(2x - 1)(z · q)ϕ _2;η^(')(x,μ)
where Ψ=(u,d,s) represents the triplet of the light-quark fields in the flavour space, [z,-z] is the path-ordered gauge connection which ensures the gauge invariance of the operator, and ϕ _2;η^(')(x,μ) is the twist-2 LCDA of the η meson with respect to the current whose flavour content is given by C_i(i= q,s). And we have C_q=(√(2) C_1+ C_8)/√(3) and C_s=( C_1-√(2) C_8)/√(3) with C_1=1/√(3) and C_8=λ_8/√(2) which are derived in singlet-octet scheme <cit.>, where λ_8 is the standard Gell-Mann matrix and 1 is 3×3 unit matrix. The η^(')-meson twist-2 two-quark LCDAs are symmetric in the QF basis <cit.>.
In line with the implementation of the QF scheme for the η_q twist-2 LCDA, an approximation is implicitly adopted, i.e. ⟨ 0|Ψ̅(z) C_q [z, - z] zγ_5 Ψ(-z)|η_q(q)⟩ = ⟨ 0|u̅(z)[z, - z] zγ_5 d( - z)|π ^ - (q)⟩ <cit.>. That is, the definition of the η_q meson is the same as that of the π^0 meson. According to the definition, we have
C_q/√(2)⟨ 0|[u̅(0) zγ_5 (iz · D)^nu(0) + d̅(0) zγ_5 (iz · D )^nd(0)]|η_q(q)⟩
= i(z· q)^n+1f_η_q⟨ξ _2;η_q^n⟩|_μ ,
where μ is an initial scale. The η_q twist-2 LCDA ϕ _2;η_q and the n_ th-order moment satisfy the equation,
⟨ξ _2;η_q^n⟩|_μ = ∫_0^1 dx (2x-1)^n ϕ _2;η_q (x,μ).
Once the appropriate currents are determined, the first step is to construct the correlation function (correlator)
Π _2;η_q^(n,0) = i∫d^4xe^iq · x⟨ 0|T{J_n(x),J_0^† (0)} |0⟩
= (z · q)^(n+2) Π _2;η_q^(n,0)(q^2),
For the QF basis, one may have two independent axial vector currents J_μ5^q (q=u,d) and J_μ5^s. We have discussed J_μ5^s in our previous work <cit.> for the case of η_s, and in this paper, we will focus on J_μ5^q (q=u,d) for the present case of η_q. Then the required currents in the correlator can be defined as J_n(x) = C_q/√(2) [u̅(x) zγ_5 (iz · D )^nu(x) +d̅(x) zγ_5 (iz · D )^nd(x)]=u̅(x) zγ_5 (iz · D )^nd(x) <cit.>, where z^2=0. It is found that even moments are non-zero and the odd moments of the LCDA are zero because of the G-parity, then only the n=(0,2,4,…) will be considered.
For the second step, the correlator can be calculated by inserting a complete set of intermediate hadronic states in physical region. Based on the quark-hadron duality, the hadron expression can be obtained
Im I_2;η_q, Had^(n,0)(q^2) = πδ (q^2 - m̃_η_q^2)f_η_q^2⟨ξ _2;η_q^n⟩|_μ⟨ξ _2;η_q^0⟩|_μ
+ π 3/[4π^2 (n + 1)(n + 3)] θ(q^2 - s_η_q) .
Here, due to SU(3) flavour symmetry, m̃_η_q is taken as the η_q effective mass <cit.>, f_η_q is the decay constant of the η_q, and s_η_q stands for the continuum threshold.
For the third step, one can apply the operator product expansion (OPE) to deal with the correlator in the deep Euclidean region. It is calculable and can be carried out within the framework of BFTSR. Detailed calculation processes can be found in Ref. <cit.>. The fourth step is to match the hadron expression corresponding to the correlator and the results obtained by OPE using the dispersion relation. After applying the Borel transformation for both sides so as to suppress the unwanted contributions from the even higher-order dimensional condensates, the sum rules for the moments of the η_q leading-twist LCDA ϕ _2;η_q(x,μ ) can be finally obtained, which takes the following form
⟨ξ _2;η_q^n⟩ |_μ⟨ξ _2;η_q^0 ⟩|_μ = M^2/f_η_q^2 e^m̃_η_q^2/M^2{3/[4π^2(n+1)(n+3)](1-e^-s_η_q/M^2) + (m_u + m_d)⟨q̅q⟩/M^4 + ⟨α_sG^2⟩/(12π M^4) [1 + nθ(n - 2)]/(n + 1)
- (m_u + m_d)⟨ g_sq̅σ TGq⟩/M^6 (8n + 1)/18 + ⟨ g_sq̅q⟩^2/M^6 4(2n + 1)/18 - ⟨ g_s^3fG^3⟩/M^6 nθ(n - 2)/(48π^2) + ⟨ g_s^2q̅q⟩^2/M^6 (2 + κ^2)/(486π^2)
×{ - 2(51n + 25)( - lnM^2/μ^2) + 3(17n + 35) + θ(n - 2)[2n( - lnM^2/μ^2) + (49n^2 + 100n + 56)/n
- 25(2n + 1)[ψ((n + 1)/2) - ψ(n/2) + ln 4]]}}.
It has been shown that, since the anomalous dimension of the n_ th-order moment grows with increasing n, the contributions of the higher moments are highly suppressed at large momentum transfer <cit.>. Thus one only needs to calculate the first few of them. Specifically, the sum rule of the 0_ th-order moment is
(⟨ξ_2;η_q^0⟩|_μ )^2= M^2/f_η_q^2 e^m̃_η_q^2/M^2{1/(4π^2)(1 - e^-s_η_q/M^2)
+ (m_u + m_d)⟨q̅q⟩/M^4 - (m_u + m_d)⟨ g_sq̅σ TGq⟩/(18 M^6)
+ ⟨α_s G^2 ⟩/(12π M^4) + 4⟨ g_sq̅q⟩^2/(18M^6) + ⟨ g_s^2q̅q⟩^2/M^6 (2+κ^2)/(486π^2)
×[-50(-lnM^2/μ^2)+105]}.
Due to the particular quark composition of the η-meson, we take the η_q mass that appears in Eqs. (<ref>) and (<ref>) to be its effective mass, 370 MeV <cit.>. We use the relation ⟨ξ _2;η_q^n⟩ |_μ =⟨ξ_2;η_q^n⟩ |_μ⟨ξ _2;η_q^0⟩|_μ/√((⟨ξ _2;η_q^0⟩ |_μ )^2) to calculate the moments <cit.>. The decay constant is an important input for the B(D)→η^(') TFFs, and it has been calculated with different methods such as the LCSR <cit.>, the QCD sum rules (QCD SR) <cit.>, the light-front quark model (LFQM) <cit.>, the lattice QCD (LQCD) <cit.>, the Bethe-Salpeter (BS) model <cit.>, the relativistic quark model (RQM) <cit.>, the non-relativistic quark model (NRQM) <cit.>, etc. These studies show that f_η_q lies within a broad range, [0.130,0.168] GeV. At present, the sum rule for the η_q decay constant can be obtained inversely by using Eq. (<ref>). The moment ⟨ξ _2;η_q ^0⟩ |_μ should be normalized in a suitable Borel window, which will be treated as an important criterion for determining the η_q decay constant.
§.§ The LCHO model for η_q twist-2 LCDA
The meson's LCDA can be derived from its light-cone wave-function (LCWF) by integrating over the transverse momentum. It is thus helpful to construct the η_q leading-twist LCWF first and then obtain its LCDA <cit.>. Practically, the η_q wave-function can be constructed by using the BHL prescription, and the LCHO model takes the form <cit.>:
ψ_2;η_q(x,𝐤_) = χ _2;η_q (x,𝐤_)ψ _2;η_q^R(x,𝐤_),
where 𝐤_ is the η_q transverse momentum, χ _2;η_q(x,𝐤_) stands for the spin-space WF that comes from the Wigner-Melosh rotation, and the spatial WF ψ _2;η_q ^R(x,𝐤_) comes from the approximate bound-state solution in the quark model for η_q; their detailed expressions can be found in Ref. <cit.>. Using the following relationship between the η_q twist-2 LCDA and LCWF,
ϕ _2;η_q(x,μ) = 2√(6)/f_η_q∫_|𝐤_|^2 ≤μ^2 d^2 𝐤_/16π^3ψ _2;η_q(x,𝐤_),
and by integrating over the transverse momentum 𝐤_, one can get the twist-2 LCDA ϕ _2;η_q(x,μ ), which can be read off,
ϕ _2;η_q(x,μ) = √(3) A_2;η_qm_qβ _2;η_q/[2√(2)π^3/2 f_η_q]√(xx̅)φ _2;η_q(x)
×{ Erf[√((m_q^2 + μ^2)/(8β _2;η_q^2xx̅))]- Erf[√(m_q^2/(8β _2;η_q^2xx̅))]}.
where q=(u, d) and m_q is the constituent quark mass. The main difference among the model parameters lies in the constituent quark mass, i.e. m_u=m_d=250 MeV in the spin-averaged meson mass scheme <cit.>, m_u=m_d=330 MeV in the invariant meson mass scheme <cit.>, and, as the simplest choice, m_u=m_d=300 MeV in Refs. <cit.>. In principle, the hadronic wavefunction determines all the properties of the hadron. From the relation between the wavefunction and measurable quantities, one can obtain some constraints on its general properties. We will constrain the parameters A_2;η_q and β _2;η_q according to the following two constraints.
For both pseudoscalar and vector mesons, one constraint on the wavefunction comes from the leptonic decay processes. The WF normalization condition, provided by the process η_q→μν, reads
∫_0^1 dx∫d^2 𝐤_/16π^3ψ_2;η_q(x,𝐤_) = f_η_q/2√(6).
The second constraint is the most natural one: the probability of finding the qq̅ Fock state in a meson should not be larger than 1,
P_η_q =∫_0^1 dx∫d^2 𝐤_/16π^3 |ψ_2;η_q(x,𝐤_)|^2
= A_2;η_q^2m_q^2/(32π^2)∫_0^1 dx [φ _2;η_q(x)]^2Γ[0,m_q^2/(4β_2;η_q^2xx̅)].
Since the pionic twist-2 wavefunction corresponds to the probability P_π≈ 0.3 <cit.>, we adopt P_η_q≈0.3 in the following calculation. Equivalently, one can replace the constraint (<ref>) by the quark transverse momentum ⟨𝐤_ ^2⟩ _η_q, which is measurable and is defined as <cit.>
⟨𝐤_ ^2⟩ _η_q = ∫_0^1 dx ∫d^2 𝐤_/16π^3 |𝐤_|^2 |ψ_2;η_q^R(x,𝐤_)|^2/P_η_q
=∫_0^1 dx 4xx̅β_2;η_q^2 exp[- m_q^2/(4xx̅β _2;η_q^2)]/Γ[0,m_q^2/(4xx̅β_2;η_q^2)] - m_q^2 ,
where the incomplete gamma function Γ[s,x] = ∫_x^∞ t^(s-1) e^-t dt.
The function φ _2;η_q(x) determines the dominant longitudinal behavior of ϕ _2;η_q(x,μ), and it can be expanded as a Gegenbauer series,
φ _2;η_q(x) =[1 + ∑_n B_n × C_n^3/2(2x - 1) ],
For self-consistency, it has been found that the parameters B_n are close to their corresponding Gegenbauer moments, i.e. B_n ∼ a_n, especially for the first few ones <cit.>. The η_q meson Gegenbauer moments can be calculated in the following way:
a_2;η_q^n(μ)=∫_0^1 dxϕ _2;η_q(x,μ)C_n^3/2(2x-1)/∫_0^1 dx6x(1-x)[C_n^3/2(2x-1)]^2
The Gegenbauer moments a_2;η_q^n(μ) and the DA moments ⟨ξ _2;η_q^n⟩ |_μ satisfy the following relations
⟨ξ _2;η_q^2⟩ |_μ =1/5+12/35a_2;η_q^2(μ)
⟨ξ _2;η_q^4⟩ |_μ =3/35+8/35a_2;η_q^2(μ)+8/77a_2;η_q^4(μ)
···
By using the sum rules (<ref>) for ⟨ξ _2;η_q^n⟩ |_μ, one can determine the values of a_2;η_q^n(μ), which can then be used to fix the values of B_n. In the following, we will adopt the first two Gegenbauer moments a^2,4_2;η_q to fix the parameters B_2,4.
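As an illustration of how the model parameters are constrained in practice, the sketch below implements the LCHO expression for ϕ_2;η_q(x,μ) given above, fixes A_2;η_q by the normalization ∫_0^1 ϕ_2;η_q(x,μ)dx=1 (which follows from the wavefunction normalization condition), and projects out the Gegenbauer moments. The values of β_2;η_q, B_2 and B_4 below are purely illustrative placeholders, while m_q=0.3 GeV, f_η_q=0.141 GeV and μ=1 GeV are taken from the text (Python, SciPy assumed):

import numpy as np
from scipy.special import erf, eval_gegenbauer
from scipy.integrate import quad

mq, f_etaq, mu = 0.30, 0.141, 1.0          # GeV
beta, B2, B4 = 0.50, 0.15, 0.05            # illustrative placeholder values only

def varphi(x):
    xi = 2.0 * x - 1.0
    return 1.0 + B2 * eval_gegenbauer(2, 1.5, xi) + B4 * eval_gegenbauer(4, 1.5, xi)

def phi_unnorm(x):                          # LCHO twist-2 DA with A_{2;eta_q} = 1
    xxb = x * (1.0 - x)
    pref = np.sqrt(3.0) * mq * beta / (2.0 * np.sqrt(2.0) * np.pi ** 1.5 * f_etaq)
    return pref * np.sqrt(xxb) * varphi(x) * (
        erf(np.sqrt((mq ** 2 + mu ** 2) / (8.0 * beta ** 2 * xxb)))
        - erf(np.sqrt(mq ** 2 / (8.0 * beta ** 2 * xxb))))

A = 1.0 / quad(phi_unnorm, 0.0, 1.0)[0]     # normalization constant A_{2;eta_q}
phi = lambda x: A * phi_unnorm(x)

def a_n(n):                                 # Gegenbauer projection of the model DA
    num = quad(lambda x: phi(x) * eval_gegenbauer(n, 1.5, 2.0 * x - 1.0), 0.0, 1.0)[0]
    den = quad(lambda x: 6.0 * x * (1.0 - x) * eval_gegenbauer(n, 1.5, 2.0 * x - 1.0) ** 2, 0.0, 1.0)[0]
    return num / den

xi2 = quad(lambda x: phi(x) * (2.0 * x - 1.0) ** 2, 0.0, 1.0)[0]
print(a_n(2), a_n(4))                       # model Gegenbauer moments a_2, a_4
print(xi2, 0.2 + 12.0 / 35.0 * a_n(2))      # <xi^2> agrees with 1/5 + (12/35) a_2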
§.§ The B(D)^+→η_q ℓ^+ ν_ℓ TFFs using the LCSR
The LCSR approach is an effective tool for determining the non-perturbative properties of hadronic states. Hereafter, we use the symbol “H” to denote the B(D)-meson for convenience. Following the LCSR approach, one should first construct a correlator with the weak current and a current carrying the quantum numbers of the H meson, sandwiched between the vacuum and the η_q state. More explicitly, for H→η_q, we need to calculate the correlator
Π_μ(p,q) =i∫d^4 xe^iqx⟨η_q (p)|T{u̅(x)γ _μQ(x), j_H(0)} |0⟩
= Π[q^2, (p+q)^2] p_μ + Π̃[q^2, (p+q)^2] q_μ.
where j_H=(m_Q Q̅ iγ_5 d) with Q = (b,c)-quark for (B,D) meson, respectively. The LCSR calculation for the B(D)^+ →η_q TFFs is similar to the case of B_s(D_s)→η_s, which has been done in Ref.<cit.>. In the following, we will give the main procedures for self-consistency, and the interesting reader may turn to Ref.<cit.> for more detail.
The dual property of the correlator (<ref>) is used to connect the two different representations in different momentum transfer regions. In the time-like region, one can insert a complete set of the intermediate hadronic states in the correlator and obtain its hadronic representation by isolating out the pole term of the lowest meson state, i.e.
Π_μ^ had(p,q)=⟨η_q (p)|u̅γ_μ Q|H(p+q)⟩⟨ H(p+q)|Q̅iγ_5q|0⟩/m_H^2-(p+q)^2
+∑_ H⟨η_q (p)|u̅γ_μ Q|H^ H(p+q)⟩⟨ H^ H(p+q)|Q̅ iγ_5q|0⟩/m_H^ H^2-(p+q)^2
= Π^ had[q^2,(p+q)^2]p_μ+Π̃^ had[q^2,(p+q)^2]q_μ,
where the superscript “had" and “H" stand for the hadronic expression of the correlator and the continuum states of heavy meson, respectively. Here, the decay constant of B(D)-meson is defined via the equation, ⟨ H|Q̅iγ_5q|0⟩ = m_H^2 f_H/m_Q, and by using the hadronic dispersion relations in the virtuality (p+q)^2 of the current in the B(D) channel, we can relate the correlator to the H→η_q matrix element <cit.>
⟨η_q (p)|u̅γ_μ Q| H(p+q)⟩ = 2p_μ f^H→η_q_+(q^2)
+ q_μ( f^H→η_q_+(q^2) + f^H→η_q_- (q^2)).
Due to chiral suppression, only the first term contributes to the semileptonic decay of H→η_q with massless leptons in the final state. Then, the hadronic expression for the invariant amplitude can be written as
Π[q^2,(p+q)^2] = 2m_H^2 f_H f_+^H→η_q (q^2)/ [m_H^2 - (p+q)^2]
+ ∫_s_0^∞ ds ρ^ H (q^2,s)/[s - (p+q)^2],
where s_0 is the continuum threshold parameter and ρ^ H is the hadronic spectral density.
In the space-like region, the correlator can be calculated by using the operator product expansion (OPE). The OPE near the light cone x^2 ≈ 0 leads to a convolution of perturbatively calculable hard-scattering amplitudes and universal soft LCDAs. Since the contribution of the three-particle part is small <cit.>, we only calculate the two-particle part here, and the corresponding matrix element is <cit.>
⟨η_q (p)|u̅_α ^i(x)d_β ^j(0)|0⟩ = iδ ^ij/12f_η_q∫_0^1 due^iup · x{[ pγ _5]_βαϕ _2;η_q
(u)-[γ _5]_βαμ _η_qϕ _3;η_q ^p(u) + 1/6[σ _ντγ _5]_βαp_νx_τμ _η_qϕ _3;η_q ^σ (u)
+ 1/16 [ pγ _5]_βαx^2ϕ _4;η_q (u) - i/2[ xγ _5]_βα∫_0^u ψ _4;η_q (v)dv}
In the light-cone expansion valid for q^2, (p+q)^2 ≪ m_b^2 (or m_c^2), the correlator Π^ OPE can be written in the general form
Π^ OPE[q^2,(p+q)^2] = F_0(q^2,(p+q)^2)
+ α_s C_F/4π F_1(q^2,(p+q)^2).
In the above equation, the first term is the leading-order (LO) contribution of all the LCDAs, and the second term stands for the gluon radiative corrections to the dominant twist-2 part.
After an analytic continuation of the light-cone expansion to physical momenta using dispersion relations, one equates the above two representations by invoking quark-hadron duality. Then, to obtain the final LCSR, one applies the Borel transformation, which results in
f^H→η_q_+ (q^2) = e^m_H^2/M^2/2m_H^2 f_H[ F_0(q^2,M^2,s_0)
+α_s C_F/4π F_1(q^2,M^2,s_0)],
where F_0 (F_1) represents the leading-order (next-to-leading order, NLO) contribution. Our final LCSR for the H→η_q TFF is
f^H→η_q_+(q^2) = m_Q^2 f_η_q/(2m_H^2 f_H) e^m_H^2/M^2∫_u_0^1 du e^-s(u)/M^2{ϕ_2;η_q(u)/u + μ_η_q/m_Q[ϕ_3;η_q^p(u) + 1/6 (2ϕ_3;η_q^σ(u)/u - (m_Q^2+q^2-u^2m_η^2)/(m_Q^2-q^2+u^2m_η^2)
×d/duϕ_3;η_q^σ(u) + 4um_η^2m_Q^2/(m_Q^2 - q^2 + u^2m_η^2)^2 ϕ_3;η_q^σ(u))] + 1/(m_Q^2-q^2+u^2 m_η^2)[uψ_4;η_q(u) + (1-
2 u^2 m_η^2/(m_Q^2 - q^2 + u^2 m_η^2))
×∫_0^u dv ψ_4;η_q(v) - (m_Q^2/4) u/(m_Q^2 - q^2 + u^2m_η^2)(d^2/du^2 - 6um_η^2/(m_Q^2-q^2+u^2m_η^2) d/du + 12um_η^4/(m_Q^2 - q^2 + u^2m_η^2)^2)ϕ_4;η_q(u)]}
+ α_s C_F e^m_H^2/M^2/(8π m_H^2 f_H) F_1(q^2,M^2,s_0),
where u̅=1-u, μ_η_q=m^2_η/(m_u+m_d), s(u) = ( m_Q^2 - u̅ q^2 + uu̅ m_η^2 )/u and u_0 = [q^2 - s_0 + m_η ^2 + √((q^2 - s_0 + m_η ^2 )^2 - 4m_η ^2(q^2 - m_Q^2))]/(2m_η^2). The invariant amplitude F_1(q^2,M^2,s_0) has been given in Ref. <cit.> and can be written as a factorized form of convolutions. As will be shown below, the high-twist terms give quite small contributions compared with the leading-twist term, so we do not discuss the uncertainties caused by different choices of the high-twist LCDAs. For convenience, we take the η_q twist-3 LCDAs ϕ_3;η_q^p(u), ϕ_3;η_q^σ (u) and the twist-4 LCDAs ψ_4;η_q(u), ϕ_4;η_q(u), together with their parameters, from Ref. <cit.>.
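As a rough illustration of how this sum rule is evaluated numerically, the following Python sketch implements only the leading twist-2 LO term of the above LCSR, with the asymptotic DA 6u(1-u) as a stand-in for ϕ_2;η_q(u); the twist-3, twist-4 and NLO pieces included in our actual numerics are omitted, and the inputs are the illustrative D-meson values quoted in the numerical section.

import numpy as np
from scipy.integrate import quad

# Placeholder inputs for the D -> eta_q case (GeV units)
m_Q, m_H, m_eta = 1.27, 1.870, 0.5478
f_H, f_etaq     = 0.142, 0.141
M2, s0          = 3.0, 7.0                 # Borel parameter and threshold, GeV^2
phi2 = lambda u: 6*u*(1-u)                 # asymptotic DA as a stand-in

def f_plus_twist2_LO(q2):
    """Leading twist-2 LO piece of the LCSR only; not the full TFF."""
    u0 = (q2 - s0 + m_eta**2 + np.sqrt((q2 - s0 + m_eta**2)**2
          - 4*m_eta**2*(q2 - m_Q**2))) / (2*m_eta**2)
    s  = lambda u: (m_Q**2 - (1-u)*q2 + u*(1-u)*m_eta**2)/u
    integrand = lambda u: np.exp(-s(u)/M2) * phi2(u)/u
    val, _ = quad(integrand, u0, 1.0)
    return m_Q**2*f_etaq/(2*m_H**2*f_H) * np.exp(m_H**2/M2) * val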
Using the resultant B(D)→η^(') TFFs, one can extract the CKM matrix element |V_ cd| or |V_ ub| by comparing the predictions with the experimental data, i.e. via the following equation <cit.>
B(H→η^(')ℓν_ℓ )/τ (H) = ∫_0^q^2_ max dq^2 dΓ/dq^2 (H→η^(')ℓν_ℓ),
where τ (H) is the H-meson lifetime, and the maximum of the squared momentum transfer q^2_ max = (m_H - m_η^('))^2.
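Schematically, the extraction proceeds as in the following hedged numerical sketch; the reduced rate Γ̃ below (the q^2-integrated width with |V_cd|^2 factored out) is a placeholder number, not our actual integrated rate.

hbar  = 6.582e-25      # GeV s
B_exp = 1.11e-3        # illustrative branching fraction for D -> eta e nu
tau_D = 1.033e-12      # s
Gamma_tilde = 1.5e-14  # GeV, PLACEHOLDER for the CKM-independent integrated rate
Vcd = (B_exp * hbar / tau_D / Gamma_tilde) ** 0.5
print(f"|V_cd| ~ {Vcd:.3f}")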
§ NUMERICAL ANALYSIS
§.§ Input parameters
We adopt the following parameters in the numerical calculation. According to the Particle Data Group (PDG) <cit.>, we take the charm-quark mass m_c(m̅_c)=1.27±0.02 GeV and the b-quark mass m_b(m̅_b)=4.18^+0.03_-0.02 GeV; the η, η', D and B-meson masses are m_η =0.5478 GeV, m_η'=0.9578 GeV, m_D^+=1.870 GeV and m_B^+=5.279 GeV, respectively; the lifetimes of the B^+ and D^+ mesons are τ (B^ + )=1.638±0.004 ps and τ(D^ + )=1.033±0.005 ps, respectively; the current-quark masses of the light u and d quarks are m_u =2.16^+0.49_-0.26 MeV and m_d =4.67^+0.48_-0.17 MeV at the scale μ =2 GeV. For the decay constants f_B and f_D, we take f_B =0.215^+0.007_-0.007 GeV <cit.> and f_D=0.142±0.006 GeV <cit.>. The renormalization scale is set as the typical momentum flow μ_B=√(m^2_B-m̅_b^2)≈ 3 GeV for B-meson decays or μ_D ≈ 1.4 GeV for D-meson decays. We also need the values of the non-perturbative vacuum condensates up to dimension six, which include the double-quark condensates ⟨ qq̅⟩ and ⟨ g_sq̅q⟩ ^2, the quark-gluon condensate ⟨ g_sq̅σ TGq⟩, the four-quark condensate ⟨ g_s^2q̅q⟩ ^2, the double-gluon condensate ⟨α_s G^2 ⟩ and the triple-gluon condensate ⟨ g_s^3fG^3⟩, etc. We take their values as <cit.>,
⟨ qq̅⟩ = (-2.417_-0.114^+0.227)× 10^-2 GeV^3 ,
⟨ g_sq̅q⟩ ^2 = (2.082_-0.697^+0.734)× 10^-3 GeV^6 ,
⟨ g_sq̅σ TGq⟩ =(-1.934_-0.103^+0.188)× 10^-2 GeV^5 ,
⟨ g_s^2q̅q⟩ ^2 = (7.420_-2.483^+2.614)× 10^-3 GeV^6 ,
⟨α_s G^2 ⟩ = 0.038±0.011 GeV^4 ,
⟨ g_s^3fG^3⟩ ≈ 0.045 GeV^6 .
The ratio κ = ⟨ ss̅⟩/⟨ qq̅⟩= 0.74±0.03 is given in Ref. <cit.>. To make the calculation more accurate, all the vacuum condensates and current-quark masses need to be run from their initial values at the scale μ_0 to the required scale by using the renormalization group equations (RGE) <cit.>.
§.§ The η_q decay constant and the moments ⟨ξ _2;η_q^n⟩
The continuum threshold parameter (s_0) and the Borel parameter M^2 are two important parameters for the sum rules analysis. When calculating the decay constant f_η_q, one may set its continuum threshold to be close to the squared mass of the η' meson, i.e. s_0=0.95±0.1 GeV^2 <cit.>. To determine the allowable M^2 range, e.g. the Borel window, for the η_q decay constant, we adopt the following criteria,
* The continuum contribution is less than 30%;
* The contributions of the six-dimensional condensates are no more than 5%;
* The value of f_η_q is stable in the Borel window;
* The ⟨ξ _2;η_q^0⟩ |_μ_0 is normalized in the Borel window, e.g. ⟨ξ^0_2;η_q ⟩|_μ_0=1.
We plot the decay constant f_η_q versus the Borel parameter M^2 in Fig. <ref>, where the shaded band indicates the uncertainties from the errors of all the mentioned input parameters. The decay constant is flat in the allowable Borel window, which confirms the third criterion. Using the above four criteria and the chosen continuum threshold parameter, we list the numerical results for f_η_q in Table <ref>. As a comparison, we also present several predictions obtained with the QCDSR and LQCD approaches. Our prediction is in good agreement with the QCDSR 2000 <cit.> and the LQCD 2021 <cit.> results within errors. The slight difference from QCDSR 2000 arises because their calculation only includes contributions up to dimension-five operators, while our present one includes the dimension-six vacuum condensate terms. Using the determined f_η_q, we then determine the moments of its twist-2 LCDA. Similarly, several important conditions need to be satisfied before the moments of the η_q LCDA can be determined <cit.>.
Furthermore, in order to find a suitable Borel window for the moments, one can adopt criteria similar to those of the traditional sum rules, i.e. keeping the dimension-six condensate contribution below 5% and the continuum contribution below 40%. To determine the first two LCDA moments ⟨ξ _2;η_q^n⟩|_μ_0 with n=(2,4), we require the continuum contributions to be less than 35% and 40%, respectively. We find that the allowable Borel windows for the two moments ⟨ξ _2;η_q^2,4⟩|_μ are M^2∈[1.782,2.232] GeV^2 and M^2∈[2.740,3.258] GeV^2, respectively. The numerical results for the first two moments ⟨ξ _2;η_q^2,4⟩|_μ can then be obtained; at the initial scale μ_0 they are
⟨ξ _2;η_q ^2⟩ |_μ_0= 0.253±0.014,
⟨ξ _2;η_q ^4⟩ |_μ_0= 0.127±0.010.
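For transparency, the corresponding Gegenbauer moments follow from inverting the relations (<ref>); a minimal Python check with the above central values reproduces the a_2;η_q^2,4(μ_0) used in the next subsection.

# Invert <xi^2> = 1/5 + 12/35*a2 and <xi^4> = 3/35 + 8/35*a2 + 8/77*a4
xi2, xi4 = 0.253, 0.127                  # moments at mu_0 quoted above
a2 = 35.0/12.0 * (xi2 - 1.0/5.0)
a4 = 77.0/8.0  * (xi4 - 3.0/35.0 - 8.0/35.0*a2)
print(f"a2 = {a2:.3f}, a4 = {a4:.3f}")   # ~0.155 and ~0.057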
§.§ The LCHO model parameters for ϕ_2;η_q
Combining the normalization condition (<ref>), the probability P_η_q≈0.3 of the qq̅ Fock state, and the moments ⟨ξ _2;η_q^(2,4)⟩|_μ_0 shown in Eqs. (<ref>, <ref>), the determined LCHO parameters are shown in Table <ref> and the corresponding LCDA ϕ_2;η_q is given in Fig. <ref>. Its single-peak, double-humped behavior is caused by a_2;η_q ^2(μ _0)=0.156±0.042 and a_2;η_q^4(μ _0)=0.055±0.005, which are obtained by applying the relations (<ref>) to the moments ⟨ξ _2;η_q^n⟩ calculated from the sum rules (<ref>). In this paper, we take m_q=300 MeV in the following calculation and use Δ m_q=± 50 MeV to estimate its uncertainty. Table <ref> shows that the parameters B_2 and B_4 and the quark transverse momentum ⟨𝐤_^2⟩ _η_q increase with increasing constituent quark mass, while the harmonic parameter β _2;η_q decreases gradually. Experimentally, the average quark transverse momentum of the pion, ⟨𝐤_ ^2⟩_π, is approximately of order (300 MeV)^2. It is therefore reasonable to require √(⟨𝐤_^2⟩_η_q) to be of order a few hundred MeV <cit.>. For m_q=300±50 MeV, we numerically obtain ⟨𝐤_^2⟩_η_q=0.123^+0.003_-0.002 GeV^2 ≈ (351^+4_-3 MeV)^2, which is reasonable and indicates the internal consistency of the LCHO model parameters. Moreover, by using the RGE, one can obtain ϕ_2;η_q(x, μ) at any scale μ <cit.>. Fig. <ref> shows the LCDA ϕ_2;η_q at several typical scales with m_q=300 MeV. At low scales it shows a double-humped behavior; as the scale μ increases, the shape of ϕ_2;η_q becomes narrower, and when μ→∞ it tends to the single-peak asymptotic form for light mesons <cit.>, ϕ^ as_η_q(x,μ)|_μ→∞=6x(1-x).
We compare the LCHO model for the twist-2 LCDA ϕ_2;η_q with other theoretical predictions in Fig. <ref>. Fig. <ref> gives the results for μ=μ_0=1 GeV, where the asymptotic form <cit.>, the CZ form <cit.> and the behaviors given by LCSR 2007 <cit.> and LCSR 2015 <cit.> are presented. For the LCSR 2007 result, the double-peaked behavior is caused by keeping only the first term of the Gegenbauer expansion together with the approximation a_2;η_q^2(μ _0)=a_2;η'_q^2(μ _0)=0.25 <cit.>. The LCDA used in LCSR 2015 <cit.> is close to our present one. It is obtained by assuming that the twist-2 LCDA ϕ_2;η_q has the same behavior as the pion twist-2 LCDA ϕ_2;π, e.g. a_2;η_q^2(μ _0)=a_2;π^2(μ _0)=0.17 and a_2;η_q^4(μ _0)=a_2;π^4(μ _0)=0.06, which are consistent with our Gegenbauer moments within errors [Since the twist-2 parts dominate the TFFs, this consistency also explains why our following LCSR predictions for the TFFs are close in shape to those of Ref.<cit.>.].
§.§ The TFFs and observable for the semileptonic decay B(D)^+→η^(')ℓ^+ν_ℓ
One of the most important applications of the η_q-meson LCDAs is the semileptonic decay H^+→η^(')ℓ^+ν_ℓ, whose main contribution in the QF scheme comes from the |η_q⟩-component. Here H^+ stands for B^+ or D^+, respectively. And to derive the required H^+→η^(') TFFs, we take the mixing angle ϕ=(41.2^+0.05_-0.06)^∘ <cit.>.
The continuum threshold s^H→η^(')_0 and the Borel parameter M^2 are two important parameters for the LCSR of the TFFs. Following the usual choice for heavy-to-light TFFs, we set the continuum threshold near the squared mass of the first excited state of the D or B meson, respectively. To fix the Borel window for the TFFs, we require the contribution of the continuum states to be less than 30%. The determined values agree with Refs.<cit.>, and we take the following values in our discussion
s_0^D→η= 7.0±0.5 GeV^2, M^2_D→η = 3.0±0.5 GeV^2.
s_0^D→η' = 7.0±0.5 GeV^2, M^2_D→η' = 3.0±0.5 GeV^2.
s_0^B→η = 37.0± 1.0 GeV^2, M^2_B→η = 18.0±2.0 GeV^2.
s_0^B→η' = 37.0±1.0 GeV^2, M^2_B→η' = 18.0±2.0 GeV^2.
Using Eqs.(<ref>, <ref>) together with the LCSR (<ref>) for the TFF f^H→η_q_+(q^2), we obtain the results for f_+^H→η^(')(q^2), where H represents B or D, respectively. Fig. <ref> shows how the total TFFs f_+^H→η^(')(q^2) vary with q^2, where the twist-2 contribution up to NLO QCD corrections, the twist-3 and the twist-4 contributions are presented separately. Fig. <ref> shows that the twist-2 terms dominate the TFFs. We also find that the NLO QCD corrections to the twist-2 terms are sizable and should be taken into consideration for a sound prediction. For example, at the large recoil point, the twist-2 NLO terms give about 15.8% (17.6%) and 6.4% (7.2%) contributions to the total TFFs f_+^D→η^(')(0) and f_+^B→η^(')(0), respectively. Table <ref> gives our present LCSR predictions for the TFFs f_+^D→η^(')(0) and f_+^B→η^(')(0). As a comparison, we also present in Table <ref> the results derived from various theoretical approaches and from experimental data, including the LCSR approach <cit.>, the pQCD approach <cit.>, the covariant light-front (CLF) approach <cit.>, the light-front quark model (LFQM) <cit.>, the covariant confining quark model (CCQM) <cit.>, and the BESIII Collaboration <cit.>. The uncertainties of the TFFs f_+^H→η^(')(0) caused by the different input parameters are listed as follows,
f_+^B→η(0) = 0.145(_-0.004^+0.004)_s_0(_-0.002^+0.002)_M^2(_-0.007^+0.007)_m_b f_B
(_-0.005^+0.005)_f_η_q(_-0.0001^+0.0001)_ϕ
= 0.145_-0.010^+0.009,
f_+^B→η'(0) = 0.128(_-0.003^+0.003)_s_0(_-0.002^+0.002)_M^2(_-0.006^+0.006)_m_b f_B
(_-0.005^+0.005)_f_η_q(_-0.0001^+0.0002)_ϕ
= 0.128_-0.009^+0.008,
f_+^D→η(0) = 0.329 (_-0.004^+0.003)_s_0 (_-0.005^+0.009)_M^2 (_-0.009^+0.016)_m_c f_D
(_-0.010^+0.010)_f_η_q(_-0.0003^+0.0002)_ϕ
= 0.329_-0.015^+0.021,
f_+^D→η'(0) = 0.294(_-0.004^+0.003)_s_0(_-0.005^+0.009)_M^2(_-0.011^+0.017)_m_c f_D
(_-0.009^+0.009)_f_η_q(_-0.0003^+0.0002)_ϕ
= 0.294_-0.015^+0.021.
Here the final equalities give the total errors obtained by adding all the mentioned uncertainties in quadrature.
The physically allowed ranges of the above four heavy-to-light TFFs are m_ℓ ^2 ≤q^2≤(m_D^ + - m_η)^2≈ 1.75 GeV^2, m_ℓ ^2 ≤q^2≤(m_D^ + - m_η' )^2≈ 0.84 GeV^2, m_ℓ ^2 ≤q^2≤(m_B^ + - m_η)^2≈ 22.40 GeV^2 and m_ℓ ^2 ≤q^2≤(m_B^ + - m_η' )^2≈ 18.67 GeV^2, respectively. The LCSR approach is applicable in the low and intermediate q^2 region, but its results can be extended to the whole q^2 region via proper extrapolation. In the present paper, we adopt the converging simplified series expansion (SSE) proposed in Refs.<cit.> to do the extrapolation, which suggests a simple parameterization for the heavy-to-light TFFs, e.g.
f_+^H→η^(')(q^2) = 1/1 - q^2/m_R^*^2∑_k b_k z^k(t,t_0)
where m_R^*=m_B^*=5.325 GeV (m_D^*=2.010 GeV) <cit.> are vector meson resonances, z(t,t_0) is a function
z(t,t_0) = √(t_+ - t) - √(t_+ - t_0)/√(t_+ -t) + √(t_+ - t_0).
Here t_± = (m_H^+±m_η^('))^2 and t_0 = t_+ (1 - √(1 - t_-/t_+)) is a free parameter. The free parameter b_k can be fixed by requiring Δ <1%, where the parameter Δ is used to measure the quality of extrapolation and it is defined as
Δ = ∑_t |F_i(t) - F_i^ fit(t)|/∑_t |F_i(t)|× 100,
where t ∈ [0,1/40, ⋯ ,40/40] × 13.0(1.0) GeV^2 for the η meson and t ∈ [0,1/40, ⋯ ,40/40] × 11.2(0.5) GeV^2 for the η' meson. The two coefficients b_1,2, obtained with all input parameters set to their central values, are listed in Table <ref>. The extrapolation quality parameter Δ is less than ∼ 0.8%. The extrapolated TFFs in the whole q^2-region are given in Fig. <ref>, where some typical theoretical and experimental results are presented for comparison, such as CCQM <cit.>, LFQM <cit.>, LCSR 2015 <cit.>, pQCD <cit.> and BESIII 2020 <cit.>. The solid lines in Fig. <ref> denote the central values of the LCSR predictions, and the shaded areas are the theoretical uncertainties from all the mentioned error sources. The thicker shaded bands represent the LCSR predictions extrapolated to the physically allowed q^2-region. Fig. <ref> indicates that: 1) Our present LCSR prediction for f_+^D→η(q^2) is in good agreement with the BESIII data <cit.>; 2) Our present LCSR prediction for f_+^D→η'(q^2) is consistent with the LFQM <cit.> and LCSR 2015 <cit.> predictions within errors; 3) Our present LCSR predictions for f_+^B→η^(')(q^2) are close to the LCSR 2015 predictions <cit.>, and their values at q^2= 0 are consistent with the pQCD predictions <cit.> within errors.
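A minimal Python sketch of this SSE extrapolation is given below for the D→η case; the resonance mass and kinematic factors follow the definitions above, while the determination of the coefficients b_k (done in our analysis by matching to the LCSR points with a standard least-squares routine) is not shown, so the coefficient values are to be supplied from Table <ref>.

import numpy as np

m_H, m_P, m_R = 1.870, 0.5478, 2.010      # D, eta, D* masses in GeV
t_plus  = (m_H + m_P)**2
t_minus = (m_H - m_P)**2
t0 = t_plus * (1 - np.sqrt(1 - t_minus/t_plus))

def z(t):
    return (np.sqrt(t_plus - t) - np.sqrt(t_plus - t0)) / \
           (np.sqrt(t_plus - t) + np.sqrt(t_plus - t0))

def f_SSE(q2, b):
    """Sum_k b_k z^k / (1 - q2/m_R^2); b = (b_0, b_1, b_2, ...) expansion coefficients."""
    series = sum(bk * z(q2)**k for k, bk in enumerate(b))
    return series / (1 - q2/m_R**2)

def Delta(f_lcsr, f_fit, ts):
    """Quality measure of the extrapolation, in percent, as defined above."""
    num = sum(abs(f_lcsr(t) - f_fit(t)) for t in ts)
    den = sum(abs(f_lcsr(t)) for t in ts)
    return 100.0 * num / den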
Fig. <ref> shows the differential decay widths for B(D)^+→η^(')ℓ^+ν_ℓ with the CKM matrix elements factored out. As a comparison, the predictions of different theoretical approaches and the experimental data, such as CCQM <cit.>, LFQM <cit.>, LCSR <cit.> and the BESIII collaboration <cit.>, are also presented. The differential decay width dΓ/|V_ cd|dq^2 (D^+→ηℓ^ + ν _ℓ) agrees with BESIII 2018 <cit.> and BESIII 2020 <cit.> within errors.
By matching the branching fractions and the decay lifetimes given by the PDG with the decay widths predicted by Eq.(<ref>), one may derive the CKM matrix elements |V_ub| and |V_cd|. We list our results in Table <ref>, where the errors are caused by all the mentioned error sources and by the PDG errors of the branching fractions and the decay lifetimes. Some typical measured values of |V_ub| and |V_cd| are also given in Table <ref>. The predicted |V_cd| is within the error range of the experimental result BESIII 2020. Using the fixed CKM matrix elements, our final predictions for the branching fractions are: B(D→η e ν_e ) = (1.11 ± 0.07) ×10^ - 3, B(D→ημν_μ) = (1.04 ± 0.11) ×10^ - 3, B(D→η' e ν_e) = (2.0 ± 0.4) ×10^ - 4, B(B→ηℓν_ℓ) = (3.9 ± 0.5) ×10^ - 5 and B(B→η' ℓν_ℓ) = (2.3 ± 0.8) ×10^ - 5, respectively.
§ SUMMARY
In this paper, we have suggested an LCHO model (<ref>) for the η_q-meson leading-twist LCDA ϕ _2;η_q(x,μ), whose moments have been calculated by using the QCD sum rules based on the QCD background field. Compared with the conventional Gegenbauer expansion of the LCDA, the LCHO model usually has better end-point behavior due to the BHL prescription, which is helpful for suppressing the end-point singularity in heavy-to-light meson decays. The QCD sum rule for the zeroth-order moment can be used to fix the η_q decay constant, and we obtain f_η_q=0.141±0.005 GeV. As an explicit application of ϕ_2;η_q, we then calculate the B(D)^+ →η^(') TFFs in the QF scheme for the η-η' mixing by using the QCD light-cone sum rules up to twist-4 accuracy and including the next-to-leading order QCD corrections to the dominant twist-2 part. Our LCSR predictions for the TFFs are consistent with most of the theoretical predictions and with the recent BESIII data within errors. By applying those TFFs, we obtain the decay widths of B(D)^+→η^(')ℓ^+ν_ℓ. The magnitudes of the CKM matrix elements |V_ ub| and |V_ cd| have also been discussed by inversely using the PDG values for the branching fractions and the decay lifetimes. Future more precise data from the high-luminosity Belle II experiment <cit.> and the super tau-charm factory <cit.> will be helpful to test all these results.
§ ACKNOWLEDGMENTS
This work was supported in part by the Chongqing Graduate Research and Innovation Foundation under Grant No. CYB23011 and No.ydstd1912, by the National Natural Science Foundation of China under Grant No.12175025, No.12265010, No.12265009 and No.12147102, the Project of Guizhou Provincial Department of Science and Technology under Grant No.ZK[2021]024 and No.ZK[2023]142, and the Project of Guizhou Provincial Department of Education under Grant No.KY[2021]030, and the Key Laboratory for Particle Physics of Guizhou Minzu University No.GZMUZK[2022]PT01.
99
Ke:2010htz
H. W. Ke, X. Q. Li and Z. T. Wei,
“Determining the η-η' mixing by the newly measured BR(D(D_s)→η(η')+l̅+ν_l”,
https://doi.org/10.1140/epjc/s10052-010-1383-6
Eur. Phys. J. C 69 (2010) 133.
Ke:2011fj
H. W. Ke, X. H. Yuan and X. Q. Li,
“Fraction of the gluonium component in η' and η”,
https://doi.org/10.1142/S0217751X11054796
Int. J. Mod. Phys. A 26 (2011) 4731.
Cao:2012nj
F. G. Cao,
“Determination of the η-η^' mixing angle”,
https://doi.org/10.1103/PhysRevD.85.057501
Phys. Rev. D 85 (2012) 057501.
Gilman:1987ax
F. J. Gilman and R. Kauffman,
“The eta Eta-prime Mixing Angle”,
https://doi:10.1103/PhysRevD.37.3348
Phys. Rev. D 36 (1987) 2761.
Ball:1995zv
P. Ball, J. M. Frere and M. Tytgat,
“Phenomenological evidence for the gluon content of eta and eta-prime”,
https://doi:10.1016/0370-2693(95)01287-7
Phys. Lett. B 365 (1996) 367.
Feldmann:2002kz
T. Feldmann and P. Kroll,
“Mixing of pseudoscalar mesons”,
https://doi.org/10.1238/Physica.Topical.099a00013
Phys. Scripta T 99 (2002) 13.
Kroll:2002nt
P. Kroll and K. Passek-Kumericki,
“The Two gluon components of the eta and eta-prime mesons to leading twist accuracy”,
https://doi:10.1103/PhysRevD.67.054017
Phys. Rev. D 67 (2003) 054017.
Ambrosino:2009sc
F. Ambrosino, A. Antonelli, M. Antonelli, F. Archilli, P. Beltrame, G. Bencivenni, S. Bertolucci, C. Bini, C. Bloise and S. Bocchetta, et al.
“A Global fit to determine the pseudoscalar mixing angle and the gluonium content of the eta-prime meson”,
https://doi:10.1088/1126-6708/2009/07/105
JHEP 07 (2009) 105.
Ball:2007hb
P. Ball and G. W. Jones,
“B →η^(') Form Factors in QCD”,
https://doi.org/10.1088/1126-6708/2007/08/025
JHEP 0708 (2007) 025.
Duplancic:2015zna
G. Duplancic and B. Melic,
“Form factors of B, B_s →η^' and D, D_s→η^' transitions from QCD light-cone sum rules”,
https://doi.org/10.1007/JHEP11(2015)138
JHEP 1511 (2015) 138.
Feldmann:1999uf
T. Feldmann,
“Quark structure of pseudoscalar mesons,”
https://doi.org/10.1142/S0217751X00000082
Int. J. Mod. Phys. A 15 (2000) 159.
Feldmann:1998su
T. Feldmann,
“Mixing and decay constants of pseudoscalar mesons: Octet singlet versus quark flavor basis”,
https://doi.org/10.1016/S0920-5632(99)00152-8
Nucl. Phys. B Proc. Suppl. 74 (1999) 151.
Feldmann:1998vh
T. Feldmann, P. Kroll and B. Stech,
“Mixing and decay constants of pseudoscalar mesons”,
https://doi.org/10.1103/PhysRevD.58.114006
Phys. Rev. D 58 (1998) 114006.
CLEO:2007vpk
N. E. Adam et al. [CLEO Collaboration],
“A Study of Exclusive Charmless Semileptonic B Decay and |V_(ub)|”,
https://doi.org/10.1103/PhysRevLett.99.041802
Phys. Rev. Lett. 99 (2007) 041802.
BESIII:2013iro
M. Ablikim et al. [BESIII Collaboration],
“Precision measurements of B(D^+ →μ^+ ν_μ), the pseudoscalar decay constant f_D^+, and the quark mixing matrix element |V_ cd|”,
https://doi.org/10.1103/PhysRevD.89.051104
Phys. Rev. D 89 (2014) 051104.
BaBar:2014xzf
J. P. Lees et al. [BaBar Collaboration],
“ Measurement of the D^0 →π^- e^+ ν_e differential decay branching fraction as a function of q^2 and study of form factor parameterizations”,
https://doi.org/10.1103/PhysRevD.91.052022
Phys. Rev. D 91 (2015) 052022.
CLEO:2009svp
D. Besson et al. [CLEO Collaboration],
“Improved measurements of D meson semileptonic decays to π and K mesons”,
https://doi.org/10.1103/PhysRevD.80.032005
Phys. Rev. D 80 (2009) 032005.
HFLAV:2019otj
Y. S. Amhis et al. [HFLAV],
“Averages of b-hadron, c-hadron, and τ-lepton properties as of 2018”,
https://doi.org/10.1103/10.1140/epjc/s10052-020-8156-7
Eur. Phys. J. C 81 (2021) 226.
Lubicz:2017syv
V. Lubicz et al. [ETM Collaboration],
“Scalar and vector form factors of D →π(K) ℓν decays with N_f=2+1+1 twisted fermions”,
https://doi.org/10.1103/PhysRevD.96.054514
Phys. Rev. D 96 (2017) 054514.
BaBar:2011xxm
J. P. Lees et al. [BaBar Collaboration],
“Study of B̅→ X_u ℓν̅ decays in BB̅ events tagged by a fully reconstructed B-meson decay and determination of |V_ub|”,
https://doi.org/10.1103/PhysRevD.86.032004
Phys. Rev. D 86 (2012) 032004.
Belle:2005uxj
I. Bizjak et al. [Belle Collaboration],
“Determination of |V_ub| from measurements of the inclusive charmless semileptonic partial rates of B mesons using full reconstruction tags”,
https://doi.org/10.1103/PhysRevLett.95.241801
Phys. Rev. Lett. 95 (2005) 241801.
Gonzalez-Solis:2018ooo
S. Gonzàlez-Solís and P. Masjuan,
“Study of B→πℓν_ℓ and B^+→η^(')ℓ^+ν_ℓ decays and determination of |V_ub|”,
https://doi.org/10.1103/PhysRevD.98.034027
Phys. Rev. D 98 (2018) 034027.
Zyla:2022zbs
P. A. Zyla et al. [Particle Data Group],
“Review of Particle Physics”,
https://doi.org/10.1093/ptep/ptaa104
PTEP 2022 (2022) 083C01.
CLEO:2008xqh
R. E. Mitchell et al. [CLEO Collaboration],
“Observation of D^+ →η e^+ ν(e)”,
https://doi.org/10.1103/PhysRevLett.102.081801
Phys. Rev. Lett. 102 (2009), 081801.
CLEO:2010pjh
J. Yelton et al. [CLEO Collaboration],
“Studies of D^+ →η', η, ϕ e^+ ν_e”,
https://doi.org/10.1103/PhysRevD.84.032001
Phys. Rev. D 84 (2011), 032001.
BESIII:2018eom
M. Ablikim et al. [BESIII Collaboration],
“Study of the decays D^+→η^(') e^+ν_e”,
https://doi.org/10.1103/PhysRevD.97.092009
Phys. Rev. D 97 (2018) 092009.
Ablikim:2020hsc
M. Ablikim [BESIII Collaboration],
“First Observation of D^+ →ημ^+ν_μ and Measurement of Its Decay Dynamics”,
https://doi.org/10.1103/PhysRevLett.124.231801
Phys. Rev. Lett. 124 (2020) 231801.
BaBar:2008byi
B. Aubert et al. [BaBar Collaboration],
“Measurements of B →{π, η, η^'}ℓν_ℓ Branching Fractions and Determination of |V_ub| with Semileptonically Tagged B Mesons”,
https://doi.org/10.1103/PhysRevLett.101.081801
Phys. Rev. Lett. 101 (2008) 081801.
BaBar:2010npl
P. del Amo Sanchez et al. [BaBar Collaboration],
“Measurement of the B^0 →π^ℓℓ^+ ν and B^+ →η^(')ℓ^+ ν Branching Fractions, the B^0 →π^- ℓ^+ ν and B^+ →ηℓ^+ ν Form-Factor Shapes, and Determination of |V_ub|”,
https://doi.org/10.1103/PhysRevD.83.052011
Phys. Rev. D 83 (2011) 052011.
Belle:2017pzx
C. Beleño et al. [Belle Collaboration],
“Measurement of the decays B→ηℓν_ℓ and B→η'ℓν_ℓ in fully reconstructed events at Belle”,
https://doi.org/10.1103/PhysRevD.96.091102
Phys. Rev. D 96 (2017) 091102.
Belle:2021hah
U. Gebauer et al. [Belle Collaboration],
“Measurement of the branching fractions of the B^+ →ηℓ^+ ν_ℓ and B^+ →η^'ℓ^+ ν_ℓ decays with signal-side only reconstruction in the full q^2 range”,
https://doi.org/10.1103/PhysRevD.106.032013
Phys. Rev. D 106 (2022) 032013.
Cheng:2010yd
H. Y. Cheng and K. C. Yang,
“Charmless Hadronic B Decays into a Tensor Meson”,
https://doi.org/10.1103/PhysRevD.83.034001
Phys. Rev. D 83 (2011) 034001 (2011).
Braun:1988qv
V. M. Braun and I. E. Filyanov,
“QCD Sum Rules in Exclusive Kinematics and Pion Wave Function”,
https://doi.org/10.1007/BF01548594
Z. Phys. C 44 (1989) 157.
Balitsky:1989ry
I. I. Balitsky, V. M. Braun and A. V. Kolesnichenko,
“Radiative Decay Σ^+ → p γ in Quantum Chromodynamics”,
https://doi.org/10.1016/0550-3213(89)90570-1
Nucl. Phys. B 312 (1989) 509.
Chernyak:1990ag
V. L. Chernyak and I. R. Zhitnitsky,
“B meson exclusive decays into baryons”,
https://doi.org/10.1016/0550-3213(90)90612-H
Nucl. Phys. B 345 (1990) 137.
Ball:1991bs
P. Ball, V. M. Braun and H. G. Dosch,
“Form-factors of semileptonic D decays from QCD sum rules”,
https://doi.org/10.1103/PhysRevD.44.3567
Phys. Rev. D 44 (1991) 3567.
Brodsky:1981jv
S. J. Brodsky, T. Huang and G. P. Lepage,
“Hadronic wave functions and high momentum transfer interactions in quantum chromodynamics”,
Conf. Proc. C 810816 (1981) 143. SLAC-PUB-16520.
Lepage:1982gd
G. P. Lepage, S. J. Brodsky, T. Huang and P. B. Mackenzie,
“Hadronic Wave Functions in QCD”,
CLNS-82-522.
Huang:1986wm
T. Huang, X. N. Wang, X. D. Xiang and S. J. Brodsky,
“The Quark Mass and Spin Effects in the Mesonic Structure”,
https://doi.org/10.1103/PhysRevD.35.1013
Phys. Rev. D 35 (1987) 1013.
Huang:1989gv
T. Huang and Z. Huang,
“Quantum Chromodynamics in Background Fields”,
https://doi.org/10.1103/PhysRevD.39.1213
Phys. Rev. D 39 (1989) 1213.
Shifman:1978bx
M. A. Shifman, A. I. Vainshtein and V. I. Zakharov,
“QCD and Resonance Physics. Theoretical Foundations”,
https://doi.org/10.1016/0550-3213(79)90022-1
Nucl. Phys. B 147 (1979) 385.
Hubschmid:1982pa
W. Hubschmid and S. Mallik,
“Operator Expansion At Short Distance In QCD”,
https://doi.org/10.1016/0550-3213(82)90134-1
Nucl. Phys. B 207 (1982) 29.
Govaerts:1984bk
J. Govaerts, F. de Viron, D. Gusbin and J. Weyers,
“QCD Sum Rules and Hybrid Mesons”,
https://doi.org/10.1016/0550-3213(84)90583-2
Nucl. Phys. B 248 (1984) 1.
Reinders:1984sr
L. J. Reinders, H. Rubinstein and S. Yazaki,
“Hadron Properties from QCD Sum Rules”,
https://doi.org/10.1016/0370-1573(85)90065-1
Phys. Rept. 127 (1985) 1.
Elias:1987ac
V. Elias, T. G. Steele and M. D. Scadron,
“q q̅ and Higher Dimensional Condensate Contributions to the Nonperturbative Quark Mass”,
https://doi.org/10.1103/PhysRevD.38.1584
Phys. Rev. D 38 (1988) 1584.
Zhang:2021wnv
Y. Zhang, T. Zhong, H. B. Fu, W. Cheng and X. G. Wu,
“Ds-meson leading-twist distribution amplitude within the QCD sum rules and its application to the B_s→ D_s transition form factor”,
https://doi.org/10.1103/PhysRevD.103.114024
Phys. Rev. D 103 (2021) 114024.
Zhong:2018exo
T. Zhong, Y. Zhang, X. G. Wu, H. B. Fu and T. Huang,
“The ratio ℛ(D) and the D-meson distribution amplitude”,
https://doi.org/10.1140/epjc/s10052-018-6387-7
Eur. Phys. J. C 78 (2018) 937.
Fu:2018vap
H. B. Fu, L. Zeng, W. Cheng, X. G. Wu and T. Zhong,
“Longitudinal leading-twist distribution amplitude of the J/ψ meson within the background field theory”,
https://doi.org/10.1103/PhysRevD.97.074025
Phys. Rev. D 97 (2018) 074025.
Zhang:2017rwz
Y. Zhang, T. Zhong, X. G. Wu, K. Li, H. B. Fu and T. Huang,
“Uncertainties of the B→ D transition form factor from the D-meson leading-twist distribution amplitude”,
https://doi.org/10.1140/epjc/s10052-018-5551-4
Eur. Phys. J. C 78 (2018) 76.
Fu:2016yzx
H. B. Fu, X. G. Wu, W. Cheng and T. Zhong,
“ρ -meson longitudinal leading-twist distribution amplitude within QCD background field theory”,
https://doi.org/10.1103/PhysRevD.94.074004
Phys. Rev. D 94 (2016) 074004.
Hu:2021zmy
D. D. Hu, H. B. Fu, T. Zhong, L. Zeng, W. Cheng and X. G. Wu,
“η-meson leading-twist distribution amplitude within QCD sum rule approach and its application to the semi-leptonic decay D_s^+ →ηℓ^+ν_ℓ”,
https://doi.org/10.1140/epjc/s10052-021-09958-0
Eur. Phys. J. C 82 (2022) 12.
Bali:2014pva
G. S. Bali, S. Collins, S. Dürr and I. Kanamori,
“D_s →η, η' semileptonic decay form factors with disconnected quark loop contributions”,
https://doi.org/10.1103/PhysRevD.91.014503
Phys. Rev. D 91 (2015) 014503.
Cheng:2020vwr
S. Cheng, A. Khodjamirian and A. V. Rusov,
“Pion light-cone distribution amplitude from the pion electromagnetic form factor”,
https://doi.org/10.1103/PhysRevD.102.074022
Phys. Rev. D 102 (2020) 074022.
Zhong:2021epq
T. Zhong, Z. H. Zhu, H. B. Fu, X. G. Wu and T. Huang,
“Improved light-cone harmonic oscillator model for the pionic leading-twist distribution amplitude”,
https://doi.org/10.1103/PhysRevD.104.016021
Phys. Rev. D 104 (2021) 016021.
Ball:2004ye
P. Ball and R. Zwicky,
“New results on B →π, K, η decay formfactors from light-cone sum rules”,
https://doi.org/10.1103/PhysRevD.71.014015
Phys. Rev. D 71 (2005) 014015.
DeFazio:2000my
F. De Fazio and M. R. Pennington,
“Radiative ϕ meson decays and η - η^' mixing: A QCD sum rule analysis”,
https://doi.org/10.1088/1126-6708/2000/07/051
JHEP 07 (2000) 051.
Ali:1998eb
A. Ali, G. Kramer and C. D. Lu,
“Experimental tests of factorization in charmless nonleptonic two-body B decays”,
https://doi.org/10.1103/PhysRevD.58.094009
Phys. Rev. D 58 (1998) 094009.
Dhiman:2019qaa
N. Dhiman, H. Dahiya, C. R. Ji and H. M. Choi,
“Study of twist-2 distribution amplitudes and the decay constants of pseudoscalar and vector heavy mesons in light-front quark model”,
https://doi.org/10.22323/1.374.0038
PoS LC2019 (2019) 038.
Hwang:2010hw
C. W. Hwang,
“Analyses of decay constants and light-cone distribution amplitudes for S-wave heavy meson”,
https://doi.org/10.22323/1.374.0038
Phys. Rev. D 81 (2010) 114024.
Geng:2016pyr
C. Q. Geng, C. C. Lih and C. Xia,
“Some heavy vector and tensor meson decay constants in light-front quark model”,
https://doi.org/10.1140/epjc/s10052-016-4172-z
Eur. Phys. J. C 76 (2016) 313.
Choi:2007se
H. M. Choi,
“Decay constants and radiative decays of heavy mesons in light-front quark model”,
https://doi.org/10.1103/PhysRevD.75.073016
Phys. Rev. D 75 (2007) 073016.
Dercks:2017lfq
D. Dercks, H. Dreiner, M. E. Krauss, T. Opferkuch and A. Reinert,
“R-Parity Violation at the LHC”,
https://doi.org/10.1140/epjc/s10052-017-5414-4
Eur. Phys. J. C 77 (2017) 856.
Becirevic:1998ua
D. Becirevic, P. Boucaud, J. P. Leroy, V. Lubicz, G. Martinelli, F. Mescia and F. Rapuano,
“Nonperturbatively improved heavy - light mesons: Masses and decay constants”,
https://doi.org/10.1103/PhysRevD.60.074501
Phys. Rev. D 60 (1999) 074501.
FermilabLattice:2014tsy
A. Bazavov et al. [Fermilab Lattice and MILC],
“Charmed and Light Pseudoscalar Meson Decay Constants from Four-Flavor Lattice QCD with Physical Light Quarks”,
https://doi.org/10.22323/1.374.0038
Phys. Rev. D 90 (2014) 074509.
Bali:2021qem
G. S. Bali et al. [RQCD Collaboration],
“Masses and decay constants of the η and η' mesons from lattice QCD”,
https://doi.org/10.1007/JHEP08(2021)137
JHEP 08 (2021) 137.
Cvetic:2004qg
G. Cvetic, C. S. Kim, G. L. Wang and W. Namgung,
“Decay constants of heavy meson of 0-state in relativistic Salpeter method”,
https://doi.org/10.1016/j.physletb.2004.06.092
Phys. Lett. B 596 (2004) 84.
Wang:2005qx
G. L. Wang,
“Decay constants of heavy vector mesons in relativistic Bethe-Salpeter method”,
https://doi.org/10.1016/j.physletb.2005.12.005
Phys. Lett. B 633 (2006) 492.
Bhatnagar:2009jg
S. Bhatnagar, S. Y. Li and J. Mahecha,
“power counting of various Dirac covariants in hadronic Bethe-Salpeter wavefunctions for decay constant calculations of pseudoscalar mesons”,
https://doi.org/10.1142/S0218301311018460
Int. J. Mod. Phys. E 20 (2011) 1437.
Hwang:1996ha
D. S. Hwang and G. H. Kim,
“Decay constants of B, B^* and D, D^* mesons in relativistic mock meson model”,
https://doi.org/10.1103/PhysRevD.55.6944
Phys. Rev. D 55 (1997) 6944.
Capstick:1989ra
S. Capstick and S. Godfrey,
“Pseudoscalar Decay Constants in the Relativized Quark Model and Measuring the CKM Matrix Elements”,
https://doi.org/10.1103/PhysRevD.41.2856
Phys. Rev. D 41 (1990) 2856.
Ebert:2006hj
D. Ebert, R. N. Faustov and V. O. Galkin,
“Relativistic treatment of the decay constants of light and heavy mesons”,
https://doi.org/10.1016/j.physletb.2006.02.042
Phys. Lett. B 635 (2006) 93.
Yazarloo:2016luc
B. H. Yazarloo and H. Mehraban,
“Study of B and B_s mesons with a Coulomb plus exponential type potential”,
https://doi.org/10.1209/0295-5075/116/31004
EPL 116 (2016) 31004.
Guo:1991eb
X. H. Guo and T. Huang,
“Hadronic wavefunctions in D and B decays”,
https://doi.org/10.1103/PhysRevD.43.2931
Phys. Rev. D 43 (1991) 2931.
Huang:1994dy
T. Huang, B. Q. Ma and Q. X. Shen,
“Analysis of the pion wavefunction in light cone formalism”,
https://doi.org/10.1103/PhysRevD.49.1490
Phys. Rev. D 49 (1994) 1490.
Jaus:1991cy
W. Jaus,
“Relativistic constituent quark model of electroweak properties of light mesons”,
https://doi.org/10.1103/PhysRevD.44.2851
Phys. Rev. D 44 (1991) 2851.
Choi:1996mq
H. M. Choi and C. R. Ji,
“Light cone quark model predictions for radiative meson decays”,
https://doi.org/10.1016/S0375-9474(97)00052-3
Nucl. Phys. A 618 (1997) 291.
Ji:1992yf
C. R. Ji, P. L. Chung and S. R. Cotanch,
“Light cone quark model axial vector meson wave function”,
https://doi.org/10.1103/PhysRevD.45.4214
Phys. Rev. D 45 (1992) 4214.
Wu:2008yr
X. G. Wu and T. Huang,
“Kaon Electromagnetic Form-Factor within the k(T) Factorization Formalism and It's Light-Cone Wave Function”,
https://doi.org/10.1088/1126-6708/2008/04/043
JHEP 04 (2008) 043 (2008).
Wu:2011gf
X. G. Wu and T. Huang,
“Constraints on the Light Pseudoscalar Meson Distribution Amplitudes from Their Meson-Photon Transition Form Factors”,
https://doi.org/10.1103/PhysRevD.84.074011
Phys. Rev. D 84 (2011) 074011 (2011).
Huang:2013yya
T. Huang, T. Zhong and X. G. Wu,
“Determination of the pion distribution amplitude”,
https://doi.org/10.1103/PhysRevD.88.034013
Phys. Rev. D 88, 034013 (2013).
Wu:2012kw
X. G. Wu, T. Huang and T. Zhong,
“Information on the Pion Distribution Amplitude from the Pion-Photon Transition Form Factor with the Belle and BaBar Data”,
https://doi.org/10.1088/1674-1137/37/6/063105
Chin. Phys. C 37, 063105 (2013).
Duplancic:2008ix
G. Duplancic, A. Khodjamirian, T. Mannel, B. Melic and N. Offen,
“Light-cone sum rules for B→π form factors revisited”,
https://doi.org/10.1088/1126-6708/2008/04/014
JHEP 04 (2008) 014.
Fu:2013wqa
H. B. Fu, X. G. Wu, H. Y. Han, Y. Ma and T. Zhong,
“|V_cb| from the semileptonic decay B→ D ℓν̅_ℓ and the properties of the D meson distribution amplitude”,
https://doi.org/10.1016/j.nuclphysb.2014.04.021
Nucl. Phys. B 884 (2014) 172.
Colangelo:2000dp
P. Colangelo and A. Khodjamirian,
“QCD sum rules, a modern perspective,”
https://doi.org/10.1142/9789812810458_0033
arXiv:hep-ph/0010175 [hep-ph].
Narison:2014wqa
S. Narison,
“Mini-review on QCD spectral sum rules”,
https://doi.org/10.1016/j.nuclphysbps.2015.01.041
Nucl. Part. Phys. Proc. 258 (2015) 189.
Narison:2014ska
S. Narison,
“Improved f_D*_(s), f_B*_(s) and f_B_c from QCD Laplace sum rules,”
https://doi.org/10.1142/S0217751X1550116X
Int. J. Mod. Phys. A 30 (2015) 1550116.
Metcalf:1979iw
W. J. Metcalf, I. J. R. Aitchison, J. LeBritton, D. McCal, A. C. Melissinos, A. P. Contogouris, S. Papadopoulos, J. Alspector, S. Borenstein and G. R. Kalbfleisch, et al.
“The Magnitude of Parton Intrinsic Transverse Momentum”,
https://doi.org/10.1016/0370-2693(80)90449-9
Phys. Lett. B 91 (1980) 275.
Lepage:1980fj
G. P. Lepage and S. J. Brodsky,
“Exclusive Processes in Perturbative Quantum Chromodynamics,”
https://doi.org/10.1103/PhysRevD.22.2157
Phys. Rev. D 22, 2157 (1980).
Chernyak:1981zz
V. L. Chernyak and A. R. Zhitnitsky,
“Exclusive Decays of Heavy Mesons,”
https://doi.org/10.1016/0550-3213(83)90251-1
Nucl. Phys. B 201, 492 (1982).
Wang:2014vra
Z. G. Wang,
“B-S transition form-factors with the light-cone QCD sum rules”,
https://doi.org/10.1103/PhysRevD.78.059901
Eur. Phys. J. C 75 (2015) 50.
Offen:2013nma
N. Offen, F. A. Porkert and A. Schäfer,
“Light-cone sum rules for the D_s→η^(')ℓν_ℓ form factor”,
https://doi.org/10.1103/PhysRevD.88.034023
Phys. Rev. D 88 (2013) 034023.
Charng:2006zj
Y. Y. Charng, T. Kurimoto and H. n. Li,
“Gluonic contribution to B →η^(') form factors”,
https://doi.org/10.1103/PhysRevD.78.059901
Phys. Rev. D 74 (2006) 074024.
Chen:2009qk
C. H. Chen, Y. L. Shen and W. Wang,
“|V(ub)| and B →η^(') Form Factors in Covariant Light Front Approach”,
https://doi.org/10.1016/j.physletb.2010.02.056
Phys. Lett. B 686 (2010) 118.
Verma:2011yw
R. C. Verma,
“Decay constants and form factors of s-wave and p-wave mesons in the covariant light-front quark model”,
https://doi.org/10.1088/0954-3899/39/2/025005
J. Phys. G 39 (2012) 025005.
Ivanov:2019nqd
M. A. Ivanov, J. G. Körner, J. N. Pandya, P. Santorelli, N. R. Soni and C. T. Tran,
“Exclusive semileptonic decays of D and D_s mesons in the covariant confining quark model”,
https://doi.org/10.1007/s11467-019-0908-1
Front. Phys. (Beijing) 14 (2019) 64401.
Bourrely:2008za
C. Bourrely, I. Caprini and L. Lellouch,
“Model-independent description of B →πℓν decays and a determination of |V_(ub)|”,
https://doi.org/10.1103/PhysRevD.82.099902
Phys. Rev. D 79 (2009) 013008.
Bharucha:2010im
A. Bharucha, T. Feldmann and M. Wick,
“Theoretical and Phenomenological Constraints on Form Factors for Radiative and Semi-Leptonic B-Meson Decays”,
https://doi.org/10.1007/JHEP09(2010)090
JHEP 09 (2010) 090.
FermilabLattice:2015mwy
J. A. Bailey et al. [Fermilab Lattice and MILC],
“|V_ub| from B→πℓν decays and (2+1)-flavor lattice QCD”,
https://doi.org/10.1103/PhysRevD.92.014024
Phys. Rev. D 92 (2015) 014024.
Belle-II:2018jsg
E. Kou et al. [Belle-II],
“The Belle II Physics Book,”
https://doi.org/10.1093/ptep/ptz106
PTEP 2019, 123C01 (2019).
Achasov:2023gey
M. Achasov, X. C. Ai, R. Aliberti, Q. An, X. Z. Bai, Y. Bai, O. Bakina, A. Barnyakov, V. Blinov and V. Bobrovnikov, et al.
“STCF Conceptual Design Report: Volume I - Physics & Detector,”
https://arxiv.org/abs/2303.15790
arXiv:2303.15790 [hep-ex].
|
http://arxiv.org/abs/2307.04475v1 | 20230710105439 | Modelling opinion misperception and the emergence of silence in online social system | [
"Daniele Vilone",
"Eugenia Polizzi"
] | physics.soc-ph | [
"physics.soc-ph"
] |
media/
|
http://arxiv.org/abs/2307.05406v1 | 20230711161238 | Trotter24: A precision-guaranteed adaptive stepsize Trotterization for Hamiltonian simulations | [
"Tatsuhiko N. Ikeda",
"Keisuke Fujii"
] | quant-ph | [
"quant-ph",
"cond-mat.mtrl-sci",
"cond-mat.str-el",
"hep-lat",
"physics.comp-ph"
] |
[email protected]
RIKEN Center for Quantum Computing, Wako, Saitama 351-0198, Japan
Department of Physics, Boston University, Boston, Massachusetts 02215, USA
[email protected]
Graduate School of Engineering Science, Osaka University,
1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan.
Center for Quantum Information and Quantum Biology, Osaka University, 560-0043, Japan.
RIKEN Center for Quantum Computing, Wako, Saitama 351-0198, Japan
Fujitsu Quantum Computing Joint Research Division at QIQB,
Osaka University, 1-2 Machikaneyama, Toyonaka 560-0043, Japan
Choosing an optimal time step is crucial for an efficient Hamiltonian simulation based on Trotterization but difficult due to the complex structure of the Trotter error.
Here we develop a method measuring the Trotter error by combining the second- and fourth-order Trotterizations rather than consulting with mathematical error bounds.
Implementing this method, we construct an algorithm, which we name Trotter24, for adaptively using almost the largest stepsize , which keeps quantum circuits shallowest, within an error tolerance ϵ preset for our purpose.
Trotter24 applies to generic Hamiltonians, including time-dependent ones, and can be generalized to any orders of Trotterization.
Benchmarking it in a quantum spin chain, we find the adaptively chosen to be about ten times larger than that inferred from known upper bounds of Trotter errors.
Trotter24 allows us to keep the quantum circuit thus shallower within the error tolerance in exchange for paying the cost of measurements.
Trotter24: A precision-guaranteed adaptive stepsize Trotterization for Hamiltonian simulations
Keisuke Fujii
August 12, 2023
==============================================================================================
§ INTRODUCTION
The rapid development of quantum devices in recent years has led researchers to find useful applications with significant quantum advantage <cit.>.
Quantum many-body dynamics, or Hamiltonian, simulation is one of the most promising candidates because quantum computers could overcome the exponential complexity that classical computers face <cit.>, enabling us to address intriguing dynamical phenomena like nonequilibrium phases of matter <cit.> and to implement fundamental quantum algorithms like phase estimation <cit.>.
Among several algorithms for the Hamiltonian simulation, Trotterization <cit.> is and will be used most commonly in the current noisy intermediate-scale quantum (NISQ) era and the coming early fault-tolerant quantum computing (FTQC) era because it does not demand additional ancillary qubits or largely controlled quantum gates.
Indeed, quantum advantage in Trotterized dynamics simulation has been reported using a 127-qubit NISQ computer only recently <cit.>.
One major and presumably inevitable issue of Trotterization is the trade-off relation between the simulation accuracy and the circuit depth.
The k-th order Trotterization accompanies an error of O(^k+1) during a single time step , which decreases when is taken shorter.
In the meantime, the number of steps to reach a final time increases, meaning a deeper quantum circuit.
To suppress the gate depth, it is desirable to choose the largest possible stepsize , i.e., the shallowest circuit, within our error tolerance ϵ preset for our purposes.
However, it is difficult to find the optimal stepsize because the Trotter error is complex in generic many-body systems.
According to the previous studies on the Trotter error, its upper bounds <cit.> and typical values <cit.> are available.
If we choose so that the upper bound is below our tolerance ϵ, the precision is guaranteed, but tends to be too small, as we will see below.
On the other hand, if we choose based on the typical values, can be larger, but the precision guarantee is lost.
Recently, Zhao et al. <cit.> proposed an approach where δ t is chosen adaptively in each time step based on the energy expectation value and variance.
Yet, the precision guarantee of this method is still elusive, and the applicability is limited to time-independent Hamiltonians.
In this paper, we propose a precision-guaranteed method for choosing almost the largest within a preset error tolerance ϵ, whose key concept is illustrated in Fig. <ref>.
The stepsize is chosen based on the measurement of the actual error and, thereby, optimal to achieve our tolerance.
The measurement of the Trotter error is executable on a quantum circuit, without knowing the exact solution, with the help of a higher-order Trotterization formula, like in the Runge-Kutta-Fehlberg (RKF) method for classical simulations.
We mainly focus on the second-order Trotterization supplemented by the fourth-order formula, naming the method Trotter24 in analogy to RFK45.
We benchmark Trotter24 in a quantum spin chain under time-independent and -dependent Hamiltonians, finding that the adaptively chosen is about ten times larger than that inferred from the upper bound of Trotter errors.
§ MEASURING TROTTER ERROR IN EACH STEP
For simplicity, we first consider a time-independent Hamiltonian H consisting of two parts,
H=A+B,
where A and B do not necessarily commute with each other.
Generalization to more noncommuting parts is straightforward, and we will generalize the arguments to time-dependent ones later in Sec. <ref>.
Before considering a sequence of time steps in later sections, we focus here on a single step and discuss how to measure the Trotter error.
For this purpose, we assume that the quantum state at time t is known to be |ψ(t)⟩ and consider evolving it by a small time step δ t,
|ψ(t+δ t)⟩ = U()|ψ(t)⟩ = e^-i H δ t|ψ(t)⟩.
Trotterization approximately decomposes e^-i H into quantum-gate-friendly parts consisting of either A and B.
Since the generalization to other formulas is straightforward, we focus, for concreteness, on the second-order formula T_2(δ t)
T_2(δ t) ≡ e^-i A δ t/2e^-i Bδ te^-i A δ t/2,
= e^-i H δ t+Υ_3
where Υ_3=O(δ t^3) is an anti-Hermitian error operator.
These relations imply
|ψ_2(t+δ t)⟩≡ T_2(δ t)|ψ(t)⟩
= |ψ(t+δ t)⟩ +O(^3),
meaning that T_2() approximates the exact one-step evolution within an error of O(^3).
To quantify the error arising in the one step, we adopt the conventional fidelity error
η_F ≡ 1-|⟨ψ(t+δ t)|ψ_2(t+δ t)⟩|^2.
We can also use other quantities depending on our purposes and make parallel arguments.
For example, when we are interested in the expectation value of an observable O, we care about the error in it,
η_O ≡⟨ψ(t+δ t)|O|ψ(t+δ t)⟩-⟨ψ_2(t+δ t)|O|ψ_2(t+δ t)⟩.
In either case, calculating η_F or η_O is difficult because we do not know the exactly evolved state |ψ(t+δ t)⟩.
We remark that, although |ψ_2(t+δ t)⟩ involves an O(δ t^3) error, η_F=O(δ t^6), while η_O=O(δ t^3) holds as expected.
This is because the leading O(δ t^3) term of 1-⟨ψ(t+δ t)|ψ_2(t+δ t)⟩ is pure imaginary as shown in Appendix <ref>,
and the leading-order contribution to η_F is given by
η_F=⟨ψ(t)|(iΥ_3)^2|ψ(t)⟩-⟨ψ(t)|(iΥ_3)|ψ(t)⟩^2+O(δ t^7),
where iΥ_3 is Hermitian.
Equation (<ref>) dictates that η_F is the variance of the “observable” iΥ_3,
giving a way to estimate η_F using |ψ(t)⟩ and the explicit form of iΥ_3.
Indeed this is a possible way of measuring η_F, but it requires, for generic many-body Hamiltonians, measuring numerous Hermitian operators involved in iΥ_3, which consists of doubly nested commutators between A and B.
Hence, in this work, we will focus on another way to estimate η_F and η_O with less sampling cost.
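For small systems this variance formula can be checked classically, since Υ_3 is obtained exactly from the matrix logarithm of T_2(δ t). The following Python sketch (illustrative only, with random Hermitian A and B rather than a specific model) compares the exact one-step η_F with the variance of iΥ_3.

import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
def rand_herm(d):
    M = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
    return (M + M.conj().T)/2

d, dt = 4, 0.05
A, B = rand_herm(d), rand_herm(d)
H = A + B
T2 = expm(-1j*A*dt/2) @ expm(-1j*B*dt) @ expm(-1j*A*dt/2)
Upsilon3 = logm(T2) + 1j*H*dt                 # anti-Hermitian error operator

psi = rng.normal(size=d) + 1j*rng.normal(size=d)
psi /= np.linalg.norm(psi)
exact = expm(-1j*H*dt) @ psi
eta_F = 1 - abs(np.vdot(exact, T2 @ psi))**2

X = 1j*Upsilon3                               # Hermitian "observable"
var = np.vdot(psi, X @ X @ psi).real - np.vdot(psi, X @ psi).real**2
print(eta_F, var)                             # agree up to higher-order corrections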
Our idea of estimating the errors is the following: In calculating η_F and η_O in the leading order, we can safely replace the exact |ψ(t+δ t)⟩ by a higher-order approximant.
For instance, we utilize a fourth-order formula known as
the Forest-Ruth-Suzuki formula <cit.>, T_4(δ t), given by
T_4(δ t) ≡ e^-i(s/2)Aδ te^-is Bδ te^-i[(1-s)/2]Aδ te^-i(1-2s)Bδ t
× e^-i[(1-s)/2]Aδ te^-is Bδ te^-i(s/2)Aδ t
=e^-i H δ t+Υ_5,
where s=(2-2^1/3)^-1 and Υ_5=O(δ t^5) is anti-Hermitian.
These expressions lead to
|ψ_4(t+δ t)⟩ ≡ T_4(δ t)|ψ(t)⟩
=|ψ(t+δ t)⟩ +O(δ t^5),
meaning that T_4(δ t) approximates the exact evolution within an error of O(δ t^5), which is two orders more accurate than Eq. (<ref>).
Replacing |ψ(t+δ t)⟩ by |ψ_4(t+δ t)⟩ in η_F and η_O, we obtain the following key analytical results (see Appendix <ref> for derivation):
For the fidelity error,
η_F = η_F^(24) + O(δ t^8),
η_F^(24) ≡ 1-|⟨ψ_4(t+δ t)|ψ_2(t+δ t)⟩|^2,
and, for the observable error,
η_O = η_O^(24) + O(δ t^5),
η_O^(24) ≡⟨ψ_4(t+δ t)|O|ψ_4(t+δ t)⟩
-⟨ψ_2(t+δ t)|O|ψ_2(t+δ t)⟩.
Given that η_F=O(δ t^6) and η_O=O(δ t^3), these results mean that η_F^(24) (η_O^(24)) coincides with η_F (η_O) in the leading order.
Remarkably, unlike η_F and η_O, η_F^(24) and η_O^(24) consist of T_2(δ t) and T_4(δ t) and are thereby implementable in quantum circuits.
In other words, we can estimate the deviation from the exact solution induced by T_2(δ t) without knowing the solution, when supplemented with the fourth-order Trotterization, up to higher-order corrections.
We emphasize that η_F and η_O are the actual Trotter errors specific to the current state |ψ(t)⟩.
This contrasts with the upper-bound arguments on the operator difference U(δ t)-T_2(δ t) <cit.>.
Such upper bounds apply to arbitrary states and are thus always larger than or equal to the error occurring at a specific state |ψ(t)⟩.
The fact that η_F and η_O are state-dependent enables us to choose δ t more accurately so that the error is below our tolerance, as we will see in detail below.
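These estimators are easy to verify numerically on a classical emulator. The following Python sketch (random Hermitian A and B, not a specific model of this work) builds T_2(δ t) and T_4(δ t) as matrices and compares η_F with η_F^(24).

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
def rand_herm(d):
    M = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
    return (M + M.conj().T)/2

d, dt = 4, 0.05
A, B = rand_herm(d), rand_herm(d)
s = 1/(2 - 2**(1/3))

def T2(dt):
    return expm(-1j*A*dt/2) @ expm(-1j*B*dt) @ expm(-1j*A*dt/2)

def T4(dt):
    return (expm(-1j*s*A*dt/2) @ expm(-1j*s*B*dt) @ expm(-1j*(1-s)*A*dt/2)
            @ expm(-1j*(1-2*s)*B*dt)
            @ expm(-1j*(1-s)*A*dt/2) @ expm(-1j*s*B*dt) @ expm(-1j*s*A*dt/2))

psi = rng.normal(size=d) + 1j*rng.normal(size=d)
psi /= np.linalg.norm(psi)
exact = expm(-1j*(A+B)*dt) @ psi
psi2, psi4 = T2(dt) @ psi, T4(dt) @ psi

eta_F   = 1 - abs(np.vdot(exact, psi2))**2   # true one-step error (needs the exact state)
eta_F24 = 1 - abs(np.vdot(psi4,  psi2))**2   # circuit-measurable estimator
print(eta_F, eta_F24)                         # differ only at higher order in dt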
We remark on why we use the fourth-order rather than Ruth's third-order formula <cit.>,
T_3(δ t) ≡ e^-i(7/24)Aδ te^-i(2/3) Bδ te^-i(3/4)Aδ te^i(2/3)Bδ t
× e^i(1/24)Aδ te^-i Bδ t
=e^-i H δ t+Υ_4,
where Υ_4=O(δ t^4).
Replacing T_4(δ t) by T_3(δ t), one can easily make similar arguments to define |ψ_3(t+δ t)⟩, η^(23)_F, and η^(23)_O, proving that η_F=η_F^(23)+O(δ t^7) and η_O=η_O^(23)+O(δ t^4).
Thus, η^(23)_F (η^(23)_O) reproduces η_F (η_O) in the leading order and works as its estimator as well.
Likewise, if we use an n-th (n≥3) order formula, we can construct η_F^(2n) (η_O^(2n)), which approximates η_F (η_O) to an accuracy of O(δ t^n+4) (O(δ t^n+1)).
Meanwhile, using a larger n demands more exponentials (see Eqs. (<ref>) and (<ref>)), increasing the gate complexity.
Nicely, T_4(δ t) has only one more exponential than T_3(δ t) while improving the accuracy by an order.
Considering this reasonable balance between complexity and accuracy, we adopt T_4(δ t) for primary use.
§ ITERATION AND ENTIRE ALGORITHM
In the previous section, we assumed that |ψ(t)⟩ is known and found error estimators η_F^(24) and η_O^(24) that consist of product formulas and hence are implementable in quantum circuits.
Here, we discuss how we choose an appropriate δ t in successive steps of time evolution.
As we see below, we can utilize the measured error estimator to determine a nearly optimal δ t, thereby making the successive evolution efficient.
Since the argument goes in parallel, we first focus on the fidelity error and will address the observable error later in this section.
Our overall task is to simulate the time evolution according to the Hamiltonian H from the initial time t_ini to the final time t_fin, starting from an initial state |ψ_0⟩.
We set an error tolerance ϵ for the fidelity error in each time step.
Initially, we have no a priori information about the appropriate time step, so we take a reasonably small trial stepsize δ t_0, say, δ t_0=0.1J^-1 with J being a typical energy scale of H.
For this δ t_0, we implement T_2(δ t_0) and T_4(δ t_0) and calculate η_F^(24) using a quantum circuit.
Basically, we aim for the stepsize to be so small that
η_F^(24)<ϵ.
If this is true, we accept our trial δ t_0 and evolve our state as |ψ_1⟩=T_2(δ t_0)|ψ_0⟩.
If η_F^(24)≥ϵ instead, our trial δ t_0 is too large and we need a smaller δ t_0'.
In choosing δ t_0' appropriately, we invoke the leading-order scaling relation η_F^(24)≈αδ t_0^6 for some unknown α independent of δ t_0.
We can use this relation to estimate α by α≈η_F^(24)/δ t_0^6 since we measured η_F^(24).
For δ t_0', we expect η_F^(24)'≈α(δ t_0')^6≈η_F^(24)(δ t_0'/δ t_0)^6, which we wish to be smaller than ϵ.
Thus, the condition η_F^(24)'<ϵ leads to δ t_0'≈δ t_0 (ϵ/η_F^(24))^1/6 as an optimal choice within our error tolerance.
For a safety margin, we introduce a constant C (0<C<1) and set δ t_0'= C δ t_0 (ϵ/η_F^(24))^1/6 as an updated trial δ t_0.
We repeat this update procedure until η_F^(24) gets smaller than ϵ and accept the latest δ t_0 to evolve our state as |ψ_1⟩=T_2(δ t_0)|ψ_0⟩.
Next, we move on to the second step, using a time step δ t_1.
In choosing this, we again use the latest η_F^(24) obtained at the end of the previous time step.
Since |ψ_1⟩≈|ψ_0⟩, we can expect the error scaling coefficient α to be almost the same in the present and previous steps.
Therefore, like in the updated trials within the previous time step, we have δ t_1=C δ t_0 (ϵ/η_F^(24))^1/6 as a good candidate for the optimal stepsize in the present time step.
We note that η_F^(24) here is what was measured in the previous step, and we have not made any measurements in the present step yet.
Using this δ t_1 as a trial stepsize, we implement T_2(δ t_1) and T_4(δ t_1) and calculate η_F^(24) using a quantum circuit.
Depending on whether η_F^(24) is less or greater than ϵ, we accept or update δ t_1 like in the previous step.
The following iteration is straightforward and repeated until the accumulated evolution time t_ini+δ t_0+δ t_1+… exceeds the final time t_fin.
We summarize a pseudocode for the algorithm in Algorithm <ref>.
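For reference, a classical emulation of this adaptive loop may be sketched as follows; on hardware, η_F^(24) would instead be obtained from measurements, and the constants below (trial stepsize, safety factor) are placeholders.

import numpy as np
from scipy.linalg import expm

def trotter24(A, B, psi0, t_ini, t_fin, eps, dt_trial=0.1, C=0.9):
    s = 1/(2 - 2**(1/3))
    T2 = lambda dt: expm(-1j*A*dt/2) @ expm(-1j*B*dt) @ expm(-1j*A*dt/2)
    T4 = lambda dt: (expm(-1j*s*A*dt/2) @ expm(-1j*s*B*dt) @ expm(-1j*(1-s)*A*dt/2)
                     @ expm(-1j*(1-2*s)*B*dt) @ expm(-1j*(1-s)*A*dt/2)
                     @ expm(-1j*s*B*dt) @ expm(-1j*s*A*dt/2))
    psi, t, dt = psi0, t_ini, dt_trial
    while t < t_fin:
        dt = min(dt, t_fin - t)
        while True:
            eta = 1 - abs(np.vdot(T4(dt) @ psi, T2(dt) @ psi))**2   # eta_F^(24)
            eta = max(eta, 1e-16)                                   # avoid division by zero
            if eta < eps:
                break
            dt = C * dt * (eps/eta)**(1/6)                          # shrink and retry
        psi, t = T2(dt) @ psi, t + dt
        dt = C * dt * (eps/eta)**(1/6)                              # trial stepsize for the next step
    return psi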
Let us make a parallel argument for the observable error η_O instead of the fidelity error η_F.
At each time step, we measure η_O^(24) and judge if the condition
|η_O^(24)| < ϵ_O ‖O‖
is met.
This is an analog of Eq. (<ref>), and we introduced the operator norm ‖O‖ as a reference scale and put the subscript O on the tolerance as ϵ_O to avoid confusion.
The iteration scheme is parallel to the fidelity case, but the update of the stepsize comes with the exponent 1/3 instead of 1/6 since η^(24)_O=O(δ t^3) rather than η^(24)_F=O(δ t^6).
We summarize a pseudocode for the observable-based algorithm in Algorithm <ref>. |
http://arxiv.org/abs/2307.04505v1 | 20230710115649 | Analysis of the possible satellite contamination in LAMOST-MRS spectra | [
"Mikhail Kovalev",
"Olivier R. Hainaut",
"Xuefei Chen",
"Zhanwen Han"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.IM"
] |
firstpage–lastpage
Distributed Decisions on Optimal Load Balancing
in Loss Networks
Qiong Liu1, Chenhao Wang2, Ce Zheng1
1Télécom Paris, Institut Polytechnique de Paris, France
2Beijing Normal University, China
Email: [email protected], [email protected], [email protected]
==========================================================================================================================================================================================================================
We present the detection of false positive double-lined spectroscopic binaries candidates (SB2) using medium-resolution survey (MRS) spectra from the one time-domain field of LAMOST data release 10 (DR10). The secondary component in all these binaries has near zero radial velocity and solar-like spectral lines. Highly likely this is light from the semi-transparent clouds illuminated by the full Moon. However we also suspect that partially this contamination can be caused by a solar light reflected from the surface of low-orbital artificial satellites launched in the beginning of 2022. We found several possible contaminant candidates using archival orbital data. We propose measures to reduce risk of such contamination for the future observations and methods to find it in archived ones.
binaries : spectroscopic – techniques : spectroscopic
§ INTRODUCTION
Since the launch of the Sputnik-1 in 1957, we can see artificial satellites flying in the night sky. Such observations can be very useful for Earth-related science (i.e. determination of the geopotential), although for astrophysics, satellites can be an obstacle.
This problem has become more serious with the start of the active population of low Earth orbits, which now host many thousands of telecommunication satellites forming huge constellations. In the most pessimistic scenario, such intensive commercialisation of space could mean the end of all ground-based astronomy.
In spectroscopic observations, the flyby of an artificial satellite will show up as a fake spectroscopic binary, where the contamination is visible as a solar-like spectral component. For low-orbit satellites, the line of sight velocity () is near zero when the satellite rises close to culmination, but the transverse velocity is very high, so the contamination lasts much less than a second for typical values of the field of view. Thus for a typical bright astrophysical target the contamination is usually negligible, and only relatively faint objects are affected <cit.>.
<cit.> identified many double-lined spectroscopic binary (SB2) candidates in LAMOST (Large Sky Area Multi-Object fiber Spectroscopic Telescope) MRS <cit.>. However some of them can be false positives, which can be identified by taking advantage of multiple observations in a time domain sub-survey. Here we present results for one particular field, where these false-positive SB2s can be caused by satellite contamination.
The paper is organised as follows: in Sections <ref> and <ref>, we describe the observations and methods. Section <ref> presents our results. In Section <ref> we discuss the results. In Section <ref> we summarise the paper and draw conclusions.
§ OBSERVATIONS
LAMOST is a 4-meter quasi-meridian reflective Schmidt telescope with 4000 fibers installed on its 5° FoV focal plane. This configuration allows it to observe spectra of at most 4000 celestial objects simultaneously <cit.>.
For the analysis in this paper, we downloaded all available time-domain DR10 spectra from <www.lamost.org/dr10/v0/> observed within the field "TD164021N701415T01". We use the spectra taken at a resolving power of R = λ/Δλ ∼ 7500. Each spectrum is divided into two arms: blue from 4950 Å to 5350 Å and red from 6300 Å to 6800 Å. During the reduction, heliocentric radial velocity corrections in the range of _h = -5, -2 were applied to all spectra. We convert the wavelength scale in the observed spectra from vacuum to air using <cit.>. Observations were carried out in MJD = 59676.8–59692.8 d, spanning an interval of 16 days.
We selected only spectra stacked over the whole night[Each epoch contains seven short 20-min individual exposures, which were stacked to increase the signal-to-noise ratio.] and applied a cut on the signal-to-noise ratio (S/N ≥ 20). In total we have 5625 spectra of 1323 targets. The number of epochs varies from 2 to 4 per target, as very noisy epochs were not selected for some targets.
§ METHODS
We use the same spectroscopic models and method as <cit.> to analyse individual LAMOST-MRS spectra, see very brief description below.
The normalised binary model spectrum is generated as the sum of two Doppler-shifted normalised single-star spectral models f_λ,i[They are designed to be a good representation of the LAMOST-MRS spectra.], scaled according to the difference in luminosity, which is a function of the effective temperature and the stellar size. We assume both components to be spherical and use the following equations:
f_λ,binary = (f_λ,2 + k_λ f_λ,1) / (1 + k_λ),
k_λ = B_λ(T_eff,1) R^2_1 / (B_λ(T_eff,2) R^2_2)
where k_λ is the luminosity ratio per wavelength unit, B_λ is the black-body radiation (Planck function), T_eff is the effective temperature and R is the stellar radius. Throughout the paper we always assume the primary star to be the brighter one. In comparison with <cit.>, we directly use the ratio of stellar radii q as a fitting parameter, instead of the mass ratio with the difference of the surface gravities.
Each spectrum is analysed with the single-star and the binary spectral model; thus we can calculate the difference in reduced χ^2 between the two solutions and the improvement factor, computed using Equation <ref> similarly to <cit.>. This improvement factor estimates the absolute value difference between the two fits and weights it by the difference between the two solutions.
f_imp = ∑[ (|f_λ,single - f_λ| - |f_λ,binary - f_λ|) / σ_λ ] / ∑[ |f_λ,single - f_λ,binary| / σ_λ ] ,
where f_λ and σ_λ are the observed flux and corresponding uncertainty, f_λ, single and f_λ, binary are the best-fit single-star and binary model spectra, respectively, and the sum is over all wavelength pixels.
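For reference, the improvement factor of Equation <ref> translates directly into a few lines of numpy. The sketch below is our own (array names are placeholders) and assumes all inputs are sampled on the same wavelength grid:

import numpy as np

def improvement_factor(flux, sigma, model_single, model_binary):
    # all arguments are 1-D arrays over the wavelength pixels of one spectrum
    num = np.sum((np.abs(model_single - flux) - np.abs(model_binary - flux)) / sigma)
    den = np.sum(np.abs(model_single - model_binary) / sigma)
    return num / den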
§ RESULTS
We carefully checked the quality of the spectral fits through visual inspection of the plots. Several spectra were selected as SB2 candidates using the criteria formulated in <cit.>, although this selection was not complete, as these criteria prioritise purity. This study is focused on possible satellite contamination, so we introduce a new selection based on the fitted parameters, such as the radial velocities and the improvement factor; see Table <ref>.
Out of the four epochs, the one with MJD = 59685.8 d has significantly more selected candidates, so we explored it more carefully. Thus we keep only stars that appear as a regular single star in all epochs except MJD = 59685.8 d. In total we are left with 37 SB2 candidates, with a secondary component at _2 ∼ 0. They are marked as open triangles in Figure <ref>.
We show the clearest example, J162843.74+680439.7 (G = 14.55 mag), with a very large improvement factor of ∼410, in Fig. <ref>. In the top panel we show fits of the co-added spectrum by the single-star and binary models. The single-star model obviously fails to fit the double-lined spectrum, while the binary model fits the primary component (67 per cent) at _1 = -418.72 and captures an additional spectral component (33 per cent) at _2 = -7.62. In the middle panel we show fitting results for a mock spectrum of J162843.74+680439.7 contaminated by a solar spectrum of V = 16 mag, to which we applied Gaussian noise according to the observed S/N. Both panels are very similar.
In the bottom panel we show all seven short 20-min exposure spectra before co-addition. It is clear that the contamination happened at UTC times t = 19:59 and t = 20:21, as these two exposures have an additional spectral component with a brightness comparable to that of the main target. When all exposures were co-added, we obtained a double-lined spectrum with significantly smaller noise.
In the other candidates the contamination is not as clearly visible, as they have smaller improvement factors. The majority of the candidates have G ∼ 14.5 and S/N_red < 50 in the co-added spectrum; thus, for brighter targets the contamination was probably negligible and comparable to the noise level.
§ POSSIBLE SOURCE OF CONTAMINATION
Highly likely, such contamination was caused by clouds illuminated by the full Moon, which significantly increased the sky background in the spectra. Unfortunately, the sky subtraction failed to completely remove it during the spectral reduction; see the last two individual 20-min exposures in the bottom panel of Fig. <ref>. This can explain such a solar-like spectral component very well, as the sky becomes brighter while the Sun is rising. At the end of the observation, the Sun's altitude was around -12°. This is also supported by the fact that the contamination is visible only in relatively faint targets. Nevertheless, we decided to test other possible sources of contamination.
We checked whether this contamination could be due to a solar system object. We used the Minor Planet Center checker[<https://minorplanetcenter.net/cgi-bin/mpcheck.cgi>] to check 1284083 known objects and found none of them brighter than 18 mag in our field. With a slightly larger search radius we found comet C/2019 K7 (Smith) with coordinates α=16:20:45.9, δ=+67^∘ 56' 19", although it is unlikely to be our contaminant, because otherwise it would be visible in all exposures, as it moves very slowly.
In order to investigate whether this contamination could have been caused by a satellite passing through the field of view, we verified that, at the time of the observations, low-Earth orbit (LEO) satellites were illuminated by the Sun. This was tested using the formalism described in <cit.> for generic LEOs as well as for Starlink and OneWeb satellites.
To evaluate the number of fibres typically affected by a satellite trail, a million randomly positioned trails were shot through a realistic LAMOST field of view. For the considered field, 1324 fibres had an object of interest (with suitable S/N > 20) out of the 4000 fibres of the instrument, so 1324 fibres were considered in this experiment.
A trail is considered to affect a fibre if the impact distance is less than 3", which accounts for the radius of the fibre (whose diameter is 3.3") and the width of the trail, which is set to 2", accounting for the seeing and the marginally resolved satellite. For each trail, the number of fibres affected was counted. Figure <ref> illustrates this for 100 trails in the left panel and displays a histogram of the number of fibres affected in the right panel. This method is the same as the one used to evaluate (Michevat priv. comm.) the impact on 4MOST, a similar spectrograph built at ESO <cit.>. About 64% of the trails hit no fibre, and while 0.01% of the satellites hit 7 fibres, a trail hits 0.44 fibres on average. As 37 fibres were contaminated, this suggests up to ∼80 satellites crossed the 5° field of view during the exposure. These numbers should be taken with a fairly large uncertainty, as the seeing and the width of the trail will cause the number of affected fibres to be larger, while the contamination at a larger impact distance will be smaller.
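This Monte Carlo experiment can be reproduced with a few lines of Python. The sketch below is our own and uses randomly placed fibres inside a 5° field as a stand-in for the real fibre allocation, so the resulting numbers are only indicative:

import numpy as np

rng = np.random.default_rng(1)

fov_radius = 2.5 * 3600.0                      # 5-degree-diameter field, in arcsec
hit_radius = 3.0                               # maximum impact distance, arcsec
n_fib = 1324                                   # fibres with a suitable target

# random fibre positions inside the field (the real experiment uses the allocated positions)
phi = rng.uniform(0.0, 2.0 * np.pi, n_fib)
r = fov_radius * np.sqrt(rng.uniform(0.0, 1.0, n_fib))
fibres = np.column_stack([r * np.cos(phi), r * np.sin(phi)])

def fibres_hit_by_random_trail():
    # a random chord of the field: direction normal plus signed offset from the centre
    theta = rng.uniform(0.0, np.pi)
    normal = np.array([np.cos(theta), np.sin(theta)])
    offset = rng.uniform(-fov_radius, fov_radius)
    dist = np.abs(fibres @ normal - offset)    # impact distance of every fibre
    return int(np.sum(dist < hit_radius))

hits = np.array([fibres_hit_by_random_trail() for _ in range(100_000)])
print(hits.mean(), (hits == 0).mean())         # average fibres hit, fraction of empty trails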
To estimate the visual magnitude of the satellite causing the contamination, one must estimate the level of contamination of the spectra and take into account the effect of the motion of the satellite. With typical angular velocities of the order of 1° s^-1 at zenith, a LEO satellite spends only a few milliseconds t_eff crossing the fibre during the total exposure time t_exp = 1200 s. The apparent magnitude m of the object can be estimated from its effective magnitude m_eff measured on the spectrum,
m = m_eff + 2.5 log_10( t_eff / t_exp )
  = m_eff + 2.5 log_10( r_fibre / (ω_sat t_exp) ) ,
where r_ fibre = 3.3” is the angular diameter of a fibre on the sky.
Using the method in <cit.>, the angular velocity of the satellites in the direction of the observations was estimated for Starlink (0.66° s^-1) and OneWeb (0.30° s^-1) satellites.
The effective magnitude can be estimated from the contamination. The S/N of the G ∼ 14.5 targets was up to 50 in the co-added spectrum, corresponding to ∼20 in the individual 1200-s exposures. To be noticeable, the contamination must have S/N > 5 (which corresponds to G ∼ 16), and to be detectable at all, S/N > 2 (G ∼ 17).
Combining these pieces of information, Eq. <ref> gives visual magnitudes ∼1–2.
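Equation <ref> is easy to evaluate numerically; the helper below is our own illustration (names and default values are ours) and reproduces the ∼1–2 mag estimate quoted above:

import numpy as np

def apparent_from_effective(m_eff, omega_deg_per_s, t_exp=1200.0, r_fibre=3.3):
    # r_fibre in arcsec; omega converted from deg/s to arcsec/s
    t_eff = r_fibre / (omega_deg_per_s * 3600.0)
    return m_eff + 2.5 * np.log10(t_eff / t_exp)

print(apparent_from_effective(16.0, 1.0))    # an m_eff ~ 16 trail at 1 deg/s -> m ~ 0.7
print(apparent_from_effective(17.0, 0.66))   # -> m ~ 2.2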
Fainter satellites will not be detected.
As of the time of the observations, about 4500 satellites were present on LEOs (roughly 2000 pre-existing, and 2002 Starlink[
Jonathan McDowell’s Starlink web page
<https://planet4589.org/space/con/star/stats.html>
] and 426 OneWeb[
Jonathan McDowell’s OneWeb web page
<https://planet4589.org/space/con/ow/stats.html>
] from recently launched mega-constellations).
Using the method of <cit.>, this results in ∼ 15 satellite trails per exposure during long twilight, as illustrated in Fig. <ref>.
This number is much too low to explain the observed contamination. Furthermore, the magnitudes of the satellites differ widely (some of them, such as HST or the ISS, can be as bright as V ∼ -5 to 2), but the bulk of the Starlink satellites are in the 5.6–7.2 range <cit.> and OneWeb in the 7–9 range <cit.>, i.e. well below the reach of the spectrograph.
We also checked the Satellite Track Predictor (STP)[<http://www.astro.amu.edu.pl/STP>] for the time interval UTC = 19:30–20:30 and found that 12 bright satellites with V ≤ 6 mag crossed our field. We show their tracks in Fig. <ref>. STP reports that errors can be up to 0.1–0.5° in sky position and σ_V = 2 mag in brightness, so some of these satellites (like the Starlink and Cosmos satellites with reported V = 4 mag) can be bright enough to cause contamination.
In the week after their launch, the satellites appear as a train, or like a string of pearls, while they slowly disperse in elongation along their very low orbit. During that phase they appear much brighter than on their operational orbit, because of the shorter distance to the observer and because the configuration and attitude of the satellites are different from those in operations. In the days of the earliest Starlink launches, they could be as bright as mag ∼ 0. Since then, the operator has modified the attitude of the satellites so that they are much dimmer, in the 1–3 range most of the time[Although very bright (up to V ∼ 0 mag) and short (∼1 s) flashes are possible; the first author saw them several times.]. A batch of satellites launched with one rocket typically consists of 60 satellites. In order to test whether such a train of recently launched satellites could have crossed our field of view, Two-Line Elements (TLEs), the orbital elements of the satellites, were retrieved for the date of the observations using CelesTrak[<https://celestrak.org/NORAD/archives/request.php>]. Using the skyfield[<https://rhodesmill.org/skyfield/>] package, the visibility of the satellites from LAMOST was verified for the time of the observations. It appears that a series of Starlink satellites from the 2022 Feb. 21 launch^<ref> crossed the sky during the exposure. While their tracks, as computed by us, are in the general vicinity of our observation, they do not cross the field of view. However, the TLEs are notoriously not very accurate – especially at a phase when the operator frequently adjusts the orbit – and our method to compute the satellite positions is not verified. At that time, the satellites were at an altitude of 350 km, with magnitudes in the 1–2 range. The apparent angular velocity of these satellites was ω ∼ 1.0° s^-1, which leads to effective magnitudes m_eff ∼ 16–17, i.e. in the range of the contamination.
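A check of this kind can be sketched with skyfield as shown below. This is our own minimal illustration, not the code used for the analysis: the site coordinates are approximate, the altitude cut is only a rough stand-in for "near the observed field", and current TLEs are downloaded from CelesTrak, whereas the analysis above used archival TLEs for the night in question.

from skyfield.api import load, wgs84

ts = load.timescale()
eph = load('de421.bsp')                        # Sun/Earth ephemeris for the sunlit test

# LAMOST site at Xinglong (approximate coordinates, for illustration only)
site = wgs84.latlon(40.396, 117.577, elevation_m=960)

# current Starlink TLEs; the analysis in the text used archival TLEs for 2022 April 16
sats = load.tle_file('https://celestrak.org/NORAD/elements/gp.php?GROUP=starlink&FORMAT=tle')

t = ts.utc(2022, 4, 16, 19, range(30, 91))     # 19:30-20:30 UT, one step per minute
for sat in sats[:100]:                         # a subset, to keep the example short
    alt, az, _ = (sat - site).at(t).altaz()
    lit = sat.at(t).is_sunlit(eph)
    if ((alt.degrees > 60.0) & lit).any():     # high above the horizon and illuminated
        print(sat.name)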
Therefore, we suggest that the observations could, theoretically, have been "photobombed" by a train of Starlink satellites on their low parking orbit, although contamination by clouds is more likely.
In the future, the number of satellites in mega-constellations is likely to grow significantly. Assuming 65 000 satellites (as in <cit.>), this would result in a typical 1200s exposure being crossed by about 200 satellite trails, potentially resulting in ∼ 260 fibres contaminated per exposure taken during long twilight (3% of the fibres). However, the limiting magnitude of the LAMOST-MRS instrument for 1200s exposure is V∼ 15 (5σ). Converting the apparent magnitudes of the satellites (using the crude photometric model described in <cit.>) into effective magnitudes, these will be in the 18 to 23 range (depending on the satellite's orbit and altitude and azimuth), well below the limit of LAMOST-MRS, even accounting for a possible 1 mag error on the photometric model.
As usual, it is important to note that once the sun dips far enough under the horizon, most of the satellites fall in the shadow of the Earth. This problem is therefore only critical during the first and last hours of the night.
While the satellites on operational orbits will not be a major concern for LAMOST, the compact trains of very low satellites can affect the observations. The probability of such a train crossing a telescope field of view is low, but considering that the constellations will need to be regularly replenished, new satellites will need to be continuously launched. Considering 100 000 satellites with a lifetime of 5 years, this would result in about one launch per day (each with 60 satellites). If the satellites stay one month in low orbit, this would result in about 60 trains in orbit, at various stages of dispersion. It is therefore important that the satellite operators also keep the brightness of the satellites to the absolute minimum possible during their stay on the transit orbit. The changes of satellite attitude implemented by Starlink illustrate the improvements that can be made.
§ CONCLUSIONS
We successfully detected false-positive SB2 candidates in the LAMOST-MRS spectra.
The secondary component in all these binaries has near-zero radial velocity and solar-like spectral lines. Most likely this is light from semi-transparent clouds illuminated by the full Moon. However, we also suspect that part of this contamination may be sunlight reflected from the surface of low-orbit artificial satellites launched at the beginning of 2022. We found several possible contaminant candidates using archival orbital data from CelesTrak and the STP web service.
Unfortunately, the results presented in this paper cannot definitely confirm satellites as the contaminant, as other sources, like clouds and problems with the sky subtraction, have a similar effect on the spectral observations.
To identify and remove such contamination, we recommend analysing all spectra taken during twilight assuming a binary spectral model, where one component has a solar-like spectrum with a radial velocity in the range of -10 to +10.
Also, the short exposures should be carefully checked prior to the co-addition to avoid the production of false double-lined spectra from contaminated exposures.
During the scheduling of the observations, one should consider the possibility of contamination by a bright "train" of newly launched satellites and avoid observations near twilight if possible. We also recommend taking an additional image of the observed field to reliably identify possible satellite tracks.
§ ACKNOWLEDGEMENTS
MK is grateful to his parents, Yuri Kovalev and Yulia Kovaleva, for their full support in making this research possible. We thank Hans Bähr for his careful proof-reading of the manuscript. We thank Zhang Haotong and Luo A-Li for useful discussions. We thank Dr. Nikolay Emelyanov for providing the link to Minor Planet Center Checker. We thank Monika Kamińska for providing sky positions for satellites from STP. We are grateful to Dr. T.S. Kelso for development and maintaining of the CelesTrack.
This work is supported by National Key R&D Program of China (Grant No. 2021YFA1600401/3), and by the Natural Science Foundation of China (Nos. 12090040/3, 12125303, 11733008).
Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. The authors gratefully acknowledge the “PHOENIX Supercomputing Platform” jointly operated by the Binary Population Synthesis Group and the Stellar Astrophysics Group at Yunnan Observatories, Chinese Academy of Sciences.
This research has made use of NASA’s Astrophysics Data System. It also made use of TOPCAT, an interactive graphical viewer and editor for tabular data <cit.>.
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
LAMOST-MRS spectra are downloaded from <www.lamost.org>.
Bassa C. G., Hainaut O. R., Galadí-Enríquez D., 2022, A&A, 657, A75
Cui X.-Q., et al., 2012, Research in Astronomy and Astrophysics, 12, 1197
Czesla S., Schröter S., Schneider C. P., Huber K. F., Pfeifer F., Andreasen D. T., Zechmeister M., 2019, PyA: Python astronomy-related packages (ascl:1906.010)
El-Badry K., et al., 2018, MNRAS, 476, 528
Kovalev M., Li Z., Zhang X., Li J., Chen X., Han Z., 2022a, MNRAS, 513, 4295
Kovalev M., Chen X., Han Z., 2022b, MNRAS, 517, 356
Liu C., et al., 2020, arXiv:2005.07210
Mallama A., 2020, arXiv:2012.05100
Mallama A., 2021a, arXiv:2101.00374
Mallama A., 2021b, arXiv:2111.09735
Taylor M. B., 2005, in Shopbell P., Britton M., Ebert R., eds, Astronomical Society of the Pacific Conference Series Vol. 347, Astronomical Data Analysis Software and Systems XIV, p. 29
Zhao G., Zhao Y.-H., Chu Y.-Q., Jing Y.-P., Deng L.-C., 2012, Research in Astronomy and Astrophysics, 12, 723
de Jong R. S., et al., 2019, The Messenger, 175, 3
|
http://arxiv.org/abs/2307.04126v1 | 20230709084201 | Compactness of sequences of warped product circles over spheres with nonnegative scalar curvature | [
"Wenchuan Tian",
"Changliang Wang"
] | math.DG | [
"math.DG"
] |
|
http://arxiv.org/abs/2307.04368v2 | 20230710064918 | ECS -- an Interactive Tool for Data Quality Assurance | [
"Christian Sieberichs",
"Simon Geerkens",
"Alexander Braun",
"Thomas Waschulzik"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.SY",
"eess.SY"
] |
With the increasing capabilities of machine learning systems and their potential use in safety-critical systems, ensuring high-quality data is becoming increasingly important. In this paper we present a novel approach for the assurance of data quality. For this purpose, the mathematical basics are first discussed and the approach is presented using multiple examples. This results in the detection of data points with properties that are potentially harmful for the use in safety-critical systems.
§ INTRODUCTION
The development of machine learning (ML) based systems has led to their widespread use in research, industry and everyday life. Even though ML systems show great performance in solving complex tasks, their use is mostly limited to domains where wrong decisions only have minor consequences. The application of ML systems in high-risk domains is currently problematic due to the required quality, the lack of trustworthiness and the expected legal basis. To give a legal framework for the application of ML systems, the European AI act <cit.> is currently under development. Simultaneously, multiple projects from research and industry are dealing with the topic of ML systems in high-risk areas, such as "KI-Absicherung" <cit.> and "safetrAIn" <cit.>. All of these projects highlight the high requirements that are needed to protect humans from errors made by ML systems. High-risk ML systems have to fulfill the requirements according to <cit.> Chapter 2 "REQUIREMENTS FOR HIGH-RISK AI SYSTEMS" Article 10 "Data and data governance" Point 3: "Training, validation and testing data sets shall be relevant, representative, free of errors and complete". In this paper we introduce a new approach that will contribute to the future fulfillment of this requirement. It is showcased how different relevant aspects of the data can be analysed and how relations between the given data points can be used for quality assurance.
The presented approach is part of the QUEEN-method (Qualitätsgesicherte effiziente Entwicklung vorwärtsgerichteter künstlicher Neuronaler Netze, quality-assured efficient development of feed-forward artificial neural networks) <cit.>, which is a comprehensive approach for the development of quality-assured neural networks. Within the scope of the QUEEN-method, two data quality assurance methods were developed, namely the integrated quality indicator <cit.> and the ECS (equivalent classes sets) <cit.>. These methods were developed simultaneously and in close cooperation. In this paper we want to show the mathematical basis and the use of the ECS for quality assurance. The abilities and usage of the integrated quality indicator are covered in another submission <cit.>.
The ECS is particularly used to analyse the local and global composition of data sets. Based on this, a wide variety of data quality properties is addressed: the identification of single data points like outliers, false annotations or isolated data, as well as the identification of groups of data points like decision boundaries and local groups of data points with identical output. The ECS makes it possible to identify all data points which do not match specifiable conditions. The method itself is thereby created in such a way that interactions between the user and the data are supported, in order to simplify and speed up the quality assurance process.
§ RELATED WORK/STATE OF THE ART
Despite the fact that data quality and quality assurance are widely necessary and researched, no single generally accepted definition exists. Instead, there are several attempts to define data quality based on current developments. One example is given by <cit.>, who define data quality with respect to the intended use of the data. It is argued that data quality has to be a context-dependent term to be appropriately used in the context of a given task. In addition, the term "data quality" is split into multiple properties like accuracy, consistency, completeness, safety and more. In <cit.>, many of these properties are listed and defined separately. In <cit.>, data quality is additionally split into subjective and objective assessments of data quality.
A general definition of data quality can thereby not be given. Instead, data of high quality is considered to be data which is fit for its intended purpose <cit.>. If data quality is used in standards, it is typically split into different properties which have to be analysed separately, as in <cit.>.
A first step to assure data quality is the use of descriptive statistics <cit.>. Herein, statistical methods are used to gain greater insight into the given data. Common methods are the visualization via scatter plots and histograms, often combined with the measurement of central tendencies, dispersion and location parameters. Our proposed method extends the descriptive statistical methods, enables the visualization of multiple quality assurance aspects in one plot and enables a direct interaction between the quality indicator visualization and the data.
When trying to assure the quality of data, another possible approach is the representation of the given data points in a lower-dimensional space using methods of dimensionality reduction. Commonly used methods are PCA <cit.>, tSNE <cit.> or UMAP <cit.>. These methods often produce representations interpretable by humans if the output dimensionality is chosen to be low enough. However, such methods often result in a considerable loss of information. The ECS, on the other hand, is computed on the original values and takes all the given information into account.
Some approaches try to cover as many dimensions of the data quality as possible. One way to do this is by testing the data against predefined rules and assumptions. An example of such an approach is the pointblank R package <cit.> which is created for an agent based data quality assurance. In this package, specific elements of the data are tested against predefined functions. As part of this it can be tested if the data is greater, equal, lower and so on. Another method is given by DEEQU published in <cit.> and <cit.>. This package allows for assumption based unit tests which can be defined by the user. Tests on specific parameters of the data, similar to those already mentioned with regard to the pointblank package, are possible as well. A last method that should be mentioned here is shown in <cit.>. This approach showcases a probability-based method which calculates a value representing the probability that a data set is free of internal errors with respect to entered rules. The entered rules are based on the presence of data of certain values, comparable to the package pointblank.
The main problem in using the mentioned approaches is the large amount of knowledge about the data required to create accurate assumptions. On top of this, the efficient creation of assumptions is only possible if the user is aware that the data quality is affected in some regard. Due to the reliance on the relationships between data points, our method does not need any assumptions or rules to be specified by a user. Instead, our approach can be used without any knowledge about the data.
A different approach is to focus on just a single dimension of data quality. On the topic of outlier detection, these are, for example, density-based algorithms like <cit.>. In this approach, the number of local neighbouring data points is calculated and the thereby generated local density is compared with that of the nearest neighbours. Another approach is to use the DBSCAN algorithm <cit.> to cluster the given data. Based on this clustering, the method proposed by <cit.> calculates values to identify clusters of minimal size. These clusters are then regarded as possible anomalies.
Another data quality property is the detection of possible outliers, which can also be addressed by density-based clustering. One example of such an algorithm is given by <cit.>. This algorithm uses a fixed clustering to identify clusters, followed by the computation of cluster distances. The clusters are classified as anomalous based on the inter-cluster distances and the deviation from the mean inter-cluster distance. Two quite similar approaches are <cit.> and <cit.>. Both approaches use a clustering of the given data in a first step, once with the previously mentioned DBSCAN and once with a cluster algorithm named OPTICS <cit.>. In a second step, anomalous clusters are identified, once based on inverse distance weighting (IDW) and once using the kriging method.
The main advantage of all of the mentioned methods is the reliable calculation of their data quality property. However, due to the methods' focus on one specific data quality property, they are only useful if the assumption exists that this property could contain errors. The advantage of our proposed method is that multiple data quality properties may be analysed with one approach.
§ METHOD
The ECS is based on the idea that a data set can be split into input data and output data. The input data defines the dimensions of the data, henceforth called features, which can be used to predict the output features. The set of all possible inputs forms the input space I. Accordingly, the output space O is formed by all possible outputs. To use the ECS properly, all feature values have to be numbers. Features which are not given as numbers have to be represented in some way as a number or a combination of numbers.
To start the calculation of the ECS, two metrics are needed. These metrics should be chosen in such a way that data points which are "similar" according to the semantics of the task to be solved have a relatively small distance to each other. At the same time, "dissimilar" data points should have a relatively large distance. The distances between two data points can be calculated in the input space and in the output space independently of each other. By doing so, it is possible to use different metrics for the distances in I and in O. Which metric is best suited for the data set depends on the given type of data and the task to be solved. In the following, the difference between data points in the input space is called the input distance d_RI. Accordingly, the difference between data points in the output space is called the output distance d_RO.
To differentiate between "similar" and "dissimilar" data points, the distances can be separated into different groups. The minimal approach is to create two groups: one group for relatively small distances and another one for relatively large distances. Doing so requires a threshold, which is called δ_in for distances in the input space and δ_out for distances in the output space. These δ can be absolute distance values or a percentage of the maximum known distance between data points. They are set based on the data quality properties that should be identified and the used data type. By comparing two data points with each other, four possible scenarios can be distinguished:
* small input distance - small output distance
* small input distance - large output distance
* large input distance - small output distance
* large input distance - large output distance
Each of these scenarios shows a relation between the data points. If, for example, both distances are small, then the data points may showcase a common use case with a typical output. A small input distance in combination with a large output distance, on the other hand, could indicate complex areas of the input space or an outlier. Either way, the identification of data properties based on only two data points is not enough. Due to this, the following four ECS-sets are calculated. In these sets, the compared data points which are part of one of the above scenarios are saved.
ECS_EE(D) = {d_c | d_c ∈ D^2 ∧ d_RI(d_c) ≤ δ_in ∧ d_RO(d_c) ≤ δ_out}
ECS_EU(D) = {d_c | d_c ∈ D^2 ∧ d_RI(d_c) ≤ δ_in ∧ d_RO(d_c) > δ_out}
ECS_UE(D) = {d_c | d_c ∈ D^2 ∧ d_RI(d_c) > δ_in ∧ d_RO(d_c) ≤ δ_out}
ECS_UU(D) = {d_c | d_c ∈ D^2 ∧ d_RI(d_c) > δ_in ∧ d_RO(d_c) > δ_out}
Each of the four ECS-sets represents all comparisons between data points which result in one of the four scenarios. Thereby, an E indicates a small distance, whereas a U indicates a large one. The first of the two letters of an ECS-set refers to the input distance, the second to the output distance. Following this, ECS_EU contains all data point comparisons which result in a small input and a large output distance. ECS_UE, on the other hand, contains comparisons which result in a large input and a small output distance.
The information in the ECS-sets can be used to analyse the data points for each of the four scenarios. This way, it is possible to identify data points with specific properties. It would, for example, be possible to identify all data points which have many dissimilar data points in close proximity. It would also be possible to identify data points which showcase small distances in the input and the output space. By doing so, certain areas of the input space can be identified which correlate with certain outputs of the output space. It would also be possible to identify features which differentiate certain data points from each other.
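For small data sets the four ECS-sets can be computed directly from all pairwise distances. The Python sketch below is our own illustration of the definitions above (function and variable names are ours); it scales quadratically with the number of data points and is meant only to make the construction concrete:

import numpy as np
from itertools import product

def ecs_sets(X, Y, d_in, d_out, delta_in, delta_out):
    # X, Y: arrays with one input/output vector per data point
    # d_in, d_out: metrics on the input and output space
    sets = {'EE': [], 'EU': [], 'UE': [], 'UU': []}
    n = len(X)
    for i, j in product(range(n), range(n)):
        if i == j:
            continue                                   # skip self-comparisons
        small_in = d_in(X[i], X[j]) <= delta_in
        small_out = d_out(Y[i], Y[j]) <= delta_out
        key = ('E' if small_in else 'U') + ('E' if small_out else 'U')
        sets[key].append((i, j))
    return sets

# example metrics: Euclidean input distance, exact-match output distance (for delta_out = 0)
euclid = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
discrete = lambda a, b: 0.0 if np.array_equal(a, b) else 1.0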
The ECS-sets contain all information of the data set which could be used for quality assurance. However, the formatting of the sets is difficult for humans to read. This is especially the case when entire data sets should be analysed and not just a small subset of the data. The solution is a comprehensive representation of the ECS-sets in such a way that interesting data points can easily be identified. Before this can be done, it has to be determined which combinations of data points are the most interesting ones. The expectation would be that similar input data creates output data that is related in some way. Based on this, it can be assumed that a combination of data points with a small input distance also has a small output distance. On the other hand, it would not be expected that data points with large input distances to each other showcase similar output data. The most interesting combinations of data points are thereby combinations which result in a small input distance. These comparisons can be displayed particularly well by sorting the data point comparisons based on the input distance. An example of the sorted representation of the ECS_EE is shown on the right side of figure <ref>. Listed on the x-axis is the comparison between data points. This comparison is data point based and showcases the comparison of any data point with its kth nearest neighbour in the input space. On the y-axis, it is displayed how many of these comparisons are part of the current ECS-set, which is in this case the ECS_EE. In this process, a function is created for every data point. To showcase the entire data set, these functions are superimposed over each other. Every function visually displays if and which of the nearest data points are part of the ECS_EE. The data set which was used to create the displayed ECS_EE is shown on the left side of figure <ref>. It is a simple data set created by two input features (a, b) and one output feature (color and shape). Increasing functions display that most of the data points with the kth smallest distance are part of the current ECS-set. Functions which do not increase display that the comparisons are part of another ECS-set. It should be emphasized here that, at the kth position, a function increases in only one of the four ECS-sets.
The created ECS-histograms consist of a large number of functions. Areas of the ECS-histogram in which large numbers of functions showcase the same behavior are displayed darker. Accordingly, smaller numbers of functions are displayed brighter. For the representation of the number of functions, gamma correction is used. This way, even singular functions should stay visible.
As stated before, it would be expected that a small input distance influences the output distance. An ideal data set would have a strong correlation between the position in the input and in the output space. The resulting data point combinations would have small input and output distances for all the nearest neighbouring data points. This would result in a steep increase of all functions in the ECS_EE until all possible similar data points are combined with each other. From this point on, the functions in the ECS_EE do not increase any further. The ECS_EE function created by a single data point in such an ideal data set is shown schematically in figure <ref>. The main diagonal thereby displays the maximum speed at which a function is able to increase. Data sets or individual data points which are not ideal create different functions. One extreme example would be a function that does not increase at all in the ECS_EE. This would be the case because there are no possible combinations with a small input distance, a small output distance, or both. Which of these possibilities is actually the case can be tested by using the other ECS-histograms.
The benefit of this representation of the information of the data set in the form of the ECS-histogram is the data point based presentation of the information. The neighbouring data points of each data point and their relation to each other are shown and can be compared to set expectations. This basis helps to identify functions which do not live up to the expectations fairly easily. The reason why a function does not behave the way it should is intrinsically given by the combination of the behaviour and the ECS-set. Additionally, it would be possible to define limits in the ECS-histogram which can be tested automatically.
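The per-data-point functions underlying the ECS-histograms can be computed in vectorized form as sketched below. This is again our own illustration (names are ours) for the special case used in the examples of this paper, i.e. a Euclidean input metric and class labels compared with δ_out = 0; entry [i, k-1] of a curve counts how many of the k nearest neighbours of data point i fall into the corresponding ECS-set.

import numpy as np

def ecs_curves(X, Y, delta_in, k_max=200):
    # pairwise input distances (Euclidean) and output distances (0/1 label mismatch)
    D_in = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D_out = (Y[:, None] != Y[None, :]).astype(float)
    np.fill_diagonal(D_in, np.inf)                    # exclude self-comparisons
    order = np.argsort(D_in, axis=1)[:, :k_max]       # k nearest neighbours by input distance
    rows = np.arange(len(X))[:, None]
    small_in = D_in[rows, order] <= delta_in
    small_out = D_out[rows, order] <= 0.0             # identical output (delta_out = 0)
    return {
        'EE': np.cumsum(small_in & small_out, axis=1),
        'EU': np.cumsum(small_in & ~small_out, axis=1),
        'UE': np.cumsum(~small_in & small_out, axis=1),
        'UU': np.cumsum(~small_in & ~small_out, axis=1),
    }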
§ APPLICATION
We want to showcase the abilities of the ECS by applying it to two different data sets. Simultaneously, we want to show how the ECS can be used explicitly to detect certain data quality properties. With this in mind, we created a data set which is used as an example. The data set is created in such a way that the properties detected by the ECS can be verified by displaying the critical data points. In addition, the ECS is used on the MNIST data set <cit.> to display the detection of data quality properties on a commonly known example. The application to the data sets is focused on the data quality properties created by outliers, isolated data points and local groups of data points with identical output values.
§.§ ECS on point cloud
To demonstrate the usage of the ECS, an artificial data set is created, which is displayed on the left side of figure <ref>. This data set is similar to the one displayed in figure <ref>. The most important difference is that the clustered data points cannot clearly be separated from each other, due to the clusters partially overlapping. The data set contains 1000 data points which are grouped in four clusters. As in figure <ref>, each cluster has a different number and a different density of data points.
The data set was created this way because it demonstrates a simple classification task. At the same time, properties like outliers and local groups with identical output are present and can be visualized. In the following, it is shown how these properties can be identified using the ECS. All the ECS-histograms are created using a δ_in of 0.3 times the maximum distance in the input space and a δ_out of 0, to differentiate between all differing outputs.
§.§.§ outliers
An outlier is a data point which has an unexpected output for the given input. This output is typically very different from the output that would be expected. Here, an outlier is only considered to vary in the output space. Unwanted variations in the input space are treated in the following section. The reasons for an outlier can be different: the output may, for example, be wrong, or the data point may showcase a rare but correct case.
Due to their character, outliers appear in areas which are dominated by data points with a different output. Given this information, it can be stated that an outlier has close neighbours with large output distances. The ECS_EU is used to identify these cases. Functions in the ECS_EU increase if there are data point combinations with small input and large output distances. Functions that already increase for the nearest neighbours can thus be regarded as outliers. By targeting these functions, the corresponding outliers can be identified. How many combinations for how many nearest neighbours should be part of the ECS_EU depends on the given data set. In the given point cloud example, the 100 nearest neighbours were chosen to be enough to represent the local data points. If, out of these 100 combinations, more than 70 have a large output distance, then the data point is regarded as an outlier.
The ECS_EU is shown in figure <ref>. The area of importance, in which the functions of outliers appear, is highlighted by a rectangle. It can be noticed that some data points in the point cloud are highlighted, which means that they are considered to be outliers. It can also be noticed that most of the functions do not increase by much.
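Building on the ecs_curves sketch above (X, Y and max_input_distance are assumed to hold the point-cloud inputs, labels and maximum input distance), the selection rules used in this and the following subsections reduce to simple thresholds on the cumulative counts; the cut-offs below are the ones quoted in the text and are otherwise data-set dependent.

# outliers: more than 70 of the 100 nearest neighbours lie in ECS_EU
curves = ecs_curves(X, Y, delta_in=0.3 * max_input_distance, k_max=100)
outliers = np.where(curves['EU'][:, 99] > 70)[0]

# analogous cuts give the other properties discussed below:
#   isolated points : ECS_UE/ECS_UU counts already large for the first neighbours
#   local groups    : curves['EE'][:, 99] >= 95 (at least 95 of 100 neighbours similar)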
§.§.§ isolated data points
Isolated data points are data points which have a large input distance to many or all of their nearest neighbours. This means that such a data point showcases an input which is rare or possibly wrong. In the literature, these types of data points are often referred to as "out-of-distribution data".
The ECS_UE and the ECS_UU are used to identify data points which have large distances to their nearest neighbours. In both ECS-sets, combinations are saved which have large input distances to each other. The difference between these two ECS-histograms is the differently sized output distance, which is not considered for isolated data points. The corresponding functions of isolated data points increase very early in the ECS-histograms. Most of the time, the functions increase in the ECS_UE as well as in the ECS_UU. This is the case because the nearest neighbours themselves may have large input distances to each other and thereby showcase very different outputs. The earlier a function increases, the fewer data points are present in the local area of an isolated data point. The number of neighbouring data points that should exist depends on the given task and data set. Typically, this means that every data point should at least have a few neighbouring data points with a small input distance. If many data points have no near neighbours, an adjustment of the parameter δ_in can be considered.
Using the ECS_UE and the ECS_UU, it can be stated that there are no isolated data points with fewer than 50 close neighbours in the current data set. This is confirmed by the fact that the sample data set used here was created as clusters of data points.
§.§.§ local groups of identical output
A local group with identical output is a structure created by multiple data points. All data points in such a group have small distances to each other regarding the input and the output. There is no greater number of data points which showcase a large output distance, besides possible outliers or false data points. The identification of these groups showcases the ability of the used metric to differentiate between different outputs on the basis of the corresponding input. This means that the input data of the groups share similar features, which in turn leads to the differentiation. It would be possible to solve the given task, at least for these groups, based on these similar features.
The combination of small input distances and small output distances can be identified using the ECS_EE. The functions correlating with data points that are part of a local group with identical output increase strongly. A function will increase as long as there exist data points with small distances in the input and output space. These strongly increasing functions showcase every data point which is part of such a group of data points. Using the ECS_EE, there is also the possibility to identify groups with different numbers of data points. This can be done by choosing the functions increasing the strongest for different numbers of neighbours. If a function has increased up to the chosen number, it means that there is a minimum of this number of data points in the group.
In the given case in figure <ref>, groups with 100 data points and identical output should be identified. The area of importance in the ECS_EE is marked by a rectangle in the upper right corner. It should be noticed that not just the functions increasing the strongest were marked, but also some functions which increase a little bit slower. This has been done to make the identified groups more robust against false data points and outliers. In the given case, this means that functions with 95 out of 100 data points are also regarded as local groups with identical output. In addition, it can be noticed that most of the functions in figure <ref> are increasing. This indicates that there are many data points arranged in local groups. This is plausible because the point cloud was created this way, as four groups of clustered data points. The detected data points which are part of a local group are marked on the left side of figure <ref>.
§.§ ECS on MNIST
In contrast to the previous example, MNIST is a data set which was created to represent the specific task of classifying handwritten numbers. The data set consists of 60000 images of size 28*28 serving as the input and as many numbers between zero and nine for classifying the input.
The most important difference between the previously used point cloud and the MNIST data set is the amount of data and the number of input features. The much greater amount leads to many more functions in the ECS-histograms. The ECS-histograms thereby get more complicated. This can be counteracted by applying more specific metrics to the data. Here, the pixel-wise Euclidean distance is chosen as the metric. The Euclidean distance is typically not used on images due to its bad performance. But in the case of MNIST this metric is applicable, as the image pixels are given as centered grayscale values. It can be shown that the abilities of the ECS are still given using the Euclidean metric. Another problem which appears when using data with many features is the curse of dimensionality, through which all distances are getting closer to each other. As a result, a larger δ_in of 0.75 times the maximum distance in the input space is used in the following. δ_out is still 0, to differentiate between all differing outputs.
To show the input data of the MNIST data set, a representation is used in the following sections. This representation is created by using UMAP <cit.>, a dimensionality reduction method. The clusters which are created this way are marked with a number to showcase the corresponding output. The ECS is used on the original MNIST input and output data.
§.§.§ outliers
As shown in the section "ECS on point cloud – outliers", the ECS_EU is used to identify outliers. The ECS_EU of the MNIST data set for the nearest 200 neighbours is shown in figure <ref>. As mentioned before, this representation has many more functions. These are too dense for any single function to be identified without an interaction. But it can be noticed that most functions do not increase by a lot, as indicated by the darker visualization. The number of functions (|F|) which increase to a specific value of fulfillment (v_f) up to the 200th neighbour is shown in the following table <ref>.
v_f       |F|
101-200    6021
51-100     7337
11-50     14914
0-10      31728
0         16813
Amount of data point combinations which are part of the ECS_EU for the 200 nearest neighbours.
It must be noticed that more than half of the given data points have a maximum of 10 combinations showcasing a small input distance and a large output distance. On the other hand, there are more than 6000 data points for which half of the nearest 200 neighbours have a large output distance. Not all of these are outliers; some may be positioned between classes, others may have badly assigned distances. To identify outliers, only functions performing worse than a random assignment of distances are used. This means that all data points having more than 180 data points with a large output distance among the nearest 200 neighbours are interpreted as outliers. By choosing these functions, 804 data points were identified as outliers. In figure <ref>, a random sample of nine of these outliers is displayed. It is noticeable that all of these data points look strange. Most may also be mistaken for a different number. It would, for example, be possible to remove these data points from the MNIST data set to achieve a higher data quality.
§.§.§ isolated data points
The identification of isolated data points in MNIST is identical to that in the point cloud example. The ECS_UE and ECS_UU used are shown in figure <ref>. In the ECS_UE there are 129, and in the ECS_UU 132, data points with fewer than 200 neighbouring data points. Most of the corresponding data points appear in the ECS_UE as well as in the ECS_UU. Due to the relatively small number of increasing functions, the histogram is rendered darker.
The earliest functions start increasing in the ECS_UE and ECS_UU for fewer than 10 neighbours. The input data of the earliest increasing functions is shown in figure <ref>. One function in the ECS_UE is noticeable due to its steep and early increase. The corresponding input data is shown in figure <ref> in the third image from the left. This data point has a large distance to its closest neighbours. At the same time, most of these neighbours have the same output "4". This leads to the conclusion that the data point still has some of the most important features which are correlated with the output "4", even if the data point is very isolated. Overall, it is noticeable that most of the isolated data points shown use many input pixels to display the number. This is not often the case in the MNIST data set. In addition, the pixel-wise Euclidean distance used reacts especially to pixel-wise differences by assigning higher distances in the input space.
§.§.§ local groups of identical output
The ECS_EE, which is used for the identification of local groups of data points with identical output, is shown on the right side of figure <ref> for the nearest 500 neighbours. As in the case of outlier detection, the number of functions is much larger than in the point cloud example. It is also not possible to identify single functions, but instead overall trends of the functions. It can be noticed that most of the functions increase very steeply. This means that most data points have small input as well as output distances in combination with their nearest neighbours. This in turn means that most data points are located in local groups with identical output. The number of data points which should be part of the groups can be changed by using different numbers of neighbours in the ECS_EE. In table <ref>, the number of data points (|dp|) which belong to local groups of different sizes (gs) is shown. These numbers were obtained by allowing a maximum of 5 data points with a different output, which could exist due to outliers.
Noticeably, most of the data points are located in groups of a few hundred data points. But still, more than 4000 data points could be detected which are part of local groups with more than 1500 data points.
gs      |dp|
100     38383
200     27351
500     14745
1000     7851
1500     4329
Amount of data points which are part of the different sized local groups with identical output. The entire data set contains 60000 data points.
The positions of these data points are highlighted in dark in the UMAP representation in figure <ref>. It can be noticed that especially data points with an output of 1, but also of 0 and 6, show local groups of identical output. This means that the used metric has the ability to differentiate these data points from each other. The local groups which can be identified can then be used to solve the given task, based on their location in the input space.
§ CONCLUSION
In this paper we presented a novel approach for data quality assurance based on local similarities. It was shown how the ECS is calculated and how it can be used on an artificial example. The presented procedure was then used to detect data quality properties in the MNIST data set. Besides the possibility to detect outliers, isolated data points and local groups of similar output, the versatile applicability of the ECS has been shown. The ECS could also be used to validate quantitative data set requirements for data quality properties. These can state the minimum number of elements per group, the number of outliers or a maximum number of local groups. Some of these properties, like the number of accepted outliers in local groups, may depend on associated safety requirements and the required safety integrity level.
|
http://arxiv.org/abs/2307.05728v1 | 20230711185527 | Towards A Scalable Solution for Improving Multi-Group Fairness in Compositional Classification | [
"James Atwood",
"Tina Tian",
"Ben Packer",
"Meghana Deodhar",
"Jilin Chen",
"Alex Beutel",
"Flavien Prost",
"Ahmad Beirami"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CY"
] |
James Atwood (Google), Tina Tian (Google), Ben Packer (Google), Meghana Deodhar (Google), Jilin Chen (Google), Alex Beutel (OpenAI; work done while at Google), Flavien Prost (Google), Ahmad Beirami (Google)
Correspondence: James Atwood <[email protected]>, Tina Tian <[email protected]>, Ahmad Beirami <[email protected]>
Despite the rich literature on machine learning fairness, relatively little attention has been paid to remediating complex systems, where the final prediction is the combination of multiple classifiers and where multiple groups are present. In this paper, we first show that natural baseline approaches for improving equal opportunity fairness scale linearly with the product of the number of remediated groups and the number of remediated prediction labels, rendering them impractical. We then introduce two simple techniques, called task-overconditioning and group-interleaving, to achieve a constant scaling in this multi-group multi-label setup. Our experimental results in academic and real-world environments demonstrate the effectiveness of our proposal at mitigation within this environment.
§ INTRODUCTION
The literature around group fairness is relatively rich when we consider a binary classifier and desire to satisfy group fairness for a (binary) group <cit.>.
However, many real-world applications go beyond a single binary decision and we are often faced with multi-label systems where the end decision is a composition of the individual labels <cit.>.
In this paper, we study a multi-label classification system where several binary classification decisions are combined to make a final prediction for any given input. We consider a special composite classifier where the overall system decision is 1 if any of the individual binary classifier outputs are 1. For example, consider a content moderation system for an online forum that predicts whether a given comment is toxic, insulting, or attacking identity, and hides a comment if any of the predictions are positive <cit.>. How do we perform classification based on the combination of these individual predictions, and achieve specific group fairness goals? This problem, which is a special instance of the compositional fairness <cit.>, is the focus of this paper.
In addition, we study a multi-group setting in which we are interested in the fairness of the classifier system with respect to many groups.
Among the in-process mitigation techniques <cit.>, we focus our mitigation strategy on the MinDiff technique <cit.> for improving equality of opportunity <cit.> in classifiers. MinDiff has proven to be effective at inducing equality of opportunity while maintaining overall classifier performance across a variety of tasks by relying on the maximum mean discrepancy (MMD) estimators <cit.>. Importantly, MinDiff can be successfully applied to environments when instances labeled with group membership are very sparse, by using a dedicated data streams to ensure that each mini-batch contains a constant number of group labeled examples.
A natural baseline in this scenario is an extension of MinDiff to fairness mitigation in multigroup, multilabel environments, where one regularizer is introduced per group and per classifier. However, this baseline causes the batch size to scale linearly in both the number of groups and the number of prediction tasks being remediated. Even for a small number of groups and a small number of classifiers, this can quickly grow out of hand to the extent that the baseline becomes impractical, especially when the underlying classifier is already expensive to train.
This significantly increases resource usage and slows training as the number of groups and prediction tasks grows.
We propose two simple optimization techniques to achieve this fairness goal with a constant scaling, with empirical verification. Our contributions are summarized below:
* Task-overconditioning: The natural extension of MinDiff requires a batch of negative examples for each label, resulting in a linear scaling with the number of classifiers. Instead, task-overconditioning suggests using a single batch that contains the negative examples across all labels.
We argue that task-overconditioning further aligns the overall optimization objective to that of mitigating the overall compositional decision, which is our goal, while also achieving a constant scaling with the number of individual classifiers.
* Group-interleaving: The natural extension of the mitigation solution requires a batch of negative examples with respect to each group at each iteration. Instead, group-interleaving makes the optimization objective stochastic with respect to groups at each iteration, allowing a constant scaling with the number of groups.
* Empirical verification: We empirically show that our proposed method, which combines overconditioning and group-interleaving results in equal or better Pareto frontiers than baseline methods, with significant training speedup, on two datasets.
Related Work.
Methods for improving group fairness can generally be categorized in three main classes: pre-processing, post-processing, and in-processing methods. Pre-processing algorithms <cit.> transform the biased data features to a new space in which the labels and sensitive attributes are statistically independent. Post-processing approaches <cit.> achieve group fairness properties by altering the final decision of the classifier.
The focus of this paper is on in-processing methods, which introduce constraints/regularizers for improving fairness in training. These methods have been empirically shown to produce a more favorable performance/fairness Pareto tradeoff compared to other methods <cit.>. These include approaches <cit.> for decision trees <cit.>, support vector machines <cit.>, boosting <cit.>, neural networks <cit.>, and (logistic) regression models <cit.>.
See the recent paper by <cit.> for a more comprehensive literature survey. We focus in this paper on the MinDiff technique, which has been successful across tasks <cit.>.
This paper is also broadly related to the compositional fairness literature <cit.>. In contrast to these works, we focus on a narrower sense of compositionality (only the intersection) for which we derive a scalable specialized solution.
§ BACKGROUND & PROBLEM SETUP
Here, we formally provide the problem setup. Let (x, {y_t}_t ∈ [T]) represent a feature and a set of T binary labels, where x ∈𝒳, and y_t ∈{0, 1} for all t ∈ [T][We define [T]:= {1, …, T}.]. In our setup, the
overall decision is a simple composite function of the individual labels: y = max_t ∈ T{y_t},
i.e., y = 1 if and only if there exists t ∈ [T] s.t. y_t = 1.
We consider a scenario where we train T individual predictors {Ŷ_t(x; θ)}_t ∈ [T] in a multi-label setup, where Ŷ_t(x; θ) ∈{0, 1} is a binary classifier from features x, and θ represents model parameters.[We often drop (x; θ) for brevity and refer to the output of the t-th classifier as Ŷ_t.] Similarly, the overall model prediction is given by
Ŷ = max_t ∈ T{Ŷ_t},
i.e., Ŷ = 1 if and only if there exists t ∈ [T] s.t. Ŷ_t = 1. In other words, we predict that the overall label is 1 when any of the underlying classifiers is triggered. As explained before, this setup is common in many applications, where the final decision (e.g. rejecting a comment <cit.>) based on Ŷ depends on many sub-decisions Ŷ_t (properties of the customer or comment).
Our goal is to optimize for fairness in the equal opportunity sense <cit.> for the overall model prediction with respect to multiple group memberships.[Fairness of individual predictors is desirable but not required.] Let the set 𝒢 capture all groups for which we would like to improve fairness, and let G_m ∈{0, 1} denote the indicator of membership in group m, for m ∈ [|𝒢|]. We say that the overall prediction satisfies (overall) equal opportunity if, for all m ∈ [|𝒢|],
P(Ŷ = 1 | G_m = 0, Y = 0) = P(Ŷ = 1 | G_m = 1, Y = 0),
Note that in this paper, we do not consider the intersectional fairness setting <cit.> where the goal is to ensure fairness to all intersections of group memberships; see Appendix <ref>.
While there are numerous ways to optimize for fairness in machine learning, specifically in equal opportunity sense, the methods that achieve better fairness/performance Pareto frontiers have been empirically observed to be mostly in-processing methods <cit.>, where a regularizer is added to the (cross-entropy) training loss to mitigate the model fairness gap. The regularizer is usually of the form:
D(Ŷ(θ), G_m | Y = 0)
where D(·, ·) is a proper divergence between two random variables.
Notice that in our problem setup Ŷ(θ) is not a differentiable function of the task-level predictors. Hence, we cannot use it directly to regularize the training of the individual task-level classifiers via backpropagation. This situation occurs when task-level predictors are not trained jointly or are even owned by different teams in an organization.
One intuitive solution to remediate this multi-label setup is to ensure that each individual classifier is fair for each group <cit.>. This intuitive design is motivated by previous work <cit.> which finds that fairness of individual predictors might be sufficient to improve fairness of the overall system, even if there are no theoretical guarantees. We refer to this objective as task-level equal opportunity:
For any task t ∈ [T] and any group m ∈ [|𝒢|],
P(Ŷ_t = 1 | G_m = 0, Y_t = 0) = P(Ŷ_t = 1 | G_m = 1, Y_t = 0).
§ BASELINE: MANY MINDIFF REGULARIZERS
While there are many effective methods for solving task-level equal opportunity in (<ref>), as discussed in related work, here we focus on an adaptation of MinDiff <cit.> to the multi-group multi-label classification case. This regularization-based approach has a number of advantages. First, it does not require group labels at inference time, which is often true for real-world applications. Next, it has been empirically demonstrated to be effective at remediating fairness issues while still maintaining overall performance <cit.>. Finally, it is designed to be effective when group-labeled instances are rare even in training data.
This MinDiff technique introduces a new loss term based on maximum mean discrepancy (MMD) to promote (conditional) independence between the predictions and sensitive group <cit.> per each group and each task. More precisely, the loss becomes:
L_MinDiff = L_CE(Ŷ, Y) + λ∑_t ∈ T∑_m ∈ [|𝒢|] R_t,m,
where L_CE is the empirical cross-entropy loss, λ is a hyperparameter that sets the relative strength of the entropy and MMD loss,[In practice, one can tune the MinDiff strength for each regularizer at the expense of a complex hyperparameter tuning.] and
R_t,m=MMD(Ŷ_t | Y_t = 0, G_m = 0; Ŷ_t | Y_t = 0, G_m = 1).
Computing R_t,m requires negatively labeled instances (Y_t = 0) for both group membership cases (G_m = 0 and G_m = 1). In practice, instances with group membership information are much less frequently available than those without. MinDiff handles this by creating dedicated data streams for group-labeled instances that ensure that every batch has the data required to compute the MMD kernel component of the loss. As the number of groups and prediction tasks increases, this leads to O(T · |𝒢|) data streams that must be stored and a T · |𝒢| multiplier on the batch size.
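To make the scaling issue concrete, the following minimal Python/numpy sketch computes the baseline penalty λ∑_t∑_m R_t,m with a Gaussian-kernel MMD estimator. It is illustrative only: the kernel bandwidth, function names, and the use of plain numpy over full batches are assumptions, and it does not reproduce the dedicated per-group, per-task data streams of the actual MinDiff implementation.

import numpy as np

def gaussian_kernel(a, b, sigma=0.5):
    # Pairwise Gaussian (RBF) kernel between two 1-D arrays of predictions.
    d = a[:, None] - b[None, :]
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

def mmd2(p, q, sigma=0.5):
    # Biased estimator of the squared maximum mean discrepancy between
    # the two prediction distributions.
    return (gaussian_kernel(p, p, sigma).mean()
            + gaussian_kernel(q, q, sigma).mean()
            - 2.0 * gaussian_kernel(p, q, sigma).mean())

def baseline_mindiff_penalty(preds, labels, groups, lam=1.5):
    # preds, labels: arrays of shape (N, T); groups: dict mapping a group
    # name to a boolean membership array of shape (N,).  Each R_{t,m}
    # compares predictions on Y_t = 0 examples inside vs. outside group m.
    total = 0.0
    for t in range(preds.shape[1]):
        neg_t = labels[:, t] == 0                  # condition on Y_t = 0
        for g in groups.values():
            in_g, out_g = preds[neg_t & g, t], preds[neg_t & ~g, t]
            if len(in_g) and len(out_g):
                total += mmd2(out_g, in_g)         # R_{t,m}
    return lam * total                             # added to the CE loss

Every term R_{t,m} needs its own slice of group-labeled negatives for task t, which is what forces the O(T · |𝒢|) growth in data streams and batch size described above.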
§ PROPOSED METHOD: MINDIFF-IO
We now describe our proposed method, MinDiff-IO, which is built on two main components: Group-Interleaving and Task-Overconditioning. The central insight behind these two approaches is that we can still accomplish our goal, overall equal opportunity defined by Equation (<ref>), by optimizing a slightly different objective that is better aligned and is easier to compute. We describe these techniques in the subsequent sections.
§.§ Task-Overconditioning
The baseline method that targets task-level equal opportunity has a number of data streams and batch size that scales linearly with T, making it intractable for systems where T might be large (e.g., O(100)). Additionally, it does not necessarily imply overall equal opportunity <cit.>, Equation (<ref>), which is what is desired.
In this section, we present our proposal towards satisfying overall equal opportunity in this compositional decision system. We provide limited theoretical motivation for why it might be more aligned with the overall fairness objective under restrictive assumptions. We shall also see in the experimental section (where those restrictive assumptions are not satisfied) that it leads to equal or better fairness/performance Pareto frontiers.
For all t ∈ [T],
P(Ŷ_t = 1 | G_m = 0, Y = 0) = P(Ŷ_t = 1 | G_m = 1, Y = 0).
Note that, unlike Equation (<ref>), we condition on all labels having negative truth. This has the effect of requiring only one dataset for all tasks when computing loss rather than T datasets.
Let task-level classifiers {Ŷ_t(x; θ)}_t ∈ [T] be such that for all x ∈𝒳, and for any t ≠τ,
Ŷ_t(x; θ) Ŷ_τ(x; θ) = 0.
In other words, the classifiers don't trigger simultaneously; if Ŷ_t = 1 then Ŷ_τ = 0 for all τ≠ t.
Notice that Assumption <ref> is a strong assumption as it requires the classifiers to have non-overlapping coverage, which is not necessarily satisfied in practice. For example, in the content moderation example, a comment might be toxic, insulting, and attacking identity at the same time. While this assumption is very restrictive, we show that under this scenario overconditioning is perfectly aligned with the goal of mitigating overall classifier. We also don't need this assumption for our empirical results, which show improvements over the baseline classifier.
If Assumption <ref> is satisfied, then Definition <ref> (overconditioned task-level equal opportunity) implies Definition <ref> (overall equal opportunity).
The proof is relegated to the appendix. Lemma <ref> determines a scenario where overconditioning task-level equal opportunity indeed implies the desired overall equal opportunity. Notice that even under Assumption <ref>, Definition <ref> is a stronger requirement than Definition <ref>, and is not implied by it. In other words, we might be able to satisfy the overall equal opportunity and yet the overconditioning equal opportunity might not be satisfied for all task-level classifiers.
To solve for Task-Overconditioning, we adapt MinDiff loss as follows:
L_MinDiff-O = L_CE(Ŷ, Y) +
λ∑_t ∈ T∑_m ∈ [|𝒢|] R^O_t,m,
where
R^O_t,m=MMD(Ŷ_t | Y = 0, G_m = 0; Ŷ_t | Y = 0, G_m = 1).
Note that there will be fewer data instances that are suitable for computing (<ref>), which requires all labels to be jointly negative, than (<ref>), which only requires individual labels to be negative. We have not found this to be an issue in practical applications where positive label incidence is low.
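A minimal sketch of the corresponding overconditioned penalty, reusing the mmd2 helper from the previous sketch (again an illustration under assumed names, not the production implementation), is:

def overconditioned_penalty(preds, labels, groups, lam=1.5):
    # A single overconditioned slice, i.e. examples that are negative on
    # *all* tasks (Y = max_t Y_t = 0), serves every task head at once.
    all_neg = (labels == 0).all(axis=1)
    total = 0.0
    for t in range(preds.shape[1]):
        for g in groups.values():
            in_g, out_g = preds[all_neg & g, t], preds[all_neg & ~g, t]
            if len(in_g) and len(out_g):
                total += mmd2(out_g, in_g)         # R^O_{t,m}
    return lam * total

Because the same jointly negative slice feeds all T regularizers, only one group-labeled side stream per group is required, rather than one per (task, group) pair.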
§.§ Group-Interleaving
MinDiff was originally designed to present remediation data from all groups to the model at each iteration. However, we can reduce the complexity of computing the MinDiff regularizer further by presenting only one group per batch to the model. In this case, the loss becomes:
L_MinDiff-IO = L_CE(Ŷ, Y) + R^O_M
where M is a random index supported on [|𝒢|]. In other words, here we remediate against a random draw from the groups at each iteration of the algorithm. Notice that the new loss is the same as the task-overconditioned loss in expectation, and is expected to converge to a stationary point of the same objective. On the other hand, when combined with Task-Overconditioning, the loss can be computed with only O(1) extra instances in each batch, with no dependence on |𝒢| and T.
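Combining both ideas, a sketch of the interleaved update (illustrative; the uniform sampling over groups and the helper names are assumptions) draws one group per training step:

import numpy as np

rng = np.random.default_rng(0)

def interleaved_penalty(preds, labels, groups, lam=1.5):
    # Draw a single random group M for this step and apply only its
    # overconditioned regularizer; in expectation over M this is
    # proportional to the full sum over groups, but each batch needs only
    # one extra group-labeled slice regardless of |G| and T.
    name = rng.choice(list(groups))
    g = groups[name]
    all_neg = (labels == 0).all(axis=1)
    total = 0.0
    for t in range(preds.shape[1]):
        in_g, out_g = preds[all_neg & g, t], preds[all_neg & ~g, t]
        if len(in_g) and len(out_g):
            total += mmd2(out_g, in_g)
    return lam * total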
§ EVALUATION METRICS
For each binary group membership, i.e., G_m ∈{0, 1}, where G_m = 1 is considered the minority group membership, we quantify the fairness gap through the following interchangeable metrics that are expressed in terms of the absolute gap and the ratio of the two groups:
d_EO, m = |FPR_G_m = 1 - FPR_G_m = 0|,
and
r_EO, m = FPR_G_m = 1 / FPR_G_m = 0,
where P̂ denotes the empirical distribution over a test set of N i.i.d. samples from P_XY, and for i ∈{0, 1},
FPR_G_m = i := P̂(Ŷ = 1 | G_m = i, Y = 0).
To measure the classification performance we both compute the Area Under the ROC Curve (ROC AUC) of the classifier as well as accuracy.
Finally, to measure speed, we report the number of iterations per second achieved during model training.
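For reference, a short sketch of how these metrics can be computed from binary predictions on a test set (the function and variable names are ours, and the degenerate case FPR_{G_m = 0} = 0 is not handled):

import numpy as np

def equal_opportunity_metrics(y_hat, y, g):
    # y_hat, y, g: binary arrays of overall predictions, overall labels,
    # and membership in group m (1 = minority group).  The mean of the
    # binary predictions over ground-truth negatives is the empirical FPR.
    neg = (y == 0)
    fpr1 = y_hat[neg & (g == 1)].mean()
    fpr0 = y_hat[neg & (g == 0)].mean()
    return {"d_EO": abs(fpr1 - fpr0), "r_EO": fpr1 / fpr0}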
§ EXPERIMENTS
We run two experiments. The first experiment provides the Pareto frontier of fairness vs performance for each approach using a publicly-available academic dataset, and the second provides the performance, fairness, and speed of a real-world policy enforcement classifier at a particular operating point with each of the proposed approaches. Overall, these experiments show that MinDiffIO provides equal or better fairness/performance while improving training speed.
§.§ Civil Comments
The first set of experiments is run on the Civil Comments Dataset <cit.>; details are given in Appendix <ref>. Civil Comments contains comment text and seven associated crowd-annotated labels related to the `civility' of the comment; whether the comment is an insult, toxic, or attacking identity, and so on. We use the subset of the data that is labeled with group information. Groups are related to race, ethnicity, gender, disability, and sexuality.
We train comment classifiers on three of the seven labels and combine the predictions into a system-level prediction: a comment is classified as unsafe if any of the predictions is unsafe. We compare this with `direct remediation' where a classifier is trained to predict the system-level label (the logical OR of the component labels) rather than the components. In addition, we compare with `component-based' remediation where MinDiff or MinDiff-IO is applied to component classifiers. Results are shown for one group (Black) in Figure <ref>; results for other groups and component-level results for all predictions and groups can be found in Appendix <ref>.
The plot displays the tradeoff between fairness and performance as the hyperparameter λ is varied. As λ increases, the contribution of the MMD component of the loss grows, leading to increased fairness at the cost of performance.
We observe that the component-based approaches offer better performance for a given fairness than direct remediation. Also, MinDiff and MinDiff-IO offer qualitatively similar results.
§.§ Product Policy Compliance Detection
We now study a real-world system, which is responsible for filtering out examples that break the product policy. This is similar to the literature on toxic comment detection <cit.> or hateful speech filtering <cit.>. To reflect the different facets of the product, a set of rules (10-1000) is defined, and an example is against product policy if any rule is broken. In practice, we use individual classifiers to predict each rule, and an example is filtered out if any individual soft prediction reaches a certain threshold. Samples can be categorized into two sensitive attributes (each considered as binary) and we want to guarantee fairness to samples from each group, which lends itself to the multi-label, multi-group classification setting.
Note that a false positive of this system is a user harm because policy-following content is flagged as policy-violating. Our goal is to reduce the gaps between false positive rates between minority groups and a baseline population.
We first evaluate the initial system without any remediation and find that two groups have high false positive rate differences. Our goal is to design the mitigation strategy that reduces the observed gaps on the final policy (gap from Definition <ref>) for both groups, while maintaining good performance (measured by AUCPR) and training speed (training steps/sec).
In Table <ref>, we show four different remediation approaches. The first approach, unremediated, has some performance, fairness, and speed characteristics that we compare other approaches to. The second approach, baseline, is unworkably slow, so we are unable to run experiments or provide results. The third approach, which introduces Task Overconditioning, reduces fairness gaps with a minor hit to performance and a major hit to speed. Finally, the fourth approach adds Group Interleaving to mitigate the speed impact while maintaining similar fairness and performance characteristics.
§ CONCLUSION
Prior in-process equal opportunity remediation methods suffer from poor (linear) scaling in the number of prediction tasks and number of groups to remediate, making existing techniques sometimes impossible to apply to real-world scenarios. We present Mindiff-IO, a new method that builds on the MinDiff approach to provide constant scaling with respect to tasks and groups. We show that Mindiff-IO provides similar performance and fairness characteristics to MinDiff while scaling much better in multilabel and multigroup environments through experiments with both academic and real-world datasets.
The limitations of this work are provided in Appendix <ref>.
§ ACKNOWLEDGEMENTS
We would like to thank Preethi Lahoti, Ananth Balashankar, Lucian Cionca, and Katherine Heller for their constructive feedback on this paper.
icml2022
§ LIMITATIONS
There are three limitations to the approaches mentioned here. First, overconditioning requires instances that have negative ground truth for all modeled labels[Note that the ground truth negative requirement is present when optimizing for equality of opportunity with respect to false positive rates. If equality of opportunity with respect to false negative rates were the goal, the method would instead require ground truth positives.] in order to compute the Min Diff loss. This is a realistic environment; for instance, a policy dataset where policy-violating content is rare. However, if true positives are very common, this method may no longer be effective.
The second limitation is with respect to intersectional group fairness. The interleaving optimization described in Section <ref> does not explicitly represent or remediate the intersection of groups. Intersectional remediation is a more difficult problem due to the exponential scaling of the number of intersections with respect to the number of groups. We opted not to remediate intersections because of the sparsity of our group labels - very few instances are labeled with more than one group. We believe that techniques that effectively and efficiently address intersectional remediation are an interesting area for future work.
Third, we only consider MinDiff-based techniques in this paper and demonstrate that Mindiff-IO has better scaling characteristics than the original MinDiff approach. Future work could compare the fairness, performance, and scaling properties of Mindiff-IO with other methods of achieving equal opportunity. In addition, future work could test the application of interleaving and overconditioning to other in-processing methods.
§ PROOFS, EXPERIMENT DETAILS, AND FURTHER RESULTS
§.§ Proof of Lemma <ref>
The proof is completed by noting that
P(Ŷ = 1 | G = 0, Y = 0) =
P(max_t ∈ [T]Ŷ_t = 1 | G = 0, Y = 0)
= ∑_t ∈ [T] P(Ŷ_t = 1 | G = 0, Y = 0)
= ∑_t ∈ [T] P(Ŷ_t = 1 | G = 1, Y = 0)
= P(max_t ∈ [T]Ŷ_t = 1 | G = 1, Y = 0)
= P(Ŷ = 1 | G = 1, Y = 0),
where (<ref>) follows from Assumption <ref>, and (<ref>) follows from Definition <ref>, and (<ref>) follows from Assumption <ref>.
§.§ Civil Comments Experimental Details
For these experiments we select three labels (identity attack, insult, and toxicity) as well as four groups (black, gay or lesbian, female, and transgender) for modeling and remediation. Our model consists of a single hidden layer deep neural network that takes a simple hashing trick bag of words vectorization of the comment text as input. The hidden layer and text vector have 64 and 1,000 elements, respectively.
All models are trained for 25 epochs with a learning rate of 0.1 and a Gaussian kernel weight of 1.0.
We present empirical Pareto frontiers of fairness (here, the absolute value of the difference between false positive rates for a group and baseline) and performance (here, ROC AUC). Thresholds for the fairness dimension are selected through calibration on a validation set.
§.§ Detailed Civil Comments Results
System-level results for all four groups are shown in Figure <ref>. Note that, in each case, component-based techniques outperform direct remediation by offering a higher performance for a given fairness.
Component results for each group, label pair are shown in Figure <ref> for both the MinDiff and Mindiff-IO techniques. These Pareto frontiers are generated by varying the hyperparameter λ, where higher λ values put more weight on the MinDiff loss term and lead to improved fairness at the cost of performance. Each data point in the plot is generated by training a model five times; the crosses in each dimension represent 95% confidence intervals.
Note that each approach achieves a similar Pareto frontier, indicating the Mindiff-IO has similar performance and fairness characteristics. In other words, this experiment confirms that Mindiff-IO does not sacrifice classifier fairness or performance for individual classifiers. In the next experiment, we will provide training speed measurements to demonstrate the scaling advantages of Mindiff-IO.
|
http://arxiv.org/abs/2307.04866v1 | 20230710192545 | Automated Detection of Gait Events and Travel Distance Using Waist-worn Accelerometers Across a Typical Range of Walking and Running Speeds | [
"Albara Ah Ramli",
"Xin Liu",
"Kelly Berndt",
"Chen-Nee Chuah",
"Erica Goude",
"Lynea B. Kaethler",
"Amanda Lopez",
"Alina Nicorici",
"Corey Owens",
"David Rodriguez",
"Jane Wang",
"Daniel Aranki",
"Craig M. McDonald",
"Erik K. Henricson"
] | eess.SP | [
"eess.SP",
"cs.AI",
"cs.LG"
] |
Automated Detection of Gait Events and Travel Distance
Albara Ah Ramli et al.
mode = title]Automated Detection of Gait Events and Travel Distance Using Waist-worn Accelerometers Across a Typical Range of Walking and Running Speeds
1]Albara Ah Ramli
Conceptualization, Methodology, Software, Formal analysis, Writing, Supervision, Validation, Visualization, Investigation, Data Curation
1]Xin Liu
Writing - Review and Editing, Methodology, Supervision
2]Kelly Berndt
Investigation, Data Curation, Writing - Review and Editing
3]Chen-Nee Chuah
Writing - Review and Editing, Methodology, Supervision
2]Erica Goude
Investigation, Supervision, Writing - Review and Editing
2]Lynea B. Kaethler
Investigation, Data Curation, Writing - Review and Editing
2]Amanda Lopez
Investigation, Data Curation, Writing - Review and Editing
2]Alina Nicorici
Investigation, Data Curation, Methodology
4]Corey Owens
Investigation, Data Curation
2]David Rodriguez
Investigation, Data Curation, Writing - Review and Editing
2]Jane Wang
Investigation, Data Curation, Writing - Review and Editing
5]Daniel Aranki
Conceptualization, Methodology, Software, Analysis
2]Craig M. McDonald
Conceptualization, Resources, Funding acquisition
2]Erik K. Henricson[type=editor,auid=000,bioid=1,prefix=,role=,orcid=0000-0002-4617-225X]
Conceptualization, Methodology, Software, Formal analysis, Writing, Supervision, Funding acquisition, Investigation
[1]
[email protected]
[1]organization=Department of Computer Science, School of Engineering; University of California,vaddressline=1 Shields Ave, city=Davis,postcode=95616, state=CA,country=USA
[2]organization=Department of Physical Medicine and Rehabilitation, School of Medicine; University of California,addressline=1 Shields Ave, city=Davis, postcode=CA 95616, state=CA,country=USA
[3]organization=Department of Electrical and Computer Engineering, School of Engineering; University of California,vaddressline=1 Shields Ave, city=Davis,postcode=95616, state=CA,country=USA
[4]organization=UC Davis Center for Health and Technology, School of Medicine; University of California Davis,addressline=1 Shields Ave, city=Davis, postcode=CA 95616, state=CA,country=USA
[5]organization=Berkeley School of Information; University of California Berkeley,addressline=1 Shields Ave, city=Berkeley, postcode=CA 94720, state=CA,country=USA
[cor1]Corresponding author
Background: Estimation of temporospatial clinical features of gait (CFs), such as step count and length, step duration, step frequency, gait speed and distance traveled is an important component of community-based mobility evaluation using wearable accelerometers. However, challenges arising from device complexity and availability, cost and analytical methodology have limited widespread application of such tools. Research Question: Can accelerometer data from commercially-available smartphones be used to extract gait CFs across a broad range of attainable gait velocities in children with Duchenne muscular dystrophy (DMD) and typically developing controls (TDs) using machine learning (ML)-based methods Methods: Fifteen children with DMD and 15 TDs underwent supervised clinical testing across a range of gait speeds using 10 or 25m run/walk (10MRW, 25MRW), 100m run/walk (100MRW), 6-minute walk (6MWT) and free-walk (FW) evaluations while wearing a mobile phone-based accelerometer at the waist near the body’s center of mass. Gait CFs were extracted from the accelerometer data using a multi-step machine learning-based process and results were compared to ground-truth observation data. Results: Model predictions vs. observed values for step counts, distance traveled, and step length showed a strong correlation (Pearson’s r = -0.9929 to 0.9986, p<0.0001). The estimates demonstrated a mean (SD) percentage error of 1.49% (7.04%) for step counts, 1.18% (9.91%) for distance traveled, and 0.37% (7.52%) for step length compared to ground truth observations for the combined 6MWT, 100MRW, and FW tasks. Significance: The study findings indicate that a single accelerometer placed near the body’s center of mass can accurately measure CFs across different gait speeds in both TD and DMD peers, suggesting that there is potential for accurately measuring CFs in the community with consumer-level smartphones.
* Extracting CFs using a single accelerometer at varying speeds in DMD and TD peers.
* ML-based method to estimate CFs such as steps, distance, duration, length, and speed.
* Compare the estimated CFs with the ground truth observations and pedometer.
* Suggests that CFs can be measured in the community without using GRF.
Temporospatial gait clinical featuresDuchenne muscular dystrophyTypically-developingAccelerometer Machin Learning Gait cycle
[
[
August 12, 2023
===================
§ INTRODUCTION
Accelerometers can be more accurate than pedometers at slower walking speeds and in populations with atypical gait patterns, making pedometers less suitable for evaluating physical activity in such populations <cit.>. Estimating temporospatial clinical features (CFs) of gait (step length, step duration, step frequency, and gait speed) is a fundamental step in gait analysis, and detecting the initial contact (IC) of the heel is crucial for identifying gait events and the beginning of the step cycle. In a laboratory environment, detecting events and estimating CFs is typically done by measuring ground reaction forces (GRF) and verifying with visual observation. However, using these methods to measure gait events in the community is often impractical.
Studies have described the potential of using acceleration signals to estimate CFs. Several studies have demonstrated that step length, gait speed, initial contact (IC), and incline can be determined from acceleration signals of the lower trunk <cit.>. Aminian and colleagues explored the feasibility of using a fully connected artificial neural network (ANN) with accelerometers on the trunk and heel to predict incline and speed based on ten statistical parameters extracted from the raw signal <cit.>. Results revealed that a negative peak in the heel accelerometer signal indicates IC events in each gait cycle (two steps).
Studies comparing accelerometer signals from different body positions at various walking speeds demonstrate that positions near the body’s center of mass (trunk, waist, pelvis, and sacrum) are suitable for capturing gait events <cit.>. In a study by Zijlstra et al., participants walked on a force-transducing treadmill and overground while trunk acceleration data was recorded to estimate step lengths and walking speed. Initial contact (IC) events were matched with vertical ground reaction force (GRF) normalized by body weight to anteroposterior acceleration. The start and end of gait cycles from the GRF corresponded with the time of the peak amplitude value in the anteroposterior acceleration signal <cit.>. Further research by Lee et al. and Mo et al. demonstrated that IC events can be determined from anteroposterior acceleration measured at the pelvis and sacrum <cit.>. They collected accelerometer signals from the pelvis/sacrum and GRF data, and matched IC events on anteroposterior acceleration with vertical GRF. Initial contact events on the force plate corresponded with the instant of the positive peak pelvis/sacrum anteroposterior acceleration<cit.>.
We present a machine learning (ML)-based method that automates detection of initial contact (IC) events and clinical features of gait (CFs) using raw accelerometer signals obtained from consumer mobile devices <cit.>. We demonstrate that using a single accelerometer worn close to the body's center of mass is an accurate and reliable approach to estimate CFs and IC events across a typical range of walking speeds. This method can be applied to healthy individuals and those with gait disturbances without the need for ground reaction force (GRF) measurements.
§ MATERIALS AND METHODS
Estimating distance using accelerometer signals is challenging due to the inherent quadratic error of accelerometers, which can result in deteriorating estimates even with short integration times and distances. Many methods attempt to estimate distance from accelerometers by integrating acceleration twice with respect to time; even when incorporating error-limiting mechanisms and setting restrictions, these can suffer from errors due to noise, drift, and bias <cit.>. We propose an ML-based signal processing method that accurately estimates an individual's distance traveled, step length, and number of steps across varying walking/running speeds, outperforming the built-in pedometer function on iPhones, which shows the highest error percentage at slow walking speeds <cit.>.
Because different individuals have different walking/running behaviors that affect acceleration, we built a regression model for each individual to estimate distance based on their specific walking/running patterns. We developed a regression model using data from five different speeds (SC-L1 to SC-L5) to map step length to the corresponding anteroposterior acceleration amplitudes using pairs of distance and acceleration values (Figure <ref>A). We calculated distance for a single speed by averaging the step distances, while the acceleration was calculated by averaging the maximum values of acceleration in each step (Figure <ref>B).
To ensure a fair comparison, we evaluated three sources of estimated data: first, ground-truth data based on video observation of distance traveled and number of steps; second, the pedometer sensor in the iPhone, which provided estimates of distance and number of steps; and third, our Walk4Me system <cit.>, which includes calibration regression models for estimating distance and a signal processing algorithm for measuring number of steps. We estimated the speed, step length, and frequency as derivatives from the regression and signal processing.
§.§ Participants
Fifteen children with Duchenne muscular dystrophy (DMD) and fifteen typically developing (TD) peers participated in gait speed experiments. The age of the participants ranged from 3 to 16 years, with a mean age of 8.6 years and a standard deviation of 3.5. Their body weight ranged from 17.2 to 101 kg, with a mean weight of 36 kg and a standard deviation of 18.8. Their height ranged from 101.6 to 165.5 cm, with a mean height of 129 cm and a standard deviation of 15.8. All participants had at least 6 months of walking experience and were able to perform a 10-meter walk/jog/run test in less than 10 seconds. Participants with DMD had a confirmed clinical diagnosis and were either naïve to glucocorticoid therapy or on a stable regimen for at least three months. Northstar Ambulatory Assessment (NSAA) scores for DMD participants ranged from 34 to 8, indicating typical levels of function to clinically-apparent moderate mobility limitation (Table-<ref>). The protocol was reviewed and approved by the Institutional Review Board (IRB) at the University of California, Davis, and informed consent was obtained from each participant prior to the initiation of study procedures. Measurements were taken at eight different walking/running gait activities, including speed-calibration tests at slow walk to running speeds (SC-L1, SC-L2, SC-L3, SC-L4, and SC-L5), a 6-minute walk test (6MWT), a 100-meter fast-walk/jog/run (100MRW), and a free walk (FW).
§.§ Equipment
Acceleration data from each participant were sampled at a rate of 100 Hz using an iPhone 11 and our Walk4Me smartphone application <cit.>. The phones were securely attached at the waist with an athletic-style elastic belt enclosure, positioned approximately at the level of the lumbosacral junction. The raw accelerometer signal was synchronized with video recordings captured by a GoPro camera at a rate of 30 Hz. An observer marked the events where a participant passed the start or end of the duration or distance assigned to each activity using the web portal of the Walk4Me system.
§.§ Gait and Events Detection and Data Analysis
We collected the raw accelerometer signal from 30 participants, which included the x, y, and z axes (vertical, mediolateral, and anteroposterior), along with the corresponding timestamps. Based on the findings of Zijlstra <cit.>, we observed that the initial contact (IC) events were more distinguishable in the anteroposterior axis (z-axis) compared to the other axes. Therefore, we used the anteroposterior signal from the raw accelerometer data to develop our method for counting the number of steps, estimating step length, and calculating the total distance individuals walked at different speeds.
§.§.§ Method of Step Detection
Figure <ref>A presents a raw accelerometer signal of the anteroposterior movement (z-axis) from a typically developing (TD) participant during fast walk speed calibration (SC-L4) for 3.9 seconds. The steps in the anteroposterior signal are characterized by long wavelengths (low frequency), while other wavelengths (high frequency) represent noise signals. To extract the steps, we applied a low-pass filter to the signal to smooth the signal and remove short-term fluctuations while preserving the longer-term trend (Figure <ref>A and Figure <ref>B). We then identified the peak values of the filtered signal, as the peaks occur only once per step in the filtered signal (Figure <ref>C). The number of peaks corresponds to the number of steps taken by the participant. Figure <ref>A shows the estimated number of steps using our method as blue dots, compared to the ground truth represented by a black line. The built-in pedometer steps estimation is shown in red.
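The following Python sketch illustrates this step-counting procedure with standard scipy tools; the Butterworth cutoff, filter order, and minimum peak spacing are illustrative assumptions rather than the exact values used in our pipeline.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 100.0  # accelerometer sampling rate (Hz)

def count_steps(accel_z, cutoff_hz=3.0, order=4):
    # Zero-phase low-pass filtering keeps the slow per-step oscillation of
    # the anteroposterior signal and suppresses higher-frequency noise;
    # each remaining peak is counted as one step.
    b, a = butter(order, cutoff_hz / (FS / 2.0), btype="low")
    smooth = filtfilt(b, a, accel_z)
    peaks, _ = find_peaks(smooth, distance=int(0.25 * FS))  # >= 0.25 s apart
    return len(peaks), smooth, peaks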
§.§.§ Method of IC Detection
To detect the IC events, we find the midpoint between two peaks in the filtered signal (Figure <ref>D), which corresponds to the toe-off (TO) events during the gait cycle based on observation. We then identify all the peaks that occur within each step duration in the original acceleration signal (Figure <ref>E). Next, we determine the maximum peak value (anteroposterior G's), which corresponds to the time point of each IC (Figure <ref>F).
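A companion sketch of the IC detection step (building on the peaks returned by count_steps above; handling of the first and last partial steps is simplified):

import numpy as np

def detect_initial_contacts(accel_z, smooth_peaks):
    # Midpoints between consecutive filtered peaks approximate the
    # toe-off events and serve as step boundaries; the largest raw
    # anteroposterior peak inside each pair of boundaries marks the IC.
    boundaries = (smooth_peaks[:-1] + smooth_peaks[1:]) // 2
    ic_indices = []
    for start, stop in zip(boundaries[:-1], boundaries[1:]):
        ic_indices.append(start + int(np.argmax(accel_z[start:stop])))
    return np.asarray(ic_indices)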
§.§.§ Method of Step Length Estimation Using Regression
We create an individualized regression model for each participant to associate average peak acceleration values with step lengths. Figure <ref>A depicts the data flow of our model training and prediction process. Each model is trained using five different participant-selected calibration speeds (SC-L1 to SC-L5). For each speed, we calculate the average acceleration peak values by taking the mean of all the peaks as described in Section <ref>. To calculate the average step length for training, we divide the observed ground-truth distance by the number of steps obtained from Section <ref>. This process is repeated for each of the five calibration speeds (e.g., point SC-L4 in Figure <ref>A). The resulting individualized equation through all five points allows us to input the peak acceleration value of any step within the participant's range of ambulatory velocity to estimate that step's length (shown as the green line in Figure <ref>A).
§.§.§ Estimating the Distance
After establishing the individualized model, it can be used on unseen data. We calculate the step lengths of all identified steps from a previously unseen event and accumulate them to calculate the total distance traveled by the individual. In this project, we used 100MRW, 6MWT, and FW as input signals during the inference stage, as shown in Figure <ref>B, and compared the calculated distances with the ground-truth observed distances and the device's internal pedometer.
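Putting the pieces together, a sketch of the calibration and inference stages (reusing count_steps and detect_initial_contacts from the sketches above; the first-order polynomial fit is an assumption about the form of the regression):

import numpy as np

def fit_step_length_model(calibration_runs):
    # calibration_runs: list of (accel_z, observed_distance_m) pairs, one
    # per calibration speed SC-L1..SC-L5.  Each run contributes one point
    # pairing the mean IC peak amplitude with the mean step length
    # (observed distance / detected step count).
    mean_peaks, mean_lengths = [], []
    for accel_z, distance_m in calibration_runs:
        n_steps, _, peaks = count_steps(accel_z)
        ic = detect_initial_contacts(accel_z, peaks)
        mean_peaks.append(accel_z[ic].mean())
        mean_lengths.append(distance_m / n_steps)
    return np.polyfit(mean_peaks, mean_lengths, deg=1)

def estimate_distance(accel_z, model):
    # Inference on an unseen effort (e.g., 6MWT): predict a length for
    # every detected step from its IC peak amplitude and accumulate.
    n_steps, _, peaks = count_steps(accel_z)
    ic = detect_initial_contacts(accel_z, peaks)
    step_lengths = np.polyval(model, accel_z[ic])
    return step_lengths.sum(), n_steps, step_lengths.mean()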
§.§.§ Calculating the Average Step Length
During the inference stage, to calculate the average step length of an individual, we divide the distance estimated from Section <ref> by the number of steps obtained from Section <ref>. Figure <ref>C shows the estimated average step length using our ML model as blue dots, compared to the ground-truth average step length represented by a black line. The red dots represent the average step length estimated by the built-in pedometer.
§.§.§ Gait Pattern Representation
After determining the midpoint boundaries between steps, we generate a composite map of each step normalized to the gait cycle percentage, allowing for visual examination of AI-determined steps for irregularities or comparison of averaged accelerometer patterns between individuals (Figure <ref>). The gait cycle is identified using peak detection at the IC event, marking the beginning and end of each step. The average acceleration patterns are also calculated from all gait cycles across all activities and at various speeds. The forward movement (x-axis) is normalized to a time scale of 0 to 100%. Using this method, we can identify the IC of every single step and estimate the step duration (Figure <ref>F) without the need to use GRF <cit.>. By comparing the gait cycles of two participants (TD and DMD peers) at various speeds, distinctly different patterns of acceleration magnitude emerge (Figure <ref>), highlighting differences in gait patterns between the two participants.
§.§.§ Error Percentage Rates
To compare observed ground-truth step counts, distance traveled, and average step lengths with our model's estimates and the pedometer estimates native to the mobile devices, we employed two methods. First, we calculated the aggregated error for all estimates by determining an error percentage rate (Error_rate) using equation <ref>.
Error_rate = |∑_i^n |V_c-V_o|_i - ∑_i^n |V_o|_i/∑_i^n |V_o|_i| × 100
The Error_rate is calculated by aggregating the residual values of all participants (i) across all activities. The residual is the difference between the value estimated by the proposed method (V_c) and the ground truth observation (V_o). The total aggregated residual is then subtracted from the total ground truth and divided by the total ground truth. Table-<ref> compares the error percentage rate of step count, distance, and average step length between our Walk4Me system and iPhone pedometer measurements.
Second, to evaluate the percentage error for each individual measurement-estimate pair, we subtracted the model estimate from the observed ground truth measure, divided the result by the ground truth measure, and multiplied by 100 for each event. We computed mean (SD) percentage error for step count, distance traveled, and step length parameters for calibration events SC-L1 to SC-L5 combined, and separately for 6MWT, 100MRW, and FW efforts combined, as well as for all efforts combined. We compared the mean percentage error values between control participants and those with DMD using simple t-tests for each contrast.
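For reference, one way to compute these per-event and aggregate errors in Python (our reading of the aggregate formula as the summed absolute residual relative to the summed ground truth is an assumption):

import numpy as np

def percentage_errors(estimates, ground_truth):
    # Per-event signed percentage error (mean and SD) and an aggregate
    # error rate taken as the summed absolute residual relative to the
    # summed ground truth; the latter is our interpretation of Eq. (1).
    est = np.asarray(estimates, dtype=float)
    obs = np.asarray(ground_truth, dtype=float)
    per_event = (est - obs) / obs * 100.0
    aggregate = np.abs(est - obs).sum() / np.abs(obs).sum() * 100.0
    return per_event.mean(), per_event.std(), aggregate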
§ RESULTS
In this study, we assessed the accuracy of step counts during walking, jogging, and running using our Walk4Me system compared to the iPhone pedometer. We validated our results by comparing both systems with ground-truth data. Our findings, as shown in Table-<ref>, indicate that the Walk4Me system had an average step count error rate of 3.46%, demonstrating reliable performance in accurately tracking steps at different speeds. The combined error rates from participants with Duchenne muscular dystrophy (DMD) and typically developing (TD) participants ranged from 1.26% during slow walk pace (SC-L2) to 7.26% during the fast 100m run. In contrast, the iPhone's built-in pedometer showed an average error rate of 48.46% during short- to moderate-distance tasks at varying gait velocities. The iPhone pedometer had the lowest error rate of 36.35% during the longer-duration fast walk 6MWT task, and the highest error rate of 85.26% during the short-duration jogging/running task SC-L5.
For distance measurement, our Walk4Me system showed an average error rate of 5.83%, with the lowest error rate of 3.9% during the fast walk SC-L4 pace, and the highest error rate of 7.74% during the fast 100m run. The iPhone's built-in pedometer had an average error rate of 42.23%, with task-specific error ranging from 27.42% during the 6MWT to 82.54% during SC-L5 jogging/running task.
For step length measurement, our Walk4Me system showed an average error rate of 5.80%, with the lowest error rate of 3.68% at a comfortable walking pace (SC-L3), and the highest error rate of 8.64% during the short-term jog/run SC-L5 task. The iPhone's built-in pedometer demonstrated an average error rate of 46.40%, which varied from 30.76% during SC-L5 to 76.10% during SC-L1.
In contrast to overall aggregate accuracy, the mean (SD) accuracy of model predictions for individual events compared to ground truth observations for step counts, distance traveled, and step lengths is presented in Table-<ref> and depicted in Figure <ref>A, Figure <ref>B, and Figure <ref>C. Predicted and observed values for all three parameters showed a strong correlation (Pearson's r = -0.9929 to 0.9986, p<0.0001). The estimates demonstrated a mean (SD) percentage error of 1.49% (7.04%) for step counts, 1.18% (9.91%) for distance traveled, and 0.37% (7.52%) for step length compared to ground truth observations for the combined 6MWT, 100MRW, and FW tasks. There were no statistically significant differences in mean error percentages between control participants and those with DMD (data not shown).
§ DISCUSSION
The use of travel distance and step length as gait metrics is essential for clinical gait assessment in the community setting. However, accurately measuring step length traditionally requires a clinical facility or gait lab with a trained observer present during the assessment session. Clinical assessment methods are considered the most detailed and ideal, but their availability may be limited due to factors such as facility availability, staff availability, difficulties with patient travel to assessment locations, or public health restrictions such as those related to COVID-19. Additionally, clinical observation methods can be susceptible to human error, such as observer fatigue or distraction, as well as instrument errors, failed video recordings, or obstructed views, which can limit the utility of the collected data. An alternative option to overcome these limitations and facilitate more frequent and convenient collection of gait data in the community setting is to use off-the-shelf technologies such as pedometers, which are commonly built into smartphones and widely used in sports. However, it is crucial to assess the reliability of these devices, particularly when used for clinical purposes. Therefore, we conducted experiments to clinically validate the reliability of using a pedometer and compared the results with those obtained by observers.
We propose an ML-based signal processing method using our Walk4Me system, which can estimate step counts, distance traveled, and step lengths with increased levels of accuracy. The advantage of our method is that it requires less observed interaction, only necessitating a short duration of time for five speed-calibration tests. Our system can automatically estimate distance and step length without the need for human interaction. Some of the source code and a demo of this paper can be found at <https://albara.ramli.net/research/ic> along with some additional results.
§ CONCLUSION
This study introduces a novel signal processing and machine learning technique that accurately identifies steps and estimates step length based on the individual's gait style. Our findings demonstrate that using a single accelerometer worn near the body's center of mass can be more accurate than a standard pedometer. Our method can be applied to both healthy individuals and those with muscle disorders without the need for ground reaction force (GRF) measurements. To our knowledge, this is the first study to propose a method that extracts CFs from raw accelerometer data across the attainable range of gait speeds in healthy participants and those with muscle disease. On average, our method of counting steps and estimating stride length and distance traveled performs well when applied to longer structured sub-maximal clinical testing efforts and free-roaming self-selected pace travel. In these settings, our methods surpass the pedometer functions native to the mobile devices we use. This will allow us to extend basic elements of gait analysis to community settings using commonly available consumer-level devices.
§ DATA AVAILABILITY
The authors commit to providing data access in compliance with Gait and Pose journal, grant sponsor, and University of California guidelines. Requests for data access can be addressed to the corresponding author.
§ FUNDING ACKNOWLEDGEMENT
This study was partially funded for participant assessment and data collection by a grant from the U.S. Department of Defense (W81XWH-17-1-0477), a pilot grant from the University of California Center for Information Technology Research in the Interest of Society (CITRIS) and the Banatao Institute, and by a research grant from the Muscular Dystrophy Association.
§ ACKNOWLEDGMENTS
We would like to thank the students of the UC Davis EEC193A/B Winter 2020 Senior Design Projects Team (Nikki Esguerra, Ivan Hernandez, Zehao Li, and Jingxuan Shi) for their work piloting proof-of-concept methods for clinical feature extraction.
§ DECLARATION OF INTEREST
The authors declare that they have no competing interests and no conflicts to declare.
model1-num-names
|
http://arxiv.org/abs/2307.04200v1 | 20230709150848 | Integrated frequency-modulated optical parametric oscillator | [
"Hubert S. Stokowski",
"Devin J. Dean",
"Alexander Y. Hwang",
"Taewon Park",
"Oguz Tolga Celik",
"Marc Jankowski",
"Carsten Langrock",
"Vahid Ansari",
"Martin M. Fejer",
"Amir H. Safavi-Naeini"
] | physics.optics | [
"physics.optics",
"quant-ph"
] |
⋆ [email protected]
Integrated frequency-modulated optical parametric oscillator
Amir H. Safavi-Naeini1,⋆
August 12, 2023
============================================================
Optical frequency combs have revolutionized precision measurement, time-keeping, and molecular spectroscopy <cit.>. A substantial effort has developed around “microcombs”: integrating comb-generating technologies into compact, reliable photonic platforms <cit.>. Current approaches for generating these microcombs involve either the electro-optic <cit.> (EO) or Kerr mechanisms <cit.>. Despite rapid progress, maintaining high efficiency and wide bandwidth remains challenging. Here, we introduce a new class of microcomb – an integrated optical frequency comb generator that combines electro-optics and parametric amplification to yield a frequency-modulated optical parametric oscillator (FM-OPO). In stark contrast to EO and Kerr combs, the FM-OPO microcomb does not form pulses but maintains operational simplicity and highly efficient pump power utilization with an output resembling a frequency-modulated laser <cit.>. We outline the working principles of FM-OPO and demonstrate them by fabricating the complete optical system in thin-film lithium niobate (LNOI). We measure pump to comb internal conversion efficiency exceeding 93% (34% out-coupled) over a nearly flat-top spectral distribution spanning ≈ 1,000 modes (≈ 6 THz). Compared to an EO comb, the cavity dispersion rather than loss determines the FM-OPO bandwidth, enabling broadband combs with a smaller RF modulation power. The FM-OPO microcomb, with its robust operational dynamics, high efficiency, and large bandwidth, contributes a new approach to the field of microcombs and promises to herald an era of miniaturized precision measurement, and spectroscopy tools to accelerate advancements in metrology, spectroscopy, telecommunications, sensing, and computing.
§ INTRODUCTION
Optical frequency combs, characterized by their precisely spaced, sharp spectral lines that serve as a “frequency ruler" for light, are indispensable tools in numerous fields, from precision metrology and atomic clocks to high-capacity telecommunications and molecular spectroscopy <cit.>. Fueled by their potential practical applications, the drive to miniaturize frequency combs into chip-scale integrated devices, known as microcombs, has recently accelerated at a remarkable pace <cit.>. Traditional optical frequency combs, produced through mode-locked lasers and synchronously pumped optical parametric oscillators, are large-scale and require substantial infrastructure, thus limiting their utility outside laboratory settings. Two principal methods for creating integrated frequency comb sources suitable for smaller, deployable devices have been explored in response. The first involves third-order χ^(3) or Kerr optical nonlinearity, with successful demonstrations in materials such as silica, silicon nitride, aluminum nitride, silicon carbide, and lithium niobate <cit.>. The second strategy employs the electro-optic effect, which has been realized in resonant (shown in Fig. <ref>a) and non-resonant integrated thin-film lithium niobate devices <cit.>.
Despite these remarkable advances, electro-optic and Kerr combs face several challenges. They are often limited in their efficiency, exhibit a strong pump background, suffer from limited tunability, and display a decreasing comb line intensity for the lines distant from the pump. Moreover, Kerr frequency combs demand sophisticated control and become significantly more challenging to operate at a smaller free spectral range (FSR).
In this study, we propose and demonstrate a new type of microcomb that combines the advantages of both EO and Kerr combs, merging nonlinear optical processes with electro-optic modulation in an integrated device. Specifically, our structure accommodates both optical parametric amplification and phase modulation within a single cavity, thereby facilitating the generation of a frequency-modulated optical parametric oscillator (FM-OPO, Fig. <ref>b). <cit.> Remarkably, unlike in conventional Kerr and EO combs, the dynamics in our system do not result in pulse formation, making the output more closely resemble that of a frequency-modulated (FM) laser. This strategy maintains the operational simplicity characteristic of electro-optic combs while achieving substantially broader bandwidths than those attainable through modulation alone. Furthermore, our technique gives rise to a flat-top output comb, an optimal spectral distribution for many applications, while avoiding unwanted nonlinearities that manifest at large pulse peak powers<cit.>. Finally, the FM-OPO exhibits impressive efficiency, converting a significant fraction of the pump light into comb lines while demanding only modest RF power inputs for operation.
To implement the integrated FM-OPO, we turn to thin-film lithium niobate (LN) for its strong second-order optical nonlinearity and electro-optic (EO) effect. Thin-film LN has recently emerged as a platform for integrated nanophotonics<cit.> through demonstrations of efficient electro-optic modulators <cit.>, electro-optic combs<cit.>, periodically poled lithium niobate (PPLN) waveguides for frequency conversion<cit.>, quantum light generation<cit.>, resonant second harmonic generation and optical parametric oscillators<cit.>, and integration with complex photonic integrated circuits for applications such as laser control<cit.> and quantum measurements<cit.>. The above demonstrations are either based on the EO effect that transfers energy between optical modes separated by the RF frequency or the χ^(2) nonlinearity that can provide broadband gain. Combining these two distinct capabilities forms the foundation for the integrated FM-OPO.
§ COMB DYNAMICS
Both Kerr and EO comb generation fundamentally rely on mode-locking, which subsequently leads to the formation of pulses. However, this process inherently introduces a strong frequency-dependent variation in the intensity of the comb lines that decay exponentially with their offset from the center.
Another considerable challenge posed by pulse formation is the inefficient utilization of pump power, as a continuous wave (CW) pump only overlaps with a small part of the circulating field. Recent advancements have started to address this issue, mainly by exploiting auxiliary resonances <cit.> and utilizing pulsed pumps <cit.>. Finally, pulse formation leads to large intracavity peak powers that can engage other unwanted nonlinearities and make comb formation challenging in integrated platforms <cit.>. We discover here that incorporating parametric gain into an EO-modulated cavity leads to a frequency comb without necessitating pulse formation. Despite the modulation being close to the cavity resonance mode spacing, our system's dynamics strikingly resemble those of an FM laser <cit.>. As in an FM laser, we will see that the optical frequency of the signal is swept across a bandwidth B.W. at the rate of the RF modulation Ω.
We first consider the situation without any modulation. We assume that we operate the OPO nondegenerately so that it emits signal and idler tones at mode number offsets ± n_osc from a central mode with frequency ω_0 close to ω_p/2. As we introduce RF modulation at frequency Ω characterized by a mode coupling rate M, these signal and idler tones are simultaneously subject to gain and modulation. The pairing of these effects around the signal and idler creates conditions that mirror the dynamics of an FM laser, where phase-insensitive gain and modulation coexist.
In an FM laser, the limiting behavior that prevents mode-locking arises from a detuning between the cavity's FSR and the drive frequency Ω. The FM laser then transitions to chaotic and mode-locked states as this detuning is reduced and the bandwidth is increased to approach the gain bandwidth of the medium or a limit set by the cavity dispersion <cit.>. The oscillation bandwidth of the FM-OPO is limited by the cavity's dispersion, characterized by mode frequencies ω_n = ω_0 + ζ_1 n + ζ_2 n^2/2, where ζ_1 and ζ_2 are the cavity FSR near ω_0 and the second-order dispersion, respectively. Under the regime considered, our device avoids the transition to mode-locking behavior. The signal and idler modes are far separated and experience local FSRs near ± n_osc that differ from each other by 2 n_oscζ_2. Moreover, the parametric nature of the process necessitates the simultaneous formation of combs at both signal and idler frequencies. Therefore, in the assumed nondegenerate regime, there is always effectively a drive detuning when we consider both signal and idler combs. This results in dynamics that closely mirror those of an FM laser with detuned driving, where continuous frequency sweeping is observed rather than pulse formation. The effective bandwidth is given by
B.W. ≡ 2ΓΩ = 4MΩ/(n_oscζ_2),
where Γ is the modulation index, and the signal and idler tones are frequency modulated as a_s,i(t) ≈ A_s,ie^-iω_i te^∓ iΓsin(Ω t) e^iω_pt/2. The bandwidth formula aligns well with the established expression for the FM laser bandwidth B.W.∝M Ω/(Ω-FSR) <cit.>, with the correspondence being that the FM laser detuning Ω-FSR is replaced by the detuning n_oscζ_2 between the drive and local FSR in the FM-OPO. Finally, we note that there are conditions where the above analysis no longer holds, e.g., at (near-)degenerate OPO operation leading to smaller n_osc, at significantly larger M, or for dispersion-engineered waveguides that may match the local signal/idler FSRs. Bulk phase-modulated OPOs have already been demonstrated <cit.>. We leave the engineering and study of the dynamics of integrated phase-modulated OPOs in a wider set of operating regimes to future work.
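As a quick numerical illustration of this expression, the short sketch below evaluates the modulation index Γ and the swept bandwidth using parameter values of the scale quoted elsewhere in this work (Ω/2π ≈ 5.8 GHz, ζ_2/2π ≈ 11 kHz, n_osc ≈ 800, EO coupling of a few hundred MHz); the exact numbers are placeholders rather than a fitted device model.

import numpy as np

# Representative parameters (values of the scale quoted in this work; illustrative only).
Omega = 2 * np.pi * 5.8e9      # RF modulation frequency (rad/s)
zeta2 = 2 * np.pi * 11e3       # second-order dispersion (rad/s per mode^2)
n_osc = 800                    # signal/idler mode offset from degeneracy
M     = 2 * np.pi * 510e6      # EO mode-coupling rate (rad/s)

Gamma = 2 * M / (n_osc * zeta2)        # modulation index
bandwidth = 2 * Gamma * Omega          # B.W. = 2*Gamma*Omega = 4*M*Omega/(n_osc*zeta2)
n_lines = bandwidth / Omega            # comb lines spaced by Omega within the swept band

print(f"Gamma ~ {Gamma:.0f}")
print(f"B.W. ~ {bandwidth / (2 * np.pi * 1e12):.2f} THz")
print(f"~{n_lines:.0f} comb lines within the swept bandwidth")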
§ RESULTS
We demonstrate an optical frequency comb generator based on an FM-OPO integrated on a chip (Fig. <ref>c). The device evenly distributes 11 mW of optical power over 200 comb lines using 140 mW of C-band optical pump power and 200 mW of RF modulation power. Comb lines are spaced by about 5.8 GHz. We base our device on a racetrack resonator in thin-film lithium niobate on insulator (LNOI) with intrinsic quality factors of around Q_i≈ 10^6. This resonator holds within it an electro-optic modulator, an optical parametric amplifier, and a high-efficiency wavelength-selective coupler that nearly fully transmits the 780 nm pump while keeping the C-band excitation within the cavity. Figure <ref>a shows a schematic design of the device, while Fig. <ref>b shows a microscope image of a single FM-OPO device. The coupler allows our device to operate as a doubly resonant OPO where the pump passes through the OPA but is non-resonant in the cavity. One straight section has gold electrodes patterned next to it, enabling electro-optic modulation of the cavity (see the left inset in Fig. <ref>b). The other straight section of the cavity is a periodically poled lithium niobate (PPLN) waveguide that provides parametric gain when pumped with the second harmonic (see the right inset in Fig. <ref>b for a second harmonic microscope picture of the poled thin-film lithium niobate). In the Methods section, we describe the design and characterization of the waveguides and cavity in detail.
We generate the 780 nm pump on the same chip in a separate PPLN waveguide. We filter out the original pump field through three on-chip filters of the same design as the intracavity coupler. The high SHG efficiency allows us to achieve considerable optical pump powers using only a standard commercial C-band laser. Figure <ref>c shows an example FM-OPO output spectrum when the device is pumped with around 140 mW of FH optical power (corresponding to around 100 mW of SH power) and 200 mW of RF power, equivalent to about 4.5 V peak voltage. We plot an electro-optic comb generated using the same RF power within the same cavity in gray for comparison. We observe a flat comb formation around signal and idler wavelengths and no significant background from the pump. The measured output aligns with our coupled-mode theory model (thick dark blue line) described below. The bottom right inset in Fig. <ref>c shows individual lines in a flat spectrum spaced by around 5.8 GHz. The top right inset in Fig. <ref>c shows the result of collecting the output using a fast photodetector and an RF spectrum analyzer. In the RF spectrum, we observe narrow lines spaced by the multiples of the cavity FSR, resulting from the FM-OPO sweeping over a frequency-dependent output coupler (see Methods for details).
We can understand nearly all of the salient features of the observed spectra in the context of an approximate time-domain coupled-mode theory analysis. We also use this formulation to derive the formula for the comb bandwidth shown in equation <ref>, which agrees well with observations <ref>b. We define mode amplitudes a_n to represent the field amplitudes for the n-th mode around the fundamental frequency, where n = 0 corresponds to the fundamental mode closest to half of the pump frequency. In this context, b represents the amplitude of the second harmonic pump field. Each mode n has a natural frequency given by the cavity dispersion with ζ_1/2π≈ 5.8 GHz and ζ_2/2π≈ 11 kHz corresponding to the cavity FSR and the second-order dispersion, respectively. Other key parameters include the laser drive detuning Δ≡ω_p/2 - ω_0, and the RF drive detuning from the FSR δ≡Ω - ζ_1. The mode coupling due to modulation M, which is proportional to the RF drive voltage, and the nonlinear coupling rate g provide the critical ingredients for realizing the comb dynamics. We also include the loss rates of the considered field amplitudes, κ_a,n and κ_b. The rate κ_b corresponds to that of an extremely lossy single-pass “cavity” and allows us to approximate our DRO in this coupled-mode theory formulation. We derive all of the model parameters from independent simulations, as well as experimental and theoretical analysis (refer to the Methods section and SI for more details).
The resulting coupled-mode equations are
ȧ_n = [ i( Δ + nδ - n^2ζ_2/2 ) - κ_a,n/2 ] a_n - iM ( a_n-1 + a_n+1 ) - 2ig a_-n^∗ b,
ḃ = - (κ_b/2) b - ig ∑_n a_n a_-n + i√(κ_b) β_in.
There are two main approximations in these equations. First, we represent the pump field as the excitation of a very lossy mode b – solutions involving significant spatial variations of the pump field along the waveguide cannot be represented accurately by this model. Second, we only include coupling between modes n and -n – we ignore the weaker coupling between modes with nearby n numbers. For example, coupling between n and -n+1 can be present and may become stronger as a function of pump wavelength. Tuning the pump wavelength and consequently the detuning Δ over a cavity FSR changes the mode pairs that are amplified (see Fig. <ref>a). Device parameters are summarized in Extended Data Table <ref>.
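For concreteness, the right-hand side of these coupled-mode equations can be encoded as follows; the truncation to a finite mode range and the uniform loss rate are simplifying assumptions made only for illustration, and the function is meant to be handed to a standard ODE integrator.

import numpy as np

def coupled_mode_rhs(t, y, N, Delta, delta, zeta2, kappa_a, M, g, kappa_b, beta_in):
    """Right-hand side of the coupled-mode equations above.

    y packs the FH amplitudes a_n for n = -N..N followed by the SH amplitude b,
    all as complex numbers; the loss rate kappa_a is taken uniform for simplicity.
    """
    a, b = y[:2 * N + 1], y[-1]
    n = np.arange(-N, N + 1)

    # Linear part: detuning Delta + n*delta - n^2*zeta2/2 and loss kappa_a/2.
    lin = (1j * (Delta + n * delta - 0.5 * zeta2 * n**2) - 0.5 * kappa_a) * a

    # Nearest-neighbour EO coupling -iM*(a_{n-1} + a_{n+1}), with open boundaries.
    nn = np.zeros_like(a)
    nn[1:] += a[:-1]
    nn[:-1] += a[1:]

    # Parametric term couples mode n to mode -n through the SH field b.
    a_minus = a[::-1]
    dadt = lin - 1j * M * nn - 2j * g * np.conj(a_minus) * b
    dbdt = -0.5 * kappa_b * b - 1j * g * np.sum(a * a_minus) + 1j * np.sqrt(kappa_b) * beta_in
    return np.append(dadt, dbdt)

# The returned derivative can be passed to a standard ODE integrator (splitting the state
# into real and imaginary parts if the integrator does not accept complex vectors).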
We tune the output wavelength in the FM-OPO through small adjustments to the pump wavelength, allowing the output to span the full range of the gain spectrum. This tuning is predominantly influenced by the cavity dispersion, mirroring the characteristics observed in an unmodulated OPO <cit.>. We show the OPO tuning behavior in Fig. <ref>a. The blue traces correspond to measurements with an optical spectrum analyzer (OSA), whereas the gray lines present the predicted tuning behavior based on the waveguide dispersion. The FM-OPO exhibits a similar tuning pattern, as shown in Fig. <ref>b. Here, the comb clusters closely follow the expected tuning. By adjusting the pump wavelength by 20 pm, which equates to half of the cavity's free spectral range (FSR), we can access bandwidth of approximately 70 nm for both FM-OPO and OPO.
We measure the spectra generated by the FM-OPO using an optical spectrum analyzer. We find that the device operates continuously and robustly in a nondegenerate mode at around n_osc≈ 800. In this regime, we expect Eqn. (<ref>) to hold to high accuracy. We pump the device at 1554 nm with about 140 mW. We step the electro-optic coupling rate of the 5.8-GHz EO modulation between 0 and around 510 MHz by varying the RF power supplied to the chip. As shown in Fig. <ref>a, we observe a frequency comb develop. A number of additional comb clusters labeled (-n_osc+1,n_osc) and (-n_osc+1, n_osc+1) appear at a drive exceeding M/2π≈ 360 MHz; these are described in more detail in the Methods section. We only plot the signal combs (blue detuned) and omit the idler combs (red detuned) for clarity; we provide full spectra in Extended Data Fig. <ref>. The measured spectral peak at around 1554 nm corresponds to a slight leakage of the original FH pump into the cavity. We count the number of generated lines within the 3 dB bandwidth of the flat-top and plot this in Fig. <ref>b. We observe good agreement between the data, numerical solution of the coupled-mode equations <ref>-<ref> (blue shaded region), and the analytical expression for the FM-OPO given by equation <ref> (dashed line). At the highest RF drive power of around 1.2 W, we observe over 1,000 comb lines oscillating together within -30 dB from the flat-top mean power (see Extended Data Fig. <ref>e for the full spectrum).
The FM-OPO operates with high efficiency, converting around 34% of the input SH light into comb lines. First, the intracavity conversion efficiency is high, exceeding 90%, based on the pump depletion measurement in Fig. <ref>c. We calculate it based on the contrast between the measured maxima and minima of the normalized SH power, visible when tuning the pump wavelength, as shown in the inset. Next, the intracavity comb is outcoupled with the cavity escape efficiency η_a ≈ 0.36, which limits the total efficiency of our device. Note that the depletion and the conversion efficiency do not depend on the RF drive strength. The output power of the FM-OPO resembles a typical behavior of an unmodulated OPO in Fig. <ref>d, where we observe a threshold of about 47 mW SH power and nonlinear coupling rate g/2π≈ 12 kHz, lower than the predicted 67 kHz, which we attribute to operating at non-perfect phase matching Δ k ≠ 0.
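These numbers can be cross-checked with the relation ρ = η_a D derived in the Supplementary Information: with near-complete intracavity depletion, the escape efficiency caps the total conversion efficiency. The depletion values used below are assumptions within the quoted >90% range.

# Cross-check of the quoted conversion efficiency using rho = eta_a * D.
eta_a = 0.36                      # measured cavity escape efficiency
for D in (0.85, 0.90, 0.93):      # assumed depletion values around the measured >90%
    print(f"D = {D:.2f}  ->  rho = {eta_a * D:.1%}")
# With D ~ 0.93 this gives rho ~ 33-34%, matching the quoted overall efficiency.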
§ DISCUSSION
We have successfully demonstrated a new type of integrated comb generator and established its fundamental operating principles. Our device demonstrates exceptional brightness, flatness, and efficiency while retaining robust operational dynamics. Given that our initial demonstration still has the potential for significant improvements in optical bandwidth by dispersion engineering, RF power consumption by resonant enhancement, and optical conversion efficiency by improved out-coupling, this breakthrough opens the door to a new class of deployable optical frequency combs. For the well-established application of these combs to the problems of spectroscopy, the versatility of the LN material platform allows for spectral coverage from blue light <cit.> into the mid-infrared <cit.>, enabling their use in fields such as medical diagnostics <cit.>, process control in agriculture, food production, and various industrial sectors <cit.>. Moreover, the potential of these devices as a source of flat-top combs makes them invaluable for applications from fiber communication systems to FMCW LiDAR <cit.>.
§ METHODS
§.§ Device design and Fabrication
We design our waveguide geometry to maximize the normalized efficiency and interaction rate. Extended Data Figure <ref>a shows a schematic of the periodically poled, X-cut LN waveguide. We chose the ridge height h = 300 nm, slab thickness s = 200 nm, top width w = 1.2 µm, and SiO_2 cladding thickness c = 700 nm. We find the guided modes by numerically solving Maxwell's equations with a finite-element solver (COMSOL). Extended Data Figure <ref>a shows the E_x field distribution for a mode at 1550 nm. Extended Data Figure <ref>b presents the bands of the effective index as a function of wavelength in our waveguide geometry. The blue line highlights the fundamental TE mode we use in our nonlinear waveguide and electro-optic modulator. The difference between the effective index at the fundamental and second harmonic frequency Δn_eff results in phase mismatch that we compensate for with periodic poling with a period of around . The LN waveguide forms a racetrack resonator with an intracavity directional coupler designed to close the resonator for the FH but ensure that the SH pump does not circulate. We call this design a “snail resonator". All of the waveguide bends are defined by Euler curves to minimize light scattering between straight and bent waveguide sections.
We periodically pole the thin-film LN before the waveguide fabrication by patterning chromium finger electrodes on top of an insulating SiO_2 layer. Extended Data Figure <ref>c shows an SEM micrograph of a poling electrode. Next, we apply short pulses on the order of 1 kV to invert the ferroelectric domains and then verify the poling with a second harmonic microscope; Extended Data Fig. <ref>d shows a periodically poled film. In the second harmonic microscope picture, the black areas on the sides of the image correspond to the metal electrodes. The oblong shapes stretching between fingers correspond to the inverted LN domains. White regions at the center of the inverted domains correspond to the poling that extends throughout the full depth of the thin-film LN. We pattern the critical waveguide sections within the fully poled film regions by aligning the electron-beam lithography mask in the waveguide patterning step.
Extended Data Fig. <ref> presents the fabrication process flow. We start with a thin-film lithium niobate on insulator chip (Extended Data Fig. <ref>a). We use 500 nm LN film bonded to around 2 µm of SiO_2 on a silicon handle wafer (LNOI from NanoLN). Then, we deposit about 100 nm of silicon dioxide using plasma-enhanced chemical vapor deposition (PlasmaTherm Shuttlelock PECVD System), which serves as a protective layer and prevents leakage current during poling. We pattern 100 nm thick chromium electrodes (evaporated with Kurt J. Lesker e-beam evaporator) on top of the insulating layer through electron-beam lithography (JEOL 6300-FS, 100-kV) and liftoff process and apply short voltage pulses to invert the LN domains (Extended Data Fig. <ref>b). Next, we remove the chromium and SiO_2 layers with chromium etchant and buffered oxide etchant to obtain a poled thin-film LN chip (Extended Data Fig. <ref>c). We follow with waveguide patterning using JEOL 6300-FS electron-beam lithography and hydrogen silsesquioxane mask (FOx-16). We transfer the mask to the LN material using dry etching with an argon ion mill (Extended Data Fig. <ref>d). After the waveguide fabrication, we pattern another liftoff mask with electron-beam lithography to pattern electrodes for our electro-optic modulators (Extended Data Fig. <ref>e). We use 200 nm of gold with a 15 nm chromium adhesion layer evaporated with the e-beam evaporator. We clad the entire chip with a layer of 700 nm thick SiO_2 deposited with a high-density plasma chemical vapor deposition using PlasmaTherm Versaline HDP CVD System (Extended Data Fig. <ref>f) and open vias to access electrodes using inductively coupled plasma reactive ion etching (Extended Data Fig. <ref>g). We finish preparing the chip facets for light coupling by stealth dicing with a DISCO DFL7340 laser saw.
§.§ Experimental Setup
We characterize our devices' FM-OPO and OPO response using the setup in Extended Data Fig. <ref>. We color-code the paths intended for the various signals: light orange corresponds to the fundamental harmonic light (around 1500-1600 nm), the blue path corresponds to the second harmonic (around 750-800 nm), and green corresponds to the RF signals. We drive our devices with a tunable C-band laser (Santec TSL-550, 1480–1630 nm) that we amplify with an erbium-doped fiber amplifier (EDFA) to around 1 watt. The wavelength of the laser is controlled in a feedback loop using a wavelength meter (Bristol Instruments 621B-NIR). We control the optical power to the chip with a MEMS variable optical attenuator (from OZ Optics) and calibrate the power using a 5% tap and a power meter (Newport 918D-IR-OD3R). The light then passes through a fiber polarization controller (FPC) and couples to the chip facet through a lensed fiber. We deliver RF signals to the chip through a ground-signal-ground probe (GGB Industries Picoprobe 40A). We use a Keysight E8257D PSG analog signal generator as an RF source and amplify it with a high-power
amplifier (Mini-Circuits ZHL-5W-63-S+). We place a circulator before the chip to avoid any reflections into the source and terminate the reflected port after passing it through a 20 dB attenuator.
The generated light is split between two paths with a 1000-nm short-pass dichroic mirror (Thorlabs DMSP1000). The two paths are connected to the InGaAs and Si avalanche photodiodes (Thorlabs APD410A and Thorlabs APD410) to detect the FH and SH power, respectively. VOAs precede both APDs to avoid saturation and increase the dynamic range of the measurements (HP 8156A and Thorlabs FW102C). Part of the FH path splits into an optical spectrum analyzer (Yokogawa AQ6370C) and a fast photodetector (New Focus 1554-B-50), whose response is characterized by an RF spectrum analyzer (Rohde & Schwarz FSW26).
§.§ Intracavity coupler characterization
We characterize the performance of the intracavity coupler using a smaller resonator with a straight section length of around 2 mm. Extended Data Figure <ref>a shows transmission of such a cavity (depicted in Extended Data Fig. <ref>b), where we normalize the background to one. We observe the contrast of cavity modes changing across the used wavelength range due to the changes in the intrinsic and extrinsic quality factors. The former can be used to benchmark the coupler's performance. We observe a smooth transition from an undercoupled cavity at 1500 nm, through critical coupling at around 1550 nm, to an overcoupled cavity at 1580 nm. To verify this, we fit the quality factors of all the modes. An example is shown in Extended Data Fig. <ref>c, where we observe intrinsic quality factor Q_i ≈ 2.5 · 10^6 and extrinsic quality factor Q_e ≈ 0.8 · 10^6. Extended Data Figure <ref>d shows the intrinsic and extrinsic quality factors measured as a function of wavelength. We find that Q_i peaks at around 1580 nm, corresponding to the maximum transmission through the coupler. In the FM-OPO device, we use the same coupler but extend the device length to 10 mm, which results in the flattening of the Q_i dependence on wavelength.
§.§ Dispersion measurement
The second-order dispersion ζ_2 is a critical parameter of the FM-OPO because it determines the comb span and tunability. To quantify it, we modify the measurement setup by adding another 5% tap connected to a fiber Mach-Zehnder interferometer (MZI) and a photodetector (Newport 1623 Nanosecond Photodetector), see Extended Data Fig. <ref>a. We collect the MZI transmission and the cavity transmission while scanning the pump laser and calibrate the wavelength by unwrapping the phase in the MZI transmission spectrum. This method allows us to measure cavity mode location with precision on the order of single MHz. We measure the FM-OPO cavity spectrum using the feedline waveguide and extract the local FSR, as shown in Extended Data Fig. <ref>b. The relative position of cavity modes is defined by ω_n = ω_0 + ζ_1 × n + ζ_2/2 × n^2. We fit the FSR with respect to the mode number and extract the second-order dispersion parameter ζ_2/2π ≈ 11 kHz, which agrees with the theoretical prediction based on the finite-element simulation.
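A minimal sketch of this fitting step is shown below, with synthetic mode frequencies standing in for the measured ones; the noise level and mode count are arbitrary assumptions.

import numpy as np

# Synthetic stand-in for measured mode frequencies omega_n = omega_0 + zeta1*n + zeta2/2*n^2.
zeta1_true, zeta2_true = 2 * np.pi * 5.8e9, 2 * np.pi * 11e3
n = np.arange(-400, 401)
omega = zeta1_true * n + 0.5 * zeta2_true * n**2 + 2 * np.pi * 1e5 * np.random.randn(n.size)

# Local FSR between adjacent modes: FSR(n) = zeta1 + zeta2*(n + 1/2), i.e. linear in n.
fsr = np.diff(omega)
n_mid = n[:-1] + 0.5
slope, intercept = np.polyfit(n_mid, fsr, 1)

print(f"zeta1/2pi ~ {intercept / (2 * np.pi * 1e9):.3f} GHz")
print(f"zeta2/2pi ~ {slope / (2 * np.pi * 1e3):.1f} kHz")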
§.§ Second-order optical nonlinearity characterization
We characterize the nonlinear performance of our PPLN waveguides through a second harmonic generation measurement in a waveguide that passes through the same poled area of the chip as the FM-OPO PPLN waveguides. The experiment geometry is shown in Extended Data Fig. <ref>a, where the input to the chip is the same as in the general setup but the lensed fiber couples to the test waveguide. Two APDs collect the output light the same way as in the FM-OPO measurements. Extended Data Figure <ref>b shows an example of a measured SHG transfer function recorded while sweeping the C-band laser with fixed power of around 200 µW on the chip. The waveguide length is about 7 mm, and slight distortion to the sinc function results from small waveguide nonuniformities along its length. Extended Data Figure <ref>c shows the peak SH power on-chip recorded as a function of the on-chip pump power at the FH frequency. The inset shows a bright SH spot scattered at the end of an on-chip LN waveguide and lensed fiber tip. We fit a quadratic polynomial to the data to extract the normalized efficiency η that defines the relationship between the SH and pump power:
P_SH = ηP_FH^2 L^2,
where L is the length of the PPLN waveguide, and P_SH and P_FH correspond to the powers of the second harmonic and the fundamental, respectively. We extract a normalized efficiency of around 1,500 %/(W cm^2), corresponding to an interaction rate of around g/2π≈ 67 kHz, which agrees with our theory. The measured FM-OPO operates away from perfect quasi-phase matching, Δ k ≠ 0, which reduces the interaction rate to around 12 kHz.
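The quadratic fit used to extract η can be sketched as follows; the synthetic data points below are placeholders for the measured P_SH versus P_FH curve, and the waveguide length is the approximate value quoted above.

import numpy as np

L = 0.7                      # PPLN waveguide length in cm (approximate value from the text)
eta_true = 1500.0            # %/(W cm^2), efficiency of the scale reported above

# Synthetic stand-in for the measured on-chip powers (W).
P_FH = np.linspace(0, 200e-6, 20)
P_SH = (eta_true / 100.0) * P_FH**2 * L**2 * (1 + 0.02 * np.random.randn(P_FH.size))

# Fit P_SH = (eta/100) * L^2 * P_FH^2 and report eta in %/(W cm^2).
coeff = np.polyfit(P_FH, P_SH, 2)[0]       # quadratic coefficient = (eta/100) * L^2
eta_fit = 100.0 * coeff / L**2
print(f"normalized efficiency ~ {eta_fit:.0f} %/(W cm^2)")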
§.§ Electro-optic characterization
To characterize the electro-optic performance of the FM-OPO resonator, we drive the cavity with RF and probe the transmission spectra of the feedline waveguide as shown in Extended Data Fig. <ref>a. We use the same input chain as in the FM-OPO measurements, except for the RF amplifier. We collect the light using an InGaAs APD paired with a VOA. The cavity transmission with no RF drive reveals a usual Lorentzian lineshape (blue points in Extended Data Fig. <ref>b), that we fit to extract the intrinsic quality factor of around Q_i≈ 1· 10^6. However, the lineshape becomes distorted when the RF modulation is applied to the cavity on resonance with the local FSR. We model it by simplifying the full FM-OPO cavity coupled-mode equation <ref> and adding an FH drive to one of the cavity modes n = 0. In the small optical power limit and absence of the SH drive, we can write the model as:
ȧ_n = ( i(Δ - n^2ζ_2/2) - κ_a,n/2 ) a_n - iM ( a_n-1 + a_n+1 ) + i√(κ_a^(e) P_FH/(ħω_a)) δ_n,0.
Here, Δ is the laser detuning, and δ_n,0 is a Kronecker delta. We model the EO-modulated cavity response by solving this system of equations for 50 modes in steady state:
0 = M A + B,
where M is the matrix including pump detuning, loss rates of the cavity modes, and electro-optic coupling, and B(n ≠ 0) = 0. We find A = - M ^-1 B. The total output power of the cavity consists of the laser pump interfering with the intracavity field and a sum of all the generated sidebands:
|a_out|^2 = |a_in - i√(κ_0^(e)) a_0|^2 + ∑_n≠0 κ_n^(e) |a_n|^2.
We evaluate this model numerically to fit the transmission lineshapes of the modulated cavity for various peak voltage values. The orange points in Extended Data Fig. <ref>b correspond to one example of data collected for the cavity modulated with a peak voltage of around V_P≈ 4.5 V. The red line corresponds to the fit. When fitting the modulated lineshapes, we fix the extrinsic and intrinsic quality factors, as measured for the unmodulated line, and extract only the electro-optic coupling M. Then, we plot the measured values of the EO coupling M/2π as a function of peak voltage in Extended Data Fig. <ref>c and fit a line to find the dependence of the EO coupling on the peak voltage. We measure M/2π≈ 60 MHz/V.
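For reference, this steady-state model can be sketched as below; the loss rates and coupling values are illustrative stand-ins rather than fitted device parameters, and the drive phase is chosen so that the unmodulated trace reduces to the usual Lorentzian dip.

import numpy as np

def modulated_lineshape(Delta, M_eo, N=25, kappa=2 * np.pi * 380e6,
                        kappa_e=2 * np.pi * 190e6, zeta2=2 * np.pi * 11e3):
    """Steady-state transmission of the EO-modulated cavity: solve 0 = M A + B
    for sideband amplitudes a_n, n = -N..N, then assemble the detected power."""
    n = np.arange(-N, N + 1)
    diag = 1j * (Delta - 0.5 * zeta2 * n**2) - 0.5 * kappa
    Mmat = np.diag(diag) - 1j * M_eo * (np.eye(2 * N + 1, k=1) + np.eye(2 * N + 1, k=-1))

    a_in = 1.0                                   # input field amplitude (normalized)
    B = np.zeros(2 * N + 1, dtype=complex)
    B[N] = -1j * np.sqrt(kappa_e) * a_in         # drive enters only the n = 0 mode
    A = np.linalg.solve(Mmat, -B)                # A = -M^{-1} B

    # Pump interfering with the n = 0 intracavity field, plus all out-coupled sidebands.
    out = np.abs(a_in - 1j * np.sqrt(kappa_e) * A[N])**2
    out += kappa_e * np.sum(np.abs(np.delete(A, N))**2)
    return out / np.abs(a_in)**2

# Example: sweep the laser detuning across the modulated resonance.
detuning = 2 * np.pi * np.linspace(-2e9, 2e9, 401)
trace = [modulated_lineshape(d, M_eo=2 * np.pi * 270e6) for d in detuning]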
§.§ RF and optical spectra of the FM-OPO
We examine the FM-OPO combs we produce using a high-speed photodetector and an RF spectrum analyzer. Interestingly, a single FM-OPO, as defined by equations (derived in SI):
a_i(t) = A_i e^-iω_i t e^iΓsin(Ω t) e^iω_p t/2
a_s(t) = A_s e^-iω_s t e^-iΓsin(Ω t) e^iω_p t/2,
should not create any detectable RF tones when evaluated with a fast photodetector since a pure phase or frequency modulation will not be detected on a photodiode measuring intensity. However, we observe peaks in the RF spectra for the FM-OPOs shown in Fig. <ref>a that are spaced by Ω. These are displayed in Extended Data Fig. <ref>a, and we provide a closer look at the first sidebands in Extended Data Fig. <ref>b.
We find that even a minor dependence of the cavity's external coupler transmission on wavelength can lead to a noticeable conversion from frequency modulation to intensity modulation. To confirm this, we estimate the expected result of a high-speed photodiode measurement of signal and idler combs produced following equations <ref>-<ref>, under the influence of a wavelength-dependent coupler. We determine the external coupling as a function of frequency for our cavity from the same measurement we used for dispersion characterization. The average change in the external coupling across the 1500-1600 nm measurement bandwidth is approximately ∂κ_a^(e)/∂ω≈ -5·10^-6. The calculated RF spectra (Extended Data Fig. <ref>c) qualitatively match our experimental observations, with discrepancies occurring at higher electro-optic modulation rates where the single FM-OPO approximation is no longer applicable. For each RF spectrum, we also present the full optical spectra (including signal and idler) in Extended Data Fig. <ref>d.
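A compact numerical illustration of this FM-to-IM conversion mechanism is sketched below: a pure FM comb produces a flat photocurrent, while a small linear dependence of the out-coupling on comb-line number converts part of the frequency modulation into intensity modulation at harmonics of Ω. The modulation index and per-line coupling slope used here are arbitrary illustrative values, not fitted device parameters.

import numpy as np
from scipy.special import jv

Omega = 2 * np.pi * 5.8e9
Gamma = 60.0                          # modulation index (illustrative)
slope = 5e-4                          # fractional out-coupling change per comb line (illustrative)
m = np.arange(-200, 201)

t = np.linspace(0, 4 * 2 * np.pi / Omega, 4096, endpoint=False)
phase = np.exp(1j * np.outer(m, Omega * t))

# Pure FM comb: amplitudes J_m(Gamma); |field|^2 is time independent (Jacobi-Anger identity).
field_fm = (jv(m, Gamma)[:, None] * phase).sum(axis=0)

# Wavelength-dependent out-coupling: weight each line by sqrt(1 + slope*m).
w = np.sqrt(np.clip(1 + slope * m, 0, None))
field_im = ((w * jv(m, Gamma))[:, None] * phase).sum(axis=0)

print("pure FM comb:   intensity ripple =", np.ptp(np.abs(field_fm)**2))
print("sloped coupler: intensity ripple =", np.ptp(np.abs(field_im)**2))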
We plot the spectrum with the largest observed coverage, measured at around 1.2 W of RF power in Extended Data Fig. <ref>e. Note that for a particular pump wavelength, there are multiple possible modes of oscillation corresponding to the coupling between different mode pairs (-n_osc, n_osc), (-n_osc-1, n_osc), (-n_osc-1, n_osc-1), and so on. For the non-modulated OPO operation at the power levels we experimentally characterized, we observe only one oscillating mode at a fixed pump wavelength (i.e., (-n_osc, n_osc)), which we attribute to optimal phase matching. Adjusting the pump results in switching between different mode pairs with a periodicity of 1/2 FSR. However, in the presence of sufficiently strong modulation, clusters of modes arise in the FM-OPO spectrum corresponding to these secondary mode pairs being excited.
§.§ FM-OPO tuning with laser and RF detuning
We experimentally analyze the behavior of FM-OPO comb properties with respect to the RF drive parameters. First, we step the pump laser across one FSR of the cavity (Extended Data Fig. <ref>a) and record the OSA spectra for various electro-optic coupling rates (Extended Data Fig. <ref>b-d). We can calibrate the pump wavelength, as shown in Extended Data Fig. <ref>a with respect to the cavity modes by looking at the slight leakage of the original FH pump visible as a faint line at around 1554 nm signal wavelength in all the colormaps. In this study, we operate in a nondegenerate regime and observe a pure OPO in Extended Data Fig. <ref>b. Next, by switching on a moderate RF modulation, we achieve M/2π ≈ 100 MHz in Extended Data Fig. <ref>c and observe comb formation and higher-order FM-OPO comb development. Finally, at high modulation of around M/2π ≈ 510 MHz, we observe that the combs originating from different OPO modes (-n_osc, n_osc), (-n_osc+1, n_osc), and (-n_osc+1, n_osc+1) start to merge. We note that the areas with suppressed FM-OPO intensity result from the waveguide mode crossings between the fundamental TE mode and higher order modes that effectively reduce the quality factors in that region. Next, we analyze the FM-OPO response to the RF detuning δ, defined schematically in Extended Data Fig. <ref>e. We measure this by pumping the device at around 1545 nm and using M/2π≈ 510 MHz. For most measurements, we fix the detuning to δ = 0 so that the RF frequency is on resonance with the cavity FSR near degeneracy Ω = ζ_1 to maximize the comb span and output optical power. If the RF drive is detuned, we observe comb shrinking, as shown in Extended Data Fig. <ref>f, and the total output power decreases, as shown in Extended Data Fig. <ref>g.
§.§ Uncertainty analysis
The measurement error of the comb count in Fig. <ref>b is given by the standard deviation of 51 measurements (41 measurements for the highest RF power). The shaded region corresponds to the coupled-mode-equation simulation, from which we extract the half-widths of the simulated combs. We assume uncertainty of ±1 mode on each side of the signal and idler combs. We calculate the uncertainty of the measured depletion of the SH pump (Fig. <ref>c) and the measured OPO signal (Fig. <ref>d) based on the standard deviation of the SH and FH signals over the measurement time.
We measure the FM-OPO resonator's average intrinsic and total quality factors by averaging the results of Lorentzian fits over around 20 nm of the spectrum, where we observe the comb formation. The standard deviation gives their uncertainties. We infer the uncertainty of κ_b based on the precision of our estimation of the group index (10^-3, based on the finite-element solver). We calculate the cavity escape efficiency uncertainty based on the errors of the average quality factors. The uncertainties of the cavity free spectral range, cavity dispersion, peak waveguide nonlinear efficiency, and electro-optically induced mode-coupling rate correspond to the standard errors of the fit parameters extracted from the least-square fitting. Uncertainties of the nonlinear interaction rate and the SH power threshold of the OPO are calculated based on the standard errors of a nonlinear fit. We calculate the internal and total OPO efficiency errors based on the cavity escape efficiency and SH depletion uncertainties.
§ DATA AVAILABILITY
The data sets generated during and/or analyzed during this study are available from the corresponding author on request.
§ ACKNOWLEDGEMENTS
This work was supported by U.S. government through the Defense Advanced Research Projects Agency Young Faculty Award and Director's Fellowship (YFA, Grant No. D19AP00040), LUMOS program (Grant No. HR0011-20-2-0046), the U.S. Department of Energy (Grant No. DE-AC02-76SF00515) and Q-NEXT NQI Center, and the U.S. Air Force Office of Scientific Research provided a MURI grant (Grant No. FA9550-17-1-0002). We thank NTT Research for their financial and technical support. H.S.S. acknowledges support from the Urbanek Family Fellowship, and V.A. was partially supported by the Stanford Q-Farm Bloch Fellowship Program and the Max Planck Society Sabbatical Fellowship Award. This work was also performed at the Stanford Nano Shared Facilities (SNSF), supported by the National Science Foundation under award ECCS-2026822. We also acknowledge the Q-NEXT DOE NQI Center and the David and Lucille Packard Fellowship for their support. D.D. and A.Y.H acknowledge support from the NSF GRFP (No. DGE-1656518). H.S.S. and V.A. thank Kevin Multani and Christopher Sarabalis for discussions and technical support. A.H.S.-N. thanks Joseph M. Kahn and Stephen E. Harris for useful discussions.
§ AUTHOR CONTRIBUTIONS
A.H.S.-N. and H.S.S. conceived the device and H.S.S. designed the photonic integrated circuit.
H.S.S., C.L., and M.J. developed essential components of the photonic circuit.
H.S.S., T.P., and A.Y.H. fabricated the device.
H.S.S., V.A., and O.T.C. developed the fabrication process.
M.M.F. and A.H.S.-N. provided experimental and theoretical support.
H.S.S., T.P., and D.J.D. performed the experiments.
H.S.S., A.Y.H., T.P., and D.J.D. analyzed the data.
H.S.S., and A.H.S.-N. wrote the manuscript.
H.S.S., V.A., and A.H.S.-N. developed the experiment. H.S.S., D.J.D., A.H.S.-N. developed the numerical and analytical models. A.H.S.-N. supervised all efforts.
§ COMPETING INTERESTS
A.H.S.-N., H.S.S., and A.Y.H. are inventors of a patent application that covers the concept and implementation of the frequency-modulated optical parametric oscillator and its applications. The remaining authors declare no competing interests.
§ SUPPLEMENTARY INFORMATION
§ OPTICAL PARAMETRIC OSCILLATOR WITHOUT MODULATION
We model the doubly-resonant optical parametric oscillator (OPO) based on the Hamiltonian of the system. We separate it into the unperturbed and interaction part - H_0, H_PA:
H_0 = ∑_n ω(n) a_n^∗ a_n + ω_bb^∗ b,
H_PA=g ∑_n b a_n^∗ a_-n^∗ + c.c.,
where a_n and b correspond to the amplitudes of the n-th fundamental harmonic (FH) mode around the OPO degeneracy point n=0 and of the second harmonic (SH) pump mode, ω(n) and ω_b correspond to the frequencies of the n-th FH mode and the SH pump, and g is the χ^(2) nonlinear coupling rate. The coupled mode equations are given by:
ȧ_n = -(iω(n) + κ/2) a_n - 2ig b a_-n^∗,
ḃ = -(iω_b + κ_b/2) b - ig ∑_n a_n a_-n.
We can use this model to calculate the threshold and analyze the above-threshold behavior by assuming that b is driven by a classical field at the pump frequency ω_p. This leads to a drive term H_d = √(κ_b)β_ine^-iω_p t b^∗ + c.c. For a doubly-resonant OPO, the loss of the b field is dominated by the extrinsic coupling, κ_b^(e)≈κ_b. We remove the time dependence by putting b in the frame of ω_p (which we assume is the same as ω_b) and all a_n in the frame of ω_p/2, to maintain the time-independence of H_PA. The resulting relevant parts of the Hamiltonian are:
H_0 = ∑_n (ω(n) - ω_p/2) a_n^∗ a_n,
H_PA = g ∑_n b a_n^∗ a_-n^∗ + c.c.
and the coupled mode equations turn into:
ȧ_n = -(iω(n) - iω_p/2 + κ/2) a_n - 2ig b a_-n^∗,
ḃ = -(κ_b/2) b - ig ∑_n a_n a_-n + i√(κ_b) β_in.
§.§ Doubly-resonant OPO Threshold
To find the threshold, we assume that the a_n’s are all equal to 0, so b obtains some complex field amplitude, and we obtain a system of two equations for each mode pair (-n,+n). We also use the cavity dispersion:
ω(n) = ω_0 + ζ_1 n + ζ_2 n^2/2,
where ζ_1 and ζ_2 correspond to the first- and second-order dispersion. This approach applied to equation <ref> yields:
ȧ_n = -[ i( ω_0 + ζ_1 n + ζ_2 n^2/2 ) - iω_p/2 + κ/2 ] a_n - 2ig b a_-n^∗,
ȧ^∗_-n = -[ -i( ω_0 + ζ_1 (-n) + ζ_2 (-n)^2/2 ) + iω_p/2 + κ/2 ] a^∗_-n + 2ig b^∗ a_n.
From this system of equations, we have a coupled system involving two modes a_n and a_-n. We can write the system of equations in a matrix form by treating (a_n, a_-n^∗) as a complex vector as da/dt = 𝐌𝐚, where 𝐚 = [ a_n; a_-n^∗ ] is the complex vector of amplitudes, and 𝐌 is the matrix:
𝐌 = [ -[ i( -Δ + ζ_1 n + ζ_2 n^2/2 ) + κ/2 ]    -2igb ;    2igb^∗    -[ -i( -Δ + ζ_1 (-n) + ζ_2 (-n)^2/2 ) + κ/2 ] ],
where we introduced the pump detuning defined as Δ = ω_p/2 - ω_0. This equation describes the evolution of the complex amplitudes a_n and a_-n^∗ in terms of a linear transformation defined by the matrix 𝐌.
To find the stability conditions, one has to calculate the eigenvalues of the matrix 𝐌 and find the conditions for the real parts of these eigenvalues to be negative. We can compute its eigenvalues by solving the characteristic equation det(𝐌 - λ𝐈) = 0 that leads to the following quadratic equation:
[ λ + i( -Δ + ζ_1 n + ζ_2 n^2/2 ) + κ/2 ] [ λ - i( -Δ + ζ_1 (-n) + ζ_2 (-n)^2/2 ) + κ/2 ] - 4g^2|b|^2 = 0.
The corresponding condition for parametric oscillation (loss of stability of the trivial solution) is:
16 g^2 |b|^2 > κ^2 + (2Δ - n^2 ζ_2)^2.
The threshold of the OPO is minimized when the pump detuning perfectly compensates for the second-order dispersion for the n-th pair of modes Δ = n^2 ζ_2/2. In that case, we can substitute a steady-state solution for b into the stability condition and see that the threshold of the doubly-resonant OPO is given by:
P_th = ħω_p κ_a^2 κ_b/(64 g^2).
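As a rough order-of-magnitude check of this expression, it can be evaluated with representative device numbers; the loaded quality factor and group index assumed below are not fitted values, so the result should only be read as being on the same scale as the measured threshold.

import numpy as np

hbar = 1.054571817e-34
lam_p = 777e-9                         # SH pump wavelength (m)
omega_p = 2 * np.pi * 2.998e8 / lam_p  # SH pump angular frequency (rad/s)

Q_loaded = 0.8e6                       # loaded FH quality factor (assumed)
kappa_a = (omega_p / 2) / Q_loaded     # total FH loss rate (rad/s)

n_g, L = 2.3, 10e-3                    # group index and resonator length (assumed)
kappa_b = 4 * (2.998e8 / n_g) / L      # single-pass SH "loss" rate, kappa_b = 4 v_g / L

g = 2 * np.pi * 12e3                   # nonlinear rate inferred in the main text
P_th = hbar * omega_p * kappa_a**2 * kappa_b / (64 * g**2)
print(f"P_th ~ {P_th * 1e3:.0f} mW")   # within a factor of ~2 of the measured ~47 mW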
§.§ Above-threshold behavior
To find the relation between the pump power and the output power of the OPO we solve equations <ref>-<ref> in a steady state. Above the threshold, one pair of signal-idler modes will dominate the dynamics of the system, so we neglect the other FH modes:
a_n = 4ig b a_-n^∗/κ_a,
b = ( -4ig a_n a_-n + 2i√(κ_b) β_in )/κ_b.
Substitution of equation <ref> into <ref> yields:
0 = (8g^2/κ_b) a_n|a_-n|^2 - (4g/√(κ_b)) β_in a_-n^∗ + (κ_a/2) a_n.
We can write an analogous equation for the signal and idler modes. Assuming that the amplitudes are real and the loss rates for the signal and idler modes are the same, we find the amplitudes of the signal and idler modes as
|a_n|^2 = |a_-n|^2 = (√(κ_b)/(2g)) β_in - κ_a κ_b/(16g^2).
The total output power of the doubly-resonant OPO is:
P_out = 4 η_a P_th ( √(P_in/P_th) - 1 ),
where η_a = κ_a^(e)/κ_a is the cavity extraction efficiency for the FH modes and the input power is defined as .
We can relate the OPO efficiency to the pump depletion by looking at the b amplitude in a steady state (equation <ref>). By substituting the solutions for the FH modes, we see that b = iκ_a/4g and use the input-output relations to find the output amplitude:
b_out = b_in + i√(κ_b^(e)) b = b_in ( 1 - 2√(P_th)/|b_in| ).
The depletion of the pump power is:
D = 4 (P_th/P_in) ( √(P_in/P_th) - 1 ).
The OPO efficiency ρ is proportional to depletion and the cavity extraction efficiency:
ρ = η_a D.
We use this relationship to find the efficiency of our OPO and frequency comb generator. Note that the doubly-resonant OPO can achieve high efficiency simply by increasing the output coupling rate of the cavity, exceeding 50% efficiency for κ_a^(e)>κ_a^(i).
§.§ Approximate single-mode model of a propagating pump field
Our goal is to represent the propagating SH field and its dynamics approximately as the dynamics of a single b mode. We note that this model cannot capture complex spatial variations in the pump field. Let's first consider just the b mode, ignoring the other a_n modes of the system. We will need to define an effective loss rate of b. The input and output fluxes give the number of the photons within our waveguide in the steady state. After some time T>τ, where τ is the amount of time it takes the field to propagate across the waveguide, the number of photons in that region is given by
|b(T>τ)|^2 = ∫_0^T |β_in|^2 dt - ∫_τ^T |β_out|^2 dt,
where τ is the cavity round-trip time. If we neglect the propagation loss, we also see that β_in = β_out and:
|b(T>τ)|^2 = ∫_0^τ |β_in|^2 dt = |β_in|^2 τ.
On the other hand, solving equation <ref> in steady state and low-power approximation yields:
b = -(2/√(κ_b^(e))) β_in.
We combine equations <ref>-<ref> to see that the effective loss rate of the b mode is given by κ_b = 4/τ = 4 v_g / L, where v_g is the group velocity of the pump and L is the total length over which the light propagates, here equivalent to the resonator length.
§ FREQUENCY-MODULATED OPTICAL PARAMETRIC OSCILLATOR
To include the effects of the intracavity phase modulator, we need to include an additional term in the Hamiltonian:
H_mod=2 Mcos (Ω t)∑_σ=s,i∑_m,m' a^∗_σ,m'a_σ,m + c.c.,
where M is the electro-optic modulation rate, and Ω corresponds to the RF frequency applied to the modulator. Applying the RWA in the new frame leads to:
H_mod = M ∑_σ=s,i∑_m ã^∗_σ,m ã_σ,m+1 + c.c.
The total Hamiltonian, including both the parametric gain and the modulation, is then:
H = H_0 + H_PA + H_mod.
The resulting coupled-mode equations in the main text are generated from this classical Hamiltonian.
§.§ Relabeling the modes according to offset from NDOPO signal and idler
Once we have a nondegenerate oscillating solution for the equations above, the signal and idler oscillations would be at a specific value of ± n_osc. We will also denote the frequencies of these oscillations as
ω_s = ω(+n_osc) and
ω_i = ω(-n_osc).
The oscillating modes are then
a_s,0 ≡ a_+n_osc
a_i,0 ≡ a_-n_osc.
We can count out from these oscillating modes with a new index variable, m:
a_s,m ≡ a_n_osc+m,
a_i,m ≡ a_-(n_osc+m),
with frequencies
ω_s(m) = ω(n_osc+m),
ω_i(m) = ω(-n_osc-m).
These can be written more explicitly using the original definition of ω(n):
ω_s(m) = ω_0+ζ_1 (n_osc+m) + ζ_2/2 (n_osc+m)^2,
ω_i(m) = ω_0+ζ_1 (-n_osc-m) + ζ_2/2 (-n_osc-m)^2.
Now we rewrite Hamiltonians in terms of these new frequencies and the newly defined modes a_s,m and a_i,m. For example, the zeroth order Hamiltonian H_0 would be:
H_0 = ∑_m [(ω_s(m)-ω_p/2) a_s,m^∗ a_s,m + (ω_i(m)-ω_p/2) a_i,m^∗ a_i,m].
Similarly, the parametric amplification Hamiltonian H_PA becomes
H_PA = g ∑_m (b a_s,m^∗ a_i,m^∗ + c.c.)
This relabeling puts the oscillating modes at the center m=0 and allows us to examine the dynamics in their vicinity. The indices m are the offsets from these central modes.
§.§ Calculating a recurrence relation for the modal amplitudes
First, we consider the steady state equation for the b mode as:
ḃ = -κ_b/2 b -i g∑_m ã_s,mã_i,m+i√(κ_b) b_in.
At steady state, we expect the amplitude of the intracavity pump field to become:
b = -2i g/κ_b∑_m ã_s,mã_i,m+2i/√(κ_b) b_in
For the idler and signal modes, we then derive the equations:
dã_i,m/dt = -κ_a/2ã_i,m-i ((n_oscζ_2 m + m^2ζ_2/2 )ã_i,m + g b ã_s,m^∗ + M(ã_i,m+1+ã_i,m-1) ),
dã_s,m/dt = -κ_a/2ã_s,m-i ((n_oscζ_2 m + m^2ζ_2/2 )ã_s,m + g b ã_i,m^∗ + M(ã_s,m+1+ã_s,m-1) ).
Now we can find relations for steady-state amplitudes (assuming we can choose the phases so that all the amplitudes are real) – the imaginary part of the equations above are given by:
0 = (n_oscζ_2 m + m^2ζ_2/2 )ã_i,m + M(ã_i,m+1+ã_i,m-1),
0 = (n_oscζ_2 m + m^2ζ_2/2 )ã_s,m + M(ã_s,m+1+ã_s,m-1) ,
Notice that we used Re[g b ã_i,m^∗] = 0, i.e., if b_in is chosen to be real, b will be imaginary.
§.§ Verifying that the FM solution satisfies the equations of motion
We show that a solution of the form
ã_i,m=A_i J_m(Γ), ã_s,m=A_s J_m(Γ)
approximately satisfies the steady-state dynamics of the system derived above. Substituting the relations into the equations (<ref>) and (<ref>),
we find:
0 = - (n_oscζ_2 m + m^2ζ_2/2 )A_i J_m(Γ) +M(A_i J_m+1(Γ)+A_i J_m-1(Γ)).
We now use the Bessel function recurrence relation to simplify this expression. The recurrence relation for Bessel functions is:
2m/xJ_m(x) = J_m-1(x) + J_m+1(x),
which leads to
0 = - (n_oscζ_2 m + m^2ζ_2/2 ) +M ·2m/Γ.
Obviously, it is not possible to satisfy this relation exactly due to the residual dependence on m inside the equation. But we can make it approximately true by setting
Γ = 2 M/n_oscζ_2.
The bandwidth is then given by
B.W. ≡ 2ΓΩ = 4MΩ/(n_oscζ_2).
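This approximation can be checked numerically: with ã_m ∝ J_m(Γ) and Γ = 2M/(n_oscζ_2), the residual of the steady-state relation equals -(ζ_2/2) m^2 J_m(Γ), smaller than the retained terms by roughly m/(2n_osc). A short sketch, using parameter values of the scale quoted in the main text purely for illustration:

import numpy as np
from scipy.special import jv

M_eo  = 2 * np.pi * 510e6
zeta2 = 2 * np.pi * 11e3
n_osc = 800

Gamma = 2 * M_eo / (n_osc * zeta2)
m = np.arange(-300, 301)

# Residual of 0 = -(n_osc*zeta2*m + zeta2*m^2/2) J_m + M (J_{m+1} + J_{m-1}).
residual = (-(n_osc * zeta2 * m + 0.5 * zeta2 * m**2) * jv(m, Gamma)
            + M_eo * (jv(m + 1, Gamma) + jv(m - 1, Gamma)))
scale = np.max(np.abs(n_osc * zeta2 * m * jv(m, Gamma)))

print(f"Gamma ~ {Gamma:.0f}")
print(f"max relative residual ~ {np.max(np.abs(residual)) / scale:.2e}")  # ~ m/(2 n_osc) << 1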
We have solved the comb dynamics in the frequency domain. To find the time-domain solution, we use the Jacobi-Anger relation
e^iz sinθ = ∑_m=-∞^∞ J_m(z) e^imθ,
from which we find that the steady-state solution of the system appears as swept signal and idler tones. These are represented in the form
a_i(t) = A_i e^-iω_i t e^iΓsin(Ω t) e^iω_p t/2,
a_s(t) = A_s e^-iω_s t e^-iΓsin(Ω t) e^iω_p t/2.
§ NUMERICAL MODELING
§.§ Quasi-static approximation
We solve the coupled mode equations for 2,500 modes. To increase the numerical efficiency of the ODE solver, we notice that the extrinsic coupling rate of the b amplitude is much faster than any other rates in the system and introduce a quasi-static approximation. During each step of the ODE solver, we assume that the SH mode is in a steady state, which yields:
ȧ_n = [ i( Δ + nδ - n^2ζ_2/2 ) - κ_a,n/2 ] a_n - iM ( a_n-1 + a_n+1 ) - 2ig a_-n^∗ b,
b = ( -2ig ∑_n a_n a_-n - i√(κ^(e)_b) β_in ) / κ_b^(e).
These are the equations we solve in the main text to predict the shape of the optical frequency combs generated in our device and their total comb count.
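A reduced sketch of this quasi-static integration scheme is given below; the mode count, loss rates, and drive strength are illustrative placeholders rather than the values used for the 2,500-mode production runs.

import numpy as np
from scipy.integrate import solve_ivp

# Reduced illustration: 201 FH modes instead of the 2,500 used in the production runs.
N = 100
n = np.arange(-N, N + 1)
zeta2   = 2 * np.pi * 11e3        # second-order dispersion (rad/s)
kappa_a = 2 * np.pi * 380e6       # FH loss rate (rad/s), assumed uniform
kappa_b = 5e10                    # effective SH loss rate ~ 4 v_g / L (rad/s), assumed
M_eo    = 2 * np.pi * 270e6       # EO coupling (rad/s)
g       = 2 * np.pi * 12e3        # nonlinear rate (rad/s)
beta_in = 2e9                     # SH drive amplitude, chosen somewhat above threshold
delta   = 0.0                     # RF detuning from the FSR
Delta   = 0.5 * zeta2 * 50**2     # pump detuning selecting a nondegenerate pair near n = +/-50

def rhs(t, y):
    a = y[:2 * N + 1] + 1j * y[2 * N + 1:]
    a_minus = a[::-1]
    # Quasi-static SH field: b adiabatically follows the instantaneous FH field.
    b = (-2j * g * np.sum(a * a_minus) - 1j * np.sqrt(kappa_b) * beta_in) / kappa_b
    nn = np.zeros_like(a)
    nn[1:] += a[:-1]
    nn[:-1] += a[1:]
    dadt = ((1j * (Delta + n * delta - 0.5 * zeta2 * n**2) - 0.5 * kappa_a) * a
            - 1j * M_eo * nn - 2j * g * np.conj(a_minus) * b)
    return np.concatenate([dadt.real, dadt.imag])

# Seed with weak noise so the parametric oscillation can build up, then integrate.
rng = np.random.default_rng(0)
y0 = 1e-3 * rng.standard_normal(2 * (2 * N + 1))
sol = solve_ivp(rhs, (0.0, 5e-7), y0, method="RK45", max_step=1e-10)
comb = np.abs(sol.y[:2 * N + 1, -1] + 1j * sol.y[2 * N + 1:, -1])**2  # final mode powers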
|
http://arxiv.org/abs/2307.04388v1 | 20230710075157 | Core localized alpha-channeling via low frequency Alfven mode generation in reversed shear scenarios | [
"Zhiyong Qiu",
"Shizhao Wei",
"Tao Wang",
"Liu Chen",
"Fulvio Zonca"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
Core localized alpha-channeling via low frequency Alfven mode generation in reversed shear scenarios
Zhiyong Qiu, Shizhao Wei, Tao Wang, Liu Chen, Fulvio Zonca
August 12, 2023
=====================================================================================================
A novel channel for fuel ions heating in tokamak core plasma is proposed and analyzed using nonlinear gyrokinetic theory. The channel is achieved via spontaneous decay of reversed shear Alfvén eigenmode (RSAE) into low frequency Alfvén modes (LFAM), which then heat fuel ions via collisionless ion Landau damping. The conditions for RSAE spontaneous decay are investigated, and the saturation level and the consequent fuel ion heating rate are also derived. The channel is expected to be crucial for future reactors operating under reversed shear configurations, where fusion alpha particles are generated in the tokamak core where the magnetic shear is typically reversed, and there is a dense RSAE spectrum due to the small alpha particle characteristic dimensionless orbits.
The physics of energetic particles (EPs), including fusion alpha particles <cit.>, is a key element in understanding the performance of future fusion reactors. Two crucial topics are EP transport loss induced by self-generated collective oscillations such as shear Alfvén wave (SAW) eigenmodes <cit.>, and the search for alternative/complementary routes to transfer EP power to fuel ions, i.e., alpha-channeling <cit.>. Both processes are influenced by the saturation level and spectrum of SAWs. In this contribution, a channel for reversed shear Alfvén eigenmode (RSAE) <cit.> nonlinear saturation is proposed and analysed, which is expected to play a significant role in future reactor-scale tokamaks with a rich spectrum of core-localized RSAEs <cit.> due to the reversed shear magnetic configuration and the small dimensionless EP orbit size.
In this proposed process, a RSAE spontaneously decays into another RSAE and a low frequency Alfvén mode (LFAM), which can be ion Landau damped, leading to effective heating of thermal ions in the reversed shear region, and consequently, enhanced fusion performance.
We consider for simplicity low-β_i plasma such that the frequency separation between RSAE and LFAM required for resonant mode coupling can be well satisfied. The nonlinear coupling is dominated by thermal plasma contribution, while the RSAEs are excited by EPs, so the thermal plasma nonuniformity can be neglected, which is also consistent with the advanced scenario of reversed shear configuration.
The governing equations describing nonlinear interactions among RSAEs and the LFAM, all with predominantly SAW polarization, can be derived from the nonlinear gyrokinetic vorticity equation <cit.> and the quasi-neutrality condition, with the particle response derived from the nonlinear gyrokinetic equation <cit.>.
The general equation for three SAWs nonlinear interaction, with the matching condition being Ω_3(ω_3,𝐤_3)=Ω_1(ω_1,𝐤_1)+Ω_2(ω_2,𝐤_2), can be derived as
b_k_3ℰ_k_3δϕ_k_3 = - i/ω_3Λ^k_3_k_2,k_1[ (b_k_2-b_k_1)(1-k_∥1k_∥2V^2_A/ω_1ω_2) +b_k_3V^2_Ak_∥ 3/ω_3(k_∥ 1/ω_1 - k_∥ 2/ω_2) ]δϕ_k_1δϕ_k_2,
with ℰ_k≡ -k^2_∥ V^2_A/ω^2_k + 1 - ω^2_G/ω^2_k being the SAW dielectric function in the WKB limit, ω_G≡√(7/4+T_e/T_i) v_i/R_0 being the leading order geodesic acoustic mode frequency <cit.>, accounting for SAW continuum upshift and creation of beta-induced continuum gap, and Λ^k_k”,k'≡ (c/B_0)𝐛̂·𝐤”×𝐤' with 𝐛̂ being the unit vector along the equilibrium magnetic field 𝐁_0.
Equation (<ref>) describes the nonlinear evolution of SAWs, with Ω_3 modified by the beating of Ω_1 and Ω_2, the first term on the right hand side due to the competition of Reynolds and Maxwell stresses, and the second term from finite parallel electric field contribution to field line bending. Note that, since (ω_1+ω_2)≃ (k_∥ 1+k_∥ 2)V_A, Ω_3 naturally satisfies the SAW D.R. and can be strongly excited if it is a normal mode of the system, leading to significant spectral transfer of SAW turbulence.
We note that, in the expression of ℰ_k, effects of wave-particle interactions are not included, consistent with the k_∥v_i≪ω_k ordering for bulk non-resonant ions. However, finite Landau damping due to resonance with ions is crucial for alpha-channeling, and will be recovered formally in the later analysis by inclusion of the anti-Hermitian part of ℰ_k <cit.>.
§ PARAMETRIC DECAY OF RSAE
Equation (<ref>) will be applied to the nonlinear decay of a pump RSAE Ω_0(ω_0, 𝐤_0) into a RSAE sideband Ω_1(ω_1, 𝐤_1) and a LFAM Ω_B(ω_B, 𝐤_B), with the frequency/wavenumber matching condition Ω_0=Ω_1+Ω_B assumed without loss of generality.
For RSAE and LFAM being dominated by single-n and single-m mode structures, we take
δϕ_k=A_k(t)Φ_k(x) exp(-iω_k t+inξ-imθ), with A_k(t) being the slowly varying mode amplitude, Φ_k(x) the parallel mode structure localized about q_min with x≡ nq-m, and the normalization condition ∫ |Φ_k|^2 dx=1 is satisfied.
For the effective transfer of alpha particle energy to core ions, ω_B≤ O(v_i/(qR_0)), and thus, |ω_B|≪ |ω_0|, |ω_1| and k_∥ B≃ 0. Thus, the q_min surface also corresponds to the rational surface of Ω_B, i.e., Ω_B is the LFAM in the reversed shear configuration, as investigated theoretically <cit.>. We then have, ω_0≃ω_1 and k_∥0≃ k_∥ 1. Effects of small frequency mismatch on the decay process will be discussed later.
The nonlinear RSAE sideband and LFAM equations can be derived from equation (<ref>) as
b̂_1ℰ̂_1 A_1 = -i/ω_1⟨Λ^k_1_k_0,k_B^*α_1 Φ_1Φ_0Φ_B⟩_x A_0 A_B^*,
b̂_Bℰ̂_B A_B = -i/ω_B⟨Λ^k_B_k_0,k_1^*α_B Φ_BΦ_0Φ_1⟩_x A_0 A_1^*,
with α_1≡ (b_0-b_B)(1- k_∥ Bk_∥0V^2_A/(ω_0ω_B)) + b_1 V^2_A (k_∥ 1/ω_1 ) (k_∥ B/ω_B - k_∥0/ω_0), α_B ≡ (b_0-b_1)(1- k_∥ 1k_∥0V^2_A/(ω_0ω_1)) + b_B V^2_A (k_∥ B/ω_B ) (k_∥ 1/ω_1 - k_∥0/ω_0), ⟨⋯⟩_x≡∫⋯ dx denoting averaging over the fast radial scale, b̂_1ℰ̂_1≡∫Φ_1 b_1 ℰ_1Φ_1 dx being the Ω_1 eigenmode local dispersion function, and b̂_Bℰ̂_B being the local dispersion function for the LFAM eigenmode.
The parametric decay dispersion relation for RSAE decaying into another RSAE and LFAM can then be derived by combining equations (<ref>) and (<ref>)
ℰ̂_1ℰ̂_B^*≃(Λ̂^k_1_k_0,k_B^*)^2α̂_N/b̂_Bb̂_1 ω_Bω_1Ĉ^2 |A_0|^2,
with Ĉ≡⟨Φ_0Φ_BΦ_1⟩_x, Λ̂^k_1_k_0,k_B^*= ⟨Λ^k_1_k_0,k_B^*⟩_x, α̂_N≡α̂_1α̂_B, and Ĉ≃√(2 Δ_B/(√(π)Δ_0Δ_1)), with Δ_0∼Δ_1∼ O(1) and Δ_B∼ O(β^1/2) being the characteristic radial widths of the respective linear parallel mode structures.
Expanding ℰ̂_1≃ i ∂_ω_1ℰ̂_1(∂_t+γ_1)≃ (2 i/ω_1) (γ+γ_1) and ℰ̂_B^*≃ (-2i/ω_B) (γ+γ_B) in the local limit, with γ denoting the slow temporal variation of Ω_1 and Ω_B due to the parametric instability, and γ_1/γ_B being the linear damping rates of RSAE/LFAM accounted for by the anti-Hermitian part of ℰ_1/ℰ_B, one obtains
(γ+γ_1)(γ+γ_B)=(Λ̂^k_1_k_0,k_B^*)^2 α̂_N/4 b̂_B b̂_1Ĉ^2|A_0|^2.
The conditions for the pump RSAE spontaneous decay can thus be obtained from equation (<ref>) as α̂_N>0, together with (Λ̂^k_1_k_0,k_B^*)^2 α̂_N Ĉ^2|A_0|^2/(4 b̂_B b̂_1) > γ_Bγ_1 for the nonlinear drive overcoming the threshold due to Ω_1 and Ω_B Landau damping.
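For illustration, the parametric growth rate follows from the quadratic relation above; the sketch below solves it for normalized damping rates and drive strengths (placeholder numbers, not tokamak parameters), showing that spontaneous decay (γ > 0) requires the drive term to exceed γ_1γ_B.

import numpy as np

def parametric_growth_rate(drive_sq, gamma_1, gamma_B):
    """Solve (gamma + gamma_1)(gamma + gamma_B) = drive_sq for the growth rate gamma,
    where drive_sq stands for the right-hand side of the decay dispersion relation."""
    return -0.5 * (gamma_1 + gamma_B) + np.sqrt(0.25 * (gamma_1 - gamma_B)**2 + drive_sq)

# Illustrative (normalized) damping rates of the RSAE sideband and the LFAM.
gamma_1, gamma_B = 1e-3, 5e-3

for d in np.logspace(-7, -3, 5):          # drive term, proportional to |A_0|^2
    gamma = parametric_growth_rate(d, gamma_1, gamma_B)
    label = "spontaneous decay" if gamma > 0 else "below threshold"
    print(f"drive = {d:.1e}  ->  gamma = {gamma:+.2e}  ({label})")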
The nonlinear dispersion relation is very complex, and depends on various conditions including the polarization and mode structure of the three modes involved. For further analytical progress, the WKB limit and the strong assumption of k_∥ B→ 0 is adopted, and a parameter regime can be identified for the spontaneous decay process to strongly occur, which corresponds to k_⊥1≫ k_⊥0, such that (b_0-b_1)(b_0-b_B-b_1)>0; and α̂_N>0 can be satisfied with 1-k_∥0k_∥ 1V^2_A/(ω_0ω_1)>0, which generally requires Ω_1 being excited above the local SAW continuum accumulation point with n_1q_min< m_1.
The threshold condition for the RSAE spontaneous decay, for the proposed parameter region of RSAE “normal cascading" to |k_⊥1|≫ |k_⊥0|, can be estimated as
|δ B_⊥0/B_0|^2 > 4γ_1γ_B/ω_0ω_1k^2_∥0/k^2_⊥11/Ĉ^21/1-k_∥0k_∥ 1V^2_A/(ω_0ω_1)∼𝒪(10^-7),
and is comparable with or slightly higher than the typical threshold condition for other dominant nonlinear mode coupling processes, e.g., zonal structure (ZS) generation. This threshold amplitude is also consistent with typical SAW instability intensities observed in experiments. Thus, this channel could be an important process in determining the nonlinear dynamics of RSAEs.
§ NONLINEAR SATURATION AND CORE-LOCALIZED ION HEATING
The RSAE saturation level can be estimated by considering the feedback of the two sidebands to the pump RSAE, which can be derived from equation (<ref>) as
b̂_0ℰ̂_0 A_0≃ -i/ω_0Λ̂^k_0_k_1,k_Bα̂_0 Ĉ A_1 A_B,
with α_0= (b_1-b_B) (1- k_∥ Bk_∥ 1V^2_A/(ω_1ω_B)) + b_0 V^2_A(k_∥0/ω_0) (k_∥ B/ω_B- k_∥ 1/ω_1). The saturation level of the LFAM can be estimated from the fixed-point solution of equations (<ref>), (<ref>) and (<ref>), and one obtains
|A_B|^2= γ_0γ_1 b̂_0b̂_1ω_0ω_1∂_ω_1ℰ_1,ℛ∂_ω_0ℰ_0,ℛ/(α̂_0 α̂_1 |Ĉ|^2 (Λ̂^k_0_k_1,k_B)^2), and the ion heating rate due to LFAM Landau damping, can be estimated as
P_i=2γ_B ω_B∂ℰ_B,ℛ/∂ω_Bn_0e^2/T_ib̂_B |A_B|^2 ∼ 10^-3γ_0 n T.
The obtained core ion heating rate due to LFAM collisionless damping can be comparable to the Coulomb collisional heating rate estimated by n T/τ_E, with τ_E being the energy confinement time.
This channel, achieved via the Landau damping of the secondary LFAM with k_∥ B≪1, is highly localized around the q_min surface (this conclusion can also be reached by noting that the “secondary" LFAM structure is determined by the primary RSAE and has a narrower radial extent than the primary RSAEs). It will therefore deposit fusion alpha particle power locally and heat core ions, leading to a direct improvement of fusion performance in the tokamak center. The nonlinear dynamics of RSAEs with multiple channels accounted for simultaneously <cit.> is crucial for understanding the core plasma behaviour and fusion performance of future reactors.
AFasoliNF2007 Fasoli A, Gormenzano C et al 2007 Nuclear Fusion 47 S264
LChenRMP2016 Chen L and Zonca F 2016 Review of Modern Physics 88 015008
NFischPRL1992 Fisch N J and Rax J M 1992 Phys. Rev. Lett. 69(4) 612–615
HBerkPRL2001 Berk H L, Borba D N, Breizman B N, Pinches S D and Sharapov S E 2001 Phys. Rev. Lett. 87(18) 185002
TWangPoP2018 Wang T, Qiu Z, Zonca F, Briguglio S, Fogaccia G, Vlad G and Wang X 2018 Physics of Plasmas 25 062509
LChenJGR1991 Chen L and Hasegawa A 1991 Journal of Geophysical Research: Space Physics 96 1503 ISSN 2156-2202
EFriemanPoF1982 Frieman E A and Chen L 1982 Physics of Fluids 25 502–508
NWinsorPoF1968 Winsor N, Johnson J L and Dawson J M 1968 Physics of Fluids 11 2448–2450
FZoncaPPCF1996 Zonca F, Chen L and Santoro R A 1996 Plasma Physics and Controlled Fusion 38 2011
RMaPPCF2022 Ma R, Chen L, Zonca F, Li Y and Qiu Z 2022 Plasma Physics and Controlled Fusion 64 035019
SWeiJPP2021 Wei S, Wang T, Chen N and Qiu Z 2021 Journal of Plasma Physics 87 905870505
SWeiNF2022 Wei S, Wang T, Chen L, Zonca F and Qiu Z 2022 Nuclear Fusion 62 126038
|
http://arxiv.org/abs/2307.04443v1 | 20230710095228 | Search-time Efficient Device Constraints-Aware Neural Architecture Search | [
"Oshin Dutta",
"Tanu Kanvar",
"Sumeet Agarwal"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
Indian Institute of Technology
{oshin.dutta,sumeet}@ee.iitd.ac.in, [email protected]
Search-time Efficient Device Constraints-Aware Neural Architecture Search
Oshin Dutta Tanu Kanvar Sumeet Agarwal
=========================================================================
Edge computing aims to enable edge devices, such as IoT devices, to process data locally instead of relying on the cloud. However, deep learning models for tasks like computer vision and natural language processing can be computationally expensive and memory-intensive. Manually designing architectures specialized for each device is infeasible due to their varying memory and computational constraints. To address these concerns, we automate the construction of task-specific deep learning architectures optimized for device constraints through Neural Architecture Search (NAS). We present DCA-NAS, a principled method of fast neural network architecture search that incorporates edge-device constraints such as model size and floating-point operations. It incorporates weight sharing and channel bottleneck techniques to speed up the search time. Based on our experiments, we see that DCA-NAS outperforms manual architectures for similarly sized models and is comparable to popular mobile architectures on various image classification datasets like CIFAR-10, CIFAR-100, and Imagenet-1k. Experiments with the DARTS and NAS-Bench-201 search spaces show the generalization capabilities of DCA-NAS. On further evaluating our approach on Hardware-NAS-Bench, device-specific architectures with low inference latency and state-of-the-art performance were discovered.
§ INTRODUCTION
In recent years, there has been significant progress in developing Deep Neural Network (DNN) architectures <cit.> for edge and mobile devices. However, designing DNN architectures for specific hardware constraints and tasks is a time-consuming and computationally expensive process <cit.>. To address this, Neural Architecture Search (NAS) <cit.> has become popular as it discovers optimal architectures given a task and network operations. Despite its success, traditional NAS techniques cannot guarantee optimal architectures for specific devices with hardware constraints such as storage memory and maximum supported FLOPs.
To address this concern, researchers have developed hardware-aware algorithms <cit.> that find optimal device architectures with low resource training overhead and search time. These methods often use inference latency <cit.>, FLOPs <cit.> or a combination of hardware metrics <cit.> as constraints scaled by a tunable factor. However, the time to tune the scaling factor is often not considered within the NAS search time and can be ten times the reported search time.
To address these issues, we propose the Device Constraints-Aware NAS (DCA-NAS), a principled differentiable NAS method that introduces the total allowable model size or floating-point operations (FLOPs) as constraints within the optimization problem, with minimal hyper-parameter tuning. Unlike inference latency, which is task dependent, FLOPs and memory are specified for a given hardware platform and are thus appropriate for our generic method. The approach is adaptable to other hardware metrics such as energy consumption or inference latency using additional metric-measuring functions.
The paper makes the following significant contributions:
* It introduces a fast method that uses weight sharing among operations in the search space and channel bottleneck, along with a differentiable resource constraint, for continuous exploration of the search space.
* A training pipeline that allows a user to input device memory or FLOPs and search for optimal architecture with minimal hyper-parameter tuning.
* Our extensive experimentation on vision datasets- CIFAR-10, CIFAR-100, TinyImagenet, Imagenet-1k and inference-latency comparisons of trained models on Hardware-NAS-bench demonstrate the efficiency of our method. The generalization of our method to different search spaces is shown with experiments on DARTS and NAS-Bench.
§ RELATED WORK
Neural Architecture Search
Popular approaches <cit.> designed architectures for high performance on specific tasks or datasets with the traditional deep learning perspective that bigger is better, resulting in computationally and memory-intensive inference on edge devices. Network pruning <cit.>, channel removal <cit.>, and weight/activation quantization <cit.> can compress architectures, but they require pre-training and hyperparameter tuning, and often lack transferability. Neural Architecture Search (NAS) methods such as Reinforcement Learning <cit.>, Evolutionary Learning <cit.>, and Differentiable Neural Architecture Search (DNAS) <cit.> can automatically search for architectures without user intervention and can transfer across similar tasks. DNAS with surrogate metrics <cit.> has also been used to explore the architecture search space. However, architectures found by DNAS methods are not optimized for deployment on edge devices, and smaller models obtained by reducing layers or channels are often sub-optimal.
Hardware-aware Neural Architecture search
Certain NAS methods optimize <cit.> for constraints such as latency, inference speed <cit.>, FLOPs <cit.>, and memory usage <cit.>. Some use a separate DNN to predict constraint metrics and an evolutionary search to obtain hardware-aware optimal models <cit.>, while others consider the real-time latencies of edge devices or provide specific architectures for specific devices <cit.>. However, these methods require significant search time and tuning of the scaling factor controlling the trade-off between performance and the constraint, and they do not always find optimal architectures. In contrast, we use a differentiable hardware-aware objective function with generic hardware metrics and do not require a tunable scaling factor.
Certain methods <cit.> train a supernet first and then search for a smaller architecture, but this is only efficient when there are more than fifteen different edge devices with different limitations or deployment scenarios <cit.>, as training the supernet requires substantial resources (32 V100 GPUs for about 1,200 GPU hours). A search stage followed by evaluation, as done in our approach, is more efficient when the number of distinct edge devices is fewer than fifteen.
§ DCA-NAS: DEVICE CONSTRAINTS AWARE FAST NEURAL ARCHITECTURE SEARCH
We present the preliminary gradient-based NAS objective function in section <ref> and then formulate the problem of incorporating the hardware-awareness in NAS as a constrained optimization problem in section <ref> followed by techniques to reduce the search time in section <ref>. The framework of our approach is illustrated in Figure <ref>.
Notation
α_o^i,j: the architecture parameter for operation o between a pair of nodes (i, j).
b(o): the number of learnable parameters or the FLOPs required by operation o.
w: the learnable weights of the operations.
K_d: the resource constraint of the device, given as input to the algorithm.
K_d^': the tighter constraint metric derived from the look-up graph.
λ: the Lagrange multiplier for solving the constrained optimization that incorporates model size or FLOPs as a constraint.
§.§ Gradient-based NAS Objective Function
Popular DNAS techniques <cit.> have two stages, the search phase and the evaluation phase. During the search phase, given a task or a dataset the techniques search for a network of cells, which are directed acyclic graphs with N nodes. The edges of the graph are network layers, whose operations are to be selected from a pre-defined set 𝒪 containing operations such as 3x3 separable convolution and identity operations with trainable weights w_o.
The search is made differentiable by making the choice of a particular operation to be a softmax of architecture weights α of all operations. Thus, the intermediate output z_j at node j is given by,
z_j=∑_o ∈𝒪exp{α_o^i, j}/∑_o^'∈𝒪exp{α_o^'^i, j}· o(w_o^i,j,𝐳_i)
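For concreteness, the softmax-weighted mixed operation on a single edge can be sketched in PyTorch as follows; the candidate set and channel configuration are illustrative and do not reproduce the exact DARTS operation set:

import torch
import torch.nn as nn
import torch.nn.functional as F

CANDIDATE_OPS = {
    "skip_connect": lambda c: nn.Identity(),
    "conv_3x3": lambda c: nn.Conv2d(c, c, 3, padding=1),
    "avg_pool_3x3": lambda c: nn.AvgPool2d(3, stride=1, padding=1),
}

class MixedOp(nn.Module):
    # One edge (i, j): holds every candidate operation and one architecture
    # parameter alpha_o per operation.
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList(build(channels) for build in CANDIDATE_OPS.values())
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, z_i: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=0)  # exp(alpha_o) / sum_o' exp(alpha_o')
        return sum(w * op(z_i) for w, op in zip(weights, self.ops))

The output of node j is then obtained by summing the mixed-op outputs over its predecessor nodes.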
§.§ DCA-NAS formulation
Previous DNAS approaches <cit.> did not focus on searching architectures specifically for inference on resource-constrained devices. In contrast, we formulate the DNAS objective function as a constrained optimization problem by incorporating device resource constraints (memory or FLOPs) in the search objective function. The constrained bi-level optimization problem is written as,
min_α ℒ_val(w^*(α), α)
s.t. w^*(α) = argmin_w ℒ_train(w, α),
     k_s(α) ≤ K_d
where the training dataset is split into train and val to optimize w and α simultaneously in each iteration, subject to the constraint that the architecture's number of parameters or FLOPs k_s must be less than or equal to the device resource constraint K_d. The following equation calculates the architecture's number of parameters or FLOPs during search, given the number of cells c_n. Our method can also be adapted to other metrics, such as latency and energy consumption, with additional metric-measuring functions.
k_s(α)= c_n∑_(i,j)∈ N∑_o ∈𝒪exp{α_o^i, j} * b(o)/∑_o^'∈𝒪exp{α_o^'^i, j}
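This expected resource cost can be computed in a differentiable way directly from the architecture parameters. The sketch below assumes a single cost value b(o) per candidate operation, whereas a full implementation would account for per-edge channel and stride configurations:

import torch
import torch.nn.functional as F

def expected_cost(edge_alphas, op_costs, num_cells):
    """Differentiable estimate k_s(alpha) of the parameter count (or FLOPs)
    of the derived architecture.

    edge_alphas : list of tensors, one per edge (i, j), each of shape [|O|]
    op_costs    : tensor of shape [|O|] holding b(o) for each candidate operation
    num_cells   : number of stacked cells c_n
    """
    per_edge = [torch.sum(F.softmax(alpha, dim=0) * op_costs) for alpha in edge_alphas]
    return num_cells * torch.stack(per_edge).sum()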
§.§.§ Tackling the difference in search and evaluation networks
The size of the architecture in the search phase, k_s, differs from the architecture size in the evaluation phase due to the softmax weighting factor in Equation <ref> (a demonstration can be found in the appendix). To address this, we introduce a tighter bound on the search constraint, K_d^', which is less than the device resource constraint K_d. A lookup graph (LUG) needs to be built for each dataset by varying K_d^' within appropriate bounds and running the algorithm until convergence each time to obtain the corresponding device resource constraint K_d. The computation time of the LUG can be reduced by running these searches in parallel. Thus, incorporating the tighter constraint obtained by looking up the graph for the given device resource constraint K_d, along with the trainable Lagrange multiplier λ, the objective function in Equation <ref> is re-written as
ℒ = ℒ_val(w^*(α), α) + λ (k_s(α) - LUG(K_d))
s.t. w^*(α) = argmin_w ℒ_train(w, α)
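A minimal sketch of this search objective is given below; how the multiplier λ is updated (e.g., by dual ascent) and how it is kept non-negative are implementation choices that the equation itself does not prescribe:

import torch
import torch.nn.functional as F

# Trainable Lagrange multiplier; a softplus reparameterisation is one way to keep
# the effective multiplier non-negative (an assumption, not specified by the paper).
raw_lambda = torch.zeros(1, requires_grad=True)

def dca_nas_objective(val_loss, k_s, lug_bound):
    """L = L_val + lambda * (k_s(alpha) - LUG(K_d)); lug_bound is the tighter
    constraint read off the lookup graph for the device constraint K_d."""
    lam = F.softplus(raw_lambda)
    return val_loss + lam * (k_s - lug_bound)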
§.§ Techniques to reduce search time
Channel Bottleneck. We use convolutional layers with 1x1 kernels to reduce the number of output channels of operations in the search space, saving computation time and memory overhead.
Derived Cell and Weight Sharing. During architecture search, only one cell with trainable α is used to optimize the architecture parameters. The target network for inference is built by stacking cells whose architectures are derived from the most highly weighted operations. This can be done during search by deriving the other cell architectures from the first at each iteration <cit.>; the arrangement of the cells for search is given in the appendix. This derived-cell scheme saves computation and memory overhead. In addition, a weight-sharing strategy <cit.> is applied within a cell among identical operations originating from the same node i toward all nodes i<j<N. This is motivated by the observation that non-parametric operations acting on the representation of a node produce the same feature map irrespective of the output node, and the strategy is extended to parametric operations as well. Thus, Equation <ref> may be re-written as
z_j=∑_o ∈𝒪exp{α_o^i, j}/∑_o^'∈𝒪exp{α_o^'^i, j}· o(w_o^i,𝐳_i)
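A minimal sketch of this weight-sharing scheme is shown below: operation modules are indexed by the source node and operation name only, so every edge (i, j) leaving node i reuses the same weights. The candidate operations are placeholders, not the full search space:

import torch
import torch.nn as nn

def make_op(name: str, channels: int) -> nn.Module:
    # Placeholder candidate set; the real search space contains DARTS-style operations.
    if name == "conv_3x3":
        return nn.Conv2d(channels, channels, 3, padding=1)
    if name == "avg_pool_3x3":
        return nn.AvgPool2d(3, stride=1, padding=1)
    return nn.Identity()  # skip connection

class WeightSharedCell(nn.Module):
    def __init__(self, num_nodes: int, channels: int,
                 op_names=("conv_3x3", "avg_pool_3x3", "skip")):
        super().__init__()
        # Weights are keyed by (source node, operation), not by (edge, operation),
        # so o(w_o^i, z_i) is identical for every destination node j > i.
        self.shared_ops = nn.ModuleDict({
            f"{i}_{name}": make_op(name, channels)
            for i in range(num_nodes) for name in op_names
        })

    def apply_op(self, i: int, name: str, z_i: torch.Tensor) -> torch.Tensor:
        return self.shared_ops[f"{i}_{name}"](z_i)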
§ EXPERIMENTAL RESULTS
Our approach is evaluated on two search spaces, DARTS and NAS-Bench-201, with the vision datasets CIFAR-10, TinyImagenet, ImageNet-16-120, and Imagenet-1k. The details of the search spaces and implementation are given in the appendix.
§.§ Results on DARTS search space
§.§.§ Transferability- learning of coarse features during search.
We transfer the architecture searched on CIFAR-10 to train and evaluate the model weights on TinyImagenet in Table <ref> and ImageNet-1k in Table <ref>. This transferred model yields higher performance than manually designed architectures <cit.> for the target dataset. We observe that the performance of the transferred model is comparable to that of the architecture searched on the target dataset itself, which can be attributed to the architecture learning coarse features rather than dataset-specific objects during search.
§.§.§ Performance versus Device-Constraints trade-off
DCA-NAS discovers 2 to 4% better-performing architectures than manual designs with a memory constraint of 3.5 million parameters on CIFAR-10 and similar performance on TinyImagenet as in Table <ref>.
On Imagenet-1k, DCA-NAS yields models with performance similar to other NAS methods <cit.> under a constraint of 5.5 million parameters (chosen to yield models of similar size to other NAS methods), as shown in Table <ref>. We vary the input device resource constraint and plot the performance of the searched models against the number of parameters in Figure <ref>. As observed, DCA-NAS can yield models about 15x smaller than manual architectures like PyramidNet-272 <cit.> with at most a 1% reduction in accuracy on CIFAR-10. On TinyImagenet, DCA-NAS yields models similar in performance to, but 6x smaller than, the manual ResNet variant. In comparison to ProxylessNAS <cit.> on Imagenet-1k, DCA-NAS yields a model with 32% fewer parameters for similar accuracy. In comparison to DNAS methods <cit.> on each of the three datasets, we observe that the performance of DCA-NAS-searched models is largely retained as resources are further limited, after which model performance degrades. A DCA-NAS model of similar size to MobileNet-v2 <cit.>, a manually designed network, performs 1% better on Imagenet-1k while being discovered automatically.
§.§.§ Search time comparison
For evaluation on TinyImagenet in Table <ref>, the architecture searched on CIFAR-10 with DCA-NAS yields a model with the lowest search time, which indicates the search-time efficiency of the transferability property. Our method requires about 4x lower search cost than SGAS <cit.>, which performs best among the other transferred architectures, and 16x lower search time than the other resource-constrained approach <cit.> for similar performance, as seen in Table <ref>. Moreover, ProxylessNAS <cit.> takes about 4x more search time than DCA-NAS, whereas PC-DARTS takes about 2x more search time with no capability to constrain model size.
§.§ Results on NAS-Bench-201 search space
§.§.§ Performance and Latency comparisons on different devices
Our method reports the mean by averaging over five runs with different random seeds. Figure <ref> compares the performance of models searched with DCA-NAS and PC-DARTS under varying latency constraints. It shows that, unlike PC-DARTS, DCA-NAS can search for more efficient models with lower inference latency at similar test accuracy. Moreover, we observe that models with similar performance have lower latency when tested on the Pixel 3 than on the Raspberry Pi 4, owing to the faster RAM in the Pixel 3.
DCA-NAS takes the lowest search time among all the NAS methods due to the addition of search-time-efficient techniques while being at-par in terms of performance across all datasets.
§ ABLATION STUDY
Effectiveness of various algorithmic augmentations for faster search: We analyze the effectiveness of the algorithmic augmentations mentioned previously (Section <ref>) in reducing search cost. We sequentially add weight sharing, channel bottleneck, and derived cells to the baseline DARTS <cit.> method and measure search time and accuracy. Weight sharing, channel bottleneck, and derived cells were observed to significantly reduce the search memory overhead, enabling us to use larger batch sizes and reducing the overall search cost, as seen in Figure <ref>. Adding the resource constraint in the final DCA-NAS method increases the search cost negligibly while maintaining performance.
Stability of the approach:
We test stability by running the search algorithm independently five times with different initial seeds and the same constraints and hyperparameters. The architectures found during each run have similar performance when re-trained and evaluated as shown in Fig. <ref>. Smaller models have lower performance due to restrictions in model complexity compared to larger models.
§ CONCLUSION
We present DCA-NAS, a device constraints-aware neural architecture search framework that discovers architectures optimized for the memory and computational constraints of an edge device in a time-efficient manner. It does so by incorporating a constraint in terms of the number of parameters or floating-point operations (FLOPs) in the objective function with the help of a Lagrange multiplier. DCA-NAS in essence searches for a Pareto-optimal solution given the edge-device memory or FLOPs constraint. Moreover, it enables architecture search with a search cost 4 to 17 times lower than previous state-of-the-art hardware-aware NAS approaches. DCA-NAS can discover models about 10 to 15 times smaller than manually designed architectures with similar performance. In comparison to DARTS and its other NAS variants, DCA-NAS can discover models up to 3x smaller in size with similar performance. This hardware-aware approach can be generalized to future differentiable neural architecture search methods and possibly, with some adaptation, to training-free NAS methods.
§ ACKNOWLEDGEMENT
We thank the anonymous reviewers; Profs. Surendra Prasad and Brejesh Lall of IIT Delhi; and colleagues at Cadence India for their valuable feedback and inputs. This research is supported by funding from Cadence India; the first author is also supported by a fellowship from the Ministry of Education, India.
Appendix
========
§ DERIVING CELL ARCHITECTURES
The searched cells are stacked to form the network whose weights are trained and evaluated. The number of layers of this network during the evaluation phase is varied from 4 to 20. It can be seen that models searched with DARTS using only 2 cells perform as well as those from an 8-cell search when the target model has more than 10 layers. Hence, in our experiments, instead of training architecture parameters for all 8 cells, we train only 2 cells: one normal cell and one reduction cell. The architectures of the other 6 cells stacked to form the network during search are derived from either the normal or the reduction cell, as shown in Figure <ref>.
§ CALCULATION OF SEARCH-STAGE ARCHITECTURE SIZE
The size of the architecture in the search phase, k_s, differs from the architecture size in the evaluation phase due to the softmax weighting factor in Equation <ref> (demonstrated in Figure <ref>). To address this, we introduce a tighter bound on the search constraint, K_d^', which is less than the device resource constraint K_d. A lookup graph (LUG) needs to be built for each dataset by varying K_d^' within appropriate bounds and running the algorithm until convergence each time to obtain the corresponding device resource constraint K_d. The computation time of the LUG can be reduced by running these searches in parallel.
§ ALGORITHM
The practical implementation of our resource-constrained gradient-descent-based approach is illustrated in Algorithm <ref>.
§ IMPLEMENTATION DETAILS
The experiments with the smaller vision datasets-MNIST, FashionMNIST, CIFAR-10, Imagenet-16-120 and TinyImagenet were run on a single Tesla V100 GPU. Training and evaluation on Imagenet-1k was performed on a cluster containing eight V100 GPUs.
The super-net used for search with smaller vision datasets except Imagenet-1k consists of 8 cells, with 6 normal cells and 2 reduction cells, and an initial number of channels set to 16. Each cell has 6 nodes, with the first 2 nodes in cell k serving as input nodes. The super-net is trained for 50 epochs with a batchsize of 512, and optimized using SGD with a momentum of 0.9 and weight decay of 3e-4. The learning rate is initially set to 0.2 and gradually reduced to zero using a cosine scheduler. Architecture parameters α are optimized using Adam optimizer, with a learning rate of 6e-4, a momentum of (0.5, 0.999), and a weight decay of 1e-3. The search is run 5 times, and the architecture with the highest validation accuracy is chosen. For evaluation, the target-net has 20 cells, with 18 normal cells and 2 reduction cells, and an initial number of channels set to 36. The target-net is trained for 600 epochs with a batchsize of 96, optimized using SGD with a momentum of 0.9, weight decay of 3e-4, and gradient clipping of 5. The initial learning rate is set to 0.025 and gradually reduced to zero using a cosine scheduler. Additional settings include a cutout length of 16, dropout rate of 0.2, and use of an auxiliary head.
For Imagenet-1k, we reduce the input size from 224 × 224 to 28 × 28 using three convolution layers with a stride of 2. The super-net for search has 8 cells starting with 16 channels, and the target-net for evaluation has 14 cells starting with 48 channels. Both search and evaluation use a batch size of 1,024. In search, we train for 50 epochs with a learning rate of 0.5 (annealed down to zero using a cosine scheduler), and a learning rate of 6e-3 for architecture parameters. In evaluation, we train for 250 epochs using the SGD optimizer with a momentum of 0.9 and a weight decay of 3e-5, and adopt an auxiliary head and the label smoothing technique.
§ MODEL PERFORMANCE BY VARYING FLOPS CONSTRAINT ON CIFAR10, TINYIMAGENET AND IMAGENET-1K
Instead of model parameters, we also experiment with FLOPs as the constraint in our objective function. As shown in Figure <ref>, our method DCA-NAS retains performance till a certain FLOPs constraint, after which it degrades. In comparison to manual architectures, our NAS approach yields models which require much smaller FLOPs and hence would have lower latency.
|
http://arxiv.org/abs/2307.05950v1 | 20230712063251 | Exploring the Effectiveness of LLMs in Automated Logging Generation: An Empirical Study | [
"Yichen Li",
"Yintong Huo",
"Zhihan Jiang",
"Renyi Zhong",
"Pinjia He",
"Yuxin Su",
"Michael R. Lyu"
] | cs.SE | [
"cs.SE"
] |
Exploring the Effectiveness of LLMs in Automated Logging Generation: An Empirical Study
Yichen Li12,
Yintong Huo12,
Zhihan Jiang2,
Renyi Zhong2,
Pinjia He3^**,
Yuxin Su4, and
Michael R. Lyu2
2Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
Email: {ycli21, ythuo, zhjiang22, ryzhong22, lyu}@cse.cuhk.edu.hk
3The Chinese University of Hong Kong, Shenzhen, China.
Email: [email protected]
4Sun Yat-sen University, Guangzhou, China.
Email: [email protected]
August 12, 2023
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Automated logging statement generation techniques facilitate developers in writing appropriate logging statements that document software behaviors.
Current retrieval-based and learning-based logging methods fail to provide accurate logging statements in complex software.
Although existing large language models (LLMs) might be a good fit for the task due to their great success in natural language generation and programming language comprehension, their effectiveness and generalization capabilities have not been explored.
To this end, this paper performs the first extensive study on applying LLMs for logging statement generation.
We build LogBench, the first logging statement generation dataset that contains 3,870 methods and 6,849 logging statements.
On LogBench, we evaluate the effectiveness and generalization capabilities of eight state-of-the-art LLMs, which include general-purpose and code-specific models ranging from 60M to 175B in size.
Specifically, we evaluate LLM's logging effectiveness by studying 1) their ability to decide logging ingredients, 2) the impact of the internal characteristics of LLMs, and 3) the influence of external factors.
We further evaluate LLM's logging generalization capabilities using unseen data derived from code transformation techniques.
Our study demonstrates that existing LLMs fall short of practical requirements for generating proper logging statement texts. We also disclose the impact of internal characteristics (e.g., pre-trained code knowledge) and external factors (e.g., programming contexts) for LLMs in automated logging.
In addition, we observe that existing LLMs cannot generalize to logging unseen code, revealing their unsatisfactory generalization capabilities.
Based on our findings, we further discuss three implications that can enhance logging statement generation in the future, such as developing a unified metric for logging quality, incorporating shareable code knowledge into LLMs, and devising suitable prompts.
^*Co-first authors. ^**Corresponding author.
§ INTRODUCTION
Writing appropriate logging statements in code is critical for software development in recording program behavior.
Effective logging statements can facilitate performance analysis <cit.> and provide insights for failure identification <cit.>.
As shown in the example below, a logging statement typically consists of three ingredients: a logging level, logging variables, and logging texts <cit.>.
The logging level (e.g., warn) indicates the severity of a log event. Meanwhile, the logging variables (e.g., url) contain essential run-time information from system states. Additionally, logging texts (e.g., Failed to connect to host: <>) provides a description of the system activities.
log.warn("Failed to connect to host: {}", url)
To help software developers decide the contents of logging statements (i.e., what-to-log), logging statement generation tools have been built to automatically suggest logging statements given code snippets. Conventional logging suggestion studies <cit.> revealed that similar code has similar logging statements, so they apply a retrieval-based approach to suggest similar logging statements from the historical code base <cit.>.
However, such retrieval-based approaches are limited to the logging statements that already exist in the code base they retrieve from.
To combat this challenge, recent studies use neural-based methods to decide a single ingredient of logging statements (i.e., logging levels, logging variables, or logging text). For example, prior work <cit.> predicts the appropriate logging level by feeding surrounding code features into a neural network.
While these tools have shown improvements in suggesting important variables <cit.> or proper log levels <cit.>, they lack the ability to produce complete logging statements containing multiple ingredients simultaneously.
Some tools <cit.> require the availability of certain components to suggest others, which can be impractical for programmers who need to determine the complete logging statement before deciding the logging levels.
Generating complete logging statements has been considered challenging as it requests the model to analyze code structure, comprehend the developer's intention, and produce meaningful logging texts <cit.>. Moreover, existing neural-based tools are further restricted by training data with limited logging statements and may not generalize to unseen code.
Recent large pre-trained language models (LLMs) <cit.> have achieved impressive performance in the field of natural language processing (NLP).
Inspired by this, the latest model, LANCE, treats logging statement generation as a text-to-text generation problem and trains LLMs for it <cit.>.
LLMs have proven their efficacy in automatically generating functional code <cit.> or resolving bugs <cit.>, and have even been integrated as plugins for developers <cit.>.
However, their capacity for generating logging statements has not been examined.
To fill this gap, we pose the following question: Can LLMs properly produce logging statements for developers?
For one thing, LLMs with strong text generation abilities could promote descriptive logging statements; for another, LLMs have exhibited a powerful aptitude for code comprehension <cit.>, which paves the way for uncovering the semantics of logging variables.
Our work.
To answer the question, this empirical study thoroughly investigates how modern LLMs perform the task of generating logging statements involving two areas: effectiveness and generalizability.
Our first goal is to extensively evaluate the effectiveness of LLMs by studying (1) their ability to decide logging ingredients, (2) the impact of the internal characteristics of LLMs, and (3) the influence brought by external factors.
Our second goal is to assess the generalizability of LLMs. Since LLMs are trained on a significant portion of the publicly available code, there is a potential data leakage issue in which logging statements for evaluation purposes may be included in the original training data <cit.>. It remains unclear whether LLMs are inferring the logging statement or merely memorizing the training data.
Thus, we further evaluate the generalization capabilities of LLMs using unseen code.
In particular, we evaluate the performance of eight state-of-the-art LLMs on LogBench, a new dataset collected by us, consisting of 2,430 Java files, 3,870 methods, and 6,849 logging statements.
Additionally, we employ a lightweight code transformation technique to generate a transformed variant of LogBench that contains previously unseen code, with which we evaluate the generalization capabilities of LLMs.
Based on our large-scale empirical study on both datasets, we summarize our key findings as follows.
Key findings.
(1) Existing LLMs' performance in generating logging texts falls short of practical logging requirements.
(2) The size of LLMs does not decisively affect their logging performance, but code knowledge acquisition shows a remarkable advantage.
(3) Since comments provide code intentions from developers, ignoring them leads to a decreased effectiveness for LLMs.
(4) Compared with comments, LLMs benefit more from taking additional methods in the same file as they offer specific programming contexts.
(5) Unseen codes significantly degrade all LLMs' performance, particularly in the variable prediction and logging text generation.
Based on the above findings, we also discuss several implications and provide actionable advice to researchers that may help LLMs become more powerful in logging.
Key implications.
(1) For logging practice studies, a new and unified metric considering multiple logging ingredients is in demand to assess LLMs' logging performance.
(2) Based on the identified internal characteristics and external factors, incorporating relevant code knowledge and appropriate programming contexts is suggested to enhance LLMs' logging effectiveness.
(3) To advance the generalization capabilities of LLMs, developing prompt-based learning techniques offers great potential of LLMs in automated logging.
Contributions. The contribution of this study is threefold:
* To our best knowledge, we conduct the first large-scale empirical study to explore LLMs' ability on logging statement generation.
* We build the first logging statement generation benchmark dataset LogBench with 3,870 methods and 6,849 logging statements. Using the dataset, we extensively examine the effectiveness and generalization capabilities of LLMs.
* We condense the results into five findings and three implications that we believe can shed light on future logging research. All datasets and analysis details are released in
<https://github.com/LogStudySE/LogStudy>.
§ PRELIMINARY
§.§ Task Formulation
This study focuses on the logging statement generation task (i.e., what-to-log), which is viewed as a statement completion problem: Given lines of code (typically a method) and a specific logging point between two statements, the generator is then required to predict the logging statement at such point. The prediction is expected to be similar to the one removed from the original file.
Figure <ref> (in dashed line) illustrates an example of this task, where an effective logging statement generator should suggest log.debug("Reload received for path:" + path), which is highlighted in green[In this paper, the logging statement that should be predicted by the generator is always highlighted in green.].
For code lines with n logging statements, we create n-1 inputs by removing each of them in turn, similar to previous work <cit.>.
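As an illustration, the per-statement evaluation inputs can be constructed as in the sketch below; the regular expression is a simplification, and the actual benchmark handles multi-line logging statements and records exact logging-point positions:

import re

LOG_CALL = re.compile(r"^\s*log(?:ger)?\.(?:trace|debug|info|warn|error)\(.*\);\s*$",
                      re.IGNORECASE)

def build_samples(method_src):
    # For a method, mask each logging statement in turn; the masked position is the
    # logging point the model must fill in, and the removed line is the ground truth.
    lines = method_src.splitlines()
    samples = []
    for idx, line in enumerate(lines):
        if LOG_CALL.match(line):
            masked = lines[:idx] + ["/* <logging point> */"] + lines[idx + 1:]
            samples.append({"input": "\n".join(masked), "target": line.strip()})
    return samples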
§.§ Study Subjects
We evaluate and discuss the effectiveness of state-of-the-art LLMs varying from 60M to 175B parameters.
We summarize their access categories (Access), proposed tasks (Task), and the number of parameters (# Params) in Table <ref>.
Since we already included official models <cit.> from the GPT series, other models that have been tuned on GPT <cit.> are not included in our study (e.g., GPT-Neo <cit.> and GPT-J <cit.>).
§.§.§ Text-davinci-003 (denoted as Davinci)
It is derived from InstructGPT <cit.>, which is an “instruct” model that is meant to generate texts with clear instructions. It has been trained on diverse internet texts and has shown superiority in a wide range of natural language generation tasks.
§.§.§ GPT3.5-turbo (denoted as ChatGPT)
It is an upgraded version of GPT-3.5 models <cit.>, particularly enabling conversational capabilities by employing an optimization technique known as reinforcement learning from human feedback <cit.>. It is the primary backbone of ChatGPT <cit.>, and is one of the largest language models to date.
§.§.§ LANCE
LANCE <cit.> is the first model for generating and injecting complete logging statements in code. It accepts a method that needs one logging statement and outputs a meaningful log message with a proper logging level in the right position in the code. It is built on the Text-To-Text-Transfer-Transformer model, which has been trained with the goal of injecting proper logging statements.
§.§.§ InCoder
InCoder <cit.> is a unified generative model trained on a large corpus of code where regions of code have been randomly masked. It thus can infill arbitrary code with bidirectional code context and further be demonstrated to be capable of challenging code-related tasks, such as type inference, variable name prediction, and code generation.
§.§.§ CodeGeeX
CodeGeeX <cit.> is an open-source, large-scale multilingual code generation model, which has been integrated into several IDEs (e.g., IntelliJ IDEA) to suggest the following code lines from natural language descriptions.
It is built on Transformer architectures with an autoregressive decoder and pre-trained on public GitHub repositories.
§.§.§ TabNine
TabNine <cit.> is an AI code assistant that can predict and suggest the following lines of code for developers. It is integrated into multiple IDEs (e.g., Visual Studio) to automatically complete code lines, generate entire functions, and produce code snippets from natural languages (e.g., comments).
§.§.§ Copilot
Copilot <cit.> is a widely studied AI-powered software development tool powered by the Codex <cit.> model that helps developers write code <cit.>. After training on a large dataset of public code from GitHub, it can use people's natural language descriptions to "write" the following code chunks.
Developed by Amazon, CodeWhisperer <cit.> serves as a coding companion for software developers after training the LLM on billions of lines of code.
It can generate code snippets or full functions in real time based on comments written by developers.
§ STUDY DESIGN
§.§ Overview
Fig. <ref> exhibits the overview framework of this study involving four research questions from two perspectives: effectiveness and generalizability.
To start, we develop a large-scale benchmarking dataset with 6,849 logging statements in 3,870 methods by crawling high-quality GitHub repositories.
This study first evaluates the effectiveness of state-of-the-art LLMs in terms of multiple logging ingredients, as each of them has been widely studied <cit.> in logging practice (RQ1).
We also identify the internal characteristics of LLMs and discuss their impact on logging statement generation, which guide the development and selection of appropriate LLMs for logging generation (RQ2).
Afterwards, we investigate external influencing factors and illustrate how they boost LLM's performance, which helps researchers and developers to optimize their usage of a specific model for logging (RQ3).
Last but not least, we explore the generalizability of LLMs to assess their behavior in developing new software in the real world.
To this end, we evaluate models on an additional unseen-code dataset, which contains code transformed from the original data while preserving its readability and semantics (RQ4).
§.§ Benchmark Datasets
This section describes how we develop the benchmark dataset LogBench and its transformed, unseen-code counterpart. Although we choose Java as the target language for this study due to its wide presence in industry and research <cit.>, the experiments and findings can be extended to other programming languages.
§.§.§ Creation of Dataset
We build the benchmark dataset that consists of high-quality and well-maintained Java files with logging statements
by mining open-source repositories from GitHub. As the largest host of source code in the world, GitHub contains a great amount of repositories that reflect typical software development processes.
In particular, we begin by downloading the Java repositories that meet the following requirements[All repositories are archived on April 2023]:
* Gaining more than 20 stars, which indicates a higher level of attention and interest in the project.
* Receiving more than 100 commits, which suggests the project is more actively maintained and less likely to be a disposable project.
* Engaging with at least 5 contributors, which demonstrates the quality of its logging statements by simulating the collaborative essence of the software development.
We then extract the files that contain logging statements in two steps. We first retain the projects whose POM files include popular logging utility dependencies (e.g., Log4j, SLF4J), resulting in 3,089 repositories. We then extract the Java files containing at least one logging statement by matching them with regular expressions, as in previous work <cit.>, because logging statements are always written in a specified syntax (e.g., log.info(), log.error()).
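A simplified sketch of this filtering step is shown below; the pattern is illustrative and does not reproduce the study's exact regular expressions:

import re
from pathlib import Path

LOGGING_CALL = re.compile(r"\b(?:log|logger|LOG|LOGGER)\.(?:trace|debug|info|warn|error|fatal)\s*\(")

def java_files_with_logging(repo_root):
    # Yield Java files that contain at least one call through a common logging facade.
    for path in Path(repo_root).rglob("*.java"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if LOGGING_CALL.search(text):
            yield path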
As coding assistant tools (such as Copilot and TabNine) offered by companies have restricted usage patterns, we are unable to evaluate them using automated scripts <cit.>.
Therefore, we manually evaluated a random sample of the extracted files across various repositories.
This resulted in a dataset of 2,420 files containing 3,870 methods and 6,849 logging statements, which we refer to as LogBench.
§.§.§ Creation of the Transformed Dataset to Avoid Data Leakage
LLMs deliver great performance in multiple tasks; however, evaluating their performance solely on publicly available data can be problematic. Since LLMs are trained on datasets that are obtained through large-scale web scraping <cit.>, these models may have already seen the benchmark data during their training, raising concerns about assessing their generalization abilities <cit.>.
This issue, commonly known as data leakage, requires particular attention since most code models (e.g., Incoder <cit.>, Copilot <cit.>) have trained on GitHub code.
To fairly evaluate the generalization ability of LLMs, we further develop an unseen-code dataset that consists of code transformed from LogBench.
Prior works have developed semantics-preserving code transformation techniques that do not change the functionality of the original code, for the purpose of evaluating the robustness of code models <cit.>.
However, these approaches randomly replace informative tokens with meaningless ones, degrading the readability of code.
For example, after transforming the informative variable name (e.g., totalMemory) to the non-informative name (e.g., abc), even a programmer can hardly understand the variables and log properly.
Such transformations make the transformed code less likely to appear in daily programming and not suitable for logging practice studies.
To avoid the issue, we devise an effective code transformation technique that generates semantics-preserving and readability-preserving variations of the original code.
In particular, our code transformation technique employs a series of carefully engineered, lightweight code transformations at the Abstract Syntax Tree (AST) level, ensuring that the transformed code remains functionally equivalent to the original code. Following previous studies <cit.>, we propose seven different types of code transformers and their descriptions shown in Table <ref>, including but not limited to the variable transformer and statement transformer. Besides, the readability-degrading transformations, such as injecting dead code <cit.> and modifying identifier name, are eliminated.
The process of transformation begins with converting the source code into an AST representation using JavaParser <cit.>. To detect potential transformation points (i.e., nodes and subtrees) for each transformer,
a series of predefined checkers traverse the AST in a top-down manner.
Once the transformation points are identified, each checker will independently call its corresponding transformer to perform a one-time transformation. We denote one-time transformation as T: x → x', where x and x' represent the source AST and post-transformed AST, respectively.
Each transformer functions independently, allowing multiple transformations to be applied to the same code snippet without conflicts.
These single transformations are chained together to form the overall transformation: 𝕋 = T_1 ∘ T_2 ∘ ... ∘ T_n.
Once all the identified points have been transformed or the number of transformations reaches a predetermined threshold, the AST is converted back into source code to complete the transformation process.
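The chaining of one-time transformations can be sketched in a language-agnostic way as below; the actual implementation operates on JavaParser ASTs in Java, and the checker/transformer pairs here are abstract placeholders:

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class OneTimeTransform:
    find_points: Callable  # checker: AST -> list of transformation points
    apply: Callable        # transformer: (AST, point) -> transformed AST

def compose(ast, transforms: List[OneTimeTransform], max_changes: int = 20):
    # Overall transformation T = T_1 o T_2 o ... o T_n, capped by a threshold.
    changes = 0
    for t in transforms:
        for point in t.find_points(ast):
            if changes >= max_changes:
                return ast
            ast = t.apply(ast, point)  # each step preserves semantics and readability
            changes += 1
    return ast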
Fig. <ref> shows a case concerning how a constant transformer works for code transformation.
After converting the code into an AST representation,
the constant checker detects the transformation points by going through the original AST.
Then, it calls the constant transformer to transform the constant expression 1024 * 1024 by adding a new variable const_1. The difference between the original AST and the transformed AST is highlighted in the red area.
§.§ Implementations
§.§.§ Evaluation
Based on the interaction ways offered by different LLMs (Table <ref>), we implement them as follows.
(1) Released models (LANCE, InCoder): we ran them on a 32-Core workstation with an Intel Xeon Platinum 8280 CPU, 256 GB RAM, and 4x NVIDIA GeForce RTX 3090 GPUs in Ubuntu 20.04.4 LTS.
(2) APIs (ChatGPT, Davinci): we called their official APIs to generate the logging statement by providing the following prompt (a minimal sketch of this call is shown after this list):
Please complete the incomplete logging statement at the logging point: [Code with corresponding logging point].
(3) Plugins (Copilot, CodeGeex, TabNine, CodeWhisperer): we purchased accounts for each author to obtain the logging statement manually at the logging point that starts with the original logging API (e.g., log.). Such a process forces these plugins to generate logging statements instead of other functional codes.
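A minimal sketch of the API-based evaluation in item (2) is given below, assuming the chat-completions interface of the openai Python package available at the time of the study; exact interfaces may have changed since:

import openai  # assumes the 2023-era openai package interface

PROMPT = "Please complete the incomplete logging statement at the logging point: {code}"

def query_chat_model(code_with_logging_point, model="gpt-3.5-turbo"):
    response = openai.ChatCompletion.create(
        model=model,
        temperature=0.0,  # illustrative choice; the paper does not state decoding settings
        messages=[{"role": "user",
                   "content": PROMPT.format(code=code_with_logging_point)}],
    )
    return response["choices"][0]["message"]["content"]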
§.§.§ Code Transformation
Our code transformation technique was implemented using 4,074 lines of Java code and the JavaParser library <cit.>, a widely-used parser for analyzing, transforming, and generating Java code. All transformations were performed on the same workstation as in the evaluation.
§ STUDY RESULTS
§.§ Metrics
In line with prior work <cit.>, we evaluate the logging statement generation performance concerning three ingredients: logging levels, logging variables, and logging texts.
Although different ingredients offer diverse information for system maintainers, they serve as indispensable resources for reasoning the software behavior.
(1) Logging levels.
Following previous studies <cit.>, we use the level accuracy (L-ACC) and Average Ordinal Distance Score (AOD) for evaluating logging level predictions.
L-ACC measures the percentage of correctly predicted log levels out of all suggested results.
AOD <cit.> computes the distance between logging levels.
Different logging levels are not independent of each other; for example, error is closer to warn than to info. AOD therefore takes the average distance between the actual logging level a_i and the suggested logging level s_i, denoted as Dis(a_i,s_i). AOD is formulated as AOD=∑^N_i=1 (1-Dis(a_i,s_i)/MaxDis(a_i))/N, where N is the number of logging statements and MaxDis(a_i) refers to the maximum possible distance of the actual log level. For example, the maximum distance of error is 4 (from trace) if there are 5 levels (i.e., trace, debug, info, warn, error).
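The level metrics can be computed as in the following sketch, which assumes the five-level scale given above:

LEVELS = ["trace", "debug", "info", "warn", "error"]

def l_acc(actual, suggested):
    # Fraction of exactly matched logging levels.
    return sum(a == s for a, s in zip(actual, suggested)) / len(actual)

def aod(actual, suggested):
    # Average Ordinal Distance score: mean of 1 - Dis(a_i, s_i) / MaxDis(a_i).
    score = 0.0
    for a, s in zip(actual, suggested):
        dis = abs(LEVELS.index(a) - LEVELS.index(s))
        max_dis = max(LEVELS.index(a), len(LEVELS) - 1 - LEVELS.index(a))
        score += 1.0 - dis / max_dis
    return score / len(actual)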
(2) Logging variables.
Evaluating predictions from LLMs differs from evaluating neural classification networks, as the predicted probability of each variable is not known. We therefore employ Precision, Recall, and F1 to evaluate the predicted logging variables.
For each predicted logging statement, we use S_pd to denote the variables in the LLM prediction and S_gt to denote the variables in the actual logging statement. We report the proportion of predicted variables that are correct (precision=|S_pd∩ S_gt|/|S_pd|), the proportion of actual variables that are correctly predicted (recall=|S_pd∩ S_gt|/|S_gt|), and their harmonic mean (F1=2*Precision*Recall/(Precision+Recall)).
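These variable metrics reduce to set operations per logging statement, as sketched below:

def variable_prf(predicted_vars, actual_vars):
    # Set-based precision/recall/F1 over the variables of one logging statement.
    predicted, actual = set(predicted_vars), set(actual_vars)
    if not predicted or not actual:
        return 0.0, 0.0, 0.0
    hit = len(predicted & actual)
    precision = hit / len(predicted)
    recall = hit / len(actual)
    f1 = 0.0 if hit == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1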
(3) Logging texts.
Building on previous research <cit.>, we assess the quality of the produced logging texts using two well-established machine translation evaluation metrics, namely BLEU <cit.> and ROUGE <cit.>.
These n-gram metrics compute the similarity between generated log messages and the actual logging text crafted by developers, yielding a percentage score ranging from 0 to 1. A higher score indicates greater similarity between the generated log messages and the actual logging text, thus signifying better quality. In particular, we use BLEU-K and ROUGE-K to compare the overlap concerning K-grams between the generated and the actual logs.
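A sketch of computing these scores with off-the-shelf packages is given below, assuming the nltk and rouge-score libraries; the study's exact tokenization and smoothing settings are not reproduced here:

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

def logging_text_scores(reference, prediction):
    # BLEU-4 and ROUGE-L between the actual and the generated logging text.
    ref_tokens, pred_tokens = reference.split(), prediction.split()
    bleu4 = sentence_bleu([ref_tokens], pred_tokens,
                          weights=(0.25, 0.25, 0.25, 0.25),
                          smoothing_function=SmoothingFunction().method1)
    rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(reference, prediction)["rougeL"].fmeasure
    return bleu4, rouge_l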
§.§ RQ1: How do different LLMs perform in deciding ingredients of logging statements generation?
To answer RQ1, we evaluate the eight top-performing LLMs listed in Table <ref> on the original benchmark dataset LogBench. The evaluation results are illustrated in Table <ref>, where we underline the best performance for each metric.
Intra-ingredient. Regarding the logging levels, we observe that Copilot achieves the best L-ACC performance, i.e., 0.743, indicating that Copilot is effective in predicting the correct logging levels in 70% to 80% of cases.
While other baselines do not perform as well as Copilot, they still deliver acceptable performance, suggesting correct logging levels in at least 60% of cases.
However, with respect to variable recommendation, the models' performance differs more widely. While Copilot correctly recommends 70% of the variables, LANCE correctly infers only 42% of the variables that should be logged.
The recall rate for variable prediction is consistently lower than the precision rate for every model, revealing the difficulty of suggesting the complete set of variables.
We consider that predicting variables is more challenging than logging levels, as the variables can be more diverse, customized, and contain different meanings. To combat the challenge, logging variables should be inferred with a deeper comprehension of code structure, such as control flow information.
Concerning logging text generation, Copilot and CodeWhisperer achieve similar performance across metrics and outperform the other baselines by a wide margin.
On average, the studied models generate logging statements with the similarity of 19.4% and 34.1% for BLEU-4 and ROUGE-L scores, respectively. The result indicates that recommending appropriate logging statements still remains a great challenge.
Inter-ingredient.
From the inter-ingredient perspective, we observe that the LLM performance trend is similar across multiple ingredients, i.e., models exhibiting good performance in logging level prediction also have prominent capabilities in generating logging texts. For instance, Copilot and CodeWhisperer outperform other baselines in all reported metrics.
This is probably because suggesting all three ingredients requires similar code comprehension capabilities, including understanding the data flows, paying attention to specific code structures, and inferring the code functionalities.
Nevertheless, Incoder stands out as an exception, showing relatively low performance in predicting logging levels (i.e., the worst baseline) while doing better on logging texts (the fourth-best performer). After investigation, we observe that Incoder predicts 41% of cases as the debug level, most of which are actually info-level statements.
[boxsep=1pt,left=2pt,right=2pt,top=3pt,bottom=2pt,width=,colback=white!90!black,boxrule=0pt, colbacktitle=white!,toptitle=2pt,bottomtitle=1pt,opacitybacktitle=0]
Finding 1. While existing models correctly predict levels for 74.3% logging statements, they merely produce logging texts that are 24.9% similar to the actual ones measured by BLEU-4, which indicates their logging statement generation performance falls short of practical requirements.
§.§ RQ2: What internal characteristics of LLMs will affect logging generation?
We examine the summarized baseline characteristics presented in Table <ref> and experimental results shown in Table <ref> to answer the question. Analyzing the internal characteristics of models guides the training paradigm and selection of proper LLMs for logging statement generation.
First, the pre-training domain of language models has a great impact on their effectiveness on producing logging statements.
Models that are pre-trained on a code corpus, such as Copilot, perform significantly better than models that are pre-trained for general purposes (e.g., ChatGPT).
For instance, Copilot outperforms ChatGPT by 64% in terms of the BLEU-4 score.
The difference in performance can be attributed to that programming languages have different syntax and semantics compared to common languages <cit.>.
As illustrated in Fig <ref>, the general-purpose LLMs mispredict the logging statement by concentrating on the variable shown in the method declaration, while ignoring the registration process before the logging point.
However, most code models capture such a process as they recognize drivers are usually key variables describing device situations and logging statements usually describe the most recent activities.
Second, pre-training code models in a large corpus allow them to learn shareable code knowledge for enhancing their logging performance.
InCoder, for example, is trained with generative token prediction tasks on a comprehensive code corpus, which enables it to capture informative variables and common programming behaviors that boost its logging performance. Such training may help InCoder learn about JDBC drivers (shown in Fig. <ref>), which are widely used in Java applications, even though this knowledge is not contained in the given method.
On the other hand, LANCE, which is trained only on logging-related code with limited code diversity, struggles to identify critical variables (e.g., ) for logging.
Third, contrary to common thought,
the size of the model does not seem to have a decisive impact on LLM's logging performance.
Recent studies also suggest that smaller models can generate comparable results to larger ones when properly used for code analysis <cit.>.
Among our competing models,
Copilot and CodeGeex have similar parameter sizes, but Copilot wins first place in 7 out of 11 metrics, and CodeGeex ranks fifth in most metrics.
In Fig. <ref>, CodeGeex successfully predicts the logging activities to record registered drivers, but it fails to find the variable , which usually contains drivers and serves for driver management.
Moreover, although CodeGeex is twice the size of InCoder, it performs worse than InCoder in 9 out of 11 metrics.
The trending ChatGPT, despite having ten times the model size of most code-related models, does not exhibit exceptional logging capabilities.
[boxsep=1pt,left=2pt,right=2pt,top=3pt,bottom=2pt,width=,colback=white!90!black,boxrule=0pt, colbacktitle=white!,toptitle=2pt,bottomtitle=1pt,opacitybacktitle=0]
Finding 2. While the model size of LLMs does not have a decisive impact, there are remarkable advantages of incorporating code knowledge for automated logging.
§.§ RQ3: How do external factors influence the effectiveness in generating logging statements?
While RQ2 discusses the internal characteristics of LLMs, some external factors are also likely to influence their effectiveness in logging generation. In particular, we focus on how comments and programming contexts will impact the performance.
With the investigation, not only can developers make the best use of these tools, but also can guide researchers in building more effective automated logging models.
With comment v.s. without comment.
Inspired by the importance of human-written comments for intelligent code analysis <cit.>, we also explore the utility of comments for our logging study.
To this end, we feed the original code (with comment) and comment-free code into LLMs separately and display the results of the comment-free evaluation and its corresponding performance drop rate (Δ) in Table <ref> (denoted as wo/ Cmt) associated with AOD, F1, BLEU, and ROUGE.
The results show that all LLMs consistently experience performance drops in the comment-free setting, with average drop rates of 0.5%, 2.4%, 2.3%, and 2.4% in AOD, F1, BLEU, and ROUGE, respectively.
The decrement is mainly because comments are written by developers to describe the functionalities of the following code snippets, which shares a similar nature with logging practices that record system activities.
Fig. <ref> represents a case of CodeWhisperer that can be facilitated by reading the comment of “parse sequence Id”.
Without the comment, CodeWhisperer concentrates only on the invalid sequence number and fails to include the parsing description; such an unclear logging statement may mislead maintainers when diagnosing parsing failures. Besides, the comment highlights that the exception is a foreseeable and potentially common issue, which helps the LLM correctly select the log level, changing it from warn to debug.
[boxsep=1pt,left=2pt,right=2pt,top=3pt,bottom=2pt,width=,colback=white!90!black,boxrule=0pt, colbacktitle=white!,toptitle=2pt,bottomtitle=1pt,opacitybacktitle=0]
Finding 3.
Ignoring comments impedes LLMs in generating logging statements by depriving them of code descriptions, resulting in a 2.4% decrease in logging text quality on average.
Programming contexts: method v.s. file.
Current logging practice tools restrict their work to code snippets or methods <cit.> and ignore the information in other related methods.
However, methods depicting similar functionalities are also likely to contain similar logging statements <cit.>, which can be used as references when resolving new logging statements.
This method-level restriction is mainly due to the limited input size of previous neural-based models.
As LLMs can process thousands of input tokens without being obstructed by such limitations, we assess the benefits of larger programming contexts, i.e., file-level input.
In this regard, we feed an entire Java file for generating logging statements rather than a specific method.
The result in Table <ref> presents the effectiveness of file-level input (w/ File) and the corresponding increment ratio (Δ).
The results suggest that file-level programming contexts consistently enhance performance on all metrics; for instance, TabNine improves by 3.6%, 9.9%, and 55.0% in AOD, F1, and BLEU score, respectively.
On average, all models generate 46.3% more similar logging statements to actual ones (reflected by BLEU-4) than using a single method as input.
We take Fig. <ref> as an example from CodeWhisperer to illustrate how LLMs can learn from an additional method, where the green line represents the required logging statements. The model learned logging patterns from Method1, which includes the broker plugin name and its status (i.e., start). When confronting the target method , CodeWhisperer may refer to Method1 and write similar logging statements by changing the status from to .
Additionally, by analyzing the file-level context, LLMs can capture relevant variables, learn the relationships of multiple methods, and identify logging styles that are consistent across the file.
As a result, LLMs are able to generate more accurate and context-aware logs.
Surprisingly, we observe that broadening the programming context has a much greater impact than including comments, despite the fact that some models (e.g., Copilot) are trained to generate code from natural language.
This suggests that syntactic information in code may be more important than semantic information inside comments.
[boxsep=1pt,left=2pt,right=2pt,top=3pt,bottom=2pt,width=,colback=white!90!black,boxrule=0pt, colbacktitle=white!,toptitle=2pt,bottomtitle=1pt,opacitybacktitle=0]
Finding 4. Compared with comments, incorporating file-level programming contexts has a greater improvement in logging practice by offering additional functionality-similar methods and logging styles.
§.§ RQ4: How do different LLMs generate logging statements for unseen code?
In this RQ, we assess the generalization capabilities of language models by evaluating them on the transformed, unseen-code dataset.
As stated in Section <ref>, predicting accurate logging statements does not necessarily imply that a model can be generalized to unseen cases well.
As logging statements are highly personalized in natural language, it is crucial to evaluate LLMs' ability to handle these cases in daily development.
We present the results in Table <ref>, where we underline the best performance for each metric and the lowest performance drop rate (Δ) compared to the corresponding results on the original LogBench dataset.
Our experiments show that all models experience different degrees of performance drop when generating logging statements on unseen code.
LANCE has the smallest decrement of 6.9%, averaged over the four presented metrics, while CodeGeex has the largest drop of 18.2%.
Copilot exhibits the greatest generalization capabilities by outperforming other baselines in three out of four metrics on unseen code.
Additionally, we observe that predicting logging levels has a minimal decline in the performance of 1.1%, whereas
predicting logging variables and logging text experience significant drops in performance of 13.5% and 15%, respectively.
Such experiments indicate that resolving logging variables and logging texts are more challenging than predicting logging levels, which should gain full attention in future research.
Fig. <ref> illustrates a transformation case where we highlight code differences in red and demonstrate how LLMs (CodeWhisperer, ChatGPT, Incoder) perform accordingly.
Regarding the original code, all models correctly predict that inMB should be used to record memory.
However, after transforming the constant expression into a new variable and then assigning it to inMB, all models fail to identify inMB for logging.
CodeWhisperer and Incoder mistakenly predict totalMemory and heapMemoryUsage as the memory size indicator, while ChatGPT does not suggest any variables.
Even though the transformation retains code semantics, existing models exhibit a significant performance drop, indicating their limited generalization abilities.
[boxsep=1pt,left=2pt,right=2pt,top=3pt,bottom=2pt,width=,colback=white!90!black,boxrule=0pt, colbacktitle=white!,toptitle=2pt,bottomtitle=1pt,opacitybacktitle=0]
Finding 5. LLMs' performance on variable prediction and logging text generation drops significantly for unseen code by 13.5% and 15.0% on average, respectively, highlighting the need to improve the generalization capabilities of the models.
§ DISCUSSION AND FUTURE WORK
§.§ Unified metrics for logging statement generation
In Section <ref>, we extensively evaluate the performance of LLMs in generating logging statements using eleven metrics over three ingredients.
However, comparing the quality of the generated logging statements across multiple metrics presents a challenge, as models may excel in one metric while performing poorly in others.
Previous studies have employed metrics like BLEU and ROUGE, which are commonly used for natural language translation.
However, these metrics are not suitable for logging statements evaluation because:
(1) BLEU may overrate the precision of cases whose matched n-grams are short, leading to a biased average <cit.>.
(2) both BLEU and ROUGE are built on n-grams and do not consider the semantics of the texts, i.e., they aggressively penalize lexical differences, even if the predicted logging statements are synonymous with the actual ones <cit.>.
Therefore, we believe an alternative metric that considers the lexical, syntactical, and semantic representations of logging statements should be developed in the future.
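As a small illustration (not part of our evaluation pipeline), the snippet below scores a prediction that any developer would accept as equivalent to the ground truth; the n-gram-based BLEU score still penalizes it for the lexical difference alone.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Tokenized ground truth and a semantically equivalent prediction.
actual  = 'log . warn ( "failed to start broker plugin {}" , name )'.split()
synonym = 'log . warn ( "could not start broker plugin {}" , name )'.split()

smooth = SmoothingFunction().method1
score = sentence_bleu([actual], synonym, smoothing_function=smooth)
print(round(score, 3))  # noticeably below 1.0 although the statements are interchangeable

A metric that compares the two statements at the semantic level would instead treat them as near-identical.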
[boxsep=1pt,left=2pt,right=2pt,top=3pt,bottom=2pt,width=,colback=white,boxrule=0.5pt, colbacktitle=white!,toptitle=2pt,bottomtitle=1pt,opacitybacktitle=0]
Implication 1. The existing logging statements evaluation with excessive metrics emphasizes the need for a new unified metric.
§.§ Domain knowledge incorporation for LLMs
In Section <ref>, we point out that code knowledge (e.g., code semantics, shareable code knowledge) acquired during the pre-training phase significantly promotes LLMs' logging performance by bridging the gap between natural language and programming language.
In Section <ref>, we further discover that external resources, particularly additional programming contexts, benefit LLMs in offering logging statements in similar code for reference.
Based on these findings, we consider that both the internally gained code knowledge and external resources can be combined to facilitate automated logging in the future.
One potential approach to integrating effective code knowledge is simultaneously fine-tuning LLMs in multiple code tasks, where the shareable code behaviors can be acquired during multi-task learning.
Moreover, existing method-level logging practice studies are worth extending to the file level, as additional methods provide extra code information with similar logging styles.
Broadening the programming context to the file level may further allow LLMs to uncover variable definitions across multiple files, which helps generate logging statements that are consistent with existing ones.
[boxsep=1pt,left=2pt,right=2pt,top=3pt,bottom=2pt,width=,colback=white,boxrule=0.5pt, colbacktitle=white!,toptitle=2pt,bottomtitle=1pt,opacitybacktitle=0]
Implication 2. Equipping appropriate code knowledge or external programming contexts into LLMs via fine-tuning enables promising solutions towards automated logging.
§.§ Enhancing generalization capabilities of LLMs
In Section <ref>, we observe that the performance of current LLMs drops considerably on unseen code, reflecting their limited generalization capabilities.
This result can be attributed to the excessive parameter capacity of LLMs, which allows them to easily memorize large datasets <cit.>.
This issue will become more severe when code in actual development is customized and has diverse logging styles.
To promote the generalization capabilities of ever-growing LLMs, one effective idea is to apply a prompt-based method with a few chain-of-thought demonstrations <cit.> to avoid huge computational costs. The chain-of-thought strategy allows models to decompose complicated multi-step problems into several intermediate reasoning steps. For example, we can ask models to focus on special code structures (e.g., try-catch), then advise them to elicit key variables and system activities to log. While the chain-of-thought strategy has shown success in natural language reasoning tasks <cit.>,
future work is suggested to explore such prompt-based approaches to enhance generalization capabilities.
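As an illustration of what such a prompt could look like, the template below encodes the reasoning steps mentioned above; its wording is our own sketch and not a prompt used by any of the evaluated models.

# Hypothetical chain-of-thought prompt skeleton for logging generation.
COT_LOGGING_PROMPT = """You are completing the missing logging statement in a Java method.
Reason step by step before answering:
1. Identify special code structures (e.g., try-catch blocks, branch conditions).
2. List the key variables and system activities worth recording at this point.
3. Choose an appropriate log level (trace/debug/info/warn/error).
4. Write one logging statement consistent with the surrounding logging style.

Method:
{method_body}

Step-by-step reasoning, then the final logging statement:"""

def make_cot_prompt(method_body: str) -> str:
    # Plain string replacement avoids clashes with the braces in Java code.
    return COT_LOGGING_PROMPT.replace("{method_body}", method_body)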
[boxsep=1pt,left=2pt,right=2pt,top=3pt,bottom=2pt,width=,colback=white,boxrule=0.5pt, colbacktitle=white!,toptitle=2pt,bottomtitle=1pt,opacitybacktitle=0]
Implication 3. Apart from adding effective code knowledge, devising prompt-based strategies with zero-shot or few-shot learning can facilitate LLMs in logging.
§ THREATS TO VALIDITY
Internal Threats.
(1) A primary concern of this study is the potential bias introduced by the limited size of the dataset, which consists of 3,840 methods.
This limitation arises because the plugin-based code completion tools impose usage restrictions to prevent bots; therefore, manual effort is required.
To address the threat, we acquired and sampled and datasets from well-maintained open projects, so that they can be representative for evaluation.
Note that existing Copilot testing studies have used datasets that are not larger than ours <cit.>.
(2) Another concern involves the context length limitations of certain language models <cit.> (e.g., 4,097 tokens for Davinci), which may affect the file-level experiment.
To address this concern, we analyze the collected data and reveal that
98.6% of the Java files fall within the 4096-token limit, and 94.3% of them are within the 2048-token range. Such analysis suggests that the majority of files in our dataset remain unaffected by the context length restrictions. Consequently, we argue that the context length limitations of these models do not significantly compromise the validity of our experiment.
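The check itself is straightforward once a tokenizer is fixed; the sketch below assumes the tiktoken package and its p50k_base encoding as a stand-in for the tokenizers actually used by the individual tools, which may differ.

import tiktoken

def fraction_within_limit(java_sources, limit=4096, encoding="p50k_base"):
    # Share of files whose token count fits the model's context window.
    enc = tiktoken.get_encoding(encoding)
    n_ok = sum(1 for src in java_sources if len(enc.encode(src)) <= limit)
    return n_ok / len(java_sources)

# Usage (illustrative):
# sources = [open(p, encoding="utf-8").read() for p in java_paths]
# print(fraction_within_limit(sources, 4096), fraction_within_limit(sources, 2048))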
External Threats. One potential external threat stems from the fact that the dataset was primarily based on the Java language, which may affect the generalizability of our findings to other languages.
However, according to previous works <cit.>, Java is among the most prevalent programming languages for logging research purposes, and both SLF4J and Log4j are highly popular and widely adopted logging APIs within the Java ecosystem. We believe the representativeness of our study is highlighted by the dominance of Java and these APIs in the logging domain. The core idea of the study can still be generalized to other logging frameworks or languages.
§ RELATED WORK
§.§ Logging Statement Automation
The logging statement automation studies focus on automatically generating logging statements, which can be divided into two categories: what-to-log and where-to-log.
What-to-log studies are interested in producing concrete logging statements, which include deciding the appropriate log level (e.g., warn, error) <cit.>, choosing suitable variables <cit.>, and generating proper logging text <cit.>. For example, ordinal-based neural networks <cit.> and graph neural networks <cit.> have been applied to learn syntactic code features and semantic text features for log-level suggestions. LogEnhancer <cit.> aims to reduce the burden of failure diagnosis by inserting causally-related variables in a logging statement from a programming analysis perspective, whereas <cit.> predicts logging variables for developers using a self-attention neural network to learn tokens in code snippets.
Where-to-log studies concentrate on suggesting logging points in source code <cit.>. Excessive logging statements add unnecessary effort to software development and maintenance, while insufficient logging statements lead to missing key system behavior information for potential system diagnosis <cit.>. To automate logging points, previous studies solve the log placement problem in specific code construct types, such as catch <cit.>, if <cit.>, and exception <cit.>. <cit.> proposes a deep learning-based framework to suggest logging locations by fusing syntactic, semantic and block features extracted from source code.
The most recent model in T5 architecture, LANCE <cit.>, provides a one-stop logging statements solution for deciding logging points and logging contents for code snippets.
Although these works applied emerging deep-learning models to decide on logging statements, they lack an analysis of the models themselves and a comprehensive evaluation, for example, of how to measure the generated logging statements or in which scenarios a model performs better.
To fill the gap, our study is the first one that investigates and compares current large language models for automated logging generation.
Knowing the answer can help future scientists better develop, apply, and integrate these large models in practice.
§.§ Large Language Models for Code
The remarkable success of LLMs in NLP has prompted the development of pre-trained models in other areas, particularly in intelligent code analysis <cit.>.
CodeBERT <cit.> adopts the transformer architecture <cit.> and has been trained on a blend of programming and natural languages to learn a general representation for code, which can further support generating a program from a natural language specification.
In addition to sequence-based models, GraphCodeBERT <cit.> considers the structural and logical relationships within code (e.g., data flow, control flow), creating a more effective model for code understanding tasks <cit.>.
Furthermore, <cit.> presents UniXCoder, which is a unified cross-modal pre-trained model for programming language. UniXcoder employs a mask attention mechanism to regulate the model's behavior and trains with cross-modal contents such as AST and code comment to enhance code representation.
The recent work, InCoder <cit.>, is adept at handling generative tasks (e.g., comment generation) after learning bidirectional context for infilling arbitrary code lines.
As the use of large code models grows, many of them have been integrated into IDE plugins <cit.> to assist developers in their daily programming.
Nonetheless, the effectiveness of LLMs in logging statement generation has never been explored. By extensively examining the performance of LLMs in writing logging statements, this paper contributes to a deeper understanding of the potential applications of LLMs in automated logging.
§.§ Empirical Study on Logging Practice
Logging practices have been widely studied to guide developers in writing appropriate logging statements, because modern log-based software maintenance highly depends on the quality of logging code <cit.>. Logging too little or too much will hinder the failure diagnosis process in software reliability engineering <cit.>.
To reveal how logging practices in the industry help engineers make logging decisions, <cit.> analyzes two large-scale online service systems involving 54 experienced developers at Microsoft, providing six insightful findings concerning the logging code categories, decisional factors, and auto-logging feasibility. Another industrial study <cit.> indicates that the logging process is developer-dependent and thus strongly suggests standardizing event logging activities company-wide.
Exploration studies on logging statements' evolution over open software projects have also been conducted <cit.>, revealing that paraphrasing, inserting, and deleting logging statement operations are prevalent during software evolution.
<cit.> revisits the logging instrumentation pipeline with three phases, including logging approach, logging utility integration, and logging code composition.
While some studies <cit.> introduce the existing what-to-log approaches with technical details, they focus on the overall log workflow (proactive logging generation and reactive log management)
but do not provide a qualitative comparison and discussion of logging generation tools.
In summary, even though logging practices have been widely studied as a crucial part of software development, there exists neither a benchmark evaluation of logging generation models nor a detailed analysis of them.
To bridge the gap, this study is the first empirical study on logging statement generation tools and guides developers to measure whether they can use automated logging assistants when writing code.
§ CONCLUSION
In this paper, we present the first extensive evaluation of LLMs for logging statement generation.
To do so, we develop two benchmark datasets and assess the effectiveness and generalization capabilities of eight state-of-the-art LLMs.
Our evaluation indicates that existing LLMs are not yet capable of meeting the practical requirements for automated logging statement generation.
We investigate the internal characteristics of LLMs that affect their logging performance, including model size and pre-training process.
Also, we identify several external factors that boost model performance, such as comments and programming contexts. Furthermore, we evaluate the generalization ability of LLMs using a dataset that contains transformed code and reveal that unseen code has a significant impact on LLM's logging utility.
Lastly, we present three implications based on evaluation for future research on adopting language models for automated logging generation.
IEEEtranN
|
http://arxiv.org/abs/2307.06070v1 | 20230712104413 | Exponential distance relation (aka Titius-Bode law) in extra solar planetary systems | [
"Dimitrios Krommydas",
"Fabio Scardigli"
] | astro-ph.EP | [
"astro-ph.EP",
"gr-qc",
"hep-th"
] | |
http://arxiv.org/abs/2307.09370v1 | 20230714084538 | Unearthing the foundational role of anharmonicity in heat transport in glasses | [
"Alfredo Fiorentino",
"Enrico Drigo",
"Stefano Baroni",
"Paolo Pegolo"
] | cond-mat.dis-nn | [
"cond-mat.dis-nn",
"cond-mat.mtrl-sci",
"physics.comp-ph"
] |
SISSA—Scuola Internazionale Superiore di Studi Avanzati, Trieste
SISSA—Scuola Internazionale Superiore di Studi Avanzati, Trieste
SISSA—Scuola Internazionale Superiore di Studi Avanzati, Trieste
CNR-IOM—Istituto Officina Materiali, DEMOCRITOS SISSA unit, Trieste
[email protected]
SISSA—Scuola Internazionale Superiore di Studi Avanzati, Trieste
The time-honored Allen-Feldman theory of heat transport in glasses is generally assumed to predict a finite value for the thermal conductivity, even if it neglects the anharmonic broadening of vibrational normal modes. We demonstrate that the harmonic approximation predicts that the bulk lattice thermal conductivity of harmonic solids inevitably diverges at any temperature, irrespective of configurational disorder, and that its ability to represent the heat-transport properties observed experimentally in most glasses is implicitly due to finite-size effects. Our theoretical analysis is thoroughly benchmarked against careful numerical simulations. Our findings thus reveal that a proper account of anharmonic effects is indispensable to predict a finite value for the bulk thermal conductivity in any solid material, be it crystalline or glassy.
Unearthing the foundational role of anharmonicity in heat transport in glasses
Paolo Pegolo 0000-0003-1491-8229
August 12, 2023
==============================================================================
In a series of highly influential papers spanning the nineties <cit.>, Allen and Feldman (AF) laid the ground for a harmonic theory of heat transport in glasses, which is still considered a landmark in the field. In a nutshell, the AF theory stipulates that disorder alone is able to bring down the heat conductivity from the infinite value it would have in a harmonic crystal to the finite value that is observed in a glass, without resorting to any anharmonic effects <cit.>.
This claim notwithstanding, it was soon realized that an infrared singularity inevitably affects any harmonic theory of heat transport in glasses <cit.>. Indeed, a continuous Debye model for the low-frequency/long-wavelength vibrations combined with a standard Rayleigh quartic (∝ω^4) damping of sound waves expected from harmonic disorder <cit.> would result in a divergence of the thermal conductivity at all temperatures <cit.>. The impact of such singularity on the overall consistency of the AF theory and on the validity of the computations based on it has long been overlooked. On the theoretical side, it was proposed that the same quantum-tunneling effects alleged to determine the low-temperature plateau in the conductivity-temperature curve <cit.> could also regularize the infrared singularity at high temperatures <cit.>. In the numerical applications of the AF theory, the singularity is regularized without relying on any such tunneling effects. Instead, the regularization takes advantage of the finite size of any glass model used in practice, which naturally introduces a low-frequency cutoff, ω_min∼2π c/L, c being the sound velocity and L the linear size of the system. In addition, the discrete spectrum resulting from this finite size requires an ad hoc broadening of the vibrational lines to be dealt with, which also has a regularizing effect.
Building on our previous work on extrapolating bulk transport coefficients from finite glass models <cit.>, this paper delves into the impact of the infrared singularity in the harmonic theory of heat conduction. We demonstrate that, by treating anharmonic effects within the quasi-harmonic Green-Kubo (QHGK) theory <cit.>, the singularity can be effectively regularized in the bulk limit at any finite temperature without relying on quantum-tunneling effects, nor on any arbitrary infrared cutoff. Our results shed light onto the “unreasonable” effectiveness that the AF theory has demonstrated over three decades, in spite of the infrared singularity that inherently affects it. On the one hand, we find that the contribution of frequencies below the infrared cutoff, ω < ω_min, which diverges in the harmonic approximation, is relatively small when anharmonic effects are properly accounted for. On the other hand, we find that the commonly employed smearing procedure effectively mimics the boundary-scattering effects observed in thin-film samples. These findings highlight the intricate interplay between boundary and finite-size effects, on one side, and theoretical predictions, on the other, thus emphasizing the nuanced nature of the AF theory's success. Unearthing the reasons of this success provides a solid ground for advancing the theory and numerical simulation of heat transport in glassy materials.
The structure of the article is the following: first, we briefly review the AF theory and its natural extension to account for anharmonic effects perturbatively, namely the QHGK method <cit.>. In both approaches the contribution of low-frequency modes to the heat conductivity can be described by a Debye model, whose parameters can be estimated from the vibrational dynamical structure factor (VDSF). Then, using the Debye model, we show that the AF prediction for the bulk thermal conductivity diverges at any temperature. This is both motivated theoretically and demonstrated numerically by an accurate finite-size scaling analysis of the thermal conductivity and VDSF of three paradigmatic glasses, amorphous silicon, silica, and silicon carbide. We then examine how this divergence is cured by either boundary-scattering or anharmonic effects and discuss the relevance of our findings to experiments performed on thin films. Finally, we present our conclusions.
§ THEORY
The AF expression for the heat conductivity of a glass in the harmonic approximation reads <cit.>:
κ = π/3V∑_μν C_μv_μν^2 δ(ω_μ - ω_ν),
where μ and ν enumerate normal modes, ω_μ is the corresponding (angular) frequency, C_μ=ħω_μ.∂ n(ω_μ)/∂ T|_V the contribution of the μ-th normal mode to the isochoric heat capacity—n(ω)=[e^ħω/k_B T-1 ]^-1 being the temperature derivative of the Bose-Einstein occupation number, and k_B the Boltzmann constant—V is the volume, and v_μν is a generalized velocity matrix. The velocity matrix, whose precise definition can be found in Refs. allen1989thermal,isaeva2019modeling, is essentially the first real-space moment of the matrix of interatomic force constants. It is anti-Hermitean and, in a crystal, it can be chosen to be diagonal in the (Bloch) normal-mode representation, so that its diagonal elements are the group velocities of the normal modes. In a disordered system, where normal modes are necessarily real, the corresponding diagonal elements of the velocity matrix vanish, and the heat conductivity results from the coupling between (quasi-) degenerate states (see below). It must be stressed that Eq. <ref> holds only in the thermodynamic limit, where the vibrational spectrum is continuous and the double sum actually means a double integral, while practical calculations on finite models require smearing the Dirac delta to a peaked function such as a Lorentzian, thus turning Eq. (<ref>) to <cit.>
κ = 1/3V∑_μν C_μv_μν^2 η/(ω_μ - ω_ν)^2 + η^2,
where η is the broadening width of the smeared Dirac delta. The value of η is customarily chosen large enough to encompass several normal modes within the Lorentzian, while still remaining small enough to preserve the characteristics of a peaked function. The broadening of the delta function determines the extent to which pairs of quasi-degenerate states contribute to the heat conductivity, as strict degeneracy holds a zero probability in any finite model of a disordered system. As it will turn out, and at variance with what is commonly assumed, η plays a crucial role in determining the value of the thermal conductivity in the AF model. In particular, its relevance becomes apparent for low-frequency vibrations within the long-wavelength regime, where normal modes gradually approach the behavior of (plane) sound waves.
In recent years, the AF approach has been generalized to incorporate a perturbative treatment of anharmonic effects, resulting in the QHGK theory <cit.> or, equivalently, the Wigner Transport Equation <cit.>. The QHGK expression for the thermal conductivity of an isotropic material reads:
κ = 1/3V∑_μν C_μνv_μν^2 γ_μ+γ_ν/(ω_μ - ω_ν)^2 + (γ_μ + γ_ν)^2,
where γ_μ is the anharmonic linewidth of the μth normal mode, and
C_μν=ħ^2ω_νω_μ/Tn(ω_ν)-n(ω_μ)/ħ(ω_μ-ω_ν)
is a generalized two-mode heat capacity. When ω_μ=ω_ν, Eq. (<ref>) reduces to the modal specific heat, C_μ, appearing in Eqs. (<ref>) and (<ref>). The QHGK thermal conductivity, Eq. (<ref>), applies to crystalline and amorphous solids alike. For crystals, it reduces to the results of Boltzmann transport equation in the relaxation-time approximation, supplemented with inter-band effects; for glasses in the harmonic limit, the QHGK approximation reduces to the AF model <cit.>. Again, in practical calculations on finite systems, the AF model is restored bringing γ_μ from its temperature- and mode-dependent value to a temperature- and mode-independent value. The use of a constant linewidth has minimal impact on intermediate- to high-frequency vibrations—which were dubbed diffusons and locons by AF, due to their localization in real space <cit.>—since, for these modes, anharmonic lifetimes are usually small and the density of states is large and slowly varying. In fact, in the AF model the diffuson contribution to the thermal conductivity is weakly dependent on η, reaching convergence once the smearing is of the order of the average normal-mode frequency spacing. On the contrary, low-frequency vibrations, referred to as propagons by AF due to their ability to propagate like sound waves <cit.>, display a distinct behavior. For these excitations, the vibrational density of states (VDOS) decreases quadratically as the frequency approaches zero, and the anharmonic lifetimes diverge due to the lack of vibrational decay channels. Consequently, the finite, constant linewidth introduced by smearing the Dirac delta function could possibly result in a nonphysical contribution to the heat conductivity.
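To fix ideas, the sketch below evaluates the QHGK expression above with numpy, assuming the mode frequencies (rad/ps), the anharmonic linewidths (rad/ps), the squared generalized velocities, and the cell volume are available from a lattice-dynamics code such as κALDo; unit bookkeeping beyond ħ and k_B is left to the caller, and this is an illustrative implementation rather than the production code used for the results below.

import numpy as np

HBAR = 1.054571817e-22   # J*ps (frequencies are in rad/ps)
KB   = 1.380649e-23      # J/K

def bose(omega, T):
    return 1.0 / np.expm1(HBAR * omega / (KB * T))

def two_mode_heat_capacity(omega, T):
    # Generalized heat capacity; the diagonal reduces to the modal specific heat.
    w_i, w_j = np.meshgrid(omega, omega, indexing="ij")
    n_i, n_j = bose(w_i, T), bose(w_j, T)
    dw = w_i - w_j
    c_diag = HBAR**2 * w_i**2 / (KB * T**2) * n_i * (n_i + 1.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        c_off = HBAR * w_i * w_j / T * (n_j - n_i) / dw
    return np.where(np.isclose(dw, 0.0), c_diag, c_off)

def qhgk_kappa(omega, gamma, v2, volume, T):
    # v2[i, j] = |v_ij|^2; setting every gamma to eta/2 recovers the
    # Allen-Feldman smearing of the harmonic expression above.
    C = two_mode_heat_capacity(omega, T)
    w_i, w_j = np.meshgrid(omega, omega, indexing="ij")
    g = gamma[:, None] + gamma[None, :]
    lorentz = g / ((w_i - w_j)**2 + g**2)
    return np.sum(C * v2 * lorentz) / (3.0 * volume)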
In order to address the propagon contribution to the heat conductivity, it is expedient to define the vibrational dynamical structure factor. As mentioned above, propagons, diffusons, and locons differ by the degree of localization they feature. This can be observed in the VDSF that, for a harmonic system, is defined as <cit.>:
S_b^∘(ω,𝐐) =∑_νδ(ω-ω_ν)| ⟨ν| 𝐐,b⟩|^2,
where ⟨ν| 𝐐,b ⟩ denotes the projection of the ν normal mode over a sound (plane) wave vibration of wavevector 𝐐 and polarization b (b=L,T for longitudinal and transverse branches, respectively) <cit.>. Anharmonic effects in the VDSF can be accounted for by smearing the delta function in Eq. (<ref>) to a Lorentzian function:
S_b(ω,𝐐)=1/π∑_νγ_ν/γ_ν^2+(ω-ω_ν)^2|⟨ν| 𝐐,b⟩|^2.
The low-frequency, small-wavevector, portion of each branch of the VDSF features an almost linear dispersion typical of acoustic waves, ω_Qb=c_b Q, where c_L/T are the longitudinal/transverse speeds of sound <cit.>. In other words, S_b(𝐐, ω) is a peaked function centered at c_b Q which can be faithfully represented by a single Lorentzian profile,
S_b(ω,𝐐) ≈α_b(𝐐)/πΓ_b(𝐐)/(ω-c_b Q)^2+Γ_b(𝐐)^2,
allowing one to evaluate the speed of sound as well as the wavevector dependence of the sound damping coefficients, Γ_b(𝐐), accounting for both disorder and anharmonic effects on the same footing. The 𝐐-dependent function α_b(𝐐) is a global prefactor that scales the Lorentzian.
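In practice, c_b and Γ_b(Q) are obtained by fitting the computed VDSF with the Lorentzian profile above; a minimal scipy sketch, assuming S has been sampled on a frequency grid for a single polarization and wavevector magnitude, reads:

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(omega, alpha, omega0, gamma):
    return alpha / np.pi * gamma / ((omega - omega0)**2 + gamma**2)

def fit_vdsf_peak(omega_grid, S, Q):
    # Initial guesses: total spectral weight, position of the maximum, a narrow width.
    p0 = [np.trapz(S, omega_grid), omega_grid[np.argmax(S)], 0.1]
    (alpha, omega0, gamma), _ = curve_fit(lorentzian, omega_grid, S, p0=p0)
    return omega0 / Q, gamma   # sound velocity c_b and damping Gamma_b(Q)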
For any given polarization, in an isotropic medium the damping coefficient can only depend on the magnitude of the wavevector, yielding Γ_b(Q). Propagons are identified as those low-frequency/long-wavelength normal modes that contribute to the VDSF in the linear-dispersion regime. The increasing broadening of the dispersion identifies a cutoff frequency for propagons, ω_P, often referred to as the Ioffe-Regel limit <cit.>. According to Eq. (<ref>), below this limit vibrational modes can be approximately described by damped (plane) sound waves, characterized by the group velocities c_b and decay times τ_b(Q)=[2 Γ_b(Q)]^-1. Consequently, the propagon contribution to the heat conductivity can be cast into the form <cit.>:
κ_P = 1/3V∑_𝐐 b^c_b Q < ω_P g_b C(c_b Q)c_b^2τ_b(Q),
where ω_P is the propagons' cutoff frequency, and g_b is the degeneracy of the propagon branch: g_L=1 and g_T=2. In the bulk limit, when the size of the system is brought to infinity, the discrete sum over states turns into an integral through the definition of a density of states; the propagon contribution to the thermal conductivity thus takes a form reminiscent of the kinetic theory of gases <cit.>:
κ_P = ∑_b c_b^2/3∫_0^ω_P C(ω) ρ_b(ω) 1/2Γ_b(ω/c_b)ω,
where ρ_b=g_bω^2/2π^2c_b^3 are the L/T Debye's density of states per unit volume. Eq. (<ref>), which applies to both crystals and glasses, is the infinite-size limit of the propagon contribution to both Eqs. (<ref>) and (<ref>), the difference lying in whether or not Γ_b(ω/c_b) includes anharmonic effects. Formulas such as Eq. (<ref>), where hydrodynamic arguments are used to extrapolate QHGK results to the infinite-size limit <cit.>, will be referred to as hydrodynamic QHGK formulas.
In general, for low enough frequencies, one has <cit.>:
Γ_b(ω/c_b) ≈ A_b ω^2 + B_b ω^4.
In the harmonic approximation, A_b=0 and the leading order in the frequency dependence of the sound damping coefficient is quartic, Γ_b∼ω^4, due to incoherent Rayleigh scattering from elastic fluctuations of the medium <cit.>. This behavior, which is confirmed by experiments <cit.>, can be understood through random media theory on a continuous model <cit.>, or from a microscopic perspective via harmonic perturbation theory, such as in the case of crystals with mass disorder <cit.> or random spring constants <cit.>.
When A_b=0 in Eq. (<ref>), one easily sees that the propagon contribution to the heat conductivity diverges at all temperatures. On the other hand, the inclusion of anharmonic contributions ensures a quadratic dependence of Γ_b on frequency, resulting in a finite thermal conductivity whenever A_b 0 <cit.>.
We conclude that disorder alone is insufficient to guarantee a finite bulk thermal conductivity in glasses.
How come practical calculations employing the AF model yield finite values of κ which compare fairly well with experimental results? The answer ultimately hinges on the fact that calculations are necessarily performed on finite glass models. This has two main consequences. The first is that the finite size, L, naturally introduces an infrared cutoff to κ_P, ω_min∼ 2 π c / L, which makes it finite. Notably, the infrared contribution to κ_P, which is divergent in the harmonic approximation, turns out to be typically small in most cases, when anharmonic effects are adequately considered as detailed below. The second consequence is that the finite number of normal modes requires the smearing of their individual contributions to Eq. (<ref>). This leads to Eq. (<ref>), wherein anharmonic linewidths are substituted with a (rather unphysical) mode-independent broadening. Therefore, the net effect of a finite calculation is that a contribution to κ—the one associated with frequencies below ω_min—is completely neglected, while all that remains is affected by the choice of the constant damping due to the broadening width, η. Crucially, this broadening plays a significant role even in the bulk limit, as the VDSF linewidth results to be the sum of the harmonic contribution, proportional to ω^4, and the constant broadening due to the smearing <cit.>. Thus, the smearing width enters the Debye expression of the harmonic thermal conductivity as
κ_P = ∑_b g_b/6 π^2 c_b∫_ω_min^ω_P C(ω) ω^2/2 B_b ω^4 + ηω.
The integral in Eq. (<ref>) converges for any finite value of η and/or ω_min. The bulk limit is restored in the ω_min→ 0, η→ 0 limit. Fig. <ref> shows the harmonic (solid lines) and anharmonic (dashed lines) thermal conductivity of a typical amorphous solid as a function of ω_min and η. The propagon contribution is obtained from Eq. (<ref>) (harmonic case) and Eq. (<ref>) with Γ given by Eq. (<ref>) (anharmonic). The diffuson contribution, κ_D, (which does not depend on ω_min) is essentially independent of η, and it thus adds the same constant shift to each line. The left panel of Fig. <ref> shows κ as a function of the infrared cutoff for different values of the smearing width. When η=0, κ diverges in the ω_min→ 0 limit. Vice versa, the right panel shows κ as a function of the smearing width for different values of the infrared cutoff. Again, when ω_min=0, κ diverges in the η→ 0 limit.
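The trend just described can be illustrated with a few lines of quadrature; in the sketch below the Debye parameters (c_b, B_b, ω_P) are placeholders rather than fitted values for any specific material, and the absolute scale is immaterial, since only the growth of κ_P as ω_min → 0 at η = 0 and its saturation at finite η matter.

import numpy as np
from scipy.integrate import quad

HBAR, KB = 1.054571817e-22, 1.380649e-23   # J*ps, J/K (omega in rad/ps)

def c_modal(omega, T):
    # Modal heat capacity C(omega) = hbar*omega * dn/dT.
    x = HBAR * omega / (KB * T)
    return KB * x**2 * np.exp(x) / np.expm1(x)**2

def kappa_propagons(T, eta, omega_min, omega_P, c_b, B_b, g_b=1):
    # Single-branch version of the smeared Debye integral above.
    integrand = lambda w: c_modal(w, T) * w**2 / (2.0 * B_b * w**4 + eta)
    val, _ = quad(integrand, omega_min, omega_P, limit=200)
    return g_b / (6.0 * np.pi**2 * c_b) * val

# With eta = 0 the integrand grows as 1/w^2 near w = 0, so kappa_P diverges as
# omega_min -> 0; any finite eta (or boundary scattering ~ c_b/d) regularizes it.
for omega_min in (1e-1, 1e-2, 1e-3, 1e-4):
    print(omega_min, kappa_propagons(T=300.0, eta=0.0, omega_min=omega_min,
                                     omega_P=10.0, c_b=60.0, B_b=1e-4))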
We must now stress that most of the experimental literature on heat transport in glasses from the nineties concerns micrometer-thick films <cit.>, rather than samples of macroscopic size. Formally, boundary effects such as those involved in thin-film experiments would enter the expression of the thermal conductivity the same way as the AF smearing width in Eq. (<ref>). For a thin-film sample, the thermal conductivity can in fact be described by an equation similar to Eq. (<ref>), where η is replaced a constant boundary-scattering contribution to the linewidth, of the form η_BS∼ c/d, d being the film thickness <cit.>. As a consequence, the bulk limit of the AF model with fixed η yields the same thermal conductivity of a thin film rather than that of an infinite system. In the harmonic approximation, the former remains finite, while the latter diverges. This might have unintentionally contributed to the misconception that the heat conductivity of bulk glasses can be fully explained in terms of disorder effects alone, neglecting anharmonic interactions. These interactions dampen low-frequency vibrations and regularize the heat conductivity at all finite temperatures. In many cases, this regularization renders the infrared contribution to κ almost negligible in the bulk limit compared to that of diffusons. Essentially, anharmonic interactions substitute a divergent quantity (the bulk thermal conductivity of propagons in the harmonic approximation) with a finite quantity that can be mimicked by a finite-size effect in calculations on finite systems. In conclusion, the presence of anharmonic interactions is vital for regularizing the behavior of κ in a macroscopic system, even in the case of disordered materials <cit.>.
§ NUMERICAL EXPERIMENTS
In order to substantiate our arguments, we have performed a number of numerical experiments on three glasses featuring different convergence properties to the bulk limit <cit.>: amorphous silicon (), silica (), and silicon carbide (). The technical details of our simulations are reported in the Methods section.
§.§
Low-frequency behavior of the sound damping coefficients
The quartic frequency dependence of the sound damping coefficients in harmonic glasses can be understood perturbatively in terms of the scattering of acoustic waves in a homogeneous medium with small, random, independent local fluctuations of the elastic constants <cit.>. In Fig. <ref> we report the dependence of the attenuation coefficients in the harmonic approximation on frequency for the three materials considered in this work. Both and exhibit an ω^4→ω^2 crossover, ω_XO, respectively around 2 and 1 THz (ω_XO≈ 12 and ω_XO≈ 6 rad/ps), in agreement with theoretical models which explain it in terms of the mixing between longitudinal and transverse modes, due to the broadening of the linear dispersion induced by disorder <cit.>. This behavior is also in agreement with experiments, which find a first—temperature-dependent—crossover at very low frequency between an ω^2 regime, determined by anharmonic effects, and the ω^4 regime where disorder dominates <cit.>, followed a by second—temperature-independent—crossover from ω^4 to ω^2, due to the longitudinal-transverse mixing mentioned above <cit.>. No such crossover is observed in . As the minimum frequency compatible with a given finite glass models scales as the inverse size, rather large simulation cells are required to discriminate the crossover and evaluate the corresponding coefficients. For materials such as , where the crossover occurs at relatively low frequencies, it is essential to have systems with several tens of thousands of atoms. In practice, standard lattice-dynamical techniques based on matrix diagonalization are unsuitable to deal with such large systems, and the VDSF can be best computed directly in these cases using Haydock's recursion method <cit.> based on the Lanczos algorithm <cit.> (see the Methods section). We are thus able to compute the VDSF for systems comprising up to hundred-thousand atoms, an order of magnitude larger than those computed in our earlier work employing direct diagonalization <cit.>.
It must be noted that the existence of the ω^4→ω^2 crossover may yield misleading results in the computation of the thermal conductivity. This issue is particularly relevant for , a material often depicted as highly disordered, whose heat conductivity would have very small finite-size effects, and reaching a well-converged value with models of a few thousand atoms <cit.>.
Actually, since its crossover frequency is ω_XO∼ 7.5 rad/ps for both polarizations <cit.>, to evaluate the bulk limit of the thermal conductivity of this system, one would need to employ samples whose linear size exceeds the wavelength of the corresponding sound wave. The wavelength for the longitudinal sound wave, λ_XO^L, is given by 2π c_L/ω_XO∼ 60 Å, while the wavelength for the transverse sound wave, λ_XO^T, is given by 2π c_T/ω_XO∼ 35 Å. Thus, the sample size should be greater than λ_XO^L, which means it should contain ≳ 14000 atoms.
Therefore, if one were to study the AF thermal conductivity of a glass with a finite model of linear size smaller than λ_XO^L, one would only sample the propagon contribution above the crossover, thus squarely missing the quartic low-frequency dependence of the sound damping coefficient that determines the divergence of the heat conductivity in the harmonic approximation.
§.§ AF thermal conductivity
In order to demonstrate how the ω^4 dependence of the harmonic sound damping coefficients affects the bulk limit of the AF heat conductivity, we computed the propagon contribution to the AF conductivity in , , and over a range of values of the smearing parameter, η, and for finite models of progressively larger sizes. We compared these results with the analytical model provided by Eq. (<ref>), κ_P(T, η), whose parameters are estimated from the harmonic VDSFs. The results for at 500 K are shown in the upper panel of Fig. <ref>. As the size of the model increases, the AF data approach the analytical benchmark. The convergence is achieved at larger sizes as the value of η decreases. In fact, calculations on a finite system with small η are meaningless: when the average frequency spacing of propagons is larger than the AF smearing, the Lorentzian functions in Eq. (<ref>) become so sharp that the corresponding effective VDOS features unphysical gaps that result in a spurious reduction of the thermal conductivity.
From the central and lower panels of Fig. <ref> similar conclusions can be drawn for and , respectively. Unlike , both materials present the aforementioned ω^4→ω^2 crossover around ω_XO=12 rad/ps and 6 rad/ps, respectively. This requires a different functional form for the harmonic linewidth, able to capture the crossover, such as the one proposed in Ref. <cit.>:
Γ^∘(ω) = C_b ω^2 [1 +(ω_XO^b/ω)^2δ]^-1/δ,
where C_b is a constant, ω_XO^b is the polarization-dependent crossover angular frequency, and δ=1.5 determines the sharpness of the transition. We then compared the AF data with the analytical model provided by Eq. (<ref>) with the linewidth computed with Eq. (<ref>). Like for , when η is large, for both and the AF results converge in size to the analytical ones, and the convergence is reached at larger sizes as η diminishes. In the η→0 limit, the analytical model diverges due to the Rayleigh (∝ω^4) scattering term in the harmonic linewidth.
§.§ Discussion
As discussed above, the AF method is commonly acknowledged to effectively account for experimental measurements of the heat conductivity of amorphous solids. To gain a deeper insight into this effectiveness, in Fig. <ref> we analyze the dependence of the extrapolated [Eq. (<ref>), ω_min=0] thermal conductivity of , , and at 100 K and 500 K on the thickness of the sample, d. The boundary scattering adds to the linewidth of each polarization a term equal to η^b_BS=c_b/d <cit.>. The insets show the propagon contribution to the thermal conductivity, κ_P, where third-order anharmonic effects are computed with the Fermi's Golden Rule and included through the Matthiessen rule <cit.>. In the harmonic approximation, the thermal conductivity diverges as the film thickness increases, as indicated by the solid lines. However, when anharmonicity is accounted for, the thermal conductivity converges to its bulk value at a finite thickness. The figure demonstrates that in materials such as and , where propagons contribute marginally to heat transport compared to diffusons, the bulk limit is reached at nanometer scales. In our model, where propagons play a more significant role, the bulk limit is achieved at much larger sizes, around a hundred micrometers. It is worth noting that, at the typical thin-film sizes used experimentally, the harmonic value of κ is not significantly different from the anharmonic one. This suggests that a harmonic model on a finite system can provide a reasonable estimate of the thermal conductivity of a thin film even when extrapolated to ω_min→ 0, as long as boundary scattering is appropriately accounted for.
Fig. <ref> illustrates the temperature dependence of κ for and . We compare experimental measurements from the literature with our hydrodynamic QHGK results and AF calculations conducted on finite samples. The anharmonic linewidths are computed on a range of temperatures and extrapolated to get a continuous line, as described in Refs. braun2016size,klemens1951thermal. These linewidths are then combined with the total linewidth using the Matthiessen rule <cit.>. For , the QHGK results match the bulk experimental measurement <cit.>. AF calculations, performed on a sample comprising 3000 atoms, also shows good agreement with both bulk-QHGK results and experimental data. This indicates that in the case of , diffusons completely dominate the thermal conductivity, so that similar results are obtained neglecting contributions below ω_min (as done in finite AF calculations) as well as considering the anharmonic damping of propagons (as in bulk QHGK calculations). However, a direct extrapolation of the AF results regularized with a finite η yields values of κ ranging from κ_D to infinity, depending on the value of the smearing parameter.
In the case of , where propagons are more important, the intriguing effectiveness of AF calculations in matching experimental data is further questioned. For instance, a calculation using 4096 atoms closely agrees with measurements on a 0.52 μ m-thick film <cit.>, seemingly validating the entire procedure. Again, what is actually happening is that the (diverging) contribution to κ from 0 to ω_min is being set to zero rather than to the (finite and small) value it would have when accounting for anharmonicity. For , the missing contribution is not as negligible as it is for , resulting in a pronounced difference between the QHGK results and the AF calculation.
The temperature dependence of the QHGK heat conductivity results from two competing contributions. One is that from diffusons, which is exponentially suppressed at low temperatures—due to the Bose-Einstein occupation function—and saturates to a constant at higher temperatures. The other is the one from propagons, which diverges as T → 0 for essentially the same reasons why it does so in crystals <cit.>: first, the propagation of sound waves with wavelengths much larger than the atomic correlation length is relatively unaffected by disorder at leading order in ω, and, second, the temperature dependence of A_b in Eq. (<ref>) causes the integral in Eq. (<ref>) to diverge for vanishing temperatures <cit.>.
The concavity of κ(T) is thus determined by the relative magnitudes of these two contributions <cit.>. In materials where propagons contribute marginally to the heat conductivity (such as , upper panel of Fig. <ref>), the divergence of the bulk value of κ_P(T) becomes noticeable primarily at low temperatures. The change in concavity is thus determined by the onset of diffusons. Conversely, when propagons dominate the thermal conductivity, the concavity might be entirely determined by κ_P, such as in the case of our model of (purple curve in the lower panel of Fig. <ref>). At low temperatures, the divergence is suppressed by boundary scattering effects in thin films, as illustrated in Fig. <ref>.
Even in bulk systems, where no boundary scattering exists, the low-temperature divergence is suppressed by quantum tunneling between quasi-degenerate minima in the glass energy landscape, which leads to the plateau commonly observed at a few tens of kelvins in most glasses <cit.>.
§ CONCLUSIONS
The main result of this paper is the demonstration that disorder effects alone are not sufficient to bring the heat conductivity of a material from the infinite value it has in a harmonic crystal to the finite value observed in real glasses. In the harmonic approximation, the low-frequency portion of the vibrational spectrum yields an infinite contribution to the thermal conductivity reminiscent of its behavior in crystals: in the long-wavelength/low-frequency regime, sound waves propagate in glasses essentially the same way they do in crystals, to the relevant order in frequency. In fact, the lack of sound damping in harmonic crystals and its rapid decay in harmonic glasses (∼ω^4) both fail to effectively regularize the divergent conductivity. By contrast, a proper account of anharmonic effects makes the bulk thermal conductivity of glasses finite at any finite temperature. Still, with anharmonicity alone, the thermal conductivity would diverge at zero temperature. In practice, at extremely low temperatures the residual divergence is suppressed by quantum-tunneling effects <cit.>, leading to the well-known thermal conductivity plateau at a few tens of kelvins. This plateau is believed to be due to the tunneling between quasi-degenerate low-energy minima in the glass energy landscape, responsible for the residual entropy in glasses <cit.>. As our treatment is limited to the vibrational properties within a single such energy minimum, it obviously fails to address the low-temperature plateau. The description of these tunneling effects from first principles thus remains a major challenge in the physics of glasses to be addressed in the future. It is noteworthy that, in materials where the dominant influence of propagons on heat transport persists at temperatures higher than those at which quantum tunneling suppresses them, our analysis indicates that the bulk thermal conductivity should display a maximum at low temperature, which could potentially be detected experimentally.
§ METHODS
§.§ Computational details
The glass samples used in our simulations were generated through a melt-and-quench procedure. Initially, a crystalline conventional cell was replicated ℓ times along each Cartesian direction. Molecular trajectories were then generated in different thermodynamic ensembles (see below), employing the velocity-Verlet algorithm implemented in the code <cit.>. A time step of 1/1/0.5 fs was used for //. To ensure statistical robustness, all of our results were averaged over 4/4/10 independent samples. These samples were obtained by repeating the melt-and-quench procedure multiple times, each with a different random initialization. After
the equilibration, the atomic configurations were optimized so as to make atomic forces smaller than a preassigned threshold of 10^-10 eV/Å.
Normal modes and thermal conductivities are computed with the κALDo code <cit.> using second- and third-order interatomic force constants obtained from .
§.§.§
was modeled with a Vashishta force field <cit.>. The glass was modeled starting from the β-cristobalite cubic conventional 24–atom unit cell with mass density of 2.20 g/cm^3 replicated ℓ=18 times along each Cartesian direction, comprising ≈ 140,000 atoms in the simulation box. The crystal was originally melted at 7000 K and then quenched to 500 K in 10 ns <cit.>. The system was then thermalized at 500 K for 400 ps and for 100 more ps in the NVE ensemble. The final average density of the samples thus obtained is 2.408 g/cm^3 with a standard deviation across different samples of 0.002 g/cm^3.
§.§.§
was also modeled with a Vashishta force field <cit.>. The starting configuration was a crystalline cubic zinc-blend structure, with 8 atoms in the unit cell, and a mass density of 3.22 g/cm^3 repeated 23 times along each Cartesian direction, thus comprising ≈ 97,000 atoms in the simulation cell. Following the procedure described in Ref. <cit.>, we initially heated the crystal from 300 K to 4000 K in the NpT ensemble at constant null pressure and then quenched to 500 K in 400ps and finally equilibrated in the NVE ensemble for 80 ps. The average density of the is 2.976 g/cm^3 with a standard deviation across different samples of 0.002 g/cm^3.
§.§.§
was modeled with the Tersoff force field <cit.>. The starting configuration was a diamond structure, with 8 atoms in the unit cell, with a mass density of 2.31 g/cm^3 repeated 12 times along each Cartesian direction, corresponding to ≈ 14,000 atoms in the simulation cell. At variance with and , in the case of , this moderate size already allows one to observe the ω^4 scaling of the harmonic linewidth. The crystal is initially melted at 6000 K and then quenched to 300 K in 22 ns and equilibrated in the NVE ensemble for 10 ns <cit.>. The average mass density of the is 2.275 g/cm^3 with a standard deviation across samples of 0.003 g/cm^3.
§.§ Haydock’s recursion method
The direct computation of the harmonic VDSF is unfeasible for systems of tens of thousands of atoms because it requires the diagonalization of the entire dynamical matrix, a procedure that scales as the cube of the number of atoms. Haydock's recursion method is an iterative procedure, based on the Lanczos orthogonalization algorithm, that allows one to estimate the VDSF as the imaginary part of a diagonal element of the vibrational Green's function of the system <cit.>.
Using this procedure, we were able to address several systems of tens of thousands of atoms, where the quartic scaling of the harmonic linewidth is appreciable, as reported in Fig. <ref>.
We want to compute the diagonal matrix elements of the vibrational Green's function of the form:
lim_ϵ→0⟨𝐐, b|((ω+iϵ)^2 -𝐊)^-1|𝐐, b⟩=
π/2ω[S_b^0(ω, 𝐐)+S_b^0(-ω, 𝐐) ],
where
𝐊= 𝐌^-1/2𝐊𝐌^-1/2,
𝐊 is the matrix of interatomic force constants (i.e. the Hessian of the energy with respect to atomic displacements), and 𝐌 is the diagonal, positive-definite, matrix of the atomic mass distribution. In a system of N atoms, |𝐐, b⟩ is a 3N-dimensional vector whose projection onto the displacement of I-th atomic site in the α-th Cartesian direction is:
⟨I, α||𝐐, b⟩=1/√(N)ϵ_α^b(𝐐)e^i 𝐐·𝐑_I,
where ϵ^b(𝐐) is the polarization vector, and 𝐐=2π/L(n,m,l), with (n,m,l)∈ℤ^3, is a wavevector compatible with the enforced PBCs. The harmonic VDSF is then computed by a continued fraction expansion:
π/2ω[S_b^0(ω, 𝐐)+S_b^0(-ω, 𝐐)]=
lim_ϵ→ 01/(ω+iϵ)^2-a_0- b_1^2/ (ω+iϵ)^2-a_1- b_2^2/⋱,
where the coefficients {a_0, a_1,…} and {b_1, b_2, …} are evaluated by the recursion Lanczos chain:
|ξ_-1⟩=0,
|ξ_0⟩=|𝐐, b⟩,
b_n|ξ_n⟩=(𝐊-a_n-1)|ξ_n-1⟩-b_n-1|ξ_n-2⟩,
a_n=⟨ξ_n|𝐊|ξ_n⟩,
b_n=⟨ξ_n|𝐊|ξ_n-1⟩.
This procedure drastically reduces the computational cost of the evaluation of the harmonic VDSF, going from the 𝒪((3N)^3) scaling of the exact diagonalization algorithm to an 𝒪(k(3N)^2) scaling, where N is the number of atoms in the simulation cell and k is the number of steps of the Lanczos chain. Moreover, since the matrix of the interatomic force constants is sparse, the numerical burden of Haydock's algorithm can be further reduced to a complexity 𝒪(kN). The procedure proves to be numerically robust, in spite of the well-known instabilities of the Lanczos tridiagonalization scheme <cit.>, and approximately 200 recursion steps are typically sufficient to estimate the sound damping coefficients, which we increased up to 600 steps to carefully test the convergence. In order to validate the iterative algorithm, we compared the harmonic attenuation coefficients fitted from the VDSF computed via direct diagonalization of the dynamical matrix and via Haydock's method as in Eq. (<ref>). In Fig. <ref> we display the sound damping coefficients computed on a model of of 13824 atoms, showing good agreement between the two methods.
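A compact implementation of the recursion, assuming the mass-rescaled force-constant matrix is available as a (sparse) 3N × 3N array with atom-major ordering of the degrees of freedom and that eps_b is a unit polarization vector, could read as follows; the numerical safeguards needed in production (re-orthogonalization, breakdown checks) are omitted.

import numpy as np

def haydock_green(K, positions, Q_vec, eps_b, omega, n_steps=300, eps=1e-2):
    # Matrix element <Q,b| ((omega + i*eps)^2 - K)^(-1) |Q,b>; for omega > 0 and
    # small eps, the VDSF is proportional to -Im of the returned value.
    n_atoms = positions.shape[0]
    phase = np.exp(1j * positions @ Q_vec) / np.sqrt(n_atoms)
    xi = (phase[:, None] * eps_b[None, :]).ravel()        # start vector |Q,b>
    xi_prev = np.zeros_like(xi)
    a, b = [], [0.0]
    for _ in range(n_steps):                              # Lanczos chain
        w = K @ xi
        a.append(np.vdot(xi, w).real)
        w = w - a[-1] * xi - b[-1] * xi_prev
        b.append(np.linalg.norm(w))
        xi_prev, xi = xi, w / b[-1]
    z = (np.asarray(omega, dtype=complex) + 1j * eps)**2  # continued fraction
    g = np.zeros_like(z)
    for a_k, b_k in zip(reversed(a[1:]), reversed(b[1:-1])):
        g = b_k**2 / (z - a_k - g)
    return 1.0 / (z - a[0] - g)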
The authors are grateful to Federico Grasselli for a critical reading of the early version of the manuscript. This work was partially supported by the European Commission through the MaX Centre of Excellence for supercomputing applications (grant number 101093374) and by the Italian MUR, through the PRIN project FERMAT (grant number 2017KFY7XF) and the Italian National Centre for HPC, Big Data, and Quantum Computing (grant number CN00000013).
|
http://arxiv.org/abs/2307.05651v1 | 20230711135936 | Assessing Peer Award Diversification on Reddit | [
"Amaury Trujillo"
] | cs.SI | [
"cs.SI"
] |
0000-0001-6227-0944
IIT-CNR
Pisa
Italy
[email protected]
<ccs2012>
<concept>
<concept_id>10003120.10003121.10011748</concept_id>
<concept_desc>Human-centered computing Empirical studies in HCI</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10003120.10003130.10011762</concept_id>
<concept_desc>Human-centered computing Empirical studies in collaborative and social computing</concept_desc>
<concept_significance>300</concept_significance>
</concept>
</ccs2012>
[500]Human-centered computing Empirical studies in HCI
[300]Human-centered computing Empirical studies in collaborative and social computing
Monetizing user-generated content in social media such as Reddit, in which users are both content creators and consumers, is challenging.
Among the platform's strategies we find Reddit Awards, paid tokens of appreciation given by peer users who particularly enjoyed a given piece of posted content.
This work thus aims to investigate how awarding changed after such diversification.
To this end, two datasets of posts made before and after the change (16M submissions and 203M comments) were analyzed by operationalizing awarding level and award diversity.
Results show that after diversification, the awarding level increased across multiple measures, both significantly and considerably, albeit two of the original three awards remained by far the most commonly given.
Such an increase indicates that providing more award options benefits users and is a viable user interaction approach for platforms to both engage users and monetize their content.
Assessing Peer Award Diversification on Reddit
Amaury Trujillo
==============================================
§ INTRODUCTION
Maintaining an engaged user base and monetize it has always been paramount but very demanding for social media platforms <cit.>.
In this regard, some platforms have established content creator schemes in which approved users can monetize their content based on the engagement of content consumers.
However, such schemes present an imbalanced power dynamic between platform and creators <cit.>, and in platforms in which there is no clear distinction between content creators and consumers, e.g., online discussion forums, these are not a viable option.
Reddit, one of the most visited websites worldwide,[<https://www.alexa.com/siteinfo/reddit.com>] is a representative example.
After building an initial critical user base, in 2009 the platform turned to the commonly used and abused advertisement business model. However, as the number of users continued to rapidly grow, thanks to engaging content and discussions, Reddit's operation became hard to sustain.
Hence, in July of 2010, the platform introduced Reddit Gold, a paid subscription membership with exclusive features, as an additional revenue stream to cover its ever increasing operating costs. In November of 2012, Reddit Gold was expanded with the ability to give other users gold features for a limited time, via the gilding of posts; it was the platform's precursor of a monetized peer award system.
Six years later, in September of 2018, the platform introduced Reddit Coins, a new monetization strategy, with the platform's paid subscription membership being renamed from Reddit Gold to Reddit Premium.
Reddit Coins[<https://www.reddit.com/coins>] are virtual goods that can be purchased by users —with or without Reddit Premium— to be exchanged by awards that are used as tokens of appreciation for submissions or comments posted by other users.
Some Reddit awards grant the recipient benefits, such as coins or access to exclusive features within the platform; other awards function merely as a decoration for the recipient's post.
Initially, only three awards were available: the original Gold, plus Silver and Platinum; although giving any of these was still called gilding.
Then, in July of 2019, Reddit introduced Community Awards (exclusive to a given subreddit) and significantly expanded the general awards from three to dozens (and counting), thus diversifying awards platform-wise.
The new awards, often depicted with colorful imagery (see Figure <ref>), are mostly humorous references to Internet slang, memes, or inside jokes. Consequently, the act of giving an award was renamed from gilding to awarding, albeit the core mechanism remained the same.
Obviously, the main goal of the diversification was to increase the level of awarding and thus increase revenue from the direct selling of Reddit Coins and/or the Reddit Premium subscription that includes a fixed monthly amount of Coins plus access to exclusive awards. However, despite the benefits in user interaction touted by Reddit personnel and the enthusiasm of many users, others decried these changes, mainly for its focus on monetization and perceived lack of appeal when compared to the then-current gilding mechanism,[<https://www.redd.it/chdx1h/>] as expressed by redditor Poiuy2010_2011:
Maybe I'm in the minority here but I find these community awards absolutely useless. With the standard 3 awards there is a clear hierarchy. But with the community awards, especially when a post gets popular, all of them just kinda blend together and ultimately become meaningless.
In this context, the present work aims to investigate the impact in awarding behavior of expanding awards beyond gilding within Reddit, around the following research questions (RQ):
* RQ1: How did award diversification change awarding levels?
* RQ2: How diverse are the awards given by Reddit users?
Results indicate that indeed awarding levels (based on different metrics) increased after diversification; still, the original and more hierarchical gilding awards remained the most popular awards, particularly Silver and Gold. Hence, this study offers two primary contributions: an operationalization of appreciation token levels and diversity, and suggestions for the implementation of similar paid appreciation token schemes in other platforms based on peer user-generated content.
§ BACKGROUND AND RELATED WORK
Many social media platforms have developed virtual currencies as a potential source of revenue stream, although with varying degrees of success, as exemplified by two of the biggest platforms.
On one hand, back in 2005 Tencent introduced to great success QQ Coins as a means to pay for online services and virtual goods within its ecosystem <cit.>, which with the years expanded to various in-platform financial activities <cit.>, and is still widely used by hundreds of millions of users.
On the other hand, in 2011 Facebook Credits were made available to purchase digital items within the Facebook platform, subject to transaction fees, but the currency was phased out two years later after user and developer complaints <cit.>, as well as more rigid financial regulation <cit.>.
Incidentally, Facebook also intended to develop its own cryptocurrency, called Diem (formerly known as Libra), but it was promptly met with government scrutiny and public mistrust <cit.>, with the company getting rid of the project in January of 2022, before any launch.
Arguably, the aforementioned increasing legal probes on virtual currencies compelled Reddit to explicitly use the term virtual goods to refer to its own monetization scheme, despite that the name Reddit Coins is more evocative of currency.
The same term is also used in a similar monetization scheme that the streaming platform Twitch introduced in mid-2016, called Cheering with Bits.
To show support to streamers, Twitch users can purchase virtual goods called Bits, with which they Cheer on a given stream channel chat, earning a channel-specific badge representing the amount of Bits donated. Twitch offers a few common badges related to Bits, but affiliates and partners can replace the badges in specific channels, thus diversifying the imagery in a manner reminiscent of Reddit Community Awards. TikTok has a similar yet different monetization mechanism for its streaming platform, called Gifting. Users buy TikTok Coins —which the platform explicitly treats as virtual items and not property in its terms of service— in order to purchase in-platform virtual Gifts to donate to live streamers. The Gifts are then transformed into Diamonds, which are valued proportionally to the gift coin value (minus the platform's share) and can be then redeemed for money by streamers.
Twitch Bits have challenged the dominance of third-party donation tools <cit.>, and —together with paid subscriptions— they have become one of the main mechanisms with which Twitch streamers monetize their content <cit.>, and users manifest their appreciation and financial commitment to streamers <cit.>. TikTok Coins are also considered to be key in the financial success of TikTok and some of its content creators <cit.>.
Hence, Twitch Bits and TikTok Coins share more of the characteristics of currency as a medium of exchange than Reddit Coins.
In fact —and in spite of their designation as virtual goods or items by their creators— in the literature all three are treated as in-platform currency <cit.>. However, they do not act as a store of value and are limited as a medium of exchange or unit of account, which are generally considered to be the functions of currency <cit.>.
Hence, I refer to these and other similar monetization virtual objects as paid appreciation tokens: recognition symbols of the enjoyment that some content has brought to their giver, who had the willingness to pay for them.
There are, however, three fundamental differences between these video streaming platforms and Reddit that affect their paid appreciation token mechanisms.
First, the content dynamics are different: original content creation is a core aspect of Twitch and TikTok, while content sharing and discussion is central in Reddit, irrespective of whether such content was originally created by the poster or not.
Second, streamers monetize their content whenever they receive Cheers or Gifts, earning real-world money proportional to the donated Bits or Coins, while Reddit users do not receive financial compensation when their post is awarded; at most, some awards include access to exclusive Premium features or Reddit Coins to be further exchanged by other awards. After all, it is the appreciation token that has been paid, not necessarily the creator or sharer of the content to which the token was bestowed.
Third, the relationship between the parties involved is quite different. In Twitch, interactions revolve around channels, managed by content creators (who often hope to earn money for their effort) for content consumers (who sometimes monetarily reward such efforts). TikTok is similar, but there are no channels, the interaction is directly with content creators. In Reddit, the user-generated content process revolves around topical communities, in which almost any registered Reddit user can act as both creator and consumer, without the expectation of earning money for posting content. In other words, users are peers, hence the term peer awards is also used to designate Reddit's paid appreciation tokens.
Notwithstanding these distinctive features of Reddit and its popularity, there is scarce literature on its peer awarding mechanisms, old and new. The majority of academic work on Reddit that takes into consideration the awarded (or gilded) status does so as a content feature for a specific task, such as the modeling of content popularity <cit.> or user personality <cit.>.
The earlier of a couple of works focused on Reddit awarding concerns an analysis of pre-Coins gilded content from a sample of 25M comments from the top 100 subreddits by content in May of 2015 <cit.>.
However, due to the focus on linguistic features, the work only considered comments, excluding submissions (which could represent links or embedded multimedia).
Therein, authors found that subreddits could be clustered in different groups based on linguistic and other comment features, as well as the topic of interest.
In most cases, the initial comments (relative to their submission) are more likely to be gilded compared to late comments, lengthier comments are preferred to shorter ones, and the subreddit clusters had different preferences in writing style, e.g., the use of “you” and a more narrative style for sports-related communities.
A more recent and in-depth analysis concerns the post-Coins gilding process. In this experimental study by <cit.>, Gold was randomly and anonymously given to 905 users' posts over a two-month period within three writing-focused subreddits (hfy, nosleep, and shortscarystories). Therein, authors found that peer awards induced recipients to produce longer and more frequent content, especially among newer community members, with such posts being textually similar to their own past awarded content. Hence, besides the monetization of user-generated content itself, we could say that peer awards also motivate users to create more content, which in turn attracts new users to the platform or retains the attention of those already present —producing a positive network effect— while also increasing the chances of new content being awarded. That being said, the experimental study in question was limited to a single award kind, with a very small number of subreddits, and a relatively small quantity of content, which is expected to have a very specific narrative nature. Further studies are necessary to better understand peer paid appreciation tokens.
In that regard, and as far as I am aware, the present work is the first academic work on the new Reddit's awarding mechanism in particular, and the first quantitative study on the diversification of paid appreciation tokens on social media platforms in general.
§ DATA
Subreddits represent communities of users who share a common interest, each developing its own specific identity, participants, and dynamics.
Hence, in order to answer the research questions, both at the platform and community levels, the first step was to select a representative sample of subreddits.
To this end, I selected the top 50 subreddits by number of subscribers as of January 1, 2019, with the following exclusion criteria: 1) it is an official subreddit of Reddit (not user-driven); and 2) it is marked as adult content, i.e., not safe for work (NSFW).
Only two subreddits were excluded based on these criteria, which were respectively: blog, the official Reddit's blog subreddit with 16.8M subscribers, most by default upon registration; and gonewild, a subreddit for the exchange of nude and sexually explicit photos with 1.7M subscribers.
Concerning Reddit posts, there are two kinds: submissions and comments.
A submission is posted directly to a given subreddit, it has a title, and either a text body or a link (e.g., to a website or an embedded multimedia file).
A comment represents textual content and is posted in response to a given submission or to another comment. Both submissions and comments can receive awards, but their dynamics are intrinsically different <cit.>.
Hence, awarding is analyzed separately for each kind.
For the retrieval of submissions and comments for the selected subreddits, I used the monthly dumps of the Reddit Dataset from Pushshift, a platform for the collection, analysis, and archiving of several social media <cit.>.
Regarding the data timeframe, less than one year passed between the introduction of the gilding and awarding mechanisms, with both being introduced in the third quarter, albeit neither mechanism was widely adopted immediately.
Therefore, I collected the subreddits' submissions for a period of six months for each mechanism starting from the calendar year following its introduction —thus excluding the first months of adoption— which corresponds respectively to the first halves of 2019 and 2020.
Henceforth these two periods and their respective datasets will be referred to as 2019H1 for the gilding and 2020H1 for the diversified awarding.
Concerning the comments, those made within the following two months (60 days) after their submission's creation were retrieved for every submission in the sampling.
The vast majority of Reddit comments are made within the first few hours of a submission <cit.>, thus this timespan should cover all of the comments made to a submission, save for a few exceptions.
In 2019H1, there are 7.47M submissions (6.9% of the grand total) and 100.48M comments. In 2020H1, there are 8.38M submissions (5.4% of the grand total) and 103.04M comments.
Finally, the awards received, if any, for each submission and comment were extracted.
It should be noted, however, that the identity of the giver and the time of awarding are not available.
Hence, in the case of 2019H1, for the analyses only the three gilding awards are taken into consideration, with any new non-gilding awards (most likely given during the transition period) being ignored.
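As a sketch of this extraction step (for illustration only: the file name is hypothetical, and the zstandard compression as well as the all_awardings/subreddit field names are assumptions about the public Pushshift dump format, not details reported here):

```python
import io
import json
import zstandard as zstd  # the monthly dumps are assumed to be .zst-compressed ndjson

GILDING = {"Silver", "Gold", "Platinum"}  # the three original gilding awards

def iter_posts(dump_path):
    """Stream JSON objects (submissions or comments) from a monthly dump."""
    with open(dump_path, "rb") as fh:
        reader = zstd.ZstdDecompressor(max_window_size=2**31).stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8", errors="ignore"):
            if line.strip():
                yield json.loads(line)

def extract_awards(post):
    """Return (award_name, times_given) pairs for one post (field name assumed)."""
    return [(a.get("name"), a.get("count", 1))
            for a in post.get("all_awardings") or []]

# Example: tally awardings and the gilding share for one subreddit in one dump
tally = {}
for post in iter_posts("RS_2020-01.zst"):  # hypothetical local file name
    if post.get("subreddit", "").lower() == "askreddit":
        for name, count in extract_awards(post):
            tally[name] = tally.get(name, 0) + count
gilding_share = sum(v for k, v in tally.items() if k in GILDING) / max(sum(tally.values()), 1)
```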
§ METHODS
First, I operationalized the research questions by defining objective measures of awarding level (RQ1) and award diversity (RQ2); then, I chose adequate statistical methods for these based on a preliminary analysis; finally, I conducted the respective data analyses by dataset, subreddit, and post kind, as detailed in the following paragraphs.
§.§ Measurement of Awarding Level
To describe the level of awarding and its possible increase after award diversification, the number of awards given seems to be the most straightforward measure. However, this measure would offer an insufficient view, as it is possible that the awards given increase but that the number of coins spent for them does not, i.e., more but cheaper awards are given. In addition, perhaps there is an increase in both awards and coins, but the proportion of awarded posts remains stable, i.e., the same proportion awarded of posts receives more awards and higher coin-value. Hence, for a more comprehensive description of awarding level (RQ1), I use the following three measures:
* Awards given: the count of awards given, regardless of their identity; also referred to as awardings.
* Coins spent: the sum of distinct award prices multiplied by the respective number of times it was given.
* Awarded posts: the number of posts that received at least one awarding.
In order to have standardized measures while comparing results —and given the relatively small shares of awarded content— henceforth all of the above measures are expressed per thousand posts (‰). At the subreddit level, dispersion is measured with the median absolute deviation (MAD), and to test the paired differences of awarding measures between 2019H1 and 2020H1, I used two-sided Wilcoxon signed-rank tests. Effect sizes are also given, both as percentage growth and paired median differences. For the latter, a 95% confidence interval (CI) is calculated via DABEST (data analysis with bootstrap-coupled estimation) <cit.>, based on 5000 resamples.
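The following minimal sketch illustrates these statistics on synthetic per-subreddit numbers; the hand-rolled bootstrap only approximates what the DABEST package reports:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

def paired_median_diff_ci(before, after, n_boot=5000, alpha=0.05):
    """Bootstrap CI of the paired median difference (approximates DABEST)."""
    diffs = np.asarray(after) - np.asarray(before)
    boots = [np.median(rng.choice(diffs, size=len(diffs), replace=True))
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return np.median(diffs), (lo, hi)

# one awarding measure per subreddit, in awards per thousand posts (synthetic numbers)
awards_2019h1 = rng.gamma(2.0, 2.0, size=50)
awards_2020h1 = awards_2019h1 * rng.uniform(1.5, 3.5, size=50)

stat, p = wilcoxon(awards_2019h1, awards_2020h1)  # two-sided by default
med, (lo, hi) = paired_median_diff_ci(awards_2019h1, awards_2020h1)
print(f"Wilcoxon p = {p:.2g}; paired median diff = {med:.2f} (95% CI {lo:.2f}; {hi:.2f})")
```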
§.§ Definition of Award Diversity
The definition and measurement of diversity have been the source of much terminological confusion <cit.>.
This is particularly the case in Ecology, in which the study of species diversity has been traditionally described with several diversity indices, such as Richness, Shannon index, and Gini-Simpson index <cit.>.
However, these indices measure different things: Richness is a count of distinct types (e.g., species), the Shannon index is an entropy, and the Gini-Simpson index is a probability <cit.>. In a seminal work <cit.>, Hill presented a formal discussion on the concept of species diversity and described it as the inverse of mean species proportional abundance; further, Hill also defined a unifying notation in which different means —harmonic, geometric and arithmetic— correspond to the aforementioned traditional diversity indices. However, the Hill diversity only became widely known decades later, after the publication of a highly influential scientific opinion piece by <cit.>, in which it was called the “true diversity”.
In its most common definition <cit.> —and substituting species by awards— the Hill diversity can be written as:
^qD = ( ∑_{i=1}^{A} p_i^q )^{1/(1-q)}
where A is the number of distinct awards (richness), p_i is the proportional abundance of award i, and the order q determines how much weight abundant awards receive.
In other words, the Hill diversity measures the mean rarity of awards in a sample, while also indicating that a community with rarer awards (on average) has higher diversity <cit.>.
Based on a preliminary analysis of 2020H1, there are few abundant awards (namely Silver and Gold) and many rare awards, thus setting q to 1 is a sensible choice (taken as a limit, since Eq. <ref> is undefined at exactly q = 1). In this case, we can use the better-known Shannon index,
H = -∑_{i=1}^{A} p_i ln(p_i)
and rewrite Eq. <ref> with q = 1 as ^1D = e^H, which is the award diversity (RQ2) used herein.
All of the sampled subreddits have at least one award by kind of post in each dataset, hence the values of ^1D are within a maximum of Richness A and a minimum of 1. It should thus be noted that, unlike traditional diversity indices, if some proportion of a subreddit’s awards were randomly removed, the Hill diversity decreases by that proportion <cit.>.
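A minimal sketch of this computation, assuming the awardings of a subreddit have already been extracted as a list of award names:

```python
import math
from collections import Counter

def award_diversity(award_names, q=1):
    """Hill diversity ^qD of a sequence of award names (one entry per awarding)."""
    counts = Counter(award_names)
    total = sum(counts.values())
    p = [c / total for c in counts.values()]
    if q == 1:
        # limit case: ^1D = exp(H), with H the Shannon index
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1 / (1 - q))

# few abundant awards and several rare ones, as observed in 2020H1
sample = (["Silver"] * 60 + ["Gold"] * 25 + ["Wholesome"] * 5 +
          ["Take My Energy"] * 5 + ["Helpful"] * 5)
print(round(award_diversity(sample), 2))   # ^1D
print(award_diversity(sample, q=0))        # equals the Richness A (here 5)
```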
§ RESULTS
§.§ Awarding Levels
After diversification, the awarding levels increased considerably for all three measures and both post kinds. Based on a paired Wilcoxon signed-rank test at the subreddit level, this increase was significant in all cases (p ≪ .001).
Regarding the growth at the dataset level for both post kinds (see Table <ref>), awards given was the measure that grew the most, with an average of +175.5%; then coins spent with an average of +122.5%; and lastly awarded posts with an average of +121%.
Hence, the number of coins spent per award given decreased after diversification: in 2019H1 it was 381 for submissions and 295 for comments, while in 2020H1 it was 292 for submissions and 249 for comments, which represents a respective decrease of 23.4% and 15.7%.
In other words, cheaper awards were the ones that became more popular.
Interestingly, at the dataset level the number of submissions is an order of magnitude lower than the number of comments, but in terms of awarding measures is the opposite: submissions have an awarding level an order of magnitude higher than comments.
At the subreddit level, the median values of awarding measures reflect similar considerable increases, albeit the awarding effect sizes after diversification are less spread for comments than submissions (see Table <ref>).
Indeed, we can see in Figure <ref> that the distribution shapes of the awarding measures for both are noticeably different.
Moreover, for comments we can see that there is a remarkable inversion of skewness after diversification: the distribution on a logarithmic scale goes from positively skewed in 2019H1 to negatively skewed in 2020H1.
In other words, before diversification most subreddits had awarding levels for comments below the subreddit-level mean, while afterward is the contrary. Submission awarding measures also manifest such change in skewness, but in a much less remarkable way.
To discern if the awarding increase was related to the new non-gilding awards and not only to an increase in the old gilding-only awards, the number of awards given by gilding-status was also analyzed.
At the dataset level, the number of gilding-only awards given experiences an increase after diversification, with +31.7% for submissions and +48.3% for comments.
At the subreddit level (see Figure <ref>), the median awards given in 2020H1 was 11.04‰ for submissions and 1.86‰ for comments.
Based on a paired Wilcoxon signed-rank test, the effect is significant in both cases (p<.001), with a paired median difference of 3.06 with 95% CI (1.86; 4.32) for submissions, and 0.66 with 95% CI (0.58; 0.79) for comments.
In 2020H1, the 3 gilding-only awards comprised a sizeable portion of the awardings compared to the 450 non-gilding awards. At the dataset level, gilding-only awardings represented 44.5% for submissions and 58.2% for comments, with median values at the subreddit level being almost the same.
As illustrated in Figure <ref>, of the 50 sampled subreddits, almost half (23) preferred gilding awards for submissions, while this preference extends to the vast majority (46) of subreddits for comments. Naturally, this preference impacts the diversity of awards in 2020H1.
§.§ Award Diversities
At the dataset level, the diversity ^1D in 2019H1 was 2.45 for submissions and 2.17 for comments, while in 2020H1 it was respectively 28 and 15.2, which represents a growth of +1043.32% for submissions and +598.95% for comments. At the subreddit level, in 2019H1 the median value was 2.45 ± 0.12 for submissions and 2.13 ± 0.08 for comments, while in 2020H1 it was 19.3 ± 5.36 and 12.8 ± 2.25, respectively. The paired median difference was 16.85 with 95% CI (15.72; 19.25) for submissions, and 10.67 with 95% CI (10; 11.45) for comments. Hence, and as illustrated in Figure <ref>, in both datasets submissions had on average a higher diversity, while also having a higher ^1D increase after diversification in both spread and median values with respect to comments. Interestingly, the diversity between both datasets was modestly correlated at the comment level (τ = .22, p=0.02) but not at the submission level (τ = .04, p>0.6).
As mentioned before, in 2020H1 the old gilding awards had (in average) a higher proportion compared to the new non-gilding awards.
In fact, Silver and Gold are respectively the first and second most given awards, both before and after diversification. The three gilding awards are also part of the 14 awards that are present in all of the 50 sampled subreddits, with Silver and Gold being at least an order of magnitude higher in terms of total awards given compared to the rest of the awards (see Figure <ref>). It should be noted that, although the proportions of the three gilding awards did not suffer significant changes after diversification, Platinum (the most expensive gilding award) remained among these 14 common awards, but it fell to the eighth place by total awards given.
In 2020H1, there are a total of 453 awards. The median coin price is 500 (mean=1436), with the minimum price being 10 and the maximum 50,000.
Only 69 of these awards (15.23%) included a coin reward, with a median of 100 coins (mean=434.9), a minimum of 5, and a maximum of 5000.
There was a significant (p<.01) moderate positive correlation between subreddit diversity and number of posts for both submissions (τ=.22) and comments (τ=.19).
A considerable number of awards (165) were given only once, with the median being 5 times (mean = 1066). Most awards (257) only appear in one of the fifty subreddits (mean=11.68). The top three awards by their given times were Silver (162,652 times), Gold (86,830 times), and Take My Energy (21,625 times). This data distribution is illustrated in Figure <ref>.
§ DISCUSSION
§.§ The more the better
The analysis results indicate that the diversification of Reddit Awards attained its goal of increasing platform-wise awarding. On average, all of the post awarding measures had a growth of +140% in one year.
As a comparison, in the pre-Coins gilding study mentioned in <ref> with the top 100 subreddits by number of comments for the month of May of 2015, the number of gilded comments at the dataset level was 0.411‰ <cit.>.
Thus, in circa four years —and after an extension to three awards— there was a growth of only +116% for gilded comments, while the equivalent one-year increase after diversification for awarded comments was +131% (see Table <ref>).
Concerning the diversity of awards, the increase was important but not as noticeable as one might have expected, albeit submissions had a more noticeable increase in diversity compared to comments. Most awards in the dataset were given only a few times, although some other new awards found great success, as illustrated in Figure <ref>.
Indeed the growth in awarding was not all thanks to the new awards. The three gilding awards also grew slightly, and both Silver and Gold continued to be the most popular awards.
There might be a few reasons for the popularity of these awards.
First, there are the price and perks: Silver costs 100 coins and has no perks; Gold costs 500 coins and includes a reward of 100 coins and a week of ad-free browsing; and, for completeness, the coin price of Platinum is 1800, with ad-free browsing and 700 coins as reward, both valid only for a month.
Gold was the first and original award, and at the moment of writing is the default award upon awarding (see Figure <ref>), thus there is a sense of familiarity; the same holds, to a lesser extent, for Silver. As a matter of fact, Silver had its origin as an inside joke among users. It was an image of a poorly drawn and engraved silver medal, which was linked in comments as a humorous statement that the content was appreciated but not sufficiently for Gold, or as a token of appreciation for which the “giver” had no sufficient funds. Reddit ran with the idea and made the Silver award for the Reddit Coins introduction, retaining the original crude drawing and the lack of benefits; it has now become by far the most given award platform-wise. The case of Silver highlights the fact that users might be driven to use a particular award by its meaning (explicit or implicit), rather than by its price or perks.
§.§ Limitations
This study has several limitations, all of which might inspire future work on the subject. The sampling method might have given biased results at the subreddit level; perhaps smaller subreddits are more or less generous in their awarding, although at the dataset level the sampled subreddits represent an important and diverse share of the platform's content.
Similarly, the less numerous yet important category of NSFW subreddits might have resulted in different awarding dynamics due to its mature content. For instance, gonewild, the only NSFW community present in the initial top 50 rank (see <ref>), has some peculiarities.
According to a pre-Coins study on the subreddit <cit.>, circa 25% of a sample of 90 females users (the vast majority of submissions are by women) with 3,454 individual photos was gilded at least once, and had 23 comments per photo (the vast majority by men). These female users were chosen evenly by both account age and activity, although no more details are given regarding content gilding. However, I think that this NSFW subreddit is an outlier, taking into consideration its amateur nature, sexually provoking dynamic among users, and immense popularity (it is the only such subreddit ranked in the top lists).
Another sampling limitation concerns the timeframe of the data collected; given that only a few months had passed after diversification in 2020H1, perhaps the proportion between gilding and non-gilding content could have been different for a later time period, albeit unrelated exogenous phenomena might have confounded the results.
Endogenous phenomena might have also affected the study results. In particular, the graphical user interface might have a significant impact on which awards are chosen, if any.
As can be seen in Figure <ref>, during award selection there are several tabs which group the awards, and only a small subset of the awards is immediately visible upon opening the selection dialog.
It is most likely that the item disposition in this section is carefully monitored, since Reddit revamped its approach to graphical design and user interaction a few years ago.[<https://www.wired.com/story/reddit-redesign/>]
§.§ Implications for social media platforms
Overall, results offer strong supporting evidence for the benefits of diversifying paid appreciation tokens, which might represent an advisable monetization strategy of user-generated content for other social media platforms that already use paid appreciation tokens or are interested in them.
In particular, the case of Reddit Awards is of relevance to platforms centered around user-generated content for and by peers, such as discussion forums and question-and-answers websites.
Moreover, peer awards could complement or substitute existing achievement badges —which are automatically given by the system and are non-paid— such as those present in the platform Stack Overflow as way to gamify and engage the question and answer nature of its user-generated content <cit.>.
Careful attention should be paid to user involvement in the design and development of peer award mechanisms, however.
Indeed, significant changes in the design of a peer user-generated content platform sometimes might have catastrophic consequences, such as with the redesign of the question-and-answers AnswerBag in 2009 <cit.> and the infamous fourth version of the news aggregator Digg in 2010 <cit.>, with the latter provoking a mass migration towards Reddit and even becoming a meme.[<https://knowyourmeme.com/memes/events/digg-v4>]
This event was in fact one of the main drivers for the aforementioned revamped design approach of Reddit.
Important new features such as Reddit Awards now have a period of pilot testing and community feedback before their platform-wise deployment, so as to lead to a successful adoption by users.
Furthermore, Reddit Awards have been extensively promoted and are considered to be key for the future of the platform, as stated by Reddit administrator venkman01 in an official company statement:[<https://redd.it/hmdwxs>]
Awarding is an important part of our direct-to-consumer revenue; it complements advertising revenue and gives us a strong footing to pursue our mission into the future. By giving awards, users not only recognize others but also help Reddit in its mission to bring more community and belonging to the world.
It is likely that this future also includes the offering of Reddit Awards (or other paid appreciation tokens) as digital property via the use of non-fungible tokens (NFTs) and blockchain technology. An NFT is a unique identifier recorded in a distributed ledger (e.g., a blockchain) that can be used to certify the ownership of digital objects, which have recently gathered much hype due to high-profile sales <cit.>. In fact, Reddit has recently started to experiment with the selling of NFTs, in the form of unique profile avatars.[<https://nft.reddit.com>] I thus foresee that, despite the legal issues with virtual currency and goods mentioned in <ref>, in the near future social media platforms will at least explore expanding their revenue streams by going to the extreme of diversification and offering paid appreciation non-fungible tokens, selling the idea of “a unique object for a unique content recognition”.
§ CONCLUSION
Paid appreciation tokens have become one of the many strategies with which social media platforms are retaining and monetizing user-generated content. This work focused on the diversification effects of Reddit Awards, a peer awarding mechanism in which any user can be content creator and content consumer at the same time, without the expectation of financial gain. To this end, a set of measures to characterize and operationalize awarding and diversification were defined and analyzed, which could be further adapted to investigate similar paid appreciation tokens in other platforms. Results indicate that providing more award options brings a benefit to users —who obtain entertainment from seeing awards on others' posts, as well as satisfaction in receiving or giving awards— and the platform —which increases its revenue. Still, already familiar awards remained highly popular, thus attention should be paid to these when introducing changes. Being one of the few studies on the topic, the current work has several limitations. Further research is thus needed to better understand the user design implications of peer awarding mechanisms in social platforms.
|
http://arxiv.org/abs/2307.04023v1 | 20230708180031 | SDT: A Low-cost and Topology-reconfigurable Testbed for Network Research | [
"Zixuan Chen",
"Zhigao Zhao",
"Zijian Li",
"Jiang Shao",
"Sen Liu",
"Yang Xu"
] | cs.NI | [
"cs.NI",
"cs.PF"
] |
Zixuan Chen†, Zhigao Zhao†, Zijian Li†, Jiang Shao†, Sen Liu†, Yang Xu†∗
{zxchen20, zgzhao20, lizj21, jshao20, senliu, xuy}@fudan.edu.cn
†School of Computer Science, Fudan University, Shanghai, China
Institute of Fintech, Fudan University, Shanghai, China
Peng Cheng Laboratory, Shenzhen, China
SDT: A Low-cost and Topology-reconfigurable Testbed for Network Research
* Corresponding author: Yang Xu.
This paper will be published in IEEE CLUSTER 2023. Preview version only.
===================================================================================================================================================================================
Network experiments are essential to network-related scientific research (e.g., congestion control, QoS, network topology design, and traffic engineering). However, (re)configuring various topologies on a real testbed is expensive, time-consuming, and error-prone. In this paper, we propose Software Defined Topology Testbed (SDT), a method for constructing a user-defined network topology using a few commodity switches. SDT is low-cost, deployment-friendly, and reconfigurable, which can run multiple sets of experiments under different topologies by simply using different topology configuration files at the controller we designed. We implement a prototype of SDT and conduct numerous experiments. Evaluations show that SDT introduces at most 2% extra overhead on multi-hop latency compared with full testbeds and is far more efficient than software simulators (reducing the evaluation time by up to 2899x). SDT is more cost-effective and scalable than existing Topology Projection (TP) solutions. Further experiments show that SDT can support various network research experiments at a low cost on topics including but not limited to topology design, congestion control, and traffic engineering.
Testbed, reconfigurable topology, network evaluation
§ INTRODUCTION
As the main bottleneck of Data Centers (DCs), Data Center Networks (DCNs) have attracted much research attention from both industry and academia <cit.>. There exist some commonly used DCN topologies that are scalable and cost-effective, including Fat-Tree <cit.>, Dragonfly <cit.>, Torus <cit.>, BCube <cit.>, and HyperBCube <cit.>. Research on DCNs, including congestion control mechanisms, routing algorithms, deadlock avoidance functions, etc., should apply to most of these topologies (or at least some of them) for better generality (e.g., <cit.>). There are also many pieces of state-of-the-art research on optimizing the physical topology to improve the performance of applications such as Distributed Machine Learning (DML) <cit.>. All of these require a testbed that can support multiple topologies to verify the effects of each mechanism.
It is not easy to support multiple topologies at the same time and do reconfiguration among them. First, building a topology such as Fat-Tree can be complex. For example, it needs 20 4-port switches and 48 cables to deploy a standard Fat-Tree topology supporting only 16 nodes (Figure <ref>). In addition, it is more complicated to support different topologies and reconfigurations simultaneously. Connections are error-prone and difficult to check when reconfiguring. Although emulators (e.g., Mininet <cit.>, Open vSwitch <cit.>, OpenStack <cit.>) can simulate a variety of topologies, they still have some obvious drawbacks such as long simulation time and insufficient authenticity of results. Therefore, deploying a full testbed for evaluation is crucial and irreplaceable, even if it is hard to make.
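As a back-of-the-envelope check of these counts, the sketch below computes the switch and cable budget of a k-ary Fat-Tree, assuming the 48 cables quoted above include the 16 host links:

```python
def fat_tree_cost(k):
    """Switch and cable counts for a k-ary Fat-Tree (host links included)."""
    hosts = k ** 3 // 4
    edge_sw = aggr_sw = k * k // 2          # k pods with k/2 switches of each kind
    core_sw = (k // 2) ** 2
    switches = edge_sw + aggr_sw + core_sw  # = 5k^2/4
    cables = hosts + k * (k // 2) ** 2 + k ** 3 // 4   # host + edge-aggr + aggr-core
    return switches, cables, hosts

print(fat_tree_cost(4))   # -> (20, 48, 16), matching the example above
```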
As far as we know, a qualified real-world testbed requires several characteristics, including fast topology reconfiguration, cost-friendly deployment, and convenient maintenance. The challenges in designing such a testbed lie in how to support topology reconfiguration, preferably without manual switching of cables; how to reduce the cost of the test platform, including hardware and labor costs; and even how to support user-defined topologies, rather than being limited to the existing commonly used topologies.
Switch Projection (SP) is a solution for constructing topologies for network experiments, but it needs heavy manual effort. The good news is that Micro Electro Mechanical System (MEMS) optical switches can be used to build reconfigurable network topologies <cit.>. Thanks to their reconfigurable and lossless bi-switching property, they can take the place of SP's manpower. We call the SP with MEMS optical switches the “Switch Projection-Optical Switch (SP-OS)”. SP-OS can construct user-defined topologies and support real-time reconfiguration without manual operations. However, it still has certain disadvantages, such as high cost and poor expandability. Considering the above characteristics and challenges, we propose a topology-reconfigurable testbed named Software Defined Topology Testbed (SDT) without costly optical switches to achieve lower cost and better scalability.
In short, the contributions of the paper are
* We summarize the methodology of Topology Projection (TP) and propose SDT, a testbed solution for building real topologies. SDT uses commodity OpenFlow switches to construct various topologies. Once the connection deployment is completed, the topology (re)configuration can be finished in a short time without manually changing the physical connections or using optical switches (Figure <ref>).
* We develop an easy-to-use SDT controller supporting user-defined topologies. Users can develop their routing strategy or other new technologies with the SDT controller. The transformation process from logical topology to physical topology is fully automated.
* We compare SDT with existing TP methods, and SDT shows better cost-effectiveness and scalability. We use real applications to evaluate 1) the latency and bandwidth differences compared with the full testbed and 2) the Application Completion Time (ACT) and time consumption compared with the simulator. Evaluations show that SDT has only a 0.03-2% deviation in latency compared to the full testbed and reduces the evaluation time by up to 2899x compared to the simulator in a 16-second HPC benchmark for communication efficiency with 32 nodes.
* We further implement some prevalent network functions on SDT, including routing strategy, deadlock avoidance, and congestion control. SDT shows substantial flexibility in network evaluations.
The rest of the paper is organized as follows. We introduce the related works in <ref>. We present the motivation and design of SDT in detail in Sections <ref> and <ref>. A prototype of SDT controller is introduced in <ref>. The accuracy and efficiency of SDT are evaluated in <ref>, with some state-of-the-art network functions implemented. We discuss SDT in <ref> and conclude the paper in <ref>.
§ RELATED WORKS
§.§ Reconfigurable Networks
To better allocate link bandwidth in response to the non-uniform traffic often present in DCNs, some researchers propose reconfigurable networks, which can dynamically adjust links based on real-time network traffic to better serve hot node pairs (nodes with heavy traffic). These reconfigurable networks are often implemented with optical devices, which can offer lossless bi-switching capabilities. The optical devices used in reconfigurable networks can mainly be categorized into MEMS-based optical switches and other specialized optical devices (e.g., free-space optics and optical devices that forward based on light wavelength).
§.§.§ Reconfigurable Networks based on MEMS Optical Switch
MEMS optical switches use several tiny mirrors on a silicon crystal to forward light between different fiber interfaces. These tiny mirrors form micromirror arrays, which work as a reconfigurable static crossbar by rotating.
MEMS optical switches have been put into practical usage very early, and the technology is relatively mature and less error-prone. Therefore, early reconfigurable networks, such as c-Through <cit.> and Helios <cit.>, use MEMS optical switches to build reconfigurable networks. However, MEMS optical switches still have drawbacks, such as their relatively large reconfiguration delays (about 100ms) and high hardware costs.
§.§.§ Reconfigurable Networks based on Customized Optics
To achieve faster reconfiguration, researchers have proposed other customized optical devices, such as the Free Space Optics used in Firefly <cit.> and ProjecToR <cit.>, which reflect the laser propagating in the air with mirrors that can perform faster angle adjustments to complete the reconfiguration. This kind of network can achieve reconfiguration as fast as 12 μs, but it is easily disturbed by the environment, which causes significant optical path shifts and makes the deployment impossible.
In addition, Sirius <cit.> uses Arrayed Waveguide Grating Router (AWGR) to forward the input light of different wavelengths to the corresponding output ports to complete the reconfiguration. However, this method needs to be used with a highly customized tunable laser that can quickly generate lasers of different wavelengths, which is also less practical.
Besides these, there are some other similar customized-optics-based fast reconfiguration works like <cit.>.
§.§ Network Evaluation Tools
Network researchers have developed and used many network evaluation tools in the past few decades. We roughly divide them into 1) simulator, 2) emulator, and 3) testbed. They have played a significant role in the progress of network technologies, but they also have certain disadvantages.
§.§.§ Simulator
Existing network simulation tools such as NS-2 <cit.>, NS-3 <cit.>, OPNET <cit.>, OMNET++ <cit.> and GloMoSim <cit.> offer efficient and cost-effective ways to evaluate the network performance under different conditions. However, compared with the testbed, they lack both scalability and reality. Simulators may take several days to complete one simulation, and they also suffer from the lack of ability to simulate various random situations that might occur in real networks.
§.§.§ Emulator
The primary goal of network emulators such as Mininet <cit.> with Open vSwitch (OVS) <cit.> and Netem <cit.> is to create an environment whereby users can flexibly combine the VMs, applications, products, and services to perform a relatively more authentic simulation. However, the performance of emulators is poor in the high bandwidth environment (10Gbps+) or medium-scale topologies (containing 20+ switches) due to the limitation of the system resources. Besides, emulators cannot do everything we want, e.g., Mininet has no official support for Priority-based Flow Control (PFC), even though PFC is already a standard feature.
As a widely used cloud computing infrastructure software, OpenStack <cit.> can be used to build a set of computing nodes with specific topologies using commodity servers and switches. However, the construction of topology on OpenStack is still virtualized by OVS. As a result, the network topology on OpenStack has scalability and reality problems and will be limited by the bandwidth.
§.§.§ Testbed
Existing testbed platforms available to researchers include Emulab <cit.>, CloudLab <cit.> and PlanetLab <cit.>, which have made considerable progress in making testbeds as easy to use and control as simulation. Nevertheless, their drawbacks are also obvious. Whether virtualization is used or not, the reconfiguration of the testbed requires heavy manual operations. Several testbeds dedicated to wireless environments have been proposed, such as TWIST <cit.> and DRIVE <cit.>. These works mainly consider wireless environments, which do not apply to DCN-related experiments.
§ MOTIVATION AND BACKGROUND
This section firstly introduces our motivation for “Topology Projection (TP)”. Then, we summarize a straightforward solution named Switch Projection (SP). The SP can support TP easily but can not be reconfigured without manpower. MEMS optical switches can be introduced for topology reconfiguration, which is introduced at the end of this section with the name Switch Projection-Optical Switch (SP-OS).
§.§ Why Do We Need the SDT?
By comprehensively considering the pros and cons of three types of existing network evaluation tools (Table <ref>), we find that they are generally unable to achieve high-performance and low-cost evaluations for various network topologies. Although the simulation is easy to operate and the cost is relatively small, its scalability is limited by the high time cost. As the number of nodes increases and the network traffic grows, the simulation time can be thousands of times longer than the real-world ACT. Testbeds are needed to get better evaluation scalability and efficiency. However, the deployment expenses of testbeds are high and even unacceptable for researchers.
Therefore, we want to construct a system that performs almost the same as the full testbed with high efficiency and scalability. The system should support fast reconfiguration among various topologies without changing the physical connections under an acceptable budget. That is why we present SDT. The efficiency of SDT is close to full testbeds without any manual operation during reconfiguration and with lower hardware costs.
§.§ A Possible Solution: Switch Projection
Some works (e.g., <cit.>) use a switch to construct a simple topology for evaluation. We call this method of constructing a topology “TP”. SDT is also a TP method.
The main idea of traditional TP is to project the topologies by using the logical switch as a meta unit. The right side of Figure <ref> is the topology we want to construct, which is a part of a 2D-Torus. We call this “logical topology”. The radix of the switches in this logical topology is 4, i.e., every logical switch has 4 ports. The physical switch can be divided into sub-switches based on the radix. As a result, each sub-switch has 4 ports as well. After that, we can use these sub-switches for the topology projection.
We call this type of TP “SP” and conclude its general approach here. The first step of SP is dividing one physical switch into multiple sub-switches. Then we project the sub-switches to the logical switches in the topology, which is why this method is called SP. After the projection, we manually connect these sub-switches' corresponding ports to build the topology. We can use Software-Defined Networking (SDN) functions (e.g., flow tables in the OpenFlow switch) to divide the sub-switches.
Take Figure <ref> as an example of how SP works. We first divide and project the sub-switches. Ports 1-4 on the physical switch are considered on one sub-switch, so we project them to an arbitrary logical switch e.g., switch 1. Ports in the logical switch 1 are numbered based on the projected ports from the physical switch. The operations are the same for other sub-switches.
We then connect the cables between specific sub-switch ports based on the logical topology. For example, in the logical topology, there is a link between ports 3 and 9 (i.e., Link (A)). We connect the corresponding ports on the physical switch. After all the links are made, it is time to deploy the flow table (we use OpenFlow in this paper) to restrict the packet forwarding domain on the physical switch based on the ports' labels. For instance, data packets entering port 1 can only be forwarded to ports 2-4. The restrictions are based on the partition of sub-switches.
§.§ Make SP Topology-reconfigurable
The manual operations required for SP on topology reconfiguration are massive. We have to re-connect the cables manually on every topology reconfiguration, which is error-prone. As the topology size increases, the difficulty of deployment increases correspondingly. Therefore, we introduce MEMS optical switches into SP to reduce labor costs. The new design is called SP-OS.
The optical switch can replace manual operations on the reconfiguration. We connect all the ports on the physical switch to the optical switch (Figure <ref>). When the topology needs to be reconfigured, modifying the configuration of the optical switch based on the labels can replace the manual operations. The advantage of SP-OS is that once the testbed is deployed, all reconfigurations can be done remotely by software control.
The introduction of optical switches leads to increased hardware costs. Optical devices are generally costly. The price of a 320-port MEMS optical switch is more than $100k, and only 160 LC-LC[Lucent Connector (LC).] fibers can be connected. As the number of ports on the optical switch increases, the price increases significantly. SDT can work without optical switches, which provides significant savings.
TurboNet <cit.> is another topology-reconfigurable SP method for TP, which replaces manual reconnection with the Tofino switch's loopback ports. However, the use of loopback ports results in a reduction in the available bandwidth of the switches <cit.>. We compare the scalability between TurboNet and SDT in <ref>.
§ THE DESIGN OF SDT
In this section, we first introduce the fundamental design of SDT on a single switch. Then, we expand the SDT to multiple switches to support larger topologies. We also address the issue of topology partitioning in multi-switch deployments.
§.§ SDT on a Single Switch
Although SP-OS can support automated topology reconfiguration, its cost is relatively high due to the introduction of optical switches. Therefore, we design the SDT, which can provide the same functionality as SP-OS but without optical switches.
The main idea of SDT is to use Link Projection (LP) rather than SP to construct the logical topology on a physical switch. SDT first projects physical links[To construct a physical link, we connect two arbitrary ports on the switch. In the paper, the switch's upper and lower adjacent ports are connected for simplicity.] to logical ones on the topology, and then number the ports on the logical topology based on the projected ports from the physical switch. Taking Figure <ref> as an example, the physical links A and B are projected to the logical topology, and then the corresponding ports in the logical topology can be tagged with 1, 2, 3, and 4, respectively.
After the projection, we group ports on the physical switch into different sub-switches based on the relationship of their counterparts in the logical topology. For instance, in Figure <ref>, ports 1, 3, 5, and 7 in the topology form a logical switch, so the corresponding ports 1, 3, 5, 7 in the physical switch should be grouped in the same sub-switch. We use OpenFlow flow tables to keep the packets entering this sub-switch only forwarded to their corresponding forwarding domain. The other sub-switches are divided according to these steps as well.
Please note that no optical switch is needed when the topology is reconfigured in SDT.
Here we summarize the fundamental differences between SP-OS and SDT.
* In SP-OS, sub-switch partitions are determined arbitrarily (the only constraint is that the radix of sub-switches should match the radix of logical switches in the topology). MEMS optical switches are used to (re)connect links between these sub-switches based on the topology's logical switches (projected by SP).
* In SDT, physical links on the physical switch will remain fixed once constructed (which can be arbitrary). The sub-switches are (re)partitioned based on the result of LP. Rules in the flow tables of the OpenFlow switch can be used to realize the sub-switch partition, and no optical switch is needed during a topology reconfiguration.
The size of the logical topology supported by SDT is limited by the number of ports on the physical switch. A topology can be appropriately built if the total number of ports in the topology is less than or equal to the number of ports on the physical switch (excluding the ports connected to the end hosts). This constraint applies to all TP methods.
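The bookkeeping behind this link projection can be sketched as follows (an illustration of the idea only: it handles self-links and omits node-facing ports and inter-switch links, which the actual SDT controller also manages):

```python
from collections import defaultdict

def project_links(physical_links, logical_links):
    """Map each logical link (u, v) between logical switches u and v onto one
    physical self-link (port_a, port_b), then group the physical ports into
    sub-switches and derive the per-port forwarding domain."""
    if len(logical_links) > len(physical_links):
        raise ValueError("not enough physical links for this topology")
    groups = defaultdict(set)                      # logical switch -> physical ports
    for (u, v), (pa, pb) in zip(logical_links, physical_links):
        groups[u].add(pa)                          # pa plays the role of a port of u
        groups[v].add(pb)                          # pb plays the role of a port of v
    # a packet entering a port may only leave on ports of the same sub-switch,
    # which is what the OpenFlow flow rules enforce
    domain = {p: sorted(ports - {p}) for ports in groups.values() for p in ports}
    return dict(groups), domain

# toy example: self-links wired as (1,2), (3,4), (5,6), (7,8);
# logical topology: a ring of 4 switches 0-1-2-3-0
groups, domain = project_links([(1, 2), (3, 4), (5, 6), (7, 8)],
                               [(0, 1), (1, 2), (2, 3), (3, 0)])
print(groups)   # sub-switch port groups, e.g. {0: {1, 8}, 1: {2, 3}, ...}
print(domain)   # e.g. port 1 may only forward to port 8, and vice versa
```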
§.§ SDT on Multiple Switches
When one switch is insufficient to project the entire logical topology, multiple switches are needed to use. In SP-OS, it is not difficult to expand the supported logical topology by adding more switches and optical devices. The expansion of SDT is also relatively simple but requires additional discussions below.
To construct the multi-switch scenario, we need to cut the logical topology into several sub-topologies, each maintained independently by one physical switch.
There are two different types of links in multi-switch SDT. We call the links between the upper and lower adjacent ports of one switch self-links. For those links across the sub-topologies, we project them from the links across physical switches and call them inter-switch links. For instance, the topology has been cut into two sub-topologies on the right side of Figure <ref>. The links inside each sub-topology are self-links, and the links between the two sub-topologies are inter-switch links.
There is a requirement for the number of inter-switch links. Taking Figure <ref> as an example, the scale of the logical topology is larger than the previous one. As a result, one 64-port switch cannot build this topology, but two can make it. To build the topology, we divide the topology into two sub-topologies. How to divide the topologies is discussed in Sec. <ref>.
Here we use the formula to represent the inter-switch links. Define topology (graph) G(E, V) as the logical topology we hope to build, and the sub-topologies are G_A(E_A, V_A) and G_B(E_B, V_B). E_nA represents the links to nodes on the physical switch A, E_sA represents the self-links on the physical switch A, and E_aAB represents the inter-switch links between the physical switches A and B. In the logical topology, there is a relationship: E = E_n + E_s. For sub-topologies after being divided, they have
E_A = E_nA + E_sA
E_B = E_nB + E_sB
V = V_A + V_B
For inter-switch links, the following equation exists.
E_aAB = E_aBA = E - E_A - E_B
We can now determine the number of inter-switch links for the logical topology by Eq. <ref>. For the case in Figure <ref>, there are 8 inter-switch links between the two sub-topologies, which means at least 8 inter-switch links are required to construct this topology.
The reservation of inter-switch links is flexible, but it must fulfill the requirements of the desired topologies and the specifications of physical switches. Taking Figure <ref> as an example, we aim to construct a 4x4 2D-Torus topology (the connections to nodes are omitted for simplicity). When the number of ports on physical switches is greater than 64, only 1 switch is necessary. When the number of ports exceeds 32 but is less than 64, 2 switches are required to build the topology, as shown on the left side of Figure <ref>. Each switch is assigned 12 self-links and 8 inter-switch links in this scenario. When the number of ports is less than 32 but greater than 16, we can build it with 4 switches. Attention must be paid to determining the switches at both ends of the inter-switch links according to the partition results.
It is worth noting that even if the partitioning methods are different, the results of TP are almost the same. Nevertheless, a proper cutting method enables the testbed to support more topologies without manual modifications. In the implementation, if we need to perform experiments on multiple topologies, we generally divide the topologies in advance based on the specifications of the switches (port number, flow table limitations, etc.) to obtain a proper number of inter-switch links between different switch pairs, i.e., to keep the number of inter-switch links between the different switch pairs about the same. The reserved inter-switch links usually come from the maximum number of inter-switch links required among all topologies.
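A quick port-budget check makes these cases concrete (a necessary condition only; the chosen partition must still fit each switch, as discussed in the next subsection):

```python
import math

def min_switches(logical_links, node_count, ports_per_switch):
    """Every logical link consumes two physical ports (either a self-link or
    one port on each of two switches); node-facing ports come on top."""
    ports_needed = 2 * logical_links + node_count
    return math.ceil(ports_needed / ports_per_switch)

# 4x4 2D-Torus with node connections omitted, as in the example: 32 logical links
for ports in (64, 40, 20):
    print(f"{ports}-port switches: at least {min_switches(32, 0, ports)}")
# -> 1, 2, and 4 switches, matching the cases discussed above
```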
§.§ Topology Partition for SDT on Multiple Switches
The partition of the logical topology needs to be discussed. We define the function “Cut(G(E, V), params...)” for dividing the topology. The input of the function is the logical topology G(E, V), switch parameters, and the number of switches. The output is the partitioning method that satisfies the requirements of all the topologies we aim to build and the number of links of each type to be assigned. The problem is represented with switches and nodes as vertices and logical links as edges. The logical topology can be described as an undirected graph. To achieve the partitioning, we apply a graph partitioning algorithm that splits the graph into sub-graphs.
The partition of the graph needs to meet certain requirements. The first is that the number of inter-switch links should be small, for the inter-switch links are relatively more complicated than self-links. With this requirement, one initial idea is to use the “Min-cut” partitioning algorithm to divide the topology. The target is to minimize Cut(E_A, E_B) = ∑_{u∈V_A, v∈V_B} w(u, v), where w(u, v) = 1 for every link between u and v.
Besides this, we also want to keep the number of used links (or ports) per physical switch as balanced as possible. It is beneficial to balance the number of ports and links of each physical switch in terms of resource usage and the complexity of connecting ports to nodes. However, Min-cut partitioning cannot work well under this condition. Figure <ref> shows the differences between these partitioning methods. Another graph partitioning objective is needed, namely to minimize α × Cut(E_A, E_B) + β × ( 1/|E_A| + 1/|E_B| ), where |E_A| = ∑_{e∈E_A} 1 is the number of links assigned to switch A.
To summarize the requirements for the SDT partitioning algorithm, the graph partitioning algorithm should 1) minimize the number of edges between sub-graphs and 2) balance the number of edges within each sub-graph. Meeting these requirements is a proven NP-hard problem, and algorithms such as RatioCut <cit.> or minimize normalized cut (NCut) <cit.> can be used to solve it. In practice, we use the widely-used METIS library <cit.> with these constraints to perform the partitioning of the topology, and the results are usually satisfactory. When multiple topologies need to be evaluated in real-world experiments, we perform graph partitioning for all topologies and then select the maximum number of inter-switch links as the reference for deployment on the physical topology.
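This partitioning step can be sketched with the pymetis bindings (one possible interface to METIS; METIS' default balancing acts on vertices rather than on the exact edge-balance objective above, so this is an approximation of the procedure used in SDT):

```python
import networkx as nx
import pymetis   # one possible set of Python bindings for METIS

def partition_topology(G, n_switches=2):
    """Split the logical topology G across physical switches and report the
    inter-switch links the partition requires."""
    nodes = list(G.nodes())
    index = {n: i for i, n in enumerate(nodes)}
    adjacency = [[index[m] for m in G.neighbors(n)] for n in nodes]
    _, membership = pymetis.part_graph(n_switches, adjacency=adjacency)
    parts = [{nodes[i] for i, part in enumerate(membership) if part == k}
             for k in range(n_switches)]
    inter = [(u, v) for u, v in G.edges()
             if membership[index[u]] != membership[index[v]]]
    return parts, inter

# the 4x4 2D-Torus example: 32 logical links in total
G = nx.grid_2d_graph(4, 4, periodic=True)
parts, inter = partition_topology(G, 2)
print(len(G.edges()), [len(p) for p in parts], len(inter))
# a balanced bipartition keeps 12 self-links per switch and needs 8 inter-switch links
```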
§ IMPLEMENTATION DETAILS: SDT CONTROLLER
We implement the SDT controller based on the Ryu library <cit.> (version 4.34) and the API of commodity OpenFlow switches. As shown in Figure <ref>, the SDT controller consists of 4 modules. Topology Customization and Routing Strategy are the two basic modules of the controller. The remaining two modules, i.e., Deadlock Avoidance and Network Monitor, are dedicated modules for DCNs. The SDT controller supports fast (re)configuration of the network topology and the other modules by running a simple configuration file, as shown in Figure <ref>.
§.§.§ Topology Customization
This module is essential for performing TP, consisting of 1) the checking function and 2) the deployment function. In the checking function, all user-defined topologies will be used as input to the module, along with how the testbed is connected (e.g., distribution of nodes and two types of links). The module first checks if these topologies meet the deployment conditions as addressed in <ref>. If not, the module will inform the user of the necessary link modification. Then, the checked user-defined topology is used as the input for the deployment function. The controller will maintain the logical topology as an undirected graph and run the TP process automatically in this function.
§.§.§ Routing Strategy
This module contains various routing strategies for different topologies. We implement several routing algorithms as shown in Table <ref>. Most of the user-defined routing strategies can be implemented by the SDT controller as a specific set of flow tables. For instance, when a new flow comes, the SDT controller calculates the paths on the logical topology according to the strategies and then delivers the corresponding flow tables to the proper OpenFlow switches to perform a specific routing for the flow.
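As a concrete illustration, the sketch below shows how such a flow-table entry might be pushed to a switch through the standard Ryu OpenFlow 1.3 API; the function name, match fields, and port arguments are illustrative placeholders rather than the controller's actual code.

# Illustrative sketch: install one routing flow entry on an OpenFlow 1.3
# switch via Ryu. The match fields (IPv4 destination) and output port are
# placeholders for the values computed by the routing strategy.
def install_path_entry(datapath, dst_ip, out_port, priority=10):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=dst_ip)
    actions = [parser.OFPActionOutput(out_port)]
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
    mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                            match=match, instructions=inst)
    datapath.send_msg(mod)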
§.§.§ Deadlock Avoidance and Network Monitor
These two modules are dedicated modules for DCNs. The former works in the lossless network, like RDMA over Converged Ethernet (RoCE), along with Routing Strategy module to avoid the deadlock. The latter is mainly used for network telemetry. For example, the SDT controller periodically collects statistics data in each port of OpenFlow switches through provided API. The collected data can be further used to calculate the load of each logical switch in the case of adaptive routing.
We use the SDT controller to implement some prevalent network functions to evaluate SDT's capability. For details, please refer to <ref>.
§ EVALUATION
In this section, we conduct several experiments to answer the questions, including:
* Will SDT introduce additional overhead (e.g., latency) compared to a full testbed? ( <ref>)
* How many types of topologies can SDT project? ( <ref>)
* How cost-effective and scalable is SDT compared to previous TP methods? ( <ref>)
* How much speed-up can SDT bring to network experiments? ( <ref>)
* Can existing network functions be applied to SDT? ( <ref>)
It is worth mentioning that all topology reconfigurations of SDT in this section are done remotely without any manual rewiring.
§.§ Experiment Setup
§.§.§ SDT Cluster Setup
We use 3 H3C S6861-54QF OpenFlow switches (with 64 10Gbps SFP+ ports and 6 40Gbps QSFP+ ports, which can be split into 4 10Gbps SFP+ ports) for SDT. We use 16 HPE DL360 Gen9 servers with E5-2695v4 (18 cores and 36 threads) as host servers and virtualize them to 32 computing nodes (i.e., virtual machines). Each host server has one Mellanox ConnectX-4 10GbE dual-port NIC. Each computing node is allocated with 32GB RAM and 8 CPU cores. Moreover, each computing node is bound with a physical NIC port through SR-IOV to ensure that the virtualization will not become the performance bottleneck. All the network devices support the Priority Flow Control (PFC) for lossless ethernet.
§.§.§ Baselines
We use a full testbed to compare the accuracy of SDT in terms of latency and bandwidth. We compare the Application Completion Time (ACT) of SDT with a self-designed simulator running different HPC applications under different topologies. We also evaluate the cost-effectiveness and scalability compared to SP, SP-OS, and TurboNet <cit.>.
The network simulator we use is based on two popular simulators, BookSim <cit.> and SST/Macro <cit.>. The simulator supports a range of features needed by the evaluations (including PFC, cut-through, trace replaying, etc.) and is event-driven for efficiency. To run the same applications as the nodes on SDT, the simulator replays traces collected from running the HPC applications on real computing nodes, which ensures the authenticity of the simulation. We only compare SDT to TurboNet with Port Mapper (PM), because the number of queues on each port in the topology projected by Queue Mapper (QM) is inadequate for experiments inside DCs.
§.§ TP Accuracy of SDT
§.§.§ Latency
We construct a multi-hop topology for the latency and bandwidth tests, as shown in Figure <ref>. The topology consists of 8 switches and 8 computing nodes, with one node connected to each switch. The switches and nodes are inter-connected with 10Gbps links. We build this topology on SDT and on a full testbed, and compare the latency between Node 1 and Node 8 using the Pingpong application in the Intel MPI Benchmark (IMB) <cit.>. The application runs on the RoCEv2 network with ECN disabled.
We perform the latency test 10k times on incremental message lengths (param -msglen) and collect the latencies. Let the average latency of the full testbed be l_r and the latency of SDT be l_s; the overhead is calculated as (l_s - l_r) / l_r. Figure <ref> shows that SDT brings an acceptable overhead to the RTT. It is worth noting that latencies are quite small in the RoCEv2 network, which means that introducing even a tiny delay can lead to large deviations in the results. For example, the 10-hop latency for message lengths below 256 bytes is under 10 μs. Although the latencies on RoCEv2 are sensitive to hardware conditions, the overheads brought by SDT are below 1.6%, which is negligible. As the message length increases, the overhead brought by SDT becomes smaller.
§.§.§ Bandwidth
We use iperf3 to construct an incast scenario for the bandwidth test: all other nodes send 10Gbps TCP traffic to node 4. We compare the bandwidth on lossy and lossless networks (with PFC off and on, respectively).
The results (refer to Figure <ref>) demonstrate that with PFC enabled, the bandwidth allocation for each iperf3 flow aligns with the full testbed. For instance, nodes 3 and 5, which have 2 congestion points on their path to node 4, have comparable bandwidth when controlled by PFC in both the SDT and full testbed. Their bandwidth allocation is significantly distinct from that of other nodes with different hop counts. In the network without PFC, the bandwidth distribution between SDT and the full testbed has a nearly identical trend. Nodes that can allocate relatively high bandwidth (which may be influenced by RTT and other factors) behave similarly in both the actual topology and SDT. The trends are nearly alike for nodes with lower bandwidth. The only differences may be due to the additional overhead introduced by SDT, leading to slight differences in RTT and therefore different window growth rates.
To summarize, the way SDT builds the topology does introduce a bit of additional overhead, resulting in a deviation of 1.6% or less to the latencies compared to the full testbed in our environment. Our initial speculation is that these additional latency overheads are because TP increases the load of the switch's crossbar, which causes a slight bias compared to the real environment. These deviations are reasonable and have a negligible impact on the bandwidths.
During the evaluation, we also evaluate the hardware isolation using the Wireshark network sniffer on the client side. We deploy two unconnected topologies in one SDT testbed and conduct the same Pingpong experiment separately. The evaluation results show that the client's port does not receive packets from nodes that are not connected in one topology.
§.§ Scalability, Convenience, and Cost of SDT
We use simulations to compare the scalabilities, conveniences, and costs between SDT and other TP methods (SP, SP-OS, and TurboNet <cit.>) on the projection of multiple topologies, including the widely-used topologies in DCs (Fat-Tree, Dragonfly, and Torus) and 261 WAN topologies (comes from the Internet Topology Zoo <cit.>). The metric of reconfiguration times is calculated by the total time spent from the time the configuration is placed until the network is available. The hardware costs are extrapolated from the current market price of the hardware.
Table <ref> presents the results of the evaluations and shows that SDT can project more topologies than TurboNet at the same hardware cost, making it more scalable and cost-efficient than SP and SP-OS. SP requires manual reconnection, making reconfiguration time-consuming and prone to errors, especially for large topologies. SP-OS incorporates optical switches (OS) to facilitate reconfiguration but suffers from expensive hardware costs. TurboNet employs the loopback port of P4 switches for reconfiguration, resulting in halved bandwidth on the ports and reduced scalability compared to SDT. Also, recompiling the P4 program is time-consuming. SDT is the best option among these solutions due to its excellent scalability and cost-effectiveness.
§.§ Comparison between SDT, Simulator, and Full Testbed
We run a batch of HPC applications and benchmarks, including HPCG, HPL, miniGhost, miniFE, and IMB, to verify the ACT differences among SDT, the simulator, and the full testbed.
The HPC applications can verify the universality of SDT in the network experiments, while the IMB Alltoall is a pure traffic benchmark without any computation, ideal for verifying the impact on network performances brought by SDT's overhead. We run the applications on specific topologies and construct the topologies on both SDT and simulator. All parameters remain the same for the simulator and SDT, including PFC thresholds, congestion control, DCQCN enabled, cut-through enabled, et al. For details on network functions like deadlock avoidance, please refer to <ref>.
We select the topologies 1) Dragonfly with a=4, g=9 <cit.>, and h=2, 2) Fat-Tree with k=4 <cit.>, 3) 5x5 2D-Torus, and 4) 4x4x4 3D-Torus <cit.> for evaluation. For the topologies with the number of nodes greater than 32, we randomly select the nodes but keep the same among all the evaluations.
Table <ref> shows the difference in real-application evaluation between SDT and the simulator. Ax (B%) in the table indicates that the evaluation on SDT is A times faster than on the simulator, with an ACT difference of B%. The results show that the ACT collected on SDT is almost identical to that of the simulator, with a maximum deviation of 3%. However, the time consumption of SDT is greatly reduced compared to the simulator, especially for applications with heavy traffic.
Further evaluations are conducted to assess the performance improvement brought by SDT as the number of nodes increases. Figure <ref> compares the time consumption of full testbed (real ACT), simulator, and SDT in evaluating IMB Alltoall benchmark on a Dragonfly topology (a=4, g=9, h=2) with 1, 2, 4, 8, 16, and 32 randomly selected nodes. Note that SDT's time consumption includes the deployment time of the topology. Results show that when the ACT is short, the topology deployment time may result in overhead in the evaluation time consumption, but it is still faster than the simulator. It's worth mentioning that the simulation time may be affected by the performance of the machine running the simulation, but this does not resolve the issue that the simulation is much slower than a real experiment on SDT.
To summarize, SDT can well construct actual network topologies. The experiments performed on SDT show almost the same ACT as the real environments and the simulations, while SDT has much lower costs than the full testbed and is much faster than the simulator. There are good reasons that SDT can be used for more authentic and efficient network evaluations than simulators and emulators.
§.§ Running Prevalent Network Functions on SDT
We also evaluate the feasibility of deploying prevalent network features on the SDT, with two specific modern network functions, RoCEv2, and a naive active routing.
RoCEv2 works over lossless ethernet with PFC enabled. Since SDT does not have any hardware modifications to the physical ports, building a lossless network environment is possible by simply enabling the PFC on both switches and NIC ports. Moreover, DCQCN <cit.> is an end-to-end congestion control method to delay the generation of PFC messages. Like PFC, the DCQCN can be enabled by directly turning it on as long as the switch and the NIC support it. We further deploy three types of deadlock avoidance methods alongside routing strategies on the SDT (Table <ref>), which are working properly in the evaluation of real applications (See <ref>).
We implement an active routing algorithm based on <cit.> for the Dragonfly topology (a=4, g=9, h=2, with randomly selected 32 nodes). This algorithm extends Dragonfly's minimal routing policy by estimating network congestion according to the statistic data from Network Monitor module. We evaluate active routing using a prevalent pure communication application, i.e., IMB Alltoall. Results show that active routing works well on SDT, which can reduce the ACT of the IMB Alltoall.
In summary, SDT shows strong adaptability to existing network functions. Most existing ethernet features can be easily deployed in SDT. Researchers can use SDT to validate existing network functions in multiple-scale topologies or to develop and evaluate new network functions using SDT.
§ DISCUSSION AND FUTURE WORK
§.§ Flexibility Enhancement
In SDT, the issue of reserving inter-switch links may arise ( <ref>). Manual operations may still be required once the reserved inter-switch links cannot accommodate a new user-defined topology. To handle this, SDT can leverage optical switches to dynamically turn a link into either a self-link or an inter-switch link according to the topology requirements, further enhancing the flexibility of SDT. We are designing an SDT controller that incorporates optical switches and investigating whether this introduces additional challenges.
§.§ Switch Selection
The SDT controller in this paper performs TP operations on commodity OpenFlow switches. Generally, other switches can also be used for TP if they meet the following conditions: 1) allowing loopback packets to pass through self-links (or the STP protocol can be disabled), and 2) supporting 5-tuple matching or others similar to determine the forwarding of packets. For instance, other types of switches, like switches supporting extended ACL tables, are also suitable for TP. The P4-based (Intel Tofino) SDT controller is under refinement.
§.§ Resource Limitation
In SDT, the most significant resource is the maximum number of flow table entries supported by each OpenFlow switch. When a switch runs out of flow table entries during the setup of the logical topology, the setup procedure may fail or other unknown failures could occur. The SDT controller leverages a built-in module that checks the number of available table entries to avoid such problems. If the demand for entries is greater than the available capacity, it can merge entries, split the topology, or inform operators to add more switches. In our evaluation, the problem of inadequate flow table capacity is rare. For instance, when we project a Fat-Tree with k=4 (containing 20 switches and 16 nodes) onto 2 OpenFlow switches, each switch requires only about 300 flow table entries, which is not difficult for modern commercial OpenFlow switches to accommodate.
§ CONCLUSION
We summarize the advantages and disadvantages of existing network evaluation tools and conclude the methodology of an alternative method called “Topology Projection” (TP). Based on the idea of TP, we propose SDT, a deployment-friendly and automatically reconfigurable network topology testbed. SDT allows researchers to use several commodity OpenFlow switches to build network topologies based on user-defined topology configurations. SDT is fully transparent to other network components and can significantly reduce the deployment cost for network topology evaluations. We also develop the corresponding SDT controller for automatic topology reconfiguration. Through evaluations, we find that SDT can achieve almost the same physical properties as the full testbed and runs up to 2899x faster on network evaluations than the simulator does. SDT is more cost-effective and scalable than other TP solutions and can support a wide range of network research works.
§ ACKNOWLEDGEMENTS
This work is sponsored by the Key-Area Research and Development Program of Guangdong Province (2021B0101400001), National Natural Science Foundation of China (62150610497, 62172108, 62002066), Natural Science Foundation of Shanghai (23ZR1404900), the Major Key Project of PCL, and Open Research Projects of Zhejiang Lab (2022QA0AB07). We also sincerely appreciate the anonymous reviewers for their valuable and constructive feedback.
|
http://arxiv.org/abs/2307.07417v2 | 20230711144414 | RoPDA: Robust Prompt-based Data Augmentation for Low-Resource Named Entity Recognition | [
"Sihan Song",
"Furao Shen",
"Jian Zhao"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
RoPDA: Robust Prompt-based Data Augmentation for Low-Resource Named Entity Recognition
August 12, 2023
========================================================================================
Data augmentation has been widely used in low-resource NER tasks to tackle the problem of data sparsity. However, previous data augmentation methods have the disadvantages of disrupted syntactic structures, token-label mismatch, and requirement for external knowledge or manual effort.
To address these issues, we propose Robust Prompt-based Data Augmentation (RoPDA) for low-resource NER. Based on pre-trained language models (PLMs) with continuous prompt, RoPDA performs entity augmentation and context augmentation through five fundamental augmentation operations to generate label-flipping and label-preserving examples.
To optimize the utilization of the augmented samples, we present two techniques: Self-Consistency Filtering and mixup. The former effectively eliminates low-quality samples, while the latter prevents performance degradation arising from the direct utilization of label-flipping samples.
Extensive experiments on three benchmarks from different domains demonstrate that RoPDA significantly improves upon strong baselines, and also outperforms state-of-the-art semi-supervised learning methods when unlabeled data is included.
§ INTRODUCTION
Named Entity Recognition (NER) is a fundamental NLP task which is dedicated to identifying predefined named entities (e.g., persons, organizations and locations) from texts.
With the rapid development of deep learning in recent years, fine-tuning pre-trained language models (PLMs) such as BERT <cit.> for NER tasks has yielded promising results <cit.>. However, fine-tuning PLMs still necessitates a substantial amount of human annotations. NER models, in particular, require each token to be labeled in a sequence, which is a laborious and time-consuming task in the real world.
Consequently, NER frequently encounters the challenge of data sparsity, making low-resource NER a pressing priority.
In order to mitigate the issue of data sparsity,
various data augmentation methods for low-resource NER have been proposed, such as traditional word-level manipulation <cit.> and more recent PLM-based methods, and the latter have received a lot of attention and have yielded promising results <cit.>. <cit.> leverage a masked language model to randomly mask entities in the sentence and regenerate them conditioned on labels, thereby enhancing entity diversity. <cit.> employ Soft Prompt for seq2seq PLMs and propose a dual-view augmentation approach to generate sentences conditioned on labels and keywords.
However, their approaches have the limitation of boosting only entity diversity but not context diversity, or of necessitating external knowledge (i.e., an external corpus).
To address these issues, we propose Robust Prompt-based Data Augmentation (RoPDA) for low-resource NER.
With well-trained continuous prompt <cit.>, our model is capable of automatically generating training samples for low-resource NER tasks, eliminating the need for external knowledge or human efforts, which is different from <cit.>.
To enhance model generalization, we propose five fundamental data augmentation operations. Among these, one operation focuses on context augmentation, aiming to increase context diversity and enrich our training data. The remaining four operations concentrate on entity augmentation, designed to generate adversarial examples and promote entity diversity, including label-flipping and label-preserving operations.
Inspired by <cit.>, a label-flipping operation means regenerating an entity in the sentence into a different type of entity.
As shown in Figure <ref>, after a label-flipping operation, the entity type sequences of the augmented sentence and the original sentence differ only in one entity.
Such an augmented sentence can serve as an adversarial example <cit.> for this modified entity type, enhancing the NER model's capability in distinguishing these two entity types before and after regeneration.
Conversely, a label-preserving operation involves regenerating an entity as a new entity of the same type, thereby promoting entity diversity.
We argue that NER models can benefit from both label-preserving and label-flipping operations.
With these five augmentation operations, we can generate smooth and diverse sentences without relying on external knowledge.
To further optimize the utilization of augmented samples, we propose Self-Consistency <cit.> Filtering and mixup. The former acquires the capability to effectively filter out low-quality samples by fine-tuning the PLM with a bidirectional mask. The latter employs linear interpolation for adversarial examples, resulting in a smoother label distribution that prevents the performance degradation caused by their direct utilization.
Through these five augmentation operations followed by Self-Consistency Filtering and mixup, we can generate high-quality augmented samples.
To summarize, our contributions are as follows:
* We propose a robust prompt-based data augmentation method RoPDA for low-resource NER, which contains five augmentation operations. By simultaneously augmenting entities and contexts, RoPDA can generate augmented examples with high diversity.
* We propose Self-Consistency Filtering to improve the quality of augmented samples through bidirectional masking.
* We utilize the mixup technique to interpolate adversarial examples with corresponding original examples, effectively maximizing the utility of adversarial examples.
* Experiments on three benchmarks show the significant performance gain of RoPDA over current state-of-the-art baselines.
§ RELATED WORK
§.§ Data Augmentation
Word-level manipulation is a prevalent data augmentation method that manipulates words in the original text to generate synthetic text utilizing predefined rules <cit.>. <cit.> generate new examples through token substitutions, including synonym replacement and mention replacement. But these methods run the risk of destroying the sentence structure and making labels inconsistent with modified tokens.
Recently, leveraging the powerful generative capability and rich knowledge of PLMs to generate augmented data has been explored <cit.>. <cit.> perform entity replacement on corrupted sentences based on the masked language model.
<cit.> and <cit.> leverage seq2seq PLMs to generate synthetic data conditioned on entity types.
Adversarial augmentation <cit.> is commonly used to improve model robustness, but recent research has discovered that it can also improve model generalization <cit.>. <cit.> adopt expert-guided heuristics for adversarial augmentation, focusing on modifying entities and contexts in accordance with predefined rules. However, their approach lacks diversity in entities and relies on expert knowledge. <cit.> create entity attacks by replacing the target entity with entities of the same type from an external knowledge base. On the other hand, our approach, which is based on PLMs, can produce high-quality adversarial examples without the need for external knowledge or human effort.
§ METHOD
The overview of RoPDA is shown in Figure <ref>. Firstly, We begin by preprocessing the original data into linearized sentences with entities constrained by their types.
Secondly, we add prompt vectors to the seq2seq PLM and fine-tune it with the linearized few-shot data.
Thirdly, linearized sentences undergo strategic masking using five fundamental augmentation operations.
Fourthly, we regenerate the masked sentences using the fine-tuned PLM to produce augmented sentences.
Subsequently, Self-Consistency Filtering is applied to filter out noisy low-quality samples.
Lastly, to better utilize the generated adversarial examples, mixup is employed to interpolate the adversarial examples and the original data during NER model training.
§.§ Data Preprocessing
Similar to <cit.> and <cit.>, we adopt a linearization strategy to convert a sentence and its corresponding tags into a linearized sequence. As shown in Figure <ref>, given a sentence X=[x_1,x_2,x_3,⋯,x_L], for each entity e_ij=x_i ⋯ x_j with type l_k (e.g., l_k=“PER") , we convert it into “[ x_i ⋯ x_j | O(l_k) ]", where O(l_k) is the natural language form of label l_k (e.g., O(l_k)=“person") . Before being sent to PLMs, labeled sentences must be processed using this template. This data preprocessing enables the PLM to explicitly take label information into account when generating tokens, thereby constraining entity and entity label consistency. Furthermore, such a template is reversible, enabling us to recover the sentence and its labels from the linearized sequence.
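A minimal sketch of this linearization step is given below; the helper names, label-word mapping, and span-based input format are our own illustrative assumptions rather than the exact implementation.

# Illustrative linearization of a labeled sentence into the
# "[ entity | type ]" template described above.
LABEL_WORDS = {"PER": "person", "LOC": "location",
               "ORG": "organization", "MISC": "miscellaneous"}

def linearize(tokens, spans):
    # tokens: list of words; spans: (start, end, label) entity spans, end exclusive.
    out, i = [], 0
    for start, end, label in sorted(spans):
        out.extend(tokens[i:start])
        out.append("[ " + " ".join(tokens[start:end]) + " | " + LABEL_WORDS[label] + " ]")
        i = end
    out.extend(tokens[i:])
    return " ".join(out)

tokens = ["Arsenal", "beat", "Chelsea", "in", "London", "."]
spans = [(0, 1, "ORG"), (2, 3, "ORG"), (4, 5, "LOC")]
print(linearize(tokens, spans))
# [ Arsenal | organization ] beat [ Chelsea | organization ] in [ London | location ] .

Because the template is reversible, an inverse routine can recover the token sequence and its labels from any generated linearized sentence.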
§.§ Prompt-based PLM
Fine-tuning is the prevalent way to adapt large PLMs to downstream tasks. In low-resource scenarios, however, the available training data is inadequate relative to the model size, and updating all parameters of the model can readily cause overfitting, leading to a decline in generalization.
By prepending instructions to the task input and directly generating the task output from PLMs, prompt can effectively harness the potential of PLMs, especially in low-resource settings <cit.>. Following <cit.>, we add a sequence of continuous trainable vectors, which is also called Soft Prompt, as shown in Figure <ref>, to each layer of the PLM. During training, we only update the parameters of the prompt vectors and fix all PLM parameters.
Note that in this paper, we choose the T5 <cit.> model as our backbone sequence-to-sequence PLM.
Different from <cit.>, we do not pre-train prompt vectors with additional corpus texts but only utilize a small number of labeled data for training.
Although our method does not leverage external knowledge, we still achieve significant improvement compared to their method.
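One possible way to realize this setup with off-the-shelf tooling is sketched below. It assumes the Hugging Face transformers and peft libraries, whose prefix-tuning configuration prepends trainable vectors to every layer while freezing the backbone; the number of virtual tokens shown is an illustrative choice, not necessarily the value used in our experiments.

# Sketch: attach trainable prefix (soft prompt) vectors to each layer of a
# frozen T5 model via the peft library.
from transformers import T5ForConditionalGeneration, T5TokenizerFast
from peft import PrefixTuningConfig, TaskType, get_peft_model

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=100)
model = get_peft_model(model, config)   # T5 weights stay frozen
model.print_trainable_parameters()      # only the prompt vectors are trainable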
§.§ Data Augmentation
Tokens in a sentence can be divided into entity segments and context segments depending on whether or not they are in an entity. Formally, a sentence X is divided into C_1 E_1 ⋯ C_n E_n C_n+1, with C_i representing the context segment and E_i representing the entity segment.
To enhance data diversity, we propose five fundamental data augmentation operations, one of which is dedicated to context augmentation, while the other four focus on entity augmentation, involving label-flipping and label-preserving operations.
The sentence is linearized according to Section <ref> before each operation.
Then we apply the five operations to the linearized sentence by masking the according part of it.
PLMs are then employed to generate the masked parts in order to obtain the synthetic sentences. Figure <ref> shows examples of each operation.
Op1: Augmenting the Entity-Related Span Choose an entity segment at random from the sentence, mask the entity segment and a portion of its surrounding context segments.
Op2: Changing the Entity Type Choose an entity segment at random from the sentence and replace its type with a new entity type l_new. Then mask the entity segment and a portion of its surrounding context segments.
Op3: Adding an Entity Choose an entity segment E_i from the sentence randomly, and add a new entity segment of type l_new and corresponding context segments after E_i.
As a result, the processed sentence would be:
⋯ C_i E_i <MASK> [ <MASK> | O(l_new) ] <MASK> C_i+1 ⋯.
Op4: Erasing an Entity Choose an entity segment E_i at random, then mask it together with a portion of its surrounding context C_i and C_i+1, which is equivalent to removing the entity and associated context from the sentence. As a result, the processed sentence would be:
⋯ C_i-1 E_i-1 <MASK> E_i+1⋯.
Op5: Augmenting Contextual Spans Choose a context segment from the sentence
at random, and then mask a portion of the context.
Op2 to Op4 are the label-flipping operations, since they essentially alter the entity type sequence of the original sentence, thus generating an adversarial example for the altered entity type.
There are two ways to choose the new entity type l_new for label-flipping operations: one is to choose at random from the label set, and the other is to choose based on the entity type similarity. More details can be seen in Section <ref>.
Op1, on the other hand, is a label-preserving operation that increases entity diversity by regenerating entities into new entities of the same type without altering the entity sequence.
Op5 serves to enhance the diversity of the context, a factor that we consider also crucial for increasing the overall training set diversity.
Both label-flipping and label-preserving operations can improve the performance of NER models and can be effectively utilized in combination.
Therefore, we propose four data augmentation strategies based on these fundamental operations, as shown in Table <ref>. The first, Standard Augmentation (SA), is the label-preserving strategy, and the latter three, Entity Label Change (ELC), Entity Adding (EA) and Entity Replacing (ER), are the label-flipping strategies due to the use of label-flipping operations. As previously mentioned, the data generated by label-flipping strategies can be viewed as adversarial examples.
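To make the operations concrete, the sketch below expresses one label-flipping operation (Op2) as a masked input for the PLM, using T5-style sentinel tokens for the masked spans. The data structures and helper names are illustrative, and for brevity the whole neighbouring context segment is masked rather than only a portion of it.

# Illustrative masking for Op2 (changing the entity type). Entity segments
# are (text, type_word) tuples, context segments are plain strings.
import random

def op2_change_entity_type(segments, label_words):
    entity_ids = [i for i, s in enumerate(segments) if isinstance(s, tuple)]
    target = random.choice(entity_ids)
    old_type = segments[target][1]
    new_type = random.choice([t for t in label_words if t != old_type])
    out, sid = [], 0
    for j, seg in enumerate(segments):
        if j == target:
            out.append(f"[ <extra_id_{sid}> | {new_type} ]")   # entity masked, new type kept
            sid += 1
        elif isinstance(seg, tuple):
            out.append(f"[ {seg[0]} | {seg[1]} ]")
        elif abs(j - target) == 1:
            out.append(f"<extra_id_{sid}>")                    # neighbouring context masked
            sid += 1
        else:
            out.append(seg)
    return " ".join(out)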
§.§ Self-Consistency Data Filtering
The samples generated by the proposed augmentation strategies may exhibit the entity and type inconsistency issue, especially for label-flipping strategies.
To further improve the quality of augmented samples, we propose a novel filtering strategy based on self-consistency that employs a bidirectional mask to fine-tune the T5 model.
As shown in Figure <ref>, the bidirectional mask includes Type2Word and Word2Type, where Type2Word refers to masking the words (entities and contexts) in the linearized sentence and inferring them based on the entity types, and Word2Type refers to masking the entity types and inferring them based on the words. After this fine-tuning, the T5 model acquires bidirectional inference capabilities between entities/contexts and types, and thereby the ability to recognize and verify consistent samples.
Specifically, we first generate diverse augmented samples using the type-to-entity/context inference capability, as described in Section <ref>. We then use Word2Type to regenerate each entity type in the augmented samples with the assistance of the entity/context-to-type inference capability. We only keep samples whose regenerated entity types are consistent with the original. We believe that such samples are self-consistent for the model and have a higher degree of confidence.
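A simplified sketch of the Word2Type consistency check is given below; the function signature and the way the type slot is masked are illustrative assumptions, and the actual filter operates on the linearized template described above.

# Sketch of the Word2Type check: for each entity in an augmented sample,
# mask its type slot, let the fine-tuned model re-predict it, and keep the
# sample only if every re-predicted type matches the generated one.
def is_self_consistent(model, tokenizer, entities, linearized_sentence):
    # entities: list of (entity_text, type_word) pairs in the augmented sample.
    for entity_text, type_word in entities:
        query = linearized_sentence.replace(
            f"[ {entity_text} | {type_word} ]",
            f"[ {entity_text} | <extra_id_0> ]", 1)
        inputs = tokenizer(query, return_tensors="pt")
        output = model.generate(**inputs, max_new_tokens=5)
        predicted = tokenizer.decode(output[0], skip_special_tokens=True).strip()
        if predicted != type_word:
            return False
    return True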
§.§ Mixup
Using the four strategies described in Section <ref>, we can generate a large number of adversarial samples.
However, directly training the NER model with adversarial examples may cause overfitting on adversarial features <cit.>.
Mixup <cit.>, as a regularization technique, can not only improve the model's generalization performance but also its robustness against adversarial attacks <cit.>.
To this end, we leverage mixup to prevent the model from overfitting the adversarial samples and help improve generalization ability by linearly interpolating the adversarial samples and the original data.
Given a pair of data points (x,y)and (x',y'), with x denoting a data point and y being its label in a one-hot representation, mixup creates a new data point by linearly interpolating the data points and their labels:
x̂ =λ x + (1-λ) x'
ŷ =λ y + (1-λ) y'
where the mixing parameter λ is sampled from a Beta distribution, λ∼ Beta(α,β).
In this work, we linearly interpolate the label-flipping sample x_f with its corresponding original example x_o in the hidden space. At the m-th layer, the hidden representations for each token in x_f, denoted as h_f^m, are interpolated with h_o^m, the hidden representations for each token in the original example x_o, by a ratio λ:
ĥ^m=λ h_f^m + (1-λ) h_o^m
After that, ĥ^m is fed to the (m+1)-th layer. In the meantime, the corresponding labels of the two samples are linearly mixed with the same ratio.
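A minimal PyTorch sketch of this interpolation is shown below; the α and β values follow the appendix, while the tensor shapes and function name are illustrative.

# Sketch of mixup in the hidden space of a BERT-based tagger.
# h_f, h_o: hidden states of the label-flipping and original examples at
# layer m; y_f, y_o: soft (one-hot) token label matrices.
import torch
from torch.distributions import Beta

def hidden_mixup(h_f, h_o, y_f, y_o, alpha=130.0, beta=5.0):
    lam = Beta(torch.tensor(alpha), torch.tensor(beta)).sample()
    h_mixed = lam * h_f + (1.0 - lam) * h_o   # fed to layer m+1
    y_mixed = lam * y_f + (1.0 - lam) * y_o   # mixed token labels
    return h_mixed, y_mixed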
§ EXPERIMENTS
§.§ Experimental Setup
Datasets
We conduct experiments on three datasets. (a) CoNLL03 <cit.> is a collection of news wire articles from the Reuters Corpus, containing 4 entity types. (b) MIT Restaurant <cit.> is a collection of user utterances in the restaurant domain with 8 entity types. (c) MIT Movie <cit.> consists of user utterances in the movie domain with 12 entity types. These three datasets are from different domains and have varying numbers of entity types, allowing for a comprehensive evaluation of our method.
We create four low-resource settings of shot-5/10/20/50 for each dataset. In the shot-K setting, we sample K samples from the Train set for each entity type as the training set and add the remaining to the unlabeled set.
Baselines We use the model trained only on the original data as the baseline. We also adopt several other state-of-the-art methods for comparison: (1) SDANER <cit.> explores different token replacement techniques for NER tasks. (2) MELM <cit.> performs entity replacement on corrupted sentences based on the masked language model. (3) PromDA <cit.> employs Soft Prompt for seq2seq PLMs and proposes a dual-view data augmentation method that generates data conditioned on labels or keywords. (4) MetaST <cit.> makes use of unlabeled data through self-training and mitigates errors from noisy pseudo-labels through adaptive data re-weighting. MetaST is a state-of-the-art semi-supervised method.
Leveraging Unlabeled Data
Unlabeled data contains a wealth of information that, if harnessed effectively, can enhance model performance significantly.
A common solution for utilizing unlabeled data is self-training, which involves pseudo-annotating unlabeled data in the i-th iteration with the model from the (i-1)-th iteration.
To maximize the utilization of knowledge in unlabeled data, we propose a method called RoPDA* for generating augmented samples for unlabeled data.
After pseudo-annotating unlabeled data, we only retain those with high-confidence pseudo-labels and use PLMs to synthesize augmented data for them. Finally, we train the NER model using original data, adversarial examples, pseudo-labeled data, and augmented data for unlabeled data in an iterative fashion. The overall training procedure is shown in Appendix <ref>.
§.§ Main Results
Without unlabeled data, RoPDA achieves significant performance gains and outperforms the state-of-the-art baselines by a large margin at every shot. As shown in Table <ref>, RoPDA consistently outperforms SDANER, MELM, and PromDA, and also outperforms MetaST in most cases, even though the latter uses additional unlabeled data. RoPDA yields an improvement of 2.3-8.3% on CoNLL03, while achieving an improvement of 0.6-4.8% on MIT Restaurant and 1.4-7.5% on MIT Movie.
With unlabeled data, our approach achieves further improvements as shown in RoPDA* in Table <ref>.
RoPDA* far outperforms MetaST, the state-of-the-art semi-supervised method, across all benchmarks.
In comparison to RoPDA, RoPDA* achieves another improvement by 1.9%, 1.7%, and 1.1% on three benchmarks, respectively.
To better understand the role of unlabeled data, we also conduct experiments on RoPDA+self-training.
As shown in Table <ref>, RoPDA* consistently outperforms RoPDA+self-training, which shows that data augmentation on unlabeled data can further improve model performance, reflecting the superiority of our method.
§.§ Ablation Study
As shown in Tables <ref> and <ref>, we conduct ablation studies to quantify the contribution of various components, i.e., soft prompt/self-consistency filtering/mixup.
Additionally, we analyze the impact of filtering and mixup on each individual strategy.
§.§.§ Soft Prompt
To examine the impact of soft prompt, we compare the soft prompt model with the standard T5 model, which undergoes fine-tuning using a learning rate of 1e-4. As depicted in Table <ref>, eliminating the soft prompt leads to significant performance decreases across all three benchmarks, highlighting the efficacy of soft prompt in low-resource scenarios.
§.§.§ Self-Consistency Filtering
After removing self-consistency filtering, as shown in Table <ref>, there is a moderate decline in model performance across all benchmarks.
Subsequently, we proceed to examine the impact of self-consistency filtering on each individual strategy.
As shown in Table <ref>, removing self-consistency filtering brings greater performance degradation to label-flipping strategies with the biggest decline in ELC having a value of 0.8, but minor degradation to the label-preserving strategy.
We believe that this is because label-flipping strategies introduce more semantic and structural alterations to the original sentence, thereby introducing more noise and necessitating a higher degree of filtering.
This hypothesis can be supported by Table <ref>, where it is evident that self-consistency filtering retains the largest proportion of data for SA, while effectively filtering out a much greater amount of low-quality data for label-flipping strategies, specifically ELC.
§.§.§ Mixup
As shown in Table <ref>, removing mixup leads to performance degradation across all benchmarks with an average decline of 0.5. Furthermore, an analysis of mixup's impact on each individual strategy, as shown in Table <ref>, reveals that mixup also significantly affects label-flipping strategies to a greater extent than the label-preserving strategy.
This phenomenon can be attributed to the fact that interpolating adversarial examples with the original data prevents overfitting adversarial features and improves generalization. Conversely, mixup on label-preserving data does not yield the same benefits.
Notably, self-consistency filtering and mixup have analogous impacts on these strategies, as they both essentially enhance label-flipping samples.
§.§ Analysis
Different Combinations of Strategies
As shown in Table <ref>, the performance of the model decreases when one strategy is removed from the combination of all, except ELC.
This implies that the contribution of ELC is comparatively limited when incorporating the other three strategies.
We believe this is because directly changing entity types introduce too much noise, which is inferior to the two indirect ways of changing entity types, EA and ER.
In conclusion, all four strategies can improve model performance, but ELC contributes less.
Different Mixup Choices Our original mixup method (F-Mixup) is to mix up the label-flipping examples, also called adversarial examples, with the corresponding original data.
We also carry out experiments on the following mixup methods to verify the superiority of F-Mixup: (1) P-Mixup mixups the label-preserving data with the corresponding original data. (2) Both-Mixup mixups both label-flipping data and label-preserving data with the corresponding original data. (3) Joint-Mixup mixups the label-flipping data with both the corresponding label-preserving data and original data.
As shown in Table <ref>, F-Mixup is undoubtedly superior to other mixups, demonstrating the importance of interpolating adversarial examples with the original data.
Label Flipping Schemes
There are two ways to choose l_new for label-flipping operations. The first one Random is to choose randomly from the label set, and the other is to choose based on entity type similarity. For similarity-based methods, we propose two options: Fixed means fixed flipping of each entity type to the most/least similar type. Probability indicates that the flip probability is calculated based on the entity type similarity using the softmax function.
The specific calculation method for similarity can be found in Appendix <ref>.
Table <ref> shows that Random outperforms all similarity-based schemes.
We contend that this is due to the fact that random selection has the potential to maximize sentence diversity, whereas selection based on similarity will result in a much higher probability of flipping to a specific type than other types for each type, thus reducing sentence diversity.
Hyperparameter Setting for Data Augmentation There are several important parameters for data augmentation: K, M and N.
K is the number of label-flipping operations performed on each sample. K is initially set to 1, and when K increases, the F1 score on CoNLL03 falls by 1.1%.
M and N represent the times of entity and context augmentation for each sentence, respectively.
As shown in Figure <ref>, when context augmentation is removed, that is, when N=0, the performance declines by 1.2%, indicating the importance of enhancing the contexts of sentences.
When M and N take fixed values, the performance suffers when they are too large or too small, with M=2 and N=3 providing the best results.
In addition, the best performance is attained when M and N are both randomly chosen from {1, 2, 3}.
In our original experiment, when performing entity augmentation, we not only mask and regenerate the entity but also regenerate a portion of the surrounding context, as we believe this will make the generated entity more in tune with its context. We run experiments on CoNLL03 to verify our hypothesis. The F1 score decreases by 0.8% when only regenerating the entity without regenerating its context, indicating the importance of regenerating its context simultaneously.
§.§ Case Study
Table <ref> shows some generated examples from RoPDA.
RoPDA can generate high-quality new entities and increase entity diversity by leveraging the powerful inference ability and rich knowledge contained in PLMs.
Taking the sentence from CoNLL03 as an example, “New York" is usually represented as a location-type entity, but in our example sentence, “New York" represents an organization type. In EA strategy, the trained language model learns the true meaning of “New York" through contextual inference and its conditioned label and thus, generates a new organization entity “Boston" which has a similar meaning to “New York".
In addition, as can be seen in Table <ref>, in ELC and ER strategies, the original entity can be regenerated as a new type of entity that is harmonious with the context.
Moreover, our proposed method improves not only entity diversity but also context diversity significantly.
§ CONCLUSION
In this paper, we propose a robust prompt-based data augmentation method RoPDA for low-resource NER. RoPDA performs entity augmentation and context augmentation through five fundamental augmentation operations to generate label-flipping and label-preserving examples.
To optimize the utilization of augmented data, we introduce two techniques: Self-Consistency Filtering and mixup.
Self-Consistency Filtering efficiently eliminates low-quality samples, while mixup mitigates performance degradation arising from the direct utilization of adversarial samples.
Extensive experiments on three benchmark datasets showcase the effectiveness of RoPDA.
§ TRAINING PROCEDURE OF ROPDA*
§ IMPLEMENTATION DETAILS
We utilize the T5-Large <cit.> model as our generative model. The parameters of the T5-Large model are kept frozen; only the prompt parameters are fine-tuned using the few-shot data. Following <cit.>, we use the Adafactor <cit.> optimizer with learning rate 1e-3 and weight decay 1e-5.
We set the batch size 16 and train the model for 10k steps. When performing data augmentation, we set K to 1, and M and N are chosen at random from {1, 2, 3}.
We treat the NER task as a sequence labeling task and utilize the BERT-BASE model as our backbone model. When training the NER model, we set the learning rate 5e-5 and batch size 8. The hidden layer of mixup is randomly selected from{8, 9, 10}. The α and β in the mixup are set to 130 and 5, respectively. The NER training settings of baselines are set to the same as RoPDA. We report the average micro-F1 score for 3 runs.
§ ROPDA IN DATA-RICH SETTINGS
Despite being designed for low-resource settings, RoPDA can still offer appreciable performance improvements in high-resource settings.
As shown in Table <ref>, in data-rich scenarios, RoPDA improves model performance on all three benchmarks, and as the amount of data increases, the performance gain brought by RoPDA continues to decline.
In addition, we discover that the worse the baseline performance on a dataset, the greater the performance gain brought by RoPDA.
§ CALCULATION OF ENTITY TYPE SIMILARITY
Given two entity types, l_1 and l_2 (e.g., PER and LOC), we first obtain their natural language forms, O(l_1) and O(l_2) (e.g., person and location), and then employ BERT <cit.> to obtain the embedding representations, h(l_1) and h(l_2), for O(l_1) and O(l_2). Subsequently, we compute the Euclidean distance or cosine similarity between h(l_1) and h(l_2), considering the negative Euclidean distance or cosine similarity as the similarity measure for the entity types.
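A rough sketch of this computation is given below, assuming the Hugging Face transformers library; mean pooling of the BERT hidden states and the cosine-similarity variant are illustrative choices.

# Sketch: embed the natural-language label words with BERT and turn the
# pairwise cosine similarities into flip probabilities with a softmax.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased")

def label_embedding(label_word):
    inputs = tokenizer(label_word, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)               # mean-pooled embedding

def flip_probabilities(label_words):
    emb = torch.stack([label_embedding(w) for w in label_words])
    sim = torch.nn.functional.cosine_similarity(
        emb.unsqueeze(1), emb.unsqueeze(0), dim=-1)
    sim.fill_diagonal_(float("-inf"))   # never "flip" a type onto itself
    return torch.softmax(sim, dim=-1)   # row i: probability of flipping type i to type j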
|
http://arxiv.org/abs/2307.09469v2 | 20230710180805 | Graph Representation of the Magnetic Field Topology in High-Fidelity Plasma Simulations for Machine Learning Applications | [
"Ioanna Bouri",
"Fanni Franssila",
"Markku Alho",
"Giulia Cozzani",
"Ivan Zaitsev",
"Minna Palmroth",
"Teemu Roos"
] | physics.plasm-ph | [
"physics.plasm-ph",
"cs.LG"
] |
Graph Representation of the Magnetic Field Topology in High-Fidelity Plasma Simulations for Machine Learning Applications

Ioanna Bouri (Computer Science), Fanni Franssila (Computer Science), Markku Alho (Physics), Giulia Cozzani (Physics), Ivan Zaitsev (Physics), Minna Palmroth (Physics), Teemu Roos (Computer Science)

Department of Physics, University of Helsinki, Helsinki, Finland
Department of Computer Science, University of Helsinki, Helsinki, Finland

Correspondence: Ioanna Bouri <[email protected]>

Keywords: Machine Learning, ICML
Topological analysis of the magnetic field in simulated plasmas allows the study of various physical phenomena in a wide range of settings. One such application is magnetic reconnection, a phenomenon related to the dynamics of the magnetic field topology, which is difficult to detect and characterize in three dimensions. We propose a scalable pipeline for topological data analysis and spatiotemporal graph representation of three-dimensional magnetic vector fields. We demonstrate our methods on simulations of the Earth's magnetosphere produced by Vlasiator, a supercomputer-scale Vlasov theory-based simulation for near-Earth space. The purpose of this work is to challenge the machine learning community to explore graph-based machine learning approaches to address a largely open scientific problem with wide-ranging potential impact.
§ INTRODUCTION
Magnetic reconnection is a fundamental plasma physical process characterized by a topological reconfiguration of the magnetic field and energy conversion from magnetic to kinetic and thermal energy, leading to plasma heating, particle acceleration, and mixing of plasmas <cit.>. The phenomenon is encountered in different settings and plays a key role in the eruption of solar flares and coronal mass ejections (CMEs) in the solar corona <cit.>, in the Earth's magnetosphere and its interaction with the solar wind <cit.>, in astrophysical plasmas <cit.>, as well as in fusion plasma during major and minor tokamak disruptions <cit.>.
Magnetic reconnection is linked to space weather conditions that can potentially damage terrestrial technological infrastructure, satellites, and manned space missions <cit.>.
CMEs cause magnetospheric magnetic storms <cit.>, during which the terrestrial power grids may suffer from Geomagnetically Induced Currents (GICs) and even fail <cit.>. Solar flares accelerate particles into relativistic energies, which propagate to the Earth's upper atmosphere and
affect satellite and radar signals that can be significantly altered or lost during active space weather conditions <cit.>.
The nature of the phenomenon is well-understood in two-dimensional (2D) settings, and quasi-2D models have been successful at reproducing many features of reconnection in the solar corona and the Earth's magnetosphere <cit.>.
However, magnetic reconnection is intrinsically a three-dimensional (3D) process. This becomes especially evident when considering reconnection in the solar corona, where the magnetic field forms twisted coronal loops with complex topologies <cit.>. Despite considerable progress, the additional complexity introduced in 3D settings continues to pose many open questions regarding the nature of 3D magnetic reconnections in the solar and the Earth's magnetospheric environment <cit.>.
We present a scalable pipeline for topological data analysis and graph representation of 3D magnetic vector fields. First, we introduce spatial null graphs, a graph representation that can be used to characterize the topology of a magnetic field. In addition, to encode the temporal evolution of the magnetic field, we extend this concept with spatiotemporal null graphs. Finally, we present the spatial and spatiotemporal null graphs produced by the topological analysis of the magnetic vector field in the Earth's magnetotail. For this purpose, we use 3D global simulations produced by Vlasiator, a supercomputer-scale Vlasov theory-based model that incorporates the solar wind – magnetosphere – ionosphere system
<cit.>. The constructed graphs enable the use of topological information as input for machine learning methods such as (spatiotemporal) graph neural networks (GNNs) <cit.>.
§ MAGNETIC FIELD TOPOLOGY
This section introduces some concepts of vector field topology. For a general introduction, see <cit.>; for reviews focused on magnetic fields and magnetic reconnection in particular, see <cit.>.
The magnetic field on a location with spatial coordinates x⃗ = (x,y,z) ∈ ℝ^3 can be represented as a vector field B⃗(x⃗) = (B_x(x⃗), B_y(x⃗), B_z(x⃗)). According to Gauss's law for magnetism, the field has zero divergence everywhere
∇·B⃗ = ∂ B_x/∂ x + ∂ B_y/∂ y + ∂ B_z/∂ z ≡ 0.
From a topological perspective, magnetic nulls, points where the magnetic field vanishes, ‖B⃗(x⃗_0)‖_2 = 0, are of special interest. At such points, the structure of the local field can be characterized by forming a first-order Taylor approximation around x⃗_0:
B⃗(x⃗) = J(x⃗_0) (x⃗ - x⃗_0) + o(‖x⃗ - x⃗_0‖_2),
where J = ∇B⃗ is the Jacobian of the magnetic field.
The topology of the field is characterized by the magnetic skeleton, which comprises of the magnetic nulls, separatrix surfaces delineating distinct magnetic domains, and separator curves formed on the intersections of separatrix surfaces <cit.>.
To extract the magnetic skeleton of a field, we use the Visualization Toolkit (VTK) <cit.> – an open source software package for scientific data analysis and visualization.
The VTK vector field topology filter <cit.> is a later extension to the package that adds the functionality for computing the main elements of the topological skeletons of 2D and 3D vector fields.
§.§ Magnetic nulls
Magnetic nulls can be classified into different types that characterize their topology, according to the eigenvalues of the Jacobian matrix J of the vector field <cit.>.
Given the three eigenvalues of the Jacobian, (λ_1, λ_2,λ_3) ∈ℂ^3, it follows from Eq. (<ref>) that their sum is equal to zero:
λ_1 + λ_2 + λ_3 = 0,
and, therefore,
two of the eigenvalues must have the same sign while the third one is of the opposite sign[We do not consider degenerate nulls where one or more of the eigenvalues is exactly zero. Such points are physically unstable <cit.> and can be handled as special cases of one or more of the four types we introduce here.]. Moreover, while the eigenvalues can be complex-valued, the eigenvalues with non-zero imaginary parts always come in pairs of complex conjugates, so that their real parts are the same, and the third eigenvalue is a real number of the opposite sign.
Due to the above constraints, each null can be classified in terms of its polarity; if the two same-sign eigenvalues are negative, the null is classified as a negative null, otherwise it is a positive null <cit.>. Furthermore, magnetic nulls with complex eigenvalues exhibit a spiraling topology <cit.>. Figure <ref> illustrates the resulting classification into four types, where types A and B represent (non-spiraling) topologies of negative and positive magnetic nulls, while types As and Bs encode spiraling topologies of negative and positive magnetic nulls, respectively.
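A minimal sketch of this classification rule is given below; it assumes the 3×3 Jacobian has already been estimated at the null (e.g., by finite differences on the simulation grid) and ignores the degenerate cases discussed above.

# Sketch: classify a non-degenerate magnetic null from the eigenvalues of
# the field Jacobian, following the A / B / As / Bs scheme above.
import numpy as np

def classify_null(jacobian, imag_tol=1e-12):
    # jacobian: 3x3 array of dB_i/dx_j evaluated at the null point.
    eigvals = np.linalg.eigvals(jacobian)
    spiral = bool(np.any(np.abs(eigvals.imag) > imag_tol))
    # Two eigenvalues share the sign of their real parts; the third is opposite.
    negative = int(np.sum(eigvals.real < 0)) == 2
    if negative:
        return "As" if spiral else "A"
    return "Bs" if spiral else "B"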
§.§ Separatrices and separators
The eigenvectors of the Jacobian can be used to define the so-called separatrices that are associated with the magnetic nulls <cit.>. Each non-degenerate null has one 2D separatrix or fan surface and two one-dimensional virtual separatrices or spines <cit.>. The fan surface is defined by the infinitely many magnetic field lines within the plane spanned by the two eigenvectors corresponding to the same-sign eigenvalues. The two spine field lines end in the magnetic null point, entering
along the directions parallel and antiparallel to the third eigenvector, normal to the fan plane <cit.>.
In physical simulations, magnetic nulls connect via separator curves (or reconnection lines) formed by the intersection of the fan surfaces of two connected nulls <cit.>. However, the process of integrating the separatrices to find their intersection can be computationally very expensive, which is why separators are usually approximated <cit.>.
In order to approximate the separators, we choose to follow the 2D null lines: curves along which two of the magnetic field components are zero, while the third can vary. <cit.> found that with a small guide-field, 3D reconnection is well-approximated by a 2D system. Specifically, in the magnetotail environment, we ignore the East-West component of the magnetic field, B_y, since it tends to exhibit the least variation as the guide-field component, and we require that the other two components are zero, B_x = B_z = 0. We call such points 2D nulls and use the term proper null to refer to points where ‖B⃗‖_2 = 0, which are clearly a subset of the 2D nulls. We provide a modification of the existing VTK vector field topology filter to detect such 2D nulls.
§ GRAPH REPRESENTATIONS
Originally introduced by <cit.>, null graphs are graph representations that characterize the topology of a magnetic field by encoding the connectivity between proper nulls (vertices) via separators (edges).
We extend their definition to construct a graph representation that can be useful for different machine learning tasks and downstream applications, such as spatiotemporal GNNs <cit.>. We propose two computationally efficient heuristics to trace the connectivity between proper nulls both spatially and temporally.
§.§ Spatial Null Graphs
After modifying the VTK vector field topology filter to detect the 2D magnetic nulls where B_x = B_z = 0 (sec. <ref>), we construct the 2D null lines by connecting the 2D nulls to each other based on spatial proximity.
In practice, the 2D null lines can be traced by initializing at most two paths from each proper null based on a cut-off value on the maximum Euclidean distance from the proper null.
Each of the paths is then iteratively expanded by finding – within the same cut-off distance – the nearest 2D null that is not already included in any of the already traced paths. Paths terminating without reaching a proper null are considered a dead end and are discarded, while paths ending at a proper null become edges in the null graph. The type of each proper null (A / B / As / Bs) is encoded as a node feature in the graph.
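A simplified version of this tracing heuristic is sketched below; the data layout (a coordinate array plus a proper-null mask) and the handling of shared endpoints are illustrative simplifications of the actual procedure.

# Simplified tracing of one 2D null line: starting from a proper null, step
# repeatedly to the nearest unused 2D null within the cutoff until another
# proper null (a graph edge) or a dead end (discarded) is reached.
import numpy as np

def trace_path(start_idx, coords, is_proper, cutoff, used):
    # coords: (N, 3) array of 2D-null coordinates; is_proper: boolean mask.
    path = [start_idx]
    used.add(start_idx)
    current = coords[start_idx]
    while True:
        dists = np.linalg.norm(coords - current, axis=1)
        candidates = [i for i in np.argsort(dists)
                      if i not in used and dists[i] <= cutoff]
        if not candidates:
            return path, False          # dead end: discard this path
        nxt = int(candidates[0])
        path.append(nxt)
        used.add(nxt)
        if is_proper[nxt]:
            return path, True           # reached a proper null: keep as an edge
        current = coords[nxt]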
§.§ Spatiotemporal Null Graphs
Consider a bipartite graph 𝒢 = (𝒱_i, 𝒱_i+1, ℰ), where the vertex sets 𝒱_i and 𝒱_i+1 are defined by proper nulls detected at times t_i and t_i+1, respectively. The set of edges ℰ represents a (partial) matching between the magnetic nulls with the interpretation that vertices v∈𝒱_i and v'∈𝒱_i+1 are connected by an edge e=(v,v')∈ℰ if they correspond to the same proper null. The problem can be cast as an unbalanced assignment problem defined by the following maximization problem
max_ℰ∈𝔐∑_(v,v') ∈ℰ 1/w(v,v'),
where the set of allowed matchings 𝔐 is defined by requiring that each vertex appears in at most one edge, and the weight w(v,v') = ‖x⃗(v) - x⃗(v')‖_2 is the Euclidean distance between the coordinates of the proper nulls v and v'. Additional constraints, such as (i) a maximum distance constraint w_max, or (ii) matching only nulls of the same type,
can be incorporated by setting w(v,v')=-1 for any edge that violates them. We apply both constraints to obtain an initial matching and then, for some of the unmatched magnetic nulls, run a subsequent matching without constraint (ii) to account for type switches.
All vertices in 𝒱_i that remain unmatched are considered to have disappeared after step t_i. Likewise, all vertices in 𝒱_i+1 that remain unmatched are considered to have appeared before step t_i+1. According to <cit.>, proper nulls can either appear by entering the simulation domain across a boundary, or as a result of a bifurcation, in which case they appear in pairs of opposite polarity. These two cases can be distinguished based on the coordinates and types of the unmatched vertices in 𝒱_i+1.
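A minimal sketch of this matching step is given below. It solves the assignment problem with scipy's linear_sum_assignment by minimizing total distance, and handles the constraints with a large penalty instead of the w(v,v') = -1 convention used above; function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_nulls(coords_i, types_i, coords_j, types_j, w_max, same_type=True):
    """Match proper nulls between consecutive time steps t_i and t_{i+1}.

    coords_*: (N, 3) position arrays, types_*: arrays of type labels.
    Pairs farther apart than w_max (and, optionally, of different type)
    are forbidden via a large penalty; surviving pairs form the temporal
    edges, the rest are reported as disappeared/appeared nulls.
    """
    coords_i, coords_j = np.asarray(coords_i), np.asarray(coords_j)
    types_i, types_j = np.asarray(types_i), np.asarray(types_j)
    cost = np.linalg.norm(coords_i[:, None, :] - coords_j[None, :, :], axis=-1)
    forbidden = cost > w_max
    if same_type:
        forbidden |= types_i[:, None] != types_j[None, :]
    big = 1e9
    cost = np.where(forbidden, big, cost)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < big]
    disappeared = set(range(len(coords_i))) - {r for r, _ in matches}
    appeared = set(range(len(coords_j))) - {c for _, c in matches}
    return matches, disappeared, appeared
```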
§ RESULTS AND DISCUSSION
The supercomputer-generated simulations from Vlasiator provide large-scale, high-fidelity data, some of which is openly available <cit.>. To give an idea of the scale, the published Vlasiator dataset, for example, provides 170 time-steps × 1,723,328 grid points per time-step, i.e., the time series consists of a total of ∼ 293 million grid points. Working with such scales requires efficient and scalable methods, and the spatiotemporal graph representation of the data can allow for less resource-intensive machine learning approaches.
The magnetotail simulations used to produce the results presented here, consist of a grid with resolution 50 × 127 × 108 in the x (tailward–Earthward), y (East–West), and z (North–South) directions of the magnetic field, respectively. A step of one unit in any direction of the grid corresponds to 1000 km, and the time series used to generate the spatiotemporal null graphs has 1 s cadence <cit.>.
First, the modified VTK topology filter is used to detect 2D and proper nulls, and to classify the latter in types (sec. <ref>). The 2D nulls are detected using B_y as the guide-field component (sec. <ref>).
The results obtained from the first stage of the process are illustrated on the left side of Figure <ref>.
The result of the spatial tracing method (sec. <ref>) is presented on the right of Figure <ref>. The 3D null points are colored according to their type. Different types of spatial connectivity are also color-encoded, with the color of each spatial edge depending on the type of the connection.
Figure <ref> shows an example of a spatiotemporal graph, where for each time-step t_i for i ∈{0,1,2}, a 2D projection (X-Y plane) of the spatial graph is presented. The temporal edges trace the temporal evolution of each proper null across all time-steps. The colors of the temporal edges represent the type of the proper null traced over time, with the exception of the pink temporal edges which denote a type switch scenario (e.g., t_0 → t_1: B → Bs). Finally, the green circle at t_2 is used to mark a pair of proper nulls of opposite polarity that appear together before t_2 due to a bifurcation.
We have presented a scalable data analysis pipeline for the detection and spatiotemporal tracing of proper magnetic nulls. These methods allow us to characterize the topology of a 3D magnetic field using graph representations. The resulting spatiotemporal null graphs can be useful in various downstream learning tasks, especially in GNN applications <cit.>.
In the process of formulating 3D magnetic reconnection detection as a machine learning task, two potential limitations arise. If we formulate the problem as a supervised learning task, there is a severe difficulty in reliably labeling a sufficient amount of training data, as 3D magnetic reconnection remains difficult to detect and characterize. Similarly, if we were to formulate the problem as an unsupervised learning task, questions arise regarding the interpretability of results and the model performance evaluation.
Currently, we are working on a GNN approach that aims to circumvent these issues by formulating the learning task as a plasmoid[Outflows of plasma driven by the magnetic tension force of newly reconnected field lines <cit.>.] formation forecast, as their generation is linked to reconnecting plasmas <cit.>. The location of a plasmoid can be characterized using the magnetic skeleton <cit.>, which allows us to use spatiotemporal null graphs to learn when and where a plasmoid is formed. Next, in order to detect the magnetic reconnection, we can examine the reconnection rate and energy conversion rate at the possible reconnection sites located in close proximity to the newly-formed plasmoid. This work can then be extended to facilitate the study of magnetized plasmas in different settings, which is linked to a variety of open questions that can be interesting to both the astrophysics and machine learning research communities.
§ ACKNOWLEDGEMENTS
Funding in direct support of this work: Research Council of Finland grants #345635 (DAISY) and #339327 (Carrington). The authors thank the Finnish Computing Competence Infrastructure (FCCI) for supporting this work with computational and data storage resources.
|
http://arxiv.org/abs/2307.07564v1 | 20230714181236 | Euler-Maruyama approximations of the stochastic heat equation on the sphere | [
"Annika Lang",
"Ioanna Motschan-Armen"
] | math.NA | [
"math.NA",
"cs.NA",
"math.PR",
"60H35, 65C30, 60H15, 35R60, 33C55, 65M70"
] |
The stochastic heat equation on the sphere driven by additive isotropic Wiener noise is approximated by a spectral method in space and forward and backward Euler–Maruyama schemes in time. The spectral approximation is based on a truncation of the series expansion with respect to the spherical harmonic functions. Optimal strong convergence rates for a given regularity of the initial condition and driving noise are derived for the Euler–Maruyama methods. Besides strong convergence, convergence of the expectation and second moment is shown, where the approximation of the second moment converges with twice the strong rate. Numerical simulations confirm the theoretical results.
Keywords. Stochastic heat equation. Isotropic Wiener noise. Stochastic evolution on surfaces. Euler–Maruyama scheme. Spectral approximation. Strong convergence. Second moment.
§ INTRODUCTION
While stochastic partial differential equations (SPDEs) and their numerical approximations have mainly been considered in Euclidean space so far, applications motivate extending the theory to surfaces and especially to the sphere. Examples include uncertain evolution on the surface of the Earth or of cells.
Numerical methods for SPDEs have been developed and analyzed for more than two decades by now, with references for example summarized in the monographs <cit.>, but the literature on surfaces is still rare. We are only aware of the results on the sphere given in <cit.>.
To give this area a new push, we consider the stochastic heat equation
dX(t) = Δ_𝕊^2 X(t) dt + dW(t)
on the unit sphere 𝕊^2 with initial condition X(0) = X_0 ∈ L^2(Ω; L^2(𝕊^2)) driven by an additive isotropic Q-Wiener process W.
A spectral method and strong convergence for this equation has been considered in <cit.> that allows only for simulation if the stochastic convolution is computed directly with the correct distribution. It does not allow to simulate solutions for a given sample of the Q-Wiener process.
In this work we allow for computations based on samples of the Q-Wiener process by a time approximation with a forward and backward Euler–Maruyama scheme. Optimal rates for given regularity of the initial condition and noise are derived in the semigroup framework in <cit.> based on estimates for deterministic PDEs in <cit.>. We are following the Gothenburg tradition of optimal estimates and derive optimal rates for strong convergence but allow for up to O(h) for a time step size h instead of the usually shown limit of O(h^1/2).
Additionally we show convergence of the expectation and the second moment of the solution for the spectral and the Euler–Maruyama methods. While the rates for the expectation are the same as for strong convergence due to the limits of the deterministic PDE theory, we obtain twice the rate for the second moment compared to the strong convergence for a given regularity.
In our setting we are able to show all results by elementary estimates on exponential functions and their approximation. Therefore we do not require the reader to be familiar with the semigroup theory used in <cit.> but are able to illustrate numerical analysis for SPDEs and their optimal convergence in a more elementary way.
The outline of this paper is as follows: In Section <ref> we introduce the stochastic heat equation with the necessary framework, background, and its properties. Section <ref> recapitulates the spectral approximation in space presented in <cit.> and its strong convergence. We show additionally convergence of the expectation and the second moment of the equation. The forward and backward Euler–Maruyama methods are then presented in Section <ref>. Based on properties of the exponential function and its approximation, we prove optimal strong convergence rates and convergence of the expectation and the second moment, which is twice the strong rate. We conclude in Section <ref> with numerical simulations that confirm our theoretical results. Solution paths for all approximation methods are shown at <https://www.youtube.com/playlist?list=PLtvKza5x5KGN6FR5JPOey85VpdJLEeY-w>. Details on the expectation and the second moment are included in Appendix <ref> and the proofs on the estimates of the exponential functions are shown in Appendix <ref>.
§ THE STOCHASTIC HEAT EQUATION ON THE SPHERE AND ITS PROPERTIES
We consider the stochastic heat equation on the sphere on a complete filtered probability space (Ω, ℱ, (ℱ_t)_t ∈𝕋, ℙ) and a finite time interval 𝕋 = [0,T], T < + ∞,
dX(t) = Δ_𝕊^2 X(t) dt + dW(t)
with ℱ_0-measurable initial condition X(0) = X_0 ∈ L^2(Ω;L^2(𝕊^2)).
Before deriving a solution for the equation, let us introduce all necessary notation.
Let 𝕊^2 denote the unit sphere in ℝ^3, i.e.,
𝕊^2 = { x ∈ℝ^3, x = 1 },
where · denotes the Euclidean norm, and we equip it with the geodesic metric given by
d(x,y) = arccos ⟨ x,y ⟩_ℝ^3
for all x,y ∈𝕊^2. Furthermore we denote by σ the Lebesgue measure on the sphere which admits the representation
dσ (y) = sinϑ dϑ dφ
for Cartesian coordinates y ∈𝕊^2 coupled to polar coordinates (ϑ, φ) ∈ [0, π] × [0, 2 π) via the transformation y = (sinϑcosφ, sinϑsinφ, cosϑ).
To characterize the driving noise W and give properties of the Laplace–Beltrami operator Δ_^2, it is essential to introduce the set of spherical harmonic functions := (Y_ℓ,m, ℓ∈ℕ_0, m = - ℓ, … , ℓ) consisting of Y_ℓ,m : [0,π] × [0,2 π) →ℂ given by
Y_ℓ,m(ϑ,φ) = √(2 ℓ +1/4 π(ℓ - m)!/(ℓ + m)!)𝒫_ℓ,m(cosϑ) e^i m φ
for ℓ∈ℕ_0, m = 0, … , ℓ and by
Y_ℓ, m = (-1)^m Y_ℓ, -m
for m = - ℓ , … , - 1.
Here the associated Legendre functions (𝒫_ℓ,m(μ), ℓ∈ℕ_0, m = 0, … ,ℓ ) are defined by
𝒫_ℓ,m(μ) = (- 1)^m(1-μ^2)^m/2∂^m/∂μ^m𝒫_ℓ (μ)
for ℓ∈ℕ_0, m = 0,… , ℓ and μ∈ [-1,1], which are themselves characterized by the Legendre polynomials (P_ℓ, ℓ∈ℕ_0) that can for example be written by Rodrigues' formula (see, e.g., <cit.>)
P_ℓ(μ) = 2^- ℓ1/ℓ !∂^ℓ/∂μ^ℓ (μ^2-1)^ℓ
for all ℓ∈ℕ_0 and μ∈ [- 1, 1].
The spherical harmonic functions form an orthonormal basis of L^2(𝕊^2;ℂ), and its subspace L^2(𝕊^2) of all real-valued functions consists of all functions f = ∑_ℓ = 0^∞∑_m=-ℓ^ℓ f_ℓ,m Y_ℓ,m with coefficients f_ℓ,m∈ℂ satisfying
f_ℓ, m = (-1)^m f_ℓ, -m
similarly to the well-known properties of Fourier expansions of real-valued functions on . With a slight abuse of notation we switch in what follows between Cartesian and polar coordinates and set
Y_ℓ,m(y) = Y_ℓ,m(ϑ,φ)
with y = (sinϑcosφ, sinϑsinφ, cosϑ).
We define the Laplace–Beltrami operator or spherical Laplacian in terms of spherical coordinates similarly to <cit.> by
Δ_𝕊^2 = ( sinϑ)^- 1∂/∂ϑ( sinϑ∂/∂ϑ) + ( sinϑ)^- 2∂^2/∂φ^2.
It is well known that it satisfies (see, e.g., Theorem 2.13 in <cit.>)
Δ_𝕊^2 Y_ℓ,m = - ℓ(ℓ +1) Y_ℓ,m
for all ℓ∈ℕ_0, m = - ℓ, … , ℓ, i.e., the spherical harmonic functions 𝒴 are eigenfunctions of Δ_𝕊^2 with eigenvalues (- ℓ ( ℓ +1), ℓ∈ℕ_0).
On the unit sphere we define the Sobolev spaces H^s(^2) with smoothness index s ∈ via Bessel potentials as
H^s(𝕊^2) = (Id - Δ_𝕊^2 )^- s/2 L^2(𝕊^2),
with inner product given by
⟨ f,g ⟩_H^s(𝕊^2) = ⟨ (Id - Δ_𝕊^2 )^s/2 f, (Id - Δ_𝕊^2 )^s/2 g ⟩_L^2(𝕊^2).
For further details on these spaces we refer for instance to <cit.>.
The corresponding Lebesgue–Bochner spaces for p ≥ 1 are denoted by L^p(Ω;H^s(𝕊^2)) with norm
Z _L^p(Ω;H^s(𝕊^2)) = [ Z _H^s(𝕊^2)^p ]^1/p.
The last thing to introduce from (<ref>) before being able to solve it is the driving noise. Similarly to <cit.> and <cit.>, we introduce an isotropic Q-Wiener process by the series expansion, often referred to as Karhunen–Loève expansion,
W(t,y) = ∑_ℓ = 0^∞∑_m= - ℓ^ℓ a_ℓ,m(t) Y_ℓ,m(y)
= ∑_ℓ = 0^∞( √(A_ℓ)β_ℓ,0^1(t) Y_ℓ,0(y) + √(2 A_ℓ)∑_m=1^ℓ (β_ℓ,m^1(t) Re Y_ℓ,m(y) + β_ℓ,m^2(t) Im Y_ℓ,m(y))),
where ((β_ℓ,m^1,β_ℓ,m^2), ℓ∈ℕ_0, m = 0 , … , ℓ) is a sequence of independent, real-valued Brownian motions with β_ℓ,0^2 = 0 for ℓ∈ℕ_0 and (A_ℓ, ℓ∈_0) denotes the angular power spectrum. The covariance operator Q is characterized by its eigenexpansion (see, e.g., <cit.>) given by
Q Y_ℓ,m= A_ℓ Y_ℓ,m.
The regularity of W is given by the properties of Q, which in turn are described by the decay of the angular power spectrum. More specifically
W(t)_L^2(Ω;H^s(^2))^2
= (Id - Δ_𝕊^2 )^s/2 W(t)_L^2(Ω;L^2(^2))^2
= t ∑_ℓ = 0^∞ (2ℓ + 1) A_ℓ (1+ℓ(ℓ+1))^s
= t Tr((Id - Δ_𝕊^2 )^s Q),
which follows with similar calculations as in <cit.>. This expression is finite if A_ℓ≤ C ℓ^-α with α > 2(s+1) for all ℓ≥ℓ_0.
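To make the expansion concrete, the following sketch samples one increment of the truncated isotropic Q-Wiener process on a set of points on the sphere; the power-law spectrum A_ℓ = (1+ℓ)^-α, the truncation level, and the function names are illustrative assumptions, not choices made in the paper.

```python
import numpy as np
from scipy.special import sph_harm

def sample_wiener_increment(kappa, h, alpha, theta, phi, rng):
    """Sample W(t+h) - W(t) of the truncated isotropic Q-Wiener process.

    theta: colatitudes in [0, pi], phi: longitudes in [0, 2*pi), arrays of
    equal shape. A_ell = (1 + ell)**(-alpha) is only an illustrative choice
    of a summable angular power spectrum.
    """
    dW = np.zeros_like(np.asarray(theta, dtype=float))
    for ell in range(kappa + 1):
        A = (1.0 + ell) ** (-alpha)
        # note: scipy's sph_harm takes (m, ell, azimuth, colatitude)
        Y0 = sph_harm(0, ell, phi, theta).real
        dW += np.sqrt(A * h) * rng.standard_normal() * Y0
        for m in range(1, ell + 1):
            Y = sph_harm(m, ell, phi, theta)
            b1, b2 = rng.standard_normal(2)
            dW += np.sqrt(2.0 * A * h) * (b1 * Y.real + b2 * Y.imag)
    return dW

# usage sketch: rng = np.random.default_rng(0)
# dW = sample_wiener_increment(kappa=16, h=1e-2, alpha=3.0, theta=th, phi=ph, rng=rng)
```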
We are now in a position to solve the stochastic heat equation (<ref>), which reads in integral form
X(t) = X_0 +∫_0^t Δ_𝕊^2 X(s) ds + ∫_0^t dW(s)
= X_0 +∫_0^t Δ_𝕊^2 X(s) ds + W(t).
Since the spherical harmonics are an eigenbasis of Δ_^2 and Q, we expand both sides in and obtain
∑_ℓ = 0^∞∑_m= -ℓ^ℓ X_ℓ,m(t) Y_ℓ,m = ∑_ℓ = 0^∞∑_m= -ℓ^ℓ X_ℓ,m^0 Y_ℓ,m + ∫_0^t X_ℓ,m(s) Δ_𝕊^2 Y_ℓ,m ds + a_ℓ,m(t) Y_ℓ,m
= ∑_ℓ = 0^∞∑_m= -ℓ^ℓ(X_ℓ,m^0 - ℓ(ℓ+1) ∫_0^t X_ℓ,m(s) ds + a_ℓ,m(t) ) Y_ℓ,m ,
for the corresponding coefficients X_ℓ,m(t) = ⟨ X(t), Y_ℓ,m⟩_L^2(𝕊^2;) of the series expansion. The solution is then given by the solutions (X_ℓ,m, ℓ∈_0, m=-ℓ,…,ℓ) to the system of Ornstein–Uhlenbeck processes
X_ℓ,m(t) = X_ℓ,m^0 - ℓ ( ℓ +1) ∫_0^t X_ℓ,m(s) ds + a_ℓ,m(t),
which are obtained by the variation of constants formula
X_ℓ , m(t) = e^ - ℓ (ℓ +1)t X_ℓ,m^0 + ∫_0^t e^- ℓ (ℓ +1)(t-s) da_ℓ,m(s).
In order to simulate real-valued solutions in later sections using the expansion (<ref>), we need to reformulate the equations in the real and imaginary part. Using (<ref>) and noting that X_ℓ,0 and Y_ℓ,0 are real-valued for all ℓ∈_0, we obtain
∑_ℓ = 0^∞∑_m= -ℓ^ℓ X_ℓ,m(t) Y_ℓ,m
= ∑_ℓ = 0^∞( X_ℓ,0(t) Y_ℓ,0 + ∑_m=1^ℓ 2 Re(X_ℓ,m(t)) Re(Y_ℓ,m) - 2 Im(X_ℓ,m(t)) Im(Y_ℓ,m) ).
This yields for our system of stochastic differential equations (<ref>) using (<ref>)
X_ℓ,0(t) = X_ℓ,0^0 - ℓ (ℓ +1) ∫_0^t X_ℓ,0(s) ds + √(A_ℓ)β_ℓ,0^1(t),
Re(X_ℓ,m(t)) = Re(X_ℓ,m^0) - ℓ (ℓ +1) ∫_0^t Re(X_ℓ,m(s)) ds + √(2^-1A_ℓ) β_ℓ,m^1(t),
Im(X_ℓ,m(t)) = Im(X_ℓ,m^0) - ℓ (ℓ +1) ∫_0^t Im(X_ℓ,m(s)) ds + √(2^-1 A_ℓ) β_ℓ,m^2(t).
We can write the equations based on increments for m=0 (and similarly for m>0 with coefficients √(2^-1A_ℓ) instead)
X_ℓ,0(t) = X_ℓ,0(r) - ℓ (ℓ +1) ∫_r^t X_ℓ,0(s) ds + √(A_ℓ)∫_r^t dβ_ℓ,0^1(s),
as well as the solutions recursively
X_ℓ,0(t) = e^-ℓ(ℓ+1)(t-r) X_ℓ,0(r)+ √(A_ℓ)∫_r^t e^-ℓ(ℓ+1)(t-s) dβ_ℓ,0^1(s).
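A minimal sketch of how this recursion can be used to simulate one coefficient path exactly is given below; the function name, the input conventions, and the handling of the ℓ = 0 mode are illustrative assumptions.

```python
import numpy as np

def exact_coefficient_path(ell, A_ell, x0, h, n_steps, rng):
    """Exact simulation of the coefficient X_{ell,0} on an equidistant grid.

    Over one step the stochastic convolution is Gaussian with mean zero and
    variance A_ell*(1 - exp(-2*lam*h))/(2*lam), lam = ell*(ell+1); for
    ell = 0 (lam = 0) the recursion reduces to a scaled Brownian motion.
    """
    lam = ell * (ell + 1)
    decay = np.exp(-lam * h)
    var = A_ell * h if lam == 0 else A_ell * (1.0 - np.exp(-2.0 * lam * h)) / (2.0 * lam)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = decay * x[k] + np.sqrt(var) * rng.standard_normal()
    return x
```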
By straight forward computations, which we add for completeness in Appendix <ref>, we obtain that the expectation of the solution is given by
[X(t)]
= ∑_ℓ = 0^∞∑_m = - ℓ^ℓ e^- ℓ (ℓ +1)t[X_ℓ,m^0] Y_ℓ,m,
and the second moment satisfies
[X(t)^2_L^2(^2)]
= ∑_ℓ = 0^∞( ∑_m = -ℓ^ℓ e^-2 ℓ (ℓ +1)t[|X_ℓ,m^0|^2] Y_ℓ,m^2_L^2(𝕊^2;))
+ A_ℓ1+ 2ℓ/2ℓ(ℓ+1) (1 - e^-2 ℓ (ℓ+1)t).
§ SPECTRAL APPROXIMATION IN SPACE
We start with the approximation in space by the spectral method used in <cit.>. We recall the strong convergence and derive the error in the expectation and second moment.
We approximate the solution by truncating the series expansion (<ref>) with the given solutions (<ref>) at a given κ > 0, i.e., we set
X^(κ)(t)
= ∑_ℓ = 0^κ∑_m = -ℓ^ℓ( e^-ℓ(ℓ+1)t X_ℓ,m^0 + ∫_0^t e^-ℓ(ℓ+1)(t-s) da_ℓ,m(s) ) Y_ℓ,m.
Analogously to the calculations in Appendix <ref> we derive the expectation
[X^(κ)(t)]
= ∑_ℓ = 0^κ∑_m = - ℓ^ℓ e^- ℓ (ℓ +1)t[X_ℓ,m^0] Y_ℓ,m,
and the second moment of the spectral approximation
[X^(κ)(t)^2_L^2(^2)]
= ∑_ℓ = 0^κ( ∑_m = -ℓ^ℓ e^-2 ℓ (ℓ +1)t[|X_ℓ,m^0|^2] Y_ℓ,m^2_L^2(𝕊^2;ℂ))
+ A_ℓ1+ 2ℓ/2ℓ(ℓ+1) (1 - e^-2 ℓ (ℓ+1)t).
Strong convergence of the spectral approximation was already shown in Lemma 7.1 in <cit.>. We state the result here with respect to the initial condition which is of interest in the next section. The constants follow immediately from the proof in <cit.>.
Let t ∈𝕋.
Furthermore assume that there exist ℓ_0 ∈ℕ, α > 0, and a constant C>0 such that the angular power spectrum (A_ℓ, ℓ∈ℕ_0) satisfies A_ℓ≤ C ·ℓ^-α for all ℓ > ℓ_0.
Then the strong error of the approximate solution X^(κ) is bounded uniformly in time and independently of a time discretization by
X(t) - X^(κ)(t)_L^2(Ω;L^2(𝕊^2))≤ e^-(κ+1)(κ+2)tX_0_L^2(Ω;L^2(𝕊^2)) + Ĉ·κ^-α/2
for all κ≥ℓ_0 and a constant Ĉ depending on C and α.
We continue with the convergence of the expectation and the second moment of the equation. Since the solution is Gaussian conditioned on the initial condition, these are important quantities to characterize it.
Let t ∈𝕋.
Furthermore assume that there exist ℓ_0 ∈ℕ, α > 0, and a constant C>0 such that the angular power spectrum (A_ℓ, ℓ∈ℕ_0) satisfies A_ℓ≤ C ·ℓ^-α for all ℓ > ℓ_0.
Then the expectation of the approximate solution X^(κ) is bounded for all κ≥ℓ_0 uniformly in time and independently of a time discretization by
[X(t)] - [X^(κ)(t)]_L^2(𝕊^2)≤ e^-(κ+1)(κ+2)t[X_0]_L^2(𝕊^2).
The error of the second moment is bounded by
| [ X(t) ^2_L^2(𝕊^2) - X^(κ)(t) ^2_L^2(𝕊^2)] |
≤ 2 · e^- 2(κ+1)(κ+2)t X_0_L^2(Ω;L^2(𝕊^2))^2
+ Ĉ·κ^-α
for all κ≥ℓ_0, where Ĉ depends on C and α.
Given the exact formulation of the expectation of the solution (<ref>), the error is given by
[X(t)] - [X^(κ)(t)]_L^2(^2)
= ∑_ℓ = κ + 1^∞∑_m = - ℓ^ℓ e^- ℓ (ℓ +1)t [X_ℓ,m^0] Y_ℓ,m_L^2(^2),
which is bounded in the same way as in Lemma <ref> (see <cit.>) by
∑_ℓ = κ + 1^∞∑_m = - ℓ^ℓ e^- ℓ (ℓ +1)t [X_ℓ,m^0] Y_ℓ,m_L^2(^2)^2
= ∑_ℓ=+1^∞∑_m= - ℓ^ℓ e^- 2ℓ(ℓ+1)t [X_ℓ,m^0] Y_ℓ, m_L^2(^2;)^2
≤ e^- 2(+1)(+2)t[X_0]_L^2(^2)^2
and finishes the proof of the first part of the lemma.
Using the same computation as in the proof of <cit.>, one obtains for the second moment
| [ X(t) ^2_L^2(^2) - X^(κ)(t) ^2_L^2(^2)] |
= X(t) - X^(κ)(t)_L^2(;L^2(^2))^2,
and applying Lemma <ref> yields the claim.
Having convergence results for the semidiscrete approximation at hand, we are now ready to look at time discretizations and fully discrete approximations in the next section.
§ EULER–MARUYAMA APPROXIMATION IN TIME
We have seen in the previous section that we can approximate the solution to (<ref>) by the spectral approximation (<ref>). Computations are only possible in practice if simulating the stochastic convolutions directly. Since we know the distribution of the stochastic convolutions, this can be done (see <cit.> for details). If we want to simulate the solution for a given sample of the Q-Wiener process W, we need to take another approach. In this section we introduce forward and backward Euler–Maruyama schemes based on samples of W and show their convergence.
Let 0 = t_0 < t_1 < … < t_n = T, n ∈ℕ, be an equidistant time grid with step size h. The forward Euler approximation of the exponential function e^-ℓ(ℓ+1)h is given by
ξ = (1 - ℓ (ℓ +1)h).
In the later convergence analysis, we will need properties of this approximation that separate the behavior of growing ℓ and h going to zero. These estimates have been shown in the abstract semigroup framework, e.g., in <cit.> and based on <cit.>. We are able to show these optimal regularity results based on elementary computations. The proof of the following proposition is given in Appendix <ref>.
The exponential function and its approximation by the forward Euler approximation satisfy the following properties:
* For all μ∈ (0,1], there exists a constant C_μ >0 such that for all ℓ∈ and h > 0
| e^-ℓ(ℓ+1)h - (1-ℓ(ℓ+1)h) |
≤ C_μ (ℓ(ℓ+1))^1+ μ h^1+ μ.
* For all μ∈ (0,1], there exists a constant C_μ >0 such that for all ℓ, k ∈ and h > 0 with ℓ(ℓ+1)h ≤ 1
| e^- ℓ ( ℓ +1) h· k - (1-ℓ (ℓ +1)h)^k |
≤ C_μ (ℓ(ℓ+1))^1+ μ h^1+ μ k e^-ℓ(ℓ+1)h· (k-1)
≤ C_μ (ℓ(ℓ+1))^μ h^μ.
Following <cit.>, stability is guaranteed if there exists K ≥ 1 such that for all h > 0 and all ℓ∈ℕ_0
| 1 - ℓ ( ℓ +1) h | ≤ K.
Therefore this forward approximation will only lead to a stable scheme if
h ≤ | ℓ ( ℓ +1) |^-1,
which restricts the time step size h by the truncation index κ.
The backward Euler approximation of the exponential function e^-ℓ(ℓ+1)h is given by
ξ = (1 + ℓ (ℓ +1)h)^-1,
which is unconditionally stable since
| (1 + ℓ (ℓ +1)h)^-1| ≤ K
for any K ≥ 1.
We prove analogous results to Proposition <ref> also for the backward scheme in Appendix <ref>, which are stated in the following proposition.
The exponential function and its approximation by the backward Euler approximation satisfy the following properties:
* For all μ∈ (-1,1], there exists a constant C_μ >0 such that for all ℓ∈ and h > 0
| e^-ℓ(ℓ+1)h - (1+ℓ(ℓ+1)h)^-1 |
≤ C_μ (ℓ(ℓ+1))^1+μ h^1+μ.
* For all μ∈ (-1,1], there exists a constant C_μ >0 such that for all ℓ, k ∈ and h > 0 with ℓ(ℓ+1)h ≤ C_c
| e^- ℓ ( ℓ +1) h· k - (1+ℓ(ℓ+1)h)^-k |
≤ C_μ (ℓ(ℓ+1))^1+μ h^1+μ k e^-ℓ(ℓ+1)h· (k-1)
≤ C_μ (ℓ(ℓ+1))^μ h^μ.
Applying the forward and backward approximation to (<ref>) for m=0, we obtain the Euler–Maruyama method for the forward scheme
X_ℓ,0^(h)(t_k) = (1-ℓ(ℓ+1)h) X_ℓ,0^(h)(t_k-1) + √(A_ℓ)Δβ_ℓ,0^1(t_k),
where Δβ_ℓ,m^1(t_k) = β_ℓ,m^1(t_k) - β_ℓ,m^1(t_k-1) denotes the increment of the Brownian motion.
Similarly the backward scheme is given by
X_ℓ,0^(h)(t_k) = (1 + ℓ (ℓ +1)h)^-1( X_ℓ,0^(h)(t_k-1) + √(A_ℓ)Δβ_ℓ,0^1(t_k)).
We write both schemes in one by
X_ℓ,0^(h)(t_k) = ξ X_ℓ,0^(h)(t_k-1) + ξ^δ√(A_ℓ)Δβ_ℓ,0^1(t_k),
where δ = 0 in the forward scheme and δ = 1 in the backward scheme.
Recursively, this leads to the representation
X_ℓ,0^(h)(t_k) = ξ^k X_ℓ,0^0 + √(A_ℓ)∑_j=1^k ξ^k-j+δΔβ_ℓ,0^1(t_j).
The equations for m>0 are obtained in the same way.
Our Euler–Maruyama approximation of (<ref>) is given by
X^(κ,h)(t_k)
= ∑_ℓ=0^κ X_ℓ,0^(h)(t_k) Y_ℓ,0
+ 2 ∑_m = 1^ℓRe(X_ℓ,m^(h)(t_k)) Re(Y_ℓ,m) - Im(X_ℓ,m^(h)(t_k)) Im(Y_ℓ,m).
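For illustration, one combined forward/backward Euler–Maruyama step for the coefficient system could be implemented as follows; the vectorized layout of the modes and the function name are assumptions, and for the forward scheme the step size must additionally satisfy the stability restriction h ≤ (κ(κ+1))^-1 discussed above.

```python
import numpy as np

def euler_maruyama_step(x, dbeta, A, ell, h, backward=False):
    """One forward or backward Euler-Maruyama step for the coefficient system.

    x, dbeta, A, ell: arrays over all modes, holding the current coefficients,
    the Brownian increments over the step, the noise coefficients (A_ell for
    m = 0, A_ell/2 for the real/imaginary parts with m > 0) and the degrees.
    For the forward scheme, h must also satisfy h <= 1/(kappa*(kappa+1)).
    """
    lam = ell * (ell + 1)
    if backward:                          # xi = (1 + lam*h)^(-1), delta = 1
        return (x + np.sqrt(A) * dbeta) / (1.0 + lam * h)
    return (1.0 - lam * h) * x + np.sqrt(A) * dbeta   # forward scheme, delta = 0
```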
Plugging the representation (<ref>) into (<ref>), observing that all stochastic increments have expectation zero, and rewriting the real and imaginary parts in terms of Y_ℓ,m, we derive the expectation of the Euler–Maruyama method
[X^(κ,h)(t_k)]
= ∑_ℓ = 0^κ∑_m = - ℓ^ℓξ^k [X_ℓ,m^0] Y_ℓ,m.
For the second moment, we proceed similarly for the first term in (<ref>) and use the properties of the independent stochastic increments to obtain
[X^(κ, h)(t_k)^2_L^2(𝕊^2)]
= ∑_ℓ = 0^κ( ∑_m = - ℓ^ℓξ^2k[|X_ℓ,m^0|^2] Y_ℓ,m^2_L^2(𝕊^2;ℂ)) + A_ℓ (1+ 2ℓ) ∑_j = 1^k ξ^2(k-j+δ) h.
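Since the second moment of the Euler–Maruyama approximation is available in closed form, it can be compared with the truncated exact second moment without any Monte Carlo sampling. The sketch below evaluates both for zero initial data and the illustrative spectrum A_ℓ = ℓ^-α; the function names and the spectrum are assumptions, not part of the paper.

```python
import numpy as np

def em_second_moment(kappa, alpha, h, k, backward=False):
    """Closed-form second moment of the Euler-Maruyama approximation at
    t_k = k*h for zero initial data and A_ell = ell**(-alpha), ell >= 1."""
    total = 0.0
    for ell in range(1, kappa + 1):
        lam = ell * (ell + 1)
        A = float(ell) ** (-alpha)
        xi, delta = ((1.0 / (1.0 + lam * h), 1) if backward
                     else (1.0 - lam * h, 0))
        conv = h * sum(xi ** (2 * (k - j + delta)) for j in range(1, k + 1))
        total += A * (1 + 2 * ell) * conv
    return total

def exact_second_moment(kappa, alpha, t):
    """Second moment of the truncated exact solution for zero initial data."""
    return sum(float(ell) ** (-alpha) * (1 + 2 * ell)
               * (1.0 - np.exp(-2.0 * ell * (ell + 1) * t))
               / (2.0 * ell * (ell + 1))
               for ell in range(1, kappa + 1))
```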
As a last prerequisite for our convergence analysis, we need regularity properties of exponential functions. As for the approximation properties in the previous propositions, the proof of the following results can be found in Appendix <ref>.
Assume that ℓ(ℓ+1)h ≤ C_c.
The exponential function satisfies the following regularity estimates:
* For all μ∈ (0,1], there exists a constant C_μ such that for all t_k > 0
∑_j = 1^k ∫_t_j-1^t_j (e^ - ℓ (ℓ+1)(t_k-s) - e^ - ℓ (ℓ+1)(t_k-t_j-1))^2 s
≤ C_μ (ℓ(ℓ+1))^2μ-1 h^2μ.
* For all μ∈ (0,1], there exists a constant C_μ such that for all t_k > 0
∑_j = 1^k ∫_t_j-1^t_j (e^ - ℓ (ℓ+1)(t_k-s) - e^ - ℓ (ℓ+1)(t_k-t_j))^2 s
≤ C_μ (ℓ(ℓ+1))^2μ-1 h^2μ.
* For all μ∈ [0,1], there exists a constant C_μ such that for all t_k > 0
| ∑_j = 1^k ∫_t_j-1^t_j e^ - 2ℓ (ℓ+1)(t_k-s) - e^ - 2ℓ (ℓ+1)(t_k-t_j-1) s |
≤ C_μ (ℓ(ℓ+1))^μ - 1 h^μ.
* For all μ∈ [0,1], there exists a constant C_μ such that for all t_k > 0
| ∑_j = 1^k ∫_t_j-1^t_j e^- 2 ℓ (ℓ+1)(t_k-s) - e^ - 2 ℓ (ℓ+1)(t_k-t_j) s |
≤ C_μ (ℓ(ℓ+1))^μ - 1 h^μ.
Having all basic estimates at hand, we are now ready to prove strong convergence with optimal rates for additive noise given the regularity of the initial condition and the noise.
The proofs are inspired by <cit.> but bring the semigroup theory and estimates going back to <cit.> to an elementary level.
Assume that there exist α > 0 and a constant C>0 such that the angular power spectrum (A_ℓ, ℓ∈ℕ_0) satisfies A_ℓ≤ C ·ℓ^-α for ℓ > 0 and that X_0 ∈ L^2(Ω;H^η(𝕊^2)) for some η > 0.
Then for all κ∈ℕ and h > 0 such that κ(κ+1)h ≤ C_c, the strong error between X^(κ) and X^(κ,h) is uniformly bounded for some constant Ĉ on all time grid points t_k by
X^(κ)(t_k) - X^(κ,h)(t_k) _L^2(Ω;L^2(𝕊^2))≤Ĉ( h^min{1,η/2}X_0_H^η(𝕊^2) + h^min{1,α/4}).
Using the truncated version of (<ref>) and (<ref>), we write the error in the real and imaginary parts as
X^(κ)(t_k) - X^(,h)(t_k) _L^2(Ω;L^2(𝕊^2))^2
= ∑_ℓ = 1^κ([ | X_ℓ,0(t_k) - X_ℓ,0^(h)(t_k) |^2 ] Y_ℓ,0_L^2(^2)^2
+ 2 ∑_m = 1^ℓ( [ | (X_ℓ,m(t_k)) - (X_ℓ,m^(h)(t_k)) |^2 ] Y_ℓ,m_L^2(^2)^2
+ [ | (X_ℓ,m(t_k)) - (X_ℓ,m^(h)(t_k)) |^2 ] Y_ℓ,m_L^2(^2)^2 ) ).
The first difference satisfies with the recursive formulations (<ref>) and (<ref>) for m=0 that
[ | X_ℓ,0(t_k) - X_ℓ,0^(κ,h)(t_k) |^2 ]
= [ | ( e^- ℓ ( ℓ +1) t - ξ^k ) X_ℓ,0^0 + √(A_ℓ)( ∫_0^t_k e^- ℓ ( ℓ +1)( t_k-s) β_ℓ,0^1(s) - ∑_j=1^k ξ^k-j+Δβ_ℓ,0^1(t_j) ) |^2 ]
= ( e^- ℓ ( ℓ +1) t_k - ξ^k )^2 [ | X_ℓ,0^0 |^2 ] + A_ℓ[ | ∑_j=1^k ∫_t_j-1^t_j e^- ℓ ( ℓ +1)( t_k-s) - ξ^k-j+ β_ℓ,0^1(s) |^2 ] ,
where we used that the mixed term vanishes due to the mean zero of the Gaussian increments and that Δβ_ℓ,0^1(t_j) = ∫_t_j-1^t_jβ_ℓ,0^1(s).
The first term is bounded by Proposition <ref> <ref> and Proposition <ref> <ref>, respectively, by
( e^- ℓ ( ℓ +1) t_k - ξ^k )^2 [ | X_ℓ,0^0 |^2 ]
≤( C_η/2 ( ℓ ( ℓ +1))^η/2 h^η/2)^2 [ | X_ℓ,m^0 |^2],
for η∈ (0,2], and exploiting regularity we obtain
( e^- ℓ ( ℓ +1) t_k - ξ^k )^2 [ | X_ℓ,0^0 |^2 ] Y_ℓ,0_L^2(^2;ℂ)^2
≤ C_η/2^2 h^η [ | X_ℓ,0^0 |^2] Y_ℓ,0_H^η(^2)^2.
Applying the Itô isometry to the second term yields
[ | ∑_j=1^k ∫_t_j-1^t_j e^- ℓ ( ℓ +1)( t_k-s) - ξ^k-j+ β_ℓ,0^1(s) |^2 ]
= ∑_j=1^k ∫_t_j-1^t_j( e^- ℓ ( ℓ +1)( t_k-s) - ξ^k-j+)^2 s
≤ 2 ∑_j = 1^k ∫_t_j-1^t_j( e^- ℓ ( ℓ +1)(t_k-s) - e^- ℓ ( ℓ +1)(t_k-t_j-))^2 s + ∫_t_j-1^t_j(e^- ℓ ( ℓ +1)(t_k-t_j-) - ξ^k-j+)^2 s
≤ 2 C_μ (ℓ(ℓ+1))^2μ - 1 h^2μ + 2 ∑_j = 1^k∫_t_j-1^t_j(e^- ℓ ( ℓ +1)(t_k-t_j-) - ξ^k-j+)^2 s,
where we applied Proposition <ref> <ref> and <ref> in the last step for μ∈ (0,1]. Using the first inequality in Proposition <ref> <ref> and Proposition <ref> <ref> for μ = 1, respectively, we bound the last term by
∑_j = 1^k∫_t_j-1^t_j(e^- ℓ ( ℓ +1)(t_k-t_j-) - ξ^k-j+)^2 s
≤∑_j = 1^k∫_t_j-1^t_j( C_1 (ℓ(ℓ+1))^2 h^2 (k-j+) e^-ℓ(ℓ+1)h· (k-j+-1))^2 s
= C_1^2 (ℓ(ℓ+1))^4 h^2 h ∑_j = 1^k (h(k-j+))^2 e^-2ℓ(ℓ+1)h· (k-j+ - 1).
The key estimate for optimal rates with respect to the regularity of the driving noise is to bound the sum
h ∑_j = 1^k (h(k-j+))^2 e^-2ℓ(ℓ+1)h· (k-j+ - 1)
= h ∑_j = 0^k-1 (h(j+))^2 e^-2ℓ(ℓ+1)h· (j+ - 1)≤ e^2 C_c∫_0^∞ (s+h)^2 e^-2ℓ(ℓ+1)s s
= e^2 C_c( h^2/2ℓ(ℓ+1) + h/2(ℓ(ℓ+1))^2 + 1/4(ℓ(ℓ+1))^3)
by an integral, which holds since ℓ(ℓ+1)h ≤ C_c and the integral is decaying for s ≥max{1,(ℓ(ℓ+1))^-1-h}.
Plugging this bound in and resorting, we obtain
∑_j = 1^k∫_t_j-1^t_j(e^- ℓ ( ℓ +1)(t_k-t_j-) - ξ^k-j+)^2 s
≤ C_1^2 e^2 C_c (ℓ(ℓ+1))^2μ-1 h^2μ( (hℓ(ℓ+1))^4-2μ + (hℓ(ℓ+1))^3-2μ + (hℓ(ℓ+1))^2-2μ)
≤C̃ (ℓ(ℓ+1))^2μ-1 h^2μ,
where we used in the last inequality that ℓ(ℓ+1)h ≤ C_c.
In conclusion we have shown that
[ | X_ℓ,0(t_k) - X_ℓ,0^(h)(t_k) |^2 ] Y_ℓ,0_L^2(^2)^2
≤ C_μ^2 h^2μ [ | X_ℓ,0^0 |^2] Y_ℓ,0_H^2μ(^2)^2
+ 4 C̃ A_ℓ (ℓ(ℓ+1))^2μ-1 h^2μY_ℓ,0_L^2(^2)^2.
The terms for m > 0 are bounded in the same way.
Putting all parts of (<ref>) together, we bound
X^(κ)(t_k) - X^(,h)(t_k) _L^2(Ω;L^2(𝕊^2))^2
≤ C_η^2 h^2ηX_0_H^2η(^2)^2 + 4 C̃ h^2μ∑_ℓ = 1^ A_ℓ (2ℓ+1) (ℓ(ℓ+1))^2μ-1
and conclude with the observation that the last term satisfies
∑_ℓ = 1^κ A_ℓ (2ℓ+1) ( ℓ ( ℓ +1))^2μ - 1≤ C ∑_ℓ = 1^κℓ^-α + 1 + 4μ - 2≤ C κ^4μ - α,
which is bounded for μ≤α/4. Since μ∈ (0,1], the claim follows.
Putting Lemma <ref> and Theorem <ref> together, the total error is bounded by
X(t_k) - X^(κ,h)(t_k) _L^2(Ω;L^2(𝕊^2))≤Ĉ( h^min{1,η/2}X_0_H^η(𝕊^2) + κ^-α/2 + h^min{1,α/4})
and the rates are balanced for α = 2 η.
Optimal rates for additive noise and multiplicative noise were derived in <cit.> and <cit.>, respectively, for convergence up to O(h^min{1,β}/2) under the assumption that X_0 ∈ L^2(Ω;H^β(𝕊^2)) and Tr((-Δ_𝕊^2)^(β-1)/2Q) < + ∞. Setting β = η = α/2, the assumptions coincide with our conditions.
Having shown strong convergence, we continue with the time discretization error of the expectation and the second moment extending Lemma <ref> to the fully discrete setting.
Assume that there exist α > 0 and a constant C>0 such that the angular power spectrum (A_ℓ, ℓ∈ℕ_0) satisfies A_ℓ≤ C ·ℓ^-α for all ℓ > 0 and that X_0 ∈ L^2(Ω;H^η(𝕊^2)) for some η > 0.
Then for all κ∈ℕ and h > 0 such that κ(κ+1)h ≤ C_c, the error of the expectation is uniformly bounded for some constant Ĉ>0 on all time grid points t_k by
[ X^(κ)(t_k) - X^(κ,h)(t_k) ]_L^2(𝕊^2)≤Ĉ h^min{1,η/2}[X_0] _H^η(𝕊^2).
The second moment satisfies under the same assumptions that
| [ X^(κ)(t_k) ^2_L^2(𝕊^2) - X^(κ,h)(t_k) ^2_L^2(𝕊^2)] |
≤Ĉ( h^min{1,η}X_0_L^2(Ω;H^η(𝕊^2))^2
+ h^min{1,α/2}).
We observe first that
[ X^(κ)(t_k) ] - [ X^(,h)(t_k) ] = ∑_ℓ = 0^κ∑_m = - ℓ^ℓ( e^- ℓ (ℓ +1)t_k - ξ^k ) [X_ℓ,m^0] Y_ℓ,m
using (<ref>) and (<ref>) combined with the linearity of the expectation.
Using Proposition <ref> <ref> or Proposition <ref> <ref>, respectively, we bound the above by
[ X^(κ)(t_k) ] - [ X^(,h)(t_k) ] _L^2(𝕊^2)^2
≤∑_ℓ = 1^κ∑_m = - ℓ^ℓ( C_η/2 ( ℓ ( ℓ +1))^η/2 h^η/2)^2 [ | X_ℓ,m^0 |^2] Y_ℓ,m_L^2(𝕊^2;)^2
≤∑_ℓ = 1^κ C_η/2^2 h^η∑_m = - ℓ^ℓ[ | X_ℓ,m^0 |^2] (1- Δ_𝕊^2 )^η/2 Y_ℓ,m_L^2(𝕊^2;)^2
≤ C_η/2^2 h^η [ X_0 ]_H^η(𝕊^2)^2
for η∈ (0,2]. Taking the square root finishes the proof of the first claim.
For the second moment, we rewrite using (<ref>) and (<ref>) to get
[ X^(κ)(t_k) ^2_L^2(𝕊^2) - X^(,h)(t_k) ^2_L^2(𝕊^2)]
= ∑_ℓ = 1^κ∑_m = - ℓ^ℓ( e^- 2 ℓ (ℓ +1)t_k - ξ^2k) [|X_ℓ,m^0|^2] Y_ℓ,m^2_L^2(𝕊^2)
+ A_ℓ (1+ 2ℓ) ( (2ℓ(ℓ+1))^-1 (1 - e^- 2 ℓ (ℓ+1)t_k) - ∑_j = 1^k ξ^2(k-j+) h )
= ∑_ℓ = 1^κ∑_m = - ℓ^ℓ( e^- 2 ℓ (ℓ +1)t_k - ξ^2k) [|X_ℓ,m^0|^2] Y_ℓ,m^2_L^2(𝕊^2)
+ A_ℓ (1+ 2ℓ) ( ∑_j = 1^k ∫_t_j-1^t_j e^- 2 ℓ (ℓ+1)(t_k-s) - ξ^2(k-j+) s ),
using in the last equation that
(2ℓ(ℓ+1))^-1 (1 - e^- 2 ℓ (ℓ+1)t_k)
= ∫_0^t_k e^- 2 ℓ (ℓ+1)(t_k-s) s
= ∑_j = 1^k ∫_t_j-1^t_j e^- 2 ℓ (ℓ+1)(t_k-s) s .
Similarly to the proof of Theorem <ref> we split
e^- 2 ℓ ( ℓ +1)(t_k-s) - ξ^2(k-j+)
= (e^- 2 ℓ ( ℓ +1)(t_k-s) - e^- 2 ℓ ( ℓ +1)(t_k-t_j-))
+ (e^- 2 ℓ ( ℓ +1)(t_k-t_j-) - ξ^2(k-j+)).
and obtain two integrals in (<ref>) which we bound separately. To the first integral we apply Proposition <ref><ref> or <ref>, respectively. The second one can be bounded in a similar way as the stochastic term in the proof of Theorem <ref>. Using the first inequality in Proposition <ref><ref> or Proposition <ref><ref> for μ = 1, respectively, and resorting the terms, we start with
| ∑_j = 1^k ∫_t_j-1^t_j e^- 2 ℓ (ℓ+1)(t_k-s) - ξ^2(k-j+) s |
≤ C_1 2 (ℓ(ℓ+1))^2 h^2 ∑_j = 1^k h(k-j+) e^-2ℓ(ℓ+1)h· (k-j+ - 1).
Again we bound the last term by the corresponding integral to obtain
h ∑_j = 1^k h(k-j+) e^-2ℓ(ℓ+1)h· (k-j+ - 1) ≤ e^2 C_c∫_0^∞ (s+h) e^-2ℓ(ℓ+1)s s
= e^2 C_c( h/2(ℓ(ℓ+1)) + 1/4(ℓ(ℓ+1))^2)
since ℓ(ℓ+1)h ≤ C_c, and conclude using the same bound that
| ∑_j = 1^k∫_t_j-1^t_j e^- 2ℓ ( ℓ +1)(t_k-t_j-) - ξ^2(k-j+) s |
≤C̃ (ℓ(ℓ+1))^μ-1 h^μ.
The first term in (<ref>) is bounded using as in the proof of Theorem <ref> Proposition <ref><ref> or Proposition <ref><ref>, respectively.
All together we get
[ X^(κ)(t_k) ^2_L^2(𝕊^2) - X^(,h)(t_k) ^2_L^2(𝕊^2)]
≤ C_min{1,η} h^min{1,η}X_0_L^2(;H^η(𝕊^2))^2
+ 2 C̃ h^μ∑_ℓ = 1^ A_ℓ (2ℓ+1) (ℓ(ℓ+1))^μ-1
for μ∈ (0,1].
We conclude by observing that the last term satisfies
∑_ℓ = 1^κ A_ℓ (2ℓ+1) (ℓ(ℓ+1))^μ-1≤ C ∑_ℓ = 1^κℓ^-α + 1 + 2μ - 1≤ C κ^2μ-α,
which is bounded for all μ≤min{1, α/2}.
Putting together Lemma <ref> and Theorem <ref>, the total errors are bounded by
[ X(t_k) - X^(κ,h)(t_k) ]_L^2(𝕊^2)≤ C h^min{1,η/2}[X_0] _H^η(𝕊^2).
and
| [ X(t_k) ^2_L^2(𝕊^2) - X^(κ,h)(t_k) ^2_L^2(𝕊^2)] |
≤ C ( h^min{1,η}X_0_L^2(Ω;H^η(𝕊^2))^2
+ κ^-α + h^min{1,α/2}).
While the error in the expectation coincides with the strong error in Theorem <ref>, due to the properties of the corresponding deterministic PDE, the error rate in the second moment is twice that of strong convergence under fixed regularity properties. We are thus able to confirm the rule of thumb that the weak rate is twice the strong one with time convergence limited by 1.
§ NUMERICAL SIMULATION
We are now ready to confirm our theoretical results from Sections <ref> and <ref> with numerical experiments. We compare the convergence rates of the different errors for the spectral approximation, the forward and the backward Euler–Maruyama scheme.
For the spectral approximation, we use a reference solution with κ = 2^10 at time T=1 and compare it to the approximations based on κ = 2^j for j = 0, … , 9. In Figure <ref> we computed the expectations of the strong error explicitly while we used 10 Monte Carlo samples in Figure <ref>. The obtained rates for α=1,…,5 coincide with those proven in Lemma <ref>. Since the error in the initial condition converges exponentially fast and we cannot see a difference in the convergence plots, we set X_0 = 0.
This exponential convergence is visible in Figure <ref>, which confirms the convergence of the expectation in Lemma <ref>. Due to the fast smoothing of the solution, we use T=0.01. Setting X_0=0 and computing the expectations explicitly, we confirm the convergence rates of the second moments from Lemma <ref> for α=1/2,1,2,3 in Figure <ref>.
Having verified the spectral convergence, it remains to simulate the time discretization with the forward and backward Euler–Maruyama scheme. For that we focus on the error between X^(κ) and X^(κ,h). We simulate on time grids with step size h = 2^-2· m for m = 1, …, 10 coupled with κ = 2^m to guarantee stability for the forward Euler–Maruyama scheme and since larger κ do not change the simulation results. As for the spectral approximations, we set X_0 = 0 to focus on the convergence with respect to the smoothness of the noise given by α. The results for the forward Euler–Maruyama scheme in Figure <ref> using the exact expectations confirm the expected convergence of O(h^min{1,α/4}) from Theorem <ref>.
Similar results are obtained for the backward Euler–Maruyama method in Figure <ref>. For completeness we added the corresponding results for the forward and backward scheme based on 10 Monte Carlo samples and with a reference solution using h = 2^-14 and κ = 2^7 in Figures <ref> and <ref>.
Figure <ref> and Figure <ref> show the simulated convergence of the expectation for η = 1/2, 1, 2, where we would expect from Theorem <ref> no convergence, convergence of rate 1/2 and 1, respectively. We used T=0.01 to minimize the smoothing over time. Still it is clear that all solutions are smooth for finite κ. Therefore the simulations all show O(h) convergence but with different error constants depending on η.
As for the strong error, we set X_0 = 0 in the simulation of the error of the second moment to focus on the convergence with respect to the noise smoothness α. In Figures <ref> and <ref> for the forward and backward Euler–Maruyama schemes, we observe convergence of O(h^min{1,α/2}), which confirms Theorem <ref> for the second moment.
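The rates reported in these figures can be estimated from the measured errors by a least-squares fit in log-log scale; the snippet below is a generic sketch of this post-processing step and does not reproduce the reference solutions themselves.

```python
import numpy as np

def empirical_rate(step_sizes, errors):
    """Least-squares slope of log(error) against log(h)."""
    slope, _ = np.polyfit(np.log(step_sizes), np.log(errors), 1)
    return slope

# e.g. with errors measured against a reference solution on h = 2**(-2*m):
# hs = [2.0 ** (-2 * m) for m in range(1, 11)]
# print(empirical_rate(hs, strong_errors))   # expected close to min(1, alpha/4)
```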
§ PROPERTIES OF THE SOLUTION
Let us consider the expectation of the solution. It holds that
[X(t)]
= [X_0] + ∫_0^t Δ_𝕊^2[X(s)] s
due to the linearity of the expectation and the mean zero property of the Q-Wiener process. Setting u(t) = [X(t)] and u_0 = [X_0], we obtain that the expectation of X is the solution to the (deterministic) PDE
∂_t u = Δ_𝕊^2 u
with initial condition u(0) = u_0.
This PDE is solved by the variations of constants formula
[X(t)]
= u(t)
= ∑_ℓ = 0^∞∑_m = - ℓ^ℓ e^- ℓ (ℓ +1)t u_ℓ,m^0 Y_ℓ,m
= ∑_ℓ = 0^∞∑_m = - ℓ^ℓ e^- ℓ (ℓ +1)t[X_ℓ,m^0] Y_ℓ,m,
where u_ℓ,m^0 = ⟨ u_0, Y_ℓ,m⟩_L^2(𝕊^2;).
Another interesting quantity of the solution is the second moment [X(t)^2_L^2(^2)]. We observe first that
[X(t)^2_L^2(^2)]
= [∑_ℓ = 0^∞∑_m = - ℓ^ℓ(e^- ℓ (ℓ +1)t X_ℓ,m^0 + ∫_0^t e^- ℓ (ℓ +1)(t-s) a_ℓ,m(s))
Y_ℓ,m_L^2(^2)^2],
where the stochastic processes a_ℓ,m are given in (<ref>). Due to the independence of the Q-Wiener process of the initial condition and the mean zero property of the Itô integral, the two terms separate. While the first term satisfies
[∑_ℓ = 0^∞∑_m = - ℓ^ℓ e^- ℓ (ℓ +1)t X_ℓ,m^0 Y_ℓ,m_L^2(^2)^2]
= ∑_ℓ = 0^∞∑_m = - ℓ^ℓ e^- 2 ℓ (ℓ +1)t[|X_ℓ,m^0|^2] Y_ℓ,m^2_L^2(𝕊^2;),
it remains to have a closer look at the stochastic convolution next. By the Itô isometry and the scaling of the spherical harmonic functions, we obtain
[∑_ℓ = 0^∞∑_m = - ℓ^ℓ∫_0^t e^- ℓ (ℓ +1)(t-s) a_ℓ,m(s))
Y_ℓ,m_L^2(^2)^2]
= [∑_ℓ = 0^∞( √(A_ℓ)∫_0^t e^- ℓ (ℓ +1)(t-s) β_ℓ,0^1(s) Y_ℓ,0
+ √(2 A_ℓ)∑_m=1^ℓ(∫_0^t e^- ℓ (ℓ +1)(t-s) β_ℓ,m^1(s) Re Y_ℓ,m + ∫_0^t e^- ℓ (ℓ +1)(t-s) β_ℓ,m^2(s) Im Y_ℓ,m)) _L^2(^2)^2]
= ∑_ℓ = 0^∞(
A_ℓ∫_0^t e^- 2 ℓ (ℓ +1)(t-s) s ( Y_ℓ,0_L^2(^2)^2 + 2 ∑_m=1^ℓ ( Y_ℓ,m_L^2(^2)^2 + Y_ℓ,m_L^2(^2)^2 ) ) )
= ∑_ℓ = 0^∞A_ℓ (2ℓ(ℓ+1))^-1 (1 - e^- 2 ℓ (ℓ+1)t) (1+ 2ℓ).
In conclusion the second moment of X(t) is given by
[X(t)^2_L^2(^2)]
= ∑_ℓ = 0^∞( ∑_m = - ℓ^ℓ e^- 2 ℓ (ℓ +1)t[|X_ℓ,m^0|^2] Y_ℓ,m^2_L^2(𝕊^2))
+ A_ℓ (1+ 2ℓ) (2ℓ(ℓ+1))^-1 (1 - e^- 2 ℓ (ℓ+1)t).
§ REGULARITY OF EXPONENTIAL FUNCTIONS AND THEIR APPROXIMATION
In this section we collect the proofs on the regularity of exponential functions and their approximation with a forward and backward Euler method from the propositions in Section <ref>.
Let us start to prove the first property <ref>.
By partial integration we obtain that
| exp(-ℓ(ℓ+1)h) - (1-ℓ(ℓ+1)h) |
= | ∫_0^h ∫_0^s (ℓ(ℓ+1))^2 e^-ℓ(ℓ+1)r r s |.
Since x^η e^-x≤C̃_η, we can bound the expression inside the integral by
(ℓ(ℓ+1))^2 e^-ℓ(ℓ+1)r≤C̃_μ (ℓ(ℓ+1))^1+ μ r^μ-1,
which leads for any μ∈ (0,1] to
| ∫_0^h ∫_0^s (ℓ(ℓ+1))^2 e^-ℓ(ℓ+1)r r s |
≤C̃_μ (ℓ(ℓ+1))^1+ μμ^-1∫_0^h s^μ s
= C̃_μ (ℓ(ℓ+1))^1+ μμ^-1 (1+ μ)^-1 h^1+ μ
= C_μ (ℓ(ℓ+1))^1+ μ h^1+ μ.
We continue with the proof of <ref> and use a^n - b^n = (a-b) ∑_j = 0^n-1 a^n-1-jb^j to obtain that
| e^- ℓ ( ℓ +1) h· k - (1-ℓ (ℓ +1)h)^k |
= |e^- ℓ ( ℓ +1) h - (1-ℓ (ℓ +1)h)|
·| ∑_j = 0^k-1 e^- ℓ ( ℓ +1) h· j (1-ℓ (ℓ +1)h)^k-1-j|.
The first term is bounded by <ref> and for the second, we observe that the Taylor expansion with remainder satisfies
e^-ℓ(ℓ+1)h = 1 - ℓ(ℓ+1)h + ∫_0^ℓ(ℓ+1) (ℓ(ℓ+1)h - s) e^-s s.
Since the integral is positive, we obtain
1 - ℓ(ℓ+1)h ≤ e^-ℓ(ℓ+1)h,
which yields
| ∑_j = 0^k-1 e^- ℓ ( ℓ +1) h· j (1-ℓ (ℓ +1)h)^k-1-j|
≤ k e^- ℓ ( ℓ +1) h· (k-1)
and implies the first inequality of the claim. The second follows by
k e^- ℓ ( ℓ +1) h· (k-1)≤ e^ℓ(ℓ+1)h C̃_1 (ℓ ( ℓ +1) h)^-1≤ e^1 C̃_1 (ℓ ( ℓ +1) h)^-1
applying again that x^η e^-x≤C̃_η and that ℓ(ℓ+1)h ≤ 1.
Similarly to the proof of Proposition <ref>, we observe first by partial integration that
| e^-ℓ(ℓ+1)h - (1+ℓ(ℓ+1)h)^-1 |
= | -(ℓ(ℓ+1))^2/1 + ℓ(ℓ+1)h∫_0^h ∫_s^h e^-ℓ(ℓ+1)r r s |.
Using again that x^η e^-x≤C̃_η to bound (ℓ(ℓ+1))^1-μ e^-ℓ(ℓ+1)r≤C̃_1-μ r^μ-1, we compute the integrals to obtain
∫_0^h ∫_s^h r^μ-1 r s
= (1+μ)^-1 h^1+μ.
Putting all together yields
| -(ℓ(ℓ+1))^2/1 + ℓ(ℓ+1)h∫_0^h ∫_s^h exp(-ℓ(ℓ+1)r) r s |
≤C̃_1-μ(ℓ(ℓ+1))^1+μ/1 + ℓ(ℓ+1)h (1+μ)^-1 h^1+μ
= C_μ (ℓ(ℓ+1))^1+μ h^1+μ
for all μ∈ (-1,1], which concludes the proof of <ref>.
Using again the same approach as in Proposition <ref> with a^n - b^n = (a-b) ∑_j = 0^n-1 a^n-1-jb^j and bounding
| ∑_j = 0^k-1 e^- ℓ ( ℓ +1) h· j (1+ℓ (ℓ +1)h)^-(k-1-j)|
≤e^C_c/1+C_c k e^- ℓ ( ℓ +1) h· (k-1)
in a similar way yields both inequalities in <ref>. The only difference is that we apply ℓ(ℓ+1)h ≤ C_c to obtain the bound
(1+ℓ(ℓ+1)h)^-1
= (1+ℓ(ℓ+1)h)^-1 e^ℓ(ℓ+1)h e^-ℓ(ℓ+1)h≤ e^C_c e^-ℓ(ℓ+1)h.
To prove <ref>, we observe first that
∫_t_j-1^t_j (e^ - ℓ (ℓ+1)(t_k-s) - e^ - ℓ (ℓ+1)(t_k-t_j-1))^2 s
= ∫_t_j-1^t_j e^ - 2ℓ (ℓ+1)(t_k-s) (1 - e^-ℓ(ℓ+1)(s-t_j-1))^2 s
≤ h e^ - 2ℓ (ℓ+1)(t_k-t_j) (1 - e^-ℓ(ℓ+1)h)^2.
Therefore we can bound
|∑_j = 1^k ∫_t_j-1^t_j (e^ - ℓ (ℓ+1)(t_k-s) - e^ - ℓ (ℓ+1)(t_k-t_j-1))^2 s|
≤ h (1 - e^-ℓ(ℓ+1)h)^2 ∑_j = 1^k e^ - 2ℓ (ℓ+1)(t_k-t_j).
Since e^ - 2ℓ (ℓ+1)h < 1 and (1-x)^-1 = ∑_j=0^∞ x^j for |x|<1, the sum satisfies
∑_j = 1^k e^ - 2ℓ (ℓ+1)(t_k-t_j) = ∑_j = 0^k-1 e^ - 2ℓ (ℓ+1)h · j≤ (1 - e^ - 2ℓ (ℓ+1)h)^-1
= (1 - e^ - ℓ (ℓ+1)h)^-1 (1 + e^ - ℓ (ℓ+1)h)^-1
≤ (1 - e^ - ℓ (ℓ+1)h)^-1.
On the one hand side, for μ∈ (1/2,1]
1 - e^-ℓ(ℓ+1)h = (ℓ(ℓ+1))^2μ-1∫_0^h (ℓ(ℓ+1))^2-2μ e^-ℓ(ℓ+1)r r
≤C̃_2-2μ(ℓ(ℓ+1))^2μ-1| ∫_0^h r^2μ -2 r |
= C̃_2-2μ(ℓ(ℓ+1))^2μ-1 (2μ-1)^-1 h^2μ-1.
For μ = 1/2 the expression is bounded and for μ∈ (0,1/2) on the other hand side
1 - e^-ℓ(ℓ+1)h≤ 1
= (ℓ(ℓ+1)h)^1-2μ(ℓ(ℓ+1)h)^2μ - 1≤ C_c (ℓ(ℓ+1)h)^2μ - 1
since ℓ(ℓ+1)h ≤ C_c. Therefore for μ∈ (0,1], the expression satisfies
1 - e^-ℓ(ℓ+1)h≤ C (ℓ(ℓ+1)h)^2μ - 1,
and putting all terms together yields the claim.
Similarly one proves <ref>.
We continue with the proof of <ref>. With the same steps as in the proof of <ref>, we arrive at
|∑_j = 1^k ∫_t_j-1^t_j e^ - 2ℓ (ℓ+1)(t_k-s) - e^ - 2 ℓ (ℓ+1)(t_k-t_j-1) s|
≤∑_j = 1^k h e^ - 2ℓ (ℓ+1)(t_k-t_j) (1 - e^-2ℓ(ℓ+1)h)
≤ h (1 - e^-2ℓ(ℓ+1)h)^-1 (1 - e^-2ℓ(ℓ+1)h)
= h (ℓ(ℓ+1)h)^μ -1 (ℓ(ℓ+1)h)^1- μ≤ C_c (ℓ(ℓ+1))^μ -1 h^μ,
which shows the claim. The proof of <ref> follows in the same way.
|
http://arxiv.org/abs/2307.05668v1 | 20230711180001 | Adiabatic dynamics of coupled spins and phonons in magnetic insulators | [
"Shang Ren",
"John Bonini",
"Massimiliano Stengel",
"Cyrus E. Dreyer",
"David Vanderbilt"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
|
http://arxiv.org/abs/2307.05131v1 | 20230711092033 | Overview of BioASQ 2023: The eleventh BioASQ challenge on Large-Scale Biomedical Semantic Indexing and Question Answering | [
"Anastasios Nentidis",
"Georgios Katsimpras",
"Anastasia Krithara",
"Salvador Lima López",
"Eulália Farré-Maduell",
"Luis Gasco",
"Martin Krallinger",
"Georgios Paliouras"
] | cs.CL | [
"cs.CL"
] |
National Center for Scientific Research “Demokritos”, Athens, Greece
{tasosnent, gkatsibras, akrithara, paliourg}@iit.demokritos.gr
Aristotle University of Thessaloniki, Thessaloniki, Greece
Barcelona Supercomputing Center, Barcelona, Spain
{salvador.limalopez, eulalia.farre, lgasco, martin.krallinger}@bsc.es
Overview of BioASQ 2023: The eleventh BioASQ challenge on Large-Scale Biomedical Semantic Indexing and Question Answering
Anastasios Nentidis1,2 Georgios Katsimpras1 Anastasia Krithara1 Salvador Lima López3 Eulália Farré-Maduell3 Luis Gasco3 Martin Krallinger3 Georgios Paliouras1
August 12, 2023
This is an overview of the eleventh edition of the BioASQ challenge in the context of the Conference and Labs of the Evaluation Forum (CLEF) 2023.
BioASQ is a series of international challenges promoting advances in large-scale biomedical semantic indexing and question answering.
This year, BioASQ consisted of new editions of the two established tasks b and Synergy, and a new task (MedProcNER) on semantic annotation of clinical content in Spanish with medical procedures, which have a critical role in medical practice.
In this edition of BioASQ, 28 competing teams submitted the results of more than 150 distinct systems in total for the three different shared tasks of the challenge.
Similarly to previous editions, most of the participating systems achieved competitive performance, suggesting the continuous advancement of the state-of-the-art in the field.
§ INTRODUCTION
The BioASQ challenge has been focusing on the advancement of the state-of-the-art in large-scale biomedical semantic indexing and question answering (QA) for more than 10 years <cit.>.
In this direction, it organizes different shared tasks annually, developing respective benchmark datasets that represent the real information needs of experts in the biomedical domain.
This allows the participating teams from around the world, who work on the development of systems for biomedical semantic indexing and question answering, to benefit from the publicly available datasets, evaluation infrastructure, and exchange of ideas in the context of the BioASQ challenge and workshop.
Here, we present the shared tasks and the datasets of the eleventh BioASQ challenge in 2023, as well as an overview of the participating systems and their performance.
The remainder of this paper is organized as follows.
First, Section <ref> presents a general description of the shared tasks, which took place from January to May 2023, and the corresponding datasets developed for the challenge.
Then, Section <ref> provides a brief overview of the participating systems for the different tasks.
Detailed descriptions for some of the systems are available in the proceedings of the lab.
Subsequently, in Section <ref>, we present the performance of the systems for each task, based on state-of-the-art evaluation measures or manual assessment.
Finally, in Section <ref> we draw some conclusions regarding the 2023 edition of the BioASQ challenge.
§ OVERVIEW OF THE TASKS
The eleventh edition of the BioASQ challenge (BioASQ 11) consisted of three tasks: (1) a biomedical question answering task (task b), (2) a task on biomedical question answering on developing medical problems (task Synergy), both considering documents in English, and (3) a new task on semantic annotation of medical documents in Spanish with clinical procedures (MedProcNER) <cit.>. In this section, we first describe this year's editions of the two established tasks b (task 11b) and Synergy (Synergy 11) with a focus on differences from previous editions of the challenge <cit.>. Additionally, we also present the new MedProcNER task on clinical procedure semantic recognition, linking, and indexing in Spanish medical documents.
§.§ Biomedical semantic QA - task 11b
The eleventh edition of task b (task 11b) focuses on a large-scale question-answering scenario in which the participants are required to develop systems for all the stages of biomedical question answering.
As in previous editions, the task examines four types of questions: “yes/no”, “factoid”, “list” and “summary” questions <cit.>.
In this edition, the training dataset provided to the participating teams for the development of their systems consisted of 4,719 biomedical questions from previous versions of the challenge annotated with ground-truth relevant material, that is, articles, snippets, and answers <cit.>.
Table <ref> shows the details of both training and test datasets for task 11b.
The test data for task 11b were split into four independent bi-weekly batches. These include two batches of 75 questions and two batches of 90 questions each, as presented in Table <ref>.
As in previous editions of task b, task 11b was also divided into two phases which run for two consecutive days for each batch: (phase A) the retrieval of the relevant material and (phase B) providing the answers to the questions.
In each phase, the participants have 24 hours to submit the responses generated by their systems.
In particular, a test set consisting of the bodies of biomedical questions, written in English, was released for phase A and the participants were expected to identify and submit relevant elements from designated resources, namely PubMed/MEDLINE-article abstracts, and snippets extracted from these resources.
Then, some relevant articles and snippets for these questions, which have been manually selected by the experts, were also released in phase B and the participating systems were challenged to respond with exact answers, that is entity names or short phrases, and ideal answers, that is, natural language summaries of the requested information.
§.§ Task Synergy 11
The task Synergy was introduced two years ago <cit.> envisioning a continuous dialog between the experts and the systems.
In task Synergy, the motivation is to make the advancements of biomedical information retrieval and question answering available to biomedical experts studying open questions for developing problems, aiming at a synergy between automated question-answering systems and biomedical experts.
In this model, the systems provide relevant material and answers to the experts that posed some open questions. The experts assess these responses and feed their assessment back to the systems.
This feedback is then exploited by the systems in order to provide more relevant material, considering more recent material that becomes available in the meantime, and improved responses to the experts as shown in Figure <ref>.
This process proceeds with new feedback and new responses from the systems for the same open questions that persist, in an iterative way, organized in rounds.
After eight rounds of the task Synergy in the context of BioASQ9 <cit.> and four more in the context of BioASQ10 <cit.>, all focusing on open questions for the developing problem of the COVID-19 pandemic, in BioASQ11 we extended the Synergy task (Synergy 11) to open questions for any developing problem of interest for the participating biomedical experts <cit.>.
In this direction, the four bi-weekly rounds of Synergy 11 were open to any developing problem, and a designated version of the PubMed/MEDLINE repository was considered for the retrieval of relevant material in each round. As in previous versions of the task, and contrary to task b, the open questions were not required to have definite answers and the answers to the questions could be more volatile.
In addition, a set of 311 questions on COVID-19, from the previous versions of the Synergy task, were available, together with respective incremental expert feedback and answers, as a development set for systems participating in this edition of the task.
Table <ref> shows the details of the datasets used in task Synergy 11.
Similar to task 11b, four types of questions are examined in Synergy 11 task: yes/no, factoid, list, and summary, and two types of answers, exact and ideal. Moreover, the assessment of the systems' performance is based on the evaluation measures used in task 11b.
However, contrary to task 11b, Synergy 11 was not structured into phases, with both relevant material and answers received together.
For new questions, only relevant material, that is relevant articles and snippets, was required until the expert considered that enough material has been gathered and marked the questions as “ready to answer".
Once a question is marked as “ready to answer", the systems are expected to respond to the experts with both new relevant material and answers in subsequent rounds.
§.§ Medical semantic annotation in Spanish - MedProcNER
Clinical procedures play a critical role in medical practice, being an essential tool for the diagnosis and treatment of patients. They are also a difficult information type to extract, often being made up of abbreviations, multiple parts, and even descriptive sections. Despite their importance, there are not many resources that focus in-depth on the automatic detection of clinical procedures, and even fewer, if any, consider concept normalization.
With this in mind, this year we introduced the MedProcNER (Medical Procedure Named Entity Recognition) shared task as part of BioASQ11 as summarized in Figure <ref>. The task challenges participants to create automatic systems that can extract different aspects of information about clinical procedures. These aspects are divided into three different sub-tasks:
* Clinical Procedure Recognition: This is a named entity recognition (NER) task where participants are challenged to automatically detect mentions of clinical procedures in a corpus of clinical case reports in Spanish.
* Clinical Procedure Normalization: In this entity linking (EL) task, participants must create systems that are able to assign SNOMED CT codes to the mentions retrieved in the previous sub-task.
* Clinical Procedure-based Document Indexing: This is a semantic indexing challenge in which participants automatically assign clinical procedure SNOMED CT codes to the full clinical case report texts so that they can be indexed. In contrast to the previous sub-task, participants do not need to rely on any previous systems, making this an independent sub-task.
To enable the development of clinical procedure recognition, linking and indexing systems, we have released the MedProcNER/ProcTEMIST corpus, a Gold Standard dataset of 1,000 clinical case reports manually annotated by multiple clinical experts with clinical procedures. The case reports were carefully selected by clinical experts and belong to various medical specialties including, amongst others, oncology, odontology, urology, and psychiatry. They are the same text documents that were used for the corpus and shared task on diseases DisTEMIST <cit.>, building towards a collection of fully-annotated texts for clinical concept recognition and normalization.
The MedProcNER corpus is publicly available on Zenodo[https://doi.org/10.5281/zenodo.7817745].
In addition to the text annotations, the mentions in the corpus have been normalized to SNOMED CT. SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms) is a comprehensive clinical terminology and coding system designed to facilitate the exchange and communication of health-related information across different healthcare settings and systems, which makes it fit for the normalization of varied clinical concepts. For the task, only a subset of 250 normalized documents was released as training data. The complete normalized dataset will be released as post-workshop material.
Annotation and normalization guidelines were specifically created for this task.
The current version of the guidelines includes 31 pages and a total of 60 rules that describe how to annotate different procedure types ranging from simple explorations to complex surgical descriptions. They also include a discussion of the task’s importance and use cases, basic information about the annotation process, a description of different procedure types and comparisons with similar clinical entity types, and indications and resources for the annotators.
As with the DisTEMIST corpus, the guidelines were refined via multiple rounds of inter-annotator agreement (IAA) through parallel annotation of a section of the corpus. The final IAA score (computed as the pairwise agreement between two independent annotators) is of 81.2. The MedProcNER guidelines are available in Zenodo[https://doi.org/10.5281/zenodo.7817666].
In addition to the corpus and guidelines, several further resources have been released as part of the task. First, a SNOMED CT gazetteer was released containing official terms and synonyms from the relevant branches of SNOMED CT for the grounding of procedure mentions. The MedProcNER gazetteer has been built using the 31/10/2022 version of the Spanish edition of SNOMED CT, which is composed of more than 300,000 concepts organized in 19 different hierarchies including “procedure", “substance" and "regime/therapy". To simplify the entity linking and indexing task, we compiled a reduced subset of the terminology with a smaller set of concepts to which the mentions can be mapped. The gazetteer consists of 234,674 lexical entries, out of which 130,219 are considered main terms. Within these entries, there are 130,219 unique codes originating from 19 hierarchies.
Next, to foster the advancement of document indexing with other terminologies and boost the reusability of MedProcNER data, we have created cross-mappings that connect the SNOMED CT mentions found in the corpus to MeSH and ICD-10. These mappings were achieved using the UMLS Meta-thesaurus.
Finally, a Multilingual Silver Standard similar to last year's DisTEMIST <cit.> and LivingNER <cit.> was created in six different languages: English, French, Italian, Portuguese, Romanian and Catalan. This Silver Standard was automatically generated using a lexical annotation transfer approach in which the corpus' texts and Gold Standard annotations are translated separately and then mapped onto each other using a look-up system. This look-up takes into account individual annotations in each file, their translations and also a lemmatized version of the entities (obtained using spaCy[https://spacy.io/]). Transferred annotations carry over the SNOMED CT code originally assigned to the Spanish annotation. All additional resources are available in Zenodo together with the Gold Standard corpus[https://doi.org/10.5281/zenodo.7817666].
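For illustration, the core of this look-up step can be sketched in Python as follows; the data structures, the lemmatizer hook and the offset handling below are simplified assumptions for exposition and do not reproduce the actual transfer pipeline.

def transfer_annotations(translated_text, translated_mentions, lemmatize=None):
    """Project translated gold mentions onto a translated document by look-up.

    translated_mentions: list of (mention_text, snomed_code) pairs; the SNOMED CT
    code is carried over unchanged from the original Spanish annotation.
    lemmatize: optional callable (e.g. built on spaCy) used as a fallback when
    the literal translated mention is not found verbatim in the text.
    """
    projected = []
    lowered = translated_text.lower()
    for mention, code in translated_mentions:
        start = lowered.find(mention.lower())
        if start != -1:
            projected.append({"text": mention, "code": code,
                              "start": start, "end": start + len(mention)})
        elif lemmatize and lemmatize(mention.lower()) in lemmatize(lowered):
            # matched only at the lemma level; offsets would need re-alignment
            projected.append({"text": mention, "code": code,
                              "start": None, "end": None})
    return projected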
As for the task evaluation, all three MedProcNER sub-tasks are evaluated using micro-averaged precision, recall and F1-score. It is important to highlight that the evaluation of entity linking systems is not conducted in isolation but rather in an end-to-end manner. Instead of being provided an exhaustive list of mentions to be normalized, participants had to rely on their predictions from the named entity recognition stage. Consequently, the obtained scores might not accurately represent the overall performance of the systems. However, this type of evaluation does offer a more comprehensive assessment of complete systems, closely resembling their performance in real-world applications.
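For reference, micro-averaged precision, recall and F1-score over a document collection can be sketched in Python as follows; the matching criterion and data structures are simplified assumptions, not the official evaluation script.

def micro_prf(gold, pred):
    """Micro-averaged precision/recall/F1 over a collection.

    gold and pred map a document id to a set of hashable annotations,
    e.g. (start, end, code) tuples; exact matching is assumed here.
    """
    tp = fp = fn = 0
    for doc_id in set(gold) | set(pred):
        g, p = gold.get(doc_id, set()), pred.get(doc_id, set())
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1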
MedProcNER is promoted by the Spanish Plan for the Advancement of Language Technology (Plan TL)[https://plantl.mineco.gob.es] and organized by the Barcelona Supercomputing Center (BSC) in collaboration with BioASQ. A more in-depth analysis of the MedProcNER Gold Standard, guidelines and additional resources is presented in the MedProcNER overview paper <cit.>.
§ OVERVIEW OF PARTICIPATION
§.§ Task 11b
19 teams competed in task 11b this year, submitting responses from 76 different systems across phases A and B in total. In particular, 9 teams with 37 systems participated in Phase A, while in Phase B the numbers of participants and systems were 16 and 59, respectively. Six teams engaged in both phases.
An overview of the technologies employed by the teams is provided in Table <ref> for the systems for which a description was available. Detailed descriptions for some of the systems are available at the proceedings of the workshop.
The (“bioinfo”) team from the University of Aveiro participated in both phases of the task with five systems. In phase A, they developed a two-stage retrieval pipeline. The first stage adopted the traditional BM25 model. In contrast, the second stage implemented transformer-based neural re-ranking models from PubMedBERT and monoT5 checkpoints. Additionally, synthetic data were used to augment the training regimen. The reciprocal rank fusion (RRF) was utilized to ensemble the outputs from various models.
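As an illustration of the fusion step, reciprocal rank fusion can be sketched in a few lines of Python; the constant k=60 is the value commonly used in the literature and is an assumption here rather than the team's actual setting.

from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into a single ranking.

    rankings: list of lists, each ordered from most to least relevant.
    The RRF score of a document is sum_i 1 / (k + rank_i(doc)).
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# e.g. fused = reciprocal_rank_fusion([bm25_run, pubmedbert_run, monot5_run])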
For Phase B, their systems utilized instruction-based transformer models, such as ALPACA-LoRA, OA-Pythia, and OA-LLaMA, for conditioned zero-shot answer generation. More specifically, given the most relevant article from Phase A, the model was designed to generate an ideal answer based on the information contained in the relevant article.
Another team participating in both phases is the team from the University of Regensburg. Their systems (“UR-gpt”) relied on two commercial versions of the GPT Large Language model (LLM). Specifically, their systems experimented with both GPT-3.5-turbo and GPT-4 models. In phase A, their systems used zero-shot learning for query expansion, query reformulation and re-ranking. For Phase B, they used zero-shot learning, grounded with relevant snippets.
The BSRC Alexander Fleming team also participated in both phases with the systems “ELECTROBERT”. Their systems are built upon their previously developed systems <cit.> and also adapted the semi-supervised method GANBERT <cit.> for document relevance classification. Furthermore, for the initial document selection phase their systems utilize BM25 combined with RM3 query expansion with optimized parameters.
The “MindLab” team competed in both phases of the task with five systems. For document retrieval their systems used the BM25 scoring function and semantic similarity as a re-ranking strategy. For passage retrieval their systems used a metric learning method which fuses different
similarity measures through a siamese convolutional network.
The “dmiip” team from the Fudan University participated in both phases of the task with five systems. In phase A, their systems used BM25 and GPT for the retrieval stage, and a cross-encoder ranker based on different biomedical PLMs, such as PubMedBERT, BioBERT, BioLinkBERT and ELECTRA, for the ranking stage. Biomedical PLMs and GPT-3.5 are also utilized in Phase B. The systems are initially finetuned on SQuAD and then trained with the BioASQ training dataset.
In phase A, the “A&Q” team participated with five systems. Their systems are based on a multi-stage approach which incorporates a bi-encoder model in the retrieval stage, and
a cross-encoder model at the re-ranking stage. At the retrieval stage, a hybrid retriever combines dense and sparse retrieval, where the dense retrieval is implemented with the bi-encoder and the sparse retrieval with BM25. Both encoders are initialized with PubMedBERT and further trained on PubMed query-article search logs.
The IRCCS team participated with five systems (“IRCCS”) in phase A. Their systems follow a two-step methodology. First, they score the documents using the BM25 ranking function. Then, the second step is to re-rank them based on cosine similarity between the query and each document, which are encoded using various transformers models.
The IRIT lab team competed also in phase A with two systems (“MarkedCEDR”). Their systems adopt a two-stage retrieval approach composed of a retriever and a re-ranker. The former is based on BM25. The later is an implementation of a BERT cross-encoder named CEDR.
In phase B, the Ontotext team participated with two systems (“ELErank”).
Their systems used BioM-ELECTRA as a backbone model for both yes/no and factoid questions. For yes/no questions, it was fine-tuned in a sequence classification setting, and for factoid questions, it was fine-tuned in a token classification setting (for extractive QA). Before applying classification, the sentences were ranked based on their cosine similarity to the question. Top-5 most relevant sentences were used for classification. Sentence embeddings for ranking were calculated with S-PubMedBERT.
The National Central Uni team competed with five systems “NCU-IISR” in phase B. Their systems utilized OpenAI’s ChatCompletions API, incorporating Prompt Engineering techniques to explore various prompts. Specifically, their systems used GPT-3 and GPT-4 for answer generation.
The CMU team participated with four different systems (“AsqAway”) in phase B. Their systems adopt an ensembling approach using transformer models. For factoid and list questions they use a BioBERT and BioM-Electra ensemble. For yes/no questions, they employ BioM-Electra.
The Korea University team participated with five systems (“DMIS-KU”). They employed different pre-processing, training, and data augmentation methods and different QA models. For the yes/no type, the systems utilized the “full-snippet" pre-processing method, where all snippets were concatenated into a single context. The BioLinkBERT-large model was used as the embedding model. For the factoid type, the “single-snippet" method was used, which involved processing one snippet at a time. The BioLinkBERT-large was trained using the SQuAD dataset and fine-tuned using the BioASQ training data. For the list type, the full-snippet method was used again. Additionally, their systems employed a dataset generation framework, called LIQUID, to augment the training data. Also, the GPT-4 was utilized to answer list questions in a one-shot manner. In all question types, the final predictions are produced by combining the results from multiple single models using an ensemble method.
There were two teams from the Macquarie University. The first team participated with five systems (“MQ”) in phase B and focused on finding the ideal answers. Three of their systems employed GPT-3.5 and various types of prompts. The rest of the systems were based on their previously developed systems <cit.>.
The second team (“MQU”) competed with five systems in phase B, which utilised BART and BioBART models fine-tuned for abstractive summarisation.
As in previous editions of the challenge, a baseline was provided for phase B exact answers, based on the open source OAQA system<cit.>. This system relies on more traditional NLP and Machine Learning approaches, used to achieve top performance in older editions of the challenge, and now serves as a baseline. The system is developed based on the UIMA framework. In particular, question and snippet parsing is done with ClearNLP. Then, MetaMap, TmTool <cit.>, C-Value, and LingPipe <cit.> are employed for identifying concepts that are retrieved from the UMLS Terminology Services (UTS). Finally, the relevance of concepts, documents, and snippets is identified based on some classifier components and some scoring and ranking techniques are also employed.
Furthermore, this year we introduced two more baselines for phase B ideal answers, BioASQ Baseline ZS and BioASQ Baseline FS, which are based on zero-shot prompting of Biomedical LMs. Both systems utilized the BioGPT, a language model trained exclusively on biomedical abstracts and papers, with the former using as input only the question body, and the latter using the concatenation of the question body and the relevant snippets until the input length is exceeded.
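A minimal sketch of such a zero-shot baseline is given below, assuming the Hugging Face transformers interface to BioGPT; the model identifier, prompt construction and generation settings are illustrative assumptions and not the exact configuration of the official baselines.

from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "microsoft/biogpt"          # assumed checkpoint; the official baseline may differ
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def zero_shot_ideal_answer(question, snippets=None, max_input_tokens=896):
    # Baseline ZS uses only the question body; Baseline FS also concatenates snippets
    # until the input length is exceeded (handled here by truncation, leaving room
    # for generation within the model's 1024-token context).
    prompt = question if not snippets else question + " " + " ".join(snippets)
    inputs = tokenizer(prompt, return_tensors="pt",
                       truncation=True, max_length=max_input_tokens)
    output = model.generate(**inputs, max_new_tokens=128, num_beams=4)
    # the decoded string contains the prompt followed by the generated continuation
    return tokenizer.decode(output[0], skip_special_tokens=True)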
§.§ Task Synergy 11
In this edition of the Synergy task (Synergy 11), 5 teams participated, submitting results from 12 distinct systems.
An overview of systems and approaches employed in this task is provided in Table <ref>, for the systems for which a description was available. More detailed descriptions for some of the systems are available at the proceedings of the workshop.
The Fudan University team (“dmiip”) competed in task Synergy with the same models they used for task 11b. Additionally, they expanded the query with the shortest relevant snippet in the provided feedback.
The “UCSD” team competed in task Synergy with two systems. Their systems (“bio-answerfinder”) used the Bio-AnswerFinder end-to-end QA system they had previously developed <cit.> with few improvements, including the use of the expert feedback data in retraining of their model's re-ranker.
The BSRC Alexander Fleming team participated with two systems. Similar to task b, their systems (“ELECTROBERT”) built upon their previously developed systems <cit.> and also adapted the semi-supervised method GANBERT <cit.>.
§.§ Task MedProcNER
Among the 47 teams registered for the MedProcNER task, 9 teams submitted at least one run of their predictions. Specifically, all 9 teams engaged in the entity recognition sub-task, while 7 teams participated in the entity linking sub-task. Additionally, 4 teams took part in the document indexing sub-task. Overall, a total of 68 runs were submitted, reflecting the collective efforts and contributions of the participating teams.
Table <ref> gives an overview of the methodologies used by the participants in each of the sub-tasks.
As is the case in many modern NLP approaches, the majority of the participants used transformers-based models. RoBERTa <cit.> and SapBERT <cit.> models were the most popular for named entity recognition and entity linking respectively. In addition to this, in order to boost the systems' performance some teams also relied on recurrent classifiers such as CRFs (e.g. BIT.UA <cit.>, SINAI team <cit.>), adapters (e.g. KFU NLP team), model ensembling/voting (e.g. KFU NLP team, Onto-text <cit.>) and data augmentation (e.g. BIT.UA <cit.>). Interestingly, one of the participants (Samy Ateia from the University of Regensburg <cit.>) proposes an approach based on Generative Pre-trained Transformers (GPT) models for all three sub-tasks.
§ RESULTS
§.§ Task 11b
Phase A:
The Mean Average Precision (MAP) was the official measure for evaluating system performance on document retrieval in phase A of task 11b, which is based on the number of ground-truth relevant elements.
For snippet retrieval, however, the situation is more complicated as a ground-truth snippet may overlap with several distinct submitted snippets, which makes the interpretation of MAP less straightforward.
For this reason, since BioASQ9 the F-measure is used for the official ranking of the systems in snippet retrieval, which is calculated based on character overlaps[<http://participants-area.bioasq.org/Tasks/b/eval_meas_2022/>] <cit.>.
Since BioASQ8, a modified version of Average Precision (AP) is adopted for MAP calculation.
In brief, since BioASQ3, the participant systems are allowed to return up to 10 relevant items (e.g. documents or snippets), and the calculation of AP was modified to reflect this change. However, some questions with fewer than 10 golden relevant items have been observed in the last years, resulting in relatively small AP values even for submissions with all the golden elements. Therefore, the AP calculation was modified to consider both the limit of 10 elements and the actual number of golden elements <cit.>.
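One plausible reading of this modified AP is sketched below in Python, normalizing by the smaller of the 10-item limit and the number of golden items; the official implementation may differ in details.

def modified_average_precision(retrieved, golden, limit=10):
    """AP over the top `limit` returned items, normalized by min(limit, |golden|)."""
    hits, precision_sum = 0, 0.0
    for k, item in enumerate(retrieved[:limit], start=1):
        if item in golden:
            hits += 1
            precision_sum += hits / k          # precision at rank k
    denom = min(limit, len(golden))
    return precision_sum / denom if denom else 0.0

def mean_average_precision(runs, golden_lists, limit=10):
    aps = [modified_average_precision(r, g, limit) for r, g in zip(runs, golden_lists)]
    return sum(aps) / len(aps) if aps else 0.0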
Tables <ref> and <ref> present some indicative preliminary results for the retrieval of documents and snippets in batch 1. The full results are available online on the result page of task 11b, phase A[<http://participants-area.bioasq.org/results/11b/phaseA/>]. The final results for task 11b will be available after the completion of the manual assessment of the system responses by the BioASQ team of biomedical experts, which is still in progress, therefore the results reported here are currently preliminary.
Phase B:
In phase B of task 11b, the competing systems submit exact and ideal answers.
As regards the ideal answers, the official ranking of participating systems is based on manual scores assigned by the BioASQ team of experts that assesses each ideal answer in the responses <cit.>.
The final position of systems providing exact answers is based on their average ranking in the three question types where exact answers are required, that is “yes/no”, “list”, and “factoid”. Summary questions for which no exact answers are submitted are not considered in this ranking.
In particular, the mean F1 measure is used for the ranking in list questions, the mean reciprocal rank (MRR) is used for the ranking in factoid questions, and the F1 measure, macro-averaged over the classes of yes and no, is used for yes/no questions.
Table <ref> presents some indicative preliminary results on exact answer extraction from batch 2. The full results of phase B of task 11b are available online[<http://participants-area.bioasq.org/results/11b/phaseB/>]. These results are preliminary, as the final results for task 11b will be available after the manual assessment of the system responses by the BioASQ team of biomedical experts.
The top performance of the participating systems in exact answer generation for each type of question during the eleven years of BioASQ is presented in Figure <ref>.
The preliminary results for task 11b, reveal that the participating systems keep improving in answering all types of questions.
In batch 2, for instance, presented in Table <ref>, several systems manage to correctly answer literally all yes/no questions. This is also the case for batch 3 and batch 4.
Some improvements are also observed in the preliminary results for factoid questions compared to previous years, but there is still room for improvement, as is the case for list questions, where the preliminary performance is comparable to that of the previous year.
§.§ Task Synergy 11
In task Synergy 11 the participating systems were expected to retrieve documents and snippets, as in phase A of task 11b, and, at the same time, provide answers for some of these questions, as in phase B of task 11b.
In contrast to task 11b, however, due to the developing nature of the relevant knowledge, no answer is currently available for some of the open questions. Therefore only the questions indicated to have enough relevant material gathered from previous rounds (“Answer ready”) require the submission of exact and ideal answers by the participating systems.
In addition, no golden documents and snippets were provided by the experts for new questions. For questions from previous rounds, on the other hand, a separate file with feedback from the experts was provided, that is elements of the documents and snippets previously submitted by the participants with manual annotations of their relevance.
Therefore, these documents and snippets, that have already been assessed and included in the feedback, were not considered valid for submission by the participants in the subsequent rounds, and even if accidentally submitted, they were not considered for the evaluation of that round. As in phase A of task 11b, the evaluation measures for document and snippet retrieval are MAP and F-measure respectively.
Regarding the ideal answers, the systems were ranked according to manual scores assigned to them by the BioASQ experts during the assessment of systems responses as in phase B of task B <cit.>. In this task, however, the assessment took place during the course of the task, so that the systems can have the feedback of the experts available, prior to submitting their new responses.
For the exact answers, which were required for all questions except the summary ones, the measure considered for ranking the participating systems depends on the question type.
For the yes/no questions, the systems were ranked according to the macro-averaged F1-measure on the prediction of no and yes answers.
For factoid questions, the ranking was based on mean reciprocal rank (MRR), and for list questions on mean F1-measure.
Some indicative results for the Synergy task are presented in Table <ref>.
The full results of Synergy 11 task are available online[<http://participants-area.bioasq.org/results/synergy_v2023/>].
Overall, the collaboration between participating biomedical experts and question-answering systems allowed the progressive identification of relevant material and extraction of exact and ideal answers for several open questions for developing problems, such as COVID-19, Colorectal Cancer, Duchenne Muscular Dystrophy, Alzheimer's Disease, and Parkinson's Disease.
In particular, after the completion of the four rounds of the Synergy 11 task, enough relevant material was identified for providing an answer to about 79% of the questions. In addition, about 42% of the questions had at least one ideal answer, submitted by the systems, which was considered satisfactory (ground truth) by the expert who posed the question.
§.§ Task MedProcNER
All in all, the top scores for each sub-task were:
* Clinical Procedure Recognition. The BIT.UA team attained all top 5 positions with their transformer-based solution that also uses masked CRF and data augmentation. They achieved the highest F1-score (0.7985), highest precision (0.8095) and highest recall (0.7984). Teams Vicomtech and SINAI also obtained F1-scores over 0.75.
* Clinical Procedure Normalization. The highest F1-score (0.5707), precision (0.5902) and recall (0.5580) were obtained by Vicomtech. Teams SINAI and Fusion were also above 0.5 F1-score using token similarity techniques and a cross-lingual SapBERT, respectively.
* Clinical Procedure-based Document Indexing. The Vicomtech team also obtained the highest F1-score (0.6242), precision (0.6371) and recall (0.6295), with the KFU NLP Team coming in second place (0.4927 F1-score). In this sub-task, all participating teams reused their systems and/or output from previous sub-tasks.
The complete results for the entity recognition, linking and document indexing are shown in tables <ref>, <ref> and <ref>, respectively.
Overall, the performance of the systems presented for the MedProcNER shared task is very diverse, with scores ranging from 0.759 F-score (by the BIT.UA team on the entity recognition task) to 0.126 (University of Regensburg on the entity linking task). This gap mainly reflects two things: the variety of approaches and the difficulty of the corpus.
On the one hand, the systems presented for the task were very varied. Even amongst BERT-based models, participants tried different strategies such as using models pre-trained on different domains (biomedical, clinical) and languages (Spanish, multilingual), implementing different pre/post-processing techniques, data augmentation and using multiple output layers (CRF, GRU, LSTM). Again, it is remarkable that one of the participants (Samy Ateia from the University of Regensburg) used GPT3.5 (ChatGPT) and GPT4 for their submissions. Even though the overall performance is relatively modest (especially in terms of recall), this is partly to be expected since the system was only adapted to the task using a few-shot approach.
On the other hand, the Gold Standard corpus is very varied in terms of mentions, with many mentions being quite long and descriptive (especially surgical mentions). Additionally, the text documents span multiple medical specialties, which introduces not only more variety in clinical procedures but also possible ambiguities due to the use of specialized abbreviations. In the future, we will expand the corpus with more annotated documents to address this issue.
Compared to last year's DisTEMIST task, which had a very similar setting, results are overall a bit higher but still quite similar. In terms of named entity recognition methodologies, transformers and BERT-like models were the most popular in both tasks, with RoBERTa not only being the most widely used but also achieving some of the best results.
In the entity linking sub-task systems that use SapBERT seem to have gained popularity, being used by at least 3 teams, including the top-scoring system, with very good results. In contrast, in last year's DisTEMIST only one team (HPI-DHC) used it, and actually achieved the best entity linking score using an ensemble of SapBERT and TF-IDF with re-ranking and a training data lookup.
§ CONCLUSIONS
This paper provides an overview of the eleventh BioASQ challenge.
This year, the challenge consisted of three tasks: (1) Task 11b on biomedical semantic question answering in English and (2) task Synergy 11 on question answering for developing problems, both already established from previous years of the challenge, and (3) the new task MedProcNER on retrieving medical procedure information from medical content in Spanish.
The preliminary results for task 11b reveal some improvements in the performance of the top participating systems, mainly for yes/no and factoid answer generation. However, room for improvement is still available, particularly for factoid and list questions, where the performance is less consistent.
The new edition of the Synergy task, which aims to enable a dialogue between the participating systems and biomedical experts, revealed that state-of-the-art systems, despite still having room for improvement, can be a useful tool for biomedical experts who need specialized information for addressing open questions in the context of several developing problems.
The new task MedProcNER introduced three new challenging subtasks on annotating clinical case reports in Spanish. Namely, Named Entity Recognition, Entity Linking, and Semantic Indexing for medical procedures. Due to the importance of semantic interoperability across data sources, SNOMED CT was the target terminology employed in this task, and multilingual annotated resources have been released. This novel task on medical procedure information indexing in Spanish highlighted the importance of generating resources to develop and evaluate systems that (1) effectively work in multilingual and non-English scenarios and (2) combine heterogeneous data sources.
The ever-increasing focus of participating systems on deep neural approaches, already apparent in previous editions of the challenge, is also observed this year.
Most of the proposed approaches built on state-of-the-art neural architectures (BERT, PubMedBERT, BioBERT, BART etc.) adapted to the biomedical domain and specifically to the tasks of BioASQ.
This year, in particular, several teams investigated approaches based on Generative Pre-trained Transformer (GPT) models for the BioASQ tasks.
Overall, several systems managed competitive performance on the challenging tasks offered in BioASQ, as in previous versions of the challenge, and the top performing of them were able to improve over the state-of-the-art performance from previous years.
BioASQ keeps pushing the research frontier in biomedical semantic indexing and question answering for eleven years now, offering both well-established and new tasks.
Lately, it has been extended beyond the English language and biomedical literature, with the tasks MESINESP <cit.>, DisTEMIST <cit.>, and this year with MedProcNER.
In addition, BioASQ reaches a more and more broad community of biomedical experts that may benefit from the advancements in the field. This has been done initially for COVID-19, through the introductory versions of Synergy, and was later extended into more topics with the collaborative batch of task 10b and the extended version of Synergy 11, introduced this year.
The future plans for the challenge include a further extension of the benchmark data for question answering through a community-driven process, extending the community of biomedical experts involved in the Synergy task, as well as extending the resources considered in the BioASQ tasks, both in terms of documents types and language.
§ ACKNOWLEDGMENTS
Google was a proud sponsor of the BioASQ Challenge in 2022.
The eleventh edition of BioASQ is also sponsored by Ovid.
Atypon Systems Inc. is also sponsoring this edition of BioASQ.
The MEDLINE/PubMed data resources considered in this work were accessed courtesy of the U.S. National Library of Medicine.
BioASQ is grateful to the CMU team for providing the exact answer baselines for task 11b, as well as to Georgios Moschovis and Ion Androutsopoulos, from the Athens University of Economics and Business, for providing the ideal answer baselines.
The MedProcNER track was partially funded by the Encargo of Plan TL (SEDIA) to the Barcelona Supercomputing Center. Due to the relevance of medical procedures for implants/devices specially in the case cardiac diseases this project is also supported by the European Union’s Horizon Europe Coordination & Support Action under Grant Agreement No 101058779 (BIOMATDB) and DataTools4Heart Grant Agreement No. 101057849. We also acknowledge the support from the AI4PROFHEALTH project (PID2020-119266RA-I00).
|
http://arxiv.org/abs/2307.05307v2 | 20230711145625 | Phases of (2+1)D SO(5) non-linear sigma model with a topological term on a sphere: multicritical point and disorder phase | [
"Bin-Bin Chen",
"Xu Zhang",
"Yuxuan Wang",
"Kai Sun",
"Zi Yang Meng"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.stat-mech"
] |
Department of Physics and HKU-UCAS Joint Institute of Theoretical and Computational Physics, The University of Hong Kong, Pokfulam Road, Hong Kong SAR, China
Department of Physics and HKU-UCAS Joint Institute of Theoretical and Computational Physics, The University of Hong Kong, Pokfulam Road, Hong Kong SAR, China
[email protected]
Department of Physics, University of Florida, Gainesville, FL 32601, USA
[email protected]
Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA
[email protected]
Department of Physics and HKU-UCAS Joint Institute of Theoretical and Computational Physics, The University of Hong Kong, Pokfulam Road, Hong Kong SAR, China
Novel critical phenomena beyond the Landau-Ginzburg-Wilson paradigm have been long sought after.
Among many candidate scenarios,
the deconfined quantum critical point (DQCP) constitutes the most fascinating one, and its lattice model realization has been debated over the past two decades.
Here we apply the spherical Landau level regularization
upon the exact (2+1)D SO(5) non-linear sigma model with a topological term to study the potential DQCP therein.
Utilizing the state-of-the-art density matrix renormalization group method with
explicit SU(2)_spin×U(1)_charge symmetries,
accompanied by quantum Monte Carlo simulation,
we accurately obtain the comprehensive phase diagram of the model on a sphere.
We find various novel quantum phases,
including a Néel state, a ferromagnet (FM), a valence bond solid (VBS) state, a valley polarized (VP) state and quantum disordered phase occupying extended area of the phase diagram.
Our results show that two different symmetry-breaking phases,
i.e., the SO(2)-breaking VBS and the SO(3)-breaking Néel states,
are separated by either a weakly first-order transition or the disordered region with a multicritical point in between, thus
opening up more interesting questions on this two-decade long debate on the nature of DQCP.
Phases of (2+1)D SO(5) non-linear sigma model with a topological term on a sphere:
multicritical point and disorder phase
Zi Yang Meng
August 12, 2023
==========================================================================================================================
Introduction.—
Over the past two decades, the enigma of the deconfined quantum critical point (DQCP) has never failed to attract attention across the communities of condensed matter to quantum field theory and high-energy physics, as it is believed to offer a new paradigm in theory <cit.>, numerical simulation <cit.>, and experiment <cit.> that goes beyond the Landau-Ginzburg-Wilson (LGW) framework of phase transitions.
However, the lattice realizations of DQCP have been debated ever since. In SU(2) spin systems, the J-Q model <cit.> was initially believed to realize a DQCP between Néel and valence bond solid (VBS) states. Over the years, a plethora of results have been reported, including the emergent continuous symmetry with fractionalized excitations <cit.> yet drifting critical exponents incompatible with conformal bootstrap bounds (with one O(3)×ℤ_4 singlet) <cit.>, weakly first-order pseudocriticality versus continuous transition or multicritical point <cit.>, and violation of the entanglement positivity requirement for a unitary conformal field theory (CFT) <cit.>,
and debate regarding the nature of the phase transition persists to this day.
A more recent quantum Monte Carlo (QMC) study suggests the non-unitary CFT of the DQCP scenario in SU(N) spin systems for N<N_c≃7 <cit.>.
Similar changing perceptions also occur in DQCP models with fermions, realizing transitions from a Dirac semimetal (DSM) through quantum spin Hall insulator to superconductor <cit.>, or from DSM through VBS to Néel state <cit.>. The inclusion of fermions offers advantages over the previous J-Q model, due to the absence of symmetry-allowed quadruple monopoles and the associated
second length scale that breaks the assumed U(1) symmetry down to ℤ_4 <cit.>, but the non-compatible critical exponents still persist and the accumulating numerical results are also pointing towards a non-unitary CFT of these DQCPs <cit.>. Despite extensive efforts over
the past two decades, the lattice realizations of DQCP in its original sense of beyond LGW and yet still critical, with emergent continuous symmetry and fractionalized excitations, are still in “The Enigma of Arrival” <cit.>.
A key origin of the debate stems from the fundamental requirement of emergent symmetries at DQCPs. For instance, the J-Q model DQCP requires a U(1) symmetry to emerge out of the ℤ_4 symmetry of VBS, which then combines with the SU(2) symmetry of the Néel order to give rise to the ultimate SO(5) emergent symmetry. Due to the extremely slow RG flow towards such emergent symmetries, numerical studies may face
challenges in accessing these proposed DQCPs due to finite-size effects. To overcome this challenge, lattice models with explicit SO(5) symmetry have been introduced, e.g., the (2+1)D SO(5) nonlinear sigma model (NLSM) with a Wess-Zumino-Witten (WZW) topological term <cit.>, to directly ask the question whether there is a continuous Néel-VBS transition in its phase diagram. However, previous attempts along this line, with the half-filled Landau level of Dirac fermions as a regularization on the torus geometry, are unfortunately still limited by severe computational complexity both for density matrix renormalization group (DMRG) and QMC simulations, and consequently cannot give a conclusive answer <cit.>.
Here, we push forward the solution of the problem by applying the spherical Landau level regularization which has been studied in the context of fractional quantum Hall effect in early literatures <cit.> and has recently been shown to suffer less finite-size effect than that of the torus geometry for the (2+1)D Ising model <cit.>. Using DMRG with
explicit SU(2)_spin×U(1)_charge symmetries, accompanied by exact diagonalization (ED) and QMC simulations, we accurately map out the entire phase diagram of the model and identify various novel quantum states, including the Néel, VBS, ferromagnet (FM), and valley polarized (VP) states. Most importantly, we find that a disordered region separates the VBS and Néel states, and the two critical boundaries meet at a multicritical point along the SO(5) line, behind which the SO(5) symmetry is explicitly broken and the transition between the Néel and VBS phases becomes weakly first order.
Our discovery of the extended disordered phase and the multicritical point,
opens up more interesting questions,
in particular the nature of the disordered phase and its relation to pseudo-criticality
and symmetry-enforced gaplessness <cit.>,
on the two-decade long quest of DQCP in the phase diagram of the (2+1)D
SO(5) NLSM with WZW topological term.
Model and Methods.—
We consider the (2+1)D Hamiltonian
H_Γ = 1/2∫ dΩ{U_0 [ψ^†(Ω)ψ(Ω)-2]^2 -∑_i=1^5 u_i [ψ^†(Ω)Γ^iψ(Ω)]^2},
where ψ_τσ(Ω) is the 4-component Dirac fermion annihilation operator
with mixing valley τ and spin σ indices, and
Γ^i={τ_x⊗𝕀, τ_y⊗𝕀, τ_z⊗σ_x,
τ_z⊗σ_y, τ_z⊗σ_z}
are the 5 mutually anticommuting matrices, whose commutators
L^ij=-(i/2)[Γ^i,Γ^j] are the generators of the SO(5) group.
Subsequently, we project the SO(5) Dirac fermion Hamiltonian onto the zero energy Landau level on the sphere, which is the same as the lowest massive fermion Landau levels (LLL) of a sphere with 4π s magnetic monopole at its origin <cit.>, where the (2s+1)-fold degenerate
LLL wavefunction takes the form of
Φ_m(Ω)∝ e^imϕcos^s+m(θ/2)sin^s-m(θ/2)
with m∈{-s, -s+1,⋯,s-1,s} and 2s ∈ℤ. This can be done by the expansion
ψ(Ω)=∑_m Φ_m(Ω)c_m, leading to the projected Hamiltonian
Ĥ_Γ = U_0 Ĥ_0 - ∑_i u_i Ĥ_i, with
Ĥ_i=∑_m_1,m_2,mV_m_1,m_2,m_2-m,m_1+m×
(c^†_m_1Γ^i c^ _m_1+m-2δ_i0δ_m0)
(c^†_m_2Γ^i c^ _m_2-m-2δ_i0δ_m0)
where we defined Γ^0=𝕀⊗𝕀.
The precise form of V_m_1,m_2,m_3,m_4
can be found in the Supplementary Materials (SM) <cit.>.
Throughout this work, we set U_0=1 as the energy unit and
let u_1=u_2=u_K, u_3=u_4=u_5 = u_N.
When u_K=u_N>0, this model is known to be described by a SO(5) NLSM with a WZW term <cit.>.
When u_K ≠ u_N, the symmetry reduces to SO(3)⊗SO(2).
For positive u_K and
u_N, it was proposed that u_N > u_K stabilizes the Néel
order, which spontaneously breaks the SO(3) symmetry,
while u_N < u_K favors a valley order breaking the SO(2)
symmetry, which in a lattice model can be interpreted as the VBS order.
If a direct and continuous phase transition between these two
states arises at u_K = u_N, at the transition the system has an
explicit SO(5) symmetry, which realizes a DQCP.
While previous works focused on positive values for u_K and u_N,
we sweep the entire (u_K,u_N) plane for symmetry breaking phases.
We employ DMRG with SU(2)_spin×U(1)_charge
symmetry in the framework of the tensor library QSpace <cit.>, and
keep up to 2048 SU(2) invariant multiplets (equivalent to ∼6000
U(1) states) to render the truncation errors within 5×10^-5.
We also perform determinant QMC as well as ED
simulations as complements, to accurately determine the various phases and their phase boundaries.
We denote the system size by the Landau level degeneracy N=2s+1 and obtain converging
results with N=3,4,5,...,15, the largest system size achieved so far for the model on sphere to our knowledge.
Phase Diagram.—
Before diving into the details, we give a summary of the phase diagram. For
all the ordered phases that we observe, the order parameters take the form of fermion bilinears: ⟨ O ⟩ =
∫ dΩ⟨ψ^†(Ω)M ψ(Ω)⟩ = ∑_m⟨ c^†_m M c_m⟩, where M is either a Γ-matrix or one of the SO(5)
generators L^ij.
In the case of (u_K, u_N)>0,
there are 3 phases including the Néel state
(ordered in the Γ^3,4,5 directions),
the VBS (ordered in the Γ^1,2 directions),
and the disorder phase, as shown in fig:fig1.
At small u_K and u_N (below ∼0.115), the Néel and
VBS phases are separated by a first-order phase boundary,
along the u_K = u_N line with SO(5) symmetry.
At large u_K and u_N, instead of the proposed
direct and continuous transition, we find that the Néel and VBS phases are
separated by an intermediate disordered phase, and continuous
transitions from the disordered state to both Néel and VBS states.
For negative values of u_K and/or u_N, we find
three distinct phases: the FM state (M = L^34,L^35,L^45)
where both valleys exhibit the same magnetization direction,
the VP state (M = L^12) which breaks an Ising ℤ_2
symmetry, and the disorder phase. When the magnitudes
of u_K and u_N are small (i.e., (u_K, u_N) > -1.8),
the FM and VP states are directly connected by a first-order
transition along the SO(5) line. Again, for larger |u_K,N|,
the FM and VP phases are separated
by a disordered phase, while the transitions between
the FM/VP states and the disordered state are all first-order.
The transition between the FM and Néel states takes
place in the quadrant of u_K < 0 and u_N > 0 through a
first order phase boundary. Similarly, a first-order transition
between the VBS and VP states is observed in the quadrant of
u_K > 0 and u_N < 0.
Phases of (u_K, u_N)>0 quadrant.—
We first focus on the positive u_K and u_N quadrant of the phase diagram,
and compute the squared order parameter ⟨ O_i^2⟩ with
O_i = ∫ dΩψ^†(Ω)Γ^iψ(Ω) = ∑_m c^†_m Γ^i c^ _m.
Specifically, we use m^2_Neel = 1/(3N^2)⟨(O_3^2+O_4^2+O_5^2)⟩
for Néel order, and m^2_VBS = 1/(2N^2)⟨(O_1^2+O_2^2)⟩
for VBS order.
In fig:fig2(a), we fix u_K=2 and vary u_N, the rescaled squared VBS order
m^2_VBSN^Δ with Δ=0.519 being the O(2) scaling dimension,
exhibits a nice crossing behavior at u_N≃1.5 for various system sizes up to N=12.
It suggests that, in the thermodynamic limit, the system is O(2) VBS ordered for u_N<1.5
and O(2) disordered for u_N>1.5, and is critical in the vicinity of u_N≃1.5.
Similarly, in fig:fig2(b), the rescaled squared Néel order
m^2_NeelN^Δ with Δ=0.519 being the O(3) scaling dimension,
crosses nicely at u_N≃4.5 for various system sizes up to N=12.
It suggests the system is O(3) Néel ordered for u_N>4.5
and O(3) disordered for u_N<4.5, and is critical in the vicinity of u_N≃4.5.
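For reference, such crossing points can be located by interpolating the rescaled curves of two system sizes and solving for the zero of their difference; a minimal Python sketch with placeholder arrays (not the actual DMRG data) is:

import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

# Placeholder curves standing in for the rescaled order parameter m^2 N^Delta of
# two system sizes; the actual arrays come from the DMRG sweeps over u_N.
uN = np.linspace(1.0, 2.0, 11)
curve_small = 1.00 - 0.50 * uN           # illustrative numbers only
curve_large = 1.35 - 0.75 * uN           # illustrative numbers only

diff = interp1d(uN, curve_small - curve_large, kind="cubic")
idx = np.where(np.diff(np.sign(curve_small - curve_large)))[0][0]   # bracket the sign change
u_star = brentq(diff, uN[idx], uN[idx + 1])                         # crossing point u_N*
print("estimated crossing at u_N =", u_star)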
Due to the topological nature of the WZW term, the disordered phase in this model must be highly nontrivial. It is either a gapless critical phase or exhibits some “hidden” symmetry breaking order that allows for a gapped phase, e.g., a chiral spin liquid <cit.>. From our data, both interpretations are plausible, see e.g., Fig. S5 in SM <cit.>.
Moreover, unless the disordered phase is completely conventional in the presence of a hidden order, one should not expect a Wilson-fisher like scaling dimension Δ=0.519. This suggests another possibility that our observed scaling behavior at these phase boundaries may be subject to finite-size effects [We thank Cenke Xu and Yin-Chen He for the private communications on this matter].
In fig:fig2(c), to address the phase boundary of the disordered phase,
we resort to the RG-invariant binder ratio for VBS phase,
i.e., ⟨ O^2_1⟩^2/⟨ O^4_1⟩, which shows crossing behaviour between
successive N and N+1, and for largest ones N=11 and 12, they cross at u_N≃1.21.
To accurately determine the behavior at the boundary of the disordered phase (and its size in the off-diagonal direction), requires a comprehensive search for hidden orders and a more careful finite-size analysis, which we defer to a followup study.
Similar simulations are performed for various fixed u_K cuts,
and the continuous transitions are observed separately and
merge into one multi-critical point at u_K=u_N≃0.115.
For the case of u_K=u_N=u<0.115,
the SO(5) line represents a first-order transition
where the system spontaneously breaks the SO(5) symmetry.
To verify such a first-order line, we simulate along the exact SO(5) line
u_K=u_N=u. As shown in fig:fig2 (d),
correlation ratios, defined as
R≡1-⟨𝐎^2_l=1⟩/⟨𝐎^2_l=0⟩
from QMC for different system sizes up to N=15,
indicate the phase transition point near u≃0.1.
Here 𝐎_l=0=(O_1,l=0, ⋯,O_5,l=0) is the O(5) order parameter vector
and 𝐎_l=1 is the order parameter with the smallest angular momentum shift (see the SM <cit.>).
In SM, we also show the rescaled m^2_VBSN^Δ along the SO(5) line,
where they cross at u≃0.115, suggesting that, the system is O(2) VBS and O(3) Néel
ordered (or SO(5) symmetry-breaking) for u<0.115
and SO(5) disordered for u>0.115, and is critical in the vicinity of u≃0.115.
As denoted in Fig. <ref> (b), this multicritical point is the meeting point of the O(2)-breaking and O(3)-breaking critical boundaries, therefore requires to fine-tune two different control parameters in order to access this point. For 0<u<0.115, our data suggest either a weak first order transition or a narrow coexistence region of the Néel and VBS phases, shown in Sec.IV. in SM <cit.>.
Phases of (u_K, u_N) <0 quadrant.—
For negative u_K and u_N, the order parameter with M=Γ^i vanishes in the thermodynamic limit.
Instead, the relevant order parameter involves the SO(5) generator M=L^ij. We calculate the squared generators ⟨Õ_ij^2⟩ with
Õ_ij = ∫ dΩψ^†(Ω)L^ijψ(Ω) = ∑_m c^†_m L^ij c^ _m,
and define the squared FM order parameter as
m^2_FM=(1/N^2)⟨ (Õ_34^2 + Õ_35^2 + Õ_45^2)⟩,
and the squared VP order parameter as
m^2_VP=(1/N^2)⟨Õ_12^2 ⟩.
We note that, as L^12=τ_z, L^34=-σ_z, L^35=σ_y, L^45=σ_x,
the finite value of m^2_VP and m^2_FM
suggests the VP and FM states respectively.
In fig:fig3(a) and (b), we simulate along the negative SO(5) line u_K=u_N=u<0.
The ground-state energies per orbital
e_g = (1/N)⟨ψ| H_Γ|ψ⟩
show clear kinks at u_c(N) which changes upon increasing N.
In the inset, we performed a linear extrapolation of u_c(N) versus 1/N,
and find the thermodynamic value u_c(∞)≃ -1.056.
As shown in fig:fig3(b),
such a first-order transition can also be seen from the squared order parameter
⟨Õ^2⟩/N^2,
which rapidly jumps from zero to a finite plateau when increasing u around u_c(N).
The height of the plateau decreases upon increasing N, closely following a straight line
that can be extrapolated to the value of 4.
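The 1/N extrapolations used here (for u_c(N) and, analogously, for the plateau heights) amount to a simple linear fit; a short Python sketch with placeholder numbers rather than the actual data reads:

import numpy as np

# Placeholder finite-size values; the actual u_c(N) are read off the kinks of the
# ground-state energy per orbital, and the plateau heights are treated analogously.
N_list = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
uc_N = np.array([-0.92, -0.97, -1.00, -1.02, -1.03])    # illustrative numbers only

slope, uc_inf = np.polyfit(1.0 / N_list, uc_N, 1)        # u_c(N) ~ u_c(inf) + slope / N
print("extrapolated u_c(infinity) =", uc_inf)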
In fig:fig3(c), we further determine the u_K=-3 cut in the phase diagram.
The value of m^2_FM suddenly jumps from zero to a finite value around
u_N(N=∞)≃-1.25.
Similarly, in fig:fig3(d), we simulate along the fixed u_N=-3 cut, where m^2_VP
suddenly jumps to a finite value of 4 around u_K(N=∞)≃-1.38.
First order transition across quadrants.—
We then study the phase transition between the observed Néel and FM states, as well as between
the VBS and VP states.
In fig:fig4(a), we show the first cut along the line u_N = u_K +1
where m^2_FM shows a sudden drop at the point of (u_K, u_N)=(-0.5, 0.5),
suggesting a direct first-order FM-Néel transition.
In fig:fig4(b), we show the second cut along the line u_N = u_K-1
where m^2_VP shows a sudden drop
at the point of (u_K, u_N)=(0.7, -0.3), suggesting a direct first-order
VP-VBS transition.
Discussions.—
Our study provides a comprehensive
phase diagram for the (2+1)D SO(5) non-linear sigma
model with a topological term on a sphere. It
reveals novel quantum states and suggests a SO(5) disordered region separating the O(2)
breaking VBS and O(3) breaking Néel phases, which terminates at a multicritical
point <cit.>.
While the exact location and scaling behavior along the phase boundary still remains to be carefully addressed, this unexpected disordered phase unlocks new insights, as it is poised to host highly unconventional quantum states: a gapless critical phase or some “hidden” symmetry-breaking order such as a chiral spin liquid <cit.>. It may also offer a platform for the search of the predicted pseudo-critical behaviour <cit.>, which we leave for future studies.
These results, combined with recent observations
of non-unitary CFT from entanglement measurements <cit.>, push forward to
a definitive answer and open up new directions for the two-decade long pursuit of DQCP
in various Néel-to-VBS settings.
Furthermore, our results find resonance with the experiments both in the VBS-AFM transition in quantum magnets SrCu_2(BO_3)_2 <cit.> and the QSH-SC transition in monolayer WTe_2 <cit.>, the systems either exhibit a first order transition or an intermediate phase. A new pathway
towards conformal 2D SU(2) DQCP is recently proposed, with SO(5)_f × SO(5)_b global symmetry, which characterizes various symmetry breaking phases of the cuprate phase diagram <cit.>. Investigating the validity of this newly proposed DQCP using present techniques would be of great interest.
Acknowledgment.- We thank Subir Sachdev, Fakher Assaad, Yin-Chen He and Cenke Xu for valuable discussions on the related topic. BBC, XZ and ZYM thank Wei Zhu for fruitful discussion on spherical Landau level regularization. They acknowledge the support from the Research Grants Council of Hong Kong SAR of China (Grant Nos. 17301420, 17301721, AoE/P-701/20, 17309822), the ANR/RGC Joint Research Scheme sponsored by Research
Grants Council of Hong Kong SAR of China and French
National Research Agency (Project No. A_HKU703/22), the K. C. Wong Education Foundation (Grant No. GJTD-2020-01) and the Seed Funding “Quantum-Inspired explainable-AI” at the HKU-TCL Joint Research Centre for Artificial Intelligence. YW is supported by NSF under award number DMR-2045781. The authors also acknowledge the HPC2021 system under the Information Technology Services
and the Blackbody HPC system at the Department of Physics,
University of Hong Kong for their technical support
and generous allocation of CPU time.
Note Added:- Upon the completion of this work, Ref. <cit.> reported pseudo-critical behavior for the SO(5) line. The parameter range of the reported pseudo-critical behavior and (approximate) conformal symmetry, i.e. 0.7<V/U<1.5, correspond to 0.4746<u/U_0<1.7143, in the disordered phase in our phase diagram.
Supplemental Materials for
Phases of (2+1)D SO(5) non-linear sigma model with a topological term on a sphere:
multicritical point and disorder phase
In Supplementary Materials <ref>, we explain the spherical Landau level regularization of the SO(5) model. In <ref>, we show the DMRG implementation of the model. In <ref>, we show the ED and QMC implementation of the model. Our numerical results, for DMRG, ED and QMC, are show in <ref>.
§ SPHERICAL LANDAU LEVEL REGULARIZATION OF SO(5) MODEL
§.§ More on the SO(5) model
Our notation is based on that used in Refs. <cit.>.
We would like to project the SO(5) Hamiltonian onto the lowest Landau level (LLL) of the Haldane sphere.
The original Hamiltonian is
H_Γ = 1/2∫dΩ_1∫dΩ_2δ( | Ω_1 - Ω_2| )∑_i = 0^5U_i( ψ^†( Ω_1)Γ^iψ( Ω_1) - C( Ω_1)δ_i,0)( ψ^†( Ω_2)Γ^iψ( Ω_2) - C( Ω_2)δ_i,0)
where ψ_α(Ω) is a 4-component fermion annihilation operator with a combined valley and spin index α,
and Γ^i={τ_x⊗𝕀, τ_y⊗𝕀, τ_z⊗σ_x, τ_z⊗σ_y, τ_z⊗σ_z} are the 5 mutually anticommuting matrices.
Here, U_i=-u_i for i≠0 as shown in the main text and this will be the starting point Hamiltonian for DMRG or ED simulations (see <ref> for details), where all parameters U_i can be tuned freely.
For QMC, to avoid sign problem explicitly, we need to rewrite the Hamiltonian in τ^μ form by Fierz identity (see <ref> for details)
H_τ = 1/2∫dΩ_1∫dΩ_2δ( | Ω_1 - Ω_2| )∑_μ = 0^3g_μ( ψ^†( Ω_1)τ^μψ( Ω_1) - C( Ω_1)δ_μ,0)( ψ^†( Ω_2)τ^μψ( Ω_2) - C( Ω_2)δ_μ,0)
with C(Ω_1)=2∑_m|Φ_m(Ω_1)|^2 ensure half-filling. According to g_0=U_0+u_N, g_1=g_2=-(u_K+u_N), g_3=2u_N, the sign-problem-free QMC simulation requires there are even negative terms (0 or 2) within g_1,g_2,g_3 and g_0⩾0 (see <ref>). One can see this region covers the first quadrant of the phase diagram (u_N,u_K)>0 in Fig. <ref> of the main text. In our QMC simulation, we only focus on the u_N=u_K SO(5) line.
In our notation, the 10 generators of the SO(5) rotation group are
L_12 = -(i/2)[Γ^1, Γ^2] = -(i/2)[(τ_x⊗𝕀)(τ_y⊗𝕀) - (τ_y⊗𝕀)(τ_x⊗𝕀)] = τ_z ⊗𝕀,
L_13 = -(i/2)[Γ^1, Γ^3] = -(i/2)[(τ_x⊗𝕀)(τ_z⊗σ_x) - (τ_z⊗σ_x)(τ_x⊗𝕀)] = - τ_y ⊗σ_x,
L_14 = -(i/2)[Γ^1, Γ^4] = -(i/2)[(τ_x⊗𝕀)(τ_z⊗σ_y) - (τ_z⊗σ_y)(τ_x⊗𝕀)] = - τ_y ⊗σ_y,
L_15 = -(i/2)[Γ^1, Γ^5] = -(i/2)[(τ_x⊗𝕀)(τ_z⊗σ_z) - (τ_z⊗σ_z)(τ_x⊗𝕀)] = - τ_y ⊗σ_z,
L_23 = -(i/2)[Γ^2, Γ^3] = -(i/2)[(τ_y⊗𝕀)(τ_z⊗σ_x) - (τ_z⊗σ_x)(τ_y⊗𝕀)] = τ_x ⊗σ_x,
L_24 = -(i/2)[Γ^2, Γ^4] = -(i/2)[(τ_y⊗𝕀)(τ_z⊗σ_y) - (τ_z⊗σ_y)(τ_y⊗𝕀)] = τ_x ⊗σ_y,
L_25 = -(i/2)[Γ^2, Γ^5] = -(i/2)[(τ_y⊗𝕀)(τ_z⊗σ_z) - (τ_z⊗σ_z)(τ_y⊗𝕀)] = τ_x ⊗σ_z,
L_34 = -(i/2)[Γ^3, Γ^4] = -(i/2)[(τ_z⊗σ_x)(τ_z⊗σ_y) - (τ_z⊗σ_y)(τ_z⊗σ_x)] = 𝕀⊗σ_z,
L_35 = -(i/2)[Γ^3, Γ^5] = -(i/2)[(τ_z⊗σ_x)(τ_z⊗σ_z) - (τ_z⊗σ_z)(τ_z⊗σ_x)] = - 𝕀⊗σ_y,
L_45 = -(i/2)[Γ^4, Γ^5] = -(i/2)[(τ_z⊗σ_y)(τ_z⊗σ_z) - (τ_z⊗σ_z)(τ_z⊗σ_y)] = 𝕀⊗σ_x.
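These relations can be checked directly with a few lines of Python; this is a standalone numerical sanity check, not part of the DMRG or QMC codes.

import numpy as np

id2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Gamma^1 ... Gamma^5 in the tau (valley) x sigma (spin) basis
Gamma = [np.kron(sx, id2), np.kron(sy, id2),
         np.kron(sz, sx), np.kron(sz, sy), np.kron(sz, sz)]

# Clifford algebra: {Gamma^i, Gamma^j} = 2 delta_ij
for i in range(5):
    for j in range(5):
        anti = Gamma[i] @ Gamma[j] + Gamma[j] @ Gamma[i]
        assert np.allclose(anti, 2 * (i == j) * np.eye(4))

# e.g. L_12 = -(i/2)[Gamma^1, Gamma^2] should reproduce tau_z x identity
L12 = -0.5j * (Gamma[0] @ Gamma[1] - Gamma[1] @ Gamma[0])
assert np.allclose(L12, np.kron(sz, id2))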
§.§ Spherical Landau level
For electrons moving on the surface of a sphere with a 4π s magnetic monopole at its origin (2s∈ℤ), the Hamiltonian is H_0 = 1/(2M_er^2)Λ_μ^2, where Λ_μ=∂_μ + iA_μ. The eigenstates are quantized into spherical Landau levels with
energies E_n = [n(n+1)+(2n+1)s]/(2M_er^2), where n=0,1,⋯ is the Landau level index. The (n+1)-th level is (2s+2n+1)-fold degenerate. We assume all interactions are much smaller than the energy gap between Landau levels, and only consider the lowest Landau level (LLL) n=0, which is (2s+1)-fold degenerate; we denote N=2s+1 as the system size of the problem. The wave-functions of the LLL orbitals are the monopole harmonics
Φ_m(θ,ϕ) = N_m e^imϕcos^s+m(θ/2)sin^s-m(θ/2),
with m=-s,-s+1,⋯,s and N_m = √((2s+1)!/(4π (s+m)!(s-m)!)).
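As a quick numerical check of the normalization N_m, one can integrate |Φ_m|^2 over the sphere; the short Python sketch below is independent of the simulation codes.

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def lll_norm_squared(s, m):
    """Integral of |Phi_m|^2 over the sphere; should equal 1 for every allowed m."""
    # |N_m|^2 = (2s+1)! / (4 pi (s+m)! (s-m)!), written with Gamma functions
    Nm2 = gamma(2 * s + 2) / (4 * np.pi * gamma(s + m + 1) * gamma(s - m + 1))
    integrand = lambda th: (Nm2 * np.cos(th / 2) ** (2 * (s + m))
                            * np.sin(th / 2) ** (2 * (s - m)) * np.sin(th))
    return 2 * np.pi * quad(integrand, 0.0, np.pi)[0]    # the phi integral gives 2*pi

s = 5.5                                                  # i.e. N = 2s + 1 = 12 orbitals
for m in np.arange(-s, s + 1, 1.0):
    assert abs(lll_norm_squared(s, m) - 1.0) < 1e-6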
§.§ Details on the LLL projection
The projection of H_Γ on the LLL of the Haldane sphere is carried out as
H_Γ^(LLL) = 1/2∫ dΩ_1∫ dΩ_2δ( | Ω_1 - Ω_2| )∑_i = 0^5U_i∑_m_1,n_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)( ∑_α,βc_m_1,α^†Γ_α,β^ic_n_1,β - 2δ_m_1,n_1δ_i,0)
·∑_m_2,n_2Φ_m_2^*( Ω_2)Φ_n_2( Ω_2)( ∑_α,βc_m_2,α^†Γ_α,β^ic_n_2,β - 2δ_m_2,n_2δ_i,0),
and the projection of H_τ on the LLL of the Haldane sphere is carried out as
H_τ^(LLL) = 1/2∫ dΩ_1∫ dΩ_2δ( | Ω_1 - Ω_2| )∑_μ = 0^3g_μ∑_m_1,n_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)( ∑_α,βc_m_1,α^†τ_α,β^μc_n_1,β - 2δ_m_1,n_1δ_μ,0)
·∑_m_2,n_2Φ_m_2^*( Ω_2)Φ_n_2( Ω_2)( ∑_α,βc_m_2,α^†τ_α,β^μc_n_2,β - 2δ_m_2,n_2δ_μ,0).
Expanding the interaction in Legendre polynomials, U( | 𝐫_1 - 𝐫_2| ) = ∑_k = 0^∞V_kP_k( cos( Ω_12)) = ∑_kV_k 4π/(2k+1)∑_m = - k^kY_k,m^*( Ω_1)Y_k,m( Ω_2). For U( | 𝐫_1 - 𝐫_2| ) = δ( | Ω_1 - Ω_2| ), we have V_k=2k+1.
We then arrive at the form,
H_Γ^(LLL) = ∑_i U_i ∑_m_1,m_2,m (-1)^2s+m+m_1+m_2(2s+1)^2/2∑_k V_k
[ s k s; -m_1 -m m_1+m ][ s k s; -m_2 m m_2-m ][ s k s; -s 0 s ]^2
× (c^†_m_1,αΓ^i_α,β c^ _m_1+m,β-2δ_i0δ_m0)
(c^†_m_2,αΓ^i_α,β c^ _m_2-m,β-2δ_i0δ_m0)
= ∑_i U_i ∑_m_1,m_2,m V_m_1,m_2,m_2-m,m_1+m(c^†_m_1,αΓ^i_α,β c^ _m_1+m,β-2δ_i0δ_m0)
(c^†_m_2,αΓ^i_α,β c^ _m_2-m,β-2δ_i0δ_m0)
and
H_τ^(LLL) = ∑_μ g_μ ∑_m_1,m_2,m (-1)^2s+m+m_1+m_2(2s+1)^2/2∑_k V_k
[ s k s; -m_1 -m m_1+m ][ s k s; -m_2 m m_2-m ][ s k s; -s 0 s ]^2
× (c^†_m_1,ατ^μ_α,β c^ _m_1+m,β-2δ_μ0δ_m0)
(c^†_m_2,ατ^μ_α,β c^ _m_2-m,β-2δ_μ0δ_m0)
= ∑_μ g_μ ∑_m_1,m_2,m V_m_1,m_2,m_2-m,m_1+m(c^†_m_1,ατ^μ_α,β c^ _m_1+m,β-2δ_μ0δ_m0)
(c^†_m_2,ατ^μ_α,β c^ _m_2-m,β-2δ_μ0δ_m0)
with
V_m_1,m_2,m_3,m_4 = (-1)^2s+m_1+2m_2-m_3(2s+1)^2/2∑_k (2k+1)
[ s k s; -m_1 m_1-m_4 m_4 ][ s k s; -m_2 m_2-m_3 m_3 ][ s k s; -s 0 s ]^2.
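This expression can be evaluated with standard Wigner 3j routines; a short Python sketch based on sympy's wigner_3j (a standalone illustration, not the production implementation) reads:

from sympy import Rational, nsimplify
from sympy.physics.wigner import wigner_3j

def V_element(s, m1, m2, m3, m4):
    """V_{m1,m2,m3,m4} of the projected delta interaction, following the formula above."""
    s, m1, m2, m3, m4 = map(nsimplify, (s, m1, m2, m3, m4))
    total = 0
    for k in range(int(2 * s) + 1):                       # k runs over 0, 1, ..., 2s
        if abs(m1 - m4) > k or abs(m2 - m3) > k:
            continue                                      # these 3j symbols vanish
        total += ((2 * k + 1)
                  * wigner_3j(s, k, s, -m1, m1 - m4, m4)
                  * wigner_3j(s, k, s, -m2, m2 - m3, m3)
                  * wigner_3j(s, k, s, -s, 0, s) ** 2)
    phase = (-1) ** (2 * s + m1 + 2 * m2 - m3)            # an integer exponent for allowed m's
    return phase * (2 * s + 1) ** 2 / 2 * total

# example for s = 3/2 (N = 4 orbitals); only m1 + m2 = m3 + m4 combinations enter H
print(V_element(Rational(3, 2), Rational(3, 2), Rational(1, 2), Rational(1, 2), Rational(3, 2)))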
§ DETAILED IMPLEMENTATION IN DMRG
§.§ SU(2) symmetric Hamiltonian
For the case of u_1=u_2=u_K and u_3=u_4=u_5=u_N considered in the main text, the model possesses the
SU(2)_spin×U(1)_valley×U(1)_charge symmetries.
In this case, the projected SO(5) model Hamiltonian H^(LLL)_Γ [c.f. HamG] can be rewritten into
a spin rotation invariant and valley charge conserved form as
H_Γ^(LLL) = ∑_m_1,m_2,m V_m_1,m_2,m_2-m,m_1+m ∑_i U_i (c^†_m_1,αΓ^i_α,β c^ _m_1+m,β-2δ_i0δ_m0)
(c^†_m_2,αΓ^i_α,β c^ _m_2-m,β-2δ_i0δ_m0)
= ∑_m_1,m_2,m V_m_1,m_2,m_2-m,m_1+m { U_0 (Ψ^†_m_1Ψ^ _m_1+m-2δ_m0)
(Ψ^†_m_2Ψ^ _m_2-m-2δ_m0)
- 2u_K (Ψ^†_m_1τ^+ Ψ^ _m_1+m)
(Ψ^†_m_2τ^- Ψ^ _m_2-m)
- 2u_K (Ψ^†_m_1τ^- Ψ^ _m_1+m)
(Ψ^†_m_2τ^+ Ψ^ _m_2-m)
- 4u_N (Ψ^†_m_1𝐒^†Ψ^ _m_1+m)·(Ψ^†_m_2𝐒Ψ^ _m_2-m) },
where the irreducible operator (irop) for fermion annihilation is
Ψ̂^S=1/2,S_z =
[ Ψ̂_1; Ψ̂_2 ] with Ψ̂_τ^S=1/2,S_z =
[ -ĉ^ _↓,τ; ĉ^ _↑,τ ],
where the components Ψ̂_τ^1/2,+1/2=-c_↓,τ and
Ψ̂_τ^1/2,-1/2=c_↑,τ
transform as the irreducible representation (irep) under SU(2) spin rotation group
|S=1/2; S_z⟩ with S_z=+1/2, -1/2, respectively.
For this, the relative sign in the first component is important, and this fermion annihilation
operator corresponds to the defining representation, i.e., S=1/2 for SU(2).
And we have
τ^+ =
[ 0 0 1 0; 0 0 0 1; 0 0 0 0; 0 0 0 0 ], τ^- =
[ 0 0 0 0; 0 0 0 0; 1 0 0 0; 0 1 0 0 ],
𝐒≡(-1/√(2)S^+, S^z, 1/√(2)S^-)^T with
S^+ =
[ 0 1 0 0; 0 0 0 0; 0 0 0 1; 0 0 0 0 ],
S^z = 1/2[ 1 0 0 0; 0 -1 0 0; 0 0 1 0; 0 0 0 -1 ],
S^- =
[ 0 0 0 0; 1 0 0 0; 0 0 0 0; 0 0 1 0 ],
and
𝐒^†≡(-1/√(2)S^-, S^z, 1/√(2)S^+).
§.§ Angular-momentum-space matrix product state
We consider the many-body wavefucntion in the lowest Landau level basis with spin and valley degrees of freedom,
|ψ⟩ = ∑_α_-s⋯α_m⋯α_s A_α_-s⋯α_m⋯α_s⊗_m=-s^s|α⟩_m,
where |α_m⟩ spans a 16-dimensional local Hilbert space, obtained by the 16 ways of filling in electrons within the 4 states
|σ,τ⟩∈{|↑,1⟩,|↓,1⟩,|↑,2⟩,|↓,2⟩}
for each orbital m∈{-s, -s+1, ⋯, s-1, s}.
The matrix product state (MPS) ansatz for this particular case then expresses as,
|ψ⟩ = ∑_α_-s⋯α_m⋯α_s∑_β_-s⋯β_m⋯β_s-1
(A^[-s])_α_-s^β_-s (A^[-s+1])_α_-s+1^β_-s,β_-s+1⋯
(A^[s])_α_-s+1^β_s-1⊗_m=-s^s|α⟩_m,
where the geometric bond basis |β_m⟩ is introduced to encode entanglement in the system. Graphically, it can be
depicted as shown in fig:figs1.
Practically, the 16-dimensional local Hilbert space is numerically costly for 2-site update in DMRG, and we thus split the two valleys into two adjacent
local tensors and the more practical MPS reads
|ψ⟩ = ∑_α_-s^1α_-s^2⋯α_s^1α_s^2∑_β_-s^1β_-s^2⋯β_s-1^1β_s-1^2∏_m=-s^s
(A^[m,1])_α_m^1^β_m-1^2β_m^1(A^[m,2])_α_m^2^β_m^1β_m^2⊗_m=-s^s|α⟩_m^1|α⟩_m^2,
where the local basis |α_m^τ⟩ now spans a 4-dimensional local Hilbert space,
obtained by the 4 ways of filling in the spin-up and/or spin-down electrons.
The graphical representation is shown in fig:figs2.
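The splitting of one 16-dimensional site into two 4-dimensional ones can be illustrated by a plain reshape-plus-SVD in Python; this is a schematic without the SU(2)_spin×U(1)_charge symmetric tensor machinery of QSpace.

import numpy as np

# Schematic splitting of one 16-dimensional physical site into two 4-dimensional
# valley sites, using plain dense tensors (no symmetric blocks).
D1, D2 = 8, 8
A = np.random.rand(D1, 16, D2)               # MPS tensor: (left bond, physical, right bond)

A4 = A.reshape(D1, 4, 4, D2)                 # physical leg viewed as valley-1 x valley-2
M = A4.reshape(D1 * 4, 4 * D2)               # group (left, valley-1) against (valley-2, right)
U, S, Vh = np.linalg.svd(M, full_matrices=False)

chi = len(S)                                 # truncate chi here to bound the new bond dimension
A1 = U[:, :chi].reshape(D1, 4, chi)                        # plays the role of A^[m,1]
A2 = (np.diag(S[:chi]) @ Vh[:chi]).reshape(chi, 4, D2)     # plays the role of A^[m,2]

assert np.allclose(np.einsum('ipc,cqr->ipqr', A1, A2), A4) # exact when chi is not truncated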
§ DETAILED IMPLEMENTATION IN QMC
§.§ Fierz identity
Following the Ref. <cit.>, we use the Fierz identity to rewrite Hamiltonian from H_Γ to H_τ.
First, we would like to introduce the property of Γ^i=(τ_x⊗ I_2, τ_y⊗ I_2, τ_z⊗σ_x, τ_z⊗σ_y, τ_z⊗σ_z) matrices
( ψ^†τ_zψ)^2 = - 1/2( ψ^†ψ - 2)^2 - 1/2∑_i = 3,4,5( ψ^†Γ^iψ)^2 + 1/2∑_i = 1,2( ψ^†Γ^iψ)^2 + 2.
This equation comes from the idea that any 4×4 matrix can be expanded by 16 matrices O^i∈{τ_α⊗σ_β} and {O^i⊗ O^j} forms the basis for 16×16 matrix.
O_α,β^iO_γ,η^i = ∑_j,kb_i;j,kO_α,η^jO_γ,β^k.
Multiplying this formula by O_β,γ^mO_η,α^n and contracting all labels gives
Tr( O^iO^mO^iO^n) = ∑_j,kb_i;j,kTr( O^jO^n)Tr( O^kO^m).
Since O^i and O^j always either commute or anti-commute and Tr(O^jO^n)=4δ_j,n, we have b_i;j,k=±1/4δ_j,k,
so that O_α,β^iO_γ,η^i=±1/4∑_jO_α,η^jO_γ,β^j, with + for the commuting case and - for the anti-commuting case. With this relationship, we can derive
( ψ^†O^iψ)^2 = ∓1/4∑_j( ψ^†O^jψ)^2 + μψ^†ψ.
This can be seen from
( ψ^†O^iψ)^2 = ψ_α^†O_α,β^iψ_βψ_γ^†O_γ,η^iψ_η = 4ψ_α^†ψ_αδ_i,0 + ψ_α^†ψ_α - O_α,β^iO_γ,η^iψ_α^†ψ_ηψ_γ^†ψ_β = ∓1/4∑_j( ψ^†O^jψ)^2 + μψ^†ψ.
The chemical potential tuning term is 5ψ^†ψ if O^i=I_4 and ψ^†ψ if O^i≠ I_4. Directly expand the formula on the LHS below according to eq:eqS33, we obtain
( ψ^†τ_zψ)^2 - ∑_i = 1,2( ψ^†Γ^iψ)^2 + ( ψ^†ψ)^2 = - ( ψ^†τ_zψ)^2 - ∑_i = 3,4,5( ψ^†Γ^iψ)^2 + 4ψ^†ψ.
By using this formula, we can rewrite
g_0( ψ^†ψ - 2)^2 + g_1∑_μ = x,y( ψ^†τ_μψ)^2 + g_2( ψ^†τ_zψ)^2 - 2g_2 = U/2( ψ^†ψ - 2)^2 - u_N/2∑_i = 3,4,5( ψ^†Γ^iψ)^2 - u_K/2∑_i = 1,2( ψ^†Γ^iψ)^2,
where the coefficients satisfy
g_0 = U + g_2/2,g_1 = - u_K + g_2/2,g_2 = u_N.
Ignoring the constant, we rewrite the Hamiltonian from the Γ^i form to the τ_μ form, where the σ_μ label denotes the 2×2 identity matrix. This is crucial for an explicitly sign-problem-free determinant QMC simulation.
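As a quick sanity check on the Fierz rearrangement used above, the basis-matrix identity O_α,β^iO_γ,η^i=±1/4∑_jO_α,η^jO_γ,β^j can be verified numerically; the following small Python sketch (ours, not part of the original derivation) does so for all 16 basis matrices.

import numpy as np

# Pauli matrices and the 16-element basis O^j = tau_a (x) sigma_b with Tr(O^j O^k) = 4 delta_jk
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [s0, sx, sy, sz]
basis = [np.kron(a, b) for a in paulis for b in paulis]

def commutes(A, B):
    return np.allclose(A @ B, B @ A)

# Check O^i_{ab} O^i_{cd} = sum_j (+-1/4) O^j_{ad} O^j_{cb},
# with '+' when O^i and O^j commute and '-' when they anticommute.
for Oi in basis:
    lhs = np.einsum('ab,cd->abcd', Oi, Oi)
    rhs = sum((0.25 if commutes(Oi, Oj) else -0.25) * np.einsum('ad,cb->abcd', Oj, Oj)
              for Oj in basis)
    assert np.allclose(lhs, rhs)
print("Fierz rearrangement verified for all 16 basis matrices.")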
§.§ Introducing the auxiliary field
In the QMC, we first rewrite the Hamiltonian H^(LLL)_τ [c.f. HamT] in a compact form and integrate over the solid angle using the spin-weighted spherical harmonics formula,
H_τ^(LLL) = 1/2∑_μ = 0^3∑_l = 0^2sU_μ,l∑_m = - l^lδρ_μ,l,mδρ_μ,l,m^† = 1/2∑_μ = 0^3∑_l = 0^2sU_μ,l1/2∑_m ≥ 0[ ( δρ_μ,l,m + δρ_μ,l,m^†)^2 - ( δρ_μ,l,m - δρ_μ,l,m^†)^2 ],
where
δρ_μ,l,m = ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( ∑_α,βc_m_1,α^†τ_α,β^μc_n_1,β - 2δ_m_1,n_1δ_μ,0)
= ∑_m_1,n_1( - 1)^s + n_1( 2s + 1)
[ s l s; - m_1 - m n_1; ][ s l s; - s 0 s; ](∑_α,βc_m_1,α^†τ_α,β^μc_n_1,β - 2δ_m_1,n_1δ_μ,0)
= ( - 1)^mδρ_μ,l, - m^†,
where U_μ,l=g_μ(2l+1).
The Hamiltonian is now in a form for which the Hubbard–Stratonovich transformation can be carried out explicitly. We place the label μ outermost because, in the absence of projection, terms with different μ commute, so the Trotter decomposition introduces no error between different μ blocks. We then arrange the m label from large to small and separate the two auxiliary fields, since for m=0 and m>s, [δρ_μ,l,m±δρ_μ,l,m^†,δρ_μ,l^',m±δρ_μ,l^',m^†]=0. Finally, we order the l label from large to small. The partition function of this Hamiltonian after Trotter decomposition and Hubbard–Stratonovich transformation is
Z = Tr( e^- β H_τ) = Tr( ∏_te^- Δ t∑_μ,m,lU_μ,l/4 [ (δρ_μ,l,m + δρ_μ,l,m^†)^2 - (δρ_μ,l,m - δρ_μ,l,m^†)^2 ] )
≈ ∑_{ s_t,μ,l,m}∏_t,μ,l,m1/16γ( s_t,μ,l,m,1)γ( s_t,μ,l,m,2) Tr( ∏_t,μ,m∏_le^iη(s_t,μ,l,m,1)A_μ,l(δρ_μ,l,m + δρ_μ,l,m^†)∏_le^η(s_t,μ,l,m,2)A_μ,l(δρ_μ,l,m - δρ_μ,l,m^†)),
where {s_t,μ,l,m} is the set of auxiliary fields, A_μ,l=√(Δ t U_μ,l/4), γ(±1)=1+√(6)/3, γ(±2)=1-√(6)/3, η(±1)=±√(2(3-√(6))), η(±2)=±√(2(3+√(6))). Having written down all the auxiliary fields, we now discuss the computational complexity of this angular-momentum QMC method. As one can easily count, the number of auxiliary fields is N_t N^2, where N_t is the number of Trotter layers and N=2s+1 is the system size. One update of a non-locally coupled auxiliary field costs N^3, so the total cost of a single sweep is N_t N^5. Compared with the Hubbard model, N_t N^3 <cit.>, and momentum-space QMC with a cutoff, N_t N^4 <cit.>, this angular-momentum QMC bears a heavier cost. Moreover, unlike the Hubbard model, to control the Trotter error coming from [δρ_μ,l,m±δρ_μ,l,m^†,δρ_μ,l^',m±δρ_μ,l^',m^†]≠0, one needs a larger N_t for a larger N and a suitable arrangement of the positions of the auxiliary fields.
§.§ Absence of Sign problem
One should notice that this τ_μ form has no sign problem. Since we have not written the σ label explicitly in our Hamiltonian, the decoupled Hamiltonian has an SU(2) symmetry. We can use this spin-like freedom to make the block-diagonalized matrices form complex-conjugate pairs. The trick can be seen by noticing that the particle-hole transformation maps δρ_μ,l,m to -(δρ_μ,l,m^†)^∗. If U_μ,l>0, this means
iη(s_t,μ,l,m,1)A_μ,l(δρ_μ,l,m+δρ_μ,l,m^†)→-iη(s_t,μ,l,m,1)A_μ,l(δρ_μ,l,m^∗+(δρ_μ,l,m^†)^∗),
η(s_t,μ,l,m,1)A_μ,l(δρ_μ,l,m-δρ_μ,l,m^†)→η(s_t,μ,l,m,1)A_μ,l(δρ_μ,l,m^∗-(δρ_μ,l,m^†)^∗).
These are just what we want for the case μ=0,3 when U>0, U_0>0. However, one should notice that for μ=1,2, U_μ,l<0, so we need an additional minus sign for these terms while keeping the formulas at μ=0,3 unchanged. This can be done by attaching a minus-sign phase to either the τ_+ or the τ_- particles (e.g., c_n_1,τ_-→-c_n_1,τ_- and c_n_1,τ_-^†→-c_n_1,τ_-^†), because δρ_μ,l,m is diagonal in τ for μ=0,3 and off-diagonal in τ for μ=1,2. From the discussion above, one possible transformation for the σ_- particles is
c̃_n_1,τ_+,σ_-=c_n_1,τ_+,σ_-^†,
c̃_n_1,τ_-,σ_-=-c_n_1,τ_-,σ_-^†.
As a check, with this transformation we explicitly have
δρ_0,l,m,σ_- = ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( ∑_α = ± 1c_m_1,α,σ_-^†c_n_1,α,σ_- - δ_m_1,n_1)
= ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( ∑_α = ± 1( - 1)^2αc̃_m_1,α,σ_-c̃_n_1,α,σ_-^† - δ_m_1,n_1)
= - ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( ∑_α = ± 1c̃_n_1,α,σ_-^†c̃_m_1,α,σ_- - δ_m_1,n_1)
= - ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_n_1^*( Ω_1)Φ_m_1( Ω_1)Y_lm^*( Ω_1)( ∑_α = ± 1c̃_m_1,α,σ_-^†c̃_n_1,α,σ_- - δ_m_1,n_1)
= - δρ_0,l, - m,σ_+^*,
δρ_1,l,m,σ_- = ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( c_m_1,α,σ_-^†c_n_1, - α,σ_- + c_m_1, - α,σ_-^†c_n_1,α,σ_-)
= ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( - c̃_m_1,α,σ_-c̃_n_1, - α,σ_-^† - c̃_m_1, - α,σ_-c̃_n_1,α,σ_-^†)
= ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( c̃_n_1, - α,σ_-^†c̃_m_1,α,σ_- + c̃_n_1,α,σ_-^†c̃_m_1, - α,σ_-)
= ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_n_1^*( Ω_1)Φ_m_1( Ω_1)Y_lm^*( Ω_1)( c̃_m_1,α,σ_-^†c̃_n_1, - α,σ_- + c̃_m_1, - α,σ_-^†c̃_n_1,α,σ_-)
= δρ_1,l, - m,σ_+^*,
δρ_2,l,m,σ_- = - i∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( c_m_1,α,σ_-^†c_n_1, - α,σ_- - c_m_1, - α,σ_-^†c_n_1,α,σ_-),
= - i∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( - c̃_m_1,α,σ_-c̃_n_1, - α,σ_-^† + c̃_m_1, - α,σ_-c̃_n_1,α,σ_-^†)
= - i∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( c̃_n_1, - α,σ_-^†c̃_m_1,α,σ_- - c̃_n_1,α,σ_-^†c̃_m_1, - α,σ_-)
= i∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_n_1^*( Ω_1)Φ_m_1( Ω_1)Y_lm^*( Ω_1)( c̃_m_1,α,σ_-^†c̃_n_1, - α,σ_- - c̃_m_1, - α,σ_-^†c̃_n_1,α,σ_-)
= δρ_2,l, - m,σ_+^*,
δρ_3,l,m,σ_- = ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( c_m_1,α,σ_-^†c_n_1,α,σ_- - c_m_1, - α,σ_-^†c_n_1, - α,σ_-)
= ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( c̃_m_1,α,σ_-c̃_n_1,α,σ_-^† - c̃_m_1, - α,σ_-c̃_n_1, - α,σ_-^†)
= - ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_m_1^*( Ω_1)Φ_n_1( Ω_1)Y_lm^*( Ω_1)( c̃_n_1,α,σ_-^†c̃_m_1,α,σ_- - c̃_n_1, - α,σ_-^†c̃_m_1, - α,σ_-)
= - ∑_m_1,n_1√(4π/2l + 1)∫dΩ_1Φ_n_1^*( Ω_1)Φ_m_1( Ω_1)Y_lm^*( Ω_1)( c̃_m_1,α,σ_-^†c̃_n_1,α,σ_- - c̃_m_1, - α,σ_-^†c̃_n_1, - α,σ_-)
= - δρ_3,l, - m,σ_+^*.
With these relationships, the nontrivial part contributing to the sample weight satisfies
Tr( ∏_t,μ,m∏_le^iη(s_t,μ,l,m,1)A_μ,l(δρ_μ,l,m,σ_- + δρ_μ,l,m,σ_-^†)∏_le^η(s_t,μ,l,m,2)A_μ,l(δρ_μ,l,m,σ_- - δρ_μ,l,m,σ_-^†))
= Tr( ∏_t,μ,m∏_le^iη(s_t,μ,l,m,1)A_μ,l(δρ_μ,l,m,σ_+ + δρ_μ,l,m,σ_+^†)∏_le^η(s_t,μ,l,m,2)A_μ,l(δρ_μ,l,m,σ_+ - δρ_μ,l,m,σ_+^†))^*.
This completes the proof for this section.
§.§ Details for the ED and QMC measurements
In the ED simulation, the Hamiltonian is block diagonalized according to its good quantum numbers (i.e., particle number, total magnetic quantum number and total angular momentum quantum number). We diagonalize the total angular momentum operator J^2 within the subspace with a given particle number and total magnetic quantum number. Since the many-body Hamiltonian does not depend on the total magnetic quantum number, we simply take the subspace with the smallest total magnetic quantum number to obtain all possible eigenvalues. We then use the same unitary transformation that diagonalizes J^2 to block diagonalize the Hamiltonian, so that the eigenstates within each block correspond to the same total angular momentum quantum number. J^2 is defined as
J^2 = ( ∑_m,αJ_m,α)^2 = ∑_m,n,α,βJ_m,α· J_n,β = ∑_m,n,α,βJ_m,α^zJ_n,β^z + 1/2( J_m,α^+J_n,β^- + J_m,α^-J_n,β^+),
where
J_m,α^z = mc_m,α^† c_m,α
J_m,α^+ = √((s-m)(s+m+1))c_m+1,α^† c_m,α
J_m,α^- = √((s+m)(s-m+1))c_m-1,α^† c_m,α
and J_-s,α^-=J_s,α^+=0. Inserting the formulas above into the expression for J^2 gives the total angular momentum operator in the Fock basis. It is easy to verify that J_α^z=∑_m J_m,α^z, J_α^+=∑_m J_m,α^+ and J_α^-=∑_m J_m,α^- indeed form an angular momentum algebra,
[J_α^+,J_α^'^-] = 2J_α^zδ_α,α^',
[J_α^z,J_α^'^+] = J_α^+δ_α,α^',
[J_α^z,J_α^'^-] = - J_α^-δ_α,α^'.
Besides, this definition does not introduce minus signs from fermion anticommutation if we simply order the orbital states by magnetic quantum number (i.e., -s,-s+1,…,s) within each subspace α. The measurements in ED are straightforward; in QMC, one needs to expand the multi-fermion correlations at a given auxiliary-field configuration using Wick's theorem.
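As an illustration of the construction above, the single-orbital matrices of J^z and J^± for a given shell s can be built directly and their commutation relations checked numerically; the second-quantized bilinears c^†_m'c_m then inherit the same algebra. The following short Python sketch (ours, not part of the original code) performs this check.

import numpy as np

def angular_momentum_matrices(s):
    """Matrices of J^z, J^+, J^- in the basis m = -s, -s+1, ..., s."""
    m = np.arange(-s, s + 1)
    Jz = np.diag(m)
    Jp = np.zeros((len(m), len(m)))
    for i, mi in enumerate(m[:-1]):
        Jp[i + 1, i] = np.sqrt((s - mi) * (s + mi + 1))   # <m+1| J^+ |m>
    return Jz, Jp, Jp.T

s = 7.5                                                    # e.g. 2s+1 = 16 orbitals
Jz, Jp, Jm = angular_momentum_matrices(s)
assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * Jz)              # [J^+, J^-] = 2 J^z
assert np.allclose(Jz @ Jp - Jp @ Jz, Jp)                  # [J^z, J^+] = J^+
assert np.allclose(Jz @ Jm - Jm @ Jz, -Jm)                 # [J^z, J^-] = -J^-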
We list the measurements in our ED and QMC simulation below. For the SO(5) order parameter, we define
O_i,l,m = ∫dΩ_1Y_lm^*( Ω_1)ψ^†( Ω_1)Γ^iψ( Ω_1)
= ∑_m^',n^'∫dΩ_1Φ_m^'^*( Ω_1)Φ_n^'( Ω_1)Y_lm^*( Ω_1)c_m^'^†Γ^ic_n^'
= √(2l + 1/4π)∑_m^',n^'( - 1)^s + n^'( 2s + 1)
[ s l s; - m^' - m n^'; ][ s l s; - s 0 s; ]c_m^'^†Γ^ic_n^'
≡ √(2l + 1/4π)∑_m^',n^'M_m^',n^'^l,mc_m^'^†Γ^ic_n^'.
Then the imaginary time correlation function can be defined as
⟨O_i,l,m(t) O_i,l,m^†(0)⟩
= 2l + 1/4π⟨( ∑_m_1,n_1M_m_1,n_1^l,mc_m_1^†(t)Γ^ic_n_1(t))( ∑_m_2,n_2( M_m_2,n_2^l,m)^*c_n_2^†(0)Γ^ic_m_2(0))⟩
≡ ∑_m_1,n_1,m_2,n_2P_m_1,n_1,m_2,n_2^l,m⟨( ∑_m_1,n_1c_m_1^†(t)Γ^ic_n_1(t))( ∑_m_2,n_2c_n_2^†(0)Γ^ic_m_2(0))⟩.
Besides, we also use the internal energy to benchmark our QMC simulation against ED,
⟨ H ⟩ = ∑_μ,m_1,n_1,m_2,n_2V_μ,m_1,n_1,m_2,n_2⟨( ∑_α_1,β_1c_m_1,α_1^†τ_α_1,β_1^μc_n_1,β_1 - 2δ_m_1,n_1δ_μ,0)( ∑_α_2,β_2c_n_2,α_2^†τ_α_2,β_2^μc_m_2,β_2 - 2δ_m_2,n_2δ_μ,0)⟩,
where V_μ,m_1,n_1,m_2,n_2≡1/2∑_l = 0^2sU_μ,l∑_m = - l^lM_m_1,n_1^l,m( M_m_2,n_2^l,m)^*.
The observed results are presented in the Supplemental Figures section.
§ SUPPLEMENTAL FIGURES
|
http://arxiv.org/abs/2307.03946v1 | 20230708100056 | Superconducting Gap Structure of Filled Skutterudite LaOs$_4$As$_{12}$ Compound through $μ$SR Investigations | [
"A. Bhattacharyya",
"D. T. Adroja",
"A. D. Hillier",
"P. K. Biswas"
] | cond-mat.supr-con | [
"cond-mat.supr-con",
"cond-mat.mtrl-sci",
"cond-mat.str-el"
] |
[email protected]
Department of Physics, Ramakrishna Mission Vivekananda Educational and Research Institute, Belur Math, Howrah 711202, West Bengal, India
[email protected]
ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot Oxon, OX11 0QX, United Kingdom
Highly Correlated Matter Research Group, Physics Department, University of Johannesburg, PO Box 524, Auckland Park 2006, South Africa
ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot Oxon, OX11 0QX, United Kingdom
Deceased
ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot Oxon, OX11 0QX, United Kingdom
Filled skutterudite compounds have recently gained attention as innovative platforms for studying intriguing low-temperature superconducting properties. Regarding the symmetry of the superconducting gap, contradictory findings have been reported from several experiments on LaRu_4As_12 and its isoelectronic counterpart, LaOs_4As_12. In this vein, we report comprehensive bulk and microscopic results on LaOs_4As_12 utilizing specific heat analysis and muon-spin rotation/relaxation (μSR) measurements. Bulk superconductivity with T_C = 3.2 K was confirmed by the heat capacity. The superconducting ground state of the filled-skutterudite LaOs_4As_12 compound is found to have two key characteristics: the superfluid density exhibits saturation-type behavior at low temperatures, which points to fully gapped superconductivity with a gap value of 2Δ/k_BT_C = 3.26; additionally, the superconducting state does not show any sign of a spontaneous magnetic field, supporting the preservation of time-reversal symmetry. These results open the door for the development of La-based skutterudites as special probes for examining the interplay of single- and multiband superconductivity in classical electron–phonon systems.
Superconducting Gap Structure of Filled Skutterudite LaOs_4As_12 Compound through μSR Investigations
P. K. Biswas
August 12, 2023
====================================================================================================
§ INTRODUCTION
Due to their potential as thermoelectric materials for either refrigeration or power-generation applications, many filled skutterudite compounds with RT_4X_12 stoichiometry (R = alkali metals, alkaline earth metals, lanthanides, or light actinides; T = Fe, Os, Ru; X = P, As, Sb) have lately been the focus of several investigations <cit.>. With two formula units RT_4X_12 per unit cell, these compounds form a body-centered cubic structure (space group Im3̅, No. 204). The structures consist of rigid, covalently bonded cage-forming frameworks T_4X_12 that encapsulate the guest atoms R. This leads to local anharmonic thermal vibrations (rattling modes), which reduce the phononic heat conduction and open the door to their potential as promising thermoelectric materials. Because of the significant hybridization between the 4f band manifold and the conduction states, as well as the degrees of freedom provided by the R-f-derived multipole momenta of the cubically symmetric X_12 cages, these compounds may host a variety of distinct electronic and magnetic ground states. Examples include unconventional superconductivity <cit.>, the Kondo effect <cit.>, heavy fermions <cit.>, non-Fermi-liquid behavior <cit.>, etc.
The majority of the Pr- and Ce-based filled skutterudite compounds are hybridization-gap semiconductors or show magnetic transitions; however, PrOs_4Sb_12 <cit.>, PrRu_4Sb_12 <cit.> and PrRu_4As_12 <cit.> show superconducting transitions at 1.8 K, 0.97 K and 2.4 K, respectively. PrOs_4Sb_12 is highly intriguing for a variety of reasons <cit.>, including: (i) it is the first known example of a heavy-fermion superconductor containing Pr; (ii) it shows unconventional strong-coupling superconductivity that breaks time-reversal symmetry; and (iii) instead of magnetic fluctuations, electric quadrupole fluctuations may be involved in the superconducting pairing process. Although the origin of most of these unconventional phenomena remains unknown, the unique band structure of these compounds and the hybridization between localized f electrons and conduction electrons appear to play a crucial role. It was recently revealed that the Fermi level of the La compounds lies at a prominent peak arising from the T-d band manifold, which might contribute to an electronic instability <cit.>. Several La-based compounds LaT_4X_12 are especially remarkable within the filled skutterudite class due to their superconducting properties, for example LaFe_4P_12 (T_C = 4.1 K) <cit.>, LaOs_4P_12 (T_C = 1.8 K) <cit.>, and LaRu_4Sb_12 (T_C = 3.6 K) <cit.>, with special attention given to LaRu_4As_12 (T_C = 10.3 K, H_c2 = 10.2 T), which has the highest superconducting transition temperature <cit.>.
The ratio of the heat capacity jump Δ C to γT_C is ΔC/(γT_C)=1.75 for LaRu_4As_12, in comparison to the BCS value of 1.43 <cit.>. While the majority of La-based filled skutterudites are completely gapped superconductors, past research has shown numerous unique aspects of LaRu_4As_12, such as a positive curvature of H_c2, nonexponential behavior of the electronic heat capacity, and a square-root field dependence of the Sommerfeld coefficient (γ) <cit.>. We recently reported unambiguous evidence of multiband s+s-wave superconductivity in LaRu_4As_12 using muon-spin rotation measurements, with 2Δ_1/k_BT_C = 3.73 for the larger gap and 2Δ_ 2/k_BT_C = 0.144 for the smaller gap <cit.>. Furthermore, inelastic X-ray scattering experiments indicated essentially temperature-independent phonon modes between 300 K and 20 K, with the exception of 2 K, where a weak softening of specific phonon modes is detected <cit.>. All of these results demonstrate the relevance of the electron–phonon interaction in the superconductivity of LaRu_4As_12, and they accord well with the DFT-based phonon simulations <cit.>.
Another isostructural La-based filled skutterudite compound, LaOs_4As_12, has been reported by Shirotani et al. to exhibit superconductivity with T_C = 3.2 K <cit.>. LaOs_4As_12 has also shown some signs of multiband superconductivity, such as an upward curvature of the upper critical field near the transition temperature and unusual behavior in the electronic specific heat data <cit.>. A single-gap, s-wave superconducting ground state, however, is suggested by a recent study of the temperature dependence of the lower critical field <cit.>. Another study found that high-amplitude lanthanum phonons dominate the vibrational eigenmodes at low energies, based on the phonon dispersion relation determined from inelastic neutron scattering experiments <cit.>.
We have thus performed systematic muon-spin rotation and relaxation (μSR) measurements to examine the superconducting pairing process in the LaOs_4As_12 compound. Contrary to prior experimental work asserting two-band superconductivity <cit.>, we demonstrate that the low-temperature behavior of the superfluid density points to a fully gapped superconducting Fermi surface. Furthermore, the preservation of time-reversal symmetry is confirmed by the lack of spontaneous magnetic fields in the superconducting state, ruling out unusual pairing processes. The change from two-band superconductivity in LaRu_4As_12 to single-band superconductivity in LaOs_4As_12 is caused by differences in the interband coupling strength on the Fermi surface, as evidenced by the different degrees of hybridization and electronic properties observed in the Fermi surfaces of the two compounds <cit.>. These results underline the significance of the LaRu_4As_12 and LaOs_4As_12 compounds as important platforms for investigating, within the filled skutterudites, the competition between single-band and multiband superconductivity in electron–phonon-driven systems.
§ EXPERIMENTAL DETAILS
The high-temperature molten-metal-flux technique, described in <cit.>, was used to grow single crystals of LaOs_4As_12. In a quartz ampule, elements with purities higher than 99.9% were combined in a molar ratio of La:Os:Cd:As = 1:4:12:48. The details of the single-crystal growth can be found in <cit.>. The heat capacity was measured using the relaxation method in a Quantum Design physical property measurement system (PPMS). Temperatures as low as 0.38 K were attained utilizing a He-3 attachment to the PPMS <cit.>.
The μSR measurements were carried out on small unaligned single crystals of LaOs_4As_12 (0.1 mm × 0.1 mm × 0.1 mm, total mass 1 g), which gave a powder-averaged muon signal. The MuSR spectrometer at the Rutherford Appleton Laboratory, ISIS Neutron and Muon Source, UK, was used to perform the μSR measurements <cit.>. In a μSR experiment, the sample is injected with 100% spin-polarized muons. Each implanted muon thermalizes, at which point it decays (lifetime τ_μ = 2.2 μs) into a positron (and two neutrinos), which is preferentially emitted in the direction of the muon spin at the moment of decay. Utilizing detectors carefully placed around the sample, the decay positrons are detected and time-stamped. The asymmetry in the positron emission as a function of time, A(t), is calculated from the histograms collected in the forward (F) and backward (B) detectors as A(t)=[N_F(t)-α N_B(t)]/[N_F(t)+α N_B(t)], where α is a calibration factor for the instrument and N_F(t) and N_B(t) are the numbers of positrons counted in the forward and backward detectors, respectively. The detectors are placed longitudinally for ZF-μSR, and a correction coil is used to cancel out any stray magnetic fields to within 10^-4 mT. To investigate the time-reversal symmetry, ZF-μSR measurements were carried out <cit.>. In the vortex state, TF-μSR measurements were performed with applied fields of 20, 30, 40, 50, and 60 mT, which are greater than the lower critical field H_c1 (∼5 mT) and lower than the upper critical field H_c2 (∼1 T) <cit.>. The sample was mounted onto a high-purity (99.995%) silver sample holder using diluted GE varnish and covered with a thin silver foil. The sample was cooled down to 300 mK using a dilution refrigerator. To generate the vortex lattice by trapping the applied TF, the field was applied above T_C and the sample was then cooled in the field to the base temperature of 300 mK. We used the WiMDA software <cit.> to analyze the μSR data.
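For illustration, the asymmetry defined above can be formed directly from the raw detector histograms as in the following short sketch (ours; variable names are arbitrary).

import numpy as np

def asymmetry(N_F, N_B, alpha=1.0):
    """Decay-positron asymmetry A(t) from forward/backward histograms;
    alpha is the instrumental calibration factor."""
    N_F, N_B = np.asarray(N_F, dtype=float), np.asarray(N_B, dtype=float)
    return (N_F - alpha * N_B) / (N_F + alpha * N_B)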
§ RESULTS AND DISCUSSION
§.§ Crystal Structure & Physical Properties
LaOs_4As_12 crystallizes in a CoAs_3-type skutterudite structure packed with La atoms and has a body-centered cubic structure with the space group Im3̅ (No. 204), as shown in Figure <ref>. The large icosahedral cage made of As atoms is located around the electropositive La sites, which lack four-fold rotational symmetry. Between the cages, the transition metal ion Os forms a cubic sublattice. The low-temperature specific heat C_P measured as a function of temperature at zero magnetic field is shown in the inset of Figure <ref>a. The normal-state heat capacity is fitted using the equation C_P = γ T + β T^3. From this fit we obtain the lattice contribution to the specific heat, β = 0.613 mJ/mol K^4, and the electronic parameter (Sommerfeld coefficient), γ = 90.47 mJ/mol K^2. The Debye temperature is determined using the Debye model as Θ_D = (12π^4nR/5β)^1/3, where R = 8.314 J/mol K is the universal gas constant and n denotes the number of atoms in the compound (n = 17). The value of Θ_D is thus calculated to be approximately 377 K, which agrees with the previous measurement <cit.>. Figure <ref>a displays the low-T electronic specific heat C_e obtained after subtracting the phonon contribution. The heat capacity jump at T_C (Δ C_e/γ T_C) is calculated to be 1.2, which is less than the value of 1.43 expected for weak-coupling BCS superconductivity. The fit to the exponential temperature dependence of C_e(T) yields Δ(0) = 0.40 meV, which is close to the 0.45 meV value obtained from the TF-μSR data analysis (discussed in Section B). Thus 2Δ(0)/k_BT_C = 2.9, which is less than the 3.53 anticipated for weak-coupling BCS superconductors. However, the linear fitting shown in Figure <ref>b shows that this material exhibits BCS behavior with a single isotropic gap.
§.§ Superconducting Gap Structure: TF-μSR
The pairing mechanism and superconducting gap structure of the LaOs_4As_12 were investigated by TF-μSR experiments down to 0.3 K. The TF-μSR asymmetry time spectra in the presence of 20 mT and 50 mT applied magnetic fields at above and below T_C are shown in Figures <ref>a–d. Because of the extra inhomogeneous field distribution of the vortex lattice generated inside the superconducting mixed state of LaOs_4As_12, the spectrum in Figure <ref>a,c in the superconducting state at 0.3 K demonstrate a greater relaxation. Using the Gaussian damped decay function, the asymmetry spectra were fitted <cit.> using the following equation,
A_TF(t) = A_scexp(-σ_TF^2t^2/2)cos(γ_μB_sct+ϕ) +
A_bgcos(γ_μB_bgt+ϕ).
The muon gyromagnetic ratio is γ_μ/2π = 135.53 MHz/T, and the initial asymmetries of muons stopping in the sample and in the silver holder are A_sc and A_bg, respectively (constant across the entire temperature range). The local fields B_sc and B_bg correspond to muons stopping in the sample and in the sample holder, respectively, whereas ϕ is the initial phase and σ_TF is the Gaussian depolarization rate. By fitting the 0.3 K data, we obtained A_sc = 76% and A_bg = 24% of the total asymmetry. When the data at other temperatures were analyzed, A_bg was kept constant and A_sc was found to be nearly temperature independent. The emergence of bulk superconductivity is indicated by an increase in σ_TF as the system enters the superconducting state. The superconducting contribution to the relaxation, σ_sc, was determined using σ_sc = √(σ_TF^2-σ_nm^2), where σ_nm is the nuclear magnetic dipolar contribution, which is derived from high-temperature fits and is temperature independent. Figure <ref>e depicts the temperature dependence of σ_sc in several applied TF fields. Due to the low H_c2 value, σ_sc depends on the applied field, as seen in Figure <ref>f. Brandt demonstrated that the London penetration depth λ_L(T) is linked to σ_sc for a superconductor with H_ext/H_c2 ≤ 0.25 <cit.>:
σ_sc[μ s^-1] = 4.83 × 10^4(1-H_ext/H_c2)
×{1+1.21[1-√(H_ext/H_c2)]^3}λ_L^-2[nm].
This relationship has been used to compute the temperature dependency of λ_L(T). As demonstrated in Figure <ref>f, isothermal cuts perpendicular to the temperature axis of σ_sc data sets were utilized to estimate the H-dependence of the depolarization rate σ_sc(H). The normalized λ_L^-2(T)/λ_L^-2(0) temperature variation, which is directly proportional to superfluid density, is shown in Figure <ref>a. The data were fitted using the following equation <cit.>:
σ_sc(T)/σ_sc(0) = λ_L^-2(T)/λ_L^-2(0)
= 1 + 1/π∫_0^2π∫_Δ(T)^∞(δ f/δ E) EdEdϕ/√(E^2-Δ(T,ϕ)^2),
where f = [1+exp(E/k_BT)]^-1 is the Fermi function. We take Δ_k(T,ϕ) = Δ(T)g_k(ϕ), where we assume the universal temperature dependence Δ(T) = Δ_0 tanh[1.82{1.018(T_C/T-1)}^0.51]. The magnitude of the gap at 0 K is Δ_0, and the function g_k denotes the angular dependence of the gap, which is equal to 1 for an isotropic s-wave gap, 1 for each component of an isotropic s+s-wave gap, and cos(2ϕ) for a d-wave gap, where ϕ is the azimuthal angle along the Fermi surface.
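A minimal numerical sketch of this analysis chain is given below (our illustration, not the fitting code used for the figures): Brandt's relation is inverted to obtain λ_L from σ_sc, and the s-wave superfluid density is evaluated with the gap function quoted above; the quadrature near E = Δ is crude but sufficient for illustration.

import numpy as np

kB = 0.08617  # meV/K

def lambda_from_sigma(sigma_sc, H, Hc2=1.0):
    """Invert Brandt's relation: sigma_sc [mus^-1] -> lambda_L [nm], valid for H/Hc2 <~ 0.25."""
    h = H / Hc2
    pref = 4.83e4 * (1.0 - h) * (1.0 + 1.21 * (1.0 - np.sqrt(h)) ** 3)
    return np.sqrt(pref / sigma_sc)

def gap(T, Tc=3.2, delta0=0.45):
    """Delta(T) in meV with the tanh interpolation quoted in the text."""
    return 0.0 if T >= Tc else delta0 * np.tanh(1.82 * (1.018 * (Tc / T - 1.0)) ** 0.51)

def superfluid_density(T, Tc=3.2, delta0=0.45):
    """Normalized lambda_L^-2(T)/lambda_L^-2(0) for an isotropic s-wave gap."""
    d = gap(T, Tc, delta0)
    if d == 0.0:
        return 0.0
    E = np.linspace(d * (1.0 + 1e-6), d + 60.0 * kB * T, 20000)
    f = 1.0 / (1.0 + np.exp(E / (kB * T)))
    dfdE = -f * (1.0 - f) / (kB * T)                 # analytic derivative of the Fermi function
    # isotropic gap: the phi-integral cancels the 1/pi up to a factor of 2
    return 1.0 + 2.0 * np.trapz(dfdE * E / np.sqrt(E**2 - d**2), E)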
Figure <ref>a illustrates our comparison of three distinct gap models: a single isotropic s-wave gap, a multigap s+s-wave model, and a nodal d-wave gap. As seen in the figure, the superfluid density saturates at low temperatures, which is characteristic of an s-wave model with a single gap. An isotropic single-band s-wave model with a gap value of 0.45 meV provides the best representation of the data, with a gap-to-T_C ratio 2Δ(0)/k_BT_C = 3.26, which is less than the BCS weak-coupling limit (=3.53). On the other hand, the substantial rise in the χ^2 value renders the d-wave and s+s-wave (multigap) models inappropriate for this system. A two-gap s+s-wave model of multiband superconductivity has been shown to be compatible with the temperature dependence of the magnetic penetration depth of LaRu_4As_12. The larger gap-to-T_C ratio computed in the s+s-wave scenario for LaRu_4As_12, 2Δ_1(0)/k_BT_C = 3.73, is fairly comparable to the value of 3.53 for a BCS superconductor <cit.>. For LaRu_4As_12, specific phonon modes at 2 K exhibit a modest softening when compared to 20 K, demonstrating that the electron–phonon interactions causing the superconductivity have a noticeable impact on the vibrational eigenstates <cit.>. Using McMillan's relation, it is also possible to determine the electron–phonon coupling constant (λ_e-ph) <cit.>:
λ_e-ph = [1.04+μ^*ln(Θ_D/1.45T_C)]/[(1-0.62μ^*)ln(Θ_D/1.45T_C)-1.04].
where μ^* is the repulsive screened Coulomb parameter, usually assigned the value μ^* = 0.13. The calculated value of λ_e-ph is 0.534. The London model gives λ_L^2=m^*c^2/(4π n_s e^2). It connects the effective mass enhancement m^* [=(1+λ_e-ph) m_e], the superconducting carrier density n_s [=m^*c^2/(4π e^2λ_L(0)^2)], and the London penetration depth. Employing the s-wave model, we determined the London penetration depth λ_L(0) = 168 nm. The effective mass enhancement is calculated to be m^* = 1.53 m_e, and the superconducting carrier density is estimated to be n_s = 1.53 × 10^27 carriers m^-3. References <cit.> describe these computations in detail. The corresponding values for LaRu_4As_12 are n_s = 8.6 × 10^27 carriers m^-3 and m^* = 1.749 m_e <cit.>. The fitted parameters for LaOs_4As_12 and LaRu_4As_12 (for comparison) are shown in Table <ref>. To explain the observed nature of the superconducting gap structures, it is important to understand the electronic structures of these compounds, which have been investigated in <cit.>; the results suggest that the single-band order parameter in LaOs_4As_12 is associated with the hybridized As-p and Os-d electronic character of the Fermi surface. On the other hand, the lack of hybridization for the disjointed Fermi surface of LaRu_4As_12 may explain its multiband superconducting nature.
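The numbers quoted above can be reproduced with a few lines (a sketch; the SI form of the London relation, n_s = m*/(μ_0 e^2 λ_L^2), is used, and the input values are those given in the text).

import numpy as np

def mcmillan_lambda(theta_D, Tc, mu_star=0.13):
    x = np.log(theta_D / (1.45 * Tc))
    return (1.04 + mu_star * x) / ((1.0 - 0.62 * mu_star) * x - 1.04)

lam_eph = mcmillan_lambda(377.0, 3.2)        # ~0.53 for Theta_D = 377 K, Tc = 3.2 K
m_e, e, mu0 = 9.109e-31, 1.602e-19, 4e-7 * np.pi
m_eff = (1.0 + lam_eph) * m_e                # effective mass enhancement m*
lam_L0 = 168e-9                              # m, from the s-wave fit
n_s = m_eff / (mu0 * e**2 * lam_L0**2)       # ~1.5e27 carriers per m^3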
§.§ Preserved Time Reversal Symmetry: ZF-μSR
In order to determine whether a spontaneous magnetic field is present in the superconducting ground state, we conducted ZF-μSR experiments. Figure <ref>b shows the time evolution of the asymmetry spectra for T = 0.3 K < T_C and T = 3.5 K > T_C. The ZF-μSR spectra recorded in the normal and superconducting states overlap and show the same relaxation, indicating that the superconducting state does not exhibit any spontaneous magnetic field or spin fluctuations. This result suggests that time-reversal symmetry is preserved in the superconducting state of LaOs_4As_12. The strong resemblance of the ZF-μSR spectra (above and below T_C) likewise suggests that time-reversal symmetry is retained in the superconducting state of LaRu_4As_12. In order to fit the ZF data, a Lorentzian function was used <cit.>,
G_ZF(t) = A_sc(t)exp(-λ_ZF t)+A_bg,
where λ_ZF is the electronic relaxation rate, A_sc stands for the sample asymmetry, and A_bg for the constant nondecaying background signal. The red line in Figure <ref>b indicates the fits to the ZF-μSR data. The fitted parameters are λ_ZF = 0.754(4) μs^-1 at 0.3 K and λ_ZF = 0.744(5) μs^-1 at 3.5 K. No conclusive evidence of TRS breaking can be found, since the change in the relaxation rate is within the error bar.
§ SUMMARY
We employed TF-μSR to determine the gap symmetry of the superconducting state of LaOs_4As_12. An isotropic BCS-type s-wave gap model explains the temperature dependence of the superfluid density. The gap-to-T_C ratio determined from the s-wave fit to the superfluid density is 3.26, which is smaller than the value of 3.53 expected for conventional BCS systems. The ZF-μSR spectra at 0.3 K and 3.5 K are strikingly similar, indicating that time-reversal symmetry is intact. These results open up the possibility of using the compounds LaRu_4As_12 and LaOs_4As_12 as special research platforms for investigating, within the filled skutterudites, the interplay between single- and multiband superconducting order parameters in conventional systems.
§.§ Acknowledgements
We thank T. Cichorek and J. Juraszek for providing LaOs_4As_12 sample and the ascii heat capacity data. We would like to thank T. Cichorek, P. P. Ferreira, R. Lucrezi, J. Juraszek, C. Heil and L. T. F. Eleno for interesting discussions. AB expresses gratitude to the Science and Engineering Research Board for the CRG Research Grant (CRG/2020/000698 & CRG/2022/008528) and CRS Project Proposal at UGC-DAE CSR (CRS/2021-22/03/549). DTA appreciates the support provided by the Royal Society of London for the Newton Advanced Fellowship between the UK and China, the International Exchange between the UK and Japan, and EPSRC-UK (Grant number EP/W00562X/1). We thanks the ISIS Facility for the beam time, RB1520431 <cit.>.
|
http://arxiv.org/abs/2307.05287v1 | 20230711142809 | A stochastic two-step inertial Bregman proximal alternating linearized minimization algorithm for nonconvex and nonsmooth problems | [
"Chenzheng Guo",
"Jing Zhao",
"Qiao-Li Dong"
] | math.OC | [
"math.OC"
] |
A stochastic two-step inertial Bregman proximal alternating linearized minimization algorithm for nonconvex and nonsmooth problems
(Supported by Scientific Research Project of Tianjin Municipal Education Commission (2022ZD007).)
Chenzheng Guo (Email: [email protected]),
Jing Zhao (Corresponding author. Email: [email protected]),
Qiao-Li Dong (Email: [email protected])
College of Science, Civil Aviation University of China, Tianjin 300300, China
======================================================================================================================================================================================================================================
Abstract.
In this paper, for solving a broad class of large-scale nonconvex and nonsmooth optimization problems, we propose a stochastic two-step inertial Bregman proximal alternating linearized minimization (STiBPALM) algorithm with variance-reduced stochastic gradient estimators, and we show that SAGA and SARAH are variance-reduced gradient estimators. Under expectation conditions with the Kurdyka–Łojasiewicz property and some suitable conditions on the parameters, we show that the sequence generated by the proposed algorithm converges to a critical point. A general convergence rate is also provided. Numerical experiments on sparse nonnegative matrix factorization and blind image-deblurring are presented to demonstrate the performance of the proposed algorithm.
AMS Mathematics Subject Classification: 47J06, 49J52, 65K10, 90C26, 90C30.
Key words: Nonconvex and nonsmooth optimization; Stochastic; Bregman; Variance-reduced; Kurdyka–Łojasiewicz property.
§ INTRODUCTION
In this paper, we are interested in solving the following composite optimization problem:
min_x∈ℝ^l ,y∈ℝ^mΦ(x,y)=f(x)+H(x,y)+g(y),
where f:ℝ^l→(-∞,+∞], g:ℝ^m→(-∞,+∞] are proper lower semicontinuous functions, H(x,y)=1/n∑_i=1^n H_i(x,y) has a finite-sum structure, each H_i:ℝ^l×ℝ^m →ℝ is continuously differentiable, and ∇ H_i is Lipschitz continuous on bounded subsets. Note that here and throughout the paper, no convexity is imposed on Φ. In practical applications, numerous problems can be formulated in the form of (<ref>), such as signal and image processing <cit.>, nonnegative matrix factorization <cit.>, blind image-deblurring <cit.>, sparse principal component analysis <cit.>, compressed sensing <cit.>, and so on. Here we list two applications of (<ref>), which will also be used in the numerical experiments.
(1) Sparse nonnegative matrix factorization (S-NMF). The S-NMF has important applications in image processing (face recognition) and bioinformatics (clustering of gene expressions), see <cit.> for details. Given a matrix A∈ℝ ^ l× m and an integer r>0, we want to seek a factorization A ≈ XY, where X ∈ℝ ^ l× r, Y ∈ℝ ^r× m are nonnegative with r ≤min{l,m} and X is sparse. One way to solve this problem is by finding a solution for the non-negative least squares model given by
X,Ymin{η/2 A-XY _F^2 : X,Y≥ 0, X_i _0≤ s, i=1,2,… ,r},
where η>0, X_i denotes the ith column of X, and  X_i _0 denotes the number of nonzero elements of the ith column of X. In this formulation, the sparsity of X is strictly enforced using the nonconvex l_ 0 constraint. Let H(X,Y)=η/2 A-XY _F^2=∑_i=1^lη/2 A_i-X_iY _F^2, f(X)=ι_X≥0(X)+ι_ X_1 _0≤ s(X)+⋯ +ι_ X_r _0≤ s(X), g(Y)=ι_Y≥0(Y), where A_i denotes the ith row of A and ι_C is the indicator function on C. Then this model (<ref>) can be converted to (<ref>).
(2) Blind image deconvolution (BID). Let A be the observed blurred image, and let X be the unknown sharp image of the same size. Furthermore, let Y denote a small unknown blur kernel. A typical variational formulation of the blind deconvolution problem is given by:
X,Ymin{1/2 A-X⊙ Y _F^2+η∑_r=1^2d R([D(X)]_r) : 0≤ X≤ 1, 0≤ Y≤ 1, Y _1≤ 1},
where η>0, ⊙ is the two-dimensional convolution operator, X is the image to recover, and Y is the blur kernel to estimate. Here R(· ) is an image regularization term, that imposes sparsity on the image gradient and hence favors sharp images. D(· ) is the differential operator, computing the horizontal and vertical gradients for each pixel. This model (<ref>) can be converted to (<ref>), where H(X,Y)=1/2 A-X⊙ Y _F^2+η∑_r=1^2d R([D(X)]_r), f(X)=ι_0≤ X≤ 1(X), g(Y)=ι_ Y _1≤ 1(Y)+ι_0≤ Y≤ 1(Y). See <cit.> for details.
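In both examples the nonsmooth parts f and g are indicator functions, so the corresponding subproblems in the algorithms below reduce to (possibly set-valued) projections. As an illustration for the S-NMF example, one element of the proximal map of the columnwise constraint {X_i ≥ 0,  X_i _0 ≤ s} can be obtained by keeping the s largest nonnegative entries of each column, as in the following sketch (ours; since the constraint set is nonconvex the projection need not be unique).

import numpy as np

def prox_sparse_nonneg(V, s):
    """One element of the projection of each column of V onto {x >= 0, ||x||_0 <= s}."""
    X = np.maximum(V, 0.0)                       # nonnegativity first
    for j in range(X.shape[1]):
        col = X[:, j]
        if np.count_nonzero(col) > s:
            keep = np.argsort(col)[-s:]          # indices of the s largest entries
            mask = np.zeros_like(col, dtype=bool)
            mask[keep] = True
            col[~mask] = 0.0
    return X

def prox_nonneg(V):
    """Projection onto the nonnegative orthant, used for g(Y) in the S-NMF example."""
    return np.maximum(V, 0.0)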
For solving problem (<ref>), a frequently applied algorithm is the following proximal alternating linearized minimization algorithm (PALM) by Bolte et al. <cit.> based on results in <cit.>:
x_k+1∈min_ x∈ℝ^l{f(x)+⟨ x,∇_xH(x_k,y_k)⟩+1/2λ_kx-x_k^2_2},
y_k+1∈min_y∈ℝ^m{g(y)+⟨ y,∇_yH(x_k+1,y_k)⟩+1/2μ_ky-y_k^2_2},
where {λ_k}_k∈ℕ and {μ_k}_k∈ℕ are positive sequences. To further improve the performance of PALM, Pock and Sabach <cit.> introduced an inertial step to PALM, and proposed the following inertial proximal alternating linearized minimization (iPALM) algorithm:
u_1k=x_k+α _1k(x_k-x_k-1), v_1k=x_k+β _1k(x_k-x_k-1),
x_k+1∈min_ x∈ℝ^l{f(x)+⟨ x,∇_xH(v_1k,y_k)⟩+1/2λ_kx-u_1k^2_2},
u_2k=y_k+α _2k(y_k-y_k-1), v_2k=y_k+β _2k(y_k-y_k-1),
y_k+1∈min_y∈ℝ^m{g(y)+⟨ y,∇_yH(x_k+1,v_2k)⟩+1/2μ_ky-u_2k^2_2},
where α _1k,α _2k,β _1k,β _2k∈ [ 0,1 ]. Later, Gao et al. <cit.> presented a Gauss–Seidel-type inertial proximal alternating linearized minimization (GiPALM) algorithm, in which an inertial step is performed whenever the x- or y-subproblem is updated. In order to use the existing information as much as possible and further improve the numerical performance, Wang et al. <cit.> proposed a new inertial version of the proximal alternating linearized minimization (NiPALM) algorithm, which inherits the advantages of both iPALM and GiPALM.
Bregman distance regularization is an effective way to improve the numerical results of the algorithm. In <cit.>, the authors constructed the following two-step inertial Bregman alternating minimization (TiBAM) algorithm using the information of the previous three iterates:
x_k+1∈min_ x∈ℝ^l{Φ(x,y_k)+D_ϕ_1(x,x_k)+α_1k⟨ x,x_k-1-x_k⟩+α_2k⟨ x,x_k-2-x_k-1⟩},
y_k+1∈min_y∈ℝ^m{Φ(x_k+1,y)+D_ϕ_2(y,y_k)+β_1k⟨ y,y_k-1-y_k⟩+β_2k⟨ y,y_k-2-y_k-1⟩},
where D_ϕ_i(i=1,2) denotes the Bregman distance with respect to ϕ_i(i=1,2). By linearizing H(x,y) in the TiBAM algorithm, the authors of <cit.> proposed the following two-step inertial Bregman proximal alternating linearized minimization (TiBPALM) algorithm:
x_k+1∈min_ x∈ℝ^l{ f(x)+⟨ x,∇_xH(x_k,y_k)⟩+D_ϕ_1(x,x_k)+α_1k⟨ x,x_k-1-x_k⟩
+α_2k⟨ x,x_k-2-x_k-1⟩},
y_k+1∈min_y∈ℝ^m{ g(y)+⟨ y,∇_yH(x_k+1,y_k)⟩+D_ϕ_2(y,y_k)+β_1k⟨ y,y_k-1-y_k⟩
+β_2k⟨ y,y_k-2-y_k-1⟩}.
If we take ϕ_1(x)=1/2λx^2_2 and ϕ_2(y)=1/2μy^2_2 for all x∈ℝ^l and y∈ℝ^m, then (<ref>) becomes the two-step inertial proximal alternating linearized minimization (TiPALM) algorithm. Based on the alternating minimization algorithm, Chao et al. <cit.> proposed an inertial alternating minimization with Bregman distance (BIAM) algorithm. Other related work can be found in <cit.> and the references therein.
It should be noted that all of these works concern deterministic methods, i.e., no randomness is involved. However, when the dimension of the data is very large, computing the full gradient of the function H(x,y) is often prohibitively expensive. To overcome this difficulty, stochastic gradient approximations have been applied; see, e.g., <cit.> and the references therein. A block stochastic gradient iteration combining the simple stochastic gradient descent (SGD) estimator with PALM was first proposed by Xu and Yin <cit.>.
To weaken the assumptions on the objective function in <cit.> and improve the estimates on the convergence rate of a stochastic PALM algorithm, Driggs et al. <cit.> used more sophisticated so-called variance-reduced gradient estimators instead of the simple stochastic gradient descent estimators, and proposed the following stochastic proximal alternating linearized minimization (SPRING) algorithm:
x_k+1∈min_ x∈ℝ^l{f(x)+⟨ x,∇_x(x_k,y_k)⟩+1/2λ_kx-x_k^2_2},
y_k+1∈min_y∈ℝ^m{g(y)+⟨ y,∇_y(x_k+1,y_k)⟩+1/2μ_ky-y_k^2_2}.
The key of the SPRING algorithm is to replace the full gradient computations ∇_x H(x_k,y_k) and ∇_yH(x_k+1,y_k) with the stochastic estimates ∇_x(x_k,y_k) and ∇_y(x_k+1,y_k), respectively.
u_1k=x_k+α _1k(x_k-x_k-1), v_1k=x_k+β _1k(x_k-x_k-1),
x_k+1∈min_ x∈ℝ^l{f(x)+⟨ x,∇_x(v_1k,y_k)⟩+1/2λ_kx-u_1k^2_2},
u_2k=y_k+α _2k(y_k-y_k-1), v_2k=y_k+β _2k(y_k-y_k-1),
y_k+1∈min_y∈ℝ^m{g(y)+⟨ y,∇_y(x_k+1,v_2k)⟩+1/2μ_ky-u_2k^2_2},
where α _1k,α _2k,β _1k,β _2k∈ [ 0,1 ]. In addition, several variance-reduced gradient estimators have been proposed for nonconvex optimization problems, in which the classical stochastic gradient direction is modified in various ways so as to drive the variance of the gradient estimator towards zero; examples include SAG <cit.>, SVRG <cit.>, SAGA <cit.> and SARAH <cit.>.
In this paper, we combine inertial technique, Bregman distance and stochastic gradient estimators to develop a stochastic two-step inertial Bregman proximal alternating linearized minimization (STiBPALM) algorithm to solve the nonconvex optimization problem (<ref>). Our contributions are listed as follows.
(1) We propose the STiBPALM algorithm with variance-reduced stochastic gradient estimators to solve the nonconvex optimization problem (<ref>), and we show in the appendix that SAGA and SARAH are variance-reduced gradient estimators (Definition <ref>).
(2) We provide theoretical analysis to show that the proposed algorithm with the variance-reduced stochastic gradient estimator has global convergence under expectation conditions.
Under the expectation version of Kurdyka–Łojasiewicz (KŁ) property, the sequence generated by the proposed algorithm converges to a critical point and the general convergence rate is also obtained.
(3) We use several well-studied stochastic gradient estimators (e.g., SGD, SAGA and SARAH) to test the performance of STiBPALM on sparse nonnegative matrix factorization and blind image-deblurring problems. Comparing with some existing algorithms in the literature (e.g., PALM, iPALM, SPRING and SiPALM), we report preliminary numerical results to demonstrate the effectiveness of the proposed algorithm.
This paper is organized as follows. In Section <ref>, we recall some concepts and important lemmas which will be used in the proofs of the main results. Section <ref> introduces our STiBPALM algorithm in detail.
We discuss the convergence behavior of STiBPALM in Section <ref>. In Section <ref>, we perform some numerical experiments and compare the results with other algorithms. The detailed theoretical analysis showing that SAGA and SARAH are variance-reduced gradient estimators is given in the appendix.
§ PRELIMINARIES
In this section, we summarize some useful definitions and lemmas.
(Kurdyka–Łojasiewicz property <cit.>)
Let F: ℝ^d →(-∞,+∞] be a proper and lower semicontinuous function.
(i) The function F: ℝ^d →(-∞,+∞] is said to have the Kurdyka–Łojasiewicz (KŁ) property at x^∗∈domF if there exist η∈ (0,+∞], a neighborhood U of x^∗ and a continuous concave function φ:[0,η)→ℝ_+ such that φ(0)=0, φ is C^1 on (0,η), for all s∈(0,η) it is φ'(s)>0 and for all x in U∩[F(x^∗)<F<F(x^∗)+η] the Kurdyka–Łojasiewicz inequality holds,
φ'(F(x)-F(x^∗)) dist(0,∂ F(x))≥ 1.
(ii) Proper lower semicontinuous functions which satisfy the Kurdyka–Łojasiewicz inequality at each point of its domain are called Kurdyka–Łojasiewicz (KŁ) functions.
Roughly speaking, KŁ functions become sharp up to reparameterization via φ, a desingularizing function for F. Typical KŁ functions include the class of semialgebraic functions <cit.>. For instance, the l _0 pseudonorm and the rank function are KŁ. Semialgebraic functions admit desingularizing functions of the form φ (r)=ar^1-ϑ for a > 0, and ϑ∈ [0, 1) is known as the KŁ exponent of the function <cit.>. For these functions, the KŁ inequality reads
(F(x)-F(x^∗))^ϑ≤ Cξ , ∀ξ∈∂ F(x)
for some C>0.
A function F is said convex if domF is a convex set and if, for all x, y∈domF, α∈[0,1],
F(α x+(1-α)y)≤α F(x)+(1-α)F(y).
F is said θ-strongly convex with θ> 0 if F-θ/2·^2 is convex, i.e.,
F(α x+(1-α)y)≤α F(x)+(1-α)F(y)-1/2θα(1-α)x-y^2
for all x, y∈domF and α∈[0,1].
Suppose that the function F is differentiable. Then F is convex if and only if domF is a convex set and
F(x)≥ F(y)+⟨∇ F(y),x-y⟩
holds for all x, y∈domF. Moreover, F is θ-strongly convex with θ> 0 if and only if
F(x)≥ F(y)+⟨∇ F(y),x-y⟩+θ/2x-y^2
for all x, y∈domF.
Let ϕ:ℝ^d →(-∞,+∞] be a convex and Gâteaux differentiable function.
The function D_ϕ : domϕ × intdomϕ→ [0,+∞), defined by
D_ϕ(x,y)=ϕ(x)-ϕ(y)-⟨∇ϕ(y),x-y⟩,
is called the Bregman distance with respect to ϕ.
From the above definition, it follows that
D_ϕ(x,y)≥θ/2x-y^2,
if ϕ is θ-strongly convex.
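For concreteness, a direct implementation of D_ϕ is immediate; for the quadratic kernel ϕ(x)=1/(2λ)x^2_2 (which is (1/λ)-strongly convex) it reduces to the scaled Euclidean distance, as the short sketch below illustrates (ours, for illustration only).

import numpy as np

def bregman(phi, grad_phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

lam = 0.5
phi = lambda x: np.dot(x, x) / (2.0 * lam)
grad_phi = lambda x: x / lam
x, y = np.array([1.0, 2.0]), np.array([0.0, -1.0])
# For this kernel, D_phi(x, y) = ||x - y||^2 / (2*lam)
assert np.isclose(bregman(phi, grad_phi, x, y), np.dot(x - y, x - y) / (2.0 * lam))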
(Descent lemma<cit.>)
Let F: ℝ^d→ℝ be a continuously differentiable function with gradient ∇ F assumed L-Lipschitz continuous. Then
| F(y)-F(x)-⟨ y-x,∇ F(x) ⟩ | ≤L /2 x-y ^2, ∀ x,y∈ℝ^d.
Let F:ℝ^d→ℝ be a function with L-Lipschitz continuous gradient, G:ℝ^d→ℝ a proper lower semicontinuous function, and z∈min_v∈ℝ^d{G(v)+⟨ d,v-x⟩+D_ϕ(v,x)+γ⟨ v,u⟩+μ⟨ v,w⟩}, where D_ϕ denotes the Bregman distance with respect to ϕ, and x, d, u, w∈ℝ^d. Then, for all y∈ℝ^d,
F(z)+G(z)≤ F(y)+G(y)+⟨∇ F(x)-d,z-y ⟩ +L/2 x-y ^2+D_ϕ(y,x)
+L/2 z-x ^2-D_ϕ(z,x)+γ⟨ y-z,u⟩+μ⟨ y-z,w⟩.
By Lemma <ref>, we have the inequalities
F(x)-F(y)≤⟨∇ F(x), x-y⟩+L/2 x-y ^2,
F(z)-F(x)≤⟨∇ F(x), z-x⟩+L/2 z-x ^2,
which implies that
F(z)≤ F(y)+⟨∇ F(x), z-y⟩+L/2 x-y ^2+L/2 z-x ^2.
Furthermore, by the definition of z, taking v=y, we obtain
G(z)+⟨ d,z-x⟩+D_ϕ(z,x)+γ⟨ z,u⟩+μ⟨ z,w⟩
≤ G(y)+⟨ d,y-x⟩+D_ϕ(y,x)+γ⟨ y,u⟩+μ⟨ y,w⟩,
which implies that
G(z)≤ G(y)+⟨ d,y-z⟩+D_ϕ(y,x)-D_ϕ(z,x)+γ⟨ y-z,u⟩+μ⟨ y-z,w⟩.
Adding (<ref>) and (<ref>) completes the proof.
(sufficient decrease property)
Let F, G, and z be defined as in Lemma <ref>, where x, d, u, w∈ℝ^d. Assume that ϕ is θ-strongly convex. Then the following inequality holds for any λ>0:
F(z)+G(z)≤ F(x)+G(x)+1/2Lλ d-∇ F(x) ^2 +L(λ+1) -θ/2 x-z ^2
+γ⟨ x-z,u⟩+μ⟨ x-z,w⟩.
From Lemma <ref> with y=x, we have
F(z)+G(z)≤ F(x)+G(x)+⟨∇ F(x)-d,z-x ⟩+L/2 x-z ^2
-D_ϕ(z,x)+γ⟨ x-z,u⟩+μ⟨ x-z,w⟩.
Using Young's inequality ⟨∇ F(x)-d,z-x ⟩≤1/2Lλ d-∇ F(x) ^2 +Lλ/2 x-z ^2 and (<ref>) we can obtain
F(z)+G(z)≤ F(x)+G(x)+1/2Lλ d-∇ F(x) ^2 +Lλ/2 x-z ^2+L/2 x-z ^2
-θ/2 z-x ^2+γ⟨ x-z,u⟩+μ⟨ x-z,w⟩,
which can be abbreviated as the desired result.
§ STOCHASTIC TWO-STEP INERTIAL BREGMAN PROXIMAL ALTERNATING LINEARIZED MINIMIZATION ALGORITHM
Throughout this paper, we impose the following assumptions.
(i) The function Φ is bounded from below, i.e., Φ(x,y)≥Φ.
(ii) For any fixed y,
the partial gradient ∇_x H_i(·,y) is globally Lipschitz with modulus L_y for all i∈{ 1,… ,n }, that is,
∇_x H_i ( x_1 ,y ) - ∇_x H_i ( x_2,y ) ≤ L_y x_1-x_2, ∀ x_1 ,x_2∈ℝ^l.
Likewise, for any fixed x, the partial gradient ∇_y H_i(x,·) is globally Lipschitz with modulus L_x,
∇_y H_i ( x,y_1 ) - ∇_y H_i ( x,y_2 ) ≤ L_x y_1-y_2 , ∀ y_1 ,y_2∈ℝ^m.
(iii) ∇ H is Lipschitz continuous on bounded subsets of ℝ^l×ℝ^m. In other words, for each bounded subset B_1× B_2 of ℝ^l×ℝ^m, there exists M_B_1× B_2 > 0 such that
∇_x H ( x_1 ,y_1 ) - ∇_x H ( x_2,y_2 ) ≤ M_B_1× B_2 ( x_1-x_2,y_1-y_2 )
for all ( x_1 ,y_1), ( x_2 ,y_2)∈ B_1× B_2.
(iv) ϕ_i(i=1,2) is θ_i-strongly convex differentiable function. And the gradient ∇ϕ_i is η_i-Lipschitz continuous, i.e.,
∇ϕ_1(x_1) -∇ϕ_1(x_2)≤η_1 x_1-x_2, ∀ x_1 ,x_2∈ℝ^l,
∇ϕ_2(y_1) -∇ϕ_2(y_2)≤η_2 y_1 -y_2, ∀ y_1 ,y_2∈ℝ^m.
We now introduce a stochastic version of the two-step inertial Bregman proximal alternating linearized minimization algorithm. The key of our algorithm is to replace the full gradient computations ∇_x H(u_k,y_k) and ∇_y H(x_k+1,v_k) with the stochastic estimates ∇_x(u_k,y_k) and ∇_y(x_k+1,v_k), respectively. We describe the resulting algorithm in Algorithm <ref>.
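One iteration can be sketched as follows (our illustration, not the formal listing of Algorithm <ref>); it assumes quadratic Bregman kernels ϕ_1=1/(2λ)·^2_2 and ϕ_2=1/(2μ)·^2_2, so that the subproblems of (<ref>) with the stochastic gradients reduce to Euclidean proximal steps, with the gradients evaluated at the extrapolated points u_k and v_k.

def stibpalm_step(x, x1, x2, y, y1, y2, grad_x_est, grad_y_est, prox_f, prox_g,
                  lam, mu, a1, a2, b1, b2, g1x, g2x, g1y, g2y):
    """One illustrative STiBPALM iteration (x1 = x_{k-1}, x2 = x_{k-2}, etc.)."""
    u = x + g1x * (x - x1) + g2x * (x1 - x2)           # extrapolation point u_k
    dx = grad_x_est(u, y)                              # stochastic estimate of grad_x H(u_k, y_k)
    x_new = prox_f(x + lam * (a1 * (x - x1) + a2 * (x1 - x2)) - lam * dx, lam)
    v = y + g1y * (y - y1) + g2y * (y1 - y2)           # extrapolation point v_k
    dy = grad_y_est(x_new, v)                          # stochastic estimate of grad_y H(x_{k+1}, v_k)
    y_new = prox_g(y + mu * (b1 * (y - y1) + b2 * (y1 - y2)) - mu * dy, mu)
    return x_new, y_new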
Stochastic gradients ∇_x(u_k,y_k) and ∇_y(x_k+1,v_k) use the gradients of only a few indices ∇ _xH_i(u_k,y_k) and ∇ _yH_i(x_k+1,v_k) for i ∈ B_k ⊂{ 1,2,… , n }. The minibatch B_k is chosen uniformly at random from all subsets of { 1,2,… , n } with cardinality b. The simplest one is the stochastic gradient descent (SGD) estimator <cit.>. While the SGD estimator is not variance-reduced, many popular gradient estimators as the SAGA <cit.> and SARAH <cit.> estimators have this property. In this paper, we mainly consider SAGA (Appendix <ref>) and SARAH (Appendix <ref>) gradient estimators.
(SGD <cit.>)
The SGD gradient estimator ∇_x^SGD(x_k,y_k) is defined as follows,
∇_x^SGD(x_k,y_k)=1/b∑_i∈ B_k∇ _xH_i(x_k,y_k),
where B_k are mini-batches containing b indices.
The SGD gradient estimator uses the gradient of a randomly sampled batch to represent the full gradient.
(SAGA <cit.>)
The SAGA gradient estimator ∇_x^SAGA(x_k,y_k) is defined as follows,
∇_x^SAGA(x_k,y_k)=1/b∑_i∈ B_k ( ∇ _xH_i(x_k,y_k)- ∇ _xH_i(φ _k^i,y_k) ) + 1/n∑_j=1^n∇ _xH_j(φ _k^j,y_k),
where B_k are mini-batches containing b indices. The variables φ _k^i follow the update rules φ _k+1^i=x_k if i∈ B_k and φ _k+1^i=φ _k^i otherwise.
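A literal transcription of this estimator is sketched below (our illustration; practical implementations keep a running table of stored gradients rather than re-evaluating all components, but the returned value is the same).

import numpy as np

def saga_grad_x(x, y, phi_table, grad_xi, n, b, rng):
    """SAGA estimate of grad_x H(x, y); phi_table[i] stores the reference point phi_k^i."""
    batch = rng.choice(n, size=b, replace=False)
    full_ref = sum(grad_xi(j, phi_table[j], y) for j in range(n)) / n
    corr = sum(grad_xi(i, x, y) - grad_xi(i, phi_table[i], y) for i in batch) / b
    for i in batch:
        phi_table[i] = x.copy()                     # phi_{k+1}^i = x_k for i in B_k
    return full_ref + corr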
(SARAH <cit.>)
The SARAH gradient estimator reads for k = 0 as
∇_x^SARAH(x_0,y_0)=∇_xH(x_0,y_0).
For k = 1, 2,…, we define random variables p_k∈{ 0,1 } with P(p_k=0)=1/p and P(p_k=1)=1-1/p, where p ∈(1,∞ ) is a fixed chosen parameter. Let B_k be a random subset uniformly drawn from { 1,… , n } of fixed batch size b. Then for k= 1, 2,…, the SARAH gradient estimator reads as
∇_x^SARAH(x_k,y_k)
= ∇_xH(x_k,y_k), if p_k=0,
1/b∑_i∈ B_k ( ∇ _xH_i(x_k,y_k)- ∇ _xH_i(x_k-1,y_k-1) ) +∇_x^SARAH(x_k-1,y_k-1), if p_k=1.
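Analogously, a sketch of the SARAH estimator reads as follows (our illustration; v_prev stores the previous estimate and is reset to the full gradient with probability 1/p).

def sarah_grad_x(x, y, x_prev, y_prev, v_prev, grad_x_full, grad_xi, n, b, p, rng):
    """SARAH estimate of grad_x H(x, y)."""
    if v_prev is None or rng.random() < 1.0 / p:       # p_k = 0: full-gradient restart
        return grad_x_full(x, y)
    batch = rng.choice(n, size=b, replace=False)       # p_k = 1: recursive update
    corr = sum(grad_xi(i, x, y) - grad_xi(i, x_prev, y_prev) for i in batch) / b
    return v_prev + corr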
In our analysis, we assume that stochastic gradient estimator used in Algorithm <ref> is variance-reduced, which is a quite general assumption in stochastic gradient algorithms <cit.>. The following definition is analogous to Definition 2.1 in <cit.>.
(variance-reduced gradient estimator)
Let { z_k } _k∈ℕ={ (x_k,y_k)} _k∈ℕ be the sequence generated by Algorithm <ref> with some gradient estimator ∇. This gradient estimator is called variance-reduced with constants V_1,V_2,V_Υ≥ 0, and ρ∈ (0,1] if it satisfies the following conditions:
(i) (MSE bound) There exists a sequence of random variables {Υ _k } _k∈ℕ of the form Υ _k=∑_i=1^s (v_k^i )^2 for some nonnegative random variables v_k^i∈ℝ such that
𝔼_k [ ∇_x(u_k,y_k)-∇ _xH(u_k,y_k) ^2+∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ^2 ]
≤ Υ _k+V_1 (𝔼_k z_k+1-z_k ^2+ z_k-z_k-1 ^2 + z_k-1-z_k-2 ^2+ z_k-2-z_k-3 ^2 ),
and, with Γ _k=∑_i=1^s v_k^i
𝔼_k [ ∇_x(u_k,y_k)-∇ _xH(u_k,y_k) +∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ]
≤ Γ_k+V_2 (𝔼_k z_k+1-z_k+ z_k-z_k-1 + z_k-1-z_k-2+ z_k-2-z_k-3 ).
(ii) (Geometric decay) The sequence {Υ _k } _k∈ℕ decays geometrically:
𝔼_kΥ _k+1≤ (1-ρ )Υ _k+V_Υ (𝔼_k z_k+1-z_k ^2+ z_k-z_k-1 ^2+ z_k-1-z_k-2 ^2.
.+ z_k-2-z_k-3 ^2 ).
(iii) (Convergence of estimator) If { z_k } _k∈ℕ satisfies lim_k →∞𝔼 z_k-z_k-1 ^2=0, then 𝔼Υ _k→ 0 and 𝔼Γ _k→ 0.
In the following, if { z_k } _k∈ℕ={ (x_k,y_k)} _k∈ℕ is the bounded sequence generated by Algorithm <ref>, we assume that ∇ H is M-Lipschitz continuous on { (x_k,y_k)} _k∈ℕ.
For the sequences { x_k }_k∈ℕ and { y_k }_k∈ℕ generated by Algorithm <ref>, there exists L> 0 such that
sup{ L _y_k:k∈ℕ}≤ L and sup{ L _x_k:k∈ℕ}≤ L,
where L _y_k and L _x_k are the Lipschitz constants for ∇_x H_i(·,y_k) and ∇_y H_i(x_k,·), respectively.
If { z_k } _k∈ℕ={ (x_k,y_k)} _k∈ℕ be the bounded sequence generated by Algorithm <ref>. Then the SAGA gradient estimator is variance-reduced with parameters V_1=16N^2γ^2/b, V_2=4Nγ/√(b), V_Υ=408nN^2(1+2γ_1^2+γ_2^2)/b^2 and ρ=b/2n, where N=max{M,L }, γ=max{γ_1,γ_2 }. The SARAH estimator is variance-reduced with parameters V_1=6 ( 1-1/p )M^2(1+2γ_1^2+γ_2^2), V_2=M√(6(1-1/p)(1 +2γ_1^2+γ_2^2) ), V_Υ=6 ( 1-1/p )M^2(1+2γ_1^2+γ_2^2) and ρ= 1/p.
See the detailed proof of Proposition <ref> in Appendices <ref> and <ref>. The conclusion that the SVRG gradient estimator is variance-reduced can be obtained similarly.
Below, we give the supermartingale convergence theorem that will be applied to obtain almost sure convergence of sequences generated by STiBPALM (Algorithm <ref>).
(supermartingale convergence)
Let { X_k } _k∈ℕ and { Y_k } _k∈ℕ be sequences of bounded nonnegative random variables such that X_k, Y_k depend only on the first k iterations of Algorithm <ref>. If
𝔼 _kX_k+1+Y_k≤ X_k
for all k, then ∑_k=0^∞ Y_k<+∞ a.s. and { X_k } converges a.s.
§ CONVERGENCE ANALYSIS UNDER THE KL PROPERTY
In this section, under Assumption <ref> and <ref>, we prove convergence of the sequence and extend the convergence rates of SPRING to Algorithm <ref>, for semialgebraic function Φ. Given k∈ℕ, define the quantity
Ψ _k= Φ (z_k)+ 1/LλρΥ _k+ (V_1+V_Υ /ρ/Lλ+α_1+α_2/2+2L(γ_1^2+γ_2^2)/λ+3Z )z_k-z_k-1^2
+ ( V_1+V_Υ /ρ/Lλ+α_2/2+2Lγ_2^2/λ+2Z )z_k-1-z_k-2^2+ ( V_1+V_Υ /ρ/Lλ +Z ) z_k-2-z_k-3^2,
where λ=√(10(V_1+V_Υ /ρ)+4L^2(γ_1^2+γ_2^2)/L^2), Z=V_1+V_Υ /ρ/√(10(V_1+V_Υ /ρ)+4L^2(γ_1^2+γ_2^2))+ϵ>0, ϵ>0 is small enough. Our first result guarantees that Ψ _k is decreasing in expectation.
(𝑙 _2 summability)
Suppose Assumption <ref> and <ref> hold. Let { z_k }_k∈ℕ be the sequence generated by Algorithm <ref> with variance-reduced gradient estimator, and let
θ△=min{θ_1,θ_2 }> L+2α _1+2α _2+2√(10(V_1+V_Υ /ρ)+4L^2(γ_1^2+γ_2^2))+6ϵ,
then the following conclusions hold.
(i) Ψ _k satisfies
𝔼_k [Ψ _k+1 +κ z_k+1-z_k^2+ϵ z_k-z_k-1^2+ ϵ z_k-1-z_k-2^2+ Z z_k-2-z_k-3^2 ] ≤Ψ _k,
where κ=(θ -L)/2-α_1-α_2-√(10(V_1+V_Υ /ρ)+4L^2(γ_1^2+γ_2^2))-3ϵ>0.
(ii) The expectation of the squared distance between the iterates is summable:
∑_k=0^∞𝔼 [ x_k+1-x_k^2+ y_k+1-y_k^2]=∑_k=0^∞𝔼 z_k+1-z_k^2<∞.
(i) Applying Lemma <ref> with F(· )=H(· ,y_k), G(· )=f(· ), z=x_k+1, x= x_k, d =∇_x(u_k,y_k), u = x_k-1-x_k and w = x_k-2-x_k-1, for any λ>0, we have
H(x_k+1,y_k)+f(x_k+1)
≤ H(x_k,y_k)+f(x_k)+1/2Lλ∇_x(u_k,y_k)-∇_x H(x_k,y_k) ^2+L(λ+1) -θ_1 /2 x_k+1-x_k ^2
+α _1k⟨ x_k+1-x_k,x_k-x_k-1⟩+α _2k⟨ x_k+1-x_k,x_k-1-x_k-2⟩
(1)≤ H(x_k,y_k)+f(x_k)+1/Lλ∇_x(u_k,y_k)-∇_x H(u_k,y_k) ^2+1/Lλ∇_x H(u_k,y_k)-∇_x H(x_k,y_k) ^2
+L(λ+1) -θ_1 /2 x_k+1-x_k ^2 +α_1k/2 (x_k+1-x_k^2+x_k-x_k-1^2)
+α_2k/2(x_k+1-x_k^2+x_k-1-x_k-2^2)
(2)≤ H(x_k,y_k)+f(x_k)+1/Lλ∇_x(u_k,y_k)-∇_x H(u_k,y_k) ^2+L/λ u_k-x_k ^2
+ (L(λ+1) -θ_1 /2 +α_1+α_2/2 ) x_k+1-x_k ^2+α_1/2x_k-x_k-1^2+α_2/2x_k-1-x_k-2^2
≤ H(x_k,y_k)+f(x_k)+1/Lλ∇_x(u_k,y_k)-∇_x H(u_k,y_k) ^2+ (2Lγ_1k^2/λ+α_1/2 ) x_k-x_k-1 ^2
+ ( 2Lγ_2k^2/λ+α_2/2 ) x_k-1-x_k-2 ^2 + (L(λ+1) -θ_1 /2+α_1+α_2/2 ) x_k+1-x_k ^2.
Inequality (1) is the standard inequality a-c^2≤2 a-b^2+2 b-c^2, and (2) uses Assumption <ref> (ii) and Assumption <ref>. Analogously, for the updates in y_k, we use Lemma <ref> with F(· )=H( x_k+1,·), G(· )=g(· ), z=y_k+1, x= y_k, d =∇_y(x_k+1,v_k), u = y_k-1-y_k and w = y_k-2-y_k-1, and we have
H(x_k+1,y_k+1)+g(y_k+1)
≤ H(x_k+1,y_k)+g(y_k)+1/Lλ∇_y(x_k+1,v_k)-∇_y H(x_k+1,v_k) ^2+ (2Lμ_1k^2/λ+α_1/2 ) y_k-y_k-1 ^2
+ ( 2Lμ_2k^2/λ+α_2/2 ) y_k-1-y_k-2 ^2+ (L(λ+1) -θ_2 /2 +α_1+α_2/2 ) y_k+1-y_k ^2.
Adding (<ref>) and (<ref>), we have
Φ (x_k+1,y_k+1)
≤ Φ (x_k,y_k)+1/Lλ ( ∇_x(u_k,y_k)-∇_x H(u_k,y_k) ^2 +∇_y(x_k+1,v_k)-∇_y H(x_k+1,v_k) ^2 )
+ ( L(λ+1) -θ/2+α_1+α_2/2 )z_k+1-z_k^2+ ( 2Lγ_1^2/λ+α_1/2 )z_k-z_k-1^2
+ ( 2Lγ_2^2/λ+α_2/2 )z_k-1-z_k-2^2,
where θ=min{θ_1,θ_2 }. Applying the conditional expectation operator 𝔼 _k, we can bound the MSE terms using (<ref>). This gives
𝔼 _k [ Φ (z_k+1)+ ( -L(λ+1) -θ/2-α_1+α_2/2-V_1/Lλ ) z_k+1-z_k^2 ]
≤ Φ (z_k)+ 1/LλΥ _k+ ( V_1/Lλ+2Lγ_1^2/λ+α_1/2 )z_k-z_k-1^2+( V_1/Lλ+ 2Lγ_2^2/λ+α_2/2 )z_k-1-z_k-2^2
+V_1/Lλz_k-2-z_k-3^2.
Next, we use (<ref>) to say that
1/LλΥ _k≤1/Lλρ ( -𝔼_kΥ _k+1+Υ _k+V_Υ (𝔼_k z_k+1-z_k ^2+ z_k-z_k-1 ^2. .
. . + z_k-1-z_k-2 ^2 + z_k-2-z_k-3 ^2 ) ).
Combining these inequalities, we have
𝔼 _k [ Φ (z_k+1)+ 1/LλρΥ _k+1 + ( -L(λ+1) -θ/2-α_1+α_2/2 -V_1+V_Υ /ρ/Lλ ) z_k+1-z_k^2 ]
≤ Φ (z_k)+ 1/LλρΥ_k+ ( V_1+V_Υ /ρ/Lλ+2Lγ_1^2/λ+α_1/2 )z_k-z_k-1^2
+( V_1+V_Υ /ρ/Lλ+ 2Lγ_2^2/λ+α_2/2 )z_k-1-z_k-2^2+V_1+V_Υ /ρ/Lλz_k-2-z_k-3^2.
This is equivalent to
𝔼 _k [ Φ (z_k+1)+ 1/LλρΥ _k+1+ (V_1+V_Υ /ρ/Lλ+α_1+α_2/2+2L(γ_1^2+γ_2^2)/λ+3Z ) z_k+1-z_k^2 .
. + ( V_1+V_Υ /ρ/Lλ +α_2/2+2Lγ_2^2/λ+2Z ) z_k-z_k-1^2+ ( V_1+V_Υ /ρ/Lλ +Z ) z_k-1-z_k-2^2 .
. + ( -L(λ+1) -θ/2- 2(V_1+V_Υ /ρ) /Lλ-α_1-α_2-2L(γ_1^2+γ_2^2)/λ-3Z )z_k+1-z_k^2 ]
≤ Φ (z_k)+ 1/LλρΥ _k+ (V_1+V_Υ /ρ/Lλ+α_1+α_2/2+2L(γ_1^2+γ_2^2)/λ+3Z )z_k-z_k-1^2
+ ( V_1+V_Υ /ρ/Lλ+α_2/2+2Lγ_2^2/λ+2Z )z_k-1-z_k-2^2+ ( V_1+V_Υ /ρ/Lλ +Z ) z_k-2-z_k-3^2
- (Z-V_1+V_Υ /ρ/Lλ )z_k-z_k-1^2- (Z-V_1+V_Υ /ρ/Lλ )z_k-1-z_k-2^2-Zz_k-2-z_k-3^2.
We have
𝔼 _k [Ψ _k+1+ ( -L(λ+1) -θ/2- 2(V_1+V_Υ /ρ) /Lλ-α_1-α_2-2L(γ_1^2+γ_2^2)/λ-3Z )z_k+1-z_k^2 ]
≤ Ψ _k- (Z-V_1+V_Υ /ρ/Lλ )z_k-z_k-1^2- (Z-V_1+V_Υ /ρ/Lλ )z_k-1-z_k-2^2-Zz_k-2-z_k-3^2.
By λ= √(10(V_1+V_Υ /ρ)+4L^2(γ_1^2+γ_2^2)/L^2), we have -L(λ+1) -θ/2- 2(V_1+V_Υ /ρ) /Lλ-α_1-α_2-2L(γ_1^2+γ_2^2)/λ-3Z=(θ -L)/2-α_1-α_2-√(10(V_1+V_Υ /ρ)+4L^2(γ_1^2+γ_2^2))-3ϵ=κ. Hence (<ref>) becomes
𝔼_k [Ψ _k+1 +κ z_k+1-z_k^2+ϵ z_k-z_k-1^2+ ϵ z_k-1-z_k-2^2 + Z z_k-2-z_k-3^2 ] ≤Ψ _k.
Since θ> L+2α _1+2α _2+2√(10(V_1+V_Υ /ρ)+4L^2(γ_1^2+γ_2^2))+6ϵ, we have κ>0. This proves the first claim.
(ii) We apply the full expectation operator to (<ref>) and sum the resulting inequality from k=0 to k=T-1,
𝔼Ψ _T+κ∑_k=0^T-1𝔼 z_k+1-z_k^2+ϵ∑_k=0^T-1𝔼 z_k-z_k-1^2+ ϵ∑_k=0^T-1𝔼 z_k-1-z_k-2^2
+ Z ∑_k=0^T-1𝔼 z_k-2-z_k-3^2
≤ Ψ _0,
Using the fact that Φ≤Ψ_T, we obtain
κ∑_k=0^T-1𝔼 z_k+1-z_k^2+ϵ∑_k=0^T-1𝔼 z_k-z_k-1^2+ ϵ∑_k=0^T-1𝔼 z_k-1-z_k-2^2
+ Z ∑_k=0^T-1𝔼 z_k-2-z_k-3^2
≤ Ψ _0-Φ.
Taking the limit T → +∞, we conclude that the sequence {𝔼 z_k+1-z_k^2 } is summable.
The next lemma establishes a bound on the norm of the subgradients of Φ(z_k).
(subgradient bound)
Suppose Assumption <ref> and <ref> hold. Let {z_k}_k∈ℕ be a bounded sequence, which is generated by Algorithm <ref> with variance-reduced gradient estimator. For k≥ 0, define
A_x^k = ∇ _xH(x_k,y_k)-∇ _x(u_k-1,y_k-1)+∇ϕ_1(x_k-1)- ∇ϕ_1(x_k)+α_1,k-1(x_k-1-x_k-2)
+α_2,k-1(x_k-2-x_k-3),
A_y^k = ∇ _yH(x_k,y_k)-∇ _y(x_k,v_k-1)+∇ϕ_2(y_k-1)- ∇ϕ_2(y_k)+β_1,k-1(y_k-1-y_k-2)
+β_2,k-1(y_k-2-y_k-3).
Then (A_x^k,A_y^k )∈∂Φ(x_k,y_k) and
𝔼_k-1 (A_x^k,A_y^k )
≤ p (𝔼_k-1 z_k-z_k-1+ z_k-1-z_k-2 + z_k-2-z_k-3+ z_k-3-z_k-4 )+Γ_k-1,
where p=2(2N+η+Nγ_1+Nγ_2+α _1+α _2)+V_2, N=max{ M,L }, η =max{η _1,η _2 }.
By the definition of x_k, we have that 0 must lie in the subdifferential at point x_k of the function
x⟼ f(x)+⟨ x,∇_x(u_k-1,y_k-1)⟩+D_ϕ_1(x,x_k-1)+α_1,k-1⟨ x,x_k-2-x_k-1⟩+α_2,k-1⟨ x,x_k-3-x_k-2⟩.
Since ϕ_1 and ϕ_2 are differentiable, we have
0∈∂ f(x_k)+∇_x(u_k-1,y_k-1)+∇ϕ_1(x_k)- ∇ϕ_1(x_k-1)+α_1,k-1(x_k-2-x_k-1)+α_2,k-1(x_k-3-x_k-2),
which implies that
∇ _xH(x_k,y_k)-∇_x(u_k-1,y_k-1)+∇ϕ_1(x_k-1)-∇ϕ_1(x_k)
+α_1,k-1(x_k-1-x_k-2)+α_2,k-1(x_k-2-x_k-3)
∈∇ _xH(x_k,y_k)+∂ f(x_k).
Similarly, we have
∇ _yH(x_k,y_k)-∇ _y(x_k,v_k-1)+∇ϕ_2(y_k-1)- ∇ϕ_2(y_k)
+β_1,k-1(y_k-1-y_k-2)+β_2,k-1(y_k-2-y_k-3)
∈∇ _yH(x_k,y_k)+∂ g(y_k).
Because of the structure of Φ, from (<ref>) and (<ref>), we have
(A_x^k,A_y^k )∈∂Φ(x_k,y_k).
All that remains is to bound the norms of A_x^k and A_y^k. Because ∇ H is M-Lipschitz continuous on bounded sets, then from Assumption <ref> (iii) and (iv), we have
A_x^k
≤ ∇ _xH(x_k,y_k)-∇_x(u_k-1,y_k-1) +∇ϕ_1(x_k-1)-∇ϕ_1(x_k)
+α_1,k-1 x_k-1-x_k-2 +α_2,k-1 x_k-2-x_k-3
≤ ∇ _xH(x_k,y_k)-∇ _xH(u_k-1,y_k-1) +∇ _xH(u_k-1,y_k-1)-∇_x(u_k-1,y_k-1)
+η _1 x_k-1-x_k+α_1,k-1 x_k-1-x_k-2 +α_2,k-1 x_k-2-x_k-3
≤ ∇ _xH(u_k-1,y_k-1)-∇_x(u_k-1,y_k-1) +M x_k-u_k-1+M y_k-y_k-1
+η _1 x_k-1-x_k+α_1,k-1 x_k-1-x_k-2 +α_2,k-1 x_k-2-x_k-3
≤ ∇ _xH(u_k-1,y_k-1)-∇_x(u_k-1,y_k-1) +(M+η _1) x_k-x_k-1+M y_k-y_k-1
+(Mγ_1+α_1) x_k-1-x_k-2 +(Mγ_2+α_2) x_k-2-x_k-3.
A similar argument holds for A_y^k:
A_y^k
≤ ∇ _yH(x_k,y_k)-∇ _yH(x_k,v_k-1) +∇ _yH(x_k,v_k-1)-∇_y(x_k,v_k-1)
+η _2 y_k-1-y_k+β_1,k-1 y_k-1-y_k-2 +β_2,k-1 y_k-2-y_k-3
≤ ∇ _yH(x_k,v_k-1)-∇_y(x_k,v_k-1) +(L+η _2) y_k-y_k-1
+(Lγ_1+α_1) y_k-1-y_k-2 +(Lγ_2+α_2) y_k-2-y_k-3.
Adding (<ref>) and (<ref>), we get
A_x^k+ A_y^k
≤ ∇ _xH(u_k-1,y_k-1)-∇_x(u_k-1,y_k-1) +∇ _yH(x_k,v_k-1)-∇_y(x_k,v_k-1)
+2( 2N+η) z_k-z_k-1+2(Nγ_1+α _1) z_k-1-z_k-2 +2(Nγ_2+α _2) z_k-2-z_k-3,
where N=max{ M,L }, η =max{η _1,η _2 }. Applying the conditional expectation operator and using (<ref>) to bound the MSE terms, we can obtain
𝔼_k-1 (A_x^k,A_y^k ) ≤𝔼_k-1 [ A_x^k+ A_y^k ]
≤ (4N+2η+V_2)𝔼 _k-1 z_k-z_k-1 +(2Nγ_1+2α _1+V_2) z_k-1-z_k-2
+(2Nγ_2+2α _2+V_2) z_k-2-z_k-3+V_2 z_k-3-z_k-4+Γ_k-1
≤ p ( 𝔼_k-1 z_k-z_k-1+ z_k-1-z_k-2 + z_k-2-z_k-3+ z_k-3-z_k-4 )+Γ_k-1,
where p=2(2N+η+Nγ_1+Nγ_2+α _1+α _2)+V_2.
Define the set of limit points of { z_k}_k∈ℕ as
Ω:={ẑ: there exists a subsequence { z_k_l} of { z_k} such that z_k_l→ẑ as l→∞}.
The following lemma describes properties of Ω.
(limit points of { z_k}_k∈ℕ)
Suppose Assumption <ref> and <ref> hold.
Let {z_k}_k∈ℕ be a bounded sequence, which is generated by Algorithm <ref> with variance-reduced gradient estimator, let
θ > L+2α _1+2α _2+2√(10(V_1+V_Υ /ρ)+4L^2(γ_1^2+γ_2^2))+6ϵ,
where ϵ>0 is small enough. Then
(1) ∑_k=1^∞ z_k-z_k-1^2<∞ a.s., and z_k-z_k-1→ 0 a.s.;
(2) 𝔼Φ (z_k)→Φ ^∗, where Φ ^∗∈ [Φ,∞ );
(3) 𝔼 dist(0,∂Φ (z_k)) → 0;
(4) the set Ω is nonempty, and for all z^∗∈Ω, 𝔼 dist(0,∂Φ (z^∗)) = 0;
(5) dist(z_k ,Ω)→ 0 a.s.;
(6) Ω is a.s. compact and connected;
(7) 𝔼Φ (z^∗)= Φ ^∗ for all z^∗∈Ω.
By Lemma <ref>, we have claim (1) holds.
According to (<ref>), the supermartingale convergence theorem ensures that {Ψ _k} converges to a finite, positive random variable. Because z_k-z_k-1→ 0 a.s., z_k-1-z_k-2→ 0 a.s., z_k-2-z_k-3→ 0 a.s., and the gradient estimator ∇ is variance-reduced (so 𝔼Υ_k → 0),
we conclude that
lim_k →∞𝔼Ψ _k=lim_k →∞𝔼Φ (z_k) ∈ [Φ,∞ ),
which implies claim (2).
Claim (3) holds because, by Lemma <ref>,
𝔼 (A_x^k,A_y^k )
≤ p 𝔼( z_k-z_k-1+ z_k-1-z_k-2 + z_k-2-z_k-3+ z_k-3-z_k-4 )+𝔼Γ_k-1.
We have that 𝔼 z_k-z_k-1→ 0 and 𝔼Γ_k-1→ 0. This ensures that 𝔼 (A_x^k,A_y^k )→ 0. Since (A_x^k,A_y^k ) is one element of ∂Φ (z_k), we obtain 𝔼 dist(0,∂Φ (z_k))≤𝔼 (A_x^k,A_y^k )→ 0.
To prove claim (4), suppose z^∗ =(x^∗,y^∗) is a limit point of the sequence { z_k}_k∈ℕ (a limit point must exist because we suppose the sequence { z_k}_k∈ℕ is bounded). This means there exists a subsequence {z_k_j}
satisfying lim_j→∞ z_k_j= z^∗. Furthermore, by the variance-reduced property of ∇(u_k_j-1,y_k_j-1), we have 𝔼∇_x(u_k_j-1,y_k_j-1)-∇_x H(u_k_j-1,y_k_j-1) ^2→ 0.
Because f and g are lower semicontinuous, we have
lim inf_j→∞f(x_k_j)≥ f(x^∗ ),
lim inf_j→∞g(y_k_j)≥ g(y^∗ ).
By the update rule for x_k_j,
letting x=x^∗, we have
f(x_k_j)+⟨ x _k_j,∇_x(u_k_j-1,y_k_j-1)⟩+D_ϕ_1(x_k_j,x_k_j-1)+α_1,k_j-1⟨ x_k_j,x_k_j-2-x_k_j-1⟩
+α_2,k_j-1⟨ x_k_j,x_k_j-3-x_k_j-2⟩
≤ f(x^∗)+⟨ x^∗,∇_x(u_k_j-1,y_k_j-1)⟩+D_ϕ_1(x^∗,x_k_j-1)+α_1,k_j-1⟨ x^∗,x_k_j-2-x_k_j-1⟩
+α_2,k_j-1⟨ x^∗,x_k_j-3-x_k_j-2⟩.
Taking the expectation and taking the limit j →∞,
lim sup_j→∞f(x_k_j)
≤ lim sup_j→∞f(x^∗)+⟨ x^∗-x_k_j,∇_xH(u_k_j-1,y_k_j-1)⟩+⟨ x^∗-x_k_j,∇_x(u_k_j-1,y_k_j-1)
-∇_xH(u_k_j-1,y_k_j-1)⟩+ϕ_1(x^∗)-ϕ_1(x_k_j)+⟨∇ϕ_1(x_k_j-1),x^∗-x_k_j-1⟩
+α_1,k_j-1⟨ x^∗-x_k_j,x_k_j-2-x_k_j-1⟩+α_2,k_j-1⟨ x^∗-x_k_j,x_k_j-3-x_k_j-2⟩.
The second term on the right goes to zero because x_k_j→ x^∗ and {∇_xH(u_k_j-1,y_k_j-1)} is bounded. The
third term is zero almost surely because it is bounded above by x^∗-x_k_j^2, and ∇_x(u_k_j-1,y_k_j-1)-∇_xH(u_k_j-1,y_k_j-1) → 0 a.s. Since ϕ_1 is differentiable, lim sup_j→∞f(x_k_j)≤ f(x^∗ ) a.s., which, together with (<ref>), implies that lim_j→∞f(x_k_j)= f(x^∗) a.s. Similarly, we have
lim_j→∞g(y_k_j)= g(y^∗) a.s., and hence
lim_j→∞Φ (x_k_j,y_k_j)=Φ (x^∗ ,y^∗) a.s.
Claim (3) ensures that 𝔼 dist(0,∂Φ (z_k)) → 0. Combining (<ref>) and the fact that the subdifferential of Φ is closed, we have 𝔼 dist(0,∂Φ (z^∗)) = 0.
Claims (5) and (6) hold for any sequence satisfying z_k-z_k-1→ 0 a.s. (this fact is used in the same context in <cit.>)
Finally, we must show that Φ has constant expectation over Ω. From claim (2), we have 𝔼Φ (z_k)→Φ ^∗, which implies 𝔼Φ (z_k_j)→Φ ^∗ for every subsequence { z_k_j}_j∈ℕ converging to some z^∗∈Ω. In the proof of claim (4), we show that Φ (z_k_j)→Φ(z ^∗) a.s., so 𝔼Φ (z^∗)= Φ ^∗ for all z^∗∈Ω.
The following lemma is analogous to the uniformized Kurdyka–Łojasiewicz property <cit.>. It is a slight generalization of the KŁ property showing that z_k eventually enters a region of z̃ for some z̃ satisfying Φ (z̃ )= Φ(z ^∗), and in this region, the KŁ inequality holds.
Assume that the conditions of Lemma <ref> hold and that z_k is not a critical point of Φ after a finite number of iterations. Let Φ be a semialgebraic function with KŁ exponent ϑ. Then there exist an index m and a desingularizing function φ so that the following bound holds:
φ'(𝔼 [Φ (z_k)-Φ_k ^∗])𝔼 dist(0,∂Φ (z_k))≥ 1, ∀ k>m,
where Φ_k ^∗ is a nondecreasing sequence converging to 𝔼Φ (z^∗) for all z^∗∈Ω.
The proof is almost the same as that of Lemma 4.5 in <cit.>. We omit the proof here. We now show that the iterates of Algorithm <ref> have finite length in expectation.
(finite length)
Assume that the conditions of Lemma <ref> hold and Φ is a semialgebraic function with KŁ exponent ϑ∈ [0,1). Let {z_k}_k∈ℕ be a bounded sequence, which is generated by Algorithm <ref> with variance-reduced gradient estimator.
(i) Either z_k is a critical point after a finite number of iterations or { z_k}_k∈ℕ satisfies the finite-length property in expectation:
∑_k=0^∞𝔼 z_k+1-z_k<∞,
and there exists an integer m so that, for all i > m,
∑_k=m^i𝔼 z_k+1-z_k +∑_k=m^i𝔼 z_k-z_k-1+ ∑_k=m^i𝔼 z_k-1-z_k-2+ ∑_k=m^i𝔼 z_k-2-z_k-3
≤ √(𝔼 z_m-z_m-1^2) +√(𝔼 z_m-1-z_m-2^2)+ √(𝔼 z_m-2-z_m-3^2)+ √(𝔼 z_m-3-z_m-4^2)
+2√(s)/K_1ρ√(𝔼Υ _m-1)+K_3 _m,i+1,
where
K_1=p+2√(sV_Υ)/ρ, K_3=4K_1 /K_2, K_2=min{κ ,ϵ,Z },
p is as in Lemma <ref>, and _p,q=(𝔼[Ψ _p-Φ _p^∗ ]-𝔼[Ψ _q-Φ _q^∗ ]).
(ii) { z_k}_k∈ℕ generated by Algorithm <ref> converge to a critical point of Φ in expectation.
(i) If ϑ∈ (0,1/2), then Φ satisfies the KŁ property with exponent 1/2, so we consider only the case ϑ∈ [ 1/2,1). By Lemma <ref>, there exists a function φ_0(r)=ar^1-ϑ such that
φ_0'(𝔼[ Φ (z_k)-Φ_k ^∗])𝔼 dist(0,∂Φ (z_k))≥ 1, ∀ k>m.
Lemma <ref> provides a bound on 𝔼 dist(0,∂Φ (z_k)).
𝔼 dist(0,∂Φ (z_k)) ≤𝔼 (A_x^k,A_y^k )
≤ p𝔼 ( z_k-z_k-1+ z_k-1-z_k-2 + z_k-2-z_k-3+ z_k-3-z_k-4 )+𝔼Γ_k-1
≤ p ( √(𝔼 z_k-z_k-1^2)+√(𝔼 z_k-1-z_k-2^2) +√(𝔼 z_k-2-z_k-3^2)+√(𝔼 z_k-3-z_k-4^2) )
+√(s𝔼Υ _k-1) .
The final inequality is Jensen's inequality. Because Γ _k=∑_i=1^s v_k^i for some nonnegative random variables v_k^i, we can say 𝔼Γ _k=𝔼∑_i=1^s v_k^i≤𝔼√(s∑_i=1^s (v_k^i )^2)≤√(s𝔼Υ _k). We can bound the term √(𝔼Υ _k) using (<ref>):
√(𝔼Υ _k)
≤ √((1-ρ )𝔼Υ _k-1+V_Υ𝔼 ( z_k-z_k-1 ^2+ z_k-1-z_k-2 ^2 + z_k-2-z_k-3 ^2+ z_k-3-z_k-4 ^2 ))
≤ √((1-ρ ))√(𝔼Υ _k-1) +√(V_Υ) ( √(𝔼 z_k-z_k-1 ^2) +√(𝔼 z_k-1-z_k-2 ^2) +√(𝔼 z_k-2-z_k-3 ^2).
. +√(𝔼 z_k-3-z_k-4 ^2) )
≤ (1-ρ/2 )√(𝔼Υ _k-1) +√(V_Υ) ( √(𝔼 z_k-z_k-1 ^2) +√(𝔼 z_k-1-z_k-2 ^2) +√(𝔼 z_k-2-z_k-3 ^2).
.+√(𝔼 z_k-3-z_k-4 ^2) ).
The final inequality uses the fact that √(1-ρ) =1-ρ/2- ρ^2 /8-⋯. This implies that
√(s𝔼Υ _k-1)
≤ 2√(s)/ρ ( √(𝔼Υ _k-1)-√(𝔼Υ _k) ) +2√(sV_Υ)/ρ ( √(𝔼 z_k-z_k-1 ^2) +√(𝔼 z_k-1-z_k-2 ^2) .
.+√(𝔼 z_k-2-z_k-3 ^2) +√(𝔼 z_k-3-z_k-4 ^2) ).
Then, from (<ref>) and (<ref>), we have
𝔼 dist(0,∂Φ (z_k))
≤ ( p+2√(sV_Υ)/ρ ) ( √(𝔼 z_k-z_k-1 ^2) +√(𝔼 z_k-1-z_k-2 ^2) +√(𝔼 z_k-2-z_k-3 ^2) .
.+√(𝔼 z_k-3-z_k-4 ^2) ) +2√(s)/ρ (√(𝔼Υ _k-1)-√(𝔼Υ _k) )
= K_1 ( √(𝔼 z_k-z_k-1 ^2) +√(𝔼 z_k-1-z_k-2 ^2) +√(𝔼 z_k-2-z_k-3 ^2)+√(𝔼 z_k-3-z_k-4 ^2) )
+2√(s)/ρ (√(𝔼Υ _k-1)-√(𝔼Υ _k) ),
where K_1=p+2√(sV_Υ)/ρ. Define C_k to be the right side of this inequality:
C_k= K_1√(𝔼 z_k-z_k-1^2)+ K_1√(𝔼 z_k-1-z_k-2^2) + K_1√(𝔼 z_k-2-z_k-3^2)
+ K_1√(𝔼 z_k-3-z_k-4^2)+2√(s)/ρ (√(𝔼Υ _k-1)-√(𝔼Υ _k) ).
We then have
φ_0'(𝔼 [Φ (z_k)-Φ_k ^∗])C_k≥ 1, ∀ k>m.
By the definition of φ_0, this is equivalent to
a(1-ϑ )C_k/(𝔼 [Φ (z_k)-Φ_k ^∗])^ϑ≥ 1, ∀ k>m.
We would like the inequality above to hold for Ψ_k rather than Φ (z_k). We replace 𝔼Φ (z_k) with
𝔼Ψ_k by introducing a term of 𝒪 ( ( 𝔼 [ z_k-z_k-1^2+ z_k-1-z_k-2^2+ z_k-2-z_k-3^2+Υ _k ] )^ϑ ) in the denominator. We show that inequality (<ref>) still holds after this adjustment because these terms are small compared to C_k. Indeed, the quantity
C_k≥ c_1 ( √(𝔼 z_k-z_k-1^2)+ √(𝔼 z_k-1-z_k-2^2) +√(𝔼 z_k-2-z_k-3^2) .
.+√(𝔼 z_k-3-z_k-4^2)+√(𝔼Υ _k-1) )
for some constant c_1>0. And because 𝔼 z_k-z_k-1^2 → 0, 𝔼Υ_k → 0, and ϑ >1/2, there exists an index m and constants c_2,c_3>0 such that
(𝔼[Ψ _k-Φ (z_k) ] ) ^ϑ
= ( 𝔼 [1/LλρΥ _k+ (V_1+V_Υ /ρ/Lλ+α_1+α_2/2+2L(γ_1^2+γ_2^2)/λ+3Z )z_k-z_k-1^2+ ( V_1+V_Υ /ρ/Lλ . . .
...+α_2/2+2Lγ_2^2/λ+2Z )z_k-1-z_k-2^2+ ( V_1+V_Υ /ρ/Lλ +Z ) z_k-2-z_k-3^2 ] ) ^ϑ
≤ c_2 ( ( 𝔼 [ Υ _k-1+ z_k-z_k-1 ^2 + z_k-1-z_k-2 ^2+ z_k-2-z_k-3 ^2+ z_k-3-z_k-4 ^2 ] )^ϑ )
≤ c_3C_k, ∀ k>m.
The first inequality uses (<ref>). Because the terms above are small compared to C_k, there exists a constant d such that c_3<d<+∞ and
ad(1-ϑ )C_k/(𝔼[Φ (z_k)-Φ_k ^∗])^ϑ+ (𝔼[Ψ _k-Φ (z_k) ] ) ^ϑ≥ 1, ∀ k>m.
For ϑ∈ [ 1/2,1), using the fact that (a+b)^ϑ≤ a^ϑ +b^ϑ for all a, b ≥ 0, we have
ad(1-ϑ )C_k/ (𝔼[Ψ _k-Φ_k ^∗ ] ) ^ϑ =ad(1-ϑ )C_k/ (𝔼[Φ (z_k)-Φ_k ^∗+Ψ _k-Φ (z_k) ] ) ^ϑ
≥ad(1-ϑ )C_k/ (𝔼[Φ (z_k)-Φ_k ^∗] ) ^ϑ+ (𝔼[Ψ _k-Φ (z_k) ] ) ^ϑ
≥ 1, ∀ k>m.
Therefore, with φ (r)=adr^1-ϑ,
φ'(𝔼[Ψ _k-Φ_k ^∗])C_k≥ 1, ∀ k>m.
By the concavity of φ,
φ(𝔼[Ψ _k-Φ_k ^∗])-φ(𝔼[Ψ _k+1-Φ_k+1 ^∗]) ≥φ'(𝔼[Ψ _k-Φ_k ^∗])(𝔼[Ψ _k-Φ_k ^∗+Φ_k+1 ^∗-Ψ _k+1])
≥φ'(𝔼[Ψ _k-Φ_k ^∗])(𝔼[Ψ _k-Ψ _k+1]),
where the last inequality follows from the fact that Φ_k ^∗ is nondecreasing. With _p,q=φ(𝔼[Ψ _p-Φ _p^∗ ])-φ(𝔼[Ψ _q-Φ _q^∗ ]), we have shown
_k,k+1C_k≥𝔼[Ψ _k-Ψ _k+1], ∀ k>m.
Using Lemma <ref>, we can bound 𝔼[Ψ _k-Ψ _k+1] below by both 𝔼 z_k+1-z_k^2, 𝔼 z_k-z_k-1^2, 𝔼 z_k-1-z_k-2^2 and 𝔼 z_k-2-z_k-3^2. Specifically,
_k,k+1C_k ≥κ𝔼 z_k+1-z_k^2+ϵ𝔼 z_k-z_k-1^2+ϵ𝔼 z_k-1-z_k-2^2+Z𝔼 z_k-2-z_k-3^2
≥ K_2𝔼 z_k+1-z_k^2+K_2𝔼 z_k-z_k-1^2+K_2𝔼 z_k-1-z_k-2^2+K_2𝔼 z_k-2-z_k-3^2,
where K_2=min{κ ,ϵ,Z }>0, and κ, λ, ϵ and Z are set as in Lemma <ref>. We begin with the first of these inequalities. Applying Young's inequality to (<ref>) yields
√(𝔼 z_k+1-z_k^2) +√(𝔼 z_k-z_k-1^2) +√(𝔼 z_k-1-z_k-2^2)+√(𝔼 z_k-2-z_k-3^2)
≤ 2√(𝔼 z_k+1-z_k^2+𝔼 z_k-z_k-1^2+𝔼 z_k-1-z_k-2^2+𝔼 z_k-2-z_k-3^2)
≤ 2√(K_2^-1C_k _k,k+1)≤C_k/2K_1+2K_1 _k,k+1/K_2
≤ 1/2√(𝔼 z_k-z_k-1^2) +1/2√(𝔼 z_k-1-z_k-2^2) +1/2√(𝔼 z_k-2-z_k-3^2)+1/2√(𝔼 z_k-3-z_k-4^2)
+√(s)/K_1ρ (√(𝔼Υ _k-1)-√(𝔼Υ _k) )+2K_1 _k,k+1/K_2.
Summing inequality (<ref>) from k=m to k=i and setting
T_m^i= ∑_k=m^i√(𝔼 z_k+1-z_k^2)+∑_k=m^i√(𝔼 z_k-z_k-1^2) +∑_k=m^i√(𝔼 z_k-1-z_k-2^2)
+∑_k=m^i√(𝔼 z_k-2-z_k-3^2).
Then
T_m^i≤1/2T_m-1^i-1+√(s)/K_1ρ (√(𝔼Υ _m-1)-√(𝔼Υ _i) )+2K_1/K_2 _m,i+1,
which implies that
1/2T_m^i≤ 1/2√(𝔼 z_m-z_m-1^2)+1/2√(𝔼 z_m-1-z_m-2^2)+1/2√(𝔼 z_m-2-z_m-3^2)
+1/2√(𝔼 z_m-3-z_m-4^2)+√(s)/K_1ρ (√(𝔼Υ _m-1)-√(𝔼Υ _i) )+2K_1/K_2 _m,i+1.
Dropping the nonpositive term -√(𝔼Υ _i), we see that
T_m^i≤ √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2)+ √(𝔼 z_m-2-z_m-3^2)
+ √(𝔼 z_m-3-z_m-4^2)+2√(s)/K_1ρ√(𝔼Υ _m-1)+K_3 _m,i+1.
where K_3=4K_1/K_2. Applying Jensen's inequality to the terms on the left gives
∑_k=m^i𝔼 z_k+1-z_k +∑_k=m^i𝔼 z_k-z_k-1+ ∑_k=m^i𝔼 z_k-1-z_k-2+ ∑_k=m^i𝔼 z_k-2-z_k-3≤ T_m^i
≤ √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2)+√(𝔼 z_m-2-z_m-3^2)+√(𝔼 z_m-3-z_m-4^2)
+2√(s)/K_1ρ√(𝔼Υ _m-1)+K_3 _m,i+1.
The term lim_i →∞ _m,i+1 is bounded because 𝔼Ψ_k is bounded due to Lemma <ref>. Letting i →∞, we prove the assertion.
(ii) An immediate consequence of claim (i) is that the sequence { z_k}_k∈ℕ converges in expectation to a critical point. This is because, for any p,q ∈ℕ with p ≥ q, 𝔼 z_p-z_q=𝔼∑_k=q^p-1( z_k+1-z_k) ≤∑_k=q^p-1𝔼 z_k+1-z_k, and the finite length property implies this final sum converges to zero. This proves claim (ii).
Assume that the conditions of Lemma <ref> hold and Φ is a semialgebraic function with KŁ exponent ϑ∈ [0, 1). Let {z_k}_k∈ℕ be a bounded sequence, which is generated by Algorithm <ref> with variance-reduced gradient estimator. The following convergence rates hold:
(i) If ϑ∈ (0, 1/2 ], then there exist d_1 > 0 and τ∈ [1 - ρ,1) such that 𝔼 z_k-z^∗≤ d_1τ ^k.
(ii) If ϑ∈ (1/2 ,1), then there exists a constant d_2 > 0 such that 𝔼 z_k-z^∗≤ d_2k ^-1-ϑ/2ϑ -1.
(iii) If ϑ = 0, then there exists an m ∈ℕ such that 𝔼Φ (z_k)=𝔼Φ (z^∗ ) for all k ≥ m.
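As a concrete illustration of case (ii), take the hypothetical value ϑ = 3/4: the exponent is (1-ϑ)/(2ϑ -1)=(1/4)/(1/2)=1/2, so the bound reads 𝔼 z_k-z^∗≤ d_2 k^-1/2. As ϑ→ 1 the guaranteed rate becomes arbitrarily slow, whereas ϑ∈ (0,1/2] falls into the linear-rate case (i).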
As in the proof of Theorem <ref>, if ϑ∈ (0, 1/2 ), then Φ satisfies the KŁ property with exponent 1/2, so we consider only the case ϑ∈ [1/2 ,1).
Let
T_m= ∑_k=m^∞√(𝔼 z_k+1-z_k^2)+∑_k=m^∞√(𝔼 z_k-z_k-1^2) +∑_k=m^∞√(𝔼 z_k-1-z_k-2^2)
+∑_k=m^∞√(𝔼 z_k-2-z_k-3^2).
Substituting the desingularizing function φ (r)=ar^1-ϑ into (<ref>), let i→∞, then we have
T_m≤ √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2)+√(𝔼 z_m-2-z_m-3^2)+√(𝔼 z_m-3-z_m-4^2)
+2√(s)/K_1ρ√(𝔼Υ _m-1)+aK_3(𝔼[Ψ _m-Φ _m^∗ ])^1-ϑ.
Because Ψ _m=Φ(z_m)+𝒪( z_m-z_m-1^2+ z_m-1-z_m-2^2+ z_m-2-z_m-3^2+Υ _m), we can rewrite the final term as Φ(z_m)-Φ _m^∗.
(𝔼[Ψ _m-Φ _m^∗ ])^1-ϑ
= (𝔼 [Φ(z_m)-Φ _m^∗+ 1/LλρΥ _k+ (V_1+V_Υ /ρ/Lλ+α_1+α_2/2+2L(γ_1^2+γ_2^2)/λ+3Z )z_m-z_m-1^2 . .
..+ ( V_1+V_Υ /ρ/Lλ+α_2/2+2Lγ_2^2/λ+2Z )z_m-1-z_m-2^2+ ( V_1+V_Υ /ρ/Lλ +Z ) . .
.. z_m-2-z_m-3^2 ] )^1-ϑ
(1)≤ (𝔼[Φ(z_m)-Φ _m^∗] )^1-ϑ+ (1/Lλρ𝔼Υ _m )^1-ϑ+ ( ( V_1+V_Υ /ρ/Lλ+α_1+α_2/2+2L(γ_1^2+γ_2^2)/λ+3Z ) .
. 𝔼z_m-z_m-1^2 )^1-ϑ+ (( V_1+V_Υ /ρ/Lλ+α_2/2+2Lγ_2^2/λ+2Z ) 𝔼z_m-1-z_m-2^2 )^1-ϑ
+ (( V_1+V_Υ /ρ/Lλ +Z ) 𝔼z_m-2-z_m-3^2 )^1-ϑ.
Inequality (1) is due to the fact that (a+b) ^1-ϑ≤ a^1-ϑ+b^1-ϑ.
Applying the KŁ inequality (<ref>),
aK_3 (𝔼[Φ(z_m)-Φ _m^∗] )^1-ϑ≤ aK_4 (𝔼ξ _m )^1-ϑ/ϑ
for all ξ _m∈∂Φ (z_m) and we have absorbed the constant C into K_4. Inequality (<ref>) provides a bound on the norm of the subgradient:
(𝔼ξ _m )^1-ϑ/ϑ≤ ( p ( √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2) +√(𝔼 z_m-2-z_m-3^2)..
..+√(𝔼 z_m-3-z_m-4^2) )+√(s𝔼Υ _m-1) )^1-ϑ/ϑ.
Let
Θ _m= p ( √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2) +√(𝔼 z_m-2-z_m-3^2).
.+√(𝔼 z_m-3-z_m-4^2) )+√(s𝔼Υ _m-1).
Therefore, it follows from (<ref>)-(<ref>) that
T_m≤ √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2)+√(𝔼 z_m-2-z_m-3^2)+√(𝔼 z_m-3-z_m-4^2)
+2√(s)/K_1ρ√(𝔼Υ _m-1)+aK_4Θ _m^1-ϑ/ϑ+aK_3 (1/Lλρ𝔼Υ _m )^1-ϑ
+aK_3 ( ( V_1+V_Υ /ρ/Lλ+α_1+α_2/2+2L(γ_1^2+γ_2^2)/λ+3Z )𝔼z_m-z_m-1^2 )^1-ϑ
+aK_3 (( V_1+V_Υ /ρ/Lλ+α_2/2+2Lγ_2^2/λ+2Z ) 𝔼z_m-1-z_m-2^2 )^1-ϑ
+aK_3 (( V_1+V_Υ /ρ/Lλ +Z ) 𝔼z_m-2-z_m-3^2 )^1-ϑ.
(i) If ϑ = 1/2, then (𝔼ξ _m )^1-ϑ/ϑ=𝔼ξ _m. Equation (<ref>) then gives
T_m≤ √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2)+√(𝔼 z_m-2-z_m-3^2)+√(𝔼 z_m-3-z_m-4^2)
+2√(s)/K_1ρ√(𝔼Υ _m-1)+aK_4 ( p ( √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2) . .
..+√(𝔼 z_m-2-z_m-3^2)+√(𝔼 z_m-3-z_m-4^2) )+√(s𝔼Υ _m-1) )+aK_3√(1/Lλρ)√(𝔼Υ _m)
+ (aK_3√(V_1+V_Υ /ρ/Lλ+α_1+α_2/2+2L(γ_1^2+γ_2^2)/λ+3Z ) )√(𝔼z_m-z_m-1^2)
+ (aK_3√(V_1+V_Υ /ρ/Lλ+α_2/2+2Lγ_2^2/λ+2Z ) )√(𝔼z_m-1-z_m-2^2)
+ (aK_3√(V_1+V_Υ /ρ/Lλ +Z ) )√(𝔼z_m-2-z_m-3^2)
≤ ( 1+aK_5 ( p+√(V_1+V_Υ /ρ/Lλ+α_1+α_2/2+2L(γ_1^2+γ_2^2)/λ+3Z ) ) ) ( √(𝔼 z_m-z_m-1^2) .
.+√(𝔼 z_m-1-z_m-2^2) +√(𝔼 z_m-2-z_m-3^2)+√(𝔼 z_m-3-z_m-4^2) )
+ ( 2√(s)/K_1ρ+aK_5 √(s) )√(𝔼Υ _m-1) +aK_5√(1/Lλρ)√(𝔼Υ _m),
where K_5=max{ K_3,K_4 }. Using (<ref>), we have that, for any constant c > 0,
0≤ -c√(𝔼Υ _k)+c(1-ρ/2 )√(𝔼Υ _k-1) +c√(V_Υ) ( √(𝔼 z_k-z_k-1 ^2) +√(𝔼 z_k-1-z_k-2 ^2) .
.+√(𝔼 z_k-2-z_k-3 ^2) +√(𝔼 z_k-3-z_k-4 ^2) ).
Combining this inequality with (<ref>),
T_m≤ ( 1+aK_5 ( p+√(V_1+V_Υ /ρ/Lλ+α_1+α_2/2+2L(γ_1^2+γ_2^2)/λ+3Z )+c√(V_Υ) ) ) ( √(𝔼 z_m-z_m-1^2) .
.+√(𝔼 z_m-1-z_m-2^2)+√(𝔼 z_m-2-z_m-3^2)+√(𝔼 z_m-3-z_m-4^2) )
+c ( 1-ρ/2+2√(s)/K_1ρ c +aK_5 √(s)/c )√(𝔼Υ _m-1) -c (1-aK_5 /c√(1/Lλρ) )√(𝔼Υ _m).
Defining A=1+aK_5 ( p+√(V_1+V_Υ /ρ/Lλ+α_1+α_2/2+2L(γ_1^2+γ_2^2)/λ+3Z )+c√(V_Υ) ), we have shown
T_m+c (1-aK_5 /c√(1/Lλρ) )√(𝔼Υ _m)
≤ A (T_m-1-T_m )+c ( 1-ρ/2+2√(s)/K_1ρ c +aK_5 √(s)/c )√(𝔼Υ _m-1).
Then, we get
(1+A)T_m+c (1-aK_5 /c√(1/Lλρ) )√(𝔼Υ _m)
≤ AT_m-1+c ( 1-ρ/2+2√(s)/K_1ρ c +aK_5 √(s)/c )√(𝔼Υ _m-1).
This implies
T_m+√(𝔼Υ _m)
≤ max{A/1+A, ( 1-ρ/2+2√(s)/K_1ρ c +aK_5 √(s)/c ) (1-aK_5 /c√(1/Lλρ) )^-1} (T_m-1+√(𝔼Υ _m-1) ).
For large c, the second coefficient in the above expression approaches 1-ρ/2, so there exists τ∈ [1 - ρ,1) such that
∑_k=m^∞√(𝔼 z_k-z_k-1^2)≤τ ^k (T_0+√(𝔼Υ _0) ) ≤ d_1τ ^k
for some constant d_1. Then, using the fact that
𝔼 z_m-z^∗=𝔼∑_k=m+1^∞ (z_k-z_k-1) ≤∑_k=m^∞𝔼 z_k-z_k-1, we prove claim (i).
(ii) Suppose ϑ∈ (1/2 ,1). Each term on the right side of (<ref>) converges to zero, but at different rates. Because
Θ_m = 𝒪 ( √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2) +√(𝔼 z_m-2-z_m-3^2).
.+√(𝔼 z_m-3-z_m-4^2)+√(s𝔼Υ _m-1) ),
and ϑ satisfies 1-ϑ/ϑ< 1, the term Θ _m^1-ϑ/ϑ dominates the first five terms on the right side of (<ref>) for large m. Also, because 1-ϑ/2ϑ< 1-ϑ, Θ _m^1-ϑ/ϑ dominates the final four terms as well. Combining these facts, there exists a natural number M_1 such that for all
m ≥ M_1,
T_m≤ PΘ _m
for some constant P>(aK_3)^ϑ/1-ϑ. The bound of (<ref>) implies
2√(s𝔼Υ _m-1)
≤ 4√(s)/ρ ( √(𝔼Υ _m-1)-√(𝔼Υ _m) +√(V_Υ) ( √(𝔼 z_m-z_m-1 ^2) +√(𝔼 z_m-1-z_m-2 ^2) . .
. .+√(𝔼 z_m-2-z_m-3 ^2) +√(𝔼 z_m-3-z_m-4 ^2) ) ).
Therefore,
Θ_m = p ( √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2) +√(𝔼 z_m-2-z_m-3^2) .
.+√(𝔼 z_m-3-z_m-4^2) )+ (2√(s𝔼Υ _m-1)-√(s𝔼Υ _m-1) )
≤ ( p+ 4√(sV_Υ)/ρ ) ( √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2) +√(𝔼 z_m-2-z_m-3^2) .
.+√(𝔼 z_m-3-z_m-4^2) )+4√(s)/ρ (√(𝔼Υ _m-1)-√(𝔼Υ _m) )-√(s𝔼Υ _m-1).
Furthermore, because ϑ/1-ϑ>1 and 𝔼Υ _m→ 0, for large enough m, we have ( √(𝔼Υ _m) )^ϑ/1-ϑ≪√(𝔼Υ _m). This ensures that there exists a natural number M_2 such that for every m ≥ M_2,
(4√(s) (1-ρ /4)/ρ (p+4√(sV_Υ) /ρ )√(𝔼Υ _m) )^ϑ/1-ϑ≤ P√(s𝔼Υ _m) .
The constant appearing on the left was chosen to simplify later arguments. Therefore, (<ref>) implies
( T_m+4√(s) (1-ρ /4)/ρ (p+4√(sV_Υ) /ρ )√(𝔼Υ _m) )^ϑ/ 1-ϑ
(1)≤ 2^ϑ/1-ϑ/2 ( T_m )^ϑ/ 1-ϑ+2^ϑ/1-ϑ/2 ( 4√(s) (1-ρ /4)/ρ (p+4√(sV_Υ) /ρ )√(𝔼Υ _m) )^ϑ/ 1-ϑ
(2)≤2^ϑ/1-ϑ/2 ( T_m )^ϑ/ 1-ϑ+2^ϑ/1-ϑ/2 ( P√(s𝔼Υ _m) )
(3)≤ 2^ϑ/1-ϑ/2 ( P ( p+ 4√(sV_Υ)/ρ ) ( √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2) +√(𝔼 z_m-2-z_m-3^2) . .
..+√(𝔼 z_m-3-z_m-4^2) )+4√(s)P /ρ (√(𝔼Υ _m-1)-√(𝔼Υ _m) )-P√(s𝔼Υ _m-1) )+2^ϑ/1-ϑ/2 ( P√(s𝔼Υ _m) )
≤ 2^ϑ/1-ϑ/2 ( P ( p+ 4√(sV_Υ)/ρ ) ( √(𝔼 z_m-z_m-1^2)+√(𝔼 z_m-1-z_m-2^2) +√(𝔼 z_m-2-z_m-3^2) . .
..+√(𝔼 z_m-3-z_m-4^2) )+4√(s)P(1-ρ/4) /ρ (√(𝔼Υ _m-1)-√(𝔼Υ _m) ) ).
Here, (1) follows by convexity of the function x^ϑ/1-ϑ for ϑ∈ [1/2, 1) and x ≥ 0, (2) is (<ref>), and (3) is (<ref>) combined with (<ref>). We absorb the constant 2^ϑ/1-ϑ/2 into P. Define
S_m=T_m+4√(s) (1-ρ /4)/ρ (p+4√(sV_Υ) /ρ )√(𝔼Υ _m).
S_m is bounded for all m because ∑_k=m^∞√(𝔼 z_k+1-z_k^2) is bounded by (<ref>). Hence, we have shown
S_m^ϑ/1-ϑ≤ P ( p+ 4√(sV_Υ)/ρ )(S_m-1-S_m).
The rest of the proof is almost the same as that in <cit.>, so we omit it here.
(iii) When ϑ = 0, the KŁ property (<ref>) implies that exactly one of the following two scenarios holds: either 𝔼Φ (z_k)≠Φ _k^∗ and
0<C≤𝔼ξ _k , ∀ξ _k∈∂Φ (z_k)
or 𝔼Φ (z_k)= Φ _k^∗. We show that the above inequality can hold only for a finite number of iterations.
Using the subgradient bound (<ref>), the first scenario implies
C^2≤ ( 𝔼ξ _k )^2
≤ ( p (𝔼 z_k-z_k-1+𝔼 z_k-1-z_k-2 +𝔼 z_k-2-z_k-3+𝔼 z_k-3-z_k-4 )+Γ_k-1 )^2
≤ 5p^2 ( 𝔼 z_k-z_k-1 ) ^2+5p^2 ( 𝔼 z_k-1-z_k-2 ) ^2+5p^2 ( 𝔼 z_k-2-z_k-3 ) ^2
+5p^2 ( 𝔼 z_k-3-z_k-4 ) ^2+5(𝔼Γ_k-1)^2
≤ 5p^2 ( 𝔼 z_k-z_k-1 ) ^2+5p^2 ( 𝔼 z_k-1-z_k-2 ) ^2+5p^2 ( 𝔼 z_k-2-z_k-3 ) ^2
+5p^2 ( 𝔼 z_k-3-z_k-4 ) ^2+5s𝔼Υ_k-1,
where we have used the inequality (a_1+a_2+⋯ +a_s)^2≤ s (a_1^2+a_2^2+⋯ +a_s^2) and Jensen's inequality. Applying this inequality to the decrease of Ψ_ k (<ref>), we obtain
𝔼_kΨ _k
≤ 𝔼_kΨ _k-1-κ z_k+1-z_k^2-ϵ z_k-z_k-1^2- ϵ z_k-1-z_k-2^2- Z z_k-2-z_k-3^2
≤ 𝔼_kΨ _k-1-C^2+𝒪 ( z_k+1-z_k^2 )+𝒪 ( z_k-z_k-1^2 ) +𝒪 ( z_k-1-z_k-2^2 )
+𝒪 ( z_k-2-z_k-3^2 )+𝒪 ( 𝔼Υ_k-1 )
for some constant C^2. Because the final five terms go to zero as k →∞, there exists an index M_4 so that the sum of these five terms is bounded above by C^2/2 for all k ≥ M_4. Therefore,
𝔼_kΨ _k≤𝔼_kΨ _k-1-C^2/2, ∀ k≥ M_4.
Because Ψ_k is bounded below for all k, this inequality can only hold for N < ∞ steps. After N steps, it is no longer possible for the bound (<ref>) to hold, so it must be that 𝔼Φ (z_k)= Φ _k^∗. Because Φ _k^∗<Φ(z^∗), Φ _k^∗<𝔼Φ (z_k), and both 𝔼Φ (z_k), Φ _k^∗ converge to 𝔼Φ(z^∗), we must have Φ _k^∗=𝔼Φ (z_k)=𝔼Φ(z^∗).
§ NUMERICAL EXPERIMENTS
In this section, to demonstrate the advantages of STiBPALM (Algorithm <ref>), we present a numerical study on the practical performance of the proposed STiBPALM with three different stochastic gradient estimators, i.e., the SGD estimator <cit.> (STiBPALM-SGD), the SAGA gradient estimator <cit.> (STiBPALM-SAGA), and the SARAH gradient estimator <cit.> (STiBPALM-SARAH), compared with the PALM <cit.>, iPALM <cit.>, TiPALM <cit.>, SPRING <cit.> and SiPALM <cit.> algorithms. We refer to SPRING with the SGD, SAGA, and SARAH gradient estimators as SPRING-SGD, SPRING-SAGA, and SPRING-SARAH, and to SiPALM with these estimators as SiPALM-SGD, SiPALM-SAGA, and SiPALM-SARAH, respectively. Two applications are considered for comparison: sparse nonnegative matrix factorization (S-NMF) and blind image-deblurring (BID).
Since the proposed algorithm is based on stochastic gradient estimators, we report objective values averaged over 10 independent runs for all algorithms. The initial point is the same for all algorithms. In addition, we use the step-sizes suggested in <cit.> for PALM and in <cit.> for iPALM, respectively, and, for simplicity, the same step-size based on <cit.> for all stochastic algorithms.
§.§ Sparse nonnegative matrix factorization
Given a matrix A, the sparse nonnegative matrix factorization (S-NMF) <cit.> problem can be formulated as the following model:
min_X,Y{η/2 A-XY _F^2 : X,Y≥ 0, X_i _0≤ s, i=1,2,… ,r}.
In dictionary learning and sparse coding, X is called the learned dictionary with coefficients Y. In this formulation, the sparsity constraint on X forces 75% of its entries to be 0.
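For illustration only, the following minimal Python sketch computes the Euclidean projection onto the constraint set of this model, applied column-wise (clip negative entries to zero, then keep the s largest entries of each column); the subproblem solvers actually used in the experiments are those detailed in <cit.>, and the function name is ours.

import numpy as np

def project_sparse_nonneg(V, s):
    # Euclidean projection of each column of V onto {x >= 0, ||x||_0 <= s}:
    # clip negatives to zero, then keep only the s largest remaining entries.
    X = np.maximum(V, 0.0)
    m = X.shape[0]
    if s < m:
        # indices of the (m - s) smallest entries in each column are set to zero
        idx = np.argpartition(X, m - s, axis=0)[:m - s, :]
        np.put_along_axis(X, idx, 0.0, axis=0)
    return X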
We use the extended Yale-B dataset and the ORL dataset, which are standard facial recognition benchmarks consisting of human face images[http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html.]. For solving the S-NMF problem (<ref>), the details of the X-subproblems and Y-subproblems are given in <cit.>. The extended Yale-B dataset contains 2414 cropped images of size 32 × 32, while the ORL dataset contains 400 images of size 64 × 64; see Figure <ref>. In the experiments, we extract 49 sparse basis images for the Yale-B dataset and 25 sparse basis images for the ORL dataset. In each iteration of the stochastic algorithms, we randomly subsample 5% of the full batch as a mini-batch. For the SARAH gradient estimator, we set p=1/20 here.
In STiBPALM, let ϕ_1(X)=θ_1 /2 X^2 and ϕ_2(Y)=θ_2 /2Y ^2. In the numerical experiments, we choose η=3 and compute θ_1 and θ_2 as the largest eigenvalues of η YY^T and η X^TX at the k-th iteration, respectively. We choose α_1k=β _1k=γ_1k=μ_1k=(k-1)/(k+2) and α_2k=β _2k=γ_2k=μ_2k=(k-1)/(k+2) in TiPALM and STiBPALM, and α_1k=β_1k=γ_1k=μ_1k=(k-1)/(k+2) in iPALM and SiPALM. We use BTiPALM and BSTiPALM to denote TiPALM and STiBPALM with ϕ_1(X)=θ_1^2 /4 X^4 and ϕ_2(Y)=θ_2 /2Y ^2, respectively. We refer to BSTiPALM with the SGD, SAGA, and SARAH gradient estimators as BSTiPALM-SGD, BSTiPALM-SAGA, and BSTiPALM-SARAH, respectively.
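To make the update concrete, the sketch below shows one X-step of STiBPALM for this S-NMF setting with the quadratic kernel ϕ_1(X)=θ_1/2 X^2, under the assumption that the extrapolation point takes the usual two-step inertial form u_k = x_k + γ_1k(x_k-x_k-1) + γ_2k(x_k-1-x_k-2); with this kernel the Bregman proximal subproblem reduces to projecting a gradient-type point onto the constraint set (using project_sparse_nonneg from the sketch above). The helper grad_est is a placeholder for any of the SGD/SAGA/SARAH estimators.

def stibpalm_x_update(x_hist, Y, grad_est, theta1, alpha1, alpha2, gamma1, gamma2, s):
    # x_hist = (x_k, x_km1, x_km2): the three most recent X iterates (numpy arrays).
    # grad_est(U, Y): stochastic estimate of grad_X H(U, Y) = eta * (U @ Y - A) @ Y.T.
    x_k, x_km1, x_km2 = x_hist
    u_k = x_k + gamma1 * (x_k - x_km1) + gamma2 * (x_km1 - x_km2)   # extrapolation point
    g = grad_est(u_k, Y)                                            # variance-reduced gradient
    w = alpha1 * (x_km1 - x_k) + alpha2 * (x_km2 - x_km1)           # inertial linear terms
    # minimizing <x, g + w> + theta1/2 * ||x - x_k||^2 over the constraint set gives:
    return project_sparse_nonneg(x_k - (g + w) / theta1, s)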
In Figure <ref> and Figure <ref>, we report the numerical results for the Yale-B dataset. Similar results for the ORL dataset are plotted in Figure <ref> and Figure <ref>. One can observe from these four figures that STiBPALM attains slightly lower objective values than the other algorithms within almost the same computation time. In addition, STiBPALM performs better than the SPRING and SiPALM stochastic algorithms as the number of epochs increases.
The stochastic algorithms improve the numerical results compared with the corresponding deterministic methods. Furthermore, compared with the stochastic gradient estimator without variance reduction (SGD), the variance-reduced estimators (SAGA, SARAH) yield better numerical results.
The numerical results with different Bregman distances on the Yale-B and ORL datasets are reported in Figure <ref> and Figure <ref>, respectively. We observe that the BSTiPALM algorithm obtains better numerical results than the STiBPALM algorithm, and the SARAH gradient estimator achieves the best performance as the number of epochs increases.
We also compare STiBPALM with SGD, SAGA, and SARAH for different sparsity settings (the value of s). The results of the basis images are shown in Figure <ref>. One can observe from Figure <ref> that for smaller values of s, the four algorithms lead to more compact representations. This might improve the generalization capabilities of the representation.
§.§ Blind image-deblurring
Let A be a blurred image. The blind deconvolution problem is given by
min_X,Y{1/2 A-X⊙ Y _F^2+η∑_r=1^2d R([D(X)]_r) : 0≤ X≤ 1, 0≤ Y≤ 1, Y _1≤ 1}.
In the numerical experiments, we choose R(v)= log(1 + σ v^2) as in <cit.>, where σ=10^3 and η=5× 10^-5.
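For reference, a short sketch of this regularizer and its derivative is given below; the interpretation of D(X) as the horizontal and vertical finite differences of the image is our assumption, and the constants match the values stated above.

import numpy as np

def smoothed_log_prior(X, sigma=1e3, eta=5e-5):
    # eta * sum_r R([D(X)]_r) with R(v) = log(1 + sigma * v^2),
    # where D(X) is taken to collect horizontal and vertical finite differences.
    dx = np.diff(X, axis=1)
    dy = np.diff(X, axis=0)
    return eta * (np.log1p(sigma * dx**2).sum() + np.log1p(sigma * dy**2).sum())

def R_prime(v, sigma=1e3):
    # derivative of R(v) = log(1 + sigma * v^2): R'(v) = 2*sigma*v / (1 + sigma*v^2)
    return 2.0 * sigma * v / (1.0 + sigma * v**2)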
We consider two images, Kodim08 and Kodim15, of size 256 × 256 for testing. For each image, two blur kernels, linear motion blur and out-of-focus blur, are considered together with additive Gaussian noise. In this experiment, we mainly use the SARAH gradient estimator and set p=1/64. We take α_1k=β _1k=γ_1k=μ_1k=(k-1)/(k+2) and α_2k=β _2k=γ_2k=μ_2k=(k-1)/(k+2) in TiPALM and STiBPALM, and α_1k=β _1k=γ_1k=μ_1k=(k-1)/(k+2) in iPALM.
The convergence comparisons of the algorithms for both images with motion blur are provided in Figure <ref> and Figure <ref>, from which we observe STiBPALM-SARAH is faster than the other methods. Figure <ref> and Figure <ref> provide comparisons of the recovered image and blur kernel. We observe superior performance of stochastic algorithms over deterministic algorithms in these figures as well. In particular, when comparing the estimated blur kernels of the two algorithms every 20 epochs, we clearly see that STiBPALM-SARAH more quickly recovers more accurate solutions than TiPALM.
§ CONCLUSION
In this paper, we propose a stochastic two-step inertial Bregman proximal alternating linearized minimization (STiBPALM) algorithm with variance-reduced gradient estimators to solve a class of nonconvex nonsmooth optimization problems. Under some mild conditions, we analyze the convergence properties of STiBPALM when using a variety of variance-reduced gradient estimators, and prove specific convergence rates using the SAGA and SARAH estimators. We also apply the STiBPALM algorithm to sparse nonnegative matrix factorization and blind image-deblurring problems, and perform numerical experiments to demonstrate the effectiveness of the proposed algorithm.
Conflict of Interest: The authors declared that they have no conflict of interest.
§ APPENDIX
§.§ SAGA Variance Bound
We define the SAGA gradient estimators ∇_x(u_k,y_k) and ∇_y(x_k+1,v_k) as follows:
∇_x(u_k,y_k)= 1/b∑_i∈ I_k^x ( ∇ _xH_i(u_k,y_k)- ∇ _xH_i(φ _k^i,y_k) ) + 1/n∑_j=1^n∇ _xH_j(φ _k^j,y_k),
∇_y(x_k+1,v_k)= 1/b∑_i∈ I_k^y ( ∇ _yH_i(x_k+1,v_k)- ∇ _yH_i(x_k+1,ξ _k^i) ) + 1/n∑_j=1^n∇ _yH_j(x_k+1,ξ _k^j),
where I_k^x and I_k^y are mini-batches containing b indices. The variables φ _k^i and ξ _k^i follow the update rules φ _k+1^i=u_k
if i∈ I_k^x and φ _k+1^i=φ _k^i otherwise, and ξ _k+1^i=v_k if i∈ I_k^y and ξ _k+1^i=ξ _k^i otherwise.
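A literal Python sketch of the x-block estimator above is given below (the y-block is analogous). The names grad_x_i and phi are placeholders for the component gradients and the stored reference points; this form re-evaluates the reference average at the current y_k, exactly as written above, and is meant for illustration rather than efficiency.

def saga_grad_x(grad_x_i, u_k, y_k, phi, batch):
    # grad_x_i(j, x, y): gradient of the component H_j with respect to x (numpy array).
    # phi: stored reference points phi_k^j for j = 0, ..., n-1; batch: index set I_k^x of size b.
    n, b = len(phi), len(batch)
    correction = sum(grad_x_i(i, u_k, y_k) - grad_x_i(i, phi[i], y_k) for i in batch) / b
    reference = sum(grad_x_i(j, phi[j], y_k) for j in range(n)) / n
    for i in batch:                 # memory update: phi_{k+1}^i = u_k for i in I_k^x
        phi[i] = u_k.copy()
    return correction + reference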
To prove our variance bounds, we require the following lemma.
Suppose X_1,⋯ ,X_t are independent random variables satisfying 𝔼_kX_i=0 for 1≤ i≤ t. Then
𝔼_k X_1+⋯ +X_t ^2=𝔼_k [ X_1 ^2 +⋯ + X_t ^2 ].
Our hypotheses on these random variables imply 𝔼_k⟨ X_i,X_j ⟩ =0 for i j. Therefore,
𝔼_k X_1+⋯ +X_t ^2= 𝔼_k∑_i,j=1^t⟨ X_i,X_j ⟩ =𝔼_k [ X_1 ^2 +⋯ + X_t ^2 ].
We are now prepared to prove that the SAGA gradient estimator is variance-reduced.
The SAGA gradient estimator satisfies
𝔼_k∇_x(u_k,y_k)-∇ _xH(u_k,y_k) ^2≤1/bn∑_j=1^n∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) ^2,
𝔼_k∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ^2≤4/bn∑_j=1^n∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j) ^2
+16N^2γ^2/b(𝔼_kz_k+1-z_k ^2+z_k-z_k-1 ^2+z_k-1-z_k-2 ^2),
as well as
𝔼_k∇_x(u_k,y_k)-∇ _xH(u_k,y_k) ≤1/√(bn)∑_j=1^n∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) ,
𝔼_k∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ≤2/√(bn)∑_j=1^n∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j)
+4Nγ/√(b)(𝔼_kz_k+1-z_k+z_k-z_k-1+z_k-1-z_k-2),
where N=max{M,L }, γ=max{γ_1,γ_2 }.
According to (<ref>), we have
𝔼_k∇_x(u_k,y_k)-∇ _xH(u_k,y_k) ^2
= 𝔼_k1/b∑_i∈ I_k^x ( ∇ _xH_i(u_k,y_k)- ∇ _xH_i(φ _k^i,y_k) ) -∇ _xH(u_k,y_k)+ 1/n∑_j=1^n∇ _xH_j(φ _k^j,y_k) ^2
(1)≤ 1/b^2𝔼_k∑_i∈ I_k^x∇ _xH_i(u_k,y_k)- ∇ _xH_i(φ _k^i,y_k) ^2
= 1/bn∑_j=1^n∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) ^2.
Inequality (1) follows from Lemma <ref>. By Jensen's inequality, we then have
𝔼_k∇_x(u_k,y_k)-∇ _xH(u_k,y_k) ≤ √(𝔼_k∇_x(u_k,y_k)-∇ _xH(u_k,y_k) ^2)
≤ 1/√(bn)√(∑_j=1^n∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) ^2)
≤ 1/√(bn)∑_j=1^n∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) .
We use an analogous argument for ∇_y(x_k+1,v_k). Let 𝔼_k,x denote the expectation conditional on the first k iterations and I_k^x. By the same reasoning as in (<ref>), applying the Lipschitz continuity of ∇ _yH_j, we obtain that
𝔼_k,x∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ^2
≤ 1/bn∑_j=1^n∇ _yH_j(x_k+1,v_k)- ∇ _yH_j(x_k+1,ξ _k^j) ^2
≤ 4/bn∑_j=1^n∇ _yH_j(x_k+1,v_k)- ∇ _yH_j(x_k,y_k) ^2+4/bn∑_j=1^n∇ _yH_j(x_k,y_k)- ∇ _yH_j(x_k,v_k) ^2
+4/bn∑_j=1^n∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j) ^2+4/bn∑_j=1^n∇ _yH_j(x_k,ξ _k^j)- ∇ _yH_j(x_k+1,ξ _k^j) ^2
≤ 4M^2/bx_k+1-x_k ^2+4M^2/bv_k-y_k ^2+4L^2/by_k-v_k ^2
+4/bn∑_j=1^n∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j) ^2+4M^2/bx_k+1-x_k ^2
≤ 4/bn∑_j=1^n∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j) ^2+8M^2/bx_k+1-x_k ^2
+4(M^2+L^2)/b (2γ_1^2y_k-y_k-1 ^2+2γ_2^2y_k-1-y_k-2 ^2)
≤ 4/bn∑_j=1^n∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j) ^2+16N^2γ^2/b(z_k+1-z_k ^2+z_k-z_k-1 ^2.
.+z_k-1-z_k-2 ^2),
where N=max{M,L }, γ=max{γ_1,γ_2 }. Also, by the same reasoning as in (<ref>),
𝔼_k,x∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k)
≤ √(𝔼_k,x∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ^2)
≤ 2/√(bn)∑_j=1^n∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j) +4Nγ/√(b)(z_k+1-z_k+z_k-z_k-1.
.+z_k-1-z_k-2),
Applying the operator 𝔼_k to (<ref>) and (<ref>), we get the desired result.
Now define
Υ _k+1= 1/bn∑_j=1^n (∇ _xH_j(u_k+1,y_k+1)- ∇ _xH_j(φ _k+1^j,y_k+1) ^2 .
.+4∇ _yH_j(x_k+1,v_k+1)- ∇ _yH_j(x_k+1,ξ _k+1^j) ^2 ),
Γ _k+1= 1/√(bn)∑_j=1^n (∇ _xH_j(u_k+1,y_k+1)- ∇ _xH_j(φ _k+1^j,y_k+1) ^2 .
.+2∇ _yH_j(x_k+1,v_k+1)- ∇ _yH_j(x_k+1,ξ _k+1^j) ^2 ).
By Lemma <ref>, we have
𝔼_k [ ∇_x(u_k,y_k)-∇ _xH(u_k,y_k) ^2+∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ^2 ]
≤ Υ _k+V_1 (𝔼_k z_k+1-z_k ^2+ z_k-z_k-1 ^2 + z_k-1-z_k-2 ^2 ),
and
𝔼_k [ ∇_x(u_k,y_k)-∇ _xH(u_k,y_k) +∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ]
≤ Γ_k+V_2 (𝔼_k z_k+1-z_k+ z_k-z_k-1 + z_k-1-z_k-2 ).
This is exactly the MSE bound, where V_1=16N^2γ^2/b and V_2=4Nγ/√(b).
(Geometric decay)
Let Υ _k be defined as in (<ref>), then we can establish the geometric decay property:
𝔼_kΥ _k+1≤ ( 1-ρ )Υ _k+V_Υ (𝔼_k z_k+1-z_k ^2+ z_k-z_k-1 ^2 + z_k-1-z_k-2 ^2 ),
where ρ=b/2n, V_Υ=408nN^2(1+2γ_1^2+γ_2^2)/b^2.
We show that 𝔼_kΥ _k+1 is decreasing at a geometric rate. By applying the inequality a-c ^2≤ (1+ε ) a-b ^2+(1+ε ^-1 ) b-c ^2 twice, it follows that
1/bn∑_j=1^n 𝔼_k∇ _xH_j(u_k+1,y_k+1)- ∇ _xH_j(φ _k+1^j,y_k+1) ^2
≤ 1+ε/bn∑_j=1^n 𝔼_k∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k+1^j,y_k+1) ^2+1+ε ^-1/bn∑_j=1^n𝔼_k∇ _xH_j(u_k+1,y_k+1)- ∇ _xH_j(u_k,y_k) ^2
≤ (1+ε)^2 /bn∑_j=1^n 𝔼_k∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k+1^j,y_k) ^2
+(1+ε)(1+ε ^-1 ) /bn∑_j=1^n 𝔼_k∇ _xH_j(φ _k+1^j,y_k)- ∇ _xH_j(φ _k+1^j,y_k+1) ^2
+1+ε ^-1/bn∑_j=1^n 𝔼_k∇ _xH_j(u_k+1,y_k+1)- ∇ _xH_j(u_k,y_k) ^2
≤ (1+ε)^2 (1-b/n)/bn∑_j=1^n ∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) ^2+(1+ε)(1+ε ^-1 )M^2 /b𝔼_ky_k- y_k+1 ^2
+(1+ε ^-1)M^2/b𝔼_k (u_k+1-u_k ^2+y_k+1-y_k^2)
≤ (1+ε)^2 (1-b/n)/bn∑_j=1^n ∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) ^2+(2+ε)(1+ε ^-1 )M^2 /b𝔼_ky_k+1- y_k ^2
+(1+ε ^-1)M^2/b𝔼_k (3u_k+1-x_k+1 ^2+3x_k+1-x_k^2+3x_k-u_k^2)
≤ (1+ε)^2 (1-b/n)/bn∑_j=1^n ∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) ^2+(2+ε)(1+ε ^-1 )M^2 /b𝔼_ky_k+1- y_k ^2
+3M^2(1+ε ^-1)(1+2γ_1^2)/b𝔼_kx_k+1-x_k ^2+6M^2(1+ε ^-1)(γ_1^2+γ_2^2)/bx_k-x_k-1^2
+6M^2(1+ε ^-1)γ_2^2/bx_k-1-x_k-2^2.
Similarly,
1/bn∑_j=1^n 𝔼_k∇ _yH_j(x_k+1,v_k+1)- ∇ _yH_j(x_k+1,ξ _k+1^j) ^2
≤ 1+ε/bn∑_j=1^n 𝔼_k∇ _yH_j(x_k+1,v_k)- ∇ _yH_j(x_k+1,ξ _k+1^j) ^2
+1+ε ^-1/bn∑_j=1^n 𝔼_k∇ _yH_j(x_k+1,v_k+1)- ∇ _yH_j(x_k+1,v_k) ^2
≤ (1+ε)^2 (1-b/n)/bn∑_j=1^n 𝔼_k∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k+1,ξ _k^j) ^2
+(1+ε)(1+ε ^-1) (1-b/n)/bn∑_j=1^n 𝔼_k∇ _yH_j(x_k+1,v_k)- ∇ _yH_j(x_k,v_k) ^2
+1+ε ^-1/bn∑_j=1^n 𝔼_k∇ _yH_j(x_k+1,v_k+1)- ∇ _yH_j(x_k+1,v_k) ^2
≤ (1+ε)^3 (1-b/n)/bn∑_j=1^n ∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j) ^2
+(1+ε)^2(1+ε ^-1) (1-b/n)/bn∑_j=1^n 𝔼_k∇ _yH_j(x_k,ξ _k^j)- ∇ _yH_j(x_k+1,ξ _k^j) ^2
+(1+ε)(1+ε ^-1) (1-b/n)/bn∑_j=1^n 𝔼_k∇ _yH_j(x_k+1,v_k)- ∇ _yH_j(x_k,v_k) ^2
+1+ε ^-1/bn∑_j=1^n 𝔼_k∇ _yH_j(x_k+1,v_k+1)- ∇ _yH_j(x_k+1,v_k) ^2
≤ (1+ε)^3 (1-b/n)/bn∑_j=1^n ∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j) ^2+(1+ε)^2(1+ε ^-1) (1-b/n)M^2/b
𝔼_k x_k+1-x_k ^2+(1+ε)(1+ε ^-1) (1-b/n)M^2/b𝔼_k x_k+1-x_k ^2+(1+ε ^-1)L^2/b𝔼_k v_k+1-v_k ^2
≤ (1+ε)^3 (1-b/n)/bn∑_j=1^n ∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j) ^2+(2+ε)(1+ε)(1+ε ^-1) (1-b/n)M^2/b
𝔼_k x_k+1-x_k ^2+(1+ε ^-1)L^2/b𝔼_k (3v_k+1-y_k+1 ^2+3y_k+1-y_k^2+3y_k-v_k^2)
≤ (1+ε)^3 (1-b/n)/bn∑_j=1^n ∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j) ^2+(2+ε)(1+ε)(1+ε ^-1) (1-b/n)M^2/b
𝔼_k x_k+1-x_k ^2+3L^2(1+ε ^-1)(1+2γ_1^2)/b𝔼_ky_k+1-y_k ^2+6L^2(1+ε ^-1)(γ_1^2+γ_2^2)/b
y_k-y_k-1^2+6L^2(1+ε ^-1)γ_2^2/by_k-1-y_k-2^2.
With
Υ _k+1= 1/bn∑_j=1^n (∇ _xH_j(u_k+1,y_k+1)- ∇ _xH_j(φ _k+1^j,y_k+1) ^2 .
.+4∇ _yH_j(x_k+1,v_k+1)- ∇ _yH_j(x_k+1,ξ _k+1^j) ^2 ),
adding (<ref>) and (<ref>), we can obtain
𝔼_kΥ _k+1
≤ (1+ε)^3 (1-b/n)Υ _k+(2+ε)(1+ε ^-1 )M^2 /b𝔼_ky_k+1- y_k ^2+3M^2(1+ε ^-1)(1+2γ_1^2)/b
𝔼_kx_k+1-x_k ^2+6M^2(1+ε ^-1)(γ_1^2+γ_2^2)/bx_k-x_k-1^2+6M^2(1+ε ^-1)γ_2^2/b
x_k-1-x_k-2^2+4(1+ε)(1+ε ^-1) (1-b/n)M^2(2+ε)/b𝔼_k x_k+1-x_k ^2
+12L^2(1+ε ^-1)(1+2γ_1^2)/b𝔼_ky_k+1-y_k ^2+24L^2(1+ε ^-1)(γ_1^2+γ_2^2)/by_k-y_k-1^2
+24L^2(1+ε ^-1)γ_2^2/by_k-1-y_k-2^2
≤ (1+ε)^3 (1-b/n)Υ _k+13N^2(1+ε)(2+ε)(1+ε ^-1 )(1+2γ_1^2)/b𝔼_kz_k+1- z_k ^2
+24N^2(1+ε ^-1)(γ_1^2+γ_2^2)/bz_k-z_k-1 ^2+24N^2γ_2^2(1+ε ^-1)/bz_k-1-z_k-2^2
≤ (1+ε)^3 (1-b/n)Υ _k+24N^2(1+ε)(2+ε)(1+ε ^-1)(1+2γ_1^2+γ_2^2)/b (𝔼_k z_k+1-z_k ^2.
.+ z_k-z_k-1 ^2 + z_k-1-z_k-2 ^2 ),
where N=max{ M,L }. Choosing ε =b/6n, we have (1+ε )^3(1-b/n ) ≤ 1-b/2n, producing the inequality
𝔼_kΥ _k+1≤ (1-b/2n)Υ _k+24N^2(1+b/6n)(2+b/6n)(1+6n/b)(1+2γ_1^2+γ_2^2)/b (𝔼_k z_k+1-z_k ^2.
. + z_k-z_k-1 ^2+ z_k-1-z_k-2 ^2 )
≤ (1-b/2n)Υ _k+408nN^2(1+2γ_1^2+γ_2^2)/b^2 (𝔼_k z_k+1-z_k ^2+ z_k-z_k-1 ^2 + z_k-1-z_k-2 ^2 ).
This completes the proof.
(Convergence of estimator)
If { z_k } _k∈ℕ satisfies lim_k →∞𝔼 z_k-z_k-1 ^2=0, then 𝔼Υ _k→ 0 and 𝔼Γ _k→ 0 as k→∞.
We first show that ∑_j=1^n 𝔼∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) ^2→ 0 as k→∞. Indeed,
∑_j=1^n 𝔼∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) ^2≤ L^2∑_j=1^n 𝔼 u_k-φ _k^j ^2
≤ nL^2(1+2n/b)𝔼 u_k-u_k-1 ^2+L^2(1+b/2n)∑_j=1^n 𝔼u_k-1-φ _k^j ^2
≤ nL^2(1+2n/b)𝔼 u_k-u_k-1 ^2+L^2(1+b/2n)(1-b/n)∑_j=1^n 𝔼u_k-1-φ _k-1^j ^2
≤ nL^2(1+2n/b)𝔼 u_k-u_k-1 ^2+L^2(1-b/2n)∑_j=1^n 𝔼u_k-1-φ _k-1^j ^2
≤ nL^2(1+2n/b)∑_l=1^k(1-b/2n)^k-l𝔼 u_l-u_l-1 ^2.
Since 𝔼 z_k-z_k-1 ^2→0, we have 𝔼 u_k-u_k-1 ^2→0, so ∑_l=1^k(1-b/2n)^k-l𝔼 u_l-u_l-1 ^2 → 0, and hence ∑_j=1^n 𝔼∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) ^2 → 0 as k→∞. An analogous argument shows that ∑_j=1^n 𝔼∇ _yH_j(x_k,v_k)- ∇ _yH_j(x_k,ξ _k^j) ^2→ 0 as k→∞. Therefore, 𝔼Υ _k→ 0 as k→∞. Similarly, 𝔼Γ _k→ 0 as k→∞. Indeed,
∑_j=1^n 𝔼∇ _xH_j(u_k,y_k)- ∇ _xH_j(φ _k^j,y_k) ≤ L∑_j=1^n 𝔼 u_k-φ _k^j
≤ nL𝔼 u_k-u_k-1 +L∑_j=1^n 𝔼u_k-1-φ _k^j
≤ nL𝔼 u_k-u_k-1 +L(1-b/n)∑_j=1^n 𝔼u_k-1-φ _k-1^j
≤ nL∑_l=1^k(1-b/n)^k-l𝔼 u_l-u_l-1.
Because 𝔼 z_k-z_k-1 ^2→0, Jensen's inequality implies 𝔼 z_k-z_k-1≤√(𝔼 z_k-z_k-1 ^2)→ 0, and hence 𝔼 u_k-u_k-1→0. It follows that the bound on the right goes to zero as k→∞, and therefore 𝔼Γ _k→ 0.
§.§ SARAH Variance Bound
As in the previous section, we use I_k^x and I_k^y to denote the mini-batches used to approximate ∇ _xH(u_k,y_k) and ∇ _yH(x_k+1,v_k), respectively.
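For concreteness, the following minimal sketch implements the (loopless) SARAH x-block estimator analyzed below, assuming the full gradient is refreshed with probability 1/p at each iteration and the recursive mini-batch correction is used otherwise; grad_x_i, full_grad_x and state are placeholder names, and rng can be, e.g., numpy.random.default_rng().

def sarah_grad_x(grad_x_i, full_grad_x, state, u_k, y_k, u_prev, y_prev, batch, p, rng):
    # With probability 1/p recompute the full gradient (zero MSE at that step);
    # otherwise add the mini-batch correction to the previous estimate state["v_x"].
    if state.get("v_x") is None or rng.random() < 1.0 / p:
        v = full_grad_x(u_k, y_k)                       # full-gradient refresh
    else:
        diff = sum(grad_x_i(i, u_k, y_k) - grad_x_i(i, u_prev, y_prev)
                   for i in batch) / len(batch)
        v = state["v_x"] + diff                         # recursive SARAH update
    state["v_x"] = v
    return v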
The SARAH gradient estimator satisfies
𝔼_k (∇_x(u_k,y_k)-∇ _xH(u_k,y_k) ^2+∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ^2)
≤ ( 1-1/p ) ( ∇_x(u_k-1,y_k-1)-∇_xH(u_k-1,y_k-1)^2+ ∇_y(x_k,v_k-1)-∇_yH(x_k,v_k-1)^2 )
+V_1 (𝔼_k z_k+1-z_k ^2+ z_k-z_k-1 ^2 + z_k-1-z_k-2 ^2+ z_k-2-z_k-3 ^2 ),
as well as
𝔼_k ( ∇_x(u_k,y_k)-∇ _xH(u_k,y_k) +∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) )
≤ √(1-1/p) (∇_x(u_k,y_k)-∇ _xH(u_k,y_k) +∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) )
+V_2 (𝔼_k z_k+1-z_k+ z_k-z_k-1 + z_k-1-z_k-2+ z_k-2-z_k-3 ),
where V_1=6 ( 1-1/p )M^2(1+2γ_1^2+γ_2^2) and V_2=M√(6(1-1/p)(1+2γ_1^2+γ_2^2) ).
Let 𝔼_k,p denote the expectation conditional on the first k iterations and the event that we do not compute the full gradient at iteration k. The conditional expectation of the SARAH gradient estimator in this case is
𝔼_k,p∇_x(u_k,y_k)= 1/b𝔼_k,p ( ∑_i∈ I_k^x∇ _xH_i(u_k,y_k)- ∇ _xH_i(u_k-1,y_k-1) ) +∇_x(u_k-1,y_k-1)
= ∇ _xH(u_k,y_k)-∇ _xH(u_k-1,y_k-1)+∇_x(u_k-1,y_k-1),
and further
𝔼_k,p∇_x(u_k,y_k) -∇_xH(u_k,y_k) ^2
= 𝔼_k,p∇_x(u_k-1,y_k-1)-∇_xH(u_k-1,y_k-1)+∇_xH(u_k-1,y_k-1) -∇_xH(u_k,y_k).
.+∇_x(u_k,y_k)-∇_x(u_k-1,y_k-1) ^2
= ∇_x(u_k-1,y_k-1)-∇_xH(u_k-1,y_k-1)^2+∇_xH(u_k-1,y_k-1) -∇_xH(u_k,y_k) ^2
+𝔼_k,p∇_x(u_k,y_k)-∇_x(u_k-1,y_k-1)^2
+2⟨∇_x(u_k-1,y_k-1)-∇_xH(u_k-1,y_k-1), ∇_xH(u_k-1,y_k-1) -∇_xH(u_k,y_k) ⟩
-2⟨∇_xH(u_k-1,y_k-1)-∇_x(u_k-1,y_k-1), 𝔼_k,p ( ∇_x(u_k,y_k)-∇_x(u_k-1,y_k-1) ) ⟩
-2⟨∇_xH(u_k,y_k)-∇_xH(u_k-1,y_k-1), 𝔼_k,p ( ∇_x(u_k,y_k)-∇_x(u_k-1,y_k-1) ) ⟩.
By (<ref>), we see that
𝔼_k,p ( ∇_x(u_k,y_k)-∇_x(u_k-1,y_k-1) )=∇_xH(u_k,y_k)-∇_xH(u_k-1,y_k-1).
Thus, the first two inner products in (<ref>) sum to zero and the third one is equal to
-2⟨∇_xH(u_k,y_k)-∇_xH(u_k-1,y_k-1), 𝔼_k,p ( ∇_x(u_k,y_k)-∇_x(u_k-1,y_k-1) ) ⟩
= -2⟨∇_xH(u_k,y_k)-∇_xH(u_k-1,y_k-1), ∇_xH(u_k,y_k)-∇_xH(u_k-1,y_k-1)⟩
= -2∇_xH(u_k,y_k)-∇_xH(u_k-1,y_k-1) ^2.
This yields
𝔼_k,p∇_x(u_k,y_k) -∇_xH(u_k,y_k) ^2
= ∇_x(u_k-1,y_k-1)-∇_xH(u_k-1,y_k-1)^2-∇_xH(u_k-1,y_k-1) -∇_xH(u_k,y_k) ^2
+𝔼_k,p∇_x(u_k,y_k)-∇_x(u_k-1,y_k-1)^2
≤ ∇_x(u_k-1,y_k-1)-∇_xH(u_k-1,y_k-1)^2+𝔼_k,p∇_x(u_k,y_k)-∇_x(u_k-1,y_k-1)^2.
We can bound the second term by computing the expectation.
𝔼_k,p∇_x(u_k,y_k)-∇_x(u_k-1,y_k-1)^2
= 𝔼_k,p1/b ( ∑_i∈ I_k^x∇ _xH_i(u_k,y_k)- ∇ _xH_i(u_k-1,y_k-1) )^2
≤ 1/b𝔼_k,p [ ∑_i∈ I_k^x∇ _xH_i(u_k,y_k)- ∇ _xH_i(u_k-1,y_k-1) ^2 ]
= 1/n∑_j=1^n∇ _xH_j(u_k,y_k)- ∇ _xH_j(u_k-1,y_k-1) ^2.
The inequality is due to the convexity of the function x↦ x ^2. This results in the recursive inequality
𝔼_k,p∇_x(u_k,y_k) -∇_xH(u_k,y_k) ^2
≤ ∇_x(u_k-1,y_k-1)-∇_xH(u_k-1,y_k-1)^2+1/n∑_j=1^n∇ _xH_j(u_k,y_k)- ∇ _xH_j(u_k-1,y_k-1) ^2.
This bounds the MSE under the condition that the full gradient is not computed. When the full gradient is computed, the MSE is equal to zero, so taking the M-Lipschitz continuity of the gradients of the H_j into account, we get
𝔼_k∇_x(u_k,y_k) -∇_xH(u_k,y_k) ^2
≤ ( 1-1/p ) ( ∇_x(u_k-1,y_k-1)-∇_xH(u_k-1,y_k-1)^2+1/n∑_j=1^n∇ _xH_j(u_k,y_k)- ∇ _xH_j(u_k-1,y_k-1) ^2 )
≤ ( 1-1/p ) ( ∇_x(u_k-1,y_k-1)-∇_xH(u_k-1,y_k-1)^2+M^2 (u_k,y_k)- (u_k-1,y_k-1) ^2 ).
Using (a+b+c) ^2≤ 3(a^2+b^2+c^2), we can estimate
(u_k,y_k)- (u_k-1,y_k-1) ^2= u_k-u_k-1^2+ y_k-y_k-1^2
≤ 3 u_k-x_k^2+3 x_k-x_k-1^2+3 x_k-1-u_k-1^2+ y_k-y_k-1^2
≤ 3(1+2γ_1^2) x_k-x_k-1^2+6(γ_1^2+γ_2^2) x_k-1-x_k-2^2+6γ_2^2 x_k-2-x_k-3^2+ y_k-y_k-1^2.
Substituting this into the above inequality, we obtain
𝔼_k∇_x(u_k,y_k) -∇_xH(u_k,y_k) ^2
≤ ( 1-1/p ) ( ∇_x(u_k-1,y_k-1)-∇_xH(u_k-1,y_k-1)^2+3M^2(1+2γ_1^2) x_k-x_k-1^2.
.+6M^2(γ_1^2+γ_2^2) x_k-1-x_k-2^2+6M^2γ_2^2 x_k-2-x_k-3^2+M^2 y_k-y_k-1^2 ).
By symmetric arguments, it holds
𝔼_k∇_y(x_k+1,v_k) -∇_yH(x_k+1,v_k) ^2
≤ ( 1-1/p ) ( ∇_y(x_k,v_k-1)-∇_yH(x_k,v_k-1)^2+M^2𝔼_k (x_k+1,v_k)- (x_k,v_k-1) ^2 )
≤ ( 1-1/p ) ( ∇_y(x_k,v_k-1)-∇_yH(x_k,v_k-1)^2+M^2𝔼_k x_k+1-x_k^2+3M^2(1+2μ_1k^2).
. y_k-y_k-1^2+6M^2(μ_1,k-1^2+μ_2k^2) y_k-1-y_k-2^2+6M^2μ_2,k-1^2 y_k-2-y_k-3^2 )
≤ ( 1-1/p ) ( ∇_y(x_k,v_k-1)-∇_yH(x_k,v_k-1)^2+M^2𝔼_k x_k+1-x_k^2+3M^2(1+2γ_1^2).
. y_k-y_k-1^2+6M^2(γ_1^2+γ_2^2) y_k-1-y_k-2^2+6M^2γ_2^2 y_k-2-y_k-3^2 ).
Combining (<ref>) and (<ref>), we can obtain
𝔼_k (∇_x(u_k,y_k)-∇ _xH(u_k,y_k) ^2+∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ^2)
≤ ( 1-1/p ) ( ∇_x(u_k-1,y_k-1)-∇_xH(u_k-1,y_k-1)^2+ ∇_y(x_k,v_k-1)-∇_yH(x_k,v_k-1)^2.
.+M^2𝔼_k x_k+1-x_k^2+M^2 y_k-y_k-1^2+3M^2(1+2γ_1^2) z_k-z_k-1^2.
.+6M^2(γ_1^2+γ_2^2) z_k-1-z_k-2^2+6M^2γ_2^2 z_k-2-z_k-3^2 )
≤ ( 1-1/p )Υ _k+6 ( 1-1/p )M^2(1+2γ_1^2+γ_2^2) (𝔼_k z_k+1-z_k ^2+ z_k-z_k-1 ^2.
. + z_k-1-z_k-2 ^2+ z_k-2-z_k-3 ^2 ).
Similar bounds hold for Γ _k due to Jensen’s inequality:
𝔼_k ( ∇_x(u_k,y_k)-∇ _xH(u_k,y_k) +∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) )
≤ √(1-1/p) (∇_x(u_k,y_k)-∇ _xH(u_k,y_k) +∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) )
+M√(6(1-1/p)(1+2γ_1^2+γ_2^2) ) (𝔼_k z_k+1-z_k+ z_k-z_k-1 + z_k-1-z_k-2+ z_k-2-z_k-3 ).
This completes the proof.
Now define
Υ _k+1= ∇_x(u_k,y_k)-∇ _xH(u_k,y_k) ^2+∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ^2,
Γ _k+1= ∇_x(u_k,y_k)-∇ _xH(u_k,y_k) +∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) .
By Lemma <ref>, we have
𝔼_k [ ∇_x(u_k,y_k)-∇ _xH(u_k,y_k) ^2+∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ^2 ]
≤ Υ _k+V_1 (𝔼_k z_k+1-z_k ^2+ z_k-z_k-1 ^2 + z_k-1-z_k-2 ^2+ z_k-2-z_k-3 ^2 ),
and
𝔼_k [ ∇_x(u_k,y_k)-∇ _xH(u_k,y_k) +∇_y(x_k+1,v_k)-∇ _yH(x_k+1,v_k) ]
≤ Γ_k+V_2 (𝔼_k z_k+1-z_k+ z_k-z_k-1 + z_k-1-z_k-2+ z_k-2-z_k-3 ).
This is exactly the MSE bound, where V_1=6 ( 1-1/p )M^2(1+2γ_1^2+γ_2^2) and
V_2=M√(6(1-1/p)(1+2γ_1^2+γ_2^2) ).
(Geometric decay)
Let Υ _k be defined as in (<ref>), then we can establish the geometric decay property:
𝔼_kΥ _k+1≤ ( 1-ρ )Υ _k+V_Υ (𝔼_k z_k+1-z_k ^2+ z_k-z_k-1 ^2 + z_k-1-z_k-2 ^2+ z_k-2-z_k-3 ^2 ),
where ρ= 1/p, V_Υ=6 ( 1-1/p )M^2(1+2γ_1^2+γ_2^2).
This is a direct result of Lemma <ref>.
(Convergence of estimator)
If { z_k } _k∈ℕ satisfies lim_k →∞𝔼 z_k-z_k-1 ^2=0, then 𝔼Υ _k→ 0 and 𝔼Γ _k→ 0 as k →∞.
By (<ref>), we have
𝔼Υ _k
≤ ( 1-ρ )𝔼Υ _k-1+V_Υ𝔼 ( z_k-z_k-1 ^2 + z_k-1-z_k-2 ^2+ z_k-2-z_k-3 ^2+ z_k-3-z_k-4 ^2 )
≤ V_Υ∑_l=1^k ( 1-ρ )^k-l𝔼 ( z_l-z_l-1 ^2 + z_l-1-z_l-2 ^2+ z_l-2-z_l-3 ^2+ z_l-3-z_l-4 ^2 ),
which implies 𝔼Υ _k→ 0 as k →∞. By Jensen’s inequality, we have 𝔼Γ _k→ 0 as k →∞.
99
MDC Chao M.T., Han D.R., Cai X.J., Convergence of the Peaceman-Rachford splitting method for a class of nonconvex programs, Numer. Math., Theory Methods Appl., 2021, 14(2), 438-460.
XH Fu X., Huang K., Sidiropoulos N.D., Ma W., Nonnegative matrix factorization for signal and data analytics: identifiability, algorithms, and applications,
IEEE Signal Process. Mag., 2019, 36(2), 59-80.
PT Paatero P., Tapper U., Positive matrix factorization: a nonnegative factor model with optimal utilization of error estimates of data values, Environmetrics 5, 1994, 111-126.
LS Lee D.D., Seung H.S., Learning the parts of objects by nonnegative matrix factorization, Nature 401, 1999, 788-791.
MH Ma Y., Hu X., He T., Jiang X., Clustering and integrating of heterogeneous microbiome data by joint symmetric nonnegative matrix factorization with
Laplacian regularization, IEEE/ACM Trans. Comput. Biol. Bioinform., 2020, 17(3), 788-795.
TS Pock T., Sabach S., Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems, SIAM J. Imaging Sci., 2017, 9, 1756-1787.
ALM Aspremont A., Ghaoui L. E., Jordan M. I., Lanckriet G. R., A direct formulation for sparse PCA using semidefinite programming, in Advances in Neural Information Processing Systems, 2005, 41-48.
HR Zou H., Hastie T., Tibshirani R., Sparse principal component analysis, J. Comput. Graph. Statist., 2006, 15, 265-286.
ABSS Attouch H., Bolte J., Svaiter B.F., Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Guass-Seidel methods, Math. Program., 2013, 137, 91-129.
DC Donoho D.L., Compressed sensing, IEEE Trans. Inform. Theory, 2006, 4, 1289-1306.
JSM Bolte J., Sabach S., Teboulle M., Proximal alternating linearised minimization for nonconvex and nonsmooth problems, Math. Program., 2014, 146, 459-494.
ABR Attouch H., Bolte J., Redont P., Soubeyran A., Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka–Łojasiewicz inequality, Math. Oper. Res., 2010, 35, 438-457.
AI Attouch H., Bolte J., On the convergence of the proximal algorithm for nonsmooth functions involving analytic features, Mathematical Programming, A Publication of the Mathematical Programming Society, 2009, 116(1-2), 5-16.
GCH Gao X., Cai X.J., Han D.R., A Gauss-Seidel type inertial proximal alternating linearized minimization for a class of nonconvex optimization problems, J. Glob. Optim., 2020, 76, 863-887.
XX Wang Q. X., Han D. R., A generalized inertial proximal alternating linearized minimization method for nonconvex nonsmooth problems, Appl. Numer. Math., 2023, 189, 66-87.
ZQ Zhao J., Dong Q.L., Michael Th.R., Wang F.H., Two-step inertial Bregman alternating minimization algorithm for nonconvex and nonsmooth problems, J. Glob. Optim., 2022, 84, 941-966.
GZ Guo C.Z., Zhao J., Two-step inertial Bregman proximal alternating linearized minimization algorithm for nonconvex and nonsmooth problems, 2023, arXiv:2306.07614v1.
Z Chao M.T., Nong F.F., Zhao M.Y., An inertial alternating minimization with Bregman distance for a class of nonconvex and nonsmooth problems, J. Appl. Math. Comput., 2023, 69, 1559-1581.
MP Mukkamala M.C., Ochs P., Pock T., Sabach S., Convex-concave backtracking for inertial Bregman proximal gradient algorithms in nonconvex optimization, SIAM J. Math. Data Sci., 2020, 2, 658-682.
ML Ahookhosh M., Hien L.T.K., Gillis N., Patrinos P., A block inertial Bregman proximal algorithm for nonsmooth nonconvex problems with application to symmetric nonnegative matrix tri-factorization, J. Optim. Theory Appl., 2021, 190, 234-258.
B Bottou L., In: Large-scale machine learning with stochastic gradient descent, In Proceedings of COMPSTAT’2010, 2010, 1, 177-186.
XW Xu Y., Yin W., Block stochastic gradient iteration for convex and nonconvex optimization, SIAM J. Optim., 2015, 25(3), 1686-1716.
DT Driggs D., Tang J.Q., Liang J.W., Davies M, Schonlieb C.B., SPRING: a stochastic proximal alternating minimization for nonsmooth and nonconvex
optimization, SIAM J. Imaing Sci., 2021, 4, 1932-1970.
JG Hertrich, J., Steidl, G., Inertial stochastic PALM and applications in machine learning, Sampl. Theory Signal Process. Data Anal., 2022, 20, https://doi.org/10.1007/s43670-022-00021-x.
SR Schmidt M., Le Roux N., Bach F., Minimizing finite sums with the stochastic average gradient, Math. Program., 2017, 162, 83-112.
JZ Johnson R., Zhang T., Accelerating stochastic gradient descent using predictive variance reduction, in Advances in Neural Information Processing Systems, 2013, 315-323.
KL Konecny J., Liu J., Richtarik P., Takac M., Mini-batch semi-stochastic gradient descent in the proximal setting, IEEE J. Sel. Top. Signal Process., 2016, 10, 242-255.
AFS Defazio A., Bach F., Lacoste-Julien S., SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives, in Advances in Neural Information Processing Systems, 2014, 1646-1654.
LM Li B., Ma M., Giannakis G. B., On the convergence of SARAH and beyond, in International Conference on Artificial Intelligence and Statistics, PMLR, 2020, 223-233.
NL Nguyen L.M., Liu J., Scheinberg K., Takáĉ M., SARAH: A novel method for machine learning problems using stochastic recursive gradient, in Proceedings of the 34th International Conference on Machine Learning, 2017, 2613-2621.
JAA Bolte J., Daniilidis A., Lewis A., The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems, SIAM J. Optim., 2007, 17, 1205-1223.
JAO Bolte J., Daniilidis A., Ley O., Mazet L., Characterizations of Lojasiewicz inequalities: Subgradient flows, talweg, convexity, Trans. Amer. Math. Soc., 2010, 362, 3319-3363.
BT Bertsekas D.P., Tsitsiklis J.N., Parallel and Distributed Computation: Numerical Methods, Prentice hall, Englewood Cliffs, NJ, 1989.
RS Robbins H., Siegmund D., A convergence theorem for non-negative almost supermartingales and some applications. Optimizing Methods in Statistics, Academic Press, New York, 1971, 233-257.
D Damek D., The Asynchronous Palm Algorithm for Nonsmooth Nonconvex Problems, 2016, arXiv:1604.00526.
AB Attouch H., Bolte J., On the convergence of the proximal algorithm for nonsmooth functions involving analytic features, Math. Program. B, 2007, 116, 5-16.
DH Lee D.D, Seung H.S., Learning the parts of objects by non-negative matrix factorization, Nature, 1999, 788-791.
JN Pan J., Gillis N., Generalized separable nonnegative matrix factorization, IEEE Trans. Pattern Anal. Mach. Intell., 2021, 43(5), 1546-1561.
FF Rousset F., Peyrin F., Ducros N., A semi nonnegative matrix factorization technique for pattern generalization in single-pixel imaging, IEEE Trans.
Comput. Imaging, 2018, 4(2), 284-294.
RF Peharz R., Pernkopf F., Sparse nonnegative matrix factorization with l_0-constraints, Neurocomputing, 2012, 80, 38-46.
|
http://arxiv.org/abs/2307.07653v1 | 20230714231056 | RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical World | [
"Donghua Wang",
"Wen Yao",
"Tingsong Jiang",
"Chao Li",
"Xiaoqian Chen"
] | cs.CV | [
"cs.CV"
] |
[1] Donghua Wang
[2] Wen Yao (Corresponding Author)
[2] Tingsong Jiang^*
[3] Chao Li
[2] Xiaoqian Chen
[1] College of Computer Science and Technology, Zhejiang University
[2] Defense Innovation Institute, Chinese Academy of Military Science
[3] School of Artificial Intelligence, Xidian University
[email protected], {wendy0782,lichaoedu}@126.com, [email protected], [email protected]
RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical World
=========================================================================
Physical adversarial attacks against deep neural networks (DNNs) have recently gained increasing attention. The current mainstream physical attacks use printed adversarial patches or camouflage to alter the appearance of the target object. However, these approaches generate conspicuous adversarial patterns that show poor stealthiness. Another physically deployable attack is the optical attack, which is stealthy but performs weakly in daytime sunlight. In this paper, we propose a novel Reflected Light Attack (RFLA), which is both effective and stealthy in the digital and physical worlds and is implemented by placing a colored transparent plastic sheet and a paper cut of a specific shape in front of a mirror to create differently colored geometries on the target object. To achieve these goals, we devise a general circle-based framework to model the reflected light on the target object. Specifically, we optimize a circle (composed of a coordinate and a radius) to carry various geometric shapes determined by the optimized angles. The fill color of the geometric shape and its transparency are also optimized. We extensively evaluate the effectiveness of RFLA on different datasets and models. Experimental results suggest that the proposed method achieves an over 99% success rate on different datasets and models in the digital world. Additionally, we verify the effectiveness of the proposed method in different physical environments by using sunlight or a flashlight.
§ INTRODUCTION
Deep neural networks (DNNs) have increasingly been applied in daily life owing to their remarkable capabilities, for example in automatic driving, facial payment, and computer-aided diagnosis. However, DNN-based systems are exposed to security risks caused by adversarial examples <cit.>. Adversarial examples are crafted with carefully designed noise that is invisible to humans but can deceive DNNs. Furthermore, recent studies <cit.> reported that physically deployed DNN-based systems are also exposed to such security risks. Therefore, it is urgent to explore the various potential risks of security-sensitive systems to avoid possible losses.
Existing adversarial attack methods can be categorized into digital attacks and physical attacks. The former focus on pursuing higher attack performance under restrictive conditions, such as breaking models equipped with adversarial defenses <cit.> or preventing the attacker from accessing the target model's information (e.g., architecture or dataset), i.e., black-box attacks <cit.>. Although some researchers suggested that adversarial examples generated by digital attacks can be applied to physical attacks <cit.>, the attack performance is not satisfying. A possible reason is that the adversarial perturbation is too small to withstand environmental noise in the physical world. In contrast, physical attacks are designed to be physically deployable, and one crucial change is removing the constraint on the perturbation's magnitude.
A line of physical adversarial attack methods <cit.> has been proposed, which can be grouped into contact attacks and contactless attacks. The former require the attacker to approach the target object and then modify its appearance by pasting an adversarial patch or camouflage. However, the adversarial patterns generated by these methods are conspicuous, which easily alerts humans and leads to attack failure. By contrast, contactless physical attacks do not require the attacker to approach the target object; instead, they modify its appearance by projecting or emitting light or a laser onto the target object, making them stealthy and dangerous. Optical attacks are representative contactless attacks. Although several optical attacks have been proposed <cit.>, they only work in dark environments because strong light (e.g., sunlight) disturbs the emitted light, limiting their usability.
In this paper, we draw inspiration from the fact that drivers are easily affected by strong reflected light, which can cause car accidents, while such potential risks to automatic driving systems remain unexplored. We explore the vulnerability of DNNs to reflected light by elaborately designing the position, geometry, and color of the reflected light. Specifically, we propose the Reflected Light Attack (RFLA), which overcomes the poor performance of existing optical attacks in strong-light environments, as its light source is sunlight. To perform physical attacks, we use a mirror to reflect sunlight toward the target object and thereby modify its appearance. However, plain sunlight (usually white) may not achieve the desired performance. Therefore, we first use differently colored transparent plastic sheets to modulate the color of the reflected light, and then apply a paper cut of a specific shape to control the shape of the reflected light on the target object (see Figure <ref>). In this way, we can create reflected light of different colors and shapes on a specific region of the target object to achieve the desired attack performance.
To achieve the above goals, we present a general circle-based framework to model this problem. Specifically, we first initialize a circle with a random center coordinate and radius. On this circle, we create a point using sine and cosine with a randomly selected angle. Then, we customize a shape by adding a new angle, which creates a new point on the circumference. The other points required to form a geometric shape are obtained by applying the center symmetry of the circle. Moreover, the fill color and its transparency are also included in the optimization. Finally, we adopt the particle swarm optimization (PSO) algorithm to find the optimal result.
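To make this parameterization concrete, the sketch below renders one candidate region (the quadrilateral case) from a particle (c_x, c_y, r, θ, Δθ, color, α); the rasterization and alpha-blending details are our own assumptions for illustration, and PSO would simply evaluate many such candidates against the target model.

import numpy as np
from matplotlib.path import Path

def render_reflection(img, cx, cy, r, theta, delta, color, alpha):
    # Quadrilateral inscribed in the circle of center (cx, cy) and radius r:
    # two vertices at angles theta and theta + delta, the other two obtained
    # by center symmetry. `color` is the fill color, `alpha` its opacity.
    angles = np.array([theta, theta + delta])
    pts = np.stack([cx + r * np.cos(angles), cy + r * np.sin(angles)], axis=1)
    verts = np.vstack([pts, 2.0 * np.array([cx, cy]) - pts])   # center symmetry
    verts = np.vstack([verts, verts[:1]])                      # close the polygon
    H, W = img.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    inside = Path(verts).contains_points(np.stack([xs.ravel(), ys.ravel()], axis=1))
    mask = inside.reshape(H, W)
    out = img.copy()                                           # img: H x W x 3 in [0, 1]
    out[mask] = (1.0 - alpha) * out[mask] + alpha * np.asarray(color)   # alpha blending
    return out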
Our contributions are listed as follows.
* We propose a novel reflect-light-based physical adversarial attack under the black-box scenario. It reflects the natural sunlight toward the target object using a mirror, making it controllable and stealthy.
* We devise a general framework based on a circle to search for the best position, geometry, and color of the reflected light to achieve better attack performance.
* We comprehensively investigate the influence of the geometry, position, and color of the reflected light on attack performance in the digital world. We conduct physical adversarial attacks using sunlight during the daytime and a flashlight when sunlight is unavailable, and the experimental results verify the effectiveness of the proposed method.
§ RELATED WORKS
§.§ Digital adversarial attacks
Digital adversarial attacks have enjoyed a decade of development and can be roughly divided into white-box and black-box attack methods. The former grant the adversary access to the target model, allowing attack algorithms to be developed with the model's gradient. The most representative gradient-based attack is the fast gradient sign method (i.e., FGSM <cit.>), which updates adversarial examples along the ascending direction of the gradient in a single iteration step. Since then, a line of variants has been proposed, including an iterative variant of FGSM (i.e., I-FGSM <cit.>), random initialization (i.e., PGD <cit.>), a momentum term introduced to enhance transferability (i.e., MI-FGSM <cit.>), and various data augmentation techniques such as diverse input (i.e., DI-FGSM <cit.>), translation invariance (i.e., TI-FGSM <cit.>), and scale invariance (i.e., SI-FGSM <cit.>). In contrast, black-box attacks prohibit the attacker from accessing any information about the target model, which remains open only for queries, making black-box attacks more challenging. Nonetheless, many black-box attacks have been proposed, exploiting, e.g., the differential evolution algorithm <cit.>, the genetic algorithm <cit.>, particle swarm optimization, and so on <cit.>. In addition, several works suggested that the adversarial perturbation's position <cit.>, pattern <cit.>, and geometry <cit.> on the clean image significantly impact attack performance. However, current works only investigate one or two of these factors. In this work, we systematically investigate the influence of the adversarial perturbation's position, geometry, and pattern on attack performance under the black-box scenario.
§.§ Physical adversarial attacks
According to whether the attacker must access the target object in the real attack scenario, physical adversarial attacks can be grouped into contact and contactless physical attacks. Contact attacks can be further categorized into patch-based and camouflage-based attacks. Patch-based attacks mainly focus on optimizing an adversarial image patch, which is then printed out and stuck on the target object or held by the attacker to deceive the target DNNs. They are usually applied to attacking facial recognition models <cit.>, pedestrian detection models <cit.>, and traffic sign recognition models <cit.>. Camouflage-based attacks <cit.> differ slightly from patch-based ones, as they concentrate on modifying the appearance of the target object via its UV texture. Thus, camouflage-based attacks show better attack performance in the multi-view scenario by painting full-coverage camouflage over the appearance of the target object. However, although contact physical attacks achieve good physical attack performance, the pattern of the adversarial patch/camouflage is conspicuous, which leads to poor stealthiness.
In contrast, contactless physical attacks are performed by projecting/emitting light <cit.> or a laser beam <cit.>, and are usually called optical attacks. However, existing optical attacks work in dark environments <cit.> while performing poorly in strong-light environments. The reason is that the light beam emitted by a light source is easily affected by environmental light, resulting in attack failure. Recently, Zhang et al. <cit.> proposed a shadow-based attack, but it can only create a triangular shape with one monotonous color (e.g., gray). In this work, we overcome the poor performance in strong-light (i.e., sunlight) environments since we directly use sunlight to perform attacks. Moreover, we create reflected light with different geometric shapes and colors using colored transparent plastic sheets and paper cuts.
§ METHODOLOGY
§.§ Problem statement
Let 𝒳 denote the data distribution and 𝒴 the corresponding set of ground-truth labels. Given an image x ∈𝒳 with resolution x ∈ℝ^C × H × W, a well-trained neural network f outputs ŷ = f(x) with ŷ = y, where ŷ is the prediction of f and y is the ground-truth label, ŷ, y ∈ℝ^|𝒴|. An adversarial attack aims to generate an adversarial example x_adv that makes f output a wrong prediction by adding a small perturbation δ to the clean image x, i.e., x_adv = x + δ. Mathematically, δ is obtained by solving the following problem
min δ s.t. f(x+δ) ≠ f(x), ||δ||_p ≤ϵ,
where ||·||_p is the L_p norm, which bounds the maximum allowable magnitude of δ.
The optimization objective of Equation <ref> is the general form for constructing full-pixel-wise perturbations, which is unsuitable for physical adversarial attacks, as the background of the physical world cannot be changed. Therefore, we reformulate Equation <ref> to optimize a physically deployable perturbation by modifying the construction of x_adv. Specifically, we define an apply function 𝒜(x, p, l, M) to construct the adversarial example x_adv, which applies the perturbation p at location l of the clean image x, where M is a binary mask indicating whether a position is allowed to be modified (one denotes allowed, zero not).
In this work, we aim to reflect sunlight toward the target object to perform stealthy physical adversarial attacks, where the appearance (e.g., geometry, fill color, and position) of the reflected light on the target object is the key to a successful attack. Therefore, the geometry and fill color of p and the location l in 𝒜(x, p, l, M) are the variables to be optimized.
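For illustration, a minimal sketch of such an apply function is given below, assuming images are float arrays with values in [0,1]; the function and variable names are ours and not part of the paper's implementation.

```python
import numpy as np

def apply_perturbation(x, p, mask, alpha):
    """Composite the perturbation p onto image x inside the region selected by mask.

    x, p  : float arrays of shape (H, W, 3) with values in [0, 1]
    mask  : {0, 1} array of shape (H, W) indicating modifiable positions
    alpha : blending transparency in [0, 1] (1 = replace, 0 = keep the original)
    """
    m = mask[..., None].astype(x.dtype)        # broadcast the mask over channels
    blended = (1.0 - alpha) * x + alpha * p    # alpha-blend the perturbation
    return m * blended + (1.0 - m) * x         # pixels outside the mask stay untouched
```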
§.§ Reflected light Attack
Sunlight is the most common and indispensable natural phenomenon in daily life. People can reflect sunlight toward a wall and construct various shapes by using mirrors of different shapes. However, the danger of such reflected light to DNN-based systems has been ignored; it may pose a potential risk, as it is extremely stealthy and controllable. In this work, we aim to modulate the reflected light to perform adversarial attacks in the digital and physical worlds.
Previous work <cit.> modeled a triangular shadow by optimizing three points, which would require complex constraints on the points to construct a valid geometric shape if extended to other geometries. To address this issue, we exploit the characteristics of the circle and propose a novel general framework based on it, which can generate various shapes by adjusting the number of angles (see Figure <ref>). The detailed process is described as follows.
* Select a radius r from the region of [0, min(H, W)/2].
* Spawn a center o(x, y) of the circle from the region of [r, H-r] and [r, W-r].
* Randomly select an angle a_1 and spawn a point p_1(x_1,y_1) on circle by the follow equation
x_1 = x + r×sin (a_1 ×π/180), y_1 = y + r×cos (a_1 ×π/180),
* Calculate the symmetry point p'_1(x'_1,y'_1) of the point p_1 against the center of the circle by x'_1 = 2× x - x_1 and y'_1 = 2× y - y_1.
* Randomly select a color tuple (red, green, blue) from the region of [0, 255], and the transparency α from [0, 1].
The above process plots a line on the clean image. To construct other geometries such as a triangle, rectangle, pentagon, or hexagon, one can repeat the third and fourth steps to create new points by adding new angles; a code sketch is given below. Algorithm <ref> describes the detailed particle initialization process.
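As an illustration only, the sketch below shows how such a shape could be sampled following the circle-based construction; the function name and the uniform sampling choices are our assumptions, and for shapes with an odd number of vertices one of the generated points would be dropped.

```python
import math
import random

def sample_particle(H, W, n_angles):
    """Sample center, radius, angles, fill color and transparency of one geometry."""
    r = random.uniform(0, min(H, W) / 2)                          # radius
    cx, cy = random.uniform(r, H - r), random.uniform(r, W - r)   # circle center
    points = []
    for _ in range(n_angles):
        a = random.uniform(0, 360)
        px = cx + r * math.sin(a * math.pi / 180)                 # point on the circle
        py = cy + r * math.cos(a * math.pi / 180)
        points.append((px, py))
        points.append((2 * cx - px, 2 * cy - py))                 # centrally symmetric point
    # sort by angle around the center to avoid self-intersecting edges
    points.sort(key=lambda q: math.atan2(q[1] - cy, q[0] - cx))
    color = tuple(random.randint(0, 255) for _ in range(3))
    alpha = random.uniform(0, 1)
    return (cx, cy, r), points, color, alpha
```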
§.§ Optimization
As aforementioned, we have eight base variables to be optimized, expressed as the eight-tuple (x, y, r, α, red, green, blue, a_1), which can be used to plot a line on the clean image. To generate other geometries, additional variables are required, depending on the shape to be generated. For example, there is one extra variable for the triangle and rectangle, and two extra for the pentagon and hexagon. Note that the proposed method is easily extended to more complex geometries. Recall that our goal is to deceive the DNNs by plotting a geometry on the clean image. Thus, we adopt the particle swarm optimization (PSO) algorithm to seek the best geometry, fill color, and position.
In PSO, we represent the tuple of optimization variables as a particle (i.e., the solution vector q). The update direction of a particle is determined by a velocity vector v. Every particle stands for a potential solution to be optimized. We denote the personal historical best solution of a particle by q_pbest and the global best solution by q_gbest. Moreover, for every solution, we fix the circle's position and merely optimize the geometry, fill color, and transparency. Therefore, to represent the best circle, we devise an additional metric q_sgbest, based on the sum of the fitness scores of all geometric shapes in a specific circle. Finally, the update criterion is defined as follows:
v_i(t) = Wv_i(t-1) + C_1κ_1(q_pbest - q_i(t))
+ C_2κ_2(q_gbest - q_i(t))
+ C_3κ_3(q_sgbest - q_i(t)),
q_i(t) = q_i(t-1) + v_i(t),
where W is the inertia weight used to control the impact of the previous velocity on the current velocity. C_1, C_2, and C_3 are learning factors, which balance the impact of the different empirical terms on the current velocity. κ_1, κ_2, and κ_3 are random values uniformly sampled from [0, 1], which increase the randomness of the search.
Apart from the solution and velocity of a particle, the fitness function is crucial for optimizing in PSO algorithm. In this work, we adopt the following fitness function to evaluate each particle.
min F(q) = Pr_ŷ(A(x, q, M)),
where A(x, q, M) denotes the apply function that paints the geometry with color (red, green, blue) and transparency α at the coordinate o on the clean image x, and M is a binary mask indicating the area allowed to be modified. Pr_ŷ(·) is the probability assigned by the target model f to the predicted label ŷ of the input. By minimizing F(q), the confidence of the predicted label ŷ gradually decreases. We stop the search when it reaches the maximum number of iterations or finds an adversarial example. Algorithm <ref> describes the optimization process.
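A simplified sketch of the optimization loop is shown below; it implements a standard PSO update with the fitness function defined above and, for brevity, omits the additional per-circle best term q_sgbest used in the full method (all names are ours).

```python
import numpy as np

def pso_minimize(fitness, lower, upper, n_particles=50, iters=200,
                 w=0.7298, c1=2.05, c2=2.05, seed=0):
    """Minimize `fitness` (e.g., the probability that the target model assigns to
    the ground-truth label of the composited image) over the box [lower, upper]."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    q = rng.uniform(lower, upper, size=(n_particles, lower.size))   # particle solutions
    v = np.zeros_like(q)                                            # particle velocities
    pbest, pbest_val = q.copy(), np.array([fitness(s) for s in q])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        v = w * v + c1 * r1 * (pbest - q) + c2 * r2 * (gbest - q)   # velocity update
        q = np.clip(q + v, lower, upper)                            # position update within bounds
        vals = np.array([fitness(s) for s in q])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = q[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```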
§.§ Physical deployable attack
In the digital world, we can construct 256^3 color tuples by blending different RGB values, which is impractical in the physical world due to limitations of the devices and materials. Therefore, we constrain the search space of the color to ensure physical deployability. Specifically, we use seven colored transparent plastic sheets to change the color of the reflected light. However, we find that discrepancies exist between the color of a transparent plastic sheet and that of its reflected light (see Figure <ref>), which may lead to attack failure. To reduce such discrepancies, we collect the color of the light reflected through each colored transparent plastic sheet and adopt it as the search color. In this way, we can decrease color discrepancies when performing physical attacks.
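In practice, this amounts to replacing the continuous RGB search space with a small measured palette, as in the following sketch; the RGB values below are placeholders of our own, not measurements from the paper.

```python
# measured colors of the light reflected through the seven sheets plus plain sunlight
# (placeholder RGB values for illustration only)
PALETTE = [(255, 255, 255), (220, 60, 60), (60, 200, 60), (60, 60, 220),
           (230, 230, 80), (200, 80, 200), (80, 210, 210), (250, 160, 60)]

def project_to_palette(rgb):
    """Snap a searched RGB tuple to the closest physically realizable color."""
    return min(PALETTE, key=lambda c: sum((a - b) ** 2 for a, b in zip(c, rgb)))
```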
§ EXPERIMENTS
§.§ Settings
Datasets: To investigate the effectiveness of the proposed method, we conduct the digital attack on the ImageNet-compatible dataset provided by the NIPS 2017 adversarial competition[https://www.kaggle.com/c/nips-2017-non-targeted-adversarial-attack], which includes 1000 images. Moreover, two commonly used traffic sign datasets: GTSRB <cit.> and LISA <cit.>, are also considered to investigate the extensibility of the proposed method.
Target models: We evaluate the proposed method on two tasks: image classification and traffic sign recognition. For image classification, we select six ImageNet pre-trained networks: ResNet50 (RN50) <cit.>, VGG16 <cit.>, DenseNet121 (DN121) <cit.>, ResNeXt50 (RNX50) <cit.>, WiderResNet50 (WRN50) <cit.>, and SqueezeNet (SN) <cit.>, all provided by PyTorch <cit.>. For traffic sign recognition, we follow the settings reported in previous works <cit.> to train GTSRB CNN and LISA CNN on the GTSRB and LISA datasets, which obtain accuracies of 95.06% and 100% on their test sets, respectively.
Evaluation metrics: We adopt the attack success rate (ASR) as the evaluation metric, defined as the ratio of the number of predictions flipped by adversarial examples to the size of the test dataset.
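In code, the metric amounts to the following trivial sketch (function name ours):

```python
def attack_success_rate(clean_preds, adv_preds):
    """Fraction of examples whose predicted label is flipped by the attack."""
    flipped = sum(c != a for c, a in zip(clean_preds, adv_preds))
    return flipped / len(clean_preds)
```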
Implementation details: We adopt the OpenCV-Python package to plot the different geometries on the clean image. For the PSO parameters, we set the maximum number of iterations to 200, C_1, C_2, and C_3 to 2.05, W to 0.7298, and the particle size and the number of geometries per circle to 50. The particle and velocity bounds are provided in Appendix <ref> (the upper bound of the transparency α is set to 0.7 to avoid occluding the clean image). Unless otherwise specified, the mask M is set to the all-one matrix in our experiments. All experiments were conducted on an NVIDIA RTX 3090ti 24GB GPU [Code will be available <https://github.com/winterwindwang/RFLA>.].
§.§ Digital Adversarial Attacks
In this section, we quantitatively and qualitatively evaluate the effectiveness of the proposed method in the digital world. For comparison, we adopt two patch-based attack methods, TPA <cit.> and DAPatch <cit.>, and one line-based attack method, Bezier <cit.>. TPA <cit.> utilizes feature texture images extracted from DNNs as the adversarial patch, which is pasted on the clean image, with the paste position optimized by reinforcement learning. DAPatch <cit.> optimizes the pattern and the mask simultaneously, which can create an adversarial patch with a deformable shape. In contrast, Bezier <cit.> generates adversarial examples by scratching a Bezier curve on the clean image, where the curve depends on three points optimized by an optimizer (e.g., PSO). We reproduce the above three methods on the ImageNet-compatible dataset using their default settings.
§.§.§ Quantitative results
Table <ref> reports the comparison of the proposed RFLA-Line with the Bezier method. As we can observe, the proposed method outperforms the Bezier method on four of the six models and obtains an average ASR improvement of 1.93%, indicating its effectiveness. Additionally, we obtain an 8.95% improvement by doubling the line thickness. Although the length of the Bezier curve may be shorter than ours, the discrepancy is trivial, as the modifications caused by the line are negligible. Moreover, our method can be extended to more geometries.
Then, the comparison results with patch-based methods are listed in Table <ref>. We conclude that the proposed geometric variants of RFLA outperform the existing methods significantly. Specifically, the average ASRs of RFLA-Triangle, RFLA-Rectangle, RFLA-Pentagon, and RFLA-Hexagon are 97.98%, 99.23%, 99.38%, and 99.53%, yielding maximum improvements over TPA and DAPatch of 60.32% and 27.52%, respectively. We observe that the comparison methods fail to achieve the results reported in their papers. A possible reason is that TPA may require two or more patches occluding 8% of the image to achieve higher attack performance. As for DAPatch <cit.>, the position of the adversarial patch is ignored, which prevents it from seeking decision-sensitive positions. In contrast, our method simultaneously optimizes the position, geometry, and adversarial pattern, resulting in better performance. Moreover, the ASR gains diminish as the number of vertices of the geometric shape increases, e.g., a 20.08% gain from Line (two vertices) to Triangle (three vertices) but only 0.15% from Rectangle to Pentagon, which may be attributed to the limited room left for ASR improvement.
In addition, we compare the transferability of the proposed method with the comparison methods. Specifically, we use adversarial examples generated on RNX50 to attack the other models. The evaluation results are reported in Table <ref>. As we can observe, RFLA outperforms the comparison methods in most cases, and the margin in the remaining cases is small. Concretely, the maximum average ASR improvements over Bezier, TPA, and DAPatch are 0.28% (RFLA-Line), 19.18%, and 14.2%, respectively, indicating the effectiveness of the proposed method. Additional transferability comparisons are provided in Appendix <ref>.
§.§.§ Qualitative results
We provide the visualization of adversarial examples generated by different methods in Figure <ref>. On the one hand, Bezier and RFLA-Line obtain the most natural visual quality, and the scratched line is hardly noticeable at a glance. Meanwhile, RFLA-Line fools the DNNs in all displayed cases, while Bezier succeeds in only two. On the other hand, TPA and DAPatch fail in all displayed cases. These qualitative results help explain their inferior attack performance: the adversarial patches generated by these methods cover non-content areas, which may be insignificant to the model decision. Although the proposed method affects more image content than TPA and DAPatch, the covered content remains recognizable. In other words, our method does not modify the semantics of the image. We provide more visualizations of adversarial examples in Appendix <ref>.
In addition, we use the Grad-CAM <cit.> to investigate why the proposed method can work. Figure <ref> illustrates the model attention visualization results. As we can observe, the painted geometry suppresses the model's attention areas, which makes the model output the wrong results. We also provide the visualization analysis result of comparison methods on model attention in Appendix <ref>.
§.§ Extension to the Traffic Sign Recognition Task
To further investigate the effectiveness of the proposed method, we use RFLA to attack traffic sign recognition (TSR) models. Specifically, we collect 200 stop sign images from the GTSRB and LISA test sets for evaluation. To avoid the geometry falling outside the stop sign, we use a mask to indicate the allowed modification positions; we obtain the mask by averaging the 200 test images and binarizing the result. Table <ref> lists the digital attack performance. As we can observe, the proposed method obtains superior attack performance on the two TSR models, especially on GTSRB CNN (100% ASR). In addition, for LISA-CNN, the attack performance increases with the number of vertices of the geometry. One possible reason is that, for a circle of similar size, the affected area of the clean image grows with the geometry.
§.§ Physical Adversarial Attacks
Unlike previous physical attacks that generate the adversarial pattern for physically captured images, we generate the adversarial pattern (i.e., colored geometries) for digital images (the target model is RN50) and then reflect the light, according to the optimized variables, onto the corresponding printed images. In the physical adversarial attacks, we use sunlight and a flashlight as light sources to mimic two different scenarios. We only evaluate one geometry (i.e., the rectangle) for simplicity. Specifically, we randomly select six images from the dataset and generate the corresponding adversarial examples, where the color is fixed during the optimization for physical deployment. We use eight colors: seven created by the transparent colored plastic sheets and one white reflected sunlight. Finally, we capture the physical adversarial examples from 2 meters away, collecting 48 images for each light source.
Table <ref> lists the evaluation results. As we can observe, the ASR against the three models is above 80% on physical adversarial examples created by two different light sources. Interestingly, we find that physical adversarial examples created by the reflected light against RN50 can consistently mislead the different models, indicating the reflected light is well-transferable even in the physical world. Figure <ref> illustrates the physical adversarial examples. Furthermore, we study the effectiveness of reflected light attacks using sunlight and a flashlight on the TSR model. Figure <ref> illustrates examples generated by different geometrical shapes.
§.§ Ablation Study
Attack performance v.s. transparency.
The transparency α determines the covering intensity of the color. When α is set to one, the pixel values of the clean image at the specified positions are substituted by the pure color, while a smaller value gives a lower intensity of the color. We study how the transparency changes the attack performance. Specifically, we fix all variables except the transparency, which is varied over [0,1] with a step size of 0.01. Figure <ref> (a) illustrates the evaluation results for the various geometries. As expected, the confidence of the ground-truth label decreases as α grows. In turn, the attack performance (represented by the number of blue points) rises with increasing transparency, as more content of the clean image is covered by a deeper pure color. Moreover, we report the frequencies of successful and failed attacks of RFLA-Triangle on 100 test images in Figure <ref> (b), which is consistent with the previous analysis.
Attack performance v.s. color.
The pattern of the adversarial perturbation is crucial for a successful attack. Unlike previous works that optimize pixel-wise perturbations, we focus on channel-wise perturbations (perturbing each channel with a single value), as we must ensure the perturbation is physically realizable by reflecting light. Furthermore, channel-wise perturbations are visually more acceptable than pixel-wise ones. Specifically, we select color tuples at intervals of 16 pixel values across the three RGB channels to investigate how color influences the attack performance. Figure <ref> illustrates the evaluation results. As we can observe, the successful cases cluster in specific areas near the searched optimal color tuple when the other variables are fixed. In other words, the optimal color tuple is robust to slight changes of color, which means our attack can withstand some distortion when applied in the physical world.
Attack performance v.s. position.
To investigate the influence of the position of the adversarial perturbation on attack performance, we fix the optimal variables except for the position. Then, we sample the position at intervals of two steps. Furthermore, we also provide the Grad-CAM for comparison. Figure <ref> shows the evaluation results. As we can see, adversarial geometries plotted around the content areas significantly drop the model's prediction confidence on the clean image. Meanwhile, the region of successful attacks is consistent with the model's attention area, which indicates that our method can automatically locate the model's attention areas to perform attacks.
§ CONCLUSION
In this paper, we propose a novel reflected light attack to realize effective and stealthy attacks in both the digital and physical worlds, which may impose potential risks on automatic driving systems. Specifically, to control the reflected light's position, geometry, and pattern, we exploit the characteristics of the circle and propose a general circle-based framework. To create a geometry, we first generate a specific number of angles to construct points on the circumference, and then apply point symmetry with respect to the center of the circle to generate the remaining points. The obtained points enclose a geometric shape whose fill color and transparency are optimized. Finally, we apply the PSO algorithm to find the best position, geometry, fill color, and transparency. Experimental results on digital and physical attacks verify the effectiveness of the proposed method. Moreover, our method can use not only sunlight but also flashlights to perform physical attacks, adapting to different environments.
Limitations. Though the reflected-light attack can perform in different environments, it is hard to remain effective in bad weather, such as fog and rain. A more penetrating light source (e.g., the traffic light and foglight) may work in such conditions.
Potential negative societal impact and mitigation. Similar to other types of attack, the adversarial attack inevitably causes potential security risks, especially for physically deployed systems. However, we aim to draw people's attention to such applications and encourage the development of defense techniques against the reflected-light attack. To thwart the RFLA attack proposed in this paper, one could develop multimodal-based DNN systems.
§ ACKNOWLEDGMENTS
The authors are grateful to the anonymous reviewers for their insightful comments. This work was supported by the National Natural Science Foundation of China (No.11725211 and 52005505).
§ IMPLEMENTATION DETAILS
In this section, we first introduce the lower and upper bounds of the particle and velocity. Recall that a particle q represents an optimization variable tuple (x, y, r, α, red, green, blue, a_1), whose lower bound is (0, 0, 10, 0, 0, 0, 0, 0) and whose upper bound is (H/2, W/2, 0.4×min(W, H), 0.7, 255, 255, 255, 360), except for the line. For RFLA-Line, the transparency α is set to 1, because the modification of the clean image caused by the line affects only a few pixels, even if the original pixel values are replaced. The velocity controls the movement speed of the particles: a large velocity would quickly drive a particle to the bound, while a small velocity makes the particle move slowly, requiring more optimization time. Thus, we set the bounds according to the concrete meaning of each variable. Specifically, we set the upper bound of the velocity as follows: the coordinates and the radius of the circle are set to 5, 5, and 10; the transparency α to 0.05; the color to 5; and the angle to 10, i.e., the upper bound of the velocity is initialized as v_upper = (5, 5, 10, 0.05, 5, 5, 5, 10). In contrast, the lower bound is set to v_lower=-v_upper.
The initialization of the proposed method is significant to optimization, which describes the variable that requires to be optimized. Algorithm <ref> describes the initialization of particles and the corresponding velocity generation for different geometries. Specifically, we generate population size S particles, i.e., the center o(x,y) and radius r of the circle. For each specific circle, we generate subpopulation size S_sub geometries, which consists of α, red, green, blue, a_1. Note that we fixed the coordinate and radius when spawning different topologies of the same geometrical shape at a specific circle. Therefore, we devise a novel variable to record which circle can generate the optimal geometrical shapes from an overall viewpoint, i.e., the q_sgbest, which is defined as
q_sgbest = min_i ∈ S∑^S_sub_j=1 F(q_i,j).
Moreover, the definitions of q_gbest and q_pbest are expressed as follows
q_gbest = min_i ∈ S, j ∈ S_sub F(q_i,j).
q_pbest^i = min_j ∈ S_sub F(q_i,j).
After generating the particles, we use Algorithm <ref> to generate adversarial examples. Specifically, we first obtain a point on the circle via the cosine and sine functions from the center coordinate, the radius, and an angle, and then calculate its symmetric point with respect to the center. We repeat this until enough points have been generated for the required geometric shape. Then, we sort the point set to avoid generating intersecting edges. Finally, we use the OpenCV package to plot the geometry on the clean image.
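A possible sketch of this plotting step, using alpha blending with OpenCV, is the following (function name ours; only the drawing calls come from the OpenCV API):

```python
import cv2
import numpy as np

def draw_geometry(image, points, color, alpha):
    """Blend a filled polygon, given by `points` sorted around the circle center,
    onto `image` (uint8, BGR) with transparency `alpha`."""
    overlay = image.copy()
    pts = np.array(points, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(overlay, [pts], color)                           # draw the filled shape
    return cv2.addWeighted(overlay, alpha, image, 1.0 - alpha, 0)
```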
§ EXPERIMENT AND RESULT ANALYSIS
To demonstrate the effectiveness of the proposed algorithm, we conducted an ablation study against a random search baseline. We fixed the light color (i.e., white sunlight) and only searched for the optimal position and geometry. Table <ref> reports the evaluation results, which show that our method significantly outperforms the random baseline: ours achieves an average ASR of 95.13%, which is 13.63% higher than the random baseline.
We also attempt to implement a targeted attack by modifying the fitness function. Specifically, we perform the experiment on WRN50 and set the target label to (y+1)%1000. Table <ref> lists the targeted attack results. As we can observe, the targeted attack performance is not satisfying compared with the white-box attack, with an average ASR of 21.26%. We speculate that the reason may be that the pure-color perturbation makes it hard to achieve a targeted attack. Moreover, we also found that the optimization time of the black-box attack is enormously increased compared with the white-box attack.
Model attention analysis can reveal how the attack algorithm works. To provide a more complete visual comparison of the different attack methods, we use the Grad-CAM tool to analyze the changes in the class activation map (CAM) caused by each attack algorithm. Specifically, we focus on the CAM of the original predicted class: since the predicted class of an adversarial example differs from the original prediction, its CAM also differs, whereas the changes in the CAM of the original prediction reflect how the attack operates. The comparison is illustrated in Figure <ref>. As we can observe, the proposed method disperses the CAM while the other methods do not. Looking carefully at the CAM of the proposed method, the plotted geometry suppresses the original CAM exactly over the region with semantic content. Therefore, the proposed method obtains superior attack performance.
In addition, we provide the complete transferability comparison in Table <ref>. As we can see, the proposed method achieves the best average ASR in both the white-box and black-box settings. One possible reason for the better transferability of the proposed method is that it can automatically locate decision-relevant regions that are common to different models. Moreover, the proposed method differs from full-pixel imperceptible perturbations: we generate adversarial examples by modifying partial image regions with a transparent color. Therefore, the image content of the modified region is maintained, which is the main difference compared to the patch-based attacks (e.g., TPA and DAPatch).
Finally, we also evaluate the performance of various attacks under various adversarial defenses.
|
http://arxiv.org/abs/2307.05111v1 | 20230711084145 | Density fluctuations for the multi-species stirring process | [
"Francesco Casini",
"Cristian Giardinà",
"Frank Redig"
] | math.PR | [
"math.PR",
"cond-mat.stat-mech",
"math-ph",
"math.MP"
] |
Density fluctuations for the multi-species stirring process
Francesco Casini, Cristian Giardinà, Frank Redig
August 12, 2023
==========================================================================================
We study the density fluctuations at equilibrium of the multi-species stirring process, a natural multi-type generalization of the symmetric (partial) exclusion process. In the diffusive scaling limit, the resulting process is a system of infinite-dimensional Ornstein-Uhlenbeck processes that are coupled in the noise terms. This shows that at the level of equilibrium fluctuations the species start to interact, even though at the level of the hydrodynamic limit each species diffuses separately. We also consider a generalization to a multi-species stirring process with a linear reaction term arising from species mutation. The general techniques used in the proof are based on the Dynkin martingale approach, combined with duality for the computation of the covariances.
§ INTRODUCTION
The symmetric exclusion process is a famous and well-studied particle system, where the hydrodynamic limit is the heat equation, and where the stationary fluctuations around the hydrodynamic limit are given by an infinite dimensional Ornstein-Uhlenbeck process
<cit.>, <cit.>, <cit.> <cit.>, <cit.>.
The large deviations from the hydrodynamic limit are also well-studied <cit.>, and because of integrability, in the simplest one-dimensional
setting with reservoirs the non-equilibrium steady state can be computed explicitly <cit.>, and as a consequence, the large deviation around the stationary non-equilibrium density profile can be computed <cit.>, <cit.>. Such explicit solvability of a model is very rare and in the case of the symmetric exclusion process a consequence of the fact that the Markov generator corresponds to an integrable spin chain (for the d=1 nearest neighbor setting) and that the model is self-dual (for the general symmetric model on any graph).
A simple and natural generalization of the symmetric exclusion process is the so-called symmetric partial exclusion process, where every vertex admits at most 2j particles, j∈1/2ℕ. The model is then still self-dual but no longer integrable for j>1/2. The maximum number of particles can be chosen depending on the vertex, without losing self-duality. For all these generalizations of the symmetric exclusion process, the hydrodynamic limit and the stationary fluctuations around the hydrodynamic limit can be obtained, and up to constants yield the same equations <cit.>.
At present, there is a growing interest in models with multiple conserved quantities, their hydrodynamic limit, and their fluctuations (often referred to as “non-linear fluctuating hydrodynamics”) <cit.>, as well as in “multi-layer” models, where effects such as uphill diffusion can be observed <cit.>. From the point of view of integrable systems or of systems with duality – the latter being a larger class – the construction of models with n conserved quantities is naturally linked with Lie algebras of higher rank, such as 𝔰𝔲(n), with n>2
(or the deformed universal enveloping algebra U_q(𝔰𝔲(n)) for the asymmetric companion model).
Several multi-species versions of the ASEP process have been introduced and their dualities have been studied, such as the particle exchange process (PEP) <cit.> or the multi-species ASEP(q,j) <cit.>.
In the symmetric context, the simplest choice of a multi-species model is obtained by considering the coproduct of the quadratic Casimir
of 𝔰𝔲(𝔫), copied along the edges of a graph (see <cit.> for the model on a finite graph and <cit.>
for the boundary-driven version).
If one chooses a spin-j discrete representation, one arrives at
the multi-species stirring process, which is
the most natural multi-species generalization of the symmetric exclusion process.
In this model at each site there are at most 2j particles whose type (or color) can be chosen among n available types. In other words each site contains a pile of height 2j which is made of particles of different types and some holes.
The configuration space of the process
is denoted S_n^V where V is the vertex set and the single-vertex state space S_n is the set of n+1 tuples of integers of which the sum equals 2j,
i.e.
S_n= {(η_0, η_1,…, η_n) : η_k∈{0,1,…,2j} satisfying ∑_k=0^nη_k= 2j}.
A configuration of particles at site x∈ V is denoted by η^x= (η_0^x, η_1^x,…, η_n^x) with η_0^x giving the number of holes
and η_k^x specifying the numbers
of particles of type k, with k∈{1,…,n}.
The rate at which a particle of type k at site x is exchanged with a particle of type l at site y is given by
c(x,y) η^x_k η^y_l
where c(x,y) is a symmetric and non-negative conductance associated with the
edge (x,y).
In our paper the underlying vertex set will be always V=ℤ^d, and we will only allow nearest neighbor jumps.
However, for the model to be self-dual, only symmetry of c(x,y) is important.
Notice that, if we stop distinguishing species we retrieve the classical partial exclusion process. See Figure <ref> for an illustration of the process with two colours.
In this paper, for the sake of simplicity, we study the multispecies stirring model in the simplest setting where the vertex set is ℤ (proofs are similar if we choose ℤ^d, d>1) with nearest neighbor edges, with a general number n∈ℕ of species and a general value of the spin j (or equivalently maximal occupancy 2j).
We consider the stationary density fluctuation field (Y^N,t)_t≥ 0 of the n species (only for species different from 0, since the hole dynamics is determined by the dynamics of the other types) and show that in the diffusive re-scaling of space and time, this
field converges as N→∞ to the solution of an n-dimensional SPDE of Ornstein-Uhlenbeck type given by
dY^t =2j (AY^t dt +√(2Σ)∇ dW^t)
The operator-valued matrix A is simply given by Δ I, with I the identity matrix and Δ =∂_xx, and corresponds to the hydrodynamic limit, which
is a system of uncoupled heat equations (besides exclusion, the species do not interact). The matrix Σ is however non-diagonal, showing that on the level of fluctuations interaction between the different species becomes visible. The stationary distribution is a product of multinomials and the matrix Σ is the covariance matrix of a multinomial distribution. Equation (<ref>) is the natural generalization of the Ornstein-Uhlenbeck process which describes the density fluctuations of the symmetric exclusion process, where the coefficient in front of the conservative noise is the square-root of the variance
of the Bernoulli distribution.
This can then be generalized to a setting where reactions (spontaneous species change) are allowed. Then also a non-conservative noise term appears and the operator A corresponds to a (linear) reaction-diffusion system.
§.§ Organization of the paper
The rest of our paper is organized as follows. In Section <ref> we describe in detail the multi-species stirring process on a line, together with its hydrodynamic limit, and we state our main result, i.e. Theorem <ref>. The proof of this result is obtained in four main steps, presented in the subsequent sections. First, in Section <ref> we prove some convergence properties of the Dynkin martingales associated with the density fluctuation field. This is used in Section <ref> for the proof of tightness. In Section <ref>, we apply duality to compute the covariances of the limiting process.
Finally, in Section <ref>, we show that the limit point (which exists by tightness) is unique and solves the martingale problem associated with the limiting process. In Section <ref> we generalize Theorem <ref> to a multi-type stirring process in which a mutation of types (reaction) is also allowed. In Section <ref> we draw the conclusions of our analysis and in Appendix <ref> we prove the hydrodynamic limits.
§.§ Acknowledgments
F.C. thanks Delft Institute of Applied Mathematics for hospitality and support for a period of three months, during which this work has been performed.
This work has been conducted under the auspices of INdAM-Istituto Nazionale di Alta Matematica. We thank Patricia Gonçalves, Gunter Schütz and Hidde Van Wiechen for useful discussions.
§ THE EQUILIBRIUM FLUCTUATION FOR THE STIRRING PROCESS
§.§ Process definition
The interacting particle system is defined on the regular one-dimensional lattice ℤ. To each site and each time we associate a vector η^x(t)=(η_0^x(t),…,η_n^x(t)), where the α-th component η_α^x(t) denotes the occupation variable of species α∈{0,…,n}. The labels 1,…,n denote the "true" species, while 0 plays the role of the hole (absence of a particle). The process on the whole lattice is denoted by (η(t))_t≥ 0. The maximal occupation of each site is assumed to be fixed and equal to 2j, where j∈ℕ/2. Therefore, at any time there can be at most 2j particles at each site.
This is encoded in the state space definition
Ω:=S_n^ℤ= {η=(η_0,η_1,…,η_n) : η_k∈{0,1,…,2j} satisfying ∑_k=0^nη_k=2j}^ℤ.
Let us notice that the constraint expressed in the state space can be thought of as a dependence of the number of holes on the other types, i.e. at each site x∈ℤ
η_0^x=2j-η_1^x-…-η_n^x.
We assume nearest neighbor jumps. The infinitesimal generator of the process acting on local functions f:Ω→ℝ is given by
ℒf(η)=∑_x∈ℤ∑_k,l=0^nη_k^xη_l^x+1[f(η-δ_k^x+δ_l^x+δ_k^x+1-δ_l^x+1)-f(η)]
where
(δ_k^x)_l^y=
1 if l=k and y=x
0 otherwise.
The interpretation of this generator is that particles of species k,l∈{0,1,…,n} present at sites x,x+1∈ℤ respectively, are exchanged with rate η_k^xη_l^x+1.
If we stop distinguishing the types of particles, we retrieve the partial exclusion process (SEP(2j)), since the constraint becomes
η_0^x=2j-η_1^x ∀ x∈ℤ
thus, the only non-zero rates are of the form η_1^x(2j-η_1^x+1).
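For illustration (this is not part of the paper), the stirring dynamics defined by the exchange rates η_k^xη_l^x+1 can be simulated on a finite segment with a standard Gillespie scheme, as in the following sketch; the state is stored as an array eta of shape (L, n+1), whose row x is (η_0^x,…,η_n^x).

```python
import numpy as np

def stirring_step(eta, rng):
    """One Gillespie step: pick an edge (x, x+1) and a species pair (k, l) with
    probability proportional to eta[x, k] * eta[x+1, l], then exchange them."""
    L, S = eta.shape
    rates = np.array([np.outer(eta[x], eta[x + 1]).sum() for x in range(L - 1)])
    total = rates.sum()                            # here total = (L-1) * (2j)^2
    dt = rng.exponential(1.0 / total)              # exponential waiting time
    x = rng.choice(L - 1, p=rates / total)         # choose an edge
    pair = np.outer(eta[x], eta[x + 1]).ravel()
    kl = rng.choice(S * S, p=pair / pair.sum())    # choose the species pair
    k, l = divmod(kl, S)
    eta[x, k] -= 1; eta[x, l] += 1                 # eta - delta_k^x + delta_l^x
    eta[x + 1, l] -= 1; eta[x + 1, k] += 1         #     + delta_k^{x+1} - delta_l^{x+1}
    return dt

rng = np.random.default_rng(0)
two_j, p = 4, np.array([0.5, 0.3, 0.2])            # 2j = 4, holes plus two species
eta = rng.multinomial(two_j, p, size=20)           # product-multinomial initial condition
t = sum(stirring_step(eta, rng) for _ in range(1000))
```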
As already proved in <cit.> the reversible measure of this process is
ν_p=⊗_x∈ℤ MN(2j; p)
where MN(2j; p) denotes the Multinomial distribution with 2j independent trials and success probabilities p=(p_0,…,p_n), with p_0+p_1+…+p_n=1.
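Reversibility can also be checked numerically via detailed balance on two sites; the following small sketch (ours) verifies ν_p(η) c(η,ξ) = ν_p(ξ) c(ξ,η) for all single exchanges, with 2j=2 and n=2.

```python
import itertools
import math

def mn_pmf(counts, p):
    """Probability mass function of the Multinomial(sum(counts); p) distribution."""
    coef = math.factorial(sum(counts))
    for c in counts:
        coef //= math.factorial(c)
    prob = 1.0
    for c, pi in zip(counts, p):
        prob *= pi ** c
    return coef * prob

two_j, p = 2, (0.5, 0.3, 0.2)
states = [s for s in itertools.product(range(two_j + 1), repeat=3) if sum(s) == two_j]
for ex, ey in itertools.product(states, repeat=2):        # configurations at the two sites
    nu = mn_pmf(ex, p) * mn_pmf(ey, p)
    for k, l in itertools.product(range(3), repeat=2):
        if ex[k] == 0 or ey[l] == 0:
            continue                                      # exchange rate is zero
        fx, fy = list(ex), list(ey)
        fx[k] -= 1; fx[l] += 1; fy[l] -= 1; fy[k] += 1    # exchange k at x with l at y
        nu_new = mn_pmf(fx, p) * mn_pmf(fy, p)
        assert abs(nu * ex[k] * ey[l] - nu_new * fx[l] * fy[k]) < 1e-12
```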
§.§ Hydrodynamic limit
In Theorem <ref> we state the hydrodynamic behavior of the multi-species stirring process. The proof is based on standard arguments and is reported in Appendix <ref>.
We introduce the density field of species α∈{1,…,n}. For any ϕ∈ C_c^∞(ℝ) this field is defined as
X_α^N,t(·): C_c^∞(ℝ)→ℝ
ϕ→ X_α^N,t(ϕ)=1/N∑_x∈ℤϕ(x/N)η_α^x(tN^2)
where N∈ℕ is the scaling parameter. To state the hydrodynamic limit, we need an assumption on the behavior of the density field at the initial time. This assumption is written in Definition <ref>.
Let ρ^(α): ℝ→ [0,2j], with α∈{1,…,n}, be a continuous function called the initial macroscopic profile of species α. A sequence (μ_N)_N ∈ℕ of measures on Ω, is a sequence of compatible initial conditions if
∀α∈{1,…,n}, ∀δ>0:
lim_N→∞μ_N(|X_α^N,0(ϕ)-∫_ℝϕ(u)ρ^(α)(u)du|>δ)=0
with arbitrary ϕ∈ C_c^∞(ℝ).
We state the following result
Let ρ^(α) be an initial macroscopic profile of species α∈{1,…,n} and let (μ_N)_N∈ℕ be a sequence of compatible initial measures. P_N denotes the law of the process (X_1^N,t(ϕ),…,X_n^N,t(ϕ)) induced by (μ_N)_N∈ℕ. Then, ∀ T>0, δ>0, ∀α∈{1,…,n} and ∀ϕ∈ C_c^∞(ℝ)
lim_N→∞P_N(sup_t∈ [0,T]| X_α^N,t(ϕ)-∫_ℝϕ(u)ρ^(α)(u,t)du|>δ)=0
where ρ^(α)(x,t) is a strong solution of the PDE Cauchy problem
∂_tρ^(α)(x,t)=(2j)Δρ^(α)(x,t) x∈ℝ, t∈ [0,T]
ρ^(α)(x,0)=ρ^(α)(x)
§.§ Limiting process of the density fluctuation field
We consider the setting where the process (η(t))_ t≥ 0 starts from equilibrium, i.e. from a reversible measure for which we have fixed the probabilities p=(p_0,…,p_n) once and for all.
Then the density fluctuation field for a species α∈{1,…,n} is a random distribution, i.e., a random element of (C_c^∞(ℝ))^* defined via:
Y_α^N,t(·): C_c^∞(ℝ)→ℝ
ϕ→ Y_α^N,t(ϕ)=1/√(N)∑_x∈ℤϕ(x/N)(η_α^x(tN^2)-(2j)p_α)
where (2j)p_α=𝔼_ν_p[η_α^x].
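For concreteness, on a finite configuration the field can be evaluated as in the following sketch (ours, for illustration only):

```python
import numpy as np

def fluctuation_field(eta_alpha, phi, N, two_j, p_alpha):
    """Evaluate Y_alpha^{N}(phi) for the occupations eta_alpha of species alpha
    on the sites x = 0, ..., L-1 (phi is a compactly supported test function)."""
    x = np.arange(len(eta_alpha))
    return np.sum(phi(x / N) * (eta_alpha - two_j * p_alpha)) / np.sqrt(N)
```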
We call Q_N the law of the random vector process (Y^N,t)_t≥ 0 =(Y_1^N,t,…,Y_n^N,t)_t≥ 0 and 𝔼 the expectation with respect to this law. Note that, because (η(t))_t≥ 0 is initialized from the reversible measure (<ref>), the process keeps the product measure structure for every time t≥ 0. We denote by
(C_c^∞(ℝ))^*_n=(C_c^∞(ℝ))^*×…×(C_c^∞(ℝ))^*_n times
the dual space of (C_c^∞(ℝ))^n.
Our main result is the following theorem.
There exists a unique random element (Y_1^t,…,Y_n^t)_t∈[0,T]∈ C([0,T];(C_c^∞(ℝ))^*_n) with law Q such that
Q_N→ Q weakly for N→∞.
Moreover, for every α∈{1,…,n}, (Y_α^t)_t≥0 is a generalized stationary Ornstein-Uhlenbeck process solving the following martingale problem:
M_α,ϕ^t:=Y_α^t(ϕ)-Y_α^0(ϕ)-(2j)∫_0^tY_α^s(Δϕ)ds
is a martingale ∀ϕ∈ C_c^∞(ℝ) with respect to the natural filtration (ℱ_t)_t∈ [0,T] of (Y^t_1,…,Y_n^t)_t∈ [0,T] with quadratic covariation
[ M_α,ϕ,M_β,ϕ]_t=-2t(2j)^2p_αp_β∫_ℝ(∇ϕ(u))^2du
and quadratic variation
[ M_α,ϕ]_t= 2t(2j)^2p_α(1-p_α)∫_ℝ(∇ϕ(u))^2du
The above martingale problem can be restated by requiring that (<ref>) and
𝒩^t_α,β,ϕ=M_α,ϕ^tM_β,ϕ^t+2t(2j)^2p_αp_β∫_ℝ(∇ϕ(u))^2du
𝒩^t_α,α,ϕ=(M_α,ϕ^t)^2-2t(2j)^2p_α(1-p_α)∫_ℝ(∇ϕ(u))^2du
are martingales with respect to the natural filtration (ℱ_t)_t∈ [0,T].
Theorem <ref> suggests that the limiting process
(Y^t)_t∈ [0,T]=(Y^t_1,…,Y_n^t)_t∈[0,T]
can be formally written as the solution of the distribution-valued SPDE
dY^t=2j(AY^tdt+√(2Σ)∇ dW^t)
where
(W^t)_t∈ [0,T]=((W^t_1,…,W_n^t))_t∈ [0,T]
is an n-dimensional vector of independent space-time white noises. The matrices are the following
A=[ Δ 0 … 0; 0 Δ … 0; ⋮ ⋮ ⋱ ⋮; 0 0 … Δ ] Σ=[ p_1(1-p_1) -p_1p_2 … -p_1p_n; -p_1p_2 p_2(1-p_2) … -p_2p_n; ⋮ ⋮ ⋱ ⋮; -p_np_1 -p_np_2 … p_n(1-p_n) ]
and Σ is positive semi-definite. The covariances of (<ref>) ∀ t∈ [0,T] are given by:
(i)
when α≠β
Cov(Y_α^t(ϕ),Y_β^0(ψ))=-(2j)p_αp_β⟨ S_tϕ,ψ⟩_L^2(dx)
(ii) when α=β
Cov(Y_α^t(ϕ),Y_α^0(ψ))=(2j)p_α(1-p_α)⟨ S_tϕ,ψ⟩_L^2(dx)
where (S_t)_t≥ 0 is the transition semigroup of the Brownian motion (B_2j(t))_t≥ 0
with variance 2j t.
At the initial time t=0, the distribution of the limiting process is Gaussian with covariances given by
Cov(Y_α^0(ϕ),Y_β^0(ψ))=-(2j)p_αp_β⟨ϕ,ψ⟩_L^2(dx) Cov(Y_α^0(ϕ),Y_α^0(ψ))=(2j)p_α(1-p_α)⟨ϕ,ψ⟩_L^2(dx)
for α≠β and α=β respectively. This is a direct consequence of the central limit theorem, because the initial measure is product of multinomials.
The proof of Theorem <ref> consists in the following steps: firstly we show that the sequence of measures (Q_N)_N∈ℕ is tight and converges to a unique limit point Q; secondly we show that at the initial time t=0 the process is Gaussian and has covariances given by
Cov(Y_α^0(ϕ),Y_β^0(ψ))=-(2j)p_αp_β⟨ϕ,ψ⟩_L^2(dx), Cov(Y_α^0(ϕ),Y_α^0(ψ))=(2j)p_α(1-p_α)⟨ϕ,ψ⟩_L^2(dx).
Finally, we prove that Q solves the martingale problem for any t∈[0,T]. As shown in Section 4, Chapter 11 of <cit.>, these steps are equivalent to saying that Q is the unique solution of the martingale problem and, furthermore they allow to find the transition probabilities of the Markov process (Y_t)_t∈[0,T]. We observe that the Gaussianity of the limiting process at initial time t=0 is a consequence of the central limit theorem and of the fact that, for every x∈ℤ, η^x=(η_0^x,…,η_n^x) is distributed with the reversible Multinomial measure (<ref>).
Preliminarily, we need some convergence properties of the Dynkin martingale associated with the density fluctuation field. Thus, we split the proof of Theorem <ref> as follows:
* L^2 convergence of Dynkin's martingale to (<ref>), Section <ref>.
* Tightness of (Q_N)_N∈ℕ, using Aldous' criterion <cit.>, Section <ref>.
* Space-time covariances. This will be done using duality, Section <ref>.
* Uniqueness of the limiting distribution Q and solution of the martingale problem, Section <ref>.
§ CONVERGENCE OF MARTINGALES
§.§ The Dynkin martingale
We recall some basic facts about Dynkin martingales associated to Markov processes (for details see <cit.>).
Let 𝒢 be the generator of a Markov pure jump process (θ(t))_t≥ 0 with state space χ and transition rates c(θ,ξ) to jump from θ to ξ.
The generator reads
𝒢f(θ) =∑_ξ c(θ, ξ)(f(ξ)- f(θ)).
For a function f:χ→ℝ the following quantity is a Dynkin martingale with respect to the natural filtration
M_t^f:=f(θ(t))-f(θ(0))-∫_0^t𝒢f(θ_s)ds.
The quadratic covariation is given by
[M^f,M^g]_t:=∫_0^tΓ^f,g,s(θ_s)ds
where Γ^f,g is the Carré du Champ operator defined as
Γ^f,g=(𝒢fg)-g(𝒢f)-f(𝒢g).
Using the form (<ref>) of the generator, it is possible to rewrite the above expression as
Γ^f,g(θ)=∑_ξ∈χc(θ,ξ)(f(ξ)-f(θ))(g(ξ)-g(θ)).
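The equivalence of the two expressions for the Carré du Champ operator can be checked numerically on a toy finite-state jump process, as in the following sketch (the setup is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
S = 5                                                 # number of states of a toy jump process
C = rng.random((S, S))                                # jump rates c(theta, xi)
np.fill_diagonal(C, 0.0)
G = C - np.diag(C.sum(axis=1))                        # generator: (Gf)(th) = sum_xi c(th,xi)(f(xi)-f(th))
f, g = rng.random(S), rng.random(S)

carre_alg = G @ (f * g) - g * (G @ f) - f * (G @ g)   # G(fg) - g(Gf) - f(Gg)
carre_sum = np.array([sum(C[t, x] * (f[x] - f[t]) * (g[x] - g[t]) for x in range(S))
                      for t in range(S)])
assert np.allclose(carre_alg, carre_sum)              # the two expressions coincide
```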
Applying the scheme above to the process (η(tN^2))_t≥ 0 characterized by the generator (<ref>) and taking, for any ϕ∈ C_c^∞(ℝ), the function f(η(t))=Y_α^N,t(ϕ), we define the following Dynkin martingale
M_α,ϕ^N,t:=Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-∫_0^tN^2ℒY_α^N,s/N^2(ϕ)ds
where Y_α^N,t(ϕ) denotes the equilibrium fluctuation field for the species α defined in (<ref>). Observe that the last term of the above martingale arises as follows: Dynkin's formula at the non-accelerated time gives the martingale
Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-∫_0^tℒY_α^N,s(ϕ)ds;
speeding up time by replacing t with tN^2 and then performing the change of integration variable s↦ s/N^2 (so that ds is replaced by N^2ds), we obtain (<ref>).
The quadratic covariation is
[ M^N_α,ϕ,M^N_β,ϕ]_t= ∫_0^tN^2Γ_α,β^ϕ,s/N^2ds
where, for a generic s≥ 0
Γ_α,β^ϕ,t:=ℒ(Y^N,t_α(ϕ)Y_β^N,t(ϕ))-Y_α^N,t(ϕ)ℒ(Y_β^N,t(ϕ))-Y_β^N,t(ϕ)ℒ(Y_α^N,t(ϕ)).
Using (<ref>), this can be written as
Γ_α,β^ϕ,s=∑_x∈ℤ∑_k,l=0^nη_k^xη_l^x+1[Y^N,s_α,k,l(ϕ)-Y^N,s_α(ϕ)][Y^N,s_β,k,l(ϕ)-Y^N,s_β(ϕ)]
where Y^N,s_α,k,l(ϕ) is a shorthand for the equilibrium fluctuation field computed in the configuration η(N^2s)-δ_k^x+δ_l^x+δ_k^x+1-δ_l^x+1.
We further introduce the following family of Doob's martingales
𝒩_α,β,ϕ^N,t=M_α,ϕ^N,tM_β,ϕ^N,t-∫_0^tN^2Γ_α,β^ϕ,s/N^2ds ∀α,β∈{1,…,n}
which will be useful in the analysis.
Often, in the following, to alleviate notation we do not write the time dependence explicitly, i.e.
Γ_α,β^ϕ =1/N∑_x∈ℤ∑_k,l=0^nη_k^xη_l^x+1[∑_y∈ℤϕ(y/N)((η_α^y-δ_k^x+δ_l^x+δ_k^x+1-δ_l^x+1)-η_α^y)]
·[∑_z∈ℤϕ(z/N)((η_β^z-δ_k^x+δ_l^x+δ_k^x+1-δ_l^x+1)-η_β^z)]
In principle we should consider Γ_α,β^ϕ,ψ,s, underlining the fact that the test function could also depend on the species. However, Γ_α,β^ϕ,ψ,s is bilinear and symmetric with respect to the test functions; therefore, by the polarization identity, it is enough to evaluate Γ_α,β^ϕ,ϕ,s. We will denote it by Γ_α,β^ϕ,s for the sake of notational simplicity. Bilinearity is clear. We prove the symmetry. To alleviate the notation we do not write the time dependence explicitly here:
Γ_α,β^ϕ,ψ =1/N∑_x∈ℤ∑_k,l=0^nη_k^xη_l^x+1[∑_y∈ℤϕ(y/N)((η_α^y-δ_k^x+δ_l^x+δ_k^x+1-δ_l^x+1)-η_α^y)]
·[∑_z∈ℤψ(z/N)((η_β^z-δ_k^x+δ_l^x+δ_k^x+1-δ_l^x+1)-η_β^z)]
=
1/N∑_x∈ℤ∑_k,l=0^nη_k^xη_l^x+1[ϕ(x/N)(η_α^x-δ_k^x+δ_l^x-η_α^x)+ϕ(x+1/N)(η_α^x+1+δ_k^x+1-δ_l^x+1-η_α^x+1)]
· [ψ(x/N)(η_β^x-δ_k^x+δ_l^x-η_β^x)+ψ(x+1/N)(η_β^x+1+δ_k^x+1-δ_l^x+1-η_β^x+1)]
=
1/N∑_x∈ℤ{η_α^xη_β^x+1[ϕ(x/N)(-1)+ϕ(x+1/N)(+1)][ψ(x/N)(+1)+ψ(x+1/N)(-1)].
+
.η_β^xη_α^x+1[ϕ(x/N)(+1)+ϕ(x+1/N)(-1)][ψ(x/N)(-1)+ψ(x+1/N)(+1)]}
=
-1/N∑_x∈ℤ(η_α^xη_β^x+1+η_β^xη_α^x+1)[ϕ(x+1/N)-ϕ(x/N)][ψ(x+1/N)-ψ(x/N)].
This expression is clearly symmetric in ϕ and ψ.
In the following, we will denote by C and (C_i)_i∈ℕ generic finite and positive constants.
§.§ Convergence of Dynkin's martingale
Here we state and prove some convergence properties of the family of martingales (M_α,ϕ^N,t)_α∈{1,…,n} and (𝒩_α,β,ϕ^N,t)_α,β∈{1,…,n} when N→∞. We formulate this in Proposition <ref>. This result will be useful in the proof of tightness and uniqueness of the limit point of the sequence of measures (Q_N)_N∈ℕ.
For all ϕ∈ C_c^∞(ℝ) and ∀ t∈ [0,T] we have the following convergences:
* ∀α∈{1,…,n}
lim_N→∞𝔼[( M_α,ϕ^N,t-Y_α^N,t(ϕ)+Y_α^N,0(ϕ)+2j∫_0^tY_α^N,s/N^2(Δϕ)ds)^2]=0
* ∀α,β∈{1,…,n}
lim_N→∞𝔼 [(𝒩_α,β,ϕ^N,t-(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds)..
.. (Y_β^N,t(ϕ)-Y_β^N,0(ϕ)-2j∫_0^tY_β^N,s/N^2(Δϕ)ds)+2t(2j)^2p_αp_β∫_ℝ∇(ϕ(u))^2du)^2]=0
when α≠β and
lim_N→∞𝔼 [(𝒩_α,α,ϕ^N,t-(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds)^2..
- ..2t(2j)^2p_α(1-p_α)∫_ℝ(∇ϕ(u))^2dx)^2]=0
when α=β.
To prove Proposition <ref> we need two intermediate results that we state in Lemma <ref> and in Lemma <ref>.
For all ϕ∈ C_c^∞(ℝ), for all α,β∈{1,…,n} we have
lim_N→∞𝔼[(N^2 Γ_α,β^ϕ+2(2j)^2p_αp_β∫_ℝ(∇ϕ(u))^2du)^2]=0 for α≠β
lim_N→∞𝔼[(N^2Γ_α,α^ϕ-2(2j)^2p_α(1-p_α)∫_ℝ(∇ϕ(u))^2du)^2]=0 for α= β
Proof:
We will only prove (<ref>), since the proof of (<ref>) is similar. L^2(ν_p) convergence (<ref>) is equivalent to showing the following L^1(ν_p) convergence
lim_N→∞N^2𝔼[Γ_α,β^ϕ]= -2(2j)^2p_αp_β∫_ℝ(∇ϕ(u))^2du
and a vanishing variance
lim_N→∞Var(N^2Γ_α,β^ϕ)=0.
We start by proving (<ref>). Using (<ref>) we write
Γ_α,β^ϕ =1/N∑_x∈ℤ∑_k,l=0^nη_k^xη_l^x+1[∑_y∈ℤϕ(y/N)((η_α^y-δ_k^x+δ_l^x+δ_k^x+1-δ_l^x+1)-η_α^y)]
·[∑_z∈ℤϕ(z/N)((η_β^z-δ_k^x+δ_l^x+δ_k^x+1-δ_l^x+1)-η_β^z)]
=
-1/N∑_x∈ℤ(η_α^xη_β^x+1+η_β^xη_α^x+1)(ϕ(x+1/N)-ϕ(x/N))^2.
By the Taylor's formula with the Lagrange remainder we have
(ϕ(x+1/N)-ϕ(x/N))^2 =1/N^2∇ϕ(x/N)^2+1/N^41/4(Δϕ(x+θ^+/N))^2
+1/N^31/2(∇ϕ(x/N)Δϕ(x+θ^+/N)+∇ϕ(x/N)Δϕ(x+θ^+/N))
where θ^+∈ [0,x]. We thus obtain
N^2Γ_α,β^ϕ=- 1/N∑_x∈ℤ(η_α^xη_β^x+1+η_β^xη_α^x+1)∇ϕ(x/N)^2+o(1/N).
Therefore
lim_N→∞N^2𝔼[Γ_α,β^ϕ]=
lim_N→∞[ -1/N∑_x∈ℤ𝔼[η_α^xη_β^x+1+η_α^x+1η_β^x]∇ϕ(x/N)^2]=
-2(2j)^2p_αp_β∫_ℝ(∇ϕ(u))^2du
and (<ref>) is proved. To prove (<ref>) we need the second moment.
We have
𝔼[(N^2Γ_α,β^ϕ)^2] =1/N^2∑_x,y∈ℤ∇ϕ(x/N)^2∇ϕ(y/N)^2𝔼[(η_α^xη_β^x+1+η_β^xη_α^x+1)(η_α^yη_β^y+1+η_β^yη_α^y+1)]+o(1/N^2)
=
4(2j)^4p_α^2p_β^21/N^2∑_x,y∈ℤ∇ϕ(x/N)^2∇ϕ(y/N)^2+o(1/N^2).
By taking the limit
lim_N→∞𝔼[(N^2Γ_α,β^ϕ)^2]= 4(2j)^4p_α^2p_β^2(∫_ℝ(∇ϕ(u))^2du)^2.
Therefore, using (<ref>), we have
lim_N→∞Var(N^2 Γ_α,β^ϕ)=lim_N→∞𝔼[(N^2Γ_α,β^ϕ)^2]-lim_N→∞(𝔼[N^2Γ_α,β^ϕ])^2= 0
□
For all ϕ∈ C_c^∞(ℝ), for all α,β∈{1,…,N} and for all t∈ [0,T] we have
lim_N→∞𝔼 [{M^N,t_α,ϕM^N,t_β,ϕ-(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds)..
..(Y_β^N,t(ϕ)-Y_β^N,0(ϕ)-2j∫_0^tY_β^N,s/N^2(Δϕ)ds)}^2]=0 for α≠β
lim_N→∞𝔼[{(M_α,ϕ^N,t)^2-(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds)^2}^2]=0 for α= β
Proof: We prove only the convergence (<ref>) since (<ref>) can be proved similarly.
By Cauchy-Schwartz inequality
𝔼[((M_α,ϕ^N,t)^2-(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds)^2)^2]
≤ (𝔼[((M_α,ϕ^N,t)-(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds))^4]_A_N
·𝔼[((M_α,ϕ^N,t)+(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds))^4]_B_N)^1/2.
We will prove that the term denoted by A_N goes to zero when N→∞ while the term B_N remains finite.
Proof that lim_N→∞A_N=0: we first compute the action of the generator on the fluctuation field:
ℒY_α^N(ϕ) =1/√(N)∑_x∈ℤ∑_k,l=0^nη_k^xη_l^x+1[∑_y∈ℤϕ(y/N)((η_α^y-δ_k^x+δ_l^x+δ_k^x+1-δ_l^x+1-2j p_α)-η_α^y+2j p_α)]
=
1/√(N)∑_x∈ℤ∑_k,l=0^nη_k^xη_l^x+1[ϕ(x+1/N)((η_α^x+1+δ_k^x+1-δ_l^x+1)-η_α^x+1).
+.ϕ(x/N)((η_α^x-δ_k^x+δ_l^x)-η_α^x)]
=
1/√(N)∑_x∈ℤ{η_α^x∑_l=0 : l≠α^nη_l^x+1[ϕ(x+1/N)-ϕ(x/N)]+η_α^x+1∑_k=0 : k≠α^nη_k^x[ϕ(x/N)-ϕ(x+1/N)]}
=
1/√(N)∑_x∈ℤ{η_α^x(2j-η_α^x+1)[ϕ(x+1/N)-ϕ(x/N)]+η_α^x+1(2j-η_α^x)[ϕ(x/N)-ϕ(x+1/N)]}
=
2j/√(N)∑_x∈ℤη_α^x[ϕ(x-1/N)+ϕ(x+1/N)-2ϕ(x/N)].
Using Taylor's series with Lagrange remainder implies
ϕ(x+1/N)+ϕ(x-1/N)-2ϕ(x/N)=1/N^2Δϕ(x/N)+1/61/N^3[ϕ^(3)(x+θ^+/N)-ϕ^(3)(x-θ^-/N)]
where θ^+,θ^-∈ [0,x].
Observing further that
∑_x∈ℤ2jp_α[ϕ(x-1/N)+ϕ(x+1/N)-2ϕ(x/N)]=0
we obtain
N^2ℒY_α^N,·(ϕ) =(2j)/√(N)∑_x∈ℤ(η_α^x-2j p_α)Δϕ(x/N)+R_1(ϕ,α)
where
R_1(ϕ,α,·)=(2j)/N^3/2∑_x∈ℤη_α^x[1/6[ϕ^(3)(x+θ^+/N)-ϕ^(3)(x-θ^-/N)]].
Therefore, we find an upper bound for A_N
𝔼[(M_α,ϕ^N,t-(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds))^4] =(2j)^4𝔼[(∫_0^tR_1(ϕ,α,s)ds)^4]
≤ C ∫_0^T𝔼[R_1(ϕ,α,s)^4]ds
where in the last inequality we used Fubini's theorem and Hölder's inequality with exponents 4 and 4/3.
The set ∪_k=0^2supp(d^k/dx^kϕ) is compact in ℝ. We call
𝒜:=N (∪_k=0^2supp(d^k/dx^kϕ))∩ℤ.
Then, we bound from above the expectation inside the integral as follows
𝔼[R_1(ϕ,α,·)^4]≤1/N^6∑_x_1,x_2,x_3,x_4∈𝒜𝔼[∏_i=1^4(η_α^x_i-2j p_α)]‖Δϕ‖_∞.
The only terms that survive in the expectation are:
(η_α^x_i-2jp_α)^2(η_α^x_j-2j p_α)^2 and (η_α^x_i-2j p_α)^4, ∀ i,j∈{1,2,3,4} : i≠ j.
The moment generating function of a Multinomial(2j; p_0,p_1,…,p_n) vector (X_0,…,X_n) is
M(t)=𝔼[∏_r=0^ne^X_rt_r]=(∑_i=0^np_ie^t_i)^2j.
We can compute explicitly
𝔼[(η_α^x-2j p_α)^4]=f(p_α)
𝔼[(η_α^x-2j p_α)^2(η_α^y-2j p_α)^2]=g(p_α)
where f,g are polynomials of fourth order in p_α, bounded from above by a finite positive constant. The cardinality of the set 𝒜 is bounded by |𝒜|≤ C N. As a consequence
∑_x_1,x_2,x_3,x_4∈𝒜𝔼[∏_i=1^4(η_α^x_i-2j p_α)]=∑_x∈𝒜f(p_α)+∑_x,y∈𝒜g(p_α)≤ N^2C
Therefore
𝔼[R_1(ϕ,α,·)^4]≤N^2/N^6C‖Δϕ‖_∞.
and by taking the limit
lim_N→∞𝔼[R_1(ϕ,α,·)^4]=0
Recalling (<ref>) this implies that lim_N→∞A_N=0.
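The moment computations above are easy to check numerically. The following sketch (a standalone illustration, not part of the proof; all variable names are ours) samples i.i.d. Multinomial(2j,(p_0,…,p_n)) site variables, estimates the centered fourth moment f(p_α) and the mixed moment g(p_α) by Monte Carlo, and compares them with the textbook fourth central moment of the Binomial(2j,p_α) marginal, confirming that both are finite constants independent of N.

```python
import numpy as np

rng = np.random.default_rng(0)

two_j = 4                              # 2j particles per site
p = np.array([0.4, 0.2, 0.3, 0.1])     # (p_0, p_1, ..., p_n): holes plus n = 3 species
alpha = 1                              # species under scrutiny
n_samples = 200_000

# occupation numbers eta^x ~ Multinomial(2j, p), independently over two sites x and y
eta_x = rng.multinomial(two_j, p, size=n_samples)
eta_y = rng.multinomial(two_j, p, size=n_samples)
cx = eta_x[:, alpha] - two_j * p[alpha]          # eta_alpha^x - 2j p_alpha
cy = eta_y[:, alpha] - two_j * p[alpha]

pq = p[alpha] * (1.0 - p[alpha])
f_exact = two_j * pq * (1.0 + 3.0 * (two_j - 2) * pq)   # 4th central moment of Binomial(2j, p_alpha)
g_exact = (two_j * pq) ** 2                             # product of variances of the two independent sites

print("f(p_alpha): MC %.4f  exact %.4f" % (np.mean(cx**4), f_exact))
print("g(p_alpha): MC %.4f  exact %.4f" % (np.mean(cx**2 * cy**2), g_exact))
```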
Proof that lim_N→∞B_N<∞: for any real numbers a,b∈ℝ,
(a+b)^4≤ 8(a^4+b^4).
Applying this inequality
𝔼[(M_α,ϕ^N,t+Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds)^4]
≤ 8(𝔼[(M^N,t_α,ϕ)^4]+𝔼[(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds)^4]).
Applying again inequality (<ref>) we have
𝔼[(M_α,ϕ^N,t)^4] ≤ C(𝔼[Y_α^N,t(ϕ)^4]+𝔼[Y_α^N,0(ϕ)^4].
+.𝔼[((2j)∫_0^tY_α^N,s/N^2(Δϕ)ds)^4]+𝔼[((2j)∫_0^tR_1(ϕ,α,s)ds)^4])
and
𝔼[(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds)^4] ≤C(𝔼[Y_α^N,t(ϕ)^4]+𝔼[Y_α^0,N(ϕ)^4].
+.
𝔼[((2j)∫_0^tY_α^N,s/N^2(Δϕ)ds)^4]).
Arguing similarly to before we find
𝔼[Y_α^N,·(ϕ)^4] = 1/N^2∑_x_1,x_2,x_3,x_4∈𝒜𝔼[∏_i=1^4(η_α^x_i-2j p_α)]∏_i=1^4ϕ(x_i/N)
≤C/N^2‖ϕ‖_∞(∑_x∈𝒜f(p_α,4)+∑_x,y∈𝒜g(p_α,4))<∞
then, by taking the limit
lim_N→∞𝔼[Y_α^N,·(ϕ)^4]≤ C_1.
Obviously, the same bound holds for 𝔼[Y_α^N,0(ϕ)^4]. We can argue similarly and find the following upper bound for the integral term
𝔼[((2j)∫_0^tY_α^N,s/N^2(Δϕ)ds)^4]≤ C ∫_0^T𝔼[Y_α^N,s/N^2((2j)Δϕ)^4]ds<∞
then, in the limit
lim_N→∞𝔼[((2j)∫_0^tY_α^N,s/N^2(Δϕ)ds)^4]=C_2.
By putting together (<ref>), (<ref>) and (<ref>) we obtain that B_N remains finite as N→∞.
□
Proof of Proposition <ref>:
To prove (<ref>) we have that, by the expressions (<ref>), (<ref>),
lim_N→∞𝔼[( M_α,ϕ^N,t-Y_α^N,t(ϕ)+Y_α^N,0+2j∫_0^tY_α^N,s/N^2(Δϕ)ds)^2]
≤ Clim_N→∞∫_0^t𝔼[R_1(ϕ,α,s)^2]ds≤lim_N→∞C_1/N=0.
To prove (<ref>) we only consider the case α= β, since the case α≠β is proved similarly. By the triangle inequality
𝔼[(𝒩_α,α,ϕ^N,t-(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds)^2-2t(2j)^2p_α(1-p_α)∫_ℝ(∇ϕ(u))^2du)^2]
≤ 𝔼[{(M_α,ϕ^N,t)^2-(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds)^2}^2]
+ 𝔼[(N^2∫_0^tΓ_α,α^ϕ,sds-2t(2j)^2p_α(1-p_α)∫_ℝ(∇ϕ(u))^2du)^2].
In the limit we apply Lemma <ref> and Lemma <ref> and we obtain
lim_N→∞𝔼 [(𝒩_α,α,ϕ^N,t-(Y_α^N,t(ϕ)-Y_α^N,0(ϕ)-2j∫_0^tY_α^N,s/N^2(Δϕ)ds)^2. .
+.. 2t(2j)^2p_α(1-p_α)∫_ℝ(∇ϕ(u))^2du)^2]=0
□
§ TIGHTNESS
In this section we prove tightness for the sequence of probability measures (Q_N)_N∈ℕ on the Skorokhod space (see <cit.> for details) of càdlàg trajectories D([0,T],(C_c^∞(ℝ))^*). A necessary and sufficient condition for tightness is given by the following Theorem proved by Aldous
<cit.>.
Consider a Polish space ℰ endowed with a metric d_ℰ(·,·), and denote by (μ_t)_t∈[0,T] a generic trajectory in D([0,T],ℰ).
A sequence of probability measures (P_N)_N∈ℕ on the Skorokhod space D([0,T],ℰ) is tight if and only if
* ∀ t∈ [0,T] and ∀ϵ >0 ∃ K(t,ϵ)⊂ℰ compact such that
sup_N∈ℕP_N(μ_t∉ K(ϵ,t))≤ϵ
* ∀ϵ>0
lim_δ→ 0lim sup_N→∞sup_τ∈𝒯_T, θ≤δP_N(d_ℰ(μ_τ,μ_τ+θ)>ϵ)=0
where 𝒯_T is a family of stopping times bounded by T.
In Proposition <ref> we will apply Theorem <ref> to prove tightness of the sequence of measures (Q_N)_N∈ℕ.
The computation can be done on the Skorokhod space D([0,T],ℝ^n). Indeed, C^∞_c(ℝ) is a nuclear space (see <cit.> for details), so it suffices to prove tightness of the distributions of the real-valued processes (Y^N,t(ϕ))_t∈[0,T] for arbitrary
ϕ∈ C_c^∞(ℝ).
The sequence of measures (Q_N)_N∈ℕ on the space D([0,T],(C_c^∞(ℝ))^*_n) is tight since the following statements are true for any ϕ∈(C_c^∞(ℝ))^n:
* ∀ t∈ [0,T] and ϵ>0 there exists a compact
set K(ϵ,t)⊂ℝ^n such that
sup_N∈ℕQ_N(Y^N,t(ϕ)∉ K(ϵ,t))≤ϵ
* ∀ϵ>0
lim_δ→ 0lim sup_N→∞sup_τ∈𝒯_T, θ≤δQ_N(‖ Y^N,τ(ϕ)-Y^N,τ+θ(ϕ)‖_S>ϵ)=0
where ‖ Y^N,t(ϕ)‖_S=max_α∈{1,…,n }{|Y_α^N,t(ϕ)|} and 𝒯_T is a family of stopping times bounded by T.
Proof. We show that (<ref>) and (<ref>) are satisfied.
Proof of (<ref>):
we fix arbitrary t∈[0,T] and ϵ>0. We apply the central limit theorem for the n-dimensional random vector Y^N,t(ϕ) taking values on ℝ^n, observing that the process (η_t)_t≥ 0 has a product invariant distribution given by (<ref>). To do this we need the expectation and the covariances under Q_N of the equilibrium fluctuation field. We fix arbitrary α,β∈{1,…,n}. We have
𝔼(Y_α^N,t(ϕ))=1/√(N)∑_x∈ℤ𝔼[η_α^x(tN^2)-(2j)p_α]ϕ(x/N)=0
and
Var(Y_α^N,t(ϕ)) =1/N∑_x∈ℤϕ^2(x/N)Var(η_α^x(tN^2))
Cov(Y_α^N,t(ϕ),Y_β^N,t(ϕ)) =1/N∑_x∈ℤϕ^2(x/N)Cov(η_α^x(tN^2),η_β^x(tN^2)).
Taking the limit we obtain
lim_N→∞𝔼(Y_α^N,t(ϕ))=0, lim_N→∞Var(Y_α^N,t(ϕ))=2j p_α(1-p_α)∫_ℝ(ϕ(u))^2du
lim_N→∞Cov(Y_α^N,t(ϕ),Y_β^N,t(ϕ))=-2jp_αp_β∫_ℝ(ϕ(u))^2du
Therefore, the random vector Y^N,t converges in distribution to a centered Gaussian random vector with covariance matrix 𝒦 with elements
𝒦_α,β=-2jp_αp_β∫_ℝ(ϕ(u))^2du, 𝒦_α,α=2j p_α(1-p_α)∫_ℝ(ϕ(u))^2du.
Thus for arbitrary ϵ>0 and ∀ t∈ [0,T] we can choose K(ϵ,t)⊂ℝ^n compact, such that
sup_N∈ℕQ_N(Y^N,t(ϕ)∉ K(ϵ,t))≤ϵ.
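The variance and covariance limits used in the central limit theorem above can also be checked by simulation. The sketch below (illustrative only; the compactly supported test function and all names are our own choices) samples the product measure ν_p on a finite box, builds the field Y_α^N,0(ϕ), and compares its empirical second moments with 2jp_α(1-p_α)∫ϕ^2 and -2jp_αp_β∫ϕ^2.

```python
import numpy as np

rng = np.random.default_rng(1)

two_j = 3
p = np.array([0.4, 0.35, 0.25])          # (p_0, p_1, p_2): holes plus n = 2 species
N = 100
sites = np.arange(-N, N + 1)             # enough sites: the test function lives in [-1, 1]

def phi(u):
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    inside = np.abs(u) < 1.0
    out[inside] = (1.0 - u[inside] ** 2) ** 2   # compactly supported test function (C^1 is enough here)
    return out

weights = phi(sites / N)
n_rep = 20_000

eta = rng.multinomial(two_j, p, size=(n_rep, sites.size))   # product measure nu_p, n_rep replicas
centered = eta - two_j * p                                  # eta_alpha^x - 2j p_alpha
Y1 = (centered[:, :, 1] * weights).sum(axis=1) / np.sqrt(N)
Y2 = (centered[:, :, 2] * weights).sum(axis=1) / np.sqrt(N)

u = np.linspace(-1.0, 1.0, 4001)
int_phi2 = np.sum(phi(u) ** 2) * (u[1] - u[0])              # numerical value of int phi^2

print("Var: MC %.4f  predicted %.4f" % (Y1.var(), two_j * p[1] * (1 - p[1]) * int_phi2))
print("Cov: MC %.4f  predicted %.4f" % (np.cov(Y1, Y2)[0, 1], -two_j * p[1] * p[2] * int_phi2))
```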
Proof of (<ref>): without loss of generality and for the sake of notation, here we will work with a single species α∈{1,…,n}. For an arbitrary stopping time τ∈𝒯_T,
we write the process
Y_α^N,τ(ϕ)=M_α,ϕ^N,τ+Y_α^N,0(ϕ)+∫_0^τN^2ℒY_α^N,s/N^2(ϕ)ds.
By the Chebyshev and triangle inequalities
Q_N(|Y_α^N,τ(ϕ)-Y_α^N,τ+θ(ϕ)| ≥ϵ) ≤1/ϵ^2𝔼[(Y_α^N,τ(ϕ)-Y_α^N,τ+θ(ϕ))^2]
≤2/ϵ^2(𝔼[(M_α,ϕ^N,τ-M_α,ϕ^N,τ+θ)^2]_A_N+𝔼[(∫_τ^τ+θN^2ℒY_α^N,s/N^2(ϕ)ds)^2]_B_N)
We first prove that A_N goes to zero when N→∞. By the martingale property we have
𝔼[(M_α,ϕ^N,τ-M_α,ϕ^N,τ+θ)^2]=𝔼[
(M_α,ϕ^N,τ+θ)^2-
(M_α,ϕ^N,τ)^2].
By Doob's decomposition theorem
𝔼[(M_α,ϕ^N,t)^2]=𝔼[∫_0^tN^2Γ_α,α^ϕ,s/N^2ds].
We write the following chain of inequalities by using Fubini's theorem, the Cauchy-Schwarz inequality and the fact that, by Lemma <ref>, the sequence N^2Γ_α,α^ϕ,s/N^2 is uniformly bounded in N in L^2(ν_p)
sup_N∈ℕ𝔼[(M_α,ϕ^N,τ+θ)^2-(M_α,ϕ^N,τ)^2]=sup_N∈ℕ𝔼[∫_τ^τ+θN^2Γ_α,α^ϕ,s/N^2ds]
≤ √(θ)(∫_0^Tsup_N∈ℕ𝔼[(N^2Γ_α,α^ϕ,s/N^2)^2])^1/2≤√(θ)C.
By taking the limits and by the above upper bound we have
lim_δ→ 0lim sup_N→∞sup_τ∈𝒯_T, θ≤δA_N≤lim_δ→ 0lim sup_N→∞sup_τ∈𝒯_T, θ≤δ√(θ)C=0
then A_N goes to zero as N→∞.
Secondly, we prove that B_N vanishes when N→∞. By Fubini's theorem and the Cauchy-Schwarz inequality
𝔼[(∫_τ^τ+θN^2ℒY_α^N,s/N^2(ϕ)ds)^2]≤√(θ)(∫_0^T𝔼[(N^2ℒY_α^N,s/N^2(ϕ))^2]ds)^1/2.
The integrand can be bounded from above as follows
𝔼[(ℒ_NY_α^N,s(ϕ))^2]=𝔼[(1/√(N)∑_x∈ℤ(η_α^x-(2j)p_α)Δ_Nϕ(x/N))^2]≤C/N^3‖Δϕ‖_∞∑_x∈𝒜𝔼[(η_α^x-2j p_α)^2].
where Δ_N denotes the discrete Laplacian with spacing
1/N and 𝒜 is the set defined in (<ref>).
Therefore, arguing as in the proof of Lemma <ref> and by taking the limits we have
lim_δ→ 0lim sup_N→∞sup_τ∈𝒯_T, θ≤δ𝔼[(∫_τ^τ+θN^2ℒY_α^N,s/N^2(ϕ)ds)^2]≤lim_δ→ 0√(δ) C_1=0.
Thus B_N vanishes as N→∞. This concludes the proof of tightness of the sequence (Q_N)_N∈ℕ.
□
§ THE COVARIANCES OF THE LIMITING PROCESS
In this section we compute the covariance of the limiting process, using duality. As a corollary this gives its covariances at the initial time t=0, needed for the proof of Theorem <ref>. By adapting the results of <cit.>, the multi-species stirring process is self-dual with duality function
D(η,ξ)=∏_x∈ℤ((2j-∑_k=1^nξ_k^x)!/(2j)!∏_k=1^nη_k^x!/(η_k^x-ξ_k^x)!).
where we denote by (ξ_t)_t≥ 0 the dual process.
The following proposition shows that the covariances (<ref>) and (<ref>) of the limiting process can be computed via the single-particle self-duality. Notice that because the limiting process is Gaussian, the covariances uniquely determine the process.
The covariances of the limiting process (Y_1^t,…,Y_n^t) are:
Cov(Y_α^t(ϕ),Y_β^0(ψ))=-(2j)p_αp_β⟨ S_tϕ,ψ⟩_L^2(dx) α≠β,
Cov(Y_α^t(ϕ),Y_α^0(ψ))=(2j)p_α(1-p_α)⟨ S_tϕ,ψ⟩_L^2(dx) α=β.
Proof: By the self-duality, the dual process initialized with one particle behaves as an independent random walker (IRW) jumping at rate 2j on ℤ. Thus the following computation holds for α≠β:
𝔼[Y_α^N,t(ϕ)Y_β^N,0(ψ)]
=1/N∑_x,y∈ℤϕ(x/N)ψ(y/N)𝔼[(η_α^x(tN^2)-2j p_α)(η_β^y-2j p_β)]
=
1/N∑_x,y∈ℤ∫_Ων_p(dη)𝔼_η[(η_α^x(tN^2)-2j p_α)](η_β^y-2j p_β)ϕ(x/N)ψ(y/N)
=
1/N∑_x,y∈ℤ∫_Ων_p(dη)(η_β^y-2j p_β)∑_z∈ℤp_tN^2^IRW(x,z)(η_α^z-2j p_α)ϕ(x/N)ψ(y/N)
=
1/N∑_x,y,z∈ℤCov(η_α^z,η_β^y)p_tN^2^IRW(x,z)ϕ(x/N)ψ(y/N)
=
-(2j)p_αp_β1/N∑_x,y∈ℤp_tN^2^IRW(x,y)ϕ(x/N)ψ(y/N)
where we denoted by p_t^IRW(·,·) the transition kernel of the IRW jumping at rate 2j. By taking the limit on both sides and by the invariance principle we have
lim_N→∞𝔼[Y_α^N,t(ϕ)Y_β^N,0(ψ)]=-(2j)p_αp_β⟨ S_tϕ,ψ⟩_L^2(dx).
For the case α=β the proof is similar.
□
By the following corollary, we find the covariances of the process at the initial time t=0.
The covariances of the limiting process (Y_1^0,…,Y_n^0) at time t=0 are:
Cov(Y_α^0(ϕ),Y_β^0(ψ))=-(2j)p_αp_β⟨ϕ,ψ⟩_L^2(dx) α≠β
Cov(Y_α^0(ϕ),Y_α^0(ψ))=(2j)p_α(1-p_α)⟨ϕ,ψ⟩_L^2(dx) α=β.
Proof: the proof is straightforward from the properties of the semigroup (S_t)_t≥ 0 and by Proposition <ref>.
□
§ UNIQUENESS AND CONTINUITY OF THE LIMIT POINT
As shown in Section <ref>, the sequence of probability measures (Q_N)_N∈ℕ giving the law of (Y^N,t)_t∈ [0,T] is tight; then Prokhorov's theorem <cit.> guarantees that every subsequence (Q_N_k)_k∈ℕ admits a further subsequence converging to a limit point, which we denote by Q.
It remains to prove that, ∀α∈{1,…,n}, the limiting process (Y^t_α)_t≥ 0 has continuous trajectory (Q-almost surely) and that Q solves the martingale problem introduced in Theorem <ref>.
The Q-a.s. continuity will be proved in Proposition <ref>, while the solution of the martingale problem will be proved in Proposition <ref>.
For every T>0, ϕ∈ C_c^∞ and α∈{1,…,n} the map [0,T]∋ t↦ Y_α^t(ϕ) is Q-a.s. continuous.
Proof.
We prove that the set of discontinuity points of Y_α^t(ϕ) is negligible under Q. We introduce the usual modulus of continuity for any fixed δ>0:
ω_δ(Y_α(ϕ)):=sup_|t-s|<δ| Y_α^t(ϕ)-Y_α^s(ϕ)|
and the modified uniform modulus of continuity
ω_δ^'(Y_α(ϕ)):=inf_{t_i}_0≤ i≤ r max_1≤ i≤ rsup_t_i-1≤ s<t≤ t_i |Y_α^t(ϕ)-Y_α^s(ϕ)|
where the first infimum is taken over all partitions {t_i, 0≤ i≤ r} of the interval [0,T] such that
0=t_0<t_1<…<t_r=T with t_i-t_i-1≥δ for all i=1,…,r.
They are related (see <cit.> for details) by the inequality
ω_δ(Y_α(ϕ))≤ 2ω_δ^'(Y_α(ϕ))+sup_t |Y_α^t(ϕ)-Y_α^t-(ϕ)|.
Moreover, (see again <cit.>) it holds that for arbitrary ϵ>0
lim_δ→ 0lim sup_N→∞Q_N(w^'_δ(Y_α^N(ϕ) )≥ϵ)=0.
Furthermore we have the upper bound
sup_t| Y_α^N,t(ϕ)-Y^N,t-_α(ϕ)|≤4j ||ϕ||_∞/√(N).
As a consequence of tightness we have that, for arbitrary ϵ >0
lim_δ→ 0Q(ω_δ(Y_α(ϕ))≥ϵ)=lim_δ→ 0lim sup_k→∞Q_N_k(ω_δ(Y_α^N_k(ϕ))≥ϵ)
therefore, by (<ref>) we may write
lim_δ→ 0Q(ω_δ(Y_α(ϕ))≥ϵ) ≤lim_δ→ 0lim sup_k→∞Q_N_k(ω_δ^'(Y_α^N_k(ϕ))≥ϵ)
+
lim_δ→ 0lim sup_k→∞Q_N_k(sup_t|Y^N_k,t_α(ϕ)-Y^N_k,t-_α(ϕ)|≥ϵ)
=0.
Thus the almost sure continuity is proved.
□
For all ϕ∈ C_c^∞(ℝ) and for all α,β∈{1,…,n} the processes
(M_α,ϕ^t)_t∈ [0,T]
defined in (<ref>) and (𝒩_α,β,ϕ^t)_t∈ [0,T], (𝒩_α,α,ϕ^t)_t∈ [0,T] defined in (<ref>), (<ref>) are martingales with respect to the natural filtration ℱ_t:=σ{(Y_1^s,…,Y_n^s) : 0≤ s≤ t ≤ T}.
Proof. The strategy of the proof is inspired by the proof of Proposition 2.3, Chapter 11 of <cit.> dealing with the
mono-species zero-range process. The fundamental tools are the Portmanteau theorem and Proposition <ref>. We
further remark that the trajectories of the process (Y_α^N,t)_t∈ [0,T] are elements of the space D([0,T],(C_c^∞(ℝ))^*), which is not metric, so
we cannot directly apply the Portmanteau theorem. To overcome this issue, we adapt the strategy used in Section 5 of <cit.>. The complete proof is reported for the martingale (M_α,ϕ^t)_t∈ [0,T] while, concerning the martingales (𝒩_α,β,ϕ^t)_t∈ [0,T] and (𝒩_α,α,ϕ^t)_t∈ [0,T], we just give some estimates that allow one to follow a similar strategy. Moreover, only the case α≠β is considered, since the case α=β is similar.
Proof for (M_α,ϕ^t)_t∈ [0,T]:
The process (M_α,ϕ^t)_t∈ [0,T] defined in (<ref>) is ℱ_t-measurable, therefore we only need to show that, for arbitrary 0≤ s≤ t≤ T
𝔼_Q[M_α,ϕ^t|ℱ_s]=M_α,ϕ^s
The property (<ref>) is equivalent to showing that
𝔼_Q[M_α,ϕ^tℐ(Y)] =𝔼_Q[M_α,ϕ^sℐ(Y)].
where the function ℐ(Y) is defined as follows. We fix m∈ℕ and we introduce the vectors s=(s_1,…,s_m) with 0≤ s_1≤ s_2≤…,≤ s_m≤ s and 𝐇=(H_1,…,H_m) with H_1,…,H_m∈ (C_c^∞)^n. For arbitrary Ψ∈ C_b(ℝ^m), we introduce the function from (D([0,T],(C_c^∞(ℝ))^*))^m to ℝ
ℐ(Y^N,·,H,s):=Ψ(Y^N,s_1(H_1),…,Y^N,s_m(H_m)).
For the sake of notation, we will denote this function with ℐ(Y^N).
Since (M_α,ϕ^N,t)_t∈ [0,T] defined in (<ref>) is a martingale it holds that
lim_k→∞𝔼_Q_N_k[M_α,ϕ^N_k,tℐ(Y^N_k)]=lim_k→∞𝔼_Q_N_k[M_α,ϕ^N_k,sℐ(Y^N_k)]
therefore, to conclude (<ref>) it is enough
to show that
lim_k→∞𝔼_Q_N_k[M_α,ϕ^N_k,tℐ(Y^N_k)]=𝔼_Q[M_α,ϕ^tℐ(Y)].
For arbitrary ϕ∈ C_c^∞(ℝ) we introduce
ℳ_ϕ : D([0,T],(C_c^∞(ℝ))^*)→ D([0,T],ℝ)
Y_α^·→ℳ_ϕ(Y_α^·) = Y_α^·(ϕ)-Y_α^0(ϕ)-2j∫_0^·Y_α^q(Δϕ)dq.
Observe that, for every t∈[0,T]
ℳ_ϕ(Y_α^t)=M_α,ϕ^t.
therefore, we need to show that
lim_k→∞𝔼_Q_N_k[ M_α,ϕ^N_k,tℐ(Y^N_k)]=𝔼_Q[ℳ_ϕ(Y_α^t)ℐ(Y)]
We prove this in two steps:
i)
lim_k→∞𝔼_Q_N_k[M_α,ϕ^N_k,t ℐ(Y^N_k)]=
lim_k→∞𝔼_Q_N_k[ℳ_ϕ(Y_α^N_k,t)ℐ(Y^N_k)]
ii)
lim_k→∞𝔼_Q_N_k[ℳ_ϕ(Y_α^N_k,t)ℐ(Y^N_k)]=𝔼_Q[ℳ_ϕ(Y_α^t)ℐ(Y)].
By the Cauchy-Schwarz inequality, the boundedness of Ψ and Proposition <ref>
we obtain
lim_k→∞𝔼_Q_N_k[( M_α,ϕ^N_k,t-ℳ_ϕ(Y_α^N_k,t))ℐ(Y^N_k)]
≤ ‖Ψ‖_∞lim_k→∞𝔼_Q_N_k[( M_α,ϕ^N_k,t-Y_α^N_k,t(ϕ)+Y_α^N_k,0(ϕ)+2j∫_0^tY_α^N_k,q/N_k^2(Δϕ)dq)^2]=0.
This implies (<ref>),
thus the first step is proved. Furthermore,
we have the following upper-bound
sup_k∈ℕ𝔼_Q_N_k[(ℳ_ϕ(Y_α^N_k,t)ℐ(Y^N_k))^2]≤‖Ψ‖_∞^2sup_k∈ℕ𝔼_Q_N_k[(ℳ_ϕ(Y_α^N_k,t))^2]<∞
which implies that the family of random variables (ℳ_ϕ(Y_α^N_k,t)ℐ(Y^N_k))_k∈ℕ is uniformly integrable with respect to the laws Q_N_k. Then, to prove (<ref>),
it is enough to show that ℳ_ϕ(Y_α^N_k,t)ℐ(Y^N_k) converges in distribution to ℳ_ϕ(Y_α^t)ℐ(Y). To this aim, we define, for arbitrary test functions ϕ,H_1,…,H_m,
P_1^α:D([0,T],(C^∞_c(ℝ))^*) → D([0,T],ℝ)^m+2
Y^N_k,· → P_1^α(Y^N_k,·) =(Y_α^N_k,·(ϕ), Y_α^N_k,·(Δϕ),Y^N_k,·(H_1),…,Y^N_k,·(H_m)
)
and
P_2:D([0,T],ℝ)^m+2→ℝ
P_1^α(Y^N_k,·)→ P_2(P_1^α(Y^N_k,·)) = (ℳ_ϕ(Y_α^N_k,t)) Ψ(Y^N_k,s_1(H_1) ,…,Y^N_k,s_m(H_m))
in such a way that
ℳ_ϕ(Y_α^N_k,t)ℐ(Y^N_k)=P_2∘ P_1^α(Y_α^N_k,t).
Using Theorem 1.7 in <cit.>, each component of P_1 is continuous and therefore
P_1^α(Y^N_k,t)→ P_1^α(Y^t)
as k→∞
on the Skorokhod space D([0,T],ℝ)^m+2.
Since by Proposition <ref> the limiting point (Y_α^t)_t∈[0,T] is a.s. continuous, the convergence holds also uniformly in time. Using the continuity of Ψ we thus obtain
P_2∘ P_1^α(Y^N_k,t)→ P_2∘ P_1^α(Y^t)
as k→∞
uniformly in time. As a consequence, the set of discontinuity points of P_2 is negligible under the limit law Q.
By Portmanteau theorem, this implies that ℳ_ϕ(Y_α^N_k,t)ℐ(Y^N_k) converges in distribution to ℳ_ϕ(Y_α^t)ℐ(Y).
Therefore (<ref>) is proved.
Proof for (𝒩_α,β,ϕ^t)_t∈ [0,T] and (𝒩_α,α,ϕ^t)_t∈ [0,T]: we have the following estimate using Proposition <ref>
lim_k→∞𝔼 [(𝒩_α,β,ϕ^N_k,t-(Y_α^N_k,tN_k^2(ϕ)-Y_α^N_k,0(ϕ)-2j∫_0^tY_α^N_k,s/N_k^2(Δϕ)ds)..
.. (Y_β^N_k,t(ϕ)-Y_β^N_k,0(ϕ)-2j∫_0^tY_β^N_k,s/N_k^2(Δϕ)ds)+2t(2j)^2p_αp_β∫_ℝ∇(ϕ(u))^2du)ℐ(Y^N_k)]
≤‖Ψ‖_∞lim_k→∞𝔼[(𝒩_α,β,ϕ^N_k,t-(Y_α^N_k,tN_k^2(ϕ)-Y_α^N_k,0(ϕ)-2j∫_0^tY_α^N_k,s/N_k^2(Δϕ)ds)..
.. (Y_β^N_k,t(ϕ)-Y_β^N_k,0(ϕ)-2j∫_0^tY_β^N_k,s/N_k^2(Δϕ)ds)+2t(2j)^2p_αp_β∫_ℝ∇(ϕ(u))^2du)^2]=0
that implies the counterpart of (<ref>).
Moreover, we have the following upper bound
sup_k∈ℕ𝔼_Q_N_k[(M_α,ϕ^N_k,tM_β,ϕ^N_k,t+2t(2j)^2p_αp_β∫_ℝ∇(ϕ(u))^2du)^2]
≤
Csup_k∈ℕ{𝔼_Q_N_k[(Y_α^N_k,t(ϕ)-Y_α^N_k,0(ϕ)-2j∫_0^tY_α^N_k,q/N_k^2(Δϕ)dq)^4].
. 𝔼_Q_N_k[(Y_β^N_k,t(ϕ)-Y_β^N_k,0(ϕ)-2j∫_0^tY_β^N_k,q/N_k^2(Δϕ)dq)^4]}<∞
where in the last inequality we used Proposition <ref>. This is the counterpart of (<ref>) and allows us to show uniform integrability. The rest of the proof is similar.
□
§ THE REACTION DIFFUSION PROCESS
§.§ Description of the process
In this section we investigate a reaction-diffusion process. This process is a superposition of two dynamics: the multi-species stirring dynamics and a reaction dynamics that,
at constant rate γ>0, changes a particle of one species
into a particle of any of the other species.
Therefore now only the total number of particles is conserved (this is different from the
pure multi-species stirring, where the number of particles of each species is conserved). We will denote this process by (ζ_t)_t≥ 0. The state space is again Ω defined in (<ref>) and the generator reads
ℒ^rd=ℒ+ℒ^r
where ℒ is the generator defined in (<ref>), while for any local function f:Ω→ℝ
ℒ^rf(ζ)=γ∑_x∈ℤ∑_k,l=1^nζ_k^x[f(ζ-δ_k^x+δ_l^x)-f(ζ)].
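To make the superposition of the two dynamics concrete, here is a minimal continuous-time (Gillespie-type) simulation sketch of ℒ+ℒ^r on a finite ring; the periodic geometry, the parameter values and all function names are our own illustrative choices and are not part of the model defined above.

```python
import numpy as np

rng = np.random.default_rng(2)

L, n, two_j, gamma = 20, 3, 2, 0.5        # ring of L sites, n species, 2j particles per site
p_hat = np.array([0.4] + [0.2] * n)       # (p_hat_0, p_hat_1, ..., p_hat_1), sums to one

# initial configuration drawn from the product multinomial measure Lambda_{p_hat}
zeta = rng.multinomial(two_j, p_hat, size=L)        # zeta[x, k]; column k = 0 encodes holes

def gillespie_step(zeta, t):
    # stirring: the ordered pair (k, l) swaps across the bond (x, x+1) at rate zeta_k^x * zeta_l^{x+1}
    right = np.roll(zeta, -1, axis=0)
    stir = (zeta[:, :, None] * right[:, None, :]).astype(float)             # shape (L, n+1, n+1)
    # reaction: each particle of species k >= 1 at x becomes species l in {1, ..., n} at rate gamma
    react = gamma * np.repeat(zeta[:, 1:, None].astype(float), n, axis=2)   # shape (L, n, n)
    total = stir.sum() + react.sum()
    t += rng.exponential(1.0 / total)
    if rng.random() < stir.sum() / total:
        x, k, l = np.unravel_index(rng.choice(stir.size, p=stir.ravel() / stir.sum()), stir.shape)
        y = (x + 1) % L
        zeta[x, k] -= 1; zeta[x, l] += 1        # the l-particle moves from x+1 to x
        zeta[y, l] -= 1; zeta[y, k] += 1        # the k-particle moves from x to x+1
    else:
        x, k, l = np.unravel_index(rng.choice(react.size, p=react.ravel() / react.sum()), react.shape)
        zeta[x, k + 1] -= 1; zeta[x, l + 1] += 1    # species k+1 turns into species l+1 (shift for holes)
    return zeta, t

t = 0.0
for _ in range(5000):
    zeta, t = gillespie_step(zeta, t)
print("time elapsed:", round(t, 3), "| particles per site:", set(zeta.sum(axis=1)))
```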
This process admits a family of reversible measures that are characterized in Lemma <ref>.
The generator ℒ^rd admits the reversible product measure
Λ_p̂=⊗_x∈ℤ MN(2j; p̂)
where MN(2j; p̂) denotes
the Multinomial distribution with
2j independent trials and
success probabilities
p̂=(p̂_0,p̂_1,…,p̂_1)
with p̂_0+np̂_1=1.
Proof: for an arbitrary site x∈ℤ and for arbitrary α,β∈{1,…,n} such that α≠β we write the detailed balance condition between configuration ζ and ζ+δ_β^x-δ_α^x with respect to the measure Λ_p̂ defined in (<ref>) and we obtain
ζ_α^x/ζ_α^x!ζ_β^x!=ζ_β^x+1/(ζ_α^x-1)!(ζ_β^x+1)!p̂_β/p̂_α
that is true if and only if
p̂_α=p̂_β=p̂_1.
□
§.§ The hydrodynamic limit
Before proving the equilibrium fluctuation limit, we state the hydrodynamic result. For arbitrary ϕ∈ C_c^∞(ℝ) we introduce the density field
𝒳_α^N,t(ϕ):=1/N∑_x∈ℤζ_α^x(tN^2)ϕ(x/N) ∀α∈{1,…,n}.
Let ρ^(α): ℝ→ [0,2j], with α∈{1,…,n}, be an initial macroscopic profile and let (μ_N)_N∈ℕ a sequence of compatible initial measures. Let P_N be the law of the process (𝒳_1^N,t(ϕ),…,𝒳_n^N,t(ϕ)) induced by (μ_N)_N∈ℕ. Then, ∀ T>0, δ>0, ∀α∈{1,…,n} and ∀ϕ∈ C_c^∞(ℝ)
lim_N→∞P_N(sup_t∈ [0,T]| 𝒳_α^N,t(ϕ)-∫_ℝϕ(u)ρ^(α)(u,t)du|>δ)=0
where ρ^(α)(x,t) is a strong solution of the PDE
∂_tρ^(α)(x,t)=(2j)Δρ^(α)(x,t)+Υ(∑_β=1 : β≠α^nρ^(β)(x,t)-ρ^(α)(x,t)) x∈ℝ, t∈ [0,T]
ρ^(α)(x,0)=ρ^(α)(x)
where Υ∈ (0,∞).
Proof: the proof is reported in appendix <ref> since the steps are a slight modification of the proof done in <cit.>. As usual for reaction-diffusion systems <cit.>, the diffusive scaling has to be complemented with a weak mutation scaling γ = Υ/N^2.
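For visualisation purposes, the limiting system can be integrated with an explicit finite-difference scheme. The sketch below is our own illustration (periodic grid instead of ℝ, two species, arbitrary smooth initial profiles); the reaction term is implemented exactly as it appears in the statement above.

```python
import numpy as np

two_j, upsilon = 1.0, 2.0               # diffusivity 2j and reaction strength Upsilon
n_species, n_grid = 2, 200
dx = 1.0 / n_grid
dt = 0.2 * dx**2 / (2.0 * two_j)        # well within the explicit-Euler stability limit

x = np.linspace(0.0, 1.0, n_grid, endpoint=False)
# smooth initial profiles rho^(alpha)(x, 0) taking values in [0, 2j]
rho = np.stack([0.5 + 0.4 * np.sin(2.0 * np.pi * (a + 1) * x) for a in range(n_species)])

def step(rho):
    lap = (np.roll(rho, 1, axis=1) + np.roll(rho, -1, axis=1) - 2.0 * rho) / dx**2
    reaction = upsilon * (rho.sum(axis=0) - 2.0 * rho)   # sum_{beta != alpha} rho^beta - rho^alpha (n = 2)
    return rho + dt * (two_j * lap + reaction)

for _ in range(20_000):
    rho = step(rho)

# only the total density is conserved by the reaction-diffusion dynamics
print("total mass:", rho.sum() * dx, "(initially", n_species * 0.5, ")")
print("species masses:", (rho.sum(axis=1) * dx).round(4))
```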
§.§ The density fluctuation
We consider the process (ζ_t)_t≥ 0 initialized from the reversible measure Λ_p̂ defined in (<ref>). The density fluctuation field for a species α∈{1,…,n} is an element of the space (C_c^∞(ℝ))^* defined, for any test function ϕ∈ C_c^∞(ℝ), as
𝒴_α^N,t(ϕ):=1/√(N)∑_x∈ℤϕ(x/N)(ζ_α^x(tN^2)-(2j)p̂_1)
where (2j)p̂_1=𝔼_Λ_p̂[ζ_α^x].
We call π_N the law of the random process (𝒴^N,t)_t≥ 0 = ((𝒴_1^N,t,…,𝒴_n^N,t))_t≥ 0 and 𝔼_π_N the expectation with respect to this law.
The density fluctuation field (<ref>) satisfies the convergence result stated in the following Theorem.
There exists a unique (𝒴^t)_t∈[0,T]=((𝒴_1^t,…,𝒴_n^t))_t∈[0,T] on the space C([0,T];(C_c^∞(ℝ))^*_n) with law π such that
π_N→π weakly for N→∞.
Moreover, (𝒴^t)_t∈[0,T] is a generalized stationary Ornstein-Uhlenbeck process solving, for every α∈{1,…,n}, the following martingale problem:
M_α,ϕ^t:=𝒴_α^t(ϕ)-𝒴_α^0(ϕ)-(2j)∫_0^t𝒴_α^s(Δϕ)ds-Υ∫_0^t(∑_β=1 : β≠α^n𝒴_β^s(ϕ)-𝒴_α^s(ϕ))ds
is a martingale ∀ϕ∈ C_c^∞(ℝ) with respect to the natural filtration of (𝒴^t_1,…,𝒴_n^t) with quadratic covariation
[ M_α,ϕ,M_β,ϕ]_t=-2t(2j)^2p̂_1^2∫_ℝ∇(ϕ(u))^2du-2p̂_1t(2j)Υ∫_ℝ(ϕ(u))^2du
and quadratic variation
[ M_α,ϕ]_t= 2t(2j)^2p̂_1(1-p̂_1)∫_ℝ∇(ϕ(u))^2du+np̂_1t(2j)Υ∫_ℝ(ϕ(u))^2du.
Theorem <ref> suggests that the limiting process
(𝒴^t)_t∈[0,T]=((𝒴^t_1,…,𝒴_n^t))_t∈[0,T]
can be formally written as the solution of the distribution-valued SPDE
d𝒴^t=𝒜𝒴^tdt+2j√(2Σ)∇ dW^t+√((2j)Υ)√(ℬ)d𝒲
where
(W^t)_t∈[0,T]=((W^t_1,…,W_n^t))_t∈[0,T]
(𝒲^t)_t∈[0,T]=((𝒲^t_1,…,𝒲_n^t))_t∈[0,T]
are two n-dimensional vectors of independent space-time white noises. The matrices read
𝒜=[ (2j)Δ-Υ Υ … Υ; Υ (2j)Δ-Υ … Υ; ⋮ ⋮ ⋱ ⋮; Υ Υ … (2j)Δ-Υ ]
Σ=[ p̂_1(1-p̂_1) -p̂_1^2 … -p̂_1^2; -p̂_1^2 p̂_1(1-p̂_1) … -p̂_1^2; ⋮ ⋮ ⋱ ⋮; -p̂_1^2 -p̂_1^2 … p̂_1(1-p̂_1) ] ℬ=[ np̂_1 -2p̂_1 … -2p̂_1; -2p̂_1 np̂_1 … -2p̂_1; ⋮ ⋮ ⋱ ⋮; -2p̂_1 -2p̂_1 … np̂_1 ].
Proof of Theorem <ref>: the strategy is similar to the one used for Theorem <ref>. Therefore, we only report the computation of the quadratic covariation (via the Carré Du Champ operator denoted by Θ_α,β^ϕ,t) of the Dynkin martingale associated to (ζ_t)_t≥ 0
Θ_α,β^ϕ,t =(ℒ+ℒ^r)(𝒴^N,t_α(ϕ)𝒴_β^N,t(ϕ))-𝒴_α^N,t(ϕ)(ℒ+ℒ^r)(𝒴_β^N,t(ϕ))-𝒴_β^N,t(ϕ)(ℒ+ℒ^r)(𝒴_α^N,t(ϕ))
=ℒ(𝒴^N,t_α(ϕ)𝒴_β^N,t(ϕ))-𝒴_α^N,t(ϕ)ℒ(𝒴_β^N,t(ϕ))-𝒴_β^N,t(ϕ)ℒ(𝒴_α^N,t(ϕ))
+ℒ^r(𝒴^N,t_α(ϕ)𝒴_β^N,t(ϕ))-𝒴_α^N,t(ϕ)ℒ^r(𝒴_β^N,t(ϕ))-𝒴_β^N,t(ϕ)ℒ^r(𝒴_α^N,t(ϕ))
introducing
Γ_α,β^ϕ,t,reaction:=ℒ^r(𝒴^N,t_α(ϕ)𝒴_β^N,t(ϕ))-𝒴_α^N,t(ϕ)ℒ^r(𝒴_β^N,t(ϕ))-𝒴_β^N,t(ϕ)ℒ^r(𝒴_α^N,t(ϕ))
and recalling the definition of Γ_α,β^ϕ,t written in (<ref>) we have that the Carré Du Champ operator Θ_α,β^ϕ,t is the sum of the two Carré Du Champ operators associated to the generators ℒ and ℒ^r respectively, i.e.
Θ_α,β^ϕ=Γ_α,β^ϕ,t+Γ_α,β^ϕ,t,reaction.
Therefore to perform the proof we only need to compute Γ_α,β^ϕ,t,reaction. We consider the case α≠β (the case α=β is similar) and we compute explicitly
N^2Γ_α,β^ϕ,reaction(𝒴^N) =Υ/N∑_x∈ℤ∑_k,l=1^nζ_k^x[∑_y∈ℤϕ(y/N)((ζ_α^y-δ_k^x+δ_l^x)-ζ_α^y)] [∑_z∈ℤϕ(z/N)((ζ_β^z-δ_k^x+δ_l^x)-ζ_β^z)]
=
-Υ/N∑_x∈ℤ(ζ_α^x+ζ_β^x)ϕ^2(x/N).
As a consequence, the limits of the first and second moment are given by
lim_N→∞𝔼_π_N[N^2Γ_α,β^ϕ,reaction]=-2p̂_1(2j)Υ∫_ℝ(ϕ(u))^2du
and
lim_N→∞𝔼_π_N[(N^2Γ_α,β^ϕ,reaction)^2]=4p̂_1^2(2j)^2Υ^2(∫_ℝ(ϕ(u))^2du)^2
□
§ CONCLUSIONS AND PERSPECTIVES
In this paper we considered a multi-species stirring process.
We studied the fluctuation of the density field around the hydrodynamic limit when the process is started from the equilibrium reversible measure. The main result (Theorem <ref>) shows that the limit of the empirical fluctuation field behaves as an infinite-dimensional Ornstein-Uhlenbeck process (see equation (<ref>)). The interesting feature is that the space-time white noise terms of different species are coupled, even though in the hydrodynamic equations they are not. Moreover, we extended this result to a reaction-diffusion process. In this last case, the SPDEs are coupled also because of a further space-time white noise term, due to the reactions (change of species).
A future development will be the study of large deviations around the hydrodynamic limit and of the fluctuations starting from a non-equilibrium initial measure. Moreover, it would be interesting to investigate fluctuations and the hydrodynamic limit of the asymmetric multi-species stirring process. Another active field of study concerns the extension of hydrodynamic results to non-Euclidean geometry, to random environments and to a segment with various types of boundary conditions. Some examples in the single-species case are <cit.>,<cit.>, <cit.>, <cit.>. In this paper we studied the first order fields; a further development could be to push forward the analysis for higher order fields, similarly to what has been done in <cit.>.
§ PROOF OF THE HYDRODYNAMIC LIMITS
Proof of Theorem <ref>: the proof is based on the martingale techniques proposed in <cit.>. The aim is to show that the sequence of measures (P_N)_N∈ℕ is tight and that the limit point has a density that is the solution of the PDE (<ref>).
We start by considering the Dynkin's martingale associated to the process (η_t)_t≥ 0 defined, for any ϕ∈ C_c^∞(ℝ) and ∀α∈{1,…,n}, as
m^N,t_α,ϕ:=X_α^N,t(ϕ)-X_α^N,0(ϕ)-∫_0^tN^2ℒX_α^N,s/N^2(ϕ)ds.
The action of the generator (<ref>) on the density field (<ref>) is
ℒX_α^N,·(ϕ) =1/N∑_x∈ℤ∑_k,l=0^nη_k^xη_l^x+1[∑_y∈ℤϕ(y/N)((η_α^y-δ_k^x+δ_l^x+δ_k^x+1-δ_l^x+1)-η_α^y)]
=
1/N∑_x∈ℤ{η_α^x(2j-η_α^x+1)[ϕ(x+1/N)-ϕ(x/N)]+η_α^x+1(2j-η_α^x)[ϕ(x/N)-ϕ(x+1/N)]}
=
2j/N∑_x∈ℤη_α^x[ϕ(x-1/N)+ϕ(x+1/N)-2ϕ(x/N)]
by the Taylor's series with Lagrange remainder computed in (<ref>) we obtain
N^2ℒX_α^N,·(ϕ) =(2j)/N∑_x∈ℤ(η_α^x-2jp_α)Δϕ(x/N)+R_0(ϕ,α)
where
R_0(ϕ,α)=(2j)/N∑_x∈ℤη_α^x[1/61/N[ϕ^(3)(x+θ^+/N)-ϕ^(3)(x-θ^-/N)]].
with θ^+, θ^-∈(0,1) and where ϕ^(3) denotes the third derivative of ϕ.
Observing that ϕ∈ C^∞_c(ℝ) and η_α^x≤ 2j, then R_0(ϕ,α) is infinitesimal when N→∞. Therefore
N^2ℒX_α^N,·(ϕ)=(2j)/N∑_x∈ℤη_α^xΔϕ(x/N)+o(1/N).
Replacing (<ref>) in (<ref>) we obtain
m_α,ϕ^N,t(X)+o(1/N^2)=X_α^N,t(ϕ)-X_α^N,0(ϕ)-(2j)∫_0^tX_α^N,s/N^2(Δϕ)ds
where on the right-hand-side we recognize the discrete counterpart of the weak formulation of the heat equation with constant diffusivity 2j for the species α. We shall prove that
lim_N→∞P_N(sup_[0,T]|X_α^N,t(ϕ)-X_α^N,0(ϕ)-(2j)∫_0^tX_α^N,s/N^2(Δϕ)ds|>δ)=0.
We find an upper bound by Chebyshev's and Doob's inequalities
P_N(sup_[0,T]|X_α^N,t(ϕ)-X_α^N,0(ϕ)-(2j)∫_0^tX_α^N,s/N^2(Δϕ)ds|>δ)
≤1/δ^2𝔼_μ_N[sup_[0,T]|m_α,ϕ^N,t|^2]
≤4/δ^2𝔼_μ_N[|m^N,T_α,ϕ|^2].
Moreover, by Doob's decomposition
𝔼_μ_N[|m^N,T_α,ϕ|^2] =𝔼_μ_N[∫_0^TN^2Γ_α,α^ϕ,s/N^2ds]
where Γ_α,α^ϕ,s denotes the operator (<ref>) but with the generator ℒ acting on the density field (<ref>). Here, for the sake of notation, we do not write the time dependence. We then obtain
Γ_α,α^ϕ =1/N^2∑_x∈ℤ∑_k,l=0^nη_k^xη_l^x+1[∑_y∈ℤϕ(y/N)((η_α^y-δ_k^x+δ_l^x+δ_k^x+1-δ_l^x+1)-η_α^y)]^2
=
1/N^2∑_x∈ℤη_α^x∑_l=0 : l≠α^nη_l^x+1[ϕ(x+1/N)-ϕ(x/N)]^2+1/N^2∑_x∈ℤ∑_k=0 : k≠α^nη_k^xη_α^x+1[-ϕ(x+1/N)+ϕ(x/N)]^2
=
1/N^2∑_x∈ℤ(η_α^x∑_l=0 : l≠α^nη_l^x+1+η_α^x+1∑_k=0 : k≠α^nη_k^x)[ϕ(x+1/N)-ϕ(x/N)]^2
by Taylor's series with Lagrange remainder we obtain
N^2Γ_α,α^ϕ=1/N^2∑_x∈ℤ(η_α^x∑_l=0 : l≠α^nη_l^x+1+η_α^x+1∑_k=0 : k≠α^nη_k^x)∇(ϕ)^2(x/N)+o(1/N^2).
Using (<ref>), (<ref>), the boundedness |η^x_α|≤ n 2j ∀ x∈ℤ and ∀α∈{1,…, n} and the fact that ∇ϕ is smooth and has compact support we obtain
𝔼_μ_N[|m_α,ϕ^N,T|^2] ≤ NC/N^2sup_x∈ℤ, t∈ [0,T]𝔼_μ_N[(η^x_α∑_l=0 : l≠α^nη_l^x+1+η_α^x+1∑_k=0 : k≠α^nη_k^x)]+o(1/N^2)
≤C/N+o(1/N^2).
Taking the limit and using (<ref>) and (<ref>)
lim_N→∞P_N(sup_[0,T]|X_α^N,t(ϕ)-X_α^N,0(ϕ)-(2j)∫_0^tX_α^N,s(Δϕ)ds|>δ)≤lim_N→∞C/N=0.
With the above convergence and by standard computations we can prove that the sequence of measures (P_N)_N∈ℕ defined in Theorem <ref> is tight and
that all limit points coincide with ρ^(α)(t,x)dx, where ρ^(α)(t,x) is the unique solution of
∂_tρ^(α)(t,x)=(2j)Δρ^(α)(t,x)
ρ^(α)(0,x)=ρ^(α)(x)
provided that ρ^(α)(x) is compatible with the initial sequence of measures (μ_N)_N∈ℕ in the sense of Definition <ref>. Finally, existence and uniqueness of a strong solution of the above system of equations is standard.
□
Proof of Theorem <ref>: the generator of the process is given by (<ref>), i.e. it is given by the sum of ℒ defined in (<ref>) and ℒ^r defined in (<ref>). Therefore, here we only need to perform the computations for the second one. We diffusively scale the switching rate γ=Υ/N^2; then the generator reads
ℒ^rf(ζ)=Υ/N^2∑_x∈ℤ∑_k,l=1^nζ_k^x[f(ζ-δ_k^x+δ_l^x)-f(ζ)]
where Υ∈ (0,+∞).
We compute the action of this generator on the density field (<ref>)
ℒ^r𝒳_α^N,·(ϕ) =Υ/N^3∑_x∈ℤ∑_k,l=1^nζ_k^x[∑_y∈ℤϕ(y/N)((ζ_α^y-δ_k^x+δ_l^x)-ζ_α^y)]
=Υ/N^3∑_x∈ℤ(∑_k=1 : k≠α^nζ_k^x-ζ_α^x)ϕ(x/N)
=
Υ/N^2(∑_k=1 : k≠α^n𝒳_k^N,·(ϕ)-𝒳_α^N,·(ϕ)).
Then,
∫_0^tN^2ℒ^r𝒳_α^N,s/N^2(ϕ)ds=∫_0^tΥ(∑_k=1 : k≠α^n𝒳_k^N,s/N^2(ϕ)-𝒳_α^N,s/N^2(ϕ))ds.
Arguing as in the proof of the Theorem <ref>, we need to bound the quadratic variation. We explicitly compute
ℒ(𝒳^N,t_α(ϕ)𝒳_β^N,t(ϕ))-𝒳_α^N,t(ϕ)ℒ(𝒳_β^N,t(ϕ))-𝒳_β^N,t(ϕ)ℒ(𝒳_α^N,t(ϕ))
= Υ/N^2∑_x∈ℤ∑_k,l=1^nζ_k^x[∑_y∈ℤϕ(y/N)((ζ_α^y-δ_k^x+δ_l^x)-ζ_α^y)]^2
= Υ/N^2∑_x∈ℤ∑_k=1^nζ_k^xϕ^2(x/N)
≤ C/N^2N
Arguing as in the proof of Theorem <ref> we can show that
lim_N→∞P_N (sup_[0,T]|𝒳_α^N,t(ϕ)-𝒳_α^N,0(ϕ)-(2j)∫_0^t𝒳_α^N,s/N^2(Δϕ)ds
..
..
+∫_0^tΥ(∑_k=1 : k≠α^n𝒳_k^N,s/N^2(ϕ)-𝒳_α^N,s/N^2(ϕ))ds|>δ)=0.
The proof of tightness for the sequence of measures (P_N)_N∈ℕ defined in Theorem <ref> and the uniqueness of the limit point are standard and analogous to the ones of Theorem <ref>.
□
unsrt
|
http://arxiv.org/abs/2307.03903v2 | 20230708050310 | Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification | [
"Huafeng Li",
"Le Xu",
"Yafei Zhang",
"Dapeng Tao",
"Zhengtao Yu"
] | cs.CV | [
"cs.CV"
] |
Article Title]Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification
1]Huafeng [email protected]
These authors contributed equally to this work.
1]Le [email protected]
These authors contributed equally to this work.
[1]Yafei [email protected]
2]Dapeng [email protected]
1]Zhengtao [email protected]
[1]Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, Yunnan, China
[2]School of Information Science and Engineering, Yunnan University, Kunming, 650091, Yunnan, China
In visible-infrared video person re-identification (re-ID), extracting features not affected by complex scenes (such as modality, camera views, pedestrian pose, background, etc.) changes, and mining and utilizing motion information are the keys to solving cross-modal pedestrian identity matching. To this end, the paper proposes a new visible-infrared video person re-ID method from a novel perspective, i.e., adversarial self-attack defense and spatial-temporal relation mining. In this work, the changes of views, posture, background and modal discrepancy are considered as the main factors that cause the perturbations of person identity features. Such interference information contained in the training samples is used as an adversarial perturbation. It performs adversarial attacks on the re-ID model during the training to make the model more robust to these unfavorable factors. The attack from the adversarial perturbation is introduced by activating the interference information contained in the input samples without generating adversarial samples, and it can be thus called adversarial self-attack. This design allows adversarial attack and defense to be integrated into one framework. This paper further proposes a spatial-temporal information-guided feature representation network to use the information in video sequences. The network cannot only extract the information contained in the video-frame sequences but also use the relation of the local information in space to guide the network to extract more robust features. The proposed method exhibits compelling performance on large-scale cross-modality video datasets. The source code of the proposed method will be released at <https://github.com/lhf12278/xxx>.
[
[
August 12, 2023
===================
§ INTRODUCTION
Person re-identification (re-ID) is a technology used to determine whether the person images or sequences captured by non-overlapping cameras belong to the same identity <cit.>. Most of the related works, such as domain generalization <cit.> and domain adaptation<cit.>, focus on person re-ID under normal illumination (visible modality). In recent years, cross-modality person re-ID <cit.> based on visible and infrared images have attracted more and more attention since they can meet the requirements of pedestrian image matching under poor illumination at night.
The main difficulty faced by visible-infrared person re-ID is the modality discrepancy between pedestrian images of two modalities. Most methods <cit.> attempt to study how the features can be learned without being affected by the modality of images. These methods can be roughly divided into methods based on adversarial learning <cit.>, methods guided by intermediate modality <cit.>, and methods embedded with high-level semantic information <cit.>. The adversarial learning-based methods achieve modal confusion by adversarial training between the encoder and the discriminator, thereby reducing the difference between different modalities. The intermediate modality-based methods use the information of intermediate modality to guide or strengthen the role of modality invariant features in identity matching. The semantic information-based methods improve the cross-modality capability of features by introducing high-level semantic information into visual features.
The above methods only consider modal discrepancy's impact on person identity matching. However, some aspects are ignored, such as the diversity of pedestrian appearance features caused by view discrepancies, diverse postures of person, and background changes, etc. Moreover, they are designed for the matching between person images and do not consider the information contained in the video sequences. Therefore, if the existing cross-modality person re-ID methods are directly applied to the cross-modality video person re-ID, the retrieval performance may not be optimal. Although video-based person re-ID methods <cit.> are widely applied, they usually do not consider the relationship between different parts of the pedestrian's body under the motion state, which limits the further improvement of the recognition performance. Furthermore, most of the existing video person re-ID methods focus on the identity matching between video sequences under normal lighting conditions, ignoring the impact of the modal discrepancy between infrared and visible person images. Although Lin et al. <cit.> proposed a video-based cross-modality person re-ID method and created the first dataset for this task recently, there are still few studies involving the solution to this problem.
We propose a cross-modality video person re-ID method from a novel view—adversarial self-attack and defense. In the proposed method, we regard all unfavorable factors contaminating the model performance as adversarial perturbations. Factors such as the change of camera view, the existence of occluders, the difference in posture, and the gap between modalities lead to a certain diversity of appearance features of the same identity, as illustrated in Fig. <ref>.
We regard all the differences in appearance features of the same identity caused by all factors as information perturbations. The robustness of the re-ID model is strengthened by improving its defense ability against perturbations. Technically, the proposed method is mainly composed of the adversarial self-attack module (ASAM), adversarial defense module (ADM) and feature representation module under spatial-temporal information guidance (FRM-STIG). The ASAM is mainly used to activate the adversarial perturbations and implement the attack on the ADM. More specifically, the ASAM is used to guide the single-modality feature extraction network to activate the perturbations in the input training samples. With the effect of ASAM, the robustness of the ADM is enhanced in adversarial training of the ADM. The proposed method does not need to synthesize adversarial samples to train the model but activates the adversarial perturbations of the training samples to realize the adversarial attack.
In FRM-STIG, considering the discrimination of the spatial relationship of different body parts under the motion state, we propose to embed the temporal information contained in the video frames into spatial relations of different body parts. To effectively utilize spatial relation to improve the discrimination of features, we propose a spatial-temporal relation-guided feature representation method. More attention is paid to the features related to motion information and spatial relation. Thanks to this design, both the spatial relation of different body parts during motion and the temporal information can be embedded into the pedestrian features, which helps to improve the accuracy and robustness of person video sequence description. Finally, the features with motion information and the features generated by the guidance of spatial-temporal relations are combined as the final features to describe pedestrians.
The main contributions of this paper are as follows:
* A solution is proposed to solve the impact of modal discrepancy, posture changes, complex background and other factors on person identity matching. The complex perturbations carried by the multi-modality images are treated as the adversarial attack information of the re-ID model. At the same time, by improving the defense ability of the re-ID model against these perturbations, the robustness of the model to complex factors can be improved accordingly.
* An adversarial self-attack strategy is proposed to activate the perturbation information contained in the input samples without generating adversarial samples. This design allows adversarial attack and defense to be integrated into one framework.
* A spatial relation mining mechanism is proposed for different parts of a person based on temporal information embedding. A feature highlight mechanism guided by spatial-temporal relations is designed to construct features not affected by modality.
* The validity of the proposed method is verified on the challenging large-scale visible-infrared video re-ID dataset—VCM and the state-of-the-art performance is obtained under two commonly used evaluation metrics.
The rest of this paper is organized as follows. Section <ref> discusses the related state-of-the-art works. Section <ref> elaborates the proposed method in detail. The experimental results are explained in Section <ref> and Section <ref> summarizes the content of this paper and draws conclusions.
§ RELATED WORK
§.§ Visible-Infrared Person Re-ID
To solve the modal discrepancy between visible and infrared person images, Wang et al. <cit.> proposed an adversarial generation method to learn the modality-invariant feature. The encoder can extract the features not affected by the modality information via playing a min-max game between the encoder and the modality discriminator. Given the significance of adversarial learning, a series of practical methods have been conducted <cit.>. However, in those methods, the discriminator is used to identify the modal differences between visible and infrared person images, which may cause the loss of information related to personal identity and is not conducive to matching person identity. Another popular way to learn modality-invariant features is to use the intermediate information between two modalities as guidance <cit.>. Specifically, Zhong et al. <cit.> proposed the gray-scale image of a person as an intermediate modality to assist in extracting modality-invariant features.
Considering the modality invariance of edge details of a person image, Gao et al. <cit.> enhanced the cross-modal matching ability of features by highlighting the role of edge details in features. Basaran et al. <cit.> proposed to extract modality-invariant identity features by introducing the anaglyph. However, it ignores the high-level semantic information between different body parts or critical points of a person. Such information is usually modal invariant and often used in cross-modality person re-ID. Miao et al. <cit.> proposed a cross-modality person re-ID method based on high-order relationship mining of person key points. Chen et al.<cit.> proposed a modality-invariant feature extraction method by mining different part relationships. Those methods are devoted to extracting features not affected by modality, where the challenges to person identity matching caused by the diversity of person appearance features are not considered. The proposed method is based on adversarial self-attack and defense such that the changes in personal appearance features caused by all factors are deemed adversarial perturbations. The shortcomings of the above methods can be alleviated by improving the model's robustness against such perturbations.
§.§ Video Person Re-ID
Videos usually contain many motion information, which carries pedestrian identity clues. The video person re-ID has received more and more attention <cit.>. Wu et al. <cit.> proposed 3D ConvNet as a new layer of feature extraction network to extract a person's appearance and motion information from the video sequences simultaneously. Chen et al. <cit.> proposed spatial-temporal awareness to pay attention to the significant parts of a person in both temporal and spatial domains simultaneously and highlight the effect of this part in identity matching. Li et al.<cit.> proposed a global-local temporal representation (GLTR) method for video person re-ID. This method aggregates the short-term temporal cues and long-term relations as the final GLTR. Liu et al. <cit.> proposed a co-saliency spatial-temporal interaction network (CSTNet) for video person re-ID. The method learned discrimination feature representation by capturing the salient foreground regions in the video and exploring the spatial-temporal long-range context interdependency from such regions. Yang et al. <cit.> designed a two-stream dynamic pyramid representation model to solve the problems of mining spatial-temporal information, suppressing redundant information and improving data quality for video person re-ID. The method used dynamic pyramid deflated convolution and pyramid attention pooling to acquire the person's motion information and static appearance. Eom et al. <cit.> designed a spatial and temporal memory network to address the challenge of person occlusion by using prior knowledge that spatial distractors always appear in a particular location. In contrast, temporal distractors usually appear in the first few frames. Liu et al. <cit.> adopted a bi-directional (forward and backward) mechanism to extract the temporal information in the video sequence.
Although the above methods effectively utilized the motion information for person re-ID, they ignore the potential structural relationship of a person's body parts in space, limiting the further improvement of feature discrimination. Yan et al. <cit.> proposed multi-granular hypergraphs to mine the temporal information of different granularity regions. They modeled spatial-temporal dependencies in terms of multiple granularities, which effectively improved the performance of video person re-ID. Liu et al.<cit.> proposed a spatial-temporal correlation multi-scale topology learning framework to realize video person re-ID. The method achieved hierarchical spatial-temporal dependencies and pedestrian structure information through 3D and cross-scale graph convolution. To solve the problem that 3D convolution is easily affected by the misalignment of person features in mining temporal information, Chen et al. <cit.> proposed a human-oriented graph method. Although the above methods based on graph convolution can mine the spatial relationship between nodes, they cannot extract long-term spatial cues. Since the transformer is more suitable for extracting the long-term relationship of features, Zhang et al. <cit.> proposed a spatial-temporal transformer for video person re-ID. The method is mainly composed of a spatial transformer and a temporary transformer. The former is used to extract the spatial features of person images, and the latter is used to extract the features of a person in video sequences. Although these methods consider the static spatial structure relation between different person regions, they ignore the discrimination of different person body parts when moving. The proposed method embeds the temporal information into the spatial structure information mining, resulting in a spatial relation mining scheme for different body parts of pedestrians in the state of motion.
§.§ Adversarial Attack and Defense in Person Re-ID
Adversarial attacks are designed to degrade the performance of deep neural networks by adding small-magnitude perturbations to original samples. The concept was first proposed by Szegedy et al. <cit.>. Wang et al. <cit.> developed a multi-stage network to perform black-box attacks, given the importance of cross-dataset transferability in Re-ID. It pyramids the features at different levels to extract the general and transferable features for the adversarial perturbations. To explore whether the re-ID model based on CNN is vulnerable to the attack of adversarial samples, Wang et al. <cit.> proposed an attack method called advPattern to generate adversarial patterns on clothes. Those methods focus on generating adversarial samples to invalidate the re-ID model without considering how to defend against attacks from adversarial samples.
One of the easiest ways to improve the re-ID model's robustness to adversarial examples is to incorporate them into training. In addition, some researchers consider identifying and excluding the adversarial samples from the training dataset via the detection algorithm, which can also avoid the attack from the adversarial samples on the model. Specifically, Wang et al. <cit.> proposed a multi-expert adversarial attack detection method to detect adversarial attack by checking context inconsistency. To fill the gap between training samples and test samples, Bai et al. <cit.> developed an adversarial metric attack while presenting an early attempt to produce a metric-preserving network, thereby protecting metrics from adversarial attacks. To defend the model against the attack of the adversarial samples, although it is simple and effective to use the adversarial samples directly to train the model, it does not maximize the robustness of the model to the adversarial samples. In this paper, we elaborately devise an adversarial self-attack and defense approach that enables the model to defend against the impact of the diversity of person identity features on matching performance. Unlike the existing methods for generating adversarial samples, the proposed method replaces the role of the adversarial samples by activating the adversarial perturbations contained in the training samples. The proposed method integrates adversarial attack and defense within a single framework.
§ PROPOSED METHOD
§.§ Overview
The framework proposed in this paper is mainly composed of the adversarial self-attack module (ASAM), adversarial defense module (ADM) and feature representation module under spatial-temporal information guidance (FRM-STIG), as shown in Fig. <ref>. The ASAM is mainly used to activate perturbations in the training samples and achieve the re-ID model's adversarial training. The ADM extracts discrimination features from the sample in which the perturbations are activated. The FRM-STIG extracts the information carried in the sequence and uses them to enhance the effect of features related to motion information. To comprehensively use the information carried in the video sequences, the FRM-STIG integrates visual features with spatial-temporal information to accurately describe a person.
§.§ Adversarial Self-Attack Module
The ASAM is designed to enable the training samples to replace the role of the adversarial samples. It is implemented in a single ResNet50 framework. The ASAM module contains the Conv Layer, Intra-Modality Self-Attention (IMSA) Layer, Feature Encoder E_att, Global Average Pooling (GAP) Layer, and Batch Normalization (BN) Layer. The Conv Layer here refers to the first convolution layer of ResNet50. The E_att is composed of the last four layers of ResNet-50. The IMSA is used to highlight the role of the perturbations in the feature maps output by the Conv Layer. This Conv Layer generates perturbation information in the training samples to make the re-ID model yield its original performance. Compared with existing methods, ASAM does not need to generate new adversarial samples and only uses the original ones to achieve regular and adversarial training for the re-ID model. We denote the video sequence of different modality of the same identity V= { V_t∈ℝ^H × W} _t = 1^T and I = { I_t∈ℝ^H × W} _t = 1^T, where H and W represent the height and width of a single video frame, V and I represent a sequence of person video frames in visible and infrared modality, respectively. T is the total number of frames in a sequence, t means the index of the t-th frame. The results obtained by inputting the video sequence V and I into the Conv Layer can be expressed as:
F_t,c^V = E^V( V_t), F_t,c^I = E^I( I_t) (t=1,2, ⋯, T)
,
where F_t,c^V and F_t,c^I denote the features output by the Conv Layer. E^V and E^I are the encoders consisting of the first convolution layer of ResNet-50, and their parameters are not shared. The encoders E^I and E^V are respectively used to extract the shallow features of visible and infrared person images.
To highlight the role of the perturbations carried in F_t, c^V and F_t, c^I, we first send F_t, c^V and F_t, c^I to the IMSA layer, whose structure is shown in Fig. <ref> and the output results can be expressed as:
F̂_t^V = IMSA( F_t,c^V), F̂_t^I = IMSA( F_t,c^I)
.
To activate the perturbations in F_t,c^V and F_t,c^I such that they can replace the generation of adversarial examples, we send F̂_t^V and F̂_t^I to E_att. After the perturbation information is activated, the features output by E_att followed by GAP and BN should be misclassified by the pre-trained person identity classifier W_def of the ADM.
To this end, we use the following identity loss function to optimize E^V, E^I and E_att:
ℓ_cov_id=-2/n_b(∑_i=1^n_b/2 q(log( W_def( f_i^V))+log( W_def( f_i^I)))
,
where W_def is a pre-trained person identity classifier, q=(1/M,1/M, … ,1/M)^T, M is the total number of person identifies in the training set, n_b is the number of video sequences in a batch, and
f_i^l= BN(GAP( E_att(F̂_1, i^l, F̂_2,i^l, ⋯, F̂_T,i^l)))
,
where F̂_t,i^l(t = 1, 2 ⋯, T; l = V, I; i = 1, 2, ⋯, n_b/2) is the feature map of the t-th frame of the i-th sequence in the modality l output by IMSA.
Minimizing Eq. (3) activates the perturbations in the person images. In this paper, the perturbations are regarded as adversarial attack information, and their activation helps improve the immunity of the defense network to such disturbances. In order to make the re-ID model robust to the diversity of pedestrian appearance features, f_i^l is expected both to carry out the adversarial attack and to remain related to the person identity. E^V, E^I and E_att are therefore further updated by:
ℓ_att_id= - 2/n_b(∑_i = 1^n_b/2 q_i (log ( W_att( f_i^V))+ log ( W_att( f_i^I))))
,
where q_i is a one-hot vector representing the identity of f_i^V and f_i^I. W_att is the person identity classifier only used in ASAM.
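A minimal PyTorch-style sketch of the two ASAM objectives in Eqs. (3) and (5) is given below. It is only an illustration of the loss formulas (up to batch-normalisation constants): module names, feature dimensions and the way W_def is frozen are our own simplifications, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def asam_losses(f_vis, f_inf, labels, w_att, w_def):
    """f_vis, f_inf: (B, D) sequence features from E_att -> GAP -> BN, one row per sequence."""
    logits_att = torch.cat([w_att(f_vis), w_att(f_inf)], dim=0)
    logits_def = torch.cat([w_def(f_vis), w_def(f_inf)], dim=0)
    targets = torch.cat([labels, labels], dim=0)

    # Eq. (5): the activated features must still carry identity information (classifier W_att)
    loss_att_id = F.cross_entropy(logits_att, targets)

    # Eq. (3): the pre-trained defense classifier W_def should be driven towards the uniform
    # prediction q = (1/M, ..., 1/M), i.e. the activated perturbation attacks the ADM
    log_prob = F.log_softmax(logits_def, dim=1)
    loss_cov_id = -(log_prob.mean(dim=1)).mean()      # cross-entropy against the uniform vector q

    return loss_cov_id, loss_att_id

# usage sketch: 8 sequences per modality, M = 500 training identities, 2048-d features
B, D, M = 8, 2048, 500
w_att, w_def = torch.nn.Linear(D, M), torch.nn.Linear(D, M)
for param in w_def.parameters():          # W_def is pre-trained and not updated by these losses
    param.requires_grad_(False)
f_vis, f_inf = torch.randn(B, D), torch.randn(B, D)
labels = torch.randint(0, M, (B,))
loss_cov, loss_att = asam_losses(f_vis, f_inf, labels, w_att, w_def)
```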
§.§ Adversarial Defense Module
ASAM is to activate the perturbations in the training samples and replace the role of the adversarial samples in the adversarial training to improve the robustness of the defense network E_def against the perturbations. To make E_def more immune to attack from the perturbations, an Adversarial Defense Module (ADM) is designed. ADM is mainly composed of defense network E_def, cross-modality cross-attention (CMCA) layer, GAP and BN layers, as shown in Fig. <ref>. The main task of the ADM is to
endow E_def with strong defense ability against the perturbations. In cross-modality person re-ID, modal-invariant features play a positive role in promoting the matching accuracy of person identities. Therefore, the features extracted by E_def contain rich information on different modalities, which would be helpful to defend against attacks from feature perturbation. A CMCA layer is embedded in the ADM, as shown in Fig. <ref>.
As shown in Fig. <ref>, there are two CMCA layers in the ADM, one embedded after the third convolution layer of E_def and the other embedded after the last convolution layer of E_def. E_def3 denotes the encoder composed of the first three convolution layers of E_def. After the perturbations are activated by the ASAM, the feature maps F_t,c^V and F_t,c^I of the t-th frame are sent to the encoders E_def3 and E_def, and the results are:
F_t,d3^V= E_def3( F_t,c^V), F_t,d3^I= E_def3( F_t,c^I)
F_t,d^V= E_def( F_t,c^V),
F_t,d^I= E_def( F_t,c^I)
.
After F_t, d3^V, F_t, d3^I, F_t, d^V and F_t, d^I are input into CMCA layer, the results can be expressed as:
F̅_t,d3^V = ConLa_4 (CMCA( F_t, d3^V, F_t, d3^I))
F̅_t,d3^I = ConLa_4 (CMCA( F_t, d3^I, F_t, d3^V))
F̅_t,d^V = CMCA( F_t, d^V, F_t, d^I)
F̅_t,d^I = CMCA ( F_t, d^I, F_t, d^V)
,
where ConLa_4 denotes the last convolution layer of E_def.
The common information can be extracted by embedding the CMCA layer in E_def. The first CMCA layer enables E_def to extract discrimination feature maps with the common information on a shallow convolution layer. The second CMCA layer is used to ensure that the feature maps extracted by E_def contain common information for identity matching. To integrate the complementary information existing in F̅_t,d3^l and F̅_t,d^l (l=V,I) and realize the accurate description of person appearance features, we fuse F̅_t,d3^l and F̅_t,d^l (l=V,I), respectively, and the fused results are sent to the GAP and BN layers.
The feature vectors obtained are:
f_d^V = BN(GAP((F̅_1,d3^V + F̅_1,d^V)/2, ⋯, (F̅_T,d3^V + F̅_T,d^V)/2))
f_d^I = BN(GAP((F̅_1,d3^I + F̅_1,d^I)/2, ⋯ ,(F̅_T,d3^I + F̅_T,d^I)/2))
.
To ensure that f_d^V and f_d^I have strong discrimination, we use the identity loss to optimize E_def:
ℓ_def_id =- 2/n_b(∑_i = 1^n_b/2 q_i(log (W_def(f_d,i^V))+ log ( W_def( f_d,i^I))
))
,
where f_d, i^V and f_d,i^I represent the features of the i-th sequence in the visible and infrared modality, respectively. The triplet loss is used to solve the hard samples problem:
ℓ_def_tri= 1/n_b∑_i = 1^n_b[ f_d,i^a - f_d,i^p_2^2 - f_d,i^a - f_d,i^n_2^2 + α] _ + ,
where [∙]_+=max{0, ∙}, f_d,i^a and f_d,i^p represent the features of the i-th anchor sample sequence and the hard positive sample sequence with the same identity in a mini-batch. f_d,i^n is a hard negative sample sequence with different identity from f_d,i^a. α denotes the margin (>0, empirically set to 0.3 in this work). The features f_d,i^a, f_d,i^p and f_d,i^n are generated by Eq. (8).
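The two defense objectives in Eqs. (9) and (10) amount to an identity cross-entropy plus a batch-hard triplet term with margin α = 0.3. The sketch below is a generic implementation consistent with those formulas; the helper names are ours, and the way hard triplets are mined inside the mini-batch follows common practice rather than the authors' exact code.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet(features, labels, margin=0.3):
    """Eq. (10): for every anchor use the hardest positive and hardest negative in the mini-batch."""
    dist = torch.cdist(features, features, p=2).pow(2)          # pairwise squared Euclidean distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1))
    hardest_pos = (dist * same.float()).max(dim=1).values       # largest distance among same identity
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()

def adm_losses(f_vis, f_inf, labels, w_def):
    feats = torch.cat([f_vis, f_inf], dim=0)                    # f_d^V and f_d^I of Eq. (8)
    ids = torch.cat([labels, labels], dim=0)
    loss_def_id = F.cross_entropy(w_def(feats), ids)            # Eq. (9)
    loss_def_tri = batch_hard_triplet(feats, ids, margin=0.3)   # Eq. (10)
    return loss_def_id, loss_def_tri
```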
§.§ Feature Representation under Spatial-temporal Information Guidance
The video sequence of a person contains a lot of motion information, which is not affected by the modality changes. In a single pedestrian image, there is a latent spatial relation between different regions of the pedestrian's body, and different pedestrians usually show different relations. In order to guide the discrimination features learning, a spatial-temporal information mining approach is proposed, as shown in Fig. <ref>.
The proposed method is mainly composed of two parts: 1) feature highlighting guided by spatial relation (FH-G-SR) and 2) feature representation embedded by temporal information (FR-E-TI). To highlight the discrimination feature with spatial relation guidance, we first use PCB <cit.> to divide the feature map F_t, d^V and F_t, d^I into K different patches in space, which are converted into feature vectors. The feature vectors of the k-th patches of F_t,d^V and F_t,d^I are denoted as f_t, d^V, k and f_t, d^I, k (k = 1,2, ⋯, K). Based on the experience of PCB, K is set to 6 in this paper. As can be seen in Fig. <ref>, the video sequence features { f_1, d^l, k, ⋯, f_T, d^l,k} are sent into LSTM_mot to obtain the features embedded with motion information, expressed as {f̃_1, d^l,k, ⋯ , f̃_T, d^l,k}. Since the potential spatial relation between patches at the different positions is not involved, a sequence of different patches (of the same frame) is formed and input into LSTM_spa for spatial relationship mining between different patches. Considering that after the last frame passes through LSTM_mot, the obtained features integrate the information of all previous frames, we only form a sequence {f̃_T, d^l,1, ⋯ , f̃_T, d^l,K}(l = V, I) of all patch features of the T-th frame and send it to LSTM_spa.
§.§.§ FH-G-SR
The result f̅_T, d^l, K obtained after feeding {f̃_T, d^l,1, ⋯ , f̃_T, d^l,K}(l = V, I) into LSTM_spa is the final spatial feature representation. To effectively utilize the spatial relations and the information carried by f_t,d^l,k to highlight discrimination features, we concatenate f̅_T, d^l, K and the original visual feature f_t, d^l, k:
f̂_t,d^l,k=concat( f_t,d^l,k, f̅_T,d^l,k)
.
As shown in Fig. <ref>, after f̂_t,d^l,k passes through the linear mapping (LM) layer, ReLU activation function, the other LM layer, and the Sigmoid activation function, the corresponding weight matrix for features highlighting is obtained by:
A_t, d^l, k=Sigmoid(LM(ReLU(LM(f̂_t,d^l,k))))
.
With A_t, d^l, k, the feature f_t, d^l, k highlighted by spatial relation guidance is:
ḟ_t, d^l, k= f_t, d^l, k⊙ A_t, d^l, k.
§.§.§ FR-E-TI
Although ḟ_t, d^l, k makes use of the spatial relation between patches, it is only the visual feature of a person, and it does not integrate the motion information contained in the video sequence. Therefore, we embedded the features {f̃_1, d^l,k, ⋯, f̃_T, d^l,k} carrying motion information into the enhanced features in Eq. (13):
f̈_t, d^l,k = ḟ_t, d^l, k + f̃_t, d^l,k.
GAP is used to fuse the T frame features {f̈_1, d^l, k, ⋯, f̈_T, d^l, k}, yielding the feature representation of the k-th patch of modality l:
f̈_d^l,k = GAP(f̈_1, d^l,k, ⋯, f̈_T, d^l,k)
.
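In code, the embedding and fusion steps, together with the patch concatenation described next, reduce to a few tensor operations; the sketch below assumes the same (batch, frames, patches, channels) layout as above.

def fr_e_ti(f_highlighted, f_mot):
    # f_highlighted, f_mot: (B, T, K, D) highlighted patch features and motion-embedded features
    f = f_highlighted + f_mot      # embed the motion information
    f = f.mean(dim=1)              # GAP over the T frames -> (B, K, D)
    return f.flatten(1)            # concatenate the K patch features -> (B, K * D)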
Finally, we concatenate the features of all patches together according to their spatial positions on the image to form a complete person representation f̈_d^l (l = V, I). To ensure its discriminability, the cross-entropy loss is used:
ℓ_p_id= - 2/n_b∑_i = 1^n_b/2[ q_ilog( W_se( f̈_d, i^V))+ q_ilog( W_se( f̈_d, i^I)) ]
,
where W_se is the identity classifier of f̈_d, i^l which is generated via Eq. (15). f̈_d, i^l denotes the sequence feature of the i-th video sequence in one batch.
The overall loss function in the proposed approach is:
ℓ_total = ℓ_cov_id+ λ _1 ℓ_att_id+ λ _2 (ℓ_def_id+ ℓ_def_tri) +λ _3 ℓ_p_id,
where λ _1, λ _2, λ _3 are hyper-parameters, which are used to adjust the contribution of the corresponding loss terms. The training process is summarized in Algorithm <ref>.
§ EXPERIMENTS
§.§ Experimental Settings
The dataset used in this experiment is a large-scale cross-modal video-based person re-ID dataset—VCM proposed by Lin et al. <cit.>, which is the first and only one currently constructed for the visible-infrared video person re-ID task. The dataset is recorded by 12 non-overlapping HD cameras and consists of 251,452 visible images and 211,807 infrared images with a resolution of 3,840 × 2,170. These images are further divided into 11,785 sequences in the visible modality and 10,078 sequences in the infrared modality. The dataset contains 927 identities, where 232,496 images of 500 identities involving a total of 11,061 sequences are used for training, and the remaining 230,763 images of 427 identities involving a total of 10,802 sequences are used for testing.
All experiments are carried out on a PC equipped with an NVIDIA TESLA A100 GPU in the Pytorch 1.10 framework <cit.>. In the training phase, all input images are adjusted to 288 × 144. The batch size is set to 32 (i.e., 32 sequences are processed in a mini-batch). In each epoch, 16 sequences of each modality enter the model for training (containing eight identities, each containing two sequences). Each sequence consists of 6 frames, a total of 192 frames. The model is trained for 200 epochs (each of which contains 268 iterations). The first 150 epochs are used to train E^V, E^I, E_att, E_lstm, W_att and W_def. For the remaining 50 epochs, we fix W_def and fine-tune E_def to further enhance the network's defense capability. The entire training is realized by using SGD optimizer with the momentum of 0.9, weight decay of 5 × 10^-4 and learning rate of 0.12. A warm-up strategy <cit.> is applied to tune the learning rate linearly. Cumulative Matching Curve (CMC) <cit.> and mean Average Precision (mAP) <cit.> are used as the evaluation metrics for model performance.
§.§ Ablation Study
The proposed method consists of an adversarial self-attack module (ASAM), adversarial defense module (ADM), and feature representation module under spatial-temporal information guidance (FRM-STIG). E^V, E^I and E_def trained by cross-entropy loss and triplet loss are regarded as “Baseline”. The “Baseline” is pre-trained on ImageNet <cit.> before training on the dataset VCM. We denote the method of adding ASAM to the baseline as “Baseline+ASAM”; similarly, we obtain “Baseline+ADM”, “Baseline+FRM-STIG” and “Baseline+ASAM+ADM”. When FH-G-SR is removed from FRM-STIG and the remaining content is added to “Baseline+ASAM+ADM”, the resulting method is denoted “Baseline+ASAM+ADM+FR-E-TI”. The complete proposed model is marked “Baseline+ASAM+ADM+FRM-STIG”. Furthermore, in order to verify the contribution of LSTM_spa, LSTM_spa is removed from “Baseline+ASAM+ADM+FRM-STIG” and the corresponding model is denoted as “Baseline+ASAM+ADM+FRM-STIG*”. The results of the ablation experiment are reported in Table <ref>.
Effectiveness of ASAM. To verify the effect of the ASAM, we add the ASAM to the “Baseline”, and obtain the model “Baseline+ASAM”. One can see in Table <ref> that, on the “Infrared to Visible” task, Rank-1 and mAP achieved by “Baseline+ASAM” decrease by 14.4% and 12.57%, respectively. For the task of “Visible to Infrared”, Rank-1 and mAP are reduced by 15% and 14.96%, respectively. These drops indicate the effectiveness of the attacks produced by the perturbations activated in the shallow features.
Effectiveness of ADM. In Table <ref>, Rank-1 and mAP accuracy of “Baseline+ADM” on the task of querying the visible sequence from the infrared sequence (i.e., Infrared to Visible) is 58.92% and 44.50%, respectively. Compared with that of “Baseline”, the performance is improved by 0.87% and 1.27%, respectively. On the “Visible to Infrared” task, the accuracy of Rank-1 and mAP reaches 62.58% and 46.55%, respectively. Compared with that of “Baseline”, the performance of “Baseline+ADM” is improved by 1.57% and 0.96%. It implies that the ADM still has a positive effect on the model performance when the ASAM is absent. It can also be observed that when ASAM and ADM are added to “Baseline” together, the performance of “Baseline+ASAM+ADM” is improved. Compared with “Baseline+ADM”, Rank-1 and mAP of “Baseline+ASAM+ADM” on the “Infrared to Visible” task are increased from 58.92% and 44.50% to 60.27% and 46.09%. On the “Visible to Infrared” task, Rank-1 and mAP are improved from 62.58% and 46.55% to 63.01% and 48.05%. These results demonstrate the effectiveness of the adversarial training.
Effectiveness of FR-E-TI. FH-G-SR is removed from FRM-STIG to evaluate the validity of FR-E-TI with temporal information embedding. It can be seen in Table <ref> that when the FR-E-TI is added to “Baseline+ASAM+ADM”, Rank-1 and mAP of the model “Baseline+ASAM+ADM+FR-E-TI” on “Infrared to Visible” (“Visible to Infrared”) task are improved from 60.27% and 46.09% (63.01% and 48.05%) to 63.92% and 48.56% (66.42% and 50.40%), respectively. The improvement verifies the validity of FR-E-TI when FH-G-SR is absent.
Effectiveness of FRM-STIG.
It can be seen in Table <ref> that, with FRM-STIG, the Rank-1 and mAP accuracy of “Baseline+FRM-STIG” on the “Infrared to Visible” (“Visible to Infrared”) task increases from 58.05% and 43.23% (61.13% and 44.80%) to 61.01% and 45.59% (63.74% and 46.34%), respectively. For the same tasks, after FRM-STIG is added to “Baseline+ASAM+ADM”, the Rank-1 and mAP accuracy of the model “Baseline+ASAM+ADM+FRM-STIG” increases from 60.27% and 46.09% (63.01% and 48.05%) to 65.31% and 49.49% (67.66% and 51.76%), respectively. This verifies the contribution of FRM-STIG. Moreover, compared with the performance of “Baseline+ASAM+ADM+FR-E-TI”, Rank-1 and mAP of “Baseline+ASAM+ADM+FRM-STIG” on “Infrared to Visible” (“Visible to Infrared”) are improved from 63.92% and 48.56% (66.42% and 50.40%) to 65.31% and 49.49% (67.66% and 51.76%). This demonstrates the validity of FH-G-SR. Further, Rank-1 and mAP of “Baseline+ASAM+ADM+FRM-STIG*” on “Visible to Infrared” (“Infrared to Visible”) decrease by 1.24% and 0.7% (1.88% and 1.19%) compared with those of “Baseline+ASAM+ADM+FRM-STIG”. This verifies the contribution of LSTM_spa.
The visual effect of different settings in the ablation experiment on the retrieval results is shown in Fig. <ref>. From the results shown in Fig. <ref>, one can
see that the retrieval accuracy improves when the ADM is added to the “Baseline”. It is also found that when ASAM and ADM are added to the “Baseline” together, the model
performance is visually improved. It indicates that the adversarial attack and defense strategies proposed are effective. Besides, when FRM-STIG is added to “Baseline+ASAM+ADM”, the matching accuracy of sequences is further improved. Fig. <ref> shows the areas focused by “Baseline” and the proposed method, where the warmer the color is, the more attention the area receives. Those results indicate that the proposed method can better extract discriminative features from the person's body area than the Baseline.
§.§ Comparison with State-of-the-Arts
In order to verify the superiority of the proposed method over the existing methods, it is compared with LbA <cit.>, MPANet <cit.>, DDAG <cit.>, VSD <cit.>, CAJL <cit.>, MITML <cit.>, where the first five methods are designed for image-based visible-infrared person re-ID and the last one is for visible-infrared video person re-ID. Since the first five methods are proposed for the single-frame visible-infrared person image matching task, we remove FRM-STIG for comparison and denote the resulting method as “Proposed*” in Table <ref>. Rank-1 and mAP of “Proposed*” are 60.27% and 46.09% (63.01% and 48.05%) on “Infrared to Visible” (“Visible to Infrared”), which are 3.68% and 4.60% (2.88% and 5.24%) higher than those of the sub-optimal image-based method CAJL. Compared with the latest video person re-ID method MITML, Rank-1 and mAP of the proposed method are increased from 63.74% and 45.31% (64.54% and 47.69%) to 65.31% and 49.49% (67.66% and 51.76%) on the “Infrared to Visible” (“Visible to Infrared”) task. It shows that the proposed method outperforms all compared ones.
§.§ Parameter Analysis
In Eq. (17), three hyper-parameters λ_1, λ_2 and λ_3 need to be set. In this section, we discuss the influence of one hyper-parameter by fixing
the other two parameters. The performance of the proposed method with different hyper-parameters is shown in Fig. <ref>.
The influence of λ_1.
Fig. <ref> (a) and (b) show the effect of λ _1 when it varies in [0.1, 9]. On the “Infrared to Visible” task, the proposed method achieves the best performance when λ _1 = 1, and the performance degrades when λ _1>1. On the “Visible to Infrared” task, the proposed method is insensitive to changes in λ _1. Therefore, we set λ _1 to 1 in our method.
The influence of λ_2.
Fig. <ref> (c) and (d) show the changes in model performance when λ _2 varies from 0.01 to 4. On the “Infrared to Visible” task, the proposed method achieves the highest recognition accuracy when λ _2 = 0.5. On the “Visible to Infrared” task, the highest recognition accuracy is achieved when λ _2 = 0.1, and when λ _2 > 0.5 the performance shows a significant downward trend on both tasks. In this work, we set λ _2 to 0.5 for both tasks.
The influence of λ_3.
Fig. <ref> (e) and (f) show the changes in performance of the proposed algorithm on “Infrared to Visible” and “Visible to Infrared” tasks, when λ _3 varies between 0.1 and 5. As indicated in Fig. <ref> (e) and (f), for both recognition tasks, the recognition performance of the proposed method reaches its peak when the value of λ_3 reaches 0.5. Therefore, we set λ_3 to 0.5 throughout the experiments.
§ CONCLUSION
An adversarial self-attack defense method with feature representation under spatial-temporal information guidance is proposed to cope with the diversity of pedestrian appearance features in pedestrian identity matching. The method consists of the ASAM, the ADM, and the FRM-STIG. Through the cooperative training of the ASAM and the ADM, the robustness of the defense network to identity-related perturbations is improved. The proposed method is robust to modality differences and to feature changes caused by other factors. In addition, FRM-STIG utilizes each local feature effectively through a spatial relationship-guided highlighting mechanism. The experimental results show that the proposed method outperforms the compared SOTA methods.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grants 62276120, 61966021 and 62161015.
§ DECLARATIONS
Conflict of interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ DATA AVAILABILITY STATEMENT
The datasets for this study can be found in the VCM dataset <https://github.com/VCM-project233/MITML>.
|
http://arxiv.org/abs/2307.07342v2 | 20230714134325 | Bounded-memory adjusted scores estimation in generalized linear models with large data sets | [
"Patrick Zietkiewicz",
"Ioannis Kosmidis"
] | stat.ME | [
"stat.ME",
"stat.AP",
"stat.CO",
"62J12, 62F10, 62F12"
] |
The widespread use of maximum Jeffreys'-prior penalized likelihood
in binomial-response generalized linear models, and in logistic
regression in particular, is supported by the results of Kosmidis
and Firth (2021, Biometrika), who show that the resulting estimates
are always finite-valued, even in cases where the maximum
likelihood estimates are not, which is a practical issue regardless
of the size of the data set. In logistic regression, the implied
adjusted score equations are formally bias-reducing in asymptotic
frameworks with a fixed number of parameters and appear to deliver a
substantial reduction in the persistent bias of the maximum
likelihood estimator in high-dimensional settings where the number
of parameters grows asymptotically linearly and slower than the
number of observations. In this work, we develop and present two new
variants of iteratively reweighted least squares for estimating
generalized linear models with adjusted score equations for mean
bias reduction and maximization of the likelihood penalized by a
positive power of the Jeffreys-prior penalty, which eliminate the
requirement of storing O(n) quantities in memory, and can operate
with data sets that exceed computer memory or even hard drive
capacity. We achieve that through incremental QR decompositions,
which enable IWLS iterations to have access only to data chunks of
predetermined size. We assess the procedures through a real-data
application with millions of observations, and in high-dimensional
logistic regression, where a large-scale simulation experiment
produces concrete evidence for the existence of a simple adjustment
to the maximum Jeffreys'-penalized likelihood estimates that
delivers high accuracy in terms of signal recovery even in cases
where estimates from ML and other recently-proposed corrective
methods do not exist.
Keywords: iteratively reweighted least squares,
incremental QR decomposition, mean bias
reduction, Jeffreys'-prior penalty, data separation
§ INTRODUCTION
<cit.> shows that, in logistic regression, data
separation is a necessary and sufficient condition for the maximum
likelihood (ML) estimate to have at least one infinite component. Data separation occurs when there is a linear
combination of covariates that perfectly predicts the response values,
and results in the log-likelihood having an asymptote in some
direction of the parameter space. When separation occurs, ML fitting
procedures typically result in apparently finite estimates and
estimated standard errors, which are mere artefacts of the numerical
optimization procedure stopping prematurely by meeting the optimizer's
convergence criteria. If infinite estimates go undetected, then
inference about the model parameters can lead to misleading
conclusions. <cit.> is a recent overview of the
practical issues associated with infinite estimates in logistic
regression. Infinite ML estimates can also occur for other
binomial-response regression models.
The detection of separation and, more generally, identification of
which components of the maximum likelihood estimate are infinite can
be done prior to fitting the model by solving appropriate linear
programs. For example, the R package
<cit.> provides methods that implement the linear
programs in <cit.> for many binomial-response generalized
linear models (GLMs), and the linear program in
<cit.> for log-binomial models for relative
risk regression. Such procedures, however, while helpful for identifying
issues with ML, provide no constructive information about what to
do when infinite ML estimates are encountered.
It is important to note that infinite estimates occur for both small
and large data sets. For an illustration, see
Section <ref>, where infinite estimates are observed for a
probit regression model fit on a data set with about 5.5 million
observations and 37 covariates. Furthermore, for logistic regression
models with n observations and p covariates, where
p/n →κ∈ (0, 1), <cit.> prove that for
model matrices generated from particular distributions, there are
combinations of κ and the variance of the linear predictor for
which the ML estimate has infinite components with probability one as
p/n →κ, regardless of the specific values of p or n.
The linear programs for detecting infinite estimates can be slow for
large model matrices, and infeasible in their vanilla implementations
if the model matrix and response vector do not fit in computer
memory. For example, solving the <cit.> linear program for
the model of Section <ref> with the default settings of
method of the detectseparation R package did not return a result even after 12
hours of computation on a 2021 MacBook Pro with an Apple M1 Max chip
and 64 GB RAM.
To summarize, on one hand, ML estimation for binomial-response GLMs
may lead to arbitrarily large estimates and estimated standard errors,
which are in reality infinite, and if they go undetected can cause
havoc to standard inferential procedures. On the other hand, linear
programs for detecting the occurrence of infinite ML estimates prior
to fitting are not constructive about the consequences of having
infinite estimates in the model fit or the impact this may have on
other parameters in the model, and have either long run times for
large data sets or their standard implementations are infeasible for
data sets that do not fit computer memory.
For those reasons, researchers typically resort to alternative
estimators, which in many settings have the same optimal properties as
the ML estimator has, and are guaranteed to be finite even in cases
where the ML estimator is not, regardless of the data set or its
dimension. For example, <cit.> show that
penalization of the likelihood by Jeffreys' invariant prior, or by any
positive power thereof, always produces finite-valued maximum
penalized likelihood estimates in a broad class of binomial-response
GLMs, only under the assumption that the model matrix is of full
rank. Another approach that has been found to deliver finite estimates
for many binomial-response GLMs is based on the adjusted score equations for
mean bias reduction (mBR) in <cit.>, which, for fixed p,
result in estimators with mean bias of smaller asymptotic order than
what the ML estimator has in general. To date, there is only empirical
evidence that the reduced-bias estimator of <cit.> always
has finite components for binomial-response GLMs with arbitrary
link functions. For fixed-p logistic regression, the bias-reducing
adjusted score functions end up being the gradient of the logarithm of
the likelihood penalized by Jeffreys' invariant prior. Hence, in that
case, the results in <cit.> show that the
reduced-bias estimators have also always finite components. Both
<cit.> and <cit.> also
illustrate that maximum Jeffreys'-prior penalized likelihood (mJPL)
results in a substantial reduction in the persistent bias of the ML
estimator in high-dimensional logistic regression problems with
p/n →κ∈ (0, 1), when the ML estimator
exists. Those results underpin the increasingly widespread use of mBR
and similarly penalized likelihood estimation in binomial regression
models in many applied fields.
The ML estimates for GLMs can be computed through iterative reweighted
least squares (IWLS). It is well known that, because each component of
the working variates vector for
the IWLS iteration depends on the current value of the parameters and
the corresponding observation, ML estimation can be performed even
with larger-than-memory data sets using incremental QR decompositions
as in <cit.>. The incremental IWLS is implemented in the
R package <cit.> for all GLMs.
In this work, we begin by presenting a unifying IWLS procedure for mBR
and mJPL in GLMs through a modification of the ML working
variates. However, unlike the working variates for ML, the modified
working variates involve quantities, like the leverage, that depend on
the whole dataset <cit.>, and as a
result IWLS through incremental QR decompositions is not directly
possible. To overcome this difficulty, we present and analyze the
properties of two variants of IWLS for solving the mBR and mJPL adjusted
score equations, which only require access to data blocks of fixed
size that the user can specify in light of any local memory
constraints. As a result, they eliminate the requirement of keeping
O(n) quantities in memory, opening the door for fitting GLMs with
adjusted score equations and penalized likelihood methods on data sets
that are larger than local memory or hard drive capacity, and are
stored in a remote database. The procedures operate with either one
or two passes through the data set per iteration, and return the
exact, not approximate, mBR and mJPL estimates.
Using the IWLS variants, we conduct a comprehensive simulation
experiment in the high-dimensional logistic regression setting of
<cit.>, where p / n →κ∈ [0, 1). The
experiment results in novel evidence for using mJPL. Specifically, we
examine in depth and expand the observations of
<cit.> and <cit.>, and
illustrate that mJPL performs well and markedly better than the ML
estimator, in the region below <cit.>'s phase
transition curve where the ML estimates asymptotically exist. In
addition, an analysis of the evidence we gather allows us to derive
and propose a remarkably simple scaling factor for the mJPL estimator
that depends only on κ, the variance of the linear predictor
and the size of the intercept parameter, and enables mJPL to recover
the true signal to high accuracy and for a wide range of settings in
the region above <cit.>'s phase transition curve,
where the ML estimates do not exist with probability approaching
one. To our knowledge, there has been no other proposal for signal
recovery in the high-dimensional logistic regression setting of
<cit.> that applies regardless of whether the ML
estimator exists or not. The experiment also allows us to assess the
performance of the two IWLS variants. It is found that, although the
two-pass implementation is slower than the one-pass, it requires fewer
iterations to converge. Also, in notable contrast to the observations
in <cit.>, where mJPL is reported to be
computationally infeasible, requiring about 2.5 hours for n = 2000
and p = 400, the maximum average runtime we observed for mJPL with
the two-pass IWLS variant ranged from milliseconds to just above two
minutes in all settings we tried with n= 2000 and p ranging from
20 to 1100, with no particular care in the choice of starting
values; see Table <ref> for one of those
settings.
Section <ref> sets notation by introducing GLMs and their ML
estimation using IWLS, and details how that estimation can be done
using only data chunks of fixed size through incremental QR
decompositions. Section <ref> presents the adjusted score
equations for mBR and mJPL in GLMs, and introduces and analyzes the
memory and computation complexity of two variants of IWLS that operate
in a chunk-wise manner, avoiding the memory issues that vanilla IWLS
implementations can encounter. Section <ref> applies the
algorithms for the modelling of the probability of a diverted flight
using probit regression, based on data on all 5,683,047 commercial
flights within the USA in 2000. Section <ref> presents a
comprehensive computer experiment to empirically investigate the
limits of mJPL in terms of frequentist performance in the framework of
<cit.> for logistic regression with high-dimensional
data sets. The analysis of the evidence suggests that a rescaled
version of the mJPL estimator can recover the true signal to high
accuracy and for a wide range of settings, where other estimators do
not exist. Finally, Section <ref> provides discussion
and concluding remarks.
§ GENERALIZED LINEAR MODELS
§.§ Model
Suppose that y_1, …, y_n are observations on random variables
Y_1, …, Y_n that are conditionally independent given
x_1, …, x_n, where x_i is a p-vector of covariates. A GLM
<cit.> assumes that, conditionally on x_i,
Y_i has an exponential family distribution with probability density
or mass function of the form
f_Y_i(y; θ_i, ϕ) = exp{y θ_i - b(θ_i) - c_1(y)/ϕ/m_i - 1/2a(-m_i/ϕ) + c_2(y) }
for some sufficiently smooth functions b(.), c_1(.), a(.) and c_2(.), and fixed observation weights m_1, …, m_n. The expected value and the variance of Y_i are then
E(Y_i) = μ_i = b'(θ_i)
Var(Y_i) = ϕ/m_ib”(θ_i) = ϕ/m_iV(μ_i) .
The mean μ_i is linked to a linear predictor η_i through a
monotone, sufficiently smooth link function g(μ_i) = η_i with
η_i = ∑_t=1^p β_t x_it
where x_it can be thought of as the (i,t)th component of a model
matrix X, assumed to be of full rank, and
β = (β_1, …, β_p)^⊤. An intercept parameter is
customarily included in the linear predictor, in which case
x_i1 = 1 for all i ∈{1, …, n}.
§.§ Likelihood, score functions and information
The log-likelihood function for a GLM is
ℓ(β) = ∑_i = 1^n log f_Y_i(y_i; g^-1(η_i),
ϕ), where g^-1(.) is the inverse of the link function, and
η_i is as in (<ref>). Temporarily suppressing
the dependence of the various quantities on the model parameters and
the data, the derivatives of the log-likelihood function with respect
to the components of β and ϕ are
s_β = 1/ϕX^TW (z - η) and
s_ϕ = 1/2ϕ^2∑_i = 1^n (q_i - ρ_i) ,
respectively, where z = η + D^-1(y - μ) is the vector of
working variates for ML, η = X β,
y = (y_1, …, y_n)^⊤, μ = (μ_1, …, μ_n)^⊤,
W = diag{w_1, …, w_n} and
D = diag{d_1, …, d_n}, with
w_i = m_i d_i^2/v_i being the ith working weight, and
d_i = dμ_i/dη_i, v_i = V(μ_i). Furthermore,
q_i = -2 m_i {y_iθ_i - b(θ_i) - c_1(y_i)} and
ρ_i = m_i a'_i are the ith deviance residual and its
expectation, respectively, with a'_i = a'(-m_i/ϕ), where
a'(u) = d a(u)/d u.
The ML estimators β̂ and ϕ̂ can be found by solution
of the score equations s_β = 0_p and s_ϕ = 0, where 0_p
is a p-vector of zeros. <cit.> derives necessary
and sufficient conditions for the existence and uniqueness of the ML
estimator. Given that the dispersion parameter ϕ appears in the
expression for s_β in (<ref>) only multiplicatively,
the ML estimate of β can be computed without knowledge of the
value of ϕ. This fact is exploited in popular software like the
function in R <cit.>. The jth iteration
of IWLS updates the current iterate β^(j) by solving the
weighted least squares problem
β^(j+1) := (X^⊤ W^(j) X)^-1 X^⊤ W^(j)z^(j) ,
where the superscript (j) indicates evaluation at β^(j)
<cit.>. The updated β from (<ref>) is equal
to that from the Fisher scoring step
β^(j) + {i_ββ^(j)}^-1 s_β^(j) where
i_ββ is the (β,β) block of the expected
information matrix about β and ϕ
[
[ i_ββ 0_p; 0_p^⊤ i_ϕϕ ]]
=
[
[ 1/ϕ X^⊤ W X 0_p; 0_p^⊤ 1/2ϕ^4∑_i = 1^n m_i^2 a”_i ]] ,
with a”_i = a”(-m_i/ϕ), where a”(u) = d^2 a(u)/d u^2.
The weighted least squares problem in (<ref>) is typically
solved either through the method of normal equations, which involves a
Cholesky decomposition of X^⊤ W^(j) X, or through the QR
decomposition of (W^(j))^1/2 X <cit.>. Although the QR approach
requires more computation than the method of normal equations, the
former is more appealing in applications and is the default choice in
popular least squares software because it can solve, in a numerically
stable manner, a wider class of least squares problems. In particular,
the method of normal equations can numerically break down with
(W^(j))^1/2 X that are not particularly close to being
numerically rank deficient, while the QR approach solves a “nearby”
least squares problem <cit.>.
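As a concrete illustration, a single IWLS update for a binomial-logit GLM, solved through the QR decomposition of (W^(j))^1/2 X, can be sketched in Python as follows; representing y as observed proportions with binomial totals m is an assumption of the sketch, which is not meant to reproduce any particular software implementation.

import numpy as np

def iwls_step_logistic(X, y, m, beta):
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))   # inverse logit link
    d = mu * (1.0 - mu)               # d mu / d eta, equal to V(mu) for the canonical link
    w = m * d                         # working weights m d^2 / v, which reduce to m d here
    z = eta + (y - mu) / d            # working variates
    sw = np.sqrt(w)
    Q, R = np.linalg.qr(sw[:, None] * X)         # thin QR of W^{1/2} X
    return np.linalg.solve(R, Q.T @ (sw * z))    # back-substitution for the new beta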
ML estimation of ϕ can then take place by solving s_ϕ = 0
after evaluating q_i at the ML estimates β̂. This can be
done through the Fisher scoring iteration
ϕ^(j+1) := ϕ^(j){ 1 + ϕ^(j)∑_i = 1^n (q̂_i - ρ_i^(j))/∑_i = 1^n m_i^2 a_i”^(j)} ,
where q̂_i is q_i evaluated at
β̂. <cit.> recommend against
estimating ϕ using ML, and instead propose the moment estimator
(n - p)^-1∑_i = 1^n ŵ_i (ẑ_i - η̂_i)^2
where ŵ_i, ẑ_i, and η̂_i are w_i, z_i,
and η_i, respectively, evaluated at β̂. The moment
estimator of ϕ is considered to have less bias than the ML
estimator.
§.§ Bounded-memory procedures for least squares
<cit.> proposed a procedure to solve least squares
problems using the QR approach, which does not require keeping the
full model matrix X and response vector Y in memory, and operates
by sequentially bringing only a fixed-size chunk of data in
memory. This is particularly useful in getting a numerically stable
least squares solution in cases where the model matrix has too many
rows to fit in memory. We briefly review the method in
<cit.>.
Consider the least squares problem A ψ = b, where A is an
n × p matrix, with n > p, and b is a n-dimensional
vector. The least squares solution ψ̂ for ψ can be found
by first computing the QR decomposition
A = QR = [
[ Q_1 Q_2 ]]
[
[ R̅; 0_(n - p) × p ]] ,
where Q_1 and Q_2 are n × p and n × (n - p)
matrices, respectively, Q is an orthogonal matrix (i.e.
Q^⊤ = Q^-1), R̅ is a p × p upper triangular
matrix, and 0_u × v denotes a u × v matrix of
zeros. Then, ψ̂ is found by using back-substitution to solve
R̅ψ̂= b̅, where b̅ = Q_1^⊤ b. To
describe the proposal in <cit.>, let A_k be the kth
chunk of c ≤ n rows of A, and b_k be the kth chunk of c
elements of b, and denote A_1:k the first k chunks of A. The
last chunk may have c or fewer observations. Suppose that the QR
decomposition of A_1:k is Q_k R_k. Then, in light of a new
chunk A_k + 1, the QR decomposition can be updated as
[
[ A_1:k; A_k + 1 ]] =
[
[ Q_1,k 0_ck × k Q_2,k; 0_c × p I_c 0_c × (ck - p) ]]
G_1 ⋯ G_cp_Q_k + 1G_cp^⊤⋯ G_1^⊤[
[ R̅_k; A_k + 1; 0_(n - p) × p ]]^R_k+1 ,
where G_1, …, G_cp is the set of the Givens rotation matrices
<cit.> required to eliminate
A_k+1 in the right hand side. By the orthogonality of Givens
rotation matrices, Q_k + 1 R_k + 1 is a valid QR
decomposition. The only values that are needed for computing the least
squares estimates ψ̂ are R̅ and b̅. Hence, their
current values after a new chunk arrives are typically the only
objects that are updated and kept in computer memory. This is useful
for large n, as the need to keep an n × p matrix (X) in memory
is replaced by keeping only p(p+1)/2 + p real numbers in memory
at any given time.
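A row-wise sketch of this update, which folds one observation at a time into R̅ and b̅ = Q_1^⊤ b, is given below in Python; it illustrates the idea rather than the routines used in the paper, and no attempt is made at pivoting or other numerical refinements.

import numpy as np

def givens_update(R, qtb, a_row, b_val):
    # Fold one new row (a_row, b_val) into the p x p upper-triangular factor R
    # and the rotated response qtb, zeroing the row with at most p Givens rotations.
    a, b = a_row.astype(float).copy(), float(b_val)
    p = R.shape[0]
    for j in range(p):
        if a[j] == 0.0:
            continue
        r = np.hypot(R[j, j], a[j])
        c, s = R[j, j] / r, a[j] / r
        Rj, aj = R[j, j:].copy(), a[j:].copy()
        R[j, j:], a[j:] = c * Rj + s * aj, -s * Rj + c * aj
        qtb[j], b = c * qtb[j] + s * b, -s * qtb[j] + c * b
    return R, qtb

def incremental_least_squares(chunks, p):
    # chunks is an iterable of (A_k, b_k) pairs with at most c rows each
    R, qtb = np.zeros((p, p)), np.zeros(p)
    for A_k, b_k in chunks:
        for a_row, b_val in zip(A_k, b_k):
            R, qtb = givens_update(R, qtb, a_row, b_val)
    return np.linalg.solve(R, qtb)   # back-substitution: R psi = Q_1' b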
§.§ Iteratively re-weighted least squares in chunks
The approach of <cit.> can readily be used
for (<ref>) by replacing A by (W^(j))^1/2 X and b by
(W^(j))^1/2 z^(j). This is possible because the diagonal entries of W and
the components of z only depend on the corresponding components of
η = X β, and hence, can be computed in a chunk-wise
manner. Specifically, the kth chunk of the diagonal of W^(j),
and the kth chunk of z^(j) can be computed by just bringing in
memory X_k, Y_k, and m_k, computing the current value of
the kth chunk of the linear predictor as X_kβ^(j), and
using that to compute the kth chunks of μ^(j), d^(j) and
v^(j). This is the procedure that the R package
<cit.> implements.
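The chunk-level computation can be sketched as follows; passing the inverse link, its derivative, and the variance function as arguments is simply a way of keeping the sketch generic across GLMs, and is not how any particular package organizes the computation.

import numpy as np

def chunk_w_and_z(X_k, y_k, m_k, beta, inv_link, dmu_deta, variance):
    # everything needed for the kth chunk is computable from the chunk and the current beta
    eta = X_k @ beta
    mu = inv_link(eta)
    d = dmu_deta(eta)
    v = variance(mu)
    w = m_k * d**2 / v                 # working weights
    z = eta + (y_k - mu) / d           # working variates
    return w, z

# The rows fed to the incremental QR sketch above are np.sqrt(w)[:, None] * X_k,
# with responses np.sqrt(w) * z.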
§ BIAS REDUCTION AND MAXIMUM PENALIZED LIKELIHOOD
§.§ Adjusted score equations
Consider the adjusted score equations
0_p = 1/ϕ X^⊤ W {z + ϕ H (b_1 ξ + b_2 λ) } ,
0 = 1/2ϕ^2∑_i = 1^n (q_i - ρ_i) + c_1 ( p - 2/2 ϕ + ∑_i = 1^n m_i^3 a_i”'/2ϕ^2∑_i = 1^n m_i^2 a_i”) ,
where 0_p is a p-vector of zeros,
ξ = (ξ_1, …, ξ_n)^⊤, and
λ = (λ_1, …, λ_n)^⊤ with
ξ_i = d_i'/2 d_i w_i , and λ_i = 1/2{d_i'/d_i w_i - v_i'/m_i d_i} ,
and a”'_i = a”'(-m_i/ϕ), where a”'(u) = d^3 a(u)/d u^3,
d_i' = d^2μ_i/dη_i^2, and v'_i = d^2 V(μ_i) / dμ_i^2. In
the above expressions, H is the diagonal matrix, whose diagonal is
the diagonal of the “hat” matrix X (X^⊤ W X)^-1 X^⊤ W. The
ML estimators of β and ϕ are obtained by
solving (<ref>) and (<ref>) with
b_1 = b_2 = c_1 = 0. <cit.> show
that mBR estimators of β and ϕ can be obtained by
solving (<ref>) and (<ref>) with
b_1 = 1, b_2 = 0, and c_1 = 1. On the other hand, direct
differentiation of the Jeffreys'-prior penalized likelihood shows that
the mJPL estimator of β, which maximizes
ℓ(β) + log|X^⊤ W X| / 2 as in
<cit.>, can be obtained by
solving (<ref>) for b_1 = b_2 = 1.
For binomial-response GLMs, <cit.> show that the
mJPL estimate of β always has finite components for a wide
range of well-used link functions, including logit, probit and
complementary log-log, even in cases where the ML estimate has
infinite components. The finiteness property of the mJPL estimator is
attractive for applied work, because it ensures the stability of all
numerical and inferential procedures, even when the data set is
separated and infinite ML estimates occur <cit.>. The same holds for any
power t > 0 of the Jeffreys' prior, in which case
λ_i = t {d_i'/ (d_i w_i) - v_i'/(m_i d_i)} / 2
in (<ref>).
The mJPL estimators, though, do not necessarily have better asymptotic
bias properties than the ML estimator for all GLMs. For GLMs that are
full exponential families, such as logistic regression for binomial
data and log linear models for Poisson counts, the mJPL estimators
with t = 1 and the mBR estimators coincide. This has been shown
in <cit.>, and is also immediately apparent
from (<ref>). For canonical links d_i = v_i,
w_i = m_i d_i, and, by the chain rule, v_i' = d_i' / d_i, so that
λ_i = 0. Hence, the mBR estimator for logistic regression not
only has bias of smaller asymptotic order than what the ML estimator
generally has, but its components also always take finite values.
We should note that the first-order asymptotic properties expected by
the ML, mJPL and mBR estimators of β and ϕ are preserved
for any combination of b_1, b_2 and c_1
in (<ref>) and (<ref>). For example, mBR
estimators of β can be obtained for b_1 = 1, b_2 = 0 and
c_1 = 0, effectively mixing mBR for β with ML for ϕ. This
is the result of the orthogonality of β and ϕ
<cit.>; the detailed argument is a direct extension of
the argument in <cit.>
for mixing adjusted score equations to get mBR for β and
estimators of ϕ that have smaller median bias. Another option is
to mix the adjusted score equations (<ref>) for β
with the estimating function
ϕ = 1/n - p∑_i = 1^n w_i (z_i - η_i)^2 ,
which gives rise to the moment estimator of ϕ, once μ_i is
evaluated at an estimator for β.
§.§ Iteratively re-weighted least squares for solving adjusted score equations
Using similar arguments as for ML estimation, we can define the
following IWLS update to compute the ML, the mBR, and the mJPL
estimators of β depending on the value of b_1, b_2
in (<ref>). That update has the form
β^(j+1) := (X^⊤ W^(j) X)^-1 X^⊤ W^(j)(z^(j) + ϕ^(j) H^(j)κ^(j)) ,
where κ = b_1 ξ + b_2 λ. The value of ϕ, which is required for implementing mBR and mJPL,
can be found via the quasi-Fisher scoring step for
solving (<ref>), which using (<ref>),
takes the form
ϕ^(j+1) := ϕ^(j)[ 1 + ϕ^(j)∑_i = 1^n (q_i^(j) - ρ_i^(j))/∑_i = 1^n m_i^2 a_i”^(j) +
c_1ϕ^(j){∑_i = 1^n m_i^3 a_i”'^(j)/(∑_i = 1^n m_i^2 a_i”^(j))^2 +
ϕ^(j)p-2/∑_i = 1^n m_i^2 a_i”^(j)}]
The candidate value for the dispersion parameter when one of b_1 or
b_2 is non-zero, can be computed using (<ref>) at
β^(j) in every
iteration. <cit.> have already
derived the special case of (<ref>) and (<ref>)
for mBR estimation, that is for b_1 = 1, b_2 = 0 and c_1 =
1. Convergence can be declared if
‖β^(j + 1) - β^(j)‖_∞ < ϵ and
‖ϕ^(j' + 1) - ϕ^(j)‖_∞ < ϵ for
some small ϵ > 0.
§.§ Adjusted score estimation in chunks
When either (<ref>) or (<ref>) is used, the
updates for ϕ can be performed in a chunkwise manner once
β^(j) has been computed. In fact, use of (<ref>)
has the advantage of requiring only w^(j) and z^(j), which
have already been computed after performing the IWLS
update (<ref>).
Nevertheless, the IWLS update (<ref>) for β cannot be
readily performed in a chunkwise manner because the diagonal of H
cannot be computed at β^(j) by just bringing in memory chunks
of X, Y and m, and the current estimates. In particular, the
ith diagonal element of H is
h_i = x_i^⊤ (X^⊤ W X)^-1 x_i w_i, and, hence, its
computation requires the inverse of the expected information matrix
X^⊤ W X. In what follows, we present two alternatives for
computing adjusted score estimates for β in a chunkwise manner.
§.§ Two-pass implementation
The direct way to make the update (<ref>) for β
possible in a chunk-wise manner comes from considering the
projection that is being made. The left plot of
Figure <ref> shows how
update (<ref>) projects the current value of the vector
W^1/2(z + ϕ H κ) onto the column space of the current
value of W^1/2 X.
The right plot of Figure <ref> then shows how we can
achieve exactly the same projection in two passes through the data: i)
project the current value of W^1/2z onto the column space of the
current value of W^1/2 X using a QR decomposition, and ii) project
the current value of ϕ W^1/2 H κ onto the column space of
the current value of W^1/2 X. Adding up the coefficient vectors
from the two projections returns the required updated value of
β. Section <ref> describes how the projection
in step i) can be done in a chunkwise manner. Then, given that the
current value of R̅ is available after the completion of the
incremental QR decomposition from the first pass through the data, the
current value of h_1, …, h_n in step ii) can be computed in a
chunkwise manner through another pass through the data.
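Putting the pieces together, one IWLS iteration of the two-pass variant can be sketched as follows. Here chunks is assumed to be a callable returning a fresh iterator over (X_k, y_k, m_k) triples, so that the data can be traversed twice, glm_quantities is an assumed helper returning (w, z, κ) for a chunk at the current β, and givens_update is the Givens sketch given earlier; none of these names come from the paper or its software.

import numpy as np
from scipy.linalg import solve_triangular

def two_pass_iwls_step(chunks, beta, phi, glm_quantities):
    p = beta.size
    R, qtz = np.zeros((p, p)), np.zeros(p)
    # pass 1: incremental QR of W^{1/2} X with response W^{1/2} z
    for X_k, y_k, m_k in chunks():
        w, z, _ = glm_quantities(X_k, y_k, m_k, beta)
        sw = np.sqrt(w)
        for row, rhs in zip(sw[:, None] * X_k, sw * z):
            R, qtz = givens_update(R, qtz, row, rhs)
    beta_ls = solve_triangular(R, qtz, lower=False)          # (X'WX)^{-1} X'W z
    # pass 2: leverages from R, then the coefficients of the adjustment term
    s = np.zeros(p)
    for X_k, y_k, m_k in chunks():
        w, z, kappa = glm_quantities(X_k, y_k, m_k, beta)
        U = solve_triangular(R.T, (np.sqrt(w)[:, None] * X_k).T, lower=True).T
        h = np.einsum('ij,ij->i', U, U)                      # chunk of leverages h_i
        s += X_k.T @ (w * phi * h * kappa)
    adjustment = solve_triangular(R, solve_triangular(R.T, s, lower=True), lower=False)
    return beta_ls + adjustment     # equals (X'WX)^{-1} X'W (z + phi H kappa)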
§.§ One-pass implementation
An alternative way to make the update (<ref>) for
β possible in a chunk-wise manner is to change the IWLS update
into
β^(j+1) := (X^⊤ W^(j) X)^-1 X^⊤ W^(j)(z^(j) + ϕ^(j) H^(j - 1)κ^(j)) .
Iteration (<ref>) has the correct stationary point, that
is the solution of the adjusted score
equations (<ref>), and can be performed in a chunkwise
manner following the descriptions in Section <ref>.
This is because the ith diagonal element of H^(j - 1),
h_i = x_i^⊤ (X^⊤ W^(j - 1) X)^-1 x_i w_i^(j - 1),
depends on the weight w_i^(j - 1) at the previous value of
β, which can be recomputed using knowledge of β^(j - 1)
and x_i only, and the inverse of the expected information
(X^⊤ W^(j - 1) X)^-1 from the previous iteration, which can
be computed using R̅^(j-1) that results from the incremental
QR decomposition at the (j-1)th iteration.
In addition to the starting value for β^(0) that
iteration (<ref>) requires, iteration (<ref>)
also requires starting values for h_i; a good starting value is
h_i^(0) = p / n, which corresponds to a balanced model matrix
X.
§.§ Memory requirements and computational complexity
The direct implementation of IWLS for mBR and mJPL, as done in popular
R packages such as and , requires
O(n p + p^2) memory, as ML does. On the other hand, the chunkwise
implementations require O(c p + p^2) memory, where c is the
user-specified chunk size as in Section <ref>. The
computational cost of all implementations using the QR decomposition
remains at O(n p^2 + p^3).
The two-pass implementation has almost twice the iteration cost of the
one-pass implementation, because two passes through all observations
are required per IWLS iteration. However, the two-pass implementation
reproduces exactly iteration (<ref>), where the adjusted
score, rather than just part of it as in (<ref>), is
evaluated at the current parameter values. From our experience with
both implementations, the one-pass variant tends to require more iterations
to converge to the same solution, with the two implementations having
the same computational complexity per iteration. In addition, the
two-pass implementation requires starting values only for β,
while the one-pass implementation requires starting values for both
β and h_1, …, h_n.
§ DEMONSTRATION: DIVERTED US FLIGHTS IN 2000
We demonstrate here the one-pass and two-pass implementations using
data on all 5,683,047 commercial flights within the USA in 2000. The
data set is part of the data that was shared during The Data
Exposition Poster Session of the Graphics Section of the Joint
Statistical meetings in 2009, and is available at the Harvard
Dataverse data repository <cit.>.
Suppose that interest is in modelling the probability of diverted US
flights in 2000 in terms of the departure date attributes, the
scheduled departure and arrival times, the coordinates and distance
between departure and planned arrival airports, and the
carrier. Towards this goal, we assume that the diversion status of the
ith flight is a Bernoulli random variable with probability modelled
as
Φ^-1(π_i) = α + β^⊤ M_i + γ^⊤ W_i + δ^⊤ C_i + ζ_(d) T_(d),i + ζ_(a) T_(a),i + ρ D_i + ψ_(d)^⊤ L_(d), i + ψ_(a)^⊤ L_(a), i ,
and that diversions are conditionally independent given the covariate
information that appears in the above expression. The covariate
information consists of M_i, which is a vector of 12 dummy
variables characterizing the calendar month that the ith flight took
place, W_i, which is a vector of 7 dummy variables characterizing
the week day that the ith flight took place, C_i, which is a dummy
variable with 11 levels, characterizing the carrier (out of 11
carriers) of the ith flight, T_(d),i and T_(a),i, which are
the planned departure and arrival times in a 24 hour format, D_i,
which is the distance between the origin and planned destination
airport, respectively and L_(d), i and L_(a), i, which are the
(x, y, z) coordinates of the departure and arrival airport,
respectively, computed from longitude (lon) and latitude (lat)
information as x = cos( lat) cos( lon),
y = cos( lat) sin( lon), and z = sin( lat).
For identifiability reasons, we fix β_1 = 0 (January as a
reference category), γ_1 = 0 (Monday as a reference category)
and δ_2 = 0 (carrier AQ as a reference category). Ignoring the
columns corresponding to those parameters, the model matrix for the
model with linear predictor as in (<ref>) has dimension
5683047 × 37, which requires about 1.6GB of memory.
Although the memory requirements for this data set are manageable
with the available memory of modern laptops, testing for data separation
by solving, for example, the <cit.> linear programs is
computationally demanding. For example, the
method of the detectseparation R package did not complete even after 12 hours of
computation on a 2021 MacBook Pro with an Apple M1 Max chip and 64 GB
RAM.
Table <ref> shows the ML estimates of the
parameters α and δ of model (<ref>) after 15
and 20 IWLS iterations using the bounded-memory procedures in
Section <ref> as implemented in the R
package <cit.>, using chunks of c = 10, 000
observations and ϵ = 10^-3. The estimates and estimated
standard errors for α and the components of δ grow in
absolute value with the number of IWLS iterations, which is typically
the case when the maximum likelihood estimates are infinite
<cit.>. The ML
estimates for the other parameters do not change in the reported
accuracy as we move from 15 to 20 IWLS iterations; see
Table <ref> of the Supplementary Materials document for
estimates for all parameters. In contrast, mBR and mJPL return finite
estimates for all parameters by declaring convergence before reaching
the limit of allowable iterations, for both their one- and two-pass
implementations. As expected by the discussion in
Section <ref>, the one-pass implementations require about
58% of the time that the two-pass implementations require. No
memory issues have been encountered obtaining the ML, mJPL and mBR
fits, even when the fits were re-computed on an Ubuntu virtual machine
with 2 cores and 2GB and 4GB of RAM, with no swapping taking place.
On the other hand, the mBR and mJPL fits could not be obtained
using the R package, because available physical memory
was exhausted.
The observations made in this example highlight that, even for large
data sets, use of estimation methods that return estimates in the
interior of the parameter space is desirable, especially so when the
computational cost for an IWLS iteration for ML scales linearly with
the number of observations.
§ MAXIMUM JEFFREYS' PENALIZED LIKELIHOOD IN HIGH-DIMENSIONAL
LOGISTIC REGRESSION
§.§ Preamble
<cit.> shows that for logistic regression with fixed-p
asymptotics mJPL and mBR coincide. More recently,
<cit.> and <cit.> have
empirically found that mBR still achieves a substantial reduction in
the persistent bias of the ML estimator in specific high-dimensional
logistic regression problems under a more extreme asymptotic framework
than <cit.> used to develop mBR.
In particular, <cit.> prove that there
exists a sharp phase transition about when the ML estimate has
infinite components in high-dimensional logistic regression with
linear predictors η_i = α + x_i^⊤β, where x_i is a
p-vector generated from a Normal distribution with mean 0_p and
unknown, non-singular variance-covariance matrix Σ,
p /n →κ∈ (0,1), x_1, …, x_n are generated
independently from each other, and assuming that
var(x_i^⊤β) →γ_0^2. That phase transition is useful
because it characterizes when the corrective method developed in
<cit.> can be applied. That corrective method
re-scales the ML estimate by a factor from the solution of the system
of nonlinear equations in <cit.>. As
<cit.> clearly note, that system of nonlinear
equations does not have a unique solution if (κ, γ) are in
the region where the ML estimate has finite components, and their
approach cannot be used when at least one of the ML estimates
diverges. In contrast, <cit.> prove that mJPL for
logistic regression always delivers finite estimates under the sole
assumption that the model matrix has full rank.
Motivated by this fact, in this section, we empirically investigate
the limits of mJPL in terms of frequentist performance in the setting
of <cit.> at either side of the phase transition
curve derived therein. We provide evidence that mJPL performs
exceptionally well in terms of bias in the region where the ML
estimate is expected to exist, and tends to over-correct in the region
where the ML estimate has infinite components. The empirical evidence
also suggests that the amount of over-correction can be accurately
characterized in terms of κ, γ and the size of the
intercept parameter, for a wide range of values of κ and
γ. This leads to a fruitful conjecture about adjusting the mJPL
estimates post-fit to almost recover their performance even in the
region where the ML estimates do not exist, where the methods of
<cit.> do not apply.
§.§ Simulation experiment
We use the asymptotic framework used for producing
<cit.>. For generating a data set under
that asymptotic framework, we need to specify κ∈ (0, 1),
n, ρ^2 ∈ [0, 1), γ > 0, and an initial parameter
vector β^*. We form an n × p matrix X by generating its
entries from a collection of n p independent and identically
distributed standard normal random variables, where
p = ⌈ n κ⌉. We define α = ργ, and
γ_0 = γ√(1 - ρ^2), and rescale β^* to
β = γ_0 β^* / ‖β^* ‖_2, so that
‖β‖_2 = γ_0. In this way,
var(x_i^⊤β) = γ_0^2, which is a sufficient condition
for the developments in <cit.>. We then generate
y_1, …, y_n independently with
Y_i ∼Bernoulli(1 / (1 + e^-η_i)), where
η_i = α + x_i^⊤β (i = 1, …, n).
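The data-generating process just described can be sketched in Python as follows; the seed and the equi-spaced choice of β^* are only for illustration.

import numpy as np

def simulate_highdim_logistic(n, kappa, gamma, rho2, seed=1):
    rng = np.random.default_rng(seed)
    p = int(np.ceil(n * kappa))
    alpha = np.sqrt(rho2) * gamma                           # alpha = rho * gamma
    gamma0 = gamma * np.sqrt(1.0 - rho2)
    beta_star = np.linspace(-10.0, 10.0, p)                 # initial parameter vector
    beta = gamma0 * beta_star / np.linalg.norm(beta_star)   # so that var(x' beta) = gamma0^2
    X = rng.standard_normal((n, p))
    eta = alpha + X @ beta
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
    return X, y, alpha, beta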
Figure <ref> shows the phase transition curves
derived using <cit.> for ρ^2 = 0
(top) and ρ^2 = 0.75 (bottom). At each of a set of 30 points
on the (κ, γ) plane (κ∈ (0, 0.6) and
γ∈ (0, 20)), and for n = 2000, β^* is set to an
equi-spaced grid of length p between -10 and 10. Then, 5
samples of X and y are drawn independently as detailed above at
each point. The ML estimates of α and β are computed for
the samples at the region where the ML estimates asymptotically exist,
and the mJPL estimates are computed everywhere using the two-pass
implementation with c = 1000, ϵ = 10^-3 (see
Section <ref>), and allowing for up to 250 IWLS
iterations. The mJPL estimates are computed starting at zero for all
parameters, and are used as starting values for the ML estimates
whenever those are computed.
The scatterplots of the estimates versus the true β vectors are
overlaid in Figure <ref> at each of the 30
points on the (κ, γ) plane. As is evident, in the region
where the ML estimates exist, the mJPL estimates β̃
illustrate excellent performance in recovering the true signal, even
when the ML estimates over-estimate it. Their performance is similar
to that of the re-scaled ML estimator in <cit.>. On
the other hand, in the region where the ML estimates do not exist,
mJPL appears to overshrink towards zero moving further away from the
true signal as κ and γ increase.
A simple exploratory regression analysis of mJPL estimates versus the
true values in the region where the ML estimates do not exist results
in strong evidence that a post-hoc multiplicative adjustment of the
mJPL estimates to κγ (1 - ρ^2)^-1/2β̃ can
recover the signal. Figure <ref> shows the
resulting estimates, confirming that the adjusted estimator recovers
the signal to high accuracy, in the region where the ML estimates do
not exist. To our knowledge, there has been no other viable proposal
for signal recovery in that region.
The Supplementary Materials document provides results for
ρ∈{0, 0.25, 0.5, 0.75, 0.9}, for both when β^* is set
to an equi-spaced grid of length p between -10 and 10 (see
Section <ref>), and when β^* is set to have
20% of its values set to -10, 20% of its values set to 10,
and the remaining set to 0 (see Section <ref>),
for n ∈{1000, 2000, 3000}. From the observations made through
those computer experiments, and other settings we have experimented
with, not reported here, and for both smaller and larger n, we are
confident to state that the estimator
β^† = q(κ, γ, ρ) β̃ with
q(κ, γ, ρ) = {[ 1 , if the ML estimate exists asymptotically; κγ/√(1 - ρ^2) , if the ML estimate does not exist asymptotically ].
is effective in recovering the true signal, with its effectiveness
deteriorating as ρ^2 approaches 1 (and, hence, as α
increases relative to ‖β‖_2). This can be seen in
the figures in the Supplementary Materials document for ρ =
0.9. Figure <ref> is a composite of the top
of Figure <ref> and
Figure <ref>, and
Figure <ref> is a composite of the bottom of
Figure <ref> and
Figure <ref>.
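In code, the proposed post-fit adjustment is a one-line rescaling of the mJPL estimate; deciding whether the ML estimate exists asymptotically, for instance from the phase-transition curve, is left to the caller.

import numpy as np

def rescale_mjpl(beta_tilde, kappa, gamma, rho2, ml_exists):
    q = 1.0 if ml_exists else kappa * gamma / np.sqrt(1.0 - rho2)
    return q * beta_tilde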
§.§ Remarks on computational performance and extensions
The two-pass implementation of mJPL estimation converged in all cases
within 250 IWLS iterations. We also tried the one-pass
implementation, which either required more iterations to converge than
the two-pass, or did not converge within 250 iterations.
As a side note, <cit.> reports that mJPL has been computationally
infeasible for high dimensions, with a runtime of approximately 10
minutes for n = 1000 and p = 200, and of over 2.5 hours for
n = 2000 and p = 400. Table <ref>
summarizes the performance of the two-pass implementation of mJPL both
in terms of runtime and number of iterations for ρ^2 = 0. In
notable contrast to the observations in <cit.>, the maximum average runtime for
mJPL we observed has been about 2 minutes and 15 seconds for
n = 2000 and p = 1100 (κ = 0.55, γ = 4.5), with no
particular care in the choice of starting values. The Supplementary
Materials document provides the summaries in
Table <ref>, for ρ^2 = 0,
n ∈{1000, 2000, 3000}, for both when β^* is set to an
equi-spaced grid of length p between -10 and 10 (see
Section <ref>), and when β^* is set to
have 20% of its values set to -10, 20% of its values set to
10, and the remaining set to 0 (see
Section <ref>).
For small κ, the incremental IWLS implementations of mJPL can
be also used to carry out the computer experiments of the current
Section with no memory problems even for larger n, and at least as
long as the O(c p + p^2) memory requirements and O(n p^2 + p^3)
computational complexity do not become prohibitive for the hardware at
hand. The n = 2000 we used here ensures that numerical results that
support our conjecture can be easily reproduced in a modern laptop
with the code we provide in the Supplementary Material. For larger
p, it is worthwhile choosing reasonable starting values for
mJPL. These can be obtained through a few IWLS iterations for ML,
potentially after adjusting the responses to
(1 - 2δ) y_i + δ, for some small δ > 0.
§ CONCLUDING REMARKS
We have developed two variants of IWLS that can estimate the
parameters of GLMs using adjusted score equations for mean bias
reduction and maximum Jeffreys'-prior penalized likelihood, for data
sets that exceed computer memory or even hard-drive capacity and are
stored in remote databases. The two procedures return the exact, not
approximate, mBR and mJPL estimates, and they have been used in
Section <ref> to obtain finite estimates of the parameters
of a probit regression model with 37 parameters from about 5.5
million observations, where the ML estimates have been found to have
infinite components. The two-pass implementation has been also used in
a large-scale experiment to develop a simple adjustment to the maximum
Jeffreys'-penalized likelihood estimates that delivers excellent
performance in terms of signal recovery even in cases where estimates
from ML and other recently-proposed corrective methods do not exist.
A proof of the conjecture about the performance of
the q(κ, γ, ρ) β̃ estimator in high-dimensional
logistic regression may be possible by adapting the framework and
results in <cit.> to the case where the likelihood
is penalized by Jeffreys prior and examining when the solution of the
system of nonlinear equations
in <cit.> is close to the scaling
factor q(κ, γ, ρ) in expression (<ref>) that the
numerical evidence points to.
Current work focuses on reducing the cubic complexity on p, without
impacting the finiteness and bias reducing properties of mJPL, or its
highly predictable behaviour in high-dimensional logistic regression
settings. <cit.> on the approximation of the
leverage seems relevant.
As <cit.> show, median bias
reduction for β can also be achieved using an IWLS procedure
after modifying the ML working variates. The IWLS update for median
bias reduction has the form
β^(j+1) := (X^⊤ W^(j) X)^-1 X^⊤ W^(j)(z^(j) + ϕ^(j){ H^(j)ξ^(j) + X u^(j)}) .
The particular form of the p-vector u is given in
<cit.>, and depends
on the inverse i_ββ and hence on R̅. Since X u
can be computed in a chunkwise manner for any given u, it is
possible to develop one- and two-pass implementations of the IWLS
procedure for median bias reduction by the same arguments as those
used in Section <ref> and
Section <ref>. These procedures are computationally more
expensive than the procedures for mBR and mJPL because each component
of u requires O(np^3) operations.
§ SUPPLEMENTARY MATERIALS
The Supplementary Material provides i) the Supplementary Material
document that is cross-referenced above and contains all numerical
results and figures from the case study of Section <ref>
and the computer experiment of Section <ref>, and ii) R
code to reproduce all numerical results and figures in the main text
and in the Supplementary Materials document. The code is organized in
the two directories diverted-flights and high-dim-logistic, for the case study of Section <ref>
and the computer experiment of Section <ref>,
respectively. The file in each directory provides
specific instructions about how the results can be reproduced, along
with the specific versions of the contributed R packages that have
been used to produce the results. The biglm directory has a port
of the R package <cit.>, which implements
the one- and two-pass IWLS variants for solving the bias-reducing
adjusted score equations <cit.> and for maximum
Jeffreys'-penalized likelihood estimation
<cit.>. The Supplementary Material is available
at <https://github.com/ikosmidis/bigbr-supplementary-material>.
|
http://arxiv.org/abs/2307.05615v1 | 20230711014020 | Laser light scattering (LLS) to observe plasma impact on the adhesion of micrometer-sized particles to a surface | [
"D. Shefer",
"A. Nikipelov",
"M. van de Kerkhof",
"V. Banine",
"J. Beckers"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
Eindhoven University of Technology, Department of Applied Physics, Eindhoven, 5600 MB, The Netherlands
[email protected]
ASML, Veldhoven, 5504 DR, The Netherlands
Eindhoven University of Technology, Department of Applied Physics, Eindhoven, 5600 MB, The Netherlands
ASML, Veldhoven, 5504 DR, The Netherlands
Eindhoven University of Technology, Department of Applied Physics, Eindhoven, 5600 MB, The Netherlands
ASML, Veldhoven, 5504 DR, The Netherlands
Eindhoven University of Technology, Department of Applied Physics, Eindhoven, 5600 MB, The Netherlands
The Laser Light Scattering (LLS) method, combined with a long-distance microscope, was utilized to detect micrometer-sized particles on a smooth substrate. LLS was capable of detecting the release, shrinkage, or fragmentation of individual particles during exposure to a plasma or a gas jet. In-situ monitoring of hundreds of particles was carried out to investigate the effect of hydrogen plasma exposure on particle adhesion, morphology, and composition. LLS was calibrated with monodisperse melamine resin spheres with known diameters of 2.14 μm, 2.94 μm, and 5.26 μm. The lowest achievable noise level, approximately 3%, was demonstrated for counting 5.26 µm spherical melamine particles. The accuracy of melamine particle size measurements ranged from 50% for 2.14 μm particles to 10% for 5.26 μm particles; this scatter was taken as the imprecision of the method. The size distribution of polydisperse particles with known refractive index was obtained by interpolating to the effective scattering cross-section of an equivalent sphere using Mie theory. While the Abbe diffraction limit was about 2 μm in our system, the detection limit for Si particles in LLS according to the Mie approximation was assessed to be about 3 μm, given the limitations of the laser flux, microscope resolution, camera noise, and particle composition. Additionally, the gradual changes in the forward scattering cross-sections of Si particles during exposure to the hydrogen plasma were consistent with Si etching reported in the literature.
Laser light scattering (LLS) to observe plasma impact on the adhesion of micrometer-sized particles to a surface
J Beckers
October 2023
================================================================================================================
§ INTRODUCTION
Under some conditions, plasma exposure is known to cause the release of nanometer- and micrometer-sized particles from surfaces.<cit.> Technologies sensitive to plasma-induced particle release are of special interest. For example, NASA's studies of the lunar and Martian surfaces confirmed the presence of suspended dust that does not settle.<cit.> This effect is attributed to UV or plasma charging and may have a negative impact. For example, the mobility of micrometer-sized particles in plasma presents a challenge to solar panel longevity. In another example, a reticle (integrated-circuit photomask) used in Extreme Ultraviolet (EUV) lithography is highly sensitive to contamination with particles of 20 nm and larger.<cit.> Such particles may deposit on reticles even in the extremely clean environments of an EUV scanner in the presence of EUV-induced plasma.<cit.> Finally, in nuclear fusion plasma vessels (e.g. in ITER), plasma-facing walls releasing particles may deteriorate the gas mix. Because of the tritium gas held in the wall materials, dust generation in ITER is a serious concern, both from an erosion aspect and due to possible impurity release into the plasma.<cit.> With respect to all these applications, the study of the behavior of micrometer-sized particles attached to a surface and interacting with plasma is important. To enable further studies, the development of new in-situ diagnostic tools is highly relevant.
Traditionally used in the semiconductor industry, Laser Light Scattering (LLS) detects single particles on smooth or patterned substrates by analyzing light scattered into different angles from a relatively small illuminated spot (typically, around 10 µm).<cit.> Particles bigger than 1 µm scatter most of the light in the forward direction. Hence, a reflective substrate is a convenient method to improve such particle visibility.
With respect to the system of a particle attached to a surface, the particle adheres due to the combination of electrical, van der Waals (vdW), and capillary forces, as well as due to the particle's chemical interaction with the surface. Adhesion depends on the particle's size, composition, and morphology. A change in one of these parameters also affects the forward-scattered light intensity; hence, this can be used as a diagnostic method. In our work, we apply the LLS method, combined with long-distance microscopy, to image micrometer-sized particles. It will be demonstrated that the LLS method can be adapted to observe, in situ, micrometer-sized particles on a surface placed in a plasma or under other stress conditions, such as those caused by a gas jet. The advantage of the LLS method over the SEM measurements traditionally used for morphological diagnostics is its non-invasive, in-situ character, which directly shows the impact of the plasma treatment on the particles during exposure.
§ APPARATUS AND DESIGN
Particles were deposited on the metallic side of the substrates; substrates used in all experiments were 1 inch in diameter polished sapphire wafers with 100 nm chromium coating. The mirror-finished wafers enable LLS to be operated in the dark field mode. The chromium coating is known to be robust against hydrogen embrittlement<cit.> and electrically conductive. The latter is necessary for SEM imaging before or after plasma exposure. Silicon (Si) particles were chosen in this work for the demonstration of the method because of the abundance of scientific literature on silicon including its etching by hydrogen plasma.<cit.> Melamine particles were selected because of their narrow standard deviation in size (when purchased commercially from Sigma Aldrich) and matte surface. Properties of the particles used in the experiments are listed in table <ref>.
The chromium substrates were contaminated with micrometer-sized particles using a Branson sonifier SFX 150 (40 kHz actuated tip). The sonifier disaggregated large clusters of particles by bringing its tip in contact with the edge of contaminated wafers. The average distance between the particles significantly exceeded their size (see Fig. <ref>), which suppressed the effects of interference and simplified imaging, sizing of particles, and analysis of the interaction with plasma.
A schematic overview of the used setup is depicted in Figure <ref>. The setup comprised two vacuum chambers (a main chamber for the plasma and gas jet exposures and a load-lock chamber) separated by a VAT gate valve that remained closed during experiments. The main chamber was a 20x20x20 cm^3 cube with one of the flanges used for connection to the plasma source and the gas supply. A second flange of this chamber had an integrated window with an anti-reflective coating for LLS imaging. A third flange of this chamber was equipped with Philips vacuum gauges (HPT 200 Pirani/Bayard-Alpert and PPT 200 AR) which were both hydrogen calibrated. The flange with the plasma head also held a stainless steel wafer holder and allowed the swapping of wafers via the load-lock. The ultimate pressure in the vacuum chamber, achieved by a turbo-molecular pump (Pfeiffer THU 200 MP) and a scroll dry pre-pump (Edwards XDS10), was 10^-4 Pa.
During the experiments with plasma exposures, hydrogen was supplied to the main chamber at 30 sccm, resulting in a steady state pressure in the range of 1-10 Pa (mostly 5 Pa) without throttling the turbo-pump. The hydrogen plasma was driven by an Electron Cyclotron Resonance (ECR) plasma source (Aura-Wave, Sairem) at 100 W of RF power providing T_e ≃ 5 eV, E_i ≃ 15 eV, and ion flux toward the wafer of about F ≃ 1 A/m^2 according to Shirai et al<cit.>. Under these conditions, the induced hydrogen radical (H^*) flux is expected to be 10 to 100 times higher than the H^+ flux due to a ∼10% chance of H^* association at the stainless steel walls of the main vacuum chamber compared to the 100% chance of H^+ ion neutralization at the walls.<cit.> Moreover, recombination of H_3^+ ions results in the generation of ∼2 radicals per event.<cit.> The selected conditions in this study featured a hundredfold more intense flux and approximately 5 times higher energy of ions compared to EUV-induced plasma<cit.>. Hence, the exhibited results may be considered as the exposure to EUV plasma afterglow, accelerated at around 100 times.<cit.>
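For orientation, the quoted ion current density can be converted into a particle flux and a 24-hour fluence with a short back-of-the-envelope calculation; the numbers below are purely illustrative and follow only from the ∼1 A/m^2 figure above.

e = 1.602e-19                       # elementary charge, C
ion_flux = 1.0 / e                  # ions m^-2 s^-1 for 1 A/m^2 (~6.2e18)
fluence_24h = ion_flux * 24 * 3600  # ~5.4e23 ions m^-2 over a 24 h exposure
print(f"ion flux  ~ {ion_flux:.1e} m^-2 s^-1")
print(f"24 h dose ~ {fluence_24h:.1e} ions m^-2")
# The radical flux is expected to be another 10-100x higher than this.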
For typical experiments a sample with particles was brought through the load-lock chamber to the middle of the main chamber (using a manipulator) and mounted vertically, facing the window with an anti-reflecting coating. A pulsed laser (EverGreen EVG00200, 70-200 mJ and 10 ns long pulses at 532 nm with 100-1000x attenuation by a grey filter), illuminated the wafer with a repetition rate of 0.71 Hz (1.4s between pulses). The laser beam, guided by mirrors, was expanded to 0.5 cm in diameter by two plano-convex lenses and entered the chamber through the window at about 10^∘, reflected from the metal surface of the wafer, exited the chamber at 10^∘, and was finally directed to a beam dump. The light scattered by particles on the surface was collected by a long-distance microscope (Distamax K2) with a working distance of 180 mm and a fully open aperture (diameter of 5 cm) with a CMOS camera (FLIR Grasshopper3) mounted to it. Pulsed laser illumination was chosen instead of illumination by a CW laser to reduce the blurriness caused by the vacuum pump-induced vibrations transferred to the microscope.
The camera shutter was synchronized (Fig. <ref>) with the laser pulse by a signal delay generator (Model 577, BNC). Relatively short (140 µs) camera exposures helped to reduce the impact of the light from the plasma on the image background signal. The camera was configured to save 24-bit images with a resolution of 4,096 x 2,160 pixels. The pixel size was 3.45 x 3.45 µm^2, the quantum efficiency was 64%, and the dynamic range was 65.15 dB. The maximal camera noise was 40.3 dB. The CMOS matrix size in combination with magnification by Distamax K2 and the distance to the sample (around 18 cm) produced a field of view (FoV) of 3 x 2 mm. This microscope FoV with a fully opened diaphragm was aligned with the illumination laser spot and the contaminated center of the wafer. The following camera settings were used: gain 48, gamma 0, black level 0, balance ratio 1.14, digital zoom - off, picture enhancer - off, full automatic control - off, auto exposure - off, auto white balance - off, black & white compensation – off. The camera's gain had the greatest influence on the recognition of particles in post-processing steps.
The acquired images were analyzed by a self-developed Python script. This script extracted the number of particles, their coordinates, and their total integrated intensities and sizes. The way in which the size distribution of the particles was obtained using Mie theory is discussed below. To minimize the impact of laser beam power density fluctuations, the script applied a running average of 5 over the images, which was found to be an optimal value for the trade-off between the noise level and the time resolution achieved. The averaged total integrated scattering (TIS) of an image was computed by the script by summing the intensities of all pixels.
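A minimal sketch of such a processing chain is given below; it is not the authors' actual script, and the threshold value and frame format are placeholders.

import numpy as np
from scipy import ndimage

def process(frames, threshold=50.0, min_pixels=10):
    # Running average over the last 5 laser shots, thresholding, removal of
    # features smaller than 10 bright pixels, particle counting, and TIS.
    avg = np.mean(frames[-5:], axis=0)
    mask = avg > threshold
    labels, n = ndimage.label(mask)                      # connected bright regions
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_pixels) + 1       # drop hot pixels / tiny blobs
    intensities = ndimage.sum(avg, labels, index=keep)   # per-particle integrated intensity
    tis = avg[mask].sum()                                # total integrated scattering
    return len(keep), intensities, tis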
The main chamber was also equipped with a flushing jet, which exhausted nitrogen gas pulses through a 4 mm tube placed at a 5 mm distance from the wafer and facing its center at 45^∘. This flushing could be used to remove loosely bound particles from the substrate when the shear force exceeds the vdW force with which the particles are bound to the surface. The pulsed flushing was realized through a quick valve (DVI 005 M Pfeiffer) and a calibrated orifice (1.016 mm, Swagelok) that limited the flow. The pressure in the nitrogen line was measured by a Pfeiffer gauge (CPT 200 DN). The "flushing" jet could reach up to 6 nlm at the peak of the pulse. The main chamber had a bypass line to a volume extension vessel of 100 liters, separated from the main chamber by a VAT HV gate valve. During the flushing experiments, the turbo-pump was switched off and the bypass line was open. During plasma experiments, however, the bypass line remained closed. The extended vessel had its own pre-pump (Leybold SCROLLVAC 10). The sum productivity of the two pre-pumps for flushing experiments resulted in about 5 l/s at 100 Pa. The flushing pulses of 100 ms to 20 s were limited by the pre-pump productivity: long flushing pulses increased the pressure in the main chamber at the rate of 10 Pa in 10 s.
In addition, to verify the accuracy of the LLS setup calibration for measuring the sizes of silicon particles, a sample with silicon particles was also qualified using SEM (on a similar, but not the same, sample). The size distribution diagram obtained by SEM in a scanned area of 3x3 mm and analyzed by self-developed software was compared with the size distribution diagram obtained by LLS.
§ SETUP CALIBRATION
The LLS technique enables monitoring of changes in the number of attached particles (Fig. <ref>), as well as changes in the size distribution during exposure to plasma and flushing. The Figure shows the stages of image processing of Si particles before and after 6h of exposure to hydrogen plasma. The image clearly shows a change in the number of particles. In order to demonstrate the stability of the optical system, a seven-hour measurement of fixed-size particles (melamine) is used to calibrate the counting of particle numbers (see section <ref>). Furthermore, a calibration for obtaining particle size distributions is performed based on Mie theory with a correction for the refractive index (see sections <ref> and <ref>). Finally, in section <ref> the calibration of the total substrate scattering will be demonstrated.
§.§ Particle number evaluation
Evaluating the number of particles on the surface is challenging. For example, the resolution of the long-distance microscope is limited by the Abbe diffraction limit determined by the closest distance at which two separate sources of light can be distinguished from one another. This limit is expressed by<cit.>
d ≈λ/2NA
where d is the minimum resolvable distance between two sources of scattered light, λ is the wavelength of the laser light (532 nm) and NA is the numerical aperture (which in our configuration equals 0.137). Therefore, the resolution of our system is limited to approximately 1.9 μm.
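As a quick numerical cross-check (using the 5 cm aperture diameter and 18 cm working distance quoted above for the long-distance microscope), the NA and the corresponding Abbe limit can be evaluated directly:

import math
NA = math.sin(math.atan(0.025 / 0.18))   # half-aperture over working distance
d = 0.532e-6 / (2 * NA)                  # Abbe limit at 532 nm, in metres
print(f"NA = {NA:.3f}, d = {d * 1e6:.2f} um")   # NA ~ 0.14, d ~ 1.9 um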
The imaging of particles is limited not only by Abbe diffraction but also by the physical vibrations of the optical system and by variations of the particle shape and composition. In our experiments, the influence of camera noise, intensity fluctuations of the laser beam, and laser multimodality was also noted. Due to the limited coverage of these effects in the literature, comparisons were not made. Experimental uncertainties can be evaluated from measurements of light scattered from a stationary, undisturbed sample. To enable this evaluation, a 7-hour-long imaging experiment of highly monodisperse 5.26 µm melamine spheres (see Table <ref> with samples) was conducted. Note that in this experiment no flushing or plasma exposure was applied. The results (Fig. <ref>) demonstrate high laser stability and low counting uncertainty. In this experiment, the laser illumination and camera settings were identical to those in the experiments with plasma and flushing. It was shown that the dispersion of the number of detected particles was about 3% (which is the lowest achievable noise level) with no long-term trends.
§.§ Size distribution of particles in LLS
Knowing the size distribution of processed particles is important. For instance, if large particles are more subjective to external stress factors, lowering their adhesion, such as those induced by exposure to plasma or a gas jet, the size distribution could shift toward smaller sizes. In another example, if exposure to plasma would lead to a developed surface and, thus, to a higher reflection coefficient of the incident light, the particles under the detection limit would become visible again. The particles that were already above the detection limit would shift toward larger sizes.
The determination of the particle size distribution is even more complicated than the counting of particles. As is generally known, CCD and CMOS cameras can be subject to an effect called "blooming".<cit.> Blooming means that oversaturated pixels leak excess charge to their neighboring pixels. This process propagates until it reaches the edge, visibly and virtually enlarging the particle. Imaging the entire particle requires sufficient illumination, and most of the particles under study then scatter light in the flat top-hat regime, which means that the pixels' capacity is oversaturated. Hence, the detected particle size, taken as the number of bright pixels above the threshold, is not consistent with the true particle size. A 2 μm particle occupied around 50 bright pixels (about 7 pixels in diameter) on the camera when in the FoV. The only invariant in this problem is the integral of the photo-induced electrons in the camera's matrix or, in other words, the scattering efficiency of an individual particle.
Additional filtering must be applied before integrating the intensities of the pixels imaging the particles. After averaging the intensities of 5 images of 5 laser shots and applying the threshold value, the script filters tiny features (below 10 bright pixels in size). There are two reasons for this filtering. The first reason is that the high camera gain (max value, 48), used for high sensitivity, produces a few hot pixels that occur even without laser illumination and do not correspond to an actual signal. These hot pixels must be removed. The second reason relates to the presence of particles with sizes close to the detection limit. Due to the fluctuating laser intensity, these detections can appear and disappear from the detection region, significantly enhancing the noise level. Thus, by removing them, we focus on the residual population of particles that can always be identified with high confidence.
The correct approach would be to look at the scattering intensity of individual particles. As is generally known, particles of several micrometers in size obey Mie scattering theory.<cit.> The algorithm processing the collected images worked as follows. First, the script averaged the intensities of 5 captured frames. Second, after applying the threshold, the intensities of the images of the particles with an area larger than 10 pixels were integrated. Third, the scattering cross-section of the particle was calculated by rescaling its total integrated intensity with a constant, which is a fitting parameter of this model (see Eq. 2). Finally, an equivalent sphere with the same scattering cross-section and refractive index was calculated using Mie theory, from which the size of the sphere/particle was derived. Therefore, measured scattering cross-sections can be translated into actual particle sizes using the Mie model for the light scattering by an individual particle. For this, a Mie calculator<cit.> was used to evaluate the effective cross-sections of the particles for different particle sizes (from 0.1 to 7 µm). The absorption of light by the particles was not taken into account in the calculations due to a lack of available data. The results of the calculations for particles with a variety of refractive indices n from 1.87 to 4.15 and the light collected in the NA corresponding to the microscope are plotted in Figure <ref>.
In the Mie model, a spherical particle is situated in vacuum and emits light in all directions. Particles whose sizes are several times larger than the wavelength of the incident radiation predominantly scatter light forward and backward. We considered a model in which particles are positioned on a reflecting substrate, thus collecting only a portion of the forward and backward scattering into the NA of the microscope (NA = 0.137 for an objective lens with a diameter of 5 cm and a distance of 18 cm from the particles). It is worth noting that near-field effects due to reflection from the substrate were not taken into account. All calculations were performed assuming an isolated particle in vacuum with scattering confined to the chosen NA of the microscope.
This graph shows that the particle's composition (i.e. the particles' refractive index) is more important for bigger sizes. Smaller particles are more sensitive to shape alterations. Our approach is to measure the scattering efficiency for the particles of known size and composition (in our case, monodisperse melamine spheres) as calibration. After this, for any material (i.e. refractive index) of interest, the cross-section of each particle can be translated into the size using the corresponding calibration curve from Figure <ref>.
§.§ Effective scattering cross-section calibration
In order to use the curves from Figure <ref>, they have to be calibrated. The measured intensities were fitted with the Mie curve. The results of this fit can be seen in Figure <ref>. The arrows indicate the measured cross-sections. The blue dashed line indicates the I_o value and can be considered as the detection limit of this method (it is attributed to the camera's noise which is of the same size as the min detected particles). The sizes of the particles were declared rather monodisperse, according to the manufacturer, with only a small standard deviation (see table <ref>), while the measured intensities had some uncertainty. The scattering cross-sections of the melamine particles were fitted using the formula
I_ec = (1 / α)· A · I_m + I_o
where I_ec is the effective scattering cross-section and I_m is the particle intensity measured by LLS. The constant A equals 700 and is related to the conversion of the laser intensity to the camera counts (or pixel counts). The constant α is the intensity correction factor. The applied laser intensity was changed from 1x to 14x to 20x depending on the size of the particles, i.e. for the 5.26, 2.94, and 2.14 µm particles, respectively. Therefore, for the purpose of laser intensity normalization, the intensity factor α was taken equal to 1, 14, and 20 for measurements on the 5.26, 2.94, and 2.14 µm particles, respectively. The parameter I_o remained constant for all fits and was taken equal to 8.5 μm^2. Physically it can be attributed to the losses of higher orders of diffraction, reflections from substrate asperities, and camera noise.
The uncertainty of the measured cross-sections (combined with the size uncertainty quoted by the supplier) can be considered as the error bars of the method. For example, the determination of the size of the 2.14 µm particles has an uncertainty of about ±1 µm, which is 50% of their size. This explains why the 2.14 and 2.94 µm particles appear to have the same scattering cross-sections. At the same time, the determination of the size of the 5.26 µm particles has an uncertainty of about ±0.5 µm, which is only 10% of their size.
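To make the last two steps of the procedure explicit, the sketch below applies the calibration I_ec = (1/α)·A·I_m + I_o from above and inverts a precomputed Mie curve by interpolation. The tabulated curve (diameters d_tab versus effective cross-sections sigma_tab for the relevant refractive index and NA = 0.137) is assumed to be produced externally with a Mie calculator and is not reproduced here; the constants are those quoted above.

import numpy as np

A, I_o = 700.0, 8.5     # calibration constants from the fit above

def diameter_from_intensity(I_m, alpha, d_tab, sigma_tab):
    # Effective scattering cross-section in um^2 from the measured intensity.
    I_ec = A * I_m / alpha + I_o
    # The integrated forward cross-section grows essentially monotonically
    # with size over 0.1-7 um, so a simple interpolation of the inverse
    # relation is sufficient for this illustration.
    return np.interp(I_ec, sigma_tab, d_tab)   # equivalent sphere diameter, um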
§.§ Calibration of the total substrate scattering
In addition to the measurements of the number of particles and the particle size (distribution), another possibility is to look at the total integrated scattering from the field of view of the microscope. Technically, the summed and averaged intensity of all pixels is like an analog signal and, therefore, is more reliable as it avoids any image processing other than thresholding for noise removal.
As mentioned, particles of several micrometers in size - as is the case here - obey Mie scattering theory: the scattered intensity is proportional to the particle cross-section (or to r^2 of the particle, where r is the radius) and depends on multiple parameters such as n, k and D/λ, and the polarization of the incident and collected light.<cit.> For instance, melamine resins have n = 1.872, k = 0 (extinction coefficient is approximately zero for melamine-based materials in the visible range of wavelengths<cit.>), D/λ is equal to 4.0, 5.5, 9.9 (for 2.14, 2.94 and 5.26 μm particles respectively). The incident light in our experiments was polarised perpendicular to the plane made up by the incoming beam, the reflecting beam, and the camera. The reflected light was not measured but expected to remain unchanged for particles significantly exceeding the wavelength of the radiation. A change in one of these parameters can be diagnosed by the TIS approach.
The resolution limit of the TIS can be derived by matching it, again, with the Mie calculations for the given size, reflective index, and NA. The amount of scattering by a single particle was obtained by dividing the TIS by the number of detected particles of fixed size (melamine samples in table <ref>). The sizes of the particles were taken according to the values declared by the manufacturer. The results of this calibration (Fig. <ref>) show a perfect match with the previously calibrated scattering cross-sections which proves that imposed filtering, thresholding, and image processing used in the previous subsection do not contribute to the uncertainty in size determination significantly. The good match is explained by testing monodisperse spheres with low standard deviation. When applying the TIS signal for polydisperse particles, the match will be less good. Therefore, it can be concluded that the resolution of the TIS measurements and the effective scattering cross-section of individual particles is the same.
§ RESULTS FOR LLS MEASUREMENTS OF SILICON PARTICLES EXPOSED TO FLUSHING AND PLASMA
Silicon particles were exposed to a series of external stress factors, namely flushing and plasma. The sequence of flushing-1 (10 min), plasma exposure (24 h), and flushing-2 (10 min) was applied to a wafer contaminated with Si particles. The flushing power was selected based on the following consideration: the flow must be strong enough to remove a noticeable fraction of the particles (exceeding the noise level of about 3% obtained in the calibration section). Physically, this implies that the flushing shear force and the average adhesion force are comparable. Flushing removes particles, while adhesion keeps them in place. If a particle remains on the substrate after flushing, it means the adhesion force is equal to or greater than the removal force. The flushing (using nitrogen gas) used in the sequence consisted of 3-second long pulsed exhausts (6 nlm flow) at a frequency of 0.01 Hz (every 100 sec). Each flushing campaign lasted 10 min. Between the two flushing campaigns, the samples were exposed to the hydrogen ECR plasma with the parameters described before. The quantification of the results used the calibrations described in the previous section.
The top graph in Figure <ref> shows the derived number of particles recorded over the experiment. The types of exposures (flushing or plasma) are mapped in different colors. Baselines (no exposures, only pressure changes) are shown in red, flushing campaigns are shown in green, and the plasma exposure is shown in yellow. The plot shows that a significant number of particles was flushed away after the first few pulses. Further flushing appears to be ineffective, meaning that the remaining particles are attached with a force exceeding the applied shear force. The intermediate part of the experiment, during plasma exposure, clearly shows that the number of particles monotonically decays over the exposure, which indicates the effect of plasma exposure on the particles' adhesion. This effect quantifies the impact visible in the images grabbed from the camera (Fig. <ref>). The bottom graph in Figure <ref> shows the TIS signal, which correlates with the top graph and confirms that the intensity drop correlates with the number of scattering centers. The more rapid decay of the TIS signal compared to that of the number of particles during the first hour of plasma exposure needs more investigation. Hypothetically, however, this effect could be explained by the presence of a native oxide shell or an adsorbed water layer around the particles with different n and k (i.e. lower scattering); this shell would disappear after the first exposure to the hydrogen plasma. After this phase, the scattering is proportional to the number of particles.
The gradual decrease in the number of Si particles during plasma exposure can be interpreted as follows. First, upon plasma impact, a particle may develop asperities across its surface, which reduces the effective vdW force and, in turn, promotes the release of that particle.<cit.> An alternative mechanism could be the weakening of the interfacing (binding) atomic layers, e.g. the removal by the plasma of intermediate adsorbate layers or of water forming hydrogen bridges.<cit.> Another possible explanation could be the etching of the particles' material. The silane molecule SiH_4 is a formation product of sputtered Si atoms reacting with free hydrogen radicals, and it is volatile under our conditions. If the particles - due to this etching - shrink below the detection limit, they disappear from the sub-set of particles detected by the script, and the number of particles is reduced. The second flushing campaign was not necessary due to the lack of remaining measurable particles. Overall, these measurements show that the particles whose adhesion force exceeded the shear force during the first flushing campaign became loose due to plasma exposure. The results are consistent with literature data about the etching of silicon in hydrogen plasma.<cit.>
The histograms in Figure <ref> show the comparison of the size distributions of Si particles (black bins) after deposition (on the left), after the flushing (in the middle), and after 6h of H_2 plasma exposure (on the right). In addition, the size histogram obtained from SEM measurements on a similar (but not the same) sample with virgin Si particles (scanned over an area of 3 mm x 3 mm ) is demonstrated in purple on the left "as-deposited" histogram for comparison. The particle size distribution histograms generated from in-situ laser light scattering (LLS) measurements were derived using the calibration procedure described above. The recorded intensities of Si particles were recalculated into sizes using the black curve from Figure <ref> corresponding to silicon. The uncertainty of the method for these particles is the same as for melamine particles. The blue dashed line indicates the detection limit of the system which depends on n. In fact, the detection limit is determined by the size, at which the constant I_o intersects with the Mie calculation curve. For Si particles with n = 4.15, the detection limit is around 3 µm.
The histogram of as-deposited particles demonstrates the good agreement of the mean values of the calibrated LLS measurements with the size histograms obtained using SEM. The slight deviation in sizes is explained by the fact that the SEM measurements were carried out on a similar sample with Si particles, but not the same one (to prevent carbonization of the particles in the SEM and its influence on the LLS measurements). It can also be seen from the plot that after the first flushing a small fraction of the detected particles has been removed, with no measurable difference in the size distribution. Despite the fact that the flushing force scales as d^2 while adhesion should scale as d, we did not see preferential removal of the bigger particles, which can be attributed to the importance of other factors, such as shape and roughness. As was mentioned before, this result indicates that the remaining particles have an adhesion force to the surface that exceeds the shear force exerted by the flushing. As already shown in Figures <ref> and <ref>, the number of particles decays over the duration of the hydrogen plasma exposure, while the histograms in Figure <ref> show that the particle size distribution has shifted down and toward smaller sizes (together with the mean value, shown as a red dotted line). As soon as a particle's size reduces to the one indicated by the blue line (i.e. the detection limit), the particle disappears from the histogram and from the script's detection, as it cannot be detected anymore.
Therefore, the reliability of the recognition software has been tested based on 3 types of measurements:
* The stability of the number of particle detections was demonstrated in Figure <ref> for non-disturbed particles (without stressors like flushing or plasma) on a substrate.
* The reliability of the obtained size distribution is shown in Figure <ref>, where the LLS measurements were compared to the SEM data (black bins vs purple bins).
* The average scattering cross-section of a melamine particle using the TIS signal was compared to individually detected particles and demonstrated a good match in Figures <ref> and <ref>. The TIS was treated as an analog signal for changing the scattering efficiency of particles.
The obtained size histograms indicate that etching, with particles shrinking below the detection limit, is the dominant mechanism for the interaction of Si particles with the H_2 plasma. As can be seen from the middle and right histograms, the highest percentage reduction was for the largest particles, and the percentage gradually decreased toward the smallest particles. There are two reasons for that: 1) bigger particles shrink and take the place of smaller particles (hence, a relatively constant amount of small particles remained unchanged); 2) etching of Si by chemical sputtering of hydrogen radicals is only possible when accompanied by energetic electrons and ions from the plasma breaking Si—Si bonds.<cit.> Accordingly, the etching occurs where the particles interact with the ions; hence, the particles etch more from the top than from the sides (as has also been demonstrated in AFM measurements<cit.>). This explains why the entire histogram does not shift toward smaller sizes as a whole.
§ CONCLUSIONS
The present study demonstrates the application of LLS, combined with long-distance microscopy, to in-situ characterize the response of micrometer-sized silicon particles on a smooth substrate to hydrogen plasma exposure or to a flushing gas jet. The number of particles, particle size distribution, and total scattering intensity (TIS) measured by laser light scattering (LLS) were calibrated with monodisperse melamine resin spheres. The results indicate that the counting accuracy was approximately 3% for 5.26 µm melamine spheres. Furthermore, the observed inconsistency in relating the counting of only the bright pixels to the particle's size was attributed to the blooming effect. Therefore, Mie theory was applied to convert the calibrated particle effective scatter cross-sections to the size equivalent. The accuracy of the LLS size measurement was found to be between 50% for 2.14 µm particles and 10% for 5.26 µm particles.
Surface-deposited Silicon particles were employed for LLS measurements in order to demonstrate the effectiveness of the method to serve as an in-situ diagnostic to visualize the effect of plasma exposure. The effect of plasma on Si particles is complex and may involve particle size and shape evolution due to chemical or physical sputtering. The in-situ measured counting and size evolution proves the etching of Si is dominant when exposed to H_2 plasma. The etching is mostly conducted by hydrogen ions. This is consistent with literature data obtained from SEM measurements. Additionally, SEM measurements conducted on virgin silicon particles demonstrated a high degree of concordance with the size distribution that was calculated using LLS and Mie theory and subsequently plotted.
In conclusion, LLS can be useful as a tool for in-situ measurement of plasma exposure or gas jet flushing, fragmenting, or etching of micrometer-sized particles with a statistical description of adhesion for multiple (100-1000s) particles exposed to the same stressor.
The assistance of P. Sanders, A. B. Schrader, J. T. Kohlhepp, and P. Minten in assembling the setup, as well as ASML in financial and scientific support, is gratefully acknowledged.
|
http://arxiv.org/abs/2307.05665v1 | 20230711180000 | Generalized Dualities and Supergroups | [
"Daniel Butter",
"Falk Hassler",
"Christopher N. Pope",
"Haoyu Zhang"
] | hep-th | [
"hep-th"
] |
Generalized Dualities and Supergroups
August 12, 2023
=====================================
§ INTRODUCTION
Abelian T-duality is an exact symmetry of perturbative string theory. Its initial formulation on an S^1 with associated isometries of metric and B-field can be straightforwardly extended to a d-dimensional torus, where the T-duality group expands to O(d,d; ℤ). Its modern description was given by Buscher <cit.>, who couched it in the language of the effective worldsheet σ-models with commuting isometries; here one can derive the transformation rules of the metric and B-field by integrating out the worldsheet one-forms that gauge the isometries. Later work extended this approach to include the fermionic fields and the Ramond-Ramond sector from the target space perspective <cit.> and from the worldsheet using both the Green-Schwarz superstring <cit.> and the pure spinor superstring <cit.>.
When these isometries no longer commute, it is no longer clear that the corresponding classical σ-model duality, known as non-abelian T-duality (NATD), is a full-fledged symmetry of string theory <cit.>. A symptom of this is that the dual space typically lacks local isometries that would permit one to invert the duality and recover the original space – the duality appears to be effectively one-way. Nevertheless, this procedure can still provide a means to systematically generate new supergravity solutions from existing ones.
Klimčík and Ševera showed that one can generalize the notion of duality, so that two or more σ-models related by NATD are indeed properly dual, in the sense that they can be derived from the same universal -model <cit.>. In this framework, NATD is just the simplest example of Poisson-Lie T-duality (PLTD) <cit.>, which can be further generalized to include a Wess-Zumino-Witten term <cit.> and a so-called dressing action <cit.>, where one factors out local symmetries in very close analogy to the construction of σ-models on coset spaces. In this paper, we will be concerned with an even more general framework, known as a generalized coset <cit.>. The relations between these various concepts can be summarized as follows:
abelian ⊂ non-abelian ⊂ Poisson-Lie ⊂ WZW-Poisson
                            ∩               ∩
                     dressing coset ⊂ generalized coset
(each vertical ∩ indicates that the class above it is contained in the class below it)
One specifies a Lie group D with a split signature Killing metric η and a maximally isotropic subgroup H of half the dimension. In the absence of a dressing action, the physical space lies on the coset H \D, and in this context D is usually called a double Lie group. For the case of a generalized coset, there is an additional “dressing action” by another isotropic subgroup F, and the physical space is the double coset H \D / F. Different σ-models arise when there exist different choices for H, and these are related by this more general notion of duality.
In recent years, a modern perspective on these developments has been provided in the language of double field theory (DFT) <cit.>.[The early work of Siegel <cit.> is essentially equivalent to the frame formulation of DFT. This already included a superspace formulation <cit.>, although limited to the type I and heterotic cases.] This is a generalization of supergravity incorporating T-duality manifestly in the target space geometry and low energy action of string theory. The coordinates x^m of spacetime are “doubled” to include dual coordinates x̃_m corresponding to winding modes of the string. The metric and B-field are combined into a generalized metric . We decompose the coordinates and generalized metric as
x^ = (x^m, x̃_m) ,
_ =
[ g_mn - b_m k g^k l b_l n b_m k g^k n; - g^m k b_k n g^m n ] .
In order to ensure that at most half the coordinates are physical, a section condition is imposed
η^_⊗_ = 0 , η^ =
[ 0 δ^m_n; δ_m^n 0 ] ,
where the derivatives act either on the same field or two different fields. The constant metric η is the natural split-signature O(D,D) invariant, and we have decomposed indices with respect to the GL(D) ⊂O(D,D) subgroup. Typically the section condition is solved by dispensing with all dependence on the winding coordinates ^m = 0. Different T-dual geometries are related by choosing different solutions of the section condition; these solutions are related by global O(D,D) rotations, which act on the generalized metric in the same manner as the Buscher rules <cit.>. In this sense, double field theory geometrizes T-duality.
This bears a striking similarity to PLTD and indeed the two are intimately related <cit.>, with PLTD and its generalizations corresponding to double field theory on group manifolds <cit.> or coset spaces <cit.>. This has been an active area of research in recent years (see e.g. <cit.> and references therein). As formulated in <cit.>, DFT encompassed only the NS-NS sector (graviton, B-field, and dilaton). It has since been extended <cit.> to include the NS-R fermions (gravitini and dilatini) and the R-R sector (the even/odd p-form complexes) of type II string theory, but this extension did not fully unify the fields. The three sectors, NS-NS, NS-R, and R-R are encoded separately in the low energy type II action, and this complicates the construction of the dual supergravity backgrounds since one cannot address all sectors simultaneously using the same methods. Typically, one uses geometric or σ-model methods to fix some of the fields and then exploits the structure of κ-symmetry and supersymmetry to uncover the rest. The Ramond-Ramond sector is particularly onerous, since unlike the other bosonic fields, it does not appear explicitly in the Green-Schwarz σ-model action.[The Ramond-Ramond sector does appear in the pure spinor action <cit.>, which has been used to cleanly derive its transformation rules under NATD.]
The goal of this paper is to address some of these topics from the perspective of a manifestly supersymmetric and duality covariant formulation. Such a formulation has recently been constructed by one of us in the language of double superspace <cit.>, building off earlier work on the subject <cit.>. Double superspace can be understood in a nutshell as simultaneously geometrizing supersymmetry and T-duality. In conventional superspace, the graviton (vielbein) and gravitino are unified into a single supervielbein, which in a certain gauge reads
E_M^A(x,θ) =
[ e_m^a(x) ψ_m^α(x); 0 δ_μ^α ] + O(θ) .
Diffeomorphisms and supersymmetry are unified into superdiffeomorphisms. In double superspace one is led to consider a generalized (double) supervielbein, which can be written in a certain gauge and duality frame as a product of three factors,
_^(x,θ, x̃, θ̃) =
[ δ_M^N B_MN (-)^n; 0 δ^M_N ]×[ E_N^B 0; 0 E_B^N (-)^b+bn ]×[ δ_B^A 0; S^B A δ^B_A ]
The field E_M^A is the supervielbein, B_MN is the super two-form (which appears in the Green-Schwarz action), and S^BA includes “matter” fields, the dilatini and Ramond-Ramond bispinor. The duality group O(D,D), which governs the geometric structure of double field theory, is replaced by its natural supergroup analogue, the orthosymplectic group OSp(D,D|2s) with D bosonic coordinates, s fermionic coordinates, and their duals. Diffeomorphisms, B-field gauge transformations, and supersymmetry are all encoded in a single generalized superdiffeomorphism. Because all of the fields of supersymmetric double field theory are described in a single geometric object, one can apply the same techniques to derive how all of them transform under dualities, including abelian, non-abelian, and their generalized cousins.
A crucial point about conventional superspace is that it is not simply described by a super-Riemannian geometry with an unconstrained supermetric. Rather, one must employ the supervielbein and impose constraints on its torsion tensor in order to recover the physical field content of supergravity. These constraints involve θ-derivatives, but typically constrain the x-dependence as well, placing the geometry on-shell. In the Green-Schwarz superstring, these constraints arise from requiring κ-symmetry. Analogous statements hold for double superspace – we need to impose constraints on the generalized flux tensor _ in order for a supergravity interpretation to be possible, and these will coincide with the κ-symmetry constraints.
We begin in section <ref> with a discussion of superspace double field theory, highlighting how the duality group OSp(D,D|2s) acts on the various constituents of _^. These transformations provide the generic scaffolding in which all T-dualities must act. In section <ref>, as the simplest non-trivial example of such a transformation, we review the case of super non-abelian T-duality (NATD) in the Green-Schwarz superstring, where a supergroup G of isometries is dualized <cit.> (see <cit.> for earlier work on abelian T-duality of a single bosonic isometry and <cit.> for the non-abelian T-dual of supercoset σ-models). By comparing the dual Green-Schwarz models, one can deduce the form of the orthosymplectic transformation, which immediately yields the transformation rules of the supergravity fields, including the transformations of the Ramond-Ramond fields <cit.>.
As a side benefit of this analysis, we are able to specialize to a fermionic isometry and recover results for fermionic T-dualities, both in the abelian <cit.> and non-abelian <cit.> cases. The case of non-abelian fermionic T-duality has been of particular interest recently, and we highlight the origin of the conditions given in <cit.> for the Killing spinor from the σ-model.[Fermionic T-duality has also been discussed in the context of a doubled σ-model with T-dual fermionic coordinates <cit.>. We will not address doubled σ-models here, but it is likely super DFT can be formulated there, in analogy to the work of
<cit.>.]
Non-abelian T-duality of the GS superstring provides a concrete example, exhibiting a number of important features that continue to hold for more general cases. In section <ref>, we introduce, following <cit.>, the notion of a generalized parallelizable superspace, which is the natural analogue of a group manifold in the doubled setting, requiring only a double Lie group D and its maximally isotropic subgroup H. In section <ref>, we extend this framework to generalized supercosets, where an additional isotropic subgroup F is factored out, in direct analogy to the bosonic case <cit.>. In both of these discussions, we address two particular examples, D = G × G and D = G^ℂ, where G is a real super Lie group admitting an invertible Killing form. Both examples admit maximally isotropic subgroups H, the diagonal subgroup G_ diag for G× G and the real subgroup G for G^ℂ. The two groups G × G and G^ℂ can be analytically continued into each other, and the same holds true for their respective generalized geometries. For G^ℂ, another isotropic subgroup H is sometimes possible: it requires an R-matrix satisfying the modified classical Yang-Baxter equation. The two solutions for G^ℂ lead to backgrounds related by Poisson-Lie T-duality.
The discussion of generalized parallelizable superspaces and generalized supercosets in sections <ref> and <ref> is not really any different from their bosonic analogues: in effect, we simply insert a grading. In order to apply these results to supergravity, we must further impose additional κ-symmetry constraints on the generalized flux tensors. We review these in section <ref> and discuss how they can be imposed in two specific cases: these are the so-called λ and η deformations of the AdS_5 ×S^5 superstring. The λ deformation <cit.> (building off earlier work <cit.>) arises from a deformation of the non-abelian T-dual of the AdS_5 ×S^5 superstring. The η deformation <cit.> is a type of Yang-Baxter σ-model <cit.> (see also <cit.>). Remarkably, these two different deformations preserve the integrable structure of the superstring, and this property has driven interest in them. From our perspective, these models are interesting because they can be very simply understood in the context of Poisson-Lie T-duality for the double Lie groups G × G and G^ℂ, respectively, where G is the superisometry group PSU(2,2|4) of the AdS_5 ×S^5 superstring. This interpretation was given in the language of -models for the bosonic sector in <cit.>. Our main task in section <ref> is to extend this to the fully supersymmetric case.
In addressing the λ and η models, we proceed ahistorically, and in fact, anti-chronologically. Beginning with the underlying double Lie structure of G × G and G^ℂ, we will seek to build a generalized supervielbein _^ whose flux tensor _ obeys the κ-symmetry constraints (<ref>) and (<ref>).
For each case, there turns out to be a single one-parameter family, and this leads inexorably to the λ and η models upon identifying the underlying constituents of the Green-Schwarz action. All supergravity fields, including the Ramond-Ramond field strengths, are read directly off from the supervielbein and match the results derived by analyzing the respective Green-Schwarz σ-models <cit.>.
In line with our ahistorical approach, we will not directly address issues of integrability or the connection between generalized duality and integrability. For a discussion of integrability, the reader is referred to the recent work <cit.>, which explored some of these very issues for supersymmetric σ-models; specifically, it was shown that the Lax connection is preserved (a sufficient condition for integrability) after performing non-abelian T-duality in superspace analogues of the the principal chiral, symmetric space, and semi-symmetric space σ-models. On the connection between -models and integrability, the reader is referred to
<cit.>.
We include several appendices. Our conventions for supergroups, including the orthosymplectic group, can be found in appendix <ref>. We sketch some relevant results for type II supergravity in superspace in appendix <ref>. A concise discussion of gauged superspace σ-models (whose results we employ in section <ref>) is given in appendix <ref>. Finally in appendix <ref> we give the generalized flux tensors for the η and λ models that are compatible with κ-symmetry.
§ SUPERSYMMETRIC DOUBLE FIELD THEORY AND THE FRAMEWORK OF T-DUALITY
We will be employing the supersymmetric formulation of type II double field theory
in superspace recently discussed in <cit.> (see also <cit.> and <cit.> for related earlier discussions).
In this section, we will review some basic elements of this approach and explain how T-duality is manifested on the generalized supervielbein. As a first step, we will review some key features of
bosonic double field theory, before showing how these generalize to the supersymmetric setting.
§.§ Bosonic double field theory and O(D,D) T-duality
Double field theory <cit.> is formulated on a space with local coordinates x^ where fields are subject to a modified notion of generalized diffeomorphism governed by a Lie derivative which preserves an O(D,D) structure. For vector fields V^,
_ξ V^ = ξ^_ V^ - V^ (_ξ^ - ^ξ_) .
where indices are raised and lowered with the constant O(D,D) metric η_. The space comes equipped with a generalized metric _, which is an element of O(D,D) so that its inverse is (^-1)^ = ^. Closure of generalized diffeomorphisms is guaranteed if we universally impose a section condition on all fields and parameters,
η^_⊗_ = 0,
where the derivatives may act either on the same or different fields. The metric and coordinates can be decomposed in terms of the GL(D)⊂O(D,D) subgroup as
η_ =
[ 0 δ_m^n; δ^m_n 0 ] ,
x^ = (x^m, x̃_m) ,
_ = (_m, ^m) .
The section condition can then be solved by choosing ^m = 0 universally. Then, the generalized metric is described in terms of a metric g_mn and a Kalb-Ramond two-form b_mn as
_ =
[ g_mn - b_m k g^k l b_l n b_m k g^k n; - g^m k b_k n g^m n ] ,
and the generalized Lie derivative decomposes into the conventional GL(D) Lie derivative and B-field transformations.
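As a small numerical sanity check (illustrative only, not part of the original presentation), one can verify that the generalized metric built from a random metric and B-field in this way indeed satisfies the O(D,D) condition, i.e. that contracting it twice with the inverse of η returns η:

import numpy as np

D = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((D, D))
g = A @ A.T + D * np.eye(D)                    # random positive-definite metric
b = rng.standard_normal((D, D)); b = b - b.T   # random antisymmetric B-field
gi = np.linalg.inv(g)

H = np.block([[g - b @ gi @ b, b @ gi],        # generalized metric, as above
              [-gi @ b,        gi    ]])
eta = np.block([[np.zeros((D, D)), np.eye(D)],
                [np.eye(D),        np.zeros((D, D))]])

# H is an O(D,D) element: its inverse is obtained by raising indices with eta.
assert np.allclose(H @ np.linalg.inv(eta) @ H, eta)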
The description in terms of a generalized metric turns out to not be particularly useful when passing to superspace. Just as supergravity requires that we exchange a metric g_mn for a vielbein e_m^a, supersymmetric double field theory requires we replace the generalized metric _ with a generalized vielbein V_^. These are related by
_ = V_^ V_^_
where _ is a constant matrix invariant only under the double Lorentz subgroup O(D-1,1) ×O(1,D-1) of O(D,D). These objects are naturally written in the chiral basis of O(D,D), where a flat vector V^ = (V^, V^) is decomposed into a left-handed vector V^ of O(D-1,1) and a right-handed vector V^ of O(1,D-1). In this chiral basis,
η_ =
[ η_ 0; 0 η_ ] ,
_ =
[ η_ 0; 0 -η_ ] ,
η_ = - η_ .
The generalized vielbein can be decomposed as <cit.>
V_^m = 1/√(2) e_^m , V__m = 1/√(2) (e_m_- e_^n b_nm)
= 1/√(2) e_^n (g_nm - b_nm) ,
V_^m = 1/√(2) e̅_^m , V__m = 1/√(2) (e̅_m_- e̅_^n b_nm)
= -1/√(2) e̅_^n (g_nm + b_nm) ,
which is the generic form if one supposes V_^m and V_^m to both be invertible matrices. This can be expressed as a product of two O(D,D) factors:
V_^ =
1/√(2)[ e_^n η_ e_n^; e̅_^n η_e̅_n^ ]×[ δ_n^m -b_nm; 0 δ^n_m ] .
The two vielbeins e_m^ and e_m^ describe the same metric,
g_mn = e_m^ e_n^η_ = - e̅_m^e̅_n^η_
implying that they are connected by a Lorentz transformation
Λ_^ = e_^m e̅_m^ .
The double Lorentz symmetry can be fixed to a single Lorentz group by adopting the gauge Λ=1. However, in supergravity this is more subtle because chiral fermions are present, breaking each Lorentz group to its connected (proper orthochronous) component. This means that Λ falls into one of four classes, depending on whether it preserves or reverses the temporal and spatial orientations: this distinguishes the type IIA/IIB/IIA^*/IIB^* duality frames <cit.>.
Double field theory conveniently packages the O(D,D) structure of T-duality transformations. To see how, we define _nm := g_nm - b_nm and _nm := g_nm + b_nm = (_nm)^T = _mn.
An O(D,D) transformation U_^ acting on the right of V_^ can be written
V'_^ = V_^ U_^ ,
U_^ =
[ U_m^n U_mn; U^mn U^m_n ] .
Defining
X_m^n := U_m^n + _m p U^p n , X̅_m^n := U_m^n - _m p U^p n ,
Y_mn := U_mn + _m p U^p_n Y̅_mn := U_mn - _m p U^p_n ,
one can show that
e'_^m = e_^n X_n^m ,
e̅'_^m = e̅_^n X̅_n^m,
'_mn = (X^-1)_m^p Y_p n , '_mn = (X̅^-1)_m^p Y̅_p n .
This recovers the Buscher rules for the metric and B-field and has
the form of a fractional linear transformation on _nm.
The fact that '_mn = '_nm follows from the O(D,D) structure. Also encoded above is how the Lorentz transformation Λ'_^ that defines the type II duality frame is related to the original Λ_^.
This can be written alternatively as a left or right Lorentz transformation Λ(U),
Λ'_^ = e_^m (X X̅^-1)_m^n e_n^_Λ(U)_^×Λ_^
= Λ_^×e̅_^m (X X̅^-1)_m^n e̅_n^_Λ(U)_^ .
Again, the fact that this is a Lorentz transformation follows from the O(D,D) structure.
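These transformation rules are easy to verify numerically. The sketch below (illustrative only) generates a random O(D,D) element U, applies the fractional linear transformation to E = g - b, and checks that the resulting g' and b' reproduce the generalized metric obtained by conjugation; the convention H → U^{-1} H U^{-T} for the lower-index generalized metric is our reading of the right action of U on the generalized vielbein.

import numpy as np
from scipy.linalg import expm

def gen_metric(g, b):
    gi = np.linalg.inv(g)
    return np.block([[g - b @ gi @ b, b @ gi], [-gi @ b, gi]])

D = 3
rng = np.random.default_rng(1)
A = rng.standard_normal((D, D)); g = A @ A.T + D * np.eye(D)
b = rng.standard_normal((D, D)); b = b - b.T
E = g - b                                             # E_mn = g_mn - b_mn

eta = np.block([[np.zeros((D, D)), np.eye(D)], [np.eye(D), np.zeros((D, D))]])
K = rng.standard_normal((2 * D, 2 * D)); K = K - K.T
U = expm(eta @ K)                                     # satisfies U^T eta U = eta
U1, U2, U3, U4 = U[:D, :D], U[:D, D:], U[D:, :D], U[D:, D:]

X = U1 + E @ U3                                       # X_m^n = U_m^n + E_mp U^{pn}
Y = U2 + E @ U4                                       # Y_mn  = U_mn  + E_mp U^p_n
Ep = np.linalg.solve(X, Y)                            # E' = X^{-1} Y = g' - b'
gp, bp = (Ep + Ep.T) / 2, (Ep.T - Ep) / 2

Ui = np.linalg.inv(U)
assert np.allclose(gen_metric(gp, bp), Ui @ gen_metric(g, b) @ Ui.T)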
In addition to the generalized vielbein, double field theory also involves a generalized dilaton e^-2d. This is a density under O(D,D) transformations, transforming as
_ξ e^-2d = ξ^_ e^-2d + _ξ^ e^-2d
= _ (ξ^ e^-2d) .
Upon solving the section condition, the physical dilaton φ is
identified by removing a density factor from the generalized dilaton,
e^-2d = e^-2φ×det e_m^a. A generic transformation of the generalized dilaton is simply a scalar factor
e^-2d' = e^-2d U_Δ ,
which is a priori independent of U_^. Together U_^ and U_Δ encode an O(D,D) ×ℝ_+ transformation. It follows that the physical
dilaton transforms as
e^-2φ' = e^-2 φ×det X_m^n × U_Δ .
Note that det X̅_m^n = det X_m^n since X and X̅ are related by a Lorentz transformation.
§.§ Supersymmetric type II double field theory
We turn now to supersymmetric type II double field theory <cit.>.
At the component level, supersymmetric double field theory consists of the following fields:
* the generalized vielbein V_^ and the generalized dilaton e^-2d;
* the gravitini Ψ_^ and Ψ_^β, which are vectors and Weyl spinors under alternating Lorentz groups, and the dilatini ρ_α and ρ_, which are Weyl spinors of opposite chirality to the gravitini;
* and the Ramond/Ramond field strengths, which can be described equivalently as an O(D,D) spinor |F⟩ <cit.> or a Weyl bispinor F^α of O(D-1,1) ×O(1,D-1) <cit.>.
In order to make contact with conventional superspace (and the Green-Schwarz superstring), a parametrization is needed that naturally leads to a supervielbein E_M^A and a Kalb-Ramond super-two-form B_MN where z^M = (x^m, θ^μ) are the D bosonic and s fermionic coordinates of superspace. This can simply be done by mimicking the structure of bosonic double field theory, but replacing O(D,D) with its natural graded extension OSp(D,D|2s), the orthosymplectic supergroup involving 2D bosonic and 2s fermionic directions <cit.>. For type II superspace, we will need D=10 and s=32. For the details about this supergroup, we refer to appendix <ref>.
One begins by formulating supersymmetric double field theory on a superspace with local coordinates z^, with a curved vector index of OSp(D,D|2s). Generalized diffeomorphisms act on a vector V^ as
_ξ V^ = ξ^_ V^ - V^(_ξ^ - ^ξ_ (-)^) ,
where (-)^ is -1 if is fermionic and +1 otherwise.
Indices are raised and lowered with the graded symmetric orthosymplectic invariant η_ subject to NW-SE rules,
V_ = V^η_,
V^ = η^ V_,
and η^η_ = δ_^ (-)^.
Closure of the gauge algebra is guaranteed by imposing the section condition
η^_⊗_ = 0, exactly as in bosonic double field theory.
To recover conventional superspace, we decompose all objects carrying curved indices under the GL(D|s) ⊂OSp(D,D|2s) subgroup. The OSp(D,D|2s) metric in this basis is
η^ =
[ 0 δ^M_N; δ_M^N (-)^MN 0 ] , η_ =
[ 0 δ_M^N; δ^M_N (-)^MN 0 ] .
The coordinates and their derivatives decompose as
_ = (_M, ^M) ,
z_ = (z̃_M, z^M) , _ z^ = δ_^ _M z^N = δ_M^N , ^M z̃_N = δ_N^M (-)^NM
where z^M is the physical coordinate and z̃_M is the winding coordinate. We normally solve the section condition by discarding any dependence on the winding coordinate.
As in bosonic double field theory, we introduce a generalized supervielbein _^ with which to flatten generalized vectors. We choose it to be an OSp(D,D|2s) element, so that it is related to its inverse (^-1)_^≡_^ by
_^ = η^_^η_ (-)^. For type II superspace, the flat index decomposes in the chiral basis into two vector indices, one for each factor of the double Lorentz group, and four Weyl spinor indices, one of each chirality for each factor. We denote this for a vector V_ as
[ V_ = ( V_ V_α V^α | V_ V_ V^ ) .; relative dimension 0 -1/2 1/2 0 -1/2 1/2 ]
We have included above the relative dimension of these various components. These dimensions can be understood as arising from the ℝ_+ factor in the decomposition
OSp(10,10|64) →O(9,1)_L ×O(1,9)_R ×ℝ_+.
This dimension is one reason why we should not combine the two 16-component Weyl spinors V_α and V^α into a single 32-component Dirac spinor.
We have normalized the relative dimension so that it leads to the correct notion of engineering dimension for the flat derivatives D_ = _^_,
[ D_ = ( D_ D_α D^α | D_ D_ D^ ) .; engineering dimension 1 1/2 3/2 1 1/2 3/2 ]
At the component level in double field theory, D_ and D_ correspond to the two flat derivatives (built respectively with e_^m and e̅_^m), while D_α and D_ correspond to the two supersymmetries. (The higher dimension D^α and D^ are discarded upon passing to component double field theory where one solves the section condition on the fermionic coordinates.)
Flat generalized vector indices are raised and lowered with
η_ =
([ η_ 0 0 0 0 0; 0 0 δ_α^β 0 0 0; 0 -δ^α_β 0 0 0 0; 0 0 0 η_ 0 0; 0 0 0 0 0 δ_^; 0 0 0 0 -δ^_ 0 ]) ,
η^ =
([ η^ 0 0 0 0 0; 0 0 δ^α_β 0 0 0; 0 -δ_α^β 0 0 0 0; 0 0 0 η^ 0 0; 0 0 0 0 0 δ^_; 0 0 0 0 -δ_^ 0 ]) .
These matrices (and their chiral subblocks) are invariant under the double Lorentz group.
As in the bosonic case, there are unphysical ingredients present in the supervielbein, which are associated with local symmetry transformations
δ_^ = λ_^_^ , λ_ = -λ_ (-)^ .
In the bosonic case, the local symmetry group is the double Lorentz group O(D-1,1)_L ×O(1,D-1)_R with commuting left and right factors. In the supersymmetric case, this group is larger, although it still factors into two commuting chiral pieces. We denote it H_L ×H_R. The generators λ_ of H_L are constrained as in Table <ref>.
Unlike the bosonic case, there is no simple prescription whereby some invariant _ determines λ; instead, one needs to take into account the constraint structure on the supersymmetric worldsheet <cit.>. For further details of this symmetry group, we refer to <cit.>.
There are competing ways of parametrizing a generic supervielbein, depending on whether one wishes to make contact with component double field theory or with type II superspace. In this paper, we will only be concerned with the latter. Then as shown in <cit.>, a generic supervielbein can be decomposed as a product of three simple factors:
_^
= (_B)_^× (_EΛ)_^× (_S)_^
The first is built out of the Kalb-Ramond super two-form,
(_B)_^ =
[ δ_M^N B_MN (-)^n; 0 δ^N_M ] ,
just as in the bosonic case.
The second factor _EΛ is written, in a chiral decomposition of the index, as
(_EΛ)_^ =
([ 1/√(2) E_M^ E_M^α 0 1/√(2) E_M^ E_M^ 0; 1/√(2) E^^M 0 - E_α^M (-)^m 1/√(2) E^^M 0 - E_^M (-)^m ]) .
The two superfields E_M^ and E_M^ (along with their inverses) are related by a Lorentz transformation,
E_M^ = E_M^Λ_^ ,
E_^M = Λ_^ E_^M .
We may think of _E Λ as being composed of a square invertible matrix E_M^A = (E_M^, E_M^α, E_M^) and an additional Lorentz transformation Λ with which we can define E_^M and E_M^ by the relations (<ref>). The _S factor is given, also in a chiral decomposition, as
(_S)_^ =
([ δ_^ √(2) S_^β 0 0 0 0; 0 δ_α^β 0 0 0 0; -√(2) S^α S^αβ - S^α S_^β δ^α_β 0 S^α 0; 0 0 0 δ_^ √(2) S_^ 0; 0 0 0 0 δ_^ 0; 0 S^β 0 -√(2) S^α S^αβ - S^ S_^ δ^_ ]).
It consists of fermionic superfields S_^β and S_^ as well as the symmetric bosonic superfields S^αβ, S^αβ, and S^α. All these constituents transform as their indices imply under double Lorentz transformations, while only _S transforms under the higher dimension H_L ×H_R transformations:
δS_^α = -1/√(2) λ_^α ,
δS_^ = -1/√(2) λ_^ ,
δS^αβ = -λ^αβ
+ √(2) S^c (α λ_^β) ,
δS^αβ = -λ^αβ
+ √(2) S^( λ_^) .
The last condition implies that S^αβ and S^αβ are pure
gauge, while the spin-1/2 parts of S_^α and S_^ are invariant
and constitute the dilatini
χ_α := -i S^β (γ_)_βα , χ_ := -i S^β (γ_)_βα .
The invariant components S^α contain the Ramond-Ramond bispinor
field strength (<ref>).
The precise dictionary between these constituents and those of type II supergravity are reviewed in Appendix <ref>. It is instructive to compare the numbers of independent bosonic and fermionic components of these constituents with those of a generic OSp(D,D|2s)
element.[This is a count of superfields rather than component fields. The number of bosonic and fermionic superfields need not match.]
Taking into account the range of the indices a=0,…,D-1 and α=1,…,s/2, we count
object bosonic fermionic
(_B)_^ 1/2 D(D-1)+ 1/2 s(s+1) Ds
(_EΛ)_^ 1/2 D(3D-1) + s^2 2Ds
(_S)_^ 1/2 s(s+1) Ds
OSp(D,D|2s) D(2D-1)+s(2s+1) 4Ds
In the same vein, we find that H_L×H_R gauge fixing gives rise to the physically relevant fields E_M^A (modulo Lorentz transformations λ_a^b), B_MN, χ_α, χ_ and the Ramond-Ramond bispinor S^α:
object bosonic fermionic
B_MN 1/2 D(D-1) + 1/2 s(s+1) Ds
E_M^A / λ_a^b D^2 - 1/2 D (D-1) + s^2 2 Ds
χ_α, χ_ 0 s
S^α 1/4 s^2 0
OSp(D,D|2s) / H_L×H_R D^2 + 7/4 s^2 +1/2 s (3D+1)s
H_L×H_R D(D-1) + 1/2 s (1/2 s + 1) (D-1)s
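The bookkeeping in these two tables can be checked mechanically. The following minimal Python sketch (purely illustrative; it simply re-adds the entries tabulated above) verifies that both tables sum to the dimensions of OSp(D,D|2s) for the type II values D = 10, s = 32:

D, s = 10, 32

# dimensions of a generic OSp(D,D|2s) element
osp_bos = D*(2*D - 1) + s*(2*s + 1)
osp_fer = 4*D*s

# first table: the three factors in the decomposition of the supervielbein
assert osp_bos == (D*(D - 1)//2 + s*(s + 1)//2) + (D*(3*D - 1)//2 + s*s) + s*(s + 1)//2
assert osp_fer == D*s + 2*D*s + D*s

# second table: physical constituents plus the enlarged local group H_L x H_R
coset_bos = (D*(D - 1)//2 + s*(s + 1)//2) + (D*D - D*(D - 1)//2 + s*s) + 0 + s*s//4
coset_fer = D*s + 2*D*s + s + 0
assert coset_bos == D*D + 7*s*s//4 + s//2
assert coset_fer == (3*D + 1)*s
h_bos, h_fer = D*(D - 1) + (s//2)*(s//2 + 1), (D - 1)*s
assert osp_bos == coset_bos + h_bos and osp_fer == coset_fer + h_fer
print("superfield counting is consistent")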
§.§ The structure of OSp(D,D|2s) transformations
From their embedding in double field theory, we will be able to derive the generic transformations of the supervielbein, dilatini, Ramond-Ramond sector, and dilaton under OSp(D,D|2s) transformations. For now, we will not concern ourselves with the precise form of these transformations. As we will discuss in the next sections, these encompass both bosonic T-duality <cit.> and fermionic T-duality <cit.> (see also <cit.>), as well as more general non-abelian dualities involving a supergroup G <cit.>.
The key first step in uncovering the OSp structure is to introduce square matrices _A^M and _A^M defined by[These definitions serve as the starting point of the generalized supervielbein analysis, see appendix B of <cit.>. Choosing these quantities to furnish two invertible supervielbeins leads to the solution discussed here. This is closely analogous to the bosonic analysis where one poses V_^m and V_^m to be invertible. These two vielbeins _A^M and _A^M will end up being proportional to the operators _± discussed in the context of the η and λ deformations <cit.>.]
_^M =: 1/√(2) _^M , _^M =: 1/√(2) _^M ,
_α^M =: _α^M ≡_α^M , _^M = _^M ≡_^M .
These quantities are presumed invertible and related to E_A^M by
_α^M = E_α^M
_α^M = E_α^M ,
_^M = E_^M
_^M = E_^M ,
_^M = E_^M -2 S_^βE_β^M
_^M = E_^M -2 S_^E_^M .
For reference, the inverse relations are
_M^ = E_M^ ,
_M^ = E_M^ ,
_M^α = E_M^α+ 2 E_M^S_^α ,
_M^α = E_M^α ,
_M^ = E_M^ ,
_M^ = E_M^+ 2 E_M^S_^ .
Note that while _M^ = _M^Λ_^, this does not hold for their inverses. A useful result is
| E_M^A| = |_M^A| = |_M^A|
since the matrices themselves differ only by Lorentz transformations on some of the elements.
In analogy to the bosonic case, we introduce
G_MN := _M^_N^η_
= -_M^_N^η_ , _MN := G_MN - B_MN , _MN := G_MN + B_MN ,
in terms of which we find
_M = 1/√(2) _^N _NM (-)^M ,
_M = -1/√(2) _^N _NM (-)^M ,
_αM = _α^N _NM (-)^M
_αM = -_α^N _NM (-)^M ,
_M = _^N _NM (-)^M
_M = -_^N _NM (-)^M .
A generic orthosymplectic transformation can be written
'_^ = _^_^
where
_^ =
[ U_M^N U_MN (-)^N; U^M N U^M_N (-)^N ] .
Defining
X_M^N := U_M^N + _M P U^P N (-)^P , X̅_M^N := U_M^N - _M P U^P N (-)^P ,
Y_MN := U_MN + _M P U^P_N (-)^P , Y̅_MN := U_MN - _M P U^P_N (-)^P ,
one can show that
'_A^M = _A^N X_N^M ,
'_A^M = _A^N X̅_N^M,
'_MN = (X^-1)_M^P Y_P N , '_MN = (X̅^-1)_M^P Y̅_P N .
From these equations one can read off the transformations of B_MN and G_MN.
Similarly, from
'_M^A = (X^-1)_M^N _N^A and
'_M^A = (X̅^-1)_M^N _N^A, we deduce the
transformations for the graviton one-form
E'_M^ = (X^-1)_M^N E_N^ ,
E'_M^ = (X̅^-1)_M^N E_N^ ,
and these are related by the Lorentz transformation
Λ'_^ = E_^M (X X̅^-1)_M^N E_N^_Λ(U)_^×Λ_^
= Λ_^×E_^M (X X̅^-1)_M^N E_N^_Λ(U)_^ .
Some useful identifies are
U^M P (X^-1)_P^N = -U^N P (X̅^-1)_P^M (-)^MN ,
(X X̅^-1)_M^N
= δ_M^N + 2 G_MP U^PQ (X̅^-1)_Q^N (-)^P ,
Λ(U)_^ =
δ_^ + 2 U^MP (X̅^-1)_P^N E_N^ E_M_ .
The gravitini are identified in Dirac spinor language using (<ref>).
Applying this result gives the transformations
E'_M^1
= (X̅^-1)_M^N E_N^1 ,
E'_M^2
= (X^-1)_M^N E_N^2
(Λ(U)^-1)_^
where Λ(U) is the spinorial version of Λ(U).
The transformations for the dilatini (<ref>) are a bit more involved.
From S_^α = -1/2_^M _M^α, we can show
S'^α = S^α - E_N^ U^N M (X̅^-1)_M^P E_P^α (-)^N χ'_α = χ_α - i U^N M (X^-1)_M^P E_P^
(γ_)_αβ E_N^β
where we have used the first identity in (<ref>) to replace X̅ with X.
A similar expression holds for χ_. Converting to Dirac notation gives[Several sign factors appear in the second term of χ'_2 relative to χ'_1. A relative minus sign comes about essentially from converting γ̅_ to -γ_* γ_ after conjugating by all the Λ factors. A factor of α_Λ comes from converting C̅^-1 to C^-1. Finally, a factor of α_Λβ_Λ appears after eliminating the γ_*.]
χ'_1 = χ_1 - i U^N M (X^-1)_M^P E_P^
(γ_ C^-1)_ E_N^1 ,
χ'_2 = Λ(U)_^(
χ_2 + i β_Λ U^N M (X̅^-1)_M^P E_P^
(γ_ C^-1)_ E_N^2)
The β_Λ factor is +1 for IIB/IIA^* and -1 for IIA/IIB^*.
The Ramond-Ramond bispinor in Weyl notation is
S^α = -^α M E_M^.
This transforms as
S'^α
=
-(^α N X_N^M
+ (^α_N - ^α P_P N ) U^N M (-)^N)
(X^-1)_M^P E_P^ .
One can show that ^α_N - ^α P_PN = _N^α
and translating this to Dirac form gives
S'^1 2
=
( S^1 2
- E_N^1 U^N M (X^-1)_M^P E_P^2) (Λ(U)^-1)_^ .
In the democratic formulation of type II supergravity, we define
^1 2 :=
∑_p odd1/p!_a_1 ⋯ a_p (C P_R γ^a_1 ⋯ a_p)^ IIB/IIB^*
∑_p even1/p!_a_1 ⋯ a_p (C P_R γ^a_1 ⋯ a_p)^ IIA/IIA^*
From (<ref>), we deduce the transformation
e^φ''^1 2
=
( e^φ^1 2
- 32i E_N^1 U^N M (X^-1)_M^P E_P^2) (Λ(U)^-1)_^ .
The above requires the transformation of the dilaton, which is our last field to discuss.
Its behavior in super-DFT mirrors its bosonic cousin. It is a superfield Φ that transforms as a scalar density under generalized Lie derivatives
_ξΦ = ξ^_Φ + _ξ^Φ
= _ (ξ^Φ) .
The generalized superdilaton Φ is related to the supergravity dilaton φ by
Φ = e^-2φ×sdet E_M^A.[Note that Φ is not simply related to the component dilaton e^-2d. They differ by a factor of sdet E_M^A / det e_m^a.]
Presuming the superdilaton to transform by a scalar factor
Φ' = Φ _Δ,
it follows that
e^-2φ' = e^-2 φ× X_M^N ×_Δ .
The factor _Δ is a priori independent of _^.
§ SUPER NON-ABELIAN T-DUALITY
The simplest and most direct situation where we can explicitly see how the OSp transformations of double field theory come about is in the context of T-duality for a supersymmetric σ-model, namely the Green-Schwarz superstring with a non-abelian (or abelian) isometry supergroup G. This situation was fully analyzed by Borsato and Wulff a few years ago <cit.>. We first summarize their construction and then reinterpret their results in the language of double field theory.
§.§ Worldsheet formulation of non-abelian T-duality
Following <cit.>, the starting point is a worldsheet Lagrangian
= -1/2√(-h) h^ij _i Z^M _j Z^N G_NM
- 1/2^ij _i Z^M _j Z^N B_NM
The supercoordinates Z^M = (X^m, Θ^μ) parametrize a target superspace.
The worldsheet metric h_ij is presumed to have Lorentzian signature (-,+) and
the worldsheet antisymmetric tensor density ^ij obeys ^01 = +1.
The target space tensors G_MN(Z) and B_MN(Z) are graded symmetric and antisymmetric
respectively.[One may refer to G_MN as the supermetric but this is something of a misnomer as it need not be invertible and the usual considerations of Riemannian geometry do not apply. For the Green-Schwarz superstring, G_MN is built from a rectangular piece E_M^a of the supervielbein E_M^A as G_MN = E_M^a E_N^b η_ab.]
Let the σ-model admit a supergroup G of isometries described by supervectors k_1 obeying
[k_1, k_2] = f_1 2^3 k_3.
This is a graded commutator, and the isometry label 1 should be understood to decompose into bosonic and fermionic isometries, 1 = (1, 1). We presume that we can adopt a coordinate system where the coordinates Z^M factorize into coordinates Y^1 on which the isometries act and spectator coordinates Z^, so that k_1 = k_1^1_1. The superfields G and B decompose as
G =
e^1⊗ e^2 G_2 1( Z)
+ 2 e^1⊗ Z^ G_1( Z)
+ Z^⊗ Z^ G_( Z)
,
B = 1/2 e^1∧ e^2 B_2 1( Z)
+ e^1∧ Z^ B_1( Z)
+ 1/2 Z^∧ Z^ B_( Z) .
All the dependence on the coordinates Y^1 is sequestered in the left-invariant vector fields e^1 in the usual manner, e^1 t_1 = g^-1 g for g(Y)∈ G.
We review in Appendix <ref> how the above conditions come about.
The generators t_1 obey the algebra
[t_1, t_2] = - f_1 2^3 t_3 .
Our supergroup conventions are given in Appendix <ref>.
When the isometries act freely (that is, without isotropy), the above has a clear geometric interpretation: the coordinates Y^1 parametrize the orbits of G on the manifold. When the isometries act with an isotropy group H, then we can (at least locally) take the coordinates Y^1 to parametrize the orbits of G/H.[The strategy reviewed here follows <cit.> and is equivalent to extending the coordinates Ż by additional H coordinates so that the full group G acts freely. The conditions (<ref>) and (<ref>) guarantee that the additional degrees of freedom drop out.] The isotropy condition amounts to invariance under g → g h for h ∈ H, meaning that G_MN (and similarly for B_MN) must be invariant under the adjoint action of H,
( h)_1^1' G_1' 2' ( h)_2^2' (-)^22' + 2'
= G_1 2 ,
( h)_1^1' G_1'
= G_1 .
It must also project out the Lie algebra 𝔥,
ζ^1 G_1 2 = ζ^1 G_1 = 0 , ζ∈𝔥 .
Non-abelian T-duality is effected by replacing _i Y^1 e_1^1 with a 𝔤-valued worldsheet one-form Ã_i^1, and adding a term ^ij F(Ã)_ij^1ν_1 where F(Ã)_ij^1 is the worldsheet G-curvature built from Ã. Treating ν_1 as a Lagrange multiplier, one recovers the original action where à = g^-1 g is pure gauge. The T-dual model arises if we instead integrate out the one-form Ã. Working in lightcone coordinates for simplicity, the Lagrangian becomes
=
_+ Z^ _ _- Z^ (-)^
+ Ã_+^1 _1 2 Ã_-^2 (-)^2
+ Ã_+^1(_- ν_1
+ _1 _- Z^ (-)^)
+ (_+ Z^_1 - _+ ν_1) Ã_-^1 (-)^1
where we have introduced
_ = G_ - B_ ,
_1 = G_1 - B_1 ,
_1 = G_1 - B_1 ,
_1 2 = G_1 2 - B_1 2 - f_1 2^3 ν_3 .
The addition of the Lagrange multiplier to _1 2 is the major difference with respect to abelian T-duality. Integrating out the worldsheet one-forms gives the dual model
= _+ Z'^M '_M N _- Z'^N (-)^N
where the new coordinates are Z'^M = (Z^, Ỹ^1) with
Ỹ^1 = ν_2 δ^2 1 (-)^2
= ν_1 (-)^1 .
The choice of grading here may seem awkward, but it makes subsequent formulae simpler:
'_1 2 = ^1 2 ,
'_1 = ^1 2_2 ,
'_1 = -_2^2 1 (-)^2 ,
'_ = _ - _1^1 2_2 (-)^1
where we define ^1 2 as the graded inverse,
^1 3_3 2
= δ_2^1 (-)^21.
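As a quick check (an illustrative special case), drop the spectators and set the structure constants to zero with a single isometry direction. Then _1 1 = G_1 1 (the B-field and the Lagrange-multiplier term drop out), and the relations above collapse to '_1 1 = 1/_1 1, i.e. the abelian Buscher inversion, with the dual coordinate Ỹ^1 supplied by the Lagrange multiplier ν_1 as above.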
Comparing the expressions for '_MN with the formal result (<ref>) for a generic OSp(D,D|2s) transformation, we find that it can be written as a sequence of three orthosymplectic transformations, = _(0)_(1)_(2), where
_(0) =
(
[ δ_^ 0 0 0; 0 e_1^2 0 0; 0 0 δ^_ 0; 0 0 0 e_2^1 (-)^12+2 ]) ,
_(1) =
(
[ δ_^ 0 0 0; 0 δ_1^2 0 -f_1 2^3ν_3 (-)^2; 0 0 δ^_ 0; 0 0 0 δ^1_2 ]) , [2ex]
_(2) =
(
[ δ_^ 0 0 0; 0 0 0 δ_1 2 (-)^2; 0 0 δ^_ 0; 0 δ^1 2 0 0 ]) .
The factor _(0) flattens G and B in the isometric
directions with the left-invariant vielbein: this occurred in (<ref>).
The factor _(1) gives the non-abelian factor that replaces
_1 2 with _1 2 in (<ref>).
Finally, _(2) induces the familiar T-duality transformation à la Buscher.
Now one can use the results in section <ref> to compute the new gravitini, dilatini, and Ramond-Ramond bispinors. (We will return to the question of the dilaton in due course.)
The additional ingredients we will need are
X_M^N =
[ δ_^ _2 (-)^2; 0 e_1^1 _1 2 (-)^2 ] ,
(X^-1)_M^N =
[ δ_^ -_1^1 2 e_2^2
(-)^1; 0 ^1 2 e_2^2 ] .
Now we can directly compute the new supervielbein
E'^a =
Z^(
E_^a
- _1 (-)^1^1 2 E_2^a
)
+ ν_1 (-)^1^1 2 E_2^a ,
E'^1 = Z^(
E_^1
- _1 (-)^1^1 2 E_2^1)
- ν_1 (-)^1^1 2 E_2^1 ,
E'^2 = [ Z^(
E_^2
- _1 (-)^1^1 2 E_2^2)
+ ν_1 (-)^1^1 2 E_2^2] (Λ^-1)_^ .
The Lorentz transformation Λ and its inverse are
Λ_a^b = δ_a^b - 2 ^1 2 E_2^b E_1 a ,
(Λ^-1)_a^b = δ_a^b - 2 ^1 2 E_2^b E_1 a .
It is difficult to characterize fully this Lorentz transformation, although one can show that
Λ = (-1)^_B G where by _B we mean the bosonic dimension. This was proven in <cit.> for a bosonic group. Adapting their proof for a supergroup is straightforward. In their eq. (3.10), promote traces and determinants to supertraces and superdeterminants, leading to
Λ = (-1) ×_1 2/_1 2.
Because is the supertranspose of , their superdeterminants are related as
_1 2
= _1 2× (-1)^_F G where _F denotes
the fermionic dimension. The result follows since (-1) = (-1)^ G.
The super two-form, covariant field strengths, and dilatini transform as
B' = 1/2 Z^∧ Z^ B_
- 1/2^1 2(
ν_2 + Z^_2)∧(
ν_1 - Z^_1) ,
e^φ''^1 2 =
( e^φ^1 2
-32i E_1^1 ^1 2 E_2^2) (Λ^-1)_^ ,
χ'_1 = χ_1 - i ^1 2 E_2^b E_1^1
(γ_b C^-1)_ ,
χ'_2 = Λ_^(χ_2
+ i β_Λ ^1 2 E_2^b
E_1^2 (γ_b C^-1)_) .
These results match those found by Borsato and Wulff <cit.> subject to
the identifications
ν_I = -ν_1 ,
f_IJ^K = -f_1 2^3 ,
N_+^IJ = (-)^1^1 2 ,
N_-^IJ = -(-)^1^1 2 .
This argument is perhaps a bit too slick as it appears to ignore a key point: the transformation of _MN does not completely determine . Put simply, there are as many degrees of freedom in as there are in _^ itself, but only some of these appear in _MN. The choice (<ref>) was merely the simplest choice that reproduces '_MN, but this is hardly conclusive. What actually singles it out (we will show) is that it leaves the generalized fluxes of double field theory invariant — this has the crucial effect that it guarantees the dual theory will possess the proper supergravity constraints.
§.§ Double field theory interpretation
In defining the dual coordinate Ỹ^1 in (<ref>), we have (as usual in bosonic T-duality) identified it with the Lagrange multiplier ν_1 directly, swapping the index location by hand. This may not actually be the most natural choice; instead, what we can do is to think of ν_1 as a function of the new coordinates, which we denote Ỹ_1.[One could presumably also let ν_1 depend on the spectator coordinates, but this muddies the water.] These can be interpreted as the natural DFT coordinates dual
to Y^1. Then in the σ-model action, we denote
ν_1 = ẽ_1^1Ỹ_1 ,
ẽ_1^1 := ^1ν_1 (-)^11
= ẽ^2_1 (-)^11
with ẽ_1^1 is interpreted as the dual analogue of e_1^1.
The crucial feature is that while e_1^1 is the left-invariant vector field of the group G and therefore carries a flux, the dual vielbein ẽ_1^1 is purely flat. This slightly modifies _(1) to
_(1) =
(
[ δ_^ 0 0 0; 0 ẽ_1^2 0 -f_1 2^3ν_3 ẽ^2_2
(-)^2+2; 0 0 δ^_ 0; 0 0 0 ẽ^1_2 (-)^2 ]) .
For _(2), we simply replace 1 with 1 everywhere.
Now it will be convenient to denote the indices of these matrices as
(_(0))_^ ,
(_(1))_^ ,
(_(2))_^
where is flattened in the isometry direction, i.e. it involves
^, _ and ^1,_1. From the perspective of double field theory, we can dispense with _(2): this merely has the effect of swapping which coordinates we view as physical and which as winding, so we can think of it as a purely passive transformation. What interpretation do we give to _(0) and _(1)?
Suppose we have a generalized vielbein depending on two sets of doubled coordinates, Y^1 and Ỹ_1 as well as Z^ and Z̃_, in such a way that it decomposes into a product of two factors:
_^
= _^(Y, Ỹ) ×_^(Z, Z̃) .
The first factor involves only the Y coordinates and the second only the spectators.
(We don't actually need the dual Z̃_ coordinates, but we keep them for generality.) In the bosonic limit s=0, (<ref>) reduces to the generalized Scherk-Schwarz ansatz <cit.> in DFT. Here, we study its natural supersymmetrization. The tilde index ^ = (^, _) decomposes as = (, 1). We presume is chosen so that
_^ = δ_^ , _ = δ_ ,
That is, it is the identity in the non-isometric directions; this is the situation in the case at hand.
original model _^ =
(
[ δ_^ 0 0 0; 0 e_1^2 0 0; 0 0 δ^_ 0; 0 0 0 e^1_2 (-)^2 ])
= (_(0))_^ ,
dual model _^ =
(
[ δ_^ 0 0 0; 0 ẽ_1^2 0 ẽ_1^1 f_1 2^3ν_3
(-)^2; 0 0 δ^_ 0; 0 0 0 ẽ^1_2 (-)^2 ])
= (_(1)^-1)_^
Here one should think of ν_1(Ỹ) as the potential for ẽ_1^1 as in (<ref>).
The first generalized vielbein depends on Y but not Ỹ, and vice-versa for the second. Both of these, viewed as generalized vielbeins, involve the same flux tensor.
Recall that in double field theory, one can build a generalized flux tensor _
from the generalized vielbein,
___^ = -_^_^ , _ := -3 _[^__^_]
with the sign here chosen so that the definition of the flux tensor matches that of the torsion tensor in conventional (undoubled) superspace. Using the decomposition (<ref>), one finds
_ = _
+ _^_^_^_ (gradings suppressed)
where _ is built purely from (which is unchanged under duality) and
_
:= -3 _[|^__||^_ |]
=
f_1 2^3 _=_12^3
0 otherwise
for both the original and dual models.
This suggests an alternative way of seeing that the class of Green-Schwarz superstrings obeying the κ-symmetry constraints (<ref>) and (<ref>) is closed under super non-abelian T-duality, a result established explicitly in <cit.>. Let's begin with two observations:
* The Green-Schwarz action on its own does not contain all of the physical data — it contains only G_MN = E_M^a E_N_a and B_MN. However, if it obeys the κ-symmetry constraints, then one can uniquely identify the gravitini E_M^1 and E_M^2, as well as the dilatini and Ramond-Ramond bispinor by imposing various purely conventional constraints on top of κ-symmetry <cit.>. From these data, one can identify the generalized supervielbein up to its local tangent space symmetries (which include the double Lorentz group).
* The duality transformations from the GS action determine '_MN from _MN,
but this does not allow one to completely determine the orthosymplectic element .
There is residual ambiguity corresponding precisely to the elements not appearing
explicitly in _MN (and thus the GS action) — the gravitini, dilatini, Ramond-Ramond bispinor (plus the extra local gauge symmetries). We merely guessed the simplest form of .
But these issues are related! The simple choice of turns out to leave the generalized flux unchanged. Since the κ-symmetry constraints are already encoded in the fluxes, these are maintained as well. Hence, κ-symmetry is preserved under non-abelian T-duality.[To put it another way, the simple choice of turns out to be the one that leads to the same choices of gravitini, dilatini, and Ramond-Ramond bispinor made in <cit.>.]
§.§ The role of the dilaton and modified / generalized double field theory
We have not addressed how the dilaton changes under the duality.[From the perspective of the σ-model, the dilaton is an additional field added in order to restore Weyl invariance at the one-loop level. From the perspective of supergravity, the dilaton is a scalar field whose supersymmetry variation gives the dilatini. The perspective here is analogous to the supergravity point of view.] We will do this momentarily, but first, let us make a brief digression on the subject of what we call generalized double field theory.
Recall that the DFT dilaton Φ (here a scalar density of weight 1) can be used to construct a flux
_ = _^_logΦ
+ ^_ .
Upon solving the section condition, the generalized dilaton is related to a conventional dilaton e^-2φ via the superspace measure, Φ = e^-2 φ E_M^A.
Just as generalized supergravity <cit.> relaxes the assumption that a dilaton exists, one can define generalized double field theory by relaxing the assumption that a generalized dilaton exists. Then one replaces the flux (<ref>) with
_ = _^_
+ ^_ .
This is written in terms of a vector field _ which a priori obeys no particular
constraints. In order for _ to be a scalar under generalized diffeomorphisms, _ should transform as
δ_ξ_ = _ξ_ + _^ξ_ .
What distinguishes the choice of _ is that one requires _ to obey the same properties as it did when the dilaton existed. That is, we impose the same constraints and the same Bianchi identities. Viewed in this way, _ is defined in terms of _.
What exactly does this mean? The flux tensors _ and _ obey the Bianchi identities
_ :=
4 D_[ _] + 3 _[|^ _|] = 0 ,
_ := 2 D_[ _] + _^_+ D^_ = 0 ,
:= D^_ + 1/2 ^_+ 1/12 ^ _ = 0 .
The expression for _ (<ref>) can be rewritten in two equivalent ways
_ = 2 _[_] + ^__^_ __^ = _^_^ ,
while in (<ref>) can be rewritten as
= ^_ + 1/2^_ _^ = ^ .
When _ and vanish, ^ has an obvious interpretation as a generalized Killing vector.
Generalized double field theory is nearly (perhaps completely) equivalent to modified double field theory (mDFT) <cit.>. The distinction is that mDFT imposes the section condition on the index of ^, so that ^_ = 0 and ^⊗_ = 0. Upon doing so, vanishes and _ vanishes only if _ is the gradient of some other field. It is unclear to us whether the reverse is true, whether imposing = _ = 0 necessarily implies the section condition on _. If it is, then mDFT and generalized DFT should be identical.[In mDFT, one has both a vector _ and the dilaton gradient _Φ, but in principle one could just absorb the latter into the former to arrive at the formulation discussed here. It was argued in <cit.> that mDFT can always be interpreted as conventional DFT where the dilaton carries a linear dependence on some of the winding coordinates, while still satisfying the section condition. This forces the generalized vielbein to be independent of those winding coordinates.]
Both generalized DFT and mDFT lead to generalized supergravity upon solving the section condition, where we define
_M = -2 (X_M - K^N B_NM) + _M log E_N^A ,
^M = -2 K^M .
The measure factor in the first equation accounts for the inhomogeneous term in (<ref>) so that both X_M and K^M are a conventional one-form and vector respectively.
The explicit factor of the B-field ensures that X_M is inert under the B-field gauge transformations. The factors of -2 are chosen so that X_M = _M φ when a dilaton exists. Now one can show that if the modified flux tensor _ obeys the same Bianchi identities and same constraints as before, then the vector K^M turns out to be a Killing vector in conventional superspace and X_M is a one-form whose spinorial components are the dilatini. The other relations discussed in generalized supergravity <cit.> can be derived in like manner from generalized / modified DFT. We hope to elaborate on this in superspace in the future; the bosonic proof of this was given in <cit.>.
Returning to the original question: how does the dilaton or, more generally, X and K change under duality? Factorizing the supervielbein as in (<ref>), the dilaton flux becomes
_ = _^(
_^_
+ ^_)
+ ^_ .
We posit that the dilaton flux should remain unchanged. If so, then the element in parentheses must be fixed. In the spectator directions, we have simply
'_ = _ and
'^ = ^.
In the isometry directions, we find more intricate relations
'^1
= ^1 + D̃^1ẽ_2^2 ,
'_1
= _1 + D_1 e_2^2
+ 2 f_1 2^2 (-)^2
- ^2 f_21^3ν_3
where for convenience we have defined
D_1 = e_1^1 _1 ,
_1 = e_1^1 _1 ,
'_1 = ẽ_1^1 '_1 ,
D^1 = ^1×ẽ_1^1 ,
^1 = ^1 e_1^1 ,
'^1 = '^1 ẽ_1^1 .
The next step is to strip out the density behavior of _ and '_ by subtracting
factors of _log E_M^A and _log E'_M'^A.
We explicitly prime the index M of the dual model to emphasize that it involves a different
coordinate set (Z, Ỹ) from the original model (Z, Y). Here we will need the explicit transformation of the supervielbein in terms of X_M^N. This leads to
E'_A^M'(Z,Ỹ)
= E_A^M(Z,Y) ×_1 2(Z, Ỹ)
× e_1^1(Y) ×ẽ_1^1(Ỹ)
where we have exhibited the dependence on the coordinates Y, Ỹ, and the spectator coordinates Z. From these relations, we find
('_ - _log E') = (_ - _log E)
+ _log_1 2 ,
('^1 - D^1log E')
= ^1 + D^1log_2 3 ,
'^ = ^ ,
'_1 = (_1 - D_1log E)
+ 2 f_1 2^2 (-)^2
- ^2 f_2 1^3ν_3
Now we may identify the X and K fields. In the original model, we take
_- _logE = -2 X_ ,
^ = -2 K^ ,
_1 - D_1 logE = -2 X_1 ,
^1 = -2 K^1
where we denote X_M = X_M - K^N B_NM for convenience. The indices 1 are flattened
with e_1^1. In the dual model, we have the somewhat more complicated expressions
'_- _logE = -2 X'_ ,
'^ = -2 K'^ ,
'^1 - D'^1 logE' = -2 X'^1 ,
'_1 = -2 K'_1 .
Here we must remember that the duality involves a passive coordinate transformation so
X'_M
=
[ X'_ X'^1 ] ,
K'^M
=
[ K'^ K'_1 (-)^1 ] .
Rather than perform index gymnastics with the isometry coordinates, we will simply express
relations in terms of the flattened isometry index, even though it is in the “wrong” position:
X'_ = X_ -1/2_log_2 3 ,
X'^1 = K^1 - 1/2 D^1log_2 3 ,
K'^ = K^ ,
K'_1 = X_1
- f_1 2^2 (-)^2
- K^2 f_2 1^3ν_3 .
A rather strict check of these relations is this: K ⌟X should vanish in the dual model when it vanishes in the original model. This is a consequence of T-duality preserving κ-symmetric Green-Schwarz actions. We find
K' ⌟X'
- K ⌟X = - 1/2 K^_log_2 3
- K^1 f_1 2^2 (-)^2
+ 1/2 D^1log_2 3×(f_1 4^4 (-)^4
+ K^4 f_4 1^5ν_5
- X_1)
The second line can be rewritten as
1/2^3 2(- f_2 3^1 f_1 4^4 (-)^4
- f_2 3^1 K^4 f_4 1^5ν_5
+ f_2 3^1X_1) .
The first term drops out immediately using the Jacobi identity.
Evaluating the remaining terms requires a few features of K and X that arise in generalized supergravity. First, G_MN and B_MN, once flattened as in (<ref>), are independent of Y^1. The same should be true of K and X in generalized supergravity, because their various components appear in the torsion and curvatures.[We are speaking here of the flattened versions K^1 and X_1.]
0 = K^_ G_1 2
+ 2 K^3 f_3 (1^4 G_4 2) .
For the B-field, the relevant relation we need is X = - K ⌟ H. From this, one can show that
-f_1 2^3X_3
= K^_ B_1 2
+ 2 K^3 f_3 [1^4 B_4 2] .
Taking the difference between these two relations lets one rewrite the right-hand side in terms of
= G-B. Introducing the Lagrange multiplier field converts _1 2 to _1 2. Using the Jacobi identity simplifies the result to
f_1 2^3(X̂_3 - K^4 f_4 3^5ν_5)
= K^__1 2
+ K^3(f_3 1^4_4 2
+ f_3 2^4_1 4)
where we have suppressed gradings in the final term for readability. The term on the left-hand side is exactly what remains in (<ref>). Substituting this expression, we find the complete cancellation of the remainder of the right-hand side of (<ref>).
A specific case of interest is when we start with a model with a dilaton, a case analyzed in
<cit.>. Then K=0 and X = X = φ. The equations (<ref>) can be rewritten
X'_ = _( φ-1/2 log_2 3 ) ,
K'^ =0 ,
X'^1
= D^1 ( φ- 1/2 log_2 3 ) ,
K'_1
= D_1 φ- f_1 2^2 (-)^2 ,
where we have used D^1φ = 0.
Now the dual theory satisfies the conventional supergravity constraints when K'=0, so
D_1φ = f_1 2^2 (-)^2.
This imposes a requirement for how the dilaton should depend on the coordinates we
are dualizing. To solve this, we could extract from the dilaton a purely Y-dependent piece that generates this term, i.e.
φ(Z, Y) = φ_0(Z) + Δ(Y) ,
D_1Δ = f_1 2^2 (-)^2 .
In general there is no local obstruction to the existence of Δ, since it obeys the consistency condition [D_1, D_2] Δ = -f_1 2^3 D_3Δ by virtue of the Jacobi identity. Now the dual dilaton can be identified as
φ'(Z, Ỹ) = φ_0(Z) - 1/2log_2 3(Z, Ỹ)
so that X' = X' = φ'.
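Two remarks may help orient the reader (illustrative special cases of the formulas above). For a single abelian isometry, the dualized block reduces to g_yy and one recovers the Buscher shift φ' = φ - (1/2) log g_yy discussed earlier. More generally, whenever the (super)trace of the structure constants vanishes, i.e. the isometry supergroup is unimodular, the condition D_1Δ = f_1 2^2 (-)^2 is solved by Δ = 0, and the dual dilaton is simply φ' = φ_0 - (1/2) log _2 3.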
§.§ Component description
The previous discussion has been at the level of superspace. In order to make contact with the literature on fermionic and bosonic T-dualities of bosonic backgrounds, we should rewrite our expressions at the component level. Here we must already make a distinction between bosonic and fermionic isometries that arise from the algebra of supervectors
[k_1, k_2] = f_1 2^3 k_3 :
* Bosonic isometries are treated as conventional vectors k_1 = k_1^m_m acting on bosonic coordinates. These arise by taking the θ=0 parts of the bosonic supervectors k_1 = k_1^M _M. Since k_1^μ is fermionic, it must be at least linear in θ, and so can be discarded.
* Fermionic isometries are described by commuting spinors _1^i
with i=1,2. These arise by flattening the fermionic isometries k_1 with the gravitino one-forms and setting θ=0:
_1^i = k_1^M E_M^i |_θ=0
= k_1^μ E_μ^i |_θ=0 .
Since k_1^m is fermionic (being linear in θ), it can be discarded.
As is well known, bosonic isometries can arise as bilinears of fermionic ones. To describe this, we first rewrite (<ref>) with flat indices. Under a covariant Lie derivative generated by k_1, the supervielbein is merely rotated,
^ cov_1 E_M^A
:= k_1^N _N E_M^A + _M k_1^N E_N^A
= - E_M^B (λ_1)_B^A
where λ_1 is a Lorentz transformation.
This follows from the Green-Schwarz action, since invariance of G_MN implies the result for E_M^a; for E_M^i, one must employ the torsion constraints (which arise from κ-symmetry). This expression may equivalently be written
_B k_1^A (-)^B1
+ k_1^C T_C B^A = -(λ_1)_B^A .
The algebra of Killing supervectors can then be rewritten (with gradings suppressed)
f_1 2^3 k_3^A
= k_1^B k_2^C T_B C^A
- k_1^B (λ_2)_B^A
+ k_2^B (λ_1)_B^A .
These expressions lead immediately to several useful results. First,
taking (<ref>) with A=a and 1 2 = 1 2, we find
how a bosonic Killing vector is generated from two Killing spinors:
i f_1 2^3 k_3^a
= ^1_1γ^a _2^1
+ β_Λ ^2_1γ^a _2^2
=
^1_1γ^a _2^1
+ ^2_1γ^a _2^2 IIB/IIA^*
^1_1γ^a _2^1
- ^2_1γ^a _2^2 IIA/IIB^*
where k_3^a = k_3^m e_m^a. The chirality of ^1 is fixed while that of ^2 depends on whether one lies in a IIB/IIB^* or IIA/IIA^* duality frame. A crucial point is that the fermionic indices appear symmetrically in (<ref>) and the two commuting spinors may be taken to be the same:
i f_1 1^3 k_3^a
= _1^1 γ^a _1^1
+ β_Λ_1^2 γ^a _1^2 .
Taking A to be spinorial in (<ref>), we find the other useful relations
f_1 1^2_2^1
=
-1/8 k_1^b H_bcd γ^cd_1^1
+ β_Λ/16 e^φ k_1^b
C^-1γ_b _1^2
- λ_1_1^1 ,
f_1 1^2_2^2
=
+ 1/8 k_1^b H_bcdγ^cd_1^2
- 1/16 e^φ k_1^b
C^-1^T γ_b _1^1
- λ_1_1^2
with given by (<ref>).
The vielbein and B-field expressions match what the computations purely from
the bosonic dualities would give,
e'^a =
x^(
e_^a
- _1^1 2 e_2^a
)
+ ν_1^1 2 e_2^a ,
B' = 1/2 x^∧ x^ B_
- 1/2^1 2(
ν_2 + x^_2)∧(
ν_1 - x^_1) ,
indicating that the fermionic T-dualities have no effect on them <cit.>.
The fermionic T-dualities do matter, however, for the Ramond-Ramond background and for the dilaton, where we find
φ' = φ_0 - 1/2log_1 2
+ 1/2log_1 2 ,
e^φ''^1 2 =
( e^φ^1 2
- 32i E_1^1 ^1 2 E_2^2) (Λ^-1)_^ .
The additional Lorentz transformation above is given in vectorial form as
Λ_a^b = δ_a^b - 2 ^1 2 e_2^b e_1 a
and depends purely on the bosonic isometries.
The expression for the Ramond-Ramond bispinor involves
E_1^1 = e_1⌟ E^1, but it would be more useful
to rewrite this in terms of k_1⌟ E^1 = _1^1.
To do that, we need to apply the adjoint action of g to the isometry indices.
Recall we have
_1 2 = E_1^a E_2^b η_ab
- B_1 2
- f_1 2^3ν_3 where we have expanded the supervielbein in the original model as
E^a = Z^ E_^a + e^1 E_1^a.
In choosing the original coordinate system (<ref>), we
expanded in terms of the left-invariant vector fields. The right-invariant vector
fields are
g g^-1 = Y^1 k_1^1 t_1
and these are related to e_1^1 by
k_1⌟ e^2 = ( g^-1)_1^2 where g ξ^1 t_1 g^-1 = ξ^1 ( g)_1^2 t_2 .
Applying the adjoint action to _1 2 gives
_1 2 := ( g)_1^1' ( g)_2^2'_1' 2'
= k_1^a k_2^b η_ab
+ k_1⌟ k_2⌟ B
- f_1 2^3 ( g^-1)_3^4ν_4
where we suppressed gradings in the first equality.
Since we only care about purely bosonic expressions, we have simply
_1 2 = k_1^m k_2^n (g_mn - B_mn)
- f_1 2^3 ( g^-1)_3^4ν_4 ,
_1 2 = -Λ_1 2
- f_1 2^3 ( g^-1)_3^4ν_4
where Λ_1 2 := -k_1⌟ k_2⌟ B = k_1^M B_M N k_2^N.
The dilaton can be written
φ' = φ_0 - 1/2log_1 2
+ 1/2log_1 2
+ log ( g)_1^2
The Ramond-Ramond bispinor becomes
e^φ''^1 2
=
( e^φ^1 2
+ 32i _1^1 (^-1)^1 2_2^2) (Λ^-1)_^ .
An extra sign has appeared because we use the inverse (^-1)^1 2 rather than the graded inverse.
What can we say about _1 2? While it is fully characterized in superspace, on the bosonic background it can really only be described by its derivative. From the definition of Λ_1 2 in superspace, one can show that
Λ_1 2 =
- k_1⌟ k_2⌟ H
- f_1 2^3 k_3⌟ B .
This follows because the B-field in (<ref>) (like the metric) obeys _k_1 B = 0. The more general case is discussed in Appendix <ref>.
This leads to
_1 2
=
k_1⌟ k_2⌟ H
+ f_1 2^3(k_3⌟ B
- ( g^-1)_3^4ν_4
- k^4 f_4 3^3' ( g^-1)_3'^4ν_4) .
The quantity _1 2 should depend on the spectator coordinates,
the dual ỹ coordinate (via ν_4), and on the coordinates y only
via the adjoint action (since _1 2 was y-independent). This means
k_1⌟_1 2
= -f_1 1^3_3 2
-f_1 2^3_1 3 .
The terms involving ν already have this form, since
k_1⌟ν_4 = 0
and the Jacobi identity allows one to rewrite the pair of structure constants appropriately.
For the terms involving H and B, it helps to observe that
k_1⌟ H = - ( k_1⌟ B)
from which the desired property can be deduced. A key step is to exploit
k_1⌟ k_2⌟ k_3⌟ H
- 3 f_[12|^4 k_4⌟ k_|3]⌟ B = 0
which follows from the explicit form of H in terms of the B given in (<ref>).
The expression (<ref>) can be interpreted purely as a bosonic equation once we address the first term involving H. It is given by
k_1⌟ k_2⌟ H
=
i x^m (
_1^1 γ_m _2^1
- β_Λ_1^2 γ_m _2^2
)
=
i x^m ×^1_1γ_m _2^1
- ^2_1γ_m _2^2 IIB/IIA^*
^1_1γ_m _2^1
+ ^2_1γ_m _2^2 IIA/IIB^* .
Note the crucial relative sign difference with (<ref>). The importance of this sign difference was already noted in the context of non-abelian fermionic T-duality in <cit.>.
Abelian fermionic T-duality.
The fermionic T-duality discussed by Berkovits and Maldacena <cit.> corresponds to a single abelian fermionic isometry, for which the left-hand side of (<ref>) vanishes. No bosonic isometries are involved and so the vielbein and B-field are unchanged. However, the dilaton and Ramond-Ramond complex change as
φ' = φ_0
+ 1/2log_1 1 ,
e^φ''^1 2 = e^φ^1 2
+ 32i _1^1 (^-1)^1 1_1^2 .
Note that there is no Lorentz factor since Λ_a^b = δ_a^b. The function obeys
-i _m _1 1
=
^1_1γ_m _1^1
- ^2_1γ_m _1^2 IIB/IIA^*
^1_1γ_m _1^1
+ ^2_1γ_m _1^2 IIA/IIB^*
Since there is no duality in a bosonic direction, there are no dual bosonic coordinates for it to depend on.
Non-abelian fermionic T-duality.
Slightly more generally, one can consider a single non-abelian fermionic isometry <cit.>, which generates a single bosonic isometry:
{k_1, k_1} = -i k_1 ,
[k_1, k_1] = 0 , f_1 1^1 = -i .
Because we must dualize the full set of closed Killing supervectors, this is actually two dualities: a fermionic one k_1 and a bosonic one k_1.
In our conventions,
k_1
=
_1^1 γ^a _1^1
+ β_Λ_1^2 γ^a _1^2
=
^1_1γ^a _1^1
+ ^2_1γ^a _1^2 IIB/IIA^*
^1_1γ^a _1^1
- ^2_1γ^a _1^2 IIA/IIB^*
Now the expression (<ref>) becomes
_1 1 =
k_1⌟ k_1⌟ H
-i k_1⌟ B
+ iν_1 .
This can perhaps more transparently be written in the following way:
_m _1 1 = i (V_m - k_1^n B_nm ) ,
^1_1 1 = i ^1ν_1
= i ẽ_1^1
where
V_m =
_1^1 γ_m _1^1
- β_Λ_1^2 γ_m _1^2
=
^1_1γ_m _1^1
- ^2_1γ_m _1^2 IIB/IIA^*
^1_1γ_m _1^1
+ ^2_1γ_m _1^2 IIA/IIB^*
where ẽ_1^1 is the dual vielbein.
Note that k_1⌟ V=0. This is apparent both from the explicit expressions
in terms of the Killing spinors and from (<ref>), which collapses to
k_1⌟ k_1⌟ k_1⌟ H = 0.
The expression for _m _1 1 in (<ref>) matches the result in <cit.>
(with _1 1→ C and B → -B)
but the expression for ^m _1 1 is different, with
k_1^m there in place of our ẽ_1^1.
(In our approach, ^_1 1 vanishes since there are no dual coordinates
x̃_ in the σ-model.) The two identifications are nevertheless compatible,
because in the case of a single bosonic isometry, one can choose coordinates in the original geometry so that k_1^1 is a constant. Then one simply takes
ν_1 = k_1^1x̃_1.
Next we address the vielbein and B-field. They are
e'^a =
x^(
e_^a
- _1 G^1 1 k_1^a
)
+ ν_1 G^1 1 k_1^a ,
B' = 1/2 x^∧ x^ (B_ + _1 G^1 1 _1)
+ ν_1∧ x^ G_1 G^1 1
where we have exploited that ( g)_1^1 = 1.
Finally, the Ramond-Ramond complex is
e^φ''^1 2
=
( e^φ^1 2
+ 32i _1^1 (^-1)^1 1_1^2) (Λ^-1)_^ .
Recalling that G_1 1 = k_1^a k_1_a,
the Lorentz transformation governing the T-duality frame is
Λ_a^b = δ_a^b - 2 k_1 a k_1^b/k_1· k_1
Since this is a single bosonic T-duality, it exchanges the type of supergravity, from type IIB/IIB^* to IIA/IIA^*. One finds det Λ = -1 using the general argument reviewed in section <ref>. Whether this exchanges the star type (e.g. from IIB to IIA^*) depends on whether Λ_0^0 is positive or negative, that is whether the T-duality is spacelike or timelike. We find
Λ_0^0
= k⃗_1·k⃗_1 + k_1^0 k_1^0/k_1· k_1
which is indeed positive for spacelike k_1 and negative for timelike k_1.
§ GENERALIZED DUALITIES AND GENERALIZED PARALLELIZABLE SPACES
§.§ Construction of V_AM̂ for constant fluxes
In the preceding section, we focused on σ-models depending on two sets of fields: spectator fields Z^ and fields Y^1 that were freely acted upon by some set of isometries. After performing non-abelian T-duality, we arrived at a model with dual fields Ỹ_1. A key point we emphasized was that the dualized sector admitted a double field theory interpretation, with two different generalized vielbeins , (<ref>) and (<ref>), depending respectively on Y and Ỹ, so that the generalized fluxes (<ref>) were identical and constant.
Let us focus on this last point first, and for simplicity, we dispense with spectator fields. In analogy with the bosonic case <cit.>, we define a generalized parallelizable superspace as a (D+s)-dimensional supermanifold upon which we can introduce a set of OSp(D,D|2s)-valued generalized frame fields _^ whose generalized flux tensor _ (<ref>) is a constant, F_.
F_[^ F_] = 0
for some double Lie group D. In light of the discussion on non-abelian T-duality, there are two natural questions to pose. First, what conditions on D are needed in order to ensure that such a _^ exists? Second, does this have any relation to an underlying σ-model in which different realizations of _^ are dual in some sense?
We will not discuss the second question here, but such a model does exist: it is known as the -model <cit.> and corresponds essentially to the Tseytlin duality symmetric string <cit.>
with the generalized metric _^ given below. We refer the reader to the original literature as well as the recent discussion in <cit.>.
To construct the requisite _^, it turns out that just three conditions are sufficient:
* A double Lie supergroup D, generated by T_ = (T_A, T^A) with an algebra
[T_, T_] = -F_^ T_.
* A non-degenerate, ad-invariant pairing T_T_ = η_. Conventionally, we choose
η_ =
[ 0 δ_A^B; δ^A_B (-)^B 0 ] .
* A maximally isotropic subgroup H, generated by T^A.
Different choices of H turn out to correspond to different dual geometries and the supervielbein describes a coset H \D.
For the case of non-abelian T-duality discussed in the previous section, we would have T_ = (t_1, t̃^1), with commutation relations
[t_1, t_2] = - f_1 2^3 t_3 ,
[t_1, t̃^2] = -t̃^3 f_3 1^2 ,
[t̃^1, t̃^2] = 0 .
The t_1 generate the isometry group G and t̃^1 generate an abelian dual group G̃. The original σ-model geometry is produced by choosing H = G̃ and the dual geometry is produced by H = G. This case is known as a Drinfeld double, since the quotient of D by either maximally isotropic subgroup G or G̃ generates the other group, i.e. G = G̃\D and G̃ = G \D. The duality exchanges the roles of G and G̃. It is also possible for both groups G and G̃ in a Drinfeld double to be non-abelian. This leads to Poisson-Lie T-duality <cit.>, and this was historically the first step in generalizing non-abelian T-duality.
The construction of _^ proceeds as follows. A general group element of D is denoted g, and its left coset M = H \D corresponds to a decomposition g = h(ỹ) × m(y).
The generalized frame field is built from m. First, we decompose the Maurer-Cartan form as
m m^-1 = V^A T_A + A_A T^A (-)^A
where V^A and A_A are valued respectively on the coset and the subgroup. Next, we build the two-form _ by integrating
_ = _ = -1/12 m m^-1[ m m^-1, m m^-1 ] .
This is usually only locally defined. Then the generalized frame field is given by
_^ = M_^[ V_B^M -V_B^N _NM (-)^m; 0 V^B_M (-)^m ] ,
M_^ := ( m)_^ = m T_ m^-1T^ ,
= 1/2 V^A ∧ A_A + _ ,
We have denoted the two-form by rather than B since contributions from M_^ typically deform the matrix structure and contribute to the physical B-field.
For the case of non-abelian T-duality, choosing H=G leads to
_^ =
[ e_1^M 0; 0 e^1_M (-)^M ]
where e_1^M are the left-invariant vector fields on G,
see (<ref>). Alternatively, one can choose H = G. To arrange indices
as in (<ref>), we take T^A = t_1 (-)^1, T_A = t̃^1,
and m = exp(ν_1t̃^1 (-)^1) = exp(ν^A T_A)
with ν^A = ν_1 (-)^1. The result is
_^ =
[ ẽ_A^M 0; -ν^C f_C^A Bẽ_B^M ẽ^A_M (-)^M ] , ẽ_M^A = _M ν^A , ẽ^A_M = ẽ_M^A (-)^MA .
Swapping indices around, one can show this is just
_^ = _(2)_(1)_(2)^-1
where _(2) and _(1) are the subblocks of (<ref>) in the isometry directions.
More interesting examples are possible for any real Lie supergroup G, provided it admits a non-degenerate Killing form. These can be extended in two distinct ways to a double Lie group D, either by taking the product group G × G or its complexification G^ℂ. Both of these cases will be extremely important for the remainder of the paper, and we will describe them in some detail.
§.§ Example: D = G × G
We denote a group element of D = G× G by the tuple (g_1, g_2)∈D with g_1, g_2 ∈ G. We use the same convention for the Lie algebra d to define the pairing
ΞΞ' = 1/2ξ_1ξ_1' - 1/2ξ_2ξ_2' ,
for Ξ = (ξ_1, ξ_2) ∈d. In terms of the generators t_A of G, we choose the basis of generators on the product group as
T_A = ( t_A, - t_A ) , T^A = (t^A, t^A) .
In the second set, we have raised the indices using the graded inverse κ^AB (with NW-SE conventions) of the non-degenerate Killing form κ_AB = t_At_B.
This choice guarantees that T_T_ = η_ and that T^A generates the maximally isotropic subgroup H = G_diag. This is in fact the only viable choice without imposing additional structure on G. The resulting coset M = H \D is isomorphic to G.
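As a concrete illustration (a minimal numerical sketch with G = SU(2) and an assumed normalization t_A = -(i/2)σ_A, chosen so that κ_AB = δ_AB; none of these choices are taken from the text), one can verify that the generators T_A = (t_A, -t_A) and T^A = (t^A, t^A) are null with respect to the pairing and pair into the off-diagonal η:

import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [-0.5j*s for s in sigma]                       # su(2) generators, illustrative normalization

def ip(X, Y):                                      # trace form normalized so that kappa_AB = delta_AB
    return (-2*np.trace(X @ Y)).real

kappa = np.array([[ip(t[A], t[B]) for B in range(3)] for A in range(3)])
assert np.allclose(kappa, np.eye(3))               # hence t^A = t_A in this basis

T_low = [(t[A], -t[A]) for A in range(3)]          # T_A = (t_A, -t_A)
T_up  = [(t[A],  t[A]) for A in range(3)]          # T^A = (t^A, t^A)

def pairing(X, Y):                                 # 1/2 <xi_1, xi_1'> - 1/2 <xi_2, xi_2'>
    return 0.5*ip(X[0], Y[0]) - 0.5*ip(X[1], Y[1])

gens = T_low + T_up
eta = np.array([[pairing(gens[i], gens[j]) for j in range(6)] for i in range(6)])
print(np.round(eta, 12))                           # off-diagonal identity blocks: both sets are isotropic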
The structure constants F_, defined by
[T_, T_] = -F_^ T_ = -F_ η^ T_ (-)^c
are given by
F^AB_C = f^AB_C and
F_ABC = f_ABC .
A convenient coset representative is
M ∋ m = ( g, e ) , g ∈ G ,
where e is the identity element in G. With this convention, it is straightforward to compute all the ingredients for (<ref>), namely
M_^ = m T_ m^-1T^ = 1/2[ D_A^B + δ_A^B (D_AB - κ_AB) (-)^b; D^AB - κ^AB D^A_B (-)^b + δ^A_B ] ,
D_A^B = g t_A g^-1t^B,
V^A = T^A m m^-1 = 1/2t^A g g^-1 = 1/2 v^A ,
= -1/24 g g^-1, [ g g^-1, g g^-1]
= -1/24 v^A ∧ v^B ∧ v^C f_CBA
Above we employ the right-invariant vector field v^A on G.
To write down the resulting generalised frame field in a simple form, we also introduce
the left-invariant vector fields on G,
e^A = t^Ag^-1 g = D^A_B v^B (-)^b = v^B (D^-1)_B^A
and the respective inverses v_A^M and e_A^M with the defining properties
v_A ⌟ v^B = e_A ⌟ e^B = δ_A^B ,
e_A = D_A^B v_B ,
to eventually obtain
_^ =
[ e_A^N + v_A^N 1/4 (e_A N - v_AN) (-)^n; e^A N - v^A N 1/4 (e^A_N + v^A_N) (-)^n ]×[ δ_N^M -_NM (-)^m; 0 δ^N_M ]
where we use the shorthand e^A_M = e_M^A (-)^ma and v^A_M = v_M^A (-)^ma,
with A indices raised and lowered as needed with the Cartan metric.
We can perform the same calculation for a different coset representative,
m' = ( g', g'^-1 ) , g' ∈ G
which is related to (<ref>) by an H transformation,
m = h m' = (g, e) = (h g', h g'^-1) for g = g'^2. Explicitly, we find
M_^ = 1/2[ (D'+D'^-1)_A^B (D' - D'^-1)_A B (-)^b; (D'- D'^-1)^A B (D' + D'^-1)^A_B (-)^b ] ,
D'_A^B = g' t_A g'^-1t^B,
V^A = 1/2 g' g'^-1 + g'^-1 g't^A
= 1/2 (v'^A + e'^A) ,
' =
- 1/6 g' g'^-1 g' g'^-1 g' g'^-1
+ 1/4g'^-1 g' g' g'^-1
+ 1/4 g' g'^-1 g'^-1 g' = -1/24 (v'+e')^A (v'+e')^B (v'+e')^C f_CBA
and the generalised frame field arises by plugging these quantities into (<ref>). The resulting frame is related by a diffeomorphism (and B-field gauge transformation) induced by g = g'^2 to the frame in (<ref>), using
v^A = (v'+e')^B D'_B^A ,
e^A = (v'+e')^B (D'^-1)_B^A ,
This can equivalently be written
v^A t_A = (g'^2) g'^-2 = g' ( g' g'^-1 + g'^-1 g') g'^-1 ,
e^A t_A = g'^-2 (g'^2) = g'^-1 ( g' g'^-1 + g'^-1 g') g' .
Explicitly, we find the expression
_^ =
[ e_A^N + v_A^N 1/4 (e_A N - v_AN) (-)^n; e^A N - v^A N 1/4 (e^A_N + v^A_N) (-)^n ]×[ δ_N^M -'_NM (-)^m; 0 δ^N_M ]
where now we interpret v_M^A and e_M^A as the one-forms that solve (<ref>).
Naturally this is the same expression as (<ref>), merely interpreted differently, in a different coordinate system. Note that we still have
' = -1/24 v^A v^B v^C f_CBA = -1/24 e^A e^B e^C f_CBA
Though rewriting it in this way may seem to needlessly complicate matters, it will actually make it easy to see how the generalised frame on G^ℂ, which we construct next, can be related to G× G by analytic continuation. The key feature of the coset representative (<ref>) is that it remains in the same class under the involution σ that exchanges the left and right factors: that is, m' just goes to its inverse. This same involution flips the sign of T_A,
which negates the first row of _^. This can be achieved equivalently by exchanging g' with its inverse. This trades v'^A ⇆ -e'^A and v^A ⇆ -e^A, and flips the sign of '. On the actual matrix elements (keeping in mind that x' flips sign), we find
v_M^A ⇆ e_M^A. This involution effectively takes
_A^(x') _→ - _A^(x') _ , ^A^(x') _→ +^A^(x') _
consistent with the relations between T_A in the two cases, provided we transform
_→ (- _M, ^M).
That is, we flip the sign of x' but not of the dual coordinate. This is sensible, since the dual coordinate parametrizes the diagonal subgroup, which is quotiented out by the coset and undergoes no change.
§.§ Example: D = G^ℂ
Another possibility is to identify D with the complexification G^ℂ. While the pairing for G× G is very simple to define, here we have to work a bit harder. First, let us introduce an involution σ, which is an isomorphism of the complexified Lie algebra Lie(G^ℂ). It has the properties
σ^2 = 1 , σ Xσ Y = XY^* , and σ [ X, Y ] = [σ X, σ Y]
with X, Y ∈𝔤^ℂ.
In this case a natural choice for the pairing is
XY = - i/2( XY - σ Xσ Y) , XY^* = XY
where X and Y are elements of the complexified Lie algebra 𝔤^ℂ. Here, we in particular make use of the Cartan involution σ with the properties
σ^2 = 1 and σ [ X, Y ] = [σ X, σ Y] .
It specifies how the real Lie algebra 𝔤 is embedded into 𝔤^ℂ by identifying the former's generators t_A with the +1 eigenspace of σ, i.e.
σ t_A = t_A. We further assume that σ is given by σ X = - S^-1 X^† S where S denotes an optional similarity transformation (for compact G, we can set S=1). This implies that the structure coefficients are (graded) real, meaning (f_AB^C)^* = f_AB^C (-)^ab. The same holds for the Killing metric (κ_A B)^* = κ_A B (-)^ab.
For the generators of D, we are going to explore two distinct cases.
The first is obvious:
T_A = i t_A , T^A = t^A ,
with non-vanishing components of the generalised flux F_ given by
F^AB_C = f^AB_C , F_ABC = - f_ABC .
For the coset representative, we take a Hermitian element of G^ℂ, so that
σ m = m^-1. Effectively, we can think of m = exp(i x^A t_A).
The building blocks of the generalized vielbein are then
M_^ = 1/2[ (D + D^-1)_A^B i (D - D^-1)_AB (-)^b; -i (D - D^-1)^AB (D + D^-1)^A_B (-)^b ] ,
D_A^B = m t_A m^-1t^B ,
V^A = 1/2 i m m^-1 + m^-1 mt^A ,
=
- 1/6 i m m^-1 m m^-1 m m^-1
+ 1/4 im^-1 m m m^-1
+ 1/4 i m m^-1 m^-1 m .
We introduce the one-form e'^A and its complex conjugate e̅'^A,
m^-1 m = i e'^A t_A , m m^-1 = i e̅'^A t_A .
The primes are for later convenience as these will be related to e' and v'
in the previous section. For these we recover
V^A = 1/2 (e'^A + e̅'^A) , = 1/24 (e'+e̅')^A (e'+e̅')^B (e'+e̅')^C f_CBA .
Now the full generalized vielbein can be written
_^ =
[ e_A^N + e̅_A^N i/4 (e_A N - e̅_AN) (-)^n; -i (e^A N - e̅^A N) 1/4 (e^A_N + e̅^A_N) (-)^n ]×[ δ_N^M -_NM (-)^m; 0 δ^N_M ]
where we use
e^A = (e'^B + e̅'^B) D_B^A , e̅^A = (e'^B + e̅'^B) (D^-1)_B^A ,
or equivalently,
i e^A t_A = m^2 m^-2 = m ( m m^-1 + m^-1 m) m^-1 ,
i e̅^A t_A = m^-2 m^2 = m^-1 ( m m^-1 + m^-1 m) m .
This case and the one for G× G with coset representative (<ref>) are related by an analytic continuation. There are several ways of seeing this. At the level of the building blocks
(<ref>) – (<ref>) and the algebra, we can see it by continuing
T_A → i T_A. To maintain η_, we must substitute
··→ - i ··, too. Consequentially, we obtain
M_A^B → M_A^B ,
M_A B→ i M_A B ,
M^A B→ -i M^A B ,
M^A_B → M^A_B ,
while for the two remaining constituents of the generalised frame field we find
V^A → -i V^A and → -i .
This is somewhat formal, and we can make it more concrete by observing that both coset representatives m are inverted by their respective involutions, and we use this involution to track how factors of i are inserted. Here, m = exp(i x^A t_A)
and for (<ref>) we have g' = exp(x'^A t_A). We want to analytically continue by taking x'= i x. By comparing explicit formulae, we see that D'(x') = D(x) and so
e_M^A(x') and v_M^A(x') become, respectively, e_M^A(x) and e̅_M^A(x).[The forms pick up factors of i because x' = i x.]
The B fields are related as '_MN(x') = -i _MN(x). Putting this together we see that
the two generalized vielbeins _^ turn out to be related by
'_A^(x') '_ = -i _A^(x) _ , '^A^(x') '_ = ^A^(x) _
consistent with the relations between T_A in the two cases, provided we identify
'_ = (-i _M, ^M).
That is, on the doubled space, we transform x'= i x but leave the dual coordinate unchanged. This makes sense on the coset since the dual coordinate describes a copy of G itself in both cases (being the same isotropic subgroup H), and undergoes no analytic continuation.
There is another possibility that will be of interest to us,[The decomposition (<ref>) is actually a Drinfeld double, and one could exchange the roles of T_A and T^A. The result is essentially equivalent to taking (<ref>), up to a similarity transformation and coordinate transformation.]
T_A = t_A , T^A = (R^AB + i κ^AB) t_B
for a matrix R^AB obeying certain properties. Requiring T_T_ = η_ implies that R^AB is graded real and antisymmetric. Requiring that T^A generate a maximally isotropic subgroup implies
[R X, R Y] - R( [R X, Y] + [X, R Y] ) = [X, Y] ∀ X, Y ∈Lie(G) ,
where we employ operator notation for R, i.e. R ·ξ = ξ^A R_A^B t_B. From this equation, we learn that R must solve the modified classical Yang-Baxter equation (mCYBE). For the coset representative m=g, which is now fixed by the involution σ,
we again compute all ingredients required for the generalised frame field,
M_^ =
[ D_A^B 0; R^AC D_C^B - D^A_C R^CB (-)^c D^A_B (-)^b ] , D_A^B = g t_A g^-1t^B,
V^A = g g^-1t^A = v^A , B = 0 .
We can streamline the result further by defining
e^A = g^-1 gt^A = v^B (D^-1)_B^A ,
and the corresponding dual vector fields v_A^M and e_A^M (see (<ref>)). With them, we eventually find
_^ = [ e_A^M 0; Π^AB e_B^M e^A_M (-)^m ] , where Π^AB = R^AB - (R_g)^A B
where
(R_g)^AB := g t^A g^-1R ( g t^B g^-1 )
= D^A_C R^CD D^B_D (-)^c + bd .
It is interesting to note that Π^MN = e_A^M Π^AB e_B^N (-)^am+a defines a Poisson bracket {f, g} = Π^MN∂_N f ∂_M g which turns G into a Poisson-Lie group. Moreover, we can easily extract the generalised fluxes
F_AB^C = f_AB^C , and
F^AB_C = 2 R^[A D f_D^B]_C ,
consistent with the structure constants of the generators (<ref>).
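As a concrete illustration, the following short numerical sketch (Python with numpy) checks the mCYBE exactly as written above on the toy algebra su(2), using the standard rotation R-matrix, and verifies that dressing by an adjoint action produces another solution, which is the property underlying the dressed operator R_g. The algebra, the R-matrix, the group element, and the tolerances are illustrative choices and are not the psu(2,2|4) data of the later sections.

import numpy as np

# Toy check of the mCYBE, [RX, RY] - R([RX, Y] + [X, RY]) = [X, Y], on su(2).
# The algebra, R-matrix, and dressing convention are illustrative assumptions.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [-0.5j * s for s in sigma]                      # [T_A, T_B] = eps_{ABC} T_C

def bracket(X, Y):
    return X @ Y - Y @ X

def to_algebra(c):                                  # coefficients c^A -> matrix c^A T_A
    return sum(ci * t for ci, t in zip(c, T))

def to_coeffs(X):                                   # matrix -> coefficients (orthogonal basis)
    return np.array([np.trace(X @ t).real / np.trace(t @ t).real for t in T])

def solves_mcybe(Rmat, samples=20, tol=1e-10):
    rng = np.random.default_rng(0)
    def R(X):
        return to_algebra(Rmat @ to_coeffs(X))
    for _ in range(samples):
        X, Y = to_algebra(rng.normal(size=3)), to_algebra(rng.normal(size=3))
        lhs = bracket(R(X), R(Y)) - R(bracket(R(X), Y) + bracket(X, R(Y)))
        if np.linalg.norm(lhs - bracket(X, Y)) > tol:
            return False
    return True

# R(T_1) = T_2, R(T_2) = -T_1, R(T_3) = 0, as a matrix acting on coefficients.
Rmat = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
print("R solves the mCYBE:", solves_mcybe(Rmat))

# Dressing by an adjoint action (an inner automorphism) preserves the mCYBE.
g = np.array([[np.cos(0.7), np.sin(0.7)], [-np.sin(0.7), np.cos(0.7)]], dtype=complex)
Ad = np.array([to_coeffs(g @ t @ np.linalg.inv(g)) for t in T]).T
Rg = np.linalg.inv(Ad) @ Rmat @ Ad
print("R_g solves the mCYBE:", solves_mcybe(Rg))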
It is useful to make a similarity transformation on the generalized vielbein and the generators in this case to give
T_A = t_A , T^A = i t^A
and
_^ = [ e_A^M 0; -(R_g)^A B e_B^M e^A_M (-)^m ]
with generalized fluxes
F_AB^C = f_AB^C , F^ABC = - f^ABC .
Up to the interchange of T_A ⇆ T^A, the generalized vielbein (<ref>)
and the one constructed from (<ref>)-(<ref>) are Poisson-Lie T-dual to each other.
§.§ The role of the dilaton
We have not yet discussed the role of the dilaton on a generalized parallelizable space. Let us address this briefly now. In terms of the generalized dilaton Φ, the dilaton flux is given by (<ref>), which for the supervielbein (<ref>) becomes
_ = M_^B V_B^M (
_M logΦ
- _M log V + A_M C F^C D_D (-)^c
)
- M_ B F^B D_D (-)^b .
In the case of generalized double field theory, we replace
_logΦ→_
and relax the section condition on the free index of _. Solving
for _ = (_M, ^M), we find
^M = (M^-1)^A _ V_A^M + F^A B_B V_A^M ,
_M - _M N^N = V_M^A (M^-1)_A^_
+ _M log V - A_M C F^C D_D (-)^c .
The dilaton is not completely arbitrary since we still require _ to obey the usual Bianchi identities. In the context of a generalized parallelizable space, when the fluxes _ are taken to be constants F_, the most natural choice is to take the dilatonic fluxes to be constants as well, _ = F_. The Bianchi identities then imply F_^ F_ = 0, and the conditions (<ref>) simplify to
^M = (F^A + F^A B_B) V_A^M ,
_M - _M N^N = V_M^A F_A
+ _M log V - A_M C F^C D_D (-)^c .
These can be interpreted as solutions for the vector _. In order to admit a dilaton solution consistent with the section condition, one must restrict F^A = -F^A B_B.
As a special case, we can consider both G × G and G^ℂ.
For G × G using the coset representative (<ref>), we find
^M = 2 F^A v_A^M ,
_M - _MN^N = 1/2 v_M^A F_A + _M log v
with F^A and F_A obeying
f_AB^C F_C = F^C f_C A^B = 0 .
A dilaton solution requires F^A = 0.
For G^ℂ using the coset representative g in the basis (<ref>), we find
^M = (F^A + R^B C F_CB^A) v_A^M ,
_M - _MN^N = v_M^A F_A + _M log v
with F_A and F^A obeying
f_AB^C F_C = (F^C - R^CD F_D) f_C A^B = 0 .
If we make the similarity transformation to the simpler basis (<ref>) with
T'^A = i κ^AB t_B instead, one replaces F^A with F'^A + R^AB F_B in the above formulae. To admit a dilaton solution, we must have the following condition
F^A = F'^A + R^AB F_B = - R^B C F_CB^A .
§ GENERALIZED SUPERCOSETS
§.§ Review of conventional supercosets
To motivate the construction of generalized supercosets, we first recall how conventional supercosets are constructed. Let G be a group and F be a subgroup. Denote the generators of G by t_, the generators of F by t_, and the remaining generators by t_A.
The structure constants are normalized so that [t_, t_] = - f_^ t_.
We decompose a generic group element g as g = m(z) f(y) with coset representative m. The local coordinates are chosen as z^ = (z^M, y^I).
The Maurer-Cartan form z^E_^ t_ = g^-1 g decomposes as
E_^ =
[ δ_^ 0; 0 v_^ ][ E_^ Ω_^; 0 δ_^ ]
( f^-1)_^
with
y^v_^ t_ = f f^-1 .
This decomposition shows how the full group can be reconstructed from the coset. In particular, it has three important properties:
* All quantities relevant for the coset are contained in the middle matrix in (<ref>). These depend only on the physical coordinates on the coset.
* This matrix is in upper triangular form.
* It is dressed by an adjoint f∈ F action on the right and right-invariant Maurer-Cartan form of the subgroup F on the left. These depend only on the subgroup coordinates.
With the dual vector fields corresponding to (<ref>),
E_^ = ( f)_^[ E_^ - E_^Ω_^; 0 δ_^ ][ δ_^ 0; 0 v_^ ] ,
one can compute the anholonomy coefficients
F_^ :=
-2 E_[^∂_E_]^E_^ .
With the y coordinate dependence isolated in the first and third factors, one can show that the anholonomy coefficients with a lower index valued in F are completely fixed in terms of the structure constants. Up to an adjoint action of f, which we discard in the definition
of F_^, we find
F_^ = f_^ , F_^C = 0 ,
F_B^ = f_B^ , F_B^C = f_B^C ,
while the remaining two correspond
to the covariant torsion and curvature tensors
T_AB^C = F_A B^C = F_A B^C
- 2 Ω_[A|^ f_ |B]^C ,
R_AB^ = F_A B^ = 2 D_[AΩ_B]^
+ F_A B^C Ω_C^
+ Ω_A^Ω_B^ f_^ (-)^B
- 2 Ω_[A^ f_ B]^
where F_A B^C = - 2 E_[A^M _M E_B]^N E_N^C.
The results (<ref>) and the covariance of (<ref>) follow from
the general fiber bundle structure of (<ref>) with local symmetry group F acting on the frame bundle. When E_M^A and Ω_M^ are determined from a larger group G, the covariant torsion and curvature tensors are fixed as
T_AB^C = f_AB^C , R_AB^ = f_AB^ .
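As a sanity check on the structures just reviewed, it is easy to verify numerically in a toy example that the Maurer-Cartan frame of a group manifold has anholonomy fixed by the structure constants, which is the statement that gets refined to (<ref>) on the coset. The sketch below (Python with numpy) does this for SU(2) by finite differences; the coordinates, step sizes, and overall sign conventions are illustrative and are not tuned to the index conventions used above.

import numpy as np

# Finite-difference check, for the toy group SU(2), that the Maurer-Cartan
# frame e = g^{-1} dg satisfies the component identity
# d_M e_N^A - d_N e_M^A + f_{BC}^A e_M^B e_N^C = 0.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [-0.5j * s for s in sigma]                      # [T_B, T_C] = eps_{BCA} T_A

eps = np.zeros((3, 3, 3))                           # structure constants f_{BC}^A
for B, C, A in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[B, C, A], eps[C, B, A] = 1.0, -1.0

def g_of(x):                                        # g(x) = exp(x^A T_A) in closed form
    th = np.linalg.norm(x)
    n = x / th if th > 1e-14 else np.zeros(3)
    return np.cos(th / 2) * np.eye(2) - 1j * np.sin(th / 2) * sum(n[a] * sigma[a] for a in range(3))

def coeffs(X):                                      # expand X = c^A T_A
    return np.array([np.trace(X @ t).real / np.trace(t @ t).real for t in T])

def vielbein(x, h=1e-6):                            # rows e_M^A of g^{-1} d_M g
    ginv = np.linalg.inv(g_of(x))
    return np.array([coeffs(ginv @ (g_of(x + h * dM) - g_of(x - h * dM)) / (2 * h))
                     for dM in np.eye(3)])

x0, h = np.array([0.3, -0.7, 0.5]), 1e-3
e0, err = vielbein(x0), 0.0
for M, dM in enumerate(np.eye(3)):
    for N, dN in enumerate(np.eye(3)):
        de = (vielbein(x0 + h * dM)[N] - vielbein(x0 - h * dM)[N]) / (2 * h) \
           - (vielbein(x0 + h * dN)[M] - vielbein(x0 - h * dN)[M]) / (2 * h)
        err = max(err, np.abs(de + np.einsum('bca,b,c->a', eps, e0[M], e0[N])).max())
print("Maurer-Cartan identity violated by at most", err)   # small, set by finite differences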
§.§ Generalized supercoset construction
Let us apply similar considerations to the case of a double Lie group D. As before, we presume a maximally isotropic subgroup H, consistent with the assumptions made in section <ref>. We denote the generators of D as T_ = (T_, T^) with T^ the generators of H. In addition, we presume that D possesses another isotropic subgroup F, with generators T_, with respect to which we will construct a generalized coset H \D / F.
There is a subtlety here, which we should address at this point. We make no assumptions about how F and H are related. This means we will need two distinct bases for the generators of D: the original basis T_ = (T_, T^) where T^ are the generators of H, and a new basis,
T_' = (T_, T_, T^, T^)
where T_ are the generators of F. For this new basis, we take the Killing metric to be
η_' ' = [ 0 0 δ_^; 0 η_ 0; δ^_ (-)^b 0 0 ]
with η_ an OSp metric on the coset. The change of basis matrix between T_' and T_ may in principle be quite complicated.
To avoid a proliferation of indices, we won't explicitly exhibit the prime on , but it should be understood to be in the appropriate basis.
On the generalized frame field in (<ref>), we aim to impose a similar decomposition inspired by (<ref>). The role of the group G and subgroup H will be played by the left coset H \D and the subgroup F respectively:
_^ =
( f)_^[ δ_^ 0 0; - Ω_^ _^ 0; ρ^ - 1/2Ω^Ω_^ Ω^ δ^_ ][ v_^ 0 0; 0 δ_^ 0; 0 0 v^_ (-)^i ] .
By preserving the OSp pairing on the generalised tangent space and splitting it into coset and subgroup contributions, we obtain
η_ = [ 0 0 δ_^; 0 η_ 0; δ^_ (-)^j 0 0 ] .
With the tangent space metric (<ref>), this ensures
_^ is an OSp element. In fact, it decomposes into a product of three OSp matrices. The first and the last are naturally comparable to the factors in (<ref>). For the matrix in the middle, we have imposed a lower triangular form with the diagonal inspired by the geometric coset. Taking _^ to itself be an OSp element, the remaining free parameters are Ω_^, with Ω_^ = _^Ω_^ and
Ω^ = Ω^ (-)^
and the graded antisymmetric matrix ρ^. The former obviously plays a role similar to that of the connection Ω_^ in the geometric coset, while the latter is a new ingredient required only in generalised geometry. Remarkably, ρ^ also appears in the work by Poláček and Siegel to construct a natural curvature with manifest T-duality <cit.>. There, the subgroup F is the double Lorentz group and the contracted version ρ^ F_ AB F_ CD = r_ABCD is used. Hence, we call ρ^ the Poláček-Siegel (PS) field.
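The counting of free parameters can be made very explicit in a stripped-down bosonic toy model, in which gradings and the curved coset supervielbein are suppressed and the coset block is taken to be a flat orthogonal matrix. The sketch below (Python with numpy) then confirms that, with an arbitrary Ω and an antisymmetric ρ, the lower-triangular middle factor preserves a pairing of the form (<ref>); the block sizes, the flat metric, and the exact placement of κ in the compensating entries are assumptions of the toy model.

import numpy as np

# Bosonic toy version of the Polacek-Siegel middle factor: with arbitrary
# Omega, antisymmetric rho, and an orthogonal coset block V, the
# lower-triangular matrix below preserves the pairing eta.  Gradings,
# supertransposes, and the curved coset vielbein are suppressed here.
rng = np.random.default_rng(1)
fdim, n = 2, 4                                     # dim F and coset dimension (toy choice)
kappa = np.eye(n)                                  # flat coset metric (toy choice)

eta = np.block([[np.zeros((fdim, fdim)), np.zeros((fdim, n)), np.eye(fdim)],
                [np.zeros((n, fdim)),    kappa,               np.zeros((n, fdim))],
                [np.eye(fdim),           np.zeros((fdim, n)), np.zeros((fdim, fdim))]])

Omega = rng.normal(size=(n, fdim))                 # analogue of Omega_a^i
rho = rng.normal(size=(fdim, fdim))
rho = rho - rho.T                                  # analogue of the antisymmetric PS field
V, _ = np.linalg.qr(rng.normal(size=(n, n)))       # coset block with V^T kappa V = kappa

M = np.block([
    [np.eye(fdim),                        np.zeros((fdim, n)), np.zeros((fdim, fdim))],
    [-Omega,                              V,                   np.zeros((n, fdim))],
    [rho - 0.5 * Omega.T @ kappa @ Omega, Omega.T @ kappa @ V, np.eye(fdim)]])

print("middle factor preserves eta:", np.allclose(M.T @ eta @ M, eta))   # True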
From now on, we will refer to _^ as the megavielbein and the enlarged superspace on which it acts the megaspace, when we need to distinguish it from the coset supervielbein _^. Similarly, we use
D_ = _^_ ,
D_ = _^_
to denote their respective flat derivatives. From (<ref>) and recalling that ^ vanishes, the flat derivative on the megaspace becomes
D_ = ( f)_^[ v_^_; D_ - Ω_^v_^_; (ρ^ - 1/2Ω^Ω_^) v_^_
+ Ω^ D_ ] .
Just as in the conventional supercoset, the middle matrix in (<ref>) depends only on the coset coordinates. With the y coordinate dependence isolated in the first and third factors, one can show that up to an overall adjoint action of f, which we discard,
the generalized fluxes with a lower index valued in F are completely fixed as
_ = 0 , _ = 0 , _^ = F_^ ,
_ = F_ _^ = F_^
_^ = F_^ .
The remaining fluxes correspond to generalized curvature tensors.
The torsion tensor is
_ = _
= - 3 _[^__^_]
- 3 Ω_[|^ F_| ]
= _
- 3 Ω_[|^ F_| ]
where is the generalized flux of the coset supervielbein. The generalized Ω
curvature is
_^ = _^ =
2 D_[Ω_]^
+ _^Ω_^
+ Ω_^Ω_^ F_^ (-)^
- 2 Ω_[^ F_]^
- F_ (ρ^ + 12Ω^Ω_^)
(-)^ .
Finally, there are two additional curvatures that are not present in the standard supercoset. These are the covariantized gradient of the Poláček-Siegel field
_^ = _^ = - D_ρ^ + ⋯
and an additional curvature
^ = ^ = 3 ρ^[| F_^|]
- 3 Ω^[ | D_ρ^|]
+ ⋯
where we have shown only the leading terms. The precise forms of these curvatures
will not be relevant for us. The curious reader can find them in <cit.>.
These generalized torsion and curvature tensors hold for generic , Ω, and ρ.
When they are determined from a larger doubled Lie group, they are constrained as
_ = F_ , _^ = F_^ , _^ = F_^ , ^ = F^ .
Now let us compute the quantities (<ref>)-(<ref>) required to construct the generalised frame field for the coset representative m = n f, where f is an element of an isotropic subgroup F generated by T_. We find the adjoint action
M_^ := ( m)_^ =
M_^M_^ , M_^ = ( f)_^ , M_^ = ( n)_^ ,
the Maurer-Cartan form components
V^ = n n^-1 + n f f^-1 n^-1T^
A_ = n n^-1 + n f f^-1 n^-1T_ ,
and the B-field
= 1/2(
V^∧ A_
- f f^-1n^-1 n)
+ _ ,
_ = -1/12 n n^-1[ n n^-1, n n^-1 ] .
Above we have split the original _ from (<ref>) into an exact term
and a term _ defined purely on the coset.
A straightforward calculation gives rise to
v_⌟ =
n T_ n^-1V^ T_
= M_ V^ (-)^ ,
v_⌟ V_⌟ = -M_
where v_ = v_^_ denotes the vector field dual to the one-form y^v_^ T_ = f f^-1. From this equation, we immediately obtain
v_⌟( -V_⌟ T^ (-)^ + V^ T_) = n T_ n^-1 ,
which proves that
_ = M_^[ 0; 0; v^_ (-)^i ]
holds. This verifies the form of the last column in the middle matrix of (<ref>). But because _^ is an OSp element, the first row also has the desired form. We can finally read off
Ω_^ = - M_^ S_^ ,
ρ^ = M^[| S_^|] ,
_^ = M_^[ V_^ - V_^_
- S_^ M_ V^_ (-)^ +; 0 V^_ (-)^ ] ,
where we introduced for convenience the quantity
S_^ := V_^v_^ , M_^ S_^ = δ_^ .
It is a somewhat involved calculation to show that both
S_^ and V_^M are y-independent, while
_NM and V_M^ are y-independent by construction.
§.§ The dilaton on the generalized supercoset
Now we will equip the Poláček-Siegel megaspace with a dilaton Φ. Its generalized flux tensor is
_ = _^_logΦ+ ^_ .
In analogy to the decomposition of the megavielbein (<ref>), we expand
logΦ= logΦ + logẽ
where ẽ^ T_ = f^-1 f is the left-invariant vector field on F and Φ is chosen to be independent of y. The extracted term is responsible for generating the density behavior of Φ under y diffeomorphisms. One can now show that
_ = (f)_^[ 0; _; ^ ]
+ F_^ (-)^
where
_ = _^_logΦ + ^_ - Ω^ F_ ,
^ = Ω^_logΦ + ^Ω_^
- Ω^ F_^
+ ρ^ F_^
are the dilatonic torsion and curvature respectively.
In the case of generalized DFT, one should replace
_logΦ→_
in (<ref>). A natural replacement of the constraint (<ref>) is
_^(_ - _logẽ) =
( f)_^[ 0; _^_; Ω^_ + ^ ]
where _ and ^ transform under coset diffeomorphisms and
F-gauge transformations as
δ_ = ξ^__
+ _^ξ_ ,
δ^ = ξ^_^
- ^_λ^ - λ^^ F_^ .
Now the dilatonic torsion and curvature are
_ = ^ + _^_ + ^_ - Ω^ F_ ,
^ = ^ + Ω^_ + ^Ω_^
- Ω^ F_^
+ ρ^ F_^ .
The dilaton solution corresponds to _ = _logΦ and ^ = 0
where Φ is gauge invariant under F.
§.§ Example: D = G × G
The examples we will consider are based on the ones presented in the previous section, namely G × G and G^ℂ. We employ the same real semisimple Lie group G as before, but additionally, we presume the existence of a subgroup F ⊂ G. The most relevant cases are when the coset G/F is a symmetric space, but we will remain rather general here.
When embedded into the double Lie group D = G × G, the subgroup F must be isotropic. The pairing (<ref>) makes this constraint very restrictive and only allows for diagonal subgroups. We write T_ = ( t_, t_ ) for the generators of F. In other words, F here is a subgroup of H itself. The remaining generators are assigned by requiring that the pairing η_' ' be of the form given in (<ref>), and we get
T_ = ( t_ , t_ ) ,
T_ = ( t_ , t_ ) ,
T^ = ( t^ , -t^ ) ,
T^ = ( t^ , -t^ ) .
There is a subtle point here: in defining the left coset, H \D, we arranged the generators as
T_ = (t_, -t_) and T^ = (t^, t^), with the latter defining H.
In defining the right coset now, we have swapped the roles of lower and upper indices.[We additionally could swap the roles of T_A and T^A (raising/lowering the indices respectively) to restore the original positioning of the coset indices, but this only works if κ_ vanishes, since we need T_T_ = 0.]
Now we can build the components of the generalized vielbein.
As the coset representative, we take
m = ( n f, f ) = (n, e) × (f,f)
with f∈ F and n in the dressing coset G_diag\ (G× G) / F.
Because F ⊂ H, some care must be taken in the choice of n, because this coset
representative may be rewritten as
m = (f,f) × (f^-1 n f, e) .
The factor on the left is an element of H, so its only effect is to add an exact term to the B-field. For this to be a good coset representative, we must be careful to choose n so that f^-1 n f is a sufficiently generic element of G – namely, that it generates invertible left-invariant and right-invariant vielbeins. This is not always possible — e.g. if F contains an abelian factor that commutes with all elements of G.[It is even problematic for symmetric spaces if we choose n = exp (x^ t_), since then the effect of f is merely to rotate the coordinates x^. Then the left and right-invariant vielbeins vanish on the subgroup F since there is no y component.]
In fact, the coset representative (<ref>) is nothing but the coset representative used in (<ref>) for the case g = f^-1 n f. This means that the generalized vielbein we will construct must actually be equivalent to the generalized vielbein there (<ref>), up to an exact shift in the B-field, and an overall OSp transformation acting on the left to swap the roles and index positions of T_ and T^.
We can begin to see this already when we compute V_^:
V^ t_ = 1/2(
n n^-1 + n f f^-1 n^-1 - f f^-1) = 1/2 f g g^-1 f^-1 .
It is nothing but the adjoint action of f on the right invariant vector field of g.
More explicitly, we take
V_^ =
[ δ_^ 0; 0 ṽ_^ ]×[ V_^ V_^; 1/2 D_^ 1/2 (D-1)_^ ]
where we use D_^ t_ := n t_ n^-1 and ṽ = f f^-1.
Its inverse we denote
V_^ =
[ V_^ S_^; V_^ S_^ ]×[ δ_^ 0; 0 ṽ_^ ]
where S_^ was defined in (<ref>).
Importantly, we will need the two conditions
1/2 (D-1)_^ S_^ = δ_^ ,
1/2 (D-1)_^ V_^ = 0 D_^ V_^ = V_^ .
We need to compute M_'^. Here one needs to keep in mind that the ' index is in the F-adapted basis, whereas the index is in the H-adapted basis. This leads to
M_^ = 1/2 (D - 1)_^ M_
= 1/2 (D κ+ κ)_ (-)^ ,
M_^ = 1/2 (D - 1)_^ M_
= 1/2 (D κ+ κ)_ (-)^ ,
M^
= 1/2 (κD + κ)^
M^_
= 1/2 (κD κ- 1)^_(-)^ ,
M^
= 1/2 (κD + κ)^
M^_
= 1/2 (κD κ- 1)^_(-)^ .
The vector pieces of the generalized vielbein are
_^ = M_^ V_^
= 1/2 (D-1)_^ V_^ ,
^^ = M^ V_^
= 1/2 (κ D +κ)^ V_^
Because 1/2 (D-1)_^ V_^ = 0
we can rewrite the first term as
κ^_^ =
1/2 (κ D-κ)^ V_^ .
At this point, we denote the coset part of the inverse Killing metric κ^,
which we presume to be invertible with graded inverse η_,
κ^η_ = δ_^ (-)^
Note that η_ does not equal κ_ unless κ_ vanishes. Now on the coset, we introduce the vector fields
κ^ e_^ := 1/2 (κ D)^ V_^ , κ^ v_^ := 1/2 κ^ V_^ .
We presume these are invertible. Then we find
_^ = e_^ - v_^ ^ = κ^ (e_^ + v_^) .
At this point, we can exploit a fact more familiar from O(D,D) elements that can be extended to OSp elements when we have a metric κ^ with inverse η_. In general, we may write
_^ =
[ e_^ - v_^ 1/4η_ (e^_ + v^_) (-)^n+b; κ^ (e_^ + v_^) 1/4 (e_^ - v_^) (-)^n ]×[ δ_^ -_ (-)^m; 0 δ^_ ]
for some graded antisymmetric . We have already identified e_^
and v_^. In our case, the two vielbeins are the pure coset parts of the left and right invariant G × G vielbeins e_^ and v_^ for g = f^-1 n f, but dressed with an additional adjoint action of f.
Using the explicit form of the generalized vielbein, one can confirm it falls into the above form for
=
1/8((κ S)^ D_^
- 1/2 (κ S)^ (κ S)^ (Dκ)_
(-)^cb) v^η_∧ v^η_
+ _ ,
or equivalently,
=
1/8((κ D S)^ (D^-1)_^
- 1/2 (κ D S)^ (κ D S)^ (D^-1κ)_
(-)^cb) e^η_∧ e^η_
+ _ ,
In these expressions, the suppressed indices between κ and other objects run over both the coset and subgroup indices, i.e. (κ S)^ = κ^ S_^.
The pure WZW term on the coset is
_
= -1/24 n n^-1[ n n^-1, n n^-1] .
For reference we give the translation between n n^-1 and n^-1 n
and the 1-forms e^A and v^A introduced on the coset:
n n^-1 = v^(t^ - 1/2 (κ S)^ (D-1)_^ t_)
η_
n^-1 n = e^(t^ - 1/2 (κ D S)^ (1-D^-1)_^ t_)
η_ .
The two vielbeins are related by a graded version of a Lorentz transformation,
Λ_^
:= e_^ v_^ , η^Λ_^η_
= Λ^_ = (Λ^-1)_^ (-)^ba
where explicitly
Λ^A_B = (κ D κ)^_ - (κ D κ)^_((D κ - κ)^-1)^ (Dκ)_ .
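The mechanism behind this statement is easy to isolate: on the group itself the left- and right-invariant frames differ by the adjoint action, and the adjoint action automatically preserves the invariant metric. The toy check below (Python with numpy, SU(2), ungraded) illustrates exactly this; the coset formula above refines the statement but rests on the same fact, and the normalization of the invariant metric in the sketch is an arbitrary choice.

import numpy as np

# Toy illustration (SU(2), ungraded) that the matrix relating the left- and
# right-invariant frames, namely the adjoint action of the group element, is
# orthogonal with respect to the invariant metric, i.e. a Lorentz-type map.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [-0.5j * s for s in sigma]
kappa = np.array([[2 * np.trace(a @ b).real for b in T] for a in T])   # ad-invariant metric

def coeffs(X):
    return np.array([np.trace(X @ t).real / np.trace(t @ t).real for t in T])

rng = np.random.default_rng(2)
v = rng.normal(size=3)
th = np.linalg.norm(v)
n_hat = v / th
g = np.cos(th / 2) * np.eye(2) - 1j * np.sin(th / 2) * sum(ni * s for ni, s in zip(n_hat, sigma))

Lam = np.array([coeffs(g @ t @ np.linalg.inv(g)) for t in T]).T        # matrix of Ad_g
print("Ad_g preserves the invariant metric:", np.allclose(Lam.T @ kappa @ Lam, kappa))   # True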
The remainder of the megavielbein is characterized by Ω and ρ:
Ω_^ = -1/2 (D-1)_^ S_^ ,
Ω^ = -1/2 (κ D+κ )^ S_^ ,
ρ^ = 1/2 (κ D + κ)^[ | S_^|] .
It will actually be useful for us to consider a slightly different coset representative,
which will be relevant for analytic continuation:
m = (n', n'^-1) × (f,f)
The coset element (n', n'^-1) goes to its inverse under the involution σ
that exchanges left and right group factors. Thankfully, we do not however
need to perform any new computation. Similar to the generalized group manifold case,
this coset representative is related to the previous one merely by an H-action on
the left (which is just an exact shift in the B-field) and a coordinate transformation,
taking n = n'^2, exploiting the identification
(n', n'^-1) × (f,f) = (n'^-1, n'^-1) × (n'^2, e) × (f,f) .
Of course, it is related to the two G × G generalized group manifold cases as well.
With these facts in mind, and using what we have learned in the previous cases, we can simply describe the result here in a manner that will be useful for analytic continuation. Let g = f^-1 n f be a generic element of G, and similarly for g' = f^-1 n' f with n = n'^2 (so g = g'^2) . Define on the full group the modified left and right invariant forms
v^ t_ = f g g^-1 f^-1 = n n^-1 + n f f^-1 n^-1 - f f^-1 ,
e^ t_ = f g^-1 g f^-1 = n^-1 n + f f^-1 - n^-1 f f^-1 n .
In terms of n', these can be written
v^ t_ = n' ( n' n'^-1 + n'^-1 n'
+ n' f f^-1 n'^-1 - n'^-1 f f^-1 n' ) n'^-1 ,
e^ t_ = n'^-1( n' n'^-1 + n'^-1 n'
+ n' f f^-1 n'^-1 - n'^-1 f f^-1 n' ) n'
Then define two vielbeins on the coset by
κ^ e_^ := κ^e_^ ,
κ^ v_^ := κ^v_^ ,
and additional fields
S_^ := D'_^v_^ṽ_^
= (D'^-1)_^e_^ṽ_^ .
These equations imply that
n'^2 n'^-2 = v^(t^ - 1/2 (κ D'^-1 S)^ (D'^2-1)_^ t_)
η_
n'^-2 n'^2 = e^(t^ - 1/2 (κ D' S)^ (1-D'^-2)_^ t_)
η_ .
Then the generalized supervielbein on the large space is given by (<ref>).
The connection Ω and Poláček-Siegel field are
Ω_^ = -1/2 (D'-D'^-1)_^ S_^ ,
Ω^ = -1/2 (κ D'+κ D'^-1)^ S_^ ,
ρ^ = 1/2 (κ D' + κ D'^-1)^[ | S_^|] .
and is given by
=
1/8((κ D' S)^ (D'^-2)_^
- 1/2 (κ D' S)^ (κ D' S)^ (D'^2κ)_
(-)^cb) e^η_∧ e^η_
+ _
with
_ = -1/24 n'^2 n'^-2[ n'^2 n^-2, n'^2 n'^-2] .
§.§ Example: D = G^ℂ
Next, we take the complexified group D = G^ℂ discussed in section <ref>. The subgroup F ⊂ G is again an isotropic subgroup using the pairing
(<ref>). The basis (<ref>) already introduced for the G^ℂ case is perfectly suitable here: we merely split the generators up so that
T_ = t_ ,
T_ = t_ ,
T^ = (R^ + i κ^) t_ ,
T^ = (R^ + i κ^) t_ .
Again, we do not need to impose that κ_ vanishes, although this will certainly be the case of most interest. A natural coset representative lies in G itself,
m = n f = g ∈ G .
Introducing the usual left invariant vector fields suitable for G/F,
n^-1 n = e^ t_ + ω^ t_
we easily find
v_^ =
[ e_^ ω_^; 0 ṽ_^ ] D_^ ,
v_^ =
(D^-1)_^[ e_^ -ω_^ṽ_^; 0 ṽ_^ ] ,
where D_^ t_ = n t_ n^-1. It follows that
S_^ = (D^-1)_^ - (D^-1)_^ω_^.
Computing M_^, one finds
M_^ =
[ D_^ 0; (R D - D R)^ D^_ (-)^b ] .
This leads to a generalized vielbein on the coset of
_^ =
[ e_^ 0; Π^ e_^ e^_ (-)^m ] .
The connection Ω and Poláček-Siegel field are
Ω_^ = ω_^ , Ω^ = -Π^ + Π^ω_^ , ρ^ = Π^ - Π^[ |ω_^] .
The matrices Π^ appearing above are given by
Π^ = R^ - D^ R_^ (D^-1)_^ .
Note that Π^ resembles the matrix Π^ given in (<ref>), except we have restricted the group element used to construct D_^ from g to n. Of course, this is no accident: the megavielbein on the generalized coset is nothing but the generalized vielbein on the full space (up to a B-field gauge transformation). It is an instructive exercise to check these formulae emerge directly by comparing with the expression (<ref>) and extracting ( f)_^.
Just as on the generalized parallelizable space, we can make a similarity transformation to the basis
T_ = t_ ,
T_ = t_ ,
T^ = i κ^ t_ ,
T^ = i κ^ t_ .
This remains in Poláček-Siegel form, except now the various constituents of the megavielbein are given by
_^ =
[ e_^ 0; - (R_n)^ e_^ e^_ (-)^m ] ,
(R_n)_^ := D_^ R_^ (D^-1)_^ ,
Ω_^ = ω_^ , Ω^ = -(R_n)^ω_^ ,
ρ^ = -(R_n)^ + (R_n)^[ω_^] .
This case can also be easily compared with the corresponding megavielbein in (<ref>)
after extracting a factor of ( f)_^.
Another interesting case is to choose
T_ = t_ ,
T_ = t_ ,
T^ = i t^ ,
T^ = i t^
We use the same decomposition for m as given in (<ref>) with n being hermitian and f unitary. For the elements of M_'^ we find
M_^ = 1/2i (D - D^-1)_^ M_
= 1/2 (D κ+ D^-1 κ)_ (-)^ ,
M_^ = 1/2i (D - D^-1)_^ M_
= 1/2 (D κ+ D^-1 κ)_ (-)^ ,
M^
= 1/2 (κD + κD^-1)^
M^_
= -1/2i (κD - κD^-1)^_(-)^ ,
M^
= 1/2 (κD + κD^-1)^
M^_
= -1/2i (κD - κD^-1)^_(-)^ .
The computation is very nearly identical to the G × G coset.
We find for V^ and A^
V^ = 1/2i ( n n^-1 + n^-1 n)
+ 1/2iṽ^ (D - D^-1)_^ t_ ,
A^ = 1/2 ( n n^-1 - n^-1 n)
+ 1/2ṽ^ (D + D^-1)_^ t_ .
The vector pieces of the generalized vielbein are
κ^ _^ = κ^
M_^V_^
= -i/2 (κD- κD^-1)^ V_^
=: -i κ^ (e_^- e̅_^)
,
^^ = κ^ M^ V_^
= + 1/2 (κD +κD^-1)^ V_^
=: -i κ^ (e_^+ e̅_^)
where we again exploit the vanishing of (D - D^-1)_^ V_^ in the first line. These expressions define the doublet of coset supervielbeins e and e̅.
These alternatively can be understood as κ^e_^ where e_^ is the inverse of
i e^ t_ = n^-1( n n^-1 + n^-1 n
+ n f f^-1 n^-1 - n^-1 f f^-1 n ) n
and similarly for its complex conjugate,
i e̅^ t_ = n ( n n^-1 + n^-1 n
+ n f f^-1 n^-1 - n^-1 f f^-1 n ) n^-1
Inspired by G × G case, these can also be written
i e^ t_ = f m^-1 m f^-1 ,
i e̅^ t_ = f m m^-1 f^-1 , m = f^-1 n^2 f
The generalized supervielbein on the coset is
_^ =
[ -i (e_^ - e̅_^) 1/4η_ (e^_ + e̅^_) (-)^b+n; κ^ (e_^ + e̅_^) i/4 (e^_ - e̅^_) (-)^n ]×[ δ_^ -_ (-)^m; 0 δ^_ ]
where
=
-1/8(
i (κ D S)^ (D^-2)_^
- 12 (κ D S)^ (κ D S)^
(D^2 κ)_ (-)^bc)
e^η_∧
e^η_
+ _ ,
with
_ = -1/24 n^2 n^-2[ n^2 n^-2, n^2 n^-2] .
The Ω connection and Poláček-Siegel field are
Ω_^ = -1/2i (D-D^-1)_^ S_^ ,
Ω^ = -1/2 (κ D+κ D^-1)^ S_^ ,
ρ^ = 1/2 (κ D + κ D^-1)^[ | S_^|] .
§ GENERALIZED SUPERCOSETS FOR SUPERGRAVITY BACKGROUNDS
§.§ Supergravity backgrounds in double field theory
In order for the generalized supervielbein to describe a valid background of supersymmetric DFT, the generalized flux tensor must obey a certain set of constraints <cit.> (for earlier work, see <cit.> and <cit.>).
At dimension -1/2, all flux tensors vanish
_αβγ =
_αβ =
_α =
_ = 0
while at dimension 0,
_αβ = -i √(2) (γ_)_αβ , _ = -i √(2) (γ̅_)_ , _α = _α =
_αβ = _ = 0 .
We refer to these as κ-symmetric constraints, in analogy to their supergravity analogues <cit.>. In addition, one imposes conventional constraints at dimension 1/2
_αβ^β = 14_β (γ^)_α^β , _^ = 14_ (γ^)_^ , _α (γ^)^αβ =
_ (γ^)^ = 0 ,
which amount only to redefinitions of the physical dilatini and gravitini. A final conventional constraint at dimension 1 redefines the Ramond-Ramond bispinor,
(γ^)^αβ_β^ =
- (γ^)^_^α .
As argued in <cit.> (and in analogy with <cit.>), these constraints alone lead to a generalized double field theory (which is related to modified DFT <cit.>), the DFT analogue of generalized type II supergravity, where one does not presume a dilaton to exist, see section <ref>. We will return to the question of conventional supergravity (i.e. where a dilaton exists) in section <ref>.
Now we can pose the question whether the generalized vielbeins we have constructed in previous sections, namely for the double Lie groups G^ℂ and G × G, satisfy these constraints so that they describe supergravity backgrounds. If we presume that the group G should have 32 supercharges (to accommodate the full range of α and indices we seek), ten corresponding translation generators P_a, and a subgroup F corresponding to any Lorentz and/or R-symmetry groups, we are essentially restricting our attention to maximally supersymmetric type II backgrounds. These were analyzed long ago <cit.>, with only the AdS_5 ×S^5 background of IIB and its Penrose limit (an Hpp wave) <cit.> relevant to us here.[There is also the IIB^* background dS_5 × H^5 <cit.> and its Penrose limit <cit.>, but we won't consider these.]
The supergroup G of isometries for AdS_5 ×S^5 is PSU(2,2|4) (see e.g.
<cit.>). Only some of the details of this algebra are important to us, so we will treat it in rather general language. It consists of generators t_ = { t_a, t_α, t_, t_1}. The generators t_1 span a (bosonic) subgroup F = SO(4,1) ×SO(5). The generators t_A = {t_a, t_α, t_} comprise spatial translations and supersymmetries, and the supercoset G / F is a superspace whose bosonic body is AdS_5 ×S^5. The superalgebra admits a ℤ_4 grading under which t_1, t_α, t_a, and t_ carry charge 0, 1, 2, and 3. The non-vanishing (anti)commutators are
[t_1, t_β] = -f_1 β^γ t_γ , [t_1, t_] = -f_1 ^ t_ , [t_1, t_b] = -f_1 b^c t_c , [t_1, t_2]= -f_1 2^3 ,
{t_α, t_β} = - f_αβ^c t_c , {t_, t_} = - f_^c t_c , {t_α, t_} = - f_α^1 t_1 ,
[t_a, t_β] = -f_a β^ t_ , [t_a, t_] = -f_a ^γ t_γ , [t_a, t_b] = -f_a b^1 t_1 .
We normalize the generators so that the SUSY algebra is conventional with
f_αβ^c = -i (γ^c)_αβ ,
f_^c = -i (γ^c)_ .
Then the structure constants f_AB^C may be interpreted as the torsion tensor T_AB^C of the undeformed AdS_5 ×S^5 background. The algebra admits a non-degenerate Cartan metric κ_ with nonzero pieces κ_ab = η_ab,
κ_α = -κ_α, and κ_12.
The (graded) inverse component κ^α is proportional to the Ramond-Ramond bispinor of the undeformed AdS_5 ×S^5 background, i.e.
κ^α∝F_a_1 a_2 a_3 a_4 a_5 (γ^a_1 a_2 a_3 a_4 a_5)^α, since it appears in the constant torsion
T_a β^ = f_a β^
= -i κ^γ (γ_a)_γβ ,
T_a ^γ = f_a ^γ
= -i κ^γ (γ_a)_ .
A crucial feature of κ^α is that due to the 10D gamma matrix identity
γ_a γ_b_1 b_2 b_3 b_4 b_5γ^a = 0, one finds
T_a β^ (γ^a)_ = T_a ^γ (γ^a)_γβ = 0.
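This vanishing is a pure counting statement about ten-dimensional gamma matrices and can be verified mechanically. The sketch below (Python with numpy) does so in a standard 32 x 32 Dirac representation with Euclidean signature; using Dirac matrices rather than the 16-component chiral blocks of the text, and ignoring the signature, are harmless simplifications for this particular identity.

import numpy as np
from functools import reduce
from itertools import combinations

# Numerical check of gamma_a gamma_{b1...b5} gamma^a = 0 in D = 10, using a
# standard 32x32 Dirac representation with Euclidean signature.  This is the
# r = 5, D = 10 case of gamma^a gamma_(r) gamma_a = (-1)^r (D - 2r) gamma_(r).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2, dtype=complex)

gammas = []                                              # {Gamma_a, Gamma_b} = 2 delta_ab
for k in range(5):
    for s in (s1, s2):
        gammas.append(reduce(np.kron, [s3] * k + [s] + [one] * (4 - k)))

max_violation = 0.0
for idx in list(combinations(range(10), 5))[:20]:        # a sample of index choices
    g5 = reduce(np.matmul, [gammas[i] for i in idx])     # distinct indices: already antisymmetric
    contraction = sum(g @ g5 @ g for g in gammas)
    max_violation = max(max_violation, np.abs(contraction).max())
print("max |gamma_a gamma_(5) gamma^a| over samples:", max_violation)   # 0.0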
§.§ The eta-deformation
In the context of supercoset sigma models, the η deformation is a specific deformation that preserves the classical integrability of the original model. It depends on the existence of an R-matrix obeying the modified classical Yang-Baxter equation (<ref>); such models are known as (inhomogeneous) Yang-Baxter σ-models <cit.>. For the case of the AdS_5 ×S^5 superstring, the Lagrangian is given by <cit.>
= -(1-η^2)/4 t (√(-h) h^ij - ^ij) (
g^-1_i g 𝐝 _-^-1 g^-1_j g
) = -(1-η^2)/4 t (√(-h) h^ij - ^ij) e_i^e_j^ (_-^-1)_^𝐝_^κ_ .
The group element g is an element of PSU(2,2|4). The factor 1/t can be interpreted as the string tension T. The Lie algebra operator 𝐝 is defined in terms of
ℤ_4 graded projectors as
𝐝 = P^(1) + 2/1-η^2 P^(2) - P^(3). As a diagonal matrix,
𝐝_^ and its transverse are given by[Ref. <cit.> relates operators to matrices as
·ξ^ t_ = ξ^ t_^_,
while we use ·ξ^ t_ = ξ^_^ t_. This amounts to
replacing ^_→_^ (-)^b+ba.]
𝐝_α^β = -(𝐝^T)_α^β= δ_α^β , 𝐝_^ = -(𝐝^T)_^= -δ_^ ,
𝐝_a^b = (𝐝^T)_a^b = 2/1-η^2 δ_a^b 𝐝_1^2 = (𝐝^T)_1^2 = 0 .
The operator _- and a related operator _+ are given in matrix form by
(_-)_^ = δ_^ - η 𝐝_^ (R_g)_^ ,
(_+)_^ = δ_^ + η (𝐝^T)_^ (R_g)_^ .
The Lagrangian (<ref>) can be rewritten in Green-Schwarz form as
= -T/2√(-h) h^i j(A_-i^(2) A_-j^(2))
+ T/2^ij(A_-iB A_-j)
where A_- = _-^-1 (g^-1 g) and
T = 1/t , B = 1-η^2/2(P^(1) - P^(3) + η 𝐝^T R_g 𝐝) .
It is straightforward to show that if one decomposes g = n f for f ∈SO(4,1) ×SO(5), the f factor drops out, so this is indeed describing the supercoset.
In the seminal work <cit.>, Borsato and Wulff analyzed the supergeometry of the η-model, establishing that its κ-symmetry was of the GS form and deriving a condition on the R-matrix (dubbed a unimodularity condition) for the background to be a supergravity solution. Our goal in this section is to analyze the η-deformed model purely on group theoretic grounds and show how the relevant structures of the σ-model emerge purely from the doubled supergeometry.
The starting point is the complexification G^ℂ of the group G = PSU(2,2|4). As we have already discussed in section <ref>, the complexified group involves the addition of generators t̃_ = i t_, obeying
[t_, t̃_] = - f_^t̃_ ,
[t̃_, t̃_] = + f_^ t_ ,
with Killing form built from the imaginary part of the Killing form on G, so that
t_t_ = t̃_t̃_ = 0 , t_t̃_ = κ_ .
We want to find a new basis for this supergroup, for which the structure constants can be interpreted as generalized flux tensors for a supergravity background. Denote the generators of this new basis T_ = (T_1, T_, T^1) with pairing
T_T_ = η_ =
[ 0 0 δ_1^2; 0 η_ 0; δ^1_2 0 0 ] .
The generators
T_ = (T_α, T_, T_, T_, T^α, T^)
will parametrize the generalized supercoset with pairing η_ given by (<ref>). A few basic assumptions will help us choose these generators:
* The only group invariant is presumed to be the Killing superform. This suggests that the
new basis of generators T_ should be very simply written in terms of the old basis,
T_ = a_() t_ + b_() t̃_ ,
T^ = c_() κ^ t_ + d_() κ^t̃_ ,
where a, b, c, and d correspond to numerical constants and
no summation on the parenthetical indices is assumed. This implies that
the flux tensors will all be proportional to the original structure constants,
_∝ f_.
* T_1 = t_1, in order to preserve the coset interpretation, with the Lorentz generator acting on all other generators in the expected way.
* The structure constants must obey the supergravity
constraints. This means that all the dimension -1/2 components vanish,
_αβγ = _αβ = _α =
_ = 0.
This is automatic because there is no corresponding structure constant in the original algebra (since the structure constants are bosonic quantities). The dimension 0 components should also be constrained to obey
_αβ = √(2) f_αβ c ,
_ = -√(2) f_ c ,
_α = _α = 0 .
Additional constraints apply at dimension 1/2; however, these are fermionic and must vanish since the fluxes correspond to structure constants of a supergroup (just as for dimension -1/2). Finally, at dimension 1, we will also require (<ref>).
The most general possibility for
T_α and T_ is
T_α = a_1 (t_α + η t̃_α) ,
T_ = a_2 (t_ - η t̃_) .
We choose an arbitrary parameter η and normalization a_1 to define T_α.
The fact that -η appears in T_ is a direct consequence of
T_αT_ = 0. From the basic dimension zero flux constraint (<ref>),
we can deduce T_ from {T_α, T_β} and similarly for T_:
T_ = (a_1)^2/√(2)( (1-η^2) t_a + 2 η t̃_a) ,
T_ = (a_2)^2/√(2)( (1-η^2) t_a - 2 η t̃_a) .
The dimension zero flux also fixes T^α using [T_α, T_] (and similarly for T^) as
T^α = (a_1)^3/2(
(1-3η^2) t^α + η (3-η^2) t̃^α) ,
T^ = (a_2)^3/2(
-(1-3η^2) t^ + η (3-η^2) t̃^) .
The Lorentz generator and its dual can only be
T_1 = t_1 , T^1 = t̃^1
in order to satisfy T_1T^2 = δ_1^2 and
T^1T^2 = 0. From T_T_ = η_ = η_ab and
T_T_ = η_ = -η_ab, we find the normalizations
(a_1)^4 = (a_2)^4 = 1/2η (1-η^2) .
This fixes the range of η as 0 < η < 1 or η < -1.
We fix the phases of a_1 and a_2 by choosing them to be positive real numbers.
We summarize the full set of structure constants in Appendix <ref>.
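For completeness, the allowed range quoted above is simply the condition that the normalization admits real a_i: positivity of its right-hand side requires
η (1-η^2) = η (1-η)(1+η) > 0 ,
and since this product changes sign only at η = -1, 0, 1, it is positive precisely for 0 < η < 1 or η < -1.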
There are two equivalent paths to the supervielbein, depending on whether we want to view it as the supervielbein for the generalized parallelizable space (section <ref>) or for the generalized coset (section <ref>). While the most direct path is the latter, it will be more instructive to use the former construction to generate the megavielbein directly, since this is closer in spirit to the results of <cit.>. Recall that for G^ℂ, we gave a simple form for the generalized supervielbein in the basis t_ and t̃^ = i t^ in (<ref>) (promoting unhatted indices to hatted ones). The construction involved the left-invariant vector fields e^ t_ = g^-1 g and the R-matrix R^ obeying the mCYBE (<ref>). Then one simply can apply the dictionary derived above for relating t_ and t̃^ to the generators T_ we actually want. This gives a simple similarity transformation which can be applied to give the generalized supervielbein.
Actually, in order to match normalizations, we need to rescale the generalized supervielbein with a dimensionful parameter (this is related to rescaling the worldsheet tension):
'_^ =
_^_^_^ .
The factor rescales the flat indices, with nonzero entries
_1^2 = δ_1^2 , _α^β = v^1/2δ_α^β , _^ = v^1/2δ_^ , _^ = v δ_^ , _^ = v δ_^ ,
^α_β = v^3/2δ^α_β , ^_ = v^3/2δ^_ , ^1_2 = v^2 δ^1_2 ,
The parameter v carries mass dimension, and the choices above reflect the engineering dimensions of D_. The factor rescales the dual derivative ^,
_^ =
[ δ_^ 0; 0 v^-2δ^_ ] .
The choice of v^-2 here is needed to ensure that ' remains an OSp element with unchanged η_ and η_. We drop the prime from now on. After this redefinition, the fluxes are unchanged except for an overall rescaling by v consistent with their engineering dimension. To match conventions in <cit.>, we will choose
v = √(2η/1-η^2) .
The generalized supervielbein can be read off the covariant derivatives. Using the matrices (_±)_^ introduced earlier they are
D_1 = e_1^_ ,
D_α = 1/√(1-η^2)(
(_±)_α^e_^_
+ 1/2 (1-η^2) e_^κ_α ^)
D_ = 1/√(1-η^2)(
(_±)_^e_^_
- 1/2 (1-η^2) e_^βκ_β ^) ,
D_ = 1/√(2)(
(_-)_a^e_^_
+ e_^bη_b a (-)^m ^) ,
D_ = 1/√(2)(
(_+)_a^e_^_
- e_^bη_b a (-)^m ^) ,
D^α = 1/2 √(1-η^2)(
+ 4 κ^αe_^_
- 3-η^2/1-η^2 (_±)^αe_^_
+ 1/2 (3-η^2) e_^α^) ,
D^ = 1/2 √(1-η^2)(
-4 κ^βe_β^_
+ 3-η^2/1-η^2 (_±)^e_^_
+ 1/2 (3-η^2) e_^^) ,
D^1 = -2η^2/1-η^2 (R_g)^1 e_^_
+ e_^1 (-)^m ^ .
It is worth emphasizing here that (_+)_α^ = (_-)_α^
and similarly for ; this is apparent from the operators themselves, but it is a requirement from the underlying structure of supersymmetric DFT, see the second line of (<ref>).
The supervielbein implicit in (<ref>) is not immediately written in Poláček-Siegel form. In particular, it has dependence on the subgroup coordinates y. However, it is easy enough to put it into that form. Decomposing the group element as g = n × f, the G vielbeins e_^ employed above can be rewritten as
e_^ =
( f)_^ e_^ ,
e_^
= [ e_A^M -ω_A^1ṽ_1^; 0 ṽ_1^ ]
with e and ω defined as in (<ref>).
Conjugation by f leaves the diagonal matrices 𝐝 and 𝐝^T invariant and replaces R_g with R_n. This leaves an overall f on the very outside of the megavielbein as in (<ref>). The fields on the coset simply correspond to replacing g with n in the operators _± and dropping the f factor in (<ref>). We denote _± as the operators (<ref>) with g replaced by n. The result coincides with applying the similarity transformation for T_ to the coset supervielbein (<ref>) directly.
As discussed in section <ref>, one can read off from these the components of the physical supervielbein. First, one identifies[The fact that the index sum is over B and not comes from the upper triangular structure of e_^ in (<ref>). One could equivalently write
_α^M = 1/√(1-η^2) ( f^-1)_α^β
(_-)_β^ e_^M
with the full _- and e_^ depending on y.]
_α^M = 1/√(1-η^2) (_-)_α^B e_B^M , _α^M = 1/√(1-η^2) (_+)_α^B e_B^M ,
_^M = 1/√(1-η^2) (_-)_^B e_B^M , _^M = 1/√(1-η^2) (_+)_^B e_B^M ,
_^M = (_-)_a^B e_B^M , _^M = (_+)_a^B e_B^M .
The fact that it is e_B^M rather than e_^M appearing here is a consequence of the triangular form of (<ref>). Their inverses are
_M^α = √(1-η^2) e_M^B (_-^-1)_B^α , _M^α = √(1-η^2) e_M^B (_+^-1)_B^α ,
_M^ = √(1-η^2) e_M^B (_-^-1)_B^ , _M^ = √(1-η^2) e_M^B (_+^-1)_B^ ,
_M^ = e_M^B (_-^-1)_B^a , _M^ = e_M^B (_+^-1)_B^a .
It is crucial that (_±^-1)_2^A = 0 for the inverses to have such a simple structure.
The OSp structure requires that _M^ and _M^ be related by a Lorentz transformation,
Λ_^ = (_-)_a^ (_+^-1)_^b .
That this matrix is a Lorentz transformation was observed in <cit.>. There the operator M = _-^-1_+ was introduced; its matrix form is
M_^ =
[ (Λ^-1)_a^b M_a^β M_a^ M_a^2; 0 δ_α^β 0 0; 0 0 δ_^ 0; 0 0 0 δ_1^2 ] .
It is not hard to show that det Λ^-1 = sdet M = sdet _+ / sdet _- = 1, with the last equality following from _+^T = _-. This guarantees that we are dealing with an SO(1,9) transformation, so the duality frame must be IIB or IIB^*. Actually, it is clear that Λ_^∈SO^+(1,9) for η sufficiently small, since it is continuously deformable to the identity; this property should hold so long as we restrict to the η locus where _± is invertible. Then the vielbein and gravitino one-forms can be read off from (<ref>)
E_M^a = e_M^B (_-^-1)_B^a ,
E_M^1α = √(1-η^2) e_M^B (_+^-1)_B^α ,
E_M^2α = √(1-η^2) e_M^B (_-^-1)_B^ (Λ^-1)_^α .
Since Λ_^∈SO^+(1,9), the second gravitino is of the same chirality as the first, so we have written the above in terms of 16-component Weyl spinors.
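The structural facts used in the last few steps, namely that the two deformation operators are each other's transpose with respect to the invariant metric and that their ratio therefore preserves it, already appear in a bosonic toy model in which the operators reduce to 1 ∓ η R_g with R_g antisymmetric. The sketch below (Python with numpy, reusing the su(2) toy data) checks this; the 𝐝 insertions, the grading, and the fermionic directions of the actual psu(2,2|4) construction are omitted, so the snippet is only meant to make the mechanism transparent.

import numpy as np

# Bosonic skeleton (su(2), no Z4 grading, no fermions) of the facts used
# above: with R_g antisymmetric w.r.t. the invariant metric, O_+ is the
# kappa-transpose of O_-, and M = O_-^{-1} O_+ preserves kappa with unit
# determinant.  The form O_pm = 1 -+ eta R_g is an illustrative assumption.
rng = np.random.default_rng(3)
kappa = -np.eye(3)                                          # invariant metric, toy normalization
R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])   # toy mCYBE solution from before

A, _ = np.linalg.qr(rng.normal(size=(3, 3)))                # Ad(SU(2)) = SO(3) on coefficients,
A *= np.sign(np.linalg.det(A))                              # so a proper rotation is some Ad_g
Rg = A.T @ R @ A

eta_def = 0.3
O_minus, O_plus = np.eye(3) - eta_def * Rg, np.eye(3) + eta_def * Rg

kappa_transpose = np.linalg.inv(kappa) @ O_minus.T @ kappa
print("O_+ is the kappa-transpose of O_-:", np.allclose(kappa_transpose, O_plus))
M = np.linalg.inv(O_minus) @ O_plus
print("M preserves kappa:", np.allclose(M.T @ kappa @ M, kappa))
print("det M =", round(np.linalg.det(M), 12))               # 1.0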
These superficially differ from the corresponding formulae in <cit.> in a few ways. The first is that the expressions in <cit.> are defined on the full group manifold rather than the physical coset. This means the expressions above have the indices M and B replaced with and and the operator _± replaced with _±. As we have discussed, an overall f action (a Lorentz transformation) accounts for the change in the operators, and (_±^-1)_2^A = 0 allows for the restriction of the indices to the coset. The second issue also involves a Lorentz transformation: the Λ factor is moved off the second gravitino and onto the first gravitino and vielbein (modifying _-^-1 to _+^-1 for the latter).
We similarly can read off the dilatini directly using (<ref>):
χ_1α = i/2_^M _M^β (γ^a)_βα
= i/2√(1-η^2) (_- _+^-1)_a^β (γ^a)_βα ,
χ_2α = i/2Λ_α^_^M _M^ (γ^)_
= i/2√(1-η^2) (_+ _-^-1)_a^
(γ^a)_Λ_α^ .
These agree with <cit.> although the intermediate expressions differ.
The Ramond-Ramond bispinor can be read off from either D^α or D^ using
S^α = -^α M_M^
= +1/2(
3-η^2/1+η^2κ^α
- 4 (_-^-1)^α) = - ^ M_M^α
= - 1/2(
3-η^2/1+η^2κ^α
- 4 (_+^-1)^α)
and applying (<ref>).
Recovering the original σ-model is straightforward. It should be of Green-Schwarz form (<ref>), since we have imposed the Green-Schwarz constraints. The symmetric term matches the vielbein (<ref>). The antisymmetric term is recovered by working out the B-field from a comparison of (<ref>) with (<ref>). The result is
B = -e^D (_-^-1)_D^B ∧ e^C (_-^-1)_C^A B_A B ,
B_A^B = 1-η^2/2(δ_α^β- δ_^
+ η (𝐝^T R_n 𝐝)_A^B
) ,
in agreement with (<ref>). Note that the supergeometry does not determine the overall normalization T of the Lagrangian.
§.§ The lambda-deformation
The λ-deformation <cit.> (see also <cit.>) was extended to AdS_5 ×S^5 in <cit.>. Strictly speaking, this is not a deformation of the AdS_5 ×S^5 superstring but rather a deformation of its non-abelian T-dual. The Lagrangian can be written[The normalization in <cit.> differs from <cit.> by a factor of 1/4. We follow the normalization of <cit.>.]
= -k/8π (√(-h) h^ij - ^ij) (
g^-1_i g (1 + - 2 _-^-1) g^-1_j g
) .
As with the η-deformation, the group element g lies in PSU(2,2|4). The constant k is the level of the WZW model, and the antisymmetric operator generates the WZW term.
The Lie algebra operators _± are given by
_- = 1 - g^-1 Ω ,
Ω = P^(0) + λ^-1 P^(1) + λ^-2 P^(2) + λ P^(3) ,
_+ = g^-1 - Ω^T ,
Ω^T = P^(0) + λ P^(1) + λ^-2 P^(2) + λ^-1 P^(3) .
Just as for the η deformation, the Lagrangian (<ref>) can be put into GS form (<ref>) with
T = k/4π (λ^-4 - 1) ,
B = (λ^-4 - 1)^-1(
_-^T _-
+ Ω^T g
- g^-1Ω) .
The string tension is positive for k>0 and |λ|<1 or k<0 and |λ|>1. These two parameter regions are related by taking g → g^-1.
Just as for the η-deformation, we want to recover the supergeometry of this Green-Schwarz σ-model purely from the algebra. The underlying group structure of the λ deformation is D = G × G with generators
t_^(L) = (t_,0) ,
t_^(R) = (0,t_) .
In terms of these, we can build T_ that satisfy the supergravity constraints, under the same simplifying assumptions as for the η-deformation:
T_α = b_1 ( t_α^(L) + λ^-1 t_α^(R) ) ,
T_ = b_2 ( λ^-1 t_α^(L) + t_α^(R) ) ,
T_ = (b_1)^2/√(2) ( t_a^(L) + λ^-2 t_a^(R)) ,
T_ = (b_2)^2/√(2) ( λ^-2 t_a^(L) + t_a^(R)) ,
T^α = (b_1)^3/2 κ^α (
t^(L)_ + λ^-3 t^(R)_) ,
T^ = -(b_2)^3/2 κ^β
(λ^-3 t^(L)_β+ t^(R)_β) ,
T_1 = t^(L)_1 + t^(R)_1 ,
T^1 = κ^1 2 (t^(L)_2 - t^(R)_2) .
The choices for T_α and T_ are the most general expressions subject to the condition T_αT_ = 0. The expressions for T_, T_, T^α, and T^ follow from requiring the canonical choice of the dimension zero flux tensor. The choice of T_1 is obvious, and T^1 is dictated by orthonormality. Requiring T_T_ = -T_T_ = η_ab fixes the normalizations b_1 and b_2 as
(b_1)^4 = (b_2)^4 = 4/1-λ^-4 .
We find here |λ|>1. This comes about for several related reasons – the choice of λ^-1 rather than λ in (<ref>), the sign choice of Killing metric for the left and right sectors, etc. The reason we keep this choice is that it better matches the explicit expressions in <cit.> provided we keep our coset representative (<ref>) for G × G. Replacing g with g^-1 (or equivalently taking m = (e, g)) and flipping λ^-1 to λ would give the same expressions as <cit.>, but now with |λ|<1, as in the σ-model.
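For the record, the restriction |λ| > 1 found here is nothing but positivity of the right-hand side of the normalization condition above, which is required for real b_i:
1 - λ^-4 > 0 ⟺ λ^4 > 1 ⟺ |λ| > 1 .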
Now we apply the generalized parallelizable space construction for G× G in section <ref>, using the coset representative (<ref>).
As with the η-deformation, one can introduce a dimensionful parameter v when defining the generalized supervielbein. We employ the same redefinitions (<ref>) as for the η-deformation, but now subject to the normalization
v^2 = (b_1)^-4 = (b_2)^-4= 1/4 (1- λ^-4) .
For convenience, we isolate the phases of b_i by b̂_i = b_i / |b_i|, so that b_i = v^-1/2b̂_i.
The expressions for D_ are a bit more cumbersome than for the η-deformation:
D_1 =
(1 - g^-1)_1^e_^_
+ 1/4 v^-2e_^ (1 + g)_1^ (-)^m
D_α =
b̂_1 [(_-)_α^e_^_
+ 1/4 v^-2e_^
(1+λ^-1 g)_α^ ] ,
D_ =
b̂_2 [-(_+)_^e_^_
+ 1/4 v^-2e_^ (λ^-1 + g)_α^] ,
D_ =
(b̂_1)^2/√(2)[
(_-)_a^e_^_
+ 1/4 v^-2e_^ (1+λ^-2 g)_ a^ (-)^m
] ,
D_ =
(b̂_2)^2/√(2)[
- (_+)_a^e_^_
+ 1/4 v^-2e_^ (λ^-2 + g)_ a^ (-)^m
] ,
D^α =
1/2 (b̂_1)^3 [
(1-λ^-4 + _-)^αe_^_
+ 1/4 v^-2e_^ (1+λ^-3 g)_^α^] ,
D^ =
1/2 (b̂_2)^3 [
(λ -λ^-3 + _+)^e_^_
- 1/4 v^-2e_^ (λ^-3 + g)_^^] ,
D^1 = v^2 [
(1+ g^-1)^1 e_^_
+ 1/4 v^-2e_^ (1 - g)_B^1^ (-)^m]
The construction involves the left-invariant vector fields e^ t_ = g^-1 g and the intrinsic WZW B-field (see (<ref>)) appearing in _ = _ - _^ (-)^n. Again, we emphasize that (_+)_α^
and (_-)_α^ are related, consistent with the underlying structure of supersymmetric DFT (<ref>), although here the relation is slightly more complicated:
(_+)_α^ = -λ (_-)_α^ ,
(_+)_^ = -λ^-1 (_-)_^ .
As with the η deformation, we have first identified the supervielbein on the full generalized parallelizable space. Following the discussion in section <ref>, we can pass to the generalized coset by taking g = f^-1 n f. However, we cannot directly apply many of the formulae from that section because of the non-trivial similarity transformation applied to the generators T_ (<ref>). This is in contrast to the η-deformation construction, where the triangular structure of the coset supervielbein (<ref>) simplified matters. In this instance, it will be easier to proceed from scratch.
The intrinsic WZW B-field becomes, for g = f^-1 n f,
= 1/4 n n^-1 + n^-1 n + n f f^-1 n^-1 f f^-1
+ _ , _ = -1/24 n n^-1[ n n^-1, n n^-1] .
The WZW part lives purely on the coset, while the other term has at least one leg in the subgroup F. The upshot, while far from obvious from this perspective, is that we recover the Poláček-Siegel form with
D_1 = (f)_1^2ṽ_2^_ .
We will not show this explicitly for the other terms, although it is a worthwhile exercise.
From the explicit form of the covariant derivatives, we can read off
_α^M = b̂_1 (_-)_α^e_^M ,
_α^M = - b̂_1 λ^-1 (_+)_α^e_^M ,
_^M = b̂_2 λ^-1 (_-)_^e_^M , _^M = - b̂_2 (_+)_^e_^M ,
_^M = (b̂_1)^2 (_-)_a^e_^M ,
_^M = - (b̂_2)^2 (_+)_a^e_^M .
The bars on _± again signify the restriction to the coset, and by e_^ we mean extracting the f action from e_^, i.e.
e_^ = ( f)_^e_^.
This quantity is not so simple as in the previous section: its inverse can be written
e^ t_ = n^-1 n + f f^-1 - n^-1 f f^-1 n ,
e_^ =
[ e_M^A e_M^1; ṽ_^2 (_-)_2^A ṽ_^2 (_-)_2^1 ] .
The inverses of (<ref>) are
_M^α = 1/b̂_1 e_M^(_-^-1)_^α , _M^α = - λ/b̂_1 e_M^(_+^-1)_^α ,
_M^ = λ/b̂_2 e_M^(_-^-1)_^ ,
_M^ = - 1/b̂_2 e_M^(_+^-1)_^ ,
_M^ = 1/(b̂_1)^2 e_M^(_-^-1)_^a ,
_M^ = - 1/(b̂_2)^2 e_M^(_+^-1)_^a .
Here we have exploited (_+)_1^ = -(_-)_1^ and the structure of the e_^.
The Lorentz transformation that connects _M^ to _M^ is
Λ_^ = - (b̂_1)^2/(b̂_2)^2×
(_-)_a^ (_+^-1)_^b
= - (_-)_a^ (_+^-1)_^b
for b_1 and b_2 both real. The matrix
M_^ = (_+)_^ (_-^-1)_^ is
M_^ =
[ -(Λ^-1)_a^b M_a^β M_a^ M_a^2; 0 -λ δ_α^β 0 0; 0 0 - λ^-1δ_^ 0; 0 0 0 -δ_1^2 ] .
Again, it is not hard to show that
det Λ^-1 = sdet M = sdet _+ / sdet _- = 1, which follows from sdet( g) = 1. This guarantees a IIB or IIB^* duality frame.
The supervielbein is
E_M^ = e_M^ (_-^-1)_^a ,
E_M^1α = -λ/b̂_1 e_M^ (_+^-1)_^α ,
E_M^2α = λ/b̂_2 e_M^ (_-^-1)_^
(Λ^-1)_^α ,
where we are free to use 16-component spinors because the duality frame is IIB/IIB^*. Following similar steps as before, we find the dilatini
χ_1α = i/2_^M _M^β (γ^a)_βα
= -i/2b̂_1 λ
(_-)_a^ (_+^-1)_^β (γ^a)_βα ,
χ_2α = i/2Λ_α^_^M _M^ (γ^)_
= -i/2b̂_2 λ
(_+)_a^ (_-^-1)_^
(γ^a)_ Λ_α^ .
and two equivalent expressions for the Ramond-Ramond bispinor
S^1α 2β = -^α M_M^ (Λ^-1)_^β
= -1/2(b̂_1)^3/b̂_2 (
λ (1-λ^-4) (_-^-1)^α
+ λ^-3κ^α) (Λ^-1)_^β = - ^ M_M^α (Λ^-1)_^β
= +1/2(b̂_2)^3/b̂_1(
λ^2 (1-λ^-4) (_+^-1)^α
+ λ κ^α) (Λ^-1)_^β .
Again, we can directly recover the Green-Schwarz σ-model (<ref>). The vielbein E^a matches the desired expression and the B-field is given by
B = _
- e^ (_-^-1)_^ ∧e^ (_-^-1)_^ B_A B ,
B_A^B =
1/1-λ^-4(
n^-1 Ω - Ω^T n
)_A^B .
An overall factor involving the tension must be separately specified. Here it is
T = |k|/4π (1-λ^-4) with the understanding that k should be
taken to be negative and |λ|>1.
To recover the results of <cit.>, we should choose b̂_1 = -1 and b̂_2 = -i. The latter choice is not technically allowed since b_i should be real to ensure the Majorana condition holds. However, one can interpret this as arising from writing IIB^* results in IIB conventions: this introduces factors of i for objects carrying indices (see e.g. footnote 20 of <cit.> or section 5 of <cit.>). Now the sign in (<ref>) is eliminated, so that
Λ_^ = + (_-)_a^ (_+^-1)_^b. Presuming this to lie in SO^+(1,9), we recover the results of <cit.> up to an overall Lorentz transformation.
However, it is by no means obvious that this is fixed in SO^+(1,9) (or SO^-(1,9)). Actually, one can show by randomly sampling elements of SU(2,2) ×SU(4) that Λ_^ can lie in either connected part. Moreover,
(_+)_a^ (_-^-1)_^b turns out to be independent of λ and determined entirely by the group element g; it in fact matches the Lorentz transformation on the coset G / F determined using g as in (<ref>), in remarkable contrast to the η-deformation. This surprising condition follows because the element defined in (<ref>) appears always to be idempotent.[We could find no proof of this last point, but it seems to hold for all random matrices we sampled.] This seems to imply that the λ deformation is not purely fixed in either a IIB or IIB^* duality frame, but that this depends on the specific group element g.
This is unexpected because one might very naturally expect a IIB^* duality frame since the λ-model can be understood as a deformation of the non-abelian T-dual of the AdS_5 ×S^5 superstring, as argued in <cit.>. Certainly it is possible to find IIB backgrounds for very specific cases involving AdS_n ×S^n factors (see e.g. <cit.>). It would be good to understand this point better, and whether some other factor forbids these choices of group element or invalidates the naive duality argument.[We thank Riccardo Borsato and Linus Wulff for discussions about this point and for pointing out references <cit.> to us.]
§.§ Analytic continuation and PL T-duality
Let us briefly comment about how the η and λ models are related <cit.>. As discussed in section <ref>, there exist coset representatives for G × G and G^ℂ that are straightforwardly connected by analytic continuation, and so the same holds for their generalized supervielbeins. For G^ℂ, this corresponds to a different choice of isotropic subgroup (<ref>) than the one (<ref>) relevant for the η deformation; in other words, the η deformation should be the Poisson-Lie dual of the analytic continuation of the λ deformation.
Of course, the generalized supervielbeins built in sections <ref> and <ref> carry no reference to λ or η. These parameters arose from a similarity transformation to recover the physical supervielbeins with the correct supergravity flux constraints. To understand the connection, we need only compare (<ref>) to (<ref>). Since the generators on
G^ℂ map to generators on G × G as
t_→ (t_, t_)
and
t̃_→ i (t_, -t_), it must be that
η→ i 1-λ/1+λ ,
a_i →1+λ/2λ b_i .
This is consistent with the normalizations (<ref>) and (<ref>) up to a factor of i, coming from the analytic continuation of the Killing form on D.
Finally, it is worth mentioning that the η and λ σ-models (<ref>) and (<ref>) each involve one additional parameter corresponding with an overall normalization: these are 1/t and k/π. These parameters are related to the deformation parameter of the quantum group U_q(𝔭𝔰𝔲(2,2|4)) governing the deformed models as
q =
e^- ϰ t η-deformation
e^i π / k λ-deformation
for ϰ = 2η/1-η^2. The analytic continuation from t to k/π can be checked at the classical level by comparing the respective Hamiltonians. For these models, we find = 1/2 TΠ_η^Π_ + 1/2TΠ_η^Π_, where Π_ = _^ (p_M, T _σ x^M). Undoing the rescaling of the supervielbein replaces T by T / v^2. This leads to canonical Poisson brackets
{Π_(σ), Π_(σ')} =
T v^-2 η_ _σδ(σ -σ')
+ F_^Π_ δ(σ -σ') .
The normalization of the Schwinger term is
T v^-2 =
1ϰ t η-deformation
|k|π λ-deformation
and captures how the parameters must change, with a factor of i coming from analytically continuing the Killing form.
§.§ Results for the dilaton
We have not yet addressed the question of whether these supergravity backgrounds admit a dilaton. It was shown in <cit.> that the λ-deformation always admits a dilaton while the η-deformation admits a dilaton only when a certain unimodularity condition on the R-matrix is satisfied. We can now see how these conditions arise naturally within double field theory.
As discussed in section <ref>, one can replace _logΦ in the dilatonic flux tensor by a vector _ (<ref>) and impose the same constraints on this flux as in super DFT <cit.>. This implies no additional constraints on the supergeometry: the vector _ is the DFT analogue of X_M and K^M in generalized supergravity. The constraints in question amount to fixing
_α = -_αβ^β , _ = -_^ .
From these expressions, one can compute _α. The question is whether that can be written as D_α of some superfield.
Rather than compute this directly for the models in question, we will follow a less direct but more rewarding route, and address the full set of dilatonic fluxes in one fell swoop. The crucial point is that the covariant dilatonic torsions
_ = _^_ + __^ + Ω^_ .
all vanish when the constraint (<ref>) and the Bianchi identities are imposed <cit.>. These differ from the fluxes _ by the Ω connection of type II DFT, which is composed of not only the double Lorentz connection but also connections associated with the additional parameters given in Table <ref>.
What exactly are these Ω? Recall that the Poláček-Siegel framework furnished us with a Lorentz spin connection
Ω_^ =
Ω_^ = -Ω_^1 F_1 a^b
where Ω_^1 was a piece of the megavielbein. Is this the right one? That question is easy enough to answer. At dimension 1/2, choosing the DFT torsion tensors _α and _ to vanish fixed the α component of Ω. Indeed, we can check that (similarly for the barred versions)
_α = _α = 0 , _ = _ = 0
where _α is the flux for the megavielbein (which vanishes for both cases of interest). The other dimension 1/2 torsion tensors _αβ^γ,
_α^γ, _αβ^γ, and their barred versions similarly match the corresponding generalized flux tensors (all also vanishing). At dimension 1, we find
_ = _ = 0 , _ = _ = 0 , _ = _ = 0 , _ = _ = 0
implying that Ω_[] and Ω_ and their barred versions are chosen properly. At dimension 1 we also have
_^γ
= _^γ + Ω_^γ , _α^γ
= _α^γ + Ω_α^γ .
Both of these should vanish. Since
_^γ∝κ^γ (γ_b)_ is γ-traceless, using the properties of κ^α, there is no obstruction to choosing Ω_^γ = -_^γ so that the first torsion vanishes.
The second vanishes since _α^γ=0 and so we can
choose Ω_α^γ = 0.
At dimension 3/2, we have
_^γ = _^γ+ Ω^γ_ + 2 Ω_[, ]^γ ,
_^γ = _^γ+ Ω_, ^γ ,
_^γ = _^γ+ Ω^γ_ ,
_α^βγ
= _α^βγ
+ Ω_α^βγ ,
_α^β
= _α^β ,
_α^βγ
= _α^βγ .
All the generalized flux tensors vanish on the right, and so we are free to choose
all the corresponding Ω's to vanish.[Strictly speaking, we can only fix Ω up to the residual shift symmetries discussed in <cit.>.]
What does this mean for _? From the conditions derived on the non-Lorentz Ω, we find
_α = _α - Ω_βα^β ,
_ = _ - Ω__^ + Ω_β_^β ,
^α = ^α
+ Ω^β_β^α .
Each of these can be interpreted as pieces of the dilaton flux tensor on the Poláček-Siegel megaspace (<ref>). We know that for a supergravity solution all of these
must vanish. Moreover, from the dilatonic Bianchi identity, we also know that the dilatonic SO(4,1) ×SO(5) curvature _a b = - ^1 f_1 a b vanishes. The upshot is that from (<ref>) we can impose the strictest possible condition on the Poláček-Siegel dilatonic flux,
_ = F_1^1 = 0
with the vanishing of the second term following from the properties of PSU(2,2|4).
This means that for both the η and λ deformations, the generalized dilatonic torsion in the Poláček-Siegel framework must be taken to vanish, _ = 0. The results in section <ref> apply for F_ = F^ = 0.
For G × G, we have from (<ref>)
^ = 0 , _ = _logv̂_^
where v̂_^ is the right-invariant vielbein for the group G.
This admits a dilaton solution with
logΦ= logv̂_^ + constant .
Deriving the supergravity dilaton requires two steps. First, we pass from the Poláček-Siegel framework to DFT on the coset. This involves defining
logΦ = logΦ- logẽ_^1.
Then we translate from the DFT dilaton to the supergravity dilaton, using
Φ = e^-2 φ× E_M^A. From (<ref>),
we can replace E_M^A with _M^A or _M^A,
discarding any overall sign difference as an irrelevant constant factor. Combining these factors gives
e^-2 φ = v̂_^×ẽ_1^×_A^M ×constant .
For the λ deformation, this amounts to
e^-2 φ = _±×constant .
To see this, one first exploits
[ δ_1^2 0 0 0; 0 b̂_1 δ_α^β 0 0; 0 0 b̂_2 λ^-1δ_^ 0; 0 0 0 (b̂_1)^2 δ_a^b ]×
(_-)_^e_^
=
[ ṽ_1^ 0; ∙ _A^M ]
where the ∙ denotes an irrelevant quantity. From this, we can immediately see
_- =
ṽ_1^×_A^M ×e_^×constant .
But ṽ_1^ and e_^ differ from
ẽ_1^ and v̂_^ only by factors of ( f)_1^2
and (f^-1 n)_^, respectively, and the superdeterminants of these are just ±1. A similar line of argument establishes that
_- is proportional to _+, and these are also proportional to the full operators _±. This recovers the result of
<cit.>.
For G^ℂ, we first observe from (<ref>) that
^ = R^ f_^ v̂_^ .
Therefore, the existence of a dilaton solution requires the unimodularity condition for the R-matrix, R^ f_^ = 0. Provided this holds, we recover the same conditions, and an identical line of reasoning leads to (<ref>) for the corresponding operators _±. This again is in full agreement with <cit.>.
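Checking unimodularity for a candidate R-matrix is purely mechanical. As an illustration only, the sketch below (Python with numpy) evaluates the combination R^{AB} f_{AB}^C on the bosonic su(2) toy data used earlier and finds it nonzero, showing that a solution of the mCYBE need not be unimodular; the case actually at stake in the text is the graded psu(2,2|4) analogue of this computation.

import numpy as np

# Mechanical evaluation of the unimodularity combination R^{AB} f_{AB}^C on
# toy su(2) data (bosonic, ungraded).  The rotation R-matrix solves the
# mCYBE but is not unimodular; normalizations here are illustrative.
f = np.zeros((3, 3, 3))                           # f_{AB}^C for [T_A, T_B] = eps_{ABC} T_C
for A, B, C in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[A, B, C], f[B, A, C] = 1.0, -1.0

kappa_inv = -np.eye(3)                            # inverse invariant metric, toy normalization
R_lower = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])   # R_A^B with R(T_1) = T_2, R(T_2) = -T_1
R_upper = kappa_inv @ R_lower                     # R^{AB} = kappa^{AC} R_C^B (antisymmetric)

print("R^{AB} f_{AB}^C =", np.einsum('ab,abc->c', R_upper, f))    # nonzero => not unimodular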
§ DISCUSSION
In this paper we have discussed how to employ superspace double field theory, involving a generalized supervielbein, an element of OSp(D,D|2s), to describe generalized dualities. We confirmed our initial expectation that all algebraic structures relevant for dualities of the bosonic string carry over to generalized supergeometry naturally. When the generalized flux tensor is constant, the space is generalized parallelizable (or a generalized coset thereof), and one can construct the generalized supervielbein explicitly in terms of the group theoretic data.
A considerable advantage is that the generalized supervielbein unifies all fields of type II supergravity, except for the dilaton, in one object. To appreciate this fact, recall the salient features of established generalized geometries for type II strings:
* In O(D,D) generalized geometry, the metric and B-field are unified by the generalized frame, while the Ramond-Ramond sector can be captured either with an O(D,D) Majorana-Weyl spinor <cit.>
or an O(D-1,1) ×O(1,D-1) bispinor <cit.>
(see <cit.> for the relation between them).
The Ramond-Ramond sector and the generalized frame are a priori
independent objects, related only by the field equations.
* Exceptional generalized geometry improves the situation by incorporating the Ramond-Ramond sector into the generalized frame. However, this requires the transition from a T-duality covariant description to a U-duality covariant one. Consequently, strings are no longer the fundamental objects. They are replaced by membranes, which come with their own challenges. When the full ten-dimensional spacetime needs a unified treatment, like for the η and λ-deformations of the AdS_5×S^5 superstring, one has to deal with the infinite-dimensional duality group E_11(11) <cit.>, which is not completely understood yet
(see <cit.> for recent progress).
Additionally, neither approach directly incorporates fermionic dualities. All these problems are resolved by generalized supergeometry making it the ideal framework to analyze integrable deformations of superstrings. Therefore, one main focus of our efforts was to explain the η and λ deformations within superspace double field theory. While their σ-model actions are fairly complicated, their explanation within super-DFT is rather straightforward, in terms of the double Lie groups G × G and G^ℂ, with a single parameter (η and λ, respectively) describing how the supergravity frame is embedded in the doubled space.
A major novelty compared to the purely bosonic approach is the necessity of additional torsion constraints, which restrict the generalized fluxes beyond their Bianchi identities. They fix the form of their dimension -1/2 and dimension 0 components as in Table <ref>: these imply similar constraints in generalized type II supergravity <cit.>. From the worldsheet perspective, these are required for the underlying Green-Schwarz superstring to possess κ-symmetry. Consequently, the target space supergeometry satisfies the field equations of generalized supergravity <cit.>. Moreover, they put the theory on-shell; otherwise, supersymmetry transformations would not close into an algebra.
As one can see from Table <ref>, these flux constraints are not covariant under OSp(D,D|2s) transformations. Rather, they break the duality group to the local symmetry group H_L ×H_R, which plays the same role as the double Lorentz group in bosonic DFT. In the latter, the generalized metric is responsible for the breaking. Due to the absence of a generalized supermetric in the supersymmetric extension, the flux constraints take over this function, too. This is analogous to the situation in conventional supergravity: there the torsion constraints are essential and there is no Riemannian supermetric.
There are several additional avenues one could explore at this point. One issue we avoided discussing was the σ-model interpretation of generalized dualities. These are described in terms of the -model <cit.>
and its dressing coset extension <cit.>. These models can be straightforwardly built for supergroups, but a subtlety involves finding the right constraints to ensure that the σ-model is of Green-Schwarz form. This would undoubtedly be related to a duality-symmetric formulation of the GS superstring using the language of super-DFT <cit.>.
Another avenue to explore is the potential connection with integrability. The η and λ deformations were initially constructed as integrable deformations of the AdS_5 ×S^5 superstring, and a key role is played by the ℤ_4 grading in the supergroup. It is already known that there are connections between the structure of -models and integrability <cit.>. It would be interesting to explore the connection for the case of super -models.
Generalized dualities have proven to be useful solution generating techniques. Examples include non-abelian T-duals of backgrounds like AdS_5 ×S^5 and AdS_3 ×S^3 ×T^4 <cit.> which are relevant for the AdS/CFT correspondence. In this context, an important question is how much supersymmetry of the original background is preserved by T-duality. In our framework the amount of supersymmetry is fixed by the number of fermionic generalized Killing vectors. Therefore, one should study how they transform under duality transformations. Perhaps one could construct a systematic treatment within super-DFT. One could then revisit known examples and try to exhaust all possible dualities to find new solutions.
Finally, we should add that very significant work on U-duality extensions of Poisson-Lie T-duality and its generalizations has appeared recently <cit.>. These would undoubtedly have natural descriptions in supersymmetric extensions of U-dual formulations, of the type explored e.g. in <cit.>.
We would like to thank Riccardo Borsato, Sybille Driesen, Gabriel Larios, Grégoire Josse, Edvard Musaev, Yuho Sakatani, and Linus Wulff for helpful discussions. FH wants to thank the organizers of the workshop “Supergravity, Strings and Branes” at Bogazici University, Turkey for giving him the opportunity to present this work. The work of FH is supported by the SONATA BIS grant 2021/42/E/ST2/00304 from the National Science Centre (NCN), Poland. CNP is supported in part by DOE grant DE-FG02-13ER42020.
§ SUPERGROUP CONVENTIONS
§.§ Lie superalgebras and supergroups
We summarize here our conventions for supergroups and superalgebras. A Lie superalgebra 𝔤 is spanned by elements ξ = ξ^A t_A obeying
[ξ_1, ξ_2] = ξ_1^B ξ_2^C f_CB^A t_A = - [ξ_2, ξ_1] .
The elements ξ^A = (ξ^a, ξ^α) are graded, with ξ^a bosonic (commuting) and ξ^α fermionic (anticommuting), so that the structure constants are graded antisymmetric,
f_AB^C = - f_BA^C (-)^ab
and are themselves commuting quantities, so that precisely zero or two of A,B, and C may be fermionic.
When 𝔤 admits a Killing supermetric κ_AB, we introduce the pairing
ξ_1ξ_2 = ξ_2ξ_1 = ξ_1^A ξ_2^B κ_BA , κ_AB = κ_BA (-)^ab
and use κ to raise and lower indices using NW-SE conventions, so that
ξ_A = ξ^B κ_BA , ξ^A = κ^A Bξ_B , κ^ABκ_B C = δ_C^A (-)^c a .
The structure constants with three lowered indices,
f_ABC = f_AB^D κ_DC, are totally (graded) antisymmetric.
Both the algebra and the pairing can be expressed purely in terms of the generators t_A, but the explicit form depends on whether the generators t_A are treated as commuting quantities, ξ^α t_α = t_αξ^α, or as formal graded objects themselves, ξ^α t_α = - t_αξ^α. The first situation applies when the superalgebra 𝔤 is embedded in a supermatrix algebra 𝔤𝔩(m|n); in this case, the generators themselves are matrices of (commuting) complex numbers, and (<ref>) and
(<ref>) imply
[t_A, t_B] := t_A t_B - t_B t_A (-)^ab = - f_AB^C t_C (-)^ab , t_At_B = κ_AB (-)^ab .
The second situation, where the t_A are themselves graded, leads to the more conventional expressions
[t_A, t_B] := t_A t_B - t_B t_A (-)^ab = - f_AB^C t_C , t_At_B = κ_AB
where gradings arise primarily because of index ordering and the direction of contraction. We will employ the latter conventions when explicit indices are exhibited. The sign convention for f_AB^C is a bit unconventional; this is to ensure the torsion tensors for supergroup manifolds have a plus sign, i.e. T_AB^C = + f_AB^C.
§.§ The orthosymplectic group OSp(D,D|2s)
An element of OSp(D,D|2s) is described by a graded supermatrix
_^∈GL(2D|2s) satisfying the condition
(^-1)_^ = η^_^η_ (-)^
for a graded symmetric matrix η_ with graded inverse η^,
η^η_ = -δ_^ (-)^ .
It can be naturally described in terms of its GL(D|s) subgroup where a generalized
vector V_ decomposes as a one-form and vector V_ = (V_M, V^M). In this basis,
η is given by
η^ =
[ 0 δ^M_N; δ_M^N (-)^MN 0 ] , η_ =
[ 0 δ_M^N; δ^M_N (-)^MN 0 ] .
Because of the grading present in η, it matters whether an index is raised or lowered. We conventionally identify elements of a matrix _^ as if they were elements of _, i.e.
_ =
[ U_MN U_M^N; U^M_N U^MN ] _^ =
[ U_M^N U_MN (-)^N; U^M N U^M_N (-)^N ] .
This ensures that multiple contractions (_1)_^ (_2)_^ follow the usual GL(D|s) grading conventions, i.e. NW-SE contractions ^M _M are natural while SW-NE contractions _M^M are accompanied by a grading (-)^M. It also gives a natural expression for the inverse,
(^-1)_^ =
(-)^NM[ U^N_M U_NM (-)^N; U^NM U_N^M (-)^N ] .
§ DEMOCRATIC TYPE II SUPERGRAVITY CONVENTIONS
We summarize here our conventions for democratic type II supergravity and how they arise from DFT. Conventions for 10D gamma matrices and spinors can be found in <cit.>. Such a “democratic” approach to type II was inspired by Wulff; see the appendices of <cit.>.
The supervielbein emerging from DFT consists of two copies of the vielbein super one-form E_M^ and E_M^, as well as two gravitino super one-forms, E_M^α and E_M^. The two vielbeins are related by a Lorentz transformation that determines the duality frame relative to IIB. That is, Λ_^ is an element of O^(α, β)(1,9), where α_Λ=-1 or β_Λ=-1 if Λ involves a temporal or spatial orientation reversal, and +1 otherwise, see Table <ref>.
We may think of Λ_^ as a similarity transformation to convert barred vector indices to unbarred ones. In order to convert barred spinors to unbarred ones, we introduce the spinorial matrix Λ = (Λ_^), which obeys
Λγ̅^Λ^-1
= γ_* γ^Λ_^ ,
Λγ̅_* Λ^-1 = α_Λβ_Λγ_* ,
ΛC̅^-1Λ^T = α_Λ C^-1 .
The last condition implies that
Λ^-1 = α_ΛC̅^-1Λ^T C.
The left Lorentz group is conventionally chosen to be the supergravity Lorentz group.
This identifies the supergravity vielbein as E_M^. The barred gravitino and dilatino must be converted to the left Lorentz group with Λ. To do this, we rewrite
gravitini one-forms as 32-component Majorana spinors, with raised indices, E_M^i for i=1,2. The dilatini have lower indices, χ_i:
E_M^1 =
[ E_M^α 0 ] ,
E_M^2 =
[ E_M^ 0 ]
(Λ^-1)_^ ,
χ_1 =
[ χ_α; 0 ] ,
χ_2 =
Λ_^
[ χ_; 0 ] ,
The supercharges Q_i obey formulae analogous to those of the dilatini and satisfy the SUSY algebra
{Q_1 , Q_1 } = i (P_L γ^a C^-1)_ P_a , {Q_2 , Q_2 } = i/2 α_Λ (P̃_Lγ^a C^-1)_ P_a
where we use the chiral projector P_L = 1/2 (1+γ_*). The second SUSY involves a projector P̃_L = 1/2 (1+α_Λβ_Λγ_*), which is
P_L for IIB/IIB^* and P_R for IIA/IIA^*.
For type IIB/IIB^* duality frames, α_Λ = β_Λ, and
Λ_^ =
[ Λ_α^ 0; 0 Λ^α_ ] ,
(Λ^-1)_^ =
[ (Λ^-1)_^α 0; 0 (Λ^-1)^_α ] =
α_Λ[ Λ^α_ 0; 0 Λ_α^ ] ,
Λ_α^Λ_β^ (γ^)_
= α_Λ (γ^)_αβΛ_^ ,
Λ^α_Λ^β_ (γ^)^
= -α_Λ (γ^)^αβΛ_^ .
For type IIA/IIA^* duality frames, α_Λ = -β_Λ, and
Λ_^ =
[ 0 Λ_α; Λ^α 0 ] ,
(Λ^-1)_^ =
[ 0 (Λ^-1)^α; (Λ^-1)_α 0 ] =
α_Λ[ 0 Λ^α; Λ_α 0 ] ,
Λ^αΛ^β (γ^)_
= -α_Λ (γ^)^αβΛ_^ ,
Λ_αΛ_β (γ^)^
= α_Λ (γ^)_αβΛ_^ .
Democratic type II superspace is described by a supervielbein
E_M^A = (E_M^a, E_M^i), a Kalb-Ramond super two-form B_MN, a scalar dilaton e^-2 φ, and a set of Ramond-Ramond super (p-1)-forms _M_1 ⋯ M_p-1 with p even for IIA/IIA^* and p odd for IIB/IIB^*. The supervielbein is subject to local SO^+(9,1) Lorentz transformations, gauged by a spin connection Ω_M A^B ∈𝔰𝔬(9,1). The Kalb-Ramond two-form and Ramond-Ramond p-forms transform as
δ B = dξ̃ ,
δ_p-1 = dλ_p-2 + λ_p-4∧ H .
The torsion tensors T^A and field strengths H and _p are given by
T^A = dE^A + E^B ∧Ω_B^A
= 1/2 E^B E^C T_CB^A ,
H = dB
= 1/3! E^A E^B E^C H_CBA ,
_p = _p-1 + _p-3 ∧H
=1/p! E^A_1 ⋯E^A_p _A_p ⋯A_1 .
The complex of p-form field strengths is encoded in the supercovariant Ramond-Ramond bispinor
S^1 2 =
[ S^α 0; 0 0 ] (Λ^-1)_^
= e^φ/32i∑_p
1/p!_a_1 ⋯ a_p
(C P_R γ^a_1 ⋯ a_p)^ IIB/IIB^* (p odd)
∑_p
1/p!_a_1 ⋯ a_p
(C P_R γ^a_1 ⋯ a_p)^ IIA/IIA^* (p even) .
This S differs from <cit.> by a factor of -16i. An
extra factor of two comes from employing the democratic formulation with both field strengths and their duals.
Employing 32-component Majorana spinors can be inconvenient when exhibiting the various torsion tensors. This was addressed in <cit.> by introducing tilde spinors for the second copy of the gravitini and dilatini
E_M^2 =
[ E_M^; 0 ] ,
χ_2 =
[ χ_; 0 ] ,
so that 16-component Majorana-Weyl notation can be used throughout. Effectively, tilde spinors are just barred spinors of DFT, reinterpreted as having either the same or the opposite chirality as the
unbarred spinors, depending on the duality frame, i.e.
E_M^ is E_M^2αδ_α^ or
E_M^2_αδ^α. We do not employ tilde spinors in the main body of this paper, but they are convenient for describing the superspace curvatures without sprinkling chiral projectors everywhere. First, we introduce tilde γ matrices as
(γ̃^c)_ =
+ (γ^c)_αβ IIB
- (γ^c)_αβ IIB^*
- (γ^c)^αβ IIA
+ (γ^c)^αβ IIA^* ,
(γ̃^c)^ =
+ (γ^c)^αβ IIB
- (γ^c)^αβ IIB^*
- (γ^c)_αβ IIA
+ (γ^c)_αβ IIA^* .
In terms of these, the non-vanishing torsion tensors are given through dimension 1 by
T_αβ^c = -i (γ^c)_αβ , T_^c = -i (γ^c)_ ,
T_γβ^α =
2 χ_(γ δ_β)^α- (γ_a)_γβ (γ^a χ)^α ,
T_^ =
2 χ_( δ_)^- (γ_a)_ (γ^a χ)^ ,
T_γb^α = - 1/8 H_bcd (γ^c d)_γ^α , T_b^ = 1/8 H_bcd (γ^c d)_^ ,
T_b^α = -2i S^α (γ_b)_ , T_γb^ = 2i S^β (γ_b)_βγ .
The dilatini χ_α and χ_ are given by the spinor derivatives of the dilaton
D_αφ = χ_α ,
D_φ = χ_ .
The non-vanishing components of the Kalb-Ramond field strength are
H_γβ a
= -i (γ_a)_γβ,
H_ a
= +i (γ_a)_ , H_abc .
The supercovariant Ramond-Ramond bispinor can be written as
S^α = e^φ/32i×∑_p
1/p!_a_1 ⋯ a_p (γ^a_1 ⋯ a_p)^αβ δ_β^ IIB/IIB^* (p odd)
∑_p
1/p!_a_1 ⋯ a_p (γ^a_1 ⋯ a_p)^α_β δ^β IIA/IIA^* (p even) .
§ GAUGED SUPERSPACE SIGMA-MODELS
In this appendix, we provide a concise extension of the work of Hull and Spence <cit.> to superspace (see also <cit.> and <cit.>). In large part, this is merely a relabeling of indices and the addition of a grading, but we include it here for the reader's convenience.
§.§ Target space supergeometry
A superspace σ-model comes equipped with a graded symmetric rank-two tensor G_MN and a super two-form B_MN. We presume there exist certain superisometries
k_1 = k_1^M _M,
which leave G_MN and H = dB invariant. The latter condition means that
_1 H = 0
k_1⌟ H = v_1
for some one-form v_1 = Z^M v_M 1.
We use the convenient shorthand _1≡_k_1.
The Killing supervectors k_1 obey the algebra
[k_1, k_2] = f_1 2^3 k_3.
The following conditions hold:
k_1 ⌟k_2 ⌟H = k_1 ⌟v_2
k_1 ⌟v_2 =
-k_2 ⌟v_1 (-1)^1 2 ,
_1 k_2 ⌟H = f_1 2^3 k_3 ⌟H
_1 v_2 = f_1 2^3 v_3 ,
(
k_1 ⌟k_2 ⌟H
) = f_1 2^3 v_3
k_1 ⌟k_2 ⌟H
= - Λ_1 2 + f_1 2^3 v_3 ,
(
k_1 ⌟k_2 ⌟k_3 ⌟H
) = -3 f_[1 2|^4 Λ_4 |3]
k_1 ⌟k_2 ⌟k_3 ⌟H = - c_1 2 3
- 3 f_[12|^4 Λ_4 |3]
where we introduce a locally defined,
(graded) antisymmetric scalar function Λ_1 2
and the (graded) antisymmetric constant c_1 2 3.
As a consequence of the above equations, one can show
_1 v_2 - f_1 2^3 v_3
= (k_1⌟ v_2 - Λ_1 2)
The closed one-form w introduced in <cit.> corresponds to
(k_1⌟ v_2 - Λ_1 2) here.
There is some gauge redundancy in these quantities:
δ v_1 = ρ_1 ,
δΛ_1 2 = f_1 2^3ρ_3 + c_1 2 ,
δ c_1 2 3 = -3 f_[1 2|^4 c_4 |3] ,
where c_1 2 is an antisymmetric constant and ρ_1(Z) is a scalar function of the target space coordinates, with a residual “gauge-for-gauge symmetry” of
δρ_1 = c_1 and
δ c_1 2 = -f_1 2^3 c_3.
One can also define the isometry on the B field directly,
_1 B = (v_1 + k_1⌟ B) = -ω_1.
Then in the context of generalized geometry, one can speak of the generalized vector
ξ_1 = k_1 + ω_1
with Dorfman bracket
[ξ_1, ξ_2]_D = f_1 2^3ξ_3 +
(Λ_1 2 - k_1⌟ v_2 )
obeying a generalization of the non-abelian algebra of the k_1. The additional
one-form term above is a trivial transformation from the perspective of double field theory.
§.§ Gauged sigma-model
The ungauged σ-model is given as the sum of a kinetic and Wess-Zumino term,
= - 1/2 Z^M ∧⋆ Z^N G_NM
- 1/2 Z^M ∧ Z^N B_NM .
It possesses a global symmetry δ Z^M = λ^1 k_1^M.
To gauge it, we introduce a worldsheet one-form A^1 that transforms as
δ_λ A^1 = -λ^1 - A^2λ^3 f_3 2^1
so that D Z^M := Z^M + A^1 k_1^M transforms as
δ D Z^M = λ^1 D Z^N _N k_1^M.
The kinetic term is then invariant by simply replacing Z^M → D Z^M.
The Wess-Zumino term is more involved. Let us simply give the answer:
_ WZ =
- B
- A^1∧ v_1
- 1/2 A^1∧ A^2Λ_2 1
+ F^1χ_1 .
Pullbacks to the worldsheet are implicitly assumed in the above equations. In the final term, we have used the field strength
F^1 = A^1 - 1/2 A^2∧ A^3 f_3 2^1 , δ F^1 = -F^2λ^3 f_3 2^1
and included a Lagrange multiplier χ_1 whose equation of motion
enforces that A^1 is pure gauge. Strictly speaking the gauged σ-model
lacks the Lagrange multiplier term, but we will include it since we are interested
in performing a duality transformation.
In order for the Wess-Zumino term to be invariant, we must impose two conditions.
First, the Lagrange multiplier field χ_1 must transform as
δχ_1 =
λ^2 f_2 1^3χ_3
+ λ^2 (Λ_2 1 - k_2⌟ v_1)
With this condition, the Wess-Zumino term varies (up to a total derivative) into
δ_ WZ = -1/2 A^1∧ A^2 λ^3 c_3 2 1
and so invariance actually requires this constant to vanish,
c_3 2 1=0.
This is a crucial consistency condition for the ability to gauge the action.
The Lagrange multiplier field must transform under the residual symmetry (<ref>) as δχ_1 = - ρ_1(Z).
The residual constant c_1 2 shift is no longer a symmetry of the action: it instead leads to different gauged actions whose Λ_1 2 factors differ by such a constant. Because c_1 2 3 vanishes, such shifts must obey the cocycle condition
f_[1 2|^4 c_4 |3] = 0.
In a standard gauging, the Lagrange multiplier is
absent and so one must be able to consistently fix χ_1 = 0, leading to
k_2⌟ v_1 = Λ_2 1
k_(2⌟ v_1) = 0 .
This is a key condition discussed in <cit.>. It turns out that a consequence of
(<ref>) is that c_1 2 3 vanishes, so we can consider the former condition as fundamental.
Once the condition (<ref>)
is imposed, the residual symmetry parameter ρ_1 in (<ref>) is restricted to obey _1ρ_2 = f_1 2^3ρ_3.
In principle, the duality can proceed directly by integrating out the gauge fields A^1.
The resulting action admits the local λ^1 gauge symmetry, implying that dim G coordinates are unphysical and can be eliminated by a gauge-fixing. A simpler procedure is to go to adapted coordinates.
§.§ Adapted coordinates
If the isometries act freely, one can select out dim G coordinates so that
k_1 = k_1^1_1. In these adapted coordinates
Z^M = (Z^, Y^1) where Z^ are spectator coordinates.
We do not address the non-free case, but one can follow a very similar line of reasoning.
Let g(Y) be a group element for the group G we are gauging. The left and right-invariant one-forms are
e^1 t_1 = g^-1 g and
k^1 t_1 = g g^-1
with the generators obeying (<ref>).
The Killing vectors k_1 obey
k_1⌟ k^2 = δ_1^2
and
k_1⌟ e^2 = ( g^-1)_1^2.
We define
Ã^1 := D Z^1 e_1^1
= Z^1 e_1^1
+ A^2 k_2^1 e_1^1
= (k^2 + A^2) ( g^-1)_2^1 .
This is a gauge-invariant one-form, δ_λÃ^1 = 0.
The kinetic term can be written
_ kin = -(
Z^∧⋆ Z^ G_
+ 2 Ã^1∧⋆ Z^ G_1
+ Ã^1∧⋆Ã^2 G_2 1)
where we have flattened the 1 indices on the metric with e_1^1.
Every piece above is separately gauge invariant.
For the metric, the invariance condition reduces
to independence of Y^1.
The Wess-Zumino term is more involved. First, trade A^1 for Ã^1. The result is structurally identical and reads
_ WZ =
-B̃
- Ã^1∧ṽ_1
- 1/2Ã^1∧Ã^2Λ̃_2 1
+ F̃^1χ̃_1 ,
where the tilded quantities are defined as
B̃ = B - k^1 ∧v_1
+ 1/2 k^1 ∧k^2 Λ_2 1 ,
ṽ_1 = (g)_1^2 (v_2 - k^3 Λ_3 2) ,
Λ̃_2 1 =
(g)_2^2' (g)_1^1' Λ_2' 1'
(-)^2' (1 + 1') ,
χ̃_1 = (g)_1^2 χ_2 ,
F̃^1 = Ã^1
- 1/2 Ã^2 ∧Ã^3 f_3 2^1
= F^2 (g^-1)_2^1 .
The tilded quantities ṽ_1 and Λ̃_2 1 obey
the useful relations:
ṽ_1 =
e_1⌟ H
+ e^2∧ (e_2⌟ e_1⌟ H)
- 1/2 e^2∧ e^3
(e_3⌟ e_2⌟ e_1⌟ H) ,
Λ̃_2 1 - f_2 1^3ṽ_3 = - e_2⌟ e_1⌟ H
+ e^3 (e_3⌟ e_2⌟ e_1⌟ H) .
The right-hand sides of both these expressions are annihilated by e_1⌟, so they are independent of Y^1.
The field strength F̃^1 can be expanded out to rewrite the Wess-Zumino term as
_ WZ =
- B̃
- Ã^1∧(ṽ_1 + χ̃_1)
- 1/2Ã^1∧Ã^2(
Λ̃_2 1 + f_2 1^3χ̃_3)
In this form, it's very easy to show gauge invariance of the second and third terms using
δ_λχ̃_1 = -λ^2 k_2⌟ṽ_1 ,
δ_λṽ_1 = ( λ^2 k_2⌟ṽ_1) ,
δ_λΛ̃_2 1
= f_2 1^4λ^3 k_3⌟ṽ_4 .
To understand the meaning of B̃, it helps to rewrite it as
B̃ = B - e^1∧ṽ_1
- 1/2 e^1∧ e^2Λ̃_2 1 .
In this form, we can show that
H̃ = H
- e^1∧ e_1⌟ H
- 1/2 e^1∧ e^2∧ (e_2⌟ e_1⌟ H)
+ 1/6 e^1∧ e^2∧ e^3 (e_3⌟ e_2⌟ e_1⌟ H ) = 1/3! Z^∧ Z^∧ Z^ H_
This means that H̃ is independent of both Y^1 and Y^1.
Up to a B-field transformation, the same condition can be imposed on B̃,
at least locally.
This means that we can expand
B = 1/2 Z^∧ Z^ B_
+ e^1∧ Z^ B_1
+ 1/2 e^1∧ e^2 B_2 1 ,
ṽ_1 = Z^ṽ_1 + e^2ṽ_2 1 ,
B_1 = ṽ_1 ,
B_2 1 = 2 ṽ_2 1 + Λ̃_2 1 .
If we want the λ gauge symmetry to be completely eliminated at this stage,
so that e.g. χ̃_1 is invariant, we should choose
ṽ_2 1= 0, which is the consistency condition
(<ref>) discussed earlier. A consequence of this condition is that
k_1⌟ B = -v_1 , _1 B = 0 .
This means that the Wess-Zumino term can finally be written as
_ WZ = - 1/2 D Z^M ∧ D Z^N B_NM
+ F̃^1χ̃_1 = -1/2 Z^∧ Z^ B_
- Ã^1∧ Z^ B_1
- 1/2Ã^1∧Ã^2 B_2 1
+ F̃^1χ̃_1
in terms of the original B-field. The components of B in the second line above are each independent of the coordinate Y^1. Relabeling χ̃_1 as ν_1,
we recover the recipe for gauging reviewed in section (<ref>).
§ FLUX TENSORS FOR ETA AND LAMBDA DEFORMATIONS
We summarize the structure constants F_ relevant for the η and λ deformations below, organised by dimension. Both cases can be given in terms of coefficients c_1 and c_2 (which are proportional to a_i or b_i), as well as a function Γ.
Dimension 0
F_αβ = √(2) f_αβc
F_ = -√(2) f_αβc
F_1 α^β = f_1 α^β
F_1 ^ = f_1 ^
F_1 = f_1 a b
F_1 = -f_1 a b
F_1 2^3 = f_1 2^3
Dimension 1
F_α^ = 1/√(2) c_1 c_2 f_αb^
F_^γ = 1/√(2) c_1 c_2 f_b^γ
F_α^1 = c_1 c_2 f_α^1
Dimension 2
F_^1 = 1/2 (c_1)^4 Γ f_a b^1
F_^1 = 1/2 (c_2)^4 Γ f_a b^1
F_^1 = 1/2 (c_1 c_2)^2 f_a b^1
F_α^β1 = 1/2 (c_1)^4 Γ f_α^β1
F_^1 = -1/2 (c_2)^4 Γ f_^1
F_^βγ = 1/√(2) (c_1)^4 Γ f_a^βγ
F_^βγ = 1/2√(2) (c_1 c_2)^2 f_a^βγ
F_^ = -1/√(2) (c_2)^4 Γ f_a^
F_^ = -1/2√(2) (c_1 c_2)^2 f_a^
Dimension 3
F^α1 = -1/4 (c_1 c_2)^3 f^α1
Dimension 4
F^1 2 3 = - 1/4 (c_1 c_2)^4 (1-Γ^2) f^1 2 3
The coefficients c_i appear in the fluxes only quadratically. In terms of the generators T_ given in sections <ref> and <ref>, the coefficients become
c_i c_j = a_i a_j (1+η^2) = b_i b_j λ^-1
Specifying the coefficients quadratically circumvents introducing a square root. One can check that the two expressions for c_i c_j go into each other under the analytic continuation
(<ref>). The function Γ is given in the two cases by
Γ = 1-6η^2 + η^4/(1+η^2)^2
= 1+λ^4/2λ^2 .
For the η deformation, |Γ| ≤ 1 for all values of η and vanishes at
|η| = √(2)± 1. For the λ deformation, Γ≥1 and saturates the lower bound at λ=1.
The highest dimension structure constant F^1 2 3 involves
1-Γ^2 =
(
4 η (1-η^2)/(1+η^2)^2)^2
= - (
1-λ^4/2λ^2)^2 .
For reference, we also give some of the relations above in terms of ϰ = 2η/1-η^2:
-i ϰ = 1-λ^2/1+λ^2 , Γ = 1-ϰ^2/1+ϰ^2 ,
1 - Γ^2 = 4 ϰ^2/(1+ϰ^2)^2 .
Upon truncation to the bosonic sector, it is ϰ and λ^2 that play the role of the parameters for the conventional η and λ deformations for a group G.
After the redefinitions to go the supergravity frame, the derivatives D_ in
(<ref>) and (<ref>) have flux tensors with _ formally given by the F_ above, but with the replacements
c_i c_j =
â_i â_j ×(1+η^2)(1-η^2) η-deformation
b̂_i b̂_j ×λ^-1 λ-deformation ,
where â_i and b̂_i denote the phases of those quantities. In section <ref>, we chose â_1 = â_2 = 1.
|
http://arxiv.org/abs/2307.05927v1 | 20230712054802 | Measuring photometric redshifts for high-redshift radio source surveys | [
"Kieran J. Luken",
"Ray P. Norris",
"X. Rosalind Wang",
"Laurence A. F. Park",
"Ying Guo",
"Miroslav D. Filipovic"
] | astro-ph.IM | [
"astro-ph.IM",
"astro-ph.CO"
] |
With the advent of deep, all-sky radio surveys, the need for ancillary data to make the most of the new, high-quality radio data from surveys like the EMU, GLEAM-X, VLASS and LOTSS is growing rapidly. Radio surveys produce significant numbers of AGN, and have a significantly higher average redshift when compared with optical and infrared all-sky surveys. Thus, traditional methods of estimating redshift are challenged, with spectroscopic surveys not reaching the redshift depth of radio surveys, and AGN making it difficult for template fitting methods to accurately model the source. ML methods have been used, but efforts have typically been directed towards optically selected samples, or samples at significantly lower redshift than expected from upcoming radio surveys. This work compiles and homogenises a radio-selected dataset from both the northern hemisphere (making use of SDSS optical photometry), and southern hemisphere (making use of DES optical photometry). We then test commonly used ML algorithms such as kNN, RF, ANNz and GPz on this monolithic radio-selected sample. We show that kNN has the lowest percentage of catastrophic outliers, providing the best match for the majority of science cases in the EMU survey. We note that the wider redshift range of the combined dataset used allows for estimation of sources up to z = 3 before random scatter begins to dominate. When binning the data into redshift bins and treating the problem as a classification problem, we are able to correctly identify ≈76% of the highest redshift sources — sources at redshift z > 2.51 — as being in either the highest bin (z > 2.51), or second highest (z = 2.25).
§ INTRODUCTION
Radio astronomy is at a cross-roads. With large survey telescopes like ASKAP <cit.>, the MWA <cit.>, LOFAR <cit.>, and upgrades to the VLA <cit.> producing catalogues of up to tens of millions of new radio sources, traditional methods of producing science are struggling to keep up. New methods need to be developed to pick up the shortfall.
One of the most essential pieces of knowledge about an astronomical object is its redshift. From this measurement the object's age and distance can be gleaned, and its redshift used in combination with photometric measurements to estimate a myriad of other features.
Traditionally, redshift has been measured spectroscopically. However, even with modern MOS instrumentation, the tens of millions of radio galaxies expected to be discovered in the coming years will by far outstrip the world's spectroscopic capacity. For example, the 17^th data release of the SDSS <cit.>[<https://www.sdss.org/dr17/scope/>] is currently the largest source of spectroscopic redshifts, with ≈4.8 million redshifts measured – significantly fewer than the tens of millions of sources the EMU <cit.>, GLEAM-X <cit.>, LOTSS <cit.>, and VLASS <cit.> surveys are expected to deliver, even if all redshifts measured were focused exclusively on radio galaxies. Future spectroscopic surveys like WAVES <cit.> are expected to increase the number of spectroscopically known redshifts by another ∼2.5 million sources, but this will still not be enough.
Alternatively, photometric template fitting <cit.> has been highly effective at estimating the redshift of sources for many years, and is able to achieve accuracies approaching those of spectroscopically measured redshifts <cit.>. However, the breadth and depth of measured photometric bands required for this level of accuracy are unavailable for the majority of sources detected by radio surveys like EMU, GLEAM-X, LOTSS, and VLASS. Additionally, radio galaxies in particular suffer in the photometric template fitting regime, partly due to a lack of specialised templates, and partly due to the difficulty of separating the star formation emission from the black hole emission <cit.>.
Finally, as with most problems, ML techniques have been applied to the problem of estimating redshift. These range from simple algorithms like the kNN <cit.>, used in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, and the RF <cit.>, used in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, to more complex algorithms like NN, used in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, and GP, used in <cit.>, <cit.>, and <cit.> via the GPz software. Some studies — for example <cit.> and <cit.> — make use of the images themselves, rather than photometry measured from the images. Typically, though, ML algorithms are not tested in a manner suitable for large-scale radio surveys — they are generally evaluated using data from fields like the COSMOS, where there are many (up to 31) different photometric bands measured for each source, far beyond what is available to all-sky surveys, or on data from the SDSS, where either the Galaxy sample is used, containing millions of galaxies with optical photometry and a spectroscopically measured redshift (but restricted to z ≲ 0.8), or the QSO sample is used, containing quasars out to a significantly higher redshift, at the cost of a lower source count.
As noted by <cit.>, ML-based methods frequently perform better then traditional template fitting methods when the density of observed filters is lacking, or when the sample being estimated contain rarer sub-types like radio or x-ray AGN. The drawback, however, is that ML methods still require a representative sample of these galaxies to be able to model the features well enough to acceptably predict their redshift. One of the biggest issues with any ML algorithm is finding a representative sample to train the model with. For redshift estimation, this generally requires having spectroscopic surveys containing sources to a similar depth as the sources being predicted (or reliably photometrically estimated redshift – see <cit.> for an in-depth investigation).
An example of the expected redshift distribution of the EMU survey, compared with the SDSS Galaxy and QSO samples is presented in Figure <ref>, demonstrating the differences in redshift distributions – one reason why radio samples are typically more difficult to estimate than optically selected samples. Training samples are often not entirely representative of the data being predicted.
Further, <cit.> compares their results with <cit.>, showing that for most populations of galaxies, taking the additional step of training a GMM to split optically selected datasets into more representative samples improves estimates across all measured error metrics. However, <cit.> notes that redshift estimates for optically luminous QSO have lower error estimates when training exclusively on representative data as in <cit.>, compared with using the GMM prior to estimation. Two reasons are postulated for this – the addition of the i and y bands used by <cit.>, and the specific training on the representative sample rather than a generalised approach.
Finally, when ML models have been trained on radio selected samples, they have typically been focused on achieving the best possible accuracy, with model parameters optimised based on the average accuracy. While this approach is entirely appropriate for other use cases, the preferred parameter to optimise in this work is the Outlier Rate — the percentage of sources where the estimated redshift is determined to have catastrophically failed (further details in Section <ref>).
This subtle change optimises the results for key science goals of surveys such as EMU in which the number of catastrophic outliers is more important than the accuracy of each redshift estimate. For example, constraining non-Gaussianity <cit.>, or measuring the evolution of the cosmic star formation rate over cosmic time <cit.> do not require accurate estimates of redshift, but suffer greatly if redshifts are significantly incorrect.
In light of these struggles using optically selected samples to estimate the redshift of radio-selected samples, we create a new radio-selected training sample, taken from the northern hemisphere (selected from FIRST <cit.> and NVSS <cit.>) using SDSS spectroscopy and photometry, and combine it with southern hemisphere data (selected from ATLAS <cit.> and Stripe82 <cit.>), where the ATLAS data contain DES <cit.> photometry, and the Stripe82 field contains both SDSS and DES photometry. All fields contain AllWISE infrared photometry.
With this large radio-selected dataset, we compare four commonly used ML algorithms and software packages – kNN, RF, GPz, and ANNz. Where possible, we compare these methods using both regression and classification modes, as discussed in <cit.>. In order to better cater to the EMU science goals[<http://askap.pbworks.com/w/page/88123540/KeyProjects>], instead of comparing the overall accuracies of each method, we compare the outlier rates – the percentage of sources whose redshift estimates have catastrophically failed.
In this work, we pose the research question: Given the upcoming radio surveys (specifically the EMU survey), which ML algorithm provides the best performance for the estimation of radio galaxy's redshift, where best performance is measured by the outlier rate.
§.§ Overall Contributions of this Study
Overall, our contributions for this study include:
* An in-depth investigation of the DES and SDSS optical photometry and their compatibility, specifically examining the modifications needed to use both surveys for the estimation of redshift using Machine Learning.
* The construction of a representative and homogeneous (where possible) training set, available to be used for the estimation of redshift for radio-selected samples.
* The comparison of multiple widely used ML algorithms, providing a like-for-like comparison on the same dataset.
* The comparison of classification- and regression-based methods where possible.
§.§ Knowledge Gap
* Current Template Fitting methods require better photometric coverage than is typical for all-sky radio surveys, and are based on a set of templates that are not well-matched to those of galaxies that host radio sources.
* Current ML techniques are typically trained and tested on wide, shallow surveys, limited to z < 0.7, or specific, optically selected samples. Where they are not trained on restricted samples, they are typically optimised for best accuracy, rather than minimising the number of catastrophic failures.
* We are looking at a combination of datasets in order to better match the expected density of sources, as well as comparing against current methods used in the literature in order to best prepare for the next generation of radio surveys.
§ DATA
In this section we outline the photometry used and sources of data (Section <ref>), the steps taken to homogenise the northern sky SDSS and southern sky DES optical photometry (Section <ref>), and the process of binning the data in redshift space in order to test classification modes of the different algorithms (Section <ref>).
§.§ Data Description
As noted in Section <ref>, most ML-based techniques are typically focused on optically selected datasets, primarily based around the SDSS datasets, providing high source counts of stars, galaxies, and QSO with photometry (generally) in u, g, r, i, and z bands, with a spectroscopically measured redshift — generally using the SDSS Galaxy or QSO datasets, shown in Figure <ref>. In this work, the data are selected specifically to better represent the data expected from the upcoming EMU Survey <cit.> and EMU-PS <cit.>. Towards this end, we only accept SDSS objects with a counterpart in the NVSS or FIRST radio surveys.
This work compiles three datasets, each containing multiple features for comparison:
* Northern Sky — RGZ and NVSS
* Our Northern Sky dataset contains two radio samples — the NVSS sample, and the RGZ FIRST-based sample (where the RGZ sample has been cross-matched with the AllWISE sample, explained in <cit.> and Wong et al. (in prep.)).
* The NVSS sample was cross-matched with AllWISE at 4 arcsec, providing 564,799 radio sources — approximately 32% of the NVSS sample — with an infrared counterpart. The NVSS/AllWISE cross-match has an estimated 7% — 123,484 sources – misclassification rate, where the misclassification rate is quantified by shifting the declination of all sources in the NVSS catalogue by 1 and re-cross-matching based on the new declination, following the process described in <cit.> (see the sketch at the end of this subsection). Figure <ref> shows the classification/misclassification rates as a function of angular separation, and is used to determine the optimum cross-match radius.
* The NVSS and RGZ samples were then combined, removing duplicates based on the AllWISE unique identifier, providing 613,551 radio sources with AllWISE detections.
* The northern sky radio/infrared catalogue was then cross-matched against the SDSS catalogue (providing both optical photometry and spectroscopic redshifts) based on the infrared source locations — a radio–infrared cross-match tends to be more reliable than a radio–optical cross-match <cit.> — at 4 arcsec, providing a classification/misclassification rate of 9.33%/0.06% (55,716/348 sources) (Figure <ref>).
* Finally, all sources from the Stripe82 Equatorial region were removed (sources with an RA between 9^∘ and 36^∘ or 330^∘ and 350^∘, and DEC between -1.5^∘ and 1.5^∘).
* The final Northern Sky Sample contains 55,452 radio-selected sources with a spectroscopically measured redshift, SDSS g, r, i and z magnitudes measured using Model/PSF/Fibre systems, and AllWISE W1, W2, W3 and W4 infrared magnitudes, shown in Table <ref>.
* Southern Sky — ATLAS
* Beginning with the ATLAS dataset — described in <cit.> — we cross-match the SWIRE infrared positions with AllWISE in order to obtain the same infrared bands as the Northern Sky dataset. Cross-matching at 1 arcsec produces a 100% / 0.34% classification/misclassification rate (1,156 / 4 sources) (Figure <ref>), and a final source count of 1,156 sources, all with g, r, i, and z optical magnitudes in Auto, 2, 3, 4, 5, 6, and 7 arcsec apertures, as well as the W1, W2, W3, and W4 infrared magnitudes.
* Equatorial — Stripe82
* Along the equatorial plane, the Stripe82 field has been extensively studied by both northern- and southern-hemisphere telescopes, providing a field that contains both SDSS and DES photometry. Cross-matching the DES catalogue with the SDSS catalogue (where both catalogues were restricted to the Stripe82 field) at 1 arcsec produces a 98.4% / 3.36% (170,622/5,831 sources) classification/misclassification rate (Figure <ref>).
* The optical catalogues were then cross-matched against the AllWISE catalogue at 1.25 arcsec, producing an 84.09% / 0.60% (129,837/932 sources) classification/misclassification rate (Figure <ref>).
* Finally, cross-matching the AllWISE infrared catalogue against <cit.> (at 4 arcsec; see Figure <ref>) and <cit.> (at 4 arcsec; see Figure <ref>) gives 21.96% / 0.42% (3,946 / 75 sources) and 45.46% / 0.54% (2,180 / 26 sources) classification/misclassification rates respectively.
* After combination of the Stripe82 radio datasets with duplicates removed (based on AllWISE ID), we have a final dataset of 3,030 radio-selected sources with a spectroscopic redshift, W1, W2, W3, and W4 infrared magnitudes, g, r, i, and z optical magnitudes in PSF, Fibre, and Model systems (for SDSS photometry), and Auto and 2–7 arcsec apertures (for DES photometry).
To summarise, all datasets are radio-selected, and contain:
* a spectroscopically measured redshift, taken from either the OzDES, or SDSS;
* g, r, i, and z optical magnitudes, taken from either the DES or SDSS;
* W1, W2, W3, and W4 (3.4, 4.6, 12, 24μm respectively) infrared magnitudes, taken from AllWISE;
with a final redshift distribution shown in Figure <ref>. While we are still not matching the expected distribution from the EMU survey, we are ensuring all sources have a radio counterpart (and hence, will be dominated by the difficult-to-estimate AGN), with the final distribution containing more, higher redshift radio sources than previous works like <cit.>.
The primary difference between the datasets is the source of the optical photometry. Even though both the DECam on the Blanco Telescope at the Cerro Tololo Inter-American Observatory in Chile and the Sloan Foundation 2.5m Telescope at the Apache Point Observatory in New Mexico both use g, r, i, and z filters, the filter responses are slightly different (demonstrated in Figure <ref>, the DES Collaboration notes that there may be up to 10% difference between the SDSS and DES equivalent filters[<https://data.darkenergysurvey.org/aux/releasenotes/DESDMrelease.html>]), with different processing methods producing multiple, significantly different measurements for the same sources. For ML models, a difference of up to 10% is significant, and had significant effects on redshift estimations in early tests without correction (sample results with one ML algorithm shown in <ref>).
For the SDSS, the three measures of magnitude used in this work (PSF, Fibre and Model) are all extensively defined by the SDSS[<https://www.sdss.org/dr12/algorithms/magnitudes>]. Simply put, the PSF magnitude measures the flux within the PSF of the telescope for that pointing, the Fibre magnitude uses a static aperture based on a single fibre within the SDSS spectrograph (generally 3 arcsec), and the Model magnitude fits the source using a variety of models.
The DES pipelines produce statically defined apertures from 2 to 12 arcsec, as well as an Auto magnitude that is fit by a model.
For our purposes in finding DES photometry compatible with SDSS photometry, we only examined the DES Auto and 2–7 arcsec measurements, as the larger-aperture DES measurements begin to differ greatly from any measured SDSS measurement. We find that the DES Auto magnitude is most similar to the SDSS Model magnitude, and hence exclusively use this pairing.
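The nearest-neighbour cross-matching and declination-shift false-match estimate described above can be sketched with astropy as follows. This is a minimal illustration, not the exact pipeline used here: the file names, the ra/dec column names (assumed to be in decimal degrees) and the size of the declination offset are assumptions for the purpose of the example.

```python
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table


def crossmatch(cat_a, cat_b, radius_arcsec=4.0, dec_offset_deg=0.0):
    """Nearest-neighbour cross-match of catalogue A against catalogue B.

    Returns, for every source in cat_a, the index of its nearest neighbour in
    cat_b and a boolean mask selecting matches within the chosen radius.
    Re-running with a non-zero declination offset applied to cat_a gives an
    estimate of the false-match (misclassification) rate.
    """
    coords_a = SkyCoord(ra=np.asarray(cat_a["ra"]) * u.deg,
                        dec=(np.asarray(cat_a["dec"]) + dec_offset_deg) * u.deg)
    coords_b = SkyCoord(ra=np.asarray(cat_b["ra"]) * u.deg,
                        dec=np.asarray(cat_b["dec"]) * u.deg)
    idx, sep2d, _ = coords_a.match_to_catalog_sky(coords_b)
    within_radius = sep2d < radius_arcsec * u.arcsec
    return idx, within_radius


# Illustrative usage: the real match, plus a shifted match to estimate contamination.
nvss = Table.read("nvss.fits")        # assumed columns: ra, dec (degrees)
allwise = Table.read("allwise.fits")  # assumed columns: ra, dec (degrees)
_, real = crossmatch(nvss, allwise, radius_arcsec=4.0)
_, shifted = crossmatch(nvss, allwise, radius_arcsec=4.0, dec_offset_deg=1.0)
print(f"match rate: {real.mean():.3f}, estimated false-match rate: {shifted.mean():.3f}")
```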
§.§ Optical Photometry Homogenisation
The combined dataset discussed above (Section <ref>) contains optical photometry measured using the SDSS (in the Northern and Equatorial fields) and the DES (Southern and Equatorial fields). As shown in Figure <ref>, while the SDSS and DES g, r, i, and z filters are similar, they are not identical, and hence should not be directly compared without modification before use by typical ML algorithms. As the Stripe82 Equatorial field contains observations with both optical surveys, we can fit a third-order polynomial mapping the g - z colour to the difference between the SDSS and DES measured magnitudes in each band for each object, and use the fitted model to homogenise the DES photometry to the SDSS photometry for the Southern hemisphere data. Figure <ref> shows four panels — one for each of the g, r, i, and z magnitudes — with the orange points showing the original difference between optical samples against the g - z colour, blue points showing the corrected difference, the orange line showing the third-order polynomial fitted to the original data, and the blue line showing a third-order polynomial fitted to the corrected data. While this homogenisation doesn't adjust for the scatter in the differences, it does shift the average difference, which drops from 0.158, 0.149, 0.061, and 0.006 to 0.004, 0.001, 0.001, and 0.007 for the g, r, i, and z magnitudes respectively. We explore the difference the corrections make to predicting the redshift of sources with SDSS and DES photometry, using the kNN algorithm trained on the opposite optical survey with corrected and uncorrected DES photometry, in <ref>.
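As a concrete illustration of this homogenisation step, the sketch below fits a third-order polynomial in g - z colour to the SDSS minus DES magnitude difference for the Stripe82 overlap sources, and applies the fitted model to DES-only photometry. The array names are placeholders; only the overall procedure follows the text.

```python
import numpy as np


def fit_des_to_sdss(des_mag, sdss_mag, des_g, des_z, order=3):
    """Fit a polynomial in (g - z) colour to the SDSS - DES magnitude
    difference, using sources observed by both surveys (Stripe82)."""
    colour = des_g - des_z
    diff = sdss_mag - des_mag
    return np.poly1d(np.polyfit(colour, diff, order))


def homogenise(des_mag, des_g, des_z, correction):
    """Shift DES magnitudes onto the SDSS system with the fitted correction."""
    return des_mag + correction(des_g - des_z)


# Example (array names are hypothetical): correct the DES g band for the
# southern-hemisphere (ATLAS) sources using the Stripe82 overlap.
# corr_g = fit_des_to_sdss(s82_des_g, s82_sdss_g, s82_des_g, s82_des_z)
# atlas_g_sdss_like = homogenise(atlas_des_g, atlas_des_g, atlas_des_z, corr_g)
```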
§.§ Regression and Classification
The distribution of spectroscopically measured redshifts is highly non-uniform, providing additional difficulties to what is typically a regression problem (a real value — redshift — being estimated based on the attributes — features — of the astronomical object). As demonstrated in Figure <ref>, it also does not follow the expected distribution of the EMU survey, partly because the optical source counts of the local universe vastly outnumber those of the high-redshift universe, and partly because high-redshift galaxies are too faint for most optical spectroscopy surveys. The non-uniform distribution means high-redshift sources will be under-represented in training samples, and therefore are less likely to be modelled correctly by ML models.
In an attempt to provide a uniform redshift distribution, and thereby better high-z estimates from the ML methods, we quantise the data into 30 redshift bins with equal numbers of sources in each (the bin edges, and the expected value of each bin — typically the median redshift of the bin — are shown in Table <ref>). While binning the data means that it is no longer suitable for regression, it allows us to use the classification modes of the ML methods and to test whether treating redshift estimation as a classification problem, rather than attempting to estimate redshift as a continuous value, aids in the estimation of sources in the high-redshift regime.
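A minimal sketch of the binning scheme: 30 quantile-based bins with approximately equal source counts, each represented by the median redshift of its members. The exact edges in Table <ref> depend on the training sample, so the values produced by code like this are illustrative only.

```python
import numpy as np


def make_redshift_bins(z_spec, n_bins=30):
    """Quantile-based redshift bins with (roughly) equal numbers of sources.

    Returns the bin edges, the integer bin label of every source, and the
    representative (median) redshift of each bin.
    """
    z_spec = np.asarray(z_spec)
    edges = np.quantile(z_spec, np.linspace(0.0, 1.0, n_bins + 1))
    # interior edges only, so labels run from 0 to n_bins - 1
    labels = np.digitize(z_spec, edges[1:-1])
    centres = np.array([np.median(z_spec[labels == k]) for k in range(n_bins)])
    return edges, labels, centres
```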
§ MACHINE LEARNING METHODS
In this section we outline the error metrics we use to compare the results across different ML algorithms (Section <ref>) and the efforts to explain any random variance across our tests (Section <ref>), before discussing the different algorithms used – the kNN algorithm (using the Mahalanobis distance metric; Section <ref>), the RF algorithm (Section <ref>), the ANNz2 algorithm (Section <ref>), and the GPz algorithm (Section <ref>). Finally, we discuss the training methods used in this work (Section <ref>). In this work we provide an initial explanation of each algorithm. However, we direct the reader to their original papers for a full discussion.
§.§ Error Metrics
As stated in Section <ref>, this work differs from typical training methods, which attempt to minimise the average error of the model (defined in Equation <ref> or Equation <ref>). Instead, it is primarily focused on minimising the number of estimates that are incorrect by a catastrophic level — a metric defined as the Outlier Rate:
η_0.15 = 1/N∑_z∈ Z 𝟙[ |Δ z| > 0.15(1 + z_spec) ] ,
where η_0.15 is the catastrophic outlier rate, Z is the set of sources with |Z| = N, 𝟙[x] is the indicator function (1 if the condition x is true, otherwise 0), z_spec is the measured spectroscopic redshift, and Δ z is the residual:
Δ z = z_spec - z_photo,
Alternatively, we provide the 2-σ outlier rate as a more statistically sound comparison:
η_2σ = 1/N∑_z∈ Z 𝟙[ |Δ z| > 2 σ ] ,
where η_2σ is the 2-σ outlier rate, and σ is the residual standard deviation:
σ = √(1/N∑_i=1^N(y_i - ŷ_i)^2),
where σ is the residual standard deviation, y_i is an individual spectroscopic redshift, and ŷ_i is the corresponding estimate for source i. The residual standard deviation gives an indication of the average accuracy of the estimates.
The NMAD gives a similar metric to the Residual Standard Deviation, but is more robust to outliers as it relies on the median, rather than the mean of the residuals:
σ_NMAD = 1.4826 × median(|X_i - median(X)|) ,
where σ_NMAD is the NMAD, X is the set of residuals (whose individual values are calculated in Equation <ref> as Δ z), and X_i is an individual residual.
The MSE is only used in regression-based tests, and provides the average squared error of the estimates:
MSE = 1/N∑_i=1^N (y_i - ŷ_i)^2 ,
where y_i is an individual spectroscopic redshift, and ŷ_i is the corresponding estimated redshift for source i.
The Accuracy is only used in classification-based tests, and provides the percentage of sources predicted in the correct “class”, where the class is a particular redshift bin. This metric is provided for completeness only, as the accuracy only accepts perfect classifications, whereas the aim of this work is to provide redshift estimates that are approximately correct — i.e. we are inherently accepting of classifications in nearby redshift bins, which would be considered incorrect classifications by the Accuracy metric.
Accuracy(y, ŷ) = 1/N∑_i=0^N-1 𝟙[ ŷ_i = y_i ] ,
where y is a vector of spectroscopic redshifts, and ŷ is the corresponding vector of estimated redshifts.
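For reference, the regression metrics above can be implemented in a few lines; the sketch below reflects our reading of the definitions, and the dictionary keys are arbitrary. For the classification mode, the Accuracy is simply the mean of the element-wise comparison between predicted and true bin labels.

```python
import numpy as np


def redshift_metrics(z_spec, z_photo):
    """Outlier rates, residual scatter, NMAD and MSE as defined above."""
    z_spec = np.asarray(z_spec)
    z_photo = np.asarray(z_photo)
    dz = z_spec - z_photo                      # residuals, Delta z
    sigma = np.sqrt(np.mean(dz ** 2))          # residual standard deviation
    return {
        "outlier_rate_0.15": np.mean(np.abs(dz) > 0.15 * (1.0 + z_spec)),
        "outlier_rate_2sigma": np.mean(np.abs(dz) > 2.0 * sigma),
        "sigma": sigma,
        "nmad": 1.4826 * np.median(np.abs(dz - np.median(dz))),
        "mse": np.mean(dz ** 2),
    }
```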
§.§ Statistical Significance
In order to measure the potential random variation within our results, all tests were conducted 100 times, with different random seeds — creating 100 different training/test sets to train and test each algorithm on. All values presented are the average of the results gained, with the associated standard error:
σ_x̅ = σ_x/√(n),
where σ_x̅ is the standard error of x̅, which is calculated from the standard deviation of the 100 repetitions of the experiment using different random seeds (denoted as σ_x), x̅ is the mean classification/regression error, and n is the number of repetitions — 100 in this case.
We note that the classification bin distribution is calculated for each random initialisation — this means that while each of the 100 random training sets will have roughly the same redshift distribution, there will be slight differences in the bin distributions calculated for classification.
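The repetition scheme can be sketched as follows: the same estimator is trained and evaluated over 100 random training/test splits, and the mean outlier rate is reported together with its standard error. The scikit-learn-style estimator interface and the 70%/30% split are assumptions chosen to match the k-fold methods described below.

```python
import numpy as np
from sklearn.model_selection import train_test_split


def repeated_outlier_rate(estimator, X, z, n_repeats=100, test_size=0.3):
    """Mean catastrophic outlier rate and its standard error over repeated
    random training/test splits (one split per random seed)."""
    rates = []
    for seed in range(n_repeats):
        X_train, X_test, z_train, z_test = train_test_split(
            X, z, test_size=test_size, random_state=seed)
        z_photo = estimator.fit(X_train, z_train).predict(X_test)
        rates.append(np.mean(np.abs(z_test - z_photo) > 0.15 * (1.0 + z_test)))
    rates = np.asarray(rates)
    return rates.mean(), rates.std(ddof=1) / np.sqrt(n_repeats)
```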
§.§ kNN
The kNN algorithm is one of the oldest <cit.>, as well as one of the simplest, machine learning algorithms. Using some kind of distance metric — typically the Euclidean distance — the photometry of each source in the test set — sources with “unknown” redshift — is compared with the photometry of every source in the training set to find the `k' (hereafter k_n) sources with the most similar photometry. The mean or mode (depending on whether regression or classification is being performed, respectively) of the redshifts of these most similar training sources is then taken as the redshift of the unknown source. Following <cit.>, who have shown that the Euclidean distance is far from optimal for redshift estimation, here we use the Mahalanobis distance metric <cit.>:
d(p⃗,q⃗) = √((p⃗ - q⃗)^TS^-1(p⃗ - q⃗)),
where d(p⃗,q⃗) is the Mahalanobis distance between two feature vectors p⃗ and q⃗, and S is the covariance matrix.
The value of k_n is optimised using k-fold cross-validation, a process where the training set is split into k (hereafter k_f, assigned a value of 5 for this work) subsets, allowing the parameter being optimised to be trained and tested on the entire training set.
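A sketch of this set-up using scikit-learn: the Mahalanobis metric is supplied through the inverse covariance matrix of the training features, and k_n is chosen by 5-fold cross-validation. The grid of candidate k_n values and the use of the outlier rate as the cross-validation score are illustrative choices rather than the exact configuration used in this work; the classification mode would swap in KNeighborsClassifier.

```python
import numpy as np
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor


def outlier_rate(z_spec, z_photo):
    """Fraction of catastrophic outliers, |dz| > 0.15 (1 + z_spec)."""
    return np.mean(np.abs(z_spec - z_photo) > 0.15 * (1.0 + z_spec))


def train_knn_mahalanobis(X_train, z_train, k_grid=(3, 5, 9, 15, 25, 50)):
    """kNN regression with the Mahalanobis distance; k_n tuned by 5-fold CV."""
    VI = np.linalg.inv(np.cov(X_train, rowvar=False))  # S^-1 in the distance metric
    knn = KNeighborsRegressor(metric="mahalanobis",
                              metric_params={"VI": VI},
                              algorithm="brute")
    search = GridSearchCV(
        knn,
        param_grid={"n_neighbors": list(k_grid)},
        scoring=make_scorer(outlier_rate, greater_is_better=False),
        cv=5,
    )
    search.fit(X_train, z_train)
    return search.best_estimator_
```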
§.§ RF
The RF algorithm is an ensemble ML algorithm, meaning that it combines the results of many other algorithms (in this case DTs) to produce a final estimate. A DT splits the data in a tree-like fashion until the algorithm arrives at a single answer (when the tree is fully grown). These splits are chosen by optimising the impurity at the proposed split using Equation <ref>:
G(Q_m, θ) = n_left/n_m H(Q_left(θ)) + n_right/n_m H(Q_right(θ)) ,
where Q_m is the data at node m, θ is a subset of data, n_m is the number of objects at node m, n_left and n_right are the numbers of objects on the left and right sides of the split, Q_left and Q_right are the objects on the left and right sides of the split, and the H function is an impurity function that differs between classification and regression. For Regression, the Mean Square Error is used (defined in Equation <ref>), whereas Classification often uses the Gini Impurity (defined in Equation <ref>).
H(X_m) = ∑_k ∈ J p_mk (1 - p_mk),
where p_mk is the proportion of split m that are class k from the set of classes J, defined formally in Equation <ref>:
p_mk = 1/n_m∑_y ∈ Q_m 𝟙[ y = k ] ,
where 𝟙[·] is the indicator function identifying the objects belonging to class k.
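As a worked illustration of the impurity measures above, the snippet below evaluates the Gini impurity of a set of class labels and the weighted impurity of a candidate split; in practice an off-the-shelf RF implementation performs these calculations internally.

```python
import numpy as np


def gini_impurity(labels):
    """H(X_m) = sum_k p_mk (1 - p_mk) for the class labels at a node."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p * (1.0 - p)))


def split_impurity(labels_left, labels_right):
    """G(Q_m, theta): impurity of a candidate split, weighted by the number
    of objects falling on either side of it."""
    n_left, n_right = len(labels_left), len(labels_right)
    n_total = n_left + n_right
    return (n_left / n_total) * gini_impurity(labels_left) \
        + (n_right / n_total) * gini_impurity(labels_right)


# Example: a pure split has zero impurity, a mixed one does not.
print(split_impurity([0, 0, 0], [1, 1, 1]))   # 0.0
print(split_impurity([0, 1, 0], [1, 0, 1]))   # ~0.44
```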
§.§ ANNz2
The ANNz2[<https://github.com/IftachSadeh/ANNZ>] software <cit.> is another ensemble method, combining the results of many (in this work we use 100) randomly assigned machine learning models as a weighted average from the pool of NN and boosted decision trees, using settings noted in <cit.>. However, whereas <cit.> uses the ANNz functionality to weight the training set by the test set feature distributions, here we do not use this option for two reasons. First, this work is designed for larger surveys yet to be completed, for which we do not know the distributions, so we are unable to effectively weight the training samples towards future samples. Secondly, when attempted, the final outputs were not significantly different whether the training sets were weighted or not.
§.§ GMM+GPz
The GPz algorithm is based upon GP regression, an ML algorithm that takes a slightly different track from traditional methods. Whereas most algorithms model an output variable from a set of input features using a single, deterministic function, GP use a series of Gaussians to model a probability density function that maps the input features to the output variable. The GP algorithm is extended further in the GPz algorithm to handle missing and noisy input data, through the use of sparse GP and additional basis functions modelling the missing data <cit.>.
Following <cit.>, we first segment the data into separate clusters using a GMM (without providing the redshift to the GMM algorithm), before training a GPz model on each cluster, the idea being that if the training data better reflect the test data, a better redshift estimate can be made. We emphasise that no redshift information has been provided to the GMM algorithm, and that the clusters determined by the algorithm are based solely on the g – z, and W1-W4 optical and infrared photometry — the same photometry used for the estimation of redshift.
The GMM uses the EM algorithm to optimise the centres of the clusters it defines, iteratively adjusting the parameters of the models being learned in order to maximise the likelihood of the data belonging to the assigned clusters. The EM algorithm does not optimise the number of clusters, which must be chosen by balancing multiple competing interests:
* The size of the data — the greater the number of clusters, the more chance the GMM will end up with insufficient source counts in a cluster to adequately train a redshift estimator
* The number of distinct source types within the data — If the number of clusters is too small, there will be too few groupings to adequately split the data up into its latent structure, whereas if it is too high, the GMM will begin splitting coherent clusters
This means that the number of components used by the GMM to model the data is a hyper-parameter to be fine-tuned. Ideally, the number of components chosen should be physically motivated — the number of classes of galaxy we would expect to be within the dataset would be an ideal number, so the ML model is only training on sources of the same type to remove another source of possible error. However, this is not necessarily a good option, as, due to the unsupervised nature of the GMM, we are not providing class labels to the GMM, and hence cannot be sure that the GMM is splitting the data into the clusters we expect. On the other hand, being unsupervised means the GMM is finding its own physically motivated clusters which don't require the additional — often human derived — labels. The lack of labels can be a positive, as human-decided labels may be based less on the actual source properties, and more on a particular science case (see <cit.> for further discussion).
In this work, we optimise the number of components hyper-parameter, where the number of components n_comp is drawn from n_comp∈{1, 2, 3, 5, 10, 15, 20, 25, 30}. We emphasise that the number of components chosen is not related to the number of redshift bins used for classification, and has an entirely separate purpose. The primary metric being optimised is the BIC <cit.>:
BIC = -2log(L̂) + log(N)d,
where log(L̂) is the log likelihood of the data under the Gaussian Mixture Model, defined in Equation <ref>, N is the number of data points, and d is the number of model parameters.
logL̂ = ∑_i log∑_j λ_j 𝒩(y_i|μ_j, σ_j)
where y_i is an individual observation, 𝒩 is the Normal density with parameters σ_j^2 and μ_j (the sample variance and mean of a single Gaussian component), and λ_j is the mixture parameter, drawn from the mixture model.
The BIC thus acts as a penalised likelihood criterion, rewarding a higher likelihood while penalising larger numbers of parameters. The lower the BIC, the better.
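As a concrete illustration of this selection step (not the exact pipeline used here), scikit-learn's GaussianMixture exposes the criterion directly through its bic() method, so the grid of component counts can be scored as follows; the feature array is again a placeholder.

```python
# Sketch: choose the GMM component count by minimising the BIC on the photometry.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
photometry = rng.normal(size=(600, 8))            # placeholder feature array

bic_scores = {}
for n_comp in (1, 2, 3, 5, 10, 15, 20, 25, 30):
    gmm = GaussianMixture(n_components=n_comp, random_state=0).fit(photometry)
    bic_scores[n_comp] = gmm.bic(photometry)      # -2 log(L) + d log(N)

best_n = min(bic_scores, key=bic_scores.get)      # the lowest BIC is preferred
```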
Figure <ref> shows the BIC (Equation <ref>; top panel) with error bars denoting the standard error of each component, the average test size of each component with the error bars denoting the minimum and maximum test set size for each component (middle), and the photometric error in the form of both the outlier rate (Equation <ref>), and the accuracy (Equation <ref>), with error bars denoting the standard error (bottom).
Figure <ref> shows that while n_comp is being optimised for the lowest BIC, this has the additional benefit of lowering the resulting redshift estimation error (Figure <ref>; middle and bottom panels), showing that the clusters identified by the GMM algorithm are meaningful for the subsequent redshift estimation. A value of 30 is chosen for n_comp: although the BIC continues to decline beyond this point, the number of sources in the smaller clusters defined by the GMM becomes too small to adequately train a GPz model.
Once the data are segmented into 30 components (an example from one random seed is shown in Figure <ref>), a GPz[<https://github.com/cschreib/gpzpp>] model is trained for each component. The GPz algorithm is based around sparse GPs, which attempt to model the provided feature space using a limited set of Gaussian basis functions.
§.§ Training Method
ML algorithms are typically set up and trained following one of two procedures:
* Training / Validation / Test Splits
* The data is split into training, testing and validation sets (for this work, the data are split into 50%/20%/30% subsets). The ML algorithm is trained on the training set, with model hyper-parameters optimised for the validation set. Once optimised, the test set is used to estimate the model's generalisability.
* This method is utilised by the ANNz and GMM+GPz algorithms
* k-Fold Cross-Validation
* The dataset is split into two sets (for this work, the data are split into 70%/30% subsets), used as training and test sets. Differing from the first method, this method trains and optimises the ML algorithms on the training set alone, before testing the optimised models on the test set.
* The training set is split into k_f subsets. k_f models are trained on k_f - 1 subsets, and hyperparameters optimised and validated against the remaining subset.
* In this work, we use a value of 5 for k_f, with the k-fold Cross-Validation algorithms used to optimise the hyperparameters of the kNN and RF algorithms.
The externally developed software packages (ANNz and GPz) both operate using training/validation/test split datasets. This is preferable for large, mostly uniform distributions, as it greatly reduces training time. However, for highly non-uniform distributions, the under-represented values are less likely to be involved in all stages of training, validation and testing. Hence, for the kNN and RF algorithms, where we control the training process, we choose the k-fold cross-validation method of training and optimising hyper-parameters, to best allow the under-represented high-redshift sources to be present at all stages of training.
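The two set-ups can be sketched with scikit-learn as below; the data arrays are placeholders and the kNN regressor simply serves as an example model for the cross-validated hyper-parameter search.

```python
# Sketch of the two training set-ups (placeholder data, kNN as the example model).
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV, KFold
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
X, y = rng.normal(size=(2000, 8)), rng.uniform(0, 3, size=2000)

# (1) Train / validation / test split (50% / 20% / 30%).
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.5, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, train_size=0.4, random_state=1)

# (2) 70% / 30% split, with 5-fold cross-validation on the training part used to
#     optimise the hyper-parameters, as done here for the kNN and RF algorithms.
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=1)
search = GridSearchCV(KNeighborsRegressor(),
                      param_grid={"n_neighbors": [5, 10, 20, 30]},
                      cv=KFold(n_splits=5, shuffle=True, random_state=1))
search.fit(X_train, y_train)
```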
§.§.§ Photometry Used in Training
All algorithms use the same primary photometry — g, r, i, z optical magnitudes, and W1, W2, W3 and W4 infrared magnitudes. However, the algorithms differ in how they treat the uncertainties associated with the photometry. For the simple ML algorithms (kNN and RF), the uncertainties are ignored. ANNz computes its own uncertainties using a method based on the kNN algorithm, outlined in <cit.>, and GPz uses them directly in the fitting of the Gaussian Process.
§.§.§ Using ANNz and GPz for Classification
While the implementations of the kNN and RF algorithms have both regression and classification modes, there is no directly comparable classification mode for the ANNz and GPz algorithms. In order to compare them with the classification modes of the kNN and RF algorithms, we use the ANNz and GPz algorithms to predict the median redshift of the bin, in lieu of a category. The predictions are then re-binned to the same boundaries as the original bins, and the re-binned data compared.
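A sketch of this re-binning step is shown below; the bin edges are arbitrary placeholders rather than the 30 equal-occupancy bins actually used, and the per-bin values here are simply bin mid-points rather than true medians.

```python
# Sketch: map continuous redshift predictions onto the classification bins so that
# regression-style outputs (e.g. from ANNz or GPz) can be compared with the classifiers.
import numpy as np

bin_edges = np.array([0.0, 0.3, 0.7, 1.2, 2.25, 2.51, 7.0])   # hypothetical edges
bin_values = 0.5 * (bin_edges[:-1] + bin_edges[1:])           # stand-in for bin medians

z_pred = np.array([0.25, 0.9, 2.4, 3.1])                      # continuous predictions
pred_bin = np.digitize(z_pred, bin_edges[1:-1])               # re-binned class labels
z_binned = bin_values[pred_bin]                               # representative redshift per bin
```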
§ RESULTS
For clarity, we break our results up into three sub-sections — Subsection <ref> reports the results using the regression modes of each ML method, Subsection <ref> reports the results using the classification modes of each ML method, and Subsection <ref> reports the comparison between the two modes.
§.§ Regression Results
The results using the regression modes of each ML algorithm are summarised in Table <ref>. Table <ref> shows that the kNN algorithm performs best in terms of both η_0.15 and η_2σ outlier rates, while also performing similarly across other metrics – although the GMM+GPz algorithm provides the lowest σ.
Scatter plots (Figures fig:knn_regress,fig:rf_regress,fig:annz_regress,fig:gpz_regress) show the results from each ML algorithm where the x-axis of each panel shows the measured spectroscopic redshifts, the y-axis of the top panel shows the redshift predicted by the given ML method and the bottom panel the normalised residuals. The dashed red line shows a perfect prediction, with the dashed blue lines highlighting the boundary set by the outlier rate. All figures use the same random seed, and the same test set.
As shown in Figures fig:knn_regress,fig:rf_regress,fig:annz_regress,fig:gpz_regress, all algorithms suffer from the same issues — overestimating the low-redshift sources (z < 1), while underestimating the high-redshift sources (z > 3). At the low-redshift end, the large majority of sources are estimated within the η_0.15 outlier rate by all algorithms, with all algorithms overestimating roughly the same number of sources. At high redshift, the GPz algorithm performs worst; however, the small number of sources at high redshift means this does not significantly impact the error metrics.
§.§ Classification Results
The results using the classification modes of each ML algorithm are summarised in Table <ref>. As with the Regression results in Section <ref>, the kNN algorithm produces the lowest η_0.15 rate, with the RF algorithm being second best. All methods (aside from the RF algorithm) have approximately the same σ. However, the RF and GPz algorithms have a marginally lower NMAD.
Figures fig:knn_class_scaled,fig:rf_class_scaled,fig:annz_class_scaled,fig:gpz_class_scaled show the results from each ML algorithm using the scaled classification bins, with the x-axis showing the measured (binned) spectroscopic redshifts, and the y-axis showing the ML-classified bin for each source. While a perfect correlation along the diagonal would be ideal, the tolerance built into the η_0.15 error metric means that at low redshift there may be many adjacent bins that are deemed “acceptable” redshift estimates, whereas at the highest redshift there is only one possible bin a source can be classified into for it to be an acceptable estimate.
The kNN algorithm correctly predicts the highest proportion of sources belonging to the highest redshift bin, though it should be noted that all algorithms struggle with assigning this under-represented class. While the width of the final bin means that sources that are not exactly classified are therefore incorrectly classified (unlike sources at the low-redshift end), in all cases over 70% of the highest redshift sources are placed in the highest two redshift bins. Alternatively, if these bins were to be combined, we would be able to say that over 70% of sources at z > 2 would be correctly classified. Further discussion is presented in Section <ref>.
§.§ Regression vs Classification
When comparing the results in Tables <ref> and <ref> (demonstrated in Figure <ref>), we find that the binning of redshifts greatly improves the results using the RF algorithm (in terms of η_0.15 outlier rate) while for other algorithms, it doesn't significantly alter the η_0.15 outlier rate. The classification process does slightly reduce σ for kNN and ANNz algorithms, bringing them closer to the results from the GPz algorithm.
When directly comparing the algorithms in regression and classification mode across the different redshift bins (Figure <ref>; showing the η_0.15 and η_2σ outlier rates, the σ, and NMAD as a function of redshift, comparing the regression modes of each algorithm with the classification modes), we can see that in terms of the η_0.15 outlier rate, the kNN, RF and ANNz algorithms significantly improve for the highest bin (mostly going from 60–80% to 40–60% outlier rates). The average accuracy (in terms of both σ and NMAD) is comparable between regression and classification modes.
§ DISCUSSION
We have found that all ML algorithms suffer from similar issues when estimating the redshift, regardless of the training data or algorithm used: the redshifts of low-redshift sources are over-estimated (i.e. they are predicted to have a higher redshift than their measured redshift), and those of high-redshift sources are under-estimated (i.e. they are predicted to be at a lower redshift than their measured redshift suggests).
In this work, we investigate the combination of heterogeneous datasets (with the impact shown in <ref>), creating a training set with a higher median redshift in order to better sample the high-redshift space and provide acceptable redshift estimates out to a higher redshift. We combine radio catalogues from the northern hemisphere, with SDSS optical photometry and spectroscopic redshifts, with radio catalogues from the southern hemisphere, with DES optical photometry and spectroscopic redshifts from the OzDES survey; the DES photometry is mapped to the SDSS photometry using a third-order polynomial. We compare simple ML algorithms, namely the kNN algorithm (when using the more complex Mahalanobis distance metric instead of the standard Euclidean distance metric) and the RF algorithm, with the much more complex ANNz and GPz (with GPz models trained on smaller subsets, modelled using a GMM), a NN-based approach and a GP-based approach respectively.
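The homogenisation step mentioned above can be sketched as a per-band polynomial fit over sources matched between the two surveys; the arrays below are hypothetical placeholders and the exact fitting procedure may differ from the one used in this work.

```python
# Sketch: fit a third-order polynomial per band mapping DES magnitudes onto the
# SDSS system, using (hypothetical) matched sources observed by both surveys.
import numpy as np

rng = np.random.default_rng(2)
des_g = rng.uniform(16, 22, size=5000)                               # placeholder DES g-band
sdss_g = des_g + 0.02 * (des_g - 19.0) + rng.normal(0, 0.02, 5000)   # placeholder SDSS g-band

coeffs = np.polyfit(des_g, sdss_g, deg=3)                            # third-order correction
des_g_corrected = np.polyval(coeffs, des_g)                          # DES mapped to the SDSS system
```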
We find that the kNN algorithm provides the lowest η_0.15 outlier rates across both the regression and classification modes, with outlier rates of 7.26% ± 0.02 and 6.21% ± 0.017 respectively, providing acceptable redshift estimates for ∼93% of radio sources with complete photometry, up to a redshift of z∼3.
§.§ Rigidity of η_0.15 outlier rate for Classification
The η_0.15 outlier rate is designed to be more accepting of errors as the source's redshift increases. By binning the data into 30 bins with equal numbers of sources, the classification tests break this acceptance, as the predicted values of the higher redshift bins become significantly more spread than at low redshift, to the point where sources are predicted as being outliers if the source is not classified into exactly the correct bin. There are multiple options to extend the flexibility of the η_0.15 outlier rate to this training regime; however, all have flaws. One method would be to adjust the outlier rate so that instead of determining catastrophic outliers based on a numeric value (i.e. 0.15 scaling with redshift), it allows a fixed number of predicted bins above and below the actual redshift bin of the source (i.e. a source can be predicted in exactly the correct bin, ± some number of bins, and still be considered an acceptable prediction). However, this would mean significant `fiddling' with the bin distribution to ensure that the original intention of the η_0.15 outlier rate is maintained (that a source can be incorrect by up to 0.15, scaling with redshift, before it is considered a “catastrophic failure”), and would defeat the initial purpose of presenting redshift estimation as a classification task — creating a uniform distribution in order to better predict sources at higher redshift ranges that are under-represented in all training datasets. Another option would be to drop the η_0.15 outlier rate, and label any source that is predicted within an arbitrary number of bins (2–3 perhaps) of the correct bin as an acceptable estimate. However, this would severely penalise the low-redshift end of the distribution, which is dense in sources, and would not be comparable across studies, as it would be impossible to ensure the redshift bins (both in distribution and density) were similar across different datasets. The simplest alternative is to combine the highest two redshift bins, thereby allowing sources in those top two bins to be classified as either, and not be considered a catastrophic failure.
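For reference, the outlier rate discussed here can be computed in a few lines; this sketch assumes the usual definition, in which a source with |z_photo - z_spec|/(1 + z_spec) > 0.15 is counted as a catastrophic outlier.

```python
# Sketch of the eta_0.15 outlier rate (assuming the standard definition).
import numpy as np

def outlier_rate(z_photo, z_spec, threshold=0.15):
    z_photo, z_spec = np.asarray(z_photo), np.asarray(z_spec)
    resid = np.abs(z_photo - z_spec) / (1.0 + z_spec)
    return np.mean(resid > threshold)   # fraction of catastrophic outliers
```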
In Table <ref> and Figure <ref>, we present alternatives to Table <ref> and Figure <ref> based on the upper two bins being combined.
The combination of redshift bins significantly decreases the η_0.15 outlier rate for all algorithms, with the kNN algorithm still performing best (and dropping from 6.21% to 4.88%).
§.§ Comparison with Previous Work
Comparison with previous works is difficult, as the selection criteria, such as source type and redshift distribution, can play a significant role in the final error metrics, with most studies aiming for the largest training samples. The motivation of finding the largest possible training set pushes studies towards large-scale surveys like the SDSS, with millions of sources with spectroscopic redshifts available for use. For the testing of algorithms for use on similar surveys like the DES and DECaLS surveys, or the LSST being conducted at the Vera Rubin Observatory, this motivation is entirely appropriate. However, this workflow cannot be directly compared with algorithms trained and tested on datasets dominated by a specific subset of sources (for example, radio-selected samples, which are typically dominated by difficult-to-estimate AGN), at a significantly higher redshift. The closest comparison would be with <cit.>, which contains similar primary algorithms being tested (both this work and <cit.> use the kNN algorithm with a Mahalanobis distance metric, and the RF algorithm, and compare both classification and regression modes), and similar photometry (both use 4 optical and 4 infrared bands). Both studies are conducted on a radio-selected sample. However, while both are radio-selected, <cit.> is restricted to the ATLAS dataset and the narrower infrared bands of the SWIRE survey, with a significantly smaller dataset and lower median redshift. The combination of the change in infrared photometry to the all-sky, wider-band AllWISE photometry, and the smaller, lower-redshift training set used by <cit.>, leads to slightly lower η_0.15 outlier rates (∼6% in <cit.>, compared to ∼7% in this work when comparing the kNN algorithm using regression). However, due to the size and redshift distribution of the dataset compiled in this work, models trained are able to estimate the redshift of radio sources to a significantly higher redshift (z < 3, compared with z < 1).
§.§ Algorithm Comparison
The best performing algorithm (in terms of η_0.15 outlier rate) is the kNN algorithm, despite the kNN algorithm being significantly less complex than all other approaches tested, with all other algorithms combining the results of many models (RF combining many Decision Trees, the ANNz algorithm combining 100 different, randomly initialised tree- and NN-based models, and GPz including a pre-processing step using the GMM algorithm). This may be due to the difference in the way the different algorithms are trained. While the RF, ANNz and GPz algorithms are all methods training some kind of model to best represent the training set, the kNN algorithm treats the data itself as the model, and is not trying to learn a representation. This subtle difference means that in cases where the test data is well represented by the training data, and the number of features is small, the kNN algorithm may out-perform more complex algorithms. The kNN algorithm also has the added advantage that it doesn't need to try to account for the “noise” within astronomical data — as long as the same types of noise present in the test data are also present in the training data, the kNN algorithm doesn't need to handle them in any particular manner.
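A sketch of kNN regression with a Mahalanobis distance metric is given below; scikit-learn requires the inverse covariance matrix of the features to be supplied via metric_params, and the arrays are placeholders rather than the catalogue used here.

```python
# Sketch: kNN regression with a Mahalanobis distance metric (placeholder data).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
X_train, y_train = rng.normal(size=(2000, 8)), rng.uniform(0, 3, size=2000)

VI = np.linalg.inv(np.cov(X_train, rowvar=False))   # inverse covariance of the features
knn = KNeighborsRegressor(n_neighbors=20, metric="mahalanobis",
                          metric_params={"VI": VI}, algorithm="brute")
knn.fit(X_train, y_train)
z_photo = knn.predict(rng.normal(size=(5, 8)))
```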
However, the kNN algorithm has two major drawbacks. First, the kNN algorithm is making the assumption that the test data follows all the same distributions as the training data. There is no way for the kNN algorithm to extrapolate beyond its training set, whereas more complicated algorithms like GPz are — to a small degree — able to extend beyond the training sample. This means the kNN algorithm is entirely unsuitable when the test set is not drawn from the same distributions as the training set.
Secondly, as with many ML algorithms, there is no simple way to provide errors for estimates made by the kNN algorithm in regression mode (the classification mode is able to produce a probability density function across the chosen classification bins). The ANNz algorithm is able to use the scatter in the predictions of its ML models as a proxy for the error, and the GPz algorithm is based on the GP algorithm, which inherently provides a probability density function — a significant benefit for some science cases.
The tension between achieving the best η_0.15 outlier rate and being able to quantify uncertainties is not trivial, and the choice of which algorithm is best suited to the chosen purpose is best left to the individual science case.
Finally, it is worth reiterating the differing error metrics being optimised between the different algorithms. The ANNz and GPz algorithms are both optimising error metrics favoured by their developers for their particular science needs. In this case, the different error metrics being optimised (like the σ) do not match the science needs of the EMU project, for which the η_0.15 outlier rate is preferred. The effect of this is that we are comparing an error metric that is optimised in some algorithms (the kNN and RF algorithms), but not in others (the ANNz and GPz algorithms). This presents an inherent disadvantage to the ANNz and GPz algorithms, and may contribute to their lower performance.
§.§ Estimating Confidence Intervals
Both the ANNz and GPz algorithms explicitly estimate the uncertainty of any prediction made. GPz estimates uncertainties directly as a by-product of the Gaussian fitting in GPz. ANNz estimates its uncertainties as an additional step, where the 100 most similar galaxies from the training set to the test source are found using the kNN algorithm, biases for each estimated redshift calculated, and the 68^th percentile taken as the uncertainty of the galaxy <cit.>.
The RF algorithm is the next simplest for identifying uncertain estimates. Following <cit.>[implemented as <https://contrib.scikit-learn.org/forest-confidence-interval/index.html>], confidence intervals for RF models can be estimated using the Jackknife method.
Finally, the kNN algorithm does not have a natural way of estimating the uncertainty of predictions. Similar to the method described in <cit.> though, we can get an understanding of which estimates are likely to be uncertain by examining the similar galaxies. We can follow the below workflow to estimate the uncertainty of our predictions, noting that they are unlikely to be realistic uncertainties for the estimate, and more an estimate of how uncertain the “model” is of the prediction, given the data:
For every source in the test set:
* Identify the k_n sources used in the estimation of the redshift.
* Use the same model to estimate the redshift of the above k_n sources.
* Calculate the variance of the redshift estimates of the k_n sources, and take this variance as the uncertainty for the prediction.
We emphasise that this uncertainty is not an estimate of how well the redshift prediction of the test source fits the photometry — it is purely an estimate of how varied the sources were that were used to make the initial estimate, with the implicit understanding being that the more varied the sources used to predict the redshift, the less likely the estimate is to be accurate. Additionally, there are no photometric uncertainties involved, so the uncertainty provided is further unlikely to be scientifically meaningful, beyond helping to identify potentially unreliable estimates.
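A minimal sketch of this workflow is shown below (placeholder data, and the default Euclidean metric for brevity): the neighbours used for each test-source estimate are retrieved, their own redshifts are predicted with the same model, and the variance of those predictions is taken as the uncertainty proxy.

```python
# Sketch of the kNN uncertainty workflow described above.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(4)
X_train, y_train = rng.normal(size=(2000, 8)), rng.uniform(0, 3, size=2000)
X_test = rng.normal(size=(100, 8))

k_n = 20
knn = KNeighborsRegressor(n_neighbors=k_n).fit(X_train, y_train)

_, neighbour_idx = knn.kneighbors(X_test)                           # 1. neighbours per test source
neighbour_pred = knn.predict(X_train[neighbour_idx.ravel()])        # 2. their own estimates
sigma_z = neighbour_pred.reshape(neighbour_idx.shape).var(axis=1)   # 3. variance as uncertainty
z_photo = knn.predict(X_test)
```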
§.§.§ Removing Uncertain Predictions
Certainty thresholds for defining acceptable estimates are not, to the best of our knowledge, typically published. <cit.> suggests Equation <ref>:
σ_z/(1 + z_photo) < 0.2
where σ_z is the uncertainty estimate from the GPz model, and z_photo is the photometric redshift estimated from the same model. Unfortunately, given the different quantities the different uncertainties are designed to capture, Equation <ref> can only be used for the estimates measured using the GPz algorithm.
For other algorithms we aim to find a (where possible) statistically sound method of removing the most uncertain estimates, while maintaining approximately the same number of `certain' sources in order to compare outlier rates with the certain GPz estimates.
For the kNN algorithm and uncertainties (defined in Section <ref>), we can define Equation <ref>:
σ_z/(1 + z_photo) < ∑_i (σ_z,i - σ̄_z)^2/(n-1),
where σ̄_z is the average of the uncertainty estimates; that is, the cut is set at the sample variance of the uncertainties.
No statistical method of determining a cutoff for the ANNz and RF algorithms produced source counts similar to those of the GPz algorithm, and hence for this work we choose the following values (Equations <ref> and <ref> respectively) in order to produce comparable outlier rates:
σ_z/(1 + z_photo) < 0.1
σ_z < 2.302
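Applied to the predictions, cuts of this kind reduce to simple boolean masks; the sketch below uses the 0.1 value quoted above purely as an illustrative default.

```python
# Sketch: split predictions into `certain' and `uncertain' subsets with a threshold cut.
import numpy as np

def split_by_uncertainty(z_photo, sigma_z, threshold=0.1):
    scaled = np.asarray(sigma_z) / (1.0 + np.asarray(z_photo))
    certain = scaled < threshold
    return certain, ~certain          # boolean masks for the two subsets
```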
Once these outliers are removed, the residual outlier rates for all methods drop significantly. The original outlier rates, the outlier rates of the `certain' predictions, and the outlier rates of the `uncertain' predictions are shown in Table <ref> for all algorithms. Prediction plots similar to Figure <ref> for each subset and algorithm can be found in Figures <ref> to <ref> in <ref>.
As demonstrated, the removal of predictions with high uncertainty greatly improves the outlier rates of all algorithms, with the kNN algorithm still performing best, the ANNz and GPz algorithms performing equally well, and the RF algorithm performing worst. We do note, however, that the formal definition of uncertain sources by <cit.> is combined with very well defined uncertainties to make GPz estimates more robust and reliable, particularly when spectroscopic redshifts are not available in test fields of sufficient depth and quantity to help quantify reliability.
§.§ Effects of Differing Radio Survey Depths
Radio sources are typically more difficult than optically-selected sources to estimate redshifts for using ML, as they tend to contain rarer sub-types of galaxies, and hence constructing a representative training sample is problematic. While all of the samples in our training set have been radio-selected, the depth of the radio survey used can play a part in which sub-types of galaxies are represented in the radio sample. As shown by <cit.>, at ∼110 μJy radio samples stop being dominated by AGN and begin being dominated by SFGs, and hence would require additional SFG samples in the training sample in order to best estimate these sources. While the majority (∼90%) of sources used in our training sample come from the RGZ catalogues (drawn from the VLA FIRST survey), which have a sensitivity of ∼150 μJy, we include sources from the Stripe 82 region (<cit.>; RMS: 52 μJy, and <cit.>; RMS: 82 μJy), and the ATLAS surveys (<cit.>; RMS: 14 μJy). These additional sources provide some coverage of the radio-faint parameter space; however, we acknowledge that their comparatively small number is inadequate to completely model the space.
Future work will include more radio selected data from deep fields like COSMOS, and the LOFAR Deep Fields.
§ CONCLUSION
Machine learning approaches to estimating the redshift of radio-selected galaxies have significant benefits over traditional template fitting methods — they don't require specifically developed templates, nor do they require the disentanglement of the black hole emission from the galaxy emission. However, the major downside is the requirement for a representative training sample — a significant difficulty given the requirement for spectroscopic redshift measurements, and the typically significantly higher median redshift of radio surveys when compared with optical surveys.
By combining radio-selected data from the northern and southern hemispheres, we have created a larger sample of radio galaxies for training ML algorithms. Once the DES optical data were homogenised with the SDSS optical photometry, current leading ML algorithms were tested. We show that the kNN algorithm — in both regression and classification tests — provides the lowest η_0.15 outlier rate, estimating ∼92% of radio-selected sources within an acceptable limit. The depth in redshift distribution of the assembled training set allows us to estimate the redshift of sources up to z = 3 before the results become dominated by random scatter and systematic under-estimation.
We show that we can use the classification modes of the tested ML methods to identify ∼76% of sources in the highest two redshift bins (z = 2.25 and z > 2.51), providing a way of first identifying the highest redshift sources, before using the regression modes of the provided algorithms to estimate the redshift of the remaining sources more effectively.
In this work, we show that the kNN algorithm using the Mahalanobis distance metric performs best (i.e. minimises outlier rate) for the estimation of the redshift of radio galaxies.
§ DATA AVAILABILITY
All code used within this study is available at <https://github.com/kluken/PhotoZForHigh-ZRadioSurveysPASA>. Data are described and are available at <cit.>.
§ ACKNOWLEDGEMENTS
We thank the anonymous referee for their time and thoughtful comments, as well as their prompt response in helping to improve the manuscript.
We also thank Sarah White for her helpful comments.
The Australia Telescope Compact Array is part of the Australia Telescope National Facility which is funded by the Australian Government for operation as a National Facility managed by CSIRO. We acknowledge the Gomeroi people as the traditional owners of the Observatory site.
This scientific work uses data obtained from Inyarrimanha Ilgari Bundara / the Murchison Radio-astronomy Observatory. We acknowledge the Wajarri Yamaji People as the Traditional Owners and native title holders of the Observatory site. CSIRO’s ASKAP radio telescope is part of the Australia Telescope National Facility (https://ror.org/05qajvd42). Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Research Centre. Establishment of ASKAP, Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Research Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund.
Based in part on data acquired at the Anglo-Australian Telescope. We acknowledge the traditional owners of the land on which the AAT stands, the Gamilaroi people, and pay our respects to elders past and present.
This project used public archival data from the Dark Energy Survey (DES). Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology FacilitiesCouncil of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft, and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenössische Technische Hochschule (ETH) Zürich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciències de l'Espai (IEEC/CSIC), the Institut de Física d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universität München and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, The Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University.
Based in part on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
This work was performed on the OzSTAR national facility at Swinburne University of Technology. The OzSTAR program receives funding in part from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government.
This publication has been made possible by the participation of more than 12,000 volunteers in the Radio Galaxy Zoo project. Their contributions are individually acknowledged at <http://rgzauthors.galaxyzoo.org>
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
§ DATA HOMOGENISATION
As demonstrated in Figure <ref>, the difference in measured photometry can be significant between the DES and SDSS catalogues. In order to quantify how much of an impact this difference in photometry has, we present the following results using the kNN algorithm. <ref> and subsections show the effect of the data homogenisation discussed in Section <ref> when training on SDSS photometry, and testing on DES photometry. <ref> and subsections show the effect of the data homogenisation when training on DES photometry, and testing on SDSS photometry. Finally, <ref> directly compares these results.
§.§ Training on SDSS Photometry, Testing on DES Photometry
This section is divided into two components — <ref> and <ref>. In these subsections, we demonstrate the results of using uncorrected and corrected photometry in regression and classification tests.
§.§.§ Regression
Figure <ref> and <ref> are of the same style as Figure <ref>. Figure <ref> is the result of training on SDSS photometry, and testing on DES photometry. Figure <ref> is the result of training on SDSS photometry, and testing on corrected DES photometry.
§.§.§ Classification
Figure <ref> and <ref> are of the same style as Figure <ref>. Figure <ref> is the result of training on SDSS photometry, and testing on DES photometry. Figure <ref> is the result of training on SDSS photometry, and testing on corrected DES photometry.
§.§ Training on DES Photometry, Testing on SDSS Photometry
This section is divided into two components — <ref> and <ref>. In these subsections, we demonstrate the results of using uncorrected and corrected photometry in regression and classification tests.
§.§.§ Regression
Figures <ref> and <ref> are of the same style as Figure <ref>. Figure <ref> is the result of training on DES photometry, and testing on SDSS photometry. Figure <ref> is the result of training on corrected DES photometry, and testing on SDSS photometry.
§.§.§ Classification
Figures <ref> and <ref> are of the same style as Figure <ref>. Figure <ref> is the result of training on DES photometry, and testing on SDSS photometry. Figure <ref> is the result of training on corrected DES photometry, and testing on SDSS photometry.
§.§ Comparison between Uncorrected, and Corrected data
In all tests, homogenising the DES photometry to the SDSS improved the outlier rates. Table <ref> and Figure <ref> demonstrate that the outlier rate improves by ∼1-2% for all tests.
§ COMPARING ALL, WITH CERTAIN AND UNCERTAIN PREDICTIONS
Figures <ref>, <ref>, <ref>, and <ref> show plots similar to the top panel of Figure <ref>, allowing for comparisons between all predictions, just those predictions deemed `certain' by the criteria in Section <ref>, and those that don't meet the criteria, and are therefore deemed `uncertain'. Across all algorithms, much of the scatter between the predicted and measured redshift is removed from the `certain' sample, with all algorithms benefiting across all error metrics. The kNN algorithm retains the lowest outlier rate. However, the majority of its `certain' sources lie between 0 < z < 2.5, with few sources beyond z > 2.5. The ANNz and GPz algorithm perform next best in terms of outlier rate, though the GPz algorithm performs better in both σ and NMAD. The GPz algorithm removes most z > 3 sources, with the ANNz algorithm extending up to z < 4. The RF algorithm performs worst, with predictions capped at z < 3.
The tree-child network problem for line trees and the shortest common supersequences for permutations are NP-hard
Laurent Bulteau, Louxin Zhang
arXiv:2307.04335v2 [math.CO] (MSC: 05A16, 05C30, 92D15)
August 12, 2023
Reconstructing phylogenetic networks presents a significant and complex challenge within the fields of phylogenetics and genome evolution.
One strategy for reconstructing phylogenetic networks is to solve the phylogenetic network problem, which involves inferring phylogenetic trees first and subsequently computing the smallest phylogenetic network that displays all the trees. This approach capitalizes on exceptional tools available for inferring phylogenetic trees from biomolecular sequences. Since the vast space of phylogenetic networks poses difficulties in obtaining comprehensive sampling, researchers have switched their attention to inferring tree-child networks from multiple phylogenetic trees, where in a tree-child network each non-leaf node must have at least one child that is a tree node (i.e. an indegree-one node).
We prove that the tree-child network problem for multiple line trees remains NP-hard by a reduction from the shortest common supersequence problem for permutations and by proving that the latter is NP-hard.
§ INTRODUCTION
Recent genomic studies have highlighted the significant roles of recombination and introgression in genome evolution <cit.>. Consequently, there has been an increasing use of phylogenetic networks to model the evolution of genomes in the presence of recombination, introgression and other reticulate events <cit.>. A phylogenetic network is a rooted directed acyclic graph (DAG) that represents taxa (genomes, individuals, or species) as its leaves and evolutionary events (speciation, recombination, or introgression) as its internal nodes. Over the past three decades, substantial progress has been made in understanding the theoretical aspects of phylogenetic networks <cit.> (see also <cit.>).
The space of phylogenetic networks is vast, making it challenging to perform comprehensive sampling. As a result, popular methods like maximum likelihood and Bayesian approaches, commonly used for phylogeny reconstruction, are not efficient enough for reconstructing phylogenetic networks containing a large number of reticulate events on more than 10 taxa <cit.>. This has prompted researchers to focus on inferring phylogenetic networks with specific combinatorial properties <cit.>. Popular classes of phylogenetic networks include galled trees <cit.>, galled networks <cit.>, and tree-child networks <cit.>, which can be enumerated and counted efficiently <cit.>. Furthermore, researchers are also investigating the parsimonious inference of phylogenetic networks from multiple trees, aiming to infer a network with the smallest hybridization number (HN) that displays all the trees <cit.>. The HN, a generalization of the number of reticulate nodes in binary phylogenetic networks, quantifies the complexity of the network (refer to Section <ref> for more details). Notably, a scalable method has been recently developed to compute a tree-child network with the minimum HN from binary trees <cit.>.
Inference of an arbitrary phylogenetic network with the smallest HN is known to be NP-hard, even in the case of two input trees <cit.> and in the case where tree-child networks are inferred <cit.>.
In this paper,
we prove that the problem remains NP-hard even for inferring tree-child networks from line trees.
§ BASIC CONCEPTS AND NOTATION
Let X be a set of taxa. In this paper, a phylogenetic network on X is a rooted DAG such that:
* The root is of indegree 0 and outdegree 1. There is at least one directed path from the root to every other node.
* The leaves (which are of indegree 1 and outdegree 0) are labeled one-to-one with the taxa.
* All nodes except for the leaves and the root are either a tree node or a reticulate node. The tree nodes are of indegree 1 and outdegree 2, whereas the reticulate nodes are of indegree more than 1 and outdegree 1.
In a phylogenetic network, a node u is said to be below another v if there exists a directed path from v to u.
A phylogenetic network is binary if every reticulate node is of indegree 2.
A binary phylogenetic tree is a binary phylogenetic network that does not have any reticulate nodes. In this paper, a binary phylogenetic tree is simply mentioned as a binary tree. A line tree is a binary tree in which all internal nodes but the out-degree-1 root have at least one child that is a leaf.
An important parameter of a phylogenetic network is the hybridization number (HN).
It is defined as the sum over all the reticulation nodes of the indegree of that node minus the number of the reticulate nodes. Note that for a binary phylogenetic network B, each reticulate node has indegree 2 and thus the HN of B is equal to the number of the reticulate nodes of B.
A tree-child network is a phylogenetic network in which every non-leaf node has at least one child that is either a tree node or a leaf.
Let Σ be an n-letter alphabet, and ℓ be a new letter not in Σ. For a permutation P=p_1p_2⋯ p_n on Σ, we use T(P) to denote the line tree on Σ∪{ℓ} that has the node set Σ∪{r, v_i, ℓ | 1≤ i≤ n} and the directed edge set { (r, v_1), (v_i, v_i+1), (v_i, p_i), (v_n, ℓ), (v_n, p_n) | 1≤ i≤ n-1} (left, Figure <ref>).
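A small sketch of this construction is given below, building the edge list of T(P) directly from the definition; node names such as "r", "v1" and "leaf_l" are illustrative labels only.

```python
# Sketch: edge list of the line tree T(P) for a permutation P, following the definition above.
def line_tree_edges(P):
    n = len(P)
    edges = [("r", "v1")]
    for i in range(1, n):                    # i = 1, ..., n-1
        edges.append((f"v{i}", f"v{i+1}"))   # spine edge (v_i, v_{i+1})
        edges.append((f"v{i}", P[i - 1]))    # pendant leaf edge (v_i, p_i)
    edges.append((f"v{n}", P[n - 1]))        # (v_n, p_n)
    edges.append((f"v{n}", "leaf_l"))        # (v_n, l), the extra leaf labelled l
    return edges

# Example from the text: the line tree for P = edabc.
edges = line_tree_edges("edabc")
```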
Let v be a node of indegree 1 and outdegree 1 in a directed graph. Then, there is a unique edge (u, v) entering
v and a unique edge (v, w) leaving v. We contract v by removing v and
replacing (u, v) and (v, w) with a new edge (u, w).
For a sequence Q=q_1q_2⋯ q_m on Σ, we use N(Q) to denote the one-component tree-child network on Σ∪{ℓ} that is obtained by applying degree-2 node contraction from the DAG consisting of the node set
Σ∪{r, ℓ, v_i, r_j | 1≤ i≤ m, 1≤ j≤ n}
and the directed edge set E_1∪ E_2, where E_1={ (r, v_1), (v_i, v_i+1), (v_m, ℓ), | 1≤ i≤ m-1}∪{(r_j, a_j) | a_j∈Σ} and E_2 contains (v_i, r_j) if q_i=a_j for every possible i and j (right, Figure <ref>).
Clearly, the HN of N(Q) is m-n.
§.§ The tree-child network problem
A binary tree is displayed in a tree-child network if it can be obtained from
the network by (i) deletion of all but one incoming edge for each reticulate
node and subsequently (ii) contraction of all indegree-1 and out-degree-1 nodes.
We focus on how to infer a tree-child network with the minimum HN that displays all the input trees. This problem is formally defined as:
The Tree-Child Network (TCN) Problem
Input A set of binary trees on X.
Output A tree-child network with the minimum HN that displays all the trees.
§.§ The shortest common supersequence problem
A string on an alphabet is a supersequence of another if the latter can be obtained from the former by the deletion of 0 or more letters. A string is a common supersequence of multiple strings if it is a supersequence of every string.
The length of a string is the total number of the occurrences of the letters in the string.
A common supersequence is a shortest common supersequence (SCS) if it has the smallest length, over all the common supersequences of the strings.
The SCS problem is formally defined as:
Input A set of strings on an alphabet.
Output A SCS of the strings.
The SCS problem is a fundamental NP-complete problem <cit.>.
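For intuition, the classical dynamic program below computes a SCS of two strings in quadratic time; it is the many-string version needed in this paper that is NP-hard.

```python
# Sketch: dynamic programming for a shortest common supersequence of two strings.
def scs_pair(a, b):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]   # dp[i][j] = |SCS(a[:i], b[:j])|
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0 or j == 0:
                dp[i][j] = i + j
            elif a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1])
    out, i, j = [], m, n                          # backtrack to recover one SCS
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] <= dp[i][j - 1]:
            out.append(a[i - 1]); i -= 1
        else:
            out.append(b[j - 1]); j -= 1
    out.extend(reversed(a[:i])); out.extend(reversed(b[:j]))
    return "".join(reversed(out))

# scs_pair("dc", "el") returns the length-4 supersequence "eldc".
```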
§ TREE-CHILD NETWORK INFERENCE VIA LINEAGE TAXA STRINGS
Let X be a set of n taxa and T_i (1≤ i≤ k) be k binary trees on X. The minimum tree-child networks that display all the k trees can be constructed from the lineage taxon strings (LTSs) of the taxa under an ordering on
X <cit.>. In this section, we shall restate the construction process on which our main result will be based.
Consider an ordering π on X.
For any x, x'∈ X, we write
x<_π x' if x is less than x' under π. For a node u of a tree on X, we use min_π(u) to denote the smallest of the taxa below u. We label the root with the smallest taxon under π and each non-root internal node u with the larger of min_π(u') and min_π(u”), where u' and u” are the two children of u.
In this way, the root and the remaining n-1 internal nodes are uniquely labeled with a taxon. Moreover, each leaf f is below the unique internal node w that had been labeled with f. As a result, there exists a path P_wf from w to f. The LTS of the taxon f consists of the taxon labels of the internal nodes in P_wf below w.
For example, if the alphabet ordering
(i.e. a<b<c<d<e<ℓ) is used, in the tree in Figure <ref>, the root is labeled with a; v_1 to v_5 are labeled with e, d, b, c, ℓ, respectively. Therefore,
the LTS of a, b, c are edb, c, ℓ, respectively, whereas the LTS of d, e, ℓ are the empty string.
Let π be π_1<π_2 <⋯ <π_n. Note that
X={π_1, π_2, ⋯, π_n}.
We further assume that β_1, β_2, ⋯, β_n are n sequences satisfying the following conditions:
(C1) For each i<n, β_i is a string on {π_i+1, ⋯, π_n};
(C2) β_n is the empty sequence.
It is proved in <cit.> that the following algorithm outputs a tree-child network containing all T_i, written as
N(π, {β_i}^n_i=1), whose HN is equal to
∑_1≤ i≤ n|β_i |.
Tree-Child Network Construction <cit.>
1. (Vertical edges) For each β_i, define a
path P_i with |β_i| +2 nodes:
h_i, v_i1, v_i2, ⋯, v_i|β_i|, π_i,
where β_n is the empty sequence.
2. (Left–right edges)
Arrange the n paths
from left to right as P_1, P_2, ⋯, P_n.
If the m-th symbol of β_i is π_j, we add an
edge (v_im, h_j) for each i and each m.
3. For each i>1, contract h_i if h_i is of indegree 1.
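The three steps can be sketched as follows, with nodes represented by plain strings and the ordering and strings β_i passed in explicitly; this is an illustration of the construction as stated above, not the published implementation.

```python
# Sketch of the construction: build the edge list of N(pi, {beta_i}) and contract
# the indegree-1 h-nodes.  `order` lists pi_1 < ... < pi_n; `beta[t]` is beta_t.
def tree_child_network(order, beta):
    edges = []
    for t in order:                                            # Step 1: vertical paths
        path = [f"h_{t}"] + [f"v_{t}_{m}" for m in range(1, len(beta[t]) + 1)] + [t]
        edges += list(zip(path, path[1:]))
    for t in order:                                            # Step 2: left-right edges
        for m, symbol in enumerate(beta[t], start=1):
            edges.append((f"v_{t}_{m}", f"h_{symbol}"))
    for t in order[1:]:                                        # Step 3: contract indegree-1 h_j
        h = f"h_{t}"
        incoming = [e for e in edges if e[1] == h]
        outgoing = [e for e in edges if e[0] == h]
        if len(incoming) == 1:
            edges.remove(incoming[0]); edges.remove(outgoing[0])
            edges.append((incoming[0][0], outgoing[0][1]))
    return edges

# Using the SCS strings from the table in Example 2 (ordering a < b < c < l < d < e,
# with "l" standing for the taxon written as a script letter in the text):
beta = {"a": "ecb", "b": "dcel", "c": "l", "l": "ed", "d": "", "e": ""}
network_edges = tree_child_network("abclde", beta)
```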
Consider k binary trees T_j on X.
We write α_ji for the LTS of π_i in T_j for each i≤ n and each j≤ k. Then, for each j, α_j1, α_j2, ⋯, α_jn satisfy the conditions (C1) and (C2). Moreover, let β_i be a SCS of α_1i, α_2i, ⋯, α_ki for each i.
The sequences
β_1, β_2, ⋯, β_n also satisfy the conditions (C1) and (C2).
Let T_j (1≤ j≤ k) be k trees on X and let N be a tree-child network on X that displays all the trees.
If N has the minimum HN, then there exists a permutation π such that N = N(π, {β_i}^n_i=1), where each β_i is a SCS of the LTSs α_ji of π_i in the input trees T_j, and the HN of N is ∑_1≤ i≤ n|β_i|.
The proof of Theorem <ref> appears in Section A of the Supplemental Methods of <cit.>. Since there can be multiple SCSs for a set of sequences, Theorem <ref> implies that the TCN problem may have multiple solutions.
§ EQUIVALENCE OF THE TCN AND SCS PROBLEMS
By Theorem <ref>, we know that the TCN problem can be solved using multiple SCS sub-problems, one for the LTSs of each taxon. Aiming at a reduction from SCS to TCN, we now show that for any instance of SCS where all input strings are permutations, an instance of TCN can be built such that a single taxon has a non-trivial LTS in each tree, and such that each such LTS is exactly one of the input permutations.
Consider an instance of the SCS problem consisting of k permutations P_i (1≤ i≤ k) on Σ.
By Theorem <ref>, all the tree-child networks with the smallest HN that display all the T(P_i) can be obtained from the LTS of taxa under a permutation.
Consider an ordering π: π_1< π_2 <⋯ <π_n <π_n+1 on Σ∪{ℓ}. We have the following two cases.
Case 1: ℓ=π_t, where t>1.
For each i, we let P_i=p_i1p_i2⋯ p_in.
If p_in >_πℓ, the LTS of the leaf ℓ ends with p_in and thus is nonempty in T(P_i).
But, the LTS of the leaf p_in is empty in T(P_i).
If p_in<_πℓ, the LTS of ℓ is empty. But, the LTS of p_in contains ℓ and thus is nonempty in T(P_i).
In general, define β_i1=π_1. For each j> 1 such that β_ij=p_ix < min{p_in, ℓ},
define β_i (j+1)=min_π{ p_i(x+1), ⋯, p_in, ℓ}.
We obtain a sequence:
β_i1=π_1, β_i2, ⋯, β_iw_i = min_π{p_in, ℓ}.
Then, in T(P_i), the LTS of β_ij ends with β_i(j+1) and thus is nonempty under π for each j<w_i; the LTS of β_iw_i ends with ℓ if β_iw_i=p_in and with p_in if β_iw_i=ℓ under π. It is also true that the LTS is empty for any other taxon under π. Moreover, we have the following fact.
Let the LTS of β_ij be
S_ij in T_i under π: π_1<π_2<⋯ <π_n+1, where ℓ≠π_1. Then, for each i,
P_i=
S_i1[1, |S_i1|-1]β_i1S_i2[1, |S_i2|-1]β_i2⋯
S_i(w_i-1)[1, |S_i(w_i-1)|-1]β_i(w_i-1)S'_iw_i,
S'_iw_i = S_iw_i if β_iw_i=ℓ, and S'_iw_i = S_iw_i[1, |S_iw_i|-1]β_iw_i if β_iw_i≠ℓ,
where S_it[1, |S_it|-1] denotes the string obtained by removal of the last letter of S_it for each possible t
and the right-hand side is the concatenation of the strings and letters.
Example 1. For the line tree in the left panel in Figure <ref>, the corresponding permutation is P:edabc on the alphabet {a, b, c, d, e}. Under the ordering
a<b<c<d<e<ℓ,
β_1=a, β_2=b,
β_3=c, whose LTSs are edb, c, ℓ, respectively.
Proposition <ref> is verified by ed a· b· c=P, where
the symbol '·' is added to indicate different parts of P for clearness.
Let the LTS of β_ij be
S_ij in T_i under π.
Fix an π_j for some 1≤ j≤ n+1. If the LTS of π_j is empty for every i, define Q_j to be the empty string.
If S_ij is nonempty only
for indices i_1, i_2, ⋯, i_j,
we define Q_j to be the string obtained from W_j = SCS(S_i_1j, S_i_2j, ⋯, S_i_jj) by removing the last letter of W_j.
Note that different SCS of the strings give different Q_j of the same length.
Example 2. Consider the ordering π: a<b<c<ℓ<d<e for the tree lines trees in Figure <ref>. The LTSs of the taxa under π in the three trees are listed in the following table, from which we obtain a tree-child network of an HN of 5 (right, Figure <ref>).
Taxon LTS in T(P_1) LTS in T(P_2) LTS in T(P_3) SCS
a eb cb cb ecb
b dc eℓ ℓ dceℓ
c ℓ ϵ ϵ ℓ
ℓ ϵ d ed ed
d ϵ ϵ ϵ ϵ
e ϵ ϵ ϵ ϵ
Here, ϵ denotes the empty string. The LTSs of b are dc, eℓ and ℓ in T(P_1), T(P_2), T(P_3), respectively.
A SCS of dc, eℓ and ℓ is dceℓ; another is deℓc. Taking the latter, we obtain Q_b=deℓ.
Q_a=ec and Q_ℓ=e and that Q_c, Q_d and Q_e are empty.
Let Q_h_1, Q_h_2, ⋯, Q_h_j be all the nonempty strings defined from π by the method described above, where π_h_1<_ππ_h_2<_π⋯ <_ππ_h_j.
If π_h_j=ℓ, we set
Q=Q_h_1π_h_1Q_h_2π_h_2⋯ Q_h_j-1π_h_j-1 W_h_j.
If π_h_j <_πℓ, then, ℓ must appear in Q_h_j if it is not removed. In this case,
we set Q to the string obtained from
Q_h_1π_h_1Q_h_2π_h_2⋯ Q_h_j-1π_h_j-1Q_h_jπ_h_j by deleting the occurrences of ℓ.
Since |Q| is equal to or less than the sum of the lengths of the SCS of the LTSs of the π_i in the k line trees T(P_j) (1≤ j≤ k), the HN of N(Q) is equal to or less than the HN of N_π.
On the other hand, by Proposition 1, Q is a common supersequence of P_1, P_2, ⋯, P_k.
Thus, |Q| ≥ |SCS(P_1, P_2, ⋯, P_k)|.
Therefore, the HN of the one-component tree-child network N(Q) is not less than that of N(SCS(P_1, P_2, ⋯, P_k)).
Example 2 (Continued).
For the trees in Figure <ref>,
Q=Q_aa· Q_bb· Q_cc· W_ℓ=eca· deℓ b· c· ed. After removing ℓ, we obtain
Q'=ecadebced, which is also a supersequence of eadbc, caebd, and cabed.
The one-component tree-child network N(Q') is shown in Figure <ref>.
Case 2: ℓ=π_1. By definition, the LTS of ℓ in T(P_i) is P_i, and the LTS of π_i is the empty string for every i>1. In this case, we obtain the tree-child network N(SCS(P_1, P_2, ⋯, P_k)).
Taken together, the discussion of the two cases implies the following result.
Let N be the tree-child network constructed from T(P_1), T(P_2), ⋯, T(P_k) by applying the algorithm with an ordering π: π_1<π_2<⋯ < π_n+1.
It has the smallest HN if and only if ℓ is the smallest element under π, in which case N=N(SCS(P_1, P_2, ⋯, P_k)).
Propositions 1 and 2 imply the following result.
Let X be a set of taxa such that |X|=n+1 and let T be a set of line trees on X in which there is a common lowest leaf ℓ. There is a tree-child network displaying all the trees of T with q reticulations if and only if the permutations on X∖{ℓ} that correspond to the line trees have a SCS of length n+q.
§ NP-HARDNESS OF THE SCS PROBLEM FOR PERMUTATIONS
The SCS problem is NP-hard for permutations
SCS is already known to be NP-hard when all input strings consist of 2 distinct characters <cit.>; let us denote this variant 2-SCS (we further need the trivial constraint that no character appears in every input string). We thus provide a reduction as follows: consider an instance 𝒮 of 2-SCS with m length-2 strings over a size-n alphabet X={x_1,…,x_n}, and an integer k. Let N=n+k+1, and create a size-N set Y={y_1,…,y_N} of separators. In the context of strings, we also write X and Y for the strings x_1… x_n and y_1… y_N, respectively. For any string ab∈𝒮 (with a,b∈ X and a≠ b), we write X_-ab for the subsequence of x_1x_2… x_n obtained by removing a and b, and S_ab = a b · Y · X_-ab. Note that each S_ab is a permutation of X∪ Y. Let us write 𝒮' = {S_ab, ab∈𝒮} and k'=k+N+n. We now prove the following equivalence that completes the reduction.
Strings in 𝒮 have a common supersequence T of size k
⇔ Strings in 𝒮' have a common supersequence T' of size k'
⇒
Build T' = T · Y · X. String T' is a length-k' string, and it is a supersequence of any S_ab for ab∈ S' (since T is a supersequence of ab and X is a supersequence of X_-ab).
⇐
Pick such a string T'. It contains at least one occurrence of Y as a subsequence. Let P,R be the matching prefix and suffix of T' (i.e. T'=P· R) such that R is the smallest suffix containing Y as a subsequence.
Let T be the subsequence of P obtained by removing all separator characters.
We have |P|≤ k'-N = k+n <N, so P cannot contain an entire copy of Y. Hence, for any S_ab = ab· Y· X_-ab∈𝒮', we have that ab is a subsequence of P and X_-ab is a subsequence of R.
Overall, P, and also T, are common supersequences of all ab∈𝒮, and R is a common supersequence of all X_-ab. In order to bound their sizes, note that R contains each character of X and Y at least once, so |R|≥ N+n.
Hence, T has size at most k'-N-n=k, and is a common supersequence of 𝒮.
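The construction used in this reduction is easy to generate mechanically; the sketch below builds the permutations S_ab = ab · Y · X_-ab as lists of symbols (the separator names y1, ..., yN are illustrative).

```python
# Sketch: build the SCS-for-permutations instance S' from a 2-SCS instance S with bound k.
def build_reduction(two_letter_strings, alphabet, k):
    n = len(alphabet)
    N = n + k + 1
    Y = [f"y{j}" for j in range(1, N + 1)]            # the size-N set of separators
    s_prime = []
    for ab in two_letter_strings:                     # each input string ab has length 2
        a, b = ab[0], ab[1]
        x_minus_ab = [x for x in alphabet if x not in (a, b)]
        s_prime.append([a, b] + Y + x_minus_ab)       # the permutation S_ab of X union Y
    return s_prime, k + N + n                         # the instance (S', k')
```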
The TCN problem is NP-hard even for line trees.
Proof. The statement is derived from Theorem 2 and Theorem 3.
Open Problem Does the TCN problem remain NP-hard for two line trees?
The TCN problem for three line trees was studied by Van Iersel et al. in <cit.>.
§ ACKNOWLEDGEMENTS
LX Zhang was partially supported by Singapore
MOE Tier 1 grant R-146-000-318-114 and Merlin 2023. He thanks Yufeng Wu for useful discussion in the early stage of this work.
|
http://arxiv.org/abs/2307.06089v1 | 20230712112709 | Exploring Millions of User Interactions with ICEBOAT: Big Data Analytics for Automotive User Interfaces | [
"Patrick Ebel",
"Kim Julian Gülle",
"Christoph Lingenfelder",
"Andreas Vogelsang"
] | cs.HC | [
"cs.HC"
] |
Acronyms:
* AI: Artificial Intelligence
* ACC: Adaptive Cruise Control
* ADAS: Advanced Driver Assistance System
* API: Application Programming Interface
* CAN: Controller Area Network
* CRISP-DM: CRoss Industry Standard Process for Data Mining
* ECU: Electronic Control Unit
* GDPR: General Data Protection Regulation
* GOMS: Goals, Operators, Methods, Selection rules
* HCI: Human-Computer Interaction
* HMI: Human-Machine Interaction
* HU: Head Unit
* IVIS: In-Vehicle Information System
* KLM: Keystroke-Level Model
* KPI: Key Performance Indicator
* LKA: Lane Keeping Assist
* LCA: Lane Centering Assist
* LDTM: Lean Design Thinking Methodology
* MGD: Mean Glance Duration
* OEM: Original Equipment Manufacturer
* OTA: Over-The-Air
* AOI: Area of Interest
* RF: Random Forest
* SA: Steering Assist
* SUS: System Usability Scale
* SHAP: SHapley Additive exPlanation
* TGD: Total Glance Duration
* UI: User Interface
* UCD: User-Centered Design
* UX: User Experience
Exploring Millions of User Interactions with ICEBOAT: Big Data Analytics for Automotive User Interfaces
Patrick Ebel, University of Cologne, Cologne, Germany (ORCID: 0000-0002-4437-2821)
Kim Julian Gülle, TU Berlin, Berlin, Germany (ORCID: 0009-0007-1284-4402)
Christoph Lingenfelder, MBition GmbH, Berlin, Germany (ORCID: 0000-0001-9417-5116)
Andreas Vogelsang, University of Cologne, Cologne, Germany (ORCID: 0000-0003-1041-0815)
The first two authors contributed equally to this research.
UX professionals need to be able to analyze large amounts of usage data on their own to make evidence-based design decisions. However, the design process for IVIS lacks data-driven support and effective tools for visualizing and analyzing user interaction data. Therefore, we propose ICEBOAT[ICEBOAT: InteraCtive UsEr BehaviOr Analysis Tool], an interactive visualization tool tailored to the needs of automotive UX experts to effectively and efficiently evaluate driver interactions with IVIS. ICEBOAT visualizes telematics data collected from production line vehicles, allowing UX experts to perform task-specific analyses. Following a mixed methods UCD approach, we conducted an interview study (N=4) to extract the domain specific information and interaction needs of automotive UX experts and used a co-design approach (N=4) to develop an interactive analysis tool. Our evaluation (N=12) shows that ICEBOAT enables UX experts to efficiently generate knowledge that facilitates data-driven design decisions.
§ INTRODUCTION
The growing number of features of modern touchscreen-based IVIS and the need to evaluate them with respect to the driving context <cit.> make it increasingly complex to design IVIS that meet user needs and are safe to use. To date, the usability and distraction evaluation of IVIS is mostly based on qualitative feedback and small-scale user studies <cit.>. However, these approaches do not scale with the increasing complexity of the design task and the increasing number of features that need to be evaluated. With limited resources for user studies, practitioners often lack customer insight and must make subjective judgments instead of evidence-based design decisions. This contradicts the principles of UCD <cit.> and becomes evident when considering that the usability of infotainment systems has been the biggest source of problems for new car owners for several years <cit.>. These shortcomings lead to an increasing need for data-driven support in the automotive UX design process <cit.>.
In order to make user-centered design decisions, automotive UX professionals must be able to work with large amounts of data collected from customer vehicles. We argue that big data visualization tools, which automate data processing and visualization, play a critical role in this process. They allow experts to explore how customers interact with the IVIS in real-world conditions, and thus inform decision making. These analytical tools must be developed according to the needs of domain experts, must communicate results through visualizations that serve the information needs <cit.>, and should keep the overhead for UX experts low <cit.>.
Therefore, we propose ICEBOAT, an interactive visualization tool that enables automotive UX experts to effectively and efficiently analyze driver interactions with the center stack touchscreen to evaluate UI designs of touchscreen-based IVIS. The tool visualizes user interaction data, driving data, and glance data, that is collected live from production line vehicles. UX experts can specify any task they want to analyze, either by manually specifying the customer journey, or by using an interactive IVIS emulator. ICEBOAT automatically processes the data and generates various statistics and visualizations that are based on an interview study with UX experts and previous work by <cit.>. An interactive drill-down concept allows UX experts to start wide and zoom in to analyze individual touchscreen interactions. UX experts can compare different flows according to performance-related and distraction-related metrics such as time-on-task, number of glances, or total glance duration.
Following a mixed methods UCD approach and building on the work of <cit.>, we make the following contributions:
* We present the information and interaction needs of automotive UX experts in analyzing large amounts of customer data to evaluate touchscreen-based IVIS.
* We extend the visualizations presented by <cit.> to the information needs of UX experts and develop an interaction concept from task definition to user flow analysis that supports automotive UX experts in their data analysis.
* We present a tool that automates the processing and visualization of touchscreen interactions, driving data, and glance data collected from customer vehicles, allowing UX experts to interactively explore and evaluate drivers' IVIS interactions.
* We evaluate the tool with industry experts (N=12) and show that ICEBOAT meets their needs and can improve the evaluation of touchscreen-based IVIS.
§ BACKGROUND AND RELATED WORK
§.§ Data in the Design and Evaluation of IVIS
The design and evaluation of touchscreen-based IVIS is an important factor in the overall product design process of a car. However, consumer demands often conflict with safety regulations and guidelines <cit.>. This domain-specific conflict must, therefore, be considered throughout the design process of IVIS. To create interfaces that are enjoyable and safe to use, the design and evaluation of IVISs relies heavily on questionnaires, explicit user observation, or experimental user studies <cit.>.
However, these studies have to be designed, planned, executed, and analyzed, making them slow and expensive. As a result, they do not meet the needs of automotive UX experts to quickly and easily gain insight into customer behavior <cit.>. Using customer data to support decision making can be a competitive advantage <cit.>.
Live data collection from customers and continuous analysis is already standard in web and app development, and effectively used to support decision making <cit.> and continuous product improvement <cit.>. This is different in the automotive domain, where the decision-making culture, technology, and organization have been slow to adapt <cit.>. Automotive UX professionals report that they lack the tools to access and analyze relevant data, even though modern cars collect large amounts of driving and interaction-related data. To gain insights from customer data, UX experts often have to submit requests to data scientists and involve other departments <cit.>. As a result, the problem that traditional methods are slow is only shifted, not solved. UX experts need to be empowered to analyze and visualize user interaction data. Developing data analysis tools that meet their needs and facilitate their design activities is critical to making their jobs easier.
§.§ Creating Meaningful Interactions with Big Data
In today's product development, decision makers, regardless of the domain, aim to make data-driven decisions. This allows them to design products that are tailored to customer needs <cit.>. However, the main challenge in data-driven decision making is not the acquisition of raw data itself, it's the challenge of extracting useful knowledge from it <cit.>. To create solutions that help designers, engineers, or scientists, the right information must be available at the right time <cit.>. Therefore, tools and analytical solutions need to communicate the results of analysis through meaningful visualizations and clear representations <cit.>.
However, creating meaningful visualizations and intuitive tools for analyzing big data is far from obvious. Although several commercial general-purpose tools exist, they often fail to meet domain and task-specific needs. Domain experts often require advanced visualization and interaction concepts that are not supported by commercially available tools. These tools stick to a small set of standardized visualizations <cit.>.
While these standardized dashboards are a valuable tool for quickly communicating KPI to stakeholders or managers, they are often disconnected from the domain expert's workflow and serve as a reporting tool rather than an exploratory knowledge generation tool. To support domain experts in their work, it is important to create solutions that are specific to their workflow. Individual visualizations often address a specific task that is part of a larger workflow. Therefore, these visualizations need to be linked in such a way that they support this workflow as a whole <cit.>. An example of this is the common need to explore large amounts of data at multiple scales. This can be achieved by visualizing the data at different levels of granularity, starting broadly and zooming in on details as the analysis progresses <cit.>. In addition, domain experts are often non-specialists when it comes to analyzing large amounts of data. It is therefore important to avoid information overload and to use visualizations that are easy to understand and whose benefits are immediately apparent.
§.§ Big Data Visualizations to evaluate Automotive User Interfaces
The evaluation of IVIS differs from web or mobile applications. While traditional usability metrics such as time on task or error rates play an important role, they are far from sufficient for evaluating a driver's interaction behavior holistically. Drivers often interact with IVIS while driving even though they are required to constantly monitor the driving scene even in partially automated driving as we see it on the road today (Level 1 and 2 according to SAE <cit.>).
This makes it important to not only evaluate the usability <cit.> of touchscreen-based IVIS but also their distraction potential. Regarding the assessment of distraction, the visual demand of interfaces has proven to be an effective measure, as long glances away from the road (t>2 s) are directly correlated with increased crash risk <cit.>. As a result, the evaluation of IVIS in terms of usability and driver distraction is a well-researched topic <cit.>. However, there is a considerable gap between the academic research conducted to evaluate IVIS and the tools and methods available to the professionals in industry who eventually design these systems. Even though automotive UX professionals express clear needs for effective visualizations to support their work, there is not much research on big data analytics to evaluate user interactions with IVIS. Most visual analytics approaches in the automotive domain focus on visualizing data collected from a few sensors <cit.> or in controlled experimental studies <cit.>. For example, <cit.> present an approach to visualize spatiotemporal data collected during user interface interaction studies. Their approach provides valuable insights into the combined visualization of explicit and implicit data from different sources. However, the tool is built according to academic needs and focuses on the visualization of individual situations recorded during user studies. Therefore, it is not applicable to the challenges faced by industrial UX experts when they need to analyze millions of data points collected live from customers. In this context, driving, glance, and interaction data collected live and in large quantities from customer vehicles OTA must be automatically processed, stored, and visualized to allow UX experts to evaluate the usability and distraction potential of IVIS. Considering the automotive specific requirements and constraints that apply to the analysis and visualization of automotive event sequence data <cit.>, most related approaches <cit.> and commercial alternatives (e.g., UserTesting[<https://www.usertesting.com>], UserZoom[<https://www.userzoom.com>]) are not suited to the problem at hand. A solution explicitly focused on industry professionals has been proposed by <cit.>, who have developed an approach to visualize large amounts of user interaction, driving, and glance behavior date. They visualize this data at three levels of granularity and show that UX experts can use these visualizations to evaluate secondary touchscreen interactions. However, they only propose prototypes of these visualizations and do not provide insight into whether the visualizations meet the needs of UX professionals and how UX experts should use these visualizations in their workflow.
§ APPROACH
Our approach aims to improve the industrial design process of IVIS. Designers and UX researchers report that they are often forced to neglect design evaluation due to time constraints and data accessibility issues <cit.>. To develop a solution that meets the needs of automotive UX experts and improves their design and evaluation process, we followed a mixed methods UCD approach <cit.>.
In the first phase, we conducted semi-structured background interviews (n=4) to extract requirements according to information and interaction needs and to evaluate the visualizations originally proposed by <cit.>. Using a participatory design approach, we then co-designed prototypes with four automotive UX experts. Throughout the co-design approach, we walked different focus groups through the current state of the prototypes and discussed potential improvements and necessary changes.
After the fourth co-design session, we evaluated the prototype in a second study where we conducted usability testing and explored the context of use of the prototype.
§ STUDY 1: EXTRACTING INFORMATION AND INTERACTION NEEDS
The first study has two objectives: First, to confirm the results presented by <cit.>, who claim that their multi-level user behavior framework was found useful by automotive UX experts. Second, <cit.> propose three visualizations, but do not present any insights or solutions on how these visualizations can be connected to effectively support UX experts in evaluating IVIS. Therefore, we extract detailed information and interaction needs that form the basis for the subsequent co-design process. For this purpose we conducted semi-structured interviews.
§.§ Participants
We conducted semi-structured interviews with 4 UX experts (I1 - I4). All of them had between five and nine years of professional experience. At least five of those years were spent in the field of UI design. They have also all been with Mercedes-Benz for more than five years. We therefore consider them to be knowledgeable about automotive design processes and working methods.
§.§ Procedure
The interview agenda consisted of three parts: introduction, main section, and a conclusion. Following the recommendations of <cit.>, we prepared a set of open-ended questions and optional follow-up questions for each section. The latter were intended to refine ambiguous responses and further guide the interview. In addition, we periodically summarized the responses during the interview to reflect and confirm correct understanding. While the introduction was designed to create an open atmosphere and establish a common ground, we ended each interview with a conclusion, asking the interviewees if there was anything they wished to add. The main section contained the majority of the questions. Here we asked the interviewees about their information needs, interaction needs, and the visualizations they would expect to see in a potential tool that supports their current workflow. Regarding the visualizations, we gave the interviewees some time to develop their ideas. We then presented the interviewees with the aforementioned visualizations suggested by <cit.>. By showing them the existing visualizations, we hoped to support their ideation process <cit.> and wanted to confirm the informal evaluation presented by <cit.>.
§.§ Information Needs
After analyzing the coded interview transcripts, we identified 39 information needs that fall into 7 categories. We present these needs below, where Count refers to the number of unique needs within a category and Support refers to the number of total needs expressed by the participants.
INF-1: Usability and Distraction-Related Metrics (Count = 8, Support = 12).
Driver interactions with IVIS while driving are considered secondary tasks <cit.>. Thus, not only usability but also driver distraction plays a major role in evaluating automotive user interfaces <cit.>. Accordingly, the respondents formulated various information needs that revolve around the understandability of the UI (I1, I2, I3), performance-related metrics such as time on task (I1), error rates (I1, I2), or the number of interactions needed to perform a task (I2). They also stated that for a holistic evaluation they need to be able to evaluate the visual demand (e.g. number of glances) of features (I1, I2) and individual user flows (I1). They also stressed the importance of being able to see the correlation between IVIS usage and driving data.
INF-2: Feature Usage Information (Count = 8, Support = 12).
The automotive industry is moving from a technology-driven development approach to a more user-centric one <cit.>. While this process has been going on for many years, there is still a lack of knowledge about how features are used by customers. This leads to many features being carried over from old releases that may not be needed by customers <cit.>. During the interviews, feature usage information was the first KPI that respondents thought of. The typical questions UX professionals want to answer based on data insights are questions like “How often is a feature used?” (I1-I4) or “How long is a feature used on average?” (I1, I2). Participants indicate that information about feature usage is valuable because it is often used to decide whether to continue or discontinue a feature.
INF-3: Usage Pattern Visualizations (Count = 7, Support = 11).
To gain deeper insights into user behavior, UX experts expressed different needs regarding the analysis of user flows and how users interact within certain features (Usage Patterns). They want to know how people use the system (I2, I4), how they navigate the system to perform certain tasks (I1, I4), and what kind of UI elements they use (I2). Participants are also interested in merging this information with usability and distraction-related metrics (e.g., to compare different flows).
INF-4: System Information (Count = 6, Support = 10).
The cars in an OEM's fleet are very heterogeneous, both in terms of hardware and software. Not only do manufacturers offer different models that differ according to the market in which they are sold, but customers can also configure their cars according to their personal preferences (e.g., different sizes of center stack touchscreens) <cit.>. This, combined with the long product lifecycle and limited ability to perform OTA updates, especially for older models, results in many different UI versions being used by customers. This is reflected in the information needs of UX professionals. They state that they need to compare usability and distraction-related metrics, feature usage information, and usage patterns across car models (I1-I4), software versions (I2, I4), screens (driver vs. front passenger vs. rear passengers), and screen sizes (I2). This information is needed to assess the interplay between hardware and software but also to track progress.
INF-5: Contextual Information (Count = 5, Support = 6).
Driver behavior and driver interactions are highly context sensitive <cit.> and participants state that they need contextual information to better judge individual interaction sequences. For example, they state that they need information about the driving situation (I1, I3) to be able to judge how drivers interact in different situations. They also want to know how many passengers were present (I2) and whether a cell phone was connected to the IVIS (I2), arguing that these could be additional sources that influence driver behavior without being represented in the interaction, glance, or driving data.
INF-6: Input Modalities (Count = 3, Support = 4).
Participants were also interested in the different types of modalities that drivers or passengers can choose to interact with IVIS (e.g., different modes of touch interaction, voice, or steering wheel control). In particular, they want to know which modality drivers primarily use (I1) and whether this use differs across features (I2, I4).
INF-7: User Information (Count = 2, Support = 3).
For user-specific information, respondents see value in comparing data from different regions (I3, I4) or comparing data for different target groups (e.g., by demographics or frequently used features).
Regarding the visualizations proposed by <cit.>, participants agreed that they already partially address the information needs INF-1, INF-2, INF-3, and INF-5. However, they do not provide system information (INF-4), information about different modalities (INF-6), or user information (INF-7).
§.§ Interaction Needs
To extract the interaction needs of the participants, we asked them to imagine a tool that would meet all their information needs and to explain how they would like to use this tool in their daily work. The expectations were very consistent, as they all expected to use the tool to define a new UI concept, to validate an existing and already implemented UI concept, and to question the customer value of a feature. Based on these insights, we then explored how users would like to interact with the anticipated tool and how they would like to configure it to meet their needs. The answers to these questions form the interaction needs. As shown below, we grouped the 14 individual needs into 4 categories.
INT-1: Task Definition (Count = 4, Support = 10).
Participants emphasized that they want to configure their analytics based on individual use cases, rather than having a "one-size-fits-all" dashboard. While they valued certain standard metrics to be displayed, they wanted to define specific tasks or characteristics for which they needed detailed analytics. To define the tasks of interest, all participants (I1-I4) asked if it would be possible to interactively define sequences without having to manually enter the object identifiers. They suggested using a desktop-based version of IVIS, arguing that this would facilitate task definition since the UI software consists of thousands of elements. However, for known use cases, they suggested traditional input options such as drop-down menus to select UI elements as start and end points (I1, I2). Here, one participant (I2) mentioned that the analysis tool should use the same UI identifiers as those used in the UI concept description.
INT-2: Analysis (Count = 5, Support = 13).
When it came to analyzing, participants were concerned about overall complexity, noting that traditional dashboards often tend to be overloaded and cluttered. Accordingly, they asked for features that would allow them to reduce the complexity of the results. They also wanted to be able to drill down through different levels of granularity depending on their use case, rather than being presented with all the results at once (I1, I2, I4). All participants argued that they need to be able to compare usage by system, context, and user information (I1-I4). Most of the proposed filtering options focused on system-specific information such as car type or software version.
INT-3: Operating Aids (Count = 3, Support = 5).
Two participants (I1, I2) mentioned that the tool should be adaptable according to the user's expertise. They suggested that the tool could provide an “exploration mode” (I2) to help them explore the UI. They also asked for the possibility to display reduced versions of the plots proposed by <cit.>.
INT-4: Sharing and Collaboration (Count = 2, Support = 4).
Participants expressed the need to share the visualization with colleagues and decision makers, either in a portable format (I1-I3) or through a link that provides direct access (I3).
The visualizations presented by <cit.> are stand-alone visualizations without a user interface. Therefore, they do not address any of the identified interaction needs.
§ INTRODUCING ICEBOAT
Study 1 identified the information and interaction needs of automotive UX experts for visualization and analysis of customer data and confirmed that the visualizations presented by <cit.> partially satisfy the information needs of UX experts. However, they do not provide an interface that addresses the interaction needs. Therefore, following INF-1 – INF-7 and INT-1 – INT-4, we developed ICEBOAT, an interactive user behavior analysis tool for automotive UI. ICEBOAT refines the visualizations of <cit.>, adds new functionalities and connects them in a meaningful way. Built on top of the telematics data logging framework introduced by <cit.>, it automates task definition, data processing, and visualization generation, making large amounts of customer data easily accessible for UI evaluation.
We developed ICEBOAT using a co-design approach with four iterations. We invited the background interview participants as co-designers to each of the sessions, which were conducted remotely using Microsoft Teams.
§.§ System Architecture
ICEBOAT consists of a web-based frontend application for data visualization and a backend system for data processing (see Appendix <ref>). The frontend, developed using the JavaScript framework Vue.js[<https://vuejs.org>], receives data from three different services: The Concept Database (containing all UI information), the IVIS Emulator and the Backend. The IVIS Emulator virtualizes the IVIS so that it can be executed on a computer as if it were running in the car.
The backend is divided into two services: an API service built with the FastAPI[<https://fastapi.tiangolo.com/>] web framework and a data service. The API service receives the analysis requests, passes them to the data service, and returns the results. The data service uses PySpark[<https://spark.apache.org/docs/latest/api/python/index.html>] to efficiently extract, transform, and load the customer data stored in the data lake. The data lake is updated daily with the latest customer data. After running the analytical queries and extracting the relevant user flows (see <ref>), the data service returns the results to the API service. The frontend enhances the processed data with additional UI-specific information from the Concept Database. We chose this architecture to make ICEBOAT easily extensible and to ensure interoperability (e.g., with another backend solution) <cit.>.
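A minimal sketch of how such an API service could forward a task-specific analysis request to the data service is shown below. It is illustrative only: the endpoint name, request fields, and the in-memory run_flow_query stub are hypothetical and not ICEBOAT's actual code; in the real system, the data service would translate the request into PySpark queries against the data lake.

```python
# Sketch of the API service; endpoint, fields, and stub are illustrative only.
from typing import List
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="ICEBOAT API (sketch)")

class FlowRequest(BaseModel):
    start_element: str               # UI element that starts the task
    end_element: str                 # UI element that ends the task
    car_types: List[str] = []        # optional filters, e.g., model series
    software_versions: List[str] = []

def run_flow_query(req: FlowRequest) -> dict:
    """Stand-in for the PySpark data service: the real service would filter the
    interaction logs in the data lake and aggregate the matching user flows."""
    return {"flows": [], "n_trips": 0}

@app.post("/flows")
def analyze_flows(req: FlowRequest) -> dict:
    # The API service only forwards the request and returns the aggregated result.
    return run_flow_query(req)
```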
§.§ Interactive Web Application and IVIS Emulator
Figures <ref> and <ref> show an overview of the final tool after four iterations. The interface consists of a Dashboard Page and a User Flow Analysis Page. The user flow analysis page consists of four panels that fade in one after another based on the user's input, addressing the need for a drill-down mechanism to reduce complexity (INT-1). An overview of all components is given below:
* Dashboard Page: Upon opening ICEBOAT, users are presented with a dashboard page (see Figure <ref>) that welcomes them and gets them started by explaining the purpose of the application. The tiles below the introduction report specific KPI that describe the underlying data. For example, the number of trips on which the analysis is based and the number of logged interactions with the head unit. This page therefore onboards users and provides a perspective on the data to facilitate entry into the analysis (INT-3).
* Task Definition: The tool provides two ways for users to define the task they want to analyze. (1) They can select the UI elements that define the start and end of a task from a searchable drop-down menu filled with all the UI elements that exist within the IVIS.
(2) Alternatively, they can define a task using the IVIS Emulator (see Figure <ref>). ICEBOAT then automatically extracts all similar flows from the customer data and visualizes them. This provides the user with a playful and easy way to define tasks without having to know the naming conventions of specific UI elements. These two options address the interaction needs of INT-1 and INT-3.
* Task Overview: After loading the data for the specified task, the Task Overview Panel (see Figure <ref>) presents aggregations for all user flows between the start and end event of the task (INF-3). The data is presented as an adapted Sankey diagram <cit.> and in tabular form. The two views provide information about the average time between two consecutive interactions in a flow, the gestures used, the relative and absolute frequency of flows, the total number of interactions in a flow, and the average flow duration (INF-1, INF-3). The Filter Panel on the right-hand side allows users to customize the visualization to reduce visual clutter (INT-2). It also allows filtering for specific software versions or car types (INF-4). The visualized flows are based on all of the user interaction sequences collected from production vehicles that match the task definition (see Figure <ref>) and the applied filters. A minimal sketch of this flow extraction and aggregation step is given after this list.
* Flow Comparison: The Flow Comparison Panel shows a reduced Sankey diagram of the selected flows (INT-3) and a box plot comparing each of the flows according to a selected metric, such as gaze duration or time on task (INF-1, INF-3). The box plots show the distribution of these values for all sequences contained in the flow. Thus, each dot represents a single sequence, which users can select (by clicking) to open the Sequence Details view for that interaction sequence.
* Sequence Details: The Sequence Details Panel allows the user to explore details about a single interaction sequence. The view blends glance data (on-road, off-road, center stack touchscreen) with contextual driving data such as speed or steering angle, and embeds the touchscreen interactions <cit.> (INF-5). On the right, users see a history of sequences they have viewed and can save specific ones as favorites (INT-3).
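As announced in the Task Overview item above, the flow extraction and aggregation behind that panel can be illustrated with a small, self-contained sketch. It is not ICEBOAT's implementation: the event schema (UI element identifier plus timestamp), the example trips, and the reported statistics are assumptions for illustration, and the production system performs the equivalent computation with PySpark on the data lake.

```python
# Sketch: extract user flows between a start and an end UI element and aggregate them.
# The event schema and the example data are hypothetical.
from collections import defaultdict
from statistics import mean

def extract_flows(trip_events, start, end):
    """trip_events: time-ordered list of (ui_element, timestamp) tuples for one trip."""
    flows, current = [], None
    for element, t in trip_events:
        if element == start:
            current = [(element, t)]          # open a new candidate flow
        elif current is not None:
            current.append((element, t))
            if element == end:                # close the flow at the end element
                flows.append(current)
                current = None
    return flows

def aggregate(all_flows):
    """Group identical element sequences and report share and mean duration."""
    groups = defaultdict(list)
    for flow in all_flows:
        key = tuple(e for e, _ in flow)
        groups[key].append(flow[-1][1] - flow[0][1])
    total = sum(len(durations) for durations in groups.values())
    return [
        {"flow": " > ".join(k), "share": len(d) / total, "mean_duration_s": mean(d)}
        for k, d in sorted(groups.items(), key=lambda kv: -len(kv[1]))
    ]

# Example: two trips with interaction events (element id, timestamp in seconds).
trips = [
    [("nav_home", 0.0), ("search_field", 1.2), ("keyboard", 2.0), ("lets_go", 6.5)],
    [("nav_home", 0.0), ("recents_list", 1.0), ("lets_go", 3.1)],
]
flows = [f for trip in trips for f in extract_flows(trip, "nav_home", "lets_go")]
print(aggregate(flows))
```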
§.§ How ICEBOAT Empowers Automotive UX Experts
The provided visualizations and analyses support UX experts in the design and evaluation of touchscreen-based IVIS. With ICEBOAT, we support professionals in overcoming three key challenges related to the use of big data analytics in the design and evaluation of IVIS:
1. Data-Driven Decision Making:
Due to cultural, organizational, and technological challenges, data plays only a minor role in the decision-making process related to automotive design <cit.>. As technology improves, large amounts of driving- and interaction-related data are being collected. However, UX professionals still report that they lack the tools to access and analyze the data directly and independently <cit.>.
Based on a telematics data processing framework, ICEBOAT gives UX professionals permanent and immediate access to usage data collected live in the field. This allows UX experts to analyze interaction, glance and driving data independently throughout their workflow.
2. Automotive-Specific Analysis: General-purpose tools for big data analytics are often disconnected from the workflows of domain experts <cit.>. This is also true in the automotive domain. UX experts report (Study 1) that current tools do not meet their specific needs for task definition, user flow exploration and comparison, and visualization of individual usage sequences. ICEBOAT supports the definition of user tasks, allows users to compare specific flows according to various usability and driver distraction related metrics, and enables UX experts to visualize details of individual interactions with the IVIS.
3. Information Overload: UX experts are not trained to analyze large amounts of data. Therefore, data visualizations and interactions must be easy to understand and their benefits obvious to avoid information overload <cit.>. ICEBOAT allows UX experts to explore UI-relevant data without requiring technical knowledge. The IVIS emulator allows UX experts to easily define the scope of their analysis and pre-defined KPI are visualized decoupled from the detailed user flow analysis to avoid information overload. The user flow analysis allows users to start with a broad overview and then zoom in on details as the analysis progresses. This drill-down concept (panels appear one after the other) fits the workflow of UX experts and presents only the information that is needed.
§ STUDY 2: EVALUATION WITH DESIGNERS AND DATA SCIENTISTS
To assess whether ICEBOAT enables UX experts to independently explore large-scale behavioral data, we conducted an evaluation study with UX researchers, designers, and data scientists.
§.§ Method
We conducted usability testing, interviews, and a context of use questionnaire. We aimed for a representative sample of participants and used standardized measures to assess usability. To guide the evaluation, we followed a test plan that we created based on the recommendations of <cit.>.
§.§.§ Participants
We recruited 12 potential users from Mercedes-Benz and MBition, a Mercedes-Benz software hub: 4 designers, 4 UX researchers, and 4 data scientists. We included data scientists for two reasons: First, due to cross-functional development teams, data scientists often work closely with designers or UX researchers in decision-making processes. Second, because of their familiarity with data analysis, we expected data scientists to provide a different perspective and baseline for understanding data. We did not invite participants who had already taken part in the first study or the co-design sessions, as this could skew the results when evaluating usability <cit.>. The age of the participants ranged from 21 to 41 years (mean 29.6, SD 5.6) and their work experience from 0.5 to 20 years (mean 5.3, SD 5.8). All but one participant had a college degree.
§.§.§ Scenario and Evaluation Tasks
To create a realistic evaluation environment, we derived a test scenario from the storyboard resulting from Study 1 (see Figure <ref> in <ref>). In this scenario, the UX experts are asked to evaluate the destination entry task of the navigation feature. They should use the IVIS Emulator to define a representative task and then analyze it for bottlenecks, driver distraction, and outliers in glance behavior.
§.§.§ Procedure
We collected demographic data in a pre-survey before the experiment. At the beginning of the experiment, we introduced the scenario and asked the participants to complete seven evaluation tasks (compare <ref>) that resembled the scenario introduced above. First, we shared the IVIS emulator with the participants and asked them to complete the first task. Then we switched to the ICEBOAT screen. Before the next task, we had the participants practice thinking aloud by asking them to describe the dashboard page and give feedback. Users then navigated to the User Flow Analysis page of the tool and proceeded with the second task. During the test, participants were free to explore the tool and ask questions. After the participants completed all tasks, we collected their feedback both verbally and with the post-survey. We also recorded whether participants encountered any technical problems during the test. Since participants in a lab study usually answer usability questionnaires on-site <cit.>, we had the participants fill out the surveys online immediately after the study. However, we stopped recording the interviews and turned off the cameras and microphones so that participants would not feel observed while completing the survey.
§.§.§ Measures
We counted and coded the errors participants made while solving the tasks, and collected participants' feedback and interpretations of the visualization. We also had the participants fill out the SUS questionnaire <cit.> and the Context of Use questionnaire (see <ref>).
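For reference, the SUS scores reported in the next section follow the standard scoring rule that maps the ten 1-5 item responses to a 0-100 scale; the sketch below assumes this standard rule and uses made-up responses rather than the study data.

```python
# Sketch of standard SUS scoring (ten Likert items, 1-5); the responses below are made up.
def sus_score(responses):
    """responses: ten Likert ratings (1-5), item 1 first."""
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # odd-numbered items: r-1, even-numbered: 5-r
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)          # scale the 0-40 sum to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```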
§.§.§ Test Environment & Schedule
Due to the distributed work environment, we conducted all experiments remotely using Zoom. With the users' permission, we recorded each test session to analyze the session afterwards and to quantify the error rates per task. We prepared a setup with the IVIS emulator open on one screen and the ICEBOAT tool open on another. This mimics the setup we imagine users would have when using the prototype in production. Using screen sharing, we allowed participants to remotely interact with the emulator and analysis tool. This makes our test environment as similar as possible to the production environment. We ran 12 tests of one hour each over three weeks.
§.§ Quantitative Results
We present results from the SUS and Context of Use questionnaires, as well as additional qualitative insights.
§.§.§ SUS
ICEBOAT received a mean SUS score of 68.125 (MD=70, SD=16.89), which, according to <cit.>, is average. While data scientists rated the tool with a mean score of 80, UX experts rated it with a mean score of 62. <cit.> reports a similar spread between domain experts and data scientists. When evaluating the usability of the proposed big data analytics platform, data science testers rated the platform almost 20 points higher than policymakers (75.0 vs. 56.7).
§.§.§ Context of Use
The mean score of the Context of Use questionnaire was 4.2 out of 5 (MD=4.24, SD=0.33). In contrast to the SUS, only two questions were rated differently by data scientists and UX experts. The Pearson correlation between the results of the Context of Use and SUS questionnaires was not significant (R=0.55, p=0.061), suggesting that usability and value to the experts' workflow are not directly related.
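The reported correlation check can be reproduced with a few lines of SciPy; the twelve score pairs below are placeholders and not the participants' actual ratings.

```python
# Sketch of the correlation check between SUS and Context of Use scores (dummy data).
from scipy.stats import pearsonr

sus = [80, 62, 75, 55, 90, 70, 65, 72, 85, 50, 68, 77]
context_of_use = [4.1, 4.4, 3.9, 4.3, 4.6, 4.0, 4.5, 3.8, 4.2, 4.3, 4.1, 4.7]

r, p = pearsonr(sus, context_of_use)
print(f"r = {r:.2f}, p = {p:.3f}")  # interpreted as significant only if p < 0.05
```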
§.§ Qualitative Feedback
Overall, participants found the tool valuable and easy to use. They reported that it would open up new possibilities for them and make their workflow much more efficient, “I think this really makes our job easier, especially when you see how quickly you can get evaluations compared to how long it takes now.” (P3) (INT-2). They also report that ICEBOAT provides effective insights because it “[...] would provide better answers to many questions” (P9). P9 further states: “It is relatively difficult for us to make statements about groups that drive premium vehicles. With the tool you could get to those people.” (INF-7). They also appreciated the ability to define the task using the IVIS Emulator, as it allows them to define the scope of their analysis without having to know the identifiers of specific UI elements (INT-1). They report that this facilitates exploration and reduces the burden of using this tool. The design and layout of the tool was generally well received.
§.§ Data Understanding
ICEBOAT effectively stimulated discussion about usability and safety improvements as participants solved the tasks. The Sankey diagram visualization was easy for participants to understand, and they were able to identify bottlenecks using the color scale or by manually comparing interaction times (shown when hovering over the flows) (INF-3). One participant immediately suggested that the search suggestions could be improved to reduce the number of characters the driver has to type, because “[t]he list keeps updating as you type. So it takes the user more time to find what they are looking for if they type more characters.” (P8).
When comparing the top three flows based on the number of glances, 5 participants asked for clarification on how to interpret the box plots, but were able to identify the flow with the lowest average number of glances once explained (INF-1). Participants quickly identified the flow with the most glances (INF-1) and appreciated the ability to select individual sequences to open the Sequence Details Panel (INT-2). Using the Sequence Details Panel, they were able to assess the dependencies between glance, interaction, and driving behavior (INF-5), “The driver is on the move and slows down in the course of the interaction” (P6), “after brief glances at the road, the driver immediately performs several interactions” (P10). However, participants interpreted the steering angle changes differently, with some interpreting them as a sign of distraction and others as a driving maneuver. Overall, participants found the tool helpful and argued that the insights can be particularly valuable in defining the scope of specific user studies to explore not only the “what” but also the “why”.
§.§.§ Errors
In general, participants reported that they understood the tasks easily and were able to complete them efficiently.
When interacting with ICEBOAT (Task 2-7), participants made only minor errors. For example, two participants initially chose a minimum support that was too high or too low, making the visualization either too cluttered or too sparse. Also, to create a reduced Sankey diagram in Task 5, four participants wanted to further reduce the flows using the minimum support instead of using the checkboxes in the table.
Most of the errors occurred when interacting with the IVIS Emulator. While all participants successfully created a recording, only five out of twelve users did so with the expected start and end, as the remaining participants did not start the recording on the expected screen. When asked, the participants stated that they thought they should start the recording directly from the main menu. However, this is more of a study-induced error with no practical implications.
When asked to elaborate on their errors, participants stated that it takes some time to get used to the tool but “[o]nce you get used to it and it's established as a working tool, it's super helpful.” (P12).
§ LIMITATIONS AND FUTURE WORK
While our results show that ICEBOAT effectively empowers UX experts and meets most of the information and interaction needs for analyzing large amounts of interaction data, some limitations should be considered. First, data scientists rated the usability of the tool higher than UX experts. This may be due to their experience with other data analysis tools. However, it also suggests that further research should be conducted to address the shortcomings with respect to users unfamiliar with data analysis. In addition, the slight delay and minor issues with the screen sharing and remote control feature may have influenced the results.
Second, the study only considered touch interactions on the center stack screen. However, drivers can also interact with IVIS using speech or hardkeys. Thus, to satisfy INF-6, the next step would be to introduce these modalities in ICEBOAT.
Furthermore, we only interviewed employees of one OEM. While related work <cit.> suggests that development practices and challenges are similar across most automotive OEM, information and interaction needs may be skewed.
Due to privacy concerns, we are not allowed to collect personal data. Thus, the only way to satisfy INF-7 is to use a combination of available filters to define “target groups” (e.g., luxury car buyers vs. compact car buyers, as indicated by P9).
Finally, we recorded the tests remotely, and 4 participants reported that the remote control function temporarily stopped working. While we were able to immediately restore control for 3 of the 4 people, this prevented one participant from completing a task. We had this participant verbally instruct us to complete the task and then restored remote control.
§ CONCLUSION
We present ICEBOAT, an interactive tool that makes millions of in-vehicle user interactions available to UX experts to effectively and efficiently visualize and evaluate drivers' touchscreen interactions with IVIS.
In Study 1, we identify the information and interaction needs of UX experts when analyzing large amounts of telematics data. Our findings reveal a design trade-off: UX experts want to access as much data as possible and perform IVIS-specific analyses, but are deterred by the complexity of traditional big data visualization tools. ICEBOAT addresses this conflict of interest by (1) allowing users to define a task via a IVIS emulator, (2) automating all data processing and cleaning while still allowing manipulation of the metrics that matter, and (3) providing an interactive drill-down mechanism that allows users to start broad and zoom into the details of individual interactions. In Study 2, we show that UX experts and data scientists can effectively use ICEBOAT to visualize large amounts of automotive usage data to evaluate touchscreen-based IVIS. Most importantly, ICEBOAT empowers UX experts and contributes to the democratization of data in the automotive domain.
We want to thank Fabian Ober for his work on the user flow recording option.
§ APPENDIX
§.§ Evaluation Tasks
* Use the record button of the IVIS emulator to record a flow beginning at the navigation system's start screen and ending with the "Let's Go" button.
* Use the record file to define and analyze the customer journey.
* What are the top 5 flows (by share)? Use the filters to only display these flows.
* Identify one bottleneck in the flows. Could you explain the potential causes?
* Compare the glance behavior (count) of the first 3 flows.
* Which sequence in flow 1 has the highest glance count? (open the Sequence Detail View).
* Identify one long glance and explain the driving situation. Are there possibly distracting interactions?
§.§ Context of Use Questionnaire
§.§ Storyboard
§.§ Sequence Diagram
|
http://arxiv.org/abs/2307.04586v2 | 20230710142856 | Timbre transfer using image-to-image denoising diffusion implicit models | [
"Luca Comanducci",
"Fabio Antonacci",
"Augusto Sarti"
] | eess.AS | [
"eess.AS"
] |
Timbre transfer techniques aim at converting the sound of a musical piece generated by one instrument into the same one as if it was played by another instrument, while maintaining as much as possible the content in terms of musical characteristics such as melody and dynamics. Following their recent breakthroughs in deep learning-based generation, we apply Denoising Diffusion Models (DDMs) to perform timbre transfer. Specifically, we apply the recently proposed Denoising Diffusion Implicit Models (DDIMs) that enable to accelerate the sampling procedure.
Inspired by the recent application of DDMs to image translation problems we formulate the timbre transfer task similarly, by first converting the audio tracks into log mel spectrograms and by conditioning the generation of the desired timbre spectrogram through the input timbre spectrogram.
We perform both one-to-one and many-to-many timbre transfer, by converting audio waveforms containing only single instruments and multiple instruments, respectively.
We compare the proposed technique with existing state-of-the-art methods both through listening tests and objective measures in order to demonstrate the effectiveness of the proposed model.
§ INTRODUCTION
Timbre is an extremely important perceptual aspect of music, yet it is hard to both model and define.
The concept of musical timbre can be defined as the perceived characteristics of a musical sound that are different from pitch and amplitude contours <cit.>.
Timbre transfer concerns the task of converting a musical piece from one timbre to another while preserving the other music-related characteristics. Although this operation is far from trivial, it is of great interest for several applications, from the development of plugins to be used in Digital Audio Workstations (DAW) to enabling the playback of sounds corresponding to musical instruments that are not widely available.
In this paper, we present DiffTransfer, a technique for timbre transfer which is tested both between single and multiple instruments and is based on a continuous Denoising Diffusion Implicit Model (DDIM) with deterministic sampling <cit.>, a modified version of Denoising Diffusion Probabilistic Models (DDPMs) that are trained using the same procedure, but allow for faster sampling times. Specifically, in <cit.> it was empirically shown that DDIMs allow for 10×-50× faster wall-clock time performances with respect to DDPMs.
In order to convert one timbre into another, we use a procedure similar to the recently proposed image-to-image technique Palette <cit.>. Specifically, the diffusion model takes noise as input and is conditioned on the chosen input timbre spectrogram; through the denoising procedure, the model then learns to reconstruct spectrograms of the desired timbre. We consider the scenario where the timbre-transfer task is paired, which means that the desired and input spectrograms have the same melodic/harmonic content but differ in terms of timbre.
We experiment with converting both tracks containing single instruments and mixtures of instruments, with no prior separation step and without modifying the model to account for the two configurations.
In order to demonstrate the effectiveness of the proposed model, we compare DiffTransfer with state-of-the-art techniques, both through objective measures and by performing a user-based listening test.
The source code and audio excerpts can be found at < https://lucacoma.github.io/DiffTransfer/>.
§ RELATED WORK
Several types of timbre Transfer techniques have been proposed in the literature.
In <cit.> a CycleGAN <cit.> is applied in order to perform an unpaired transfer using the Constant-Q transform and the audio is then recovered through a WaveNet <cit.> model. In <cit.> an attention-based architecture is applied in order to convert mel spectrograms, which are then inverted through a MelGAN architecture <cit.>. Gaussian mixture-based variational autoencoders are applied <cit.> in order to learn a latent space where pitch and timbre representations are disentangled.
Another class of methods, instead, extracts musical parameters such as pitch and loudness from the input audio tracks and performs the transfer by resynthesizing sound through a network that has learned to generate tracks with the desired timbre. The most known example of these techniques is the Differentiable Digital Signal Processing (DDSP) <cit.> model. Other similar techniques were proposed such as <cit.>, where a hierarchical model is used in order to reconstruct the signal at increasing resolutions.
Recently, models that work directly on the audio waveform have also been proposed, such as <cit.>, where music pieces are translated to specific timbre domains. To the best of our knowledge, the only model other than the one proposed in this paper that is tested on multi-instrument timbre transfer without any source-separation pre-processing is the Music-STAR network, presented in <cit.>. In Music-STAR, a WaveNet autoencoder <cit.> is trained by applying teacher forcing <cit.> to the decoders in order to recover the desired timbre.
Denoising Diffusion Probabilistic Models (DDPMs) <cit.> have recently become the state of the art in deep learning-based generation, rapidly replacing Generative Adversarial Networks (GANs) <cit.> and Variational Autoencoders <cit.>, due to their simpler training procedure and the increased quality of the produced results.
DDPMs have been successfully applied to a wide variety of image-related tasks such as generation <cit.> and translation <cit.>.
More recently, DDPMs have also been used for audio-related tasks. In <cit.> a diffusion model is applied in order to convert MIDI tracks to spectrograms, while in <cit.> a text-to-music diffusion model is proposed. DDPMs have also been applied to symbolic music generation <cit.>, speech synthesis <cit.>, and singing voice extraction <cit.>.
While DDPMs have extremely powerful generation capabilities, they suffer from slow sampling times. To ameliorate this issue, Denoising Diffusion Implicit Models (DDIMs) <cit.> were recently proposed; they allow for faster sampling times and have also been applied to image inpainting <cit.>.
§ PROPOSED MODEL
In this section, we describe the proposed DiffTransfer technique for timbre transfer. Instead of working directly with raw audio signals, we convert them into log mel-scaled spectrograms, due to their easier handling by deep learning models. We then propose a model that, given as input the spectrogram corresponding to the conditioning instrument, generates the corresponding target spectrogram that would have been obtained by playing the same piece of music with the target instrument.
Operatively we achieve this through a conditional continuous-time DDIM, which learns to denoise the target instrument spectrogram, while conditioned on the input instrument spectrogram, as depicted in Fig. <ref>. At inference time, the model is fed with the input conditioning instrument concatenated with Gaussian noise and generates the corresponding target spectrogram. We retrieve the audio signal by applying to the log mel spectrograms the SoundStream[<https://tfhub.dev/google/soundstream/mel/decoder/music/1>] model <cit.>, provided by <cit.> where it was trained on a custom music dataset.
In the following, we provide a brief overview of the DDIM framework and the notation used in this paper, keeping the treatment as compact as possible; for additional and more thorough formulations, we refer the reader to <cit.> and <cit.>. We aim at giving a general overview of the process and use a slight abuse of notation to describe the diffusion process in the continuous-time framework, in order to stay close to the more common literature on DDPMs and DDIMs.
§.§ Diffusion Decoder
We adopt a procedure similar to the Palette <cit.> image-to-image translation technique in order to train the timbre transfer decoder as a Denoising Diffusion Implicit Model (DDIM) <cit.>. Broadly speaking, DDIMs work by learning how to generate data from noise in a two-part procedure. The first part is denoted as the forward process, where Gaussian noise γ∼𝒩(0,1) is progressively added to the input until it becomes indistinguishable from pure noise. The second part consists of the reverse process, where a decoder learns how to invert the forward process, effectively reconstructing data from the noise. DDIMs can be seen as a generalization of DDPMs that shares the same training procedure; however, they differ in the modeling of the reverse process, using a non-Markovian diffusion process which allows for faster generation times.
§.§.§ Forward Process
Let us define 𝐗 and 𝐘 as the log mel spectrograms corresponding to the conditioning and target instruments, respectively. We choose a continuous diffusion time <cit.> in order to be able to change the number of desired sampling steps. If we consider T steps, then the diffusion time can be defined as t ∈ [0,1], where consecutive times are separated by Δ_t=1/T.
Then, the forward process is defined similarly to the case of DDPMs by progressively adding noise to the target spectrogram for T steps
q(𝐘_t|𝐘_t-Δ_t) = 𝒩(𝐘_t; √(α_t)𝐘_t-Δ_t, β_t 𝐈),
q(𝐘_1:T|𝐘_0) = ∏_t=1^T q(𝐘_t|𝐘_t-Δ_t),
where α and β are parameters defined by a simplified cosine schedule <cit.>.
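To make the forward process concrete, the sketch below samples a noisy target directly from the marginal q(𝐘_t|𝐘_0) implied by the transitions above, under a variance-preserving cosine parameterization; the exact schedule constants and the clipping of t are assumptions, since the paper only cites a simplified cosine schedule.

```python
import numpy as np

def signal_noise_rates(t, eps=0.008):
    """Assumed simplified cosine schedule: returns (sqrt(alpha_t), sqrt(1 - alpha_t))
    at continuous time t in [0, 1], clipped away from the endpoints for stability."""
    angle = 0.5 * np.pi * np.clip(t, eps, 1.0 - eps)
    return np.cos(angle), np.sin(angle)

def forward_diffuse(y0, t, rng=None):
    """Sample the noisy target Y_t ~ q(Y_t | Y_0) for a log mel spectrogram y0 at time t."""
    rng = rng or np.random.default_rng()
    signal_rate, noise_rate = signal_noise_rates(t)
    gamma = rng.standard_normal(y0.shape)            # Gaussian noise added to the target
    return signal_rate * y0 + noise_rate * gamma, gamma
```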
§.§.§ Reverse Process
In the case of DDIMs, the reverse diffusion process is operated by introducing an additional distribution p_θ, where a sample 𝐘_t-Δ t can be generated from a sample 𝐘_t as
𝐘_t-Δ_t = √(β_t-Δ_t)( 𝐘_t-√(β_t)γ_θ^(t)(𝐘_t,𝐗)/√(α_t)) + √(1-α_t-Δ_t)·γ_θ^(t)(𝐘_t,𝐗),
where γ is the noise estimated by a network with parameters θ. The noise at time t γ_θ^(t) is estimated by a network that is conditioned also on the input timbre spectrogram 𝐗, similarly to the formulation proposed in Palette <cit.>.
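The following minimal sampling sketch mirrors this update, reusing the schedule functions from the sketch above and written in terms of signal/noise rates rather than the paper's α, β symbols (so that correspondence is an assumption). The `noise_model(y_t, x_cond, t)` callable is a hypothetical wrapper around the trained conditional network.

```python
def ddim_step(y_t, noise_pred, t, t_prev):
    """One deterministic DDIM update from time t to t_prev < t; noise_pred plays the
    role of the conditional noise estimate gamma_theta(Y_t, X)."""
    sig_t, noi_t = signal_noise_rates(t)
    sig_p, noi_p = signal_noise_rates(t_prev)
    y0_est = (y_t - noi_t * noise_pred) / sig_t      # predicted clean spectrogram
    return sig_p * y0_est + noi_p * noise_pred       # re-noise it to the earlier time

def sample(noise_model, x_cond, shape, num_steps=50, rng=None):
    """Generate a target-timbre spectrogram conditioned on the input spectrogram x_cond."""
    rng = rng or np.random.default_rng()
    y = rng.standard_normal(shape)                   # start from pure Gaussian noise
    times = np.linspace(1.0, 0.0, num_steps + 1)
    for t, t_prev in zip(times[:-1], times[1:]):
        y = ddim_step(y, noise_model(y, x_cond, t), t, t_prev)
    return y
```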
§.§.§ Training Procedure
The denoising process is operated through a U-Net architecture which is conditioned on 𝐗 and trained to predict the added noise in order to minimize the L1 loss
𝔼[ ||γ_θ^(t)(𝐘_t,𝐗)-γ||_1^1 ],
where γ is the true perturbation, while γ_θ^(t)(𝐘_t,𝐗) is the estimate of the noise added to the target spectrogram at time t, conditioned on the input spectrogram 𝐗.
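A sketch of the corresponding training step, reusing `forward_diffuse` from above; sampling the diffusion time uniformly is an assumption, as the sampling distribution is not specified in the text.

```python
def training_loss(noise_model, y0, x_cond, rng=None):
    """L1 noise-prediction loss for one (conditioning, target) spectrogram pair."""
    rng = rng or np.random.default_rng()
    t = rng.uniform(0.0, 1.0)                    # diffusion time (uniform is an assumption)
    y_t, gamma = forward_diffuse(y0, t, rng)     # noisy target and the true perturbation
    gamma_pred = noise_model(y_t, x_cond, t)     # network conditioned on the input timbre
    return np.abs(gamma_pred - gamma).mean()     # mean absolute error (the L1 objective)
```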
§.§ Architecture
The decoder architecture is based on a U-Net model. The building element is made of residual blocks, in each of these the input is processed by (i) a 2D convolutional layer with swish activation, followed by batch normalization and by (ii) a convolutional layer with no activation. Both convolutional layers have kernel size 3. The output of this procedure is then summed with the residual, which is obtained by processing the input with a convolutional layer with kernel size 1.
The encoder part of the network consists of 3 downsampling blocks, each consisting of 4 residual blocks, with filter sizes 64, 128, and 256. The output of each downsampling block is followed by average pooling with pool size 2, in order to compress the dimension of the spectrograms. The last block of the encoder is followed by a self-attention block.
The bottleneck obtained through the encoder is processed by a residual block with 512 filters and is then processed by the decoder, which is a specular version of the encoder. The only difference lies in the use of transposed convolutions in order to create upsampling layers needed to increase the dimension of the features.
The last downsampling layer of the encoder, the bottleneck and the first upsampling layer of the decoder are followed by self-attention.
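A hedged PyTorch sketch of the residual building block described above; the padding and the use of a 1x1 convolution on the skip path whenever the channel count changes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Residual block: 3x3 conv with swish then batch norm, 3x3 conv with no activation,
    summed with a 1x1-conv projection of the input."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        h = self.norm(F.silu(self.conv1(x)))   # conv + swish activation, then batch norm
        h = self.conv2(h)                      # conv with no activation
        return h + self.skip(x)                # residual connection
```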
§.§ Deployment
The proposed model takes as input spectrograms of a fixed size, therefore audio tracks longer than the ones used for training need to be sliced accordingly.
The decoder takes as input the conditioning spectrogram 𝐗 and the diffusion noise and retrieves an estimate of the latter, which can then be subtracted in order to obtain an estimate of the desired output timbre spectrogram 𝐘̂. The output waveform y can then be obtained by feeding the pre-trained SoundStream model with 𝐘̂.
§ EXPERIMENTS
In this section, we describe experiments performed with the aim of demonstrating the capabilities of the proposed DiffTransfer technique both in the single-instrument and multi-instrument application scenarios.
In Fig. <ref> we show an example of input, generated and ground-truth spectrograms, obtained via the DiffTransfer model when converting from a Clarinet to Strings.
§.§ Dataset
In order to train the model we considered the StarNet dataset <cit.>, which contains a set of tracks that are played with two timbre domains, namely strings-piano and vibraphone-clarinet. The dataset consists of roughly 22 hours of audio. We used the reduced version of the dataset, where tracks are resampled to 16000 Hz and converted to mono. In order to perform the evaluation, we use the same ten tracks considered in <cit.>, in order to ease the comparison with their model.
§.§ Techniques Under Comparison
We consider two baselines in order to compare the performances of the proposed DiffTransfer architecture. For what concerns the single-instrument timbre transfer task, we consider the Universal Network <cit.> fine-tuned on the StarNet dataset as done in <cit.>. For what concerns the multi-timbre task, we consider the mixture-supervised version of the Music-STAR network proposed in <cit.>. We perform three different types of timbre transfer tasks: single, where only single instruments are converted, single/mixed where the separate conversions of single instruments are mixed in order to create the desired mixture track and mixture, where the mixture is directly converted. These nomenclatures are used just to ease the presentation of the results, we would like to point out that, for what concerns the DiffTransfer architecture, no specific changes are required for the various types of applications, except for the choice of
desired input data.
§.§ Experiment Setup
The Universal Network and Music-STAR architectures are trained with the procedure described in <cit.>. The DiffTransfer network is trained for 5000 epochs using a batch size of 16, with the AdamW optimizer <cit.> with learning rate 2e-5 and weight decay 1e-4. The epoch that minimizes the L1 noise prediction loss is chosen in order to retain the model used to compute the results. We train a total of six models, performing the following timbre transfer conversions: vibraphone to piano, piano to vibraphone, clarinet to strings, strings to clarinet
vibraphone/clarinet to piano/strings and piano/strings to vibraphone/clarinet.
The network input features are computed by first applying the Short-Time Fourier Transform (STFT) with a Hann window of size 0.020 s and 50% overlap to normalized audio tracks. Then the log mel spectrogram is computed over 128 bins corresponding to the range of 0-16000 Hz. We do not feed the entire audio tracks as input to the network; instead, during each epoch we extract 128 frames from the log mel spectrogram, corresponding to ≈ 2 s. Each spectrogram slice is normalized between -1 and 1 before being given as input to the network, and the output spectrograms are denormalized before being fed to the SoundStream model in order to recover the audio waveform. Since the tracks considered for the test are of length 10 s and the model gets as input a fixed 128-frame spectrogram, we slice the conditioning spectrogram before feeding it into the model and keep the input noise fixed for all slices, in order to ensure consistency in the generation.
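A sketch of this feature pipeline using librosa; the logarithm offset and the per-slice min-max normalization are assumptions, as the exact normalization is not detailed above.

```python
import numpy as np
import librosa

def log_mel_slices(audio, sr=16000, n_frames=128, n_mels=128):
    """Log mel spectrogram slices: 0.020 s Hann window, 50% overlap, 128 mel bins."""
    n_fft = int(0.020 * sr)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=n_fft,
                                         hop_length=n_fft // 2, n_mels=n_mels)
    logmel = np.log(mel + 1e-6)                      # small offset avoids log(0)
    slices = [logmel[:, i:i + n_frames]
              for i in range(0, logmel.shape[1] - n_frames + 1, n_frames)]
    # scale each slice to [-1, 1] before feeding it to the network
    return [2.0 * (s - s.min()) / (s.max() - s.min() + 1e-8) - 1.0 for s in slices]
```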
§.§ Objective Evaluation
We evaluate the model objectively in order to analyze the perceptual similarity and content preservation capabilities of the generated tracks with respect to the ground truth audio.
In order to evaluate the perceptual similarity, we compute the Fréchet Audio Distance (FAD) <cit.> using the VGGish embeddings <cit.>, through a PyTorch implementation[<https://pypi.org/project/frechet-audio-distance/>]. FAD is a reference-free metric for music enhancement algorithms, which views the embeddings as a continuous multivariate Gaussian and is computed between the real and generated data as
FAD = ||μ_r -μ_g||^2 + tr(Σ_r + Σ_g -2√(Σ_r Σ_g)),
where (μ_r, Σ_r) and (μ_g, Σ_g) are the mean and covariances of the embeddings corresponding to the real and generated data, respectively.
Similarly to <cit.>, we compute FAD in order to analyze the perceptual similarity between the generated audios with respect to the ground truth one, corresponding to the original StarNet dataset.
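For reference, a minimal NumPy computation of this distance from two embedding matrices (rows are examples); it mirrors the formula above rather than the cited PyTorch implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_audio_distance(emb_real, emb_gen):
    """FAD between two embedding sets of shape (n_examples, emb_dim), e.g. VGGish embeddings."""
    mu_r, mu_g = emb_real.mean(axis=0), emb_gen.mean(axis=0)
    sigma_r = np.cov(emb_real, rowvar=False)
    sigma_g = np.cov(emb_gen, rowvar=False)
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):              # discard tiny imaginary parts from numerical error
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```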
To understand the content-preservation capabilities of the model, following <cit.>, we compute how the pitch contours of generated ground truth audio tracks are dissimilar, by calculating the mismatch between two sets of pitches A and B through the Jaccard Distance
JD(A,B) = 1 - |A ∩ B|/|A ∪ B|,
where a lower value corresponds to a lower mismatch and thus to a higher degree of similarity between the generated pitch contours. Pitch contours are computed using a multi-pitch version of the MELODIA <cit.> as implemented in the Essentia library <cit.>, rounding pitches to the nearest semitone. We report the values obtained by computing the metrics on the test dataset in Table <ref>.
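A direct implementation of this mismatch on semitone-rounded pitch sets:

```python
def jaccard_distance(pitches_a, pitches_b):
    """Pitch-contour mismatch between two tracks; inputs are iterables of rounded pitches."""
    a, b = set(pitches_a), set(pitches_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)
```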
§.§ Subjective Evaluation
In order to evaluate subjectively the timbre transfer capabilities, we perform a listening test with 18 human participants. The web page of the test is available at [<https://listening-test-ismir-ttd.000webhostapp.com/>]. The test was split into two parts corresponding to the single and multiple instrument application scenarios, respectively.
During the single instrument part of the test, the users listened to four tracks, corresponding to the four types of conversions performed, namely: clarinet to strings, strings to clarinet, piano to vibraphone, vibraphone to piano. Each example consisted of two conditions, one obtained via the DiffTransfer model and the other through the Universal Network.
In the second part of the test, concerning multiple instrument timbre transfer, a total of four tracks were considered, two for the conversion from vibraphone/clarinet to piano/strings waveforms and two for the reverse conversion. Each example consisted of four conditions, namely DiffTransfer (single/mix), Universal Network (single/mix), DiffTransfer (mixture) and Music-STAR (mixture).
Both the order of conditions and the order of examples in each separate part of the test were randomized.
The participants were asked to rate the conditions in terms of similarity with respect to the reference track on a 5 elements Likert scale where 1 corresponds to bad and 5 to excellent.
We report the results obtained through the listening test in Table <ref>.
§.§ Discussion
By briefly inspecting both the objective and subjective results, reported in Table <ref> and <ref>, respectively, it is clear how the proposed DiffTransfer model outperforms the Universal Network and Music-STAR baselines both for what concerns the single and multiple timbre transfer tasks.
When considering single timbre results, DiffTransfer is able to achieve significantly better performances in terms of FAD, Jaccard Distance and Perceived Similarity, with respect to the Universal network. The gap between the two methods becomes even more evident when considering the single/mixed case, i.e. when single timbre transfer tracks are mixed in order to form the desired mixture audio.
For what concerns the Music-STAR method, the gap with respect to DiffTransfer remains high in terms of FAD, but becomes less noticeable when considering JD and the perceived subjective similarity.
§ CONCLUSION
In this paper, we have presented DiffTransfer, a technique for both single- and multi-instrument timbre transfer using Denoising Diffusion Implicit Models. The novelty of the proposed approach lies in the fact that, in addition to being, to the best of our knowledge, the first application of diffusion models to timbre transfer, it is the first model tested on both single- and multi-timbre transfer without varying the architecture depending on the chosen application.
We compared the proposed model with state-of-the-art Universal Network and Music-STAR baselines through both objective evaluation measures and a listening test, demonstrating the better capabilities of the proposed DiffTransfer approach.
Future works will involve increasing the audio quality of the generated audio, by taking into account the consistency of subsequent generated spectrograms. Furthermore, we plan on modifying the model in order to be able to perform unpaired timbre transfer, which greatly eases the dataset requirements and applicability of the technique.
|
http://arxiv.org/abs/2307.04525v2 | 20230710124936 | Cluster-Induced Mask Transformers for Effective Opportunistic Gastric Cancer Screening on Non-contrast CT Scans | [
"Mingze Yuan",
"Yingda Xia",
"Xin Chen",
"Jiawen Yao",
"Junli Wang",
"Mingyan Qiu",
"Hexin Dong",
"Jingren Zhou",
"Bin Dong",
"Le Lu",
"Li Zhang",
"Zaiyi Liu",
"Ling Zhang"
] | eess.IV | [
"eess.IV",
"cs.CV",
"cs.LG"
] |
Yuan, M. et al.
Effective Opportunistic Gastric Cancer Screening
^1DAMO Academy, Alibaba Group
^2Peking University
^3Hupan Lab, 310023, Hangzhou, China
^4Guangdong Province People's Hospital
^5The First Affiliated Hospital of Zhejiang University
^6Peking University Changsha Institute for Computing and Digital Economy
Cluster-Induced Mask Transformers for Effective Opportunistic Gastric Cancer Screening on Non-contrast CT Scans
Mingze Yuan^1,2,3,*, Yingda Xia^1,, Xin Chen^4,, Jiawen Yao^1,3, Junli Wang^5, Mingyan Qiu^1,3, Hexin Dong^1,2,3, Jingren Zhou^1, Bin Dong^2,6, Le Lu^1, Li Zhang^2, Zaiyi Liu^4,, Ling Zhang^1
August 12, 2023
===================================================================================================================================================================================================
Gastric cancer is the third leading cause of cancer-related mortality worldwide, but no guideline-recommended screening test exists. Existing methods can be invasive, expensive, and lack sensitivity to identify early-stage gastric cancer. In this study, we explore the feasibility of using a deep learning approach on non-contrast CT scans for gastric cancer detection. We propose a novel cluster-induced Mask Transformer that jointly segments the tumor and classifies abnormality in a multi-task manner. Our model incorporates learnable clusters that encode the texture and shape prototypes of gastric cancer, utilizing self- and cross-attention to interact with convolutional features. In our experiments, the proposed method achieves a sensitivity of 85.0% and specificity of 92.6% for detecting gastric tumors on a hold-out test set consisting of 100 patients with cancer and 148 normal. In comparison, two radiologists have an average sensitivity of 73.5% and specificity of 84.3%. We also obtain a specificity of 97.7% on an external test set with 903 normal cases. Our approach performs comparably to established state-of-the-art gastric cancer screening tools like blood testing and endoscopy, while also being more sensitive in detecting early-stage cancer. This demonstrates the potential of our approach as a novel, non-invasive, low-cost, and accurate method for opportunistic gastric cancer screening.
Work was done during an internship at DAMO Academy, Alibaba Group.
Corresponding authors: [email protected]; {wolfchenxin, zyliu}@163.com
§ INTRODUCTION
Gastric cancer (GC) is the third leading cause of cancer-related deaths worldwide <cit.>. The five-year survival rate for GC is approximately 33% <cit.>, which is mainly attributed to patients being diagnosed with advanced-stage disease harboring unresectable tumors. This is often due to the latent and nonspecific signs and symptoms of early-stage GC. However, patients with early-stage disease have a substantially higher five-year survival rate of around 72% <cit.>. Therefore, early detection of resectable/curable gastric cancers, preferably before the onset of symptoms, presents a promising strategy to reduce associated mortality. Unfortunately, current guidelines do not recommend any screening tests for GC <cit.>. While several screening tools have been developed, such as Barium-meal gastric photofluorography <cit.>, upper endoscopy <cit.>, and serum pepsinogen levels <cit.>, they are challenging to apply to the general population due to their invasiveness, moderate sensitivity/specificity, high cost, or side effects. Therefore, there is an urgent need for novel screening methods that are noninvasive, highly accurate, low-cost, and ready to distribute.
Non-contrast CT is a commonly used imaging protocol for various clinical purposes. It is a non-invasive, relatively low-cost, and safe procedure that exposes patients to less radiation dose and does not require the use of contrast injection that may cause serious side effects (compared to multi-phase contrast-enhanced CT). With recent advances in AI, opportunistic screening of diseases using non-contrast CT during routine clinical care performed for other clinical indications, such as lung and colorectal cancer screening, presents an attractive approach to early detect treatable and preventable diseases <cit.>. However, whether early detection of gastric cancer using non-contrast CT scans is possible remains unknown. This is because early-stage gastric tumors may only invade the mucosal and muscularis layers, which are difficult to identify without the help of stomach preparation and contrast injection. Additionally, the poor contrast between the tumor and normal stomach wall/tissues on non-contrast CT scans and various shape alterations of gastric cancer, further exacerbates this challenge.
In this paper, we propose a novel approach for detecting gastric cancer on non-contrast CT scans. Unlike the conventional “segmentation for classification" methods that directly employ segmentation networks, we developed a cluster-induced Mask Transformer that performs segmentation and global classification simultaneously. Given the high variability in shape and texture of gastric cancer, we encode these features into learnable clusters and utilize cluster analysis during inference. By incorporating self-attention layers for global context modeling, our model can leverage both local and global cues for accurate detection. In our experiments, the proposed approach outperforms nnUNet <cit.> by 0.032 in AUC, 5.0% in sensitivity, and 4.1% in specificity. These results demonstrate the potential of our approach for opportunistic screening of gastric cancer in asymptomatic patients using non-contrast CT scans.
§ RELATED WORK
Automated Cancer Detection. Researchers have explored automated tumor detection techniques on endoscopic <cit.>, pathological images <cit.>, and the prediction of cancer prognosis <cit.>. Recent developments in deep learning have significantly improved the segmentation of gastric tumors <cit.>, which is critical for their detection. However, our framework is specifically designed for non-contrast CT scans, which is beneficial for asymptomatic patients. While previous studies have successfully detected pancreatic <cit.> and esophageal <cit.> cancers on non-contrast CT, identifying gastric cancer presents a unique challenge due to its subtle texture changes, various shape alterations, and complex background, e.g., irregular gastric wall; liquid and contents in the stomach.
Mask Transformers. Recent studies have used Transformers for natural and medical image segmentation <cit.>. Mask Transformers <cit.> further enhance CNN-based backbones by incorporating stand-alone Transformer blocks, treating object queries in DETR <cit.> as memory-encoded queries for segmentation. CMT-Deeplab <cit.> and KMaX-Deeplab <cit.> have recently proposed interpreting the queries as clustering centers and adding regulatory constraints for learning the cluster representations of the queries. Mask Transformers are locally sensitive to image textures for precise segmentation and globally aware of organ-tumor morphology for recognition. Their cluster representations demonstrate a remarkable balance of intra-cluster similarity and inter-class discrepancy. Therefore, Mask Transformers are an ideal choice for an end-to-end joint segmentation and classification system for detecting gastric cancer.
§ METHODS
Problem Formulation. Given a non-contrast CT scan, cancer screening is a binary classification with two classes as ℒ={0, 1}, where 0 stands for“normal” and 1 for“GC” (gastric cancer). The entire dataset is denoted by 𝒮 = {(𝐗_i, 𝐘_i, 𝐏_i) | i=1,2,⋯,N}, where 𝐗_i is the i-th non-contrast CT volume, with 𝐘_i being the voxel-wise label map of the same size as 𝐗_i and K channels. Here, K=3 represents the background, stomach, and GC tumor. 𝐏_i ∈ℒ is the class label of the image, confirmed by pathology, radiology, or clinical records. In the testing phase, only 𝐗_i is given, and our goal is to predict a class label for 𝐗_i.
Knowledge Transfer from Contrast-Enhanced to Non-contrast CT. To address difficulties with tumor annotation on non-contrast CTs, the radiologists start by annotating a voxel-wise tumor mask on the contrast-enhanced CT, referring to clinical and endoscopy reports as needed. DEEDs <cit.> registration is then performed to align the contrast-enhanced CT with the non-contrast CT and the resulting deformation field is applied to the annotated mask. Any misaligned ones are revised manually. In this manner (Fig. <ref>d), a relatively coarse yet highly reliable tumor mask can be obtained for the non-contrast CT image.
Cluster-Induced Classification with Mask Transformers.
Segmentation for classification is widely used in tumor detection <cit.>. We first train a UNet <cit.> to segment the stomach and tumor regions using the masks from the previous step. This UNet considers local information and can only extract stomach ROIs well during testing. However, local textures are inadequate for accurate gastric tumor detection on non-contrast CTs, so we need a network of both local sensitivity to textures and global awareness of the organ-tumor morphology. Mask transformer <cit.> is a well-suited approach to boost the CNN backbone with stand-alone transformer blocks. Recent studies <cit.> suggest interpreting object queries as cluster centers, which naturally exhibit intra-cluster similarity and inter-class discrepancy. Inspired by this, we further develop a deep classification model on top of learnable cluster representations.
Specifically, given an image 𝐗∈ℝ^H × W × D, an annotation 𝐘∈ℝ^K × HWD, and a patient class 𝐏∈ℒ, our model consists of three components: 1) a CNN backbone to extract pixel-wise features 𝐅∈ℝ^C × HWD (Fig. <ref>a), 2) a transformer module (Fig. <ref>b), and 3) a multi-task cluster inference module (Fig. <ref>c). The transformer module gradually updates a set of randomly initialized object queries 𝐂∈ℝ^N × C into meaningful mask embedding vectors through cross-attention between the object queries and multi-scale pixel features,
𝐂←𝐂 + max_N (𝐐^c (𝐊^p)^T) 𝐕^p,
where c and p stand for query and pixel features, 𝐐^c, 𝐊^p, 𝐕^p represent linearly projected query, key, and value. We adopt cluster-wise argmax from KMax-DeepLab <cit.> to substitute spatial-wise softmax in the original settings.
We further interpret the object queries as cluster centers from a cluster analysis perspective. All the pixels in the convolutional feature map are assigned to different clusters based on these centers. The assignment of clusters (a.k.a. mask prediction) 𝐌∈ℝ^N × HWD is computed as the cluster-wise softmax function over the matrix product between the cluster centers 𝐂 and pixel-wise feature matrix 𝐅, i.e.,
𝐌 = Softmax_N(𝐑) = Softmax_N(𝐂𝐅).
The final segmentation logits 𝐙∈ℝ^K × HWD are obtained by aggregating the pixels within each cluster according to cluster-wise classification, which treats pixels within a cluster as a whole. The aggregation of pixels is achieved by 𝐙 = 𝐂_K 𝐌, where the cluster-wise classification 𝐂_K is represented by an MLP that projects the cluster centers 𝐂 to K channels (the number of segmentation classes).
The learned cluster centers possess high-level semantics with both inter-cluster discrepancy and intra-cluster similarity for effective classification. Rather than directly classifying the final feature map, we first generate the cluster-path feature vector by taking the channel-wise average of the cluster centers, 𝐂̄ = 1/N∑_i=1^N 𝐂_i ∈ℝ^C. Additionally, to enhance the consistency between the segmentation and classification outputs, we apply global max pooling to the cluster assignments 𝐑 to obtain the pixel-path feature vector 𝐑̄∈ℝ^N. This establishes a direct connection between classification features and segmentation predictions. Finally, we concatenate these two feature vectors to obtain the final feature and project it onto the classification prediction 𝐏̂∈ℝ^2 via a two-layer MLP.
The overall training objective is formulated as,
ℒ = ℒ_seg(𝐙, 𝐘) + ℒ_cls(𝐏̂, 𝐏),
where the segmentation loss ℒ_seg(·,·) is a combination of Dice and cross entropy losses, and the classification loss ℒ_cls(·,·) is cross entropy loss.
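To make the multi-task cluster inference concrete, the NumPy sketch below follows the assignment, aggregation and two-path classification features step by step; the linear cluster-wise classifier `W_seg` and the callable `mlp` stand in for the MLPs mentioned above and are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cluster_inference(C, F, W_seg, mlp):
    """Multi-task cluster inference (sketch).
    C: (N, C_dim) cluster centers, F: (C_dim, HWD) pixel features,
    W_seg: (K, N) linear stand-in for the cluster-wise classifier,
    mlp: hypothetical callable mapping the concatenated feature to 2-class logits."""
    R = C @ F                          # cluster assignment logits, shape (N, HWD)
    M = softmax(R, axis=0)             # cluster-wise softmax over the N centers
    Z = W_seg @ M                      # segmentation logits, shape (K, HWD)
    cluster_feat = C.mean(axis=0)      # channel-wise average of the centers, shape (C_dim,)
    pixel_feat = R.max(axis=1)         # global max pooling of the assignments, shape (N,)
    P_hat = mlp(np.concatenate([cluster_feat, pixel_feat]))
    return Z, P_hat
```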
§ EXPERIMENTS
§.§ Experimental setup
Dataset and Ground Truth. Our study analyzed a dataset of CT scans collected from Guangdong Province People's Hospital between years 2018 and 2020, with 2,139 patients consisting of 787 gastric cancer and 1,352 normal cases. We used the latest patients in the second half of 2020 as a hold-out test set, resulting in a training set of 687 gastric cancer and 1,204 normal cases, and a test set of 100 gastric cancer and 148 normal cases. We randomly selected 20% of the training data as an internal validation set. To further evaluate specificity in a larger population, we collected an external test set of 903 normal cases from Shengjing Hospital. Cancer cases were confirmed through endoscopy (and pathology) reports, while normal cases were confirmed by radiology reports and a two-year follow-up. All patients underwent multi-phase CTs with a median spacing of 0.75 × 0.75 × 5.0 mm and an average size of (512, 512, 108) voxel. Tumors were annotated on the venous phase by an experienced radiologist specializing in gastric imaging using CTLabeler <cit.>, while the stomach was automatically annotated using a self-learning model <cit.>.
Implementation Details. We resampled each CT volume to the median spacing while normalizing it to have zero mean and unit variance. During training, we cropped the 3D bounding box of the stomach and added a small margin of (32, 32, 4). We used nnUNet <cit.> as the backbone, with four transformer decoders, each taking pixel features with output strides of 32, 16, 8, and 4. We set the number of object queries N to 8, with each having a dimension of 128, and included an eight-head self-attention layer in each block. The patch size used during training and inference is (192, 224, 40) voxel. We followed <cit.> to augment data. We trained the model with RAdam using a learning rate of 10^-4 and a (backbone) learning rate multiplier of 0.1 for 1000 epochs, with a frozen backbone of the pre-trained nnUNet <cit.> for the first 50 epochs. To enhance performance, we added deep supervision by aligning the cross-attention map with the final segmentation map, as per KMax-Deeplab <cit.>. The hidden layer dimension in the two-layer MLP is 128. We also trained a standard UNet <cit.> to localize the stomach region in the entire image in the testing phase.
Evaluation Metrics and Reader Study. For the binary classification, model performance is evaluated using area under ROC curve (AUC), sensitivity (Sens.), and specificity (Spec.). And successful localization of the tumors is considered when the overlap between the segmentation mask generated by the model and the ground truth is greater than 0.01, measured by the Dice score. A reader study was conducted with two experienced radiologists, one from Guangdong Province People's Hospital with 20 years of experience and the other from The First Affiliated Hospital of Zhejiang University with 9 years of experience in gastric imaging. The readers were given 248 non-contrast CT scans from the test set and asked to provide a binary decision for each scan, indicating whether the scan showed gastric cancer. No patient information or records were provided to the readers. Readers were informed that the dataset might contain more tumor cases than the standard prevalence observed in screening, but the proportion of case types was not disclosed. Readers used ITK-SNAP <cit.> to interpret the CT scans without any time constraints.
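The tumor-level localization criterion above amounts to the following check on binary masks:

```python
import numpy as np

def tumor_localized(pred_mask, gt_mask, dice_threshold=0.01):
    """Localization counts as successful when the Dice overlap exceeds 0.01."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
    return dice > dice_threshold
```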
Compared Baselines. <ref> presents a comparative analysis of our proposed method with three baselines. The first two approaches belong to “Segmentation for classification" (S4C) <cit.>, using nnUNet <cit.> and TransUNet <cit.>. A case is classified as positive if the segmented tumor volume exceeds a threshold that maximizes the sum of sensitivity and specificity on the validation set. The third baseline (denoted as “nnUNet-Joint") integrates a CNN classification head into UNet <cit.> and trained end-to-end. We obtain the 95% confidence interval of AUC, sensitivity, and specificity values from 1000 bootstrap replicas of the test dataset for statistical analysis. For statistical significance, we conduct a DeLong test between two AUCs (ours vs. compared method) and a permutation test between two sensitivities or specificities (ours vs. compared method and radiologists).
§.§ Results
Our method Outperforms Baselines. Our method outperforms three baselines (<ref>) in all metrics, particularly in AUC and sensitivity. The advantage of our approach is that it captures local and global information simultaneously by virtue of the unique architecture of the mask transformer. It also extracts high-level semantics from cluster representations, making it suitable for classification and facilitating a holistic decision-making process. Moreover, our method reaches a considerable specificity of 97.7% on the external test set, which is crucial in opportunistic screening to reduce false positives and unnecessary human workload.
AI Models Surpass Experienced Radiologists on Non-contrast CT Scans. As shown in <ref>, our AI model's ROC curve is superior to that of two experienced radiologists. The model achieves a sensitivity of 85.0% in detecting gastric cancer, which significantly exceeds the mean performance of doctors (73.5%) and also surpasses the best performing doctor (R2: 75.0%), while maintaining a high specificity. A visual example is presented in <ref>. This early-stage cancer (T1) is missed by both radiologists, whereas it is classified and localized precisely by our model.
Subgroup Analysis. In <ref>, we report the performance of patient-level detection and tumor-level localization stratified by tumor (T) stage. We compare our model's performance with that of both radiologists. The results show that our model performs better in detecting early stage tumors (T1, T2) and provides more precise tumor localization. Specifically, our model detects 60.0% (6/10) T1 cancers, and 77.8% (7/9) T2 cancers, surpassing the best performing expert (50% T1, 55.6% T2). Meanwhile, our model maintains a reliable detection rate and credible localization accuracy for T3 and T4 tumors (2 of 34 T3 tumors missed).
Comparison with Established Screening Tools. Our method surpasses or performs on par with established screening tools <cit.> in terms of sensitivity for gastric cancer detection at a similar specificity level with a relatively large testing patient size (n=1151 by integrating the internal and external test sets), as shown in <ref>. This finding sheds light on the opportunity to employ automated AI systems to screen gastric cancer using non-contrast CT scans.
§ CONCLUSION
We propose a novel Cluster-induced Mask Transformer for gastric cancer detection on non-contrast CT scans. Our approach outperforms strong baselines and experienced radiologists. Compared to other screening methods, such as blood tests, endoscopy, upper-gastrointestinal series, and ME-NBI, our approach is non-invasive, cost-effective, safe, and more accurate for detecting early-stage tumors. The robust performance of our approach demonstrates its potential for opportunistic screening of gastric cancer in the general population.
Acknowledgement
This work was supported by Alibaba Group through the Alibaba Research Intern Program. Bin Dong and Li Zhang were partly supported by NSFC 12090022 and 11831002, and the Clinical Medicine Plus X-Young Scholars Project of Peking University PKU2023LCXQ041.
|
http://arxiv.org/abs/2307.04510v1 | 20230710121242 | An analysis of least squares regression and neural networks approximation for the pricing of swing options | [
"Christian Yeo"
] | q-fin.MF | [
"q-fin.MF"
] |
=1
plain
theoremTheorem[section]
Proposition[theorem]Proposition
definition[theorem]Definition
lemma[theorem]Lemma
remark[theorem]Remark
.pdf,.png
countlist
countlist[2][]
#2
*
#1#1
1,2]Christian Yeo
[1]Sorbonne Université, Laboratoire de Probabilités, Statistique et Modélisation, UMR 8001, case 158, 4, pl. Jussieu,
F-75252 Paris Cedex 5, France
[2]Engie Global Markets, 1 place Samuel Champlain, 92400 Courbevoie, France
equationsection
An analysis of least squares regression and neural networks approximation for the pricing of swing options
[
==========================================================================================================
Least squares regression was first introduced for the pricing of American-style options, but it has since been extended to the pricing of swing options. The swing option price may be viewed as a solution to a Backward Dynamic Programming Principle, which involves a conditional expectation known as the continuation value. The approximation of the continuation value using least squares regression involves two levels of approximation. First, the continuation value is replaced by an orthogonal projection over a subspace spanned by a finite set of m squared-integrable functions (regression functions), yielding a first approximation V^m of the swing value function. In this paper, we prove that, with well-chosen regression functions, V^m converges to the actual swing price V as m → + ∞. A similar result is proved when the regression functions are replaced by neural networks. For both methods (least squares or neural networks), we analyze the second level of approximation involving the practical computation of the swing price using Monte Carlo simulations, yielding an approximation V^m, N (where N denotes the Monte Carlo sample size). In particular, we prove that V^m, N→ V^m as N → + ∞ for both methods, using a Hilbert basis in the least squares case. Moreover, a convergence rate of order 𝒪(1/√(N)) is proved in the least squares case. Several convergence results in this paper rely on the continuity of the swing value function with respect to the cumulative consumption, which is also proved in the paper and, to the best of our knowledge, had not been explored in the literature before.
Keywords - Swing options, stochastic control, least squares regression, convergence analysis, neural networks approximation, dynamic programming equation.
§ INTRODUCTION
Swing contracts <cit.> are commonly used in commodity derivatives trading to manage commodity supply. These contracts allow the holder to purchase amounts of energy on specific dates (called exercise dates), subject to constraints. The pricing <cit.> of such a contract is a challenging problem that involves finding a vector, representing the amounts of energy purchased through the contract, which maximizes the gained value. This problem is doubly constrained (exercise date constraints and volume constraints) and its pricing has been addressed using two groups of methods in the literature. One group concerns methods that are based on the Backward Dynamic Programming Principle (BDPP) <cit.>, which determines the swing price backwardly from the expiry of the contract until the pricing date. In the BDPP-based approach, at each exercise date, the swing value is determined as the maximum of the current cash flow plus the continuation value, which is the (conditional) expected value of future cash flows. To compute the continuation value, nested simulations may be used, but this can be time-consuming. Alternatively, an orthogonal projection over a vector space spanned by a finite set of squared-integrable functions may be used, based on the idea of the least squares regression method introduced by Longstaff and Schwartz <cit.>. This method was initially introduced for the pricing of American-style options <cit.> and has since been used for some stochastic control problems <cit.>, and especially in the context of swing contract pricing <cit.>. Despite being widely used by practitioners, in the context of swing pricing, this method has received little study in terms of convergence. The paper <cit.> analyzes the convergence of general regression methods in the context of stochastic control problems. While swing contract pricing is, by nature, a stochastic control problem, such contracts involve specificities whose analysis goes beyond the scope covered in the paper <cit.>. Note that this paper focuses on the pricing of swing contracts within the firm constraints framework, where the contract holder cannot violate the volume constraints. In this framework, the set of admissible controls at each exercise date depends on the cumulative consumption up to that date. Additionally, in the BDPP-based approaches, the optimal control at one exercise date depends on the estimated value of the swing contract at the next exercise date, which in turn is defined as a supremum. Thus, the propagation of the error through the BDPP raises a uniform convergence issue. Taking into account the latter fact, to meet the framework studied in <cit.>, the cumulative consumption may need to be included as a state variable along with the Markov process driving the underlying asset price. However, this can be challenging to implement as it requires knowing the joint distribution of the underlying asset price and the cumulative consumption. This difficulty is perceptible in <cit.> where, in the context of storage pricing (contracts whose pricing is close to that of swing contracts), the authors used uniform sampling of the cumulative consumption as a proxy. Furthermore, in <cit.> strong assumptions were made, such as the boundedness of the regression functions, which do not hold in practice. Therefore, in this paper, we aim to analyze the convergence of least squares regression for the specific problem of swing option pricing.
Besides, we do not restrict ourselves to the least squares method and also analyze an alternative method which consists in approximating the continuation value, not by an orthogonal projection, but using neural networks. Both methods for approximating the swing contract price are analyzed in a common framework. To achieve this, we proceed as in previous works <cit.> by proving convergence results in two main steps. We first replace the continuation value by either an orthogonal projection over a well-chosen basis of regression functions or by a neural network. We demonstrate that the resulting swing value function, as an approximation of the actual one, converges towards the actual one as the number of functions in the regression basis or the number of units per hidden layer (in the neural network) increases. Furthermore, in practice, a Monte Carlo simulation has to be performed. This is needed to compute the orthogonal projection coordinates in the least squares method, which generally have no closed form, and to generate the training samples for the neural network. This leads to a second level of approximation, a Monte Carlo approximation. In this paper, we prove that, under some assumptions, this second approximation converges to the first one for both studied methods. Moreover, in the least squares method, a convergence rate of order 𝒪(N^-1/2) (N being the size of the Monte Carlo sample) is proved for the latter convergence.
Several results in this paper rely on the continuity of the swing value function with respect to the cumulative consumption, a crucial result which, to the best of our knowledge, had not yet been proved. We establish this continuity result using Berge's maximum theorem, which is commonly used to analyze the regularity of optimal control and optimal value functions in parametric optimization contexts. Additionally, proving the continuity of the value function with respect to the cumulative consumption also provides an alternative proof of the existence of an optimal control, which was previously demonstrated differently in <cit.>.
§.§ Organization of the paper
Section <ref>. provides general background on swing contracts. We thoroughly discuss its pricing and show one of the main results concerning the continuity of the swing value function. Section <ref>. We describe how to approximate the swing value function using either least squares regression or neural networks and fix notations and assumptions that will be used in the sequel. Section <ref>. We state the main convergence results of this paper as well as some other technical results concerning some concentration inequalities.
§.§ Notations
We endow the space ℝ^d with the Euclidean norm denoted by |·| and the space of ℝ^d-valued and squared-integrable random variables 𝕃^2_ℝ^d(ℙ) with the canonical norm || · ||_2. ⟨·, ·⟩ will denote Euclidean inner-product of ℝ^d. We denote by |·|_sup the sup-norm on functional spaces. 𝕄_d,q(ℝ) will represent the space of matrix with d rows, q columns and with real coefficients. When there is no ambiguity, we will consider |·| as the Frobenius norm; the space 𝕄_d,q(ℝ) will be equipped with that norm. For m ≥ 2, we denote by 𝔾L_m(ℝ) the subset of 𝕄_m,m(ℝ) made of non-singular matrices. For a metric space (E, d) and a subset A ⊂ E, we define the distance between x ∈ E and the set A by,
d(x, A) = y ∈ Ainf d(x,y).
We denote by d_H(A, B) the Hausdorff metric between two closed, bounded and non-empty sets A and B (equipped with a metric d) which is defined by
d_H(A, B) = max(a ∈ Asup d(a, B), b ∈ Bsup d(b, A)).
Let E be a real pre-Hilbert space equipped with an inner product ⟨·, ·⟩ and consider some vectors x_1, …, x_n of E. The Gram matrix associated to x_1, …, x_n is the symmetric non-negative matrix whose entries are (⟨ x_i, x_j ⟩)_1 ≤ i, j ≤ n. The determinant of the latter matrix, the Gram determinant, will be denoted by G(x_1, …, x_n) := det(⟨ x_i, x_j ⟩)_1 ≤ i, j ≤ n.
§ SWING CONTRACT
In the first section, we establish the theoretical foundation for swing contracts and their pricing using the Backward Dynamic Programming Principle. Additionally, we prove some theoretical properties concerning the set of optimal controls that is involved in the latter principle.
§.§ Description
A swing option allows its holder to buy amounts of energy q_k at times t_k, k = 0, ..., n-1 (called exercise dates) until the contract maturity t_n = T. At each exercise date t_k, the purchase price (or strike price) is denoted K_k and can be constant (i.e., K_k = K, k = 0,...,n-1) or indexed on a formula. In the indexed strike setting, the strike price is calculated as an average of observed commodity prices over a certain period. In this paper, we only consider the fixed strike price case; however, the indexed strike price case can be treated likewise.
In addition, a swing option gives its holder flexibility on the amount of energy he is allowed to purchase, through some (firm) constraints:
* Local constraints: at each exercise time t_k, the holder of the swing contract has to buy at least q_min and at most q_max i.e,
q_min≤ q_k≤ q_max, 0 ≤ k ≤ n-1.
* Global constraints: at maturity, the cumulative purchased volume must be not lower than Q_min and not greater than Q_max i.e,
Q_n = ∑_k = 0^n-1 q_k∈ [Q_min, Q_max] , with Q_0 = 0 and 0 ≤ Q_min≤ Q_max < +∞.
At each exercise date t_k, the achievable cumulative consumption lies within the following interval,
𝒯_k := [Q^down(t_k) , Q^up(t_k) ],
where
{[ Q^down(t_0) = 0,; Q^down(t_k) = max(0, Q_min - (n-k) · q_max), k ∈{1,…,n-1},; Q^down(t_n) = Q_min, ].
{[ Q^up(t_0) = 0,; Q^up(t_k) = min(k · q_max, Q_max) , k ∈{1,…,n-1},; Q^up(t_n) = Q_max. ].
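For illustration, these bounds translate directly into code (a sketch; volumes are treated as floats):

```python
def attainable_volume_bounds(k, n, q_max, Q_min, Q_max):
    """(Q^down(t_k), Q^up(t_k)): bounds of the attainable cumulative consumption at date t_k."""
    if k == 0:
        return 0.0, 0.0
    if k == n:
        return float(Q_min), float(Q_max)
    return max(0.0, Q_min - (n - k) * q_max), min(k * q_max, Q_max)
```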
Note that in this paper we only consider firm constraints which means that the holder of the contract cannot violate the constraints. However there exists in the literature alternative settings where the holder can violate the global constraints (not the local ones) but has to pay, at the maturity, a penalty which is proportional to the default (see <cit.>).
The pricing of a swing contract is closely related to the resolution of a backward equation given by the Backward Dynamic Programming Principle.
§.§ Backward Dynamic Programming Principle (BDPP)
Let (Ω, ℱ, {ℱ_t }, ℙ) be a filtered probability space. We assume that there exists a d-dimensional (discrete) Markov process (X_t_k)_0 ≤ k ≤ n and a measurable function g_k : ℝ^d →ℝ such that the spot price (S_t_k)_0 ≤ k ≤ n is given by S_t_k = g_k(X_t_k). Throughout this paper, the function g_k will be assumed to have at most linear growth.
The decision process (q_k)_0 ≤ k ≤ n-1 is defined on the same probability space and is supposed to be ℱ_t_k^X- adapted, where ℱ_t_k^X is the natural (completed) filtration of (X_t_k)_0 ≤ k ≤ n. In the swing context, at each time t_k, by purchasing a volume q_k, the holder of the contract makes an algebraic profit
ψ(q_k, X_t_k) := q_k·(g_k(X_t_k) - K).
Then for every non-negative ℱ_t_k-1^X- measurable random variable Q_k (representing the cumulative purchased volume up to t_k-1), the price of the swing option at time t_k is
V_k(X_t_k, Q_k) = (q_ℓ)_k ≤ℓ≤ n-1∈𝒜_k, Q_k^Q_min, Q_maxsup𝔼(∑_ℓ=k^n-1 e^-r_ℓ(t_ℓ - t_k)ψ(q_ℓ, X_t_ℓ) | X_t_k),
where the set 𝒜_k, Q^Q_min, Q_max of admissible decision processes is defined by
𝒜_k, Q^Q_min, Q_max = {(q_ℓ)_k ≤ℓ≤ n-1, q_t_ℓ : (Ω, ℱ_t_ℓ^X, ℙ) ↦ [q_min, q_max], ∑_ℓ = k^n-1 q_ℓ∈[(Q_min-Q)_+, Q_max-Q] }
and the expectation is taken under the risk-neutral probability and r_ℓ are interest rates over the period [t_0, t_n-1] that we will assume to be zero for the sake of simplicity. Problem (<ref>) appears to be a constrained stochastic control problem. It can be shown (see <cit.>) that for all k=0,…,n-1 and for all Q_k ∈𝒯_k, the swing contract price is given by the following backward equation, also known as the dynamic programming equation:
{[ V_k(x, Q_k) = q ∈ Adm(t_k, Q_k)supψ(q, x) + 𝔼(V_k+1( X_t_k + 1, Q_k + q) | X_t_k = x ),; V_n-1(x, Q_n-1) = q ∈ Adm(t_n-1, Q_n-1)supψ(q, x), ].
where Adm(t_k, Q_k) is the set of admissible controls at time t_k, with Q_k denoting the cumulative consumption up to time t_k-1. Note that, if our objective is the value function, that is V_k(x, Q_k) for any x ∈ℝ defined in (<ref>), then the set Adm(t_k, Q_k) reduces to the following interval,
ℐ_k+1(Q_k) := [max(q_min, Q^down(t_k+1) - Q_k), min(q_max, Q^up(t_k+1) - Q_k) ].
But if our objective is the random variable V_k(X_t_k, Q_k), then, for technical convenience, the preceding set Adm(t_k, Q_k) is the set of all ℱ_t_k^X-adapted processes lying within the interval ℐ_k+1(Q_k) defined in (<ref>). A straightforward consequence of the latter is that the optimal control at a given date must not be anticipatory.
It is worth noting the bang-bang feature of swing contracts proved in <cit.>. That is, if volume constraints q_min, q_max, Q_min, Q_max are whole numbers (this corresponds to the actual setting of traded swing contracts) and Q_max - Q_min is a multiple of q_max - q_min, then the supremum in the BDPP (<ref>) is attained in one of the boundaries of the interval ℐ_k+1(Q_k) defined in (<ref>). In this discrete setting, at each exercise date t_k, the set of achievable cumulative consumptions 𝒯_k defined in (<ref>) reads,
𝒯_k = ℕ∩[Q^down(t_k), Q^up(t_k)],
where Q^down(t_k) and Q^up(t_k) are defined in (<ref>). In this discrete setting, the BDPP (<ref>) remains the same. The main difference lies in the fact that, in the discrete setting, the supremum involved in the BDPP is in fact a maximum over two possible values, enabled by the bang-bang feature. From a practical standpoint, this feature drastically reduces the computation time. Note that this paper aims to study some regression-based methods designed to approximate the conditional expectation involved in the BDPP (<ref>). We study two methods, based on least squares regression and neural network approximation, respectively. For the least squares regression, we go beyond the discrete setting and show that convergence results can be established in general. To achieve this, we need a crucial result which states that the swing value function defined in equation (<ref>) is continuous with respect to the cumulative consumption. The latter may be established by relying on Berge's maximum theorem (see Proposition <ref> in Appendix <ref>). We justify the use of this theorem through the following proposition, which characterizes the set of admissible volumes as a correspondence (we refer the reader to Appendix <ref> for details on correspondences) mapping attainable cumulative consumptions to admissible controls.
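As an illustration of the discrete (bang-bang) setting, the sketch below evaluates one step of the BDPP by comparing only the two endpoints of the admissible interval, reusing the bound function sketched earlier; the `continuation` argument is a placeholder for whatever estimator of the conditional expectation is plugged in, which is precisely the subject of the next section.

```python
def admissible_interval(k_next, Q, n, q_min, q_max, Q_min, Q_max):
    """Interval I_{k+1}(Q) of admissible purchases at t_k given cumulative consumption Q."""
    lo, hi = attainable_volume_bounds(k_next, n, q_max, Q_min, Q_max)
    return max(q_min, lo - Q), min(q_max, hi - Q)

def bdpp_step(k, x, Q, payoff, continuation, n, q_min, q_max, Q_min, Q_max):
    """One step of the BDPP in the discrete setting: thanks to the bang-bang feature the
    supremum is a maximum over the two endpoints of the admissible interval.
    `payoff(q, x)` plays the role of psi and `continuation(k + 1, x, Q_next)` stands for
    an estimate of E[V_{k+1}(X_{t_{k+1}}, Q_next) | X_{t_k} = x]."""
    lo, hi = admissible_interval(k + 1, Q, n, q_min, q_max, Q_min, Q_max)
    candidates = {lo, hi}                  # bang-bang: only the two endpoints matter
    if k == n - 1:                         # last exercise date: no continuation value
        return max(payoff(q, x) for q in candidates)
    return max(payoff(q, x) + continuation(k + 1, x, Q + q) for q in candidates)
```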
Denote by 𝒫([q_min, q_max]) the power set of [q_min, q_max]. Then for all k =0, ...,n-1 the correspondence
Γ_k (𝒯_k, |·|) →(𝒫([q_min, q_max]), d_H )
Q ↦ Adm(t_k, Q)
is continuous and compact-valued.
Let k = 0,...,n-1. We need to prove that the correspondence Γ_k is both lower and upper hemicontinuous. The needed material about correspondences is given in Appendix <ref>. We rely on the sequential characterization of hemicontinuity in Appendix <ref>. Let us start with the upper hemicontinuity. Since the set [q_min, q_max] is compact, the converse of Proposition <ref> in Appendix <ref> holds true.
Let Q ∈𝒯_k and consider a sequence (Q_n)_n ∈ℕ∈𝒯_k^ℕ which converges to Q. Let (y_n)_n ∈ℕ be a real-valued sequence such that for all n ∈ℕ, y_n lies in the correspondence Γ_k(Q_n). Then using the definition of the set of admissible control we know that q_min≤ y_n ≤ q_max yielding (y_n)_n is a real and bounded sequence. Thanks to Bolzano-Weierstrass theorem, there exists a subsequence (y_ϕ(n))_n ∈ℕ which is convergent. Let y = lim_n → +∞ y_ϕ(n), then for all n ∈ℕ,
y_ϕ(n)∈ Adm(t_k, Q_ϕ(n)) ⟺max(q_min, Q^down(t_k+1) - Q_ϕ(n)) ≤ y_ϕ(n)≤min(q_max, Q^up(t_k+1) - Q_ϕ(n)).
Letting n → + ∞ in the preceding inequalities yields y ∈Γ_k(Q). Which shows that Γ_k is upper hemicontinuous at an arbitrary Q. Thus the correspondence Γ_k is upper hemicontinuous.
For the lower hemicontinuity part, let Q ∈𝒯_k, (Q_n)_n ∈ℕ∈𝒯_k^ℕ be a sequence which converges to Q and y ∈Γ_k(Q). Note that if y = max(q_min, Q^down(t_k+1) - Q) (or y = min(q_max, Q^up(t_k+1) - Q)) then it suffices to consider y_n = max(q_min, Q^down(t_k+1) - Q_n) (or y_n = min(q_max, Q^up(t_k+1) - Q_n)) so that y_n ∈Γ_k(Q_n) for all n ∈ℕ and lim_n → +∞ y_n = y.
It remains to treat the case where y belongs to the interior of Γ_k(Q). Thanks to the Peak Point Lemma [see Theorem 3.4.7 in <https://www.geneseo.edu/ aguilar/public/assets/courses/324/real-analysis-cesar-aguilar.pdf> or in <https://proofwiki.org/wiki/Peak_Point_Lemma>] one may extract a monotone subsequence (Q_ϕ(n))_n. Two cases may be distinguished.
* (Q_ϕ(n))_n is a non-decreasing sequence.
In this case, for all n ∈ℕ, Q_ϕ(n)≤ Q. Since y ∈Γ_k(Q) and Q ↦min(q_max, Q^up(t_k+1) - Q) is a non-increasing function, it follows y < min(q_max, Q^up(t_k+1) - Q) ≤min(q_max, Q^up(t_k+1) - Q_ϕ(n)) for all n ∈ℕ. Moreover since y > lim_n → +∞max(q_min, Q^down(t_k+1) - Q_ϕ(n)) ↓max(q_min, Q^down(t_k+1) - Q), one may deduce that there exists n_0 ∈ℕ such that for all n ≥ n_0, y ≥max(q_min, Q^down(t_k+1) - Q_ϕ(n)). Therefore it suffices to set y_n = y for all n ≥ n_0 so that (y_n)_n ≥ n_0 is a sequence such that lim_n → +∞ y_n = y and y_n ∈Γ_k(Q_ϕ(n)) for all n ≥ n_0.
* (Q_ϕ(n))_n is a non-increasing sequence.
Here for all n ∈ℕ, we have Q_ϕ(n)≥ Q so that y ≥max(q_min, Q^down(t_k+1)-Q_ϕ(n)). Following the proof in the preceding case, one may deduce that there exists n_0 ∈ℕ such that for all n ≥ n_0, y ≤min(q_max, Q^up(t_k+1) - Q_ϕ(n)). Thus it suffices to set a sequence (y_n)_n ≥ n_0 identically equal to y.
This shows that the correspondence Γ_k is lower hemicontinuous at an arbitrary Q. Thus Γ_k is both lower and upper hemicontinuous, hence continuous. Moreover, since for all Q ∈𝒯_k, Γ_k(Q) is a closed and bounded interval in ℝ, it is compact. This completes the proof.
In the following proposition, we show the main result of this section concerning the continuity of the value function defined in (<ref>) with respect to the cumulative consumption. Let us define the correspondence C^*_k by,
C^*_k : Q ∈𝒯_k ↦ q ∈ Adm(t_k, Q)argmax{ψ(q, x) + 𝔼(V_k+1(X_t_k + 1, Q + q) | X_t_k = x ) }.
Note that the correspondence C^*_k is the set of solutions of the BDPP (<ref>). Then we have the following proposition.
If for all k = 1,...,n-1 X_t_k∈𝕃_ℝ^d^1(ℙ), then for all k=0,...,n-1 and all x ∈ℝ^d,
* The swing value function Q ∈𝒯_k ↦ V_k(x, Q) is continuous.
* The correspondence C^*_k (defined in (<ref>)) is non-empty, compact-valued and upper hemicontinuous.
Let x ∈ℝ^d. For technical convenience, we introduce for all 0 ≤ k ≤ n-1 an extended value function 𝒱_k(x, ·) defined on the whole real line
𝒱_k(x, Q) :={[ V_k(x, Q) if Q ∈𝒯_k = [Q^down(t_k), Q^up(t_k) ],; V_k(x, Q^down(t_k)) if Q < Q^down(t_k),; V_k(x, Q^up(t_k)) if Q > Q^up(t_k). ].
Note that V_k(x, ·) is the restriction of 𝒱_k(x, ·) on 𝒯_k. Propagating continuity over the dynamic programming equation is challenging due to the presence of the variable of interest Q in both the objective function and the domain in which the supremum is taken. To circumvent this issue, we rely on Berge's maximum theorem. More precisely, we use a backward induction on k along with Berge's maximum theorem to propagate continuity through the BDPP.
For any Q ∈𝒯_n-1, we have 𝒱_n-1(x, Q) = q ∈ Adm(t_n-1, Q)supψ(q, x) and ψ(·, x) is continuous since it is linear in q (see (<ref>)). Thus applying Lemma <ref> yields the continuity of 𝒱_n-1(x, ·) on 𝒯_n-1. Moreover, as 𝒱_n-1(x, ·) is constant outside 𝒯_n-1 then it is continuous on (- ∞, Q^down(t_n-1)) and (Q^up(t_n-1), +∞). The continuity at Q^down(t_n-1) and Q^up(t_n-1) is straightforward given the construction of 𝒱_n-1. Thus 𝒱_n-1(x, ·) is continuous on ℝ. Besides, for all Q ∈ℝ
|𝒱_n-1(X_t_n-1, Q)| ≤Q ∈𝒯_n-1sup|V_n-1(X_t_n-1, Q)| ≤ q_max·(|S_t_n-1| + K ) ∈𝕃_ℝ^1(ℙ).
We now make the following assumption as an induction assumption: 𝒱_k+1(x, ·) is continuous on ℝ and there exists a real integrable random variable G_k+1 (independent of Q) such that, almost surely, |𝒱_k+1(X_t_k+1, Q) | ≤ G_k+1. This implies that (q, Q): [q_min, q_max] ×ℝ↦ψ(q, x) + 𝔼(𝒱_k+1(X_t_k+1, Q+q) | X_t_k = x ) is continuous owing to the theorem of continuity under integral sign. Thus owing to Proposition <ref> one may apply Berge's maximum theorem and we get that 𝒱_k(x, ·) is continuous on ℝ. In particular V_k(x, ·) is continuous on 𝒯_k and the correspondence C_k^* is non-empty, compact-valued and upper hemicontinuous. This completes the proof.
As a result of the preceding proposition, one may substitute the sup in equation (<ref>) with a max. It is worth noting that this provides another proof of the existence of an optimal consumption, in addition to the one presented in <cit.>. Furthermore, our proof, compared to that in <cit.>, does not assume integer volumes.
Having addressed the general problem in equation (<ref>), we can now focus on solving it, which requires computing the continuation value.
§ APPROXIMATION OF CONTINUATION VALUE
This section is focused on resolving the dynamic programming equation (<ref>). The primary challenge in solving this backward equation is to compute the continuation value, which involves a conditional expectation. A straightforward approach may be to compute this conditional expectation using nested simulations, but this can be time-consuming. Instead, the continuation value may be approximated using either least squares regression (as in <cit.>) or neural networks.
Notice that, it follows from the Markov assumption and the definition of conditional expectation that there exists a measurable function Φ_k+1^Q such that
𝔼(V_k+1(X_t_k + 1, Q) | X_t_k) = Φ_k+1^Q(X_t_k),
where Φ_k+1^Q solves the following minimization problem,
Φ∈ℒ^2inf||𝔼(V_k+1(X_t_k + 1, Q) | X_t_k) - Φ(X_t_k) ||_2,
where ℒ^2 denotes the set of all measurable functions that are square-integrable. Due to the vastness of ℒ^2, the optimization problem (<ref>) is quite challenging, if not impossible, to solve in practice. It is therefore common to introduce a parameterized form Φ_k+1(· ; θ) as a solution to problem (<ref>). That is, we need to find the appropriate value of θ in a certain parameter space Θ such that it solves the following optimization problem:
θ∈Θinf||𝔼(V_k+1(X_t_k + 1, Q) | X_t_k) - Φ_k+1(X_t_k; θ) ||_2.
Solving the latter problem seems to require the continuation value, which is precisely the quantity we are trying to compute. But since the conditional expectation is an orthogonal projection, it follows from Pythagoras' theorem,
||V_k+1(X_t_k + 1, Q) - Φ_k+1(X_t_k; θ) ||_2^2
= ||V_k+1(X_t_k + 1, Q) - 𝔼(V_k+1( X_t_k + 1, Q) | X_t_k)||_2^2 + ||𝔼(V_k+1( X_t_k + 1, Q) | X_t_k) - Φ_k+1(X_t_k; θ) ||_2^2.
Thus any θ that solves the preceding problem (<ref>) also solves the following optimization problem
θ∈Θinf||V_k+1(X_t_k + 1, Q) - Φ_k+1(X_t_k; θ) ||_2.
Thus, in this paper and when needed, we will consider the two optimization problems interchangeably. In the next section we discuss how the function Φ_k+1(· ; θ) is parametrized, depending on whether we use least squares regression or neural networks. Moreover, instead of the superscript as in (<ref>), we adopt the following notation: Φ_k+1^Q(·) := Φ(·; θ_k+1(Q)), where θ_k+1(Q) ∈Θ solves the optimization problem (<ref>) or, equivalently, (<ref>). We also drop the subscript on Φ, as the function is the same for each exercise date; only the parameters θ_k+1(Q) may differ.
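As a quick numerical sanity check of this equivalence, the following sketch (a toy example of ours, not taken from the paper; the model Y = X^2 + ε and the basis are illustrative) regresses both Y and 𝔼(Y | X) on the same linear parametrization and recovers, up to Monte Carlo error, the same coefficients, as implied by the Pythagoras decomposition above.

```python
# Toy illustration: with a linear parametrization Phi(x; theta) = <theta, e^m(x)>,
# minimizing ||Y - Phi(X; theta)||_2 and minimizing ||E(Y|X) - Phi(X; theta)||_2
# yield the same minimizer (up to Monte Carlo error).
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
X = rng.standard_normal(N)
Y = X**2 + rng.standard_normal(N)             # here E(Y | X) = X^2
E = np.column_stack([np.ones(N), X, X**2])    # regression basis e^m(X) = (1, X, X^2)

theta_on_Y = np.linalg.lstsq(E, Y, rcond=None)[0]        # regress on Y
theta_on_cond = np.linalg.lstsq(E, X**2, rcond=None)[0]  # regress on E(Y | X)
print(np.round(theta_on_Y, 3), np.round(theta_on_cond, 3))  # both close to (0, 0, 1)
```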
§.§ Least squares approximation
In the least squares regression approach, the continuation value is approximated as an orthogonal projection over a subspace spanned by a finite number of squared-integrable functions (see <cit.>). More precisely, given m ∈ℕ^* functions e^m(·) = (e_1(·),...,e_m(·) ), we replace the continuation value involved in (<ref>) by an orthogonal projection over the subspace spanned by e^m(X_t_k). This leads to the approximation V_k^m of the actual value function V_k which is defined backwardly as follows,
{[ V^m_k(X_t_k, Q) = _q ∈ Adm(t_k, Q)ψ(q, X_t_k) + Φ_m(X_t_k; θ_k+1, m(Q+q) ),; V^m_n-1(X_t_n-1, Q) = V_n-1(X_t_n-1, Q) = _q ∈ Adm(t_n-1, Q)ψ(q, X_t_n-1), ].
where Φ_m is defined as follows,
Φ_m(X_t_k; θ_k+1, m(Q) ) = ⟨θ_k+1, m(Q), e^m(X_t_k) ⟩
with θ_k+1, m(Q) ∈Θ_m = ℝ^m being a vector whose components are the coordinates of the orthogonal projection and which lies within the following set
𝒮_k^m(Q) := _θ∈Θ_m||V_k+1^m(X_t_k + 1, Q) - ⟨θ, e^m(X_t_k) ⟩||_2.
Solving the optimization problem (<ref>) leads to a classic linear regression. In this paper, we will assume that e^m(·) forms a linearly independent family, so that the set 𝒮_k^m(Q) reduces to a singleton and the parameter θ_k+1, m(Q) is uniquely defined as:
θ_k+1, m(Q) := (A_m^k )^-1·𝔼(V_k+1^m(X_t_k + 1, Q)e^m(X_t_k) ).
Note that without the latter assumption, 𝒮_k^m(Q) may not be a singleton. However, in this case, instead of the inverse matrix (A_m^k )^-1, one may consider the Moore–Penrose inverse or pseudo-inverse matrix (A_m^k)^†, which yields the minimal-norm solution. In equation (<ref>) we used the following notation
𝔼(V_k+1^m(X_t_k + 1, Q)e^m(X_t_k) ) := [ 𝔼(V_k+1^m(X_t_k + 1, Q)e_1(X_t_k) ); 𝔼(V_k+1^m(X_t_k + 1, Q)e_2(X_t_k) ); ⋮; 𝔼(V_k+1^m(X_t_k + 1, Q)e_m(X_t_k) ) ]∈ℝ^m,
where A_m^k := ((A_m^k)_i, j)_1 ≤ i, j ≤ m is a (Gram) matrix with entries
⟨ e_i(X_t_k), e_j(X_t_k) ⟩_𝕃^2(ℙ) = 𝔼(e_i(X_t_k) e_j(X_t_k) ) 1 ≤ i, j ≤ m.
In practice, to compute the vector θ_k+1, m(Q) we need to simulate N independent paths (X_t_0^[p], ...,X_t_n-1^[p])_1 ≤ p ≤ N and use Monte Carlo to evaluate the expectations involved (see equations (<ref>) and (<ref>)). This leads to a second approximation, which is a Monte Carlo approximation. For this second approximation, we define the value function V_k^m, N from equation (<ref>) where we replace the expectations by their empirical counterparts
{[ V^m, N_k(X_t_k, Q) = _q ∈ Adm(t_k, Q)ψ(q, X_t_k) + Φ_m(X_t_k; θ_k+1, m, N(Q+q) ),; V^m, N_n-1(X_t_n-1, Q) = V_n-1(X_t_n-1, Q) ].
with
θ_k+1, m, N(Q) = (A_m, N^k )^-11/N∑_p=1^N V^m, N_k+1(X_t_k+1^[p], Q)e^m(X_t_k^[p] ),
using the notation
1/N∑_p=1^N V^m, N_k+1(X_t_k+1^[p], Q)e^m(X_t_k^[p] ) := [ 1/N∑_p=1^N V^m, N_k+1(X_t_k+1^[p], Q)e_1(X_t_k^[p] ); 1/N∑_p=1^N V^m, N_k+1(X_t_k+1^[p], Q)e_2(X_t_k^[p] ); ⋮; 1/N∑_p=1^N V^m, N_k+1(X_t_k+1^[p], Q)e_m(X_t_k^[p] ) ]∈ℝ^m
and A_m, N^k := ((A_m, N^k)_i, j)_1 ≤ i, j ≤ m is a m × m (Gram) matrix whose components are
1/N∑_p=1^N e_i(X_t_k^[p]) e_j(X_t_k^[p]) 1 ≤ i, j ≤ m.
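For concreteness, a minimal sketch (our function and variable names, not the authors' code) of how the estimator θ_k+1, m, N(Q) may be computed from simulated paths is given below; the pseudo-inverse covers the case where the empirical Gram matrix is numerically singular, as mentioned above.

```python
# Minimal sketch of the Monte Carlo estimator theta_{k+1,m,N}(Q): solve the empirical
# normal equations A_{m,N}^k theta = (1/N) sum_p V_{k+1}^{m,N}(X_{t_{k+1}}^[p], Q) e^m(X_{t_k}^[p]).
import numpy as np

def regression_coefficients(basis_k, v_next):
    """basis_k: (N, m) array whose p-th row is e^m(X_{t_k}^[p]);
    v_next: (N,) array of V_{k+1}^{m,N}(X_{t_{k+1}}^[p], Q) values."""
    N = basis_k.shape[0]
    gram = basis_k.T @ basis_k / N          # empirical Gram matrix A_{m,N}^k
    rhs = basis_k.T @ v_next / N            # empirical cross-moment vector
    return np.linalg.pinv(gram) @ rhs       # Moore-Penrose inverse handles near-singular cases
```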
This paper investigates a modified version of the least squares method proposed in <cit.>. In their approach, the value function at each time step is the result of two steps. First, they compute the optimal control, which is an admissible control that maximizes the value function (<ref>), along with Monte Carlo simulations. Then, given the optimal control, they compute the value function by summing up all cash flows from the considered exercise date until maturity. Recall that we proceed backwards so that, in practice, at a given exercise date t_k the optimal controls from t_k+1 to t_n-1 have already been determined, and the optimal cash flows at these dates may therefore be computed. Our method, by contrast, directly replaces the continuation value with a linear combination of functions, and the value function is the maximum, over admissible volumes, of the current cash flow plus this combination of functions. The main difference between both approaches lies in the following. The value function computed in <cit.> corresponds to actual realized cash flows whereas the value function in our case does not. However, as recommended in the original paper <cit.>, after having estimated the optimal control backwards, a forward valuation has to be performed in order to eliminate biases. By doing so, both our method and that proposed in <cit.> correspond to actual realized cash flows, and the two approaches coincide.
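To make the backward recursion defining V_k^m, N concrete, here is a hedged sketch of one backward step in the discrete (integer-volume) setting. The payoff ψ(q, S) = q·(S - K), the grid of cumulative consumptions and the names q_min, q_max, Q^down, Q^up are assumptions of ours, chosen to be consistent with the notation used in the paper; this is illustrative, not a definitive implementation.

```python
# Hedged sketch of one step of the backward recursion for V_k^{m,N} (discrete setting,
# integer volumes). Assumes psi(q, S) = q * (S - K); all names are illustrative.
import numpy as np

def backward_step(S_k, basis_k, theta_next, Q_grid, K,
                  q_min, q_max, Q_down_next, Q_up_next):
    """S_k: (N,) spot prices at t_k; basis_k: (N, m) array e^m(X_{t_k});
    theta_next: dict mapping an attainable Q' at t_{k+1} to theta_{k+1,m,N}(Q');
    returns a dict mapping Q in Q_grid to the (N,) array V_k^{m,N}(X_{t_k}, Q)."""
    V_k = {}
    for Q in Q_grid:                                      # Q_grid: attainable integer cumulative consumptions
        lo = int(np.ceil(max(q_min, Q_down_next - Q)))    # admissible integer volumes at t_k
        hi = int(np.floor(min(q_max, Q_up_next - Q)))
        best = np.full(S_k.shape, -np.inf)
        for q in range(lo, hi + 1):
            cont = basis_k @ theta_next[Q + q]            # <theta_{k+1,m,N}(Q+q), e^m(X_{t_k})>
            best = np.maximum(best, q * (S_k - K) + cont)
        V_k[Q] = best
    return V_k
```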
Our convergence analysis of the least squares approximation will require some technical assumptions we state below.
§.§.§ Main assumptions
ℋ_1^LS: For all k=0,…,n-1, the sequence (e_i( X_t_k))_i ≥ 1 is total in 𝕃^2(σ(X_t_k) ).
ℋ_2^LS: For all k=0,…,n-1, almost surely, e_1(X_t_k),…,e_m(X_t_k) are linearly independent.
This assumption ensures that the Gram matrix A_m^k is non-singular. Moreover, it guarantees that the matrix A_m, N^k is non-singular for N large enough. Indeed, by the strong law of large numbers, almost surely A_m, N^k → A_m^k ∈𝔾L_m(ℝ) (as N → +∞), the latter set being open.
ℋ_3, r: For all k = 0, …, n-1, the random vector X_t_k has finite moments at order r. ℋ_3, ∞ will then denote the existence of moments at any order.
ℋ_4, r^LS: For all k = 0, …, n-1 and for all j = 1, …, m the random variable e_j (X_t_k) has finite moments at order r. Likewise, ℋ_4, ∞^LS will then denote the existence of moments at any order.
If assumption ℋ_3, ∞ holds, one may replace assumption ℋ_4, r^LS by an assumption of linear or polynomial growth of functions e_j(·) with respect to the Euclidean norm.
Before proceeding, note that the continuity property of the true value function V_k with respect to the cumulative consumption, as stated in Proposition <ref>, also applies to the approximated value function V_k^m involved in the least squares regression.
If we assume that ℋ_3, 2r and ℋ_4, 2r^LS hold true for some r ≥ 1, then one may show, by a straightforward backward induction, that the functions
Q ∈𝒯_k+1↦𝔼(|V_k+1^m(X_t_k+1, Q)e^m(X_t_k)|^r ) or V_k+1^m(X_t_k+1, Q)
are continuous. If only assumption ℋ_3, r holds true then V_k+1(X_t_k+1, ·) is continuous and there exists a random variable G_k+1∈𝕃_ℝ^r(ℙ) (independent of Q) such that V_k+1(X_t_k+1, · ) ≤ G_k+1.
Instead of using classic functions as regression functions and projecting the swing value function onto the subspace spanned by these regression functions, an alternative approach consists in using neural networks. Motivated by the function approximation capacity of deep neural networks, as quantified by the Universal Approximation Theorem (UAT), our goal is to explore whether a neural network can replace conventional regression functions. In the following section, we introduce a methodology based on neural networks that aims to approximate the continuation value.
§.§ Neural network approximation
The goal of a neural network is to approximate a complex function Φ : ℝ^d →ℝ^ℓ by a parametric function Φ(· ; θ), where the parameters θ (the weights of the neural network) have to be optimized so that the distance between the two functions Φ and Φ(·; θ) is as small as possible. A neural network can approximate a wide class of complex functions (see <cit.>). A neural network is made of nodes connected to one another, where a column of nodes forms a layer (when there is more than one hidden layer in the architecture we speak of a deep neural network). The outermost layers (see diagram <ref>) are the input and output layers and all those in between are called the hidden layers. The connection between the input and output layers through the hidden layers is made by means of affine functions and (non-linear) activation functions.
From a mathematical point of view, a neural network can be written as
x ∈ℝ^d ↦Φ(x; θ) := ψ∘ a_I^θ_I∘ϕ_q_I-1∘ a_I-1^θ_I-1∘…∘ϕ_q_1∘ a_1^θ_1(x) ∈ℝ^ℓ,
where
I is the number of hidden layers representing the depth of the neural network.
Each layer has weights 𝒲 and bias b. For all 2 ≤ i ≤ I,
x ∈ℝ^q_i-1↦ a_i^θ_i(x) = 𝒲_i · x + b_i ∈ℝ^q_i with θ_i = (𝒲_i, b_i) ∈ℝ^q_i-1× q_i×ℝ^q_i,
and
x ∈ℝ^d↦ a_1^θ_1(x) = 𝒲_1 · x + b_1 ∈ℝ^q_1 with θ_1 = (𝒲_1, b_1) ∈ℝ^d × q_1×ℝ^q_1.
q_1, …, q_I are positive integers denoting the number of nodes per hidden layer and representing the width of the neural network.
(ϕ_q_i)_1 ≤ i ≤ I-1 are non-linear functions called activation functions and are applied component wise.
ψ is the activation function for the output layer.
For the sake of simpler notation, we embed all the parameters of the different layers in a unique high dimensional parameter θ = (θ_1, …, θ_I ) ∈ℝ^N_q with N_q = ∑_i = 1^I q_i-1· (1 + q_i) (with q_0 = d). In order to study the neural network approximation, we use the same notation as in <cit.>. We denote by 𝒩𝒩_∞ the set of all neural networks of the form (<ref>). Then we consider, for some integer m ≥ 1, 𝒩𝒩_m the set of neural networks of the form (<ref>) with at most m nodes per hidden layer and bounded parameters. More precisely, we consider
Θ_m = {θ = (θ_1, …, θ_I) ∈ℝ^{d × m}×ℝ^m×( ℝ^{m × m}×ℝ^m)^I-2×ℝ^m×ℝ : |θ| ≤γ_m }
which denotes the set of all parameters (bounded by γ_m) of a neural network with at most m nodes per hidden layer, where (γ_m)_m ≥ 2 is an increasing and unbounded (real) sequence. Thus 𝒩𝒩_m is defined as the set of all neural networks whose parameters lie in Θ_m,
𝒩𝒩_m = {Φ(·; θ) : ℝ^d →ℝ; θ∈Θ_m }.
Note that 𝒩𝒩_∞ = ⋃_m ∈ℕ𝒩𝒩_m.
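As an illustration only (the layer sizes and the choice ψ = identity are ours, and the bound |θ| ≤γ_m defining Θ_m is not enforced), the parametric map Φ_m(·; θ) of the form above, with bounded sigmoid activations, can be sketched as follows.

```python
# Sketch of Phi_m(. ; theta): a composition of affine maps a_i and bounded activations,
# as in the displayed form above. Sizes are illustrative; psi is taken as the identity.
import torch.nn as nn

d, m, I = 2, 10, 3                   # input dimension, units per hidden layer, number of affine layers
blocks, in_dim = [], d
for _ in range(I - 1):
    blocks += [nn.Linear(in_dim, m), nn.Sigmoid()]   # a_i^{theta_i} followed by phi_{q_i}
    in_dim = m
blocks += [nn.Linear(in_dim, 1)]                     # a_I^{theta_I}; output activation psi = identity
phi_m = nn.Sequential(*blocks)
```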
In this paper, we consider the approximation of the continuation value using neural network. This leads to an approximated value function V_k^m backwardly defined by
{[ V_k^m(X_t_k, Q) = _q ∈ Adm(t_k, Q)ψ(q, X_t_k) + Φ_m(X_t_k; θ_k + 1, m(Q + q) ),; V_n-1^m(X_t_n-1, Q) = V_n-1(X_t_n-1, Q), ].
where Φ_m(·; θ) denotes a function lying within 𝒩𝒩_m with θ∈Θ_m. Thus θ_k + 1, m(Q) belongs to the following set
𝒮_k^m(Q) := _θ∈Θ_m||V_k+1^m(X_t_k+1, Q) - Φ_m(X_t_k; θ) ||_2.
To analyze the convergence of the neural network approximation we will rely on their powerful approximation ability. The latter is stated by the Universal Approximation Theorem.
Assume that the activation functions in (<ref>) are non-constant and bounded. Let μ denote a probability measure on ℝ^d; then for any I ≥ 2, 𝒩𝒩_∞ is dense in 𝕃^2(ℝ^d, μ).
As stated in <cit.>, Theorem <ref> can be seen as follows. For any ℝ^d-valued random vector X and any (real) square-integrable random variable Y defined on a measurable space, there exists a sequence (θ_m)_m ≥ 2∈∏_m = 2^∞Θ_m such that lim_m → +∞||𝔼(Y | X) - Φ_m (X; θ_m) ||_2 = 0. Thus, if for all m ≥ 2, θ_m solves
θ∈Θ_minf||Φ_m(X; θ) - Y ||_2,
then the sequence (Φ_m(X; θ_m) )_m ≥ 2 converges to 𝔼(Y | X) in 𝕃^2(μ).
The universal approximation capacity of neural networks has been widely studied in the literature <cit.>. Some quantitative error bounds have been proved when the function to approximate is sufficiently smooth. A brief overview is presented in the following remark.
When the weighted average of the Fourier representation of the function to approximate is bounded, an error bound for the convergence in Remark <ref> of order 𝒪(m^-1/2) has been shown in <cit.>. It may appear that the dimension of the problem does not degrade the convergence rate but, as discussed by the authors, this may be hidden in the Fourier representation. In <cit.> it has been proved that, when the activation functions are infinitely continuously differentiable and the function to approximate is p-times continuously differentiable and Lipschitz, the sup-norm of the approximation error on every compact set is bounded by a term of order 𝒪(m^-(p+1)/d). For a more detailed overview of quantitative error bounds, we refer the reader to <cit.>.
Note that, as in the least squares method, in practice, we simulate N independent paths (X_t_0^[p], ...,X_t_n-1^[p])_1 ≤ p ≤ N and use Monte Carlo approximation to compute the swing value function. For that purpose, we backwardly define the value function V_k^m, N by,
{[ V_k^m, N(X_t_k^[p], Q) = _q ∈ Adm(t_k, Q)ψ(q, X_t_k^[p]) + Φ_m(X_t_k^[p]; θ_k+1, m, N(Q + q) ),; V_n-1^m, N(X_t_n-1^[p], Q) = V_n-1(X_t_n-1^[p], Q), ].
where θ_k+1, m, N(Q) lies within the following set,
𝒮_k^m, N(Q) := _θ∈Θ_m1/N∑_p = 1^N|V_k+1^m, N(X_t_k+1^[p], Q) - Φ_m(X_t_k^[p]; θ) |^2.
Note that the sets 𝒮_k^m(Q) and 𝒮_k^m, N(Q) (respectively defined in equations (<ref>) and (<ref>)) generally do not reduce to a singleton. Thus hereafter, the notation θ_k+1, m(Q) or θ_k+1, m, N(Q) will denote an element of the corresponding set 𝒮_k^m(Q) or 𝒮_k^m, N(Q).
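In practice, an element of 𝒮_k^m, N(Q) is obtained numerically by a stochastic gradient method. The sketch below is a hedged illustration (the optimizer and hyper-parameters are our choices, and the bound |θ| ≤γ_m on Θ_m is not enforced) that minimizes the empirical squared loss above, reusing the network phi_m from the previous sketch.

```python
# Hedged sketch of the numerical search for theta_{k+1,m,N}(Q): minimize the empirical
# squared loss over the network parameters (optimizer and hyper-parameters are illustrative).
import torch

def fit_continuation(phi_m, x_k, v_next, n_steps=500, lr=1e-2):
    """x_k: (N, d) tensor of X_{t_k} samples; v_next: (N,) tensor of V_{k+1}^{m,N}(X_{t_{k+1}}, Q)."""
    opt = torch.optim.Adam(phi_m.parameters(), lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = ((phi_m(x_k).squeeze(-1) - v_next) ** 2).mean()  # empirical counterpart of the loss above
        loss.backward()
        opt.step()
    return phi_m
```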
§ CONVERGENCE ANALYSIS
We conduct a convergence analysis by following a similar approach as in <cit.>. Our initial focus is to establish a convergence result as the size of the architecture used to approximate the continuation value increases. By architecture size, we mean either the number of regression functions (in the least squares approximation) or the number of neural network units per hidden layer. Then, we fix the value of m (the architecture's size) and examine the associated Monte Carlo approximation. Let us start with the first step.
§.§ Convergence with respect to the number of approximation functions
We focus on the approximations (<ref>) and (<ref>) of the BDPP (<ref>). In this section, we do not restrict ourselves to the bang-bang setting. That is, for both approximation methods, we consider arbitrary volume constraints (not limited to integers).
§.§.§ Least squares approximation
We start by analyzing the first approximation in the least squares setting (<ref>). We show the convergence of the approximated value function V_k^m as m tends to infinity. To state this property we need the following result.
Let m be a positive integer. Assume ℋ_2^LS and ℋ_3, 2 hold true. Then, for all k=0,…,n-2, the function
Q ↦||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1(X_t_k+1, Q)| X_t_k) ||_2
is continuous on 𝒯_k+1, where Φ_m is defined in (<ref>) and θ̃_k+1, m(Q) solves the theoretical optimization problem
θ∈Θ_minf||V_k+1(X_t_k + 1, Q) - Φ_m(X_t_k; θ) ||_2.
Keeping in mind relation (<ref>), it suffices to prove that the functions,
Q ↦||V_k+1(X_t_k+1, Q) - 𝔼(V_k+1(X_t_k+1, Q)| X_t_k) ||_2^2
and
Q ↦||V_k+1(X_t_k+1, Q) - Φ_m(X_t_k; θ̃_k+1, m(Q)) ||_2^2
are continuous. Let us start with the first function. Let Q ∈𝒯_k+1 and consider a sequence (Q_n)_n which converges to Q. We know (as pointed out in Remark <ref>) that assumption ℋ_3, 2 entails that V_k+1(X_t_k+1, ·) is continuous and there exists G_k+1∈𝕃_ℝ^2(ℙ) (independent of Q) such that V_k+1(X_t_k+1, ·) ≤ G_k+1. Thus the Lebesgue dominated convergence theorem implies that,
lim_n → +∞||V_k+1(X_t_k+1, Q_n) - 𝔼(V_k+1(X_t_k+1, Q_n)| X_t_k) ||_2^2 = ||V_k+1(X_t_k+1, Q) - 𝔼(V_k+1(X_t_k+1, Q)| X_t_k) ||_2^2
yielding the continuity of the function defined in (<ref>). We now prove the continuity of the second function defined in (<ref>). Using assumption ℋ_2^LS, it follows from Proposition <ref> that,
||Φ_m(X_t_k; θ̃_k+1, m(Q)) - V_k+1(X_t_k+1, Q) ||_2^2 = G (V_k+1(X_t_k+1, Q), e_1(X_t_k), …, e_m(X_t_k) )/G( e_1(X_t_k), …, e_m(X_t_k) )
where G(x_1, …, x_n) denotes the Gram determinant associated to the canonical 𝕃^2(ℙ) inner product. Since assumption ℋ_3, 2 entails the continuity of V_k+1(X_t_k+1, ·), then owing to the continuity of the determinant, one may conclude that Q ∈𝒯_k+1↦||Φ_m(X_t_k; θ̃_k+1, m(Q)) -V_k+1(X_t_k+1, Q) ||_2^2 is continuous as a composition of continuous functions. This completes the proof.
The preceding proposition allows us to show our first convergence result stated in the following proposition.
Under assumptions ℋ_1^LS, ℋ_2^LS and ℋ_3, 2, we have for all 0 ≤ k ≤ n-1,
lim_m → +∞Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k( X_t_k, Q)||_2 = 0.
We proceed by a backward induction on k. We have, almost surely, V^m_n-1(X_t_n-1, Q) = V_n-1(X_t_n-1, Q) for any Q ∈𝒯_n-1 and therefore the proposition holds true for k = n-1. Let us suppose it holds for k+1. For all Q ∈𝒯_k using the inequality |i ∈ Isup a_i - i ∈ Isup b_i| ≤i ∈ Isup |a_i-b_i|, we get,
|V^m_k(X_t_k, Q) - V_k(X_t_k, Q)|^2 ≤_q ∈ Adm(t_k, Q)|Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1(X_t_k+1, Q + q) | X_t_k ) |^2.
Taking the expectation in the previous inequality yields,
||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 ≤𝔼(_q ∈ Adm(t_k, Q)|Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1(X_t_k+1, Q + q) | X_t_k ) |^2).
To interchange the essential supremum with the expectation, we rely on the bifurcation property. For all q ∈ Adm(t_k, Q), consider
A_k^m(Q, q) := |Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1( X_t_k+1, Q + q) | X_t_k )|^2.
Then for all q_1, q_2 ∈ Adm(t_k, Q) define the following random variable
q_A^* = q_1 ·1_{A_k^m(Q, q_1) ≥ A_k^m(Q, q_2)} + q_2 ·1_{A_k^m(Q, q_1) < A_k^m(Q, q_2)}.
It follows from the definition of Φ_m in (<ref>) and that of the conditional expectation that A_k^m(Q, q) is σ(X_t_k)-measurable for all q ∈ Adm(t_k, Q). Thus using (<ref>) yields q_A^*∈ Adm(t_k, Q) and A_k^m(Q, q_A^*) = max(A_k^m(Q, q_1), A_k^m(Q, q_2) ). Therefore one may use the bifurcation property in (<ref>) and we get,
||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 ≤q ∈ Adm(t_k, Q)sup||Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1(X_t_k+1, Q + q) | X_t_k ) ||_2^2
≤ 2 q ∈ Adm(t_k, Q)sup||Φ_m(X_t_k; θ_k+1, m(Q+q)) - Φ_m(X_t_k; θ̃_k+1, m(Q+q)) ||_2^2
+2q ∈ Adm(t_k, Q)sup||Φ_m(X_t_k; θ̃_k+1, m(Q+q)) - 𝔼(V_k+1( X_t_k+1, Q + q) | X_t_k ) ||_2^2
where in the last inequality, we used Minkowski inequality. θ̃_k+1, m(Q+q) solves the theoretical optimization problem (<ref>). Note that in the latter problem, we introduced the actual (not known) value function V_k+1 unlike in equation (<ref>). This is just a theoretical tool as the preceding optimization problem cannot be solved since we do not know the actual value function V_k+1. Thus taking the supremum in (<ref>) yields,
Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2
≤ 2 Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - Φ_m(X_t_k; θ̃_k+1, m(Q)) ||_2^2
+2Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1(X_t_k+1, Q) | X_t_k) ||_2^2,
where we used the fact that, for all Q ∈𝒯_k and all q ∈ Adm(t_k, Q) we have Q + q ∈𝒯_k+1. Besides, recall that Φ_m(X_t_k; θ̃_k+1, m(Q)) and Φ_m(X_t_k; θ_k+1, m(Q)) are orthogonal projections of V_k+1(X_t_k+1, Q) and V_k+1^m(X_t_k+1, Q) on the subspace spanned by e^m(X_t_k). Then knowing that the orthogonal projection is 1-Lipschitz, we have
Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - Φ_m(X_t_k; θ̃_k+1, m(Q))||_2^2 ≤Q ∈𝒯_k+1sup||V^m_k+1(X_t_k+1, Q) - V_k+1(X_t_k+1, Q)||_2^2.
Thanks to the induction assumption, the right hand side of the last inequality converges to 0 as m → + ∞, so that,
Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - Φ_m(X_t_k; θ̃_k+1, m(Q))||_2^2 0.
It remains to prove that
lim_m → +∞Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 = 0.
To achieve this, we rely on Dini's lemma, whose assumptions hold true owing to the following three facts.
§.§.§ Pointwise convergence
It follows from assumption ℋ_1^LS that, for any Q ∈𝒯_k+1,
lim_m → +∞||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 = 0.
§.§.§ Continuity
The continuity of Q ↦||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 is given by Proposition <ref> under assumptions ℋ_2^LS and ℋ_3, 2.
§.§.§ Monotony
Denote by F_m^k := span( e_1(X_t_k), …, e_m(X_t_k) ). Then it is straightforward that for any m ≥ 1, F_m^k ⊆ F_m+1^k. So that,
||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 = Y ∈ F_m^kinf||𝔼(V_k+1( X_t_k+1, Q) | X_t_k) - Y||_2^2
≥Y ∈ F_m+1^kinf||𝔼(V_k+1( X_t_k+1, Q) | X_t_k) - Y||_2^2
= ||Φ_m+1(X_t_k; θ̃_k+1, m+1(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2.
Thus the sequence,
(||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 )_m ≥ 1
is non-increasing. From the three preceding properties, one may apply Dini's lemma, yielding the desired result (<ref>). Finally, combining (<ref>) and (<ref>) in (<ref>) yields,
lim_m → +∞Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 = 0.
This completes the proof.
§.§.§ Neural network approximation
We now consider the approximation of the continuation value when using neural network. We prove a similar result as in Proposition <ref>, when the number of units per hidden layer increases. To achieve this, we need the following assumptions.
ℋ_1^𝒩𝒩: For every m ≥ 2, there exists q ≥ 1 such that for every θ∈Θ_m, Φ_m(·; θ) has q-polynomial growth uniformly in θ.
ℋ_2^𝒩𝒩: For any 0 ≤ k ≤ n-1, a.s. the random functions θ∈Θ_m ↦Φ_m(X_t_k; θ) are continuous. Owing to the Heine–Cantor theorem, the compactness of Θ_m yields uniform continuity.
Assume ℋ_1^𝒩𝒩, ℋ_2^𝒩𝒩 and ℋ_3, 2q (with q involved in assumption ℋ_1^𝒩𝒩) hold true. Then, for all 0 ≤ k ≤ n-1,
lim_m → +∞Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k( X_t_k, Q)||_2 = 0.
We proceed by a backward induction on k. For k = n-1, we have V^m_n-1(X_t_n-1, Q) = V_n-1(X_t_n-1, Q) and therefore the proposition holds true. Let us suppose it holds for k+1. In the spirit of the beginning of the proof of Proposition <ref>, we have for all Q ∈𝒯_k using the inequality: |i ∈ Isup a_i - i ∈ Isup b_i| ≤i ∈ Isup |a_i-b_i| and triangle inequality,
||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 ≤𝔼(_q ∈ Adm(t_k, Q)|Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1( X_t_k+1, Q + q) | X_t_k) |^2 ).
Then we aim to apply the bifurcation property. For all q ∈ Adm(t_k, Q), consider,
A_k^m(Q, q) = |Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1( X_t_k+1, Q + q) | X_t_k)|^2.
Then for all q_1, q_2 ∈ Adm(t_k, Q) define
q_A^* = q_1 ·1_{A_k^m(Q, q_1) ≥ A_k^m(Q, q_2)} + q_2 ·1_{A_k^m(Q, q_1) < A_k^m(Q, q_2)}.
Using the definition of the conditional expectation and since activation functions are continuous (assumption ℋ_2^𝒩𝒩), A_k^m(Q, q) is σ(X_t_k)-measurable for all q ∈ Adm(t_k, Q). Moreover, q_A^*∈ Adm(t_k, Q) and A_k^m(Q, q_A^*) = max(A_k^m(Q, q_1), A_k^m(Q, q_2) ). Thus using the bifurcation property and taking the supremum in (<ref>) yields,
Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 ≤Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k)||_2^2.
Using Minkowski inequality and the inequality: (a+b)^2 ≤ 2(a^2+b^2) yields,
Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 ≤ 2 Q ∈𝒯_k+1sup||𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k)||_2^2
+2Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - 𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) ||_2^2.
By the induction assumption, the first term in the right hand side converges to 0 as m → + ∞. Let us consider the second term. Since θ_k+1, m(Q) solves (<ref>), we have
Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - 𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) ||_2^2 ≤Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) ||_2^2,
where θ̃_k+1, m(Q) solves the theoretical optimization problem,
θ∈Θ_minf||V_k+1(X_t_k + 1, Q) - Φ_m(X_t_k; θ) ||_2
with Θ_m defined in (<ref>). Then it follows from Minskowki inequality that
Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) ||_2^2 ≤Q ∈𝒯_k+1sup||𝔼(V_k+1( X_t_k+1, Q) | X_t_k) - 𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) ||_2^2
+Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2.
Once again, by the induction assumption, the first term in the right hand side converges to 0 as m → +∞. Moreover, thanks to the universal approximation theorem, for all Q ∈𝒯_k+1
||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 0.
Besides notice that,
||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 = Φ∈𝒩𝒩_minf||Φ(X_t_k) - 𝔼(V_k+1(X_t_k+1, Q) | X_t_k) ||_2^2
where 𝒩𝒩_m is defined in (<ref>). But since the sequence (Θ_m )_m is non-decreasing (in the sense that Θ_m ⊆Θ_m+1), then (𝒩𝒩_m)_m is too. So that by the previous equality (<ref>),
( ||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 )_m ≥ 2
is a non-increasing sequence. Thus keeping in mind equation (<ref>), if the function,
H_k (𝒩𝒩_m, |·|_sup) ×(𝒯_k+1, |·|) →ℝ
(Φ, Q) ⟼ ||L_k(Φ, Q)||_2^2 := ||Φ(X_t_k) - 𝔼(V_k+1(X_t_k+1, Q) | X_t_k) ||_2^2
is continuous, then thanks to Theorem <ref> (noticing that for all m ≥ 2, 𝒩𝒩_m is a compact set), the function
Q ↦||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2
will be continuous on the compact set 𝒯_k+1. Thus one may use Dini's lemma and conclude that the pointwise convergence in (<ref>) is in fact uniform, which will complete the proof.
Note that we have already shown that Q ↦𝔼(V_k+1(X_t_k+1, Q) | X_t_k) is almost surely continuous under assumption ℋ_3, 2q. Moreover using the classic inequality: (a+b)^2 ≤ 2(a^2 + b^2) and then conditional Jensen inequality
|L_k(Φ, Q)|^2 ≤ 2 ·|Φ(X_t_k)|^2 + 2 ·𝔼(V_k+1(X_t_k+1, Q)^2 | X_t_k)
≤ 2 ·|Φ(X_t_k)|^2 + 2 ·𝔼(G_k+1^2 | X_t_k) ∈𝕃^1_ℝ(ℙ),
where the existence of G_k+1∈𝕃^2_ℝ(ℙ) (independent of Q) follows from Remark <ref> and is implied by assumption ℋ_3, 2q. Note that the integrability of |Φ(X_t_k)|^2 follows from assumptions ℋ_1^𝒩𝒩 and ℋ_3, 2q. This implies that ||L_k(Φ, ·)||_2^2 is continuous.
Besides, for any sequence (Φ_n)_n of 𝒩𝒩_m such that Φ_n →Φ in sup-norm, it follows from Lebesgue's dominated convergence theorem (enabled by assumptions ℋ_1^𝒩𝒩 and ℋ_3, 2q) that ||L_k(Φ_n, Q)||_2^2 →||L_k(Φ, Q)||_2^2, which shows that ||L_k(·, Q)||_2^2 is continuous. Therefore the function H_k is continuous and, as already mentioned, this completes the proof.
In the previous proposition, we assumed that the neural networks are continuous and have polynomial growth. This assumption is clearly satisfied when using classic activation functions such as the ReLU function x ∈ℝ↦max(x,0) and the sigmoid function x ∈ℝ↦ 1 / (1 + e^-x).
§.§ Convergence of Monte Carlo approximation
From now on, we fix a positive integer m and focus on the convergence of the value function that arises from the second approximation (<ref>) or (<ref>). Unlike the preceding section, and for technical convenience, we restrict our analysis of the neural network approximation to the bang-bang setting. However, the least squares regression will still be examined in a general context.
§.§.§ Least squares regression
We establish a convergence result under the following Hilbert assumption.
ℋ_5^LS: For all k=0,…,n-1 the sequence (e_i( X_t_k))_i ≥ 1 is a Hilbert basis of 𝕃^2(σ(X_t_k) ).
It is worth noting that this assumption is a special case of assumptions ℋ_1^LS and ℋ_2^LS with an orthonormality assumption on e^m(X_t_k). Furthermore, in the field of mathematical finance, the underlying asset's diffusion is often assumed to have a Gaussian structure. In that case, it is well known that the normalized Hermite polynomials {H_k(x)/√(k!) , k ≥ 0 } serve as a Hilbert basis for 𝕃^2(ℝ, μ), the space of square-integrable functions with respect to the Gaussian measure μ. The Hermite polynomials { H_k(x), k ≥ 0} are defined as follows:
H_k(x) = (-1)^k e^x^2d^k/dx^k[ e^-x^2],
or recursively by
H_k+1(x) = 2x · H_k(x) - 2k · H_k-1(x) with H_0(x) = 1, H_1(x) = 2x.
For a multidimensional setting, Hermite polynomials are obtained as products of one-dimensional Hermite polynomials. Finally, note that assumption ℋ_5^LS entails that A_m^k = A_m, N^k = I_m.
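A small numerical check of this orthonormality is sketched below. The normalization by √(2^k k!) and the choice of sampling X under the Gaussian measure with density e^-x^2/√(π) (i.e. X ∼𝒩(0, 1/2)) are our conventions, chosen so that the recursion above yields an orthonormal family; the empirical Gram matrix A_m, N^k is then close to the identity.

```python
# Sketch: build the Hermite basis from the recursion H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x)
# (so H_2(x) = 4x^2 - 2, H_3(x) = 8x^3 - 12x, ...) and check orthonormality empirically.
# Normalization by sqrt(2^k k!) and the choice X ~ N(0, 1/2) are our conventions.
import numpy as np
from math import factorial, sqrt

def hermite_basis(x, m):
    """Return the (len(x), m) matrix of normalized Hermite polynomials H_0, ..., H_{m-1}."""
    H = np.zeros((x.size, m))
    H[:, 0] = 1.0
    if m > 1:
        H[:, 1] = 2.0 * x
    for k in range(1, m - 1):
        H[:, k + 1] = 2.0 * x * H[:, k] - 2.0 * k * H[:, k - 1]
    norms = np.array([sqrt(2.0**k * factorial(k)) for k in range(m)])
    return H / norms

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0 / sqrt(2.0), size=100_000)   # density proportional to e^{-x^2}
E = hermite_basis(X, 5)
print(np.round(E.T @ E / X.size, 2))                  # empirical Gram matrix, approximately I_5
```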
The main result of this section aims at proving that the second approximation V_k^m, N of the swing value function converges towards the first approximation V_k^m as the Monte Carlo sample size N increases to +∞, with a rate of convergence of order 𝒪(1/√(N)). To achieve this we rely on the following lemma, which concerns a general Monte Carlo rate of convergence.
Consider X_1, …, X_N independent and identically distributed random variables with a finite moment of order p (p ≥ 2), and set μ = 𝔼(X_1). Then, there exists a positive constant B_p (depending only on the order p) such that
||1/N∑_i = 1^N X_i - μ||_p≤ B_p 2^p-1/p(𝔼(|X|^p) + |μ|^p )^1/p/√(N).
It follows from the Marcinkiewicz–Zygmund inequality that there exists a positive constant A_p (depending only on p) such that
||1/N∑_i = 1^N X_i - μ||_p^p = 𝔼((∑_i = 1^NX_i - μ/N)^p)
≤ A_p ·𝔼((1/N^2∑_i = 1^N (X_i - μ)^2 )^p/2)
= A_p/N^p/2·𝔼((1/N∑_i = 1^N (X_i - μ)^2 )^p/2).
Using the convexity of the function x ∈ℝ_+↦ x^p/2 yields,
(1/N∑_i = 1^N (X_i - μ)^2 )^p/2≤1/N∑_i = 1^N |X_i - μ|^p.
Thus taking the expectation and using the inequality, (a+b)^p ≤ 2^p-1(a^p + b^p) yields,
||1/N∑_i = 1^N X_i - μ||_p^p ≤A_p/N^p/2·𝔼(|X_1 - μ|^p ) ≤ A_p ·2^p-1(𝔼(|X_1|^p) + |μ|^p)/N^p/2.
This completes the proof.
In the following proposition, we show that using a Hilbert basis as regression basis achieves a convergence rate of order 𝒪(1/√(N)).
Under assumptions ℋ_3, ∞, ℋ_4, ∞^LS and ℋ_5^LS, for all k=0,…,n-1 and for any s > 1, we have
Q ∈𝒯_ksup|| V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q ) ||_s = 𝒪(1/√(N)) as N → +∞.
We prove this proposition using a backward induction on k. Since V^m, N_n-1( X_t_n-1, ·) = V^m_n-1(X_t_n-1, ·) on 𝒯_n-1, the proposition holds for k = n-1. Assume now that the proposition holds for k+1. Using the inequality |i ∈ Isup a_i - i ∈ Isup b_i| ≤i ∈ Isup |a_i-b_i| and then the Cauchy–Schwarz inequality, we get,
|V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q) | ≤_q ∈ Adm(t_k, Q)|⟨θ_k+1,m,N(Q+q) - θ_k+1,m(Q+q), e^m(X_t_k)⟩|
≤|e^m(X_t_k) | ·_q ∈ Adm(t_k, Q)|θ_k+1,m,N(Q+q) - θ_k+1,m(Q+q) |
≤|e^m(X_t_k) | ·_q ∈𝒰_k(Q)|θ_k+1,m,N(Q + q) - θ_k+1,m(Q + q) |,
where 𝒰_k(Q) is the set of all ℱ_t_k+1^X-measurable random variables lying within ℐ_k+1(Q) (see (<ref>)). The last inequality is due to the fact that ℱ_t_k^X ⊂ℱ_t_k+1^X. Then for some constants b, c > 1 such that 1/b + 1/c = 1, it follows from Hölder inequality that,
||V^m, N_k(X_t_k, Q) - V^m_k(X_t_k, Q) ||_s ≤|| |e^m(X_t_k) | ||_sb·|| _q ∈𝒰_k(Q)|θ_k+1,m,N(Q + q) - θ_k+1,m(Q + q) | ||_sc.
To interchange the expectation and the essential supremum, we rely on the bifurcation property. Let q_1, q_2 ∈𝒰_k(Q) and denote by
q^* = q_1 ·1_{B_k(Q, q_1) ≥ B_k(Q, q_2)} + q_2 ·1_{B_k(Q, q_1) < B_k(Q, q_2)}
where B_k(Q, q_i) = |θ_k+1,m,N(Q+ q_i) - θ_k+1,m(Q+ q_i)|^sc for i ∈{1,2}. One can easily check that for all i ∈{1,2}, B_k(Q, q_i) is ℱ_t_k+1^X-measurable, so that q^*∈𝒰_k(Q). We also have B_k(Q, q^*) = max(B_k(Q, q_1), B_k(Q, q_2) ). Thus, using the bifurcation property in (<ref>), we get,
||V^m, N_k(X_t_k, Q) - V^m_k(X_t_k, Q) ||_s ≤|| |e^m(X_t_k)| ||_sb·q ∈𝒰_k(Q)sup|||θ_k+1,m,N(Q + q) - θ_k+1,m(Q + q) | ||_sc
≤|| |e^m(X_t_k) | ||_sb·Q ∈𝒯_k+1sup|||θ_k+1,m,N(Q) - θ_k+1,m(Q) | ||_sc.
But for any Q ∈𝒯_k+1, it follows from Minkowski's inequality that,
|||θ_k+1,m,N(Q) - θ_k+1,m(Q) | ||_sc = || | 1/N∑_p = 1^N e^m(X_t_k^[p]) · V^m, N_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc
≤|||1/N∑_p = 1^N e^m(X_t_k^[p]) ·(V^m, N_k+1(X_t_k+1^[p], Q) - V^m_k+1(X_t_k+1^[p], Q)) | ||_sc
+ |||1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1( X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc
≤||| e^m(X_t_k)| · |V^m, N_k+1(X_t_k+1, Q) - V^m_k+1(X_t_k+1, Q) | ||_sc
+ |||1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1( X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc,
where the last inequality comes from the fact that, for all p ≥ 1, (X_t_k^[p],X_t_k+1^[p]) has the same distribution with (X_t_k,X_t_k+1). Therefore, for some constants u, v > 1 such that 1/u + 1/v = 1, it follows from Hölder inequality,
|| |θ_k+1,m,N(Q) - θ_k+1,m(Q) | ||_sc ≤|||e^m(X_t_k)| ||_scu·||V^m, N_k+1(X_t_k+1, Q) - V^m_k+1(X_t_k+1, Q) ||_scv
+ |||1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1( X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc.
Taking the supremum in the previous inequality and plugging it into equation (<ref>) yields,
Q ∈𝒯_ksup||V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q) ||_s
≤|| |e^m(X_t_k) | ||_sb·||| e^m(X_t_k)| ||_scu·Q ∈𝒯_k+1sup||V^m, N_k+1(X_t_k+1, Q) - V^m_k+1(X_t_k+1, Q) ||_scv
+ || |e^m(X_t_k) | ||_sb·Q ∈𝒯_k+1sup|||1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc.
Under assumption ℋ_4, ∞^LS and using the induction assumption, the first term in the sum of the right hand side converges to 0 as N → +∞ with a rate of order 𝒪(1/√(N)). Once again using assumption ℋ_4, ∞^LS, it remains to prove that this is also the case for the second term. But we have,
C_N(Q) := |||1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1( X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc
= ||∑_j = 1^m(1/N∑_p = 1^N e_j(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e_j(X_t_k)V^m_k+1(X_t_k+1, Q)) )^2 ||_sc/2^1/2
≤∑_j = 1^m||1/N∑_p = 1^N e_j(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e_j(X_t_k)V^m_k+1(X_t_k+1, Q)) ||_sc
≤A_sc/√(N)·∑_j = 1^m{𝔼(|e_j(X_t_k)V^m_k+1(X_t_k+1, Q)|^sc) + |𝔼(e_j(X_t_k)V^m_k+1(X_t_k+1, Q))|^sc},
where the second-last inequality comes from Minkowski's inequality and the inequality √(x+y)≤√(x) + √(y) for all x, y ≥ 0. The last inequality is obtained using Lemma <ref> (with a positive constant A_sc depending only on s and c). But using the continuity (which holds as noticed in Remark <ref>) of both functions Q ↦𝔼(|e_j(X_t_k)V^m_k+1(X_t_k+1, Q)|^sc) and Q ↦|𝔼(e_j(X_t_k)V^m_k+1( X_t_k+1, Q))|^sc on the compact set 𝒯_k+1, one may deduce that, as N → +∞,
Q ∈𝒯_k+1sup C_N(Q) = 𝒪(1/√(N)).
This completes the proof.
It is worth noting that it is difficult to obtain an almost sure convergence result without further assumptions (for example, a boundedness assumption) on the regression functions. The preceding proposition relies heavily on Hölder's inequality, which explains why we have chosen the 𝕃^s(ℙ)-norm. However, in the neural network analysis that follows, we prove an almost sure convergence result.
§.§.§ Neural network approximation
We consider the discrete setting with integer volume constraints with a state of attainable cumulative consumptions given by (<ref>). Results in this section will be mainly based on Lemmas <ref> and <ref> stated below. Let (f_n)_n be a sequence of real functions defined on a compact set K ⊂ℝ^d. Define,
v_n = x ∈ Kinf f_n(x) and x_n ∈_x ∈ K f_n(x).
Then, we have the following two Lemmas.
Assume that the sequence (f_n)_n converges uniformly on K to a continuous function f. Let v^* = x ∈ Kinf f(x) and 𝒮^* = _x ∈ K f(x). Then v_n → v^* and the distance d(x_n, 𝒮^*) between the minimizer x_n and the set 𝒮^* converges to 0 as n → +∞.
Let (ξ_i)_i ≥ 1 be a sequence of i.i.d. ℝ^m-valued random vectors and h : ℝ^d ×ℝ^m →ℝ a measurable function. Assume that,
* a.s., θ∈ℝ^d ↦ h(θ, ξ_1) is continuous,
* For all C > 0, 𝔼( |θ| ≤ Csup| h(θ, ξ_1) | ) < +∞.
Then, a.s. θ∈ℝ^d ↦1/N∑_i = 1^N h(θ, ξ_i) converges locally uniformly to the continuous function θ∈ℝ^d ↦𝔼(h(θ, ξ_1) ), i.e.
lim_N → +∞|θ| ≤ Csup| 1/N∑_i = 1^N h(θ, ξ_i) - 𝔼(h(θ, ξ_1) ) | = 0 a.s.
Combining the two preceding lemmas is the main tool to analyze the Monte Carlo convergence of the neural network approximation. The result is stated below and requires the following (additional) assumption.
ℋ_3^𝒩𝒩: For any m ≥ 2, 0 ≤ k ≤ n-1, Q ∈𝒯_k and θ^1, θ^2 ∈𝒮_k^m(Q) (defined in (<ref>)), Φ_m(·; θ^1) = Φ_m(·; θ^2).
This assumption just states that, almost surely, two minimizers bring the same value.
Before showing the main result of this section, it is worth noting this important remark.
* Under assumptions ℋ_1^𝒩𝒩 and ℋ_3, q and using a straightforward backward induction in equation (<ref>), it can be shown that there exists a random variable G_k ∈𝕃^q_ℝ^d(ℙ) (independent of Q) such that |V_k^m(X_t_k, Q) | ≤ G_k for any Q ∈𝒯_k; where V_k^m is defined in (<ref>).
* Under assumption ℋ_1^𝒩𝒩, there exists a positive constant κ_m such that, for any 0 ≤ k ≤ n-1 and any Q ∈𝒯_k,
max(|V_k^m(X_t_k, Q)|, |V_k^m, N(X_t_k, Q)| ) ≤ q_max·|S_t_k - K| + κ_m ·(1 + |X_t_k|^q ).
If in addition, assumption ℋ_3, q holds true, then the right hand side of the last inequality is an integrable random variable.
We now state our result of interest.
Let m ≥ 2. Under assumptions ℋ_1^𝒩𝒩, ℋ_2^𝒩𝒩, ℋ_3^𝒩𝒩 and ℋ_3, 2q, for any 0 ≤ k ≤ n-1, we have,
lim_N → +∞Q ∈𝒯_ksup| V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q ) |= 0 a.s.
Note that in ℋ_3, 2q, the parameter q is the one involved in assumption ℋ_1^𝒩𝒩. Recall that the set 𝒯_k is that of the discrete setting discussed in (<ref>).
We proceed by a backward induction on k. The proposition clearly holds true for k = n-1 since, almost surely, V_n-1^m, N(X_t_n-1, ·) = V_n-1^m(X_t_n-1, ·) on 𝒯_n-1. Assume now the proposition holds true for k+1. Let Q ∈𝒯_k. Using the inequality, |i ∈ Isup a_i - i ∈ Isup b_i| ≤i ∈ Isup |a_i-b_i| and then triangle inequality, we get,
| V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q ) | ≤_q ∈ Adm(t_k, Q)|Φ_m(X_t_k; θ_k, m, N(Q+q) ) - Φ_m(X_t_k; θ̂_k, m, N(Q+q) ) |
+ _q ∈ Adm(t_k, Q)|Φ_m(X_t_k; θ̂_k, m, N(Q+q) ) - Φ_m(X_t_k; θ_k, m(Q+q) ) |,
where θ̂_k, m, N(Q) denotes an auxiliary estimator (built with V^m_k+1 in place of V^m, N_k+1) lying within the following set,
_θ∈Θ_m1/N∑_p = 1^N|V^m_k+1(X_t_k+1^[p], Q ) - Φ_m(X_t_k^[p]; θ) |^2.
Then taking the supremum in (<ref>) and using triangle inequality, we get,
Q ∈𝒯_ksup| V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q ) | ≤Q ∈𝒯_k+1sup|Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) |
+
2 ·Q ∈𝒯_k+1sup|Φ_m(X_t_k; θ̂_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) |.
We will handle the right hand side of the last inequality term by term. Let us start with the second term. Note that owing to assumption ℋ_2^𝒩𝒩, the function
θ∈Θ_m ↦ V^m_k+1(X_t_k+1, Q ) - Φ_m(X_t_k; θ)
is almost surely continuous. Moreover, for any C >0, using the inequality (a+b)^2 ≤ 2(a^2 + b^2) and assumption ℋ_1^𝒩𝒩, there exists a positive constant κ_m such that for any Q ∈𝒯_k+1,
𝔼(|θ| ≤ Csup| V^m_k+1(X_t_k+1, Q ) - Φ_m(X_t_k; θ) |^2 ) ≤ 2 ·𝔼(| V^m_k+1(X_t_k+1, Q ) |^2 ) + 2 ·|θ| ≤ Csup𝔼(|Φ_m(X_t_k; θ) |^2 )
≤ 2 ·𝔼(| V^m_k+1(X_t_k+1, Q ) |^2 ) + 2κ_m (1 + 𝔼|X_t_k|^2q)
and the right hand side of the last inequality is finite under assumption ℋ_3, 2q, keeping in mind point <ref> of Remark <ref>. Thus thanks to Lemma <ref>, almost surely, we have the uniform convergence on Θ_m,
lim_N → +∞θ∈Θ_msup| 1/N∑_p = 1^N|V^m_k+1(X_t_k+1^[p], Q ) - Φ_m(X_t_k^[p]; θ) |^2 - || V^m_k+1(X_t_k+1, Q ) - Φ_m(X_t_k; θ) ||_2^2 | = 0.
Thus, for any Q ∈𝒯_k+1, Lemma <ref> implies that lim_N → +∞ d(θ̂_k, m, N(Q), 𝒮_k^m(Q) ) = 0. We restrict ourselves to a subset with probability one of the original probability space on which this convergence holds and the random functions Φ_m(X_t_k; ·) are uniformly continuous (see assumption ℋ_2^𝒩𝒩). Then, there exists a sequence (α_k, m, N(Q))_N lying within 𝒮_k^m(Q) such that,
lim_N → +∞|θ̂_k, m, N(Q) - α_k, m, N(Q) | = 0.
Thus, the uniform continuity of the functions Φ_m(X_t_k; ·) combined with assumption ℋ_3^𝒩𝒩 yield,
|Φ_m(X_t_k; θ̂_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) | = |Φ_m(X_t_k; θ̂_k, m, N(Q) ) - Φ_m(X_t_k; α_k, m, N(Q) ) | → 0 as N → +∞.
Furthermore, since the set 𝒯_k+1 has finite cardinality (discrete setting), we have
lim_N → +∞Q ∈𝒯_k+1sup|Φ_m(X_t_k; θ̂_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) | = 0.
It remains to handle the first term in the right hand side of inequality (<ref>). Note that, if the following uniform convergence,
lim_N → +∞θ∈Θ_msup| 1/N∑_p = 1^N|V^m, N_k+1(X_t_k+1^[p], Q ) - Φ_m(X_t_k^[p]; θ) |^2 - 1/N∑_p = 1^N|V^m_k+1(X_t_k+1^[p], Q ) - Φ_m(X_t_k^[p]; θ) |^2 |_:=|Δ_k, m, N^Q(θ)|= 0
holds true, then the latter uniform convergence will entail the following one owing to the uniform convergence (<ref>),
lim_N → +∞θ∈Θ_msup| 1/N∑_p = 1^N|V^m, N_k+1(X_t_k+1^[p], Q ) - Φ_m(X_t_k^[p]; θ) |^2 - || V^m_k+1(X_t_k+1, Q ) - Φ_m(X_t_k; θ) ||_2^2 | = 0
and the desired result follows. To achieve this, we start by proving the uniform convergence (<ref>). Then we show how its implication (<ref>) entails the desired result.
Using triangle inequality and the elementary identity, a^2 - b^2 = (a-b)(a+b), we have,
|Δ_k, m, N^Q(θ)| ≤1/N∑_p = 1^N|V^m, N_k+1(X_t_k+1^[p], Q ) + V^m_k+1(X_t_k+1^[p], Q ) - 2 ·Φ_m(X_t_k^[p]; θ) | ·| V^m, N_k+1(X_t_k+1^[p], Q ) - V^m_k+1(X_t_k+1^[p], Q ) |
≤2/N∑_p = 1^N(q_max|S_t_k+1^[p] - K| + κ_m (1 + |X_t_k+1^[p]|^q) + κ_m (1 + |X_t_k^[p]|^q) ) ·| V^m, N_k+1(X_t_k+1^[p], Q ) - V^m_k+1(X_t_k+1^[p], Q ) |
where in the last inequality we used assumption ℋ_1^𝒩𝒩 and the point <ref> of Remark <ref>. Let ε > 0. Then using the induction assumption and the law of large numbers, we get,
lim sup_Nθ∈Θ_msup|Δ_k, m, N^Q(θ)| ≤ 2ε·𝔼( q_max|S_t_k+1 - K| + κ_m (1 + |X_t_k+1|^q) + κ_m (1 + |X_t_k|^q) ).
Hence letting ε→ 0 entails the result (<ref>). Therefore, as already mentioned, the result (<ref>) also holds true. Thus, using Lemma <ref>, we get that lim_N → +∞ d(θ_k, m, N(Q), 𝒮_k^m(Q) ) = 0. We restrict ourselves to a subset with probability one of the original probability space on which this convergence holds and the random functions Φ_m(X_t_k; ·) are uniformly continuous (see assumption ℋ_2^𝒩𝒩). Whence, for any Q ∈𝒯_k+1, there exists a sequence (β_k, m, N(Q))_N lying within 𝒮_k^m(Q) such that,
lim_N → +∞|θ_k, m, N(Q) - β_k, m, N(Q) | = 0.
Thus, the uniform continuity of functions Φ_m(X_t_k; ·) combined with assumption ℋ_3^𝒩𝒩 yield,
|Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) | = |Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; β_k, m, N(Q) ) | → 0 as N → +∞.
Then, since the set 𝒯_k+1 has finite cardinality (discrete setting), we have
lim_N → +∞Q ∈𝒯_k+1sup|Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) | = 0.
Combining equations (<ref>) and (<ref>) in equation (<ref>) yields the desired result.
§.§ Deviation inequalities: the least squares setting
To end this paper, we present some additional results related to the least squares approximation. These results focus on some deviation inequalities on the error between estimates (<ref>), (<ref>) and the swing actual value function (<ref>). We no longer consider the Hilbert assumption ℋ_5^LS. Let us start with the first proposition of this section.
Let δ > 0 and k = 0, …, n-2. Under assumptions ℋ_3, ∞ and ℋ_4, ∞^LS, for all s ≥ 2, there exists a positive constant D_s, k, m such that,
ℙ(_Q ∈𝒬_k|1/N∑_p = 1^N e^m(X_t_k^[p]) V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1( X_t_k+1, Q) )| ≥δ) ≤D_s, k, m/δ^s N^s/2
where 𝒬_k is the set of all ℱ_t_k^X-measurable random variables lying within 𝒯_k+1.
Note that 𝒬_k ⊂𝒬_k'; with the latter set being the set of all ℱ_t_k+1^X-measurable random variables lying within 𝒯_k+1. Then we have,
ℙ(_Q ∈𝒬_k|1/N∑_p = 1^N e^m(X_t_k^[p]) V^m_k+1( X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )| ≥δ)
≤ℙ(_Q ∈𝒬_k'|1/N∑_p = 1^N e^m(X_t_k^[p]) V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )| ≥δ)
≤ A_s Q ∈𝒬_k'sup{𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) |^s ) + 𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) | )^s }/N^s/2·δ^s
≤ 2A_s Q ∈𝒬_k'sup𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) |^s )/N^s/2·δ^s
where in the second-last inequality, we successively used Markov's inequality, the bifurcation property and Lemma <ref> (enabled by assumptions ℋ_3, ∞ and ℋ_4, ∞^LS) with A_s = B_s^s · 2^s-1 and B_s being a positive constant which depends only on s. To obtain the last inequality, we used Jensen's inequality. Besides, following the definition of 𝒬_k' we have,
Q ∈𝒬_k'sup𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) |^s ) ≤Q ∈𝒯_k+1sup𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) |^s ).
Then owing to Remark <ref>, the right hand side of the last inequality is a supremum of a continuous function over a compact set; thus finite. Hence it suffices to set,
D_s, k, m := 2A_s ·Q ∈𝒯_k+1sup𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) |^s ) < + ∞.
Which completes the proof.
In the following proposition, we state a deviation inequality connecting the estimates of the orthogonal projection coordinates involved in the least squares regression.
Consider assumptions ℋ_3, ∞ and ℋ_4, ∞^LS. For all k=0, …, n-2, δ > 0 and s ≥ 2 there exists a positive constant C_s, k, m such that,
ℙ(_Q ∈𝒬_k|θ_k, m, N(Q) - θ_k, m(Q) | ≥δ) ≤C_s, k, m/b(s, δ) · N^s/2
where b(s, δ) = δ ^s if δ∈ (0,1] else b(s, δ) = δ ^s/2.
We proceed by a backward induction on k. Recall that, for any Q ∈𝒯_n-1, V_n-1^m, N(·, Q) = V_n-1^m(·, Q). Thus, it follows from triangle inequality,
|θ_n-2, m, N(Q) - θ_n-2, m(Q) | = | (A_m, N^n-2)^-11/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - (A_m^n-2)^-1𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q) )|
≤|(A_m, N^n-2)^-1(1/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - 𝔼(e^m(X_t_n-2) V^m_n-1( X_t_n-1, Q) ) ) |
+ |((A_m, N^n-2)^-1 - (A_m^n-2)^-1) ·𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q) ) |
= |(A_m, N^n-2)^-1(1/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - 𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q)) ) |
+ |((A_m^n-2)^-1(A_m^n-2 - A_m, N^n-2)(A_m, N^n-2)^-1) 𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q) ) |
where in the last equality we used the matrix identity A^-1 - B^-1 = B^-1 (B-A) A^-1 for all non-singular matrices A, B. Hence taking the essential supremum and keeping in mind that the matrix norm |·| is submultiplicative yields,
_Q ∈𝒬_n-2|θ_n-2, m, N(Q) - θ_n-2, m(Q) |
≤|(A_m, N^n-2)^-1| ·_Q ∈𝒬_n-2|1/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - 𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q)) |
+ C_n-2·|(A_m^n-2)^-1(A_m^n-2 - A_m, N^n-2)(A_m, N^n-2)^-1|
where C_n-2 := Q ∈𝒯_n-1sup|𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q) )| < +∞. For any ε > 0 and k = 0, …, n-2, denote by Ω_k^ε := {|A_m, N^k - A_m^k| ≤ε}. Then one may choose ε such that |(A_m, N^k)^-1| ≤ 2 |(A_m^k)^-1| on Ω_k^ε. Thus there exists positive constants K_1, K_2 such that on Ω_n-2^ε,
_Q ∈𝒬_n-2|θ_n-2, m, N(Q) - θ_n-2, m(Q) |
≤ K_1 ·_Q ∈𝒬_n-2| 1/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - 𝔼(e^m(X_t_n-2) V^m_n-1( X_t_n-1, Q) ) | + K_2 ·ε.
Therefore, the law of total probability yields,
ℙ(_Q ∈𝒬_n-2|θ_n-2, m, N(Q) - θ_n-2, m(Q) | ≥δ)
≤ℙ(_Q ∈𝒬_n-2|1/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - 𝔼(e^m(X_t_n-2) V^m_n-1( X_t_n-1, Q) ) | ≥δ - K_2 ·ε/K_1) + ℙ((Ω_n-2^ε)^c)
≤D_s, n-2, m/(δ - K_2 ·ε)^s N^s/2 + U_n-2, m/ε^s N^s/2
where the bound for the first probability in the second-last line comes from Proposition <ref>, the constant D_s, n-2, m embedding the constant K_1. The bound for ℙ((Ω_n-2^ε)^c) is straightforward, using successively Markov's inequality and Lemma <ref>. Then, choosing ε = ρδ for some ρ > 0 sufficiently small yields,
ℙ(_Q ∈𝒬_n-2|θ_n-2, m, N(Q) - θ_n-2, m(Q) | ≥δ) ≤C_s, n-2, m/δ^s N^s/2≤{[ C_s, n-2, m/δ ^s N^s/2ifδ∈ (0, 1],; C_s, n-2, m/δ ^s/2 N^s/2else ]..
for some positive constant C_s, n-2, m. Now let us assume that the proposition holds for k+1 and show that it also holds for k. For any Q ∈𝒯_k+1, it follows from the triangle inequality that,
|θ_k, m, N(Q) - θ_k, m(Q) | ≤|(A_m, N^k)^-1| ·|1/N∑_p = 1^N e^m(X_t_k^[p]) (V^m, N_k+1( X_t_k+1^[p], Q) - V^m_k+1(X_t_k+1^[p], Q)) |
+|(A_m, N^k)^-1| ·|1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )|
+|(A_m^k)^-1(A_m^k - A_m, N^k)(A_m, N^k)^-1·𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q)) |
≤|(A_m, N^k)^-1| ·1/N∑_p = 1^N|e^m(X_t_k^[p])| ·|V^m, N_k+1(X_t_k+1^[p], Q) - V^m_k+1(X_t_k+1^[p], Q) |
+ |(A_m, N^k)^-1| ·|1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )|
+ |(A_m^k)^-1(A_m^k - A_m, N^k)(A_m, N^k)^-1·𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) ) |.
But for all 1 ≤ p ≤ N, Cauchy-Schwartz inequality yields,
|V_k+1^m, N(X_t_k+1^[p], Q) - V_k+1^m(X_t_k+1^[p], Q) | ≤_q ∈ Adm(t_k+1, Q)|⟨θ_k+1, m, N(Q+q) - θ_k+1, m(Q+q), e^m(X_t_k+1^[p]) ⟩|
≤|e^m(X_t_k+1^[p]) | ·_q ∈ Adm(t_k+1, Q)|θ_k+1, m, N(Q+q) - θ_k+1, m(Q+q)|.
Thus,
|θ_k, m, N(Q) - θ_k, m(Q) | ≤( |(A_m, N^k)^-1|/N∑_p = 1^N|e^m(X_t_k^[p])| ·|e^m(X_t_k+1^[p]) | ) _q ∈ Adm(t_k+1, Q)|θ_k+1, m, N(Q+q) - θ_k+1, m(Q+q)|
+ |(A_m, N^k)^-1| ·|1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )|
+ |(A_m^k)^-1(A_m^k - A_m, N^k)(A_m, N^k)^-1·𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) ) |.
Therefore, on Ω_k^ε, there exists some positive constants K_1, K_2, K_3 such that,
_Q ∈𝒬_k|θ_k, m, N(Q) - θ_k, m(Q) |
≤ K_1(1/N∑_p = 1^N|e^m(X_t_k^[p])| ·|e^m(X_t_k+1^[p]) | )_I_N^1_Q ∈𝒬_k+1|θ_k+1, m, N(Q) - θ_k+1, m(Q)|_I_N^2
+ K_2 ·_Q ∈𝒬_k+1|1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )|_I_N^3+ K_3 ·ε
where to obtain the coefficient K_3 in the last inequality, we used the fact that,
_Q ∈𝒬_k+1𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) ) ≤Q ∈𝒯_k+1sup𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) ) < + ∞.
The term I_N^3 can be handled using Proposition <ref>. Then, it suffices to prove that,
ℙ( I_N^1 · I_N^2 ≥δ) ≤K/b(s, δ) · N^s/2
for some positive constant K. But we have,
ℙ( I_N^1 · I_N^2≥δ) = 1 - ℙ( I_N^1 · I_N^2 ≤δ) ≤ 1 - ℙ( I_N^1 ≤√(δ); I_N^2 ≤√(δ)) ≤ℙ( I_N^1 ≥√(δ)) + ℙ(I_N^2 ≥√(δ)).
Moreover, by the induction assumption, we know that there exists a positive constant B_s, k, m such that,
ℙ(I_N^2 ≥√(δ)) ≤B_s, k, m/δ ^s/2 N^s/2≤{[ B_s, k, m/δ ^s N^s/2ifδ∈ (0, 1],; B_s, k, m/δ ^s/2 N^s/2otherwise. ].
In addition, it follows from Markov's inequality and Lemma <ref> that there exists a positive constant M_s, k, m such that
ℙ( I_N^1 ≥√(δ)) ≤M_s, k, m/δ^s N^s/2≤{[ M_s, k, m/δ ^s N^s/2ifδ∈ (0, 1],; M_s, k, m/δ ^s/2 N^s/2otherwise. ].
Hence, there exists a positive constant C_s, k, m such that,
ℙ( I_N^1 · I_N^2 ≥δ) ≤{[ C_s, k, m/δ ^s N^s/2ifδ∈ (0, 1],; C_s, k, m/δ ^s/2 N^s/2otherwise ].
and this completes the proof.
We now state the last result of this paper concerning a deviation inequality involving the actual swing value function.
Consider assumptions ℋ_3, ∞ and ℋ_4, ∞^LS. For all k=0, …, n-2, δ > 0 and s ≥ 2 there exists a positive constant C_s, k, m such that,
ℙ(_Q ∈𝒬_k|V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ) ≤C_s, k, m/b(s, δ) · N^s/2.
Using the inequality |i ∈ Isup a_i - i ∈ Isup b_i| ≤i ∈ Isup |a_i-b_i| and then the Cauchy–Schwarz inequality, we have,
_Q ∈𝒬_k| V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≤|e^m(X_t_k) | ·_Q ∈𝒬_k+1| θ_k + 1, m, N(Q) - θ_k+1, m(Q) |.
Thus, using the same argument as in (<ref>), we get,
ℙ( _Q ∈𝒬_k| V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ)
≤ℙ( |e^m(X_t_k) | ·_Q ∈𝒬_k+1| θ_k + 1, m, N(Q) - θ_k+1, m(Q) | ≥δ)
≤ℙ( |e^m(X_t_k) | ≥√(δ)) + ℙ( _Q ∈𝒬_k+1| θ_k + 1, m, N(Q) - θ_k+1, m(Q) | ≥√(δ))
≤K_s, k, m^1/δ^s/2· N^s/2 + K_s, k, m^2/b(s, δ) · N^s/2≤{[ K_s, k, m/δ ^s N^s/2ifδ∈ (0, 1]; K_s, k, m/δ ^s/2 N^s/2otherwise ].
for some positive constant K_s, k, m, where the constant K_s, k, m^1 comes from Markov's inequality (enabled by assumption ℋ_4, ∞^LS). The existence of the positive constant K_s, k, m^2 results from Proposition <ref> (enabled by assumptions ℋ_3, ∞ and ℋ_4, ∞^LS). The coefficient b(s, δ) is defined in Proposition <ref>. This completes the proof.
The preceding proposition entails the following result as a straightforward corollary. For all k = 0, …, n-1 and for any Q ∈𝒯_k, we have,
ℙ( |V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ) ≤C_s, k, m/b(s, δ) · N^s/2.
If we assume that m ≥ 1sup C_s, k, m < +∞, then for any s ≥ 2, we have the following uniform convergence,
lim_N → +∞m ≥ 1supQ ∈𝒯_ksupℙ( |V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ) = 0.
But it follows from triangle inequality that,
ℙ( |V_k^m, N(X_t_k, Q) - V_k(X_t_k, Q) | ≥δ)
= 1 - ℙ( |V_k^m, N(X_t_k, Q) - V_k(X_t_k, Q) | ≤δ)
≤ 1 - ℙ({|V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≤δ/2 }∩{|V_k^m(X_t_k, Q) - V_k(X_t_k, Q) | ≤δ/2})
≤ℙ( |V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ/2 ) + ℙ( |V_k^m(X_t_k, Q) - V_k(X_t_k, Q) | ≥δ/2)
≤ℙ( |V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ/2 ) + 4 ·||V_k^m(X_t_k, Q) - V_k(X_t_k, Q) | |_2^2/δ^2,
where in the last inequality, we used Markov inequality. Then using Proposition <ref> and result (<ref>) yields,
lim_m → +∞lim_N → +∞Q ∈𝒯_ksupℙ( |V_k^m, N(X_t_k, Q) - V_k(X_t_k, Q) | ≥δ) = 0.
The latter result implies that for a well-chosen and sufficiently large regression basis, the limit,
lim_N → +∞Q ∈𝒯_ksupℙ( |V_k^m, N(X_t_k, Q) - V_k(X_t_k, Q) | ≥δ)
may be made arbitrarily small, ensuring in some sense the theoretical effectiveness of the least squares procedure in the context of swing pricing.
§ ACKNOWLEDGMENTS
The author would like to thank Gilles Pagès and Vincent Lemaire for fruitful discussions. The author would also like to express his gratitude to Engie Global Markets for funding his PhD thesis.
§ APPENDIX
§.§ Some useful results
We present some material used in this paper. The following lemma allows us to show the continuity of the supremum of a continuous function when the supremum is taken over a set depending on the variable of interest.
Consider a continuous function f : ℝ→ℝ and let A and B be two non-increasing and continuous real-valued functions defined on ℝ such that for all Q ∈ℝ, A(Q) ≤ B(Q). Then the function
g: Q ∈ℝ↦q ∈ [A(Q), B(Q)]sup f(q)
is continuous.
To prove this lemma, we show that the function g is both left- and right-continuous. Let us start with the right-continuity. Let Q ∈ℝ and let h be a positive real number. Since A and B are non-increasing functions, two cases can be distinguished
A(Q + h) ≤ A(Q) ≤ B(Q + h) ≤ B(Q).
Using the definition of g, we have,
g(Q + h) = max(q ∈ [A(Q + h), A(Q)]sup f(q), q ∈ [A(Q), B(Q + h)]sup f(q) ).
Since f is continuous on the compact set [A(Q + h), A(Q)], it attains its maximum at a point α(Q, h) ∈ [A(Q + h), A(Q)]. Owing to the squeeze theorem, the latter implies that lim_h → 0α(Q, h) = A(Q) since A is a continuous function. Thus it follows from the continuity of f
lim_h → 0^+q ∈ [A(Q + h), A(Q)]sup f(q) = lim_h → 0^+ f(α(Q, h)) = f(A(Q)).
Moreover, since B(Q + h) ≤ B(Q), we have q ∈ [A(Q), B(Q + h)]sup f(q) ≤q ∈ [A(Q), B(Q)]sup f(q) = g(Q). Thus, by the continuity of the maximum function, taking the limit in (<ref>) yields
lim_h → 0^+ g(Q + h) ≤lim_h → 0^+max(q ∈ [A(Q + h), A(Q)]sup f(q), g(Q)) = max(lim_h → 0^+q ∈ [A(Q + h), A(Q)]sup f(q), g(Q)) = max(f(A(Q)), g(Q)) ≤ g(Q).
It remains to prove that lim_h → 0^+ g(Q + h) ≥ g(Q) to get the right-continuity. But since A(Q + h) ≤ A(Q),
g(Q) ≤q ∈ [A(Q + h), B(Q)]sup f(q) = max(g(Q+h), q ∈ [B(Q + h), B(Q)]sup f(q) ).
As above, using the continuity of f on the compact set [B(Q + h), B(Q)] yields
lim_h → 0^+q ∈ [B(Q + h), B(Q)]sup f(q) = f(B(Q)).
Therefore taking the limit in (<ref>) yields
g(Q) ≤max(lim_h → 0^+ g(Q+h), f(B(Q)) ) = max(lim_h → 0^+ g(Q+h), lim_h → 0^+ f(B(Q + h)) ) ≤lim_h → 0^+ g(Q+h).
where in the last inequality we used the fact that, f(B(Q + h)) ≤ g(Q+h). This gives the right-continuity in this first case. Let us consider the second case.
A(Q + h) ≤ B(Q + h) ≤ A(Q) ≤ B(Q)
Since B(Q+h) ≤ A(Q), it follows from the definition of g that,
lim_h → 0^+ g(Q+h) ≤max(lim_h → 0^+q ∈ [A(Q + h), A(Q)]sup f(q), g(Q) ) = max(f(A(Q)), g(Q) ) = g(Q).
where we used as above the continuity of f on the compact set [A(Q + h), A(Q)]. Moreover, notice that
g(Q) ≤q ∈ [A(Q + h), B(Q)]sup f(q) = max(g(Q+h), q ∈ [B(Q + h), B(Q)]sup f(q) )
Then, taking the limit in the last inequality yields,
g(Q) ≤max(lim_h → 0^+ g(Q+h), lim_h → 0^+q ∈ [B(Q + h), B(Q)]sup f(q) ) = max(lim_h → 0^+ g(Q+h), f(B(Q)) ) = max(lim_h → 0^+ g(Q+h), lim_h → 0^+ f(B(Q+h)) ) ≤lim_h → 0^+ g(Q+h).
Thus, from equations (<ref>) and (<ref>) one may deduce that lim_h → 0^+ g(Q+h) = g(Q), so that g is right-continuous. The left-continuity can be handled in the same way: we start with a negative real number h, consider the two cases A(Q) ≤ A(Q+h) ≤ B(Q) ≤ B(Q+h) and A(Q) ≤ B(Q) ≤ A(Q+h) ≤ B(Q+h), and proceed as for the right-continuity, which gives lim_h → 0^- g(Q+h) = g(Q). Therefore g is a continuous function on ℝ.
The following theorem also concerns the continuity of function in a parametric optimization.
If X, Y are topological spaces and Y is compact, then for any continuous function f : X × Y →ℝ, the function g(x) := y ∈ Yinf f(x,y) is well-defined and continuous.
Note that g(x) > -∞ since for any fixed x∈ X, f(x,·):Y→ℝ is a continuous function defined on a compact space, and hence the infimum is attained. Then, using that the sets (-∞,a) and (b,∞) form a subbase for the topology of ℝ, it suffices to check that g^-1((-∞,a)) and g^-1((b,∞)) are open. Let π_X be the canonical projection π_X:X× Y→ X, which we recall is continuous and open. It is easy to see that g^-1((-∞,a)) = π_X( f^-1((-∞,a))). Thus, since f is continuous and π_X is an open map, g^-1((-∞,a)) is open.
We now need to show that g^-1((b,∞)) is open. We rely on the compactness of Y. Observe that,
g(x) > b ⟺ f(x,y) > b ∀ y ⟺∀ y, (x,y) ∈ f^-1((b,∞)).
Since f is continuous, f^-1((b,∞)) is open. The latter implies that for all x∈ g^-1((b,∞)) and for all y∈ Y there exists a box neighborhood U_(x,y)× V_(x,y) contained in f^-1((b,∞)). Now, using the compactness of Y, finitely many of these boxes, say those attached to (x,y_1), …, (x,y_k), cover {x}× Y and we get,
{x}× Y ⊂( ∩_i = 1^k U_(x,y_i))× Y ⊂ f^-1((b,∞))
and hence g^-1((b,∞)) = ∪_x∈ g^-1((b,∞))∩_i = 1^k(x) U_(x,y_i) is open, which completes the proof.
[Gram determinant]
Let F be a linear subspace of dimension n of a pre-Hilbert space E. Consider a basis (x_1, …, x_n) of F and let x ∈ E. Let p(x) denote the orthogonal projection of x onto F. Then,
G(x, x_1, …, x_n) = || x - p(x) ||^2 · G(x_1, …, x_n)
where G(x_1, …, x_n) denotes the Gram determinant associated to (x_1, …, x_n).
Note that p(x) is a linear combination of (x_i)_1 ≤ i ≤ n. Since the determinant is stable under elementary operations, we have
G(x, x_1, …, x_n) = G(x - p(x), x_1, …, x_n).
But x - p(x) is orthogonal to each x_i so that,
G(x - p(x), x_1, …, x_n) = || x - p(x) ||^2 · G(x_1, …, x_n).
This completes the proof.
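As a quick numerical illustration of the lemma (ours, not part of the original argument), the following Python sketch checks the identity G(x, x_1, …, x_n) = || x - p(x) ||^2 · G(x_1, …, x_n) for a randomly chosen 3-dimensional subspace of ℝ^5; all names and the particular dimensions are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def gram_det(*vectors):
    # Gram determinant det(<v_i, v_j>)_{i,j} of a finite family of vectors.
    V = np.stack(vectors)
    return np.linalg.det(V @ V.T)

x1, x2, x3 = rng.normal(size=(3, 5))     # basis of a 3-dimensional subspace F of R^5
x = rng.normal(size=5)                   # an arbitrary vector of E = R^5

B = np.stack([x1, x2, x3], axis=1)       # 5 x 3 matrix whose columns span F
coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)
px = B @ coeffs                          # orthogonal projection p(x) of x onto F

lhs = gram_det(x, x1, x2, x3)
rhs = np.linalg.norm(x - px) ** 2 * gram_det(x1, x2, x3)
print(lhs, rhs)                          # the two values agree up to rounding error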
§.§ Correspondences
This section concerns correspondences and the well-known Berge maximum theorem. For a thorough analysis of the concept of correspondence, one may refer to Chapters 2 and 6 in <cit.>.
Let X and Y be two non-empty sets.
* a correspondence Γ from X to 2^Y (denoted Γ: X ⇉ 2^Y) is a mapping that associates to each x ∈ X a subset Γ(x) of Y. Moreover, for every subset S ⊆ X, Γ(S) := ∪_x ∈ SΓ(x).
* a correspondence Γ is single-valued if Card(Γ(x)) = 1 for all x ∈ X
* a correspondence Γ is compact-valued (or closed-valued) if for all x ∈ X, Γ(x) is a compact (or closed) set.
Notice that a single-valued correspondence can be thought of as a function mapping X into Y. Thus, as correspondences are a generalization of functions, some properties and definitions for functions have natural extensions to correspondences. In particular, the continuity of a classical numerical function is a particular case of the hemicontinuity of a correspondence.
Let (X, d_X) and (Y, d_Y) be two metric spaces and Γ: X ⇉ 2^Y a correspondence.
* Γ is upper hemicontinuous at x ∈ X if and only if for any open set V such that Γ(x) ⊆ V, there exists an open set U ∋ x such that for all y ∈ U, Γ(y) ⊆ V.
* Γ is lower hemicontinuous at x ∈ X if and only if for any open set V such that Γ(x) ∩ V ≠∅, there exists an open set U ∋ x such that for all y ∈ U, V ∩Γ(y) ≠∅.
As for continuous functions on a metric space, there exists a sequential characterization of the hemicontinuity.
[Sequential characterization of hemicontinuity]
Let (X, d_X) and (Y, d_Y) be two metric spaces and Γ: X ⇉ 2^Y a correspondence.
* Γ is lower hemicontinuous at x ∈ X if and only if for every sequence (x_n)_n ∈ℕ∈ X^ℕ that converges towards x and every y ∈Γ(x), there exist a subsequence (x_n_k)_k ∈ℕ of (x_n)_n ∈ℕ and a sequence (y_k)_k ∈ℕ such that y_k ∈Γ(x_n_k) for all k ∈ℕ and y_k → y.
* if Γ is upper hemicontinuous at x ∈ X, then for every sequence (x_n)_n ∈ℕ∈ X^ℕ that converges towards x and every sequence (y_n)_n ∈ℕ such that y_n ∈Γ(x_n) for all n ∈ℕ, there exists a convergent subsequence of (y_n)_n ∈ℕ whose limit lies in Γ(x). If Y is compact, then the converse holds true.
An important result relating correspondences and parametric optimization is Berge's maximum theorem.
[Berge's maximum theorem]
Let 𝒬 and Y be two topological spaces, Γ: 𝒬⇉ 2^Y a compact-valued and continuous correspondence and ϕ a continuous function on the product space Y ×𝒬. Define for all Q∈𝒬
σ(Q) := q ∈Γ(Q)argmax ϕ(q, Q) and ϕ^*(Q) := q ∈Γ(Q)maxϕ(q, Q).
Then,
* The correspondence σ: 𝒬⇉ Y is compact-valued, upper hemicontinuous, and closed
* The function ϕ^*: 𝒬→ℝ is continuous
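As a small numerical illustration of the theorem (ours, not part of the original text), the sketch below discretizes the compact-valued, continuous correspondence Γ(Q) = [0, Q] together with a continuous payoff ϕ and observes that the value function ϕ^* varies continuously with Q; the particular ϕ and the grid sizes are arbitrary choices.

import numpy as np

def phi(q, Q):
    # A continuous payoff on Y x Q; the particular formula is arbitrary.
    return np.sin(3.0 * q) - (q - Q) ** 2

def phi_star(Q, n_grid=2001):
    # Maximum of phi(., Q) over the compact set Gamma(Q) = [0, Q].
    q = np.linspace(0.0, Q, n_grid)
    return np.max(phi(q, Q))

Qs = np.linspace(0.1, 3.0, 300)
values = np.array([phi_star(Q) for Q in Qs])
# Successive increments stay small, consistent with the continuity of phi^*.
print(np.max(np.abs(np.diff(values))))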
|
http://arxiv.org/abs/2307.03976v2 | 20230708133744 | Short-time large deviations of the spatially averaged height of a KPZ interface on a ring | [
"Timo Schorlepp",
"Pavel Sasorov",
"Baruch Meerson"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
[email protected]
Institute for Theoretical Physics I,
Ruhr University Bochum, 44801 Bochum, Germany
[email protected]
ELI Beamlines Facility,
ERIC, 25241 Dolní Br̆ežany, Czech Republic
[email protected]
Racah Institute of Physics, Hebrew
University of Jerusalem, Jerusalem 91904, Israel
Using the optimal fluctuation method, we evaluate the short-time probability
distribution P (H̅, L, t=T) of the spatially averaged height H̅ = (1/L) ∫_0^L h (x, t=T) dx
of a one-dimensional interface h (x, t) governed by the Kardar–Parisi–Zhang equation
∂_th=ν∂_x^2h+λ/2(∂_xh)^2+√(D)ξ(x,t)
on a ring of length L. The process starts from a flat interface, h(x,t=0)=0.
Both at λH̅<0, and at sufficiently small positive λH̅ the optimal
(that is, the least-action) path h(x,t) of the interface, conditioned on H̅, is uniform
in space, and the distribution P (H̅, L, T) is Gaussian. However, at sufficiently
large λH̅>0 the spatially uniform solution becomes sub-optimal and gives way
to non-uniform optimal paths. We study them, and the resulting non-Gaussian distribution P (H̅, L, T),
analytically and numerically. The loss of optimality of the uniform solution occurs via a dynamical
phase transition of either first, or second order, depending on the rescaled system size
ℓ = L/√(ν T), at a critical value H̅=H̅_c(ℓ). At large but
finite ℓ the transition is of first order. Remarkably, it becomes an “accidental" second-order
transition in the limit of ℓ→∞, where a large-deviation
behavior -ln P (H̅, L, T) ≃ (L/T) f(H̅)
(in the units λ=ν=D=1) is observed. At small ℓ the transition is of second order,
while at ℓ =O(1) transitions of both types occur.
Short-time large deviations of the spatially averaged height
of a KPZ interface on a ring
Baruch Meerson
August 12, 2023
=========================================================================================
§ INTRODUCTION
Atypically large fluctuations in macroscopic systems out of
equilibrium continue to attract great interest from statistical physicists.
Although a universal description of such fluctuations is unavailable, there has been
much progress in studies of particular systems. One of the main theoretical tools
in this area is known under different names in different areas of physics:
the optimal fluctuation method
(OFM), the instanton method, the weak-noise theory, the
macroscopic fluctuation theory, etc. This method relies
on a saddle-point evaluation of the pertinent path integral
of the stochastic process, conditioned on the
large deviation. The method is based on a model-specific
small parameter (often called “weak noise"), and it brings about a
conditional variational problem. The solution of this problem – a
deterministic, and in general time-dependent, field – describes the “optimal path" of the system:
the most probable system's history which dominates the contribution of different
paths to the statistics in question.
Among multiple applications of the OFM, we focus on one set of problems which has attracted attention in the last
two decades <cit.>: short-time
large deviations of a stochastically growing interface as described by the one-dimensional Kardar–Parisi–Zhang (KPZ) equation <cit.>
∂_th=ν∂_x^2h+λ/2(∂_xh)^2+√(D)ξ(x,t) ,
where ξ(x,t) is a white noise with
⟨ξ(x,t)⟩=0 , ⟨ξ(x,t)ξ(x^',
t^')⟩=δ(x-x^')δ(t-t^') .
Here we employ the OFM to study a KPZ interface on a ring of length L, i.e. with periodic boundary
conditions at x=0 and x=L. The interface is initially flat,
h(x,t=0)=0 ,
and we are interested in evaluating
the probability density function (PDF) P(H̅, L, T)
of the spatially averaged surface height
H̅ = 1/L∫_0^L h(x,T) dx
at a final time t=T >0, which is much shorter than the characteristic nonlinear
time of Eq. (<ref>), τ_NL= ν^5/D^2 λ^4.
The short-time limit allows one to employ the OFM in a controlled
manner <cit.>, as we will
reiterate shortly. The problem, defined by Eqs. (<ref>)-(<ref>), continues the
line of studies of Refs. <cit.> of finite system-size effects (which turn out to be quite dramatic)
in large deviations of height of the KPZ interface.
Upon rescaling t → tT,
x → (ν T)^1/2 x, h →ν h / λ and ξ→(ν T^3)^-1/4ξ, Eq. (<ref>) becomes
∂_th= ∂_x^2h+1/2(∂_xh)^2
+√(ε)ξ(x,t) ,
with rescaled noise strength ε = D λ^2 T^1/2
/ ν^5/2 on a ring of rescaled length ℓ = L / √(ν T).
The PDF of the rescaled average height H̅ at final time t = 1
can then be written as a path integral
P(H̅,ℓ,ε) = ∫_h(·, 0) = 0 Dh δ(
1/ℓ∫_0^ℓ h(x,1) dx - H̅)
J[h] exp{-1/ε S[h] }
with action functional
S[h] = ∫_0^1 dt ∫_0^ℓ dx L(h, ∂_t h ) = 1/2∫_0^1 dt
∫_0^ℓ dx [∂_th - ∂_x^2h-1/2(∂_xh)^2 ]^2 ,
where ℒ(h,∂_t h) is the Lagrangian.
The OFM assumes a weak-noise limit ε→ 0, when the path integral (<ref>) can be evaluated
by the saddle-point method, while the Jacobian J[h] does not contribute in the leading-order.
In this limit, the PDF P(H̅,ℓ,ε) is dominated by
the optimal path of the system, that is by the most likely history h(x,t) conditional on a given average height at t=1:
-ln P(H̅, ℓ, ε) ε→ 0≃ε^-1min_h(·, 0)= 0 ,
∫_0^ℓ
h(x,1)dx = ℓH̅ S[h] = ε^-1 S(H̅, ℓ) .
Hence, the PDF can be determined, up to pre-exponential factors, from the
solution of this constrained minimization problem. Here
we will solve this minimization problem numerically, for different H̅
and ℓ, and analytically in the asymptotic limits of large and small
ℓ[Note that whenever there exists a spatially
non-uniform optimal path, there are actually infinitely many possible
paths due to the translational symmetry of the problem with respect to x. Accounting for
this submanifold of degenerate solutions and for the associated zero
mode is, however, only relevant for pre-exponential factors <cit.> which
we do not address here.].
It will be convenient to present our results by setting
ν=λ=D=1[In most of the paper we assume, without
loss of generality, that λ>0. Indeed, changing λ to -λ is equivalent to changing h to -h.].
Then the weak-noise scaling (<ref>) reads
-ln P(H̅, ℓ, ε→ 0) ≃
T^-1/2 S(H̅, ℓ) .
Note that the limit ε→ 0 at fixed ℓ corresponds to
the short-time limit T → 0 and small-length limit L → 0
with L / √(T) = const.
When instead T goes to zero at L=const, one has
both ε→ 0 and ℓ→∞. The latter limit turns out to be most interesting, and it is analyzed here
in detail. It is natural to expect that for
any H̅, when ℓ→∞, the action S(H̅, ℓ) should exhibit
a large-deviation form
S(H̅,ℓ) ℓ→∞≃ℓ f(H̅) ,
leading to
-ln P(H̅, L, T→ 0) ≃
(L/T) f(H̅) ,
and this is what we indeed observe here. Less expectedly, we also find that the rate
function f(H̅) exhibits, at a critical value H̅=H̅_c(ℓ),
a dynamical phase transition (DPT) which is accidentally second-order.
By that we mean that
the rate function at the critical point becomes continuously differentiable
only in the limit of ℓ→∞. At arbitrary large but finite ℓ the
large-deviation form (<ref>) breaks down. We show, however, that the action S(H̅,ℓ) still exhibits
a DPT at a critical point H̅=H̅_c, but this DPT is of first order and the optimal
path at the critical point changes discontinuously via a subcritical bifurcation.
For small ℓ a truly second-order DPT is observed as predicted earlier <cit.>.
At intermediate values of ℓ = O(1) DPTs of both types occur. In the latter regime analytical
results are unavailable as of yet, and we present some numerical results. All the DPTs that we
found in this system occur because of a loss of optimality of a path that is uniform in space.
The loss of optimality takes the form either of a subcritical bifurcation (for the first-order DPTs),
or a supercritical bifurcation (for the true second-order DPTs).
The remainder of this paper is structured as follows. In Sec. <ref> we formulate
the OFM equations and boundary conditions, present a simple uniform solution of these equations,
previously studied in Refs. <cit.>, and
argue that it describes the optimal path of the system at all λ H<0. Supercritical
bifurcations of the uniform solution have been recently studied in Ref. <cit.>. Still,
for convenience of further discussion, we briefly rederive them in Sec. <ref>.
Section <ref> includes our results of numerical minimization of the action
functional (<ref>) in different regions of the (H̅,ℓ) phase diagram.
These numerical results provided valuable insights into the nature of optimal paths of the
interface which led us to develop asymptotic analytical solutions of the OFM problem for
large ℓ that we present in Sec. <ref>. The asymptotic solution for small ℓ
is briefly discussed in Sec. <ref>. We summarize and discuss our main results
in Sec. <ref>. A description of numerical algorithms that we use here is relegated to the Appendix.
§ OFM EQUATIONS AND UNIFORM SOLUTION
At a technical level, the main objective of this work is to determine the minimum action S(H̅, ℓ)
as a function of the rescaled average height H̅ and rescaled
system size ℓ. In this section, we present the necessary
conditions for minimizers of the action functional (<ref>) – the OFM equations and the boundary conditions.
We argue then that a simple spatially uniform solution
of the ensuing OFM problem is always optimal for H̅ < 0.
The first-order necessary conditions for a minimizer of the action
functional (<ref>) can be represented as a pair of Hamilton's equations
for the optimal history of the interface h(x,t) and the
conjugate momentum density p = ∂ L / ∂(∂_t h). These equations
were derived in many papers <cit.>, and they take the form
∂_th = ∂_x^2h+1/2(∂_xh)^2+p
,
∂_tp = -∂_x^2p+∂_x(p∂_xh)
.
The “momentum density" p(x,t) describes the (rescaled) optimal realization of
the external noise ξ(x,t) that drives the interface conditional on a specified H̅.
In the present case Eq. (<ref>) and (<ref>) should be complemented by the periodic boundary conditions
at x=0 and x = ℓ, by the initial condition
h(x,0)=0 ,
and by the final-time condition
p(x,1)=Λ = const ,
which follows from the demand that a boundary term at t=1, originating from an
integration by parts, should vanish for any h(x,1).
The parameter Λ is a Lagrange multiplier which needs
to be chosen so as to impose the rescaled final-time condition
1/ℓ∫_0^ℓ h(x,1) dx = H̅ .
Once the optimal path is determined, the action S(H̅,ℓ)
can be determined from the equation
S = 1/2∫_0^1 dt∫_0^ℓ dx p^2(x,t) ,
which follows from Eqs. (<ref>) and (<ref>).
By differentiating the action S(H̅, ℓ) = S[h(x,t;H̅,ℓ)] of
the optimal profile h = h(x,t;H̅,ℓ) with respect to H̅ using
the chain rule, one can show that Λ is related to the action via
Λ=1/ℓ ∂ S(H̅, ℓ)/∂H̅ (so that dS=ℓΛ dH̅) .
If the action S(H̅, ℓ) is a strictly convex function of H̅,
there is a bijective relation between Λ and H̅, and it
suffices, for the purpose of calculating the action, to only
determine H̅(Λ) and use Eq. (<ref>). This shortcut is very convenient and
holds for many large-deviation calculations <cit.>.
There is an obvious exact solution of the OFM equations and the boundary conditions:
h(x,t)=H̅ t , p(x,t)=Λ , Λ = H̅ ,
S=ℓ/2H̅^2 ,
which describes a uniformly growing flat interface.
We will often call this branch of solutions branch 1. By virtue of Eq. (<ref>),
whenever the uniform solution (<ref>) is the optimal one, we have
a Gaussian PDF for H̅ up to pre-exponential factors. Of most interest, however,
are the regions of parameters H̅
and ℓ, for which the uniform solution is sub-optimal. As we will see,
the loss of optimality can occur via either a supercritical, or a subcritical bifurcation.
First of all, we can argue that, for negative H̅, the uniform
solution (<ref>) is always optimal. Using the evident conservation law
1/ℓ∫_0^ℓ p(x,t)
d x = Λ = const
of Eq. (<ref>), we can rewrite the action (<ref>) for any solution
of the OFM equations as
S = 1/2∫_0^1 dt∫_0^ℓ
dx p^2(x,t)=ℓΛ^2/2+1/2∫_0^1 dt∫_0^ℓ dx
[p(x,t)-Λ]^2 ,
Also, integrating both sides of Eq. (<ref>) with respect to t from 0 to 1 and
with respect to x over the ring, and using the periodic boundary conditions
and the conservation law (<ref>), we obtain
H̅=1/ℓ∫_0^ℓ h(x,1) dx
=Λ+1/2ℓ∫_0^1 dt∫_0^ℓ
dx [∂_xh(x,t)]^2 .
One can easily see from Eqs. (<ref>) and (<ref>) that, at negative Λ
(or H̅) any inhomogeneity in the
momentum density p both increases
the action S, and decreases the average height |H̅| in comparison to their
values for the uniform solution. Therefore, any nonuniform solution here is sub-optimal.
In contrast to this, for Λ >0 (or
H̅>0), an inhomogeneity increases both S,
and H̅ in comparison to the uniform solution. A competition
between these two opposite effects may give rise to non-uniform solutions with lesser action than
the uniform one, as we will indeed see in the following.
§ BIFURCATIONS OF THE UNIFORM SOLUTION
In this brief section we carry out a linear stability analysis of the
uniform solution (<ref>). We find that, for sufficiently
large positive H̅, the uniform solution can continuously
and supercritically bifurcate to a non-uniform solution. The first
spatial Fourier mode to become unstable as H̅ increases depends
on the rescaled system size ℓ in a nontrivial way and is determined
from Eq. (<ref>). This equation has also been obtained in Ref. <cit.>
by calculating the leading-order prefactor correction to the asymptotic
scaling in Eq. (<ref>) through Gaussian integration of
fluctuations around the uniform solution (<ref>).
At first order of a perturbation theory around the uniform
solution (<ref>) we have
p(x,t)=H̅+b(t)cos qx , h(x,t)=H̅ t + a(t)cos qx
, |a|, |b|≪ 1 .
Here the wave number q spans the set 2π m/ℓ for
m=1,2,…. Substituting the expressions (<ref>)
into Eqs. (<ref>) and (<ref>) and neglecting higher-order terms, we obtain
the following system
of linear ordinary differential equations:
ȧ=-q^2a+b , ḃ=q^2b-q^2H̅ a .
It has solutions proportional to e^iω t, where
ω=± q √(H̅-q^2) .
Using the boundary conditions (<ref>) and (<ref>), we obtain the
following relationship between q and H̅ = H̅_c(q)
at the bifurcation points:
tan(q√(H̅-q^2))=-√(H̅-q^2)/q .
Note that the trivial solution H̅=q^2 of Eq. (<ref>) does
not correspond to a valid non-uniform solution due to the boundary conditions
at t=0 and 1. The resulting dependence H̅(q) can be expressed in a
parametric form
H̅ = -2 u/sin 2u , q=√(-u cot u) ,
(2n-1)π/2<u<nπ; n=1,2,3,… ,
where, for given ℓ, only values of q = 2 π m / ℓ
with m = 1, 2, 3, … are allowed.
The first three branches of Eq. (<ref>) are shown in
Fig. <ref>. As one can see, the first instability appears for n = 1,
and a necessary condition for the instability, for any ℓ, is H̅_c≥ 4.603.
When ℓ→∞, the first instability of the
uniform solution will occur, at H̅_c≃ 4.603, for a very high mode
m ≃ 1.343 ℓ/ 2 π.
For finite ℓ, one can find the bifurcation point on the n=1 branch of Eq. (<ref>)
numerically.
Finally, for ℓ→ 0, the first instability occurs for the m = 1 mode at
H̅≃ (2 π / ℓ)^2 in
agreement with Ref. <cit.>.
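For readers who wish to reproduce the bifurcation diagram, the following short Python sketch (ours, not the authors' code) evaluates the parametric branches quoted above and recovers the minimal instability threshold H̅_c ≃ 4.603 at q ≃ 1.343 on the n = 1 branch.

import numpy as np

def branch(n, num=200000):
    # Parametric form of the n-th branch: u in ((2n-1)*pi/2, n*pi),
    # H_bar = -2u/sin(2u) and q = sqrt(-u*cot(u)).
    eps = 1e-9
    u = np.linspace((2 * n - 1) * np.pi / 2 + eps, n * np.pi - eps, num)
    return np.sqrt(-u / np.tan(u)), -2 * u / np.sin(2 * u)

q1, H1 = branch(1)
i = np.argmin(H1)
print(q1[i], H1[i])          # approximately 1.343 and 4.603
# On a ring of rescaled length l only q = 2*pi*m/l is admissible, so the first
# unstable mode is the admissible wave number closest to this minimum.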
§ NUMERICAL RESULTS
Now we proceed with a numerical solution of the
minimization problem in Eq. (<ref>) for different H̅ and ℓ. The numerical methods that
we used are described in the Appendix. In addition to confirming
the supercritical bifurcations of the uniform solution that we discussed in Sec. <ref>,
we will uncover important subcritical bifurcations
and get insight into non-perturbative optimal paths which
will be studied analytically in Secs. <ref> and <ref>.
We start with the simpler case of small ℓ.
Choosing a moderately small value ℓ = π / 8 and numerically
minimizing the action (<ref>) for different Λ, we
obtain
the rate function S(H̅, ℓ) and Lagrange
multiplier Λ(H̅) shown in Fig. <ref>.
The spatially uniform solution (<ref>), corresponding
to branch 1 of the action, is seen to become unstable
close to H̅≃ (2 π / ℓ)^2 as stated in Sec. <ref>,
and there is a
continuous (second-order) DPT to a spatially
nonuniform solution. Indeed, the (m = 1)-spatial Fourier mode of the
profile becomes unstable at this point. One such spatially nonuniform solution close to the transition point
is shown in Fig. <ref>. As H̅ increases, the optimal solution
turns, for most of the time 0<t<1, into a stationary “cnoidal" solution for p which
drives an h-profile which is non-uniform in x, but is uniformly translating in the vertical direction.
The same solution appears in the problem of the one-point height distribution for the KPZ
equation on a ring <cit.>, and we use it in
Sec. <ref> to calculate the theoretical curves in
Figs. <ref> and <ref>,
which match the numerical results quite well.
Next, we turn to the more complicated and interesting case of large
ℓ.
For ℓ = 16 π the minimization of the augmented action (<ref>)
leads to the results for the rate function S(H̅) and Lagrange
multiplier Λ(H̅) shown
in Fig. <ref>. In addition to branch 1 we observe two other branches of solutions.
Branch 2 is observed to the right of a narrow
transition region close to H̅≃ 4. On this branch the action S(H̅) is
approximately a linear function, while Λ is almost constant. Further, for much larger H̅,
there is a smoothed-out second-order transition from branch 2 to a
third branch 3 with a different scaling behavior.
The optimal paths for branches 2 and 3 are shown in
Fig. <ref>. They consist of strongly localized large-amplitude stationary
solitons of p that drive an outgoing almost triangular structure of h (or two antishocks
of V(x,t) = ∂_x h(x,t), see Sec. <ref>). The solution, corresponding to branch 2,
clearly emerges via a subcritical, rather than supercritical bifurcation. Strikingly, the soliton
has a well-defined life time which is very close to 1/2. The
difference between branches 2 and 3 is that, for branch 3, the two edges
of the triangular structure of h(x,t) collide before the final time t=1 is reached,
while for branch 2 they do not.
These crucial findings will guide our stationary-soliton-based asymptotic theory for large ℓ that we develop
in Sec. <ref>. There we give an analytical description of the optimal paths
for branches 2 and 3, which are the only relevant ones for large
ℓ. There we establish a first-order transition at H̅≃ 4 for large but finite ℓ
and show that it becomes “accidentally" second order in the limit of ℓ→∞.
We also find that the smoothed-out second-order
transition from branch 2 to branch 3 occurs at H̅ = ℓ^2 / 6. The resulting
analytical predictions, indicated by the lines in
Figs. <ref> and <ref>, are in good agreement with numerics
at large, but finite ℓ.
At moderate ℓ the transition region where the spatially uniform
solution (<ref>) of branch 1 becomes sub-optimal is quite
complex, as one can appreciate from
Fig. <ref>.
We see that, in general, there are both first and second order
transitions in this region: The uniform solution becomes
linearly unstable for some m > 1, leading to second-order
transitions, but there is also a competition with the (subcritical) one-soliton
solution. The subcritical scenario clearly wins for sufficiently large ℓ. Indeed, for ℓ = 32 π
we observe only a first-order
transition from the spatially uniform to the soliton solution,
while the linear instability becomes irrelevant.
Note that, for branch 2, in addition to stationary single-soliton
solutions of the OFM equation, discussed so far, there are also stationary multi-soliton solutions
consisting of two or more (almost) non-interacting strongly localized stationary solitons
of p and corresponding expanding triangles of h. One such solution, which we observed numerically, is
shown in the top row of
Fig. <ref>. We found, however,
that such solutions always have a larger action than
the one-soliton solution for the same ℓ
and H̅. Therefore, the one-soliton solution indeed seems to provide
the optimal solution. In the limit ℓ→∞,
these multi-soliton solutions – a soliton gas – would contribute to the
pre-exponential factor for 𝒫(H̅, ℓ), but
pre-exponential factors are beyond the scope of this paper. Additionally, in the
bottom row in Fig. <ref>,
we show an optimal path for ℓ = 16 π and close
to H̅ = 4, which emerges through linear instability of
the (m = 11)-mode. Later on, however, it is overtaken by the
one-soliton solution.
§ LARGE-ℓ ASYMPTOTICS: RISE AND FALL OF THE SOLITON
§.§ General description of the solution
Guided by our numerical solutions and by the previous works on the one-point KPZ height
statistics on the line <cit.> and on a ring <cit.>, here we find approximate
asymptotic solutions of Eqs. (<ref>)-(<ref>) which give rise to two nontrivial
branches (we call them branches 2 and 3) of the large-deviation function S(H̅) for large ℓ.
As we found, for both branches the maximum one-point height of the interface H=max h(x,t=1) turns
out to be very large: H≫ 1. Therefore, in addition to the strong inequality ℓ≫ 1,
we can also use the strong inequality H≫ 1. This allows us to construct “inviscid" asymptotic
solutions in different regions of space, separated by discontinuities of proper types. Like their
numerical counterparts, the analytical solutions exhibit two distinct stages in time, with an abrupt
transition between them at some branch-dependent intermediate time 0<t=τ<1 which we will determine.
For 0<t<τ the solution has the form of a strongly localized stationary soliton of p(x,t)
and “antishock" of V(x,t)= -∂_x h(x,t) which were previously identified in the problem
of one-point height statistics on the line <cit.> and on a ring <cit.>.
The characteristic width, O(1/√(H)), of the soliton-antishock structure is much less than
unity. Outside of the soliton-antishock one has p(x,t) ≃ 0. As a result, Eq. (<ref>)
is obeyed trivially and, at distances ≳ 1 from the soliton, h(x,t) follows the deterministic KPZ dynamics
∂_th=∂_x^2h+1/2(∂_xh)^2 ,
which is equivalent to the Burgers equation
∂_tV+ V ∂_x V =∂_x^2V
for the field V(x,t) =-∂_x h(x,t). In addition, the diffusion term in Eq. (<ref>)
can also be neglected at large distances <cit.>, and one arrives at the inviscid Hopf equation
∂_tV+V∂_x V=0 .
The stationary soliton-antishock structure drives an almost triangular configuration of h(x,t)
which is expanding outwards <cit.>. The height of the triangle grows linearly with time, while
its two edges propagate with a constant speed as “ordinary" shocks of V(x,t) obeying Eq. (<ref>)
or, when treated as discontinuities, obeying Eq. (<ref>) <cit.>. The positions of these shocks
at t=1 determine the boundaries of the “impact region" of the soliton-antishock structure. When the
size of the impact region, which scales as O(√(H)) <cit.>, is shorter than the rescaled system
size ℓ (this happens when H̅ is not too large, see below), there is also an external region
where the uniform solution p(x,t)=Λ =const and V(x,t)=0 holds, see Eq. (<ref>).
The external uniform solution holds for all times 0<t<1, and it contributes to the large-deviation
function of H̅. In the inviscid limit the regions of zero and nonzero p are divided by a
stationary discontinuity. This regime corresponds to branch 2.
Branch 3 appears when, due to the periodicity of the system, the ordinary shocks of V(x,t)
collide with each other before the final time t=1 is reached. In this case the impact region
of the soliton-antishock structure extends to the whole system, and a region of the uniform solution does not appear.
For the solution to obey the boundary condition (<ref>), the p-soliton must turn into a
constant p= Λ at t=1. Remarkably, as we have seen in our numerical results for large ℓ,
the soliton rapidly decays in the vicinity of a well-defined time t=τ<1. For both branches 2 and 3,
the subsequent dynamics, at τ<t<1,
gives only a subleading contribution (which we neglect, along with other subleading contributions)
to the maximum one-point height H and to the action. This stage is important, however, for determining H̅.
We can qualitatively understand this nontrivial temporal structure of the solutions from the viewpoint of action
minimization: First, for 0 ≤ t ≤τ, the interface is efficiently driven upward by a stationary
p-soliton, in the same manner as for the one-point height PDF of the KPZ equation on the line <cit.>
and on a ring <cit.>. Then, quickly suppressing the soliton at an intermediate time 0<τ < 1 and
evolving the interface according to the almost free KPZ dynamics for τ < t ≤ 1 increases considerably
the average height H̅ for a negligible additional cost in terms of action. The optimal value of τ
is the one that minimizes the action for a given H̅.
As an overview, we present here the action S(H̅, ℓ) at leading order for large ℓ,
as will be derived in subsections <ref> and <ref>:
S(H̅, ℓ) ≃{[ H̅^2 ℓ/2 , -∞ < H̅≤ 4 , (branch 1); (4 H̅ - 8) ℓ , 4 < H̅≤ℓ^2/6 , (branch 2); H̅^3/2Φ(H̅ / ℓ^2) , ℓ^2/6 < H̅ < ∞ , (branch 3) ].
where the function Φ(…) is defined in Eq. (<ref>) and
obeys Φ(z →∞) → 8 √(2) /3. The first line in Eq. (<ref>)
comes from the uniform solution (<ref>). The first two lines manifestly reveal the large-deviation
scaling (<ref>), while the third line does not.
Now we proceed to a more detailed description of the solutions, and we will start with branch 2.
§.§ Branch 2
Due to a translational symmetry of the problem (<ref>)-(<ref>), we can place the soliton-antishock
structure at x=0 (see Fig. <ref>) so that, to the leading order, H≃ h(0,τ).
As explained above, at H≫ 1, the p-soliton can be considered as a point-like object. We will only need
the value of its “mass", ∫ dx p(x,t) which, by virtue of Eq. (<ref>), is conserved. Using
the explicit expression for the soliton, p(x,t)=p_s(x) = 2 c cosh^-2 (√(c/2) x) <cit.>,
where c=H/τ, we obtain
∫_-∞^∞ dx p_s(x) = √(32 H/τ) .
The base of the triangular structure of the h-profile is equal to
2a(t)=√(2H/τ) t ,
while the triangle's height is
h(0,t)=Ht/τ , 0<t<τ .
Let us denote the total size of the impact region of the soliton-antishock structure
by 2a_1, where a_1 ≡ a(t=1). In the region a(t)<|x|<a_1 we have
p=h=0 .
The triangular profile of h on the interval 0<|x|<a(t) is described by the expressions <cit.>
p(x,t)=0 , h(x,t)
=H(t/τ-√(2)|x|/√(Hτ))
, and
V(x,t)=-∂_xh(x,t) = Ṽ sgn(x) ,
where
Ṽ=√(2H/τ) .
As one can see from Eqs. (<ref>) and (<ref>), the ordinary shocks propagate
with the speed Ṽ/2, as to be expected from Eq. (<ref>) or (<ref>) <cit.>.
After the rapid decay of the soliton at t=τ, the “post-soliton" solution (in the region to be determined)
can be described by the ideal hydrodynamic equations corresponding to the inviscid limit of Eqs. (<ref>)
and (<ref>):
∂_tV +V ∂_xV = -∂_x p ,
∂_tp+∂_x(pV) = 0 .
The V-antishock now plays the role of a discontinuity which undergoes a decay starting from t=τ.
In the leading order we can neglect the -∂_x p term, so that Eq. (<ref>) becomes the Hopf
equation (<ref>). Its solution is
V(x,t)=x/t-τ .
Plugging Eq. (<ref>) into Eq. (<ref>) and using the “final" condition (<ref>)
on p(x,t=1), we obtain
p(x,t) =Λ(1-τ)/(t-τ) .
The solution (<ref>) and (<ref>) holds at t>τ and |x|≤ a_d(t). The boundaries of this region,
x= ± a_d(t)≡Ṽ(t-τ) ,
represent weak discontinuities, moving with the speed Ṽ – that is twice as fast as
the ordinary shocks at x=± a(t), see Eq. (<ref>). Our simulations show
that the weak discontinuities catch up with the shocks at t=1. The corresponding condition can
be written as a_d(1) = a_1, and it yields τ=1/2[We also
obtained τ=1/2 analytically by solving the problem for a general τ and then minimizing the
resulting action with respect to τ. These calculations are somewhat cumbersome, and we do not show them here.]
Therefore, during the second stage of the dynamics, 1/2<t<1, V(x,t) is described by the following expressions:
V(|x|≤ a_d(t),t)=x/t-1/2 , V(a_d(t)≤|x|≤ a(t),t)=±Ṽ , V(a(t)<|x|< a_1,t)=0 .
Using the relation V(x,t)=-∂_x h(x,t), we can obtain the h-profile at any time 1/2<t<1
by integrating Eq. (<ref>) over x. The result describes a parabolic profile of h at |x|<a_d(t),
flanked by the linear profiles at a_d(t)<|x|<a_1 corresponding to the triangular structure of h(x,t) of
the first stage of the dynamics. At t=1 the parabolic profile takes over the whole interval |x|<a_1, and we obtain
h(x,t=1)=H-x^2 , |x|<a_1=√(H).
At |x|>a_1 the uniform solution holds:
h(|x|>a_1,t)=Λ t , p(|x|>a_1,t)=Λ .
Now we evaluate the contributions of the uniform solution to the action, Δ S_u, and to the average
height, ΔH̅_u, at t=1. As ℓ goes to infinity, we can neglect the difference between the
total system length ℓ and the length of the domain of uniform solution ℓ-2a_1, and obtain
Δ S_u=Λ^2ℓ/2 and ΔH̅_u=Λ .
The leading-order contribution of the soliton-antishock solution to the action is <cit.>
Δ S_s=8√(2)/3 H^3/2/√(τ)=16 H^3/2/3 .
This contribution comes from the first stage of the process, 0<t<1/2, while the second stage gives
only a subleading contribution which we neglect.
The second stage, 1/2<t<1 does contribute to H̅, however. Using Eq. (<ref>), we obtain
ΔH̅_s=4 H^3/2/3ℓ .
What remains to be done is to determine Λ, to collect the contributions to S and H̅,
and to eliminate H in favor of H̅ and ℓ.
In order to determine Λ, we use the local conservation of p(x,t) evident in Eq. (<ref>).
Because of this local conservation law,
the total soliton “mass", see Eq. (<ref>), must be equal to the integral of the solution (<ref>)
for p(x,t) over x from -a_1 to a_1. This condition yields a remarkably simple result: Λ=4,
a constant value (up to small subleading corrections).
Combining Eqs. (<ref>)-(<ref>), we obtain
H̅=4+4 H^3/2/3ℓ ,
S=8ℓ+16 H^3/2/3 .
Eliminating H, we arrive at the leading-order result for the large-deviation function of H̅
for branch 2 in the limit of large ℓ, which was announced in the second line of Eq. (<ref>):
S=(4H̅ -8) ℓ .
This expression obeys the large-deviation scaling (<ref>). As was to be expected, the actions
of branches 1 and 2 coincide at
H̅=H̅_c=4. Noticeably, their first derivatives with respect to H̅
also coincide at this point.
In addition, using Eq. (<ref>), we see that Eq. (<ref>) is consistent with Λ=4,
independently of H̅, for branch 2.
We will look into these peculiarities more carefully in Sec. <ref>.
One applicability condition of Eq. (<ref>) is the strong inequality H≫ 1.
Using the first relation in Eq. (<ref>),
we can rewrite this strong inequality in terms of H̅ and ℓ≫ 1:
H̅-4 ≫ 1/ℓ .
This condition limits H̅ from below. A condition on H̅ from above distinguishes
branch 2 from branch 3. It demands that the ordinary shocks of V(x,t) do not collide with
each other until t=1[While deriving Eq. (<ref>) we
demanded a strong inequality 2√(H)≪ℓ. However, when H̅≫ 1, the main contribution
to S and H̅ comes from the soliton-antishock solution, rather than from the uniform one. As a
result, the strong inequality 2√(H)≪ℓ becomes unnecessary, and a simple inequality suffices.].
This condition can be written as 2√(H)<ℓ or, using Eq. (<ref>),
H̅-4<ℓ^2/6 , ℓ≫1 .
Now we proceed to a description of branch 3.
§.§ Branch 3
When the inequality (<ref>) is violated, the two outgoing ordinary shocks of V(x,t) collide
with each other and merge at x=±ℓ / 2 (which is the same point of the ring) at some t<1.
Upon the merger, a single stationary shock appears, see Fig. <ref>. Now the impact region of
the soliton-antishock is the whole system: 2a_1=ℓ, and the external region of the uniform solution,
characteristic of branch 2, does not appear here.
Most of the general formulas, derived in the context of branch 2, remain valid for branch 3.
In particular, here too τ is determined by the condition that the weak discontinuities catch
up with the ordinary shocks at t=1. The only difference is that a_1=ℓ/2 now. Solving the
equation a_d(1) = a_1, or
√(2H/τ)(1-τ) = ℓ/2 ,
we obtain
τ =1+ℓ^2/16 H-ℓ√(ℓ^2+32H)/16 H ,
so that τ depends on H and ℓ. Unsurprisingly, Eq. (<ref>) yields τ=1/2 in
the boundary case H=ℓ^2/4, when the size 2a_1 of the impact region of the soliton-antishock
in an infinite system is equal to the system size ℓ. When H goes to infinity, τ approaches 1.
We will not repeat here all expressions for h(x,t), V(x,t) and p(x,t) in different regions,
and present only the expression for h(x,1):
h(x,1)=H-x^2/[2(1-τ)] ,
with τ from Eq. (<ref>).
Using this expression, we can evaluate H̅. The action S remains the same as in the
first equality in Eq. (<ref>), and we obtain
H̅=H-1/24 ℓ^2/(1-τ) ,
S=8√(2)/3 H^3/2/√(τ) .
Eliminating H from these relations and using Eq. (<ref>), we arrive at a leading-order
result for the large-deviation function S(H̅,ℓ) in the limit of large ℓ and very
large H̅, which was announced in the third line of Eq. (<ref>):
S(H̅,ℓ) = H̅^3/2Φ(H̅/ℓ^2) , where Φ(z) =2 √(2) (9 z+1+√(18z+1))^1/2(36 z+1+√(18z+1))/81 z^3/2 .
In terms of H̅, the condition H>ℓ^2/4 becomes, in the leading order, H̅>ℓ^2/6.
As a result, the function Φ(z) is defined for z≥ 1/6, and Φ(1/6) = 4 √(6).
A graph of Φ(z) is depicted in Fig. <ref>.
In the limit of H̅≫ℓ^2≫ 1 Eq. (<ref>) yields
S=8√(2)/3H̅^3/2+4/3H̅ℓ+ … .
The leading-order term of this expression coincides with the action for a single-point height H <cit.>.
This is to be expected, because for very large H̅, τ approaches 1, and the difference
between H̅ and H becomes relatively small.
The expressions in Eqs. (<ref>) and (<ref>) match in the leading order in ℓ
at the boundary H̅≃ℓ^2/6 between the branches 2 and 3, both giving (2/3) ℓ^3+O(ℓ).
For completeness, we also present the optimal transition time τ in Eq. (<ref>) in terms of H̅ and ℓ:
τ(H̅,ℓ)=1+ℓ^2/12 H̅-ℓ√(ℓ^2+18
H̅)/12 H̅ .
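The leading-order predictions of this section are easy to tabulate numerically. The sketch below (ours, not the authors' code) transcribes the piecewise large-ℓ action quoted above, the function Φ(z), and the transition time τ(H̅,ℓ), and checks the limiting values Φ(1/6)=4√6 and Φ(z→∞)→8√2/3, the matching of branches 1 and 2 at H̅=4, and τ=1/2 at the branch 2/3 boundary H̅=ℓ^2/6.

import numpy as np

def Phi(z):
    r = np.sqrt(18 * z + 1)
    return 2 * np.sqrt(2) * np.sqrt(9 * z + 1 + r) * (36 * z + 1 + r) / (81 * z ** 1.5)

def action(H_bar, ell):
    # Leading-order large-l action of the overview equation above.
    if H_bar <= 4:                          # branch 1: uniform solution
        return ell * H_bar ** 2 / 2
    if H_bar <= ell ** 2 / 6:               # branch 2: soliton plus uniform background
        return (4 * H_bar - 8) * ell
    return H_bar ** 1.5 * Phi(H_bar / ell ** 2)   # branch 3

def tau(H_bar, ell):
    # Optimal transition time for branch 3.
    return 1 + ell ** 2 / (12 * H_bar) - ell * np.sqrt(ell ** 2 + 18 * H_bar) / (12 * H_bar)

ell = 16 * np.pi
print(Phi(1 / 6), 4 * np.sqrt(6))                        # Phi(1/6) = 4*sqrt(6)
print(Phi(1e8), 8 * np.sqrt(2) / 3)                      # Phi(z -> inf) -> 8*sqrt(2)/3
print(action(4.0, ell), (4 * 4 - 8) * ell)               # branches 1 and 2 meet at 8*l
print(action(ell ** 2 / 6, ell) / ell ** 3, 2 / 3)       # both branches give (2/3)*l^3 + O(l)
print(tau(ell ** 2 / 6, ell))                            # tau = 1/2 at the branch 2/3 boundary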
§.§ Dynamical phase transition
In this subsection we resolve the nature of the DPT between
branches 1 and 2, which corresponds to the subcritical bifurcation from the uniform solution (<ref>)
to the leading-order soliton solution discussed in Sec. <ref>. To this end we will have to focus
on subleading corrections that we have previously ignored. We will also present the large-deviation
scaling of 𝒫(H̅,L,T) in the limit of T → 0 at fixed L, in the physical units.
As we have already noticed, the actions S_1(H̅, ℓ) and S_2(H̅, ℓ), described
by the first and second lines of Eq. (<ref>),
coincide at H̅=H̅_c=4 together with their first derivatives ∂ S_1(H̅, ℓ) /
∂H̅ and ∂ S_2(H̅, ℓ)/∂H̅
at H̅_c=4. It would be incorrect, however,
to conclude from here that the DPT between branches 1 and 2 at H̅=H̅_c
is of second order. Indeed, the supercritical first bifurcation of the uniform solution (<ref>)
to a solution with a single maximum of h(x,1) – the one with q = 2 π / ℓ
in Eq. (<ref>) – actually occurs, as ℓ→∞, at much
larger H̅≃ℓ^2 / 16 ≫ 4. Furthermore,
as follows from numerical minimization of Eq. (<ref>), instability
of any Fourier mode around the uniform solution can only occur
at H̅≃ 4.60334 (for q ≃ 1.34336). It
is not surprising, therefore, that
at large but finite ℓ, and at a slightly shifted transition
point H̅_c> 4 where the actions of branches 1 and 2
are equal, the optimal paths h(x,t) for branches 1 and 2, that we found numerically,
are dramatically different, and their respective Lagrange
multipliers Λ are not equal. The latter fact means, by
virtue of Eq. (<ref>), that at large ℓ we actually observe a first-order DPT, not a second-order one.
To make sense of these facts, we recall that Eq. (<ref>)
for the action of branch 2 is merely a leading order asymptotic
at ℓ→∞. Subleading terms, so far unaccounted for, should remove
the degeneracy of the leading-order results by breaking the accidental continuity
of the first derivative ∂ S(H̅, ℓ)/∂H̅
at H̅=H̅_c, and
rendering the corresponding bifurcation subcritical and the corresponding DPT
first-order. The subleading terms should also account for a slight shift of the critical
point H̅_c to the right from its leading-order
value H̅_c=4, as observed in our numerics.
Motivated by the large-H asymptotic of the upper tail of the exact
short-time probability distribution of the one-point height h(x = 0,t = 1)=H
on the line, determined in Ref. <cit.>, we can conjecture the following
subleading terms of S_2(H̅,ℓ) at large ℓ:
S_2(H̅,ℓ)=(4H̅ -8) ℓ+B H^1/2+C H^-1/2+… ,
where B>0 and C are numerical constants O(1), which are independent
of ℓ. The condition B>0 is necessary for the equation
S_1 ( H̅_c,ℓ) =
S_2 ( H̅_c,ℓ)
to have a solution for H̅_c close to
4 at large ℓ.
To verify Eq. (<ref>), we plotted in Fig. <ref> our large-ℓ numerical results for
[S_2(H̅,ℓ) - (4H̅ -8)
ℓ]/√(H) versus H. A fair plateau at large H is observed, with B ≃ 5.3 > 0 found by fitting.
Now, keeping the first subleading term in Eq. (<ref>)
and the leading-order dependence of H on H̅ in Eq. (<ref>),
we can rewrite Eq. (<ref>) in terms of H̅ and ℓ:
S_2(H̅,ℓ)=8ℓ+4(H̅ -4) ℓ
+ (3/4)^1/3 B [(H̅-4)ℓ]^1/3
+ … ,
(H̅-4)ℓ≫ 1 .
Now Eq. (<ref>) for the critical point becomes
1/2(H̅_c-4 )^2ℓ
= (3/4)^1/3 B [ (H̅_c
-4 )ℓ]^1/3+… ,
Its approximate solution,
H̅_c = 4 + 6^1/5 B^3/5 ℓ^-2/5+… ,
describes a small ℓ-dependent positive shift of the critical point from the leading-order value 4.
This H̅_c corresponds to
H = (9/8)^2/5 B^2/5ℓ^2/5 +…
of the branch-2 solution at the critical point. We observe that, for this solution, H →∞
as ℓ→∞, guaranteeing applicability of our theory at large ℓ. Going back to the
large-deviation scaling (<ref>), we notice that there is now a small but finite jump ∼ℓ^-2/5
of the derivative ℓ^-1∂ S/∂H̅ of the effective rate function at the shifted critical
point. The transition between branches 1 and 2, therefore, is of first order.
By virtue of Eq. (<ref>), the subleading correction in Eq. (<ref>) also removes the degeneracy
of the leading-order result Λ=4 by adding to it a small ℓ-dependent correction that goes
to zero as ℓ→∞.
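As a consistency check (ours, not part of the original analysis), one can solve the critical-point condition above numerically with the fitted value B ≃ 5.3 and compare it with the asymptotic formula; for ℓ = 32π both give H̅_c ≈ 4.6, close to the value 4.57 found by direct action minimization.

import numpy as np
from scipy.optimize import brentq

B, ell = 5.3, 32 * np.pi          # fitted subleading coefficient and system size

def mismatch(H_bar):
    # Difference between the two sides of the critical-point condition.
    x = H_bar - 4
    return 0.5 * x ** 2 * ell - (3 / 4) ** (1 / 3) * B * (x * ell) ** (1 / 3)

H_c_root = brentq(mismatch, 4 + 1e-6, 10.0)
H_c_asym = 4 + 6 ** 0.2 * B ** 0.6 * ell ** (-0.4)
print(H_c_root, H_c_asym)          # both ~4.6; direct minimization gave ~4.57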
Using Eq. (<ref>), we plotted in Fig. <ref> the actions of branches 1 and 2, normalized
by ℓH̅^2, in the
vicinity of H̅ = H̅_c. It is clearly seen that the subleading correction removes the degeneracy
and makes the DPT first-order. Furthermore,
the predicted H̅_c from Eq. (<ref>)
for ℓ = 32 π, which is H̅_c≃ 4.6, is close to our numerical result H̅_c≃ 4.57 for this ℓ, see
Fig. <ref>.
Note that our arguments in favor of the expansion (<ref>) are far from rigorous.
In particular, we cannot exclude a very
slow (for example, logarithmic) dependence of the coefficient B on H in Eq. (<ref>)
based only on the numerical evidence. However,
our main conclusion about the first-order DPT between branches 1 and 2
seems robust.
To conclude this section, we present our large-deviation results, described by the first two lines
of Eq. (<ref>), in
the physical units. Recall that, by taking the
limit T → 0 at fixed L,
we have both ε∝ T^1/2→ 0 and ℓ→∞. In this limit only the first
two lines of Eq. (<ref>) are relevant, and we
obtain[Note the factor of T instead of the customary weak-noise
factor T^1/2 on the left-hand side
of Eq. (<ref>).]
-lim_T→ 0 T ln P(H̅,L,T)
=ν^2/Dλ^2 L f(λH̅/ν) ,
f(w)={[ w^2/2 , w<4 ; 4w-8 , w>4 ].
As we
elaborated in this subsection, the DPT
in Eq. (<ref>) at w = 4 can be called an “accidental”
second order DPT in the sense that the optimal paths, that are responsible for the two branches in Eq. (<ref>),
transition into each other discontinuously, and that the differentiability of the rate function
at the critical point emerges only in
the limit T → 0 at fixed L.
§ SMALL-ℓ ASYMPTOTICS
We found that our numerical results on the second-order DPT at small ℓ, shown in Figs. <ref>
and <ref> and described in Sec. <ref>,
can be understood in terms of a small-ℓ asymptotic solution of the OFM equations (<ref>)
and (<ref>) which was previously found in the context of the one-point
height distribution on a ring <cit.>. In this solution
the interface is driven by a stationary dn^2 profile (see below) of p. The solution represents a finite-amplitude
generalization of a weak sinusolidal modulation with m = 1 which results from the second-order DPT from
the uniform solution. This solution is given by the following expressions[This
solution is invalid inside
narrow boundary layers in time at t=0 and t=1, but their contribution to the action is negligible.]
h(x,t) ≃ H t + 2 lndn[2 K(k) x/ℓ,
k ] ,
p(x,t) ≃ p_0(x) = [4 K(k)/ℓ]^2
dn^2 [2 K(k) x/ℓ , k] ,
where K(k) is the complete elliptic integral of the first kind
and dn(…) is one of the Jacobi elliptic functions <cit.>.
The elliptic modulus k ∈ (0,1) is determined by H via the relation
8 (2 - k^2) K^2(k)/ℓ^2 = H ,
The action of this solution as a function of k is <cit.>
S(k) = 128/(3 ℓ^3) K^3(k) [2(2-k^2) E(k)
- (1-k^2) K(k) ] .
At given ℓ≪ 1, Eqs. (<ref>) and (<ref>) determine S as a
function of H in a parametric form. The critical point H̅ = (2 π / ℓ)^2 corresponds
to k=0, when Eqs. (<ref>) and (<ref>) reduce to the uniform solution. Values k>0
correspond to supercritical solutions.
In order to recast this dependence in terms of S(H̅,ℓ),
we need to express H through H̅ and ℓ. Although Eq. (<ref>) is formally inapplicable
at t=1, asymptotically as ℓ→ 0 we still have
H - H̅≃ -1/ℓ∫_-ℓ /2^ℓ / 2
2 lndn[2 K(k) x/ℓ,
k ] dx= 1/2ln1/1 - k^2 .
where we have used a product formula for dn <cit.>.
Using Eqs. (<ref>) and (<ref>), we obtain
H̅(k) = 8 (2 - k^2) K^2(k)/ℓ^2-1/2ln1/1-k^2 .
Equations (<ref>) and (<ref>) determine S=S(H̅,ℓ) and were
used in Fig. <ref> to draw the theoretical curves for the action and
Lagrange multiplier (via Eq. (<ref>))
at ℓ = π / 8, which agree very well with the numerical action minimization results. Also shown is the
asymptotic action
S(H̅) ≃8 √(2)/3H̅^3/2
as H̅→∞, which agrees with Eq. (<ref>) and can be obtained from
Eqs. (<ref>) and (<ref>) by considering the limit k → 1
with E(k) → 1 and K(k) ≃ (1/2) ln[1/(1-k)]. As one can see from
Fig. <ref>, the asymptotic relation (<ref>)
is not yet satisfied for the moderately small ℓ = π / 8: noticeably, the solution h(x,1)
at the final time deviates from Eq. (<ref>). However, the numerically found action
is already accurately described by Eqs. (<ref>) and (<ref>), because
the difference between H and H̅ is always subleading – at most O(√(H)) – at small ℓ.
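The small-ℓ curves can be reproduced directly from the parametric representation above. The following sketch (ours) sweeps the elliptic modulus k using SciPy's complete elliptic integrals, which take the parameter m = k^2 rather than the modulus, and verifies the k → 0 and k → 1 limits quoted in this section.

import numpy as np
from scipy.special import ellipk, ellipe

ell = np.pi / 8
k = np.linspace(1e-4, 1 - 1e-10, 4000)
m = k ** 2                                   # SciPy's parameter convention
K, E = ellipk(m), ellipe(m)

H = 8 * (2 - k ** 2) * K ** 2 / ell ** 2     # one-point height H(k)
H_bar = H - 0.5 * np.log(1 / (1 - k ** 2))   # spatially averaged height
S = 128 / (3 * ell ** 3) * K ** 3 * (2 * (2 - k ** 2) * E - (1 - k ** 2) * K)

print(H_bar[0], (2 * np.pi / ell) ** 2)      # k -> 0: the critical point of the DPT
print(S[0], ell * H_bar[0] ** 2 / 2)         # ... where S matches the uniform action
print(S[-1] / H_bar[-1] ** 1.5, 8 * np.sqrt(2) / 3)   # k -> 1 asymptote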
§ SUMMARY AND DISCUSSION
We applied the OFM to evaluate analytically and numerically the short-time PDF P (H̅, L, t=T),
and the optimal paths which dominate this PDF, of the KPZ interface on a ring. The short-time PDF has
the scaling form (<ref>), where ε∼ T^1/2 plays the role of the weak-noise
parameter. The phase diagram of the system
represents the (H̅, ℓ=L/√(ν T)) plane. We were especially interested in the DPTs that occur
in this system at sufficiently large positive λH̅>0. We found that, depending on ℓ, these
DPTs occur via either a supercritical, or a subcritical bifurcation of the “trivial" (uniform in space)
optimal path of the KPZ interface. The supercritical bifurcations dominate at very small ℓ, the subcritical
bifurcations dominate at very large ℓ. In these two limits we obtained asymptotic analytical solutions
for the optimal paths of the system, evaluated the resulting action, and verified the analytical results
numerically. We also found that, as T goes to zero at constant L, the PDF acquires a simple large-deviation
form (<ref>). Interestingly, the rate function f(H̅) exhibits, at a critical value
of H̅=H̅_c(ℓ), a DPT which is accidentally second-order.
In the (much more complicated) region of intermediate ℓ=O(1) we observed numerically both supercritical,
and subcritical bifurcations of the uniform solution. This region of the phase diagram is presently out of
reach of analytical theory. It would be very interesting, but challenging, to determine the complete phase
diagram of the system in this region. In particular, it would be interesting to locate, somewhere
between ℓ=16 π and ℓ = 32π, at least one critical point (H̅_*, ℓ_*) where the
second order DPT curve H̅_c^(2)(ℓ) ends when it meets the first order DPT curve H̅_c^(1)(ℓ),
as well as other possible critical points.
These tasks will become more feasible if this problem, as described by Eqs. (<ref>)-(<ref>),
joins the list of similar
large-deviation OFM problems for the KPZ equation which have been solved exactly by the inverse scattering
method (ISM) <cit.>. Indeed, as was previously found in Ref. <cit.>,
a canonical Hopf–Cole transformation brings Eqs. (<ref>) and (<ref>) into the nonlinear
Schrödinger equation in imaginary space and time. Therefore, Eqs. (<ref>) and (<ref>)
belong to a family of completely integrable models. The only problem (but potentially a big one) is to
adapt the ISM to a finite system with periodic boundaries and to accommodate the problem-specific boundary
conditions (<ref>) and (<ref>). The exact solution would also provide
a full analytic control of the subleading corrections to the action of branch 2, which are at present only semi-empirical.
Finally, it would be very interesting to explore the possibility of extending to the spatially averaged KPZ
interface height some of the recent “stochastic integrability" approaches, which led, for selected initial
conditions, to exact representations for the complete statistics of the one-point interface
height <cit.>.
§ ACKNOWLEDGMENTS
The authors thank Eldad Bettelheim and Naftali R. Smith for useful discussions.
This research was supported by the program
“Advanced Research Using High Intensity Laser-Produced Photons and Particles"
(ADONIS) (CZ.02.1.01/0.0/0.0/16019/0000789) of the European Regional Development Fund (ERDF) (PS),
and by the Israel Science Foundation (Grant No. 1499/20) (BM).
§ NUMERICAL METHODS
Our numerical procedure of finding solutions h and p of the
OFM problem (<ref>)-(<ref>)
can be summarized as follows:
To compute numerical solutions to the boundary-value problem
for h and p for given ℓ and H̅, we use a
refined version of the popular Chernykh–Stepanov
back-and-forth iteration algorithm <cit.> as described in detail
in Ref. <cit.>, using the language of PDE-constrained optimization.
The idea is to interpret the back-and-forth
iterations – fixing Λ and solving Eq. (<ref>) forward in time
with fixed p, and Eq. (<ref>) backward in time with fixed h until
convergence – as adjoint <cit.> gradient evaluations δ S /
δ p of the action
functional with fixed Λ,
S[p] = 1/2∫_0^1 dt ∫_0^ℓ
d x p^2(x,t) - Λ∫_0^ℓ h[p](x,1) dx ,
with the height profile h = h[p] determined for a
given p through Eq. (<ref>).
This interpretation allows us to use automatic update step-size
control (here: Armijo line search <cit.>) and
preconditioning for faster convergence (here: L-BFGS method <cit.>).
Conceptually, one fixes Λ in this formulation and obtains
the corresponding average height value H̅ a posteriori.
For large ℓ we find multiple solutions for the
same H̅, and the action S(H̅,ℓ) of the optimal solution as a
function of H̅
becomes nonconvex for some H̅. Nonconvexity of the rate
function S(H̅) is an issue because
minimizing the functional (<ref>) effectively computes the
Legendre–Fenchel transform of the rate function at Λ,
which may diverge in this case. Therefore, we add a
penalty term to the action, leading to the so-called
augmented Lagrangian formulation <cit.>
S[p] = 1/2∫_0^1 dt ∫_0^ℓ
d x p^2(x,t) - Λ(
∫_0^ℓ h[p](x,1) dx - ℓH̅)
+ μ/2(∫_0^ℓ h[p](x,1)
dx - ℓH̅)^2 ,
and solve multiple minimization problems for increasing penalty
parameters μ.
In this formulation, one can directly prescribe H̅ at the
cost of solving multiple optimization problems, and it is usable
regardless of convexity of the rate function, or in other words regardless of
bijectivity of the map between H̅ and Λ.
The formulation (<ref>) is more convenient to
trace solution branches: one initializes the optimization on an
already found solution on a given branch and slightly changes
Λ. In order to trace branches close to the transition
region for large ℓ in
the nonconvex case, we temporarily reparameterize the observable
as described in Ref. <cit.> with reparameterizations
g(z) = lnln z or g(z) = 1 - exp{-(z - 3.5) }.
Within this general framework, we use a
pseudo-spectral code with spatial resolution n_x
to solve Eqs. (<ref>)
and (<ref>), with an exact integration of the diffusion
terms through an integrating factor in Fourier space. An explicit
second-order Runge–Kutta integrator with n_t equidistant steps
is used in time. The gradient of the action functional is
evaluated exactly on a discrete level (“discretize,
then optimize”). Python source code to illustrate the optimization
methods in a simple toy problem
can be found in Ref. <cit.>.
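For illustration only, the following self-contained Python sketch implements a bare-bones version of the back-and-forth iteration described above: a periodic pseudo-spectral grid, an exact integrating factor for the diffusion terms, explicit first-order time stepping, and plain relaxation instead of the L-BFGS and line-search machinery. It is not the code of Ref. <cit.>; Λ, the grid sizes, and the relaxation factor are illustrative choices, and the run is placed on branch 1 (subcritical Λ), where the iteration should relax back to the uniform solution h = Λt, p = Λ.

import numpy as np

ell, Lam = np.pi / 8, 100.0                      # Lambda below the ~(2*pi/ell)^2 threshold
nx, nt = 64, 2000
dt = 1.0 / nt
x = np.linspace(0.0, ell, nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(nx, d=ell / nx)
E = np.exp(-k ** 2 * dt)                         # integrating factor for the diffusion term

def dx(f):
    # Spectral d/dx on the periodic ring.
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

def step(f, nonlinear):
    # One explicit step with exact diffusion: f <- IFFT[E*(FFT[f] + dt*FFT[N])].
    return np.real(np.fft.ifft(E * (np.fft.fft(f) + dt * np.fft.fft(nonlinear))))

def forward(p):
    # Integrate dh/dt = h_xx + (h_x)^2/2 + p from h(x,0) = 0, storing h(x,t).
    h = np.zeros((nt + 1, nx))
    for n in range(nt):
        h[n + 1] = step(h[n], 0.5 * dx(h[n]) ** 2 + p[n])
    return h

def backward(h):
    # Integrate dp/dt = -p_xx + (p h_x)_x backward from p(x,1) = Lambda;
    # in reversed time the anti-diffusion becomes a stable diffusion.
    p = np.zeros((nt + 1, nx))
    p[nt] = Lam
    for n in range(nt, 0, -1):
        p[n - 1] = step(p[n], -dx(p[n] * dx(h[n])))
    return p

p = np.full((nt + 1, nx), Lam) * (1 + 0.1 * np.cos(2 * np.pi * x / ell))  # perturbed start
for _ in range(20):
    h = forward(p)
    p = 0.5 * p + 0.5 * backward(h)               # mild relaxation instead of L-BFGS

H_bar = h[-1].mean()
S = 0.5 * np.sum(p[:-1] ** 2) * dt * (ell / nx)
print(H_bar, S, ell * Lam ** 2 / 2)               # H_bar ~ Lambda, S ~ uniform action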
KK2007 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. B 75, 140201(R) (2007).
KK2008 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. B 78, 024206 (2008).
KK2009 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. E 80, 031107 (2009).
MKV B. Meerson, E. Katzav, and A. Vilenkin, Phys. Rev. Lett. 116, 070601 (2016).
KMSparabola A. Kamenev, B. Meerson, and P. V. Sasorov, Phys. Rev. E 94, 032108 (2016).
LDMRS P. Le Doussal, S. N. Majumdar, A. Rosso, and G. Schehr,
Phys. Rev. Lett. 117, 070403 (2016).
Janas2016 M. Janas, A. Kamenev, and B. Meerson, Phys. Rev. E 94, 032133 (2016).
KLD2017 A. Krajenbrink and P. Le Doussal, Phys. Rev. E 96, 020102(R)
(2017).
MeersonSchmidt2017 B. Meerson and J. Schmidt, J. Stat. Mech. (2017) P103207.
SMS2018 N. R. Smith, B. Meerson, and P. V. Sasorov, J. Stat. Mech. (2018) 023202.
SKM2018 N. R. Smith, A. Kamenev, and B. Meerson, Phys. Rev. E 97, 042130 (2018).
SmithMeerson2018 N. R. Smith and B. Meerson, Phys. Rev. E 97, 052110 (2018).
Hartmann2018 A. K. Hartmann, P. Le Doussal, S. N. Majumdar, A. Rosso,
and G. Schehr, Europhys. Lett. 121, 67004 (2018).
MV2018 B. Meerson and A. Vilenkin, Phys. Rev. E 98, 032145 (2018).
Asida2019 T. Asida, E. Livne, and B. Meerson, Phys. Rev. E 99, 042132 (2019).
SMV2019 N. R. Smith, B. Meerson, and A. Vilenkin, J. Stat. Mech. (2019)
053207.
HMS2019 A. K. Hartmann, B. Meerson, and P. Sasorov, Phys. Rev. Res. 1, 032043(R) (2019).
KLD2021 A. Krajenbrink and P. Le Doussal, Phys. Rev. Lett. 127, 064101 (2021).
HMS2021 A. K. Hartmann, B. Meerson, and P. Sasorov, Phys. Rev. E 104, 054125 (2021).
KLD2022 A. Krajenbrink and P. Le Doussal, Phys. Rev. E 105, 054142 (2022).
Lamarre P. Y. G. Lamarre, Y. Lin, L.-C. Tsai,
Probab. Theor. Rel. Fields 185, 885 (2023).
SGG T. Schorlepp, T. Grafke, and R. Grauer, J. Stat. Phys. 190, 50 (2023).
KPZ M. Kardar, G. Parisi, and Y.-C. Zhang, Phys. Rev. Lett. 56, 889
(1986).
shortcut F. D. Cunden, P. Facchi, and P. Vivo, J. Phys. A: Math. Theor. 49, 135202.
(2016).
Whithambook G. B. Whitham, Linear and Nonlinear Waves (Wiley, New York, 2011).
SM18 N. Smith and B. Meerson, Phys. Rev. E 97, 052110 (2018).
Jacobi Wolfram MathWorld, https://mathworld.wolfram.com/JacobiEllipticFunctions.html
Wolf Wolfram Research, Inc., https://functions.wolfram.com/EllipticFunctions/JacobiDN/08/
SS T. Sasamoto and H. Spohn, Phys. Rev. Lett. 104, 230602 (2010).
CDR P. Calabrese, P. Le Doussal, A. Rosso, Europhys. Lett.
90, 20002 (2010).
Dotsenko V. Dotsenko, Europhys. Lett. 90, 20003 (2010).
ACQ G. Amir, I. Corwin, and J. Quastel, Comm. Pur. Appl. Math.
64, 466 (2011).
CLD11 P. Calabrese, and P. Le Doussal, Phys. Rev. Lett. 106, 250603 (2011).
CLD12 P. Le Doussal and P. Calabrese, J. Stat. Mech. (2012) P06001.
IS12 T. Imamura and T. Sasamoto, Phys. Rev. Lett. 108, 190603 (2012).
IS13 T. Imamura and T. Sasamoto, J. Stat. Phys. 150, 908 (2013).
Borodinetal A. Borodin, I. Corwin, P. L. Ferrari, and B. Vető, Math. Phys. Anal. Geom. 18, 20 (2015).
CS A. I. Chernykh and M. G. Stepanov, Phys. Rev. E 64,
026306 (2001).
SGMG T. Schorlepp, T. Grafke, S. May, and R. Grauer, Philos. Trans. Royal Soc. A 380, 20210051 (2022).
Plessix R.-E. Plessix, Geophys. J. Int. 167, 495 (2006).
Armijo L. Armijo, Pacific J. Math. 16, 1 (1966).
LN D. C. Liu and J. Nocedal, Math. Program. 45, 503 (1989).
Hestenes M. R. Hestenes, J. Optim. Theory. Appl. 4, 303 (1969).
AG M. Alqahtani and T. Grafke, J. Phys. A: Math. Theor. 54 175001 (2021).
STGS T. Schorlepp, S. Tong, T. Grafke, and G. Stadler, arXiv:2303.11919 (2023).
|
http://arxiv.org/abs/2307.04686v2 | 20230710164203 | VampNet: Music Generation via Masked Acoustic Token Modeling | [
"Hugo Flores Garcia",
"Prem Seetharaman",
"Rithesh Kumar",
"Bryan Pardo"
] | cs.SD | [
"cs.SD",
"cs.AI",
"eess.AS"
] |
Hugo Flores García^1,2 Prem Seetharaman^1 Rithesh Kumar^1 Bryan Pardo^2
^1 Descript Inc.
^2 Northwestern University
[email protected]
H. Flores García, P. Seetharaman, R. Kumar, and B. Pardo
VampNet: Music Generation via
Masked Acoustic Token Modeling
August 12, 2023
==============================================================
We introduce VampNet, a masked acoustic token modeling approach to music synthesis, compression, inpainting, and variation.
We use a variable masking schedule during training which allows us to sample coherent music from the model by applying a variety of masking approaches (called prompts) during inference. VampNet is non-autoregressive, leveraging a bidirectional transformer architecture that attends to all tokens in a forward pass. With just 36 sampling passes, VampNet can generate coherent high-fidelity musical waveforms. We show that by prompting VampNet in various ways, we can apply it to tasks like music compression, inpainting, outpainting, continuation, and looping with variation (vamping). Appropriately prompted, VampNet is capable of maintaining style, genre, instrumentation, and other high-level aspects of the music. This flexible prompting capability makes VampNet a powerful music co-creation tool. Code and audio samples are available online.
§ INTRODUCTION
In recent years, advances in discrete acoustic token modeling have resulted in significant leaps in autoregressive generation of speech <cit.> and music <cit.>. Meanwhile, approaches that use non-autoregressive parallel iterative decoding have been developed for efficient image synthesis <cit.>. Parallel iterative decoding promises to allow faster inference than autoregressive methods and is more suited to tasks like infill, which require conditioning on both past and future sequence elements.
In this work, we combine parallel iterative decoding with acoustic token modeling, and apply them to music audio synthesis. To the best of our knowledge, ours is the first [While our work was under peer review, Google released SoundStorm <cit.>, which leverages a similar parallel iterative decoding approach to ours.] extension of parallel iterative decoding to neural audio music generation. Our model, called VampNet, can be flexibly applied to a variety of applications via token-based prompting. We show that we can guide VampNet's generation with selectively masked music token sequences, asking it to fill in the blanks. The outputs of this procedure can range from a high-quality audio compression technique to variations on the original input music that match the original input music in terms of style, genre, beat and instrumentation, while varying specifics of timbre and rhythm.
Unlike auto-regressive music models <cit.>, which can only perform music continuations – using some prefix audio as a prompt, and having the model generate music that could plausibly come after it – our approach allows the prompts to be placed anywhere. We explore a variety of prompt designs, including periodic, compression, and musically informed ones (e.g. masking on the beat). We find that our model responds well to prompts to make loops and variations, thus the name VampNet [To vamp is to repeat a short passage of music with variation.]. We make our code open source[<https://github.com/hugofloresgarcia/vampnet>] and highly encourage readers to listen to our audio samples[audio samples: <https://tinyurl.com/bdfj7rdx>].
§ BACKGROUND
Two-stage approaches to generative modeling have gained traction in image <cit.> and audio <cit.> synthesis, largely in part due to their computational efficiency. In the first stage, a discrete vocabulary of “tokens” is learned for the domain of interest. The input is put through an encoder to obtain these tokens, which can be converted back into the input domain via a corresponding decoder. In the second stage, a model is trained to generate tokens, and is optionally given some conditioning (e.g. previous tokens, a text description, a class label) to guide generation.
§.§ Stage 1: Tokenization
In images, visual tokenization has been leveraged for state-of-the-art classification <cit.> and synthesis <cit.>. The most popular approach is to use vector quantization on a latent space. Similar approaches have been explored for audio <cit.>, but until recently such approaches have been restricted to low sampling rates (e.g. 16khz), or have been restricted to speech audio. The “sampling rate” of the latent space (the number of latent vectors required every second to represent audio) is a critical aspect of the tokenization scheme. The lower the sampling rate of the latent space, the easier the next stage (generation) will be to accomplish. Recently, methods based on residual vector quantization <cit.> have been proposed for audio tokenization at high compression rates with good reconstruction quality of high-sample-rate audio.
The primary work we leverage for audio tokenization is the Descript Audio Codec (DAC) <cit.>. With DAC, audio is encoded into a sequence of tokens via a fully convolutional encoder. The output of this encoder is then quantized using a hierarchical sequence of vector-quantizers <cit.>. Each quantizer operates on the residual error of the quantizer before it. Because of this residual vector quantization, DAC is able to reconstruct audio with very high quality, at a high compression ratio. It, along with its predecessors <cit.>, is instrumental in enabling audio language models like AudioLM <cit.>, MusicLM <cit.>, and VALL-E <cit.>. While we later briefly describe our tokenizer, the key contributions of our work are applicable to the output of any audio tokenizer and our specific audio tokenizer is not the focus of this work.
§.§ Stage 2: Generation
Given audio encoded as tokens, the common approach is to use an autoregressive model <cit.> for generation. State-of-the-art (SOTA) audio generation approaches like AudioLM <cit.>, MusicLM <cit.>, and JukeBox <cit.> use this approach, generating each acoustic token in the sequence in a step-by-step fashion using transformer-based <cit.> decoder-only models. Autoregressive sampling is slow in nature due to the high number of steps required at inference time <cit.>. Further, autoregressive models inherently restrict downstream applications, as each generated token is only conditioned on the previous tokens. For an autoregressive model to perform tasks like inpainting (“filling in the middle”), one must re-arrange the data during training <cit.>.
In language, masked modeling has been used extensively as a pre-training procedure for high-quality semantic representations <cit.>. This procedure has also been extended for representation learning in images <cit.> and audio <cit.>. Masked modeling for representation learning generally has a constant mask probability. For example, in BERT <cit.>, tokens are masked 15% of the time during training. It has been shown that this approach is equivalent to a single-step discrete diffusion model <cit.>, that uses masking for its noising procedure. Therefore, we can extend masked modeling to masked generative modeling by varying the probability of masking a token during training. This was done for image generation in MaskGIT <cit.>, and in language <cit.>. Similar to diffusion modeling <cit.>, which seeks to synthesize data starting from random noise through a series of denoising steps, masked generative modeling seeks to synthesize data starting from completely masked data through a series of “unmasking” steps.
Key to the efficiency of MaskGIT and related approaches is a parallel iterative decoding procedure. In parallel iterative decoding, the model predicts every token in the output sequence in a single forward pass. However, after just one forward pass of the model, the output often does not have high quality. The output of the first sampling step is re-masked, with a lower masking probability, and then put through the model again. In this way, masked generative models can efficiently refine their output, resulting in high quality generation.
In unconditional generation tasks, the model is asked to generate a realistic sample from the target data distribution from scratch, without any guidance. This is a difficult problem, as many target data distributions are highly multimodal. Unconditional generative models are susceptible to mode collapse <cit.>, blurry samples, mode averaging, and other issues<cit.>. Therefore, some conditioning is helpful as it provides some signal for the model to resolve the multimodality. Conditioning is also a commonly used method to guide the output of the system towards desired content.
Conditioning can take the form of a class label, a genre tag or lyrics <cit.>, or an associated text description <cit.>. Conditioning can also be applied at every timestep, like the semantic tokens of AudioLM <cit.>, or aligned text or phonemes for text-to-speech generation <cit.>.
In this work, we adopt a masked generative modeling approach with a parallel iterative decoding procedure, inspired by work in vision such as MaskGIT <cit.> and Paella <cit.>, as illustrated in Figure <ref>. We do not apply any conditioning beyond that provided by the unmasked tokens in our encoded audio. As we show later, different approaches to masking, applied at inference time, can be used to steer generation in useful and artistic ways.
In training, tokens are masked randomly throughout the sequence. The model is then asked to predict the value of each of the masked tokens in a single forward pass, but it is conditioned on all of the unmasked tokens, both in the future as well as in the past. We vary the number of tokens that are masked during training, allowing us to generate audio at inference time through a sampling procedure. We now describe our method in more detail.
§ METHOD
We adapt the procedure of Masked Visual Token Modeling, proposed in MaskGIT <cit.>, to audio, accounting for several key differences between the vision and audio domains.
We call our approach Masked Acoustic Token Modeling.
§.§ Masked Acoustic Token Modeling
We first train an audio tokenizer based on the techniques described in DAC <cit.>. Unlike the visual tokens of MaskGIT, our acoustic tokens are hierarchical in nature due to residual vector quantization.
As a first step, the audio signal x is encoded at each time step t as a D-dimensional latent vector Z. We then quantize Z using N vector quantizers. Quantizer 1 produces Ẑ_1, a quantized approximation of Z that has residual error R_1 = Z - Ẑ_1. Thereafter, the residual from each quantizer i is passed to the next quantizer i+1, which produces a quantized approximation of the remaining residual error: R_i ≈Ẑ_i+1. Vector Z is then approximately reconstructed by summing the outputs of the N quantizers: Z ≈∑_i=1^NẐ_i.
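To make the residual quantization step concrete, a minimal sketch is given below; it is not taken from the DAC codebase, and the plain nearest-neighbour search over full-dimensional codebooks is a simplifying assumption (the actual codec may, for example, quantize in a lower-dimensional projected space).

import torch

def residual_quantize(z, codebooks):
    # z: (D,) latent vector at one timestep; codebooks: list of N tensors, each of shape (C, D)
    residual, z_hat, tokens = z, torch.zeros_like(z), []
    for codebook in codebooks:
        idx = torch.cdist(residual.unsqueeze(0), codebook).argmin()   # nearest codebook entry
        tokens.append(int(idx))
        z_hat = z_hat + codebook[idx]          # running sum approximates z
        residual = residual - codebook[idx]    # the next quantizer only sees the remaining error
    return tokens, z_hat                       # N tokens per timestep and the reconstruction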
Since the encoded signal is represented as a quantized vector of N discrete tokens at each timestep, we have N tokens that can be masked or unmasked at each timestep. Rather than attempt to generate all tokens at once, we instead split the N tokens into N_c “coarse” tokens, and N_f “fine” tokens, as in AudioLM. We then train two generative models: one that generates the fine tokens given the coarse tokens as conditioning, and one that generates the coarse tokens given a sequence of coarse tokens. To generate a sample (Figure <ref>), we chain the two models together. First, we apply the coarse model to generate a sequence of coarse tokens. Then, we apply the coarse-to-fine model to generate the fine tokens. We decode the tokens to a 44.1khz waveform using the decoder of our audio tokenizer.
§.§ Training procedure
Let 𝐘∈ℝ^T× N be a matrix representing the output of the encoder for some audio segment. Each element y_t,n in 𝐘 is a token from the nth level codebook at timestep t. Let 𝐘_M be the set of all masked tokens in 𝐘 and 𝐘_U be the set of all unmasked tokens in 𝐘. The model generates a probability distribution over the set of possible codebook values for each token y ∈𝐘_M, given the unmasked tokens and the model parameters θ. The training objective is to maximize the probability of the true tokens. This corresponds to minimizing the negative log likelihood.
ℒ = - ∑_∀ y ∈𝐘_Mlog p(y| 𝐘_U, θ)
To predict the masked tokens, we use a multi-layer bidirectional transformer, which predicts the probabilities of each possible token at every timestep, for every quantizer. If each quantizer has a codebook size of C possible values, and there are N quantizers, then the last layer of the network will be a fully connected layer of shape (E, CN), where E is the dimensionality of the output of the last layer. We then reshape this output into (EN, C), and compute the cross-entropy loss between the ground-truth one-hot token and the predicted token. Because the transformer is bidirectional, it can attend to all tokens in the input sequence to optimize the loss for each token.
For the coarse-to-fine generative model, the input sequence always contains N_c coarse tokens, and the masking operation is restricted to the N_f fine tokens. The last layer of this network only predicts masked fine tokens. Otherwise, the training procedure for both models is identical.
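A minimal sketch of the resulting objective is shown below; the (T, N, C) logits layout and the helper name are our own illustration, not the VampNet implementation.

import torch
import torch.nn.functional as F

def masked_token_loss(logits, targets, mask):
    # logits: (T, N, C) per-timestep, per-codebook scores over the C codebook values
    # targets: (T, N) ground-truth token indices; mask: (T, N), 1 where the token was masked
    C = logits.shape[-1]
    per_token = F.cross_entropy(logits.reshape(-1, C), targets.reshape(-1), reduction="none")
    mask = mask.reshape(-1).float()
    return (per_token * mask).sum() / mask.sum()    # negative log-likelihood of masked tokens only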
§.§ Sampling
We follow the same iterative confidence-based sampling approach used in MaskGIT. More concretely, given Y_M as the set of masked tokens and Y_U as the set of unmasked tokens, we repeat the following steps (a code sketch of the full loop is given after the list):
* Estimate. For each masked token y in Y_M, estimate the conditional probability distribution over its vocabulary of codebook values V.
* Sample. For each masked token, sample from the distribution to generate an associated token estimate ŷ∈ V. We don't use any sampling tricks in this step, sampling from the categorical probability distribution for each token as-is.
* Rank by Confidence. Compute a confidence measure for each of the sampled tokens by taking their prediction log-probabilities and adding temperature-annealed Gumbel noise to them:
confidence(ŷ_t) = log(p(ŷ_t)) + temp · g_t
where ŷ_t is a token estimate at timestep t, g_t is an i.i.d sample drawn from Gumbel(0,1) <cit.>, and temp is a hyperparameter that is linearly annealed to 0 over the number of sampling iterations.
Then, sort the set of sampled token estimates by the confidence computed above. We find that high temperature values (e.g. >6.0) result in higher quality samples.
* Select.
Pick the number of tokens to mask at the next sampling iteration, k, according to the masking schedule [k = γ (t/t_T) D, where t is the current iteration, t_T is the total number of iterations, and D the total number of tokens in the sequence. The scheduling function γ is a cosine schedule.]. Take the k lowest confidence estimates and toss them out, re-masking their tokens. Place the remaining high-confidence token estimates in Y_U, removing their tokens from Y_M.
* Repeat Return to step 1 until the number of iterations has been reached.
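The loop above can be sketched compactly as follows; model is any bidirectional token predictor returning one distribution per token, its call signature is an assumption, and the cosine schedule and the default step/temperature values are illustrative rather than the exact VampNet settings.

import math
import torch

def parallel_iterative_decode(model, tokens, masked, n_steps=24, temp=8.0):
    # tokens: (L,) token ids (L = T*N flattened); masked: (L,) bool, True where still masked
    D = int(masked.sum())                              # number of initially masked tokens
    for t in range(1, n_steps + 1):
        probs = model(tokens, masked).softmax(-1)      # step 1: estimate distributions, (L, C)
        sampled = torch.multinomial(probs, 1).squeeze(-1)            # step 2: sample every token as-is
        logp = probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1).log()
        gumbel = -torch.log(-torch.log(torch.rand_like(logp)))
        conf = logp + temp * (1 - t / n_steps) * gumbel              # step 3: annealed confidence
        tokens = torch.where(masked, sampled, tokens)
        conf[~masked] = float("inf")                   # already-kept tokens are never re-masked
        k = int(math.cos(math.pi / 2 * t / n_steps) * D)             # step 4: cosine masking schedule
        masked = torch.zeros_like(masked)
        if k > 0:
            masked[conf.argsort()[:k]] = True          # re-mask the k least confident estimates
    return tokens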
§.§ Prompting
Interactive music editing can be enabled by incorporating human guidance in the sampling procedure through the conditioning prompt of unmasked tokens. Because our approach isn't conditioned on any signal other than the input audio itself, we find that various types of prompts are useful for obtaining coherent samples, as they lower the amount of multimodality when sampling from the model. Like AudioLM, we can prompt our model with prefix audio of some duration (usually between 1 and 4 seconds), and it will provide a continuation of that audio. Unlike AudioLM, and other auto-regressive approaches, we can also prompt our model with suffix audio, and it will generate audio that leads up into that suffix. We can provide prefix and suffix audio, and the model will generate the remaining audio, such that it is appropriate, given
the specified prefix and suffix.
We can also apply a “periodic” prompt, where all but every Pth timestep are masked.
The lower P is, the more the generated audio will sound like the original, as the model is highly conditioned. For example, if P = 2, the model essentially behaves like an upsampler, imputing the tokens for every other timestep. As P increases, the model shifts from a compression mode to a generative mode, creating variations that match the style of the original.
Another useful style of prompt is the “compression” prompt, where all codebooks other than the most coarse-grained are masked. This gives the model strong conditioning on every timestep, so the model is likely to produce audio that closely matches the original. We can combine this prompt with a periodic prompt with low P for even more extreme compression ratios. Given the bitrate B of the codec, its number of codebooks N, a downsampling rate P for the periodic prompt, and a number of kept codebooks N_k, we can achieve a bitrate of B / P(N - N_k).
Finally, we can design music-specific prompts, which exploit knowledge about the structure of the music. More concretely, we explore beat-driven prompting, where timesteps that fall on or around the beat are left unmasked. The model is left to create music between these beats, resulting in interesting variations on the original music. These prompts can all be combined to create a very useful music creation tool. In concert with a well designed user interface, VampNet shows promise as the basis for a next-generation music editing and creation suite.
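For illustration, the prompts above amount to simple boolean masking patterns over the (timestep, codebook) grid; the helper names and the True-means-masked convention below are ours, not VampNet's API.

import torch

def periodic_mask(T, N, P):
    # keep (unmask) all N codebooks at every P-th timestep
    mask = torch.ones(T, N, dtype=torch.bool)
    mask[::P, :] = False
    return mask

def compression_mask(T, N, n_keep=1):
    # keep only the n_keep coarsest codebooks at every timestep
    mask = torch.ones(T, N, dtype=torch.bool)
    mask[:, :n_keep] = False
    return mask

def beat_mask(T, N, beat_steps, width=4):
    # keep a few timesteps to the right of each detected beat (indices at the token rate)
    mask = torch.ones(T, N, dtype=torch.bool)
    for b in beat_steps:
        mask[b:b + width, :] = False
    return mask

# prompts combine by keeping only what every prompt keeps, i.e. a token stays
# masked if either prompt masks it:
combined = periodic_mask(574, 14, P=16) | compression_mask(574, 14, n_keep=1)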
§ EXPERIMENTS
Our experiments aim to evaluate VampNet's capability to both compress and generate music, given the various prompting strategies described in Section <ref>. For our objective audio quality measures, we use a multiscale mel reconstruction error and the Fréchet Audio Distance (FAD). Mel-reconstruction error is defined as the L1 distance between log-mel spectrograms at various time-scales,
D_F,M = || Ŝ_F,M - S_F,M ||_1
where F is the FFT size of each spectrogram, and M is the number of mel-frequency
bins. We use F ∈ [2048, 512] and M ∈ [150, 80], with a hop size of 1/4 the FFT size. Mel-reconstruction is valuable as a metric for compression quality, but not for generation quality, since it is likely that models produce audio that does not match one to one with the original target audio. For generation quality, we use FAD, which measures the overlap between distributions of real and generated audio. Unlike mel-reconstruction, FAD is geared more towards evaluating if sample quality falls within the data distribution of the real audio, and can be used to evaluate generation quality.
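For reference, the metric can be computed with torchaudio roughly as follows; the sample rate, the log offset and the mean reduction are unstated in the text and are therefore assumptions of this sketch.

import torch
import torchaudio

def multiscale_mel_distance(x_hat, x, sample_rate=44100,
                            fft_sizes=(2048, 512), n_mels=(150, 80)):
    # L1 distance between log-mel spectrograms at two time-scales (mean-reduced here)
    total = 0.0
    for n_fft, m in zip(fft_sizes, n_mels):
        mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_fft=n_fft, hop_length=n_fft // 4, n_mels=m)
        s_hat = torch.log(mel(x_hat) + 1e-5)
        s_ref = torch.log(mel(x) + 1e-5)
        total = total + (s_hat - s_ref).abs().mean()
    return total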
§.§ Dataset
Similar to JukeBox <cit.>, we collect a large dataset of popular music recordings. Our dataset consists of 797k tracks, with a sampling rate of 32 kHz. These tracks are resampled to 44.1 kHz to make them compatible with our tokenizer. Our dataset contains music from
thousands of artists across genres described in Echo Nest's Every Noise at Once [<https://everynoise.com/engenremap.html>].
We use a subset of 2k tracks for validation, and another subset of 2k tracks for testing. We ensure that there is no artist overlap between train, validation, and test tracks.
In addition, we collect a set of music and non-music data (speech, environmental sound), which we used to train our tokenizer, using the datasets described in DAC <cit.>.
All audio is normalized to -24dbFS. We do not use any metadata about these files during training, as our model is trained unconditionally.
§.§ Network Architecture and Hyperparameters
The audio tokenizer model we use takes as input 44.1kHz audio, and compresses it to a bitrate of 8kbps using 14 codebooks, with a downsampling rate of 768x. The latent space therefore is at 57Hz, with 14 tokens to predict at every timestep. We designate 4 of these tokens as the coarse tokens, and the remaining 10 as the fine tokens. Refer to the Descript Audio Codec <cit.> for details on the tokenizer architecture. We train the tokenizer for 250k steps.
The VampNet architecture (for both coarse and coarse-to-fine models) consists of a bidirectional transformer <cit.> with relative attention <cit.> and an embedding dimension of 1280 and 20 attention heads. The coarse model has 20 attention layers, while the coarse-to-fine model has 16.
We train the coarse and coarse-to-fine model for 1M and 500k steps, respectively. We train with the AdamW optimizer <cit.> with β_1 and β_2 set to 0.9 and 0.999, respectively. We use the learning rate scheduler introduced by Vaswani et al <cit.> with a target learning rate of 0.001 and 10k warmup steps. We use a dropout of 0.1, and a batch size of 25, with a GPU memory budget of 72GB.
§.§ Efficiency of VampNet
We first validate that VampNet can generate realistic music audio in a low number of steps. To do this, we run VampNet using one of our prompts (the periodic prompt, with P = 16) on our test set, on 10-second excerpts. We vary the number of sampling steps in [1, 4, 8, 12, 36, 64, 72], and report metrics for each sampling step.
§.§ Effect of prompts
We seek to understand how VampNet responds to different prompts, as discussed in Section <ref>. The prompts range from “compression” prompts, which compress music to a low bitrate, to more creative “generative” prompts. We examine whether compression and generative prompts exist on a continuum, and whether decompression from low bitrates results in generative behavior.
We draw 2000 10-second examples from our evaluation dataset, encode them into token streams with our audio tokenizer, and manipulate the token streams in four ways:
* Compression prompt: N_k codebooks are left unmasked, starting from the coarsest codebook. All other tokens are masked. We set N_k = 1.
* Periodic prompt: every Pth timestep is left unmasked. In an unmasked timestep, tokens from every codebook are unmasked. All other tokens (e.g. tokens in timesteps that do not correspond to the period P) are masked. We set P ∈ [8, 16, 32].
* Prefix and suffix (inpaint) prompts: a segment at the beginning and at the end of the sequence is left unmasked. All other tokens are masked. This prompt is parameterized by a context length in seconds. We set the context to be either 1 second or 2 seconds, which corresponds to 57 or 114 timesteps.
* Beat-driven prompt: we first process the audio waveform with a beat tracker <cit.>. Then, around each detected beat, we unmask timesteps to the right of the beat. We examine a 75ms unmasked section around each beat, which is about 4 timesteps per beat.
After manipulating the input token streams with our prompts, we generate new musical signals from these masked token streams using VampNet, and compute FAD and mel-reconstruction error between the generated signals and the input signals from our music dataset.
We include a noisy token stream baseline, where a portion (as dictated by mask ratio r) of the tokens in the input token stream are replaced with random tokens. We also include as baseline the codec by itself, as well as the coarse-to-fine model.
Finally, we examine how these prompts can be combined - specifically the compression and periodic prompts. By manipulating the hyperparameters of these prompts (C and P), we can shift the model behavior from compression to generation. As more timesteps are masked, the model must generate plausible musical excerpts that connect the unmasked timesteps, that may not match the input music.
§ RESULTS AND DISCUSSION
Results for our experiment varying the number of sampling steps used to generate samples with VampNet are shown in Figure <ref>. We find that VampNet achieves the lowest FAD with 36 sampling steps, although 12 sampling steps achieves comparable performance. In practice, we find that samples taken with 24 steps achieve a fair trade-off between generation quality and compute speed, with 10-second samples taking around 6 seconds to sample on an NVIDIA RTX3090. In contrast, generating 10 seconds of audio with an autoregressive model of the same size as ours, using the same tokenizer, would require 574 steps and take around 1 minute.
Results for our study on the effect of each prompt are shown in Figure <ref>. First, we note that while the noisy token baseline has comparable mel reconstruction to all prompts, it performs very poorly in terms of FAD. This indicates that while our prompting strategies may result in audio that is not a perfect match to the original input audio, it still falls inside the distribution of plausible music.
Of our proposed prompts, we find that beat-driven prompts perform best, achieving the lowest FAD of all prompts. A notable result here is that the periodic prompt with P=16 (35 conditioning timesteps) performs on par with inpainting with 1 second of context (57 conditioning timesteps). Therefore, prompt techniques that spread out the conditioning tokens throughout the sequence (periodic prompts) are able to use fewer conditioning timesteps to generate samples of comparable quality to those generated by sampling techniques that place all of the conditioning tokens at the start and end of the sequences (inpainting).
Qualitatively, we also find that beat-driven prompts can keep a steadier tempo than other prompts, though their outputs tend to resemble the original music more closely than those of periodic prompts. In practice, a mix of beat-driven, periodic, and inpainting prompts can be employed to steer VampNet in creative ways. To illustrate, we highly encourage the reader to listen to the accompanying sound samples [audio samples: <https://tinyurl.com/bdfj7rdx>].
We then combined periodic and compression prompting to show how the model's behavior shifts between reconstruction and generation tasks, as more tokens are masked away.
Results for this experiment are shown in Figure <ref>. At higher bitrates, (600 bps and above), VampNet is able to accurately reconstruct the original music signal, achieving low mel-spectrogram error and FAD values with respect to the evaluation music audio. At bitrates of 200bps and below, VampNet has comparable reconstruction quality to the noisy token baselines, indicating that the sampled VampNet signals no longer resemble the input audio in terms of fine-grained spectral structure. However, the FAD for VampNet samples at low bitrates is much lower than the FAD for noisy baselines. This indicates that even though VampNet isn't able to reconstruct the input music signal at low bitrates, it is still able to generate coherent audio signals with musical structure, that are closer to the distribution of “real music” than our noisy baseline.
§ CONCLUSION
We introduced VampNet, a masked acoustic token modeling approach to music generation. VampNet is bidirectional, and can be prompted a variety of ways using an input audio file. Through different prompting techniques, VampNet can operate in a continuum between music compression and generation, and is an excellent tool for generating variations on a piece of music.
With VampNet, a musician could record a short loop, feed it into VampNet, and have VampNet create musical variations on the recorded idea every time the looped region repeats.
In future work, we hope to investigate the interactive music co-creation potential of VampNet and its prompting techniques, as well as explore the representation learning capabilities of masked acoustic token modeling.
|
http://arxiv.org/abs/2307.05402v1 | 20230711160640 | Complexity results for matching cut problems in graphs without long induced paths | [
"Hoang-Oanh Le",
"Van Bang Le"
] | cs.CC | [
"cs.CC",
"cs.DM",
"math.CO"
] |
Independent Researcher,
Berlin, Germany
[email protected]
Institut für Informatik, Universität Rostock,
Rostock, Germany
[email protected]
Complexity results for matching cut problems in graphs without long induced paths
Hoàng-Oanh Le1 Van Bang Le2
Received / Accepted
=====================================================================================
In a graph, a (perfect) matching cut is an edge cut that is a (perfect) matching. matching cut (mc), respectively, perfect matching cut (pmc), is the problem of deciding whether a given graph has a matching cut, respectively, a perfect matching cut.
The disconnected perfect matching problem (dpm) is to decide if a graph has a perfect matching that contains a matching cut.
Solving an open problem recently posed in [Lucke, Paulusma, Ries (ISAAC 2022) & Feghali, Lucke, Paulusma, Ries (arXiv:2212.12317)], we show that dpm is NP-complete in graphs without an induced 14-vertex path P_14. Our reduction also works simultaneously for mc and pmc, improving the previous hardness results of mc on P_19-free graphs and of pmc on P_23-free graphs to P_14-free graphs for both problems.
Actually, we prove a slightly stronger result: within P_14-free graphs, it is hard to distinguish between
(i) those without matching cuts and those in which every matching cut is a perfect matching cut;
(ii) those without perfect matching cuts and those in which every matching cut is a perfect matching cut;
(iii) those without disconnected perfect matchings and those in which every matching cut is a perfect matching cut.
Moreover, assuming the Exponential Time Hypothesis, none of these problems can be solved in time 2^o(n) for n-vertex P_14-free input graphs.
As a corollary from (i), computing a matching cut with a maximum number of edges is hard, even when restricted to P_14-free graphs. This answers a question asked in [Lucke, Paulusma & Ries (arXiv:2207.07095)].
We also consider the problems in graphs without long induced cycles. It is known that mc is polynomially solvable in graphs without induced cycles of length at least 5 [Moshi (JGT 1989)].
We point out that the same holds for dpm.
§ INTRODUCTION AND RESULTS
In a graph G=(V,E), a cut is a partition V=X∪ Y of the vertex set into disjoint, non-empty sets X and Y. The set of all edges in G having an endvertex in X and the other endvertex in Y, written E(X,Y), is called the edge cut of the cut (X,Y). A matching cut is an edge cut that is a (possibly empty) matching.
Another way to define matching cuts is as follows; see <cit.>: a cut (X,Y) is a matching cut if and only if each vertex in X has at most one neighbor in Y and each vertex in Y has at most one neighbor in X.
The classical NP-complete problem matching cut (mc) <cit.> asks if a given graph admits a matching cut.
An interesting special case, where the edge cut E(X,Y) is a perfect matching of G,
was considered in <cit.>. Such a matching cut is called a perfect matching cut, and the perfect matching cut (pmc) problem asks whether a given graph admits a perfect matching cut. It was shown in <cit.> that this special case of mc remains NP-complete.
A notion related to matching cut is disconnected perfect matching which has been considered recently in <cit.>: a disconnected perfect matching is a perfect matching that contains a matching cut. Observe that any perfect matching cut is a disconnected perfect matching but not the converse. Fig. <ref> provides some small examples for matching cuts, perfect matching cuts and disconnected perfect matchings.
The problem related to mc and pmc, disconnected perfect matching (dpm), asks if a given graph has a disconnected perfect matching; equivalently, if a given graph has a matching cut that is extendable to a perfect matching. It was shown in <cit.> that dpm is NP-complete. All three problems have received much attention lately; see, e.g., <cit.> for recent results.
In this paper, we focus on the complexity of these three problems restricted to graphs without long induced paths and cycles.
The current best known hardness results for mc and pmc in graphs without long induced paths are:
mc remains NP-complete in {4P_5,P_19}-free graphs.[Meanwhile the result has been improved to {3P_5,P_15}-free graphs <cit.>.]
pmc remains NP-complete in {4P_7,P_23}-free graphs.[Meanwhile the result has been improved to {3P_7,P_19}-free graphs <cit.>.]
Prior to the present paper, no similar hardness result for dpm was known.
Indeed, it was asked in <cit.> whether there is an integer t such that dpm is NP-complete in P_t-free graphs.
Polynomial-time algorithms exist for and in P_6-free graphs <cit.> and for in P_5-free graphs <cit.>.
For graphs without long induced cycles (including chordal graphs and chordal bipartite graphs),
the only result we are aware of is that mc is polynomially solvable:
There is a polynomial-time algorithm solving mc in graphs without induced cycles of length five and more.
Previously, no similar polynomial-time results for pmc and dpm in long-hole-free graphs were known.
Our contributions.
We prove that dpm is NP-complete in graphs without induced path P_14, solving the open problem posed in <cit.>. For mc and pmc we improve the hardness results in Theorems <ref> and <ref> in graphs without induced path P_19, respectively, P_23, to graphs without induced path P_14. It is remarkable that all these hardness results for the three problems are obtained simultaneously by a single reduction, and they can be stated in more detail as follows.
mc, pmc and dpm are NP-complete in {3P_6,2P_7,P_14}-free graphs.
Moreover, under the ETH, no algorithm with runtime 2^o(n) can solve any of these problems for n-vertex {3P_6,2P_7,P_14}-free input graphs.
Actually, we prove the following slightly stronger result: within {3P_6,2P_7,P_14}-free graphs, it is hard to distinguish between those without matching cuts (respectively perfect matching cuts, disconnected perfect matchings) and those in which every matching cut is a perfect matching cut. Moreover, under the ETH, this task cannot be solved in subexponential time in the vertex number of the input graph.
An interesting problem interposed between mc and pmc, called maximum matching cut (maxmc), has recently been proposed in <cit.>. Here, given a graph G, we want to compute a matching cut of G (if any) with a maximum number of edges. Formally, maxmc in its decision version is as follows.
maximum matching cut (maxmc)
Instance: A graph G and an integer k.
Question: Does G have a matching cut with k or more edges ?
It has been asked in <cit.> what the complexity of maxmc on P_t-free graphs is. Our next result answers this question.[Meanwhile the complexity of maxmc in H-free graphs has been completely determined <cit.>.]
maxmc is NP-complete in {3P_6,2P_7,P_14}-free graphs.
Moreover, under the ETH, no algorithm with runtime 2^o(n) can solve maxmc for n-vertex {3P_6,2P_7,P_14}-free input graphs.
On the positive side, we prove the following.
There is a polynomial-time algorithm solving dpm in graphs without induced cycles of length five and more.
The paper is organized as follows. We recall some notion and notations in Section <ref> which will be used. Then, we prove a slightly stronger result than Theorem <ref> in Section <ref> which then implies Theorem <ref>. The proof of Theorem <ref> will be given in Section <ref>. Section <ref> concludes the paper.
§ PRELIMINARIES
For a set H of graphs, H-free graphs are those in which no induced subgraph is isomorphic to a graph in H.
We denote by P_t the t-vertex path with t-1 edges and by C_t the t-vertex cycle with t edges. C_3 is also called a triangle, and a hole is a C_t for some t≥ 4; C_t with t≥ 5 are long holes.
The union G+H of two vertex-disjoint graphs G and H is the graph with vertex set V(G)∪ V(H) and edge set E(G)∪ E(H); we write pG for the union of p copies of G.
For a subset S ⊆ V(G), let G[S] denote the subgraph of G induced by S; G-S stands for G[V(G)∖ S].
By G contains an H we mean G contains H as an induced subgraph.
Given a matching cut M=(X,Y) of a graph G, a vertex set S⊆ V(G) is monochromatic if S belongs to the same part of M, i.e., S⊆ X or else S⊆ Y. Notice that every clique different from the P_2 is monochromatic with respect to any matching cut.
Algorithmic lower bounds in this paper are conditional, based on the Exponential Time Hypothesis (ETH) <cit.>. The ETH asserts that no algorithm can solve SAT in subexponential time 2^o(n) for n-variable 3-cnf formulas. As shown by the Sparsification Lemma in <cit.>, the hard cases of SAT already consist of sparse formulas with m=O(n) clauses. Hence, the ETH implies that SAT cannot be solved in time 2^o(n+m).
Recall that an instance for IN3SAT is a 3-cnf formula ϕ=C_1 C_2⋯ C_m over n variables, in which each clause C_j consists of three distinct literals. The problem asks whether there is a truth assignment of the variables such that every clause in ϕ has exactly one true literal. We call such an assignment a 1-in-3 assignment.
There is a polynomial reduction from SAT to IN3SAT (<cit.>), which transforms an instance for SAT with n variables and m clauses to an equivalent instance for IN3SAT with n+4m variables and 3m clauses. Thus, assuming ETH, IN3SAT cannot be solved in time 2^o(n+m) on inputs with n variables and m clauses.
We will need a restriction of IN3SAT, 1IN3SAT, in which each variable occurs positively.
There is a well-known reduction from IN3SAT to 1IN3SAT, which transforms an instance for IN3SAT to an equivalent instance for 1IN3SAT, linear in the number of variables and clauses. Hence, we obtain: assuming ETH,
1IN3SAT cannot be solved in time 2^o(n+m) for inputs with n variables and m clauses.
§ PROOF OF THEOREM <REF> AND THEOREM <REF>
Recall that a perfect matching cut is in particular a matching cut, as well as a disconnected perfect matching.
This observation leads to the following promise versions of mc, pmc and dpm. (We refer to <cit.> for background on promise problems.)
promise-pmc mc
Instance: A graph G that either has no matching cut, or every
matching cut is a perfect matching cut.
Question: Does G have a matching cut ?
promise-pmc pmc
Instance: A graph G that either has no perfect matching cut, or every
matching cut is a perfect matching cut.
Question: Does G have a perfect matching cut ?
promise-pmc dpm
Instance: A graph G that either has no disconnected perfect matching, or
every matching cut is a perfect matching cut.
Question: Does G have a disconnected perfect matching ?
In all the promise versions above, we are allowed not to consider certain input graphs.
In promise-pmc mc, promise-pmc pmc and promise-pmc dpm, we are allowed to ignore graphs having a matching cut that is not a perfect matching cut, for which mc must answer yes, and pmc and dpm may answer yes or no.
We slightly improve Theorem <ref> by showing the following result.
promise-pmc mc, promise-pmc pmc and promise-pmc dpm are NP-complete, even when restricted to {3P_6,2P_7,P_14}-free graphs.
Moreover, under the ETH, no algorithm with runtime 2^o(n) can solve any of these problems for n-vertex {3P_6,2P_7,P_14}-free input graphs.
Clearly, Theorem <ref> implies Theorem <ref>.
Theorem <ref> shows in particular that distinguishing between graphs without matching cuts and graphs in which every matching cut is a perfect matching cut is hard, and not only between those without matching cuts and those with matching cuts, which is implied by the NP-completeness of mc. Similar implications of Theorem <ref> can be derived for pmc and dpm.
Also, Theorem <ref> implies Theorem <ref>.
Indeed, if mc or pmc is NP-hard in a graph class, then maxmc is NP-hard in the same class as well.
§.§ The reduction
We give a polynomial-time reduction from 1IN3SAT to promise-pmc pmc (and to promise-pmc mc, promise-pmc dpm at the same time).
Let ϕ be a 3-cnf formula with m clauses C_j, 1≤ j≤ m, and n variables x_i, 1≤ i≤ n, in which each clause C_j consists of three distinct variables.
We will construct a {3P_6,2P_7,P_14}-free graph G such that G has a perfect matching cut if and only if ϕ admits an 1-in-3 assignment. Moreover, every matching cut of G, if any, is a perfect matching cut.
For each clause C_j consisting of three variables c_j1, c_j2 and c_j3, let
G(C_j) be the graph depicted in Fig. <ref>. We call c_j and c_j' the clause vertices, and c_j1, c_j2 and c_j3 the variable vertices.
[Figure: The gadget G(C_j). Its 14 vertices are c_j, c_j1, c_j2, c_j3, a_j1, a_j2, a_j3, b_j1, b_j2, b_j3, c_j1', c_j2', c_j3' and c_j'; its edges are c_jc_jk, c_jka_jk and a_jkb_jk for 1≤ k≤ 3, the triangle a_j1a_j2a_j3, the 6-cycle b_j1, c_j1', b_j2, c_j3', b_j3, c_j2', and the edges c_j'c_jk' for 1≤ k≤ 3.]
Then, the graph G is obtained from all G(C_j) by adding
* all possible edges between variable vertices c_jk and c_j'k' of the same variable. Thus, for each variable x,
Q(x)= {c_jk| 1≤ j≤ m, 1≤ k≤ 3, x occurs
in clause C_j as c_jk}
is a clique in G,
* all possible edges between the 2m clause vertices c_j and c_j'. Thus,
F= {c_j| 1≤ j≤ m}∪{c_j'| 1≤ j≤ m}
is a clique in G,
* all possible edges between the 3m vertices a_jk. Thus,
T={a_jk| 1≤ j≤ m, 1≤ k≤ 3}
is a clique in G.
The description of G is complete. As an example, the graph G from the formula ϕ with three clauses C_1={x,y,z}, C_2={u,z,y} and C_3={z,v,w} is depicted in Fig. <ref>.
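For readers who wish to experiment with the reduction, the construction above is easy to reproduce, e.g. with networkx; the vertex labels below are ours, and the helper also returns the cliques F, T and the variable cliques Q so that they can be inspected.

from itertools import combinations
import networkx as nx

def build_G(clauses):
    # clauses: list of 3-element lists of variable names, e.g. [["x", "y", "z"], ...]
    G, F, T, Q = nx.Graph(), [], [], {}
    for j, clause in enumerate(clauses, start=1):
        c, cp = f"c{j}", f"c{j}'"
        a = [f"a{j}{k}" for k in (1, 2, 3)]
        b = [f"b{j}{k}" for k in (1, 2, 3)]
        v = [f"c{j}{k}" for k in (1, 2, 3)]       # variable vertices c_jk
        vp = [f"c{j}{k}'" for k in (1, 2, 3)]
        F += [c, cp]
        T += a
        for k in range(3):
            G.add_edges_from([(c, v[k]), (v[k], a[k]), (a[k], b[k]), (cp, vp[k])])
            Q.setdefault(clause[k], []).append(v[k])
        G.add_edges_from(combinations(a, 2))       # triangle a_j1 a_j2 a_j3
        nx.add_cycle(G, [b[0], vp[0], b[1], vp[2], b[2], vp[1]])   # clause 6-cycle D_j
    for clique in [F, T] + list(Q.values()):       # the cliques F, T and Q(x)
        G.add_edges_from(combinations(clique, 2))
    return G, F, T, Q

# the example from the text: C_1={x,y,z}, C_2={u,z,y}, C_3={z,v,w}
G, F, T, Q = build_G([["x", "y", "z"], ["u", "z", "y"], ["z", "v", "w"]])
assert G.number_of_nodes() == 14 * 3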
Notice that no edge exists between the two cliques F and T.
Notice also that G-F-T has exactly m+n components:
* For each 1≤ j≤ m, the 6-cycle D_j: b_j1, c_j1', b_j2, c_j3', b_j3, c_j2'
is a component of G-F-T, call it the clause 6-cycle (of clause C_j),
* For each variable x, the clique Q(x)
is a component of G-F-T, call it the variable clique (of variable x).
G is {3P_6,2P_7,P_14}-free.
First, observe that each component of G-F-T is a clause 6-cycle D_j or a variable clique Q(x). Hence,
G-F-T is P_6-free.
Therefore, every induced P_6 in G must contain a vertex from the clique F or from the clique T. This shows that G is 3P_6-free.
Observe next that, for each j, c_j'∈ F is the cut-vertex in G-T separating the clause 6-cycle D_j and F, and N(c_j')∩ D_j ={c_j1', c_j2', c_j3'}.
Observe also that, for each x, (G-T)[Q(x)∪ F] is a co-bipartite graph, the complement of a bipartite graph. Hence, it can be verified immediately that
G-T is P_7-free.
Fact (<ref>) implies that every induced P_7 in G must contain a vertex from the clique T. This shows that G is 2P_7-free.
We now are ready to argue that G is P_14-free. Suppose not and let P: v_1,v_2,…, v_14 be an induced P_14 in G, with edges v_iv_i+1, 1≤ i<14. For i< j, write P[v_i,v_j] for the subpath of P between (and including) v_i and v_j.
Then, by (<ref>), each of P[v_1,v_7] and P[v_8,v_14] contains a vertex from the clique T. Since P has no chords, P[v_1,v_7] has only the vertex v_7 in T and P[v_8,v_14] has only the vertex v_8 in T. By (<ref>), therefore, both P[v_1,v_6] and P[v_9,v_14] contain some vertex in the clique F, and thus P has a chord.
This contradiction shows that G is P_14-free, as claimed.
We remark that there are many induced P_13 in G; we briefly discuss the limit of our construction in the appendix.
For any matching cut M=(X,Y) of G,
(i) F and T are contained in different parts of M;
(ii) if F⊆ X, then |{c_j1, c_j2, c_j3}∩ Y|=1, and if F⊆ Y, then |{c_j1, c_j2, c_j3}∩ X|=1;
(iii) for any variable x, Q(x) is monochromatic;
(iv) if F⊆ X, then |{b_j1, b_j2, b_j3}∩ Y|=2 and |{c_j1', c_j2', c_j3'}∩ Y|=1, and if F⊆ Y, then |{b_j1, b_j2, b_j3}∩ X|=2 and |{c_j1', c_j2', c_j3'}∩ X|=1.
Notice that F and T are cliques with at least three vertices, hence F and T are monochromatic.
(i): Suppose not, and let F and T both be contained in X, say. Then all variable vertices c_jk, 1≤ j≤ m, 1≤ k≤ 3, also belong to X because each of them has two neighbors in F∪ T⊆ X.
Now, if all b_jk are in X, then also all c_jk' are in X because in this case each of them has two neighbors in X, and thus X=V(G).
Thus some b_jk is in Y, and so are its two neighbors in {c_j1',c_j2',c_j3'}. But then c_j', which is in X, has two neighbors in Y. This contradiction shows that F and T must belong to different parts of M, hence (i).
(ii): By (i), let F⊆ X and T⊆ Y, say. (The case F⊆ Y is symmetric.) Then, for any j, at most one of c_j1, c_j2 and c_j3 can be outside X. Assume that, for some j, all c_j1, c_j2, c_j3 are in X. The assumption implies that all b_j1, b_j2, b_j3 belong to Y, and then all c_j1', c_j2', c_j3' belong to Y, too. But then c_j', which is in X, has three neighbors in Y. This contradiction shows (ii).
(iii): Suppose that two variable vertices c_jk and c_j'k' in some clique Q(x) are in different parts of M. Then, as c_jk and c_j'k' have neighbor c_j and c_j', respectively, in the monochromatic clique F, c_jk has two neighbors in the part of c_j'k' or c_j'k' has two neighbors in the part of c_jk. This contradiction shows (iii).
(iv): This fact can be derived from (i) and (ii).
Every matching cut of G, if any, is a perfect matching cut.
Let M=(X,Y) be a matching cut of G. By Lemma <ref> (i), let F⊆ X and T⊆ Y, say. We argue that every vertex in X has a neighbor (hence exactly one) in Y. Indeed, for each j,
* c_j∈ F⊆ X has a neighbor c_jk∈ Y (by Lemma <ref> (ii)),
* c_j'∈ F⊆ X has a neighbor c_jk'∈ Y (by Lemma <ref> (iv)),
* each c_jk∈ X has a neighbor a_jk∈ T⊆ Y (by construction of G),
* each b_jk∈ X has a neighbor a_jk∈ T⊆ Y (by construction of G),
* each c_jk'∈ X has a neighbor in {b_j1,b_j2,b_j3}∩ Y (by Lemma <ref> (iv)).
Similarly, it can be seen that every vertex in Y has a neighbor in X.
If ϕ has an 1-in-3 assignment, then G has a perfect matching cut.
Partition V(G) into disjoint subsets X and Y as follows. (Fig. <ref> shows the partition for the example graph in Fig. <ref> given the assignment y=v= True, x=z = u=w= False.)
First,
* put F into X, and for all variables x which are assigned with False, put Q(x) into X;
* for each 1≤ j≤ m, let c_jk with k=k(j)∈{1,2,3} be the variable vertex, for which the variable x of c_jk is assigned with True. Then put b_jk and its two neighbors in {c_j1', c_j2', c_j3'} into X.
Let Y=V(G)∖ X. Then, it is not difficult to verify that M=(X,Y) is a perfect matching cut of G.
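Continuing the networkx sketch from the construction of G above (build_G), the partition described in this proof can be built and checked mechanically; the 1-in-3 assignment is passed as the set of variables that are set to True.

def cut_from_assignment(G, F, Q, clauses, true_vars):
    X = set(F)
    for x, vertices in Q.items():
        if x not in true_vars:
            X.update(vertices)                       # Q(x) for false variables goes to X
    for j, clause in enumerate(clauses, start=1):
        k = clause.index(next(x for x in clause if x in true_vars)) + 1
        bjk = f"b{j}{k}"
        primed = {f"c{j}{i}'" for i in (1, 2, 3)}
        X.add(bjk)
        X.update(set(G[bjk]) & primed)               # b_jk and its two primed neighbours
    return X

def is_perfect_matching_cut(G, X):
    # every vertex must have exactly one neighbour on the other side of the cut
    return all(sum((u in X) != (v in X) for u in G[v]) == 1 for v in G)

X = cut_from_assignment(G, F, Q, [["x", "y", "z"], ["u", "z", "y"], ["z", "v", "w"]],
                        true_vars={"y", "v"})
assert is_perfect_matching_cut(G, X)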
We now are ready to prove Theorem <ref>: First note that by Lemmas <ref> and <ref>, G is {3P_6,2P_7,P_14}-free and every matching cut of G (if any) is a perfect matching cut. In particular, every matching cut of G is extendable to a perfect matching.
Now, suppose ϕ has an 1-in-3 assignment. Then, by Lemma <ref>, G has a perfect matching cut. In particular, G has a disconnected perfect matching and, actually, a matching cut.
Conversely, let G have a matching cut M=(X,Y), possibly a perfect matching cut or one that is contained in a perfect matching of G. Then, by Lemma <ref> (i), we may assume that F⊆ X, and set variable x to True if the corresponding variable clique Q(x) is contained in Y and False if Q(x) is contained in X. By Lemma <ref> (iii), this assignment is well defined. Moreover, it is a 1-in-3 assignment for ϕ: consider a clause C_j={x,y,z} with c_j1=x, c_j2=y and c_j3=z. By Lemma <ref> (ii) and (iii), exactly one of Q(x), Q(y) and Q(z) is contained in Y, hence exactly one of x,y and z is assigned True.
Finally, note that G has N=14m vertices and recall that, assuming ETH, 1IN3SAT cannot be solved in 2^o(m) time. Thus, the ETH implies that no algorithm with runtime 2^o(N) exists for promise-pmc mc, promise-pmc pmc and promise-pmc dpm, even when restricted to N-vertex {3P_6,2P_7,P_14}-free graphs.
The proof of Theorem <ref> is complete.
§ PROOF OF THEOREM <REF>
Recall Theorem <ref>:
mc is polynomially solvable for long-hole-free graphs (also called quadrangulated graphs). In this section, we point out that dpm is polynomially solvable for long-hole-free graphs, too, by following a known approach <cit.>.
Let G=(V,E) be a connected graph and let A, B⊂ V be two disjoint, non-empty vertex sets such that each vertex in A is adjacent to exactly one vertex of B and each vertex in B is adjacent to exactly one vertex of A. We say a matching cut of G is an A,B-matching cut (or a matching cut separating A, B) if A is contained in one side and B is contained in the other side of the matching cut. Observe that G has a matching cut if and only if G has an {a},{b}-matching cut for some edge ab, and G has a disconnected perfect matching if and only if G has a perfect matching containing an {a},{b}-matching cut for some edge ab.
For each edge ab of a long-hole-free graph G, we will be able to decide if G has a disconnected perfect matching containing a matching cut separating A={a} and B={b}.
This is done by applying known propagation rules (<cit.>), which are given below.
Initially, set X:=A, Y:=B and write F=V(G)∖ (X∪ Y) for the set of free vertices. The sets A,B,X and Y will be extended, if possible, by adding vertices from F according to the following rules. The first three rules will detect certain vertices that ensure that G cannot have an A,B-matching cut.
(R1) Let v∈ F be adjacent to a vertex in A. If v is
* adjacent to a vertex in B, or
* adjacent to (at least) two vertices in Y∖ B,
then G has no A,B-matching cut.
(R2) Let v∈ F be adjacent to a vertex in B. If v is
* adjacent to a vertex in A, or
* adjacent to (at least) two vertices in X∖ A,
then G has no A,B-matching cut.
(R3) If v∈ F is adjacent to (at least) two vertices in X∖ A and to (at least) two vertices in Y∖ B, then G has no A,B-matching cut.
The correctness of (R1), (R2) and (R3) is quite obvious. We assume that, before each application of the rules (R4) and (R5) below, none of (R1), (R2) and (R3) is applicable.
(R4) Let v∈ F be adjacent to a vertex in A or to (at least) two vertices in X∖ A. Then X:=X∪{v}, F:=F∖{v}. If, moreover, v has a unique neighbor w∈ Y∖ B then A:=A∪{v}, B:=B∪{w}.
(R5) Let v∈ F be adjacent to a vertex in B or to (at least) two vertices in Y∖ B. Then Y:=Y∪{v}, F:=F∖{v}. If, moreover, v has a unique neighbor w∈ X∖ A then B:=B∪{v}, A:=A∪{w}.
We refer to <cit.> for the correctness of rules (R4) and (R5), and for the following facts.
The total runtime for applying (R1) – (R5) until none of the rules is applicable is bounded by O(nm).
Suppose none of (R1) – (R5) is applicable. Then
* (X,Y) is an A,B-matching cut of G[X∪ Y], and any A,B-matching cut of G must contain X in one side and Y in the other side;
* for any vertex v∈ F,
N(v)∩ A=∅, N(v)∩ B=∅ and |N(v)∩ X|≤ 1, |N(v)∩ Y|≤ 1.
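For illustration, a direct (and deliberately naive, hence slower than the O(nm) bound of Fact <ref>) transcription of the rules (R1)–(R5) into Python reads as follows; it returns None as soon as one of (R1)–(R3) certifies that no A,B-matching cut exists.

def propagate(G, a, b):
    # grow A, B, X, Y from A={a}, B={b} by the rules (R1)-(R5); G is a networkx graph
    A, B, X, Y = {a}, {b}, {a}, {b}
    changed = True
    while changed:
        changed = False
        free = [v for v in G if v not in X and v not in Y]
        for v in free:                             # (R1), (R2), (R3)
            nA, nB = len(set(G[v]) & A), len(set(G[v]) & B)
            nX, nY = len(set(G[v]) & (X - A)), len(set(G[v]) & (Y - B))
            if (nA and (nB or nY >= 2)) or (nB and nX >= 2) or (nX >= 2 and nY >= 2):
                return None                        # no A,B-matching cut exists
        for v in free:
            nA, nB = len(set(G[v]) & A), len(set(G[v]) & B)
            nX, nY = len(set(G[v]) & (X - A)), len(set(G[v]) & (Y - B))
            if nA or nX >= 2:                      # (R4)
                X.add(v)
                w = set(G[v]) & (Y - B)
                if len(w) == 1:
                    A.add(v); B.update(w)
                changed = True
                break
            if nB or nY >= 2:                      # (R5)
                Y.add(v)
                w = set(G[v]) & (X - A)
                if len(w) == 1:
                    B.add(v); A.update(w)
                changed = True
                break
    return A, B, X, Y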
We now are ready to prove Theorem <ref>:
Let G be a connected, long-hole-free graph, and let ab be an edge of G. Set A={a} and B={b}, and assume that none of (R1) – (R5) is applicable. Then, denoting N(S) the set of vertices outside S adjacent to some vertex in S,
for any connected component S of G[F], |N(S)∩ X|=0 or |N(S)∩ Y|=0.
For, otherwise choose two vertices s,s'∈ S with a neighbor x∈ N(s)∩ X and a neighbor y∈ N(s')∩ Y such that the distance between s and s' in S is as small as possible. Then s, s', x and y and a shortest s,s'-path in S, a chordless x,y-path in G[X∪ Y] together would induce a long hole in G. (Observe that, by the definition of X and Y, G[X∪ Y] is connected.)
Partition F into disjoint subsets F_X and F_Y as follows:
F_X = ⋃{S : S is a connected component of G[F] with N(S)∩ X≠∅},
F_Y = ⋃{T : T is a connected component of G[F] with N(T)∩ Y≠∅}.
Then, by the facts above and recall that G is connected,
F=F_X∪ F_Y and F_X∩ F_Y=∅.
Thus,
(X∪ F_X, Y∪ F_Y) is an A,B- matching cut of G,
and it follows, that
G has a disconnected perfect matching containing an A,B-matching
cut if and only if G-A-B has a perfect matching.
Therefore, with Fact <ref>, in time O(nm) we can decide whether G has a matching cut containing a given edge. Moreover, as a maximum matching can be computed in O(√(n)m) time <cit.>, we can decide in time O(n√(n)m^2) whether G has a disconnected perfect matching containing an {a},{b}-matching cut for a given edge ab. Since there are at most m edges to check, Theorem <ref> follows.
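Putting the pieces together, the decision procedure behind this proof can be sketched as follows, reusing propagate from above; nx.max_weight_matching with maxcardinality=True serves as a stand-in for a dedicated O(√(n)m) matching routine, and the long-hole-free assumption is what allows us to skip an explicit check that the produced cut is indeed a matching cut.

import networkx as nx

def has_disconnected_perfect_matching(G):
    # G is assumed to be connected and long-hole-free
    if G.number_of_nodes() % 2:
        return False                          # no perfect matching at all
    for a, b in G.edges():
        result = propagate(G, a, b)
        if result is None:
            continue                          # no {a},{b}-matching cut
        A, B, X, Y = result
        # the matching-cut edges are exactly those between A and B; a disconnected
        # perfect matching exists iff the rest of the graph has a perfect matching
        H = G.copy()
        H.remove_nodes_from(A | B)
        matching = nx.max_weight_matching(H, maxcardinality=True)
        if 2 * len(matching) == H.number_of_nodes():
            return True
    return False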
§ CONCLUSION
We have shown that all three problems mc, pmc and dpm are NP-complete in P_14-free graphs. The hardness result for dpm solves an open problem posed in <cit.>.
For mc and pmc, the hardness result improves the previously known one in P_19-free graphs, respectively, in P_23-free graphs, to P_14-free graphs. An obvious question is whether one of these problems remains NP-complete in P_t-free graphs for some t<14.
We also pointed out that, like mc <cit.>, dpm can be solved in polynomial time when restricted to long-hole-free graphs.
We leave open the complexity of pmc restricted to long-hole-free graphs.
More generally, the chordality of a graph G is the length of a longest induced cycle in G. Chordal graphs and long-hole-free graphs (including weakly chordal and chordal bipartite graphs) have chordality 3 and 4, respectively.
Notice that P_t-free graphs have chordality bounded by t, hence Theorem <ref> implies that mc, pmc and dpm are NP-complete when restricted to graphs of chordality ≤14.
We remark, however, that the graph constructed in the proof of Theorem <ref> has chordality 8, and thus mc, pmc and dpm are NP-complete when restricted to graphs of chordality ≤8.
Does there exist any class of graphs of chordality <8 in which mc, pmc or dpm is NP-complete?
Acknowledgment We thank the anonymous reviewers of WG 2023 for their very careful reading.
In particular, we thank all three reviewers for pointing out a small mistake in the earlier proof of Theorem <ref>.
§ LIMITS OF OUR REDUCTION IN THE PROOF OF THEOREM <REF>
As remarked, the graph G constructed from an instance of 1IN3SAT contains many induced paths P_13. For example, refer to Fig. <ref>; see also Fig. <ref>–<ref>:
* b_11, c_11', b_12, c_13', b_13, a_13, a_22, c_22=z, c_31=z, c_3, c_1, c_12=y, c_23=y;
* b_11, c_11', b_12, c_13', b_13, a_13, a_21, b_21, c_21', c_2', c_3', c_33', b_33;
* b_11, c_11', b_12, c_13', b_13, a_13, a_21, b_21, c_21', c_2', c_2, c_23=y, c_12=y.
It can be seen that all P_13 in G contain a P_5 from a 6-cycle D_j: b_j1, c_j1', b_j2, c_j3', b_j3, c_j2'. We now are going to describe how the gadget G(C_j) used in the construction of G depicted in Fig. <ref> was found. This could be useful when one is trying to improve the construction with shorter induced paths.
A general idea in constructing a graph without long induced paths from a given cnf-formula is to ensure that long induced paths
must go through some, say at most three, cliques. Assuming we want to reduce 1IN3SAT (or 3SAT) to pmc, the following observation gives a hint on how to get such a clique: Let G be a graph in which the seven vertices c, c_k, a_k, 1≤ k≤ 3, induce a tree with leaves a_1,a_2,a_3, degree-2 vertices c_1,c_2,c_3 and the degree-3 vertex c. If G has a perfect matching cut, then a_1, a_2, a_3 must belong to the same part of the cut. Therefore, we can make {a_1,a_2,a_3} adjacent to a clique and the resulting graph still has a perfect matching cut.
Now,
a gadget G(H;v) may be obtained from a suitable graph H with v∈ V(H) as follows. Let H be a graph having a vertex v of degree 3. Let b_1, b_2, b_3 be the neighbors of v in H. Let G(H;v) be the graph obtained from H-v by adding 7 new vertices a_1,a_2,a_3, c_1,c_2,c_3 and c, and edges cc_k, c_ka_k, a_kb_k, 1≤ k≤ 3, and a_1a_2, a_1a_3 and a_2a_3. (Thus, contracting the triangle a_1a_2a_3 in G(H;v)∖{c,c_1,c_2,c_3} we obtain the graph H.)
Assume that, for any neighbor w of v in H, H has a perfect matching cut (X,Y) such that v∈ X and w∈ Y. Then, for any neighbor d of c in G(H;v), the graph G(H;v) has a perfect matching cut (X',Y') such that c∈ X' and d∈ Y'.
Examples of graphs H in Observation <ref> include the cube, the Petersen graph and the 10-vertex Heggernes-Telle graph in <cit.>. Our gadget G(C_j) depicted in Fig. <ref> is obtained by taking the cube. Taking the Petersen graph or the Heggernes-Telle graph would produce an induced P_t for some t≥ 15.
If there exists another graph H better than the cube, then our construction will yield a P_t-free graph for some 10≤ t≤ 13.
|
http://arxiv.org/abs/2307.05735v1 | 20230711190317 | GOKU-UI: Ubiquitous Inference through Attention and Multiple Shooting for Continuous-time Generative Models | [
"Germán Abrevaya",
"Mahta Ramezanian-Panahi",
"Jean-Christophe Gagnon-Audet",
"Irina Rish",
"Pablo Polosecki",
"Silvina Ponce Dawson",
"Guillermo Cecchi",
"Guillaume Dumas"
] | cs.LG | [
"cs.LG",
"nlin.CD",
"physics.data-an",
"physics.med-ph"
] |
August 12, 2023
===============================================
Scientific Machine Learning (SciML) is a burgeoning field that synergistically combines domain-aware and interpretable models with agnostic machine learning techniques. In this work, we introduce GOKU-UI, an evolution of the SciML generative model GOKU-nets. The GOKU-UI broadens the original model's spectrum to incorporate other classes of differential equations, such as Stochastic Differential Equations (SDEs), and integrates a distributed, i.e. ubiquitous, inference through attention mechanisms and a novel multiple shooting training strategy in the latent space. These enhancements have led to a significant increase in its performance in both reconstruction and forecast tasks, as demonstrated by our evaluation of simulated and empirical data. Specifically, GOKU-UI outperformed all baseline models on synthetic datasets even with a training set 32-fold smaller, underscoring its remarkable data efficiency. Furthermore, when applied to empirical human brain data, while incorporating stochastic Stuart-Landau oscillators into its dynamical core, it not only surpassed state-of-the-art baseline methods in the reconstruction task, but also demonstrated better prediction of future brain activity up to 12 seconds ahead. By training GOKU-UI on resting-state fMRI data, we encoded whole-brain dynamics into a latent representation, learning an effective low-dimensional dynamical system model that could offer insights into brain functionality and open avenues for practical applications such as mental state or psychiatric condition classification. Ultimately, our research provides further impetus for the field of Scientific Machine Learning, showcasing the potential for advancements when established scientific insights are interwoven with modern machine learning.
§ INTRODUCTION
§.§ Scientific Machine Learning
Scientific Machine Learning (SciML) is an emerging field that, drawing insights from scientific data, seeks to advance data-driven discovery with an approach that can produce interpretable results <cit.>. Its synergetic blend of machine learning and scientific computing based on mechanistic models makes it very powerful for addressing complex problems across all STEM areas and beyond <cit.>. Using informed priors, its application is already making key contributions in scientific inference, data analysis, and machine learning enhanced modeling <cit.>. Recent developments in SciML include various approaches to derive dynamical system models from observational data. The sparse identification of nonlinear dynamics (SINDy) algorithm is one such approach which, leveraging recent advances in sparsity techniques, exploits the observation that only a few important terms dominate the dynamics in most physical systems <cit.>. Physics-informed neural networks (PINNs) are another approach in which neural networks are trained to solve supervised learning tasks with respect to the laws of physics and can be used to derive data-driven partial differential equations or their solutions <cit.>. The application of PINNs to numerically stiff systems <cit.> is challenging. Universal differential equations (UDEs) are a recent method that not only overcomes the stiffness limitation, but also represents a perfect example of the essence of SciML: use all the prior knowledge and scientific insights available for your problem and fill the missing gaps with machine learning <cit.>. The simple yet powerful idea behind UDEs involves using traditional differential equation models with some unknown terms that are substituted by universal function approximators, such as neural networks. These approximators will be learned simultaneously with the equation's parameters by using sensitivity algorithms and automatic differentiation <cit.>.
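To make the UDE idea concrete, the following is a minimal Julia sketch (the language used for the implementation in this work); the linear decay terms, network size, and solver choice are illustrative assumptions rather than any model from the cited works, and the flattened parameter vector p_nn would subsequently be fitted by gradient descent together with any mechanistic parameters.

```julia
using DifferentialEquations, Flux

# Known mechanistic structure plus a neural network for the missing interaction term.
nn = Chain(Dense(2 => 16, tanh), Dense(16 => 2))
p_nn, re = Flux.destructure(nn)            # flatten the NN parameters into a vector

function ude_rhs!(du, u, p, t)
    known = -0.1 .* u                      # assumed known part: simple linear decay
    du .= known .+ re(p)(u)                # unknown part approximated by the NN
end

u0   = [1.0, 0.5]
prob = ODEProblem(ude_rhs!, u0, (0.0, 10.0), p_nn)
sol  = solve(prob, Tsit5(), saveat = 0.1)  # differentiable w.r.t. p_nn via sensitivity algorithms
```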
The evolution equations that might be derived from the data may not actually correspond to mechanistic models based on first principles. That is, on many occasions, the dynamics take place on a manifold of a lower dimension than the full phase space of the system <cit.>. Having reduced equations that describe the evolution on these manifolds is very useful, especially in high dimensional systems <cit.>. Reduced order models (ROMs) can be derived from data <cit.>. In particular, generative adversarial network (GAN) <cit.> approaches have been used to enhance the application of ROMs to simulations of fluid dynamics <cit.>.
§.§ Neural Differential Equations
Although the basic ideas behind differential equations parameterized by neural networks, and their connection with deep learning, have older roots in the literature <cit.>, the publication of <cit.> was a tipping point in the young history of SciML. Since then, the topic of neural differential equations (neural DEs) has become a field, as stated and evidenced in the comprehensive survey by <cit.>. In <cit.>, by interpreting the ResNets <cit.> as a discrete integration of a vector field with the Euler method, the authors proposed an infinitesimally-layered neural network as its continuous limit and modeled it with an ordinary differential equation (ODE) parametrized by a neural network, giving rise to the Neural Ordinary Differential Equation (NODE) models. They also demonstrated that NODEs can be trained by backpropagating through black-box ODE solvers using the adjoint method <cit.>, making it a memory-efficient model.
Furthermore, <cit.> introduced the Latent Ordinary Differential Equations (Latent ODEs), a continuous-time generative model that encodes time series data into a latent space that could potentially capture its underlying dynamics, which are modeled using a NODE. First, the observed time series are encoded in the latent space using a recognition model, typically a Recurrent Neural Network (RNN). The temporal dynamics in the latent space is then modeled using NODEs, and lastly, its solution is decoded back into the observation space to generate predictions or perform other tasks such as anomaly detection or imputation of missing values.
By using this approach, Latent ODEs can capture the intricate and potentially nonlinear dynamical systems that underlie many real-world time series data. The continuous nature of the NODEs allows the model to handle irregularly sampled data <cit.>, a common challenge in many applications. Additionally, the use of the latent space enables the model to capture complex patterns in high-dimensional data, while still providing a compact and interpretable representation of the underlying dynamics.
§.§ GOKU-nets
The work of <cit.> builds on the Latent ODEs model, demonstrating that the incorporation of prior knowledge of the dynamics involved, in the form of a backbone differential equation structure, can increase the performance of a purely agnostic model. They propose another continuous-time generative model called GOKU-nets (which stands for Generative ODE Modeling with Known Unknowns), which will be the focus of this paper. This model incorporates a variational autoencoder (VAE) structure with a differential equation to model the dynamics in the latent space. However, in this case, a specific form for the ODE is provided while allowing its parameters (the known unknowns) to be inferred. The model is trained end-to-end, jointly learning the transformation to the latent space and inferring the initial conditions and parameters of the ODE, which is then integrated; finally, a last transformation is performed to go back to the input space. Its architecture is described in detail in the next section. Note that the ODE can be integrated further than the input time span in order to generate an extrapolation, thus becoming a forecast of the future evolution of the time series. The study in <cit.> compares GOKU-net with baselines such as LSTM and Latent-ODE in three domains: a video of a pendulum, a video of a double pendulum, and a dynamic model of the cardiovascular system. The authors show that their model significantly outperforms the others in reconstruction and extrapolation capabilities, reduces the size of the required training sets for effective learning, and furthermore, has greater interpretability, allowing for the identification of unobserved but clinically significant parameters.
The original GOKU-net model was limited to handling only ODEs. In this work, we expand its capabilities by implementing the model in the Julia Language <cit.>, leveraging its potent SciML Ecosystem <cit.> which enables us to utilize a wide spectrum of differential equation classes (including SDEs, DDEs, DAEs) and a diverse suite of advanced solvers and sensitivity algorithms <cit.>.
As reported in the literature, the identification of nonlinear dynamical systems, and in particular the gradient-descent-based training of neural DE models, can often be challenging due to their highly complex loss landscapes, leading to training stagnation at poor local minima <cit.>. We propose an enhancement to the original GOKU-net architecture which adds attention mechanisms to the main part of the model that infers the parameters of the differential equations. Moreover, to overcome the inherent difficulties of training, we developed a novel strategy to train the GOKU-net based on the multiple shooting technique <cit.> in the latent space. We have evaluated our enhanced model and training strategy against simulated data from a network of stochastic oscillators, specifically Stuart-Landau oscillators, as well as empirical brain data derived from resting-state human functional Magnetic Resonance Imaging (fMRI). In both cases, the GOKU-net that fuses multiple shooting and attention, labeled GOKU-nets with Ubiquitous Inference (GOKU-UI), outperformed both the base GOKU-net model and the baseline models in terms of reconstruction accuracy and forecasting capability, while also showing superior data efficiency. We believe that GOKU-UI represents a promising step forward in Scientific Machine Learning, underscoring the rich possibilities that emerge when melding traditional scientific insights with contemporary machine learning techniques.
§ METHODS
§.§ Basic GOKU-nets
GOKU-nets could be thought of as a particular case of a more general model class that we call a Latent Differential Equation model (Latent DE), whose schema is displayed in Figure <ref>. Initially, each temporal frame of the input data x_i is independently processed by a Feature Extractor, usually reducing its dimensionality. Following this, the entire sequence is subjected to a Pattern Extractor, which aims to learn the distribution of the initial conditions, and possibly of the parameters, for the differential equation that will be subsequently integrated. Lastly, the solution undergoes a final transformation via a Reconstructor, going back to the original input space. The original model is trained as a standard VAE, by maximizing the evidence lower bound (ELBO) <cit.>.
In the case of the Latent ODEs proposed by <cit.>, an RNN is used for the Pattern Extractor, a fully connected NN for the reconstructor, and the differential equation is parametrized with another NN. This means that in this model the specific form of the differential equation is not provided beforehand but simultaneously learned during training. The Feature Extractor and the intermediate layers, Latent in and Latent out, are not present so they could be considered as identity operations. On the other hand, the GOKU-net model proposed by <cit.> has a ResNet with fully connected NNs as the Feature Extractor, while in the Pattern Extractor, an RNN is used to learn the initial conditions and a bidirectional LSTM for the ODE parameters. In this case the differential equation is explicitly predefined, which allows incorporating some prior knowledge about the dynamical nature of the system under consideration. The Latent in and out layers are fully connected NNs and, finally, the Reconstructor is another ResNet, similar to the initial one. In the next section, some enhancements to the original GOKU-net model are proposed.
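The following Julia sketch illustrates this data flow for the initial-condition path only, with illustrative layer sizes, a fixed stand-in latent ODE in place of the predefined equation whose parameters are inferred by the second (BiLSTM) path, plain Chains instead of ResNets, and the Flux ≤ 0.14 recurrent interface assumed; it is a schematic of the forward pass, not the actual implementation.

```julia
using Flux, DifferentialEquations

input_dim, feat_dim, z_dim = 784, 128, 6              # illustrative dimensions

feature_extractor = Chain(Dense(input_dim => 200, mish), Dense(200 => feat_dim))
rnn_z0            = LSTM(feat_dim => 64)              # Pattern Extractor, initial-condition path
latent_out_z0     = Dense(64 => z_dim)
reconstructor     = Chain(Dense(z_dim => 200, mish), Dense(200 => input_dim))

latent_ode!(du, u, p, t) = (du .= -0.1 .* u)          # stand-in latent dynamics

function forward(x_seq)                               # x_seq: vector of input_dim-dimensional frames
    Flux.reset!(rnn_z0)
    h = nothing
    for x in x_seq                                    # last hidden state summarizes the sequence
        h = rnn_z0(feature_extractor(x))
    end
    z0   = latent_out_z0(h)
    prob = ODEProblem(latent_ode!, Float64.(z0), (0.0, 0.05 * (length(x_seq) - 1)))
    z    = solve(prob, Tsit5(), saveat = 0.05)        # latent trajectory at the input time points
    return [reconstructor(Float32.(zt)) for zt in z.u]
end

x̂ = forward([randn(Float32, input_dim) for _ in 1:46])
```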
§.§ GOKU-UI
§.§.§ Attention mechanism
The first modification is the addition of a basic attention mechanism <cit.> into the Pattern Extractor, specifically in the part associated with the learning of the parameters of the differential equation. Namely, instead of keeping only the last output of the bidirectional LSTM (BiLSTM) used in the original GOKU-net model, all of its sequential outputs pass through a dense layer with softmax activation to calculate the attentional scores that weight the sum of all the BiLSTM outputs in order to obtain its final output.
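A minimal Julia sketch of this pooling is given below (dimensions follow the Supplementary Information; Flux ≤ 0.14 recurrent interface assumed); it is illustrative rather than the exact implementation.

```julia
using Flux

bilstm_fwd = LSTM(128 => 64)               # forward and backward halves of the BiLSTM
bilstm_bwd = LSTM(128 => 64)
attn_dense = Dense(128 => 128)             # attention NN acting on each BiLSTM output

function attention_pool(f_seq)             # f_seq: vector of 128-dim feature vectors
    Flux.reset!(bilstm_fwd); Flux.reset!(bilstm_bwd)
    fwd = [bilstm_fwd(f) for f in f_seq]
    bwd = reverse([bilstm_bwd(f) for f in reverse(f_seq)])
    H   = reduce(hcat, [vcat(f, b) for (f, b) in zip(fwd, bwd)])  # 128 × T BiLSTM outputs
    scores = softmax(attn_dense(H); dims = 2)                     # attentional scores across time
    return sum(H .* scores; dims = 2)                             # weighted sum replaces the last element
end
```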
§.§.§ Multiple Shooting
When training Neural DE models, gradients have to be calculated through differential equations with respect to its initial conditions and parameters, by means of some sensitivity algorithm <cit.>. This tends to produce highly complex loss landscapes <cit.>. The work of <cit.> demonstrates that training Neural ODEs even on very simple oscillatory data could be problematic, showing that the outcome may result in a trajectory similar to a moving average of the original data, thus failing to capture responses of higher frequency. They proposed a solution based on the multiple shooting methods, which are widely used in Optimal Control <cit.> and Systems Identification <cit.> to alleviate the problem of high sensitivity to initial conditions and lower the probability of getting trapped at local minima with very poor performance. The basic idea of multiple shooting is to partition the complete time span over which the differential equation would be integrated into smaller time windows, for each of which the initial conditions are inferred in parallel. Afterward, the multiple segments are joint and a continuity constraint is imposed during the optimization, in our case, through the penalty method, which in practice simply consists of adding a regularization term to the loss function.
However, applying the multiple shooting method to GOKU-nets is not straightforward. Firstly, in most cases that use this method, such as in <cit.>, the differential equations are typically directly modeling the observable data, having direct access to the true initial conditions for each window. In the case of GOKU-nets, the dynamics modeled by differential equations occur in the latent space, which is being learned simultaneously; as a result, such true initial conditions are not available. Secondly, it is necessary to determine how the method will behave in relation to the parameters of the differential equation, which in the case of Neural ODEs are implicitly learned as part of their parameterization through the neural network.
Our proposal to extend the multiple shooting method to GOKU-nets consists of the following. After passing through the Feature Extractor, we divide the temporal interval in the latent space in such a way that the Pattern Extractor generates in parallel different initial conditions for each temporal window, but provides a single set of parameters for the differential equations that will be shared by all windows. By this strategy, we maintain the potential benefits inherent to the multiple shooting method while leveraging the information available in a wider temporal range for the task of parameter inference, which is generally more challenging than estimating initial conditions. As mentioned before, we do not have access to target true initial conditions, however, what we can strive to achieve is the continuity of trajectories across different windows. To this end, these intervals are defined by overlapping the last temporal point of each window with the first one of the following and the goal is to minimize the distance between these points. Specifically, we employ regularization in the cost function when training the model, quadratically penalizing the discrepancy in the latent space of the overlapping points, that is, between the initial condition of each window and the end point of its preceding segment.
Our experiments indicated that non-variational GOKU-nets models outperform their variational counterparts significantly (see Supplementary Information <ref>). Therefore, we used non-variational GOKU-nets for all the remaining results in this work. Specifically, instead of sampling from normal distributions in the latent space as shown in Figure <ref>, we directly take the mean values μ_z_0 and μ_θ. As a result, the cost function associated with these models does not include the KL divergence term associated with the ELBO, but it does retain the reconstruction term, which is calculated as the mean squared error between the model's output and the input, normalized by the mean absolute value of the input. Furthermore, when using multiple shooting training, the continuity regularization described in the previous paragraph is included.
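As a sketch, the resulting objective can be written as below, where x and x̂ are hypothetical input and reconstruction arrays and z_segments holds the latent trajectory of each window (states as columns); the normalization follows one plausible reading of the description above.

```julia
using Statistics

function goku_ui_loss(x, x̂, z_segments; λ = 2.0)
    rec = sum(abs2, x̂ .- x) / length(x) / mean(abs.(x))   # MSE normalized by the mean |input|
    junctions = length(z_segments) - 1
    # Continuity penalty between the end of each window and the start of the next one.
    cont = sum(sum(abs2, z_segments[i+1][:, 1] .- z_segments[i][:, end]) for i in 1:junctions) / junctions
    return rec + λ * cont                                   # λ = 2 is the coefficient used here
end
```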
§.§ Experiments
In the next sections, we evaluate our proposed attention and multiple shooting enhancements through two highly challenging cases; one on synthetic data based on a network of stochastic oscillators known as Stuart-Landau oscillators and the other on empirical human brain data.
We compare the reconstruction and forecast performance of different variations of the GOKU-model (basic or with attention) trained in the original single shooting fashion or with the proposed multiple shooting method, as well as some baseline models: LSTM, Latent ODE, and a naïve model. For a fair comparison, both the LSTM and Latent ODE models are constructed maintaining the same GOKU-net general architecture and changing only the differential equation layer. Specifically, the Feature Extractor, Pattern Extractor, Latent In, Latent Out, and Reconstructor layers (see Figure <ref>) maintain the same architecture and hyperparameters. However, the differential equation layer is substituted with a Neural ODE for the Latent ODE model, and in the other case, it is replaced by a recursively executed LSTM. The latent state dimensionality and size of the NN parameterizing the differential equation inside the Latent ODE, as well as the number of neurons in the LSTM were selected to match the total number of parameters of their contending GOKU-UI (with attention and multiple shooting). The naïve predictors, both for the reconstruction and forecast task, are simply constant predictions with the values of time-averaged inputs.
All models were trained under identical conditions and following the same procedure, with the aim of minimizing reconstruction error. In all instances, the input sequences to the model consisted of 46 time steps. During each training epoch, a random interval of this length was selected within the training data available for each sample in every batch of 64 samples. In the case of the multi-shooting training, the 46-time step sequence was further partitioned in the latent space into five windows, each comprising 10 time steps, with the last point of one window overlapping with the first point of the subsequent window. Under these circumstances, the loss function was augmented by the sum of squared differences between the endpoints of each segment. This sum was normalized by the number of junctions and multiplied by a regularization coefficient to impose a continuity constraint among the different segments. In the results presented here, the regularization coefficient was set to 2. Comprehensive details of the training process, as well as the specific architecture of the models and the hyperparameters used, can be found in the Supplementary Information.
§.§.§ Simulated data
Stuart-Landau (SL) oscillators, representing the normal form of a supercritical Hopf bifurcation, serve as a fundamental archetype of mathematical models and are extensively used across diverse scientific disciplines to study self-sustained oscillations <cit.>. The SL oscillator is often described by the complex-valued differential equation
ż = z(a + iω) - z |z|^2
where z = ρ e^iθ = x + iy, a is the bifurcation parameter, and ω is intrinsic frequency of the oscillator. The parameter a represents the linear growth rate of the system. When a is positive, the amplitude of the oscillation increases, and when a is negative, the amplitude of the oscillation decreases. At a=0, a Hopf bifurcation occurs, and the system transitions from a stable fixed point to limit cycle oscillations (or vice versa). Despite its apparent simplicity, the SL model can exhibit a wide range of behaviors, including limit cycles, chaotic oscillations, and quasiperiodicity, making it a versatile tool in the study of nonlinear dynamics and a good candidate for evaluating the capabilities of the GOKU-net models.
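For reference, the polar decomposition makes the bifurcation explicit: writing z = ρ e^iθ, Eq. <ref> separates into
ρ̇ = ρ(a - ρ^2), θ̇ = ω,
so that for a>0 the amplitude relaxes to a stable limit cycle of radius √(a) rotating at the intrinsic frequency ω, while for a<0 all trajectories decay to the fixed point at the origin.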
In particular, we generate the simulated data with a network of coupled stochastic Stuart-Landau oscillators that has been widely used to model brain dynamics for resting state fMRI <cit.>, which will also be used in the empirical evaluation on brain data, described in the next section. The dynamics of the i-th node within a network of N oscillators is given by the following equation:
ẋ_j = Re (ż_j) = [a_j - x^2_j - y^2_j]x_j - ω_j y_j + G ∑_i=1^N C_ij (x_i - x_j) + βη_j (t)
ẏ_j = Im(ż_j) = [a_j - x^2_j - y^2_j]y_j + ω_j x_j + G ∑_i=1^N C_ij (y_i - y_j) + βη_j (t)
where C_ij is a connectivity matrix between all nodes in the network, G is a global coupling factor, while η_j represents additive Gaussian noise. Note that independent bifurcation parameters a_j and frequencies ω_j are used for each node.
During the construction of our dataset, we perform a dimensionality augmentation on the network of oscillators, which are utilized as latent dynamics. Specifically, we apply a random linear transformation, f: ℝ^2N→ℝ^D, to the latent trajectories of each sample, where the dimension D is significantly larger than 2N. Each sample corresponds to a unique random set of initial conditions and parameters for the N coupled oscillators. All the synthetic data experiments were performed using N = 3 stochastic oscillators with a high dimension D = 784. All details of the implementation and hyperparameters can be found in the Supplementary Information, and the codes are accessible in the GitHub repository [The link will be provided upon acceptance.].
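For illustration, a single sample of this dataset could be generated along the following lines in Julia; the parameter ranges and the lift to D = 784 follow the Supplementary Information, while the time rescaling by 20 and the trimming of the initial transient are omitted here for brevity.

```julia
using DifferentialEquations

N, D, G, noise_amp = 3, 784, 0.1, 0.02
a = rand(N) .* 0.4 .- 0.2                   # a_j ∈ [-0.2, 0.2]
ω = rand(N) .* 0.06π .+ 0.08π               # ω_j ∈ [0.08π, 0.14π]
C = rand(N, N) .* 0.2                       # coupling matrix C_ij ∈ [0, 0.2]

function sl_drift!(du, u, p, t)             # real and imaginary parts of the coupled SL network
    x, y = u[1:N], u[N+1:2N]
    for j in 1:N
        r2 = x[j]^2 + y[j]^2
        du[j]     = (a[j] - r2) * x[j] - ω[j] * y[j] + G * sum(C[i, j] * (x[i] - x[j]) for i in 1:N)
        du[N + j] = (a[j] - r2) * y[j] + ω[j] * x[j] + G * sum(C[i, j] * (y[i] - y[j]) for i in 1:N)
    end
end
sl_noise!(du, u, p, t) = (du .= noise_amp)  # additive diagonal Gaussian noise

u0     = rand(2N) .* 0.1 .+ 0.3             # initial conditions in [0.3, 0.4]
prob   = SDEProblem(sl_drift!, sl_noise!, u0, (0.0, 35.0))
sol    = solve(prob, SOSRI(), saveat = 0.05)

W      = rand(D, 2N) .* 2 .- 1              # random linear transformation to D dimensions
sample = W * Array(sol)                     # D × T high-dimensional trajectory
```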
§.§.§ Empiric data
Next, in an effort to both evaluate our proposed models on a challenging empirical dataset and provide an example of the advantages of incorporating prior scientific insights into agnostic AI models, we focus on one of the most complex systems in nature: the human brain. Resting-state fMRI data from 153 subjects, sourced from the Track-On HD study <cit.>, was utilized in our research. The data was pre-processed as described in <cit.>, followed by a 20-component Canonical ICA <cit.>. Of the original 20 components, 9 were identified as artifacts and therefore eliminated, leaving 11 components for further analysis. Each subject contributed data from two visits, resulting in a total of 306 samples. Each sample comprised 160 time points, captured at a temporal resolution of 3 seconds. For the purposes of our study, the initial 114 time points were allocated for training while the remaining ones were used for testing. Our proposed GOKU-UI was trained using 20 Stuart-Landau stochastic oscillators (Eq. <ref>) in the latent space. The performance of the model when using different numbers of oscillators can be found in the Supplementary Information.
§ RESULTS
We assessed the performance of four GOKU-net variants in reconstruction and forecasting tasks. These variants included both single and multiple shooting methods, used with or without the attention mechanism. Comparisons were made with three baseline models, namely LSTM, Latent ODEs, and a naïve predictor.
All models were trained only on the reconstruction task, while forecasts were generated using those trained models during the evaluation stage. The prediction error was measured by the root mean square error (RMSE) between the target ground truth and the reconstruction or forecast, as appropriate. The errors were averaged over the input dimensions and over the time interval and normalized with respect to the input. The shaded areas represent standard errors from different trainings with multiple random seeds.
§.§ Simulated data
Figure <ref> shows the performance of the different models in the reconstruction task on a synthetic dataset generated with the latent dynamics of three stochastic Stuart-Landau oscillators. Here, the GOKU-net variants trained with the multiple shooting method exhibited notably lower errors than the rest of the models. Importantly, GOKU-nets utilizing the attention mechanism consistently demonstrated improved or equal outcomes compared to the basic variant. Specifically, the single shooting variant greatly benefitted from the attention mechanism, while in the multiple shooting scenario, it initially led to lower NRMSE than the basic model, but for more than 600 training samples, both versions converged to similar results. GOKU-UI, which incorporates both the attention mechanism and multiple shooting, was the best performing model. On the contrary, the Latent ODEs underperformed, showing an error greater than the naïve predictor, while the LSTMs outperformed the basic GOKU-nets trained with single shooting. With the exception of Latent ODEs, all models showed a trend toward improved performance as the number of samples in the training set increased. GOKU-UI, trained with just 150 samples in its training set, achieved a similar reconstruction performance to when it used 4800 samples. Furthermore, when trained on merely 75 unique samples, GOKU-UI surpassed the rest of the models without multiple shooting, even when they were trained with a 64-fold larger training set.
When predicting 20 temporal points beyond the reconstruction limit, a similar trend was observed, as shown in Figure <ref>. In this scenario, the GOKU-nets trained with single shooting but incorporating attention outperformed the LSTMs when using more than 300 training samples, although they performed similarly with 4800 samples. However, the GOKU-nets trained with multiple shooting continued to demonstrate superior overall performance. In comparison with the reconstruction task, here GOKU-UI exhibited a slightly increased sensitivity to the amount of training data, achieving a minimum forecast error for 4800 training samples.
§.§ Empirical data
In a separate analysis, GOKU-UI was trained with 20 coupled stochastic Stuart-Landau oscillators governing the dynamics in its latent space, along with the rest of the baseline models, on 306 samples of 11-component ICA time series derived from fMRI data. As shown in Figure <ref>, GOKU-UI achieved a reconstruction error an order of magnitude lower than the rest of the models. Despite a reduction in the performance difference between GOKU-UI and the other models in the forecast task for the 12 seconds of the time series immediately following the reconstruction interval, GOKU-UI still maintained a significantly lower error (see Figure <ref>), reinforcing its superior performance in both the reconstruction and forecast tasks.
§ DISCUSSION AND LIMITATIONS
Our research centered around two critical enhancements to the basic GOKU-net model: the implementation of a basic attention mechanism and the application of the multiple shooting method. Independently, these modifications substantially improved the model's performance on both the reconstruction and forecast tasks. The multiple shooting training method was found to yield the most significant improvement. GOKU-UI, a composite of both enhancements, showcased the best overall performance. Notably, during the evaluation of the synthetic dataset, GOKU-UI demonstrated remarkable efficiency with respect to training data: it achieved performance with a mere 150 training samples comparable to that obtained when using 16 times that amount, and outperformed all other baseline models even when they used 32-fold larger training sets.
Furthermore, we implemented GOKU-nets in the Julia programming language, which broadened the capabilities of the model. It helped overcome the initial limitation to Ordinary Differential Equations (ODEs) and facilitated the use of a wider range of differential equation classes (e.g., SDEs), as well as alternative advanced solvers and sensitivities algorithms. The unique SciML ecosystem of the Julia language proved to be a potent and effective tool for research at the intersection of dynamical systems and machine learning.
Although the Stuart-Landau model has been used in previous literature <cit.>, it has generally been used with fixed coupling between oscillators. These empirical structural connectivity estimates were derived from Diffusion Tensor Imaging (DTI), while other parameters were adjusted to maximize the goodness-of-fit criterion, given by a score that incorporates the information from the entire time series. To our knowledge, this is the first instance of employing Stuart-Landau oscillators to model latent brain data dynamics while simultaneously inferring oscillator connectivity and learning the nonlinear transformation into the latent space. This approach differs from previous works, such as <cit.>, which fitted the more general van der Pol oscillators to the latent representation of brain data. However, their learning process involved two separate procedures: encoding in the latent space and parameter estimation of the differential equations. On the contrary, our GOKU-UI model combines these processes into a single end-to-end training; as discussed in <cit.> this approach provides the unique advantage of integrating known dynamics that govern the system, thus facilitating the interpretability and potential applicability with a smaller training set while maintaining the flexibility to learn nonlinear dependencies.
We have demonstrated these capabilities by training the GOKU-UI on fMRI data and encoding whole-brain dynamics into a latent representation. This representation's temporal evolution is effectively modeled by a low-dimensional, interpretable dynamical system, which can yield profound insights into brain functionality, such as the inference of functional connectivity. Beyond mere understanding, the model holds promise for applied usage, including the classification of mental states or psychiatric conditions. These classification applications might leverage the parameters of the differential equations or could potentially draw on higher-level features of the latent system, such as the characteristics of its attractor topology.
Despite its versatility as a tool in the SciML toolbox, GOKU-UI's main advantage may sometimes also be its primary limitation: unlike traditional, more agnostic machine learning models, GOKU-UI requires a preliminary differential equation model hypothesized to govern the data's intrinsic temporal dynamics. This requirement may be challenging to meet in many cases. For example, with latent ODEs, one can bypass this task, allowing another neural network to learn the differential equation. However, the significant complexity of the system under investigation, as evidenced by our experiments, could potentially hinder the efficacy of this method.
To the uninitiated in the field of dynamical systems, the process of proposing a specific differential equation to model the data's intrinsic and not immediately evident dynamics might seem like a guessing game. However, this approach has been the successful foundation of physics since the time of Newton. The application of GOKU-UI to a new problem might not be as straightforward as a general-purpose black-box neural network model. Still, when guided by the vast theory of dynamical systems, it is not only possible but potentially highly fruitful.
§ ACKNOWLEDGMENTS
Guillaume Dumas was supported by the Institute for Data Valorization, Montreal (IVADO; CF00137433) Professor Startup & Operational Funds, the Fonds de la Recherche en Santé du Québec (FRQ; 295289; 295291) Junior 1 salary award, the Natural Sciences and Engineering Research Council of Canada (NSERC; DGECR-2023-00089), and the Azrieli Global Scholars Fellowship from the Canadian Institute for Advanced Research (CIFAR) in the Brain, Mind, & Consciousness program. Irina Rish and Mahta Ramezanian Panahi acknowledge the support from the Canada CIFAR AI Chair Program and the Canada Excellence Research Chairs (CERC) program. Furthermore, Mahta Ramezanian Panahi acknowledges the UNIQUE Center support. Finally, the work of Silvina Ponce Dawson and Germán Abrevaya was funded by UBA (UBACyT 20020170100482BA) and ANPCyT (PICT-2018-02026, PICT-2021-I-A-00128, PICT-2021-III-A-00091).
The computational resources used in this work were provided (in part) by the HPC center DIRAC, funded by Instituto de Fisica de Buenos Aires (UBA-CONICET) and part of SNCAD-MinCyT initiative, Argentina.
icml2021
§ SUPPLEMENTARY INFORMATION
§.§ Models architectures
Referring to the diagram in Figure <ref>, the specific architecture used for the different models, for both simulated and empirical data experiments, is as follows:
§.§.§ Basic GOKU-nets
Feature Extractor
ResNet with 4 fully-connected layers, each with 200 neurons and using mish activation functions <cit.>. Input dimension = number of dimensions in the input data. Output dimension = 128.
Pattern Extractor
Initial values path: an RNN with 2 layers and 64 neurons in each with ReLU activations. Input dimension = 128. Output dimension = 64.
Parameters path: Bidirectional LSTM with 2 layers and 64 neurons in each. Input dimension = 128. Output dimension = 128. Note that the dimension of the output of the forward LSTM and the backward LSTM are 64 but when concatenating them, the resulting output dimension is the given one.
Latent in
Initial values path: single-layered fully connected NN. Input dimension = 64. Output dimension = 64.
Parameters path: fully connected NN with 1 layer. Input dimension = 128. Output dimension = 128.
Latent out
Initial values path: fully connected NN with 2 layers and 200 neurons in the hidden layer, using no activation function (identity). Input dimension = 64. Output dimension = number of state variables of the differential equation.
Parameters path: fully connected NN with 2 layers and 200 neurons in the hidden layer, using sigmoid activation function. The parameters are projected from the interval [0, 1] to the desired range when integrating the differential equation. Input dimension = 128. Output dimension = number of parameters of the differential equation.
Differential Equation layer
The predefined differential equation is solved numerically for each of the sets of parameters and initial conditions provided by the previous layer. The output is the trajectories at time points equivalent to the input data.
Reconstructor
A ResNet similar to the one in the Feature Extractor, except that in this case the input dimension is the number of state variables of the differential equation and the output dimension is the one corresponding to the input data.
§.§.§ GOKU-nets with attention
With the exception of the Pattern Extractor, the rest of the layers in the GOKU-nets with attention model remain identical to those in the basic GOKU-nets.
Pattern Extractor
Initial values path: LSTM with 1 layer. Input dimension = 128. Output dimension = 128.
Parameters path: Bidirectional LSTM (BiLSTM) with 1 layer. Input dimension = 128. Output dimension = 128. A fully connected NN with input and output dimensions of 128 is used for the attention mechanism. This attention NN processes all the output sequences of the BiLSTM, after which a softmax is applied across the time dimension in order to obtain the attentional scores that will be used in the weighted sum of all the time steps returned by the BiLSTM.
§.§.§ LSTM baseline model
The whole architecture is the same as in the basic GOKU-net, except for the Differential Equation layer, which is replaced by an LSTM:
LSTM layer
Single-layered LSTM with input and output dimension given by z_dim, which is calculated in each experiment so that the total number of parameters in the model match as closely as possible that of the corresponding GOKU-UI. The LSTM is run recursively, providing as the first input the equivalent of what would be the initial value for differential equations, and then subsequently feeding back the last output as a new input, until obtaining the same number of time steps as in the model's input.
§.§.§ Latent ODE baseline model
The whole architecture is the same as in the basic GOKU-net, except for the Differential Equation layer, which is replaced by a Neural ODE:
Neural ODE layer
Neural ODE is parametrized by a fully connected NN with 3 layers and node_hidden_dim neurons in each. The input and output dimensions are given by z_dim, which is the number of state variables. Both node_hidden_dim and z_dim are calculated in each experiment so that the total number of parameters in the model match as closely as possible that of the corresponding GOKU-UI. Different combinations of node_hidden_dim and z_dim have been tested while keeping the total number of parameters approximately fixed. The results are plotted in section <ref>.
§.§ Comprehensive description of experiments
§.§.§ Simulated dataset generation
The high-dimensional simulated dataset used for training the model was constructed based on the simulations of 3 coupled Stuart-Landau oscillators (Eqs. <ref>) with different random sets of parameters. Each set of parameters corresponds to a different training sample. Whenever we used the Stuart-Landau model in our experiments (both when generating the dataset and when using it inside the GOKU-nets), the time was rescaled by multiplying the right-hand side of Eqs. <ref> by 20. Thus, when integrating the equations with the used dt=0.05, the input sequences of length 46 time steps contain a few oscillations. The parameters a, ω and C were sampled from uniform distributions within the following ranges
a ∈ [-0.2, 0.2]
ω∈ [0.08 π, 0.14 π]
C ∈ [0, 0.2]
while G = 0.1 and η = 0.02. On the other hand, the initial conditions for the six state variables were sampled from uniform distributions within the range [0.3, 0.4]. For each set of parameters and initial conditions, the system is integrated with the SOSRI solver, a stability-optimized adaptive method of strong order 1.5 and weak order 2.0 for diagonal/scalar Itô SDEs, from the DifferentialEquations.jl Julia package <cit.>. The complete time span of the integration is 35 units of time and the trajectories are saved every 0.05, resulting in 700 time points. The first 100 time steps are trimmed, in order to remove possible initial transients. Afterwards, a random linear transformation is independently applied to each of the 600 remaining time steps, in order to obtain 784 dimensions. In other words, every state vector of length 6 from each sample is multiplied by the same 784×6 matrix, initialized by randomly sampling from a uniform distribution in the range [-1, 1]. A training dataset was created with 5000 samples, which serves as the source for the different training instances using different sizes of training sets (see Figure <ref>). A different test set with 900 samples was created for the posterior evaluations of the model.
§.§.§ Empirical dataset generation
We used resting-state fMRI data from 153 participants, obtained from the Track-On HD study <cit.>. After pre-processing as described in <cit.>, a 20-component Canonical ICA <cit.> was applied. Upon inspecting the resulting 20 components, 9 of them were identified as artifacts and thus discarded, leaving 11 components for further usage in our experiments. Each subject contributed data for two visits, hence accumulating a total of 306 data samples. Every sample was composed of 160 time points, each obtained at a temporal resolution of 3 seconds. The first 114 time points from each sample were used for training, and the remaining intervals were set aside for evaluation. The input is normalized by dividing by the standard deviation of the training set.
§.§.§ Training settings
All the experiments underwent the same training procedure with identical hyperparameters, which will be described here.
The input sequence length for all the models was 46 time steps, and the batch size was 64. As described above, the full length of the samples in the training sets was 600 time steps for the synthetic dataset and 114 for the fMRI dataset. For generating a batch of training data, 64 samples that have not been used previously in the current training epoch are randomly picked. Then, for each sample, a 46-time-step-long interval is randomly selected within the 600 or 114 time steps available in the full training data.
The GOKU-net based models contain the same Stuart-Landau differential equations as described above; however, the allowed parameter ranges differ from those used during the generation of the synthetic dataset. In order to be closer to a real-world use case, we allow for a wider range of parameters than those actually used for generating the data, since in principle one would not know the true range:
a ∈ [-1, 1]
ω∈ [0, 1]
while keeping all the other parameters the same. The differential equations definitions were optimized for higher computational performance with the help of ModelingToolkit.jl <cit.>. During training, they were solved with the cmttSOSRI solver, a Stability-optimized adaptive strong order 1.5 and weak order 2.0 for diagonal/scalar Ito SDEs, from the DifferentialEquations.jl Julia package <cit.>. The sensitivity algorithm used was cmttForwardDiffSensitivity from the SciMLSensitivity.jl package <cit.>.
The models were defined and trained within the deep learning framework of the Flux.jl package <cit.>. The experiments were managed using DrWatson.jl package <cit.>.
The model was trained with Adam with a weight decay of 10^-10, and the learning rate was dynamically determined by the following schedule. The learning rate begins with a linear growth (also referred to as learning rate warm-up) from 10^-7, escalating up to 0.005251 across 20 epochs. Afterwards, it maintains that value until the validation loss stagnates (has not achieved a lower value for 50 epochs), at which point it starts a sinusoidal schedule with an exponentially decreasing amplitude.
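The schedule can be sketched as follows; only the warm-up length and the initial and peak learning rates are taken from the description above, while the decay constant and period of the sinusoidal phase are illustrative assumptions.

```julia
# epoch: current epoch; stagnation_epoch: epoch at which the validation loss stagnated (or nothing)
function learning_rate(epoch; stagnation_epoch = nothing,
                       warmup = 20, lr_min = 1e-7, lr_peak = 0.005251,
                       decay = 0.01, period = 50)
    if epoch <= warmup
        return lr_min + (lr_peak - lr_min) * epoch / warmup              # linear warm-up
    elseif stagnation_epoch === nothing || epoch <= stagnation_epoch
        return lr_peak                                                   # plateau until stagnation
    else
        t = epoch - stagnation_epoch                                     # sinusoid with decaying amplitude
        return lr_peak * exp(-decay * t) * (0.5 + 0.5 * cos(2π * t / period))
    end
end
```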
For the multiple shooting training, all the presented experiments used a time window length of 10, therefore partitioning 46-time-steps-long sequences into 5 windows with their endpoints overlapping. The regularization coefficient in the loss function for the continuity constraint had a value of 2.
Since we found that models with variational autoencoders underperform their non-variational versions (see Figure <ref>), all the results presented in this work were obtained using non-variational GOKU-nets. That is, instead of sampling from normal distributions in the latent space as depicted in Figure <ref>, we pass forward the mean values μ_z_0 and μ_θ. Thus, the associated loss function does not have the KL divergence term associated with the ELBO but retains the reconstruction loss given by the mean squared error between the output of the model and the input, normalized by the mean absolute value of the input. In addition, when multiple shooting training is employed, the extra term regarding the continuity constraint is included in the loss function. This extra term consists of the mean squared differences between the last point of a window and the initial point of the next one, divided by the number of junctions and multiplied by a regularization coefficient. Please note that this continuity regularization is performed in the state space of the differential equation and not in the input space.
§.§ Further exploration of the model
In Figure <ref>, a box plot is presented, comparing the reconstruction performances on the synthetic dataset when the models are variational versus when they are not.
Figures <ref> and <ref> present the performance of the GOKU-UI on the synthetic dataset when trained using different hyperparameters. As a reminder, the base values used in our experiments for the continuity regularization coefficient and window length are 2 and 10, respectively. In Figure <ref> all hyperparameters are kept the same, except for the continuity coefficient. Similarly, Figure <ref> shows variations with respect to the window length.
As evidenced by these results, there exist other sets of hyperparameters that produce higher performance than that obtained with the base hyperparameters used in our experiments. The set of hyperparameters had been selected based on a grid search, but using a different dynamical system. In any case, the results presented in this paper serve as a proof of concept, showing that even when using a sub-optimal set of hyperparameters, GOKU-UI demonstrates significantly better reconstruction and forecast performance with respect to the baseline models and the other GOKU-net variants tested.
Motivated by the question of whether it would be possible to identify the latent dimensionality of some data using the GOKU-UI model, the following experiments were performed. GOKU-UI models with different numbers of Stuart-Landau oscillators were trained on synthetic datasets generated from distinct numbers of latent oscillators but all with the same input size of 784, so the latent dimensionality was not evident. Furthermore, as in the training settings from our other experiments, the allowed parameter ranges were wider in the GOKU-UI than when generating the datasets. The results of such experiments are presented in Figure <ref>, where color represents the testing reconstruction error of GOKU-UIs with a given number of oscillators in the model on datasets built with a given number of true latent oscillators. We see that, as the number of oscillators in the model increases, the error progressively diminishes, and upon reaching the true number of latent oscillators in the data the error drops abruptly. Notably, when using more oscillators in the model than the true number of latent oscillators in the data, the model still learns to reconstruct it equally well. However, the most salient feature is that, in this simulated scenario, it is possible to identify the true latent dimensionality of the data as the number of oscillators in GOKU-UI at which the reconstruction error drops abruptly.
A similar attempt was tried to infer the latent dimensionality of the fMRI data, however, in Figure <ref> we see that there is no such abrupt descent in the reconstruction error for any of the number of oscillators inside GOKU-UI that we tried. Nevertheless, an inflection point can be identified for N = 17. We considered that any N ≥ 17 was adequate to use and chose N = 20 to perform the experiments presented in this study.
§.§ Reconstruction plots
In order to get a visual sense of the data and the model performance, in this section, some trajectories from the input data of the synthetic and empirical fMRI evaluation sets and their respective reconstruction from GOKU-UI are presented. In all cases, the x-axis represents time steps. In order to display representative cases, the samples are selected in each case so that their mean reconstruction RMSE was closest to the median error across all samples. In the cases of synthetic data, due to the inconvenience of displaying their 784 components, the 12 with the lowest error are shown in Figure <ref> and <ref>, while the 12 with the highest error are displayed in Figures <ref> and <ref>. In the case of fMRI (Figures <ref> and <ref>), all 11 ICA components are displayed.
|
http://arxiv.org/abs/2307.04364v2 | 20230710064055 | Probe hyperon electric dipole moments with full angular analysis | [
"Jinlin Fu",
"Hai-Bo Li",
"Jian-Peng Wang",
"Fu-Sheng Yu",
"Jianyu Zhang"
] | hep-ex | [
"hep-ex"
] |
APS/123-QED
[email protected]
[email protected]
[email protected]
^1School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
^2Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, People's Republic of China
^3MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China
^4School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, People's Republic of China
^5Center for High Energy Physics, Peking University, Beijing 100871, People's Republic of China
The electric dipole moment (EDM) of elementary particles, arising from flavor-diagonal CP violation, serves as a powerful probe for new physics beyond the Standard Model (SM) and holds the potential to provide novel insights in unraveling the enigma of the matter-dominated universe.
Hyperon EDM is a largely unexplored territory.
In this paper, we present a comprehensive angular analysis that focuses on entangled hyperon-antihyperon pairs in J/ψ decays for the indirect extraction of hyperon EDM. The statistical sensitivities are investigated for BESIII and the proposed Super Tau-Charm Facility (STCF). Leveraging the statistics from the BESIII experiment, the estimated sensitivity for Λ EDM can reach an impressive level of 10^-19 e cm, demonstrating a three-order-of-magnitude improvement over the only existing measurement in a fixed-target experiment at Fermilab with similar statistics. The estimated sensitivities for the Σ^+, Ξ^-, and Ξ^0 hyperons at the same level of 10^-19 e cm will mark the first-ever achievement, and the latter two will be the first exploration in hyperons with two strange valence quarks. The EDM measurements for hyperons conducted at the BESIII experiment will be a significant milestone and serve as a litmus test for new physics such as SUSY and the left-right symmetrical model. Furthermore, at the STCF experiment, the sensitivity of hyperon EDM measurements can be further enhanced by two orders of magnitude. Additionally, this angular analysis enables the determination of CP violation in hyperon decays, the effective weak mixing angle, and beam polarization.
Probe hyperon electric dipole moments with full angular analysis
Jianyu Zhang^1
August 12, 2023
================================================================
The measurement of a particle's permanent electric dipole moment (EDM), which violates both Parity (P) and Time reversal symmetries, and consequently Charge Parity (CP) symmetry according to the CPT theorem, provides a robust test within and beyond the Standard Model (SM). It serves as a sensitive probe for new physics, especially those that could induce lower loop or flavor diagonal CP Violation (CPV), in the multi-100 TeV mass range <cit.>.
Neutron and ^199Hg EDM measurements have set an upper limit on the SM QCD effective vacuum phase of θ̅⪅10^-10, yet the SM permits any value within the [0,2π] range.
This conundrum is commonly known as the strong CP problem <cit.>.
Examining EDM within the hadronic system serves as a means to either corroborate or disprove the θ̅ explanation and, in conjunction with the investigation of leptonic EDM,
constitutes an essential approach for the pursuit of new physics <cit.>.
Investigating EDM in baryonic and light nuclear systems offers a distinct opportunity to uncover diverse CPV models <cit.>.
Within the hyperon system, the strange quark may exhibit a special interaction with new physics, potentially resulting in a substantial EDM effect.
This could suggest that the new physics possesses a specific flavor structure.
Another crucial aspect is that a single EDM measurement alone is insufficient to distinguish between various sources of CPV beyond the SM. Therefore, it becomes essential to employ complementary observations of different systems, such as hadrons, atoms, nuclei, and molecules, in order to effectively discriminate between these sources <cit.>.
Despite more than 70 years of research in the pursuit of EDMs, the Λ hyperon remains the sole member of the hyperon family for which the upper limit of EDM, 1.5× 10^-16 e cm, has been measured utilizing spin precession at Fermilab <cit.>.
The indirectly predicted absolute value of the Λ EDM, based on the experimental upper limit of the neutron EDM, is < 4.4× 10^-26 e cm <cit.>.
There are no indirect predictions for hyperons with two or three strange valence quarks.
A variety of experimental approaches have been proposed, such as Λ EDM measurement utilizing spin precession induced by dipole magnets at the LHCb experiment <cit.>,
Ξ^+ and Ω^+ EDM measurements employing spin precession induced by bent crystal at a fixed-target experiment <cit.>.
Due to the short lifetimes of hyperons, conducting direct measurements of EDM through the spin precession presents significant challenges.
Preparing sources of various hyperons for EDM measurements in a single fixed-target experiment is also challenging, due to different production mechanisms and lifetimes of hyperons.
Unlike fixed-target experiments and hadron collider experiments, a large number of entangled Λ, Σ, and Ξ hyperon-antihyperon pairs can be readily produced and reconstructed from charmonium J/ψ decays at Tau-Charm factories.
The substantial production cross-section of J/ψ in e^+e^- annihilation, along with the large branching fraction of J/ψ to hyperon-antihyperon pairs and the outstanding performance of modern detectors, ensure that the reconstruction of hyperon-antihyperon pairs is usually achieved with a purity greater than 95%.
This capability allows for the search of subtle violations of conservation laws <cit.>.
The production of entangled hyperon-antihyperon pairs, with the electric dipole form factor embedded in the P- and CP-violating term of the Lorentz invariant amplitude, offers a distinctive opportunity for indirectly extracting hyperon EDM. The electric dipole form factor is generally a complex number for non-zero timelike momentum transfer, and becomes the EDM in the zero momentum transfer limit.
In practice, this kind of form factor can be treated as an EDM by assuming that its momentum transfer dependence is negligible, since the extrapolation to the zero momentum transfer region is unknown.
This Letter reports a proposal to extract the hyperon EDMs through full angular analysis.
EDM measurements will be discussed in e^+e^- collision within the region of J/ψ resonance, considering two different types: (i) J/ψ→ BB where B are Λ, Σ^+ hyperons. (ii) J/ψ→ BB where B are Ξ^-, Ξ^0. Sequential hyperon decays are reconstructed as Λ→ pπ^-, Σ^+→ pπ^0, Ξ^-→Λπ^-, and Ξ^0→Λπ^0, correspondingly.
A comprehensive angular analysis using multi-dimensional information in the full decay chain yields enhanced sensitivity for EDM measurement when compared to one-dimensional analysis, such as a CP-odd triple-product moment encompassing hyperons Λ, Σ^+, Ξ^- and Ξ^0 <cit.>.
Scenarios for the BESIII experiment and a proposed future Super Tau-Charm Facility (STCF) are investigated. The first experiment has already collected the world's largest dataset of 10 billion J/ψ particles <cit.>, while the latter one is designed to collect approximately 3.4×10^12 J/ψ particles per year <cit.>.
Charmonium J/ψ is produced via e^+e^- annihilation, where interference between the contributions from virtual γ and Z-boson exchanges leads to a small longitudinal polarization of J/ψ meson.
The leading contribution from Z-boson exchange in SM, which violates parity symmetry, is suppressed by a factor of M^2_J/ψ/m^2_Z. Polarization effects are encoded in BB hyperon pair spin density matrix defined as
R(λ_1,λ_2;λ^'_1,λ^'_2)∝∑_m,m^' ρ_m,m^'d^j=1_m,λ_1-λ_2(θ)d^j=1_m^',λ^'_1-λ^'_2(θ)
×ℳ_λ_1,λ_2ℳ^*_λ^'_1,λ^'_2δ_m,m^',
where the indices m^(') and λ^(')_1,2 represent the helicities of the J/ψ meson and B(B) hyperons, respectively.
The ρ_m,m^' is the spin density matrix of the J/ψ meson,
d^j_m^('),λ^(')_1-λ^(')_2(θ) is the Wigner rotation function, and ℳ_λ^(')_1,λ^(')_2 is the helicity amplitude of J/ψ→ BB. The angle θ is between the momentum direction p̂ of the hyperon B and the electron beam direction (the Z axis), as shown in Fig. <ref>.
The helicity m^(') is denoted as +, -, and 0 corresponding to the helicity states of J/ψ meson. The 3×3 matrix ρ_m,m^' is reduced to a 2×2 matrix due to the component ρ_00 suppressed by a factor of m^2_e/M^2_J/ψ.
The Lorentz invariant helicity amplitude in J/ψ→ BB decay with four independent form factors fixed at q^2=M^2_J/ψ is written as <cit.>
ℳ_λ_1,λ_2=ϵ_μ(λ_1-λ_2)u̅(λ_1,p_1) (F_Vγ^μ+(i/2m)σ^μνq_νH_σ
+γ^μγ^5F_A+σ^μνγ^5q_νH_T )v(λ_2,p_2),
where m is the B hyperon mass, and p_1 and p_2 are the four-momenta of B and B, respectively.
Processes involving a flavor-diagonal CP-violating vertex contribute to the electric dipole form factor H_T. An effective Lagrangian, encompassing all of these CP-violating operators, plays a crucial role as a bridge between hyperon EDM and the fundamental theories.
The diverse extensions of the SM result in distinct contributions to these operators, leading to different impact on the hyperon EDM.
Taking the Λ hyperon as an example, there are several expressions in the literature for evaluating the contributions arising from the QCD θ term <cit.>, quark chromo-electric dipole moment (qCEDM), four-quark operators <cit.>, and the quark EDM (qEDM) <cit.>. Hyperon EDM measurements offer direct sensitivity to the contributions from qEDM and qCEDM, owing to the suppressed effects of high-dimensional operators and the experimental constraint imposed by neutron EDM measurements on the QCD θ term.
The flavour-diagonal CP-violating contributions in the SM are extremely tiny, while new physics, such as SUSY or left-right symmetric models, may give a large enhancement of the hyperon EDM, as discussed extensively in analyses of the EDM results for the electron, the neutron and the ^199Hg system <cit.>.
An unexpectedly large hyperon EDM could indicate a special coupling between the strange quark and new physics.
Consequently, the decay chains considered here provide an opportunity to explore such effects in the hyperon family by relating H_T to the hyperon EDM <cit.>,
H_T=2e g_V d_B/(3M^2_J/ψ).
The form factor H_T in fact varies with q^2; if this q^2 dependence is neglected, d_B can be interpreted as the EDM of the hyperon B. Because the form factor is complex in the timelike region, the imaginary part of H_T is also determined in this angular analysis.
The same discussion applies to the Σ and Ξ hyperons considered in this Letter.
The form factors F_V and H_σ are related to the redefined G_1,2 as described in <cit.>
F_V=G_1-4m^2(G_1-G_2)/(p_1-p_2)^2, H_σ=4m^2(G_1-G_2)/(p_1-p_2)^2.
The form factors G_1 and G_2 are linked to the experimental observables α_J/ψ, ΔΦ, and Γ(J/ψ→ BB) through the relations α_J/ψ=(M^2_J/ψ|G_1|^2-4m^2|G_2|^2)/(M^2_J/ψ|G_1|^2+4m^2|G_2|^2) and G_1/G_2=|G_1/G_2|e^-iΔΦ <cit.>.
The form factor F_A, primarily arising from Z-boson exchange between cc and light quark pairs qq within the SM can be related to the effective weak mixing angle θ^eff_W through
F_A≈ -(1/6) D g_V (g^2/(4cos^2θ^eff_W)) (1-(8/3)sin^2θ^eff_W)/m^2_Z,
which leads to a parity-violation effect estimated to be of the order of 10^-6; here g_V is defined by ⟨0|c̅γ^μc|J/ψ⟩=g_Vϵ^μ and D is a non-perturbative parameter fitted from data <cit.>. Precise measurements with large statistics therefore allow an extraction of the weak mixing angle sin^2θ^eff_W, which is essential for testing the SM, in particular the loop-level quantum corrections from heavy particles such as the Higgs boson and the top quark <cit.>.
The longitudinal polarization of the J/ψ meson, denoted as P_L, is defined as the relative difference between the diagonal elements of the density matrix, ρ_++ and ρ_–. Moreover, in experiment such as BESIII where there is no beam polarization, the polarization P_L is closely connected to the left-right asymmetry 𝒜^0_LR,
P_L=𝒜^0_LR=(σ_R-σ_L)/(σ_R+σ_L)=(3/8-sin^2θ^eff_W)/(2sin^2θ^eff_Wcos^2θ^eff_W) · M^2_J/ψ/m^2_Z.
Here, σ_R(L) denotes the J/ψ production cross section for right-handed (left-handed) electrons. This asymmetry is induced by the effective weak mixing angle θ^eff_W and is hence suppressed to the order of 10^-4 <cit.>.
When the electron beam is longitudinally polarized with magnitude P_e, as foreseen at the STCF <cit.>, P_L is replaced by ξ,
ξ=[σ_R(1+P_e)/2-σ_L(1-P_e)/2]/[σ_R(1+P_e)/2+σ_L(1-P_e)/2]=(𝒜^0_LR+P_e)/(1+P_e𝒜^0_LR)≈ P_e.
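As a quick numerical cross-check of the quoted orders of magnitude, the two expressions above can be evaluated directly; the inputs sin^2θ^eff_W ≈ 0.2315, M_J/ψ ≈ 3.097 GeV, m_Z ≈ 91.19 GeV and P_e = 0.8 are standard illustrative values inserted here and are not results of this analysis.

import math

# illustrative inputs (assumed standard values, not fitted quantities)
sin2w = 0.2315           # effective weak mixing angle sin^2(theta_W^eff)
M_jpsi = 3.097           # J/psi mass in GeV
m_Z = 91.19              # Z boson mass in GeV
P_e = 0.80               # longitudinal electron-beam polarization assumed at STCF

# longitudinal J/psi polarization from gamma-Z interference, unpolarized beams
A0_LR = (3.0 / 8.0 - sin2w) / (2.0 * sin2w * (1.0 - sin2w)) * (M_jpsi / m_Z) ** 2
print(f"A_LR^0 = P_L ~ {A0_LR:.1e}")      # order 10^-4, consistent with the text

# effective polarization with a longitudinally polarized beam
xi = (A0_LR + P_e) / (1.0 + P_e * A0_LR)
print(f"xi ~ {xi:.4f}  (approximately equal to P_e = {P_e})")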
The longitudinally polarized electron beam, rather than Z-boson exchange, then plays the crucial role in enhancing the sensitivity of the measurements.
Based on rotational symmetry, the helicity representation of the complete angular distribution for type (ii) is given by
dσ/dΩ∝∑_[λ] R(λ_1,λ_2;λ^'_1,λ^'_2)
D^*j=1/2_λ_1,λ_3(ϕ_1,θ_1)D^j=1/2_λ^'_1,λ^'_3(ϕ_1,θ_1)ℋ_λ_3ℋ^*_λ^'_3
D^*j=1/2_λ_2,λ_4(ϕ_2,θ_2)D^j=1/2_λ^'_2,λ^'_4(ϕ_2,θ_2)ℋ̅_λ_4ℋ̅^*_λ^'_4
D^*j=1/2_λ_3,λ_5(ϕ_3,θ_3)D^j=1/2_λ^'_3,λ_5(ϕ_3,θ_3)ℱ_λ_5ℱ^*_λ_5
D^*j=1/2_λ_4,λ_6(ϕ_4,θ_4)D^j=1/2_λ^'_4,λ_6(ϕ_4,θ_4)ℱ̅_λ_6ℱ̅^*_λ_6
where [λ] denotes the set of all helicity indices appearing in the summation, i.e. λ_1,λ_2,λ^'_1,λ^'_2,….
The polar and azimuthal angles θ_1,ϕ_1 and θ_2,ϕ_2 parameterize the momentum directions of the Λ and Λ in the rest frames of the Ξ and Ξ, respectively.
The polar and azimuthal angles θ_3,ϕ_3 and θ_4,ϕ_4 are those of the proton and antiproton in the rest frames of the Λ and Λ, respectively.
The definitions of these helicity angles are illustrated in Fig <ref>, and analogous definitions are employed for the subsequent decay of antiparticles.
The helicity amplitudes ℋ_λ_i and ℱ_λ_i parameterize the dynamics of the weak decays Ξ→Λπ and Λ→ pπ,
and the corresponding charge-conjugated processes are denoted by ℋ̅ and ℱ̅. The formula for type (i) is obtained by retaining only θ_1,2 and ϕ_1,2 and identifying ℋ with ℱ.
Following the definition of asymmetry parameters α and ϕ, originally introduced by Lee and Yang <cit.>,
the hyperon CP violating observables, induced by these asymmetry parameters, are quantified as A^B_CP=(α_B+α̅_B)/(α_B-α̅_B) and Δϕ^B_CP=(ϕ_B+ϕ̅_B)/2 <cit.>.
The two observables are complementary, as they depend on the sine and cosine of the strong phase difference, respectively. In hyperon decays the relative strong phases are small, so the latter observable has the better sensitivity <cit.>. Moreover, in this Letter Δϕ^B_CP can be determined in Ξ decays because the polarization of the daughter Λ hyperon is measurable.
To assess the statistical sensitivity of the measurement, 500 pseudo-experiments are generated for each decay and fitted with a probability density function based on the full angular distribution in Equation (<ref>).
The estimated yields presented in Table <ref>,
as well as the form factors and decay parameters obtained from the published articles <cit.>, are fixed for the generation.
The EDM, along with other form factors, decay parameters and polarization, can be simultaneously determined from fitting.
The study further investigates the sensitivities for different statistics at the BESIII and STCF experiments, taking into account branching fractions, detection efficiencies, and the impact of a longitudinally polarized electron beam.
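The sketch below is a deliberately simplified, one-dimensional illustration of such a pseudo-experiment study: it assumes a single production-angle distribution of the form 1 + α cos^2θ with an illustrative value α = 0.46 and reduced event counts, rather than the full multi-dimensional decay chain of Equation (<ref>). It is intended only to show the workflow of generating toy datasets and extracting the statistical sensitivity from the spread of the fitted values.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
alpha_true, n_events, n_toys = 0.46, 100_000, 50     # illustrative values only

def sample(alpha, n):
    # accept-reject sampling of cos(theta) from (1 + alpha*cos^2 theta)
    out = []
    while len(out) < n:
        c = rng.uniform(-1, 1, n)
        keep = rng.uniform(0, 1 + abs(alpha), n) < (1 + alpha * c**2)
        out.extend(c[keep])
    return np.array(out[:n])

def nll(alpha, data):
    # negative log-likelihood of the normalized pdf (1 + alpha*c^2) / (2 + 2*alpha/3)
    return -np.sum(np.log((1 + alpha * data**2) / (2 + 2 * alpha / 3)))

fits = []
for _ in range(n_toys):
    data = sample(alpha_true, n_events)
    res = minimize_scalar(nll, args=(data,), bounds=(-0.99, 0.99), method="bounded")
    fits.append(res.x)

print(f"fitted alpha = {np.mean(fits):.4f} +/- {np.std(fits):.4f} (statistical sensitivity)")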
Figure <ref> presents the estimated sensitivities for hyperon EDMs.
With the statistics of the BESIII experiment, the Λ EDM sensitivity of 10^-19 e cm (red full circles) represents a remarkable three-order-of-magnitude improvement over
the only existing measurement, performed at Fermilab with similar statistics <cit.>, while cutting-edge sensitivities at the same level of 10^-19 e cm are obtained for the Σ^+, Ξ^-, and Ξ^0 hyperons. The EDM sensitivities improve by a further 1∼2 orders of magnitude (open squares and full triangles) at the STCF experiment.
Figure <ref> illustrates the estimated sensitivities for CPV in hyperon decays.
With an 80% longitudinally polarized electron beam at STCF experiment, the best sensitivities for CPV induced by the α_B parameter (red full triangle) can reach 5×10^-5 (6×10^-5) in J/ψ→ΛΛ (J/ψ→Σ^+Σ^-) decays, while for the ϕ_B parameter (blue full triangle), they can reach 2×10^-4 (3×10^-4) in J/ψ→Ξ^-Ξ^+ (J/ψ→Ξ^0Ξ^0) decays.
The sensitivities for A^B_CP and Δϕ^B_CP observables have reached the prediction of the SM <cit.>.
Figure <ref> shows the estimated sensitivities for F_A and sin^2θ^eff_W. Only the sensitivity to the modulus of F_A is reported, since the toy study shows a negligible dependence on its phase.
The sensitivity to sin^2θ^eff_W obtained from F_A can reach 8×10^-3.
Figure <ref> depicts the estimated sensitivities for the J/ψ polarization and sin^2θ^eff_W. The sensitivity to sin^2θ^eff_W obtained from P_L can reach 2×10^-2 at the STCF experiment. Additionally, by constraining F_A and P_L simultaneously, the sensitivity to sin^2θ^eff_W can be further improved to 5×10^-3 in J/ψ→ΛΛ decays.
The longitudinal polarization of the electron beam can itself be determined from the angular analysis, with the precision reaching 6×10^-5, as depicted in Figure <ref> (red full triangles up); this can in turn be used for a more precise weak-mixing-angle measurement with Bhabha scattering events <cit.>.
In conclusion, to investigate the largely unexplored territory of hyperon EDMs, we have established a comprehensive angular analysis, considering P violation in J/ψ production as well as CP and P violation in its decays. The EDM, together with the CP-violating observables in hyperon decays, the effective weak mixing angle, and the beam polarization, can be simultaneously extracted from the angular analysis. The statistical sensitivities of the physical observables have been investigated for the BESIII and STCF scenarios. Utilizing the expected statistics of the BESIII experiment, the Λ EDM measurement can reach an upper limit of 10^-19 e cm, an improvement of three orders of magnitude over the only existing measurement, performed at Fermilab with similar statistics. EDM measurements of the Σ^+, Ξ^-, and Ξ^0 hyperons at the same level of 10^-19 e cm would be the first ever for these hyperons, and the latter two would constitute the first exploration of hyperons with two strange valence quarks.
At the STCF experiment, with a longitudinally polarized electron beam, a search for hyperon EDMs could potentially reach levels of 10^-21∼10^-20 e cm.
The EDM measurements for hyperons will be a significant milestone and serve as a stringent test of new physics, such as SUSY and left-right symmetric models.
At the same time, CPV in hyperon decays could be probed at the level of 10^-5∼10^-4, which already matches the SM predictions.
The effective weak mixing angle can be measured at the level of 10^-3, and this can be further improved by using the precisely determined beam polarization obtained from the same angular analysis.
This method can also be extended to ψ(2S) decays for investigating the pure strange quark hyperon Ω, taking into account additional form factors due to its spin-3/2 property.
We would like to thank Prof. Fengkun Guo, Prof. Xiaogang He, Prof. Jianping Ma and Prof. Yangheng Zheng for very useful discussion.
This work is supported by National Key R&D Program of China No. 2022YFA1602204; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11935018, 12221005 and 11975112; Fundamental Research Funds for the Central Universities.
|
http://arxiv.org/abs/2307.07394v1 | 20230714151102 | The resource theory of tensor networks | [
"Matthias Christandl",
"Vladimir Lysikov",
"Vincent Steffan",
"Albert H. Werner",
"Freek Witteveen"
] | quant-ph | [
"quant-ph"
] |
[email protected]
Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark
[email protected]
Faculty of Computer Science, Ruhr University Bochum, Universitätsstraße 150, 44801 Bochum, Germany
[email protected]
[email protected]
[email protected]
Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark
Tensor networks provide succinct representations of quantum many-body states and are an important computational tool for strongly correlated quantum systems. Their expressive and computational power is characterized by an underlying entanglement structure, on a lattice or more generally a (hyper)graph, with virtual entangled pairs or multipartite entangled states associated to (hyper)edges. Changing this underlying entanglement structure into another can lead to both theoretical and computational benefits.
We study a natural resource theory which generalizes the notion of bond dimension to entanglement structures using multipartite entanglement.
It is a direct extension of resource theories of tensors studied in the context of multipartite entanglement and algebraic complexity theory, allowing for the application of the sophisticated methods developed in these fields to tensor networks.
The resource theory of tensor networks concerns both the local entanglement structure of a quantum many-body state and the (algebraic) complexity of tensor network contractions using this entanglement structure.
We show that there are transformations between entanglement structures which go beyond edge-by-edge conversions, highlighting efficiency gains of our resource theory that mirror those obtained in the search for better matrix multiplication algorithms.
We also provide obstructions to the existence of such transformations by extending a variety of methods originally developed in algebraic complexity theory for obtaining complexity lower bounds.
The resource theory of tensor networks
Freek Witteveen
August 12, 2023
======================================
§ INTRODUCTION
What is the structure of quantum many-body states?
Physically relevant states, such as ground states of local Hamiltonians, typically have a very non-generic entanglement structure.
Indeed, such states often exhibit entanglement with a local character, expressed by an area law for the entanglement entropy (as opposed to volume-law entanglement entropy for generic states) <cit.>.
This observation has led to the Ansatz class of tensor network states for representing quantum many-body states.
Such a tensor network state is created by first locally distributing states with bounded entanglement, and then applying local transformations.
Here, the amount of initial entanglement is captured by the bond dimension.
Equivalently, the state is constructed by taking a collection of local tensors and contracting along a set of non-physical indices.
This encodes the global properties of the many-body state into local entangled states together with a local transformation.
Tensor network representations have become one of the main theoretical and numerical tools for understanding quantum many-body physics.
The first examples, now known as Matrix Product States (MPS), were discovered in the study of spin chains (in particular the AKLT model) as finitely correlated states <cit.>. Independently, around the same time White invented the Density Matrix Renormalization Group <cit.> as a numerical method, which in hindsight is a method to optimize over MPS.
Since its conception, tensor network research has developed these two complementary perspectives, one strand of research using tensor network states as a theoretical tool to construct interesting many-body states and to classify phases of matter, and another strand of research developing sophisticated numerical methods to simulate strongly interacting quantum many-body systems.
From a theoretical standpoint, tensor network states approximately parametrize ground states of local Hamiltonians.
Understanding phases of matter means that one would like to understand the set of ground states of local Hamiltonians under an appropriate equivalence relation.
Using sets of suitable tensor network states as a proxy for ground states allows one to reason more easily about phases of matter by reducing questions about the global quantum state to questions involving only the local tensors.
In one spatial dimension it is rigorously known that ground states of gapped Hamiltonians satisfy an area law <cit.> and can be approximated by MPS representations with bond dimensions growing polynomially with system size.
In two or more spatial dimensions it is widely believed that Projected Entangled Pair States (PEPS) are good ground state approximations <cit.>, with area laws proven in special cases <cit.>.
This has amongst others been used to understand topological phases and symmetries in such ground states, since the global properties of the many-body state are encoded in the local tensors that make up the tensor network <cit.>, see also the reviews <cit.>.
Tensor networks are also a powerful numerical tool, since they hugely reduce the number of free parameters in the many-body state, allowing for variational methods for ground state approximation.
The ground state correlation functions, energies or other properties can then be extracted by tensor network contractions.
In one spatial dimension there exist rigorous polynomial time algorithms for finding ground state approximations using MPS with polynomial bond dimension for gapped Hamiltonians <cit.> and in practice DMRG provides an excellent simulation method for optimizing MPS representations <cit.>.
In two or more spatial dimensions, it is known that contraction of tensor networks is computationally hard <cit.>.
Nevertheless, there exist numerical methods for (approximately) contracting and optimizing tensor networks in two spatial dimensions <cit.> and these have been successfully applied to strongly interacting quantum systems <cit.>.
Tensor networks have been developed mostly in the context of condensed matter physics for the study of lattice systems.
However, they have found wide application in other many-body physics problems, for instance for simulating gauge theories and quantum field theories <cit.>, as (toy) models for holographic quantum gravity <cit.> and in quantum chemistry <cit.>.
Besides this, tensor network methods can be used to simulate (small) quantum computers <cit.>.
The applications of tensor networks extend beyond quantum many-body physics: many mathematical and computational problems can be phrased in terms of tensors, and tensor networks provide general methods to decompose global tensors into a collection of local tensors.
A promising example is the use of tensor networks as a tool for machine learning <cit.> and graphical models <cit.>.
Tensor networks also can encode counting problems (and therefore tensor network methods may be used for heuristic counting and optimization algorithms <cit.>) and they are used in the design and decoding of quantum error correcting codes <cit.>.
The essence of tensor network states is thus that they are quantum states exhibiting `local entanglement': they are obtained by applying local transformations to networks of bipartite entangled states. There are both theoretical and practical reasons, which we will review in <ref>, for allowing a more general entanglement structure based on local multipartite entanglement <cit.>.
Indeed, the standard way to construct tensor network states is by placing maximally entangled pairs of dimension D on the edges of a graph G = (V,E) with vertex set V and edge set E.
This is the origin of the nomenclature Projected Entangled Pair States (PEPS).
Typically, in the situation where we would like to simulate a condensed matter system, the graph will be a lattice.
The dimension D of the maximally entangled states is the bond dimension of the tensor network.
The actual tensor network state is now constructed by applying linear maps M_v on each vertex of the graph.
The resulting state is given by
|Ψ⟩ = (⊗_v ∈ V M_v) ⊗_e ∈ E|ϕ^+_e⟩
where |ϕ^+_e⟩ = ∑_i=1^D |ii⟩ is an (unnormalized) maximally entangled state, or EPR pair at edge e.
This construction is illustrated in <ref>.
One can think of the initial state ⊗_e ∈ E|ϕ^+_e⟩ as a resource for creating the many-body state |Ψ⟩: it is an entanglement structure for |Ψ⟩.
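As a purely illustrative instance of the construction in <ref>, the following sketch builds a small tensor network state on a triangle graph (three vertices, three edges) from maximally entangled pairs and vertex maps, and checks that the contraction form agrees with the literal "apply maps to the product of EPR pairs" form of the equation above; the bond dimension, the physical dimension and the use of random maps are arbitrary choices made only for the example.

import numpy as np

rng = np.random.default_rng(0)
D, d = 2, 3                        # bond dimension and physical dimension (arbitrary)

# vertex maps M_v : C^D (x) C^D -> C^d; each vertex holds one leg of its two incident edges
M = [rng.standard_normal((d, D, D)) for _ in range(3)]

# contracting the EPR pairs on edges (1,2), (2,3), (3,1) identifies the shared indices
psi = np.einsum("aij,bjk,cki->abc", M[0], M[1], M[2])

# the same state written literally as in the displayed equation: first the product of
# (unnormalized) EPR pairs, with legs grouped per vertex, then the vertex maps applied
phi_G = np.zeros((D, D, D, D, D, D))
for i in range(D):
    for j in range(D):
        for k in range(D):
            phi_G[i, j, j, k, k, i] = 1.0   # legs ordered (v1: e31,e12)(v2: e12,e23)(v3: e23,e31)
psi_check = np.einsum("axy,bzw,cuv,xyzwuv->abc", M[0], M[1], M[2], phi_G)
assert np.allclose(psi, psi_check)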
A natural generalization of <ref> is to consider different local entanglement structures in this construction.
Here we are not restricted to only having states along edges, but we may also tile the lattice with states shared by more than two vertices.
Formally speaking, we may start with a hypergraph G = (V,E) where each (hyper)edge e ∈ E is a subset of (possibly more than two) vertices in V.
Our main focus will be on two-dimensional lattices of `plaquettes' (we will use the terms plaquette and (hyper)edge interchangeably).
We then consider again states of the form
|ϕ⟩_G = ⊗_e ∈ E|ϕ_e⟩
as the entanglement structure, but now |ϕ_e⟩ is a k-party state if e consists of k vertices, visualized in <ref>.
For instance, we could take a rectangular lattice of plaquettes as depicted in <ref> and tile it with GHZ states of level r,
|GHZ_r⟩ = ∑_i=1^r |iiii⟩,
as a generalization of the usual maximally entangled states.
We then again obtain tensor network states by applying maps at each vertex as in <ref>.
This is illustrated in <ref>.
We will provide a more precise definition of this construction in <ref>.
A key feature is that for plaquettes containing more than two vertices there is more freedom of choice of the entanglement structure than in the usual tensor network approach.
In the usual tensor network approach the bond dimension is the only choice (as any other choice of state on the edge can be absorbed into the tensor).
This is not the case if the plaquette has more than two parties: in that case there are plenty of quantum states which are inequivalent under applying local linear maps. In particular, there might be various entanglement structures associated with the same lattice yielding a given target quantum state |Ψ⟩.
In the usual PEPS picture of tensor network states, the expressiveness of the class of states is determined by the set of bond dimensions on the edges.
Increasing the bond dimension allows one to represent a larger class of states.
In this work we study a resource theory which allows one to compare different entanglement structures.
Here, we will say that an entanglement structure |ϕ⟩_G with states |ϕ_e⟩ on the edges is a stronger resource than an entanglement structure |ψ⟩_G which has states |ψ_e⟩ on the edges if there exist local transformations at the vertices which map |ϕ⟩_G to |ψ⟩_G.
That is, there should exist linear maps M_v at the vertices such that
(⊗_v ∈ V M_v) ⊗_e ∈ E|ϕ_e⟩ = ⊗_e ∈ E|ψ_e⟩
as illustrated in <ref>.
In other words, the entanglement structure |ψ⟩_G can be written as a tensor network state using the resource |ϕ⟩_G and it is clear that this implies that |ϕ⟩_G is a more powerful resource than |ψ⟩_G.
If there exists a local transformation on every single edge e from |ϕ_e⟩ to |ψ_e⟩ then it is clear that we can just apply these single-plaquette transformations in parallel.
The main question we will address in this work is the following:
In the resource theory of tensor networks, what transformations between entanglement structures are possible that are not given by single-plaquette transformations?
The question of how to transform tensors through local transformations has been extensively studied in the theory of multipartite entanglement as well as in algebraic complexity theory.
In complexity theory, finding certain transformations between tensors is closely related to finding faster algorithms for matrix multiplication.
In this context an extensive resource theory of tensors has been created.
One of the main goals of this work is to introduce certain powerful techniques, developed to better understand the complexity of matrix multiplication, to the theory of tensor networks.
As our first main result, we show that there indeed exist transformations of entanglement structures which go beyond single-plaquette restrictions on lattices.
That is, we construct examples where one can not transform |ϕ_e⟩ into |ψ_e⟩, but once we place copies of these states on a lattice the transformation becomes possible.
There are examples of such transformations where EPR pairs are exchanged between different parties (and used as a resource).
A plausible intuition may be that since in a two-dimensional lattice the plaquettes are typically adjacent in at most two vertices, any lattice transformation can be performed by combining an exchange of EPR pairs with single-plaquette transformations.
We show that this is not the case by giving an explicit example going beyond such transformations. This demonstrates the richness of the resource theory of entanglement structures.
On the other hand it is also important to be able to show that there do not exist transformations between different entanglement structures.
As our second main result we provide methods to find obstructions for the existence of entanglement structure transformations.
The fact that there exist transformations between entanglement structures which go beyond single-plaquette transformations implies that it does not suffice to prove obstructions on the level of individual plaquettes.
We extend and apply various powerful methods from algebraic complexity theory to prove obstructions in prototypical examples.
As a first example we use flattening ranks to prove nontrivial bounds for transforming entanglement structures with GHZ states into entanglement structures using EPR pairs.
Next, we use a version of the substitution method to prove that for the λ-state on the kagome lattice (which is an entanglement structure for representing the Resonating Valence Bond state) there exists no representation of bond dimension two, even though there does exist an approximate representation of bond dimension two, answering one of the main open problems from <cit.>.
Finally, we study a class of asymptotic transformations where one has many copies of an entanglement structure.
Such transformations are characterized by the asymptotic spectrum of tensors.
We show that certain points in this spectrum can be used as obstructions to entanglement structure transformations.
§.§ Organization of the paper
We will provide background material on the relevance of entanglement structures beyond maximally entangled states in tensor network theory in <ref>.
There we also provide an introduction to the resource theory of tensors.
We then introduce the resource theory of tensor networks, and in particular of entanglement structures, in <ref>.
We will discuss different types of local transformations between entanglement structure that can be considered.
We relate this resource theory to the (algebraic) complexity of tensor network contractions, observing along the way that this is a VNP-complete problem.
After having introduced the resource theory of tensor networks, we turn to the two main questions that can be answered in this resource theory.
Firstly, we provide a number of explicit transformation which reduce one entanglement structure to another in <ref>.
Secondly, in <ref>, we study the converse question and give obstructions to the existence of transformations.
We present a number of general techniques for showing such obstructions and apply them to concrete examples.
In <ref> we address structural questions relating to symmetries and ranks of entanglement structures.
We end with a summary and conclusion in <ref>.
§ BACKGROUND
We will start by providing a detailed explanation of the notion of an entanglement structure and motivate its relevance for applications.
Then, since readers may not be familiar with the resource theory of tensors, we will give a brief introduction to this resource theory.
Finally, we give a concise overview of previous work which is relevant to the connection between the resource theory of tensors and entanglement structures for tensor networks.
§.§ Entanglement structures
We start by defining the notion of a tensor network and an entanglement structure on a hypergraph more carefully.
We start from some hypergraph G, which consists of a set of vertices, which we denote by V, and a set of hyperedges E.
Each hyperedge e ∈ E consists of a subset of vertices, which are the vertices incident to e. In many examples we will assume that the cardinality |e| is a constant k, in which case we have a k-uniform hypergraph.
We will also assume that the vertices in any edge e ∈ E are all different. The degree of a vertex is the number of edges e such that v ∈ e.
We allow double edges.
We consider Hilbert spaces of the form
_̋G = ⊗_e ∈ E⊗_v ∈ e_̋e,v
where the _̋e,v are Hilbert spaces. In other words, for each vertex v ∈ V we have a Hilbert space for each edge incident to v.
We let
_̋v = ⊗_e : v ∈ e_̋e,v and _̋e = ⊗_v ∈ e_̋e,v
be the Hilbert spaces at some fixed vertex or edge.
A tensor network state is now constructed from the following data: a collection of states |ϕ_e⟩∈_̋e (not necessarily normalized) for e ∈ E and a collection of linear maps
M_v : _̋v →̋̃_v for v ∈ V.
Here, ̋̃_v = ^d_v is the physical Hilbert space and d_v is the physical dimension at v.
Let
|ϕ⟩_G = ⊗_e ∈ E|ϕ_e⟩.
We call |ϕ⟩_G an entanglement structure.
In most examples the |ϕ_e⟩ are copies of the same k-party state.
We will sometimes assume that the tensors |ϕ_e⟩ are concise, which means that the reduced density matrix on any single party has full rank.
While we have defined |ϕ⟩_G as a tensor product over the edges, we can also regroup the Hilbert spaces along the vertices and think of |ϕ⟩_G as a state in ⊗_v ∈ V_̋v.
We then get a tensor network state |Ψ⟩ by applying the maps M_v to each vertex
|Ψ⟩ = (⊗_v ∈ V M_v)|ϕ⟩_G.
Tensor network states with entanglement structure |ϕ⟩_G are precisely the states which are restrictions of the entanglement structure, where we have grouped according to the vertices V.
The usual version of tensor networks is the case where the hypergraph is a graph (so for each edge e ∈ E we have |e| = 2) and where we place (unnormalized) EPR pairs |ϕ_e⟩ = ∑_i = 1^D |ii⟩ = |EPR_D⟩ on each edge.
Here D is known as the bond dimension of the tensor network.
There are different perspectives on tensor network states.
Another perspective is that one takes a collection of tensors, and contracts along edges in a graph.
From the perspective of tensor network states as contractions of local tensors, we can also interpret different entanglement structures as different contraction rules, see <ref>.
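To make the "different contraction rules" point concrete, here is a minimal sketch (with arbitrary small dimensions and random tensors chosen only for illustration): a GHZ-type hyperedge shared by three vertices corresponds to a single summation index that appears in all three site tensors, whereas the standard pair construction contracts three separate pairwise indices.

import numpy as np

rng = np.random.default_rng(0)
d, r, D = 2, 3, 3                  # physical dim, GHZ level, bond dimension (illustrative)

# GHZ_r hyperedge on a triangle: one index i shared by all three vertex tensors
A, B, C = (rng.standard_normal((d, r)) for _ in range(3))
psi_ghz = np.einsum("ai,bi,ci->abc", A, B, C)        # = sum_i A[:,i] (x) B[:,i] (x) C[:,i]

# standard construction: EPR pairs of dimension D on the three edges of the triangle
T1, T2, T3 = (rng.standard_normal((d, D, D)) for _ in range(3))
psi_epr = np.einsum("aij,bjk,cki->abc", T1, T2, T3)  # three pairwise contractions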
The idea to use entanglement structures different from maximally entangled states has first come up in the construction of concrete states <cit.> and its theory has been developed more systematically in <cit.> (which introduced the terminology of entanglement structures) and <cit.>.
There are various reasons to study this generalization of the usual notion of tensor network states.
The main reason to allow different entanglement structures is that this can lead to representations of tensor network states with smaller bond dimensions (i.e. an entanglement structure with lower Hilbert space dimensions).
More efficient representations with minimal bond dimension are crucial for numerics in two or more spatial dimensions and developing the theory of entanglement structures beyond maximally entangled states could lead to improved numerical methods <cit.>.
The special case with 3-tensors on a kagome lattice has been proposed under the name Projected Entangled Simplex States (PESS) and one can extend PEPS optimization algorithms to this class of states, achieving superior approximation in frustrated lattice models with the appropriate entanglement structure, especially for spin liquids <cit.>.
A prominent theoretical model for spin liquid behavior is the Resonating Valence Bond (RVB) state and the closely related orthogonal dimer state, which is a superposition over dimer coverings of the lattice, where a dimer covering is a subset of edges such that each vertex is contained in exactly one edge of the covering.
The state is then a uniform superposition over dimers, where singlet states are distributed on the edges in the dimer
|Ψ⟩ = ∑_dimers ⊗_e ∈ dimer |s_e⟩, with the singlet |s⟩ = |01⟩ - |10⟩ placed on each edge of the dimer covering.
In this case it is most interesting to study this on a frustrated lattice, such as the kagome lattice above.
This state can be obtained from an entanglement structure placing a tensor |λ⟩ at each plaquette
|λ⟩ = ∑_i,j,k = 0^2 ϵ_ijk|ijk⟩ + |222⟩,
where ϵ_ijk is the antisymmetric tensor, as shown in <cit.>.
This perspective can be used to derive a PEPS representation of the orthogonal dimer state, and therefore of the RVB state, with reduced bond dimension <cit.>.
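Assuming the convention that the indices run over {0,1,2} with ϵ_ijk the fully antisymmetric symbol normalized to ϵ_012 = +1 (this normalization is an assumption made for the sketch), the 3×3×3 tensor |λ⟩ can be built explicitly and checked to be concise, i.e. every single-party flattening has full rank 3.

import numpy as np

lam = np.zeros((3, 3, 3))
# antisymmetric part: epsilon_ijk on the indices {0,1,2}
for (i, j, k), sign in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                        (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.items():
    lam[i, j, k] = sign
lam[2, 2, 2] += 1.0                 # the |222> term

# single-party flattenings: one leg versus the other two
for leg in range(3):
    mat = np.reshape(np.moveaxis(lam, leg, 0), (3, 9))
    print(f"flattening rank at leg {leg}: {np.linalg.matrix_rank(mat)}")   # 3 for each leg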
One should think of these examples as representing situations where the local entanglement structure of the many-body state is not accurately represented by pairwise entanglement between neighboring sites, but rather by some locally shared multipartite entanglement.
Beyond this motivation there are also important theoretical reasons to study different entanglement structures.
A central notion in the study of tensor network states is that of injectivity, which means that the state as constructed in <ref> is obtained with injective maps M_v.
In this case one can essentially invert the map M_v and the properties of |Ψ⟩ are very closely related to that of the entanglement structure.
Many theoretical results for tensor networks are only valid for injective tensor networks (or normal tensor networks, which become injective upon blocking sites).
For example, for an injective tensor network state one can always find a local Hamiltonian (the so-called parent Hamiltonian) which has |Ψ⟩ as its unique ground state.
There is a clear generalization of injectivity to states where we allow different entanglement structures.
This concept has been introduced in <cit.> as the class of states where one considers tensor network states with an arbitrary entanglement structure, and the maps M_v are invertible (or more generally injective).
A number of physically important models have injective PEPS representations upon choosing the right entanglement structure, while they are not injective with respect to the standard construction using maximally entangled states.
This is important in the classification of two-dimensional symmetry protected topological phases <cit.>.
A first example is the CZX model <cit.>, using GHZ states on a square lattice and applying controlled Pauli-Z operations as well as Pauli-X.
Another prominent example is the AKLT model on the square lattice <cit.>. Its ground state has an injective representation, illustrated in <ref>, by using an entanglement structure with a 4-tensor given by four singlet states where at each vertex one projects onto the symmetric subspace (in other words, it is the one-dimensional AKLT state on a periodic chain of length 4) <cit.>.
Finally, an especially natural example is using GHZ states as entanglement structure: this simply corresponds to contracting multiple indices at the same time as illustrated in <ref>. This is relevant for tensor network contractions in quantum circuits with controlled gates <cit.>.
Entanglement structures using GHZ states for tensor networks have also been used in relating tensor networks to graphical models <cit.> and in studying random tensor networks <cit.>.
§.§ Resource theory of tensors
Tensors are widely studied in mathematics, physics and computer science.
Two domains where tensors are a central object of interest is in algebraic complexity theory and quantum information theory.
In both these fields one of the main questions is when and how two tensors can be transformed into one another using local operations.
Throughout this work we will identify tensors with quantum states shared between different parties where we do not necessarily normalize the quantum states, so a k-party quantum state is just a k-party tensor.
As an example of a transformation between quantum states, suppose we have three parties Alice, Bob and Charlie, and we have quantum states |ϕ_ABC⟩ and |ψ_ABC⟩.
Then we can ask whether there exist linear maps M_A, M_B and M_C which Alice, Bob and Charlie can locally apply to convert the states, i.e. such that
(M_A M_B M_C) |ϕ_ABC⟩ = |ψ_ABC⟩.
In quantum information theory such transformations can implemented using local operations and classical communication, if we additionally allow postselection on measurement outcomes and are therefore known as Stochastic Local Operations and Classical Communication (SLOCC).
This leads to a resource theory of entanglement in multiparty quantum states, under the class of SLOCC operations.
For instance, entanglement with respect to SLOCC for three or four qubit systems has been completely classified <cit.>.
The same resource theory has been studied in the context of theoretical computer science with the goal of understanding the complexity of matrix multiplication.
Here the resource theory, as introduced by Strassen <cit.>, is formulated in terms of tensors rather than quantum states.
In this context, one says that |ϕ⟩ restricts to |ψ⟩ if there exists a transformation as in <ref> (then called a restriction).
More generally, if we have a collection of k parties and
|ϕ⟩∈⊗_i=1^k _̋i, |ψ⟩∈⊗_i=1^k '̋_i
then |ϕ⟩ restricts to |ψ⟩ if there exist linear maps
M_i : _̋i →'̋_i
such that
(⊗_i=1^k M_i ) |ϕ⟩ = |ψ⟩.
We write |ϕ⟩≥|ψ⟩ and this actually defines a partial order on the set of k-party states.
The interpretation justifying the ≥ sign is that |ϕ⟩ as a resource is at least as powerful as |ψ⟩.
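As a small worked example of a restriction, the three-party state |W⟩ = |100⟩+|010⟩+|001⟩ is a restriction of the three-level GHZ state ∑_{i=1}^{3}|iii⟩: writing W as a sum of three product terms directly supplies the three local maps. The snippet below verifies this numerically; the specific choice of maps is just one of many possible ones.

import numpy as np

def kron(*ops):
    out = np.array([1.0])
    for op in ops:
        out = np.kron(out, op)
    return out

e0, e1 = np.eye(2)                                       # qubit basis vectors
ghz3 = sum(kron(*(v,) * 3) for v in np.eye(3))           # sum_i |iii>, i = 0,1,2
w = kron(e1, e0, e0) + kron(e0, e1, e0) + kron(e0, e0, e1)

# local maps M_A, M_B, M_C : C^3 -> C^2 sending the i-th GHZ level
# to the i-th product term of W (columns are the images of |0>, |1>, |2>)
M_A = np.column_stack([e1, e0, e0])
M_B = np.column_stack([e0, e1, e0])
M_C = np.column_stack([e0, e0, e1])

restricted = np.kron(np.kron(M_A, M_B), M_C) @ ghz3
assert np.allclose(restricted, w)                        # the 3-level GHZ state restricts to |W>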
The resource theory of tensors turns out to have intimate connections to algebraic complexity theory.
One of the most important outstanding open problems in computer science is to understand the computational complexity of matrix multiplication.
One would like to know how many multiplication operations are required in order to multiply n × n matrices.
The naive algorithm uses n^3 multiplication operations.
However, a surprising realization by Strassen was that one can multiply two 2 × 2 matrices with only 7 (rather than 8) multiplications <cit.>.
By recursively applying this construction one sees that asymptotically one can perform matrix multiplication with only (n^α) multiplications where α = log_2(7) ≈ 2.81.
The study of the complexity of n × n matrix multiplication can be recast as a problem about tensors.
When computing C = AB for n × n matrices A and B, from
C_ij = ∑_k=1^n A_ikB_kj
we see that matrix multiplication is closely related to the tensor
|n⟩_Δ = ∑_i,j,k = 1^n |ik⟩_A |jk⟩_B |ij⟩_C
which is a 3-party tensor where each pair shares an EPR pair (i.e. maximally entangled state) of dimension n (in the algebraic complexity literature this is known as the matrix multiplication tensor ⟨ n, n, n⟩).
Every restriction of a 3-party GHZ state of r levels,
|GHZ_r(3)⟩ = ∑_i=1^r |i⟩_A|i⟩_B|i⟩_C,
to |n⟩_Δ gives a method to perform n × n matrix multiplication with r multiplication operations.
This motivates one to define the rank of a k-tensor |ϕ⟩ as
R(|ϕ⟩) = min(r : |GHZ_r(k)⟩ ≥ |ϕ⟩),
where the GHZ state of level r on k parties is defined as
|GHZ_r(k)⟩ = ∑_i=1^r |i⟩ ⊗ ⋯ ⊗ |i⟩ (k factors).
Equivalently, the rank R(|ϕ⟩) is the minimal number of terms r needed to write |ϕ⟩ as a sum of product states:
|ϕ⟩ = ∑_i=1^r |e_i,1⟩…|e_i,k⟩.
Indeed, if we have a restriction |_r(k)⟩≥|ϕ⟩ with restriction maps M_j, for j = 1,…, k, then we may take |e_i,j⟩ = M_j |i⟩ (and vice versa one can define the M_j from the decomposition (<ref>)).
This definition is such that we can do n × n matrix multiplication using R(|n⟩_Δ) multiplication operations.
For example, the insight of <cit.> is that R(|2⟩_Δ) = 7.
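The rank-7 decomposition behind this bound is easy to check numerically. Below, the 2×2 matrix multiplication tensor is built from the definition above and compared against the seven rank-one terms of Strassen's construction; the ordering convention, index 2i+k for A_ik, 2k+j for B_kj and 2i+j for C_ij, is a choice made for this sketch. Since log_2 7 ≈ 2.807, this single identity already implies ω ≤ 2.81.

import numpy as np

# 2x2 matrix multiplication tensor
T = np.zeros((4, 4, 4))
for i in range(2):
    for j in range(2):
        for k in range(2):
            T[2 * i + k, 2 * k + j, 2 * i + j] = 1

# Strassen's seven rank-one terms: coefficient vectors on the entries of A, B and C
a = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [1, 0, 0, 0], [0, 0, 0, 1],
              [1, 1, 0, 0], [-1, 0, 1, 0], [0, 1, 0, -1]])
b = np.array([[1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, -1], [-1, 0, 1, 0],
              [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
c = np.array([[1, 0, 0, 1], [0, 0, 1, -1], [0, 1, 0, 1], [1, 0, 1, 0],
              [-1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]])

assert np.array_equal(np.einsum("ra,rb,rc->abc", a, b, c), T)   # rank of |2>_Delta is at most 7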
There is also an approximate version of restriction known as degeneration.
If |ϕ⟩ and |ψ⟩ are k-tensors, then |ϕ⟩ ⊵ |ψ⟩ if there exist maps M_i(ε) for i = 1,…, k, continuously depending on a parameter ε, such that
lim_ε→ 0 (⊗_i=1^k M_i(ε)) |ϕ⟩ = |ψ⟩,
so for each ε > 0 we have a restriction, and its limit as ε goes to zero is the target tensor |ψ⟩.
Accordingly, one may define the border rank
R̲(|ϕ⟩) = min(r : |GHZ_r(k)⟩ ⊵ |ϕ⟩).
It turns out that if |ϕ⟩ ⊵ |ψ⟩, there exist maps T_i(ε) which are polynomial in ε and a positive integer d such that
(⊗_i=1^k T_i(ε)) |ϕ⟩ = ε^d |ψ⟩ + ∑_l=1^e ε^(d+l)|ψ_l⟩
for some degree e and tensors |ψ_l⟩.
In this case, we will write |ϕ⟩ ⊵^e |ψ⟩ to indicate the degree.
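A standard example is the W state, whose border rank is two even though its rank is three: the ε-dependent rank-two combination below reproduces |W⟩ up to O(ε) corrections, so |GHZ_2(3)⟩ ⊵ |W⟩. The snippet simply checks the limit numerically (the particular ε values are arbitrary).

import numpy as np

def product_state(v, k=3):
    out = np.array([1.0])
    for _ in range(k):
        out = np.kron(out, v)
    return out

e0, e1 = np.eye(2)
w = sum(np.kron(np.kron(x, y), z)
        for x, y, z in [(e1, e0, e0), (e0, e1, e0), (e0, e0, e1)])

for eps in [1e-1, 1e-2, 1e-3]:
    # rank-two expression ((|0>+eps|1>)^{x3} - |000>)/eps, which tends to |W> as eps -> 0
    approx = (product_state(e0 + eps * e1) - product_state(e0)) / eps
    print(eps, np.linalg.norm(approx - w))      # the error shrinks linearly with eps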
Complexity theory motivates another type of transformations.
In complexity theory one is typically interested in the asymptotic behavior as the instance size grows.
For this reason one may investigate the asymptotic rank
R̃(|ϕ⟩) = lim_n →∞ R(|ϕ⟩^⊗ n)^1/n.
If we let ω = log_2(R̃(|2⟩_Δ)) then the complexity of matrix multiplication is given by O(n^{ω + o(1)}).
The current best upper bound on the matrix multiplication exponent ω is approximately 2.37 <cit.> while it is possible that ω = 2 (which coincides with the best known, and trivial, lower bound).
The asymptotic rank is closely related to asymptotic restriction; for tensors |ϕ⟩ and |ψ⟩ we say that |ϕ⟩ ≳ |ψ⟩ if |ϕ⟩^⊗(n + o(n)) ≥ |ψ⟩^⊗ n for all n ∈ ℕ.
In other words, for all n there exist maps M_i^(n) acting on n + f(n) copies of the Hilbert space of |ϕ⟩ for some f(n) = o(n) such that
(⊗_i=1^k M_i^(n)) |ϕ⟩^⊗(n + f(n)) = |ψ⟩^⊗ n.
Asymptotic restriction is also natural from the perspective of entanglement theory: it simply corresponds to asymptotic SLOCC conversions!
In this perspective, log_2(R̃(|ϕ⟩)) is the optimal rate at which one can convert level-2 GHZ states to the state |ϕ⟩ using SLOCC.
Note that when we write |ϕ⟩^⊗ n in this context for a k-tensor |ϕ⟩, we really mean that we group together the n copies of the k systems into a single party so we consider |ϕ⟩^⊗ n as a k-tensor again [In the resource theory of tensors one typically gives this product its own name (the Kronecker product) and its own symbol ⊠.
We will be a bit loose in notation and not explicitly distinguish between tensor product and Kronecker product.].
This is of course the usual way of thinking about asymptotics in quantum information theory.
How are the three different transformations (restriction, degeneration and asymptotic restriction, in equations (<ref>), (<ref>) and (<ref>) respectively) and the corresponding ranks related?
By using polynomial interpolation <cit.> we see that if in <ref> the term with the highest degree in ε has degree d + e, so |ϕ⟩ ⊵^e |ψ⟩, we can get |ψ⟩ from a restriction of a direct sum of e + 1 copies of |ϕ⟩:
a degeneration |ϕ⟩ ⊵ |ψ⟩ implies the restriction ⊕_i=0^e |ϕ⟩ ≥ |ψ⟩,
or equivalently
|ϕ⟩ ⊗ |GHZ_e+1(k)⟩ ≥ |ψ⟩.
This can be used to show that |ϕ⟩ ⊵ |ψ⟩ implies |ϕ⟩ ≳ |ψ⟩, and hence R̃(|ϕ⟩) ≤ R̲(|ϕ⟩) ≤ R(|ϕ⟩).
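For completeness, here is a sketch of the interpolation step behind this statement, with the notation of the displayed degeneration above; the particular way of absorbing the coefficients into the first party is an arbitrary choice. Pick e+1 distinct nonzero values ε_0,…,ε_e and coefficients c_j with ∑_j c_j ε_j^{d+l} = δ_{l,0} for l = 0,…,e, which is possible because the corresponding Vandermonde-type matrix is invertible. Then the maps

M_1 := \sum_{j=0}^{e} c_j\, T_1(\varepsilon_j) \otimes \langle j| ,
\qquad
M_i := \sum_{j=0}^{e} T_i(\varepsilon_j) \otimes \langle j| \quad (i = 2,\dots,k)

satisfy

\Bigl(\bigotimes_{i=1}^{k} M_i\Bigr)\bigl(|\phi\rangle \otimes |\mathrm{GHZ}_{e+1}(k)\rangle\bigr)
= \sum_{j=0}^{e} c_j \Bigl(\bigotimes_{i=1}^{k} T_i(\varepsilon_j)\Bigr)|\phi\rangle
= \sum_{j=0}^{e} c_j \Bigl(\varepsilon_j^{d}\,|\psi\rangle + \sum_{l=1}^{e}\varepsilon_j^{d+l}\,|\psi_l\rangle\Bigr)
= |\psi\rangle .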
The rank, asymptotic and border rank measure optimal conversions of GHZ states to a tensor of interest.
One can also study the converse direction, and define the subrank as
Q(|ϕ⟩) = max(r : |ϕ⟩ ≥ |GHZ_r(k)⟩),
the largest GHZ state which can be extracted by SLOCC from |ϕ⟩.
Similarly, we may define the asymptotic subrank as
Q̃(|ϕ⟩) = lim_n →∞ Q(|ϕ⟩^⊗ n)^1/n
and the border subrank as
Q̲(|ϕ⟩) = max(r : |ϕ⟩ ⊵ |GHZ_r(k)⟩).
All these different notions of rank are related as
Q(|ϕ⟩) ≤ Q̲(|ϕ⟩) ≤ Q̃(|ϕ⟩) ≤ R̃(|ϕ⟩) ≤ R̲(|ϕ⟩) ≤ R(|ϕ⟩).
While for 2-tensors all these notions collapse to the same (standard) notion of rank, for k ≥ 3 all inequalities can be strict.
In summary, understanding the computational complexity of matrix multiplication naturally leads to a resource theory of tensors, involving the notions of restriction, degeneration and asymptotic restriction. In algebraic complexity theory one typically either shows by some construction that a restriction (or degeneration, or asymptotic restriction) exists, possibly leading to faster algorithms; or one shows that there are obstructions to the existence of a restriction, which corresponds to lower bounds for certain algorithms.
See <cit.> for an accessible introduction to algebraic complexity theory with a focus on the complexity of matrix multiplication, as well as the reference work <cit.>.
§.§ Prior work
As alluded to in <ref>, the idea of using entanglement structures beyond maximally entangled states has been explored in various works, both for exactly constructing interesting many-body ground states and for numerical purposes. While the general theory of entanglement structures has remained relatively underexplored, we will here highlight some relevant previous work.
The connection between tensor network states and the resource theory of tensors was first studied in <cit.>.
A first observation is that a tensor network state is precisely a restriction of an initial entanglement structure, by applying linear maps at all the vertices of the network.
As a demonstration of the applicability of the resource theory of tensors to tensor networks, the authors show that degenerations between plaquettes are a useful tool to get lower bond dimension representations of tensor network states.
By an interpolation argument, one can prove that if one has a degeneration |ϕ⟩ ⊵ |ψ⟩ and one considers the entanglement structures |ϕ⟩_G and |ψ⟩_G which have these states on each edge of some hypergraph G, then one can compute observables for any tensor network state |Ψ⟩ constructed from the entanglement structure |ψ⟩_G from observables in O(|E|) tensor network states using |ϕ⟩ as entanglement structure.
For instance, given a degeneration on a single plaquette from an entanglement structure of level-D EPR pairs, this gives a representation using a direct sum of O(|E|) bond dimension D tensor network states.
Importantly, the overhead from the interpolation which turns the degeneration into a restriction (i.e. a proper tensor network state) is only linear in the system size, while contraction algorithms for tensor networks typically scale in the bond dimension as O(D^m) where m grows with the system size.
In other words, when computing tensor network observables, the potential savings in bond dimension from a plaquette degeneration are much more significant than the overhead from the interpolation.
As an example of the techniques in <cit.>, the border subrank of |n⟩_Δ gives rise to low bond dimension representations of entanglement structures of |GHZ_r(3)⟩, by computing observables in a linear number (in the system size) of tensor network states.
Another application is that there is a degeneration from |2⟩_Δ to |λ⟩, where |λ⟩ is the tensor which can be used to construct the RVB state.
Therefore, one can compute expectation values for the RVB state using a linear number (in the system size) of tensor network states with bond dimension 2.
One can also investigate states which are the limit of tensor network states. In the language of tensors, these are states which are degenerations of an entanglement structure, an idea which has been explored in <cit.>.
Another systematic study of entanglement structures investigated entanglement structures on two-dimensional rectangular lattices <cit.>.
The results of this work apply to the translation invariant case with periodic boundary conditions.
The main results concern the question when two entanglement structures are equivalent, i.e. they are related by an invertible transformation M_v on each of the vertices, moreover assuming that each M_v is equal.
Under these assumptions, an invertible transformation between two entanglement structures exists for some lattice of size n × m for n,m ≥ 3 if and only if it exists for all sizes.
Moreover, in that case the map M_v can be taken to be a composition of maps which each act only on pairs of the edge Hilbert spaces at the vertex.
These results are then used to show that the classification of symmetry protected topological phases using group cohomology is also valid for injective tensor networks using translation invariant entanglement structures.
While the resource theory of entanglement structures on lattices and other hypergraphs have not been studied in generality, there are various known results in the study of tensors which are closely related.
There are a number of important results in the resource theory of tensors which can be formulated as computing tensor ranks and subranks of entanglement structures of hypergraphs.
These results provided evidence that the resource theory of tensor networks is highly nontrivial.
To make this concrete, consider the W state
|W⟩ = |100⟩ + |010⟩ + |001⟩.
It is known that it has tensor rank R(|W⟩) = 3, so there exists a restriction from |GHZ_3(3)⟩ to |W⟩, and there does not exist such a restriction from |GHZ_2(3)⟩ (see also <ref>).
However, if we take two copies and take this again as a 3-tensor (i.e. we take the Kronecker product of two copies of |W⟩), the resulting object only has rank 7 rather than the naive 9 = 3^2 <cit.>.
In other words, the tensor rank is not multiplicative under the Kronecker product, a fact already observed in the context of matrix multiplication <cit.>.
One can think of this situation as placing two copies of the W state on top of each other; the corresponding restriction is from |GHZ_7(3)⟩ to the Kronecker product of the two copies.
We can also think of the following scenario: we place the two W states on the completely disconnected hypergraph consisting of two 3-edges (i.e. this is the usual tensor product and we now have six parties).
It is known that this state has tensor rank 8 <cit.>, so there exists a restriction from a six-party |GHZ_8(6)⟩ state, and the tensor rank is also not multiplicative under the tensor product.
Similar strict submultiplicativity under the tensor product is known for the border rank <cit.>.
The (asymptotic) tensor rank of other entanglement structures, especially of states which are EPR pairs distributed over a graph <cit.>, has also been studied.
One can pose the converse question as well where one starts with some arbitrary multiparty tensor and tries to transform to a global state. That is, one tries to compute the subrank.
This question first came up in <cit.>, lower bounding the border subrank Q̲(|n⟩_Δ).
For general hypergraphs with GHZ states, the asymptotic subrank was determined in <cit.>.
The picture that emerges is that understanding the tensor (sub)rank of entanglement structures is a rich subject which does not reduce to the ranks of the individual edge states, suggesting that the resource theory of entanglement structures could be similarly nontrivial.
However, the above works do not study the resource theory of tensor networks, as the resource state (or target state) is a global state.
We will rather consider scenarios where both the resource and the target state are a tensor product of GHZ or other states over the edges of the hypergraph.
This resource theory has so far not been studied.
There is a number of other works studying tensor networks using tools and ideas from algebraic complexity theory.
For example, <cit.> defines the G-rank of a tensor with respect to a graph G as the minimal bond dimensions required to write the tensor as a tensor network state on the graph G.
This is in similar spirit as our work, but it does not take into account entanglement structures beyond maximally entangled states and does not formulate a resource theory of tensor networks.
We also mention that there is a line of works, inspired by algebraic complexity theory, studying sets of tensor network states as algebraic varieties <cit.>.
§ THE RESOURCE THEORY OF TENSOR NETWORKS
We proceed to define our object of study: the resource theory of tensor networks and entanglement structures.
This resource theory is the natural extension of the resource theory of tensors in algebraic complexity and the theory of entanglement under SLOCC transformations.
We will introduce the resource theory from the entanglement perspective, where we define tensor network states as quantum states arising from SLOCC transformations of strictly local networks of quantum states.
However, tensor networks are also a computational tool, and we show that the resource theory of tensor networks relates directly to the algebraic complexity of tensor network contraction.
Additionally we investigate the computational complexity of tensor network contractions from the algebraic perspective and observe that tensor network contraction is a VNP-complete problem.
§.§ The resource theory of tensor networks
It is clear that the formalism of tensor network states closely aligns with the notions of tensor restriction in <ref>.
Indeed, a tensor network state is nothing else than a restriction of a distribution of EPR pairs, where the parties are the vertices of the graph.
When we consider the order induced by restrictions (i.e. SLOCC transformations on the vertices), a state |Ψ⟩ has a representation as a tensor network state using an entanglement structure |ϕ⟩_G as in <ref> if and only if |ϕ⟩_G ≥|Ψ⟩.
We will use this perspective to compare different entanglement structures.
If we are given a lattice (or more generally some hypergraph) G = (V,E), then |ϕ⟩_G ≥|ψ⟩_G if |ψ⟩_G has a tensor network representation using |ϕ⟩_G as entanglement structure.
Concretely, |ϕ⟩_G ≥|ψ⟩_G if there exist maps M_v on each of the vertices of G such that
(⊗_v ∈ V M_v ) ⊗_e ∈ E|ϕ_e⟩ = ⊗_e ∈ E|ψ_e⟩.
If we have some many-body state |Ψ⟩ with a tensor network representation by some entanglement structure |ψ⟩_G, then the existence of a restriction |ϕ⟩_G ≥|ψ⟩_G directly implies that we also obtain a tensor network representation for |Ψ⟩ using |ϕ⟩_G as our initial entanglement structure:
physical state ≤ current entanglement structure ≤ desired entanglement structure.
For instance, if we have an injective representation of a many-body state using GHZ states on each plaquette, the question of finding a minimal bond dimension representation of the state is equivalent to finding an optimal restriction from the entanglement structure which has EPR pairs along all the edges.
It is now clear that this defines a resource theory for entanglement structures.
This is the resource theory of entanglement structures induced by SLOCC (in information theory terminology), or restrictions (in algebraic complexity theory terminology).
Note that while in quantum information theory the notion of SLOCC as the class of allowed operations for a theory of multipartite entanglement is not completely natural (operationally one would prefer LOCC, but this is too complicated to classify in practice), for tensor networks it is the natural class of allowed operations.
Based on the three different notions of tensor transformations (restrictions, degenerations, asymptotic degenerations) we can study different transformations of entanglement structures.
* Given two entanglement structures we can ask whether we can transform one into another on each individual plaquette. Whether this is possible then can be formulated as asking whether there is a restriction |ϕ_e⟩≥|ψ_e⟩ for each edge <cit.>.
We will call such transformations single-plaquette restrictions.
* We can also ask whether on the plaquette level there exists a degeneration, so |ϕ_e⟩ ⊵ |ψ_e⟩ for all e ∈ E.
The consequences of the existence of both single-plaquette restrictions and degenerations have been investigated in <cit.>.
Given single-plaquette degenerations, one can represent the tensor network state with only linear overhead in the system size as explained in <ref>.
* We know that there exist transformations of tensors which are only possible asymptotically. Motivated by this fact we can ask whether there exists a restriction on the global entanglement structure, i.e. |ϕ⟩_G ≥ |ψ⟩_G, or a global degeneration |ϕ⟩_G ⊵ |ψ⟩_G.
* Also motivated by asymptotic restrictions we can ask, given some entanglement structure |ϕ⟩_G and a tensor network target state |Ψ⟩, at what rate we can produce copies of |Ψ⟩ from copies of |ϕ⟩_G? More generally, given |ϕ⟩_G^⊗ n, what is the optimal number m such that |ϕ⟩_G^⊗ n ≥ |Ψ⟩^⊗ m?
It is the third and fourth questions on which we focus on in this work, in order to develop a comprehensive resource theory of tensor networks, in particular of the underlying entanglement structures.
While in principle one can study restrictions, degenerations and asymptotic restrictions between arbitrary many-body states, from the perspective of tensor networks the main interest is the situation where we look for restrictions
|ϕ⟩_G ≥|Ψ⟩
so we have a restriction from an entanglement structure (which means that |Ψ⟩ is by definition a tensor network state).
The focus of our work is the situation where both many-body states are entanglement structures, that is, we investigate restrictions
|ϕ⟩_G ≥|ψ⟩_G.
The existence of such a restriction means that the class of tensor network states using the entanglement structure |ϕ⟩_G encompasses the class of tensor network states using the entanglement structure |ψ⟩_G, justifying the terminology of a resource theory.
Moreover, this encompasses the important class of injective tensor network states (which are by definition the states which are equivalent to an entanglement structure).
Similarly, we study this resource theory with respect to degenerations, so
|ϕ⟩_G ⊵ |ψ⟩_G.
This relation is less standard in the tensor network literature, but we emphasize that (as argued in <cit.>) if the degeneration has low degree, tensor network computations for |ψ⟩_G reduce to tensor network computations with |ϕ⟩_G with only small overhead.
An important reason to consider degenerations is that they can allow for significant savings in bond dimensions and degenerations are in practice often easier to find (and again, the small overhead is not relevant for the asymptotic scaling of the complexity of the algorithm); for these reasons degenerations play a crucial role in the search for faster matrix multiplication algorithms.
We may for instance take a rectangular lattice and consider 4-party states |ϕ⟩ and |ψ⟩ and investigate whether there exist maps at the vertices such that
( ⊗_v ∈ V M_v ) |ϕ⟩_G = |ψ⟩_G
as illustrated in <ref>.
At first sight, one might think that since the initial and final states are tensor product states over the edges, the best one can do is perform transformations on each plaquette.
However, it is well known that there exist examples for which |ϕ⟩≱|ψ⟩ but |ϕ⟩^ n≥|ψ⟩^ n for some n.
One can think of this as the case where the hypergraph is such that the edges are all stacked on top of each other, so we may have
[Diagram: for a single edge |ϕ⟩ ≱ |ψ⟩, while for the stacked edges |ϕ⟩^⊗n ≥ |ψ⟩^⊗n]
The question is whether such phenomena still occur when we use a lattice hypergraph (which is a much sparser structure), which is the main question we address in this work.
We will indeed construct examples where |ϕ⟩_G ≥|ψ⟩_G while |ϕ⟩≱|ψ⟩ in <ref>.
There are obvious examples of such transformations where EPR pairs are exchanged between different parties (and used as a resource).
A basic example is shown in <ref> where we have tensors |ϕ⟩ and |ψ⟩ consisting of two EPR pairs which are clearly not equivalent (or related by a restriction) on a single plaquette, but when placed on a periodic lattice they give rise to the same state, as already observed in <cit.>.
Given this example, a plausible intuition is that since in a two-dimensional lattice the plaquettes are typically adjacent in at most two vertices, any lattice transformation can be performed by combining an exchange of EPR pairs with single plaquette transformations.
This is however not the case, as we will show in <ref>.
This demonstrates that the resource theory of tensors is highly nontrivial.
On the other hand, we will also provide obstructions for the existence of entanglement structure transformations in <ref>, applying various methods from algebraic complexity for proving complexity lower bounds.
The physically most interesting case for tensor networks are two- and higher-dimensional lattices.
In one spatial dimension, using multipartite entanglement is less natural, although one can in principle consider entanglement structures on a `strip' of plaquettes.
However, our concepts and methods apply to arbitrary hypergraphs.
From a mathematical perspective this poses new interesting questions in the theory of tensors.
We know that in general |ϕ⟩|ψ⟩ does not imply |ϕ⟩≥|ψ⟩.
We can think of the asymptotic restriction as closely related to restrictions on the hypergraph where we stack all edges on top of each other.
This leads to the general question how connected the hypergraph G has to be in order for a restriction |ϕ⟩_G ≥|ψ⟩_G to be possible.
In the special case where the hypergraph is acyclic, we show in <ref> that any global transformation reduces to a local one, acting on each plaquette separately, so |ϕ⟩_G ≥|ψ⟩_G implies |ϕ⟩≥|ψ⟩.
§.§ Algebraic complexity theory of tensor networks
One of the main motivations for studying the resource theory of tensors under restrictions, degenerations and asymptotic restrictions was to understand the complexity of matrix multiplication.
A similar natural question is to study the algebraic complexity of tensor network contractions, and we will now explain how the resource theory of tensor networks relates to the (algebraic) complexity of tensor network contractions.
This discussion is not needed to understand the remainder of this paper.
The question we would like to understand is the following: given an entanglement structure |ϕ⟩_G, what is the number of operations needed to compute a coefficient of a tensor network state using the entanglement structure |ϕ⟩_G?
Here the model of complexity is given by arithmetic circuits, which perform addition, scalar multiplication, multiplication and division.
To measure the complexity of computing a certain function, one assigns weights to these different operations.
A common choice (when studying for instance the complexity of matrix multiplication) is to make addition and scalar multiplication `free' and only count the number of multiplications and divisions.
This is a different model than the usual Turing machine model for computation.
In particular, we work directly over some field of numbers (such as the complex numbers ℂ) and we do not use a binary representation of elements in the field.
In the case of matrix multiplication, this is the complexity which is closely related to the tensor rank.
Given an entanglement structure |ϕ⟩_G, we may obtain a tensor network state by applying local linear maps M_v for each vertex v ∈ V.
Choose a basis |i_v⟩ for the physical Hilbert space at vertex v, and let T^(i_v)_v = ⟨i_v| M_v.
Then we may expand the tensor network state as
∑_{(i_v)_{v ∈ V}} ( (⊗_{v ∈ V} T_v^(i_v)) |ϕ⟩_G ) ⊗_{v ∈ V} |i_v⟩.
The terms
(⊗_v ∈ V T_v^(i_v))|ϕ⟩_G
are therefore the coefficients of the state when expanded in the chosen product basis.
In general, we define the following map, which assigns a value to any collection of linear maps {T_v}_{v ∈ V}, T_v : ℋ_v → ℂ:
f_{|ϕ⟩_G} : ⊕_{v ∈ V} ℋ_v^* → ℂ,
(T_v)_{v ∈ V} ↦ (⊗_{v ∈ V} T_v) |ϕ⟩_G.
We call this map a tensor network coefficient.
We will think of f_{|ϕ⟩_G} as a polynomial in the entries of the T_v (so it has one variable for each entry of each T_v).
We will also refer to the task of computing the value of this polynomial on some input (i.e. computing a coefficient of some choice of tensors) as tensor network contraction and it is the basic computational primitive in any tensor network based algorithm.
For instance, when performing variational optimization over the class of tensor network states, the energy expectation value reduces to computing a tensor network coefficient.
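As a concrete illustration of this definition, the following small numpy sketch (our own example, not taken from any particular tensor network library) evaluates a tensor network coefficient for the simplest entanglement structure, a 3-cycle with a level-2 EPR pair on each edge; for this structure the coefficient reduces to the trace of a product of 2×2 matrices, which the code verifies.

import numpy as np

d = 2  # level of the EPR pairs

# |phi>_G for a 3-cycle A-B-C with an EPR pair on each edge: vertex A carries
# the qubits (c, a) of edges CA and AB, vertex B carries (a, b), vertex C carries (b, c).
phi = np.zeros((d,) * 6)
for a in range(d):
    for b in range(d):
        for c in range(d):
            phi[c, a, a, b, b, c] = 1.0

# Functionals T_v : H_v -> C stored as d x d matrices, with T_A(|c,a>) = M_A[c, a], etc.
rng = np.random.default_rng(0)
M_A, M_B, M_C = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
                 for _ in range(3)]

# The tensor network coefficient f_{|phi>_G}((T_v)_v) = (T_A (x) T_B (x) T_C)|phi>_G ...
coeff = np.einsum('ijklmn,ij,kl,mn->', phi, M_A, M_B, M_C)
# ... equals the trace of the product of the three matrices for this cyclic structure.
print(np.allclose(coeff, np.trace(M_A @ M_B @ M_C)))   # True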
We will now investigate the hardness of computing f_|ϕ⟩_G as a polynomial in the arithmetic circuit model.
Given an entanglement structure |ϕ⟩_G, we define C(|ϕ⟩_G, G) to be the minimal size of an arithmetic circuit computing f_|ϕ⟩_G.
The resource theory for tensor networks we have defined now relates back to algebraic complexity theory by the following simple observation.
* If |ϕ⟩_G ≥|ψ⟩_G, it holds that
C(|ψ⟩_G, G) ≤ C(|ϕ⟩_G, G).
* Similarly, if |ϕ⟩_G ⊵ |ψ⟩_G via a degeneration of degree e, it holds that
C(|ψ⟩_G, G) ≤ (e+1) C(|ϕ⟩_G, G)
Suppose that we have an arithmetic circuit that computes f_|ϕ⟩_G, and we have maps M_v such that
(⊗_v ∈ V M_v ) |ϕ⟩_G = |ψ⟩_G.
Then
f_|ψ⟩_G((T_v)_v ∈ V) = f_|ϕ⟩_G((T_v M_v)_v ∈ V)
so we can compute f_|ψ⟩_G((T_v)_v ∈ V) by first computing (T_v)_v ∈ V↦ (T_v M_v)_v ∈ V and then use the circuit for f_|ϕ⟩_G.
The cost of the first step is zero in the model we consider (as it only requires multiplications with fixed numbers and additions, which are free), proving assertion <ref>.
We conclude that the cost of evaluating f_|ψ⟩_G((T_v)_v ∈ V) is at most C(|ϕ⟩_G, G).
For <ref> we note that by a standard polynomial interpolation argument <cit.> we can compute f_|ψ⟩_G((T_v)_v ∈ V) from e+1 different restrictions of |ϕ⟩_G, giving the desired result.
How hard are tensor network computations?
The standard algorithms contract along edges one by one. Restricting to such computations on a graph (and an entanglement structure with EPR pairs), one finds that the complexity is exponential in the treewidth of the graph <cit.>. Finding optimal contraction orders is NP-hard in general, so one typically resorts to heuristics for finding good contraction orders <cit.>.
It is well-known that in general tensor network contraction is indeed a hard problem.
For instance, one can show that tensor network contraction is a # P-hard problem <cit.>, see <cit.> for further complexity-theoretic investigations of tensor networks.
One can also study the hardness of tensor network contraction in the arithmetic circuit model.
Recall that in this model of computation, the goal is to compute some polynomial through an arithmetic circuit.
The analog of the class P is the class VP, which consists of families of polynomials f_n for which there is a family of polynomial-sized circuit computing the desired polynomials.
The arithmetic analog of NP is the class VNP, which is, informally speaking, the class of families of polynomials f_n of polynomial degree which is such that for each monomial one can determine its coefficient in f_n by a polynomial-sized circuit.
In <ref> we prove that on arbitrary hypergraphs the computation of tensor network coefficients is in VNP and tensor network contraction on a two-dimensional square lattice with constant bond dimension D = 2 is VNP-hard. This is not very surprising (given the corresponding # P-hardness) but we are not aware of previous work explicitly making this observation.
The problem of computing tensor network contraction coefficients, given by the polynomial f_|ϕ⟩_G as in <ref> on hypergraphs with n edges and constant degree is in VNP. The problem of computing tensor network contraction coefficients with bond dimension D = 2 on an n × n square lattice is VNP-hard.
In <ref> we state this result more formally as <ref> and provide the proof.
While for two-dimensional lattices tensor network contraction is VNP-hard, it is easy to see that for acyclic hypergraphs the problem of computing f_|ϕ⟩_G is in VP.
Finally, we mention that from a complexity theory perspective, tensor network contractions are closely related to an approach to counting problems known as Holant problems, see <cit.> for a discussion relating to quantum states and multiparty entanglement. The resource theory of tensor networks should therefore also be relevant to such Holant problems.
§ CONSTRUCTIONS
Recall that the central question of our work is to study what transformations of entanglement structures are possible beyond transformations that act on the individual plaquettes.
In this section we will give explicit constructions to demonstrate that there indeed exist transformations of entanglement structures which go beyond plaquette by plaquette restrictions.
This can be studied on arbitrary hypergraphs, but we will focus on examples which are relevant to lattices, as these are of primary interest for many-body physics applications.
We will consider periodic lattices, to avoid boundary terms (but this is not crucial).
We will focus on two-dimensional structures, since these are important in tensor network applications, and in one spatial dimension entanglement structures beyond EPR pairs are somewhat artificial (although one can study `strips' of plaquettes to obtain nontrivial examples).
We also provide classes of examples in higher spatial dimensions and we end by discussing the asymptotic resource theory of tensor networks.
§.§ Two-dimensional entanglement structures
The first and most basic class of examples consists of those where we simply distribute EPR pairs of some dimension over the plaquettes.
When arranged on the lattice, we may move EPR pairs to neighboring plaquettes
[Diagram: an EPR pair being moved from a plaquette to a neighboring plaquette]
which is an invertible transformation.
For example, as we already saw
[Diagram: plaquette states |ϕ⟩ and |ψ⟩, each consisting of two EPR pairs placed along different sides of the plaquette, give equivalent entanglement structures on the periodic square lattice]
which is an example where on the lattice the entanglement structures are equivalent while there are no restrictions (or degenerations) between |ϕ_e⟩ and |ψ_e⟩ for single plaquettes e.
This principle can be used to construct many more examples!
One way is to use the pairs for teleportation.
Here is an example on a cyclic graph of four vertices
[Diagram: on a cyclic graph of four vertices, an EPR pair is moved to a neighboring plaquette and used there as a teleportation resource, giving a restriction]
In general, in a lattice one can move an EPR pair to a neighboring plaquette and use it there as a resource.
For example, consider plaquettes
[Diagram: single-plaquette states |ϕ_e⟩ (a GHZ_r(3) state together with a level-p EPR pair) and |ψ_e⟩ (a 3-party state on three of the four parties), with |ϕ_e⟩ ≱ |ψ_e⟩]
where the plaquette state |ϕ_e⟩ consists of a GHZ_r(3) state together with a level-p EPR pair, and |ψ_e⟩ is given by an arbitrary 3-party state |ψ⟩ distributed over three of the four parties.
When we place these on a lattice
[Diagram: the same plaquette states placed on a lattice, where a restriction |ϕ⟩_G ≥ |ψ⟩_G is sought]
we see there exists a restriction if the restriction
[Diagram: restriction from a GHZ_r(3) state together with a level-p EPR pair to the 3-party state |ψ⟩]
exists.
In fact, the existence of such a restriction is (by definition) the statement that the p-aided rank of |ψ⟩ is at most r.
See <cit.> for a study of the aided rank, and example computations of the aided rank of tensors.
Next, for a slightly more involved example we consider a triangular lattice.
Let us suppose that we start with |_r(3)⟩ on a sublattice as entanglement structure |ϕ⟩_G and take |ψ⟩_G to be the entanglement structure where we place |_n(3)⟩ at each edge, and we look for a degeneration |ϕ⟩_G |ψ⟩_G:
[Diagram: degeneration from GHZ_r(3) states on one sublattice of the triangular lattice to GHZ_n(3) states on every edge]
For which values of r can we perform such a transformation?
It is clear that this is not an edge-wise transformation as in |ϕ⟩_G we only have entangled states on a sublattice and the other sublattice is `empty' (i.e. has only product states).
A good strategy to perform this conversion is as follows.
First, convert each
|GHZ_r(3)⟩ ⊵ |GHZ_n(3)⟩ ⊗ |EPR_D⟩_△
in |ϕ⟩_G.
This is certainly possible if we take r to be the border rank of |EPR_D⟩_△ times the remaining level n.
Next, for each of the empty edges, we may now take EPR pairs from the surrounding edges, so we get an |_D⟩_ state at this edge.
From this, we can now extract a _m(3) state, where the optimal m (by definition) is given by the border subrank m = (|_D⟩_).
[Diagram: each filled triangle degenerates from GHZ_r(3) to GHZ_n(3) together with level-D EPR pairs; the EPR pairs moved to an empty triangle form |EPR_D⟩_△, from which a GHZ state is extracted]
It is known that the border subrank of |EPR_D⟩_△ equals
⌈3D^2/4⌉,
by a construction in <cit.> and a lower bound in <cit.>.
So, we can achieve the transformation with
r = n ([]3n^2/4).
An interesting question is whether this transformation is actually optimal, or whether a smaller r would also suffice.
While nontrivial, the above example still had as a crucial ingredient that we somehow `moved' EPR pairs from one edge to another.
In the lattice hypergraphs we consider, two edges have at most two overlapping vertices.
This suggests the intuition that all `interaction' between different edges may be mediated by two-party entanglement, i.e. by moving an EPR pair from one edge to another.
We will now show that this intuition is not correct and that the resource theory of tensor networks has a richer variety of possible transformations.
We will show that there exists a degeneration on a hypergraph with three 3-edges, where we place _5(3) states on the three edges, and end up with an entanglement structure with EPR pairs.
There exists a degeneration
[Diagram: degeneration from three GHZ_5(3) states, one on each 3-edge, to an entanglement structure of EPR pairs: one level-3 EPR pair and five level-2 EPR pairs]
Note that there is one level-3 EPR pair.
The key point is that the final structure consists just of EPR pairs, but there is no way to distribute them over the three plaquettes such that we have an edge-to-edge degeneration.
Indeed, if we distribute the EPR pairs over the edges, at least one of them will have both a level-3 and a level-2 EPR pair, and there is no possible degeneration
[Diagram: there is no degeneration from GHZ_5(3) to a level-3 EPR pair together with a level-2 EPR pair on a single plaquette]
The impossibility of this degeneration follows from an easy flattening rank bound (see <ref>).
Nevertheless, the degeneration in <ref> does exist.
The main ingredient is the three-party Bini tensor |β⟩ ∈ ℋ_A ⊗ ℋ_B ⊗ ℋ_C with each Hilbert space equal to (ℂ^2)^⊗2, which is obtained from three level-2 EPR pairs, that is, from the tripartite state |EPR_2⟩_△, by projecting out |11⟩ on C (i.e. applying 𝕀 − |11⟩⟨11| at C):
[Diagram: |β⟩ equals |EPR_2⟩_△ on parties A, B, C with the projector 𝕀 − |11⟩⟨11| applied at C]
We may write this out in the standard basis
|β_ABC⟩ = |00⟩_A|00⟩_B|00⟩_C + |00⟩_A |01⟩_B|10⟩_C
+ |01⟩_A |10⟩_B|00⟩_C + |10⟩_A |00⟩_B|01⟩_C
+ |01⟩_A |11⟩_B|10⟩_C + |11⟩_A |10⟩_B|01⟩_C
so it is clear that the rank of |β⟩ is at most 6.
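This expansion can be checked mechanically; the following numpy sketch (our own verification, not part of the construction itself) builds |β⟩ by projecting the triangle of level-2 EPR pairs, compares it with the six-term expansion above, and computes the three flattening ranks of |β⟩ (which are 4, 4 and 3, so the border rank of |β⟩ is at least 4).

import numpy as np

d = 2
# The triangle of level-2 EPR pairs (the 2x2 matrix multiplication tensor):
# party A holds the qubit pair (i, j), party B holds (j, k), party C holds (k, i).
mamu = np.zeros((d,) * 6)
for i in range(d):
    for j in range(d):
        for k in range(d):
            mamu[i, j, j, k, k, i] = 1.0

# The Bini tensor: apply I - |11><11| on party C (axes 4 and 5).
beta = mamu.copy()
beta[:, :, :, :, 1, 1] = 0.0

# Check against the six-term expansion in the standard basis.
def ket(b):  # |b1 b2> as a (2, 2)-shaped array
    v = np.zeros((2, 2))
    v[b] = 1.0
    return v

terms = [((0, 0), (0, 0), (0, 0)), ((0, 0), (0, 1), (1, 0)), ((0, 1), (1, 0), (0, 0)),
         ((1, 0), (0, 0), (0, 1)), ((0, 1), (1, 1), (1, 0)), ((1, 1), (1, 0), (0, 1))]
expansion = sum(np.einsum('ij,kl,mn->ijklmn', ket(a), ket(b), ket(c)) for a, b, c in terms)
print(np.array_equal(beta, expansion))   # True

# Flattening (Schmidt) ranks of |beta> with respect to A, B and C.
perms = [(0, 1, 2, 3, 4, 5), (2, 3, 0, 1, 4, 5), (4, 5, 0, 1, 2, 3)]
print([np.linalg.matrix_rank(beta.transpose(p).reshape(4, 16)) for p in perms])  # [4, 4, 3]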
In <cit.> it was shown that there exists a degeneration |GHZ_5(3)⟩ ⊵ |β⟩ (so the border rank of |β⟩ is at most 5); with suitable powers of ε inserted into the factors (which we suppress here), it is given by
(|01⟩ + |11⟩)|10⟩(|01⟩ + |00⟩)
+ |00⟩(|00⟩ + |01⟩)(|10⟩ + |00⟩)
- |01⟩(|00⟩ + |10⟩ + |11⟩)|01⟩
- (|00⟩ + |01⟩ + |10⟩)|00⟩|10⟩
+ (|01⟩ + |10⟩)(|00⟩ + |11⟩)(|01⟩ + |10⟩)
= ε|β⟩ + O(ε^2).
The authors of <cit.> were motivated by the complexity of matrix multiplication, as |_2⟩_ is the 2 × 2 matrix multiplication tensor, and they used a direct sum of two copies of |β⟩ to perform 2 × 3 matrix multiplication.
We start by using the degeneration |_5(3)⟩|β⟩ on all three |_5(3)⟩ states
[Diagram: the degeneration GHZ_5(3) ⊵ |β⟩ applied to each of the three GHZ_5(3) states, leaving a Bini tensor β on each 3-edge]
Let us call the three outer parties A, B and C, and the interior party D.
With the Bini tensors, on the outer edges we now have level-2 EPR pairs (between AB, BC and AC), and it remains to extract EPR pairs between D and A, B and C respectively:
[Diagram: the three Bini tensors redrawn with outer parties A, B, C and interior party D; level-2 EPR pairs between A, B and C remain, and EPR pairs between D and A, B, C are to be extracted]
We divide D into parties A', B' and C', and apply a projection operator on D
[Diagram: the interior party D split into subsystems A′, B′ and C′, with a projection P applied on D]
where P projects onto the span of the states
{|0a⟩_A'|0b⟩_B'|0c⟩_C'}_a,b,c ∈{0,1}
together with
{|10⟩_A'|b0⟩_B'|c0⟩_C'}_b,c ∈{0,1}.
Note that all of these states are in the support of the tensor on D, indeed, the only elements which are not in the support are
|a_1 a_2⟩_A'|b_1 b_2⟩_B'|c_1 c_2⟩_C'
with either a_2 = b_1 = 1, or b_2 = c_1 = 1 or c_2 = a_1 = 1.
Finally, we apply the linear map
|0⟩⟨00| + |1⟩⟨01| + |2⟩⟨10|
on A and A', and on B, B', C and C' we apply the map
|0⟩⟨00| + |1⟩(⟨01| + ⟨10|)
which results in the transformation
[Diagram: the resulting transformation, producing a level-3 EPR pair between D and A and level-2 EPR pairs between D and B and between D and C]
which gives the target state of <ref>.
We can also put this degeneration on a lattice, see <ref>. An interesting open question is whether this is optimal for this lattice, or whether one can do better.
We give one more example. This time we will see a restriction which is only possible on the lattice, which is such that it extracts a global state.
We consider the square lattice, and consider the entanglement structure where we place the state
[Diagram: the plaquette state |ϕ_e⟩ on the four parties A, B, C, D, written as a sum of four terms, each a different pattern of level-n EPR pairs and |0⟩|0⟩ product states along the boundary of the plaquette]
at each plaquette.
Here, the four parties of the plaquette each have Hilbert space (ℂ^{n+1})^⊗2, where each factor ℂ^{n+1} has basis |0⟩,…,|n⟩.
Unconnected dots mean that the two parties have a product state |0⟩|0⟩, connected dots mean that they share a level-n EPR pair ∑_i=1^n |i⟩|i⟩.
For example, the first term on the right-hand side in <ref> is given by
∑_i,j = 1^n |0i⟩_A |ij⟩_B |j0⟩_C |00⟩_D.
We will show that there exists a restriction to the entanglement structure where we have level-n EPR pairs on the square lattice (i.e. the entanglement structure for a standard bond dimension n tensor network state) together with a global level-4 GHZ state shared by all parties v ∈ V, as illustrated in <ref>.
Let G be the square lattice, and let |ϕ⟩_G be the entanglement structure with edge states given by <ref>.
Let |ψ⟩_G be the entanglement structure with level-n EPR pairs on the square lattice as in <ref>.
Then
|ϕ⟩_G ≥ |ψ⟩_G ⊗ |GHZ_4(V)⟩.
We will now argue that we can apply a restriction to get the entanglement structure with level-n EPR pairs on the edges with additionally a level-4 state shared by all vertices.
We apply the projection given by P = P_1 + P_2 + P_3 + P_4
[Diagram: the projection P = P_1 + P_2 + P_3 + P_4 on the parties A, B, C, D, where each P_i projects onto one of the four EPR-pair patterns appearing in |ϕ_e⟩]
where the figure should be interpreted similar to <ref>; for instance the first term is the projection onto the subspace spanned by
{|i0⟩_A |jk⟩_B|0l⟩_C|00⟩_D; i,j,k,l = 1, …, n}
It is clear that each of the four terms in this projection gives a lattice of EPR pairs, and the images of the P_i are orthogonal, so we extract a global GHZ_4(V) state.
Note that such a global state may in turn be used for the purpose of transforming degenerations of entanglement structures into restrictions <cit.> as explained in <ref>.
Finally, an interesting open question is the following: do there exist entanglement structures (with the same state at each plaquette) where the existence of a transformation depends on the size of the lattice? So far, our lattice examples exist for any lattice size, while there may exist transformations that only become possible when the size of the lattice is sufficiently large.
§.§ Higher dimensional entanglement structures
The construction principles which are demonstrated in <ref> may be applied in similar fashion for higher dimensional lattices.
The most basic constructions in two spatial dimensions used the fact that two adjacent plaquettes can exchange EPR pairs (even though we showed that, perhaps surprisingly, there exist transformations going beyond this mechanism).
In higher spatial dimensions it is significantly easier to construct nontrivial examples of lattice transformations since adjacent plaquettes now may be neighboring in more than two parties. This means that plaquettes can exchange multipartite entangled states.
It allows for a new construction principle, which does not yet show up in two spatial dimensions.
This is based on the fact that there exist states |ϕ⟩, |ψ⟩ on more than two parties which are such that
|ϕ⟩≱|ψ⟩ while |ϕ⟩^ 2≥|ψ⟩^ 2
and the same for degenerations.
Let us see in a concrete example how this can be used.
We will again use that we know the precise value of the border subrank (|_D⟩_) as in <ref>.
Now, as plaquettes we take tetrahedra (so with four parties), where for |ϕ_e⟩ we place level-D^2 EPR pairs on the 2-edges of the tetrahedron, which corresponds to placing |EPR_D⟩_△ states on the faces of the tetrahedron.
For |ψ_e⟩ we place level-r GHZ states on the faces of the tetrahedron:
[Diagram: a tetrahedral plaquette, with |ϕ_e⟩ given by |EPR_D⟩_△ states on the four faces and |ψ_e⟩ given by level-r GHZ states on the faces]
Then, on an individual plaquette, we can use the |EPR_D⟩_△ state on each of the faces to perform a degeneration to a |GHZ_r(3)⟩ state for r = ⌈3D^2/4⌉, so for this value of r we have a degeneration |ϕ_e⟩ ⊵ |ψ_e⟩.
Now, if we consider a three-dimensional lattice where the tetrahedra have adjacent faces, we can group together pairs from adjacent plaquettes and apply a degeneration (and at the end redistribute the resulting states to the two plaquettes).
This allows a transformation for
r = ⌊√(⌈3D^4/4⌉)⌋ ≥ ⌈3D^2/4⌉.
To give a concrete example, for D = 4 the single plaquette transformation is possible for r = 12, while on the lattice we can achieve r = 13.
§.§ Asymptotic conversions of tensor network states
We now address a question of a different nature, where we consider many copies of an entanglement structure to see how it can be used asymptotically as a resource.
Let |ϕ⟩_G = ⊗_e ∈ E|ϕ_e⟩ be an entanglement structure on a hypergraph G = (V,E).
Then we let |ϕ⟩_G^ n be the entanglement structure on the same hypergraph G where we place |ϕ_e⟩^ n at edge e
|ϕ⟩_G^ n = ⊗_e ∈ E|ϕ_e⟩^ n.
So, at each plaquette we place n copies of the state |ϕ_e⟩:
[Diagram: the entanglement structure |ϕ⟩_G^⊗n, with n copies of |ϕ_e⟩ on each plaquette]
Given a target state |Ψ⟩ (which could be another entanglement structure, but in principle any state) we would like to know how useful |ϕ⟩_G is as a resource for |Ψ⟩.
More precisely, we would like to know for which m and n it is true that
|ϕ⟩_G^ n≥|Ψ⟩^ m.
To study the asymptotics, we say that |ϕ⟩_G ≳ |Ψ⟩ if
|ϕ⟩_G^⊗(n + o(n)) ≥ |Ψ⟩^⊗n.
In the particular case where |Ψ⟩ = |ψ⟩_G is some entanglement structure
[Diagram: asymptotic restriction |ϕ⟩_G ≳ |ψ⟩_G between two entanglement structures on the lattice]
means that we have restrictions
[Diagram: n + o(n) copies of |ϕ_e⟩ on each plaquette restricting to n copies of |ψ_e⟩ on each plaquette]
for n →∞.
To find an example, it suffices to have k-party states |ϕ⟩ and |ψ⟩ such that, on the one hand, if we consider the entanglement structures where we place these states on each hyperedge, there is no restriction, so |ϕ⟩_G ≱ |ψ⟩_G, but we do have |ϕ⟩ ≳ |ψ⟩.
It is clear that by applying the asymptotic restrictions plaquette-wise in that case we obtain |ϕ⟩_G ≳ |ψ⟩_G.
There are many examples of tensors for which |ϕ⟩ ≳ |ψ⟩ but |ϕ⟩ ≱ |ψ⟩.
However, it is nontrivial to show that |ϕ⟩_G ≱ |ψ⟩_G.
In <ref> we investigate the case where on the kagome lattice we have |ϕ⟩ = |EPR_2⟩_△ and |ψ⟩ = |λ⟩, the tensor from which we may obtain the RVB state.
It is known that |EPR_2⟩_△ ⊵ |λ⟩ and hence |EPR_2⟩_△ ≳ |λ⟩. We prove that on the kagome lattice |ϕ⟩_G ≱ |ψ⟩_G <cit.>.
Note that this example also directly implies that for the RVB state |Ψ⟩ on the kagome lattice we have |_2⟩_|Ψ⟩ (although we do not prove that |_2⟩_≱|Ψ⟩).
By a similar method we prove that when we take |ϕ⟩ = |GHZ_2(3)⟩ and |ψ⟩ = |W(3)⟩ on the kagome lattice there is no restriction |ϕ⟩_G ≥ |ψ⟩_G, while already on the level of a single plaquette there is a degeneration |GHZ_2(3)⟩ ⊵ |W(3)⟩.
For a final example, the value of the asymptotic subrank of |EPR_D⟩_△ is given by <cit.> as D^2:
[Diagram: asymptotically, the triangle of level-D EPR pairs reduces to a GHZ state of level D^2]
This means that we have an asymptotic restriction
[Diagram: asymptotic restriction from the entanglement structure of level-D EPR pairs to the corresponding GHZ entanglement structure]
On the other hand, a simple flattening rank bound (see <ref>) shows that this is optimal.
Determining the optimal bond dimension for finite n is an interesting open problem.
§ OBSTRUCTIONS
In <ref> we saw explicit examples where the conversion between entanglement structures was possible only by putting the tensors on the plaquettes of a lattice. This clearly demonstrates that in order to show that no conversion can exist we must take into account the entire lattice.
That is, if one wants to show |ϕ⟩_G ≱|ψ⟩_G it does not suffice to show that for the edges |ϕ_e⟩≱|ψ_e⟩.
Whereas previous work proved the optimality of certain single-plaquette transformations <cit.>, our work provides for the first time general tools to show the impossibility of the conversion of entanglement structures.
The fundamental insight is that we may coarse-grain (or `fold') the network, and that any restriction on the initial graph should also be a restriction on the folded graph.
This allows one to reduce the problem to showing that there does not exist a restriction on the folded graph.
We will explain two important methods from algebraic complexity theory for obtaining obstructions for the existence of restrictions, namely (generalized) flattenings and the substitution method and we adapt them to prove the nonexistence of a restriction in interesting and nontrivial examples.
Next, we will discuss the question of asymptotic conversion of entanglement structures.
Asymptotic restriction is completely characterized by a compact topological space of functionals (the so-called asymptotic spectrum).
We will explain how to use this in the context of entanglement structures.
§.§ Folding the tensor network
If G is a hypergraph, we can obtain a new hypergraph by grouping some vertices together to a single vertex.
We call this procedure folding, and if H is obtained in such a way from G then H is a folding of G.
To be precise, H has vertex set V' and we have a surjective map f : V → V'.
The edge set E' of H is given by edges
f(e) = {v' such that v' = f(v) for v ∈ e}
for each edge e = {v_1,…,v_k}∈ E.
Note that f(e) may be smaller than e since it is possible that f sends vertices in the same edge to the same image.
If we have an entanglement structure |ϕ⟩_G = ⊗_e ∈ E|ϕ_e⟩ on G, then this can naturally also be interpreted as an entanglement structure on H, by grouping the Hilbert spaces as
_̋v' = ⊗_v : f(v) = v'_̋v and _̋f(e) = _̋e
and simply reinterpreting the state as
|ϕ⟩_H = ⊗_e' ∈ E'|ϕ_e'⟩.
The only thing we have done here is that we have grouped parties (vertices) together into single parties.
It is clear that folding preserves restrictions.
Let G = (V,E) be a hypergraph and H = (W, F) a folding of G.
Let |ϕ⟩_G and |ψ⟩_G be entanglement structures for G.
Then |ϕ⟩_G ≥|ψ⟩_G implies |ϕ⟩_H ≥|ψ⟩_H.
Similarly, |ϕ⟩_G |ψ⟩_G implies |ϕ⟩_H |ψ⟩_H.
If the restriction is given by maps M_v for v ∈ V, then it is clear that ⊗_v : f(v) = v' M_v for v' ∈ V' defines a restriction on H.
While <ref> is obvious, it is the key to proving obstructions!
An important special case is where we fold to a graph with only two vertices.
For 2-tensors |ϕ⟩ and |ψ⟩ we know that |ϕ⟩≥|ψ⟩ (and in fact |ϕ⟩|ψ⟩) if and only if |ϕ⟩ has Schmidt rank at least as large as |ψ⟩.
For a 2-tensor |ϕ_AB⟩∈_̋AB we denote by (|ϕ_AB⟩) its Schmidt rank (and we will just call this the rank), which is equal to the rank of its reduced density matrix on either A or B, or the rank of |ϕ_AB⟩ as a linear map _̋A →_̋B^*.
The theory of restrictions (or SLOCC conversion) and degenerations for 2-tensors is completely determined by the rank.
With <ref> this directly yields the following:
Let G = (V,E) be a hypergraph and let |ϕ⟩_G and |ψ⟩_G be entanglement structures for G. If |ϕ⟩_G |ψ⟩_G, then we must have that for any bipartitioning of the vertices V = A ⊔ B the rank along A and B of |ϕ⟩_G is at least that of |ψ⟩_G.
As an example, if we have the matrix multiplication tensor for n × n matrix multiplication (i.e. level-n EPR pairs shared between three parties) and make the bipartitioning
[Diagram: the matrix multiplication tensor (level-n EPR pairs between A, B and C), flattened along the bipartition A versus BC]
then this has Schmidt rank n^2 between A and BC. If we compare with the 3-party GHZ state of level r we get Schmidt rank r, and therefore |GHZ_r(3)⟩ ≥ |EPR_n⟩_△ implies r ≥ n^2, so the rank of |EPR_n⟩_△ is at least n^2.
This gives a lower bound on the matrix multiplication exponent of ω≥ 2.
While this lower bound is obvious, it is highly nontrivial to find lower bounds improving <ref>!
Another example is the impossibility of degeneration in <ref>, where we see that if we flatten, the _5(3) tensor has rank 5, while the two EPR pairs together have rank 6.
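Both flattening computations above are easy to reproduce numerically; the following numpy sketch (our own check) computes the Schmidt rank n^2 of the matrix multiplication tensor across the A versus BC cut, and the flattening ranks 5 and 6 appearing in the second example.

import numpy as np

def mamu(n):
    # Level-n EPR pairs between A, B and C (the n x n matrix multiplication tensor),
    # grouped as a 3-party tensor of shape (n^2, n^2, n^2).
    T = np.zeros((n,) * 6)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                T[i, j, j, k, k, i] = 1.0
    return T.reshape(n * n, n * n, n * n)

# Flattening A | BC of <n> has Schmidt rank n^2.
for n in (2, 3):
    print(n, np.linalg.matrix_rank(mamu(n).reshape(n * n, n ** 4)))   # (2, 4) and (3, 9)

# GHZ_5(3) versus a level-3 and a level-2 EPR pair on one plaquette.
ghz5 = np.zeros((5, 5, 5))
for i in range(5):
    ghz5[i, i, i] = 1.0
# EPR_3 between parties 1 and 2 and EPR_2 between parties 1 and 3.
epr = np.einsum('ab,cd->acbd', np.eye(3), np.eye(2)).reshape(6, 3, 2)
print(np.linalg.matrix_rank(ghz5.reshape(5, 25)),   # 5
      np.linalg.matrix_rank(epr.reshape(6, 6)))     # 6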
In the algebraic complexity literature, grouping parties such that we get a 2-tensor is often called a flattening, since the resulting 2-tensor can be thought of as a matrix (which is a two-dimensional array, as opposed to a k-tensor, which is a k-dimensional array).
Let us now look at an example of an entanglement structure on a lattice. We take a rectangular lattice of size n_1 × n_2 with n_1, n_2 even, with periodic boundary conditions and rectangular plaquettes as in <ref>.
For |ϕ⟩_G we tile plaquettes with GHZ states on four parties, so |ϕ_e⟩ = |_r(4)⟩ for each edge.
For |ψ⟩_G we place maximally entangled pairs at the boundary of the plaquette, so |ψ_e⟩ = |D⟩_□ for each edge.
We now take the following bipartitioning:
[Diagram: a bipartition of the lattice vertices into two sets A and B]
It is easy to see that this gives Schmidt rank r^{|E|} for |ϕ⟩_G and D^{4|E|} for |ψ⟩_G.
Therefore, by <ref>, |ϕ⟩_G ≥|ψ⟩_G implies that r ≥ D^4.
On the other hand, it is easy to see that on the level of a single plaquette, for r ≥ D^4 we have |_r(4)⟩≥|D⟩_□.
The lower bound is sharp, and there is no gain from placing the states on the lattice (i.e. in this case |ϕ⟩_G ≥|ψ⟩_G is only possible when |ϕ_e⟩≥|ψ_e⟩).
A next observation is that if the folding f is such that for some edge e = v_1,…,v_k all vertices are mapped to the same vertex v', then we may remove this edge for the purpose of finding obstructions.
To see this, let H̃ be the hypergraph with the same vertex set as H, and with the edge f(e) removed from the edge set.
An entanglement structure |ϕ⟩_H then defines an entanglement structure |ϕ⟩_H̃ on H̃ by leaving out the state |ϕ_e⟩ on the edge e that gets removed.
It holds that
|ϕ⟩_H ≥|ψ⟩_H if and only if |ϕ⟩_H̃≥|ψ⟩_H̃
and the same statement holds with degenerations rather than restrictions.
This is the case since |ϕ_e⟩ is at a single party v', and we have the equivalences |ϕ⟩_H ≅|ϕ⟩_H̃ and |ψ⟩_H ≅|ψ⟩_H̃ (since we can locally prepare the states |ϕ_e⟩ and |ψ_e⟩ at v').
This is often helpful to reduce a problem on a large hypergraph such as a lattice, to a problem involving only a small number of edges.
As a special case one can show that if the hypergraph has no cycles, |ϕ⟩_G ≥|ψ⟩_H implies |ϕ_e⟩≥|ψ_e⟩ for each edge (so the graph structure does not allow additional transformations).
Here a cycle is any path of vertices v_1, v_2, …, v_k for k ≥ 2 such that v_k = v_1 and for each i = 1,…,k−1, v_i and v_{i+1} are not equal and are connected by an edge, so there is an edge e with v_i ∈ e and v_{i+1} ∈ e.
A hypergraph is acyclic if it has no cycles (this notion is also known as Berge-acyclic <cit.>). Note that in this definition if a graph is acyclic, any two different edges share at most one vertex.
If G is an acyclic hypergraph, then it is easy to see that for any edge e there is a folding from G to the hypergraph which consists just of the single edge e.
Thus, <ref> directly implies the following result.
Let G be an acyclic hypergraph and let |ϕ⟩_G and |ψ⟩_G be entanglement structures for G. Then |ϕ⟩_G ≥|ψ⟩_G if and only if |ϕ_e⟩≥|ψ_e⟩ for each edge e ∈ E. Similarly, |ϕ⟩_G |ψ⟩_G if and only if |ϕ_e⟩|ψ_e⟩ for each edge e ∈ E.
Finally, we consider the situation where we have a k-uniform hypergraph G which is such that there exists a folding onto k vertices {v_1',…,v_k'}, such that each edge e gets mapped to {v_1',…,v_k'}. This means that edges are all folded on top of each other (and this is often possible for a lattice).
We will say that the hypergraph is foldable in this case.
[Diagram: folding a k-uniform lattice hypergraph so that all edges are stacked on top of each other]
Consider entanglement structures |ϕ⟩_G and |ψ⟩_G where we place the same k-party states |ϕ⟩ and |ψ⟩ at each edge, and where we moreover assume that |ϕ⟩ and |ψ⟩ are invariant under permutation of the k parties.
Then we directly see by folding that if |ϕ⟩_G ≥ |ψ⟩_G then we also find a restriction |ϕ⟩^⊗|E| ≥ |ψ⟩^⊗|E|.
In particular, this implies the following result.
Suppose G is a foldable k-uniform hypergraph, and |ϕ⟩_G and |ψ⟩_G are entanglement structures using permutation invariant k-party states |ϕ⟩ and |ψ⟩.
Then if |ϕ⟩_G |ψ⟩_G it holds that
|ϕ⟩|ψ⟩.
§.§ Folding and generalized flattenings
In this section we describe another useful way of grouping vertices in a hypergraph by folding some vertices onto each other, leaving other vertices separate, leading to a fan-like structure.
Using this grouping, we can get obstructions for the conversion of entanglement structures using the ideas of <cit.> about multiplicativity of rank lower bounds obtained via generalized flattenings of tensors.
Let us first explain the idea of using generalized flattenings of tensors.
For convenience we will restrict to the case of 3-tensors (although this is not crucial).
Let |ϕ⟩∈_̋A _̋B _̋C.
A `usual' flattening bound would be to group two out of A, B and C together and compute the rank of the resulting 2-tensor (i.e. matrix) and use this as a lower bound for the tensor rank. However, we may also try to `split' by some arbitrary linear map.
Let |ϕ⟩∈_̋A _̋B _̋C and let
P : _̋A _̋B _̋C →_̋X _̋Y
be a linear map.
Then we say that the 2-tensor
|ϕ^(P)⟩ = P |ϕ⟩∈_̋X _̋Y
is a generalized flattening of |ϕ⟩.
We need to take the map P into account when using this relation to bound the rank.
To this end we define the commutative rank of P to be the maximum rank in the image under P of any rank-one tensor,
crank(P) = max { rank(P(|ψ_ABC⟩)) : |ψ_ABC⟩ = |ψ_A⟩ ⊗ |ψ_B⟩ ⊗ |ψ_C⟩ ∈ ℋ_A ⊗ ℋ_B ⊗ ℋ_C }.
It is then easy to see that one can bound the (border) rank as follows (see e.g. <cit.>):
R̲(|ϕ⟩) ≥ rank(|ϕ^(P)⟩) / crank(P).
Lower bound methods of this type have been studied in <cit.> where it has been shown that the power of such bounds when applied to the border rank of the matrix multiplication tensor is limited.
An important special case is where one applies a linear map P_C : ℋ_C → ℋ_{X′} ⊗ ℋ_{Y′} only to the C-system; we then let P = 𝕀_AB ⊗ P_C and group together A X′ as X and B Y′ as Y, so
|ϕ^(P)⟩ = (𝕀_AB ⊗ P_C)|ϕ⟩.
[Diagram: the map P_C splits system C into X′ and Y′; grouping A with X′ and B with Y′ yields the 2-tensor |ϕ^(P)⟩]
The most useful generalized flattenings for (border) rank bounds are the Koszul flattenings, arising from the antisymmetrization map viewed as a linear map P_C : ℋ_C → Λ^p ℋ_C^* ⊗ Λ^{p+1} ℋ_C.
In this case the commutative rank is the binomial coefficient crank(P) = C(c−1, p), where c = dim ℋ_C.
There are also the more general Young flattenings, based on other representations of the symmetric group.
See <cit.> for extensive discussion of Young flattenings and associated lower bounds.
Here, we generalize this to the case where the system C is replaced by multiple systems, each of which we split up.
Let
|ϕ⟩ ∈ ℋ_A ⊗ ℋ_B ⊗ ⊗_{i=1}^m ℋ_{C_i}
and let P_i : ℋ_{C_i} → ℋ_{X_i} ⊗ ℋ_{Y_i} be linear maps.
Then the 2-tensor obtained by grouping together A X_1 … X_m and B Y_1 … Y_m from
|ϕ^(P)⟩ = (𝕀_AB ⊗ P_1 ⊗ … ⊗ P_m) |ϕ⟩ ∈ ℋ_A ⊗ ℋ_B ⊗ ⊗_{i=1}^m ℋ_{X_i} ⊗ ℋ_{Y_i}
is a generalized multiflattening of |ϕ⟩.
R̲(|ϕ⟩) ≥ rank(|ϕ^(P)⟩) / ∏_{i=1}^m crank(P_i).
Suppose that
|ϕ⟩ = ∑_{i=1}^r |a_i⟩ ⊗ |b_i⟩ ⊗ |c_{1,i}⟩ ⊗ … ⊗ |c_{m,i}⟩,
and write |ψ_i⟩ for the i-th product term, so that the rank of |ϕ⟩ is at most r.
Then |ϕ^(P)⟩ can be written as a sum of terms |ψ_i^(P)⟩.
Now, since |ψ_i⟩ is a product tensor, it is clear that
rank(|ψ_i^(P)⟩) ≤ ∏_{j=1}^m crank(P_j),
and therefore
rank(|ϕ^(P)⟩) ≤ ∑_{i=1}^r rank(|ψ_i^(P)⟩) ≤ r ∏_{j=1}^m crank(P_j),
so we get
R(|ϕ⟩) ≥ rank(|ϕ^(P)⟩) / ∏_{j=1}^m crank(P_j).
By semicontinuity of the rank of 2-tensors the same inequality holds for the border rank, i.e. for tensors |ϕ⟩ with border rank at most r.
Next, we will combine this idea with a folding.
Consider some hypergraph G, which we assume to be 3-uniform.
We assume that we can divide the vertices into subsets A, B and a collection of subsets C_i in such a way that the resulting folded hypergraph has a fan structure, meaning that there are edges {A,B,C_i}, but no edges involving multiple C_i.
We will denote by Fan(m) the fan hypergraph, with vertices V = {A, B, C_1,…, C_m} and edges e_i = {A,B,C_i}.
[Diagram: the fan hypergraph with vertices A, B and C_1, …, C_4 and edges {A, B, C_i}]
The fan hypergraph is useful for two reasons.
First of all, many interesting lattice graphs can be folded to a fan.
Secondly, the generalized multiflattenings behave well with respect to the fan structure!
Let |ϕ⟩_Fan(m) be an entanglement structure with state |ϕ_i⟩ on edge {A,B,C_i}.
Then for any generalized flattening maps P_i : ℋ_{C_i} → ℋ_{X_i} ⊗ ℋ_{Y_i} we have
R̲(|ϕ⟩_Fan(m)) ≥ ∏_{i=1}^m rank(|ϕ_i^(P_i)⟩) / crank(P_i).
For |ψ⟩ = |ϕ⟩_Fan(m) we see that |ψ^(P)⟩ is a tensor product of the states |ϕ_i^(P_i)⟩, so
rank(|ϕ⟩_Fan(m)^(P)) = ∏_{i=1}^m rank(|ϕ_i^(P_i)⟩).
We conclude by <ref>.
This means that once we have folded to a fan, we can apply generalized flattening bounds on individual plaquettes.
Two important examples where we can fold to a fan are the triangular lattice and the kagome lattice, see <ref>.
We denote by KG(n) and TR(n) the kagome and the triangular lattice with n edges.
From these foldings and <ref> we see that
Suppose that |ϕ⟩_KG(2n) ≥ |ψ⟩_KG(2n); then
|ϕ^⊗2⟩_Fan(n) ≥ |ψ^⊗2⟩_Fan(n).
If |ϕ⟩_TR(6n) ≥ |ψ⟩_TR(6n), then
|ϕ^⊗6⟩_Fan(n) ≥ |ψ^⊗6⟩_Fan(n).
The same statements are true using degenerations or asymptotic restrictions.
We can use this to prove obstructions to entanglement structure transformations, where we consider transformations from an entanglement structure where we place |_r(3)⟩ states on each plaquette to an entanglement structure with some 3-party state |ϕ⟩ at each plaquette.
To make the question more concrete, let us take the kagome lattice, and study the conversion of |_r(3)⟩ states to |_D(3)⟩ states.
The `naive' flattening bound, by just taking the rank at a single vertex, gives
r^2 ≥ D^4.
Asymptotic lower bounds do not perform better: if we use <ref>, we see that we need
|GHZ_r(3)⟩ ≳ |EPR_D⟩_△,
but since the best known lower bound for the matrix multiplication exponent is ω≥ 2, we again get r ≥ D^2.
Suppose that there is a degeneration
|GHZ_r(3)⟩_KG(n) ⊵ |ϕ⟩_KG(n)
on the kagome lattice; then for any generalized flattening we must have
r^2 ≥ rank((|ϕ⟩^⊗2)^(P)) / crank(P).
In particular, if |ϕ⟩ = |EPR_D⟩_△ we must have
r^2 ≥ 2D^4 - D^2.
Similarly, if
|GHZ_r(3)⟩_TR(n) ⊵ |ϕ⟩_TR(n)
on the triangular lattice, then for any generalized flattening we must have
r^6 ≥ rank((|ϕ⟩^⊗6)^(P)) / crank(P).
If |ϕ⟩ = |EPR_D⟩_△ we must have r^6 ≥ 2D^12 - D^6.
First of all, note that if we have n edges on an arbitrary 3-regular hypergraph G
(|_r(3)⟩_G) ≤ r^n
and |ψ⟩_G |ϕ⟩_G implies (|ψ⟩_G) ≥(|ϕ⟩_G).
On the other hand, by <ref> and <ref>, for any generalized flattening P
(|ϕ⟩_(n)) ≥(((|ϕ⟩^ 2)^(P))/(P))^n.
The statement about a general 3-party state |ϕ⟩ now follows from <ref>.
For the special case where |ϕ⟩ = |_D⟩_, using Koszul flattenings one obtains <cit.> (see also <cit.>)
|_r(3)⟩|_D⟩_⇒ r ≥ 2D^2 - D.
Since this bound is obtained by a generalized flattening, this implies the desired result.
For instance, on the kagome lattice, if D = 2 this requires r ≥ 6 while <ref> only gives r ≥ 4, and for D = 3 this gives r ≥ 13, while <ref> only requires r ≥ 9.
On the triangular lattice, if D = 2 this requires r ≥ 5 and for D = 3 this requires r ≥ 11 (while again, <ref> only gives r ≥ 4 and r ≥ 9 respectively).
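These numbers follow from the bounds in the proposition by simple arithmetic; the following short Python sketch (our own check) computes, for D = 2 and D = 3, the naive bound r ≥ D^2 together with the smallest r satisfying r^2 ≥ 2D^4 − D^2 and the smallest r satisfying r^6 ≥ 2D^12 − D^6.

import math

def lattice_bounds(D):
    naive = D * D                                         # r >= D^2 from a single flattening
    kagome = math.isqrt(2 * D ** 4 - D ** 2 - 1) + 1      # smallest r with r^2 >= 2D^4 - D^2
    triangular = next(r for r in range(2, 10 ** 6)
                      if r ** 6 >= 2 * D ** 12 - D ** 6)  # smallest r with r^6 >= 2D^12 - D^6
    return naive, kagome, triangular

for D in (2, 3):
    print(D, lattice_bounds(D))   # D=2: (4, 6, 5),  D=3: (9, 13, 11)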
§.§ Folding and the substitution method
Another important method for proving obstructions (i.e. lower bounds in algebraic complexity) is the substitution method.
We start by explaining the idea of the substitution method, and for convenience we again restrict to 3-tensors.
Suppose we have states |ϕ_ABC⟩ and |ψ_ABC⟩ on three parties A, B and C and suppose there exists a restriction
(M_A M_B M_C) |ϕ_ABC⟩ = |ψ_ABC⟩.
In the substitution method we observe that we may `substitute' any |x⟩∈_̋A in the first factor and get a restriction
(⟨x| M_A M_B M_C) |ϕ_ABC⟩ = (⟨x|𝕀_BC) |ψ_ABC⟩.
Note that if we think of the tensor as a map from _̋A to _̋B _̋C this corresponds to substituting some value in the map.
This is typically used for the case where |ϕ_ABC⟩ is a state and we want to compute the rank (|ψ⟩).
As a concrete example, let us consider the W state on three parties
|W(3)⟩ = |100⟩ + |010⟩ + |001⟩
and bound its rank.
It is clear that there exists a restriction |_3(3)⟩≥|W(3)⟩, since |W(3)⟩ is defined as a sum of three product tensors.
So, (|W(3)⟩) ≤ 3.
Can we do better?
The substitution method shows us that this is not the case.
Let |x⟩ = |0⟩ + x|1⟩ with x ∈ ℂ; then applying the substitution we find that
|W^(x)(3)⟩ = |01⟩ + |10⟩ + x|00⟩.
Note that the rank and border rank of |W^(x)(3)⟩ are at least 2 for all choices of |x⟩.
On the other hand, suppose that we have a restriction |_2(3)⟩≥|W(3)⟩, then we have
|W(3)⟩ = ∑_i=0,1|a_i⟩|b_i⟩|c_i⟩.
At least one of the |a_i⟩ must not be proportional to |0⟩, so we can choose |x⟩ such that ⟨x|a_i⟩ = 0. But this would imply that |W^(x)(3)⟩ is a product state, which is a contradiction.
Therefore, there does not exist a restriction |_2(3)⟩≥|W(3)⟩ and (W(3)) = 3.
There does exist a degeneration |GHZ_2(3)⟩ ⊵ |W(3)⟩, given by
(|0⟩ + ε|1⟩)^⊗3 - |0⟩^⊗3 = ε|W(3)⟩ + O(ε^2),
so the border rank of |W(3)⟩ is 2, and we see that the substitution method is able to distinguish degenerations from restrictions (which is not the case for flattening ranks).
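Both ingredients of this argument are easy to verify numerically; the sketch below (our own illustration) checks that the substituted 2-tensor |W^(x)(3)⟩ has rank 2 for several values of x, and that the ε-expression above indeed converges to the W state at rate ε.

import numpy as np

# The W state on three qubits.
W = np.zeros((2, 2, 2))
W[1, 0, 0] = W[0, 1, 0] = W[0, 0, 1] = 1.0

# Substitution: for <x| = <0| + x<1| the 2-tensor W^(x) has rank 2 for every x.
for x in (0.0, 1.0, -2.5, 3.7j):
    Wx = np.tensordot(np.array([1.0, x]), W.astype(complex), axes=([0], [0]))
    print(np.linalg.matrix_rank(Wx), end=' ')   # 2 2 2 2
print()

# Degeneration GHZ_2(3) -> W(3): (|0> + eps|1>)^{x3} - |0>^{x3} = eps*W + O(eps^2).
for eps in (1e-1, 1e-2, 1e-3):
    v = np.array([1.0, eps])
    approx = np.einsum('i,j,k->ijk', v, v, v)
    approx[0, 0, 0] -= 1.0
    print(np.linalg.norm(approx / eps - W))      # decreases linearly in eps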
Here we will apply a more complicated version of the substitution method to prove that on the kagome lattice or a triangular lattice there is no restriction from the entanglement structure using |_2(3)⟩ states to the entanglement structure using |W(3)⟩ states.
This is not a direct consequence of the substitution method above, as it could be the case that the transformation is not possible on a single plaquette, but is possible on the lattice.
Note that we have |_2(3)⟩|W(3)⟩ and hence |_2(3)⟩|W(3)⟩.
We will use a folding, where we choose one specific plaquette (because of translation invariance of the lattice it does not matter which one).
Let us denote the three vertices adjacent to this plaquette a, b and c.
In the folding we choose we map a to A, b to B and all other vertices to C. We may apply a similar folding to a (half-filled) triangular lattice, see <ref>. For the remainder of this section we let G either be this lattice or the kagome lattice.
For the structure with |GHZ_2(3)⟩ this gives a state equivalent to the three parties sharing a |GHZ_2(3)⟩ state together with level-2 EPR pairs between A and C and between B and C; we denote this state by |ϕ_ABC⟩ = |GHZ_2(3)⟩ ⊗ |EPR_2⟩_∧.
Similarly, for the entanglement structure using the |W(3)⟩ state we get a state equivalent to the three parties sharing a W state together with level-2 EPR pairs between A and C and between B and C, which we denote by |ψ_ABC⟩ = |W(3)⟩ ⊗ |EPR_2⟩_∧.
We label the parties as follows:
[Diagram: the folded states |ϕ⟩ and |ψ⟩ on subsystems A_1, A_2, B_1, B_2, C_1, C_2, C_3; |ϕ⟩ has a GHZ_2(3) state on A_1, B_1, C_1 and level-2 EPR pairs A_2–C_2 and B_2–C_3, while |ψ⟩ has a W state on A_1, B_1, C_1 and the same EPR pairs]
There is no restriction from |ϕ_ABC⟩ to |ψ_ABC⟩.
By the folding argument <ref> has as a direct consequence that the entanglement structure with the W state cannot be obtained from the entanglement structure with _2(3) states on the triangular or kagome lattice G.
|_2(3)⟩_G≱|W(3)⟩_G.
We assume that there exists a restriction
(M_A M_B M_C)|ϕ_ABC⟩ = |ψ_ABC⟩
and derive a contradiction.
In this case, the tensors are all concise (their reduced density matrices have full rank) and since the systems on both sides are of equal dimension, the maps M_A, M_B and M_C must be invertible.
We will use a version of the substitution method to show that this is not possible.
For |x⟩ ∈ (ℂ^2)^⊗2, which is the Hilbert space of the A = A_1 A_2 system for both |ϕ_ABC⟩ and |ψ_ABC⟩, let
|ϕ^(x)⟩ = (⟨x| ⊗ 𝕀_BC) |ϕ_ABC⟩,
[Diagram: substituting ⟨x| on the A system of |ϕ_ABC⟩ yields the 2-tensor |ϕ^(x)⟩ on B and C]
and similarly for |ψ_ABC^(x)⟩.
These are 2-tensors on systems B and C.
We then define
X_ϕ = { |x⟩ such that rank(|ϕ^(x)⟩) ≤ 2 }
X_ψ = { |x⟩ such that rank(|ψ^(x)⟩) ≤ 2 }.
Since the maps M_A, M_B and M_C are invertible, it is easy to see that
|x⟩∈ X_ψ⇔|y⟩ = M_A^†|x⟩∈ X_ϕ.
Indeed,
|ψ^(x)⟩ = (⟨x|𝕀_BC)(M_A M_B M_C)|ϕ⟩
= (M_B M_C) |ϕ^(y)⟩
so (|ψ^(x)⟩) = (|ϕ^(y)⟩).
So, we see that (if a restriction exists) the sets X_ϕ and X_ψ must be related by a linear transformation.
Now, we note that for any |x⟩, |ϕ^(x)⟩ and |ψ^(x)⟩ still share an pair between B and C.
Denote by |ϕ̃⟩ and |ψ̃⟩ the states where we have left out the pair between B and C, so
[height=1.2cm,grid=false]folded-w-no-epr
(-5,-5)0.8A_1 (-8,5)0.8A_2 (30,-5)0.8B_1
(4,27)0.8C_2 (12.5,28)0.8C_1
(63,-5)0.8A_1 (60,5)0.8A_2 (98,-5)0.8B_1
(72,27)0.8C_2 (80.5,28)0.8C_1
(14,8)2 (80,7)W
(10,-10)|ϕ̃⟩ (80,-10)|ψ̃⟩
Now, for x ∈ X_ϕ we let
|ϕ̃^(x)⟩ = (⟨x|𝕀_BC)|ϕ̃⟩
which is just |ϕ^(x)⟩ without the EPR pair between B and C, and therefore rank(|ϕ^(x)⟩) = 2·rank(|ϕ̃^(x)⟩) ≤ 2, so rank(|ϕ̃^(x)⟩) ≤ 1.
The state |x⟩ has either rank 1 or 2 as a state on A_1 A_2, so we can write a Schmidt decomposition
|x⟩ = ∑_i = 1^r |f_i⟩_A_1|e_i⟩_A_2
with r = 1 or r = 2.
Then
|ϕ̃^(x)⟩ = ∑_{i=1}^r ( ⟨f_i|0⟩ |0⟩_B_1 |0⟩_C_1 + ⟨f_i|1⟩ |1⟩_B_1 |1⟩_C_1 ) ⊗ |e_i⟩_C_2,
which has rank at least r.
So, (|ϕ̃^(x)⟩) ≤ 1 implies that |x⟩ as a state on A_1 A_2 must have rank 1.
So, write |x⟩ = |f⟩_A_1|e⟩_A_2 with |f⟩ = f_0|0⟩ + f_1 |1⟩.
Then
|ϕ̃^(x)⟩ = ( ⟨f|0⟩ |0⟩_B_1 |0⟩_C_1 + ⟨f|1⟩ |1⟩_B_1 |1⟩_C_1 ) ⊗ |e⟩_C_2.
This has rank 1 if and only if f_0 = 0 or f_1 = 0, i.e. if |x⟩ can be written as
|x⟩ = |0⟩|e⟩ or |x⟩ = |1⟩|e⟩
for |e⟩∈^2.
In other words, X_ϕ is a union of two complex planes.
For |ψ⟩ the same reasoning holds, but with the difference that |ψ̃^(x)⟩ is given by
( ⟨f|0⟩ (|0⟩_B_1 |1⟩_C_1 + |1⟩_B_1 |0⟩_C_1) + ⟨f|1⟩ |0⟩_B_1 |0⟩_C_1 ) ⊗ |e⟩_C_2,
which has rank 1 if and only if f_0 = 0, and X_ψ consists of all vectors
|x⟩ = |1⟩|e⟩
for |e⟩∈^2.
As X_ϕ is a union of two planes, and X_ψ is a single plane, they can not be related by an invertible linear map M_A, and we conclude that no restriction as in (<ref>) exists.
In this case, the proof was relatively straightforward due to the fact that the restriction had to be invertible (since the tensors were concise on Hilbert spaces of the same dimension).
However, the proof is instructive since the same strategy can be used in a more general setting, where the map M_A need not be invertible.
We use this more general approach to show that there is no restriction from the entanglement structure using EPR-pairs (so placing |2⟩_, the 2 × 2 matrix multiplication tensor on each plaquette) to the entanglement structure using the λ-tensor
|λ⟩ = ∑_i,j,k = 0^2 ϵ_ijk|ijk⟩ + |222⟩
where ϵ_ijk is the antisymmetric tensor.
It is already known <cit.> that on the level of a single plaquette there is no restriction
|_2⟩_≱|λ⟩
while there does exist a degeneration
|_2⟩_|λ⟩.
Hence, there are also asymptotic restrictions |_2⟩_|λ⟩ and a lattice degeneration |_2⟩_G|λ⟩_G.
Since we need to be able to separate a degeneration from a restriction, the substitution method is again a natural choice in this case.
We let G be the kagome lattice (or the half-filled triangular lattice), but without periodic boundary conditions.
We perform the folding at the boundary of the lattice; see <ref>
In this case, for |λ⟩_G we get a state equivalent to the three parties sharing one copy of |λ⟩ and a level-3 EPR pair between B and C.
We denote this state by |λ⟩ ⊗ |EPR_3⟩_BC.
For the entanglement structure built from |EPR_2⟩_△ we get a state with level-2 EPR pairs between A and B and between A and C, and a level-8 EPR pair between B and C.
One can think of this as |EPR_2⟩_△ ⊗ |EPR_4⟩_BC.
So, we would like to show
[Diagram: the claimed non-restriction ≱ between, on the left, level-2 EPR pairs between A and B and between A and C together with a level-8 EPR pair between B and C, and, on the right, |λ⟩ on A, B, C together with a level-3 EPR pair between B and C]
There does not exist a restriction from |EPR_2⟩_△ ⊗ |EPR_4⟩_BC to |λ⟩ ⊗ |EPR_3⟩_BC.
The proof (based on the substitution method and generalizing the ideas in <ref>) can be found in <ref>.
The folding argument implies that <ref> has as a direct consequence that the entanglement structure with the λ-tensor can not be represented with bond dimension 2 on any kagome lattice G with a boundary, solving a main open problem from <cit.>.
|_2⟩_G≱|λ⟩_G
We conjecture that this should also not be possible with periodic boundary conditions, and explain how our methods could be used for this in <ref>.
§.§ Asymptotic restrictions and the asymptotic spectrum
We already saw that at least for foldable hypergraphs (and permutation-invariant states), the non-existence of asymptotic restrictions is an obstruction to the existence of entanglement structure conversions.
We will now study in more depth the asymptotic conversion of entanglement structures |ϕ⟩_G |ψ⟩_G as in <ref>.
There is a well-developed theory for asymptotic conversion of tensors.
It turns out that whether or not there exists an asymptotic conversion between two tensors can be decided by evaluating a compact set of appropriate functionals on the tensors, known as the asymptotic spectrum of tensors.
Suppose we consider k-tensors.
Then we denote by 𝒳(k) the set of all functions on non-normalized quantum states (i.e. tensors)
f:{ k-party quantum states}→ℝ_≥ 0
which are such that they are
* monotone under restriction, so |ϕ⟩ ≥ |ψ⟩ implies f(|ϕ⟩) ≥ f(|ψ⟩)
* multiplicative under tensor products, so f(|ϕ⟩ ⊗ |ψ⟩) = f(|ϕ⟩)f(|ψ⟩)
* additive under direct sums, so f(|ϕ⟩ ⊕ |ψ⟩) = f(|ϕ⟩) + f(|ψ⟩)
* normalized by f(|GHZ_r(k)⟩) = r.
This collection of functionals 𝒳(k) can be given the structure of a compact topological space and is known as the asymptotic spectrum of tensors.
A function f ∈𝒳(k) is a point in the asymptotic spectrum.
A cornerstone result of the algebraic theory of tensors is the following theorem by Strassen
<cit.>.
Let |ϕ⟩ and |ψ⟩ be k-tensors.
We have an asymptotic restriction |ϕ⟩ ≳ |ψ⟩ if and only if f(|ϕ⟩) ≥ f(|ψ⟩) for all f ∈ 𝒳(k).
This gives a complete characterization of asymptotic restriction. However, we do not have complete knowledge of the asymptotic spectrum and only know a few points (note that if one could evaluate all f ∈𝒳(3) this would allow one directly to compute the matrix multiplication exponent ω).
We will relate the asymptotic spectrum to asymptotic tensor network conversions.
Given f ∈𝒳(n) we can restrict to functionals which only act on a subset of the n parties.
So, we assume we have n parties X = {x_1,…,x_n}.
Let k < n and let Y be a subset of X of size Y = k.
Given a state |ϕ_Y⟩ on the k parties Y on a Hilbert space _̋Y = ⊗_x ∈ Y_̋x, define
|ϕ_Y⟩_X = |ϕ_Y⟩⊗_x ∈ X ∖ Y|0⟩.
For any f ∈𝒳(n) we define a nonnegative function f_Y on k parties by
f_Y(|ϕ_Y⟩) = f(|ϕ_Y⟩_X).
The functions f_Y are monotone under restriction and multiplicative under tensor products.
Moreover,
f_Y(|ϕ⟩ ⊕ |ϕ⟩) = α f_Y(|ϕ⟩)
for
α = f_Y(|GHZ_2(k)_Y⟩).
First of all, f_Y is monotone under restriction and multiplicative under tensor products.
Indeed, a restriction |ϕ_Y⟩≥|ψ_Y⟩ implies |ϕ_Y⟩_X ≥|ψ_Y⟩_X, so using the monotonicity of f we have f_Y(|ϕ_Y⟩) ≥ f_Y(|ψ_Y⟩).
Similarly, (|ϕ_Y⟩|ψ_Y⟩)_X = |ϕ_Y⟩_X |ψ_Y⟩_X, so
f_Y(|ϕ_Y⟩|ψ_Y⟩) = f(|ϕ_Y⟩_X |ψ_Y⟩_X)
= f(|ϕ_Y⟩_X)f(|ψ_Y⟩_X)
= f_Y(|ϕ_Y⟩)f_Y(|ψ_Y⟩).
In general, f_Y need not be additive or normalized.
For any |ϕ_Y⟩ we have
(|ϕ_Y⟩ ⊕ |ϕ_Y⟩)_X ≅ (|ϕ_Y⟩ ⊗ |GHZ_2(k)_Y⟩)_X
and hence, by multiplicativity,
f_Y(|ϕ_Y⟩ ⊕ |ϕ_Y⟩) = f_Y(|ϕ_Y⟩) f_Y(|GHZ_2(k)_Y⟩).
While we do not necessarily get additivity, for the f_Y, multiplicativity and monotonicity are in some sense the most important properties of the points in the asymptotic spectrum, since these properties directly relate to the notion of asymptotic restriction.
We will apply this to the case where we have a graph G, so we may consider the asymptotic spectrum 𝒳(n) where n = V.
Then as in <ref> we get functions f_e for each edge e.
Suppose G is a hypergraph with V = n and |ϕ⟩_G and |ψ⟩_G are entanglement structures.
Then |ϕ⟩_G |ψ⟩_G if and only if for all f ∈𝒳(n)
∏_e ∈ E f_e(|ϕ_e⟩) ≥∏_e ∈ E f_e(|ψ_e⟩)
By <ref> we have |ϕ⟩_G |ψ⟩_G if and only if
f(⊗_e ∈ E|ϕ_e⟩) ≥ f(⊗_e ∈ E|ϕ_e⟩)
for all f ∈𝒳(n).
By definition, f ∈𝒳(n) is multiplicative under tensor products.
This gives
f(⊗_e ∈ E|ϕ_e⟩) = ∏_e ∈ E f_e(|ϕ_e⟩)
and similarly for |ψ⟩_G.
Note that in <ref> the functions f_e are not independent, as they have to derive from a global f ∈𝒳(n). So, the condition in <ref> is not equivalent to f_e(|ϕ_e⟩) ≥ f_e(|ψ_e⟩) for all edges.
This makes sense, as in general |ϕ⟩_G |ψ⟩_G does not imply conversion on the level of individual plaquettes, as in the basic example
[Diagram: as before, the plaquette states |ϕ⟩ and |ψ⟩, two EPR pairs placed along different sides of the plaquette, give equivalent entanglement structures on the periodic lattice]
In this case |ϕ⟩_G and |ψ⟩_G are actually equivalent (and therefore asymptotically equivalent), so we have f(|ϕ⟩_G) = f(|ψ⟩_G) for all f ∈𝒳(n).
On the other hand, an appropriate rank functional clearly distinguishes single plaquettes |ϕ_e⟩ and |ψ_e⟩.
To be able to make use of <ref>, we need to know concrete points in the asymptotic spectrum.
Easy examples are given by ranks across bipartitions of the parties.
This of course is equivalent to previous obstructions obtained by folding.
However, there exists a broad class of examples (in fact encompassing all known examples), which are the so-called quantum functionals as introduced in <cit.>.
These are defined as follows.
For a k-particle quantum state |ϕ⟩ we denote by ρ^(ϕ)_i the (mixed) quantum state resulting from tracing out all but the i'th system of the normalized state
ρ^(ϕ) = 1/⟨ϕ | ϕ|⟩ϕ.
The von Neumann entropy of the reduced state ρ^(ϕ)_i is given by
H(ρ^(ϕ)_i) = - [ ρ^(ϕ)_ilogρ^(ϕ)_i]
For a probability distribution θ = {θ_1,…, θ_n} the corresponding quantum functional is defined as F_θ = 2^{E_θ} with
E_θ(|ϕ⟩) = sup_{|ϕ⟩ ⊵ |ψ⟩} ∑_{i=1}^n θ_i H(ρ^(ψ)_i).
The set that one optimizes over is closely related to the entanglement polytope of |ϕ⟩, which is the set of all spectra of ρ^(ψ)_i that can be obtained from a degeneration |ϕ⟩|ψ⟩ <cit.>.
A dual characterization of the entanglement polytope is crucial in showing that the quantum functionals are multiplicative.
As an example application, consider again the W state on three parties.
The W state does not asymptotically restrict to the level-2 GHZ state <cit.>, which can be seen by computing the quantum functional for the uniform distribution θ = {1/3,1/3,1/3}, in which case
F_θ(|GHZ_2(3)⟩) = 2 > F_θ(|W(3)⟩) = 2^{H(1/3,2/3)} ≈ 1.89.
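Evaluating the entropy term at the states themselves (i.e. for the trivial degeneration) already reproduces the two values quoted above; the following numpy sketch (our own illustration) does exactly this for the normalized GHZ and W states.

import numpy as np

def entropy_term(psi, theta):
    # sum_i theta_i H(rho_i) for a (not necessarily normalized) k-party state psi
    psi = psi / np.linalg.norm(psi)
    total = 0.0
    for i, t in enumerate(theta):
        m = np.moveaxis(psi, i, 0).reshape(psi.shape[i], -1)   # flatten party i vs the rest
        ev = np.linalg.eigvalsh(m @ m.conj().T)                # spectrum of the reduced state
        ev = ev[ev > 1e-12]
        total += t * float(-(ev * np.log2(ev)).sum())
    return total

ghz2 = np.zeros((2, 2, 2)); ghz2[0, 0, 0] = ghz2[1, 1, 1] = 1.0
w = np.zeros((2, 2, 2));    w[1, 0, 0] = w[0, 1, 0] = w[0, 0, 1] = 1.0

theta = (1 / 3, 1 / 3, 1 / 3)
print(2 ** entropy_term(ghz2, theta))   # 2.0
print(2 ** entropy_term(w, theta))      # about 1.89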
The quantum functionals behave especially nice when applied as in <ref>.
In this case, choose a distribution θ = {θ_v : v ∈ V}.
We now see that (by multiplicativity)
E_θ(|ϕ⟩_G) = ∑_{e ∈ E} Θ_e E_{θ^(e)}(|ϕ_e⟩)
where
Θ_e = ∑_v ∈ eθ_v and θ^(e) = {θ_v^(e) = θ_v/Θ_e : v ∈ e}.
So, E_θ(|ϕ⟩) is a convex combination of quantum functionals E_θ^(e)(|ϕ_e⟩).
In particular, if in <ref> we take f = F_θ, then the normalized restriction to e given by f_e^Θ_e^-1 is again a quantum functional.
This is stronger than <ref>, as it implies that (after normalization) we also have additivity of the restricted functional.
If we have a graph (so a 2-uniform hypergraph) with n vertices and we place maximally entangled states of bond dimension D_e on each edge giving an entanglement structure |ϕ⟩_G, we find that taking the uniform distribution θ with θ_v = 1/n for v ∈ V, the quantum functional gives
F_θ(|ϕ⟩_G) = ∏_e ∈ E D_e^2/n
and more generally for an arbitrary distribution θ we find
F_θ(|ϕ⟩_G) = ∏_e = (vw) ∈ E D_e^θ_v + θ_w.
This can be used to lower bound the asymptotic bond dimension required for some entanglement structure of interest.
As an example application, the quantum functionals allow one to conclude that on any hypergraph (so it does not have to be foldable), we can not have an (asymptotic) conversion of an entanglement structure using W states to one using _2(3) states.
A small example of such a graph which can not be folded is
[Figure: a small 3-uniform hypergraph that cannot be folded.]
as can be easily seen.
Let G be any 3-uniform hypergraph, and let |W⟩_G and |_2(3)⟩_G be the entanglement structures where each edge is assigned a W state or _2(3) state respectively.
Then there is no asymptotic restriction from |W⟩_G to |_2(3)⟩_G.
In particular, there is also no restriction, so |W⟩_G ≱|_2(3)⟩_G.
For the uniform distribution over the vertices, we find that each θ^(e) is a uniform distribution, and therefore
F_θ(|_2(3)⟩_G) > F_θ(|W⟩_G)
which implies there is no asymptotic restriction.
§ SYMMETRIES AND RANKS OF ENTANGLEMENT STRUCTURES
We end our investigation of the resource theory of tensor networks by studying some structural properties of entanglement structures.
An important feature of tensor networks is the so-called gauge symmetry which implies that the same quantum state can have equal bond dimension representations with different tensors applied.
This is a crucial ingredient in relating tensor network states to phases of matter.
Here, we will introduce this gauge symmetry for arbitrary entanglement structures.
This gauge symmetry has rather different properties than in the standard tensor network formalism.
First, the gauge group is potentially a finite group. Second, we show that there are entanglement structures with gauge symmetries that act on multiple edges.
We show that this does not happen for acyclic hypergraphs.
After this we turn to the tensor rank of entanglement structures.
We work out a nontrivial example, solving an open question in <cit.>.
§.§ Stabilizers of entanglement structures
We will now turn our attention to the stabilizer group, or gauge group, of entanglement structures.
Given an entanglement structure with Hilbert spaces H_v of dimension d_v at vertex v ∈ V, we have the group ∏_v ∈ V GL(d_v), which acts on |ϕ⟩_G by
(g_v)_v ∈ V·|ϕ⟩_G = (⊗_v ∈ V g_v ) |ϕ⟩_G.
Then we define the stabilizer of the entanglement structure as
Stab(|ϕ⟩_G) = {(g_v)_v ∈ V, g_v ∈ GL(d_v),
such that (g_v)_v ∈ V·|ϕ⟩_G = |ϕ⟩_G}.
A well-known example is the situation where we have maximally entangled states along edges.
If we have a maximally entangled state of level D at some edge, we may act with g ∈ GL(D) on one side of the edge and with g^-T = (g^-1)^T on the other side.
[Figure (epr-gauge): an EPR pair with g acting on one endpoint and g^-T on the other.]
This gives an element in the stabilizer since
(g ⊗ g^-T) ∑_i=1^D |ii⟩ = ∑_i=1^D |ii⟩.
This gauge symmetry is widely studied in the context of tensor networks and is an essential ingredient in the classification of phases and symmetries using the tensor network formalism <cit.>.
Note that it implies that for a tensor network state with tensors T_v, there exist transformations on the tensors T_v which change the local tensors but do not change the resulting tensor network state.
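As a quick numerical sanity check of this gauge freedom, the sketch below draws a generic invertible g and verifies that g ⊗ g^{-T} fixes the unnormalized maximally entangled state ∑_i |ii⟩; the code is purely illustrative.

import numpy as np

D = 4
rng = np.random.default_rng(0)
g = rng.standard_normal((D, D))        # a generic matrix is invertible almost surely
g_invT = np.linalg.inv(g).T

# Unnormalized maximally entangled state sum_i |ii>, as a vector in C^D (x) C^D.
epr = np.eye(D).reshape(-1)            # row-major vec of the identity

out = np.kron(g, g_invT) @ epr
print(np.allclose(out, epr))           # True: (g (x) g^{-T}) preserves sum_i |ii>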
More generally, given an arbitrary entanglement structure we get elements in the stabilizer by looking at the stabilizers for single edges; that is, if e = {v_1,…,v_k} we may look at g_e,v_i ∈ GL(H_e,v_i) such that
(g_e,v_1⊗⋯⊗ g_e,v_k) |ϕ_e⟩ = |ϕ_e⟩.
Such g_e,v_i is called an edge stabilizer.
Then it is clear that
(g_v)_v ∈ V, g_v = ⊗_e : v ∈ e g_e,v
is an element of Stab(|ϕ⟩_G).
In the special case of an EPR pair, this group was a continuous group, but in general this group can be finite <cit.>.
An interesting question is whether all symmetries of the entanglement structure arise as symmetries of the edge states.
In other words, the question is whether every element of Stab(|ϕ⟩_G) arises as a product of edge stabilizers.
The answer is no as the following example shows.
Consider the hypergraph G consisting of two 3-edges with two common vertices.
As entanglement structure we place a W state on both edges.
For q ∈ ℂ, let g(q) be defined by
g(q) : (ℂ^2)^⊗ 2→ (ℂ^2)^⊗ 2
g(q) = 𝕀 + q|00⟩⟨11|.
This is clearly not a tensor product operator for q ≠ 0.
We now apply g(q) and g(-q) to the vertices of degree 2
[Figure (W-gauge): the two W states on the edges sharing two vertices, with g(q) applied at one degree-2 vertex and g(-q) at the other.]
This stabilizes |W⟩_G, but is not a product of edge stabilizers.
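This claim can be checked numerically; the sketch below orders the qubits as (a, u_1, w_1, u_2, w_2, b), where u and w are the two shared vertices (our labels), applies g(q) to (u_1, u_2) and g(-q) to (w_1, w_2), and confirms that |W⟩_G is unchanged.

import numpy as np

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])

def kron_all(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# Three-qubit W state |001> + |010> + |100>.
W = (kron_all(ket0, ket0, ket1) + kron_all(ket0, ket1, ket0) + kron_all(ket1, ket0, ket0))

def g(q):
    # g(q) = I + q|00><11| on two qubits, reshaped as a (2,2,2,2) tensor [out1,out2,in1,in2].
    op = np.eye(4)
    op[0, 3] += q
    return op.reshape(2, 2, 2, 2)

q = 0.7
# Qubit order: (a, u1, w1, u2, w2, b); the shared vertices hold the pairs (u1,u2) and (w1,w2).
state = np.kron(W, W).reshape([2] * 6)

# Apply g(q) on (u1,u2) and g(-q) on (w1,w2).
out = np.einsum('ABab,CDcd,pacbdt->pACBDt', g(q), g(-q), state)
print(np.allclose(out, state))   # True: the non-product operator stabilizes |W>_G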
On the other hand, for acyclic hypergraphs we will show that the stabilizer is just the product of the edge stabilizers.
Suppose G has two edges e_1 and e_2, sharing a single vertex v.
Let |ϕ⟩_G = |ϕ_e_1⟩⊗|ϕ_e_2⟩ and |ψ⟩_G = |ψ_e_1⟩⊗|ψ_e_2⟩ be entanglement structures with |ϕ_e_1⟩ and |ϕ_e_2⟩ concise, and suppose that M_v is such that
(M_v ⊗𝕀_V ∖ v)|ϕ⟩_G = |ψ⟩_G.
Then there exist M_i acting only on H_e_i,v such that M_v = M_1 ⊗ M_2.
So, if we have
[Figure (tree-gauge): M applied at the shared vertex of |ϕ_e_1⟩ ⊗ |ϕ_e_2⟩ gives |ψ_e_1⟩ ⊗ |ψ_e_2⟩.]
then
[Figure (product-stabilizer): the same map with M factored as M_1 ⊗ M_2 on the two edge Hilbert spaces.]
We group together all vertices in respectively e_1 and e_2 into parties A and B.
Then we consider Schmidt decompositions
|ϕ_1⟩ = ∑_i=1^r_1 s_i|a_i⟩|e_i⟩ |ϕ_2⟩ = ∑_i=1^r_2 t_i|b_i⟩|f_i⟩
where the |e_i⟩ and |f_i⟩ are orthonormal bases of H_e_1,v and H_e_2,v (by conciseness).
Now, the map M_v is completely defined by
M_v |e_i⟩|f_j⟩ = (𝕀_v ⟨a_i|⟨b_j|)(M_v 𝕀_V ∖ v)|ϕ⟩_G
= (𝕀_v ⟨a_i|⟨b_j|) |ψ⟩_G
= ((𝕀_v ⟨a_i|) |ψ_1⟩) ((𝕀_v ⟨b_j|) |ψ_2⟩).
This is again a tensor product state, so we see that we may take
M_1 = ∑_i (𝕀_v ⟨a_i|) |ψ_1⟩
M_2 = ∑_j (𝕀_v ⟨b_j|) |ψ_2⟩
If G is an acyclic hypergraph and each |ϕ_e⟩ is concise, all elements of Stab(|ϕ⟩_G) are products of edge stabilizers.
Take (g_v)_v ∈ V ∈ Stab(|ϕ⟩_G).
Pick an arbitrary vertex v ∈ V, and divide V ∖{v} into two groups V_1 and V_2 such that there are no edges between V_1 and V_2 (this is possible since the hypergraph is acyclic). Let E_i be the set of edges e for which e ⊆ V_i ∪{v}. The disjoint union of E_1 and E_2 is the full edge set E.
Now, define a hypergraph H with vertex set V and edge set F = {f_1, f_2}, where f_i = V_i ∪{v}.
Define |ϕ⟩_H by
|ϕ_f_i⟩ = ⊗_e ∈ E_i|ϕ_e⟩.
That is, we coarse-grain the hypergraph as follows:
[Figure (coarse-grain-tree): the hypergraph coarse-grained into two edges f_1 and f_2 meeting at the vertex v.]
Let
|ψ⟩_H = (𝕀_v ⊗ (⊗_w ∈ V ∖{v} g_w))|ϕ⟩_H.
Note that |ψ⟩_H is again an entanglement structure on H.
Then, (g_v 𝕀_V ∖{v})|ϕ⟩_H = |ψ⟩_H, and by <ref> there exist g_v,1, g_v,2 such that g_v = g_v,1 g_v,2.
Since we showed this for an arbitrary vertex and arbitrary assignment of edges, we conclude that each g_v can be written as a tensor product over the adjacent edges
g_v = ⊗_e : v ∈ e g_e,v.
This decomposition is unique up to a choice of phases. After an appropriate choice of phases, in order for (g_v)_v ∈ V to be a stabilizer of |ϕ⟩_G we must have
(⊗_v ∈ e g_e,v) |ϕ_e⟩ = |ϕ_e⟩
for every edge e ∈ E, and we conclude that (g_v)_v ∈ V ∈ Stab(|ϕ⟩_G) is a product of edge stabilizers.
This result supplements <ref>, which shows that on an acyclic graph |ϕ⟩_G ≥|ψ⟩_G implies |ϕ_e⟩≥|ψ_e⟩ for each edge e ∈ E.
§.§ Ranks of entanglement structures
We will end by making some observations regarding the tensor rank of entanglement structures. Recall that for a quantum state |ϕ⟩ on k parties we call
R(|ϕ⟩) = min{ r: |ϕ⟩ = ∑_i = 1^r|u_i1⟩…|u_ik⟩, u_ij∈ U_j }
the rank of |ϕ⟩. In other words, it is the minimal number of product states whose linear span contains |ϕ⟩, or it is the minimal r such that there exists a restriction _r(k) ≥|ϕ⟩, where _r(k) denotes the level-r state on k parties.
In the case of 3-tensors it is closely related to the complexity of computing bilinear operations (e.g. the complexity of matrix multiplication is related to the tensor rank of |_n⟩_ as discussed in <ref>).
The rank of a quantum state is a natural measure of multipartite entanglement (as it is the entanglement cost from states under SLOCC) <cit.>, and it is interesting to study this measure for entanglement structures.
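As a small illustration, the sketch below computes the three bipartite flattening ranks of the W state, which are all 2, even though its tensor rank is 3; this is why flattenings alone cannot certify the rank.

import numpy as np

W = np.zeros((2, 2, 2))
W[0, 0, 1] = W[0, 1, 0] = W[1, 0, 0] = 1.0     # |001> + |010> + |100>

# Flattening ranks across the bipartitions {1}|{2,3}, {2}|{1,3}, {3}|{1,2}.
for axis in range(3):
    M = np.moveaxis(W, axis, 0).reshape(2, 4)
    print(axis, np.linalg.matrix_rank(M))       # each flattening has rank 2

# The tensor rank of W is 3, strictly above all flattening lower bounds.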
Given a hypergraph G and entanglement structure |ϕ⟩_G with state of |ϕ_e⟩ on the plaquettes of G it is interesting to compare the ranks of the individual |ϕ_e⟩ with the rank of the entanglement structure |ϕ⟩_G.
It is immediate that
R(|ϕ⟩_G) ≤∏_e ∈ E R(|ϕ_e⟩).
This inequality can be an equality but is not so in general <cit.>.
The same is true for the border rank <cit.>.
The tensor rank R(|ϕ⟩_G) is well-studied in the special case where G is a graph and the |ϕ_e⟩ are two-party states (and then we may assume without loss of generality that they are EPR pairs).
For example, when G has no cycles, one has equality in <ref>.
More generally, it is easy to see that we have equality if the graph is bipartite <cit.>, since we get a sharp flattening rank lower bound from the bipartitioning of the graph. This for instance allows one to compute the rank of an entanglement structure with pairs on a square lattice.
On the other hand, for example for any cycle of odd length one has strict inequality in <ref> (with the cycle of length 3 corresponding to matrix multiplication) <cit.>.
The asymptotic version of this question (computing the asymptotic rank of |ϕ⟩_G) has been studied in <cit.> for complete graphs.
For hypergraphs with edges of size |e| > 2, in general we can even have strict inequality in <ref> if the hypergraph is completely disconnected <cit.>.
This occurs for instance when one takes two copies of the W state, as already discussed in <ref>.
All together this leads to the following question:
Given n copies of a k-party state |ϕ⟩, and a k-uniform hypergraph G with n edges, how does R(|ϕ⟩_G) depend on the structure of G?
In this work we only briefly touch on this question. As an example, we completely work out the (nontrivial) answer in the case of two W states.
We have the following four possible graphs
[Figure (two-edge-graphs): the four possible hypergraphs G_6, G_5, G_4 and G_3 built from two 3-edges sharing zero, one, two or all three vertices, respectively.]
By folding we see that
R(|W⟩_G_6) ≥ R(|W⟩_G_5) ≥ R(|W⟩_G_4) ≥ R(|W⟩_G_3).
In <cit.> it was shown that
R(|W⟩_G_3) = 7
(i.e. the Kronecker product of two copies of |W⟩ has rank 7).
On the other hand, in <cit.> it was shown that R(|W⟩_G_6) ≤ 8 and in <cit.> that R(|W⟩_G_6) ≥ 8, yielding that the tensor product of two copies of |W⟩ has
R(|W⟩_G_6) = 8.
Thus, with <ref> we know
8 = R(|W⟩_G_6) ≥ R(|W⟩_G_5)
≥ R(|W⟩_G_4)
≥ R(|W⟩_G_3) = 7.
In <ref> we prove
R(|W⟩_G_4) = 8.
This solves an open problem in <cit.>, and completes the characterization of the tensor rank of two copies of the W state on different hypergraphs
R(|W⟩_G_6) = R(|W⟩_G_5) = R(|W⟩_G_4) = 8
R(|W⟩_G_3) = 7.
In general, for acyclic hypergraphs one might expect that since their stabilizer group equals that of the separate edge, as we have shown in <ref>, the rank of an entanglement structure on the acyclic hypergraph equals that of the rank one has when one just takes the tensor product. That is, given an entanglement structure |ϕ⟩_G on an acyclic hypergraph G, one may consider the completely disconnected graph H on the same number of edges (and we define |ϕ⟩_H by just placing the same edge states |ϕ_e⟩ on these disconnected edges).
We then have the open question whether for any acyclic hypergraph
R(|ϕ⟩_G) ?= R(|ϕ⟩_H).
§ OUTLOOK AND DISCUSSION
We have demonstrated that there is a rich resource theory of different ways of distributing local multiparty entanglement over hypergraphs, such as lattices, under applying local operations. This is a resource theory of tensor networks, in the sense that it allows us to compare different entanglement structures as a resource for creating many-body tensor network states.
This resource theory establishes a framework which accommodates the previously proposed special instances of tensor networks using multiparty entangled states, extending beyond the traditional use of maximally entangled pair states.
The resource theory effectively generalizes the notion of bond dimension, allowing for a systematic comparison of different entanglement structures.
Our first main result is that this resource theory goes beyond transformations which only act on individual plaquette states and that certain transformations only become possible on a lattice.
By enabling transformations between different entanglement structures, our approach allows for the conversion between different tensor network representations of the same quantum many-body state. This may for example be used to relate representations which are natural from the physics of a many-body Hamiltonian, to representations which are practical and efficient with respect to computational implementations.
We have showcased a number of principles which can be used to construct such transformations, laying the groundwork for systematic development of transformations between entanglement structures.
In the converse direction, we provide techniques to prove obstructions to the existence of transformations of entanglement structures on a lattice.
For these methods we draw on powerful techniques developed in order to understand the computational complexity of matrix multiplication.
We have adapted generalized flattening bounds, the substitution method and asymptotic spectral functionals to lattice problems and use these to prove no-go theorems for transformations between entanglement structures.
These results illustrate that the profound mathematics developed in algebraic complexity theory has interesting applications in the theory of tensor networks.
The resource theory of tensors has been developed in different scientific communities, on the one hand to study the complexity of matrix multiplication and on the other hand to classify multipartite entanglement with respect to SLOCC transformations.
Interestingly, the resource theory of tensor networks operates at the intersection of these two viewpoints.
Indeed, our fundamental object of interest are local entanglement structures on a lattice; at the same time, tensor networks are a computational tool and we show that the resource theory of tensor networks is directly related to the complexity of tensor network contractions.
The resource theory of tensor networks opens up numerous promising avenues for future research.
In the algorithmic direction one could further advance the usage of entanglement structures in variational methods, for instance following up on the proposal <cit.>. It is also clear that the resource theory of tensor networks can be applied to infinite Projected Entangled Pair States (iPEPS) <cit.> where finding good resource states could lead to more efficient algorithms.
Finally, the connection to algebraic complexity theory we have emphasized may lead to the development of improved contraction algorithms; or in the converse direction could be used to import tensor network algorithms to different problems in computer science.
Besides this it would also be interesting to explore the relevance of our resource theory to the theory of condensed matter systems, for instance relating to (symmetry protected) topological phases and using symmetries of entanglement structures to define new canonical forms of tensor networks, for instance using the framework of <cit.>. Finally, we believe that our viewpoint on tensor networks may also be of benefit outside the realm of quantum many-body states and quantum computation, in particular in more data-driven subjects such as machine learning.
§ ACKNOWLEDGEMENT
We acknowledge financial support from the European Research Council (ERC Grant Agreement No. 818761), VILLUM FONDEN via the QMATH Centre of Excellence (Grant No.10059) and the Villum Young Investigator program (Grant No. 25452) and the Novo Nordisk Foundation (grant NNF20OC0059939 ‘Quantum for Life’).
V.L. additionally acknowledges financial support from the European Union (ERC Grant No. 101040907). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
§ PROOF OF THEOREM 17
<ref> states that there is no restriction
[Figure (folded-lambda-boundary): on the left, an entanglement structure on parties A, B, C with edge levels 2, 8, 2; on the right, the λ tensor on A', B', C' together with a level-3 edge.]
(97,14)0.83
where we have labelled the parties on the right-hand side with a prime to indicate that they have different Hilbert spaces.
We will follow a strategy which is a generalization of the substitution method we used for <ref>.
Consider the general situation where we have 3-tensors |ϕ_ABC⟩ and |ψ_A'B'C'⟩ on three parties.
The Hilbert spaces are potentially different; we will assume that dim(H_A) ≥ dim(H_A').
For |x⟩∈_̋A, similar to before we let
|ϕ^(x)⟩ = (⟨x|𝕀_BC) |ϕ_ABC⟩
and similarly for |ψ_ABC^(x)⟩ for |x⟩∈_̋A'.
These are 2-tensors on systems B and C (or B' and C').
As before we define
X_ϕ^(k) = {|x⟩ such that R(|ϕ^(x)⟩) ≤ k }
X_ψ^(k) = {|x⟩ such that R(|ψ^(x)⟩) ≤ k }.
We now assume that |ϕ⟩≥|ψ⟩, so there exists a transformation (M_A M_B M_C)|ϕ⟩ = |ψ⟩. If this is the case, we can relate X_ϕ^(k) and X_ψ^(k) by the following two lemmas.
Suppose that |ϕ⟩≥|ψ⟩.
If for |y⟩∈_̋A' we have M_A^†|y⟩∈ X_ϕ^(k), then |y⟩∈ X_ψ^(k).
If M_A^†|y⟩∈ X_ϕ^(k), and we let |x⟩ = M_A^†|y⟩ then
R(|ψ^(y)⟩) = R((⟨y|M_A ⊗ M_B ⊗ M_C)|ϕ⟩)
= R((M_B ⊗ M_C) |ϕ^(x)⟩)
≤ R(|ϕ^(x)⟩) ≤ k
so |y⟩∈ X_ψ^(k).
Suppose that |ϕ⟩≥|ψ⟩ and suppose that the reduced density matrix of |ψ⟩ has full rank on A'. Let U ⊆ H_A be the subspace given by M_A^† H_A'. Then dim(U) = dim(H_A') and there exists an invertible map N_A : U → H_A' such that if |x⟩∈ X_ϕ^(k)∩ U, then we have N_A |x⟩∈ X_ψ^(k).
The fact that the reduced density matrix of |ψ⟩ has full rank on A' means that M_A must be surjective, and hence M_A^† is injective.
Therefore, restricting it to its image U, it is invertible and we let N_A be its inverse.
If |x⟩∈ X_ϕ^(k)∩ U, then by <ref> we find N_A |x⟩∈ X_ψ^(k).
Now we specialize to the situation in <ref> we are interested in. We assign subsystems as follows:
[Figure (folded-lambda-boundary): the subsystem assignment. The left tensor |ϕ⟩ has parties A_1A_2, B_1B_2, C_1C_2 with edge levels 2, 8, 2; the right tensor |ψ⟩ consists of λ on A', B_1', C_1' together with a level-3 edge on B_2', C_2'; the claim is that |ϕ⟩ ≱ |ψ⟩.]
For our purposes we take k = 8 and write X_ϕ^(8) = X_ϕ and X_ψ^(8) = X_ψ.
We will identify the sets X_ϕ and X_ψ and show that no subspace as in <ref> can exist, from which we conclude that there can be no restriction.
First of all, similar to the way we reasoned in the proof of <ref>
R(|ϕ^(x)⟩) = 8 R(|x⟩)
for any |x⟩∈ H_A_1⊗ H_A_2 = ℂ^2 ⊗ℂ^2.
Thus, if |x⟩∈ X_ϕ it must have R(|x⟩) ≤ 1 and
X_ϕ = {|x_1⟩|x_2⟩, |x_i⟩∈^2 }.
Next, for X_ψ, note that
R(|ψ^(x)⟩) = R((⟨x|⊗𝕀_B'C')|ψ_A'B'C'⟩)
= 3 R((⟨x|⊗𝕀_B_1'C_1')|λ_A'B_1'C_1'⟩),
since |ψ^(x)⟩ is the 2-tensor λ^(x) tensored with a level-3 pair, and matrix rank is multiplicative under tensor products.
Therefore, |x⟩∈ X_ψ if and only if
R((⟨x|⊗𝕀_B_1'C_1')|λ_A'B_1'C_1'⟩) ≤ 2.
Suppose
|x⟩ = x_0 |0⟩ + x_1|1⟩ + x_2|2⟩
then writing λ^(x) = (⟨x|𝕀_B'C_1')|λ_A'B'C_1'⟩ as a 3 × 3 matrix
Λ^(x) = [ 0 x_2 -x_1; -x_2 0 x_0; x_1 -x_0 x_2 ].
We have R(λ^(x)) ≤ 2 if and only if this matrix has determinant zero.
This is the case if and only if x_2 = 0, since det(Λ^(x)) = (x_2)^3.
We conclude that
X_ψ = {|0⟩, |1⟩}.
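The determinant claim is easy to verify symbolically; a minimal check (our choice of tool):

import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')
Lam = sp.Matrix([[0,   x2, -x1],
                 [-x2, 0,   x0],
                 [x1, -x0,  x2]])
print(sp.simplify(Lam.det()))   # x2**3, so the matrix is singular iff x2 = 0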
Now we assume that a restriction exists, and therefore we find from <ref> a subspace U ⊂ H_A = ℂ^2 ⊗ℂ^2 of dimension dim(H_A') = 3 (note that the tensors are concise, so the full-rank condition is satisfied).
Since dim(H_A) - dim(U) = 1, the subspace U must be described by a single linear equation.
We may assume without loss of generality (after a choice of basis on _̋A_2) that U is given by
U = {|x⟩ = x_00|00⟩ + x_01|01⟩ + x_10|10⟩
+ (ax_00 + bx_01 + cx_10)|11⟩, x_ij∈}.
We claim that span(U ∩ X_ϕ) = U. Recall that X_ϕ is the set of product states.
The following vectors are elements of U ∩ X_ϕ:
|x_1⟩ = |1⟩(|0⟩ + c|1⟩)
|x_2⟩ = (|0⟩ + b|1⟩)|1⟩
|x_3⟩ = |0⟩(b|0⟩ - a|1⟩)
|x_4⟩ = (c|0⟩ - a|1⟩)|0⟩.
E.g., for |x_3⟩, we have x_00 = b, x_01 = -a, x_10 = 0, and the coefficient of |11⟩ is given by ax_00 + bx_01 + cx_10 = 0 so |x_3⟩∈ U.
For any choice of a,b,c for which we do not have b = c = 0, these vectors span a three-dimensional space, so span(U ∩ X_ϕ) = U.
In the case where b = c = 0, the following three vectors in U ∩ X_ϕ
|x_1⟩ = |1⟩|0⟩
|x_2⟩ = |0⟩|1⟩
|x_3⟩ = (|0⟩ + |1⟩)(|0⟩ + a|1⟩)
are linearly independent, so span(U ∩ X_ϕ) = U in this case as well.
However, this leads to a contradiction: we saw that span(X_ψ) has dimension 2, but by <ref> we have a linear invertible map N_A mapping U ∩ X_ϕ (and hence span(U ∩ X_ϕ)) into span(X_ψ), which is not possible since dim(span(U ∩ X_ϕ)) = 3.
As a consequence of <ref>, there is no bond dimension D = 2 representation for the λ tensor on a kagome lattice (<ref>). However, we proved this with a folding using open boundary conditions.
We conjecture that the same is true with periodic boundary conditions.
One way to prove this is to use the same folding as in <ref>.
In this case, it would suffice to show
[Figure (folded-lambda): an entanglement structure with edge levels 8, 8, 2 on the left, claimed not to restrict (≱) to the λ tensor with two level-3 edges on the right.]
We have obtained numerical evidence that there is indeed no such restriction, but were unable to give a complete proof, so we leave this as a conjecture.
§ TENSOR RANK OF TWO COPIES OF THE W STATE
The goal of this section is to prove <ref>, which determines the tensor rank of two copies of the W state
|W⟩ = |001⟩ + |010⟩ + |100⟩∈( ℂ^2)^⊗ 3
on the hypergraph G_4.
Our proof strategy is inspired by <cit.>. We consider |W⟩_G_4 with subsystems A,B,C,D
[Figure (w-4): the hypergraph G_4 with parties A, B, C, D; the two 3-edges share the vertices B and C.]
Suppose that |W⟩_G_4 has rank 7 or less, then there must exist a decomposition
|W⟩_G_4 = ∑_i=1^7 |a_i⟩|b_i⟩|c_i⟩|d_i⟩
with |a_i⟩,|d_i⟩∈^2 and |b_i⟩,|c_i⟩∈^2 ^2.
Since (𝕀_ACD⟨11|_B)|W⟩_G_4≠ 0 we may assume without loss of generality that ⟨11|b_7⟩ = 1.
So, we expand
|b_7⟩ = x_00|00⟩ + x_01|01⟩ + x_10|10⟩ + |11⟩
for some x_ij∈.
We will now use a symmetry of a single copy of the W state.
If we define
h(p) : ℂ^2 →ℂ^2
|0⟩ ↦|0⟩, |1⟩↦|1⟩ + p|0⟩
then
(h(p) h(q) h(r))|W⟩ = |W⟩
if p + q + r = 0.
We now apply the map
𝕀_A (h(-x_01) h(-x_10))_B (h(x_01) h(x_10))_C 𝕀_D
to |W⟩_G_4. It leaves the state invariant, but we get a new decomposition
|W⟩_G_4 = ∑_i=1^7 |a'_i⟩|b'_i⟩|c'_i⟩|d'_i⟩
which now has
|b'_7⟩ = x_00'|00⟩ + |11⟩
for some x_00' ∈.
Next, we use the symmetry given by
𝕀_A g(-q)_B g(q)_C 𝕀_D
for g(q) with any q ∈ as in <ref>.
This allows us, using q = x_00', to transform to a decomposition
|W⟩_G_4 = ∑_i=1^7 |a_i”⟩|b_i”⟩|c_i”⟩|d_i”⟩
with
|b_7”⟩ = |11⟩.
Next, we observe that
(𝕀_ACD (⟨0|𝕀)_B) |W⟩_G_4 = |ϕ⟩|W⟩
where |ϕ⟩ = |01⟩ + |10⟩.
[Figure (phi-w): the states |ϕ⟩|W⟩ and |W⟩|ϕ⟩ on the parties A, B, C, D.]
Similarly,
(𝕀_ACD (𝕀⟨0|)_B) |W⟩_G_4 = |W⟩|ϕ⟩.
By applying the same map to our rank 7 decomposition, we see that these tensors have rank at most 6 (since |b_7”⟩ = |11⟩):
|ϕ⟩|W⟩ = ∑_i=1^6 |a_i”⟩ (⟨0|𝕀) |b_i”⟩|c_i”⟩|d_i”⟩
|W⟩|ϕ⟩ = ∑_i=1^6 |a_i”⟩ (𝕀⟨0|) |b_i”⟩|c_i”⟩|d_i”⟩.
Now, grouping D together with B turns |W⟩|ϕ⟩ into a 3-tensor
[Figure (fold-w-phi): |W⟩|ϕ⟩ viewed as a 3-tensor on A, BD, C.]
of rank 6 (this can be shown using the substitution method).
In particular, the vectors |α_i⟩ = |a_i”⟩|c_i”⟩ for i=1,…,6 are linearly independent.
We claim that the vectors |β_i⟩ = (⟨0|𝕀)|b_i”⟩|d_i”⟩ for i=1,…,6 must span at least a dimension 3 subspace.
To see this, from the decomposition of |ϕ⟩|W⟩ and letting
|α̃_i⟩ = (⟨ϕ|𝕀)|α_i⟩
on C (since (⟨ϕ|𝕀)(|ϕ⟩|W⟩) = |W⟩) we obtain a decomposition
|W⟩ = ∑_i=1^6 |α̃_i⟩|β_i⟩
where the |β_i⟩ are product states on BD.
This implies that if there were at most two linearly independent elements amongst the |β_i⟩, |W⟩ would have tensor rank at most two, but we know that R(|W⟩) = 3.
Now we consider |ϕ⟩|W⟩ as a bipartite tensor between AC and BD. It is clear that it has rank 2:
[Figure (rank-phi-w): |ϕ⟩|W⟩ viewed as a bipartite tensor between AC and BD, which clearly has rank 2.]
However, we can write
|ϕ⟩|W⟩= ∑_i = 1^6 |α_i⟩_AC|β_i⟩_BD.
The existence of three independent vectors amongst the |β_i⟩, and the linear independence of the |α_i⟩, implies that this has rank at least 3 as a 2-tensor, which is a contradiction with the fact that it has rank 2.
§ HARDNESS OF TENSOR NETWORK CONTRACTION
We will now discuss a hardness result for the complexity of tensor network contraction in the arithmetic circuit model.
Recall that in this model of computation, the goal is to compute some polynomial through an arithmetic circuit.
We will first define the complexity classes VP and VNP.
A sequence of functions f_n, n = 1,2,… is in VP if f_n are polynomials in a number of variables and with a degree that is polynomial in n, and moreover there exists a family of arithmetic circuits of polynomial size in n computing f_n.
A representative example is the n × n determinant polynomial.
Next we define the class of VNP, which is the analog of NP.
A sequence of functions f_n, n = 1,2,… is in VNP if f_n has a number of variables v(n) and degree which are polynomial in n, and moreover there exists a number m(n) and a sequence of polynomials g_n with v(n) + m(n) variables such that g_n is in VP and f_n(x_1,…,x_v(n)) can be computed as
∑_y ∈{0,1}^m(n) g_n(x_1,…,x_v(n),y_1,…,y_m(n)).
The paradigmatic example is the n × n permanent polynomial. It is conjectured that VP ≠ VNP.
There are connections to the problem of P versus NP; for instance it is known that VP = VNP over a field of characteristic zero, together with the generalized Riemann hypothesis, would imply P/poly = NP/poly, see Corollary 4.6 in <cit.>.
We now formally state our result on the hardness of tensor network contraction, showing that tensor network contraction is VNP-complete.
Let f_|ϕ⟩_G be defined as in <ref> for an arbitrary hypergraph G and entanglement structure |ϕ⟩_G. We denote by
f_n = f_|ϕ⟩_G_n
the family of polynomials where we take G_n to be the n × n square lattice graph (so each edge is a 2-edge), and the entanglement structure |ϕ⟩_G_n is given by placing level-2 pairs on each edge.
* The problem of computing tensor network contraction coefficients, given by the polynomial f_|ϕ⟩_G on hypergraphs with n edges and constant degree, and local Hilbert spaces of polynomial size in n with an arbitrary entanglement structure |ϕ⟩_G, is in VNP.
* The problem of computing tensor network contraction coefficients with bond dimension D = 2 on a square lattice, given by the polynomial f_n, is VNP-hard.
Note that in <ref>, the argument of f_|ϕ⟩_G_n consists of (T_v)_v ∈ V, and each T_v consists of (_̋v) variables, so if all local Hilbert spaces have polynomial dimension and the vertices have constant degree, then f_|ϕ⟩_G_n has as polynomial number of variables and polynomial degree (it is linear in each of the variables).
Given a hypergraph G with n edges, and plaquette states |ϕ_e⟩, for edges e ∈ E on Hilbert spaces of polynomial dimension, we can find a restriction |ψ⟩_G ≥|ϕ⟩_G with plaquette states |ψ_e⟩ on edge Hilbert spaces of polynomial dimension and which consist of a collection of level-2 pairs.
By the argument in <ref> this implies that it suffices to prove the result for graphs G with only 2-edges, with edge states |ϕ_e⟩ which are level-2 pairs.
For each edge e ∈ E we may define
|ϕ_e(y_e)⟩ = ((1 - y_e)|0⟩ + y_e|1⟩)^⊗ 2
where y_e is a variable on the edge e.
We let |ϕ(y)⟩_G_n be the entanglement structure where we place these states on G_n, which is a state depending on the variables y = (y_e)_e ∈ E.
Then we define the following polynomial:
g_|ϕ⟩_G((T_v)_v ∈ V, (y_e)_e ∈ E) = (⊗_v ∈ V T_v) |ϕ(y)⟩_G_n.
The g_|ϕ⟩_G are in VP, since they can be computed as
∏_v ∈ V( T_v ⊗_e, v ∈ e((1 - y_e)|0⟩ + y_e|1⟩) )
i.e. since the edge states are product states, the computation factorizes over the vertices.
Moreover, the tensor network contraction f_|ϕ⟩_G((T_v)_v ∈ V) is computed by
∑_(y_e ∈{0,1})_e ∈ E g_|ϕ⟩_G((T_v)_v ∈ V, (y_e)_e ∈ E)
so we conclude f_|ϕ⟩_G is in VNP, proving <ref>.
We proceed to prove <ref>.
We will do so by reducing the computation of a partition function of matchings on a square lattice graph to the tensor network contraction f_n.
We will then use that computing the partition function of weighted (non-perfect) matchings on a square lattice graph is VNP-hard.
That is, one considers the graph G_n. A matching is a subset of edges M ⊆ E such that the edges in M have no common vertices.
We let
x = {x(e)}_e ∈ E
be set of variables assigned to the edges e ∈ E. We then define a partition function as
Z_n(x) = ∑_M∏_e ∈ M x(e)
where the sum is over all matchings M of G_n.
By Theorem 3.7 in <cit.>, based on <cit.>, computing this polynomial is VNP-hard.
We now argue that it can be computed as a tensor network contraction.
To this end, we need to assign the edge variables to vertices.
We will do so by assigning them to the vertex to the right or to the vertex below.
We can then enforce the matching constraint locally at the vertices.
Consider a vertex v which has edges e_1, e_2, e_3, e_4 (starting from the left edge in clockwise order).
We then define the tensor T_v as follows:
T_v(x_e_1,x_e_2) = x_e_1⟨1|_e_1⟨0|_e_2⟨0|_e_3⟨0|_e_4
+ x_e_2⟨0|_e_1⟨1|_e_2⟨0|_e_3⟨0|_e_4
+ ⟨0|_e_1⟨0|_e_2⟨1|_e_3⟨0|_e_4
+ ⟨0|_e_1⟨0|_e_2⟨0|_e_3⟨1|_e_4
+ ⟨0|_e_1⟨0|_e_2⟨0|_e_3⟨0|_e_4
or in diagrammatic notation
[Figure (vnp-reduction): diagrammatic form of T_v as a sum of five terms; the first four have a single incident edge labelled 1 (with weights x_e_1 and x_e_2 for the two edges assigned to v, and weight 1 otherwise), and the last has all edges labelled 0.]
We find that in the resulting tensor network contraction, we sum over all assignments of 0,1 to the edges where there is at most one 1 neighboring each vertex, and we assign variable x_e to each edge which is labelled by 1:
[Figure (matchings): the contraction equals a sum over matchings of the lattice, with each matched edge e contributing its variable x_e.]
The resulting polynomial over the variables x_e is precisely Z_n(x).
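To make the reduction concrete, the sketch below evaluates the tensor-network contraction on a small 2 × 2 grid and checks it against the matching partition function computed by brute force; the treatment of boundary vertices (which simply have fewer incident edges) and all function names are our own illustrative choices.

import itertools
import numpy as np

def grid_edges(n):
    # Edges of an n x n grid graph; each edge is assigned to its right/lower endpoint.
    V = [(i, j) for i in range(n) for j in range(n)]
    E, owner = [], {}
    for (i, j) in V:
        if j + 1 < n:                      # horizontal edge, owned by the vertex to the right
            E.append(((i, j), (i, j + 1))); owner[E[-1]] = (i, j + 1)
        if i + 1 < n:                      # vertical edge, owned by the vertex below
            E.append(((i, j), (i + 1, j))); owner[E[-1]] = (i + 1, j)
    return V, E, owner

def vertex_weight(v, assignment, E, owner, x):
    # Entry of T_v for a given 0/1 labelling of its incident edges.
    incident = [e for e in E if v in e]
    ones = [e for e in incident if assignment[e] == 1]
    if len(ones) > 1:
        return 0.0                          # more than one matched edge at v is forbidden
    if len(ones) == 1:
        e = ones[0]
        return x[e] if owner[e] == v else 1.0
    return 1.0

def contract(n, x):
    # Tensor-network contraction: sum over edge labels of the product of vertex entries.
    V, E, owner = grid_edges(n)
    total = 0.0
    for bits in itertools.product([0, 1], repeat=len(E)):
        assignment = dict(zip(E, bits))
        total += np.prod([vertex_weight(v, assignment, E, owner, x) for v in V])
    return total

def matchings_partition_function(n, x):
    # Direct sum over matchings M of prod_{e in M} x_e.
    V, E, _ = grid_edges(n)
    total = 0.0
    for r in range(len(E) + 1):
        for M in itertools.combinations(E, r):
            used = [v for e in M for v in e]
            if len(used) == len(set(used)):   # no shared vertices: a matching
                total += np.prod([x[e] for e in M]) if M else 1.0
    return total

rng = np.random.default_rng(1)
_, E, _ = grid_edges(2)
x = {e: rng.uniform(0.5, 2.0) for e in E}
print(contract(2, x), matchings_partition_function(2, x))   # identical values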
http://arxiv.org/abs/2307.06263v1 | 20230712160334 | On the hierarchical Bayesian modelling of frequency response functions | ["T. A. Dardeno", "R. S. Mills", "N. Dervilis", "K. Worden", "L. A. Bull"] | cs.LG | ["cs.LG", "cs.SY", "eess.SY"] |
T.A. Dardeno (corresponding author, [email protected]), R.S. Mills, N. Dervilis, K. Worden: Dynamics Research Group, Department of Mechanical Engineering, University of Sheffield, Sheffield S1 3JD, UK
L.A. Bull: Department of Engineering, University of Cambridge, CB3 0FA, UK
Population-based structural health monitoring (PBSHM) aims to share valuable information among members of a population, such as normal- and damage-condition data, to improve inferences regarding the health states of the members. Even when the population is comprised of nominally-identical structures, benign variations among the members will exist as a result of slight differences in material properties, geometry, boundary conditions, or environmental effects (e.g., temperature changes). These discrepancies can affect modal properties and present as changes in the characteristics of the resonance peaks of the frequency response function (FRF). Many SHM strategies depend on monitoring the dynamic properties of structures, so benign variations can be challenging for the practical implementation of these systems. Another common challenge with vibration-based SHM is data loss, which may result from transmission issues, sensor failure, a sample-rate mismatch between sensors, and other causes. Missing data in the time domain will result in decreased resolution in the frequency domain, which can impair dynamic characterisation. The hierarchical Bayesian approach provides a useful modelling structure for PBSHM, because statistical distributions at the population and individual (or domain) level are learnt simultaneously to bolster statistical strength among the parameters. As a result, variance is reduced among the parameter estimates, particularly when data are limited. In this paper, combined probabilistic FRF models are developed for a small population of nominally-identical helicopter blades under varying temperature conditions, using a hierarchical Bayesian structure. These models address critical challenges in SHM, by accommodating benign variations that present as differences in the underlying dynamics, while also considering (and utilising), the similarities among the blades.
Population-based structural health monitoring (PBSHM); Hierarchical Bayes; Multilevel models; Uncertainty quantification; Repeatability
§ INTRODUCTION
The current work is focussed on developing population-based structural health monitoring (PBSHM) techniques, for a group of nominally-identical structures (i.e., a homogeneous population <cit.>), with consideration for the effects of temperature changes, to improve understanding of the variability in the health states of the population and the individual members. (In contrast, a heterogeneous population <cit.> will have greater discrepancies among the members, such as different suspension bridge designs, and may require further processing such as domain adaptation <cit.>). Even among nominally-identical structures, variations caused by manufacturing differences, ageing parts, and changes in testing conditions can introduce uncertainty in the underlying dynamics. Cawley et al. <cit.> performed vibration testing on filament-wound carbon fibre-reinforced plastic (CFRP) tubes. Of the 18 tested, six tubes were considered `normal' (i.e., having the same microstructure, with a ± 45^∘ fibre winding angle) <cit.>. They found that the first and second natural frequencies of the `normal' tubes varied by as much as 4% <cit.>. When the remaining 12 tubes were also considered (which had intentional defects in their microstructure, such as slightly misaligned fibres, changes in volume fraction, etc.), the natural frequencies varied by as much as 18% <cit.>. Changes in bolt tightness can similarly cause variations in natural frequency. Zhu et al. <cit.> performed modal testing on an aluminium three-bay space-frame structure with bolted joints, before and after manually loosening several bolts to hand tight. They found that natural frequencies changed by as much as 8%, depending on the mode considered <cit.>.
Global variations, such as changes in ambient temperature, can also affect modal properties. For example, increased temperature may reduce stiffness (depending on the specific material properties of the structure, and the duration/temperature of exposure), which may decrease natural frequency. Colakoglu <cit.> tested a polyethylene fibre composite beam from ^∘C to 60^∘C, and found that the first natural frequency decreased by 12% over the measured temperature range. In addition, for the same polyethylene beam, Colakoglu <cit.> found that modal damping increased with rising temperature. They concluded that the relationships between temperature and natural frequency, and between temperature and damping, appeared to be functional and monotonic for the polyethylene beam, over the temperature range considered <cit.>. Similar effects of temperature on stiffness/natural frequency and damping have been noted for a magnesium alloy <cit.> and a composite honeycomb structure <cit.>. Accounting for these benign fluctuations is important for the practical implementation and generalisation of SHM technologies, as features commonly used for damage identification may be sensitive to harmless changes as well as damage <cit.>. (Indeed, it is well known that structural damage can reduce stiffness, often manifesting as a reduction in natural frequency. Benign variations can then either mimic or mask damage, depending on whether they exhibit a stiffening or softening effect <cit.>. Similar effects have been noted with damping, and multiple studies have shown that damping tends to increase with damage, particularly with crack growth <cit.>.)
Data scarcity and sparsity present additional challenges for SHM systems that rely on machine learning. Data scarcity refers to incomplete information regarding the damage- or normal-condition states of structures, particularly those newly in operation, and can impair model training and development. Likewise, sensing networks are prone to data loss (causing sparse data), because of sensor failure caused by harsh environmental conditions or insufficient maintenance. Transmission issues make wireless-sensing networks particularly susceptible to loss, and can be caused by large transmission distances between the sensors and base station <cit.>, software/hardware problems <cit.>, and other issues such as weather changes, interference from nearby devices, or installation difficulties <cit.>. Furthermore, modern systems that produce large amounts of high-resolution data can suffer losses resulting from a data transfer bottleneck <cit.>. Significant losses higher than 30% have been reported <cit.>, and a 0.38% data loss was found to have similar effects on power spectral density (PSD) as 5% additive noise <cit.>. Differences in sample rate among sensors could have a similar presentation, with the data captured at a lower rate seemingly missing data relative to that captured at a higher rate. Again, PBSHM addresses these scarcity/sparsity issues via knowledge transfer among similar structures, so that data-rich members can support those with more limited information.
Homogenous populations can be represented using a general model, called a population form. The form attempts to capture the `essential nature' of the population and benign variations among the members <cit.>. The form was first introduced in <cit.>, where a conventional or single Gaussian process (GP) was applied to frequency response functions (FRFs), to develop a representation for a nominally-identical population of eight degree-of-freedom (DOF) systems. Then, to accommodate greater differences among the nominally-identical members, an overlapping mixture of Gaussian processes (OMGP) <cit.>, was used in <cit.>, to infer multivalued wind-turbine power-curve data, with unsupervised categorisation of the data. The OMGP approach <cit.> was again used in <cit.> to develop a population form for real and imaginary FRFs, obtained from four nominally-identical, full-scale, composite helicopter blades. Recently, hierarchical modelling was used to improve the predictive capability of simulated <cit.> and in-service <cit.> truck-fleet hazard models, and wind-turbine power curves <cit.>. Specifically, the work presented in <cit.> used partial pooling, where data groups are treated as belonging to different domains, and each domain can be considered a realisation from global or population-level distributions. (This approach is in contrast to complete pooling, where all data are considered as belonging to a single domain, or no pooling, where each domain is treated as fully independent of the other domains). It was shown that when populations of structures were allowed to share correlated information, model uncertainty was reduced <cit.>. In addition, domains with incomplete data were able to borrow statistical strength from data-rich groups <cit.>. In <cit.>, multilevel modelling with partial pooling was used to learn a 2D map of arrival times for waves propagating through a complex plate geometry, via GP regression, using a series of acoustic-emission experiments performed on the same plate but with differing experimental designs. Domain expertise regarding the positive gradient for the expected intertask functions was encoded into the model, via a linear mean function and appropriate priors <cit.>. GP kernel hyperparameters were learnt at the domain level, to allow variation in the response surface among the different tests, and were correlated via a higher-level sampling distribution <cit.>.
§.§ Research aims of the current work
The current work addresses critical SHM challenges, including uncertainties in the underlying dynamics of structures, and data-scarcity/sparsity issues, which can impair generalisation of SHM technologies. In line with the population forms developed in <cit.>, this paper presents generalised, probabilistic FRF models that were developed for the helicopter blades in <cit.>, using a hierarchical Bayesian structure. Two case studies are presented. The first case used FRFs collected from all four blades, at ambient laboratory temperature, with variations among the blades resulting from manufacturing differences (e.g., small discrepancies in material properties and geometry) and boundary conditions. Limited training data that did not fully characterise the resonance peaks were taken from two of the FRFs, while sufficient training data were taken from the remaining two FRFs, so that information could be shared with the data-poor domains via shared distributions over the parameters. This situation is representative of incomplete data in the time domain, which would reduce the number of spectral lines in the frequency domain. Independent models were generated for comparison, to visualise the variance reduction from the combined model. The second case considered vibration data from one of the helicopter blades collected at various temperatures in an environmental chamber. A probabilistic FRF model was again developed using a hierarchical approach with partial pooling. Functional relationships between temperature and natural frequency, and between temperature and damping, were approximated via Taylor series expansion, and polynomial coefficients were learnt at the population level. A subset of temperature-varied FRFs were used to train the model. Population-level modal parameters and polynomial coefficients were then used to generalise to temperature-varied FRFs not used to train the model, and model accuracy was evaluated by comparing the results to FRFs computed via measured vibration data.
§.§ Paper layout
The layout of this paper is as follows. Section <ref> summarises existing research related to population-level monitoring of systems. Section <ref> outlines the novelty and contribution of the current work. Section <ref> presents the theoretical basis for this research, including modal analysis and hierarchical Bayesian modelling techniques. Section <ref> briefly describes the datasets used to develop the models. Sections <ref> and <ref> discuss the models developed and analysis results for the first and second case studies, respectively. Conclusions are presented in Section <ref>, and acknowledgements are presented in Section <ref>.
§ RELATED WORK
Most literature related to population-level monitoring of engineering structures is focussed on transfer learning, which aims to improve predictions in a target domain given a source domain with more complete information. Some cases have involved fine-tuning the classifiers and weights of a pre-trained convolutional neural network (CNN) according to a new dataset. For example, CNNs have been used to detect cracks, as in <cit.>, and other defects (e.g., corrosion, staining), as in <cit.>. Other cases have used domain adaptation (DA), a subcategory of transfer learning, where the source and target domains have the same feature space, and the target domain is mapped onto a shared space. Predictions are then made from a single model. For example, domain-adversarial neural networks have been implemented for condition monitoring of a fleet of power plants <cit.>, and for fault detection in a gearbox and three-story structure <cit.>. A kernelised linear projection approach to DA has also been studied, between simulated source and target structures <cit.>, and between a simulated source structure and experimental target structure <cit.>. DA has also been used to transfer damage detectors across systems using vibration data, in a group of tail planes <cit.>. Lastly, statistic alignment has been shown to improve the performance of established DA methods, by making them more robust to class imbalance and data-sparsity issues <cit.>.
Population-level modelling can also be viewed in the context of multitask learning (MTL), where multiple tasks are solved simultaneously, while considering similarities (and differences) across tasks. In MTL, a combined inference allows sharing of correlated information among domain-specific models, to improve accuracy in data-poor domains <cit.>. In the context of modelling engineering infrastructure, MTL has been used as part of a multioutput Gaussian process (GP) regression to learn correlations between data obtained at adjacent sensors from the Canton Tower, to reconstruct missing information <cit.>. (Note that using MTL in this context differs from the current work. In <cit.>, missing time-domain data from temperature and acceleration sensors were reconstructed using MTL, and all data were sourced from a single structure. The current work considers frequency-domain data from multiple structures, as in the first case, and incorporates the relationships between temperature and domain-specific tasks into the model structure, as in the second case.) A similar approach was used in <cit.>, where GP regression assisted with missing data recovery from a faulty sensor, by capturing the correlation among the remaining sensors on a hydroelectric dam. Likewise, in <cit.>, GPs were used to transfer information between spatial temperature or pressure profiles at axial stations within an aeroengine.
Hierarchical Bayesian modelling is an MTL framework, where (lower-level) domain-specific tasks are correlated by conditioning on (higher or population-level), shared variables. The technique was first introduced in an SHM context in <cit.>, where a multilevel hierarchical Bayesian model was developed to identify damage on a single structure, by inferring stiffness loss from changes in modal parameters, given noisy, incomplete modal data. (Note that this approach differs from the current work. In <cit.>, modal parameters were learnt for different damage scenarios using a series of coupled linear regressions to approximate the eigenvalue problem. In <cit.>, the similarities between acceleration responses at adjacent sensors were exploited for data-loss recovery, by modelling the measured signal as a linear combination of basis functions. Both cases considered data from a single structure. The work presented in the current paper, in the first case study, models the entire (band-limited) FRF from multiple structures, using a likelihood function based on modal parameters. The second case presented in the current work models the entire FRF using measurements at various temperatures from a single structure, and incorporates functional relationships between temperature and the modal parameters, enabling predictions at `unseen' temperature conditions.) Hierarchical Bayesian models were again used in <cit.>, to estimate corrosion rates via data from multiple sensors on the same test structure, to support decision-making in the absence of complete data. In <cit.>, hierarchical Bayesian models were developed using strain measurements from multiple tensile tests to inform the material properties of the samples, in a manner that considered the inherent variability in material properties among the samples and uncertainties related to experimental repeatability. Likewise, in <cit.>, hierarchical Gaussian mixture models were used for combined inference in a simulated population of structures, for the purpose of damage identification.
§ NOVELTY AND CONTRIBUTION
Unlike previous efforts, the current work is concerned with developing a probabilistic FRF model, using an FRF equation based on modal parameters (natural frequency, mode shape, and damping) as the likelihood function. As with <cit.>, a hierarchical Bayesian (or MTL) framework was used. However, the focus is data sharing in the presence of low-resolution frequency information (i.e., given sparse data). The inability to localise both time and frequency to a fine resolution, per the uncertainty principle, means that missing time-domain data (e.g., acceleration, velocity) measured from SHM sensors will result in fewer spectral lines in the frequency domain. This decreased frequency resolution could impair proper characterisation/identification of modal peaks, which may be features of interest in a damage-detection application. By using a combined-inference approach, structures whose FRFs have many spectral lines in a band of interest can lend statistical support to those whose FRFs are limited by missing data. In addition, the current work incorporated functional relationships to describe changes in environmental conditions. This inclusion of functional relationships allows for extrapolation to temperature states not used in model training, which increases the amount of normal-condition information available, to address data-scarcity challenges. In contrast to <cit.>, parameters were learnt over the experimental campaign, rather than hyperparameters, giving greater physical interpretability of the results.
§ BACKGROUND THEORY
In this work, combined probabilistic FRF models were developed using a hierarchical Bayesian modelling approach, to support PBSHM research. This section provides an overview of the methodologies used, including hierarchical multilevel modelling from a Bayesian perspective, linear modal analysis, FRF estimation, and model evaluation.
§.§ Hierarchical Bayesian modelling
Hierarchical models can be used to make combined inferences, whereby domains are treated as separate; but, at the same time, it is assumed that each domain is a realisation from a common latent model. This modelling structure involves partial pooling, and is beneficial in that population-level distributions are informed by the full dataset, comprised of multiple domains. In partial pooling, certain parameters are permitted to vary between domains (i.e., varying parameters); which are correlated by conditioning on parent variables at the population level. In the current work, the natural frequency would be a varying parameter. Other parameters can be considered shared among members of a population (e.g., additive noise) and are learnt at the population level (these shared variables can still be sampled from parent distributions, which are also learnt at the population level). In contrast, a complete-pooling approach would consider all population data as having originated from a single source, while a no-pooling approach would involve fitting a single domain independently from the other domains.
Hierarchical models with partial pooling are particularly useful for PBSHM. Because parameters are allowed to vary at the domain level (as opposed to complete pooling), this approach can represent benign variations within a population. In addition, population-level variables are informed by the full dataset, rather than data from a single domain. This increase in statistical power is especially important in situations where one or more domains have limited data <cit.>. In such cases, parameters from the data-poor domains exhibit shrinkage towards the population mean (therefore borrowing information from the other domains), which tightens the parameter variance <cit.>. The differences among the data-pooling techniques are shown graphically in , using a simple linear regression example.
[Figure: Comparison of data-pooling approaches, using a simple linear regression example.]
From a general perspective, data from a population comprised of K groups can be denoted via,
{𝐱_k,𝐲_k}_k=1^K =
{{x_ik,y_ik}_i=1^N_k}_k=1^K
where 𝐲_k is the target response vector for inputs 𝐱_k, and {x_i,k,x_i,k} are the ith pair of observations in group k <cit.>. Each group is comprised of N_k observations, giving a total of ∑_k=1^KN_k observations <cit.>. The objective is to learn a set of K predictors (one for each domain), related to the regression task, where the tasks satisfy,
{y_ik = f_k(x_ik) + ϵ_ik}_k=1^K
In other words, for each observation i, the output is determined by evaluating one of K latent functions, f_k(x_i,k), plus additive noise, ϵ_i,k <cit.>.
While each of the k groups can be learned independently, a combined inference can be used to take advantage of the full ∑_k=1^KN_k population dataset. For example, consider a population that can be expressed using K linear regression models, as in Figure <ref>,
{𝐲_k = α_k + β_k𝐱_k + ϵ_k}_k=1^K
where α_k and β_k are the intercept and slope for domain k, respectively; 𝐱_k is a vector of inputs for the kth domain, with length N_k; and ϵ_k is the noise vector for the kth domain, with length N_k, and is assumed to be normally-distributed. Then, the likelihood of the target response vector is given as,
𝐲_k | 𝐱_k∼𝒩(α_k + β_k𝐱_k,σ_k^2)
A shared hierarchy of prior distributions can be placed over the slopes and intercepts for the groups k ∈{1, ... , K}, in line with a Bayesian framework. To allow information to flow between groups, parent nodes {μ_α,σ^2_α} and {μ_β,σ^2_β} can be learned at the population level. Note that sometimes, it may be appropriate to learn certain parameters at the population level rather than the domain level. If, for example, the same hardware was used for data collection within the population, one could assume a global noise variance σ^2 that is the same for each domain. Consider the directed graphical models (DGMs) shown in Figures <ref> and <ref>. Figure <ref> shows the DGM for the linear regression example, given an independent (no-pooling) approach. Each model is learnt independently and no information is shared. In contrast, Figure <ref> shows the DGM given a partial pooling approach. The slopes and intercepts are indexed by k, and plate notation is used to show that these nodes are repeated. Shared parent nodes {μ_α,σ^2_α} and {μ_β,σ^2_β} are outside of the plates and not indexed by k, meaning that these are population-level variables, and information is permitted to flow between these and the domain-specific parameters. The noise variance σ^2 is not indexed by k, and is also learnt globally, at the population level.
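As a concrete, minimal sketch of such a partially-pooled regression, the following uses PyMC (our choice of tool, not one used in the paper); the synthetic data, prior scales and sampler settings are illustrative assumptions only.

import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
K, N = 3, 30                                   # K domains, N observations each
alpha_true = rng.normal(1.0, 0.3, size=K)
beta_true = rng.normal(2.0, 0.3, size=K)
x = rng.uniform(0, 1, size=(K, N))
y = alpha_true[:, None] + beta_true[:, None] * x + rng.normal(0, 0.1, size=(K, N))

group = np.repeat(np.arange(K), N)             # domain index for each observation
x_flat, y_flat = x.reshape(-1), y.reshape(-1)

with pm.Model() as model:
    # Population-level (parent) distributions over slopes and intercepts.
    mu_a = pm.Normal("mu_a", 0.0, 5.0)
    sigma_a = pm.HalfNormal("sigma_a", 2.0)
    mu_b = pm.Normal("mu_b", 0.0, 5.0)
    sigma_b = pm.HalfNormal("sigma_b", 2.0)

    # Domain-level varying parameters, partially pooled via the parents.
    alpha = pm.Normal("alpha", mu=mu_a, sigma=sigma_a, shape=K)
    beta = pm.Normal("beta", mu=mu_b, sigma=sigma_b, shape=K)

    # Noise variance shared across the population.
    sigma = pm.HalfNormal("sigma", 1.0)

    mu = alpha[group] + beta[group] * x_flat
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y_flat)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)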
The current work considers a hierarchical probabilistic FRF model, that uses an accelerance FRF estimate (based on modal parameters), as the mean of the likelihood function. A brief introduction to modal analysis and FRF estimation is provided.
§.§ Modal analysis and FRF estimation
The equation of motion for a multiple DOF system can be written as,
𝐌𝐮̈(t) + 𝐂𝐮̇(t) + 𝐊𝐮(t) = 𝐳(t)
where 𝐮̈(t), 𝐮̇(t), 𝐮(t), and 𝐳(t) are acceleration, velocity, displacement, and force, respectively. In most cases, the mass 𝐌, damping 𝐂, and stiffness 𝐊 matrices are coupled. For linear systems, and in the absence of viscous damping, the equation of motion can be decoupled, such that the system is represented by multiple single degree-of-freedom (SDOF) oscillators. This decoupling is performed via the eigenvalue expression,
[𝐊-ω_n^2𝐌]Ψ = 0
and yields the natural frequencies in radians, ω_n, and mode shapes, Ψ, of the system.
The physical equation of motion can then be cast in modal space to give the uncoupled modal equations, with modal coordinates 𝐩(t), written as,
Ψ^T𝐌Ψ𝐩̈(t)+ Ψ^T𝐂Ψ𝐩̇(t) + Ψ^T𝐊Ψ𝐩(t) = Ψ^T𝐳(t)
where,
𝐮(t) = Ψ𝐩(t)
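For a small lumped-parameter example, the undamped eigenvalue problem above can be solved with a generalized symmetric eigensolver; the 3-DOF mass and stiffness values in this sketch are arbitrary illustrative choices.

import numpy as np
from scipy.linalg import eigh

# Arbitrary 3-DOF chain: masses and spring stiffnesses (illustrative values only).
M = np.diag([1.0, 1.5, 2.0])
k = [1e4, 2e4, 1.5e4]
K = np.array([[k[0] + k[1], -k[1],        0.0],
              [-k[1],        k[1] + k[2], -k[2]],
              [0.0,         -k[2],         k[2]]])

# Generalized eigenproblem K psi = omega_n^2 M psi.
eigvals, Psi = eigh(K, M)                       # columns of Psi satisfy Psi^T M Psi = I
omega_n = np.sqrt(eigvals)                      # natural frequencies in rad/s
print(omega_n / (2 * np.pi))                    # natural frequencies in Hz
print(np.allclose(Psi.T @ M @ Psi, np.eye(3)))  # True: mass-normalised mode shapes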
FRFs aid visualisation of the natural frequency components of a system, and are computed by normalising the response signal at a given location to the excitation force. This work used the H_1 estimator to compute FRFs. For a response at location h resulting from excitation at j, the H_1 estimator is computed as,
𝐇_hj(ω) = 𝐆_zu(ω)/𝐆_zz(ω)
where,
𝐆_zu(ω) Δ=𝔼 [𝐒_u_h(ω) 𝐒_z_j^*(ω)]
𝐆_zz(ω) Δ=𝔼 [𝐒_z_j(ω) 𝐒_z_j^*(ω)]
and,
𝐒_z_j(ω) Δ=ℱ[𝐳_j(t)]
𝐒_u_h(ω) Δ=ℱ[𝐮_h(t)]
The asterisk * denotes complex conjugation, ω is frequency in radians, and ℱ is a discrete Fourier transform (this work used a fast Fourier transform or FFT). The input force in the time domain at location j is 𝐳_j(t), and the output response (i.e., acceleration, velocity, or displacement) in the time domain at h is 𝐮_h(t).
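A sketch of the H_1 estimate from measured time histories using Welch-averaged spectra; the scipy routines, window settings and the synthetic force/response signals are our illustrative choices, not the LMS processing used in the blade tests.

import numpy as np
from scipy import signal

def h1_frf(z, u, fs, nperseg=4096):
    # H1 estimate: cross-spectrum of force and response over the force auto-spectrum.
    f, G_zu = signal.csd(z, u, fs=fs, window='hann', nperseg=nperseg)
    f, G_zz = signal.welch(z, fs=fs, window='hann', nperseg=nperseg)
    return f, G_zu / G_zz

# Example usage with synthetic data (an assumed decaying-sinusoid impulse response).
fs = 1024.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
z = rng.standard_normal(t.size)                 # broadband random excitation
irf = np.exp(-5 * t) * np.sin(2 * np.pi * 40 * t)
u = np.convolve(z, irf, mode='full')[:t.size] / fs
f, H1 = h1_frf(z, u, fs)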
With assumed linear behaviour and proportional damping, the (complex) accelerance FRF (i.e., given acceleration response data) can also be estimated using modal parameters,
𝐇_hj(ω) = -ω^2 ∑_m=1^MA_hj^(m)/ω_nat,m^2-ω^2+2iζ_mωω_nat,m
where A_hj^(m) is the residue for mode m, defined as the product of the mass-normalised mode shapes at locations h and j (A_hj^(m) = ψ_hmψ_jm) <cit.>. The natural frequency associated with mode m is ω_nat,m, and the modal damping associated with mode m is ζ_m <cit.>. The real and imaginary parts of the FRF can be computed independently, by multiplying Eq. (<ref>) by its complex conjugate. This work considers FRFs from multiple domains (different structures in Case 1, and a single structure in Case 2 with data obtained at varying temperatures), with each domain indexed by k. In each case, only one FRF is assigned to each domain, from a given measurement location (so, subscripts h and j can be neglected). Thus, the real and imaginary components of the FRF from the kth domain, given a vector of frequency inputs ω_k, can be estimated via,
real[𝐇_k(ω_k)] = -ω^2_k∑_m=1^MA_m(ω_nat^2^(k,m) - ω^2_k)/ω_nat^4^(k,m)+ω^4_k + 2ω^2_kω_nat^2^(k,m)(2ζ_(k,m)^2-1)
imag[𝐇_k(ω_k)] = ω^2_k∑_m=1^M2A_mζ_(k,m)ω_kω_nat^(k,m)/ω_nat^4^(k,m)+ω^4_k + 2ω^2_kω_nat^2^(k,m)(2ζ_(k,m)^2-1)
Note that residues A_m are not indexed by k in Eq. (<ref>). For the models presented in the case studies below, mode shapes were shared among the population, to address identifiability concerns during sampling, and because mode shapes are often less sensitive to global variations compared to other modal parameters.
The current work has focussed on the real part of the FRF, and Eq. (<ref>) provided the mean of the likelihood function for the models developed. The residues, modal damping, and natural frequencies for each domain were then learnt as hyperparameters for the hierarchical model. Learning the real part of the FRF was sufficient for demonstrating the proposed technology. However, the imaginary part could be learnt using the same methods, with Eq. (<ref>) providing the likelihood function, or could be inferred (at least in part), by exploiting the causal relationship between the real and imaginary components of the FRF <cit.>.
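For reference, the real part of the modal-sum accelerance in Eq. (<ref>) can be evaluated directly from a set of modal parameters, as in the short sketch below (the array names are illustrative assumptions).
import numpy as np

def real_accelerance(omega, A, omega_nat, zeta):
    """Real part of the accelerance FRF built from modal parameters.

    omega : (N,) array of frequencies in rad/s
    A, omega_nat, zeta : (M,) arrays of residues, natural frequencies (rad/s)
                         and modal damping ratios for the M modes
    """
    w = omega[:, None]                      # shape (N, 1) for broadcasting over modes
    num = A * (omega_nat**2 - w**2)
    den = omega_nat**4 + w**4 + 2.0 * w**2 * omega_nat**2 * (2.0 * zeta**2 - 1.0)
    return -(omega**2) * np.sum(num / den, axis=1)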
§.§ Model evaluation for generalisation beyond training data (Case 2)
The normalised mean-squared error (NMSE) was calculated to evaluate the accuracy of the extrapolation to temperatures beyond the training data, for the second case presented herein, via
NMSE = 100/Mσ^2_𝐲∑_i=1^M(y_i - y_i^*)^2
where M is the number of test data points, y is the test data, σ^2_𝐲 is the variance of the test data, and y^* is the predicted function. Normalising to the test data variance allows for comparison of results on a consistent scale, regardless of signal magnitude. According to convention, an NMSE less than 5% suggests that the model fits the data well.
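For clarity, the metric can be implemented in a few lines, as in the following illustrative sketch.
import numpy as np

def nmse(y_test, y_pred):
    """Normalised mean-squared error (%), as defined above."""
    return 100.0 / (len(y_test) * np.var(y_test)) * np.sum((y_test - y_pred) ** 2)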
§ DATASET SUMMARY
The population dataset comprised vibration data collected from four healthy, nominally-identical, full-scale composite blades from a Gazelle helicopter (referenced in this paper as Blades 1-4). Although the exact internal geometry of the blades is unknown, specifications for Gazelle helicopter blades indicate that they are comprised of steel, fibreglass, and a honeycomb core <cit.>. Data were collected using Siemens PLM LMS SCADAS hardware and software at the Laboratory for Verification and Validation (LVV, https://lvv.ac.uk/) in Sheffield, UK. The first case used FRFs calculated from data that were collected on all four blades at ambient laboratory temperature, as described in <cit.>. The second case used FRFs calculated from data that were collected on a single blade (Blade 1) at multiple temperatures in an environmental chamber.
§.§ Data collection at ambient laboratory temperature
The first case used data collected at ambient laboratory temperature, with the blades in a fixed-free boundary condition, which was approximated by placing the root end of each blade in a substantiated strong-wall mount. Ten uniaxial 100 mV/g accelerometers were placed along the length of the underside of each blade. Note that the same accelerometers were used on each blade, and care was taken to ensure that they were attached to approximately the same locations on each blade. An electrodynamic shaker with force gauge was mounted to a fixture bolted to the laboratory floor and attached to the blade 0.575 metres from the root. The shaker was attached to the underside of the blade in the flapwise direction. A continuous random excitation was generated in LMS (note: LMS refers to Siemens PLM LMS SCADAS hardware and software) and applied to excite the blade up to 400 Hz, with a step size of 4.88e-02 Hz. Throughput time data were collected for each test, and the data were divided into 20 blocks. Hanning windows were applied and FRFs were computed in LMS, which were then averaged in the frequency domain. The experimental setup is shown in Figure <ref>, and the accelerometer positions on the blades are shown in Figure <ref>.
To simplify the analysis, a narrow frequency band was selected between 24 and 61 Hz, with the fourth and fifth bending modes of the blades dominating the response in this band. Although other modes appear to have a small influence in this band, a 2DOF assumption was imposed. (This assumption results in smoothing of the FRF over the band, and might result in some loss of interpretability, but is acceptable for these preliminary analyses). The real part was modelled as a probabilistic FRF, using the FRF estimate from Eq. (<ref>) as the mean of the likelihood function, as described in Case 1, presented in Section 6 of this paper. The real parts of the averaged FRFs for each blade, at the second accelerometer from the blade root (corresponding to the drive-point location), are shown in Figures <ref> and <ref>. Figure <ref> shows the full measured bandwidth, and Figure <ref> shows the FRF in the bandwidth of interest, between 24 and 61 Hz.
Figure: Helicopter blade in a substantiated wall mount.
Figure: Sensor locations on the helicopter blades.
Figures <ref> and <ref> show increasing variability with respect to frequency, which is an expected result, given that higher-frequency modes are more sensitive to small physical changes than lower-frequency modes. For modes less than 80 Hz, the maximum frequency difference among the blades was approximately 2.5 Hz; for modes greater than 80 Hz, the maximum frequency difference was approximately 6.3 Hz. Note the grouping visible at several of the peaks, where Blades 1 and 2 appear closely aligned in frequency while Blades 3 and 4 appear closely aligned. These results are quite relevant for PBSHM. All of the helicopter blades are healthy, and represent a normal-condition state of the population. Consider a situation where only FRFs from one of the groupings are available for training a model (or FRFs from the other groups are missing data). The normal condition could be heavily biased towards the training set, and incoming FRFs could be flagged as damaged, even if they are healthy. Further details regarding the data collection and processing for these tests can be found in <cit.>.
§.§ Data collection at various temperatures in an environmental chamber
The second case used data from Blade 1, collected at temperatures ranging from -20 to 30^∘C in increments of 5^∘C in an environmental chamber. The blade was tested in an approximately fixed-free boundary condition, with the blade root mounted on a fixture that was substantiated with a large concrete block. These tests used the same accelerometers and sensor layout as the previous tests at ambient laboratory temperature. Likewise, the same data-acquisition parameters and processing methods were used. The shaker was mounted to the bottom plate of the test fixture, and the shaker and force gauge were connected to the underside of the blade to excite it in the flapwise direction. A thermal jacket was used to protect the shaker when testing at lower temperatures. Prior to data collection, the environmental chamber was set to each of the temperatures of interest, after which the blade was allowed to soak for at least two hours to reach the desired temperature. Throughput force and acceleration data were collected for each test. The data were then divided into 20 blocks, and a Hanning window was applied to each data block. FRFs were then computed in LMS, and averaged in the frequency domain. The experimental setup, with the helicopter blade inside the environmental chamber, is shown in Figure <ref>.
To simplify the analysis, a narrow frequency band was selected between 135 and 155 Hz, with a higher-order bending mode of the blade dominating the response in this band. An SDOF assumption was imposed. (Again, this assumption was considered acceptable for these preliminary analyses, to avoid data-separability issues.) FRFs captured at temperatures -10, -5, 10, and 25^∘C provided the training data. The remaining temperature-varied FRFs were used to evaluate the extrapolation results to other temperatures. As with the previous case, the real part was modelled as a probabilistic FRF, using the FRF estimate from Eq. (<ref>) as the mean of the likelihood function. The real parts of the averaged FRFs for each temperature, from the fourth accelerometer from the blade tip, are shown in Figures <ref> and <ref>. Figure <ref> shows the full measured bandwidth, and Figure <ref> shows the FRF in the bandwidth of interest, between 135 and 155 Hz.
Figures <ref> and <ref> show a proportional decrease in frequency corresponding to each incremental temperature increase, with discrepancies more noticeable at the higher modes than the lower modes. These results are expected, as the higher-frequency modes are more sensitive to environmental and other changes. The maximum frequency difference among the tests was approximately 15.3 Hz (for modes less than 400 Hz), found in the band between 335 and 375 Hz, as obtained via peak-picking. In the same band, the average frequency difference for each 5^∘C increment was approximately 1.5 Hz. For the mode at approximately 145 Hz shown in Figure <ref>, the maximum frequency difference was approximately 3.8 Hz, and the average frequency difference for each 5^∘C increment was approximately 0.38 Hz. Further details regarding the data collection and processing for these tests can be found in <cit.>.
§ CASE 1: POPULATION-BASED MODELLING OF FRFS FOR NOMINALLY-IDENTICAL STRUCTURES
The first case used FRF data from four nominally-identical helicopter blades, collected at ambient laboratory temperature. A population form for the FRFs of the helicopter blades was developed using the aforementioned hierarchical (partial-pooling) approach. Models were developed in a probabilistic programming language. Analyses were performed using MCMC, via the No-U-Turn Sampler (NUTS) implementation of Hamiltonian Monte Carlo (HMC) <cit.>. HMC uses approaches based on differential geometry to generate transitions that span the full marginal variance, which allows the algorithm to accommodate large curvature changes in the target distribution (which are common with hierarchical models) <cit.>. This allows the sampler to efficiently explore the posterior distribution, without being susceptible to the random-walk behaviour that can occur with other samplers <cit.>.
§.§ Model development
The population of helicopter blade FRFs, with frequency inputs, ω_k, and accelerance outputs, 𝐇_k, was re-written from the general representation in Eq. (<ref>),
{ω_k,𝐇_k}_k=1^K =
{{ω_ik,H_ik}_i=1^N_k}_k=1^K
where {ω_ik,H_ik} was the i^th pair of observations in domain k. Then, considering only the real component of the FRF, Eq. (<ref>) was re-written as,
{real[H_ik] = f_k(ω_ik) + ϵ_ik}_k=1^K
where for each observation i, the output was determined by evaluating one of K latent functions, f_k(ω_i,k), plus additive noise, ϵ_i,k. For this work, there were four helicopter blades and four domains. Therefore, the population model included four latent functions (i.e., K = 4). The (real) FRFs were modelled probabilistically with an assumed Gaussian-distributed likelihood,
real[𝐇_k(ω_k)] ∼𝒩(𝐟_k(ω_k),σ^2_H)
where 𝐟_k(ω_k) was equal to the real component of the FRF, calculated using modal parameters via Eq. (<ref>). The additive noise variance σ^2_H of the FRFs was assumed to be the same for each blade. This assumption was reasonable, as the same data acquisition system and sensors were used among the different tests. Note that ω_k was permitted to vary depending on the given FRF. This allowed for flexibility, to consider FRFs with different numbers of spectral lines or missing data points, and to represent population uncertainty.
Natural frequencies and modal damping were learnt at the domain level, and allowed to vary among the different helicopter blades. Residues were shared among the different domains and learnt at the population level, to mitigate model identifiability issues. Shared, population-level prior distributions (with hyperpriors) were also placed over the modal parameters to capture/infer the similarity among the FRFs. Domain-level natural frequencies, ω_nat = {{ω_nat^k,m}_m=1^2}_k=1^4, were sampled from a truncated normal parent distribution, with higher-level expectation and variance sampled from truncated normal distributions,
ω_nat∼𝒯𝒩(μ_ω_nat,σ^2_ω_nat)
μ_ω_nat∼𝒯𝒩([190,335],[5^2,5^2])
σ^2_ω_nat∼𝒯𝒩([5,5],[5^2,5^2])
Domain-level damping, ζ = {{ζ_k,m}_m=1^2}_k=1^4, was sampled from a beta parent distribution, with higher-level shape parameters sampled from truncated normal distributions,
ζ∼ℬ(α_ζ,β_ζ)
α_ζ∼𝒯𝒩([6,6],[0.5^2,0.5^2])
β_ζ∼𝒯𝒩([1000,1000],[10^2,10^2])
Likewise, shared modal residues, A = {A_m}_m=1^2, were sampled from a normal parent distribution, with higher-level expectation and variance sampled from normal and truncated normal distributions, respectively,
A∼𝒯𝒩(μ_A,σ^2_A)
μ_A∼𝒩([-0.004,-0.004],[0.003^2,0.003^2])
σ_A^2 ∼𝒯𝒩([0.003,0.003],[0.003^2,0.003^2])
For simplicity, normal distributions were chosen for most of the parameters (or truncated normal distributions, if the parameter was restricted to be positive), although other distributions could be used. A beta distribution was chosen for damping, as beta distributions have support x ∈ [0,1], which is suitable for lightly-damped systems. Hyperpriors were chosen based on the physics that could be interpreted by visual inspection of the training data (e.g., natural frequency), or by fitting the data-rich domains independently (discussed below). Note that hyperpriors for ω_nat are shown in rad/s.
Shared noise variance, σ^2_H, was sampled from a truncated normal parent distribution, with higher-level expectation and variance also sampled from truncated normal distributions,
σ_H^2∼𝒯𝒩(μ_σ^2,σ^2_σ^2)
μ_σ^2∼𝒯𝒩(0,100^2)
σ^2_σ^2∼𝒯𝒩(100,100^2)
Note that a non-informative prior was used for the noise variance, to stabilise computation. To incorporate more prior information, the half-t family of distributions (e.g., half-Cauchy distribution) can be used instead <cit.>. A graphical model displaying the parameter hierarchy is shown in Figure <ref>.
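To make the hierarchy above concrete, a simplified sketch of the Case 1 model is given below, written in PyMC purely for illustration; the actual implementation used a different probabilistic programming environment, and some elements are simplified here (a half-normal noise scale, untruncated residues, and placeholder data arrays). Frequencies are in rad/s, consistent with the hyperpriors quoted above.
import numpy as np
import pymc as pm

K, M = 4, 2                                   # four blades, two modes
# omega_k: list of (N_k,) frequency vectors (rad/s); H_k: matching real-FRF training data
omega_k = [np.linspace(24, 61, n) * 2 * np.pi for n in (100, 100, 7, 20)]  # placeholders
H_k = [np.zeros_like(w) for w in omega_k]                                  # placeholders

with pm.Model() as case1:
    # Population-level hyperpriors
    mu_wn = pm.TruncatedNormal("mu_wn", mu=[190.0, 335.0], sigma=5.0, lower=0.0, shape=M)
    sd_wn = pm.TruncatedNormal("sd_wn", mu=5.0, sigma=5.0, lower=0.0, shape=M)
    a_z = pm.TruncatedNormal("a_z", mu=6.0, sigma=0.5, lower=0.0, shape=M)
    b_z = pm.TruncatedNormal("b_z", mu=1000.0, sigma=10.0, lower=0.0, shape=M)
    mu_A = pm.Normal("mu_A", mu=-0.004, sigma=0.003, shape=M)
    sd_A = pm.TruncatedNormal("sd_A", mu=0.003, sigma=0.003, lower=0.0, shape=M)

    # Domain-level parameters: natural frequencies and damping vary per blade
    wn = pm.TruncatedNormal("wn", mu=mu_wn, sigma=sd_wn, lower=0.0, shape=(K, M))
    zeta = pm.Beta("zeta", alpha=a_z, beta=b_z, shape=(K, M))
    # Residues shared across the population (untruncated here for simplicity)
    A = pm.Normal("A", mu=mu_A, sigma=sd_A, shape=M)
    # Shared noise scale (simplified relative to the hierarchy described above)
    sigma_H = pm.HalfNormal("sigma_H", sigma=1.0)

    for k in range(K):
        w = omega_k[k][:, None]
        num = A * (wn[k] ** 2 - w ** 2)
        den = wn[k] ** 4 + w ** 4 + 2 * w ** 2 * wn[k] ** 2 * (2 * zeta[k] ** 2 - 1)
        f_k = -(omega_k[k] ** 2) * (num / den).sum(axis=1)   # real accelerance mean
        pm.Normal(f"H_{k}", mu=f_k, sigma=sigma_H, observed=H_k[k])

    trace = pm.sample(draws=10000, tune=5000, chains=4, target_accept=0.99)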
Prior to fitting the FRF model, random Gaussian noise was generated, with a magnitude equal to 5% of the absolute peak value of the FRF from Blade 2. (Noise was added to the training data to avoid over-fitting small deviations/lightly participating modes within the band of interest.) The noise was then added to each FRF, and 100, 100, 7, and 20 training points were randomly selected from the FRFs belonging to Blades 1 to 4, respectively. The intent was that the two data-rich FRFs (Blades 1 and 2) would lend statistical support to the sparser domains (Blades 3 and 4), thereby reducing uncertainty compared to an approach without pooling. Recall the grouping of the peaks from the FRFs shown in Figures <ref> and <ref>. Both data-rich FRFs belonged to one of the groupings; likewise, the two data-poor FRFs belonged to the other grouping. This represents a challenging situation where the model can be heavily biased towards the available training data. The hierarchical/partial-pooling modelling structure allows the target distributions to be informed by the data (for very data-poor domains, this will of course be limited), but with the help of the full population dataset (including the data-rich domains).
The HMC sampler was run using four chains, with a target average proposal acceptance probability rate of 0.99 (this parameter controls the target acceptance rate of the NUTS algorithm, and setting it to a value close to 1 can reduce the number of false-positive divergences <cit.>). The sampler was run for 10000 samples per chain (and an additional 5000 warm-up samples were discarded per chain to diminish the influence of the starting values <cit.>), for each parameter. In addition, models were run for each set of blade data independently (no pooling), for comparison with the partial-pooling model.
§.§ Model results
FRFs for each blade were computed via Eq. (<ref>), from the samples of the modal parameters. Total variance was estimated by adding the standard deviation of the FRFs to the expectation of the noise variance. Posterior predictive mean and 3-sigma deviation for the partial pooling and independent models are plotted in Figures <ref> to <ref>, respectively.
Indeed, Figures <ref> to <ref> show that the fits to the sparser FRFs improved significantly for the combined model, compared to the independent models. With only 7 data points, Blade 3, shown in Figure <ref> in green, was missing most of the information necessary to characterise the two modes. The independent model for Blade 3 relied heavily on the user-specified prior, while the partial-pooling model borrowed information from the other three blades to inform the shared latent model. Similar (albeit less severe) results were seen for Blade 4, shown in Figure <ref> in purple. With 20 data points, the FRF for Blade 4 was less sparse than that for Blade 3, but was still somewhat lacking near the resonance peaks, especially for the first mode. Figure <ref> shows a significant reduction in variance with the partial-pooling approach. The improvements in the data-rich FRFs, Blades 1 (blue) and 2 (red), were not as apparent, as expected.
Variance reduction from the combined-inference (partial-pooling) approach can also be visualised by plotting the marginal distributions of the hyperparameters. For the population-level variables, marginal distributions were approximated by sampling the expectation and variance for each variable (or shape parameters, for damping), and then drawing from a distribution (normal for natural frequency and residue, and beta for damping) with the same statistics as the parameter draws. Kernel density estimation (KDE) was then used to approximate marginal distributions using the population-level draws. Marginal distributions for the domain-level variables were approximated via KDE of the samples. For the natural frequencies, the population- and domain-level distributions for each mode are shown in Figures <ref> and <ref>. For the (shared) residues, which were learnt at the population-level, the parent- and lower-level distributions for each mode are shown in Figures <ref> and <ref>. For damping, the population- and domain-level distributions for each mode are shown in Figures <ref> and <ref>.
Figures <ref> to <ref> show that when the domains were able to share information via common higher-level distributions, the variance was reduced compared to a no-pooling approach. In Figure <ref>, for the first natural frequency, the population-level distribution of the partial-pooling model was taller and with lower variance than the independent models, which was likely because the data-rich domains dominated the available data for that mode, compared to the data-poor domains (forcing the distribution towards the natural frequencies of the data-rich domains). This data sharing was also evident at the domain-level, where the KDEs of the data-poor domains aligned closely with those from the data-rich domains, for the partial-pooling models. The no-pooling KDEs of the data-poor FRFs differed significantly, and were primarily informed by the priors. In contrast, Figure <ref> shows that for the second natural frequency, the population-level distribution for the partial-pooling model was flatter than those for the independent models. This was likely to have occurred because more data were available for the second peak from Blade 4, which was from the second grouping (widening the distribution to accommodate the greater differences in natural frequency). Again, this conclusion is supported at the domain-level, where the KDE for Blade 4 is very similar for the partial-pooling and independent models (although there is still a small variance reduction for the partial-pooling model). For Blade 3, which was quite data-poor at the second mode as well as the first, the KDE in Figure <ref> shows that the natural frequency was estimated to occur in between those of the other, more data-rich blades. Again, for Blade 3, the no-pooling KDE differed significantly from the others, as this parameter was primarily informed by the priors, because of a lack of data at the second mode.
Figures <ref> and <ref> show that the population-level distributions for damping did not vary significantly among the partial-pooling and independent models, likely because the priors were fairly strong and also accurate to the data. At the domain-level, KDEs for the data-rich FRFs showed high similarity for the different pooling models, while KDEs for the data-poor FRFs tended towards the priors. Similar results were seen for the residues, in Figures <ref> and <ref>, except that the residues were shared among the domains for the partial-pooling model, so there was only one lower-level distribution for each mode. Close alignment of the domain-level distributions, of the data-rich domains, suggests that sharing the residue among the different blades was a suitable assumption.
The results presented in this case demonstrate the development of a population form, where a combined hierarchical FRF model is learnt for a group of nominally-identical helicopter blades. Two domains (Blades 3 and 4) had limited data, and were especially sparse near the resonance peaks. Such a situation could occur if an insufficient frequency resolution was selected during the data acquisition process. By borrowing data from data-rich domains within the population, variance was reduced compared to an independent modelling approach.
§ CASE 2: POPULATION-BASED MODELLING OF FRFS WITH TEMPERATURE VARIATION
The second case also used a hierarchical modelling structure with partial pooling, but assumed non-exchangeable models whereby parameters of the FRF varied with respect to temperature, i.e., the parameters were conditioned on the data. The goal was to learn functional relationships (at the population level) between temperature and natural frequency, and between temperature and damping, so that inferences could be made at temperatures not used in model training. As with the first case, all models in the second case were developed in the same probabilistic programming language, and analyses were performed using MCMC, via the No-U-Turn Sampler (NUTS) implementation of Hamiltonian Monte Carlo (HMC) <cit.>.
§.§ Identification of appropriate functional relationships between temperature and the modal parameters
Prior to developing the partial-pooling model, it was useful to determine appropriate functions for the temperature relationships. Independent models for each (real) temperature-varied FRF from Figure <ref> were fitted, and ground-truth values were estimated by computing the expectation for each modal parameter. (Note that in the second case, the modal parameters from the independent models for FRFs at all measured temperatures were considered to be the ground truth, as each FRF was data-rich. This process of using the same data to train and develop the model structure is called post-selection inference <cit.>, and must be used cautiously, as models that employ the technique are at risk of overfitting <cit.>. However, it was assumed for this work that because the number of candidate models was small, the bias resulting from using the data twice was also small <cit.>.) The ground-truth estimates for natural frequency and damping are plotted against temperature, and shown with least-squares fits (polynomial for natural frequency, and linear for damping), in Figures <ref> and <ref>. In the figures, the ground-truth estimates are plotted as asterisks, and those associated with the FRFs used to train the combined-inference model are shown in red, while those at `unseen' temperatures are shown in black. The residue/mode shape was assumed to be constant with respect to temperature.
Figure <ref> shows that a second-order polynomial appears to be an appropriate fit for natural frequency in the temperature range of interest. From Figure <ref>, a linear fit was considered most appropriate for damping. Although there is likely a (weakly) nonlinear relationship between temperature and damping for this dataset, a linear assumption provided good results for the experiments used here. A higher-order Taylor series expansion for the temperature-damping relationship would require many coefficients, which would significantly increase the number of hyperparameters in the model. Further studies, including material-properties evaluation, may help elucidate the nature of this relationship specific to the helicopter blades, which may allow for better inferences from increased physics-based knowledge.
§.§ Model development and results
As with the first case, the population model for the second case included four latent functions (i.e., K = 4), as four temperature-varied FRFs provided training data, i.e., those at -10, -5, 10, and 25^∘C. Likewise, only the real part of the FRF was fitted for simplicity. The real components of the FRFs were modelled probabilistically with an assumed Gaussian distribution, using the same accelerance FRF estimation as a likelihood function, as in Eqs. <ref> and <ref>. The additive noise variance σ^2_H of the FRFs was assumed shared among each of the domains (again, the same data acquisition system and sensors were used among the different tests), and the modal residue was assumed constant with respect to temperature.
Shared, higher-level distributions were placed over natural frequency, ω_nat = {ω_nat^k}_k=1^4, damping, ζ = {ζ_k}_k=1^4, and residue, A, to allow information sharing among the domains,
μ_A∼𝒩(-0.008, 0.002^2), σ_A^2 ∼𝒯𝒩(0.002, 0.002^2)
μ_ω_nat∼𝒯𝒩(910, 10^2)
μ_ζ∼𝒯𝒩(0.01, 1^2)
Note that hyperpriors for ω_nat are shown in rad/s. Also note that for this preliminary model, normal distributions were assumed for all parameters (with truncated normal distributions assumed for variance, damping, and natural frequency). The residue, A, was assumed constant over all temperatures, with distributions,
A ∼𝒩(μ_A, σ^2_A)
A second-order Taylor series expansion was used to approximate the functional relationship between temperature and natural frequency, with coefficients 𝐚 = {a_1, a_2}, where the domain-level natural frequencies were defined as temperature-shifted realisations from the population-level distributions, via,
{ω_nat^k = μ_ω_nat + a_1T_k + a_2T_k^2}_k=1^4
Likewise, the relationship between temperature and modal damping was approximated as linear, with slope b, over the measured temperatures, via,
{ζ_k = μ_ζ + bT_k}_k=1^4
where T_k was the temperature associated with the kth FRF. Priors were placed over the shared polynomial coefficients, with assumed distributions,
a_1 ∼𝒩(-0.01, 1^2)
a_2 ∼𝒩(0.001, 1^2)
b ∼𝒩(-5e-6, 1^2)
The shared noise variance σ^2_H was expressed via,
σ^2_H∼𝒯𝒩(0.3, 1^2)
A graphical model displaying the parameter hierarchy is shown in Figure <ref>.
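A simplified sketch of this non-exchangeable structure is given below, again written in PyMC purely for illustration (placeholder data arrays are used, and the noise term is treated as a scale rather than a variance for brevity); the temperature-shifted parameters follow the relations given above.
import numpy as np
import pymc as pm

temps = np.array([-10.0, -5.0, 10.0, 25.0])                          # training temperatures (degC)
omega_k = [np.linspace(135, 155, 100) * 2 * np.pi for _ in temps]    # placeholders, rad/s
H_k = [np.zeros_like(w) for w in omega_k]                            # placeholder real-FRF data

with pm.Model() as case2:
    mu_wn = pm.TruncatedNormal("mu_wn", mu=910.0, sigma=10.0, lower=0.0)
    mu_zeta = pm.TruncatedNormal("mu_zeta", mu=0.01, sigma=1.0, lower=0.0)
    mu_A = pm.Normal("mu_A", mu=-0.008, sigma=0.002)
    sd_A = pm.TruncatedNormal("sd_A", mu=0.002, sigma=0.002, lower=0.0)
    A = pm.Normal("A", mu=mu_A, sigma=sd_A)        # residue shared over all temperatures
    a1 = pm.Normal("a1", mu=-0.01, sigma=1.0)      # Taylor coefficients for wn(T)
    a2 = pm.Normal("a2", mu=0.001, sigma=1.0)
    b = pm.Normal("b", mu=-5e-6, sigma=1.0)        # linear slope for zeta(T)
    sigma_H = pm.TruncatedNormal("sigma_H", mu=0.3, sigma=1.0, lower=0.0)

    for k, T in enumerate(temps):
        wn_k = mu_wn + a1 * T + a2 * T**2          # temperature-shifted natural frequency
        zeta_k = mu_zeta + b * T                   # temperature-shifted damping
        w = omega_k[k]
        den = wn_k**4 + w**4 + 2 * w**2 * wn_k**2 * (2 * zeta_k**2 - 1)
        f_k = -(w**2) * A * (wn_k**2 - w**2) / den
        pm.Normal(f"H_{k}", mu=f_k, sigma=sigma_H, observed=H_k[k])

    idata = pm.sample(target_accept=0.99)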
Prior to fitting the FRF model, random Gaussian noise was generated, with a magnitude equal to 5% of the absolute peak value of the FRFs. The noise was then added to each FRF, and 100 training points were randomly selected from each of the FRFs captured at temperatures -10, -5, 10, and 25^∘C. The intent was to use a sub-set of the temperature-varied FRFs, to learn the population-level coefficients necessary to describe the relationships between temperature and the modal parameters, so that these population-level variables can be used to make predictions at `unseen' temperatures (of course, these predictions were validated using experiments that did not contribute to the training data). As with the previous case, the HMC sampler was run using four chains (and with a target average proposal acceptance probability rate of 0.99), for 10000 samples per chain (with an additional 5000 warm-up samples per chain, which were discarded), for each parameter. The expectation for the domain and population-level parameters were obtained from the samples. The expectation of the domain-level natural frequencies and damping, and shared residue and noise variance, were used to compute the FRFs via Eq. (<ref>), as shown in Figure <ref>. The expectation of the population-level variables which were used to extrapolate to other temperatures are shown in Table <ref>. (Because the shared residue and noise variance were assumed to be temperature-invariant, they were also used to extrapolate to other temperatures).
§.§ Extrapolation to other temperatures
After training the model, predictions were made at `unseen' temperatures, not used in training, using the expectation of population-level variables (shown in Table <ref>), and compared to measured FRFs to evaluate model accuracy. All shared variables (i.e., residue, polynomial coefficients, and noise variance), were assumed equal to the expectation of the variables computed using the model. Natural frequency and damping were estimated for all the measured temperatures, T = {T_i}_i=1^11, including the four used in training, via the relations,
{ω_nat^i = E[μ_ω_nat] + E[a_1]T_i + E[a_2]T_i^2}_i=1^11
{ζ_i = E[μ_ζ] + E[b]T_i}_i=1^11
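The extrapolation step itself is straightforward to reproduce; the sketch below evaluates the temperature-shifted modal parameters and the corresponding real FRF at an unseen temperature, using illustrative placeholder values (taken from the prior means quoted earlier) in place of the learnt expectations in Table <ref>.
import numpy as np

# Placeholder posterior expectations (illustrative values only)
E_mu_wn, E_a1, E_a2 = 910.0, -0.01, 0.001      # rad/s, rad/s per degC, rad/s per degC^2
E_mu_zeta, E_b = 0.01, -5e-6                   # damping intercept and slope
E_A = -0.008                                   # shared residue

def predict_real_frf(omega, T):
    """Real accelerance FRF at temperature T (degC) for the SDOF Case 2 model."""
    wn = E_mu_wn + E_a1 * T + E_a2 * T**2      # temperature-shifted natural frequency
    zeta = E_mu_zeta + E_b * T                 # temperature-shifted damping
    den = wn**4 + omega**4 + 2 * omega**2 * wn**2 * (2 * zeta**2 - 1)
    return -(omega**2) * E_A * (wn**2 - omega**2) / den

omega = np.linspace(135, 155, 400) * 2 * np.pi  # band of interest, rad/s
H_minus20 = predict_real_frf(omega, T=-20.0)    # prediction at an unseen temperature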
The extrapolated FRFs are shown in Figure <ref>, ranging from -20 to 30^∘C in increments of 5^∘C, with decreasing temperature from left to right, as natural frequency increases with decreasing temperature. Model accuracy was evaluated by calculating the NMSE via Eq. (<ref>) for each measured FRF with added noise, as shown in Table <ref>. Note that the NMSE computed for the prediction at temperatures -10, -5, 10, and 25^∘C, which correspond to the data used in model training, are included in the table for completeness. However, the specific training points were excluded from the calculation. To further visualise the relationships between temperature and the modal parameters, the functions were computed for each sample. The mean and variance of the functions were plotted with the parameters estimated via independent modelling (in this case, these results were considered to be the best approximation of the ground truth, as each FRF was data-rich). These plots are shown in Figures <ref> and <ref>.
In Figure <ref>, the extrapolated FRFs are shown as solid blue lines, with a shaded blue region indicating the variance bounds. The measured FRFs used to train the model, without added noise, are shown as solid red lines, and the measured FRFs used to test the model at `unseen' temperatures are shown as dashed black lines. From the figure, it is clear that excellent agreement was achieved between the extrapolated and measured FRFs at the temperatures used to train the model, as expected. Likewise, the FRFs at temperatures between those used to train the model (e.g., at 0, 15, and 20^∘C) show excellent agreement. Notably, the FRFs at colder temperatures, which were further away from the training data, still show good agreement. The accuracy of the fit is further shown in Table <ref>, as all NMSE values were less than 5%.
Figures <ref> and <ref> further show that the extrapolated parameters accurately represented the measured data, as the measured parameters largely fell within the variance bounds of the approximations. Figure <ref> shows that for natural frequency, parameter variance increased as the inferred parameters varied further from the training data (i.e., at temperatures further from those used to train the model), as expected. Figure <ref> shows that for damping, the relatively consistent variance was the result of the linear assumption for the temperature-damping relationship, and the relatively high deviation of the measured data from the linear fit. Further model development could involve the incorporation of more physics-based knowledge, such as, forcing the relationship between natural frequency and temperature to be monotonically decreasing, or further investigation into the possibly nonlinear relationship between damping and temperature.
§ CONCLUDING REMARKS
Current work has involved the development of probabilistic FRF models using a hierarchical Bayesian approach, that account for benign variations as well as similarities among nominally-identical structures. Two cases were presented, to demonstrate the usefulness of this modelling structure. The first case demonstrated how hierarchical Bayesian models with partial pooling can reduce variance in data-poor domains by allowing information transfer with data-rich domains, via shared population-level distributions. The second case showed that incorporating functional relationships (approximated via Taylor series expansion) into the modelling structure to describe temperature variations allows for prediction beyond the training data. The first case addresses data-sparsity challenges in SHM, as missing time-domain data resulting from sensor dropout and other causes will reduce the number of spectral lines in the frequency domain, which can impede dynamic characterisation. Likewise, the second case addresses data-scarcity issues, as encoding physics-based knowledge into the model via functional relationships can increase the amount of normal-condition information available. Future efforts may involve further investigation into the physical relationships between temperature and the modal parameters, particularly damping, to improve model accuracy.
§ ACKNOWLEDGEMENTS
The authors gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council (EPSRC), via grant reference EP/W005816/1. This research made use of The Laboratory for Verification and Validation (LVV), which was funded by the EPSRC (via EP/J013714/1 and EP/N010884/1), the European Regional Development Fund (ERDF), and the University of Sheffield. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
The authors would like to extend special thanks to Michael Dutchman of the LVV, for helping set up the experiments, and also Domenic Di Francesco of the Alan Turing Institute, for his advice when designing the hierarchical models.
arXiv:2307.04325v1 [gr-qc] 10 Jul 2023
Influence of Charge on Anisotropic Class-one Solution in Non-minimally Coupled Gravity
M. Sharif^1 [email protected] and Tayyab Naseer^1,2 [email protected]
^1 Department of Mathematics and Statistics, The University of Lahore,
1-KM Defence Road Lahore, Pakistan.
^2 Department of Mathematics, University of the Punjab,
Quaid-i-Azam Campus, Lahore-54590, Pakistan.
=========================================================================================================================================================================================================================================================================================================
This paper studies charged star models associated with anisotropic
matter distribution in f(ℛ,𝒯,𝒬)
theory, where
𝒬=ℛ_ϕψ𝒯^ϕψ. For this
purpose, we take a linear model of this gravity as
ℛ+ζ𝒬, where ζ represents a coupling
constant. We consider a self-gravitating spherical geometry in the
presence of electromagnetic field and generate solution to the
modified field equations by using the “embedding class-one”
condition and 𝕄𝕀𝕋 bag model equation of state. The
observational data (masses and radii) of four different stellar
models like 4U 1820-30, SAX J 1808.4-3658, SMC X-4 and Her X-I is
employed to analyze the effects of charge on their physical
properties. Finally, the effect of the coupling constant is checked
on the viability, hydrostatic equilibrium condition and stability of
the resulting solution. We conclude that the considered models show
viable and stable behavior for all the considered values of charge
and ζ.
Keywords: f(ℛ,𝒯,ℛ_ϕψ𝒯^ϕψ) gravity; Stability;
Self-gravitating systems; Compact objects.
PACS: 04.50.Kd; 04.40.Dg; 04.40.-b.
§ INTRODUCTION
General Relativity (𝔾ℝ) is viewed as the best gravitational theory to tackle various challenges, yet it is not adequate to properly explain the rapid expansion of our cosmos. As a result, multiple extensions to 𝔾ℝ have been proposed to deal with mystifying problems such as dark matter and the accelerated cosmic expansion. Various cosmologists have pointed out that this expansion is caused by the presence of a large amount of an obscure force, named dark energy, which acts as anti-gravity and drives stars as well as galaxies away from each other. The simplest extension to 𝔾ℝ is obtained by replacing the Ricci scalar ℛ with a generic function of ℛ in the geometric part of the Einstein-Hilbert action, named f(ℛ) theory <cit.>. There is a large body of
literature <cit.>-<cit.> to explore the viability and stability
of celestial structures in this theory.
Bertolami et al <cit.> introduced the concept of
matter-geometry coupling in f(ℛ) scenario by coupling
the effects of ℛ in the matter Lagrangian to study
self-gravitating objects. Such couplings have prompted many
researchers and hence several modifications of 𝔾ℝ (based
on the idea of coupling) have been suggested. The first
matter-geometry coupling was proposed by Harko et al
<cit.>, named as f(ℛ,𝒯) gravity, in which
𝒯 serves as trace of the energy-momentum tensor
(𝔼𝕄𝕋). The incorporation of 𝒯 in modified
functionals produces non-null divergence of the corresponding
𝔼𝕄𝕋 as opposed to 𝔾ℝ and f(ℛ)
theories. This coupling gravity offers several remarkable
astrophysical results <cit.>-<cit.>.
Haghani et al <cit.> suggested a more complicated theory whose functional depends on ℛ, 𝒯 and
𝒬, where
𝒬≡ℛ_ϕψ𝒯^ϕψ.
They studied three different models of this theory to analyze their
physical viability. The insertion of
ℛ_ϕψ𝒯^ϕψ makes this theory
more effective than other modified theories such as
f(ℛ,𝕃_m) and f(ℛ,𝒯). The
reason is that it entails a strong non-minimal interaction between geometry and the matter distribution in a self-gravitating object even in scenarios where f(ℛ,𝒯) fails. For instance, when a compact interior has a trace-free 𝔼𝕄𝕋 (i.e., 𝒯=0), such a strong coupling still persists. This theory provides a better understanding of the
inflationary era of our cosmos as well as rotation curves of
galactic structures. Sharif and Zubair <cit.> adopted matter
Lagrangian as 𝕃_m=μ, -P to study thermodynamical laws
corresponding to two models ℛ+ζ𝒬 as well
as ℛ(1+ζ𝒬) and determined viability
constraints for them. The same authors <cit.> checked the
validity of energy bounds analogous to the above models and
concluded that only positive values of ζ fulfill weak energy
conditions.
Odintsov and Sáez-Gómez <cit.> demonstrated certain
cosmological solutions and confirmed that
f(ℛ,𝒯,𝒬) gravity supports the
ΛCDM model. Baffou et al <cit.> obtained numerical
solutions of Friedmann equations and perturbation functions with
respect to two peculiar modified models and explored their
stability. Sharif and Waseem <cit.> determined the solutions and their stability for isotropic as well as anisotropic configurations and concluded that 𝕃_m=P_r results in more stable structures for the latter case. Yousaf et al
<cit.>-<cit.> employed the idea of orthogonal splitting of
the curvature tensor in this gravity and calculated some scalars in
the absence and presence of charge which help to understand the
structural evolution of self-gravitating bodies. Recently, we have
obtained physically acceptable solutions in this scenario through
multiple approaches <cit.>-<cit.>. The complexity factor and
two different evolutionary modes have also been discussed for a
self-gravitating object <cit.>.
Numerous investigations have been conducted in the context of
𝔾ℝ and its extended theories to examine how charge
influences the structural changes in celestial objects. Das et
al. <cit.> used the Reissner-Nordström metric as an exterior
geometry and calculated the solution of the equations coupled with
charge at the hypersurface. Sunzu et al <cit.> studied
several strange stars owning charged matter configuration in their
interiors with the help of mass-radius relation. Various authors
<cit.>-<cit.> observed that presence of charge inside
physical systems usually make them more stable in a wide range.
The state variables for isotropic or anisotropic quark bodies are
usually represented by energy density and pressure, that can be
interlinked through different constraints, one of them is the
𝕄𝕀𝕋 bag model equation of state
(𝔼o𝕊) <cit.>. It is well-known that
compactness of strange structures like RXJ 185635-3754, PSR 0943+10,
Her X-1, 4U 1820-30, SAX J 1808.4-3658 and 4U 1728-34, etc. can be
efficiently described by 𝕄𝕀𝕋 𝔼o𝕊,
whereas an 𝔼o𝕊 for neutron star fails in this
context <cit.>. In general, a vacuum comprises of two states,
namely false and true whose discrepancy can be calculated through
the bag constant (𝔅). This model has extensively been
used by several researchers <cit.>-<cit.> to analyze the
internal composition of various quark bodies. Demorest et al
<cit.> discussed a particular strange star (namely, PSR
J1614-2230) and found that class of such massive objects can only be
supported by 𝕄𝕀𝕋 bag model. Rahaman et al
<cit.> employed this model along with interpolating technique to
explore the mass and some other physical aspects of compact
structures.
The solution to the field equations in any gravitational theory can
be formulated by virtue of multiple techniques, such as the
consideration of a particular 𝔼o𝕊 or the
solution of metric potentials etc. A useful technique in this regard
is the embedding class-one condition which points out that an
n-dimensional space can always be embedded into a space of one
more dimension, i.e., n+1. Bhar et al <cit.> used an
acceptable metric potential to determine physically viable
anisotropic star models through this condition. Maurya et al
<cit.> employed this condition to calculate the solutions
corresponding to relativistic stars and also analyzed the effects of
anisotropy on these structures. Singh et al <cit.> formed
non-singular solution for spherically symmetric spacetime in terms
of new metric function by using this technique. The decoupled
solutions for self-gravitating anisotropic systems have been
determined through class-one condition <cit.>. The same
condition has also been employed to modified theories. Singh
et al <cit.> used the embedding approach to study the
physical features of different compact stars in the context of
f(ℛ,𝒯) theory. Rahaman et al
<cit.> also discussed celestial structures through an embedding
approach in the same scenario and claimed that this modified theory
better explains such massive bodies. Various authors formulated
multiple acceptable class-one solutions in various backgrounds such
as f(ℛ), f(𝒢), f(ℛ,𝒯) and
f(𝒢,𝒯) theories <cit.>-<cit.>. Sharif
and his collaborators <cit.>-<cit.> extended this work in
f(𝒢) and Brans-Dicke scenarios, and obtained viable as
well as stable solutions.
In this paper, we study charged star models with anisotropic matter
distribution in the framework of
f(ℛ,𝒯,𝒬) theory. The paper has the
following format. The next section is devoted to the basic description of the modified theory and the construction of the field equations corresponding to the model ℛ+ζ𝒬. We assume the 𝕄𝕀𝕋 bag model 𝔼o𝕊 and utilize the embedding condition to find the radial metric potential from a known temporal component. The boundary conditions are given in section 3. Section 4 explores the effects of the electromagnetic field on several physical characteristics of compact objects through graphical analysis. Finally, we summarize all the results in section 5.
§ THE F(ℛ,𝒯,𝒬) GRAVITY
The action for this theory is obtained by inserting
f(ℛ,𝒯,𝒬) in place of ℛ
in the Einstein-Hilbert action (with κ=8π) as <cit.>
𝕀_f(ℛ,𝒯,𝒬)=∫√(-g){f(ℛ,𝒯,𝒬)/16π
+𝕃_m+𝕃_ℰ}d^4x,
where 𝕃_m and 𝕃_ℰ symbolize the
Lagrangian densities of matter configuration and electromagnetic
field, respectively. The corresponding field equations are
𝒢_ϕψ=𝒯_ϕψ^(EFF)=8π{1/f_ℛ-𝕃_mf_𝒬(𝒯_ϕψ+ℰ_ϕψ)
+𝒯_ϕψ^(𝒞)},
where 𝒢_ϕψ is the Einstein tensor,
𝒯_ϕψ^(EFF) can be termed as the
𝔼𝕄𝕋 in extended gravity, 𝒯_ϕψ is the
matter energy-momentum tensor and ℰ_ϕψ is the
electromagnetic tensor. The modified sector of this theory becomes
𝒯_ϕψ^(𝒞) = -1/8π(𝕃_mf_𝒬-f_ℛ)[(f_𝒯+1/2ℛf_𝒬)𝒯_ϕψ
+{ℛ/2(f/ℛ-f_ℛ)-𝕃_mf_𝒯..
- .1/2∇_σ∇_ω(f_𝒬𝒯^σω)}g_ϕψ
-1/2(f_𝒬𝒯_ϕψ)-(g_ϕψ-
∇_ϕ∇_ψ)f_ℛ
- 2f_𝒬ℛ_σ(ϕ𝒯_ψ)^σ
+∇_σ∇_(ϕ[𝒯_ψ)^σf_𝒬]
+2(f_𝒬ℛ^σω+.f_𝒯g^σω)∂^2𝕃_m/∂ g^ϕψ∂ g^σω].
Here, f_ℛ, f_𝒯 and f_𝒬 are
the partial derivatives of f with respect to its arguments. Also,
□≡1/√(-g)∂_ϕ(√(-g)g^ϕψ∂_ψ) and ∇_ω indicate the d'Alembert operator and covariant derivative, respectively. We take a suitable choice of matter
Lagrangian as
𝕃_m=-1/4𝒜_ϕψ𝒜^ϕψ
which leads to ∂^2𝕃_m/∂
g^ϕψ∂
g^σω=-1/2𝒜_ϕσ𝒜_ψω
<cit.>. Here,
𝒜_ϕψ=ω_ψ;ϕ-ω_ϕ;ψ
serves as the Maxwell field tensor and
ω_ψ=ω(r)δ^ψ_0 is termed as the four
potential. The violation of the equivalence principle is obvious in this theory due to the arbitrary coupling between matter and geometry, which results in a non-vanishing covariant divergence of the 𝔼𝕄𝕋 (<ref>) (i.e., ∇_ϕ𝒯^ϕψ≠ 0). Consequently, an additional force is
produced in the gravitational structure which causes non-geodesic
motion of test particles. Thus we have
∇^ϕ𝒯_ϕψ =2/2f_𝒯+ℛf_𝒬+16π[∇_ϕ(f_𝒬ℛ^σϕ𝒯_σψ)-𝒢_ϕψ∇^ϕ(f_𝒬𝕃_m)
-1/2∇_ψ𝒯^σω(f_𝒯g_σω+f_𝒬ℛ_σω)
+∇_ψ(𝕃_mf_𝒯)-8π∇^ϕℰ_ϕψ].
In the structural development of celestial bodies, anisotropy is
supposed as a basic entity which appears when there is a difference
between radial and tangential pressures. In our cosmos, many stars
are likely to be interlinked with anisotropic fluid, thus this
factor becomes highly significant in the study of stellar models and
their evolution. The anisotropic 𝔼𝕄𝕋 is
𝒯_ϕψ=(μ+P_) 𝒦_ϕ𝒦_ψ+P_
g_ϕψ+(P_r-P_)𝒲_ϕ𝒲_ψ,
where the energy density, radial as well as tangential pressure,
four-vector and four-velocity are given by
μ, P_r, P_, 𝒲_ϕ and 𝒦_ϕ,
respectively. The trace of the field equations provides
3∇^ω∇_ω
f_ℛ-ℛ(𝒯/2f_𝒬-f_ℛ)-𝒯(8π+f_𝒯)+1/2∇^ω∇_ω(f_𝒬𝒯)
+∇_ϕ∇_ω(f_𝒬𝒯^ϕω)-2f+(ℛf_𝒬+4f_𝒯)𝕃_m
+2ℛ_ϕω𝒯^ϕωf_𝒬
-2g^ψξ∂^2𝕃_m/∂
g^ψξ∂
g^ϕω(f_𝒯g^ϕω+f_𝒬R^ϕω)=0.
For f_𝒬=0, this yields f(ℛ,𝒯)
theory, which can further be reduced to f(ℛ) gravity
when f_𝒯=0. The electromagnetic 𝔼𝕄𝕋 is
defined as
ℰ_ϕψ=1/4π[1/4g_ϕψ𝒜^σω𝒜_σω
-𝒜^ω_ϕ𝒜_ωψ],
and Maxwell equations are
𝒜^ϕψ_;ψ=4π𝒥^ϕ, 𝒜_[ϕψ;σ]=0,
where 𝒥^ϕ=ϖ𝒦^ϕ,
𝒥^ϕ and ϖ are the current and charge
densities, respectively. To examine the interior compact stars, we
take self-gravitating spherical spacetime as
ds^2=-e^ρ dt^2+e^α dr^2+r^2dθ^2+r^2sin^2θ
dφ^2,
where ρ=ρ(r) and α=α(r). The Maxwell equations
ω”+1/2r[4-r(ρ'+α')]ω'=4πϖ
e^ρ/2+α,
lead to
ω'=s/r^2e^ρ+α/2,
where s shows the presence of charge inside the geometry
(<ref>) and '=∂/∂ r. In this context, the
matter Lagrangian turns out to be 𝕃_m=s^2/2r^4.
Also, the four-vector and four-velocity in comoving framework are
𝒲^ϕ=δ^ϕ_1 e^-α/2, 𝒦^ϕ=δ^ϕ_0 e^-ρ/2,
satisfying 𝒦^ϕ𝒦_ϕ=-1 and
𝒲^ϕ𝒦_ϕ=0.
We consider a linear model as <cit.>
f(ℛ,𝒯,ℛ_ϕψ𝒯^ϕψ)=f_1(ℛ)+
f_2(ℛ_ϕψ𝒯^ϕψ)=ℛ+ζℛ_ϕψ𝒯^ϕψ,
where ζ is an arbitrary coupling constant. The nature of the
corresponding solution is found to be oscillatory (representing
alternating collapsing and expanding phases) for the case when
ζ > 0. On the other hand, ζ < 0 yields the cosmic scale
factor having a hyperbolic cosine-type dependence. The stability of
this model has been analyzed for isotropic/anisotropic
configurations through different schemes leading to some acceptable
values of ζ <cit.>. The factor 𝒬 of
this model becomes
𝒬 = e^-α[μ/4(2ρ”+ρ'^2-ρ'α'+4ρ'/r)+P_r/4(ρ'α'-ρ'^2
-2ρ”-4α'/r)
- P_(ρ'/r-α'/r-2e^α/r^2+2/r^2)].
The corresponding field equations (<ref>) take the form as
𝒢_ϕψ = ζ/1-ζ s^2/2r^4[(8π/ζ+1/2ℛ)𝒯_ϕψ
+8π/ζℰ_ϕψ+1/2{𝒬
-∇_σ∇_ω𝒯^σω}g_ϕψ
- 2ℛ_σ(ϕ𝒯_ψ)^σ-1/2𝒯_ϕψ
+∇_σ∇_(ϕ𝒯_ψ)^σ
-ℛ^σω𝒜_ϕσ𝒜_ψω].
The non-conservation of 𝔼𝕄𝕋 (<ref>) becomes
∇^ϕ𝒯_ϕψ =2ζ/ζℛ+16π[∇_ϕ(ℛ^σϕ𝒯_σψ)-1/2ℛ_σω∇_ψ𝒯^σω-1/2𝒯_ϕψ∇^ϕℛ-8π∇^ϕℰ_ϕψ
-𝒢_ϕψ∇^ϕ(𝕃_m)].
Equation (<ref>) leads to three non-zero components as
8πμ =e^-α[α'/r+e^α/r^2-1/r^2
+ζ{μ(3ρ'α'/8-ρ'^2/8
+α'/r+e^α/r^2-3ρ”/4-3ρ'/2r
-1/r^2)-μ'(α'/4-1/r-ρ')
+μ”/2+P_r(ρ'α'/8
-ρ'^2/8-ρ”/4+α'/2r+α”/2
-3α'^2/4)+5α'P'_r/4-P”_r/2
+P_(α'/2r-ρ'/2r+3e^α/r^2
-1/r^2)-P'_/r
+s^2/r^4(α'/2r-e^α/2r^2+1/2r^2+ρ'α'/8
-ρ'^2/8-ρ”/4-e^α/ζ)}],
8π
P_r =e^-α[ρ'/r-e^α/r^2+1/r^2
+ζ{μ(ρ'α'/8+ρ'^2/8
-ρ”/4-ρ'/2r)-ρ'μ'/4
-P_r(5ρ'^2/8-7ρ'α'/8+5ρ”/4-7α'/2r+ρ'/r-α'^2
-e^α/r^2+1/r^2)
+P'_r(ρ'/4+1/r)-P_(α'/2r-ρ'/2r+3e^α/r^2
-1/r^2)+P'_/r
+s^2/r^4(ρ'/2r+e^α/2r^2
-1/2r^2+ρ”/4+ρ'^2/8-ρ'α'/8+e^α/ζ)}],
8π
P_ =e^-α[1/2(ρ”+ρ'^2/2-ρ'α'/2
-α'/r+ρ'/r)
+ζ{μ(ρ'^2/8+ρ'α'/8-ρ”/4-ρ'/2r)
-μ'ρ'/4+P_r(ρ'^2/8+3α'^2/4-ρ'α'/8+ρ”/4-α'/2r
-α”/2)-5α'P'_r/4+P”_r/2
-P_(ρ'^2/4-ρ'α'/4+ρ”/2-α'/r+ρ'/r)
-P'_(α'/4-ρ'/4-3/r)+P”_/2
+s^2/r^4(ρ'α'/8-ρ'^2/8-ρ”/4
+α'/4r-ρ'/4r-e^α/ζ)}].
The explicit expressions for the matter variables are given in
Eqs.(<ref>)-(<ref>). In order to keep the system in
hydrostatic equilibrium, we can obtain the corresponding condition
from Eq.(<ref>) as
dP_r/dr+ρ'/2(μ
+P_r)-2/r(P_-P_r)-2ζ
e^-α/ζℛ+16π[ρ'μ/8(ρ'^2+2ρ”-ρ'α'+4ρ'/r)
-μ'/8(ρ'^2-ρ'α'+2ρ”+4ρ'/r)+P_r(5ρ'^2α'/8
-5ρ'α'^2/8-5α'^2/2r+7ρ”α'/4-ρ”'/2
-ρ'ρ”+ρ'α”/2+2α”/r+ρ'α'/r-α'/r^2
-ρ”/r+ρ'/r^2+2e^α/r^3-2/r^3)+P'_r/8(ρ'α'-2ρ”
-ρ'^2+4α'/r)+P_/r^2(α'-ρ'+2e^α/r
-2/r)-P'_/r(α'/2-ρ'/2
+e^α/r-1/r)
-(ss'/r^4-2s^2/r^5)(ρ'/r-e^α/r^2+1/r^2
+2e^α/ζ)]=0.
This represents the Tolman-Oppenheimer-Volkoff (𝕋𝕆𝕍) equation in the extended framework, which helps in analyzing the structure and dynamics of self-gravitating celestial objects.
Misner-Sharp <cit.> provided the mass of a sphere as
m(r)=r/2(1-g^ϕψr_,ϕr_,ψ),
which leads to
m(r)=r/2(1-e^-α+s^2/r^2).
The non-linear system (<ref>)-(<ref>) contains six unknowns ρ, α, μ, P_r, P_ and s, hence some constraints are required to close the system. We investigate various physical
aspects of different quark bodies through a well-known
𝕄𝕀𝕋 bag model 𝔼o𝕊 which interrelates
the matter variables inside the geometry <cit.>. This constraint
has the form
P_r=1/3(μ-4𝔅).
The constant 𝔅 has been determined corresponding to
different stars <cit.> that are used in the analysis of
physical attributes of all the considered star models. The solution
of the modified field equations (<ref>)-(<ref>) along with
𝔼o𝕊 (<ref>) turns out to be
μ =[8π
e^α+ζ(9ρ”/8-e^α/r^2+1/r^2-α”/8
-5ρ'α'/8-α'^2/16
-7α'/2r+3ρ'^2/16+7ρ'/4r)]^-1
×[3/4(1+ζ
s^2/2r^4)(α'/r+ρ'/r)+𝔅{8π
e^α-ζ(4α'/r-3ρ'^2/4-3ρ”/2+ρ'α'
α”/2+α'^2/4-ρ'/r+e^α/r^2-1/r^2)}],
P_r =[8π
e^α+ζ(9ρ”/8-e^α/r^2+1/r^2
-α”/8-5ρ'α'/8-α'^2/16
-7α'/2r+3ρ'^2/16+7ρ'/4r)]^-1
×[1/4(1+ζ
s^2/2r^4)(α'/r+ρ'/r)-𝔅{8π
e^α-ζ(ρ'α'/2
+α'/r-2ρ'/r+e^α/r^2
-ρ”-1/r^2)}],
P_ =[8π
e^α+ζ(1/r^2-2e^α/r^2+ρ'^2/4+ρ”/2-ρ'α'/4+ρ'/r
-α'/r)]^-1[ρ'/2r-α'/2r
+ρ'^2/4-ρ'α'/4+ρ”/2+ζ{8π
e^α+ζ(9ρ”/8-e^α/r^2+1/r^2
-α”/8-5ρ'α'/8-α'^2/16
-7α'/2r+3ρ'^2/16+7ρ'/4r)}^-1{1/8r(1+ζ
s^2/2r^4)(2ρ'α'^2+ρ'^3-ρ”α'-ρ'ρ”
-α'α”-ρ'α”+3ρ'^2α'/2
-3ρ'^2/r+3α'^3/2-α'^2/r-4ρ'α'/r)+2π
e^α𝔅(ρ'α'
-2ρ”+2α”-3α'^2-2ρ'/r+2α'/r)
+ζ𝔅/16(10ρ”α”-5ρ'α'α”+11ρ'ρ”α'
-11ρ”α'^2-ρ'^2α”
-2ρ”ρ'^2-10ρ”^2-7ρ'^2α'^2/2
+ρ'^3α'/2-36ρ'α'^2/r-8ρ'^3/r
+11ρ'α'^3/2+16ρ'^2α'/r
+28ρ”α'/r-8α'α”/r+12α'^3/r+3ρ'^4/2
-8ρ'^2/r^2-8α”e^α/r^2
+8α”/r^2-20α'^2/r^2-24ρ'ρ”/r+52ρ'α'/r^2+10ρ'α”/r
-4e^αρ'α'/r^2+8e^αρ”/r^2-8ρ”/r^2
+12α'^2e^α/r^2-8ρ'/r^3
-8e^αα'/r^3+8α'/r^3+8e^αρ'/r^3)}]
+ζ
s^2/4r^4e^α(ρ'α'/2-ρ'^2/2-ρ”
+α'/r-ρ'/r-4e^α/ζ).
A comprehensive analysis has been done on the study of celestial
bodies configured with quark matter through 𝔼o𝕊
(<ref>) in 𝔾ℝ and other modified theories
<cit.>. We find the solution to the modified charged field equations by employing this 𝔼o𝕊 and setting the values of the coupling constant as ζ=±5.
Eiesland <cit.> computed the necessary and sufficient condition for a spacetime to be of embedding class-one as
ℛ_1212ℛ_0303-ℛ_0101ℛ_2323+ℛ_1202ℛ_1303=0,
which leads to
ρ'^2-(ρ'-α')ρ'e^α-2(e^α-1)ρ”=0,
and hence
α(r)=ln(1+C_1ρ'^2e^ρ),
where C_1 is an integration constant. To evaluate α(r), we
consider the temporal metric function as <cit.>
ρ(r)=ln C_3+2C_2r^2.
Here, C_2 and C_3 are positive constants that need to be
determined. Lake <cit.> proposed the criteria to check the
acceptance of ρ(r) as ρ(r)|_r=0=ln
C_3, ρ'(r)|_r=0=0 and ρ”(r)|_r=0>0 everywhere in the
interior configuration (r=0 indicates center of the star). This
confirms the acceptance of the metric potential (<ref>). Using
Eq.(<ref>) in (<ref>), we obtain
α(r)=ln(1+C_2C_4r^2e^2C_2r^2),
where C_4=16C_1C_2C_3. Equations (<ref>)-(<ref>) in
terms of these constants take the form as given in Appendix
B.
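As a quick symbolic check of this step, the class-one relation α(r)=ln(1+C_1ρ'^2e^ρ) can be evaluated for the chosen ρ(r); the sketch below (an illustrative verification, not part of the original derivation) confirms that it reduces to the quoted form with C_4=16C_1C_2C_3.
import sympy as sp

r, C1, C2, C3 = sp.symbols('r C1 C2 C3', positive=True)

rho = sp.log(C3) + 2 * C2 * r**2                                  # temporal metric function
alpha_lhs = sp.log(1 + C1 * sp.diff(rho, r)**2 * sp.exp(rho))     # class-one condition

C4 = 16 * C1 * C2 * C3
alpha_quoted = sp.log(1 + C2 * C4 * r**2 * sp.exp(2 * C2 * r**2))

# The difference simplifies to zero, confirming the quoted radial potential
print(sp.simplify(sp.expand(alpha_lhs - alpha_quoted)))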
§ BOUNDARY CONDITIONS
In order to understand the complete structural formation of massive
stars, we impose some conditions on the boundary surface, known as
the junction conditions. In this regard, several conditions have
been discussed in the literature, such as the Darmois, Israel and
Lichnerowicz junction conditions. The first of them requires the
continuity of the first and second fundamental forms between both
the interior and exterior regions at some fixed radius <cit.>.
On the other hand, Lichnerowicz junction conditions yield the
continuity of the metric and all first order partial derivatives of
the metric across Σ <cit.>. However, both of these
conditions are often stated to be equivalent, known as the
Darmois-Lichnerowicz conditions <cit.>. Since we need to calculate three constants, we use these junction conditions to increase the number of equations.
The choice of the exterior spacetime should be made on the basis
that the properties (such as static/non-static and
uncharged/charged) of the interior and exterior geometries can match
with each other at the hypersurface. Also, for model (<ref>),
the term ℛ_ϕψ𝒯^ϕψ does not
contribute to the current scenario. Therefore, we take the
Reissner-Nordström exterior metric as the most suitable choice
given by
ds^2=-(1-2M̅/r+S̅^2/r^2)dt^2+dr^2/(1-2M̅/r+S̅^2/r^2)
+r^2dθ^2+r^2sin^2θ dφ^2,
where S̅ and M̅ are the charge and mass of the
exterior region, respectively. We suppose that the metric potentials
(g_tt and g_rr components) and the first-order derivative (g_tt,r) corresponding to the inner and outer geometries are
continuous across the boundary, leading to the following constraints
e^ρ(ℋ) =C_3e^2C_2ℋ^2=1-2M̅/ℋ+S̅^2/ℋ^2,
e^ζ(ℋ) =1+C_2C_4ℋ^2e^2C_2ℋ^2=(1-2M̅/ℋ
+S̅^2/ℋ^2)^-1,
ρ'(ℋ) =4C_2ℋ=2M̅ℋ-2S̅^2/ℋ(ℋ^2
-2M̅ℋ+S̅^2),
where ℋ denotes the boundary of a compact star.
Equations (<ref>)-(<ref>) are solved simultaneously so that
we obtain
C_1 = ℋ^4(2M̅ℋ-S̅^2)/4(M̅ℋ-S̅^2)^2,
C_2 = M̅ℋ-S̅^2/2ℋ^2(ℋ^2-2M̅ℋ+S̅^2),
C_3 = (ℋ^2-2M̅ℋ+S̅^2/ℋ^2)e^M̅ℋ-S̅^2/2M̅ℋ-ℋ^2-S̅^2,
C_4 = 2(2M̅ℋ-S̅^2)/M̅ℋ-S̅^2e^M̅ℋ-S̅^2/2M̅ℋ-ℋ^2-S̅^2.
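For completeness, these matching constants can be evaluated numerically from a given mass, radius and charge; the short sketch below works in geometrized units (G=c=1) and uses illustrative placeholder inputs rather than the observational data of Table 1.
import numpy as np

def matching_constants(M, H, S):
    """C1-C4 from the smooth matching with the Reissner-Nordstrom exterior.

    M, H, S : mass, radius and charge in geometrized units (e.g., km).
    """
    C1 = H**4 * (2 * M * H - S**2) / (4 * (M * H - S**2) ** 2)
    C2 = (M * H - S**2) / (2 * H**2 * (H**2 - 2 * M * H + S**2))
    C3 = ((H**2 - 2 * M * H + S**2) / H**2) * np.exp((M * H - S**2) / (2 * M * H - H**2 - S**2))
    C4 = (2 * (2 * M * H - S**2) / (M * H - S**2)) * np.exp((M * H - S**2) / (2 * M * H - H**2 - S**2))
    return C1, C2, C3, C4

# Placeholder example: a 1.4 solar-mass star (about 2.07 km) with a 9 km radius and a small charge
print(matching_constants(M=2.07, H=9.0, S=0.2))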
The second fundamental form yields
P_r|_Σ=0, s|_Σ=S̅, m|_Σ=M̅.
Equation (<ref>) provides the radial pressure inside a compact
star which must disappear at the hypersurface. This leads to the bag
constant in terms of Eqs.(<ref>)-(<ref>) as
𝔅 =[4ℋ^5(ζ(-4M̅^3ℋ+2M̅^2S̅^2
+10M̅S̅^2ℋ-5S̅^4-3S̅^2ℋ^2)
+8πℋ^4(ℋ(ℋ-2M̅)+S̅^2))]^-1[(ℋ(ℋ
-2M̅)+S̅^2)(-2M̅^2ℋ
+M̅(S̅^2+3ℋ^2)-2S̅^2ℋ)(ζS̅^2+2ℋ^4)].
We can evaluate the constants (C_1, C_2, C_3, C_4) as well
as bag constant through the experimental data (masses and radii) of
four strange stars <cit.> given in Table 1. Tables
2 and 3 present the values of these constants
for S̅=0.2 and 0.7, respectively. It is observed that all these stars exhibit behavior consistent with Buchdahl's proposed limit <cit.>, i.e., 2M̅/ℋ<8/9.
The solution to the field equations (<ref>)-(<ref>) is
obtained by applying some constraints. The values of matter
variables such as the energy density (at the core and boundary) and
central radial pressure along with the bag constant with respect to
different choices of the coupling constant (ζ=5, -5)
and charge (S̅=0.2, 0.7) are given in Tables
4-7. We obtain 𝔅 for different
stars as
* For ζ=5 and S̅=0.2: 116.27, 215.48, 235.81 and 113.18
MeV/fm^3.
* For ζ=5 and S̅=0.7: 115.15, 210.95, 226.74 and 109.69
MeV/fm^3.
* For ζ=-5 and S̅=0.2: 116.07, 215.01, 235.56 and 113.15
MeV/fm^3.
* For ζ=-5 and S̅=0.7: 114.94, 210.32, 226.07 and 109.58
MeV/fm^3.
Notice that the predicted range (60-80 MeV/fm^3 <cit.>) of the bag constant for which stars remain stable does not include the above computed values for the different cases in this theory. Nevertheless, several experiments performed at CERN-SPS and RHIC revealed that a density-dependent bag model could provide a much wider range of this constant.
§ GRAPHICAL INTERPRETATION OF COMPACT STRUCTURES
This section deals with the graphical analysis of different physical attributes of anisotropic compact models coupled with the electromagnetic field. With the help of the preliminary data presented
in Tables 1-3, the graphical nature of the
developed solution (<ref>)-(<ref>) is analyzed for different
parametric values. We check physical acceptance of the metric
potentials, anisotropic pressure, energy conditions and mass inside
all considered candidates. Since ζ is an arbitrary constant, the analysis of the physical attributes of compact stars corresponding to its different values helps us to explore the effects of this theory. For this, we choose ζ=±5 and check the stability of the modified gravity model (<ref>) and of the constructed solution. Further, the modified field equations still involve one unknown, the interior charge; one can either impose a constraint to determine it or assume a known form.
In this regard, we take the electric charge s(r) depending on the
radial coordinate as follows <cit.>
s(r)=S̅(r/ℋ)^3=kr^3,
where k is a constant with the dimension of inverse square length.
The metric functions are found to be increasing and singularity-free
everywhere.
§.§ Study of Matter Variables
A solution can be considered physically acceptable if the state
variables (pressure and energy density) attain their maximum values
at the core of the celestial object and decrease towards its boundary.
Figures 1-3 show the graphs of the energy density,
radial and tangential pressures, respectively, corresponding to each
star for two values of charge and k=0.001. We note that all stars
exhibit acceptable behavior of these quantities. Figure 1
shows that the energy density increases with increasing coupling
constant and decreasing charge. Figures 2 and
3 demonstrate that the radial and
tangential pressures inside each star decrease with the increase in charge as
well as ζ. The radial pressure vanishes at the boundary only
for ζ=-5. Tables 4-7 indicate that the
structure of each star becomes denser for ζ=5 and
S̅=0.2. We have checked the regularity conditions of the developed
solution (dμ/dr|_r=0 = 0, dP_r/dr|_r=0 =
0, d^2μ/dr^2|_r=0 < 0, d^2P_r/dr^2|_r=0 <
0) and found that they are satisfied. In all plots of this paper, note that
* Red (thick) line corresponds to ζ=-5 and S̅=0.2.
* Red (dotted) line corresponds to ζ=-5 and S̅=0.7.
* Black (thick) line corresponds to ζ=5 and S̅=0.2.
* Black (dotted) line corresponds to ζ=5 and S̅=0.7.
§.§ Behavior of Anisotropy
The solution (<ref>)-(<ref>) produces the anisotropy
(Δ=P_-P_r). We analyze the influence of charge on the
anisotropy to study its role in structural development. The
anisotropy is directed inward (decreasing) or outward (increasing)
according to whether the radial pressure is greater
or less than the tangential component. Figure 4 depicts
that it vanishes at the core and increases in the interior of all
stars. It is also shown that a larger value of charge reduces the
anisotropy.
§.§ Effective Mass, Compactness and Surface Redshift
The sphere (<ref>) has an effective mass in terms of energy
density as
m(r)=1/2∫_0^ℋr^2μ dr,
where μ is provided in Eq.(<ref>). Equivalently,
Eq.(<ref>) along with (<ref>) yields
m(r)=r/2{r^2(S̅^2-2M̅ℋ)e^(M̅ℋ-S̅^2)
(r^2-ℋ^2)/ℋ^2(ℋ^2-2M̅ℋ+S̅^2)/r^2(S̅^2
-2M̅ℋ)e^(M̅ℋ-S̅^2)(r^2-ℋ^2)/ℋ^2(ℋ^2-2M̅ℋ+S̅^2)
-ℋ^2(ℋ^2-2M̅ℋ+S̅^2)}.
The increasing behavior of the mass towards the boundary for
each candidate is shown in Figure 5, indicating that all
compact objects become more massive for ζ=5 and S̅=0.2.
An increase in charge results in a less massive structure. Some
physical quantities play a significant role in the study of the
evolution of compact objects; one of them is the mass-to-radius
ratio of a star, known as compactness, given as
β(r)=m(r)/r=1/2{r^2(S̅^2-2M̅ℋ)e^(M̅ℋ-S̅^2)
(r^2-ℋ^2)/ℋ^2(ℋ^2-2M̅ℋ+S̅^2)/r^2(S̅^2
-2M̅ℋ)e^(M̅ℋ-S̅^2)(r^2-ℋ^2)/ℋ^2(ℋ^2
-2M̅ℋ+S̅^2)-ℋ^2(ℋ^2-2M̅ℋ+S̅^2)}.
Buchdahl <cit.> used the matching criteria at the hypersurface
and proposed that a feasible solution corresponding to a celestial
body must have this ratio less than 4/9 everywhere. A
massive object with sufficient gravitational pull undergoes certain
reactions and releases electromagnetic radiation. The surface
redshift quantifies the increase in the wavelength of that radiation
and is given by
D(r)=-1+1/√(1-2β(r)),
which then leads to
D(r)=-1+√(r^2(S̅^2-2M̅ℋ)e^(M̅ℋ-S̅^2)(r^2
-ℋ^2)/ℋ^2(ℋ^2-2M̅ℋ+S̅^2)+ℋ^2(2M̅ℋ
-ℋ^2-S̅^2)/ℋ^2(2M̅ℋ-ℋ^2-S̅^2)).
For a feasible star model, Buchdahl calculated its upper limit as
2 for an isotropic interior, whereas it is 5.211 for an anisotropic
configuration <cit.>. Figures 6 and 7 show the
graphs of both factors for each star, which are consistent with the
required ranges for all values of ζ and charge (Tables
4-7). Moreover, these quantities increase with
increasing bag constant and decreasing charge.
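As a quick numerical illustration (not taken from the paper), the sketch below evaluates the surface redshift from a given compactness and checks it against the quoted bounds; the compactness values are hypothetical placeholders rather than the profiles plotted in Figures 6 and 7.

# Illustrative check only: for a compactness beta(r) = m(r)/r, the surface
# redshift D = -1 + 1/sqrt(1 - 2*beta) should stay below the quoted bounds
# (2 for isotropic, 5.211 for anisotropic spheres) whenever beta < 4/9.
import math

def surface_redshift(beta):
    return -1.0 + 1.0 / math.sqrt(1.0 - 2.0 * beta)

for beta in (0.10, 0.25, 0.40):              # hypothetical compactness values
    D = surface_redshift(beta)
    print(f"beta={beta:.2f}  D={D:.3f}  "
          f"Buchdahl ok: {beta < 4/9}  anisotropic bound ok: {D < 5.211}")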
§.§ Energy Conditions
A geometrical structure may contain normal or exotic matter in its
interior. In astrophysics, some constraints depending on the state
variables, known as energy conditions, are extensively used. The
verification of these conditions confirms the existence of normal
matter in the considered star as well as the viability of the developed
solution. These bounds are given as
* Null: μ+P_+s^2/4π r^4≥ 0, μ+P_r ≥ 0,
* Weak: μ+s^2/8π r^4≥ 0, μ+P_+s^2/4π r^4≥ 0, μ+P_r ≥ 0,
* Strong: μ+2P_+P_r+s^2/4π r^4≥ 0,
* Dominant: μ-P_≥ 0, μ-P_r+s^2/4π r^4≥ 0.
We observe from the graphs of the matter variables (Figures
1-3) that they are positive. Also,
μ>P_r and μ>P_ everywhere in the domain, so the
fulfilment of all the energy conditions is obvious, contradicting
the results found in <cit.>. We therefore do not include their
plots. Consequently, we can say that our resulting solution and the
extended model (<ref>) are physically viable.
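A minimal sketch of how these bounds can be verified pointwise is given below; it is our illustration, and the matter variables supplied to it are hypothetical placeholders rather than the solution's actual profiles.

# Rough numerical sketch (not from the paper): checking the charged energy
# conditions listed above at a single radius, given already-computed
# effective matter variables. All input numbers are hypothetical.
import math

def energy_conditions(mu, Pr, Pt, s, r):
    q = s**2 / (4.0 * math.pi * r**4)        # recurring charge term s^2/(4 pi r^4)
    return {
        "NEC": mu + Pt + q >= 0 and mu + Pr >= 0,
        "WEC": mu + q / 2.0 >= 0 and mu + Pt + q >= 0 and mu + Pr >= 0,
        "SEC": mu + 2.0 * Pt + Pr + q >= 0,
        "DEC": mu - Pt >= 0 and mu - Pr + q >= 0,
    }

print(energy_conditions(mu=1.2e-3, Pr=2.0e-4, Pt=2.5e-4, s=0.05, r=5.0))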
§.§ Tolman-Oppenheimer-Volkoff Equation
The generalized 𝕋𝕆𝕍 equation has already been expressed in
Eq.(<ref>). We need to plot the different forces involved
in this equation to check whether the model is in stable equilibrium
or not <cit.>. To do this, the compact form of the
non-conservation equation in the presence of charge can be written
as
f_g+f_h+f_a=0,
where f_g, f_h and f_a are gravitational, hydrostatic and
anisotropic forces, respectively, defined as
f_g=-ρ'/2(μ+P_r),
f_h=-dP_r/dr+ss'/4π r^4,
f_a=2/r(P_-P_r).
Here, the effective matter variables are given in
Eqs.(<ref>)-(<ref>). Figure 8 exhibits the plots of
this equation, from which it can clearly be seen that the
considered quark models are in hydrostatic equilibrium.
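The following minimal Python sketch (ours, with hypothetical smooth profiles in place of the solution's effective matter variables) illustrates how the three forces can be evaluated on a radial grid and the residual of the balance checked.

# Minimal sketch: finite-difference evaluation of f_g + f_h + f_a.
# For the paper's solution this residual should vanish; the arbitrary
# placeholder profiles used here need not balance.
import numpy as np

r   = np.linspace(0.5, 7.0, 200)
mu  = 1.2e-3 * (1.0 - 0.5 * (r / 7.0)**2)    # hypothetical density profile
Pr  = 2.0e-4 * (1.0 - (r / 7.0)**2)          # hypothetical radial pressure
Pt  = Pr + 5.0e-5 * (r / 7.0)**2             # hypothetical tangential pressure
rho = 0.02 * r**2                            # hypothetical metric potential rho(r)
s   = 0.001 * r**3                           # charge s(r) = k r^3 as in the text

f_g = -0.5 * np.gradient(rho, r) * (mu + Pr)
f_h = -np.gradient(Pr, r) + s * np.gradient(s, r) / (4.0 * np.pi * r**4)
f_a = 2.0 * (Pt - Pr) / r
print("max |f_g + f_h + f_a| =", np.max(np.abs(f_g + f_h + f_a)))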
§.§ Stability Analysis
The stability criteria help to understand the composition of
astronomical structures in our universe. Here, we check the stability of
the developed solution through two techniques.
§.§.§ Herrera Cracking Technique
The causality condition <cit.> states that the speed of sound in
the tangential and radial directions must lie between 0 and 1 for a
stable structure, i.e., 0 ≤ v_s^2 < 1 and 0 ≤
v_sr^2 < 1, where
v_s^2=dP_/dμ,
v_sr^2=dP_r/dμ.
Herrera <cit.> suggested a cracking approach according to which
a stable system must meet the condition 0 ≤|
v_s^2-v_sr^2| < 1 everywhere in its interior.
Figure 9 shows that our solution is stable throughout for all
candidates.
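A minimal sketch of this check is given below; it differentiates tabulated pressures with respect to the tabulated density, and the arrays used are hypothetical stand-ins for the solution's radial profiles rather than data from the paper.

# Sketch only: v_st^2 = dP_t/dmu and v_sr^2 = dP_r/dmu are obtained by
# numerical differentiation, then causality and the cracking condition
# 0 <= |v_st^2 - v_sr^2| < 1 are tested.
import numpy as np

mu = np.linspace(1.2e-3, 4.0e-4, 100)        # density decreasing outwards
Pr = 2.0e-4 * (mu / mu[0])**1.8              # hypothetical radial pressure
Pt = 2.4e-4 * (mu / mu[0])**1.7              # hypothetical tangential pressure

v_sr2 = np.gradient(Pr, mu)
v_st2 = np.gradient(Pt, mu)
print("causality:",
      np.all((v_sr2 >= 0) & (v_sr2 < 1) & (v_st2 >= 0) & (v_st2 < 1)))
print("Herrera cracking condition satisfied:",
      np.all(np.abs(v_st2 - v_sr2) < 1))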
§.§.§ Adiabatic Index
Another approach to check the stability is the adiabatic index
(Γ). Several researchers <cit.> studied the
stability of self-gravitating structures by utilizing this concept
and concluded that stable models must have its value not less than
4/3 everywhere. Here, Γ is defined as
Γ=μ+P_r/P_r(dP_r/dμ)=μ+P_r/P_r(v_sr^2).
To overcome problems such as the occurrence of dynamical
instabilities inside the star, Moustakidis <cit.> recently
proposed a critical value of the adiabatic index depending on
certain parameters as
Γ_Crit=4/3+19/21β(r),
where the condition Γ≥Γ_Crit ensures the stability
of the compact structure. This condition has also been discussed for
decoupled class-one solutions <cit.>. Figures 10
and 11 depict the plots of Γ and Γ_Crit
for different values of charge corresponding to each quark star. We
observe that the criterion of this approach is fulfilled and thus
all the candidates show stable behavior.
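For completeness, the comparison of Γ with Γ_Crit can be scripted as in the following minimal sketch; the profiles are hypothetical placeholders and the sketch is only an illustration of the criterion, not the computation behind Figures 10 and 11.

# Hedged illustration: Gamma = (mu + P_r)/P_r * v_sr^2 is compared with
# Gamma_crit = 4/3 + (19/21) * beta(r). All profiles are hypothetical.
import numpy as np

r    = np.linspace(0.5, 7.0, 100)
mu   = 1.2e-3 * (1.0 - 0.5 * (r / 7.0)**2)
Pr   = 2.0e-4 * (1.0 - (r / 7.0)**2) + 1e-6  # small offset avoids division by 0
vsr2 = np.clip(np.gradient(Pr, r) / np.gradient(mu, r), 0.0, 1.0)
beta = 0.4 * (r / 7.0)**2                    # hypothetical compactness profile

gamma      = (mu + Pr) / Pr * vsr2
gamma_crit = 4.0 / 3.0 + 19.0 / 21.0 * beta
print("stable everywhere:", np.all(gamma >= gamma_crit))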
§ FINAL REMARKS
In this paper, we have studied the influence of matter-geometry
coupling through the model ℛ+ζ𝒬 on four
charged anisotropic compact stars for the coupling constant
ζ=±5. We have adopted the matter Lagrangian proposed by
Haghani et al <cit.> which turns out to be
𝕃_m=s^2/2r^4. We have formulated the
corresponding equations of motion and non-conservation equation. We
have used the temporal metric function (<ref>) to determine the
radial metric potential (<ref>) through embedding class-one
condition and then found the solution (<ref>)-(<ref>) of the
modified field equations. The four unknowns (C_1,C_2,C_3,C_4) have
been determined at the hypersurface with the help of observed mass
and radius of each celestial object. We have used the preliminary
information of four compact stars, i.e., SAX J 1808.4-3658, 4U
1820-30, SMC X-4 and Her X-I (Table 1) to calculate
constants for different values of charge (Tables 2 and
3) as well as bag constant with respect to different
choices of ζ. We have found that the solution with respect to
each star is physically acceptable as state variables are maximum
(minimum) at the center (boundary). The mass of strange stars
exhibits increasing behavior for the given values of charge, bag
constant and ζ (Figure 5).
It is found that increasing the coupling constant and
decreasing the charge (i.e., ζ=5 and S̅=0.2) produce
denser interiors in this modified gravity. The compactness and
redshift parameters also show acceptable behavior (Figures
6 and 7). We have found that our developed
solution is viable and that the stellar models contain normal matter.
Finally, we have checked the hydrostatic equilibrium condition and the
stability of the resulting solution through two criteria. We
conclude that our solution shows stable behavior for all the considered
models, for both values of charge as well as the considered
range of ζ (Figure 9). The adiabatic index and its
critical value also confirm their stability (Figures 10
and 11). These results are observed to be consistent with
<cit.>. It is worthwhile to mention here that all our results
reduce to 𝔾ℝ by choosing ζ=0.
§ APPENDIX A
The explicit expressions of the matter
variables are deduced from Eqs.(<ref>)-(<ref>) as
μ =-[4 r^4 ((χ _3 (ζχ _6+8 π
e^α)-ζχ _2 χ _7) (ζ ^2 χ _3
χ _5+ζχ _1 (ζχ _10+8 π
e^α)-8 π e^α
×(ζχ
_10+8 π e^α))+ζ(ζχ _3 χ
_5+χ _7 (ζχ _1-8 π e^α)) (ζχ _3 χ _9+χ _2 (ζχ _10
+8 π
e^α)))]^-1[4 ζ(ζχ _3
χ _9+χ _2 (ζχ _10+8 π e^α))
(χ _7 (r^2 (r
α'+e^α-1)
+ζ s^2 χ _4)+χ
_3 (r^2 (-e^α+r ρ '+1)+ζ s^2 χ
_8))+(χ _3 (ζχ _6+8 π e^α)-ζχ _2 χ _7)
(-ζ r^4 χ
_3 α 'ρ '+2 ζ r^4 χ _3 ρ”+ζ r^4 χ _3
ρ '^2-2 ζ r^3 χ _3 α '+32 π r^3 e^αα
'+2 ζ r^3 χ _3 ρ '
+4 ζ r^2 χ _10(r α '+e^α-1)-32 π r^2 e^α+32 π r^2
e^2 α+4 ζ s^2 χ _4 (ζχ _10+8 π
e^α)
+4 ζ ^2 s^2 χ _3 χ
_11)],
P_r =[4 r^4 (ζ ^3 (-χ
_3) χ _5 χ _6-ζ ^3 χ _3 χ _5 χ _9-8 πζ
^2 χ _3 χ _5 e^α+8 πζ ^2 χ _7 χ _9
e^α+ζ ^2 χ _2 χ _5
×(ζχ _7-ζχ _10-8 π e^α)+8 πζ ^2 χ
_6 χ _10 e^α-ζχ _1 (ζ ^2 χ _7 χ
_9+ζχ _6 (ζχ _10+8 π
e^α)
+8 π e^α(ζχ _10+8
π e^α))+64 π ^2 ζχ _6 e^2 α+64
π ^2 ζχ _10 e^2 α+512 π ^3 e^3
α)]^-1
×[ζχ _5 (4
(-ζχ _7+ζχ _10+8 π e^α)
(r^2 (r α '+e^α-1)+ζ s^2 χ
_4)-ζχ _3 (r^2
×(-2 r^2 ρ”-r^2ρ '^2+r α ' (r ρ '+2)-4 e^α+2 r
ρ '+4)+4 ζ s^2 χ _8-4 ζ s^2 χ
_11))
-ζχ _1 (ζ r^4 χ _7
α 'ρ '-2 ζ r^4 χ _7 ρ”-ζ r^4 χ _7 ρ
'^2+2 ζ r^3 χ _7 α '+32 π r^3 e^αρ '-2
ζ r^3 χ _7 ρ '
+4 ζ r^2 χ _10(-e^α+r ρ '+1)+32 π r^2 e^α-32 π r^2
e^2 α+4 ζ s^2 χ _8 (ζχ _10+8 π
e^α)
-4 ζ ^2 s^2 χ _7 χ
_11)-8 π e^α(-ζ r^4 χ _7 α ' ρ
'+2 ζ r^4 χ _7 ρ”+ζ r^4 χ _7 ρ '^2-2 ζ r^3
χ _7 α '
-32 π r^3 e^αρ '+2 ζ
r^3 χ _7 ρ '-4 ζ r^2 χ _10(-e^α+r ρ
'+1)-4 ζ s^2 χ _8 (ζχ _10+8 π e^α)
-32 π r^2 e^α+32 π r^2 e^2 α+4 ζ ^2 s^2 χ _7 χ _11)],
P_ =[4 r^4 (ζ ^3 (-χ _3) χ _5 χ
_6-ζ ^3 χ _3 χ _5 χ _9-8 πζ ^2 χ _3 χ _5
e^α+8 πζ ^2 χ _7 χ _9 e^α+ζ ^2 χ
_2 χ _5
×(ζχ _7-ζχ _10-8
π e^α)+8 πζ ^2 χ _6 χ _10
e^α-ζχ _1 (ζ ^2 χ _7 χ _9+ζχ _6
(ζχ _10+8 π e^α)
+8 π
e^α(ζχ _10+8 π e^α))+64 π
^2 ζχ _6 e^2 α+64 π ^2 ζχ _10 e^2
α+512 π ^3 e^3 α)]^-1
×[ζχ _5 (ζχ _2 (r^2 (-2 r^2 ρ”+r^2 (-ρ '^2)+r α '(r ρ '+2)-4
e^α+2 r ρ '+4)
+4 ζ s^2 χ _8-4
ζ s^2 χ _11)+4 (ζχ _6+ζχ _9+8 π
e^α) (r^2 (r α '+e^α-1)+ζ
s^2 χ _4))
+(8 π e^α-ζχ
_1) ((ζχ _6+8 π e^α) (r^3
(-α ' (r ρ '+2)+2 r ρ”+r ρ '^2+2 ρ
')
+4 ζ s^2 χ _11)+4 ζχ _9
(r^2 (-e^α+r ρ '+1)+ζ s^2 χ
_8))],
where
χ_1 =3ρ'α'/8-ρ'^2/8
+α'/r+e^α/r^2-3ρ”/4-3ρ'/2r-1/r^2,
χ_2 =ρ'α'/8-ρ'^2/8-ρ”/4+α'/2r+α”/2
-3α'^2/4,
χ_3 =α'/2r-ρ'/2r+3e^α/r^2-1/r^2,
χ_4 =α'/2r-e^α/2r^2+1/2r^2+ρ'α'/8
-ρ'^2/8-ρ”/4-e^α/ζ,
χ_5 =ρ'α'/8+ρ'^2/8-ρ”/4-ρ'/2r,
χ_6 =5ρ'^2/8-7ρ'α'/8+5ρ”/4-7α'/2r+ρ'/r-α'^2
-e^α/r^2+1/r^2,
χ_7 =α'/2r-ρ'/2r+3e^α/r^2-1/r^2,
χ_8 =ρ'/2r+e^α/2r^2
-1/2r^2+ρ”/4+ρ'^2/8-ρ'α'/8+e^α/ζ,
χ_9 =ρ'^2/8+3α'^2/4-ρ'α'/8+ρ”/4-α'/2r
-α”/2,
χ_10 =ρ'^2/4-ρ'α'/4+ρ”/2-α'/r+ρ'/r,
χ_11 =ρ'α'/8-ρ'^2/8-ρ”/4
+α'/4r-ρ'/4r-e^α/ζ.
§ APPENDIX B
Equations (<ref>)-(<ref>) in
terms of constants take the form as
μ =[r^4{16π(16C_2^2C_1C_3r^2e^2C_2r^2+1)^3-ζ
C_2(4096C_2^5C_1^2C_3^2r^4e^4C_2r^2(2C_1C_3
×
e^2C_2r^2+r^2)+1024C_2^4C_1^2C_3^2r^4e^4C_2r^2+64C_2^3C_1C_3r^2e^2C_2r^2(44C_1C_3e^2C_2r^2
+3r^2)-272C_2^2C_1C_3r^2e^2C_2r^2
+2C_2(76C_1C_3e^2C_2r^2-3r^2)-23)}]^-1
×[4096𝔅C_2^6C_1^2C_3^2r^8e^4C_2r^2(2C_1C_3e^2C_2r^2(8π
r^2-ζ)-ζ r^2)-512C_2^5C_1^2C_3^2
×
r^4e^4C_2r^2(r^4(20ζ𝔅-6)-3ζ
s^2)+128C_2^4C_1^2 C_3^2r^2e^4C_2r^2{3ζ
s^2+96π𝔅r^6
+r^4(6-40ζ𝔅)}-16C_2^3C_1C_3r^2e^2C_2r^2(2r^4(14ζ𝔅-9)-9ζ
s^2)+8C_2^2
×{C_1C_3e^2C_2r^2(3ζ
s^2+96π𝔅r^6+r^4(6-40ζ𝔅))+3ζ𝔅r^6}+C_2{3ζ
s^2
+r^4(20ζ𝔅+6)}+16π𝔅r^4],
P_r =-[r^4{16π(16C_2^2C_1C_3r^2e^2C_2r^2+1)^3-ζ
C_2(4096C_2^5C_1^2C_3^2r^4e^4C_2r^2(2C_1
×
C_3e^2C_2r^2+r^2)+1024C_2^4C_1^2C_3^2r^4e^4C_2r^2+64C_2^3C_1C_3r^2e^2C_2r^2(44C_1e^2C_2r^2
× C_3+3r^2)-272C_2^2C_1C_3r^2e^2C_2r^2
+2C_2(76C_1C_3e^2C_2r^2-3r^2)-23)}]^-1
×[(16r^2
C_2^2C_1C_3e^2C_2r^2+1)(256C_2^4𝔅C_1C_3r^6e^2C_2r^2(2C_1e^2C_2r^2(8π
r^2-ζ)
× C_3-ζ
r^2)+32C_2^3C_1C_3r^2e^2C_2r^2(r^4(4ζ𝔅-2)-ζ s^2)+8C_2^2C_1C_3e^2C_2r^2
×(64π𝔅r^6-ζ
s^2-2r^4(6ζ𝔅+1))+C_2(r^4(24ζ𝔅-2)-ζ
s^2)+16π𝔅r^4)],
P_ =[r^4(16C_2^2C_1C_3r^2e^2C_2r^2+1)^2(4π(16C_2^2C_1C_3r^2e^2C_2r^2+1)^2-ζ
C_2(8C_2C_1
×
C_3e^2C_2r^2-1)(16C_2^2C_1C_3r^2e^2C_2r^2+2C_2r^2+3))(16π(16C_2^2C_1C_3r^2e^2C_2r^2
+1)^3-ζ
C_2(4096C_2^5C_1^2C_3^2r^4e^4C_2r^2(2C_1C_3e^2C_2r^2+r^2)+1024C_2^4C_1^2C_3^2r^4
× e^4C_2r^2+64C_2^3C_1C_3r^2e^2C_2r^2(44C_1C_3e^2
C_2r^2+3r^2)+2(76C_1C_3e^2C_2r^2-3r^2)
×
C_2-272C_2^2C_1C_3r^2e^2C_2r^2-23))]^-1[-67108864
C_1^5 C_3^5 e^10 C_2 r^2 r^10(2 C_1 C_3
×
e^2 C_2 r^2(8 π r^2-ζ)-r^2 ζ) (ζ𝔅 r^6+2 C_1 C_3 e^2 C_2 r^2 s^2 (8 π r^2-ζ)) C_2^14-C_1^5 C_3^5
× 8388608 e^10
C_2 r^2 r^10ζ(2 (3 ζ𝔅 +80 C_1 C_3
e^2 C_2 r^2π𝔅 -3) r^6-20 C_1 C_3 e^2 C_2 r^2ζ r^4
×𝔅 -s^2 (32 C_1 e^2 C_2
r^2π C_3+3 ζ) r^2+4 C_1 C_3 e^2 C_2 r^2 s^2 ζ) C_2^13-1048576 C_1^4 C_3^4
× e^8 C_2 r^2
r^6 (ζ((2-4 ζ𝔅 ) r^6-2 (1+4 π ) s^2
ζ r^2+s^2 ζ ^2) r^4+2 C_1 C_3 e^2 C_2 r^2(64 π
^2
× s^2 ζ r^4+8 π(2 r^8 (5 ζ𝔅 -1)-17 r^4 s^2 ζ)-ζ(2 (9 ζ𝔅 +5) r^6+s^2 ζ ^2-17ζ
× s^2
r^2)) r^2+8 C_1^2 C_3^2 e^4 C_2 r^2(8 π r^2-ζ) ((6 ζ𝔅 +2) r^6+112 π s^2 r^4-s^2 ζ
r^2
×(21+8 π )+s^2 ζ ^2 ))
C_2^12-262144 C_1^4 C_3^4 e^8 C_2 r^2 r^6 (ζ((94
ζ𝔅 -26) r^6-8
× (4+5 π ) s^2
ζ r^2+5 s^2 ζ ^2) r^2+4 C_1 C_3 e^2 C_2 r^2(128
π ^2 s^2 ζ r^4+ζ(-(44 ζ𝔅
+5)
× r^6-8 s^2 ζ r^2+s^2 ζ ^2)+4 π((68 ζ𝔅 -8) r^8+13 s^2 ζ r^4-6 s^2 ζ ^2
r^2))) C_2^11
-16384 C_1^2 C_3^2 e^4 C_2
r^2 r^4 (-s^2 ζ ^3 r^6+C_1 C_3 e^2 C_2 r^2ζ(22
r^6-2 (11+36 π ) s^2 ζ r^2
+17 s^2 ζ ^2)
r^4+4 C_1^2 C_3^2 e^4 C_2 r^2(640 π ^2 s^2 ζ r^4-8 π(20 r^8+49 s^2 ζ r^4+22 s^2 ζ ^2 r^2)
+ζ(4 (2 ζ𝔅 +7) r^6+47 s^2 ζ r^2+8 s^2
ζ ^2)) r^2+16 C_1^3 C_3^3 e^6 C_2 r^2(128 π ^2
s^2 (42 r^2
-5 ζ) r^4+8 π(20 (2
ζ𝔅 +1) r^8-231 s^2 ζ r^4+27 s^2 ζ ^2
r^2)-ζ(4 (13 ζ𝔅 +9)
×
r^6-154 s^2 ζ r^2+17 s^2 ζ^2))) C_2^10-4096
C_1^2 C_3^2 e^4 C_2 r^2 r^4 (-11 s^2 ζ ^3 r^4+C_1
C_3
× e^2 C_2 r^2ζ(2 (274 ζ𝔅 -51) r^6-40 (5+3 π ) s^2 ζ r^2+63 s^2 ζ
^2) r^2+4 C_1^2 C_3^2 e^4 C_2 r^2
×(2560
π ^2 s^2 ζ r^4+8 π(80 (2 ζ𝔅 -1) r^8+254
s^2 ζ r^4-129 s^2 ζ ^2 r^2)+ζ(-6
× (32 ζ𝔅 -23) r^6-340 s^2 ζ r^2+85 s^2
ζ^2))) C_2^9-256 C_1 C_3 e^2 C_2 r^2 r^2 (-3
s^2 ζ ^3
× r^6-2 C_1 C_3 e^2 C_2 r^2ζ((44 ζ𝔅 -34) r^6+2 (17+20 π) s^2 ζ r^2+69
s^2 ζ ^2) r^4-16
× C_1^2 C_3^2 e^4 C_2
r^2(-1280 π ^2 s^2 ζ r^4-ζ((142-44 ζ𝔅 ) r^6+66 s^2 ζ r^2+35 s^2 ζ ^2)
+16 π(20 (ζ𝔅 +1) r^8+8 s^2 ζ r^4+17 s^2
ζ ^2 r^2)) r^2+32 C_1^3 C_3^3 e^6 C_2 r^2(2560
π ^2 s^2
×(7 r^2-ζ) r^4+8 π(80 (ζ𝔅 +1) r^8-768 s^2 ζ r^4+115 s^2 ζ
^2 r^2)-ζ(2 (44 ζ𝔅
+75)
r^6-514 s^2 ζ r^2+85 s^2 ζ ^2))) C_2^8-64 C_1
C_3 e^2 C_2 r^2 r^2 (-13 s^2 ζ ^3 r^4+4 C_1 C_3
× e^2 C_2 r^2ζ(50 (4 ζ𝔅 -3) r^6+4
(-13+158 π ) s^2 ζ r^2-177 s^2 ζ ^2) r^2+32 C_1^2
C_3^2
× e^4 C_2 r^2(2560 π ^2 s^2 ζ
r^4+ζ((241-122 ζ𝔅 ) r^6-396 s^2 ζ
r^2+155 s^2 ζ ^2)+4 π
×(80 (ζ𝔅 -2) r^8+596 s^2 ζ r^4-319 s^2 ζ ^2
r^2))) C_2^7-8 (3 s^2 ζ ^3 r^6+8 C_1 C_3 e^2
C_2 r^2
×ζ ^2 (-28 𝔅 r^6+48
π s^2 r^2+9 s^2 ζ) r^4+32 C_1^2 C_3^2 e^4 C_2 r^2(1280 π ^2 s^2 ζ r^4+ζ
×((82-174
ζ𝔅 ) r^6+160 s^2 ζ r^2-123 s^2 ζ ^2)-8
π(40 (ζ𝔅 +1) r^8-26 s^2r^4
ζ -33 s^2 ζ ^2 r^2)) r^2+64 C_1^3 C_3^3 e^6 C_2
r^2(2560 π ^2 s^2 (7 r^2-ζ) r^4+32 π(20
r^8-173
× s^2 ζ r^4+25 s^2 ζ ^2
r^2)+ζ(4 (10 ζ𝔅 -29) r^6+400 s^2 ζ
r^2-57 s^2 ζ ^2))) C_2^6-8
×(19 s^2 ζ ^3 r^4+4 C_1 C_3 e^2 C_2 r^2ζ(5 (2
ζ𝔅 -17) r^6-56 s^2 ζ ^2+4s^2 ζ (13+123 π
)
× r^2 ) r^2+16 C_1^2 C_3^2 e^4 C_2 r^2(2560 π^2 s^2 ζ r^4+ζ((235-208 ζ𝔅
) r^6-366 s^2 ζ r^2
+120 s^2 ζ ^2)+4 π(40 (ζ𝔅 -4) r^8+644s^2 ζ r^4-299 s^2 ζ
^2 r^2))) C_2^5-2 (32 C_1^2
× e^4
C_2 r^2(128 π ^2 r^2 (42 r^2-5 ζ) s^2-4
π(40 (ζ𝔅 -1) r^6+324 s^2 ζ r^2-31 s^2
ζ ^2)
+ζ(3 (8 ζ𝔅 -5)
r^4+59 s^2 ζ)) C_3^2+4C_1 e^2 C_2 r^2(1280 π
^2 s^2 ζ r^4-32 π(2 (3 ζ𝔅
+5) r^8-13 s^2 ζ r^4-40 s^2 ζ ^2 r^2)- (4 (26 ζ𝔅 +25) r^6-376 s^2 ζ r^2+321 s^2 ζ
^2)
×ζ) C_3+r^2 ζ(-6 (2 ζ𝔅 +1) r^6+2 (3+28 π ) s^2 ζ r^2+133 s^2 ζ
^2)) C_2^4-2 (ζ
×((20 ζ𝔅 -33) r^6+2 (15+98 π ) s^2 ζ r^2+69 s^2 ζ
^2)+4 C_1 C_3 e^2 C_2 r^2(1280 π ^2 r^2
×ζ s^2+ζ(r^4 (73-114 ζ𝔅 )-120
s^2 ζ)+ (40 (ζ𝔅 -2) r^6+338 s^2 ζ
r^2-97
× s^2 ζ ^2)4 π))
C_2^3-(128 π ^2 r^2 ζ s^2-8 π(4 r^6-7 s^2 ζ
r^2-35 s^2 ζ^2)+32 π e^2 C_2 r^2
× C_1
C_3((4-8 ζ𝔅 ) r^4+224 π s^2 r^2-(31+16 π )
s^2 ζ)+ ((42 ζ𝔅 -38)
r^4+s^2
×73 ζ)ζ) C_2^2-4 π(8 (ζ𝔅 -1) r^4+(35+32 π ) s^2 ζ)
C_2-64 π ^2 s^2].
§ APPENDIX C
The resulting solution
(<ref>)-(<ref>) produces the anisotropy as
Δ =[r^4(16C_2^2C_1C_3r^2e^2C_2r^2+1)^2(16π(16C_2^2C_1C_3r^2e^2C_2r^2+1)^3-ζ
C_2(4096
×
r^4C_2^5C_1^2C_3^2e^4C_2r^2(2C_1C_3e^2C_2r^2+r^2)+1024C_2^4C_1^2C_3^2r^4e^4C_2r^2+64C_2^3C_1C_3
× r^2e^2C_2r^2(44C_1C_3e^2C_2r^2+3r^2)-272
C_2^2C_1C_3r^2e^2C_2r^2+2(76C_1C_3e^2C_2r^2
-3r^2)C_2-23))]^-1[(16C_2^2C_1C_3r^2e^2C_2r^2+1)^3
(256C_2^4𝔅C_1C_3r^6e^2C_2r^2(C_1C_3
×2e^2C_2r^2(8π r^2-ζ)-ζ
r^2)+32C_2^3C_1C_3r^2e^2C_2r^2(r^4(4ζ𝔅-2)-ζ
s^2)+8C_2^2
×
C_1C_3e^2C_2r^2(64π𝔅r^6-2r^4(6ζ𝔅+1)-ζ
s^2)+C_2(r^4(24ζ𝔅-2)-ζ
s^2)
+16π
r^4𝔅)-{4π(16C_2^2C_1C_3r^2e^2C_2r^2+1)^2-ζ
C_2(8C_2C_1C_3e^2C_2r^2-1)(C_2^2
×
16C_1C_3r^2e^2C_2r^2+2C_2r^2+3)}^-1{67108864
C_1^5 C_3^5 e^10 C_2 r^2 r^10(2 C_1 C_3 e^2 C_2
r^2
×(8 π r^2-ζ)-r^2 ζ)
(ζ𝔅 r^6+2 C_1 C_3 e^2 C_2 r^2 s^2 (8 π
r^2-ζ)) C_2^14+8388608 C_1^5
×
C_3^5 e^10 C_2 r^2r^10ζ(2 (3 ζ𝔅
+80 C_1 C_3 e^2 C_2 r^2π𝔅 -3) r^6-20 C_1 C_3
e^2 C_2 r^2ζ𝔅 r^4
-s^2 (32 C_1 e^2
C_2 r^2π C_3+3 ζ) r^2+4 C_1 C_3 e^2 C_2 r^2 s^2 ζ) C_2^13+1048576 C_1^4 C_3^4 e^8 C_2 r^2
×
r^6 (ζ(r^6(2-4 ζ𝔅 ) -2 (1+4 π ) s^2
ζ r^2+s^2 ζ ^2) r^4+2 C_1 C_3 e^2 C_2 r^2(64 π
^2 s^2 ζ r^4
+8 π(2 r^8 (5 ζ𝔅 -1)-17 r^4 s^2 ζ)-ζ(2 (9 ζ𝔅 +5) r^6-17 s^2 ζ r^2+s^2 ζ ^2))
r^2
+8 C_1^2 C_3^2 e^4 C_2 r^2(8 π r^2-ζ) ((6 ζ𝔅 +2) r^6+112 π s^2 r^4-(21+8 π )
s^2 ζ r^2+s^2
×ζ
^2))C_2^12+262144 C_1^4 C_3^4 e^8 C_2 r^2 r^6 (ζ
r^2 ((94 ζ𝔅 -26) r^6+5 s^2 ζ ^2-8s^2 ζ
r^2 (5π
+4)) +4 C_1 C_3 e^2 C_2 r^2(128 π
^2 s^2 ζ r^4+ζ(-(44 ζ𝔅 +5) r^6-8 s^2
ζ r^2+s^2 ζ ^2)
+4 π((68 ζ𝔅 -8) r^8+13 s^2 ζ r^4-6 s^2 ζ ^2
r^2))) C_2^11+16384 C_1^2 C_3^2 e^4 C_2 r^2
r^4
×(-s^2 ζ ^3 r^6+C_1 C_3 e^2 C_2 r^2ζ(22 r^6-2 (11+36 π ) s^2 ζ r^2+17 s^2 ζ ^2)
r^4+4 C_1^2 C_3^2
× e^4 C_2 r^2(640 π ^2
s^2 ζ r^4-8 π(20 r^8+49 s^2 ζ r^4+22 s^2 ζ ^2
r^2)+ζ(4 (2 ζ𝔅 +7) r^6
+47
s^2 ζ r^2+8 s^2 ζ ^2)) r^2+16 C_1^3 C_3^3 e^6 C_2
r^2(128 π ^2 s^2 (42 r^2-5 ζ) r^4+8 π(20
(2 ζ
×𝔅 +1) r^8-231 s^2 ζ
r^4+27 s^2 ζ ^2 r^2)-ζ(4 (13 ζ𝔅
+9) r^6-154 s^2 ζ r^2+17
× s^2
ζ^2))) C_2^10+4096 C_1^2 C_3^2 e^4 C_2 r^2 r^4
(-11 s^2 ζ ^3 r^4+C_1 C_3 e^2 C_2 r^2ζ(2r^6 (274
ζ𝔅
-51) -40 (5+3 π ) s^2 ζ r^2+63
s^2 ζ ^2) r^2+4 C_1^2 C_3^2 e^4 C_2 r^2(2560 π ^2
s^2 ζ r^4+8 π(80
× (2 ζ𝔅
-1) r^8+254 s^2 ζ r^4-129 s^2 ζ ^2 r^2)+ζ(-6
(32 ζ𝔅 -23) r^6-340 s^2
×ζ
r^2+85 s^2 ζ^2))) C_2^9+256 C_1 C_3 e^2 C_2 r^2
r^2 (-3 s^2 ζ ^3 r^6-2 C_1 C_3 e^2 C_2 r^2ζ((44
ζ𝔅
-34) r^6+2 (17+20 π) s^2 ζ
r^2+69 s^2 ζ ^2) r^4-16 C_1^2 C_3^2 e^4 C_2 r^2(-1280
π ^2 s^2 ζ r^4
-ζ((142-44 ζ𝔅 ) r^6+66s^2 ζ r^2+35 s^2 ζ ^2)+16 π(20 (ζ𝔅 +1) r^8+8 s^2 ζ r^4
+17
s^2 ζ ^2 r^2)) r^2+32 C_1^3 C_3^3e^6 C_2 r^2(2560
π ^2 s^2 (7 r^2-ζ) r^4+8 π(80 (ζ𝔅 +1) r^8
-768 s^2 ζ r^4+115 s^2 ζ ^2
r^2)-ζ(2 (44 ζ𝔅 +75) r^6-514 s^2 ζ
r^2+85 s^2 ζ ^2))) C_2^8
+64 C_1 C_3
e^2 C_2 r^2 r^2 (-13s^2 ζ ^3 r^4+4 C_1 C_3 e^2 C_2 r^2ζ(50 (4 ζ𝔅 -3) r^6+(158π-13)
× 4s^2 ζ r^2-177 s^2 ζ ^2) r^2+32 C_1^2 C_3^2e^4
C_2 r^2(2560 π ^2 s^2 ζ r^4+ζ((241-122 ζ𝔅 ) r^6
-396 s^2 ζ r^2+155 s^2 ζ
^2)+4 π(80(ζ𝔅 -2) r^8+596 s^2 ζ
r^4-319 s^2 ζ ^2 r^2))) C_2^7
+8 (3
s^2 ζ ^3 r^6+8 C_1 C_3 e^2 C_2 r^2ζ ^2(-28
𝔅 r^6+48 π s^2 r^2+9 s^2 ζ) r^4+32 e^4 C_2
r^2 C_1^2
× C_3^2 (1280 π ^2 s^2 ζ
r^4+ζ((82-174 ζ𝔅 ) r^6+160 s^2 ζ
r^2-123 s^2 ζ ^2)-8 π(40 (ζ
×𝔅 +1) r^8-26 s^2 ζ r^4-33 s^2 ζ ^2
r^2))r^2+64 C_1^3 C_3^3 e^6 C_2 r^2(2560 π ^2 s^2
(7 r^2-ζ)
× r^4+32 π(20 r^8-173
s^2 ζ r^4+25 s^2 ζ ^2r^2)+ζ(4 (10 ζ𝔅 -29) r^6+400 s^2 ζ r^2
-57 s^2 ζ
^2))) C_2^6+8 (19 s^2 ζ ^3 r^4+4 C_1 C_3e^2 C_2
r^2ζ(5 (2 ζ𝔅 -17) r^6+4s^2 ζ r^2
(13
+123 π )-56 s^2 ζ ^2) r^2+16 C_1^2 C_3^2
e^4 C_2 r^2(2560 π ^2 s^2 ζ r^4+ζ((235-208
ζ𝔅 ) r^6
-366 s^2 ζ r^2+120 s^2
ζ ^2)+4 π(40 (ζ𝔅 -4) r^8+644s^2 ζ
r^4-299 s^2 ζ ^2 r^2))) C_2^5
+2 (32
C_1^2 e^4 C_2 r^2(128 π ^2 r^2 (42 r^2-5 ζ)
s^2-4 π(40 (ζ𝔅 -1) r^6+324 s^2 ζ
r^2
-31 s^2 ζ ^2)+ζ(3 (8 ζ𝔅 -5) r^4+59 s^2 ζ)) C_3^2+4C_1 e^2 C_2
r^2(1280 π ^2 s^2 ζ r^4-32 π
×(2
(3 ζ𝔅 +5) r^8-13 s^2 ζ r^4-40 s^2 ζ ^2
r^2)-ζ(4(26 ζ𝔅 +25) r^6-376 s^2 ζ
r^2
+321 s^2 ζ ^2)) C_3+r^2 ζ(-6 (2
ζ𝔅 +1) r^6+2 (3+28 π ) s^2 ζ r^2+133 s^2 ζ
^2)) C_2^4
+2 (ζ((20 ζ𝔅 -33) r^6+2 (15+98 π ) s^2 ζ r^2+69 s^2 ζ
^2)+4 C_1 C_3e^2 C_2 r^2(1280 π ^2
×
r^2 ζ s^2+ζ(r^4 (73-114 ζ𝔅 )-120 s^2
ζ)+4 π(40 (ζ𝔅 -2) r^6+338 s^2 ζ
r^2
-97 s^2 ζ ^2))) C_2^3+(128 π
^2 r^2 ζ s^2-8 π(4 r^6-7 s^2 ζ r^2-35 s^2 ζ
^2)+32 C_1 C_3 e^2 C_2 r^2
×π((4-8
ζ𝔅 ) r^4+224 π s^2 r^2-(31+16 π ) s^2 ζ)+ζ((42 ζ𝔅 -38) r^4+73
s^2
×ζ)) C_2^2+4 π(8 (ζ𝔅-1) r^4+(35+32 π ) s^2 ζ) C_2+64 π ^2
s^2}].
1 Buchdahl H A 1970 Mon. Not. R. Astron. Soc.
150 1
2 Nojiri S and Odintsov S D 2003 Phys. Rev. D
68 123512
2b Song Y S, Hu W and Sawicki I 2007 Phys.
Rev. D 75 044004
2d Sharif M and Yousaf Z 2013 Mon. Not.
R. Astron. Soc. 434 2529
2f Astashenok A V, Capozziello S and
Odintsov S D 2014 Phys. Rev. D 89 103509
10 Bertolami O et al 2007 Phys. Rev. D 75 104016
20 Harko T et al 2011 Phys. Rev. D 84 024020
21 Sharif M and Zubair M 2013 J. Exp. Theor. Phys.
117 248
21a Shabani H and Farhoudi M 2013 Phys. Rev. D
88 044048
21e Sharif M and Siddiqa A 2017 Eur. Phys. J.
Plus 132 529
21f Das A et al 2017 Phys. Rev. D
95 124011
22 Haghani Z et al 2013 Phys. Rev. D 88 044023
22a Sharif M and Zubair M 2013 J. Cosmol. Astropart. Phys. 11 042
22b Sharif M and Zubair M 2013 J. High Energy Phys. 12 079
23 Odintsov S D and Sáez-Gómez D 2013 Phys. Lett. B 725 437
25 Baffou E H, Houndjo M J S and Tosssa J 2016 Astrophys. Space Sci. 361 376
25a Sharif M and Waseem A 2016 Eur. Phys. J. Plus
131 190
25a1 Sharif M and Waseem A 2016 Can. J. Phys. 94 1024
26 Yousaf Z, Bhatti M Z and Naseer T 2020 Eur. Phys. J.
Plus 135 353
26a Yousaf Z, Bhatti M Z and Naseer T 2020 Phys. Dark
Universe 28 100535
26b Yousaf Z, Bhatti M Z and Naseer T 2020 Int. J. Mod. Phys. D 29 2050061
26c Yousaf Z, Bhatti M Z and Naseer T 2020 Ann.
Phys. 420 168267
26d Yousaf Z et al 2020 Phys. Dark Universe 29 100581
26e Yousaf Z et al 2020 Mon. Not.
R. Astron. Soc. 495 4334
27 Sharif M and Naseer T 2021 Chin. J. Phys.
73 179
27a1 Sharif M and Naseer T 2022 Phys. Scr. 97 055004
27a1a Sharif M and Naseer T 2022 Pramana 96
119
27a2 Sharif M and Naseer T 2022 Int. J. Mod. Phys. D
31 2240017
27a3 Naseer T and Sharif M 2022 Universe 8 62
27aa Sharif M and Naseer T 2022 Chin. J. Phys.
77 2655
27aaa Sharif M and Naseer T 2022 Eur. Phys. J. Plus
137 947
27a Das B et al 2011 Int. J. Mod. Phys. D 20 1675
27b Sunzu J M, Maharaj S D and Ray S 2014 Astrophys. Space Sci. 352 719
27e Gupta Y K and Maurya S K 2011 Astrophys.
Space Sci. 332 155
27h Sharif M and Sadiq S 2016 Eur. Phys. J. C 76 568
27i Sharif M and Majid A 2021 Phys. Dark Universe 32 100803
33a Bordbar G H and Peivand A R 2011 Res. Astron. Astrophys. 11 851
33b Haensel P, Zdunik J L and Schaefer R 1986 Astron.
Astrophys. 160 121
34 Cheng K S, Dai Z G and Lu T 1998 Int. J.
Mod. Phys. D 7 139
34a Mak M K and Harko T 2002 Chin. J.
Astron. Astrophys. 2 248
34b Demorest P B et al 2010 Nature 467 1081
35 Rahaman F et al 2014 Eur. Phys. J. C 74 3126
36 Bhar P et al 2016 Eur. Phys. J. A 52 312
37 Maurya S K et al 2016 Eur. Phys. J. C
76 266
37a1 Maurya S K et al 2016 Eur. Phys. J. C 76 693
37b Singh K N, Bhar P and Pant N 2016 Astrophys. Space Sci. 361
339
37c Tello-Ortiz F, Maurya S K and Gomez-Leyton Y 2020 Eur. Phys. J. C 80 324
37d Dayanandan B, Smitha T T and Maurya S K 2021 Phys. Scr. 96 125041
37da Singh K N et al 2020 Chinese Phys. C
44 105106
37db Rahaman M et al 2020 Eur. Phys. J. Plus
80 272
dd Deb D et al 2019 Mon. Not. R.
Astron. Soc. 485 5652
ee Maurya S K et al 2019 Phys. Rev. D
100 044014
ff Mustafa G et al 2020 Chin. J. Phys. 67 576
gg Maurya S K et al 2020 Eur. Phys. J. Plus 135 824
gga Mustafa G et al 2021 Phys. Dark
Universe 31 100747
ggb Maurya S K, Tello-Ortiz F and Ray S 2021 Phys. Dark
Universe 31 100753
ggc Mustafa G et al 2021 Eur. Phys. J. Plus 136
166
ggd Maurya S K, Singh K N and Nag R 2021 Chin. J. Phys.
74 313
gge Adnan M et al 2022 Int. J. Mod. Phys. D
19 2250073
ggf Sarkar S, Sarkar N and Rahaman F 2022 Chin. J. Phys. 77
2028
38 Sharif M and Waseem A 2018 Eur. Phys. J. C 78 868
38c Sharif M and Majid A 2020 Eur. Phys. J. Plus
135 558
38a2 Sharif M and Saba S 2020 Chin. J. Phys. 64 374
41b Misner C W and Sharp D H 1964 Phys. Rev. 136 B571
41f Kalam M et al 2013 Int. J. Theor. Phys.
52 3319
41f1 Arbañil J D V and Malheiro M 2016 J.
Cosmol. Astropart. Phys. 11 012
41fa Biswas S et al 2019 Ann. Phys. 409 167905
41fb Sharif M and Ramzan A 2020 Phys. Dark Universe
30 100737
41i Eiesland J 1925 Trans. Am. Math. Soc. 27 213
41j Lake K 2003 Phys. Rev. D 67 104015
41ja Darmois G 1927 Les
equations de la gravitation einsteinienne
41jb Lichnerowicz A 1955 Théories Relativistes de la Gravitation et de l'Electromagnétisme Masson,
Paris
41jc Lake K 2017 Gen. Relativ. Gravit. 49 134
41k Dey M et al 1998 Phys. Lett. B 438 123
42a Buchdahl H A 1959 Phys. Rev. 116 1027
aaa Farhi E and Jaffe R L 1984 Phys. Rev. D 30 2379
bbb Alcock C, Farhi E and Olinto A 1986 Astrophys.
J. 310 261
hh Gangopadhyay T et al 2013 Mon. Not.
R. Astron. Soc. 431 3216
hha Deb D et al 2019 J. Cosmol. Astropart. Phys.
10 070
hhb de Felice F, Yu Y and Fang J 1995 Mon. Not.
R. Astron. Soc. 277 L17
42b Ivanov B V 2002 Phys. Rev. D 65 104011
42d Abreu H, Hernandez H and Nunez L A 2007 Class. Quantum Gravit. 24 4631
42e Herrera L 1992 Phys. Lett. A 165 206
42f Heintzmann H and Hillebrandt W 1975 Astron.
Astrophys. 38 51
42g Moustakidis C C 2017 Gen. Relativ. Gravit. 49 68
|
http://arxiv.org/abs/2307.04498v1 | 20230710113918 | RCS-based Quasi-Deterministic Ray Tracing for Statistical Channel Modeling | [
"Javad Ebrahimizadeh",
"Evgenii Vinogradov",
"Guy A. E. Vandenbosch"
] | cs.NI | [
"cs.NI",
"cs.SY",
"eess.SY"
] |
RCS-based Quasi-Deterministic Ray Tracing for Statistical Channel Modeling
Javad Ebrahimizadeh, Evgenii Vinogradov, Guy A.E. Vandenbosch J. Ebrahimizadeh and G. Vandenbosch are with WaveCoRE of the Department of Electrical Engineering (ESAT), KU Leuven, Leuven, Belgium. E-mail: {Javad.Ebrahimizade,Guy.Vandenbosch}@kuleuven.be
E. Vinogradov is with ESAT, KU Leuven, Leuven, Belgium, also with Autonomous Robotics Research Center, Technology Innovation Institute (TII), Abu Dhabi, UAE. E-mail: [email protected].
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================
This paper presents a quasi-deterministic ray tracing (QD-RT) method for analyzing the propagation of electromagnetic waves in street canyons. The method uses a statistical bistatic distribution to model the Radar Cross Section (RCS) of various irregular objects such as cars and pedestrians, instead of relying on exact values as in a deterministic propagation model. The performance of the QD-RT method is evaluated by comparing its generated path loss distributions to those of the deterministic ray tracing (D-RT) model using the Two-sample Cramer-von Mises test. The results indicate that the QD-RT method generates the same path loss distributions as the D-RT model while offering lower complexity. This study suggests that the QD-RT method has the potential to be used for analyzing complicated scenarios such as street canyon scenarios in mmWave wireless communication systems.
quasi-deterministic, ray tracing, Radar Cross Section, statistical distribution, EM propagation, Cramer-von Mises test.
§ INTRODUCTION
Wireless communication has been rapidly evolving with the advent of new technologies and the increasing demand for high-speed data transmission. Millimeter-Wave (mmWave) wireless communication is considered a promising technology for the next generation of wireless communication due to its ability to provide multi-Gbps average data rates with low latency <cit.>. This high data rate is particularly necessary for dense urban areas such as the street canyon scenario, where a large number of users demand high-speed data transmission. In this scenario, radio frequencies at mmWave bands are used to transmit data, which requires an understanding of the propagation characteristics of mmWave signals in street canyons. Recently, Facebook introduced an affordable solution for deploying high-speed data access in street canyons using mmWave Terragraph radios operating at 60 GHz for rooftop-to-rooftop or light-pole-to-light-pole links <cit.>.
Since there is no closed-form scattering model available for bistatic Radar Cross Section (RCS) of irregular objects such as pedestrians and cars, numerical methods such as the Method of Moments (MoM), Geometrical Optics (GO), Physical Optics (PO), or their combinations, are typically used to calculate the bistatic RCS of these objects. However, this increases the computational complexity of the analysis, which can be especially challenging in the case of the street canyon scenario, where a large number of irregular objects need to be considered.
While the use of the bistatic RCS model of a sphere in the METIS channel model is simple, it may not accurately represent the scattering from irregular objects in all directions. This is because a large sphere, relative to the wavelength, exhibits a constant RCS. To address this limitation, Lahuerta-Lavieja et al. developed a fast mmWave scattering model based on the 3D Fresnel model for rectangular surfaces. However, while these models are useful for certain types of objects, they may not accurately model more complex or irregular objects <cit.>. Therefore, further research is needed to develop more accurate bistatic RCS models that can be incorporated into channel models for a more comprehensive analysis of wireless communication systems in real-world scenarios.
Myint et al. demonstrated the feasibility of modeling the bistatic RCS of intricate objects using a closed-form statistical distribution function. They found that the bistatic RCS of cars conforms to a logistic distribution and applied this model to various vehicle types, including passenger cars, vans, and trucks, at sub-6 GHz frequency. However, they did not validate this Probability Density Function (PDF) model in a practical channel environment <cit.>.
The present paper introduces a low-complexity quasi-deterministic ray tracing method that takes advantage of the statistical distribution of bistatic RCS of irregular objects for calculating scattering instead of its exact values, as done in deterministic ray tracing. The method uses the Physical Optics (PO) method to calculate the bistatic RCS of irregular objects at mmWave and assigns a suitable Probability Density Function (PDF) to them. This approach significantly reduces the complexity of the ray tracing method. The QD-RT method is verified numerically by calculating the path loss due to irregular objects in a realistic street canyon scenario.
The main contributions of the paper are:
* Development of a quasi-deterministic ray tracing technique based on dedicated PDFs of bistatic RCSs of objects.
* Deriving the probability density function of the area coverage for a specific street canyon scenario in spherical coordinates.
The rest of the paper is organized as follows. Section <ref> describes the quasi-deterministic ray tracing method. Section <ref> validates the quasi-deterministic propagation technique. Finally, the paper is concluded in Section <ref>.
§ QUASI-DETERMINISTIC RAY TRACING METHOD
In this section, we provide a comprehensive overview of the street canyon topology and the theory of deterministic electromagnetic (EM) propagation in the scenario. Additionally, we outline the quasi-deterministic and statistical channel models used in the study and their corresponding parameterization.
§.§ street canyon scenario topology
The topology of the street canyon scenario is shown in Figure <ref> with two tall buildings on either side of the street. The street has a length of W_2 and a width of L_1, and there is a sidewalk on both sides of the street with a width of W_1. In this scenario, there are scattering objects such as lampposts, parked cars, and pedestrians placed on the street. The walls of the buildings have a thickness of D_w and are made of bricks with a relative permittivity of ϵ_r,w at operational frequency f_0. The transmitter and receiver antennas are omnidirectional antennas with vertical polarization, and they are located at positions (X_tx, Y_tx, Z_tx) and (X_rx, Y_rx, Z_rx), respectively. The lampposts have a radius of R_l and a length of L_l, and they are equidistantly positioned on both sides of the street with a separation distance of d_l. The scenario dimensions and parameter values are provided in Table <ref>.
§.§ deterministic propagation
The propagation of EM waves in the street canyon scenario includes the
Line-of-Sight (LOS), reflection, and scattering paths, without
considering shadowing and diffraction.
§.§.§ LOS propagation
H_0(ω)=a_0e^jωτ_0,
where | a_0| ^2=(λ/4π r_0)^2 is the LOS propagation loss, the corresponding path loss in dB is PL=-20 log_10(|a_0|), and τ_0=r_0/c_0 is the propagation time.
§.§.§ reflections from ground and walls
H^r(ω)=a^re^jωτ_r,
where | a^r| ^2=(R^TE/TMλ/4π (r_1+r_2))^2, the corresponding path loss in dB due to reflection is PL=-20 log_10(|a^r|), and τ^r=(r_1+r_2)/c_0 is the propagation time; r_1 is the distance between the TX and the specular point and r_2 is the distance between the specular point and the RX. The reflection coefficients R^TE/TM for both TE and TM polarizations for dielectric slab (wall) and half-space (ground) media are given in <cit.>.
§.§.§ Scattering from objects
H^s(ω)=a^se^jωτ_s,
where | a^s| ^2= 1/4π r_1^2×σ_rcs×1/4π r_2^2×λ^2/4π, the corresponding path loss in dB due to scattering is PL=-20 log_10(|a^s|), and the propagation time is τ^s=(r_1+r_2)/c_0; r_1 and r_2 are the distances between the scatterer and the RX and TX, respectively, and σ_rcs is the bistatic RCS of the scatterer. In this paper, the bistatic RCS values of complex objects (such as cars or pedestrians) are computed by the Physical Optics Gordon method, and those of regularly shaped objects (e.g., lampposts) are computed with the closed-form model of the RCS of a conducting cylinder <cit.>.
§.§ quasi-deterministic propagation
In the quasi-deterministic ray tracing (QD-RT) method, the PDF of a bistatic RCS in (<ref>) is used instead of the exact value of this bistatic RCS, so the computational complexity decreases drastically. The QD-RT method is a low-complexity technique for the statistical analysis and modeling of the channel, for which Monte Carlo simulations should be performed. A Monte Carlo simulation has variables that are randomly varied during each iteration. For example, for the statistical analysis of the path loss due to an irregular object using (<ref>), the distances between the object and the TX and RX antennas, denoted by r_1 and r_2, are taken as the Monte Carlo variables, i.e., the independent random variables X_1 and X_2. Therefore, based on (<ref>), the path loss is a random variable given by
PL(X_1,X_2) ∼ A_0-40× log_10(X_1+X_2)-10× log_10(σ_rcs)
where A_0=-10log_10((4π)^3 ×λ^2) is a constant value. According to (<ref>), using the PDF of the bistatic RCS of objects generates the same distribution for the path loss as using the exact values of the bistatic RCS.
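A minimal Monte Carlo sketch of this idea is shown below. It is an illustration under stated assumptions rather than the simulator used in this paper: the carrier frequency, the distance ranges, and the logistic scale parameter are placeholders, while the logistic location is set to the pedestrian mean reported in the results section.

# Minimal sketch: drawing X1, X2 and a logistic bistatic RCS, then evaluating
# the path loss expression above (exactly as written) per Monte Carlo sample.
import numpy as np

rng = np.random.default_rng(0)
lam = 3e8 / 60e9                                   # assumed 60 GHz carrier
A0  = -10.0 * np.log10((4.0 * np.pi)**3 * lam**2)  # constant term A_0

N      = 100_000
x1     = rng.uniform(5.0, 60.0, N)                 # assumed TX-object distances (m)
x2     = rng.uniform(5.0, 60.0, N)                 # assumed object-RX distances (m)
rcs_db = rng.logistic(loc=6.17, scale=3.0, size=N) # bistatic RCS in dBsm, logistic PDF

pl = A0 - 40.0 * np.log10(x1 + x2) - rcs_db        # path loss random variable
print("median PL [dB]:", np.median(pl))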
To model the bistatic Radar Cross Section (RCS) of an irregular object using a Probability Density Function (PDF), a dataset of bistatic RCS values for all incident and scattered angles must be generated. It is important to note that the weighting of the bistatic RCS at different angles when generating this dataset is not uniform and depends on the specific scenario being tested. In the case of a street canyon scenario, the angular dependency of the bistatic RCS in creating the dataset follows
f_Θ, Φ(θ,ϕ)=
(Δ z)^2sin(θ)/2L_1W_2 cos^3(θ) , if a/Δ z × sin(ϕ)<θ<b/Δ z × sin(ϕ) and ϕ_0 < ϕ < π - ϕ_0,
0 , otherwise
where θ and ϕ are the elevation and azimuth angles in spherical coordinates, and Δ z is the height difference between the object and the TX (RX). Here, the elevation angle is limited by the lines Y_w = a and Y_w = b, and the azimuth angle is bounded by ϕ_0 = 2a/L_1, see Fig. <ref>.
§ SIMULATION RESULTS
The purpose of this study is to validate the quasi-deterministic ray tracing (QD-RT) method by comparing it with the deterministic ray tracing (D-RT) method for a street canyon scenario. To accomplish this, the path loss and excess delay time distributions due to a pedestrian (parked cars) located randomly on the sidewalk (along the street) in the street canyon scenario shown in Fig. <ref>, with the dimensions listed in Table <ref>, are numerically calculated using both methods. The PDFs of the bistatic RCS of the pedestrians and parked cars are first obtained using the Physical Optics method, and logistic distributions are observed for both cases, as listed in Table <ref>. The mean values for a car and a pedestrian are 11 and 6.17 dBsm, respectively. However, the maximum values (corresponding to the specular points) for a car and a pedestrian are around 60 dBsm and 40 dBsm, which yields a considerable difference of approximately 20 dBsm. A total of 1000 Monte Carlo simulations with n ∈{1, ..., 10} pedestrians randomly positioned on the sidewalks is then performed, and the resulting path loss distributions are fitted to Weibull distributions with scale and shape parameters. Excess time delay distributions for both pedestrians and parked cars are also calculated, and lognormal distributions are observed. The statistical parameters of the path loss and excess time delay distributions are presented in Fig. <ref> and Table <ref>, respectively. This study demonstrates that the QD-RT method offers the same path loss distributions as the D-RT method with lower complexity, making it a promising approach for analyzing complex scenarios such as street canyon scenarios in mmWave wireless communication systems.
§ CONCLUSION
In conclusion, the proposed quasi-deterministic ray tracing method using a statistical bistatic distribution to model the Radar Cross Section of various irregular objects showed promising results in analyzing the propagation of electromagnetic waves in street canyon scenarios. The method provided the same path loss and excess time delay distributions as the deterministic ray tracing model while offering lower complexity. The study also found that the scenario-specific PDF of bistatic RCS of irregular objects followed logistic distributions and the path loss and excess time delay followed Weibull and lognormal distributions, respectively. This study highlights the potential of the QD-RT method for analyzing complicated scenarios, such as street canyon scenarios, in mmWave wireless communication systems.
§ ACKNOWLEDGMENTS
The present work received funding from the European Union’s
Framework Programme for Research and Innovation Horizon
2020 under Grant Agreement No. 861222 (MINTS project).
|
http://arxiv.org/abs/2307.04023v1 | 20230708180031 | SDT: A Low-cost and Topology-reconfigurable Testbed for Network Research | [
"Zixuan Chen",
"Zhigao Zhao",
"Zijian Li",
"Jiang Shao",
"Sen Liu",
"Yang Xu"
] | cs.NI | [
"cs.NI",
"cs.PF"
] |
Zixuan Chen†, Zhigao Zhao†, Zijian Li†, Jiang Shao†, Sen Liu†, Yang Xu†∗
{zxchen20, zgzhao20, lizj21, jshao20, senliu, xuy}@fudan.edu.cn
†School of Computer Science, Fudan University, Shanghai, China
Institute of Fintech, Fudan University, Shanghai, China
Peng Cheng Laboratory, Shenzhen, China
SDT: A Low-cost and Topology-reconfigurable Testbed for Network Research
* Corresponding author: Yang Xu.
This paper will be published in IEEE CLUSTER 2023. Preview version only.
===================================================================================================================================================================================
Network experiments are essential to network-related scientific research (e.g., congestion control, QoS, network topology design, and traffic engineering). However, (re)configuring various topologies on a real testbed is expensive, time-consuming, and error-prone. In this paper, we propose Software Defined Topology Testbed (SDT), a method for constructing a user-defined network topology using a few commodity switches. SDT is low-cost, deployment-friendly, and reconfigurable, and can run multiple sets of experiments under different topologies by simply using different topology configuration files at the controller we designed. We implement a prototype of SDT and conduct numerous experiments. Evaluations show that SDT introduces at most 2% extra overhead on multi-hop latency compared with full testbeds and is far more efficient than software simulators (reducing the evaluation time by up to 2899x). SDT is more cost-effective and scalable than existing Topology Projection (TP) solutions. Further experiments show that SDT can support various network research experiments at a low cost on topics including but not limited to topology design, congestion control, and traffic engineering.
Testbed, reconfigurable topology, network evaluation
§ INTRODUCTION
As the main bottleneck of Data Centers (DCs), Data Center Networks (DCNs) have attracted much research attention from both industry and academia <cit.>. There exist several commonly used DCN topologies that are scalable and cost-effective, including Fat-Tree <cit.>, Dragonfly <cit.>, Torus <cit.>, BCube <cit.>, HyperBCube <cit.>, etc. Research on DCNs, including congestion control mechanisms, routing algorithms, deadlock avoidance functions, etc., should be applied to most of these topologies (or at least some) for better generality (e.g., <cit.>). There is also much state-of-the-art research on optimizing the physical topology to improve the performance of applications such as Distributed Machine Learning (DML) <cit.>. All of these require a testbed that can support multiple topologies to verify the effects of each mechanism.
It is not easy to support multiple topologies at the same time and do reconfiguration among them. First, building a topology such as Fat-Tree can be complex. For example, it needs 20 4-port switches and 48 cables to deploy a standard Fat-Tree topology supporting only 16 nodes (Figure <ref>). In addition, it is more complicated to support different topologies and reconfigurations simultaneously. Connections are error-prone and difficult to check when reconfiguring. Although emulators (e.g., Mininet <cit.>, Open vSwitch <cit.>, OpenStack <cit.>) can simulate a variety of topologies, they still have some obvious drawbacks such as long simulation time and insufficient authenticity of results. Therefore, deploying a full testbed for evaluation is crucial and irreplaceable, even if it is hard to make.
As far as we know, a qualified real-world testbed requires several characteristics, including fast topology reconfiguration, cost-friendly deployment, and convenient maintenance. The challenges in designing such a testbed lie in how to support topology reconfiguration, preferably without manual switching of cables; how to reduce the cost of the test platform, including hardware and labor costs; and even how to support user-defined topologies, rather than being limited to the existing commonly used topologies.
Switch Projection (SP) is a solution to construct topologies for network experiments but needs heavy staffing. The good news is that Micro Electro Mechanical System (MEMS) optical switches can be used to build reconfigurable network topologies <cit.>. Based on their reconfigurable and lossless bi-switching properties, they can take the place of SP's manpower. We call SP with MEMS optical switches "Switch Projection-Optical Switch (SP-OS)". SP-OS can construct user-defined topologies and support real-time reconfiguration without manual operations. However, it still has certain disadvantages, such as high cost and poor expandability. Considering the above characteristics and challenges, we propose a topology-reconfigurable testbed named Software Defined Topology Testbed (SDT) that works without costly optical switches to achieve lower cost and better scalability.
In short, the contributions of the paper are
* We summarize the methodology of Topology Projection (TP) and propose SDT, a testbed solution for building real topologies. SDT uses commodity OpenFlow switches to construct various topologies. Once the connection deployment is completed, the topology (re)configuration can be finished in a short time without manually changing the physical connections or using optical switches (Figure <ref>).
* We develop an easy-to-use SDT controller supporting user-defined topologies. Users can develop their routing strategy or other new technologies with the SDT controller. The transformation process from logical topology to physical topology is fully automated.
* We compare SDT with existing TP methods, and SDT shows better cost-effectiveness and scalability. We use real applications to evaluate 1) the latency and bandwidth differences compared with the full testbed and 2) the Application Completion Time (ACT) and time consumption compared with the simulator. Evaluations show that SDT has only 0.03-2% deviation on latency compared to the full testbed and reduces the evaluation time by up to 2899x compared with the simulator in a 16-second HPC benchmark for communication efficiency with 32 nodes.
* We further implement some prevalent network functions on SDT, including routing strategy, deadlock avoidance, and congestion control. SDT shows substantial flexibility in network evaluations.
The rest of the paper is organized as follows. We introduce the related works in <ref>. We present the motivation and design of SDT in detail in Sections <ref> and <ref>. A prototype of SDT controller is introduced in <ref>. The accuracy and efficiency of SDT are evaluated in <ref>, with some state-of-the-art network functions implemented. We discuss SDT in <ref> and conclude the paper in <ref>.
§ RELATED WORKS
§.§ Reconfigurable Networks
To better allocate link bandwidth in response to the non-uniform traffic often present in DCNs, some researchers propose reconfigurable networks, which can dynamically adjust links based on real-time network traffic to better serve hot node pairs (nodes with heavy traffic). These reconfigurable networks are often implemented with optical devices, which can offer lossless bi-switching capabilities. The optical devices used in reconfigurable networks can mainly be categorized into MEMS-based optical switches and other specialized optical devices (e.g., free-space optics and optical devices that forward based on light wavelength).
§.§.§ Reconfigurable Networks based on MEMS Optical Switch
MEMS optical switches use several tiny mirrors on the silicon crystal to forward the light between different fiber interfaces. The tiny mirrors are called microarrays, working as a reconfigurable static crossbar by rotation.
MEMS optical switches have been put into practical usage very early, and the technology is relatively mature and less error-prone. Therefore, early reconfigurable networks, such as c-Through <cit.> and Helios <cit.>, use MEMS optical switches to build reconfigurable networks. However, MEMS optical switches still have drawbacks, such as their relatively large reconfiguration delays (about 100ms) and high hardware costs.
§.§.§ Reconfigurable Networks based on Customized Optics
To achieve faster reconfiguration, researchers have proposed other customized optical devices, such as Free Space Optics used in Firefly <cit.> and ProjecToR <cit.>, which reflect the laser propagating in the air with mirrors that can do faster angle adjustment to complete the reconfiguration. This kind of network can achieve reconfiguration as fast as 12μ s, but it is easily disturbed by the environment, which causes significant optical path shifts and makes the deployment impossible.
In addition, Sirius <cit.> uses Arrayed Waveguide Grating Router (AWGR) to forward the input light of different wavelengths to the corresponding output ports to complete the reconfiguration. However, this method needs to be used with a highly customized tunable laser that can quickly generate lasers of different wavelengths, which is also less practical.
Besides these, there are some other similar customized-optics-based fast reconfiguration works like <cit.>.
§.§ Network Evaluation Tools
Network researchers have developed and used many network evaluation tools in the past few decades. We roughly divide them into 1) simulator, 2) emulator, and 3) testbed. They have played a significant role in the progress of network technologies, but they also have certain disadvantages.
§.§.§ Simulator
Existing network simulation tools such as NS-2 <cit.>, NS-3 <cit.>, OPNET <cit.>, OMNET++ <cit.> and GloMoSim <cit.> offer efficient and cost-effective ways to evaluate the network performance under different conditions. However, compared with the testbed, they lack both scalability and reality. Simulators may take several days to complete one simulation, and they also lack the ability to reproduce the various random situations that occur in real networks.
§.§.§ Emulator
The primary goal of network emulators such as Mininet <cit.> with Open vSwitch (OVS) <cit.> and Netem <cit.> is to create an environment whereby users can flexibly combine the VMs, applications, products, and services to perform a relatively more authentic simulation. However, the performance of emulators is poor in the high bandwidth environment (10Gbps+) or medium-scale topologies (containing 20+ switches) due to the limitation of the system resources. Besides, emulators cannot do everything we want, e.g., Mininet has no official support for Priority-based Flow Control (PFC), even though PFC is already a standard feature.
As a widely used cloud computing infrastructure software, OpenStack <cit.> can be used to build a set of computing nodes with specific topologies using commodity servers and switches. However, the construction of topology on OpenStack is still virtualized by OVS. As a result, the network topology on OpenStack has scalability and reality problems and will be limited by the bandwidth.
§.§.§ Testbed
Existing testbed platforms available to researchers include Emulab <cit.>, CloudLab <cit.> and PlanetLab <cit.>, which have made considerable progress in making testbed as easy to use and control as simulation. Nevertheless, their drawbacks are also obvious. Whether virtualization is used or not, the reconfiguration of the testbed requires heavy manual operations. Several testbeds dedicated to wireless environments are proposed, such as TWIST <cit.>, and DRIVE <cit.>. These works mainly consider wireless environments, which do not apply to DCN-related experiments.
§ MOTIVATION AND BACKGROUND
This section first introduces our motivation for "Topology Projection (TP)". Then, we summarize a straightforward solution named Switch Projection (SP). SP can support TP easily but cannot be reconfigured without manpower. MEMS optical switches can be introduced for topology reconfiguration, which is introduced at the end of this section under the name Switch Projection-Optical Switch (SP-OS).
§.§ Why Do We Need the SDT?
By comprehensively considering the pros and cons of three types of existing network evaluation tools (Table <ref>), we find that they are generally unable to achieve high-performance and low-cost evaluations for various network topologies. Although the simulation is easy to operate and the cost is relatively small, its scalability is limited by the high time cost. As the number of nodes increases and the network traffic grows, the simulation time can be thousands of times longer than the real-world ACT. Testbeds are needed to get better evaluation scalability and efficiency. However, the deployment expenses of testbeds are high and even unacceptable for researchers.
Therefore, we want to construct a system that performs almost the same as the full testbed with high efficiency and scalability. The system should support fast reconfiguration among various topologies without changing the physical connections under an acceptable budget. That is why we present SDT. The efficiency of SDT is close to full testbeds without any manual operation during reconfiguration and with lower hardware costs.
§.§ A Possible Solution: Switch Projection
Some works (e.g., <cit.>) use a switch to construct a simple topology for evaluation. We call this method of constructing a topology “TP”. SDT is also a TP method.
The main idea of traditional TP is to project the topologies by using the logical switch as a meta unit. The right side of Figure <ref> is the topology we want to construct, which is a part of a 2D-Torus. We call this “logical topology”. The radix of the switches in this logical topology is 4, i.e., every logical switch has 4 ports. The physical switch can be divided into sub-switches based on the radix. As a result, each sub-switch has 4 ports as well. After that, we can use these sub-switches for the topology projection.
We call this type of TP "SP" and summarize its general approach here. The first step of SP is to divide one physical switch into multiple sub-switches. Then we project the sub-switches onto the logical switches in the topology, which is why this method is called SP. After the projection, we manually connect the corresponding ports of these sub-switches to build the topology. We can use Software-Defined Networking (SDN) functions (e.g., flow tables in an OpenFlow switch) to divide the sub-switches.
Take Figure <ref> as an example of how SP works. We first divide and project the sub-switches. Ports 1-4 on the physical switch are considered as one sub-switch, so we project them to an arbitrary logical switch, e.g., switch 1. Ports in logical switch 1 are numbered based on the projected ports from the physical switch. The operations are the same for the other sub-switches.
We then connect the cables between specific sub-switch ports based on the logical topology. For example, in the logical topology, there is a link between ports 3 and 9 (i.e., Link (A)). We connect the corresponding ports on the physical switch. After all the links are made, it is time to deploy the flow table (we use OpenFlow in this paper) to restrict the packet forwarding domain on the physical switch based on the ports' labels. For instance, data packets entering port 1 can only be forwarded to ports 2-4. The restrictions are based on the partition of sub-switches.
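To make the forwarding-domain restriction concrete, the following minimal Python sketch emits ovs-ofctl-style rules that confine flooding within each 4-port sub-switch. This is our illustration rather than SP's or SDT's actual controller code; the bridge name and the port grouping are assumptions, and a real deployment would also install learned unicast entries restricted to the same port sets.

# Sketch: generate per-port OpenFlow rules that keep traffic inside each
# 4-port sub-switch of the physical switch.
SUB_SWITCHES = {                # physical ports grouped into 4-port sub-switches
    1: [1, 2, 3, 4],
    2: [5, 6, 7, 8],
    3: [9, 10, 11, 12],
    4: [13, 14, 15, 16],
}

def sp_flow_rules(bridge="br0"):
    rules = []
    for ports in SUB_SWITCHES.values():
        for p in ports:
            peers = [q for q in ports if q != p]
            out = ",".join(f"output:{q}" for q in peers)
            # Flood only within the sub-switch; learned unicast entries would
            # further narrow the output port in practice.
            rules.append(f"ovs-ofctl add-flow {bridge} "
                         f"'in_port={p},actions={out}'")
    return rules

for rule in sp_flow_rules():
    print(rule)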
§.§ Make SP Topology-reconfigurable
The manual operations required for SP on topology reconfiguration are massive. We have to re-connect the cables manually on every topology reconfiguration, which is error-prone. As the topology size increases, the difficulty of deployment increases correspondingly. Therefore, we introduce MEMS optical switches into SP to reduce labor costs. The new design is called SP-OS.
The optical switch can replace manual operations on the reconfiguration. We connect all the ports on the physical switch to the optical switch (Figure <ref>). When the topology needs to be reconfigured, modifying the configuration of the optical switch based on the labels can replace the manual operations. The advantage of SP-OS is that once the testbed is deployed, all reconfigurations can be done remotely by software control.
The introduction of optical switches leads to increased hardware costs. Optical devices are generally costly. The price of a 320-port MEMS optical switch is more than $100k, and only 160 LC-LC[Lucent Connector (LC).] fibers can be connected. As the number of ports on the optical switch increases, the price increases significantly. SDT can work without optical switches, which provides significant savings.
TurboNet <cit.> is another topology-reconfigurable SP method for TP, which replaces manual reconnection with the Tofino switch's loopback ports. However, the use of loopback ports results in a reduction in the available bandwidth of the switches <cit.>. We compare the scalability between TurboNet and SDT in <ref>.
§ THE DESIGN OF SDT
In this section, we first introduce the fundamental design of SDT on a single switch. Then, we expand the SDT to multiple switches to support larger topologies. We also address the issue of topology partitioning in multi-switch deployments.
§.§ SDT on a Single Switch
Although SP-OS can support automated topology reconfiguration, its cost is relatively high due to the introduction of optical switches. Therefore, we design the SDT, which can provide the same functionality as SP-OS but without optical switches.
The main idea of SDT is to use Link Projection (LP) rather than SP to construct the logical topology on a physical switch. SDT first projects physical links[To construct a physical link, we connect two arbitrary ports on the switch. In the paper, the switch's upper and lower adjacent ports are connected for simplicity.] to logical ones on the topology, and then number the ports on the logical topology based on the projected ports from the physical switch. Taking Figure <ref> as an example, the physical links A and B are projected to the logical topology, and then the corresponding ports in the logical topology can be tagged with 1, 2, 3, and 4, respectively.
After the projection, we group ports on the physical switch into different sub-switches based on the relationship of their counterparts in the logical topology. For instance, in Figure <ref>, ports 1, 3, 5, and 7 in the topology form a logical switch, so the corresponding ports 1, 3, 5, 7 in the physical switch should be grouped in the same sub-switch. We use OpenFlow flow tables to keep the packets entering this sub-switch only forwarded to their corresponding forwarding domain. The other sub-switches are divided according to these steps as well.
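As an illustration of the LP procedure, the following plain-Python sketch projects a small set of physical self-links onto a logical topology and derives the resulting sub-switch partition. The data layout and variable names are our own assumptions and only mirror the steps described above.

```python
# Illustrative Link Projection (LP): each logical link is assigned to one
# physical self-link, and physical ports are then grouped into sub-switches
# according to the logical switch they were projected to.

# Physical self-links: pairs of directly cabled ports on one OpenFlow switch.
physical_self_links = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12)]

# Logical topology: links between logical switches (a 4-node ring plus its two diagonals).
logical_links = [("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s4", "s1"),
                 ("s1", "s3"), ("s2", "s4")]

assert len(logical_links) <= len(physical_self_links), "not enough self-links"

sub_switch_ports = {}   # logical switch -> physical ports of its sub-switch
for (port_a, port_b), (sw_a, sw_b) in zip(physical_self_links, logical_links):
    # Projecting the self-link (port_a, port_b) onto the logical link (sw_a, sw_b)
    # attaches port_a to logical switch sw_a and port_b to logical switch sw_b.
    sub_switch_ports.setdefault(sw_a, []).append(port_a)
    sub_switch_ports.setdefault(sw_b, []).append(port_b)

print(sub_switch_ports)
# {'s1': [1, 8, 9], 's2': [2, 3, 11], 's3': [4, 5, 10], 's4': [6, 7, 12]}
```

The resulting port groups are exactly what the flow-table rules need in order to confine each port's forwarding domain.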
Please note that no optical switch is needed when the topology is reconfigured in SDT.
Here we summarize the fundamental differences between SP-OS and SDT.
* In SP-OS, sub-switch partitions are determined arbitrarily (the only constraint is that the radix of sub-switches should match the radix of logical switches in the topology). MEMS optical switches are used to (re)connect links between these sub-switches based on the topology's logical switches (projected by SP).
* In SDT, physical links on the physical switch will remain fixed once constructed (which can be arbitrary). The sub-switches are (re)partitioned based on the result of LP. Rules in the flow tables of the OpenFlow switch can be used to realize the sub-switch partition, and no optical switch is needed during a topology reconfiguration.
The size of the logical topology supported by SDT is limited by the number of ports on the physical switch. A topology can be appropriately built if the total number of ports in the topology is less than or equal to the number of ports on the physical switch (excluding the ports connected to the end hosts). This constraint applies to all TP methods.
§.§ SDT on Multiple Switches
When one switch is insufficient to project the entire logical topology, multiple switches need to be used. In SP-OS, it is not difficult to expand the supported logical topology by adding more switches and optical devices. The expansion of SDT is also relatively simple but requires the additional discussion below.
To construct the multi-switch scenario, the logical topology needs to be cut into several sub-topologies, each of which is maintained independently by one physical switch.
There are two different types of links in multi-switch SDT. We call the links between the upper and lower adjacent ports of one switch self-links. For those links across the sub-topologies, we project them from the links across physical switches and call them inter-switch links. For instance, the topology has been cut into two sub-topologies on the right side of Figure <ref>. The links inside each sub-topology are self-links, and the links between the two sub-topologies are inter-switch links.
There is a requirement on the number of inter-switch links. Taking Figure <ref> as an example, the logical topology is larger than the previous one. As a result, one 64-port switch cannot build this topology, but two can. To build the topology, we divide it into two sub-topologies; how to divide topologies is discussed in Sec. <ref>.
Here we formalize the number of inter-switch links. Define the graph G(E, V) as the logical topology we hope to build, and let the two sub-topologies be G_A(E_A, V_A) and G_B(E_B, V_B). E_nA denotes the links to nodes on physical switch A, E_sA the self-links on physical switch A, and E_aAB the inter-switch links between physical switches A and B. In the logical topology, the relationship E = E_n + E_s holds. For the sub-topologies after division, we have
E_A = E_nA + E_sA
E_B = E_nB + E_sB
V = V_A + V_B
For inter-switch links, the following equation exists.
E_aAB = E_aBA = E - E_A - E_B
We can now determine the number of inter-switch links for the logical topology by Eq. <ref>. For the case in Figure <ref>, there are 8 inter-switch links between the two sub-topologies, which means at least 8 inter-switch links are required to construct this topology.
The reservation of inter-switch links is flexible, but it must fulfill the requirements of the desired topologies and the specifications of physical switches. Taking Figure <ref> as an example, we aim to construct a 4x4 2D-Torus topology (the connections to nodes are omitted for simplicity). When the number of ports on physical switches is greater than 64, only 1 switch is necessary. When the number of ports exceeds 32 but is less than 64, 2 switches are required to build the topology, as shown on the left side of Figure <ref>. Each switch is assigned 12 self-links and 8 inter-switch links in this scenario. When the number of ports is less than 32 but greater than 16, we can build it with 4 switches. Attention must be paid to determining the switches at both ends of the inter-switch links according to the partition results.
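The link budget in this example can be checked mechanically. The short sketch below (using networkx; the variable names are ours) builds the 4x4 2D-Torus, splits it into two column-wise halves, and counts self-links and inter-switch links.

```python
# Count self-links and inter-switch links when a 4x4 2D-Torus is split into
# two sub-topologies of 8 logical switches each (columns {0,1} vs. {2,3}).
import networkx as nx

torus = nx.grid_2d_graph(4, 4, periodic=True)   # nodes are (row, column) tuples
part_a = {n for n in torus if n[1] in (0, 1)}   # first two columns
part_b = set(torus) - part_a

self_links_a = [e for e in torus.edges if e[0] in part_a and e[1] in part_a]
self_links_b = [e for e in torus.edges if e[0] in part_b and e[1] in part_b]
inter_links = [e for e in torus.edges if (e[0] in part_a) != (e[1] in part_a)]

print(len(self_links_a), len(self_links_b), len(inter_links))   # 12 12 8
```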
It is worth noting that even if the partitioning methods differ, the results of TP are almost the same. Nevertheless, a proper cutting method enables the testbed to support more topologies without manual modifications. In the implementation, if experiments on multiple topologies are required, we generally divide the topologies in advance based on the specifications of the switches (port number, flow table capacity, etc.) to obtain a proper number of inter-switch links between different switch pairs, i.e., to keep the number of inter-switch links between the different switch pairs about the same. The reserved inter-switch links usually come from the maximum number of inter-switch links among all topologies.
§.§ Topology Partition for SDT on Multiple Switches
The partition of the logical topology needs to be discussed. We define the function “Cut(G(E, V), params...)” for dividing the topology. The input of the function is the logical topology G(E, V), switch parameters, and the number of switches. The output is the partitioning method that satisfies the requirements of all the topologies we aim to build and the number of links of each type to be assigned. The problem is represented with switches and nodes as vertices and logical links as edges. The logical topology can be described as an undirected graph. To achieve the partitioning, we apply a graph partitioning algorithm that splits the graph into sub-graphs.
The partition of the graph needs to meet certain requirements. The first is that the number of inter-switch links should be small, because inter-switch links are more complicated to handle than self-links. With this requirement, an initial idea is to use a “Min-cut” partitioning algorithm, whose target is to minimize CutEdges(E_A, E_B) = ∑_u∈ V_A, v∈ V_B w(u, v). Note that w(u, v) = 1 if (u, v) ∈ E and 0 otherwise.
Besides this, we also want to keep the number of used links (or ports) per physical switch as balanced as possible. Balancing the ports and links of each physical switch is beneficial in terms of resource usage and the complexity of connecting ports to nodes. However, Min-cut partitioning cannot work well under this condition. Figure <ref> shows the differences between these partitioning methods. Another graph partitioning objective is therefore needed, namely minimizing α·CutEdges(E_A, E_B) + β·(1/|E_A| + 1/|E_B|).
To summarize the requirements for the SDT partitioning algorithm, the graph partitioning should 1) minimize the number of edges between sub-graphs and 2) balance the number of edges within each sub-graph. Meeting both requirements is an NP-hard problem, and algorithms such as RatioCut <cit.> or normalized cut (NCut) minimization <cit.> can be used to approximate a solution. In practice, we use the widely-used METIS library <cit.> with these constraints to perform the partitioning of the topology, and the results are usually satisfactory. When multiple topologies need to be evaluated in real-world experiments, we perform graph partitioning for all topologies and then select the maximum number of inter-switch links as the reference for deployment on the physical topology.
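Our implementation calls METIS through its Python bindings; as a self-contained stand-in that follows the same idea (a balanced minimum cut), the sketch below uses the Kernighan-Lin bisection shipped with networkx.

```python
# Balanced min-cut bisection of a logical topology (stand-in for METIS).
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

topology = nx.grid_2d_graph(4, 4, periodic=True)        # 4x4 2D-Torus
part_a, part_b = kernighan_lin_bisection(topology, seed=0)

inter_switch_links = nx.cut_size(topology, part_a, part_b)
self_links_a = topology.subgraph(part_a).number_of_edges()
self_links_b = topology.subgraph(part_b).number_of_edges()
print(inter_switch_links, self_links_a, self_links_b)   # typically 8 12 12
```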
§ IMPLEMENTATION DETAILS: SDT CONTROLLER
We implement the SDT controller based on the library Ryu <cit.> under version 4.34 and the API in commodity OpenFlow switches. As shown in Figure <ref>, the SDT controller consists of 4 modules. Topology Customization and Routing Strategy are two basic modules of the controller. The remaining two modules, i.e., Deadlock Avoidance and Network Monitor, are dedicated modules for DCNs. SDT controller supports fast (re)configuration of network topology and other modules by running a simple configuration file as shown in Figure <ref>.
§.§.§ Topology Customization
This module is essential for performing TP, consisting of 1) the checking function and 2) the deployment function. In the checking function, all user-defined topologies will be used as input to the module, along with how the testbed is connected (e.g., distribution of nodes and two types of links). The module first checks if these topologies meet the deployment conditions as addressed in <ref>. If not, the module will inform the user of the necessary link modification. Then, the checked user-defined topology is used as the input for the deployment function. The controller will maintain the logical topology as an undirected graph and run the TP process automatically in this function.
§.§.§ Routing Strategy
This module contains various routing strategies for different topologies. We implement several routing algorithms, as shown in Table <ref>. Most user-defined routing strategies can be implemented by the SDT controller as a specific set of flow tables. For instance, when a new flow arrives, the SDT controller calculates the path on the logical topology according to the strategy and then delivers the corresponding flow-table entries to the proper OpenFlow switches to route the flow.
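As a simplified illustration of how a routing strategy is materialized, the sketch below computes a shortest path on the logical topology and emits one (logical switch, match, output port) entry per hop. The data layout, the egress-port table, and the match fields are our own illustrative assumptions.

```python
# Translate a shortest-path route on the logical topology into per-hop
# flow-table entries (illustrative only; a real controller also handles
# reverse paths, ARP, and failure cases).
import networkx as nx

logical = nx.Graph([("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s4", "s1")])

# Physical egress port for each directed logical link (known from the LP step).
egress_port = {("s1", "s2"): 1, ("s2", "s1"): 2, ("s2", "s3"): 3, ("s3", "s2"): 4,
               ("s3", "s4"): 5, ("s4", "s3"): 6, ("s4", "s1"): 7, ("s1", "s4"): 8}

def flow_entries(src_sw, dst_sw, match):
    """One (logical switch, match, out_port) entry per hop of the route."""
    path = nx.shortest_path(logical, src_sw, dst_sw)
    return [(hop, match, egress_port[(hop, nxt)]) for hop, nxt in zip(path, path[1:])]

print(flow_entries("s1", "s3", {"ipv4_dst": "10.0.0.3"}))
# e.g. [('s1', {'ipv4_dst': '10.0.0.3'}, 1), ('s2', {'ipv4_dst': '10.0.0.3'}, 3)]
```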
§.§.§ Deadlock Avoidance and Network Monitor
These two modules are dedicated to DCNs. The former works in lossless networks, such as RDMA over Converged Ethernet (RoCE), together with the Routing Strategy module to avoid deadlocks. The latter is mainly used for network telemetry: for example, the SDT controller periodically collects per-port statistics from the OpenFlow switches through the provided API. The collected data can further be used to calculate the load of each logical switch in the case of adaptive routing.
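A minimal sketch of such a telemetry loop in Ryu is shown below; the polling interval, class name, and logging are our own simplifications, and the handler that fills the datapath registry is omitted.

```python
# Illustrative Ryu port-statistics poller (OpenFlow 1.3), simplified.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib import hub
from ryu.ofproto import ofproto_v1_3


class PortMonitor(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.datapaths = {}                       # filled by a state-change handler
        self.monitor_thread = hub.spawn(self._poll_loop)

    def _poll_loop(self):
        while True:
            for dp in self.datapaths.values():
                parser = dp.ofproto_parser
                dp.send_msg(parser.OFPPortStatsRequest(dp, 0, dp.ofproto.OFPP_ANY))
            hub.sleep(5)                          # polling period in seconds

    @set_ev_cls(ofp_event.EventOFPPortStatsReply, MAIN_DISPATCHER)
    def _stats_reply(self, ev):
        for stat in ev.msg.body:
            # Differencing rx/tx byte counters over time yields per-port load,
            # which can be aggregated per logical switch for adaptive routing.
            self.logger.info("port %d rx=%d tx=%d",
                             stat.port_no, stat.rx_bytes, stat.tx_bytes)
```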
We use the SDT controller to implement some prevalent network functions to evaluate SDT's capability. For details, please refer to <ref>.
§ EVALUATION
In this section, we conduct several experiments to answer the questions, including:
* Will SDT introduce additional overhead (e.g., latency) compared to a full testbed? ( <ref>)
* How many types of topologies can SDT project? ( <ref>)
* How cost-effective and scalable is SDT compared to previous TP methods? ( <ref>)
* How much speed-up can SDT bring to network experiments? ( <ref>)
* Can existing network functions be applied to SDT? ( <ref>)
It is worth mentioning that all topology reconfigurations of SDT in this section are done remotely without any manual rewiring.
§.§ Experiment Setup
§.§.§ SDT Cluster Setup
We use 3 H3C S6861-54QF OpenFlow switches (with 64 10Gbps SFP+ ports and 6 40Gbps QSFP+ ports, which can be split into 4 10Gbps SFP+ ports) for SDT. We use 16 HPE DL360 Gen9 servers with E5-2695v4 (18 cores and 36 threads) as host servers and virtualize them to 32 computing nodes (i.e., virtual machines). Each host server has one Mellanox ConnectX-4 10GbE dual-port NIC. Each computing node is allocated with 32GB RAM and 8 CPU cores. Moreover, each computing node is bound with a physical NIC port through SR-IOV to ensure that the virtualization will not become the performance bottleneck. All the network devices support the Priority Flow Control (PFC) for lossless ethernet.
§.§.§ Baselines
We use a full testbed to compare the accuracy of SDT in terms of latency and bandwidth. We compare the Application Completion Time (ACT) of SDT with a self-designed simulator running different HPC applications under different topologies. We also evaluate the cost-effectiveness and scalability compared to SP, SP-OS, and TurboNet <cit.>.
The network simulator we use is based on two popular simulators, BookSim <cit.> and SST/Macro <cit.>. It supports a range of features needed by the evaluations (including PFC, cut-through, trace replaying, etc.) and is event-driven for efficiency. To run the same applications as the nodes on SDT, the simulator replays traces collected from running the HPC applications on real computing nodes, which ensures its authenticity. We only compare SDT to TurboNet with Port Mapper (PM) because the number of queues per port in the topology projected by Queue Mapper (QM) is inadequate for experiments inside DCs.
§.§ TP Accuracy of SDT
§.§.§ Latency
We construct a multi-hop topology for the latency and bandwidth tests as shown in Figure <ref>. The topology consists of 8 switches and 8 computing nodes, with one node connected to each switch. The switches and nodes are inter-connected with 10Gbps links. We build this topology on SDT and on a full testbed and compare the latency between Node 1 and Node 8 using the Pingpong application in the Intel MPI Benchmark (IMB) <cit.>. The application runs on the RoCEv2 network with ECN disabled.
We perform the latency test 10k times with incremental message lengths (param -msglen) and collect the latencies. Define the average latency of the full testbed as l_r and that of SDT as l_s. The overhead is calculated as (l_s - l_r)/l_r. Figure <ref> shows that SDT brings an acceptable overhead to the RTT. It is worth noting that the latency in the RoCEv2 network is quite small, which means that introducing any tiny delay can lead to large deviations in the results. For example, the 10-hop latency for message lengths below 256 bytes is under 10μ s. Although the latencies on RoCEv2 are sensitive to the hardware conditions, the overheads brought by SDT are below 1.6%, which can be ignored. As the message length increases, the overhead brought by SDT becomes smaller.
§.§.§ Bandwidth
We use iperf3 to construct an incast scenario for the bandwidth test: all other nodes send 10Gbps TCP traffic to node 4. We compare the bandwidth on lossy and lossless networks (with PFC off and on, respectively).
The results (refer to Figure <ref>) demonstrate that with PFC enabled, the bandwidth allocation for each iperf3 flow aligns with the full testbed. For instance, nodes 3 and 5, which have 2 congestion points on their path to node 4, have comparable bandwidth when controlled by PFC in both the SDT and full testbed. Their bandwidth allocation is significantly distinct from that of other nodes with different hop counts. In the network without PFC, the bandwidth distribution between SDT and the full testbed has a nearly identical trend. Nodes that can allocate relatively high bandwidth (which may be influenced by RTT and other factors) behave similarly in both the actual topology and SDT. The trends are nearly alike for nodes with lower bandwidth. The only differences may be due to the additional overhead introduced by SDT, leading to slight differences in RTT and therefore different window growth rates.
To summarize, the way SDT builds the topology does introduce a bit of additional overhead, resulting in a deviation of 1.6% or less to the latencies compared to the full testbed in our environment. Our initial speculation is that these additional latency overheads are because TP increases the load of the switch's crossbar, which causes a slight bias compared to the real environment. These deviations are reasonable and have a negligible impact on the bandwidths.
During the evaluation, we also evaluate the hardware isolation using the Wireshark network sniffer on the client side. We deploy two unconnected topologies in one SDT testbed and conduct the same Pingpong experiment separately. The evaluation results show that the client's port does not receive packets from nodes that are not connected in one topology.
§.§ Scalability, Convenience, and Cost of SDT
We use simulations to compare the scalability, convenience, and cost of SDT and other TP methods (SP, SP-OS, and TurboNet <cit.>) on the projection of multiple topologies, including widely-used DC topologies (Fat-Tree, Dragonfly, and Torus) and 261 WAN topologies (from the Internet Topology Zoo <cit.>). The reconfiguration time is measured as the total time from when the configuration is issued until the network is available. The hardware costs are extrapolated from the current market prices of the hardware.
Table <ref> presents the results of the evaluations and shows that SDT can project more topologies than TurboNet at the same hardware cost, making it more scalable and cost-efficient than SP and SP-OS. SP requires manual reconnection, making reconfiguration time-consuming and prone to errors, especially for large topologies. SP-OS incorporates optical switches (OS) to facilitate reconfiguration but suffers from expensive hardware costs. TurboNet employs the loopback port of P4 switches for reconfiguration, resulting in halved bandwidth on the ports and reduced scalability compared to SDT. Also, recompiling the P4 program is time-consuming. SDT is the best option among these solutions due to its excellent scalability and cost-effectiveness.
§.§ Comparison between SDT, Simulator, and Full Testbed
We run a batch of HPC applications and benchmarks, including HPCG, HPL, miniGhost, miniFE, and IMB, to verify the ACT differences among SDT, the simulator, and the full testbed.
The HPC applications can verify the universality of SDT in the network experiments, while the IMB Alltoall is a pure traffic benchmark without any computation, ideal for verifying the impact on network performances brought by SDT's overhead. We run the applications on specific topologies and construct the topologies on both SDT and simulator. All parameters remain the same for the simulator and SDT, including PFC thresholds, congestion control, DCQCN enabled, cut-through enabled, et al. For details on network functions like deadlock avoidance, please refer to <ref>.
We select the topologies 1) Dragonfly with a=4, g=9 <cit.>, and h=2, 2) Fat-Tree with k=4 <cit.>, 3) 5x5 2D-Torus, and 4) 4x4x4 3D-Torus <cit.> for evaluation. For the topologies with the number of nodes greater than 32, we randomly select the nodes but keep the same among all the evaluations.
Table <ref> shows the difference in real-application evaluation between SDT and the simulator. Ax (B%) in the table means that the evaluation on SDT is A times faster than on the simulator, with an ACT difference of B%. The results show that the ACT collected on SDT is almost identical to that of the simulator, with a maximum deviation of 3%. However, the time consumption of SDT is greatly reduced compared to the simulator, especially for applications with heavy traffic.
Further evaluations are conducted to assess the performance improvement brought by SDT as the number of nodes increases. Figure <ref> compares the time consumption of the full testbed (real ACT), the simulator, and SDT when evaluating the IMB Alltoall benchmark on a Dragonfly topology (a=4, g=9, h=2) with 1, 2, 4, 8, 16, and 32 randomly selected nodes. Note that SDT's time consumption includes the deployment time of the topology. Results show that when the ACT is short, the topology deployment time adds noticeable overhead to the evaluation time, but SDT is still faster than the simulator. It is worth mentioning that the simulation time may be affected by the performance of the machine running the simulation, but this does not change the fact that simulation is much slower than a real experiment on SDT.
To summarize, SDT can well construct actual network topologies. The experiments performed on SDT show almost the same ACT as the real environments and the simulations, while SDT has much lower costs than the full testbed and is much faster than the simulator. There are good reasons that SDT can be used for more authentic and efficient network evaluations than simulators and emulators.
§.§ Running Prevalent Network Functions on SDT
We also evaluate the feasibility of deploying prevalent network features on the SDT, with two specific modern network functions, RoCEv2, and a naive active routing.
RoCEv2 works over lossless Ethernet with PFC enabled. Since SDT does not make any hardware modifications to the physical ports, a lossless network environment can be built by simply enabling PFC on both the switches and the NIC ports. Moreover, DCQCN <cit.> is an end-to-end congestion control method that delays the generation of PFC messages. Like PFC, DCQCN can be enabled by directly turning it on as long as the switch and the NIC support it. We further deploy three types of deadlock avoidance methods alongside routing strategies on SDT (Table <ref>), which work properly in the evaluation of real applications (see <ref>).
We implement an active routing algorithm based on <cit.> for the Dragonfly topology (a=4, g=9, h=2, with randomly selected 32 nodes). This algorithm extends Dragonfly's minimal routing policy by estimating network congestion according to the statistic data from Network Monitor module. We evaluate active routing using a prevalent pure communication application, i.e., IMB Alltoall. Results show that active routing works well on SDT, which can reduce the ACT of the IMB Alltoall.
In summary, SDT shows strong adaptability to existing network functions. Most existing ethernet features can be easily deployed in SDT. Researchers can use SDT to validate existing network functions in multiple-scale topologies or to develop and evaluate new network functions using SDT.
§ DISCUSSION AND FUTURE WORK
§.§ Flexibility Enhancement
In SDT, the inter-switch link reservation issue might occur (<ref>): manual operations may still be required once the reserved inter-switch links cannot accommodate a new user-defined topology. To handle this, SDT can leverage optical switches to dynamically turn a link into either a self-link or an inter-switch link according to the topology requirements, which further enhances SDT's flexibility. We are designing an SDT controller that incorporates optical switches and are investigating whether this introduces additional challenges.
§.§ Switch Selection
The SDT controller in this paper performs TP operations on commodity OpenFlow switches. Generally, other switches can also be used for TP if they meet the following conditions: 1) allowing loopback packets to pass through self-links (or the STP protocol can be disabled), and 2) supporting 5-tuple matching or others similar to determine the forwarding of packets. For instance, other types of switches, like switches supporting extended ACL tables, are also suitable for TP. The P4-based (Intel Tofino) SDT controller is under refinement.
§.§ Resource Limitation
In SDT, the most critical resource is the maximum number of flow table entries supported by each OpenFlow switch. When a switch runs out of flow table entries during the setup of the logical topology, the setup procedure may fail or other unexpected failures could occur. The SDT controller leverages a built-in module that checks the number of available table entries to avoid such problems. If the demand for entries exceeds the available capacity, it can merge entries, split the topology, or inform operators to add more switches. In our evaluation, the problem of inadequate flow table capacity is rare. For instance, when we project a Fat-Tree with k=4 (containing 20 switches and 16 nodes) onto 2 OpenFlow switches, each switch requires only about 300 flow table entries, which is not difficult for modern commercial OpenFlow switches to accommodate.
§ CONCLUSION
We summarize the advantages and disadvantages of existing network evaluation tools and distill the methodology of an alternative approach called “Topology Projection” (TP). Based on the idea of TP, we propose SDT, a deployment-friendly and automatically reconfigurable network topology testbed. SDT allows researchers to use several commodity OpenFlow switches to build network topologies based on user-defined topology configurations. SDT is fully transparent to other network components and can significantly reduce the deployment cost of network topology evaluations. We also develop the corresponding SDT controller for automatic topology reconfiguration. Through evaluations, we find that SDT can achieve almost the same physical properties as the full testbed and runs up to 2899x faster on network evaluations than the simulator does. SDT is more cost-effective and scalable than other TP solutions and can support a wide range of network research works.
§ ACKNOWLEDGEMENTS
This work is sponsored by the Key-Area Research and Development Program of Guangdong Province (2021B0101400001), National Natural Science Foundation of China (62150610497, 62172108, 62002066), Natural Science Foundation of Shanghai (23ZR1404900), the Major Key Project of PCL, and Open Research Projects of Zhejiang Lab (2022QA0AB07). We also sincerely appreciate the anonymous reviewers for their valuable and constructive feedback.
Intrinsic Separation Principles

Boris Houska

August 12, 2023
================================================================================================================================
This paper is about output-feedback control problems for general linear systems in the presence of given state-, control-, disturbance-, and measurement error constraints. Because the traditional separation theorem in stochastic control is inapplicable to such constrained systems, a novel information-theoretic framework is proposed. It leads to an intrinsic separation principle that can be used to break the dual control problem for constrained linear systems into a meta-learning problem that minimizes an intrinsic information measure and a robust control problem that minimizes an extrinsic risk measure. The theoretical results in this paper can be applied in combination with modern polytopic computing methods in order to approximate a large class of dual control problems by finite-dimensional convex optimization problems.
§ INTRODUCTION
The separation principle in stochastic control is a fundamental result in control theory <cit.>, closely related to the certainty-equivalence principle <cit.>. It states that certain problems of optimal control and state estimation can be decoupled.
For general control systems, however, the separation theorem fails to hold. Thus, if one is interested in finding optimal output-feedback control laws for such systems, one needs to solve a rather complicated dual control problem <cit.>. There are two cases where such dual- or output-feedback control problems are of interest:
2pt
* The first case is that we have an uncertain nonlinear system—in the easiest case, without state- and control constraints—for which the information content of future measurements depends on the control actions. In practice, this dependency can often be neglected, because, at least for small measurement errors and process noise, and under certain regularity assumptions, the separation theorem holds in a first order approximation <cit.>. Nevertheless, there are some nonlinear systems that can only be stabilized if this dependency is taken into account <cit.>.
* And, the second case is that we have an uncertain linear system with state- and control constraints. Here, the process noise and future measurement errors have to be taken into account if one wants to operate the system safely, for instance, by designing output-feedback laws that ensure constraint satisfaction for all possible uncertainty scenarios.
The current paper is about the second case. This focus is motivated by the recent trend towards the development of safe learning and control methods <cit.>.
§.§ Literature Review
Dual control problems have been introduced by Feldbaum in the early 1960s <cit.>. Mature game-theoretic and stochastic methods for analyzing such dual- and output feedback control problems have, however, only been developed much later. They go back to the seminal work of N.N. Krasovskii <cit.> and A.B. Kurzhanskii, <cit.>. Note that these historical articles are complemented by modern set-theoretic control theory <cit.>. Specifically, in the context of constrained linear systems, set-theoretic notions of invariance under output feedback can be found in the work of Dórea <cit.>, which focuses on the invariance of a single information set, and in the work of Artstein and Raković <cit.>, which focuses on the invariance of a collection of information sets. Moreover, a variety of set-theoretic output-feedback control methods for constrained linear systems have appeared in <cit.>. These have in common that they propose to take bounds on measurement errors into account upon designing a robust predictive controller. In this context, the work of Goulart and Kerrigan must be highlighted <cit.>, who found a remarkably elegant way to optimize uncertainty-affine output feedback control laws for constrained linear systems. A general overview of methods for output-feedback and dual model predictive control (MPC) can be found in <cit.>, and the reference therein.
§.§ Contribution
The three main contributions of this paper can be outlined as follows.
Meta Information Theory. While traditional information theories are based on the assumption that one can learn from accessible data, models for predicting the evolution of an uncertain control system require a higher level of abstraction. Here, one needs a prediction structure that is capable of representing the set of all possible future information states of a dynamic learning process without having access to future measurement data. Note that a comprehensive and thorough discussion of this aspect can be found in the above mentioned article by Artstein and Raković <cit.>, in which notions of invariance under output-feedback for collections of information sets are introduced. Similar to their construction, the current article proposes a meta information theoretic framework that is based on a class of information set collections, too. A novel idea of the current article in this regard, however, is the introduction of intrinsic equivalence relations that can be used to categorize information sets with respect to their geometric properties. This leads to an algebraic-geometric definition of meta information spaces in which one can distinguish between extrinsic and intrinsic information measures. Here, intrinsic information about a system is needed to predict what we will know about its states, while extrinsic information is needed to predict and assess the risk that is associated to control decisions.
Intrinsic Separation Principle. The central contribution of this paper is the introduction of the intrinsic separation principle. It formalizes the fact that the intrinsic information content of a constrained linear system does not depend on the choice of the control law. An important consequence of this result is that a large class of dual receding horizon control problems can be solved by separating them into a meta learning problem that predicts intrinsic information and a robust control problem that minimizes extrinsic risk measures. Moreover, the intrinsic separation principle can be used to analyze the existence of solutions to dual control problems under certain assumptions on the continuity and monotonicity of the objective function of the dual control problem.
Polytopic Dual Control. The theoretical results in this paper are used to develop practical methods to approximately solve dual control problems for linear systems with convex state- and control constraints as well as polytopic process noise and polytopic measurement error bounds. In order to appreciate the novelty of this approach, it needs to be recalled first that many existing robust output-feedback control methods, for instance the state-of-the-art output-feedback model predictive control methods in <cit.>, are based on a set-theoretic or stochastic analysis of a coupled system-observer dynamics, where the control law depends on a state estimate. This is in contrast to the presented information theoretic approach to dual control, where control decisions are made based on the system's true information state rather than a state estimate. In fact, for the first time, this paper presents a polytopic dual control method that neither computes vector-valued state estimates nor introduces an affine observer structure. Instead, the discretization of the control law is based on optimizing a finite number of control inputs that are associated to so-called extreme polytopes. The shapes, sizes, and orientations of these extreme polytopes encode the system's intrinsic information while their convex hull encodes the system's extrinsic information. The result of this discretization is a finite dimensional convex optimization problem that approximates the original dual control problem.
§.§ Overview
The paper is structured as follows.
1pt
* Section <ref> reviews the main idea of set-theoretic learning and introduces related notation.
* Section <ref> establishes the technical foundation of this article. This includes the introduction of meta information spaces and a discussion of the difference between intrinsic and extrinsic information measures.
* Section <ref> introduces the intrinsic separation principle for constrained linear systems, see Theorem <ref>.
* Section <ref> discusses how to resolve dual control problems by intrinsic separation, see Theorem <ref>.
* Section <ref> presents methods for discretizing dual control problems using polytopic information set approximations. The main technical result is summarized in Theorem <ref>. A numerical case study is presented. And,
* Section <ref> summarizes the highlights of this paper.
§.§ Notation
Throughout this paper, 𝕂^n denotes the set of closed subsets of ℝ^n, while 𝕂_c^n denotes the set of compact subsets of ℝ^n. It is equipped with the Hausdorff distance
d_H(X,Y) ≔ max{ max_x ∈ X min_y ∈ Y ‖ x-y ‖ , max_y ∈ Y min_x ∈ X ‖ x-y ‖ }
for all X,Y ∈𝕂_c^n, where ‖·‖: ℝ^n →ℝ denotes a norm on ℝ^n, such that (𝕂_c^n,d_H) is a metric space. This definition can be extended to 𝕂^n as follows: if the maxima in the above definition do not exist, we set d_H(X,Y) = ∞. The pair (𝕂^n,d_H) is called an extended metric space.
Finally, the notation cl(·) is used to denote the closure, assuming that it is clear from the context what the underlying metric distance function is. For instance, if 𝔛⊆𝕂^n denotes a set of closed sets, cl(𝔛) denotes the closure of 𝔛 in (𝕂^n,d_H).
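For finite point clouds, the Hausdorff distance introduced above can be evaluated with standard numerical tools. The following sketch (our own illustration, using the Euclidean norm and scipy's directed_hausdorff) approximates d_H for two sampled sets.

```python
# Hausdorff distance between two finite subsets of R^2:
# d_H(X, Y) = max( max_x min_y ||x - y||, max_y min_x ||x - y|| ).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))   # finite sample standing in for X
Y = X + np.array([0.5, 0.0])                # a translated copy of the sample

d_xy = directed_hausdorff(X, Y)[0]          # max_x min_y ||x - y||
d_yx = directed_hausdorff(Y, X)[0]          # max_y min_x ||x - y||
print(max(d_xy, d_yx))                      # close to 0.5 (the shift length)
```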
§ INFORMATION SPACES
An information space (ℐ,d,⊓) is a space in which learning can take place. This means that (ℐ,d) is an extended metric space that is equipped with a learning operator
⊓: ℐ×ℐ→ℐ ,
such that (ℐ,⊓) is a semi-group. Among the most important examples for such spaces is the so-called set-theoretic information space, which is introduced below.
§.§ Set-Theoretic Learning
In the context of set-theoretic learning <cit.>, ℐ = 𝕂^n denotes the set of closed subsets of the vector space ℝ^n, while d = d_H denotes the (extended) Hausdorff distance. Here, the standard intersection operator takes the role of a learning operator,
⊓ = ∩ ,
recalling that the intersection of closed sets is closed. The motivation behind this definition can be outlined as follows: let us assume that we currently know that a vector is contained in a given set X ∈𝕂^n. If we receive additional information, for instance, that the vector x is also contained in the set Y ∈𝕂^n, our posterior information is that x is contained in the intersection of the sets X and Y, which is denoted by X ∩ Y.
Note that the above set-theoretic framework is compatible with continuous functions. If f: ℝ^n →ℝ^m denotes such a continuous function, the notation
∀ X ∈𝕂^n, f(X) ≔ { f(x) | x ∈ X }
is used to denote its associated continuous image map. It maps closed sets in ℝ^n to closed sets in ℝ^m. Similarly, for affine functions of the form f(x) = A x + b, the notation
AX+b = { Ax+b | x ∈ X }
is used, where A and b are a matrix and a vector with compatible dimensions. And, finally, the Minkowski sum
X+Y ≔ { x+y | x ∈ X, y ∈ Y },
is defined for all sets X,Y ∈𝕂^n.
Set theoretic learning models can be augmented by probability measures in order to construct statistical information spaces <cit.>. In such a context, every element of ℐ consists of a set X and a probability distribution ρ_X on X. A corresponding metric is then constructed by using the Wasserstein distance <cit.>. Moreover, if (X,ρ_X) ∈ℐ and (Y,ρ_Y) ∈ℐ are two independent random variables, the learning operation
(X,ρ_X) ⊓ (Y,ρ_Y) ≔ ( X ∩ Y , ρ_XY )
has the form of a Bayesian learning update,
ρ_XY(x) ≔ ρ_X(x)ρ_Y(x) / ∫_X ∩ Y ρ_X(y)ρ_Y(y) dy .
Thus, as much as the current paper focuses—for simplicity of presentation—on set-theoretic learning, most of the developments below can be generalized to statistical learning processes by augmenting the support sets with probability distributions or probability measures <cit.>.
§.§ Expectation and Deviation
Expectation and deviation functions are among the most basic tools for analyzing learning processes <cit.>. The expectation function is defined by
∀ X ∈𝕂_c^n, E(X) ≔ ( ∫_X x dx ) / ( ∫_X 1 dx ) .
It is a continuous function on 𝕂_c^n that satisfies
E(AX+b) = A E(X) + b .
For the special case that the compact set X is augmented by its associated uniform probability distribution, as discussed in Remark <ref>, the above definition of E(X) corresponds to the traditional definition of expected value functions in statistics. Similarly, a deviation function D: 𝕂_c^n →ℝ is a continuous and radially unbounded function that satisfies
2pt
* D(X) ≥ 0,
* D(X) = 0 if and only if X = { E(X) },
* D(X) = D(X-E(X)), and
* D(X ∩ Y) ≤ D(X),
for all X,Y ∈𝕂_c^n. While statistical learning models often use the variance of a random variable as a deviation measure, a more natural choice for D in the context of set theoretic learning is given by the diameter,
D(X) = diam(X) ≔ max_x,y ∈ X ‖ x - y ‖ .
A long and creative list of other possible choices for D can be found in <cit.>.
§ META LEARNING
The above definition of information spaces assumes that information or data is accessible at the time at which learning operations take place. If one wishes to predict the future evolution of a learning process, however, one faces the problem that such data is not available yet. Therefore, this section proposes to introduce a meta information space in which one can represent the set of all possible posterior information states of a learning process without having access to its future data. Informally, one could say that a meta information space is an abstract space in which one can “learn how to learn”.
§.§ Information Ensembles
The focus of this and the following sections is on the set-theoretic framework recalling that 𝕂_c^n denotes the set of compact subsets of ℝ^n. A set 𝔛⊆𝕂_c^n is called an information ensemble of 𝕂_c^n if
∀ Y ∈𝕂_c^n, X ∩ Y ∈𝔛
for all X ∈𝔛. Because ∅ = X ∩∅∈𝔛, any information ensemble contains the empty set.
If 𝔛⊆𝕂_c^n is an information ensemble, then cl(𝔛) is an information ensemble, too.
Let 𝔛 be a given information ensemble and let X_∞∈cl(𝔛) be a given set in its closure. Then there exists a Cauchy sequence X_1,X_2,…∈𝔛 such that
X_∞ = lim_k →∞ X_k ∈ cl(𝔛) .
Next, let Y ∈𝕂_c^n be an arbitrary compact set. The case X_∞∩ Y = ∅ is trivial, since ∅∈𝔛⊆cl(𝔛). Next, if X_∞∩ Y ≠∅, then there exists for every ξ∈ X_∞∩ Y an associated sequence z_1(ξ) ∈ X_1, z_2(ξ) ∈ X_2, … with
lim_k →∞ z_k(ξ) = ξ .
This construction is such that the sets
Z_k ≔ cl( { z_k(ξ) | ξ∈ X_∞∩ Y } )
satisfy Z_k = X_k ∩ Z_k ∈𝔛, since 𝔛 is an information ensemble. Consequently, it follows that
X_∞∩ Y = lim_k →∞ Z_k ∈ cl(𝔛) .
Thus, cl(𝔛) is an information ensemble, as claimed by the statement of the proposition.
Information ensembles can be used to construct information spaces, as pointed out below.
Let 𝔛 be an information ensemble of 𝕂_c^n. Then (𝔛,d_H,∩) is an information space.
Condition (<ref>) implies that X ∩ Y ∈𝔛 for all X,Y ∈𝔛. Thus, (𝔛,∩) is a subsemigroup of (𝕂_c^n,∩). Moreover, d_H defines a metric on 𝔛. Consequently, (𝔛,d_H,∩) is an information space.
The difference between information ensembles and more general set collections, as considered in <cit.>, is that Property (<ref>) is enforced. Note that this property makes a difference in the context of developing a coherent learning algebra: if (<ref>) would not hold, (𝔛,∩) would, in general, not be a subsemigroup of (𝕂_c^n,∩).
§.§ Extreme Sets
A set X ∈cl(𝔛) of a given information ensemble 𝔛⊆𝕂_c^n is called an extreme set of 𝔛 if
∀ Y ∈cl(𝔛) ∖{ X }, X ∩ Y ≠ X .
The set of extreme sets of 𝔛 is denoted by ∂𝔛. It is called the boundary of the information ensemble 𝔛. Clearly, we have ∂𝔛 ⊆cl(𝔛), but, in general, ∂𝔛 is not an information ensemble. Instead, ∂𝔛 can be interpreted as a minimal representation of the closure of 𝔛, because
cl(𝔛) = { Y ∈𝕂_c^n | ∃ X ∈∂𝔛, Y ⊆ X } .
Conversely, the closure of 𝔛 can be interpreted as the smallest information ensemble that contains ∂𝔛.
§.§ Meta Information Spaces
Let 𝕀^n denote the set of closed information ensembles of 𝕂_c^n; that is, the set of closed subsemigroups of (𝕂_c^n,∩) that are closed under intersection with sets in 𝕂_c^n. Similarly, the notation 𝕀_c^n will be used to denote the set of compact information ensembles of the information space (𝕂_c^n,d_H,∩). Next, the meta learning operator is introduced by defining
𝔛⊓𝔜 ≔ { X ∩ Y | X ∈𝔛, Y ∈𝔜 }
for all 𝔛,𝔜∈𝕀^n. A corresponding metric distance function, Δ_H is given by
Δ_H(𝔛, 𝔜) ≔ max{ max_X ∈𝔛 min_Y ∈𝔜 d_H(X,Y), max_Y ∈𝔜 min_X ∈𝔛 d_H(X,Y) }
for all 𝔛, 𝔜∈𝕀_c^n such that (𝕀_c^n,Δ_H) is a metric space. Similar to the construction of the Hausdorff distance d_H, the definition of Δ_H can be extended to 𝕀^n by understanding the above definition in the extended value sense. The following proposition shows that the triple (𝕀^n, Δ_H, ⊓) is an information space. It is called the meta information space of (𝕂_c^n, d_H, ∩).
The triple (𝕀^n, Δ_H, ⊓) is an information space. It can itself be interpreted as a set-theoretic information space in the sense that we have
𝔛⊓𝔜 = 𝔛∩𝔜
for all 𝔛,𝔜∈𝕀^n.
The proof of this proposition is divided into two parts: the first part shows that (<ref>) holds and the second part uses this result to conclude that (𝕀^n, Δ_H, ⊓) is an information space.
Part I. Let 𝔛,𝔜∈𝕀^n be given information ensembles. For any X ∈𝔛∩𝔜 the intersection relation
X ∩ X = X ∈𝔛∩𝔜
holds. But this implies that
𝔛∩𝔜 ⊆ { X ∩ Y | X ∈𝔛, Y ∈𝔜 } = 𝔛⊓𝔜 .
In order to also establish the reverse inclusion, assume that Z ∈𝔛⊓𝔜 is a given set. It can be written in the form Z = X ∩ Y with X ∈𝔛 and Y ∈𝔜. Clearly, we have X ∈𝕂_c^n and Y ∈𝕂_c^n. Moreover, we have Z ∈𝕂_c^n, since the intersection of compact sets is compact. Thus, since 𝔛 and 𝔜 are information ensembles, (<ref>) implies that Z = X ∩ Y ∈𝔛 and Z = Y ∩ X ∈𝔜. But this is the same as saying that Z ∈𝔛∩𝔜, which implies 𝔛∩𝔜⊇𝔛⊓𝔜. Together with the inclusion established above, this yields (<ref>).
Part II. Note that (𝕀^n,∩) is a semigroup, which follows from the definition of intersection operations. Moreover, (𝕀^n,Δ_H) is, by construction, an extended metric space. Thus, (𝕀^n, Δ_H, ⊓) is indeed an information space, as claimed by the statement of this proposition.
The triple (𝕀_c^n,Δ_H,⊓) is also an information space. It can be interpreted as a sub-meta information space of (𝕀^n,Δ_H,⊓).
The statement of this corollary follows immediately from the previous proposition, since the intersection of compact sets is compact; that is, (𝕀_c^n,⊓) is a subsemigroup of (𝕀^n,⊓).
The statement of the above proposition about the fact that (𝕀^n,Δ_H,⊓) can be interpreted as a set-theoretic information space can be further supported by observing that this space is naturally compatible with continuous functions, too. Throughout this paper, the notation
f(𝔛) ≔ { f(X) | X ∈𝔛 }
is used for any 𝔛∈𝕀^n, recalling that f(X) denotes the compact image set of a continuous function f on a compact set X ∈𝕂_c^n. Due to this continuity assumption on f, closed information ensembles are mapped to closed information ensembles.
§.§ Interpretation of Meta Learning Processes
Meta information spaces can be used to analyze the evolution of learning processes without having access to data. In order to discuss why this is so, a guiding example is introduced: let us consider a set-theoretic sensor, which returns at each time instance a compact information set X ∈𝕂_c^1 containing the scalar state x of a physical system, x ∈ X. If the absolute value of the measurement error of the sensor is bounded by 1, this means that X ⊆ [a,a+2] for at least one lower bound a ∈ℝ. The closed but unbounded information ensemble that is associated with such a sensor is given by
𝔙 = { X ∈𝕂_c^1 | ∃ a ∈ℝ: X ⊆ [a,a+2] }∈𝕀^1 .
It can be interpreted as the set of all information sets that the sensor could return when taking a measurement.
Next, in order to illustrate how an associated meta learning process can be modeled, one needs to assume that prior information about the physical state x is available. For instance, if x is known to satisfy x ∈ [-3,3], this would mean that our prior is given by
𝔛 = { X ∈𝕂_c^1 | X ⊆ [-3,3] } .
In such a situation, a meta learning process is—due to Proposition <ref>—described by an update of the form
𝔛^+ = 𝔛⊓𝔙 = 𝔛∩𝔙 ,
where 𝔛^+ denotes the posterior,
𝔛^+ = { X ∈𝕂_c^1 | [ ∃ a ∈ℝ:; X ⊆ [ max{ a,-3}, 2 + min{ 1, a } ] ]} .
It is computed without having access to any sensor data.
§.§ Intrinsic Equivalence
Equivalence relations can be used to categorize compact information sets with respect to their geometric properties. In the following, we focus on a particular equivalence relation. Namely, we consider two sets X,Y ∈𝕂_c^n equivalent, writing X ≃ Y, if they have the same shape, size, and orientation. This means that
X ≃ Y ⟺ ∃ a ∈ℝ^n: X + a = Y .
The motivation for introducing this particular equivalence relation is that two information sets X and Y can be considered equally informative if they coincide after a translation.
Two information ensembles 𝔛,𝔜⊆𝕂_c^n are called intrinsically equivalent, 𝔛∼𝔜, if their quotient spaces coincide,
(𝔛/≃) = (𝔜/≃) .
The intrinsic equivalence relation ∼ from the above definition is—as the name suggests—an equivalence relation. This follows from the fact that 𝔛∼𝔜 if and only if
[ ∀ X ∈𝔛, ∃ a ∈ℝ^n: X + a ∈ 𝔜; and ∀ Y ∈𝔜, ∃ b ∈ℝ^n: Y + b ∈ 𝔛 , ]
which, in turn, follows after substituting the above definition of ≃ in (<ref>).
If 𝔛,𝔜⊆𝕂_c^n are intrinsically equivalent information ensembles, 𝔛∼𝔜, their closures are intrinsically equivalent, too,
cl(𝔛) ∼ cl(𝔜) .
Proposition <ref> ensures that the closures of 𝔛 and 𝔜 are information ensembles, cl(𝔛) ∈𝕀^n and cl(𝔜) ∈𝕀^n. Next, there exists for every X_∞∈cl(𝔛) a convergent sequence of sets X_1,X_2,…∈𝔛 such that
X_∞ = lim_k →∞ X_k .
Moreover, since 𝔛∼𝔜, there also exists a sequence a_1,a_2,…∈ℝ^n such that the sequence
Y_k X_k + a_k ∈ 𝔜
remains in 𝔜. Because 𝔛 and 𝔜 are compact, the sequence of offsets a_k must be bounded. Thus, it has a convergent subsequence, a_j_1,a_j_2,…∈ℝ^n, with limit
a_∞ ≔ lim_k →∞ a_j_k ∈ ℝ^n .
This construction is such that
X_∞ + a_∞ = lim_k →∞ { X_j_k + a_j_k} ∈ cl(𝔜) .
A completely analogous statement holds after replacing the roles of 𝔛 and 𝔜. Consequently, the closures of 𝔛 and 𝔜 are intrinsically equivalent, which corresponds to the statement of the proposition.
§.§ Extrinsic versus Intrinsic Information
Throughout this paper, it will be important to distinguish between extrinsic and intrinsic information. Here, the extrinsic information of an information ensemble is encoded by the union of its elements, namely, the extrinsic information set. It describes present information. The extrinsic information content of an information ensemble can be quantified by extrinsic information measures:
An information measure f: 𝕀_c^n→ℝ is called extrinsic if there exists a function g: 𝕂_c^n →ℝ with
∀𝔛∈𝕀_c^n, f(𝔛) = g ( ⋃_X ∈𝔛 X ) .
In contrast to extrinsic information, the intrinsic information of an information ensemble 𝔛 is encoded by its quotient space, 𝔛/≃. It describes future information. In order to formalize this definition, it is helpful to introduce a shorthand for the meta quotient space
ℚ_c^n ≔ 𝕀_c^n/∼ .
In analogy to Definition <ref>, the intrinsic information of an information ensemble can be quantified by intrinsic information measures:
An information measure f: 𝕀_c^n→ℝ is called intrinsic if there exists a function g: ℚ_c^n →ℝ with
∀ X ∈𝕀_c^n, f(𝔛) = g(𝔛/≃) .
In order to develop a stronger intuition about the difference between extrinsic and intrinsic information measures, it is helpful to extend the definitions of the expectation and deviation functions E and D from the original information space setting in Section <ref>. These original definitions can be lifted to the meta information space setting by introducing their associated extrinsic expectation 𝔈 and extrinsic deviation 𝔇, given by
𝔈(𝔛) ≔ E( ⋃_X ∈𝔛 X ) and 𝔇(𝔛) ≔ D( ⋃_X ∈𝔛 X )
for all 𝔛∈𝕀_c^n. Note that 𝔈 and 𝔇 are continuous functions, which inherit the properties of E and D. Namely, the relation
𝔈(A 𝔛 + b ) = A 𝔈(𝔛) + b
holds. Similarly, 𝔇 satisfies all axioms of a deviation measure in the sense that
2pt
* 𝔇(𝔛) ≥ 0,
* 𝔇(𝔛) = 0 if and only if 𝔛 = {{𝔈( 𝔛 ) }},
* 𝔇(𝔛) = 𝔇(𝔛-𝔈(𝔛)), and
* 𝔇(𝔛⊓𝔜) ≤𝔇(𝔛),
for all 𝔛,𝔜∈𝕀_c^n. Note that such extrinsic deviation measures need to be distinguished carefully from intrinsic deviation measures. Here, a function 𝔇^∘: 𝕀_c^n→ℝ, is called an intrinsic deviation measure if it is a continuous and intrinsic function that satisfies
2pt
* 𝔇^∘(𝔛) ≥ 0,
* 𝔇^∘(𝔛) = 0 if and only if 𝔛∼{{𝔈( 𝔛 ) }},
* 𝔇^∘(𝔛) = 𝔇^∘(𝔛-𝔈(𝔛)), and
* 𝔇^∘(𝔛⊓𝔜) ≤𝔇^∘(𝔛),
for all 𝔛,𝔜∈𝕀_c^n. The second axiom is equivalent to requiring that 𝔇^∘ is positive definite on the quotient space ℚ_c^n. In order to have a practical example in mind, we introduce the particular function
∀𝔛∈𝕀_c^n, 𝔇_∞^∘(𝔛) = max_X ∈𝔛 max_x,y ∈ X ‖ x - y ‖ ,
which turns out to be an intrinsic information measure, as pointed out by the following lemma.
The function 𝔇_∞^∘, defined by (<ref>), is an intrinsic deviation measure on 𝕀_c^n.
Let 𝔛∈𝕀_c^n be a given information ensemble and let X^⋆ be a maximizer of (<ref>), such that
𝔇_∞^∘( 𝔛 ) = diam(X^⋆) = max_x,y ∈ X^⋆ ‖ x - y ‖ .
If 𝔜∈𝕀_c^n is an intrinsically equivalent ensemble with 𝔛∼𝔜, then there exists an offset vector a^⋆∈ℝ^n such that X^⋆+a^⋆∈𝔜. Thus, we have
𝔇_∞^∘(𝔜) = max_Y ∈𝔜 diam(Y) ≥ diam(X^⋆ + a^⋆)
= diam(X^⋆ + a^⋆ - E(X^⋆ + a^⋆) )
= diam(X^⋆ - E(X^⋆))
= diam(X^⋆) = 𝔇_∞^∘(𝔛) ,
where the equations in the second, third, and fourth line follow by using the axioms of D and E from Section <ref>. The corresponding reverse inequality follows by using an analogous argument exchanging the roles of 𝔛 and 𝔜. Thus, we have 𝔇_∞^∘(𝔛) = 𝔇_∞^∘(𝔜). This shows that 𝔇_∞^∘ is an intrinsic information measure. The remaining required properties of 𝔇_∞^∘ are directly inherited from
the diameter function, recalling that the diameter is a continuous deviation function that satisfies the corresponding axioms from Section <ref>. This yields the statement of the lemma.
Let us revisit the tutorial example from Section <ref>, where we had considered the case that
𝔛 = { X ∈𝕂_c^1 | X ⊆ [-3,3] } and
𝔛^+ = { X ∈𝕂_c^1 | [ ∃ a ∈ℝ:; X ⊆ [ max{ a,-3}, 2 + min{ 1, a } ] ]}
denote the prior and posterior of a data-free meta learning process. If we set D(X) = diam(X) and define 𝔇 and 𝔇_∞^∘ as above, then
𝔇( 𝔛 ) = 𝔇( 𝔛^+ ) = 6 .
An interpretation of this equation can be formulated as follows: since our meta learning process is not based on actual data, the extrinsic information content of the prior 𝔛 and the posterior 𝔛^+ must be the same, which implies that their extrinsic deviations must coincide. This is in contrast to the intrinsic deviation measure,
𝔇_∞^∘( 𝔛 ) = 6 > 2 = 𝔇_∞^∘( 𝔛^+ ),
which predicts that no matter what our next measurement will be, the diameter of our posterior information set will be at most 2.
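The numbers in this example are easily reproduced numerically. In the sketch below (our own illustration), the extreme sets of the prior and the posterior are represented as intervals, the diameter is used for D, and both the extrinsic and the intrinsic deviation are evaluated by brute force over a grid of offsets a.

```python
# Reproduce D(X) = D(X+) = 6 and D_inf(X) = 6 > 2 = D_inf(X+) for the
# tutorial example, with information sets represented as intervals [lo, hi].
import numpy as np

offsets = np.linspace(-5.0, 3.0, 2001)    # offsets a with nonempty posterior sets

prior_sets = [(-3.0, 3.0)]                # single extreme set of the prior ensemble
posterior_sets = [(max(a, -3.0), 2.0 + min(1.0, a)) for a in offsets]
posterior_sets = [(lo, hi) for lo, hi in posterior_sets if lo <= hi]

def deviations(extreme_sets):
    union_diam = max(hi for _, hi in extreme_sets) - min(lo for lo, _ in extreme_sets)
    worst_diam = max(hi - lo for lo, hi in extreme_sets)
    return union_diam, worst_diam          # (extrinsic, intrinsic) deviation

print(deviations(prior_sets))              # (6.0, 6.0)
print(deviations(posterior_sets))          # (6.0, 2.0) up to floating-point rounding
```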
§ INTRINSIC SEPARATION PRINCIPLE
The goal of this section is to formulate an intrinsic separation principle for constrained linear systems.
§.§ Constrained Linear Systems
The following considerations concern uncertain linear discrete-time control systems of the form
[ x_k+1 = A x_k + B u_k + w_k; η_k = C x_k + v_k . ]
Here, x_k ∈ℝ^n denotes the state, u_k∈𝕌 the control, w_k ∈𝕎 the disturbance, η_k ∈ℝ^n_v the measurement, and v_k ∈𝕍 the measurement error at time k ∈ℤ. The system matrices A, B, and C as well as the state, control, disturbance, and measurement error constraint sets, 𝕏∈𝕂^n, 𝕌∈𝕂_c^n_u, 𝕎∈𝕂_c^n, and 𝕍∈𝕂_c^n_v, are assumed to be given.
§.§ Information Tubes
The sensor that measures the outputs C x_k of (<ref>) can be represented by the information ensemble
𝔙 ≔ { X ∈𝕂_c^n | ∃η∈ℝ^n_v : η - C X ⊆𝕍 } .
Since 𝕍 is compact, 𝔙 is closed but potentially unbounded, 𝔙∈𝕀^n. If 𝔛∈𝕀^n denotes a prior information ensemble of the state of (<ref>) an associated posterior is given by 𝔛⊓𝔙. This motivates to introduce the function
F( 𝔛, μ ) ≔ { X^+ ∈𝕂_c^n |
[ ∃ X ∈𝔛⊓𝔙:; X^+ ⊆ A X + B μ(X) + 𝕎 ] },
which is defined for all 𝔛∈𝕀^n and all control laws μ that map the system's posterior information state to a feasible control input. Let 𝒰 denote the set of all such maps from 𝕂^n to 𝕌. It is equipped with the supremum norm,
‖μ‖ ≔ sup_X ∈𝕂^n ‖ μ(X) ‖ ,
such that (𝒰,‖·‖) is a Banach space. As μ∈𝒰 is potentially discontinuous, F(𝔛,μ) is not necessarily closed. Instead, the following statement holds.
If 𝔛, 𝕌, 𝕍, and 𝕎 are closed, then the closure of the set F(𝔛,μ) is for every given μ∈𝒰 a closed information ensemble,
F̄(𝔛,μ) ≔ cl( F(𝔛,μ) ) ∈𝕀^n .
The statement of this proposition follows from Proposition <ref> and the above definition of F.
The functions F and F̄ are the basis for the following definitions.
An information ensemble 𝔛_s∈𝕀^n is called control invariant for (<ref>) if there exists a μ_s ∈𝒰 such that
𝔛_s⊇ F(𝔛_s, μ_s) .
A sequence 𝔛_0,𝔛_1,…∈𝕀^n of information ensembles is called an information tube for (<ref>) if there exists a sequence μ_0,μ_1,…∈𝒰 such that
∀ k ∈ℕ, 𝔛_k+1⊇ F(𝔛_k, μ_k) .
An information tube 𝔛_0,𝔛_1,…∈𝕀^n is called tight if it satisfies
∀ k ∈ℕ, 𝔛_k+1 = F̄( 𝔛_k, μ_k )
for at least one control policy sequence μ_k ∈𝒰.
§.§ Intrinsic Separation
The following theorem establishes the fact that the intrinsic equivalence class of tight information tubes does not depend on the control policy sequence.
Let 𝔛_0,𝔛_1, …∈𝕀_c^n and 𝔜_0, 𝔜_1, …∈𝕀_c^n be tight information tubes with compact elements. If the initial information ensembles are intrinsically equivalent, 𝔛_0 ∼𝔜_0, then all information ensembles are intrinsically equivalent; that is, 𝔛_k ∼𝔜_k for all k ∈ℕ.
Because 𝔛 and 𝔜 are tight information tubes, there exist control policies μ_k ∈𝒰 and ν_k ∈𝒰 such that
𝔛_k+1 = F(𝔛_k, μ_k ) and 𝔜_k+1 = F(𝔜_k, ν_k )
for all k ∈ℕ. Next, the statement of the theorem can be proven by induction over k: since we assume 𝔛_0 ∼𝔜_0, this assumption can be used directly as induction start. Next, if 𝔛_k ∼𝔜_k, there exists for every X_k ∈𝔛_k ∩𝔙 an offset vector a_k ∈ℝ^n such that Y_k = X_k + a_k ∈𝔜_k. Because 𝔙 satisfies
∀ a ∈ℝ^n, ∀ V ∈𝔙, V + a ∈𝔙,
it follows that Y_k = X_k + a_k ∈𝔜_k ∩𝔙. Consequently, a relation of the form
A X_k + B μ_k(X_k) + 𝕎 = A Y_k + (Bμ_k(X_k) - Aa_k) + 𝕎
= A Y_k + B ν_k(Y_k) + 𝕎 - a_k+1,
can be established, where the next offset vector, a_k+1, is given by
a_k+1 ≔ A a_k + B ν_k(Y_k) - B μ_k(X_k) ∈ ℝ^n .
Note that a completely symmetric relation holds after exchanging the roles of 𝔛_k and 𝔜_k. In summary, it follows that an implication of the form
𝔛_k ∼𝔜_k ⟹ F(𝔛_k,μ_k) ∼ F(𝔜_k,ν_k)
holds. An application of Proposition <ref> to the latter equivalence relation yields the desired induction step. This completes the proof of the theorem.
The above theorem allows us to formulate an intrinsic separation principle. Namely, Theorem <ref> implies that the predicted future information content of a tight information tube does not depend on the choice of the control policy sequence with which it is generated. In particular, the tight information tubes from (<ref>) satisfy
∀ k ∈ℕ, 𝔇^∘(𝔛_k) = 𝔇^∘(𝔜_k)
for any intrinsic information measure 𝔇^∘. Note that this property is independent of the choice of the control policy sequences μ_k and ν_k that are used to generate these tubes.
§.§ Control Invariance
As mentioned in the introduction, different notions of invariance under output-feedback control have been analyzed by various authors <cit.>. This section briefly discusses how a similar result can be recovered by using the proposed meta learning based framework. For this aim, we assume that
* the sets 𝕍∈𝕂_c^n_v and 𝕎∈𝕂_c^n_w are compact,
* the set 𝕌 is closed and convex,
* the pair (A,C) is observable, and
* (A,B,𝕌, 𝕎) admits a robust control invariant set.
The first two assumptions are standard. The third assumption on the observability of (A,C) could also be replaced by a weaker detectability condition. However, since one can always use a Kalman decomposition to analyze the system's invariant subspaces separately <cit.>, it is sufficient to focus on observable systems. And, finally, the fourth assumption is equivalent to requiring the existence of a state-feedback law μ: ℝ^n →𝕌 and a set X∈𝕂_c^n such that
∀ x ∈X, ∀ w ∈𝕎, Ax+B μ(x) +w ∈X ,
which is clearly necessary: if we cannot even keep the system inside a bounded region by relying on exact state measurements, there is no hope that we can do so without such exact data.
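For box-shaped sets and a linear state-feedback law, the robust invariance condition above can be certified by checking finitely many vertices, since the left-hand side is affine in (x,w) and the box is convex. A small sketch with hypothetical data (input constraints u = Kx ∈𝕌 would be checked analogously at the vertices):

import itertools
import numpy as np

def box_vertices(radii):
    return [np.array(s) * radii for s in itertools.product((-1.0, 1.0), repeat=len(radii))]

def is_robust_invariant(A, B, K, x_rad, w_rad):
    # Checks (A + B K) x + w in the box |x_i| <= x_rad[i] for all vertex pairs (x, w);
    # for affine dynamics and convex boxes this is necessary and sufficient.
    Acl = A + B @ K
    return all(np.all(np.abs(Acl @ xv + wv) <= x_rad + 1e-9)
               for xv in box_vertices(x_rad) for wv in box_vertices(w_rad))

A = np.array([[0.9, 0.1], [0.0, 0.9]])     # hypothetical data, not from the text
B = np.array([[0.0], [1.0]])
K = np.array([[-0.2, -0.5]])
print(is_robust_invariant(A, B, K, np.array([1.0, 0.5]), np.array([0.04, 0.04])))  # True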
If the above four assumptions hold, (<ref>) admits a compact control invariant information ensemble.
The proof of this lemma is divided into two parts, which aim at constructing an information tube that converges to a control invariant information ensemble.
Part I. The goal of the first part is to show, by induction over k, that the recursion
∀ k ∈ℕ, 𝔛_k+1^∘ ≔ A ( 𝔛_k^∘∩𝔙 ), 𝔛_0^∘ ≔ 𝕂_c^n
is set monotonous. Since 𝔛_0^∘ = 𝕂_c^n, 𝔛_1^∘⊆𝔛_0^∘ holds. This is the induction start. Next, if 𝔛_k+1^∘⊆𝔛_k^∘ holds for a given integer k ≥ 0, it follows that
𝔛_k+2^∘ = A(𝔛_k+1^∘∩𝔙 ) ⊆ A(𝔛_k^∘∩𝔙 ) = 𝔛_k+1^∘ ,
where the inclusion in the middle follows directly by substituting the induction assumption. In summary, the monotonicity relation 𝔛_k+1^∘⊆𝔛_k^∘ holds for all k ∈ℕ.
Part II. The goal of the second part is to show that the sequence
𝔛_k ≔ { X - E(X) + x | X ∈𝔛_k^∘, x∈cvx(X) } ,
converges to an invariant information ensemble. Here, cvx(X) denotes the convex hull of the robust control invariant set X. Because we assume that 𝕌 is convex, cvx(X) is robust control invariant, too. This means that there exists a μ̄: ℝ^n →𝕌 such that
∀ x ∈cvx(X), ∀ w ∈𝕎, Ax + B μ̄(x)+w ∈cvx(X) .
Since E satisfies E( X ) ∈cvx(X) for all X ∈𝕂_c^n, (<ref>) and the definitions of 𝔛_k and 𝔙 imply that
[ ∀ X ∈𝔛_k ∩𝔙, E(X) ∈cvx(X),; ∀ X ∈𝔛_k, X - E(X) ∈𝔛_k^∘; and ∀ X ∈𝔙, X - E(X) ∈𝔙 ]
for all k ∈ℕ. Thus, the state estimation based auxiliary feedback law
∀ X ∈𝕂_c^n, μ(X) ≔μ̄(E(X))
ensures that the recursive feasibility condition
A X + B μ(X) + 𝕎 = A(X-E(X)) + [ A E(X) + B μ̄(E(X)) + 𝕎 ]_⊆ cvx(X) ∈𝔛_k+1
holds for all X ∈𝔛_k ∩𝔙. Consequently, the auxiliary sequence 𝔛_k is a monotonous information tube,
∀ k ∈ℕ, 𝔛_k ⊇ 𝔛_k+1 ⊇ F(𝔛_k,μ) ,
where monotonicity follows from (<ref>) and the considerations from Part I.
Moreover, since (A,C) is observable, 𝔛_k is compact for all k ≥ n-1. In summary, 𝔛_k is a monotonously decreasing sequence of information ensembles, which—due to the monotone convergence theorem—converges to a compact control invariant information ensemble,
𝔛_∞ = lim_k →∞ 𝔛_k ∈ 𝕀_c^n_x and F(𝔛_∞,μ) ⊆ 𝔛_∞ .
This corresponds to the statement of the lemma.
The purpose of Lemma <ref> is to elaborate on the relation between control invariant information ensembles and existing notions in linear control as observability and robust stabilizability. Lemma <ref> does, however, not make statements about feasibility: the state constraint set 𝕏 is not taken into account. Moreover, the construction of the feedback law μ in (<ref>) is based on the vector-valued state estimate E(X) rather than the information state X, which is, in general, sub-optimal. Note that these problems regarding feasibility and optimality are resolved in the following section by introducing optimal dual control laws.
§ DUAL CONTROL
This section is about dual control problems for constrained linear systems. It is discussed under which assumptions such problems can be separated into a meta learning and a robust control problem.
§.§ Receding Horizon Control
Dual control problems can be implemented in analogy to traditional model predictive control (MPC) methods. Here, one solves the online optimization problem
J(X_0) = inf_𝔛,μ ∑_k=0^N-1 L(𝔛_k,μ_k) + M(𝔛_N)
s.t. {[ ∀ k ∈{ 0, 1, …, N-1 },; F(𝔛_k,μ_k) ⊆𝔛_k+1, X_0 ∈𝔛_0; μ_k ∈𝒰,; ∀ X_k ∈𝔛_k, X_k ⊆𝕏 ].
on a finite time horizon { 0,1,…, N }, where 0 is the current time. The optimization variables are the feedback policies μ_0,μ_1,…,μ_N-1∈𝒰 and their associated information tube, 𝔛_0,𝔛_1,…,𝔛_N ∈𝕀_c^n. In the most general setting, the stage and terminal cost functions,
L: 𝕀_c^n×𝒰→ℝ and M: 𝕀_c^n→ℝ,
are assumed to be lower semi-continuous, although some of the analysis results below will be based on stronger assumptions. We recall that 𝕏 denotes the closed state constraint set. The parameter X_0 ∈𝕀_c^n corresponds to the current information set. It is updated twice per sampling time by repeating the following steps online:
i) Wait for the next measurement η.
ii) Update the information set,
X_0 ← X_0 ∩{ x ∈ℝ^n |η - Cx ∈𝕍} .
iii) Solve (<ref>) and denote the first element of the optimal feedback sequence by μ_0^⋆∈𝒰.
iv) Send u^⋆ = μ_0^⋆(X_0) to the real process.
v) Propagate the information set,
X_0 ← A X_0 + B u^⋆ + 𝕎 .
vi) Set the current time to 0 and continue with Step i).
Note that Step iii) assumes that the “inf” operator in (<ref>) can be replaced by a “min” and that an associated optimal feedback policy exists. Conditions under which this can be guaranteed are discussed in Section <ref>.
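The loop can be summarized by the following skeleton, in which all problem-specific operations (measurement acquisition, the set-valued update and propagation, and the solver for the online optimization problem) are passed in as placeholder callables; it is a sketch of the control flow only.

def dual_mpc_loop(X0, get_measurement, meet, solve_dual_ocp, propagate, send_to_plant, n_steps=20):
    # Receding-horizon dual control: steps i)-vi) repeated online.
    for _ in range(n_steps):
        eta = get_measurement()            # i)   wait for the next measurement
        X0 = meet(X0, eta)                 # ii)  posterior information set
        mu0 = solve_dual_ocp(X0)           # iii) first element of the optimal policy sequence
        u = mu0(X0)                        # iv)  information-state feedback sent to the plant
        send_to_plant(u)
        X0 = propagate(X0, u)              # v)   A X0 + B u + W
    return X0                              # vi)  time is reset to 0 after every iteration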
§.§ Objectives
Tube model predictive control formulations <cit.> use risk measures as stage cost functions. In principle, any lower semi-continuous function of the form
R: 𝕂_c^n →ℝ∪{∞},
can be regarded as such a risk measure, although one would usually require that the monotonicity condition
X ⊆ Y ⟹ R(X) ≤ R(Y)
holds for all X,Y ∈𝕂_c^n. Similarly, this paper proposes to call ℜ: 𝕀_c^n →ℝ∪{∞} an extrinsic risk measure if
ℜ(𝔛) = R ( ⋃_X ∈𝔛 X )
for a lower semi-continuous function R that satisfies (<ref>).
Problem (<ref>) enforces state constraints explicitly. Alternatively, one can move them to the objective by introducing the indicator function I_𝕏 of the state constraint set 𝕏. Because we have
( ∀ X_k ∈𝔛_k, X_k ⊆𝕏 ) ⟺ I_𝕏( ⋃_X ∈𝔛_k X ) < ∞,
enforcing state constraints is equivalent to adding an extrinsic risk measure to the stage cost; here with R = I_𝕏.
In the language of this paper, the traditional objective of dual control <cit.> is to trade off extrinsic risk against intrinsic deviation. This motivates considering stage cost functions of the form
L( 𝔛,μ) = ℜ(𝔛) + τ·𝔇^∘(𝔛) .
Here, ℜ denotes a lower semi-continuous extrinsic risk measure and 𝔇^∘ a lower semi-continuous intrinsic information measure. For general nonlinear systems, the parameter τ > 0 can be used to tradeoff between risk and deviation. In the context of constrained linear systems, however, such a tradeoff is superfluous, as formally proven in the sections below.
The stage cost function (<ref>) can be augmented by a control penalty. For example, one could set
L( 𝔛,μ) = ℜ(𝔛) + τ·𝔇^∘(𝔛) + ℭ(μ) ,
where ℭ: 𝒰→ℝ models a lower semi-continuous control cost. This additional term does, however, not change the fact that the parameter τ does not affect the optimal solution of (<ref>). Details about how to construct ℭ in practice will be discussed later on in this paper, see Section <ref>.
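As a toy illustration of such a stage cost, the following sketch evaluates L for a one-dimensional ensemble represented by a finite list of its extreme intervals; the particular risk, deviation, and control penalties are illustrative choices, not the ones used later in the paper.

def stage_cost(extreme_sets, extreme_inputs, tau=0.01):
    # extreme_sets: list of intervals (lo, hi) representing the extreme sets of the ensemble
    lo = min(s[0] for s in extreme_sets)
    hi = max(s[1] for s in extreme_sets)
    risk = max(abs(lo), abs(hi)) ** 2                        # extrinsic: depends on the union only
    deviation = max(s[1] - s[0] for s in extreme_sets) ** 2  # intrinsic: sizes/shapes only
    control = sum(u ** 2 for u in extreme_inputs)            # control penalty
    return risk + tau * deviation + control

print(stage_cost([(-1.0, 1.0), (0.5, 2.0)], extreme_inputs=[0.3, -0.2]))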
§.§ Separation of Meta-Learning and Robust Control
The goal of this section is to show that one can break the dual control problem (<ref>) into an intrinsic meta learning problem and an extrinsic robust control problem. We assume that
* the stage cost function L has the form (<ref>),
* the function ℜ is an extrinsic risk measure,
* the function 𝔇^∘ is intrinsic and τ≥ 0, and
* the function M is extrinsic and monotonous,
𝔛⊆𝔜 ⟹ M(𝔛) ≤ M(𝔜) .
In this context, the meta learning problem consists of computing a constant information tube that is found by evaluating the recursion
[ ∀ k ∈ℕ, 𝔜_k+1 ≔ F(𝔜_k,ν_k); with 𝔜_0 ≔ { X ∈𝕂_c^n | X ⊆ X_0 }, ]
for a constant sequence ν_0,ν_1, …∈𝒰. For simplicity of presentation, we assume 0 ∈𝕌 such that we can set ν_k(X) = 0 without loss of generality. Due to Theorem <ref>, L satisfies
L(𝔛_k,μ_k) = ℜ(𝔛_k) + τ·𝔇^∘( 𝔜_k )
along any optimal tube of (<ref>). Consequently, (<ref>) reduces to a robust control problem in the sense that all objective and constraint functions are extrinsic, while the shapes, sizes and orientations of the sets of the optimal information tube are constants, given by (<ref>).
In summary, the contribution of intrinsic information to the objective value of (<ref>), denoted by
J_I(X_0) ≔ τ·∑_k=0^N-1𝔇^∘(𝔜_k),
depends on X_0 but it does not depend on the choice of the control law. It can be separated from the contribution of extrinsic objective terms, as elaborated below.
§.§ Existence of Solutions
In order to discuss how one can—after evaluating the meta-learning recursion (<ref>)—rewrite (<ref>) in the form of an extrinsic robust control problem, a change of variables is introduced. Let ℬ_k denote the set of bounded functions of the form c_k: 𝔜_k →ℝ^n. It is a Banach space with respect to its supremum norm
‖ c_k ‖ ≔ sup_X ∈𝔜_k ‖ c_k(X) ‖ .
Due to Theorem <ref>, any tight information tube 𝔛_0,𝔛_1, …∈𝕀_c^n, started at 𝔛_0 = 𝔜_0, is intrinsically equivalent to the precomputed tube 𝔜_0, 𝔜_1, …∈𝕀_c^n and can be written in the form
𝔛_k = { Y + c_k(Y) | Y ∈𝔜_k }
for suitable translation functions c_k ∈ℬ_k. In the following, we introduce the auxiliary set
𝒞_k ≔ { (c,c^+) | [ ∀ X ∈∂[ 𝔜_k ∩𝔙],; A c(X) - c^+( AX + 𝕎) ∈ (-B 𝕌) ]}
recalling that ∂ denotes the boundary operator that returns the set of extreme sets of a given information ensemble. Because 𝕌 is compact, 𝒞_k ⊆ℬ_k ×ℬ_k+1 is a closed set. Additionally, we introduce the shorthands
ℛ_k(c_k) ≔ ℜ( { Y + c_k(Y) | Y ∈𝔜_k } )
and ℛ_N(c_N) ≔ M( { Y + c_N(Y) | Y ∈𝔜_N } ) .
Since we assume that ℜ and M are lower-semicontinuous on 𝕀_c^n, the functions ℛ_k: ℬ_k →ℝ are lower semi-continuous on the Banach spaces ℬ_k. They can be used to formulate the extrinsic robust control problem[If the sets 𝕌 and 𝕏 and the functions ℛ_k are convex, (<ref>) is a convex optimization problem.]
J_E(X_0) = min_c_0,c_1,…,c_N ∑_k=0^N-1ℛ_k(c_k) + ℛ_N(c_N)
s.t. {[ ∀ k ∈{ 0, 1, …, N-1 },; (c_k,c_k+1) ∈𝒞_k, c_0 ≡ 0,; ∀ Y ∈𝔜_k, Y + c_k(Y) ⊆𝕏, ].
which can be used to find the optimal solution of (<ref>). In detail, this result can be summarized as follows.
Let 𝕏∈𝕂^n be a closed set, let 𝕌∈𝕂_c^n_u, 𝕍∈𝕂_c^n_v, and 𝕎∈𝕂_c^n_w be compact sets, let L be given by (<ref>) with ℜ and M being set-monotonous and lower semi-continuous extrinsic risk measures, and let 𝔇^∘ be an intrinsic lower semi-continuous information measure. Then the following statements hold.
* Problem (<ref>) admits a minimizer or is infeasible.
* Problem (<ref>) is intrinsically separable; that is,
J(X_0) = J_E(X_0) + J_I(X_0).
* If c_0,c_1,…,c_N is a minimizer of (<ref>), its associated sequence of information ensembles, given by (<ref>), is an optimal information tube of (<ref>).
Because the objective functions ℛ_k of (<ref>) are lower semicontinuous and since the feasible set of (<ref>) is closed under the listed assumptions, it follows directly from Weierstrass' theorem that this optimization problem admits a minimizer or is infeasible. Next, a relation between (<ref>) and (<ref>) needs to be established. For this aim, we divide the proof into three parts.
Part I. Let us assume that 𝔛_0,𝔛_1, …, 𝔛_N ∈𝕀_c^n is a tight information tube for given μ_0,μ_1,…,μ_N-1∈𝒰,
∀ k ∈{ 0,1,…,N-1}, 𝔛_k+1 = F(𝔛_k,μ_k) .
Due to Theorem <ref>, there exist functions c_k ∈ℬ_k such that 𝔛_k = { Y + c_k(Y) | Y ∈𝔜_k }. The goal of the first part of this proof is to show that (c_k,c_k+1) ∈𝒞_k. Because the information tube is tight, we have
A X + B μ_k(X) + 𝕎 ∈ ∂𝔛_k+1
for all X ∈∂ [ 𝔛_k∩𝔙 ]. Since any set Y ∈∂ [ 𝔜_k ∩𝔙] is mapped to an extreme set
X = Y + c_k(Y) ∈∂ [ 𝔛_k ∩𝔙],
it follows that
A (Y+c_k(Y)) + B μ_k(X) + 𝕎∈∂𝔛_k+1
⟹ (AY+𝕎)_∈ ∂𝔜_k+1 + (c_k(Y) + B μ_k(X))_∈ ℝ^n ∈ ∂𝔛_k+1
for any such pair (X,Y).
But this is only possible if
c_k(Y) + B μ_k(X) = c_k+1(AY+𝕎) .
Since μ_k(X) ∈𝕌 and since the choice of Y ∈∂ [ 𝔜_k ∩𝔙] is arbitrary, it follows from (<ref>) that (c_k,c_k+1) ∈𝒞_k.
Part II. The goal of the second part of this proof is to reverse the construction from the first part. For this aim, we assume that we have functions c_k ∈ℬ_k that satisfy the recursivity condition (c_k,c_k+1) ∈𝒞_k for all k ∈{ 0,1,…,N-1} while the sets 𝔛_k are given by (<ref>). Since every set X ∈𝔛_k ∩𝔙 is contained in at least one extreme set X∈∂[ 𝔛_k ∩𝔙], there exists for every such X a set Y∈∂ [ 𝔜_k ∩𝔙 ] with
X ⊆ X = Y + c_k(Y).
Note that this is equivalent to stating that there exists a function Σ_k: 𝔛_k ∩𝔙→∂ [ 𝔜_k ∩𝔙 ] that satisfies
∀ X ∈𝔛_k ∩𝔙, X ⊆Σ_k(X) + c_k(Σ_k(X)) .
It can be used to define the control laws
μ_k(X) ≔ B^†[ c_k+1(A Σ_k(X)+𝕎 ) - A c_k(Σ_k(X)) ],
where B^† denotes the pseudo-inverse of B. Because we assume (c_k,c_k+1) ∈𝒞_k, we have μ_k(X) ∈𝕌 and
A X + B μ_k(X) + 𝕎
⊆ A Σ_k(X) + 𝕎 + c_k+1(A Σ_k(X) + 𝕎) ∈ 𝔛_k+1
for all X ∈𝔛_k ∩𝔙, where the first inclusion follows from (<ref>) and (<ref>), and the latter inclusion follows from (<ref>) and the fact that A Σ_k(X) + 𝕎∈𝔜_k+1. Consequently, we have 𝔛_k+1⊇ F(𝔛_k,μ_k).
Part III. The construction from Part I can be used to start with any feasible information tube of (<ref>) to construct a feasible sequence c_0,c_1,…,c_N such that
J_E(X_0) ≤ ∑_k=0^N-1ℛ_k(c_k) + ℛ_N(c_N)
= ∑_k=0^N-1 L(𝔛_k,μ_k) + M(𝔛_N) - J_I(X_0) .
Thus, we have J_E(X_0) + J_I(X_0) ≤ J(X_0). Similarly, the construction from Part II can be used to start with an optimal solution of (<ref>) to construct a feasible point of (<ref>), which implies J_E(X_0) + J_I(X_0) ≥ J(X_0). Thus, the second and the third statement of the theorem hold.
§.§ Recursive Feasibility and Stability
Feasible invariant information ensembles 𝔛_s∈𝕀_c^n exist if and only if the optimization problem
min_𝔛_s,μ_s L(𝔛_s,μ_s) s.t. {[ F(𝔛_s,μ_s) ⊆𝔛_s,; μ_s∈𝒰,; ∀ X ∈𝔛_s, X ⊆𝕏 ].
is feasible. By solving this optimization problem, one can find optimal invariant information ensembles avoiding the constructions from the proof of Lemma <ref>; see Remark <ref>. In analogy to terminal regions in traditional MPC formulations <cit.> invariant information ensembles can be used as a terminal constraint,
𝔛_N ⊆𝔛_s.
If (<ref>) is augmented by such a terminal constraint, recursive feasibility can be guaranteed. Similarly, if one chooses the terminal cost M such that
min_μ∈𝒰 L(𝔛,μ) + M ( F(𝔛, μ) ) ≤ M(𝔛)
for all 𝔛∈𝕀_c^n, the objective value of (<ref>) descends along the trajectories of its associated closed-loop system. Under additional assumptions on the continuity and positive definiteness of L, this condition can be used as a starting point for the construction of Lyapunov functions. The details of these constructions are, however, not further elaborated at this point, as they are analogous to the construction of terminal costs for traditional Tube MPC schemes <cit.>.
§ POLYTOPIC APPROXIMATION METHODS
This section discusses how to solve the dual control problem (<ref>) by using a polytopic approximation method. For this aim, we assume that 𝕍 and 𝕎 are given convex polytopes, while 𝕏 and 𝕌 are convex sets.
§.§ Configuration-Constrained Polytopes
Polytopic computing <cit.> is the basis for many set-theoretic methods in control <cit.>. Specifically, tube model predictive control methods routinely feature parametric polytopes with frozen facet directions <cit.>. In this context, configuration-constrained polytopes are of special interest, as they admit a joint parameterization of their facets and vertices <cit.>. They are defined as follows.
Let Y ∈ℝ^m × n and G ∈ℝ^n_G × m be matrices that define the parametric polytope
[ ∀ y ∈𝒢, P(y) { x ∈ℝ^n | Y x ≤ y }; on 𝒢 { y ∈ℝ^m | G y ≤ 0 } ; ]
and let Λ_1,Λ_2,…,Λ_ν∈ℝ^n × m be vertex maps, such that
P(y) = conv( Λ_1 y, Λ_2 y, …, Λ_ν y ) ⟺ y ∈𝒢 ,
where conv(·) denotes the convex hull. The condition y ∈𝒢 is called a configuration-constraint. It restricts the parameter domain of P to a region on which both a halfspace and a vertex representation is possible. Details on how to construct the template matrix Y together with the cone 𝒢 and the matrices Λ_i can be found in <cit.>.
§.§ Polytopic Information Ensembles
As pointed out in Section <ref>, the minimal representation of a closed information ensemble 𝔛∈𝕀_c^n is given by its set of extreme sets, denoted by ∂𝔛. This motivates discretizing (<ref>) by introducing a suitable class of information ensembles whose extreme sets are configuration-constrained polytopes. In detail,
𝔓(z) ≔ { X ∈𝕂_c^n | ∃ y ∈ℙ(z): X ⊆ P(y) }
defines such a class of information ensembles with the polytope
ℙ(z) ≔ { y ∈ℝ^m | G y ≤ 0, Z y ≤ z }⊆𝒢
being used to parameterize convex subsets of 𝒢. The choice of Z ∈ℝ^l × m influences the polytopic discretization accuracy and z ∈ℝ^l denotes its associated discretization parameter. Note that 𝔓(z) ∈𝕀_c^n is for any such z a compact but potentially empty information ensemble.
§.§ Polytopic Meta Learning
Traditional set-theoretic methods face a variety of computational difficulties upon dealing with output feedback problems, as summarized concisely in <cit.>. The goal of this and the following sections is to show that the proposed meta learning framework has the potential to overcome these difficulties. Here, the key observation is that Proposition <ref> alleviates the need to intersect infinitely many information sets for the sake of predicting the evolution of a learning process. Instead, it is sufficient to compute one intersection at the meta level in order to pass from a prior to a posterior information ensemble.
In detail, if we assume that our prior information about the system's state is represented by a polytopic ensemble, 𝔓(z), the posterior
𝔓(z) ⊓𝔙 = 𝔓(z) ∩𝔙
needs to be computed, where 𝔙 is given by (<ref>). Since 𝕍 is assumed to be a polytope, 𝔙 can be written in the form
𝔙 = { X ∈𝕂_c^n | ∃ y ∈𝒢: X ⊆ P(y), Z_1 y ≤v},
as long as the template matrices Y, G, and Z_1 ∈ℝ^l_1 × m as well as the vector v∈ℝ^l_1 are appropriately chosen. Here, we construct the matrix Z = (Z_1^⊤,Z_2^⊤)^⊤ such that its first l_1 ≤ l rows coincide with Z_1. This particular construction of Z ensures that the intersection
𝔓(z) ∩𝔙 = 𝔓(ζ) with {[ ζ_1 = min( z_1, v); ζ_2 = z_2 ].
can be computed explicitly, where min(z_1,v) denotes the componentwise minimizer of the vectors z_1 and v. The latter condition is not jointly convex in z and ζ. Therefore, the following constructions are based on the convex relaxation
(
[ v; z_2 ]) ≤ (
[ ζ_1; ζ_2 ]) ⟹ 𝔓(z) ∩𝔙 ⊆ 𝔓(ζ) .
Note that the conservatism that is introduced by this convex relaxation is negligible if the measurement error set 𝕍 is small. In fact, for the exact output feedback case, 𝕍 = { 0 }, we have min(z_1,v) = v, since the measurements are exact and, as such, always informative.
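In code, the exact measurement update of the ensemble parameters is a componentwise minimum on the rows associated with the output template (within the optimization problem, this nonconvex operation is replaced by the linear relaxation above). A minimal sketch with hypothetical dimensions:

import numpy as np

def measurement_update(z, v_bar, l1):
    # Exact parameter of the meet of P(z) with the sensor ensemble:
    # componentwise minimum on the first l1 rows, identity on the rest.
    zeta = z.copy()
    zeta[:l1] = np.minimum(z[:l1], v_bar)
    return zeta

z = np.array([4.0, 1.5, 3.0, 2.0, 2.0])    # hypothetical l = 5, with l1 = 2 output rows
v_bar = np.array([2.0, 2.0])
print(measurement_update(z, v_bar, l1=2))  # -> [2.  1.5 3.  2.  2. ]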
§.§ Extreme Vertex Polytopes
In analogy to the construction of the domain 𝒢, a configuration domain
ℋ = { z ∈ℝ^l | H z ≤ 0 }
can be chosen. In detail, by using the methods from <cit.>, a matrix H ∈ℝ^l × n_H and matrices Ω_1,…,Ω_ν∈ℝ^m × l can be pre-computed, such that
ℙ(ζ) = conv( Ω_1 ζ, Ω_2 ζ, …, Ω_νζ ) ⟺ ζ∈ℋ .
This has the advantage that the vertices Ω_j ζ of the polytope ℙ(ζ) are known. In order to filter the vertices that are associated to extreme polytopes, the index set
𝕁 ≔ { j ∈{ 1,2,…,ν} | P(Ω_j ζ) ∈ ∂ [𝔓(ζ)] }
is introduced. Its definition does not depend on the choice of the parameter ζ∈ℋ. This follows from the fact that the normal cones of the vertices of ℙ(ζ) do not depend on ζ∈ℋ—recalling that the facet normals of ℙ(ζ) are given constants.
The polytopes P(Ω_j ζ), with j ∈𝕁, are called the extreme vertex polytopes of 𝔓(ζ).
Extreme vertex polytopes play a central role in the context of designing polytopic dual control methods. This is because their shapes, sizes, and orientations can be interpreted as representatives of the intrinsic information of 𝔓(ζ). Moreover, the convex hull of the extreme vertex polytopes encodes the extrinsic information of 𝔓(ζ),
conv( { P(Ω_j ζ) | j ∈𝕁 } ) = ⋃_X ∈𝔓(ζ) X .
The latter equation follows from the fact that the vertices of the extreme polytopes of 𝔓(ζ) are contained in the convex hull of the vertices Λ_i Ω_j ζ of the extreme vertex polytopes, with i ∈{ 1, …, ν} and j ∈𝕁.
§.§ Polytopic Information Tubes
The goal of this section is to show that it is sufficient to assign one extreme control input u_j ∈𝕌 to each extreme vertex polytope P(Ω_j ζ) in order to discretize the control law, without introducing conservatism. This construction is similar in essence to the introduction of the vertex control inputs that are routinely used to compute robust control invariant polytopes <cit.>. The key difference here, however, is that the “vertices” P(Ω_j ζ) of the information ensemble 𝔓(ζ) are sets rather than vectors. They represent possible realizations of the system's information state, not a state estimate.
Let us assume that 𝕎 = P(w) is a polytope with given parameter w∈𝒢. Moreover, we assume that the vertices of ℙ(·) are enumerated in such a way that
𝕁 = { 1,2,…, |𝕁|},
where |𝕁| ≤ν denotes the cardinality of 𝕁. Let us introduce the convex auxiliary set
ℱ ≔ {
(z,z^+) |
[ ∃ (ζ,ξ,u) ∈ℝ^l× (ℝ^m)^|𝕁|×𝕌^|𝕁|; ∀ i ∈{ 1, …, ν}, ∀ j ∈𝕁,; v≤ζ_1, z_2 ≤ζ_2,; Y A Λ_i Ω_j ζ + YB u_j + w≤ξ_j,; G ξ_j ≤ 0, H ζ≤ 0, Z ξ_j ≤ z^+ ]}.
The rationale behind the convex constraints in this definition can be summarized as follows.
* We start with the current information ensemble 𝔓(z).
* The constraints v≤ζ_1 and z_2 ≤ζ_2 subsume (<ref>).
* The constraint H ζ≤ 0 ensures that the vertices of P(Ω_j ζ) are given by Λ_i Ω_j ζ, with i ∈{ 1, …, ν}.
* The extreme controls u_j are used to steer all vertices of P(Ω_j ζ) into the auxiliary polytope P(ξ_j).
* And, finally, the constraints G ξ_j ≤ 0 and Z ξ_j ≤ z^+ ensure that P(ξ_j) is contained in 𝔓(z^+).
The above points can be used as road-map for the rather technical proof of the following theorem.
Let ℱ and 𝔙 be defined as above, recalling that 𝕎 = P(w) denotes the uncertainty set and that 𝕌 is assumed to be convex. Then, the implication
(z,z^+) ∈ℱ ⟹ ∃μ∈𝒰, 𝔓(z^+) ⊇ F(𝔓(z),μ)
holds for all z,z^+ ∈ℝ^l.
Let us assume that (z,z^+) ∈ℱ. As discussed in Section <ref>, the inequalities v≤ζ_1 and z_2 ≤ζ_2 in the definition of ℱ ensure that 𝔓(z) ⊓𝔙 ⊆ 𝔓(ζ). Moreover, there exists for every X ∈𝔓(ζ) a y ∈ℙ(ζ) with
X ⊆ P(y) ∈ ∂ [ 𝔓(ζ)] .
Next, since we enforce H ζ≤ 0, y is in the convex hull of the extreme vertices. That is, there exist scalar weights θ_1,θ_2,…, θ_|𝕁|∈ [0,1] with
∑_j ∈𝕁θ_j = 1 and y = ∑_j ∈𝕁θ_j Ω_j ζ ,
keeping in mind that these weights depend on X. They can be used to define the control law
μ(X) ∑_j ∈𝕁θ_j u_j ∈ 𝕌 and ξ ∑_j ∈𝕁θ_j ξ_j ∈ 𝒢
where u_1,u_2,…,u_|𝕁|∈𝕌 are the extreme control inputs and ξ_1,ξ_2,…,ξ_|𝕁|∈𝒢 are the auxiliary variables that satisfy the constraints from the definition of ℱ.
Note that this construction is such that the vertices of the polytope P(y), which are given by Λ_i y, satisfy
A Λ_i y + B μ(X) + w = ∑_j ∈𝕁θ_j [ A Λ_i Ω_j ζ + B u_j + w ] ∈ P(ξ),
where the latter inclusion holds for all w ∈ W. Consequently, since this holds for all vertices of P(y), we have
A X + B μ(X) + 𝕎 ⊆ A P(y) + B μ(X) + 𝕎 ⊆ P(ξ) .
Moreover, the above definition of ξ and the constraints G ξ_j ≤ 0 and Z ξ_j ≤ z^+ from the definition of ℱ imply that Z ξ≤ z^+ and P(ξ) ∈𝔓(z^+). But this yields
F(𝔓(z),μ) ⊆ 𝔓(z^+) ,
which completes the proof.
§.§ Polytopic Dual Control
In order to approximate the original dual control problem (<ref>) with a convex optimization problem, we assume that the stage and terminal cost functions have the form
L(𝔓(z),μ) = 𝔩(z,u) and M(𝔓(z)) = 𝔪(z)
for given convex functions 𝔩 and 𝔪, where the stacked vector u = ( u_1^⊤,u_2^⊤,…,u_|𝕁|^⊤)^⊤ collects the extreme control inputs. Due to Theorem <ref> a conservative approximation of (<ref>) is given by
min_z,ζ,ξ,u ∑^N-1_k=0𝔩(z_k,u_k) + 𝔪(z_N)
s.t. { [ ∀ k ∈{ 0, …, N-1},; ∀ i ∈{ 1, …, ν}, ∀ j ∈𝕁,; v≤ζ_k,1, z_k,2≤ζ_k,2,; Y A Λ_i Ω_j ζ_k + YB u_k,j + w≤ξ_k,j,; G ξ_k,j≤ 0, H ζ_k ≤ 0, Z ξ_k,j≤ z_k+1,; u_k,j∈𝕌, Λ_i Ω_j ζ_k ∈𝕏, Z ŷ≤ z_0 . ].
Since 𝕌 and 𝕏 are convex sets, this is a convex optimization problem. Its optimization variables are the parameters z_k ∈ℝ^l of the polytopic information tube, the associated extreme control inputs u_k,j∈𝕌 and the auxiliary variables ζ_k ∈ℋ and ξ_k ∈𝒢, needed to ensure that
∀ k ∈{ 0,1,…, N-1 }, F(𝔓(z_k),μ_k) ⊆ 𝔓(z_k+1) .
Here, X_0 = P(ŷ) denotes the information set at the current time, modeled by the parameter ŷ∈𝒢. The constraint Z ŷ≤ z_0 corresponds to the initial value condition X_0 ∈𝔓(z_0). Additionally, it is pointed out that the extrinsic information content of the auxiliary ensemble 𝔓(ζ) ⊇𝔓(z) ⊓𝔙 overestimates the extrinsic information content of 𝔓(z). Thus, the extrinsic state constraints can be enforced by using the implication chain
∀ i ∈{ 1, …, ν}, ∀ j ∈𝕁, Λ_i Ω_j ζ_k ∈𝕏
⟹ ⋃_X ∈𝔓(ζ) X ⊆𝕏 ⟹ ⋃_X ∈𝔓(z) X ⊆𝕏 .
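A structural sketch of this convex transcription, written with cvxpy, is given below. The template matrices Y, G, Z, H, the vertex maps Lam[i] and Om[j], the index set J, and the cost callables (which must return convex cvxpy expressions) are all assumed to be precomputed inputs; the input constraint is modeled here as a simple box and the state constraint as a polyhedron Hx x ≤ hx, which are modeling assumptions of this snippet rather than the exact implementation used for the experiments.

import cvxpy as cp

def build_polytopic_dual_mpc(A, B, Y, G, Z, H, Lam, Om, J, w_bar, v_bar, l1,
                             y_hat, u_max, Hx, hx, N, stage_cost, terminal_cost):
    l, m = Z.shape
    nu = B.shape[1]
    z  = [cp.Variable(l) for _ in range(N + 1)]               # tube parameters z_k
    ze = [cp.Variable(l) for _ in range(N)]                   # auxiliary parameters zeta_k
    u  = [{j: cp.Variable(nu) for j in J} for _ in range(N)]  # extreme controls u_{k,j}
    xi = [{j: cp.Variable(m) for j in J} for _ in range(N)]   # auxiliary polytopes xi_{k,j}
    cons = [z[0] >= Z @ y_hat]                                # initial information set
    for k in range(N):
        cons += [ze[k][:l1] >= v_bar, ze[k][l1:] >= z[k][l1:], H @ ze[k] <= 0]
        for j in J:
            cons += [G @ xi[k][j] <= 0, Z @ xi[k][j] <= z[k + 1],
                     cp.norm(u[k][j], 'inf') <= u_max]        # box input constraint (assumed)
            for Lam_i in Lam:
                vtx = Lam_i @ (Om[j] @ ze[k])                 # vertex Lambda_i Omega_j zeta_k
                cons += [Y @ (A @ vtx) + (Y @ B) @ u[k][j] + w_bar <= xi[k][j],
                         Hx @ vtx <= hx]                      # state constraints at vertices
    obj = sum(stage_cost(z[k], u[k]) for k in range(N)) + terminal_cost(z[N])
    return cp.Problem(cp.Minimize(obj), cons), z, u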
Finally, (<ref>) is solved online whenever a new information set X_0 = P(ŷ) becomes available, denoting the optimal extreme controls by u_k,j^⋆. A corresponding feasible control law can then be recovered by setting
μ_0^⋆(X_0) ≔ ∑_j ∈𝕁θ_j^⋆(X_0) u_0,j^⋆,
where the scalar weights θ_j^⋆(X_0) can, for instance, be found by solving the convex quadratic programming problem
θ^⋆(X_0) ≔ argmin_θ≥ 0 ‖∑_j ∈𝕁θ_j u_0,j^⋆‖^2
s.t. {[ ∑_j ∈𝕁θ_j Ω_j ζ_0^⋆ = ŷ; ∑_j ∈𝕁θ_j = 1, ].
although, clearly, other choices for the weights θ_j^⋆ are possible, too. Finally, the receding horizon control loop from Section <ref> can be implemented by using the above expression for μ_0^⋆, while the information set update and propagation step can be implemented by using standard polytopic computation routines <cit.>.
By solving the convex optimization problem
min_z^s,ζ^s,ξ^s,u^s 𝔩(z^s,u^s)
s.t. { [ ∀ i ∈{ 1, …, ν}, ∀ j ∈𝕁,; v≤ζ_1^s, z_2^s≤ζ_2^s,; Y A Λ_i Ω_j ζ^s + YB u_j^s + w≤ξ_j^s,; G ξ_j^s≤ 0, H ζ^s≤ 0, Z ξ_j^s≤ z^s,; u_j^s∈𝕌, Λ_i Ω_j ζ^s∈𝕏 ].
an optimal control invariant polytopic information ensemble can be computed.
§.§ Structure and Complexity
Problem (<ref>) admits a tradeoff between the computational complexity and the conservatism of polytopic dual control. In detail, this tradeoff can be adjusted by the choice of the following variables.
* The number of facets, m, the number of vertices, ν, and the number of configuration constraints, n_G, depend on our choice of Y and G. The larger m is, the more accurately we can represent the system's intrinsic information content.
* The number of information ensemble parameters, l, the number of extreme vertex polytopes, |𝕁|, and the number of meta configuration constraints, n_H, depend on how we choose Z and H. The larger |𝕁| is, the more degrees of freedom we have to parameterize the optimal dual control law.
In contrast to these numbers, the number of controls, n_u, is given. If we assume that 𝕌 and 𝕏 are polyhedra with n_𝕌 and n_𝕏 facets, these numbers are given by the problem formulation, too. Additionally, we recall that N denotes the prediction horizon of the dual controller. The number of optimization variables n_opt and the number of constraints, n_con, of Problem (<ref>) are given by
n_opt = (2N+1) l + N |𝕁| ( n_u + m )
n_con = N ( l + |𝕁| ( n_G + n_H + l + n_𝕌 + ν ( m + n_𝕏) ) ) + l .
In this context, however, it should also be taken into account that the constraints of (<ref>) are not only sparse but also possess further structure that can be exploited via intrinsic separation. For instance, the algebraic-geometric consistency conditions
[ G Y = 0, Λ_i Y = 1,; H Z = 0, Ω_j Z = 1, and Z_1 Y = 0 ]
hold for all i ∈{1,…,ν} and all j ∈𝕁, which can be used to re-parameterize (<ref>), if a separation of the centers and shapes of the information sets is desired.
Last but not least, more conservative but computationally less demanding variants of (<ref>) can be derived by freezing some of its degrees of freedom. For instance, in analogy to Rigid Tube MPC <cit.>, one can implement a Rigid Dual MPC controller by pre-computing a feasible point (z^s,ζ^s,ξ^s,u^s) of (<ref>). Next, we set
[ z_k = Z Y x_k + z^s, ζ_k = ZY x_k + ζ^s,; and u_j,k = u_k + u_j^s, ξ_k,j = Y x_k+1 + ξ_j^s , ]
where x and u denote a central state- and a central control trajectory that are optimized online, subject to x_k+1 = A x_k + B u_k. By substituting these restrictions in (<ref>) and by using (<ref>), the resulting online optimization problem can be simplified and written in the form
min_x, u ∑^N-1_k=0ℓ( x_k, u_k) + m(x_N)
s.t. { [ ∀ k ∈{ 0, …, N-1},; A x_k + B u_k = x_k+1,; Z ŷ≤ ZY x_0 + z^s,; u_k∈𝕌, x_k ∈𝕏 . ].
Problem (<ref>) can be regarded as a conservative approximation of (<ref>). The sets
X̄ ≔ { x∈ℝ^n | [ ∀ i ∈{ 1, …, ν}, ∀ j ∈𝕁,; x + Λ_i Ω_j ζ^s∈𝕏 ]}
and 𝕌̄ ≔ { u ∈ℝ^n_u | [ ∀ j ∈𝕁, u + u_j^s∈𝕌 ]}
take the robustness constraint margins into account, while ℓ and m are found by re-parameterizing the objective function of (<ref>). Problem (<ref>) is a conservative dual MPC controller that has—apart from the initial value constraint—the same complexity as certainty-equivalent MPC. Its feedback law depends on the parameter ŷ of the initial information set X_0 = P(ŷ).
§.§ Numerical Illustration
We consider the constrained linear control system
A = 1/4 ( [ 6 4; 1 3 ] ), B = ( [ 0; 1 ] ), C = ( [ 1 0 ] ),
𝕏 = { x ∈ℝ^2 | x_2 ≥ -45 }, 𝕌 = [-55,55],
𝕎 = [ -1/2, 1/2]^2 ⊆ℝ^2 , 𝕍 = [-1,1] .
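Written out in numpy, the data of this example reads as follows (the variable names are arbitrary):

import numpy as np

A = 0.25 * np.array([[6.0, 4.0], [1.0, 3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
u_max = 55.0                          # U = [-55, 55]
x2_min = -45.0                        # X = { x in R^2 : x_2 >= -45 }
w_rad = np.array([0.5, 0.5])          # W = [-1/2, 1/2]^2
v_rad = 1.0                           # V = [-1, 1]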
In order to setup an associated polytopic dual controller for this system, we use the template matrices
Y = (
[ 1 ; 1 1; 1; -1 ; -1 -1; -1 ]) and G = (
[ -1 1 -1; -1 1 -1; -1 1 -1; -1 1 -1; -1 -1 1; 1 -1 -1 ])
setting m = ν = n_G = 6. Here, Y and G are stored as sparse matrices: the empty spaces are filled with zeros. By using analogous notation, we set
Z = (
[ 1 1; 1 1; 1 ; 1; 1; 1; 1; 1 ]) , H = ( [ -1 -1 1 1; -1 1 1 -1 -1; 1 -1 1 -1 ; 1 1 -1 -1; 1 1 -1 -1 ; -1 1 -1 1 ; -1 1 -1 1 -1 ; 1 -1 1 -1 ; 1 -1 1 -1 ]) ,
l = 8, and n_H = 9, which can be used to represent six dimensional meta polytopes with 6+8=14 facets and ν= 68 vertices. They have up to |𝕁| = 60 extreme vertex polytopes. The first row of Z corresponds to the block matrix Z_1. It can be used to represent the set 𝔙 by setting v = 2, since the diameter of 𝕍 is equal to 2. Moreover, due to our choice of Y and 𝕎, we have
w = [ 1/2, 1, 1/2, 1/2, 1, 1/2 ]^⊤∈𝒢 .
Next, we construct a suitable stage cost function of the form (<ref>). We choose the extrinsic risk measure
ℜ(𝔛) ≔ ∑_i=1^6 ( max_{x}∈𝔛 Y_i x )^2 + 50 ·∑_i=1^2 max_{x},{ x' }∈𝔛 (x_i-x_i')^2
and the intrinsic information measure
𝔇^∘(𝔛) ≔ ∑_i=1^2 ( max_X ∈𝔛 max_x,x' ∈ X | x_i-x_i' | )^2 .
This particular choice of ℜ and 𝔇^∘ is such that
𝔯(z) ≔ ℜ( 𝔓(z) ) and 𝔡^∘(z) ≔ 𝔇^∘( 𝔓(z) )
are convex quadratic forms in z that can be worked out explicitly. Namely, we find that
𝔯(z) = ( ∑_i=1^6 z_i+2^2 ) + 50 ·[ (z_3+z_6)^2 + (z_5+z_8)^2 ]
and 𝔡^∘(z) = z_1^2 + z_2^2 .
Last but not least, a control penalty function that depends on the extreme control inputs needs to be introduced. For instance, we can set
𝔠(u) = ∑_i=1^|𝕁|[ u_i^2 + 50 ·( u_i - 1/|𝕁|∑_j=1^|𝕁| u_j )^2 ]
in order to penalize both the extreme inputs as well as the distances of these extreme inputs to their average value. The final stage cost is given by
𝔩(z,u) = 𝔯(z) + τ·𝔡^∘(z) + 𝔠(u),
where we set the intrinsic regularization to τ = 0.01.
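With the paper's 1-based indices translated to 0-based Python indices, the resulting stage cost can be written as a convex cvxpy expression; the dictionary-of-extreme-inputs representation matches the sketch given earlier and is an assumption of this snippet.

import cvxpy as cp

def stage_cost(z, u, tau=0.01):
    # r(z) = sum_{i=1..6} z_{i+2}^2 + 50 [ (z_3 + z_6)^2 + (z_5 + z_8)^2 ]
    r = cp.sum_squares(z[2:8]) + 50 * (cp.square(z[2] + z[5]) + cp.square(z[4] + z[7]))
    d = cp.square(z[0]) + cp.square(z[1])              # d(z) = z_1^2 + z_2^2
    u_vec = cp.hstack(list(u.values()))                # extreme inputs u_j, j in J
    c = cp.sum_squares(u_vec) + 50 * cp.sum_squares(u_vec - cp.sum(u_vec) / u_vec.size)
    return r + tau * d + c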
The optimal invariant information ensemble 𝔓(z_s) is found by solving (<ref>). It is visualized in the left plot of Figure <ref>. Note that the light blue shaded hexagon corresponds to the union of all sets in 𝔓(z_s), which can be interpreted as a visualization of its extrinsic information content. The 60 extreme vertex polytopes of 𝔓(ζ_s), given by P(Ω_j ζ_s) for j ∈{ 1,2,…, 60 }, are difficult to plot as they are all clustered at the vertices of the extrinsic hexagon (they partly obscure each other; not all 60 are clearly visible), but an attempt is made to visualize them in different shades of gray. As the optimal solution happens to satisfy 𝔓(z_s)∩𝔙 = 𝔓(ζ_s), at least for this example, the convex relaxation (<ref>) does not introduce conservatism.
Next, a closed-loop simulation of the polytopic dual controller (<ref>) is started with the initial information set
X_0 = [17,23] × [17,23]
using the prediction horizon N=10 while the terminal cost is set to
𝔪(z_N) = {[ 0 if z_N ≤ z_s; ∞ otherwise ].
in order to enforce recursive feasibility. The right plot of Figure <ref> shows an extrinsic image of the first predicted tube; that is, the union of the sets 𝔓(z_k) along the optimal solution of (<ref>) for the above choice of X_0, which are shown in light blue. The dark blue shaded polytope corresponds to the terminal region that is enforced by the above choice of 𝔪.
The proposed polytopic dual control method optimizes feedback laws that depend on the system's information state. Note that such dual control laws are, in general, less conservative than robust output feedback laws that are based on state estimation or observer equations with affine structure, as considered in <cit.> or <cit.>.
§ CONCLUSIONS
This paper has presented a set-theoretic approach to dual control. It is based on meta information spaces that enable a data-free algebraic characterization of both the present and the future information content of learning processes. In detail, an intrinsic equivalence relation has been introduced in order to separate the computation of the future information content of a constrained linear system from the computation of its robust optimal control laws. An associated intrinsic separation principle is summarized in Theorem <ref>. It is the basis for analyzing the existence of solutions of a large class of dual control problems under certain structural and continuity assumptions that are summarized in Theorem <ref>.
For the first time, this paper has presented a polytopic dual control method for constrained linear systems that is based on convex optimization. In contrast to existing robust output-feedback control schemes, this method optimizes control laws that depend on the system's information state. This alleviates the need to make control decisions based on state estimates or observer equations that induce a potentially sub-optimal feedback structure. Instead, (<ref>) optimizes a finite number of control inputs that are associated to the extreme vertex polytopes of the predicted information ensembles.
A numerical case study for a system with two states has indicated that (<ref>) can be solved without numerical problems for moderately sized problems. For larger systems, however, the computational complexity of accurate dual control can become exorbitant. In anticipation of this problem, this paper has outlined strategies towards reducing the computational complexity at the cost of more conservatism. For instance, the Rigid Dual MPC problem (<ref>) has essentially the same online complexity as a comparable certainty-equivalent MPC problem. The development of more systematic methods to tradeoff conservatism and computational complexity of polytopic dual control methods as well as extensions of polytopic dual control for constrained linear systems that aim at simultaneously learning their state and their system matrices A, B, and C appear to be challenging and practically relevant directions for future research.
Dorea2021
T.A. Almeida and C.E.T. Dórea.
Output feedback constrained regulation of linear systems via
controlled-invariant sets.
IEEE Transactions on Automatic Control, 66(7), 2021.
Artstein2011
Z. Artstein and S.V. Raković.
Set invariance under output feedback: a set-dynamics approach.
International Journal of Systems Science, 42(4):539–555, 2011.
Bemporad2000
A. Bemporad and A. Garulli.
Output-feedback predictive control of constrained linear systems via
set-membership state estimation.
International Journal of Control, 73(8):655–665, 2000.
Bertsekas1971
D.P. Bertsekas and I.B. Rhodes.
Recursive state estimation for a set-membership description of
uncertainty.
IEEE Transactions on Automatic Control, 16:117–128, 1971.
Blanchini2003
F. Blanchini and S. Miani.
Stabilization of LPV systems: state feedback, state estimation, and
duality.
SIAM Journal on Control and Optimization, 42(1):76–97, 2003.
Blanchini2015
F. Blanchini and S. Miani.
Set-theoretic methods in control.
Systems & Control: Foundations & Applications. Birkhäuser Boston,
Inc., Boston, MA, 2015.
Brunner2018
F.D. Brunner, M.A. Müller, and F. Allgöwer.
Enhancing output-feedback MPC with set-valued moving horizon
estimation.
IEEE Transactions on Automatic Control, 63(9):2976–2986, 2018.
Doob1953
J.L. Doob.
Stochastic Processes.
Wiley, 1953.
Dorea2009
C.E.T. Dórea.
Output-feedback controlled-invariant polyhedra for constrained linear
systems.
Proceedings of the 48th IEEE Conference on Decision and Control,
Shanghai, pages 5317–5322, 2009.
Efimov2022
A. dos Reis de Souza, D. Efimov, T. Raïssi, and X. Ping.
Robust output feedback model predictive control for constrained
linear systems via interval observers.
Automatica, 135(109951), 2022.
Feldbaum1961
A.A. Feldbaum.
Dual-control theory (i-iv).
Automation and Remote Control, 21, pages 1240–1249 and
1453–1464, 1960; and 22, pages 3–16 and 129–143, 1961.
Filatov2004
N.M. Filatov and H. Unbehauen.
Adaptive Dual Control.
Springer, 2004.
Findeisen2003
R. Findeisen, L. Imsland, F. Allgöwer, and B.A. Foss.
State and output feedback nonlinear model predictive control: An
overview.
European Journal of Control, 9:190–207, 2003.
Fukuda2020
K. Fukuda.
Polyhedral Computation.
ETH Zürich Research Collection, 2020.
Goulart2007
P. Goulart and E. Kerrigan.
Output feedback receding horizon control of constrained systems.
International Journal of Control, 80(1):8–20, 2007.
Gutman1986
P.O. Gutman and M. Cwikel.
Admissible sets and feedback control for discrete-time linear
dynamical systems with bounded controls and states.
IEEE Transactions on Automatic Control, 31(4):373–376, 1986.
Hewing2020
L. Hewing, K.P. Wabersich, M. Menner, and M.N. Zeilinger.
Learning-based model predictive control: Toward safe learning in
control.
Annual Review of Control, Robotics, and Autonomous Systems,
3:269–296, 2020.
Hovd2005
M. Hovd and R.R. Bitmead.
Interaction between control and state estimation in nonlinear MPC.
Modeling, Identification, and Control, 26(3):165–174, 2005.
Joseph1961
P.D. Joseph and J.T. Tou.
On linear control theory.
AIEE Transactions on Applications and Industry, 80:193–196,
1961.
Kalman1962
R.E. Kalman.
Canonical structure of linear dynamical systems.
Proceedings of the National Academy of Sciences the United
States of America, 48:596–600, 1962.
Krasovskii1995
A.N. Krasovskii and N.N. Krasovskii.
Control under lack of information.
Birkhäuser, Boston, 1995.
Krasovskii1964
N.N. Krasovskii.
On the theory of controllability and observability of linear dynamic
systems.
Journal of Applied Mathematics and Mechanics, 28(1):1–14,
1964.
Kurzhanski1972
A.B. Kurzhanskii.
Differential games of observation.
Doklady Akademii Nauk SSSR, 207(3):527–530, 1972.
Kurzhanski2004
A.B. Kurzhanskii.
The problem of measurement feedback control.
Journal of Applied Mathematics and Mechanics, 68:487–501,
2004.
Langson2004
W. Langson, I. Chryssochoos, S.V. Raković, and D.Q. Mayne.
Robust model predictive control using tubes.
Automatica, 40(1):125–133, 2004.
Lindquist1973
A. Lindquist.
On feedback control of linear stochastic systems.
SIAM Journal on Control, 11:323–343, 1973.
Mayne2009
D.Q. Mayne, S.V. Rakovic, R. Findeisen, and F. Allgöwer.
Robust output feedback model predictive control of constrained linear
systems: time varying case.
Automatica, 45:2082–2087, 2009.
Rakovic2012
S. Raković, B. Kouvaritakis, R. Findeisen, and M. Cannon.
Homothetic tube model predictive control.
Automatica, 48(8):1631–1638, 2012.
Rakovic2016
S. Raković, W.S. Levine, and B. Açıkmeşe.
Elastic tube model predictive control.
In American Control Conference (ACC), 2016, pages 3594–3599.
IEEE, 2016.
Rawlings2017
J.B. Rawlings, D.Q. Mayne, and M.M. Diehl.
Model Predictive Control: Theory, Computation, and Design.
Madison, WI: Nob Hill Publishing, 2017.
Rockafellar2013
R.T. Rockafellar and S. Uryasev.
The fundamental risk quadrangle in risk management, optimization and
statistical estimation.
Surveys in Operations Research and Management Science,
18:33–53, 2013.
Sehr2019
M.A. Sehr and R.R. Bitmead.
Probing and Duality in Stochastic Model Predictive Control.
In Handbook of Model Predictive Control. Control Engineering, pages
125–144, Birkhäuser, 2019.
Stengel1994
R. Stengel.
Optimal Control and Estimation.
Dover Publications, New York, 1994.
Taylor1996
J.C. Taylor.
An introduction to measure and probability.
Springer, 1996.
Warter1981
H. van Warter and J.C. Willems.
The certainty equivalence property in stochastic control theory.
IEEE Transactions on Automatic Control, AC-26(5):1080–1087,
1981.
Villani2005
C. Villani.
Optimal transport, old and new.
Springer, 2005.
Villanueva2020
M.E. Villanueva, E. De Lazzari, M.A. Müller, and B. Houska.
A set-theoretic generalization of dissipativity with applications in
Tube MPC.
Automatica, 122(109179), 2020.
Villanueva2022
M.E. Villanueva, M.A. Müller, and B. Houska.
Configuration-constrained tube MPC.
arXiv e-prints, page arXiv:2208.12554, 2022 (accessed 2022
November 4).
Witsenhausen1968
H.S. Witsenhausen.
Sets of possible states of linear systems given perturbed
observations.
IEEE Transactions on Automatic Control, 13:556–558, 1968.
Wonham1969
W.M. Wonham.
On the separation theorem of stochastic control.
SIAM Journal on Control, 6(2):312–326, 1968.
Wu2022
F. Wu, M.E. Villanueva, and B. Houska.
Ambiguity tube MPC.
Automatica, 146(110648), 2022.
Zanon2021
M. Zanon and S. Gros.
Safe reinforcement learning using robust MPC.
IEEE Transactions on Automatic Control, 66(8):3638–3652, 2021.
|
http://arxiv.org/abs/2307.03996v1 | 20230708153748 | ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation | [
"Saifullah Mahbub",
"Md. Easin Arafat",
"Chowdhury Rafeed Rahman",
"Zannatul Ferdows",
"Masum Hasan"
] | cs.SE | [
"cs.SE"
] |
Code review is considered a key process in the software industry for minimizing bugs and improving code quality. Inspection of review process effectiveness and continuous improvement can boost development productivity. Such inspection is a time-consuming and human-bias-prone task. We propose a semi-supervised learning based system, ReviewRanker, which is aimed at assigning each code review a confidence score that is expected to resonate with the quality of the review. Our proposed method is trained based on simple and well-defined labels provided by developers. The labeling task requires little to no effort from the developers and has an indirect relation to the end goal (assignment of a review confidence score). ReviewRanker is expected to improve industry-wide code review quality inspection by reducing the human bias and effort required for such a task. The system has the potential to minimize the back-and-forth cycles existing in the development and review process. Usable code and dataset for this research can be found at: https://github.com/saifarnab/code_review
[500]Software and its engineering Software development process management
ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation
Masum Hasan
August 12, 2023
==========================================================================================
§ INTRODUCTION
The editorial world has been using peer review since 1731 <cit.>.
Modern software development industries have given it a more common name: Code Review. Since then, Modern Code Review (MCR) <cit.> has become an essential part of software development. MCR is a software quality control process in which one person or a group of people evaluates the system by examining and analyzing different parts of the source code, either during or after the implementation phase. The purpose of code review is to find bugs, correct mistakes, and boost the consistency of code by improving performance and reducing security vulnerabilities.
Figure <ref> outlines a typical code review process. A developer or a set of developers prepares the code and submits it for review. A reviewer or a subgroup of reviewers then checks the submission and makes sure that the author’s code causes no system failures in other parts of the codebase. They also ensure a consistent coding style and design pattern. Following all these checks and evaluations, the reviewer or a subgroup of reviewers with a higher role either approves or rejects the changes. Developers then make changes in the code, revise their work based on the feedback, or provide appropriate explanations against the approved review until both parties are satisfied.
Sometimes a reviewer figures out the problematic part of the reviewed code but fails to submit an appropriate explanation of the problem. In such cases, the changes made by the developers will probably not satisfy the reviewer, and another couple of develop-review cycles will follow. Such cycles can lead to a substantial decrease in productivity in the software industry.
It is possible to minimize such situations if we can somehow assign each review a quality score. Such scoring will help us in (a) gaining a deeper understanding of quality reviews, (b) identifying quality reviewers in the company and (c) estimating provided review quality before sending off to the developers. Essentially, if after going through a particular review, a developer feels confident about the changes that he has to make in the codebase, then that review is probably of good quality. In this paper, we focus on modeling the developer confidence in a review.
One way is to simply frame this task as a supervised learning task where the input is a review and the output is the confidence score for that review. The output labeling will be performed by the developer to whom the review had been sent for making changes in the codebase. Figure <ref> shows the problem behind such labeling. We can see a review in the figure which has been marked as good, average, below average, and poor by a significant set of developers from three different software companies. We performed this experiment on 25 reviews in total and got more or less similar results. Let us understand what this means. There are developers who are broad-minded and will give a good score even when the review is not that good. The opposite spectrum is also equally visible in the industry. The score assigned by a developer also depends on the mood he or she is in at that particular moment. In short, this labeling process is highly dependent on human perception, which can vary widely from person to person.
We propose an alternative labeling scheme in this paper which indirectly trains a set of three models and enables them in predicting the confidence scores for a particular set of reviews. We call this semi-supervised learning approach ReviewRanker. The labeling is related to three simple multiple choice questions (for the three models) regarding - (a) the understanding of the type of change to perform in the code, (b) the understanding of what to insert and (c) what to delete from the code based on the review of interest. We performed a similar experiment (as of Figure <ref>) with these three multiple choice questions and found out that the choices made by the developers from different companies are similar unless the review is largely vague. Thus we have come to a conclusion that the answer to these questions are not biased by the human perception side of the developers.
During inference (after training is done with a set of labeled reviews), we provide a code review as input to the three models for predicting the answer to the three questions (see Figure <ref>). We get three confidence scores from these three models corresponding to the ground truth answers of these questions (labeled by a developer in advance). We obtain the final confidence score from these three scores. Thus we model the confidence of the developer in understanding the review given to him or her.
Mainly three types of related studies have been performed regarding code review analysis: (1) theoretical studies on different aspects of code reviewing <cit.>, (2) assisting reviewers by problematic code snippet identification <cit.> and (3) reviewer recommendation <cit.>. Although RevHelper <cit.> was developed to measure code review usefulness, it is actually a binary classification tool (useful vs not useful) and does not provide any quality score to the review of interest. Also this method has the human bias aspect that we have mentioned in detail in Figure <ref>.
§ PROBLEM DEFINITION
The input of ReviewRanker is a large set of code reviews R. The output is a confidence score C_i for each review R_i ∈ R, where C_i ∈ [0, 1]. Higher confidence score denotes higher review quality.
C_i is the combination of three different confidence scores coming from three different questions related to review R_i. The answer to each question Q_ij is predicted by a model M_j that frames the question answering as a classification task. We get a confidence score C_ij (associated with the ground truth label answer) from each model M_j for each question Q_ij for the review of interest R_i. The final confidence score C_i of review R_i is the geometric mean of all C_ij's, where j ∈{1,2,3}.
The three questions are as follows:
* What type of operation (change in code) did the code review suggest (multi-class classification)?
* Did you understand what to insert in the code from the review (binary classification)?
* Did you understand what to delete from the code reading the review (binary classification)?
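Given the three confidence values that the trained models assign to the ground-truth answers of these questions, the final score is simply their geometric mean; a minimal sketch:

import numpy as np

def review_confidence(p_operation, p_insert, p_delete):
    # C_i: geometric mean of the three per-question confidences C_ij in [0, 1]
    scores = np.array([p_operation, p_insert, p_delete], dtype=float)
    return float(np.prod(scores) ** (1.0 / len(scores)))

print(review_confidence(0.9, 0.8, 0.7))   # ~0.796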
Unlike questions related to directly assigning a quality score to a review, these three questions are straightforward and have little to no human bias.
§ RELATED WORKS
Research has been undertaken to automate the process of reviewing code by using static checks such as standard violations and common structural defects, while other researchers have focused on automating reviewer recommendation and problematic code detection.
§.§ Studies on Code Review
Semi-structured individual interviews were conducted with seven developers from Microsoft in <cit.>. They concluded that prior knowledge of files leads to useful comments and tends to increase efficiency. The contemporary code review process at Microsoft was examined in <cit.>. The research shows that Microsoft developers spend on average four hours per week on code review, while open-source developers spend five hours. Microsoft developers pay more attention to reviewing relationships with developers compared to open-source developers.
An observational survey of Mozilla’s 88 core developers was conducted in <cit.>. The authors found that approximately 57-69% of developers reviewed fewer than 5 patch files, 10% of developers reviewed 11 to 20 such files, and 4% of developers reviewed more than 21 patch files each week. By analyzing 300,000 code reviews from open-source projects, a study described why code review is responsible for evaluating the reliability of test code and what professional developers do to review test code <cit.>.
§.§ Code Review Automation Empirical Studies
A prototype tool named Code Distance Visualiser was proposed in <cit.> to detect problematic code such as string overflows, memory leaks, null pointer references, and incorrect API usages. The ReviewBot model was proposed in <cit.>, where the checking of source code was automated by using a static analyzer and reviewers were recommended based on the belief that every line of code has a past history. The cHRev model used three measurement metrics to measure the expertise of reviewers based on their review comments: 1) a higher review count, 2) the reviewer’s effort in the workday, and 3) higher weight assigned to the latest reviews <cit.>. RevFinder, a recommendation model for reviewers based on file location, was developed in <cit.>. According to their heuristics, files with identical paths should be reviewed by identical reviewers. To analyze similar file paths, they used four string comparison techniques: 1) longest common prefix, 2) longest common suffix, 3) longest common subsequence and 4) longest common substring. RevRec, developed in <cit.>, consists of two models: the reviewer expertise model (RevRecRE) and the reviewer collaboration model (RevRecRC). They evaluated three open-source projects - Android, OpenStack, and Qt. A comparative study on code review usefulness was conducted based on textual features and reviewer expertise in <cit.>. The authors proposed a machine learning model named RevHelper to predict the usefulness of a review comment. Their comparative study was based on two heuristics - 1) differences between useful and non-useful reviews and 2) how the reviewers' experience helps them to provide appropriate reviews.
§ DATASET DESCRIPTION
The steps regarding the dataset creation process for this research has been briefly shown in the leftmost box of Figure <ref>. We shall describe each of these steps in detail in this section.
§.§ Data Source
We have collected our data from multiple open-source projects hosted in Gerrit [https://www.gerritcodereview.com/]. Gerrit is a popular tool for code review in both open-source and commercial code repositories. Gerrit provides an easily accessible REST API [https://gerrit-review.googlesource.com/Documentation/rest-api.html] for collecting code reviews and their related codes. We have created a Gerrit Miner using Java that mines code reviews from open source code repositories such as Android & Iotivity and stores them in a MySQL database. We later query the database and label the reviews with different criteria described in detail in the upcoming subsections.
§.§ Data Labeling
We have created a labeling application with the Django framework in Python <cit.>.
The labeling app was designed to be user-friendly and intuitive. On entry, the web app asks for the login credentials of the user. Once it is provided, it directly goes to the labeling page and displays a code review comment to the user. The user is asked what type of operation (change type in code) the code review suggests (see Figure <ref>). Four options are provided in the form of a drop-down menu: Insert, Delete, Replace, and Not Enough Information. The web app provides the private URLs to the source code, and by clicking the link the user can view the source code, where the code review was submitted, and the later modification (accepted by reviewer) in the source code side by side (see Figure <ref>).
When the user selects one of the four operations from the drop down menu, he/she is also asked to provide the code snippet that is impacted by the operation. If the operation is an Insert operation, the user is supposed to provide the code snippet that was to be inserted in a text field named Add Code (only if it is understood from the review what was to be inserted). If the operation is a Remove operation, the user puts the code that was to be removed from the original code in the text box named Remove Code (only if it is understood from the review what was to be removed). If the operation is a Replace operation, the user puts the part of the code that changed in Remove Code text box, and the part that it changed into in the Add Code text box (only if both these parts can be understood from the code review alone). We also took a human-centric design approach to design the labeling app. Each time a sample data was submitted, the web page changed the background color so that the labeling process would not become monotonous and also would give a sense of progress to the user.
§.§ Label Validation
The reviews were labeled by a team of five independent volunteers with substantial programming experience. All labelers are from a Computer Science background and have more than two years of working experience with programming languages such as C and Java, specifically in the areas of Android and Iotivity. To ensure consistency in the labeling process, 10% of the reviews were given to all participants for labeling, while the remaining 90% of samples were unique to each labeler. The admin regularly examined 10% of the data labels to check for discrepancies among the labelers; if there was considerable variation, appropriate measures were taken to make the labels more consistent. Later on, the entire dataset was manually labeled and reviewed by senior software developers to ensure proper validation of the assigned labels. The final confirmation of the labeling was obtained from the admin and is considered conclusive for this dataset.
§ MATERIALS AND METHODS
Figure <ref> provides an overview of the steps in developing ReviewRanker. We have already described the dataset creation step in the previous section. In this section, we elaborate on the next four steps, which relate to the ReviewRanker training and inference phases.
§.§ Data Preprocessing
§.§.§ Data Labeling:
Our initial dataset consisted of 2052 review comments. After eliminating redundant samples, we were left with 1483 reviews in our final dataset. Let us describe the ground-truth label assignment process for the three multiple-choice questions asked for each review (the three questions can be found in Section <ref>). In a real-life scenario, the ground-truth labels associated with a particular review are expected to be assigned by the developer(s) to whom the review is directed during the development process. Observing the questions, it is evident that this labeling process takes little to no effort from the developers.
We start with the operation (code change) related question. We define four types of operations: (1) replace (class label 0), (2) delete (label 1), (3) insert (label 2) and (4) not enough information (no label assigned). If a review's operation is labeled as "not enough information", we simply assign that review a confidence score of 0 and exclude it from ReviewRanker training and inference.
The next two questions are about understanding of what to insert and what to remove from the current code base (both are binary classification tasks). If it is clear from the review what to insert, then the insertion related question receives ground truth label of 1, else the label is 0. The exact same aspect goes for the deletion related question.
If the operation is labeled as "replace" (first question), then the labels of both the insertion and deletion related questions are expected to be 1 (although this may not always hold in practice). Similarly, if the operation is labeled as "delete", then the deletion related question is expected to have label 1, while the insertion related question has label 0 in the ideal case; the opposite holds if the operation is labeled as "insert".
Let us now look at an example review - “outer parens not needed”. The labels for this review are as follows:
Operation Type: delete (label 1)
Understanding of something to be added: nothing to add (label 0)
Understanding of something to be deleted: parentheses need to be deleted (label 1)
§.§.§ Similar Word Handling
Our corpus contains more than 3000 unique words, which is a large number considering the small corpus size (fewer than 1500 reviews). By replacing all semantically identical words with a single word, we reduce the word list, which helps our model find meaningful relationships between words. To do so, we use both word stemming and lemmatization. Using word stemming, we can convert a word's plural form to singular, normalize its grammatical state, and so on.
Consider, for example, the various inflected forms of the word "program": through the word-stemming process, we replace all such forms with the word "program" in our unique word list. Using word lemmatization, we can similarly identify a set of words that are verbally similar to a given word, such as "minor", and replace all of them with that word in our unique word list as well. By doing so, our corpus now contains around 1700 unique words.
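For illustration, such a normalisation step could be implemented as in the following sketch, which assumes the NLTK library (and its WordNet data) is available; the function names are ours, not those of the original implementation.

# A minimal sketch of the similar-word handling step; assumes NLTK and its
# WordNet corpus are installed. Names here are illustrative only.
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

def normalize_word(word):
    # Lemmatize first (groups verbally similar forms), then stem (strips
    # plural and grammatical endings), so inflected variants of a word
    # collapse onto a single entry in the unique word list.
    return stemmer.stem(lemmatizer.lemmatize(word.lower()))

def normalize_corpus(reviews):
    # Replace every word in every review by its normalized form.
    return [[normalize_word(w) for w in review.split()] for review in reviews]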
§.§.§ Special Word Handling:
Our dataset contains code reviews that include a significant number of special tokens specific to C code; these have no natural-language meaning but play a very important role in review comments. Our proposed model works on the textual relationship between normal words and these special tokens, so we replace them with common placeholder words based on their operational characteristics. First, we lowercase the starting letter of all words in our corpus. After that, for each word:
* If the word contains any uppercase letter, then we replace it with keywordvariable, since variable names are usually written in camel case.
* Otherwise, if the word contains .h or #, then we replace it with keyworddoth; such special characters denote header files in C programming.
* Otherwise, if the word contains _, then we replace it with keywordunderscore. A word containing an underscore is ambiguous, as it may denote either a function or a variable, so we treat it with a dedicated keyword.
* Otherwise, if the word contains parentheses, then we replace it with keywordfunction, since all function calls include a pair of parentheses.
After this special keyword handling, our corpus contains 1368 unique words, down from the initial 3000; the replacement procedure is sketched below.
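A minimal sketch of these replacement rules could look as follows; the helper name is ours and the rules simply mirror the list above.

def replace_special_tokens(word):
    # Illustrative sketch of the special-word handling rules; the placeholder
    # tokens follow the wording in the text.
    word = word[0].lower() + word[1:] if word else word   # lowercase the starting letter
    if any(c.isupper() for c in word):
        return "keywordvariable"      # camel-case words are treated as variables
    if ".h" in word or "#" in word:
        return "keyworddoth"          # header-file indicators
    if "_" in word:
        return "keywordunderscore"    # underscore: could be a function or a variable
    if "(" in word or ")" in word:
        return "keywordfunction"      # parentheses indicate a function
    return word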
§.§ Feature Extraction
In order to feed a review to a model as input, we need a mathematical representation of that review. We have 1368 unique words in our preprocessed dataset (see Section <ref>), and each review contains a subset of these words. We therefore represent each review with a vector V of size 1368, where V_i is the total count of word_i in the review. Let us look at two examples:
Review sample 1: line over fifty characters you should reduce it to twenty characters.
Review sample 2: provide line level comment to line.
If we create a unique word list from this small corpus (in order of first occurrence), it would be: line, over, fifty, characters, you, should, reduce, it, to, twenty, provide, level, comment. We can index these words from 0 to 12. The feature vectors for the two sample reviews are then [1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0] and [2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1], respectively.
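The following sketch reproduces this bag-of-words vectorization for the two sample reviews; all names are illustrative.

# Minimal bag-of-words feature extraction sketch.
def build_vocabulary(reviews):
    vocab = {}
    for review in reviews:
        for word in review.split():
            if word not in vocab:
                vocab[word] = len(vocab)   # index words in order of first occurrence
    return vocab

def vectorize(review, vocab):
    vector = [0] * len(vocab)
    for word in review.split():
        if word in vocab:
            vector[vocab[word]] += 1       # V_i = count of word_i in the review
    return vector

reviews = ["line over fifty characters you should reduce it to twenty characters",
           "provide line level comment to line"]
vocab = build_vocabulary(reviews)
print([vectorize(r, vocab) for r in reviews])
# [[1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0], [2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1]]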
Instead of utilizing word embedding based approaches such as Word2Vec <cit.> and FastText <cit.>, we have opted for a bag-of-words approach <cit.>. Word embeddings produce semantic vectors for each word and are typically employed with recurrent neural networks (RNNs) <cit.>. However, due to our small dataset and straightforward classification tasks, we observed through five-fold cross validation that a basic shallow neural network with bag-of-words features outperforms RNNs with word embeddings.
§.§ Model Details
Our proposed algorithm combines three models, as shown in Table <ref>. Details of the classes under each model can be found in Section <ref>. Each model is a fully connected vanilla neural network, but with a different set of parameter values. The input layer is of size 1368 (the word frequency vector over the 1368 unique words). M_1 and M_2 are used for binary classification, while M_3 is used for multi-class classification (three classes). The ReLU activation function <cit.> is used for the intermediate layers, while Softmax is used for the output layer. A dropout of 20% is applied between consecutive hidden layers to prevent overfitting <cit.>. Categorical cross entropy <cit.> is used as the loss function, while the Adam (Adaptive Moment Estimation) optimizer <cit.> is used for the weight updates.
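For illustration, one of the three task models could be built as in the following Keras sketch; the hidden-layer sizes are our own assumption, while the input size, activations, dropout rate, loss and optimizer follow the description above.

# Illustrative Keras sketch of one of the three task models; hidden-layer
# sizes are assumptions, not taken from the original implementation.
from tensorflow.keras import layers, models

def build_task_model(num_classes):
    model = models.Sequential([
        layers.Dense(256, activation="relu", input_shape=(1368,)),  # word-frequency vector
        layers.Dropout(0.2),                    # 20% dropout between hidden layers
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

m1 = build_task_model(2)   # something-to-add (binary)
m2 = build_task_model(2)   # something-to-delete (binary)
m3 = build_task_model(3)   # operation type (replace / delete / insert)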
§.§ Review Confidence Score Generation
Table <ref> illustrates the entire process of confidence score generation for two sample reviews (We assume that the three task specific models M_1, M_2 and M_3 are already trained).
The feature vector of each review is passed through all three models separately. Each model provides a discrete probability distribution over its task-specific classes. For example, model M_3 always provides three probability values (summing to 1) for the three operation-type classes. For each model, we only take the probability score associated with the ground truth class label (expected to be available for all reviews). Thus, for one review, we obtain a total of three confidence scores (predicted probability values) from the three models. The final confidence score is the geometric mean ((C_1 × C_2 × C_3)^1/3) of these three scores. A higher confidence score denotes higher review quality, as developer confidence in such reviews is expected to be high.
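The score combination itself reduces to a few lines; the following sketch uses hypothetical model outputs and labels purely for illustration.

# Sketch of the final confidence-score computation for one review.
import numpy as np

def review_confidence(p_add, p_delete, p_operation, labels):
    # Each p_* is the softmax output of the corresponding model; `labels`
    # holds the ground-truth class index for each of the three questions.
    c1 = p_add[labels["add"]]
    c2 = p_delete[labels["delete"]]
    c3 = p_operation[labels["operation"]]
    return (c1 * c2 * c3) ** (1.0 / 3.0)    # geometric mean of the three scores

# Hypothetical model outputs for a single review:
print(review_confidence(np.array([0.1, 0.9]),
                        np.array([0.8, 0.2]),
                        np.array([0.7, 0.2, 0.1]),
                        {"add": 1, "delete": 0, "operation": 0}))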
§.§ Confidence Score Generation for the Entire Review Set
The expected input to the ReviewRanker system is not a single review but an entire set of labeled reviews (labels for the three questions/tasks). The three models that are part of ReviewRanker are trained on a fraction of this labeled review set, and the confidence scores for the reviews are obtained in a 10-fold cross-validation style. Given a large set of labeled reviews S, we first randomly divide the set into 10 small disjoint subsets S_1, S_2, …, S_10. For fold i of the 10-fold cross-validation, we use all subsets S_j (j ≠ i) to train the three models (from randomly initialized weights) and then use the trained models to predict the final confidence scores of the validation subset S_i. After doing this for all 10 folds, we obtain confidence scores for all the reviews in the entire set S. The important point is that the confidence score of each review is obtained only when that review is part of the validation subset. This avoids reporting overfitted scores on training data (many of the confidence scores on training data are close to 1). A sketch of this procedure is given below.
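This sketch reuses the illustrative build_task_model helper from earlier; the number of training epochs is an arbitrary placeholder, not a value taken from the paper.

# Sketch of the 10-fold confidence-score generation.
import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras.utils import to_categorical

def score_all_reviews(X, add_labels, del_labels, op_labels, epochs=30):
    # Confidence scores are produced only when a review is in the validation fold.
    scores = np.zeros(len(X))
    kfold = KFold(n_splits=10, shuffle=True, random_state=0)
    for train_idx, val_idx in kfold.split(X):
        m_add, m_del, m_op = build_task_model(2), build_task_model(2), build_task_model(3)
        m_add.fit(X[train_idx], to_categorical(add_labels[train_idx], 2), epochs=epochs, verbose=0)
        m_del.fit(X[train_idx], to_categorical(del_labels[train_idx], 2), epochs=epochs, verbose=0)
        m_op.fit(X[train_idx], to_categorical(op_labels[train_idx], 3), epochs=epochs, verbose=0)
        p_add, p_del, p_op = m_add.predict(X[val_idx]), m_del.predict(X[val_idx]), m_op.predict(X[val_idx])
        for i, idx in enumerate(val_idx):
            c1 = p_add[i, add_labels[idx]]
            c2 = p_del[i, del_labels[idx]]
            c3 = p_op[i, op_labels[idx]]
            scores[idx] = (c1 * c2 * c3) ** (1.0 / 3.0)   # geometric mean
    return scores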
§ RESULTS AND DISCUSSION
§.§ Manual Inspection of Assigned Review Quality
We examine both the review text and its corresponding confidence score to gain insight into the behavior of the proposed ReviewRanker system. Our goal is to understand why certain reviews receive higher scores than others. To this end, we randomly selected several reviews with high, average, and low confidence scores and analyzed their content (shown in Table <ref>). Through our analysis, we discovered that reviews with higher confidence scores are generally easy to understand, provide clear suggestions for changes to the code, and use specific variable and function names. Reviews with average confidence scores are sometimes easy to understand but lack substantive information, are excessively long, or contain lengthy blocks of code. Reviews with very low confidence scores are often too short to understand, lack meaningful information, and include asterisks and other special characters. Since ReviewRanker is composed of three trainable neural network models, it is a data-hungry system: the larger the provided review set, the better ReviewRanker will be able to model the developer confidence in a particular review.
§.§ Model Performance
Table <ref> shows the dataset size and the performance of the three ReviewRanker models across the 10 folds. The high mean validation accuracy shows that the models can learn to answer the three simple questions associated with confidence score generation effectively and can generalize well to validation data. The reported performance has some implications for the usage of ReviewRanker. If, for some particular set of code reviews, the 10-fold cross-validation performance is not up to the mark, it means that the three models have not been able to learn how to answer the three questions for the provided reviews. In that case, the final confidence score provided by ReviewRanker will not be a reliable metric of review quality.
§.§ ReviewRanker Validation
ReviewRanker has not been validated at an industry-wide scale. We made an effort to validate ReviewRanker at a small scale in three different software companies; however, as mentioned in the Introduction, manually assigning a quality score to a review as part of a labeling process involves high human bias, and hence this effort remained inconclusive. Nevertheless, the system has the potential to provide an effective review quality score at industry scale. The system works end-to-end: the input is a set of reviews (with no limitation on the number of reviews provided) and the output is a CSV file containing a confidence score for each of the provided reviews. These scores can be used to characterize high, average, and poor quality reviews, which in turn can help software companies develop proper guidelines for writing code reviews. This can save considerable time and cost by minimizing the occurrence of develop-review-develop cycles. Designing an effective industry-wide validation study is an immediate next research step for ReviewRanker.
§.§ Limitations
ReviewRanker asks three questions regarding the change type, code addition, and code deletion while computing the confidence score for a particular review. It does not use the context of the code on which the review was provided, although we firmly believe that using the code context in the models answering the three questions could greatly benefit the confidence score generation process. In such a case, sequence modeling approaches such as Long Short-Term Memory (LSTM) <cit.> or Transformer <cit.> networks could be used as the three models of ReviewRanker. One must, however, note that these sequence models are extremely data hungry; if a particular review set has fewer than 10K reviews (which is our case as well), it is better to use the simple feature extraction method and model architecture that we propose. The three questions that we ask the developers to label for each sample are not based on any large-scale study; we believe that a more optimal set of questions could be devised for review quality estimation, provided that a well-designed large-scale study is undertaken for this purpose. The reviews in the experimental dataset for ReviewRanker are line-level code reviews; we have not tested the method on block-level code reviews, although we expect similar results in that case as well. Finally, because of the human-bias factor, proper validation of the proposed ReviewRanker method could not be performed.
§ CONCLUSION
In this paper, we propose ReviewRanker with the goal of enabling effective inspection of code review quality. We identify the human-bias factor of a supervised learning based approach and thus resort to a human-bias-free multiple-choice question scheme in order to indirectly obtain the confidence score for each review in a semi-supervised fashion. We ensure that the labeling process requires little to no effort from the developers. ReviewRanker can handle a large number of reviews (theoretically, there is no limitation on the number of reviews provided) and can provide the confidence score for each review in an end-to-end manner with zero external effort required. The proposed system can be implemented easily at industry level to consistently identify the best reviewers and promote the best review practices with minimal time and effort. The adoption of this system is expected to enhance code quality and to reduce the back-and-forth cycle of the review process. Some immediate future research directions are: (a) a well-designed industry-scale evaluation of ReviewRanker's effectiveness in review quality estimation, (b) incorporation of code context in the ReviewRanker models, and (c) replacing the current set of questions with a more suitable set through a large-scale study. We plan to make ReviewRanker publicly available in the form of a Python package upon acceptance.
|
http://arxiv.org/abs/2307.04484v1 | 20230710111419 | Invertible Low-Dimensional Modelling of X-ray Absorption Spectra for Potential Applications in Spectral X-ray Imaging | [
"Raziye Kubra Kumrular",
"Thomas Blumensath"
] | cs.LG | [
"cs.LG",
"physics.app-ph",
"physics.comp-ph"
] |
Invertible Low-Dimensional Modelling of X-ray Absorption Spectra for Potential Applications in Spectral X-ray Imaging
Raziye Kubra Kumrular and Thomas Blumensath
R. K. Kumrular and T. Blumensath are with the ISVR Signal Processing and
Audio Hearing Group, University of Southampton, Southampton SO17 1BJ, U.K.
(e-mail: [email protected] )
Received; accepted
=============================================================================================================================================================================================================================================
X-ray interaction with matter is an energy-dependent process that is contingent on the atomic structure of the constituent material elements. The most advanced models to capture this relationship currently rely on Monte Carlo (MC) simulations. Whilst these models are very accurate, in many problems in spectral X-ray imaging, such as data compression, noise removal, spectral estimation, and the quantitative measurement of material compositions, they are of limited use, as these applications typically require the efficient inversion of the model, that is, they require the estimation of the best model parameters for a given spectral measurement. Current models that can be easily inverted, however, typically only work when modelling spectra in regions away from their K-edges, so they have limited utility when modelling a wider range of materials. In this paper, we thus propose a novel non-linear model that combines a deep neural network autoencoder with an optimal linear model based on the Singular Value Decomposition (SVD). We compare our new method to alternative linear and non-linear approaches, a sparse model and an alternative deep learning model. We demonstrate the advantages of our method over traditional models, especially when modelling X-ray absorption spectra that contain K-edges in the energy range of interest.
Convolutional neural network (CNN), Denoising autoencoder, K-edge, Singular value decomposition (SVD), X-ray absorption spectrum
§ INTRODUCTION
X-ray Computed Tomography (XCT), which generates volumetric images based on measurements of X-ray transmission through an object, is a versatile imaging technique with applications in industry, security, medicine, and scientific investigation <cit.>. X-ray transmission is a function of X-ray energy, and the measurement of this dependency can be of significant importance in many applications. We are here interested in building models of this dependency that allow us to remove measurement noise, compress measurement data, and constrain the estimation from limited measurements. In these applications, it is crucial to use models that both constrain the measurement and allow for easy estimation of the model parameters.
Whilst the physical interaction between photons and material can be modelled explicitly via very accurate Monte Carlo (MC) simulations<cit.>, these models do not allow for simple model inversion. We thus here develop models with few parameters (so-called low-dimensional models) that are easy to invert, that is, that allow us to easily compute optimal parameters for a given X-ray spectral observation. These models can then be used to create a parameterised function as a computational tool for spectral data analysis that has a range of significant applications in X-ray imaging. For example, traditional XCT reconstruction algorithms that ignore energy dependence produce image artefacts which can be removed when using invertible low-dimensional models <cit.>. Furthermore, low-dimensional models are crucial to remove measurement noise or constrain the ill-conditioned inverse problems that arise in several spectral imaging methods <cit.>.
Our work here is particularly motivated by our interest in measuring the spatial distribution of X-ray absorption spectra using commonly available lab-based X-ray tomography systems. There are several approaches to this. X-ray sources found in these systems generate X-ray photons with a range of energies (the X-ray source spectrum I_0(E)), though the X-ray detector does not normally differentiate different energy levels. To estimate absorption spectra, Dual-Energy CT uses two source spectra to allow spectral estimation <cit.> by utilising a two-parameter linear absorption spectral model. In Multi-Energy computed tomography (MECT), also called spectral X-ray tomography, spectrally resolved measurements are taken using photon counting detectors (PCD), though this comes at the cost of additional hardware requirements, a significant decrease in measurement speed, a significant increase in measurement noise as well as an increase in computational loads associated with the increase in measured data <cit.>. In all of these applications, a more accurate invertible low-dimensional model of the X-ray absorption spectra is of significant interest, especially when imaging a wide range of materials.
The attenuation of an X-ray beam with photons of a single energy (E) travelling along a path through an object is often modelled using the Beer-Lambert law:
I(E)=I_0(E)e^-∫μ(x,E) dx
where I(E) is the X-ray intensity measured by the detector, and I_0(E) is the X-ray intensity that would be measured by the detector without an object present. μ(x,E) is the energy-dependent X-ray linear attenuation coefficient (LAC) at position x along the X-ray beam and the integration is along the line of the X-ray path through the object.
For X-ray energies below about 1.02 MeV, X-ray material interactions are due to three primary phenomena: Rayleigh scattering (μ_R(E)); Compton scattering (μ_C(E)); and the Photoelectric effect (μ_P(E)) <cit.>. The total linear attenuation coefficient μ(E) can thus be written as:
μ(E)=μ_R(E)+ μ_C(E) +μ_P(E)
Figure <ref> shows the total linear attenuation of Aluminum (atomic-number(Z)=13) and Iodine (Z=53) and the contribution of each of these interactions.
We here show the LAC as a function of energy, focusing on the energy range between 20 keV and 150 keV commonly used in lab-based X-ray systems. Of particular interest for our paper will be the step in the LAC due to the Photoelectric effect (as seen in Fig <ref>b for Iodine at 33.17 keV), which appears at the K-shell binding energy of the atom. This step is known as the K-edge of the element and is unique for each element <cit.>.
The intrinsic dimensionality of the LACs using a linear principal component model has been studied previously in different settings <cit.>. We here instead investigate non-linear low-dimensional models of the X-ray absorption spectrum that work for all elements (Z≤ 92) and over energy ranges found in typical lab-based tomography systems (20keV≤ E≤150keV).
§ STATE OF THE ART ABSORPTION SPECTRUM MODELLING
§.§ Representation of Linear Attenuation Coefficient with Linear Models
Different X-ray absorption models have been proposed in the literature. These models typically assume absorption spectra to be a linear combination of two or more basis functions, which are assumed to be independent of the material <cit.>.
§.§.§ Photoelectric-Compton Basis (PCB) model
The first model is based on (<ref>), where Rayleigh scattering is assumed to be negligible <cit.>. The Photoelectric-Compton Basis (PCB) model thus only uses basis functions to represent the Photoelectric effect and Compton scattering, which are assumed to be material invariant.
This holds for energies far from the K-edge energy of a material (see Fig. <ref>a), where the Photoelectric absorption and Compton scattering phenomenon can be approximated as <cit.>:
μ(E)=a_pf_p(E) + a_cf_KN(E)
where f_p(E) and f_KN(E) are functions of energy only and capture the energy dependence of Photoelectric absorption and Compton scattering, respectively. a_p and a_c, on the other hand, are parameters that are independent of energy and instead only vary with the material (they are functions of the electron density (ρ_e) and the atomic number of the material). a_p and a_c are thus two parameters that can be used to fit this linear two-dimensional model to data <cit.>. For a single material, they can be derived as functions of ρ_e and Z as:
a_p=ρ_eC_PZ^m
a_c=ρ_e,
where C_P = 9.8 × 10^-24 <cit.> is a constant, and m = 3.8 was determined experimentally.
The energy dependence of the Photoelectric effect is approximated by f_p(E) = 1/E^n, and the energy dependence of Compton scattering can be approximated using the Klein-Nishina function (<ref>) <cit.>.
f_KN(E) = (1+α)/α^2 [ 2(1+α)/(1+2α) - (1/α) ln(1+2α) ] + (1/(2α)) ln(1+2α) - (1+3α)/(1+2α)^2
where α is E/E_e and E_e ≈ 511 keV denotes the rest mass energy of an electron. This two-dimensional linear model is suitable for low atomic number materials (Z<18) that do not have a K absorption edge in the range of energies considered, though the approximation error increases close to the K-edge as well as for higher energies <cit.>.
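For illustration, a minimal numerical sketch of the PCB model fit is given below; the exponent n in f_p(E) = 1/E^n is left as a parameter (a value around 3 is often quoted, but the exact value used here is an assumption), and the synthetic spectrum is purely illustrative.

# Sketch of the two-parameter PCB model fit; names and the value of n are ours.
import numpy as np

def f_kn(E, E_e=511.0):
    # Klein-Nishina energy dependence, with alpha = E / E_e.
    a = E / E_e
    return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
            + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a)**2)

def fit_pcb(energies, mu, n=3.0):
    # Least-squares fit of mu(E) ~ a_p / E^n + a_c * f_KN(E).
    B = np.column_stack([energies**(-n), f_kn(energies)])
    coeffs, *_ = np.linalg.lstsq(B, mu, rcond=None)
    a_p, a_c = coeffs
    return a_p, a_c

energies = np.linspace(20.0, 150.0, 26)                 # keV, 26 energy bins
mu = 2.0 * energies**(-3.0) + 0.15 * f_kn(energies)     # synthetic, illustrative spectrum
print(fit_pcb(energies, mu))                            # recovers approximately (2.0, 0.15)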
§.§.§ Material-basis (MB) model
The second linear model uses material-basis (MB) functions, which are two or more LAC functions taken from previously chosen reference materials <cit.>. This model is particularly popular in medical imaging, where the imaged object can be modelled using a basis function for the LAC of bone and one for soft tissue (or water) <cit.>. However, it is hard to express a wider range of materials with just two reference materials in the MB model, and this model does not provide a direct estimate of the electron density and effective atomic number <cit.>.
§.§.§ Learned linear representations
Basis functions for low-dimensional modelling of X-ray absorption spectra can also be learned from training data. This can be done using the singular value decomposition (SVD), which also gives an estimate of the approximation error that can be achieved. The SVD computes the best linear approximation to a given training dataset in the mean squared error sense for a given size of subspace. There is a close relationship between SVD and principal component analysis (PCA) <cit.>, which has been used several times to derive low-dimensional linear models <cit.>.
For materials without K-edge in the energy range, it has been found that SVD models provide good approximations to LACs using two basis functions <cit.>. Furthermore, the learned basis functions are very similar to μ_p(E) and μ_c(E) <cit.>.
However, these models no longer work close to a K-edge <cit.>, though increasing the number of basis functions naturally has been found to increase performance also in these cases <cit.>.
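As an illustration of how such a learned linear model can be obtained, the following sketch fits an SVD basis to a matrix of training spectra; whether the data is mean-centred before the decomposition, and all variable names, are our own choices rather than details taken from the cited works.

# Sketch of a learned linear (SVD) model of absorption spectra.
import numpy as np

def learn_svd_basis(train_spectra, k=2):
    # train_spectra: (num_samples, num_energy_bins), one spectrum per row.
    mean = train_spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(train_spectra - mean, full_matrices=False)
    return mean, Vt[:k]                      # k right singular vectors = basis functions

def project(spectrum, mean, basis):
    coeffs = basis @ (spectrum - mean)       # k model parameters for this spectrum
    approx = mean + basis.T @ coeffs         # best rank-k linear approximation
    return coeffs, approx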
§.§ Non-linear models
Given the inability of low-dimensional linear models to capture K-edges in absorption spectra, non-linear models might be suitable alternatives. As there are no known analytic models that capture the change in absorption around the K-edge of all materials in a succinct parameterization, learned non-linear models are a viable alternative.
§.§.§ Sparse Model
We have already introduced the idea of using a material basis function model. It is possible to include a basis function for each material in the periodic table, but as a linear model, this would require us to fit many coefficients. Instead, to derive models with few non-zero coefficients when using a larger set of material basis functions, sparse models can be used <cit.>. Whilst the generative model here is still linear (i.e. a spectrum is modelled as a linear computation of basis function), the estimation of the weights now becomes a non-linear process. To get a low-dimensional model, the basic assumption then is that a given spectrum represents a material that is a combination of a few elements.
Sparse models have been suggested as a complement to traditional regression methods for better identification of spectra in Raman spectroscopy <cit.>, though to the best of our knowledge, have not yet been used to model X-ray absorption spectra.
§.§.§ Neural Network based Models
Recent advances in deep learning now allow the estimation of complex non-linear relationships in complex data.
A suitable model for our purpose is an autoencoder, which is a deep neural network that can learn a non-linear low-dimensional representation <cit.>.
The network consists of two main components; a non-linear encoder, which compresses the input into a latent space representation, and a non-linear decoder which reconstructs the data from the low-dimensional representation <cit.>. For a single material, autoencoders have already demonstrated the ability to capture fine detail in the absorption spectrum around K-edge energies <cit.>.
To increase robustness and to incorporate noise suppression, an autoencoder is often trained as a denoising autoencoder (DAE), where the difference in training is that the input to the encoder is corrupted by noise during training.
§ MATERIAL AND METHODS
In this paper, we hypothesize that a single non-linear low-dimensional latent representation will allow us to model the X-ray absorption spectra of all elements, including those that have a K-edge in the energy range of interest. As low Z materials without a K-edge in the energy range under investigation are already well approximated using two linear basis functions, we propose to model the difference between a given spectrum and an optimal two-dimensional linear approximation.
§.§ Proposed Non-linear Model
Our proposed model combines a non-linear autoencoder with a two-dimensional SVD-based representation as shown in Fig. <ref>.
The SVD learns the effect of the Photoelectric effect and Compton scattering for materials with low atomic numbers, where we do not have a K-edge in the energy range of interest. The autoencoder then uses a 3-node latent representation to try and model the deviation of the true spectrum from the linear model of materials that have a K-edge in the energy range.
§.§ Low Dimensional Representation of X-ray Absorption Spectrum with the Autoencoder
There are different network architectures that can be used as the autoencoder in our model. We here compare two convolutional neural networks (CNN) and three fully connected neural networks (FCNN). The most basic FCNN simply consists of an input layer, the hidden (code) layer and an output layer, with ReLU non-linearities in the input and output layers. The other four network architectures are shown in Fig. <ref> and Fig. <ref>. These architectures all have 3 nodes in the latent space when used jointly with the two-component SVD model, or 5 nodes if used without an initial SVD approximation.
The main difference between FCNN2 and FCNN3 is the layer structure. In FCNN3, the number of nodes shrinks gradually in the encoder and expands gradually in the decoder, which is the usual layer structure for an autoencoder, whilst in FCNN2, the number of nodes across two consecutive layers first shrinks by about half before slightly expanding again in the next layer, a pattern that is repeated in the encoder and inverted in the decoder. Batch normalization is used to prevent overfitting. The main difference between the two convolutional networks is that CNN1 uses strided convolutions, whilst CNN2 uses max pooling. To apply our approach to the dataset sampled at a finer energy resolution (131 energy levels), the CNN2 architecture is modified by adding deeper layers while keeping the same number of nodes in the latent space.
§.§ A Sparse Regularized Model for X-ray Absorption Spectrum
As a comparison, we also implement a sparse model using a material basis function matrix. Let Y be the X-ray absorption spectrum of a chemical mixture (Y ∈ R^N), and A = [a_1, a_2, …, a_M] a matrix whose columns are the material basis functions of all elements of interest (A ∈ R^N×M). To compute a sparse representation X, we solve the lasso problem:
min_X ‖ Y - AX ‖_2 + λ ‖ X ‖_1
where we use the FISTA algorithm for optimisation.
We here generate the material basis function matrix by using the LAC values for the 92 elements provided by the National Institute of Standards and Technology (NIST) database <cit.>. As the solution to the above lasso problem does potentially provide approximations of the data with more than 5 basis functions, for consistency with our 5 parameter model, we restrict the solution by selecting the 5 largest elements (in magnitude) of X and then fitting these values by computing a least squares solution using only the selected 5 material basis functions.
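The following is a minimal sketch of this sparse fitting step; it assumes the standard squared-error form of the lasso objective as commonly handled by FISTA, and all names and iteration counts are illustrative.

# Sketch of the sparse (Fista) model: FISTA for the l1-regularised fit, followed
# by selection of the 5 largest coefficients and a least-squares refit.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, y, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def sparse_fit(A, y, lam, n_keep=5):
    x = fista(A, y, lam)
    support = np.argsort(np.abs(x))[-n_keep:]            # 5 largest coefficients in magnitude
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    approx = A[:, support] @ coeffs                      # least-squares refit on selected materials
    return support, coeffs, approx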
§.§ Traditional methods
To compare the two non-linear models above to traditional linear and non-linear models, we furthermore implemented an SVD-based method, where we selected the largest 5 components to provide a 5-dimensional linear model. We also implemented our three autoencoder models without the initial 2-dimensional SVD model by extending the dimension of the hidden layer to 5. Thus, all our models could be used to fit 5 parameters to a spectrum.
§.§ Dataset of the simulated X-ray absorption spectrum
X-ray absorption spectra have been simulated using the linear attenuation coefficients of the 92 chemical elements with Z ≤ 92. LAC values were obtained by multiplying the MAC (mass attenuation coefficient) values with the average mass densities obtained from NIST <cit.>.
The energy range of interest was chosen to be between 20 keV to 150 keV, which is the available source energy range found in many lab-based X-ray tubes. For computational efficiency, spectra were re-sampled into 26 equally sized energy bins, though similar results can be achieved with a finer energy resolution.
We generated a range of different datasets, consisting of combinations of between 1 and 5 different elements, with some datasets having pre-specified numbers of elemental spectra with K-edges. All datasets are summarised in Table <ref>. Each mixture is generated by randomly choosing the elements (possibly with restrictions on the required numbers of K-edges) and then combining them by multiplying each by the standard elemental density for that material as well as a random scalar drawn from a uniform distribution in the range between 0 and 1. To obtain consistent data, the datasets were standardized after generating the combined LACs. To train the denoising autoencoders, Gaussian noise with zero mean and a standard deviation of 0.1 was added to generate a noisy version of each dataset.
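A minimal sketch of this mixture-generation procedure is given below; the array elemental_lacs is assumed to already contain the density-scaled elemental LAC spectra (e.g., 92 elements × 26 energy bins), and the choice to standardize per energy bin is our own assumption.

# Sketch of the mixture-generation procedure; names and details are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_mixture_dataset(elemental_lacs, n_samples, n_elements, noise_std=0.1):
    n_bins = elemental_lacs.shape[1]
    clean = np.zeros((n_samples, n_bins))
    for i in range(n_samples):
        picks = rng.choice(elemental_lacs.shape[0], size=n_elements, replace=False)
        weights = rng.uniform(0.0, 1.0, size=n_elements)          # random scalars in [0, 1]
        clean[i] = weights @ elemental_lacs[picks]                 # combined LAC of the mixture
    # Standardize the dataset after generating the combined LACs.
    clean = (clean - clean.mean(axis=0)) / clean.std(axis=0)
    noisy = clean + rng.normal(0.0, noise_std, size=clean.shape)   # noisy version for the denoising AE
    return clean, noisy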
We created various datasets to evaluate the proposed method and compare it with other methods. Table <ref> shows the names of the generated datasets, where the subscript indicates the number of elements in each mixture in that dataset (e.g., each mixture in D_2E consists of two randomly selected elements), as well as the number of elements in each mixture that have K-edges (e.g., each mixture in D_2E,2K contains two elements with K-edges in the energy range of interest, i.e., Z>42). The dataset containing 131 energy levels (D_2E,131) was generated the same way as the other datasets, the only difference being that it was sampled at every energy level.
Example spectra are shown in Figure <ref>a where we show noisy and noise-free spectra from D_2E, and Fig. <ref>b, where we show two example spectra from D_2E,0K.
§.§ Loss function
To evaluate the performance of different methods, we use the normalised mean squared error (NMSE):
NMSE = ‖ Y - Ŷ‖^2 / ‖ Y ‖^2,
where Y is the true X-ray absorption spectrum and Ŷ is the predicted X-ray absorption spectrum; ‖ Y - Ŷ‖ is the l_2 norm of the error between the true and predicted spectra, while ‖ Y ‖ is the l_2 norm of the true spectrum.
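For completeness, a minimal implementation of this metric could look as follows (names are ours).

# Sketch of the NMSE metric used throughout the experiments.
import numpy as np

def nmse(y_true, y_pred):
    return np.linalg.norm(y_true - y_pred) ** 2 / np.linalg.norm(y_true) ** 2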
§ EXPERIMENTS AND RESULTS
We test the sparse and machine learning-based non-linear models and compare them to the linear methods. In the rest of the paper, we refer to the proposed hybrid models as SVD/autoencoder and to the autoencoder models with 5 nodes in the latent space layer as 5-dimensional autoencoders. We also fit an SVD model using the largest 5 components, which we call the 5-dimensional SVD. The sparse model, where we fit the largest 5 components after sparse decomposition, is called the Fista model.
All models that include one of the autoencoders were trained using the same parameters, using an Adam optimiser with a batch size of 64 and running for 300 epochs with a mean squared error loss function.
Unless otherwise stated, all autoencoder-based models were trained on the dataset D_2E, which was randomly divided into training (72%), validation (20%), and test (8%) sets. The validation set was used to monitor the model performance during training.
For the SVD/autoencoder model, we trained the SVD and the autoencoder separately, starting by fitting the SVD using data without K-edges, namely D_2E,0K. We then trained the autoencoder in the SVD/autoencoder model with the training data from D_2E, which also included simulated absorption spectra with K-edges. For the training of the autoencoder part of the SVD/autoencoder model, each spectrum was first projected onto the SVD subspace and the residual error was used to train the autoencoder. The output of the autoencoder was then added back to the approximation computed with the SVD model to provide the spectral approximation (as shown in Fig. <ref>).
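A compact sketch of this two-stage procedure is given below; the use of Keras and the hidden-layer sizes of the autoencoder are our own assumptions, while the two-component SVD, the 3-node latent space, the mean-squared-error loss, the batch size of 64 and the 300 training epochs follow the text.

# Sketch of the two-stage SVD/autoencoder fit; layer sizes are assumptions.
import numpy as np
from tensorflow.keras import layers, models

def fit_svd_basis(no_kedge_spectra, k=2):
    _, _, Vt = np.linalg.svd(no_kedge_spectra, full_matrices=False)
    return Vt[:k]                                     # linear part, fitted on D_2E,0K

def build_residual_autoencoder(n_bins=26, latent=3):
    return models.Sequential([
        layers.Dense(16, activation="relu", input_shape=(n_bins,)),
        layers.Dense(latent),                         # 3-node latent space
        layers.Dense(16, activation="relu"),
        layers.Dense(n_bins),
    ])

def train_hybrid(train_spectra, basis, epochs=300):
    proj = train_spectra @ basis.T @ basis            # SVD approximation of each spectrum
    residual = train_spectra - proj                   # part the autoencoder must model
    ae = build_residual_autoencoder(train_spectra.shape[1])
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(residual, residual, epochs=epochs, batch_size=64, verbose=0)
    return ae

def approximate(spectra, basis, ae):
    proj = spectra @ basis.T @ basis
    return proj + ae.predict(spectra - proj)          # add the non-linear correction back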
We also used the same training dataset from D_2E to fit the 5-dimensional SVD model. For the Fista model, the sparsity parameter (λ) was tuned for the best performance on the same dataset. As the SVD is known to provide the best linear low-dimensional approximation in the mean squared error sense, we do not report results for other linear models.
After the training step, all models were tested on the test dataset of D_2E, and each model was evaluated by plotting box-whisker plots of the NMSE for each spectrum in the test data. Fig. <ref> shows results for the SVD/autoencoder models, the 5-dimensional autoencoder models, the 5-dimensional SVD model and the Fista model. From these results, we see that CNN2 performs best as the non-linear model, both on its own and in conjunction with the initial SVD projection. (Similar results were found when analysing other datasets; results not shown for brevity.) For the remainder of this paper, we thus only report the results for the CNN2-based models, the 5-dimensional SVD model and the Fista model.
To investigate the performance of our approach on the dataset with finer energy resolution, we followed the same training and testing steps with the corresponding architectures. In this experiment we focused on two architectures that are extended versions of CNN2 (which performed better than the others). Figure <ref> shows the results of the modified versions of SVD/CNN2 and CNN2, along with the Fista and 5-dimensional SVD results, for the D_2E,131 dataset. The average NMSE performance here is similar to that found for the D_2E dataset.
To further demonstrate this, the modelling performance of our approach was also tested using the dataset D_3E,0K (see Fig. <ref>) of 3-element mixtures without K-edges. For this dataset without K-edges, we again found that the SVD/autoencoder model no longer outperforms all other methods; in fact, the 5-dimensional CNN2 now performed slightly better in terms of the mean NMSE. Crucially, the 5-dimensional SVD and Fista models showed almost the same performance. Of interest here is also the fact that the 5-dimensional SVD does not work as well as the non-linear models, which is likely because the linear approximation used is not valid in energy ranges close to K-edges.
To see how the performance of our methods changes when the data has more materials with K-edges in their absorption spectra, we plot the NMSE of the datasets (D_3E,1K, D_3E,2K, D_3E,3K, and D_5E,5K) in Fig. <ref> for the SVD/CNN2, the 5-dimensional CNN2, the 5-dimensional SVD and Fista models. Whilst there is a decrease in the performance of the non-linear models if we increase the number of elements with K-edges, their relative performance is consistent, with Fista, SVD/CNN2 and CNN2 working better than the 5-dimensional SVD model in general.
We also trained our best architectures (SVD/CNN2 and CNN2), the 5-dimensional SVD and the Fista model with two different datasets of 5-element mixtures to see whether their performance depends on the training set. We here used D_5E and D_5E,0K. The training in this experiment is the same as in the previous ones; the only difference is the dataset used for training. After training, these architectures were tested with D_2E,2K and D_5E,5K. Figures <ref>a and <ref>b show the NMSE results of this experiment; our models (SVD/CNN2, CNN2 and Fista) still have lower errors than the traditional models.
§ DISCUSSION AND CONCLUSIONS
Accurate and precise modelling of the X-ray absorption spectrum of objects has been important for reducing image artefacts <cit.>, estimating material distributions within the object<cit.>, and constraining the ill-conditioned inverse problems <cit.> that arise in
several spectral imaging methods. In this paper, we considered non-linear models of the energy-dependent X-ray absorption spectrum for all possible materials. We introduced a novel non-linear model, consisting of a linear SVD step and a deep learning-based step, that accurately represents the LACs of K-edge-containing materials using a small number of parameters. Furthermore, we evaluated the performance of different deep learning architectures, traditional linear models, and a sparse model for various simulated objects.
As seen in Fig. <ref>, all complex architectures (except SVD/FCNN1 and FCNN1) and the Fista model have a lower approximation error than the best linear model (the 5-dimensional SVD). Crucially, the traditional linear model has almost the same error, about 5% higher than that of the SVD/FCNN1 and FCNN1 models. This primarily shows that a non-linear model is useful for modelling K-edges. Furthermore, this result suggests that deeper architectures should be preferred when designing deep learning models for this task. The last and most important result is that the SVD/CNN2 and CNN2 architectures showed the best performance compared to the other architectures in the experiment with the D_2E test dataset.
The results of the experiments with the D_2E,131 dataset show that our models retain the same sensitivity even at finer energy resolution. Interestingly, if objects sampled at this finer resolution have a K-edge in their absorption spectra, the 5-dimensional SVD approach cannot capture it. As can be seen in Fig. <ref>, the SVD model has a higher error (10%) than all other models. For computational efficiency, we did not conduct any further experiments with the 131 energy level dataset, even though our models achieved better performance there.
For objects whose K-edge in the X-ray absorption spectrum lies outside the considered energy range, there is some loss in the SVD/autoencoder approach, as can be seen in Fig. <ref>. The main reason for this is that we did not train the autoencoder part of the SVD/autoencoder model with non-K-edge materials: the non-linear step in this model is trained on the residual error (i.e. to model the K-edges), whilst the linear step is trained to model the non-K-edge X-ray absorption spectrum. Since the two steps used to model the non-K-edge X-ray absorption spectrum are not the same (one is non-linear and one is linear), this is likely to affect the performance of our approach. However, the errors of the SVD/autoencoder, the 5-dimensional autoencoder and the Fista model remain lower than those of the best linear model (error below 2% for CNN2, and below 4% for SVD/CNN2 and Fista). Interestingly, the best linear model has a higher error than the other models, even for objects that do not contain K-edges in the X-ray absorption spectrum. Although traditional models are used in the literature to model the X-ray absorption spectra of non-K-edge materials in the selected energy range, these results show that our models can also be used for these spectra.
All experiments with objects with various numbers of K-edges in the X-ray absorption spectrum suggest that our models can be more accurate than the traditional model.
Furthermore, the errors of the SVD/autoencoder and the 5-dimensional autoencoder models increase as the number of K-edges in the X-ray absorption spectrum increases, as seen in Figs. <ref>a-d. Interestingly, the errors of the Fista model stayed nearly the same across all experiments. The reason for this is that there is no training step in the Fista model (apart from fitting the sparsity parameter). Crucially, with the five-element dataset test (as shown in Fig. <ref>), we found that our models work better than the traditional model, even when trained with more complex datasets.
Our experimental results indicate that the SVD/autoencoder model has significant advantages in representing the X-ray absorption spectrum of high atomic number materials compared to the linear model. In addition, the 5-dimensional autoencoder method has been experimentally shown to work better than traditional linear methods for non-K-edge materials (low atomic number materials) and also for complex datasets. Whilst the Fista model did not show good performance for objects without a K-edge, it has good accuracy for objects that have K-edges. The overall utility of our approach lies in the fact that such low-dimensional representations of the X-ray absorption spectrum can be a valuable tool for analysing the information about the scanned material.
|
http://arxiv.org/abs/2307.07542v1 | 20230714142203 | Source-Free Domain Adaptation with Temporal Imputation for Time Series Data | [
"Mohamed Ragab",
"Emadeldeen Eldele",
"Min Wu",
"Chuan-Sheng Foo",
"Xiaoli Li",
"Zhenghua Chen"
] | eess.SP | [
"eess.SP",
"cs.AI",
"cs.LG"
] |
0000-0002-2138-4395
Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR)
1 Fusionopolis Way
Singapore
138632
Centre for Frontier AI Research, Agency for Science Technology and Research (A*STAR)
1 Fusionopolis Way
Singapore
138632
[email protected]
0000-0002-9282-0991
Nanyang Technological University
50 Nanyang Ave
Singapore
639798
Centre for Frontier AI Research, Agency for Science Technology and Research (A*STAR)
1 Fusionopolis Way
Singapore
138632
[email protected]
Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR)
1 Fusionopolis Way
Singapore
138632
[email protected]
Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR)
1 Fusionopolis Way
Singapore
138632
Centre for Frontier AI Research, Agency for Science Technology and Research (A*STAR)
1 Fusionopolis Way
Singapore
138632
[email protected]
Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR)
1 Fusionopolis Way
Singapore
138632
Centre for Frontier AI Research, Agency for Science Technology and Research (A*STAR)
1 Fusionopolis Way
Singapore
138632
[email protected]
Corresponding Author
Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR)
1 Fusionopolis Way
Singapore
138632
Centre for Frontier AI Research, Agency for Science Technology and Research (A*STAR)
1 Fusionopolis Way
Singapore
138632
[email protected]
Source-free domain adaptation (SFDA) aims to adapt a pretrained model from a labeled source domain to an unlabeled target domain without access to the source domain data, preserving source domain privacy. Despite its prevalence in visual applications, SFDA is largely unexplored in time series applications. The existing SFDA methods that are mainly designed for visual applications may fail to handle the temporal dynamics in time series, leading to impaired adaptation performance. To address this challenge, this paper presents a simple yet effective approach for source-free domain adaptation on time series data, namely MAsk and imPUte (MAPU). First, to capture temporal information of the source domain, our method performs random masking on the time series signals while leveraging a novel temporal imputer to recover the original signal from a masked version in the embedding space. Second, in the adaptation step, the imputer network is leveraged to guide the target model to produce target features that are temporally consistent with the source features. To this end, our MAPU can explicitly account for temporal dependency during the adaptation while avoiding the imputation in the noisy input space. Our method is the first to handle temporal consistency in SFDA for time series data and can be seamlessly equipped with other existing SFDA methods. Extensive experiments conducted on three real-world time series datasets demonstrate that our MAPU achieves significant performance gain over existing methods. Our code is available at <https://github.com/mohamedr002/MAPU_SFDA_TS>.
[500]Computing methodologies Learning under covariate shift
[500]Computing methodologies Transfer learning
[500]Mathematics of computing Time series analysis
Source-Free Domain Adaptation with Temporal Imputation for Time Series Data
Zhenghua Chen
August 12, 2023
===========================================================================
§ INTRODUCTION
Deep learning has achieved impressive performance in numerous time series applications, such as machine health monitoring, human activity recognition, and healthcare. However, this success heavily relies on the laborious annotation of large amounts of data. To address this issue, unsupervised domain adaptation (UDA) has gained traction as a way to leverage pre-labeled source data for training on unlabeled target data, while also addressing the distribution shift between the two domains <cit.>. There is a growing interest in applying UDA to time series data <cit.>, with existing methods seeking to minimize statistical distance across the source and target features <cit.> or using adversarial training to find domain invariant features <cit.>. However, these approaches require access to the source data during the adaptation process, which may not be always possible, due to data privacy regulations.
To address this limitation, a more practical setting, i.e., source-free domain adaptation (SFDA), has been proposed, where only a source-pretrained model is available during the adaptation process <cit.>. In recent years, several SFDA methods have been developed for visual applications <cit.>. One prevalent paradigm has incorporated some auxiliary tasks to exploit the characteristics of visual data to improve the source-free adaptation <cit.>.
However, all these methods are primarily designed for visual applications and may fail to handle the temporal dynamics of time series data.
In time series data, temporal dependency refers to the interdependence between values at different time points, which has a significant impact on predictions <cit.>. As demonstrated in Figure <ref>, even two signals with similar observations can lead to differing predictions if the temporal order is different. Such temporal dynamics make adapting the temporal information between two shifted domains a key challenge in unsupervised domain adaptation. The problem becomes even more challenging under source-free adaptation settings, where no access to the source data is provided during the target adaptation. Therefore, our key question is how to effectively adapt temporal information in time series data in the absence of the source data.
In this work, we address the above challenges and propose a novel SFDA approach, i.e., MAsk and imPUte (MAPU), for time series data. Our method trains an autoregressive model to capture the temporal information on the source domain, which is then transferred to the target domain for adaptation.
The key steps of our approach are illustrated in Figure <ref>. First, the input signal undergoes temporal masking. Both the masked signal and the original signal are then fed into an encoder network, which generates the corresponding feature representation. Subsequently, the temporal imputation network is trained to impute the original signal from the masked signal in the feature space, enabling smoother optimization for the temporal imputation task. During adaptation, the imputation network is used to guide the target model to generate target features that can be imputed by the source imputation network. Our method is versatile and can be integrated with the existing SFDA methods to provide them with temporal adaptation capability.
The main contributions of this work can be summarized as follows:
* To the best of our knowledge, we are the first to achieve the source-free domain adaptation for time series applications.
* We propose a novel temporal imputation task to ensure sequence consistency between the source and target domains.
* We propose a versatile methodology for integrating temporal adaptation capability into existing SFDA methods.
* We conduct extensive experiments and demonstrate that our approach results in a significant improvement in adaptation performance on real-world data, and is particularly effective for time series adaptation tasks.
§ RELATED WORK
§.§ Time series Domain Adaptation
Several methods have been proposed to address the challenge of distribution shift in time series data. These methods can be broadly categorized into two groups: discrepancy-based methods and adversarial-based methods. Discrepancy-based methods use statistical distances to align the feature representations of the source and target domains. For instance, AdvSKM leverages the maximum mean discrepancy (MMD) distance in combination with a hybrid spectral kernel to consider temporal dependencies during domain adaptation <cit.>. Another example is SASA, which learns the association structure of time series data to align the source and target domains <cit.>. On the contrary, adversarial-based methods use adversarial training to mitigate the distribution shift between the source and target domains. For instance, CoDATS utilizes a gradient reversal layer (GRL) for adversarial training with weak supervision on multi-source human activity recognition data <cit.>. Furthermore, DA_ATTN couples adversarial training with an un-shared attention mechanism to preserve the domain-specific information <cit.>. Recently, SLARDA presents an autoregressive adversarial training approach for aligning temporal dynamics across domains <cit.>.
Albeit promising, the design of these methods is based on the assumption that source data is available during the adaptation step. However, accessing source data may not be possible in practical situations due to privacy concerns or storage limitations. Differently, our MAPU adapts a model pretrained on source data to new domains without access to source data during adaptation, which can be a more practical solution for high-stake applications.
§.§ Source Free Domain Adaptation
Source-Free Domain Adaptation (SFDA) is a new problem-setting, where we do not have access to the source domain data during adaptation. This objective can be achieved in several ways. One approach is to leverage a model pretrained on the source domain to generate synthetic source-like data during the adaptation step <cit.>. Another approach is to use adversarial training between multiple classifiers to generalize well to the target classes <cit.>. Another prevalent approach uses softmax scores or their corresponding entropy to prioritize confident samples for pseudo-labeling, assuming that the model should be more confident on source samples and less confident on target samples <cit.>.
Despite the strong potential demonstrated by these methods, they are primarily designed for visual applications and may fail to effectively align temporal dynamics in time series data. In contrast, our method addresses this challenge through a novel temporal imputation task, ensuring temporal consistency between domains during the adaptation.
§ METHODOLOGY
§.§ Problem definition
Given a labeled source domain 𝒟_S = {X_S^i, y_S^i}_i=1^n_S, where X_S ∈𝒳_S can be a uni-variate or multi-variate time series data with a sequence length L, while y_S ∈𝒴_S represents the corresponding labels. In addition, we have an unlabeled target domain 𝒟_T= {X_T^i}_i=1^n_T, where X_T ∈𝒳_T, and it also shares the same label space with 𝒟_S. Following the existing UDA settings, we assume a difference across the marginal distributions, i.e., P(X_S) ≠ P(X_T), while the conditional distributions are stable, i.e., P(y_S|X_S) ≈ P(y_T|X_T).
This work aims to address the source-free domain adaptation problem, where access to source data is strictly prohibited during the adaptation phase to ensure data privacy. Furthermore, we adopt the vendor-client source-free paradigm <cit.>, which permits influencing how the source pretraining stage is designed. This assumption is realistic in use cases where there is a collaboration between various entities, but it is not possible to share source data due to data privacy, security, or regulatory issues.
§.§ Overview
We present our MAPU to achieve source-free adaptation on time series data while considering the temporal dependencies across domains. The pipeline of the proposed method is illustrated in Figure <ref>. Given the input signal and its temporally masked version, our method comprises two stages: (1) training an autoregressive network, referred to as the imputer network, which captures the temporal information of the source domain through a novel temporal imputation task, and (2) leveraging the source-pretrained imputer network to guide the target encoder towards producing temporally consistent target features in the adaptation stage. Next, we will first elaborate on the temporal masking procedure before delving into the details of each stage.
§.§ Temporal Masking
In this section, we explain our process of temporal masking. We start by dividing the input signal, X, into several blocks along the time dimension. Then, we randomly choose some of these blocks and set their values to zero, creating a masked version of the signal called X^'. This process is applied to both the source and target domains. Our aim is to challenge the model to use the information from surrounding blocks to fill in the missing parts and capture the temporal dependencies in the input signal. Further discussion on the impact of the masking ratio on the adaptation performance can be found in the experiment section.
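To make the masking step concrete, the following Python sketch shows one possible block-wise implementation with NumPy; the number of blocks, the masking ratio, and the (channels, length) array layout are illustrative assumptions rather than values fixed by the method.

import numpy as np

def temporal_mask(x, num_blocks=8, mask_ratio=0.125, rng=None):
    # x: array of shape (channels, length); returns a masked copy X'.
    rng = np.random.default_rng() if rng is None else rng
    channels, length = x.shape
    block_len = length // num_blocks
    n_masked = max(1, int(round(num_blocks * mask_ratio)))
    masked = x.copy()
    for b in rng.choice(num_blocks, size=n_masked, replace=False):
        start = b * block_len
        masked[:, start:start + block_len] = 0.0   # zero out the chosen block
    return masked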
§.§ Capturing Source Temporal Dynamics
In the pretraining stage, current methods typically map the source data from the input space to the feature space using an encoder network, represented as f_θ: 𝒳_S →ℋ_S. The extracted features are then passed through a classifier network, g_θ:ℋ_S →𝒴_S, to make class predictions for the source data.
However, to effectively adapt to other time series domains, it is important to consider the temporal relations in the source domain. Using only cross-entropy for training the source network may neglect this aspect. To address this, we propose a temporal imputation task that aims to recover the input signal from a temporally masked signal in the feature space.
The imputation task is performed by an imputer network j_θ that takes the masked signal and maps it to the original signal. The input signal X_S and masked signal X_S^' are first transformed into their corresponding feature representations H_S and H_S^' by the encoder f_θ. The task of the imputer network is represented as Ĥ_S^' = j_θ(f_θ(X_S^')) → H_S = f_θ(X_S), where Ĥ_S^' is the imputed signal. The imputer network is trained to minimize the mean square error between the features of the original signal and the imputed signal, which can be formulated as:
min_j_θℒ_mapu^S = 1/n_S∑_i=1^n_S‖ f_θ(X_S^i) - j_θ(f_θ(X_S^' i))‖_2^2,
where H_S^i = f_θ(X_S^i) are the latent features of the original signal, Ĥ_S^' i = j_θ(f_θ(X_S^' i)) is the output of the imputer network, and n_S is the total number of source samples.
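A PyTorch-style sketch of this source-side objective is given below. The single-layer RNN imputer mirrors the implementation details reported later, while the assumption that the encoder returns a feature sequence of shape (batch, steps, feat_dim) and the detaching of encoder features (so that the imputation loss does not propagate to the encoder, as described in the integration subsection below) reflect our reading of the method rather than a prescribed API.

import torch
import torch.nn as nn

class Imputer(nn.Module):
    # Autoregressive imputer j_theta: a single-layer RNN over the feature steps.
    def __init__(self, feat_dim=128, hidden_dim=128):
        super().__init__()
        self.rnn = nn.RNN(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def forward(self, h_masked):              # (batch, steps, feat_dim)
        states, _ = self.rnn(h_masked)
        return self.out(states)

def source_imputation_loss(encoder, imputer, x, x_masked):
    # MSE between features of the original signal and the imputed masked features.
    h = encoder(x).detach()                   # imputation target; no gradient to f_theta
    h_hat = imputer(encoder(x_masked).detach())
    return nn.functional.mse_loss(h_hat, h)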
§.§ Temporal Adaptation with Feature Imputation
In the adaptation stage, the goal is to train the target encoder network to produce target features temporally consistent with the source features. The target encoder network f_θ is used to extract latent feature representations from a target sample X_T and its masked version X_T^'. The fixed source-pretrained imputer network j_θ is then used to reconstruct the features of the original signal from the masked features. However, due to domain differences, the source imputer may not be able to accurately reconstruct the target features. Thus, the encoder network f_θ is updated to produce target features that can be accurately reconstructed by the imputer network. This can be expressed as the following optimization problem:
min_f_θℒ_mapu^T = 1/n_T∑_i=1^n_T‖ f_θ(X_T^i) - j_θ(f_θ(X_T^' i))‖_2^2,
where H_T= f_θ(X_T) are the original target features, Ĥ_T = j_θ(f_θ(X_T^')) are the imputed target features produced by the imputer network, and n_T is the total number of target samples. Notably, only the encoder network is optimized, producing features that can be accurately imputed by the fixed source-pretrained imputer network. To reduce the imputation loss, the adapted target features should be temporally consistent with the source features.
Algorithm 1 illustrates the adaptation procedure via temporal imputation. The process starts by first constructing a temporally masked version of the input target sample represented as X_T^'. Next, the source-pretrained encoder is used to extract the latent features of both the original signal and the temporally masked signal, represented as H_T and H_T^', respectively. Finally, the encoder network is updated to make features of the masked signal recoverable by the source-pretrained imputer network, using the mean square error loss in Equation 2.
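A minimal PyTorch-style sketch of one such adaptation update is shown below; the optimizer setup is illustrative, and the essential point is that the source-pretrained imputer stays frozen while gradients flow only into the target encoder.

import torch

def adaptation_step(encoder, imputer, x_t, x_t_masked, optimizer):
    # One update of the target encoder f_theta via the temporal imputation loss.
    for p in imputer.parameters():            # keep j_theta fixed
        p.requires_grad_(False)
    h_t = encoder(x_t)                        # features of the original target signal
    h_t_hat = imputer(encoder(x_t_masked))    # imputed features of the masked signal
    loss = torch.nn.functional.mse_loss(h_t_hat, h_t)
    optimizer.zero_grad()
    loss.backward()                           # gradients reach the encoder only
    optimizer.step()
    return loss.item()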
§.§ Integration with Other Source-free Methods
Our proposed MAPU is generic and can be integrated with other source-free adaptation methods. Typically, source-free adaptation involves a two-stage training procedure: (1) pretraining the source model with source domain data, and (2) adapting the pretrained model to the target domain. As shown in Figure <ref>, our MAPU can be seamlessly integrated into existing SFDA methods in both stages.
In the pretraining stage, MAPU operates in the feature space by training the temporal imputation network, j_θ, to capture the temporal information from the source domain. The loss associated with the temporal imputation task does not propagate to the encoder model, f_θ. As a result, the encoder can be trained exclusively with the conventional cross-entropy loss, ensuring that the imputation task does not negatively impact the pretraining performance. The total pretraining loss is formalized as:
min_f_θ, j_θℒ_S= -𝔼_(X_S,y_S) ∼𝒟_Sℒ_ce + ℒ_mapu^S,
where ℒ_ce = ∑_k=1^K 1_[y_S = k]log(p̂_k) is the log-likelihood of the true label, so that -𝔼 ℒ_ce is the standard cross-entropy loss; p̂_k denotes the predicted probability for class k, and ℒ_mapu^S represents the training loss of our temporal imputer network on the source data, which captures the source temporal information.
In the target adaptation step, the objective is to optimize the target encoder, f_θ, by balancing the temporal imputation loss and the generic source-free loss to achieve temporal consistency and perform adaptation on the target domain. This can be formalized as follows:
min_f_θ L_T = 𝔼_X_T ∼𝒟_Tℒ_sf + αℒ_mapu^T,
where α is a hyperparameter that regulates the relative importance of the temporal imputation task, and ℒ_sf represents the generic loss used by the SFDA method to adapt the target domain to the source domain.
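In code, one target update then combines the two terms as in the following sketch; sfda_loss_fn is a hypothetical placeholder for whatever objective the chosen base source-free method (e.g. SHOT) computes on the target batch.

import torch

def target_update(encoder, imputer, sfda_loss_fn, x_t, x_t_masked, optimizer, alpha=0.5):
    # Total target objective: generic source-free loss plus alpha times the imputation loss.
    h_t = encoder(x_t)
    h_t_hat = imputer(encoder(x_t_masked))
    l_mapu = torch.nn.functional.mse_loss(h_t_hat, h_t)
    l_sf = sfda_loss_fn(encoder, x_t)         # loss of the base SFDA method
    loss = l_sf + alpha * l_mapu
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()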
§ EXPERIMENTAL SETTINGS
§.§ Datasets
We evaluate our proposed method on three real-world datasets spanning three time series applications, i.e., machine fault diagnosis, human activity recognition, and sleep stage classification. The selected datasets differ in many aspects, as illustrated in Table <ref>, which leads to a considerable domain shift across different domains.
§.§.§ UCIHAR Dataset
This dataset focuses on human activity recognition tasks. Three types of sensors have been used to collect the data, i.e., an accelerometer, a gyroscope, and a body sensor, where each sensor provides three-dimensional readings, leading to a total of 9 channels per sample, with each sample containing 128 data points. The data is collected from 30 different users and each user is considered as one domain. In our experiments, five cross-user experiments are conducted, where the model is trained on one user and tested on different users to evaluate its cross-domain performance <cit.>.
§.§.§ Sleep Stage Classification (SSC) Dataset
The Sleep Stage Classification (SSC) task involves categorizing Electroencephalography (EEG) signals into five distinct stages, namely Wake (W), Non-Rapid Eye Movement stages (N1, N2, N3), and Rapid Eye Movement (REM). To accomplish this, we utilize the Sleep-EDF dataset <cit.>, which comprises EEG readings from 20 healthy subjects. In line with previous studies <cit.>, we select a single channel, specifically Fpz-Cz, and utilize 10 subjects to construct five cross-domain experiments.
§.§.§ Machine Fault Diagnosis (MFD) Dataset
This dataset has been collected by Paderborn University for the fault diagnosis application, where vibration signals are leveraged to identify different types of incipient faults. The data has been collected under four different working conditions. Each data sample consists of a single univariate channel with 5120 data points, following previous works <cit.>. In our experiments, each working condition is considered as one domain, and we utilize five different cross-condition scenarios to evaluate the domain adaptation performance.
More details about the datasets are included in Table <ref>.
§.§ Implementation Details
Encoder Design
In our study, we adopt the encoder architecture presented in existing works <cit.>, which is a 1-dimensional convolutional neural network composed of three layers with 64, 128, and 128 filters, respectively. Each convolutional layer is followed by a rectified linear unit (ReLU) activation and batch normalization.
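A sketch of such a backbone is shown below; the layer widths follow the description above, while the kernel size and the shape of the returned feature map are illustrative assumptions.

import torch.nn as nn

class CNNEncoder(nn.Module):
    # Three-block 1D CNN with 64, 128 and 128 filters, each followed by ReLU and batch norm.
    def __init__(self, in_channels, kernel_size=8):
        super().__init__()
        widths = [64, 128, 128]
        layers, prev = [], in_channels
        for w in widths:
            layers += [nn.Conv1d(prev, w, kernel_size, padding="same"),
                       nn.ReLU(), nn.BatchNorm1d(w)]
            prev = w
        self.net = nn.Sequential(*layers)

    def forward(self, x):                     # x: (batch, channels, length)
        return self.net(x).transpose(1, 2)    # (batch, steps, feat_dim=128)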
MAPU Parameters
For the purpose of temporal masking, a masking ratio of 1/8 is utilized across all datasets in our experiments. To perform the imputation task, a single-layer recurrent neural network with a hidden dimension of 128 is employed for all datasets. In addition, our method includes a primary hyperparameter, α, which is set to 0.5 for all datasets in our evaluation.
Unified Training Scheme
To provide a fair and valid comparison with source-free baseline methods, we adhered to their established implementations <cit.> while incorporating the same backbone network and training procedures utilized in our proposed method.
In accordance with the AdaTime framework <cit.>, all the models are trained for a total of 40 epochs, using a batch size of 32, with a learning rate of 1e-3 for UCIHAR and 1e-4 for SSC and MFD.
Also, the macro F1-score (MF1) metric <cit.> has been used to ensure a reliable evaluation under data imbalance situations, where we report the mean and the standard deviation of three consecutive runs for each cross-domain scenario.
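For reference, the macro F1-score is the unweighted mean of per-class F1 scores and can be computed with scikit-learn:

from sklearn.metrics import f1_score

def macro_f1(y_true, y_pred):
    # Unweighted average of per-class F1 scores; robust to class imbalance.
    return f1_score(y_true, y_pred, average="macro")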
§.§ Baseline Methods
To evaluate the performance of our model, we compare it against conventional UDA approaches that assume access to source data during adaptation. These baselines are adapted from the AdaTime benchmark <cit.>. Additionally, we compare our model against recent source-free domain adaptation methods. To ensure fair evaluation, we re-implement all source-free baselines in our framework, while ensuring the same backbone network and training schemes. Overall the compared methods are as follows:
Conventional UDA methods
* Deep Domain Confusion (DDC) <cit.>: leverages the MMD distance to align the source and target features.
* Deep Correlation Alignment (DCORAL) <cit.>: aligns the second-order statistics of the source and target distributions in order to effectively minimize the shift between the two domains.
* High-order Maximum Mean Discrepancy (HoMM) <cit.>: aligns the high-order moments to effectively tackle the discrepancy between the two domains.
* Minimum Discrepancy Estimation for Deep Domain Adaptation (MMDA) <cit.>: combines the MMD and correlation alignment with entropy minimization to effectively address the domain shift issue.
* Domain-Adversarial Training of Neural Networks (DANN) <cit.>: leverages gradient reversal layer to adversarially train a domain discriminator network against an encoder network.
* Conditional Domain Adversarial Network (CDAN) <cit.>: realizes a conditional adversarial alignment by integrating task-specific knowledge with the features during the alignment step for the different domains.
* Convolutional deep adaptation for time series (CoDATS) <cit.>: employs adversarial training with weak supervision to enhance the adaptation performance on time series data.
* Adversarial spectral kernel matching (AdvSKM)<cit.>: introduces adversarial spectral kernel matching to tackle the challenges of non-stationarity and non-monotonicity present in time series data.
Source-free methods
* Source Hypothesis Transfer (SHOT) <cit.>: minimizes information maximization loss with self-supervised pseudo labels to identify target features that can be compatible with the transferred source hypothesis.
* Exploiting the intrinsic neighborhood structure (NRC) <cit.>: captures the intrinsic structure of the target data by forming clear clusters and encouraging label consistency among data with high local affinity.
* Attracting and dispersing (AaD) <cit.>: optimizes an objective of prediction consistency by treating SFDA as an unsupervised clustering problem and encouraging local neighborhood features in feature space to have similar predictions.
§ RESULTS
In this section, we rigorously test our approach against state-of-the-art methods in various time series applications. We also assess the versatility of our method by combining it with different SFDA techniques. Furthermore, we compare the effectiveness of our task to other auxiliary tasks on time series data. Lastly, we examine our model's sensitivity to different importance weights and masking ratios. In our MAPU, we leverage SHOT as the base SFDA method. Nevertheless, our approach is not limited to SHOT and can be effectively integrated with other SFDA methods, as demonstrated in our versatility experiments.
§.§ Quantative Results
To assess the efficacy of our approach, we evaluate its performance on three different time series datasets, namely, UCIHAR, SSC, and MFD. Tables <ref>, <ref>, and <ref> present results for five cross-domain scenarios in each dataset, as well as an average performance across all scenarios (AVG). The algorithms are divided into two groups: conventional UDA methods, which require access to source data during adaptation, and source-free methods; the two groups are marked accordingly in the tables.
§.§.§ Evaluation on UCIHAR Dataset
The results presented in Table <ref> show the performance of our MAPU in five cross-subject scenarios. Our method demonstrates superior performance in three of the five scenarios, achieving an overall performance of 89.57%. This exceeds the second-best source-free method by 3%. Notably, the source-free methods (i.e., SHOT, NRC, and AaD) perform competitively with conventional unsupervised domain adaptation (UDA) methods that utilize source data. This can be attributed to the two-stage training (i.e., pertaining and adaptation) scheme employed in the source-free methods, which focuses on optimizing the target model for the target domain without considering source performance <cit.>. Furthermore, our MAPU, with its temporal adaptation capability, outperforms all conventional UDA methods, surpassing the best method (i.e., CDAN) by 2.78%.
§.§.§ Evaluation on SSC Dataset
The results of the sleep stage classification task, as presented in Table <ref>, demonstrate the superior performance of our proposed method, MAPU, over other baseline methods. Our MAPU performs best in three out of the five cross-domain scenarios, with an overall performance of 64.05%. This is higher than the best source-free method, SHOT, and the best conventional UDA method, with an improvement of 1.72% and 1.27% respectively. It is worth noting that source-free methods that rely on features clustering, i.e., NRC and AaD, perform poorly on the SSC dataset due to its class-imbalanced nature. However, our MAPU, with its temporal adaptation capability, is able to handle such imbalance and outperform all source-free methods with a maximum improvement of 4.8% in scenario 16 →1.
§.§.§ Evaluation on MFD Dataset
The results of the Machine Fault Diagnosis (MFD) task, presented in Table <ref>, showcase the superior performance of our MAPU when compared to all other baselines. With an average performance of 92.45%, MAPU exceeds the second-best method by a large margin of 7.85%. Additionally, MAPU significantly outperforms baseline methods in the hard transfer tasks (i.e., 0→1 and 1→0), reaching a 14.46% improvement in the latter scenario, while performing competitively with other baseline methods in the easy transfer tasks (i.e., 2→3 and 3→1). Compared to source-free methods, our MAPU achieves the best performance in all cross-domains, surpassing the second-best source-free method, AaD, by 11.87%.
It is worth noting that the performance improvement of our method is relatively large in the MFD dataset compared to other datasets. This is mainly attributed to two reasons. First, the MFD dataset has the longest sequence length among all other datasets, thus, the adaptation of temporal information is more prominent and necessary. Second, unlike other datasets, this dataset has a limited number of classes, i.e., 3 classes, and thus, failing to correctly classify one class can significantly harm the performance.
§.§ Ablation Study on Auxiliary Tasks
To demonstrate the effectiveness of our proposed temporal imputation auxiliary task, we conducted evaluations using various auxiliary tasks, including rotation prediction <cit.> and jigsaw puzzle <cit.>. We chose three different SFDA backbones, SHOT, NRC, and AaD, for the auxiliary tasks to eliminate the bias to a specific SFDA method. Table <ref> shows the average performance of five cross-domain scenarios for each dataset. The results show that our temporal imputation task consistently outperforms the other tasks across all datasets, even when combined with different SFDA backbones. Meanwhile, the baseline tasks, including rotation and jigsaw, not only exhibit limited improvement but also consistently harm the performance in many cases across various datasets. This indicates the inadequacy of these tasks for time series data and highlights the importance of considering temporal dynamics to the adaptation performance, as demonstrated by the superior performance of our MAPU approach.
§.§ Model Analysis
§.§.§ Versatility Analysis
This study investigates the effectiveness of incorporating temporal information into other SFDA methods. To achieve that, we evaluated the performance of three different SFDA methods when used in conjunction with our proposed temporal imputation task on the UCIHAR, SSC, and MFD datasets. Figure <ref> shows the average performance of five cross-domain scenarios in each dataset. Our results indicate a significant improvement in performance across all tested datasets through the integration of our temporal imputation task. For instance, on the UCIHAR dataset, the NRC and AaD methods each gain a notable boost of approximately 3% upon integration with our temporal imputation task. The improvements are consistent across the SSC and MFD datasets, demonstrating our approach's effectiveness in providing temporal adaptation capability to existing SFDA methods that were mainly proposed for visual applications.
§.§.§ Sensitivity Analysis
This study evaluates the sensitivity of our temporal imputation component to the relative weight α when integrated with other SFDA methods, as illustrated in Figure <ref>. The results indicate that our model's performance is relatively stable across a range of values for the α parameter. Particularly, the highest MF1 score achieved is 89.77, while the lowest is 87.75, a difference of only about 2%. This observed stability may be attributed to the imputation process being carried out in the feature space rather than the input space. As such, the feature space provides a more abstract representation of the data, making the imputation process less sensitive to the variations present in the input space.
§.§.§ Impact of Masking level
Here, we systematically examine the impact of the masking ratio on adaptation performance in the context of imputation tasks. Specifically, we employed three different masking ratios (12.5%, 25%, and 50%) and evaluated the performance on the three benchmark datasets. The results, shown in Figure <ref>, reveal a clear trend of improved performance with lower masking ratios. Notably, the best performance was achieved with a masking ratio of 12.5% across all datasets. These findings suggest that excessive masking may negatively impact the adaptation performance in the imputation task.
§ CONCLUSION
This paper introduced MAsk And imPUte (MAPU), a novel method for source-free domain adaptation on time series data. The proposed method addressed the challenge of temporal consistency in time series data by proposing a temporal imputation task to recover the original signal in the feature space rather than the input space. MAPU is the first method to explicitly account for temporal dependency in a source-free manner for time series data. The effectiveness of MAPU is demonstrated through extensive experiments on three real-world datasets, achieving significant gains over the existing methods. This work highlights the potential of MAPU in addressing the domain-shift problem while preserving data privacy in time series applications.
§ ACKNOWLEDGMENTS
This work was supported by the Agency of Science Technology and Research under its AME Programmatic (Grant No. A20H6b0151) and its Career Development Award (Grant No. C210112046).
|
http://arxiv.org/abs/2307.04224v1 | 20230709163518 | Reach of Segre-Veronese Manifolds | [
"Paul Breiding",
"Sarah Eggleston"
] | math.AG | [
"math.AG",
"math.DG"
] |
Reach of Segre–Veronese Manifolds
Paul Breiding, Sarah Eggleston
August 12, 2023
====================================================================================================================
We compute the reach, extremal curvature and volume of a tubular neighborhood for the Segre–Veronese variety intersected with the unit sphere.
Keywords. Tensors, Rank-One Tensors, Reach, Curvature, Tubes.
§ INTRODUCTION
In this paper we study the metric geometry of rank-one tensors. More specifically, we compute the reach and the volume of tubular neighborhoods of the Segre–Veronese variety, i.e., the variety of rank-one tensors in the space of partially symmetric tensors. Since rank-one tensors form a cone, we intersect the Segre–Veronese variety with the unit sphere, thus obtaining the (spherical) Segre–Veronese manifold; the proof that this is a manifold is provided below. We first describe the setting.
Let H_n,d denote the vector space of homogeneous polynomials in n+1 variables x_0,…, x_n of degree d.
We consider the Bombieri-Weyl inner product ⟨ , ⟩ on H_n,d: this is the inner product corresponding to the orthogonal basis vectors m_α := √(dα) x^α, where dα = d!/α_0!⋯α_n! is the multinomial coefficient for α = (α_0,…,α_n). The reason for this choice is that the Bombieri-Weyl inner product is invariant under an orthogonal change of coordinates; this was proved by Kostlan <cit.>.
The norm of a polynomial f∈ H_n,d is ‖ f‖ =√(⟨ f, f⟩), and the sphere is 𝕊(H_n,d):={ f∈ H_n,d|‖ f‖ =1}.
For n,d≥ 1, we denote the real variety of powers of linear forms in H_n,d by
𝕍̂_n,d := {± ℓ^d|ℓ is a linear form in x_0,…,x_n}.
The hat indicates that 𝕍̂_n,d is the cone over a spherical variety
𝕍_n,d := 𝕍̂_n,d∩𝕊(H_n,d),
which we call the (spherical) Veronese variety. Its dimension is dim 𝕍_n,d = n.
We fix r-tuples of positive integers d=(d_1,…,d_r) and n=(n_1,…,n_r) and write
ℋ_ n, d := H_n_1,d_1⊗⋯⊗ H_n_r,d_r.
The elements in ℋ_ n, d are called
partially symmetric tensors. They are multihomogeneous forms in r sets of variables. The number d=d_1+⋯ +d_r is called the total degree of the tensors in ℋ_ n, d.
For a tensor F = ∑_α_1,…,α_r F_α_1,…, α_r m_α_1⊗⋯⊗ m_α_r∈ℋ_ n, d, we use the short form F = (F_α_1,…, α_r).
The following defines an inner product on ℋ_ n, d:
⟨ F, G ⟩ := ∑_α_1,…,α_r F_α_1,…, α_r· G_α_1,…, α_r, where F = (F_α_1,…, α_r), G = (G_α_1,…, α_r)∈ℋ_ n, d.
With this, ℋ_ n, d becomes a Euclidean space, and we can measure volumes and distances in it. The norm of a tensor F∈ℋ_ n, d is ‖ F‖ := √(⟨ F, F⟩), and the angular distance is d_𝕊( F, G):= arccos⟨ F, G⟩ for F, G in the unit sphere 𝕊(ℋ_ n, d)⊂ℋ_ n, d.
The (spherical) Segre–Veronese variety in ℋ_ n, d is
𝕏_ n, d := { f_1⊗⋯⊗ f_r| f_i∈𝕍̂_n_i,d_i}∩𝕊(ℋ_ n, d).
This is the variety of products of powers of linear forms in ℋ_ n, d that have unit norm. Tensors in 𝕏_ n, d are also called decomposable, simple or rank-one tensors. We prove in Proposition <ref> that 𝕏_ n, d is an embedded smooth submanifold of 𝕊(ℋ_ n, d) of dimension dim 𝕏_ n, d =n_1+⋯+n_r; hence, we call 𝕏_ n, d a (spherical) Segre–Veronese manifold.
The main focus of this paper is the reach and the volume of a tubular neighborhood of 𝕏_ n, d. We briefly recall what these are:
the medial axis Med(S) of a subset S⊂𝕊(ℋ_ n, d) is the set of all points F∈𝕊(ℋ_ n, d) such that there exist at least two points G_1, G_2 ∈ S with d_𝕊( F, S) = d_𝕊( F, G_i), i=1,2. The reach of a subset S is its minimum distance to the medial axis:
τ(S):=inf_ F ∈ S d_𝕊( F, Med(S)).
This notion was first introduced by Federer <cit.>.
In our first main theorem we calculate the reach of the (spherical) Segre–Veronese manifold.
Let d=(d_1,…,d_r) and n=(n_1,…,n_r) be r-tuples of positive integers, and let d:=d_1+⋯+ d_r≥ 2 be the total degree. The reach of the (spherical) Segre–Veronese manifold is
τ(𝕏_ n, d)=
π/4, d≤ 5
√(d/2(d-1)), d>5.
In particular, the reach only depends on the total degree d and not on the dimensions of the Veronese varieties 𝕍_n_i,d_i.
This extends a theorem by Cazzaniga, Lerario and Rosana <cit.>, who proved this formula for the Veronese variety, which is the special case r=1. Another special case worth mentioning is d_1=⋯=d_r=1, which corresponds to the Segre variety.
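As a quick numerical illustration, the following Python sketch evaluates the two candidate quantities appearing in Proposition <ref> below (the inverse of the maximal curvature and the bottleneck width) and returns their minimum; it shows the crossover from π/4 to √(d/(2(d-1))) between d=5 and d=6.

import math

def reach_segre_veronese(d):
    # Reach as a function of the total degree d = d_1 + ... + d_r.
    rho_1 = math.sqrt(d / (2.0 * (d - 1)))    # inverse of the maximal curvature
    rho_2 = math.pi / 4.0                     # width of the smallest bottleneck
    return min(rho_1, rho_2)

# For d = 2,...,5 the bottleneck term pi/4 is the minimum; from d = 6 on, rho_1 takes over.
print([round(reach_segre_veronese(d), 4) for d in range(2, 8)])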
Since 𝕏_ n, d is a smooth submanifold of the sphere, its reach is the minimum of the inverse of its maximal curvature and its smallest bottleneck. We also compute these. The next theorem explains which curves in 𝕏_ n, d have maximal and minimal curvature; this is proved in Section <ref>.
Let the total degree of the (spherical) Segre–Veronese manifold be d=d_1+⋯+d_r≥ 2. Consider a geodesic
γ(t) = γ_1(t)⊗⋯⊗γ_r(t) ∈𝕏_ n, d.
* The maximum curvature of γ(t) is √(2(d-1)/d). It is attained by curves where the γ_i(t) are curves of constant speed ‖γ_i'(t)‖ = √(d_i/d).
* The minimal curvature is √(2(d_ℓ-1)/d_ℓ), where d_ℓ=min{d_1,…,d_r}.
It is attained by curves where γ_ℓ(t) is a geodesic parametrized by arc length in 𝕍_n_ℓ,d_ℓ and the other γ_i(t) are constant.
Our third main result concerns the volume of the tubular neighborhood
U(ε) := { F∈𝕊(ℋ_ n, d) | d_𝕊( F, 𝕏_ n, d) < ε}.
In Section <ref> we compute this volume in terms of perfect matchings in a weighted graph. For a tuple (m_1,…,m_r) of nonnegative integers let G=(V,E) be the complete graph on m:=m_1+⋯+m_r vertices. Recall that the tuple of degrees is d = (d_1,…, d_r). We define weights on E as follows: the vertices of G are partitioned into r groups V= ℐ_1⊔⋯⊔ℐ_r of cardinalities |ℐ_k| = m_k. The weight w(e) of an edge e between vertices in group ℐ_k is w(e)=d_k(d_k-1), and the weight of an edge between vertices in two different groups is w(e)=1. Given a perfect matching C⊂ E we define its weight to be w(C):=∏_e∈ C w(e).
This defines the function
D_ d(m_1,…,m_r) := (-1)^m/2∑_C⊂ E perfect matching w(C).
We now have the following result.
Let n = dim 𝕏_ n, d, N = dim 𝕊(ℋ_ n, d) and c=N-n. Define
J_i(ε) = ∫_0^ε (sinϕ)^N-n+2i· (cosϕ)^n-2i dϕ
and
a_i = Γ(c/2)/2^i Γ(i+c/2) ∑_m_1,…,m_r∈ℕ: m_i≤ n_i, m_1+⋯+m_r = 2i D_ d(m_1,…,m_r).
Then, for ε < τ(𝕏_ n, d) we have
vol(U(ε)) = √(d_1^n_1⋯ d_r^n_r)/2^r-1·vol(𝕊^n_1) ⋯vol(𝕊^n_r) ·vol(𝕊^c-1)·∑_0≤ 2i≤ n a_i· J_i(ε).
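The only combinatorial ingredient of the coefficients a_i is D_ d(m_1,…,m_r). The following brute-force Python sketch enumerates the perfect matchings of the weighted complete graph described above; it is exponential in m = m_1+⋯+m_r and intended only for the small block sizes that occur here.

from math import prod

def perfect_matchings(vertices):
    # Yield all perfect matchings of the given list of vertices as lists of pairs.
    if not vertices:
        yield []
        return
    v0, rest = vertices[0], vertices[1:]
    for idx, v1 in enumerate(rest):
        remaining = rest[:idx] + rest[idx + 1:]
        for matching in perfect_matchings(remaining):
            yield [(v0, v1)] + matching

def D(degrees, sizes):
    # D_d(m_1,...,m_r): signed sum of weighted perfect matchings.
    groups = [k for k, m_k in enumerate(sizes) for _ in range(m_k)]
    m = len(groups)
    if m % 2:
        return 0
    def weight(i, j):
        if groups[i] == groups[j]:
            d_k = degrees[groups[i]]
            return d_k * (d_k - 1)            # edge inside group I_k
        return 1                              # edge between different groups
    total = sum(prod(weight(i, j) for i, j in matching)
                for matching in perfect_matchings(list(range(m))))
    return (-1) ** (m // 2) * total

# Segre case d = (1,1,1,1) with block sizes (2,2,1,1); the example in Section 5 gives -10.
print(D([1, 1, 1, 1], [2, 2, 1, 1]))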
The proof of this theorem is based on computing the Weingarten map of 𝕏_ n, d, which we do in Theorem <ref>. We show that the Weingarten map of 𝕏_ n, d admits a block structure where the diagonal blocks are the Weingarten maps of the Veronese factors.
§.§ Acknowledgements
We thank Antonio Lerario and Andrea Rosana for helping in finding several references related to differential geometry and for carefully explaining their paper <cit.> to us. We also thank Jan Draisma for a discussion which led to Theorem <ref>.
§.§ Organization of the paper
In Section <ref> we discuss the differential geometry and curvature of manifolds defined by tensor products of vectors. We then apply the results from Section <ref> in Section <ref> to study the curvature of the (spherical) Segre–Veronese manifold 𝕏_ n, d. In particular, we work out the Weingarten map of 𝕏_ n, d. In Section <ref> we compute the reach and prove Theorems <ref> and <ref>. Finally, in Section <ref>, we compute the volume of the tubular neighborhood and prove Theorem <ref>.
§ TENSOR PRODUCTS OF RIEMANNIAN MANIFOLDS
The tensor space ℝ^m_1+1⊗⋯⊗ℝ^m_r+1 is a Euclidean space for the inner product defined by ⟨ x_1⊗⋯⊗ x_r , y_1⊗⋯⊗ y_r⟩ = ⟨ x_1, y_1⟩⋯⟨ x_r, y_r⟩, where ⟨ x, y⟩ = x^T y. Write N:=m_1+⋯+m_r+r-1; then 𝕊^N is the sphere in ℝ^m_1+1⊗⋯⊗ℝ^m_r+1.
We consider for 1≤ i≤ r a smooth embedded submanifold 𝕄_i of the sphere 𝕊^m_i⊂ℝ^m_i+1. We define the tensor product of these manifolds to be
𝕄_1⊗⋯⊗𝕄_r:= { x_1 ⊗⋯⊗ x_r | x_1∈𝕄_1,…, x_r∈𝕄_r}.
For 1≤ i≤ r let 𝕄_i be a smooth Riemannian submanifold of 𝕊^m_i of dimension n_i, and denote
𝕄:= 𝕄_1⊗⋯⊗𝕄_r.
Furthermore, denote the tensor product map by ψ: 𝕄_1×⋯×𝕄_r→𝕄, ( x_1,…, x_r)↦ x_1 ⊗⋯⊗ x_r.
Then:
* 𝕄 is a Riemannian submanifold of 𝕊^N of dimension n_1+⋯+n_r.
* The tangent space of 𝕄 at x= x_1 ⊗⋯⊗ x_r is
T_ x𝕄 = T_ x_1𝕄_1⊗ x_2 ⊗⋯⊗ x_r + ⋯ + x_1⊗ x_2 ⊗⋯⊗ T_ x_r𝕄_r.
* ψ is a local isometry.
For 1≤ i≤ r let 𝒜_i = (𝒰_i_j,φ_i_j)_j be an atlas for 𝕄_i, such that u∈𝒰_i_j implies that the antipodal point - u∉𝒰_i_j. Such an atlas exists, since 0∉𝕄_i. Define the open sets 𝒰_i_1,…,i_r := ψ(U_i_1×⋯× U_i_r); then ψ|_U_i_1×⋯× U_i_r is an isomorphism, so we have an atlas for 𝕄 with charts 𝒰_i_1,…,i_r
and maps (φ_i_1×…×φ_i_r) ∘ (ψ|_U_i_1×⋯× U_i_r)^-1. This also shows that dim 𝕄= dim (𝕄_1×⋯×𝕄_r)= n_1+⋯ +n_r. The Riemannian structure on the ambient space 𝕊^N induces a Riemannian structure on 𝕄.
For the second statement, we use that T_( x_1,…, x_r)𝕄 = T_ x_1𝕄_1×⋯× T_ x_r𝕄_r. For 1≤ i≤ r let v∈ T_ x_i𝕄_i.
By multilinearity, the derivative of ψ at ( x_1,…, x_r) maps
D_( x_1,…, x_r)ψ (0,…,0, v,0,…, 0) = x_1⊗⋯⊗ x_i-1⊗ v⊗ x_i+1⊗⋯⊗ x_r.
This proves the second statement, since T_ x𝕄 is the image of D_( x_1,…, x_r)ψ.
Finally, for v∈ T_ x_i𝕄_i and w∈ T_ x_j𝕄_j we have
⟨ x_1⊗⋯⊗ v⊗⋯⊗ x_r, x_1⊗⋯⊗ w⊗⋯⊗ x_r⟩ = ⟨ v, w⟩, i=j
⟨ v, x_i⟩ ⟨ w, x_j⟩, i≠ j
Since ⟨ v, x_i⟩ = ⟨ w, x_j⟩=0, this shows that the inner product between the images of (0,…,0, v,0,…, 0) and (0,…,0, w,0,…, 0) under D_( x_1,…, x_d) is ⟨ v, w⟩, if i=j, and 0 otherwise. This shows D_( x_1,…, x_r) preserves inner products on a basis of T_( x_1,…, x_r)𝕄 and hence is an orthogonal map. This proves the third statement.
Using the notation of Proposition <ref> we can now write
= 𝕍_n_1,d_1⊗⋯⊗𝕍_n_r,d_r.
Furthermore, Proposition <ref> implies that is a smooth submanifold of the sphere of dimension = n_1+⋯+n_r.
Therefore, we will henceforth call it the (spherical) Segre–Veronese manifold.
§.§ The second fundamental form of a tensor product manifold
Recall that the second fundamental form II_ x of a Riemannian submanifold 𝕄⊂𝕊^N at a point x∈𝕄 is the trilinear form
II_ x: T_ x𝕄× T_ x𝕄× N_ x𝕄→ℝ, ( v, w, a)↦⟨∂ v( u)/∂ w|_ u = x, a ⟩,
where v( u) is a (local) smooth tangent field of 𝕄 with v( x)= v.
For a fixed a∈ N_ x𝕄 the Weingarten map is the linear map
L_ a: T_ x𝕄→ T_ x𝕄,
such that II_ x ( v, w, a) = ⟨ v, L_ a( w)⟩.
The next proposition provides the Weingarten map for a tensor product of manifolds.
Let 𝕄_1,…, 𝕄_r be as in Proposition <ref> and 𝕄 = 𝕄_1⊗⋯⊗𝕄_r. Consider a point x = x_1⊗⋯⊗ x_r∈𝕄 and a normal vector a∈ N_ x𝕄. A matrix representation of the Weingarten map of 𝕄 at x in direction a relative to orthonormal coordinates is
L_ a = [ L_1 L_1,2 ⋯ L_1,r; (L_1,2)^T L_2 ⋯ L_1,r-1; ⋱ ; (L_1,r)^T (L_1,r-1)^T ⋯ L_r ]
,
where the matrices L_i,j and L_i are defined as follows: let v_1^(i),…, v_n_i^(i) be an orthonormal basis for the tangent space T_ x_i𝕄_i.
* The off-diagonal blocks are
L_i,j := [ ⟨ x_1⊗⋯⊗ v_k^(i)⊗⋯⊗ v_ℓ^(j)⊗⋯⊗ x_r, a⟩ ]_1≤ k≤ n_i, 1≤ℓ≤ n_j∈ℝ^n_i× n_j.
* Write R_i := x_1⊗⋯⊗ x_i-1⊗ N_ x_i𝕄_i ⊗ x_i+1⊗⋯⊗ x_r,
and let the orthogonal projection of a onto R_i be x_1⊗⋯⊗ x_i-1⊗ a_i ⊗ x_i+1⊗⋯⊗ x_r.
Then a_i∈ N_ x_i𝕄_i, and L_i∈ℝ^n_i× n_i is a matrix representation of the Weingarten map L_ a_i of 𝕄_i at x_i in direction a_i with respect to the orthonormal basis v_1^(i),…, v_n_i^(i).
By Proposition <ref>, x_1⊗⋯⊗ v_k^(i)⊗⋯⊗ x_r for 1≤ k≤ n_i and 1≤ i≤ r is an orthonormal basis of T_ x𝕄.
Fix tangent vectors v = x_1⊗⋯⊗ v_k^(i)⊗⋯⊗ x_r
and
w:= x_1⊗⋯⊗ v_ℓ^(j)⊗⋯⊗ x_r. Furthermore, let v_k^(i)( u_i) be a local smooth tangent field of 𝕄_i with v_k^(i)( x_i) = v_k^(i). Then, we obtain a local smooth tangent field of 𝕄 with v( x) = v by setting
v( u_1⊗⋯⊗ u_r) := u_1⊗⋯⊗ v_k^(i)( u_i)⊗⋯⊗ u_r.
By multilinearity,
∂ v( u)/∂ w =
x_1⊗⋯⊗∂ v_k^(i)( u_i)/∂ v_ℓ^(i)⊗⋯⊗ x_r, if i= j.
x_1⊗⋯⊗ v_k^(i)⊗⋯⊗ v_ℓ^(j)⊗⋯⊗ x_r, if i≠ j;
This shows that the off-diagonal blocks of L_ a are the matrices L_i,j.
For the diagonal blocks (i=j) we observe that x_1⊗⋯⊗∂ v_k^(i)( u_i)/∂ v_ℓ^(i)⊗⋯⊗ x_r∈ R_i, so
⟨ v, L_ a( w)⟩ = II_ x( v, w, a) = ⟨ x_1⊗⋯⊗∂ v_k^(i)( u_i)/∂ v_ℓ^(i)⊗⋯⊗ x_r, a⟩
= ⟨∂ v_k^(i)( u_i)/∂ v_ℓ^(i), a_i⟩= ⟨ v_ℓ^(i), L_ a_i( v_k^(i))⟩.
This settles the case i=j.
§ GEODESICS AND SECOND FUNDAMENTAL FORM OF SEGRE–VERONESE MANIFOLDS
Recall from (<ref>) that the (spherical) Segre–Veronese manifold is
= 𝕍_n_1,d_1⊗⋯⊗𝕍_n_r,d_r.
We now use the results from the previous section to compute geodesics, the second fundamental form and the Weingarten map for a (spherical) Segre–Veronese manifold 𝕏_ n, d. The first step towards this goal is considering the Veronese manifold (r=1).
§.§ Veronese manifolds
The Bombieri-Weyl inner product on the space of homogeneous polynomials H_n,d has the property that
⟨ f, ℓ^d⟩ = f(ℓ_0,…,ℓ_n), where ℓ( x) = ℓ_0x_0+⋯ + ℓ_nx_n;
that is, taking the inner product of f∈ H_n,d with ℓ^d∈𝕍_n,d evaluates f at the coeffcient vector of ℓ. One calls ( x, y)↦⟨ x, y⟩^d a reproducing kernel for H_n,d.
Recall that the scaled monomials m_α = √(dα) x^α form an orthonormal basis for the space of polynomials H_n,d. We first prove a lemma on the structure of the tangent space of the Veronese manifold.
Consider m_(d,0,…,0) = x_0^d∈𝕍_n,d. Then the monomials √(d) x_0^d-1x_k = m_(d-1,0,…,1,…,0) (with the 1 in the k-th slot), for 1≤ k≤ n, form an orthonormal basis for the tangent space T_x_0^d𝕍_n,d.
It follows from <cit.> that T_x_0^d𝕍_n,d is spanned by √(d) x_0^d-1x_k, 1≤ k≤ n. The fact that these monomials are orthonormal follows directly from the definition of the Bombieri-Weyl inner product.
We denote the two linear spaces from <cit.>:
P :=span{ m_α|α_0 < d-2} and
W:=span{ m_α|α_0 = d-2}.
The spaces P and W are orthogonal to each other. Lemma <ref> implies the following.
N_x_0^d = P⊕ W.
The next theorem follows from Equations (28) and (29) in <cit.>.
Let f∈ N_x_0^d = P⊕ W and L_ f be the Weingarten map of at x_0^d and f.
* If f∈ P, then L_ f = 0.
* If f ∈ W, then L_ f can be represented in orthonormal coordinates by the matrix
L_ f = √(d-1/d) [ √(2)· f_1,1 f_1,2 ⋯ f_1,n; f_2,1 √(2)· f_2,2 ⋯ f_2,n; ⋮ ⋮ ⋱ ⋮; f_n,1 f_n,2 ⋯ √(2)· f_n,n ],
where
f = ∑_1≤ i<j≤ n f_i,j√(d(d-1)) x_0^d-2x_ix_j + ∑_1≤ i≤ n f_i,i√(d(d-1)/2) x_0^d-2x_i^2.
Recall that a random symmetric n× n matrix L=(ℓ_i,j) has distribution L∼GOE(n), if ℓ_i,j∼ N(0,1/2) for i≠ j and ℓ_i,i∼ N(0,1) and all entries are independent (except for the symmetry condition). The probability density of L is proportional to exp(-Trace(L^TL)/2).
Let f∈ N_x_0^d and L_ f be the Weingarten map of at x_0^d and f. If f is Gaussian with respect to the Bombieri-Weyl metric then
L_ f∼√(2(d-1)/d) GOE(n).
§.§ Segre–Veronese manifold
We now turn to the Segre–Veronese manifold 𝕏_ n, d. We first show that 𝕏_ n, d is a homogeneous space. This allows us to compute geodesics and the second fundamental form at the distinguished point
E := x_0^d_1⊗⋯⊗ x_0^d_r∈𝕏_ n, d.
The Bombieri-Weyl inner product on has the property
⟨ f_1⊗⋯⊗ f_r, g_1⊗⋯⊗ g_r⟩ := ⟨ f_1, g_1⟩⋯⟨ f_r, g_r⟩.
Moreover, it is invariant under orthogonal change of variables; i.e., for orthogonal matrices Q_1∈ O(n_1+1),…, Q_r∈ O(n_r+1) we have
⟨ ( f_1∘ Q_1)⊗⋯⊗ ( f_r∘ Q_r), ( g_1∘ Q_1)⊗⋯⊗ ( g_r∘ Q_r)⟩ = ⟨ f_1⊗⋯⊗ f_r, g_1⊗⋯⊗ g_r⟩.
This invariance was proved by Kostlan in <cit.>.
We extend this linearly to an isometric action of O(n_1+1)×⋯× O(n_r+1) on . Furthermore, the orthogonal group O(n_i+1) acts transitively on 𝕍_n_i, d_i for every i. This implies the following.
is a homogeneous space under the isometric O(n_1+1)×⋯× O(n_r+1) action on restricted to .
Next, we provide an explicit description of geodesics in 𝕏_ n, d.
Let γ(t) be a geodesic in 𝕏_ n, d parametrized by arc length and passing through E. Up to the action by a tuple of orthogonal matrices in O(n_1+1)×⋯× O(n_r+1) the geodesic γ(t) has the form
γ(t) = γ_1(t) ⊗⋯⊗γ_r(t),
where
γ_i(t) = (cos(d_i^-1/2 a_i(t)) x_0 + sin(d_i^-1/2 a_i(t)) x_1)^d_i
and a_i(t),1≤ i≤ r, are smooth functions with a_1'(t)^2+⋯ +a_r'(t)^2 = 1.
Let γ(t) = γ_1(t) ⊗⋯⊗γ_r(t) be a curve through E. From Proposition <ref> (3) it follows that
‖γ'(t)‖^2 = ‖γ_1'(t)‖^2 + ⋯ + ‖γ_r'(t)‖^2.
Therefore, γ(t) is a geodesic parametrized by arc length if and only if
‖γ_1'(t)‖^2 + ⋯ + ‖γ_r'(t)‖^2=1.
After a rotation in every factor we can assume that γ_i(t) = (cos(w_i(t)) x_0 + sin(w_i(t)) x_1)^d_i where the w_i(t) are smooth functions; then
γ_i'(t) = d_i· (cos(w_i(t)) x_0 + sin(w_i(t)) x_1)^d_i-1· (-sin(w_i(t)) x_0+ cos(w_i(t)) x_1) · w_i'(t).
The norm of this polynomial is ‖γ_i'(t) ‖^2 = d_i · w_i'(t)^2. Setting a_i(t) := d_i^1/2 w_i(t) we see that γ(t) is a geodesic if and only if a_1'(t)^2 +⋯ + a_r'(t)^2 = 1.
For our formulation of the Weingarten map of 𝕏_ n, d we first need to obtain a certain decomposition of the normal space. We define the spaces
𝒲_i := span{ m_α_1⊗⋯⊗ m_α_r| (α_i)_0 = d_i-2, (α_k)_0 = d_k for k≠ i},
𝒢_i,j := span{ m_α_1⊗⋯⊗ m_α_r| (α_i)_0 = d_i-1, (α_j)_0 = d_j -1, (α_k)_0 = d_k , for k≠ i,j}
and set
𝒲 := ⊕_1≤ i ≤ r𝒲_i,
𝒢 := ⊕_1≤ i<j≤ r𝒢_i,j,
𝒫 := (𝒲⊕𝒢)^⊥∩ N_ E𝕏_ n, d.
The next result extends Lemma <ref> to the case r≥ 2.
If r≥ 2, the normal space has the orthogonal decomposition
N_ E𝕏_ n, d = 𝒫⊕𝒲⊕𝒢.
By the inner product rule for simple tensors (<ref>), and since the monomials m_α are orthogonal, the decomposition 𝒫⊕𝒲⊕𝒢 is an orthogonal decomposition.
Therefore, we only have to show that 𝒲,𝒢⊂ N_ E𝕏_ n, d.
It follows from Proposition <ref> (2) that
T_ E𝕏_ n, d = T_x_0^d_1𝕍_n_1,d_1⊗ x_0^d_2⊗⋯⊗ x_0^d_r + ⋯ + x_0^d_1⊗ x_0^d_2⊗⋯⊗ T_x_0^d_r𝕍_n_r,d_r.
Lemma <ref> implies that
T_x_0^d_ℓ𝕍_n_ℓ,d_ℓ = span{ m_α_1⊗⋯⊗ m_α_r| (α_ℓ)_0 = d_ℓ-1, (α_k)_0 = d_k , for k≠ℓ}.
The space 𝒲_i is spanned by simple tensors m_α_1⊗⋯⊗ m_α_r, such that the i-th factor m_α_i is orthogonal to both T_x_0^d_i𝕍_n_i,d_i and x_0^d_i. This already shows, using (<ref>), that 𝒲_i⊥ T_ E𝕏_ n, d. Consequently, 𝒲⊂ N_ E𝕏_ n, d.
The space 𝒢_i,j is spanned by
simple tensors m_α_1⊗⋯⊗ m_α_r, such that the i-th factor m_α_i is orthogonal to x_0^d_i and the j-th factor m_α_j is orthogonal to x_0^d_j. Since T_ E𝕏_ n, d is spanned by simple tensors that have at most one factor different than x_0^d_k, the inner product rule (<ref>) implies that 𝒢_i,j⊥ T_ E𝕏_ n, d for all i,j, hence 𝒢⊂ N_ E𝕏_ n, d.
Let us work out the decomposition from Lemma <ref> in the case of the Segre manifold. This is the case d = 1 = (1,…,1). Since H_n, 1≅ℝ^n, we can view elements in = H_n_1, 1⊗⋯⊗ H_n_r,1 as r-dimensional matrices F=(F_i_1,…,i_r), where 0≤ i_j≤ n_j for 1≤ j≤ r.
Figure <ref> shows an order-three tensor illustrating the case r=3. We have for the Segre manifold
T_ E𝕏_ n, 1 = {(F_i_1,…,i_r)| there is exactly one i_j greater than zero}
Moreover, 𝒲 = ∅ and
𝒢 = {(F_i_1,…,i_r)| there are exactly two i_js greater than zero},
𝒫 = {(F_i_1,…,i_r)| there are at least three i_js greater than zero}.
Figure <ref> shows the case for r=3 the tangent space of 𝕏_ n, 1 in red, 𝒢 in green and 𝒫 in blue.
We can now prove a theorem on the structure of the Weingarten map of 𝕏_ n, d.
Consider a normal vector F ∈ N_ E𝕏_ n, d = 𝒫⊕𝒲⊕𝒢 and L_ F be the Weingarten map of 𝕏_ n, d at E and F. Then, L_ F
is represented in orthonormal coordinates by the matrix
L_ F=
[ L_1 L_1,2 ⋯ L_1,r; (L_1,2)^T L_2 ⋯ L_1,r-1; ⋱ ; (L_1,r)^T (L_1,r-1)^T ⋯ L_r ]∈ℝ^n× n, n=,
defined as follows:
Let us write F = P + W + G, where P∈𝒫, W∈𝒲 and G ∈𝒢. Decompose further:
W = ∑_1≤ i≤ r W_i, G = ∑_1≤ i<j≤ r G_i,j,
where W_i = m_(d_1,0,…,0)⊗⋯⊗ m_(d_i-1,0,…,0)⊗ f_i⊗ m_(d_i+1,0,…,0)⋯⊗ m_(d_r,0,…,0)∈𝒲_i with
f_i = ∑_1≤ k<ℓ≤ n_i f_i, (k,ℓ) m_(d_i-2,0,…, k-th1,…,ℓ-th1,…,0) + ∑_1≤ k ≤ n_i f_i, (k,k) m_(d_i-2,0,…,k-th2,…,0),
and
G_i,j = ∑_k=1^n_i∑_ℓ = 1^n_j g_(i,j),(k,ℓ) m_(d_1,0,…,0)⊗…⊗ m_(d_i-1,0,…, k-th1, …, 0)⊗…⊗ m_(d_r,0,…, 0) .
Then,
L_i = √(d_i-1/d_i)[ √(2)· f_i,(1,1) f_i,(1,2) ⋯ f_i,(1,n); f_i,(2,1) √(2)· f_i,(2,2) ⋯ f_i,(2,n_i); ⋮ ⋮ ⋱ ⋮; f_i,(n_i,1) f_i,(n_i,2) ⋯ √(2)· f_i,(n_i,n_i) ]∈ℝ^n_i× n_i
and
L_i,j = [ g_(i,j),(1,1) g_(i,j),(1,2) ⋯ g_(i,j),(1,n_j); g_(i,j),(2,1) g_(i,j),(2,2) ⋯ g_(i,j),(2,n_j); ⋮ ⋮ ⋱ ⋮; g_(i,j),(n_i,1) g_(i,j),(n_i,2) ⋯ g_(i,j),(n_i,n_j) ]∈ℝ^n_i× n_j.
(In particular, L_ F depends on the components 𝒲 and 𝒢. If F∈𝒫, we have L_ F = 0.)
Proposition <ref> implies the block structure of L_ F. The structure of the diagonal blocks is given by Theorem <ref>. The structure of the off-diagonal blocks comes from the fact that m_(d_i-1,0⋯,k-th1,⋯,0)
for 1≤ k≤ n_i are an orthonormal basis of the tangent space T_x_0^d_i𝕍_n_i,d_i by Lemma <ref>.
An immediate corollary of Theorem <ref> comes next.
Let F = (F_α_1,…,α_r)∈ N_ E𝕏_ n, d and L_ F be the Weingarten map of 𝕏_ n, d at E in the normal direction F. If F is Gaussian with respect to the Bombieri-Weyl norm, then
L_ F∼[ L_1 L_1,2 ⋯ L_1,r; (L_1,2)^T L_2 ⋯ L_1,r-1; ⋱ ; (L_1,r)^T (L_1,r-1)^T ⋯ L_r ]
,
where
L_k∼√(d_k(d_k-1)/2) GOE(n_k) and L_i,j∼ N(0, I_n_i⊗ I_n_j),
and all blocks L_k, L_i,j, 1≤ k≤ r, 1≤ i<j≤ r, are independent.
[Weingarten map of the Segre manifold]
We consider again the case of the Segre manifold, where d = 1 = (1,…,1). In this case, the diagonal blocks of L_ F are all zero and the off-diagonal blocks are independent standard normal matrices.
For instance, when n_1 = n_2 = 2 and n_3 = n_4=1 we have two 2× 2 diagonal zero blocks and two 1× 1 diagonal zero blocks:
L_ F = [
[ 0 0 F_1100 F_1200 F_1010 F_1001; 0 0 F_2100 F_2200 F_2010 F_2001; F_1100 F_2100 0 0 F_0110 F_0101; F_1200 F_2200 0 0 F_0210 F_0201; F_1010 F_2010 F_0110 F_0210 0 F_0011; F_1001 F_2001 F_0101 F_0201 F_0011 0 ]],
where F = ∑_i=0^2∑_j=0^2∑_k=0^1∑_ℓ=0^1 F_ijkℓ x_i⊗ x_j⊗ x_k⊗ x_ℓ. We will revisit this in Example <ref> below.
§ REACH OF THE SEGRE–VERONESE MANIFOLD
We compute the reach τ(𝕏_ n, d) of the Segre–Veronese manifold. We adapt the strategy from <cit.> and calculate the reach as the minimum of two quantities:
τ(𝕏_ n, d) = min{ρ_1, ρ_2},
where ρ_1 is the inverse of the maximal curvature of a geodesic curve in 𝕏_ n, d that is parametrized by arc length:
1/ρ_1 = sup{‖ P_ E(γ”(0)) ‖|γ is a geodesic in 𝕏_ n, d parametrized by arc length};
here P_ E denotes the orthogonal projection onto N_ E𝕏_ n, d.
The other quantity, ρ_2, is the width of the smallest bottleneck:
ρ_2 = min{1/2 d_𝕊( F, E) | F∈𝕏_ n, d, F≠ E and F- E ∈ N_ E𝕏_ n, d⊕ℝ· E}.
The goal of this section is to prove the following proposition, giving formulas for both ρ_1 and ρ_2.
Let d=(d_1,…,d_r) and n=(n_1,…,n_r) be r-tuples of positive integers, and let d:=d_1+⋯+ d_r≥ 2. For the (spherical) Segre–Veronese manifold of total degree d, we have
* ρ_1 = √(d/2(d-1)).
* ρ_2 = π/4.
We prove Proposition <ref> (<ref>) in Section <ref> and (<ref>) in Section <ref>. Because the reach is the minimum of ρ_1 and ρ_2, this proves Theorem <ref>.
§.§ Extremal curvature of the Segre–Veronese manifold
Let γ(t) be a geodesic in 𝕏_ n, d parametrized by arc length. By orthogonal invariance (Lemma <ref>) we can assume that γ(0)= E.
As shown in Lemma <ref>, geodesics in 𝕏_ n, d parametrized by arc length that pass through E can, without loss of generality, be written as
γ(t) = γ_1(t) ⊗…⊗γ_r(t)
where
γ_i(t) = (cos(d_i^-1/2 a_i(t)) x_0 + sin(d_i^-1/2 a_i(t)) x_1)^d_i
and a_i(t),1≤ i≤ r, are smooth functions such that a_1'(t)^2+⋯ +a_r'(t)^2 = 1.
The first derivative of γ_i(t) at t=0 is
γ'_i(0) = a_i'(0) ·√(d_i) x_0^d_i-1x_1 = a_i'(0) · m_(d_i-1,1,0,…,0).
The second derivative is
γ”_i(0) = -a_i'(0)^2 x_0^d_i + a_i”(0) √(d_i) x_0^d_i-1 x_1 + a_i'(0)^2 (d_i-1) x_0^d_i-2 x_1^2
= -a_i'(0)^2 m_(d_i,0,…,0) + a_i”(0) m_(d_i-1,1,0,…,0) + a_i'(0)^2 √(2(d_i-1)/d_i) m_(d_i-2,2,0,…,0).
These formulas give the first and second derivatives of the factors of γ(t).
Next, we compute the derivative of the geodesic γ(t) itself. The first derivative is
γ'(t) = ∑_i=1^r γ_1(t) ⊗…⊗γ_i-1(t) ⊗γ'_i(t) ⊗γ_i+1(t) ⊗…⊗γ_r(t).
The second derivative is
γ”(t) = ∑_i=1^r γ_1(t) ⊗…⊗γ_i-1(t) ⊗γ”_i(t) ⊗γ_i+1(t) ⊗…⊗γ_r(t)
+ 2∑_1≤ i<j ≤ rγ_1(t) ⊗…⊗γ'_i(t) ⊗…⊗γ'_j(t) ⊗…⊗γ_r(t)
The second derivative γ_i”(0) is the sum of three terms. Tensoring the first term in (<ref>) with γ_j(0)=x_0^d_j, j≠ i, gives a multiple of E. Tensoring the second term in γ”_i(0) with γ_j(0)=x_0^d_j, j≠ i, gives a point a point in the tangent space T_ E𝕏_ n, d by Lemma <ref>. Therefore, projecting γ”(0) onto N_ E, we have
P_ E(γ”(0)) = W + G,
where W∈𝒲 and G∈𝒢 (see (<ref>) for the definition of these spaces) are given by
W = ∑_i=1^r a_i'(0)^2 √(2(d_i-1)/d_i) m_(d_1,0,…,0)⊗…⊗ m_(d_i-2,2,0,…,0)⊗…⊗ m_(d_r,0,…,0),
G = 2∑_1≤ i<j ≤ r a_i'(0) a_j'(0) m_(d_1,0,…,0)⊗…
⊗ m_(d_i-1,1,0,…,0)⊗…⊗ m_(d_j-1,1,0,…,0)⊗…⊗ m_(d_r,0,⋯,0).
Let us write
θ_i:=a_i'(0).
As 𝒲⊥𝒢 and the m_α form orthonormal bases, the magnitude of P_ E(γ”(0)) is
‖ P_ E(γ”(0)) ‖^2 = ‖ W ‖^2 + ‖ G ‖^2
= ∑_i=1^r θ_i^4 ·2(d_i-1)/d_i + 4∑_1≤ i < j≤ rθ_i^2 θ_j^2
= ∑_i=1^r θ_i^4 ·2(d_i-1)/d_i + 2∑_i=1^r θ_i^2 ∑_j ≠ iθ_j^2
= ∑_i=1^r ( θ_i^4 ·2(d_i-1)/d_i + 2 θ_i^2(1-θ_i^2) ), (because ∑_i=1^r θ_i^2=1)
= 2∑_i=1^r (θ_i^2 - θ_i^4/d_i) .
To maximize this expression under the constraint ∑_i=1^r θ_i^2=1 we consider the Lagrange function
ℒ(θ_1,…,θ_r,λ) := ∑_i=1^r (θ_i^2 - θ_i^4/d_i) - λ(1-∑_i=1^r θ_i^2) .
Setting the derivatives of ℒ to zero, we have
0 = ∂ℒ/∂θ_i = 2θ_i - 4/d_iθ_i^3 + 2λθ_i ⟹ θ_i = √(d_i(1+λ)/2) or θ_i = 0.
Let us first consider the case when the θ_i are not equal to zero. In this case, the equation ∑_i=1^r θ_i^2 = 1 implies
1 = ∑_i=1^r θ_i^2 = ∑_i=1^r d_i(1+λ)/2 = d(1+λ)/2 ,
where d=d_1+⋯ +d_r is the total degree. This shows λ = 2/d - 1, so that
θ_i = √(d_i/d).
Thus, in this case
‖ P_ E(γ”(0)) ‖ = √( 2 ∑_i=1^r d_i/d - d_i/d^2) = √(2(d-1)/d)
For the other critical values of (θ_1,…,θ_r) we get √(2(d'-1)/d'), where d' = ∑_i∈ I d_i is the total degree of a subset I⊂{1,…,r} of factors. Since x↦√(2(x-1)/x) is an increasing function for x≥ 1, this shows that √(2(d-1)/d) is indeed the maximal curvature. It also shows that √(2(d_ℓ-1)/d_ℓ) is the minimal curvature, where d_ℓ = min{d_1,…,d_r}.
We have shown above that √(2(d-1)/d) is the maximal curvature, and that √(2(d_ℓ-1)/d_ℓ), where d_ℓ = min{d_1,…,d_r}, is the minimal curvature. The curves that attain these curvatures are given by the critical values θ_i. To realize them we can choose constant speed curves defined by a_i(t) = θ_i· t.
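The constrained maximization above can be sanity-checked numerically: the sketch below samples random unit vectors (θ_1,…,θ_r), evaluates 2∑_i(θ_i^2 - θ_i^4/d_i), and compares the maximum with 2(d-1)/d; the choice of degrees is only for illustration.

import numpy as np

rng = np.random.default_rng(0)
degrees = np.array([2.0, 3.0, 1.0])           # illustrative factor degrees, total degree d = 6
d = degrees.sum()

# Random unit vectors theta with theta_1^2 + ... + theta_r^2 = 1.
theta = rng.standard_normal((200_000, len(degrees)))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)

# ||P_E(gamma''(0))||^2 = 2 * sum_i (theta_i^2 - theta_i^4 / d_i), as derived above.
values = 2.0 * np.sum(theta**2 - theta**4 / degrees, axis=1)
print(values.max(), 2.0 * (d - 1.0) / d)      # random-search maximum vs. the claimed 2(d-1)/d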
§.§ Bottlenecks of the Segre–Veronese manifold
We compute ρ_2, the width of the smallest bottleneck of the Segre–Veronese manifold.
Recall that ρ_2 is the minimum over the distances 1/2 d_𝕊( F, E) where F∈𝕏_ n, d with F≠ E and F- E∈ N_ E𝕏_ n, d⊕ℝ· E. The latter is equivalent to
⟨ F- E, G⟩ = 0 for all G∈ T_ E𝕏_ n, d.
We have
T_ E𝕏_ n, d = T_x_0^d_1𝕍_n_1,d_1⊗ x_0^d_2⊗⋯⊗ x_0^d_r + ⋯ + x_0^d_1⊗ x_0^d_2⊗⋯⊗ T_x_0^d_r𝕍_n_r,d_r
by Proposition <ref> (2).
We check that F- E is orthogonal to each summand in this decomposition:
let us write
F = ℓ_1^d_1⊗⋯⊗ℓ_r^d_r
and consider the inner product of F- E with elements from the first summand in the decomposition of T_ E𝕏_ n, d above. By Lemma <ref>
the monomials
x_0^d_1-1x_k, for 1≤ k≤ n_1, span the tangent space T_x_0^d_1𝕍_n_1,d_1.
For G = (x_0^d_1-1x_k)⊗ x_0^d_2⊗⋯⊗ x_0^d_r∈ T_x_0^d_1𝕍_n_1,d_1 we have that
⟨ F- E, G ⟩ = ⟨ F, (x_0^d_1-1x_k)⊗ x_0^d_2⊗⋯⊗ x_0^d_r⟩ by (<ref>)=⟨ℓ_1^d_1, x_0^d_1-1x_k⟩ ∏_i=2^r ⟨ℓ_i^d_i,x_0^d_i⟩
by (<ref>)=⟨ℓ_1, x_0⟩^d_1-1 ⟨ℓ_1,x_k⟩ ∏_i=2^r ⟨ℓ_i,x_0⟩^d_i.
This inner product is zero for every 1≤ k≤ n_1 if either ℓ_1=x_0 or ⟨ℓ_1,x_0⟩ =0.
We proceed similarly for the other summands in the decomposition of T_ E𝕏_ n, d.
Ultimately, we find that
⟨ F- E, G⟩ = 0 for all G∈ T_ E𝕏_ n, d if and only if either ℓ_1=⋯=ℓ_r=x_0 or there is at least one ℓ_i with ⟨ℓ_i, x_0⟩=0, in which case ⟨ F, E⟩ =0 by (<ref>). Since F≠ E, it must be that the latter holds.
Therefore, the bottlenecks of 𝕏_ n, d all have width arccos 0 = π/2, so ρ_2 = 1/2·π/2 = π/4.
§ VOLUME OF THE TUBULAR NEIGHBORHOOD
Recall from Theorem <ref> that the reach of the (spherical) Segre–Veronese manifold is τ(𝕏_ n, d)=π/4, if d≤ 5, and τ(𝕏_ n, d)=√(d/2(d-1)), if d>5.
In this section we prove Theorem <ref> by computing the volume of the tubular neighborhood for ε < τ(𝕏_ n, d)
U(ε) := { F∈𝕊(ℋ_ n, d) | d_𝕊( F, 𝕏_ n, d) < ε}.
The proof will be completed in Section <ref> below.
For the computation we use Weyl's tube formula <cit.>.
We denote
n = dim 𝕏_ n, d = n_1+⋯ + n_r, N = dim 𝕊(ℋ_ n, d),
and
J_i(ε) = ∫_0^ε (sinϕ)^N-n+2i· (cosϕ)^n-2i dϕ.
Then Weyl's tube formula implies that the volume of U(ε) is given as the following linear combination of the functions J_i:
vol(U(ε)) = ∑_0≤ 2i≤ nκ_2i· J_i(ε),
with coefficients
κ_2i = ∫_ G∈𝕏_ n, d(∫_ F ∈ N_ G : ‖ F‖ = 1 m_2i(L_ F) d F) d G,
where m_2i(L_ F) denotes the sum of the 2i-principal minors of the Weingarten map L_ F in normal direction F. The coefficients κ_2i are called curvature coefficients. They are isometric invariants of 𝕏_ n, d.
It follows from Lemma <ref> that the integral in the formula for κ_2i is independent of G, so that
κ_2i= vol(𝕏_ n, d) ∫_ F ∈ N_ E : ‖ F‖ = 1 m_2i(L_ F) d F,
where now the inner integral is over the sphere in the normal space of E=x_0^d_1⊗⋯⊗ x_0^d_r. The volume of the (spherical) Segre–Veronese manifold is computed next.
vol(𝕏_ n, d) = √(d_1^n_1⋯ d_r^n_r)/2^r-1·vol(𝕊^n_1) ⋯vol(𝕊^n_r).
In the case of 𝕏_ n, d the map ψ in Proposition <ref> is 2^r-1:1.
Proposition <ref> (3) therefore implies
vol(𝕏_ n, d) = 1/2^r-1·vol(𝕍_n_1,d_1) ⋯vol(𝕍_n_r,d_r).
Finally, vol(𝕍_n,d) =√(d^n)·vol(𝕊^n) (see, e.g., <cit.>).
The volume of the k-sphere is
vol(𝕊^k) = 2 π^(k+1)/2/Γ((k+1)/2).
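In code this is a one-liner using the Gamma function from the Python standard library:

from math import pi, gamma

def sphere_volume(k):
    # Volume of the unit k-sphere: 2 * pi^((k+1)/2) / Gamma((k+1)/2).
    return 2.0 * pi ** ((k + 1) / 2) / gamma((k + 1) / 2)

print(sphere_volume(1), sphere_volume(2))     # 2*pi and 4*pi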
The main task in computing the volume of U(ε) therefore is integrating the principal minors of the Weingarten map L_ F over the normal space. For this we pass from the uniform distribution on the sphere to the Gaussian distribution. Since L_λ· F = λ· L_ F for F ∈ N_ E and λ∈ℝ, we have
m_2i(L_λ· F)=λ^2i· m_2i(L_ F).
Suppose that F is a Gaussian vector in the normal space; that is, a random tensor in N_ E with probability density (2π)^-c/2exp(-‖ F‖^2/2). Then, the two random variables ‖ F ‖ and F/‖ F ‖ are independent. We define the scalars
λ_i := 𝔼_ F ∈ N_ E Gaussian ‖ F‖^2i.
Using (<ref>) we can then pass between the uniform distribution on the sphere and the Gaussian distribution as follows:
𝔼_ F ∈ N_ E Gaussian m_2i(L_ F)= λ_i·𝔼_ F ∈ N_ E uniform in the sphere m_2i(L_ F)
Since ‖ F‖^2 has χ_c^2-distribution with c= N_ E degrees of freedom, λ_i is the i-th moment of χ_c^2; i.e.,
λ_i = 2^i Γ(i+c/2)/Γ(c/2).
We have thus proved the following reformulation of (<ref>).
Let c= N_ E. Then
κ_2i= vol(𝕏_ n, d) ·vol(𝕊^c-1) · Γ(c/2)/2^i Γ(i+c/2)·𝔼_ F ∈ N_ E Gaussian m_2i(L_ F).
For computing the expectation of m_2i(L_ F) we can rely on Corollary <ref>. Recall that this corollary implies that if F is Gaussian, L_ F is a random symmetric matrix with independent blocks
L_ F∼[ L_1 ⋯ L_1,r; ⋱ ; (L_1,r)^T ⋯ L_r ], [ L_k∼√(d_k(d_k-1)/2) GOE(n_k),; L_i,j∼ N(0, I_n_i⊗ I_n_j) ]
.
In general it is difficult to evaluate the expected value of the minors of this random matrix. We make an attempt using graph theory in the next subsection.
§.§ Perfect matchings in graphs and random determinants
In this section we give a formula for m_2i(L_ F) when F is Gaussian using concepts from graph theory. In the following, the degrees d=(d_1,…,d_r) are fixed. Define the following random symmetric matrix with independent blocks:
L_ d(m_1,…,m_r) := [ L_1 ⋯ L_1,r; ⋱ ; (L_1,r)^T ⋯ L_r ], [ L_k∼√(d_k(d_k-1)/2) GOE(m_k),; L_i,j∼ N(0, I_m_i⊗ I_m_j) ].
This differs from (<ref>) in that we allow the sizes of the blocks to be arbitrary, not necessarily given by the dimension n_i = 𝕍_n_i,d_i.
We can write the expected principal minors of L_ F as
_ F ∈ N_ E Gaussian m_2i(L_ F) =
∑_m_1,…,m_r∈ℕ: m_i≤ n_i
m_1+⋯+m_r = 2i L_ d(m_1,…,m_r).
Recall the definition of D_ d(m_1,…,m_r) from (<ref>): for a tuple (m_1,…,m_r) of nonnegative integers let G=(V,E) be the complete graph on m:=m_1+⋯+m_r vertices. The vertices are partitioned into r groups V= ℐ_1⊔⋯⊔ℐ_r of cardinalities |ℐ_k| = m_k. The weight w(e) of an edge between vertices in group ℐ_k is w(e)=d_k(d_k-1). The weight of an edge across groups is 1. Given a perfect matching C⊂ E its weight is w(C):=∏_e∈ C w(e).
Then,
D_ d(m_1,…,m_r) = (-1)^m/2∑_C⊂ E perfect matching w(C)
The main goal of this section is to prove the following characterization of the function D_ d. In combination with (<ref>), Lemma <ref> and Lemma <ref> the next proposition completes the proof of Theorem <ref>.
Let (m_1,…,m_r) be nonnegative integers. Then,
D_ d(m_1,…,m_r) = 𝔼 det(L_ d(m_1,…,m_r)).
Recall from Example <ref> that the random matrix for the Segre manifold with n_1=n_2=2, n_3=n_4=1 and degrees 1 = (1,1,1,1) is
L_1(2,2,1,1) = [
[ 0 0 F_1100 F_1200 F_1010 F_1001; 0 0 F_2100 F_2200 F_2010 F_2001; F_1100 F_2100 0 0 F_0110 F_0101; F_1200 F_2200 0 0 F_0210 F_0201; F_1010 F_2010 F_0110 F_0210 0 F_0011; F_1001 F_2001 F_0101 F_0201 F_0011 0 ]],
where the entries are all i.i.d. standard Gaussian. We compute the expected determinant of this matrix using Theorem <ref>. The corresponding graph has n_1+n_2+n_3+n_4=6 vertices and four groups ℐ_1={1,2},ℐ_2={3,4},ℐ_3={5},ℐ_3={6}:
[Figure: the six vertices 1,…,6, colored by group, with the 13 weight-one edges drawn; the weight-zero edges inside the groups ℐ_1={1,2} and ℐ_2={3,4} are omitted.]
The edges within groups all have weight zero (they can be deleted). All other edges have weight one, so D_1(2,2,1,1)= 𝔼 det L_1(2,2,1,1) is given by the negative of the number of perfect matchings in this graph. We can match {1,2} with {3,4}. There are two possible such matches. Or we can match 1 with either 5 or 6, in which case we have to match 2 with either 3 or 4. There are 4 such matches. Finally, we can also match 2 with either 5 or 6, and by symmetry there are again 4 such matches. In total these are 10 matches, which together with the sign (-1)^m/2=(-1)^3=-1 shows that D_1(2,2,1,1)=-10.
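This worked example can also be verified by computer. The following Python script is our own check, not part of the paper: it enumerates the weighted perfect matchings of the graph above and, independently, estimates the expected determinant of L_1(2,2,1,1) by Monte Carlo; both computations give approximately -10.

import numpy as np

group = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 3}       # vertex -> group, as in the example
d = [1, 1, 1, 1]                                   # all degrees equal to one

def weight(u, v):
    # weight d_k(d_k-1) inside a group (zero here), weight 1 across groups
    return d[group[u]] * (d[group[u]] - 1) if group[u] == group[v] else 1

def perfect_matchings(vertices):
    if not vertices:
        yield []
        return
    u, rest = vertices[0], vertices[1:]
    for j, v in enumerate(rest):
        for m in perfect_matchings(rest[:j] + rest[j + 1:]):
            yield [(u, v)] + m

total_weight = sum(np.prod([weight(u, v) for u, v in m])
                   for m in perfect_matchings([1, 2, 3, 4, 5, 6]))
print((-1) ** 3 * total_weight)                    # prints -10

# Monte Carlo estimate of the expected determinant: the diagonal blocks vanish since
# d_k(d_k-1) = 0, and the off-diagonal blocks have i.i.d. standard Gaussian entries.
rng = np.random.default_rng(1)
sizes, idx = [2, 2, 1, 1], [0, 2, 4, 5, 6]
def sample_matrix():
    A = np.zeros((6, 6))
    for a in range(4):
        for b in range(a + 1, 4):
            B = rng.standard_normal((sizes[a], sizes[b]))
            A[idx[a]:idx[a + 1], idx[b]:idx[b + 1]] = B
            A[idx[b]:idx[b + 1], idx[a]:idx[a + 1]] = B.T
    return A
print(np.mean([np.linalg.det(sample_matrix()) for _ in range(100_000)]))   # approximately -10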
Let m:=m_1+⋯+m_r and write
L_ d(m_1,…,m_r) = (ℓ_i,j)_1≤ i,j≤ m.
Since the expectation is linear, Laplace expansion of the determinant yields
𝔼 det L_ d(m_1,…,m_r) = ∑_σ∈𝔖_m sgn(σ) 𝔼∏_i=1^m ℓ_i,σ(i),
where 𝔖_m is the symmetric group on m elements. The ℓ_i,σ(i) are all Gaussian with mean 𝔼ℓ_i,σ(i) = 0 and independent. This implies that the only terms whose expectation is not zero are those where all ℓ_i,σ(i) appear as a square. In other words, only those expectations are not zero where σ∈𝔖_m has the property that σ(i)≠ i for all i and σ(i)=j implies σ(j)=i. Such σ∈𝔖_m only exist when m is even.
If m is odd, we therefore have 𝔼 det L_ d(m_1,…,m_r)=0. Since for m odd, no perfect matchings can exist, we also have D_ d(m_1,…,m_r)=0.
If m is even, on the other hand, the σ∈𝔖_m with the above property are precisely products of m/2 transpositions, so that
𝔼 det L_ d(m_1,…,m_r) = (-1)^m/2∑_σ∈𝔖_m: σ is product of m/2 transpositions 𝔼∏_i=1^m ℓ_i,σ(i).
There is a 1:1 correspondence between products of m/2 transpositions and perfect matchings C⊂ E, where E is the set of edges in the complete graph G=(V,E) on m vertices. Let C={(i_1,i_2), (i_3,i_4),…, (i_m-1,i_m)} be the matching corresponding to σ; i.e., σ(i_j)=i_j+1. Then, using independence we get
𝔼∏_i=1^m ℓ_i,σ(i) = 𝔼(ℓ_i_1,i_2^2 ⋯ℓ_i_m-1,i_m^2) = 𝔼ℓ_i_1,i_2^2 ⋯𝔼ℓ_i_m-1,i_m^2 = σ_i_1,i_2^2⋯σ_i_m-1,i_m^2,
where σ_i_j,i_j+1^2 is the variance of ℓ_i_j,i_j+1. By definition of L_ d(m_1,…,m_r) the variance of the off-diagonal entries in the diagonal blocks is d_k(d_k-1), while the variance of the entries in the off-diagonal blocks of L_ d(m_1,…,m_r) is 1. That is:
σ_i_j,i_j+1^2 = { d_k(d_k-1), if i_j,i_j+1∈ℐ_k; 1, if i_j,i_j+1 are in different groups of vertices },
which shows that 𝔼∏_i=1^m ℓ_i,σ(i) = w(C), so D_ d(m_1,…,m_r)= 𝔼 det L_ d(m_1,…,m_r).
|
http://arxiv.org/abs/2307.04365v1 | 20230710064447 | One-Shot Pruning for Fast-adapting Pre-trained Models on Devices | [
"Haiyan Zhao",
"Guodong Long"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
University of Technology Sydney, Sydney, Australia
[email protected]
[email protected]
One-Shot Pruning for Fast-adapting Pre-trained Models on Devices
Haiyan Zhao Guodong Long
August 12, 2023
================================================================
Large-scale pre-trained models have been remarkably successful in resolving downstream tasks. Nonetheless, deploying these models on low-capability devices still requires an effective approach, such as model pruning. However, pruning the model from scratch can pose a practical challenge given the limited resources of each downstream task or device.
To tackle this issue, we present a scalable one-shot pruning method that leverages pruned knowledge of similar tasks to extract a sub-network from the pre-trained model for a new task. Specifically, we create a score mask using the pruned models of similar tasks to identify task-specific filters/nodes in the pre-trained model for the new task. Based on this mask, we conduct a single round of pruning to extract a suitably-sized sub-network that can quickly adapt to the new task with only a few training iterations.
Our experimental analysis demonstrates the effectiveness of the proposed method on the convolutional neural networks (CNNs) and vision transformers (ViT) with various datasets. The proposed method consistently outperforms popular pruning baseline methods in terms of accuracy and efficiency when dealing with diverse downstream tasks with different memory constraints.
§ INTRODUCTION
Large-scale pre-trained models have exhibited exceptional performance on a wide range of downstream tasks. For instance, CLIP <cit.> has surpassed the current state-of-the-art computer vision models on 27 downstream tasks, each having diverse distributions. However, these pre-trained models typically consist of millions of parameters, hindering their deployment on edge devices with limited memory and computation budgets.
Previous studies <cit.> have demonstrated that only a subset of the filters/nodes in a pre-trained model are crucial for the inference process of a given downstream task. To address this issue, model pruning presents an effective approach wherein unnecessary filters/nodes can be removed without compromising accuracy.
Conventional pruning methods in real-world applications often require repeated pruning of the pre-trained model to adapt to different downstream tasks and low-capability devices, resulting in a waste of computational power and time. Moreover, some devices may not have the capacity to prune large models from scratch due to memory and computation limitations. The question arises: Is it feasible to find a sparse sub-network within a pre-trained model that can quickly adapt to a new downstream task?
Recent studies <cit.> have shown evidence of the lottery ticket hypothesis (LTH), which states that training from a sparse sub-network in a randomly initialized model can achieve comparable performance to the original dense network. However, LTH cannot reduce the number of training iterations required. Furthermore, LTH focuses solely on unstructured weight pruning, which may not necessarily improve the efficiency of training and inference of the pruned model.
Tian et al. <cit.> developed a meta-model that is trained on hundreds of tasks to create a well-initialized pruned model, which can rapidly adapt to a new task within a few training iterations, thereby reducing computational costs. The meta-model is the same for all tasks.
However, in practical scenarios, it is common for a pre-trained model to produce pruned models for downstream tasks or devices with varying memory constraints.
Therefore, we propose to directly utilize prior knowledge from previous pruned models instead of training a new meta-model. For each downstream task, its pruned model retains only critical and task-specific filters/nodes from the pre-trained model.
We investigate the relationship between the pruned models of downstream tasks with different similarities. We observe that tasks with high similarities share more task-specific filters/nodes in their pruned models.
Based on this observation, this paper proposes a novel one-time pruning method called "Scalable Mask Selection Pruning (SMSP)", which is illustrated in Fig. <ref>. By learning from the pruned results of similar tasks, SMSP can create a mask to identify task-specific filters/nodes in the pre-trained model and prune the model once to extract a suitably sized sparse sub-network for a new task. SMSP is scalable because the created mask can be used to extract a sub-network of any pruning ratio from the pre-trained model to adapt to different devices. The sparse sub-network is then trained on the training data of the new task for a few iterations to quickly adapt to the new task. SMSP can significantly reduce the computation cost during pruning while maintaining the excellent performance of the pruned models. Extensive experiments have been conducted to evaluate the proposed method, which demonstrates that SMSP outperforms state-of-the-art pruning methods on CNN and ViT over several datasets. Furthermore, SMSP performs well when used to produce pruned models for tasks with different memory constraints and tasks from unseen datasets, which demonstrates the scalability and generality of SMSP.
§ RELATED WORKS
Model pruning is a highly effective technique for compressing deep neural networks.
Some existing works <cit.> apply iterative pruning approaches to reduce the model size by eliminating filters/nodes with small weights while minimizing the loss of accuracy.
Alternatively, methods like HRank <cit.> and APoZ <cit.> evaluate the importance of each filter based on its corresponding activation maps.
Another line of methods <cit.> maintains a mask for filters/nodes in the model to eliminate redundant parameters automatically.
And this dynamic pruning setting is also widely used in the pruning of the vision transformer.
Recent works <cit.> introduce learnable parameters to each attention head, node, layer, or block in the vision transformer to reduce the model's complexity.
The approach of Goyal et al. <cit.> is different from traditional parameter pruning as they dynamically prune input patches in each block of ViT, resulting in significant reductions in inference computation without compromising the model's performance. Meanwhile, Tang et al.<cit.> evaluate the importance of each patch in maintaining the original final results. However, these pruning methods require starting the pruning process from scratch, which is time-consuming. In contrast, our method leverages pruned knowledge of similar tasks to reduce the number of pruning iterations significantly.
Some other pruning methods aim to speed up the pruning process.
Cai et al. <cit.> propose a once-for-all network that supports diverse settings by decoupling training and neural architecture search, which reduces the cost and makes it scalable for efficient inference across many devices and resource constraints. However, the generated pruned models are all for one task and cannot be generalized to other tasks.
Tian et al.<cit.> proposed a meta method that trains a well-initialized pruned meta-model to quickly adapt to different few-shot tasks. However, this meta-model is the same for all tasks and cannot generalize to devices with varying memory constraints.
MEST<cit.>, which is designed for edge devices, starts training from a sparse sub-network to save training computation.
DLTH <cit.> is a variant of LTH and also starts from a well-designed sub-network. It claims that randomly extracted subnetworks from a randomly initialized dense network can be transformed into a well-performing sub-network that can achieve admirable performance compared to LTH. However, all these methods require a significant amount of time and computation to find the initialized sub-networks. In contrast, our proposed method can be applied to different downstream tasks, and it does not require any additional computation cost to extract a sub-network for each new task.
§ METHODOLOGY
In this section, we establish a model pool consisting of the pruned models obtained from hundreds of tasks on both CNN and ViT. These pruned models are extracted to retain the task-specific knowledge present in the pre-trained model for each task.
We observe that similar tasks tend to share more task-specific filters/nodes. Leveraging this observation, we propose a generic and scalable approach to reduce the computational cost of pruning for new tasks or devices.
§.§ Pool of Pruned Models from Different Tasks
A pruned model of the downstream task typically preserves filters/nodes that are indispensable for its inference in the pre-trained model.
In practice, a dataset of pruned models exists owing to the extensive utilization of large-scale models across various downstream tasks and devices.
In this paper, to emulate this situation, we construct a simplified dataset of pruned models for different tasks and devices using the same pre-trained models.
Automatic Mask Pruning (AMP).
Inspired by <cit.>, we propose automatic mask pruning (AMP) to automatically identify task-specific filters/nodes for different tasks in the pre-trained model.
Algorithm <ref> provides a detailed outline of the AMP process.
Specifically, given a pre-trained network F(·;Θ) with parameter Θ and a training set D_t of a new target task t, let Θ^t={θ^t_i}_i=1:n where θ^t_i denotes every filter/head/node-i in the network.
By adding a mask, we incorporate a learnable mask score S^t_i to each prunable filter/head/node-i in the pre-trained model.
We define an operator ⊙ applied to Θ^t and its associated scores S^t as
(Θ^t⊙ S^t)[i]≜Θ^t[i] · S^t[i]
During the pruning process, these differentiable scores are optimized along with model parameters. To encourage sparsity, an additional L1 regularization loss is applied and filters/nodes with scores below a predefined threshold will be pruned.
The final objective function of AMP is defined as follows:
min_{S^t_i}_i=1:n 𝔼_(x,y)∼ D_t l(y, F(x;Θ^t⊙ S^t))+ λ‖ S^t‖_1
where y represents the ground truth for x, l denotes the cross-entropy loss, and λ is the weight used to balance between the two losses.
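For concreteness, a minimal PyTorch-style sketch of this objective is given below. It is our own illustration rather than the authors' implementation; names such as MaskedBackbone, mask_scores, lam and threshold, as well as the assumption that the backbone exposes a list of blocks whose outputs have one channel per prunable filter, are ours.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedBackbone(nn.Module):
    """Attaches one learnable mask score to every prunable filter of a pre-trained backbone."""
    def __init__(self, backbone, filters_per_block, freeze_weights=True):
        super().__init__()
        self.backbone = backbone
        if freeze_weights:                       # variant used to build the pool: only scores are trained
            for p in self.backbone.parameters():
                p.requires_grad = False
        self.mask_scores = nn.ParameterList(
            [nn.Parameter(torch.ones(n)) for n in filters_per_block])

    def forward(self, x):
        # Assumption: backbone.blocks is a ModuleList of conv blocks whose outputs
        # have one channel per prunable filter, so each score scales one channel.
        for block, s in zip(self.backbone.blocks, self.mask_scores):
            x = block(x) * s.view(1, -1, 1, 1)   # Theta^t ⊙ S^t
        return self.backbone.head(x)

def amp_step(model, x, y, optimizer, lam=1e-4):
    """One AMP update: cross-entropy plus L1 sparsity on the mask scores."""
    loss = F.cross_entropy(model(x), y)
    l1 = sum(s.abs().sum() for s in model.mask_scores)
    optimizer.zero_grad()
    (loss + lam * l1).backward()
    optimizer.step()

def keep_masks(model, threshold=1e-2):
    """Filters whose mask score stays below the threshold are pruned."""
    return [s.detach().abs() >= threshold for s in model.mask_scores]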
We apply AMP to prune two major categories of pre-trained models, i.e., CNN and ViT, for diverse tasks with different memory constraints.
Specifically, we select ResNet-18(ResNet-50)<cit.> pre-trained on CIFAR-100<cit.>(ImageNet <cit.>) for CNN, and apply AMP to multiply the mask score to each filter in the network.
For ViT, we use DeiT-S <cit.> pre-trained on ImageNet.
As reported in previous work <cit.>, only some attention heads in deep pre-trained transformers are necessary for downstream tasks. Therefore, AMP is used to prune ViT at two levels: heads in the multi-head attention modules and nodes in the feed-forward network modules.
In the pool of pruned models, tasks for ResNet-18, ResNet-50, and ViT are randomly sampled from classes in CIFAR-100 and ImageNet datasets, respectively.
To verify the generality and scalability of our proposed method, we collect the pruned models of diverse tasks,
which can be divided into 3 groups: 3-classes, 5-classes and 10-classes classification tasks, each containing 300 tasks.
To emulate the memory limitations of various devices, we store pruned models with varying pruning ratios for each task in our model pool.
Due to the high memory costs of storing each pruned model, we have modified the AMP algorithm such that only mask scores are optimized with regularization, while all pre-trained model parameters remain fixed.
This modification facilitates accurate masking for each task to identify task-specific knowledge in the pre-trained model.
As all tasks can share the same pre-trained model during inference, we only record the class labels C^t and the mask S^t for each task t. The mask scores of pruned filters/nodes are then set to 0.
§.§ Knowledge Shared between Tasks
In the realm of multi-task/lifelong learning methods, similar tasks usually share more parameters in the network. In this section, we study the overlap of pruned models for similar tasks to verify whether more similar downstream tasks share more parameters in the pre-trained model.
To compute the similarity between downstream tasks, we apply the Log Expected Empirical Prediction (LEEP) <cit.>, which is used to evaluate the transferability of representations learned by the source task to the target task. This method only requires running the target task's data through the pruned model once to compute the LEEP score.
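As a reference for how such a score can be obtained, the sketch below follows the standard definition of LEEP; it is our own code, not the authors': the pruned model's softmax outputs on the target data play the role of dummy source-label distributions.

import numpy as np

def leep_score(probs, labels, num_target_classes):
    """probs: (n, Z) softmax outputs of the (pruned) source model on the n target samples;
    labels: (n,) integer target labels."""
    n, Z = probs.shape
    joint = np.zeros((num_target_classes, Z))
    for p, y in zip(probs, labels):
        joint[y] += p / n                                   # empirical joint P(y, z)
    conditional = joint / joint.sum(axis=0, keepdims=True)  # empirical P(y | z)
    eep = (probs * conditional[labels]).sum(axis=1)         # expected empirical prediction of the true label
    return float(np.log(eep).mean())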
Overlap of task-specific filters/nodes.
Upon applying AMP to a new task, filters or nodes that have small mask scores will be pruned, whereas those with high mask scores, which contain task-specific knowledge relevant to the downstream task, can be retained in the model.
So we focus on the overlap of these high-score filters/nodes between tasks.
Given the pruned model of a task m, the set of filters/nodes Ω^m retained in the pre-trained model are sorted according to their mask scores {S^m_i}_i∈Ω^m in the descending order.
Ω^m_k denotes the filters/nodes with top-k mask score values in the mask of task m.
For each pair of tasks, say task m and task n (using the same pre-trained model), we compute the overlap ratio R of filters/nodes with top-k score values in their masks, i.e., R = |Ω^m_k ∩Ω^n_k|/k.
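In code, this ratio is a short computation over the stored score vectors (our sketch; argsort simply picks the k largest mask scores of each task):

import numpy as np

def top_k_overlap(scores_m, scores_n, k):
    top_m = set(np.argsort(scores_m)[-k:])    # Omega^m_k: indices of the k largest scores of task m
    top_n = set(np.argsort(scores_n)[-k:])    # Omega^n_k
    return len(top_m & top_n) / k             # R = |Omega^m_k ∩ Omega^n_k| / k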
In Fig. <ref>, we present the overlap ratio of retained filters/nodes in various pre-trained models for tasks with varying degrees of similarity.
The x-axis of Fig. <ref> represents the top-k filters/heads/nodes with the highest mask scores in the pruned model, while the y-axis represents the overlap ratio of top-k filters in the pruned models of two similar tasks.
Given a new task, we calculate its LEEP similarities to the existing tasks in the model pool. Then we sort these LEEP similarities and partition them into three groups of equal intervals. Existing tasks whose similarity scores fall into a specific interval will be assigned to the corresponding similarity group. From similarity group 1 to group 3 in Fig. <ref>, the similarities between tasks decrease.
We observed from all three plots in Fig. <ref> that the overlap ratios of tasks belonging to similarity group 1 are considerably greater than those of tasks in similarity group 3. This indicates that the pruned models of more similar tasks share a significantly higher number of task-specific filters/heads/nodes. Hence, the pruned models of previous similar tasks can be utilized to identify task-specific parameters in the pre-trained model, expediting the pruning of the new task.
On the other hand, as the value of k increases, the overlap ratios in three plots grow gradually. This can be attributed to the fact that certain filters/heads/nodes with high mask scores in one task may be retained by another task with smaller scores. These filters/nodes have varying importance for different tasks and may serve distinct roles. In plot (c), we observe that the overlap ratios begin to converge when k exceeds 6. This is due to the fact that only a small number of heads (approximately 8) are preserved in the pruned model of each task.
§.§ Scalable Mask Selection Pruning (SMSP)
Inspired by the above discovery, we propose a generic and simple method called “Scalable Mask Selection Pruning (SMSP)" to fast-adapt the pre-trained model to downstream tasks.
The process of generating a mask for each new task is illustrated in Figure <ref>.
SMSP leverages the knowledge of pruned models for similar tasks to create a pruning mask of the pre-trained model for a new task. The detailed process of SMSP is shown in Alg. <ref>.
Specifically, given a new task t, SMSP first calculates its LEEP similarities <cit.> to tasks in the pool and samples M similar neighbor tasks M^t.
The mask scores S^t of task t are computed by summing the mask scores of all selected similar tasks, as shown below:
S^t_i = ∑_m=1^M S^m_i, i=1,…,n
Here, n represents the total number of filters/heads/nodes in the model, and M represents the total number of selected similar tasks.
As filters/nodes with high scores in S^t have been shown to play essential roles in similar tasks, it is likely that they contain task-specific knowledge relevant to the new target task t.
We sort the mask score of task t in descending order. Given any pruning ratio r, SMSP prunes r*n filters with the smallest mask scores once to meet the memory constraint.
The training objective of SMSP is:
min 𝔼_(x,y)∼ D_t l(y, F(x;{θ^t_i: i∈Ω}))
where θ^t_i: i∈Ω represents filters/nodes retained after pruning.
In the retained sub-network, the mask is removed, and all the parameters are inherited from the original pre-trained model.
SMSP trains the sub-network on the new target task's data for only a few iterations to speed up pruning.
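A compact sketch of this selection step is shown below. It is our own illustration; the function name and the storage of masks as flat per-filter score vectors are assumptions, not the authors' code.

import numpy as np

def smsp_keep_mask(similar_task_scores, pruning_ratio):
    """similar_task_scores: list of the per-filter score vectors S^m of the M selected
    similar tasks (scores of filters pruned in task m are already 0).
    Returns a boolean keep mask over the pre-trained model's filters/heads/nodes."""
    scores = np.sum(similar_task_scores, axis=0)      # S^t_i = sum_m S^m_i
    n = scores.size
    num_pruned = int(round(pruning_ratio * n))        # prune r*n filters in one shot
    keep = np.ones(n, dtype=bool)
    keep[np.argsort(scores)[:num_pruned]] = False     # drop the filters with the smallest summed scores
    return keep

# The retained sub-network inherits its weights from the pre-trained model (the mask itself
# is discarded) and is fine-tuned on the new task's data for only a few iterations.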
§ EXPERIMENTS
In this section, we evaluate SMSP by pruning ResNet and ViT for downstream tasks from several datasets and compare its results with SOTA pruning methods. We validate the scalability and generality of SMSP by generating pruned models for tasks with different memory constraints. Finally, we study the effect of the mask, the number of similar tasks and task similarities on SMSP.
§.§ Experimental Settings
For each experiment scenario, we randomly sample 50 test tasks from the dataset. Each test task selects its similar tasks from the pool of pruned models according to their LEEP similarities. To make our study more solid, classes in selected similar tasks are disjoint from those in the test task so that their training data are totally different.
In our experiments, we conduct a grid search on a small subset of test tasks to tune the hyperparameters, which are then applied to all tasks. When applying SMSP to prune ResNet, we utilize SGD to train the sub-network and apply cosine annealing learning rate. The batch size is set to 128, and the initial learning rate is set to 0.01.
For experiments of ViT, we follow previous works<cit.> and use the optimizer of AdamW with the cosine-annealing learning rate. During training, we use a batch size of 256 and a smaller initial learning rate of 0.0002.
All results shown in this section are averaged over 50 test tasks.
§.§ Comparison with SOTA Methods
We compare our method with several SOTA pruning methods. To demonstrate our method's effectiveness, we compare it with AMP, a conventional pruning method that prunes the pre-trained model from scratch using a large number of pruning iterations.
For tasks on ResNet, we also include two popular pruning methods as baselines: Feature Pruning <cit.> and Taylor Pruning <cit.>. Feature Pruning calculates the importance of filters by averaging the activation values over all training samples, while Taylor Pruning measures the impact of removing each filter on the final loss to determine their importance.
We also compare our method with some popular methods that accelerate pruning. For example, IHT-based Reptile <cit.> learns a well-initialized pruned meta-model on a set of training tasks. For each new task, it can obtain the final pruned model by training the meta-model for a few iterations. DLTH <cit.> is a variant of LTH, which extracts a winning ticket for each task. MEST <cit.> can accelerate pruning by training from a sparse sub-network.
For pruning ViT, we compare SMSP with PoWER <cit.>, which proposes to dynamically prune the input patches of each block in ViT, and UVC <cit.>, which not only prunes heads and nodes but also unimportant layers and blocks in the model.
The results of comparing SMSP with the baseline methods for ResNet and ViT are presented in Tab. <ref> and Tab. <ref>, respectively. All results are obtained by pruning 5-classes classification tasks with a pruning ratio of 90%. The findings indicate that, for both ResNet and ViT, SMSP performs slightly better than AMP, which requires significantly more pruning iterations.
Although Feature Pruning and Taylor Pruning also yield similar or slightly better results than SMSP for ResNet-18 and ResNet-50, they demand significantly more computational resources than SMSP.
Moreover, SMSP surpasses IHT-based Reptile by a large margin, despite the fact that both approaches leverage knowledge from multiple tasks. Unlike IHT-based Reptile, which employs the same pruned meta-model for each new task, SMSP extracts different sub-networks for different tasks, composed of task-specific parameters, which can enhance performance.
Furthermore, the performance of SMSP outperforms DLTH and MEST, which, like SMSP, start with a well-designed sub-network. However, neither DLTH nor MEST has task-specific knowledge in their initialized pruned model, while SMSP initializes the sub-network by leveraging knowledge from similar tasks.
The outcomes presented in Tab. <ref> demonstrate that SMSP significantly outperforms baseline methods for ViT. Owing to a relatively low number of training iterations, neither UVC nor PoWER can recover the accuracy when a considerable number of parameters or patches are eliminated. Conversely, SMSP leverages a sub-network created by similar tasks as an initialization, hence, only a few training iterations are necessary to construct a well-performing pruned model.
§.§ Evaluation of Scalability and Generality
Our proposed SMSP is scalable in two respects. 1) SMSP can produce a promising pruned model for a new task of any memory constraint with a few training iterations. 2) All pruned models of tasks with varying data distribution and sizes can be selected as similar tasks to accelerate the pruning of the new task.
Applying SMSP to tasks of different sizes.
In Tab. <ref>, we show the results of applying SMSP to tasks of different sizes. The pruning ratios of all tasks are set to 90%. In the table, we find that for test tasks of different sizes, when we use the 5-classes similar tasks to extract the sub-networks for the test tasks, its performance is better than that of the 3-classes similar tasks. This is because similar tasks containing more classes can better differentiate data from different classes. Similar tasks of large sizes can extract more accurate task-specific filters/nodes for a given new task.
Applying SMSP to tasks of different memory constraints.
In Tab. <ref>, we apply SMSP to tasks of varying memory constraints. All the tasks are 5-classes classification tasks.
We observe that SMSP outperforms AMP when transferring between different pruning ratios.
Additionally, SMSP performs better when the pruning ratios of similar tasks and test tasks are the same. This could be attributed to the fact that in a pruned model with a small pruning ratio, some redundant filters/nodes are preserved in the mask, whereas in a pruned model with a large pruning ratio, some task-specific filters/nodes will be removed.
An interesting finding is that SMSP can leverage similar tasks with large pruning ratios to generate a well-performing pruned model of a smaller pruning ratio for a new task. This demonstrates the superiority of using pruned results of similar tasks as prior knowledge.
Performance on unseen tasks. To validate the generality of SMSP, we randomly sample 50 test tasks from Caltech-256 <cit.>. SMSP produces pruned models for these test tasks by learning from pruned results of tasks from ViT trained on ImageNet. The pre-trained ViT and similar tasks in the pool of pruned results never see the data of Caltech-256. All the test tasks are 5-classes classification tasks with the pruning ratio of 90%.
In Tab. <ref>, we show the results of applying SMSP to Caltech-256 and compare it with AMP.
The results show that SMSP can achieve comparable performance as AMP, which uses 10x training iterations. This indicates that SMSP can also identify task-specific heads/nodes in the pre-trained ViT for each unseen task from Caltech-256, so only a few training iterations suffice to produce a well-performed pruned model, showing the generality of SMSP to diverse datasets.
§.§ Ablation Study
Effect of the mask.
The main contribution of SMSP is its ability to leverage the pruned results of similar tasks to generate the task-specific mask for each new test task. To validate the efficacy of the masks produced by SMSP, we randomly generate a mask for each task using the same pruning ratio and compare their performance with that of SMSP. In Tab. <ref>, we observe that for tasks using ResNet-18 and ViT, the performance of random masks is significantly worse than that of SMSP. These results suggest that the masks generated by SMSP can effectively identify filters/nodes that are relevant to the new target tasks.
Effect of the number of similar tasks.
In plot (a) of Fig.<ref>, we study the effect of the number of similar tasks for each new task. For both tasks from ResNet-18 and ViT, as the number of similar tasks increases, the performance of SMSP also improves.
This is because more pruned results of similar tasks can provide more task-specific knowledge for the new task.
When the number >8, SMSP converges, which indicates that 8 similar tasks for each task in SMSP are enough to create a high-quality mask.
Effect of task similarities.
In plot (b) of Fig.<ref>, we compare the performance of SMSP when tasks with different similarities are used. The accuracy of using pruned models with higher similarities is always better than that of lower similarities, which implies that tasks with high similarities share more knowledge with new target tasks. This observation aligns with the findings presented in Section <ref>. The plot also illustrates that SMSP converges when the training iterations >80, indicating that only a limited number of training iterations will be enough for SMSP to build a promising pruned model.
§ CONCLUSION
In this paper, we propose a generic one-shot pruning method called SMSP to fast-adapt the pre-trained model to downstream tasks.
Based on the discovery that tasks with high similarities share more filters/nodes in their pruned models, given a new task, SMSP leverages the knowledge from the pruned models of its similar tasks to extract a sub-network from the pre-trained model. Then, a few training steps on the sub-network can reach a high-quality pruned model. Our experiments demonstrate that SMSP achieves SOTA results in terms of both accuracy and efficiency across various datasets and pre-trained models.
|
http://arxiv.org/abs/2307.05968v1 | 20230712072953 | Surfaces of constant principal-curvatures ratio in isotropic geometry | [
"Khusrav Yorov",
"Mikhail Skopenkov",
"Helmut Pottmann"
] | math.DG | [
"math.DG",
"53A05, 53A10, 53C42"
] |
Khusrav Yorov, Mikhail Skopenkov^* (mikhail.skopenkov @ gmail·com), Helmut Pottmann
King Abdullah University of Science and Technology, Thuwal, 23955, Saudi Arabia
We study surfaces with a constant ratio of principal curvatures in Euclidean and isotropic geometries and characterize rotational, channel, ruled, helical, and translational surfaces of this kind under some technical restrictions (the latter two cases only in isotropic geometry). We use the interlacing of various methods of differential geometry, including line geometry and Lie sphere geometry, ordinary differential equations, and elementary algebraic geometry.
Mathematics Subject Classification: 53A05, 53A10, 53C42
Surfaces of constant principal-curvatures ratio in isotropic geometry
August 12, 2023
=====================================================================
§ INTRODUCTION
We study surfaces with a constant ratio of principal curvatures in Euclidean and isotropic geometries and characterize rotational, channel, ruled, helical, and translational surfaces of this kind under some technical restrictions (the latter two cases only in isotropic geometry).
Surfaces with a constant ratio of principal curvatures, or briefly CRPC surfaces, generalize minimal surfaces while keeping invariance under similarities. However, they are significantly harder to construct than minimal surfaces. CRPC surfaces are characterized geometrically as surfaces having a constant angle between characteristic curves (asymptotic curves in the case of negative Gaussian curvature; conjugate and principal symmetric curves
in case of positive Gaussian curvature).
Recent interest in CRPC surfaces has its origin in architecture, in
particular in the aim of
building geometrically complex shapes from simple elements. A remarkable
class of such shapes is given by the asymptotic gridshells of E. Schling <cit.>. They are formed by bending originally
flat straight lamellas of bendable material (metal, timber) and
arranging them in a quadrilateral structure so that all strips are orthogonal
to some reference surface S. This requires the strips to follow
asymptotic curves on S. If, in addition, one aims at congruent nodes
to further simplify fabrication, one arrives at surfaces S on
which asymptotic directions form a constant angle, i.e., at
negatively curved CRPC surfaces. Even positively curved CRPC
surfaces and other Weingarten surfaces are of interest in architecture,
since they only have a one-parameter family of curvature elements,
which simplifies surface paneling of double-curved
architectural skins through mold re-use <cit.>.
A classical general approach to complicated problems in Euclidean geometry is to start with their
simpler analogs in so-called isotropic geometry. The isotropic analogs give a lot of geometric insight and also provide an initial guess for numerical optimization. This approach has been (implicitly) used since as early as the work <cit.> by Müntz from 1911, who solved the Plateau problem for Euclidean minimal surfaces in a quite general setup by deformation of graphs of harmonic functions. Such graphs are minimal surfaces in isotropic geometry; thus in this case the optimization has led to the whole existence proof.
Isotropic geometry has been studied extensively
by K. Strubecker (see, e.g., <cit.>) and is treated in the monograph by H. Sachs <cit.>. It is based on the group of affine transformations which preserve the isotropic semi-norm ‖(x,y,z)‖_i:=√(x^2+y^2) in space with the coordinates x,y,z.
It can also be seen as relative differential geometry with respect to the unit isotropic sphere (paraboloid of revolution) 2z=x^2+y^2. Isotropic geometry is simpler but has much in common with Euclidean and other Cayley–Klein geometries.
The isotropic geometry of surfaces appears also in structural design and statics (see, e.g., <cit.>), due to the close relation between the stresses in a planar body
and the isotropic curvatures of the associated Airy stress surface
<cit.>. CRPC surfaces in isotropic space
represent planar stress states with a constant ratio of principal stresses.
In our arguments, we use the interlacing of various methods of differential geometry
, ordinary differential equations, and elementary algebraic geometry (the latter — in the classification of the translational CRPC surfaces, which we consider as our main contribution; see Section <ref>).
§.§ Previous work
Euclidean Geometry. Only a few explicit examples of CRPC surfaces, not being minimal surfaces, have been known before. The explicit parameterizations were only available for rotational and helical CRPC surfaces <cit.>.
CRPC surfaces are a special case of so-called Weingarten surfaces.
A Weingarten surface is a surface with a fixed functional relation f(H, K) = 0 between the mean curvature H and Gaussian curvature K at each point (we assume that the zero set of the function f is an analytic curve).
A surface is called linear Weingarten if there is a fixed linear relation between the two principal curvatures κ_1 and κ_2
at each point. Recently Lopez and Pampano <cit.> have classified all rotational linear Weingarten surfaces, which are CRPC surfaces when the intercept of the relation is zero.
Moreover, it has been shown that linear Weingarten surfaces are rotational if they are foliated by a family of circles <cit.>. Using quite involved computations, Havlíček <cit.> proved that channel Weingarten surfaces must be rotational or pipe surfaces; cf. <cit.>.
In Section <ref> we give a geometric proof of this result.
In works <cit.>
rotational CRPC surfaces with K<0 have been characterized via isogonal asymptotic parameterizations. Yang et al. <cit.> have recently presented a characterization of all helical CRPC surfaces.
The most well-studied Weingarten surfaces are
the ones with K=const or H=const. Classification results for rotational, helical, and translational surfaces of this kind can be found in <cit.> and <cit.>.
CRPC surfaces, via a Christoffel-type transformation of certain spherical nets, were derived in <cit.> with a focus on discrete models. In work <cit.>, we can find an effective method for the computation of discrete CRPC surfaces that provides insight into the shape variety of CRPC surfaces. Since numerical optimization was involved there, one
cannot derive precise mathematical conclusions, but it can be helpful for further studies.
Isotropic Geometry.
As far as we know, the examples of isotropic CRPC surfaces known before were either minimal (having isotropic mean curvature H=0) or paraboloids (having both H=const and the isotropic Gaussian curvature K=const). However, there is a variety of related works regarding the conditions
H=const or K=const separately. Surfaces with K=const have received early attention as solutions of the
Monge-Ampere equation, but only within isotropic geometry their
geometric constructions, e.g., as Clifford translational surfaces, are elegant and simple <cit.>.
An exact representation of several types of ruled surfaces with H=const or K=const
can be found in <cit.>.
All helical surfaces with H=const or K=const
were classified in <cit.>. Translational surfaces with H=const or K=const
were classified in <cit.> in the case when the generating curves are planar and in <cit.> in the case when one of the generating curves is spatial. However, the classification is still unknown when both generating curves are spatial.
§ PRELIMINARIES
§.§ Admissible surfaces and isotropic curvatures
Recall that the isotropic semi-norm in space with the coordinates x,y,z is
‖(x,y,z)‖_i:=√(x^2+y^2).
An affine transformation of ℝ^3 that scales the isotropic semi-norm by a constant factor has the form
𝐱'= A·𝐱+𝐛,
A= [ h_1 -h_2 0; h_2 h_1 0; c_1 c_2 c_3 ]
for some values of the parameters 𝐛∈ℝ^3 and h_1,h_2,c_1,c_2,c_3∈ℝ. Such transformations form the 8-parametric group G^8 of general isotropic similarities. The group of isotropic congruences
is the 6-parametric subgroup G^6 with
h_1=cosϕ, h_2=sinϕ, c_3=1.
These transformations appear as Euclidean congruences in the projection onto the
plane z=0, which we call top view.
Therefore, isotropic distances between
points and isotropic angles between lines appear in the top view as Euclidean distances and angles, respectively.
Lines and planes which are parallel to the z-axis are called isotropic or vertical. They play a special role and are usually excluded as tangent spaces in differential geometry. A point of a surface is admissible if the tangent plane at the point is non-isotropic, and a surface is admissible if it has only admissible points. Hereafter by a surface we mean the image of a proper injective C^3 map of a closed planar domain into ℝ^3 with nondegenerate differential at each point, or, more generally, an embedded 2-dimensional C^3 submanifold of ℝ^3, possibly with boundary and possibly non-compact.
An admissible surface can be locally represented as the graph of a function,
z=f(x,y).
It is natural to measure the curvature of the surface in a given direction by a second-order quantity invariant under isotropic congruences and vanishing for a plane.
Thus the isotropic normal curvature in a tangent direction t=(t_1,t_2,t_3) with ‖ t‖_i^2=t_1^2+t_2^2=1 is defined to be
the second directional
derivative of f,
κ_n(t)= (t_1,t_2) ·[ f_xx f_xy; f_yx f_yy ]·[ t_1; t_2 ],
and the isotropic shape operator is defined to be the Hessian ∇^2(f) of f. Its
eigenvalues κ_1 and κ_2 are the isotropic principal curvatures. Of course, they occur in directions (called isotropic principal directions) that are orthogonal in the top view, and thus also orthogonal in the isotropic sense.
The isotropic mean and Gaussian curvatures are defined respectively by
H:=(κ_1+κ_2)/2=(f_xx+f_yy)/2 and K:=κ_1κ_2=f_xxf_yy-f_xy^2.
An admissible surface has a constant ratio a≠ 0 of isotropic principal curvatures, or is a CRPC surface, if κ_1/κ_2=a or κ_2/κ_1=a at each point of the surface. The latter condition is equivalent to H^2/K = (a+1)^2/(4a).
In particular, for a=-1 we get isotropic minimal surfaces, characterized by the condition H=0, i.e., the graphs of harmonic functions. As another example, for a=1 we get
a unique up to similarity CRPC surface 2z = x^2+y^2, also known as the isotropic unit sphere <cit.>.
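These curvature formulas are straightforward to verify with computer algebra. The following SymPy snippet is our own check, not part of the paper: it computes H and K for the graph z = x^2+ay^2 discussed below and confirms the relation H^2/K = (a+1)^2/(4a).

import sympy as sp

x, y, a = sp.symbols('x y a')
f = x**2 + a*y**2
Hess = sp.hessian(f, (x, y))                      # isotropic shape operator
H = sp.Rational(1, 2) * Hess.trace()              # (f_xx + f_yy)/2
K = Hess.det()                                    # f_xx*f_yy - f_xy**2
print(Hess.eigenvals())                           # {2: 1, 2*a: 1}: the ratio of principal curvatures is a
print(sp.simplify(H**2/K - (a + 1)**2/(4*a)))     # 0, confirming the stated relation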
Isotropic principal curvature lines, asymptotic and isotropic characteristic curves are defined analogously to the Euclidean case as curves tangent to corresponding directions. Recall that two tangents at a surface point are conjugate if one is tangent to a curve on the surface, while the other one is a ruling of the envelope of the tangent planes at points on the curve. For instance, the isotropic principal directions are the ones that are conjugate and orthogonal in the top view. For K>0, the isotropic characteristic directions are the ones that are conjugate and symmetric with respect to the isotropic principal directions in the top view. For K<0, they coincide with the asymptotic directions, which are the same in Euclidean and isotropic geometry.
For isotropic CRPC surfaces, the isotropic characteristic curves intersect
under the constant isotropic angle γ
with tan^2(γ/2) = | a|. To see this, make the tangent plane at the intersection point horizontal by an appropriate isotropic congruence of the form z↦ z+px+qy and apply a similar assertion in Euclidean geometry <cit.>. All sufficiently smooth Euclidean or isotropic CRPC surfaces are analytic by the Petrowsky theorem <cit.>. We sometimes restrict our results to the case of analytic surfaces, if this simplifies the proofs.
For any paraboloid with a vertical axis, or, equivalently, the graph of any quadratic function f(x,y) with ∇^2(f)≠ 0, both isotropic principal curvatures are constant. Hence their ratio a is also constant. This surface can be brought to the paraboloid z = x^2+ay^2 by an appropriate general isotropic similarity.
(Technically, 1/a can also be considered as such a ratio of the same surface, but z = x^2+ay^2 is isotropic similar to z = x^2+y^2/a.)
The isotropic principal curvature lines of a non-rotational paraboloid z = x^2+ay^2, where a≠ 0,1, are parabolae (parabolic isotropic circles, to be discussed below) in the isotropic planes x=const and y=const. The rotational paraboloid z = x^2+ay^2+(a-1)(x-x_0)^2 touches the surface z = x^2+ay^2 along the isotropic principal curvature line x=x_0. Thus the surface is an envelope of a one-parameter family (actually, two families) of congruent rotational paraboloids with vertical axes (also known as parabolic isotropic spheres).
The characteristic curves of the paraboloid z = x^2+ay^2, where a≠ 0,1, appear in the top view as lines parallel to x=±√(| a|)y.
For a hyperbolic paraboloid (a < 0), these
curves are the rulings. For an elliptic paraboloid (a > 0), they are parabolae forming a translational net on the surface.
Thus a paraboloid with a vertical axis is at the same time a translational (see Section <ref>), a parabolic rotational (see Section <ref>), an isotropic channel (see Section <ref>),
and, for a<0, a ruled surface (see Section <ref>).
§.§ Isotropic spheres and circles
In isotropic geometry, there are two types of isotropic spheres.
The set of all points at the same isotropic distance r from a fixed point O is called a cylindric isotropic sphere. In Euclidean terms, it can be visualized as a right circular cylinder with vertical rulings. Its top view appears as a Euclidean circle with the center at the top view of O and the radius r. Any point
on the axis of this cylinder
can serve as the center of the same isotropic sphere.
An (inclusion-maximal) surface with both isotropic principal curvatures equal to a constant A≠ 0
is a parabolic isotropic sphere. It has the equation
2z=A(x^2+y^2)+B x+C y+D, A ≠ 0,
for some B,C,D∈ℝ. Here 1/A is called
the radius of the isotropic sphere.
Such isotropic spheres are paraboloids of revolution with vertical axes.
The intersection of an isotropic sphere S with a non-tangential plane P is an isotropic circle. The isotropic circle is elliptic if P is non-isotropic, parabolic if S is parabolic and P is isotropic, cylindric if S is cylindric and P is isotropic. The resulting isotropic circle is an ellipse whose top view is a Euclidean circle, a parabola with a vertical axis, or a pair of vertical lines, respectively.
Recall that two
curves 𝐱(t) and 𝐲(t) have a second-order contact for t=0, if 𝐱(0)=𝐲(0), 𝐱'(0)=𝐲'(0), and 𝐱”(0)=𝐲”(0). Two non-parameterized curves have a second-order contact if some of their regular parametrizations do. The osculating isotropic circle of a spatial curve at a non-inflection point is an isotropic circle having a second-order contact with the curve at the point.
There is an analog of Meusnier's theorem in isotropic geometry.
Let an admissible surface Φ have isotropic normal curvature κ_n≠ 0 at a point p ∈Φ along a surface tangent line T.
Then the osculating isotropic circles of all curves on Φ touching T at p lie on the parabolic isotropic sphere of radius 1/κ_n touching Φ at p.
§ ROTATIONAL SURFACES
§.§ Isotropic rotational CRPC surfaces
Euclidean rotations about the z-axis are also
isotropic congruences. A surface invariant under these rotations is called isotropic rotational, as well as the image of the surface under any isotropic congruence.
Looking for isotropic rotational CRPC surfaces, we
consider the graph of a smooth function z=h(r) of the radial distance r:=√(x^2+y^2). Profiles x/y=const and parallel circles r=const give a principal parameterization in isotropic geometry,
due to the symmetries. The isotropic profile curvature κ_2
equals the 2nd derivative h”(r), and κ_1=h'(r)/r by
Meusnier's theorem (Theorem <ref>). Hence
κ_2/κ_1=a amounts to solutions of ah'=rh”. Up to isotropic similarities,
this yields the profile curves
h(r)= r^1+a, if a≠ -1;
log r, if a = -1,
and the ones with a replaced by 1/a. We have arrived at the following proposition.
(See Figure <ref>)
An admissible isotropic rotational surface has a constant ratio a≠ 0 of isotropic principal curvatures if and only if it is isotropic similar to a subset of one of the surfaces
z = (x^2+y^2)^(1+a)/2 or z = (x^2+y^2)^(1+a)/(2a),
if a≠ -1;
z = log(x^2+y^2),
if a = -1.
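The proposition is easy to double-check with computer algebra: for a rotational graph z=h(r) the ratio of isotropic principal curvatures is rh''(r)/h'(r), and the following SymPy lines (our own verification) show that it is constant for the stated profiles.

import sympy as sp

r, a = sp.symbols('r a', positive=True)
for h in (r**(1 + a), r**((1 + a)/a), sp.log(r)):
    ratio = sp.simplify(r * sp.diff(h, r, 2) / sp.diff(h, r))
    print(ratio)                 # a, 1/a and -1, respectively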
§.§ Geometry of the surfaces and their characteristic curves
Let us discuss the geometry of the resulting surfaces, namely, the first of two surfaces (<ref>) for a≠ 0,±1. Profile curves (<ref>) are also known as W-curves: they are
paths of one-parameter continuous subgroups of the group of affine maps, actually, of G^8.
The isotropic characteristic curves
intersect the first isotropic principal curvature lines (parallel circles) under the constant isotropic angle γ/2 with tan^2(γ/2) = | a|.
Since isotropic angles appear as Euclidean angles in the top view,
the top views of the former must be logarithmic spirals, which intersect the radial
lines at angles (π-γ)/2. Hence, in the cylindrical coordinate system (r,ϕ,z), the isotropic characteristic curves are isotropic congruent to
r(ϕ)= e^ϕ/√(| a|), z(ϕ)=e^ϕ(1+a)/√(| a|).
These curves are again W-curves of a one-parametric subgroup of the isotropic similarity group G^8, and the rotational surfaces themselves are generated by the group. Its elements
are compositions of a rotation about the z-axis through some angle ϕ, a homothety with the center at the origin and the
coefficient e^ϕ/√(| a|), and the scaling by a factor of e^ϕ a/√(| a|) in the vertical direction.
The simplest cases are a=± 1. We obtain the isotropic sphere
z=r^2 and the logarithmoid z=log r, the latter being an isotropic minimal surface.
Also of interest is the case a=-1/2, which leads to surfaces obtained by rotating the parabola z^2=r about
its tangent at the vertex. Recall that in Euclidean geometry, the ratio κ_2/κ_1=-1/2 also leads to parabolae as
profiles, but rotated about the directrix <cit.>. Both surfaces are algebraic of order 4.
Clearly, we get rational algebraic surfaces for rational values of a≠ -1.
§.§ Euclidean rotational CRPC surfaces
It is interesting to compare profile curves (<ref>)
of rotational CRPC surfaces with their analogs in Euclidean geometry (<cit.>, <cit.>, <cit.>, <cit.>):
h(r)=∫ r^a dr/√(1-r^2a) = r^1+a/(1+a) · _2F_1(1/2, 1/2+1/(2a); 3/2+1/(2a); r^2a), a≠ -1,-1/3,…
Here _2F_1(α,β;γ;z) is the
Gauss hypergeometric function
(see e.g. <cit.> for a definition), r^a≤ 1, and 1/a is not a negative odd integer. The latter equality is checked in <cit.>.
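For the reader's convenience, the identity can also be checked numerically; the short Python check below (ours) uses the sample value a=2, for which r^2a<1 on the chosen interval.

from scipy.integrate import quad
from scipy.special import hyp2f1

a, r = 2.0, 0.7
integral, _ = quad(lambda t: t**a / (1 - t**(2*a))**0.5, 0, r)
closed = r**(1 + a) / (1 + a) * hyp2f1(0.5, 0.5 + 1/(2*a), 1.5 + 1/(2*a), r**(2*a))
print(integral, closed)          # the two values agree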
§.§ Parabolic rotational CRPC surfaces
In isotropic geometry, there is a second type of rotations, so-called
parabolic rotations, given by
x' =x+t,
y' =y,
z' =t^2/2+(x+by)t+z,
for some parameters b,t<cit.>. A surface invariant under these transformations
for b fixed and t running through ℝ is called parabolic rotational, as well as the image of the surface under any isotropic congruence.
It is not hard to find all parabolic rotational CRPC surfaces. Indeed, let the graph of a smooth function z=z(x,y) be invariant under the parabolic rotations. The section x=0 is the graph of the function h(y):=z(0,y). Applying the parabolic rotation
with t=x to the latter curve, we get the identity z(x,y)=x^2/2+bxy+h(y). Then the isotropic Gaussian and mean curvatures are K=h”-b^2 and H=(h”+1)/2. Then the equation H^2/K = (a+1)^2/(4a) is equivalent to a(h”+1)^2=(a+1)^2(h”-b^2). Hence h”=const and z(x,y) is a quadratic function. By Example <ref>,
we arrive at the following proposition.
An admissible parabolic rotational surface has a constant ratio a≠ 0 of isotropic principal curvatures if and only if it is isotropic similar to a subset of the paraboloid
z = x^2+ay^2.
In this case, both isotropic principal curvatures are constant. In the next section, we show that this is the only surface with this property (Theorem <ref>).
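The curvature computation behind this proposition can be confirmed symbolically; the short SymPy check below (ours) reproduces K=h''-b^2 and H=(h''+1)/2 for z=x^2/2+bxy+h(y).

import sympy as sp

x, y, b = sp.symbols('x y b')
h = sp.Function('h')
z = x**2/2 + b*x*y + h(y)
Hess = sp.hessian(z, (x, y))
print(sp.simplify(Hess.det()))          # Derivative(h(y), (y, 2)) - b**2, i.e. K = h'' - b^2
print(sp.simplify(Hess.trace()/2))      # Derivative(h(y), (y, 2))/2 + 1/2, i.e. H = (h'' + 1)/2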
§ CHANNEL SURFACES
Now we turn to channel surfaces and show that all channel CRPC surfaces are rotational (or parabolic rotational). Thus we will not encounter new surfaces.
§.§ Euclidean channel CRPC surfaces
A channel surface C is defined as the envelope of a smooth one-parameter family of spheres S(t), i.e., the
surface C touching each sphere S(t) along a single closed curve c(t) so that the curves c(t) cover C. The curves c(t) are called characteristics. Each characteristic c(t) is a circle which is a principal curvature line on C
(see Lemma <ref>).
The locus of sphere centers s(t) is referred to as a spine curve and their radii r(t) constitute the radius function. Special cases of channel surfaces are pipe surfaces, being envelopes of congruent spheres.
For simplicity, we restrict ourselves to analytic surfaces. If C is analytic (and has no umbilic points), then the family S(t) is also analytic up to a change of the parameter t (because the principal directions, hence the principal curvature lines c(t), hence the spheres S(t) analytically depend on a point of C). Thus by an analytic channel surface we mean an analytic surface which is the envelope of a one-parameter analytic family of spheres S(t), i.e., a family such that both s(t) and r(t) are real-analytic functions.
We would like to give
a short proof of the following result by Havlíček <cit.>.
An analytic channel Weingarten surface
is a rotational or pipe surface.
In particular, if an analytic channel surface has a constant ratio of principal curvatures, then it is rotational.
We start by recalling a well-known proof of the basic properties of characteristics; this will help us in establishing their isotropic analogs.
Under the notation at the beginning of Section <ref>, c(t) is a principal curvature line on C, and the principal curvature along c(t) is 1/r(t). If the family S(t) is analytic then c(t) is a circle with the axis parallel to s'(t). If r'(t)≠ 0 then s'(t)≠ 0.
Since S(t) and C touch along c(t), and c(t) is a principal curvature line of S(t), it is a principal curvature line of C as well, by the Joachimsthal theorem.
Let us compute the principal curvature κ_1 along c(t). Let p be a point on c(t) and T be the tangent line to C through p. Then by Meusnier's theorem, the osculating circle of c(t) lies on the sphere of radius 1/κ_1 that touches C at p. On the other hand, c(t), hence its osculating circle, lies on the sphere S(t) of radius r(t) that also touches C at p. This implies that κ_1=1/r(t).
Distinct characteristics c(t_1) and c(t_2) cannot have two common points or touch, otherwise S(t_1) and S(t_2) coincide and do not touch C along a single closed curve.
Let us compute c(t). The sphere S(t) has the equation (𝐱-s(t))^2=r(t)^2, hence c(t) is contained in the intersection of S(t) with the set (𝐱-s(t))· s'(t)=r'(t)r(t); cf. <cit.>. If s'(t)≠ 0, the latter is a plane orthogonal to s'(t), hence c(t) is a circle with the axis parallel to s'(t) because c(t) is a closed curve by the definition of a channel surface. If s'(t)=0, then c(t) is still a circle as a closed curve
containing the limit of circles c(t_n), where t_n→ t and s'(t_n)≠ 0 (the limit exists by analyticity and does not degenerate to a point because
a family of pairwise disjoint circles disjoint from a curve on a surface cannot shrink to a point on the curve).
Finally, if r'(t)≠ 0 then s'(t)≠ 0 because c(t)≠∅.
We now prove the theorem modulo a technical lemma and then the lemma.
Consider an analytic channel Weingarten surface C and a sphere S=S(t) of the enveloping family. They touch along the characteristic c=c(t).
By Lemma <ref> the principal curvature along the circle c, say κ_1=κ_1(t), is constant (it is equal to the inverse of the radius of S(t)).
If the Weingarten relation is not κ_1=const then the other principal curvature κ_2=κ_2(t) has to be constant along c.
First, let us consider the general-position case: κ_1(t),κ_2(t),κ_1'(t),κ_2'(t)≠ 0 and κ_1(t)≠κ_2(t) for all t.
At each point of c, draw an osculating circle of the section of C by the plane orthogonal to c.
Since all such circles have curvature κ_2 and touch the sphere S, they are obtained by rotations of one circle
about the axis of c and form a rotational surface D.
By construction, C and D have a second-order contact along c.
Then Lemma <ref> below asserts that at least in the general-position case, the spine curve s(t) of C has a second-order contact with the spine curve of D, i.e., the axis of c(t). Since s(t) has a second-order contact with a straight line for all t, it is a straight line and C is a rotational surface.
Next, let us reduce the theorem to the above general-position case.
If κ_1(t)≡κ_2(t) as functions in t, then C is a subset of a sphere because κ_1(t)≠ 0.
If κ_1(t)=const, then C is a pipe surface because 1/κ_1(t) is the radius of S(t).
If κ_2(t)=const, then C is a rotational surface. Indeed, consider a principal curvature line orthogonal to c(t) and let p(t) be its intersection point with c(t). Let n(t) be the surface unit normal at p(t). Then n'(t)=κ_2(t)p'(t). Hence the curvature center p(t)-n(t)/κ_2(t)=const (or n(t)=const for κ_2(t)=0). Thus the principal curvature line lies on the sphere of radius 1/κ_2(t) touching the surface (or a plane for κ_2(t)=0). For the other principal curvature lines orthogonal to c(t), such spheres (or planes) are obtained by rotations about the axis of the circle c(t). Then C is the envelope of those spheres (or planes) and hence it is rotational.
Otherwise, restrict the range of t to an interval where the above general-position assumptions are satisfied. The envelope of the resulting sub-family S(t) is rotational by the above general-position case. Then the whole C is rotational by the analyticity.
Finally, for a CRPC surface, the condition κ_1(t)=const implies κ_2(t)=const and again yields a rotational surface.
The deep idea behind this proof is that a channel surface C has a second-order contact with the osculating Dupin cyclide D <cit.> along a characteristic, and the spine curves of C and D also have a second-order contact. Here D is the limit of the envelope of all spheres tangent to three spheres S(t_1), S(t_2), S(t_3) for t_1,t_2,t_3 tending to t. The spine curves of C and the envelope have 3 colliding common points s(t_1),s(t_2),s(t_3), leading to a second-order contact. For a Weingarten surface C, the osculating Dupin cyclides D turn out to be rotational, with straight spine curves, hence the same must be true for C itself.
Although this construction gives geometric insight, passing to the limit t_1,t_2,t_3→ t rigorously is a bit technical. Thus we complete the proof by a different argument relying on additional curvature assumptions.
Under the notation of the proof of Theorem <ref>
and the assumptions κ_1(t),κ_2(t),κ_1'(t),κ_2'(t)≠ 0 and κ_1(t)≠κ_2(t) for all t, the spine curve s(t) has a second-order contact with the axis of the circle c(t).
In what follows we fix a value of t, say, t=0, and assume that t is sufficiently close to this value. It is also convenient to assume that κ_2'(0)(κ_1(0)-κ_2(0))>0 and t>0. Since κ_1'(0)≠ 0, by Lemma <ref> it follows that s'(0) is nonzero and parallel to the axis of c(0).
It remains to prove that the distance from s(t) to the axis is O(t^3).
We do it by reducing to a planar problem; see Fig. <ref> to the left. Fix some t_0> 0 and the plane P passing through s(t_0) and the axis of c(0). The section of c(t) by the plane P consists of two points c_1(t) and c_2(t). Let c_1 and c_2 be the two curves formed by these points. The section of C by P coincides with c_1 and c_2 because the characteristics cover the surface.
The section of S(t) is a circle S_1(t) touching c_1 and c_2 at c_1(t) and c_2(t).
Let D_1(t) and D_2(t) be the osculating circles of c_1 and c_2 at c_1(t) and c_2(t).
Orient each circle S_1(t) counterclockwise and fix
the orientations of D_1(t) and D_2(t) such that the contacts are oriented. Let s_1(t) be the center of S_1(t). Since s_1(t_0)=s(t_0), it suffices to prove that the distance from s_1(t) to the bisector of c_1(0)c_2(0) is O(t^3).
Next, let us prove that S_1(t) and D_1(0) are disjoint.
For this purpose, we compute the derivative of the signed curvature k_1(t) of the curve c_1(t) at t=0. By the Euler and Meusnier theorems, we get
k_1(t)=
(κ_1(t)cos^2 ∠(c_1,c(t)) +κ_2(t)sin^2∠(c_1,c(t)))/cos∠ (s(t)c_1(t),P).
Differentiating, we get k_1'(0)=κ_2'(0) ≠ 0 because ∠(c_1,c(0))=π/2 and ∠ (s(0)c_1(0),P)=0.
Since the signed curvatures of S_1(0) and D_1(0) are
κ_1(0) and κ_2(0) respectively, by the assumption κ_2'(0)(κ_1(0)-κ_2(0))>0 it follows that the signed curvature of D_1(t) is between the signed curvatures of D_1(0) and S_1(t) for t>0 small enough. By the Tait–Kneser theorem, D_1(t) is disjoint from D_1(0), and by construction, D_1(t) has an oriented contact with S_1(t). Thus D_1(0) and S_1(t) are separated by D_1(t), hence are disjoint.
Finally, we estimate the distance from s_1(t) to the bisector of c_1(0)c_2(0). Since c_1 and D_1(0) have second-order contact, it follows that
c_1(t) is within distance O(t^3) from D_1(0). Then the distance between the disjoint circles S_1(t) and D_1(0) is O(t^3). Analogously, the distance between S_1(t) and D_2(0) is O(t^3). Then the difference in the distances from the center s_1(t) to the centers of D_1(0) and D_2(0) is O(t^3). Since D_1(0) and D_2(0) are symmetric with respect to the bisector of c_1(0)c_2(0), it follows that
the distance from s_1(t) to the bisector is O(t^3), which proves the lemma.
§.§ Surfaces with both isotropic principal curvatures constant
As a motivation for studying isotropic channel surfaces,
and also as one step in their classification, let us find all surfaces with both isotropic principal curvatures constant.
An admissible surface has constant nonzero isotropic principal curvatures κ_1 and κ_2 if and only if it is isotropic congruent to a subset of the paraboloid
2z = κ_1 x^2+κ_2 y^2.
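As an illustration of the "if" direction (our sketch, assuming the standard graph formulas of isotropic geometry: for an admissible graph z=f(x,y), K is the determinant and H half the trace of the Hessian of f, and the isotropic principal curvatures are its eigenvalues), the Hessian of the paraboloid is the constant matrix diag(κ_1,κ_2), so both isotropic principal curvatures are constant. A short SymPy check, with ad hoc variable names:

import sympy as sp

x, y, k1, k2 = sp.symbols('x y kappa1 kappa2', real=True)
z = (k1*x**2 + k2*y**2) / 2

Hess = sp.Matrix([[sp.diff(z, x, 2), sp.diff(z, x, y)],
                  [sp.diff(z, x, y), sp.diff(z, y, 2)]])
K, H = Hess.det(), Hess.trace()/2
print(K, H)                                    # kappa1*kappa2, kappa1/2 + kappa2/2

a = k1/k2                                      # ratio of the isotropic principal curvatures
print(sp.simplify(H**2/K - (a + 1)**2/(4*a)))  # 0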
Let us see how the notion of an isotropic channel surface naturally arises in the proof of this theorem.
Assume that the isotropic principal curvature κ_1 is constant and nonzero along an isotropic principal curvature line c of an admissible surface C. At each point of c, take the parabolic isotropic sphere with the radius 1/κ_1 that touches C at this point. Then all these isotropic spheres coincide.
By the center of a parabolic isotropic sphere of radius r we mean the point obtained from the vertex of the paraboloid by the translation by the vector (0,0,r).
The unique parabolic isotropic sphere of radius r with the center m=(m_1,m_2,m_3) is given by the equation
2(z-m_3+r) = 1/r((x-m_1)^2+(y- m_2)^2).
Beware that the notion of a center is not invariant under isotropic congruences, but similar notions are common in isotropic geometry <cit.>.
The isotropic curvature center of C at a point p = (x,y,z) ∈ c is the center of the parabolic isotropic sphere of radius r=1/κ_1 that touches C at p. It is given by
m=
[
x - 1/κ_1 f_x
y - 1/κ_1 f_y
z - (f_x^2+f_y^2-2)/(2κ_1)
],
where we locally represent C as the graph of a function z=f(x,y). This formula is obtained from the three equations (<ref>), f_x= κ_1(x-m_1), f_y= κ_1(y -m_2).
Now let p(t) =(x(t), y(t), f(x(t),y(t))) run through the isotropic principal curvature line
c and let m(t) be the corresponding isotropic curvature center. It suffices to prove that
m'(t)=0. We omit the arguments of the functions x,y,f in what follows.
Since c is the isotropic principal curvature line, it follows that
f_xxx' + f_xyy' = κ_1 x'
f_xyx' + f_yyy' = κ_1 y',
hence
m'=[
x' - 1/κ_1f_xxx'-1/κ_1f_xyy'
y' - 1/κ_1f_xyx'-1/κ_1f_yyy'
f_xx' + f_yy' - f_xx' - f_yy'
]
=0.
The meaning of this lemma is that κ_1 = const implies that the surface is essentially an isotropic pipe surface, which we are going to define now.
An isotropic channel surface is the envelope of a smooth one-parameter family of parabolic isotropic spheres S(t), i.e. the surface C touching each isotropic sphere S(t) along a single curve c(t)
without endpoints so that the curves c(t) cover C. The curves c(t) are called characteristics.
If the radii of the isotropic spheres are constant, then the envelope is called an isotropic pipe surface.
To proceed, we need an isotropic analog of Lemma <ref> above.
Consider a characteristic c(t) of an isotropic channel surface C. Let r(t) be the radius of the isotropic sphere S(t) that touches C along c(t). Then c(t) is an isotropic principal curvature line on C and the isotropic principal curvature along c(t) is 1/r(t).
If r'(t) ≠ 0 then c(t) is
an elliptic isotropic circle.
If r(t)=const then for some t the curve c(t) is
a parabolic isotropic circle.
Since c(t) is an isotropic principal curvature line of S(t), then by the isotropic Joachimsthal theorem <cit.>,
it is an isotropic principal curvature line of C.
Let p be a point on c(t) and T be the tangent line to C through p. Then by Meusnier's theorem (Theorem <ref>), the osculating isotropic circle of c(t) lies on the isotropic sphere of radius 1/κ_n that touches C at p. Here κ_n is the isotropic principal curvature along c(t) because c(t) is an isotropic principal curvature line.
On the other hand, c(t), hence its osculating isotropic circle, lies on the isotropic sphere S(t) of radius r(t) that also touches C at p. This implies that κ_n=1/r(t).
Now let S(t) have the equation
2z = A(t)(x^2+y^2)+B(t) x+C(t) y+D(t).
Then
c(t) is contained in the set defined by the system (cf. <cit.>)
{[ A(t)(x^2+y^2)+B(t) x+C(t) y+D(t) -2z = 0,; A^'(t)(x^2+y^2)+B^'(t)x+C^'(t) y+D^'(t) = 0. ].
If r'(t) ≠ 0 then A'(t) = -r'(t)/r(t)^2 ≠ 0 because r(t) = 1/A(t). Remove the quadratic terms in the second equation by subtracting the first equation with the coefficient A'(t)/A(t) ≠ 0. This introduces a term linear in z into the second equation. Thus c(t) is the intersection of S(t) with a non-isotropic plane, hence an elliptic isotropic circle.
Finally, assume that r(t)=const so that A'(t) =0. There exists t such that at least one of the derivatives B'(t),C'(t),D'(t) ≠ 0, otherwise all S(t) coincide and there is no envelope. Then the second equation of (<ref>) defines an isotropic plane. Thus
c(t) is the intersection of S(t) with the plane, hence a parabolic isotropic circle.
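For a concrete toy illustration of the constant-radius case (our example, not needed for the proof), consider the isotropic pipe surface enveloping the unit-radius parabolic isotropic spheres S(t): 2z=(x-t)^2+y^2. Eliminating t from the above system can be done mechanically, e.g. in SymPy:

import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
S = (x - t)**2 + y**2 - 2*z      # S = 0 is the parabolic isotropic sphere of radius 1 centered over (t, 0)
St = sp.diff(S, t)               # second equation of the system: 2(t - x) = 0, an isotropic plane

print(sp.solve([S, St], [z, t])) # [(y**2/2, x)]

So the envelope is the parabolic cylinder 2z=y^2, and the characteristic for the parameter value t lies in the isotropic plane x=t; it is a parabolic isotropic circle, as the lemma asserts.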
This lemma and its proof remain true, if we allow the characteristics c(t) to have endpoints (in the definition of an isotropic channel surface); then
c(t) is going to be an arc of an isotropic circle instead of a full one.
By Lemma <ref>,
locally there are two families of congruent parabolic isotropic spheres of radii 1/κ_1 and 1/κ_2 respectively touching the surface along isotropic principal curvature lines. By the last assertion of Lemma <ref>, one of the characteristics c(t) of one family
is an arc of a parabolic isotropic circle. Performing an appropriate isotropic congruence, one can make the parabolic isotropic sphere S(t), touching the surface along c(t), symmetric with respect to the plane of c(t). The isotropic spheres of the other family are congruent and touch S(t) at the points of c(t). Hence they are obtained by parabolic rotations of one isotropic sphere, and thus the surface is locally parabolic rotational. By Proposition <ref>, the theorem follows. (The envelope of the other family can also be computed directly.)
§.§ Isotropic channel CRPC surfaces
The classification of isotropic channel CRPC surfaces is similar to the Euclidean ones,
but in addition to rotational surfaces, we get parabolic rotational ones.
An analytic isotropic channel surface is an analytic surface that is the envelope of a one-parameter analytic family of parabolic isotropic spheres.
An analytic isotropic channel Weingarten surface is an isotropic rotational or isotropic pipe surface.
In particular, an analytic isotropic channel surface with a constant nonzero ratio of isotropic principal curvatures is a subset of an isotropic rotational or parabolic rotational one.
The proof is analogous to the Euclidean one, but we consider the centers curve instead of the spine curve.
If the characteristics c(t) are elliptic isotropic circles, then the locus of their centers s(t)
is called the centers curve.
If the centers curve of an isotropic channel surface is contained in a vertical line, then the surface is isotropic rotational.
Bring the vertical line to the z-axis by an appropriate isotropic congruence.
Use the notation from the proof of Lemma <ref>. If A'(t) ≠ 0, then the second equation of (<ref>) defines a circle, which represents the top view of the characteristic c(t). Thus the top views of all the characteristics c(t) are circles with the center at the origin.
Therefore B^'(t) = C^'(t) =0 for all t, hence B(t) and C(t) are constants. Then the second equation of (<ref>) takes form x^2+y^2 = -D'(t)/ A'(t). Substituting it into the first equation of (<ref>), we obtain that all the isotropic circles c(t) lie in the planes parallel to one plane B(0)x+C(0)y-2z = 0. By continuity, this remains true for the roots of the equation A'(t)=0. Therefore the surface is isotropic rotational.
To ensure that the centers curve s(t) is a vertical line, we need the following two technical lemmas. In the first one, for a line segment joining two symmetric parabolic isotropic circles, we express the distance from the midpoint of the segment to the symmetry axis in terms of the replacing angles between the segment and the isotropic circles. The (oriented) replacing angle between two non-isotropic lines lying in one isotropic plane is the difference in their slopes.
(See Fig. <ref> to the middle.)
Assume that two parabolic isotropic circles D_1 and D_2 of isotropic curvature A lie in an isotropic plane and are symmetric with respect to an isotropic line L. Let the distance between their axes be 2B ≠ 0. Let two points p_1 and p_2 lie on D_1 and D_2 respectively. Let the segment p_1p_2 have isotropic length d and form replacing angles α_1 and α_2 with D_1 and D_2 respectively. Then the isotropic distance from the midpoint of p_1p_2 to the line L equals |α_1+α_2| d/|8AB|.
Without loss of generality, D_1 and D_2 lie in the xz-plane and have the equations z=A(x± B)^2. If the x-coordinates of p_1 and p_2 are x_1 and x_2, then
we compute directly |α_1+α_2|=4| AB|·| x_1+x_2|/| x_1-x_2|. Since
| x_1-x_2|=d and | x_1+x_2|/2 is
the desired isotropic distance, the lemma follows.
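The direct computation can also be reproduced symbolically; the following SymPy sketch (with our sign convention: each replacing angle is taken as chord slope minus tangent slope) verifies the identity |α_1+α_2|=4| AB|·| x_1+x_2|/| x_1-x_2| for the parabolas z=A(x± B)^2:

import sympy as sp

A, B, x1, x2 = sp.symbols('A B x1 x2', real=True)
x = sp.Symbol('x', real=True)

D1 = A*(x + B)**2                # the parabola z = A(x+B)^2
D2 = A*(x - B)**2                # the parabola z = A(x-B)^2

chord = (D2.subs(x, x2) - D1.subs(x, x1)) / (x2 - x1)      # slope of the segment p1 p2
alpha1 = chord - sp.diff(D1, x).subs(x, x1)                # replacing angle with D1 at p1
alpha2 = chord - sp.diff(D2, x).subs(x, x2)                # replacing angle with D2 at p2

print(sp.simplify(alpha1 + alpha2 - 4*A*B*(x1 + x2)/(x1 - x2)))   # 0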
Under the notation of Lemma <ref>, assume that the second principal curvature is constant along c(t) and different from the first one. If r'(t) ≠ 0, then s'(t) is vertical (or zero).
Fix a particular value of t, say, t=0, and assume that t is sufficiently close to this value.
It suffices to prove that the isotropic distance from s(t) to s(0) is O(t^2).
We do it by reduction to a planar problem.
Since r'(0) ≠ 0, by Lemma <ref> it follows that c(0) is an elliptic isotropic circle. Performing an isotropic congruence of the form z↦ z+px+qy, we can take c(0) to a horizontal circle. Fix some t_0>0 and the isotropic plane P passing through s(0) and s(t_0). The section of c(t) by the plane P consists of two points c_1(t) and c_2(t). The section of C coincides with the two curves c_1(t) and c_2(t) because the characteristics cover the surface. The section of S(t) is a parabolic isotropic circle touching the two curves at the points c_1(t) and c_2(t). Then the chord c_1(t)c_2(t) forms equal replacing angles (of opposite signs) with the tangents at c_1(t) and c_2(t). For t=t_0, the midpoint
of the chord is s(t_0).
Let us estimate how those replacing angles and the midpoint change if we replace the two curves with their osculating isotropic circles D_1 and D_2 (possibly degenerating into lines) at the points c_1(0) and c_2(0). Since the second principal curvature is constant along c(0) and the plane of c(0) is horizontal, it follows that D_1 and D_2 are symmetric with respect to the vertical line L through s(0).
Since the first principal curvature is different, it follows that D_1 ≠ D_2. Let p_1(t) and p_2(t) be the points of D_1 and D_2 lying on the vertical lines through the points c_1(t) and c_2(t) respectively. The replacing angle between the tangents to the respective curves at the points p_1(t) and c_1(t) is O(t^2) because D_1 is osculating. The same is true for the tangents at p_2(t) and c_2(t). The replacing angle between p_1(t)p_2(t) and c_1(t)c_2(t) is O(t^3). Thus the replacing angles α_1(t) and α_2(t) which p_1(t)p_2(t) forms with D_1 and D_2 satisfy α_1(t)+α_2(t)=O(t^2). Now the result follows from Lemma <ref> and its analog for lines D_1 and D_2.
Consider an analytic isotropic channel Weingarten surface C and the isotropic parabolic sphere S(t) of radius r(t) touching C along the characteristic c(t).
Then according to Lemma <ref>, the
isotropic principal curvature κ_1 along c(t) is 1/r(t). If the Weingarten relation is not κ_1=const, then
the other isotropic principal curvature κ_2 is constant along c(t).
If r(t)=const or κ_1≡κ_2, then we get an isotropic pipe surface or an isotropic sphere.
Otherwise, restrict the range of t to an interval where r'(t) ≠ 0 and κ_1 ≠ κ_2. By Lemma <ref>, the centers curve s(t) has vertical tangent vector s'(t) for all t.
Then s(t) is contained in a vertical line, and by Lemma <ref> the surface C is isotropic rotational.
Finally, suppose that C has a constant ratio of the isotropic principal curvatures. Then it is Weingarten. As we have proved, it is an isotropic rotational or pipe surface. In the latter case, κ_1 =const by Lemma <ref>, hence κ_2 = const, and C is a subset of a parabolic rotational surface by Theorem <ref>.
§ RULED SURFACES
Let us now turn to ruled surfaces. In Euclidean geometry we will not encounter a new surface, but in isotropic geometry there is a non-trivial CRPC ruled surface.
Our arguments are based on line geometry. For the concepts used in the following, we refer to <cit.>. The methods for ruled surfaces and channel surfaces are actually related via Lie's line-sphere correspondence. We again restrict ourselves to analytic surfaces (with nonvanishing Gaussian curvature); then the rulings form an analytic family because the direction of a ruling is asymptotic.
An analytic ruled surface is an analytic surface covered by an analytic family of line segments. The lines containing the segments are the rulings.
§.§ Euclidean ruled CRPC surfaces.
We start with the Euclidean
case and show the following result.
The only ruled surfaces with a constant nonzero ratio of principal curvatures
are the ruled minimal
surfaces, i.e., helicoids.
A ruled CRPC surface must be skew (without torsal rulings) and its asymptotic curves should intersect under a constant angle γ<cit.>.
One family of asymptotic curves is the rulings. Let us fix a ruling R and consider the other asymptotic tangents A(p) (different from R) at all points p ∈ R. They form a quadric L(R)
(the so-called Lie quadric of R; see e.g. <cit.>). If the angle between R and A(p) is constant, it can only be a right angle (so that L(R) is a right hyperbolic paraboloid). Indeed, if the angle γ is not a right one,
then the ideal points of the lines A(p) form a conic c_ω in the ideal plane ω (ideal conic of a rotational cone with axis
R) which does not contain the ideal point R_ω of R.
However, R_ω and c_ω should lie in the same curve L(R)∩ω (conic or pair of lines), which is not possible.
So, γ=π/2 and our surface is a skew ruled minimal surface,
i.e. a helicoid by the Catalan theorem.
Here we used the famous Catalan theorem stating that the only ruled minimal surfaces are helicoids and planes. Remarkably, in Lemmas <ref>–<ref> below we actually obtain a line-geometric proof of this classical result.
Indeed, we have just shown
that the Lie quadric of each ruling must be a right hyperbolic paraboloid. Then Lemma <ref> and its proof remain true in Euclidean geometry. Then without loss of generality, all the rulings are parallel to the plane z=0. Since the asymptotic directions and the rulings are orthogonal, their top views are also orthogonal, and our Euclidean minimal surface is an isotropic minimal surface as well. The Catalan theorem now reduces to Lemmas <ref>–<ref>, where the case of a hyperbolic paraboloid is easily excluded.
The proof of Proposition <ref> already indicates that there is hope to get a ruled
CRPC surface with a constant ratio a ≠ -1 in isotropic geometry. This is what we will now pursue.
§.§ Isotropic ruled CRPC surfaces.
(See Fig. <ref>)
An admissible analytic ruled surface has a constant ratio a< 0 of isotropic principal curvatures if and only if it is isotropic similar to a subset of
either the hyperbolic paraboloid
z = x^2+ay^2,
or the helicoid
r(u,v) =[ ucos v; usin v; v ],
if a= -1,
or the surface
r_a(u,v) =[ u cos v; u sin v; exp(((a+1)/√(| a |))v) ],
if a ≠ -1.
This follows directly from Lemmas <ref>–<ref> below (which themselves rely on standard Lemmas <ref>–<ref> from Appendix <ref>).
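Before turning to the lemmas, here is a small numerical sanity check (ours) of the last family: locally, for a branch of the polar angle, the spiral surface above is the graph z=exp(c·atan2(y,x)) with c=(a+1)/√(| a|), and, assuming the standard isotropic graph formulas K=det(Hess z) and H=(1/2)tr(Hess z), one can test H^2/K=(a+1)^2/(4a) at a sample point:

import sympy as sp

x, y = sp.symbols('x y', real=True)
a = sp.Integer(-2)                      # any fixed a < 0 works for this spot check
c = (a + 1) / sp.sqrt(-a)
z = sp.exp(c * sp.atan2(y, x))          # the spiral ruled surface as a graph

zxx, zyy, zxy = sp.diff(z, x, 2), sp.diff(z, y, 2), sp.diff(z, x, y)
K = zxx*zyy - zxy**2
H = (zxx + zyy) / 2

val = (H**2/K - (a + 1)**2/(4*a)).subs({x: sp.Rational(3, 5), y: sp.Rational(-7, 4)})
print(val.evalf())                      # ~0 up to numerical precision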
An admissible analytic ruled surface with a constant nonzero ratio of isotropic
principal curvatures is a conoidal (or Catalan) surface, i.e. all the rulings are parallel to one plane.
Let us show that the Lie quadric of each ruling R (see <cit.>) is a hyperbolic paraboloid. Since the surface is admissible, the top view R' of R is not a point.
Since isotropic angles are seen as Euclidean angles in the top view, the top views A(p)' of the asymptotic tangents A(p) at points p of R must form the same angle γ with R'. Hence the lines A'(p) are parallel to each other, implying that the quadric formed by the lines A(p) is
a hyperbolic paraboloid.
Then the second family of rulings (distinct from A(p)) of the quadric intersects the ideal plane ω by a line A_ω. Since the Lie quadric has a second-order contact with our surface <cit.>, it follows that
A_ω has a second-order contact with the ideal curve s_ω (possibly degenerating to a point) formed by the ideal points of the rulings of our surface <cit.>.
As a curve that has an osculating straight line at each point, the curve s_ω is itself a straight line or a point and
therefore our surface must be a conoidal surface.
An admissible analytic conoidal surface with a constant nonzero ratio of isotropic
principal curvatures is a conoid, i.e., all the rulings are parallel to a fixed plane and intersect a fixed line. The fixed line is either vertical or belongs to another family of rulings.
In the latter case, the surface is a hyperbolic paraboloid with a vertical axis.
By the analyticity, it suffices to prove the lemma for an arbitrarily small part of our surface. Thus in what follows we freely restrict and extend our surface.
Let R_t be the analytic family of the rulings of the surface. Since the ratio of the principal curvatures is nonzero, it follows that
there are no torsal rulings; in particular, the surface is not a plane.
Let R'_t be the top view of R_t. By Lemma <ref>, one of the following cases (i)–(iii) holds, after we restrict t to a smaller interval.
Case (i):
all R'_t have a common point. Then all R_t intersect one
vertical line and the lemma is proved.
Case (ii):
all R'_t are parallel. Then due to the fixed angle between the asymptotic directions in the top view, the second family of
asymptotic curves also appears as parallel lines in the top view.
Hence those curves lie in isotropic planes. However, at non-inflection points of the asymptotic curves the osculating planes are the tangent planes of the surface <cit.>. Thus these tangent planes would need to be isotropic, which is not possible for an admissible surface. Hence, there are no non-inflection points, both families of asymptotic curves are straight lines, and our surface is a hyperbolic paraboloid with a vertical axis.
Case (iii):
all R'_t touch one curve e (envelope).
Let us show that this case is actually impossible.
For this purpose, we are going to extend our surface to reach the envelope.
Let h(t) be the Euclidean distance from the ruling R_t to a fixed plane that all the rulings are parallel to. We have that h(t) is not constant because our surface is not a plane. By continuity, there is an interval I where h'(t) has a constant sign. Then the union ⋃_t∈ IR_t is an analytic surface containing a part of the initial surface. The resulting surface is not admissible: by Lemma <ref> the envelope forms a part of the boundary of the top view of the surface, hence the tangent planes are isotropic at the points whose top views lie on the envelope.
Switch to the new surface ⋃_t∈ IR_t. By analyticity, it still has a constant ratio of isotropic principal curvatures (at the admissible points).
The Gaussian curvature still vanishes nowhere because there are no torsal rulings. Thus the whole surface, including non-admissible points, is covered by two analytic families of asymptotic curves (recall that the asymptotic curves are the same in Euclidean and isotropic geometry, hence they acquire no singularities at the non-admissible points). One of the families consists of the rulings, and at admissible points, the other one crosses them under constant angle γ in the top view.
Now we find an asymptotic curve α containing a non-admissible point O (but not entirely consisting of non-admissible points). See Fig. <ref> to the right.
The top view α' needs to have a common point with the envelope e.
Take a,b∈ I close enough so that the angle between R'_a and R'_t is an increasing function in t on [a,b] not exceeding π-γ. Let A' and B' be the tangency points of R'_a and R'_b with the envelope, and C∈ R_a be the point with the top view C':=R'_a∩ R'_b. Then the second asymptotic curve α through C is the required one. Indeed, the angle between its top view α' and R'_a equals γ, hence α' enters the curvilinear triangle A'B'C' formed by the arc A'B' of the envelope and two straight line segments B'C' and C'A'. Since
the angle between α' and R'_t is constant and
the angle between R'_a and R'_t is increasing, the curve α' cannot reach the sides B'C' and C'A' as long as it remains smooth. Since the asymptotic curves extend till the surface boundary, it follows that α' has a common point with the envelope
and α has a non-admissible point O, as required.
Since α does not entirely consist of non-admissible points, it follows that at the other close enough points P ≠ O of α,
the tangents cross the rulings under constant angle γ in the top view. By the continuity, the limit L' of the top views of the tangents crosses the top view of the ruling through O under angle γ.
Now let us prove that L' must be the top view of the ruling through O, and thus get a contradiction.
If the tangent of α at O is not vertical, then L' coincides with the top view of the tangent, hence with the top view of the tangent plane at O, hence with the top view of the ruling through O.
If the tangent of α at O is vertical, then by Lemma <ref> the limit L' coincides with the top view of the limit of the osculating planes at the points of α. But the osculating plane of an asymptotic curve at non-inflection points is the tangent plane, and the points close enough to O are non-inflection. Hence we again obtain the top view of the tangent plane at O, equal to the top view of the ruling through O.
This contradiction shows that case (iii) is impossible, which completes the proof of the lemma.
An admissible conoid has a constant ratio a ≠ 0 of isotropic principal curvatures if and only if it is isotropic similar to a subset of one of the surfaces z=x^2+ay^2, (<ref>), or (<ref>).
Let all the rulings of the conoid be parallel to a fixed plane α and intersect a fixed line l. We have two possibilities indicated in Lemma <ref>.
If l∦Oz then by Lemma <ref> the surface is a hyperbolic paraboloid with a vertical axis. By Example <ref> it is isotropic similar to a subset of the paraboloid z = x^2+ay^2.
If l∥ Oz then performing an isotropic similarity, we can take l to the z-axis and α to the plane z=0. The resulting conoid can be parameterized as
r(u,v) = (u cos v, u sin v, h(v))
for some smooth function h(v). The asymptotic curves (distinct from the rulings) are characterized by the differential equation
<cit.>
u(v)^2 = b h'(v)
for some constant b. The top views of the asymptotic curves
must intersect the lines through the origin under the constant angle γ, where tan^2(γ/2)=| a|. We now have to distinguish whether this angle is a right one or not.
If γ is a right angle, then the top views of asymptotic curves must be concentric circles, leading to u(v)=const and h(v)=v up to a translation and a scaling along the z-axis. We get helicoid (<ref>).
If γ is not a right angle, then the top views
must be logarithmic spirals
u(v)= c e^{± v cot γ}
for some constant c. This yields
h(v)= e^{± 2v cot γ}
up to a translation and a scaling along the z-axis.
Changing the signs of v and y, if necessary, we arrive at (<ref>).
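As a quick consistency check of the exponent (using the normalization tan^2(γ/2)=| a| above and a<0): 2 cot γ = (1-tan^2(γ/2))/tan(γ/2) = (1-| a|)/√(| a|) = (a+1)/√(| a|), which matches the exponent in the third coordinate of the surface r_a above.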
§.§ Geometry of the surfaces and their characteristic curves
Ruled CRPC surface (<ref>) is a spiral surface (<cit.>), generated by a one-parameter group of (Euclidean and isotropic) similarities, composed of rotations about the z-axis and central similarities with center at the origin. The paths of that motion are cylindro-conical spirals which appear in the
top view as logarithmic spirals with polar equation r(v)= const· e^{2v cot γ}. However, the
asymptotic curves (different from rulings) are not such paths. They are
expressed as
c(v)= (c e^{v cot γ}cos v, c e^{v cot γ}sin v, e^{2v cot γ}),
and are also obtained by intersecting the ruled surface with isotropic spheres (of variable isotropic radius c^2),
2z= 1/c^2(x^2+y^2).
On these, the curves c(v) are isotropic loxodromes. Their tangents are contained in a linear line complex with the z-axis as the axis.
This is related to another non-Euclidean interpretation of isotropic CRPC ruled surface (<ref>) and its characteristic
curves c(v): One can define one of the paraboloids (<ref>) as absolute quadric of the projective
model of hyperbolic 3-space. There, (<ref>) is a helicoid and the asymptotic curves c(v) are paths of a hyperbolic helical motion (one parameter group of the group of hyperbolic congruence transformations). It is also well known and easy to see that the hyperbolic helices are projectively equivalent to Euclidean spherical loxodromes <cit.>.
In summary, we have proved the following result.
The asymptotic curves of spiral ruled surfaces (<ref>), distinct from the rulings, lie on isotropic spheres. Viewing one of these isotropic spheres as the absolute quadric
of the projective model of hyperbolic geometry, these surfaces are helicoids and the asymptotic curves are helical paths. The latter are projectively equivalent to Euclidean spherical loxodromes.
§ HELICAL SURFACES
A helical motion through the angle ϕ about the z-axis with pitch h is the composition of the rotation through the angle ϕ about the z-axis and the translation by hϕ in the z-direction. The helical motion is also an isotropic congruence. A surface invariant under the helical motions for fixed h and all ϕ is called helical with pitch h. In particular, for h=0 we get a rotational surface.
(See Fig. <ref>)
An admissible helical surface with nonzero pitch has a constant ratio a ≠ 0 of isotropic principal curvatures, if and only if it is isotropic similar to a subset of one of the surfaces
r_a(u,v) =[ cos v (cos u sin^a u)^{-1/(a+1)}; sin v (cos u sin^a u)^{-1/(a+1)}; v+u+ cot 2u + ((a^2+1)/(a^2-1)) csc 2u ],
if a ≠± 1,
r_c(u,v) =[ ucos v; usin v; clog u + v ],
if a = -1,
where c is an arbitrary constant.
In (<ref>)–(<ref>),
the variable v runs through ℝ.
In (<ref>), u runs through (0,+∞).
In (<ref>), u runs through a subinterval of (0,π/2), where tan^2 u ≠ a.
Since the pitch is nonzero, it can be set to 1 by appropriate scaling along the z-axis. Take the section of the surface by a half-plane bounded by the z-axis. Since the surface is admissible, the section is a disjoint union of smooth curves without vertical tangents. Then one of those curves can be parameterized as z=f(√(x^2+y^2)) for some smooth function f(u) defined in an interval inside the ray u>0. Hence, up to rotation about the z-axis, our surface can be parameterized as
r(u,v) = (ucos v, usin v, f(u)+v).
Then the isotropic Gaussian and mean curvatures are (see <cit.>)
K = (u^3f''(u)f'(u)-1)/u^4, H = (f'(u)+uf''(u))/(2u).
Then the equation H^2/K = (a+1)^2/(4a) is equivalent to
au^2(f'(u)+uf”(u))^2 = (a+1)^2(u^3f”(u)f'(u)-1).
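These expressions for K and H can be spot-checked with a concrete profile; for instance (our check, assuming the standard isotropic graph formulas K=det(Hess z), H=(1/2)tr(Hess z)), for f(u)=log u, i.e. one leaf of the helical surface written as the graph z=log√(x^2+y^2)+atan2(y,x):

import sympy as sp

x, y = sp.symbols('x y', real=True)
u = sp.sqrt(x**2 + y**2)
z = sp.log(u) + sp.atan2(y, x)                 # f(u) = log u, so f' = 1/u, f'' = -1/u**2

K = sp.diff(z, x, 2)*sp.diff(z, y, 2) - sp.diff(z, x, y)**2
H = (sp.diff(z, x, 2) + sp.diff(z, y, 2)) / 2

K_claim = (u**3*(-1/u**2)*(1/u) - 1)/u**4      # (u^3 f'' f' - 1)/u^4
H_claim = (1/u + u*(-1/u**2)) / (2*u)          # (f' + u f'')/(2u)

pt = {x: sp.Rational(2, 3), y: sp.Rational(-5, 4)}
print(sp.simplify((K - K_claim).subs(pt)), sp.simplify((H - H_claim).subs(pt)))   # 0 0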
First let us solve the equation for a= - 1. In this case, f”(u)u+f'(u) =0. Hence f(u) = clog u + c_1 for some constants c and c_1. By performing the isotropic similarity z↦ z-c_1 we bring our surface to form (<ref>).
Assume further that a - 1.
Then (<ref>) is equivalent to (see <cit.>)
((a-1)(u^2 f''(u)+uf'(u))/(2(a+1)))^2- ((uf'(u)-u^2 f''(u))/2)^2=1.
Thus the first fraction here vanishes nowhere (in particular, a ≠ 1). We may assume that it is positive, otherwise change the sign of f and v in (<ref>), leading to just a rotation of the surface through the angle π about the x-axis. Then
the first fraction in (<ref>) can be set to csc(2s(u)) and the second one can be set to cot(2s(u)) for some smooth function s(u) with the values in (0,π/2). Therefore by direct calculations (see <cit.>)
f'(u) = (a cot s(u)+tan s(u))/((a-1) u),
f''(u) = (a tan s(u) + cot s(u))/((a-1) u^2).
Taking the derivative of (<ref>) with respect to u and combining it with (<ref>) we obtain
s'(u) (tan s(u)-a cot s(u))/(a+1)=1/u
(see <cit.>).
In particular, s'(u) ≠ 0 everywhere, hence s(u) has an inverse function u(s).
Integrating both sides of (<ref>) and using that s(u) assumes values in (0,π/2), we get u(s) = c_2(cos s sin^a s)^{-1/(a+1)} for some constant c_2 ≠ 0.
Denote f(s):=f(u(s)). By the chain rule, (<ref>), and (<ref>) we get
f'(s) = f'(u)/s'(u)|_{u=u(s)}= (tan s+a cot s)(tan s-a cot s)/((a-1)(a+1)).
Integrating both sides of (<ref>),
we get f(s) = s + cot 2s + ((a^2+1)/(a^2-1)) csc 2s + c_3 for some constant c_3 <cit.>. The isotropic similarity (x,y,z)↦(x/c_2,y/c_2,z-c_3) brings our surface to form (<ref>) (up to renaming the parameter s to u).
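An end-to-end numerical check of the resulting parametrization (ours) is possible as well: with ρ(s)=(cos s sin^a s)^{-1/(a+1)} as the distance to the axis and g(s)=s+cot 2s+((a^2+1)/(a^2-1))csc 2s as the v-independent part of the last coordinate, the profile z=f(ρ)+v must satisfy the displayed equation a u^2(f'(u)+uf''(u))^2=(a+1)^2(u^3f''(u)f'(u)-1) above. Evaluating at a sample point:

import sympy as sp

s = sp.symbols('s', positive=True)
a = sp.Integer(3)                              # any a != 0, +-1 works for this spot check

rho = (sp.cos(s) * sp.sin(s)**a) ** (-1/(a + 1))
g = s + sp.cot(2*s) + (a**2 + 1)/(a**2 - 1) * sp.csc(2*s)

fp  = sp.diff(g, s) / sp.diff(rho, s)          # f'(rho) via the chain rule
fpp = sp.diff(fp, s) / sp.diff(rho, s)         # f''(rho)

expr = a*rho**2*(fp + rho*fpp)**2 - (a + 1)**2*(rho**3*fpp*fp - 1)
print(expr.subs(s, sp.Rational(7, 10)).evalf())   # ~0 up to numerical precision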
§.§ Geometry of the surfaces
Family (<ref>) is a family of helical isotropic minimal surfaces
joining helicoid (<ref>) and logarithmoid (<ref>) (after appropriate scaling of the z-coordinate). It can be alternatively described as the family of the graphs of the harmonic functions z=Re(Clog (x+iy)) with varying complex parameter C (again, up to isotropic similarity).
§ TRANSLATIONAL SURFACES
Now we present the main result. If α(u) and β(v) are two curves in ℝ^3,
then the surface r(u,v)=α(u)+β(v)
is called the translational surface formed by α(u) and β(v).
(Fig. <ref>)
An admissible translational surface formed by a planar curve α and another curve β has a constant ratio a ≠ 0 of isotropic principal curvatures, if and only if it is isotropic similar to a subset of one of the surfaces
r_a(u,v)
= [ u; v; v^2+a u^2 ],
r_b(u,v)
= [ v+ bcos v; bsin v+(b^2-1)log| b-sin v|+(1-b^2)u; exp u ],
if a ≠ 1
r(u,v)
= [ u+v; log|cos u | - log|cos v|; u ],
if a =-1,
where
we denote
b:=(a+1)/(a-1). In particular, β must be a planar curve as well.
In (<ref>), the variables u, v run through ℝ.
In (<ref>), u runs through ℝ and v runs through an interval where sin v ≠ b (for a<0) and b sin v ≠ 1 (for a>0). In (<ref>),
(u, v) runs through a subdomain of (-π/2,π/2)^2∖{u+v=0}.
The theorem follows from Lemmas <ref>–<ref>, where the following 5 cases are considered:
* α and β are isotropic planar;
* α is isotropic planar and β is non-isotropic planar;
* α and β are non-isotropic planar;
* α is isotropic planar and β is non-planar;
* α is non-isotropic planar and β is non-planar.
In our arguments, we use the expressions for K and H obtained by combining <cit.>. (The convention for the sign of H in <cit.> is different from ours
but this does not affect the equation H^2/K = (a+1)^2/4a of CRPC surfaces.)
Under the assumptions of Theorem <ref>, if α and β are contained in isotropic planes, then the surface is isotropic similar to a subset of (<ref>).
Performing a rotation about a vertical axis, we can take the plane of β to the plane x=0. Since the surface is admissible, α and β cannot have vertical tangents. Thus the curves can be parameterized as α(u) = (u, ku, f(u)) and β(v) = (0, v, g(v)) for some k∈ℝ and some functions f(u) and g(v) defined on some intervals. Then our surface is
r(u,v) = (u, ku + v, f(u)+g(v)).
Then the isotropic Gaussian and mean curvatures are (see <cit.> and <cit.>)
K = f''(u)g''(v), H = ((k^2+1) g''(v)+f''(u))/2.
Since a ≠ 0, it follows that K ≠ 0, hence f''(u)g''(v)≠ 0 for all (u,v) from the domain. Then the equation H^2/K = (a+1)^2/(4a) is equivalent to the following differential equation:
(k^2+1+f”(u)/g”(v))^2 = (a+1)^2/a f”(u)/g”(v).
By (<ref>) we get f”(u)/g”(v) = const. Then f”(u) = const and g”(v) = const.
Thus f(u)+g(v) is a polynomial of degree 2. By Example <ref> our surface is isotropic similar to a subset of (<ref>).
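The curvature formulas used at the start of this proof can be verified symbolically; the following SymPy sketch (ours) treats the surface r(u,v)=(u, ku+v, f(u)+g(v)) as the graph z=f(x)+g(y-kx) and assumes the standard isotropic graph formulas K=det(Hess z), H=(1/2)tr(Hess z):

import sympy as sp

x, y, k = sp.symbols('x y k', real=True)
f, g = sp.Function('f'), sp.Function('g')

z = f(x) + g(y - k*x)                          # the surface (u, ku+v, f(u)+g(v)) as a graph
zxx, zyy, zxy = sp.diff(z, x, 2), sp.diff(z, y, 2), sp.diff(z, x, y)

K = zxx*zyy - zxy**2
H = (zxx + zyy) / 2

fpp = sp.diff(f(x), x, 2)                      # f''(u) with u = x
gpp = sp.diff(z, y, 2)                         # g''(v) with v = y - kx

print(sp.simplify(K - fpp*gpp))                       # 0
print(sp.simplify(2*H - ((k**2 + 1)*gpp + fpp)))      # 0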
One can see the same geometrically. The directions of the two isolines u=const and v=const through each point of a translational surface are conjugate (by the definition in Section <ref>). The top view of the isolines consists of two families of parallel lines. Therefore the top view of the two conjugate directions is the same everywhere. But a pair of conjugate directions and the ratio a ≠ 1 of isotropic principal curvatures are enough to determine the two isotropic principal directions up to symmetry.
By continuity, the top views of the isotropic principal directions are the same everywhere. Consider one isoline v=const and a pair of isolines u=const. These three isolines intersect at two points. The isolines u=const are translations of each other along the isoline v=const. Hence the isotropic normal curvature of the former two isolines is the same at these two points. Since the ratio of isotropic principal curvatures is constant, by the (isotropic) Euler formula (see <cit.>) it follows that the isotropic principal curvatures are also the same. Thus, again by the Euler formula, the isotropic normal curvature of the isoline v=const is the same everywhere. Thus the latter isoline, hence α(u), is a parabolic isotropic circle. Analogously, β(v) is a parabolic isotropic circle, and our surface is a paraboloid.
Under the assumptions of Theorem <ref>, if α and β are contained in an isotropic and a non-isotropic plane respectively, then the surface is isotropic similar to a subset of (<ref>).
Performing an isotropic similarity of the form z↦ z+px+qy we can take the plane of β to the plane z=0. After that, performing a rotation about a vertical axis,
we can take the plane of α to the plane x=0. Since the surface is admissible, α cannot have vertical tangents. Thus it can be parameterized as α(u) = (0, -u, f(u)) for some function f(u). Since the surface r(u,v) = α(u)+β(v) is admissible, β cannot have tangents parallel to the y-axis. Thus it can be parameterized as β(v) = (v, g(v), 0)
for some function g(v). Then the surface is
r(u,v) = (v, -u+g(v), f(u)).
Hence the isotropic Gaussian and mean curvatures are (see <cit.>)
K = f'(u)f''(u)g''(v), H = (f'(u)g''(v)+(1+g'(v)^2)f''(u))/2.
Since a ≠ 0, it follows that f'(u)f''(u)g''(v) ≠ 0. Therefore the equation H^2/K = (a+1)^2/(4a) is equivalent to
((1+g'(v)^2)f''(u)/f'(u)+g''(v))^2 = ((a+1)^2/a)(f''(u)/f'(u))g''(v).
By (<ref>) we get f''(u)/f'(u)=const. Hence f(u) = pe^λ u + q for some constants p,q,λ, where p,λ≠ 0.
Substituting λ for f”(u)/f'(u) in (<ref>), we get (see <cit.>)
g''(v) = (λ/(4a))(a+1 ±√((a-1)^2-4ag'(v)^2))^2.
To solve the resulting ODE, introduce the new variable p=g'(v). Since g''(v) ≠ 0, it follows that the function g'(v) has a smooth inverse v(p) and there is a well-defined composition g(p):=g(v(p)). Substituting g'(v) = p into (<ref>) and using g''(v) = dp/dv = 1/(dv/dp) = p/(dg/dp) for p ≠ 0, we obtain
λdg/dp = 4ap/(a+1 ±√((a-1)^2-4ap^2))^2,
λdv/dp = 4a/(a+1 ±√((a-1)^2-4ap^2))^2.
Clearly, a ≠ 1. Integrating, we obtain (see <cit.>)
λ g(p) =-log| a+1±√((a-1)^2-4ap^2)| - (a+1)/(a+1±√((a-1)^2-4ap^2)) + C_1,
λ v(p) =
((a-1)^2/(4a))[arctan p ∓arctan((a+1) p/√((a-1)^2-4 a p^2))]+
+(a+1) p/(a+1 ±√((a-1)^2-4 a p^2))+C_2
for some constants C_1, C_2.
Denote by w the expression in square brackets in (<ref>) plus ±(a-1)·π/2.
Passing to the new variable w and using the notation b := (a+1)/(a-1),
we get (see <cit.>)
λ g(w) = ((b^2-1)log| b-sin w| + bsin w+C'_1)/(b^2-1),
λ v(w) = (w + bcos w + C'_2)/(b^2-1)
for some other constants C_1' and C_2'. Performing the isotropic similarity
(x,y,z)↦(λ(b^2-1)x-C'_1, λ(b^2-1)y-C'_2,(z-q)/p)
and renaming the parameters u and w to u/λ and v, we bring (<ref>) to form (<ref>).
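The curvature formulas at the beginning of this proof can be checked in the same way (our sketch): the surface r(u,v)=(v,-u+g(v),f(u)) is the graph z=f(g(x)-y), and with the standard isotropic graph formulas K=det(Hess z), H=(1/2)tr(Hess z) one recovers K and H as displayed:

import sympy as sp

x, y = sp.symbols('x y', real=True)
f, g = sp.Function('f'), sp.Function('g')

z = f(g(x) - y)                                # the surface (v, -u+g(v), f(u)) as a graph
zxx, zyy, zxy = sp.diff(z, x, 2), sp.diff(z, y, 2), sp.diff(z, x, y)

K = zxx*zyy - zxy**2
H = (zxx + zyy) / 2

fp   = -sp.diff(z, y)                          # f'(u)  with u = g(x) - y
fpp  = sp.diff(z, y, 2)                        # f''(u)
gp   = sp.diff(g(x), x)                        # g'(v)  with v = x
gpp  = sp.diff(g(x), x, 2)                     # g''(v)

print(sp.simplify(K - fp*fpp*gpp))                        # 0
print(sp.simplify(2*H - (fp*gpp + (1 + gp**2)*fpp)))      # 0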
Under the assumptions of Theorem <ref>, if α and β are contained in non-isotropic planes, then the surface is isotropic similar to a subset of (<ref>).
Similarly to the previous lemma, performing an isotropic similarity of the form z↦ px+qy+rz and a rotation about a vertical axis we take the planes of α and β to the planes z = x and z=0 respectively. The tangent to α cannot be perpendicular to the x-axis at each point,
because otherwise α is a straight line and a=0, contradicting the assumptions of the theorem. Thus α'(u)⊥̸Ox at some point u. By continuity, the same is true in an interval around u. In what follows we switch to an inclusion-maximal interval (u_1,u_2) with this property. Notice that then each endpoint u_k is either an endpoint of the domain of α(u) or there exist finite
lim_u→ u_kα(u) and lim_u→ u_kα'(u)/|α'(u)|⊥ Ox.
On the interval (u_1,u_2), the curve α can be parameterized as α(u) = (u, f(u), u) for some smooth function f(u). Analogously, on a suitable interval (v_1,v_2) the curve β can be parameterized as β(v) = (v, g(v), 0) for some smooth g(v). Therefore a part of our surface can be parameterized as
r(u,v) = (u+v, f(u)+g(v), u).
Thus the isotropic Gaussian and mean curvatures are (see <cit.> and <cit.>)
K = f”(u)g”(v)/(f'(u)-g'(v))^4,
H = ((1+f'(u)^2)g''(v)+(1+g'(v)^2)f''(u))/(2| f'(u)-g'(v)|^3).
Here f''(u)g''(v) ≠ 0 and f'(u)-g'(v) ≠ 0 because a ≠ 0 and the surface is admissible. Hence the equation H^2/K = (a+1)^2/(4a) is equivalent to
((1+f'(u)^2)g”(v)+(1+g'(v)^2)f”(u))^2 = (a+1)^2/af”(u)g”(v)(f'(u)-g'(v))^2.
First let us solve the equation in the case when a=-1 (this was done in <cit.>). In this case,
f”(u)/(1+f'(u)^2) = -g”(v)/(1+g'(v)^2) = c
for some constant c ≠ 0. Hence
f(u) = (1/c)log|cos(cu+c_1)| + c_2 and g(v) = -(1/c)log|cos(cv+c_3)| + c_4
for some constants c_1, c_2, c_3, and c_4. Changing the parameters (u,v) to (u-c_1,v-c_3)/c and performing the isotropic similarity (x,y,z)↦ c(x+c_1+c_3,y-c_2-c_4,z+c_1) we bring (<ref>) to form (<ref>). Notice that there are no points u_1,u_2∈ℝ with finite lim_u→ u_kf(u) and lim_u→ u_k1/(1+f'^2(u))=0; hence the above maximal intervals (u_1,u_2) and (v_1,v_2) coincide with the domains of α(u) and β(v), and (<ref>) actually coincides with the whole given surface.
Now let us prove that for a≠ -1 equation (<ref>) has no solutions with f''(u)g''(v) ≠ 0.
The equation is equivalent to
(1+f'(u)^2)g”(v)+(1+g'(v)^2)f”(u) ±(a+1)√(f”(u)g”(v)/a)(f'(u)-g'(v)) = 0.
We may assume that here we have a plus sign and g''(v)>0, otherwise replace (f(u),g(v)) by sign(g''(v))·(f(± u),g(± v)).
If a=1 then (<ref>) is equivalent to
(√(f”(u)/g”(v))+ f'(u))^2+(g'(v)√(f”(u)/g”(v))- 1)^2 = 0.
Hence f'(u)=-1/g'(v) is constant. Therefore f”(u) = 0, a contradiction.
Assume further a ≠ ± 1.
For fixed v, a solution f(u) of (<ref>) gives a regular curve in the plane with the coordinates
(X,Y) := (√(f”(u)/a), f'(u))
because f”(u)≠ 0. By (<ref>), the curve is contained in the conic
a(1+g'(v)^2)X^2 + (a+1)√(g”(v))XY+ g”(v)Y^2-(a+1)√(g”(v))g'(v)X+ g”(v)=0.
The conic is irreducible because the determinant of its matrix is
-(a-1)^2(g'(v)^2+1)g”(v)^2/4≠ 0
for a≠ 1 (see <cit.>). Since such irreducible conics (<ref>) for distinct v have a common curve, they actually do not depend on v. Thus the ratio of the coefficients at X and XY is constant. Since a -1, it follows that g'(v) = const and g”(v) = 0, a contradiction. Therefore there are no solutions for a -1.
There are no admissible translational surfaces with constant nonzero ratio of isotropic principal curvatures formed by an isotropic planar curve α and a nonplanar curve β.
Assume the converse. Performing a rotation about a vertical axis,
we can take the plane of α to the plane x=0. Since the surface is admissible, α cannot have vertical tangents. Thus it can be parameterized as α(u) = (0, u, f(u)) for some function f(u). Since the surface r(u,v) = α(u)+β(v) is admissible, β cannot have tangents perpendicular to the x-axis. Thus it can be parameterized as β(v) = (v, g(v), h(v)) for some functions g(v) and h(v).
Then the surface is
r(u,v) = (v, u+g(v), f(u)+h(v)).
Therefore the isotropic Gaussian and mean curvatures are (see <cit.>)
K =-f”(u)(f'(u)g”(v)-h”(v)),
H = (f'(u)g''(v)-h''(v)-(1+g'(v)^2)f''(u))/2.
Here f”(u)≠ 0
because a≠ 0. Thus the equation H^2/K = (a+1)^2/(4a) is equivalent to
(1+g'(v)^2+(h''(v)-f'(u)g''(v))/f''(u))^2 = ((a+1)^2/a)(h''(v)-f'(u)g''(v))/f''(u).
Here the right side (without the factor (a+1)^2/a) does not depend on u because equation (<ref>) is quadratic in it with the coefficients not depending on u.
Differentiating the right side with respect to u, we get (see <cit.>)
(g''(v)(f'''(u)f'(u)-f''(u)^2)-h''(v)f'''(u))/f''(u)^2 = 0.
If there exists u such that f”'(u)≠ 0, then h”(v) = const· g”(v), otherwise g”(v)=0 identically. In both cases g”'(v)h”(v)-h”'(v)g”(v) = 0 identically. Hence β is a planar curve, a contradiction.
There are no admissible translational surfaces with constant nonzero ratio of isotropic principal curvatures formed by a non-isotropic planar curve α and a nonplanar curve β.
Assume the converse. Performing an isotropic similarity of the form z↦ px+qy+z we can take the plane of α to the plane z=0. Performing an appropriate rotation with a vertical axis and restricting to sufficiently small parts of our curves,
we may assume that the tangents to α and β are not perpendicular to the x-axis at each point. Then the curves can be parameterized as α(u) = (u, f(u), 0) and β(v) = (v, g(v), h(v)) for some smooth functions f(u), g(v), and h(v).
Therefore a part of our surface can be parameterized as
r(u,v) = (u+v, f(u)+g(v), h(v)).
Thus the isotropic Gaussian and mean curvatures are (see <cit.> and <cit.>)
K = f”(u)h'(v)(g”(v)h'(v)-h”(v)(g'(v)-f'(u)))/(f'(u)-g'(v))^4,
H = -((1+g'(v)^2)f''(u)h'(v)+(1+f'(u)^2)(g''(v)h'(v)-h''(v)(g'(v)-f'(u))))/(2| f'(u)-g'(v)|^3).
Here f''(u),h'(v),f'(u)-g'(v) ≠ 0 because a ≠ 0.
Hence the equation H^2/K = (a+1)^2/(4a) is equivalent to
a((g'(v)^2+1)f”(u)+(f'(u)^2+1) (g”(v)-h”(v)/h'(v)(g'(v)-f'(u))))^2-
-(a+1)^2 (f'(u)-g'(v))^2 f”(u)(g”(v)-h”(v)/h'(v)(g'(v)-f'(u)))=0.
Fix a value of v. A solution f(u) of (<ref>) gives a regular curve in the plane with the coordinates (X,Y) := (f”(u), f'(u))
because f”(u)≠ 0. The curve is disjoint with the line X=0. By (<ref>), the curve is contained in the algebraic curve
a((g'(v)^2+1)X +(Y^2+1) L(Y))^2-(a+1)^2 X (Y-g'(v))^2L(Y)=0,
where
L(Y) := g”(v)-h”(v) (g'(v)-Y)/h'(v).
The expression L(Y) is not a zero polynomial in Y, because otherwise (<ref>) reduces to X=0, whereas the above regular curve is disjoint with the line X=0.
Let us prove that the algebraic curve (<ref>) is irreducible
(where an irreducible curve of multiplicity two is also viewed as irreducible). Indeed, otherwise the left side equals
a(g'(v)^2+1)^2(X-P_1(Y))(X-P_2(Y))
for some complex polynomials P_1(Y) and P_2(Y). Consider (<ref>) as a quadratic equation in X. Then its discriminant D(Y)=a^2(g'(v)^2+1)^4(P_1(Y)-P_2(Y))^2 is the square of a polynomial in Y.
A direct computation gives (see <cit.>)
D(Y)=(a+1)^2 (Y-g'(v))^2L(Y)^2 ·
·((a-1)^2 g'(v)^2-4 a-2 (a+1)^2 g'(v)Y+((a-1)^2-4 a g'(v)^2) Y^2).
Assume a ≠ -1, otherwise the left side of (<ref>) is the square of a linear in X, hence irreducible, polynomial. All factors of D(Y) except the last one are complete squares and not zero polynomials in Y. Hence the last factor, which is at most quadratic in Y, is a square of a polynomial in Y. Hence its discriminant (see <cit.>) 16 (a-1)^2 a (g'(v)^2+1)^2 vanishes. Since a ≠ 0, we get a= 1. Then D(Y) ≤ 0 with the equality only for a finite number of real values of Y. Therefore (<ref>) has only a finite number of real points (X,Y) and cannot contain a regular curve. This contradiction proves that (<ref>) is irreducible.
Since irreducible curves (<ref>) for distinct v contain the same regular curve, they must all coincide. Thus the ratio of the free term and the coefficient at Y in (<ref>) is constant. Hence the ratio of the two coefficients of the polynomial L(Y) is constant. Thus p(h'(v)g”(v)-h”(v) g'(v))-qh”(v)=0 for some constants p and q not vanishing simultaneously. Therefore ((pg'(v)+q)/h'(v))' = 0, hence pg'(v)+q=r h'(v) and
pg(v)+qv+rh(v)+s=0 for some constants r and s. Thus the curve β(v)=(v,g(v),h(v)) is planar, a contradiction.
§ DUAL-TRANSLATIONAL SURFACES
§.§ Isotropic metric duality
The principle of duality is a crucial concept in projective geometry. For example, in projective 3-space, points are dual to planes and vice versa, straight lines are dual to straight lines, and inclusions are reversed by the duality.
In contrast to Euclidean geometry, isotropic geometry possesses a metric duality. It is defined as the polarity with respect to the unit isotropic sphere, which maps a point P=(p_1, p_2, p_3) to the non-isotropic plane P^* with the equation z=p_1 x+p_2 y-p_3, and vice versa. For two
points P and Q
at isotropic distance d, the dual planes P^* and Q^* intersect at the isotropic angle d.
The latter is defined as the difference between the slopes of the two lines obtained in a section of P^* and Q^* by an isotropic plane orthogonal to the line P^*∩ Q^*.
The following properties of the metric duality are straightforward. Parallel points, defined as points having the same top view, are dual to parallel planes. Two non-parallel lines in a non-isotropic plane are dual to two non-parallel lines in a non-isotropic plane. Two parallel lines in a non-isotropic plane are dual to two non-parallel lines in an isotropic plane.
The dualΦ^* of an admissible surface Φ is the set of points dual to the tangent planes of Φ. If Φ is the graph of a smooth function f, then the tangent plane at the point (x_0, y_0, f(x_0, y_0)) is
z = xf_x(x_0, y_0) + yf_y(x_0, y_0) -(x_0f_x(x_0, y_0) + y_0f_y(x_0, y_0)-f(x_0, y_0)).
Hence Φ^* is parameterized by
x^*(x,y)=f_x(x, y), y^*(x,y)=f_y(x, y), z^*(x,y)=x f_x+y f_y-f .
If Φ has parametric form (x(u,v), y(u,v), z(u,v)), then Φ^* is parameterized by
x^*(u,v) =(y_u z_v-y_v z_u)/(x_v y_u-x_u y_v), y^*(u,v)=(x_u z_v-x_v z_u)/(x_u y_v-x_v y_u), z^*(u,v)=x x^*+y y^*-z.
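As a toy example (ours), the unit isotropic sphere 2z=x^2+y^2 is fixed by the duality, in accordance with its definition as the polarity with respect to this quadric; with the graph formulas above,

import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (x**2 + y**2) / 2                          # the unit isotropic sphere 2z = x^2 + y^2

xs, ys = sp.diff(f, x), sp.diff(f, y)          # x*, y*
zs = x*xs + y*ys - f                           # z*
print(xs, ys, sp.simplify(zs - (xs**2 + ys**2)/2))   # x y 0, i.e. the dual again satisfies 2z* = x*^2 + y*^2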
It is important to note that Φ^* may have singularities that correspond to parabolic points of Φ, where K=0, and doubly-tangent planes. This duality relationship is reflected in the following expressions that relate the isotropic curvatures of dual surfaces, as shown in <cit.>: H^*=H/K and K^*=1/K.
Thus the dual of an isotropic CRPC surface is again an isotropic CRPC because (H^*)^2/ K^* = H^2/K. The classes of rotational, parabolic rotational, ruled, and helical CRPC surfaces are clearly invariant under the duality. Each surface (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) is isotropic similar to its dual.
Two surfaces (<ref>) are isotropic similar to the duals of each other.
More properties of the metric duality can be found in <cit.>.
§.§ Dual-translational isotropic CRPC surfaces
For the translational surfaces, the duality leads to a new type of surfaces: the ones with a conjugate net of isotropic geodesics. A curve on a surface is an isotropic geodesic if its top view is a straight line segment. Two families of curves form a conjugate net if any two curves from distinct families intersect and their directions at the intersection point are conjugate (see the definition in Section <ref>).
(See Fig. <ref>)
The dual surfaces of (<ref>) and (<ref>) are up to isotropic similarity
respectively
r_b^*(u,v)
= exp u
[ cos v/(b-sin v); 1; (b-b^3+v cos v)/((b^2-1)(b-sin v))-log| b-sin v|+u ],
r^*(u,v)
= (1/(tan u+tan v))[ tan v; 1; log|cos v/cos u|-u tan u+v tan v ].
They have a constant ratio (equal to (b-1)/(b+1) and -1 respectively) of isotropic principal curvatures and
possess a conjugate net of isotropic geodesics.
The domain of maps (<ref>) and (<ref>) is a subset of the domain of (<ref>) and (<ref>), where the maps are injective; see Remark <ref>.
The proposition is proved by direct calculation with the help of (<ref>) (see <cit.>).
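For the stated ratios, note that with b=(a+1)/(a-1) one has (b-1)/(b+1)=((a+1)-(a-1))/((a+1)+(a-1))=2/(2a)=1/a; and since (H^*)^2/K^*=H^2/K, the constant ratio of the dual of a CRPC surface can only be a or 1/a, consistent with the values given above.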
We still have to show that duals to translational surfaces possess a conjugate net of isotropic geodesics.
At all points of a curve u=const on a translational surface, the tangents to the curves v=const are parallel
and form a general cylinder.
Hence all tangent planes along the curve u=const are parallel to one line. Then the duals of those planes form a section of the dual surface by an isotropic plane, which is an isotropic
geodesic. The same is true for the tangent planes along each curve v=const. Since curves u=const and v=const form a conjugate net on the translational
surface and any projective duality maps conjugate
tangents to conjugate tangents, the metric dual to a translational
surface possesses a conjugate net of geodesics.
Surfaces with a conjugate net of Euclidean geodesics have been determined by A. Voss <cit.>. The conjugate
net of geodesics is remarkably preserved by a one-parameter
family of isometric deformations. The conjugate nets
of geodesics are reciprocal parallel to the asymptotic nets
of surfaces with constant negative Gaussian curvature. Discrete
versions of Voss nets are quad meshes with planar faces that are flexible when the faces are rigid and the
edges act as hinges. We refer the reader to R. Sauer <cit.>.
Analogous properties hold for the isotropic counterparts of Voss surfaces if one defines
an isometric deformation in isotropic space as one which
preserves the top view and isotropic Gauss curvature. We
will report on this and related topics in a separate
publication.
§ OPEN PROBLEMS
Following the general philosophy discussed in Section <ref>, one can try to apply the methods developed for the classification of helical and translational isotropic CRPC surfaces in Sections <ref>–<ref> to their analogs in Euclidean geometry; cf. <cit.>.
The case of translational surfaces generated by two spatial curves remains open in both geometries.
It is natural to extend the search for CRPC surfaces to other Cayley–Klein geometries <cit.>.
The transition from Euclidean to pseudo-Euclidean (Minkowski) geometry is not expected to lead to
significant differences, but more case distinctions. It is relative differential geometry with respect
to a hyperboloid. We may return to this topic in future research if it appears to be rewarding.
§ SINGULARITIES IN THE TOP VIEW
For the classification of ruled CRPC surfaces (see Section <ref>), we need the following two well-known lemmas from singularity theory, which we could not find a good reference for.
If R_t is an analytic family of lines in the plane, then for all t in some interval I the lines R_t satisfy one of the following conditions
(i) they have a common point;
(ii) they are parallel;
(iii) they touch one regular curve (envelope)
forming a part of the boundary of the union ⋃_t∈ IR_t.
Assume without loss of generality that all R_t are not parallel to the y axis
for t in some interval I_1.
Let L(x,y,t):=y+a(t)x+b(t)=0 be the equation of the line R_t.
For each n=1,2,… consider the subset Σ_n of ℝ^2× I_1 given by
L(x,y,t)=(∂/∂ t)L(x,y,t)=…=(∂^n/∂ t^n)L(x,y,t)=0 but (∂^{n+1}/∂ t^{n+1})L(x,y,t) ≠ 0, and also the subset Σ_∞ given by (∂^n/∂ t^n)L(x,y,t)=0 for all n=0,1,… .
By the analyticity, we may restrict to a subsegment I_2⊂ I_1 such that
no connected component of Σ_n is contained in a plane of the form ℝ^2×{t}.
Consider the following 3 cases.
Case (i): Σ_∞ ≠ ∅. Then take a point (x,y,t)∈Σ_∞. By the analyticity L(x,y,t)=0 for all t. Hence (x,y) is a point common to all R_t.
Case (ii): Σ_∞=Σ_n=∅ for all n. Then the system L(x,y,t)=(∂/∂ t)L(x,y,t)=0 has no solutions, i.e., a'(t)=0 and b'(t) ≠ 0 everywhere. Hence a(t)=const, b(t) is monotone, and all R_t are parallel.
Case (iii): Σ_n ≠ ∅ for some n. Then take a point (x,y,t)∈Σ_n and consider the map G(x,y,t):= (L(x,y,t),(∂^n/∂ t^n)L(x,y,t)); this generalizes the argument from <cit.>, where n=1.
Let us show that the top view (projection to the xy-plane along the t-axis) of G^-1(0,0) is the required curve (envelope). Since ∂ L/∂ t=0, ∂^{n+1} L/∂ t^{n+1} ≠ 0, and ∂ L/∂ y ≠ 0, it follows that the differential dG is surjective. Then by the Implicit Function Theorem, the intersection of G^-1(0,0) with a neighborhood of (x,y,t) is a regular analytic curve with the tangential direction (dx,dy,dt) given by
(∂ L/∂ t)dt+(∂ L/∂ x)dx+(∂ L/∂ y)dy=(∂^{n+1} L/∂ t^{n+1})dt+(∂^{n+1} L/∂ t^n∂ x)dx+(∂^{n+1} L/∂ t^n∂ y)dy=0;
cf. <cit.>. Since ∂ L/∂ t=0 and ∂^{n+1}L/∂ t^{n+1} ≠ 0, it follows that the top view of the curve is a regular analytic curve tangent to L_t. Since no component of Σ_n is contained in the plane ℝ^2×{t}, it follows that the top view is tangent to L_t for each t
in a sufficiently small interval I_3⊂ I_2.
The resulting envelope cannot be a straight line, otherwise, we have case (i). Then it has a non-inflection point. Then for a sufficiently small I⊂ I_3, a part of the curve is contained in the boundary of the union ⋃_t∈ IR_t.
Assume that an analytic curve in ℝ^3
has a vertical tangent at a point O and does not coincide with the tangent.
Then the limit of the top view of the tangent at a point P tending to O coincides with the top view of the limit of the osculating plane at a point P tending to O. In particular, both limits exist.
Let r(t) be the arclength parametrization of the curve with r(0)=O. Since the curve is not a vertical line, r'(t) is not constant. Hence by the analyticity r'(t)=r'(0)+t^na(t) for some integer n≥ 1 and a real analytic vector-function a(t) such that a(0) ≠ 0.
Since | r'(t)|=const, it follows that a(0)⊥ r'(0), i.e. a(0) is horizontal.
The top view of the tangent at P=r(t), where t ≠ 0 is small enough, is parallel to the top view of a(t) because r'(0) is vertical. Hence the limit of the former is parallel to a(0).
The osculating plane at P=r(t), where t ≠ 0 is small enough, is parallel to the linear span of r'(t)=r'(0)+t^na(t) and r''(t)/t^{n-1}=na(t)+ta'(t). As t→ 0, the
latter tends to the span of r'(0) and na(0), with the top view again parallel to a(0).
§.§ Acknowledgements
This research has been supported by KAUST baseline funding (grant BAS/1/1679-01-01).
§.§ Statements and Declarations
Conflict of interests. The authors have no relevant financial or non-financial interests to disclose.
|
http://arxiv.org/abs/2307.07607v2 | 20230714200514 | SubT-MRS: A Subterranean, Multi-Robot, Multi-Spectral and Multi-Degraded Dataset for Robust SLAM | [
"Shibo Zhao",
"Tianhao Wu",
"YuanJun Gao",
"Damanpreet Singh",
"Rushan Jiang",
"Haoxiang Sun",
"Jay Karhade",
"Ian Higgins",
"Chuck Whittaker",
"Lucas Nogueira",
"Tingting Da",
"Mansi Sarawata",
"Can Xu",
"Jiahe Xu",
"He Yao",
"Sourojit Saha",
"Yuheng Qiu",
"Chen Wang",
"Wenshan Wang",
"Sebastian Scherer"
] | cs.RO | [
"cs.RO"
] |
SubT-MRS: A Subterranean, Multi-Robot, Multi-Spectral and Multi-Degraded Dataset for Robust SLAM

August 12, 2023

[Figure: Ground truth dense, high-quality reconstruction map and ground truth trajectory generation]
In recent years, significant progress has been made in the field of simultaneous localization and mapping (SLAM) research. However, current state-of-the-art solutions still struggle with limited accuracy and robustness in real-world applications.
One major reason is the lack of datasets that fully capture the conditions faced by robots in the wild. To address this problem, we present SubT-MRS, an extremely challenging real-world dataset designed to push the limits of SLAM and perception algorithms.
SubT-MRS is a multi-modal, multi-robot dataset collected mainly from subterranean environments having multi-degraded conditions including structureless corridors, varying lighting conditions, and perceptual obscurants such as smoke and dust. Furthermore, the dataset packages information from a diverse range of time-synchronized sensors, including LiDAR, visual cameras, thermal cameras, and IMUs captured using varied vehicular motions like aerial, legged, and wheeled, to support research in sensor fusion, which is essential for achieving accurate and robust robotic perception in complex environments. To evaluate the accuracy of SLAM systems, we also provide a dense 3D model with sub-centimeter-level accuracy, as well as accurate 6DoF ground truth. Our benchmarking approach includes several state-of-the-art methods to demonstrate the challenges our datasets introduce, particularly in the case of multi-degraded environments.
§ INTRODUCTION
Robustness is essential for SLAM systems to ensure accurate and reliable estimates despite the noise, errors, uncertainties, and unexpected events. This is especially critical in complex field robotic applications, where SLAM provides precise maps and localization for tasks such as collaborative exploration, multi-robot search and rescue, and autonomous off-road driving.
Despite significant advancements in state-of-the-art SLAM algorithms such as VINS-MONO, LIO-SAM, and ORB-SLAM3<cit.>, they struggle to perform well in real-time online missions due to the lack of good training datasets. When evaluated on datasets that do not capture real-life SLAM challenges, algorithms become biased towards underchallenging environments, resulting in poor performance in real-life scenarios. Therefore, it is crucial to create a dataset that can capture and emulate the real-life challenges faced by SLAM algorithms, such as sensor degradation, perceptual obscurants, and weather changes.
In recent years, several datasets have been developed to achieve this. For example, handheld datasets such as <cit.> present indoor-outdoor scenarios with illumination changes, but are limited to a single kinematic profile, walking, and do not incorporate challenges of high-speed car-like motion. On the other hand, datasets like EuRoC-MAV<cit.> and UZH-FPV<cit.> provide visual-inertial datasets for drones with high-speed, holonomic motion, but they lack multiple sensors and are only suitable for well-lit areas. While KITTI, TartanAir, and Hilti Challenge<cit.> include more than two sensors and sensor-degraded scenarios, they lack multiple kinematic profiles. Additionally, most leading datasets only contain single-robot datasets and do not support research in multi-robot SLAM. A tabular comparison for these datasets can be seen in Table<ref>.
To address these limitations, we present our dataset, Subterranean, Multi-Robot, Multi-Spectral-Inertial, and Multi-Degraded (SubT-MRS), which has the following characteristics:
* Multiple Modalities: Our dataset packages hardware time-synchronized data from 4 RGB cameras, 1 LiDAR, 1 IMU, and 1 thermal camera.
* Diverse Scenarios: Our dataset has been collected at multiple locations with varying environmental setups, such as indoors, outdoors, mixed indoor-outdoor, underground, off-road, buildings, etc.
* Multi-Degraded: Owing to the fact that our dataset has 4 different modalities and multiple environmental scenarios, we have multiple permutations of sensor degradation, ranging from single sensor failure to multi-sensor failure induced by perceptual obscurants like fog, snow, smoke, illumination changes, etc, as can be seen in Figure<ref>.
* Heterogeneous Kinematic Profiles: Our dataset is the first that includes time-synchronized sensor data from multiple holonomic and non-holonomic vehicles with different speed ranges. We have collected data using multiple RC cars, a legged robot, drones, and a handheld setup.
* Multi-Robot Data: Our dataset is the first one to provide time-synchronized multi-robot data using multiple vehicles with heterogeneous kinematic profiles.
Based on the listed features, our dataset represents a significant advancement over existing SLAM datasets, as it encompasses a range of sensor modalities, kinematic profiles, and sensor degradations. A key feature of our dataset is the availability of data for multiple kinematic profiles within a single scene, including diverse weather conditions. This breadth of scenarios allows for close emulation of real-world conditions and provides a formidable challenge for any SLAM algorithm.
Here on, research work related to the dataset is presented in Section <ref>, followed by details about the hardware aspect of the dataset in Section <ref>, including discussions on the sensor payload, calibrations, and time synchronizations. Subsequently, Section <ref> discusses the dataset's features, while Section <ref> details the procedure for generating ground truth maps and trajectories. Finally, Section <ref> evaluates various SLAM algorithms on our dataset and provides a conclusion by discussing the results and findings.
§ RELATED WORK
Over the past decade, numerous SLAM datasets have emerged, aiming to replicate various real-life scenarios faced by SLAM algorithms such as sensor degradation, vehicular motion, and weather changes. Most of these datasets focus on visual-inertial data, reflecting the SLAM community's interest in visual-inertial SLAM due to the ease of setting up a visual-inertial sensing system. This trend is evident in state-of-the-art SLAM algorithms such as VINS-MONO, ORB-SLAM3, and LIO-SAM<cit.>.
The TUM-VI dataset<cit.> is a popular indoor-outdoor visual-inertial dataset, collected on a custom sensor deck made of aluminum bars. It is a challenging dataset due to the presence of illumination changes in the indoor-outdoor transitions but lacks any high-speed motions. The UMA-VI dataset<cit.> is also another handheld indoor-outdoor visual-inertial dataset collected in low-textured and dynamic illuminated environments. The EuRoC MAV dataset<cit.> provides indoor SLAM data onboard a drone. The motion blur induced by the fast holonomic motion of the drone provides a good challenge for SLAM algorithms. This dataset has been appreciated a lot by the SLAM community but lacks outdoor data. UZH-FPV drone racing dataset<cit.> incorporates the benefits of both TUM-VI and EuRoC by providing an indoor-outdoor drone dataset with aggressive motions. The Rosario dataset<cit.> is an example of visual-inertial data in non-holonomic motion as it records data onboard a wheeled robot but is only limited to agricultural fields.
SLAM algorithms relying merely on visual and inertial data tend to fail in environments with low visual texture, which is why many recent SLAM datasets include data from multiple sensing modalities. The KITTI dataset<cit.> provides extensive autonomous driving data in the form of outdoor LiDAR-visual-inertial data collected onboard a car along with a GPS-based ground truth trajectory, making it the most popular benchmark in SLAM. However, it cannot be extended to indoor SLAM challenges and does not include fast motion, sensor degradation, or challenging environments. The Hilti SLAM dataset<cit.> attempts to resolve this by including indoor-outdoor LiDAR-visual-inertial datasets captured on their custom-built sensor stick and poses a good challenge due to the presence of dynamic illumination and featureless spaces, but the difficulty plateaus due to the absence of fast motion, additional kinematic profiles, and perceptual obscurants. TartanAir<cit.> also provides an abundant LiDAR-visual-inertial-stereo dataset but is far from real-life SLAM challenges, as the data is simulation-based and the motion kinematic profile is undefined.
We can see that all these state-of-the-art benchmarks for SLAM have saturated due to one or more of the following reasons:
* Lack of enough sensor degradation caused by perceptual obscurants like smoke, fog, and structureless environments.
* Lack of multiple motion kinematic profiles. Different motion patterns induce different motion uncertainties and make the data more challenging.
* Lack of fast motion. Fast motion induces disruptions in sensory data, such as skew and blur, which adds further challenge.
Our dataset is able to tackle these issues by having multiple sensors, diverse kinematic profiles, perceptual obscurants, and diverse environment and weather conditions.
There are only two peer-reviewed multi-robot SLAM (MR-SLAM) datasets available, both collected in controlled environments, limiting their ability to emulate real-life SLAM challenges. The UTIAS MR-CLAM dataset<cit.> used five identical slow-moving ground vehicles with monocular camera data, while the AirMuseum dataset<cit.> used three ground vehicles and a drone with stereo visual-inertial data. Although the AirMuseum dataset added a holonomic robot and was collected in a larger area, both datasets lack diversity in vehicular motion kinematics and fail to fully represent real-life SLAM challenges. Our dataset presents a significant improvement over these because of a bigger sensor stack per robot of LiDAR-visual-thermal-inertial data and diverse kinematic profiles with RC cars, legged robots, and drones.
§ HARDWARE
§.§ Sensor Payload
The sensor payload used for our dataset collection can be seen in Figure<ref>. This payload was designed by the Explorer team during the DARPA Subterranean Challenge and has been assembled rigidly with the purpose of protecting the sensors from external impacts and preventing any internal vibrations. Our sensor payload is equipped with 4 Leopard Imaging RGB monocular cameras, 1 Velodyne puck, 1 Epson M-G365 IMU, and 1 FLIR Boson thermal camera. The payload has an NVIDIA Jetson AGX Xavier as its onboard computer. The specifics for these components can be seen in Table <ref>.
§.§ Time Synchronization of Sensors
Time synchronization plays a critical role in any multi-sensor system and we achieve that using the “pulse per second (PPS)” technique. All the sensors sync to the CPU clock on the onboard computer, as can be seen in Figure <ref>. The IMU, LiDAR, and thermal camera directly use the CPU clock, whereas the 4 RGB cameras are synchronized using an FPGA board. Our experiments revealed that the time synchronization gap between any two sensors is not greater than 3ms.
§.§ IMU Calibration
We use the M-G365 inertial sensor[https://global.epson.com/products_and_drivers/sensing_system/imu/g365/https://global.epson.com/products_and_drivers/sensing_system/imu/g365/] on our platform and calibrate it to reduce bias instability and drift. We employ an Allan variance<cit.> based tool[https://github.com/gaowenliang/imu_utilshttps://github.com/gaowenliang/imu_utils]
to estimate the white noise angle random walk and bias instability for both the gyroscope and accelerometer data.
To this end, we collect a 1-hour IMU data sequence on a flat and stationary surface, consistent with other datasets like <cit.>. The same is made available in our dataset to the user. The IMU-LiDAR calibration is done using CAD models, the data for which will be made available to the user.
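As a rough illustration (not the imu_utils tool itself), the following Python sketch computes a non-overlapping Allan deviation curve from one axis of such a static gyroscope recording; the sampling rate, axis selection, and data loading are assumptions made for the example. The angle-random-walk (white noise) region shows up as the slope of -1/2 and the bias-instability region as the flat part of the log-log curve.

import numpy as np

def allan_deviation(omega, fs, taus):
    # omega: 1-D array of gyro rate samples [rad/s] for one axis
    # fs: sampling rate [Hz]; taus: averaging times [s] to evaluate
    out = []
    for tau in taus:
        m = int(tau * fs)                      # samples per averaging bin
        if m < 1 or 2 * m > omega.size:
            out.append(np.nan)
            continue
        bins = omega[: (omega.size // m) * m].reshape(-1, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(bins) ** 2)))
    return np.array(out)

# Example (assumed 200 Hz static log):
# taus = np.logspace(-2, 3, 60); adev = allan_deviation(gyro_x, 200.0, taus)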
§.§ RGB Camera Calibration
We use an open-source calibration toolbox, Kalibr[https://github.com/ethz-asl/kalibrhttps://github.com/ethz-asl/kalibr] for the intrinsic and extrinsic calibration of the RGB camera. The camera is extrinsically calibrated to the IMU. For this purpose, we use a 7×9 checkerboard, the omnidirectional camera model, and the radial-tangential distortion model. A 60s random-motion video is used for the calibration, which will be provided to the user along with other parameters.
§.§ Thermal Camera Calibration
Calibrating a thermal camera follows a procedure similar to that of an RGB camera, with the addition of an image processing task that requires obtaining a thermal calibration image with good contrast. This can be challenging, but we achieved it by heating a 7×9 chessboard under sunlight and feeding an inverted image from the recording into the Kalibr toolbox. Like the RGB camera, the thermal camera is also extrinsically calibrated to the IMU. This process uses a 60s random-motion clip, which we provide to the user along with other essential parameters.
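A minimal OpenCV sketch of the inversion step described above (the file name and inner-corner count are placeholders, and this is not the authors' exact pipeline): the heated board is inverted so that it presents the dark/light pattern the standard chessboard detector expects before the frames are passed to Kalibr.

import cv2

PATTERN = (6, 8)  # inner-corner count assumed for a 7x9-square board

frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
inverted = cv2.bitwise_not(frame)                                # 255 - pixel value
found, corners = cv2.findChessboardCorners(inverted, PATTERN)
if found:
    corners = cv2.cornerSubPix(
        inverted, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))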
§.§ Extrinsic Calibration for Multiple Robots
Collaborative tasks require multiple robots to operate in a common frame of reference. Our procedure to create this involves the usage of Generalized-ICP (GICP)<cit.> and is two-fold. In the first step, one robot shares its map from a feature-rich location to a base station. In the second step, the remaining robots take turns solving a GICP, in the same feature-rich location as the first robot and obtain a frame transformation from its own to the first robot. All the robots end up in a common frame after this procedure and build their map further using Super Odometry<cit.>. We will provide a graphic and mathematical description of this process in the supplemental material.
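The sketch below illustrates the second step with Open3D; it is only a stand-in for the team's GICP-based pipeline (plain point-to-plane ICP is used here, and the file names are hypothetical). Robot k aligns a scan taken in the same feature-rich area against the map shared by robot 1 and keeps the resulting transform as its correction into the common frame.

import numpy as np
import open3d as o3d

shared_map = o3d.io.read_point_cloud("robot1_shared_map.pcd")    # broadcast by robot 1
local_scan = o3d.io.read_point_cloud("robotk_local_scan.pcd")    # robot k, same area
for cloud in (shared_map, local_scan):
    cloud.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))

result = o3d.pipelines.registration.registration_icp(
    local_scan, shared_map, 1.0, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

T_r1_rk = result.transformation   # expresses robot k's frame in robot 1's (common) frame
# Robot k pre-multiplies its subsequent odometry poses by T_r1_rk before mapping further.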
§ DATASET
Our dataset was recorded in the form of ROS bags, in diverse locations including rural and urban areas, and structured and unstructured sites, to provide varying levels of challenge for modern SLAM algorithms. The data contains multi-modal and sensor-degraded data and was recorded on multiple robots with varying kinematic profiles. The locations range from university campuses to caverns, buildings, and off-road areas (as shown in Figure <ref>). Here we briefly discuss our dataset in the following sub-sections. The detailed specifications of our dataset are further listed and discussed comprehensively in the supplementary material.
§.§ Multi Modal Dataset
Our dataset incorporates data from RGB cameras, thermal camera, LiDAR, and IMU, making it a multi-modal dataset. The majority of our dataset incorporates multi-modal data, recorded on different vehicles like the RC car, legged robot, and drones. This dataset has been collected in varied locations like caves, buildings, and university campus areas.
§.§ Multi Robot Dataset
Multi-robot SLAM (MR-SLAM) has garnered increasing attention in recent years due to its immense range of applications, from warehouse management and autonomous truck fleets <cit.> to search and rescue <cit.>.
To this end, a lot of focus is being put on building systems such as <cit.> to tackle the problem of MR-SLAM. This naturally warrants a reliable multi-robot dataset, yet, as is visible from <ref> none of the publicly available datasets provide detailed multi-robot data. To enable research and development of MR-SLAM systems, we are the first ones to provide time and frame-synchronized sensor data from 3 RC cars and a legged robot in a diverse environment. This data has been collected in the campus area, hospital building, and caves.
§.§.§ Synchronization
To get the best performance out of a multi-robot system, it is necessary to have all the robots operate in the same time and world frame. The time across all the robots is synchronized with respect to a common base station clock over Secure Shell. All the robots are aligned to the world frame using the calibration techniques explained in section <ref>.
§.§ Multi Degraded Dataset
Subt-MRS includes different sensor degradation categorized as visually degraded, geometrically degraded, and simultaneously degraded.
In the following sections, we discuss the different types of sensor degradation in our dataset.
§.§.§ Visually Degraded
Visual degradation happens in areas with low visual textures like low-lit areas, smoke/fog, etc.
In this dataset, we collect the data from subterranean and indoor environments, e.g., corridors.
* Low light and Darkness: Our dataset includes several sequences in dim environments from the hospital building as well as the caves.
The images A-F in Figure <ref> provide a glimpse of the environment.
* Smoke and Dust: Our dataset includes runs in smoke-filled areas in the hospital and caves. The images M-N in figure <ref> show snapshots from these runs.
§.§.§ Geometrically Degraded
Geometric degradation occurs when LiDAR odometry becomes unconstrained, either because the environment lacks structural features such as planes, points, and lines, or because the LiDAR range falls short in large parking areas and long corridors.
* Long corridor
In a featureless long corridor environment, the LiDAR range falls short. In this situation, the LiDAR sensor cannot constrain the estimation in the forward/backward direction. The image E from the hospital in Figure <ref> shows an example of such environments.
* Stairs
LiDAR odometry tends to drift in staircases: the fixed mounting of the LiDAR sensor on the platform limits the features it can observe, as shown in Figure <ref>, which results in drift along the z-axis. We have collected stair data on the university campus as well as in the hospital. (Figure <ref>C,G)
§.§.§ Simultaneously Visually and Geometrically Degraded
Simultaneous visual and geometric degradation occurs if we combine the aforementioned scenarios. Such data can be useful for analyzing the robustness of multi-modal SLAM algorithms.
* Dark corridor
Long, dark corridors as shown in images A, E, and F of Figure <ref> are a very general example of visual-geometric degradation.
* Dark stairs
As shown in image C of Figure <ref>, we have runs of the legged robot on dark staircases with the LED lights turned on.
* Snowy stairs
Data was also collected in snowy environments, as can be seen in images H, K, and L of Figure <ref>. Snow leads to visual degradation due to loss of RGB texture because of white-capped environments. Additionally, snowflakes can show up as spoof point features for LiDAR odometry, leading to drift.
§ GROUND TRUTH MAP
§.§ Instruments
In our approach to creating a ground truth map, we utilize a combination of surveying techniques and advanced measurement tools. Specifically, the Leica Viva Total Station 15A is used to provide accurate distance and angle measurements. The Leica Mini 360 Prism is utilized to reflect the laser back to the Total Station with high precision, and the robotic aiming feature is employed to ensure consistency and reliability in unlit hallways and interior spaces. Additionally, we use the FARO Focus 3D S120 for precise dense point cloud measurements, with its high accuracy obtained through a laser ranger. Finally, checkerboard fiducial targets are used in select areas to supplement the software used for matching and alignment.
§.§ Data Collection
This study performs ground truth modeling at two locations in Pittsburgh, PA, USA: an abandoned hospital and the Carnegie Mellon University campus. The hospital survey utilizes a Total Station survey instrument with eye bolts installed at strategic locations to measure points around a large loop. The loop closure algorithm yields an error of approximately 3.4 cm, and the model covers an area of roughly 350 meters by 350 meters, including several buildings and the surrounding landscape.
On the other hand, the Carnegie Mellon University campus survey focuses on four buildings near Newell-Simon Hall and utilizes 79 FARO scans to generate a model with an expected accuracy of better than 2 cm. The survey covers an area of about 200 meters by 200 meters and results in a highly accurate model.
The dataset generated from these surveys exceeds 750 GB of data, providing a valuable resource for various research applications. The ground truth maps for all the datasets will be provided to the users. A glimpse of these ground truth maps is provided in the supplementary material.
§.§ Ground Truth Trajectory
§.§.§ Generation
Ground truth trajectories are generated for all datasets, using Super Odometry <cit.> with a modified Laser Odometry algorithm. As shown in Figure <ref>, the ground truth point cloud is pre-processed using a feature extraction module that assesses “local linearity, planarity, and curvatures of geometric feature” <cit.>, with the extracted geometric features stored. As new LiDAR scans arrive, a subset of the ground truth points is selected from the stored features based on the current pose and used as the reference for Iterative Closest Point (ICP). The resulting ICP output is integrated into the normal Super Odometry procedures. Visual Odometry is employed to help constrain pose estimation in datasets with degraded geometry, such as long corridor environments.
§.§.§ Evaluation
In this study, we assess the accuracy of the created trajectories by employing Map Analysis[https://github.com/subtchallenge/map_analysishttps://github.com/subtchallenge/map_analysis], an open-source tool that calculates the map deviation utilizing both the integrated point cloud and the ground truth map. The modified ground truth trajectories provide both the trajectory and the integrated point cloud. To quantify the map deviation, we introduce the distance metric d_i^j, defined as the euclidean distance between point i and its corresponding point j in the ground truth map. We then identify the set of outlier points using the deviation threshold of 1.0m specified in Equation <ref>.
Outlier Points: d_i^j > 1.0m
The map deviation is computed by dividing the number of outlier points by the total number of points, as shown in Equation <ref>.
σ = N_Outlier Points/N_Total Points
To ensure the validity of the generated ground truth trajectory, we require that the map deviation of its integrated point cloud is less than 1%. This threshold value accounts for the possibility that some LiDAR scan points may not be covered by the ground truth map. Notably, we observe that all the identified outliers originate from rooms that lie outside the coverage area of the ground truth map for the long corridor. After excluding the points in the rooms outside the coverage area of the ground truth map for the long corridor, our trajectory generation result achieved a map deviation of less than 0.1%.
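For reference, the metric can be reproduced in a few lines of Python (a hypothetical re-implementation; the numbers reported in this paper come from the open-source map_analysis tool). Both clouds are assumed to be Nx3 arrays expressed in the ground-truth frame.

import numpy as np
from scipy.spatial import cKDTree

def map_deviation(estimated_xyz, ground_truth_xyz, threshold=1.0):
    """Fraction of estimated points farther than `threshold` metres from the GT map."""
    d, _ = cKDTree(ground_truth_xyz).query(estimated_xyz, k=1)   # d_i^j to nearest GT point
    return np.count_nonzero(d > threshold) / len(estimated_xyz)

# A generated trajectory is accepted when map_deviation(cloud, gt) < 0.01, i.e. under 1%.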
§ CHALLENGING RESULTS AND FINDINGS
§.§ Results
The Subt-MRS dataset provides a comprehensive framework for evaluating the performance of different SLAM algorithms. To analyze the limitations of current state-of-the-art SLAM methods, we conduct extensive evaluations on various LiDAR-only, LiDAR-Inertial, and LiDAR-Visual-Intertial SLAM algorithms using our dataset. To ensure diversity in our analysis, we select three runs collected by drones (Canary, DS3, DS4) and three runs collected by ground robots (R1, R2, R3) in a distinct section of the subterranean environment. In our evaluations, we focus on the LiDAR data from R3, which has a higher level of sensor noise and collect sequences to assess the algorithms' robustness under such disturbances. To quantify the performance of each algorithm, we compare the reconstructed maps against the ground truth map and estimate the map deviations. Our results are presented in Table <ref>, and the point cloud data used in our evaluations are shown in Figure <ref>.
§.§ Discussion
The evaluation results in Table <ref> provide insights into the performance of various SLAM algorithms on the Subt-MRS dataset. Super Odometry stands out as the most robust algorithm, achieving an average deviation of only 0.5%. The combination of LiDAR, visual camera, and IMU sensors in Super Odometry provides greater resilience against challenging environmental conditions. In contrast, LiDAR-Inertial and LiDAR-Only algorithms, such as Fast_LIO, Faster_LIO, and ALOAM, show significant drift in most runs, indicating limited robustness in multi-degraded environments. Clins, which implements continuous-time trajectory estimation, performs better than other LiDAR-Inertial and LiDAR-Only algorithms, with deviations of less than 17% in most runs. However, Clins operates offline, which may not be suitable for real-time applications. The results suggest that the use of additional sensors and advanced estimation techniques is necessary to achieve high robustness in challenging environments.
|
http://arxiv.org/abs/2307.04997v1 | 20230711033003 | The classifying space for commutativity of geometric orientable 3-manifold groups | [
"Omar Antolín-Camarena",
"Luis Eduardo García-Hernández",
"Luis Jorge Sánchez Saldaña"
] | math.GR | [
"math.GR",
"math.AT",
"math.GT"
] |
The classifying space for commutativity of geometric orientable 3-manifold groups

Omar Antolín-Camarena, Luis Eduardo García-Hernández, Luis Jorge Sánchez Saldaña

August 12, 2023
For a topological group G let E_𝖼𝗈𝗆(G) be the total space of the universal transitionally commutative principal G-bundle as defined by Adem–Cohen–Torres-Giese. So far this space has been most studied in the case of compact Lie groups; but in this paper we focus on the case of infinite discrete groups.
For a discrete group G, the space E_𝖼𝗈𝗆(G) is homotopy equivalent to the geometric realization of the order complex of the poset of cosets of abelian subgroups of G. We show that for fundamental groups of closed orientable geometric 3-manifolds, this space is always homotopy equivalent to a wedge of circles. On our way to prove this result we also establish some structural results on the homotopy type of E_𝖼𝗈𝗆(G).
§ INTRODUCTION
For a topological group G one can topologize the set of homomorphisms Hom(ℤ^n, G) as the subspace of G^n consisting of n-tuples of pairwise commuting elements of G. These spaces of commuting tuples form a simplicial subspace of the usual simplicial space model for the classifying space BG. The geometric realization of this simplicial subspace is called the classifying space for commutativity of G and denoted by B_𝖼𝗈𝗆(G). It was first introduced in <cit.> and further studied in <cit.>, where a notion of transitionally commutative principal G-bundle was introduced and shown to be classified by B_𝖼𝗈𝗆(G). The total space of the universal transitionally commutative principal G-bundle is denoted E_𝖼𝗈𝗆(G). When G is abelian, B_𝖼𝗈𝗆(G) agrees with BG and E_𝖼𝗈𝗆(G) agrees with EG. One can consider how far E_𝖼𝗈𝗆(G) is from being contractible as a sort of measure of non-abelianness of G, and indeed, it has been shown <cit.> that for compact, not necessarily connected, Lie groups G, one has that G is abelian if and only if E_𝖼𝗈𝗆(G) is contractible, and in fact, if and only if π_k(E_𝖼𝗈𝗆(G)) = 0 for k ∈{1,2,4} (note the absence of 3 in that set!).
It is fair to say that most of the attention given to B_𝖼𝗈𝗆(G) and E_𝖼𝗈𝗆(G) so far has been in the case of Lie groups, and more particularly compact Lie groups, but the definitions are also interesting for discrete groups. In the case of a discrete group G, the theorem cited above can be improved to say that G is abelian if and only if E_𝖼𝗈𝗆(G) is simply connected, as first shown in <cit.> and with a different proof in <cit.>. For discrete groups one can give a simple combinatorial model of the homotopy type of E_𝖼𝗈𝗆(G) as a poset, namely, E_𝖼𝗈𝗆(G) is homotopy equivalent to the geometric realization of the order complex of the poset of cosets of abelian subgroups of G.
In this paper we use this poset description to compute the homotopy type of E_𝖼𝗈𝗆(G) for fundamental groups of orientable geometric 3-manifolds and show that they are all wedges of countably many circles. Recall that Thurston
showed that there are eight 3-dimensional maximal geometries up to
equivalence (<cit.>): 𝔼^3, 𝕊^3, ℍ^3,
𝕊^2×ℝ, ℍ^2×ℝ, Nil, Sol, and the universal cover of SL_2(ℝ). The main result of this paper is the following:
Let G be the fundamental group of an orientable geometric 3-manifold. Then E_𝖼𝗈𝗆(G) is homotopy equivalent to ⋁_I S^1, where I is a (possibly empty) countable index set. Moreover, we have the following:
* I is empty if and only if G is abelian, in which case E_𝖼𝗈𝗆(G) is contractible.
* I is infinite if and only if G is infinite and nonabelian.
From <Ref> we conclude that E_𝖼𝗈𝗆(G) is a finite nontrivial wedge of circles exactly when G is finite and nonabelian, which, by Perelman's theorem, happens when G is the fundamental group of a spherical manifold. In that case we explicitly compute the number of circles in the wedge decomposition, see Section 5. Some of these calculations were done using GAP.
As a consequence of <Ref>, <Ref> and the Kneser–Milnor prime decomposition theorem (see <Ref>), we obtain the following theorem:
Let M be a 3-manifold with fundamental group G. Assume that P_1 #⋯# P_n is the prime decomposition of M, with n≥ 2, and each P_i is a geometric 3-manifold. Then E_𝖼𝗈𝗆(G) is homotopy equivalent to ⋁_ℕ S^1.
The proof of <Ref> is done case by case, that is, analyzing the possible fundamental groups that appear for each of the eight 3-dimensional geometries. In the following table we summarize the references for each geometry:
Let us say something about the strategy of the proof of the theorems in <Ref>. The space E_𝖼𝗈𝗆(G) is homotopy equivalent to the geometric realization of the poset that consists of cosets of maximal abelian subgroups of G and their intersections, ordered by inclusion. For most of the fundamental groups we dealt with, it turned out that this poset has height 1, that is, its geometric realization has dimension one, so it is straightforward that E_𝖼𝗈𝗆(G) has the homotopy type of a wedge of circles. The exceptional cases were given by amalgamated products of the form K∗_ℤ^2K, where K is the fundamental group of the Klein bottle. Such groups appear in the geometries 𝔼^3, Nil, and Sol. In these cases the poset has height 2, and therefore we have triangles in its geometric realization. Nevertheless, it is always the case that some triangles have free faces, that is, they have a face that does not belong to any other triangle, and they can be deformation retracted onto the other two faces. Finally, after collapsing these triangles, all the remaining triangles have a free face and they can also be collapsed. In conclusion, via a two-stage collapsing procedure we see that the geometric realization of the poset can be deformation retracted onto its 1-skeleton, and this concludes the proof.
§.§ Some byproduct results
On our way to prove <Ref>, we establish some structural results on the homotopy type of E_𝖼𝗈𝗆(G), and we also compute the homotopy type of E_𝖼𝗈𝗆 for some other groups. Next, we state these results for the sake of the reader. The first theorem tells us that the homotopy type of E_𝖼𝗈𝗆 behaves well with respect to free and direct products.
Let G_1 and G_2 be nontrivial groups. Then
* E_𝖼𝗈𝗆(G_1 ∗ G_2)≃⋁_ℕ E_𝖼𝗈𝗆(G_1)∨⋁_ℕ E_𝖼𝗈𝗆(G_2)∨⋁_ℕ S^1, and
* E_𝖼𝗈𝗆(G_1 × G_2)≃ E_𝖼𝗈𝗆(G_1)×E_𝖼𝗈𝗆(G_2).
Fundamental groups of closed hyperbolic 3-manifolds are very particular cases of hyperbolic groups (in the sense of Gromov). We prove the following.
Let G be a torsion-free hyperbolic group in the sense of Gromov. Assume G is not virtually cyclic. Then
E_𝖼𝗈𝗆(G)≃⋁_ℕ S^1.
In the following theorem we compute the homotopy type of E_𝖼𝗈𝗆(G) for five of the nonabelian wallpaper groups, i.e., 2-dimensional crystallographic groups.
Let G be one of the following wallpaper groups:
* the fundamental group of the Klein bottle ℤ⋊_-1ℤ,
* ℤ^2⋊_A_nℤ/n, where A_n is an integral matrix of order 2, 3, 4 or 6 (see <Ref> for a concrete description).
Then E_𝖼𝗈𝗆(G)≃⋁_ℕ S^1.
§.§ Open questions and further research
Note that the center Z of a group G is exactly the intersection of all maximal abelian subgroups of G. Provided that Z has finite index in G, we conclude that the coset poset is finite, and consequently E_𝖼𝗈𝗆(G) has the homotopy type of a finite CW-complex. This conclusion does not apply to groups that merely contain an abelian subgroup of finite index, like the fundamental group of the Klein bottle, see <Ref>. This leads to the following question.
Let G be a group. Assume that E_𝖼𝗈𝗆(G) has the homotopy type of a finite CW-complex. Is it true that G contains its center as a finite index subgroup?
It is well-known that there are 17 wallpaper groups. The only abelian group among them is ℤ^2, for which E_𝖼𝗈𝗆 is contractible. On the other hand, for the groups listed in <Ref>, we proved that E_𝖼𝗈𝗆(G) has the homotopy type of a countably infinite wedge of circles. It is natural to ask for the analogous computation for the remaining wallpaper groups.
Let G be a wallpaper group. What is the homotopy type of E_𝖼𝗈𝗆(G)? Is it true that E_𝖼𝗈𝗆(G)≃⋁_ℕ S^1?
Let M be a closed, prime, oriented 3-manifold that is not homeomorphic to S^1× D^2, T^2× I, nor the nontrivial I-bundle over the Klein bottle. Assume M is not geometric. Then there exists a nonempty collection
T⊆ M of disjoint incompressible tori (i.e. two-sided, properly embedded and π_1-injective), such that each component of
M - T is geometric, see for instance <cit.>. A deep analysis of this decomposition and the results in the present article will lead to a classification of the (maximal) abelian subgroups of the fundamental group G of M. A natural question that is left by <Ref> is the following.
Let G be the fundamental group of a prime non-geometric 3-manifold. What is the homotopy type of E_𝖼𝗈𝗆(G)?
§.§ Outline of the paper
In Section 2 we establish that for any discrete group G the space E_𝖼𝗈𝗆(G) is homotopy equivalent to the geometric realization of a poset of cosets of abelian subgroups of G, which is the cornerstone of all of our computations. Section 3 is devoted to recalling what is needed, for our purposes, about the theory of 3-manifolds. In Section 4 we prove some structural results on the homotopy type of E_𝖼𝗈𝗆 that will be useful in the proof of <Ref>; among other things we prove in this section <Ref>. In Sections 5 to 8 we compute the homotopy type of E_𝖼𝗈𝗆(G) for the geometries 𝕊^3, ℍ^3, 𝕊^2×ℝ, and ℍ^2×ℝ or the universal cover of SL_2(ℝ), respectively. The main goals of Sections 9 and 10 are to set up notation and preliminary results to deal with groups of the form K∗_ℤ^2K, with K the fundamental group of the Klein bottle; it is in these sections that we also compute the homotopy type of E_𝖼𝗈𝗆(G) for some wallpaper groups. Finally, in Sections 11 to 13 we compute the homotopy type of E_𝖼𝗈𝗆(G) for the geometries 𝔼^3, Nil, and Sol.
10pt
Acknowledgments O.A-C. and L.E.G.-H. gratefully acknowledge support from CONACyT Ciencia de Frontera 2019 grant CF217392. L.J.S.S is grateful for the financial support of DGAPA-UNAM grant PAPIIT IA106923.
§ POSETS OF COSETS OF ABELIAN SUBGROUPS
In this section we establish that for every discrete group G, the space E_𝖼𝗈𝗆(G) is homotopy equivalent to the geometric realization of the poset of all cosets of maximal abelian subgroups of G and their intersections, see <Ref>. This result is the cornerstone of all of our computations.
Let P be a poset. We say a chain x_0< x_1 < ⋯ < x_n in P has length n, and the height of P is the greatest length of a chain in P. Note that the height of P coincides with the dimension of the geometric realization (of the nerve) of P.
Let G be a group. Define the following posets
* G as the poset of all abelian subgroups of G.
* G as the poset of all cosets of abelian subgroups of G.
* G as the poset of all cosets gB such that g∈ G and B is an intersection of maximal abelian subgroups of G.
In all of these posets the order relation is the one given by inclusion.
Let G be a group and Z its center. Notice that all maximal abelian subgroups of G contain Z. In fact, one can show that Z is equal to the intersection of all maximal abelian subgroups of G.
All of our computations on the homotopy type of E_𝖼𝗈𝗆(G) rely on the following result, which will be used from now on without further mention. Therefore, throughout the article we will only deal with (the geometric realizations of) these coset posets.
Given a discrete group G, the following CW-complexes are homotopy equivalent.
* E_𝖼𝗈𝗆(G),
* the geometric realization of the poset of cosets of abelian subgroups of G,
* the geometric realization of the poset of cosets of intersections of maximal abelian subgroups of G.
This result is essentially contained in <cit.>, but for the reader's convenience we sketch the argument in addition to providing references. Note that <cit.> states these results only for finite groups, but the argument only requires discreteness.
Recall that E_𝖼𝗈𝗆(G) is the homotopy fiber of the canonical map B_𝖼𝗈𝗆(G)→ BG, and that B_𝖼𝗈𝗆(G) in turn is homotopy equivalent to the homotopy colimit of BA over the poset of abelian subgroups A of G. Since homotopy colimits are preserved by homotopy pullback along a fixed map, we can pull back the homotopy colimit for B_𝖼𝗈𝗆(G) along the map EG→ BG to obtain (<cit.>):
E_𝖼𝗈𝗆(G)≃ hocolim_A hofib(BA → BG)≃ hocolim_A G/A,
where A ranges over the abelian subgroups of G.
By Thomason's theorem the homotopy colimit of a functor like A → G/A whose values are discrete spaces can be computed as the geometric realization of the nerve of the Grothendieck construction (or category of elements) of that functor. In this case the objects of this Grothendieck construction would be the cosets of abelian subgroups of G and unwinding the definition of the morphisms they turn out to be simply inclusions of cosets (see the remark immediately following <cit.>). This establishes the equivalence of (1) and (2). The equivalence of (2) and (3) is simply because G is cofinal in G.
The proof of the following lemma is elementary and it is left to the reader. We will also be using this result all throughout the paper without mentioning it.
Let G be a group. Then (the geometric realization of) G is connected.
§ GEOMETRIC 3-MANIFOLDS
In this section we will review a bit of 3-manifold theory. For more details see <cit.>, <cit.>.
§.§ Geometric 3-manifolds
A Riemannian manifold X is a smooth
manifold that admits a Riemannian metric. If the isometry group (X)
acts transitively, we say X is homogeneous. If in addition X has a quotient of finite
volume, X is unimodular. A geometry is a simply-connected,
homogeneous, unimodular Riemannian manifold along with its isometry group. Two
geometries (X,(X)) and (X',(X')) are equivalent if
(X)≅(X') and there exists a diffeomorphism X→ X' that
respects the (X), (X') actions. A geometry (X,(X)) (often
abbreviated X) is maximal if there is no Riemannian metric on X with
respect to which the isometry group strictly contains (X). A manifold
M is called geometric if there is a geometry X and discrete subgroup
Γ≤(X) with free Γ-action on X such that M is
diffeomorphic to the quotient X/Γ; we also say that M admits a
geometric structure modeled on X. Similarly, a manifold with non-empty
boundary is geometric if its interior is geometric.
It is a consequence of the uniformization theorem that compact surfaces
(2-manifolds) admit Riemannian metrics with constant curvature; that is, compact
surfaces admit geometric structures modeled on ^2, ^2, or .
In dimension three, we are not guaranteed constant curvature. Thurston
showed that there are eight 3-dimensional maximal geometries up to
equivalence (<cit.>): 𝔼^3, 𝕊^3, ℍ^3,
𝕊^2×ℝ, ℍ^2×ℝ, Nil, Sol, and the universal cover of SL_2(ℝ).
§.§ Prime decomposition
The material of this subsection it is only stated by the sake of completeness as these concepts are mentioned in the statement of <Ref>. The prime decomposition theorem will not be used anywhere else in this article.
A closed n-manifold is an n-manifold that is compact with empty
boundary. A connected sum of two n-manifolds M and N, denoted M#
N, is a manifold created by removing the interiors of a smooth n-disc D^n
from each manifold, then identifying the boundary spheres 𝕊^n-1. An n-manifold is
nontrivial if it is not homeomorphic to 𝕊^n. A prime
n-manifold is a nontrivial manifold that cannot be decomposed as a connected
sum of two nontrivial n-manifolds; that is, M=N# P for some n-manifolds
N,P forces either N=𝕊^n or P=𝕊^n. An n-manifold M is called
irreducible if every 2-sphere 𝕊^2⊂ M bounds a ball D^3⊂ M. It is well-known that all orientable prime manifolds are irreducible with the exception of S^1×
S^2. The following is a well-known theorem of Kneser (existence) and Milnor (uniqueness).
Let M be a closed oriented nontrivial 3-manifold. Then M=P_1#⋯#
P_n where each P_i is prime. Furthermore, this decomposition is unique up to
order and homeomorphism.
§ SOME STRUCTURAL RESULTS ON THE HOMOTOPY TYPE OF G
In this section we prove some foundational results on the homotopy type of G, which are interesting in their own right. These results will be used in the rest of the paper.
Consider the following extension of groups
1→ K → G → Q → 1.
Let be the poset of cosets of all abelian subgroups of G that contain K, and let be the poset of cosets of subgroups of Q whose preimage in G is abelian. Then and are isomorphic.
Moreover, if _𝗆𝖺𝗑 (resp. _𝗆𝖺𝗑) is the poset of intersections of finitely many maximal members of (resp. ), then _𝗆𝖺𝗑 is isomorphic to _𝗆𝖺𝗑.
The standard bijection between subgroups of G containing K and subgroups of Q restricts to a bijection between abelian subgroups of G containing K and subgroups of Q with abelian preimage in G. Given a subgroup H of G that contains K let H̅ be its image in Q. Notice that [G:H] = [Q : H̅]. This implies that the bijection we have between subgroups also gives rise to a isomorphism of poset cosets.
We will often apply this proposition in cases where every abelian subgroup of Q has abelian preimage in G, so that and _𝗆𝖺𝗑 are actually Q and Q respectivelly. Note further in that case the extension is central.
Let G be a nonabelian countably infinite group. Assume that for every pair of distinct maximal abelian subgroups A and B of G, A∩ B=1. Then E_𝖼𝗈𝗆(G)≃⋁_ℕ S^1.
The hypothesis implies directly that G is of height 1, thus its geometric realization has dimension 1 and it has a finite or countable number of cells. We conclude, by <Ref> that G is homotopically equivalent to a wedge of circles. It only remains to be proved that there are infinitely many of these circles.
Consider Y the subcomplex of the geometric realization X of G given as follows:
* All the vertices of X belong to Y.
* All edges of the form {{g},A} with g∈ A and A a maximal abelian subgroup of G, belong to Y.
* For each vertex of X of the form xA with A≠ 1, choose and fix a representative of the corresponding coset, lets say x. The edge {{x}, xA} belongs to Y.
We claim Y is connected. First, any coset of the trivial group, say {g}, is contained in some maximal abelian subgroup A, and Y has edges {{1},A},{{g},A}, which connect {g} to {1}. Second, any coset of a maximal abelian subgroup A, has a chosen representative x and Y has the edge {{x}, xA} connecting xA to a coset of the trivial group, which we already connected to {1}.
Since Y is connected and contains all vertices of X, a maximal subtree T of Y is also a maximal subtree of X containing all vertices. To finish the proof it is enough to show that there are infinitely many edges in X that do not belong to Y. Among the maximal abelian subgroups of G there is either a subgroup H of infinite index or an infinite proper subgroup K; we deal with each case separately. Let H be as in the first case, note that, by construction of Y, for each coset gH≠ H there is exactly one edge in Y of the form {{x},gH}, and since H is nontrivial there is an element y≠ x such that the edge {{y},gH} does not belong to Y; this finishes the proof in the first case. Let K be as in the second case, since K is proper there is a coset gK≠ K, as in the previous case, all but one of the edges of the form {{y},gK} do not belong to Y; this finishes the proof of the second case.
The following corollary is a direct consequence of <Ref> and <Ref>.
Let G be a group, and denote by Z the center of G. Assume that for every pair of distinct maximal abelian subgroups A and B of G, A∩ B=Z, and assume Z has countably infinite index in G. Then E_𝖼𝗈𝗆(G)≃⋁_ℕ S^1.
Let G_1 and G_2 be nontrivial groups. Then
* E_𝖼𝗈𝗆(G_1 ∗ G_2)≃⋁_ℕ E_𝖼𝗈𝗆(G_1)∨⋁_ℕ E_𝖼𝗈𝗆(G_2)∨⋁_ℕ S^1, and
* E_𝖼𝗈𝗆(G_1 × G_2)≃ E_𝖼𝗈𝗆(G_1)×E_𝖼𝗈𝗆(G_2).
* By Kurosh's theorem, any abelian subgroup of G_1 ∗ G_2 is either a subgroup of a conjugate of G_1, a subgroup of a conjugate of G_2, or an infinite cyclic subgroup generated by an element which is not conjugate to any element of G_1 or G_2. Furthermore, abelian subgroups of different types intersect only in the trivial subgroup. This means the poset of abelian (resp. maximal abelian) subgroups of G_1 ∗ G_2 is a wedge of the all the abelian (resp. maximal abelian) subgroups of all the conjugates of G_1, all the abelian (resp. maximal abelian) subgroups of all the conjugates of G_2 and all the cyclic subgroups generated by elements (resp. primitive elements) not conjugate to G_1 or G_2. For the coset posets, we have
G=⋃_y∈ G/G_1_g∈ G/G_1^yg·G_1^y∪⋃_x∈ G/G_2_g∈ G/G_2^xg·G_2^x∪𝖢𝗈P
where P is the poset consisting of the subgroup 1 and all maximal abelian subgroups generated by a primitive element not conjugate to either factor; and 𝖢𝗈 P denotes the poset of cosets of subgroups in P.
Consider the subcomplex Y of the geometric realization X of G given as follows:
* For each g∈ G - {1} choose and fix a maximal abelian subgroup M_g of G, such that g∈ M_g. The edges {{1},M_g} and {{g},M_g} belong to Y.
* For each vertex of X of the form xA with A a maximal abelian subgroup and x A≠ A, choose and fix a representative of the coset xA, say x. The edge {{x}, xA} belongs to Y.
* All the vertices in the previous edges belong to Y, i.e., the vertices of Y are the singletons and the cosets of all maximal abelian subgroups of G.
We claim Y is connected. First, any coset of the trivial group, say {g}, is contained in M_g, and Y has edges {{1},M_g},{{g},M_g}, which connect {g} to {1}. Second, any coset of a maximal abelian subgroup A, has a chosen representative x and Y has the edge {{x}, xA} connecting xA to the coset {x} of the trivial group, which we already connected to 1. Also from the construction of Y, this subcomplex is a tree; indeed, taking {1} as the root, the succesive levels of the tree are the M_g, then the {g} for g≠ 1, then the non-subgroup cosets xA.
We have that Y has non-empty intersection with every g·G_i^y. Also for each xH coset in P with xH≠ H, there are infinitely many edges of X that do not belong to Y. Therefore, after collapsing Y, we have a complex with homotopy type the wedge of infintely many G_1 plus a number of copies of S^1, infintely many G_2 plus a number of copies of S^1, and infinitely many copies of S^1 from P.
* Any abelian subgroup M of G_1 × G_2 is contained in π_1(M) ×π_2(M), which is also abelian. Therefore, the maximal abelian subgroups of G_1× G_2 are of the form A_1× A_2 with A_1 and A_2 maximal abelian subgroups of G_1 and G_2, respectively. This implies that the poset G_1× G_2 is isomorphic to G_1×G_2 with the coset (g_1,g_2)A_1 × A_2 corresponding to the pair (g_1 A_1, g_2 A_2). As the geometric realization of a poset respects products the result follows.
Let G be a torsion-free hyperbolic group in the sense of Gromov. Assume G is not virtually cyclic. Then
E_𝖼𝗈𝗆(G)≃⋁_ℕ S^1.
It is well-known that in a hyperbolic group the centralizer of any infinite cyclic group is virtually cyclic, in particular, every abelian subgroup of G is either trivial or infinite cyclic (see for instance <cit.>). On the other hand, if H and K are infinite cyclic maximal subgroups of G, then either H=K or H∩ K=1, (see for instance <cit.>). Since G is not virtually cyclic, and in particular countably infinite nonabelian, now the conclusion follows from <Ref>.
The following lemma will not be used in the proof of the main result, but we record it for future reference.
Let A and B be abelian groups with a common proper subgroup F. Then
E_𝖼𝗈𝗆(A ∗_F B)≃⋁_ℕ S^1.
Kurosh's theorem implies that any abelian subgroup of (A/F) ∗ (B/F) is either a subgroup of a conjugate of A/F, a subgroup of a conjugate of B/F or infinite cyclic. In all three cases the pre-image of the subgroup in A ∗_F B can be easily seen to be abelian. Thus, <Ref> implies that A ∗_F B and (A/F) ∗ (B/F) are isomorphic. Finally, <Ref> establishes the claim, since A/F and B/F are contractible.
Let G=SL_2(ℤ) and H=PSL_2(ℤ). Since G≅ℤ/4∗_ℤ/2ℤ/6 and H≅ℤ/2∗ℤ/3, <Ref> implies that both E_𝖼𝗈𝗆(G) and E_𝖼𝗈𝗆(H) have the homotopy type of ⋁_ℕ S^1.
§ 3-MANIFOLDS MODELED ON 𝕊^3
The fundamental group of a 3-manifold modeled on 𝕊^3 is either finite cyclic or a direct product of a finite cyclic group with a group described in the following list; see for instance <cit.>.
* Q_4n = ⟨ x,y | x^2=(xy)^2=y^n ⟩ where n≥ 2,
* P_48=⟨ x,y | x^2=(xy)^3=y^4, x^4=1 ⟩,
* P_120=⟨ x,y | x^2=(xy)^3=y^5, x^4=1 ⟩,
* D_2^m(2n+1)=⟨ x,y | x^2^m=1, y^2n+1=1, xyx^-1=y^-1⟩, where m≥ 2 and n≥ 1,
* P'_8· 3^m=⟨ x,y, z | x^2=(xy)^2=y^2, zxz^-1=y, zyz^-1=xy, z^3^m=1 ⟩, where m≥ 1.
Since <ref> allows us to calculate the homotopy type for direct products, it is enough to calculate E_𝖼𝗈𝗆(G) for the groups in the previous list.
Let Q_4n be the finite group of generalized quaternions of order 4n. Then E_𝖼𝗈𝗆(Q_4n)≃⋁_n^2-1S^1.
The group Q_4n admits the following presentation, see <cit.>:
Q_4n=⟨ x,y|y^2n=1, x^2=y^n, x yx^-1=y^-1⟩
With this presentation it is easy to check that the elements can be written uniquely as x^iy^a with i∈{0,1} and 0≤ a≤ 2n-1. Moreover, two elements commute only if they both belong to ⟨ y⟩ or both belong to ⟨ z⟩ for some z∉⟨ y⟩. Thus the maximal abelian subgroups of Q_4n are as follows: the cyclic group A=⟨ y⟩ of order 2n and, for each z∉ A, the cyclic group ⟨ z⟩ of order 4. The intersection of any two distinct maximal abelian subgroups is the center Z={1,x^2=y^n}, so the coset poset has height 1 and its geometric realization is a graph. There are 2 cosets of A and, for each subgroup of order 4, there are n cosets; since there are n of these subgroups, the top level of the poset has n^2+2 vertices. In the bottom level there are 2n vertices, the cosets of Z. Each coset of A is connected to n cosets of Z and each coset of a cyclic subgroup of order 4 is connected to 2 cosets of Z, so there are 2n+2n^2 edges. Using the Euler characteristic, we conclude that the number of circles is
2n^2+2n-(n^2+2+2n)+1=n^2-1.
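The count can also be checked by brute force. The following Python sketch (a hypothetical stand-alone script, not the GAP routines used elsewhere in the paper) builds Q_4n from the presentation above, extracts its maximal abelian subgroups as the abelian centralizers, assembles the coset poset of these subgroups and of the center, and returns the first Betti number of the resulting graph; for n=2,…,5 it reproduces n^2-1.

def q4n_elements(n):
    # elements x^i y^a of Q_{4n}, encoded as pairs (i, a) with i in {0, 1} and a mod 2n
    return [(i, a) for i in (0, 1) for a in range(2 * n)]

def mul(g, h, n):
    (i, a), (j, b) = g, h
    m = 2 * n
    if i == 0 and j == 0: return (0, (a + b) % m)
    if i == 0 and j == 1: return (1, (b - a) % m)   # y^a (x y^b) = x y^(b-a)
    if i == 1 and j == 0: return (1, (a + b) % m)   # (x y^a) y^b = x y^(a+b)
    return (0, (n + b - a) % m)                     # (x y^a)(x y^b) = y^(n+b-a)

def wedge_count(n):
    G = q4n_elements(n)
    commutes = lambda g, h: mul(g, h, n) == mul(h, g, n)
    # maximal abelian subgroups of Q_{4n} are exactly the abelian centralizers (as in the proof)
    centralizers = {frozenset(h for h in G if commutes(g, h)) for g in G}
    maximal = {C for C in centralizers if all(commutes(a, b) for a in C for b in C)}
    Z = frozenset.intersection(*maximal)            # the center {1, y^n}
    # distinct maximal abelian subgroups intersect exactly in Z, as noted in the proof
    assert all(A & B == Z for A in maximal for B in maximal if A != B)
    cosets = lambda H: {frozenset(mul(g, h, n) for h in H) for g in G}
    top = set().union(*(cosets(M) for M in maximal))    # cosets of maximal abelian subgroups
    bottom = cosets(Z)                                  # cosets of the center
    V = len(top) + len(bottom)
    E = sum(1 for c in bottom for d in top if c <= d)   # containments = edges of the graph
    return 1 - (V - E)                                  # first Betti number of a connected graph

for n in range(2, 6):
    print(n, wedge_count(n), n * n - 1)                 # the last two columns agree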
The conclusions of the following theorem were obtained using a GAP routine. Notice that this is possible because we are dealing with two very concrete groups. The code can be found in <Ref>.
The following holds
* E_𝖼𝗈𝗆(P_48)≃⋁_167S^1, and
* E_𝖼𝗈𝗆(P_120)≃⋁_1079S^1.
For m≥ 2 and n≥ 1, E_𝖼𝗈𝗆(D_2^m(2n+1))≃⋁_(2n+1)^2-1S^1.
First let us prove that x^2 is central and that D_2^m(2n+1)/⟨ x^2⟩≅ D_2(2n+1). From the presentation we have
x^2yx^-2=xy^-1x^-1=y,
so x^2 is central. Also
D_2^m(2n+1)/⟨ x^2⟩≅⟨x̅,y̅|y̅^2n+1=1,x̅y̅x̅^-1=y̅^-1,x̅^2=1⟩=D_2(2n+1).
Then D_2^m(2n+1) fits in the central extension
1→ℤ/2^m-1→ D_2^m(2n+1)→ D_2(2n+1)→ 1.
Recall that D_2(2n+1) can be realized as the isometry group of a (2n+1)-gon. Since 2n+1 is odd, every abelian subgroup of D_2(2n+1) is generated by a rotation or a reflection, hence the pre-image in D_2^m(2n+1) of any abelian subgroup of D_2(2n+1) is abelian, and therefore the coset poset of D_2^m(2n+1) is isomorphic to that of D_2(2n+1).
The maximal abelian subgroups of D_2(2n+1) are as follows: the cyclic group A=⟨y̅⟩ of order 2n+1 and the reflections. Any two distinct maximal abelian subgroups intersect in the trivial subgroup, hence the height of D_2(2n+1) is 1 and then is a graph. The vertices consist of 2 cosets of A, (2n+1)^2 cosets of all possible reflections and 2(2n+1) cosets of the center. The edges are (2n+1) for each coset of A and 2 for each coset of a reflection, hence the total number of edges is 2(2n+1)+2(2n+1)^2. Using the Euler characteristic we conclude that the number of circles is
2(2n+1)^2+2(2n+1)-(2+(2n+1)^2+(2n+1))+1=(2n+1)^2-1.
For all m≥ 1 we have E_𝖼𝗈𝗆(P'_8· 3^m)≃⋁_39S^1.
It is an easy exercise to verify, directly from the presentation of P'_8· 3^m, that both z^3 and x^2=y^2 are central elements, thus the subgroup generated by them is a central subgroup (actually it is the center itself). The quotient group has the following presentation
⟨ x,y,z | (xy)^2, zxz^-1=y, zyz^-1=xy⟩
which no longer depends on m. It can be verified, using GAP for example, that this quotient group is A_4 the alternating group on four letters. By <Ref> it is enough to compute the poset of cosets of abelian subgroups of A_4 that have abelian preimage in P'_8· 3^m. All abelian subgroups of A_4 are cyclic of order 3, with the only exception of the Klein subgroup which is noncyclic of order 4. Recall that, since the kernel subgroup is central, the preimage of any cyclic subgroup of A_4 is abelian. We claim that the preimage of the Klein subgroup is not abelian. In fact the Klein subgroup is generated by the images of x and y, and it can be verified with GAP that they do not commute inside P'_24 = P'_8· 3^1 (see <Ref>), which is a quotient of P'_8 · 3^m for all m ≥ 1. In conclusion, for all m≥ 1, P'_8· 3^m has the homotopy type of the geometric realization of the poset of cosets of cyclic subgroups of A_4. In particular, all P'_8· 3^m have the same homotopy type and we compute P'_24≃⋁_39 S^1 using GAP in <Ref>.
§ 3-MANIFOLDS MODELED ON ℍ^3
Manifolds modeled on ℍ^3, also known as hyperbolic manifolds, may be closed or have nonempty boundary.
Let G be the fundamental group of a finite volume hyperbolic manifold M. Then E_𝖼𝗈𝗆(G)≃⋁_ℕ S^1.
We have two cases depending on whether M is closed or not. In either case G is a torsion-free group because M is a finite dimensional K(G,1).
If M is closed, then G is a torsion-free word-hyperbolic group that is not virtually cyclic, see <cit.>. Now the assertion follows from <Ref>.
Now, let us focus on the case where M is not closed. The following argument is done in the proof of Theorem 3.1 from <cit.>, nevertheless we include it for the sake of completeness.
Consider the collection ℬ of subgroups of G consisting of
* All conjugates of the fundamental groups of the cusps of M. All these groups are isomorphic to ℤ^2 since every cusp is homeomorphic to the product of a torus and an interval.
* All maximal infinite virtually cyclic subgroups of G that are not subconjugate to the fundamental group of a cusp. All these groups are isomorphic to ℤ by the classification of virtually cyclic groups.
We notice that in <cit.> Lafont and Ortiz verified that every infinite virtually cyclic subgroup of G is contained in an element of ℬ. On the other hand, if H is a ^2-subgroup of G, then H does not contain a subgroup isomorphic to a free group in two generators, therefore by the Tits alternative for relatively hyperbolic group, see for instance <cit.>, H is subconjugate to the fundamental group of a cusp, that is, H is contained in an element of ℬ. Also in <cit.> it is proved that the intersection of any two elements of ℬ is trivial. Now the result follows from <Ref>.
§ 3-MANIFOLDS MODELED ON 𝕊^2×ℝ
There are only two possible fundamental groups for manifolds modeled on 𝕊^2×ℝ: ℤ and the infinite dihedral group D_∞≅ℤ/2 ∗ℤ/2, see for instance Table 1 on page 19 in <cit.>. In the first case E_𝖼𝗈𝗆(ℤ) is contractible and, in the second, by <Ref>, E_𝖼𝗈𝗆(D_∞)≃⋁_ℕ S^1.
§ 3-MANIFOLDS MODELED ON ℍ^2×ℝ OR THE UNIVERSAL COVER OF SL_2(ℝ)
By <cit.>, we have that any group that appears as the fundamental group of a manifold modeled on
, also is the fundamental group of a manifold modeled on . Thus we only have to deal with fundamental groups of manifolds modeled on .
Let G be the fundamental group of a finite volume 3-manifold modeled on either of these two geometries. Then E_𝖼𝗈𝗆(G)≃⋁_ℕ S^1.
Let M be a finite volume 3-manifold as in the statement, with fundamental group G. Then M is a Seifert fibered space with base orbifold B modeled on ℍ^2. By <cit.> we have the following central extension
1→ℤ→ G → Q →1
where ℤ is generated by a regular fiber of M, and Q is the orbifold fundamental group of B; in particular, Q is a finitely generated fuchsian group.
Recall that for a fuchsian group, all abelian subgroups are (finite or infinite) cyclic. Thus the preimage under the map G→ Q of any abelian subgroup of Q is an abelian subgroup of G. Therefore, by <Ref>, G is isomorphic, as a poset, to Q. It follows from standard arguments of plane hyperbolic geometry that the intersection of any two distinct maximal cyclic subgroups in a fuchsian group is trivial. The result now follows from <Ref>.
§ SOME CRYSTALLOGRAPHIC GROUPS OF DIMENSION 1 AND 2
The main objective of this section is to set the stage to deal with the geometries ^3, and . The fundamental groups arising from these geometries have the common feature of being virtually poly- groups. That is why we set notation and some basic facts about this kind of group. We also compute the homotopy type of G for the infinite dihedral group D_∞ and some wallpaper groups, as these will appear as quotients of the fundamental groups of manifolds modeled on ^3, and .
§.§ Poly- groups
Recall that a group G is called a poly- of rank n if it admits a filtration 1=G_0<G_1<⋯ < G_n=G such that G_i G_i+1 and G_i+1/G_i≅ for all i. It is well known that the rank does not depend on the filtration, that is, it is a well-defined invariant of the group that we denote rank(G). We will use the following well known facts about poly- groups that can be found, for instance, in <cit.>.
Let G be a poly- group and let H be a subgroup. Then the following statements hold
* H is a poly- group.
* rank(H)≤rank(G).
* rank(H)= rank(G) if and only if H has finite index in G.
The proof of the following lemma is left as an exercise to the reader.
Let D_∞=/2 ∗/2 be the infinite dihedral group. Then
* the center of D_∞ is trivial,
* the subgroup generated by ab, where a and b are generators of each of the factors of D_∞, is the only maximal abelian subgroup of D_∞ of rank 1,
* every nontrivial finite subgroup of D_∞ is conjugate to one of the factors, and
* if A and B are two distinct maximal abelian subgroups of D_∞ then A∩ B is trivial.
Let K=⋊ be the fundamental group of the Klein bottle. Then
* every abelian subgroup of K is free of rank at most 2,
* Z={0}⋊ 2 is the center of K,
* ⋊ 2 is the only maximal abelian subgroup of K of rank 2,
* if A and B are two distinct maximal abelian subgroups of K of rank 1 that are not contained in ⋊ 2, then both A and B contain Z as an index 2 subgroup and A∩ B=Z,
* the intersection of any maximal abelian subgroup of rank 1 with ⋊ 2 equals Z, and
* K≃⋁_ℕ S^1.
Since K is a torsion free poly- group of rank 2, item (1) is clear. Notice that we have a short exact sequence
1→ Z → K → D_∞→ 1
where D_∞ is the infinite dihedral group which is isomorphic to both ⋊/2 and /2∗/2. Note that the center of K maps into the center of D_∞ which is trivial, thus the center of K is contained in Z. It is not difficult to see that Z is contained in the center of K; this proves (2). Items (3) and (4) are a direct consequence of <Ref> and <Ref>.
Let us prove (5). Let A be a maximal abelian subgroup of K. By (4), the center Z is contained in both A and ⋊ 2. Hence Z⊆ A∩ (⋊ 2). If this inclusion is strict, since Z has index 2 in A, it would follow that A is contained in Z⊆⋊ 2 which is impossible by maximality of A.
Finally, part (6) is a direct consequence of part (5) and <Ref>.
Consider the following matrices:
A_2=[ -1 0; 0 -1 ], A_3=[ 0 -1; 1 -1 ],
A_4=[ 0 -1; 1 0 ],A_6=[ 1 -1; 1 0 ]
Let Q=^2⋊_A_n/n where A_n is one of the matrices in <ref>. Then
* every element in Q-(^2⋊{0}) has finite order,
* every finite subgroup of Q is cyclic,
* the intersection of any two finite maximal abelian subgroups of Q is trivial,
* ^2⋊{0} is the only infinite maximal abelian subgroup of Q,
* Q≃⋁_ℕ S^1.
This group Q is one of the plane crystallographic groups, generated by translations by vectors in ^2 and a single rotation around the origin of order n. Every element in the group is either a translation or a rotation of order dividing n around some center. It is easy to see when two elements commute: two translations always commute, two rotations only commute if they have the same center, and a translation never commutes with a rotation. This implies that the maximal abelian subgroups are the group of translations, and for each possible center of rotation, the group of all rotations with that center. Parts (1)–(4) follow from this description of the maximal abelian subgroups.
Part (5) is a direct consequence of parts (3) and (4) and <Ref>.
§ A LITTLE TECHNICAL DETOUR
In this section we settle notation and a preliminary result concerning the structure of groups of the form K_1∗_A K_2 where K_1 and K_2 are copies of the fundamental group of the Klein bottle, A is the index two ^2 subgroup of K_1 and K_2. These groups will appear in the analysis of geometries ^3, , and , and it is fair to say that they are the most difficult cases in this article.
Let us set some notation. Let G be a group of the form K_1∗_A K_2 where K_1 and K_2 are copies of the fundamental group of the Klein bottle, A is the index two ^2 subgroup of K_1 and K_2, and the amalgamation is given by an automorphism φ A→ A. Since A is normal in both factors of G, we conclude A is normal in G. Moreover, G/A is isomorphic to the infinite dihedral group D_∞. Let p denote the quotient map. Now, D_∞ is isomorphic to ⋊_-1/2, and this implies that we have an index 2 subgroup G'=p^-1(⋊{0}) of G that is isomorphic to A⋊_φ' where φ' is a certain automorphism of A that depends on φ. All of this can be summarized in the following commutative diagram
1→ A → G →⋊_-1/2 → 1
1→ A → G' →→ 1
in which both rows are exact, the horizontal surjections are induced by p, the left vertical map is the identity of A, and the other vertical maps are inclusions.
Depending on the automorphism φ, A may or may not be a maximal abelian subgroup of G. We will classify the maximal abelian subgroups of G that are not equal to A (which, if A is not maximal, are all of them) as follows:
Type I those that are subconjugate to one of the factors, or equivalently, those that have nontrivial finite image under p, and
Type II those that are subgroups of G', equivalently those that have infinite image under p.
Given an element x∈ G we denote p(x) by x̅.
Consider K_1∗_A K_2 where K_1 and K_2 are copies of the fundamental group of the Klein bottle, A is the index two ^2 subgroup of K_1 and K_2. Let x and y be elements in K_1-A and in K_2-A respectively. Then
* every conjugate of K_i for i=1,2 is of the form K_i^(xy)^n with n∈,
* the center of K_1^(xy)^n (resp. K_2^(xy)^n) is the infinite cyclic group generated by (x^2)^(xy)^n (resp. (y^2)^(xy)^n),
* any abelian subgroup of K_1∗_A K_2 that is maximal among those of type II, is maximal abelian in K_1∗_A K_2.
Let us prove (1), and note that (2) is a direct consequence of (1).
Consider p K_1∗_A K_2→ D_∞ = ⟨x̅⟩∗⟨y̅⟩. Note that K_1^g=p^-1(⟨x̅⟩^g̅), the result now follows from the fact that all conjugates of ⟨x̅⟩ in D_∞ are of the form ⟨x̅⟩^(x̅y̅)^n with n∈.
To prove (3), let B be an abelian subgroup of K_1∗_A K_2 that is maximal among those of type II, then p(B) is infinite. Let C be an abelian subgroup of K_1∗_A K_2 that contains B, then p(C) is infinite, and therefore C is of type II. We conclude B=C by maximality.
§ 3-MANIFOLDS MODELED ON ^3
There are exactly 6 orientable compact 3-manifolds modeled on ^3 and their fundamental groups are the following, see for instance <cit.>.
* G_n=^2⋊_φ_n, where φ_n is one of the following matrices:
φ_1=[ 1 0; 0 1 ], φ_2=[ -1 0; 0 -1 ],
φ_3=[ 0 -1; 1 -1 ], φ_4=[ 0 -1; 1 0 ], φ_6=[ 0 -1; 1 1 ]
of orders 1, 2, 3, 4, and 6 respectively.
* The sixth group, which we denote Γ_6, admits the presentation:
⟨ x,y,z | x y^2 x^-1=y^-2 , y x^2 y^-1=x^-2, z=y^-1x^-1⟩.
Notice that the subgroup T generated by x^2, y^2 and z^2 is the translation subgroup of Γ_6, and therefore is isomorphic to ^3. Also notice that the subgroups K_x = ⟨ x, y^2 ⟩ and K_y = ⟨ x^2, y ⟩ are isomorphic to the fundamental group of the Klein bottle and we have the following splitting
Γ_6≅ K_x∗_A K_y
where A is the subgroup generated by x^2 and y^2. It is not difficult to see that, following <Ref>, Γ_6' is isomorphic to A⋊_-1⟨x̅y̅⟩=A⋊_-1⟨z̅⟩.
* G_1 is contractible,
* every abelian subgroup of G_n of rank at least 2, with n≠ 1, is contained in ^2 ⋊_φ_n n≅^3,
* the center Z of G_n, for n≠ 1, equals {0}⋊_φ_n n≅, and the intersection of any two distinct maximal abelian subgroups of G_n equals Z,
* G_n≃⋁_ℕ S^1 for n=2,3,4,6.
The conclusion in item (1) is clear since G_1 is abelian. For the rest of the proof, let n=2,3,4,6 and, for brevity, set φ=φ_n, and let G=G_n.
Let us prove part (2). Let B be an abelian subgroup of G of rank at least 2 such that Z≤ B (see <Ref>). We have the following short exact sequence
1→ B∩ (^2⋊_φ{0}) → B → m→ 1
where m divides n.
Thus B is generated by B∩ (^2⋊_φ{0}) and a preimage x of a generator of the quotient m. Moreover, since x is of the form (z,m) and B is abelian, we have that (0,m) acts trivially by conjugation on B∩ (^2⋊_φ{0}). Since B has rank 2, we have that B∩ (^2⋊_φ{0}) is nontrivial, and thus n=m because φ^m does not have fixed points unless n divides m. Hence in this case, we have that B is of the form (B∩ (^2⋊_φ{0}))× n. If additionally, B is a maximal abelian subgroup of G, then B=^2 × n.
Now we proceed with the proof of part (3). The first claim about the center is left as an exercise to the reader, and we will show the part about the intersection of maximal abelian subgroups. We claim that if B and C are two distinct maximal abelian subgroups satisfying that their intersection with ^2⋊_φ{0} is trivial, then B∩ C=Z. Consider the quotient G/Z=^2⋊_φ (/n). One can easily see that, from the definition of φ, the action of /n on ^2-{(0,0)} is free, hence by <cit.>, G/Z satisfies that the intersection of any two distinct maximal finite (actually cyclic) subgroups of G/Z is trivial. Now the claim follows by the correspondence theorem applied to the projection G→ G/Z.
To finish the proof of this item, we will prove that the intersection of any two distinct maximal abelian subgroups of G/Z is trivial and apply <Ref>. From the previous paragraph we only have to show that the intersection of T=^2 × n with any other maximal abelian subgroup B (those described in the previous paragraph) is exactly Z. Since B ∩^2 = 0, we have that T ∩ B ≤ Z, and since they are maximal, T ∩ B ≥ Z too.
We now turn our attention to the abelian subgroups of Γ_6. In the following lemma we describe the maximal abelian subgroups of Γ_6 and their intersections, all the conclusions in this lemma are summarized in <Ref>.
Following <Ref> we have that:
* every abelian subgroup of Γ_6 is free of rank at most 3,
* the subgroup T=⟨ x^2, y^2, z^2 ⟩ is the only rank 3 maximal abelian subgroup of Γ_6,
* every rank 2 abelian subgroup of G is contained in T, in particular there are no maximal abelian subgroups of rank 2,
* the intersection of any two subgroups of Type II is ⟨ z^2 ⟩,
* the center of any conjugate of K_x (resp. K_y) is ⟨ x^2 ⟩ (resp. ⟨ y^2 ⟩),
* the intersection of any two subgroups of Type I that lie in conjugates of K_x (resp. K_y) is ⟨ x^2 ⟩ (resp. ⟨ y^2 ⟩),
* the centers of any conjugate of K_x and any conjugate of K_y intersect trivially,
* the intersection of any subgroup of Type I with any subgroup of Type II different from T is trivial,
* the intersection of T with any subgroup of Type I is either ⟨ x^2 ⟩ or ⟨ y^2 ⟩.
* Since G is a torsion free poly- group of rank 3, any abelian subgroup is free of rank at most 3, see <ref>.
* For any n-dimensional crystallographic group, the translation group is the only maximal abelian subgroup of rank n.
* Let B be a rank 2 abelian subgroup of Γ_6. If p(B) is finite, then B is subconjugate to either K_x or K_y, thus B must be contained in A⊆ T. If p(B) is infinite, then B is contained in Γ_6'. Thus by <ref> (2) B is contained in T.
* It is a direct consequence of the fact that Γ_6' is isomorphic to G_2 and <Ref> (3).
* By <ref> (1) all conjugates of K_x (resp. K_y) are of the form K_x^z^n (resp. K_y^z^n) for some n∈. It is easy to check that (x^2)^z = x^-2 and (y^2)^z = y^-2 directly from the presentation. The claim now follows from <ref>(2).
* This is a consequence of the previous item and <Ref> (4).
* By item (6) we have, for all n, that Z(K_x)^z^n∩ Z(K_y)^z^n=⟨ x^2⟩∩⟨ y^2⟩, and the latter intersection is clearly trivial since x^2 and y^2 are independent generators of the translation group T.
* Let B a subgroup of type I and C a subgroup of Type II different from T. By parts (2) and (3) both B and C must be infinite cyclic. By part (6) B either contains ⟨ x^2⟩ or ⟨ y^2⟩ as a finite index subgroup, and by part (4) C contains ⟨ z^2⟩ as a finite index subgroup. The claim follows by noting that ⟨ x^2⟩∩⟨ z^2⟩ and ⟨ y^2⟩∩⟨ z^2⟩ are trivial.
* Let B be a maximal abelian subgroup of Type I, which means that p(B) is finite. Since p(T) is infinite cyclic, we must have p(B ∩ T) = 0, so that B ∩ T ≤ ker(p) = ⟨ x^2, y^2 ⟩. By <ref> (5), the intersection of B with ⟨ x^2, y^2 ⟩ is the center of the conjugate of K_x or K_y containing B, and by part (5) this is either ⟨ x^2 ⟩ or ⟨ y^2 ⟩.
The following corollary is an immediate consequence of the preceding lemma.
Following <Ref> and <Ref>, every maximal chain in G is of one of the following types:
* 1 < ⟨ w ⟩< T where w is either x^2, y^2 or z^2,
* 1 < ⟨ w ⟩< C_w where w is either x^2, y^2 or z^2 and C_w is an infinite cyclic group that contains ⟨ w ⟩ as a subgroup of index 2,
We have Γ_6≃⋁_ℕ S^1.
As a consequence of <Ref>, all maximal chains in G are of one of the following types:
* {g}< g⟨ w ⟩< gT where w is either x^2, y^2 or z^2,
* {g} < g⟨ w ⟩< gC_w where w is either x^2, y^2 or z^2 and C_w is an infinite cyclic group that contains ⟨ w ⟩ as a subgroup of index 2,
Note that for a given choice of g ∈ G and subgroup C_w of Type I, there is a unique triangle of the form (2) in the above list, so we can remove the edge g < gC_w and the interior of the triangle without changing the homotopy type. After removing those triangles, each edge of the form {g} < g ⟨ w ⟩ is in a unique triangle, which is of the form (1). We can remove those edges and the interior of their containing triangles without changing the homotopy type either.
At this point, we have no triangles remaining, which shows G has the homotopy type of a graph, and hence of a wedge of circles (since it is connected). The complex obtained after removing the triangles still contains a copy of G', which has the homotopy type of a countable wedge of circles, and therefore so does G.
§ 3-MANIFOLDS MODELED ON
A very explicit classification of fundamental groups of manifolds modeled on can be found in <cit.>. In that paper, they split the classification into seven infinite families that they call Types 1-7. For convenience we collect the types into three clusters.
Semi direct products Groups of the form ^2⋊_φ where φ is a matrix of the form [ 1 n; 0 1 ] with n≥ 1. This corresponds to Type 1 groups in <cit.>.
Orientable crystallographic quotient Groups G that fit in a central extension of the form
1→→ G →^2⋊_A_n/n → 1
where A_n is one of the matrices in <ref>. These groups correspond to Types 2, 5, 6, and 7 in <cit.>.
Non-orientable crystallographic quotient We subdivide these groups into two Types that we will call Type pg and Type pgg respectively. Groups of Type pg are defined by means of the following presentation
E_k=⟨ a,b,c, α | [a,b]=c^2k, α c = c^-1α, [c,a]=[c,b]=1, α b = b^-1α c^-k, α^2=a ⟩
while groups of Type pgg are defined by means of the following presentation
Γ_k=⟨ a,b,c, α,β | [a,b]=c^2k, [c,a]=1,[c,b]=1, α c = c^-1α, β a = a^-1β c^k, β b=b^ -1β c^-k,
β^2=c,α^2=a, α b=b^-1α c^-k, βα=a^-1b^-1αβ c^-k-1⟩
where k in each case is a positive integer. These groups correspond to Types 3 and 4 in <cit.>.
§.§ Semi direct products
Let G=^2⋊_φ where φ is a matrix of the form [ 1 n; 0 1 ] with n≥ 1. Then
* every maximal abelian subgroup of G has rank 2,
* the intersection of any two maximal abelian subgroups equals Z(G),
* G≃⋁_ℕ S^1.
Consider the central extension
1→ Z → G → Q → 1
where Z=, and the first map is given by n↦ (n,0,0) and the second map is the quotient projection. Note that Q is isomorphic to ^2.
Notice that G is a poly- group of rank 3 that does not contain a ^3-subgroup, see <cit.>. On the other hand, it is not difficult to verify that Z is the center of G. Since Q is isomorphic to ^2, and the preimage in G of any infinite cyclic subgroup of Q is abelian of rank 2, we conclude that every maximal abelian subgroup of G is isomorphic to ^2 and it contains Z. This proves (1). Let us call p the surjective homomorphism G→ Q. As Q is torsion free, any maximal abelian subgroup of G isomorphic to must be equal to Z. Let B,C≅^2 be two maximal abelian subgroups of G; then p(B) and p(C) are maximal abelian subgroups of Q that are isomorphic to . Hence either p(B)=p(C) or p(B)∩ p(C) is trivial. This proves (2). Now (3) follows directly from <Ref>.
§.§ Orientable crystallographic quotient
Let G be the fundamental group of a 3-manifold modeled on and such that it fits in a central extension of the form
1→→ G →^2⋊_A_n/n → 1
where A_n is one of the matrices in <ref>.
Then G≃⋁_ℕ S^1.
Since the extension is central, we can use <Ref> to translate the problem to studying the poset generated by the maximal abelian subgroups of ^2⋊_A_n/n that have abelian preimage in G.
By <Ref> we have that every abelian subgroup of ^2⋊_A_n/n is either finite cyclic or a subgroup of ^2⋊_A_n{0}. The preimage of (any finite index subgroup of) ^2⋊_A_n{0} in G is clearly nonabelian, otherwise G contains a ^3 subgroup which is impossible in the presence of -geometry. Hence we get that the only abelian subgroups of ^2⋊_A_n/n with abelian preimage are the (infinite or finite) cyclic ones. Clearly the intersection of any two of these distinct maximal cyclic subgroups is trivial. Therefore the intersection of any two distinct maximal abelian subgroups of G is the kernel group ; now the claim follows from <Ref>.
§.§ Case pg
This is a family of groups, one for each positive integer k, and they correspond to Type 3 in <cit.>. We have the following presentation
E_k=⟨ a,b,c, α | [a,b]=c^2k, α c = c^-1α, [c,a]=[c,b]=1, α b = b^-1α c^-k, α^2=a ⟩
Let k be a positive integer. Then
E_k fits in the following short exact sequence
1→⟨ a,c ⟩→ E_k →⟨α̅⟩∗⟨b̅α̅⟩→ 1
In particular we have a splitting E_k= K_α∗_A K_bα, where K_α = ⟨ c, α⟩, K_bα = ⟨ c, bα⟩, and A=⟨ a,c ⟩.
Moreover, following <Ref>, E_k' is isomorphic to A⋊_φ where φ is the automorphism of A given by the matrix [ 1 2k; 0 1 ].
The relations that define E_k imply that the subgroup ⟨ c,a⟩ is stable under conjugation by b and α, therefore ⟨ a,c⟩ is a normal subgroup of E_k. It is clear that the quotient group p(E_k)=E_k/⟨ a,c⟩ is generated by α̅ and b̅α̅. Moreover p(E_k) admits the presentation ⟨α̅,b̅α̅| (α̅)^2=1, (b̅α̅)^2=1⟩ because bα bα=bb^-1α c^-kα=α c^-kα=c^kα^2=c^k a, thus p(E_k)= ⟨α̅⟩∗⟨b̅α̅⟩≅ D_∞. Since α^2=a and α c α^-1=c^-1, A is an index 2 subgroup of ⟨ c, α⟩ and this group is torsion free and nonabelian, thus ⟨ c,α⟩=K_α is the fundamental group of a Klein bottle; in a similar way ⟨ c, bα⟩=K_bα is the fundamental group of a Klein bottle.
To finish, from the presentation, we have that (bα^2)c(bα^2)^-1=c and (bα^2)a(bα^2)^-1=b α^2 b^-1=b a b^-1=c^2ka so the automorphism A→ A given by conjugation by bα^2 is represented by the matrix [ 1 2k; 0 1 ] with respect to the ordered base c,a.
In the following lemma we describe the maximal abelian subgroups of E_k and their intersections, all the conclusions in this lemma are summarized in <Ref>.
Following <Ref> we have that:
* every abelian subgroup of E_k is free of rank at most 2,
* every maximal abelian subgroup of Type II has rank 2 and the intersection of any two of such subgroups is ⟨ c ⟩,
* the center of K_α^(ba)^n (resp. K_bα^(ba)^n) is ⟨ c^2nka ⟩ (resp. ⟨ c^(2n+1)ka ⟩),
* let i,j ∈{α, bα}, if K_i^(ba)^n≠ K_j^(ba)^m, then Z(K_i^(ba)^n)∩ Z(K_j^(ba)^m) is trivial,
* the intersection of any two subgroups of Type I that lie in a single conjugate K_i^(ba)^n is Z(K_i^(ba)^n),
* the intersection of two subgroups of Type I is trivial if they belong to distinct conjugates of the factors of E_k,
* the intersection of A with a subgroup of Type I that is subconjugate to K_i^(ba)^n is Z(K_i^(ba)^n),
* the intersection of A with a subgroup of Type II is ⟨ c ⟩.
* the intersection of any subgroup of Type I with any subgroup of Type II is trivial,
* Since E_k is a torsion free virtually poly- group of rank 3, any abelian subgroup is free of rank at most 3. On the other hand E_k cannot have a ^3 subgroup as this possibility only happens, in the context of 3-manifolds, in the presence of a euclidean metric.
* Every subgroup of Type II is contained in E_k'. The claim follows directly from <Ref>.
* By <Ref> (1) all conjugates of K_α (resp. K_bα) are of the form K_α^(ba)^n (resp. K_bα^(ba)^n) for some n∈. It is easy to check that (α^2)^ba = c^2ka and ((bα)^2)^ba = (c^ka)^ba=c^3ka directly from the presentation. The claim now follows from <ref>(2).
* By the previous item, the centers Z(K_i^(ba)^n) and Z(K_j^(ba)^m) are of the form ⟨ c^j_nka ⟩ and ⟨ c^j_mka ⟩ for some j_n and j_m. It is easy to check that j_n≠ j_m, whether or not i=j, as long as Z(K_i^(ba)^n) ≠ Z(K_j^(ba)^m). Thus ⟨ c^j_nka ⟩∩⟨ c^j_mka ⟩=1 (we can think of these groups as lines in the lattice plane ^2 ≅⟨ c, a ⟩ with different slopes).
* This is (3) in <Ref>.
* Let B, C be Type I subgroups that lie in different conjugates K_i^(ba)^n and K_j^(ba)^m, respectively. Then by <Ref> (4), Z(K_i^(ba)^n) and Z(K_j^(ba)^m) are index 2 subgroups of B and C, respectively. By part (4), the centers have trivial intersection, and thus so do B and C.
* This is (4) in <ref>.
* Since A is maximal in E_k' and Z(E_k')=⟨ c ⟩, this item is a direct consequence of <Ref> (2).
* Let B be a Type I subgroup and C a Type II subgroup, this means that p(B)≅/2 and p(C)≅. Then p(B∩ C)⊂ p(B)∩ p(C)=1, thus B∩ C⊂ A=⟨ c,a⟩. Therefore B∩ C=B∩ C∩ A=(B∩ A)∩(C∩ A)= Z(K_i^(ba)^n)∩⟨ c⟩= ⟨ c^j_nk a⟩∩⟨ c⟩=1.
The following corollary is an immediate consequence of the preceding lemma.
Following <Ref> and <Ref>, every maximal chain in G is of one of the following types:
* 1 < ⟨ c ⟩< A where A=⟨ a,c ⟩,
* 1 < ⟨ c^mka ⟩< A where m∈,
* 1 < ⟨ c ⟩< B where B is a subgroup of Type II,
* 1 < ⟨ c^mka ⟩< C where m∈ and C is an infinite cyclic group that contains ⟨ c^mka ⟩ as a subgroup of index 2.
Let k be a positive integer, then E_k≃⋁_ℕ S^1.
As a consequence of <Ref>, all maximal chains in E_k are of one of the following types:
* {g}< g⟨ w ⟩< gA where w is either c or c^mka,
* {g}< g⟨ c ⟩< gB where B is a subgroup of Type II,
* {g} < g⟨ ac^mk⟩< gC where m∈ and C is a subgroup of Type I.
Note that for a given choice of g ∈ E_k, subgroup C of Type I and subgroup B of Type II there are unique triangles of the forms (3) and (2), respectively, so we can remove the edges g < gC and g<gB and the interiors of the triangles without changing the homotopy type. After removing those triangles, each edge of the form {g} < g ⟨ ac^mk⟩ or {g}<g⟨ c⟩ is in a unique triangle, which is of the form (1). We can remove those edges and the interior of their containing triangles without changing the homotopy type either.
At this point, we have no triangles remaining, which shows E_k has the homotopy type of a graph, and hence of a wedge of circles (since it is connected). The complex obtained after removing the triangles still contains a copy of E_k', which has the homotopy type of a countable wedge of circles, and therefore so does E_k.
§.§ Case pgg
This is a family of groups, one for each positive even integer k, and they correspond to Type 4 in <cit.>. We have the following presentation
Γ_k=⟨ a,b,c, α,β | [a,b]=c^2k, [c,a]=1,[c,b]=1, α c = c^-1α, β a = a^-1β c^k, β b=b^ -1β c^-k,
β^2=c,α^2=a, α b=b^-1α c^-k, βα=a^-1b^-1αβ c^-k-1⟩
Let k be a positive even integer. Then
* Γ_k fits in the following short exact sequence
1→⟨ a,c ⟩→Γ_k →⟨α̅⟩∗⟨β̅⟩→ 1
In particular we have a splitting Γ_k= K_α∗_A K_β, where K_α = ⟨ c, α⟩, K_β = ⟨ a, β⟩, and A=⟨ a,c ⟩,
* define γ= αβ, then γ^2=b and Γ_k' fits in the following short exact sequence
1→⟨ c,b ⟩→Γ_k' →⟨γ̅ ̅a̅⟩∗⟨γ̅⟩→ 1
Moreover Γ_k' is a group of Type pg.
* The relations that define Γ_k imply that the subgroup ⟨ a,c⟩ is stable under conjugation by b, α and β, therefore ⟨ a,c⟩ is a normal subgroup of Γ_k. The relations in the quotient group p(Γ_k) that are not trivial are β̅b̅=b̅^-1β̅, β̅^2=1, α̅^2=1, α̅b̅=b̅^-1α̅ and β̅α̅=b̅^-1α̅β̅. The last one implies that b̅=α̅β̅α̅β̅, and all of these equations imply that p(Γ_k) admits the presentation ⟨α̅,β̅|α̅^2=1,β̅^2=1⟩, thus p(Γ_k)= ⟨α̅⟩∗⟨β̅⟩≅ D_∞. Since α^2=a and α c α^-1=c^-1, A is an index 2 subgroup of ⟨ c, α⟩ and this group is not abelian, so ⟨ c,α⟩=K_α is the fundamental group of a Klein bottle; in a similar way, since β^2=c and β a β^-1=a^-1c^k, A is an index 2 subgroup of ⟨ a,β⟩=K_β, which is also the fundamental group of a Klein bottle.
* Following the presentation, we have
γ^2=αβαβ =α(a^-1b^-1αβ c^-k-1)β
=α(a^-1b^-1c^k+1αβ )β
=α a^-1b^-1c^k+1α c
=α a^-1b^-1c^kα
=a^-1α(b^-1α)c^-k
=a^-1α(α b c^k)c^-k=b
Since Γ_k'=⟨ a,c,αβ=γ⟩, we have aba^-1=c^2kb, cbc^-1=b, γ bγ^-1=γγ^2γ^-1=γ^2=b, aca^-1=c, ccc^-1=c and
γ cγ^-1=αββ^2β^-1α^-1=αβ^2α^-1=α cα^-1=c^-1.
These equations imply that ⟨ b,c⟩ is normal in Γ_k'. The group Γ_k'/⟨ c⟩ admits the presentation ⟨a̅, γ̅|γ̅a̅=a̅^-1γ̅⟩ that is isomorphic to ⟨a̅⟩⋊_-1⟨γ̅⟩. With this equivalence we have that Γ_k'/⟨ b=γ^2,c⟩=⟨a̅⟩⋊_-1⟨γ̅|γ^2⟩ that is isomorphic to ⟨γ̅ ̅a̅⟩∗⟨γ̅⟩. Since γ^2=b and γ c γ^-1=c^-1, ⟨ b,c⟩ is an index 2 subgroup of ⟨ c, γ⟩ and the latter group is not abelian, so ⟨ c,γ⟩=K_γ is the fundamental group of a Klein bottle; in a similar way ⟨ c, γ a⟩=K_γ a is the fundamental group of a Klein bottle. The automorphism of this amalgamation is given by aca^-1=c and a b a^-1=c^2kb; equivalently, the automorphism is given by the matrix [ 1 2k; 0 1 ], so Γ_k' is a group of Type pg.
In the following lemma we describe the maximal abelian subgroups of Γ_k and their intersections, all the conclusions in this lemma are summarized in <Ref>.
Following <Ref> we have that:
* every abelian subgroup of Γ_k is free of rank at most 2,
* as recorded in the bold lines of <Ref>, the diagram of subgroups of Type II (which are subgroups of the Type pg group Γ_k') and their intersections is as described in <Ref> and <Ref>, except that the generators a and b are interchanged,
* the center of K_α^(βα)^n (resp. K_β^(βα)^n) is ⟨ ac^-nk⟩ (resp. ⟨ c ⟩),
* let i,j ∈{α, β}, if K_i^(βα)^n≠ K_j^(βα)^m, then Z(K_i^(βα)^n)∩ Z(K_j^(βα)^m) is trivial,
* the intersection of any two subgroups of Type I that lie in a single conjugate K_i^(βα)^n is Z(K_i^(βα)^n),
* the intersection of two subgroups of Type I is trivial if they belong to distinct conjugates of the factors of Γ_k,
* the intersection of A with a subgroup of Type I that is subconjugate to K_i^(βα)^n is Z(K_i^(βα)^n),
* the intersection of A with a subgroup of Type II is either trivial or equal to ⟨ c ⟩.
* the intersection of any subgroup of Type I with any subgroup of Type II is either trivial or equal to ⟨ c ⟩.
* Since Γ_k is a torsion free virtually poly- group of rank 3, any abelian subgroup is free of rank at most 3. On the other hand Γ_k cannot have a ^3 subgroup as this possibility only happens, in the context of 3-manifolds, in the presence of a euclidean metric.
* Since every Type II subgroup of Γ_k is actually a subgroup of Γ_k', this claim is a direct consequence of <Ref> (2) and <Ref>. See also <Ref>.
* By <Ref> (1) all conjugates of K_α (resp. K_β) are of the form K_α^(αβ)^n (resp. K_β^(αβ)^n) for some n∈. It is easy to check that (α^2)^αβ = a^αβ=a^-1c^k and (β^2)^αβ =c^αβ=c^-1 directly from the presentation. The claim now follows from <ref>(2).
* By the previous item, the centers Z(K_i^(αβ)^m) and Z(K_j^(αβ)^n) are of the form ⟨ ac^mk⟩ and ⟨ c ⟩. Thus Z(K_i^(αβ)^m)∩ Z(K_j^(αβ)^n)=1 provided Z(K_i^(αβ)^m)≠ Z(K_j^(αβ)^n) (we can think of these groups as lines in the lattice plane ^2 ≅⟨ c, a ⟩ with different slopes).
* This is (3) in <Ref>.
* Let B and D be Type I subgroups that lie in different conjugates K_i^(αβ)^n and K_j^(αβ)^m, respectively. Then by <Ref> (4), Z(K_i^(αβ)^n) and Z(K_j^(αβ)^m) are index 2 subgroups of B and D, respectively. By part (4), the centers have trivial intersection, and thus so do B and D.
* This is (4) in <Ref>.
* This claim can be read from <Ref>.
* Let B be a Type I subgroup and C a Type II subgroup, this means that p(B)≅/2 and p(C)≅. Then p(B∩ C)⊂ p(B)∩ p(C)=1, thus B∩ C⊂ A=⟨ c,a⟩. Therefore B∩ C=B∩ C∩ A=(B∩ A)∩(C∩ A), and the latter intersection is either trivial or ⟨ c ⟩ by the two previous items.
Following <Ref> and <Ref>, every maximal chain in G is of one of the following types:
* 1 < ⟨ c ⟩< B where B is a subgroup of Type II within Γ_k',
* 1 < ⟨ c ⟩< ⟨ c,b ⟩,
* 1 < ⟨ c^mkb ⟩< C where m∈ and C is an infinite cyclic group that contains ⟨ c^mk b⟩ as a subgroup of index 2,
* 1 < ⟨ c^mkb ⟩< ⟨ c,b ⟩ where m∈,
* 1 < ⟨ c ⟩< D where D is a subgroup of Type I,
* 1 < ⟨ c ⟩< A where A=⟨ a,c ⟩,
* 1 < ⟨ c^mka ⟩< E where m∈ and E is an infinite cyclic group that contains ⟨ c^mka ⟩ as a subgroup of index 2,
* 1 < ⟨ c^mka ⟩<⟨ a,c⟩ where m∈,
Let k be a positive even integer, then Γ_k≃⋁_ℕ S^1.
This proof is very similar to that of <Ref>. Any maximal chain in Γ_k is obtained by multiplying one of the chains listed in <Ref> by a group element g.
For a given choice of g, the edge going from {g} to the maximal element of a chain in cases (1), (3), (5) or (7) is contained only in that triangle. Thus we can remove those edges and triangles without changing the homotopy type. After removing those, in each maximal chain of type (2), (4), (6) or (8), the edge from {g} to the middle element is contained in a unique triangle, so we can now remove those edges and triangles. After this, no triangles remain, showing Γ_k has the homotopy type of a graph, and thus of a wedge of circles (since it is connected).
The complex obtained after removing the triangles still contains a copy of Γ_k', which has the homotopy type of a countable wedge of circles, and therefore so does Γ_k.
§ 3-MANIFOLDS MODELED ON
By <cit.>, the fundamental groups of manifolds modeled on can be split into two categories that we described below.
Semi direct products Groups of the form ^2⋊_φ where φ is an automorphism of ^2 that does not fix any subgroup of rank 1.
Amalgams Groups of the form K∗_A K where K is the fundamental group of the Klein bottle, A=× 2≅^2, and the amalgamation is given by an automorphism φ A→ A that does not fix any subgroup of rank 1.
Let G=^2⋊_φ where φ is an automorphism of ^2 that does not fix any subgroup of rank 1 . Then
* every abelian subgroup of G is free of rank at most 2,
* the subgroup ^2⋊{0} is the unique maximal abelian subgroup of rank 2,
* the intersection of any two maximal abelian subgroups of G is trivial, and
* G≃⋁_ℕ S^1.
First we prove (1). Since G is a finitely generated and torsion-free poly- group of rank 3, every abelian subgroup of G is of the form ^n with n≤rank(G)=3. Notice that G cannot have a subgroup isomorphic to ^3, because it would have finite index by <ref>, but the only 3-manifold groups with this property are the ones modeled on 𝔼^3.
Item (2) follows from <cit.>. Since item (4) follows directly from item (3) and <Ref>, the rest of the proof deals with the proof of item (3).
Denote H=^2⋊{0}. Let C be a maximal abelian subgroup of G different from H. We claim that C∩ H is trivial. Let p G → be the projection onto the second factor of G. If C∩ H is not trivial, then it must be an infinite cyclic subgroup of the kernel of p, and therefore p(C∩ H) has rank zero, that is, p(C∩ H) is finite, and thus it is trivial. Hence C is contained in H, which contradicts the maximality of C; this proves the claim.
Let us pause our proof to show that comm_GC=C, where the commensurator is given by comm_GC={g∈ G | gCg^-1∩ C≠ 1}. As comm_GC is a subgroup of G, it is poly- by <Ref>. Assume comm_GC contains a subgroup L of rank 2, then L≅⋊ and therefore it contains a subgroup isomorphic to ^2, which is contained in H for being the unique maximal ^2 subgroup of G. Since comm_GC contains both L and C, and C∩ L is trivial, we have that comm_GC is of rank 3, and therefore has finite index in G by <Ref>. This implies that comm_GC also is of the form ^2⋊_ψ where ψ is an automorphism of ^2 that does not fix any subgroup of rank 1, in particular this group does not have any nontrivial normal cyclic subgroups by <cit.>. On the other hand, comm_GC normalizes a finite index subgroup of C (see for instance <cit.>), this leads to a contradiction. Thus we conclude comm_GC is of rank one, that is comm_GC is infinite cyclic. Since C⊆comm_GC, by maximality of C we have C=comm_GC.
Let C and D be two maximal abelian subgroups of G isomorphic to , such that C∩ D is nontrivial. Thus C and D are commensurable, and therefore comm_GC=comm_GD. By the previous paragraph, we conclude C=D. This finishes the proof.
In the following lemma we describe the maximal abelian subgroups of K∗_A K and their intersections, all the conclusions in this lemma are summarized in <Ref>.
Following <Ref> we have that:
* every abelian subgroup of G is free of rank at most 2,
* the subgroup A is the only rank 2 maximal abelian subgroup of G,
* the intersection of any two subgroups of Type II is trivial, and the intersection of any subgroup of Type II with A is trivial,
* The intersection of any subgroup of Type I with any subgroup of Type II is trivial,
* the intersection of any two subgroups of Type I that lie in a single conjugate K_i^x is Z(K_i^x),
* the intersection of A with a subgroup of Type I that is subconjugate to K_i^x is Z(K_i^x),
* if K_i^x≠ K_j^y, then Z(K_i^x)∩ Z(K_j^y) is trivial.
* Since G is a torsion free poly- group of rank 3, any abelian subgroup is free of rank at most 3. On the other hand G cannot have a ^3 subgroup as this possibility only happens, in the context of 3-manifolds, in the presence of a euclidean metric.
* Let H be a rank 2 abelian subgroup of G. Note that p(H) cannot be an infinite subgroup of ⋊_-1/2. Indeed, if p(H) were infinite it would mean that a generator of p(H) stabilizes the rank one subgroup H∩ A, which is impossible. Then H'=H∩ G' is a rank 2 abelian subgroup subconjugate to one of the factors of G, and we conclude from <Ref> that H is a subgroup of A.
* It follows directly from <Ref>(3).
* Let H_1 and H_2 be subgroups of Type I and II respectively. Then p(H_1) is finite and p(H_2) is infinite cyclic, so p(H_1) ∩ p(H_2) = 1. Therefore H_1 ∩ H_2 ⊆ ker p = A. But by the previous part, H_2 ∩ A = 1, and hence H_1 ∩ H_2 = 1 too.
* It follows directly from <Ref>(4).
* It follows directly from <Ref>(5).
* Note that Z(K_i^x)∩ Z(K_j^y)=1 is equivalent to Z(K_i)∩ Z(K_j^x^-1y)=1. Then, let us prove Z(K_i)∩ Z(K_j^x)=1, provided K_i≠ K_j^x. There are two cases:
Case 1. i=j Let us first handle the case in which x ∈ G'.
Since G' is isomorphic to ^2⋊_φ', we have that for a∈ A (in particular for a∈ Z(K_i^x)), xax^-1=φ'^(n)(a) for some n∈. Since K_i ≠ K_i^x, we have n ≠ 0. Then
Z(K_i)∩ Z(K_i^x)=Z(K_i)∩φ'^(n)(Z(K_i))=1.
where the last equality holds because φ' doesn't fix any rank 1 subgroup of A.
Now, we handle the general case. Since G' has index 2, then x^2∈ G'. By the previous argument,
Z(K_i)∩ Z(K_i^x^2)=1. If we had Z(K_i)∩ Z(K_i^x) ≠ 1, let a be a generator of that intersection.
We have xax^-1 = a^± 1, and thus x^2 a x^-2 = a, contradicting that Z(K_i)∩ Z(K_i^x^2)=1.
Case 2. i≠ j. Note that H=⟨ K_1,K_2^x⟩ has finite index in G. Hence the manifold M̃/H is a 3-manifold modeled on , where M̃ is the universal cover of the initial 3-manifold. Hence, H is isomorphic to K_1∗_A K_2 by an automorphism ψ A→ A that does not fix any subgroup of rank 1. Since Z(K_1)∩ Z(K_2^x)⊂ Z(H)=1, the result follows.
The following corollary is an immediate consequence of the preceding lemma.
Following <Ref> and <Ref>, every maximal chain in G is of one of the following types:
* 1 < L with L of Type II,
* 1< Z(K_i^x) < A for some x∈ G and i∈{1,2},
* 1< Z(K_i^x) < C for some x∈ G and i∈{1,2}, and C of Type I.
Let G be a group of the form K∗_A K where A=× 2≅^2 and the amalgamation is given by an automorphism φ A→ A that does not fix any subgroup of rank 1. Then, G≃⋁_ℕ S^1.
As a consequence of <Ref>, all maximal chains in G are of one of the following types:
* {g} < gL with L of Type II,
* {g} < gZ(K_i^x) < gA for some x∈ G and i∈{1,2},
* {g} < gZ(K_i^x) < gC for some x∈ G and i∈{1,2}, and C of Type I.
Note that for a given choice of g ∈ G and subgroup C of Type I, there is a unique triangle of the form (3) in the above list, so we can remove the edge g < gC and the interior of the triangle without changing the homotopy type. After removing those triangles, each edge of the form {g} < g Z(K_i^x) is in a unique triangle, which is of the form (2). We can remove those edges and the interior of their containing triangles without changing the homotopy type either.
At this point, we have no triangles remaining, which shows G has the homotopy type of a graph, and hence of a wedge of circles (since it is connected). The complex obtained after removing the triangles still contains a copy of G' (comprising the edges of the form (1)), which has the homotopy type of a countable wedge of circles, and therefore so does G.
§ COMPUTATIONS FOR THE SPHERICAL CASE USING GAP
We will exhibit GAP code that can be used to verify that G, and therefore also G, is of height 1.
First, we define a function to compute the maximal sets under inclusion in a given family of sets. A set s in the family is maximal if the list of sets containing s has length one (that is, consists solely of s itself):
[language=GAP]
maximal:=sets->Filtered(sets,
s->Length(Filtered(sets,t->IsSubset(t,s)))=1);
Next we define functions to compute the maximal abelian subgroups of a group G, and to compute the family of pairwise intersections of maximal abelian subgroups:
[language=GAP]
maxAbSub := G->maximal(Filtered(AllSubgroups(G), IsAbelian));
intMaxAbSub := G -> List(Combinations(maxAbSub(G),2),
P->Intersection(P[1],P[2]));
To check whether a group G has G of height 1, we simply check if all pairwise intersections of maximal abelian subgroups are equal to the center of G:
[language=GAP]
isHeight1 := G -> ForAll(intMaxAbSub(G), A -> A = Center(G));
When a group G has G of height 1, the geometric realization of G is a graph, and thus homotopy equivalent to a wedge of circles. We can compute how many circles by computing the Euler characteristic of the graph:
[language=GAP]
circles := function(G)
local maxAb, Z, edges, vertices;
Assert(0, isHeight1(G));
maxAb := maxAbSub(G);
Z := Center(G);
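# vertices of the graph: the cosets of the maximal abelian subgroups together
# with the cosets of the center; each coset of the center lies in exactly one
# coset of every maximal abelian subgroup, which gives the edge count below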
edges := Index(G,Z)*Length(maxAb);
vertices := Sum(maxAb,A->Index(G,A)) + Index(G,Z);
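# a connected graph is a wedge of (edges - vertices + 1) circles, i.e., 1 - Euler characteristic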
return edges - vertices + 1;
end;
Now we can easily compute the homotopy type of P_48:
[language=GAP]
gap> F := FreeGroup("x", "y");; x := F.1;; y := F.2;;
gap> G := F / [x^(-2) * (x*y)^3, x^(-2) * y^4, x^4];; IdGroup(G);
[ 48, 28 ]
gap> isHeight1(G);
true
gap> circles(G);
167
Similarly, we can compute the homotopy type of P_120:
gap> G := F / [x^(-2) * (x*y)^3, x^(-2) * y^5, x^4];; IdGroup(G);
[ 120, 5 ]
gap> isHeight1(G);
true
gap> circles(G);
1079
Finally, we compute the number of circles in P'_8· 3^m. This computation is more involved since we are not talking about a single group but rather about a family of groups, one for each m ≥ 1. The proof of <Ref> outlines the computation leaving only a couple of claims to check with GAP, which we now do.
The first claim is that x and y do not commute in P'_8· 3^m. To verify this, note that P'_24 = P'_8· 3^1 is a quotient of P'_8· 3^m for any m ≥ 1, so it suffices to check x and y do not commute in P'_24:
gap> F := FreeGroup("x", "y", "z");;
gap> x := F.1;; y := F.2;; z:=F.3;;
gap> G := F / [x^2 * y^(-2), x^2 * (x*y)^(-2), z^3,
(x^z) * y^(-1), (y^z) * (x*y)^(-1)];;
gap> IdGroup(G);
[ 24, 3 ]
gap> IsAbelian(Subgroup(G, [G.1, G.2]));
false
Having verified that claim, the proof of <Ref> shows that all P'_8· 3^m are a wedge of the same number of circles, so to verify the number of circles claimed there it suffices to check P'_24:
gap> isHeight1(G);
true
gap> circles(G);
39
|
http://arxiv.org/abs/2307.05104v1 | 20230711082608 | A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI | [
"Udo Schlegel",
"Daniel A. Keim"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
University of Konstanz, Universitätsstraße 10, 78464 Konstanz, Germany
{u.schlegel,daniel.keim}@uni-konstanz.de
A Deep Dive into Perturbations
as Evaluation Technique for Time Series XAI
Udo Schlegel10000-0002-8266-0162 Daniel A. Keim10000-0001-7966-9740
August 12, 2023
==========================================================================
Explainable Artificial Intelligence (XAI) has gained significant attention recently as the demand for transparency and interpretability of machine learning models has increased.
In particular, XAI for time series data has become increasingly important in finance, healthcare, and climate science.
However, evaluating the quality of explanations, such as attributions provided by XAI techniques, remains challenging.
This paper provides an in-depth analysis of using perturbations to evaluate attributions extracted from time series models.
A perturbation analysis involves systematically modifying the input data and evaluating the impact on the attributions generated by the XAI method.
We apply this approach to several state-of-the-art XAI techniques and evaluate their performance on three time series classification datasets.
Our results demonstrate that the perturbation analysis approach can effectively evaluate the quality of attributions and provide insights into the strengths and limitations of XAI techniques.
Such an approach can guide the selection of XAI methods for time series data, e.g., focusing on return time rather than precision, and facilitate the development of more reliable and interpretable machine learning models for time series analysis.
§ INTRODUCTION
Artificial intelligence (AI) has become an integral part of our daily lives, from the personalized advertisement we receive on social media to conversational AI (chatbots) answering questions of users and customers using deep neural networks.
However, as the complexity of deep neural network models increases, so does the difficulty of understanding how they arrive at their decisions <cit.>.
A lack of interpretability can lead to severe consequences in critical domains such as finance, healthcare, and transportation, including financial losses, medical errors, and even loss of life if deployed complex models provide wrong decisions <cit.>.
One promising approach to addressing such issues is through the usage of explainable artificial intelligence (XAI), which seeks to provide insights into the inner workings of complex models and the factors that drive their decision-making <cit.>.
One particular area of interest is time series data, which is characterized by the sequential nature of its observations and the interdependencies between them, as more sensors generate a massive amount of data and more tasks are tackled by complex models <cit.>.
In recent years, a growing body of research has focused on developing XAI techniques tailored explicitly for time series data <cit.>.
These techniques often rely on the concept of attributions, which aim to identify the contributions of individual features and time points to the overall prediction made by a model <cit.>.
By providing insights into which parts of the input data are most relevant to the output, attributions can help users understand the reasoning behind the model's decision-making process <cit.>.
However, the evaluation of such attributions is not trivial <cit.>.
To address the challenge of evaluating the quality of explanations for time series data, perturbation analysis has emerged as a promising evaluation technique <cit.>.
Perturbation analysis involves systematically modifying the input data and assessing the impact on the attributions generated by XAI methods <cit.>.
By perturbing the input data, it is possible to evaluate the robustness of the explanations provided by XAI methods <cit.>.
However, the effectiveness of perturbation analysis for evaluating the quality of attributions for time series data has not been extensively studied <cit.>.
In this paper, we apply attribution techniques from various fields to a convolutional neural network trained on time series classification data to evaluate and inspect the generated attributions in detail using perturbations, i.e., by systematically altering the input data and observing the effect on the model's output.
We investigate the performance of attribution techniques compared to each other based on the perturbation analysis result and explore the perturbation changes based on these attributions to gain insights into the model.
Through such an analysis, we can identify spurious correlations and shortcuts in the complex models and thus enable developers to potentially improve models by debugging datasets.
We show that our convolutional neural network trained on time series classification learned certain shortcuts to achieve state-of-the-art performance.
Based on these experiments and results, we provide guidelines for the application of attribution techniques for time series classification and release our evaluation framework to investigate other attribution techniques.
Thus, we contribute:
(1) an in-depth analysis of attribution techniques on time series classification for deep learning models using a perturbation analysis,
(2) insights into convolutional neural networks trained on time series based on the generated attributions,
(3) guidelines and a framework for applying attribution techniques for time series models with a perturbation analysis card for reporting.
We first look into related work, and then we introduce the perturbation analysis methodology and the experiment setup we use for our deep dive.
Here we also propose perturbation analysis cards as a guideline to report the results of an evaluation.
Next, we present our results and discuss the impact of our conclusions for attribution techniques applied to time series.
Lastly, in future work, we motivate new measures for the evaluation of attributions on time series data.
The results and source code of the experiments are available online at:
https://github.com/visual-xai-for-time-series/time-series-xai-perturbation-analysishttps://github.com/visual-xai-for-time-series/time-series-xai-perturbation-analysis
§ RELATED WORK
Explainable AI (XAI) has accelerated in the last few years through several surveys <cit.> and techniques, e.g., LIME <cit.> and SHAP <cit.>.
Attributions are especially prevalent in the image domain, as heatmap explanations are easy for users to understand <cit.>.
Some theoretical works dig into the backgrounds of why models learn certain shortcuts to solve tasks <cit.> and thus enable further explanations for decisions.
However, evaluating explanations is still a slowly growing area with limited work toward benchmarking different techniques against each other <cit.>.
Further, shortcuts or spurious correlations are not trivial to detect in explanations and require an in-depth analysis to identify <cit.>.
Some works started to collect possible evaluation techniques <cit.> and categorized these into five measurements: mental model, explanation usefulness and satisfaction, user trust and reliance, human-AI task performance, and computational measures.
The first few measures focus on evaluating with or in cooperation with humans and are thus heavily influenced by human factors.
The computational measures exclude human factors and focus on purely automatic evaluation of explanations.
In this work, we inspect the computational measures and, more precisely, the explainer fidelity of the attribution technique on the model to show how the attributions fit the model.
XAI for time series classification (TSC) incorporates previously proposed explanation techniques from other fields and introduces time dependence into some of these techniques <cit.>.
Theissler et al. <cit.> categorize possible explanations for TSC into time point, subsequence, and instance explanations.
All these operate on a different level of the time series and are thus unique in their explanation and evaluation.
In this work, we tackle time point explanations and, to be more precise, attributions to highlight and explore shortcuts and spurious correlations.
As Schlegel et al. <cit.> and others <cit.> demonstrated, attribution techniques such as LIME <cit.>, SHAP <cit.>, LRP <cit.>, GradCAM <cit.>, Integrated Gradients <cit.>, and more <cit.>, produce working attributions on time series to extract explanations from a model.
However, in most cases, e.g., Mercier et al. <cit.>, only purely computational measures are applied to the attributions, which are not inspected further to gain deeper insights.
Schlegel et al. <cit.> started by using a perturbation analysis on attribution techniques applied to TSC using various perturbation functions to highlight that techniques for images and text are also working on time series.
Based on such preliminary experiments, they enhanced their approach with additional perturbation functions to showcase deeper insights into the fidelity evaluation <cit.>.
Mercier et al. <cit.> enhanced these perturbations with further measures from the image domain, such as (in)fidelity and sensitivity <cit.>.
Simic et al. <cit.> extended the proposed methods by Schlegel et al. <cit.> with out-of-distribution detecting functions and gave guidelines for the selection of attribution techniques and the size of the window for the perturbation.
Turbe et al. <cit.> enhanced previous approaches with another metric to improve the comparison of the attribution techniques and the ability to demonstrate their fidelity towards the model.
However, none of these approaches looks into the attributions and the produced values themselves to investigate the techniques behind the attributions and the models further.
Thus, an in-depth analysis is needed to investigate the attributions generated for time series classification models.
§ PERTURBATION ANALYSIS
We use the perturbation analysis approach by Schlegel et al. <cit.> to generate attributions, verify, and compare them using the proposed perturbation function strategies <cit.>.
We extend the comparison by calculating the Euclidean and cosine distance between the original and the perturbed time series instance and the Euclidean and cosine distance between the original attributions of the dataset and the attributions of the perturbed instances of the dataset.
Collecting these results can help us have a more in-depth analysis of the attribution techniques and reveal relations between attributions and models.
However, we first need to establish the general perturbation analysis.
Let D = (X, Y) be a time series classification dataset with X as the time series samples and Y as the time series labels.
X = {ts_1, ts_2, ..., ts_n} contains n time series samples with m time points for each sample represented as ts = {tp_1, tp_2, ..., tp_m}, where tp_i is the value of the ith time point of ts.
Y = {l_1, l_2, ..., l_n} contains n labels, one label for each time series sample.
Let M(ts, θ) = y' be a time series classification model which predicts a label y' based on a time series input ts and has the parameters θ.
Let A(X, M, θ) be an XAI technique for generating attributions for the time series data.
The original attributions for X generated by A can be represented as A(X, M, θ) = {a_1, a_2, ..., a_m}, where a_i is the attribution score for the ith time point of X, M the time series classification model for which the attributions are calculated, and θ the parameters of the attribution technique.
To perform perturbation analysis, we introduce a perturbation function g that modifies X in a controlled manner.
Specifically, we define a perturbed time series dataset X' as:
X' = g(X, A, ξ)
Our perturbation function g modifies the dataset X based on the attributions A and a threshold ξ.
The value used for the modification can be changed and depends on the selected function g, e.g., exchanging the selected time points with zero.
The threshold ξ can be set by hand or by some other function, e.g., to the 90th percentile of the attributions, so that every time point whose attribution a_i lies above the threshold is modified to the previously chosen value, e.g., zero.
<ref> demonstrates the approach with zero perturbations on attributions with high values.
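To make this step concrete, the following sketch shows one possible implementation of such a perturbation function with NumPy; the function name perturb, the per-sample percentile threshold, and the substitution with zero are our own illustrative choices and are not prescribed by the approach.
[language=Python]
import numpy as np

def perturb(X, A, percentile=90, value=0.0):
    # X: array of shape (n_samples, n_timepoints) with the original time series
    # A: array of the same shape with the attributions for X
    X_perturbed = X.copy()
    # per-sample threshold xi, here the given percentile of the attribution values
    xi = np.percentile(A, percentile, axis=1, keepdims=True)
    # modify every time point whose attribution lies above the threshold
    X_perturbed[A > xi] = value
    return X_perturbed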
The original X and the perturbed dataset X' get predicted with the model M to get M(X) = Y' and M(X') = Y”.
Based on Schlegel et al. <cit.>, we incorporate a quality metric qm, e.g., accuracy, to compare the performance of the model M with the original X and the perturbed dataset X'.
For the time series classification, we assume that the qm decreases after the original data changes, and thus the labels are not fitting anymore <cit.>.
We further assume that a fitting attribution technique decreases the performance more heavily, as the most relevant parts of the input data get perturbed <cit.>.
Thus, we assume:
qm(Y”, Y)≤ qm(Y', Y)
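As an illustration of this fidelity check, a minimal sketch is given below; it assumes a scikit-learn-style accuracy function and a model object with a predict method, which are our assumptions and not part of the original framework.
[language=Python]
from sklearn.metrics import accuracy_score

def fidelity_check(model, X, X_perturbed, Y):
    Y_original = model.predict(X)             # Y' in the notation above
    Y_perturbed = model.predict(X_perturbed)  # Y'' in the notation above
    qm_original = accuracy_score(Y, Y_original)
    qm_perturbed = accuracy_score(Y, Y_perturbed)
    # a fitting attribution technique should lower the score after the perturbation
    return qm_perturbed <= qm_original, qm_original - qm_perturbed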
However, in some cases, the scores are very similar <cit.>, and a deeper investigation into the attributions is necessary to find similarities or dissimilarities in the relevances of the techniques.
Thus, we do not only compare the quality metrics but also focus on the distances between the original X and the perturbed X' datasets.
We apply the Euclidean and cosine distances to the datasets, as these are common distance functions for time series <cit.>, to quantify the changes introduced by the perturbation function g.
We define the Euclidean distance as:
Euc(X, X') = √(∑_i=1^n (ts_i - ts_i')^2)
where X = ts_1, ts_2, ..., ts_n and X' = ts_1', ts_2', ..., ts_n' are the two time series being compared.
And we define the cosine distance as:
Cos(X, X') = 1 - ∑_i=1^n ts_i × ts_i'/√(∑_i=1^n ts_i^2)×√(∑_i=1^n ts_i'^2)
where X = ts_1, ts_2, ..., ts_n and X' = ts_1', ts_2', ..., ts_n' are the two time series being compared.
These changes enable us to compare the attributions not only on a performance level but on a raw level directly on the data.
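A per-sample computation of these two distances could look as follows; this is a sketch with NumPy that mirrors the two formulas above, while the aggregation over the whole dataset is left to the caller.
[language=Python]
import numpy as np

def euclidean_distance(ts, ts_perturbed):
    return np.sqrt(np.sum((ts - ts_perturbed) ** 2))

def cosine_distance(ts, ts_perturbed):
    dot = np.dot(ts, ts_perturbed)
    norms = np.linalg.norm(ts) * np.linalg.norm(ts_perturbed)
    return 1.0 - dot / norms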
§ EXPERIMENTS WITH PERTURBATION ANALYSIS
For our analysis, we look into the time series that changed and those that did not change during the perturbation analysis.
We especially want to understand the attribution distributions to investigate which attribution techniques produce explanations that fit the models, i.e., have high fidelity <cit.>.
Under our assumptions, fitting explanations are those produced by techniques that change the prediction of more samples in a perturbation analysis <cit.>.
However, a general measure and metric for evaluating explanations is essential; another factor is the attribution values themselves, as these can also hide information or present spurious correlations <cit.>.
E.g., the question of how attributions are distributed over the techniques arises.
To answer such questions and others, we use the changes from Y' (the prediction on the original data) to Y” (the prediction on the perturbed data) to look into the samples that changed their prediction and those that did not.
We especially want to know when a change in the prediction happened, e.g., after how many removals based on the attributions and the perturbation strategy.
Thus, we look at the prediction changes from one class to the other.
E.g., in a binary classification with the assumption from above, the predictions change from one to the other class to demonstrate that the attributions highlight relevant time points for the model.
Thus, we slowly perturb more and more values from the time series until there is a change in prediction.
We use the percentile values (99, 98, ..., 25) as a threshold for the perturbation and record when the change happens.
Further, we collect the skewness of the attributions of the changed and unchanged predictions.
Such an exploration of the attribution distributions enables us to inspect patterns inside the attributions generated by different techniques.
Also, the distributions of the skewness provide another factor for the comparison of the attribution techniques.
Lastly, we collect not only the skewness but also the Euclidean and cosine distances between the original sample and the perturbed instance, separately for the changed and unchanged predictions.
All these different collected statistics and properties can help us to identify various findings, insights, and correlations in the attribution techniques as we collect as much data from our perturbation analysis as possible.
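One way to realize this per-sample sweep is sketched below; it reuses the illustrative perturb and distance functions from above and again assumes a model with a predict method, so it is an assumption-laden sketch rather than the exact implementation of the released framework.
[language=Python]
import numpy as np
from scipy.stats import skew

def sweep_until_change(model, ts, attribution, percentiles=range(99, 24, -1)):
    # predict the original instance once, then perturb with decreasing thresholds
    original_prediction = model.predict(ts[np.newaxis, :])[0]
    for p in percentiles:
        perturbed = perturb(ts[np.newaxis, :], attribution[np.newaxis, :], percentile=p)
        new_prediction = model.predict(perturbed)[0]
        if new_prediction != original_prediction:
            # record when the change happens together with properties of the instance
            return {"changed": True, "percentile": p, "new_class": new_prediction,
                    "skewness": skew(attribution),
                    "euclidean": euclidean_distance(ts, perturbed[0]),
                    "cosine": cosine_distance(ts, perturbed[0])}
    return {"changed": False, "skewness": skew(attribution)}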
Summary –
Overall, we have the following dimensions we want to experiment on:
a) attribution techniques,
b) perturbation strategy.
We collect and analyze the following properties:
a) mean of the raw data samples of the changed and unchanged predictions;
b) skewness of attributions based on the changed and unchanged predictions after the perturbation;
c) new class distributions of the changed and unchanged predictions after the perturbation;
d) amount of relevant attributions needed to perturb an instance to another class prediction.
<ref> presents the collected properties using a perturbation analysis card with various statistics aggregated and visualized for easier analysis.
We created these perturbation cards for all the experiments.
Hypotheses –
After we established our experiment setup, we generated hypotheses around the results of the experiment on the basis of other work.
Based on the preliminary experiments by Schlegel et al. <cit.>, we generated the hypothesis that SHAP or SHAP derivatives will lead to the best results for the TSC task.
Based on the results of Simic et al. <cit.>, we will further look into the other attributions and double-check the results of Schlegel et al. <cit.> and the SHAP results even if SHAP results are less consistent <cit.>.
Based on Simic et al. <cit.>, we further look into the different perturbation strategies as we hypothesize that using one strategy is not enough to find a suitable attribution technique.
Based on Geirhos et al. <cit.>, we also want to check whether there are patterns in the data that the attributions show as relevant, in order to find shortcuts the model learned to classify the data.
E.g., using certain maximum or minimum values to separate one class from the other in a binary classification problem.
Perturbation Analysis Card –
The perturbation analysis card is our proposed approach to reporting the results of our perturbation analysis strategies and techniques.
<ref> shows such a perturbation analysis card with meta information (S), bar charts about the switch from one class to another (C), bar charts for the distribution of distances (D), statistics about the attributions (A), and a line plot visualization about the raw time series data (R).
Starting on top, <ref> (S), a short introduction presents a description of the dataset, the attribution technique, and the perturbation strategy.
Right under the description, a stacked vertical bar chart shows a short glimpse of how good or bad the overall perturbation was.
A well-performing attribution technique shows mostly blue in this bar chart, while a poorly performing one shows mostly orange.
Next to it, the exact numbers of changed and unchanged samples are given, so that cards can be compared at a glance.
<ref> (C) gives a detailed view of the perturbation and the changes there.
The bar chart on the left visualizes the classes of the changed and unchanged predictions.
For the changed prediction, the visualization also further presents the classes before and after the perturbation.
Such visualization can help to identify spurious correlations as a model could, for instance, learn one feature of one class for the prediction.
The bar chart on the right at (C) shows the number of perturbed values needed to change the prediction.
The fewer changes needed, the better the attribution can identify relevant values.
In <ref> (D), the histograms of the distances between the perturbed and the original instances are shown.
The Euclidean distances on top of (D) and the cosine distances on the bottom of (D) can help to find clusters of changes needed to perturb the samples by revealing a trend towards a certain distance range.
Also, the distances can be used to compare the individual attribution techniques against each other.
A smaller distance range, together with a lower number of perturbed values, indicates a more focused technique.
<ref> (A) visualizes more statistical information about the attributions.
The plot on top of (A) shows the skewness of the attributions of the samples of the dataset.
On the bottom, the means of the attributions are visualized.
Through these, a general trend of the changed and unchanged samples and their attributions can be seen.
Especially, outliers are interesting as a starting point for deeper analysis with other methods and visualizations.
Lastly, in <ref> (R), the per-time-point means of the changed and unchanged samples can be inspected.
For every time point, the mean over the respective subset (changed or unchanged) of the dataset is calculated and visualized.
For a standardized dataset such as FordA, these means slowly converge to zero.
The visualization makes it possible to spot large differences between the changed and the unchanged samples.
§ RESULTS AND DISCUSSION
Our current experiment setup evolves around an in-depth analysis of the attributions of seven attribution techniques (Saliency, IntegratedGradients, DeepLift, Occlusion, GradientShap, DeepLiftShap, KernelShap) based on the implementations in Captum [Captum is a Pytorch-based XAI module for Python: <https://captum.ai/>].
We incorporate 16 perturbation strategies, two based on Simic et al. <cit.>, six based on Schlegel et al. <cit.>, and eight extensions we describe later.
We implemented nine single time point perturbations (zero, mean, inverse, dataset mean, dataset max, dataset min, OOD high, OOD low, random between min and max) and seven subsequence perturbations (zero, subsequence mean, dataset mean, inverse, OOD high, OOD low, random between min and max).
The subsequence length is fixed to ten percent of the length of the data.
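Two of these strategies are sketched below under our own function names to illustrate the difference between a single-time-point and a subsequence perturbation; the out-of-distribution offset is an assumption chosen to lie clearly below the dataset minimum.

```python
import numpy as np

def perturb_single_points(ts, attribution, percentile=90, value=0.0):
    """Replace the single time points with the highest attributions by `value`."""
    perturbed = ts.copy()
    perturbed[attribution >= np.percentile(attribution, percentile)] = value
    return perturbed

def perturb_subsequence_ood_low(ts, attribution, length_frac=0.1, ood_offset=10.0):
    """Replace the subsequence centred on the most relevant time point by an
    out-of-distribution low value (series minimum minus an offset)."""
    length = max(1, int(len(ts) * length_frac))   # ten percent of the series length
    center = int(np.argmax(attribution))
    start = max(0, center - length // 2)
    perturbed = ts.copy()
    perturbed[start:start + length] = ts.min() - ood_offset
    return perturbed
```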
We focus on the UCR benchmark datasets <cit.> and take three of the most extensive datasets (FordA, FordB, ElectricDevices) to investigate data characteristics.
However, our approach can be applied to any time series classification dataset.
The FordA and FordB are sensor data with a length of 500 and provide an anomaly detection binary classification task.
FordA has 3601 samples in the training set and 1320 in the test set.
FordB has 3636 samples in the training set and 810 in the test set.
The ElectricDevices dataset is shorter, with only 96 time points.
However, the dataset has 8926 training samples and 7711 test samples.
We investigate two architectures of convolutional neural networks.
The first architecture tackles the FordA and FordB datasets.
The model consists of three 1D convolutional layers with a kernel size of three and increases the channels from one to 10 to 50 to 100.
A max-pooling of three after the convolutional layer decreases the size again.
A ReLU activation function is then applied.
Afterward, a fully connected layer with 100 neurons and a ReLU activation processes the feature maps from the convolutional layers.
Lastly, another fully connected layer with two neurons and a softmax activation on top classifies the data.
We train the model with a batch size of 120 and the Adam optimizer <cit.>.
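A minimal PyTorch sketch of this first architecture, under our reading of the description (the layer ordering, padding, and the handling of the final softmax are assumptions on our part), could look as follows:

```python
import torch
import torch.nn as nn

class FordCNN(nn.Module):
    """Sketch of the CNN for FordA/FordB: three 1D conv blocks, then two FC layers."""
    def __init__(self, n_classes=2, input_length=500):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 10, kernel_size=3), nn.MaxPool1d(3), nn.ReLU(),
            nn.Conv1d(10, 50, kernel_size=3), nn.MaxPool1d(3), nn.ReLU(),
            nn.Conv1d(50, 100, kernel_size=3), nn.MaxPool1d(3), nn.ReLU(),
        )
        with torch.no_grad():  # infer the flattened feature size from a dummy input
            n_flat = self.features(torch.zeros(1, 1, input_length)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_flat, 100), nn.ReLU(),
            nn.Linear(100, n_classes),
            nn.Softmax(dim=1),  # described output; drop it if nn.CrossEntropyLoss is applied to logits
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FordCNN()
optimizer = torch.optim.Adam(model.parameters())  # batch size of 120 set in the data loader
```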
The second architecture is trained on the ElectricDevices data and introduces a residual from the input to the fully connected layers.
The original input gets downsampled using a 1D convolution with kernel size seven for the residual addition right before the fully connected layers.
We train our models using the cross-entropy loss for multi-class classification on the datasets for 500 epochs.
Our models achieve training/test accuracies of 0.99/0.89 on FordA, 0.99/0.70 on FordB, and 0.94/0.64 on ElectricDevices, in each case demonstrating overfitting to the training data.
As Ismail Fawaz et al. <cit.> showed, even our simple models are not too far from the state-of-the-art with other more sophisticated models.
However, as we want to analyze our model, we look into the attributions of the training data; the overfitting is thus a convenient opportunity to investigate spurious correlations and shortcuts <cit.>.
Results –
First, we start with the FordA dataset; next, we will present the FordB results, and lastly, the ElectricDevices dataset.
FordA demonstrates interesting results regarding the attribution techniques and the perturbation strategies.
The best working strategy is setting the perturbed value to an out-of-distribution low <cit.> on a subsequence <cit.>, as shown in <ref>.
In particular, the saliency method <cit.> achieves the best result regarding the flipped predictions, flipping 2088 of 3601 samples, as also seen in <ref>.
However, the KernelSHAP method <cit.> comes close with 2049 flips, just 39 fewer.
Also, as seen in the plot on the right of <ref>, the out-of-distribution low strategy changes the class quite late, i.e., only after many values have been perturbed.
Such an effect is unwanted in many cases, as the model is effectively subjected to an adversarial attack outside the distribution of the original data.
In some cases, such a method can be used to test the model on data shifts, where, for example, the attributions can shift heavily.
However, for our focus on the model itself, such an adversarial attack is interesting but does not reveal the internal decision-making for the dataset we are interested in.
However, we also notice that the perturbation strategy heavily influences the best working method.
If we switch, for example, to a perturbation to zero, we see Occlusion <cit.> as the winner in <ref>.
Such a change in the best working technique demonstrates that a perturbation analysis with just one strategy is not enough to compare attribution techniques.
We need multiple strategies to decide on one technique.
However, we can also further take a deeper look into the attributions themselves.
Focusing on the skewness of the attributions and their distributions, as seen in <ref>, we can already see trends indicating which techniques allow an easier inspection and how well each technique performs in the perturbation analysis.
In particular, KernelSHAP in <ref> shows a clear pattern with two nearly non-overlapping distributions.
Such a clear separation can help us to decide on one attribution technique.
The model for the FordB dataset is a bit worse than for the FordA dataset, which leads, in most cases, to worse performance in the perturbation analysis <cit.>.
However, again the KernelSHAP and Saliency generate good working attributions for the change in the prediction for the perturbation to zero strategy.
For this dataset, KernelSHAP changes the prediction of 3472 of 3636 samples, as seen in <ref>.
Especially interesting is the distribution of the skewness of the attributions.
A more detailed analysis of these two peaks could lead to further insights into the model and the attributions, but such an analysis needs another visualization, e.g., projecting the attributions into a scatter plot.
However, if we further inspect the corresponding perturbation analysis card in <ref>, we can see in the plot on the right that the change only happens after a large number of values are removed from the original sample.
Such a result is also observable in the other perturbation cards for the other techniques.
In our study, we have identified a possible shortcut <cit.> that our model has learned from the training data.
We speculate that the shortcut consists of specific time points that need to lie in a certain range of values for a sample to be classified as one class, and destroying this property changes the prediction.
In other words, our model learns a static pattern or value range for one class and classifies everything else into the other class.
Such a model has more in common with an outlier detector than with the intended classifier.
Thus, we identified a shortcut that lets the model classify the data without using all available features <cit.>.
The ElectricDevices dataset is harder for the model, as it is not a binary classification problem but one with seven classes the model needs to separate.
However, as before, even state-of-the-art models do not reach high accuracy on this dataset <cit.>, which leads to worse attributions and a more diverse perturbation analysis result.
Again, KernelSHAP performs best, changing 8906 of 8926 samples when the values are perturbed to the global maximum, as seen in the perturbation card of <ref>.
However, also IntegratedGradients <cit.> works well, but only with another perturbation strategy, namely changing the perturbed value to the global mean of the dataset.
The dataset demonstrates quite nicely that the attribution techniques need different perturbation strategies to reveal the models' internal decision-making.
Some of the techniques focus on different features the model learned as the ranking of the best-performing attribution techniques based on the perturbation analysis changes from strategy to strategy for this dataset.
Additionally, when we delve into the labels of the changed and unchanged predictions, we notice that various attribution methods alter different labels in the perturbation.
For example, KernelSHAP seems to modify every class besides seven, whereas Saliency influences classes other than five and six.
However, unlike for the FordB dataset, we do not see an unwanted pattern in the visualization of the amount of perturbed values.
This indicates that the attribution techniques are more suitable for this dataset and model than for the FordB model.
Summary – As we have seen in our results (<ref>, <ref>, <ref>), KernelSHAP performs quite well but takes a lot of time to calculate the attributions.
Due to the sampling-based approach of KernelSHAP, the attributions are not deterministic and can vary from multiple computational runs.
Further, in many cases, Saliency (or Vanilla Gradient multiplied by the Input) works surprisingly well and is only sometimes improved by additional extensions on top, such as IntegratedGradients.
Thus, Saliency provides a promising variant for future experiments and techniques on top of it.
So, if the attribution (explanation) is time-critical, Saliency is a well-suited method.
If it is not time-critical, KernelSHAP provides the best-working attributions based on our experiments.
The collected data holds further insights and findings beyond those shown in the proposed perturbation analysis cards, which we look forward to analyzing and publishing together with the code.
The published source code can be used as a framework to experiment on more datasets, and the perturbation analysis cards can be used to report the results.
The GitHub repository contains additional perturbation analysis cards and the JSON data with the collected results of our experiments.
§ CONCLUSION AND FUTURE WORK
After reviewing related work, we presented an in-depth analysis of perturbation strategies for attributions on time series.
With the analysis, we dug into a CNN trained on time series classification to investigate attributions, perturbation strategies, and shortcuts the network learned.
We presented our results in perturbation analysis cards to enable users to analyze the results in detail by inspecting the aggregated data in visualizations and comparing them easily with other techniques based on the provided cards.
We identified SHAP as a suitable method to generate working attributions on all datasets in our experiments.
Gradient-based methods also work quite well but do not perform as well as, e.g., KernelSHAP.
However, depending on the perturbation strategy, the best working attribution technique can change quite drastically.
We therefore advise not to focus on a single strategy but to use multiple strategies, aggregate their results, and inspect the distribution of the skewness to enhance comparability.
In our experiments, we also found a shortcut or spurious correlation for the FordB dataset: our model learned to recognize one class and to classify everything else as the other class.
Future work –
We want to extend the experiment to other attribution techniques and compare the results with the already collected experiment results.
Also, we want to compare the attributions even in more detail by, e.g., aggregating the attributions and comparing them on a higher level to find matching patterns.
Different trends and subsequences are further patterns to analyze to gain knowledge into the attribution techniques.
With such an approach, we also want to include the local Lipschitz estimate <cit.> to rank consistent attributions higher.
Lastly, we want to extend the Perturbation Effect Size <cit.> and use the gained knowledge to combine perturbation strategies, prediction switches, and distances into a measure that evaluates attributions on time series classification models more robustly and fully automatically, making it easier for users to decide which attributions to use for explanations.
We also want to enhance our perturbation analysis cards to be more readable for non-experts, so that they can gain insights at a single glance.
§.§.§ Acknowledgements
This work has been partially supported by the Federal Ministry of Education and Research (BMBF) in VIKING (13N16242).
|
http://arxiv.org/abs/2307.04993v1 | 20230711031325 | Uncertainty Quantification of the Virial Black Hole Mass with Conformal Prediction | ["Suk Yee Yong", "Cheng Soon Ong"] | astro-ph.CO | ["astro-ph.CO", "astro-ph.GA", "astro-ph.IM", "cs.LG"] |
Precise measurements of the black hole mass are essential to gain insight on the black hole and host galaxy co-evolution.
A direct measure of the black hole mass is often restricted to nearest galaxies and instead, an indirect method using the single-epoch virial black hole mass estimation is used for objects at high redshifts.
However, this method is subject to biases and uncertainties, as it relies on a scaling relation calibrated from a small sample of local active galactic nuclei.
In this study, we propose the application of conformalised quantile regression (CQR) to quantify the uncertainties of the black hole predictions in a machine learning setting.
We compare CQR with various prediction interval techniques and demonstrate that CQR provides a more useful prediction interval indicator.
In contrast to baseline approaches for prediction interval estimation, we show that the CQR method provides prediction intervals that adjust to the black hole mass and its related properties.
That is, it yields a tighter constraint on the prediction interval (hence more certainty) for a larger black hole mass and, accordingly, for brighter sources with broader spectral line widths.
Using a combination of a neural network model and the CQR framework, the recovered virial black hole mass predictions and uncertainties are comparable to those measured from the Sloan Digital Sky Survey.
The code is publicly available at .
black hole physics – (galaxies:) quasars: general – (galaxies:) quasars: supermassive black holes – methods: data analysis – methods: statistical
§ INTRODUCTION
At the centre of every active galactic nucleus (AGN) is a black hole <cit.>.
The black hole mass, MBH, is a crucial quantity in understanding the co-evolution between the black hole and its host galaxy <cit.>.
However, direct and accurate measurements are very limited to close by galaxies as high spatial resolution is required <cit.>.
Beyond the local universe, the single-epoch virial mass estimation is applied to estimate the virial black hole mass, Mvir, which is calibrated empirically using reverberation mapping <cit.> samples of local AGN <cit.>.
This method assumes that the gas in the broad line region (BLR) of the AGN is in Keplerian motion and the virial black hole mass is estimated by
Mvir=Δ V^2 R/G,
where G is the gravitational constant and Δ V is the velocity dispersion of a particular broad emission line often measured by the full width at half maximum (FWHM).
Because it requires intensive monitoring at high cadence over a long duration, reverberation mapping with multi-epoch observations is often carried out for only a limited number of sources <cit.>.
Nonetheless, reverberation mapping studies have also found that there is a relationship between the BLR radius R and monochromatic continuum or line luminosities L <cit.>, which is used as the basis for single-epoch virial mass estimates <cit.>.
Based on this R-L relation, the BLR size is derived for a given luminosity and then estimate the Mvir, in which case <ref> can be rewritten as:
logMvir = a + b log L + c log FWHM,
where (a,b) are the coefficients calibrated from reverberation mapping.
The coefficient c is usually set to 2 based on the virial theorem <cit.>.
Depending on the redshift of the object, different emission line widths and luminosities are used <cit.>.
For low redshift sources, this is typically the Hβ and Mgii lines, and their respective continuum luminosity measured at rest wavelengths of 5100 Å and 3000 Å.
The majority of reverberation mapping studies have been conducted using Hβ on low redshift AGN <cit.>.
Often for higher redshift, the Mgii or Civ line is utilised.
However, this involves applying additional scaling from the Hβ line to formulate the virial mass based on other lines <cit.>.
There have been efforts to establish the R-L relation for high redshift AGN <cit.>, though it is still debatable whether the single-epoch Mvir of these lines are reliable or will need further correction <cit.> since they might be affected by non-virial component due to the stratified BLR of the different lines <cit.>.
There are several limitations and sources of uncertainties in using the single-epoch method that could lead to significant error up to 0.5 dex in the virial black hole mass <cit.>.
Some of the common issues are as follows.
First, the relationship between the line width of Hβ and Mgii might be non-linear <cit.>, which is not accounted for if a constant c=2 in <ref> is applied on Mgii line-based MBH.
Second, the intrinsic scatter in the R-L relation calibrated against local reverberation mapped AGN samples using Hβ is ∼ 0.2 dex <cit.> and can be larger than 0.36 dex when using Mgii <cit.>.
The Mvir based on Mgii line have to be properly calibrated such that they match those of Hβ line <cit.>.
Various prescriptions have been proposed <cit.> to calibrate the (a,b) coefficients in <ref>, which can vary depending on which specific line is used <cit.>.
Practically, this also assumes that a single best-fit line from the empirical relationship, fixed by constant coefficients, is applicable to every source.
Third, the derived continuum and spectral line properties rely on the choice of spectral fitting process.
Mainly, this requires a consistent procedure for fitting the continuum and modelling individual spectral line component as this will substantially affect the line measurements <cit.>.
The presence of strong absorption lines and the use of spectra with low signal-to-noise ratio are likely to result in unreliable measurements <cit.>.
Recently, several studies have employed machine learning and deep learning methods to predict the properties of the black hole.
<cit.> explored the MBH correlation with their host galaxy properties using Lasso regression.
They then used the extracted subset of properties to derive an empirical formula for the black hole mass and showed that it is able to retrieve the masses with a scatter of 0.5 dex.
Though only trained using a small sample available from reverberation mapping, <cit.> demonstrated that they are able to generate quasar spectra along with the associated physical properties even for missing spectral region and without requiring calibration from the R-L scaling relation.
They applied a multi-output Gaussian process latent variable model and estimated the uncertainties in the predicted MBH due to errors from measurements and input spectra, and reported a scatter of 0.4 dex in the predictions.
Similarly, <cit.> used a multi-layer perceptron regressor on a few reverberation mapped AGN samples probed in the X-ray regime and recovered the MBH within ± (2–5)%.
<cit.> employed a hybrid deep neural network model consisting of convolutional and fully connected layers on quasar light curves as an alternative to the expensive spectral data.
They predicted the MBH from the light curves within 0.37 dex scatter.
Previous studies primarily considered recovering the black hole mass from the measured MBH using light curves or calibrated based on reverberation mapping of low redshift quasars.
However, a question still remains: since all measurements of the black hole mass have intrinsic scatter, how good are the uncertainties of the black hole mass predictions? In this work, we do not attempt to build a more accurate predictor for the black hole mass.
Instead, we focus on quantifying the uncertainties of the line-based virial mass, Mvir, and address some of the aforementioned limitations and sources of uncertainty.
In particular, we employ a conformal prediction for regression framework, specifically the conformalised quantile regression <cit.>, and conduct a comparative study with several other prediction interval approaches.
The conformalised quantile regression is of particular interest, as it has been shown to adapt to heteroscedasticity in the data and to generate adaptive prediction intervals.
We present a method to quantify the uncertainty in black hole mass predictions with adaptive prediction intervals.
We separate this into two parts:
* Perform representation learning (finding a good feature encoding) using a neural network model: This effectively avoids the need to fit and obtain individual line measurements.
* Generate predictions and prediction intervals for the line-based Mvir: We examine different prediction interval methods to quantify the uncertainties in the Mvir.
The outline of the paper is as follows.
<Ref> describes the dataset utilised.
Overviews of the neural network model and the prediction interval methods employed are given in <ref>.
The results followed by discussions in <ref> and <ref>, respectively.
Finally, <ref> summarises our findings.
§ DATASET
We briefly describe the dataset used in this work and pre-processing applied on the data.
We use the recent catalogue of quasar properties <cit.> derived from the Sloan Digital Sky Survey (SDSS) Data Release 16 Quasar <cit.> catalogue.
The data[<http://quasar.astro.illinois.edu/paper_data/DR16Q/>] and tutorial[<https://github.com/QiaoyaWu/sdss4_dr16q_tutorial>] containing the description of the data and demo are publicly available online.
The details on the derived spectral line measurements are described in Section 3 of <cit.> and also in their earlier work <cit.>, which we briefly outline here.
They corrected the spectra for Galactic reddening using the dust map from <cit.> and <cit.> and the extinction curve from <cit.>.
After shifting the spectra to the rest-frame using the redshift from the SDSS DR16Q catalogue, they fitted the continuum by a power law and a third-order polynomial, and also an iron template <cit.> to several continuum fitting windows that are not affected by broad line emission.
Quasars that have peculiar continuum shapes are fitted with the additive (positive-definite) polynomial component.
They subtracted the continuum and iron fit from the spectrum to form a line-only spectrum, which is then fitted with a set of Gaussians in logarithmic wavelength space.
To minimise the effect of absorption lines from intervening absorption systems, they performed an iterative approach to mask pixels below 3-sigma of the original model fit and refit.
From the best spectral fitting parameters, <cit.> measured the continuum and emission line properties, including the spectral line peak and FWHM.
Using a Monte Carlo approach, they estimated the uncertainties in the line measurements by randomly perturbing the original spectrum at each pixel with a Gaussian.
They performed for 25 iterations and took the semi-amplitude within the 16th and 84th percentile range as the error of each spectral quantity.
To calibrate the coefficient (a,b) for the single-epoch Mvir, they adopted (0.91, 0.50) for Hβ <cit.> and (0.74, 0.62) for Mgii <cit.>.
The measurement uncertainties in the Mvir are also provided in the catalogue.
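For illustration, the single-epoch estimate of <ref> with the coefficients quoted above can be written as a small helper function; the normalisation of the luminosity to 10^44 erg s^-1 and the FWHM in km s^-1 is our assumption of the usual convention, and the catalogue values should be preferred in practice.

```python
import numpy as np

# (a, b) coefficients adopted in the catalogue; c is fixed to 2 by the virial theorem
COEFFS = {"Hbeta": (0.91, 0.50), "MgII": (0.74, 0.62)}

def log_mvir(log_lum, fwhm_kms, line="Hbeta"):
    """Single-epoch virial mass log10(Mvir/Msun) from the continuum luminosity
    (log10 erg/s, assumed normalised to 1e44 erg/s) and the line FWHM (km/s)."""
    a, b = COEFFS[line]
    return a + b * (log_lum - 44.0) + 2.0 * np.log10(fwhm_kms)

print(log_mvir(44.0, 4000.0, line="Hbeta"))   # ~8.1
```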
Their compiled catalogue has a total of 750,414 spectra, with each data file containing the original fluxes, continuum fluxes, and the spectral line fluxes with continuum subtracted.
For the sample selection, we follow the recommended quality cuts for specific emission lines in their paper, namely line flux/flux error >2 and logarithm line luminosity ranges 38–48 erg s^-1 and apply them to the Hβ and Mgii lines.
We further restrict the sample to spectra that have both Hβ and Mgii line widths and black hole masses available.
As the black hole mass is derived from the line width, we remove quasars with black hole mass errors >0.5 dex and line width errors >2000 km s^-1.
We also select spectra with high median signal-to-noise ratio per pixel of ≥ 10.
A summary of selection criteria along with the number of drop out after each cut is listed in <ref>.
Our final data sample consists of 13,952 spectra, and the distributions of the black hole masses with redshifts are shown in <ref>.
We split the data into 70% training, 20% validation, and 10% test sets, which are 9766, 2930, and 1256 spectra, respectively.
In training the machine learning model, we find that using the fluxes of the entire spectrum as our input data does not lead to any meaningful feature extraction.
This might be due to the noisy fluctuations and spurious spectral spikes in the fluxes.
Hence, we use the spectral line flux with continuum subtracted, which is provided in the data file from <cit.>, as the input for the training and validation of the machine learning.
The validation set is used for evaluating the model performance during the training.
Since we do not utilise the wavelength, which contains the positional information, when training the neural network, and since the line fluxes mostly cut off at ∼ 1000 pixels, we truncate the data to the first 1000 pixels.
The virial Hβ and Mgii black hole mass estimates are used as the ground truth labels.
The fluxes and labels are normalised from 0 to 1.
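A sketch of this preprocessing, assuming the continuum-subtracted line fluxes and the Mvir labels have already been loaded into arrays (the variable names and the per-spectrum scaling are our own choices), is given below:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# fluxes: (n_spectra, n_pixels) continuum-subtracted line fluxes (assumed already loaded)
# logmvir: (n_spectra,) virial black hole mass labels from the catalogue
fluxes = fluxes[:, :1000]                      # keep the first 1000 pixels

def minmax(x):
    return (x - x.min()) / (x.max() - x.min())

X = np.array([minmax(s) for s in fluxes])      # per-spectrum scaling to [0, 1]
y = minmax(logmvir)                            # labels scaled to [0, 1]

# 70% training, 20% validation, 10% test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=1/3, random_state=0)
```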
§ VIRIAL BLACK HOLE MASS PREDICTIONS AND UNCERTAINTIES
In this section, we detail the end-to-end pipeline implemented from training the input data using a neural network model to output the prediction intervals.
A flowchart of the pipeline is illustrated in <ref>.
The following notation is adopted.
Given n independently and identically distributed training samples with input-target pairs {(X_i,Y_i)}_i=1^n, we perform regression.
In regression analysis, the target can be represented by
Y=μ̂(X)+ϵ,
where μ̂(X) is the regression function to be estimated and ϵ is the model error.
In this case, the target Y is the virial black hole mass Mvir, and the input X is the SDSS spectra.
§.§ Construction of neural network for feature extraction
To extract the feature vectors from the spectra, we employ a supervised learning approach using a generic fully connected neural network model with a hidden layer architecture similar to that in <cit.>.
The neural network is constructed using PyTorch <cit.>, an open source machine learning framework in Python.
The input layer consists of 1000 neurons followed by 3 hidden layers of 64, 64, and 8 neurons with rectified linear unit as activation function and dropout <cit.> of probability 0.1, then finally an output layer with 1 node and sigmoid activation function.
The outputs of the second to last layer of 8 neurons is saved as features of the spectra.
As our main aim is not to find the best model, we do not attempt any optimisation or hyperparameters tuning on the model.
Following <cit.>, the stochastic gradient descent based Adam optimiser <cit.> is used with initial learning rate of 5 × 10^-4 and weight decay regularisation parameter of 10^-6.
Additionally, we apply a constant learning rate scheduler that decreases by a factor of 0.5 every 2 steps.
The model is optimised with mean squared error (MSE) as the cost function.
The model is then trained for 100 epochs with batch size of 64.
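A minimal PyTorch sketch of this network and training configuration, following the description above (the placement of the dropout layers and the data loader are assumptions on our part), could look as follows:

```python
import torch
import torch.nn as nn

class SpectraNet(nn.Module):
    """Fully connected network 1000 -> 64 -> 64 -> 8 -> 1 with a sigmoid output;
    the 8-dimensional hidden representation is kept as the feature vector."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(1000, 64), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(64, 8), nn.ReLU(), nn.Dropout(0.1),
        )
        self.head = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, x):
        features = self.encoder(x)
        return self.head(features), features

model = SpectraNet()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=1e-6)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)

for epoch in range(100):                     # train_loader yields batches of 64 spectra
    for xb, yb in train_loader:
        optimizer.zero_grad()
        pred, _ = model(xb)
        loss = criterion(pred.squeeze(1), yb)
        loss.backward()
        optimizer.step()
    scheduler.step()
```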
There are a few main assumptions that we made in training our machine learning model.
We assume that the SDSS spectra are of good quality with reliable derived properties.
On the other hand, note that these properties are also constrained by the same assumptions used to derive them.
In particular, the derived Mvir from SDSS are dependent on various factors, including the Keplerian motion assumption into the virial theorem and the applicability of the empirical scaling relation to single-epoch mass estimates.
There are also potential systematic uncertainties that might not be completely accounted for.
Further caveats are discussed in <ref>.
To train the supervised neural network model, we use the spectra as inputs and the SDSS DR16Q derived virial Hβ and Mgii black hole mass estimates as targets to be optimised.
The uncertainties of the measured Mvir are not included when training the model.
§.§ Construction of regressor for predictions
After the feature extraction process from the neural network, we use gradient boosting for regression to make the predictions.
Depending on the uncertainty quantification methods, which will be described in <ref>, the quantile loss is applied for the conformalised quantile regression, while the MSE loss for the rest of the resampling techniques.
To reduce the prediction error, we optimise the regressor by performing a randomised search with 10-fold cross-validation for 100 iterations to find the best hyperparameters for the regressor.
The explored parameter space and the adopted best model are shown in <ref>.
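The regressor and hyperparameter search can be set up with scikit-learn as sketched below; the parameter ranges shown here are placeholders, since the actual search space is listed in <ref>, and the feature arrays are assumed to come from the neural network.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

param_grid = {                                   # placeholder ranges; see <ref> for the actual space
    "n_estimators": [100, 200, 500],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 3, 5],
    "subsample": [0.5, 0.8, 1.0],
}

search = RandomizedSearchCV(
    GradientBoostingRegressor(),                 # the quantile loss variant is used instead for CQR
    param_distributions=param_grid,
    n_iter=100, cv=10, random_state=0,
)
search.fit(features_train, y_train)              # 8-dimensional features from the neural network
regressor = search.best_estimator_
```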
To check the performance of the regression model, two common evaluation metrics, the mean absolute error (MAE) and root mean squared error (RMSE), are evaluated.
MAE is the average of the absolute errors between the target value Y_i and predicted value μ̂(X_i):
MAE=1/n∑_i=1^n |Y_i-μ̂(X_i)|.
RMSE is the average of the squares of the difference between the target and predicted value:
RMSE=√(1/n∑_i=1^n[Y_i-μ̂(X_i)]^2).
MAE is more robust to outliers, while the RMSE is easier to optimise.
In both cases, the lower score the better.
Additionally, 10-fold cross validation is performed to obtain the mean and standard deviation of the respective evaluation metrics.
§.§ Assessing the performance of prediction intervals
The two criteria that are crucial to assess the performance of the prediction intervals are the coverage and the width <cit.>.
The prediction interval coverage probability (PICP) or coverage for short reflects the probability that the prediction interval will contain the target value, which is defined as
PICP = 1/ntest ∑_i=1^ntest c_i,
where c_i=1 if Y_i∈ [L(X_i), U(X_i)] otherwise c_i=0, L(X_i) and U(X_i) are the lower and upper bounds, respectively.
Ideally, higher PICP is better and it should be close to the nominal confidence level of (1-α).
The confidence level is set to be 90%.
Additionally, we compute the coefficient of determination, R^2, to measure the percentage of variance
between PICP and the (1-α) nominal coverage rate.
R^2=1-∑[Y_i-μ̂(X_i)]^2/∑(Y_i-Y̅)^2,
where Y̅ is the mean of Y.
The R^2 ranges 0–1 (or in percentage 0–100%), where the higher the better with 100% being a perfect fit.
The mean prediction interval width (MPIW) measures the wideness of the prediction interval and is given by the average of the width
MPIW = 1/ntest ∑_i=1^ntest [U(X_i)-L(X_i)],
where the prediction interval width is defined as the difference between the upper and lower bounds, which is the term in the square bracket.
The larger the width, the more uncertain.
It is desirable to have a high PICP but a narrow MPIW.
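Both metrics are straightforward to compute from the predicted bounds, for example:

```python
import numpy as np

def picp(y_true, lower, upper):
    """Fraction of targets falling inside their prediction intervals."""
    return np.mean((y_true >= lower) & (y_true <= upper))

def mpiw(lower, upper):
    """Mean width of the prediction intervals."""
    return np.mean(upper - lower)
```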
§.§ Construction of prediction intervals
Various methods to construct prediction intervals have been developed and a comparison between different strategies is reviewed in <cit.>.
Using an open-source Python package called model agnostic prediction interval estimator <cit.> or MAPIE[<https://github.com/scikit-learn-contrib/MAPIE>], we explore different techniques to estimate the prediction intervals.
To estimate the prediction interval, the model error ϵ can be characterised as the conditional probability distribution of Y given X, ℙ_Y|X.
In practice, this is estimated by the difference between the label and the prediction, Y_i - μ̂(X_i).
Let (X_n+1,Y_n+1) be the input-target for a new unseen test point.
Suppose we want to construct a valid prediction interval 𝒞̂_n,α(X_n+1) for the test data.
It should satisfy
ℙ{Y_n+1∈𝒞̂(X_n+1)}≥ 1-α,
where α is the target quantile and the complementary (1-α) is the confidence level or coverage rate.
The estimator (for the prediction interval) is considered calibrated if it satisfies the inequality in <ref>.
A conformity score is a measure of how similar a sample is compared to the rest of the dataset and is used to determine the threshold for the quantile, leading to a prediction interval.
A key challenge to estimating the prediction interval is to ensure statistical consistency, and various approaches have been proposed.
Conformal prediction <cit.> offers a robust uncertainty quantification framework and a distribution-free coverage guarantee that satisfy <ref>.
As a baseline comparison, we compare conformal prediction against various uncertainty quantification methods from the MAPIE package, namely the naive approach, jackknife+-after-bootstrap, and cross-validation and its variations.
We briefly review the methods we use in this paper in the following.
§.§.§ “Naive” conformity score
Consider a simple or “naive” way to compute conformity score, by using the residual of the training dataset, which gives
𝒞̂_n,α^naive(X_n+1)=μ̂(X_n+1) ±q̂_n,α^+|Y_i-μ̂(X_i)|,
where q̂_n,α^+ is the (1-α) quantile of the empirical distribution.
Though this method is computationally cheap, it does not guarantee coverage and is likely to overfit, which underestimates the prediction interval widths.
§.§.§ Jackknife+-after-bootstrap
The standard jackknife is a leave-one-out cross-validation <cit.> approach.
We opt for jackknife+-after-bootstrap <cit.> as it is more computationally efficient than the standard jackknife.
The steps to infer the jackknife+ab prediction intervals are as follows:
* Bootstrap resampling from the training set with replacement K times, B_1,…,B_K.
* Fit the K regression functions μ̂_B_k on the bootstrapped dataset.
* Aggregate the estimated prediction function using the bootstrapped dataset excluding sample i given by μ̂_φ,-i=φ({μ̂_B_k(X_n+1: i ∉ B_k)}), where φ is the aggregation function usually taken to be the mean or median.
The mean is used, which is the default.
Then compute the conformity score as the residual R_φ,i=|Y_i-μ̂_φ,-i(X_i)| for i=1,…,n.
* Output jackknife+ab prediction interval:
𝒞̂_n,α,B^jackknife+ab(X_n+1)=[q̂_n,α^-{μ̂_φ,-i(X_n+1) - R_φ,i}, q̂_n,α^+{μ̂_φ,-i(X_n+1) + R_φ,i}],
where q̂_n,α^- is the α quantile of the distribution and recall the (1-α) counterpart is q̂_n,α^+.
§.§.§ Cross-validation and its variations
Rather than the leave-one-out method, cross validation can be performed in K-fold to reduce computation time.
The steps to infer the CV+ prediction intervals are as follows:
* Split training set into K disjoint subsets S_1,…,S_K each of size m=n/K.
* Fit the K regression functions μ̂_-S_k on the training dataset with kth subset excluded.
* Compute the conformity score from the K-fold process as R_i^CV=|Y_i-μ̂_-S_k(i)(X_i)|, where the subset k(i) contains i.
* Output CV+ prediction interval:
𝒞̂_n,α,K^CV+(X_n+1)=[q̂_n,α^-{μ̂_-S_k(i)(X_n+1) - R_i^CV}, q̂_n,α^+{μ̂_-S_k(i)(X_n+1) + R_i^CV}].
The jackknife+ab and CV+ methods provide a slightly weaker coverage guarantee of (1-2α).
For standard CV, the output prediction interval is defined as
𝒞̂_n,α^CV(X_n+1)=[q̂_n,α^-{μ̂(X_n+1) - R_i^CV}, q̂_n,α^+{μ̂(X_n+1) + R_i^CV}].
Another variation of CV that is more conservative than CV+ is the CV-minmax method given by
𝒞̂_n,α^CV-minmax(X_n+1)=[min_i=1,…,nμ̂_-i(X_n+1) - q̂_n,α^+{R_i^CV}, max_i=1,…,nμ̂_-i(X_n+1) + q̂_n,α^+{R_i^CV}],
which guarantees the (1-α) coverage in <ref>.
§.§.§ Conformalised quantile regression
As the transductive or full conformal prediction is computationally heavy, the inductive or split conformal prediction <cit.> approach is applied to alleviate the issue.
In this setting, it trains the model only once, but requires data splitting for the calibration set.
For regression, the conformalised quantile regression <cit.> is built upon conformal prediction and quantile regression <cit.> to provide a two-sided prediction interval or band.
The steps to infer the CQR prediction intervals are as follows:
* Split dataset into two disjoint subsets for training set ℐ_1 and calibration set ℐ_2.
* Fit two conditional quantile functions for the lower quantile q̂_α/2 and upper quantile q̂_1-α/2.
* Compute the conformity score for each i ∈ℐ_2 as E_i^CQR=max{q̂_α/2(X_i)-Y_i, Y_i-q̂_1-α/2(X_i)}.
* Compute Q̂_1-α(E^CQR,ℐ_2):=(1-α)(1+1/|ℐ_2|)-th empirical quantile of {E_i^CQR: i ∈ℐ_2}.
* Output CQR prediction interval:
𝒞̂_n,α^CQR(X_n+1)=[q̂_α/2(X_n+1) - Q̂_1-α(E^CQR,ℐ_2), q̂_1-α/2(X_n+1) + Q̂_1-α(E^CQR,ℐ_2)].
We employ CQR with inductive split using the validation set as the calibration set.
Towards the final stage of the prediction pipeline in <ref>, prediction intervals are obtained from the various uncertainty quantification methods.
Their performances are evaluated and compared using the two metrics, PICP and MPIW, as defined previously in <ref>.
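The CQR steps above can be sketched directly with scikit-learn quantile regressors; this is a simplified from-scratch version of what the MAPIE package implements for us, with α = 0.1 for a 90% interval and the feature arrays from the earlier split assumed to be available.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

alpha = 0.1                                   # 90% target coverage

# 1. Fit lower and upper conditional quantile models on the training features
q_lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_train, y_train)
q_hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_train, y_train)

# 2. Conformity scores on the calibration set (here the validation set)
scores = np.maximum(q_lo.predict(X_val) - y_val, y_val - q_hi.predict(X_val))

# 3. (1 - alpha)(1 + 1/n_cal)-th empirical quantile of the scores
n_cal = len(y_val)
q_hat = np.quantile(scores, min(1.0, (1 - alpha) * (1 + 1.0 / n_cal)))

# 4. Conformalised prediction interval for the test set
lower = q_lo.predict(X_test) - q_hat
upper = q_hi.predict(X_test) + q_hat
```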
§ RESULTS
§.§ Effectiveness of neural network for feature extraction
Features extracted by a neural network are not directly interpretable, as they do not correspond to any particular physical parameters.
However, if the regressor is to perform well, the extracted features should capture meaningful aspects of the raw data.
To determine whether the extracted features from the neural network are meaningful, we use Uniform Manifold Approximation and Projection <cit.> or UMAP[<https://github.com/lmcinnes/umap>], a dimension reduction technique to project the 8-dimension features to 2-dimension parameter space.
As the purpose is purely for visualisation, we set the number of components to 2 and use the defaults for the rest of the UMAP hyperparameters.
It can be observed in <ref> that the 2-dimensional UMAP representation is structured such that smaller Mvir objects tend to be on the right and gradually towards the left for increasing Mvir.
This affirms that the 8 features extracted are sensible to characterise the Hβ and Mgii line-based Mvir, which are used as inputs for the regressor.
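The projection used for this check can be reproduced with the umap-learn package in a few lines; the feature array is the 8-dimensional output of the trained network and the colouring by Mvir is our own plotting choice.

```python
import umap
import matplotlib.pyplot as plt

reducer = umap.UMAP(n_components=2)               # defaults for all other hyperparameters
embedding = reducer.fit_transform(features)       # (n_spectra, 8) -> (n_spectra, 2)

plt.scatter(embedding[:, 0], embedding[:, 1], c=logmvir, s=2, cmap="viridis")
plt.colorbar(label="log Mvir")
plt.show()
```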
§.§ Performance of regressor for predictions
Due to the various assumptions imposed on estimating the black hole mass, the black hole mass estimates can be substantially biased and uncertain <cit.>.
To leverage the need for individual spectral line fitting, we use a neural network model to extract the latent feature vectors and use a regressor to predict the Mvir.
The prediction errors of the regressors are shown in <ref>.
Overall, the performances of the regressors using quantile loss and MSE loss for both Hβ and Mgii cases are similar.
The Mgii line-based Mvir prediction errors are slightly larger compared to those of Hβ.
As previously mentioned, this is likely because the Mvir based on Hβ is better calibrated <cit.>, which leads to the smaller prediction error.
As a comparison, the performances of the black hole mass predictions trained using machine learning model reported by other studies are also listed in <ref>.
It can be seen that the predictions are relatively good, with low prediction errors compared to those from other studies.
Note, however, that the datasets used in those studies differ from one another; thus, the difference in scores might also be attributed to the difficulty of the respective machine learning task.
§.§ Reliability of prediction intervals
A well calibrated uncertainty quantification is valuable to assess the reliability of the black hole mass predictions.
We compare several techniques to estimate the prediction intervals.
The comparison between the predicted mass Mvir,pred and actual mass Mvir from SDSS with prediction intervals at 90% confidence level is presented in <ref>.
For reference, the shaded gray regions indicate the intrinsic scatter or standard deviation about the scaling relation of 0.2 dex using Hβ <cit.> and 0.36 dex using Mgii line <cit.>.
As previously demonstrated, the neural network is able to retrieve the Mvir predictions, being comparatively close to those measured from SDSS (<ref>, identity line in grey dashed line).
All but one of the methods for the Mgii line-based Mvir dataset have a PICP lower than the target 90% confidence level.
Although this indicates that they are inadequately calibrated, their PICP remains relatively close to the nominal value.
Overall, at 90% confidence level, the mean widths of the prediction intervals for all methods are larger than the width of the intrinsic scatter, but still well below some of the reported intrinsic scatter about the R-L relationship in order of ≳± 0.4dex <cit.>, which is ≥ 0.8dex for the width of the scatter.
An evaluation of the performance of the prediction intervals over a range of nominal confidence levels using PICP and MPIW is presented next.
As mentioned, it is desirable to have PICP close to the target coverage and small MPIW.
<Ref> displays the difference between the PICP and nominal coverage with respect to the nominal coverage along with the coefficient of determination R^2 for each method.
For most of the ranges of nominal coverage, the PICP of the naive method is underestimated, while on the other end, the CV-minmax is overestimated.
This is also evident from the lower overall R^2.
In general, the CV-minmax has the least performing PICP with lowest R^2, especially when the target confidence level is small.
This is followed by the naive method.
The rest of the methods, including jackknife+ab, CV, CV+, and CQR, have comparable PICP as well as R^2, particularly towards larger nominal confidence level.
The MPIW scores for a range of nominal coverage is presented in <ref>.
There is a trade-off of larger width with increasing confidence level, as expected.
The MPIW values for CV-minmax are the largest in all ranges of nominal coverage level, while the MPIW tends to be smaller for the naive method as the nominal coverage is set to be larger.
Other methods have similar MPIW.
For the remaining of the analysis, the results are for 90% confidence level, unless otherwise stated.
<Ref> compares the degree of variations in the prediction interval widths for the different uncertainty quantification methods.
The naive, CV, and jackknife+ab methods produce constant or negligible changes in the widths of the prediction intervals.
The prediction bounds from CV+ are also mainly constant except for a few minorities.
The two methods that exhibit variable widths are the CV-minmax and CQR.
However, it can be seen that CV-minmax will generate at the very least larger widths compared to the widths from the constant prediction interval methods as baseline.
Using CQR, it shows greater variability and is able to yield narrower widths under certain circumstances, which will be presented next.
When comparing the scale of the Hβ and Mgii Mvir,pred prediction widths, those from Mgii are wider, which is consistent with it being harder to measure, for instance, due to non-virial component <cit.>.
Among the explored uncertainty quantification methods, CQR performs the best; therefore, we focus on CQR and demonstrate its adaptiveness with respect to the properties of the quasars.
<Ref> portrays the variations in the prediction interval width for selected quasar properties.
To measure the strength of the correlation, the Spearman's correlation coefficient <cit.> and its corresponding p-value are also calculated.
It is found that there is a negative correlation (statistically significant at p-value≪ 0.001%) between the prediction interval widths and Mvir, and subsequently with the black hole mass related quasar properties, including the line luminosity L and the FWHM of the broad component of the Hβ and Mgii lines.
Some other quasar properties that are also of significantly correlated with the widths (not shown in the figure) are their respective properties measured using the whole line component.
The two quantities, line luminosity and FWHM, relationship with the prediction interval width is expected as these are incorporated into the virial theorem to estimate the Mvir.
Between the line luminosity and FWHM, the FWHM is more strongly anti-correlated with the size of the prediction interval width, which is a consequence from the virial theorem.
For more luminous and broader spectral line width quasars, the inferred prediction interval using CQR is able to generate a tighter bound.
We then compare the Mvir,pred based on Hβ and Mgii, as well as their associated prediction intervals using CQR in <ref>.
Comparing multiple emission lines are recommended to get a better constraint of the Mvir <cit.>.
Similar analysis is commonly carried out using single-epoch Mvir calibrated with empirical scaling relation from reverberation mapping <cit.>.
As expected, the Hβ and Mgii line-based Mvir are tightly correlated, albeit the large scatter.
This is not surprising, considering that the amount of scatter from the SDSS measured Mvir is even larger, as illustrated in <ref>.
Since the errors from SDSS measurements only account for the propagated measurement errors, the median lower and upper intervals are smaller in comparison to the prediction intervals from CQR, as expected.
The retrieved Hβ and Mgii based Mvir,pred along with the prediction intervals using CQR are comparable to those measured from SDSS.
§ DISCUSSIONS
§.§ Black hole mass predictions and uncertainties
In the past decade, artificial intelligence and machine learning have witnessed increasing growth and gained popularity within the astronomy community to solve big data challenges <cit.>.
Not surprisingly, recently a number of papers have employed machine learning to predict the masses of the black hole in AGN <cit.>.
In those studies, they mainly focused on retrieving the predictions of the true black hole mass, whereby the performance in terms of prediction error is usually assessed using MAE, MSE, or RMSE.
Yet, this only evaluates the ability of the machine learning model to recover the true value, but not the reliability of the predictions.
Uncertainty quantification of the black hole mass predictions is vital, especially since the single epoch MBH estimates already suffer from a wide range of intrinsic scatter <cit.>.
In fact, the uncertainty can extend more than 0.5 dex for individual AGN <cit.> and is dependent on which emission line is used to probe the MBH <cit.>.
At the same time, there are also uncertainties from the adopted machine learning pipelines, which introduce further uncertainties into the MBH estimation.
Without properly accounting for the uncertainties in the predicted MBH, the recovered value will be more biased than it already is.
Therefore it is more desirable to quantify the uncertainties of the black hole masses for each individual AGN rather than for the general AGN population.
Specifically, variable- or adaptive-width prediction intervals should be considered, as addressed in this study.
Subsequently, one can then attain the prediction interval and conduct analysis similar to those done in reverberation mapping studies (or other black hole mass estimation techniques).
§.§ Proposed adaptive uncertainty quantification
We recommend the need to not only assess the performance of the predictions of the black hole mass from the machine learning model, but also quantify the uncertainties for the prediction intervals.
We present an uncertainty quantification method to generate adaptive prediction intervals for the black hole mass estimation using CQR introduced by <cit.>.
In <ref>, we have shown that CQR is more informative of the model's uncertainty compared to other investigated uncertainty quantification methods.
Therefore, we propose that a variable width prediction interval method using CQR is better suited for this particular task.
In assessing the performance of the prediction intervals, it can be seen that the CQR outperforms the rest.
Other methods produce prediction interval widths that are either constant or too wide.
The CQR is more adaptive and better reflects the uncertainty of each individual object.
Additionally, we find that the width of the prediction interval is correlated with the black hole mass and its associated properties, particularly the line luminosity and FWHM.
The larger the black hole mass, the tighter the prediction interval widths.
This suggests that given a bright and broad spectral line source, we should be able to predict the black hole mass with more certainty.
We also highlighted that the virial black hole mass predictions and their corresponding uncertainties generated by the combination of the neural network and CQR are comparable in scale and magnitude to those measured from SDSS using a spectral fitting algorithm and the reverberation mapping scaling relation with propagated measurement errors.
The choice of spectral line fitting algorithm affects the continuum and emission line width measurements, effectively biasing the black hole mass estimation <cit.>.
In that case, one can opt to predict the black hole mass and its associated uncertainties using machine learning coupled with CQR, as it offers a framework that is largely agnostic to the fitting of individual spectral emission lines.
The uncertainty quantification methods that we presented in this study can be deployed with any base machine learning algorithm to quantify the uncertainty of the predicted MBH.
The code repository at contains Python scripts to get the data (described in <ref>), run feature extraction using neural networks and run uncertainty quantification for regression (described in <ref>).
These can be used separately or deployed to existing machine learning methods to generate prediction intervals for the black hole mass predictions (refer to MAPIE documentation for more details).
Additionally, the reproducible outputs for the analysis in this work are also provided.
We include a pre-trained model in PyTorch of the feature extraction method from the supervised neural network model that has been trained on the Hβ and Mgii line-based Mvir dataset from SDSS.
Examples of practical usage include evaluating new datasets, fine-tuning existing networks, and employing the pre-trained model in downstream tasks such as classification and anomaly detection based on the quasar properties.
The generated predictions as well as the uncertainty estimates for the different uncertainty quantification methods are included.
Supplementary Python notebooks including tutorials on usage and data analysis are also provided.
§.§ Further Experimentations and Caveats
It is apparent that the choice of dataset affects the performance of the prediction intervals.
As aforementioned, we have tested with different input spectra, including the full spectra and those with continuum subtracted, though they performed badly.
Therefore, for our input dataset from SDSS, we use the spectral line flux only that have been continuum subtracted.
This means that the input data still depend on the spectral fitting algorithm and procedure, in this case to fit and subtract the continuum and to extract only the regions containing the spectral lines.
As we did not visually inspect the spectra, some of them might also have been fitted poorly.
In this case, the derived properties might also be biased.
Another further constraint is that we choose to use spectra that have both Hβ and Mgii lines.
As a consequence, the predictions will not perform well in the absence of any of these lines or if other broad emission lines are present.
One obvious experiment is to evaluate on reverberation mapped samples.
For this purpose, we applied our neural network model on the Mgii reverberation-mapped SDSS objects from <cit.>.
Using a subset of their sample that contains both Hβ and Mgii lines, we found that the model is able to recover the Mgii-based Mvir reasonably well; however, predicting the Mgii reverberation mapping black hole measurements yields larger errors (see <ref>).
This is because Mvir and MBH from reverberation mapping are not directly comparable: the former does not account for the unknown f factor, which is known to be unique to individual sources <cit.>, albeit often assumed to be constant <cit.>.
Since we have not performed a rigorous search for the best regressor, further performance improvement on prediction accuracy could be obtained with more computational resources.
Nevertheless, the basic architecture can act as a baseline and is able to obtain an effective feature extraction that leads to a reasonable prediction of the Mvir.
We have also conducted experiments using unsupervised learning approach on the same dataset.
We employed a vanilla autoencoder model consisting of a layer of 512 neurons for the encoder and decoder with 8 as the latent dimension for feature extraction.
However, it appears that this model is greatly affected by the presence/absence of other strong broad emission lines, in this case the Hα line; thus, outputs higher errors compared to those from the supervised learning approach.
It is also important to point out that the coverages of most of the explored uncertainty quantification methods are below the intended nominal coverage, but mainly still rather close to it.
For our purpose, we did not further attempt to achieve the nominal coverage, which possibly can be fixed by increasing the number of calibration samples <cit.>.
<cit.> has provided an outline of the procedure and also the method to check for the correct coverage.
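As a reference point, the coverage check reduces to the fraction of held-out targets that fall within their intervals; the short sketch below uses synthetic stand-ins for the test targets and interval bounds.

```python
# Minimal empirical-coverage check: the fraction of held-out targets inside
# their prediction intervals should be close to the nominal level 1 - alpha.
# The arrays below are synthetic stand-ins.
import numpy as np

def empirical_coverage(y, lower, upper):
    return float(np.mean((y >= lower) & (y <= upper)))

rng = np.random.default_rng(0)
y = rng.normal(size=500)
y_hat = y + rng.normal(scale=0.5, size=500)   # noisy point predictions
lower, upper = y_hat - 1.0, y_hat + 1.0       # fixed-width intervals
print(empirical_coverage(y, lower, upper))    # ~0.95 for this toy setup
```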
§.§ Future prospects and avenues
We highlight some future investigations that can be carried out.
Though the availability of reverberation mapped objects is currently limited, the single-epoch black hole mass measurements from these are more precise and better constrained than those calibrated from the scaling relation using Hβ line <cit.>.
They can then be used to predict the reverberation mapping MBH.
Note that there are various systematic errors from the reverberation mapping method that could lead to poor estimates of the black hole mass, by a factor of ∼3, or ∼0.5 dex <cit.>.
There are several advantages of using machine learning to perform predictions of the black hole mass.
We demonstrate that the neural network model is capable of retrieving the Mvir predictions without having to model individual emission lines of the spectrum and derive their line properties.
The estimates are also comparable to those from SDSS measurements.
Since the machine learning approach is general, one can also apply a similar pipeline to predict other properties of quasars, such as the emission line width, and quantify their uncertainties.
The development of a machine learning model that is completely independent of the spectral fitting algorithm and process would be of interest.
Another benefit that has been mentioned in <cit.> is that it avoids the use of the empirical scaling relation between the BLR size and luminosity from reverberation mapping to calibrate line-based Mvir.
With this, the induced bias from the scaling relation can be mitigated or removed.
Subsequently, the inferred Mvir for high redshift objects using Civ might also be less biased.
However, it is noteworthy that there are additional complications when using the Civ line, such as its being dominated by outflows <cit.> rather than virial motion, which is the basis of the Mvir estimate.
Hence, it is also important to ensure that the dataset used contains reliable measurements with good signal-to-noise ratio <cit.>.
We envision that the inclusion of uncertainty quantification will provide a useful assessment of the reliability of the black hole mass predictions trained using machine learning.
The CQR as well as other uncertainty quantification methods that we explored in this work can be incorporated in conjunction with many machine learning models.
A further extension to this work is to explore various or novel uncertainty quantification techniques that would improve the coverage and widths, such as in the presence of limited data and data with large measurement errors.
A potential approach is the conformal predictive system for regression <cit.>.
Rather than a single interval, the conformal predictive system estimates the cumulative probability distribution.
In this way, it can be used to further access the trustworthiness of the uncertainty based on the difficulty of the estimates.
§ SUMMARY
Measuring an accurate black hole mass has been known to be challenging due to the induced bias from the scaling relation that is used to calibrate the virial black hole mass for high redshift sources.
A reliable tool to determine the uncertainty of the virial black hole mass is important to probe the black hole population and evolution.
In this work, we examine various prediction interval methods, including conformalised quantile regression (CQR), to quantify the uncertainties in the Hβ and Mgii line-based virial black hole mass estimation.
The code is publicly available at .
Using quasar spectra from the Sloan Digital Sky Survey, we train the data on a neural network model for feature extraction using supervised learning, which is then provided to a regressor for predictions.
Among the uncertainty quantification methods that we investigated, the CQR generates a more practical and meaningful range of probable intervals compared to other methods such as jackknife+-after-bootstrap, cross-validation and its variations.
The uncertainty intervals of the other methods are either fixed or relatively large.
Conversely, the CQR is able to provide variable width prediction intervals and the tightness of the bounds reflects the correlation with the black hole mass as well as its associated properties.
As objects increase in black hole mass, the prediction interval becomes narrower.
That is, the prediction bound from CQR will be more certain given a luminous object with broad spectral line width.
Additionally, the neural network architecture coupled with the CQR framework is able to retrieve the line-based virial black hole masses and their corresponding errors as well as those estimated from the Sloan Digital Sky Survey.
The uncertainty quantification method can be deployed to any machine learning algorithm to assess the quality of the black hole mass predictions, and hence, is recommended.
§ ACKNOWLEDGEMENTS
We thank the anonymous referee for valuable suggestions on the manuscript.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is <www.sdss4.org>.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
Software: Astropy <cit.>, Jupyter <cit.>, MAPIE <cit.>, Matplotlib <cit.>, NumPy <cit.>, pandas <cit.>, PyTorch <cit.>, scikit-learn <cit.>, SciPy <cit.>, UMAP <cit.>.
§ DATA AVAILABILITY
The catalogue and spectroscopic data underlying this article are available in Sloan Digital Sky Survey Data Release 16 quasar properties catalogue at <http://quasar.astro.illinois.edu/paper_data/DR16Q/>. The code repository used in this work is publicly available at .
§ DATASET SAMPLE SELECTION
From the 750,414 SDSS DR16Q spectra, we perform quality cuts, as described in <ref> of the main paper.
The selection criteria and the corresponding number of spectra after each cut are as follow.
Note that the number of spectra dropped by each cut depends on the order in which the criteria are applied, as some spectra fail multiple criteria; a pandas sketch of these cuts is given below.
* Hβ line flux/flux error >2: 140,172
* and Mgii line flux/flux error >2: 133,772
* and Hβ logarithm line luminosity ranges 38–48 erg s^-1: 132,543
* and Mgii logarithm line luminosity ranges 38–48 erg s^-1: 132,538
* and signal-to-noise ratio per pixel ≥ 10: 25,283
* and Hβ line width available: 25,283
* and Mgii line width available: 25,283
* and Hβ black hole mass available: 14,798
* and Mgii black hole mass available: 14,777
* and Hβ black hole mass error <0.5: 14,602
* and Mgii black hole mass error <0.5: 14,314
* and Hβ line width error <2000km s^-1: 14,124
* and Mgii line width error <2000km s^-1: 13,952
The final data sample contains 13,952 spectra. The SDSS DR16Q data along with the derived catalogue are publicly available at <http://quasar.astro.illinois.edu/paper_data/DR16Q/>.
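For illustration, the cuts above can be expressed as a single boolean mask in pandas; the column names in the sketch below are placeholders and must be mapped to the actual DR16Q property-catalogue columns.

```python
# Sketch of the sample-selection cuts in pandas. Column names are illustrative
# placeholders; the DR16Q catalogue uses its own naming convention.
import pandas as pd

def select_sample(df: pd.DataFrame) -> pd.DataFrame:
    cuts = (
        (df["hb_line_flux"] / df["hb_line_flux_err"] > 2)
        & (df["mgii_line_flux"] / df["mgii_line_flux_err"] > 2)
        & df["log_l_hb"].between(38, 48)
        & df["log_l_mgii"].between(38, 48)
        & (df["snr_per_pixel"] >= 10)
        & df["fwhm_hb"].notna()
        & df["fwhm_mgii"].notna()
        & df["logmbh_hb"].notna()
        & df["logmbh_mgii"].notna()
        & (df["logmbh_hb_err"] < 0.5)
        & (df["logmbh_mgii_err"] < 0.5)
        & (df["fwhm_hb_err"] < 2000)      # km/s
        & (df["fwhm_mgii_err"] < 2000)    # km/s
    )
    return df[cuts]

# Example (hypothetical file name):
# sample = select_sample(pd.read_parquet("dr16q_properties.parquet"))
```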
§ PREDICTING BLACK HOLE MASS FROM REVERBERATION MAPPING
To examine the performance of the predictions and prediction intervals on reverberation mapped black hole masses, MRM, we utilise the reverberation mapped SDSS samples from <cit.> measured using Mgii lags.
We cross-match them with the SDSS DR16Q quasar properties catalogue <cit.> and further restrict those with both Hβ and Mgii lines.
This provides a total of 14 samples, 7 of which are gold samples with the most credible Mgii lags (≤ 10% individual false-positive rate).
<Ref> shows the comparison between the Mvir and MRM along with the predictions from the supervised neural network and prediction intervals from conformalised quantile regression, as outlined in <ref> of the main paper.
Half of the samples are gold samples (<ref>, gold star).
The majority of them have Mvir close to MRM within 0.5 dex, except one that also shows the largest discrepancy of ∼ 1.5 dex.
As mentioned, this inconsistency arises from the fundamental difference between Mvir and MRM.
A further suggestion is to train using MRM samples in order to predict the same quantity.
§ EXPERIMENT ON SELECTED SUBSAMPLES
The same experiment outlined in the main paper is repeated using a smaller but rather balanced dataset that evenly covers a range of Mvir measurements.
We choose to equally sample the black hole masses estimated from Hβ and Mgii lines within the range of 10^8–10^9 M_⊙ with bin interval of 0.2.
Our final sample is then 6070 spectra, which is split into 4249, 1274, and 547 spectra that account for 70% training, 20% validation, and 10% test sets, respectively.
|
http://arxiv.org/abs/2307.05361v1 | 20230708230112 | A Physics-Informed Low-Shot Learning For sEMG-Based Estimation of Muscle Force and Joint Kinematics | [
"Yue Shi",
"Shuhao Ma",
"Yihui Zhao",
"Zhiqiang Zhang"
] | eess.SP | [
"eess.SP",
"cs.AI",
"cs.LG",
"cs.RO"
] |
A Physics-Informed Low-Shot Learning For sEMG-Based Estimation of Muscle Force and Joint Kinematics
====================================================================================================
Muscle force and joint kinematics estimation from surface electromyography (sEMG) are essential for real-time biomechanical analysis of the dynamic interplay among neural muscle stimulation, muscle dynamics, and kinetics. Recent advances in deep neural networks (DNNs) have shown the potential to improve biomechanical analysis in a fully automated and reproducible manner. However, the small sample nature and physical interpretability of biomechanical analysis limit the applications of DNNs.
This paper presents a novel physics-informed low-shot learning method for sEMG-based estimation of muscle force and joint kinematics. This method seamlessly integrates Lagrange's equation of motion and inverse dynamic muscle model into the generative adversarial network (GAN) framework for structured feature decoding and extrapolated estimation from the small sample data. Specifically, Lagrange's equation of motion is introduced into the generative model to restrain the structured decoding of the high-level features following the laws of physics. And a physics-informed policy gradient is designed to improve the adversarial learning efficiency by rewarding the consistent physical representation of the extrapolated estimations and the physical references.
Experimental validations are conducted on two scenarios (i.e., walking trials and wrist motion trials). Results indicate that the estimations of the muscle forces and joint kinematics are unbiased compared to the physics-based inverse dynamics, and that the proposed method outperforms the selected benchmark methods, including the physics-informed convolutional neural network (PI-CNN), the vanilla generative adversarial network (GAN), and the multi-layer extreme learning machine (ML-ELM).
§ INTRODUCTION
Human movements involve complex interactions within the neuromuscular system. The surface electromyography (sEMG)-driven estimation of muscle force and joint kinematics dynamics provides detailed biomechanical analysis to understand the neuromuscular system <cit.>, which benefits various applications, such as sports rehabilitation treatments <cit.>, <cit.>, and optimizing robotic design for individuals with impairments <cit.>. Although physics-based models explicitly explain and map sEMG signals to joint kinematics, the high cost of their static optimization has always limited the practical applications of these models <cit.>.
Recently, deep neural networks (DNNs) provide an alternative solution to map the sEMG signals to the joint kinetics and kinematics <cit.>. In this kind of model, the multi-layer convolution architecture has been explored to establish relationships between movement variables and neuromuscular status <cit.>. For example, Nasr et al <cit.> mapped the sEMG signals to the regression of joint angle, joint velocity, joint acceleration, joint torque, and activation torque, illustrating that the multi-layer convolution operators are capable of extracting underlying motor control information. Zhang et al <cit.> developed an active deep convolutional neural network to enhance the dynamic tracking capability of the musculoskeletal model on unseen data.
Despite the advantages, traditional DNNs are data-hungry and their performance is highly dependent on the quantity and quality of data <cit.>. Meanwhile, biomechanics analysis is typically a physics-based extrapolation process with small sample nature <cit.>. Therefore, it is a challenge to train DNNs with small sample data so that the DNNs perform consistently with the physics-based model. To fill this research gap, the low-shot learning (LSL) technique has attracted many researchers' attention <cit.>. For example, Rahimian et al <cit.> introduced a Few-Shot Learning Hand Gesture Recognition (FS-HGR) model to enhance the generalization capability of DNNs from a limited number of instances. Lehmler et al <cit.> explored a low-shot learning methodology that adjusts DNNs to new users with only a small size of training data.
In addition, the generative adversarial network (GAN) framework has shown great potential in handling physical extrapolating and predictive problems <cit.>. The GAN-based model is capable of discovering the structured patterns of the references and extrapolating the underlying data distribution characteristics during the adversarial learning process <cit.>. For example, Chen et al <cit.> tested and evaluated the performance of the deep convolutional generative adversarial network (DCGAN) on sEMG-based data enhancement, and their results indicated that the extrapolated data is able to augment the diversity of the original data. Fahimi et al <cit.> proposed a generative adversarial learning framework for generating artificial electroencephalogram (EEG) data to extrapolate the brain-computer interface, and their findings suggest that generated EEG augmentation can significantly improve brain-computer interface performance.
In this study, we propose a physics-informed low-shot learning method for muscle force and joint kinematics estimation from multi-channel sEMG signals. This method seamlessly integrates physics knowledge with the GAN framework for structured feature decoding and extrapolated estimation from the small sample data. Specifically, Lagrange's equation of motion is introduced into the generative model to restrain the structured decoding of the high-level features following the laws of physics. And a physics-informed policy gradient is designed to improve the adversarial learning efficiency by rewarding the consistent physical representation of the extrapolated estimations and the physical references. Results show the muscle forces and joint
kinematics estimated from the proposed method are unbiased compared to the physics-based inverse dynamics.
The remainder of this paper is organized as follows: Section <ref> detailed describes the algorithm of the proposed physics-informed policy gradient for reinforcement generative adversarial learning, including the mathematics framework of the algorithm and network architectures. Section <ref> presents the material and experimental methods. Section <ref> discusses the experimental results and model evaluations. and Section <ref> presents the conclusions.
§ PHYSICS-INFORMED LOW-SHOT LEARNING METHOD
The continuous estimation of muscle forces (F) and joint kinematics (θ) from multi-channel sEMG can be cast as a time-series generation problem. Thus, given a real multi-channel sEMG time series, we train a σ-parameterized generative network G_σ to estimate the muscle force (F̂) and joint kinematics (θ̂). In this section, we propose a GAN framework, as shown in Fig.<ref>, to train G_σ on the small sample data.
Specifically, we denote the F̂ and θ̂ estimated by G_σ as the negative samples (see details in Section <ref>), the ground truth (θ) and the inverse dynamics-based (F) <cit.> as positive samples (i.e. references). The ϕ-parameterized discriminative model D_ϕ is introduced to distinguish the positive samples and negative samples (see details in Section <ref>). During adversarial learning, the task of D_ϕ is to determine if an input sample is positive or negative, and the task of G_σ is to generate the unbiased negative samples to fool the discriminator D_ϕ. The model optimization process is driven by the newly proposed physics-informed policy gradient (see details in Section <ref>) which rewards the homogeneity of physics representation and structural characteristics between the positive and negative samples.
§.§ GAN optimization via physics-informed policy gradient
The physics-informed policy gradient method, inspired by reinforcement learning <cit.>, aims to optimize the learning process of the GAN-based model yielding physical extrapolations from the small sample data (i.e. low-shot learning). Mathematically, the physics-informed policy gradient method maximizes its expected reward J(σ) based on the physics law and structured characteristics from the small sample data. The J(σ) consists of two parts, the structural reward R_G_σ and physics representation action Q_D(ϕ)^G(σ). The J(σ) is defined as follows.
J(σ) = 𝔼[R_G_σ(G_σ(sEMG_0:T))] · Q_D_ϕ^G_σ(G_σ(sEMG_0:T), [F, θ]_0:T)
= 𝔼[R_G_σ([F̂, θ̂]_0:T)] · Q_D_ϕ^G_σ([F̂, θ̂]_0:T, [F, θ]_0:T)
where sEMG_0:T is the input multi-channel sEMG time series over T time steps. J(σ) starts from the expected reward of a predetermined state drawn from the positive samples. The terms R_G_σ and Q_D_ϕ^G_σ then jointly optimize the generative network G_σ to generate unbiased ([F̂, θ̂]_0:T) following the physics laws.
Specifically, the structural reward R_G_σ is computed by the G_σ and defined as follows.
R_G_σ([F̂, θ̂]_0:T) = exp^(PL^2([F̂, θ̂]_0:T))
where PL([F̂, θ̂]_0:T) is the physics law used to restrict the hierarchical structure of the generated data, which provides the additional information to the regularize the learning process from the small sample data. In this case, we use the Lagrange equation of motion <cit.> as the physics law, which is defined as follows.
PL([F̂, θ̂]_0:T) = 1/T ∑_t=1^T ( m(θ̂_t) θ̈̂_t + c(θ̂_t, θ̇̂_t) + g(θ̂_t) - ∑_n=1^N F̂^n_t )^2
where T is the number of time steps, N is the number of muscle-force channels in F̂, and m(θ̂_t), c(θ̂_t, θ̇̂_t), and g(θ̂_t) denote the mass matrix, the centrifugal and Coriolis term, and the gravity term, respectively <cit.>. In this manner, G_σ generates structured outputs of (F̂, θ̂).
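As a concrete illustration, the sketch below computes the Lagrange residual PL for a single-degree-of-freedom joint in PyTorch; the toy mass, centrifugal/Coriolis, and gravity terms and the finite-difference derivatives are simplifying assumptions rather than the exact implementation used in this study.

```python
# Sketch of the Lagrange-equation residual PL for a single-DoF joint.
# m_fn, c_fn, g_fn stand in for the subject-specific dynamic model terms;
# here they are toy callables. Derivatives use finite differences over T frames.
import torch

def lagrange_residual(theta, forces, m_fn, c_fn, g_fn, dt=0.01):
    """theta: (B, T) joint angle; forces: (B, T, N) estimated muscle forces."""
    theta_dot = torch.gradient(theta, spacing=dt, dim=1)[0]
    theta_ddot = torch.gradient(theta_dot, spacing=dt, dim=1)[0]
    torque = m_fn(theta) * theta_ddot + c_fn(theta, theta_dot) + g_fn(theta)
    residual = torque - forces.sum(dim=-1)   # sum over the N muscle channels, as in the equation above
    return (residual ** 2).mean(dim=1)       # average over T, per sample

# Toy dynamic terms for illustration only (replace with the actual model):
m_fn = lambda th: 0.05 * torch.ones_like(th)   # inertia
c_fn = lambda th, thd: 0.01 * thd              # centrifugal/Coriolis
g_fn = lambda th: 0.5 * torch.sin(th)          # gravity

theta = torch.randn(4, 100)        # batch of 4 motion cycles, 100 frames
forces = torch.rand(4, 100, 2)     # two muscle channels (e.g. RF and BFS)
print(lagrange_residual(theta, forces, m_fn, c_fn, g_fn).shape)  # torch.Size([4])
```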
Q_D_ϕ^G_σ is computed by D_ϕ and interprets the physics-constraint action values as the probability, estimated by D_ϕ, of being physically real. These physics-constraint action values lead to the improvement of the GAN model in physical extrapolation from the small training data. Q_D_ϕ^G_σ can be formulated as:
Q_D_ϕ^G_σ(G_σ(sEMG_0:T), [F, θ]_0:T) = 𝔼_[F̂, θ̂]_0:T ∼ [F, θ]_0:T[log D_ϕ([F̂, θ̂]_0:T)]
+ 𝔼_[F̂, θ̂]_0:T ∼ G_σ(sEMG_0:T)[log(1 - D_ϕ([F̂, θ̂]_0:T))]
For each epoch, once the new R_G_σ and Q_D_ϕ^G_σ have been obtained, the policy model G_σ is updated following the gradient of the reward function:
∇_σ J(σ) = 𝔼_[F̂, θ̂]_0:T∼ G_σ(sEMG_0:T)∑∇_σ R_G_σ([F̂, θ̂]_0:T|[F, θ]_0:T)
· Q^G_σ_D_ϕ ([F̂, θ̂]_0:T, [F, θ]_0:T)
Using likelihood ratios, the unbiased estimation for Eq. <ref> on one epoch can be described as follows.
∇_σJ(σ) ≃1/T∑_t=1^T∑_y_t ∈ [F̂, θ̂]_t∇_σ R_G_σ(y_t|[F, θ]_t) · Q^G_σ_D_ϕ (y_t, [F, θ]_t)
=1/T∑_t=1^T ∑_y_t ∈ [F̂,θ̂]_t G_σ(y_t|[F, θ]_t) ∇_σlog G_σ(y_t|[F, θ]_t)
· Q^G_σ_D_ϕ(y_t, [F, θ]_t)
The parameters of the policy model G_σ can be updated as follows.
σ←σ + α∇_σ J(σ)
where α∈ℝ is the learning rate.
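A minimal sketch of one such update step is given below; it assumes a Gaussian policy with fixed standard deviation over the generator output so that the log-likelihood term is well defined, and the generator, discriminator, and reward function are user-supplied callables rather than the exact networks used here.

```python
# Sketch of one policy-gradient update step. A Gaussian policy with fixed std
# is assumed over the generator output so that log G_sigma(y_t | .) is defined;
# `generator`, `discriminator`, and `reward_fn` (e.g. a structural reward built
# from the Lagrange residual) are user-supplied callables.
import torch
from torch.distributions import Normal

def policy_gradient_step(generator, discriminator, reward_fn, optimizer, semg, std=0.1):
    mu = generator(semg)                                # (B, T, D) mean of [F_hat, theta_hat]
    policy = Normal(mu, std)
    sample = policy.sample()                            # sampled musculoskeletal tokens
    log_prob = policy.log_prob(sample).sum(dim=(1, 2))  # log G_sigma per sequence

    with torch.no_grad():                               # rewards are treated as constants
        q_value = discriminator(sample)                 # (B,) prob. of being physics-real
        reward = reward_fn(sample) * q_value            # R_G * Q_D, per sequence

    loss = -(log_prob * reward).mean()                  # ascend J(sigma) by descending -J
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```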
To summarize, Algorithm 1 provides an in-depth look at our proposed GAN optimization via the physics-informed policy gradient. Initially, G_σ is pre-trained on the training set sEMG = {X_1:T} using maximum likelihood estimation (MLE). Then, G_σ and D_ϕ undergo adversarial learning. As G_σ improves, D_ϕ is routinely retrained to stay synchronized with the G_σ improvement. We ensure balance by generating, at each training step, the same number of negative samples as positive samples.
§.§ The generative network
The proposed physics-informed low-shot learning method does not depend on a specific generative network architecture. In this study, considering the long-term temporal dependencies of the F and θ sequences on the input multi-channel sEMG sequence, we employ Long Short-Term Memory (LSTM) cells in our generative model <cit.>. The architecture of the generator network G is shown in Fig.<ref>. It serves three functions: multi-channel sEMG feature extraction, residual learning with LSTM, and musculoskeletal token sequence generation.
Firstly, for the multi-channel sEMG feature extraction, a 1-dimensional (1D) convolution filter with a 2 × 1 kernel is introduced to capture the multiple sEMG features at time step t. The extracted convolution features represent the hierarchical structures of the multi-channel sEMG. In this study, the convolution kernel is set to 1 × b for a b-channel sEMG input. Considering that a batch normalization (BN) layer would normalize the features and remove the range flexibility for upscaling features <cit.>, no BN layer is used here, to avoid blurring the sEMG responses hidden in the extracted features. The max-pooling layer combines the extracted sEMG features into a single neuron by taking the maximum value from each convolution window. The max-pooling operation reduces the number of parameters and the network computation cost, and has the effect of mitigating over-fitting.
Secondly, the LSTM blocks are employed for residual learning of the time-series characteristics of the target musculoskeletal tokens. The LSTM layer is well suited for time-series sequence generation by addressing the explosive and vanishing gradient issues <cit.>. An LSTM block consists of a memory cell, an input gate, an output gate, and a forget gate, the detailed definitions of the components are described in <cit.>'s study. Specifically, in this study, in time step t, the memory cell remembers structured feature values over the previous t-1 intervals and the three gates regulate the flow of information into and out of the memory cell, which has a great preference for preserving long-term temporal structure characteristics by consolidating previous temporal correlations as memory units. Meanwhile, the high-level sEMG features extracted from the convolution layer represent the current multi-channel sEMG responses to muscle force and joint kinematics. The skip-connect of the memory cell and the high-level sEMG features not only represent extracted local kinetic invariances but also represent the temporal dynamics of the motions.
It is noteworthy that a traditional LSTM layer only captures the fitness between the current time step and the previous time steps. However, we expect the model to also account for the resulting future outputs. In order to compute the action value for future physical fitness, a Monte Carlo (MC) search with a roll-out strategy is used to sample the unknown last T-t time steps, and the N-time Monte Carlo search can be formulated as:
{(F_0:T, θ_0:T)^1, ..., (F_0:T, θ_0:T)^N} = MC(F_0:t, θ_0:t)
Finally, the fully connected layers are used to generate the musculoskeletal tokens sequence over a motion period. The output of the LSTM unit is flattened to a feature vector and scaled to the muscle force F and joint kinematics θ.
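A minimal PyTorch sketch of this generator is given below; all layer sizes, the pooling factor, and the output dimension are illustrative choices rather than the exact configuration used in this study.

```python
# Minimal sketch of the described generator: 1D convolution over the sEMG
# channels, max-pooling, an LSTM over time with a skip connection to the
# convolutional features, and a fully connected head producing the muscle
# forces and joint kinematics. All layer sizes are illustrative.
import torch
import torch.nn as nn

class SemgGenerator(nn.Module):
    def __init__(self, n_channels=5, hidden=64, n_out=6):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, 32, kernel_size=3, padding=1)  # no BN layer, as in the text
        self.pool = nn.MaxPool1d(kernel_size=2)
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden + 32, 64), nn.ReLU(),
                                  nn.Linear(64, n_out))   # e.g. N muscle forces + joint angle

    def forward(self, semg):                                  # semg: (B, T, n_channels)
        feat = torch.relu(self.conv(semg.transpose(1, 2)))    # (B, 32, T)
        feat = self.pool(feat).transpose(1, 2)                # (B, T/2, 32)
        out, _ = self.lstm(feat)                              # (B, T/2, hidden)
        fused = torch.cat([out, feat], dim=-1)                # skip connection of memory and conv features
        return self.head(fused)                               # (B, T/2, n_out)

tokens = SemgGenerator()(torch.randn(2, 100, 5))
print(tokens.shape)   # torch.Size([2, 50, 6])
```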
§.§ The discriminative model
In this study, a ϕ-parameterized discriminator network D_ϕ is built to guide the iterations of G_σ from the small sample data. D_ϕ outputs a probability indicating the heterogeneity between [F̂, θ̂] and [F, θ]. For this purpose, we employ a convolution neural network (CNN) <cit.> as the discriminative model because of its successful applications in sequence classification. In this study, we concentrate on the situation where the discriminator estimates the likelihood that a completed [F̂, θ̂] time series comes from the physical-law model (i.e., inverse dynamics).
We first represent an input muscle force and joint kinematics time series x_1,...,x_T as
E_0:T = [F̂, θ̂]_0 ⊕ [F̂, θ̂]_2 ⊕ ... ⊕ [F̂, θ̂]_T
where x_t ∈ℝ^b is the muscle force and joint kinematics at time step t, and ⊕ is the concatenation operator used to build the matrix E_0:T∈ℝ^T×b. Then the convolution operator is used to produce a new feature map:
c_i = ρ(w ⊙ E_i:i+l-1 + b)
where ⊙ is the element-wise product, b is a bias term, and ρ is a non-linear function. In this study, the discriminator, as shown in Fig.<ref>, employs various numbers of kernels with different window sizes to extract different features from the input musculoskeletal sequence, and max-pooling is applied over the feature maps to reduce the number of parameters and the network computation cost. In order to enhance the discrimination performance, a highway operator <cit.> over the pooled feature maps is also employed in our discriminative model. Finally, a fully connected layer with softmax activation outputs the estimated likelihood that the input sequence conforms to the physical laws.
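A minimal PyTorch sketch of this discriminator is given below; the filter counts, window sizes, and highway formulation are illustrative assumptions.

```python
# Minimal sketch of the CNN discriminator: parallel convolutions with several
# window sizes, max-over-time pooling, a highway layer, and a softmax output
# giving the probability that an input [F, theta] sequence is physics-real.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequenceDiscriminator(nn.Module):
    def __init__(self, in_dim=6, n_filters=32, windows=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(in_dim, n_filters, kernel_size=w) for w in windows])
        d = n_filters * len(windows)
        self.highway_t = nn.Linear(d, d)   # transform gate
        self.highway_h = nn.Linear(d, d)   # nonlinear transform
        self.out = nn.Linear(d, 2)

    def forward(self, x):                             # x: (B, T, in_dim)
        x = x.transpose(1, 2)                         # (B, in_dim, T)
        pooled = [F.relu(conv(x)).max(dim=-1).values for conv in self.convs]
        h = torch.cat(pooled, dim=-1)                 # (B, d) max-over-time features
        t = torch.sigmoid(self.highway_t(h))
        h = t * F.relu(self.highway_h(h)) + (1 - t) * h
        return F.softmax(self.out(h), dim=-1)         # [:, 1] = prob. physics-real

probs = SequenceDiscriminator()(torch.randn(4, 100, 6))
print(probs.shape)   # torch.Size([4, 2])
```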
§ MATERIAL AND EXPERIMENTAL METHODS
In this study, we test our proposed method on two joint motion scenarios. The first one is the knee joint modeling from an open-access dataset of walking trials, and the second one is the wrist joint modeling from the self-collected dataset of wrist motions.
§.§ Open-access dataset of walking trials
The open-access dataset of walking trials is obtained from a real-world experiment reported in <cit.>. This dataset involves six healthy participants with an average age of 12.9 ± 3.2 years and an average weight of 51.8 ± 19.1 kg. Participants are instructed to walk at four distinct speeds, which include very slow (0.53 ± 0.1 m/s), slow (0.75 ± 0.1 m/s), free (1.15 ± 0.08 m/s), and fast (1.56 ± 0.21 m/s) speeds. The sEMG signals are captured from the biceps femoris short head (BFS) and the rectus femoris (RF), as they are the primary flexor and extensor of the knee joint. In this study, we normalize each gait cycle into 100 frames for model training and testing, and keep the original data for model extrapolation evaluation. In the model training and testing session, each walking trial sample is formatted into a source matrix that includes the time step, gait motion data, and enveloped sEMG signals. All of the samples from different participants are combined to create a comprehensive dataset for model training and testing.
§.§ Self-collected dataset of wrist motions
Our wrist motions experiment, approved by the MaPS and Engineering Joint Faculty Research Ethics Committee of the University of Leeds (MEEC 18-002), involved six participants with signed consent. Participants were instructed to keep their torso straight with their shoulder abducted at 90 degrees and their elbow joint flexed at 90 degrees. The VICON motion capture system is used to record continuous wrist flexion/extension motion. Joint motions are calculated using an upper limb model with 16 reflective markers with 250 Hz sampling rate. Concurrently, sEMG signals are captured from the primary wrist muscles (n = 1, 2,..., 5), including the flexor carpi radialis (FCR), the flexor carpi ulnaris (FCU), the extensor carpi radialis longus (ECRL), the extensor carpi radialis brevis (ECRB), and the extensor carpi ulnaris (ECU) using Avanti Sensors (sampling rate is 2000 Hz). Electrodes are placed by palpation and their placement is validated by observing the signal during contraction before the experiment. The sEMG signals and motion data were synchronized and resampled at 1000 Hz. Each participant performed five repetitive trials with a three-minute break between trials to prevent muscle fatigue.
The recorded sEMG signals are pre-processed by a 20 Hz and 450 Hz band-pass filter, full rectification, and a 6 Hz low-pass filter. These signals are then normalized based on the maximum voluntary contraction recorded prior to the experiment, yielding the enveloped sEMG signals. We normalize each motion cycle into 156 frames for model training and testing, and the original data for model extrapolation evaluation. A total of 360 motion data are then combined to create a comprehensive dataset for model training and testing, and 6 motion data are used for model evaluation.
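A sketch of this enveloping pipeline is given below; the fourth-order Butterworth design and the use of zero-phase filtering are assumptions not specified above.

```python
# Sketch of the sEMG envelope pipeline described above: 20-450 Hz band-pass,
# full-wave rectification, 6 Hz low-pass, and normalisation by the maximum
# voluntary contraction (MVC). The 4th-order Butterworth design and the use of
# zero-phase filtering (filtfilt) are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def semg_envelope(raw, fs=2000.0, mvc=1.0):
    b_bp, a_bp = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
    band = filtfilt(b_bp, a_bp, raw)
    rectified = np.abs(band)                 # full-wave rectification
    b_lp, a_lp = butter(4, 6.0, btype="lowpass", fs=fs)
    envelope = filtfilt(b_lp, a_lp, rectified)
    return envelope / mvc                    # normalise by MVC

# Example on synthetic data: one 2-second channel sampled at 2 kHz.
raw = np.random.randn(4000)
print(semg_envelope(raw).shape)   # (4000,)
```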
§.§ Benchmark models and parameter settings
To evaluate the performance and effectiveness of the proposed physics-informed policy gradient for low-shot generative adversarial learning, the benchmarks employ three representative methods: the physics-informed convolutional neural network (PI-CNN) <cit.>, which represents the state-of-the-art deep learning-based musculoskeletal modeling method; the ML-ELM <cit.>, which represents the general musculoskeletal modeling method; and the vanilla GAN, which represents the traditional GAN family without physical laws <cit.>.
§.§ Evaluation metrics
The evaluation metrics include 1) metrics for the quality of the generated samples, namely the information-entropy-associated peak signal-to-noise ratio (PSNR) <cit.>, the coefficient of determination (R^2) <cit.>, the root mean square error (RMSE) <cit.>, and Spearman's rank correlation coefficient (SRCC) <cit.>; and 2) metrics for the mode collapse of GANs, namely the inception score (IS) <cit.> and the Frechet inception distance (FID) <cit.>.
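For completeness, the sketch below shows one way to compute the sample-quality metrics; defining the PSNR peak as the range of the reference signal is an assumed convention.

```python
# Minimal sketch of the sample-quality metrics: RMSE, R^2, Spearman's rank
# correlation (SRCC), and PSNR. Taking the PSNR peak as the range of the
# reference signal is an assumption about the convention used.
import numpy as np
from scipy.stats import spearmanr

def rmse(ref, est):
    return np.sqrt(np.mean((ref - est) ** 2))

def r2(ref, est):
    ss_res = np.sum((ref - est) ** 2)
    ss_tot = np.sum((ref - ref.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def srcc(ref, est):
    return spearmanr(ref, est)[0]   # rank correlation coefficient

def psnr(ref, est):
    peak = ref.max() - ref.min()
    return 20.0 * np.log10(peak / rmse(ref, est))

ref = np.sin(np.linspace(0, 6.28, 100))
est = ref + 0.05 * np.random.randn(100)
print(rmse(ref, est), r2(ref, est), srcc(ref, est), psnr(ref, est))
```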
§ RESULTS AND DISCUSSION
In this section, we evaluate the performance of the proposed physics-informed low-shot learning in the knee joint and wrist joint scenarios. We first carry out overall comparisons of the results from the proposed and benchmark methods. We also evaluate the model performance on small training data and handling mode collapse. Lastly, we investigate the robustness and generalization performance of the proposed method in intersession scenarios. The training of the proposed framework and benchmark methods was conducted using PyTorch on a workstation equipped with NVIDIA Quadro K4200 graphics cards and 256G RAM.
§.§ Overall evaluation of the muscle force dynamics modeling
In this section, we first carry out overall comparisons between the proposed and benchmark methods on the test dataset. Fig. <ref> demonstrates the overall results of the joint kinematics generation in one motion cycle from the proposed and benchmark methods for both the knee joint (the first row of Fig. <ref>) and wrist joint cases (the second row of Fig. <ref>). The average joint kinematics and standard deviation distribution from the proposed method align well with the ground truth in both the knee joint and wrist joint cases. These findings indicate that the proposed model achieves the best performance among the benchmark models on the unbiased estimation of the joint kinematics.
Similarly, Fig. <ref> and Fig.<ref> demonstrate the overall results of the muscle force estimations in one motion cycle for both the knee joint (i.e., RF and BFS) and wrist joint (i.e., FCR, FCU, ECRL, ECRB, and ECU) cases, respectively. The average muscle forces estimated by the proposed method align well with the inverse dynamics, demonstrating the excellent multiple-muscle tracking capability of the proposed model. In addition, the standard deviation distribution of the proposed model-generated muscle forces is consistent with the standard deviation distribution of the inverse dynamics-based references. These results indicate that the proposed model achieves the best performance among the benchmark models on the unbiased estimation of the muscle force from the multi-channel sEMG signals.
To further assess the extrapolation performance quantitatively, we present detailed comparisons of the proposed and benchmark models on both the test data and the evaluation data. Table <ref> and Table <ref> show the results for the knee joint case and the wrist joint case, respectively. The results indicate that the proposed model performs best on both the testing and evaluation data. Specifically, for model testing, the PSNR, R^2, RMSE, and SRCC of the proposed model are 15.57%, 6.22%, 28.08%, and 7.2% better than those of the second-best model (i.e., PI-CNN). For model evaluation, the PSNR, R^2, RMSE, and SRCC of the proposed model are 24.72%, 16.29%, 38.99%, and 17.66% better than those of the second-best model (i.e., GAN). In addition, because the evaluation data involve the original sEMG recordings, the comparison between the testing and evaluation results reflects the model extrapolation from experimental scenarios to real scenarios. The proposed model shows the best extrapolated estimation of muscle force and joint kinematics among the compared models, and its results on the testing and evaluation data are consistent. In contrast, the performance of the benchmark models declines markedly on the evaluation data.
§.§ Evaluation of low-shot learning
The proposed physics-informed policy gradient incorporates the temporal relationship of the muscle force and joint kinematics dynamics from the Lagrange motion equation, resulting in improved kinetics estimation from low-shot samples. Initially, the physical information is used to constrain the model reward accumulated over the periodic multi-channel sEMG signals. Then, the accumulated reward is used to guide the Monte Carlo search to generate the unbiased estimation of muscle force and joint kinematics dynamics. To quantitatively assess the effectiveness of the proposed method on low-shot learning, we first regard the modeling results shown in Table <ref> and Table <ref> as the baselines that represent the optimal performance of the proposed and benchmark models, and then we train the models with different training sample sizes for 1500 epochs as low-shot learning. The percentages of the low-shot learning results relative to the baseline joint kinematics modeling results, denoted as P-PSNR, P-R^2, P-RMSE, and P-SRCC, are used as the evaluation metrics to describe what percentage of the baseline performance can be achieved by the new models.
The evaluation of the low-shot learning of the proposed and benchmark models on the knee joint and wrist joint kinematics modeling is shown in Table <ref>. It is obvious that the proposed model with the physics-informed policy gradient outperforms all of the benchmark models in low-shot learning. The 10-shot learning is able to achieve over 80% of the baseline performance in terms of PSNR, R^2, RMSE, and SRCC. In comparison, the PI-CNN and GAN models require at least 80-shot learning to achieve similar modeling performance. Therefore, it can be inferred that the proposed physics-informed policy gradient relies heavily on the physical representations and temporal structural characteristics of the training data, rather than on the quantity of the data. This is encouraging, as it suggests that the proposed method facilitates the application of deep learning in biomechanical engineering despite the general issue of limited sample size.
§.§ Mode collapse evaluation
Mathematically, the generative model can easily converge to a biased estimation caused by mode collapse, where the generated samples are located only in the part of the real distribution that can fool the discriminative model, while the other modes of the real distribution are ignored during adversarial learning. To handle this issue, the proposed physics-informed policy gradient alleviates the random noise and makes the generated feature sequence governed by the physics law, which facilitates the estimation of compound kinematics patterns and achieves the unbiased estimation of kinematics generation.
In order to evaluate the performance of the proposed method on alleviating the mode collapse, we test and compare the proposed model with the benchmark models from three aspects: 1) a quantitative evaluation of the diversity of the generated motions, based on the distance-derived IS and FID metrics; 2) a monotonicity assessment of the generator iterations during the network training process; and 3) visualization of the distributions of the real and the generated motion samples.
Firstly, the quantitative evaluation of the diversity of the generated motions is conducted on the testing dataset. A higher IS and a lower FID indicate better diversity of the generated motion samples, which further indicates the alleviation of mode collapse.
The results demonstrated in Table <ref> show that the proposed model outperforms the competitors in terms of the IS and FID measurements for both the knee joint and wrist joint motion generation. In addition, compared with the benchmark GAN model, which has the same network architecture as the proposed model, the proposed model is 19.11% higher in IS and 14.23% lower in FID. These findings suggest that the proposed physics-informed policy gradient optimization approach has great performance in alleviating the mode collapse during adversarial learning.
Secondly, in order to further explore the performance of the proposed physics-informed policy gradient on the mode collapse issue, we compare the generator iterations of the same GAN architecture with and without the physics-informed policy gradient (Fig. <ref>). The IS and FID curves from the GAN with the proposed physics-informed policy gradient are more monotonic than those from the GAN without it as the number of iterations increases. Thus, the IS curves from the proposed physics-informed policy gradient steadily increase and the FID curves steadily decrease for both the knee joint (<ref>a and b) and wrist joint (<ref>c and d) cases.
§.§ Model application on intra-session scenario
In musculoskeletal modeling, the intra-session scenario refers to multiple sets of motions performed within the same session. To test the robustness of the proposed model in the intra-session scenario, we use the knee joint data with different walking speeds for one subject as the intra-session evaluation dataset. The muscle force and joint kinematics modeling results, as shown in Fig. <ref>, indicate that the proposed framework performs best among the baseline methods. Importantly, the median and interquartile values of the proposed model with the physics-informed policy gradient remain consistent with the real data across different walking speeds. In comparison, the median and quartiles of the baseline methods, such as the GAN model without the physics-informed policy gradient, show significant inconsistencies with the real data, indicating a degraded performance in the intra-session scenario due to the variability in walking speeds. These findings suggest that the model optimized by the proposed physics-informed policy gradient has great robustness in intra-session scenarios.
§.§ Model application on inter-session scenario
The inter-session scenario generally refers to a situation where motion data are collected across multiple sessions. To test the robustness of the proposed model in the inter-session scenario, we use the wrist joint data from different subjects as the evaluation dataset. The muscle force and joint kinematics modeling results, as shown in Fig. <ref>, indicate that the proposed framework performs best on the musculoskeletal modeling among the baseline methods. Specifically, the median and interquartile values of the proposed model with the physics-informed policy gradient remain consistent with the real data across different subjects. In comparison, the baseline methods, such as the GAN model without the physics-informed policy gradient, show a degraded performance in the inter-session scenario due to the variability across subjects. These findings suggest that the model optimized by the proposed physics-informed policy gradient has great robustness in inter-session scenarios.
§ CONCLUSION
This paper develops a physics-informed low-shot learning method, which seamlessly integrates the Lagrange equation of motion and the inverse dynamic muscle model into the adversarial learning process, to train the generative network for the unbiased estimation of muscle force and joint kinematics from small-sample sEMG time series. Specifically, the Lagrange equation of motion is introduced as a physical constraint, which facilitates the generator to estimate the muscle force and joint kinematics with more temporal structural representations. Meanwhile, the physics-informed policy gradient rewards the physical consistency of the generated muscle force and joint kinematics with the inverse dynamics-based references, which improves the extrapolation performance of the generative network. Comprehensive experiments on the knee joints and wrist joints indicate the feasibility of the proposed method. The resultant findings suggest that the proposed method performs well in handling the mode collapse issue on small sample data, and that the estimations of the muscle forces and joint kinematics are unbiased compared to the physics-based inverse dynamics. These findings suggest that the proposed method may reduce the gap between laboratory prototypes and clinical applications. However, it is worth noting that the physics reference (i.e., the inverse dynamics for this study) plays an important role in constraining the physics representation of the generated samples. Therefore, the choice of physics module may vary when the proposed approach is extended to other application cases.
Going forward, we plan to delve deeper into the properties of the physics-informed deep learning framework in the context of sEMG-based musculoskeletal modeling. We aim to investigate the potential of the low-shot learning-based model on the continuous and simultaneous estimation of multiple joint kinematic chains from sEMG signals. We also plan to adjust the compositions of the proposed method to cater to different application scenarios. Furthermore, we intend to evaluate the reliability and accuracy of the proposed framework through more complex movements.
unsrtnat
|
http://arxiv.org/abs/2307.04869v1 | 20230710193253 | Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning | [
"Gaurav Bagwe",
"Xiaoyong Yuan",
"Miao Pan",
"Lan Zhang"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning
Gaurav Bagwe (Department of ECE, Michigan Technological University, Houghton, MI, USA)
Xiaoyong Yuan (College of Computing, Michigan Technological University, Houghton, MI, USA)
Miao Pan (Department of ECE, University of Houston, Houston, TX, USA)
Lan Zhang (Department of ECE, Michigan Technological University, Houghton, MI, USA)
Correspondence: Gaurav Bagwe <[email protected]>
Keywords: Federated Continual Learning, Federated Learning, Rehearsal-free Continual Learning, Prompt Learning
Federated continual learning (FCL) learns incremental tasks over time from confidential datasets distributed across clients. This paper focuses on rehearsal-free FCL, which has severe forgetting issues when learning new tasks due to the lack of access to historical task data. To address this issue, we propose Fed-CPrompt based on prompt learning techniques to obtain task-specific prompts in a communication-efficient way. Fed-CPrompt introduces two key components, asynchronous prompt learning, and contrastive continual loss, to handle asynchronous task arrival and heterogeneous data distributions in FCL, respectively. Extensive experiments demonstrate the effectiveness of Fed-CPrompt in achieving SOTA rehearsal-free FCL performance.
§ INTRODUCTION
Federated learning (FL) has been a popular collaborative machine learning paradigm enabling multiple clients to learn a shared model without exposing private client data <cit.>. While successful, existing FL algorithms are mainly designed for a single task with fixed datasets on clients <cit.>, which becomes ineffective in handling non-stationary data distribution over time. Therefore, recent efforts have been put into federated continual learning (FCL) to learn tasks that are presented sequentially. Since the model in continual learning (CL) may overfit data from the current task and suffer from catastrophic forgetting <cit.>, the mainstream research to address the forgetting issue can be roughly divided into two categories: rehearsal-based and rehearsal-free FCL.
Although rehearsal-based approaches achieve state-of-the-art (SOTA) performance by using a rehearsal buffer to store and retrain data from previous tasks, the buffer size needs to be large enough to effectively mitigate forgetting <cit.>, leading to scalability and data storage constraints in FL. Moreover, many applications do not allow such a buffer due to privacy concerns <cit.>, further restricting their adoption in practice. Hence, this work focuses on rehearsal-free FCL. Existing efforts along this line regularize the global model with knowledge from previous tasks when learning a new task <cit.>. Unfortunately, their performance deteriorates substantially compared to rehearsal-based approaches <cit.>. Moreover, existing research requires continuously exchanging the entire model to learn incremental tasks in FCL, leading to significant communication overhead. In view of these challenges, it is critical to develop innovative rehearsal-free FCL in a communication-efficient way to address the forgetting issue while maintaining the model plasticity for new tasks.
Motivated by the recent advances in prompting techniques <cit.>, in this work, we leverage prompt learning to achieve the above goal. As one promising transfer learning approach, prompt learning uses insertable embeddings called prompts to condition a pre-trained model for downstream tasks. Recent research enables prompt-based CL by using key-query mechanisms, which achieves SOTA rehearsal-free performance, even outperforming rehearsal-based CL <cit.>. Due to the small size of prompt parameters, the communication efficiency of FCL is expected to improve significantly. However, existing prompt-based CL is designed for centralized datasets, which becomes ineffective in FL with distributed and confidential datasets. The main limitation is due to the inherent heterogeneity of distributed clients. On the one hand, clients may observe heterogeneous data for the same task, leading to biased learning performance and slow convergence. On the other hand, incremental tasks may arrive asynchronously on clients, further deteriorating the overall learning performance. Therefore, to unleash the potential of prompting for rehearsal-free FCL, we propose Fed-CPrompt to facilitate inter-task and inter-client prompt-based knowledge transfer while addressing the heterogeneity concerns of data distribution and task arrival over clients.
Our key contributions are summarized below:
* We propose Fed-CPrompt, an innovative rehearsal-free FCL framework based on prompting techniques. Fed-CPrompt achieves SOTA FCL performance to handle the stability-plasticity dilemma under heterogeneous FL environments in a communication-efficient way.
* We introduce two key components to Fed-CPrompt: asynchronous prompt learning takes advantage of task asynchronicity to strengthen the task-specific prompts; C2L loss alleviates inter-task forgetting and inter-client data heterogeneity via a contrastive and continual loss.
* We conduct extensive experiments to demonstrate the effectiveness of Fed-CPrompt in various challenging FCL settings, such as heterogeneous data distribution and asynchronous task arrival.
§ PROPOSED METHOD
§.§ Problem Statement
In a standard FCL setting, a central server coordinates a set of distributed clients 𝒞 to learn incremental tasks 𝒯_1, …, 𝒯_n over time.
The training data for each task is distributed to clients and cannot be shared. FCL aims to obtain a global model parameterized by 𝐰 to perform all existing tasks. In this work, we consider a challenging CL problem, class-incremental CL, where the task labels are unknown during inference <cit.>. Our design can be easily extended to the task- or domain-incremental FCL problems. The optimization objective can be written as
min_𝐰 ∑_i∈{1,…,n} ∑_c∈𝒞 (n_c^𝒯_i / n^𝒯_i) ℒ(𝒟_c^𝒯_i; 𝐰),
where n_c^𝒯_i and n^𝒯_i represent the number of training samples from client c and all clients for task 𝒯_i, respectively. 𝒟_c^𝒯_i is the training dataset of 𝒯_i on client c.
This objective function uses data from all existing tasks, making it a rehearsal-based FCL problem.
This work focuses on rehearsal-free FCL. Specifically, each client can only observe the training data of the current task, i.e., when training on task 𝒯_n, training data of all previous tasks are unseen.
However, due to the unavailability of historical task data, training the current task can overwrite previous task information of the model 𝐰 in (<ref>), deteriorating the forgetting issues in CL. Thus, existing rehearsal-free FCL approaches cannot achieve comparable performance to rehearsal-based approaches <cit.>.
§.§ Design Principle
In this work, we aim to accommodate the forgetting issue for rehearsal-free FCL. Inspired by the success of the prompt-based rehearsal-free CL that achieves SOTA performance, we intend to implement prompting techniques in our design. Existing prompt-based CL <cit.> use insertable embeddings, called prompts p, to condition a frozen pre-trained model θ to perform incremental tasks. Due to the small size of prompt parameters, a task-specific prompt is created and stored for each task to avoid overwriting previous knowledge. Here, we refer readers to Appendix <ref> for more details. While successful, the above prompt-based CL research is designed for centralized datasets, which becomes ineffective in FL settings.
The main challenge of implementing prompting techniques in FCL is the inherent heterogeneity of distributed clients. On the one hand, the data heterogeneity among clients leads to biased local updates and slow convergence. On the other hand, the sequential tasks may appear asynchronously over clients, further delaying convergence. Due to the small size of learnable parameters in prompt learning, it is essential to improve their learning capacity by facilitating knowledge transfer between tasks and clients. Therefore, we propose Fed-CPrompt, an innovative prompt-based rehearsal-free FCL framework. As shown in Figure <ref>, Fed-CPrompt introduces two key components, asynchronous prompt learning and contrastive and continual loss, to address the aforementioned task arrival and data heterogeneity concerns. In the following, we first introduce these two components and then present the overall training of Fed-CPrompt.
§.§ Asynchronous Prompt Learning
We adopt the existing prompt-based CL approach (CODA-P <cit.>) on clients to learn incremental tasks based on their local data. In CODA-P, the prompt for the current task is re-weighted based on previous task information to refine task-specific representation via attention mechanisms (see Appendix <ref>). In Fed-CPrompt, when client c∈𝒞 learns task 𝒯_m, p^𝒯_m_c=∑_i∈[1,m-1]α^𝒯_i_s P^𝒯_i_s + α^𝒯_m_c P^𝒯_m_c, where α^𝒯_i_b and P^𝒯_i_b are the 𝒯_i-specific attention and prompt at the server (b=s) and client c (b=c), respectively. The updated client-side p^𝒯_m_c will be uploaded to the server and aggregated based on classical FL <cit.> to obtain server-side prompt p^𝒯_m_s. However, such naive aggregation becomes inefficient in asynchronous task arrival. When client c is training task 𝒯_m, the latest task observed by other clients might be task 𝒯_n (m<n), and 𝒯_n will be observed by client c later. Hence, to handle this condition, Fed-CPrompt introduces asynchronous prompt learning.
Instead of waiting for updated prompts of the current task 𝒯_n from all clients before aggregation, we allow task-specific prompt aggregation in parallel. In this way, the previously learned prompt at the server p_s^𝒯_m can be refined by p^𝒯_m_c. Moreover, taking advantage of the task arrival heterogeneity, the training of p^𝒯_m_c becomes
p_c^𝒯_m=∑_i= 1^m-1α^𝒯_i_s P^𝒯_i_s + α_c^𝒯_m P_c^𝒯_m + ∑_j= m+1^nα^𝒯_j_s P^𝒯_j_s ,
where the first and the third terms are task knowledge from the server, which are frozen when training task 𝒯_m. It should be mentioned that although the newest task for client c is 𝒯_m, the asynchronous task arrival in FCL allows client c to leverage unseen task knowledge to navigate the local training. By incorporating past and future task representations, we increase the capacity of prompts to learn task-specific instructions.
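A minimal PyTorch sketch of this composition is given below; the tensor shapes are illustrative, and in CODA-P the weights α are produced by a key/attention query mechanism rather than stored directly as parameters.

```python
# Sketch of the asynchronous prompt composition: server-side prompts/weights of
# past and future tasks are frozen buffers, and only the current task's prompt
# and weight are learnable on the client. Shapes are illustrative.
import torch
import torch.nn as nn

class AsyncPromptPool(nn.Module):
    def __init__(self, n_tasks, cur_task, prompt_len=8, dim=768):
        super().__init__()
        self.cur = cur_task
        # Frozen server-side components for all other (past and future) tasks.
        self.register_buffer("server_prompts", torch.zeros(n_tasks, prompt_len, dim))
        self.register_buffer("server_alpha", torch.zeros(n_tasks))
        # Learnable client-side components for the current task only.
        self.client_prompt = nn.Parameter(0.02 * torch.randn(prompt_len, dim))
        self.client_alpha = nn.Parameter(torch.tensor(1.0))

    def forward(self):
        keep = torch.ones(self.server_alpha.shape[0], dtype=torch.bool)
        keep[self.cur] = False                         # exclude the current task's server copy
        frozen = (self.server_alpha[keep].view(-1, 1, 1)
                  * self.server_prompts[keep]).sum(dim=0)
        return frozen + self.client_alpha * self.client_prompt   # composed prompt p_c for task T_m

pool = AsyncPromptPool(n_tasks=10, cur_task=3)
print(pool().shape)   # torch.Size([8, 768])
```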
§.§ C2Loss: Contrastive and Continual Loss
To address the data heterogeneity issue while alleviating forgetting in FCL, we introduce a new loss function, contrastive and continual loss (C2Loss), to regularize local training on clients. The goal of C2Loss is mainly twofold. First, C2Loss accommodates disagreements between clients due to biased local training with heterogeneous data distribution. Second, C2Loss enforces distinct task-specific prompts construction, which facilitates CL to avoid the forgetting effect.
Specifically, when learning task 𝒯_m at communication round r, we have the C2Loss on client c∈𝒞 given by
ℒ_C2L (P_c^𝒯_m(r))=max(|| P_c^𝒯_m(r) - P^𝒯_m_s(r-1)||_2
- γ min{|| P_c^𝒯_m(r) - P^𝒯_i_s ||_2 , i∈[1,n], i≠ m } + α, 0),
where the first term within the max() calculates the change of the current prompt compared to that in the previous round. By restricting this change, C2Loss smooths the local update to achieve the first goal. The second term within the max() finds the most similar prompt to the current prompt based on the distance between the current and all previous prompts. By increasing this distance, C2Loss enforces the discrimination between task-specific prompts to achieve the second goal. Besides, γ>0 is the hyperparameter to balance the impact between the first two terms. α∈ [0,1] represents a margin value that encourages a separation between the first two terms <cit.>.
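A minimal PyTorch sketch of C2Loss is given below; flattening the prompts before taking L2 norms and the default γ and margin values are illustrative choices.

```python
# Minimal sketch of C2Loss: penalise drift from the previous-round global prompt
# while pushing the current prompt away from its nearest other-task prompt.
import torch

def c2_loss(p_cur, p_prev_round, other_task_prompts, gamma=0.5, margin=0.1):
    """p_cur, p_prev_round: (L, D); other_task_prompts: (K, L, D)."""
    drift = torch.norm(p_cur - p_prev_round)                       # round-to-round change
    dists = torch.norm((p_cur.unsqueeze(0) - other_task_prompts).flatten(1), dim=1)
    separation = dists.min()                                       # nearest other-task prompt
    return torch.clamp(drift - gamma * separation + margin, min=0.0)

p_cur = torch.randn(8, 768, requires_grad=True)
loss = c2_loss(p_cur, torch.randn(8, 768), torch.randn(4, 8, 768))
loss.backward()
print(loss.item())
```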
§.§ Overall Training
In Fed-CPrompt, client c∈𝒞 conducts local training with dataset 𝒟_c^𝒯_m for the current task 𝒯_m. As discussed in (<ref>), a prompt is constructed based on attention mechanisms, and thus the learnable prompt parameter for client c is defined by 𝐰_c^𝒯_m={P_c^𝒯_m, K_c^𝒯_m, A_c^𝒯_m} (K and A compose the α in (<ref>), detailed in Appendix <ref>).
min_𝐰_c^𝒯_m,ϕ_c^𝒯_mℒ_CE(f_ϕ_c(x; θ, 𝐰_c),y) + λ ℒ_C2L(𝐰_c^𝒯_m),
where ϕ_c^𝒯_m represents the classifier parameters for task 𝒯_m. Note that 𝐰_c and ϕ_c concatenate the frozen parameters of previous tasks with the learnable parameters of the current task, as discussed in (<ref>). Besides, θ denotes the frozen pretrained model parameters; (x,y)∈𝒟_c^𝒯_m; and λ∈ [0,1] is the hyperparameter balancing the two losses.
Both prompt parameters 𝐰_c^𝒯_m and classifier parameters ϕ_c^𝒯_m will be uploaded to the server. The server handles asynchronous task arrival by conducting parallel aggregation following classical FL <cit.>.
The overall training is illustrated in Algorithm <ref> of Appendix <ref>.
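The server-side step can be viewed as a per-task FedAvg: uploads are grouped by the task they belong to and averaged within each group, so clients working on different tasks never block one another. The sketch below illustrates this idea; the data layout and sample-count weighting are assumptions made for illustration.

from collections import defaultdict

def aggregate_per_task(uploads):
    # uploads: list of (task_id, num_samples, params) tuples from clients that may be on different tasks;
    #          params maps parameter names to tensors (or floats in this toy example)
    grouped = defaultdict(list)
    for task_id, n_k, params in uploads:
        grouped[task_id].append((n_k, params))
    aggregated = {}
    for task_id, items in grouped.items():
        total = sum(n_k for n_k, _ in items)
        keys = items[0][1].keys()
        # sample-count-weighted average, computed independently (in parallel) for each task
        aggregated[task_id] = {k: sum(n_k * p[k] for n_k, p in items) / total for k in keys}
    return aggregated

# toy usage: two clients on task 1 and one client already on task 2
uploads = [(1, 100, {"prompt": 0.2}), (1, 300, {"prompt": 0.6}), (2, 200, {"prompt": 0.9})]
print(aggregate_per_task(uploads))  # {1: {'prompt': 0.5}, 2: {'prompt': 0.9}}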
§ EXPERIMENTS
§.§ Experimental Setup
We evaluate the proposed Fed-CPrompt based on the CIFAR-100 dataset <cit.>, a widely used dataset in continual learning for classification tasks. We consider a total of 10 clients in FCL. The server-side knowledge aggregation is based on FedAvg <cit.>. The evaluation metrics include average accuracy and average forgetting, which are standard metrics used in previous CL research <cit.>. To comprehensively evaluate Fed-CPrompt, we consider baseline approaches, including rehearsal-free FL approaches (i.e., Fed-EWC and Fed-LWF) and recent prompt-based CL approaches (i.e., Fed-CODAP, Fed-DualP, Fed-L2P). Further details on the dataset setup, FL settings, evaluation metrics, and baseline approaches can be found in Appendix <ref>.
§.§ Experimental Results
Effectiveness of Fed-CPrompt.
We evaluate the effectiveness of Fed-CPrompt under iid and non-iid FL settings. We report the average test accuracy and forgetting over all ten tasks. As illustrated in Table <ref>, Fed-CPrompt gains a significant performance improvement over all rehearsal-free FCL methods under iid settings. Compared with the best of existing works, Fed-CPrompt achieves around a 2% increase in Top-1 accuracy and around a 2% drop in Forgetting.
It should be mentioned that non-prompt-based methods (Fed-EWC and Fed-LwF) optimize about 86 million parameters, while Fed-CPrompt optimizes only 4 million (≈ 4.18%) to achieve better performance. Moreover, Fed-CPrompt has better convergence, which significantly reduces the communication cost for FCL.
Besides, we further compare the Fed-CPrompt with other prompt-based FCL baselines under non-iid settings. In the experiments following <cit.>, we consider two non-iid settings: label skew and quantity skew. As illustrated in Table <ref>, Fed-CPrompt outperforms the existing prompt-based methods under non-iid settings. In particular, under a challenging label-skew setting, Fed-CPrompt achieves a significant performance improvement by 10.65%.
Impact of Asynchronous Continual Learning Tasks. We demonstrate the effectiveness of Fed-CPrompt under asynchronous task arrival, where the clients train the models on different tasks at the same time.
As illustrated in Table <ref>, the average test accuracy of Fed-CPrompt significantly outperforms the existing methods by 2.07% and 11.79% under iid and non-iid settings, respectively.
Our findings suggest jointly considering past and future task information can improve the training efficiency of FCL.
It should be noted that the forgetting of Fed-CPrompt is comparable to or higher than that of the existing works; however, this is due to the high accuracy Fed-CPrompt attains on the first task. Even accounting for this effect, we can still observe the substantial advantage of Fed-CPrompt in mitigating catastrophic forgetting, as its average accuracy over all ten tasks is much higher than that of the existing works.
Impact of C2Loss. We further perform ablation studies to evaluate the effectiveness of the proposed C2Loss. We compare the performance of FedProx, Fed-CPrompt without C2Loss, and Fed-CPrompt with C2Loss. As shown in Table <ref>, Fed-CPrompt with C2Loss achieves the highest accuracy and lowest forgetting among the three methods.
This is mainly because C2Loss handles inter-task and inter-client knowledge transfer, leading to better task discrimination and improved accuracy.
§ CONCLUSION
This paper proposed Fed-CPrompt, a rehearsal-free FCL framework that alleviates catastrophic forgetting over incremental tasks and facilitates knowledge transfer among distributed and heterogeneous clients. Fed-CPrompt introduces two key components: asynchronous prompt learning to handle asynchronous task arrival, and a simple yet effective contrastive and continual loss that optimizes prompt parameters while providing additional supervision for learning distinct task-specific prompts. Extensive experiments demonstrate the effectiveness of our proposal.
§ PRELIMINARIES FOR PROMPT-BASED CONTINUAL LEARNING
In this work, we build upon the technical foundations of prompt-based methods from prior centralized continual learning research <cit.> to introduce prompts that can be learned collaboratively in heterogeneous federated settings.
As done in CODA-P, prompt parameters are attached to several multi-head self-attention (MSA) layers in a pre-trained ViT. Define a task-specific prompt parameter for task 𝒯_m as P^𝒯_m∈ℝ^L_P × D ×𝒯_m, where L_P, D, and 𝒯_m are the prompt length, embedding dimension, and the number of prompts for each task, respectively. We use prefix-tuning to attach prompts to the keys and values of an MSA layer with input h∈ℝ^L × D and query, key, and value h_Q, h_K, and h_V. A prompt p is split into {P_K, P_V}∈ℝ^L_p/2× D, which are attached to the key and the value of this layer, respectively, i.e., MSA(h_Q, [P_K; h_K], [P_V;h_V]), where [·;·] denotes concatenation. Since CODA-P achieves SOTA centralized continual learning performance, we adopt the weighted prompt for local training, and the prompt for task 𝒯_m is calculated by
p^𝒯_m=∑_i∈[1,m]α^𝒯_i P^𝒯_i,
where P^𝒯_m is the learnable prompt for the current task 𝒯_m, and α^𝒯_m=γ(q(x)⊙ A^𝒯_m, K^𝒯_m) is the cosine similarity γ between the attended query and the key, where the attended query is defined by the element-wise product ⊙ of the query and the learnable attention parameter. The query is produced as q(x)∈ℝ^D=f(x; θ), where f(·;θ) is the encoder of the pre-trained ViT[We refer the reader to Sections 4.1 and 4.2 of the CODA-P <cit.> paper for more details.]. For training task 𝒯_m, the learnable parameters include P^𝒯_m, K^𝒯_m, A^𝒯_m, and the classification head ϕ^𝒯_m, whereas (α^𝒯_i, P^𝒯_i) ∀ i ∈[1, m-1] are frozen but contribute to the training as in Equation (<ref>). In addition, the classification heads of the previous tasks 𝒯_1,⋯, 𝒯_m-1 are frozen.
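The attention-weighted prompt and its prefix split can be sketched as follows. This is a simplified reading of CODA-P (a single layer, one scalar weight per task) that omits details such as CODA-P's orthogonality constraints, so it should be treated as illustrative only.

import torch
import torch.nn.functional as F

def weighted_prefix(query, attentions, keys, prompts):
    # query:      [D]         encoder feature q(x)
    # attentions: [T, D]      learnable attention vectors A^{T_i}
    # keys:       [T, D]      learnable keys K^{T_i}
    # prompts:    [T, L_p, D] task-specific prompts P^{T_i}
    attended = query.unsqueeze(0) * attentions              # element-wise product of q(x) and A
    alpha = F.cosine_similarity(attended, keys, dim=-1)     # [T] weights, one per task
    p = torch.einsum('t,tld->ld', alpha, prompts)           # weighted sum of task prompts, [L_p, D]
    P_K, P_V = p.chunk(2, dim=0)                            # prefix halves attached to the MSA keys/values
    return P_K, P_V

# toy usage with hypothetical sizes
D, T, L_p = 768, 4, 8
P_K, P_V = weighted_prefix(torch.randn(D), torch.randn(T, D), torch.randn(T, D), torch.randn(T, L_p, D))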
§ RELATED WORK
Federated Continual Learning (FCL).
FCL addresses catastrophic forgetting across multiple clients trained on their private sequential tasks, where a global model is obtained by exchanging task-specific knowledge via a global server. Mainstream FCL research can be roughly divided into two categories: rehearsal-based and rehearsal-free FCL.
The rehearsal-based research stores and replays information from previous tasks to mitigate the global model's forgetting over time <cit.>. For example, Huang et al. proposed FCCL to address the heterogeneity and catastrophic forgetting in federated learning based on buffered data for intra- and inter-domain knowledge distillation <cit.>. Similarly, Zizzo et al. and Wang et al. leveraged replay buffers and novel data-sharing approaches based on differential privacy to mitigate forgetting <cit.>. To tackle the global model's forgetting brought by heterogeneous clients, Dong et al. introduced a proxy server to store and select the best old models to assist clients' local training <cit.>. While successful, the above rehearsal-based FCL research requires large storage space and complex data-sharing strategies to replay past information, making it challenging to scale over time.
Another category of FCL research is rehearsal-free approaches without storing past information.
One group of rehearsal-free continual learning (CL) expands the model architecture when encountering new tasks <cit.>. However, most architecture-based approaches require task identity to condition the network during inference, leading to their ineffectiveness for class-incremental or task-agnostic CL scenarios, i.e., the task identity is unknown. In this work, we focus on the practical but more challenging class-incremental FCL in a rehearsal-free manner.
Existing FCL research along this line proposed regularizing the model with respect to the previous task knowledge when training a new task. For example, Shoham et al. and Yoon et al. leveraged the weight consolidation method to restrict the updates of the important parameters regarding previous tasks while improving the training performance for the new task <cit.>. Similarly, several recent works implemented knowledge distillation methods to transfer knowledge from the model for the old task to that for the current task <cit.>.
In addition, <cit.> investigates the asynchronous-task FCL while using representation loss and a modified aggregation strategy to address the forgetting across multiple clients asynchronously learning respective tasks.
While the aforementioned research enables class-incremental FCL without the rehearsal buffer, they rely on optimizing the entire model on the client side, leading to heavy communication overhead when iteratively exchanging distributed client knowledge in FL, especially for CL scenarios. To address the limitations of existing research, in this work, we propose a novel rehearsal-free FCL approach for class-incremental learning problems based on prompt learning techniques.
Prompt Learning. Prompt learning has been a popular transfer learning approach that modifies the input sample with input embedding called prompts, aiming to provide additional information to condition the model to perform downstream tasks <cit.>. However, designing the prompt function for various downstream tasks is challenging. Recent research has introduced "soft prompts" to automatically train the learnable prompt parameters to replace the heuristic manual selection, such as the prompt tuning, p-tuning, and prefix tuning <cit.>. Prompt learning has shown great potential for parameter-efficient transfer learning with a small set of prompt parameters. Taking advantage of the small parameter size, Zhao et al. <cit.> and Guo et al. <cit.> adopted prompt learning to improve federated learning efficiency.
Some recent works have implemented prompt learning techniques in CL. Wang et al. proposed L2P by using the key-query-based similarity method to select prompts from a prompt pool to instruct different tasks in CL <cit.>. Later, DualPrompt was introduced as the follow-up to L2P with better CL performance, which learns two sets of disjoint prompt spaces to encode task-specific and task-invariant instructions, respectively <cit.>. More recently, CODA-Prompt was proposed using an attention-based end-to-end key-query method, which produces input-conditioned prompts to further improve CL performance <cit.>. Nevertheless, the above prompt-based CL approaches designed for centralized datasets cannot be directly used for federated learning scenarios, as they ignore the unique challenges raised by the distributed nature of clients, such as the heterogeneous data distribution and asynchronous task arrival over clients. To the best of our knowledge, none of the existing prompt learning research has been done for FCL.
§ ALGORITHM
The overall training process includes four main steps (a-d) as shown in Algorithm <ref>.
(a) The server distributes the prompts and model to each new participating device. (b) Each user first freezes the previous prompt parameters. (c) Each user optimizes the local prompt parameters and classifier head following CE loss and Equation (<ref>). (d) The clients return the locally trained model to the server. Further, the server aggregates the model following classical FedAvg <cit.>. Algorithm <ref> follows steps (a) - (d) until convergence.
§ IMPLEMENTATION DETAILS
In this section, we conduct extensive experiments to evaluate the proposed Fed-CPrompt. We first introduce the experimental setup, followed by the experimental results.
Additionally, the same random seed is used to conduct all experiments for reproducibility.
§.§ Dataset Setup.
The CIFAR-100 dataset consists of 100 classes with 600 samples per class. In the experiments, we divide the dataset into 10 disjoint tasks with 10 classes per task (5,000 training samples per task). We divide the samples on each task among clients following a uniform distribution for the iid settings in federated learning. We implement label-based and quantity-based distribution skew (i.e., label skew and quantity skew) for non-iid settings with non-iid degree β = 0.5 following <cit.>. The test dataset consists of 10,000 samples, with 1,000 samples per task.
§.§ Federated Learning Settings.
The learning rate is set to lr = 0.0001. We also deploy an early-stopping mechanism in each task using a validation set.
We consider 𝒞 = 10 clients, R=40 communication rounds, and local epochs l_epochs=5. The network parameters are optimized using Adam optimizer and a batch size of 128 images. We split the CIFAR100 dataset into 10 tasks, each with 10 classes. This is distributed among the 10 clients following <ref>.
§.§ Asynchronous Tasks.
We consider an asynchronous scenario where different clients learn from different tasks at the same time. Specifically, we select a random set of 5 clients to participate in the following task 𝒯_n = 𝒯_m+1, while the remaining 5 clients remain on task 𝒯_m.
§.§ Baseline Approaches.
We compare our proposed Fed-CPrompt with CODA-Prompt <cit.>, Dual prompt <cit.>, L2P <cit.> applied to federated settings. These prompt-based methods have shown potential parameter-efficient SOTA solutions in continual learning.
Additionally, we consider conventional non-prompt-based rehearsal-free methods to demonstrate the advantages of prompt-based methods. Specifically, Fed-EWC <cit.> and Fed-LWF <cit.> provide a fair representation of conventional non-prompt-based rehearsal-free methods in continual learning <cit.>. This comparison allows us to show the potential of a prompt-based approach to other rehearsal-free methods.
The same set of hyper-parameters, such as the learning rate, batch size, and number of rounds is adopted in the baselines and our proposed Fed-CPrompt.
§.§ Prompt parameters.
We use prefix-tuning <cit.> to attach the prompts to layers (1-5) of the pretrained ViT network <cit.>. The total prompt size is n = 100, and 10 prompts per task. Each prompt is set with length L_p = 8 and embedding dimension D = 768.
§.§ Evaluation metrics.
We evaluate our model on the standard continual learning metrics, including average accuracy and average forgetting, which are widely used in previous works <cit.>. We follow the standard definition of accuracy and forgetting mentioned in <cit.>.
§ ADDITIONAL RESULTS
Training Efficiency.
Figure <ref> and Figure <ref> demonstrate the effect of catastrophic forgetting when training new incremental tasks. Overall, we observe that Fed-CPrompt retains knowledge from previous tasks, mitigating catastrophic forgetting. Additionally, the accuracy per task is higher due to the increased capacity of its prompts compared to the prompts used in DualPrompt and L2P.
Overall, our findings suggest that prompt-based algorithms, especially Fed-CPrompt, can effectively mitigate the problem of catastrophic forgetting and improve the training efficiency of lifelong learning systems.
Impact of Asynchronous Continual Learning Tasks. To investigate the impact of client pacing on the training efficiency of our lifelong learning system, we conduct experiments with varying degrees of client pacing. Specifically, we compare the system's performance when all clients move to the next task simultaneously versus when some clients move to the next task while others are still at the current task. Our results show that when some clients move to the next task, the knowledge of the next task can benefit the current task prompt by providing additional context and improving the convergence speed (Figure <ref>). The model can leverage the knowledge learned from the next task to understand the current task better, leading to faster convergence and improved accuracy.
Moreover, we also explore the idea of leveraging prompts from other tasks: for example, in our design, clients on task 𝒯_m+1 leverage prompts from task 𝒯_m to improve the convergence speed on their current task, while clients on task 𝒯_m gain increased capacity from the additional prompt of task 𝒯_m+1. Our experiments show that incorporating prompts from previous tasks into the current task prompt can significantly improve the convergence speed and reduce the training time, as shown in Figure <ref>. This is because the model can reuse the knowledge learned from previous tasks and incorporate it into the current task prompt to improve its understanding of the new task.
|
http://arxiv.org/abs/2307.04300v1 | 20230710013458 | Blockwise Key Distillation in Satellite-based Quantum Key Distribution | [
"Minu J. Bae",
"Nitish K. Panigrahy",
"Prajit Dhara",
"Walter O. Krawec",
"Alexander Russell",
"Don Towsley",
"Bing Wang"
] | quant-ph | [
"quant-ph"
] |
Blockwise Key Distillation in Satellite-based Quantum Key Distribution
Minu J. Bae
University of Connecticut
Storrs CT, USA
[email protected]
Nitish K. Panigrahy
University of Massachusetts
Amherst MA, USA
[email protected]
Prajit Dhara
University of Arizona
Tucson AZ, USA
[email protected]
Walter O. Krawec
University of Connecticut
Storrs CT, USA
[email protected]
Alexander Russell
University of Connecticut
Storrs CT, USA
[email protected]
Don Towsley
University of Massachusetts
Amherst MA, USA
[email protected]
Bing Wang
University of Connecticut
Storrs CT, USA
[email protected]
August 12, 2023
============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Free-space satellite communication has significantly lower photon loss than terrestrial communication via optical fibers. Satellite-based quantum key distribution (QKD) leverages this advantage and provides a promising direction in achieving long-distance inter-continental QKD. Satellite channels, however, can be highly dynamic, due to various environmental factors and time-of-the-day effects, leading to heterogeneous noise over time. In this paper, we compare two key distillation techniques for satellite-based QKD. One is the traditional non-blockwise strategy that treats all the signals as a whole; the other is a blockwise strategy that divides the signals into individual blocks that have similar noise characteristics and processes them independently. Through extensive simulation in a wide range of settings, we show trends in optimal parameter choices and identify when one strategy provides better key generation rates than the other. Our results show that the blockwise strategy can lead to up to 5% key rate improvement (leading to on average 1.9×10^7 more key bits per day) when considering two types of blocks, i.e., for nighttime and daytime, respectively. The blockwise strategy only requires changes in the classical post-processing stage of QKD and can be easily deployed in existing satellite systems.
§ INTRODUCTION
Quantum cryptography, and specifically quantum key distribution (QKD), holds several promising benefits. In particular, it can achieve certain cryptographic tasks without relying on computational assumptions, unlike much of our current-day secure communication infrastructure based on public-key systems <cit.>. However, several challenges remain, limiting its effectiveness and the rate of adoption of this technology. One challenge is that long-distance quantum communication links are required, and greater key generation rates are necessary to allow either faster refreshing of AES keys or, ideally, streaming a true one-time pad at a rate fast enough to keep up with the communication stream. Due to the exponential loss in fiber channels <cit.>, many researchers are turning to the study of satellite-based quantum communication to solve this long-distance problem, leveraging free-space links that have much lower loss than fiber channels.
A satellite can help to distribute long-range entangled pairs to two ground stations, thus building a QKD network over much longer distances than a point-to-point ground fiber network (without repeaters) could achieve on its own <cit.>.
Several experimental demonstrations of quantum communication through satellites have been conducted recently <cit.>, showing their technological feasibility. Despite this interest, several questions remain, especially in terms of optimizing overall QKD system performance and speed. Since the hardware of these systems would be difficult to change after launch, it is important to investigate what can be done on the classical stage of the protocol, without forcing users to invest or install new quantum hardware. Every QKD protocol consists of two stages: a quantum communication stage and a classical post-processing stage. The first is the only one that requires quantum-capable hardware; the second involves only classical communication and can more easily be altered than the first.
In this work, we investigate the classical post-processing stage of a standard QKD protocol, specifically BB84 <cit.> (or, rather, the entanglement-based version E91 <cit.>), in an attempt to maximize the performance of a satellite system without altering the quantum layer of the network. Satellite channels experience dynamic environmental conditions due to time-of-day and weather effects <cit.>. For instance, nighttime and daytime have different background/thermal photon levels that affect the fidelity and loss of the distributed entangled photons, while atmospheric conditions add further noise. These factors must be carefully accounted for in QKD; in particular, one must determine the optimal pump power for generating entangled photon pairs and the sampling rate used to estimate noise in the finite-key analysis.
In this work, we compare two classical post-processing strategies for satellite systems. In one instance, which we call “blockwise post-processing,” we divide an entire signal into several individual “blocks” and process them independently; these blocks should have similar noise/loss characteristics.
The other strategy is the more traditional method of treating the entire quantum signal as a single unit and processing accordingly (which we call “non-blockwise post-processing”).
For both post-processing methods, we investigate optimal parameter settings for various satellite configurations and operating conditions.
We make several contributions in this work. To our knowledge, we are the first to evaluate and compare
these two different post-processing methods in both the finite key and asymptotic scenario and under various satellite operating conditions.
We also conduct a rigorous evaluation of QKD satellite operation using extensive simulations with realistic noise and loss models,
showing trends in optimal parameter choices and when, exactly, the two different post-processing methods should be used to optimize overall key generation rates. For instance, we show that the blockwise scheme can lead to 5% higher key rate than the non-blockwise scheme when the satellite is at a high altitude, leading to on average 1.9×10^7 more key bits per day.
We comment that all our investigations are on the classical stage of the QKD protocol, and any alterations which our work suggests that may be beneficial to a satellite QKD system, can be easily adopted by the current systems, and easily added after a satellite's launch. In addition, while focusing on satellite-based QKD, our findings also apply to terrestrial QKD network scenarios where the raw key bits have significant dynamics, e.g., because they are created over disparate network paths.
§ PRELIMINARIES
§.§ Notation and Definitions
In this section, we introduce the definitions and notation used throughout this paper. A density operator is a Hermitian positive semi-definite operator of unit trace acting on a Hilbert space ℋ. For a given pure quantum state |φ⟩, its density operator is |φ⟩⟨φ|, which we simply denote by [φ]. We define h(x) as an extended binary entropy function, namely, h(x)=0 if x < 0, h(x)=-xlog_2(x)-(1-x)log_2(1-x) if 0≤x≤1/2, and h(x)=0 if x>1/2.
§.§ Satellite-based QKD
We consider a satellite that orbits around the Earth at a certain altitude. The satellite has photon sources that generate entangled pairs, and sends them to a pair of ground stations. Specifically, for each entangled pair, the satellite transmits one photon in the pair to one ground station using a down-link optical channel, thus creating a dual downlink entanglement distribution as shown in Fig. <ref>. The two ground stations run an entanglement based protocol (e.g., E91 <cit.>) for QKD.
In the rest of this paper, we assume that the two ground stations are located on the equator. The satellite orbits the Earth in a west-to-east direction above the equator, in alignment with the Earth's rotation. We focus on low-earth-orbit (LEO) satellites (i.e., altitude between 250 to 2000 km) that benefit from proximity to earth surface and have been demonstrated experimentally
<cit.>.
To transmit photons successfully to a ground station, the elevation angle, i.e., the angle between the satellite and the horizon at the ground station, needs to exceed a threshold θ_e. For successful delivery of an entangled pair, the elevation angles between the satellite and the two ground stations must both exceed θ_e, as illustrated in Fig. <ref>. In this figure, the sector between G_iL and G_iR represents the region where the elevation angle between the satellite and ground station i exceeds θ_e, i=1,2. The intersection of these two sectors is the region where the satellite can transmit entanglement pairs successfully to both ground stations.
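For intuition, the elevation angle follows from simple spherical geometry given the satellite altitude and the Earth-central angle between a ground station and the sub-satellite point. The short sketch below evaluates it and checks the dual-visibility condition; it is only an illustration of the contact condition, not the orbit propagator used in our simulations, and the 20-degree threshold anticipates the evaluation settings.

import math

R_E = 6371.0  # Earth radius (km)

def elevation_angle_deg(altitude_km, central_angle_deg):
    # Elevation of the satellite above the local horizon, given the Earth-central
    # angle between the ground station and the sub-satellite point.
    phi = math.radians(central_angle_deg)
    r = R_E + altitude_km
    d = math.sqrt(R_E**2 + r**2 - 2.0 * R_E * r * math.cos(phi))  # slant range (law of cosines)
    return math.degrees(math.asin((r * math.cos(phi) - R_E) / d))

def in_contact(altitude_km, angle_gs1_deg, angle_gs2_deg, threshold_deg=20.0):
    # Both ground stations must see the satellite above the elevation threshold theta_e.
    return (elevation_angle_deg(altitude_km, angle_gs1_deg) >= threshold_deg and
            elevation_angle_deg(altitude_km, angle_gs2_deg) >= threshold_deg)

# example: a 500 km satellite midway between two stations 600 km apart on the equator
sep_deg = math.degrees(600.0 / R_E)
print(in_contact(500.0, sep_deg / 2.0, sep_deg / 2.0))  # True (the satellite is nearly overhead)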
We adopt a dual-downlink protocol <cit.>, in which the satellite acts as the transmitter and each ground station as a receiver. Downlinks are preferred over uplinks because beam wander induced by atmospheric turbulence, also known as the shower-curtain effect <cit.>, arises mainly near the Earth's surface, i.e., only at the end of a downlink path; as a result, the downlink exhibits less beam wander and higher link efficiency than the uplink <cit.>.
§.§ Quantum Key Distribution
The goal of a key distribution protocol is to generate a secret key shared between two parties, Alice and Bob. With classical communication alone, establishing an information-theoretically secure shared key between Alice and Bob is impossible <cit.>. Quantum mechanics changes this picture. Bennett and Brassard introduced the first quantum key distribution (QKD) scheme, which utilizes an insecure quantum channel and an authenticated classical channel; their scheme is known as the BB84 protocol <cit.>. Ekert introduced the entanglement-based equivalent <cit.>, called the E91 protocol. Since a quantum system cannot be observed without disturbing its state, QKD admits unconditional security proofs and allows tampering by an adversary to be detected, based on the laws of physics. Renner introduced a security proof of QKD using entropic uncertainty relations, error correction, and privacy amplification <cit.>. In brief, given a classical-quantum state ρ_SE obtained after measurement, error correction and privacy amplification are applied to the S register, mapping the S record onto a Y register through a randomly chosen two-universal hash function and producing a state σ_YE. If the output is ℓ bits long, the following relation was shown in <cit.>:
||σ_YE - I_Y/2^ℓ⊗σ_E||_1 ≤ 2^-1/2(H_∞^ϵ(S|E)_ρ - ℓ) + 2ϵ,
where H_∞^ϵ is the smooth conditional min-entropy and I_Y/2^ℓ denotes an ℓ-bit uniformly random string that is independent of the adversarial party E.
§.§ Entanglement Sources
We assume that the satellite utilizes spontaneous parametric down-conversion (SPDC) based dual-rail polarization entanglement sources that are well-studied and widely used <cit.>.
In such entanglement sources, a two-qubit entangled Bell state is encoded in four orthogonal modes (i.e., two pairs of modes).
The output quantum state can be written as <cit.>:
|φ^±⟩ = N_0[√(p(0))|0,0;0,0⟩+√(p(1)/2)(|1,0;0,1⟩±|0,1;1,0⟩)+√(p(2)/3)(|2,0;0,2⟩±|1,1;1,1⟩+|0,2;2,0⟩)],
where N_0 is a normalization factor, namely:
N_0 = 1/√(p(0)+p(1)+p(2)) = (N_s+1)^2/√(6N_s^2+4N_s+1)
and p(n) is the probability of generating an n-photon term in each pair of modes, given by:
p(n) = (n+1)N_s^n/(N_s+1)^(n+2),
where N_s is the pump power, i.e., the mean photon number per mode. The entangled pair from the SPDC dual-rail polarization source is as follows:
|Ψ^±⟩ = 1/√(2)(|1,0;0,1⟩±|0,1;1,0⟩),
the vacuum state is |0,0;0,0⟩, and all the other terms are spurious two-photon states. In Eq. (<ref>), we assume that N_s is low (e.g., below 0.2) and hence p(n) for n>2 is negligible and is omitted in the quantum state.
The pump power is an important configurable parameter that can be tuned to maximize the entanglement rate, while adhering to a desired fidelity threshold. In Section <ref>, we show that the pump power impacts two important factors, success probability and fidelity, for QKD, and needs to be chosen carefully.
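A few lines of Python reproduce the photon-number statistics above and make the trade-off explicit: increasing N_s raises the useful single-pair probability p(1), but the spurious two-pair probability p(2) grows even faster relative to it. The printed values are illustrative only and do not include channel loss or detector effects.

def p_n(n, N_s):
    # probability of the n-photon-pair term per mode pair
    return (n + 1) * N_s**n / (N_s + 1)**(n + 2)

def normalization(N_s):
    # N_0 for the state truncated at n <= 2
    return (N_s + 1)**2 / (6.0 * N_s**2 + 4.0 * N_s + 1.0)**0.5

for N_s in (0.01, 0.05, 0.1, 0.2):
    print(f"N_s={N_s:4}: p(1)={p_n(1, N_s):.4f}  p(2)={p_n(2, N_s):.4f}  "
          f"p(2)/p(1)={p_n(2, N_s) / p_n(1, N_s):.3f}  N_0={normalization(N_s):.4f}")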
§ MODELS
§.§ Orbit Model
In this paper, we consider two ground stations located on the equator. The satellite orbits Earth in a west-to-east direction above the equator, in alignment with Earth's rotation, as shown in Fig. <ref>. We consider low-Earth-orbit satellites with altitudes of 500, 800, and 1000 km, and baseline distances of 600, 1200, and 1800 km between the two ground stations on the equator.
§.§ Loss Model
The satellite quantum communication channel via free-space optical (FSO) transmission must account for the characteristics of the optical channel in its underlying analysis. The transmission loss for each qubit (comprising a pair of modes) scales quadratically with the free-space propagation length and exponentially with the atmospheric propagation length <cit.>. We incorporate the effect of transmission loss by treating FSO transmission as a Bosonic pure-loss channel acting on each mode of the quantum state described in Eq. (<ref>).
Most generally, the Bosonic pure-loss channel leads to a reduction in the mean photon number of the input state; additionally, an input pure quantum state becomes a mixed state for non-zero loss. In the present context, this reduces the probability of successfully delivering the entangled pairs to both ground stations, as well as the fidelity (to the ideal Bell state) of the delivered entangled photons <cit.>. Namely, it changes the probability of generating a delivered entangled pair: the probability of receiving a perfect Bell pair becomes necessarily smaller than p(1) from Eq. (<ref>). See more details of the loss model in <cit.>.
§.§ Noise Model
Atmospheric FSO transmission channels have to contend with a variety of noise processes. In this manuscript, we limit our noise estimate to unfiltered background photons. Any excess photons in the channel will cause false events (i.e., where the qubits of the Bell pair were lost) to be treated as successes, thereby impacting the fidelity of the entangled pair that should have been delivered. The main contributor of the background photon flux (for example from imperfect filtering by the ground receiver) is commonly associated with the brightness of the sky and varies drastically depending on the time of the day. More specifically, the level of background photon flux is at its highest during clear daylight, and at its lowest during clear nighttime. In our work, we consider these two setups (i.e., daytime and nighttime) consistent with the state-of-the-art <cit.> and compute the fidelity of the generated entangled state between two ground stations by modeling the arrival of unfiltered background photons as detector dark click events.
We utilize the fidelity F obtained using the above model to estimate the noise Q for the QKD key rate analysis (see Section <ref>).
Specifically, we assume a depolarizing channel, and the state is modeled as
F |Ψ^+⟩⟨Ψ^+| + (1-F)I/4,
where |Ψ^+⟩ is the desired Bell state. In that case, Q=(1-F)/2.
For key distillation over a time block t∈{night,day}, the corresponding average noise is
Q_ave^(t) = (1-fid_ave^(t))/2,
where fid_ave^(t) is defined in Eq. (<ref>).
§ BLOCKWISE KEY DISTILLATION
We assume the E91 protocol <cit.> is used for QKD between the pair of ground stations. This protocol, like most QKD protocols, consists of a quantum communication stage, followed by a classical post-processing stage. In the quantum communication stage, M entangled pairs are sent to Alice and Bob (some of which may be lost in transmission due to channel loss). Alice and Bob then choose, independently at random, whether to measure their particles in the Z or X basis, recording their results. Later, Alice and Bob disclose, over the authenticated classical channel, their basis choices, discarding all iterations that do not match. The resulting strings are called the users' raw keys. This concludes the quantum communication stage; the output is a raw key of N bits (with N ≤ M), which may be partially correlated (errors in the channel or adversarial noise may cause errors in Alice and Bob's raw keys) and partially secret (Eve may have some non-negligible side information on the raw key based on her attack). Next, the classical post-processing stage further processes the raw key to produce a secret key. First, the error rate in the raw key is determined. After this, error correction and, finally, privacy amplification protocols are run. The output of this stage is the final secret key of ℓ ≤ N bits. An important metric for the entire QKD protocol is its key rate, namely the ratio of the secret key size (ℓ) to the total number of signals sent (M).
§.§ Blockwise vs. Non-blockwise Schemes
In this work we analyze and compare two different classical post-processing strategies: blockwise and non-blockwise. The latter, non-blockwise, is the traditional QKD scenario whereby the raw key of N bits is treated as a single system on which error correction and privacy amplification are run. The former, blockwise, divides the raw key into smaller systems, or blocks. This division can be arbitrary, but to potentially provide a performance boost, each block should have homogeneous channel statistics (especially in terms of noise; that is, while different blocks may have very different noise levels, the noise within a single block should be similar). In general, if there is a significant difference in the noise levels of the blocks, one can expect blockwise processing to produce a strictly higher key rate due to the concavity of entropy, as discussed below.
First, consider the standard non-blockwise post-processing, where all the raw key bits are considered together and the processing is agnostic to the dynamics of the quantum channel. Specifically, in this strategy, a random subset of m < N/2 bits is chosen from the N bits in the raw key, and Alice and Bob's measurement results on this subset are used to estimate the noise of the entire raw key, denoted as Q. This is defined to be the relative number of bit-flips between Alice and Bob's raw keys. In this work, we assume the noise is modeled by a depolarizing channel and thus (1) the noise is the same in both Z and X bases and (2) we have Q = (1-F)/2, where F is the fidelity obtained using the noise model in Section <ref>.
Under the blockwise post-processing strategy,
users break the raw key into blocks of signals based on operating conditions, where each block is expected to have similar noise characteristics.
In our case, for simplicity, we divide the total raw key into two classes of blocks: one from day and one from night operating conditions as it is expected that the noise in the daylight will be higher than at night. However, blockwise processing can be applied to any arbitrary number of blocks, so long as there are a sufficient number of raw key bits in each block.
More formally, let rk_A, rk_B∈{0,1}^N be the raw keys of Alice and Bob, and let B^A_i be a single block such that rk_A = B^A_1B^A_2⋯B^A_k (i.e., rk_A is the bitstring concatenation of the blocks); rk_B has the same decomposition. Then, a random subset t_i of size m_i is chosen for each block B_i, and the measurement results in that block indexed by the subset are disclosed over the authenticated channel to determine the noise present in each block, denoted Q_i. Finally, error correction and privacy amplification are run on each block separately, distilling k secret keys s_1 through s_k, which are later concatenated into a single secret key s.
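The blockwise bookkeeping amounts to sampling and estimating the error rate of each block separately, as in the minimal sketch below; the bit values, block sizes, and sampling fraction are made up for illustration.

import random

def estimate_block_qber(raw_a, raw_b, sample_fraction, seed=0):
    # Disclose a random subset t_i of one block and estimate its error rate Q_i;
    # the sampled positions are removed from the raw key before key distillation.
    rng = random.Random(seed)
    n = len(raw_a)
    m = max(1, int(sample_fraction * n))
    sample = set(rng.sample(range(n), m))
    errors = sum(raw_a[i] != raw_b[i] for i in sample)
    remaining = [i for i in range(n) if i not in sample]
    return errors / m, remaining

# each block (e.g., night vs. day) is sampled and estimated independently
blocks_a = {"night": [0, 1, 1, 0] * 250, "day": [1, 0, 0, 1] * 250}
blocks_b = {"night": [0, 1, 1, 0] * 250, "day": [1, 0, 1, 1] * 250}  # the daytime block is noisier
for name in blocks_a:
    Q_i, kept = estimate_block_qber(blocks_a[name], blocks_b[name], sample_fraction=0.1)
    print(name, "estimated Q_i =", round(Q_i, 3), "raw bits kept =", len(kept))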
§.§ Key Rate Analysis
As the main goal of this paper is to evaluate and compare the effectiveness of both blockwise and non-blockwise key distillation strategies, we require a performance metric. For this, we will use the key rate of the protocol, defined to be the ratio of the number of final secret key bits to the total number of attempted entanglement pairs sent by the source. Finally, we will also consider both the asymptotic scenario, where the number of signals sent approaches infinity giving us upper-bounds on the key-rates, and the more realistic finite-key scenarios,
where we will also have to take into account imperfect sampling and other imprecisions. We note that, for this work, we consider idealized photon sources which may emit zero or one photon, but never two. That is, we set p(2) = 0 in Eq. <ref>. In our evaluations, p(2) is generally small under the optimal pump power and cannot drastically decrease the key rate, as we shall show in Section <ref>. However, a rigorous blockwise and non-blockwise analysis for multiphoton sources remains an interesting future challenge.
To compute the key rate of the protocol, both in the blockwise and non-blockwise cases, we turn to analysis methods derived in <cit.> which utilize entropic uncertainty <cit.>.
Key rate analysis for non-blockwise scheme. First consider the non-blockwise case. Here, the entire raw key is treated as a single system from which a random sample of size m is chosen (leaving n=N-m bits for the raw key). This sample allows the parties to estimate the error in the entire raw key, denoted as Q. From this, the remaining signals are run through an error correction process (leaking an additional λ_EC bits to the adversary). A test is then run by hashing the error-corrected raw key and testing correctness between Alice and Bob (which leaks log(1/ϵ_cor) bits to the adversary for user-specified ϵ_cor). Finally, privacy amplification is run, outputting a secret key of size ℓ≤ n. It is guaranteed that, conditioning on not aborting the protocol, the final secret key system and Eve's ancilla (denoted as ρ_KE) will satisfy the following:
1/2||ρ_KE - I/2^ℓ⊗ρ_E||_1 ≤ϵ_sec,
where ρ_E is Eve's system.
That is, the final secret key system will be ϵ_sec-close (in trace distance) to a truly uniform random key, I/2^ℓ, which is also completely independent of Eve's system ρ_E.
Using results in <cit.>, the non-blockwise case can be shown to have an overall secret key length of:
ℓ_non-block = n(1-h(Q+μ)) - λ_EC - log(2/(ϵ_sec^2ϵ_cor)).
where ϵ_cor is a security parameter determining the failure rate of the correctness portion of the protocol (i.e., Alice and Bob will have the same secret key, except with probability at most ϵ_cor), and ϵ_sec is a security parameter determining the distance of the final secret key from a truly uniform random key. Above, λ_EC represents the information leaked during error correction; in our later evaluations, we simply set λ_EC = nh(Q+μ). Other realistic settings, e.g., λ_EC = 1.2nh(Q), can also be used; however, such a setting will not significantly affect our results in later sections, as we are primarily interested in comparing blockwise to non-blockwise processing. Finally, μ is a result of finite sampling effects and is set to:
μ = √(((n+m)(m+1)/(nm^2)) ln(2/ϵ_sec)).
The above may be derived from standard classical sampling arguments <cit.>.
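For concreteness, the non-blockwise key length above can be evaluated in a few lines of Python, with λ_EC = n h(Q+μ) as stated; the raw-key sizes, error rate, and security parameters used in the example are placeholders.

import math

def bin_entropy(x):
    # binary entropy h(x), with h(0) = h(1) = 0
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def mu(n, m, eps_sec):
    # finite-sampling deviation term
    return math.sqrt((n + m) * (m + 1) / (n * m**2) * math.log(2.0 / eps_sec))

def l_nonblock(N, m, Q, eps_sec=1e-8, eps_cor=1e-15):
    # secret-key length of the non-blockwise scheme with lambda_EC = n*h(Q+mu)
    n = N - m
    q = Q + mu(n, m, eps_sec)
    if q >= 0.5:  # noise too high after the finite-size penalty: no key
        return 0
    l = n * (1.0 - 2.0 * bin_entropy(q)) - math.log2(2.0 / (eps_sec**2 * eps_cor))
    return max(0, math.floor(l))

print(l_nonblock(N=10**6, m=10**5, Q=0.03))  # roughly 4*10^5 secret bits from 10^6 raw-key bits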
Key rate analysis for blockwise scheme. In the blockwise case, the setting is similar and we may again use results from <cit.> to distill each sub-block into a secret key independently, and then concatenate the final blockwise secret keys into a single secret key. Here, let B_i be the size of the i'th block (determined by the user). Now, a random subset t_i of size m_i is chosen for each block B_i. As in the non-blockwise case, this sampling subset is used to determine the error rate in the raw key; however, now it is used only to estimate the error rate in the i'th block of the raw key, denoted Q_i. Error correction, a correctness test, and finally privacy amplification are then performed individually on each block. From this setup, we can compute the secret key size of block i to be:
ℓ_i = (B_i-m_i)(1-h(Q_i+μ_i)) - λ_EC^(i) - log(2/(ϵ_sec^2ϵ_cor)),
from which we have the following total secret key size:
ℓ_block = ∑_i=1^kn_i(1-h(Q_i+μ_i)) - ∑_iλ_EC^(i) - klog(2/(ϵ_sec^2ϵ_cor)),
where k is the total number of blocks and n_i = B_i-m_i. The value of μ_i is identical to μ above, except replacing m with m_i and n with B_i-m_i. Finally, λ_EC^(i) is the amount of information leaked during error correction of block i. In our evaluations, we set this to λ_EC^(i) = (B_i-m_i)h(Q_i+μ_i).
The above values of ℓ can be used to immediately compute the key rate, simply by dividing by the number of attempted entanglement pairs sent by the satellite (we say attempted because the satellite may send out vacuum states, which count against the overall key rate).
To determine theoretical upper bounds, we also consider the asymptotic scenario, where the number of signals approaches infinity. In this instance, the key rate for the non-blockwise scheme is simply 1-2h(Q), while the key rate for the blockwise scheme converges to ∑_i p_i(1-2h(Q_i)), where p_i is the proportion of the total raw-key bits used in block i as the size of the raw key approaches infinity.
Note that the above equations give immediate intuition as to why blockwise processing can lead to higher key rates. For non-blockwise processing, the total error Q is actually the average error over all individual blocks. Due to the concavity of Shannon entropy, the blockwise key rate can only be higher than, and never less than, the non-blockwise key rate, at least in the asymptotic scenario. In the finite-key scenario, sampling imprecisions lead to other problems and, as we show later, blockwise processing can actually lead to worse results in some settings. Knowing when to use blockwise processing and when to use non-blockwise processing is an important question to answer if these systems are to be practically deployed.
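The concavity argument is easy to verify numerically: with two blocks of different noise, the blockwise asymptotic rate ∑_i p_i(1-2h(Q_i)) is never below the rate obtained by pooling all blocks and using the average error. The day/night proportions and error rates below are hypothetical.

import math

def bin_entropy(x):
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def asym_rate(Q):
    # asymptotic E91/BB84 key rate per raw-key bit, floored at zero
    return max(0.0, 1.0 - 2.0 * bin_entropy(Q))

# hypothetical split: 40% of the raw key from night (low noise), 60% from day (high noise)
p_night, Q_night = 0.4, 0.01
p_day, Q_day = 0.6, 0.08

blockwise = p_night * asym_rate(Q_night) + p_day * asym_rate(Q_day)
nonblockwise = asym_rate(p_night * Q_night + p_day * Q_day)   # pooled error is the weighted average
print(f"blockwise rate {blockwise:.3f} vs non-blockwise rate {nonblockwise:.3f}")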
The block-wise and entanglement-based BB-84 (E91) protocol runs as follows:
* A satellite, s, prepares a quantum state |ϕ_0⟩∈ℋ_g_1⊗ℋ_g_2⊗ℋ_E, where ℋ_g_1≅ℋ_g_2≅ℋ_d^⊗ B. Note that B is the entire number of quantum states generated from the satellite such that B=∑_i=1^nB_i and B_i = m_i+n_i. The g_1 portion is sent to a ground station g_1 and the g_2 portion is sent to another ground station g_2 via the free space and atmospheric layer.
* For each block period i, the station g_1 chooses a random subset t_i of size m_i out of B_i and sends it to another station g_2. Both ground stations measure their systems indexed by t_i in the X-measurement, which produces outcomes q_1^(i), q_2^(i)∈𝒜_2^m_i, respectively. These values are revealed to each other via the authenticated channel.
* After the sampling, for each block period i, the pair of ground stations (g_1,g_2) measures the remaining n_i signals of their systems in the Z basis, which outputs their raw keys r_1^(i) and r_2^(i) of size at most n_i bits each. Observations of the vacuum state |vac⟩ do not contribute to the raw key, so in a lossy channel their keys may be smaller than n_i. The stations then move to the next block and repeat steps 2 and 3 until all blocks are exhausted.
* After all blocks are processed, g_1 and g_2 run an error correction protocol that corrects each block of their raw keys up to Q errors, revealing leak_EC bits to Eve.
* Finally, privacy amplification operates on each error-corrected block, resulting in their secret key.
§.§ Computing the Average Fidelity and Entanglement Counts per Block
The average fidelity over a time block t∈{night, day} is computed as
fid_ave^(t) = 1/k∑_i=1^kfid_ave^(i),
where k is the number of contact rounds in the time block, e.g., k=16, and the average fidelity of contact round i is
fid_ave^(i) = fid_tot^(i)/pass,
where fid_tot^(i) = ∑_j=1^pass fid^(i,j) and pass is the contact duration (in seconds) of the round, e.g., pass=84 sec. We also choose different pump powers for nighttime and daytime to obtain fidelities that support positive key generation. The nighttime and daytime dark click probabilities are 3×10^-6 and 3×10^-3, respectively. Because we choose a higher pump power for daytime than for nighttime, the daytime success probability is higher, so more entangled pairs are delivered during the day, but their fidelity is lower due to the higher background noise. The number of entangled pairs delivered in block t∈{night, day} is B_t = ∑_i=1^kB_t^(i),
where B_t^(i) is the number of pairs delivered in contact round i during time t, namely:
B_t^(i) = S_#× P_tot^(i),
where S_#=10^9 is the source generation rate per second and P_tot^(i) = ∑_j=1^pass P_succ^(i,j) is the per-second success probability summed over the round. We denote the resulting numbers of entangled pairs in the nighttime and daytime blocks by N_n and N_d, respectively.
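A direct transcription of these per-pass formulas is given below; the per-second success probabilities and fidelities would come from the loss and noise models of Section <ref>, and the constant values used here are placeholders.

def pass_statistics(per_second_success, per_second_fidelity, source_rate=10**9):
    # B^(i) = S_# * sum_j P_succ^(i,j): expected number of delivered pairs in one contact round
    pairs = source_rate * sum(per_second_success)
    # fid_ave^(i): unweighted time average of the per-second fidelity over the round
    fid_ave = sum(per_second_fidelity) / len(per_second_fidelity)
    return pairs, fid_ave

# toy usage for an 84-second contact round with constant (hypothetical) per-second values
succ = [2e-4] * 84
fid = [0.96] * 84
print(pass_statistics(succ, fid))  # about (1.7e7 pairs, 0.96)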
§ PERFORMANCE EVALUATION
In this section, we evaluate the performance of blockwise and non-blockwise key distillation schemes.
As mentioned in Section <ref>, we consider a LEO satellite, with two ground stations on the equator. In the following, we first describe the evaluation setup and then the results.
§.§ Evaluation Setup
We consider three satellite altitudes, A=500 km, 800 km, and 1000 km. For each satellite altitude, we consider two ground stations along the equator of the Earth, with a distance of D=600 km, 1200 km, or 1800 km. The satellite is equipped with an SPDC entanglement source (see Section <ref>) that operates at a 1 GHz rate, i.e., generating 10^9 entangled photons per second. The elevation angle threshold (Section <ref>) is set to 20^∘. For simplicity, we assume 10 hours of nighttime (8pm-6am), and the remaining 14 hours as daytime each day. The dark click probability P_d is set to 3 ×10^-6 for nighttime and 3 ×10^-3 for daytime based on the study in <cit.>.
The blockwise scheme treats the raw key bits produced during daytime and nighttime separately, i.e., it considers two types of blocks, corresponding to the raw keys from daytime and nighttime respectively, while the non-blockwise scheme considers all of the raw key bits together.
When the satellite altitude is 500 km, the orbit time (the amount of time for a satellite to finish one orbit) is 5,647 seconds; for satellite altitudes of 800 and 1000 km, the orbit time is longer (6,022 and 6,276 seconds, respectively). The number of passes of the satellite over the two ground stations is 6 passes during nighttime and 9 passes during daytime for all the settings, except when the satellite altitude is 500 km, where the number of passes during nighttime is 7.
Table <ref> lists the contact length (i.e., pass duration, the duration during which the satellite is in contact with both ground stations) for the various settings.
The contact length varies from less than 1 minute to over 8 minutes. As expected, for a given satellite altitude, larger ground station distance leads to shorter contact length; while for the same ground station distance, higher satellite altitude leads to longer contact length. Using the loss and noise models in Section <ref>, we obtain the success probability and fidelity of the transmission from the satellite to each ground station in each second, and then obtain the average success probability and fidelity over the contact length in the following evaluation.
We consider the key rate for running the protocol over 1 to 80 days to show the performance of the two key distillation schemes over time as more raw key bits are accumulated at the ground stations, and the performance of these schemes relative to the asymptotic results.
For both blockwise and non-blockwise schemes, we vary the pump power
of the SPDC source (see Section <ref>) and the sampling rate for each setting so that the number of secret key bits is maximized. Specifically, the pump power is varied from 0 to 0.1. We limit the pump power up to 0.1 so that the approximation of the quantum states in <ref> is accurate and high-order-photon contributions are negligible <cit.>.
The sampling rate is varied from5×10^-4/kto3×10^-1/kfor the raw keys generated inkdays.
Unless otherwise stated, our results below assume thatp(2), i.e., the probability of generating a 2-photon term in each pair of mode of the SPDC source, is zero
(see Section <ref>). In Section <ref>, we show that this is a reasonable approximation.
§.§ Impact of Pump Power
[Figure: Number of secret key bits generated by the blockwise scheme under the optimal pump power and sampling rate (1 day).]
We first examine the impact of the pump power on the success probability and fidelity in the various settings.
Fig. <ref> plots the success probability and fidelity as a function of pump power when the satellite altitude A=500 km. Figures <ref>(a) and (b) show the results for nighttime, with curves for the three ground station distances. We see that, for all three ground station distances, success probability increases with pump power, while fidelity decreases with pump power. In addition, for the same pump power, a shorter ground station distance leads to a larger success probability, but lower fidelity.
Figures <ref>(c) and (d) show the results for daytime. In this case, while we see a similar trend for success probability as for nighttime, the relationship between fidelity and pump power is more complex: fidelity first increases and then decreases with the pump power. In addition, for the same pump power, while a shorter ground station distance again leads to higher success probability as in nighttime, it leads to higher fidelity in daytime, opposite to the observation in nighttime.
Results for the other two satellite altitudes (800 and 1000 km) show similar trends, with variations in the relative relationship among the three ground station distances. For instance, when A=1000 km, for the same pump power, the fidelity for the three ground station distances is very close to each other during nighttime, while the fidelity for D=1200 km is larger than that for D=600 km, followed by that of D=1800 km.
Figure: Non-blockwise scheme: Number of secret key bits that are generated under the optimal pump power and sampling rate (1 day).
§.§ Optimal Pump Power and Sampling Rate
Since secret key rate is affected by both success probability and fidelity, while these two factors are affected by pump power in roughly opposite ways as shown above, we need to find the optimal pump power to maximize the secret key generation rate for various settings. The optimal pump power hence may differ, depending on satellite altitude, ground station distance, nighttime versus daytime, and also the key distillation scheme. In addition, as shown in Eq. (<ref>) and Eq. (<ref>), secret key rate is also affected by sampling rate. In the following, we show the optimal pump power and sampling rate for the various settings for key generation in one day; the results for multiple days are deferred to Section <ref>.
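To make the joint optimization concrete, the sketch below shows a brute-force grid search of the kind used here. The helper functions simulate_setting() (a placeholder for the loss/noise simulation, returning success probability and fidelity) and secret_key_bits() (a placeholder for the finite-key secret key length computation) are hypothetical names, not part of our implementation; the search ranges follow the settings stated earlier (pump power at most 0.1, sampling rate between 5×10^-4/k and 3×10^-1/k).

import numpy as np

def optimal_setting(k_days, simulate_setting, secret_key_bits):
    # Hypothetical grid search over pump power and sampling rate that
    # maximizes the number of secret key bits generated over k_days.
    best_power, best_rate, best_bits = 0.0, 0.0, 0
    for power in np.linspace(0.005, 0.1, 20):            # pump power <= 0.1
        p_succ, fidelity = simulate_setting(power)       # channel simulation
        for rate in np.linspace(5e-4 / k_days, 3e-1 / k_days, 30):
            bits = secret_key_bits(p_succ, fidelity, rate, k_days)
            if bits > best_bits:
                best_power, best_rate, best_bits = power, rate, bits
    return best_power, best_rate, best_bits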
§.§.§ Blockwise Scheme
Fig. <ref> plots results for the blockwise distillation scheme. Specifically, Fig. <ref>(a) shows the optimal pump power for various combinations of satellite altitude and ground station distance; results for both nighttime and daytime are plotted in the figure. For the same satellite altitude and ground station distance, we see the optimal pump power for nighttime is larger than that for daytime. Specifically, for satellite altitudes of 800 and 1000 km, the optimal pump power for nighttime is 0.1, the maximum pump power that is allowed, and for a satellite altitude of 500 km, the optimal pump power is close to or equal to 0.1. For daytime, under the same ground station distance, the optimal pump power is lower for higher satellite altitude. As a special case, when the ground station distance is 1800 km and the satellite altitude is 1000 km, the optimal pump power for daytime is 0, since no key can be generated for any pump power in the allowed range.
Figures <ref>(b) and (c) plot the success probability and fidelity under the optimal pump power for the various settings. For the same satellite altitude and ground station distance, the success probability and fidelity for nighttime are both larger than their corresponding values for daytime. In addition, for the same ground station distance, the success probability under the optimal pump power tends to be larger for lower satellite altitude, for both nighttime and daytime. The fidelity under the optimal pump power is similar for all nighttime settings, while for daytime, lower satellite altitude tends to yield higher fidelity for the same ground station distance. When the ground station distance is 1800 km and the satellite altitude is 1000 km, both the success probability and fidelity are 0, since the optimal pump power for that case is 0.
Fig. <ref>(d) plots the optimal sampling rate for the various settings. The optimal sampling rate varies from 0.0075 to 0.1045, with higher optimal sampling rate for daytime than nighttime under the same satellite altitude and ground station distance. For a given satellite altitude, larger ground station distances tend to require higher optimal sampling rates.
Fig. <ref> plots the number of secret key bits generated over daytime and nighttime in a day for the various settings. For each satellite altitude, the number of secret keys generated decreases with ground station distance for both nighttime and daytime, except for a satellite altitude of 1000 km during daytime. For the same ground station distance, lower satellite altitude tends to lead to more secret keys, except for one case (satellite altitude of 500 km and ground station distance of 1800 km), which leads to fewer key bits than the satellite altitude of 800 km due to the significantly shorter contact length in this scenario than in the others (see Table <ref>).
§.§.§ Non-blockwise Scheme
Fig. <ref> plots the results for the non-blockwise scheme. For the various settings, the optimal pump power in Fig. <ref>(a) is similar to that under the blockwise scheme, except that when the satellite altitude A = 1000 km, the optimal pump power for daytime is 0 for all ground station distances. This is because the fidelity in the daytime is low for all the pump power values, which leads to a higher average error rate (across nighttime and daytime) and an overall lower number of keys, compared to the case of only generating keys at nighttime. Figs. <ref>(b) and (c) plot the resultant success probability and fidelity for the various settings with the optimal pump power. They are similar to those for the blockwise scheme, except for daytime when the satellite altitude A = 1000 km. Fig. <ref>(d) plots the optimal sampling rate, which is in the range of 0.0065 to 0.0245. Similar to the blockwise case, for a given satellite altitude, larger ground station distances have higher optimal sampling rates.
Fig. <ref> plots the number of secret key bits generated using the non-blockwise scheme in a day for the various settings. Since this scheme combines the raw keys generated during nighttime and daytime together, we simply plot the overall number of keys over a day. For the same ground station distance, more keys are generated at lower satellite altitude, except for one case (satellite altitude of 500 km and ground station distance of 1800 km), due to its significantly shorter contact length than the other settings (see Table <ref>).
§.§ Comparing Blockwise and Non-blockwise Schemes
We now compare the key rate of the blockwise
and non-blockwise schemes. Specifically, we assume that the schemes run over k days, and set k to 1, 20, 40, 60, and 80.
For each k, we again select the pump power and the sampling rate to maximize the number of secret keys generated over k days. We see that the optimal pump power for k days is similar to that of one day for both the blockwise and non-blockwise schemes (figure omitted).
Figures <ref>(a)-(c) plot the key rate under the blockwise scheme
when the satellite altitude is 500, 800, and 1000 km, respectively. In each plot, we show results for both the finite and asymptotic scenarios. We see that the key rate for the finite scenario increases with the number of days and approaches the asymptotic result when k ≥ 20.
Figures <ref>(a)-(c) plot the key rate
when the satellite altitude is 500, 800, and 1000 km, respectively. In each plot, the results for the three ground station distances under the blockwise and
non-blockwise schemes are shown. We see that the effective key rate increases with the number of days. The effective key rate under the blockwise scheme is visibly higher than that of the non-blockwise scheme when the satellite altitude is 800 and 1000 km.
We define (r_b - r_nb)/r_nb as the relative key rate difference between the blockwise and non-blockwise schemes, where r_b and r_nb are the key rates of the blockwise and non-blockwise schemes for the finite scenario, respectively. Fig. <ref> plots the relative key rate difference for the various settings.
We see that the difference is larger than 0 in all the cases when the satellite altitude is 800 and 1000 km, i.e., the blockwise scheme outperforms the non-blockwise scheme. Specifically, the blockwise scheme leads to up to 4% and 5% improvements when the satellite altitude is 800 km and 1000 km, respectively. When the satellite altitude is 500 km, we see up to 1% difference between these two schemes, and the blockwise scheme leads to a slightly lower key rate in some settings (D = 600 and 1200 km, when the number of days is small). We only see four cases where the blockwise strategy leads to fewer key bits than the non-blockwise strategy: A = 500 km and D = 600, 1200, or 1800 km with k = 1 day; and A = 500 km, D = 1200 km, and k = 20 days.
In terms of the number of key bits generated per day, the differences between the blockwise and non-blockwise schemes in the various scenarios are: (A=500 km, D=600 km): 2.9×10^6; (A=500 km, D=1200 km): 1.7×10^6; (A=500 km, D=1800 km): 10^6; (A=800 km, D=600 km): 1.8×10^7; (A=800 km, D=1200 km): 10^7; (A=800 km, D=1800 km): 7×10^6; (A=1000 km, D=600 km): 1.6×10^7; (A=1000 km, D=1200 km): 1.9×10^7; (A=1000 km, D=1800 km): 0.
Let ℓ̅_block and ℓ̅_non-block represent the average number of key bits generated per day for the blockwise and non-blockwise strategies, respectively. Table <ref> shows ℓ̅_block - ℓ̅_non-block, where both quantities are obtained from the results of 80 days (i.e., the number of secret keys generated over 80 days divided by 80). We see that the blockwise strategy leads to 10^6 to 1.9×10^7 more keys per day in the various settings, except for one setting (A = 1000 km and D = 1800 km), since no key is generated during daytime for both strategies.
Summarizing the results in Fig. <ref> and Table <ref>, we see that the blockwise strategy in general leads to a higher key rate and more key bits, except for the scenarios with low satellite altitudes and a small number of days.
Therefore, it is in general more advantageous to use the blockwise strategy, which can be easily deployed since it is only used in the classical post-processing stage of QKD.
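To make the post-processing distinction concrete, the following sketch contrasts the two strategies; finite_key_length() is a hypothetical stand-in for the finite-key secret key length computation used in this paper, and each raw-key block is summarized here only by its size n and error rate qber.

def blockwise_keys(blocks, finite_key_length, sample_rate):
    # privacy amplification is run independently on each block
    return sum(finite_key_length(n, qber, sample_rate) for n, qber in blocks)

def non_blockwise_keys(blocks, finite_key_length, sample_rate):
    # all raw key bits are concatenated; the error rate becomes an average
    # weighted by block size, which can dilute the good (nighttime) block
    n_total = sum(n for n, _ in blocks)
    qber_avg = sum(n * qber for n, qber in blocks) / n_total
    return finite_key_length(n_total, qber_avg, sample_rate)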
For comparison, we further add the asymptotic results, (r̅_b - r̅_nb)/r̅_nb, where r̅_b and r̅_nb are the asymptotic effective key rates for the blockwise and non-blockwise schemes, respectively. We see that the relative key rate difference in the finite-key case approaches that of the asymptotic results in the various settings as the number of days increases.
§.§ Handling Spurious 2-photon Terms
Recall that p(2) in Eq. (<ref>) is the probability of generating a 2-photon term in each pair of modes. Such 2-photon events are detrimental to QKD due to photon-number-splitting (PNS) attacks <cit.>. Specifically, when two photons (instead of one photon of an entanglement pair) are sent from the satellite to a ground station, an adversary can keep one photon and send the other to the ground station, and hence knows the state at the ground station. So far, we have ignored p(2) for ease of analysis. To investigate the impact of this approximation on our results, we simulate a hypothetical idealistic entanglement source, where either the vacuum state or entanglement pairs are generated, i.e., we normalize p(0) and p(1) as p(0)/(1-p(2)) and p(1)/(1-p(2)), respectively, and then set p(2)=0.
After that, the simulation of loss and noise on the entanglement pairs follows the models in Section <ref>.
Then for the optimal pump power chosen in Section <ref>, we compare the resultant success probability and fidelity of this idealistic source with those of the actual SPDC source we use. We observe that these two sources have similar success probabilities in all settings (the difference is within 0.001). For fidelity, although their differences are small (within 0.01) in most cases, the difference can be large (0.03) when the satellite altitude is 500 km and the ground station distance is 600 km. Further exploration on such cases is left as future work.
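As a small illustration, the renormalization described above can be written as follows, where p is assumed to be a mapping from the photon-pair number per pair of modes to its probability (only the 0-, 1-, and 2-photon terms are kept); this is a sketch of the idealistic-source construction, not the full simulation.

def idealize_source(p):
    # Build the hypothetical idealistic source: remove the 2-photon term
    # and renormalize the vacuum and 1-photon probabilities.
    norm = 1.0 - p[2]
    return {0: p[0] / norm, 1: p[1] / norm, 2: 0.0}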
§ RELATED WORK
Satellite-based quantum communication provides a promising direction for global-scale QKD <cit.>. A recent study <cit.> explores the finite key effect in satellite-based QKD. It considers a satellite communicating with a single ground station, instead of entanglement-based QKD where a satellite transmits entanglement pairs to a pair of ground stations simultaneously as in this study. In addition, it concatenates all the data together, i.e., it only considers the non-blockwise strategy, while our study compares blockwise and non-blockwise strategies. The authors use the finite key analysis techniques proposed in <cit.>. We derive our finite key results based on <cit.>. In particular, that reference provides tight key-rate bounds, using entropic uncertainty, when processing a raw key into a secret key. Typically this method is used directly for the non-blockwise scenario, which is usually considered in QKD research. We also use their methods in our work to analyze the amount of secret key material in smaller blocks, running privacy amplification independently on each block and, thus, using results in <cit.> to determine the size of the secret key derived from each (smaller) block. It would be interesting future work to see if one could bound the quantum min entropy of each sub-block directly and run a single privacy amplification process over the entire block. That is, use a single invocation of privacy amplification, as in the non-blockwise strategy, yet still retain the benefit of increased key lengths as in blockwise postprocessing.
The loss and noise models in this paper are based on those in <cit.>, and we extend its noise model by considering unfiltered background photons. The focus of <cit.> is on optimal scheduling of satellite to ground station transmissions
with a constellation of satellites, whereas our work focuses on comparing blockwise and non-blockwise key distillation in satellite-based QKD.
A natural approach to building practical global-scale secure QKD is to send photons directly through optical fiber or terrestrial free space. In both cases, however, the number of transmitted photons decreases exponentially with distance due to channel loss. Moreover, the quantum no-cloning theorem does not allow noiseless amplification of the quantum signal in QKD, in contrast to classical communications <cit.>, which restricts the distance of secure QKD to a few hundred kilometers <cit.>. Secure key distribution via quantum communication therefore becomes significantly more difficult beyond this length scale <cit.>. One solution to the limited distance of QKD is to use quantum repeaters, which incorporate entanglement swapping, purification, and quantum memories <cit.>. However, despite remarkable progress <cit.>, the quantum-repeater-based approach is still far from enabling practical long-distance secure quantum communication. For global-scale QKD, a promising solution is free-space quantum communication via satellites, which can significantly reduce photon losses <cit.>. Because only the atmospheric layer (about 10 km thick) contributes to photon losses, entanglement distribution via free space has successfully delivered entangled photon pairs to ground stations in quantum communication experiments <cit.>. Further related works on satellite-based quantum communication show the feasibility of satellite-based QKD over long distances and under various circumstances <cit.>. In 2017, Liao et al. introduced satellite-to-ground QKD with a low-Earth-orbit satellite at an altitude of about 500 km to implement decoy-state QKD, achieving a kilohertz key rate from the satellite to the ground over a distance of up to 1,200 kilometers <cit.>.
§ CONCLUSION AND FUTURE WORK
In this paper, we
compare blockwise and non-blockwise key distillation strategies for satellite-based QKD, where the satellite quantum channel is highly dynamic and hence can produce raw key blocks with significantly different characteristics. Using extensive simulation, we show that the blockwise strategy can lead to a 5% higher secret key rate
than the traditional non-blockwise strategy that is agnostic to the dynamics of the quantum channel.
As future work, we will consider scenarios with multiple satellites in a constellation and multiple ground station pairs. We will also consider more factors when modeling quantum satellite channels (e.g., weather conditions, cloud coverage). In addition, we will consider more blocks based on time of the day (e.g., sunset, night, sunrise, noon).
§ ACKNOWLEDGMENTS
This research was supported in part by the NSF grant CNS-1955744, NSF-ERC Center for Quantum Networks grant EEC-1941583, MURI ARO Grant
W911NF2110325, and NSF CCF-2143644.
|
http://arxiv.org/abs/2307.03951v1 | 20230708105908 | High precision tests of QCD without scale or scheme ambiguities | [
"Leonardo Di Giustino",
"Stanley J. Brodsky",
"Philip G. Ratcliffe",
"Xing-Gang Wu",
"Sheng-Quan Wang"
] | hep-ph | [
"hep-ph",
"hep-th"
] |
|
http://arxiv.org/abs/2307.04129v1 | 20230709085847 | Cross-modal Orthogonal High-rank Augmentation for RGB-Event Transformer-trackers | [
"Zhiyu Zhu",
"Junhui Hou",
"Dapeng Oliver Wu"
] | cs.CV | [
"cs.CV"
] |
Cross-modal Orthogonal High-rank Augmentation
for RGB-Event Transformer-trackers
Zhiyu Zhu, Junhui Hou, and Dapeng Oliver Wu
Department of Computer Science, City University of Hong Kong
[email protected]; [email protected]; [email protected]
August 12, 2023
==================================================================================================================================================================================================
This paper addresses the problem of cross-modal object tracking from RGB videos and event data. Rather than constructing a complex cross-modal
fusion network, we explore the great potential of a pre-trained vision Transformer (ViT). Particularly, we delicately investigate plug-and-play training augmentations that encourage the ViT to bridge the vast distribution gap between the two modalities, enabling comprehensive cross-modal information interaction and thus enhancing its ability.
Specifically, we propose a mask modeling strategy that randomly masks a specific modality of some tokens to encourage tokens from different modalities to interact proactively.
To mitigate network oscillations resulting from the masking strategy and further amplify its positive effect, we then theoretically propose an orthogonal high-rank loss to regularize the attention matrix.
Extensive experiments demonstrate that our plug-and-play training augmentation techniques can significantly boost state-of-the-art one-stream and two-stream trackers to a large extent in terms of both tracking precision and success rate. Our new perspective and findings will potentially bring insights to the field of leveraging powerful pre-trained ViTs to model cross-modal data. The code will be publicly available.
§ INTRODUCTION
Event cameras asynchronously capture pixel intensity fluctuations with an ultra-high temporal resolution, low latency, and high dynamic range, making them gain increasing attention recently <cit.>. Owing to such admirable advantages, event cameras have been widely adopted in various applications, such as object detection <cit.> and depth/optical flow estimation <cit.>. Particularly, the distinctive sensing mechanism makes event cameras a promising choice for object tracking <cit.>.
Despite the many advantages of event-based object tracking in special environments, e.g., low-light, high-speed-motion, and over-exposed scenes, event data lack
sufficient visual cues, such as color, texture, and complete contextual appearance that can be easily captured by RGB data,
resulting in event-only vision still suffering from relatively inferior performance in practice. Thus, a more promising direction is to investigate cross-modal object tracking from both RGB and event data, where the merits of the two modalities can be well leveraged to pursue higher performance.
However, the vast distribution gap between RGB and event data poses significant challenges in designing algorithms for modeling cross-modal information.
Most existing pioneering cross-modal trackers heavily engage in designing robust cross-modal fusion modules, which makes it cumbersome to use advanced embedding backbones for boosting performance.
In view of the success of Transformer-based tracking algorithms <cit.>, where the multi-head attention naturally models the indispensable correlation relationship between template and search regions, we plan to investigate the potential of pre-trained powerful vision Transformers (ViTs) in cross-modal object tracking from both RGB and event data.
However, those pre-trained Transformers with RGB data may not be able to fully model the essential feature interaction across RGB and event data, due to the distribution gap between the two modalities.
To this end, we study plug-and-play training techniques for augmenting the pre-trained Transformer used as the embedding backbone of our RGB-event object tracking framework.
To be specific, to promote the learning of the attention layer across two modalities, we propose a cross-modal mask modeling strategy, which randomly masks/pops out the multi-modal tokens. We anticipate that, in reaction to the absence of a particular modality at certain locations, the network would proactively enhance interactions on the remaining cross-modal tokens. Nevertheless, randomly masking tokens will inevitably alter data distributions and introduce disruptions, impeding network training. To mitigate the induced negative effect, we further propose a regularization term to guide the training of each attention layer. Based on the observation that the values of internal attention matrices of a Transformer indicate the degree of cross-modal feature interaction,
we propose to orthogonalize the attention matrix so as to explicitly promote its rank. Beyond that, we anticipate that such regularization could encourage the cross-modal correlation to be evenly and concisely established using the multi-domain signatures, rather than being unduly reliant on a specific domain. Finally, we apply the proposed techniques to state-of-the-art one-stream and two-stream Transformer-based tracking frameworks and experimentally demonstrate that their tracking performance is further boosted significantly.
In summary, the contributions of this paper are:
* a mask modeling strategy for encouraging the interaction between the cross-modal tokens in a proactive manner;
* theoretical orthogonal high-rank regularization
for suppressing network fluctuations induced by cross-modal masking while amplifying its positive effect;
and
* new state-of-the-art baselines for RGB-event object tracking.
Last but not least, our novel perspectives will potentially bring insights to the field of leveraging pre-trained powerful ViTs to process and analyze cross-modal data.
§ RELATED WORK
§.§ Object Tracking
Recent years have seen remarkable progress in the study of object tracking, which is primarily due to the widespread success of deep learning <cit.>. Based on the distribution of computational burdens, current methods could be generally divided into two-stream <cit.> and one-stream methods <cit.>. As the earlier invented and relatively mature ones, most offline Siamese-based tracking methods <cit.> fall into the first category. It utilizes a delicate embedding backbone to extract semantic-rich embeddings and then models the target location via either a direct proposal head <cit.> or an online optimization process <cit.>, which is also called deep Siamese-trackers or discriminative correlation filters, respectively <cit.>. SiamFC <cit.> first developed a fully-convolutional architecture to fuse template and search embeddings for object tracking. Though introducing a single-stage RPN <cit.> detector SiamRPN <cit.> achieved target object tracking by comparing the current-frame features to those from a template. To remove the disturbance factors, e.g., padding, SiamRPN++ <cit.> introduced a spatial-aware sampling strategy and further utilized ResNet <cit.> to embed representative features for Siamese-based tracking.
DiMP <cit.> proposed to exploit both target and background appearances to achieve object tracking. KYS <cit.> represented the scene information as dense state vectors and utilizes such state vectors to maximize the tracking performance. Besides, some spatio-temporal-based methods also exploit temporal information to achieve robust and effective tracking <cit.>. MDNet <cit.> separated domain-independent from domain-specific information via a CNN-based framework. RT-MDNet <cit.> further improved it via an RoI-Align strategy, which extracts more precise embeddings from feature maps of targets and candidates. Swin-tracker <cit.> introduced the Swin-Transformer <cit.> to effectively encode the semantic information from input images for high-performance visual tracking.
Due to the extraordinary correlation modeling ability of Transformer, an emerging branch of one-stream methods shows strong potential in correlation modeling. OS-track <cit.> unified the embedding and relation modeling processes with a single vanilla ViT <cit.>, which achieves admiring performance with reduced computational resources. Meanwhile, SimViT-Track <cit.> proposed a similar approach, which feeds search and template image tokens straight into a ViT backbone and performs regression and classification on the resulting tokens.
In summary, with the success of existing embedding backbones, such as ViT <cit.> and Swin-Transformer <cit.>, more intriguing and effective methods have been proposed recently. While these methods could achieve admirable performance, most of them are driven by matching semantically identical segments of the search and template regions viewed as RGB images. As a result, their performance is inextricably tied to imaging characteristics, which can be compromised in specific scenarios such as high-speed and low-light scenes. Hence, it is highly desired to incorporate multi-modal inputs to remedy each deficiency. Moreover, the crucially multi-modal data necessitates additional efforts to generalize these methods to the event-based.
§.§ Event-based Tracking
Owing to its innate characteristics and superiority for object tracking, event-based tracking has been a progressively prevalent subject for research in recent years. Additionally, existing approaches may be broadly classified into two categories: model-based and data-driven. Through describing surrounding environments by a photometric 3D map, Bryner et al. <cit.> proposed to track the 6-DOF pose of a camera. To capture the spatio-temporal geometry of event data, Mitrokhin et al. <cit.> utilized a parametric model to compensate camera motion. Based on a pipeline of tracking-learning-detection, Ramesh et al. <cit.> proposed an object tracking algorithm for event cameras, which is the first learning-based long-term event tracker. Then, Li et al. <cit.> introduced the VGG-Net-16 to encode the appearance of the event-stream object. Inspired by the classic Siamese-matching paradigm, Chae et al. <cit.> presented to track objects via learning an edge-aware similarity in the event domain. Recently, Zhang et al. <cit.>, introduced a spiking transformer for encoding spatio-temporal information of object tracking. Moreover, ZHU et al. <cit.> proposed to utilize inherent motion information of event data to achieve effective object tracking. To summarize, although there are some promising studies that provide directive insights for event-based tracking, a limited number of works have sought to find complementary information from RGB data, e.g., semantic information.
§.§ Cross-modal Learning
Fusing embedding with multiple modalities is a sensible solution for perceiving and recognizing the objects robustly and accurately <cit.>. However, for current machine learning algorithms, learning representative patterns from multiple modalities is still a challenging issue <cit.>. Wang et al. <cit.> proposed to apply data augmentation techniques to boost cross-modal 3D object detection. Liu et al. <cit.> utilized cross-modal feature rectification and fusion models for image segmentation with input from multiple modalities. Jaritz et al. <cit.> solved the multi-modal segmentation issue from the perspective of unsupervised domain adaptation. Moreover, Wang et al. <cit.> designed an RGB-T tracking framework by propagating the intermodal pattern and long-term context. Ye et al. <cit.> proposed a cross-modal self-attention module to achieve natural language-based image segmentation via adaptively capturing informative words and important regions in images. Zeng et al. <cit.> proposed to project the camera features onto the point set on LiDAR. In summary, recent works are clearly founded on network architecture, as is evident by their prevalence. Moreover, the current advanced Transformer paradigm could adaptively process different modalities. However, there is still a lack of further investigations and analysis of the internal mechanism.
§ PROPOSED METHOD
§.§ Motivation
Learning the correlation between the template and search regions robustly and precisely is one of the most essential aspects of object tracking. Fortunately, with current advancements in the multi-head attention mechanism, such correlation
could be naturally achieved via Transformer-based frameworks <cit.>.
However, current powerful ViTs were usually pre-trained with RGB data, e.g., ImageNet <cit.>,
which potentially means they cannot be adequately adapted to cross-modal learning, i.e., the full feature interaction between RGB and event data, which is essential for cross-modal object tracking, cannot be well achieved due to the vast distribution gap between the two modalities. Accordingly, the tracking performance may be limited.
Instead of following existing cross-modal research paradigms mainly focused on designing sophisticated cross-modal information fusion networks, we aim to explore plug-and-play training augmentation techniques to mitigate the above-mentioned potential limitation of a pre-trained ViT used as the embedding backbone of an RGB-Event object tracking scheme.
Generally, based on a fundamental and essential premise that different modalities possess their own unique benefits for a cross-modal tracker, token embedding information should be adequately transmitted across multi-modalities, especially for the regions with target objects, in a bid to enhance themselves using specific merits from the other modality. Thus, we propose a mask modeling strategy to enable the network to proactively exploit the cross-modal information in Sec. <ref>. Furthermore, we propose a high-rank orthogonalization mechanism in Sec. <ref>, which can not only alleviate network fluctuations induced by the mask modeling strategy but also further boost cross-modal information interaction.
In what follows, we will detail the proposed techniques adapted to both one-stream and two-stream trackers, as illustrated in Fig. <ref> (b) and Fig. <ref> (c), respectively.
We always use I and E in the subscripts to indicate the RGB and event modalities, and T and S are the tokens of template and search regions, respectively.
§.§ Mask-driven Cross-modal Interaction
Grouping tokens via similarity is one of the most representative steps for the self-attention mechanism of a Transformer <cit.>. However, due to the distribution gap between
tokens corresponding to different modalities, the similarity-driven attention may tend to aggregate information from the same modality, hence impeding cross-modal learning.
Thus, how to effectively and efficiently promote the cross-modal interactions is critical for maximizing the potential of a pre-trained ViT for RGB-event object tracking.
We propose a cross-modal mask modeling strategy to address this issue in a proactive manner, shown as Fig. <ref> (a).
As illustrated in Fig. <ref>, the underlying intuition of this strategy
is that, by removing patches of different modalities at different locations, we expect the task loss to enforce the network to spontaneously enhance/build cross-modal correlations using the remaining tokens of the other modality. Once the interaction is established, the RGB and event tokens may learn to shrink the distribution gap, maintaining such correlation in the inference phase.
Specifically, we apply random masks to RGB and event data to remove distinct patches.
To begin, for the one-stream methods, masking elements can be readily accomplished by simply popping out corresponding elements, which could concurrently lessen the network training burden.
For the two-stream methods, due to the large computational resource consumption of the embedding backbone, we directly average the masked features of RGB and event data at the primary stage, which are
further fed into the high-level embedding backbone and relation modeling modules for the object proposal.
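A minimal sketch of the masking step for the one-stream case is given below (PyTorch-style); the tensor layout, the way positions are kept distinct, and the per-sample view are our simplifying assumptions, and batching is omitted for clarity.

import torch

def crossmodal_mask(rgb_tokens, event_tokens, delta_i=0.1, delta_e=0.1):
    # rgb_tokens, event_tokens: (N, C) token sequences sharing the same
    # spatial layout. A fraction delta_i of RGB tokens and delta_e of event
    # tokens are dropped at distinct positions, so every location remains
    # visible to at least one modality.
    N = rgb_tokens.shape[0]
    perm = torch.randperm(N)
    n_i, n_e = int(delta_i * N), int(delta_e * N)
    keep_i = torch.ones(N, dtype=torch.bool)
    keep_e = torch.ones(N, dtype=torch.bool)
    keep_i[perm[:n_i]] = False              # RGB tokens removed here
    keep_e[perm[n_i:n_i + n_e]] = False     # event tokens removed elsewhere
    # one-stream variant: masked tokens are simply popped out of the sequence
    return torch.cat([rgb_tokens[keep_i], event_tokens[keep_e]], dim=0)

For the two-stream case, the kept features of the two modalities would instead be averaged before being fed into the high-level backbone, as described above.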
Remark. It is worth noting that the motivation and objective of the proposed masking strategy are considerably different from those of the well-known masked image modeling <cit.>. We start from the pursuit of promoting the network to actively utilize cross-modal information. Thus, the patches with distinct positions across RGB and event modalities are randomly removed to permit each location can be perceived by the network but with different modalities. However, mask image modeling pre-trains network weights to comprehend image semantics by feeding just a subset of image patches to reconstruct the unseen area.
Although such a masking strategy used in the training phase is expected to strengthen the ability of the network to perceive cross-modal information to some extent, the randomly dropped information would potentially result in an unstable training process. Moreover, such disruptions are especially devastating for one-stream algorithms, which must concurrently learn representative embeddings and establish the relationship between the cross-modal template and search tokens (see the experimental demonstration in Sec. <ref>). Thus,
to pull the network out of this predicament, we further propose orthogonal high-rank regularization in a theoretical manner in the next section.
§.§ Orthogonal High-rank Regularization
To appreciate the multi-head attention mechanism, we take a one-stream tracker <cit.> with the vanilla ViT <cit.> as an example. As illustrated in Fig. <ref> b), its internal self-attention layers concurrently perceive the RGB and event tokens from both the template and search areas. Depending on the group to which the query and key tokens belong (k groups in total), we can partition the resulting attention matrix into k^2 blocks (here k=4). Note that the attention values of a typical block reflect the degree of the interaction between tokens.
To
mitigate the network disturbances induced by the cross-modal mask modeling strategy
and further amplify its positive effect (i.e., boosting cross-modal learning), we concentrate on the cross-modal zones of the attention matrix, such as M_S_I,S_E and M_S_E,S_I. Assuming the tokens are well embedded with highly discriminative features, each token will form a unique correlation with its identical counterpart, resulting in each row or column being orthogonal to the others. Moreover, as attention elements are non-negative, the corresponding matrix should be full rank (we refer readers to the Supplementary Material for more details). Therefore, we propose the following regularization to encourage some desired blocks of the attention matrix to be high-rank:
L(M,τ) = ‖diag(Σ) - dup(τ)‖_1, M = U Σ V,
where τ∈ℝ is a pre-defined threshold value, U∈ℝ^n×n, Σ∈ℝ^n×m, and V∈ℝ^m×m are the outputs of the singular value decomposition (SVD) of block M∈ℝ^n×m, diag(·) returns a vector consisting of the main diagonal elements of the input matrix, and dup(·) converts an input scalar to a vector by duplicating the scalar. We impose the regularization term onto a set of blocks of the attention matrix {M^(i)}_i=1^N standing for the interaction of cross-modal tokens.
Due to its strong regularization effect, we empirically select the blocks corresponding to image-to-event attention (i.e., M_S_I,T_E and M_S_I,S_E) and the blocks corresponding to event-to-image attention (i.e., M_S_E,T_I and M_S_E,S_I).
Moreover, as computing the SVD of a matrix is time-consuming, we randomly choose a layer to implement this regularization at each optimization step, instead of operating it in each layer.
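A sketch of this regularization for the selected blocks of one randomly chosen layer is given below; torch.linalg.svdvals returns the singular values, diag/dup follow the notation of Eq. (<ref>), and the default value of τ shown here is only illustrative since it is not specified in this section.

import torch

def high_rank_loss(attn_block, tau):
    # L(M, tau) = || diag(Sigma) - dup(tau) ||_1 : an L1 penalty that pushes
    # every singular value of the attention block M toward the threshold tau,
    # hence toward a high-rank (near-orthogonal) structure.
    sigma = torch.linalg.svdvals(attn_block)
    return torch.abs(sigma - tau).sum()

def cross_modal_regularizer(selected_blocks, tau=0.5):
    # applied to the selected cross-modal blocks (e.g., M_S_I,T_E, M_S_I,S_E,
    # M_S_E,T_I, M_S_E,S_I) of one randomly chosen attention layer
    return sum(high_rank_loss(m, tau) for m in selected_blocks)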
For the two-stream methods, since the input data from different modalities are mixed in a preceding embedding backbone as shown in Fig. <ref> (c), e.g., swin-Transformer <cit.>, the resulting attention matrix only consists of two parts, i.e., the search-to-template and template-to-search regions, as illustrated in Fig. <ref> (c).
Under this scenario, we anticipate that the discriminative cross-modal tokens will be able to form a unique correlation with the identical object parts across template and search areas. As shown in the right part of Fig. <ref> (a) and Fig. <ref> (c), such a relationship would also produce that each row is orthogonal to the others.
Thus, we also regularize the regions belonging to the target objects in M_S,T. Specifically, guided by bounding box information, we first mask the attention weights in non-target regions of M_S,T, then apply Eq. (<ref>) to increase the rank of the masked matrix.
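For the two-stream variant, a corresponding sketch is shown below, reusing high_rank_loss from the sketch above; target_mask is assumed to be a binary matrix of the same shape as M_S,T that is 1 inside the ground-truth boxes and 0 elsewhere.

def two_stream_high_rank_loss(attn_s_t, target_mask, tau=0.5):
    # zero out attention weights in non-target regions of M_S,T, then apply
    # the same singular-value penalty to the masked matrix
    masked = attn_s_t * target_mask.float()
    return high_rank_loss(masked, tau)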
§.§ Training
To train a Transformer-based tracker with the proposed plug-and-play augmentation techniques, at each optimization step, we first randomly mask/pop out event and image patches with a ratio of δ_e and δ_i (0<δ<1), respectively. Then, we train the whole network with the following loss function:
L_all = L_task + α L(M,τ),
where L_task denotes the original task loss function, composed of regression and classification branches, and α is a balanced weight for the proposed regularization term.
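Putting the two augmentations together, one optimization step could look like the sketch below; crossmodal_mask_batch, the model signature with return_attention, and the field names of the batch are assumptions made for illustration, while α = 1.2 follows the setting reported in the experiments and τ is again illustrative.

def training_step(model, batch, task_loss, optimizer, alpha=1.2, tau=0.5):
    # 1) randomly mask/pop out event and image patches with ratios delta_e, delta_i
    masked_inputs = crossmodal_mask_batch(batch)          # assumed helper
    # 2) forward pass; cross-modal attention blocks of one random layer are returned
    outputs, attn_blocks = model(masked_inputs, return_attention=True)
    # 3) L_all = L_task + alpha * L(M, tau)
    loss = task_loss(outputs, batch["labels"])
    loss = loss + alpha * cross_modal_regularizer(attn_blocks, tau)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()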
§ EXPERIMENT
Implementation details. We evaluated the proposed plug-and-play training augmentation techniques on both one-stream and two-stream trackers. We set the template and search sizes as 128 and 256, respectively, which contain regions 2× and 4× the size of the annotated boxes. Moreover, the location and scale jitter factors of the search region are set as 3 and 0.25, respectively (no jitter is applied to the template region). For one-stream, we directly adopted the SOTA method named color-event unified tracking (CEUTrack) <cit.> as our baseline model (ViT-B). During training, we used the same optimizer (AdamW), learning rate scheduler, and task loss function as the original paper. We set the batch size as 24 and the augmentation weight α in Eq. (<ref>) empirically as 1.2. The masking ratios of both modalities δ_i and δ_e were set to 0.1.
For two-stream trackers, to the best of our knowledge, there is no Transformer-based RGB-event tracker available. Thus, we chose the most recent event cloud-based motion-aware tracker (MonTrack) <cit.> and modified it with the proposal head of a Transformer-tracker <cit.> and the backbone of pre-trained Swin-V2 <cit.> to construct two-stream RGB-event trackers (see the Supplementary Material for the detailed architecture). Moreover, we tested lightweight and heavy backbones, i.e., Swin-V2-Tiny <cit.> and Swin-V2-Base <cit.>, to achieve a comprehensive evaluation, and the resulting baselines are named MonTrack-T and MonTrack-B, respectively. To train the whole framework, we utilized the AdamW optimizer <cit.> with a learning rate of 1e^-4 for the proposal head and 1e^-5 for the backbone. We set the weight decay as 1e^-4. MonTrack-T and MonTrack-B were trained with 57K and 81K steps, respectively. We empirically set the value of α as 1.0, and the masking ratios of RGB and event data δ_i and δ_e as 0.4 and 0.3, respectively.
We refer readers to the Supplementary Material for the detailed network architectures and settings.
Datasets. We employed two large-scale cross-modal RGB-event single object tracking datasets: FE108<cit.> and COESOT<cit.>. Both datasets were collected by DAVIS346 with a spatial resolution of 346 × 260, dynamic range of 120 dB, and minimum latency of 20 μ s. FE108 consists of 108 RGB-event sequences collected indoors with a total length of 1.5 hours, which captures 21 different types of objects. The training split of FE108 consists of 140K RGB-Event pairs and 59K for testing. The ground-truth bounding boxes were annotated by a Vicon motion capture system. Moreover, the COESOT dataset consists of 578,721 RGB-Event pairs, which could be split into 827 and 527 sequences for training and testing, respectively. Those sequences are collected from both indoor and outdoor scenarios and cover a range of 90 classes and 17 attributes. The ground truth bounding boxes of the COESOT dataset were manually annotated. Note that we adopted the quantitative metrics suggested by each dataset to evaluate different methods.
§.§ Experimental Results
Results on FE108. As listed in Table <ref>, after being augmented by the proposed techniques during training, both MonTrack-T and MonTrack-B substantially improve both RSR and RPR by more than 3%. Moreover, the larger model "MonTrack-B" yields a greater performance gain. We reason that such an effect may be the consequence of promoting thorough cross-modal interaction. Besides, the superior performance of the proposed techniques is also demonstrated in the precision and success plots in Fig. <ref>, which exceed those of SOTA methods by a large margin, i.e., 5.1% in RSR, 8.1% in OP_0.50, 12.1% in OP_0.75, and 3.8% in RPR.
than that of only event-based methods and only RGB-based methods
demonstrates the significance and necessity of using the information of both RGB and event data for object tracking.
Results on COESOT. As shown in Table <ref>, the original Transformer-based cross-modal tracker, i.e., CEUTrack, improves the SR value of the previous SOTA SiamR-CNN by 1.1%. After being augmented with our techniques, i.e., CEUTrack+Ours, the values of SR and PR are further improved by 1.2% and 1.4%, respectively, and its NPR exceeds 70%,
convincingly validating the effectiveness of the proposed techniques. In addition, we also provide the success and precision plots of different attributes in Fig. <ref>, where it can be seen that the proposed augmentations can yield general improvements instead of only strengthening certain circumstances. For example,
the proposed augmentations achieve 3.4% precision and 2.8% success improvements under the blurring attribute. Notably, CEUTrack+Ours maintains the best performance under the camera motion attribute, while the baseline CEUTrack drops to 7^th place.
We also refer readers to the Supplementary Material for the comparisons of the network size and inference time.
§.§ Ablation Study
Visualizations. Fig. <ref> visualizes the internal attention matrix of CEUTrack. The values of each row of the matrix are utilized to weight-sum tokens in that row and project to a corresponding token. Due to the absence of values in the blocks M_S_I,S_E, M_S_I,T_E, M_T_I,T_E, M_T_I,S_E in Figs. <ref> (a) and (d), there is scarce information projected from the event domain to the RGB domain.
The reason may be that the ViT was pre-trained on ImageNet composed of RGB data,
making it prefer to process RGB data.
When used as the backbone of an RGB-event object tracker, the pre-trained filters attempt to project event information onto RGB tokens to complete the labor-intensive tasks of information fusion and processing, rather than performing the inverse projection.
noticeably enhanced, i.e., the matrix blocks, which are zeros in Figs. <ref> (a) and (d), exhibit attention values, as demonstrated in Figs. <ref> (b) and (e).
Besides, we also visualized the singular values of matrix blocks related to the cross-modal interaction in Figs. <ref> (c) and (f), which substantially validates they have been pushed far away from a low-rank matrix after applying the proposed techniques. We refer readers to the Supplementary Material for more results.
Finally, Fig. <ref> shows the queries of the 2^nd, 4^th, and 7^th self-attention layers where it can be seen that the proposed augmentations narrow the distribution gaps between event and RGB tokens, especially for the 4^th layer.
Masking vs. High-rank. We conducted thorough experiments to better understand the relationship and function of the two proposed augmentation techniques. From Table <ref>, it can be seen that
when the two techniques were simultaneously applied, the improvement is much more significant than that of only applying the masking scheme. The improvement is slight when only the high-rank regularization was applied. These observations validate our claim that the two techniques are complementary.
Effect of the mask size. We experimentally validated the effect of different mask sizes on performance. As shown in Table <ref>, the benefits may be nullified under extremely large or extremely small masks. The possible reason is that the network perceives very small masks as noise, while if the mask is too large, the object may appear in only one modality, which is detrimental to cross-modal learning.
§.§ Discussion
In view of the impressive performance of the proposed plug-and-play training augmentations, it is worth further exploring their potential in other cross-modal scenarios, such as RGB-3D point clouds, or even vision-natural language. In addition, as demonstrated in Fig. <ref>, the proposed orthogonal high-rank regularization indeed facilitates the interactions between cross-modal tokens, and thus, it would be promising to further develop task-specific regularization terms for other visual Transformers-based works.
§ CONCLUSION
In this paper, we introduced plug-and-play training augmentations for Transformer-based RGB-event object tracking. Our augmentations consist of two complementary techniques–cross-modal mask modeling and orthogonal high-rank regularization with the same objective of enhancing the cross-modal interaction of a ViT pre-trained only with RGB data.
Our extensive experiments demonstrate the effectiveness of our training augmentations, as state-of-the-art methods achieve significant improvement in tracking performance after augmentation.
While current Transformers can be scaled up to enormous sizes, relying solely on final objectives to guide the model learning process may be insufficient. We hope our perspectives, findings and analysis
will inspire further research into the internal mechanisms of Transformer-based cross-modal fusion tasks.
|
http://arxiv.org/abs/2307.04616v1 | 20230710145810 | MiVOLO: Multi-input Transformer for Age and Gender Estimation | [
"Maksim Kuprashevich",
"Irina Tolstykh"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG",
"I.2.0; I.4.0; I.4.9"
] |
MiVOLO: Multi-input Transformer for Age and Gender Estimation
Maksim Kuprashevich Irina Tolstykh
[email protected] [email protected]
Layer Team, SaluteDevices
August 12, 2023
============================================================================================================================================
Age and gender recognition in the wild is a highly challenging task: apart from the variability of conditions, pose complexities, and varying image quality, there are cases where the face is partially or completely occluded. We present MiVOLO (Multi Input VOLO), a straightforward approach for age and gender estimation using the latest vision transformer. Our method integrates both tasks into a unified dual input/output model, leveraging not only facial information but also person image data. This improves the generalization ability of our model and enables it to deliver satisfactory results even when the face is not visible in the image. To evaluate our proposed model, we conduct experiments on four popular benchmarks and achieve state-of-the-art performance, while demonstrating real-time processing capabilities. Additionally, we introduce a novel benchmark based on images from the Open Images Dataset. The ground truth annotations for this benchmark have been meticulously generated by human annotators, resulting in high accuracy answers due to the smart aggregation of votes. Furthermore, we compare our model's age recognition performance with human-level accuracy and demonstrate that it significantly outperforms humans across a majority of age ranges.
Finally, we grant public access to our models, along with the code for validation and inference. In addition, we provide extra annotations for used datasets and introduce our new benchmark. The source code and data can be accessed at <https://github.com/WildChlamydia/MiVOLO.git>
§ INTRODUCTION
Age and gender recognition of a person in a photo is a highly important and complex task in computer vision. It is crucial for various real-world applications, including retail and clothes recognition, surveillance cameras, person identification, shopping stores and more. Additionally, this task becomes even more challenging in uncontrolled scenarios. The significant variability of all conditions such as image quality, angles and rotations of the face, partial facial occlusion, or even its absence in the image, coupled with the necessary speed and accuracy in real-world applications, makes the task quite challenging.
Our objective was to develop a simple and easy to implement approach capable of simultaneously recognizing both age and gender, even in situations where the face is not visible. We aimed for scalability and speed in our solution.
In this paper, "gender recognition" refers to a well-established computer vision problem, specifically the estimation of biological sex from a photo using binary classification. We acknowledge the complexity of gender identification and related issues, which cannot be resolved through a single photo analysis. We do not want to cause any harm to anyone or offend in any way.
Meanwhile, gender recognition is a classification task, while age estimation can be solved either through regression or classification.
Many popular benchmarks and research papers <cit.> <cit.> consider age as a classification problem with age ranges. However, this approach can be inaccurate because age, by its nature, is a regression problem. Moreover, it is inherently imbalanced <cit.>. Treating it as a classification causes several issues. For a classification model, it makes no difference whether it misclassifies into a neighboring class or deviates by several decades from the ground truth. Additionally, as stated in <cit.>, classification models cannot approximate age ranges from unseen classes, while regression models can. However, regression models are much trickier to train, and collecting or cleaning datasets for such task is more challenging.
In this paper, we consider the following popular benchmarks: IMDB-Clean <cit.>, UTKFace <cit.>, Adience <cit.>, FairFace <cit.>. These are some of the most famous datasets containing both age and gender ground truth. IMDB-Clean is the largest available dataset for this task, but it consists of celebrities and is heavily biased. This bias poses a problem for recognition in the wild, you can see an example in Figure <ref>. For more details, refer to the <ref> section.
Therefore, in our work, we introduce a completely new benchmark comprising 84,192 pairs (FaceCrop, BodyCrop) randomly selected from the Open Images Dataset<cit.>.
These images were annotated on a crowd-sourcing platform, and we have achieved remarkably high accuracy using a weighted averaging votes strategy.
While most existing works focus on estimating age and/or gender solely from face images, this work introduces the MiVOLO model, which is built upon the visual transformer model VOLO <cit.>. MiVOLO allows for the simultaneous prediction of age and gender by incorporating both face and body features.
Our model, trained using both body and face images, achieves SOTA results on the 4 largest benchmarks.
Additionally, it attains a high frame rate of 971 frames per second (FPS) when utilizing a batch size of 512 on the NVIDIA V100. Moreover, our model accommodates the inclusion of images that may lack visible faces.
Human-level accuracy on this task is also an open question: it heavily depends on the conditions and has not been clearly established. Some articles <cit.> state that neural network models have already surpassed human-level performance. However, there are not many works in which human-level performance has been estimated precisely, and we did not find any conducted on in-the-wild images of full-sized persons.
In this paper, we estimated this level using random images from the IMDB-clean dataset.
The main contributions of our work can be summarized as follows:
* We provide publicly available models that achieved SOTA results in 4 benchmarks.
* We have developed a readily implementable architecture called MiVOLO, capable of simultaneously handling faces and bodies. It enables accurate age and gender prediction, even in cases where humans may struggle. The architecture supports predictions with and without face input. MiVOLO has achieved top-1 results on 4 popular benchmarks, 2 of them without any fine-tuning on training data.
* Additionally, we have once again demonstrated that a carefully implemented multi-output (multi-task) approach can provide a significant performance boost compared to single-task models.
* We have also shown that multi-input models are able to gain generalization ability in the same way as multi-task models.
* The original UTKFace dataset has been restored to include full-sized images.
* The annotations of IMDB-clean, UTK and FairFace datasets have been modified to include annotations of all detectable persons and faces in each image using our models.
* A human-level estimate for the task, obtained with a substantial sample size.
* A completely new, very well balanced Lagenda dataset that we propose to use as a benchmark for age and gender recognition in the wild.
§ RELATED WORKS
Facial age and gender recognition. Typically, solving the gender task separately is not of great interest in research or business. Therefore, most methods and benchmarks either tackle both age and gender tasks or focus solely on age. Convolutional neural networks (CNNs) have become the state-of-the-art in most computer vision challenges, although in recent years, there has been a trend to replace them in certain tasks. Levi et al. <cit.> were the first to use CNNs, evaluating their approach on the Adience dataset <cit.>, which contains age and gender as classes.
The network they implemented is a convolutional model with two fully-connected layers. It achieves an accuracy of 50.7 ± 5.1 for age.
With significant advancements in computer vision neural networks, many methods have been developed, some based on face recognition techniques and models <cit.>, suggesting the existence of powerful generic models for faces that can be adapted to downstream tasks.
Some papers <cit.> even employ more general models as encoders, such as VGG16 <cit.>, particularly for ordinal regression approaches in age estimation.
Other methods utilize CNN networks for direct classification or regression for age recognition <cit.> <cit.>.
As of the writing of this article, the state-of-the-art model on Adience for age classification used the Attention LSTM Networks approach <cit.>, achieving an accuracy of 67.47. However, they did not employ their model for gender prediction.
Recognition using face and body images.
Most methods for age or gender estimation are based on facial analysis. Some consider the body for age <cit.> or gender <cit.> recognition, but in very few works <cit.>, joint recognition using both face and body pictures has been utilized. Therefore, it is difficult to find a baseline in open sources to start with.
Only a few works exist that utilize full-body images of individuals. The earliest attempt <cit.> predates the era of neural networks and employed classical image processing techniques to predict age.
A more recent study <cit.> utilized both face and body images together in a single neural network for age and gender prediction. Another paper <cit.> employed face and body images with a late fusion approach, but solely for gender prediction.
Datasets and benchmarks. Our focus primarily lies on datasets containing both age and gender information. The largest existing dataset for these tasks is IMDB-Wiki <cit.> <cit.>. However, the ground truth answers in this dataset do not appear to be clean. Therefore, we used the cleaned version <cit.>.
Another interesting dataset is UTKFace <cit.>, which also contains both age and gender information but is much smaller, with only annotations for face crops.
The MORPH <cit.> dataset is also notable for age estimation, although the domain captured in this dataset cannot be considered as representing wild conditions.
KANFace <cit.> is another large dataset of images and videos that includes gender information.
The CACD dataset <cit.> is also sizeable and features celebrities from different age groups, making it highly useful, but it does not include gender information.
The datasets mentioned above contain age information suitable for regression.
The Adience dataset <cit.> contains both age and gender, but age is presented as 8 classes.
FairFace <cit.> is a big and well-balanced dataset, where age is categorized into ranges.
All these datasets are focused on faces, but for most of them it is possible to additionally generate person (body) annotations.
For training experiments we use only IMDB-clean and UTKFace, as they are the biggest datasets with a suitable image domain and annotations.
FairFace and Adience are employed specifically for benchmarking purposes.
Visual Transformer Models. For many years, convolutional neural networks have dominated the field of computer vision. However, transformers have been increasingly gaining prominence in various tasks and benchmarks. Transformers are powerful and versatile models, and they are far from reaching their limits. One of the first transformer models applied to computer vision was ViT <cit.>, which achieved great success and inspired the exploration of many other variants <cit.> <cit.>. VOLO<cit.> is also a transformer-based model, but it efficiently combines the worlds of CNNs and Transformers and performs extremely well. We chose the VOLO model because it converges quickly and requires less data in our experience. Additionally, VOLO is one of the fastest transformer-based vision models.
Human level for age estimation.
In <cit.>, a comparison was made between the mean absolute error (MAE) of human and machine age estimation. The study examined the FG-NET dataset and the Pinellas County Sheriff's Office (PCSO) dataset (currently unavailable). The authors found that the human MAE on the FG-NET dataset <cit.> was 4.7, while on the PCSO dataset it was 7.2. For the machine results, they obtained 4.6 and 5.1, respectively. They also claimed that their algorithm performed worse than humans in the age range ∈[0, 15] years. The authors noted that this age range is not present in the FG-NET dataset <cit.>, which caused the observed difference. When excluding this range, the estimated human MAE for FG-NET is also very close - 7.4. Eventually, the authors concluded that their model is more accurate than humans.
§ DATASETS
§.§ IMDB-clean
We primarily conducted our experiments on the IMDB-Clean dataset, which comprises 183,886 training images, 45,971 validation images, and 56,086 test images. We utilized the original split of the dataset. The images in this dataset are highly suitable for our tasks and represent a wild domain. However, it is important to note that the dataset only includes images of celebrities, which introduces a bias. Additionally, it suffers from significant class imbalance (Figure <ref>), similar to other datasets.
For the face-only baseline, we utilized this dataset without making any modifications. For experiments involving body images, we generated face and person bounding boxes for all individuals detectable with our model in each image.
§.§ UTKFace
The original dataset only includes cropped face images, so we performed backward matching to the original full-sized images. This process also involved double-checking by the face encoder. During this process, we encountered 4 faces that did not match back to the original images, so we dropped those images from our annotation. The remaining images maintain the original annotations but with bounding boxes generated by our detector.
The original dataset does not provide any predefined training, validation, and test splits. To align with other works, we utilized the exact same split as described in <cit.>. In this subset, the ages are in the range [21, 60], totalling 13,144 training and 3,287 test images.
§.§ FairFace
The FairFace<cit.> dataset comprises 86,744 training images and 10,954 validation images. The ground truth attributes in this dataset cover race, gender, and age, categorized into nine classes: (0-2, 3-9, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70+). The dataset is also very well balanced by races.
For measuring gender and age classification accuracy, we utilize a validation set.
To gather information about bodies, we utilize 'fairface-img-margin125' images and employ our detector model to localize the centered face and its corresponding body region.
§.§ Adience
The Adience<cit.> dataset consists of 26,580 facial images, depicting 2,284 subjects across eight age group classes (0-2, 4-6, 8-13, 15-20, 25-32, 38-43, 48-53, 60-100). The dataset additionally includes labels for the binary gender classification task.
The images in the Adience<cit.> dataset were captured under varying conditions, including differences in head pose, lighting conditions, and image quality.
For our analysis, we utilize coarse aligned images and refrain from applying any other aligning methods.
To refine the facial localization on the images, we employ our detector model. We deliberately avoid using in-plane aligned versions of the faces to prevent distortion. The validation of our models and computation of classification accuracy are performed using all five-fold sets.
§.§ New Lagenda dataset
§.§.§ Lagenda benchmark
Due to issues such as bias in datasets containing celebrities and professional photos, we introduce a completely new benchmark in our paper for age and gender recognition tasks in wild conditions. We named this benchmark the Lagenda dataset, after the name of our team. To create it, we initially sampled random person images from the Open Images Dataset <cit.> (OID). This dataset offers a high level of diversity, encompassing various scenes and domains.
The images were annotated using a crowd source platform. To ensure high-quality annotations for age estimation, we implemented strict control measures. Control tasks (honeypots) were included to maintain accuracy. Each honeypot had a 7-year age range, within ±3 years of the true age. Therefore, the accuracy on these control tasks can be seen as just CS@3 (see <ref>).
Control measures included:
* Mandatory training for all users before proceeding.
* Users had to pass an examination; CS@3 below 20% resulted in a ban.
* Annotation tasks consisted of 6 examples and 1 hidden control task, totaling 7 tasks per suite.
* After completing 10 task suites, users with average CS@3 below 20% were banned, and their answers rejected.
These measures were implemented to prevent significant noise from bots and cheaters.
Our dataset was annotated with an overlap of 10, meaning that each real task received 10 votes for both age and gender.
In the last step, we balanced the dataset by age distribution using 5-year groups and ensured gender distribution within each one. As a result, we obtained 67,159 images with 84,192 persons, comprising 41,457 male and 42,735 female samples. Please refer to Figure <ref> for a visualization of the dataset distribution.
§.§.§ Votes ensembling
After completing the annotation process, we encountered the challenge of determining how to utilize the obtained votes.
Table <ref> provides a list of all the methods that were tested. In addition to other statistical methods, we employed a weighted mean strategy. It was implemented as follows:
A(v) = ( ∑_i=1^N v_i · e^1/MAE(u_i) ) / ( ∑_i=1^N e^1/MAE(u_i) )
where A(v) is the final age prediction for the vector v of user votes, N is the size of v (the number of users who annotated this sample), and MAE(u_i) denotes the individual MAE of the i-th user u_i across all control tasks.
We used an exponential weighting factor because there is a substantial difference in annotation quality between users with MAE of 3 and 4, for example. This approach outperformed other variants significantly.
Gender was aggregated using the simple mode(v), where v is an array of elements in {male, female}. We discarded all answers where the mode occurred with a frequency of less than 75%. Based on the control tasks, the resulting gender accuracy is 99.72%; we can roughly claim that human accuracy for this task is at most this level.
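As a concrete illustration, a minimal Python sketch of this aggregation (weighted mean for age, mode with the 75% agreement threshold for gender) is given below; function and variable names are ours and purely illustrative.

    import numpy as np
    from collections import Counter

    def aggregate_age(votes, user_maes):
        # votes: age answers for one sample; user_maes: each voter's MAE on the control tasks
        weights = np.exp(1.0 / np.asarray(user_maes, dtype=float))   # e^(1/MAE) weighting
        votes = np.asarray(votes, dtype=float)
        return float(np.sum(votes * weights) / np.sum(weights))

    def aggregate_gender(votes, min_agreement=0.75):
        # votes: list of "male"/"female"; discard the sample if agreement is below 75%
        label, count = Counter(votes).most_common(1)[0]
        return label if count / len(votes) >= min_agreement else None

    age = aggregate_age([31, 35, 29, 40], user_maes=[3.1, 4.0, 5.2, 7.0])
    gender = aggregate_gender(["female", "female", "male", "female"])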
§.§.§ Lagenda trainset
Experiments in this work required not only a high-quality benchmark but also a large amount of training data. Therefore, besides our benchmark, we also collected data from other sources, mostly from our production. These images are in almost the same visual domain as images from OID<cit.>.
Our train dataset contains approximately 500,000 images in total, which have been annotated in exactly the same way as the benchmark.
In the text, we refer to this training and validation proprietary data as the Lagenda trainset.
Although we cannot make this data publicly available, we provide a demo with the model trained on it (the link can be found in the Github repository).
§ METHOD
§.§ MiVOLO: Multi-input age & gender model
Our model is depicted in Figure <ref>.
For each input pair (FaceCrop, BodyCrop) of size 224 × 224, we independently apply the original VOLO<cit.> patch embedding module, which tokenizes the crops into image patches of size 8 × 8.
Two representations are then fed into a feature enhancer module for cross-view feature fusion, which is achieved using cross-attention. The module is illustrated in Fig. <ref>, block 2. Once the features are enriched with additional information, we perform a simple concatenation, followed by a Multi-Layer Perceptron (MLP) that creates a new fused joint representation and reduces the dimensionality of the features.
This feature fusion allows us to pay attention to important features from both inputs and disregard less significant ones. Additionally, it handles scenarios where one of the inputs is empty, ensuring meaningful information is extracted even from a single view.
The fused features are then processed using the VOLO two-stage architecture<cit.>, which involves a stack of Outlookers, tokens downsampling, a sequence of transformers, and two heads on top.
The last two linear layers update the class embedding into a 3-dimensional vector: two output values for gender and one for the normalized age value. Unlike <cit.>, which uses multiple heads for separate age and gender predictions, MiVOLO produces a single vector for each image containing both outputs.
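A minimal PyTorch sketch of this fusion stage is given below; it is an assumption-laden illustration (module names, the token dimension of 192, and the use of nn.MultiheadAttention for the cross-attention blocks are our choices), not the released MiVOLO implementation.

    import torch
    import torch.nn as nn

    class FeatureEnhancer(nn.Module):
        # Cross-view fusion: face tokens attend to body tokens and vice versa, then concat + MLP.
        def __init__(self, dim=192, heads=6):
            super().__init__()
            self.face_to_body = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.body_to_face = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))

        def forward(self, face_tok, body_tok):
            f, _ = self.face_to_body(face_tok, body_tok, body_tok)   # enrich face tokens with body context
            b, _ = self.body_to_face(body_tok, face_tok, face_tok)   # enrich body tokens with face context
            fused = torch.cat([face_tok + f, body_tok + b], dim=-1)  # simple concatenation of both views
            return self.mlp(fused)                                   # reduced joint representation

    face_tok = torch.randn(2, 28 * 28, 192)   # tokens from the face patch embedding (224/8 = 28)
    body_tok = torch.randn(2, 28 * 28, 192)   # tokens from the body patch embedding
    joint = FeatureEnhancer()(face_tok, body_tok)  # then fed to the VOLO two-stage backbone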
We use a combination of two losses for training (a sketch of the combined objective is given after the list):
* WeightedMSE loss function for age prediction with weights from LDS<cit.>
* BinaryCrossEntropy loss function for gender prediction
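A sketch of this combined objective is shown below, assuming per-sample LDS weights are precomputed and the model emits two gender logits plus one normalized age value; the loss-balancing weight is a placeholder, not a prescribed value.

    import torch
    import torch.nn.functional as F

    def age_gender_loss(pred, age_target, gender_target, lds_weights, gender_w=0.03):
        # pred: (B, 3) = two gender logits and one normalized age value
        gender_logits, age_pred = pred[:, :2], pred[:, 2]
        age_loss = (lds_weights * (age_pred - age_target) ** 2).mean()   # weighted MSE with LDS weights
        gender_loss = F.cross_entropy(gender_logits, gender_target)      # two-class CE over the gender logits
        return age_loss + gender_w * gender_loss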
As demonstrated in Table <ref>, multi-task learning enables us to achieve improvements in both tasks.
Moreover, early feature fusion allows us to maintain almost the same high performance as that of the original VOLO (see <ref>).
§.§ Data preprocessing
We resize each face and body crop using a letterbox with padding to preserve the aspect ratio, followed by RGB channel Z-score normalization with the original ImageNet statistics. Bilinear interpolation is used for resizing.
The ground truth answers are also processed with min-max normalization:
ỹ_i = y_i - y_min/y_max - y_min.
To obtain face-body pairs, we follow these steps:
* The input image is first passed through a detector to find all faces and persons. We specifically trained YOLOv8 <cit.> for the publicly available version of our code.
* Using the resulting lists of face and person objects, we run the Assign(faces, persons) algorithm to associate each face with its corresponding person. This method makes use of the Hungarian algorithm (a sketch is given after the list). Unassigned faces or bodies can still be utilized as independent inputs.
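A minimal sketch of such an assignment step is given below; the cost (one minus the fraction of the face box covered by the person box) and the acceptance threshold are illustrative assumptions rather than the exact production logic.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def overlap(face, person):
        # fraction of the face box [x1, y1, x2, y2] covered by the person box
        x1, y1 = max(face[0], person[0]), max(face[1], person[1])
        x2, y2 = min(face[2], person[2]), min(face[3], person[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        face_area = (face[2] - face[0]) * (face[3] - face[1])
        return inter / (face_area + 1e-9)

    def assign(faces, persons, min_overlap=0.3):
        # Hungarian algorithm on cost = 1 - overlap gives a one-to-one face/person matching
        if not faces or not persons:
            return []
        cost = np.array([[1.0 - overlap(f, p) for p in persons] for f in faces])
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_overlap]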
Unlike faces, body images of persons pose several specific challenges. The body can be heavily occluded, and there can be many different small parts of the body appearing in the image that are not useful for recognition. Such images require a more complex preprocessing approach. Additionally, the nature of bounding boxes introduces another challenge. While face crops rarely contain other people's faces or bodies, body crops often do.
We implemented additional preprocessing steps for body images (a simplified sketch follows the list):
* Check for intersections of the current body bounding box body_i with all detected objects in the image. If any intersection exists, regardless of its size, apply DetachObject(body_i) that removes all the objects intersected with the i-th. This also applies to the paired face crop.
* The remaining image may contain unwanted artifacts. To handle these artifacts, we added a trimming operation Trim(b_i). In Figure <ref>, result of this operation can be observed.
* If the resulting body image is too small in terms of pixels or size compared to the original crop, it is considered useless and discarded.
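A simplified sketch of these steps is given below; the masking-based approximation of DetachObject and Trim and the min_keep threshold are our assumptions, not the exact implementation.

    import numpy as np

    def preprocess_body(image, body_box, other_boxes, min_keep=0.3):
        # Rough approximation of DetachObject + Trim: mask out every other detected object
        # inside the body crop, then discard the crop if too little of it survives.
        x1, y1, x2, y2 = body_box
        crop = image[y1:y2, x1:x2].copy()
        keep = np.ones(crop.shape[:2], dtype=bool)
        for ox1, oy1, ox2, oy2 in other_boxes:
            ix1, iy1 = max(ox1, x1) - x1, max(oy1, y1) - y1
            ix2, iy2 = min(ox2, x2) - x1, min(oy2, y2) - y1
            if ix2 > ix1 and iy2 > iy1:
                crop[iy1:iy2, ix1:ix2] = 0            # DetachObject: remove the intersecting region
                keep[iy1:iy2, ix1:ix2] = False
        ys, xs = np.where(keep)
        if len(ys) == 0 or keep.mean() < min_keep:    # too little useful body area left
            return None
        return crop[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # Trim to the remaining region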
§.§ Performance
We consider the VOLO-D1 model variation as our baseline, which consists of 25.8M parameters. In comparison, the MiVOLO-D1 model has 27.4M parameters. Figure <ref> demonstrates that while MiVOLO-D1 is slightly slower than the original version, it still exhibits high performance. All measurements were conducted using a single V100 GPU with float16 precision. When dealing with a single input (even in a mixed batch), we have the option to skip the first PatchEmbedding step for the missing input, leading to a significantly faster inference time.
§ EXPERIMENTS
Our code is based on PyTorch <cit.> and timm <cit.>. We use the VOLO <cit.> model as our baseline.
§.§ Evaluation metrics
In this section, we present the model's performance using various metrics. For gender prediction and age prediction in classification benchmarks, we utilize the classification accuracy metric.
In regression age benchmarks, the model's performance is evaluated based on two metrics: Mean Absolute Error (MAE) and Cumulative Score (CS). MAE is calculated by averaging the absolute differences between the predicted ages and the actual age labels in the testing dataset. CS is computed using the following formula:
CS_l = (N_l / N) × 100%
Here, N represents the total number of testing examples, while N_l denotes the count of examples for which the absolute error between the estimated age and the true age does not exceed l years.
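For reference, both metrics can be computed in a few lines; the snippet below is illustrative.

    import numpy as np

    def age_metrics(pred_ages, true_ages, l=5):
        err = np.abs(np.asarray(pred_ages, float) - np.asarray(true_ages, float))
        mae = err.mean()
        cs_l = 100.0 * (err <= l).mean()   # CS@l: share of samples within l years of the label
        return mae, cs_l

    mae, cs5 = age_metrics([23.0, 41.2, 67.5], [25, 40, 80])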
§.§ VOLO Experiments on Open Source Datasets
First, we conducted experiments on the IMDB-clean and UTKFace datasets to establish a good baseline and identify model limitations. In this section, the original images, annotations and data splits were used.
For the age estimation task, our baseline model, VOLO-D1, was trained using only the face input. We employed the AdamW optimizer with an initial learning rate of 1.5e-5 and a weight decay of 5e-5. The model was trained for 220 epochs individually on both the IMDB-clean and UTKFace datasets. The base learning rate corresponds to a batch size of 192. At the start of training, we performed a warmup with lr=1e-6 for 25 epochs, with a gradual increase afterwards.
The following data augmentations were applied during training:
* RandAugment with a magnitude of 22 and bilinear resizing.
* Random bounding box jitter for position and size, both with a magnitude of 0.45.
* Reprob with p=0.5.
* Random horizontal flip with p=0.5.
Additionally, we incorporated drop and drop-path with p=0.32.
We performed several experiments, exploring different parameters and loss functions. For age estimation, we tried WeightedFocalMSE loss and WeightedHuber loss, but simple WeightedMSE yielded the best performance.
As shown in Table <ref> our results are state-of-the-art without any additional data or advanced techniques on IMDB-clean and UTKFace datasets.
For the age & gender VOLO-D1 model, we followed the same training process. To address the discrepancy in the magnitudes of loss between age and gender, we weighted the gender loss with w=3e-2. We did not change anything else, including the number of epochs.
By adding a second output (gender) to the age model, we expected to observe the same effect as reported in the study <cit.>, where a single model performs better than multiple separate models, leveraging the benefits of learning two tasks simultaneously. And, indeed, we obtained a significantly better MAE for the age, while also achieving impressive accuracy for gender classification. Please refer to Table <ref> for the detailed results.
§.§ MiVOLO Experiments on Open Source Datasets
We made some minor adjustments to the training process for the MiVOLO model. To reduce training time, we initialized the model from a single-input multi-output VOLO checkpoint. We initialized weights of the body PatchEmbedding block with the same weights as the face PatchEmbedding block. The Feature Enhancer Module was initialized with random parameters.
During training, we froze the face PatchEmbedding block since it was already trained. We trained the model for an additional 400 epochs, incorporating random dropout of the body input with a probability of 0.1, and random dropout of the face input with a probability of 0.5. Face inputs were only dropped for samples with suitable body crops. If a face input was dropped, the model received an empty (zero tensor) input for face PatchEmbedding, and the same for empty body inputs.
These techniques were implemented to adapt the model for various mixed cases and to improve its understanding of input images, resulting in enhanced generalization. We also set the learning rate to 1e-5. To preserve the structural integrity of the data, all augmentations, excluding jitter, are applied simultaneously.
The remaining parts of the training procedure are unchanged.
We conducted experiments on the IMDB-clean dataset using our MiVOLO. Table <ref> shows a comparison between the single-input VOLO and the multi-input MiVOLO. The results indicate that the best performance across all benchmarks is achieved by using both face and body crops. The model trained on our dataset consistently outperforms the one trained on IMDB.
To evaluate the quantitative performance of MiVOLO when only body images are available, we conducted an experiment where all faces were removed from the data. Additionally, we excluded any images that did not meet our specified requirements mentioned in Section <ref>. For the IMDB-clean, UTKFace and Lagenda test datasets, 84%, 99.6% and 89% of images were retained, respectively. Results are displayed in Table <ref> and Figure <ref> (b).
§.§ Lagenda experiments
We repeated all previous experiments on our Lagenda trainset. We trained three variants of the model: VOLO-D1 face-only age, VOLO-D1 face-only age & gender, and MiVOLO-D1 face + persons age & gender. We kept all training parameters unchanged, following the same configuration as for the IMDB-clean dataset.
Please refer to Table <ref> and Table <ref> for the results. As expected, the amount of data played a crucial role in the performance of our MiVOLO. We observed significant improvements and achieved SOTA results for the Lagenda, UTKFace, and IMDB-clean datasets by utilizing the face & body multi-input approach. Remarkably, we also obtained satisfactory results for body-only inference.
In Figure <ref>, we provide an illustration of a successful recognition result without visible faces in a random picture sourced from the internet.
The model generalizes very well, even though it has never seen images like this, with persons shown from the back.
The relationship between MAE and age for the final models is shown in Figure <ref> (a) and (b).
§.§ Adience and FairFace experiments
Due to the model's impressive generalization capabilities, we decided to apply MiVOLO to popular classification benchmarks such as FairFace <cit.> and Adience <cit.>. Since our model cannot be trained explicitly for classification, we utilized our final MiVOLO-D1 age & gender model without any modifications. The only change made was mapping the regression output to classification ranges. As shown in Table <ref>, we achieved SOTA results for both datasets without any additional changes.
§ HUMAN LEVEL ESTIMATION AND VOTES ENSEMBLING FOR AGE RECOGNITION
§.§ Human level for age estimation
As described in Section <ref>, during the annotation of the Lagenda dataset, control tasks (honeypots) were generated from the IMDB-clean dataset. A total of 3,000 random examples were sampled for this purpose. Users were not aware of which examples were honeypots and annotated them alongside other tasks. This approach provided a reliable source for estimating the human level of performance in the task.
Figure <ref> illustrates the distribution of MAE values among the users. The mean of this distribution is 7.22, and the median is 7.05. The averaged maximum error is 28.56, while the minimum mean error for a specific user is 4.54.
We have briefly described paper <cit.> in section <ref>.
We disagree with the method of excluding certain age ranges as it can potentially lead to incorrect conclusions. The authors claimed that their model's accuracy is either equal to or surpasses human accuracy. However, since we can only consider the results obtained on the FG-NET dataset due to the aforementioned issue, we have only one estimation where the model achieved an MAE of 4.6 compared to 4.7 in humans. Given this small difference and the sample size of 1,002 images, the statistical evidence does not appear to be substantial. Furthermore, it is important to note that both datasets have specific visual domains, which can further affect the generalizability of the results.
To accurately compare human and machine performance, it is crucial to take into account the entire range of ages and images from the wild domain.
As can be seen in Figure <ref>, the previous suggestion about low neural network and high human performance in the age range of [0, 15] years no longer holds.
It turned out that both humans and the neural network exhibit an increase in error and its dispersion with the age of the person in the image.
Overall, we can confidently state that our model surpasses human annotators across the majority of age ranges. Furthermore, as shown in Table <ref>, the model achieved a MAE of 6.66 on IMDB-clean with body-only images. This demonstrates that, on average, our model outperforms humans even when considering body-only mode.
§ CONCLUSIONS
We have introduced a simple yet highly efficient model, MiVOLO, that achieves state-of-the-art results on 4 benchmarks, demonstrating its capability to function robustly even in the absence of a face image.
To contribute to the research community, we are providing the weights of the models, which have been trained on Open Sourced data.
In addition, we have enriched and expanded the annotations of 3 prominent benchmarks: IMDB-clean, UTKFace and FairFace. Furthermore, we have developed our own diverse and unbiased Lagenda dataset, which contains challenging real-world images and is publicly available.
For the task of age annotation aggregation, we employed an intuitive yet remarkably accurate method and evaluated its performance.
Our investigation into the comparison of human and machine accuracy in age recognition tasks revealed that our current model consistently outperforms humans across various age ranges, exhibiting superior overall accuracy.
§ FUTURE WORK AND DISCUSSION
Despite the fact that we achieved our goals, some questions remain open. We still cannot be sure about the lowest achievable MAE on this or any other age recognition task in computer vision.
However, the weighted mean from human annotators gives us a very interesting estimation of a certain achievable level in the age recognition task, which is 3.5.
Our approach can be significantly improved by incorporating new class-agnostic segmentation approaches, such as the Segment Anything Model <cit.>. These approaches can provide accurate masks for the body, which would be highly beneficial.
Certainly, even in our very well-balanced dataset, there is a lack of data in the higher age ranges, particularly around 80 years and beyond. As we have shown, the largest contribution to the achieved MAE comes from this range, so it needs to be addressed in future work.
Additionally, this task requires a huge amount of data in order to train a perfect model. However, due to the nature of the task, it is very difficult to obtain it. Therefore, we expect that our method can be combined with Masked Autoencoders <cit.> or other scalable self-supervised method.
|
http://arxiv.org/abs/2307.04456v1 | 20230710101101 | Invex Programs: First Order Algorithms and Their Convergence | ["Adarsh Barik", "Suvrit Sra", "Jean Honorio"] | math.OC | ["math.OC", "cs.LG"] |
Invex Programs: First Order Algorithms and Their Convergence
        Adarsh Barik, Suvrit Sra, Jean Honorio
==================================================================================================
Invex programs are a special kind of non-convex problems which attain global minima at every stationary point. While classical first-order gradient descent methods can solve them, they converge very slowly. In this paper, we propose new first-order algorithms to solve the general class of invex problems. We identify sufficient conditions for convergence of our algorithms and provide rates of convergence. Furthermore, we go beyond unconstrained problems and provide a novel projected gradient method for constrained invex programs with convergence rate guarantees. We compare and contrast our results with existing first-order algorithms for a variety of unconstrained and constrained invex problems. To the best of our knowledge, our proposed algorithm is the first algorithm to solve constrained invex programs.
§ INTRODUCTION
Many learning problems are modeled as optimization problems. With the explosion in deep learning, many of these problems are modeled as non-convex optimization problems — either by using non-convex objective functions or by the addition of non-convex constraints. While well-studied algorithms with fast convergence guarantees are available for convex problems, such mathematical tools are more limited for non-convex problems. In fact, the general class of non-convex optimization problems is known to be NP-hard <cit.>. Coming up with global certificates of optimality is the major difficulty in solving non-convex problems. In this paper, we take the first steps towards solving a special class of non-convex problems, called invex problems, which attain global minima at every stationary point <cit.>. Invex problems are tractable in the sense that we can use local certificates of optimality to establish the global optimality conditions.
Related work.
First-order gradient descent methods are the most well-known algorithms to solve convex optimization problems. While they can also solve invex optimization problems under certain conditions, they can be really slow in their convergence due to their inability to use any underlying `invex' geometry. <cit.> have studied the minimization of a special class of unconstrained invex functions – called geodesically convex functions. They provide convergence rate guarantees for their algorithms assuming upper bounds on the sectional curvature of the manifold. Such algorithms have also been studied by <cit.> in their work on optimization methods on Riemannian manifolds, albeit with a focus on asymptotic convergence. The simplest instance of geodesically convex optimization is more commonly known as geometric programming <cit.>. The algorithms solving geodesically convex problems use topological properties of geodesic curves for their convergence. Often, finding the underlying geodesic curves and characterizing the manifold prove to be the bottleneck for solving such problems. These difficulties extend naturally to the general class of invex problems where topological properties are difficult to establish. In this work, while we do connect properties of invex functions with the topology of the domain, we also develop algebraic methods for implementing our proposed algorithm. Our focus in this work is to develop first-order methods with provable global convergence rates for a broader class of invex problems. Our method reduces to classical gradient descent (Riemannian gradient descent <cit.>) if the underlying function is convex (geodesically convex). Many optimization problems can be classified as invex problems by using the simple characterization by <cit.>. We provide some such examples that have been studied in recent years as motivation. <cit.> showed that geodesically convex functions are invex. This means that problems such as matrix Karcher mean problem <cit.>, power control <cit.>, optimal doping profile <cit.> and non-convex matrix factorization <cit.> are invex. Any function which satisfies PL-inequality <cit.> is an invex function. This implies that averaged-out non-convex functions <cit.> are also invex. Similarly, quasar-convex functions <cit.> can also be shown to be invex. Recent studies have shown that many machine learning problems such as learning output kernels <cit.>, multiple tasks learning <cit.>, minimum distance lasso <cit.>, reinforcement learning with general utilities <cit.>, fair sparse regression <cit.>, sparse mixed linear regression <cit.>, imaging with invex regularizers <cit.> and DAG learning <cit.> are also invex. Identifying a problem to be invex is relatively a simple task, whereas coming up with an efficient algorithm to solve such a problem by leveraging the invexity is quite challenging. Furthermore, convergence rate analysis of such algorithms becomes even more tedious. To the best of our knowledge, we are not aware of any provably convergent general algorithm to solve invex problems.
In this paper, we present first-order methods to solve invex problems with provable convergence rate guarantees under some natural technical conditions.
Summary of contributions
* We present a first-order gradient descent algorithm for invex problems (Algorithm <ref>). We demonstrate the feasibility of our update rule over a wide variety of examples (Section <ref>).
* As an extension, we propose a projected gradient descent algorithm for constrained invex problems (Algorithm <ref>). We show that our algorithm works for constrained geodesically convex programs in Hadamard manifolds (Section <ref>).
* We provide convergence rate guarantees (Table <ref>) for our proposed algorithms (Theorem <ref>, <ref>, <ref>, <ref>, <ref>) under varying degree of assumptions. We identify sufficient technical conditions needed for the convergence of our proposed algorithm.
* Finally, we show the applicability of our algorithms on both unconstrained <cit.> and constrained <cit.> machine learning problems in Section <ref>. We show that under the same initialization and hyperparameters, our algorithms outperform the standard gradient descent algorithms.
§ INVEXITY
In this section, we formally define the invex function and relate it with convexity along the curve. Consider a differentiable function ϕ(x) defined on a Riemannian manifold . Let ⟨·, ·⟩_x be the inner product in the tangent space T_x at x induced by the Riemannian metric.
Let ϕ(x) be a differentiable function defined on . Let η be a vector-valued function defined on × such that ⟨η(y, x), ∇ϕ(x)⟩_x is well-defined ∀ x, y ∈. Then, ϕ(x) is an η-invex function if
ϕ(y) - ϕ(x) ≥ ⟨η(y, x), ∇ϕ(x)⟩_x, ∀ x, y ∈ .
If the manifold is ^n, then we get the standard definition of invex functions <cit.>. Convex functions are invex functions on ^n with η(y, x) = y - x. In that sense, invex functions are a generalization of convex functions. <cit.> proved the necessary and sufficient condition that any stationary point of an invex function is a global minimum. It follows that (at least in ^n) any algorithm that converges to a stationary point can, in principle, solve unconstrained invex problems. However, convergence rate guarantees are not available for any such algorithms. Similarly, geodesically convex functions on a Riemannian manifold are η-invex with η(y, x) = Exp^-1_x(y), where Exp^-1_x is the inverse of the exponential map y = Exp_x(v) for some v in the tangent space at the point x ∈. This motivates us to characterize invex functions by treating them as convex functions along a curve. More formally, we provide the following proposition from <cit.>.
A differentiable real function ϕ(x) defined on is η-invex if and only if for every x, y ∈, the real function g_x, y(t) = ϕ(γ_x, y(t)) is convex on [0, 1] for some curve γ_x,y such that
γ_x, y(0) = x, γ_x, y(1) = y, γ̇_x, y(u) (t - u) = η(γ_x, y(t), γ_x, y(u)), ∀ t, u ∈ [0, 1] .
Proposition <ref> immediately provides a setting for η(y, x) in terms of underlying curve, i.e., η(y, x) = γ̇_x, y(0).
For convex functions, the underlying curve γ_x, y(t) = x + t (y - x). Similarly, for a geodesically convex function, the underlying curve γ_x, y(t) is the geodesic curve joining x and y. We notice, however, that finding the underlying curve for any given η-invex function may not be an easy task. We observe that proposition <ref> allows us to connect invexity of a function to a geometric property (underlying curves) of the domain of the function.
This leads us to define invex sets as a natural extension of convex sets.
A set ⊆ is called η-invex set if contains every curve γ_x, y of as defined in proposition <ref> whose endpoints x and y are in .
It is also possible to characterize invex sets using η(y, x) functions by using the relationship between γ_x, y and η(y, x) from equation (<ref>). Thus, we sometimes refer to the invex set as η-invex set with the assumption that η(y, x) is computed using γ_x, y.
We note that our definition is a slight departure from the definition of the invex set used in <cit.>. However, we find our definition more natural for our purpose.
Using definition <ref>, we can redefine invex functions on an invex set ⊆ as following:
Let ⊆ be an invex set. A real-valued differentiable function ϕ:→ is called invex if
ϕ(γ_x, y(t)) ≤ (1 - t) ϕ(x) + t ϕ(y), ∀ x, y ∈, ∀ t ∈ [0, 1]
Definitions <ref> and <ref> are connected with each other through the relationship between γ_x, y and η(y, x) in equation (<ref>).
In the next sections, we will build up our definition of invex sets to define invex programs.
§ INVEX PROGRAM
In this section, we will define the optimization problem that we are trying to solve.
Our optimization problem involves minimizing an η-invex function over an η-invex set. In the remaining paper, we would assume to be an η-invex set unless stated otherwise.
Let f: → be an η_1-invex function, and g_i: →, ∀ i ∈{ 1, ⋯, m } be η_2-invex functions, then the optimization problem
min_x ∈ f(x), such that g_i(x) ≤ 0 , ∀ i ∈{ 1, ⋯, m }
is called an invex program.
It is possible to include equality constraints in the program, but we opt for only inequality constraints for simplicity.
Before we begin to solve the optimization problem (<ref>), we will prove some technical results to understand the problem in a better way. First, we will show that the constraint set is indeed an η_2-invex set. We will do it in two parts.
Let ϕ:→ be an η-invex function, then ϕ(x) ≤ c is an η-invex set for any c ∈.
Next, we use Lemma <ref> to show that the constraint set is an η-invex set.
Let g_i:→, ∀ i ∈{ 1, ⋯, m } be η-invex functions, then the set = ∩_i=1^m _i is η-invex where _i = { x ∈ | g_i(x) ≤ 0 }.
Invex programs without any constraints are called unconstrained invex programs. In the next section, we propose a first-order method to solve invex programs.
§ NEW FIRST ORDER ALGORITHM
In this section, we develop first-order gradient descent methods for invex programs. We start with the unconstrained version of problem (<ref>) and then gradually build up our method for the constrained version.
§.§ Invex gradient descent method
The main task in our algorithm is to figure out a y ∈ for a given x ∈ and a direction v ∈ T_x such that η(y, x) = v. Such a y need not be unique and we are only interested in finding one y (of possibly many) that satisfies η(y, x) = v. We provide the following gradient descent algorithms to solve invex programs.
In Algorithm <ref>, T is the maximum number of iterations and α_k is the step size which depends upon the particular implementation of the algorithm. We will specify a particular choice of α_k in the convergence rate analysis of Algorithm <ref>. Without any information on underlying curve γ_x, y(t), the update step of Algorithm <ref>, i.e., finding a y ∈ such that η(y, x) = v is a problem-dependent task. Below we provide an array of examples to explain this observation.
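To make the structure of Algorithm <ref> concrete before these examples, a minimal Python sketch is shown below; grad_f is the gradient oracle, and solve_eta(x, v) is the problem-dependent routine that returns any y with η(y, x) = v.

    import numpy as np

    def invex_gradient_descent(x0, grad_f, solve_eta, alpha, T=1000, tol=1e-8):
        # Algorithm 1 (sketch): move to any y satisfying eta(y, x_k) = -alpha * grad f(x_k).
        x = x0
        for _ in range(T):
            g = grad_f(x)
            if (g ** 2).sum() ** 0.5 < tol:      # stationary point => global minimum for invex f
                break
            x = solve_eta(x, -alpha * g)          # problem-dependent update: find y with eta(y, x) = v
        return x

    # Convex case (first example below): eta(y, x) = y - x, so solve_eta is simply x + v.
    x_star = invex_gradient_descent(
        x0=np.array([3.0, -2.0]),
        grad_f=lambda x: 2 * x,                   # f(x) = ||x||^2
        solve_eta=lambda x, v: x + v,
        alpha=0.1,
    )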
[Convex case]
For convex problems, y = x + v.
[Geodesically convex case]
For geodesically convex problems, y = Exp_x(v), where Exp_x is the exponential map as defined in <cit.>.
[PL inequality]
It is known that the functions satisfying PL-inequality are invex <cit.>. However, this characterization does not readily lead to a good η(y, x). We provide an η function in the following lemma which can be used in the update step.
Let f(x) be an L-smooth function that satisfies the PL inequality for some μ > 0. Then it is η-invex for η(y, x) = (1/μ) (∇ f(y) + (L ‖y - x‖ / ‖∇ f(x)‖) ∇ f(x)).
The proof of Lemma <ref> and further discussion is deferred to Appendix <ref>.
[Quasar Convex Functions]
<cit.> showed that quasar convexity implies invexity. However, they do not provide any η for the quasar convex functions. In the following lemma, we provide an η for quasar convex functions.
For any ν≥ 0, there exists a β∈ [0, 1] such that quasar convex functions are η-invex for η(y, x) = β/ν(1 - β) (y - x).
This leads to the update y = x + ν1 - β/β v. We provide the proof of Lemma <ref> in Appendix <ref>.
[Connection with Bregman divergence and Mirror descent]
Let B_ψ(y, x) be the Bregman divergence associated with a strongly convex and differentiable function ψ(x) such that B_ψ(y, x) = ψ(y) - ψ(x) - ∇ψ(x)y - x.
Let η(y, x) = ∇ B_ψ(y, x) = ∇ψ(y) - ∇ψ(x), i.e., η(y, x) is a conservative field and B_ψ(y, x) is its potential function. Then a typical mirror descent update <cit.> can be used to compute y, i.e., y = inf_u B_ψ(u, x) + α∇ f(x)u - x.
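As an illustration, the sketch below instantiates this update with the entropic mirror map ψ(x) = ∑_i x_i log x_i on the probability simplex, a standard choice that yields a closed-form multiplicative update; the choice of ψ is ours, not prescribed by the text.

    import numpy as np

    def mirror_descent_step(x, grad, alpha):
        # With psi(x) = sum_i x_i log x_i on the simplex, the update
        # y = argmin_u B_psi(u, x) + alpha * <grad f(x), u - x> has the closed form below.
        y = x * np.exp(-alpha * grad)
        return y / y.sum()

    x = np.full(4, 0.25)                        # start at the uniform distribution
    grad = np.array([1.0, 0.5, -0.2, 0.0])      # gradient of some f at x (illustrative)
    x_next = mirror_descent_step(x, grad, alpha=0.5)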
[Recent Invex Problems]
Some recently studied problems in invexity such as <cit.> and <cit.> are invex for a particular form of η(y, x). In particular, consider any point x ∈^n of the form x = [x_1, x_2]^⊤ where x_1 ∈^n_1, x_2 ∈^n_2 such that n_1 + n_2 = n. Then for any two x, y ∈^n, η(y, x) takes the form η(y, x) = [y_1 - x_1, A(y_1, x_1) (y_2 - x_2)]^⊤
where A(y_1, x_1) ∈^n_2 × n_2 and A(y_1, x_1) ≻0, ∀ y_1, x_1 ∈^n_1. For such problems, update step in Algorithm <ref> becomes y_1 = x_1 + v, y_2 = x_2 + A(y_1, x_1)^-1 v.
[A generic approach using function inverse]
A generic approach to compute y such that η(y, x) = v is to treat η(y, x) = g(y) for a fixed x and then compute y = g^-1(v). This approach works as long as we have explicit closed-form expression for g^-1(v). For our purpose, we ignore the uniqueness of y = g^-1(v) and allow any y as long as g(y) = v.
§.§ Convergence of invex gradient descent method
We start the convergence analysis of Algorithm <ref> with the weakest set of assumptions, and then we gradually add stronger conditions to get a better convergence rate. Before we delve into our first result of convergence, we define a notion of smoothness in the invex manifold .
A differentiable function f:→ is called L-smooth on an η-invex set if ∀ x, y ∈
f(y) ≤ f(x) + ⟨η(y, x), ∇ f(x)⟩_x + (L/2) ‖η(y, x)‖^2,
where the norm ‖·‖ is induced by the Riemannian metric at x.
Note that a function f need not be an invex function to be an L-smooth function. Our first convergence guarantee is for L-smooth functions.
Let f be a L-smooth function and f^* = min_x ∈ f(x) ≥ B for some B > - ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L), then lim_k →∞∇ f(x_k) = 0.
Theorem <ref> states that Algorithm <ref> converges to a stationary point even if the function is not invex. Our next task is to achieve a better convergence rate by adding the assumption that f is an invex function. However, to do that, we need to impose an extra condition on the choice of η(·, ·) which in turn imposes an extra condition on the geometry of . To make Algorithm <ref> amenable to rigorous convergence rate analysis, we impose a sufficient condition on the geometry of which is analogous to triangle inequality in Euclidean space.
[Triangle Inequality]
Let x, y, z ∈, then for some b, c > 0
‖η(y, z)‖^2 ≤ ‖η(x, z)‖^2 + b ‖η(y, x)‖^2 - c ⟨η(y, x), η(z, x)⟩_x .
The triangle inequality assumption is an assumption on the geometry of manifold . We also note that Euclidean spaces clearly satisfy Assumption <ref> by simply taking b=1 and c = 2. <cit.> showed that any Riemannian manifold with sectional curvature upper bounded by κ≤ 0 also satisfies Assumption <ref>. Now, we are ready to state our second convergence result.
Let f: → be an L-smooth η-invex function such that satisfies Assumption <ref>. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞ and η(x_0, x^*) ≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L), then f(x_k) converges to f(x^*) at the rate (1/k).
We further improve convergence rate results by imposing even more conditions on function f. We define μ-strongly η-invex functions as a natural extension to μ-strongly convex functions as follows.
A differentiable function f:→ is called a μ-strongly η-invex function for some μ > 0 if f(y) ≥ f(x) + ⟨η(y, x), ∇ f(x)⟩_x + (μ/2) ‖η(y, x)‖^2, where the norm ‖·‖ is induced by the Riemannian metric at x.
We provide the following convergence results for the μ-strongly η-invex functions.
Let f:→ be an L-smooth μ-strongly η-invex function such that satisfies Assumption <ref>. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞, ‖η(x_0, x^*)‖ ≤ M < ∞ and ‖η(y, x)‖_x^2 ≤ R ‖η(x, y)‖_y^2, ∀ x, y ∈ for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α ∈ (0, min(2/(R μ c), c/(2bL))), then
‖η(x_k+1, x^*)‖^2 ≤ (1 - c α R μ/2)^k+1 M^2 .
We have intentionally chosen to show convergence results for a constant step size of α_t for simplicity. It is not difficult to get better convergence rates by carefully choosing α_t. It is easy to verify that all our results hold for the convex case. They also extend nicely to all the results in <cit.> for geodesically convex case.
§.§ Projected invex gradient descent method
Now that we have shown convergence results for unconstrained invex programs. We can extend these results to constrained case by providing a projected invex gradient descent method. We first discuss projection on an invex set before providing the algorithm.
Let ⊆ be an η-invex set. We define the projection of x ∈ on as a retraction.
Let γ_x, y(t) be the curve connecting x ∈ to y ∈ such that γ_x, y(0) = x and γ_x, y(1) = y. Projection ρ_η(x) of x on is defined as ρ_η(x) = min_ y ∈η(y, x).
It is easy to see that for convex sets, projection reduces to finding y ∈ which is closest to x in Euclidean distance. Also, notice that if x ∈, then ρ_η(x) = x.
First, observe that in the invex program as defined in <ref>, the objective function f is η_1-invex while the constraint set is η_2-invex. Thus, we make the update in two steps: the first step works in the η_1-invex geometry, and the result is then projected back onto the η_2-invex set. The convergence rates of the invex gradient descent algorithm extend to the projected invex gradient descent algorithm under a contraction condition on the projection, stated as Assumption <ref> in the next subsection (details in Appendix <ref>).
Next, we will discuss the convergence of Algorithm <ref>.
§.§ Convergence of Projected Invex Gradient Descent Method
To guarantee convergence of Algorithm <ref>, we need to place extra technical conditions on the projection operator. In particular, the following condition suffices to ensure convergence.
[Contraction]
Let x, y ∈ and let ρ_η_2(x), ρ_η_2(y) be their respective projections onto an η_2-invex set. Then, ‖η_1(ρ_η_2(y), ρ_η_2(x))‖_ρ_η_2(x) ≤ ‖η_1(y, x)‖_x.
Next, we will show that once Assumption <ref> is satisfied by the projection operator, results from Theorems <ref> and <ref> extend nicely to Algorithm <ref>.
Let f: → be an L-smooth η_1-invex function such that satisfies Assumption <ref>. Let ⊆ be an η_2-invex set. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞ and η_1(x_0, x^*) ≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L) and with projection operator satisfying Assumption <ref>, then f(x_k) converges to f(x^*) at the rate (1/k).
We have a similar result for μ-strongly η-invex functions.
Let f:→ be an L-smooth μ-strongly η_1-invex function such that satisfies Assumption <ref>. Let ⊆ be an η_2-invex set. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞, ‖η_1(x_0, x^*)‖ ≤ M < ∞ and ‖η_1(y, x)‖_x^2 ≤ R ‖η_1(x, y)‖_y^2, ∀ x, y ∈ for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α ∈ (0, min(2/(R μ c), c/(2bL))) and with projection operator satisfying Assumption <ref>, then
‖η_1(x_k+1, x^*)‖^2 ≤ (1 - c α R μ/2)^k+1 M^2 .
Assumption <ref> clearly holds for convex objective functions on convex constraints and thus, it is a natural choice of assumption to impose on the general case of constrained invex programs. In fact, in the next subsection, we show that it also holds for geodesically convex programs.
§.§ Constrained geodesically convex problem
In recent literature, there has been a lot of focus on constrained geodesically convex problems (with both the objective function and constraints being geodesically convex) <cit.>. Our projected gradient algorithm <ref> works for constrained case of geodesically convex optimization problems with sectional curvature upper bounded by κ≤ 0. To that end, we can show that Assumption <ref> holds in this particular case.
Let ρ be the projection operator as defined in <ref> on a closed geodesically convex subset of a simply connected Riemannian manifold with sectional curvature upper bounded by κ≤ 0. Then the projection satisfies Assumption <ref>.
Thus, we can use Algorithm <ref> to solve constrained geodesically convex problems with sectional curvature upper bounded by κ≤ 0. This extends all the results from <cit.> to the constrained case and provides a novel method to solve constrained geodesically convex problems.
§ APPLICATIONS
In this section, we provide specific examples of invex programs to validate our theory. Our task is to provide a working η(y, x) for all the problems and explicitly construct the update step and projection step (if needed). Finally, we compare the performance of our algorithm with the gradient descent (or projected gradient descent) algorithm. The latter of which provides no convergence rate guarantees for invex problems. We chose to go with the vanilla implementation for both algorithms, i.e., without any performance enhancement tricks such as line search. This was done to ensure that the comparison remains fair as our algorithms can also be adapted to include such tricks to further boost its performance. However, that is not our focus in this work.
§.§ Log-determinant acyclicity characterization for DAGs
We start with an unconstrained invex program. <cit.> provided a novel characterization of the acyclicity of DAGs in their recent work. Their characterization employs a log-determinant function and is stated in Theorem 1 <cit.>. Let 𝒲 ≜ { W ∈ℝ^d × d | s > r(W ∘ W) }. Their log-determinant acyclicity characterization of DAGs uses the function h(W) = - log det(sI - W ∘ W) + d log s
where W ∈𝒲, I is the identity matrix, r(·) denotes the spectral radius and A ∘ B denotes the Hadamard product between two matrices. We take s to be 1 without loss of generality and thus h(W) = - log det(I - W ∘ W). They show in Corollary 3 <cit.> that h(W) is an invex function. However, they do not provide any specific η for invexity. Next, we will provide a possible η for the problem, but before that we need to define the Hadamard division of two same-sized matrices A, B ∈^d × d as (A ⊘ B)_ij = A_ij/B_ij when B_ij ≠ 0 and 0 otherwise. Now we are ready to state the following lemma.
The function h(W) = - log det(I - W ∘ W), ∀ W ∈𝒲, is η-invex for η(U, W) = -1/2 ((I - W ∘ W) (log(I - U ∘ U) - log(I - W ∘ W))) ⊘ W, where log denotes the matrix logarithm.
We can use our proposed η to construct updates for Algorithm <ref>. Observe that for a stepsize α, the update step in Algorithm <ref> is η(W_k+1, W_k) = - α∇ h(W_k). We take M = (I - W_k+1∘ W_k+1) and N = (I - W_k ∘ W_k) for clarity. Then the update step becomes -1/2 (N (log M - log N)) ⊘ W_k = - 2 α N^-⊤∘ W_k, and we get M = exp( log N + 4 α N^-1 ((N^-⊤∘ W_k) ∘ W_k )), where exp and log denote the matrix exponential and logarithm; this provides the update step.
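A numerical sketch of this update using matrix logarithms and exponentials is given below; it assumes the step size keeps the iterate inside 𝒲 and that the sign pattern of W is preserved when taking the elementwise square root, and it is only an illustration of the derivation above, not the implementation used in the experiments.

    import numpy as np
    from scipy.linalg import expm, logm

    def dag_logdet_step(W, alpha):
        # One step of Algorithm 1 for h(W) = -log det(I - W∘W): solve eta(W_next, W) = -alpha * grad h(W).
        d = W.shape[0]
        N = np.eye(d) - W * W                                      # N = I - W∘W
        grad_term = np.linalg.inv(N) @ ((np.linalg.inv(N).T * W) * W)
        M = expm(logm(N) + 4.0 * alpha * grad_term)                 # M = I - W_next∘W_next
        sq = np.clip(np.eye(d) - M.real, 0.0, None)                 # recover W_next∘W_next
        return np.sign(W) * np.sqrt(sq)                             # keep the sign pattern of W (assumption)

    W = 0.3 * np.random.rand(4, 4) * (1 - np.eye(4))                # random W with r(W∘W) < 1
    W_next = dag_logdet_step(W, alpha=0.05)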
We used this update step to implement Algorithm <ref>. The performance of our algorithm was compared against the standard gradient descent algorithm. Both the algorithms were run with a random initialization of W which was kept the same for both algorithms. We found that the gradient descent algorithm failed to converge in several instances but our algorithm converged towards zero objective function value as predicted by <cit.> (See Figure <ref>).
§.§ Fair sparse regression
Our next example is a constrained invex program. <cit.> proposed a novel invex relaxation for the fair sparse regression problem. In this problem, each data point is associated with one of two groups, and the response variable is generated with a signed bias term based on the membership to the group. They use the following generative model: y_i = X_i^⊤ w^* + γ z_i^* + e_i, ∀ i ∈{1, ⋯, n},
where e_i is a zero mean independent additive noise and z_i^* is the group membership. The task is to identify regression vector w^* ∈^d along with z_i^* for every data point. <cit.> proposed the following invex relaxation for this problem.
min_w, Z ⟨M(w), Z⟩ + λ_n ‖w‖_1 such that tr(Z) = 1, Z ≽ 0,
where
M(w) ≜ [ (1/n)(Xw - y)^⊤ (Xw - y) + 1 ,  (γ/n)(Xw - y)^⊤ ;  (γ/n)(Xw - y) ,  (γ^2/n + 1) I ]
with X ∈^n × d being the data matrix and I being the identity matrix of appropriate dimension. They provide an η_1 for the objective function and it is obvious that constraints are convex (we ignore the dimension of the matrices for succinct representation). Thus,
η_1((w, Z), (w, Z)) = [ w - w; M(w)^-1 M(w) (Z - Z) ], η_2((w, Z), (w, Z)) = [ w - w; Z - Z ] .
We used these η functions to construct updates and projection for Algorithm <ref>. Let f(w, Z) = ⟨M(w), Z⟩. Let ∇_w f(w, Z) = ∂⟨M(w), Z⟩/∂ w and ∇_Z f(w, Z) = M(w), then using the η functions and step-size α we write the following update steps for this problem:
w_t+1 = ∏_λ(w_t - α∇_w f(w_t, Z_t)), Z̅_t+1 = Z_t - α M(w_t+1)^-1M(w_t) ∇_Z f(w_t, Z_t) ,
where ∏_λ(·) is the projection operator which uses soft thresholding to deal with ℓ_1-regularization. We need to project Z̅_t+1 on constraints to get the final Z_t+1.
Z_t+1 = min_Z Z - Z̅_t+1_F^2 such that (Z) = 1, Z ≽ 0
We used update rules from equation (<ref>) and (<ref>) to implement Algorithm <ref>. We compared its performance against the projected gradient descent algorithm. The hyper-parameters (such as λ and α) and initial values of w and Z were kept the same across both algorithms. We report our results in Figure <ref>. We see that both algorithms perform in a similar manner. We expect this behavior as when w_t is close to w_t+1 the update rules are the same for both the algorithms.
§.§ Mixed linear regression
In mixed linear regression, measurements come from one of the two different linear regression models and the task is to identify two regression vectors and the model associated with each data point. Mathematically, each data point is generated as follows: y_i = z_i^* X_iβ_1^* + (1 - z_i^*) X_iβ_2^* + e_i, ∀ i ∈{1, ⋯, n } where β_1^* and β_2^* are d-dimensional vectors. <cit.> proposed an invex program to solve this problem. Let f(t, W, U) = ∑_i=1^n 1/2S_iW + U + 1/2 t_i S_iW - U, g(t, W, U) = (W) _1 and h(t, W, U) = (U) _1, where S_i = [ X_i; -y_i ][ X_i^ -y_i ]
and operator (.) vectorizes the matrix. Their invex formulation is given as:
min_t, W, U ∑_i=1^n f(t, W, U) + λ_1 g(t, W, U) + λ_2 h(t, W, U)
such that W ≽ 0, U ≽ 0, W_d+1, d+1 = 1, U_d+1, d+1 = 1, t _∞≤ 1
The constraints of the problem are clearly convex. <cit.> also provide an η_1 for the objective function, but it does not lend well to construct update rules required for Algorithm <ref>. We bypass this problem by showing that when W U then the objective function is invex for a different η_1. When W = U, we revert to the η provided by <cit.>. To that end, we prove the following lemma.
Assume that W ≠ U. Then the functions f(t, W, U) = ∑_i=1^n 1/2⟨S_i, W + U⟩ + 1/2 t_i ⟨S_i, W - U⟩, g(t, W, U) = ‖vec(W)‖_1 and h(t, W, U) = ‖vec(U)‖_1 are η_1-invex for
η_1((t̄, W̄, Ū), (t, W, U)) = [ τ∘ (t̄ - t); W̄ - W; Ū - U ] ,
where τ(W̄, Ū, W, U) ∈ℝ^n is such that τ_i = ⟨S_i, W̄ - Ū⟩/⟨S_i, W - U⟩.
Now we are ready to construct update and projection rules.
Let (∇_t f(t, W, U))_i = 1/2⟨S_i, W - U⟩, ∀ i ∈{1, ⋯, n}, ∇_W f(t, W, U) = ∑_i=1^n (t_i + 1)/2 S_i and ∇_U f(t, W, U) = ∑_i=1^n (1 - t_i)/2 S_i; then we propose the following update steps for step-size α:
W̅_k+1 = ∏_λ_1(W_k - α∇_W f(t_k, W_k, U_k)), U̅_k+1 = ∏_λ_2(U_k - α∇_U f(t_k, W_k, U_k))
t̅_k+1 = t_k - α∇_t f(t_k, W_k, U_k) ⊘τ(W_k+1, U_k+1, W_k, U_k) ,
where ∏_λ(·) is the projection operator which uses soft thresholding to deal with ℓ_1-regularization. We use the following projection steps to get W_k+1, U_k+1 and t_k+1.
W_k+1 = min_W ‖W - W̅_k+1‖_F^2 such that W_d+1, d+1 = 1, W ≽ 0 ,
U_k+1 = min_U ‖U - U̅_k+1‖_F^2 such that U_d+1, d+1 = 1, U ≽ 0 ,
t_k+1 = min_t ‖t - t̅_k+1‖_2^2 such that ‖t‖_∞≤ 1
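For illustration, the t-projection is an exact entrywise clipping onto the ℓ_∞ ball, while the projection onto {W ≽ 0, W_d+1,d+1 = 1} can be approximated by alternating projections; the sketch below is our own heuristic stand-in, not the exact constrained projection solved in the displays above:

```python
import numpy as np

def project_linf_ball(t_bar, radius=1.0):
    # Exact projection onto {t : ||t||_inf <= radius} is entrywise clipping.
    return np.clip(t_bar, -radius, radius)

def project_psd_fixed_corner(W_bar, n_iter=50):
    # Approximate projection onto {W PSD, W[d, d] = 1} by alternating
    # projections: PSD projection (eigenvalue clipping) followed by resetting
    # the fixed corner entry. This is a heuristic, not an exact solver.
    W = (W_bar + W_bar.T) / 2.0
    for _ in range(n_iter):
        evals, evecs = np.linalg.eigh(W)
        W = (evecs * np.maximum(evals, 0.0)) @ evecs.T
        W[-1, -1] = 1.0
    return W
```

In this sketch, t_k+1 = project_linf_ball(t̅_k+1), while W_k+1 and U_k+1 are approximated by project_psd_fixed_corner applied to W̅_k+1 and U̅_k+1.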
We implemented Algorithm <ref> using update and projection rules from equation (<ref>) and equation (<ref>). Like before, we compared the performance of our algorithm with the projected gradient descent method with the same set of hyperparameters and initialization. We report our results in Figure <ref>. We see that our algorithm converges faster than the projected gradient descent algorithm.
§ CONCLUSION AND FUTURE WORK
In this work, we have taken the first steps towards providing algorithms to solve constrained and unconstrained invex programs under certain technical conditions. We show that our algorithm can be used to solve constrained geodesically convex optimization problems with provable convergence rate guarantees. We also show the applicability of our proposed algorithm in a variety of machine-learning applications. Our analysis employs some natural assumptions, but these are only sufficient conditions for convergence. As a future direction, it would be interesting to see whether these assumptions can be relaxed without losing the convergence rate guarantees. From an application point of view, it would also be interesting to derive explicit update rules and projection operators from a given η for a large class of invex problems. Another direction of research could be to study accelerated versions of our algorithms. Already for the subclass of geodesically convex problems, it is known that, without further assumptions or restrictions, global acceleration similar to Euclidean Nesterov acceleration does not hold; it remains a valuable question to explore conditions under which such acceleration holds in our setting.
§ FUNCTIONS SATISFYING PL INEQUALITY
PL functions are a special class of (possibly nonconvex) functions that satisfy the following property:
‖∇ f(x)‖^2 ≥μ (f(x) - f(x^*)) ,
where x^* is a global minimizer of f. <cit.> showed that an L-smooth function satisfying the PL inequality achieves an exponential convergence rate. These functions are known to be invex. Here we provide a characterization of their invexity by providing an η which can be used to construct updates in Algorithm <ref>. To that end, we show the validity of Lemma <ref>.
Lemma <ref>
Let f(x) be an L-smooth function which satisfies PL inequality for some μ > 0. Then it is η-invex for the following η:
η(y, x) = 1/μ(∇ f(y) + L ‖y - x‖/‖∇ f(x)‖ ∇ f(x) )
Since f(x) follows PL inequality for some μ > 0, the following inequality holds <cit.> for all x in the domain D of f(x):
‖∇ f(x)‖^2 ≥μ (f(x) - f(x^*)) ,
where x^* = min_x ∈ D f(x). Using a Taylor series expansion of g(y) = ∇ f(y)^⊤ v around x and then substituting v = ∇ f(x), we can write
‖∇ f(x)‖^2 = ∇ f(x)^⊤∇ f(y) + ∇ f(x)^⊤∇^2 f(z) (x - y)
for some z= y + t (x - y), t ∈ [0, 1]. It follows that
⟨∇ f(x), ∇ f(y) + ∇^2 f(z) (x - y)⟩≥μ (f(x) - f(x^*)) ≥μ (f(x) - f(y))
Given that f(x) is L-smooth and
max_‖M‖_2 ≤ L u^⊤ M v = L ‖u‖ ‖v‖ ,
we can write
⟨∇ f(x), ∇ f(y) + L ‖y - x‖/‖∇ f(x)‖ ∇ f(x)⟩≥μ (f(x) - f(x^*)) ≥μ (f(x) - f(y))
This completes our proof.
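As an aside, the PL inequality itself is easy to check numerically on a standard example: least squares with a rank-deficient design matrix satisfies PL without being strongly convex. The following sketch (ours, for illustration only) estimates the PL constant μ empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 30, 50                      # rank-deficient design: PL holds, strong convexity does not
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
f_star = 0.5 * np.sum((A @ x_star - b) ** 2)

def f(x):
    return 0.5 * np.sum((A @ x - b) ** 2)

def grad(x):
    return A.T @ (A @ x - b)

# Empirical lower bound on mu in ||grad f(x)||^2 >= mu (f(x) - f*).
ratios = []
for _ in range(1000):
    x = x_star + rng.standard_normal(d)
    gap = f(x) - f_star
    if gap > 1e-12:
        ratios.append(np.sum(grad(x) ** 2) / gap)

# The estimate stays bounded away from zero; for least squares it relates to
# the smallest nonzero eigenvalue of A^T A.
print("empirical mu estimate:", min(ratios))
```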
§ QUASAR CONVEX FUNCTIONS
Quasar convex functions <cit.> are another interesting class of possibly nonconvex functions which achieve global minima at all their stationary points. Thus, they fall under the class of invex functions. Here, we propose an η for the class of quasar convex functions which can be used to construct updates in Algorithm <ref>. Below, we prove Lemma <ref>.
Lemma <ref>
For any ν≥ 0, there exists a β∈ [0, 1] such that quasar convex functions are η-invex for η(y, x) = β/(ν(1 - β)) (y - x).
First, we use the result from Lemma 2 of <cit.> to show that for any ν≥ 0, there exists a β∈ [0, 1], such that
β∇ f(x)^⊤ (y - x - β y/(1 - β)) ≤ν (f(y) - f(x))
We can simplify equation (<ref>) to write
f(y) ≥ f(x) + ∇ f(x)^⊤β/(ν (1 - β)) (y - x)
This completes our proof.
We note that β can be computed efficiently using a binary-search algorithm (refer to Algorithm 2 of <cit.>).
§ PROOF OF THEOREMS AND LEMMAS
Before we begin to solve the optimization problem (<ref>), we prove some technical results to better understand the problem. First, we will show that the constraint set is indeed an η_2-invex set. We will do this in two parts.
Let ϕ: 𝒳→ℝ be an η-invex function. Then the sublevel set { x ∈𝒳 | ϕ(x) ≤ c } is an η-invex set for any c ∈ℝ.
Let γ_x, y be the underlying curve connecting x, y ∈𝒳 corresponding to η(y, x) satisfying equation (<ref>).
Using Definition <ref>, we can redefine invex functions on an invex set 𝒜⊆𝒳 as follows:
Let 𝒜⊆𝒳 be an invex set. A real-valued differentiable function ϕ: 𝒜→ℝ is called invex if
ϕ(γ_x, y(t)) ≤ (1 - t) ϕ(x) + t ϕ(y), ∀ x, y ∈𝒜, ∀ t ∈ [0, 1]
Definitions <ref> and <ref> are connected with each other through the relationship between γ_x, y and η(y, x) in equation (<ref>).
Let 𝒮 = { x ∈𝒳 | ϕ(x) ≤ c }. We take x, y ∈𝒮. We then need to show that γ_x, y(t) ∈𝒮, ∀ t ∈ [0, 1]. Using Definition <ref>,
ϕ(γ_x,y(t)) ≤ (1 - t) ϕ(x) + t ϕ(y) ≤ (1 - t) c + t c = c, ∀ t ∈ [0, 1].
It follows that γ_x, y(t) ∈𝒮, ∀ t ∈ [0, 1].
Next, we use Lemma <ref> to show that the constraint set is an η-invex set.
Let g_i: 𝒳→ℝ, ∀ i ∈{ 1, ⋯, m }, be η-invex functions. Then the set 𝒞 = ∩_i=1^m 𝒞_i is η-invex, where 𝒞_i = { x ∈𝒳 | g_i(x) ≤ 0 }.
Let x, y ∈𝒞; then by definition x, y ∈𝒞_i, ∀ i ∈{ 1, ⋯, m }. We know from Lemma <ref> that each 𝒞_i is an η-invex set. Let γ_x, y be the underlying curve connecting x, y. Then it follows that γ_x, y(t) ∈𝒞_i, ∀ i ∈{1, ⋯, m}, ∀ t ∈ [0, 1]. Thus, γ_x, y(t) ∈𝒞, ∀ t ∈ [0, 1].
Theorem <ref>(Convergence of L-smooth functions.)
Let f be an L-smooth function and f^* = min_x ∈𝒳 f(x) ≥ B for some B > - ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L), then
lim_k →∞ ‖∇ f(x_k)‖ = 0 ,
Since f is an L-smooth function, we have
f(x_k+1) ≤ f(x_k) + ⟨η(x_k+1, x_k), ∇ f(x_k)⟩_x_k + L/2 ‖η(x_k+1, x_k)‖^2
Using Algorithm <ref>, we have η(x_k+1, x_k) = - α_k ∇ f(x_k). Thus,
f(x_k+1) ≤ f(x_k) - α⟨∇ f(x_k), ∇ f(x_k)⟩_x_k + L α^2/2 ‖∇ f(x_k)‖^2
Since α∈ (0, 2/L), it follows that
α (1 - Lα/2) ‖∇ f(x_k)‖^2 ≤ f(x_k) - f(x_k+1)
After telescoping sum and simplification, we get
∑_k=0^∞ ‖∇ f(x_k)‖^2 ≤ (f(x_0) - B)/(α (1 - L α/2))
Since the right-hand side of equation (<ref>) is finite, it follows that lim_k →∞ ‖∇ f(x_k)‖ = 0.
Theorem <ref>(Convergence of invex functions.)
Let f: 𝒳→ℝ be an L-smooth η-invex function such that Assumption <ref> holds. Furthermore, let x^* = min_x ∈𝒳 f(x) be such that f(x^*) > -∞ and ‖η(x_0, x^*)‖≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L), then f(x_k) converges to f(x^*) at the rate 𝒪(1/k).
We apply Equation (<ref>) by taking x = x_k, y = x_k+1 and z = x^*.
‖η(x_k+1, x^*)‖^2 ≤ ‖η(x_k, x^*)‖^2 + b ‖η(x_k+1, x_k)‖^2 - c ⟨η(x_k+1, x_k), η(x^*, x_k)⟩_x_k
From Algorithm <ref>, η(x_k+1, x_k) = -α∇ f(x_k).
‖η(x_k+1, x^*)‖^2 ≤ ‖η(x_k, x^*)‖^2 + b α^2 ‖∇ f(x_k)‖^2 + c α⟨∇ f(x_k), η(x^*, x_k)⟩_x_k
Note that since f is η-invex, we have f(x^*) ≥ f(x_k) + ⟨η(x^*, x_k), ∇ f(x_k)⟩_x_k. Thus,
‖η(x_k+1, x^*)‖^2 ≤‖η(x_k, x^*)‖^2 + b α^2 ‖∇ f(x_k)‖^2 + c α (f(x^*) - f(x_k))
c α (f(x_k) - f(x^*)) ≤‖η(x_k, x^*)‖^2 - ‖η(x_k+1, x^*)‖^2 + b α^2 ‖∇ f(x_k)‖^2
Summing over T terms, we get
c α∑_k=0^T (f(x_k) - f(x^*)) ≤‖η(x_0, x^*)‖^2 - ‖η(x_T+1, x^*)‖^2 + b α^2 ∑_k=0^T ‖∇ f(x_k)‖^2
Since α∈ (0, 2/L), we make two observations from equation (<ref>),
∑_k=0^T ‖∇ f(x_k)‖^2 ≤ (f(x_0) - f(x_T+1))/(α (1 - Lα/2))
f(x_k+1) - f(x_k) ≤α (-1 + Lα/2) ‖∇ f(x_k)‖^2 ≤ 0
By using the observations in equation (<ref>), it follows that
c α T ( f(x_T) - f(x^*) ) ≤‖η(x_0, x^*)‖^2 + b α (f(x_0) - f(x^*))/(1 - Lα/2) .
Note that using the L-smoothness condition and noticing that ∇ f(x^*) = 0, we can show that
f(x_0) - f(x^*) ≤L/2 ‖η(x_0, x^*)‖^2
Thus,
f(x_T) - f(x^*) ≤ (1/T) (1/(cα)) (1 + b α L/(2 - Lα)) M^2
This proves our claim.
Theorem <ref>(Convergence of strongly invex functions.)
Let f: 𝒳→ℝ be an L-smooth μ-strongly η-invex function such that Assumption <ref> holds. Furthermore, let x^* = min_x ∈𝒳 f(x) be such that f(x^*) > -∞, ‖η(x_0, x^*)‖≤ M < ∞ and ‖η(y, x)‖_x^2 ≤ R ‖η(x, y)‖_y^2, ∀ x, y ∈𝒳, for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, min(2/(R μ c), c/(2bL))), then
‖η(x_k+1 , x^*)‖^2 ≤ (1 - c α R μ/2)^k+1 M^2 .
We begin our proof by proving an auxiliary lemma.
If f is L-smooth then
f(x^*) - f(x) ≤ -1/2L ‖∇ f(x)‖^2, ∀ x ∈𝒳 .
We can always find a y ∈𝒳 such that η(y, x) = - 1/L∇ f(x). Using equation (<ref>) we have,
f(y) ≤ f(x) - 1/L⟨∇ f(x), ∇ f(x)⟩_x + L/2 ‖1/L∇ f(x)‖^2
= f(x) - 1/2L ‖∇ f(x)‖^2
Clearly, f(x^*) ≤ f(y), thus
f(x^*) - f(x) ≤ -1/2L ‖∇ f(x)‖^2 .
This proves our claim.
Now using Assumption <ref> for x = x_k, y = x_k+1 and z = x^*, we have
‖η(x_k+1, x^*)‖^2 ≤ ‖η(x_k, x^*)‖^2 + b ‖η(x_k+1, x_k)‖^2 - c ⟨η(x_k+1, x_k), η(x^*, x_k)⟩_x_k
Now since η(x_k+1, x_k) = -α∇ f(x_k), we have
‖η(x_k+1, x^*)‖^2 ≤ ‖η(x_k, x^*)‖^2 + b α^2 ‖∇ f(x_k)‖^2 + c α⟨∇ f(x_k), η(x^*, x_k)⟩_x_k
Using the strong invexity of f with y = x^* and x = x_k, we have
‖η(x_k+1, x^*)‖^2 ≤ ‖η(x_k, x^*)‖^2 + b α^2 ‖∇ f(x_k)‖^2 + c α (f(x^*) - f(x_k) - μ/2 ‖η(x^*, x_k)‖^2)
Using the condition that ‖η(x^*, x_k)‖^2 ≤ R ‖η(x_k, x^*)‖^2, we have
‖η(x_k+1, x^*)‖^2 ≤ (1 - c α R μ/2 ) ‖η(x_k, x^*)‖^2 + b α^2 ‖∇ f(x_k)‖^2 + c α (f(x^*) - f(x_k))
Using L-smoothness of f and Lemma <ref>, we have
‖η(x_k+1, x^*)‖^2 ≤ (1 - c α R μ/2 ) ‖η(x_k, x^*)‖^2 - α ( - 2 b α L + c )(f(x_k) - f(x^*))
Taking α≤min(2/(R μ c), c/(2bL)), we get
‖η(x_k+1, x^*)‖^2 ≤ (1 - c α R μ/2 ) ‖η(x_k, x^*)‖^2
We prove our result by unrolling the recurrence in equation (<ref>).
§.§ Convergence of Projected Invex Gradient Descent Method
Next, we will show that once Assumption <ref> is satisfied by the projection operator, results from Theorems <ref> and <ref> extend nicely to Algorithm <ref>.
Let f: 𝒳→ℝ be an L-smooth η_1-invex function such that Assumption <ref> holds. Let 𝒞⊆𝒳 be an η_2-invex set. Furthermore, let x^* = min_x ∈𝒞 f(x) be such that f(x^*) > -∞ and ‖η_1(x_0, x^*)‖≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L) and with a projection operator satisfying Assumption <ref>, then f(x_k) converges to f(x^*) at the rate 𝒪(1/k).
First notice that since x^* ∈𝒞, we have ρ_η_2(x^*) = x^*. We follow the same proof technique as Theorem <ref> until equation (<ref>), which becomes:
c α (f(x_k) - f(x^*)) ≤‖η_1(x_k, x^*)‖^2 - ‖η_1(y_k+1, x^*)‖^2 + b α^2 ‖∇ f(x_k)‖^2
Using Assumption <ref>, we know that ‖η_1(y_k+1, x^*)‖^2 ≥‖η_1(ρ_η_2(y_k+1), x^*)‖^2 = ‖η_1(x_k+1, x^*)‖^2, and thus the remaining steps of the proof follow.
We have a similar result for μ-strongly η-invex functions.
Let f: 𝒳→ℝ be an L-smooth μ-strongly η_1-invex function such that Assumption <ref> holds. Let 𝒞⊆𝒳 be an η_2-invex set. Furthermore, let x^* = min_x ∈𝒞 f(x) be such that f(x^*) > -∞, ‖η_1(x_0, x^*)‖≤ M < ∞ and ‖η_1(y, x)‖_x^2 ≤ R ‖η_1(x, y)‖_y^2, ∀ x, y ∈𝒞, for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, min(2/(R μ c), c/(2bL))) and with a projection operator satisfying Assumption <ref>, then
‖η_1(x_k+1 , x^*)‖^2 ≤(1 - c α R μ/2)^k+1 M^2 .
Again, we notice that since x^* ∈𝒞, we have ρ_η_2(x^*) = x^*. We follow the same proof technique as Theorem <ref> until equation (<ref>), which becomes:
‖η_1(y_k+1, x^*)‖^2 ≤ (1 - c α R μ/2 ) ‖η_1(x_k, x^*)‖^2
Using Assumption <ref>, we note that ‖η_1(x_k+1, x^*)‖^2 = ‖η_1(ρ_η_2(y_k+1), x^*)‖^2 ≤‖η_1(y_k+1, x^*)‖^2, and thus the remaining steps of the proof follow.
Theorem <ref>(Projection contracts for geodesically convex sets with negative curvature.)
Let ρ be the projection operator as defined in <ref> on a closed geodesically convex subset of a simply connected Riemannian manifold with sectional curvature upper bounded by κ≤ 0. Then the projection satisfies Assumption <ref>.
First note that since geodesics are constant-velocity curves, there exists a parameterization such that d(y, x) = ‖γ̇_x, y(0)‖ = ‖η(y, x)‖, where d(y, x) is the length of the geodesic between x and y. Thus, it only remains to show that d(y, x) contracts for geodesically convex sets (sometimes known as totally convex sets), which follows from Lemma 11.2 of <cit.>.
Lemma <ref>
The function h(W) = - log (I - W ∘ W), ∀ W ∈𝒲 is η-invex for η(U, W) = -1/2 ((I - W ∘ W) (log(I - U ∘ U) - log(I - W ∘ W))) ⊘ W.
The invexity of h(W) is already shown by <cit.>. Here, we will verify that our proposed η satisfies equation (<ref>). Note that ∇ h(W) = 2 (I - W ∘ W)^-⊤∘ W. Then ∀ U, W ∈𝒲,
h(U) - h(W) - ⟨η(U, W), ∇ h(W)⟩ = - log (I - U ∘ U) + log (I - W ∘ W) - ⟨ -1/2 ((I - W ∘ W) (log(I - U ∘ U) - log(I - W ∘ W))) ⊘ W, 2 (I - W ∘ W)^-⊤∘ W ⟩
= 0
This validates our claim.
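The identity can also be checked numerically. The sketch below is our own, and it assumes an interpretation in which ∘ and ⊘ are entrywise (Hadamard) operations, ⟨·,·⟩ is the trace inner product, log is the matrix logarithm, and h(W) = -log det(I - W ∘ W), which is consistent with the stated gradient; under these assumptions the residual is zero to machine precision:

```python
import numpy as np

def sym_logm(A):
    # Matrix logarithm of a symmetric positive definite matrix via eigendecomposition.
    evals, evecs = np.linalg.eigh(A)
    return (evecs * np.log(evals)) @ evecs.T

def h(W):
    # Assumed reading: h(W) = -log det(I - W o W) = -tr(log(I - W o W)).
    return -np.linalg.slogdet(np.eye(len(W)) - W * W)[1]

def grad_h(W):
    A = np.eye(len(W)) - W * W
    return 2.0 * np.linalg.inv(A).T * W          # Hadamard product with W

def eta(U, W):
    A = np.eye(len(W)) - W * W
    B = np.eye(len(U)) - U * U
    return -0.5 * (A @ (sym_logm(B) - sym_logm(A))) / W   # Hadamard division by W

rng = np.random.default_rng(0)
d = 4
W = rng.uniform(0.05, 0.2, (d, d)); W = (W + W.T) / 2    # symmetric, nonzero entries
U = rng.uniform(0.05, 0.2, (d, d)); U = (U + U.T) / 2

residual = h(U) - h(W) - np.sum(eta(U, W) * grad_h(W))
print(abs(residual))   # ~1e-14, matching the identity above
```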
Lemma <ref>
Assume that W ≠ U. Then the functions f(t, W, U) = ∑_i=1^n 1/2⟨S_i, W + U⟩ + 1/2 t_i ⟨S_i, W - U⟩, g(t, W, U) = ‖vec(W)‖_1 and h(t, W, U) = ‖vec(U)‖_1 are η_1-invex for
η_1((t̄, W̄, Ū), (t, W, U)) = [ τ∘ (t̄ - t); W̄ - W; Ū - U ] ,
where τ(W̄, Ū, W, U) ∈ℝ^n is such that τ_i = ⟨S_i, W̄ - Ū⟩/⟨S_i, W - U⟩.
The invexity of the objective function in (<ref>) is already shown by <cit.>. It suffices to verify that our proposed η satisfies equation (<ref>) for f(t, W, U), g(t, W, U) and h(t, W, U). It can be trivially verified that our η_1 works for g(t, W, U) and h(t, W, U) due to their convexity. For f(t, W, U),
∂ f/∂ t_i = 1/2⟨S_i, W - U⟩,
∂ f/∂ W = ∑_i=1^n (t_i + 1)/2 S_i,
∂ f/∂ U = ∑_i=1^n (1 - t_i)/2 S_i.
It is easy to verify that
f(t̄, W̄, Ū) - f(t, W, U) - ⟨η_1((t̄, W̄, Ū), (t, W, U)), ∇ f(t, W, U)⟩ = ∑_i=1^n 1/2⟨S_i, W̄ + Ū⟩ + 1/2t̄_i ⟨S_i, W̄ - Ū⟩
- ∑_i=1^n 1/2⟨S_i, W + U⟩ - 1/2 t_i ⟨S_i, W - U⟩ - ∑_i=1^n τ_i (t̄_i - t_i) 1/2⟨S_i, W - U⟩ - ⟨W̄ - W, ∑_i=1^n (t_i + 1)/2 S_i⟩
- ⟨Ū - U, ∑_i=1^n (1 - t_i)/2 S_i⟩
= 0
| http://arxiv.org/abs/2307.07522v1 | 20230709211656 | The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence | ["Hector Zenil", "Jesper Tegnér", "Felipe S. Abrahão", "Alexander Lavin", "Vipin Kumar", "Jeremy G. Frey", "Adrian Weller", "Larisa Soldatova", "Alan R. Bundy", "Nicholas R. Jennings", "Koichi Takahashi", "Lawrence Hunter", "Saso Dzeroski", "Andrew Briggs", "Frederick D. Gregory", "Carla P. Gomes", "Christopher K. I. Williams", "Jon Rowe", "James Evans", "Hiroaki Kitano", "Joshua B. Tenenbaum", "Ross King"] | cs.AI | ["cs.AI", "cs.LG"] |
The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence
Hector Zenil,^1,2,3,4,∗ Jesper Tegnér,^21,28 Felipe S. Abrahão,^4,8,27,
Alexander Lavin,^19,20 Vipin Kumar,^6 Jeremy G. Frey,^7 Adrian Weller,^1,2
Larisa Soldatova,^9 Alan R. Bundy,^5
Nicholas R. Jennings,^10 Koichi Takahashi,^11,12,13
Lawrence Hunter,^14 Saso Dzeroski,^15
Andrew Briggs,^16 Frederick D. Gregory,^17
Carla P. Gomes,^18
Christopher K. I. Williams,^1,5 Jon Rowe,^1,22 James Evans,^23
Hiroaki Kitano,^1,24 Joshua B. Tenenbaum,^25 Ross King^1,2,26
^1The Alan Turing Institute
^2Department of Chemical Engineering and Biotechnology, University of Cambridge
^3Oxford Immune Algorithmics
^4Algorithmic Nature Group, LABORES for the Natural and Digital Sciences
^5School of Informatics, University of Edinburgh
^6Department of Computer, Science and Engineering, University of Minnesota
^7Department of Chemistry, University of Southampton
^8Centre for Logic, Epistemology and the History of Science, University of Campinas, Brazil.
^9Department of Computing, Goldsmiths, University of London
^10Vice-Chancellor's Office, Loughborough University
^11RIKEN Center for Biosystems Dynamics Research,
^12RIKEN Innovation Design Office
^13Keio University
^14Center for Computational Pharmacology, School of Medicine, University of Colorado
^15Department of Knowledge Technologies, Jozef Stefan Institute
^16Department of Materials, University of Oxford
^17DEVCOM ARL Army Research Office
^18Department of Computer Science, Cornell University
^19Pasteur Labs
^20Institute for Simulation Intelligence
^21Living Systems Laboratory, BESE, CEMSE, King Abdullah University of Sciences and Technology
^22School of Computer Science, University of Birmingham
^23Knowledge Lab, University of Chicago
^24The Systems Biology Institute, Okinawa Institute of Science and Technology
^25Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
^26Chalmers Institute of Technology
^27DEXL, National Laboratory for Scientific Computing, Brazil.
^28Department of Medicine, Karolinska Institutet, Stockholm, Sweden.
^∗To whom correspondence should be addressed; E-mail: [email protected]
Recent advances in machine learning and AI, including Generative AI and LLMs, are disrupting technological innovation, product development, and society as a whole. AI's contribution to technology can come from multiple approaches that require access to large training data sets and clear performance evaluation criteria, ranging from pattern recognition and classification to generative models. Yet, AI has contributed less to fundamental science in part because large data sets of high-quality data for scientific practice and model discovery are more difficult to access. Generative AI, in general, and Large Language Models in particular, may represent an opportunity to augment and accelerate the scientific discovery of fundamental deep science with quantitative models. Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery, including self-driven hypothesis generation and open-ended autonomous exploration of the hypothesis space. Integrating AI-driven automation into the practice of science would mitigate current problems, including the replication of findings, systematic production of data, and ultimately democratisation of the scientific process. Realising these possibilities requires a vision for augmented AI coupled with a diversity of AI approaches able to deal with fundamental aspects of causality analysis and model discovery while enabling unbiased search across the space of putative explanations. These advances hold the promise to unleash AI's potential for searching and discovering the fundamental structure of our world beyond what human scientists have been able to achieve. Such a vision would push the boundaries of new fundamental science rather than automatize current workflows and instead open doors for technological innovation to tackle some of the greatest challenges facing humanity today.
§ INTRODUCTION
With the scientific revolution in the seventeenth century, mathematical modeling using equations became the efficient language of choice to understand and predict events in the natural world. Four hundred years later, we have vast amounts of data and increasing access to computational power. Recently, we have witnessed an ever-broader application of machine learning accelerating science in unprecedented ways, with many open questions around quantifying the resulting speed-up of discovery (Fig. <ref>). As a consequence of this increase in scientific production enabled by the digital revolution, it is becoming more and more challenging for individual scientists to keep abreast of their fields and digest the relevant literature.
To advance science and perform end-to-end high-quality scientific investigations, scientists require one or two orders of magnitude more hypothesis-led experiments than are currently humanly possible.
Laboratories are under pressure to perform an increasing number of experiments needed to replicate results. This makes collaborations more challenging, given all the associated overheads, particularly for interdisciplinary research. It may be that science will become too difficult for humans to perform by themselves and will need AI in the driving seat of knowledge discovery to continue the endeavour of human science.
To conceptualise how AI can augment science, we distinguish between the following levels. First, AI and machine learning can operate as extractors of information. This includes text mining of the scientific literature to find relevant papers and, in the best case, extract knowledge and synthesise a body of research from vast sources. A more “modern” use of AI makes existing workflows or procedures more efficient, for example faster and more automatic. This includes augmented computations and simulations in physics, or making an analysis workflow more automatic by constructing a loss function that incorporates several parameter-dependent steps into a single (complex) optimisation problem. The first version of AlphaFold is one example. Yet, at these two levels, AI primarily supports and augments current scientific practice. A third level is where AI could potentially discover something novel by learning not only a useful but a “true” representation of a given process in nature. For example, a useful compressed latent representation could be learned by training an AI system on data. Alternatively, the scientist could impose soft priors such that certain symmetries and invariants exist in the problem, thus forcing the AI system to discover interpretable structures in the physical process. For example, recent work on flow-related problems in physics and chemistry, using physics-inspired neural networks (PINNs), demonstrated the feasibility of AI for science beyond a black-box model <cit.>. This line of work is similar to the “classical” model-based analysis of nature initiated by the scientific revolution. In this review, we focus on the prospect of not only augmenting science by finding useful, interpretable representations, but of finding representations leading to new scientific discoveries by involving AI in what we refer to as a closed-loop science-AI iterative cycle.
Here we suggest a next level of applying AI to science and leading science by developing such closed-loop science: AI systems integrated with laboratory automation to execute cycles of planned experiments <cit.>. Such systems fully automate simple forms of scientific research
and can facilitate collaboration across disciplines and between partners - humans or AI systems. AI can speed up the scientific discovery process and has the potential to advance AI itself in areas relevant to fundamental science, such as causal discovery, automation of experimental science, and the expansion of scientific knowledge. One current bottleneck is knowledge representation which, by nature, is biased toward the limited understanding of the human (scientific) mind. Here we can either use soft priors exploiting deep structures in the scientific area of interest or, better, actually develop AI systems that will discover effective interpretable representations of the scientific area in question. This is another inspiring direction in which science may move, and scientists will have to decide (if they have the option) to let machines build their internal languages and representations to do their science. This will not happen all of a sudden,it is most likely happening already, even if such language is in some way rudimentary. Here we refer to them as black boxes in areas such as deep learning, where scientists often struggle to make sense of it and increasingly use AI tools to guide and decipher AI results.
Applications of AI have so far been successful, but their success has been largely limited to industrial applications, problems of classification, and data science. A recipe for success has been to pick a problem that has a well-defined metric of performance. The problems should preferentially have a history of previously increasingly successful attempts to solve them. Examples include board games (Chess, Go) and bioinformatics workflows (transcription factor binding, protein folding, antibiotics). Yet, however impressive, the success of these examples hinges upon clever search algorithms and efficient implementation of end-to-end workflows. In the end, no new fundamental laws of nature are being discovered. Furthermore, as a reflection of the lack of fundamental laws, the success of these AI systems remains challenging to disentangle. To advance beyond this state of affairs, we argue that we need AI systems that can discover new representations and generative laws of the scientific problem at hand. The notion of an iterative closed-loop discovery scheme constitutes one putative path forward. Yet, only a limited number of successful examples have closed the full loop of scientific discovery <cit.>.
Human scientists today need to think about how to create AI systems that can partner with scientists and take on responsibilities over the complete arc of scientific discovery: from observation and intervention, to hypothesis generation from a domain knowledge base, to conducting experiments, evaluating results and rejecting or validating the assumptions, to integrating the findings into the current knowledge base and filing them with the relevant existing literature <cit.>.
Thus, the question is how to make substantial and meaningful advances in AI to enable us to go even further in accelerating science, hitherto driven exclusively by humans,
to not only rapidly expand human knowledge and improve the impact of scientific practice, but also to increase its reliability, availability, reproducibility, verifiability, transparency, and trustworthiness as the processes involved in scientific discovery become more automated.
In Fig. <ref>, we propose some quantitative measures that will not apply to all cases but rather to instances where a combination of AI and human approaches can further accelerate science. Nevertheless, the expectation is that AI will provide a real gain on most fronts and domains.
§ AI IN SCIENTIFIC DISCOVERY
§.§ Challenges
Humans are traditionally prone to very well-known cognitive fallacies or biases, to which science is hardly a stranger <cit.>. One common and increasingly discussed issue is reproducibility across all domains <cit.>.
Humans are ill-equipped to deal with the repetitive tasks that reproducibility entails, and there are all sorts of inducements for consciously or unconsciously making dubious moves, particularly when it comes to the game of funding and high-impact publishing <cit.>.
Confirmation bias, fake rigour, prior assumptions/hypotheses omission, ad hoc methodologies, cherry-picking experimentation, selective data, hype and overstatement of results, network community effects, “rich-get-richer” phenomena widening the inequality gap in science, and poor reporting are examples <cit.>.
We used to think that science was entirely objective, but history has taught us that it is also driven by community choices and groups, where it becomes clear that political and social preferences and underlying cognitive biases can interfere with scientific progress <cit.>.
All these problems are leading to a crisis impacting scientific networks, putting collaborative networks at a disadvantage and favouring competitive ones, and often compromising the very principles of scientific practice.
Closed-loop-AI-led science has the potential to mitigate all these problems because it can bootstrap itself with the right mechanisms to detach itself from human-led science and its biases, even if human scientists initially transfer them. Furthermore, this leaves scientists with the task of initially guiding the AI as to the type of meaningful research that should be conducted, but then letting it explore regions of the scientific space that may never be reachable by human scientists, while keeping the option to retain what human scientists believe is of greatest interest and letting the closed-loop-AI system continue with less human-relevant content in its search for novelty and for what is potentially interesting to pursue. That is, to have AI bootstrap itself out of and above the loop of AI-led science without human guidance.
One challenge in this direction is that automation can easily fall into the over-fitting trap without human input, so mechanisms to avoid this have to be in place. However, it has been found that simplicity and randomness are powerful mechanisms for avoiding local minima and maxima when iterating over search algorithms <cit.>.
A striking feature of supervised machine learning is its propensity for over-parametrisation <cit.>. Deep networks contain millions of parameters, often exceeding the number of data points by orders of magnitude, so the model often starts to over-fit right from the beginning <cit.>.
Broadly speaking, networks are designed to interpolate the data, learning/constructing an associated manifold by driving the training error to zero.
Deep neural networks in particular are widely regarded as black-box approaches, ill-equipped to offer explanations of the models they produce for classification, even though they often classify with superhuman ability <cit.>. One strategy that has enabled researchers to make progress in understanding the workings and limitations of deep learning is the use of what have been called `generative models' <cit.>. This involves training adversarial algorithms represented by neural networks that systematically tamper with data while asking them to generate novel examples <cit.>. By observing the resulting examples and how the classifier fails, researchers can understand the model's limitations and improve the classifier.
However, current approaches in science (see Fig. <ref>), including most machine and deep learning methods, rely heavily on traditional statistics and information theory. Consequently, such models are insufficient to capture certain fundamental properties of data and the world related to recursive and computable phenomena, and they are ill-equipped to deal with high-level functions such as inference, abstraction, modelling, and causation, being fragile and easily deceived <cit.>.
Most of these algorithms fail to scale to domains outside the training set. Such algorithms lack mechanisms for abstraction and logical inference, and they fail at generalisation <cit.>. For example, in the case of driverless cars, one does not want a car to crash millions of times to learn how not to crash, so current techniques such as adversarial networks offer a way to produce examples in which not driving appropriately leads to an event that is labelled a crash <cit.>. However, driving and crashing are events where cause and effect need to be learned, which current approaches cannot do.
When AI leads science so that laboratory experiments are automated to execute cycles of planned experiments, AI frees humans from repetitive, tedious, and error-prone tasks and can deal with vast amounts of data that no human could handle <cit.>.
These human scientists, in turn, can feed the AI systems back with new insights and novel theories.
Thus, such an emerging feedback loop of AI-human collaboration will synergistically boost scientific discovery toward previously unattainable results, rigour, and dissemination.
Overcoming the above limitations and challenges will require fostering new theories and methods, as well as human and technological resources in AI, data science, and interdisciplinarity, so that scientists become capable of dealing with this AI-human interplay at both an infrastructural and a metastructural level. One of these methods may involve AI that guides other AI and translates results to humans, and this intermediate AI may not be of the same type. For example, causal and model-driven AI may be required to disentangle other AI systems to which human scientists cannot relate if they lack a mechanistic explicative component, whether there is one or not. This may lead to some sort of meta-AI that may not require Artificial General Intelligence but would require a different set of skills than purely statistical machine learning approaches.
§.§ Historical Context
Applications of AI in science are quite broad and cover many fields. The idea of automating reasoning goes back to Leibniz, where the modern incarnation can be traced back to efforts to build computing machines in Europe. In particular, the heroic efforts of Alan Turing's work at Bletchley to automate the problem of code breaking and his ideas of an imitation game <cit.>. It can also be traced back to Joshua Lederberg (Nobel laureate) <cit.>, Ed Feigenbaum (Turing award winner) <cit.>, Karl Djerassi (co-inventor of the contraceptive pill) <cit.>, and colleagues at Stanford in the 1960s, who worked on automating mass-spectroscopy for the Viking Mars lander <cit.>. AI has a long tradition of taking scientific discovery as an area of study. In the 1970s the Nobel Prize laureate and Turing prize winner Herbert Simon developed Bacon, an AI system for science <cit.>. Since this pioneering work, much has been achieved, and there are now many convincing examples of AI systems making clear contributions to scientific knowledge (e.g. the very recent <cit.>).
Eurisko <cit.> and Cyrano <cit.> are two examples of other attempts to perform automated discovery from basic principles in a variety of technical fields, in particular in mathematics, chemistry, and a few other domains.
These are systems that can be viewed as heuristic search systems, with the additional advantage that they can reconfigure their own search space.
Some commercial products are specifically designed to be applied to knowledge and scientific discovery. For example, DataRobot <cit.> promotes Eureqa <cit.>, having acquired Nutonian <cit.>. Eureqa was designed to create models from time series data and is based on creating random equations from mathematical building blocks through evolutionary search to explain the data <cit.>. It has been called a “Virtual Data Scientist” <cit.>.
A team of researchers from Google DeepMind launched a machine learning project called AlphaFold in 2018 to participate in the Critical Assessment of Techniques for Protein Structure Prediction, or CASP <cit.>. CASP is a biennial competition that assesses the state of the art in three-dimensional protein structure modelling. In its first version, AlphaFold was particularly successful at predicting the most accurate structure for targets rated as the most difficult by the competition's organisers, but it was not until the second program, AlphaFold 2, in 2020, that the team achieved a level of accuracy much higher than any other group before and scored above 90 for around two-thirds of the proteins in CASP's global distance test (GDT), a test that measures the degree to which a structure predicted by a computational program is similar to the structure validated experimentally, with 100 being a complete match. AlphaFold relied on a lot of human knowledge already generated in the years before, especially in areas such as molecular dynamics. The program was designed to include the expert domain knowledge in the form of the training data. How much molecular biological knowledge was introduced is still not known, but while it required a team that drew heavily on domain expertise to tune it, most of the predictive power came from the AlphaFold 2 tool itself <cit.>.
A precursor of AI in physics is the project GALILEO (Guided Analysis of Logical Inconsistencies Leads to Evolved Ontologies) <cit.>. The GALILEO project tried to model the repair of faulty theories of Physics whose predictions were contradicted by empirical evidence.
One area of successful application of machine learning from climate data, for example, was the discovery of climate dipoles through machine learning <cit.>.
Physics-driven AI has the potential to impact how we approach science, on our current predominantly data-reliant—as opposed to the model-centred—scientific method, by placing the mechanistic model at the centre of modelling itself.
Paradoxically, current physics-led AI and machine learning research have distracted researchers from more fundamental research, even though the discussion has started, and researchers will hopefully eventually get around to the first principles they claim to care about.
On the knowledge side, there are many applications of knowledge extraction of interest, such as for drug re-purposing by pharmaceutical companies <cit.>.
On task-oriented problem solving, we can find an increasing number of workflow systems that understand scientific tasks and carry them out.
There have been some success stories demonstrating that by collecting and integrating available molecular data into computational models, accurate predictions of interventions in the system can actually be made. An example is the Robot Scientist program <cit.> that was able to autonomously execute high-throughput hypothesis-led
research investigating yeast-based functional genomics, with the next-generation scientific program later using the same principles for drug screening. In another example, a computational model of Halobacterium salinarum NRC-1 was first constructed through massive data integration and machine learning-driven inference of the regulatory network <cit.>.
Another example was the ambitious whole-cell computational model of the life cycle of the human pathogen Mycoplasma genitalium <cit.>. The model accounted for all annotated gene functions and was validated against a broad range of data. Now, the model encompasses approximately 500 genes and their interactions.
In the area of neural networks, there has been, for example, an effort to make them `understand' cause and effect by algorithmic training. While more research is needed, fundamental research is aware that alternative approaches are required to capture the complexities of hypothesis and model generation or selection <cit.>.
In this sense, the research in this type of higher-order AI, such as deconvolution from searching for generative processes from the entire algorithmic space <cit.>, will also be crucial to advance current research.
To present a summary of the current state of AI applications to each scientific domain, Table <ref>
displays an organisation of scientific domains[Note that:
Complexity includes systems and intelligence as defined by the Santa Fe Institute;
Manufacturing notably includes ML-based design of sensors and chips;
and Earth systems includes oceans, land, air, and near space (see https://earthdna.orgearthdna.org).]
and the applicable AI algorithms' classes and approaches. Scientific domains are approximately ordered from smallest physical scales to largest.
Overlapping areas are not reflected in this high-level table (e.g., semi-supervised RL methods, or the representation of neural networks (NNs), which conflates various deep learning types like LSTMs and Transformers), not to mention complex, context-dependent multidisciplinarity. Table <ref>'s content reflects the consensus and understanding of a subset of this paper's authors. While supervised statistical methods have contributed to almost every area of knowledge, these contributions are of very different types, mostly ranging from identification to classification. Some areas are more difficult than others across all approaches, such as mathematics, philosophy and epistemology. In general, statistical approaches rank poorly at finding first principles or adding new mechanistic knowledge to scientific domains.
Generative AI (GenAI) and Large Language Models (LLMs) promise to advance science by assimilating and synthesising the vast corpus of human knowledge embedded in the scientific literature. Through this synthesis, LLMs can interconnect disparate ideas, construct unique hypotheses, and venture into uncharted areas of scientific knowledge. However, this exploration is bound by the data they have been trained on, creating a theoretical bubble that could lead to model collapse through excessive training on the same data.
To burst this bubble, it is essential to supplement LLMs with other methods and multiple sources. For instance, active learning could serve to maximise information gain, challenging the model with fresh data and different viewpoints cross-pollinating from different scientific domains. Hybrid models blending AI with symbolic reasoning could tackle scientific problems requiring high-level abstraction, thus broadening LLMs' capabilities. This approach would therefore fall into the neuro-symbolic category for purposes of scientific discovery.
Indeed, an area where LLMs could be especially impactful is in scientific model discovery. By analysing patterns and correlations in vast datasets, LLMs could help identify mathematical relations and possibly reveal new potential (physical, or computational) laws just as it learns language grammar from natural language statistics. This could expedite the scientific process, enabling more rapid breakthroughs.
Furthermore, LLMs could make a significant contribution to causal analysis. By processing extensive scientific literature, they could draw links between causes and effects that might be overlooked by human researchers, proposing novel causal hypotheses for testing. Pairing this with counterfactual reasoning, where the AI predicts the outcome of modifying specific variables, could deepen our understanding of cause-effect relationships, and help simulate alternative model outcomes.
However, it is also important to acknowledge the limitations of current LLMs, and statistical machine learning (ML) in general, which currently lack the depth needed for breakthroughs to happen and require quality and diversity of data, allowing an LLM `temperature' (favouring less likely statistical patterns) to explore further along the potential long tails of the distribution of scientific results, where potential breakthrough science lies, away from incremental average science. A collaborative approach, in which human scientists guide the AI, can help harness the strengths of both worlds while mitigating the current weaknesses of LLMs and statistical ML, ensuring a more effective utilisation of this technology today.
§ ASPECTS OF AI-LED CLOSED-LOOP SCIENCE
The ability to predict and design (inverse design), while exceptionally useful, will not necessarily lead to new fundamental discoveries (new theories) unless AI and human goals in scientific discovery are aligned and synergistically intertwined to impose similar objectives, quantified and introduced into, for example, a loss function.
This is because scientific discovery cycles, such as those illustrated in Figs. <ref>, are not isolated parts but belong within a greater cycle of scientific inquiry spanning an entire topic or field comprised of a community of scientists.
It is the larger learning cycle that fuels the questions in the smaller learning cycles.
The larger cycle is fuelled by human curiosity and human challenges and has a strong historical and social component, but the shorter cycles, being more well-defined, they are more prone to be automated.
Nevertheless, the larger cycles may be needed to kick-start the discovery process of the smaller learning cycles.
In this sense, one option to integrate human scientists and AI-driven science is for humans to build the context of the greater cycle (for example, fulfilling the role of the `Final Theory' and `Background knowledge' steps at the leftmost smaller cycle in Fig. <ref>), feeding the AI with new insights, and to leave the AI to independently deal with the smaller cycles (such as the rightmost smaller cycle in Fig. <ref>), guided by the greater ones. LLMs could, for example, be very useful as a technical interface, translating human high-level larger-cycle aspirations and their respective "divide-and-conquer" breakdown into smaller cycles. If one aims at the highest degree of automation of the discovery cycle, more sophisticated forms of AI should include automation of the validation, dissemination, refereeing, and other aspects of human science and its practice.
To tackle such challenges, we propose in the following sections the steps and technology suggested to conduct an entire cycle of AI-led scientific discovery <cit.>, as in Fig. <ref>.
§.§ Hypothesis Generation
One of the central components of scientific practice is the `hypothetico-deductive' method <cit.>.
An additional set of epistemological tools comprises induction <cit.>, abduction <cit.> and counterfactual reasoning <cit.>.
To automate those knowledge processes, deduction can be combined with simulation to infer the experimental consequences of hypotheses. Matching simulation with experimental output will be a reliable basis for an AI to accept or reject a hypothesis.
Such experimental output is tested with multiple interventions in the automated series of perturbation analyses <cit.>.
However, while one traditional approach to automate induction may follow, for example, new methods for clustering and regression, automating abduction and the creation of counterfactual scenarios may pose an even more challenging problem.
For this purpose, it would require the AI algorithm to explore irreducibly novel possibilities that are emergent to the current state of knowledge in which the AI is situated <cit.>.
In this sense, neural networks are unlikely to be useful in the process of hypothesis generation, nor is any statistical machine learning. This is because they need training, and not only is training over hypothesis generation exactly the problem to be solved in the first place, but training over previous hypotheses, dividing them into rejected or valid, may undermine the freedom and the unbiased exploration that is desired of regions of interest in the hypothesis space.
For hypothesis generation, what is needed is a bottom-up approach (e.g., a model-driven AI) or a hybrid one able to conduct cycles of systematic hypothesizing,
from either partial or exhaustive enumerations (even if redundant though universal) <cit.>.
A bottom-up approach that deals with this open-endedness concerning the role of novelty is the field of algorithmic information dynamics (AID) <cit.>, a framework for causal discovery and causal analysis based on algorithmic information theory and perturbation analysis.
Open-ended innovation in hypothesis generation and how to create and search over unbounded hypothesis spaces in less well-specified domains is an open challenge in itself, where research on the topics of this document can help make progress. These spaces and the methods exploring them usually have to deal with problems of intractability or uncomputability <cit.>.
Each method has its advantages and drawbacks and lies at different extremes of the causal inference spectrum.
Guiding heuristics based on first principles are needed to explore the hypothesis space <cit.>. Dovetailing partial results is necessary to avoid infinitely long cycles running the search. Here, aspects of computability and tractability will be in play at every step, and we will need measures to deal with them unless less powerful techniques are implemented (e.g. propositional logic or domain-restricted spaces such as a set of genetic circuits).
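As a toy illustration of dovetailing (a minimal sketch of ours, not a system discussed here), an unbounded stream of candidate hypotheses can be interleaved with growing evaluation budgets so that no single expensive or non-terminating candidate stalls the search:

```python
from itertools import count

def dovetail(candidates, evaluate, max_rounds=10):
    # Interleave an unbounded stream of candidate hypotheses with growing
    # evaluation budgets, so no single candidate can stall the search forever.
    active, results = [], {}
    gen = candidates()
    for budget in range(1, max_rounds + 1):
        active.append(next(gen))                 # admit one new candidate per round
        for h in active:
            if h in results:
                continue                         # already evaluated successfully
            outcome = evaluate(h, budget)        # None means "ran out of budget, retry later"
            if outcome is not None:
                results[h] = outcome
    return results

# Toy example: candidate h "needs" h+1 steps; evaluation succeeds once budget > h.
res = dovetail(lambda: count(0), lambda h, budget: ("tested" if budget > h else None))
print(res)   # candidates admitted early get evaluated as budgets grow
```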
At one extreme are the statistical tools that confound correlation and causation but can help scientists make a call and guide their experiments, viz. graphical models that combine probability with symbolic logic, reasoning, and interventional calculus.
The statistical approach often leads to less computationally expensive methods and, although in general, they may present distortions or biases toward some selected features <cit.>, it returns sound results in cases one knows a priori that the underlying generative processes are purely stochastic, stationary and ergodic.
At the other extreme is AID, which searches for sets of agnostic generative models compatible with observations, and exploits these models as testable underlying mechanisms and causal first principles <cit.>, regardless of those being stochastic, computable, or mixed processes.
In addition to offering less constrained methods, for example deconvolution algorithms <cit.> and optimisation in non-differential spaces <cit.>, this approach offers results in direction to tackling the abduction and counterfactual problem, as for example shown in new methods for open-ended evolutionary computation <cit.>, and synergistic distributed computation <cit.>.
However, bottom-up approaches like AID may not be humanly understandable, or when they are, scrutinising them may require great computational effort, as is the case in other areas such as automatic theorem proving (e.g., the four-colour theorem).
LLMs may here again provide an advantage to interface between these model spaces as natural language processors integrating otherwise disparate systems translating among different domain databases and knowledge bases.
§.§ Experimentation and Sensing
One key task is to create AI systems for scientific discovery able to conduct experimentation and hypothesis testing independent of human instruction or with little to no human instruction.
This is because what is desired to take scientific discovery to the next level is not the programming of algorithms able to conduct experiments, but open-ended algorithms able to set their own goals and experiments guided by previously conducted experiments (their own or from the human literature).
To this end, with machine embodiment enabling the system to perform as a physical scientist, instrument-driven approaches render robotics key to making progress in physical experimentation, so that more and more of the physical execution of experiments will be done using robotics.
This will increase the productivity of science, as robots work cheaper, faster, more accurately, and for longer than humans.
Furthermore, if not embodied, the scientific experiment may collapse into a problem of data analysis and inference without the hypothesis, model, and theory testing that requires positive or negative feedback from the empirical side. Thus only a tiny part of the scientific discovery cycle would be tackled.
Neural networks can help physical machines to embed themselves in a physical world for representation purposes, as neural networks have proven useful in representing all sorts of images. Still, innovation in areas of robotics and mechatronics will be required to accommodate the kind of depth and range of scientific experiments, in particular when it comes to accuracy and precision—which should not present a problem—while also helping with the current, very human problem of reproducibility <cit.>.
This is expected to have a significant impact on the reproducibility of science, as automating science requires semantic precision.
LLMs will also interface between human and robot instructions making it easier to create tools to automate experiments in natural language effectively instantiating a robot assistant able to process human instructions for scientific experimentation.
§.§ Rejection, Validation and Model Selection
Model selection and reduction have been a recurring theme across several sub-fields of areas, such as computational biology and neuroscience, with special reference to dynamical forward models. The idea is that if a complex nonlinear model can be reduced in complexity (fewer state variables and parameters), the investigator can more readily discern which parameters and state variables are more crucial to the model's behaviour, facilitating model analysis and understanding. One example is the reduction of the four-dimensional Hodgkin–Huxley model to a two-dimensional FitzHugh–Nagumo (FHN) system <cit.>. The core idea was to perform a time-scale separation into fast and slow subsystems. This has been used in a number of model reduction studies, including the cell cycle.
Techniques for dimension reduction, feature, and model selection will be helpful at this stage, from statistical approaches such as principal component analysis to more sophisticated ones such as minimal information loss techniques.
Another core idea for model selection is that each hypothesis formed will have a predicted probability of being correct,
possibly along with the associated cost of the respective experiment. This may be the monetary cost of executing the experiment, plus a temporal discount rate to value finding results more quickly. It has been empirically shown that using a Bayesian approach to experiment selection is sound and outperforms experiments chosen manually <cit.>.
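A minimal sketch of such a selection rule (our illustrative scoring, not the cited system) ranks candidate experiments by the predicted probability that the tested hypothesis is correct, weighted by its value, discounted by the time to a result, and net of the monetary cost:

```python
from dataclasses import dataclass

@dataclass
class CandidateExperiment:
    name: str
    p_correct: float         # predicted probability the tested hypothesis is correct
    value_if_correct: float  # scientific value assigned if the hypothesis holds
    cost: float              # monetary cost of executing the experiment
    duration: float          # time to result, in arbitrary units

def expected_utility(e: CandidateExperiment, discount_rate: float = 0.05) -> float:
    # Temporal discounting favours experiments that return results sooner.
    time_discount = 1.0 / (1.0 + discount_rate) ** e.duration
    return e.p_correct * e.value_if_correct * time_discount - e.cost

def select_next(experiments, discount_rate: float = 0.05):
    return max(experiments, key=lambda e: expected_utility(e, discount_rate))

queue = [
    CandidateExperiment("knockout screen", 0.30, 100.0, 12.0, 4.0),
    CandidateExperiment("targeted assay", 0.65, 40.0, 5.0, 1.0),
]
print(select_next(queue).name)
```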
Current AI has shown the ability to yield valuable insights from noisy or incomplete data, optimise procedure design, and learn notions of structure amongst heterogeneous observations. Neural networks have shown utility in isolating proper signals from noisy datasets spanning disciplines from physics to biology; such capabilities could be critical to establishing scientific conclusions as we reach the practical limit of experimental data quality <cit.>. Approaches from optimisation have demonstrated an ability to reduce the expense of experimental campaigns by optimising sampling patterns using, for instance, bandit-style methods to more rapidly design electric batteries or iteratively specify experimental conditions in biology. Structure learning techniques from the graphical model literature could find use in identifying statistically meaningful relationships from large amounts of unannotated data <cit.>.
§.§ Knowledge Representation and Natural Language Processing
Ingested knowledge may no longer need to be machine-readable, whether rule-based or probabilistic, given that LLMs can interface between them; but their possible caveats, such as low-level hidden misalignments, are difficult to unveil, which makes traceability and liability difficult.
An AI-led scientific discovery approach will require at least access to the space of interest needed for the system to be able to validate or reject a hypothesis based on contradiction or confirmation of previous knowledge which may be difficult in a black box like an LLM. So, the LLM will need to be self-explanatory with the caveat that the output explanation may not fit the internal statistical derivation of what the LLM ends up producing. An independent system and a more explainable mechanistic process may need to verify the output.
Without LLMs, this task would have required massive databases and curation efforts for domains that are not already significantly represented in a computable fashion.
Although all sorts of languages can be used to represent knowledge, some domains, such as simplified genetic circuits, will be aptly represented by propositional-logic rules, which help avoid these potential misalignments from LLMs or from statistical ML in general.
Other domains will require more sophisticated representations, either to encompass the greater complexity of an extended domain or to deal with the greater sophistication of, e.g., a domain such as biomedicine, where system-expert rules with ifs, dos, and whiles are required, hence the full power of first-order logic and Turing-completeness.
For example, knowledge representation systems/ontologies are well developed in biology: The Gene Ontology (GO), nascent Causal Activity Models with the GO, Human Phenotype Ontology, Chemical Entities of Biological Interest, Ontology of Biomedical Investigation, among others <cit.>. So are integration efforts built on these ontologies, e.g., Monarch <cit.>.
The JST MIRAI `Robotic Biology' project can also provide technologies to help adoption, such as LabCode, a common formal language for experimental protocols, LabLive, a laboratory information IoT platform, and real-time parallel workflow scheduling software that can decompose processes in a given protocol and assign each to different robots/equipment so these are executed considering dependencies and concurrency between them.
Another example is statistical relational learning (SRL), which combines relational learning and probability theory and is an area of ML research (e.g. <cit.>),
enabling the representation of beliefs about relational data using probabilistic models.
Relational Learning (RL) is a general representation language based on first-order predicate logic <cit.>.
Such probabilistic logic models enable the specification of graphical models (Bayesian networks, Markov networks, etc.) over large relational domains.
One of the fundamental design goals of the representation formalisms developed in SRL is to abstract away from concrete entities and to represent instead general principles that are intended to be universally applicable. A key advantage of RL is that it can easily incorporate background scientific knowledge, and learn about structured objects such as scientific models particularly appropriate for utilising background bioinformatic data <cit.>.
These approaches can be further enhanced or complemented by the do-calculus <cit.> or algorithmic information dynamics <cit.>.
Deep neural networks are also good at capturing the apparent granularity and complexity of natural phenomena in a computable form (in weighted vectors of numerical matrices). The success of neural networks implies that once one captures an object in an optimal way, classification is trivial, as it was for deep learning in the protein-folding challenge <cit.>, albeit with limitations.
Assuming that an appropriate formalism to record observation could be found for any domain, a modeller may be faced with a severe feature selection problem, which translates into a question of the identity of the relevant state variables of the systems of interest, e.g., drug docking dynamics for drug discovery or cytokines for cell dynamics.
On the one hand, all the entities of the system that are measured could define the set of state variables to be represented, e.g. drugs or proteins, augmented with the set of rules to which the entities may be subjected, such as thermodynamics or collisions.
However, this type of representation could quickly become very complex <cit.>.
On the other hand, a certain subset of combinations of measured state variables may be a useful representation of the governing dynamics driving a possible system, and this is a question that needs to be asked and resolved for scientific domains on a case-by-case basis.
Such a feature selection problem in computably representable objects is often found in analyses that assume a purely stochastic nature of the system's generative processes, even though the system also comprises deterministic, mechanistic, or computable subprocesses <cit.>.
In addition, even in cases where the whole algorithmic space of possibilities is covered, analyzing the information content carried by a network depends strongly on the multidimensional space into which it is embedded <cit.>, where distortions may be exponential for multidimensionality-agnostic encodings.
Thus, developing expressive and efficient frameworks to computationally represent and capture a wide range of scientific knowledge about processes, models, observations and hypotheses is key.
Capturing scientific knowledge will push the limits of the state of the art.
In the opposite direction of knowledge representation by machines, the AI for scientific discovery may need to communicate in the form of a publication or other scientific means to explain the innovation and methods behind the discovery to humans and to articulate its significance and impact.
A choice that has to be made, on a case-by-case basis, is whether the AI may conduct the experiments without much human understanding, that is, whether it is acceptable not to have a sophisticated translation of both the generated hypotheses and the process by which a conclusion was reached.
In cases where there is a requirement for human understanding, and even for the most general case, at least partial interpretation by human scientists may be required.
Thus, knowledge representation and natural language processing techniques will need to be jointly developed to both:
feed the system with the current knowledge relevant to the hypothesis space;
and guide the search (in cases of human-machine interaction) or be able to follow the inference process and interpret the results <cit.>.
These requirements will force us to make progress on humanly readable and interpretable machine-human translation.
§.§ Integration, Interpretation and Interfacing
One of the most challenging aspects of scientific discovery is integrating a new piece of information with the corpus of existing human knowledge.
Analysing the data will require moving to the larger learning loop where there is a broader view of the results for possible (re-)interpretation.
This is because while the specific objective for the target hypothesis may have been rejected, one of the main serendipity checkpoints is the reinterpretation of results in a broader context.
Machine learning systems have proven incredibly useful for automated knowledge base construction. They have recently contributed to the creation of multiple large databases describing, for instance, genome-wide association studies and drug-disease interactions directly from the published literature <cit.>. This ability to create massive knowledge bases that rapidly and effectively contextualise new findings could substantially accelerate scientific discovery by ensuring that seemingly disparate dots are more rapidly connected.
However, exploring and understanding user context requires automating certain social, political, and economic aspects of interconnected knowledge that are intrinsic to science <cit.>.
The AI systems' interactions with scientists must be guided by a knowledge-rich user model that enables the AI systems to act as colleagues, as LLMs may now allow.
This constitutes an inextricable loop in which human scientists and AI-scientists are parts of a whole system, which the AI algorithm should try to optimise.
A striking example of such an optimal interplay has been the evolution of machine-human chess collaboration.
After the defeat of Garry Kasparov, it became standard to have human chess players practice with computers, and for champions it became impossible to reach the demanded level of play without intensive computer training <cit.>. To this day, the strongest freestyle chess teams have been those able to strike the right balance between human and computer training and playing.
Again, neural networks and statistical machine learning will not help in this process, at least not on their own or in their traditional architectures.
What is most likely needed here is first an inference engine able to extract knowledge readable by humans as well, especially under human-machine schemes.
Classical logical inference engines are key, but so are hybrid approaches combining statistical learning and symbolic computation.
Techniques such as feature selection and data dimension reduction will be helpful.
Secondly, an AI algorithm that can simulate the network topological properties of scientific production <cit.> and perform the first five steps of the full cycle of AI-led scientific discovery, while taking into account the relational structures and biases that emerge when the AI-human relationship is analysed as a single system.
The application of AI to science will confer multiple advantages, and eliminate some of the disadvantages of having a human in the loop, such as biases and lack of reproducibility. Yet, if humans rely on automated scientific discovery, verifiability and transparency are crucial: the coupled AI-human system must be formally verifiable to ensure that it matches the goals and that the results match the process.
In this manner, the AI algorithm should be designed to continuously reiterate its data gathering from the outputs and behaviours of the whole system the AI is part of.
The same holds for the human scientist, who needs to be able to perform, evaluate, and produce analytical reasoning while participating in this coupled computational-social system.
§.§ Closing the Loop
Finally, connecting all the steps will require a meta-algorithm that systematically manages each cycle and even decides when to break or restart the cycles (see Fig. <ref>), for instance when human intervention takes place.
The whole cycle should be open to human intervention, and the AI algorithm should both reiterate the new insights and data given by humans and counter any bias that these may introduce.
Therefore, the “grand challenge” that we propose ranges over automating not only laboratory practices and theory making, but also writing a paper, refereeing, and disseminating achievements.
Remote web control and monitoring of full-cycle scientific discovery may require technologies such as TypeScript, React, GraphQL, Jest, and Redux, as used, for example, to create web-based beamline control systems.
Techniques such as optimisation and anomaly detection can be used to find possible gaps and even glitches (found or promoted). These gaps can be exploited to reinterpret data, explore other regions of the hypothesis space and kick-start the process of hypothesis generation again, thus closing and restarting the discovery cycle.
§ CONCLUSION: THE FUTURE OF AI IN SCIENTIFIC DISCOVERY
Academic fields are disrupted to such an extent that future progress has become almost unthinkable without the involvement of some machine learning.
We have explored some of the challenges and opportunities in utilising and exploiting AI. We argue that a closed-loop formulation not only augments and accelerates scientific discovery but also leads science in new directions, thus disrupting the future of human science. Such AI-led, closed-loop experimentation, may also mitigate current challenges, such as the production and replication of data.
The application of AI in scientific discovery presents us with very different challenges compared to the application of AI to games such as chess, shogi, or Go. However, recent developments surprisingly suggest that some scientific challenges may not be that different from these games <cit.>.
To make contributions to fundamental science, of the kind that goes into textbooks on the first principles of major fields, do we require AI equipped with sufficient intelligence and autonomy to render it capable of sensing and making observations so as to ask novel and relevant questions? Do we want AI in scientific discovery to remain guard-railed, unsupervised, or semi-supervised? And guard-railed by humans, or by other tertiary systems that we may trust? These are questions that scientists and policymakers will have to face and answer soon, if not now.
We envision future AI systems to have the potential to transform scientific discovery and enable an unprecedented expansion of knowledge, while leaving some open questions unanswered: to what extent do we want to exert full control over what is explored or experimented on? To what extent do we want to be able to understand the outcome, and at what moment are we willing to let human scientific understanding be left behind, as it already is in some ways, with no single scientist able to know or catch up with their entire field? Certainly, for some questions limits are desirable; for others, the question is how much humans are willing to sacrifice, such as understanding, in exchange for possibly solving problems such as cancer or climate change.
Lavin2021si
A. Lavin, et al.,
abs/2112.03235 (2021).
King2004a
R. D. King, et al., Nature 427, 247 (2004).
King2009
R. D. King, et al., Science 324 (2009).
ROSS
R. D. King, Scientific American 304, 72 (2011).
Wang2019
D. Wang, et al., Proceedings of the ACM on Human-Computer
Interaction pp. 1–24 (2019).
Nosek2012
B. A. Nosek, J. R. Spies, M. Motyl, Perspectives on Psychological
Science 7, 615 (2012).
Fanelli2017
D. Fanelli, R. Costas, J. Ioannidis, PNAS 14, 3714 (2017).
Nuzzo2015
R. Nuzzo, Nature pp. 182–185 (2015).
Goodman2018
S. N. Goodman, D. Fanelli, J. P. Ioannidis, Getting to Good: Research
Integrity in the Biomedical Sciences pp. 96–102 (2018).
Harris2019
J. K. Harris, et al., Public Health Reports 134, 109
(2019).
Kaanders2021
P. Kaanders, P. Sepulveda, T. Folke, P. Ortoleva, B. D. Martino, bioRxiv p. 2021.06.29.450332 (2021).
BarabSci
W. Dashun, B. Albert-László, The Science of Science (Cambridge
University Press, Cambridge, UK, 2021).
Fortunato2018
S. Fortunato, et al., Science 359 (2018).
Nature2016
Nature, Nature 537, 465 (2016).
Colizza2006-ed
V. Colizza, A. Flammini, M. A. Serrano, A. Vespignani, Nat. Phys. 2, 110 (2006).
Baker2016-cd
M. Baker, Nature 533, 452 (2016).
Baddeley2015-jp
M. Baddeley, EMBO Rep. 16, 902 (2015).
Resnik2016
D. B. Resnik, K. C. Elliott, Accountability in research 23, 31
(2016).
HernandezOrozco2021
S. Hernández-Orozco, et al., Frontiers in Artificial
Intelligence 3, 567356 (2021).
Venturi2019-mr
L. Venturi, A. Bandeira, J. Bruna, Journal on Machine Learning Research
20, 1 (2019).
Goodfellow2016
I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (MIT Press, 2016).
blackbox
V. Buhrmester, D. Münch, M. Arens (2019).
Rudin2019-pv
C. Rudin, Nature Machine Intelligence 1, 206 (2019).
Salakhutdinov2015-su
R. Salakhutdinov, Annual Review of Statistics and Its Application 2, 361 (2015).
Creswell2018-qa
A. Creswell, et al., IEEE Signal Process. Mag. 35, 53
(2018).
Bian2021-vh
Y. Bian, X.-Q. Xie, J. Mol. Model. 27, 71 (2021).
Calude2017
C. S. Calude, G. Longo, Foundations of Science 22, 595 (2017).
Zenil2020
H. Zenil, Entropy 22, 612 (2020).
Scholkopf2021
B. Scholkopf, et al., Proceedings of the IEEE 109, 612
(2021).
Colbrook2022
M. J. Colbrook, V. Antun, A. C. Hansen, Proceedings of the National
Academy of Sciences 119 (2022).
Nadeau2003-vk
C. Nadeau, Mach. Learn. 52, 239 (2003).
Spooner2021-px
J. Spooner, V. Palade, M. Cheah, S. Kanarachos, A. Daneshkhah, Applied
Sciences 11, 471 (2021).
Kitano2016
H. Kitano, AI Magazine 37 (2016).
Turing1
J. Copeland, Alan Turing: The codebreaker who saved `millions of lives' - BBC
News (2012).
Turing2
J. Copeland, D. Proudfoot, Alan Turing, Codebreaker and Computer Pioneer -
AlanTuring.net The Turing Archive for the History of Computing (2004).
Lederberg
L. L. Cavalli-Sforza, Cell 132 (2008).
Feigenbaum1
IEEE, IEEE Intelligent Systems 26 (2011).
Djerassi
J. I. Seeman, Chemical & Engineering News pp. 10–14 (2013).
DENDRAL
J. Lederberg, E. A. Feigenbaum, B. G. Buchanan, R. K. Lindsay, Applications of Artificial Intelligence for Organic Chemistry: The DENDRAL
Project (McGraw-Hill, 1980).
Buchanan1984
B. G. Buchanan, E. H. Shortliffe, Rule-Based Expert Systems: The MYCIN
Experiments of the Stanford Heuristic Programming Project
(Addison-Wesley, Reading, MA, 1984).
Langley1987
P. W. Langley, H. A. Simon, G. Bradshaw, J. M. Zytkow, Scientific
Discovery: Computational Explorations of the Creative Process (MIT Press,
Cambridge, Mass, 1987).
Burger2020
B. Burger, et al., Nature 583 (2020).
Jumper2021-gb
J. Jumper, et al., Nature 596, 583 (2021).
lenat
D. B. Lenat, Machine Learning, R. Michalski, J. Carbonell, Mitchell
T.M., eds. (Springer, Berlin, Heidelberg, 1983).
hasse
K. W. Haase, Discovery Systems AI Memo 898, Tech. rep., Artificial
Intelligence Laboratoy MIT, Cambridge Mass. (1986).
DataRobot
DataRobot, DataRobot - AI Cloud - The Next Generation of AI.
Eureqa
Eureqa, Eureqa Models | DataRobot.
Nutonian
Nutonian, DataRobot AI Cloud Platform.
Dubcakova2011-he
R. Dubčáková, Genet. Program. Evolvable Mach. 12,
173 (2011).
Awange2018-el
J. L. Awange, B. Paláncz, R. H. Lewis, L. Völgyesi, Mathematical
Geosciences (Springer International Publishing, Cham, 2018), pp. 321–357.
Wei2019-vt
G.-W. Wei, Nature Machine Intelligence 1, 336 (2019).
Skolnick2021-ty
J. Skolnick, M. Gao, H. Zhou, S. Singh, J. Chem. Inf. Model. 61,
4827 (2021).
Liu2021-zj
J. Liu, et al., Geophys. Res. Lett. 48 (2021).
Gupta2021-py
R. Gupta, et al., Mol. Divers. 25, 1315 (2021).
Liu2021-repurpose
R. Liu, L. Wei, P. Zhang, Nat Mach Intell 3, 68 (2021).
Bonneau2007
R. Bonneau, et al., Cell 131, 1354 (2007).
Karr2012
J. R. Karr, et al., Cell 150, 389 (2012).
Luo2020-xz
Y. Luo, J. Peng, J. Ma, Nat Mach Intell 2, 426 (2020).
Zenil2019b
H. Zenil, N. A. Kiani, A. A. Zea, J. Tegnér, Nature Machine
Intelligence 1, 58 (2019).
Gil2014-ch
Y. Gil, M. Greaves, J. Hendler, H. Hirsh, Science 346, 171
(2014).
Popper1972
K. R. Popper, Objective Knowledge: An Evolutionary Approach (Oxford
University Press, New York, 1972).
King2011
R. D. King, M. Liakata, C. Lu, S. G. Oliver, L. N. Soldatova, Journal of
the Royal Society Interface 8, 1440 (2011).
Russell1912
Bertrand Russell, The Problems of Philosophy (Home University
Library, 1912).
Pearl1995
J. Pearl, Biometrika 82, 669 (1995).
Zenil2020cnat
H. Zenil, N. Kiani, F. Abrahão, J. Tegnér, Scholarpedia
Journal 15, 53143 (2020).
Abrahao2022
F. S. Abrahão, H. Zenil, Philosophical Transactions of the Royal Society
A: Mathematical, Physical and Engineering Sciences 380 (2022).
Morgan1971-ly
C. G. Morgan, Artificial Intelligence 2, 179 (1971).
Thieme2005-ij
S. Thieme, Knowledge Representation and Organization in Machine
Learning (Springer-Verlag, Berlin/Heidelberg, 2005), pp. 177–191.
Zenil2018-pk
H. Zenil, et al., SSRN Electron. J. (2018).
Zenil2019
H. Zenil, et al., iScience pp. 1160––1172 (2019).
Lenat1982-aj
D. B. Lenat, Artificial Intelligence 19, 189 (1982).
Zenil2017a
H. Zenil, N. A. Kiani, J. Tegnér, Physical Review E 96,
012308 (2017).
Hernandez-Orozco2018
S. Hernández-Orozco, F. Hernández-Quiroz, H. Zenil, Artificial
Life 24, 56 (2018).
Hernandez-Orozco2018a
S. Hernández-Orozco, N. A. Kiani, H. Zenil, Royal Society Open
Science 5, 180399 (2018).
Abrahao2017
F. S. Abrahão, K. Wehmuth, A. Ziviani, Theoretical Computer
Science 785, 83 (2019).
Abrahao2018
F. S. Abrahão, K. Wehmuth, A. Ziviani, Complex Systems 27
(2018).
Lindner1999-wy
B. Lindner, L. Schimansky-Geier, Physical Review E 60, 7270
(1999).
Drton2017-bz
M. Drton, M. H. Maathuis, Annual Review of Statistics and Its
Application 4, 365 (2017).
Eddy2004-ub
S. R. Eddy, Nat. Biotechnol. 22, 1177 (2004).
Stevens2000-fu
R. Stevens, C. A. Goble, S. Bechhofer, Brief. Bioinform. 1, 398
(2000).
Bard2004-ej
J. B. L. Bard, S. Y. Rhee, Nat. Rev. Genet. 5, 213 (2004).
Shefchek2020-ci
K. A. Shefchek, et al., Nucleic Acids Res. 48, D704
(2020).
Raedt2008
L. D. Raedt, Logical and Relational Learning (Springer Berlin
Heidelberg, Berlin, Heidelberg, 2008).
Orhobor2020
O. I. Orhobor, N. N. Alexandrov, R. D. King, Machine Learning 2020
109:11 109, 2195 (2020).
Pearl2012
J. Pearl, Uncertainty in Artificial Intelligence - Proceedings of the 28th
Conference, UAI 2012 pp. 4–11 (2012).
Tang2019-ni
C. Tang, et al., Neural Netw. 117, 163 (2019).
Abrahao2021
F. S. Abrahão, K. Wehmuth, H. Zenil, A. Ziviani, Entropy 23
(2021).
Chowdhury2005-gl
G. G. Chowdhury, Annual Review of Information Science and Technology
37, 51 (2005).
Cambria2014-yd
E. Cambria, B. White, IEEE Comput. Intell. Mag. 9, 48 (2014).
Andronis2011-uk
C. Andronis, A. Sharma, V. Virvilis, S. Deftereos, A. Persidis, Brief.
Bioinform. 12, 357 (2011).
McCarthy2007-ns
J. McCarthy, Artificial Intelligence 171, 1174 (2007).
Campbell2002-tv
M. Campbell, A. J. Hoane, F. Hsu, Artificial Intelligence 134, 57
(2002).
Evans2011
J. A. Evans, J. G. Foster, Science 331 (2011).
Silver2016
D. Silver, et al., Nature 7587, 484– (2016).
Hassabis2017
D. Hassabis, Nature pp. 413–414 (2017).
Kitano2021
H. Kitano, npj Systems Biology and Applications 2021 7:1 7, 1
(2021).
Kitano1997
H. Kitano, et al., AI Magazine 18, 73 (1997).
Kitano1998-eo
H. Kitano, M. Asada, I. Noda, H. Matsubara, IEEE Robot. Autom. Mag.
5, 30 (1998).
|
http://arxiv.org/abs/2307.03875v2 | 20230708014222 | Large Language Models for Supply Chain Optimization | ["Beibin Li", "Konstantina Mellou", "Bo Zhang", "Jeevan Pathuri", "Ishai Menache"] | cs.AI | ["cs.AI", "cs.CL", "cs.DM", "cs.LG"] |
Large Language Models for Supply Chain Optimization
Beibin Li, Konstantina Mellou, Bo Zhang, Jeevan Pathuri, Ishai Menache
======================================================================
Supply chain operations traditionally involve a variety of complex decision making problems. Over the last few decades, supply chains greatly benefited from advances in computation, which allowed the transition from manual processing to automation and cost-effective optimization. Nonetheless, business operators still need to spend substantial efforts in explaining and interpreting the optimization outcomes to stakeholders. Motivated by the recent advances in Large Language Models (LLMs), we study how this disruptive technology can help bridge the gap between supply chain automation and human comprehension and trust thereof. We design – a framework that accepts as input queries in plain text, and outputs insights about the underlying optimization outcomes. Our framework does not forgo the state-of-the-art combinatorial optimization technology, but rather leverages it to quantitatively answer what-if scenarios (e.g., how would the cost change if we used supplier B instead of supplier A for a given demand?). Importantly, our design does not require sending proprietary data over to LLMs, which can be a privacy concern in some circumstances. We demonstrate the effectiveness of our framework on a real server placement scenario within Microsoft's cloud supply chain. Along the way, we develop a general evaluation benchmark, which can be used to evaluate the accuracy of the LLM output in other scenarios.
§ INTRODUCTION
Modern supply chains are complex, containing multiple tiers of suppliers, customers, and service providers <cit.>. Optimization tools have been widely utilized for decision making in such supply chains. These tools not only automate some of the decision making processes, but also result in efficiency gains and substantial cost reductions across many industries <cit.>. However, some of the automated processes require involving business operators, for understanding and explaining certain decisions, providing what-if analysis, and even overriding some optimization outcomes. In many cases, these operators are not equipped with the necessary background in optimization, resulting in time-consuming back-and-forth interactions with program managers, data scientists and engineers.
Large language models (LLMs) have recently emerged as a promising tool for assisting humans with a wide variety of tasks, such as writing documents, presenting work, coding and health diagnosis <cit.>. Generative multimodal LLMs, such as OpenAI's GPT-4, are being rapidly integrated within co-pilots, for answering questions and increasing productivity through simple, language based interactions with technology <cit.>.
In this paper, we study how state-of-the-art LLMs can be applied for reasoning about supply chain optimization. Using LLMs in our context is challenging.
First, the underlying optimization problems are often large-scale combinatorial optimization problems, and solving them directly is currently out of reach for LLMs <cit.>. Second, one needs to align the large foundation models to answer the domain-specific questions. Due to the large scale, fully training these models is not possible, and even middle-ground solutions such as fine-tuning LLMs require substantial compute and engineering investments <cit.>. Last but not least, any use of LLMs in business-critical operations should have solutions for when "things go wrong", including diagnosing and recovering from mistakes and hallucinations <cit.>.
In view of these challenges, we design and implement – a framework that employs LLMs to interpret supply chain optimization solutions. A key idea behind is not to replace optimization technology by LLMs, but rather use optimization solvers in tandem with LLMs. In our design (see Figure <ref> for system architecture), the LLM is responsible for translating the human query to “optimization code", which is in turn used by an optimization solver to produce the necessary output; the output then passes through the LLM for producing the answer in human language (English). This architecture is used both for textual explanations and visualizations of the optimization solution, as well as for answering what-if queries. To address what-if queries, uses the LLM to appropriately modify the input to the optimization solver, and then reruns the solver under the hood to produce an answer.
To enable , we solve multiple technical challenges. First, we circumvent all forms of costly training, by applying in-context learning, namely “teaching" the LLM about the domain directly through the query's prompt (i.e., as part of the inference). This requires careful co-design of the optimization code and the prompt with the understanding that the prompt can be space constrained. For example, we write the code in certain functional form that can be efficiently mapped to questions asked by humans.
We also design a simple safeguard mechanism that confronts output mistakes.
To evaluate the effectiveness of , we introduce an evaluation benchmark that includes (i) a variety of common supply chain scenarios, and (ii) an evaluation methodology that incorporates new metrics for quantifying accuracy, generalizability within a scenario, and extrapolation capability to unseen scenarios. We test on five different scenarios and obtain 93% accuracy on average using GPT-4. We view the benchmark and methodology as contributions that stand on their own, and can be used to evaluate future approaches. We are in the process of open-sourcing our benchmark. Finally, we deploy for the server deployment optimization used in Microsoft Azure's supply chain. We discuss some of the engineering challenges, and report initial promising results from our evaluation.
We believe that this paper sets important foundations, which can be used by other organizations for explaining optimization outcomes through LLMs. There are several future directions that emerge from our study, for example, using smaller models that can be trained with modest resources. As a longer-term goal, it is natural to expand the scope of LLMs beyond explainability, to facilitate interactive optimization (e.g., “please provide a more load-balanced solution", “please use at most two suppliers"). With the constant advances of LLM technology, it will be fascinating to examine whether LLMs can be utilized not only as translators, but also for refining and improving optimization outcomes.
The rest of the paper is organized as follows. In Section <ref>, we provide the necessary background on supply chain optimization and current LLM technology. In Section <ref>, we describe the design of .
Section <ref> describes our evaluation benchmark, and 's evaluation results. In Section <ref>,
we outline our findings from 's deployment in Azure's supply chain. We discuss future perspectives in Section <ref>.
§ BACKGROUND AND MOTIVATION
In this section, we provide brief background on decision making in supply chain operations, and elaborate on the notion of explainability. We then describe current capabilities and limitations of LLMs, and conclude with a simple supply chain example, which will be useful for explaining our solution approach.
§.§ Decision Making in Supply Chains
A supply chain may be defined as “an integrated network of facilities and transportation options for the supply, manufacture, storage, and distribution of materials and products” <cit.>. A simple supply chain may consist of a company (e.g., a service provider) and the set of its suppliers and customers <cit.>. However, most supply chains nowadays contain multiple tiers with suppliers of suppliers, customers of customers, and hierarchies of service providers <cit.>.
This results in highly complex global networks where decisions must be optimized across multiple layers to satisfy customer demand while guaranteeing operational efficiency.
Decision making in supply chains spans different time-scales: starting from the design of the supply chain network (e.g., location of factories), planning (e.g., procurement of supply), and execution (e.g., transportation of goods). This leads to many types of decisions; a few examples:
* How many factories should we open, where, and with what manufacturing capacity?
* What suppliers should we use?
* How much inventory should we keep in stock and at which locations?
* How should we transport intermediate and finished goods efficiently?
The complexity of the decision-making often requires the design of optimization approaches that can incorporate a multitude of constraints and objectives, and still generate good quality solutions in plausible running times. To this end, different aspects of the supply chain (facility location, inventory planning, routing) may be optimized separately or considered jointly (e.g., inventory planning integrated with routing <cit.>). Common solution approaches for these optimization problems include Mixed Integer Programming based techniques and heuristics that can tackle the large scale of the problem.
§.§ Explainability
Business operators and planners involved in decision-making need to maintain a good understanding of the optimization outcomes. This allows them to not only address customer questions, but also react to unexpected events, and resolve inefficiencies and bottlenecks. However, the understanding is often challenging due to the complexity of the decision process (e.g., large scale, solution obtained by “black-box" algorithm, etc.) and lack of optimization expertise.
For concreteness, we provide below some examples of questions that operators may wish to answer.
* What is the cost breakdown for each fulfilled demand?
* How much excess inventory have I had per month in the past year?
* What would happen if the demand at a particular location increased by 10%?
* Can I reduce a factory's manufacturing capacity by 5% and still meet the demand?
* Why was a particular supplier selected for a demand?
* How would selecting a different transportation option affect the delivery timelines and the overall cost?
These and other questions aim at explaining the outcome of supply chain decisions. They include analyzing the current solution (input and output), investigating historical trends, and exploring what-if scenarios.
Obtaining insights on optimization decisions may require involving multiple professionals with different roles. Suppose that planners may wish to understand why a demand has not been fulfilled on time. They often surface the concern to the program managers, who involve domain experts, such as data scientists or the engineers that developed the optimization system. The domain experts in turn may need to write additional code and often rerun the optimization to extract the relevant insights. This overall process might be very time-consuming for all parties involved and can cause significant delays in the decision making process.
In some applications, teams maintain some custom tools that allow decision makers to reason about certain decisions. For example, application dashboards can provide visualizations or even allow enforcing some actions (e.g., fix a specific supplier for a demand). However, given the engineering overhead of maintaining
the tools, they are typically limited to the most common use cases.
The notion of explainability is certainly not novel, and has drawn attention in both academia and industry. There have been numerous studies on explaining ML/AI <cit.>. In the optimization context, IBM Decision Optimization <cit.> provides answers to a fixed set of queries that the user may choose to activate. See also <cit.> and references therein.
§.§ Large Language Models
Overview.
A large language model (LLM) is a foundation model <cit.> trained on extensive text data using deep learning techniques, such as Transformer neural networks; ELMo <cit.>, BERT <cit.>, Turing NLG <cit.>, GPT-3 <cit.>, GPT-4 <cit.>, PaLM <cit.>, PaLM-E <cit.>, LLaMA <cit.>, and Vicuna <cit.> are some examples of widely used LLMs.
In the training phase, a LLM learns statistical patterns, word relationships, and contextual information from diverse sources, such as books, articles, websites, and code repositories. LLMs are used for a variety of tasks in the inference phase <cit.>, including chatbots, translation, writing assistance, coding <cit.>, planning <cit.>, poem and story composition.
Using LLMs in applications. Multiple strategies can be employed to adapt LLMs for a specific application. The most common approaches are fine-tuning and in-context learning.
Fine-tuning is a classic approach for “transfer learning" aimed at transferring knowledge from a pre-trained LLM to a model tailored for a specific application <cit.>. Typically, this process involves tweaking some weights of the LLM. While fine-tuning approaches can be made efficient <cit.>, they still necessitate model hosting in GPUs. This requirement can prove excessively costly for many applications. In-context learning <cit.> is an alternative cheaper approach, which involves incorporating a few training examples into the prompt (or query). The idea here is to append the prompt with domain-specific examples and have the LLM learn from these “few-shot" examples. A key advantage of this approach is that it does not require model parameter updates.
Prompt engineering. In a production setting, developers often send prompts (aka, queries) to the model, which can be appended with domain-specific examples for obtaining higher-quality answers. A collection of prompt management tools, such as ChatGPT Plugin <cit.>, GPT function API call <cit.>, LangChain <cit.>, AutoGPT <cit.>, and BabyAGI <cit.>, have been designed to help engineers integrate LLMs in applications and services. The prompt size is measured in the number of tokens, which is proportional to the query size. LLMs can only process a limited number of tokens because of resource limitations, which is a strict constraint that developers and tools need to find workarounds for.
Privacy. Using domain-specific information in the prompt may involve proprietary data, which users may prefer not to reveal to LLM hosts. Even if LLM providers offer service level agreements (SLAs) for privacy, passive eavesdropping attackers might still intercept the data. Therefore, many organizations would prefer utilizing LLMs in a privacy-preserving way, namely keeping the proprietary data in-house.
Mistakes.
Naturally, LLMs might provide sub-optimal outcomes, such as inaccuracies and even hallucinations <cit.>.
There are generic tools that tackle this problem <cit.>, however one may need domain specific tools for better outcomes. One example is fixing code generated by LLMs <cit.>.
§.§ A Simple Example
We now describe a simple supply chain example that will be useful for illustrating our approach.
The supply chain. Consider a coffee roasting company that roasts two types of coffee (light and dark roast). The company sources coffee beans from three different suppliers, it roasts them in one of its two roasting facilities, and then ships them to one of its three retail locations for selling to customers. The goal is to fulfill the demand in each retail location, while minimizing the total cost. The total cost consists of the cost of purchasing the coffee from the suppliers, the roasting cost in each facility, and the shipping cost of the end product to the retail locations. An illustration is given in Figure <ref>.
Model formulation. We can model this problem as a Mixed Integer Program. Let x_s,r denote the number of units purchased from supplier s for roasting facility r, and y^L_r,ℓ and y^D_r,ℓ the amount of light and dark roast sent to retail location ℓ from roasting facility r. Each supplier s has a capacity C_s, and each retail location ℓ has demand D^L_ℓ and D^D_ℓ for light and dark roast respectively. There is a cost c_s,r for each unit purchased from supplier s for roasting facility r, a shipping cost of g_r,ℓ for each unit sent to retail location ℓ from roasting facility r, and a roasting cost h_r^L and h_r^D per unit of light roast and dark roast respectively in facility r.
The optimization problem is the following:
minimize ∑_s,r x_s,r · c_s,r + ∑_r,ℓ y^L_r,ℓ · h^L_r + ∑_r,ℓ y^D_r,ℓ · h^D_r + ∑_r,ℓ (y^L_r,ℓ + y^D_r,ℓ) · g_r,ℓ (Objective)
subject to ∑_r x_s,r ≤ C_s ∀ s (Supplier capacity constraint)
∑_s x_s,r = ∑_ℓ (y^L_r,ℓ + y^D_r,ℓ) ∀ r (Conservation of flow constraint)
∑_r y^L_r,ℓ ≥ D^L_ℓ ∀ ℓ (Light coffee demand constraint)
∑_r y^D_r,ℓ ≥ D^D_ℓ ∀ ℓ (Dark coffee demand constraint)
x_s,r, y^L_r,ℓ, y^D_r,ℓ ∈ ℤ^+ ∀ s,r,ℓ (Integrality constraint)
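For concreteness, a compact sketch of this model written with gurobipy, the Python interface of the Gurobi solver used throughout the paper, is given below; the numerical capacities, demands, and unit costs are placeholder values chosen only so that the sketch runs, and do not correspond to the data of the illustrated example.

import gurobipy as gp
from gurobipy import GRB

suppliers, roasteries, retails = ["s1", "s2", "s3"], ["r1", "r2"], ["l1", "l2", "l3"]
# Placeholder data: supplier capacities, light/dark demands, and unit costs.
C = {"s1": 150, "s2": 50, "s3": 100}
D_L = {"l1": 20, "l2": 30, "l3": 40}
D_D = {"l1": 20, "l2": 20, "l3": 100}
c = {(s, r): 5 for s in suppliers for r in roasteries}   # purchasing cost
g = {(r, l): 2 for r in roasteries for l in retails}     # shipping cost
h_L = {"r1": 3, "r2": 5}                                 # light roasting cost
h_D = {"r1": 5, "r2": 4}                                 # dark roasting cost

m = gp.Model("coffee")
x = m.addVars(suppliers, roasteries, vtype=GRB.INTEGER, name="x")
yL = m.addVars(roasteries, retails, vtype=GRB.INTEGER, name="yL")
yD = m.addVars(roasteries, retails, vtype=GRB.INTEGER, name="yD")

m.setObjective(
    x.prod(c)
    + gp.quicksum(yL[r, l] * h_L[r] + yD[r, l] * h_D[r] for r in roasteries for l in retails)
    + gp.quicksum((yL[r, l] + yD[r, l]) * g[r, l] for r in roasteries for l in retails),
    GRB.MINIMIZE,
)
m.addConstrs((x.sum(s, "*") <= C[s] for s in suppliers), name="supplier_capacity")
m.addConstrs((x.sum("*", r) == yL.sum(r, "*") + yD.sum(r, "*") for r in roasteries), name="flow")
m.addConstrs((yL.sum("*", l) >= D_L[l] for l in retails), name="light_demand")
m.addConstrs((yD.sum("*", l) >= D_D[l] for l in retails), name="dark_demand")
m.optimize()
print("total cost:", m.ObjVal)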
Explainability. Let us now zoom into the example from Figure <ref>. The optimal solution is depicted in Figure <ref>. We see that in the optimal plan, both roasteries produce light and dark coffee; the first roastery sources its beans from supplier 3, while the second from suppliers 1 and 2. The first two retail locations then obtain all their coffee from the first roastery, while the third retail location is supplied by both roasteries. A user may ask the following questions:
* What would happen if the demand at retail location 1 increased by 10%?
* What would happen if the demands at all retail locations doubled?
* Why are we using supplier 3 for roasting facility 1?
* Can I use roasting facility 1 only for retail location 2?
* What if supplier 3 can now provide only half of the quantity?
* The per-unit cost from supplier 3 to roasting facility 1 is now $5. How does that affect the total cost?
* Why does Roastery 1 produce more light coffee than Roastery 2?
* Why does supplier 1 ship more to Roastery 2 than Roastery 1?
* Why not only use one supplier for Roastery 2?
§ THE LLM FRAMEWORK
Large-scale supply chain management entails multiple functions, such as extensive data gathering, data processing and analysis, optimization processes, and the communication and enforcement of decisions across multiple stakeholders. While LLMs and supporting tools may handle part of these functions, there is a need for an end-to-end framework that will address the underlying challenges in a systematic way. In this section, we describe the design of our framework, .
§.§ System Overview
The framework, depicted in Figure <ref>, consists of three sets of entities: agents, LLMs, and application-specific components. When a user poses a question (1), the coder takes the question and formulates it as an in-context learning (ICL) question (2) for the LLM. The LLM then generates code (3) to answer the question. The safeguard checks the validity of the code and aborts the operation in case of a mistake; otherwise the safeguard feeds the code to an application specific component (4), such as a database engine or an optimization solver (depending on the query). The component processes the code and produces results, which are logged in a file (5). We note that obtaining the final result
may involve multiple iterations (2 to 5) where the query is automatically refined until the desired output is achieved. Finally, the output logs from the component are fed back into the LLM (6). The LLM analyzes the logs and generates a human-readable answer (7) that is sent back to the user (8).
We now provide an overview of the different entities and components. More details can be found in Appendix <ref>.
§.§.§ Agents
Agents facilitate the interaction between users, the LLM, and application-specific components. The coder converts raw user questions into specific ICL queries. The conversion includes supplying the application context, providing ample training examples, and restructuring the user's query, as exemplified in Figure <ref>. The safeguard operates as a quality control checkpoint. It scrutinizes the code for potential discrepancies and initiates self-debugging upon encountering failures. When cannot successfully address a query, the safeguard would either initiate a new iteration with a proposed fix, or generate an error message for the user. The interpreter takes the output logs, tables, graphs, etc., and generates a human friendly response to the user's query.
§.§.§ Application Specific Components
Different applications may have different types of components; we provide an overview of the most common ones. is designed in a modular way, so that using for a different application requires only switching to a new set of components.
The database is a systematically arranged collection of data in various formats, such as CSV, SQL, JSON, Parquet, which are queried to extract answers. The solver can be a commercial integer programming solver, such as Gurobi. can query the solver output directly, or the output can be stored and queried from the database. If a question demands profound domain knowledge or historical context, consults documents to enhance the depth and relevance of the response. The helper is an optional component. It consists of a set of functions written by application engineers, for simplifying the code produced by LLMs. For example, a complex data analysis workflow can be simplified to a single helper function call.
§.§ A Running Example
We illustrate 's data flow via the user question, “What if we prohibit shipping from supplier 1 to roastery 2? Show me the new plan and compare with the previous result". First, the coder converts this question into an in-context learning query for the LLM, see Figure <ref> for the prompt. In addition to the question itself, the prompt contains (i) training examples, namely pairs of questions and code answers, and (ii) a documentation of the helper functions. Intuitively, (ii) supplements (i) by providing additional context into what the code does.
Subsequently, the LLM generates code that adds a new constraint (green region in Figure <ref>). The safeguard then extracts the code from the LLM's response, and calls the optimization solver to resolve the planning problem, yielding a result depicted in the yellow region in Figure <ref>. This result is then fed into the LLM by the interpreter, which produces a response. Finally, presents the response to the user alongside a visualization of the plan (green region in Figure <ref>) and a comparison with the original cost. Note that preserves privacy, since the domain-specific data remains in either the solver or database, and is never transferred to the LLM. Additional examples are provided in Figure <ref>.
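To give a sense of what such generated code looks like, the snippet below sketches the kind of change the LLM might produce for this what-if question, reusing the model object and variable names from the gurobipy sketch in the previous section; the exact identifiers produced in practice depend on the application code and are assumed here.

# Sketch of an LLM-generated snippet for:
#   "What if we prohibit shipping from supplier 1 to roastery 2?"
# It reuses the model m and variables x from the gurobipy sketch above.
m.addConstr(x["s1", "r2"] == 0, name="prohibit_s1_to_r2")
m.optimize()
print("updated total cost:", m.ObjVal)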
§ EVALUATION BENCHMARK
In this section, we develop a benchmark for evaluating the performance of our framework on a variety of supply chain optimization problems. The benchmark and the methodology around it can guide future efforts for using LLMs in supply chain optimization.
§.§ Scenarios and Data
To evaluate our framework, we selected a variety of optimization problems that capture multiple types of decisions that may be relevant in different supply chain settings. Specifically, our dataset includes a facility location scenario, a multi-commodity network flow for distribution of products, workforce assignment optimization, the traveling salesman problem, as well as the coffee distribution scenario from Section <ref>. The code for all problems is in Python and the Gurobi optimization solver <cit.> is used to obtain the optimal solution; Appendix <ref> provides the code for the coffee distribution problem as an example.
Our next step is to generate a repository of questions and code answers for each scenario. Some of these question-answer pairs will be used as examples for in-context learning, while others for evaluating 's performance.
To create a large set of questions, we write macros for each question, which results in generating question sets of closely related question-answer pairs. An example of a macro for a question set is the following:
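The macro listing itself is not reproduced here; the sketch below shows what such a macro can look like, where the question template, the ground-truth code fragment, and the entity names are illustrative assumptions.

import itertools

SUPPLIERS = ["supplier1", "supplier2", "supplier3"]
ROASTERIES = ["roastery1", "roastery2"]

QUESTION_TEMPLATE = "What if we prohibit shipping from {s} to {r}?"
ANSWER_TEMPLATE = 'model.addConstr(x["{s}", "{r}"] == 0)\nmodel.optimize()\n'

def generate_question_set():
    # Expand the macro into closely related question-answer pairs.
    return [
        {
            "question": QUESTION_TEMPLATE.format(s=s, r=r),
            "answer_code": ANSWER_TEMPLATE.format(s=s, r=r),
        }
        for s, r in itertools.product(SUPPLIERS, ROASTERIES)
    ]

print(len(generate_question_set()), "question-answer pairs generated")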
In order to increase the diversity in the question sets, we also ask GPT to rephrase the questions while preserving their meaning. For instance, GPT might rephrase the generated question “Why would we ship beans from Supplier 1 to Roastery 2” to “What benefits are associated with the choice of shipping beans from Supplier 1 to Roastery 2?”.
We note that the question sets for all problems that are used in the benchmark were created from scratch and kept in house, so that the LLMs have not observed these data as part of their training.
§.§ Evaluation Methodology
The goal of our evaluation is to assess the accuracy of LLMs in answering user questions for supply chain optimization problems. Unfortunately, existing metrics, such as pass@k which is used for analyzing coding accuracy <cit.>, are not well suited for explainability through code (intuitively, the metrics are “too forgiving"). We therefore propose a different methodology which is inspired by the unit-test approach used in software development.
Our evaluation proceeds as follows. For each scenario we run R experiments. Each experiment consists of T question sets. Each question set consists of Q test questions and answers.
The LLM is asked to write the code and answer for a test question; it is given three chances to produce a response in case of an evident error (runtime or syntax). We then evaluate the correctness of the final answer. Note that we do not necessarily evaluate whether the generated code matches exactly with our ground-truth code, as there are different ways to obtain the correct response. The following example demonstrates a scenario where the generated code is quite different, but the optimization outcome would be the same.
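That example is not reproduced here, but the idea can be sketched as follows: two hypothetical generated snippets for "prohibit shipping from supplier 1 to roastery 2", one adding an explicit constraint and one tightening a variable bound, modify the model in different ways yet force the same shipment to zero, so the check compares optimization outcomes rather than code text. The helper build_coffee_model is an assumed function that returns a fresh model and its variables.

def same_outcome(code_a: str, code_b: str) -> bool:
    # Run two generated snippets against fresh copies of the model and
    # compare the resulting optimal costs.
    costs = []
    for code in (code_a, code_b):
        env = build_coffee_model()   # assumed helper: {"m": model, "x": vars, ...}
        exec(code, env)
        env["m"].optimize()
        costs.append(env["m"].ObjVal)
    return abs(costs[0] - costs[1]) < 1e-6

variant_a = 'm.addConstr(x["s1", "r2"] == 0)'   # explicit constraint
variant_b = 'x["s1", "r2"].UB = 0'              # equivalent variable bound
print(same_outcome(variant_a, variant_b))       # expected: True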
Accuracy. We define the accuracy metric AC as the average success rate across all scenarios, experiments and question sets. Formally,
AC = (1 / (S·R)) ∑_s=1^S ∑_r=1^R (1/T_s) ∑_t=1^T_s 1(q_t),
where q_t is the question set, and 1(q_t) is the indicator whether it passed successfully. The LLM passes a question set if and only if it successfully answers all questions in the question set.
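The metric can be computed directly from per-question pass records; the sketch below uses a made-up nested-list structure for the results and simply mirrors the formula above.

def accuracy(results):
    # results[s][r] is a list of question sets; each question set is a list of
    # booleans, one per question. A question set passes only if all questions pass.
    total, score = 0, 0.0
    for scenario in results:                    # S scenarios
        for experiment in scenario:             # R experiments per scenario
            passed = [all(qs) for qs in experiment]
            score += sum(passed) / len(passed)  # (1/T_s) * sum of indicators
            total += 1
    return score / total                        # divide by S * R

# Toy example: 1 scenario, 2 experiments, 2 question sets of 3 questions each.
toy = [[[[True, True, True], [True, False, True]],
        [[True, True, True], [True, True, True]]]]
print(accuracy(toy))  # 0.75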
In-distribution and out-of-distribution evaluation.
As common practice, we evaluate our framework in both `in-distribution' and `out-of-distribution' <cit.> settings.
For in-distribution evaluation (Figure <ref>), the test question and the examples used in the prompt are from the same question set. In contrast, for out-of-distribution evaluation (Figure <ref>), the example questions are extracted from different question sets.
Example selection. As the number of tokens that can be provided as input to the LLMs is limited, we explore different approaches for selecting the training examples for each query. The approaches can be evaluated both for in-distribution and out-of-distribution evaluation. One approach is random selection, where a fixed number of example questions is selected uniformly at random. Another approach is based on nearest neighbors, where we select examples that are similar to the test question; similarity is based on the text embedding <cit.> of the questions as determined by the model text-embedding-ada-002 <cit.>. We also experiment with different sizes of the example set (0, 1, 3, 5, or 10 examples).
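A sketch of the nearest-neighbor selection step is shown below; it assumes the embeddings have already been computed (text-embedding-ada-002 produces 1536-dimensional vectors), and the random vectors in the usage example merely stand in for real question embeddings.

import numpy as np

def select_examples(question_emb, example_embs, k=3):
    # Return indices of the k training examples whose embeddings are most
    # similar (by cosine similarity) to the test question's embedding.
    q = question_emb / np.linalg.norm(question_emb)
    E = example_embs / np.linalg.norm(example_embs, axis=1, keepdims=True)
    similarity = E @ q
    return np.argsort(-similarity)[:k]

# Toy usage with random vectors standing in for precomputed embeddings.
rng = np.random.default_rng(1)
pool = rng.normal(size=(100, 1536))   # 100 candidate question-answer examples
test = rng.normal(size=1536)          # embedding of the incoming test question
print(select_examples(test, pool, k=3))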
§.§ Performance
Setup. For each scenario s, we run R=10 experiments. In each experiment we evaluate T_s≥ 10 question sets. Each question set q_t usually contains 10-30 questions and answers.
We use both text-davinci-003 <cit.> and GPT-4 <cit.> for our evaluation.
Performance results across different LLMs, example selection approaches, and example set sizes are summarized in Table <ref>.
Observations. GPT-4 consistently outperforms text-davinci-003 in both in-distribution and out-of-distribution evaluation.
As expected, both models show higher accuracy on in-distribution compared to out-of-distribution evaluation. GPT-4 performs relatively much better in out-of-distribution evaluation, demonstrating its stronger reasoning and generalization capabilities; another sign for these capabilities is the 59% accuracy even without any training examples. Increasing the number of examples results in improved accuracy across the board. We also note that the gap between text-davinci-003 and GPT-4 decreases with the size of the example set.
The nearest neighbor selection approach yields slight performance improvements for in-distribution evaluation. Interestingly, when the size of the example set is greater than one, random selection outperforms nearest neighbor for out-of-distribution evaluation. One explanation here is that selecting examples based on text similarity results in overfitting, and random selection results in more diverse training examples.
§ FOR AZURE'S SUPPLY CHAIN
In this section, we demonstrate 's capabilities on the server fulfillment supply chain of Microsoft Azure. We start with providing the necessary details for the decisions involved in Azure's supply chain. We then outline the steps for deploying in production, and provide examples of user interactions and early feedback we obtained. We conclude this section by describing preliminary performance results.
§.§ The Supply Chain
The rapid growth of the cloud industry requires cloud providers to continuously deploy additional capacity to keep up with the demand. This is achieved by acquiring new clusters of servers and deploying them in the data centers. The Microsoft Azure supply chain encompasses a broad array of processes including demand forecasting, strategic foresight, hardware semantic search, fulfillment planning, and document management. Due to complexity and large scale, the optimization of Azure's supply chain is assigned to different subsystems. We focus here on one such subsystem called Intelligent Fulfillment System (IFS), which deals with assigning and shipping servers from the warehouse to the data centers.
Main decisions. For each demand for cloud capacity, the main decisions consist of (i) the hardware supplier that will be used to fulfill the demand, (ii) the timeline of the deployment - in particular, the cluster's dock-date (which determines the date of shipping from the warehouse), and (iii) the cluster's deployment location in the data center (selection of a row of tiles to place the cluster on). The goal is to minimize the total cost that consists of multiple components, such as delay/idle cost of the clusters compared to their ideal dock-date and shipping costs, while respecting a multitude of constraints. Examples of constraints include capacity constraints on the suppliers and the data centers, location preferences for demands and compatibility constraints. The underlying optimization problem is formulated as a Mixed Integer Program (MIP) where the total input data size is around 500 MB.
The optimal solution is obtained hourly using Gurobi. More details about the optimization problem can be found in Appendix <ref>.
Stakeholders. The main consumers of IFS are planners. These are professionals that have the business context, so when they receive the outcome of the optimization, they can confirm that it meets business needs (or override decisions otherwise) and ensure the execution of the decisions is completed as planned. However, the increased complexity of the underlying optimization problem in combination with the global scale of decision making (hundreds of data centers) prevents immediate clarity in the reasoning behind each decision. Consequently, planners often reach out to the engineers (including data scientists) that develop the optimization system for obtaining additional insights.
Oftentimes, planners and engineers have multiple rounds of interaction around understanding an issue or exploring what-if scenarios.
Common questions. We summarize below the main types of questions that are raised by planners:
* [Management] Does the system support a particular region, resource, or supplier?
* [Availability] Is a resource available or allocated?
* [Decisions] Why did the system make decision `x' related to supplier/demand selection, time, and location?
* [Details of shipments] What are the details related to cross-geographical shipments and expected dock counts on a specific date?
* [Historical data analysis] What is the standard deviation of the supplier's inventory in the last month?
* [Visualization] Can you visualize the dock capacity, availability, dates, or delays at a given location?
§.§ Deploying for Azure Supply Chain
Our current deployment of consists of (i) a front-end service for multiple-user interaction; (ii) an agent service, which is connected to Azure OpenAI for LLM access; (iii) multiple virtual machines (VMs) which host IFS and the application specific components to support multiple users at the same time.
We preload VMs' memories with the input data and solver's solutions to speedup code executions for users. The input data for the optimization problem are updated periodically (hourly), where the VMs load the updated data in a round-robin fashion so that there are always some VMs available to support users. We use GPT-4 as the LLM.
§.§ Preliminary Feedback and Results
Figure <ref> provides examples of interactions between users and .
The preliminary feedback we obtained from both planners and engineers has been positive. Users expressed excitement noting the potential of to help them understand the underlying optimization logic. Users especially emphasized the benefits of supporting key what-if scenarios, which gives planners more autonomy and may substantially reduce the engineering on-call burden. For example, before , answering one what-if question would need more than three operators to coordinate the investigation and one on-call engineer to inspect the plan output.
Our preliminary evaluation indicates that can achieve more than 90% accuracy for our in-distribution evaluation. This result is consistent with the ones obtained in Section <ref>.
§ CONCLUDING REMARKS
We conclude this paper by discussing current limitations, and highlighting intriguing directions for future work.
§.§ Current Limitations
Users need to be specific. The user needs to ask precise questions. For instance, “Can we dock demand xc132 fifteen days earlier?" is ambiguous, because “earlier" can mean “15 days before today", “15 days before the currently planned date", or “15 days before the deadline". Consequently, the LLM might misunderstand the user and yield the wrong code.
Dependency on application-specific components.
relies on proper design of application-specific components, such as the schema of the database and the helper functions. Some of these components might require non-negligible engineering efforts. While there has been progress in automating some of these components <cit.>, there are still gaps in using them in some production settings.
Undetected mistakes. We observed cases where the LLM writes code that runs smoothly, but it may be totally wrong (e.g., due to string matching mistakes). We expect that things will improve in the future with more advances in LLMs and supporting tools.
Generalize to new questions. While the LLM performs well on seen questions, it still struggles when presented with questions that do not appear in the examples (see, e.g., Table <ref>). We believe that future models will have better generalizability.
Benchmark. Our current evaluation quantifies performance only for quantitative questions; for example, we exclude visualization queries from our analysis. Furthermore, the evaluation is based on a specific programming language (Python) and optimization solver (Gurobi).
§.§ Future Directions
We see our work as a cornerstone for future research in the area.
One interesting direction is incorporating human feedback (e.g., from supply chain planners) which could lead to significant performance improvements <cit.>. Another direction that we are currently examining is using smaller models (see, e.g., <cit.> and references therein) for the specific tasks of supply chain optimization; using such models allows for more affordable hosting and fine-tuning of the model. In particular, we are examining whether fine-tuning can help with interpreting unseen questions. On a related note, it is of interest to consider a hybrid framework that combines the strengths of different AI models, for example combining large LMs with smaller ones. A natural longer-term goal is to go beyond explainability and facilitate interactive optimization, where the user directly influences the optimization outcomes; this will require designing more comprehensive safeguards, to prevent costly mistakes.
§.§ Acknowledgements
We thank Sébastien Bubeck, Yin Tat Lee, Chi Wang, Erkang Zhu, Leonardo Nunes, Srikanth Kandula, Adam Kalai, Marco Molinaro, Luke Marshall, Patricia Kovaleski, Hugo Barbalho, Tamires Santos, Runlong Zhou, Ashley Llorens, Surajit Chaudhuri, and Johannes Gehrke from Microsoft Research for useful discussions. We also thank Brian Houser, Matthew Meyer, Ryan Murphy, Russell Borja, Yu Ang Zhang, Rojesh Punnath, Naga Krothapalli, Navaneeth Echambadi, Apoorav Trehan, Jodi Larson, and Cliff Henson from the Microsoft Cloud Supply Chain for their advice and support.
§ INTELLIGENT FULFILLMENT SYSTEM
In this section, we present a partial formulation of the optimization in the Intelligent Fulfillment System that assigns and ships servers from the warehouse to the data centers.
§.§ Main Decisions
We introduce the following variables:
* z_dt∈{0,1}: equals 1 if demand d docks on day t, and 0 otherwise
* u_dr∈{0,1}: equals 1 if demand d docks on row r, and 0 otherwise
* w_ds∈{0,1}: equals 1 if d is fulfilled using supplier s, and 0 otherwise
* y_d,dc,t∈{0,1}: equals 1 if d docks at datacenter dc on day t, and 0 otherwise.
* v_d,s,t≥ 0: auxiliary variable linking the docking day and the supplier choice; it is positive only if demand d docks on day t using supplier s (see the auxiliary constraints below)
§.§ Constraints
This section describes some of the constraints in the formulation.
Docking day. The docking for each demand takes place on a single day.
∑_t z_dt≤ 1 ∀ d
Datacenter dockings. For each demand d, we dock at a datacenter dc on a specific day t only if the selected row belongs to that datacenter dc and the selected day is that particular day t.
∑_dc y_d,dc,t≤ z_dt ∀ d, t
∑_t y_d,dc,t = ∑_r ∈ rows(dc) u_dr ∀ d,dc
Datacenters' daily capacities. There are restrictions restr on the daily amount of dockings that sets of datacenters can handle. Let R_d denote the number of racks required for demand d.
∑_d, dc ∈ DC(restr) y_d,dc,t· R_d ≤DockRestrAvailCap(restr,t) ∀ restr ∈ Restrictions, t
Single supplier. Each demand must be fulfilled by a single supplier. A row is selected for a demand only if a supplier has been found.
∑_s w_ds≤ 1 ∀ d
u_dr≤∑_s w_ds ∀ d,r
Auxiliary supplier variables. Connecting variables v_dst with the rest of the variables.
z_dt = ∑_s v_dst ∀ d,t
w_ds = ∑_t v_dst ∀ d,s
Supply availability. We have a set of supply pools with a certain capacity (amount of available supply) evaluated at times ct. We need to make sure that the supply s we consume from each supply pool sp is available at the time t that we consume it. The time where each supply becomes available depends on its lead time.
∑_d, s ∈ sp, t ≤ leadtime(ct, d, s) v_dst≤Available_Supply(sp,ct) ∀ sp, ct
Overrides. Some demand-supply combinations might be undesirable or disallowed for some reason. These can be explicitly blocked. Let B denote the set of blocked pairs.
w_ds = 0 ∀ (d,s) ∈ B
§.§ Objective
Our goal is to minimize the total cost which is the aggregate of multiple components, including the cost of docking too early or too late compared to the ideal dock-date of each demand, the cost of not fulfilling demands, and the shipping cost, among others.
DockCost = ∑_d,t z_dt·Demand_Day_DockCost(d,t)
NoDockCost = ∑_d (1-∑_t z_dt) ·Unsatisfied_Cost(d)
ShippingCost = ∑_d,s w_ds·Transit_Ship_Cost(d,s)
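For illustration, the fragment above (the docking-day and single-supplier constraints together with the three objective components) could be written with gurobipy as in the following sketch; the set and parameter names are placeholders and do not reflect the actual IFS data schema.

import gurobipy as gp
from gurobipy import GRB

def build_fragment(demands, days, suppliers, dock_cost, unsat_cost, ship_cost):
    # Illustrative fragment of the fulfillment model; data containers are placeholders.
    m = gp.Model("ifs_fragment")
    z = m.addVars(demands, days, vtype=GRB.BINARY, name="z")        # demand d docks on day t
    w = m.addVars(demands, suppliers, vtype=GRB.BINARY, name="w")   # demand d uses supplier s
    # Docking day: each demand docks on at most one day.
    m.addConstrs((z.sum(d, "*") <= 1 for d in demands), name="dock_day")
    # Single supplier: each demand is fulfilled by at most one supplier.
    m.addConstrs((w.sum(d, "*") <= 1 for d in demands), name="single_supplier")
    # Objective: docking cost + unfulfilled-demand penalty + shipping cost.
    m.setObjective(
        gp.quicksum(dock_cost[d, t] * z[d, t] for d in demands for t in days)
        + gp.quicksum(unsat_cost[d] * (1 - z.sum(d, "*")) for d in demands)
        + gp.quicksum(ship_cost[d, s] * w[d, s] for d in demands for s in suppliers),
        GRB.MINIMIZE)
    return m, z, w

The remaining constraints (datacenter dockings, daily capacities, supply availability, overrides) follow the same addVars/addConstrs pattern over the corresponding index sets.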
§ ENGINEERING DETAILS
Figure <ref>, at the end of this document, presents a detailed screenshot of with IFS, including intermediate results for illustration purposes.
§.§ Useful Tricks
SQL: Many LLMs are trained with SQL database. Hence, saving optimization input and output data into SQL could make the system easier to use and more explainable.
Logical simplification:
If the prompt is not designed well, the LLM might make many simple logical mistakes (e.g., “not use" vs. “use", before vs. after, etc.).
Intermediate outputs. When dealing with complex prompts, providing intermediate outputs can help keep the LLM on track. By returning intermediate results or steps, the LLM can check the consistency of its process, making it easier to debug and refine.
§.§ Failed Attempts
Chain of thought (CoT) failures. Unlike many recent studies <cit.> that have found that LLMs have strong CoT abilities, we found CoT is not helpful for writing complex code. This is another reason why we integrated the helper functions in the application-specific tools, which outperformed CoT. Our hypothesis is that if the LLM makes one mistake in the thinking chain, then the whole response would be wrong because correcting its own mistakes is hard.
Overuse of prompt engineering: While prompt engineering can often lead to improved results, overdoing it can sometimes lead to worse outcomes. When the prompts become too complex or too specific, the LLM might not understand them correctly or might overfit to the specific prompt structure, limiting its ability to handle a variety of questions.
§ COFFEE DISTRIBUTION EXAMPLE
§.§ Code
§.§ Question and Ground Truth Macros
|
http://arxiv.org/abs/2307.04097v1 | 20230709045910 | Restricted Generative Projection for One-Class Classification and Anomaly Detection | [
"Feng Xiao",
"Ruoyu Sun",
"Jicong Fan"
] | cs.LG | [
"cs.LG"
] |
Restricted Generative Projection for One-Class Classification and Anomaly Detection
Feng Xiao, Ruoyu Sun, Jicong Fan Member, IEEE,
The authors are with the School of Data Science, The Chinese University of Hong Kong, Shenzhen, and Shenzhen Research Institute of Big Data. E-mail: [email protected].
May 17, 2023
====================================================================================================================================================================================================================================================================================================
We present a simple framework for one-class classification and anomaly detection. The core idea is to learn a mapping to transform the unknown distribution of training (normal) data to a known target distribution. Crucially, the target distribution should be sufficiently simple, compact, and informative. The simplicity is to ensure that we can sample from the distribution easily, the compactness is to ensure that the decision boundary between normal data and abnormal data is clear and reliable, and the informativeness is to ensure that the transformed data preserve the important information of the original data. Therefore, we propose to use truncated Gaussian, uniform in hypersphere, uniform on hypersphere, or uniform between hyperspheres, as the target distribution. We then minimize the distance between the transformed data distribution and the target distribution while keeping the reconstruction error for the original data small enough. Comparative studies on multiple benchmark datasets verify the effectiveness of our methods in comparison to baselines.
Anomaly Detection, One-class Classification, Generative Projection.
§ INTRODUCTION
Anomaly detection (AD) under the setting of one-class classification aims to distinguish normal data and abnormal data using a model trained on only normal data <cit.>. AD is useful in numerous real problems such as intrusion detection for video surveillance, fraud detection in finance, and fault detection for sensors. Many AD methods have been proposed in the past decades <cit.>. For instance, Schölkopf et al.<cit.> proposed the one-class support vector machine (OC-SVM) that finds, in a high-dimensional kernel feature space, a hyperplane yielding a large distance between the normal training data and the origin. Tax et al.<cit.> presented the support vector data description (SVDD), which obtains a spherically shaped boundary (with minimum volume) around the normal training data to identify abnormal samples. Hu et al.<cit.> propose a new kernel function to estimate samples’ local densities and propose a weighted
neighborhood density estimation to increase the robustness to changes in the neighborhood size.
There are also many deep learning based AD methods including unsupervised AD methods <cit.> and semi-supervised AD methods <cit.>.
Deep learning based AD methods may be organized into three categories. The first category is based on compression and reconstruction. These methods usually use an autoencoder <cit.> to learn a low-dimensional representation to reconstruct the high-dimensional data <cit.>. The autoencoder learned from the normal training data is expected to have a much higher reconstruction error on unknown abnormal data than on normal data.
The second category is based on the combination of classical one-class classification <cit.> and deep learning <cit.>. For instance, Ruff et al.<cit.> proposed a method called deep one-class SVDD. The main idea is to use deep learning to construct a minimum-radius hypersphere to include all the training data, while the unknown abnormal data are expected to fall outside.
The last category is based on generative learning or adversarial learning
<cit.>.
For example, Perera et al. <cit.> proposed to use the generative adversarial network (GAN) <cit.> with constrained latent representation to detect anomalies for image data. Goyal et al.<cit.> presented a method called deep robust one-class classification (DROCC) and the method aims to find a low-dimensional manifold to accommodate the normal data via an adversarial optimization approach.
Although deep learning based AD methods have shown promising performance on various datasets, they still have limitations. For instance, the one-class classification methods such as Deep SVDD <cit.> only ensure that a hypersphere could include the normal data but cannot guarantee that the normal data are distributed evenly in the hypersphere, which may lead to large empty regions in the hypersphere and hence yield incorrect decision boundary (see Fig.<ref>). Moreover, the popular hypersphere assumption may not be the best one for providing a compact decision boundary (see Fig.<ref> and Tab.<ref>). The adversarial learning methods such as <cit.> may suffer from instability in optimization.
In this work, we present a restricted generative projection (RGP) framework for one-class classification and anomaly detection. The main idea is to train a deep neural network to convert the distribution of normal training data to a target distribution that is simple, compact, and informative, which will provide a reliable decision boundary to identify abnormal data from normal data. There are many choices for the target distribution, such as truncated Gaussian and uniform on hypersphere. Our contributions are summarized as follows.
* We present a novel framework called RGP for one-class classification and anomaly detection. It aims to transform the data distribution to some target distributions that are easy to be violated by unknown abnormal data.
* We provide four simple, compact, and informative target distributions, analyze their properties theoretically, and show how to sample from them efficiently.
* We propose two extensions for our original RGP method.
* We conduct extensive experiments (on eight benchmark datasets) to compare the performance of different target distributions and compare our method with state-of-the-art baselines. The results verify the effectiveness of our methods.
The rest of this paper is organized as follows. Section <ref> introduces the related work.
Section <ref> details our proposed methods.
Section <ref> presents two extensions of the proposed method.
Section <ref> shows the experiments.
Section <ref> draws conclusions for this paper.
§ RELATED WORK
Before elaborating on our method, we briefly review in this section deep one-class classification, autoencoder-based AD methods, and maximum mean discrepancy (MMD)<cit.>.
We also discuss the connection and difference between our method and these related works.
§.§ Deep One-Class Classification
The Deep SVDD proposed by <cit.> uses a neural network to learn a minimum-radius hypersphere to enclose the normal training data, i.e.,
minimize_𝒲1/n∑^n_i=1‖ϕ(𝐱_i; 𝒲) - 𝐜‖^2 + λ/2∑^L_l=1‖𝐖_l ‖^2_F
where 𝐜∈ℝ^d is a predefined centroid and 𝒲={𝐖_1,…,𝐖_L} denotes the parameters of the L-layer neural network ϕ, and λ is a regularization hyperparameter. In (<ref>), to avoid model collapse, bias terms should not be used and activation functions should be bounded <cit.>. There are also a few variants of Deep SVDD proposed for semi-supervised one-class classification and anomaly detection <cit.>.
Both our method and Deep SVDD as well as its variants aim to project the normal training data into some space such that a decision boundary between normal data and unknown abnormal data can be found easily. However, the sum-of-square minimization in Deep SVDD and its variants only ensures that the projected data are sufficiently close to the centroid 𝐜 in the sense of Euclidean distance and does not guarantee that the data are sufficiently or evenly distributed in the hypersphere centered at 𝐜. Thus, in the hypersphere, there could be holes or big empty regions that do not contain any normal data, and hence it is not suitable to assume that the whole space enclosed by the hypersphere is completely a normal space. In other words, the optimal decision boundary between normal data and abnormal data is actually very different from the hypersphere. An intuitive example is shown in Fig.<ref>. We see that there is a large empty space in the hypersphere learned by Deep SVDD. In contrast, the transformed data of our method are sufficiently distributed.
§.§ Autoencoder-based AD Methods
Our method is similar to but quite different from the variational autoencoder (VAE) <cit.>. Although our model is an autoencoder, the main goal is not to represent or generate data; instead, our model aims to convert distribution to find a reliable decision boundary for anomaly detection. More importantly, the latent distribution in VAE is often Gaussian and not bounded while the latent distribution in our model is more general and bounded, which is essential for anomaly detection. In addition, the optimizations of VAE and our method are also different: VAE involves KL-divergence while our method involves maximum mean discrepancy <cit.>.
It is worth noting that similar to our method, Perera et al.<cit.> also considered bounded latent distribution in autoencoder for anomaly detection. They proposed to train a denoising autoencoder with a hyper-cube supported latent space, via adversarial training. The latent distribution and optimization are different from ours. In addition, the latent distributions of our method, such as uniform on hypersphere, are more compact than the multi-dimensional uniform latent distribution of their method.
Compared with the autoencoder based anomaly detection method NAE <cit.> that uses reconstruction error to normalize autoencoder, our method pays more attention to learning a mapping that can transform the unknown data distribution into a simple and compact target distribution. The ideas are orthogonal.
§.§ Maximum Mean Discrepancy
In statistics, maximum mean discrepancy (MMD)<cit.> is often used for Two-Sample test and its principle is to find a function that assumes different expectations on two different distributions:
MMD[ℱ, p,q] = sup_‖ f ‖_ℋ≤ 1(𝔼_p[f(𝐱)]-𝔼_q[f(𝐲)]),
where p, q are probability distributions, ℱ is a class of functions f:𝕏→ℝ and ℋ denotes a reproducing kernel Hilbert space.
Using the kernel trick, MMD can be represented as a simple loss function to measure the discrepancy between two distributions by finite samples, which is easy to apply to deep learning and can be efficiently trained by gradient descent. Based on the aforementioned advantages of MMD, Li et al.<cit.> proposed generative moment matching networks (GMMNs), which leads to a simpler optimization objective compared to the min-max optimization of GAN <cit.>.
Although both our method and GMMNs <cit.> minimize the MMD between data distribution and prior distribution, our goal is not generating new data but detecting anomalies. In addition, we consider a few bounded target distributions and analyze their sampling properties. More importantly, our method has very competitive performance when compared with SOTA methods of anomaly detection and one-class classification.
§ RESTRICTED GENERATIVE PROJECTION
In this section, we introduce our RGP framework, bounded target distributions, and the computation of anomaly scores.
§.§ Restricted Distribution Projection
Suppose we have a set of m-dimensional training data 𝐗={𝐱_1,𝐱_2,…,𝐱_n }
drawn from an unknown bounded distribution 𝒟_𝐱 and any samples drawn from 𝒟_𝐱 are normal data. We want to train a model ℳ on 𝐗 to determine whether a test data 𝐱_new is drawn from 𝒟_𝐱 or not. One may consider estimating the density function (denoted by p_𝐱) of 𝒟_𝐱 using some techniques such as kernel density estimation <cit.>. Suppose the estimation p̂_𝐱 is good enough, then one can determine whether 𝐱_new is normal or not according to the value of p̂_𝐱(𝐱_new): if p̂_𝐱(𝐱_new) is zero or close to zero, 𝐱_new is an abnormal data point; otherwise, 𝐱_new is a normal data point [Here we assume that the distributions of normal data and abnormal data do not overlap. Otherwise, it is difficult to determine whether a single point is normal or not.]. However, the dimensionality of the data is often high and hence it is very difficult to obtain a good estimation p̂_𝐱.
We propose to learn a mapping 𝒯:ℝ^m→ℝ^d to transform the unknown bounded distribution 𝒟_𝐱 to a known distribution 𝒟_𝐳 while there still exists a mapping 𝒯':ℝ^d→ℝ^m that can recover 𝒟_𝐱 from 𝒟_𝐳 approximately.
Let p_𝐳 be the density function of 𝒟_𝐳. Then we can determine whether 𝐱_new is normal or not according to the value of p_𝐳(𝒯(𝐱_new)). To be more precise, we want to solve the following problem
minimize_𝒯, 𝒯' ℳ(𝒯(𝒟_𝐱), 𝒟_𝐳)+λℳ(𝒯'(𝒯(𝒟_𝐱)),𝒟_𝐱),
where ℳ(·, ·) denotes some distance metric between two distributions and λ is a trade-off parameter for the two terms. Note that if λ=0, 𝒯 may convert any distribution to 𝒟_𝐳 and lose the ability of distinguishing normal data and abnormal data.
Based on the universal approximation theorems <cit.> and substantial success of neural networks, we use deep neural networks (DNN) to model 𝒯 and 𝒯' respectively. Let f_θ and g_ϕ be two DNNs with parameters θ and ϕ respectively. We solve
minimize_θ, ϕ ℳ(𝒟_f_θ(𝐱), 𝒟_𝐳)+λℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱),
where f_θ and g_ϕ serve as encoder and decoder respectively.
However, problem (<ref>) is intractable because 𝒟_𝐱 is unknown and 𝒟_f_θ(𝐱), 𝒟_g_ϕ(f_θ(𝐱)) cannot be computed analytically. Note that the samples of 𝒟_𝐱 and 𝒟_g_ϕ(f_θ(𝐱)) are given and paired. Then the second term in the objective of (<ref>) can be replaced by a sample reconstruction error such as 1/n∑_i=1^n‖𝐱_i-g_ϕ(f_θ(𝐱_i))‖^2. On the other hand, we can also sample from 𝒟_f_θ(𝐱) and 𝒟_𝐳 easily but their samples are not paired. Hence, the metric ℳ in the first term of the objective of (<ref>) should be able to measure the distance between two distributions using their finite samples. To this end, we propose to use the kernel maximum mean discrepancy (MMD)<cit.> to measure the distance between 𝒟_f_θ(𝐱) and 𝒟_𝐳.
Its empirical estimate is
MMD^2[ℱ, X,Y] = 1/m(m-1)∑_i=1^m∑_j≠ i^m k(𝐱_i, 𝐱_j)
+ 1/n(n-1)∑_i=1^n∑_j≠ i^n k(𝐲_i, 𝐲_j)
- 2/mn∑_i=1^m∑_j=1^n k(𝐱_i, 𝐲_j),
where X = {𝐱_1, …, 𝐱_m} and Y = {𝐲_1, …, 𝐲_n} are samples consisting of i.i.d observations drawn from p and q, respectively. k(·, ·) denotes a kernel function, e.g., k(𝐱, 𝐲)=exp(-γ𝐱-𝐲^2), a Gaussian kernel.
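As a reference implementation, the empirical estimate above can be computed in a few lines; the sketch below uses PyTorch tensors and the Gaussian kernel, which is an assumption about the framework rather than a prescription of the paper.

import torch

def mmd2(x, y, gamma):
    # Unbiased empirical MMD^2 between samples x (m, d) and y (n, d) with a Gaussian kernel.
    m, n = x.shape[0], y.shape[0]
    kxx = torch.exp(-gamma * torch.cdist(x, x) ** 2)
    kyy = torch.exp(-gamma * torch.cdist(y, y) ** 2)
    kxy = torch.exp(-gamma * torch.cdist(x, y) ** 2)
    term_xx = (kxx.sum() - kxx.diagonal().sum()) / (m * (m - 1))   # keep only j != i terms
    term_yy = (kyy.sum() - kyy.diagonal().sum()) / (n * (n - 1))
    return term_xx + term_yy - 2.0 * kxy.mean()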
Based on the above analysis, we obtain an approximation for (<ref>) as
minimize_θ, ϕ MMD^2(𝐙_θ,𝐙_T)+ λ/n∑_i=1^n‖𝐱_i-g_ϕ(f_θ(𝐱_i))‖^2,
where 𝐙_θ={f_θ(𝐱_1),f_θ(𝐱_2),…,f_θ(𝐱_n) } and 𝐙_T={𝐳_i:𝐳_i∼𝒟_𝐳, i=1,…,n}.
The first term of the objective function in (<ref>) makes f_θ learn the mapping 𝒯 from data distribution 𝒟_𝐱 to target distribution 𝒟_𝐳 and the second term ensures that f_θ can preserve the main information of observations provided that λ is sufficiently large.
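Putting the two terms together, one training step of the objective above could look like the following sketch, where encoder, decoder, and sample_target stand for f_θ, g_ϕ, and a sampler of the target distribution 𝒟_𝐳 (all placeholders), and mmd2 is the estimator sketched above.

def rgp_step(encoder, decoder, optimizer, x_batch, sample_target, lam, gamma):
    # x_batch is assumed to be flattened to shape (batch, features).
    z = encoder(x_batch)                                  # Z_theta
    z_target = sample_target(z.shape[0], z.shape[1])      # Z_T drawn from the target distribution
    recon = decoder(z)
    loss = mmd2(z, z_target, gamma) + lam * ((x_batch - recon) ** 2).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()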
§.§ Bounded Target Distributions
Now we introduce four examples of simple and compact 𝒟_𝐳 for (<ref>). The four distributions are Gaussian in Hypersphere (GiHS), Uniform in Hypersphere (UiHS), Uniform between Hyperspheres (UbHS), and
Uniform on Hypersphere (UoHS). Their 2-dimensional examples are visualized in Fig.<ref>.
GiHS (Fig.<ref>.a) is actually a truncated Gaussian. Suppose we want to draw n samples from GiHS. A simple approach is drawing (1+ρ)n samples from a standard d-dimensional Gaussian and discarding the ρ n samples with larger ℓ_2 norms. The maximum ℓ_2 norm of the remaining n points is the radius of the hypersphere. One may also use the inverse transform method of <cit.>. We have the following results.
Suppose 𝐳_1,𝐳_2,…,𝐳_n are sampled from 𝒩(0,𝐈_d) independently. Then for any r>√(d), we have
Pr(‖𝐳_j‖≥ r) ≤exp(-0.5α), j∈[n],
and
Pr(max_1≤ j≤ n‖𝐳_j‖≤ r)≥ 1-nexp(-0.5α),
where α=√(d+2r^2)-√(d).
Inequality (<ref>) means a hypersphere of radius r can include all the n samples with a high probability if r is sufficiently large. On the other hand, according to (<ref>), if we expect to get n samples in a hypersphere of radius r, we need to sample about n/(1-exp(-0.5α)) points from 𝒩(0,𝐈_d). If d is larger, we need to sample more points.
UiHS (Fig.<ref>.b) is a hyperball in which all the samples are distributed uniformly. To sample from UiHS, we first need to sample from 𝒰(-r,r)^d. Then we discard all the data points outsides the radius-r hyperball centered at the origin.
The following proposition (the proof is in Appendix) shows some probability result of sampling from a d-dimensional uniform distribution.
Suppose 𝐳_1,𝐳_2,…,𝐳_n are sampled from 𝒰(-r,r)^d independently. Then for any t>0, we have
Pr(‖𝐳_j‖≥ rt) ≤ d/(3t^2), j∈[n],
and
Pr(max_1≤ j≤ n‖𝐳_j‖≤ rt)≥ 1-nd/3t^2.
Inequality (<ref>) means a hypersphere of radius rt can include all the n samples with probability at least 1-nd/(3t^2). On the other hand, inequality (<ref>) indicates that if we draw n/(1-d/(3t^2)) samples from 𝒰(-r,r)^d, the expected number of samples falling into a hypersphere of radius rt is at least n.
Actually, sampling from UiHS is closely related to the Curse of Dimensionality and we need to sample a large number of points from 𝒰(-r,r)^d if d is large because only a small volume of the hypercube is inside the hyperball. To be more precisely, letting V_hypercube be the volume of a hypercube with length 2r and V_hyperball be the volume of a hyperball with radius r, we have
V_hyperball/V_hypercube = π^{d/2}/(d 2^{d-1}Γ(d/2)) ≜η,
where Γ is the gamma function. Therefore, we need to draw n/η samples from 𝒰(-r,r)^d to ensure that the expected number of samples included in the hyperball is n, where η is small if d is large.
UbHS (Fig.<ref>.c) can be obtained via UiHS. We first sample from UiHS and then remove all samples included by a smaller hypersphere. Since the volume ratio of two hyperballs with radius r and r'is (r/r')^d, where r'<r, we need to draw n/(1-(r'/r)^d) samples from UiHS to ensure that the expected number of samples between the two hyperspheres is n. Compared with GiHS and UiHS, UbHS is more compact and hence provides larger abnormal space for abnormal data to fall in.
UoHS (Fig.<ref>.d) can be easily obtained via sampling from 𝒩(0,𝐈_d). Specifically, for every 𝐳_i drawn from 𝒩(0,𝐈_d), we normalize it as 𝐳_i←r𝐳_i/‖𝐳_i‖, where r is the predefined radius of the hypersphere. UoHS is a special case of UbHS when r'=r.
To quantify the compactness of the four target distributions, we define density ρ as the number of data points in unit volume, i.e., ρ=n/V. Consequently, the densities of the four target distributions are reported in Table <ref>.
Since UoHS is more compact than UbHS as well as GiHS and UiHS, it should have better performance in anomaly detection. Indeed, our numerical results show that UoHS outperforms the others in most cases.
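For concreteness, the sampling procedures described above could be implemented as in the following NumPy sketch; the oversampling factors are illustrative and, as discussed for UiHS, become prohibitive when d is large.

import numpy as np

def sample_gihs(n, d, r, oversample=4):
    # Truncated Gaussian: draw from N(0, I_d) and keep points inside the radius-r hyperball.
    z = np.random.randn(oversample * n, d)
    return z[np.linalg.norm(z, axis=1) <= r][:n]

def sample_uihs(n, d, r, oversample=20):
    # Uniform in the hyperball: rejection sampling from the cube U(-r, r)^d.
    z = np.random.uniform(-r, r, size=(oversample * n, d))
    return z[np.linalg.norm(z, axis=1) <= r][:n]

def sample_ubhs(n, d, r, r_inner, oversample=4):
    # Uniform between hyperspheres: sample UiHS and drop points inside the inner ball.
    z = sample_uihs(oversample * n, d, r)
    return z[np.linalg.norm(z, axis=1) >= r_inner][:n]

def sample_uohs(n, d, r):
    # Uniform on the hypersphere: normalize Gaussian samples to radius r.
    z = np.random.randn(n, d)
    return r * z / np.linalg.norm(z, axis=1, keepdims=True)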
§.§ Anomaly Scores
In the test stage, we only use the trained f_θ^* to calculate anomaly scores. For a given test sample
𝐱_new, we define anomaly score s for each target distribution by
s(𝐱_new)=
|‖ f_θ^*(𝐱_new) ‖ - r |, for UoHS;
‖ f_θ^*(𝐱_new) ‖, for GiHS or UiHS;
(‖ f_θ^*(𝐱_new) ‖ - r)· (‖ f_θ^*(𝐱_new) ‖ - r'), for UbHS.
There are clear decision boundaries according to (<ref>) and they can be regarded as `hard boundaries' between normal samples and abnormal samples. However, these `hard boundaries' only work in ideal cases where the projected data exactly match the target distributions. In real cases, due to the noise of data or the non-optimality of optimization, the projected data do not exactly match the target distributions. Therefore, we further propose a `soft boundary' for calculating anomaly scores. Specifically, for a given test sample 𝐱_new, we define anomaly score s for all four target distributions as
s(𝐱_new)= 1/k∑_i ∈ N_k‖ f_θ^*(𝐱_new) - f_θ^*(𝐱_i) ‖
where 𝐱_i denotes a single sample with index i in the training data and N_k denotes the index set of the k nearest training (projected) samples to f_θ^*(𝐱_new).
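In other words, the soft score is a k-nearest-neighbor average distance in the latent space; a minimal NumPy sketch is the following.

import numpy as np

def soft_score(z_new, z_train, k):
    # z_new: (d,) projected test point; z_train: (n, d) projected training data.
    dists = np.linalg.norm(z_train - z_new, axis=1)
    return np.sort(dists)[:k].mean()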
Empirically, in the experiments, we found that (<ref>) has better performance than (<ref>) in most cases. Table <ref>, <ref>, <ref> only report the results from (<ref>). The comparison results between (<ref>) and (<ref>) are provided in Section <ref>.
We call our method Restricted Generative Projection (RGP), which has four variants, denoted by RGP-GiHS, RGP-UiHS, RGP-UbHS, and RGP-UoHS respectively, though any bounded target distribution applies.
§ EXTENSIONS OF RGP
In this section, based on the general objective in (<ref>), we provide two variants of RGP.
§.§ Double-MMD based RGP
In the objective function of RGP defined by (<ref>), the second term is the reconstruction error for 𝐗, which is only a special example of approximation for the second term in the objective function of (<ref>), i.e., ℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱). Alternatively, we can use MMD to approximate ℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱), which yields the following Double-MMD RGP:
minimize_θ, ϕ MMD^2(𝐙_θ,𝐙_T)+ λMMD^2(g_ϕ(𝐙_θ),𝐗).
Compared to the sum of squares reconstruction error used in (<ref>), MMD^2(g_ϕ(𝐙_θ),𝐗) is a weaker approximation for ℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱),
because it does not exploit the fact that the samples in 𝐙_θ and 𝐗 are paired. Thus, the projection of Double-MMD RGP cannot preserve sufficient information of 𝐗,
which will reduce the detection accuracy. Indeed, as shown by the experimental results in Section
<ref>, our original RGP outperforms Double-MMD RGP.
§.§ Sinkhorn Distance based RGP
Besides MMD, the optimal transport theory can also be used to construct a notion of distance between pairs of probability distributions. In particular, the Wasserstein distance <cit.>, also known as “Earth Mover’s Distance”, has appealing theoretical properties and a very intuitive formulation
𝒲 = ⟨γ^*, 𝐂⟩_F
where 𝐂 denotes a metric cost matrix and γ* is the optimal transport plan.
Finding the optimal transport plan γ^* might appear to be a hard problem. In particular, the computational cost of the Wasserstein distance can quickly become prohibitive when the data dimension increases. In order to speed up the calculation of the Wasserstein distance, Cuturi <cit.> proposed the Sinkhorn distance, which regularizes the optimal transport problem with an entropic penalty and uses Sinkhorn's algorithm <cit.> to approximately calculate the Wasserstein distance.
Now, if replacing the first term in (<ref>) with the Sinkhorn distance<cit.>, we can get a new optimization objective
minimize_θ,ϕ ⟨γ, ℳ(𝐙_θ ,𝐙_T) ⟩_F + ϵ∑_i,jγ_ijlog(γ_ij) + λ/n∑_i=1^n ‖𝐱_i-g_ϕ(f_θ(𝐱_i))‖^2
subject to γ1 = 𝐚, γ^T 1 = 𝐛, γ≥ 0
where ℳ(𝐙_θ ,𝐙_T) denotes the metric cost matrix between 𝐙_θ and 𝐙_T, ϵ is the coefficient of entropic regularization term, 𝐚 and 𝐛 are two probability vectors and satisfy 𝐚^T1=1 and 𝐛^T1=1 respectively. We call this method Sinkhorn RGP.
Compared to MMD, Sinkhorn distance is more effective in quantifying the difference between two distributions using their finite samples. Therefore, the Sinkhorn RGP usually has better performance than our original RGP (<ref>), which will be shown by the experimental results in Section <ref>.
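As an illustration, the Sinkhorn term above could be evaluated with an off-the-shelf entropic optimal-transport solver; the sketch below assumes the POT (Python Optimal Transport) library and uniform weights 𝐚, 𝐛, as used in our experiments. Inside a training loop, a differentiable Sinkhorn iteration would be used instead.

import numpy as np
import ot  # POT: Python Optimal Transport

def sinkhorn_term(z_batch, z_target, eps):
    n, m = z_batch.shape[0], z_target.shape[0]
    a = np.full(n, 1.0 / n)            # uniform weights a
    b = np.full(m, 1.0 / m)            # uniform weights b
    M = ot.dist(z_batch, z_target)     # squared Euclidean cost matrix by default
    return ot.sinkhorn2(a, b, M, reg=eps)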
§ EXPERIMENTS
§.§ Datasets and Baselines
We compare the proposed method with several state-of-the-art methods of anomaly detection on five tabular datasets and three widely-used image datasets for one-class classification. The datasets are detailed as follows.
* Abalone[http://archive.ics.uci.edu/ml/datasets/Abalone]<cit.> is a dataset of physical measurements of abalone to predict the age. It contains 1,920 instances with 8 attributes.
* Arrhythmia[http://odds.cs.stonybrook.edu/arrhythmia-dataset/]<cit.> is an ECG dataset. It was used to identify arrhythmic samples in five classes and contains 452 instances with 279 attributes.
* Thyroid[http://odds.cs.stonybrook.edu/thyroid-disease-dataset/]<cit.> is a hypothyroid disease dataset that contains 3,772 instances with 6 attributes.
* KDD[https://kdd.ics.uci.edu/databases/kddcup99/]<cit.> is the KDDCUP99 10 percent dataset from the UCI repository and contains 34 continuous attributes and 7 categorical attributes. The attack samples are regarded as normal data, and the non-attack samples are regarded as abnormal data.
* KDDRev is derived from the KDDCUP99 10 percent dataset. The non-attack samples are regarded as normal data, and the attack samples are regarded as abnormal data.
* MNIST[http://yann.lecun.com/exdb/mnist/]<cit.> is a well-known dataset of handwritten digits and totally contains 70,000 grey-scale images in 10 classes from number 0-9.
* Fashion-MNIST[https://www.kaggle.com/datasets/zalando-research/fashionmnist]<cit.> contains 70,000 grey-scale fashion images (e.g. T-shirt and bag) in 10 classes.
* CIFAR-10[https://www.cs.toronto.edu/ kriz/cifar.html]<cit.> is a widely-used benchmark for image anomaly detection. It contains 60,000 color images in 10 classes.
We compare our method with three classic shallow models, four deep autoencoder based methods, three deep generative model based methods, and some latest anomaly detection methods.
* Classic shallow models: local outlier factor (LOF)<cit.>, one-class support vector machine (OC-SVM)<cit.>, isolation forest (IF)<cit.>.
* Deep autoencoder based methods: denoising auto-encoder (DAE)<cit.>, DCAE<cit.>, E2E-AE, DAGMM<cit.>, DCN <cit.>.
* Deep generative model based methods: AnoGAN<cit.>, ADGAN<cit.>, OCGAN <cit.>.
* Some latest AD methods: DeepSVDD<cit.>, GOAD <cit.>, DROCC <cit.>, HRN <cit.>, SCADN <cit.>, NeuTraL AD <cit.>, GOCC <cit.>, PLAD <cit.>, MOCCA <cit.>.
§.§ Implementation Details and Evaluation Metrics
In this section, we introduce the implementation details of the proposed method RGP and describe experimental settings for image and tabular datasets. Note that our method neither uses any abnormal data during the training process nor utilizes any pre-trained feature extractors.
For the five tabular datasets (Abalone, Arrhythmia, Thyroid, KDD, KDDRev), in our method, f_θ and g_ϕ are both MLPs. We follow the dataset preparation of <cit.> to preprocess the tabular datasets for one-class classification task. The hyper-parameter λ is set to 1.0 for the Abalone, Arrhythmia and Thyroid. For the KDD and KDDRev, λ is set to 0.0001.
For the three image datasets (MNIST, Fashion-MNIST, CIFAR-10), in our method, f_θ and g_ϕ are both CNNs. Since the three image datasets contain 10 different classes, we conduct 10 independent one-class classification tasks on both datasets: one class is regarded as normal data and the remaining nine classes are regarded as abnormal data. In each task on MNIST, there are about 6,000 training samples and 10000 testing samples. In each task on CIFAR-10, there are 5,000 training samples and 10,000 testing samples. In each task on Fashion-MNIST, there are 6,000 training samples and 10,000 testing samples. The hyper-parameter λ is chosen from {1.0, 0.5, 0.1, 0.01, 0.001, 0.0001} and varies for different classes.
In our method, regarding the radius r of GiHS and UiHS, we first generate a large number (denoted by N) of samples from Gaussian or uniform, sort the samples according to their ℓ_2 norms, and set r to be the pN-th smallest ℓ_2 norm, where p=0.9. For UbHS, we need to use the aforementioned method to determine an r with p=0.95 and a r' with p=0.05. We see that {r, r'} are not related to the actual data, they are determined purely by the target distribution.
In each iteration (mini-batch) of the optimization for all four target distributions, we resample 𝐙_T according to r. For UoHS, we draw samples from Gaussian and normalize them to have unit ℓ_2 norm, then they lie on a unit hypersphere uniformly. The procedure is repeated in each iteration (mini-batch) of the optimization.
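The radius-selection rule described above amounts to taking an empirical quantile of the sample norms; a small sketch:

import numpy as np

def pick_radius(base_samples, p):
    # base_samples: (N, d) drawn from the base Gaussian or uniform distribution.
    norms = np.sort(np.linalg.norm(base_samples, axis=1))
    return norms[int(p * len(norms)) - 1]   # the (pN)-th smallest norm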
For hyper-parameter k on the testing stage, we select k=3 for Thyroid, Arrhythmia, KDD, KDDRev, and select k=5 for Abalone dataset. For three image datasets, the hyper-parameter k is chosen from {1, 3, 5, 10} and varies for different classes.
We use Adam <cit.> as the optimizer in our method. For MNIST, Fashion-MNIST, CIFAR-10, Arrhythmia and KDD, the learning rate is set to 0.0001. For Abalone, Thyroid and KDDRev, the learning rate is set to 0.001. Table <ref> shows the detailed implementation settings of RGP on all datasets. All experiments were run on AMD EPYC CPU with 64 cores and with NVIDIA Tesla A100 GPU, CUDA 11.6.
To evaluate the performance of all methods, we follow the previous works such as <cit.> and <cit.> to use AUC (Area Under the ROC curve) for image datasets and F1-score for tabular datasets.
Note that when conducting experiments on the tabular datasets, we found that most of the strong baselines, like DROCC <cit.>, NeuTral AD <cit.>, GOCC <cit.>, used the F1-score and we just followed this convention.
In our method, we get the threshold via simply calculating the dispersion of training data in latent space. Specifically, we first calculated the scores s(𝐗) on training data 𝐗 using (12) or (13), and then sorted s(𝐗) in ascending order and set the threshold to be the pN-th smallest score, where p is a probability varying for different datasets.
§.§ Results on Image Datasets
Tables <ref> and <ref> show the comparison results on Fahsion-MNIST and CIFAR-10 respectively. We have the following observations.
* Firstly, in contrast to classic shallow methods such as OC-SVM <cit.> and IF <cit.>, our RGP has significantly higher AUC scores on all classes of Fashion-MNIST and most classes of CIFAR-10. An interesting phenomenon is that most deep learning based methods have inferior performance compared to IF <cit.> on class `Sandal' of Fashion-MNIST and IF <cit.> outperforms all deep learning based methods including ours on class `Deer' of CIFAR-10.
* Our methods outperformed the deep autoencoder based methods and generative model based methods in most cases and have competitive performance compared to the state-of-the-art in all cases.
* RGP has superior performance on most classes of Fashion-MNIST and CIFAR-10 under the setting of UoHS (uniform distribution on hypersphere).
Table <ref> shows the average performance on MNIST, Fashion-MNIST, and CIFAR-10 over all 10 classes to provide an overall comparison. We see that RGP achieves the best average AUC on Fashion-MNSIT and CIFAR-10 among all competitive methods. Four variants of RGP have relatively close average performance on all three image datasets. The experimental results of a single class on MNIST are reported in Appendix.
§.§ Results on Tabular Datasets
In Table <ref>, we report the F1-scores of our methods in comparison to ten baselines on the five tabular datasets. Our four variants of RGP significantly outperform all baseline methods on Arrhythmia, Thyroid, and Abalone. In particular, RGP-GiHS has 23.25%, 12.22%, and 19.58% improvements on the three datasets in terms of F1-score compared to the runner-up, respectively. It is worth mentioning that NeuTraL AD <cit.> and GOCC <cit.> are both specially designed for non-image data but are outperformed by our methods in most cases.
Compared with image datasets, the performance improvements of RGPs on the three tabular datasets are more significant. One possible reason is that, compared to image data, it is easier to convert tabular data to a compact target distribution. Furthermore, we also report the AUC scores on Abalone, Thyroid and Arrhythmia datasets and the results are provided in Appendix.
In addition to the quantitative results, we choose Thyroid (with 6 attributes) as an example and transform the data distribution to 2-dimensional target distributions, which are visualized in Figure <ref>. Plots (a), (b), (c), (d) in Figure <ref> refer to GiHS, UiHS, UbHS, UoHS, respectively. The blue points, orange points, green points, and red points denote samples from target distribution, samples from training data, normal samples from test set, and abnormal samples from test set, respectively. For much clearer illustration, the left figure in each plot of Figure <ref> shows all four kinds of instances and the right figure shows two kinds of instances including normal and abnormal samples from test set.
We see that RGPs are effective to transform the data distribution to the restricted target distributions, though the transformed data do not exactly match the target distributions (it also demonstrates the necessity of using the `soft boundary' defined by (<ref>)).
§.§ Comparison between `soft' and `hard' boundaries
We further explore the performance of two different anomaly scores. Specifically, we compare the `hard boundaries' (<ref>) and `soft boundary' (<ref>) as anomaly scores during the test stage on image datasets and tabular datasets. The results are showed in Figures <ref>, <ref>, <ref>. It can be observed that using `soft boundary' (<ref>) to calculate anomaly score has better performance than using `hard boundaries' (<ref>) on most classes of image and tabular datasets. Nevertheless, using `hard boundaries' to calculate anomaly scores still achieves remarkable performance on some classes. For example, on the class `Ankle-boot' of Fashion-MNIST and the class `Trunk' of CIFAR-10, the best two results are both from RGPs using `hard boundaries' (<ref>) to calculate anomaly score.
§.§ Experiments of Double-MMD RGP and Sinkhorn RGP
We use Double-MMD RGP (<ref>) to conduct experiments and the results are reported in Table <ref>, <ref>. On image datasets, we just consider the target distribution UoHS (Uniform on HyperSphere) for simplicity.
On tabular datasets, we conduct experiments on the proposed four different target distributions.
From the experimental results in Tables <ref>, <ref>, we found that Double-MMD RGP and the original RGP have similar performance on the three tabular datasets, whereas on the image datasets, including Fashion-MNIST and CIFAR-10, there is an apparent performance gap in spite of a large range of adjustment of λ∈{10.0, 5.0, 1.0, 0.5, 0.1, 0.01} for Double-MMD RGP (<ref>). Note that Table <ref> reports the average AUC(%) on all classes of Fashion-MNIST and CIFAR-10; the results on single classes are provided in Appendix.
For this phenomenon, we consider that the tabular datasets in our implementation have fewer features (no more than 279) than the image datasets and that the second term of (<ref>) is a much weaker constraint for preserving data information than that of (<ref>). As a consequence, Double-MMD RGP (<ref>) is able to preserve enough key information on the tabular data but loses much more important information on the image data than the original RGP (<ref>). Meanwhile, we know that the generalization error of MMD for high-dimensional samples or distributions is often larger than that for low-dimensional samples or distributions. To ensure that MMD is able to accurately measure the distance between two high-dimensional distributions, the sample sizes should be sufficiently large.
We use Sinkhorn RGP (<ref>) to conduct experiments on the Abalone, Arrhythmia, and Thyroid datasets and the results are reported in Table <ref>. In all implementations, ϵ is set to 0.01 and a, b are uniform. In keeping with our expectation, the performance of Sinkhorn RGP (<ref>) is similar to or better than that of the original RGP (<ref>) for all four target distributions, whereas the time cost of Sinkhorn RGP (<ref>) is much higher. We do not experiment with Sinkhorn RGP on the image datasets since the time cost is too high.
§.§ Ablation Study
§.§.§ The Gaussian Kernel Function for MMD
We use the Gaussian kernel exp(-γ‖𝐱 - 𝐲‖^2) for MMD in the optimization objective and set γ = 1/d^2 in all experiments, where d=1/(n(n-1))∑^n_i=1∑^n_j=1‖𝐱_i - 𝐱_j ‖ denotes the mean Euclidean distance among all training samples.
To show the influence of γ, we fix γ from {0.1, 1, 10, 100} to run experiments on Fashion-MNIST.
As shown in Table <ref>, there are differences in every single case but the gaps in the average results are not significant. This demonstrated that our methods are not sensitive to γ.
§.§.§ The Coefficient λ of Reconstruction Term in Optimization Objective
The coefficient λ is a key hyperparameter in problem (<ref>). Now we explore the influence of λ for model performance.
Figures <ref>, <ref> show the F1-scores of our methods with λ varying from 0 to 1000 on the tabular datasets. It can be observed that a too small or too large λ can lower the performance of RGP. When λ is very tiny, the reconstruction term of (<ref>) has less impact on the training objective and f_θ can easily transform the training data to the target distribution but ignores the original data distribution (see Figure <ref>). On the other hand, when λ is very large, the MMD term of the optimization objective becomes trivial for the whole training objective and f_θ, under the constraint of the reconstruction term, concentrates more on the original data distribution yet cannot learn a good mapping from the data distribution to the target distribution. Figure <ref> illustrates the influence of the hyper-parameter λ on the training set of the Thyroid dataset. We see that f_θ transforms the training data to the target distribution better as λ decreases. The blue points and orange points in Figure <ref> denote samples from the target distribution and samples from the training data, respectively.
§ CONCLUSION
We have presented a novel and simple framework for one-class classification and anomaly detection. Our method aims to convert the data distribution to a simple, compact, and informative target distribution that can be easily violated by abnormal data. We presented four target distributions and the numerical results showed that the four target distributions have relatively close performance and that the uniform distribution on the hypersphere is more effective than the others in most cases. Furthermore, we also explored two extensions of the original RGP and analyzed the performance differences among them. Importantly, our methods have performance competitive with state-of-the-art AD methods on all benchmark datasets considered in this paper and the improvements are remarkable on the tabular datasets.
|
http://arxiv.org/abs/2307.05681v1 | 20230711180004 | Quantum-Classical Correspondence for the Relaxation Dynamics of Many-Body Spin Systems: Linear Chaos and Diffusion in the Energy Shell | [
"Fausto Borgonovi",
"Felix M. Izrailev",
"Lea F. Santos"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"quant-ph"
] |
Dipartimento di Matematica e
Fisica and Interdisciplinary Laboratories for Advanced Materials Physics,
Università Cattolica, via della Garzetta 48, 25133 Brescia, Italy
Istituto Nazionale di Fisica Nucleare, Sezione di Milano,
via Celoria 16, I-20133, Milano, Italy
Instituto de Física, Benemérita Universidad Autónoma
de Puebla, Apartado Postal J-48, Puebla 72570, Mexico
Department of Physics and Astronomy, Michigan State University, E. Lansing, Michigan 48824-1321, USA
Department of Physics, University of Connecticut, Storrs, Connecticut, USA
We study quench dynamics in a one-dimensional interacting spin model that is strongly chaotic in the classical and quantum domain. We use the knowledge of the quantum-classical correspondence developed in [Phys. Rev. B 107, 155143 (2023)] to elucidate the mechanism of the system relaxation process. It actually involves two mechanisms, one due to linear parametric instability and the other caused by nonlinearity. We show that the relaxation of the noninteracting energy (global quantity) and of the onsite magnetization (local observable) is mainly due to the first mechanism, referred to as linear chaos. With a semi-analytical approach based on classical ergodicity, we find that the relaxation timescale of both quantities is independent of the system size for both the classical and the quantum case. We also verify that the spread of the noninteracting energy in the energy shell is diffusive-like. In contrast to these results, the number of principal components, which quantifies how the initial state spreads in the many-body Hilbert space and does not have a classical counterpart, grows exponentially in time and has a relaxation time that depends on the number of spins.
Quantum-Classical Correspondence for the Relaxation Dynamics of Many-Body Spin Systems: Linear Chaos and Diffusion in the Energy Shell
Lea F. Santos
Indian Institute of Technology Kharagpur
======================================================================================================================================
Much attention has been paid to the relaxation of isolated interacting many-body quantum systems quenched out of equilibrium and how the relaxation time to a steady state depends on the model parameters, such as the interaction strength between particles and the number of particles, as well as on the initial states and observables. Emphasis has been given to spin models, since they can describe various experimental setups. Different works have reached different conclusions: that the relaxation time decreases with the system size L <cit.>, that it depends weakly on L <cit.>, that it does not depend on L <cit.>, and that it increases with L <cit.>, which can happen polynomially or exponentially depending on the observable <cit.>.
Despite these results, both analytical and numerical, the problem remains open
due to the absence of a general theoretical frame and to the difficulty of simulating quantum systems with a large number of particles.
Recently, we put forward a new approach <cit.> to describe some of the statistical properties of interacting many-body quantum systems with a well defined classical limit. The approach works very well when both the classical and the quantum model are strongly chaotic.
Supported by detailed numerical analyses and semi-analytical studies, we demonstrated that quantities that serve as the building blocks of physical observables coincide in the classical and quantum descriptions. One of these quantities is the shape of chaotic eigenstates (SoE) with respect to the energies of the non-interacting part of the Hamiltonian. The other is the local density of states (LDoS), which is known in nuclear physics as strength function and is defined by the form of the energy distribution of the initial state in quench dynamics. The width of the quantum LDoS determines the growth rate of the number of principal components, N_pc(t), used to measure the number of many-body states participating in the evolution of the initial state <cit.> before the saturation. The width of the quantum LDoS can be obtained directly from the classical description.
Our approach advances the analysis of the quantum-classical correspondence (QCC), which has its roots in the early days of quantum mechanics and is still under development. Studies of the QCC for many-body systems are not as widespread as the case of few degrees of freedom, because it is challenging. Specifically, in the classical limit, one needs to deal with highly complicated multidimensional phase spaces and in the quantum domain, the Hilbert space grows exponentially with the system size, rendering the QCC analysis nearly intractable. Recent advances in this direction have been done in the context of the out-of-time-ordered correlator <cit.> and spin models <cit.>, although many questions remain open.
In this Letter, we investigate and compare the classical and quantum dynamics of global and local observables as they evolve towards equilibrium, and estimate their relaxation time. We take a one-dimensional interacting spin model in the chaotic regime and consider the evolution of the noninteracting energy, defined as the non-interacting part of the total Hamiltonian, and of the onsite magnetization. The first is a global quantity and the second is local.
Our numerical results and analytical estimates reveal that the variance of the energy distribution increases linearly in time with an excellent QCC. This means that the energy, initially concentrated in a basis state (quantum) or in an initial packet of classical trajectories (classical), exhibits a diffusive-like behavior. Due to strong chaos and to the ergodicity of the classical motion of individual spins, the spread of energy results in the ergodic filling of the energy shell. After some time, the dynamics saturates to a steady state accompanied by statistical fluctuations. Unexpectedly, the timescale for the diffusion turns out to be independent of the number of spins L. The same holds for the relaxation time of the magnetization of an individual spin. We explain that this result is due to the existence of L local integrals of motion, which is a general property of spin models <cit.>. In this case, chaos is mainly caused by linear parametric instability, instead of nonlinear effects.
Contrary to the energy and magnetization, N_pc(t) does not have a well-defined classical limit. It increases exponentially in time with a rate given by the width of the LDoS and then saturates at a time that grows with the system size in close correspondence with previous results obtained for interacting fermions and bosons <cit.>.
In addition, motivated by the role of the Lyapunov timescale, τ_λ, in chaotic systems <cit.>,
we compare τ_λ with the relaxation time for the noninteracting energy and onsite magnetization. Our semianalytical results indicate that τ_λ is not related with the relaxation timescale for global and local observables, even though they are of the same order of magnitude. We discuss the meaning of this outcome.
Quantum model.– We consider the one-dimensional spin model explored in Ref. <cit.>. It has L interacting spins of fixed angular momentum I described by the Hamiltonian,
H= H_0 +V = ∑_k=1^L B_k S_k^z - ∑_k=1^L-1∑_i>k^L J_ik S_i^x S_k^x,
where B_k ≡(B_0+δ B_k ) are the frequencies of the non-interacting Hamiltonian H_0 with random entries |δ B_k| ≤δ W ≪ B_0.
The strength of the spin-spin coupling, J_ik=J_0/|i-k|^ν, decays algebraically with the distance between the spins. In what follows, we fix ν = 1.4, which strictly corresponds to short-range interaction and is also referred to as “weak long-range” interaction.
We set the angular momentum I=1, so that time has dimension of inverse energy, and
J_0 > B_0, which guarantees strong chaos <cit.>.
The spins are quantized with an integer value S and the effective Planck constant is ħ =1/√(S(S+1)), so the semiclassical limit is achieved for S ≫ 1. The non-interacting many-body basis corresponds to the eigenstates of H_0 and is denoted by
|k⟩≡|s_1,...,s_j,...,s_L⟩, where -S≤ s_j ≤ S and j=1,...,L. The term V couples basis
vectors that differ by two excitations, so the dimension of each symmetry sector is dim = (2S+1)^L/2.
The quantum dynamics starts with a quench from H_0 to H, so that the initial state |Ψ(0) ⟩ is a many-body basis vector |k_0⟩. The components of the evolving wavefunction written in this basis are
⟨ k |Ψ(t)⟩ = ∑_α C_k^α( C_k_0^α)^* e^-iE_α t/ħ ,
where C_k^α≡⟨ k |α⟩ and |α⟩ is an eigenstate of the total Hamiltonian H with energy E_α.
In <cit.>, we provided a detailed analysis of the properties of the probabilities |C_k^α|^2 and used them to investigate the SoE, written as a function of the non-interacting energies, and the LDoS,
W_k_0(E) = ∑_αδ(E-E_α) |C_k_0^α|^2 ,
which is the distribution of the probabilities |C_k_0^α|^2 as a function of the total energy E. A key parameter in our studies
is the width of the LDoS,
σ^2 = ∑_k ≠ k_0 |⟨ k|H |k_0 ⟩|^2 = ∑_α |C_k_0^α|^2 E_α^2 - (∑_α |C_k_0^α|^2 E_α )^2 .
We showed in <cit.> that the SoE and the LDoS have well-defined classical counterparts, and their QCC is excellent in the chaotic regime even for spin quantum numbers as small as S=2 (see Supplementary Material (SM) <cit.>).
Classical model.– The classical equations of motion for this system are written as
Ṡ_k^x = - B_k S_k^y ,
Ṡ_k^y = B_k S_k^x + S_k^z ∑_i≠ k J_ik S_i^x ,
Ṡ_k^z = - S_k^y ∑_i≠ k J_ik S_i^x .
As explained in <cit.>, the motion of S_x and S_y is different from that of S_z. For example, if the interaction is very weak, the k-th spin rotates about the z-axis with frequency B_k, keeping the z-component almost constant.
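For reference, the classical equations of motion above can be integrated with a standard ODE solver; the following SciPy sketch (not the code used for the figures) assumes a coupling matrix J with J_ik = J_0/|i-k|^ν and zero diagonal.

import numpy as np
from scipy.integrate import solve_ivp

def spin_rhs(t, y, B, J):
    # y stores (S^x, S^y, S^z) for each of the L spins; J has zero diagonal.
    L = B.size
    S = y.reshape(L, 3)
    Sx, Sy, Sz = S[:, 0], S[:, 1], S[:, 2]
    field = J @ Sx                       # sum over i != k of J_ik S_i^x
    dSx = -B * Sy
    dSy = B * Sx + Sz * field
    dSz = -Sy * field
    return np.column_stack([dSx, dSy, dSz]).ravel()

def evolve(S0, B, J, t_max, n_points=1000):
    # Tight tolerances (illustrative values) keep the spin lengths |S_k| close to 1.
    t_eval = np.linspace(0.0, t_max, n_points)
    sol = solve_ivp(spin_rhs, (0.0, t_max), S0.ravel(), args=(B, J),
                    t_eval=t_eval, rtol=1e-9, atol=1e-9)
    return sol.t, sol.y.T.reshape(-1, B.size, 3)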
Our analysis concentrates on the motion of S_k^z, which is directly related to the magnetization in the z-direction and is described by the following equation <cit.>,
S̈_k^z + Ω_k^2(t) S_k^z = F_k(t),
with the nonlinear time-dependent frequency
Ω_k^2 (t) = J_0^2 [∑_i≠ k S_i^x(t)/|i-k|^ν]^2 ,
and the driving nonlinear force
F_k (t) = J_0 ∑_i≠ k [B_i S_i^y(t) S_k^y(t)- B_k S_i^x(t) S_k^x(t)]/|i-k|^ν.
Equation (<ref>) describes a linear parametric oscillator with a time-dependent frequency Ω_k(t) and a force F_k(t) that depend on the x and y spin components. For small interaction, one can neglect the nonlinearity contained in
Ω_k and F_k, but even then the motion of the k-th spin can be strongly chaotic. This behavior finds a parallel in the motion of charged particles in magnetic traps <cit.> and linear maps <cit.>, where it has been called “linear chaos”. The term refers to the parametric instability in linear system due to the presence of a time-dependent linear frequency. This property is generic to many-body spin systems and should be taken into account when analyzing global dynamics.
As discussed in <cit.>, it is possible to talk about ergodicity of the entire classical system by examining the motion of each single spin on the Bloch sphere. This significantly simplifies the numerical analysis of the onset of ergodicity, because we do not need to determine ergodicity in the multi-dimensional phase space.
Relaxation.– To study the dynamics, we consider initial conditions in the middle of the energy band, where the system is strongly chaotic and ergodic. We investigate how the relaxation process depends on the interaction J_0 and the system size L, keeping fixed the semiclassical parameter S=2. In all figures, the parameters are ν=1.4, B_0=1, δ W=0.2. The dynamics takes place in the energy shell, which is defined by the projection of H onto H_0<cit.>.
Number of principal components.– We start the analysis with the number of principal components,
N_pc(t)= 1/∑_k |⟨ k | Ψ(t)⟩ |^4 ,
also known as inverse participation ratio. This is an important quantity in quantum dynamics, because it measures how the initial state spreads in the non-interacting many-body basis. This quantity is purely quantum.
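For small systems, N_pc(t) can be evaluated directly from exact diagonalization of H in the non-interacting basis, as in the sketch below (an illustration only; the Hilbert-space dimension (2S+1)^L limits this to few spins).

import numpy as np

def npc_t(H, k0, times, hbar):
    # H: Hamiltonian matrix in the non-interacting basis; k0: index of the initial basis state.
    E, C = np.linalg.eigh(H)                 # C[k, a] = <k|alpha>
    overlaps = np.conj(C[k0, :])             # (C_{k0}^alpha)^*
    phases = np.exp(-1j * np.outer(times, E) / hbar)
    psi = (phases * overlaps) @ C.T          # <k|Psi(t)> for every basis state k
    probs = np.abs(psi) ** 2
    return 1.0 / np.sum(probs ** 2, axis=1)  # N_pc(t) = 1 / sum_k |<k|Psi(t)>|^4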
When the initial state is composed of many chaotic eigenstates of the total Hamiltonian, N_pc(t) grows exponentially in time with a rate given by the width of the LDoS <cit.>, as shown in Fig. <ref>(a). To extract a reliable timescale for the relaxation, in Fig. <ref>(b), we rescale N_pc(t) to the dimension of the subspace associated with the initial state and verify that all curves saturate at the same point. For values of J_0 that ensure quantum chaos, the saturation point of N_pc(t) is approximately dim/2.
In hands of this result, we can find an analytical estimate of the timescale τ_N for the relaxation of N_pc(t) using the equality
N_pc (τ_N) ≃ e^2στ_N /ħ = (1/4) (2S+1)^L,
which leads to
τ_N ∝ L ħln (2S+1) / σ .
Using the analytical result for
width of LDoS, σ∝ J_0 √(L), (see SM <cit.>)
in Eq. (<ref>) gives
τ_N ∝√(L)1/J_0ln(2S+1)/√(S(S+1)).
The equation above indicates that for fixed S, the relaxation time grows as L increases, which is indeed noticeable in Fig. <ref>(a). On the other hand, for fixed L, increasing S slowly decreases τ_N (see also discussion about the dependence on S in <cit.>). This means that for N_pc the thermodynamic and the semiclassical limit of Eq. (<ref>) lead to opposite conclusions.
Diffusion in energy space.– The evolution of N_pc(t) is in stark contrast with the spread in time of the noninteracting energy, which we quantify with the variance of the energy distribution,
(Δ E_0)^2 (t) = ⟨Ψ (t) | Ĥ_0^2 |Ψ(t) ⟩ - ⟨Ψ (t) | Ĥ_0 |Ψ(t) ⟩^2 .
This is a global quantity with a well-defined classical meaning.
The evolution of (Δ E_0)^2 (t) is depicted in Fig. <ref>(a). Before saturation and for a sufficiently large interaction strength J_0, the energy spreading is diffusive-like, that is, linear in time. The agreement between the classical and quantum results is excellent even for the small spin number S=2, which can be linked with the QCC for the LDoS verified in <cit.>.
Since the variance of the noninteracting energy increases linearly in time, we can write
(Δ E_0)^2 ≃ D t,
and associate D with the diffusion coefficient.
We deduce from Fig. <ref>(a) that the slope of the linear growth is proportional to the interaction strength, D ∝ J_0.
By rescaling the variance to the system size, we observe in Fig. <ref>(b) that the curves for large values of L are superimposed. This indicates that for large system sizes, D ∝ L, while for smaller L, finite size effects are relevant.
Therefore, combining these two results one arrives at the following dependence,
D = c J_0 L,
where c=0.2 is a fitting parameter. Following the same reasoning used for N_pc(t), we estimate the relaxation time, τ_d, for the energy spreading from the relation D τ_d = (Δ E_0)^2, where (Δ E_0)^2 is the saturation value. The latter can be obtained analytically employing the ergodic filling of each single spin in the Bloch sphere (see SM <cit.>), which gives
(Δ E_0)^2≃ (1/3) ∑_k B_k^2. This leads to
τ_d ∼∑_k=1^L B_k^2/L J_0∼1/J_0,
since B_k ≃ 1. This estimate shows that the relaxation time for the energy spreading is independent of the system size, which is numerically confirmed in Fig. <ref>. This is at odds with the relaxation timescale for N_pc. To understand the result in Eq. (<ref>), we recall that the classical initial ensemble is created by fixing the z-component of each spin to have E_0≃ 0, while the x and y components are completely random. This means that, in a sense, all spins are initially excited, which is very different from the common situation in Fermi and Bose system, where the excitation is contained in one particle <cit.>.
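As an illustration of how D and τ_d can be extracted in practice, the sketch below fits the linear regime of the variance and combines it with the ergodic saturation value derived in the SM. The data points here are synthetic placeholders, not the measured curves shown in the figures.

```python
import numpy as np

t = np.linspace(0.1, 5.0, 50)
# fake linear growth standing in for the measured variance of the noninteracting energy
variance = 0.8 * t + 0.05 * np.random.default_rng(2).normal(size=t.size)

D = np.polyfit(t, variance, 1)[0]             # slope of the linear (diffusive-like) regime

L, B0, dW = 100, 1.0, 0.2
var_sat = (L / 3.0) * (B0**2 + dW**2 / 3.0)   # ergodic saturation value, (1/3) sum_k B_k^2 on average
tau_d = var_sat / D                           # relaxation time of the energy spreading
print(D, tau_d)
```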
We stress that the linear behavior of the quantum spreading cannot be treated as “true” diffusion. Indeed, a close inspection of the stationary energy distribution shows that it is Gaussian for the classical case, but not for the quantum model, where there appear 2LS+1 peaks enveloped by the Gaussian distribution (see SM <cit.>).
Local observable.–
Having established a timescale for the diffusion of the non-interacting energy, we now move our attention to a local observable and consider a different initial state, |Ψ (0)⟩ =|0,...,0,S,0,...,0⟩, where only the central spin m has its maximal component S along the z-direction, while the other L-1 spins have 0 spin component along z, that is, all other spins are in the (x,y) plane.
The corresponding classical initial condition is
S_L/2^z (0) = S/√(S(S+1)) for the central spin and
S_k^z (0) = 0 for the other spins, while the x,y components are random. The analysis of the quantum evolution is done through the observable ⟨ S_k^z (t) ⟩. We investigate the time that is needed for the k-th spin to share the z-magnetization (excitation) with all other L-1 spins and whether it coincides with the diffusion time τ_d found in Eq. (<ref>).
In Fig. <ref>(a) and its inset, we show the dynamics of the z-component of each spin for the classical and quantum models. The quantum-classical agreement is extremely good and the timescale for the relaxation to the stationary value of the central spin m is of the same order of magnitude as the relaxation of all other spins.
In Fig. <ref>(b), we show the onsite magnetization for the central spin m for different system sizes. The excellent QCC gives us access to large system sizes through the classical dynamics, which makes it evident that the timescale for the relaxation is independent of L and of the same order as the diffusion time τ_d. Therefore, our results in Fig. <ref> and Fig. <ref> show that the local and global quantities investigated, both with a well-defined classical limit, exhibit the same relaxation timescale, independent of the number of spins.
A crucial outcome of our analysis is the understanding that the main mechanism responsible for the classical relaxation is the linear parametric instability of individual spins. To better clarify this point, we consider the model in Eq. (<ref>) in first order approximation. To do that, we insert in the expression for Ω_k(t) and F_k(t) the non-interacting solutions of S_k^x, S_k^y described by linear rotation with frequency B_k. In this way, Eq. (<ref>) for each spin describes a linear oscillator with time dependent frequency and linear force. The instability of this oscillator can be analyzed by integrating the equation of motion. The results for this simplified picture are shown for the central spin m in Fig. <ref>, where it is compared with the exact full dynamics. One sees that the timescale for the relaxation of the linear parametric oscillator is effectively the same as that for the full dynamics.
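The sketch below illustrates this first-order approximation: the transverse components of all spins are replaced by free rotations with frequencies B_i and random phases, and the resulting driven linear parametric oscillator for S_m^z is integrated numerically. The phases, amplitudes, and integration settings are illustrative assumptions, not the paper's actual initial data.

```python
import numpy as np
from scipy.integrate import solve_ivp

L, J0, nu, B0, dW = 20, 1.0, 1.4, 1.0, 0.2
rng = np.random.default_rng(1)
B = B0 + rng.uniform(-dW, dW, size=L)
phi = rng.uniform(0, 2 * np.pi, size=L)      # random initial phases in the (x,y) plane
m = L // 2                                   # central spin
k_other = np.array([i for i in range(L) if i != m])
w = J0 / np.abs(k_other - m) ** nu           # couplings to the central spin

def Sx(t): return np.cos(B[k_other] * t + phi[k_other])   # free rotation of the other spins
def Sy(t): return np.sin(B[k_other] * t + phi[k_other])

z0 = 2.0 / np.sqrt(6.0)                      # S/sqrt(S(S+1)) for S = 2
a_m = np.sqrt(1.0 - z0**2)                   # transverse amplitude left to the central spin

def rhs(t, y):
    z, v = y
    Omega2 = (w @ Sx(t)) ** 2                               # time-dependent frequency squared
    Sx_m = a_m * np.cos(B[m] * t + phi[m])                  # free rotation of the central spin
    Sy_m = a_m * np.sin(B[m] * t + phi[m])
    F = np.sum(w * (B[k_other] * Sy(t) * Sy_m - B[m] * Sx(t) * Sx_m))  # driving force
    return [v, -Omega2 * z + F]

sol = solve_ivp(rhs, (0, 50), [z0, 0.0], max_step=0.05)
print(sol.y[0, -1])   # S_m^z within the linear parametric approximation
```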
A natural question is whether the Lyapunov time of the full dynamics, τ_λ≈ 1/λ_max, where λ_max is the maximal Lyapunov exponent, plays any role in the description of the relaxation to equilibrium of (Δ E_0)^2(t) and ⟨ S_z (t) ⟩. The answer is negative, as explained in the SM <cit.>, even if they are of the same order of magnitude in the ergodic region.
Discussion.– In this Letter, we used the quantum-classical correspondence (QCC) to investigate the relaxation dynamics of a many-body spin system quenched out of equilibrium. We showed how essential properties of the quantum dynamics in the many-body Hilbert space can be explained by the properties of the classical equations of motion.
We found that the variance of the time-dependent noninteracting energy distribution increases linearly in time before saturation, exhibiting a diffusion-like behavior. Based on the ergodic motion of individual spins, we obtained a semi-analytical
expression for the saturation value of the energy spread and, from it, for the relaxation timescale. Remarkably, this timescale is independent of the number of spins L, the same holding for the relaxation time of the onsite magnetization. The key point to understand this finding lies in the structure of the classical equations of motion. Specifically, even if initially only the S^z component of a single spin is excited, S^z ≈ 1, the components S^x, S^y of all other spins cannot be zero, due to spin conservation, (S^x)^2+(S^y)^2+(S^z)^2=1. This results in L additional constants of motion, apart from the total energy. Therefore, for any initial excitation, the interaction immediately excites all spins in the chain. This fact explains why the relaxation timescale does not depend on L. This result is generic for spin systems with a well-defined classical limit.
Another important outcome of this Letter is the verification that the dynamics of individual spins is mainly due to the linear parametric instability in the classical equations of motion, rather than to the nonlinearity. This is an example of the so-called linear chaos <cit.>,
for which chaos is predominantly caused by the parametric instability.
Unlike the results for energy spread and onsite magnetization, the relaxation
timescale for the number of principal components, N_pc(t), after its exponential growth,
depends both on the number of spins L and the spin number S.
Since this quantity does not have a classical limit and strongly depends on the quantum basis in which it is studied, there is no reason to expect its relaxation timescale to coincide with those of quantities that can be described by a classical analysis.
Acknowledgments.– F.B. acknowledges support by the Iniziativa Specifica INFN-DynSysMath and MIUR within the Project No. PRIN 20172H2SC4. F.M.I. acknowledges financial support from CONACyT (Grant No. 286633). L.F.S. was supported by NSF Grant No. DMR-1936006.
[1] S. Goldstein, T. Hara, and H. Tasaki, Time scales in the approach to equilibrium of macroscopic quantum systems, Phys. Rev. Lett. 111, 140401 (2013).
[2] S. Goldstein, T. Hara, and H. Tasaki, Extremely quick thermalization in a macroscopic quantum system for a typical nonequilibrium subspace, New J. Phys. 17, 045002 (2015).
[3] T. R. de Oliveira, C. Charalambous, D. Jonathan, M. Lewenstein, and A. Riera, Equilibration time scales in closed many-body quantum systems, New J. Phys. 20, 033032 (2018).
[4] G. D. Carvalho, L. F. dos Prazeres, P. S. Correia, and T. R. de Oliveira, Equilibration of isolated systems: investigating the role of coarse-graining on the initial state magnetization, arXiv:2305.11985 (2023).
[5] M. Niknam, L. F. Santos, and D. G. Cory, Experimental detection of the correlation Rényi entropy in the central spin model, Phys. Rev. Lett. 127, 080401 (2021).
[6] P. Reimann, Foundation of statistical mechanics under experimentally realistic conditions, Phys. Rev. Lett. 101, 190403 (2008).
[7] P. Reimann, Typical fast thermalization processes in closed many-body systems, Nat. Commun. 7, 10821 (2016).
[8] A. J. Short, Equilibration of quantum systems and subsystems, New J. Phys. 13, 053009 (2011).
[9] A. J. Short and T. C. Farrelly, Quantum equilibration in finite time, New J. Phys. 14, 013063 (2012).
[10] T. Monnai, Generic evaluation of relaxation time for quantum many-body systems: Analysis of the system size dependence, J. Phys. Soc. Jpn. 82, 044006 (2013).
[11] A. S. L. Malabarba, L. P. García-Pintos, N. Linden, T. C. Farrelly, and A. J. Short, Quantum systems equilibrate rapidly for most observables, Phys. Rev. E 90, 012121 (2014).
[12] D. Hetterich, M. Fuchs, and B. Trauzettel, Equilibration in closed quantum systems: Application to spin qubits, Phys. Rev. B 92, 155314 (2015).
[13] C. Gogolin and J. Eisert, Equilibration, thermalization, and the emergence of statistical mechanics in closed quantum systems, Rep. Prog. Phys. 79, 056001 (2016).
[14] L. P. García-Pintos, N. Linden, A. S. L. Malabarba, A. J. Short, and A. Winter, Equilibration time scales of physically relevant observables, Phys. Rev. X 7, 031027 (2017).
[15] M. Schiulaz, E. J. Torres-Herrera, and L. F. Santos, Thouless and relaxation time scales in many-body quantum systems, Phys. Rev. B 99, 174313 (2019).
[16] B. Bertini, F. Heidrich-Meisner, C. Karrasch, T. Prosen, R. Steinigeweg, and M. Žnidarič, Finite-temperature transport in one-dimensional quantum lattice models, Rev. Mod. Phys. 93, 025003 (2021).
[17] A. Dymarsky, Bound on eigenstate thermalization from transport, Phys. Rev. Lett. 128, 190601 (2022).
[18] T. L. M. Lezama, E. J. Torres-Herrera, F. Pérez-Bernal, Y. Bar Lev, and L. F. Santos, Equilibration time in many-body quantum systems, Phys. Rev. B 104, 085117 (2021).
[19] L. Benet, F. Borgonovi, F. M. Izrailev, and L. F. Santos, Quantum-classical correspondence of strongly chaotic many-body spin models, Phys. Rev. B 107, 155143 (2023).
[20] F. Borgonovi, F. M. Izrailev, and L. F. Santos, Exponentially fast dynamics of chaotic many-body systems, Phys. Rev. E 99, 010101(R) (2019).
[21] F. Borgonovi, F. M. Izrailev, and L. F. Santos, Timescales in the quench dynamics of many-body quantum systems: Participation ratio versus out-of-time ordered correlator, Phys. Rev. E 99, 052143 (2019).
[22] J. Rammensee, J. D. Urbina, and K. Richter, Many-body quantum interference and the saturation of out-of-time-order correlators, Phys. Rev. Lett. 121, 124101 (2018).
[23] Q. Hummel, B. Geiger, J. D. Urbina, and K. Richter, Reversible quantum information spreading in many-body systems near criticality, Phys. Rev. Lett. 123, 160401 (2019).
[24] M. Akila, D. Waltner, B. Gutkin, P. Braun, and T. Guhr, Semiclassical identification of periodic orbits in a quantum many-body system, Phys. Rev. Lett. 118, 164101 (2017).
[25] M. Akila, B. Gutkin, P. Braun, D. Waltner, and T. Guhr, Semiclassical prediction of large spectral fluctuations in interacting kicked spin chains, Ann. Phys. (N.Y.) 389, 250 (2018).
[26] D. Schubert, J. Richter, F. Jin, K. Michielsen, H. De Raedt, and R. Steinigeweg, Quantum versus classical dynamics in spin models: Chains, ladders, and square lattices, Phys. Rev. B 104, 054415 (2021).
[27] F. Borgonovi, F. M. Izrailev, L. F. Santos, and V. G. Zelevinsky, Quantum chaos and thermalization in isolated systems of interacting particles, Phys. Rep. 626, 1 (2016).
[28] J. Steinberg and B. Swingle, Thermalization and chaos in QED_3, Phys. Rev. D 99, 076007 (2019).
[29] M. Malishava and S. Flach, Thermalization dynamics of macroscopic weakly nonintegrable maps, Chaos 32, 063113 (2022).
[30] M. Malishava and S. Flach, Lyapunov spectrum scaling for classical many-body dynamics close to integrability, Phys. Rev. Lett. 128, 134102 (2022).
[31] L. Correale, A. Polkovnikov, M. Schirò, and A. Silva, Probing semi-classical chaos in the spherical p-spin glass model, arXiv:2303.15393 (2023).
[32] Z. Wang, Y. Wang, and B. Wu, Quantum chaos and physical distance between quantum states, Phys. Rev. E 103, 042209 (2021).
[33] T. Bilitewski, S. Bhattacharjee, and R. Moessner, Classical many-body chaos with and without quasiparticles, Phys. Rev. B 103, 174302 (2021).
[34] Supplemental Material is available.
[35] B. V. Chirikov, Resonance processes in magnetic traps, Sov. J. At. Energy 6, 464 (1960).
[36] F. Izrailev, Nearly linear mappings and their applications, Physica D 1, 243 (1980).
[37] Y. Lebel, L. F. Santos, and Y. Bar Lev, Chaos enhancement in large-spin chains, arXiv:2204.00018 (2022).
[38] B. Chirikov, Linear and nonlinear dynamical chaos, Open Syst. Inf. Dyn. 4, 241 (1997).
Dipartimento di Matematica e
Fisica and Interdisciplinary Laboratories for Advanced Materials Physics,
Università Cattolica, via della Garzetta 48, 25133 Brescia, Italy
Istituto Nazionale di Fisica Nucleare, Sezione di Milano,
via Celoria 16, I-20133, Milano, Italy
Instituto de Física, Benemérita Universidad Autónoma
de Puebla, Apartado Postal J-48, Puebla 72570, Mexico
Department of Physics and Astronomy, Michigan State University, E. Lansing, Michigan 48824-1321, USA
Department of Physics, University of Connecticut, Storrs, Connecticut, USA
Supplementary Material:
Quantum-Classical Correspondence for the Relaxation Dynamics of Many-Body Spin Systems: Linear Chaos and Diffusion in the Energy Shell
Fausto Borgonovi^1,2, Felix M. Izrailev^3,4, and Lea F. Santos^5
0.1cm
^1Dipartimento di Matematica e
Fisica and Interdisciplinary Laboratories for Advanced Materials Physics,
Università Cattolica, via della Garzetta 48, 25133 Brescia, Italy
^2Istituto Nazionale di Fisica Nucleare, Sezione di Milano,
via Celoria 16, I-20133, Milano, Italy
^3Instituto de Física, Benemérita Universidad Autónoma
de Puebla, Apartado Postal J-48, Puebla 72570, Mexico
^4 Department of Physics and Astronomy, Michigan State University, E. Lansing, Michigan 48824-1321, USA
^5 Department of Physics, University of Connecticut, Storrs, Connecticut, USA
In this supplementary material (SM), we provide derivations and figures that support the discussions in the main text. They include (i) derivation of the width of the classical LDoS, σ∝ J_0 √(L), and a figure to show that the classical and quantum LDoS agree extremely well when the system is strongly chaotic, (ii) derivation of the saturation value of the energy spreading as a function of the interaction strength J_0 and a figure to show that for strong interaction, the spreading agrees with the result obtained due to complete ergodicity, (iii) derivation of the Lyapunov time and its comparison with the relaxation time for the energy spreading, showing that the two are not related, and (iv) a figure comparing the classical and quantum energy distribution after saturation, stressing that the Gaussian shape in the classical case confirms that the energy spreading is diffusive.
§ I. CLASSICAL WIDTH OF THE LDOS
In this section, we use the ergodicity of the classical motion to derive an analytical expression for the width of the classical Local Density of States (LDoS), as defined in Ref. <cit.>. As shown in the main text, the quantum LDoS,
W_k_0(E) = ∑_αδ(E-E_α) |C_k_0^α|^2 ,
gives the probability for a noninteracting state to be found in the eigenstate with total energy E. Equivalently, to build the classical LDoS, we fix the initial noninteracting energy and take random initial conditions for each
single spin on the unit sphere with that noninteracting energy. We then compute the total energy for all of them and use their weights to construct a histogram <cit.>. The steps go as follows.
We first consider the exact solution of the noninteracting equations of motion for some fixed noninteracting energy E_0=0, that is,
S_k^x (t) = S_k^x(0) cos(B_k t) + S_k^y(0) sin(B_k t) ,
S_k^y (t) = S_k^y(0) cos(B_k t) - S_k^x(0) sin(B_k t) ,
S_k^z (t) = S_k^z(0) .
Notice that to satisfy the constraint E_0=0, one has to choose specific values for S_k^z(0), since they uniquely determine E_0, while S_k^x(0) and S_k^y(0) can be completely random on the circle defined by (S_k^x(0))^2+(S_k^y(0))^2=1-(S_k^z(0))^2. We then plug the equations above into that for the total energy and obtain the following time-dependent function,
E(t) = ∑_k=1^L B_k S_k^z (t) - ∑_k=1^L∑_j>k J_0/|j-k|^ν S_k^x (t) S_j^x (t) .
By sampling this function at random times, such that 0<B_k t<2π, we explicitly build the classical LDoS.
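A simplified numerical sketch of this construction is given below. Instead of fixing E_0 = 0 exactly and sampling random times, it draws every spin uniformly on the unit sphere and keeps only the interaction part of the energy, which reproduces the same second moments (⟨ (S_k^x)^2 ⟩ = 1/3) used in the derivation that follows; all parameter values are illustrative.

```python
import numpy as np

L, J0, nu = 40, 1.0, 1.4
rng = np.random.default_rng(3)
dist = np.abs(np.arange(L)[:, None] - np.arange(L)[None, :])
Jmat = np.where(dist > 0, J0 / np.maximum(dist, 1) ** nu, 0.0)

samples = []
for _ in range(5000):
    v = rng.normal(size=(L, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)     # uniform points on the unit sphere
    Sx = v[:, 0]
    E = -0.5 * Sx @ Jmat @ Sx                          # interaction energy; 1/2 avoids double counting
    samples.append(E)

sigma_cl = np.std(samples)
zeta_nuL = 0.5 * np.sum(np.where(dist > 0, 1.0 / np.maximum(dist, 1) ** (2 * nu), 0.0))
print(sigma_cl, (J0 / 3.0) * np.sqrt(zeta_nuL))        # sampled width vs. the analytic estimate below
```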
Since we start with E_0 = 0 and since S_k^z are constants of motion for the noninteracting dynamics, the first term on the right-hand side of Eq. (<ref>) is zero. Since S_k^x(0) and S_k^y(0) are completely random,
⟨ S_k^x(0) ⟩ = ⟨ S_k^y(0) ⟩ = 0, while ⟨ (S_k^x(0))^2 ⟩ = ⟨ (S_k^y(0))^2 ⟩ = 1/3.
And because the LDoS is obtained by randomly sampling time, ⟨cos^2(B_k t) ⟩ = ⟨sin^2(B_k t) ⟩ = 1/2. All this implies that in the variance of the classical LDoS,
σ_cl^2 = ⟨ E^2 ⟩ - ⟨ E ⟩^2 ,
we have that ⟨ E ⟩ = 0, so
σ_cl^2 = ⟨ E^2 ⟩ = ∑_k=1^L∑_j>k J_0^2/|j-k|^2ν ⟨ (S_k^x)^2 ⟩⟨ (S_j^x)^2 ⟩
= J_0^2/9 ∑_k=1^L∑_j>k 1/|j-k|^2ν ≡ J_0^2/9 ζ(ν, L),
where the symbol “≡” defines implicitly the function
ζ(ν, L) for any ν and finite L.
The function ζ(ν,L) can be approximated for large L values as follows,
ζ(ν,L) ≃ (L-1)∑_k=1^∞1/k^2ν = (L-1)ζ(2ν),
where ζ(2ν) is the Riemann zeta function, which in our case for ν=1.4
means ζ(2.8) ≃ 1.247.
Therefore we have that for sufficiently large L
σ_cl ≃ (J_0/3) √((L-1) ζ(2ν)) .
This result holds for any ν > 1/2, when the Riemann zeta function converges.
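The finite-size factor ζ(ν,L) and its large-L approximation can be compared directly with a short sketch like the one below (ν=1.4 as in the main text; the rewriting of the double sum as a sum over distances is our own simplification).

```python
import numpy as np
from scipy.special import zeta

def zeta_finite(nu, L):
    # sum_{k=1}^{L} sum_{j>k} 1/|j-k|^{2 nu} = sum_{d=1}^{L-1} (L-d)/d^{2 nu}
    d = np.arange(1, L)
    return np.sum((L - d) / d ** (2 * nu))

nu = 1.4
for L in (10, 20, 100, 1000):
    print(L, zeta_finite(nu, L), (L - 1) * zeta(2 * nu))
```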
Our numerical results for the classical width of LDoS are shown in Fig. <ref> and compared both with the exact expression for finite L in Eq. (<ref>) and with the approximate expression in Eq. (<ref>). As one can see, even for L ≳ 20, the approximate expression fits extremely well the numerical data.
It is more difficult to obtain closed analytical expressions for the width of the quantum LDoS. However, this distribution nearly coincides with the classical LDoS even for relatively small spin numbers, as shown in Fig. <ref>. We can therefore use the classical result to describe the quantum dynamics.
§ II. MAXIMAL CLASSICAL ENERGY SPREADING
In Ref. <cit.> we showed that for a sufficiently large interaction strength, J_0 ≳ 3, and a sufficiently large number of spins, L > 50, the classical motion of each single spin in the Bloch sphere is ergodic. We now use this result to compute the maximal energy spreading in the energy shell due to ergodicity.
Under complete ergodicity, S_k^z is an independent random variable with ⟨ S_k^z(t) ⟩ = 0 and
⟨ (S_k^z(t))^2 ⟩ = 1/3. Using this result in the definition of the energy for noninteracting spins,
E_0 (t) = ∑_k=1^L B_k S_k^z(t),
we obtain the maximal classical energy spreading,
(Δ E_0)^2_max = ⟨ E_0^2(t) ⟩ - ⟨ E_0(t) ⟩^2 = ∑_k B_k^2 ⟨ (S_k^z(t))^2 ⟩ = 1/3∑_k B_k^2 ,
where we used that ⟨ S_k^z(t) S_j^z(t) ⟩ = 0 for k ≠ j.
We can determine the dependence of Eq. (<ref>) on the system size L as follows.
In our studies, we choose the frequencies of the noninteracting motion to be given by B_k = B_0 + δξ_k, where the shifts δξ_k are independent random numbers uniformly distributed in the range [- δ W, δ W]. Taking the average over different random realizations of the disordered frequencies, indicated as ⟨⟨...⟩⟩, we can write
⟨⟨∑_k=1^L B_k^2 ⟩⟩ = ∑_k=1^L ⟨⟨ (B_0 + δξ_k)^2 ⟩⟩ = L B_0^2 +L δ W^2/3 ,
and arrive at
(Δ E_0)_max = √((Δ E_0)^2_max) = √(L) B_0 √(1/3 + δ W^2/(9 B_0^2)) .
This is the energy spreading for completely random variables, as in the case of fully ergodic motion.
Inserting our parameters B_0=1 and δ W=0.2, we obtain (Δ E_0)_max ≃ 0.58 √(L). Notice that the width of the ergodic spreading, which is ∝√(L), is much smaller than the range of possible values of E_0(t), which is [-∑_k B_k, ∑_k B_k] and thus proportional to 2(B_0+δ W) L for large L.
In Fig. <ref>, we compare the numerical results obtained for the stationary value (Δ E_0)^2 with the analytical result in Eq. (<ref>) for different values of J_0. The saturation value of the energy spreading agrees with the analytical calculation for the ergodic spin motion when J_0 ≳ 3. For smaller values of the interaction strength, when the motion is not fully ergodic, we obtain an approximate expression for the saturation value of the energy spreading by fitting our data with the two-parameter function,
f(J_0) = 1-a e^-bJ_0,
which gives
(Δ E_0)^2 = ( 1-a e^-bJ_0 ) (Δ E_0)^2_max.
The fitting is quite accurate for all J_0 values, as shown with the red curve in Fig. <ref>.
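A sketch of this fit, using scipy's curve_fit with placeholder data standing in for the measured saturation values, might look as follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def f(J0, a, b):
    return 1.0 - a * np.exp(-b * J0)      # two-parameter fitting function

J0_values = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
measured = np.array([0.45, 0.70, 0.90, 0.97, 0.99, 1.0])   # (Delta E0)^2 / (Delta E0)^2_max (synthetic)

(a, b), _ = curve_fit(f, J0_values, measured, p0=(1.0, 1.0))
print(a, b)
```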
§ III. LYAPUNOV EXPONENT AND LYAPUNOV TIME
In this section, we compare the timescale for the relaxation of the energy spreading with the Lyapunov time obtained from the inverse of the maximal Lyapunov exponent. The numerical procedure to compute the Lyapunov spectrum is described in Ref. <cit.>. Since the noninteracting energy spreading happens in a diffusive-like manner, one could expect it to be governed by local instabilities of the motion, in which case the relaxation time would be related with the Lyapunov time. As we shall see, this is not what happens.
The maximal Lyapunov λ_max exponent averaged over 100 different initial conditions with the same non-interacting energy E_0 = 0 are shown in Fig. <ref>(a) as a function of J_0. As one can see, λ_max depends nonlinearly on the interaction strength and exhibits also a weak dependence on the system size. The dependence on L is more evident in Fig. <ref>(b), where the maximal Lyapunov exponent is corrected by L^0.1 and shown versus ln J_0. In Fig. <ref>(b), all points lie on the same linear curve, suggesting that the dependence of the average maximal Lyapunov exponent on J_0 and L is given by
λ_max≈ L^0.1( a+ b ln J_0),
where a,b are the best linear-fitting parameters. Even if we neglect the weak dependence on L, which could be caused by finite-size effects, the remaining logarithmic dependence on the interaction strength implies a parametric dependence on J_0 that is completely different from that of τ_d.
Using the fitting function obtained in Eq. (<ref>) to evaluate the relaxation time τ_d accurately,
(Δ E_0)^2 = D τ_d = c J_0 Lτ_d = f(J_0) (Δ E_0)^2_max,
we find that
τ_d = f(J_0) (Δ E_0)^2_max/c J_0 L = ∑_k B_k^2 (1-a e^-bJ_0)/3 c J_0 L,
which is not related with
τ_λ∝1/L^0.1( a+ b ln J_0).
The comparison between τ_λ in Eq. (<ref>) with τ_d in Eq. (<ref>) as a function of the interaction strength is done in Fig. <ref> for different values of L. Even though the two timescales are on the same order of magnitude in the ergodic region, J_0 ≳ 3,
the dependence on the interaction strength is completely different, being convex in the case of τ_λ and concave for τ_d. This suggests that the timescale associated with the local instability, τ_λ, is not related with the relaxation time τ_d.
§ IV. CLASSICAL AND QUANTUM STATIONARY DISTRIBUTIONS
Due to the finite size of the energy shell, the variance (Δ E_0)^2(t) saturates for times t ≫τ_d. In the presence of a real diffusive process, we expect the energy distribution to become Gaussian with a variance proportional to D t. This is indeed what happens in the classical case, as seen in Fig. <ref>. The exceptions are the far tails [Fig. <ref>(b)], which cannot be described by the Gaussian, because the energy shell where the spreading occurs is finite.
The quantum distribution, on the other hand, exhibits a band structure, as seen in Fig. <ref>. This is why we say that the linear spread of (Δ E_0)^2(t) in the quantum domain is not “true” diffusion. Yet, since there are 2LS+1 bands and the total size of the energy shell is ∼ 2L, the quantum distribution approaches the classical one in the semiclassical limit, S≫ 1 at fixed L.
Temporal network compression via
network hashing
Rémi Vaudaine^1 Pierre Borgnat^2Paulo Goncalves^1Rémi Gribonval^1Márton Karsai^3,4,*
^1 Univ Lyon, EnsL, UCBL, CNRS, Inria, LIP, F-69342, Lyon Cedex 07, France
^2 CNRS, Univ de Lyon, ENS de Lyon, Laboratoire de Physique, F-69342 Lyon, France
^3 Department of Network and Data Science, Central European University, 1100 Vienna, Austria
^4 Rényi Institute of Mathematics, 1053 Budapest, Hungary
^*Corresponding author: [email protected]
Pairwise temporal interactions between entities can be represented as temporal networks, which code the propagation of processes such as epidemic spreading or information cascades, evolving on top of them. The largest outcome of these processes is directly linked to the structure of the underlying network. Indeed, a node of a network at a given time cannot affect more nodes in the future than it can reach via time-respecting paths. This set of nodes reachable from a source defines an out-component, whose identification is costly. In this paper, we propose an efficient matrix algorithm to tackle this issue and show that it outperforms other state-of-the-art methods. Secondly, we propose a hashing framework to coarsen large temporal networks into smaller proxies on which out-components are easier to estimate, which are then recombined to obtain the initial components. Our graph hashing solution has implications in privacy-respecting representation of temporal networks.
keywords: Temporal networks, out-component calculation, streaming matrix algorithms, graph hashing
§ INTRODUCTION
While temporal networks represent the sequence of time-evolving interactions between entities, they also code the connected structure that lies behind many dynamical processes like the spreading of an epidemic, an information cascade, or the collective adoption of behavioural norms or products.
In static networks, connectivity is conventionally defined between two nodes if they are connected via a direct edge, or via a path building up from a sequence of adjacent edges that (pair-wise) share at least one node <cit.>. In temporal networks, however, connectedness is coded by temporal paths that are constructed from adjacent temporal interactions, which are not simultaneous yet structurally adjacent, and respect the causal time order. They determine the set of reachable nodes that can be influenced in the future with information held by a given node at a given time <cit.>. The set of reachable nodes of a node at a given time, also called its influence set, is the node's temporal out-component, whose structure and size are important indicators of any ongoing dynamical processes. Indeed, no ongoing process can exhibit a larger collective pattern than the largest connected out-component in the underlying temporal network.
However, the characterisation of connected components in temporal networks is a difficult task, as the temporal ordering of interactions introduces a degree of complexity to detect time-respecting paths in an effective way.
Here, we address this challenge by defining a component matrix that codes the in- and out-component size of any node in a temporal network. Using this matrix we apply network compression and reconstruction techniques via graph hashing
to estimate the distribution of the size of connected components of nodes. The proposed algorithm provides advances in the computational efficiency of the largest node components compared to the state-of-the-art, specifically for temporal networks with a large number of interactions.
§.§.§ Calculation of the largest out-component:
Considering all nodes and timed interactions in a temporal network, the most important component to characterise is, among other components, the largest out-component that ever emerged in the structure. Its identification can be approached using different ideas.
A simple one would be to simulate a deterministic Susceptible-Infected (SI) process starting from every node at their first interaction time. In a deterministic SI process, nodes are either in a susceptible (S) or infected (I) state and a susceptible node certainly becomes infected when interacting with an infected one. It is a conventional model to describe the fastest spreading process in a network, where starting from a single seed node at its first appearance, the downstream set of infected nodes determines its maximum out-component.
Using this method, in a temporal network of n nodes and m events, the computation of the out-component of a spreading seeded from a single source node, at its first appearance time would have O(n) space and O(m) time complexity (in terms of memory usage and computation time). This results in O(n^2) space and O(nm) time complexity when considering every node.
A more efficient method rely on temporal Event Graphs (EG), a higher-order representation of temporal networks <cit.>. An EG is a static and lossless representation of a temporal network in the form of a weighted and directed acyclic graph (DAG). In this structure, temporal interactions are associated to nodes that are linked if their corresponding events are adjacent.
For a more precise definition see Section. <ref>.
Computing a single traversal of a static event graph (in reversed time order) yields the out-component of any node at any time, with an evidently smaller computational complexity as compared to a direct computation on a temporal network. However, EG appears with considerably larger size (having as many nodes as events in the original temporal network) and higher link density (by connecting any events to all future adjacent others) that leads to increased memory complexity. In order to reduce memory complexity, a link reduction method has been proposed that eliminates path redundancy in the EG <cit.>, leaving the connectedness of the DAG intact. Relying on the reduced EG, the use of the approximate HyperLogLog (HLL) counting algorithm can further reduce the time complexity of the out-component detection to O(m log(m)+η), where η is the number of edges of the EG. However, this method provides only estimation for the size of out-components, without giving any information about their detailed structure.
§.§.§ Graph compression for component inference:
Contrary to earlier solutions, our idea is to use graph compression methods to compute the out-component size distribution of a temporal network, with a reduced computational complexity.
The compressibility of static networks has been studied recently <cit.>, and has been shown to depend on the structure of the graph. This notion can be extended for temporal networks by interpreting them as a sequence of time-aggregated static network snapshots. Then compression can be formulated as finding a smaller diffusion-equivalent representation <cit.>. Also, consecutive snapshots can be compressed depending on their chronological importance <cit.>. Moreover, as pointed out by Li et al. <cit.>, in spatio-temporal networks nodes can be compressed via local clustering, while reducing time instants to change-points. Compression can be formulated using Minimum Description Length to also reduce the size of the graph <cit.>. Another compression approach has been proposed using information theory considerations, aiming to reduce the number of bytes required to describe a temporal network <cit.>. Reducing the size of the network via coarsening to compute spectral properties of a graph has also been studied <cit.>. Sampling techniques have been largely used to reduce the complexity of computation over large graphs <cit.>.
Despite these numerous compression techniques proposed for temporal networks, none of them reduces effectively the number of nodes in a series of events. This reduction has a huge impact on the computational complexity of any of these algorithms, especially when they are characterised by quadratic complexity in the number of nodes. Thus our central question remains: how to design an efficient compression scheme that reduces the number of nodes while keeping enough information about the network itself to reconstruct the statistics of its connected components?
To reduce the computational complexity of the out-component size distribution calculation, we first propose an online streaming matrix algorithm that scans through the series of events only once, while it can also consider new events added later on, without re-starting the computation. In addition, we define a general-purpose temporal network compression scheme using a graph hashing approach. This compression method reduces the total number of nodes, yet it requires a decompression scheme too, which provides only an approximate solution. The compression method can be used in conjunction with the matrix algorithm and, more generally, it can be applied to any temporal network algorithm.
To present our contributions, we organised the paper as follows. First, we formalise the problem of out-components computation in Section <ref>. We present the proposed novel streaming matrix algorithm to compute the distribution of the size of out-components in Section <ref>, including some numerical experiments. Then, we describe the hashing framework in Section <ref>, and we report also on the numerical studies carried out to evaluate its ability to estimate the ground-truth out-components' distributions in Section <ref>. Finally, we discuss the proposed methods and the results.
§ METHODS
The aim of the present work is to effectively compute the distribution of the maximum out-component size for all nodes in a temporal network. To establish our approach, we first introduce the definitions that are necessary to ground our methodology.
§.§ Problem definition
We define a temporal network 𝒢 := (𝒱, ℰ, T) as a series of temporal events e = (u, v, t) ∈ ℰ that record interactions between nodes u,v ∈ 𝒱 at time steps t sampled[In these definitions, we neglect the duration of events for simplicity, but all definitions could incorporate durations in a straightforward way.] from a time period of length T. The network is characterised by its number of nodes n=|𝒱| and its number of events m=|ℰ|.
In 𝒢 we call two events e_i, e_j ∈ ℰ adjacent if they share at least one node ({u_i, v_i}∩{u_j, v_j}≠∅) and their inter-event time is Δ t=t_j-t_i>0, i.e. the two events are not simultaneous. Furthermore, we call two events δ t-adjacent if they are adjacent and their inter-event time is Δ t ≤δ t. A sequence of adjacent events defines a time-respecting path between nodes u and v starting at time t, if the first event of the path starts from node u at time t, the last ends at node v, and consecutive events in the sequence are pairwise adjacent <cit.>. The set of nodes that can be reached by any path starting from node u at time t defines its out-component.
The size of the out-component of a node u at a given time t is measured as the number of unique nodes that can be reached by valid time respecting paths. Actually, it determines the largest possible phenonenon (e.g., largest epidemic or information cascade) that was initiated from that source node and evolved in the future. The computation of out-components is computationally challenging as it requires the tracking of each time-respecting paths starting from each node at each time. However, an effective approximate solution has been proposed lately <cit.> to solve a partial challenge, to estimate only the size of out-components without keeping track of nodes involved.
§.§ Event graphs and the HyperLogLog algorithm
The proposed solution builds on the Event Graph (EG) representation <cit.> of temporal networks.
An event graph G:=(V, E, Δ t) is defined as a static weighted directed acyclic graph (DAG) representation of a temporal network 𝒢, where temporal events are associated to nodes in G (i.e., V=ℰ); directed edges in G correspond to Δ t-adjacent event pairs in the original temporal network, with direction indicating their temporal order. The Δ t weight of each link is defined as the inter-event time between the two adjacent events corresponding to the connected nodes in G.
This way, an event graph has m=|ℰ| vertices and η directed edges. This static graph representation provides a lossless description of a temporal network and can be exploited to infer several properties of 𝒢 without computations on the temporal structure <cit.>. Indeed, thanks to the EG representation, the out-component size distribution of 𝒢 can be precisely computed <cit.>, yet with high computational and memory costs.
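For illustration, a naive sketch of the (non-reduced) δt event-graph construction from a chronologically sorted event list might look as follows; the names are ours, and the reduced EG of <cit.> would additionally prune redundant links.

```python
from collections import defaultdict

def event_graph(events, delta_t):
    """events: chronologically sorted list of (u, v, t); returns edges (i, j, dt) between event indices."""
    past_events = defaultdict(list)      # node -> indices of its past events
    edges = []
    for j, (u, v, t) in enumerate(events):
        for node in (u, v):
            for i in past_events[node]:
                ti = events[i][2]
                if 0 < t - ti <= delta_t:              # delta-t adjacency
                    edges.append((i, j, t - ti))
            past_events[node].append(j)
    return edges

events = [(0, 1, 1), (1, 2, 2), (2, 3, 4), (0, 3, 5)]
print(event_graph(events, delta_t=2))
# [(0, 1, 1), (1, 2, 2), (2, 3, 1)]
```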
To reduce this cost at the price of an inexact computation, Modiri et al. <cit.> proposed an approximate solution to precisely estimate the out-component size distribution of a temporal network using its EG representation combined with the HyperLogLog algorithm.
The HyperLogLog (HLL) algorithm takes as input a set, and it outputs an approximate of its size <cit.>. More precisely, a HLL structure uses a representation on s registers each storing a number, initialised to zero to start with. Every element of the set is hashed into a binary vector that is then cut in two parts. The first part indicates the identifier of the register that will be used and the position of the leftmost 1 in the second part is stored in that register if it is larger than the current value. Finally, the size of the set is estimated with an ensemble indicator function based on the registers. The main advantage of the algorithm is that the whole set is not stored to estimate its size and the estimation can be done with constant space and time complexity O(s). The error of the estimation is O(1/√(s)). Also, the size of the union of two sets can also be estimated in constant time and space by merging two HLL structures. A final property is that each element of the set is considered one by one, hence compatible with a streaming approach. Let us stress that the hashing functions used in HLL are not related to the ones we will use in Section <ref> to compress the network representation.
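A compact, simplified sketch of such a structure is shown below, only to make the register mechanism concrete; production implementations (and the one assumed above) include bias and range corrections that are omitted here, and this variant uses the lowest b bits of the hash for the register index.

```python
import hashlib

class HyperLogLog:
    def __init__(self, b=10):                     # s = 2**b registers
        self.b, self.s = b, 1 << b
        self.registers = [0] * self.s

    def _hash(self, item):
        return int(hashlib.sha1(str(item).encode()).hexdigest(), 16) & ((1 << 64) - 1)

    def add(self, item):
        x = self._hash(item)
        j = x & (self.s - 1)                       # register index: lowest b bits
        w = x >> self.b                            # remaining (64 - b) bits
        rho = (64 - self.b) - w.bit_length() + 1   # position of the leftmost 1 in w
        self.registers[j] = max(self.registers[j], rho)

    def merge(self, other):
        self.registers = [max(a, c) for a, c in zip(self.registers, other.registers)]

    def estimate(self):
        alpha = 0.7213 / (1 + 1.079 / self.s)
        z = 1.0 / sum(2.0 ** (-r) for r in self.registers)
        return alpha * self.s * self.s * z         # raw HLL estimator

h = HyperLogLog()
for i in range(10000):
    h.add(i)
print(round(h.estimate()))   # typically within a few percent of 10000 (error ~ 1.04/sqrt(s))
```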
The HLL algorithm can be used to estimate out-component sizes in a EG without tracking the exact set of nodes involved <cit.>. This approach reduces the time complexity of the out-component distribution computation to O(m log(m) + η) up to some constant factors that depend on the hyper parameters s of the HLL algorithm, which sets the trade-off between computational efficiency and accuracy.
§ STREAMING MATRIX ALGORITHM FOR OUT-COMPONENT SIZE CALCULATIONS
We develop a streaming matrix algorithm as an exact solution for the question of computing the largest out-component of each node in a temporal network. The proposed solution can process chronologically streamed nodes and events of a temporal network in real time, with a space complexity that does not depend on the number m of events.
To demonstrate the basic idea of the method, let us consider the simple example of an information spreading process on a temporal network between n nodes modelled by a deterministic SI process (a short definition is recalled in the Introduction). To follow-up on the evolving components during the SI process, we design a matrix with rows representing the in-component and columns representing the out-component of each node. At time t=0, when each node has a unique information that it has not propagated yet to any other nodes, we obtain the identity matrix with ones in the diagonal and zeros otherwise. Propagation happens between nodes u and v at the time of their interactions, when they mutually share all unique information they already learned from others (including their own) during earlier times of the process[Information sharing could be deemed non-mutual in case of directed interactions in the temporal network.].
This propagation rule is associated to the "OR" operation between the corresponding lines of the matrix, which yields the union of the set of unique information known by the two nodes.
By the last event of the temporal network, the unique information of a node u is known by all other nodes in its out-component (depicted by column of the matrix). Thus, to compute the size of u's out-component, we simply have to count the number of unique nodes that are aware of u's unique information, i.e., the number of ones in the corresponding column of the matrix.
§.§ The component matrix
The component matrix is a binary matrix of size n × n, where n is the number of nodes in . An illustration is provided in Fig. <ref>. An element (i, j) of the matrix is 1 if and only if the node i is reachable from the node j by any temporal path. Thus, the i-th line of the component matrix is the in-component of the node i and the j-th column of the matrix is the out-component of the node j.
The precise algorithm to compute this component matrix is given as pseudo-code in Algorithm <ref>. It starts with the identity matrix. Then, for every event, the lines corresponding to the interacting nodes are used to compute a binary OR operation, and those lines are replaced by that resulting OR. Finally, at the end of the series of events, the output matrix is the component matrix. This algorithmic construction process is described in Fig. <ref>.
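A minimal sketch of this procedure, using a dense boolean matrix, is given below (our own illustrative implementation of the idea, not the exact pseudo-code of Algorithm 1).

```python
import numpy as np

def component_matrix(n, events):
    """One pass over chronologically ordered events; rows are in-components, columns out-components."""
    S = np.eye(n, dtype=bool)
    for u, v, t in events:               # events assumed sorted by time
        fused = S[u] | S[v]              # binary OR of the two rows
        S[u] = fused
        S[v] = fused
    return S

events = [(0, 1, 1), (1, 2, 2), (2, 3, 3)]
S = component_matrix(4, events)
out_sizes = S.sum(axis=0)                # column j = out-component of node j
print(out_sizes)                         # [4 4 3 2]: node 0 reaches everyone, node 3 reaches only node 2
```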
§.§ Complexity of the algorithm
Since we use a n × n matrix to store the intermediate results, the space complexity is O(n^2), which may be reduced using sparse matrices for storage. For time complexity, we can divide the algorithm into several steps. The initialisation of the identity matrix can be done in O(n) by simply setting the n diagonal elements to the "True" value at the outset. To update the matrix we perform the OR operation between two vectors of size n once for each of the m events. The complexity of each update is O(n), bounded by the maximum out-component sizes n, but could be further reduced with a sparse matrix format. Consequently, the total complexity of the updates is O(nm). Finally, counting the number of non-zero element (or non-"False" elements) can be done at the same time as the update without any added complexity. Thus, the overall time complexity of the component matrix algorithm is O(n) + O(nm) = O(nm).
§.§ Streaming computation of average out-component size
One way to further reduce the complexity of the computation is to look for an approximation of the average size of a maximum out-component rather than its exact size. This can be implemented with the component matrix using the HyperLogLog counting algorithm that has been recalled before. It allows us to approximately describe and count the "True" values on each row of the matrix. The rows of the component matrix describe a set of nodes: the in-component. An HLL structure of size s (arbitrarily chosen, independently of n and m) can be used to estimate the size of an in-component. Thus, n HLL structures replace the former matrix. In its matrix form, the algorithm starts with the identity matrix. For the HLL structures, we simply initialise them with a single element: the i-th structure will be initialised with "i". Then, for every event (i, j, t), the OR operation between the lines i and j of the matrix is computed, which is equivalent to the union of the two in-components. For the HLL structures, this results in merging them. Finally, every HLL structure can give an approximation of the size of its corresponding line in the component matrix.
Interestingly, the average size of the maximum out-components at time t is defined as:
s_t = 1/n ∑_u ∈𝒱 ∑_v ∈𝒱 S_t(u, v),
where the sums are interchangeable. Thus it can be computed both as the average size of out-components or in-components. Actually, the HyperLogLog structure can compute a size estimate with no additional cost in O(1) time complexity for each in-component, which are coded in the matrix as the number of "Trues" in a row. According to Eq. <ref>, the average value of these maximum in-component size values can give us an estimate directly for the average of the maximum out-component sizes. Thus, the HyperLogLog approach can reduce the algorithm's space complexity to O(n) and the time complexity to O(m) [Remember, however, that the O(·) notation hides a constant, whose value results from a trade-off between cost and precision for the HLL algorithm.]. As an advantage, the matrix algorithm using HyperLogLog preserves the streaming aspect of the algorithm, assuming events to arrive in chronological order. However, in turn, it does not provide the whole maximum out-component size distribution but only an estimation for its mean value.
§.§ Component size distribution from reversed event sequence
By reversing time, we can easily obtain a solution to compute the whole maximum out-component size distribution. But it comes at the expense of losing the streaming property of our algorithm, as this solution takes as input the whole interaction sequence in reversed order, processing it from the end to the beginning. By reversing the order of the sequence of events, the in-components become the out-components.
In this case the component matrix algorithm does not fuse the rows anymore, but has to be adjusted to fuse the columns instead.
Thanks to the reversal of the sequence of events, we can use the HyperLogLog counting method to estimate the full distribution of the maximum out-component sizes at a lower cost.
More specifically, for every node, we initialise a HyperLogLog structure with constant size, which contains only the node itself as previously. Then, for every event (u_i, v_i, t_i), considered in reverse chronological order, we merge the structures of u_i and v_i (corresponding to columns, i.e., to current estimates of out-components) in O(1) time. Finally, we approximate the size of the maximum out-component of every node with their HyperLogLog estimates. This results in having an approximation for the whole distribution of out-components' sizes in O(n) space complexity, O(m) time complexity, and scanning the events' sequence only once. While this seems to be a very efficient solution, the constants in the complexity evaluations are quite large in practice, setting back the effective performance of this solution in some regimes of n and m, as we demonstrate in the next section, while providing better results in others.
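The reversed-time variant can be sketched as follows; plain Python sets are used here for readability, whereas in the scheme described above each set would be replaced by a constant-size HyperLogLog structure so that merges and size queries cost O(1).

```python
def out_component_sizes(n, events):
    """Scan the events backwards; merging per-node structures yields every node's maximum out-component."""
    out = {u: {u} for u in range(n)}
    for u, v, t in reversed(events):     # events sorted by time, processed in reverse order
        merged = out[u] | out[v]
        out[u] = merged
        out[v] = merged
    return [len(out[u]) for u in range(n)]

events = [(0, 1, 1), (1, 2, 2), (2, 3, 3)]
print(out_component_sizes(4, events))    # [4, 4, 3, 2], matching the component matrix result
```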
As a summary, the first part of <Ref> reports on the complexity and properties of the methods described so far. The second part of the table also anticipates on the method to be described in the next section.
§.§ Experimental validation
We perform several computational experiments to demonstrate the effectiveness of the component matrix algorithm and to compare its performance to the corresponding EG based solution.
§.§.§ Experimental setting:
To test the performance of these algorithms we consider a simple model of temporal network. We first generate a static undirected random graph G(n,p) using the Erdős-Rényi model with n nodes and wiring probability p=2/n. This way the constructed static graph will likely contain a unique giant component. To generate a temporal network, we set an independent Poisson process on each link <cit.> of this graph to determine the times when links are present and can transmit information between connected nodes. This way, both the underlying structure and the link dynamics are generated by random processes, that induce limited degree heterogeneity (with an emerging Poisson degree distribution) <cit.> and no burstiness (with exponential inter-event time distribution) <cit.>. In the simulated networks, the number of nodes n varies between {100, 200, 500, 1000, 2000, 5000, 10000} and the number of events m logarithmically from 10 to 10^8. Note that while real temporal networks may exhibit several types of structural and temporal heterogeneity, we assume they would not change considerably the conclusion of the performance evaluation provided here.
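A sketch of this synthetic benchmark generator is given below; the Poisson rate and the observation window are illustrative choices, not the values used for the reported experiments.

```python
import networkx as nx
import numpy as np

def synthetic_temporal_network(n, rate=1.0, T=10.0, seed=0):
    """Erdos-Renyi graph G(n, p) with p = 2/n, and an independent Poisson process on every edge."""
    rng = np.random.default_rng(seed)
    G = nx.gnp_random_graph(n, 2.0 / n, seed=seed)
    events = []
    for u, v in G.edges():
        t = rng.exponential(1.0 / rate)
        while t < T:                      # i.i.d. exponential inter-event times
            events.append((u, v, t))
            t += rng.exponential(1.0 / rate)
    events.sort(key=lambda e: e[2])       # chronological order
    return events

events = synthetic_temporal_network(100)
print(len(events))
```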
For a fair comparison, both the EG-based and the component matrix based methods were used to solve the same task, that is, to compute the largest maximum out-component size of a temporal network. While the EG-based method solves a larger problem first, i.e. estimating the out-component size of any node at any event, one can extract the maximum out-component size for every node from its solution, simply by taking the size that corresponds to the first emergence of a given node. The overall asymptotic memory and time complexity of this solution scales similarly to the EG+HLL algorithm <cit.>, as summarised in Table <ref>. Taking this model as reference, we compare it with the performance of the proposed method based on the exact component matrix, as well as with its variant which uses HLL algorithms to obtain an approximation. In each method using HLL, we tune s
to obtain less than 1% error for the average component size. Note that reversing time changes neither the time nor the memory complexity of the component matrix algorithm, with or without HLL, as summarised in Table <ref>.
§.§.§ Results:
To compare the different methods, we report in Fig. <ref> the ratios of computation times and of memory usages between the compared algorithms. First, let us focus on the relative performance of the component matrix method in Fig. <ref> (a) and (c). Interestingly, the results depicted in panel (a) suggest that although this method provides an exact solution for the task, it always performs better than the EG+HLL algorithm in terms of computation time. A similar scaling holds in terms of memory complexity (panel (c)), although the large quadratic cost of the component matrix makes this method perform worse than the reference for small numbers of events or for large numbers of nodes. Nevertheless, we can conclude that the component matrix method largely outperforms the event graph method in terms of computational time and memory for large numbers of events, especially with networks of smaller size, where the gain can reach several orders of magnitude.
Comparing the HLL variant of the component matrix algorithm with the reference EG+HLL method shows more variable performance. In terms of computation time (see panel (b) in Fig. <ref>), although worse for small numbers of events and large networks, the performance of our method is comparable to that of the exact method for the other parameter values. However, regarding memory consumption (see panel (d)), our method is much more efficient, as it does not have to store the component matrix of size n^2. For large network sizes the model requires approximately the same amount of memory as the reference model, while doing significantly better over the rest of the parameter space.
§.§.§ Advantages and limitations:
As stated before, a major advantage of the component matrix algorithm as compared to other methods is its space complexity that does not depend on the number m of events in the temporal network but scales as the square of its node set size n. Meanwhile, its time complexity scales only linearly with m. This is especially suitable for data streaming scenarios when nodes and events arrive in chronological order.
Actually, adding a new node to the network only requires adding a new row and a new column to the component matrix, set to "False" except for the diagonal element. As for new events, insertion follows the update rule discussed earlier, as the algorithm operates in a streaming manner. Furthermore, the component matrix method requires only one pass over the event sequence. At any time step t when a new event appears, it only requires information about the previous state of the component matrix S_i-1 at time t-1 (or conversely in the case of reversed time).
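The streaming behaviour can be made concrete with a small sketch (our reading of the update rule: an event fuses the in-component rows of the two interacting nodes; this is an illustration, not the paper's code):

import numpy as np

class ComponentMatrix:
    def __init__(self):
        self.S = np.zeros((0, 0), dtype=bool)   # S[i, j]: j is in the in-component of i

    def add_node(self):
        n = self.S.shape[0]
        S = np.zeros((n + 1, n + 1), dtype=bool)
        S[:n, :n] = self.S
        S[n, n] = True                           # only the diagonal element is "True"
        self.S = S

    def add_event(self, u, v):
        fused = self.S[u] | self.S[v]            # fuse the two rows
        self.S[u] = self.S[v] = fused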
On the other hand, the exact component matrix method scales poorly in space complexity in terms of n, the number of nodes, as it operates on an n × n matrix. This shortcoming can be addressed by the HLL method to obtain approximate results. A sparse matrix implementation
can also be very beneficial to solve this problem, if the average out-component size is much smaller than n. Otherwise, when it is comparable to n and the number of non-zero elements in the matrix is in O(n^2), even a sparse matrix solution would scale quadratically.
§.§.§ Reference point:
The figure above only gives the ratios between computation times or between the amounts of memory required for the computations. We provide in Table <ref> some reference points with the actual values for each method. The smallest temporal network in Fig. <ref> corresponds to n=100 and m=100.
The largest temporal network in Fig. <ref> is for n=10^4 and m=10^8. The associated computation times and memory usages are reported in Table <ref>.
§ HASHING THE TEMPORAL NETWORK
Hashing the temporal network consists in reducing its number of nodes, thus compressing it, by (randomly) assigning nodes of the initial temporal network to "super-nodes" of a hashed graph. An event, or an interaction, between two nodes at time t in the initial temporal network becomes a new event between their hashed representatives in the hashed graph at time t.
Reducing the number of nodes is notably attractive because it reduces the complexity of various algorithms, including the computation of the component matrix, even though it may cause information loss about the initial graph. To balance this effect, we propose to use several different hashing functions and to fuse the obtained results together. The overall framework is shown in Fig. <ref>.
§.§ Hashing functions
To reduce the number of nodes of the static graph underlying the input temporal graph (in short: "the input static graph"), and therefore the computation complexity of the out-component size distribution, we use hashing functions. These functions take as input a set of labels of n nodes, {1, ..., n}=[n], and hash them into n_s super-nodes {1, ..., n_s }=[n_s]. The labels are the nodes of the input static graph and the buckets are the super-nodes of the resulting hashed static graph. Since n_s<n, some nodes will collide into the same super-node, reducing the overall cost of computation over the hashed temporal network associated to the hashed static graph, but reducing also the amount of available information.
We use k-universal (randomised) hashing functions <cit.>, with k=4. In general, a class H of random functions is k-universal if, ∀ x_1, ..., x_k ∈ [n] and ∀ v_1, ..., v_k ∈ [n_s],

Pr{ h(x_i)=v_i, ∀ i ∈ [k] } = 1/n_s^k,

where the probability is over the draw of h.
Qualitatively, this means that the probability that one node of the initial graph is assigned to the same super-node by two different hashing functions is low and controlled by the choice of k.
In our work, we use k=4 and the hashing functions are based on a large prime number: Prime=2^61-1. First, let us define the table A of size (3, order), where order is an order parameter, as:
∀ i ∈{0, 1, 2}, ∀ j ∈ [order], A(i, j) = ((rand(2^31-1) ≪ 32) + rand(2^31-1)) % Prime
where ≪32 denotes the shift of the binary representation 32 bits to the left.
The quantity acc(i, u) is computed recursively, order-1 times, through
acc(i, u) = MultAddMod(u, acc(i, u), A(i, j)) % Prime
for j ∈ {1, ..., order-1}, where MultAddMod is a multiply-add-modulo function defined in the paper and, initially, acc(i, u) = A(i, 0).
Then,
T_0, T_1, T_2 are tables of size n_s that are defined as:
∀ u ∈ [n_s], T_i[u] = acc(i, u)
Furthermore, for every node, we define three quantities x_0, x_1, x_2 as ∀ u ∈ [n], x_0(u) = low(u), x_1(u) = high(u), x_2(u) = x_0(u) + x_1(u) where low(u) outputs the 32 rightmost bits of the binary representation of u and high(u) outputs the 32 leftmost bits.
Finally,
∀ u ∈ [n], h(u) = T_0(x_0(u)) ⋆ T_1(x_1(u)) ⋆ T_2(x_2(u))
where ⋆ is the bitwise exclusive OR.
The hashed static graph is made of super-nodes defined by the output of a hash function and of "super"-edges connecting them: if u and v are connected in the initial static graph, then the super-nodes h(u) and h(v) become connected by a super-edge, whose weight is binary. Finally, for every event (u, v, t), a super-event is defined as (h(u), h(v), t).
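A possible implementation of this recipe is sketched below. Two points are our own reading rather than statements of the paper: MultAddMod(u, acc, A) is taken to be the Horner step (acc * u + A) mod Prime, and the XOR of the three terms is folded into [n_s] with a final modulo, which the description leaves implicit.

import random

PRIME = 2**61 - 1

def make_hash(n_s, order=3, seed=0):
    rng = random.Random(seed)
    # A(i, j): table of random coefficients, as in the definition above.
    A = [[((rng.randrange(2**31 - 1) << 32) + rng.randrange(2**31 - 1)) % PRIME
          for _ in range(order)] for _ in range(3)]

    def acc(i, u):
        a = A[i][0]                        # initially acc(i, u) = A(i, 0)
        for j in range(1, order):
            a = (a * u + A[i][j]) % PRIME  # assumed MultAddMod step
        return a

    def h(u):
        x0 = u & 0xFFFFFFFF                # low(u): 32 rightmost bits
        x1 = u >> 32                       # high(u): 32 leftmost bits
        x2 = x0 + x1
        return (acc(0, x0) ^ acc(1, x1) ^ acc(2, x2)) % n_s

    return h

def hash_temporal_network(events, h):
    # Every event (u, v, t) becomes a super-event (h(u), h(v), t).
    return [(h(u), h(v), t) for u, v, t in events]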
§.§ Fusion to compute the distribution of out-components
The main goal of our work is to compute the out-component, or its size, of every node in the input temporal network with lower complexity than existing methods reported in Table <ref>. To do so, we hash the set of n nodes of the input temporal network into n_s super-nodes with K
different hash functions h_j.
∀ i ∈ [n] ∀ j ∈ [K], h_j(u_i) ∈ [n_s]
These hashing functions are drawn independently at random.
Here, the hashing functions h_j, ∀ j ∈ [K] are not injective thus not invertible: there are usually several nodes mapped to the same super-node. We define the inverse of h as the function that, given a super-node of the hashed static graph, computes the set of corresponding nodes in the initial static graph:
∀ v ∈ [n_s], h^-1(v) = {u ∈ [n]/ h(u)=v}
Denote 𝙾𝙲(u) (resp. 𝙾𝙲(h(u))) the out-component of node u (resp. super-node v = h(u)). Assuming we can compute (an estimate of) the out component 𝙾𝙲(v) of a super-node v in the hashed graph obtained with hashing function h_j, we can also define
h_j^-1(𝙾𝙲(v)) = ⋃_x ∈𝙾𝙲(v) h^-1(x)
Instead of estimating the out-component for each of the n nodes in the temporal network, we first hash the network into K hashed graphs of n_s nodes and m events, then estimate the out-component for every node in the hashed graphs and finally aggregate the information by intersecting the (estimated) out-components given by each hashed graph.
We then define the estimated out-component of every node as
∀ i ∈ [n], 𝙾𝙲(u_i) = ⋂_j ∈ [K] h_j^-1(𝙾𝙲(h_j(u_i))).
The estimate defined this way necessarily contains the true out-component; yet, if K is too small, it may be much larger than the true out-component.
By computing |𝙾𝙲(u_i)|, where |A| denotes the number of elements of a set A, one obtains an approximation of the distribution of out-component sizes with any of the aforementioned algorithms that is able to compute the out-components themselves (and not only their sizes) on the hashed graphs. We compare the resulting approximate distribution with the true distribution.
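A sketch of this fusion step is given below (names are ours; hashed_oc[j][v] is assumed to hold the out-component, as a set of super-nodes, of super-node v in the j-th hashed graph):

def fuse_out_components(nodes, hashes, hashed_oc):
    # Precompute the inverse images h_j^{-1}(v) of every super-node.
    inverse = []
    for h in hashes:
        inv = {}
        for u in nodes:
            inv.setdefault(h(u), set()).add(u)
        inverse.append(inv)

    estimates = {}
    for u in nodes:
        est = None
        for j, h in enumerate(hashes):
            # Nodes of the initial graph covered by the out-component of h_j(u).
            covered = set()
            for x in hashed_oc[j][h(u)]:
                covered |= inverse[j].get(x, set())
            est = covered if est is None else est & covered
        estimates[u] = est        # always a superset of the true out-component
    return estimates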
§.§ Properties of the algorithm
The structure of the resulting algorithm ensures that every step before the final fusion remains compatible with streamed events arriving in chronological order, and is also amenable to parallel/independent computations for each hashing function.
Moreover, the complexity of the framework depends on the setup. In a parallel setting, i.e. when the S^(k)_i are computed separately, we need O(K × n_s^2) space to store the matrices and O(mn_s) time to compute the small component matrices. In a non-parallel setting, we need O(K× n_s^2) to store the small matrices and O(Kmn_s) time to compute them.
§.§ Experimental evaluation
The compression framework that we propose can be used with several observables. Here, we focus on the computation of the out-components. The whole distribution of the out-component sizes describes the largest spreading phenomena possible starting from every node. The other quantity we are interested in is the tail of the distribution, i.e. the set of nodes with the largest out-component sizes. To experimentally demonstrate the effectiveness of our approach, we measure the precision of the approximate method with respect to the ground truth for both the distribution and its tail.
§.§.§ Experimental setting:
For simulations, temporal networks are generated exactly as in the previous Section, see <ref>.
In the generated data, the number of nodes, n, takes values in {100, 200, 500, 1000, 2000, 5000, 10000} and the number of events varies from 10^4 to 10^9 as powers of 10. The number of super-nodes is always a fraction of the number of nodes, n_s = 0.3 × n, and the number of hashing functions is K=5.
The baseline algorithm is our matrix method from the previous section since it provides the exact distribution of the size of the out-components, with a controlled memory and time complexity. The results will compare the hashing version to this baseline.
In addition, some experiments have been conducted on real-world datasets freely available in the SNAP repository (http://snap.stanford.edu/data/). The "Superuser" temporal network is a network of interactions on the Stack Exchange web site Super User. There are three kinds of interactions (edges): a user (node) answered a question of someone else, a user commented on a question, or a user commented on an answer.
The Superuser network is made of 194 085 nodes and 1 443 339 events. The "Reddit" dataset is a temporal network of connections between subreddits (i.e., forums dedicated to a specific topic on the website Reddit), taken here as nodes.
There is an event between two subreddits when a post from a source subreddit links to a target subreddit. The Reddit temporal network has 35 776 nodes and 286 560 events.
For real datasets, we split chronologically the events in 10 equal parts and compute the distribution of the largest out-component on {10%, 20%, ..., 100%} of the events. As for generated datasets, we use n_s = 0.3 × n. We also use K=1 and K=5.
The baseline is still the matrix algorithm of the previous section.
§.§.§ Performance criteria:
We evaluate the hashing framework based on three criteria: time, memory and accuracy. We compare the time required by the matrix algorithm to compute the true distribution of the largest out-components, 𝒟 with the one required by the hashing framework to compute the approximate distribution of the largest out-components 𝒟_s.
The computation time of the hashing framework includes both the computation of the hashed matrices and their fusion.
We also compare the memory usage of the matrix algorithm, with a single big matrix, with the one of the hashing framework, with several smaller matrices.
Furthermore, to compare the ground-truth out-component size distribution 𝒟 and the one computed with the hashing framework, 𝒟_s(n_s, K), we simply use the Earth-Mover distance, also called the Wasserstein distance <cit.>, computed with the Python Optimal Transport library <cit.>, although any distance between two distributions could be used. We also tried the Kullback-Leibler divergence, but it was less sensitive to subtle differences between the distributions. Thus we define the accuracy of the out-component size distribution inference as:
Acc(n_s, K) = γ(𝒟, 𝒟_s(n_s, K))
where γ is the Earth-Mover distance. The lower Acc(n_s, K), the closer the two distributions.
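In code, the criterion boils down to a one-liner; here is a stand-in using SciPy's 1-D Wasserstein distance rather than the POT routine used in the paper:

from scipy.stats import wasserstein_distance

def accuracy(true_sizes, estimated_sizes):
    # Earth-Mover (Wasserstein) distance between the two size distributions;
    # lower values mean the approximate distribution is closer to the true one.
    return wasserstein_distance(true_sizes, estimated_sizes)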
§.§.§ Results
First, we present the result for the generated data. The relative computation time, relative memory usage and accuracy of the hashing framework are reported in Fig. <ref>.
The relative computation time panel is red, meaning that the hashing framework requires more time than the matrix method to compute the target distribution. However, we can clearly see that the relative computation time decreases quickly with the number of events and slowly with the number of nodes. Generally, for datasets with more than m=10^8 events, the hashing framework with K=5 and n_s=0.3 × n requires less time than the full matrix method.
For the relative memory usage, the panel is blue, meaning that we always gain memory. In fact, in this setup, only half the memory of the matrix method is required by the hashing framework. As for the Earth-Mover distance, there is a regime of small datasets where the accuracy is not satisfactory, but for the large majority of the generated datasets the hashing framework performs very well.
For the real datasets, we first show the relative computation time and the relative memory usage of the hashing framework compared to the matrix method in Fig. <ref>. Experimentally, the hashing framework generally requires more time to compute the target distribution. Moreover, the relative computation time is linear in the number of hashing functions: for K=5, it is approximately 5 times higher than for K=1 for both the Reddit and the Superuser datasets. Overall, the general shape of the curves is in line with the results on generated data. For example, the relative computation time for the third point of the Reddit dataset, n=15370 and m=85968, is 198, which coincides with the corresponding value for the generated datasets. As expected, the relative computation time decreases as the number of nodes increases.
Similarly, the memory required for the computation is linear in the number of hashing functions: for K=5, the memory usage is 5 times that for K=1. The figures for real datasets are also in line with those for generated datasets. With K=1 and n_s = 0.3 × n, the computation requires less memory than the full matrix algorithm for both real datasets, by a factor of 10. More importantly, with K=5 and n_s=0.3 × n, the hashing framework still requires only around 50% of the memory of the matrix method.
Finally, we report the accuracy of the hashing framework compared to the matrix method in terms of the Earth-Mover distance between the true distribution computed by the matrix method and the one estimated by the hashing method in Fig. <ref>. Indeed, the quality of the results is important to assess the quality of the hashing framework. For both datasets, lower dimensions lead to lower accuracy of the framework. We see that the first few points, corresponding to networks of small sizes, have a significantly higher Earth-Mover distance (thus, lower accuracy) than the remaining ones. Overall, the shape of the curves still confirms the results with the generated datasets: the larger the network, the better the approximation. Secondly, as expected, the distance is lower for higher values of K. The accuracy of the method increases as there are more hashing functions.
Overall, we can thus conclude that hashing is relevant in high dimension. There is a computation time gain for m ≥ 10^9 in the generated datasets, while memory usage remains lower and accuracy is good. Also, increasing the number of hashes leads to a linear increase in memory usage and computation time, but it also increases the accuracy of the method.
§ CONCLUSION
The continuous growth in the size of databases requires new algorithms to process information. Moreover, structured data evolving over time represents an important challenge since it differs a lot from usual tabular data. To that end, we proposed a matrix algorithm that is able to compute both the out-components of every node of a temporal network and their sizes. Furthermore, to reduce the complexity of the analysis, we proposed a compression scheme based on hashing functions that reduces the number of nodes of the network at the cost of some uncertainty. This uncertainty is lifted thanks to the use of several hashes in parallel. On each hashed graph, the matrix algorithm can be computed and, finally, all the information is merged to approximate the component matrix of the input network. Our framework is online and allows parallelization. Indeed, new nodes and new events can be processed as they come, and the different hashed graphs allow parallelization since they are independent.
Additionally, hashing can make the computation private. If we do not observe the temporal network directly but only hashed versions of it and if hashes have some external randomness, our framework allows ϵ-differential privacy <cit.>.
We believe that our work has many potential applications. The first concrete use case is to use out-component sizes as the maximum number of nodes reachable during a spreading process. For example, it can be the maximum number of people infected by a virus from a single source, or, on Twitter, the maximum number of people a piece of news spreads to. Secondly, our framework can be extended to other cases. In this work we focused on out-components, but we believe that many other quantities can be computed thanks to our compression scheme, such as pairwise distances between nodes. Also, we believe that the hashing framework can be rewritten with an algebraic formulation. This would open up the work to linear problems and linear solvers. In fact, the reconstruction of the matrix could be tackled in many different ways, making the framework more flexible. Moreover, privacy-preserving algorithms are particularly interesting for security or privacy reasons. The work we propose can efficiently make algorithms on temporal networks private. Indeed, adding randomness to the data can help prevent the identification of its source.
Most importantly, our hashing framework transforms a temporal network into a series of smaller datasets that can be used to infer properties of the initial dataset without direct access to it. This can be very beneficial in the processing of sensitive information.
§ ACKNOWLEDGEMENT
This work has been supported by the DATAREDUX project, ANR19-CE46-0008. MK was supported by the CHIST-ERA project SAI: FWF I 5205-N; the SoBigData++ H2020-871042; the EMOMAP CIVICA projects; and the National Laboratory for Health Security, Alfréd Rényi Institute, RRF-2.3.1-21-2022-00006. PB was supported by the CHIST-ERA-19-XAI-006, for the GRAPHNEX ANR-21-CHR4-0009 project.
10
Adhikari2017
Adhikari, B., Zhang, Y., Bharadwaj, A., Prakash, B.: Condensing Temporal
Networks using Propagation, pp. 417–425 (06 2017).
10.1137/1.9781611974973.47
albert2002statistical
Albert, R., Barabási, A.L.: Statistical mechanics of complex networks.
Reviews of modern physics 74(1), 47 (2002)
Allen2022
Allen, A.J., Moore, C., Hébert-Dufresne, L.: A network compression approach
for quantifying the importance of temporal contact chronology (2022).
10.48550/ARXIV.2205.11566, <https://arxiv.org/abs/2205.11566>
Badie-Modiri2020
Badie-Modiri, A., Karsai, M., Kivelä, M.: Efficient limited-time reachability
estimation in temporal networks. Phys. Rev. E 101, 052303 (May
2020). 10.1103/PhysRevE.101.052303,
<https://link.aps.org/doi/10.1103/PhysRevE.101.052303>
Bernardo2013
Bernardo, G.D., Brisaboa, N.R., Caro, D., Rodríguez, M.A.: Compact data
structures for temporal graphs. In: 2013 Data Compression Conference. pp.
477–477 (2013). 10.1109/DCC.2013.59
earth_mover
Bonneel, N., van de Panne, M., Paris, S., Heidrich, W.: Displacement
interpolation using Lagrangian mass transport. ACM Transactions on
Graphics 30(6), Article n°158 (2011).
10.1145/2070781.2024192, <https://inria.hal.science/hal-00763270>
Caro2016
Caro, D., Rodríguez, M.A., Brisaboa, N.R., Fariña, A.: Compressed
kd-tree for temporal graphs. Knowl. Inf. Syst. 49(2), 553–595
(nov 2016). 10.1007/s10115-015-0908-6,
<https://doi.org/10.1007/s10115-015-0908-6>
Dwork2006
Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to
sensitivity in private data analysis. In: Halevi, S., Rabin, T. (eds.) Theory
of Cryptography. pp. 265–284. Springer Berlin Heidelberg, Berlin, Heidelberg
(2006)
Flajolet2007
Flajolet, P., Éric Fusy, Gandouet, O., Meunier, F.: Hyperloglog: The analysis
of a near-optimal cardinality estimation algorithm. In: IN AOFA ’07:
PROCEEDINGS OF THE 2007 INTERNATIONAL CONFERENCE ON ANALYSIS OF ALGORITHMS
(2007)
flamary2021pot
Flamary, R., Courty, N., Gramfort, A., Alaya, M.Z., Boisbunon, A., Chambon, S.,
Chapel, L., Corenflos, A., Fatras, K., Fournier, N., Gautheron, L., Gayraud,
N.T., Janati, H., Rakotomamonjy, A., Redko, I., Rolet, A., Schutz, A., Seguy,
V., Sutherland, D.J., Tavenard, R., Tong, A., Vayer, T.: Pot: Python optimal
transport. Journal of Machine Learning Research 22(78), 1–8
(2021), <http://jmlr.org/papers/v22/20-451.html>
Holme2012
Holme, P., Saramäki, J.: Temporal networks. Physics Reports 519(3),
97–125 (oct 2012). 10.1016/j.physrep.2012.03.001,
<https://doi.org/10.1016%2Fj.physrep.2012.03.001>
karsai2018bursty
Karsai, M., Jo, H.H., Kaski, K., et al.: Bursty human dynamics. Springer (2018)
kivela2018mapping
Kivelä, M., Cambe, J., Saramäki, J., Karsai, M.: Mapping
temporal-network percolation to weighted, static event graphs. Scientific
reports 8(1), 12357 (2018)
Li2017
Li, X., Sharpnack, J.: Compression of spatio-temporal networks via
point-to-point process models. Proceedings of the 13th International Workshop
on Mining and Learning with Graphs (MLG)
<https://par.nsf.gov/biblio/10061449>
Panagiotis2022
Liakos, P., Papakonstantinopoulou, K., Stefou, T., Delis, A.: On compressing
temporal graphs. In: 2022 IEEE 38th International Conference on Data
Engineering (ICDE). pp. 1301–1313 (2022). 10.1109/ICDE53745.2022.00102
Liu2018
Liu, Y., Safavi, T., Shah, N., Koutra, D.: Reducing large graphs to small
supergraphs: a unified approach. Social Network Analysis and Mining
8 (03 2018). 10.1007/s13278-018-0491-4
Loukas2018
Loukas, A., Vandergheynst, P.: Spectrally approximating large graphs with
smaller graphs. In: International Conference on Machine Learning (2018)
Lynn2021
Lynn, C.W., Bassett, D.S.: Quantifying the compressibility of complex networks.
Proceedings of the National Academy of Sciences 118(32),
e2023473118 (2021). 10.1073/pnas.2023473118,
<https://www.pnas.org/doi/abs/10.1073/pnas.2023473118>
Mellor2017
Mellor, A.: The temporal event graph. Journal of Complex Networks
6(4), 639–659 (oct 2017). 10.1093/comnet/cnx048,
<https://doi.org/10.1093%2Fcomnet%2Fcnx048>
newman2018networks
Newman, M.: Networks. Oxford university press (2018)
Thorup2004
Thorup, M., Zhang, Y.: Tabulation based 4-universal hashing with applications
to second moment estimation. In: Proceedings of the Fifteenth Annual ACM-SIAM
Symposium on Discrete Algorithms. p. 615–624. SODA '04, Society for
Industrial and Applied Mathematics, USA (2004)
Yousuf2020
Yousuf, M., Kim, S.: Guided sampling for large graphs. Data Mining and
Knowledge Discovery 34 (07 2020). 10.1007/s10618-020-00683-y
|
http://arxiv.org/abs/2307.04729v2 | 20230710173843 | Bootstrapping the Chiral Anomaly at Large $N_c$ | [
"Teng Ma",
"Alex Pomarol",
"Francesco Sciotti"
] | hep-th | [
"hep-th",
"hep-lat",
"hep-ph"
] |
|
http://arxiv.org/abs/2307.04523v1 | 20230710124620 | 1D non-LTE corrections for chemical abundance analyses of very metal-poor stars | [
"L. Mashonkina",
"Yu. Pakhomov",
"T. Sitnova",
"A. Smogorzhevskii",
"P. Jablonka",
"V. Hill"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
1D non-LTE corrections for chemical abundance analyses of very metal-poor stars
L. Mashonkina, Yu. Pakhomov, T. Sitnova, A. Smogorzhevskii, P. Jablonka, V. Hill
August 12, 2023
===================================================================================================================================================================================================
Detailed chemical abundances of very metal-poor (VMP, [Fe/H] < -2) stars are important for better understanding the First Stars, early star formation, and the chemical enrichment of galaxies. Large on-going and upcoming high-resolution spectroscopic surveys provide a wealth of material that needs to be carefully analysed. For VMP stars, elemental abundances should be derived based on the non-local thermodynamic equilibrium (non-LTE = NLTE) line formation, because low metal abundances and low electron number density in the atmosphere produce the physical conditions favorable for departures from LTE. Galactic archaeology research requires homogeneous determinations of chemical abundances. For this purpose, we present grids of the 1D-NLTE abundance corrections for lines of Na, Mg, Ca, Ca, Ti, Fe, Zn, Zn, Sr, and Ba in the range of atmospheric parameters that represents VMP stars on various evolutionary stages and covers effective temperatures from 4000 to 6500 K, surface gravities from log g = 0.5 to 5.0, and metallicities -5.0 ≤ [Fe/H] ≤ -2.0. The data are publicly available, and we provide the tools for interpolating in the grids online.
line: formation – stars: abundances – stars: atmospheres.
§ INTRODUCTION
Very metal-poor (VMP, [Fe/H] < -2, where [X/H] = log(N_X/N_H)_star - log(N_X/N_H)_⊙ in the classical notation) stars are fossils of the early epochs of star formation in their parent galaxy.
Their detailed elemental abundances are of extreme importance for understanding the nature of the First Stars, uncovering the initial mass function and the metallicity distribution function of the galaxy, and testing the nucleosynthesis theory predictions and the galactic chemical evolution models <cit.>.
Since the 1980s, the number of discovered VMP star candidates has grown tremendously thanks to wide-angle spectroscopic and photometric surveys, such as HK <cit.>, HES <cit.>, RAVE <cit.>, SMSS <cit.>, SEGUE/SDSS <cit.>, LAMOST <cit.>.
The Pristine survey has been specially designed for efficient searches for VMP stars <cit.>. Using a narrow-band photometric filter centered on the Ca H & K lines makes it possible to successfully predict stellar metallicities <cit.>.
The number of confirmed VMP stars is substantially lower than the number of candidates because the verification of very low metallicity requires high-resolution follow-ups. The SAGA (Stellar Abundances for Galactic Archaeology) database <cit.> includes about 1390 Galactic stars with [Fe/H] ≤ -2, whose metallicities were derived from R = λ/Δλ ≥ 20 000 spectra. Of these, 470 stars have [Fe/H] ≤ -3, and 28 stars are ultra metal-poor (UMP, [Fe/H] ≤ -4). A burst in the number of VMP stars with detailed elemental abundances is expected with the launch of the WEAVE (WHT Enhanced Area Velocity Explorer) project <cit.>. A vast amount of spectral data will be taken with the coming 4-metre Multi-Object Spectroscopic Telescope <cit.>.
Abundance ratios among the elements of different origin, such as Mg and Fe, for stellar samples covering broad metallicity ranges serve as the observational material for the galactic archaeology research.
The simplest and widely applied method to derive elemental abundances is based on using one-dimensional (1D) model atmospheres and the assumption of local thermodynamic equilibrium (LTE), see, for example, the abundance results from the high-resolution spectroscopic survey APOGEE <cit.>.
In metal-poor atmospheres, in particular, of cool giants, low total gas pressure and low electron number density lead to departures from LTE that grow towards lower metallicity due to decreasing collisional rates and increasing radiative rates as a result of dropping ultra-violet (UV) opacity. The non-local thermodynamic equilibrium (non-LTE = NLTE) line formation calculations show that the NLTE effects for lines of one chemical species and for different chemical species are different in magnitude and sign, depending on the stellar parameters and element abundances. Ignoring the NLTE effects leads to a distorted picture of the galactic abundance trends and thus to wrong conclusions about the galactic chemical evolution.
The NLTE abundance from a given line in a given star can be obtained by adding the theoretical NLTE abundance correction, which corresponds to the star's atmospheric parameters, to the LTE abundance derived from the observed spectrum: NLTE = LTE + Δ_ NLTE. For a number of chemical species, Δ_ NLTE can be taken online from the websites
* INSPECT (<http://www.inspect-stars.com>) for lines of Li, Na, Mg, Ti, Fe-, and Sr,
* NLTE_MPIA (<http://nlte.mpia.de/>) for lines of O, Mg, Si, Ca-, Ti-, Cr, Mn, Fe-, and Co,
* <http://spectrum.inasan.ru/nLTE/> for lines of Ca, Ti-, and Fe.
Extensive grids of the NLTE abundance corrections are provided by <cit.>, <cit.>, and <cit.>.
The NLTE abundance corrections for the selected lines of S and Zn in the limited set of atmospheric models were computed by <cit.>. <cit.> report the NLTE to LTE equivalent width ratios for lines of Mg, Ca, and Ca in the grid of model atmospheres representing cool giants.
A different approach is to determine the NLTE abundance directly, by using the synthetic spectrum method and precomputed departure coefficients, b_i = n_i^ NLTE/n_i^ LTE, for the chemical species under investigation. Here, n_i^ NLTE and n_i^ LTE are the statistical equilibrium and the Saha-Boltzmann number densities, respectively, for the energy level i.
<cit.> provide the grids of b_i
for 13 chemical species (neutral H, Li, C, N, O, Na, Mg, Al, Si, K, Ca, Mn; singly ionized Mn, Ba)
across a grid of the classical one-dimensional (1D) MARCS model atmospheres <cit.>.
This approach is based on using 1D-NLTE spectral synthesis codes, such as SME <cit.>, synthV_NLTE <cit.>, and Turbospectrum <cit.>.
An approach based on three-dimensional (3D) model atmospheres combined with the NLTE line formation is extremely time consuming and, to date, has been applied to a few chemical species in the Sun <cit.> and the benchmark VMP stars <cit.>. Grids of the 3D-NLTE abundance corrections were computed for lines of O <cit.> and Fe- <cit.> using the STAGGER grid of model atmospheres for a limited range of effective temperatures (T_eff = 5000-6500 K), surface gravities (log g = 3.0-4.5), and metallicities ([Fe/H] = 0 to -3). For the Li lines, grids of the 3D-NLTE abundance corrections were computed by <cit.> and <cit.> with the CO^5BOLD and STAGGER model atmospheres, respectively.
The 3D-NLTE calculations are available for a small number of the chemical elements observed in VMP stars, and they cover only in part the range of relevant atmospheric parameters. Furthermore, as shown by <cit.> for Fe, the abundance differences between 3D-NLTE and 1D-NLTE are generally less severe than the differences between 3D-NLTE and 1D-LTE and reach 0.2 dex, at maximum (see Figs. 5-7 in their paper). Therefore, calculations of the 1D-NLTE abundance corrections for extended linelists across the stellar parameter range which represents the VMP stars make sense, and they are useful for galactic archaeology research. The availability and comparison of Δ_ NLTE from different independent studies increase confidence in the spectroscopic NLTE analyses.
This paper presents the 1D-NLTE abundance corrections for lines of 10 chemical species in the grid of MARCS model atmospheres with T_eff = 4000-6500 K, log g = 0.5-5.0, and -5 ≤ [Fe/H] ≤ -2.
We provide the tools for calculating online the NLTE abundance correction(s) for given line(s) and given atmospheric parameters by interpolating in the precomputed grids.
Our data offer potential users the following advantages compared with the grids of the 1D-NLTE abundance corrections available in the literature.
* Only this study provides extended grids of the NLTE abundance corrections for lines of Zn and Ba.
* For Ca and Ca, the NLTE calculations were performed with advanced treatment of the Ca + H and Ca + H collisions, following <cit.> and <cit.>, respectively.
* For Zn and Sr, our results are based on advanced treatment of collisions with H, following <cit.> and <cit.>. Our grids cover a broader range of T_eff, log g, and [Fe/H] compared to that for Zn in <cit.> and for Sr in the INSPECT database.
* For Ca–Ca, Fe–Fe, and Na, the developed 1D-NLTE methods have been verified with spectroscopic analyses of VMP stars and have been shown to yield reliable results.
The paper is organised as follows. Section <ref> describes our NLTE methods and their verification with observations of VMP stars. New grids of the NLTE abundance corrections are presented in Sect. <ref>. In Sect. <ref>, we compare our calculations with those from other studies. Our recommendations and final remarks are given in Sect. <ref>.
§ NLTE METHODS AND THEIR VERIFICATION
The present investigation is based on the NLTE methods developed and tested in our earlier studies.
Details of the adopted atomic data and the NLTE line formation for Na, Mg, Ca-, Ti-Ti, Fe-, Zn-, Sr, and Ba can be found in the papers cited in Table <ref>.
It is important to note that collisions with hydrogen atoms were treated with the data based on quantum-mechanical calculations.
The exceptions are Ti and Fe-, for which we adopted the Drawinian rates <cit.> scaled by empirically estimated factors of 1 <cit.> and 0.5 <cit.>, respectively.
The code detail <cit.> with the revised opacity package <cit.> was used to solve the coupled radiative transfer and statistical equilibrium (SE) equations. The obtained LTE and NLTE level populations were then implemented in the code linec <cit.> that, for each given spectral line, computes the NLTE curve of growth and finds the shift in the NLTE abundance, which is required to reproduce the LTE equivalent width. Such an abundance shift is referred to as the NLTE abundance correction, Δ_ NLTE = NLTE-LTE.
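Schematically, this last step amounts to inverting the NLTE curve of growth at the LTE equivalent width. A minimal illustration is given below (not the actual linec code; it assumes monotonically increasing curves of growth sampled on a common abundance grid):

import numpy as np

def nlte_abundance_correction(abund_grid, ew_lte, ew_nlte, a_lte):
    # abund_grid: trial abundances; ew_lte, ew_nlte: the corresponding LTE and
    # NLTE equivalent widths (curves of growth) for the line under study.
    ew_target = np.interp(a_lte, abund_grid, ew_lte)     # LTE equivalent width
    a_nlte = np.interp(ew_target, ew_nlte, abund_grid)   # NLTE abundance matching it
    return a_nlte - a_lte                                # Delta_NLTE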
All the calculations were performed using the classical LTE model atmospheres with the standard chemical composition <cit.>, as provided by the MARCS website[<http://marcs.astro.uu.se>].
Below we provide evidence for a correct treatment of the NLTE line formation for Fe-Fe, Ca-Ca, and Na in the atmospheres of VMP stars.
§.§ Spectroscopic versus Gaia eDR3 distances
Iron is represented in the VMP stars by the two ionization stages, which are used in many studies to determine spectroscopic surface gravities (g_ Sp) from the requirement that abundances from lines of Fe and Fe in a given star must be equal. The surface gravity can also be derived from distance; this is the distance-based surface gravity, g_ d. If g_ Sp based on the NLTE calculations and g_ d are obtained to be consistent within the error bars, this means that the calculations for Fe-Fe are correct.
<cit.> and <cit.> derived the surface gravities for the two Galactic stellar samples using photometric effective temperatures and the NLTE analysis of the Fe and Fe lines. Using the Gaia eDR3 parallaxes corrected according to
<cit.>, we calculated distances from the maximum
of the distance probability distribution function, as recommended by
<cit.>, and then log g_d from the relation

log g_d = -10.607 + log M + 4 log T_eff - 0.4 [4.74 - (V + BC + 5 - 5 log d - A_V)].
Here, M is the star's mass, A_V is the interstellar extinction in the V-band, and BC is a bolometric correction, which was calculated by interpolation in the grid of <cit.>[<https://wwwuser.oats.inaf.it/castelli/colors/bcp.html>]. The atmospheric parameters and A_V were taken from <cit.> and <cit.>. Stellar masses and V magnitudes for the <cit.> sample are listed in their Table 5 and 2, respectively. For the stellar sample of <cit.>, the V magnitudes are listed in their Table 5. For each VMP giant, we adopt M = 0.8 M_⊙.
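For reference, the relation can be evaluated as follows (a simple helper of ours; mass in solar masses, distance in pc):

import math

def log_g_d(mass, teff, v_mag, bc, dist_pc, a_v):
    m_bol = v_mag + bc + 5 - 5 * math.log10(dist_pc) - a_v   # absolute bolometric magnitude
    return -10.607 + math.log10(mass) + 4 * math.log10(teff) - 0.4 * (4.74 - m_bol)

With solar values (mass = 1, T_eff = 5777 K, and M_bol = 4.74), this returns log g ≈ 4.44, as expected.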
Statistical error of the distance-based surface gravity was computed as the quadratic sum of errors of the star's distance, effective temperature, mass, visual magnitude, and BC. We assumed the stellar mass error as σ_M = 0.1 M_⊙ and took the effective temperature errors, σ_T, from <cit.> and <cit.>. The total error is dominated by σ_M for the nearby stars and by the distance error, σ_ d, for the distant objects.
Table <ref> lists the obtained Gaia eDR3 distances and log g_d values, as well as the spectroscopic surface gravities from <cit.> and <cit.>.
The differences log g_Sp – log g_d are shown in Fig. <ref>. The majority of our stars lie within 631 pc from the Sun, and their spectroscopic surface gravities are found to be consistent within the error bars with the distance-based ones. A clear outlier is HD 8724, with log g_Sp – log g_d = -0.48. We note that the discrepancy between log g_Sp and log g_d is reduced compared to the -0.76 dex obtained for HD 8724 by <cit.> using the Gaia DR1 parallax <cit.>. However, it is still greater than the error of the spectroscopic surface gravity, σ_log g (sp) = 0.24 dex. A formal calculation of σ_log g (d) leads to 0.07 dex (Table <ref>); however, astrometric_excess_noise_sig = 6.005 and astrometric_chi2_al = 419.84 indicated by <cit.> for HD 8724 suggest an unreliable solution for the Gaia eDR3 parallax.
For 15 distant stars, with d > 2 kpc, the errors of log g_d grow. Nevertheless, the spectroscopic surface gravities are consistent, on average, with the distance-based ones.
Thus, our NLTE method for Fe/Fe is reliable and can be used for determinations of surface gravities, in particular, for distant stars with large distance errors.
§.§ Ca versus Ca
A firm argument for a correct treatment of the NLTE line formation for Ca-Ca can be obtained from a comparison of the NLTE abundances from lines of the two ionization stages. <cit.> report the LTE and NLTE abundances from lines of Ca and Ca 8498 Å for five reference stars with well-determined atmospheric parameters in the -2.7 < [Fe/H] < -1.3 metallicity range and find fairly consistent NLTE abundances, while the LTE abundance difference between Ca and Ca 8498 Å grows in absolute value towards lower metallicity and reaches -0.45 dex for [Fe/H] = -2.62, see their Fig. 6.
<cit.> studied the UMP stars and improved their atmospheric parameters using an extensive method based on the colour–T_eff calibrations, NLTE fits of the Balmer line wings, and Gaia DR2 trigonometric parallaxes. For each star, the derived effective temperature and surface gravity were checked by inspecting the Ca/Ca NLTE ionization equilibrium and by comparing the star's position in the T_eff – log g plane
with the theoretical isochrones of 12 and 13 Gyr.
The abundance differences between the two ionization stages from the NLTE and LTE calculations of <cit.> and <cit.> are displayed in Fig. <ref>. Nowhere does the NLTE abundance difference Ca – Ca exceed 0.15 dex, while the LTE abundances from lines of Ca are systematically lower compared with those from Ca, by up to 0.85 dex. Thus, the NLTE results obtained using our NLTE method for Ca- <cit.> can be trusted.
§.§ Na resonance lines in VMP stars
Figure <ref> displays the [Na/Mg] abundance ratios in the wide range of metallicities from the LTE and NLTE calculations of <cit.> and <cit.>. For [Fe/H] > -1, both LTE and NLTE data form a well-defined upward trend, with a small star-to-star scatter for the stars of close metallicity. The situation is very different in LTE and NLTE for [Fe/H] < -1. In LTE, the [Na/Mg] ratios reveal a big scatter, which is substantially reduced in the NLTE calculations. An explanation lies mostly with the NLTE effects for lines of Na. For Mg, the differences between the NLTE and LTE abundances do not exceed 0.1 dex.
For [Fe/H] > -1, the Na abundances were derived by <cit.> from the Na 5682, 5688, 6154, 6160 Å subordinate lines, which are only slightly affected by NLTE, with negative Δ_ NLTE of ≾0.1 dex in absolute value.
In the lower metallicity stars, sodium is observed in the Na 5889, 5895 Å resonance lines only. They are subject to strong NLTE effects, with Δ_ NLTE depending on the atmospheric parameters and the Na abundance itself. For different stars, Δ_ NLTE varies between -0.1 and -0.6 dex <cit.>. The removal of the star-to-star scatter in the [Na/Mg] NLTE abundance ratios for [Fe/H] < -1 serves as circumstantial evidence that the line formation is treated correctly.
Taking advantage of the obtained Galactic NLTE [Na/Mg] trend, we found that the modern nucleosynthesis and Galactic chemical evolution (GCE) calculations, which are represented in Fig. <ref> (right panel) by the GCE model of <cit.>, correctly predict the contributions from the core-collapse supernovae (SNeII) and the asymptotic giant branch (AGB) stars to the production of Mg and Na during the Galaxy's history.
§ GRIDS OF THE NLTE ABUNDANCE CORRECTIONS
By request of the Pristine collaboration <cit.>, the NLTE abundance corrections were computed for the lines which can be detected in spectra of VMP stars, that is, for the [Fe/H] ≤ -2 range. We focused, in particular, on the spectral ranges observed by WEAVE[https://ingconfluence.ing.iac.es/confluence/display/WEAV/Science], that is 4040-4650 Å, 4750-5450 Å, and 5950-6850 Å for the high-resolution (R = λ/Δλ = 20 000) observations and 3660-9590 Å for the R = 5000 observations, and 4MOST [https://www.4most.eu/cms], that is 3926-4350 Å, 5160-5730 Å, and 6100-6790 Å for the high-resolution spectrograph (HRS, R ≃ 20 000) and 3700-9500 Å for the low-resolution spectrograph (LRS, R ≃ 4000-7500). We selected
4 / 15 / 28 / 4 / 54 / 262 / 7 / 2 / 2 / 5 lines of Na / Mg / Ca / Ca / Ti / Fe / Zn / Zn / Sr / Ba.
The range of atmospheric parameters was selected to represent metal-poor stars on various evolutionary stages, from the main sequence to the red giant branch (RGB); see the isochrone of 12 Gyr, [Fe/H] = -2, and [α/Fe] = 0.4 from <cit.> in Fig. <ref>. The NLTE calculations were performed in the following ranges of effective temperature and surface gravity:
T_eff = 4000 to 4750 K for log g = 0.5 to 2.5;
T_eff = 5000 K for log g = 0.5 to 5.0;
T_eff = 5250 to 5500 K for log g = 2.0 to 5.0;
T_eff = 5750 to 6500 K for log g = 3.0 to 5.0.
Metallicity range is -5.0 ≤ [Fe/H] ≤ -2.0.
The nodes of the NLTE abundance correction grids correspond to the nodes of the MARCS model grid. Therefore, T_eff varies with a step of 250 K, log g with a step of 0.5, and [Fe/H] with a step of 0.5. The MARCS website does not provide models with [Fe/H] = -3.5 and -4.5. The missing models were calculated by interpolating between the [Fe/H] = -3 and -4 and between the [Fe/H] = -4 and -5 models. We applied the FORTRAN-based interpolation routine written by Thomas Masseron and available on the MARCS website.
For Fe- and Zn-, the SE calculations were performed with [Element/Fe] = 0.0;
for Mg and Ti with [Element/Fe] = 0.4 and 0.3, respectively.
For Na, Ca, Ca, Sr, and Ba, the NLTE effects are sensitive not only to T_eff, log g, and [Fe/H], but also to the element abundance used in the SE calculations. Therefore, the grids of the NLTE corrections are 4-dimensional, where [Element/Fe] takes the following values:
[Na/Fe] = -0.6, -0.3, 0.0, 0.3, 0.6;
[Ca/Fe] = 0.0 and 0.4;
[Sr/Fe] = -1.0, -0.5, 0.0, 0.5, 1.0 for the dwarf model atmospheres,
[Sr/Fe] = -1.5, -1.0, -0.5, 0.0, 0.5 for the giant model atmospheres;
[Ba/Fe] = -1.0, -0.5, 0.0, 0.5 for the dwarf model atmospheres,
[Ba/Fe] = -1.5, -1.0, -0.5, 0.0, 0.5 for the giant model atmospheres.
The website INASAN_NLTE[<http://spectrum.inasan.ru/nLTE2/>] provides the tools for calculating online the NLTE abundance correction(s) for given spectral line(s) and atmospheric parameters T_eff, log g, [Fe/H], [Element/Fe] by interpolation in the NLTE correction grids.
§.§ NLTE corrections depending on atmospheric parameters
Figure <ref> displays the NLTE abundance corrections predicted for representative lines of different chemical species in VMP stars on different evolutionary stages, namely, the turn-off (TO, T_eff/log g = 6250/4.0), the bottom red giant branch (bRGB, 5250/3.0), and the RGB (4500/1.5). For each line, Δ_ NLTE depends on T_eff, log g, and [Fe/H]. Therefore, neglecting the NLTE effects distorts the galactic abundance trends.
In the same atmosphere, different lines have the NLTE corrections of different magnitude and sign. Therefore,
the star's element abundance pattern derived under the LTE assumption does not correctly reflect the relative contributions of different nucleosynthesis sources.
The sign of Δ_ NLTE is determined by the mechanisms that produce the departures from LTE for lines of a given species in given physical conditions.
In the stellar parameter range with which we are concerned, Mg, Ca, and Fe are the minority species in the line formation layers, and they are subject to the ultra-violet (UV) overionization, resulting in depleted atomic level populations, weakened lines, and positive NLTE abundance corrections <cit.>. The intensity of the ionizing UV radiation increases with decreasing metallicity, resulting in growing departures from LTE.
Na is also the minority species, however, due to low photoionization cross-sections of its ground state, the main NLTE mechanism is a "photon suction" process <cit.> which produces overpopulation of the neutral stage, resulting in strengthened Na lines and negative NLTE abundance corrections. Photon suction is connected with collisional processes that couple the high-excitation levels of Na with the singly ionized stage. In contrast to the radiative processes, an influence of collisional processes on the statistical equilibrium of Na is weakened with decreasing metallicity, and Δ_ NLTE for Na 5895 Å decreases in absolute value and becomes even slightly positive for [Fe/H] ≤ -4.5 in the 4500/1.5 models.
The NLTE effects for the majority species Ca, Ti, Sr, and Ba are driven by the bound-bound (b-b) transitions. For an individual line, the sign and magnitude of Δ_ NLTE depend on the physical conditions and the transition in which the line arises. Ca 8498 Å arises in the transition 3d ^2D_3/2 – 4p ^2P^o_3/2. The upper level is depopulated in the atmospheric layers where the core of Ca 8498 Å forms, via photon loss in the wings of the Ca 3933, 3968 Å resonance lines and the 8498, 8542, 8662 Å infra-red (IR) triplet lines. The Ca 8498 Å line core is strengthened because the line
source function drops below the Planck function, resulting in negative Δ_ NLTE <cit.>. In the [Fe/H] = -2 models, Ca 8498 Å is very strong, with a total absorption dominated by the line wings that form in deep atmospheric layers where the NLTE effects are small. With decreasing [Fe/H] (and Ca abundance), the line wings are weakened and Δ_ NLTE grows in absolute value. In the 6250/4.0 and 5250/3.0 models, Δ_ NLTE decreases in absolute value for [Fe/H] ≤ -3.5 because the formation depth of Ca 8498 Å shifts to deeper atmospheric layers.
Owing to its complex atomic term structure, the levels of Ti are tightly coupled to each other and to the ground state via radiative and collisional processes, and the NLTE corrections for the Ti lines are slightly positive in the stellar parameter range with which we are concerned <cit.>: Δ_ NLTE≾ 0.1 dex for Ti 4395 Å.
<cit.> and <cit.> predicted theoretically that NLTE may either strengthen or weaken the lines of Sr and Ba, depending on the stellar parameters and elemental abundance. For example, in the 6250/4.0 models, Δ_ NLTE is positive for Ba 4554 Å over full range of [Fe/H] = -2 down to -4.5, while, for Sr 4215 Å, Δ_ NLTE is negative when [Fe/H] ≥ -2.5 and positive for the more metal-deficient atmospheres. In the RGB atmospheres, both Sr 4215 Å and Ba 4554 Å are very strong until metallicity decreases to [Fe/H] = -3.5, and the NLTE corrections are small. For the lower metallicity, Δ_ NLTE is positive for both lines and grows with decreasing [Fe/H].
For lines of Zn, the NLTE abundance corrections depending on atmospheric parameters are discussed by <cit.>.
§.§ NLTE corrections depending on elemental abundances
The stars of close metallicity in the [Fe/H] < -2 range reveal a substantial scatter of the Na, Sr, and Ba abundances <cit.>. It is precisely for Na, Sr, and Ba that the NLTE effects depend strongly not only on the atmospheric parameters, but also on the element abundance. Therefore, in order to interpret correctly the chemical evolution of Na, Sr, and Ba, abundance analyses of VMP samples should be based on the NLTE abundances.
Figure <ref> shows that, for the TO and bRGB stars, the LTE analysis overestimates the Na abundances, by an amount that is greater for a Na-enhanced than for a Na-poor star. The difference in Δ_ NLTE exceeds 0.4 dex for [Fe/H] = -2.5 and reduces towards lower [Fe/H]. The same is true for the RGB stars with [Fe/H] ≤ -3.5, but the situation is more complicated for higher metallicities. For [Fe/H] > -3, the Na 5895 Å line is very strong in the Na-enhanced cool atmospheres, and the total line absorption is dominated by the line wings that form in deep atmospheric layers affected only weakly by NLTE. Accounting for the NLTE effects for the Na lines substantially reduces the abundance discrepancies found for stellar samples in LTE, as well illustrated by Fig. <ref>.
Using the same atmospheric parameters, LTE may either overestimate, or underestimate abundances of Sr and Ba depending on the elemental abundances, as shown in Fig. <ref>. For [Fe/H] < -2, the NLTE abundance corrections for Sr 4215 Å and Ba 4554 Å are positive in the Sr- and Ba-poor atmospheres, while they can be negative for the Sr- and Ba-enhanced atmospheres. Accounting for the NLTE effects can reduce the abundance discrepancies found for stellar samples in LTE, by more than 0.4 dex for Sr in the TO [Fe/H] = -2.5 stars and for Ba in the bRGB [Fe/H] = -2.5 stars.
§.§ NLTE corrections for different type model atmospheres
The model atmospheres computed with different codes produce, as a rule, very similar atmospheric structures and spectral energy distributions for common atmospheric parameters. We checked how different types of model atmospheres influence the magnitudes of the NLTE abundance corrections. Taking the ATLAS9-ODFNEW models from R. Kurucz's website[<http://kurucz.harvard.edu/grids/gridm40aodfnew/>], we performed the NLTE calculations for Ca-, Fe-, and Ba with the models 6250/4.0/-4.0 and 4500/1.5/-4.0. For these atmospheric parameters, the selected lines reveal the greatest NLTE effects. The results are presented in Table <ref>.
For 6250/4.0/-4.0, the MARCS and ATLAS9-ODFNEW model atmospheres provide NLTE abundance corrections consistent within 0.036 dex. Slightly larger differences of up to 0.058 dex are obtained for the strong lines, Ca 4226 Å and Ca 8498 Å, in the cool giant atmosphere. We recall that the MARCS models with log g ≤ 2 were computed as spherically-symmetric, and the difference in temperature stratification between the spherically-symmetric and plane-parallel (ATLAS9-ODFNEW) models can explain, in part, the differences in Δ_ NLTE for strong spectral lines.
§ COMPARISONS WITH OTHER STUDIES
The NLTE methods based on comprehensive model atoms and the most up-to-date atomic data have been developed in the literature for many chemical species observed in spectra of the Sun and F-G-K type stars because the NLTE results are in demand in chemical abundance analyses of, in particular, VMP stars. For a common chemical species, the model atoms in different NLTE studies can differ by a treatment of inelastic collisions with electrons and hydrogen atoms and by the sources of transition probabilities and photoionization cross-sections. Different NLTE studies use different NLTE codes, with a different treatment of background opacity, and different model atmospheres. We compared our NLTE calculations with the NLTE abundance corrections from the other studies.
§.§ Lines of Fe
As shown in Fig. <ref>, our results for lines of Fe agree well with the NLTE abundance corrections from the NLTE_MPIA database, which were computed using the model atom of <cit.> and the same treatment of collisions with H as in our calculations, namely, the formulas of <cit.> with a scaling factor of 0.5. The differences in Δ_ NLTE between this study (TS) and NLTE_MPIA mostly do not exceed 0.02 dex, with a maximal (TS – NLTE_MPIA) = 0.06 dex for Fe 5506 Å in the 6350/4.09/-2.18 model and Fe 5041 Å in the 4630/1.28/-2.99 model.
<cit.> provide the NLTE abundance corrections computed with the 1D and 3D model atmospheres. The 3D-NLTE calculations were performed for a limited atmospheric parameter range (T_eff = 5000–6500 K, log g = 4.0 and 4.5, [Fe/H] = 0 to -3) and a limited number of Fe lines. We selected Fe 5232 Å for a comparison. Amarsi22 computed more positive NLTE corrections compared with ours (Fig. <ref>), by 0.07 to 0.27 dex in the 1D case and by 0.14 to 0.39 dex in the 3D case. The difference between the 1D-NLTE corrections is most probably due to a different treatment of the Fe + H collisions in this study and in Amarsi22. For H impact excitation and charge transfer, Amarsi22 apply the asymptotic model of <cit.> complemented by the free electron model of <cit.> for the b-b transitions. We showed earlier <cit.> that, compared with the <cit.> formulas with a scaling factor of 0.5, using the data of <cit.> leads to stronger NLTE effects. For example, Δ_ NLTE = 0.08 dex and 0.35 dex, respectively, for Fe 5232 Å in the 6350/4.09/-2.18 model atmosphere. In the 3D model atmospheres, the NLTE effects for Fe are stronger than in the 1D models, and notable departures from LTE appear for lines of Fe, in contrast to the 1D case, such that, for two benchmark VMP stars, Amarsi22 (see their Table 5) obtain similar abundance differences between Fe and Fe in the 1D-NLTE and 3D-NLTE calculations. To remind the reader, our 1D-NLTE approach for Fe- makes the spectroscopic distances of the VMP stellar sample consistent with the Gaia eDR3 ones (Sect. <ref>).
§.§ Lines of Na, Mg, Ca, Ca, and Sr
We selected Mg 5528 Å in order to compare our NLTE calculations with the 1D-NLTE corrections provided by the NLTE_MPIA database and by <cit.>. The model atoms used <cit.> are similar to ours, including the treatment of collisions with H atoms. As seen in Fig. <ref>, our calculations agree very well with those of Lind22. The differences in Δ_ NLTE do not exceed 0.01 dex and 0.02 dex for the log g = 4.0 and 2.5 models, respectively. The exception is the 4000/2.5/-3 model, for which we obtained a 0.065 dex more negative Δ_ NLTE. NLTE_MPIA provides more positive NLTE corrections compared with ours, by 0.03–0.05 dex. The difference is 0.12 dex for the 4000/2.5/-3 model.
Similar model atoms of Na were used in this study and by Lind22. The differences in Δ_ NLTE for Na 5895 Å are very small (∼0.01 dex) for the coolest and the hottest temperatures in Fig. <ref>. It is difficult to explain why TS – Lind22 = 0.07 dex for the 5000/2.5/-3 model, but TS – Lind22 = 0.00 for 5000/4.0/-3.
For lines of Sr, the 1D-NLTE corrections are provided by the INSPECT database. Their NLTE calculations were performed with the model atom developed by <cit.> and did not take into account collisions with H atoms. This is in contrast to this study, which is based on quantum-mechanical rate coefficients for the Sr + H collisions. The atmospheric parameter range is narrower in INSPECT compared with this study, namely: 4400 K ≤≤ 6400 K, 2.2 ≤≤ 4.6, -3.9 ≤ [Fe/H] ≤ 0. The differences in Δ_ NLTE for Sr 4077 Å are small except for the models 5500/2.5/-3 and 6000/4.0/-3, where TS – INSPECT = -0.07 dex and +0.05 dex, respectively (Fig. <ref>).
The 1D-NLTE corrections for the Ca lines at the NLTE_MPIA database were computed with the model atom developed by <cit.> and using the <cit.> formulas with a scaling factor of 0.1 for calculating hydrogen collision rates. In this study, we applied the same model atom; however, the Ca + H collisions were treated using quantum-mechanical rate coefficients from <cit.>. As seen in Fig. <ref>, NLTE_MPIA provides systematically greater NLTE corrections for Ca 6162 Å compared with our data, by 0.08 to 0.20 dex, probably due to a simplified treatment of hydrogenic collisions.
Ignoring the Ca + H collisions in the SE calculations resulted in stronger NLTE effects for the Ca triplet lines in the <cit.> study compared with ours. For example, <cit.> report the NLTE/LTE equivalent ratios of 1.28 and 1.16 for Ca 8498 and 8542 Å, respectively, in the 4250/1.5/-4.0 model, while our corresponding values are 1.22 and 1.12.
§.§ Lines of Ba
Finally, we compared our results with the 1D-NLTE corrections calculated by <cit.> for lines of Ba. <cit.> provide the data for the -2 ≤ [Fe/H] ≤ 0.5 metallicity range. Therefore, Δ_ NLTE comparisons are presented in Fig. <ref> for the same temperatures and surface gravities, as in Fig. <ref>, but for [Fe/H] = -2. The differences in Δ_ NLTE for Ba 6496 Å do not exceed 0.02 dex except the coolest and the hottest giant atmospheres, where TS – K15 = -0.05 dex and +0.05 dex, respectively.
To summarise this section, the situation with the 1D-NLTE corrections for lines of Na, Mg, and Fe looks good. For each of these chemical species, there are at least two independent NLTE studies that predict NLTE corrections consistent within 0.01-0.02 dex and provide grids that cover the full range of atmospheric parameters of VMP stars. For Sr and Ba, the NLTE corrections predicted by the independent studies agree reasonably well in the overlapping atmospheric parameter range.
§ FINAL REMARKS
This study presents grids of the 1D-NLTE abundance corrections for the Na, Mg, Ca, Ca, Ti, Fe, Zn, Zn, Sr, and Ba lines, which are used in galactic archaeology research. The range of atmospheric parameters represents VMP stars at various evolutionary stages and covers 4000 K ≤ ≤ 6500 K, 0.5 ≤ ≤ 5.0, and -5.0 ≤ [Fe/H] ≤ -2.0. The NLTE corrections for Zn, Zn, Sr, and Ba have been calculated for the first time for such a broad atmospheric parameter range. Compared to the data available in the literature, our NLTE corrections for lines of Ca, Ca, Zn, Zn, Sr, and Ba are based on an accurate treatment of collisions with H atoms in the statistical equilibrium calculations.
In the same model atmosphere, the NLTE abundance corrections may have different magnitude and sign for lines of the same chemical species, for example Δ_ NLTE = 0.092 dex (Mg 5528 Å) and Δ_ NLTE = -0.083 dex (Mg 5172 Å) in the 4500/1.5/-3.5 model. Accounting for the NLTE effects in stellar abundance determinations is expected to improve the accuracy of the obtained results.
In the same model atmosphere, the NLTE abundance corrections may have different magnitude and sign for lines of different chemical species, for example, Δ_ NLTE = -0.222 dex (Na 5895 Å) and Δ_ NLTE = 0.092 dex (Mg 5528 Å) in the 4500/1.5/-3.5 model. Therefore, an appropriate treatment of the line formation is obligatory for the studies based on analysis of the stellar element abundance patterns.
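As a minimal illustration of how such line-by-line corrections enter an abundance analysis, the Python sketch below applies A(NLTE) = A(LTE) + Δ_ NLTE before averaging; the LTE abundances are hypothetical placeholders, while the Δ_ NLTE values are those quoted above for the 4500/1.5/-3.5 model.

# Hypothetical LTE abundances (dex); only the corrections are taken from the text.
lte = {"Mg 5528": 5.10, "Mg 5172": 5.15, "Na 5895": 3.80}
delta_nlte = {"Mg 5528": +0.092, "Mg 5172": -0.083, "Na 5895": -0.222}
nlte = {line: lte[line] + delta_nlte[line] for line in lte}

mg_lte = (lte["Mg 5528"] + lte["Mg 5172"]) / 2
mg_nlte = (nlte["Mg 5528"] + nlte["Mg 5172"]) / 2
print(f"Mg mean abundance: LTE {mg_lte:.3f} -> NLTE {mg_nlte:.3f}")
print(f"Shift in [Na/Mg]: {(nlte['Na 5895'] - mg_nlte) - (lte['Na 5895'] - mg_lte):+.3f} dex")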
For all spectral lines and chemical species, the NLTE corrections depend on metallicity. Neglecting the NLTE effects in stellar abundance determinations leads to distorted galactic abundance trends and incorrect conclusions on the Galactic chemical evolution.
We show that, for common spectral lines and the same atmospheric parameters, independent NLTE studies of Na, Mg, and Fe predict consistent 1D-NLTE abundance corrections, with differences of 0.01-0.02 dex in Δ_ NLTE.
The obtained results are publicly available. At the website INASAN_NLTE (<http://spectrum.inasan.ru/nLTE2/>), we provide the tools for calculating online the NLTE abundance correction(s) for given line(s) and given atmospheric parameters.
§ ACKNOWLEDGEMENTS
This research has made use of the data from the European Space Agency (ESA) mission Gaia[<https://www.cosmos.esa.int/gaia>], processed by the Gaia Data Processing and Analysis Consortium (DPAC[<https://www.cosmos.esa.int/web/gaia/dpac/consortium>]).
This research has made use of the MARCS and ADS[<http://adsabs.harvard.edu/abstract_service.html>] databases. L.M. thanks the Russian Science Foundation (grant 23-12-00134) for a partial support of this study (Sections 1, 2, 4, 5). T.S. acknowledges a partial support (Section 3) from the MK project, grant 5127.2022.1.2.
§ DATA AVAILABILITY
All our results are publicly available at the website INASAN_NLTE (<http://spectrum.inasan.ru/nLTE2/>).
|
http://arxiv.org/abs/2307.03972v1 | 20230708131059 | Evaluating the Capability of Large-scale Language Models on Chinese Grammatical Error Correction Task | [
"Fanyi Qu",
"Yunfang Wu"
] | cs.CL | [
"cs.CL"
] |
Evaluating the Capability of Large-scale Language Models on Chinese Grammatical Error Correction Task
Fanyi Qu, Yunfang Wu
August 12, 2023
======================================================================================================
Large-scale language models (LLMs) have shown remarkable capability in a variety of Natural Language Processing (NLP) tasks and have attracted much attention recently. However, some studies indicate that large language models fail to achieve promising results beyond the state-of-the-art models in English grammatical error correction (GEC) tasks. In this report, we aim to explore how large language models perform on Chinese grammatical error correction tasks and provide guidance for future work. We conduct experiments with 3 LLMs of different model scales on 4 Chinese GEC datasets. Our experimental results indicate that the performance of LLMs on automatic evaluation metrics (e.g. the F_0.5 score) falls short of the previous sota models because of the problem of over-correction. Furthermore, we also discover notable variations in the performance of LLMs when evaluated on different data distributions. Our findings demonstrate that further investigation is required for the application of LLMs to the Chinese GEC task.
§ INTRODUCTION
Building on InstructGPT <cit.>, ChatGPT has demonstrated its powerful ability to understand complex instruction and generate reasonable responses on various of NLP tasks. Following the technical trajectory of ChatGPT, a significant number of high-quality LLMs have emerged in recent times in both academia and industry, such as LLaMA <cit.>, ChatGLM <cit.> and PaLM <cit.>. Previous studies found that these LLMs have achieved great performance on a wide range of NLP tasks, including machine translation <cit.>, named entity recognition <cit.> and text summarization <cit.>.
Certain studies have conducted comprehensive investigations into the performance of LLMs in the domain of English grammatical error correction, yielding some interesting findings <cit.>: LLMs are not able to outperform sota models in terms of automatic evaluation metrics. This is primarily because LLMs tend to make unnecessary modifications to make the input sentences more fluent, which may result in an over-correction problem and, in some cases, even alter the original semantics of the input sentences.
In this report, we aim to explore the performance of LLMs on the Chinese GEC task. We conduct experiments on various LLMs to investigate the influence of model size on the GEC results. Additionally, we try different test datasets from various data sources to explore the impact of data distribution on the outcomes.
§ EXPERIMENTAL SETUP
§.§ Dataset
We conduct experiments on four Chinese GEC datasets to provide a comprehensive demonstration of LLMs' capability. The detailed statistics of these datasets are shown in Table <ref>.
§.§.§ GEC data from Chinese learners
We apply the test set of NLPCC-2018 <cit.> and the validation set of MuCGEC <cit.> for evaluation. These two datasets collect the grammatical errors made by foreigners during their process of learning Chinese.
§.§.§ GEC data from Chinese native speaker examination
We apply the validation set of FCGEC <cit.> and the validation set of NaCGEC <cit.> for evaluation. These two datasets are collected from Chinese native speakers' language examinations.
§.§ Model
We conduct experiments on 3 LLMs with different model scales:
* ChatGPT[https://platform.openai.com/docs/api-reference]: we evaluate the performance of ChatGPT with OpenAI's official API. We choose gpt-3.5-turbo as the evaluated model, which stands out as the most advanced and specifically optimized for chat functionality.
* ChatGLM-6B <cit.>: ChatGLM is an open bilingual language model based on GLM framework which is optimized for Chinese QA and dialogue and exhibits a robust capacity for Chinese understanding.
* LLaMA-7B <cit.>: LLaMA is a collection of foundation LLMs ranging from 7B to 65B parameters proposed by Meta AI. We apply the 7B model for evaluation.
§.§ Evaluation Metric
We evaluate the models' performance with Precision, Recall and F_0.5 at the word level and character level, respectively.
We adopt the official implementation of the MaxMatch (M^2) <cit.> scorer to calculate the word-level F_0.5 score and choose PKUNLP as our word segmentation tool. We apply ChERRANT [https://github.com/HillZhang1999/MuCGEC/tree/main/scorers/ChERRANT] for char-level metric calculation.
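For reference, both scorers combine edit-level Precision and Recall into F_0.5, which weights precision more heavily than recall (β = 0.5). A minimal sketch of this computation (the edit counts below are hypothetical, not outputs of M^2 or ChERRANT):

def f_beta(tp: int, fp: int, fn: int, beta: float = 0.5):
    # Precision and recall over matched edits; guard against empty denominators.
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = (1 + beta ** 2) * p * r / (beta ** 2 * p + r) if p + r else 0.0
    return p, r, f

print(f_beta(tp=120, fp=80, fn=300))  # hypothetical counts -> P=0.60, R=0.29, F0.5=0.49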
§.§ Prompt
Considering the differences in performance of large language models, we design different prompts for them. These prompts are roughly the same in semantics, but differ in some details. The prompts are shown in Figure <ref>.
§.§ Setting details
We set the temperature to 0.6 when applying ChatGPT to obtain reliable generated results. For ChatGLM-6B and LLaMA-7B, we conduct experiments on 4 NVIDIA GeForce 3080 Ti GPUs.
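A minimal sketch of the ChatGPT query described above, assuming the 2023-era openai Python client; the API key and the English gloss of the instruction are placeholders rather than the exact prompt of Figure <ref>:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def correct(sentence: str) -> str:
    # Placeholder instruction; the actual Chinese prompts are shown in the figure above.
    prompt = ("Please correct the grammatical errors in the following Chinese "
              f"sentence and output only the corrected sentence: {sentence}")
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.6,
    )
    return response["choices"][0]["message"]["content"].strip()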
§ EXPERIMENT RESULTS
The experiment results are shown in Table <ref>. There are some results worthy of discussion.
First, different data sources result in distinct evaluation results. LLMs exhibit significantly superior performance when evaluated on Chinese learner data (NLPCC and MuCGEC), as opposed to Chinese native speaker examination data (FCGEC and NaCGEC). According to our observations, the grammatical errors made by Chinese learners primarily involve the misuse of similar words or phrases, rather than incorrect sentence structures. In contrast, GEC data from Chinese native speaker examinations maintains a higher level of regularity and consists of more complex structural errors.
It is noteworthy that there exist gaps between GEC data from Chinese examinations and Chinese native speakers' daily spoken habits.
Second, different model scales also lead to distinct performance. The unified trend is that ChatGPT performs similarly to the other two smaller models in Precision while achieving a significant improvement in Recall. This implies that the evaluated LLMs have similar error correction capability, while their error detection abilities differ considerably.
Third, there still exist large gaps between state-of-the-art models and LLMs on automatic evaluation metrics. Previous work <cit.> has found the problem of over-correction for LLMs, which has also been noticed in our experiments.
What's more, it is hard to explain why the char-level evaluation metrics are significantly lower than the word-level ones, a phenomenon not noticed in previous work.
§ CONCLUSION
In this report, we explore the performance of various LLMs on the Chinese grammatical error correction task. Experimental results indicate that there still remains a gap between LLMs' performance and current sota models. Furthermore, the performance of different LLMs is greatly impacted by the distribution of the test data. Future work can focus on addressing the over-correction problem of LLMs and explore the untapped potential of LLMs in the field of grammatical error correction tasks.
|
http://arxiv.org/abs/2307.04476v1 | 20230710105503 | Nitrogen isotope effects on boron vacancy quantum sensors in hexagonal boron nitride | [
"Kento Sasaki",
"Takashi Taniguchi",
"Kensuke Kobayashi"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
] |
[email protected]
Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan
[email protected]
Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Institute for Physics of Intelligence, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Trans-scale Quantum Science Institute, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Recently, there has been growing interest in researching the use of hexagonal boron nitride (hBN) for quantum technologies.
Here we investigate nitrogen isotope effects on boron vacancy (V_B) defects, one of the candidates for quantum sensors, in ^15N isotopically enriched hBN synthesized using a metathesis reaction.
The Raman shifts are scaled with the reduced mass, consistent with previous work on boron isotope enrichment.
We obtain optically detected magnetic resonance spectra of V_B defects that depend on the nitrogen isotopic composition and determine the hyperfine interaction parameter of the ^15N spin to be -64 MHz. Our investigation provides a design policy for hBNs for quantum technologies.
Nitrogen isotope effects on boron vacancy quantum sensors in hexagonal boron nitride
Kensuke Kobayashi
August 12, 2023
====================================================================================
Localized electron spins in solids, such as those in color centers or quantum dots, are the promising platform of quantum technologies.
In most cases, they couple with surrounding nuclear spins; thus, controlling the nuclear spins and their influence is essential.
The isotope enrichment technique has great potential to address this issue<cit.>.
For example, the electron spin coherence time can be improved by enriching nuclear-spin-free isotopes<cit.>, or the electron spin qubit can be labeled by isotopes with low natural composition ratios<cit.>.
To design such isotopically purified platform, it is crucial not only to synthesize isotopically controlled materials but also to estimate the isotopic composition and determine the hyperfine interaction (HFI) parameters of nuclear spins of the isotopes<cit.>.
Recently, it was discovered that electron spins of boron vacancy (V_B) defects in hexagonal boron nitride (hBN) can be used as quantum sensors even at room temperature<cit.>.
A V_B defect has a structure in which a boron atom in hBN is replaced by a vacancy [Fig. <ref>(a)].
Its electron spin is localized around the vacancy site and is significantly affected by the three nearest nitrogen spins.
Stable isotopes of nitrogen are ^14N and ^15N.
The natural composition ratio of ^14N is 99.6%, and ^15N is almost nonexistent (0.4%).
The nuclear spin is one of the major differences between these isotopes.
Since ^15N spin (I=1/2) is only half of ^14N spin (I=1), V_B defects in ^15N isotopically enriched hBN have fewer energy levels than in non-treated hBN.
The higher the occupancy of each level is, the stronger the resonance signal becomes, leading to higher sensitivity.
However, there are few reports on the isotopic enrichment of hBN, most of which are related to boron isotopes<cit.>.
Here we investigate nitrogen isotope-enriched hBN and observe nitrogen isotope effects on V_B defects.
We synthesized the isotopically controlled hBN crystals using metathesis reaction under high pressure<cit.> with commercially available ^15NH_4Cl.
The Raman shifts of the samples are scaled with their reduced mass, which is the effective mass for an equivalent one-body problem of the two-body vibration problem for boron and nitrogen atoms, consistent with previous work on boron isotope enrichment.
We perform optically detected magnetic resonance (ODMR) of V_B defects produced by helium ion implantation and determine the HFI parameter of ^15N spin to be -64 MHz.
The observed significant modification of resonance spectra due to ^15N isotope enrichment will help improve sensitivity, control fidelity, and precise positioning of quantum sensors.
Our investigation provides guidance for the material design of hBNs for quantum technologies.
First, we describe the influence of nitrogen spins on an electron spin (S = 1) of a V_B defect.
In quantum sensing, an external magnetic field of several mT in the direction of the symmetry axis (z) of the V_B defect is often applied <cit.>.
It is helpful to mitigate the sensitivity suppression due to the strain.
In that condition, the spin Hamiltonian can be approximated as <cit.>,
Ĥ ∼ D Ŝ_z^2 + γ_e B_z ·Ŝ_z + ∑_j=1^3 A_zz,(j)Ŝ_z Î_z,(j),
where, Ŝ_z is the electron spin (S=1) operator in the z direction, D is the zero field splitting, γ_e = 28 MHz/mT is the gyromagnetic ratio of the electron spin, B_z is the magnetic field strength, j (= 1,2,3) is a label of nearest-neighbor nitrogen site, A_zz,(j) is the HFI parameter, Î_z,(j) is the nuclear spin operator in the z direction.
Here we ignore the nuclear spin's Zeeman effect and the quadrupole moment<cit.>, which are much smaller than the HFI parameter in the case of the ^14N spin.
In this study, we determine the A_zz of ^15N spin, ^(15N)A_zz, that have vital contributions in this quantum sensing condition.
Next, we show a model of the expected ODMR spectrum.
In the situation when Eq. (<ref>) is valid, both electron and nuclear spins are quantized in the z direction.
The resonance frequency corresponding to the electron spin transition m_S = 0 ↔±1 can be expressed as,
f_±1(m_I,(1),m_I,(2),m_I,(3)) ∼ f_±1,0±∑_j=1^3 A_zz,(j) m_I,(j),
where, f_±1,0 = D ±γ_e B_z is the resonance frequency in the absence of nuclear spins, m_I,(j) is the magnetic quantum number of nuclear spins at site j which can take the values m_I=-1,0,+1 for ^14N spin (m_I=-1/2,+1/2 for ^15N spin).
Assuming that the nuclear spins are unpolarized and that each resonance signal has the same amplitude and line width, the ODMR spectrum is given by
R = 1 - (C/N_level) ∑ L( f_±1(m_I,(1), m_I,(2), m_I,(3)), dν ),
where C is the signal amplitude and L(f,dν) is the Lorentzian with a center frequency f and a full width at half maximum dν.
N_level is the number of possible nuclear states of the nearest-neighbor nitrogen spins (m_I,(1),m_I,(2),m_I,(3)), and the summation symbol means summing concerning those states, which will be explained in detail below.
The resonance spectrum [Eq. (<ref>)] of a V_B defect depends on the number n of ^15N among the nearest nitrogen atoms.
We distinguish V_B defects by #n as shown in Figs. <ref>(b–e).
The energy level splittings of these defects are shown in Figs. <ref>(f–i).
Since ^14N spins can take three states (m_I=-1,0,+1), whereas ^15N spins can take only two states (m_I=-1/2,+1/2), N_level of #0, #1, #2 and #3 are 27(=3^3), 18(=3^2×2), 12(=3×2^2), and 8(=2^3), respectively.
To the extent that Eq. (<ref>) is satisfied, all states belonging to m_S=0 and some of the states belonging to m_S=±1 are degenerated.
In the case of m_S=-1 of #0 (#3), there are 7 (4) states whose energies are distinguished by the total nuclear spin quantum number, m_I,tot = ∑_j=1^3 m_I,(j).
Specifically, the degeneracies of the energy states m_I,tot = -3, -2, -1, 0, +1, +2, and +3 (-3/2, -1/2, +1/2, and 3/2) are 1, 3, 6, 7, 6, 3, and 1 (1, 3, 3, and 1), respectively [see Figs. <ref>(f) and (i)].
The occupancy of the state with the largest degeneracy is 26% (=7/27) for #0 and 38% (=3/8) for #3.
High occupancy leads to a strong signal, which is advantageous for high sensitivity.
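These degeneracies and occupancies follow directly from enumerating the nuclear configurations; a minimal Python sketch, assuming only the spin multiplicities given above:

from collections import Counter
from itertools import product

# Degeneracy of the total projection m_I,tot for three 14N (#0) and three 15N (#3).
for label, m_vals in [("#0 (three 14N)", (-1, 0, 1)), ("#3 (three 15N)", (-0.5, 0.5))]:
    counts = Counter(sum(cfg) for cfg in product(m_vals, repeat=3))
    print(label, dict(sorted(counts.items())))
# #0 -> {-3: 1, -2: 3, -1: 6, 0: 7, 1: 6, 2: 3, 3: 1}; largest occupancy 7/27 = 26%
# #3 -> {-1.5: 1, -0.5: 3, 0.5: 3, 1.5: 1};            largest occupancy 3/8  = 38%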
The distances between energy states (=resonance lines) depend on the HFI parameter A_zz of ^14N and ^15N spins.
The gyromagnetic ratio, which characterizes the magnitude of the magnetic moment of a spin, is γ_14N = 3.077 kHz/mT for the ^14N spin and γ_15N = -4.316 kHz/mT for the ^15N spin.
Since the absolute value of the gyromagnetic ratio is about 1.4 times larger for ^15N spin than for ^14N spin, the spectral separation should get larger for ^15N isotopically enriched hBN.
The larger separation would be helpful to suppress the degradation of control fidelity caused by unintentional driving of neighboring resonance lines.
In this work, we will demonstrate nitrogen isotope effects described above, such as a reduced number of resonance lines and enhanced separation, which are advantageous for quantum sensing.
When measuring an ensemble of V_B defects, the signals of #0 to #3 are averaged.
Specifically, the expected ODMR spectrum is given by,
R_tot = P_0 R_0 + P_1 R_1 + P_2 R_2 + P_3 R_3,
where, R_n is the ODMR spectrum of #n [Eq. (<ref>)] and P_n is the fraction of #n in all V_B defects.
When ^15N isotopic composition, p_15, is spatially uniform, then P_0 = (1 - p_15)^3, P_1 = 3(1 - p_15)^2 p_15, P_2 = 3(1 - p_15 ) p_15^2, and P_3 = p_15^3.
In cases where p_15 is other than 0 or 1, the obtained signal is the sum of #0 to #3.
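A minimal numerical sketch of this spectral model for the m_S = 0 ↔ -1 branch is given below (Python); the parameter values are illustrative and close to the fits reported later in the text, and the unit-peak-height Lorentzian normalization is an assumption.

import numpy as np
from math import comb
from itertools import product

f0, C, dnu = 2312.0, 0.06, 47.0            # MHz: f_-1,0, contrast, FWHM (illustrative)
A14, A15 = 43.0, -64.0                      # hyperfine parameters A_zz (MHz)
m14, m15 = (-1.0, 0.0, 1.0), (-0.5, 0.5)    # nuclear projections of 14N / 15N

def lorentzian(f, f_res, dnu):
    return (dnu / 2) ** 2 / ((f - f_res) ** 2 + (dnu / 2) ** 2)

def spectrum_n(f, n):
    """ODMR spectrum R_n of a V_B defect with n nearest-neighbour 15N."""
    sites = [(A15, m15)] * n + [(A14, m14)] * (3 - n)
    configs = list(product(*[m for _, m in sites]))
    R = np.ones_like(f)
    for cfg in configs:
        f_res = f0 - sum(A * m for (A, _), m in zip(sites, cfg))
        R -= C / len(configs) * lorentzian(f, f_res, dnu)
    return R

f = np.linspace(2100, 2550, 2000)
p15 = 1.0                                    # 15N isotopic composition
P = [comb(3, n) * (1 - p15) ** (3 - n) * p15 ** n for n in range(4)]
R_tot = sum(P[n] * spectrum_n(f, n) for n in range(4))  # for p15 = 1: four dips split by 64 MHz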
Here we describe the preparation of ^15N isotopically enriched hBN crystal.
We verify the metathesis reaction process under high pressure<cit.> using commercially available ^15NH_4Cl reagents as a raw material; NaBH_4 + ^15NH_4Cl = B^15N + NaCl + 4H_2.
By continuing the above reaction for about 30 hours, we obtained hBN crystals that are expected to have a nearly perfect ^15N isotopic composition (hB^15N).
Other hBN single crystals about 1 mm in size are obtained by using Ba-BN as a solvent system <cit.>, where hBN sources are grown within the molten solvent through dissolution and precipitation.
In this case, the nitrogen isotope enrichment in the resulting crystals (hB^14+15N) is not 100% because nitrogen in Ba-BN solvents has a natural isotopic composition.
The ^15N isotopic composition of hB^14+15N is determined by secondary ion mass spectrometry (SIMS) to be about 60%.
In addition, hBN crystal with a natural composition ratio (hB^14N) is used for comparison.
From now, we describe the experimental results.
All the measurements in this work are performed at room temperature.
First, we investigate the isotope effect on the phonon energy due to changes in the reduced mass, using a Raman microscope (Nanophoton RAMAN-FM-UTM).
In previous works on boron isotope enrichment <cit.>, it has been shown that the phonon energy scales with the square root of the reduced mass.
Figure <ref>(a) shows the obtained Raman scattering spectra.
The sample with a natural composition ratio, hB^14N, has a Raman shift of 1366 cm^-1.
This value is consistent with the previous work <cit.>.
On the other hand, the Raman shifts for hB^14+15N and hB^15N are 1355 cm^-1 and 1347 cm^-1, respectively.
Clearly, the Raman shift decreases with increasing ^15N isotopic composition, i.e., with increasing reduced mass.
To quantitatively evaluate this behavior, we show the relationship between Raman shift and reduced mass in Fig. <ref>(b).
We calculate the reduced masses of hB^14N, hB^14+15N, and hB^15N assuming p_15 as 0%, 60% (SIMS), and 100%, respectively.
By analyzing the result of the Ref. Vuong2017, we obtain,
Δν_r ∼ -537 μ^1/2 + 2691,
where Δν_r is the Raman shift (in cm^-1) and μ is the reduced mass (in atomic mass units).
The crosses and the solid line in Fig. <ref>(b) are the result of Ref. Vuong2017 and Eq. (<ref>), respectively.
The deviation between them is as small as about 1 cm^-1.
Since our results agree with Eq. (<ref>) within the error of about 2 cm^-1, we confirm that our nitrogen isotope enrichment is successful.
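As a worked check of Eq. (<ref>), assuming standard atomic masses (natural boron 10.81 u, ^14N 14.003 u, ^15N 15.000 u) and a simple composition-weighted nitrogen mass for hB^14+15N:

m_B = 10.81                                   # natural boron (u), assumed
samples = {"hB14N": 14.003,
           "hB14+15N": 0.4 * 14.003 + 0.6 * 15.000,   # p15 = 60% from SIMS
           "hB15N": 15.000}
for label, m_N in samples.items():
    mu = m_B * m_N / (m_B + m_N)              # reduced mass
    print(label, round(-537 * mu ** 0.5 + 2691))
# -> about 1365, 1353, and 1345 cm^-1, i.e. within ~2 cm^-1 of the measured shifts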
Next, we perform ODMR measurements to obtain ^15N isotope effects on V_B defects.
V_B defects are generated by helium ion implantation (acceleration voltage 30 keV, dose 1×10^15 cm^-2) into flakes cleaved with Scotch tape.
The flakes are attached to silicon substrates (with SiO_2 thickness of 90 nm).
We use a homemade confocal microscope <cit.> with optical filters optimized for the photoluminescence (PL) of V_B defects (750∼1000 nm).
A broadband microwave antenna with a copper wire soldered to a coplanar waveguide is used to mitigate unwanted distortions in the broad resonance spectrum of V_B defects.
A permanent magnet is brought close to the sample from below, along the optical (z) axis.
Figure <ref>(a) shows the ODMR spectrum (m_S=0↔-1) of hB^14N at B_z ∼ 40 mT.
The broad signal consists of several closely overlapping Lorentzians.
The solid line is the fitted curve using Eq. (<ref>) with p_15 = 0.
It reproduces the experimental result well.
The parameters obtained by this fitting are f_-,0 = 2312 MHz, C = 5.6%, dν = 47 MHz, and ^(14N)A_zz = 43 MHz.
The obtained HFI parameter of the ^14N spin is consistent with the values of previous works <cit.> within a typical error of a few MHz.
Generally, it is impossible to determine the sign of the HFI parameter from this fitting.
From the positive zero-field splitting of the ground state <cit.> and the spectral change at the ground state level anticrossing <cit.>, we determine that the sign of ^(14N)A_zz is positive.
Note that C and dν depend on the measurement conditions, such as laser power and microwave amplitude.
Next, we show the result of hB^15N in Fig. <ref>(c).
The resonance spectrum clearly consists of four dips, and their separation is larger than in hB^14N.
These are the nitrogen isotope effects on V_B defects.
The solid line is the fitted curve using Eq. (<ref>) with p_15 = 1 and reproduces the experimental result well.
The parameters obtained by this fitting are f_-,0 = 2308 MHz, C = 11%, and dν = 51 MHz, ^(15N)A_zz = ±64 MHz.
The obtained HFI parameter of ^15N spins, ^(15N)A_zz, is ±1.4 times larger than the ^(14N)A_zz obtained above.
It is reasonable considering the ratio of the gyromagnetic ratio of ^14N and ^15N spins.
Our observation supports a number of benefits of ^15N isotope enrichment we expected above, including increased sensitivity and control fidelity.
This is the central result of this work.
In addition, we measure hB^14+15N and obtained that measured spectrum is consistent with the fitting using the HFI parameters and p_15 = 0.6 [Fig. <ref>(b)].
The reason there are only slight undulations in the spectrum is that it contains all signals from #0 to #3 [see Fig. <ref>].
High isotopic composition is necessary to obtain isotope effects useful for quantum sensing.
Finally, to determine the sign of ^(15N)A_zz, we use dynamic nuclear polarization at the excited state level anticrossing (B_z ∼ 70 mT)<cit.>.
In this situation, the angular momentum of the optically polarized electron spins in V_B defects is transferred to the nuclear spins by flip-flops in the excited state; the nuclear spin polarization increases positively <cit.> independent of the nitrogen isotope.
Figure <ref>(d) is the ODMR spectrum of hB^15N at the magnetic field where we observe the largest polarization.
Compared to Fig. <ref>(c), there is clearly an increase in the signal on the high-frequency side and a decrease in the signal on the low-frequency side.
The polarization of ^15N spins estimated from the areas of the spectra <cit.> is 16%.
Since it is enhanced to 27% when the laser power is increased from 0.6 mW [Fig. <ref>(d)] to 5 mW [Fig. <ref>(e)], we conclude that this behavior is the result of optical polarization.
The trend of the observed change in resonance is opposite to that of conventional samples with natural composition ratios <cit.>.
It means that the sign of the HFI parameter is opposite to that of ^(14N)A_zz, i.e., ^(15N)A_zz = -64 MHz, which is consistent with the different signs of the gyromagnetic ratios of the ^14N and ^15N spins.
In this work, we examine nitrogen isotope effects on V_B defects in nitrogen isotopically enriched hBN.
We measure ^15N isotopically enriched hBN crystals synthesized using metathesis reaction under high pressure<cit.>.
In the hBN crystals with different ^15N isotopic compositions, an isotope effect on the phonon energy due to changes in the reduced mass is confirmed.
The HFI parameter of ^15N spin is determined to be -64 MHz from the fitting of ODMR spectra of V_B defects produced by helium ion implantation.
The demonstrated uncomplicated spectrum of hB^15N is beneficial for achieving high sensitivity.
Further, when combined with ^10B isotope enrichment techniques, the sensitivity will be optimized by improving the coherence properties of V_B defects<cit.>.
Sensor labeling with nitrogen isotopes may enable us to identify multiple sensor locations within a device stacked with two-dimensional materials.
The increased control fidelity and distinct optical polarization resulting from enhanced spectral separation would also make hB^15N useful as a polarization agent <cit.> and platform for quantum information processing.
Furthermore, nitrogen isotope enrichment of hBN is essential in studying color centers other than V_B defects, such as carbon-related defects<cit.>.
Our investigation, which reveals nitrogen isotope effects, is a vital step toward the design of hBN for quantum technologies.
We thank Kenji Watanabe (NIMS) for material preparation, Shu Nakaharai (TUT) for useful discussion, Kohei M. Itoh (Keio) for letting us use the confocal microscope system, and Ryota Akiyama (UTokyo) for supporting the Raman measurement.
This work was partially supported by “Advanced Research Infrastructure for Materials and Nanotechnology in Japan (ARIM)" (Proposal No. JPMXP1222UT1131) of the Ministry of Education, Culture, Sports, Science and Technology of Japan (MEXT), “World Premier International Research Center Initiative on Materials Nanoarchitectonics (WPI-MANA)" supported by MEXT.
This work was supported by Grants-in-Aid for Scientific Research (KAKEN) Nos. JP22K03524, JP19H00656, JP19H05826, JP23H01103, and JP23H02052, and Next Generation Artificial Intelligence Research Center at the University of Tokyo.
[Itoh and Watanabe(2014)]Itoh2014
author author K. M. Itoh and author H. Watanabe, title title Isotope
engineering of silicon and diamond for quantum computing and sensing
applications, https://doi.org/10.1557/mrc.2014.32 journal journal MRS Communications volume 4, pages 143–157 (year
2014)NoStop
[Balasubramanian et al.(2009)Balasubramanian, Neumann, Twitchen,
Markham, Kolesov, Mizuochi,
Isoya, Achard, Beck,
Tissler, Jacques, Hemmer,
Jelezko, and Wrachtrup]Balasubramanian2009
author author G. Balasubramanian, author P. Neumann, author D. Twitchen,
author M. Markham, author R. Kolesov, author
N. Mizuochi, author
J. Isoya, author J. Achard, author J. Beck, author J. Tissler, author V. Jacques,
author P. R. Hemmer, author F. Jelezko, and author
J. Wrachtrup, title title Ultralong spin coherence time in isotopically engineered
diamond, https://doi.org/10.1038/nmat2420 journal
journal Nature Materials volume 8, pages 383–387 (year 2009)NoStop
[Ishikawa et al.(2012)Ishikawa, Fu, Santori, Acosta, Beausoleil, Watanabe, Shikata, and Itoh]Ishikawa2012
author author T. Ishikawa, author K.-M. C. Fu,
author C. Santori, author V. M. Acosta, author
R. G. Beausoleil, author
H. Watanabe, author
S. Shikata, and author
K. M. Itoh, title title Optical and spin coherence properties of nitrogen-vacancy
centers placed in a 100 nm thick isotopically purified diamond layer, https://doi.org/10.1021/nl300350r journal journal Nano Letters volume 12, pages 2083–2087 (year 2012)NoStop
[Ohashi et al.(2013)Ohashi,
Rosskopf, Watanabe, Loretz,
Tao, Hauert, Tomizawa,
Ishikawa, Ishi-Hayase, Shikata, Degen, and Itoh]Ohashi2013
author author K. Ohashi, author T. Rosskopf,
author H. Watanabe, author M. Loretz, author
Y. Tao, author R. Hauert, author S. Tomizawa, author T. Ishikawa, author J. Ishi-Hayase, author S. Shikata, author C. L. Degen, and author K. M. Itoh, title title
Negatively charged nitrogen-vacancy centers in a 5 nm thin ^12C
diamond film, https://doi.org/10.1021/nl402286v journal journal Nano Letters volume
13, pages 4733–4738 (year 2013)NoStop
[Muhonen et al.(2014)Muhonen, Dehollain, Laucht, Hudson, Kalra, Sekiguchi, Itoh, Jamieson, McCallum, Dzurak, and Morello]Muhonen2014
author author J. T. Muhonen, author J. P. Dehollain, author A. Laucht,
author F. E. Hudson, author R. Kalra, author
T. Sekiguchi, author
K. M. Itoh, author
D. N. Jamieson, author
J. C. McCallum, author
A. S. Dzurak, and author
A. Morello, title title Storing quantum information for 30 seconds in a
nanoelectronic device, https://doi.org/10.1038/nnano.2014.211
journal journal Nature Nanotechnology volume 9, pages 986–991 (year
2014)NoStop
[Veldhorst et al.(2014)Veldhorst, Hwang, Yang, Leenstra, de Ronde, Dehollain,
Muhonen, Hudson, Itoh,
Morello, and Dzurak]Veldhorst2014
author author M. Veldhorst, author J. C. C. Hwang, author C. H. Yang,
author A. W. Leenstra, author B. de Ronde, author
J. P. Dehollain, author
J. T. Muhonen, author
F. E. Hudson, author
K. M. Itoh, author
A. Morello, and author
A. S. Dzurak, title
title An addressable quantum dot qubit with
fault-tolerant control-fidelity, https://doi.org/10.1038/nnano.2014.216 journal journal Nature Nanotechnology volume 9, pages 981–985 (year 2014)NoStop
[Kleinsasser et al.(2016)Kleinsasser, Stanfield, Banks,
Zhu, Li, Acosta,
Watanabe, Itoh, and Fu]Kleinsasser2016
author author E. E. Kleinsasser, author M. M. Stanfield, author J. K. Q. Banks, author Z. Zhu, author W.-D. Li, author
V. M. Acosta, author
H. Watanabe, author
K. M. Itoh, and author
K.-M. C. Fu, title title High density nitrogen-vacancy sensing surface created via
He^+ ion implantation of ^12C diamond, https://doi.org/10.1063/1.4949357 journal journal Applied Physics Letters volume 108, pages 202401 (year 2016)NoStop
[Rabeau et al.(2006)Rabeau,
Reichart, Tamanyan, Jamieson,
Prawer, Jelezko, Gaebel,
Popa, Domhan, and Wrachtrup]Rabeau2006
author author J. R. Rabeau, author P. Reichart,
author G. Tamanyan, author D. N. Jamieson, author
S. Prawer, author F. Jelezko, author T. Gaebel, author I. Popa, author M. Domhan, and author J. Wrachtrup, title title Implantation
of labelled single nitrogen vacancy centers in diamond using ^15N, https://doi.org/10.1063/1.2158700 journal journal Applied Physics Letters volume 88, pages 023113 (year 2006)NoStop
[van Dam et al.(2019)van
Dam, Walsh, Degen, Bersin,
Mouradian, Galiullin, Ruf,
IJspeert, Taminiau, Hanson, and Englund]vanDam2019
author author S. B. van Dam, author M. Walsh,
author M. J. Degen, author E. Bersin, author
S. L. Mouradian, author
A. Galiullin, author
M. Ruf, author M. IJspeert, author T. H. Taminiau, author R. Hanson, and author D. R. Englund, title title Optical
coherence of diamond nitrogen-vacancy centers formed by ion implantation and
annealing, https://doi.org/10.1103/physrevb.99.161203 journal journal Physical Review B volume 99, pages 161203 (year
2019)NoStop
[Gottscholl et al.(2020)Gottscholl, Kianinia, Soltamov,
Orlinskii, Mamin, Bradac,
Kasper, Krambrock, Sperlich,
Toth, Aharonovich, and Dyakonov]Gottscholl2020
author author A. Gottscholl, author M. Kianinia, author V. Soltamov,
author S. Orlinskii, author G. Mamin, author
C. Bradac, author C. Kasper, author K. Krambrock, author A. Sperlich, author M. Toth, author I. Aharonovich, and author V. Dyakonov, title title Initialization
and read-out of intrinsic spin defects in a van der Waals crystal at room
temperature, https://doi.org/10.1038/s41563-020-0619-6 journal journal Nature Materials volume 19, pages 540–545 (year
2020)NoStop
[Gottscholl et al.(2021)Gottscholl, Diez, Soltamov, Kasper, Krauße, Sperlich, Kianinia, Bradac, Aharonovich, and Dyakonov]Gottscholl2021
author author A. Gottscholl, author M. Diez,
author V. Soltamov, author C. Kasper, author
D. Krauße, author
A. Sperlich, author
M. Kianinia, author
C. Bradac, author I. Aharonovich, and author V. Dyakonov, title title Spin defects in hBN as promising temperature, pressure and
magnetic field quantum sensors, https://doi.org/10.1038/s41467-021-24725-1 journal journal Nature Communications volume 12, pages 4480 (year 2021)NoStop
[Huang et al.(2022)Huang,
Zhou, Chen, Lu, McLaughlin, Li, Alghamdi, Djugba, Shi, Wang, and Du]Huang2022
author author M. Huang, author J. Zhou,
author D. Chen, author
H. Lu, author N. J. McLaughlin, author S. Li, author M. Alghamdi, author D. Djugba,
author J. Shi, author
H. Wang, and author
C. R. Du, title title Wide field imaging of van der Waals ferromagnet
Fe_3GeTe_2 by spin defects in hexagonal boron nitride, https://doi.org/10.1038/s41467-022-33016-2 journal
journal Nature Communications volume
13, pages 5369 (year 2022)NoStop
[Healey et al.(2022)Healey,
Scholten, Yang, Scott,
Abrahams, Robertson, Hou,
Guo, Rahman, Lu,
Kianinia, Aharonovich, and Tetienne]Healey2022
author author A. J. Healey, author S. C. Scholten, author T. Yang,
author J. A. Scott, author G. J. Abrahams, author
I. O. Robertson, author
X. F. Hou, author Y. F. Guo, author S. Rahman, author Y. Lu, author M. Kianinia, author I. Aharonovich, and author J.-P. Tetienne, title title Quantum
microscopy with van der Waals heterostructures, https://doi.org/10.1038/s41567-022-01815-5 journal journal Nature Physics volume 19, pages 87–91 (year 2022)NoStop
[Kumar et al.(2022)Kumar,
Fabre, Durand, Clua-Provost,
Li, Edgar, Rougemaille,
Coraux, Marie, Renucci,
Robert, Robert-Philip, Gil,
Cassabois, Finco, and Jacques]Kumar2022
author author P. Kumar, author F. Fabre,
author A. Durand, author T. Clua-Provost, author
J. Li, author J. Edgar, author N. Rougemaille, author J. Coraux, author X. Marie, author P. Renucci, author C. Robert, author I. Robert-Philip, author B. Gil, author G. Cassabois, author A. Finco, and author V. Jacques, title title Magnetic imaging with spin
defects in hexagonal boron nitride, https://doi.org/10.1103/physrevapplied.18.l061002 journal
journal Physical Review Applied volume
18, pages L061002 (year 2022)NoStop
[Sasaki et al.(2023)Sasaki,
Nakamura, Gu, Tsukamoto,
Nakaharai, Iwasaki, Watanabe,
Taniguchi, Ogawa, Morita, and Kobayashi]Sasaki2023
author author K. Sasaki, author Y. Nakamura,
author H. Gu, author
M. Tsukamoto, author
S. Nakaharai, author
T. Iwasaki, author K. Watanabe, author T. Taniguchi, author S. Ogawa, author Y. Morita, and author K. Kobayashi, title title Magnetic field imaging by hBN quantum sensor nanoarray, https://doi.org/10.1063/5.0147072 journal journal Applied Physics Letters volume 122, pages 244003 (year 2023)NoStop
[Vuong et al.(2017)Vuong,
Liu, der Lee, Cuscó,
Artús, Michel, Valvin,
Edgar, Cassabois, and Gil]Vuong2017
author author T. Q. P. Vuong, author S. Liu, author A. V. der Lee,
author R. Cuscó, author L. Artús, author
T. Michel, author P. Valvin, author J. H. Edgar, author G. Cassabois, and author B. Gil, title title Isotope engineering
of van der Waals interactions in hexagonal boron nitride, https://doi.org/10.1038/nmat5048 journal journal
Nature Materials volume 17, pages
152–158 (year 2017)NoStop
[Cuscó et al.(2018)Cuscó, Artús, Edgar,
Liu, Cassabois, and Gil]Cusc2018
author author R. Cuscó, author L. Artús, author J. H. Edgar, author S. Liu, author G. Cassabois, and author B. Gil, title
title Isotopic effects on phonon anharmonicity in
layered van der Waals crystals: Isotopically pure hexagonal boron
nitride, https://doi.org/10.1103/physrevb.97.155435 journal journal Physical Review B volume 97, pages 155435 (year
2018)NoStop
[Haykal et al.(2022)Haykal,
Tanos, Minotto, Durand,
Fabre, Li, Edgar,
Ivády, Gali, Michel,
Dréau, Gil, Cassabois, and Jacques]Haykal2022
author author A. Haykal, author R. Tanos,
author N. Minotto, author A. Durand, author
F. Fabre, author J. Li, author J. H. Edgar, author V. Ivády, author A. Gali,
author T. Michel, author A. Dréau, author
B. Gil, author G. Cassabois, and author V. Jacques, title title Decoherence of V_B^- spin defects in monoisotopic
hexagonal boron nitride, https://doi.org/10.1038/s41467-022-31743-0 journal journal Nature Communications volume 13, pages 4347 (year 2022)NoStop
[Janzen et al.(2023)Janzen,
Schutte, Plo, Rousseau,
Michel, Desrat, Valvin,
Jacques, Cassabois, Gil, and Edgar]Janzen2023
author author E. Janzen, author H. Schutte,
author J. Plo, author
A. Rousseau, author
T. Michel, author W. Desrat, author P. Valvin, author V. Jacques, author G. Cassabois, author B. Gil, and author J. H. Edgar, title title
Boron and nitrogen isotope effects on hexagonal boron nitride properties, https://doi.org/10.48550/ARXIV.2306.13358 (year
2023), 10.48550/ARXIV.2306.13358NoStop
[Chen et al.(2020)Chen,
Song, Ravichandran, Zheng,
Chen, Lee, Sun, Li, Gamage, Tian, Ding,
Song, Rai, Wu, Koirala, Schmidt, Watanabe, Lv, Ren, Shi, Cahill,
Taniguchi, Broido, and Chen]Chen2020
author author K. Chen, author B. Song, author N. K. Ravichandran, author Q. Zheng, author
X. Chen, author H. Lee, author H. Sun, author S. Li, author G. A. G. U. Gamage, author F. Tian, author
Z. Ding, author Q. Song, author A. Rai, author H. Wu, author P. Koirala, author
A. J. Schmidt, author
K. Watanabe, author
B. Lv, author Z. Ren, author L. Shi, author D. G. Cahill,
author T. Taniguchi, author D. Broido, and author
G. Chen, title title Ultrahigh thermal conductivity in isotope-enriched cubic
boron nitride, https://doi.org/10.1126/science.aaz6149 journal journal Science volume
367, pages 555–559 (year 2020)NoStop
[Taniguchi et al.()Taniguchi
et al.]TaniguchiXXXX
author author T. Taniguchi et al., @noop note Unpublished
study.Stop
[Gao et al.(2022)Gao,
Vaidya, Li, Ju, Jiang, Xu, Allcca, Shen,
Taniguchi, Watanabe, Bhave,
Chen, Ping, and Li]Gao2022
author author X. Gao, author S. Vaidya,
author K. Li, author
P. Ju, author B. Jiang, author Z. Xu, author A. E. L. Allcca, author K. Shen, author T. Taniguchi,
author K. Watanabe, author S. A. Bhave, author
Y. P. Chen, author
Y. Ping, and author
T. Li, title title Nuclear spin polarization and control in hexagonal boron
nitride, https://doi.org/10.1038/s41563-022-01329-8 journal journal Nature Materials volume 21, pages 1024–1028 (year
2022)NoStop
[Gracheva et al.(2023)Gracheva, Murzakhanov, Mamin, Sadovnikova, Gabbasov, Mokhov, and Gafurov]Gracheva2023
author author I. N. Gracheva, author F. F. Murzakhanov, author G. V. Mamin, author M. A. Sadovnikova, author B. F. Gabbasov, author E. N. Mokhov, and author M. R. Gafurov, title title Symmetry of the
hyperfine and quadrupole interactions of boron vacancies in a hexagonal boron
nitride, https://doi.org/10.1021/acs.jpcc.2c08716 journal journal The Journal of Physical Chemistry C volume 127, pages 3634–3639 (year 2023)NoStop
[Taniguchi and Watanabe(2007)]Taniguchi2007
author author T. Taniguchi and author K. Watanabe, title title Synthesis of
high-purity boron nitride single crystals under high pressure by using
Ba–BN solvent, https://doi.org/10.1016/j.jcrysgro.2006.12.061 journal
journal Journal of Crystal Growth volume
303, pages 525–529 (year 2007)NoStop
[Stenger et al.(2017)Stenger, Schué, Boukhicha,
Berini, Plaçais, Loiseau, and Barjon]Stenger2017
author author I. Stenger, author L. Schué, author M. Boukhicha, author B. Berini,
author B. Plaçais, author A. Loiseau, and author
J. Barjon, title title Low frequency raman spectroscopy of few-atomic-layer thick
hBN crystals, https://doi.org/10.1088/2053-1583/aa77d4
journal journal 2D Materials volume 4, pages 031003 (year
2017)NoStop
[Misonou et al.(2020)Misonou, Sasaki, Ishizu, Monnai, Itoh, and Abe]Misonou2020
author author D. Misonou, author K. Sasaki,
author S. Ishizu, author Y. Monnai, author
K. M. Itoh, and author
E. Abe, title title Construction and operation of a tabletop system for
nanoscale magnetometry with single nitrogen-vacancy centers in diamond, https://doi.org/10.1063/1.5128716 journal journal AIP Advances volume 10, pages 025206 (year 2020)NoStop
[Murzakhanov et al.(2022)Murzakhanov, Mamin, Orlinskii,
Gerstmann, Schmidt, Biktagirov, Aharonovich, Gottscholl,
Sperlich, Dyakonov, and Soltamov]Murzakhanov2022
author author F. F. Murzakhanov, author G. V. Mamin, author S. B. Orlinskii, author U. Gerstmann, author W. G. Schmidt, author T. Biktagirov,
author I. Aharonovich, author A. Gottscholl, author
A. Sperlich, author
V. Dyakonov, and author
V. A. Soltamov, title
title Electron-nuclear coherent coupling and nuclear
spin readout through optically polarized V_B^- spin states in
hBN, https://doi.org/10.1021/acs.nanolett.1c04610 journal journal Nano Letters volume
22, pages 2718–2724 (year 2022)NoStop
[Gu et al.(2023)Gu,
Nakamura, Sasaki, and Kobayashi]Gu2023
author author H. Gu, author Y. Nakamura,
author K. Sasaki, and author K. Kobayashi, title
title Multi-frequency composite pulse sequences for
sensitivity enhancement in hexagonal boron nitride quantum sensor, https://doi.org/10.35848/1882-0786/acd1d1 journal journal Applied Physics Express volume 16, pages 055003 (year 2023)NoStop
[Ru et al.(2023)Ru,
Jiang, Liang, Kenny,
Cai, Lyu, Cernansky,
Zhou, Yang, Watanabe,
Taniguch, Li, Seng,
Liu, Jelezko, Bettiol, and Gao]Shihao2023
author author S. Ru, author Z. Jiang, author H. Liang, author
J. Kenny, author H. Cai, author X. Lyu, author R. Cernansky,
author F. Zhou, author
Y. Yang, author K. Watanabe, author T. Taniguch, author F. Li, author K. T. Seng, author X. Liu, author F. Jelezko,
author A. A. Bettiol, and author W. Gao, title title Robust nuclear spin polarization via
ground-state level anti-crossing of boron vacancy defects in hexagonal boron
nitride, https://doi.org/10.48550/ARXIV.2306.15960 (year 2023), 10.48550/ARXIV.2306.15960NoStop
[Jacques et al.(2009)Jacques, Neumann, Beck, Markham, Twitchen, Meijer, Kaiser, Balasubramanian, Jelezko, and Wrachtrup]Jacques2009
author author V. Jacques, author P. Neumann,
author J. Beck, author
M. Markham, author D. Twitchen, author J. Meijer, author F. Kaiser, author G. Balasubramanian, author F. Jelezko, and author J. Wrachtrup, title title Dynamic polarization of single nuclear spins by optical pumping of
nitrogen-vacancy color centers in diamond at room temperature, https://doi.org/10.1103/physrevlett.102.057403 journal
journal Physical Review Letters volume
102, pages 057403 (year 2009)NoStop
[Broadway et al.(2018)Broadway, Tetienne, Stacey, Wood, Simpson, Hall, and Hollenberg]Broadway2018
author author D. A. Broadway, author J.-P. Tetienne, author A. Stacey,
author J. D. A. Wood, author D. A. Simpson, author
L. T. Hall, and author
L. C. L. Hollenberg, title
title Quantum probe hyperpolarisation of molecular
nuclear spins, https://doi.org/10.1038/s41467-018-03578-1
journal journal Nature Communications volume 9, pages 1246 (year
2018)NoStop
[Jannin et al.(2019)Jannin,
Dumez, Giraudeau, and Kurzbach]Jannin2019
author author S. Jannin, author J.-N. Dumez,
author P. Giraudeau, and author D. Kurzbach, title title Application and methodology of
dissolution dynamic nuclear polarization in physical, chemical and biological
contexts, https://doi.org/10.1016/j.jmr.2019.06.001 journal journal Journal of Magnetic Resonance volume 305, pages 41–50 (year
2019)NoStop
[Mendelson et al.(2020)Mendelson, Chugh, Reimers, Cheng, Gottscholl, Long, Mellor, Zettl, Dyakonov, Beton, Novikov, Jagadish, Tan, Ford, Toth, Bradac, and Aharonovich]Mendelson2020
author author N. Mendelson, author D. Chugh,
author J. R. Reimers, author T. S. Cheng, author
A. Gottscholl, author
H. Long, author C. J. Mellor, author A. Zettl, author V. Dyakonov, author P. H. Beton, author S. V. Novikov, author C. Jagadish,
author H. H. Tan, author M. J. Ford, author
M. Toth, author C. Bradac, and author I. Aharonovich, title title Identifying carbon as the source of visible single-photon emission
from hexagonal boron nitride, https://doi.org/10.1038/s41563-020-00850-y journal journal Nature Materials volume 20, pages 321–328 (year 2020)NoStop
[Chejanovsky et al.(2021)Chejanovsky, Mukherjee, Geng, Chen, Kim, Denisenko, Finkler, Taniguchi, Watanabe, Dasari, Auburger, Gali, Smet, and Wrachtrup]Chejanovsky2021
author author N. Chejanovsky, author A. Mukherjee, author J. Geng,
author Y.-C. Chen, author Y. Kim, author
A. Denisenko, author
A. Finkler, author T. Taniguchi, author K. Watanabe, author D. B. R. Dasari, author P. Auburger, author A. Gali,
author J. H. Smet, and author J. Wrachtrup, title title Single-spin resonance in a van der
Waals embedded paramagnetic defect, https://doi.org/10.1038/s41563-021-00979-4 journal journal Nature Materials volume 20, pages 1079–1084 (year 2021)NoStop
[Stern et al.(2023)Stern,
Gilardoni, Gu, Barker,
Powell, Deng, Follet,
Li, Ramsay, Tan,
Aharonovich, and Atatüre]Stern2023
author author H. L. Stern, author C. M. Gilardoni, author Q. Gu,
author S. E. Barker, author O. Powell, author
X. Deng, author L. Follet, author C. Li, author A. Ramsay, author H. H. Tan,
author I. Aharonovich, and author M. Atatüre, title title A quantum coherent spin in a
two-dimensional material at room temperature, https://doi.org/10.48550/ARXIV.2306.13025 (year 2023), 10.48550/ARXIV.2306.13025NoStop
[Scholten et al.(2023)Scholten, Singh, Healey, Robertson, Haim, Tan, Broadway, Wang, Abe, Ohshima, Kianinia, Reineck, Aharonovich, and Tetienne]Scholten2023
author author S. C. Scholten, author P. Singh,
author A. J. Healey, author I. O. Robertson, author
G. Haim, author C. Tan, author D. A. Broadway, author L. Wang, author H. Abe, author T. Ohshima, author
M. Kianinia, author
P. Reineck, author I. Aharonovich, and author J.-P. Tetienne, title title Multi-species optically addressable spin defects in a van der
Waals material, https://doi.org/10.48550/ARXIV.2306.16600 (year 2023), 10.48550/ARXIV.2306.16600NoStop
§ SPIN HAMILTONIAN
In this section, we explain the spin Hamiltonian.
The spin Hamiltonian of the ground state of a V_B defect would be given as,
Ĥ = Ĥ_ZFS + Ĥ_Ze + Ĥ_Zn + Ĥ_HFI + Ĥ_QI,
Ĥ_ZFS = D Ŝ_z^2
- E_y (Ŝ_xŜ_y + Ŝ_yŜ_x) - E_x (Ŝ_x^2 - Ŝ_y^2),
Ĥ_Ze = γ_e B_0 ·Ŝ,
Ĥ_Zn = ∑_j=1^3 -γ_(j)B_0 ·Î_(j),
Ĥ_HFI = ∑_j=1^3Ŝ A_HFI,(j)Î_(j),
Ĥ_QI = ∑_j=1^3 P_p(j),(j)Î_p(j),(j)^2 + P_z,(j)Î_z,(j)^2 + P_o(j),(j)Î_o(j),(j)^2,
where, z is the direction perpendicular to the hBN plane (the direction of the symmetry axis of the V_B defect), x and y are the in-plane directions, D is the zero-field splitting (ZFS) including the effects of electric field and strain, γ_e = 28 MHz/mT is the gyromagnetic ratio of electron spin, B_0 is the magnetic field vector, E_x and E_y are strain parameters related to local electric field and crystal strain<cit.>, j (=1,2,3) are labels of nearest-neighbor nitrogen sites, γ_(j) is gyromagnetic ratio of nitrogen nuclear spins, A_HFI,(j) is hyperfine interaction (HFI) tensor, Î_k,(j) is nuclear spin operator in the k direction, and P_k,(j) is the nuclear quadrupole moment in the k direction.
Ĥ_ZFS is the ZFS term and Ĥ_Ze is the Zeeman term of the electron spin.
We assume that the strain terms take the same form as the NV center in diamond <cit.>, which has a symmetry close to the V_B defect.
Typical parameter values for V_B defects are D ∼ 3450 MHz and E_x, E_y ∼ 50 MHz <cit.>.
Ĥ_Zn is the Zeeman term of nuclear spin, Ĥ_HFI is the HFI term, and Ĥ_QI is the nuclear quadrupole moment term.
They are based on the form of Ref. Gracheva2023.
p(j) is the direction from the vacancy (electron spin) to the nearest nitrogen site j, and o(j) is the direction of the cross product of p(j) and z.
The gyromagnetic ratio is γ_14N = 3.077 kHz/mT for ^14N spin and γ_15N = -4.316 kHz/mT for ^15N spin.
The interaction with boron and remote nitrogen spins other than the nearest-neighbor ones is small and appears as a broadening of the electron spin resonance linewidth<cit.>, so we do not consider its details.
We introduce an approximation that is valid under quantum sensing conditions.
When a magnetic field is applied with sufficient strength in the direction of the symmetry axis (B_0 = B_z e_z), the effect of strain, which degrades the magnetic field sensitivity, can be ignored.
Specifically, this condition is given by B_z ≫ E_x(y)/γ_e.
Except in the vicinity of the ground state level anticrossing (D/γ_e ∼ 125 mT), the Hamiltonian can be approximated as,
Ĥ_ZFS ∼ D Ŝ_z^2
Ĥ_Ze = γ_e B_z Ŝ_z,
Ĥ_HFI ∼Ŝ_z ∑_j=1^3 ( A_zx,(j)Î_x,(j) + A_zy,(j)Î_y,(j) + A_zz,(j)Î_z,(j) ),
where A_zx, A_zy, and A_zz are the elements of HFI tensor.
Within this approximation, the electron spin is quantized in the z direction.
Then, we also introduce an approximation to the nuclear spin terms.
The HFI tensor consists of the dipole interaction and the Fermi contact interaction.
The element of the dipole interaction tensor between electron and nuclear spins is given by,
^dipoleA_αβ = μ_0/4πh γ_e γ_n/r^3 [ 3 (r·e_α)(r·e_β) - e_α·e_β],
where α (= x,y,z) is the direction of the electron spin, β (= x,y,z) is the direction of the nuclear spin, h is Plank constant, r is the position of the nuclear spin with respect to the electron spin.
Since the electron spin is quantized in the z direction, only the α = z term needs to be considered.
Assuming that the electron spin is localized at the vacancy position, r·e_z=0 is satisfied, and we obtain,
^dipoleA_zz = -μ_0/4πh γ_e γ_n/r^3,
^dipoleA_zx = 0,
^dipoleA_zy = 0.
The Fermi contact interaction ^FermiA is a term arising from the overlapping of wave functions of electron and nuclear spins and is zero except for the isotropic component (α = β).
Thus, the HFI term can be approximated as,
Ĥ_HFI ∼Ŝ_z ∑_j=1^3 A_zz,(j)Î_z,(j).
A_zz,(j) and typical line widths of the V_B defects are around 40 MHz or larger.
Under typical experimental conditions, they are an order of magnitude larger than the nuclear spin's Zeeman effect and nuclear quadrupole moment.
Therefore, we neglect nuclear spin terms other than HFI and express the effective spin Hamiltonian as,
Ĥ = D Ŝ_z^2 + γ_e B_z Ŝ_z + Ŝ_z ∑_j=1^3 A_zz,(j)Î_z,(j).
It corresponds to Eq. (1) in the main text and is equivalent to ignoring the nuclear spin's Zeeman effect in the Eq. (8) of the Supplementary Information of Ref. Gao2022.
In this condition, each nitrogen nuclear spin is quantized in the z direction, and energy states according to their total quantum number m_I,tot can be observed.
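A minimal numerical check of this effective Hamiltonian for a defect with three ^15N neighbours (#3) is sketched below in Python; D and γ_e are the typical values quoted above, A_zz = -64 MHz is taken from the main text, and B_z = 40 mT is an assumed example field.

import numpy as np

D, gamma_e, A_zz, B_z = 3450.0, 28.0, -64.0, 40.0   # MHz, MHz/mT, MHz, mT

Sz = np.diag([1.0, 0.0, -1.0])      # electron spin-1 S_z, basis m_S = +1, 0, -1
Iz = np.diag([0.5, -0.5])           # 15N nuclear spin-1/2 I_z
E2 = np.eye(2)

def kron(*ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

# H = D Sz^2 + gamma_e B_z Sz + Sz sum_j A_zz I_z,(j)  (24 x 24, diagonal here)
H = D * kron(Sz @ Sz, E2, E2, E2) + gamma_e * B_z * kron(Sz, E2, E2, E2)
for nuc in ([Iz, E2, E2], [E2, Iz, E2], [E2, E2, Iz]):
    H += A_zz * kron(Sz, *nuc)

E = np.diag(H).reshape(3, 8)        # rows: m_S = +1, 0, -1; columns: nuclear configs
print(np.unique(np.round(E[2] - E[1], 3)))   # m_S = 0 -> -1: four lines split by 64 MHz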
§ ADDITIONAL DATA OF FIGURE 3 IN THE MAIN TEXT
This section contains additional data related to Fig. 3 in the main text.
Figures <ref>(a) and (b) are enlarged images of Figs. 3(a) and (c) in the main text, respectively.
Based on the fitting results, the signals of each resonance line are decomposed and shown.
The signal of hB^15N [Fig. <ref>(b)] shows a simpler spectrum with higher contrast and narrower linewidths overall than the conventional case [Fig. <ref>(a)] because the number of resonance lines included is smaller and their separation is larger.
A slight bias in the signal contrast appears as a deviation from the fitting.
We have not yet identified its cause.
Possible causes are the polarization of nuclear spins or the frequency dependence of the microwave power.
Figure <ref> contains additional data of dynamic nuclear polarization at excited state level anticrossing.
The condition for the excited-state anticrossing is estimated to be 76 mT from the zero-field splitting of 2130 MHz obtained from the ODMR spectrum of the excited state measured at zero field.
We show ODMR spectrum around the condition in Fig. <ref>(a).
We observe that the spectrum is biased toward the high-frequency side around 70 mT.
Figure <ref>(b) shows the ^15N spin polarization estimated by <cit.>,
Polarization = [ ∑ m_I,tot A_m_I,tot ] / [ (3/2) ∑ A_m_I,tot ],
where A_m_I,tot is the area of the spectrum belonging to the m_I,tot state, estimated from the product of signal amplitude and line width obtained by fitting each spectrum.
The summation symbols in the denominator and numerator are for the possible m_I,tot states.
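A minimal sketch of this estimate is given below; the areas are hypothetical placeholders approximated by contrast × linewidth of each fitted dip of a three-^15N defect (#3), not the actual fitted values.

def polarization(areas):
    # areas: {m_I_tot: A_m} for the four m_S = 0 -> -1 dips of defect #3
    return sum(m * A for m, A in areas.items()) / (1.5 * sum(areas.values()))

example = {-1.5: 0.7, -0.5: 2.3, 0.5: 3.4, 1.5: 1.6}   # hypothetical areas
print(f"{polarization(example):.1%}")                   # ~16% for this example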
The polarization reaches its maximum around 70 mT, near the condition where it increases with increasing laser power [Fig. <ref>(c)].
It is a typical behavior of optical nuclear spin polarization at the excited state level anticrossing <cit.>.
We determine the sign of the HFI parameter of ^15N spin based on this evidence.
The above observations reveal several behaviors that have not been reported before.
The first is the magnetic field at which the polarization is maximized, which differs from previous works (∼74 mT) <cit.>.
The second and third are a decrease in signal contrast around 73 mT and a reversal of the polarization sign above 75 mT, respectively.
It is unlikely that the frequency dependence of microwave power is responsible for this since the contrast of the ODMR spectra at the same microwave frequency is very different at different magnetic field conditions.
It remains to be clarified whether this is due to ^15N isotope effects, field misalignment, other defects in the sample, etc.
^15N isotope enrichment may have allowed us to observe behaviors that could not be resolved under the conventional, broader anticrossing condition.
We believe that these interesting behaviors will be elucidated in future studies of ODMR spectra and ^15N spin polarization in a wide magnetic field range, including ground state level anticrossing <cit.>.
[Dolde2011] F. Dolde, H. Fedder, M. W. Doherty, T. Nöbauer, F. Rempp, G. Balasubramanian, T. Wolf, F. Reinhard, L. C. L. Hollenberg, F. Jelezko, and J. Wrachtrup, Electric-field sensing using single diamond spins, Nature Physics 7, 459–463 (2011). https://doi.org/10.1038/nphys1969
[Mittiga2018] T. Mittiga, S. Hsieh, C. Zu, B. Kobrin, F. Machado, P. Bhattacharyya, N. Rui, A. Jarmola, S. Choi, D. Budker, and N. Yao, Imaging the local charge environment of nitrogen-vacancy centers in diamond, Physical Review Letters 121, 246402 (2018). https://doi.org/10.1103/physrevlett.121.246402
[Gottscholl2020] A. Gottscholl, M. Kianinia, V. Soltamov, S. Orlinskii, G. Mamin, C. Bradac, C. Kasper, K. Krambrock, A. Sperlich, M. Toth, I. Aharonovich, and V. Dyakonov, Initialization and read-out of intrinsic spin defects in a van der Waals crystal at room temperature, Nature Materials 19, 540–545 (2020). https://doi.org/10.1038/s41563-020-0619-6
[Gu2023] H. Gu, Y. Nakamura, K. Sasaki, and K. Kobayashi, Multi-frequency composite pulse sequences for sensitivity enhancement in hexagonal boron nitride quantum sensor, Applied Physics Express 16, 055003 (2023). https://doi.org/10.35848/1882-0786/acd1d1
[Ivdy2020] V. Ivády, G. Barcza, G. Thiering, S. Li, H. Hamdi, J.-P. Chou, Ö. Legeza, and A. Gali, Ab initio theory of the negatively charged boron vacancy qubit in hexagonal boron nitride, npj Computational Materials 6, 41 (2020). https://doi.org/10.1038/s41524-020-0305-x
[Gottscholl2021] A. Gottscholl, M. Diez, V. Soltamov, C. Kasper, D. Krauße, A. Sperlich, M. Kianinia, C. Bradac, I. Aharonovich, and V. Dyakonov, Spin defects in hBN as promising temperature, pressure and magnetic field quantum sensors, Nature Communications 12, 4480 (2021). https://doi.org/10.1038/s41467-021-24725-1
[Gao2022] X. Gao, S. Vaidya, K. Li, P. Ju, B. Jiang, Z. Xu, A. E. L. Allcca, K. Shen, T. Taniguchi, K. Watanabe, S. A. Bhave, Y. P. Chen, Y. Ping, and T. Li, Nuclear spin polarization and control in hexagonal boron nitride, Nature Materials 21, 1024–1028 (2022). https://doi.org/10.1038/s41563-022-01329-8
[Gracheva2023] I. N. Gracheva, F. F. Murzakhanov, G. V. Mamin, M. A. Sadovnikova, B. F. Gabbasov, E. N. Mokhov, and M. R. Gafurov, Symmetry of the hyperfine and quadrupole interactions of boron vacancies in a hexagonal boron nitride, The Journal of Physical Chemistry C 127, 3634–3639 (2023). https://doi.org/10.1021/acs.jpcc.2c08716
[Haykal2022] A. Haykal, R. Tanos, N. Minotto, A. Durand, F. Fabre, J. Li, J. H. Edgar, V. Ivády, A. Gali, T. Michel, A. Dréau, B. Gil, G. Cassabois, and V. Jacques, Decoherence of V_B^- spin defects in monoisotopic hexagonal boron nitride, Nature Communications 13, 4347 (2022). https://doi.org/10.1038/s41467-022-31743-0
[Shihao2023] S. Ru, Z. Jiang, H. Liang, J. Kenny, H. Cai, X. Lyu, R. Cernansky, F. Zhou, Y. Yang, K. Watanabe, T. Taniguchi, F. Li, K. T. Seng, X. Liu, F. Jelezko, A. A. Bettiol, and W. Gao, Robust nuclear spin polarization via ground-state level anti-crossing of boron vacancy defects in hexagonal boron nitride, arXiv:2306.15960 (2023). https://doi.org/10.48550/ARXIV.2306.15960
|
http://arxiv.org/abs/2307.04647v2 | 20230710154458 | A note on the induction of comonotonic additive risk measures from acceptance sets | [
"Samuel Solgon Santos",
"Marlon Ruoso Moresco",
"Marcelo Brutti Righi",
"Eduardo de Oliveira Horta"
] | q-fin.MF | [
"q-fin.MF"
] |
A note on the induction of comonotonic additive risk measures from acceptance sets
Samuel Solgon Santos, Marlon Ruoso Moresco, Marcelo Brutti Righi, Eduardo de Oliveira Horta
===========================================================================================
We present simple general conditions on the acceptance sets under which their induced monetary risk and deviation measures are comonotonic additive. We show that acceptance sets induce comonotonic additive risk measures if and only if the acceptance sets and their complements are stable under convex combinations of comonotonic random variables. A generalization of this result applies to risk measures that are additive for random variables with a priori specified dependence structures, e.g., perfectly correlated, uncorrelated, or independent random variables.
§ INTRODUCTION
The notion of risk is rooted in two fundamental concepts: the potential for adverse outcomes and the variability in expected results. Traditionally, risk has been understood as a measure of dispersion, such as variance, in line with the second concept <cit.>. However, the occurrence of critical events has brought attention to tail risk measurement, exemplified by well-known measures like Value at Risk (VaR) and Expected Shortfall (ES), which account for the possibility of extreme events, thus incorporating the first concept. <cit.> and <cit.> are remarkable references in this regard.
This study investigates the relationship between acceptance sets and risk / deviation measures, focusing on the property of comonotonic additivity. Roughly speaking, two random variables are comonotonic if the variability of one never offsets the variability of the other, that is, they move in the same direction. A financial intuition of the property of comonotonic additivity is the following: joining two comonotonic positions provides neither diversification benefits nor brings harm to the portfolio. Comonotonic additivity occupies a central place in the theory of risk measures
(seminal papers in this regard are <cit.>, <cit.>, <cit.>, and <cit.>).
Acceptance sets are criteria used by financial regulators to distinguish between permissible and impermissible positions held by financial firms. However, acceptance sets alone do not provide direct guidance on how to convert non-permissible positions into permissible ones. This is the role of risk measures, which assign extended real values to quantify the risk (usually the tail risk) of financial positions. For non-permissible financial positions, monetary risk measures indicate the minimum amount of cash addition or assets addition required to make these positions permissible. This idea goes back to <cit.>. For a review, see chapter 4 of <cit.>. On the other hand, deviation measures may not reflect tail risk, as they are designed to quantify deviation. <cit.> is a landmark work in the axiomatic study of deviation measures, and <cit.> provide a handbook treatment. In analogy to risk measures, <cit.> associated deviation measures to acceptance sets, and showed that generalized deviation measures (in the sense of <cit.>) represent how much a position must be shrunk or deleveraged for it to become acceptable. As further references on the topic, <cit.> and <cit.> studied the connection between risk, deviation measures, and premium principles. Also, <cit.> used deviation measures to define restrictions on problems of maximum entropy.
From an axiomatic point of view, the properties of a risk measure directly translate into attributes of its acceptance set. It is well known that a risk measure is law-invariant, convex, positive homogeneous, and star-shaped if and only if its acceptance set is law-invariant, convex, conic, and star-shaped. However, the literature has no correspondence for comonotonic additivity beyond an attempt in finite probability spaces from <cit.>. In fact, additivity in general was never approached, to the best of our knowledge, through the perspective of acceptance sets.
The additivity of a risk measure means that it is just as risky to have two positions added together in the same portfolio as it is to have them separated. If there were some diversification benefits in holding them together, we would require the acceptance set and the risk measure to be convex. For a discussion on convexity, see <cit.>, <cit.>, and <cit.>. If it were more risky to hold them together, the risk measure would be concave. From the perspective of acceptance sets, it translates into requiring the acceptance set's complement to be convex. Now, if the risk of two positions is the same regardless of whether they are in the same portfolio or not, then a combination of the two aforementioned concepts emerges. In this case, the risk measure should be both convex and concave, and both the acceptance set and its complement should be convex.
It is well known that the only linear risk measure is the expectation, and in this case, the above rationale holds trivially because both the acceptance set and its complement are half-spaces. However, we are interested in the additive property for random variables with specific dependence structures, such as independent, uncorrelated, and mainly, comonotonic random variables; that is, we do not require the risk measure to be additive in its whole domain, but just for specific random variables which, under some criterion, neither provide diversification benefit nor harm.
Our main results show that this connection occurs for monetary and deviation measures. While the concept of monetary and deviation measures are similar, the technical tools to obtain those results are significantly different. In fact, up until recently, there was no such thing as an acceptance set for deviation measures. <cit.> established the notion of acceptance sets for deviation measures, to which a crucial property is positive homogeneity. Since the main focus of this study is comonotonic additivity, which is a stronger property than positive homogeneity, we will exclusively consider deviation measures that satisfy the former condition. We focus on monetary risk measures in <Ref>, and deviation measures in <Ref>.
Regarding basic notation, let (Ω, ℱ, ℙ) be a probability space, let L^0 ≔ L^0(Ω, ℱ, ℙ) be the space of equivalence classes of random variables (under the a.s. relation), and let L^∞ ≔ L^∞(Ω, ℱ, ℙ) = {X ∈ L^0: ‖ X ‖_∞< +∞}, where ‖ X ‖_∞ = inf{m ∈ℝ: |X|<m} for all X ∈ L^0. Equalities and inequalities must be understood in the a.s. sense. For generality, we work on a Hausdorff topological vector space 𝒳 such that L^∞⊆𝒳⊆ L^0. The elements X ∈𝒳 represent discounted net financial payoffs. We adopt the identification ℝ≡{X ∈𝒳: X=c for some c ∈ℝ}.
For any subset A ⊆𝒳, we denote by conv(A), cone(A), and A^∁ the convex hull, conic hull, and complement of A, respectively. Also, for any two sets A,B ⊆𝒳, we denote A+B={X ∈𝒳: X=Y+Z, Y ∈ A, Z ∈ B}. It is worth noticing that if A is non-empty, then 0 ∈ cone(A).
Further, two random variables X and Y are comonotonic if
(X(ω)-X(ω'))(Y(ω)-Y(ω'))≥ 0 ℙ⊗ℙ-a.s.
The concept of comonotonicity dates back at least to <cit.>. <cit.> and <cit.> present further characterizations of comonotonic random variables.
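Since comonotonicity is used throughout, a small illustration may help. The following sketch (ours, not part of the original note) checks the defining inequality by brute force on a finite sample space.

```python
from itertools import product

def is_comonotonic(X, Y):
    """Check (X(w)-X(w'))*(Y(w)-Y(w')) >= 0 for all pairs of outcomes.

    X and Y are dicts mapping each outcome of a finite sample space to a real value.
    """
    assert X.keys() == Y.keys()
    return all((X[w] - X[v]) * (Y[w] - Y[v]) >= 0 for w, v in product(X, repeat=2))

# Example on Omega = {0, 1, 2}
X = {0: -1.0, 1: 0.5, 2: 2.0}
Y = {0: 0.0, 1: 3.0, 2: 3.0}   # moves (weakly) in the same direction as X
Z = {0: 1.0, 1: -2.0, 2: 0.0}  # moves against X between outcomes 0 and 1

print(is_comonotonic(X, Y))  # True
print(is_comonotonic(X, Z))  # False
```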
§ MONETARY RISK MEASURES
We begin with some terminology on acceptance sets and monetary risk measures.
A nonempty set 𝒜⊆𝒳 is called an acceptance set. It is a monetary acceptance set if satisfies the following:
* (Monotonicity) 𝒜 is monotone if X ∈𝒜 and X≤ Y implies Y ∈𝒜.
* (Normalization) 𝒜 is normalized if inf{m ∈ℝ:m ∈𝒜}=0.
In addition, an acceptance set may fulfill:
* (Convexity) 𝒜 is convex if λ𝒜 + (1-λ)𝒜⊆𝒜 whenever λ∈ [0,1].
We say that a set 𝒞⊆𝒳 is comonotonic convex if λ X + (1-λ)Y ∈𝒞 for every comonotonic pair X,Y ∈𝒞 and every λ∈[0,1].
A functional ρ:𝒳→ℝ∪{∞} is called a risk measure if it satisfies:
* (Monotonicity) ρ is monotone if ρ(Y)≤ρ(X) whenever X≤ Y for X,Y ∈𝒳.
* (Cash invariance) ρ is cash invariant if ρ(X+m)=ρ(X)-m for any X ∈𝒳 and m ∈ℝ.
* (Normalization) ρ is normalized if ρ(0)=0.
In addition, a functional may fulfil the following for some set C⊆:
* (Convexity) ρ is convex in C if ρ(λ X + (1-λ)Y) ≤λρ(X)+(1-λ)ρ(Y) for all λ∈ [0,1] and X,Y ∈ C.
* (Concavity) ρ is concave in C if ρ(λ X + (1-λ)Y) ≥λρ(X)+(1-λ)ρ(Y) for all λ∈ [0,1] and X,Y ∈ C.
* (Additivity) ρ is additive in C if ρ( X + Y) = ρ(X)+ρ(Y) for all X,Y ∈ C.
If C = 𝒳, we simply refer to the functional as convex, concave or additive. If ρ is convex/concave/additive for comonotonic pairs, then we say it is comonotonic convex/concave/additive.
Since 0 is comonotonic to any X ∈𝒳, it is easy to see that, if ρ is comonotonic convex, then ρ(λ X)≤λρ(X) for λ∈ [0,1] and ρ(λ X)≥λρ(X) for λ>1. Risk measures satisfying this property are called star-shaped. For theory and applications of star-shaped risk measures, see <cit.>, <cit.>, <cit.>, and <cit.>.
Let ρ be a risk measure and 𝒜 a monetary acceptance set.
* The acceptance set induced by ρ is defined as
𝒜_ρ ≔ {X ∈𝒳: ρ(X)≤ 0}.
* The risk measure induced by 𝒜 is defined as
ρ_𝒜(X) ≔ inf{m ∈ℝ: X+m ∈𝒜}, ∀ X ∈𝒳.
As shown, for instance, in <cit.>, <cit.>, and <cit.>, there exist direct links between acceptance sets and risk measures. The following relations between risk measures and acceptance sets will be used throughout the paper:
(Propositions 4.6 - <cit.>; Lemma 2.5 - <cit.>) Let ρ be a risk measure and let 𝒜 be a monetary acceptance set. Then we have the following:
* ρ(X)=ρ_𝒜_ρ(X) for all X ∈𝒳.
* { X ∈ : ρ_ (X) <0 }⊆⊆𝒜_ρ_𝒜⊆ (), where () denotes the closure of .
* If 𝒜 is convex, then ρ_𝒜 is convex. Conversely, if ρ is convex, then 𝒜_ρ is convex.
We use the following auxiliary function towards our way to this section's main result. Notice that it corresponds to the smallest upper bound for the amount of cash that can be added to some position without making it acceptable.
Let 𝒜⊆𝒳 be an acceptance set. The functional ψ_𝒜^∁ : 𝒳→ℝ∪{-∞,+∞} induced by 𝒜^∁ is defined as
ψ_𝒜^∁(X) ≔ sup{m ∈ℝ: X+m ∈𝒜^∁}, ∀ X ∈𝒳.
Let 𝒜 be a monetary acceptance set. Then ρ_𝒜(X)=ψ_𝒜^∁(X) for all X ∈𝒳.
From the monotonicity of monetary acceptance sets we have, for any X ∈, that the real sets { m ∈ : X +m ∈} and { m ∈ : X +m ∈^∁} are intervals that partition the real line. Hence, it follows that ψ_𝒜^∁(X) = sup{m ∈ℝ:X+m ∈𝒜^∁} = inf{ m ∈ : X +m ∈} = ρ_(X).
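The identity in the lemma is easy to check numerically on finite sample spaces. The following toy computation (ours, assuming a three-state sample space and the worst-case acceptance set 𝒜 = {X : X ≥ 0 a.s.}) approximates both ρ_𝒜 and ψ_{𝒜^∁} on a grid of cash additions, recovering ρ_𝒜(X) = -min X in this example.

```python
import numpy as np

# Finite sample space: a payoff is just a vector of outcomes.
X = np.array([-2.0, 1.0, 3.0])

def accepted(Y):
    """Worst-case monetary acceptance set: A = {Y : Y >= 0 in every state}."""
    return np.all(Y >= 0)

# rho_A(X) = inf{m : X + m in A},  psi_{A^c}(X) = sup{m : X + m not in A}
grid = np.linspace(-10, 10, 20001)
rho = min(m for m in grid if accepted(X + m))
psi = max(m for m in grid if not accepted(X + m))

print(rho, psi)              # both approximately 2.0 = -min(X)
assert abs(rho - psi) < 2e-3
```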
The next result gives us sufficient conditions to induce convex, concave and additive risk measures. As formally stated in <Ref>, a set C⊆𝒳 is stable under scalar addition if C+ℝ =C.
Let 𝒜 be a monetary acceptance set and C ⊆ be stable under scalar addition.
* If 𝒜∩ C is convex, then ρ_𝒜 is convex in C.
* If 𝒜^∁∩ C is convex, then ρ_𝒜 is concave in C.
* If 𝒜^∁∩ C and 𝒜∩ C are convex, then ρ_𝒜 is additive in C.
Furthermore, the converse implications hold if 𝒜 is closed and C is convex.
For <Ref>, let X, Y ∈ C and note that there is x , y ∈ such that X+x ∈ and Y + y ∈. As C is stable under scalar addition, it also holds that X + x ∈ C for any x ∈, and similarly for Y+y. Consequently, the convexity of ∩ C implies that λ (X+x) + (1-λ)(Y+y) ∈ for any λ∈ [0,1]. Therefore, ρ_ ( λ(X+x) + (1-λ)(Y+y)) ≤ 0, and the cash invariance of ρ_ implies ρ_ ( λ X+ (1-λ)Y) ≤λ x + (1-λ) y. Then, taking the infimum over x and y yields
ρ_ ( λ X+ (1-λ)Y) ≤λρ_(X) + (1-λ) ρ_ (Y).
Regarding <Ref>, take X, Y ∈ C and notice that, there is x , y ∈ such that X+x ∈^∁ and Y + y ∈^∁. Therefore, the convexity of ^∁∩ C implies that λ (X+x) + (1-λ)(Y+y) ∈^∁ for any λ∈ [0,1]. Hence we have ρ_ ( λ(X+x) + (1-λ)(Y+y)) > 0, so the cash invariance of ρ_ implies ρ_ ( λ X+ (1-λ)Y) > λ x + (1-λ) y. Then, taking a supremum over x and y and using <Ref> yields
ρ_ ( λ X+ (1-λ)Y) ≥λϕ_^∁(X) + (1-λ) ϕ_^∁ (Y)=λρ_(X) + (1-λ) ρ_ (Y).
For <Ref>, recall that under normalization, a map is linear if and only if it is both convex and concave. Hence, the claim is a direct consequence of the previous items.
When 𝒜 is closed, the converse of <Ref> is straightforward by <Ref>, <Ref>. For <Ref>, take X, Y ∈^∁∩ C. Since 𝒜 is closed, it holds that 𝒜=𝒜_ρ_𝒜 (<Ref> of <Ref>), which implies ρ_ (X) >0 and ρ_ (Y)>0. Concavity of ρ_ in C implies ρ_ ( λ X+ (1-λ)Y) ≥λρ_(X) + (1-λ) ρ_ (Y) > 0, whence we conclude that λ X+ (1-λ)Y ∈^∁_𝒜_ρ = ^∁. Additionally, it also belongs to C as it is a convex set. Finally, <Ref> follows by the previous items.
Examples of sets C ⊆𝒳 that fulfill the hypothesis of the above theorem are, for a given, fixed, X ∈𝒳: the class of random variables independent of X, namely C^ind_X = {Y ∈𝒳:X and Y are independent}, the set of random variables uncorrelated with X, that is C^uncor_X {Y ∈𝒳:Cov(X,Y)=0}, and the set of affine transformations of X, namely C^cov_X {Y ∈𝒳:Cov(X,Y)=1}. As an application of <Ref>, notice that, if C^ind_X ∩𝒜 and C^ind_X ∩𝒜^∁ are convex for all X ∈𝒳, then ρ_𝒜 is additive for independent random variables. This closely relates to the literature on additive risk measures and premium principles (see, for instance, <cit.>, and <cit.>).
The preceding reasoning and results yield comonotonic additivity of ρ whenever 𝒜 and 𝒜^∁ are both convex for comonotonic pairs; this is the content of the main theorem in this section. We now show a result that relates comonotonic variables to the needed assumptions. To this end, we will denote by C_X ≔ {Y ∈𝒳: Y is comonotonic to X} the set of all random variables that are comonotonic to X ∈𝒳.
Let X∈. The following holds:
* C_X is a convex cone that is closed with respect to the topology of convergence in probability.
* If X,Y is a comonotonic pair, then any two elements of the convex cone C_X,Y ≔ cone(conv({X}∪{Y})) are comonotonic to one another.
* Additionally, if neither X nor Y is constant, then C_X,Y∩ℝ = {0}.
In what follows, all equalities and inequalities are in the ℙ⊗ℙ-almost sure sense, that is, they hold for any pair (ω,ω') lying in an event Ω_1⊆Ω×Ω having total ℙ⊗ℙ measure. Ω_1 can be taken as the countable intersection of the events where the required inequalities (for any pairing of X, Y, Y_n, Z and W) hold.
We start proving <Ref>. To see that C_X is a cone, note that for any Y ∈ C_X we have, by definition,
(X(ω) - X(ω')) (Y(ω)-Y(ω') ) ≥ 0,
for any (ω,ω')∈Ω_1. Hence, for any λ≥ 0 and (ω,ω')∈Ω_1,
(X(ω) - X(ω')) (λ Y(ω)- λ Y(ω') )= λ(X(ω) - X(ω')) (Y (ω)- Y (ω') ) ≥ 0,
yielding λ Y ∈ C_X. For convexity, let Y,Z ∈ C_X. Then, for λ∈ [0,1] we have that,
[ X(ω) - X(ω') ] [ (λ Y(ω) + (1-λ) Z(ω)) - (λ Y(ω') + (1-λ) Z(ω')) ]
= λ[ X(ω) - X(ω')] [ Y(ω)-Y(ω') ] + (1-λ) [ X(ω) - X(ω')] [ Z(ω)-Z(ω') ] ≥ 0
whenever (ω,ω')∈Ω_1. To see that C_X is closed in the asserted sense, consider a convergent sequence {Y_n}⊆ C_X with Y_n → Y in probability. By standard facts of measure theory, there is a subsequence {Y_n(k)} such that Y_n(k)→ Y almost surely. Clearly, this yields that Y is comonotonic to X.
For <Ref>, let Z,W ∈ C_X,Y. By definition we have
Z = γ_1 (λ_1 X) + (1-γ_1)(δ_1 Y)
for some triplet (γ_1, λ_1, δ_1) with 0≤γ_1≤1 and 0≤λ_1,δ_1, and similarly
W = γ_2 (λ_2 X) + (1-γ_2)(δ_2 Y)
for some triplet (γ_2, λ_2, δ_2) with 0≤γ_2≤1 and 0≤λ_2,δ_2. Then, for (ω,ω')∈Ω_1, expanding the product
(Z(ω) - Z(ω'))(W(ω) - W(ω'))
yields a weighted sum whose terms are all non-negative.
For the last item, it is enough to verify that the additive combination of non-constants comonotonic random variables can not be constant. As X is non-constant, then there is ω,ω' ∈Ω such that X(ω) <X(ω') and comonotonicity implies Y(ω) ≤ Y(ω'). Therefore, for any α,β > 0 it holds that (α X + β Y)(ω) = α X (ω) + β Y (ω) < α X (ω') + β Y (ω) ≤ (α X + β Y)(ω'). Hence, α X + β Y is not constant.
Note that the set C ≔ ⋂_Y ∈ C_X C_Y
is a non-empty, closed, and convex set such that all its elements are comonotonic to one another. In particular, ℝ⊆ C and C + ℝ = C.
We are now in a position to prove the main result of this section.
Let 𝒜 be a monetary acceptance set and ρ a risk measure. Then we have the following:
* If 𝒜 and 𝒜^∁ are comonotonic convex, then ρ_𝒜 is comonotonic additive. The converse implication holds if 𝒜 is closed.
* The risk measure ρ is comonotonic additive if and only if 𝒜_ρ and 𝒜_ρ^∁ are comonotonic convex.
For the first part of <Ref>, let X and Y be a comonotonic pair. By <Ref>, all elements in cone(conv({X}∪{Y})) are comonotonic to each other. This implies, in light of the comonotonic convexity of 𝒜 and 𝒜^∁, that 𝒜∩ C_X,Y and 𝒜^∁∩ C_X,Y are convex sets. Since cone(conv({X}∪{Y})) + ℝ is stable under scalar addition, the result follows from <Ref>. The converse of <Ref> follows directly from the converse of <Ref> of <Ref>
Regarding the “only if" part of <Ref>, we will show that 𝒜_ρ^∁ is comonotonic convex. A similar argument also holds for 𝒜_ρ. Take a comonotonic pair X,Y ∈_ρ^∁. We need to show that λ X + (1-λ)Y ∈_ρ^∁ for any λ∈ [0,1]. But ρ (λ X + (1-λ) Y ) = λρ (X) + (1-λ) ρ(Y) > 0, which concludes the proof. The converse direction follows directly from <Ref> and the fact that ρ=ρ_𝒜_ρ.
§ DEVIATION MEASURES
For deviations, a similar line of reasoning applies as for monetary risk measures but with distinct technical machinery. To explore this further, we introduce additional properties that comprise the basic setup to study Minkowski deviation measures.
An acceptance set 𝒜 is a Minkowski acceptance set if it satisfies the following:
* (Star-shapedness) 𝒜 is star-shaped if λ X∈𝒜, for every X ∈𝒜 and λ∈ [0,1].
* (Stability under scalar addition) 𝒜 is stable under scalar addition if 𝒜 + = 𝒜, that is, if X + c ∈𝒜, for all X ∈𝒜 and c ∈.
* (Radial boundedness at non-constants) 𝒜 is radially bounded at non-constants if, for every X ∈𝒜\, there is some δ_X ∈ (0 , ∞), such that δ X ∉𝒜 whenever δ∈ [δ_X , ∞).
For a functional 𝒟→ [0,+∞] we define its sub-level sets of the form 𝒟{X∈ 𝒟(X)≤ 1}. Further, 𝒟 is a deviation measure if it fulfils:
* (Non-negativity) 𝒟 is non-negative if 𝒟 (X) > 0 for any non-constant X∈𝒳 and 𝒟(X) = 0 for any constant X ∈𝒳.
* (Translation insensitivity) 𝒟 is translation insensitive if 𝒟 (X + c) = 𝒟 (X) for any X∈ and c ∈.
* (Positive homogeneity) 𝒟 is positive homogeneous if 𝒟(λ X) = λ𝒟(X) for any X∈ and λ≥ 0.
A deviation measure may also satisfy the properties in <Ref>.
We now define the Minkowski Deviation, introduced in <cit.>, which is the main tool used in this section. A financial interpretation is that such a map indicates how much we should shrink (or “gauge”) a certain position for it to become acceptable.
Let 𝒜⊆𝒳. The Minkowski Deviation of 𝒜 is the functional _𝒜: 𝒳→ [0,+∞] defined, for X∈𝒳, by
_𝒜(X) ≔ inf{m > 0: m^-1X∈𝒜},
where inf∅ = +∞.
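For concreteness, the sketch below (ours; the acceptance set is a toy choice, not one from the note) approximates the Minkowski Deviation on the binary sample space Ω = {0,1}, identified with ℝ^2, by scanning over m.

```python
import numpy as np

def minkowski_deviation(X, accept, m_max=100.0, step=1e-3):
    """Approximate inf{m > 0 : X/m in A} by scanning m on a grid.

    `accept` is a predicate describing the set A; returns +inf if no m <= m_max works.
    """
    for m in np.arange(step, m_max, step):
        if accept(X / m):
            return m
    return float("inf")

# Binary sample space Omega = {0,1}, payoffs identified with points of R^2.
# Toy set: A = {(x, y) : |x| + |y| <= 1} (star-shaped, contains 0).
accept = lambda v: abs(v[0]) + abs(v[1]) <= 1.0

print(minkowski_deviation(np.array([2.0, 1.0]), accept))   # about 3.0
print(minkowski_deviation(np.array([0.2, 0.3]), accept))   # about 0.5
```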
In analogy to <Ref>, the next lemma relates acceptance sets to Minkowski deviations.
Let 𝒟 be a deviation measure and let be a Minkowski acceptance set. Then we have the following:
* 𝒟(X) = _1(X) for all X ∈
* { X ∈ : (X) <1 }⊆⊆𝒜_⊆ (), where () is the closure of .
* If 𝒜 is convex, then is convex. Conversely, if 𝒟 is convex, then 𝒜_ is convex.
* is a deviation measure and 𝒜_ is a Minkowski acceptance set.
Now, we turn our focus to the main results of this section. Similarly to what we did in the previous section, we define an auxiliary map, which represents the most we can shrink a position while keeping it non-acceptable.
The cogauge of 𝒜^∁ is the functional _𝒜^∁→ [0,+∞] defined, for X∈, by
_𝒜^∁ (X) sup{m ∈_+^* m^-1X∈𝒜^∁},
where sup∅ = 0.
We have the following relation between gauge and co-gauge.
(Corollary C.8. of <cit.>)
Let 𝒜⊆ be star-shaped. Then (X) = _𝒜^∁ (X)
holds for all X∈.
We now prove a result regarding (sub/super) additivity of deviation measures, which will be very useful for the main result.
Let 𝒜⊆ be a Minkowski acceptance set and 𝒟 a deviation measure. Then we have that:
* If 𝒜 is convex, then is sub-linear (convex and positive homogeneous).
* If 𝒜^∁ is convex, then is super-linear (concave and positive homogeneous) on (𝒜^∁), that is, (X + Y ) ≥(X) + (Y) for any X,Y ∈ (𝒜^∁).
* If C ⊆ (𝒜^∁) is a cone for which both 𝒜∩ C and 𝒜^∁∩ C are convex sets, then respects (X + Y) = (X) + (Y) for every X,Y ∈ C.
* If 𝒟 is additive in some convex cone C, then k∩ C and (k)^∁∩ C are convex sets.
<Ref> follows from <Ref>. For <Ref>, we already have positive homogeneity from <Ref> <Ref>. The star-shapedness of 𝒜 and <Ref> tells us that = _𝒜^∁. Hence, it suffices to show that _𝒜^∁ is a concave functional on (𝒜^∁) whenever 𝒜^∁ is convex. To see that this is the case, let B = 𝒜^∁, and fix λ∈[0,1] and X,Y∈ (𝒜^∁).
Let us first consider the case where 0<λ<1 and where both X and Y are nonzero. In this scenario, the sets
𝔄{α∈_+^* λ X∈α B}
and
𝔅{β∈_+^* (1-λ)Y∈β B}
are both non-empty (for instance, X∈(B) means precisely that X = aZ for some a>0 and some non-zero Z∈ B, and in this case we have λ a ∈𝔄). The positive homogeneity of together with the equality = _B, implies that sup𝔄 = _B(λ X) = λ_B(X) and sup𝔅 = _B((1-λ)Y) = (1-λ)_B(Y). Taking α∈𝔄 and β∈𝔅, convexity of B yields λ X+(1-λ)Y ∈ (α + β)B, so _B (λ X + (1-λ)Y) ≥α + β. Therefore, _B (λ X + (1-λ)Y) ≥sup𝔄 + sup𝔅 = λ_B (X) + (1-λ)_B (Y). The remaining cases are just a matter of adapting the following argument: if, say, λ X = 0, then 𝔄=∅ and _B(λ X + (1-λ Y)) = _B((1-λ)Y) = (1-λ)_B(Y) = λ_B(X) + (1-λ)_B(Y).
Regarding <Ref>, let g be the restriction of to the cone C, i.e., g C → [0,∞] is such that g (X) = (X) = max((X), _C(X)) = _𝒜∩ C (X) for all X ∈ C. It suffices to show that g is additive; we shall proceed by showing that this function is concave and sub-linear. Sub-linearity of g is yielded as 𝒜∩ C is a convex set containing the origin by assumption (see Theorem 3.2 in <cit.> – item (v)). Therefore _𝒜∩ C is sub-linear on the whole , in particular when restricted to C. For concavity, we shall summon the cogauge to help us: as 𝒜 is a star-shaped set, the gauge coincides with the cogauge of its complement, i.e., = _𝒜^∁ — see <Ref>. It follows that, for X∈ C, one has g (X) = _𝒜^∁ (X). We now show that, for X∈ C, the identity _𝒜^∁ (X) = _𝒜^∁∩ C (X) holds.
As C^∁∪{0} is a cone and any cone is star-shaped, 𝒜∪ C^∁ is star-shaped, then we have ∀ X ∈𝒳 that
_𝒜^∁∩ C (X) = _(𝒜∪ C^∁)^∁ (X)
= _𝒜∪ C^∁ (X)
= min((X) , _C^∁ (X) )
= min(_𝒜^∁(X),_C(X)).
In particular, g = _𝒜^∁∩ C on C, as _C (X) = ∞ = _C^∁ (X) if X ∈ C and _C (X) = 0=_C^∁ (X) if X ∉ C. Now, the only thing that is left to show is that the cogauge of a convex set is a concave function on C. This claim follows from <Ref> as it tells us that _𝒜^∁∩ C is concave on C⊆ (𝒜^∁).
For <Ref>, note that the restriction of 𝒟 to C is both convex and concave. Therefore, the convexity of both k∩ C and (k)^∁∩ C follows from Theorem 3.7 in <cit.> – item (v) and <Ref>.
<Ref> in the above Theorem can easily be relaxed to the following: if 𝒟 is sub-(super-)additive in some convex cone C, then k∩ C ((k)^∁∩ C, respectively) is a convex set. Unfortunately, <Ref> of <Ref> cannot be relaxed so as to accommodate the superlinearity of on the whole domain. Consider the following counterexample, illustrated in <Ref>:
let Ω = {0,1} be the binary market and identify L^0≡ℝ^2 as usual. Let 𝒜 ≔ {(x,y)∈ℝ^2: y-|x| ≤ 1}. In this case, the set C ≔ 𝒜∖ cone(𝒜^∁) is a cone and hence the Minkowski Deviation vanishes for any X ∈ C, whereas it is strictly positive for X∉ C. We denote, respectively, by int C and ∂C the interior and boundary of C. Now let Y = (1,1/2)∈ int C, Z = (1,1)∈∂C and W = (1,2)∈𝒜. We have
that the Minkowski Deviation of Z equals 0 while that of W is positive, but Z is a convex combination of W and Y, so the Minkowski Deviation is not concave on the whole domain.
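The counterexample can be verified numerically with the same grid approximation of the Minkowski Deviation used in the earlier sketch (again a toy illustration of ours).

```python
import numpy as np

def minkowski_deviation(X, accept, m_max=100.0, step=1e-3):
    """Grid approximation of inf{m > 0 : X/m in A}; values near `step` mean 0."""
    for m in np.arange(step, m_max, step):
        if accept(X / m):
            return m
    return float("inf")

# Binary market: A = {(x, y) : y - |x| <= 1}
accept = lambda v: v[1] - abs(v[0]) <= 1.0

Y, Z, W = np.array([1.0, 0.5]), np.array([1.0, 1.0]), np.array([1.0, 2.0])
gY = minkowski_deviation(Y, accept)   # ~0 (grid lower bound)
gZ = minkowski_deviation(Z, accept)   # ~0
gW = minkowski_deviation(W, accept)   # ~1

# Z = (1/3) W + (2/3) Y, yet gZ < (1/3) gW + (2/3) gY: concavity fails globally.
print(gY, gZ, gW)
assert np.allclose(Z, W / 3 + 2 * Y / 3)
assert gZ < gW / 3 + 2 * gY / 3
```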
However, if we are willing to abandon the identity = _𝒜^∁, it is possible to define the cogauge in a slightly different way by assigning the value _B(X) -∞ whenever {m ∈_+ m^-1 X ∈ B} = ∅; in this case, an easy adaptation yields the concavity of _B for convex B.
We are now in condition to prove the main result in this section.
We have the following:
* Consider an acceptance set 𝒜⊆ being radially bounded at non-constants, stable under scalar addition, and assume both 𝒜 and 𝒜^∁ be comonotonic convex. Then 𝒜 is star-shaped and a comonotonic additive deviation measure.
* Let 𝒟 be a deviation measure that is comonotonic additive. Then both k and k^∁ are comonotonic convex.
For <Ref>, the star-shapedness of 𝒜 follows for 0 is comonotonic to any X ∈𝒜 and, by assumption, 𝒜 is convex for this pair. Therefore, λ X≡λ X + (1-λ) 0 ∈𝒜 for all X ∈𝒜 and any 0≤λ≤1, which establishes star-shapedness. Furthermore, as 𝒜 is radially bounded at non-constants, it follows that (𝒜^∁) = ( ∖) ∪{0} and so any cone with no constants that we may take is contained in (𝒜^∁). Now let X and Y be a comonotonic pair of non-constants. Note that any two members of the set C_X,Y = ( ({X}∪{Y} )) are comonotonic to one another and the only constant in C_X,Y is 0 (see <Ref>). Now, if we take any Z,W ∈ C_X,Y∩𝒜, as they are a comonotonic pair, by assumption we have that λ Z + (1-λ)W ∈ C_X,Y∩𝒜, ∀λ∈ [0,1]. Hence, C_X,Y∩𝒜 is a convex set. The same argument shows that C_X,Y∩𝒜^∁ is also convex. Thus, by <Ref>, we have that (X+Y) = (X) + (Y). To conclude the first item, notice that is a deviation measure because 𝒜⊆ is a Minkowski acceptance set (<Ref>).
For <Ref>, let X,Y be a comonotonic pair. Due to Lemma <ref>, the set C_X,Y
is a convex cone whose members are all comonotonic to one another, and 𝒟 is additive on C_X,Y. By <Ref> <Ref>, the sets k∩ C_X,Y and (k)^∁∩ C_X,Y are both convex. In particular, if Z is any convex combination of X and Y, then Z∈k∩ C_X,Y⊆k whenever X,Y ∈k, and similarly Z∈ (k)^∁ whenever X,Y ∈ (k)^∁.
If the conditions above are imposed only on 𝒜 (and not necessarily on 𝒜^∁), then we have that is comonotonic convex. Similarly, if we only impose those conditions on 𝒜^∁, then the resulting is comonotonic concave. The converse implications also hold. As an example of a set 𝒜 satisfying the assumptions in the theorem, take Ω = {0,1}, identify L^0≡^2, and let 𝒜 be the set of those X=(u,v)∈^2 for which u≥0, v≥0 and | u| + | v|≤ 1. In this case, the set of comonotonic pairs in the 1st quadrant is precisely {(u,v)∈_+^2 u≥ v}.
|
http://arxiv.org/abs/2307.07376v2 | 20230714143614 | Bounds on Fourier coefficients and global sup-norms for Siegel cusp forms of degree 2 | [
"Félicien Comtat",
"Jolanta Marzec-Ballesteros",
"Abhishek Saha"
] | math.NT | [
"math.NT"
] |
Let F be an L^2-normalized Siegel cusp form for Sp_4(ℤ) of weight k that is a Hecke eigenform and not a Saito–Kurokawa lift. Assuming the Generalized Riemann Hypothesis, we prove that its Fourier coefficients satisfy the bound |a(F,S)| ≪_ε k^{1/4+ε} (4π)^k/Γ(k) c(S)^{-1/2} det(S)^{(k-1)/2+ε}, where c(S) denotes the gcd of the entries of S, and that its global sup-norm satisfies the bound ‖(det Y)^{k/2}F‖_∞≪_ϵ k^{5/4+ϵ}. The former result depends on new bounds that we establish for the relevant local integrals appearing in the refined global Gan–Gross–Prasad conjecture (which is now a theorem due to Furusawa and Morimoto) for Bessel periods.
Bounds on Fourier coefficients and global sup-norms for Siegel cusp forms of degree 2
Félicien Comtat, Jolanta Marzec-Ballesteros, Abhishek Saha
=====================================================================================
§ INTRODUCTION
The problem of bounding the sup-norms of L^2-normalized cuspidal automorphic forms as one or more of their underlying parameters tend to infinity is interesting from several points of view and has been the topic of many recent works, see e.g. <cit.> and the references therein.
A basic case of this problem concerns upper-bounds for the global sup-norms of holomorphic cusp forms f of weight k for _2() as k →∞. Assuming that f is an eigenform, Xia <cit.> proved the bound
‖y^{k/2}f‖_∞≪_ε k^{1/4+ε}‖f‖_2.
The exponent 1/4 here is optimal, as one can prove a lower bound of similar strength[The lower bound of k^{1/4-ε} is obtained high in the cusp. A variant of the sup-norm problem focusses not on global bounds but on bounds over a fixed compact set Ω, where it is an open problem to improve upon the exponent 1/4. Indeed, the much stronger upper bound ‖y^{k/2}f|_Ω‖_∞≪_ε k^{ε} is expected to hold.]. The proof uses the Fourier expansion f(z) = ∑_{n>0} a(f, n) e^{2π i nz} and crucially relies on Deligne's bound
|a(f, n)| ≤ d(n) n^{(k-1)/2} |a(f, 1)|
for the Fourier coefficients of f, as well as the bound |a(f, 1)|/‖f‖_2≪_ε k^{ε}, which follows from the non-existence of Landau–Siegel zeroes <cit.> for the symmetric-square L-function attached to f. Given these deep facts, the deduction of (<ref>) from the Fourier expansion is fairly direct.
In this paper we are interested in a rank 2 analogue of Xia's result. Namely, let _̋2 denote the Siegel upper-half space of degree 2 and let S_k(Γ)
be the space of holomorphic Siegel cusp forms
of weight k transforming with respect to the subgroup Γ = _4() ⊂_4().
As in the rank 1 case considered by Xia, one may hope to exploit the Fourier expansion and obtain strong bounds on the sup-norm in this setting.
Recall that the Fourier expansion of F ∈ S_k(Γ) takes the form
F(Z)=∑_S∈Λ_2 a(F, S)e^2π i Tr(SZ), Z ∈_̋2,
where
Λ_2 = {[[ a b/2; b/2 c ]]: a,b,c∈ℤ, a>0, d:=b^2 - 4ac <0}.
For S = [[ a b/2; b/2 c ]] ∈Λ_2, we define its discriminant disc(S)=-4 det(S)=b^2-4ac and its content c(S) = gcd(a, b, c). Note that c(S)^2 divides disc(S). If d = disc(S) is a fundamental discriminant[Recall that an integer n is a fundamental discriminant if either n is a squarefree integer congruent to 1 modulo 4 or n = 4m where m is a squarefree integer congruent to 2 or 3 modulo 4.], then S is called fundamental, in which case clearly c(S)=1.
The Fourier coefficients of Hecke eigenforms in S_k(Γ) are mysterious and poorly understood objects. Unlike in the rank 1 case, they contain much more information than just the Hecke eigenvalues and are closely related to central values of L-functions. However, there exist special forms in S_k(Γ)
known as the Saito-Kurokawa lifts for which the Fourier coefficients are relatively better understood. In fact, the Fourier coefficients of a Saito-Kurokawa lift F can be
explicitly written in terms of the Fourier coefficients of a classical half-integral weight form g ∈ S_k-1/2(Γ_0(4)) and there exists a simple relation between the Petersson norms of F and g. Using these facts, Blomer <cit.> observed that if a Hecke eigenform F is a Saito–Kurokawa lift, then under the Generalized Lindelöf hypothesis (GLH) one has the following bound[See also <cit.> for an extension to the case of Saito–Kurokawa lifts with square-free level, where the dependence of the implied constant on F is not made explicit.] on the Fourier coefficients
|a(F,S)|/‖F‖_2 ≪_ε k^{1/4+ε} (4π)^k/Γ(k) c(S)^{1/2} det(S)^{k/2 - 3/4 + ε}
where the Petersson norm F_2 is defined via
‖F‖_2^2 = ⟨ F, F ⟩ = ∫_{Γ\ℋ_2} |F(Z)|^2 (det Y)^{k-3} dX dY.
Using the bound (<ref>), Blomer obtained under GLH the following essentially optimal bound <cit.> on the sup-norm of a Hecke eigenform F∈ S_k(Γ) that is a Saito–Kurokawa lift:
‖(det Y)^{k/2}F‖_∞≪_ϵ k^{3/4+ϵ}‖F‖_2.
The main obstacle to generalizing (<ref>) to non-Saito–Kurokawa lifts and obtaining a global sup-norm bound for any Hecke eigenform F ∈ S_k(Γ) lies in obtaining a bound similar to (<ref>) for non-lifts. A key step for fundamental S was taken in <cit.> where weighted averages of fundamental Fourier coefficents for such F were related via the refined Gan–Gross–Prasad (GGP) period conjecture for (_5, _2) (which is now a theorem due to Furusawa and Morimoto <cit.>) to values of higher degree L-functions. Using this, a bound under GRH for the fundamental Fourier coefficients was proved in <cit.>; see also <cit.> for a related result which saves an additional power of log((S)) but where the implied constant depends on F.
A main achievement of the present paper is to go beyond fundamental matrices and obtain a uniform bound for |a(F,S)|/F_2 under GRH for all S ∈Λ_2. To lay the groundwork for our theorem, recall first that
a(F, S)=a(F, {}^tA S A)
for A∈SL_2(ℤ), i.e., the Fourier coefficient a(F, S) depends only on the SL_2(ℤ)-equivalence class of S. Let D<0 be congruent to 0 or 1 mod 4 and let L be a positive integer. The set of SL_2(ℤ)-equivalence classes of matrices S ∈Λ_2 whose content c(S) equals L and whose discriminant disc(S) equals L^2D can be canonically identified with the class group H_D of the imaginary quadratic order of discriminant D. We view the characters Λ of the finite abelian group H_D as Hecke characters of K^×\𝔸_K^× where K = ℚ(√(D)). Note that in the special case that D is a fundamental discriminant, these are precisely the characters of the ideal class group. We prove the following theorem.
Let F ∈ S_k(Γ) be a Hecke eigenform with Fourier expansion given by (<ref>). Assume that F is not a Saito–Kurokawa lift and let π be the automorphic representation generated by F. Let D<0 be an integer that is congruent to 0 or 1 mod 4 and let L be a positive integer. Then
∑_{S ∈Λ_2 / SL_2(ℤ), c(S)=L, disc(S)=L^2 D} |a(F,S)|^2 ≪_ε ⟨ F, F ⟩ (4π)^{2k}/Γ(2k-1) L^{2k-3+ε} |D|^{k - 3/2 + ε} ∑_{Λ∈Ĥ_D} L(1/2, π×AI(Λ))/L(1, π, Ad).
In the above theorem, we note that the L-values are known to be non-negative <cit.> and that the length of the sum on each side is equal to |H_D| ≍ |D|^{1/2 + o(1)}. Under GRH[Strictly speaking, all we need is GLH and a sufficiently strong zero-free region for L(s, π, Ad).], we can bound L(1/2, π×AI(Λ))/L(1, π, Ad) ≪_ε (kD)^{ε}. Recalling that 4 det(S) = |disc(S)| = L^2|D| and using the duplication formula for the Gamma function, we obtain under GRH the strong bound (Corollary <ref>)
∑_{S ∈Λ_2 / SL_2(ℤ), c(S)=L, disc(S)=L^2D} |a(F,S)|^2/‖F‖^2_2 ≪_ε k^{1/2+ε} (2π)^{2k}/Γ(k)^2 L^{-1} |L^2D|^{k-1 + ε}.
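For the reader's convenience, here is a sketch of the elementary step just described (our reconstruction; it assumes the GRH bound on the L-value ratio quoted above and the class number bound |H_D| ≪_ε |D|^{1/2+ε}):

(4π)^{2k}/Γ(2k-1) · L^{2k-3+ε} |D|^{k-3/2+ε} · |D|^{1/2+ε} (k|D|)^{ε} ≪_ε k^{1/2+ε} (2π)^{2k}/Γ(k)^2 · L^{-1} |L^2 D|^{k-1+ε},

since the duplication formula gives Γ(2k-1) = π^{-1/2} 2^{2k-2} Γ(k-1/2) Γ(k), so that (4π)^{2k}/Γ(2k-1) = 4√π (2π)^{2k}/(Γ(k-1/2)Γ(k)) ≪ k^{1/2} (2π)^{2k}/Γ(k)^2, while L^{2k-3}|D|^{k-1} = L^{-1}(L^2|D|)^{k-1}.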
The bound (<ref>) may be viewed as an extension of the bound (<ref>) to non-Saito–Kurokawa lifts[A key point here is that for Saito–Kurokawa lifts F, a(F,S) depends only on c(S) and disc(S), and so for the corresponding sum in the case of the Saito–Kurokawa lifts all terms on the left side are equal. This is not true for non-Saito–Kurokawa lifts.]. We believe that (<ref>) is optimal as far as the exponents on the right side are concerned. Assuming that most summands on the left side of (<ref>) are of comparable size, one is led to the optimistic and far-reaching conjecture |a(F,S)|/‖F‖_2 ≪_ε k^{1/4+ε} (4π)^k/Γ(k) det(S)^{k/2-3/4+ε} for individual Fourier coefficients, which refines the famous open conjecture of Resnikoff and Saldaña <cit.> (and whose proof seems well beyond reach even if one were to assume standard conjectures like GRH).
On the other hand, dropping all but one term from (<ref>), we obtain the following corollary.
Assume GRH. For a Hecke eigenform F ∈ S_k(Γ) that is not a Saito–Kurokawa lift, we have for any S ∈Λ_2 the bound
|a(F,S)|/‖F‖_2 ≪_ε k^{1/4+ε} (4π)^k/Γ(k) c(S)^{-1/2} det(S)^{(k-1)/2+ε}.
In contrast to (<ref>), the bound (<ref>) is not expected to be optimal (even though it assumes GRH) because we potentially lose a factor of (S)^1/4/c(S)^1/2 when we drop all the other terms. In the body of the paper we do not assume GRH but instead assume that L(1/2, π×(Λ))/L(1, π, ) is bounded by a specific power of the analytic conductor and we write down analogous bounds to (<ref>), (<ref>) under this assumption; see Corollary (<ref>).
Using our bound (<ref>), we obtain a global sup-norm bound for non Saito–Kurokawa lifts F ∈ S_k(Γ) under GRH.
Assume GRH. Let F ∈ S_k(Γ) be a Hecke eigenform that is not a Saito–Kurokawa lift. Then
‖(det Y)^{k/2}F‖_∞≪_ϵ k^{5/4+ϵ}‖F‖_2.
The reason the exponent 5/4 in the above Theorem is weaker than the exponent 3/4 proved by Blomer for Saito–Kurokawa lifts is the non-optimality of the bound (<ref>). (We expect the true exponent for the sup-norm to be 3/4 for non-lifts[This is a special case of Conjecture 1.1 of <cit.> and is supported by heuristics of the Bergman kernel as well as the fact that one can prove a lower bound of k^3/4- for many non-lifts using a method similar to <cit.>.] as well).
The proof of Theorem <ref> follows from (<ref>) in a relatively straightforward manner via the Fourier expansion, similar to the analysis in <cit.>. We remark here that (as is often the case) the Fourier expansion gives better results near the cusp.
Indeed, our proof shows that if Y is Minkowski-reduced, then we have under GRH the stronger bound (det Y)^{k/2}|F(Z)| ≪_ϵ (det Y)^{-1/4} k^{5/4+ϵ}‖F‖_2.
In particular, when Z is “high in the cusp", this provides extra power savings in k.
Thus, one may hope to obtain improved sup-norm bounds if one can tackle the “bulk" by different methods.
While we have focussed on the Fourier expansion approach towards the global sup-norm in this paper, alternative approaches toward Theorem <ref> exist. For example, using a Bergman Kernel approach, Das–Krishna <cit.> have proved the bound ( Y)^k/2F_∞≪_ϵ k^9/4+ϵF_2. There is also an exciting new approach to the sup-norm problem via 4th moments and theta kernels introduced by Steiner et al <cit.> which, if implemented for Siegel cusp forms, could potentially lead to strong bounds.
We end this introduction with a few words about the proof of Theorem <ref>, which should be viewed as the central result of this paper. The reader may be tempted to try to derive Theorem <ref> (or the stronger Theorem <ref>) from its known special case <cit.> for D a fundamental discriminant, by using the Hecke relations between fundamental and non-fundamental coefficients. However, this approach appears not to work. To see why, suppose that we know the values of all the fundamental Fourier coefficients and all the Hecke eigenvalues of F. Using the Hecke relations which can be expressed compactly via Sugano's formula as in <cit.> we can then evaluate sums like ∑_S ∈Λ_2 / _2()
π^-1(L^-1 S) = c a(F,S) with c(S)=L, (S)=L^2 D and c a fixed class in H_d where d is the fundamental discriminant attached to D, and π: H_d → H_D is the natural map. However, it does not seem possible to tell apart the individual summands above, i.e., this method cannot separate two non-equivalent, non-fundamental coefficients a(F, S_1) and a(F, S_2) of equal content L in the case that L^-1S_1 and L^-1S_2 are images of the same fundamental coefficient class under the natural map from H_d → H_D.
Therefore we develop a different method for proving Theorem <ref>. We build upon the explicit refinement of Böcherer's conjecture introduced in <cit.> and incorporate characters Λ of H_D which in general correspond to ramified Hecke characters of K=(√(d)). (In <cit.> we had restricted ourselves to ideal class group characters, i.e., characters of H_d; these correspond to unramified Hecke characters of K.) Using the refined GGP conjecture proved in <cit.>, we write the left hand side of (<ref>) as a deformation of the right hand side, where each summand is multiplied by a product over primes p|D of certain local quantities related to local Bessel models for ramified characters. Bounding these purely local quantities constitute the technical heart of this paper. This may be viewed as a depth aspect bound for the local Bessel functions as the conductor of Λ becomes large at the primes dividing D. For the main local statements, we refer the reader to Propositions <ref> and <ref>. In contrast to the similar local quantities for unramified characters computed in <cit.>, the local quantities associated to ramified characters are not easy to compute explicitly, so we bound them via a soft approach where we exploit the volume of the support of the integrals. We refer the reader to Section <ref> for details of the argument. It would be of interest to compute these integrals exactly, as this would allow us to replace (<ref>) by an exact identity.
§.§ Notation
We use the notation
A ≪_x,y,… B
to signify that there exists
a positive constant C, depending at most upon x,y,z,
so that
|A| ≤ C |B|. If the subscripts x,y,… are omitted, it means the constant is absolute. We write A ≍ B to mean A ≪ B ≪ A.
The symbol ε will denote a small positive quantity.
We say that an integer d is a fundamental discriminant if (√(d)) is a quadratic field whose discriminant is equal to d. For a fundamental discriminant d, we let χ_d be the associated quadratic Dirichlet character.
We use to denote the ring of adeles over and for a number field F we let _F denote the ring of adeles over F. All L-functions in this paper will denote the finite part of the L-function (i.e., without the archimedean factors), so that for an automorphic representation π of _n(), we have L(s, π) = ∏_p<∞ L(s, π_p). All L-functions will be normalized to take s ↦ 1-s. For a finite set of places S we denote L^S(s, π)=∏_p ∉ SL(s, π_p).
For a commutative ring R, we define
_4(R)={g∈_4(R): ^tgJg=μ(g)J, μ(g)∈ R^×}, J=[[ 1; 1; -1; -1 ]]
Here, μ is called the similitude character. Let _4(R) = {g ∈_4(R) : μ(g) = 1}. We occasionally use G to denote _4.
We let M_2(R) denote the ring of 2 by 2 matrices over R, and let M^_2(R) be the additive subgroup of symmetric matrices.
Over the real numbers, we have the identity component G()^+:={g∈_4():μ(g)>0}. Let _̋2 be the Siegel upper half space of degree 2, i.e., the space _̋2 consists of the symmetric, complex 2× 2-matrices with positive definite imaginary parts. The group G()^+ acts on _̋2 via g⟨ Z ⟩ = (AZ+B)(CZ+D)^-1 for g = ABCD. We define j(g, Z) = (CZ+D) for g = ABCD∈ G()^+.
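As a quick sanity check of the similitude condition (a sketch of ours; the block form J = [[0, 1_2],[-1_2, 0]] is our reading of the displayed matrix), one can verify numerically that diagonal elements of the shape diag(p^{l+2m}, p^{l+m}, 1, p^m), which appear later in the paper, and elements of the Siegel unipotent radical lie in GSp_4:

```python
import numpy as np

# Block convention assumed for the displayed J: J = [[0, 1_2], [-1_2, 0]]
I2 = np.eye(2)
J = np.block([[np.zeros((2, 2)), I2], [-I2, np.zeros((2, 2))]])

def similitude(g, J=J):
    """Return mu such that g^t J g = mu J, or None if g is not in GSp_4."""
    M = g.T @ J @ g
    mu = M[0, 2] / J[0, 2]
    return mu if np.allclose(M, mu * J) else None

# A diagonal element diag(p^(l+2m), p^(l+m), 1, p^m)
p, l, m = 3.0, 1, 2
g1 = np.diag([p**(l + 2*m), p**(l + m), 1.0, p**m])
print(similitude(g1))          # p^(l+2m) = 243.0

# An element of the unipotent radical N: [[1_2, X], [0, 1_2]] with X symmetric
X = np.array([[1.0, 2.0], [2.0, -3.0]])
g2 = np.block([[I2, X], [np.zeros((2, 2)), I2]])
print(similitude(g2))          # 1.0
```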
§.§ Acknowledgments
The first and third-named authors acknowledge the support of the Engineering and Physical Sciences Research Council (grant number EP/W522508/1) and the Leverhulme Trust (research project grant RPG-2018-401).
§ LOCAL CALCULATIONS
In this section, which is purely local, we will study the Bessel function and Bessel integral associated to a spherical vector in an irreducible, tempered, principal series representation of _4(F) where F is a non-archimedean local field of characteristic zero. Our main local results, Proposition <ref> and Proposition <ref>, quantify the growth of these quantities along diagonal matrices.
§.§ Basic facts and definitions
§.§.§ Preliminaries
Throughout Section <ref>, F will be a non-archimedean local field of characteristic zero. Let be the ring of integers of F, with maximal ideal and uniformizer ϖ. Let 𝐤=/ be the residue class field, and q its cardinality. For x ∈ F, let v(x) be the normalized valuation and let | x | = q^-v(x) denote the normalized absolute value of x, so that v(ϖ) = 1, |ϖ| = q^-1. Let ψ be a character of F which is trivial on but non-trivial on ^-1.
We use the Haar measure dx on F that assigns volume 1, and we use the Haar measure d^× x on F^× that assigns ^× volume 1. So we have d^× x= (1-q^-1)^-1 dx/|x|.
§.§.§ The Bessel subgroup
Following <cit.> and <cit.> we introduce the following notations. Let a,b,c∈ F such that d := b^2-4ac ≠ 0. Let
S = [[ a b/2; b/2 c ]], Δ = [[ b/2 c; -a -b/2 ]]
and note that d = disc(S) = -4 det(S) and Δ^2=d/4. If d is not a square in F^×, then let K=F(√(d)) and note that K is isomorphic to F(Δ) via
F(Δ) ∼⟶ K, x + y Δ⟼ x + y √(d)/2.
If d is a square in F^×, then let K = F⊕ F and note that K is isomorphic to F(Δ) via
F(Δ) ∼⟶ K, x + y Δ⟼(x + y √(d)/2, x - y √(d)/2).
We define
T(F)={g∈_2(F): ^tgSg=(g)S}.
One can check that T(F)=F(Δ)^×, so that T(F)≅ K^× via the isomorphisms (<ref>), (<ref>) above. We define the Legendre symbol as
(K/𝔭)=
-1 if K/F is an unramified field extension,
0 if K/F is a ramified field extension,
1 if K=F⊕ F.
These three cases are referred to as the inert case, ramified case, and split case, respectively. If K is a field, then let _K be its ring of integers and _K be the maximal ideal of _K. If K = F ⊕ F, then let _K = ⊕.
Throughout, we will make the following standard assumptions (see Section 1 of <cit.>),
90ex
* a, b ∈ and c∈^×.
* If d∉ F^×2, then d is a generator of the discriminant of K/F.
* If d∈ F^×2, then d∈^×.
Under these assumptions, the group T():=T(F)∩_2() is isomorphic to _K^× via the isomorphism T(F)≅ K^×. Note also that these assumptions imply that if we are in the split case (d∈ F^×2) then b ±√(d)/2∈.
We consider T(F) a subgroup of _4(F) via
T(F)∋ g⟼g(g) ^tg^-1∈_4(F).
Let N(F) be the unipotent radical of the Siegel parabolic subgroup, i.e.,
N(F)={1_2X1_2∈_4(F): ^tX=X}
and let R(F)=T(F)N(F). We call R(F) the Bessel subgroup of _4(F).
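The claim that x + yΔ lies in T(F) can be confirmed symbolically; the following sketch (ours) checks the identity ^tg S g = det(g) S and Δ^2 = (d/4)·1_2 with sympy.

```python
import sympy as sp

a, b, c, x, y = sp.symbols('a b c x y')

S = sp.Matrix([[a, b/2], [b/2, c]])
Delta = sp.Matrix([[b/2, c], [-a, -b/2]])
g = x * sp.eye(2) + y * Delta

# T(F) = {g in GL_2(F) : g^t S g = det(g) S}; check that x + y*Delta always lies in it
lhs = sp.simplify(g.T * S * g)
rhs = sp.simplify(g.det() * S)
print(sp.simplify(lhs - rhs))          # zero matrix

# Delta^2 = (d/4) * 1_2 with d = b^2 - 4ac
print(sp.simplify(Delta**2 - (b**2 - 4*a*c)/4 * sp.eye(2)))  # zero matrix
```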
§.§.§ Bessel models
For S as in (<ref>), we define a character θ of N(F) by
θ([[ 1 x y; 1 y z; 1 ; 1 ]]) = ψ(ax+by+cz) = ψ(Tr(S [[ x y; y z ]]))
for x,y,z ∈ F. It is easily verified that θ(tnt^-1)=θ(n) for n∈ N(F) and t∈ T(F).
Let Λ be any character of K^× such that Λ |_F^× = 1. We identify Λ with a character of T(F) using the isomorphism T(F) ≃ K^×. The map
tu↦Λ(t)θ(u) defines a character of R(F). We denote
this character by Λ⊗θ. Let 𝒮(Λ,θ) be the space of all locally constant functions B: _4(F)→ with the Bessel transformation property
B(rg)=(Λ⊗θ)(r)B(g) for all r∈ R(F) and g∈_4(F).
Consider an irreducible, admissible, unitarizable, tempered representation (π,V_π) of trivial central character. If (π,V) is isomorphic to a subrepresentation of 𝒮(Λ,θ), then this realization of π is called a (Λ,θ)-Bessel model. It is known that such a model, if it exists, is unique; we denote it by ℬ_Λ,θ(π).
Since π is unitary, let ⟨ , ⟩ denote a _4(F)-invariant inner product (unique up to scaling) on V_π. For a vector v ∈ V_π, the normalized matrix coefficient attached to v is the function
Φ_v(g)=⟨π(g)v,v⟩/⟨ v, v ⟩.
Define
J_Λ, θ(v) := ∫_F^×\ T(F)∫_N(F)^Φ_v(tn) Λ^-1(t) θ^-1(n) dn dt,
where ∫_N(F)^ := lim_k→∞∫_N(^-k) denotes the stable integral <cit.>.
It can be shown that the representation π has a (Λ, θ)-Bessel model (i.e., ℬ_Λ,θ(π) exists) if and only if there is a non-zero vector v in the space of π such that J_Λ, θ(v) ≠ 0, in which case v is said to be a (Λ, θ)-test vector for π. We will refer to J_Λ, θ(v) as the local Bessel integral (of type (Λ, θ)) associated to v.
Suppose that the representation π has a (Λ, θ)-Bessel model. Fix a realization of π in ℬ_Λ,θ(π) and for each vector v ∈ V_π let B_v ∈ℬ_Λ,θ(π) denote its image. From uniqueness arguments it is easy to show that there exists a non-zero c that depends on Λ, θ, π and on the choice of realization such that for all v ∈ V_π and all g ∈_4(F) we have |B_v(g)|^2 = c J_Λ, θ(π(g)v).
§.§.§ Subgroups and characters of T()
Recall that
T():=T(F)∩_2(),
and that T() is isomorphic to _K^× via the isomorphisms (<ref>), (<ref>). We define the subgroup U_T(m) ⊂ T() via
U_T(0) = T(),
and for m ≥ 1,
U_T(m) = {g ∈ T(): g= λλ^m, for some λ∈^×}.
Let
Δ_0:= [[ 0 c; -a -b ]]
so that Δ_0 corresponds (under (<ref>), (<ref>) respectively) to the element δ_0:=-b + √(d)/2 if K is a field, and the element δ_0:= (-b + √(d)/2, -b - √(d)/2) if K=F ⊕ F. One can show (using <cit.>) that
_K = {x + y δ_0: x ∈, y ∈}
and for m ≥ 1,
U_T(m) = {x + y Δ_0: x ∈^×, y ∈^m}.
Using the above description, it is easy to see that (<ref>), (<ref>) induce isomorphisms
U_T(m) ∼⟶^×(1+^m_K)
for each m≥ 1.
Any character Λ of K^× satisfying Λ|_F^×=1 can be identified with a character of F^× T(F). From the above description, it follows that such a character must be trivial on U_T(m) ≃^×(1+^m_K) for some m. We define
c(Λ) = min{m ≥ 0: Λ|_U_T(m) = 1}.
Suppose that S̃=λ ^tASA for some λ∈ F^× and A ∈_2(F). A straightforward calculation verifies that
B_Λ, θ_S(v') = |λ(A)|^3 B_Λ, θ_S̃(v), where v'=π(λ AA')v.
Therefore, in order to compute the local Bessel integral, we may replace S by S̃ (for a suitable λ and A) at the cost of changing the vector v by a translate. Clearly, π has a (S, Λ)-Bessel model if and only if it has a (S̃, Λ)-Bessel model. In particular, the question of whether π has a (S, Λ)-Bessel model depends only on L and Λ and not on the particular choice of the matrix S such that T_S ≃ L^×.
§.§ Main results
For the rest of Section <ref>, let π be a tempered, spherical, irreducible principal series representation of _4(F), i.e., π is a tempered representation of Type I in the notation of <cit.>. We also assume throughout that π is of trivial central character. In particular, π is the unramified constituent of a representation χ_1 ×χ_2⋊σ induced from a character of the Borel subgroup associated to unramified characters χ_1, χ_2, σ of F^× satisfying χ_1 χ_2 σ^2 =1. We put
α=σ(ϖ), β=σ(ϖ)χ_1(ϖ).
Note that the temperedness of π implies that χ_i and σ are unitary and therefore |α| = |β| = 1. Let ϕ be a (unique up to multiples) spherical vector in V_π, i.e., ϕ is fixed by the subgroup _4().
It is known (see, e.g., Table 2 of <cit.>) that ℬ_Λ,θ(π) exists for all characters Λ of F^× T(F). For any such character Λ we let B_ϕ, Λ∈ℬ_Λ,θ(π) be the element corresponding to ϕ under some choice of isomorphism π≃ℬ_Λ,θ(π).
For ℓ, m ∈, let
h(ℓ,m)=[ ϖ^ℓ+2m; ϖ^ℓ+m; 1; ϖ^m ].
Using the Iwasawa decomposition, one can show that
_4(F)=_ℓ,m∈
m≥0R(F)h(ℓ,m)_4();
cf. (3.4.2) of <cit.>.
§.§.§ Growth of the local Bessel function
We keep the setup as above. We refer to the function B_ϕ,Λ as the local (spherical) Bessel function of type (Λ, θ). Note that this function depends on our choice of realization of π in ℬ_Λ,θ(π). However, given π, Λ and θ, this function is canonically specified up to multiples. Our next result bounds the growth of this function along the elements h(ℓ, m).
Let ϕ∈ V_π be a spherical vector. For a character Λ of F^× T(F), we have B_ϕ,Λ(h(0, c(Λ))) ≠ 0. Furthermore, for non-negative integers ℓ, m satisfying m ≥ c(Λ), we have
B_ϕ,Λ(h(ℓ, m))/B_ϕ,Λ(h(0, c(Λ)))≪ (m-c(Λ)+1)^3(ℓ+1)^3 q^-2(m - c(Λ)) - 3 ℓ/2.
The implied constant is absolute.
§.§.§ Growth of the local Bessel integral
We keep the setup as above. For brevity, denote
ϕ^(ℓ,m) = π(h(ℓ,m))ϕ.
We are interested in bounding the quantity J_Λ, θ(ϕ^(ℓ,m)) for m ≥ c(Λ). Due to Proposition <ref> and the discussion in Section <ref> it suffices to consider the case ℓ=0, m=c(Λ). We prove the following bound.
Let ϕ∈ V_π be a spherical vector and Λ be a character of F^× T(F). If the residue field characteristic q is even and c(Λ)=0, assume that F= _2.
We have
J_Λ, θ(ϕ^(0,c(Λ))) ≪ (c(Λ) + 1)^6 q^-4c(Λ).
The implied constant is absolute.
§.§ Proof of Proposition <ref>
Fix a character Λ of F^×\ T(F) and recall the definitions of the parameters α, β attached to the representation π, and of the local spherical Bessel function B_ϕ,Λ∈ℬ_Λ,θ(π) of type (Λ,θ). It is known that B_ϕ,Λ(h(ℓ,m))=0 whenever ℓ <0 or m<c(Λ), cf. <cit.> and <cit.>.
§.§.§ A formula due to Sugano
In <cit.> Sugano proved that B_ϕ,Λ(h(0,c(Λ))) ≠ 0 and provided a formula for the generating function of B_ϕ,Λ(h(ℓ,m)). We write it down in a form which will be convenient for the proof of Proposition <ref>. Namely:
∑_ℓ, m ≥ 0 B_ϕ,Λ(h(ℓ,m+c(Λ))) x^m y^ℓ = B_ϕ,Λ(h(0,c(Λ))) H(x,y)/(P(x)Q(y)),
where
P(x)=(1-αβ q^-2x)(1-αβ^-1 q^-2x)(1-α^-1β q^-2x)(1-α^-1β^-1 q^-2x),
Q(y)=(1-α q^-3/2y)(1-α^-1 q^-3/2y)(1-β q^-3/2y)(1-β^-1 q^-3/2y),
and
* in case c(Λ)>0:
H(x,y)= 1+xq^-2-xyq^-7/2σ(α,β) + xy^2q^-5 + x^2y^2q^-7
* in case c(Λ)=0:
H(x,y) = 1+xq^-2(1+δ(α,β)) + x^2q^-4( q^-1(K/𝔭) +δ(α,β) +q^-1/2ϵσ(α,β) )
+ x^3q^-7(K/𝔭)
+y[ -q^-2ϵ +xq^-7/2( q^-1/2ϵτ(α,β)-q^-1/2ϵ -σ(α,β) )
+ x^2q^-6( ϵ(τ(α,β) -1-σ(α,β)^2) -q^1/2δ(α,β)σ(α,β) )
-x^3q^-8( q^-1/2(K/𝔭)σ(α,β)+ϵ ) ]
+y^2[ -q^-4(K/𝔭) + xq^-5( 1+q^-1(K/𝔭)(τ(α,β) -2) )
+ x^2q^-7( 1 +δ(α,β) + q^-1(K/𝔭)(σ(α,β)^2-2τ(α,β) +2) )
+x^3q^-9( q^-1(K/𝔭) -q^-1(K/𝔭)δ(α,β) +q^-1ϵ^2 ) ]
where
δ(α,β)= q^1/2/(q+(K/𝔭)) ( q^-1/2ϵ^2 +2q^-1/2(K/𝔭) -ϵσ(α,β) -q^-1/2(K/𝔭)τ(α,β) ),
ϵ = 0 if (K/𝔭)=-1, ϵ = Λ(ϖ_K) if (K/𝔭)=0, ϵ = Λ((ϖ,1))+Λ((1,ϖ)) if (K/𝔭)=1,
with ϖ_K a uniformizer of _K, and
σ(α,β)= α+β+α^-1 +β^-1,
τ(α,β)=αβ+αβ^-1 + α^-1β+α^-1β^-1+2.
§.§.§ The required bound
The proof of Proposition <ref> will follow from the formula (<ref>). For the sake of brevity, denote
a(m,ℓ):=B_ϕ,Λ(h(ℓ,m+c(Λ)))(B_ϕ,Λ(h(0,c(Λ))))^-1,
H(x,y)=∑_m=0^3∑_ℓ=0^2 h_m,ℓ x^m y^ℓ.
Recall that |α|=|β|=1. Hence, since |ϵ|≤ 2, it follows that |δ(α,β)|< 28 q^-1/2 and we may derive good bounds for |h_m,ℓ|; they are listed in Table <ref>.
Note in particular that for all m,ℓ≥ 0:
h_m,ℓ≪ q^-2m-3/2ℓ.
Using the geometric series expansion of (P(x)Q(y))^-1 in the formula (<ref>), we obtain
a(m,ℓ) = ∑_m̃=0^3∑_l̃=0^2∑_(m_1,m_2,m_3,m_4)∈^4_≥ 0
m_1+m_2+m_3+m_4=m-m̃α^m_1+m_2-m_3-m_4β^m_1-m_2+m_3-m_4∑_(l_1,l_2,l_3,l_4)∈^4_≥ 0
l_1+l_2+l_3+l_4=ℓ-l̃α^l_1-l_3β^l_2-l_4
× h_m̃,l̃q^-2(m-m̃)q^-3/2 (ℓ-l̃)
Hence, because |α|=|β|=1,
|a(m,ℓ)q^2m+3/2ℓ|≤∑_m̃=0^3∑_l̃=0^2 (m-m̃+1)^3(ℓ-l̃+1)^3 |h_m̃,l̃|q^2m̃+3/2l̃ ,
and from Table <ref> we see that |h_m̃,l̃| q^2m̃+3/2l̃ is bounded above by an absolute and effective constant. This proves that
B_ϕ,Λ(h(ℓ,m+c(Λ)))≪ (m+1)^3(ℓ+1)^3 q^-2m-3/2ℓ |B_ϕ,Λ(h(0,c(Λ)))|,
where the implied constant is absolute and effective.
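The cubic factors above come from the elementary count of quadruples of non-negative integers with a fixed sum. The following short Python sketch (illustrative only; it is not part of the original argument) verifies the count that is being bounded by (m+1)^3 in the proof.

from math import comb

# The number of 4-tuples (m_1, m_2, m_3, m_4) of non-negative integers with
# m_1 + m_2 + m_3 + m_4 = m is C(m+3, 3); the proof bounds it by (m+1)^3.
for m in range(0, 50):
    exact = comb(m + 3, 3)
    assert exact <= (m + 1) ** 3, (m, exact)
print("C(m+3,3) <= (m+1)^3 verified for 0 <= m < 50")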
§.§ Bounding the local Bessel integral
§.§.§ A set of coset representatives
As a starting point, we write down for each m≥ 1 a set of coset representatives for F^× U_T(m)\T(F) ≃ F^×(1+^m_K)\K^×. Recall the definition of Δ_0 from Section <ref>.
For each integer m>0, let the sets D and S_m be as defined below.
* if K/F is an unramified field extension, let D = {1} and
S_m={1+yΔ_0 : y ∈/^m }∪{x+Δ_0 : x ∈/^m}.
* if K/F is a ramified field extension, choose P_0 = x_0 + y_0Δ_0 = [[ x_0 y_0c; -y_0a x_0 - y_0b ]] such that x_0, y_0 ∈ and det(P_0) ∈ϖ^× (such a choice is possible by (<ref>)). Let D={1,P_0} and
S_m={1+yΔ_0 : y ∈/^m }.
* if K=F × F, recall that √(d)∈^×, (b+√(d))/2∈, and let P_0 = 1+ ((b+√(d))/(2√(d)))(ϖ -1) + ((ϖ-1)/√(d))Δ_0. Let D={P_0^n: n ∈} and
S_m={1+((b+√(d))/2)y+ yΔ_0 : y ∈/^m, y ∉-1/√(d) +}.
Then in each case the set D S_m := {s_1s_2: s_1 ∈ D, s_2 ∈ S_m } gives a complete set of representatives for the quotient F^× U_T(m) T(F).
It suffices to show that the image of D under (<ref>) or (<ref>) gives a complete set of representatives for F^×_K^× K^× and the image of S_m under (<ref>) or (<ref>) gives a complete set of representatives for ^×(1+^m _K)_K^×.
Inert case: Assume K/F is an unramified extension.
Then F^×_K^× = K^×, which agrees with D ={1}. To show that the image of S_m under (<ref>) gives a complete set of representatives for ^×(1+^m _K)_K^×, note first that S_m has the correct cardinality by Lemma 3.5.3 of <cit.> and each element of S_m maps to _K^×. Using (<ref>), it is easy to see any element of _K^× can upon multiplying by a suitable element of ^×(1+^m _K) be brought to one of the elements in the image of S_m. The proof is complete.
Ramified case: Assume K/F is a ramified extension. Note that the image of P_0 under (<ref>) is an uniformizer ϖ_K of _K. So D maps onto {1, ϖ_K}≃ F^×_K^× K^×. The proof that the image of S_m under (<ref>) gives a complete set of representatives for ^×(1+^m _K)_K^× is essentially identical to the inert case.
Split case: Assume K=F × F. A computation show that the image of P_0 under (<ref>) is the element (ϖ, 1). Therefore D maps onto the set {(ϖ^n, 1): n∈} which is clearly a complete set of representatives for F^× (^××^×) (F^×× F^×). Similarly, the image of S_m under (<ref>) is the
set {(1 + √(d)y, 1): y ∈/^m, y ∉-1/√(d) +} which clearly
gives a complete set of representatives for ^×((1+^m ) × (1+^m ) ) (^××^×).
§.§.§ Some volume computations
We now calculate some volumes
that will appear in our calculation of the local integral.
Let X,S be two 2 × 2 matrices.
Then
det(X+S)=det(X)+tr(^tA_S X)+det(S),
where A_S is the adjugate matrix of S.
In particular, if S is invertible, then
det(X+S)=det(X)+det(S) tr(S^-1X)+det(S).
Say X=[[ x y; w z ]] and S=[[ a b; c d ]].
Then
det(X+S) =(x+a)(z+d)-(y+b)(w+c)
= (ad-bc)+(az+xd-yc-wb)+(xz-yw).
Now
az+xd-yc-wb =tr[[ xd-wb *; * az-yc ]]
= tr([[ d -b; -c a ]][[ x y; w z ]]),
which proves the first claim.
The second claim comes from the identity
^tA_S=(det S)S^-1.
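As a quick sanity check of the lemma, the identity can be verified symbolically. The snippet below is a small illustration using SymPy (the use of SymPy is an assumption of this sketch and plays no role elsewhere).

import sympy as sp

x, y, w, z, a, b, c, d = sp.symbols('x y w z a b c d')
X = sp.Matrix([[x, y], [w, z]])
S = sp.Matrix([[a, b], [c, d]])

# S.adjugate() satisfies S * S.adjugate() = det(S) * I, i.e. it equals det(S) * S^(-1),
# which is the matrix denoted ^tA_S in the lemma above.
lhs = (X + S).det()
rhs = X.det() + (S.adjugate() * X).trace() + S.det()
assert sp.expand(lhs - rhs) == 0
print("det(X+S) = det(X) + tr(adj(S) X) + det(S) verified for 2x2 matrices")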
Let a,b,c,d be integers.
Let _a,b,c,d={[[ x y; y z ]]∈ M^sym_2(F): v(x) ≥ a, v(y) ≥ b, v(z) ≥ c, v(xz-y^2) ≥ d}.
Then we have
vol(_a,b,c,d) ≤ q^-max{d/2,b}-max{d,a+c}+2max{0,d-a-c}q^-d-b.
We have _a,b,c,d⊂ S_1 ∪ S_2 where
S_1 = {[[ x y; y z ]]∈ M^sym_2(F): v(x) ≥ a, v(y) ≥ b, v(z) ≥ c, min{v(xz), 2v(y)}≥ d}
and
S_2={[[ x y; y z ]]∈ M^sym_2(F): v(x) ≥ a, v(y) ≥ b, v(z) ≥ c, v(xz) < d, xz ∈ y^2 + ^d}.
Volume of S_1: Let j=max{b,⌈ d/2⌉}, so that if [[ x y; y z ]]∈ S_1 then y ∈^j.
If a+c ≥ d then we have
(S_1)= (^a ×^j ×^c) ≤ q^-max{d/2,b}-(a+c).
On the other hand, if a+c < d then let i=v(x).
Since v(xz) ≥ d, we must have v(z) ≥ d-i, which becomes automatic
from the condition v(z) ≥ c as soon as i ≥ d-c. Thus we have
(S_1) =(^j)(∑_i=a^d-c-1(ϖ^i ^××^d-i) + (^d-c×^c) )
=q^j ((d-a-c)(1-q^-1)q^-d + q^-d) ≤ (1+d-a-c)q^-max{d/2,b}-d.
Combining the cases a+c ≥ d and a+c<d together, we find that
vol(S_1) ≤ (1+max{0,d-a-c})q^-max{d/2,b}-max{d,a+c}.
Volume of S_2: Assume [[ x y; y z ]]∈ S_2 and let i=v(x) and j=v(y).
Since v(xz)<d, the condition xz ∈ y^2 +^d forces 2j=i+v(z)<d
and in particular 2j-i ≥ c.
Moreover, for fixed x, we must have z ∈y^2/x + ^d-i.
Therefore,
(S_2) = ∑_j ≥ b(ϖ^j ^×) ∑_i ≥ a(ϖ^i ^××^d-i) _2j-i ≥ c
2j <d
= ∑_j=b^⌊d-1/2⌋(1-q^-1)q^-j∑_i=a^2j-c(1-q^-1)q^-d
≤max{0,d-c-a}q^-b-d.
Therefore, in total we have
(_a,b,c,d) ≤ q^-max{d/2,b}-max{d,a+c} + max{0,d-c-a}(q^-max{d/2,b}-d+q^-b-d)
≤ q^-max{d/2,b}-max{d,a+c} + 2max{0,d-c-a}q^-b-d.
Fix j a non-negative integer.
For Y=[[ x y; y z ]]∈ M_2^sym(F), define
M_1(Y;j)=min{0,v(x)+j,v(y),v(z)} and
M_2(Y;j)=min{0,j+v(xz-y^2), j+v(x),j+v(y),v(z)}.
For future use, we note the following inequality, holding for all Y ∈ M_2^sym(F):
2M_1(Y;j) ≤ M_2(Y;j) ≤ M_1(Y;j)+j.
The second inequality comes from
M_2(Y;j) ≤min{0,j+v(x),j+v(y),v(z)}≤min{0,j+v(x),v(y),v(z)}+j.
To establish the first inequality, observe that
v(xz-y^2) ≥min{v(x)+v(z),2v(y)}
and thus
M_2(Y;j) ≥min{0, v(x)+v(z)+j,2v(y),j+v(x),j+v(y),v(z)}≥ 2M_1(Y;j).
For any integers m_1,m_2 ≤ 0 with 2m_1≤ m_2 define
N(m_1,m_2;j)={Y ∈ M_2^sym(F): M_1(Y;j)=m_1, M_2(Y;j)=m_2}.
Then we have
( N(m_1,m_2;j)) ≤ (1+2min{m_2-2m_1,-m_2})q^-m_1-m_2+j.
If Y ∈ N(m_1,m_2;j) then by definition of M_1(Y;j) we
must have v(x) ≥ m_1-j, and by definition of M_2(Y;j)
we must also have v(x) ≥ m_2-j.
Thus v(x) ≥max{m_1,m_2}-j.
By the same reasoning, it follows that
Y ∈_a,b,c,d,
where
a=max{m_1,m_2}-j, b=max{m_1,m_2-j}, c=max{m_1,m_2}, d=m_2-j.
We have
max{d/2,b}=max{m_1,m_2-j,m_2-j/2}
=max{m_1,m_2-j/2}≥ m_1.
Moreover, using the fact that m_2 ≤ 0, we have by (<ref>)
max{d,a+c} =max{m_2-j,2max{m_1,m_2}-j}
=max{m_2,2m_1}-j=m_2-j.
Thus, by Lemma <ref>,
we obtain
( N(m_1,m_2;j)) ≤ (1+2min{m_2-2m_1,-m_2})q^-m_1-m_2+j.
§.§.§ Proof of Proposition <ref>
Recall thatπis an irreducible, tempered, spherical representation,ϕ∈ V_πis a spherical vector, andΛis a character ofK^×satisfyingΛ|_F^×=1.
Our aim is to prove Proposition <ref>. We assume thatc(Λ) ≥ 1because the casec(Λ) = 0is known by Theorem 2.1 of <cit.>.
Our bound will follow from the calculation of explicit coset representatives
in Lemma <ref> together with a bound for the following stable integral.
Given a positive integer m and given t ∈ T(F), we define
J_0(ϕ^(0,m),t)= ∫_N(F)^Φ_ϕ^(0,m)(nt)θ^-1(n) dn.
Let t ∈ T(F) and let m be a positive integer.
Write the Cartan decomposition
diag(ϖ^-2m,ϖ^-m) t diag(ϖ^2m,ϖ^m)=UDV
with
U,V ∈_2(), D=diag(ϖ^i,ϖ^i+j) and j ≥ 0.
Then we have
J_0(ϕ^(0,m),t)≪ (m+v(2)+j)^5 q^-3m-j/2.
Since Φ_ϕ^(0,m) is bi-F^× h(0,m)_4()h(0,m)^-1-invariant,
we have
J_0(ϕ^(0,m),t)=lim_k→∞∑_ℓ',m'Φ_ϕ^(0,m) (h(ℓ',m'))
∫_N(^-k) ∩ N(ℓ',m';m,t)θ^-1(n) dn,
where
N(ℓ',m';m,t)={n ∈ N(F) : h(0,m)^-1nt h(0,m)∈ Z(F)_4()h(ℓ',m')_4()}.
Now let n=[[ 1 X; 0 1 ]]∈ N(F).
For convenience, define
h(m)=diag(ϖ^2m,ϖ^m),
so that
h(0,m)=diag(h(m), ϖ^2mh(m)^-1).
Then
h(0,m)^-1nth(0,m)
=
h(0,m)^-11X1t( t)^tt^-1
h(0,m)
=
h(m)^-1ϖ^-2mh(m)1X1h(m)UDVh(m)^-1( t)h(m)^-1 ^tU^-1D^-1^tV^-1h(m)h(m)ϖ^2mh(m)^-1
=U^tU^-1D( t) Y D^-1( t) D^-1V^tV^-1,
where we have set
U^-1 diag(ϖ^-2m,ϖ^-m) X diag(1,ϖ^m) ^tU^-1=Y=[[ x̃ ỹ; ỹ z̃ ]].
Note that there is a unit u such that t= ϖ^2i+j u, and
D( t) Y D^-1( t) D^-1 =1Y1D( t) D^-1
=ϖ^i 1Y1[ 1; ϖ^j; ϖ^j; 1 ][ 1; 1; u; u ].
Thus, recalling Definition <ref>, by <cit.> we have n ∈ N(ℓ',m';m,t) if and only if
ℓ'=j+2M_1(Y;j)-2M_2(Y;j)
and
m'=M_2(Y;j)-2M_1(Y;j).
Henceforth, set m_1=-m'-ℓ'/2+j/2 and
m_2=j-m'-ℓ', so that n ∈ N(ℓ',m';m,t) if and only if Y ∈ N(m_1,m_2;j).
Observe that
tr(SX) =tr(S diag(ϖ^2m,ϖ^m) U Y ^tU diag(1,ϖ^-m))
=tr(S'Y),
where
S'=^tU diag(1,ϖ^-m) S diag(ϖ^2m,ϖ^m) U=
^tU diag(ϖ^m,1) S diag(ϖ^m,1) U.
Thus, changing variables X ↦ Y, we have
∫_N(^-k) ∩ N(ℓ',m';m,t)θ_S^-1(n) dn=q^-3mI^(k)(m_1,m_2;j),
where
I^(k)(m_1,m_2;j)=∫_N(m_1,m_2;j) ∩ S_U(k,m)ψ(-tr(S'Y)) dY
and
S_U(k,m)= U^-1 diag(ϖ^-2m,ϖ^-m) M_2^sym(𝔭^-k) diag(1,ϖ^m) ^tU^-1.
Since
| I^(k)(m_1,m_2;j) | ≤(N(m_1,m_2;j)),
by Lemma <ref> we obtain
| ∫_N(^-k) ∩ N(ℓ',m';m,t)θ_S^-1(n) dn | ≤
(1+2min{m',ℓ'+m'-j})q^3/2ℓ'+2m'-j/2-3m.
By Macdonald's formula <cit.> we have for all non-negative integers ℓ',m'
Φ_ϕ^(0,m)(h(ℓ',m')) = Φ_ϕ(h(ℓ',m')) ≪ (m'+ℓ')^2 q^-(4m'+3ℓ')/2.
The bound (<ref>) follows from the fact that the
Macdonald's formula is essentially a double geometric sum in the Satake parameters, of length ≪ m'+ℓ'.
Hence combining (<ref>) and (<ref>),
every term in (<ref>) is ≪ (m'+ℓ')^3 q^-j/2-3m.
It remains to show that the series (<ref>) is actually a finite sum.
From now on, assume that m_1+j ≤ -2m-6-j-2v(2).
We claim that for k large enough we have
I^(k)(m_1,m_2;j)=0. This is enough to prove the Proposition, because then only the terms with 0 ≥ m_1>-2m-6-2j-2v(2)
will contribute to (<ref>), but in view of inequality (<ref>) these terms
must also satisfy 0 ≥ m_2 ≥ -4m-12-4j-4v(2), and thus there are ≪ (m+v(2)+j)^2 such terms.
We split the integral I^(k)(m_1,m_2;j) over the following two disjoint ranges:
* v(tr(S'Y)) <-2,
* v(tr(S'Y)) ≥ -2.
For i ∈{1,2}, let I_i be the integral in the corresponding range.
Range (1) is trivially stable by the change of variable Y ↦λ Y for any λ∈^×.
Thus
I_1 = ∫ψ(-λ tr(S'Y)) dY.
Integrating both sides with respect to λ∈ 1+ gives I_1=0.
Next, we claim that range (2) is stable by the change of variables Y ↦ Y+ϖ^-1-v(2)S'^-1.
To prove this, we have three conditions to check:
* We have v[tr(S'(Y+ϖ^-1-v(2)S'^-1))]=v(tr(S'Y)+2ϖ^-1-v(2)) ≥ -2.
* We need to check that M_1(Y+ϖ^-1-v(2)S'^-1;j)=M_1(Y;j)=m_1.
We have (S')=ϖ^2m(S)=ϖ^2md and by Assumption <ref>,
v((S')) ≤ 2m+2.
Moreover, since the entries of S are integers, it is clear from
the definition that S' also has integer coefficients.
Since S'^-1=1/ S'^tA_S',
where A_S' is the adjugate matrix of S', it follows that
all the entries of ϖ^-1-v(2)S'^-1 have valuation larger than -2m-3-v(2).
Since we are assuming M_1(Y;j)=m_1 ≤ -2m-6-2j-2v(2)< -2m-3-v(2)-j, it follows
that M_1(Y+ϖ^-1-v(2)S'^-1;j)=M_1(Y;j), as required.
* We need to check that M_2(Y+ϖ^-1-v(2)S'^-1;j)=M_2(Y;j)=m_2.
By Lemma <ref> we have
det(Y+ϖ^-1-v(2)S'^-1)=
det(Y)+ϖ^-1-v(2)det(S')^-1 tr(S'Y)+ϖ^-2-2v(2)det(S')^-1.
In particular,
v((Y+ϖ^-1-v(2)S'^-1))=v((Y))
unless
v((Y)) ≥ -2m-5-2v(2),
in which case it does not contribute to M_2(Y;j) anyway since
by (<ref>), we have M_2(Y;j)=m_2 ≤ m_1+j < -2m-5-j-2v(2).
In both cases, we have M_2(Y+ϖ^-1-v(2)S'^-1;j)=M_2(Y;j).
Thus, we obtain I_2=ψ(2ϖ^-1-v(2))I_2.
Since 2ϖ^-1-v(2) generates ^-1/ and since ψ is trivial on but not trivial on ^-1, we have ψ(2ϖ^-1-v(2)) ≠ 1. This implies I_2=0.
We now proceed to prove Proposition <ref> in all cases – split, inert and ramified. Until the end of the section, we setm=c(Λ).
We remind the reader that Assumptions <ref> are in force throughout.
By U_T(m)-invariance, we have
J_Λ,θ(ϕ^(0,m))=vol(^×\U_T(m))∑_iΛ^-1(t_i)J_0(ϕ^(0,m),t_i),
where the i-sum ranges over representatives t_i of F^× U_T(m)\T(F).
Note that vol(^×\U_T(m)) ≍ q^-m.
Proof in the inert case:
AssumeK/Fis an unramified field extension.
Lettbe a representative as in Lemma <ref>.
Consider first the case t=[[ 1 yc; -ya 1-yb ]]. Write
diag(ϖ^-2m,ϖ^-m) t diag(ϖ^2m,ϖ^m) =[[ 1 ϖ^-myc; -ϖ^mya 1-yb ]]=UDV as in (<ref>).
Noting that v(y)-m ≤ 0, we have
i =min{v(1), v(y)-m, v(ya)+m, v(1-yb)} =v(y)-m.
On the other hand, since t ∈ S_m and the image of S_m under (<ref>) is contained in _K^×, we have 2i+j=v(det(t))=v(N_K/F(1+yδ_0))=0. Therefore j=2m-2v(y). Thus by Proposition <ref> we have J_0(ϕ^(0,m),t) ≪ (m+v(2))^5 q^-4m+v(y).
Since there are ≍ q^m-v(y) representatives of this form, their total contribution (taking into account the volume) to (<ref>) is ≪ (m+v(2))^5 q^-4m. Summing this over 0 ≤ v(y) ≤ m gives ≪ m(m+v(2))^5 q^-4m.
Next, consider the case t=[[ x c; -a x-b ]] with x ∈. Then similarly as before, we find i=-m and 2i+j=0.
Thus by Proposition <ref> we have J_0(ϕ^(0,m),t) ≪ (m+v(2))^5 q^-4m.
Since there are ≍ q^m-1 representatives of this form, their total contribution to (<ref>) is ≪ (m+v(2))^5 q^-4m-1.
Adding up the contributions of all the representatives thus gives J_Λ,θ(ϕ^(0,m)) ≪ m(m+v(2))^5 q^-4m.
Proof in the ramified case:
Now assumeK/Fis a ramified field extension.
Lettbe a representative as in Lemma <ref>.
Consider first the caset=1yc-ya1-yb∈ S_m.Then the exact same argument as in the inert case goes through.
Next, consider the case t=(x_0+y_0Δ_0)(1+yΔ_0)=[[ x_0-yy_0ac c[y_0+y(x_0-y_0b)]; -a[y_0+y(x_0-y_0b)] x_0+y[y_0(b^2-ac)-bx_0]-by_0 ]].
v(x_0(x_0-y_0b)+y_0^2ac)=1.
We shall now choose some convenientx_0,y_0.
Let us now distinguish some cases.
Consider first the casev(a)=1.
Then we can takey_0=1andx_0=y_0b.
Thusv([y_0+y(x_0-y_0b)])=0, and hencei=-m. Furthermore2i+j=v((t))=v((P_0))=1,and hencej=2m+1. From there on, the argument proceeds as in the second case of the inert case, giving a final contribution≪ (m+v(2))^5q^-4m-1/2Next supposev(a)>1.
First, we claim thatv(b)=0.
Indeed, ifb=ϖ b'withb' ∈then we haved=ϖ^2d'whered'=b'^2-4a/ϖ^2c∈and thusddoes not generate the discriminant ofK/F, contradicting Assumption <ref>. Thus we can takey_0=1andx_0=ϖ+b.
Thenv(y_0+y(x_0-y_0b))=0again and the rest of the argument is the same as the casev(a)=1.
Finally consider the casev(a)=0. For (<ref>) to hold,
we must havev(y_0)=v(x_0-y_0b)=0.
Sincey ∈/^m, for each0 ≤ k ≤ mthere are exactlyq^k(1-q^-1)choices ofysuch thatv([y_0+y(x_0-y_0b)])=k.
From there on, the argument proceeds as in the first case of the inert case,
giving a final contribution≪ m(m+v(2))^5q^-4m-1/2.
Adding up the contributions of all the representatives again gives J_Λ,θ(ϕ^(0,m)) ≪ m(m+v(2))^5 q^-4m.
Proof in the split case:
Finally assumeK=F × F.
We start with a technical lemma.
Assume v(2)>0. Then we have v(√(d)-b),v(√(d)+b) ≥ v(2).
We have d ∈ b^2+4 thus
v(√(d)+b)+v(√(d)-b) ≥ v(4).
But
v(√(d)+b)=v(√(d)-b+2b) ≥min{v(2),v(√(d)-b)}.
If v(√(d)-b) ≤ v(2) then equation (<ref>) implies
v(√(d)+b) ≥ v(2), while if v(√(d)-b) ≥ v(2) then equation (<ref>) implies v(√(d)+b) ≥ v(2).
The same reasoning gives v(√(d)-b) ≥ v(2).
Now lettbe a representative as in Lemma <ref>,
and write t=x+yΔ=[[ x+yb/2 yc; -ya x-yb/2 ]] for some x,y ∈ F.
Since by (<ref>) the image ofP_0is(ϖ^n,1)and the image ofS_mis(/^m)^××{1},
we have x+y√(d)/2=ϖ^k u,
x-y√(d)/2=1 for some k ∈ and u ∈ (/^m)^×.
Equivalently, x=(ϖ^k u+1)/2,
y=(ϖ^k u-1)/√(d). We start with the case k<0.
y=ϖ^ku-1/√(d).We start with the casek<0.
Thenv(y)=k.
Furthermore, we claim thatv(x ± y b/2) ≥ k.
Ifv(2)=0this is obvious.
On the other hand, we have
x+yb/2 =(ϖ^k u+1)/2+b(ϖ^k u-1)/(2√(d))
=((√(d)+b)ϖ^k u+(√(d)-b))/(2√(d)).
Ifv(2)>0, Lemma <ref> proves the claim, and similarly forx-yb/2.
It follows that
i =min{v(x+yb/2), v(y)-m, v(ya)+m, v(x-yb/2)}
=k-m.
Moreover, we have2i+j=v(x^2-d/4y^2)=v(ϖ^ku)=k.Therefore,j=2m-kand by Proposition <ref> we haveJ_0(ϕ^(0,m),t) ≪ (m+v(2)-k)^5q^-4m+k/2.Summing overk<0andu ∈ (/^m)^×and multiplying by(^× U_T(m))gives a contribution≪ (m+v(2))^5q^-4m-1/2.Consider now the casek>0. Thenv(y)=0and, by the same reasoning as as above,v(x± yb/2) ≥ 0, hencei=-mand2i+j=k.Therefore,j=2m+k, and from then on this case is completely analogous to the casek<0after changingkto-k.
Finally, consider the casek=0. We again havev(x± yb/2) ≥ 0.
Since we can always choose the representative u ∈ (/^m)^× such that v(y) ≤ m, it follows that i=v(y)-m and 2i+j=0. Therefore by Proposition <ref> we have J_0(ϕ^(0,m),t) ≪ (m+v(2))^5 q^-4m+v(y). Since there are ≍ q^m-v(y) such representatives, we get as before a contribution ≪ m(m+v(2))^5 q^-4m. Adding up the contributions of all the representatives thus gives J_Λ,θ(ϕ^(0,m)) ≪ m(m+v(2))^5 q^-4m.
§ BOUNDS ON FOURIER COEFFICIENTS AND SUP-NORMS OF SIEGEL CUSP FORMS
In this section we relate the Fourier coefficients of Siegel cusp forms of degree 2 to global Bessel coefficients and the local quantities studied in the previous section. We go on to prove the main theorems stated in the introduction.
§.§ Siegel cusp forms and representations
We begin with recalling classical Siegel cusp forms and their adelizations. For brevity, denoteG(R):=_4(R)for each ringRand letΓ = _4(). Letkbe a positive integer.
LetS_k(Γ)be the space of holomorphic Siegel cusp forms of degree2and weightkwith respect toΓ. Hence, ifF ∈ S_k(Γ), then for allγ∈Γwe haveF |_k γ=F, where
(F |_k g)(Z) := μ(g)^k j(g, Z)^-k
F(g ⟨ Z ⟩)
for g ∈ G()^+ and Z ∈ℍ_2, the Siegel upper half space; moreover F vanishes at the cusps. For a precise formulation of this cusp vanishing condition, see <cit.>;
for definitions and basic properties
of Siegel cusp forms we refer the reader to <cit.>.
ForF ∈ S_k(Γ), we define the adelizationϕ_FofFto be the function onG()defined by
ϕ_F(γ h_∞ k_0) = μ(h_∞)^k j(h_∞, iI_2)^-k F(h_∞⟨ iI_2⟩)
whereγ∈ G(), h_∞∈ G()^+andk_0 ∈∏_ p<∞ G(_p). Thenϕ_Fis a
well-defined function on the whole ofG()by strong approximation,
and is a cuspidal automorphic form.
It is well-known thatϕ_Fgenerates an irreducible representation if and only ifFis an eigenform of all the Hecke operators <cit.>. We denote this representation byπ_F. Now suppose thatFis a Hecke eigenform. SinceFis of full level, it is either of general type or of Saito–Kurokawa type (see, e.g., Proposition 2.3.1 of <cit.>). The representationπ_Fis an irreducible cuspidal automorphic representation ofG()of trivial central character; writingπ_F = ⊗_v π_v, we have
* The archimedean component π_∞ is a holomorphic discrete series representation with scalar minimal K-type determined by the weight k.
* If F is of Saito–Kurokawa type, then for a prime number p, the representation π_p is of type IIb according to Table A.1 of <cit.>. Note that these are non-tempered, non-generic representations.
* If F is of general type, then for a prime number p, π_p is a tempered representation of Type I in the notation of <cit.>.
ForF ∈ S_k(Γ)we define the Petersson norm
F_2^2 = ⟨ F, F⟩ = ∫_Γ\ℍ_2 |F(Z)|^2 (det Y)^k-3 dX dY.
For any ϕ∈ L^2(^× G() G()), let ⟨ϕ, ϕ⟩ = ∫_^× G() G() |ϕ(g)|^2 dg. We have the convenient relation
⟨ F, F⟩/vol(Sp(4,ℤ)\ℍ_2) = ⟨ϕ_F, ϕ_F ⟩/vol(Z()G()\G());
c.f. Section 4.1 of <cit.>.
§.§ Bessel periods and Fourier coefficients
Given a fundamental discriminantd<0, define
S_d= [[ -d/4 0; 0 1; ]] if d≡ 04,
[[ 1-d/4 1/2; 1/2 1; ]] if d≡ 14.
GivenS_das above, let the groupT_d=T_S_doverbe defined in the same way as in (<ref>). SoT_d≃ K^×whereK=(√(d)). Note that the matrixS_dabove satisfies the standard assumptions (<ref>) at every finite prime.
Letψ:\→^×be the character such thatψ(x) = e^2 π i xifx ∈andψ(x) = 1forx ∈_p. One obtains a characterθ_S_dofN() \ N()byθ_S_d(1X1) = ψ( Tr(S_d X)). LetΛbe a character ofK^×_K^×such thatΛ|_^× = 1. Then for a measurable functionϕ: ^× G() G() →, we define the Bessel period
B(ϕ, Λ) =
∫_^× T_d() T_d() ∫_N() N()ϕ(tn)Λ^-1(t) θ_S_d^-1(n) dn dt,
where we give the adelic groups the Tamagawa measure.
In the special case whereϕ=ϕ_FwithFa Siegel cusp form of degree 2, the functionB(ϕ_F, Λ)captures information about all the Fourier coefficients ofF. Let us now make this precise.
LetF ∈ S_k(Γ). ThenFhas a Fourier expansion
F(Z)
=∑_T ∈Λ_2 a(F, T) e^2 π i (TZ)
with
Λ_2 = {[[ a b/2; b/2 c ]]: a,b,c∈, a>0, d:=b^2 - 4ac <0}.
For a matrix T = [[ a b/2; b/2 c ]]∈Λ_2, we distinguish its content c(T)=(a,b,c) and discriminant disc(T)=-4det(T). Clearly, c(T)^2 divides disc(T).
From the invariance property ofF, we easily see that its Fourier coefficients satisfy the relation
a(F, T) = a(F, ATA)
for anyA ∈_2().
Therefore, forT_1, T_2 ∈Λ_2, we consider an equivalence relationT_1∼ T_2 ⟺ there exists A ∈_2() such that ^tA T_1 A = T_2and say that the matricesT_1, T_2satisfying this relation are_2()-equivalent. The equivalence class ofTwill be denoted by[T]. Note that:
* the Fourier coefficient a(F, T) depends only on the _2()-equivalence class of T;
* if T_1∼ T_2, then (T_1)= (T_2) and c(T_1)=c(T_2).
Given any T ∈Λ_2, it is clear that there exists a fundamental discriminant d<0 and positive integers L,M such that c(T)=L, disc(T) = dL^2M^2; furthermore d, L, M are uniquely determined by T.
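To make the invariants concrete, here is a small illustrative Python sketch (the sample matrix and the random SL_2(ℤ) words are arbitrary choices, not taken from the text) checking that the content and the discriminant are unchanged under T ↦ ^tATA, as used above.

import random
from math import gcd

def content(T):
    # T = (a, b, c) represents the half-integral matrix [[a, b/2], [b/2, c]]
    a, b, c = T
    return gcd(gcd(a, b), c)

def disc(T):
    a, b, c = T
    return b * b - 4 * a * c          # equals -4 det(T)

def act(A, T):
    # the quadratic form of ^tA T A, for A = [[p, q], [r, s]], again as a triple
    p, q, r, s = A
    a, b, c = T
    return (a*p*p + b*p*r + c*r*r,
            2*a*p*q + b*(p*s + q*r) + 2*c*r*s,
            a*q*q + b*q*s + c*s*s)

random.seed(0)
T = (3, 2, 5)                         # a > 0, disc = 4 - 60 = -56 < 0
for _ in range(100):
    A = (1, 0, 0, 1)                  # random word in the generators of SL_2(Z)
    for _ in range(10):
        if random.random() < 0.5:
            A = (A[0], A[0] + A[1], A[2], A[2] + A[3])   # right-multiply by [[1,1],[0,1]]
        else:
            A = (A[1], -A[0], A[3], -A[2])               # right-multiply by [[0,-1],[1,0]]
    T2 = act(A, T)
    assert content(T2) == content(T) and disc(T2) == disc(T)
print("content and discriminant are SL_2(Z)-invariant for the sample matrix")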
For a fundamental discriminant d < 0 and positive integers L,M, we let H(dM^2; L) denote the set of _2()-equivalence classes of matrices in Λ_2 such that c(T)=L and disc(T) = dL^2M^2. It is clear that given [X] ∈ H(dM^2; L) and F ∈ S_k(Γ), the notation a(F, [X]) is well-defined.
We define
_d(M) = T_d(𝔸) / T_d(ℚ)T_d(ℝ)∏_p<∞U_T_d(m_p),
where we write M= ∏_p<∞ p^m_p and the subgroup U_T_d(m_p) ⊂ T_d(ℚ_p) is defined in Section <ref>.
It is well known that_d(M)can be naturally identified with the class group of the unique order of discriminantD=dM^2in(√(d)); in particular,_d(1)is canonically isomorphic to the ideal class group of(√(d)). From Proposition 5.3 of <cit.> we have
|_d(M)| = (M/u(d)) |_d(1)| ∏_p|M( 1 - (d/p) p^-1),
where u(-3)=3, u(-4)=2 and u(d)=1 for other d. In particular this shows that
(|d|^1/2M)^1-ϵ≪_ϵ |_d(M)| ≪_ϵ (|d|^1/2M)^1+ϵ.
Section 5 of <cit.> constructs for each c ∈ T_d() a matrix ϕ_L,M(c) ∈Λ_2 such that
ϕ_L,M(c) =L [[ M 0; 0 1 ]] S_c [[ M 0; 0 1 ]]
with disc(S_c) = d and the (2,2)-coefficient of S_c equal to 1 modulo M; this implies that c(ϕ_L,M(c))=L, disc(ϕ_L,M(c))=dL^2M^2. The matrix ϕ_L,M(c) depends on some choices, but its class in H(dM^2;L) is independent of those choices. We recall a result from <cit.>.
For each pair of positive integers L,M, the map c ↦ [ϕ_L,M(c)] gives a bijection from _d(M) to H(dM^2; L).
Write L=∏_p<∞ p^ℓ_p, M=∏_p<∞ p^m_p. Define the element H(L,M)∈ G() via
H(L,M)_p:= h_p(ℓ_p, m_p) if p| LM, and H(L,M)_p:=1 if p∤ LM or p=∞,
where the element h_p(ℓ_p, m_p) was defined in (<ref>); note that we now add the subscript p to avoid confusion. For an automorphic form ϕ on G(), we let ϕ^L,M be the automorphic form obtained by right-translation by H(L,M), i.e.,
ϕ^L,M(g):=ϕ(gH(L,M)), for all g ∈ G().
We can now state the relation between Bessel coefficients and Fourier coefficients.
Let F ∈ S_k(Γ) and ϕ_F be its adelization. For any character Λ of _d(M), we have
B(ϕ_F^L,M,Λ)=(2/|_d(M)|)(LM)^-k e^-2π Tr(S_d)∑_c∈_d(M)Λ^-1(c) a(F,ϕ_L,M(c)).
This is a standard calculation; see, e.g., the proof of <cit.>.
Applying orthogonality relations and noting the bijection between_d(M)andH(dM^2; L), we obtain the key formula
∑_[T] ∈ H(dM^2; L)|a(F,[T])|^2= (1/4)(LM)^2k e^4π Tr(S_d) |_d(M)|∑_Λ∈_d(M) |B(ϕ_F^L,M,Λ)|^2.
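For completeness, the orthogonality step behind the last display can be spelled out as follows (a routine expansion, included here only as a reading aid), writing H = _d(M) and summing over its character group:

\sum_{\Lambda \in \widehat{H}} \Big| \sum_{c \in H} \Lambda^{-1}(c)\, a(F,\phi_{L,M}(c)) \Big|^2
  = \sum_{c, c' \in H} a(F,\phi_{L,M}(c))\, \overline{a(F,\phi_{L,M}(c'))}
    \sum_{\Lambda \in \widehat{H}} \Lambda(c' c^{-1})
  = |H| \sum_{c \in H} \big| a(F,\phi_{L,M}(c)) \big|^2 .

Combining this with Lemma <ref> and the bijection of Proposition <ref> yields the key formula above.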
§.§ The refined GGP identity and bounds on Fourier coefficients
In the case that F ∈ S_k(Γ) is a Hecke eigenform, we can refine the formula (<ref>). In this case, the adelization of F generates an irreducible automorphic representation π_F. Let K=ℚ(√(d)) and Λ be a character of K^×\𝔸_K^× such that Λ|_𝔸^× = 1. Liu <cit.> formulated a precise refinement of the Gan–Gross–Prasad conjecture for (SO(5), SO(2)) which applies in particular to the Bessel periods B(ϕ, Λ) where ϕ is any automorphic form in the space of π_F. Recently Furusawa and Morimoto have proved this conjecture <cit.> for tempered representations; in particular their result applies to π_F whenever F is not of Saito–Kurokawa type (the Saito–Kurokawa case is easier and can be dealt with separately; see, e.g., <cit.>).
Let π_F = ⊗_v π_v and Λ be as above and assume that π_F is tempered. Let ϕ=⊗_v ϕ_v be an automorphic form in the space of π_F. Let S be a set of places including the archimedean place such that π_v, K_v, ϕ_v, and Λ_v are all unramified for v ∉ S. Then
|B(ϕ, Λ)|^2/⟨ϕ, ϕ⟩ = C_T/S_π_Fξ(2)ξ(4)L^S(1/2, π_F ×(Λ^-1))/L^S(1, π_F, )L^S(1, χ_d)∏_v∈ S J_Λ_v, θ_v(ϕ_v),
where ξ(s) = π^-s/2Γ(s/2)ζ(s) denotes the completed Riemann zeta function, C_T is a constant relating our choice of local and global Haar measures, S_π_F denotes a certain integral power of 2, related to the Arthur parameter of π_F, and J_Λ_v, θ_v(ϕ_v) equals the local Bessel integral defined in Section <ref>.
Let F ∈ S_k(Γ) and let ϕ_F be its adelization. Assume that F is not a Saito–Kurokawa lift and let π_F be the automorphic representation generated by F. Let L, M be positive integers and let Λ be a character of _d(M). Then
|B(ϕ_F^L,M,Λ)|^2/⟨ϕ_F, ϕ_F⟩≪_(L Md)^/M^4L^3(4 π)^2k |d|^k-2 e^-4π Tr(S_d)/Γ(2k-1)L(1/2, π_F ×(Λ^-1))/L(1, π_F, ).
We return to the setup of Section <ref> but assume also that F is a Hecke eigenform. Write ϕ_F = ⊗_v ϕ_v, π_F =⊗_v π_v and ϕ_F^L,M = ⊗_v ϕ_v^(l_v,m_v). By the uniqueness of local and global Bessel functions, we have the following relation (see also Lemma 5 in <cit.>).
B(ϕ_F^L,M,Λ) = ∏_p|LMB_ϕ_p, Λ_p(h_p(ℓ_p, m_p))/B_ϕ_p, Λ_p(h_p(0, c(Λ_p))) B(ϕ_F^1,C(Λ),Λ)
where C(Λ) = ∏_p|M p^c(Λ_p).
Using Proposition <ref>, we therefore obtain
|B(ϕ_F^L,M,Λ)|^2 ≪_ (L M)^C(Λ)^4/M^4L^3 |B(ϕ_F^1,C(Λ),Λ)|^2.
On the other hand, using (<ref>), Proposition <ref>, and the computation C_TJ_∞≍(4 π)^2k |d|^k-2 e^-4π Tr(S_d)/Γ(2k-1)L(1, χ_d) which follows from <cit.> we have
|B(ϕ_F^1,C(Λ),Λ)|^2/⟨ϕ_F, ϕ_F⟩≪_ C(Λ)^-4 + (4 π)^2k |d|^k-2 e^-4π Tr(S_d)/Γ(2k-1)L(1/2, π_F ×(Λ^-1))/L(1, π_F, )L(1, χ_d)^2
The proof follows by combining (<ref>), (<ref>) and the well-known fact L(1, χ_d) ≫_ |d|^-.
We are ready to prove our main theorem.
Let F ∈ S_k(Γ) be a Hecke eigenform that is not a Saito–Kurokawa lift and let π_F be the automorphic representation generated by F. Let d<0 be a fundamental discriminant and let L, M be positive integers.
* For each character Λ of _d(M), we have
|∑_c∈_d(M)Λ(c) a(F,ϕ_L,M(c)) |^2 ≪_⟨ F, F ⟩(4 π)^2k L^2k-3+|dM^2|^k-1+/Γ(2k-1)L(1/2, π_F ×(Λ))/L(1, π_F, ).
* We have the bound
∑_[T] ∈ H(dM^2; L)|a(F,[T])|^2 ≪_⟨ F, F ⟩(4 π)^2k/Γ(2k-1) L^2k-3 + |dM^2|^k - 3/2 + ∑_Λ∈_d(M)L(1/2, π_F ×(Λ))/L(1, π_F, ).
The first assertion follows from (<ref>), (<ref>), Lemma <ref> and Proposition <ref>. The second assertion follows from (<ref>), (<ref>), (<ref>) and Proposition <ref>.
Note that Theorem <ref> is just a restatement of part (<ref>) of Theorem <ref>. In the terminology of Theorem <ref> we have D=dM^2, H_D = _d(M).
Let F ∈ S_k(Γ) be a Hecke eigenform that is not a Saito–Kurokawa lift. Let d<0 be a fundamental discriminant and M a positive integer. Assume that for some real numbers α, β, the bound
L(1/2, π_F ×(Λ))/L(1, π_F, )≪_ϵ k^α+ϵ (|d|M^2)^β+ϵ
holds for all Λ∈_d(M). Then
1/⟨ F,F ⟩∑_[T] ∈ H(dM^2; L) |a(F,[T])|^2
≪_k^1/2 + α + (2π)^2k/Γ(k)^2 L^-1 |dM^2L^2|^k-1 + β + . Furthermore, for any T ∈Λ_2, we have |a(F,T)|/F_2≪_k^1/4 + α/2 (4π)^k/Γ(k) c(T)^-1/2 - β(T)^k-1+β/2.
The first assertion follows by substituting the bound (<ref>) into Theorem <ref> and using the duplication formula for the Gamma function. For the second assertion, we note that for any T ∈Λ_2 we have [T] ∈ H(dM^2; L) where we put L = c(T), dM^2 = -4(T)/c(T)^2; now the second assertion follows from the first by dropping all but one term.
For Λ∈_d(M), the analytic conductor of (Λ) is bounded by |d|M^2<cit.>. As the analytic conductor of π_F is ≍ k^2, the convexity bound gives L(1/2, π_F ×(Λ)) ≪_ (k|d|M^2)^1+ and the Generalized Lindelöf hypothesis asserts that L(1/2, π_F ×(Λ)) ≪_ (k|d|M^2)^. On the other hand, lower bounds for L(1, π_F, ) follow from good zero-free regions. In particular, it is known unconditionally (use <cit.> and functoriality) that there exists an absolute constant A such that L(1, π_F, ) ≫ k^-A; under GRH we have L(1, π_F, ) ≫_ k^-. It follows from the above discussion that under GRH we may take α = β = 0 in (<ref>).
§.§ Sup-norms of Siegel cusp forms
§.§.§ Preparatory lemmas
The results in this subsection are implicitly or explicitly contained in <cit.>.
Let Y=[[ x y/2; y/2 z ]]∈_2(). We say that Y is Minkowski-reduced if the following holds:
|y| ≤ x ≤ z, x ≫ 1, det Y ≍ xz.
Let T = [[ a b/2; b/2 c ]]∈Λ_2. Then TY is diagonalizable over ℝ with positive eigenvalues; we shall denote them by x_1,x_2.
We need some control over the counting function
N_Y(λ,h)=#{ T ∈Λ_2 : λ-h ≤ x_1, x_2 ≤λ+h}.
The first lemma provides a trivial bound.
Assume that Y is Minkowski-reduced.
Then for λ>0 and h>1 we have
N_Y(λ,h) ≪ h^2λ/(det Y)^3/2.
For convenience, define m=λ-h and M=λ+h.
Then x_1,x_2 are the roots of the polynomial X^2 - (TY)X+(TY).
In particular, if m ≤ x_1, x_2 ≤ M then it follows
2m ≤ x_1+x_2= (TY)=ax+1/2by+cz ≤ 2M
and
m^2 ≤ x_1x_2 = (TY) ≤ M^2.
From (<ref>) we obtain
2m - 1/2by-cz ≤ ax ≤ 2M-1/2by-cz.
Since c>0, substituting (<ref>) in (<ref>),
and using that Y is Minkowski-reduced, we get
c(2M-b/2y-cz)-x/4b^2 ≥(T)x/4≫m^2/z.
Changing variables c̃=c-xM/ Y and b̃=b+yM/ Y we obtain
zc̃^2+x/4b̃^2+y/2b̃c̃≪xM^2/ Y-m^2/z≍M^2-m^2/z.
Completing the square,
z(c̃+y/4zb̃)^2+ Y/4zb̃^2 ≪M^2-m^2/z
and thus |c̃+y/4zb̃| ≤√(M^2-m^2)/z, |b̃| ≤ 2√(M^2-m^2/(Y)). So there are ≪M^2-m^2/z √( Y) choices for the pair (b,c).
For each choice of (b,c), by (<ref>) there are ≪M-m/x choices for
a, giving the desired bound.
Assume that Y is Minkowski-reduced and that h=O(√(k)log k).
Then N_Y(k/4π,h) ≪ k^3/2+ϵ/(det Y)^3/4.
For x > 0 and k > 0 define
g_k(x)=x^k/2exp(-2π x), x_k=k/4π.
The function g_k reaches its maximum at x_k, and for k ≫ 1 we have
* if |x-x_k| ≥√(k)log(k) then g_k(x) ≪ k^-100 g_k(x_k),
* if x ≥ 2x_k then g_k(x) ≤exp(-π/3(x-x_k))g_k(x_k).
The derivative of log(g_k) is given by
d/dxlog g_k(x) = 2π (x_k/x-1),
thus g_k increases from 0 to x_k and decreases after this point, which
establishes the first claim.
To show (1), it suffices to bound g_k(x_k ±√(k)log(k)).
If x=x_k-√(k)log(k) then we have
g_k(x)/g_k(x_k)=exp(
k/2log(1-4πlog (k)/√(k))
+2π√(k)log(k))
≤exp(-4π^2(log k)^2).
On the other hand if x ≥ x_k then we have
log g_k(x) = log(g_k(x+x_k/2))+2π∫_x+x_k/2^x (x_k/t-1) dt
≤log(g_k(x_k))+2π∫_x+x_k/2^x x_k-x/x+x_k dt
= log(g_k(x_k))-π(x-x_k)^2/x+x_k.
Thus
g_k(x) ≤ g_k(x_k)exp(-π(x-x_k)^2/x+x_k).
In particular, for x=x_k+√(k)log(k), by (<ref>) and using the fact that √(k)log(k) ≤2/ek, we have
g_k(x) ≤ g_k(x_k) exp(-4π^2e/e+8π(log k)^2).
This proves (1).
Finally, if x ≥ 2x_k then x-x_k/x+x_k≥1/3 hence (<ref>)
gives (2).
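The two claims of the lemma are easy to spot-check numerically at the relevant boundary points (by the monotonicity argument in the proof, this is where they are tightest). The Python sketch below works with log g_k to avoid overflow; the value k = 10^6 is an arbitrary illustration, chosen large enough that x_k - √k log k > 0.

import math

def log_gk(x, k):
    # log of g_k(x) = x^(k/2) * exp(-2*pi*x)
    return 0.5 * k * math.log(x) - 2.0 * math.pi * x

k = 1.0e6
xk = k / (4.0 * math.pi)
h = math.sqrt(k) * math.log(k)

# Claim (1): at x_k +/- sqrt(k) log k the ratio g_k(x)/g_k(x_k) is already below k^(-100).
for x in (xk - h, xk + h):
    assert log_gk(x, k) - log_gk(xk, k) < -100.0 * math.log(k)

# Claim (2): for x >= 2 x_k, g_k(x) <= exp(-pi/3 (x - x_k)) g_k(x_k).
for t in (2.0, 3.0, 5.0, 10.0):
    x = t * xk
    assert log_gk(x, k) - log_gk(xk, k) <= -math.pi / 3.0 * (x - xk)

print("claims (1) and (2) hold at the sampled points for k =", k)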
§.§.§ Main result
Let F ∈ S_k(Γ) be a Hecke eigenform that is not a Saito–Kurokawa lift and normalized so that F_2=1.
Let 0≤α and 0≤β≤ 1 be constants such that the bound (<ref>) holds for all Λ∈_d(M), for all fundamental discriminants d and positive integers M.
Then for all Z=X+iY ∈ℍ_2 we have |(det Y)^k/2F(Z)| ≪_ϵ k^5/4+β+α/2+ϵ.
Write the Fourier expansion of F and apply
Corollary <ref>:
( Y)^k/2F(Z) = ( Y)^k/2∑_T ∈Λ_2 a(F, T) e^2 π i (TZ)
≪_ϵ(4π)^k/Γ(k) k^1/4+α/2+ϵ∑_T ∈Λ_2c(T)^-1/2-β/(T)^1/2-β/2-ϵ(TY)^k/2e^-2 π(TY) .
By Stirling's formula, we have
(4π)^k/Γ(k)≍ k^1/2(4π e/k)^k .
We then proceed as in <cit.>: the matrix X=TY is diagonalizable
with positive eigenvalues x_1,x_2, and (X)^k/2e^-2π(X)=g_k(x_1)g_k(x_2).
Let 𝒳_0 be the set of matrices T ∈Λ_2 such that
k/4π-√(k)log(k) ≤ x_1, x_2 ≤k/4π+√(k)log(k).
For those matrices T ∈𝒳_0, we have (T) ≍ k^2 ( Y)^-1 and thus
by (<ref>) and bounding c(T)≥ 1 trivially we have
(4π)^k/Γ(k)
k^1/4+α/2+ϵc(T)^-1/2-β/(T)^1/2-β/2-ϵ(TY)^k/2e^-2 π(TY)≪ ( Y)^1/2-β/2
k^-1/4+β+α/2+ϵ.
Furthermore, by Lemma <ref>, assuming (without loss of generality) that Y is
Minkowski-reduced, we have
#𝒳_0 ≪ k^3/2+ϵ( Y)^-3/4 such matrices T,
and thus we get a contribution
( Y)^k/2∑_T ∈𝒳_0 a(F, T) e^2 π i (TZ)≪_ϵ ( Y)^-1/4-β/2k^5/4+β+α/2+ϵ.
Now for j > 0, let 𝒳_j be the set of matrices T ∈Λ_2 such that 0<x_1,x_2 ≤ (j+1)k/4π and T ∉⋃_i=0^j-1𝒳_i.
Then combining Lemma <ref>, part (1) of Lemma <ref>
and (<ref>) we get
( Y)^k/2∑_T ∈𝒳_1 a(F, T) e^2 π i (TZ)
≪ k^-50( Y)^-3/2.
Finally, for j ≥ 2, combining Lemma <ref>, part (2) of Lemma <ref>
and (<ref>) we get
( Y)^k/2∑_T ∈𝒳_j a(F, T) e^2 π i (TZ)
≪exp(-jk/13)( Y)^-3/2,
and thus
(det Y)^k/2F(Z) = (det Y)^k/2∑_j ≥ 0∑_T ∈𝒳_j a(F, T) e^2 π i Tr(TZ)≪_ϵ (det Y)^-1/4-β/2-ϵ k^5/4+β+α/2+ϵ≪ k^5/4+β+α/2+ϵ. ] |
http://arxiv.org/abs/2307.06866v1 | 20230710004749 | Modeling correlated uncertainties in stochastic compartmental models | [
"Konstantinos Mamis",
"Mohammad Farazmand"
] | q-bio.PE | [
"q-bio.PE",
"math.DS",
"math.PR",
"37N25, 60H10"
] |
Modeling correlated uncertainties in stochastic compartmental models
(1) Konstantinos Mamis ([email protected])
These authors contributed equally to this work.
(2) Mohammad Farazmand ([email protected])
These authors contributed equally to this work.
(1) Department of Applied Mathematics, University of Washington, Seattle, 98195-3925, WA, USA
(2) Department of Mathematics, North Carolina State University, 2311 Stinson Drive, Raleigh, 27695-8205, NC, USA
We consider compartmental models of communicable disease with uncertain contact rates. Stochastic fluctuations are often added to the contact rate to account for uncertainties. White noise, which is the typical choice for the fluctuations, leads to significant underestimation of the disease severity. Here, starting from reasonable assumptions on the social behavior of individuals, we model the contacts as a Markov process which takes into account the temporal correlations present in human social activities. Consequently, we show that the mean-reverting Ornstein–Uhlenbeck (OU) process is the correct model for the stochastic contact rate. We demonstrate the implication of our model on two examples: a Susceptibles-Infected-Susceptibles (SIS) model and a Susceptibles-Exposed-Infected-Removed (SEIR) model of the COVID-19 pandemic. In particular, we observe that both compartmental models with white noise uncertainties undergo transitions that lead to the systematic underestimation of the spread of the disease. In contrast, modeling the contact rate with the OU process significantly hinders such unrealistic noise-induced transitions. For the SIS model, we derive its stationary probability density analytically, for both white and correlated noise. This allows us to give a complete description of the model's asymptotic behavior as a function of its bifurcation parameters, i.e., the basic reproduction number, noise intensity, and correlation time. For the SEIR model, where the probability density is not available in closed form, we study the transitions using Monte Carlo simulations. Our study underscores the necessity of temporal correlations in stochastic compartmental models and the need for more empirical studies that would systematically quantify such correlations.
August 12, 2023
===================
§ INTRODUCTION
Compartmental models describe the spread of communicable diseases in a population <cit.>. In such models, the population of a community is partitioned into disjoint compartments, e.g., the susceptible and the infected, each one containing all individuals with the same disease status <cit.>. The state variables in a compartmental model are the numbers of individuals in each compartment. The model parameters, e.g., the contact, incubation, and curing rates, determine the flow of individuals between compartments. In the present work, we study how the model predictions are affected by the presence of uncertainties in the compartmental model parameters.
Determining the value of model parameters is a delicate task that involves estimation and averaging of data over the whole population <cit.>. As such, model parameters are subject to uncertainties, arising from the variation of social and biological factors among individuals. Among the model parameters, average contact rate is the most volatile, due to its strong dependence on the social activity that varies from person to person, and also changes over time <cit.>. Uncertainties in the contact rate λ(t) are often modeled as a stochastic perturbation ξ(t) with intensity σ around a constant mean λ̅, so that λ(t)=λ̅+σξ(t). A common choice for ξ(t) is Gaussian white noise, see e.g. <cit.>. The remaining model parameters, such as the average incubation or curing rate, depend mainly on the biology of the virus, and, while every individual responds to the infection differently, they vary less compared to the contact rate and can be considered constant.
In a recent study <cit.>, we studied the role of temporal correlations, which are present in social activities of individuals, on the contact rate λ(t). Using standard results from the theory of stochastic processes, and assuming that the perturbation ξ(t) has an exponentially decreasing autocorrelation function, we showed that the only admissible model for the stochastic contact rate is the Ornstein–Uhlenbeck (OU) process. However, the assumption of exponentially decreasing autocorrelation had remained unjustified in Ref. <cit.>.
A main contribution of the present paper is to derive the OU process without making such an onerous assumption. In fact, as we show in Sec. <ref>, the OU process emerges naturally by making quite simple and realistic assumptions on the contacts of each individual in the population.
Then, we focus on determining the final size of disease in a population, as predicted by compartmental models with white or OU noise fluctuations in contact rate. We study two compartmental models. First, we consider the stochastic Susceptibles-Infected-Susceptibles (SIS) model, which is adequate for modeling sexually transmitted or bacterial diseases such as gonorrhea or syphilis. For the SIS model, we determine the stationary probability density (PDF) of the infected population fraction in closed form. This allows us to completely classify the asymptotic state of the model as a function of its bifurcation parameters, i.e., basic reproduction number, noise intensity, and the correlation time of the noise. As the second model, we consider the stochastic Susceptibles-Exposed-Infected-Removed (SEIR) model for COVID-19 pandemic in the US during the Omicron variant. Using Monte Carlo simulations, we study the bifurcations of the asymptotic probability density as in the SIS model.
Our main qualitative result is that, for increasing levels of white noise in the contact rate, both compartmental models undergo a noise-induced transition, whereby the stationary PDF of the infected exhibits an additional peak near zero, and far away from the deterministic equilibrium. This unrealistic behavior leads to significant underestimation of the severity of the disease.
In contrast, under OU noise this transition is suppressed, with most of the probability mass of the stationary PDF being concentrated around the deterministic equilibrium.
§.§ Related work
Stochasticity has been incorporated into many epidemiological models <cit.>. One principled approach for deriving stochastic compartmental models begins by modeling of the number of individuals in each compartment as continuous-time Markov chain birth–death processes or as branching processes, see e.g. <cit.>. Then, by assuming their state variables to be continuous, the Markov chain models result in a system of stochastic differential equations (SDEs) whose parameters contain white noise uncertainties. Another approach is to directly add noise to the model parameters; see e.g. <cit.>. This approach is more straightforward, but the choice of the type of noise is largely arbitrary: The most common choice in literature is Gaussian white noise <cit.>. However, OU noise has also been proposed for parameter perturbation in biological systems, see e.g., <cit.>, because OU noise combines the modeling of stochastic fluctuations with the stabilization around an equilibrium point, due to its mean-reverting property.
Recently, the COVID-19 pandemic has renewed interest in stochastic modeling of disease spread; see e.g. <cit.> for a survey of existing forecast models for COVID-19, and <cit.> where a lognormally distributed process has been considered for the stochastic fluctuations in contact rate of a COVID-19 compartmental model, to account for the presence of superspreaders in the population. However, most of the studies that use stochastic compartmental models to make predictions for the COVID-19 pandemic rely primarily on simulations, see e.g., <cit.>.
Most of the analysis performed on stochastic compartmental models has been focused on the derivation of conditions for the eradication or persistence of the disease in the population; see e.g., <cit.> for compartmental models with white noise uncertainties. Recently, this line of work has been extended to models with OU uncertainties <cit.>, and Lévy noises <cit.> to account for abrupt changes (jumps) in disease transmission.
The focus of the present work is different; apart from conditions for the disease to become endemic, we are also interested in the predictions of stochastic compartmental models on the final disease size. Analytic work in this direction is scarce; see e.g. Ref. <cit.> which uses the Fokker–Planck equation to study a scalar compartmental model under white noise fluctuations in contact rate.
§.§ Outline
This paper is organized as follows. In Sec. <ref>, we present our model of uncertainties in the contact rate. In Sec. <ref>, we study the SIS model, for both cases of white and OU noise fluctuations in contact rate. We analytically determine the noise-induced transitions the stochastic SIS model undergoes, and we quantify the effect of noise correlations in contact rate. In Sec. <ref>, we study the noise-induced transitions of a stochastic SEIR model for the Omicron wave of the COVID-19 pandemic in the US, by using direct Monte Carlo simulations. In Sec. <ref>, we make our concluding remarks and outline possible directions for future work.
§ MODELING UNCERTAINTIES IN CONTACT RATE
The average contact rate λ, defined as the average number of adequate contacts per individual per unit time <cit.>, <cit.>, is the main source of uncertainty in compartmental models. In order to determine its properties as a random process, we begin with the cumulative number of contacts C_n(t) of the n-th individual up to time t. We denote the incremental number of contacts by Δ C_n(t) which measures the number of constants that the n-th individual makes during the time interval [t,t+Δ t], with Δ t being a reference time interval, e.g., a day or a week. The contact rate λ_n of the n-th individual is then given by λ_n(t) = Δ C_n(t)/Δ t.
Next, we make the following assumptions on the social behavior of each individual.
* The average number of contacts that individuals make in a time interval is proportional to the length of the time interval, so that
𝖤[Δ C_n(t)]=μ_n Δ t, for some constant μ_n>0.
* The number of contacts Δ C_n(t) is subject to time-varying random fluctuations, the intensity of which is also proportional to reference unit time Δ t. For instance, this assumption implies that the contacts of an individual per week are prone to more uncertainty than the contacts of the same individual per day.
* After a period of relatively high or low contacts compared to the average number μ_nΔ t, the contacts of the individual will tend towards the mean μ_nΔ t. In other words, high or low numbers of contacts are not sustained for prolonged periods of time.
Under the above assumptions, we formulate the conditional probability of Δ C_n(t+δ t)-Δ C_n(t), given the number of contacts Δ C_n(t), where δ t≪Δ t is a small time increment. Note that Δ C_n(t+δ t) is the number of contacts that the individual makes over the time interval [t+δ t,t+δ t+Δ t]. Therefore, Δ C_n(t+δ t)-Δ C_n(t) measures the variations in the number of contacts as the reference time interval [t,t+Δ t] is shifted ever so slightly (see Fig. <ref> for an illustration).
Based on the above assumptions, for given positive integers i and j, we define the conditional probability,
𝖯[Δ C_n(t+δ t) -Δ C_n(t)=j|Δ C_n(t)=i]=
1/2[(κ_n Δ t)^2-θ_n(μ_n Δ t-i)]δ t j=-1
1/2[(κ_n Δ t)^2+θ_n(μ_n Δ t-i)]δ t j=+1
1-(κ_n Δ t)^2δ t j=0
0 otherwise,
where κ_n, θ_n and μ_n are positive constants. We will see shortly that κ_n controls the noise intensity and θ_n determines the time correlation
of the resulting stochastic process. For simplicity, we assume that these constants are identical across the population. Therefore, we omit the subscript n and simply denote them by κ, θ and μ.
The conditional probability (<ref>) dictates the following. If the current number of contacts Δ C_n(t) = i is greater than the mean μΔ t, then it is more probable for the number of contact to decrease by one after a short time interval δ t has passed (case j=-1). Conversely, if the number of contacts Δ C_n(t) = i is less than the mean, it is more likely for the number of contacts to increase by one in the near future (case j=+1). Furthermore, it assumes that the probability that the number of contacts jump by more than one within a short time δ t is negligible. Finally, the probability for case j=0 (no change within time δ t) is defined to ensure that the total probability adds up to one.
We note that the constant θ plays a crucial role here. If θ =0, the number of contacts increases or decreases with the same probability, regardless of its past history (Brownian motion). In contrast, θ >0 introduces time correlations into the process, so that the number of contacts has a tendency to revert back to its mean value.
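As an illustration of this mean-reverting effect, the conditional law (<ref>) can be simulated directly. The following Python sketch uses purely illustrative parameter values (μ, θ, κ, Δt, δt are not taken from data); the small clamp guards against the rare event that the formal expression for a probability becomes negative, a regime the model implicitly excludes.

import random

# Illustrative parameters only: mean contacts mu*Dt, reversion rate theta, noise level kappa.
mu, theta, kappa = 10.0, 0.5, 2.0
Dt = 1.0        # reference time interval Delta t
dt = 1e-3       # small increment delta t
steps = 200000

random.seed(1)
i = 30          # start well above the mean mu*Dt = 10
path = []
for _ in range(steps):
    p_down = max(0.0, 0.5 * ((kappa * Dt) ** 2 - theta * (mu * Dt - i)) * dt)
    p_up   = max(0.0, 0.5 * ((kappa * Dt) ** 2 + theta * (mu * Dt - i)) * dt)
    u = random.random()
    if u < p_down:
        i -= 1                      # contacts decrease by one (case j = -1)
    elif u < p_down + p_up:
        i += 1                      # contacts increase by one (case j = +1)
    path.append(i)

# Mean reversion: the time average over the second half of the run is close to mu*Dt.
tail = path[len(path) // 2:]
print("time average of Delta C_n:", sum(tail) / len(tail), "(target", mu * Dt, ")")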
From Eq. (<ref>), we calculate the conditional mean value and variance,
𝖤[Δ C_n(t+δ t)-Δ C_n(t)|Δ C_n(t)=i]=θ(μΔ t-i)δ t,
𝖵𝖺𝗋[Δ C_n(t+δ t)-Δ C_n(t)|Δ C_n(t)=i]=(κΔ t)^2δ t.
Using Eq. (<ref>), and the definition λ_n(t) = Δ C_n(t)/Δ t, we calculate the conditional mean and variance,
𝖤[λ_n(t+δ t) -λ_n(t)|λ_n(t)=α]=
1/Δ t𝖤[Δ C_n(t+δ t)-Δ C_n(t)|Δ C_n(t)=αΔ t]=θ(μ-α)δ t,
𝖵𝖺𝗋[λ_n(t+δ t) -λ_n(t)|λ_n(t)=α]=
1/(Δ t)^2𝖵𝖺𝗋[Δ C_n(t+δ t)-Δ C_n(t)|Δ C_n(t)=αΔ t]=κ^2δ t,
where α=i/Δ t.
Assuming no dependence between the incremental contacts of different individuals, λ_n(t) are independent random variables. This means that λ_n(t+δ t)-λ_n(t) are also independent random variables, with the same mean value and variance given by Eq. (<ref>). Hence, the central limit theorem implies that the average over the whole population of N individuals,
λ(t+δ t)-λ(t)=1/N∑_n=1^N(λ_n(t+δ t)-λ_n(t)),
follows a normal distribution with mean θ(μ-α)δ t and variance κ^2δ t/N. As a result, we have
λ(t+δ t)-λ(t)=θ(μ-α)δ t+D√(δ t)𝒩(t),
where D=κ/√(N), and 𝒩(t) is the standard normal distribution, with 𝒩(t) and 𝒩(s) being independent for t≠ s.
Recall that the expressions in Eq. (<ref>) are conditioned on λ_n(t)=α for all n=1,…,N, which implies λ(t)=(1/N)∑_n=1^Nλ_n(t)=α. Therefore Eq. (<ref>) is equivalent to
λ(t+δ t)-λ(t)=θ(μ-λ(t))δ t+D√(δ t)𝒩(t).
Dividing by δ t and taking the limit δ t→ 0, we obtain the Langevin equation,
dλ(t)/dt = θ (μ - λ(t)) + Dξ^WN(t),
where ξ^WN(t) is the standard white noise.
Equation (<ref>) is the SDE for an Ornstein–Uhlenbeck process. Therefore, the average contact rate λ(t) is an OU process.
The stationary solution of Eq. (<ref>) is a Gaussian process with the following mean and autocovariance <cit.>,
𝖤[λ(t)]=μ, 𝖢𝗈𝗏[λ(t)λ(s)]=D^2/2θexp(-θ| t-s|).
Introducing the new parameters τ=1/θ, σ^2=D^2τ^2, mean value and autocovariance of Eq. (<ref>) are recast into
𝖤[λ(t)]=μ, 𝖢𝗈𝗏[λ(t)λ(s)]=σ^2/2τexp(-| t-s|/τ).
Now it is easy to see that τ=1/θ is the correlation time of the average contact rate λ(t). It can be shown that, as τ→ 0, the autocovariance (<ref>) tends to the delta function, corresponding to white noise with intensity σ <cit.>.
We note that the autocovariance in Eq. (<ref>) was assumed in the earlier derivation of Mamis and Farazmand <cit.>. Here, we have shown that this property can be deduced naturally from the conditional probability (<ref>).
In the following sections, to simplify the notation, we write λ(t)=λ̅+σξ^OU(t), where λ̅=μ is the mean value, σ is the noise intensity, and ξ^OU(t) is the standard OU process. The standard OU process ξ^OU(t) has zero mean and its autocovariance is given by
𝖤[ξ^OU(t)ξ^OU(s)]=1/2τexp(-| t-s|/τ).
With this expression, λ(t) =λ̅+σξ^OU(t) satisfies the Langevin Eq. (<ref>) and its mean and covariance are given by Eq. (<ref>).
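The Langevin dynamics derived above is straightforward to simulate with an Euler–Maruyama scheme. The Python sketch below (illustrative parameter values; NumPy assumed) checks the stationary mean μ=λ̅ and variance σ^2/(2τ) of the contact rate.

import numpy as np

# Illustrative values only; lambda_bar, sigma, tau are not taken from any data set.
lam_bar, sigma, tau = 0.3, 0.05, 2.0        # mean contact rate, noise intensity, correlation time
theta, D = 1.0 / tau, sigma / tau           # parameters of the Langevin equation for lambda(t)

dt, T = 1e-3, 2000.0
n = int(T / dt)
rng = np.random.default_rng(0)

lam = np.empty(n)
lam[0] = lam_bar
for k in range(n - 1):
    lam[k + 1] = lam[k] + theta * (lam_bar - lam[k]) * dt + D * np.sqrt(dt) * rng.standard_normal()

burn = n // 10                               # discard a transient
print("sample mean    :", lam[burn:].mean(), "(target", lam_bar, ")")
print("sample variance:", lam[burn:].var(), "(target sigma^2/(2 tau) =", sigma**2 / (2 * tau), ")")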
§ SIS MODEL
The Susceptibles-Infected-Susceptibles (SIS) model is described by the equations
dS(t)/dt=-(λ/N)S(t)I(t)+γ I(t),
dI(t)/dt=(λ/N)S(t)I(t)-γ I(t),
where S(t), I(t) are the numbers of susceptible and infected individuals, respectively, and N is the total population. SIS model parameters are the average contact rate λ and the average curing rate γ, which is the inverse of the average time an individual needs to recover. SIS Eq. (<ref>) is suitable for modeling diseases that are curable, and whose infection does not confer protective immunity; thus the infected become susceptibles again after their recovery. This is the case for most bacterial and sexually transmitted diseases <cit.>.
Note that (λ/N)S(t)I(t) is the simplest form for the disease transmission term, and it is based on the assumption of homogeneous mixing of population <cit.>. Under this assumption, out of the total number of contacts that each susceptible individual makes on average per unit time, λ I/N contacts are with the infected, resulting in disease transmission. Transmission term without the division with N is sometimes used <cit.>; however, this choice is not supported by empirical evidence <cit.>.
Under the usual assumption of constant population S(t)+I(t)=N, SIS model (<ref>) can be reduced to one scalar ordinary differential equation (ODE) <cit.>. Defining the infected fraction of the population, X(t)=I(t)/N∈[0,1], as the state variable, the scalar ODE is written as
dX(t)/dt=λ X(t)(1-X(t))-γ X(t).
The equilibrium points of ODE (<ref>) are x_0=0, and x_1=(λ-γ)/λ. The stability of equilibrium points depends on the basic reproduction number R_0=λ/γ:
* For R_0<1, equilibrium point x_0=0 is stable. In this case, the disease is eventually eradicated from the population.
* For R_0>1, equilibrium point x_0=0 is unstable and x_1=(λ-γ)/λ is stable. In this case, the disease persists in the population and becomes endemic.
In the endemic case, R_0>1, we derive a characteristic time scale for ODE (<ref>). For this, we linearize ODE (<ref>) around the stable equilibrium x_1, and calculate its Lyapunov exponent λ-γ (see also <cit.>). The characteristic time scale is determined as the inverse of the Lyapunov exponent, η=(λ-γ)^-1.
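For concreteness, the deterministic quantities introduced so far can be computed for a hypothetical disease; the parameter values below are purely illustrative (they correspond to R_0 = 1.4) and SciPy is assumed only for the time integration.

from scipy.integrate import solve_ivp

# Illustrative parameters only (R_0 = 1.4); they are not taken from the text or from data.
lam, gamma = 0.14, 0.10
R0 = lam / gamma
x1 = (lam - gamma) / lam          # endemic equilibrium for R_0 > 1
eta = 1.0 / (lam - gamma)         # characteristic time scale

sol = solve_ivp(lambda t, x: lam * x * (1 - x) - gamma * x,
                t_span=(0.0, 10 * eta), y0=[1e-3], rtol=1e-8, atol=1e-10)

print("R_0 =", R0, " x_1 =", x1, " eta =", eta)
print("X at t = 10*eta:", sol.y[0, -1], "(approaches x_1)")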
Under the stochastic perturbation of the contact rate λ(t)=λ̅+σξ(t), the SIS model reads
dX(t)/dt=λ̅ X(t)(1-X(t))-γ X(t)+σ X(t)(1-X(t))ξ(t).
Eq. (<ref>) is a stochastic differential equation under multiplicative noise excitation, since noise excitation ξ(t) is multiplied by a state-dependent function.
In the remainder of this section, we determine the asymptotic behavior of SIS model (<ref>) for two cases: 1. when ξ(t) is the standard Gaussian white noise, and 2. when ξ(t) is the standard OU process. In particular, we show that the OU process, as derived in Sec. <ref>, is more suitable for modeling uncertainties in the contact rate.
In contrast to the deterministic SIS model (<ref>), the stochastic SIS model (<ref>) exhibits a richer asymptotic behavior that includes regions of bistability, as well as regions with R_0>1 where x_0=0 is stable.
For the SIS model under white noise, its stationary PDF is easily determined as the stationary solution of the classical Fokker–Planck equation, see, e.g., <cit.>. For the case of OU fluctuations in contact rate, determining the stationary PDF is not as straightforward; the derivation and solution of Fokker–Planck-like equations, corresponding to stochastic differential equations (SDEs) excited by correlated noise, has been the topic of research for many decades <cit.>. Recently <cit.>, we have proposed a nonlinear Fokker–Planck equation whose validity is not limited to small correlation times of the stochastic excitation (see Appendix <ref>). As we show in Sec. <ref>, the stationary solution to this nonlinear Fokker–Planck equation is given in explicit closed form for the case of the stochastic SIS model. Thus, stochastic SIS model under OU perturbation is a rare instance of a nonlinear SDE under correlated noise whose stationary solution can be analytically determined.
By having the stationary PDFs in explicit form for both white and OU models, we are able to systematically investigate the noise-induced transitions that the stochastic SIS model undergoes, for increasing levels of noise.
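Before specializing to the two noise models, it may help to see them side by side in a direct Monte Carlo simulation of the stochastic SIS model above. The Python sketch below uses the Itō convention for the white-noise case, simulates the standard OU process by its own Langevin equation, and clips the state to [0,1] as a crude regularization; all parameter values are illustrative.

import numpy as np

# Illustrative parameters (not fitted to any disease): R_0 = 1.4, moderate noise.
lam_bar, gamma = 0.14, 0.10
sigma, tau = 0.15, 2.0
dt, T, N = 1e-2, 200.0, 5000
n = int(T / dt)
rng = np.random.default_rng(1)

def drift(X, lam):
    return lam * X * (1 - X) - gamma * X

# --- white-noise contact rate (Ito interpretation) ---
Xw = np.full(N, 0.3)
for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal(N)
    Xw += drift(Xw, lam_bar) * dt + sigma * Xw * (1 - Xw) * dW
    np.clip(Xw, 0.0, 1.0, out=Xw)

# --- OU (correlated) fluctuations in the contact rate ---
Xo = np.full(N, 0.3)
xi = rng.standard_normal(N) / np.sqrt(2 * tau)     # start xi^OU from its stationary law
for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal(N)
    Xo += drift(Xo, lam_bar + sigma * xi) * dt
    xi += -xi / tau * dt + dW / tau
    np.clip(Xo, 0.0, 1.0, out=Xo)

x1 = (lam_bar - gamma) / lam_bar
print("deterministic equilibrium x_1 :", x1)
print("white noise: mean, P(X < 0.05):", Xw.mean(), (Xw < 0.05).mean())
print("OU noise   : mean, P(X < 0.05):", Xo.mean(), (Xo < 0.05).mean())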
§.§ SIS model under white noise
In this section, we consider ξ(t) to be the standard white noise ξ^WN(t) with zero mean value and autocorrelation
𝖤[ξ^WN(t)ξ^WN(s)]=δ(t-s),
where 𝖤[·] denotes the expected value and δ(t-s) is Dirac's delta function.
For the stochastic SIS model (<ref>) under white noise, we calculate the stationary PDF of X(t) as the stationary solution to the corresponding Fokker–Planck equation (see Appendix <ref>),
p_0(x)=Cx^2(1-R_0^-1)/(σ^2/λ̅)-2+ϖ(1-x)^-2(1-R_0^-1)/(σ^2/λ̅)-2+ϖexp(-2R_0^-1/(σ^2/λ̅)1/1-x),
where C is a normalization factor, so that ∫_ℝ p_0(x) dx=1. Parameter ϖ models the difference, on the level of stationary PDF, between the Itō (ϖ=0) and Stratonovich (ϖ=1) solution of SDE (<ref>) under white noise. This difference stems from the different definition of integrals with respect to Wiener process in the two approaches <cit.>.
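The closed-form expression above is easy to evaluate numerically. The following Python sketch normalizes it on a grid and locates its mode for one illustrative pair of dimensionless parameters (the values R_0 = 1.7 and σ^2/λ̅ = 0.4 are chosen arbitrarily for demonstration).

import numpy as np

def stationary_pdf(x, R0, s2, varpi):
    # p_0(x) ∝ x^A (1-x)^B exp(-c_exp/(1-x)) on (0,1), with
    # A = 2(1-1/R0)/s2 - 2 + varpi, B = -2(1-1/R0)/s2 - 2 + varpi, c_exp = 2/(R0*s2),
    # where s2 = sigma^2/lambda_bar and varpi = 0 (Ito) or 1 (Stratonovich).
    A = 2 * (1 - 1 / R0) / s2 - 2 + varpi
    B = -2 * (1 - 1 / R0) / s2 - 2 + varpi
    c_exp = 2 / (R0 * s2)
    return x**A * (1 - x)**B * np.exp(-c_exp / (1 - x))

R0, s2 = 1.7, 0.4                     # illustrative dimensionless parameters
x = np.linspace(1e-6, 1 - 1e-6, 200001)
for varpi, name in ((0, "Ito"), (1, "Stratonovich")):
    p = stationary_pdf(x, R0, s2, varpi)
    p /= p.sum() * (x[1] - x[0])      # numerical normalization (the factor C of the text)
    print(name, ": mode at x =", x[np.argmax(p)],
          ", deterministic equilibrium x_1 =", 1 - 1 / R0)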
Stationary PDF (<ref>) depends on two dimensionless parameters: the basic reproduction number of the underlying deterministic model, R_0=λ̅/γ, and the relative variance of the noise, σ^2/λ̅, measuring the noise intensity. Using these dimensionless parameters, we study the bifurcation diagram of the stationary PDF as shown in Fig. <ref> (see Appendix <ref> for calculations). As we derive in Appendix <ref>, both Itō and Stratonovich solutions result in a disease eradication for R_0<1, as is the case for the deterministic SIS model. Therefore, we only consider the range R_0>1 in Fig. <ref>.
The different regions in Fig. <ref>, marked by roman numerals, correspond to different shapes of the stationary PDF of the infected population fraction X:
* Unimodal with mode at a non-zero x_m: The most probable outcome is the disease to become endemic in the population.
* Bimodal with one mode at zero and one at a non-zero x_m: In this case, the most probable outcome is either eradication of the disease or an endemic level x_m of infection in the population.
* Unimodal with mode at zero: The disease is most likely eradicated from the population.
* Delta function at zero, present only for Itō solution: disease eradication is certain. This is the only case of absolute eradication of the disease for R_0>1. In the Itō solution, the disease persists in the population for (σ^2/λ̅)<2(1-1/R_0) (region below the green curve in Fig. <ref>A), written equivalently as
R_0-σ^2/2γ>1.
Eq. (<ref>) is the disease persistence condition derived in <cit.>, as expressed for the stochastic SIS model (<ref>). Thus, for the Itō solution, increase in noise intensity results eventually in the eradication of the disease from the population, regardless of the value of R_0. On the other hand, region IV is absent from Fig. <ref>B, meaning that, under Stratonovich interpretation, the disease is never surely eradicated from the population for R_0>1, see also <cit.>.
Apart from the PDF shape, another important measure of disease severity is the value of x_m, at which the non-zero PDF mode is exhibited. As we observe in Fig. <ref>, for low levels of noise, the stationary PDFs of the infected population fraction are narrow and unimodal, exhibiting their mode at the stable equilibrium x_1 of the underlying deterministic SIS model. As the noise level increases, the PDF mode x_m moves away from the deterministic equilibrium. This phenomenon is called the peak drift <cit.>, and is commonplace in SDEs with multiplicative noise excitation such as SDE (<ref>).
The color in Fig. <ref> encodes the peak drift phenomenon, quantifying the difference between the coordinates x_m (non-zero PDF mode) and x_1 (deterministic equilibrium point) as a percentage of x_1. Figure <ref> reveals two opposite trends in peak drift. In regions Ia and IIa to the left of the vertical dashed line, where R_0<2, higher noise intensity σ^2/λ̅ makes the non-zero PDF mode x_m drift towards zero.
In contrast, in regions Ib and IIb, where R_0>2, higher noise intensity σ^2/λ̅ makes the non-zero PDF mode x_m drift towards one.
The above discussion shows that, by increasing the relative noise intensity σ^2/λ̅, the stochastic SIS model undergoes a noise-induced transition <cit.>, i.e., a bifurcation in the shape of its stationary PDF. The type of noise-induced transition is determined by the value of the deterministic dimensionless parameter R_0:
* Type 1: For 1<R_0<1.5, the stationary PDF stays always unimodal. By increasing σ^2/λ̅, the PDF peak drifts from the deterministic equilibrium x_1 towards zero. When the relative noise intensity σ^2/λ̅ crosses the level marked by the blue curve in Fig. <ref>, the PDF mode is located at zero. Further increase of σ^2/λ̅ results in more probability mass being accumulated at zero. In Figs. <ref>A, B, we show an example of this noise-induced transition, for R_0=1.4.
* Type 2: For 1.5<R_0<2, the PDF mode shifts towards zero as σ^2/λ̅ increases, which is similar to the previous case. However, in this case, when σ^2/λ̅ crosses the blue curve level, the PDF becomes bimodal, with the additional peak located at zero. By increasing σ^2/λ̅ further, more probability mass accumulates at zero, and, after σ^2/λ̅ crosses the level marked by the magenta curve in Fig.<ref>, the PDF becomes unimodal at zero. In Figs. <ref>C, D, we show an example of this noise-induced transition for R_0=1.7.
* Type 3: For R_0>2, PDF peak drift phenomenon has the opposite trend; by increasing σ^2/λ̅, the PDF peak drifts towards higher values. When σ^2/λ̅ crosses the blue curve level, an additional PDF mode appears at zero, whose magnitude increases by further increase of σ^2/λ̅. In Figs. <ref>E, F, we show an example of this noise-induced transition for R_0=2.2.
We observe that, by increasing noise levels, a PDF peak at zero appears eventually, making the eradication of disease more likely.
However, for diseases with R_0<2 (corresponding to noise-induced transitions of types 1 and 2), the most likely final size of disease in the population, i.e., the non-zero mode at x_m, drifts towards zero, even for low white noise levels, for which no PDF peak at zero has appeared yet. This means that, for R_0<2, white noise in contact rate always results in less severe predictions for disease spread. Note that, many SIS-modeled diseases lie in the range of 1<R_0<2, such as gonorrhea, R_0=1.4 <cit.>, syphilis, R_0=1.32-1.50 <cit.>, streptococcus pneumoniae (pneumococcus), R_0=1.4 <cit.>, tuberculosis, R_0=1.78 <cit.>. On the other hand, for highly contagious diseases with R_0>2 (e.g. pertussis, R_0=5.5 <cit.>) increase in noise levels results in more spread of the disease, since, in this case, the most likely endemic point x_m drifts towards larger values.
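Since the transition type is determined by R_0 alone, the classification above can be summarized in a few lines of Python; the boundary values simply restate the ranges given in the text (white-noise case), and the listed R_0 values are those quoted above:

```python
def transition_type_white_noise(R0):
    """Type of noise-induced transition of the white-noise SIS model, by R_0."""
    if R0 <= 1.0:
        return "disease eradication (no endemic equilibrium)"
    if R0 < 1.5:
        return "type 1: unimodal PDF, peak drifts towards zero"
    if R0 < 2.0:
        return "type 2: PDF becomes bimodal, then unimodal at zero"
    return "type 3: peak drifts towards one, extra mode appears at zero"

for disease, R0 in [("gonorrhea", 1.4), ("tuberculosis", 1.78), ("pertussis", 5.5)]:
    print(f"{disease} (R0={R0}): {transition_type_white_noise(R0)}")
```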
We note that bifurcation diagrams were also analyzed by Méndez et al. <cit.> for a stochastic SIS model slightly different than Eq. (<ref>). However, in <cit.>, only the Stratonovich solution was considered, the peak drift phenomenon was not studied, and a dimensionless parameter involving noise intensity σ and curing rate γ was chosen, instead of the more easily interpretable relative variance σ^2/λ̅.
§.§ SIS model under Ornstein–Uhlenbeck noise
In this section, we let the stochastic perturbation ξ(t) be the standard OU process ξ^OU(t) with zero mean and autocorrelation (<ref>).
Recall that τ>0 is the correlation time of the OU noise. For an SDE under OU excitation, we can approximate its stationary PDF by the equilibrium solution of a nonlinear Fokker–Planck equation which was only recently formulated <cit.>. For the case of stochastic SIS model (<ref>) under OU noise, we are able to derive an approximate stationary PDF for the infected population fraction (see Appendix <ref>). This stationary PDF is available in the explicit closed form,
p_0(x)=Cx^Q_1(1-x)(Gx^2-Dx+F)^Q_2exp[Q_3arctan(2Gx-D/√(|Δ|))],
where C is the normalization factor, and
Q_1=P/F-1, Q_2=-P/2F-1, Q_3=P(D-2BF)/F√(|Δ|),
G=A^2B^2+AB+1, D=A+AB+2A^2B+2, F=A^2+A+1,
with |Δ|=4GF-D^2>0, and
P=2(1+a)/B(σ^2/λ̅), B=R_0/R_0-1, A=a/1+a, a=τ(λ̅-γ).
Despite its convoluted form, stationary PDF (<ref>) depends on three dimensionless parameters only. Two of them, R_0 and σ^2/λ̅, are the same as in the white noise case. The additional parameter a=τ/η is the relative correlation time of the OU noise, defined as the ratio of the correlation time τ of the noise and the Lyapunov characteristic time scale η=(λ̅-γ)^-1 of the underlying deterministic model (<ref>). As discussed in Appendix <ref>, for the white noise limit τ→ 0, PDF (<ref>) results in the Stratonovich stationary PDF (<ref>) with ϖ=1.
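Despite its length, Eq. (<ref>) is easy to evaluate numerically. The sketch below transcribes the formula directly (the grouping Q_2=-P/(2F)-1 is assumed from the display above) and obtains the normalization factor C by quadrature; the parameter values are illustrative only:

```python
import numpy as np

def sis_pdf_ou(x, R0, s2, a):
    """Unnormalized stationary PDF of the SIS model under OU noise;
    s2 = sigma^2/lambda_bar, a = tau*(lambda_bar - gamma)."""
    B = R0 / (R0 - 1.0)
    A = a / (1.0 + a)
    P = 2.0 * (1.0 + a) / (B * s2)
    G = A**2 * B**2 + A * B + 1.0
    D = A + A * B + 2.0 * A**2 * B + 2.0
    F = A**2 + A + 1.0
    disc = 4.0 * G * F - D**2                      # |Delta| > 0
    Q1, Q2 = P / F - 1.0, -P / (2.0 * F) - 1.0
    Q3 = P * (D - 2.0 * B * F) / (F * np.sqrt(disc))
    return (x**Q1 * (1.0 - x) * (G * x**2 - D * x + F)**Q2
            * np.exp(Q3 * np.arctan((2.0 * G * x - D) / np.sqrt(disc))))

x = np.linspace(1e-4, 1.0 - 1e-4, 20000)
p = sis_pdf_ou(x, R0=1.7, s2=0.8, a=0.5)
p /= np.trapz(p, x)                                # numerical normalization factor C
```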
Using PDF (<ref>), we formulate the bifurcation diagrams shown in Fig. <ref>, which depend on the dimensionless parameters R_0, σ^2/λ̅ and a. To the best of our knowledge, such bifurcation diagrams for the correlated noise case are considered here for the first time.
As shown in Fig. <ref>, although PDF (<ref>) is approximate, it is in excellent agreement with the stationary PDFs obtained from direct Monte Carlo simulations of SDE (<ref>).
Bifurcation diagrams in Fig. <ref> corresponding to the correlated OU process are similar to those in Fig. <ref> for the uncorrelated white noise. In addition, the types of noise-induced transitions are similar to those in the white noise case. However, there are some important quantitative differences:
* Region III, where disease eradication is most likely, is smaller when using correlated OU process. Moreover, as the relative correlation time a increases, this region shrinks further.
* As the relative correlation time a increases, the range of R_0 values corresponding to transitions of types 1 and 2 reduces. Furthermore, transitions of type 3 occur for R_0 that are significantly less than 2 (vertical dashed line).
* PDF peak drift towards zero, that occurs in transitions of types 1 and 2, becomes less pronounced, as a increases.
To summarize, correlations in contact rate suppress the drift of the PDF mode towards zero and delay the emergence of a PDF mode at zero. This results in stationary PDFs whose probability mass is mainly located around the equilibrium of the deterministic SIS model. We can also observe the stabilizing effect of correlated noise by comparing Figs. <ref>A, B for OU noise with a=0.5, to the respective Figs. <ref>B, D for white noise (Stratonovich interpretation). We also observe the change in type of noise-induced transitions due to correlated noise: for R_0=1.4 (resp., R_0=1.7), stationary PDF exhibits a type 1 (resp., type 2) noise-induced transition under white noise, while it exhibits a type 2 (resp., type 3) transition under OU noise with a=0.5.
§ SEIR MODEL
In this section, we consider the Susceptibles-Exposed-Infected-Removed (SEIR) model,
dS(t)/dt=-λ/NS(t)I(t),
dE(t)/dt=λ/NS(t)I(t)-α E(t),
dI(t)/dt=α E(t)-γ I(t),
dR(t)/dt=γ I(t).
Compared to the SIS model, the SEIR model has two additional compartments: the exposed E(t) containing the individuals that have contracted the disease but are not infectious yet, and R(t) containing the individuals that have been removed from the population, comprising the deceased and the immune due to vaccination or prior infection. The additional model parameter α is the average incubation rate, defined as the inverse of the average incubation (or latency) period during which the individual has contracted the disease but is not infectious yet. SEIR models are suitable for describing the spread of airborne diseases such as flu and COVID-19, whose infection follows after a latency period, and also confers immunity after recovery, albeit temporarily <cit.>.
In our study, we use SEIR model (<ref>) to model the Omicron wave of COVID-19 pandemic in the US, i.e., the period between December 3, 2021 and April 22, 2022. We use the data for cumulative infections from the COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University <cit.>. The total population N is considered constant and equal to the US population of 329.5 million, and the initial values of the exposed E(t_0), the infected I(t_0), and the removed R(t_0) at the beginning of the Omicron wave were chosen consistent with the Johns Hopkins data base to be 0.14%, 0.18% and 14.88% of the total population, respectively. Then, SEIR model parameters R_0=λ/γ, α and γ are determined by least square fitting so that the cumulative number of COVID-19 cases during Omicron wave, as predicted by the model, agrees with the Johns Hopkins data. To determine the cumulative number of COVID-19 cases from the SEIR model, we use the relation
∫_t_0^tI(s) ds=1/γ(R(t)-R(t_0)),
which is derived from Eq. (<ref>). By this process, we obtain the values R_0=1.85, α=1/3.5 days^-1, γ=1/1.2 days^-1.
After fitting the deterministic SEIR model to data, we add noise fluctuations to the average contact rate. We consider both cases of white and OU noise, for noise levels 0.5≤σ^2/λ̅≤ 3.0. We note that, in prior studies which use stochastic compartmental models for COVID-19, the parameter σ/λ̅ is chosen to model the noise level <cit.>. However, here we use σ^2/λ̅, since this is a dimensionless parameter.
Contrary to the stochastic SIS model of Sec. <ref>, a stationary PDF for the stochastic SEIR model is not available in analytic form. Thus, we perform Monte Carlo simulations of SEIR model (<ref>) with sample size 50,000 and stochastically perturbed contact rate λ(t) = λ̅+σξ(t). In the white noise case, ξ(t)=ξ^WN(t), the system of SDEs of SEIR model is numerically solved under the Stratonovich interpretation, using the predictor-corrector scheme of Cao et al. <cit.>. In the case of OU noise, ξ(t)=ξ^OU(t), stochastic SEIR model is augmented by the linear SDE,
dξ^OU(t)/dt=-1/τξ^OU(t)+1/τξ^WN(t),
that generates the standard OU process ξ^OU(t). The resulting coupled system is again solved using a predictor-corrector scheme <cit.>.
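The sketch below illustrates this simulation setup in Python. It uses simple Euler–Maruyama stepping rather than the predictor-corrector scheme cited above, and the values of σ, τ, and the time horizon are illustrative assumptions (σ²/λ̅=1, τ=7 days, a roughly 140-day wave):

```python
import numpy as np

def simulate_seir_ou(lam_bar, alpha, gamma, sigma, tau, y0, T, dt=0.05,
                     n_mc=50_000, seed=0):
    """Monte Carlo simulation of the SEIR model with contact rate
    lambda(t) = lam_bar + sigma * xi_OU(t).  Plain Euler-Maruyama stepping is
    used here as a stand-in for the predictor-corrector scheme of the text."""
    rng = np.random.default_rng(seed)
    N = float(sum(y0))
    S, E, I, R = (np.full(n_mc, float(v)) for v in y0)
    xi = rng.standard_normal(n_mc) / np.sqrt(2.0 * tau)   # stationary OU start
    for _ in range(int(T / dt)):
        lam = lam_bar + sigma * xi        # may dip below zero, as noted in the text
        new_inf = lam * S * I / N
        S, E, I, R = (S - dt * new_inf,
                      E + dt * (new_inf - alpha * E),
                      I + dt * (alpha * E - gamma * I),
                      R + dt * gamma * I)
        # OU update corresponding to d(xi) = -(1/tau) xi dt + (1/tau) dW
        xi += -dt * xi / tau + np.sqrt(dt) * rng.standard_normal(n_mc) / tau
    return S, E, I, R

# illustrative run: fitted deterministic parameters from the text; sigma, tau assumed
N = 329.5e6
y0 = N * np.array([1 - 0.0014 - 0.0018 - 0.1488, 0.0014, 0.0018, 0.1488])
gamma, alpha = 1 / 1.2, 1 / 3.5
lam_bar = 1.85 * gamma                      # R_0 = 1.85
S, E, I, R = simulate_seir_ou(lam_bar, alpha, gamma,
                              sigma=np.sqrt(1.0 * lam_bar), tau=7.0,
                              y0=y0, T=140.0)
cases = R - y0[3]                           # cumulative-case proxy, as in the fitting step
```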
The time series of the mean cumulative COVID cases obtained from the Monte Carlo simulations are shown in Fig. <ref>. These simulations are also used to determine the stationary PDFs of COVID cases shown in Fig. <ref>.
As we show in Fig. <ref>, the choice between white or OU noise for modeling uncertainties in the contact rate is consequential since they lead to very different forecasts for the spread of the pandemic. For increasing levels of white noise intensity, SEIR model significantly underestimates the severity of the pandemic on average. On the other hand, OU noise leads to forecasts whose mean trajectory of COVID cases stays always close to the actual data. The best fit is obtained for OU noise with correlation time of 1 week, which is in agreement with the weekly social patterns observed in human behavior <cit.>. Note however that, despite the abundance of data collected from the COVID-19 pandemic, the correlation time of the contact rate has not been quantified yet <cit.>.
The reason the SEIR model under white noise underestimates the pandemic spread is that it undergoes a noise-induced transition similar to the type 2 transition of the stochastic SIS model, see Fig. <ref>. As the white noise intensity increases, the PDF peak drifts from the deterministic equilibrium towards lower values. Due to this peak drift, the mean trajectories of cumulative COVID cases for σ^2/λ̅=0.5 and σ^2/λ̅=1 lie below the actual data. For the noise level σ^2/λ̅=2, an additional peak emerges in the regime of a low number of cases (≈5×10^7). Further increase of the white noise intensity makes this additional peak more pronounced. As a result, the stochastic SEIR model greatly underestimates the pandemic severity for σ^2/λ̅=3.
In contrast, when the contact rate is perturbed by the OU noise, the stationary PDF of COVID cases remains unimodal for a wider range of noise levels. The presence of correlations in OU noise hinders the emergence of the additional peak at lower case values; only for the combination of small correlation time (τ=1 day) and high intensity (σ^2/λ̅=3) of the OU noise does an additional peak start forming around 5×10^7 cases (see Fig. <ref>B). Also, the stationary PDF for OU noise exhibits the opposite trend in peak drift compared to the PDFs for white noise; increasing the noise intensity makes the peak drift towards higher values. Thus, the presence of temporal correlation in the noise changes the type of noise-induced transitions the SEIR model undergoes. This is similar to our results for correlated noise in the stochastic SIS model.
In Figs. <ref>C-E, we also see that larger correlation times make the PDFs less diffusive, and the peak drift less pronounced. This is the expected sharpening effect of correlated noise <cit.> as a result of the mean-reverting property of the OU process <cit.>, which becomes stronger as the correlation time increases. The effect of mean-reverting property of OU noise is also shown in Fig. <ref>, where the OU noise is more concentrated around its mean value, compared to the white noise with the same intensity. This also means that the OU noise becomes negative less frequently than the respective white noise. Nonetheless, since OU noise is Gaussian and thus unbounded, it can always attain negative values, which is unrealistic for the contact rate. Prior work on stochastic oncology remedies these unwanted negative values by considering bounded noise <cit.>.
Noise-induced transitions in compartmental models under bounded noise have not been studied extensively yet (see, e.g., <cit.>), thus constituting an interesting direction for future work.
§ CONCLUSIONS
It was shown recently that time correlations are essential for modeling uncertainties in the contact rate of an infectious disease <cit.>. Using standard results from the theory of stochastic processes, Mamis and Farazmand <cit.> showed that the only feasible process for modeling such uncertainties is the Ornstein–Uhlenbeck (OU) process. However, to arrive at this conclusion, the authors assumed that the autocorrelation function of the contact rate has an exponentially decreasing form. In the present work, we proved the same result without making such onerous assumptions. Modeling the contacts of each individual as a Markov process, assuming a reasonable conditional probability for such contacts, and using the central limit theorem, we proved that the contact rate averaged over the population satisfies the Langevin equation corresponding to the OU process.
We studied the implications of this result on two typical examples of stochastic compartmental models in epidemiology; the SIS model which describes bacterial and sexually transmitted diseases, and the SEIR model which describes airborne diseases such as COVID-19. Stochasticity enters into the compartmental models by considering stochastic fluctuations in the contact rate, to account for uncertainties in social behavior of individuals in the population.
For the stochastic SIS model, we derived the exact stationary PDF of the infected population fraction for both cases of white and Ornstein–Uhlenbeck noise fluctuations in contact rate. As a result, we were able to determine the noise-induced transitions that a stochastic SIS model undergoes, as well as the effect of temporal correlations in contact rate. Our main result is that, for a range of R_0 corresponding to many SIS-modeled diseases (see Remark <ref>) white noise in contact rate makes the eradication of the disease more likely. This is an unrealistic behavior since greater uncertainty in measuring a model parameter should not lead to the eradication of the disease. On the other hand, the inclusion of correlations has a stabilizing effect on the stationary PDF of the infected population fraction, mitigating the unrealistic transitions towards zero infected population.
The results for noise-induced transitions of stochastic SEIR models are similar to those for SIS models. By performing Monte Carlo simulations of a SEIR model fitted to data from the Omicron wave of COVID-19 pandemic in the US, we observed that white noise models of the contact rate lead to systematic underestimation of the pandemic severity. On the other hand, when the contact rate is modeled as an OU process, the predicted number of COVID cases is always close to the actual data. An important direction for future work is to develop analytic tools for stochastic SEIR models, similar to those that we have already developed for SIS models.
Our work demonstrates that the inclusion of correlated uncertainties in compartmental models is a central component for a realistic stochastic model of disease spread. If overlooked, this would lead to unrealistic, less severe forecasts. However, despite the abundance of data collected, especially during the COVID-19 pandemic, the intensity and temporal correlations of noise in compartmental model parameters have not been determined with precision. This calls for more empirical studies that would systematically quantify the nature of uncertainties, and especially their correlation time, in the parameters of compartmental epidemiological models.
Acknowledgments K.M. would like to acknowledge the hospitality of the Department of Mathematics at North Carolina State University where most of this work was carried out when he was a postdoctoral associate in the research group of M.F.
Data availability The data used in this work is available from COVID-19 Dashboard
by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University,
<https://github.com/CSSEGISandData/COVID-19>.
Authors' contributions M.F. conceptualized and supervised the research. K.M. conducted the
research and wrote the paper. M.F. revised the paper.
Funding The authors received no funding for this work.
Competing interests The authors declare that they have no competing financial interests.
§ CALCULATION OF STATIONARY PDFS
Consider the general form of the scalar SDE
dX(t)/dt=h(X(t))+σ(X(t))ξ(t),
where X(t) and ξ(t) are the stochastic processes of the response and excitation respectively, h(x) is the continuous drift function, and σ(x) is the differentiable function of the noise intensity.
In the case where excitation ξ(t) is Gaussian white noise (see Eq. (<ref>)), the evolution of the PDF p(x,t) of the response X(t) is governed by the classical Fokker–Planck equation (see, e.g., <cit.>):
∂ p(x,t)/∂ t+∂/∂ x[(h(x)+ϖ/2σ'(x)σ(x))p(x,t)]=1/2∂^2/∂ x^2[σ^2(x)p(x,t)].
In Eq. (<ref>), the drift term h(x) is augmented by (1/2)σ'(x)σ(x) which is the Wong-Zakai correction (see <cit.>) modeling the difference between Itō (ϖ=0) and Stratonovich (ϖ=1) interpretations of SDEs under multiplicative white noise excitation. The stationary solution p_0(x)=lim_t→∞p(x,t) of Fokker–Planck Eq. (<ref>) is given in the closed form <cit.>:
p_0(x)=C/σ^2-ϖ(x)exp(2∫^xh(y)/σ^2(y) dy),
where ∫^x(·) dy denotes the antiderivative, and C is the normalization factor.
In our recent papers <cit.>, we derived an approximate nonlinear Fokker–Planck equation corresponding to SDE (<ref>) under correlated excitation:
∂ p(x,t)/∂ t+∂/∂ x {[h(x)+σ'(x)σ(x)A(x,t;p)]p(x,t)}=
=∂^2/∂ x^2[σ^2(x)A(x,t;p)p(x,t)],
where
A(x,t;p)=∑_m=0^2D_m(t;p)/m!{ζ(x)-𝖤[ζ(X(t))]}^m,
with
ζ(x)=σ(x)(h(x)/σ(x))',
and
D_m(t;p)=∫_t_0^tC_ξ(t,s)exp(∫_s^t𝖤[ζ(X(u))] du)(t-s)^m ds,
where C_ξ(t,s) is the autocorrelation function of noise excitation ξ(t).
Fokker–Planck equation (<ref>) is nonlinear, due to the dependence of coefficient A(x,t;p) on the response moment 𝖤[ζ(X(t))], which in turn depends on the unknown PDF p(x,t). As we have proven in <cit.>, for diminishing correlation time of ξ(t), τ→0, coefficient A in Eq. (<ref>) becomes 1/2. Thus, in the white noise limit, nonlinear Fokker–Planck Eq. (<ref>) becomes the Stratonovich Fokker–Planck Eq. (<ref>) for ϖ=1. Also, note that, by keeping only the zeroth-order term in the sum of Eq. (<ref>), we obtain the widely-used Hänggi's approximate Fokker–Planck equation <cit.>.
For ξ(t) being the standard OU process (see Eq. (<ref>)), the stationary solution of the nonlinear Fokker–Planck Eq. (<ref>) reads
p_0(x,M)=C/σ(x)A(x,M)exp(∫^xh(y)/σ^2(y)A(y,M) dy),
where A(x,M) is the stationary value of coefficient A(x,t;p), given by
A(x,M)=1/2∑_m=0^2[τ(ζ(x)-M)]^m/(1-τ M)^m+1,
and M is the stationary value of response moment 𝖤[ζ(X(t))]:
M=∫_ℝζ(x)p_0(x,M) dx.
As derived in <cit.>, solution (<ref>) is valid under the condition M<τ^-1.
Due to the presence of the unknown response moment M, Eq. (<ref>) is an implicit stationary solution of the nonlinear Fokker–Planck Eq. (<ref>). In <cit.>, we proposed an iteration scheme for the calculation of M, by substituting the implicit form (<ref>) for p_0(x,M) into the definition relation (<ref>) for M. The initial value of moment M for the iteration scheme is calculated from the corresponding Stratonovich Fokker–Planck equation. Implicit closed-form solution (<ref>), supplemented by the iteration scheme for M, constitutes a semi-analytic form for the stationary response PDF for SDE (<ref>) under OU stochastic excitation.
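A sketch of this iteration scheme for a scalar SDE is shown below (NumPy assumed). The antiderivative in Eq. (<ref>) is approximated by cumulative trapezoidal integration on a grid that truncates the endpoint singularities, and the SIS example at the end starts from the analytic value M=-(λ̅-γ) derived below, so the iteration should (approximately) return the same value; all numerical parameter values are illustrative:

```python
import numpy as np

def fixed_point_M(h, sig, zeta, tau, x, M0, n_iter=50):
    """Iteration scheme for the response moment M; h, sig, zeta are callables,
    x is a dense grid on the state space, M0 is the initial guess."""
    M = M0
    for _ in range(n_iter):
        A = 0.5 * sum((tau * (zeta(x) - M)) ** m / (1 - tau * M) ** (m + 1)
                      for m in range(3))
        # stationary PDF of the nonlinear Fokker-Planck equation, up to normalization
        integrand = h(x) / (sig(x) ** 2 * A)
        expo = np.concatenate(([0.0],
                               np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
        p = np.exp(expo - expo.max()) / (sig(x) * A)
        p /= np.trapz(p, x)
        M = np.trapz(zeta(x) * p, x)           # update via the definition relation for M
    return M

# SIS example with h, sigma, zeta as in the text; result should stay close to -(lam_bar - gam)
lam_bar, gam, sig0, tau = 0.5, 0.294, 0.39, 2.4
x = np.linspace(1e-4, 1 - 1e-4, 20000)
M = fixed_point_M(lambda x: lam_bar * x * (1 - x) - gam * x,
                  lambda x: sig0 * x * (1 - x),
                  lambda x: -gam * x / (1 - x),
                  tau, x, M0=-(lam_bar - gam))
```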
However, for the special case of stochastic SIS model (<ref>), we are able to calculate moment M analytically. Note that the calculation of moment M in explicit closed form is, in general, not possible.
For stochastic SIS model (<ref>), moment M defined by Eq. (<ref>), is
M=-(λ̅-γ).
For the stochastic SIS model (<ref>), substituting Eq. (<ref>) into Eq. (<ref>), the definition relation for M is specified as
M=C∫_ℝζ(x)/σ(x)A(x,M)exp(∫^xh(y)/σ^2(y)A(y,M) dy) dx,
where h(x)=λ̅x(1-x)-γ x, σ(x)=σ x(1-x) and ζ(x)=-γ x/(1-x).
By performing integration by parts, we obtain
M =C∫_ℝζ(x)σ(x)/h(x)[exp(∫^xh(y)/σ^2(y)A(y,M) dy)]' dx=
=-C∫_ℝ(ζ(x)σ(x)/h(x))'exp(∫^xh(y)/σ^2(y)A(y,M) dy) dx=
=Cσγ(λ̅-γ)∫_ℝ1/[λ̅(1-x)-γ]^2exp(∫^xh(y)/σ^2(y)A(y,M) dy) dx.
On the other hand, normalization factor C of p_0 is defined as
C^-1=∫_ℝ1/σ(x)A(x,M)exp(∫^xh(y)/σ^2(y)A(y,M) dy) dx,
and after integration by parts:
C^-1 =∫_ℝσ(x)/h(x)[exp(∫^xh(y)/σ^2(y)A(y,M) dy)]' dx=
=-∫_ℝ(σ(x)/h(x))'exp(∫^xh(y)/σ^2(y)A(y,M) dy) dx=
=-σγ∫_ℝ1/[λ̅(1-x)-γ]^2exp(∫^xh(y)/σ^2(y)A(y,M) dy) dx.
By substituting Eq. (<ref>) into Eq. (<ref>), we obtain Eq. (<ref>).
Using Eq. (<ref>), we calculate coefficient A as
A(x)=1/2(1+a)∑_m=0^2(a/1+a)^m(1-x)^-m(1-R_0x/R_0-1)^m,
where a=τ(λ̅-γ)>0. By having coefficient A(x) in explicit form, we can perform the integration in Eq. (<ref>) analytically, obtaining thus the expression (<ref>) for the stationary response PDF for SIS model (<ref>) under OU noise.
§ ANALYSIS OF STATIONARY PDFS FOR SIS MODELS
§.§ White noise model - Itō solution
In the vicinity of zero, response PDF (<ref>) for ϖ=0 is p_0(x)∼ x^2(1-R_0^-1)/(σ^2/λ̅)-2. For
2(1-R_0^-1)/(σ^2/λ̅)-2<-1⇒ (σ^2/λ̅)>2(1-R_0^-1),
p_0(x) is not integrable, and p_0(0)=+∞. Thus, under condition (<ref>), p_0(x) is a delta function at zero. Eq. (<ref>) always holds true for R_0<1, resulting in disease eradication, as in the deterministic case. This is the reason for choosing R_0∈[1,+∞) in our analysis. The green curve in Fig. <ref>A corresponds to (σ^2/λ̅)=2(1-1/R_0). For
-1<2(1-R_0^-1)/(σ^2/λ̅)-2<0⇒ 1-R_0^-1<(σ^2/λ̅)<2(1-R_0^-1),
p_0(x) is integrable, and has a peak at zero. The blue curve in Fig. <ref>A corresponds to (σ^2/λ̅)=1-1/R_0. For
2(1-R_0^-1)/(σ^2/λ̅)-2>0⇒ (σ^2/λ̅)<1-R_0^-1,
p_0(x) is integrable, and p_0(0)=0. For the case (σ^2/λ̅)<2(1-1/R_0), where p_0(x) is integrable, its local extrema points for x∈(0,1) are specified, by the first derivative test, as the roots of the quadratic equation
2(σ^2/λ̅)x^2+[1-3(σ^2/λ̅)]x+(σ^2/λ̅)+R_0^-1-1=0.
The requirement of nonnegative discriminant results in the condition
R_0≥8(σ^2/λ̅)/[1+(σ^2/λ̅)]^2.
Eq. (<ref>), for the case of equality, is the magenta curve in Fig. <ref>A. The two roots of Eq. (<ref>) are
x_±=3(σ^2/λ̅)-1±√([1+(σ^2/λ̅)]^2-8(σ^2/λ̅)R_0^-1)/4(σ^2/λ̅).
By the additional requirement of x_±∈(0,1), we summarize the conditions for roots x_± to be extrema points of p_0(x).
x_+ is extremum point for{(σ^2/λ̅)<1/3⋀(σ^2/λ̅)<1-1/R_0,
(σ^2/λ̅)>1/3⋀ R_0≥8(σ^2/λ̅)/[1+(σ^2/λ̅)]^2..
x_- is extremum point for (σ^2/λ̅)>1/3⋀(σ^2/λ̅)>1-1/R_0⋀R_0≥8(σ^2/λ̅)/[1+(σ^2/λ̅)]^2.
Furthermore, we determine that x_+ is a maximum point, and x_- is a minimum point. Thus, we identify x_+ as the non-zero mode coordinate x_m. By using de l'Hôpital's rule, we calculate lim_σ→0x_m=(λ̅-γ)/λ̅, which is the expected result that, in the deterministic limit, the PDF mode x_m coincides with the deterministic equilibrium.
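A quick numerical check of this limit (Itō case; illustrative values) is:

```python
import numpy as np

def mode_ito(R0, s2):
    """Non-zero mode x_m = x_+ from the quadratic above (Ito case); s2 = sigma^2/lambda_bar."""
    return (3 * s2 - 1 + np.sqrt((1 + s2) ** 2 - 8 * s2 / R0)) / (4 * s2)

R0 = 1.8
for s2 in (1e-6, 0.05, 0.2, 0.4):
    print(s2, mode_ito(R0, s2))
# as s2 -> 0 the mode approaches the deterministic equilibrium 1 - 1/R0 ~ 0.444,
# and for R0 < 2 it decreases with s2, consistent with the peak drift condition below
```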
Last, in order to capture the peak drift phenomenon, we calculate the first derivative of x_m with respect to (σ^2/λ̅). After algebraic manipulations, we obtain that
x_m'(σ^2/λ̅)≥0⇒ R_0≥2.
The dashed line in Fig. <ref>A is R_0=2.
§.§ White noise model - Stratonovich solution
We repeat the procedure we followed in Sec. <ref>, for Stratonovich solution, Eq. (<ref>) for ϖ=1. The results we obtain are the following: Stratonovich solution is a delta function at zero only for R_0<1; for R_0>1, it is always integrable. For
(σ^2/λ̅)>2(1-1/R_0),
p_0(x) has a peak at zero. The blue curve in Fig. <ref>B corresponds to (σ^2/λ̅)=2(1-1/R_0). For x∈(0,1), the local extrema are roots of the equation
2(σ^2/λ̅)x^2+[2-3(σ^2/λ̅)]x+(σ^2/λ̅)+2(R_0^-1-1)=0.
Thus, the possible extrema points in (0,1) are
x_±=3(σ^2/λ̅)-2±√([2+(σ^2/λ̅)]^2-16(σ^2/λ̅)R_0^-1)/4(σ^2/λ̅),
under the condition for nonnegative discriminant of Eq. (<ref>)
R_0≥16(σ^2/λ̅)/[2+(σ^2/λ̅)]^2.
Eq. (<ref>) for the case of equality is the magenta curve in Fig. <ref>B. We further determine that
x_+ is maximum point for{(σ^2/λ̅)<2/3⋀(σ^2/λ̅)<2(1-1/R_0),
(σ^2/λ̅)>2/3⋀ R_0≥16(σ^2/λ̅)/[2+(σ^2/λ̅)]^2..
x_- is minimum point for (σ^2/λ̅)>2/3⋀(σ^2/λ̅)>2(1-1/R_0)⋀R_0≥16(σ^2/λ̅)/[2+(σ^2/λ̅)]^2.
Also, we calculate that condition (<ref>) is true for Stratonovich solution too.
§.§ Ornstein–Uhlenbeck noise model
In the vicinity of zero, response PDF (<ref>) is p_0(x)∼ x^P/F-1. We calculate that, for R_0<1, solution (<ref>) is a delta function at zero, similarly to the Stratonovich solution. For
(σ^2/λ̅)>2(1+a)/F(1-R_0^-1),
PDF (<ref>) exhibits a peak at zero. The blue curve in Fig. <ref> corresponds to (σ^2/λ̅)=2(1+a)(1-R_0^-1)/F. For x∈(0,1), the local extrema are roots of the cubic equation f(x)=0, with
f(x)=2G(σ^2/λ̅)x^3+[2(1+a)-(3G+D)(σ^2/λ̅)]x^2+
[2D(σ^2/λ̅)-2(1+a)(2-R_0^-1)]x+2(1+a)(1-R_0^-1)-F(σ^2/λ̅).
The calculation of the exact roots of a cubic equation is cumbersome. However, f(1)=-(σ^2/λ̅)A^2(B-1)^2<0 and f(+∞)=+∞, and thus, by the intermediate value theorem, the cubic polynomial f(x) always has a real root greater than 1, which is not admissible as an extremum point of p_0(x). Thus, regions III in Fig. <ref> correspond to Δ_3<0, where Δ_3 is the discriminant of the cubic polynomial f(x).
Note also that f(-∞)=-∞, and f(0)>0 under the condition
(σ^2/λ̅)<2(1+a)/F(1-R_0^-1).
Using the intermediate value theorem again, we deduce that, under condition (<ref>), polynomial f(x) has three distinct real roots, with only one of them in (0,1). By combining this result to the behavior of PDF (<ref>) at zero (see Eq. (<ref>)), we conclude that, under condition (<ref>), PDF (<ref>) is unimodal, with its mode at a non-zero x_m.
Last, we observe that, for a=0, condition (<ref>) is identical to condition (<ref>), and the cubic polynomial f(x) is factorized to
f(x)=
(x-1){2(σ^2/λ̅)x^2+[2-3(σ^2/λ̅)]x+(σ^2/λ̅)+2(R_0^-1-1)}.
We identify the second factor on the right-hand side of Eq. (<ref>) as the quadratic polynomial whose roots determine the PDF extrema points in (0,1) for the white noise case under the Stratonovich interpretation (see Eq. (<ref>)). This finding shows the compatibility between the results under OU noise with a=0 and the Stratonovich solution for the white noise case.
|
http://arxiv.org/abs/2307.04122v1 | 20230709082919 | Enhancing Low-Light Images Using Infrared-Encoded Images | [
"Shulin Tian",
"Yufei Wang",
"Renjie Wan",
"Wenhan Yang",
"Alex C. Kot",
"Bihan Wen"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
Bounced Model of Droplet on Moving Substrate
Chengwu Liu
August 12, 2023
============================================
Low-light image enhancement task is essential yet challenging as it is ill-posed intrinsically.
Previous works mainly focus on low-light images captured in the visible spectrum using pixel-wise losses, which limits the capacity to recover brightness, contrast, and texture details due to the small number of incoming photons.
In this work, we propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter, which allows for the capture of more photons and results in improved signal-to-noise ratio due to the inclusion of information from the IR spectrum. To verify the proposed strategy, we collect a paired dataset of low-light images captured without the IR cut-off filter, with corresponding long-exposure reference images with an external filter.
The experimental results on the proposed dataset demonstrate the effectiveness of the proposed method, showing better performance quantitatively and qualitatively. The dataset and code are publicly available at
Low-light enhancement, infrared photography, computational photography
§ INTRODUCTION
Due to the small number of photons captured by the camera, the images captured under low-light environments usually suffer from poor visibility, intense noise, and artifacts. To enhance the visibility of the images captured in low-light environments, previous works mainly focus on modelling the mapping
relationship between low-light images and corresponding normally-exposed images.
Specifically, current deep learning based methods have the following paradigms: learning an end-to-end model using paired datasets in <cit.>; GAN-based networks in <cit.>; encoder-decoder based models in <cit.>. However, the aforementioned methods are all based on existing visible information of the corrupted inputs on RGB space,
i.e., even if they can achieve pleasant perceptual quality, they can not perform reliably due to the lack of incident photons <cit.>.
Besides, there are various limitations of the current mainstream methods, e.g.,
end-to-end training using pixel reconstruction loss leads to a regression-to-mean problem; GAN-based training requires careful hyper-parameter tuning and lacks enough supervision for noise removal.
Recently, infrared-light-based methods have attracted great attention in low-level computer vision tasks
as they introduce extra information from the infrared spectrum.
Several works have previously explored the usage of infrared light in computational photography. Specifically, Zhuo et al. <cit.> propose to use additional near-infrared (NIR) flash images instead of normal flash images to restore the details of noisy input images, which requires the user to take two photos of the same scene in a static environment and easily causes misalignment of the inputs;
Zhang et al. <cit.> propose a dual-camera system to capture a NIR image and a normal visible image of the same scene concurrently, while increasing the cost of devices during the acquisition of data.
In this paper, we propose a novel prototype that utilizes information from the infrared spectrum without the need for additional devices. Most solid-state (CCD/CMOS) based digital cameras are equipped with IR cutoff filters to avoid color distortion caused by the high sensitivity to IR light. Conversely, we remove the IR cutoff filter so that the CMOS can receive more incident photons located on the infrared spectrum, resulting in increased brightness, higher signal-noise ratio, and improved details as shown in Fig. <ref>. A paired dataset, namely IR-dataset, of IR-RGB images captured under low-light environments and their reference normally-exposed RGB images, is collected under different scenes. We further propose a novel flow-based model that can enhance visibility by modelling the distribution of normally-exposed images and address color distortion caused by the lack of IR cutoff filter through our proposed color alignment loss (CAL).
In summary, the contributions of our work are threefold:
*
We collect a paired dataset under a novel prototype, i.e., IR-RGB images captured under low-light environments and their normally-exposed reference RGB images, which supports future studies.
* We propose a flow-based model with our proposed color alignment loss, which can effectively address the color distortion caused by removing the IR-cut filter.
* We conduct extensive experiments on our collected datasets that demonstrate removing the IR-cut filter can lead to better-quality restored images in low-light environments. Besides, our proposed framework achieves superior performance compared with SOTA methods.
§ METHODOLOGY
§.§ Dataset Collection
The dataset is collected by a modified Nikon D3300 camera, in which the internal IR cut-off filter is removed.
The paired images are captured using a stable tripod and remote control to minimize misalignment. The low-light images are captured using the aforementioned device without IR cut-off filter. To capture the normally-exposed reference images in the visible light spectrum, an external IR filter, which has the same cut-off wavelength as the internal one, is carefully put in front of the lens to ensure that no camera shift occurs during the long exposure. To better explore the effectiveness of removing the IR cut-off filter in a low-light environment, we also collect a set of low-light images in the visible light spectrum (e.g., the example in Fig. <ref>).
We divide our dataset into a training set and an evaluation set. Specifically, the training set includes 236 pairs of low-light images without cut-off filter and their corresponding reference images (472 images in total). The evaluation set has 80 pairs of low-light images with and without the cut-off filter and their corresponding reference images.
§.§ Preliminary
Previously, mainstream deep-learning-based models were mainly based on pixel reconstruction losses. However, due to their limited capacity to distinguish unwanted artifacts from the real distribution of normally-exposed images, they may lead to unpleasant visual quality with blurry outputs <cit.>.
Inspired by the extraordinary performance of flow-based models <cit.>, we find that learning a conditional probability distribution can handle the aforementioned problem by accounting for the various possible distributions of natural images. Specifically, the recent state-of-the-art LLFlow model <cit.> has shown great performance in using a normalizing flow conditioned on corrupted inputs to capture the conditional distribution of normally exposed images. In this work, we inherit the core idea of conditional flow with the likelihood estimation proposed in <cit.> as the backbone of our method.
The conditional probability density function of normally exposed images can be formulated as follows:
f_cond(y|x) = f_z(Θ(y;x))|∂Θ/∂ y(y;x)|,
where Θ(·) is the invertible network with N invertible layers {θ ^1, θ ^2, …, θ ^N}, and the latent representation z=Θ(y;x) is mapped from the corrupted input x and the normally exposed image y. By characterizing the model with maximum likelihood estimation, the model can be optimized with the negative log-likelihood loss function:
ℒ_nll (x, y) = -log f_cond(y|x)
= -log f_z(Θ (y; x))
- ∑_n=0^N-1log |∂θ^n/∂ z^n(z^n; g^n(x_l))|,
where g(·) is the encoder that outputs conditional features of the layers θ ^i from the invertible network.
§.§ Color Alignment Loss
Although the benchmark methods perform well on the visible light spectrum, their performance degrades severely in some extreme cases due to the additional infrared light when they are directly applied to the collected dataset. To further alleviate the color distortion caused by removing the IR filter, inspired by histogram-matching techniques <cit.> used in remote sensing, we propose to minimize the divergence between the color distributions of the generated and reference images. Specifically, by representing the color information using differentiable histograms over the RGB color channels, we emphasize the color distributions of the generated and reference images rather than the local details. To measure the differences between these distributions, we propose using the Wasserstein distance, which provides a more stable gradient compared with the commonly used KL divergence. The details are as follows:
§.§.§ Differentiable Histogram
Since the low-light images are taken without the existence of an IR cut-off filter, they admit more red light, which leads to color bias in the red channel. To suppress the color distortion, we propose to minimize the divergence of the channel-wise differentiable histogram between the generated and reference images.
Assume that x∈ℝ^C × H × W is an image where C, H and W refer to its number of channels, height, and width respectively.
To calculate its channel-wise histogram bounded by an arbitrary range [a,b], we fit the histogram with R uniformly spaced bins, denoted by nodes t_r ∈{t_1 = a, t_2, …, t_R = b}, with step size Δ = (b-a)/(R-1). By matching the pixel values of each channel of the image to the histogram nodes, the value h_r of the histogram H at each node can then be calculated as:
h_r = ∑_i,j1/1+δ(p_i,j-t_r)^2, r = 1,2,…, R
where p_i,j denotes the pixel value at location (i,j) of the given channel and δ is a constant scaling factor. After collecting and normalizing h_r, we obtain the final one-dimensional histogram H(x) of size R for each channel.
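A possible PyTorch realization of this soft histogram is sketched below; the bin range and the number of bins follow the experimental settings reported later, while the exact value of the constant δ is an assumption:

```python
import torch

def soft_histogram(x, bins=64, vmin=0.0, vmax=1.0, delta=1e4):
    """Differentiable channel-wise histogram of an image x of shape (C, H, W);
    delta is the scaling constant of the kernel (assumed value)."""
    t = torch.linspace(vmin, vmax, bins, device=x.device)     # nodes t_1, ..., t_R
    p = x.reshape(x.shape[0], -1, 1)                          # (C, H*W, 1)
    w = 1.0 / (1.0 + delta * (p - t) ** 2)                    # soft assignment to each node
    h = w.sum(dim=1)                                          # (C, bins)
    return h / h.sum(dim=1, keepdim=True)                     # normalized histogram per channel
```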
§.§.§ Wasserstein Metric
Inspired by the Wasserstein distance (W-distance), which measures the distance between distributions on a given metric space <cit.>, we propose to align the image histograms using the W-distance as follows
W_p (H_ŷ, H_y) = inf_ŷ∼ H_ŷ, y∼ H_y(𝔼||ŷ-y||^p)^1/p,
where H_ŷ and H_y denote differentiable histograms of the restored image ŷ and ground-truth image y respectively through Eq. (<ref>).
An explicit formula can be obtained since the dimension of the variable is 1 as follows,
W_p (H_ŷ, H_y) = ||F_ŷ^-1 - F_y^-1||_p
= (∫_a^b |F_ŷ^-1(α) - F_y^-1(α)|^p dα)^1/p,
where F_y and F_ŷ are the cumulative distribution functions of H_y and H_ŷ, respectively. It can be further simplified when p=1 and the variable is discrete:
ℒ_CA = W_1(H_ŷ, H_y)
= ∑_t |F_ŷ(t)-F_y(t)|.
The negative log-likelihood and the color alignment loss jointly define the total loss as follows
ℒ = ℒ_nll + λ·ℒ_CA,
where λ is a weighting constant to adjust the scales of color alignment loss for specific settings.
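A minimal sketch of the resulting loss, reusing the soft_histogram function from the previous sketch and assuming (C, H, W) tensors, is:

```python
import torch

def color_alignment_loss(pred, target, bins=64):
    """W1 distance between channel-wise soft histograms, computed as the
    L1 distance between their cumulative distributions."""
    cdf_pred = torch.cumsum(soft_histogram(pred, bins), dim=-1)
    cdf_tgt = torch.cumsum(soft_histogram(target, bins), dim=-1)
    return (cdf_pred - cdf_tgt).abs().sum(dim=-1).mean()

# total objective with the weighting constant lambda = 0.01 used in the experiments:
# loss = nll_loss + 0.01 * color_alignment_loss(pred, target)
```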
§ EXPERIMENTS
§.§ Experimental settings.
All the captured images are resized to the resolution of 400×600 for training and testing.
For our model, the weighting factor λ of CAL is set to 0.01 to cast the loss component value onto a similar numerical scale during training;
to simplify the task, we bound the range of the channel-wise histogram values to [0.0;1.0], and the bin size is set to 64 per channel.
§.§ Evaluations results.
To evaluate the performance of different methods on the proposed dataset, we retrain all the methods using the same training data, i.e., the training set of our proposed dataset. For a fair comparison, we explore training hyper-parameters of competitors in a wide range and report the best performance we obtained. We report the experimental results in Table <ref> and visual comparison in Fig. <ref>.
Based on our evaluation and analysis of the experimental results, Retinex-theory-based methods, e.g., RetinexNet <cit.>, KinD <cit.>, and KinD++ <cit.>, exhibit limited generalization ability and unpleasant outputs. We conjecture the reason is that these methods assume the existence of an invariant reflectance map shared by the low-light inputs and the ground-truth images and require a shared network to extract both illumination and reflectance maps from them, which is not feasible in our setting. Besides, our method achieves the best performance among all competitors in terms of both fidelity and perceptual quality.
§.§ Ablation Study
1) Effectiveness of removing IR cut-off filter. To further verify the effect of removing the internal IR cut-off filter, we compare both quantitative and visual results that were restored from standard RGB space and IR-RGB space separately. For the models evaluated on the visible light spectrum, we utilize the pretrained/released models from SOTA methods trained on a large-scale dataset so that they have good generalization ability to different scenarios.
As shown in Table <ref>, the quantitative results obtained by our model from IR-encoded images are much higher than those restored directly from the standard visible light spectrum. Besides, for the same method, especially for methods trained in a fully supervised manner, there exists an obvious performance gap when converting the input space from the IR-visible spectrum to the visible spectrum only, which demonstrates that removing the IR cut-off filter can lead to a higher signal-to-noise ratio in extremely dark environments.
Besides, as shown in Fig. <ref>, the reconstructed image with IR light performs better in recovering local features and details of the image.
2) The effectiveness of color alignment loss. To validate the assumption that the color alignment loss can improve imaging quality, we compare the visual quality obtained with and without it. As shown in Fig. <ref>, the result with CAL shows better perceptual quality, with correctly aligned colors and higher contrast. In contrast, the model without CAL exhibits obvious color distortion and blurry edges.
§ CONCLUSION
In this paper, we present a novel strategy for tackling low-light image enhancement that introduces more incoming photons from the IR spectrum. The proposed prototype leads to a higher signal-to-noise ratio in extremely dark environments. Based on the proposed prototype, a paired dataset is collected under different scenarios.
Experimental results on the proposed dataset show that our method achieves the best performance in both quantitative metrics and perceptual quality. Our prototype sheds light on potential new designs for digital cameras that exploit the information captured from the infrared spectrum, providing better image quality and more practical solutions for consumers.
|