entry_id
stringlengths 33
33
| published
stringlengths 14
14
| title
stringlengths 17
188
| authors
sequence | primary_category
stringlengths 5
18
| categories
sequence | text
stringlengths 2
629k
|
---|---|---|---|---|---|---|
http://arxiv.org/abs/2307.05941v1 | 20230712061132 | Characterizing Data Assimilation in Navier-Stokes Turbulence with Transverse Lyapunov Exponents | [
"Masanobu Inubushi",
"Yoshitaka Saiki",
"Miki U. Kobayashi",
"Susumu Goto"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
APS/123-QED
[][email protected]
^1Department of Applied Mathematics, Tokyo University of Science, Tokyo 162-8601, Japan
^2Graduate School of Engineering Science, Osaka University, Osaka 560-8531, Japan
^3Graduate School of Business Administration, Hitotsubashi University, Tokyo 186-8601, Japan
^4Faculty of Economics, Rissho University, Tokyo 141-8602, Japan
Data assimilation (DA) reconstructing small-scale turbulent structures is crucial for forecasting and understanding turbulence.
This study proposes a theoretical framework for DA based on ideas from chaos synchronization, in particular, the transverse Lyapunov exponents (TLEs).
The analysis with TLEs characterizes a critical length scale, below which the turbulent dynamics is synchronized to the larger-scale turbulent dynamics, indicating successful DA.
An underlying link between TLEs and the maximal Lyapunov exponent suggests that the critical length scale depends on the Reynolds number.
Furthermore, we discuss new directions of DA algorithms based on the proposed framework.
Characterizing Data Assimilation in Navier–Stokes Turbulence
with Transverse Lyapunov Exponents
Susumu Goto^2
August 12, 2023
================================================================================================
Predicting the future states of Navier–Stokes turbulence is
a huge challenge due to its chaotic dynamics over broad spatiotemporal scales.
In particular, observational data on small-scale turbulent structures are generally unavailable; therefore, there is uncertainty in the initial conditions for prediction.
These small-scale uncertainties increase exponentially fast,
considering the Lyapunov exponent λ of three-dimensional turbulence
is mainly determined by the Kolmogorov time scale τ <cit.> as λ∝ 1/τ.
This exponential error growth from the small scales finally limits the predictability of large-scale motions <cit.>.
Therefore, for predicting turbulence, it is crucial to infer small-scale turbulent structures from only observational data of large-scale ones.
Data assimilation (DA) is suitable for such inferences.
Previous studies have shown critical length scales below which turbulent structures can be inferred via DA methods with only observational data of larger-scale structures <cit.>, i.e.,
the small-scale turbulent dynamics are reconstructed or `slaved' by the larger-scale dynamics.
Interestingly, in three-dimensional turbulence,
a common critical length scale,
approximately 20 η where η is the Kolmogorov length,
has been reported irrespective of the details of the DA algorithms.
This length scale corresponds to the wavenumber k^∗=0.2/ η,
which was first found in the continuous DA <cit.>
and recently found in four-dimensional variational DA <cit.> and nudging method <cit.>.
This indicates that the slaving small-scale dynamics can be understood based on the nature of the Navier–Stokes equations rather than a specific DA algorithm, which is essential for turbulence physics <cit.> and modeling <cit.>, as discussed later.
However, the physical origins of the critical length scale and the Reynolds-number dependence remain unclear.
In this Letter, we propose a theoretical framework for studying such DA phenomena as a stability problem.
The proposed framework explains, for the first time, how the critical length scale can be determined from the property of the Navier–Stokes equations.
Inspired by the concept of Blowout bifurcation <cit.> in the study of chaos synchronization, we introduce an invariant manifold,
DA manifold, in phase space and present a stability analysis,
wherein the transverse Lyapunov exponents (TLEs) characterize the critical length scale, determining the success or failure of the DA process.
Moreover, we show that the TLEs are a generalization of the maximal Lyapunov exponent λ of the turbulence attractor,
whose Reynolds number dependence has been extensively studied in research on unpredictability <cit.>.
Considering this relationship between the TLEs and maximal Lyapunov exponent and their Reynolds number dependency, we conclude that the critical length scale, k^∗η, depends on the Reynolds number.
The findings of this study suggest novel directions for practical DA research based on stability, and moreover,
shed light on the fundamental relationship between the small-scale dynamics slaved to the larger-scale dynamics
and the unpredictability of turbulence.
Formulation—
On the basis of the mathematical analysis by Olson and Titi (2003) <cit.>,
we study a twin experiment of the continuous DA defined by two incompressible Navier–Stokes equations (i=1,2):
∂_t u_i +( u_i·∇) u_i = - ∇π_i + νΔ u_i + f,
∇· u_i =0,
on the d-dimensional periodic domain [0, L]^d,
where ν denotes the kinematic viscosity.
The first vector field u_1( x,t) is the true velocity field, which is used as a reference;
and the second, u_2( x,t), is used for the DA process (the twin system).
Here, π_i (i=1,2) is the pressure of each system,
and f is the external forcing.
The projection operators P_k_a and Q_k_a are introduced to the Fourier representation as follows:
P_k_a u = ∑_| k | < k_a u_ k e^i k· x, Q_k_a = I - P_k_a,
where u_ k is the Fourier coefficient of the velocity field u corresponding to the wavenumber vector
k∈{ 2π m / L : m∈ℤ^d}.
For each wavenumber k_a∈ℝ,
which is a key control parameter in this study,
these operators decompose the velocity fields into large-scale p_i=P_k_a u_i and small-scale q_i=Q_k_a u_i parts, i.e., u_i = p_i + q_i (i=1,2).
The continuous DA method assumes that the large-scale structure of the true velocity field p_1(t) can be observed at all times t≥ 0 without observational errors.
Therefore, partial observational data p_2(t) are used for the twin system as: p_2(t) ≡ p_1 (t).
Hence, u_2(t)= p_1(t)+ q_2(t).
The evolution equations for q_2(t) are derived from Eq. (<ref>) for i=2 using Q_k_a:
∂_t q_2 + Q_k_a ( u_2·∇ u_2) = - ∇π'_2 + νΔ q_2 + Q_k_a f,
∇· q_2 = 0,
where π'_2 :=Q_k_aπ_2 <cit.>.
The goal of the continuous DA method is to infer the small-scale structure of the true velocity field q_1(t) using Eq. (<ref>) with the observational data of p_1(t).
In the two-dimensional case (d=2), Olson and Titi (2003) rigorously showed a sufficient condition for successful DA;
for a given kinematic viscosity ν and forcing term f,
there exists a critical wavenumber k^∗_a such that, if k_a > k^∗_a, then
q_2(t) converges to q_1(t) exponentially, i.e., u_2(t) → u_1(t) (t →∞).
Therefore, continuous DA enables us to infer the small-scale structure of the true velocity field q_1
without direct observation.
In the three-dimensional case (d=3),
Yoshida, Yamaguchi, and Kaneda (2005) <cit.> studied continuous DA using direct numerical simulations of the Navier–Stokes equations with assimilation at each time step.
Starting with the initial velocity fields of u_1(t) and u_2(t),
the velocity fields u_1(t+ Δ t) and u_2(t + Δ t) were calculated independently
using the fourth-order Runge–Kutta method
and p_2 (t + Δ t) was replaced by the true state p_1 (t + Δ t).
Thus, ũ_2(t + Δ t):= p_1 (t + Δ t) + q_2 (t + Δ t)
where ·̃ denotes the updated state.
Then, the time evolution was calculated using the initial conditions
u_1(t+ Δ t) and ũ_2(t+ Δ t) independently.
Using this method, the critical wavenumber k_a^∗ was identified as k_a^∗η = 0.2, where η is the Kolmogorov length <cit.>.
Numerical experiments—
We conducted direct numerical simulations of the three-dimensional Navier–Stokes equations in a periodic box with L=2 π
driven by a steady forcing
f(x,y,z) =(
-sin x cos y,
cos x sin y,
0)^T.
Figs. 1 (a) and (b) show, respectively, the energy spectrum of the turbulence
and the time series of kinetic energy E in a statistically steady state with Reynolds number Re =570.
More details of the setup for the numerical experiments can be found in the Supplemental Material.
The initial condition of the true system is a turbulent field in a statistically steady state u_1(0) = p_1 (0) + q_1 (0).
We obtained the initial condition of the twin system u_2(0)
by adding a perturbation only to q_1 (0).
This procedure is similar to that adopted by Yoshida, Yamaguchi, and Kaneda (2005) <cit.>.
Fig. 1 (c) shows the time series of energy of the difference field between u_1(t) and u_2(t),
Δ E(t)= 1/2| u_1 - u_2|^2, where | u|^2 = ∑_k | u_k|^2,
for three values of k_a:
k_aη=0.17 (solid red), 0.20 (dotted green), and 0.23 (dashed blue).
Although the energy of the difference field does not decrease for k_aη=0.17 and 0.20,
it decreases exponentially for k_aη =0.23, thereby indicating a successful DA process.
In other words, when k_aη =0.23,
small-scale structures of the velocity field can be determined by the sequential data of large-scale structures.
This result is quantitatively the same as that of the previous studies <cit.>;
i.e., irrespective of differences in the forcing terms and details of the DA methods,
the critical wavenumber is k_a^∗η = 0.2.
For the spatiotemporal dynamics of vortex structures reconstructed using the DA process, see the movie in the Supplemental Material.
DA manifold and its stability—
We characterize the critical wavenumber with a stability property of the (skew-product) dynamical system determined by
the Navier–Stokes equations,
which can be expressed as
∂/∂ t [ p_1, q_1 ]^T = F( p_1, q_1) (the base system),
and Eq. (<ref>),
∂/∂ t q_2 = G( p_1, q_2) (the fiber system).
We focus on the manifold defined by ℳ={ ( p_1, q_1, q_2) | q_1= q_2},
which is invariant because the solution trajectory starting from an initial point on ℳ stays on there.
We refer to ℳ as a DA manifold. Fig. <ref> (a) shows schematics of the solution trajectory and ℳ in the phase space.
The success of the continuous DA process implies asymptotic stability of ℳ.
Let us now consider a successful DA process, i.e., the solution trajectory starting an initial point apart from ℳ, i.e., q_1(0) ≠ q_2(0),
converges to ℳ asymptotically in time; that is, q_2(t) → q_1 (t) (t → + ∞).
This can be interpreted as ℳ being asymptotically stable.
The linear stability analysis of ℳ gives a priori knowledge on whether the DA process succeeds or fails.
To this end, we introduce an infinitesimal perturbation to the velocity field δ q= q_2- q_1
and derive the variational equations as follows:
∂_tδ q + Q_k_a ( u_1·∇δ q )+Q_k_a ( δ q·∇ u_1 )
= - ∇δπ + νΔδ q,
where δπ = π'_2 - π'_1 is the perturbation in the pressure field
(see for the derivation in Supplemental Material).
The transverse Lyapunov exponent (TLE) is defined as
λ_⊥ (k_a):= lim_t →∞1/tln | δ q(t) |,
if the limit exists.
A negative TLE, λ_⊥<0, indicates asymptotic linear stability of the DA manifold ℳ,
which implies a successful DA process.
By contrast,
if the TLE is positive, λ_⊥>0, the DA manifold ℳ is linearly unstable, which implies a failure of the DA process.
The TLE characterizes the average exponential growth or decay rate of the norm of the perturbation along the solution trajectory within ℳ.
The TLEs λ_⊥ (k_a) explain the results of the numerical experiments for continuous DA in the Navier–Stokes turbulence, as shown in Fig. 1 (c).
The numerical integration of the variational equations (<ref>) coupled with the Navier–Stokes equations (<ref>) for u_1 gives a TLE λ_⊥(k_a) for each fixed k_a.
Fig. <ref> (b) shows the normalized TLE λ_⊥(k_a)τ as a function of the normalized wavenumber, k_aη.
For k_aη<0.2, the TLEs are positive, λ_⊥>0; that is, the DA manifold ℳ is unstable.
The TLE decreases as k_a increases and becomes negative for k_aη>0.2; that is, ℳ is stable.
The change in stability of ℳ at the critical wavenumber k_a^∗≃ 0.2/η explains the results of the success or failure of the DA process shown in previous studies <cit.> and Fig. 1 (c).
Reynolds-number dependence—
To study the Reynolds number dependence, the normalized TLEs λ_⊥τ for Re = 1400 are denoted as blue open circles in Fig. <ref>.
For reference, the red circles denote the TLEs λ_⊥ (k_a) τ for Re = 570, which are the same as those in Fig. <ref> (b).
The critical wavenumber k_a^∗η defined by the sign change of the TLEs weakly depends on the Reynolds number;
for Re = 1400 k_a^∗η shifts to a larger value than Re = 570.
To understand this weak Re dependence of the critical wavenumber, we consider the asymptotic forms of the TLE λ_⊥(k_a) at the small and large
wavenumber k_a, respectively.
In the small wavenumber limit,
the TLE is reduced to the maximal Lyapunov exponent of the turbulent attractor.
The variational equations (<ref>) describe the perturbation dynamics confined to the wavenumber regions higher than k_a.
As k_a decreases, the perturbation dynamics become less confined.
At k_a=0, the perturbation can evolve in the tangent space in any direction; that is, no confinement.
In this case, Q_k_a=I (the identity operator), and the variational equations (<ref>) are reduced to the variational equations of the Navier–Stokes equations,
under which the TLE reduces to the maximal Lyapunov exponent of the turbulent attractor, that is λ_⊥(0)=λ.
The horizontal red dashed and blue dotted lines in Fig. <ref> show the values of the normalized Lyapunov exponents λτ for Re = 570 and Re = 1400, respectively.
The TLEs for each Re number converge to the normalized Lyapunov exponents as k_a→ +0.
For Re = 570, the value of the maximal Lyapunov exponent is λτ≃ 0.1, and it increases with Re.
These results agree with the recent findings <cit.> claiming that the maximal Lyapunov exponent
increases with Re faster than predicted by dimensional analysis, that is, λ∝ 1/τ.
In particular, the lower inset of Fig. 4 of Boffetta and Musacchio (2017) <cit.> shows that λτ is an increasing function of the logarithm of Re.
Therefore, λτ depends on the Reynolds number, although this dependence is weak.
Second, for the large-wavenumber limit of λ_⊥(k_a), the perturbation is confined to the higher-wavenumber region
where the viscous term is dominant and ∂_tδ q∼νΔδ q.
This suggests that λ_⊥(k_a) τ = - (k_aη)^2, as denoted by the gray dashed curve in Fig. <ref>.
The TLEs for different Reynolds numbers collapse onto the curve for k_aη≳ 0.25.
In summary, the TLEs λ_⊥(k_a) shown in Fig. <ref> connect the maximal Lyapunov exponents at k_a=0
and the curve - (k_aη)^2 for the large k_a.
In addition, the maximal Lyapunov exponent,
λ_⊥(0)=λ, increases slightly with Re <cit.>.
These findings indicate that
as the Reynolds number increases,
there is an upward shift of λ_⊥(k_a)τ
and a slight increase in the critical wavenumber k_a^∗.
Discussion and Conclusion—
DA is becoming an increasingly significant tool in data-driven forecasting.
However, little is known about the critical wavenumber k^∗, which plays a central role in various DA methods for three-dimensional turbulence <cit.>.
This study establishes a novel framework based on the theories of stability and Blowout bifurcation <cit.> and clarifies the critical wavenumber not from the results of DA but from the TLEs, which are the characteristic quantities of the Navier–Stokes equations.
Furthermore,
considering the novel discovery of the Reynolds number dependence of the maximal Lyapunov exponent <cit.>,
the relationship between the TLEs and the maximal Lyapunov exponent suggests a weak Reynolds number dependence of the critical wavenumber for the first time.
This Letter aims to present novel concepts completely different from those used in the well-established DA research fields <cit.>;
thus, a systematic investigation of the Reynolds number dependence of TLEs,
in particular, the critical length scale,
is beyond the scope of the present study and a crucial future challenge.
To this end, developing efficient algorithms for calculating the TLEs would be helpful.
The continuous DA is an ideal setting for the first step; extensions of our framework to incorporate the presence of noise and mismatch of the Reynolds number
will be important not only in practice
but also in the research of high-dimensional chaos synchronization.
Besides phase-space dynamics studied in this Letter,
understanding the turbulent dynamics in physical space will be complementarily necessary.
Remarkably,
the critical wavenumber, k^∗=0.2/ η, has been identified in a context that differs from DA; that is,
a recent study on vortex stretching found the far dissipation range as the wavenumber region above k^∗, i.e., k>k^∗ <cit.>.
In terms of the Kolmogorov–Richardson energy cascade,
the turbulent dynamics in the far dissipation range terminates the cascade process.
Although structures in the range acquire the energy from larger scales, they cannot transfer it to smaller ones but dissipate it there instead.
This may imply that they are slaving to larger-scale structures and gives an interpretation of the small-scale slaving dynamics in the DA context.
In addition to this insight,
a key to the complete understanding of the slaving small-scale dynamics will be found in the physical space structure of the (covariant) Lyapunov vectors <cit.> corresponding to the Lyapunov exponents;
these are `unstable modes' of turbulent structures, such as the hierarchy of antiparallel vortex tubes <cit.>,
as will be presented elsewhere.
These future studies based on the proposed framework can lead to new DA algorithms, including an approach for stabilizing the unstable direction of the DA manifold.
In the rapid development phase of the data-driven methods for turbulence <cit.>, including the DA methods <cit.>,
the dynamical system approaches are significant.
In particular, a neural network-based study of turbulence modeling found a qualitative change in modeling difficulty at k^∗=0.2/η <cit.>; the modeling of turbulent dynamics in the wavenumber region higher than that is feasible without difficulty, which can be understood within the proposed framework using TLEs.
The TLEs will provide insights into the data-driven science of turbulence and, more generally, high-dimensional chaotic dynamical systems with hierarchical spatiotemporal scales.
This work was partially supported by JSPS Grants-in-Aid for Scientific Research (Grants Nos. 22K03420, 22H05198, 20K20973, 20H02068, 19K14591, and 19KK0067).
Direct numerical simulations of the Navier–Stokes equations were conducted using supercomputer systems of the Japan Aerospace Exploration Agency (JAXA-JSS2).
*
99
Ruelle79 D. Ruelle, Microscopic fluctuations and turbulence, Phys. Lett. A 72, 81 (1979).
Lorenz E. N. Lorenz, The predictability of a flow which possesses many scales of motion. Tellus 21, 289 (1969).
Leith-Kraichnan C. E. Leith and R. H. Kraichnan, Predictability of turbulent flows. J. Atmos. Sci. 29, 6 (1972).
YYK2005
K. Yoshida, J. Yamaguchi, and Y. Kaneda, Regeneration of small eddies by data assimilation in turbulence, Phys. Rev. Lett. 94, 014501 (2005).
Lalescu2013 C. C. Lalescu, C. Meneveau, and G. L. Eyink, Synchronization of chaos in fully developed turbulence, Phys. Rev. Lett. 110, 084102 (2013).
Li2020 Y. Li, J. Zhang, G. Dong, and N. S. Abdullah, Small-Scale Reconstruction in Three-Dimensional Kolmogorov Flows Using Four-Dimensional Variational Data Assimilation, J. Fluid Mech. 885, A9 (2020).
Biferale P. C. D. Leoni, A. Mazzino, and L. Biferale, Synchronization to big data: nudging the Navier–Stokes equations for data assimilation of turbulent flows, Phys. Rev. X 10, 011023 (2020).
Martin2021 A. Vela-Martín, The synchronisation of intense vorticity in isotropic turbulence, J. Fluid Mech. 913, R8 (2021).
Zaki M. Wang and T. A. Zaki, Synchronization of turbulence in Channel flow, J. Fluid Mech. 943, A4 (2022).
Titi
E. Olson and E. S. Titi, Determining modes for continuous data assimilation in 2D turbulence, J. Stat. Phys. 113, 799 (2003).
GSK2017 S. Goto, Y. Saito, and G. Kawahara, Hierarchy of antiparallel vortex tubes in spatially periodic turbulence at high Reynolds numbers, Phys. Rev. Fluids 2, 064603 (2017).
YGT2022 T. Yoneda, S. Goto, and T. Turuhashi, Mathematical reformulation of the Kolmogorov–Richardson energy cascade in terms of vortex stretching, Nonlinearity 35, 1380 (2022).
MIG S. Matsumoto, M. Inubushi, and S. Goto (in preparation).
Ott
E. Ott and J. C. Sommerer, Blowout bifurcations: the occurrence of riddled basins and on-off intermittency, Phys. Lett. A 188, 39 (1994).
Boffetta2017 G. Boffetta and S. Musacchio, Chaos and predictability of homogeneous-isotropic turbulence, Phys. Rev. Lett. 119, 054102 (2017).
Mohan P. Mohan, N. Fitzsimmons, and R. D. Moser, Scaling of Lyapunov exponents in homogeneous isotropic turbulence, Phys. Rev. Fluids 2, 114606 (2017).
Arjun B. Arjun, and R. D. J. G. Ho., Chaotic properties of a turbulent isotropic fluid, Phys. Rev. Lett. 120, 024101 (2018).
Sebastian S. Reich and C. Cotter, Probabilistic forecasting and Bayesian data assimilation. Cambridge University Press (2015).
KO S. Kida and K. Ohkitani, Spatiotemporal intermittency and instability of a forced turbulence, Phys. Fluids 4, 5 (1992).
Ginelli F. Ginelli, P. Poggi, A. Turchi, H. Chaté, R. Livi, and A. Politi, Characterizing dynamics with covariant Lyapunov vectors,
Phys. Rev. Lett. 99, 130601 (2007).
IKTY M. Inubushi, Miki U. Kobayashi, S. Takehiro, and M. Yamada, Covariant Lyapunov Analysis of Chaotic Kolmogorov Flows, Phys. Rev. E 85, 016331 (2012).
ITY M. Inubushi, S. Takehiro, and M. Yamada, Regeneration cycle and the covariant Lyapunov vectors in a minimal wall turbulence, Phys. Rev. E. 92, 023022 (2015).
lucarini Y. Chen, A. Carrassi, and V. Lucarini, Inferring the instability of a dynamical system from the skill of data assimilation exercises, Nonlin. Processes Geophys. 28, 633 (2021).
Caulfield A. Mashayek, N. Reynard, F. Zhai, K. Srinivasan, A. Jelley, A. N. Garabato, and C. P. Caulfield, Deep Ocean Learning of Small Scale Turbulence, Geophys. Res. Lett. 49, 15 (2022).
Bruntonbook M.A. Mendez, A. Ianiro, B. R. Noack, S. L. Brunton, Data-Driven Fluid Mechanics: Combining First Principles and Machine Learning, Cambridge University Press (2023).
|
http://arxiv.org/abs/2307.07351v1 | 20230714140053 | RDSim: A fast, accurate and flexible framework for the simulation of the radio emission and detection of downgoing air showers | [
"Washington R. de Carvalho Jr.",
"Abha Khakurdikar"
] | astro-ph.IM | [
"astro-ph.IM",
"astro-ph.HE"
] |
Studying quantum entanglement and quantum discord in the cavity QED models
Li Wang-shun
August 12, 2023
==========================================================================
§ INTRODUCTION
RDSim is a framework for the simulation of the radio emission and detection of extensive air showers (EAS) by an arbitrary antenna array. The speed of the simulation set-up makes it possible to investigate larger areas around the detector, events with very low detection probability and examine the impact of asymmetries and border effects of the array. RDSim is specially suited to be used as a fast and accurate aperture calculator.
This work is organized as follows: Section <ref> describes the radio emission model used while on section <ref> we describe the simple detector response models and the overall structure of RDSim, including a description of the optional particle trigger simulation. Section <ref> outlines the extra models used in the case of neutrino events, such as sampling of the interaction point and tau-lepton propagation, followed by a discussion on Section <ref>.
§ RADIO EMISSION MODEL
When the air shower progresses in the atmosphere, the electromagnetic component of the shower emits radio emission. The measured radio emission is the composite of different emission mechanisms. The two main contributors to the radio emission are known as the Geomagnetic and Askaryan mechanisms. When the secondary electrons and positrons of the air shower travel through the atmosphere, they are accelerated by the Earth's magnetic field. The accelerated electrons and positrons drift apart in opposite directions (see top left of Fig. <ref>). The polarization direction of the resulting current is thus in the direction of the Lorentz force. As the air shower evolves, the number of secondary particles increases, reach maximum and then die out. Macroscopically, these accelerated particles create a time varying current that emits radiation. This is known as the geomagnetic emission mechanism and accounts for 80-90% of the radio emission of the extensive air shower. It is roughly proportional to the intensity of the Lorentz force and thus to sinα, where α is the angle between the shower axis and the geomagnetic field (see bottom left panel of Fig. <ref>).
The remaining 10-20% of the radio emission contribution comes from the Charge-excess mechanism, also known as Askaryan effect. As the charged particles interact with the air molecules, they ionize the medium. As the shower proceeds, the free electrons are then swept along with the shower front (see top right panel of Fig. <ref>), creating an excess of electrons in the shower. Macroscopically this can be thought as a dipole that moves along with the shower front due to the charge separation. The radio emission of this mechanism is polarized radially with respect to the shower axis <cit.> (see also the bottom left panel of Fig. <ref>).
The radio emission model in RDSim is based on the superposition of the Geomagnetic and Askaryan emission mechanisms and is an expansion of the model described in <cit.>. The emission of both mechanisms is roughly linearly polarized, but due to their different polarizations the superposition of these two mechanisms leads to an asymmetric footprint of the observed radiation pattern on ground. The RDSim emission model uses ZHAireS simulations as an input. ZHAireS <cit.> is an extension of AIRES (AIRshower Extended Simulations) <cit.> used to calculate the radio emission of air showers using the ZHS algorithm, which is based on first principles and do not presuppose any emission mechanisms. Instead it calculates the contribution to the vector potential of every single charged particle track in the shower. ZHAireS simulations of just a few antennas along a reference line are performed and RDSim disentangles the two emission components in order to obtain the peak electric field amplitude, for each mechanism separately, as a function of the distance r to the core along the reference line. The model then assumes an elliptical symmetry at ground level (see bottom right panel of Fig.<ref>) for the amplitudes of each mechanism and combines them, along with their respective polarizations, to obtain the net electric field and its polarization at any position on the ground. The model takes into account early-late effects that arise due to the varying distance to the shower as the observer position changes and also features a simple linear scaling of the electric field with shower energy.
To make it possible to use a single ZHAireS simulation for multiple arrival directions, RDSim's emission model can be rotated to any desired azimuth angle. This rotation takes into account all relevant parameters, such as the change of the angle α between the shower axis and the geomagnetic field and the changes in the distance between the simulated antennas on the reference line and the shower maximum. The maximum errors introduced by this rotation are very small, around 2%. For more details see <cit.>. We have also compared the results of the superposition model with full ZHAireS simulations and find a very good agreement between the two, as can be seen on Fig. <ref>, where we show a comparison between the electric field obtained with full simulations and the superposition model for a 70^∘ 1 EeV proton shower.
§ DETECTOR RESPONSE AND STRUCTURE
We model the characteristics of the array and its antennas in a very simple way. RDSim can be used with any arbitrary array with antenna positions on a horizontal plane at ground level. All antennas in the array are assumed to be identical and a simple threshold in electric filed amplitude is then set to trigger the antennas. We can also consider the effect of the beam pattern of the antennas in the trigger. To do this we simply multiply the electric filed obtained from the radio emission model by the beam pattern at the arrival zenith angle. This effective electric field is then used to evaluate the antenna trigger. For an array-level trigger we simply use a settable minimum number of antenna-level triggers required to trigger the whole event.
A more detailed description can be seen in <cit.>.
Some detectors also require ground particles to trigger. To accommodate for this, we have implemented a simple particle trigger. Currently, RDSim only takes into account high energy muons by using a simple model to estimate the muon density at ground level. This model uses as input AIRES simulations of ground particles at a low thinning level. Similarly to the radio emission model, we have also implemented a rotation of the muon model to make it possible to use a single AIRES simulation for many arrival directions. To do this we project the ground muons obtained from the simulation into the perpendicular plane of the simulated arrival direction (θ,ϕ). Small variations in the arrival azimuth leads to only very small variations in the angle α between the shower axis and the geomagnetic field. This leads to only small variations in the Lorenz force intensity and direction and thus only small variations in the perpendicular plane muon map. This can be visualized on the top panel of Fig. <ref>, where we show the muon density on the perpendicular plane for a simulation at ϕ=130^∘ (left) and one at ϕ=140^∘ (right), for the same 85^∘ zenith angle. Based on this approximation, the muon model assumes that the simulated muon map on the perpendicular plane can be used to produce muon maps at ground level for other arrival directions, provided the changes in the azimuth angle are small. In order to rotate a ground muon simulation performed for an azimuth angle ϕ to a different azimuth ϕ' we simply use the same original perpendicular plane map obtained for (θ,ϕ), but project it back to the ground using the new arrival direction (θ,ϕ'). The middle panel of Fig. <ref> shows the muon densities at ground level for an unrotated shower with azimuth 140^∘ (left) and a 130^∘ azimuth shower rotated to 140^∘ (right). The default behavior for this procedure is to allow a maximum difference in azimuth of 10^∘ (|ϕ-ϕ'|<10^∘) for the rotation. This means that performing AIRES ground muon simulations only every 20^∘ in azimuth allows the rotation of the maps to any desired azimuth.
Once the muon density map at ground level is obtained for a given event, we estimate the number of muons crossing each particle detector. To take into account the particle detector shape, we calculate an effective area A_(θ) as a function of the shower zenith angle, which represents the area on the ground that is shadowed by the detector for a given arrival direction. As an example, a circular Auger-like tank with radio r and filled with water to a height h would have an effective area A_(θ)=π r^2 + 2rhtanθ, while for a horizontal scintillator on the ground the effective area is just its geometrical area. We then sample the number of muons crossing the detector from a Poisson distribution with a rate parameter λ = A_eff(θ)ρ_μ, where ρ_μ is the muon density at the location of the detector. To evaluate the particle detector trigger state we use a simple settable trigger threshold on the number of muons that crosses the detector. This particle simulation is performed after the main radio-only detection part of the simulation finishes and is calculated only for those events and stations that triggered in the radio-only part. This approach simplifies the main RDSim run and keeps its high speed intact. An example simulated event at Auger-RD <cit.> can be seen on the bottom panel of Fig. <ref>. On the left we show the result of the radio-only part of the simulation, while on the right we show the simple particle trigger simulation for the same event.
RDSim was built around speed and is implemented in C++ with just a few key classes. The main run is controlled by an input file which contains parameters such as arrival direction, energy and core position ranges along with the settable trigger thresholds. It also contains links to the previously produced instances of the radio emission model along with ranges for their validity. This main input file also sets the secondary input files, such as the optional antenna pattern file and, in the case of events induced by tau decays, the file with the parametrizations for the tau propagation (see section <ref>). During the run, and for each event, RDSim samples an energy, core position and arrival direction. In the case of neutrino events it also samples the interaction or decay depth where the shower starts. Once the parameters of the event are sampled, RDSim searches for all radio emission toymodel instances that can be used for that particular event and chooses one randomly. The chosen toymodel is rotated and scaled to match the parameters of the event and then used to calculate the electric field at each antenna. The (optional) beam pattern is then applied to the calculated fields and the trigger conditions are checked. When the main run ends, the complete information of each event is saved in a compressed ROOT file containing, among other parameters, the event number, arrival direction, core position, energy, number of triggered stations, the details of the specific instance of the radio emission model that was used and in the case of neutrino events also the interaction or decay depth. For events that triggered, the peak electric field of each triggered antenna is also saved. More details can be seen in <cit.>.
§ NEUTRINO EVENTS IN RDSIM
RDSim can handle all neutrino interaction channels, but in order to simulate neutrino events some extra parameters are needed. In the case of showers initiated by CC and NC interactions, we use an extensive library of HERWIG <cit.> simulations of neutrino interactions. The products of the simulation are injected into ZHAireS to calculate the radio emission of neutrino induced events. Since the neutrino cross-section is very small for all energies, we assume the point of neutrino interaction to be equally distributed in atmospheric depth X. So, in order to sample the point in the atmosphere where the neutrino interacts and the shower starts, we simply divide the atmosphere in slices of equal thickness Δ X in atmospheric depth, centered at a various interaction depths X_int. Instances of the superposition emission model are then created for each slice from ZHAireS simulations. RDSim then chooses one of the slices at random, with equal probability, and one of the corresponding instances of the emission model at that particular X_int to simulate the radio emission of the shower.
In case of showers induced by tau-lepton decays, the procedure is similar but TAUOLA <cit.> simulations of tau decays are used instead of HERWIG as input for the air shower simulations. In order to sample the position where the shower starts, i.e. the point where the tau decays, one has to take into account the propagation of the tau from its creation, where the ν_τ interacts, until its decay. The ν_τ interaction depth X_0 in the atmosphere is sampled as before by choosing a random atmospheric Δ X slice. We then propagate the tau, disregarding its energy losses in air, to obtain the distribution of the decay depths X_, where the showers start. This propagation is based on the probability dP(E_τ) of tau decay per meter. In order to maintain RDSim's high speed in this type of event, these propagation simulations are done prior to the main run in order to obtain, for each zenith angle, the probability of the tau decaying before reaching the ground along with a parametrization of the decay depths X_ for those that decay above ground. The latter also takes into account the position of the ν_τ interaction. During the main run we sample the probability of the tau decaying before reaching the ground. If it decays above ground we sample the previously produced X_ parametrization and simulate a shower at that position, otherwise the event is instantly marked as not triggered, since no shower is produced.
§ DISCUSSION
The RDSim framework is very fast and flexible. It can simulate the radio emission of downgoing air showers and its detection by any horizontal ground antenna array. On Fig. <ref>, we present two example events, one simulated using the AUGER-RD <cit.> array (left) and the other using OVRO-LWA <cit.> (right), two very distinct detectors. RDSim was built with speed in mind and uses several simple, yet still very accurate, toymodel-like approaches. Once it is setup, it is able to simulate millions of events in just a few minutes. Its speed makes it possible to have enough statistics to study in detail the detection probability of every class of event, including events with a very low probability of detection. It can thus be used to shed light into the effect the array characteristics have in its detection capabilities and is specially suited to be used as a fast aperture calculator. It also allows to study geometrical effects in detail, such as those that arise due to border effects or array asymmetries. As a very simple example of this type of study, we show on the right panel of Fig. <ref> the number of triggered stations as a function of core position for 10^∘ 1 EeV proton showers at OVRO-LWA.
Another use of RDSim is to optimize the parameters of dedicated full simulation libraries in the event more detailed modeling of the emission and detector response are desired. Since it can calculate the detection probability of every class of event with great precision, it can be used to set the full simulation parameters that would cover the whole phase-space of detectable events, not only estimating the total number of full simulations needed but also the relative number needed for each class of event. This is specially relevant for neutrino events, since they have extra variables if compared to normal showers, such as the variable interaction or decay depth. This makes the phase-space for this type of event much larger. In this case, unoptimized full simulation libraries would either not cover the whole phase-space or be computationally unfeasible due to the shear number of unoptimized simulations needed to cover it fully.
We are in the process of comparing RDSim simulated events with full simulations of both the radio emission and detector response. Our preliminary results shows a very good agreement between RDSim and full simulations, despite the huge decrease in computing time. We are also extending RDSim to simulate mountain events. These are events induced by the decay of tau-leptons that are produced by ν_τ interactions inside mountains around the detector. This will be accomplished by using topographical maps to calculate the amount of rock traversed and the distance to the closest rock face for any given core position and arrival direction. This would be all that is needed not only to simulate such events, but also to convolute RDSim results with the probability of a given tau to be produced and exit the mountain on an event-by-event basis.
99
schoorlemmer2012tuning Harm Schoorlemmer, PhD. thesis, Radboud University Nijmegen, (2012)
HUEGE20161 Tim Huege, Physics Reports, 620, 1 - 52, (2016)
toymodel
J. Alvarez-Muniz, Washington R. Carvalho Jr., Harm Schoorlemmer, Enrique
Zas, Astropar. Phys., 59, 29-38, (2014)
zhaires J. Alvarez-Muniz, W. R. Carvalho Jr. and E. Zas, Astropar. Phys., 35, 325, (2012)
aires Sciutto, S.. (2019). AIRES A system for air shower simulations. User's guide and reference manual. 10.13140/RG.2.2.12566.40002.
RDSimARENA W. R. de Carvalho Jr. and Abha Khakurdikar, PoS (ARENA2022) 055
AugerPrime Jörg R. Hörandel for the Auger Collaboration, EPJ Web Conf.
210 (UHECR 2018), (2019)
herwig HERWIG 6.5, G. Corcella, I.G. Knowles, G. Marchesini, S. Moretti, K. Odagiri, P. Richardson, M.H. Seymour and B.R. Webber, JHEP 0101 (2001)
OVROARENA Kathryn Plant et al, PoS(ARENA2022)029, (2022)
tauola S. Jadach et al,
Computer Physics Communications, Volume 76, Issue 3, 361-380, (1993)
|
http://arxiv.org/abs/2307.03923v1 | 20230708073717 | New Methods for MLE of Toeplitz Structured Covariance Matrices with Applications to RADAR Problems | [
"Augusto Aubry",
"Prabhu Babu",
"Antonio De Maio",
"Massimo Rosamilia"
] | eess.SP | [
"eess.SP"
] |
Submitted to IEEE Trans. on Signal Processing...
New Methods for MLE of Toeplitz Structured Covariance Matrices with Applications to RADAR Problems
Augusto Aubry, Senior Member, IEEE, Prabhu Babu, Antonio De Maio, Fellow, IEEE, and Massimo Rosamilia, Member, IEEE
A. Aubry and A. De Maio are with the Department of Electrical Engineering and Information Technology, Universita degli Studi di Napoli “Federico II”, DIETI, Via Claudio 21, I-80125 Napoli, Italy (E-mail: [email protected], [email protected]).
P. Babu is with CARE, IIT Delhi, New Delhi, 110016, India (E-mail: [email protected])
M. Rosamilia is with the National Inter-University Consortium for Telecommunications, 43124 Parma, Italy (e-mail: [email protected]).
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
This work considers Maximum Likelihood Estimation (MLE) of a Toeplitz structured covariance matrix. In this regard, an equivalent reformulation of the MLE problem is introduced and two iterative algorithms are proposed for the optimization of the equivalent statistical learning framework. Both the strategies are based on the Majorization Minimization (MM) paradigm and hence enjoy nice properties such as monotonicity and ensured convergence to a stationary point of the equivalent MLE problem. The proposed framework is also extended to deal with MLE of other practically relevant covariance structures, namely, the banded Toeplitz, block Toeplitz, and Toeplitz-block-Toeplitz. Through numerical simulations, it is shown that the new methods provide excellent performance levels in terms of both mean square estimation error (which is very close to the benchmark Cramér-Rao Bound (CRB)) and signal-to-interference-plus-noise ratio, especially in comparison with state of the art strategies.
§ INTRODUCTION
Estimation of the data covariance matrix has diverse applications in radar signal processing, such as direction
of arrival estimation, target detection, adaptive beamforming, and sidelobe canceller design <cit.>. In these situations, the interference covariance matrix is estimated from the secondary/training data, which are assumed target-free and collected from spatial and/or temporal returns corresponding to range cells close to the one of interest. When the data follows a complex, zero-mean, circular Gaussian distribution, it is well known that the Sample Covariance Matrix (SCM) is the unstructured Maximum Likelihood (ML) estimate of the covariance matrix. However, in the presence of a small number of training data and/or when mismatches in training data spectral properties occur, it does not always represent a reliable choice for the covariance inference <cit.>. A well-known strategy, often discussed in the open literature to improve the performance of a covariance estimator, relies on the incorporation of some a priori knowledge about its underlying structure. For instance, in some radar applications, it is customary to suppose that data come from a stationary Gaussian random process, leading to a Hermitian symmetric Toeplitz Structured Covariance (TSC) matrix. Leveraging this information, one can obtain (under the design conditions) a more reliable estimator than the SCM <cit.>. Aside radar applications, the estimation of a TSC matrix is encountered in speech recognition <cit.>, spectral estimation <cit.>, gridless compressive sensing <cit.>, and hyperspectral imaging <cit.>.
So far, several algorithms have been proposed for estimating a TSC matrix. Let us first discuss those for ML Estimation (MLE). According to the Caratheodory parametrization <cit.> a Toeplitz covariance matrix ∈ℍ^m × m can always be decomposed as[Notice that the parametrization is unique provided that the rank of <m <cit.>.]
[ = ^H; []_k,k≥ 0 ],
where
=
[ 1 ⋯ 1; e^jω_1 ⋯ e^jω_r; ⋮ ⋱ ⋮; e^j(m-1)ω_1 ⋯ e^j(m-1)ω_r ],
=
[ p̃_1 … 0; ⋮ ⋱ ⋮; 0 … p̃_r ],
ω_i and p̃_i, i=1,2, ⋯,r ≤ m, denote some angular frequencies and their corresponding powers while r indicates the rank of . Capitalizing on this parametrization, Circulant Embedding (CE) of Toeplitz matrix (<cit.>) can be used to compute approximately the ML estimate of . According to CE, a Positive SemiDefinite (PSD) m × m Toeplitz matrix is modeled as
[ = ^H; = diag([p_1,p_2,⋯,p_L]), p_k≥ 0 , ]
where = [_m × m _m × (L-m)], _m × m is the identity matrix of size m × m, _m × L-m is the zero matrix of size m × L-m, is the normalized Discrete Fourier Transform (DFT) matrix of size L ≥ 2m-1 and is a diagonal matrix of size L × L with diagonal elements p_k≥ 0. Therefore, the matrix is completely parameterized by the diagonal matrix . Although estimating the Toeplitz covariance matrix using CE seems attractive, the representation in (<ref>) is valid only for a subset of Toeplitz covariance matrices. This can be intuitively justified because the Caratheodory parametrization in (<ref>) does not give restrictions on the frequencies spacing, while the CE in (<ref>) strictly requires the frequencies to lie on the Fourier grid. Hence, for some Toeplitz matrices, the parametrization in (<ref>) is only approximated. Based on CE, <cit.> and <cit.> have proposed an iterative algorithm based on Expectation-Maximization (EM) for MLE of . By modifying the M step in the EM procedure, in <cit.> the technique has been extended to deal with the banded Toeplitz covariance case. In <cit.>, still leveraging CE framework, a Majorization Minimization (MM) based optimization, with faster convergence than the EM of <cit.> and <cit.>, has been introduced. In <cit.> a closed-form estimator has been designed by invoking the extended invariance principle to deal with the Toeplitz constraint. Finally, in <cit.>, an efficient approximation of a Toeplitz covariance matrix under a rank constraint has been handled forcing the eigenvectors to be the same as those of the SCM whereas the Toeplitz constraint has been explicitly imposed while estimating the eigenvalues. Other than the MLE, several other alternative paradigms have been considered for the problem at hand. Recently, in <cit.> the Toeplitz structure is forced together with a condition number constraint via SCM projection onto a suitable constraint set. Other geometric based approaches for the TSC estimation have also been proposed in <cit.>.
In this manuscript, two iterative algorithms referred to as Alternating Projection Based TOeplitz Covariance Matrix Estimation 1 (ATOM1) and ATOM2 are devised leveraging a suitable reformulation of the MLE problem and the MM framework. Both ATOM1 and ATOM2 involve the construction of a bespoke surrogate function (s.f.) along with its optimization. Specifically, the two procedures construct distinct s.f. and therefore solve different surrogate minimization problems. While ATOM1 addresses the surrogate minimization problem using the Alternating Direction Method of Multipliers (ADMM), ATOM2 handles it either via alternating projection or Dykstra's algorithm. However, both the procedures directly estimate the Toeplitz covariance matrix without forcing a reparametrization via the CE. ATOM2 is also extended to include other constraints, such as banded Toeplitz, block-Toeplitz, and Toeplitz-block-Toeplitz structures. The major contributions of this paper can be summarized as follows:
* Two iterative algorithms ATOM1 and ATOM2 are proposed based on the MM framework to address MLE of a Toeplitz covariance matrix. Their computational complexities are thoroughly discussed. Also, the convergence of the procedures to a stationary point of the equivalent MLE problem is established.
* The extensions of ATOM2 to handle additional covariance structures, such as banded Toeplitz, block-Toeplitz, and Toeplitz-block-Toeplitz.
* The derivation of the Cramér-Rao Bound (CRB) for the estimation of Toeplitz, banded Toeplitz, and Toeplitz-block-Toeplitz covariance matrices are provided.
* Performance comparisons of the proposed algorithms (included their extensions) with some state-of-the-art procedures via numerical simulations are illustrated, using the Mean Square Error (MSE) and the Signal-to-Interference-plus-Noise Ratio (SINR) (for case studies related to radar applications) as performance metrics.
The organization of the paper is as follows. The MLE problem of Toeplitz covariance matrix for complex, zero-mean, circular Gaussian observations is formulated in Section <ref>. In Section <ref>, ATOM1 and ATOM2 algorithms are proposed, along with a discussion on their computational complexity and implementation aspects. Also, their convergence properties are studied. At the end of this section, the extension of ATOM2 to handle additional constraints along with the Toeplitz requirement is discussed too. In Section <ref>, the CRB for the estimation of Toeplitz, banded Toeplitz, and Toeplitz-block-Toeplitz covariance matrices is computed. In Section <ref>, the proposed algorithms are compared with some state-of-the-art techniques, and finally, concluding remarks are given in Section <ref>.
§.§ Notation
Throughout the paper, bold capital and bold small letter denote matrix and vector, respectively. A scalar is represented by a small letter. The value taken by an optimization vector at the t^th iteration is denoted by _t.
Furthermore, ℝ is used to denote the set of real numbers, ℝ^m and ℂ^m are used to represent the sets of m dimensional vectors of real and complex numbers, respectively, whereas ℝ^m × m, ℂ^m × m, and
ℍ^m × m are used to represent the sets of m × m matrices of real numbers, m × m matrices of complex numbers, and m × m Hermitian matrices, respectively. Superscripts (·)^T, (·)^*, (·)^H, and (·)^-1 indicate the transpose, complex conjugate, complex conjugate transpose, and inverse, respectively. For any x ∈ℝ, ⌈ x ⌉ returns the least integer greater than or equal to x. The trace and the determinant of a matrix are denoted by Tr() and ||, respectively. The notation []_i is used to represent the i^th column of the matrix . The symbol ⊗ indicates the Kronecker product while the gradient of a function f is denoted by ∇ f. The symbol ≽ (and its strict form ≻) is used to denote the generalized matrix inequality: for any ∈ℍ^m × m, ≽ 0 means that is a PSD matrix (≻ 0 for positive definiteness). Besides, for any ∈ℍ^m × m, eig() is the vector collecting the eigenvalues of (sorted in increasing order). The Euclidean norm of the vector is denoted by _2, || indicates the element wise modulus of the vector . The notation E[·] stands for statistical expectation. Finally, for any ,∈ℝ^m× m, max(,) refers to the matrix containing the element wise maximum between and .
§ PROBLEM FORMULATION
Let us assume the availability of n independent and identically distributed vectors {_1, _2, ⋯,_n}, where each _i is of size m and follows a m-variate complex, zero-mean, circular Gaussian distribution with covariance matrix ≻0. The maximum likelihood covariance estimation problem can be formulated as
[ ≻ 0 minimize f̅() =1n∑_i=1^n_i^H^-1_i + log|| . ]
If n ≥ m, Problem (<ref>) has a unique minimizer with probability one which is given by the SCM, i.e., _SCM = 1n∑_i=1^n_i_i^H. However, if the random process, where each observation is drawn, is stationary (at least in wide sense) then the covariance matrix also exhibits a Toeplitz structure which can be capitalized in the estimation process. By doing so, Problem (<ref>) becomes
[ MLE: ∈ Toep, ≻ 0 minimize f̅(), ]
where Toep is used to denote the set of Hermitian Toeplitz matrices of size m × m. The above problem has two constraints: a structural constraint and a positive definite constraint. Even though the structural constraint is convex, the non-convexity of the objective function makes Problem (<ref>) challenging to solve and no analytical solution seems to be available. In the following two iterative solution procedures for (<ref>) are designed exploiting the MM principle. Briefly, the MM technique mainly consists of two steps
* constructing a s.f. g(|_t) (where _t is the estimate of at the t^th iteration) for the objective function in (<ref>);
* minimizing the resulting surrogate problem at each iteration.
For more details, <cit.> provide an in-depth discussion on MM based algorithms.
§ ALGORITHMS FOR TOEPLITZ COVARIANCE MATRIX ESTIMATION
In this section, ATOM1 and ATOM2 are proposed to tackle the MLE problem of TSC matrix. Both exploit the MM principle (applied to an equivalent reformulation of the MLE problem) and differ in the way they construct and handle the surrogate minimization problem. ATOM1 solves the surrogate optimization using ADMM while ATOM2 tackles it using either alternating projection or Dykstra's algorithm. Subsequently, the computational complexity and proof of convergence of the procedures are established. Finally, the extension of ATOM2 to deal with additional covariance constraints along with the Toeplitz structure is provided.
Before proceeding further, let us observe that the Hermitian Toeplitz matrices intrinsically endow the centro-Hermitian symmetry structure <cit.>, i.e.,
= ^*
with the m× m permutation matrix given by
= [ 0 0 ⋯ 0 1; 0 0 ⋯ 1 0; ⋮ ⋮ ⋱ ⋮ ⋮; 1 0 ⋯ 0 0 ] .
As a consequence, Problem (<ref>) is tantamount to
∈ Toep, ≻ 0 minimize f(),
where
f() = (_FB^-1) + log||
refers to the restriction of f̅(·) to the centro-Hermitian covariance matrices, with _FB the forward-backward (FB) averaged sample covariance matrix[Hereafter, Problem (<ref>) (and thus (<ref>)) is assumed solvable, i.e., there exists a global optmizer ^* ≻ 0, as well as any limit point of a feasible sequence of matrices whose corresponding objectives converge to the optimal value is feasible to the optimization problem. As a consequence, without loss of generality, the constraint ≻ 0 can be relaxed into ≽ 0. Notably, a sufficient condition to ensure the aforementioned properties is provided by n ≥⌈ m/2 ⌉, corresponding to _FB≻ 0 with probability one.] given by _FB = 1/2 (_SCM + _SCM^* ) <cit.>.
Now, decomposing _FB=^H, e.g., via LDL factorization, with ∈ℂ^m × r, where r=rank(_FB)≤ m, Problem (<ref>) can be equivalently cast as[A similar constraint reformulation is used in some studies involving atomic norm for sparse reconstruction <cit.>.] (the interested reader may refer to Appendix A of the supplementary material to this paper)
min_∈ Toep,∈ℍ^r× r () + log||
s.t. ([ ^H; ])≽0,
where the objective is a concave differentiable function of and .
Before proceeding with the next important lemma, it is worth pointing out that Problem (<ref>) holds true even if the Toeplitz structural constraint in Problem (<ref>) and (<ref>) is replaced by any set of positive definite matrices, provided that the estimation problem is solvable.
Given a concave differentiable[For a non-differentiable function, the inequality in (<ref>) can be cast as h() ≤h(_t) + Tr((_t)^H (-_t)), where (_t) is the subgradient of the concave function h() at _t <cit.>. ] function h(): ℍ^r × r→ℝ, it can be majorized as
[ h() ≤h(_t) + Tr(∇h(_t)^H (-_t)), ]
where _t∈ℍ^r × r. The upper bound to h() is linear and differentiable with respect to (w.r.t.) .
Since h() is a concave function w.r.t. , (<ref>) stems from linearizing h() via its first order Taylor expansion <cit.>.
In order to tackle the challenging optimization problem (<ref>), MM-based methods <cit.>, denoted ATOM1 and ATOM2, are now developed.
To this end, let us observe that the term log|| in (<ref>) is a concave function w.r.t. <cit.>. Hence, it can be majorized using Lemma <ref> to get the following s.f.
g(,|_t) =() + (_t^-1) + c_1
=(_t) + c_1,
where the constant c_1 = log|_t| - m, _t = diag(,_t^-1), whereas = diag(,) is the block-diagonal matrix with blocks and along the main diagonal. Given _t, which in our case is the value assumed by the variable _t at the t-th iteration of the algorithm, the MM method demands for the solution of the following surrogate minimization task
_t+1 = ∈ Toep, ∈ℍ^r× r arg min g(,|_t)
s.t. ([ ^H; ]) ≽,
which is a SDP problem. Unfortunately, the computational complexity necessary to handle SDP using interior point methods is 𝒪((r+m)^6.5) <cit.>. In order to alleviate the computational issue, two different approaches are pursued. The former directly handles Problem (<ref>) via the iterative ADMM algorithm. The latter, by means of a suitable manipulation of (<ref>), constructs a different s.f. for the objective function in Problem (<ref>). By doing so, as clearly explained in the following, a computationally efficient and flexible estimation procedure capable of including additional constraints can be developed. To this end, let us observe that, adding and subtracting γTr(^2), (<ref>) is equivalent to
(_t) + γTr(^2)-γTr(^2)
with γ > 0∈ℝ a parameter of the surrogate construction stage (for γ↓ 0, the function in (<ref>) reduces to (<ref>)).
Now, since -Tr(·^2) is a concave function of its matrix argument, invoking Lemma <ref> applied to the feasible solution _t=diag(_t;_t), with _t = ^H_t^-1 and _t provided by the t-th iteration step of the estimation process, it is possible to construct the following s.f. for (<ref>)
g̃(,|_t) = Tr(_t)+γTr(^2)-2γTr(_t)
- γTr(_t^2).
It is worth pointing out that g̃(,|_t) represents a surrogate of a surrogate function. Nonetheless, since g̃(,|_t) is a tight approximation of g(,|_t), it is straightforward to show that (<ref>) provides a direct surrogate for the objective function in Problem (<ref>). Hence, given _t and after some algebraic manipulations, the resulting surrogate minimization problem at the t-th iteration can be cast as
_t+1= ∈ Toep, arg min - _t_F^2
subject to +≽0,
where _t = _t - γ'_t, with γ' = 0.5/γ and =[,^H;,].
In the following subsections <ref> and <ref> two iterative methods, i.e., ATOM1 and ATOM2, are proposed to solve the surrogate minimization problems in (<ref>) and (<ref>), respectively.
§.§ ATOM1
The surrogate minimization problem in (<ref>) is solved using ADMM <cit.>. To this end, an auxiliary variable ∈ℍ^r+m × r+m is introduced in (<ref>) and the problem is framed in the equivalent form
min_∈ Toep,≽,∈ℍ^r× r () + ((_t)^-1)
s.t. ([ ^H; ])- =0.
The augmented Lagrangian associated with (<ref>) is
ℒ_ρ(,,,)=() + ((_t)^-1)
+ [^H(([ ^H; ])-)]
+ ρ/2‖([ ^H; ])-‖_F^2,
where ρ >0 is the penalty parameter and is the Lagrange multiplier of size (r+m)× (r+m). Problem (<ref>) can be further rewritten as
ℒ_ρ(, , ) = (_t ) + (^H ( + - ))
+ ρ/2‖ + - ‖_F^2.
The (inner) iterative steps of ADMM algorithm <cit.> are
_k+1^t = ≽min ((_k^t)^T (_k^t + - ))
+ ρ/2‖_k^t + - ‖_F^2
_k+1^t = ∈ Toep,min () + ((_k^t)^T ( + - _k+1^t))
+ ρ/2‖ + - _k+1^t‖_F^2
_k+1^t =_k^t + ρ(_k+1^t + - _k+1^t),
where (·)^t_k is used to denote the k-th inner-iteration of the ADMM algorithm in correspondence of the t-th MM outer-loop. Problems (<ref>) and (<ref>) have closed-form solutions which can be computed via the projection of appropriate matrices onto the respective feasible sets. Indeed, Problem (<ref>) can be equivalently cast as
[ ^t_k+1= ≽ 0 arg min - ^t_k_F^2; ]
where ^t_k = _k^t + + 1/ρ_k^t. Hence, solving (<ref>) is tantamount to performing the orthogonal projection of the matrix ^t_k onto the set of the PSD matrices which can be computed as ^t_k+1=^t_kmax(diag(^t_k),)^t H_k, where diag(^t_k) and ^t_k are the matrices containing the eigenvalues and the corresponding orthonormal eigenvectors of ^t_k, respectively. Similarly, the update step of in (<ref>) can be rewritten as
_k+1^t = ∈ Toep,min ‖ - _k^t‖_F^2,
where _k^t = 𝒫_D–Toep( _k+1^t- - 1/ρ (_k^t +_t)), with 𝒫_D–Toep() computed as follows: Partitioning the matrix as =([ _11 _12; ^H_12 _22 ]) with _12 of size r× m, the orthogonal projection of interest amounts to set the upper diagonal block to _11 whereas the second diagonal block is obtained by averaging the elements along each diagonal of _22 and constructing the corresponding Toeplitz matrix.
Now, partitioning _k^t as _k^t = ([ ^t_11,k ^t_12,k; ^tH_12,k ^t_22,k ])
with ^t_11,k and ^t_22,k being r × r and m × m matrices, respectively, it follows that _k+1^t=^t_11,k and _k+1^t = ^t_22,k.
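The two projections used in the ADMM updates admit simple closed forms; a possible NumPy sketch is reported below, where the function names, the complex dtype handling, and the averaging of conjugate diagonal pairs are illustrative implementation choices.

```python
import numpy as np

def proj_psd(B):
    """Orthogonal projection onto the cone of PSD (Hermitian) matrices."""
    B = 0.5 * (B + B.conj().T)                   # symmetrize against round-off
    w, U = np.linalg.eigh(B)
    return (U * np.maximum(w, 0.0)) @ U.conj().T

def proj_toeplitz(T):
    """Projection onto Hermitian Toeplitz matrices by diagonal averaging."""
    m = T.shape[0]
    P = np.zeros((m, m), dtype=complex)
    for d in range(m):
        v = 0.5 * (np.mean(np.diagonal(T, offset=d))
                   + np.conj(np.mean(np.diagonal(T, offset=-d))))
        if d == 0:
            P += v.real * np.eye(m)
        else:
            P += v * np.eye(m, k=d) + np.conj(v) * np.eye(m, k=-d)
    return P

def proj_diag_toeplitz(M, r):
    """Projection onto block-diagonal matrices: the upper r x r block is kept
    unconstrained, the lower block is forced to be Hermitian Toeplitz."""
    out = np.zeros_like(M, dtype=complex)
    out[:r, :r] = M[:r, :r]
    out[r:, r:] = proj_toeplitz(M[r:, r:])
    return out
```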
Before concluding, it is worth pointing out that, since the surrogate minimization problem in (<ref>) is convex and only an equality constraint is enforced, ADMM is guaranteed to converge to the (assumed existing[A sufficient condition for the existence of the optimal solution to Problem (<ref>) is provided by the solvability of (<ref>).]) unique optimal solution to (<ref>) (see Section 3.2 in <cit.>, <cit.>). The pseudocode of the proposed algorithm is shown in Algorithm 1.
From Algorithm 1 it can be seen that ATOM1 requires the initialization of the matrices _0, ^t_0 and ^t_0. _0 can be set using the initialization scheme discussed in <cit.> and, for t=0, ^t_0 can be set equal to ^H_0^-1 while ^t_0 can be constructed as ^t_0 =^H, where the elements of are drawn randomly from a uniform distribution over [0,1]. For t≥1, the matrices ^t_0 and ^t_0 can be initialized with the values attained at convergence of the ADMM inner-loop in the previous outer iteration, respectively. Another input parameter required by ATOM1 is the penalty weight ρ, introduced during the construction of the Augmented Lagrangian of the ADMM framework. It is shown in <cit.> that the ADMM algorithm converges for any value of ρ>0. However, the numerical stability and the convergence rate depend on the choice of ρ. Simulation results have highlighted that, for ρ = 1, the ADMM algorithm is stable for different values of n and m. Hence, unless otherwise stated, in all the numerical analyses ρ = 1 is used.
§.§.§ Computational complexity and discussion about ATOM1
ATOM1 is iterative in nature with two loops - the outer-loop updates the Toeplitz matrix _t while the inner-loop solves the surrogate minimization problem using ADMM. Note that in the inner-loop, it is required to construct the data-based matrix = ([ 0 ^H; 0 ]) - which is iteration independent and hence can be pre-computed and stored.
Let us now discuss the complexity related to the outer and inner-loops of ATOM1. The inner-loop of ATOM1 requires the computation of the matrix _t - which is outer-loop iteration dependent. Therefore, this matrix can be evaluated once in each outer-loop. Consequently, apart from the computations involved in the inner-loop, an outer-loop cycle just involves the evaluation of the matrix _t^-1. Since _t is Toeplitz, its inverse can be efficiently computed with a complexity 𝒪(m logm) <cit.>. The computational complexity of an inner-loop cycle is related to the projection of _k^t onto the set of PSD matrices and projection of ^t_k onto the set of block diagonal matrices where the upper part (of size r × r) is unconstrained, whereas the lower block (of size m × m) is Toeplitz structured.
The cost of this latter operation mainly involves the projection of ^t_22,k onto the set of Toeplitz matrices; thus, it is substantially dictated by the computation of average of the elements along the diagonals of ^t_22,k. Hence, the cost of the inner-step 4) is 𝒪(m^2). Next, the projection of onto the set of PSD matrices mainly involves the computation of the eigenvalues and eigenvectors of the matrix _k^t - whose corresponding complexity is 𝒪((r+m)^3) <cit.>. Therefore, the per-outer-iteration computational complexity of ATOM1 is 𝒪(η(r+m)^3) where η is the total number of inner-loop iterations required by the algorithm to converge.
A drawback of ATOM1 is the lack of a theoretical quality guarantee when it has to handle additional constraints on the covariance matrix. This is because ATOM1 implements ADMM algorithm at each inner-iteration which requires (to endow convergence guarantees to the process) the optimization problem to exhibit the standard form <cit.>
[ , minimize h_1(_1) + h_2(_2); subject to _1_1+_2_2 = ]
where h_1(_1), h_2(_2) are convex functions and _1, _2, are matrices of appropriate dimensions, respectively. Therefore, to incorporate additional inequality constraints (such as those resulting from an upper bound on the condition number of the matrix _1, a lower bound on the strength of the diagonal elements, or, more in general, an intersection of closed convex sets that can be described by additional auxiliary variables), one needs to replace each inequality constraint with an appropriate equality constraint. This can be done by introducing, in addition to the existing optimization variables _1 and _2, a slack variable for each inequality constraint. However, there is no convergence guarantee for ADMM when there are more than two optimization variables <cit.>. This issue can be addressed by the low-complexity algorithm, referred to as ATOM2, proposed to solve Problem (<ref>).
§.§ ATOM2
Problem (<ref>) is tantamount to seeking the block diagonal matrix belonging to the intersection of the two sets - the former defined by block diagonal matrices with the lower diagonal block of size m × m fulfilling a Toeplitz structure and the latter given by the Linear Matrix Inequality (LMI) <cit.> + ≽ 0 - with minimum distance from . Being the feasible set of (<ref>) characterized by the intersection of convex sets, a viable, even though heuristic, means to tackle Problem (<ref>) is provided by the alternating projection or Projection Onto the Convex Sets (POCS) technique <cit.>, which has already been successfully applied in the signal processing context, e.g., <cit.>.
Let us denote by 𝒫_LMI() the orthogonal projection of an arbitrary matrix onto the set defined by +≽0. Now, to proceed further and employ the POCS framework, 𝒫_D–Toep() and 𝒫_LMI() projections must be employed. Remarkably, both can be obtained in closed-form: the former is computed as described in subsection <ref>; as to the latter, the orthogonal projection onto the set defined by LMI +≽0 is computed by first evaluating the EigenValue Decomposition (EVD) of the matrix +, i.e., obtaining [, ] = eig( +), where and are matrices containing the eigenvalues and eigenvectors of the spectral decomposition, respectively. Then, the orthogonal projection 𝒫_LMI() is given by max(,)^H -.
According to the POCS method, given an initial value _0^t = _t, at the k-th inner-iteration one first computes ^t_k+1 =𝒫_D–Toep(^t_k) and then, using ^t_k+1, determines ^t_k+1=𝒫_LMI(^t_k+1), which represents the starting point ^t_k+1 of the next inner-iteration. Hence, the POCS-based solution approach finds a sequence of iterates {^t_k} by alternately projecting between the two convex sets. Nevertheless, as reported in <cit.>, POCS may suffer from slow convergence. Even more crucially, the convergence to the global optimal solution to (<ref>) is, in general, not ensured <cit.>. A possible solution to the aforementioned shortcoming is provided by Dykstra's projection <cit.>, which is a refinement of POCS capable of finding the point closest to _t by adding correction matrices _k and _k before each projection is performed, which in turn ensures convergence of the sequence {_k+1} to the optimal solution ^*=^* <cit.>. The pseudocode of Dykstra's algorithm is shown in Algorithm 2.
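A compact sketch of the LMI projection and of a generic Dykstra iteration between two projectors is reported below; the variable names are illustrative and the stopping rule (a fixed number of iterations) is a simplification of Algorithm 2.

```python
import numpy as np

def proj_lmi(M, E):
    """Projection onto { M : M + E >= 0 }: EVD of M + E, clip the negative
    eigenvalues to zero, then subtract E back (as described above)."""
    S = M + E
    S = 0.5 * (S + S.conj().T)
    w, U = np.linalg.eigh(S)
    return (U * np.maximum(w, 0.0)) @ U.conj().T - E

def dykstra(M0, proj_A, proj_B, n_iter=200):
    """Dykstra's projection onto the intersection of two convex sets,
    returning the point of the intersection closest to M0 (Frobenius norm)."""
    X = M0.copy()
    P = np.zeros_like(M0)   # correction term for the first set
    Q = np.zeros_like(M0)   # correction term for the second set
    for _ in range(n_iter):
        Y = proj_A(X + P)
        P = X + P - Y
        X = proj_B(Y + Q)
        Q = Y + Q - X
    return X
```

For ATOM2, proj_A would be the block-diagonal/Toeplitz projection sketched earlier and proj_B the LMI projection, applied to the matrix to be approximated at the t-th outer iteration.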
Once the optimal solution ^* is obtained via Dykstra's projection, the matrix _t+1 can be constructed from its lower diagonal block of size m × m. This process is repeated until the whole MM-procedure, i.e., including the outer-loop, converges.
The complete ATOM2 is summarized in Algorithm 3.
It requires the initialization of the matrix . In this respect, a similar scheme as in ATOM1 is followed, i.e., at each outer-iteration, the initial guess required to determine _t+1 in the inner-loop is obtained starting from _t.
§.§ Computational complexity of ATOM2
Like ATOM1, ATOM2 is an iterative algorithm with outer- and inner-loops. The outer-loop updates the Toeplitz matrix _t and the inner-loop implements Dykstra's algorithm - which requires the computation of the matrices and _t^-1. The former is an iteration-independent data matrix and therefore can be pre-constructed. The latter is outer-loop iteration dependent and therefore can be computed once in each outer-loop. Consequently, apart from the inner-loop computations, the outer-loop demands only the
computation of _t^-1 - which can be computed efficiently with complexity 𝒪(m logm). Meanwhile, the computational load of the inner-loop stems from the evaluation of the EVD of the matrix (_k +_k) plus a data matrix - which has a complexity of about 𝒪((r+m)^3).
In Table <ref>, the computational complexity of ATOM1 and ATOM2 is compared with that of the state-of-the-art iterative algorithms <cit.>. Unlike the proposed algorithms, the state-of-the art methods are single loop iteration algorithms. Therefore, in the case of <cit.> η is used to represent the number of iterations required by the algorithm to converge. Inspection of Table <ref> shows that ATOM1 and ATOM2 have the highest complexity when compared to MELT and EM. Nevertheless, it is worth anticipating that this complexity increase is complemented by a superior performance in terms of generality of the problem solved (ATOM1 and ATOM2 do not exploit the CE, ATOM2 permits to handle additional structural constraints with quality guarantee, as shown in subsection <ref>), covariance matrix MSE, and achieved SINR.
§.§ Proof of convergence
In this subsection, the proof of convergence of ATOM1 and ATOM2 is established. In this regard, it is worth pointing out that both the algorithms differ in the way they construct and optimize the s.f. for the Problem (<ref>). Nonetheless, since ATOM1 and ATOM2 are based on the MM framework, the proof of convergence based on the following Theorem will hold for both algorithms.
Before stating the Theorem, let us first introduce the first-order optimality condition for minimizing a function over a convex constraint set. A point X is a stationary point of f(·) if f'(X; D) ≥ 0 for all D such that X + D ∈𝒞, where 𝒞 is the convex constraint set and f'(X; D) is the directional derivative of f(·) at the point X in the direction D, defined as <cit.>
f'(X; D) = lim inf_λ↓ 0 (f(X+λ D) - f(X))/λ.
Based on the following Theorem, both ATOM1 and ATOM2 are guaranteed to converge to a stationary point of Problem (<ref>).
Denoting by {_t} the sequence of matrices generated by either ATOM1 or ATOM2, the objective function of Problem (<ref>) monotonically decreases along the iterations. Besides, any positive definite cluster point[Under the assumption n ≥ ⌈ m/2 ⌉, all the cluster points are guaranteed to be positive definite.] of {_t} is a stationary point of Problem (<ref>).
See Appendix B of the supplementary material for details.
§.§ Extensions of ATOM2
The augmentation of ATOM2 to handle additional constraints other than the Toeplitz structure in the covariance estimation process is now addressed. In particular, it is shown that ATOM2 can be generalized to account for the following scenarios: banded Toeplitz, block-Toeplitz, and Toeplitz-block-Toeplitz matrices. On the other hand, as already mentioned in subsection <ref>, ATOM1 cannot be directly extended to tackle more general constraints, such as, for instance, an upper bound requirement on the condition number.
§.§.§ MLE of banded Toeplitz covariance matrix
The covariance matrix is constrained to exhibit a banded Toeplitz structure of bandwidth b (see <cit.> for relevant applications). For instance, assuming a bandwidth b=2 and dimension m=5 the covariance matrix enjoys the following structure
=
[ r_1 r_2 r_3 0 0; r^*_2 r_1 r_2 r_3 0; r^*_3 r^*_2 r_1 r_2 r_3; 0 r^*_3 r^*_2 r_1 r_2; 0 0 r^*_3 r^*_2 r_1 ].
Then, the MLE problem for banded Toeplitz covariance matrix can be formulated as
minimize_R ∈ Band-Toep, R ≻ 0 1/n∑_i=1^n x_i^H R^-1 x_i + log|R|,
where Band-Toep is used to denote the set of banded Toeplitz matrices. Like in (<ref>), the above problem can be cast in the following equivalent form
[ ∈ Band-Toep, minimize () + log||; subject to ([ ^H; ]) ≽0 ].
Hence, (<ref>) is handled via MM framework solving the following surrogate minimization problem
[ minimize - _F^2; subject to + ≽0; = diag(,) with being a; banded Toeplitz matrix ]
The above problem involves two convex sets: the set defined by the LMI +≽0 and the set of block diagonal matrices where the second block has a banded Toeplitz structure with bandwidth b. Consequently, Dykstra's projection algorithm or POCS can be used to solve Problem (<ref>). The projection of a matrix onto the LMI set can be calculated as discussed earlier in Subsection <ref>. The projection of a matrix = ([ _11 _12; ^H_12 _22 ]) onto the set of block diagonal matrices with the second banded Toeplitz block can be obtained as follows. The first diagonal block is the same as _11 and the second diagonal block is constructed by averaging the entries of the main and the first b upper-diagonals of the matrix _22 and computing the corresponding Toeplitz matrix <cit.>.
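As an illustration, the projection onto the set of Hermitian banded Toeplitz matrices with bandwidth b can be sketched as follows; the function name is illustrative, and averaging each upper diagonal together with the conjugate of the corresponding lower diagonal matches the description above when the input is (numerically) Hermitian.

```python
import numpy as np

def proj_banded_toeplitz(T, b):
    """Projection onto Hermitian banded Toeplitz matrices of bandwidth b:
    diagonals 0..b are obtained by averaging, the remaining ones are zeroed."""
    m = T.shape[0]
    P = np.zeros((m, m), dtype=complex)
    for d in range(b + 1):
        v = 0.5 * (np.mean(np.diagonal(T, offset=d))
                   + np.conj(np.mean(np.diagonal(T, offset=-d))))
        if d == 0:
            P += v.real * np.eye(m)
        else:
            P += v * np.eye(m, k=d) + np.conj(v) * np.eye(m, k=-d)
    return P
```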
§.§.§ MLE of block-Toeplitz or Toeplitz-block-Toeplitz covariance matrix
In space-time adaptive processing radar applications, the covariance matrix exhibits a block-Toeplitz (BT) or a Toeplitz-block-Toeplitz (TBT) structure. An example of a BT-structured covariance matrix with p blocks is shown below
=
[ _0 _1 … _p-1; ^H_1 _0 … _p-2; ⋮ ⋱ ⋱ ⋮; ^H_p-1 … ^H_1 _0 ].
When each block exhibits a Toeplitz structure, the matrix is TBT <cit.>.
The MLE problem of a BT or a TBT covariance matrix is formulated as
minimize_R ∈ BT (TBT), R ≻ 0 1/n∑_i=1^n x_i^H R^-1 x_i + log|R|,
where the notation BT (TBT) is used to indicate the set of BT (TBT) matrices. A feasible solution to Problem (<ref>) can be obtained by solving at any given step the following surrogate optimization problem
[ minimize - _F^2; subject to +≽0; is a block diagonal matrix with; the second diagonal BT (TBT) block ].
Problem (<ref>) exhibits two constraints - 1) a LMI constraint and 2) a structural constraint - where the optimization variable is confined to be a block diagonal matrix with the second block having a BT (TBT) structure. Since both the constraints are convex, Dykstra's projection or POCS can be applied to solve Problem (<ref>). The projection of a matrix onto the LMI set can be calculated as discussed earlier in Section <ref> B. The projection of a given matrix onto the set of matrices whose second diagonal block has the BT (TBT) constraint can be obtained as follows. For the first diagonal block, the submatrix _11 is directly used. Then, the second diagonal block is obtained following two (three) steps. First, p matrices are obtained by averaging the (upper-right) diagonal blocks of the matrix _22. Then, only for TBT, each of the p matrices are projected onto the Toeplitz set as described in subsection <ref>. Finally, the resulting matrix is constructed according to (<ref>).
§ CRB CALCULATION
In this section, the CRB is derived for the estimation of the Toeplitz structured covariance matrix (the interested reader may refer to Appendix C of the supplementary material for the CRBs of the banded Toeplitz, BT, and TBT covariance models). The CRB provides a lower bound on the variance of any unbiased estimator <cit.>. To proceed further, let represent the real value vector parametrizing a given covariance matrix structure of interest.
Then, the CRB is the inverse of the Fisher Information matrix (FIM) whose (i,k)^th element is
[ []_i,k = E[∂^2logf̅()/∂θ_i∂θ_k] ],
where ∂logf̅()/∂θ_i denotes the partial derivative of logf̅() w.r.t. θ_i, with θ_i the i-th element of .
Due to the Gaussian assumption, the (i,k)^th element of the FIM can be computed using the Slepian–Bangs
formula <cit.>
[]_i,k = nTr(^-1∂/∂θ_i^-1∂/∂θ_k).
In the following subsection, the FIM is derived for the Toeplitz covariance structure.
§.§ Toeplitz matrix
As the entries of the TSC matrix are completely characterized by its first row, i.e., [r_1, r_2,⋯ r_m]^T, the covariance matrix R ∈ℍ^m × m can be parameterized by θ = [r_1, Re(r_2),⋯, Re(r_m), Im(r_2),..., Im(r_m) ]^T∈ℝ^ 2m-1, where Re(r_i) and Im(r_i) denote the real and imaginary parts of r_i, respectively. Then, the covariance matrix can be expressed in terms of θ and the basis matrices ^Toep_g (defined as in (<ref>)), g=1,2,⋯,m <cit.>
R = ∑_g=1^mθ_g Re(^Toep_g) + j ∑_g=m+1^2m-1θ_g Im(^Toep_g-m+1).
The (i,k)^th element of the matrix ^Toep_g is given as
[^Toep_g]_i,k =
1+j, if i-k = g-1 = 0
1+j, if k-i = g-1 ≠ 0
1-j, if i-k = g-1 ≠ 0
0, otherwise.
Using (<ref>), ∂R/∂θ_i can be obtained as
∂R/∂θ_i =
Re(^Toep_i), if 1≤ i ≤ m
j Im(^Toep_i-m+1), if m+1 ≤ i ≤ 2m-1
Substituting ∂R/∂θ_i in (<ref>) yields the FIM for the Toeplitz covariance matrix.
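A possible numerical sketch of the Slepian–Bangs computation for the Toeplitz parameterization is given below; the basis-matrix construction follows the definition of ^Toep_g and of ∂R/∂θ_i above, while the 0-based indexing and the function names are implementation choices.

```python
import numpy as np

def toeplitz_basis(m, g):
    """(i,k)-th entry: 1+j on the (g-1)-th upper diagonal (main diagonal for g=1),
    1-j on the (g-1)-th lower diagonal, zero elsewhere (g = 1, ..., m)."""
    A = (1 + 1j) * np.eye(m, k=g - 1)
    if g > 1:
        A = A + (1 - 1j) * np.eye(m, k=-(g - 1))
    return A

def dR_dtheta(m, i):
    """Derivative of R w.r.t. the i-th real parameter (0-based, 0 <= i <= 2m-2)."""
    if i < m:                                   # r_1, Re(r_2), ..., Re(r_m)
        return np.real(toeplitz_basis(m, i + 1)).astype(complex)
    return 1j * np.imag(toeplitz_basis(m, i - m + 2))   # Im(r_2), ..., Im(r_m)

def fisher_information(R, n):
    """Slepian-Bangs formula: [F]_{i,k} = n Tr(R^-1 dR/dtheta_i R^-1 dR/dtheta_k)."""
    m = R.shape[0]
    Rinv = np.linalg.inv(R)
    D = [dR_dtheta(m, i) for i in range(2 * m - 1)]
    F = np.empty((2 * m - 1, 2 * m - 1))
    for i in range(2 * m - 1):
        for k in range(2 * m - 1):
            F[i, k] = n * np.real(np.trace(Rinv @ D[i] @ Rinv @ D[k]))
    return F

# the CRB-based benchmark is then obtained as np.linalg.inv(fisher_information(R_true, n))
```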
§ NUMERICAL SIMULATIONS
In this section, the performance of the proposed covariance matrix estimators ATOM1 and ATOM2 is numerically analyzed in comparison with the following state-of-the-art algorithms: EM-based <cit.>, MELT <cit.>, the SCM, and the FB estimators <cit.>. First, a convergence analysis of the derived methods is provided, also in comparison with the aforementioned counterparts. Then, the estimation capabilities are analyzed in three different scenarios, using the MSE as performance metric, defined as[In the following, (<ref>) is computed via Monte Carlo techniques.]
MSE = E[ - ^2] ,
where indicates the estimate of the unknown , obtained according to one of the aforementioned strategies.
First of all, the covariance matrix is assumed to share the Toeplitz structure. Then, the banded Toeplitz, the BT, and the TBT constraints are considered. The CRB-based benchmark, computed as CRB = (^-1), is reported too, whereby, for each case study, the FIM is appropriately derived, see Section <ref>.
Furthermore, assuming a typical radar signal processing scenario, the performance is also evaluated in terms of average achievable SINR by an adaptive spatial filter.
It is also worth reporting that, in the aforementioned scenarios, ATOM1 and ATOM2 procedures are initialized using the FB estimate _FB, projected onto the set of Toeplitz matrices. Moreover, for the execution of ATOM2, the parameter γ is updated adaptively in each outer-loop iteration according to the following law[As to the adaptive ATOM2 surrogate construction stage, it has been empirically shown that the updating rule (<ref>), with γ_0= 10^-4 and k_1 = 5, provides satisfactory performance in all the scenarios; therefore, unless otherwise stated, ATOM2 s.f. (and the subsequent processing) is constructed using (<ref>) with the aforementioned values.]
γ = γ_0 (t logt+k_1)^2.
To illustrate the role of γ in the optimization process performed by ATOM2, a notional representation of the objective function (conceptually depicted as a one-dimensional curve and corresponding to a specific portion of a restriction of the multivariate objective) and the s.f. of ATOM1 and ATOM2, is reported in Fig. <ref>.
Remarkably, the value of γ affects the trade-off between performance and convergence speed of ATOM2. Indeed, while a smaller γ leads to a better performance (ATOM2 s.f. approaches the ATOM1 one as γ→ 0), it demands more inner-loop iterations to achieve convergence, due to the almost singular resulting metric. On the other hand, a larger γ reduces the overall computational cost, but introduces a growth in the approximation error. However, as the outer-loop iterations increase, the approximation error of the ATOM2 s.f. w.r.t. the objective function decreases as the updated point becomes closer and closer to a local minimum at which the sequence is “converging”. That said, slowly increasing γ with the number of iterations allows to speed-up its computational burden without decreasing its performance.
§.§ Assessment of iterative algorithms convergence for on-grid and off-grid frequencies
In this simulation, the convergence of ATOM1 and ATOM2 (whose inner-loop was implemented via Dykstra's algorithm) is assessed in comparison with the MELT and EM algorithms. To this end, each data snapshot x_k∈ℂ^m is modeled as
x_k = R^1/2 w_k, k=1,2, ⋯, n
where w_k∈ℂ^m, k=1,…, n, are independent and identically distributed zero-mean circularly symmetric Gaussian random vectors with unit mean square value.
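A possible way of generating such snapshots (and a Toeplitz covariance matrix from a set of source frequencies and powers) is sketched below; the symbols x_k and w_k follow the model above, while the function names and the eigendecomposition-based matrix square root are implementation choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def toeplitz_from_sources(m, freqs, powers):
    """Hermitian Toeplitz covariance of uncorrelated complex sinusoids
    (a sketch; some of the experiments below additionally add noise, e.g. R = T + I)."""
    k = np.arange(m)
    R = np.zeros((m, m), dtype=complex)
    for f, p in zip(freqs, powers):
        a = np.exp(1j * f * k)[:, None]        # Fourier vector at frequency f (rad)
        R += p * (a @ a.conj().T)
    return R

def generate_snapshots(R, n):
    """Draw n snapshots x_k = R^{1/2} w_k with w_k circularly symmetric Gaussian."""
    m = R.shape[0]
    W = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    w, U = np.linalg.eigh(R)
    R_half = (U * np.sqrt(np.maximum(w, 0.0))) @ U.conj().T
    return R_half @ W                          # one snapshot per column

# e.g. six sources with frequencies (rad) and powers as in the first setup below
R_true = toeplitz_from_sources(6, [0.5712, 1.1424, 2.2848, 3.4272, 3.9984, 5.7120],
                               [3, 6, 4, 1, 7, 5])
X = generate_snapshots(R_true, n=20)
```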
Two different experimental setups are considered, assuming m=6 and n=20. In the former, the true underlying Toeplitz covariance matrix is constructed by choosing the 2-nd, 3-rd, 5-th, 7-th, 8-th and the 11-th column of the DFT matrix with L=2m-1 in (<ref>), corresponding to the frequencies [0.5712, 1.1424, 2.2848, 3.4272, 3.9984, 5.7120] rad, and as powers [p_1, …, p_6]^T = [3, 6, 4, 1, 7, 5]^T, respectively. Figs. *fig:negLL_obj_ON_GRID_a and *fig:negLL_obj_ON_GRID_b show the negative log likelihood (<ref>) and the objective function of problem (<ref>) versus the number of iterations, respectively. It can be seen that all the algorithms numerically improve the negative log-likelihood as the number of iterations increases and almost converge to the same value, with negligible differences. Moreover, Fig. *fig:negLL_obj_ON_GRID_b indicates that the proposed algorithms monotonically decrease the problem objective function, which is expected since they optimize (<ref>) using the MM framework.
In the other experimental setup, the true underlying Toeplitz covariance matrix is constructed such that two of the frequencies are not on the Fourier grid. Therefore, the same parameters used in case study 1 are considered, with the exception that the Fourier frequencies 0.5712 rad and 3.9984 rad are replaced with 0.5 rad and 5.3 rad, respectively. For the case study at hand, the negative log-likelihood (<ref>) and the objective function of (<ref>) are reported in Figs. *fig:negLL_obj_OFF_GRID_a and *fig:negLL_obj_OFF_GRID_b versus the number of iterations, respectively. Inspection of Fig. *fig:negLL_obj_OFF_GRID_a reveals that while MELT and EM converge to a value of ≈ 22.4, ATOM1 and ATOM2 converge to 22. Therefore, when two of the frequencies do not lie on the Fourier grid, the state-of-the-art iterative algorithms converge to a larger value of the negative log-likelihood than the proposed methods. This is due to the fact that unlike the counterparts, the proposed algorithms estimate the Toeplitz covariance matrix without reparametrizing it via the CE technique and thus they are able to cover the whole set of Toeplitz covariance matrices. Furthermore, remarks similar to those made for the on-grid case hold true with reference to the results depicted in Fig. *fig:negLL_obj_OFF_GRID_b.
In the following, the mean computational time[The simulation has been executed using MATLAB R2020b on a desktop computer equipped with an Intel i5 processor and 16 GB of RAM.] (averaged over 1000 Monte Carlo trials) of the proposed techniques and the counterparts is examined. As case studies, four different values of m are considered, i.e., m ∈{4, 8, 16, 32}. Moreover, the data samples _k are generated as (<ref>) using n=4m samples, with R = T + I. The Toeplitz covariance matrix is generated assuming 3 equal power sources, i.e., with p = [5, 5, 5], whose frequencies are randomly selected (at each trial) such that two of them lie on the Fourier grid of the DFT matrix, with L=2m-1, whereas the third one is drawn from a uniform distribution over [0, 2π]. The iterative algorithms have been run until the following condition is met[For the execution of EM and MELT procedures, the exit condition is set as f(_t-1)-f(_t) ≤ 10^-4.]
p(_t-1, _t-1)-p(_t, _t) ≤ 10^-4
with p(, ) = () + log|| the objective function of problem (<ref>),
or until the maximum number of iterations (set equal to 1000) is reached.
The average computational time of the different algorithms (possibly with different values of the hyperparameters) are reported in Table <ref>.
The results show that ATOM2 has, in general, a longer execution time than ATOM1. This is because the inner-loop of ATOM2 (based on Dykstra's algorithm) requires a higher number of iterations, and hence a longer run time to converge, than the ATOM1 inner-loop (implemented via ADMM) and than those of EM/MELT when γ_0 is small, since the metric in which the distance is minimized becomes more and more ill-defined. However, when γ_0 = 10^-1, the run times of ATOM1 and ATOM2 are comparable and similar to those of MELT and EM. Interestingly, Table <ref> pinpoints that, for γ_0 sufficiently small, i.e., 10^-4, ATOM2 is generally able to reach MSE values smaller than ATOM1, reasonably due to its adaptive step-size strategy (<ref>), which allows it to provide better quality estimates than ATOM1 as the outer-loop iterations increase. It can also be seen that EM has the least computational time (at large values of m). Nevertheless, as shown in Table <ref>, although the proposed algorithms have a slightly longer computational time, the obtained estimates are superior, in terms of MSE, to those provided by MELT and EM.
Interestingly, as the data dimension increases, the resulting average MSE values reached by the ATOM2 using different γ_0 parameters becomes closer and closer. Therefore, for a sufficient larger data size, i.e., m≥32, γ_0 = 10^-1 represents an appropriate choice for ATOM2 implementation, as it offers a good performance with a reduced computational burden.
§.§ MSE vs n for Toeplitz covariance matrix
For this case studies, it is assumed m= 15 and the number of samples n ranging between 50 and 500 in steps of 50. The data _k∈ℂ^15 are again simulated according to (<ref>).
Precisely, two different experiments are considered whereby the true Toeplitz covariance matrix is generated using on-grid[The frequencies used in the first experiment are: [0.2167, 0.6500, 1.0833, 1.3, 1.5166, 1.9500, 2.3833, 2.8166, 3.2499, 3.6832 4.1166, 4.5499, 4.9832, 5.4165, 5.8499] rad. Their corresponding powers increase linearly from 1 to 15 with a unit step.] and off-grid frequencies[For the off-grid simulation, the frequencies [1.3, 2.8166, 4.9832,5.8499] rad are replaced with [1.25, 3.01, 5.20, 5.8] rad, respectively.], respectively.
The resulting MSE, computed over 1000 Monte Carlo trials, are illustrated in Fig. <ref>.
Inspection of the curves depicted in Fig. *fig:MSE_a shows that, regardless of the number of samples n, in the first experiment ATOM1 and ATOM2 almost reach the CRB, whereas EM and MELT yield a slightly better performance, resulting in a deviation from the CRB. This can be explained by observing that the derived CRB does not exploit the information that the frequencies lie on-grid. Fig. *fig:MSE_b highlights that in the second experiment ATOM1 attains the best performance, with results quite close to the CRB and slightly better than ATOM2, with a limited gap between the corresponding curves. Furthermore, MELT and EM exhibit similar MSE values which seem to saturate as n increases. The performance behavior of Fig. *fig:MSE_b stems from the observation that, unlike MELT and EM, ATOM1 and ATOM2 are gridless methods, delivering the same performance regardless of the sources' frequencies.
§.§ MSE vs n for banded Toeplitz covariance matrix
This subsection analyzes the performance in the case of covariance matrix belonging to the set of banded Toeplitz matrices. In particular, the same simulation setup as in Section <ref> is considered, but enforcing the underlying covariance matrix to have a bandwidth b=6. To this end, is constructed by alternately projecting a random Hermitian matrix onto the set of banded Toeplitz matrices and the set of PSD matrices.
Moreover, for this study case, ATOM2 is implemented according to the procedure described in Section <ref>, namely explicitly including the banded Toeplitz structure in the constraint set.
Fig. <ref> highlights that the bespoke implementation of ATOM2 delivers the best performance, with MSE values really close to the CRB. Furthermore, MELT and EM share the same performance with a noticeable gap w.r.t. ATOM2, which is expected since the aforementioned algorithms do not leverage the banded structure of the covariance matrix.
§.§ MSE vs n for BT (TBT) covariance matrix
Here, the capabilities of ATOM2 are analyzed in the context of covariance matrix with TBT structure. To this end, assuming m=16 and p=4 blocks (each having block-size l=4), the covariance matrix is modeled as = _1 ⊗_1, where _1 ∈ℂ^l × l is a Toeplitz matrix constructed as in subsection <ref>, with frequencies [0.6, 1.4, 3.2, 5.1] rad and powers [3,6,4,1]. Thus, each data snapshot _k is drawn according to (<ref>).
The resulting MSE values (averaged over 1000 Monte Carlo trials) are displayed in Figure <ref> versus the number of snapshots. Specifically, the performance of both the BT and the TBT extension of ATOM2 (described in Section <ref>) are reported and compared with the CRB (see Appendix C reported in the supplementary material to this paper) as well as with two EM-based estimators, tailored respectively for BT/TBT covariance matrix <cit.>.
Inspection of the results reveals that ATOM2 TBT uniformly achieves the least MSE, with ATOM2 BT ranking second. As previously highlighted, the superior performance of the proposed method stems from the design criterion which does not require reparametrizing the covariance matrix using the CE.
§.§ Radar Application
In this subsection, the performance of the covariance estimation algorithms is evaluated with reference to the average achievable SINR in adaptive radar spatial processing context. To this end, let us consider a radar system equipped with a uniform linear array with m=6 sensors, pointing toward the boresight direction. The inter-element distance between each sensor is set equal to d=λ/2, where λ is the radar operating wavelength.
For this simulation scenario, the interference covariance matrix is modeled as R = R_s + σ_a^2 I, where σ_a^2 is the power level of the white disturbance noise (assumed without loss of generality equal to 0 dB) and R_s is given by R_s = ∑_l=1^Jσ_l^2 a(ϕ_l) a(ϕ_l)^H, where J is the number of uncorrelated narrow-band jammers and, for the l-th jammer,
a(ϕ_l) = 1/√(m)[1, e^j 2π/λ d sin(ϕ_l), …, e^j (m-1) 2π/λ d sin(ϕ_l)]^T
is the steering vector in its direction-of-arrival ϕ_l, and σ^2_l the corresponding interferer power.
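The interference scenario above can be reproduced, for instance, with the following sketch, where steering_vector and interference_covariance are illustrative helper names and the optimum SINR is evaluated for the boresight look direction.

```python
import numpy as np

def steering_vector(m, phi, d_over_lambda=0.5):
    """ULA steering vector a(phi) for direction-of-arrival phi (radians)."""
    k = np.arange(m)
    return np.exp(1j * 2 * np.pi * d_over_lambda * k * np.sin(phi)) / np.sqrt(m)

def interference_covariance(m, doas, powers_db, noise_db=0.0):
    """R = sum_l sigma_l^2 a(phi_l) a(phi_l)^H + sigma_a^2 I."""
    R = 10.0 ** (noise_db / 10.0) * np.eye(m, dtype=complex)
    for phi, p_db in zip(doas, powers_db):
        a = steering_vector(m, phi)[:, None]
        R += 10.0 ** (p_db / 10.0) * (a @ a.conj().T)
    return R

# the scenario considered below: m = 6, two jammers at 9.8 and -8.8 degrees
m = 6
R = interference_covariance(m, np.deg2rad([9.8, -8.8]), [30.0, 20.0])
a0 = steering_vector(m, 0.0)                              # boresight look direction
sinr_opt = np.real(a0.conj() @ np.linalg.solve(R, a0))    # a(theta)^H R^-1 a(theta)
```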
The capabilities of the estimation methods are analyzed by means of the average SINR, computed as
SINR_avg = 1/K ∑_i=1^K |ŵ_i^H a(θ)|^2 / (ŵ_i^H R ŵ_i),
where K=500 is the number of Monte-Carlo trials, R is the true interference-plus-noise covariance matrix, and ŵ_i = R̂_i^-1 a(θ) is the estimate of the optimal weight vector for adaptive spatial processing, with R̂_i the estimate of the interference-plus-noise covariance matrix for the i-th trial, computed either via the sample covariance matrix or by enforcing the Toeplitz structure in the covariance matrix and employing the estimators ATOM1, ATOM2, EM, and MELT.
More precisely, J=2 jammers, with powers σ_1^2= 30 dB and σ_2^2= 20 dB, respectively, impinging on the array from θ_1=9.8^∘ and θ_2=-8.8^∘, are considered. As comparison terms, the optimum SINR, i.e., SINR_OPT = a(θ)^H R^-1 a(θ), and the performance of the Sample Matrix Inversion (SMI) beamformer are included too.
The average SINR versus θ∈𝒯, with 𝒯 = [-π/2, π/2] discretized with 500 equally-spaced points, is shown in Fig. <ref>, for n∈{m, 2m, 3m}. Inspection of the plots highlights that as the number of samples n increases, the results achieved by ATOM1 and ATOM2 gets closer and closer to the optimum, yielding superior performance w.r.t. the counterparts.
§ CONCLUSION
In this paper, the MLE problem for TSC matrices has been addressed. Precisely, by reformulating appropriately the MLE optimization problem and leveraging the MM framework, two iterative algorithms ATOM1 and ATOM2 have been developed. Both inherit the key properties of MM i.e., they monotonically decrease the underlying cost function with guaranteed convergence to a stationary point of the equivalent MLE problem. Subsequently, ATOM2 has been extended to handle covariance matrix MLE forcing other Toeplitz-related structures, such as banded Toeplitz, BT, and TBT. Simulation results have indicated that the proposed algorithms can perform better than some state-of-the-art techniques in terms of MSE and the SINR metrics.
Some of the possible future research directions are now outlined. In particular, ATOM2 could be further extended to include the cases of low rank TSC, with the rank assumed either known or unknown at the design stage, as well as covariance matrix with an upper bound to the condition number.
Another possible extension of the proposed technique could be the MLE of a Toeplitz covariance matrix assuming a compound Gaussian distribution for the underlying data, which has a significant application in low-grazing-angle target detection <cit.>. Moreover, acceleration methods inspired, for instance, by the SQUAREd iterative Methods (SQUAREM) <cit.> could be investigated. Finally, the design of sub-optimal optimization strategies (e.g., based on the gradient projection method) with a reduced computational burden (a valuable feature for real-time applications) is definitely worth pursuing.
§ APPENDIX A
§ PROOF OF EQUIVALENCE BETWEEN (8) AND (10)
Let ^⋆ be an optimal solution to (8), then (^⋆, ^⋆), with ^⋆= ^H ^⋆-1, is feasible for (10) and the two problems have the same objective values. This means that
v(8) ≥ v(10),
where v(·) indicates the optimal value of the corresponding optimization problem.
Moreover, for any fixed _1 ≻ 0, concentrating the objective function of (10) with respect to (which is tantamount to placing = ^H _1^-1), it follows that the concentrated optimization problem is
_1 ≽ 0 minimize (_FB_1^-1) + log|_1|,
due to Schur complement Theorem and the monotonicity of the trace operator with respect to generalized matrix inequality “≽”.
Finally, being by assumption (8) solvable, any minimizer of (<ref>) satisfies _1^⋆≻ 0 with a corresponding optimal solution to (10) given by (_1^⋆, ^H _1^⋆-1). This implies that
v(8)≤ v(10).
Capitalizing on (<ref>) and (<ref>), as well as on the above considerations, it follows that v(8)=v(10); moreover, given an optimal solution (_1^⋆,_1^⋆) to (10), _1^⋆ is also optimal to (8) and, vice versa, given an optimal solution ^⋆ to (8), (^⋆, ^⋆) is an optimal point to (10).
§ APPENDIX B
§ PROOF OF THEOREM 3.2
To begin with, let us denote by h(|_t) either the objective function involved in the surrogate optimization problem of ATOM1 (12) or ATOM2 (15), where = diag(, ). This function, regardless of the method, satisfies the following two inequalities
h(_t|_t) = l(_t)
h(_t+1|_t) ≥l(_t+1)
where l()= Tr() + log||. Leveraging the above inequalities, it follows that
l(_t+1) (a)≤h(_t+1|_t) (b)≤h(_t|_t) (c)= l(_t)
In (<ref>), the inequality (a) and the equality (c) stem from (<ref>) and (<ref>), respectively; besides, the inequality (b) is obtained by exploiting the fact that ATOM1 and ATOM2 globally solve the corresponding convex surrogate optimization problem. Therefore, (<ref>) implies that the sequence of objective values of Problem (16) generated by the proposed algorithms is monotonically decreasing, i.e.,
l(_0) ≥l(_1) ≥l(_2) ≥⋯
Next, let us denote by a cluster point to {_t} and let {_r_t} be a subsequence of {_t} converging to . Then, from (<ref>), (<ref>), and (<ref>)
[ h(_r_t+1|_r_t+1)= l(_t_j+1) ≤l(_r_t+1); ≤h(_r_t+1|_r_t)≤h(|_r_t), ∀ . ]
Thus, letting t →∞
h(|) ≤h(|),
which implies that h'(|;) ≥ 0 where h'(·|;) is the directional derivative of the surrogate function at point in a feasible direction . Finally, by Proposition 1 in <cit.>, the surrogate function h(|) and the objective function l(·) have the same first order behavior at . Therefore, h'(|;) ≥ 0 implies that l'(; ) ≥ 0. Hence, is a stationary
point of the objective function l().
§ APPENDIX C
§ CRB OF BANDED TOEPLITZ, BT, AND TBT COVARIANCE MODEL
Herein, the CRB of Banded Toeplitz, BT, and TBT covariance model are provided.
§.§ Banded Toeplitz matrix
In the case of a banded Toeplitz matrix with bandwidth b, the first row of the covariance matrix R ∈ℍ^m × m has only b+1 non-zero terms. Therefore, R can be parameterized via θ = [r_1, Re(r_2),⋯, Re(r_b+1), Im(r_2),..., Im(r_b+1) ]^T∈ℝ^ 2b+1. Besides, R can be expressed in terms of the basis matrices ^Toep_g and real coefficients
R = ∑_g=1^b+1θ_g Re(^Toep_g) + j ∑_g=b+2^2b+1θ_g Im(^Toep_g-b)
and consequently
∂R/∂θ_i =
Re(^Toep_i), if 1≤ i ≤ b+1
j Im(^Toep_i-b), if b+2≤ i ≤ 2b+1.
Substituting ∂R/∂θ_i in (34) yields the FIM for the banded Toeplitz covariance matrix.
§.§ Toeplitz-block-Toeplitz matrix
Before proceeding further, it is worth noting that a TBT matrix composed of p blocks of size l can be parameterized by the vector = [_0^T, _1^T, …, _P-1^T]^T∈ℝ^2 l -1 + (p-1)(4l-2) whereby _0 = [r_0,1, (r_0,2), …, (r_0,l), (r_0,2), …, (r_0,l)]^T∈ℝ^2l-1 and _p = [(r_p,1), …, (r_p,l), (r_p,1), …, (r_p,l),
(c_p,2), …, (r_p,l), (r_p,2), …, (r_p,l)]^T∈ℝ^4l-2, p=1,…, P-1, with r_p,n and c_p,n the n-th row and n-th column of _p, respectively.
Indeed, the TBT covariance matrix can be expressed as
^TBT = _0⊗_0 +∑_w=1^p-1((_w⊗^H_w) +(_w^T⊗_w)),
where
_0 =
∑_g=1^lθ_0,g(^Toep_g) + j ∑_g=l+1^2l-1θ_0,g(^Toep_g-l+1)
and, for w=1,…, p-1,
_w = ∑_g=1^l[θ_w,g + jθ_w,g+l ](_g)
+ ∑_g=2l+1^3l-1[θ_w,g + jθ_w,g+l-1 ](_g-2l+1)
with θ_w,g the g-th element of _w, _g = ^Toep_g as long as g = 1 and 1/2 ((^Toep_g)^T + j (^Toep_g)^T) elsewhere, whereas the (i,k)^th element of the matrix _w∈ℝ^l× l is given by
[_w]_i,k =
1, if i-k = w
0, otherwise.
That said,
∂^TBT/∂θ_w,g is given by
∂R^TBT/∂θ_w,g =
_0 ⊗ (^Toep_g), if 1≤ g≤ l, w=0
_0 ⊗ j(^Toep_g-l+1), if l+1≤ g ≤ 2l-1, w=0
_w ⊗ (_g)^T + _w^T ⊗ (_g), if 1≤ g ≤ l, w > 0
_w ⊗ (-j)(_g-l)^T + _w^T ⊗ j(_g-l), if l+1≤ g ≤ 2l, w > 0
_w ⊗ (_g-2l+1)^T + _w^T ⊗ (_g-2l+1), if 2l+1≤ g ≤ 3l-1, w > 0
_w ⊗ (-j)(_g-3l+2)^T + _w^T ⊗ j(_g-3l+2), if 3l≤ g ≤ 4l-2, w > 0
which, employed in (34), yields the FIM for TBT covariance matrix.
IEEEtran
|
http://arxiv.org/abs/2307.05126v1 | 20230711090149 | Enhancing Continuous Time Series Modelling with a Latent ODE-LSTM Approach | [
"C. Coelho",
"M. Fernanda P. Costa",
"L. L. Ferrás"
] | cs.LG | [
"cs.LG",
"math.OC",
"I.5.1; G.1.7"
] |
C. Coelho (corresponding author) - [email protected] - Centre of Mathematics (CMAT), University of Minho, Braga, 4710-057, Portugal
M. Fernanda P. Costa - [email protected] - Centre of Mathematics (CMAT), University of Minho, Braga, 4710-057, Portugal
L.L. Ferrás - [email protected] - Centre of Mathematics (CMAT), University of Minho, Braga, 4710-057, Portugal; Department of Mechanical Engineering - Section of Mathematics, University of Porto, Porto, 4200-465, Portugal
Due to their dynamic properties such as irregular sampling rate and high-frequency sampling, Continuous Time Series (CTS) are found in many applications.
Since CTS with irregular sampling rate are difficult to model with standard Recurrent Neural Networks (RNNs), RNNs have been generalised to have continuous-time hidden dynamics defined by a Neural Ordinary Differential Equation (Neural ODE), leading to the ODE-RNN model.
Another approach that provides better modelling is that of the Latent ODE model, which constructs a continuous-time model where a latent state is defined at all times. The Latent ODE model uses a standard RNN as the encoder and a Neural ODE as the decoder. However, since the RNN encoder leads to difficulties with missing data and ill-defined latent variables, a Latent ODE-RNN model has recently been proposed that uses an ODE-RNN model as the encoder instead.
Both the Latent ODE and Latent ODE-RNN models are difficult to train due to the vanishing and exploding gradients problem. To overcome this problem, the main contribution of this paper is to propose and illustrate a new model, based on a new Latent ODE, that uses an ODE-LSTM (Long Short-Term Memory) network as encoder - the Latent ODE-LSTM model. To limit the growth of the gradients, the Norm Gradient Clipping strategy was embedded in the Latent ODE-LSTM model.
The performance evaluation of the new Latent ODE-LSTM (with and without Norm Gradient Clipping) for modelling CTS with regular and irregular sampling rates is then demonstrated. Numerical experiments show that the new Latent ODE-LSTM performs better than Latent ODE-RNNs and can avoid the vanishing and exploding gradients during training.
Code implementations developed in this work are available at https://github.com/CeciliaCoelho/LatentODELSTMgithub.com/CeciliaCoelho/LatentODELSTM.
Machine Learning Neural ODE Latent ODE RNN LSTM Latent ODE-LSTM Gradient Clipping
§ INTRODUCTION
Feed-forward Neural Networks (NNs) propagate information in a unidirectional manner, moving from the input layer through the hidden layers, until the output layer is reached. In contrast, RNNs <cit.> feature a feedback mechanism between two or more layers, making RNNs ideal for modelling and processing sequential data, such as time series data.
However, RNNs can only process regularly sampled data <cit.>, but real world data sequences are often sampled irregularly. To mitigate this problem, the irregularly-sampled data are rewritten as regularly-sample data, by dividing the time interval into equally-sized intervals, and assigning or aggregating observations using averages.
This type of preprocessing can destroy information, in particular about measurement time, and can also lead to an extra source of error <cit.>. Moreover, RNNs process continuous time series as discrete-time sequence data, complicating real-time processing. In general, RNNs are only suitable for processing moderate-length regular sequence data, with few missing values and small time intervals between observations.
In <cit.>, the authors propose a different solution for handling time series using deep learning - Neural Ordinary Differential Equations (Neural ODEs). In Neural ODEs, the iterative updates of hidden states of the RNN are seen as an Euler discretization of a continuous transformation. Thus, instead of specifying a discrete sequence of hidden layers, it parameterises the derivative of the hidden state using a neural network.
This continuously defined dynamics can naturally incorporate data which arrives at arbitrary times (irregularly sampled data).
Variational Autoencoders (VAEs) are generative models that learn a distribution over data. They provide better predictive accuracy when few data is available, better extrapolation for long time horizons, and the possibility of generating new samples from the original data <cit.>.
In <cit.> the authors propose a VAE with a RNN encoder and a Neural ODE decoder, for time series data.
This new architecture is known as Latent ODE, and improves the performance of the VAEs by representing each time series by a continuous latent trajectory that allows for forward and backward extrapolations in time. The Latent ODE was further improved by Rubanova et al.
<cit.>, where the authors proposed a modification to the encoder of a Latent ODE so that the state transitions of the RNN are defined by a Neural ODE, ODE-RNN, taking advantage of the information given by the sampling intervals of the data. This new architecture was designated by Latent ODE-RNN.
It is known from the literature that RNNs can suffer from the problem of vanishing and exploding gradients <cit.>, making these networks difficult to train.
To mitigate this problem, the Long Short-Term Memory (LSTM) network <cit.> was developed, and in <cit.> the authors proposed an ODE-LSTM to overcome the vanishing and exploding gradients problem of ODE-RNNs.
In this work, we prove that Latent ODE-RNNs still suffer from the vanishing and exploding gradients problem and propose replacing the ODE-RNN encoder by an ODE-LSTM, thus avoiding gradient dissipation and improving the performance of the network when learning long-term dependencies.
Since the LSTM networks are still prone to gradient explosion <cit.> <cit.>, we propose combining the Latent ODE-LSTM architecture with norm gradient clipping, a technique used to control gradients by rescaling <cit.>, which is referred to as Latent ODE-LSTM and Latent ODE-LSTM+gradient clipping throughout the paper. Table <ref> shows the Encoder and Decoder used in each variant of the Latent ODE architecture.
We compare the newly proposed Latent ODE-LSTM (with and without gradient clipping) to a Latent ODE-RNN using synthetic irregularly sampled, gradually sparser time series, and real-life regularly and irregularly sampled time series. The results show that the new architecture outperforms Latent ODE-RNN, and consequently the classical architectures.
The paper is organised as follows. Section <ref> presents a brief review of essential concepts such as RNNs, LSTM networks, Norm Gradient Clipping, Neural ODEs, Autoencoders, VAEs, Latent ODEs, Latent ODE-RNN.
Section <ref> is devoted to the new Latent ODE-LSTM architecture. It is shown that the vanishing gradient problem is mitigated using the LSTM. Also, the exploding gradient problem is addressed by combining the Latent ODE-LSTM with norm gradient clipping (this section relies on <ref> where we prove that RNNs suffer from the vanishing and exploding gradients problem <cit.>, and, building on that, we also prove that Latent ODEs and Latent ODE-RNNs suffer from the same problem).
In Section <ref> we evaluate the performance of the different architectures by considering the reconstruction and extrapolation of spirals, and numerical experiments with two real-life datasets (one with regularly sampled data and the other with irregularly sampled data). The paper ends with the conclusions and future work in Section <ref>.
§ BACKGROUND
This section provides some of the background information needed for the next sections.
Let 𝒳=(x_1,x_2,…,x_N) be an input sequential data of length N, with x_i ∈ℝ^d denoting the input at time step i (i=1,…,N). Let 𝒴=(y_1,y_2,…,y_N) be the desired response sequential data, with y_i ∈ℝ^p
denoting the response vector at time step i, and, let Ŷ=(ŷ_1,ŷ_2,…,ŷ_N) be the output response sequential data produced by an architecture, with ŷ_i ∈ℝ^p denoting the output response vector at time step i.
§.§ RNNs
There are a number of works in the literature, <cit.>, in which NNs are used to handle sequential data. However, when fed with sequential data 𝒳, NNs treat each value independently, with no way to convey a notion of dependency or ordering between the values of the data sequence. Furthermore, because NNs have a fixed number of neurons in the input layer, it is not possible to generalise the model to input and output sequences with arbitrary lengths that were not used during training <cit.>.
To address these issues, RNNs were developed <cit.>. RNNs have feedback loops that allow multiple input values to be fed sequentially, enabling sequential data modelling <cit.>.
An RNN builds a sequence of n-dimensional hidden state vectors h_i ∈ℝ^n, where h_i catches the essential characteristics of the input sequences from the first time step to i. The left side of Figure <ref> shows a simple RNN, with a hidden layer h_i and its feedback loop. A RNN unrolling through time, which is the same hidden layer represented once per time step i (i=1,…,N) is represented on the right side of Figure <ref>. Therefore, a RNN is a deep feed-forward NN with N layers, where each layer has a number of neurons, n, equal to the length of h_i.
At time step i, the hidden state h_i depends on the input vector x_i and the previous hidden state h_i-1 from time step i-1. The RNN cell that constitutes the layers of a RNN is defined by:
h_i = σ(w_feedback h_i-1 + w_inputx_i +b)
Here, σ is an activation function, and the initial hidden state vector h_0 needs to be initialised.
The matrix w_input∈ℝ^n × d, contains the weights that link the input and hidden state vectors.
The matrix w_feedback∈ℝ^n × n contains the weights that link two hidden state vectors at time step i-1 and i, and b ∈ℝ^n the bias vector of the current hidden state.
At time step i, having the hidden state vector h_i, the output vector ŷ_i is computed by:
ŷ_i = σ(w_outputh_i + b_output)
where the matrix w_output∈ℝ^p × n contains the weights that link the hidden state and the output vectors, with bias vector b_output.
In a RNN, the weight matrices and bias vectors are constant across all of time steps i, that is, all parameters θ: w_input, w_feedback, b, w_output, b_output are shared across the time steps.
The strategy of parameter sharing is noteworthy as it enables a significant reduction in the number of parameters that an RNN needs to learn. This is done by assuming that the shared parameters can capture all the essential sequential features.
Note that h_i is a function of x_i and h_i-1, which is a function of x_i-1 and h_i-2, which is a function of x_i-2 and h_i-3, etc. Thus, h_i is a function of all the inputs since the instant i=1:
h_i = σ(w_feedback h_i-1 + w_input x_i + b) = σ(w_feedback σ(w_feedback ⋯ σ(w_feedback h_0 + w_input x_1 + b) ⋯ + w_input x_i-1 + b) + w_input x_i + b)
with h_0 usually initialised as the null vector.
Since the output ŷ_i and the hidden state h_i are functions of all the inputs from previous time steps, the network can be regarded as having a form of memory, and the RNN cell is also known as a short-term memory cell.
The update of formula <ref> performed by an RNN cell is often represented by h_i = RNNCell(h_i-1, x_i).
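As an illustration, a minimal NumPy sketch of the RNN cell and of its unrolling is given below; the use of tanh for the hidden activation and of a sigmoid output layer is an assumption, since σ above denotes a generic activation function.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_cell(h_prev, x, w_feedback, w_input, b):
    """h_i = sigma(w_feedback h_{i-1} + w_input x_i + b), with sigma = tanh here."""
    return np.tanh(w_feedback @ h_prev + w_input @ x + b)

def rnn_forward(X, w_feedback, w_input, b, w_output, b_output):
    """Unroll the RNN over a sequence X of shape (N, d); h_0 is the null vector."""
    h = np.zeros(w_feedback.shape[0])
    outputs = []
    for x in X:
        h = rnn_cell(h, x, w_feedback, w_input, b)
        outputs.append(sigmoid(w_output @ h + b_output))   # output layer
    return np.array(outputs), h
```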
To train a RNN, the RNN is unrolled through time and the BackPropagation Through Time (BPTT) technique is applied.
However, when training very deep networks, such as RNNs for long sequences, the vanishing or exploding gradients problem may arise <cit.> <cit.>. Moreover, RNNs are only suitable for handling regularly-sampled data with moderate length since, with the increase in length, the first time steps of the sequence will progressively be forgotten.
We proved in <ref>, that RNNs suffer from the vanishing and exploding gradients problem.
§.§ LSTM Networks
To alleviate the vanishing gradient problem and the short-term memory problem in RNNs, LSTM networks have been proposed
<cit.>.
Like RNNs, LSTMs use feedback loops but with an additional internal cell state C_i ∈ℝ^n, which corresponds to long-term memory, and three differentiable gates (input gate vector I_i ∈ℝ^n, forget gate vector F_i ∈ℝ^n, output gate vector O_i ∈ℝ^n), to control the states h_i and C_i.
At time step i, the three gate vectors are updated accordingly:
[ I_i = σ( w_xinx_i + w_hinh_i-1+b_in); F_i = σ( w_xfx_i+ w_hfh_i-1 +b_f); O_i = σ( w_xox_i + w_hoh_i-1+b_o) ]
Here, σ is the sigmoid activation function. Each gate is a function of the input vector x_i and the previous time step hidden state h_i-1. Each gate has a weight matrix from input to gate vector, and from the hidden state vector to the gate vector, and the respective bias vector, with w_xin, w_xf, w_xo∈ℝ^n × d, w_hin, w_hf, w_ho∈ℝ^n × n and b_in, b_f, b_o, ∈ℝ^n.
Each gate in an LSTM network has a distinct role. The input gate vector I_i allows the NN to decide how much of the input vector, via the candidate memory state C̃_i ∈ℝ^n, is allowed to influence the memory state C_i. The forget gate vector F_i allows the NN to decide how much of the previous memory vector C_i-1 should be forgotten, and the output gate vector O_i allows the NN to decide how much of the internal memory state C_i should be retained for the hidden state h_i.
At time step i, given the current input x_i and the previous hidden state h_i-1, LSTM first calculates the candidate memory update C̃_i as follows:
C̃_i= tanh(w_xc x_i + w_hc h_i-1 +b_c).
Here, tanh is the hyperbolic tangent activation function. The matrix w_xc∈ℝ^n × d contains the weights between the input vector and the candidate memory cell, w_hc∈ℝ^n × n contains the weights between the hidden state and the candidate memory cell, and b_c ∈ℝ^n is the corresponding bias vector.
Then, the three gates are used to calculate the vectors of the internal memory C_i and hidden state h_i:
[ C_i = F_i ⊙ C_i-1+I_i ⊙C̃_i; h_i = O_i⊙tanh(C_i), ]
where ⊙ is the element-wise product operation.
Finally, as in RNNs, the output vector ŷ_i at time i is computed using the hidden state vector h_i, as follows:
ŷ_i = σ(w_outputh_i + b_output).
Here, σ is the sigmoid activation function, w_output∈ℝ^p × n is the weight matrix that contains the weights that link the hidden state and the output vectors, and b_output the bias vector. The initial state vector h_0 and the initial internal memory state vector C_0 are initialised, usually, as the null vector. Figure <ref> shows the scheme of a LSTM cell.
The update of formulas (<ref>)-(<ref>) performed by an LSTM cell are often represented by
(C_i,h_i) = LSTMCell(C_i-1,h_i-1,x_i).
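A minimal NumPy sketch of one LSTM step, following the gate, candidate-memory, and state-update equations above, is given below; storing the weights in a dictionary is an implementation choice.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(C_prev, h_prev, x, p):
    """One LSTM step; p is a dict holding the weight matrices and bias vectors."""
    I = sigmoid(p["w_xin"] @ x + p["w_hin"] @ h_prev + p["b_in"])     # input gate
    F = sigmoid(p["w_xf"]  @ x + p["w_hf"]  @ h_prev + p["b_f"])      # forget gate
    O = sigmoid(p["w_xo"]  @ x + p["w_ho"]  @ h_prev + p["b_o"])      # output gate
    C_tilde = np.tanh(p["w_xc"] @ x + p["w_hc"] @ h_prev + p["b_c"])  # candidate memory
    C = F * C_prev + I * C_tilde        # element-wise internal memory update
    h = O * np.tanh(C)                  # new hidden state
    return C, h
```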
To train a LSTM network, like in a RNN, it is unrolled through time, and a modified version of the BPTT technique to deal with element-wise operation in the update formula (<ref>), is applied.
Due to their design, LSTMs are suitable for handling regularly-sampled long discrete sequence data. Moreover, they ensure a constant error flow through the network due to the dependence of the current memory cell C_i on the previous one C_i-1, preventing gradients from vanishing <cit.>.
Note that, the computational cost of a LSTM is higher than for a RNN model due to the more complex architecture with multiple gates and memory cells. This increased complexity generally requires more computations during both training and prediction.
Despite the increase of computational cost, LSTMs are an attractive alternative when favouring accuracy over computational cost.
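To make the gate updates (<ref>)-(<ref>) concrete, the following is a minimal sketch of a single LSTM cell step in PyTorch (the framework used for the implementations in this work); the class name, layer sizes and the stacking of the four gate weight matrices into two linear layers are illustrative choices, not the exact cells used in our experiments.

import torch
import torch.nn as nn

class MinimalLSTMCell(nn.Module):
    # One LSTM step following the gate equations above: I, F, O and the
    # candidate memory C~ are affine functions of (x_i, h_{i-1}); the four
    # weight matrices are stacked into two linear layers for brevity.
    def __init__(self, d, n):
        super().__init__()
        self.x2g = nn.Linear(d, 4 * n)   # stacks w_xin, w_xf, w_xo, w_xc (and biases)
        self.h2g = nn.Linear(n, 4 * n)   # stacks w_hin, w_hf, w_ho, w_hc

    def forward(self, x_i, h_prev, c_prev):
        gates = self.x2g(x_i) + self.h2g(h_prev)
        i_g, f_g, o_g, c_tilde = gates.chunk(4, dim=-1)
        i_g, f_g, o_g = torch.sigmoid(i_g), torch.sigmoid(f_g), torch.sigmoid(o_g)
        c_tilde = torch.tanh(c_tilde)                 # candidate memory C~_i
        c_i = f_g * c_prev + i_g * c_tilde            # C_i = F_i ⊙ C_{i-1} + I_i ⊙ C~_i
        h_i = o_g * torch.tanh(c_i)                   # h_i = O_i ⊙ tanh(C_i)
        return c_i, h_i

cell = MinimalLSTMCell(d=3, n=8)
c0, h0 = torch.zeros(1, 8), torch.zeros(1, 8)
c1, h1 = cell(torch.randn(1, 3), h0, c0)   # (C_i, h_i) = LSTMCell(C_{i-1}, h_{i-1}, x_i)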
§.§ Neural ODE
The authors in <cit.> observed that some neural network architectures, such as RNN and LSTM, learn how one hidden state, h_t, differs from the next one, h_t+1, resembling an Euler discretisation,
h_t+1 = h_t + f(h_t, θ)
where θ=(w, b) and t = 0, …, N. When the steps from h_t to h_t+1 are infinitesimally small, these computations resemble a continuous dynamic process <cit.>.
Thus, the authors in <cit.> proposed Neural ODEs, a NN that models an ODE to the hidden states dynamics:
dh(t)/dt=f(h(t),t,θ), with h(0) = h_0.
A Neural ODE is composed of two parts: a NN that models the dynamics function f_θ by optimising the parameters θ, and an ODE solver. The final result of training a Neural ODE is the function f_θ. Then, to make predictions, the solution of the ODE initial value problem (<ref>) is computed for a given time interval (t_0, t_f) by an ODE solver:
{h_t}_t=0^N = ODESolve(f_θ, h_0, (t_0,t_f)).
Figure <ref> shows the architecture of the Neural ODE.
Note that during training, to optimise the parameters θ, backpropagating through the ODE solver can be used, but it has a high memory cost and introduces additional numerical errors <cit.>. Thus, the authors in <cit.> propose to use the Adjoint Sensitivity method to train a neural ODE. It scales linearly with the problem size, has low memory cost and explicitly controls the numerical error
<cit.>.
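As an illustration of the two components of a Neural ODE (the network f_θ and the ODE solver), the sketch below assumes the torchdiffeq package, whose odeint_adjoint routine backpropagates through the solve via the Adjoint Sensitivity method; the network sizes and time grid are arbitrary placeholders.

import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint   # assumes the torchdiffeq package

class ODEFunc(nn.Module):
    # f_theta(h(t), t): a small MLP modelling the hidden-state dynamics dh/dt.
    def __init__(self, n):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n, 50), nn.Tanh(), nn.Linear(50, n))

    def forward(self, t, h):
        return self.net(h)

f_theta = ODEFunc(n=8)
h0 = torch.zeros(1, 8)                     # initial condition h(0) = h_0
t = torch.linspace(0.0, 1.0, 20)           # time grid spanning (t_0, t_f)
h_t = odeint(f_theta, h0, t)               # {h_t} = ODESolve(f_theta, h_0, (t_0, t_f))
# Using odeint_adjoint makes gradients flow via the Adjoint Sensitivity method.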
§.§ Autoencoders
Autoencoders are an unsupervised learning method whose goal is to reduce the dimension of the input data, i.e., they are a form of data compression <cit.>. Through the process of compressing the input data, encoding it and then reconstructing it as output, autoencoders reduce dimensionality while retaining only the features that are truly valuable. They have applications in information retrieval, anomaly detection, image processing (image denoising as well as super-resolution), etc.
The architecture of an autoencoder is very simple and information only flows in one way, forward, meaning that we have a feed-forward NN. This network is usually treated as being composed of two main components (that can be seen as separate NNs and that have different architectures), an encoder and a decoder, which learn an encoding-decoding scheme that minimises the loss using an iterative optimisation process <cit.>.
The encoder learns a mapping from the data, x ∈ℝ^d, to a low-dimensional latent space with dimension l, z ∈ℝ^l, l<d, leading to a bottleneck (layer with the fewest number of neurons in the network), while the decoder learns a mapping from the latent space, z, to a reconstructed observation, x̃∈ℝ^d with x̃≈ x <cit.>. Figure <ref> shows the architecture of an autoencoder.
The goal is to train the model to reconstruct the original data as well as possible, so that the cost function ℒ_θ minimises the difference between the original input x and the reconstruction x̃, by optimising the network parameters θ using the backpropagation method <cit.>.
The main difference between autoencoders and traditional NNs is that instead of training an input/output pair (x,y) and modelling a transformation x ↦ y, autoencoders try to fit the input data to themselves x ↦ x.
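A minimal autoencoder sketch in PyTorch illustrating the encoder-bottleneck-decoder structure and the reconstruction objective x ↦ x; the layer widths are arbitrary and only meant to show the shape of the mapping ℝ^d → ℝ^l → ℝ^d.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    # Encoder R^d -> R^l (bottleneck with l < d) and decoder R^l -> R^d.
    def __init__(self, d, l):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, l))
        self.decoder = nn.Sequential(nn.Linear(l, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, x):
        z = self.encoder(x)          # latent code z
        return self.decoder(z)       # reconstruction x~ ≈ x

model = Autoencoder(d=32, l=4)
x = torch.randn(16, 32)
loss = nn.functional.mse_loss(model(x), x)   # fit the input to itself: x -> x
loss.backward()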
§.§ VAEs
Autoencoders attempt to reconstruct a given input x by learning mappings to and from a lower-dimensional vector z. However, an autoencoder is not a generative model as it does not model a distribution of the data, being unable to sample new data from a learned distribution.
Variational Autoencoders <cit.> (VAEs), on the other hand, are generative models in which samples are generated by a neural network NN_α applied to a random latent variable z. This latent variable is drawn from a distribution over all possible values of z, called the prior distribution p_α(z) <cit.>,
x = NN_α(z).
Unlike autoencoders, VAEs consist of probabilistic versions of encoders and decoders.
The goal of VAEs is to find the true posterior distribution p_α(z|x) learned by the encoder so that latent variables can be inferred from data, and to find a stochastic decoder p_α(x|z) so that a sample can be reconstructed from a random latent variable z, <cit.>.
However, computing the true posterior distribution p_α(z|x) is intractable, since it requires the marginal likelihood of x, which involves integrating over all values of the latent variable z. To overcome this issue, the authors in <cit.> proposed learning an approximation to the true posterior distribution using a stochastic encoder q_ϕ(z|x). Thus, given an input x the stochastic encoder generates the corresponding value z (see Figure <ref>) <cit.>.
The mean, μ∈ℝ^l, and the standard deviation, σ∈ℝ^l, are given by the encoder and used to sample the latent variable z given by z=μ + σ⊙ϵ. The generative part of a VAE is created by inducing randomness by adding noise ϵ∈ℝ^l to calculate z <cit.>.
The training of a VAE is done by optimising the parameters of the encoder, ϕ, and decoder, α, simultaneously. This is done by optimising a cost function ℒ_Θ, with Θ=(ϕ,α), defined as follows:
ℒ_Θ = 1/N∑_i=1^N l(x̃_i)
where l(x̃_i) is the individual cost for each time step. In VAEs, ℒ_Θ is the Evidence Lower Bound (ELBO) function <cit.>.
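The sketch below illustrates the probabilistic encoder, the reparameterisation z = μ + σ⊙ϵ and a common form of the (negative) ELBO with a standard normal prior; it is a generic VAE illustration with assumed layer sizes, not the specific networks used later in this work.

import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    # Stochastic encoder q_phi(z|x): outputs the mean mu and log-variance of a Gaussian.
    def __init__(self, d, l):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(d, 64), nn.ReLU())
        self.mu = nn.Linear(64, l)
        self.logvar = nn.Linear(64, l)

    def forward(self, x):
        h = self.hidden(x)
        return self.mu(h), self.logvar(h)

def reparameterize(mu, logvar):
    # z = mu + sigma ⊙ eps with eps ~ N(0, I): keeps the sampling step differentiable.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def negative_elbo(x_recon, x, mu, logvar):
    # Reconstruction term plus KL(q_phi(z|x) || N(0, I)); minimising it maximises the ELBO.
    recon = nn.functional.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl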
§.§ Latent ODE
In <cit.> the authors propose Latent ODEs, a VAE architecture using a RNN encoder and a Neural ODE decoder (see Figure <ref>).
Note that the encoder uses a RNN combined with a translator NN g, and the decoder uses a Neural ODE combined with an output NN, that models a distribution p_α(x|z). The RNN output is transformed by g and gives a mean μ and standard deviation σ of a distribution q_ϕ:
q_ϕ(z_0| {x_i}_i=1^N) = 𝒩(μ,σ) where μ,σ=g(RNN({x_i}_i=1^N)).
Therefore, the encoder converts the input sequence {x_i}_i=1^N into the latent variable of the initial state:
z_t_0≈ z_0=μ + σ⊙ϵ∈ℝ^l
where l is the dimension of the latent space, ϕ are the parameters of the NN encoder and ϵ is a randomly sampled noise.
Before training (in networks with a Neural ODE decoder), the sequence is reversed in time so that the initial state z_0 of the ODE initial value problem can be computed. The initial state z_0 is then given to a Neural ODE decoder that learns a continuous latent trajectory f(z_0,θ,t). Then, an ODE initial value problem is solved using an ODE solver, giving the latent variables z_t in the entire time domain (t_0, …, t_N):
{z_t_i}_i=0^N = ODESolve(f_θ, z_0, (t_0, …, t_N)).
Finally, a learned distribution reconstructs the sample from the latent space, where α are the parameters of the decoder.
Again, training is done by optimising a cost function ℒ_Θ, with Θ=(ϕ,α), defined as follows:
ℒ_Θ = 1/N∑_i=1^N l(ŷ_i)
where l(ŷ_i) is the individual cost for each time step.
We note that the use of a Neural ODE decoder allows easy extrapolation forward or backward in time, since a continuous-time latent trajectory is available. Note that although the latent variable z_0 is computed by a stochastic process that introduces randomness into the model, the ODE, which models evolution through time, assumes deterministic dynamics since the initial value problem depends on the initial state z_0. Therefore, each state z_t is uniquely defined and dependent on the initial state z_0. However, this determinism can be problematic in certain applications that have inherent randomness that cannot be captured by such deterministic models.
We prove in Appendix <ref>,
that Latent ODEs suffer from the vanishing and exploding gradients problem due to the RNN in the encoder.
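The following sketch puts the pieces of a Latent ODE together (RNN encoder, translator g, Neural ODE decoder f_θ and output network); it assumes the torchdiffeq odeint solver and illustrative layer sizes, and omits the ELBO training loop.

import torch
import torch.nn as nn
from torchdiffeq import odeint

class LatentODE(nn.Module):
    # Encoder: RNN + translator g -> (mu, sigma); decoder: Neural ODE f_theta + output net.
    def __init__(self, d, n, l):
        super().__init__()
        self.rnn = nn.RNN(d, n, batch_first=True)
        self.g = nn.Linear(n, 2 * l)
        self.f_theta = nn.Sequential(nn.Linear(l, 32), nn.Tanh(), nn.Linear(32, l))
        self.out = nn.Linear(l, d)

    def forward(self, x, t):
        # x: (batch, N, d); the sequence is flipped so the RNN ends at t_0.
        _, h_last = self.rnn(torch.flip(x, dims=[1]))
        mu, logvar = self.g(h_last[-1]).chunk(2, dim=-1)
        z0 = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)     # z_0 = mu + sigma ⊙ eps
        z_t = odeint(lambda s, z: self.f_theta(z), z0, t)            # {z_{t_i}} on the whole grid
        return self.out(z_t)                                         # reconstruction / extrapolation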
§.§ Latent ODE-RNN
Latent ODEs use an RNN encoder and have difficulty handling irregularly sampled data. In <cit.>, the authors propose Latent ODE-RNNs, which replace the RNN in Latent ODEs with an ODE-RNN. In this way, the encoder can perceive the sampling time between observations.
The ODE-RNN <cit.> is a RNN where the state transitions are defined by a Neural ODE. In this case, the RNN update (<ref>) is computed with an intermediate hidden state h'_i ∈ℝ^n:
h_i = RNNCell(h'_i,x_i),
where h'_i is the solution of the ODE initial value problem in the time interval (t_i-1,t_i) by using the previous hidden state h_i-1 as the initial value:
h'_i= ODESolve(f_θ,h_i-1,(t_i-1,t_i)), i=1,…,N
Figure <ref> shows the architecture of an ODE-RNN. Note that the ODE-RNN, in a Latent ODE-RNN, deals with the sequence backwards in time and produces a single output z'_0, at time t_0:
z'_0 = ODE-RNN_ϕ({x_i, t_i}_i=1^N).
Then, the z'_0 is used by the translator network g to output the mean, μ, and standard deviation, σ: μ, σ = g(z'_0).
The initial value z_0 is sampled from 𝒩(μ,σ), z_0=𝒩(μ, σ), and used as the initial value to compute the solution of the ODE initial value problem, in the decoder.
By replacing the RNN encoder in Latent ODEs, Latent ODE-RNNs were shown to outperform Latent ODEs when learning from irregularly sampled data due to the ability to better learn the approximate posterior distribution <cit.>.
However, RNNs suffer from the vanishing and exploding gradients problem <cit.>. Thus networks such as ODE-RNN, Latent ODE and Latent ODE-RNN, that embed RNNs, suffer from the same problem too. The authors in <cit.> proposed ODE-LSTMs that are less prone to this problem.
As proved in Appendix <ref>,
Latent ODE-RNNs suffer from the vanishing and exploding gradients problem due to the RNN in the encoder.
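For reference, the sketch below shows the core of an ODE-RNN encoder as used in Latent ODE-RNNs: between two observation times the hidden state is evolved by a Neural ODE, and at each observation the RNN cell update (<ref>) is applied. It assumes the torchdiffeq solver, an input sequence already ordered as it should be consumed, and illustrative layer sizes.

import torch
import torch.nn as nn
from torchdiffeq import odeint

class ODERNNEncoder(nn.Module):
    # Between observations the hidden state follows the Neural ODE f_theta;
    # at each observation the RNN cell update h_i = RNNCell(h'_i, x_i) is applied.
    def __init__(self, d, n):
        super().__init__()
        self.f_theta = nn.Sequential(nn.Linear(n, 32), nn.Tanh(), nn.Linear(32, n))
        self.cell = nn.RNNCell(d, n)

    def forward(self, x, t):
        # x: (N, batch, d); t: (N,) observation times.
        h = torch.zeros(x.size(1), self.cell.hidden_size)
        for i in range(x.size(0)):
            if i > 0:
                dt = (t[i] - t[i - 1]).abs()                 # elapsed time between samples
                span = torch.stack([torch.zeros_like(dt), dt])
                h = odeint(lambda s, y: self.f_theta(y), h, span)[-1]   # intermediate h'_i
            h = self.cell(x[i], h)                           # RNNCell(h'_i, x_i)
        return h   # single output z'_0 passed to the translator network g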
§.§ Norm Gradient Clipping
Norm gradient clipping is a strategy introduced to address the explosion gradient problem by rescaling <cit.>.
If the norm of the gradients is above a predefined threshold, ‖∂ℒ/∂Θ‖≥ threshold, then the gradients are rescaled as follows:
∂ℒ/∂Θ = (threshold/‖∂ℒ/∂Θ‖) ∂ℒ/∂Θ.
Rescaling is performed after all gradients have been calculated and before the network parameters are updated.
Note that this strategy introduces an additional hyperparameter, threshold, which can be tuned by looking at the average norm of the gradients during training <cit.>.
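In PyTorch, this rescaling is available as torch.nn.utils.clip_grad_norm_ and is applied between the backward pass and the optimiser step; the model, optimiser and threshold below are placeholders for illustration only.

import torch
import torch.nn as nn

model = nn.Linear(4, 1)                                  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
threshold = 1.0                                          # clipping hyperparameter

x, y = torch.randn(32, 4), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
# Rescale the gradients when their total norm exceeds the threshold,
# after all gradients are computed and before the parameters are updated.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=threshold)
optimizer.step()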
§ METHOD
In this section, we propose the Latent ODE-LSTM architecture and prove that the ODE-LSTM encoder mitigates the vanishing gradient problem. Moreover, we prove that the Latent ODE-LSTM suffers from the exploding gradient problem, and, to overcome this issue, we propose to incorporate the Norm Gradient Clipping strategy <cit.> into Latent ODE-LSTMs.
§.§ Latent ODE-LSTM network
We propose to replace the encoder in Latent ODE-RNNs with a ODE-LSTM <cit.>, designated by Latent ODE-LSTMs. With the introduction of LSTMs into the encoder, we expect to mitigate the vanishing gradient problem and to improve performance in learning long-term dependencies.
The Latent ODE-LSTM is a VAE where the encoder combines an ODE-LSTM with a translator NN g that approximates the posterior distribution q(z_0|{x_i,t_i}_i=1^N) with mean μ and standard deviation σ:
q_ϕ(z_0| {x_i,t_i}_i=1^N) = 𝒩(μ,σ) where μ,σ=g(z'_0) with z'_0 =ODE-LSTM _ϕ({x_i,t_i}_i=1^N).
Then the encoder converts the reversed input sequence {x_i,t_i}_i=1^N into a latent variable z_0 at the initial time:
z_0 = μ + σ⊙ϵ∈ℝ^l
where l is the dimension of the latent space, ϕ are the parameters of the NN encoder and ϵ is randomly sampled noise.
The ODE-LSTM encoder is an LSTM where the transitions between two observations are given by a Neural ODE. At time step i, the LSTM cell is defined as follows:
[ I_i = σ( w_xinx_i + w_hinh'_i+b_in); F_i = σ( w_xfx_i+ w_hfh'_i +b_f); O_i = σ( w_xox_i + w_hoh'_i+b_o); C̃_i = tanh(w_xc x_i + w_hc h'_i +b_c); C_i = F_i ⊙ C_i-1+I_i ⊙C̃_i; h_i = O_i⊙tanh(C_i), ]
where h'_i ∈ℝ^n is the intermediate hidden state, that is the solution of the ODE initial value problem with initial state h_i-1, and time interval (t_i-1, t_i). In this case, the LSTM update formula (<ref>) can be represented by:
(C_i,h_i) = LSTMCell(C_i-1,h'_i,x_i).
Algorithm <ref> describes the ODE-LSTM embed in the encoder of a Latent ODE-LSTM.
Then, the latent state z_0 is given to the Neural ODE decoder that learns a continuous-time latent trajectory f(z_0, θ,(t_0,t_N)) that allows prediction or extrapolation by solving an ODE initial value problem in the entire time domain (t_0,…,t_N):
{z_t_i}_i=0^N = ODESolve(f_θ, z_0, (t_0, …, t_N)).
Finally, the predicted latent states {z_t_i}_i=1..N are fed into a neural network, which performs a conversion to the original data space and outputs the predicted data {ŷ_i}_i=1^N. The algorithm that describes the proposed Latent ODE-LSTM network is presented in Algorithm <ref>.
The training of Latent ODE-LSTMs is done by optimising the parameters of the encoder, ϕ, and decoder, θ, simultaneously. This is done by optimising the cost function defined in (<ref>).
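The sketch below illustrates the ODE-LSTM encoder step of the proposed Latent ODE-LSTM (cf. Algorithm <ref>): the hidden state h follows a Neural ODE between observations while the internal memory C is carried over, and the LSTM cell update (<ref>) is applied at each observation. It assumes the torchdiffeq solver, an input sequence already given in reverse time order and illustrative sizes, and is not the full implementation released with this paper.

import torch
import torch.nn as nn
from torchdiffeq import odeint

class ODELSTMEncoder(nn.Module):
    # Between observations the hidden state h follows the Neural ODE f_theta while the
    # internal memory C is carried over; at each observation the LSTM update
    # (C_i, h_i) = LSTMCell(C_{i-1}, h'_i, x_i) is applied.
    def __init__(self, d, n):
        super().__init__()
        self.f_theta = nn.Sequential(nn.Linear(n, 32), nn.Tanh(), nn.Linear(32, n))
        self.cell = nn.LSTMCell(d, n)

    def forward(self, x, t):
        # x: (N, batch, d) in the (reversed) order it is consumed; t: (N,) observation times.
        h = torch.zeros(x.size(1), self.cell.hidden_size)
        c = torch.zeros_like(h)
        for i in range(x.size(0)):
            if i > 0:
                dt = (t[i] - t[i - 1]).abs()
                span = torch.stack([torch.zeros_like(dt), dt])
                h = odeint(lambda s, y: self.f_theta(y), h, span)[-1]   # intermediate h'_i
            h, c = self.cell(x[i], (h, c))                              # LSTM cell update
        return h   # z'_0, passed to the translator g to obtain (mu, sigma)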
§.§ Latent ODE-LSTMs prevent the vanishing gradient problem
To find the parameters Θ that optimise the loss ℒ_Θ, the gradients with respect to the parameters are computed as follows:
∂ℒ/∂Θ = 1/N∑_i=1^N ∂ l(ŷ_i)/∂Θ
where ∂ l(ŷ_i)/∂Θ is a sum of products that gives the gradient at time step i, which is given by all the contributions of the previous time steps k, with k<i:
∂ l(ŷ_i)∂Θ = ∑_k=1^i ∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i + ∂ z_0∂σ∂σ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i) ∂ h_i ∂ h_k∂^+ h_k∂Θ,
where ∂^+ h_k∂θ is the immediate partial derivative of h_k with respect to the parameters Θ <cit.>.
Here, ∂ h_i∂ h_k is a chain of products of all the hidden states that contribute to the hidden state h_i, at time step i:
∂ h_i/∂ h_k = ( ∏_i ≥ j>k∂ h_j/∂ h'_j∂ h'_j/∂ h_j-1).
In Latent ODE-LSTMs, the ODE-RNN is replaced by an ODE-LSTM in the encoder. In this case the update of the LSTM cell (<ref>) is more complex, where the hidden state at time step j is given by:
h_j = O_j⊙tanh(F_j⊙ C_j-1+I_j⊙C̃_j_C_j).
The term ∂ h_j/∂ h'_j is given by a chain of products (see <ref>). To compute ∂ h_j/∂ h'_j from (<ref>), the vectors are converted into diagonal matrices, the Hadamard product is replaced by the usual matrix product, and the derivatives are computed in an element-wise fashion:
∂ h_j∂ h'_j = ∂∂ h'_j( O_j ⊙tanh(C_j) ) = ∂∂ h'_j( O_j diag(tanh(C_j)) ) = ∂O_j∂ h'_j diag(tanh(C_j)) + O_j diag (∂tanh(C_j)∂ h'_j)
where O_j=diag(O_j),
∂O_j∂ h'_j =
w_ho diag(σ'( w_xo x_j + w_ho h'_j + b_o))
and
[ diag ( ∂tanh(C_j)∂ h'_j) = diag(∂∂ h'_j( tanh(F_j ⊙ C_j-1 + I_j ⊙C̃_j) ) ); = diag(∂∂ h'_j( tanh(F_j C_j-1 + I_j C̃_j) ) ); = diag (tanh'(F_j C_j-1 + I_j C̃_j)) (∂F_j∂ h'_jC_j-1 + F_j ∂C_j-1∂ h'_j + ∂I_j∂ h'_jC̃_j + I_j ∂C̃_j∂ h'_j) ]
with F_j=diag(F_j), C_j-1=diag(C_j-1), I_j=diag(I_j) C̃_j=diag(C̃_j), and where
∂F_j∂ h'_j = w_hf diag(σ'( w_xf x_j+ w_hf h'_j + b_f)),
∂I_j∂ h'_j = w_hin diag(σ'( w_xin x_j + w_hin h'_j + b_in)),
∂C̃_j∂ h'_j = w_hc diag(tanh'( w_xc x_j + w_hc h'_j + b_c)).
Substituting (<ref>)-(<ref>) into (<ref>), we obtain:
[ diag ( ∂tanh(C_j)∂ h'_j) = diag(tanh'(F_j C_j-1 + I_j C̃_j)) [ w_hf diag(σ'( w_xf x_j+ w_hf h'_j + b_f)) C_j-1 +; F_j ∂C_j-1∂ h'_j + w_hin diag(σ'( w_xin x_j+ w_hin h'_j + b_in)) C̃_j+; I_j w_hc diag(tanh'( w_xc x_j + w_hc h'_j + b_c)) ]. ]
Substituting (<ref>) and (<ref>) into (<ref>), we obtain ∂ h_j∂ h'_j:
[ ∂ h_j/∂ h'_j = w_ho diag(σ'(w_xo x_j + w_ho h'_j + b_o)) diag(tanh(C_j)) + O_j [diag(tanh'(F_j C_j-1 + I_j C̃_j)) ×; [ w_hf diag(σ'(w_xf x_j + w_hf h'_j + b_f)) C_j-1 + F_j ∂C_j-1/∂ h'_j +; w_hin diag(σ'(w_xin x_j + w_hin h'_j + b_in)) C̃_j + I_j w_hc diag(tanh'(w_xc x_j + w_hc h'_j + b_c)) ]]. ]
Substituting (<ref>) into (<ref>), and converting the ∂ h'_j∂ h_j-1 into a diagonal matrix, we obtain:
∂ h_i∂ h_k = ∏_i ≥ j ≥ k∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1)
where ∂ h_j∂ h'_j is given by (<ref>).
Note that the behaviour of the sum in (<ref>) is given by the behaviour of the terms ∂ l(ŷ_i)/∂Θ, which all have the same form. Each temporal contribution is given by (<ref>) and measures how the parameters Θ at time step k affect the loss at time step i>k. The factors ∂ h_i/∂ h_k (<ref>) transport the error in time from step i back to step k. To analyse the vanishing and exploding gradient problems, we need to look in particular at these matrix factors, which take the form of a product of i-k Jacobian matrices.
Although less prone to this problem, Latent ODE-LSTM can still suffer from the vanishing gradient problem:
∀ j, ‖∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1) ‖=‖∂O_j∂ h'_j diag(tanh(C_j)) + O_j diag ( ∂tanh(C_j)∂ h'_j) ‖‖ diag ( ∂ h'_j∂ h_j-1) ‖≤
‖∂O_j∂ h'_j diag(tanh(C_j)) + O_j diag (∂tanh(C_j)∂ h'_j) ‖‖ diag ( ∂ h'_j∂ h_j-1) ‖.
Let λ_ho, λ_hf, λ_hin, λ_hc, λ, λ_d, λ' be the absolute value of the largest eigenvalue of the weight matrix of the output w_ho, forget w_hf and input w_hin gates and candidate cell w_hc, and of the matrix C_j-1, ∂C_j-1∂ h'_j and ∂ h'_j∂ h_j-1, respectively.
Let γ_t and γ_σ be the absolute values of tanh(.) and σ(.) and therefore ‖ diag(tanh(.)‖≤γ_t, ‖ diag(σ(.)‖≤γ_σ, respectively.
Let γ_td and γ_σ d be the absolute values of tanh'(.) and σ'(.) and therefore ‖ diag(tanh'(.)‖≤γ_td, ‖ diag(σ'(.)‖≤γ_σ d, respectively.
For the gradient to vanish, the chain of products given by (<ref>) would have to decrease linearly toward zero <cit.>. For this to happen, it suffices that
[ [ λ_hoγ_σ dγ_t + γ_σ ( γ_tdλ_hfγ_σ dλ + γ_σλ_d + λ_hinγ_σ dγ_t + γ_σ dλ_hcγ_td ) ] λ'<; < [ 1 + γ_σ ( 1 + 1 + 1 + 1 ) ] λ'; < [ 1 + 4γ_σ ] λ'; < 5 λ'; < 1 ]
with λ_ho < 1/(γ_σ dγ_t), λ_hfλ < 1/(γ_tdγ_σ d), λ_d < 1/γ_σ, λ_hin < 1/(γ_σ dγ_t), λ_hc < 1/(γ_σ dγ_td) and λ' < 1/5.
For all j, taking η∈ℝ such that η < 1, it follows that ‖∂ h_j/∂ h'_j diag (∂ h'_j/∂ h_j-1) ‖≤η < 1. By induction over j we obtain
∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i + ∂ z_0∂σ∂σ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i) ( ∏_i ≥ j>k∂ h_j/∂ h'_j∂ h'_j/∂ h_j-1) ≤
≤∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i + ∂ z_0∂σ∂σ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i) η^i-k .
As η < 1 it follows from (<ref>) that in the presence of long-term sequences of data (for which i-k is large) then the term η^i-k decreases exponentially to zero, leading to the vanishing of the gradients.
Note that comparing the term ∂ h_i/∂ h_k (<ref>) with the same term in (<ref>) in <ref> highlights the difference between the computations of the Latent ODE-LSTM and Latent ODE-RNN networks.
Although it has the same structure, ∂ h_i/∂ h_k is no longer a simple chain of products, but a chain of products of sums (<ref>). This ensures constant error flow through the network due to the additive dependence of the current memory cell C_j on the previous one C_j-1, preventing gradients from vanishing <cit.>.
§.§ Latent ODE-LSTMs suffer from the exploding gradient problem
LSTMs cannot eliminate or mitigate the exploding gradient problem <cit.> and the problem propagates to Latent ODE-LSTMs due to the LSTM update scheme used by the ODE-LSTM encoder.
Consider the absolute value of the largest eigenvalues defined in the vanishing gradient proof shown previously.
It is necessary that
[ [ λ_hoγ_σ dγ_t + γ_σ ( γ_tdλ_hfγ_σ dλ + γ_σλ_d + λ_hinγ_σ dγ_t + γ_σ dλ_hcγ_td ) ] λ'>; > [ 1 + γ_σ ( 1 + 1 + 1 + 1 ) ] λ'; > [ 1 + 4γ_σ ] λ'; > 5 λ'; > 1 ]
with λ_ho > 1/(γ_σ dγ_t), λ_hfλ > 1/(γ_tdγ_σ d), λ_d > 1/γ_σ, λ_hin > 1/(γ_σ dγ_t), λ_hc > 1/(γ_σ dγ_td) and λ' > 1/5.
Let η∈ℝ such that η>1; it follows that ∀ j, ‖∂ h_j/∂ h'_j diag(∂ h'_j/∂ h_j-1) ‖≥η > 1. By induction over j, we obtain
∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i + ∂ z_0∂σ∂σ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i) ( ∏_i ≥ j>k∂ h_j/∂ h'_j∂ h'_j/∂ h_j-1) ≥
≥∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i + ∂ z_0∂σ∂σ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i) η^i-k.
As η > 1, it follows from (<ref>) that in the presence of long-term sequences of data (for which i-k is large), the term η^i-k increases exponentially with i-k, leading to the explosion of the gradients.
§.§ Latent ODE-LSTM + Gradient clipping
As shown in Section <ref>, Latent ODE-LSTMs suffer from the exploding gradient problem. To prevent this from happening, we propose to augment the Latent ODE-LSTM architecture with the Norm Gradient Clipping <cit.>. This strategy can be easily implemented in the training loop used in most Deep Learning libraries and does not require any change in the implementation of the Latent ODE-LSTM network.
§ EXPERIMENTS
To analyse the performance of the Latent ODE-LSTM model, two different evaluations were carried out: a qualitative approach to visually test time series reconstruction and extrapolation using 4 datasets with irregularly sampled time points forming spirals; and a quantitative evaluation using Mean Squared Error (MSE) in extrapolation tasks. Two datasets consisting of real-life time series data publicly available at Kaggle <cit.> were used. Both datasets are sampled daily, one of them regularly, while the other is sampled irregularly.
To evaluate the performance of the newly proposed architecture with and without gradient clipping, a Latent ODE-RNN was used as a baseline.
The Latent ODE-LSTM (with and without gradient clipping) models and Latent ODE-RNN model were implemented from scratch in Pytorch, available at https://github.com/CeciliaCoelho/LatentODELSTMgithub.com/CeciliaCoelho/LatentODELSTM.
§.§ Qualitative evaluation
A qualitative assessment was also carried out in <cit.> to test the performance of Latent ODE and Latent ODE-RNN. Here, we compare the performance of our Latent ODE-LSTM and Latent ODE-LSTM+GC with Latent ODE-RNN <cit.> on the spirals dataset.
§.§.§ Experimental Conditions
The VAE models for the qualitative evaluation consist of: ODE-LSTM (or ODE-RNN) encoder with 1 hidden layer and 20 neurons, a Neural ODE with 1 hidden layer and 20 neurons and an output layer with 25 neurons; and a Neural ODE decoder with 1 hidden layer and 20 neurons and an output network with 1 hidden layer and 20 neurons.
We chose Adam Optimiser with learning rate of 0.01, batch size of 1000 and 750 epochs.
The Neural ODEs are solved with Fourth-order Runge-Kutta method with 3/8 rule and using a fixed-step. A fixed-step was preferred over an adaptive one since the ODE is stiff and the step size could go to zero.
§.§.§ Bidirectional spiral dataset
The bidirectional spiral dataset was introduced in <cit.> to test the performance of Latent ODEs in reconstructing and extrapolating spirals with irregularly sampled (forward and backward) time points.
In this work, based on the code available in <cit.>,
first, a dataset with 500 sequences taken from clockwise 2D spirals and 500 from counterclockwise 2D spirals, with different starting points, was generated. Each sequence has 500 regularly sampled time points (Figure <ref> shows the clockwise and counterclockwise spiral), denoted by D_N=500. Then, from D_N=500 we constructed 4 training datasets, each with a different sequence length N={30,50,100,250} of randomly selected time points to which Gaussian noise was added, denoted by D_N=30,D_N=50,D_N=100,D_N=250. Each training dataset has a total of 1000 sequences.
We note that during the training of models Latent ODE-RNN and Latent ODE-LSTM without the gradient clipping strategy, when using D_N=250, the exploding gradient problem occurred. To overcome this, several trials of initialisation of the weights parameters were tested. For the Latent ODE-LSTM+GC, as expected, this issue did not occur.
Reducing N allows us to test the performance of the three models on progressively sparser data.
Testing was performed using D_N=500 for seen (reconstruction) and unseen time points (extrapolation) by each model.
The results obtained in the task of reconstructing and extrapolating bidirectional spirals for Latent ODE-RNN, Latent ODE-LSTM and Latent ODE-LSTM+GC are shown in Figure <ref> for N=250, Figure <ref> for N=100, Figure <ref> for N=50 and Figure <ref> for N=30.
For models trained with D_N=250 (Figure <ref>), one can see from the results that the counterclockwise spiral (top of Figure <ref>) reconstructions are far from the sampled data points. The extrapolation backward in time, t< 0, is completely off the mark and shows no resemblance to spiral dynamics. The Latent ODE-LSTM model stands out from the others when extrapolated forward in time, t > 0, and shows dynamics that are close to the target, albeit a little shift from the true trajectory. The results for the clockwise spiral (bottom of Figure <ref>) are better. Latent ODE-LSTM is the best model for reconstruction, but cannot extrapolate for t< 0. Latent ODE-LSTM+GC had the best performance on the extrapolation task.
For models trained with D_N=100, Figure <ref>, Latent ODE-LSTM+GC showed better extrapolation of the counterclockwise spiral (top of Figure <ref>). In the reconstruction task, Latent ODE-RNN and Latent ODE-LSTM+GC show similar performance.
Looking at the clockwise spiral results (bottom of Figure <ref>), we can see that all models perform worse than the models trained with D_N=250.
For the models trained with D_N=50, the three models had difficulty reconstructing and extrapolating the counterclockwise spirals, Figure <ref> top.
As can be seen for the models trained with D_N=250, D_N=100 in Figure <ref> and Figure <ref>, Latent ODE-LSTM has the best performance in reconstructing the clockwise spirals and Latent ODE-LSTM+GC is the best at extrapolation. Latent ODE-RNN's extrapolation is worst at both tasks, Figure <ref> bottom.
For the models trained with D_N=30, from Figure <ref> we can see that the performance of the three models in reconstruction (Figure <ref> bottom) remains similar to experiments with models trained with more time points, D_N=50,D_N=100,D_N=250, with Latent ODE-LSTM performing best on this task.
When extrapolating, forward and backward in time, Latent ODE-LSTM+GC shows a better fit to the dynamics of the spiral for longer time periods (end of sampled data points in Figure <ref>). We note that the Latent ODE-RNN and Latent ODE-LSTM models interrupt the dynamics of the spiral immediately after the end of the sampled points for t > 0.
§.§ Quantitative evaluation
§.§.§ Experimental conditions
The VAE models for the quantitative evaluation consist of: an ODE-LSTM (or ODE-RNN) encoder with a hidden layer of 4 neurons, a Neural ODE with a hidden layer of 25 neurons and an output layer of 4 neurons; and a decoder with a Neural ODE with 1 hidden layer with 25 neurons and an output network with 1 hidden layer and 256 neurons.
The Neural ODEs are solved with Runge-Kutta method of order 5 of Dormand-Prince-Shampine with an adaptive step size.
For training, we chose the Adam optimiser with 0.0005 learning rate.
To evaluate the performance of the models in extrapolating short- and long-term sequences, 4 pairs of input and prediction sequences of different lengths were used: 7 days, 15 days, 30 days and an input of 365 days to predict the next 60 days.
Each combination of model and sequence length was run 3 times (N=3), to account for the randomness in the models during training. Each model trained for 50 (daily climate series data) and 100 (DJIA 30 stock time series) epochs.
The mean square error and standard deviation of the test data were determined.
§.§.§ Daily Climate Time Series Data
In this numerical experiment, we have investigated the ability of the models to extrapolate the weather forecast for Delhi, India using 4 parameters: Date, Mean Temperature, Humidity, Wind Speed, and Mean Air Pressure.
The training dataset used provides a training time series with one value per day between January 1, 2013 and January 1, 2017, resulting in 1462 values. The test data consists of 114 daily data points between January 1 2017 and the 24th of April 2017 <cit.>.
From this dataset we construct 4 training datasets each with a different sequence length N=7,15,30,365 selected consecutively, denoted by D_N=7,D_N=15,D_N=30,D_N=365.
Three models were trained, namely Latent ODE-RNN, Latent ODE-LSTM and Latent ODE-LSTM+GC, and their performance was evaluated by computing the mean of MSE and its standard deviation for the test set, Table <ref>.
Latent ODE-LSTM showed a lower MSE in all experiments with different sequence lengths (seen/predict).
In general, the models show similar performance.
§.§.§ DJIA 30 Stock Time Series
In this experiment, we evaluated the performance of the models in extrapolating stock market data. The dataset used is irregularly sampled and yields 1 parameter vector per day, between January 3, 2006 and December 29, 2017, with 6 parameters: date, price of the stock when the stock market opened, highest and lowest price reached on that day, number of shares traded (discarded in our experiments) and the stock's ticker name <cit.>. Although data from 29 DJIA companies are available, only one was used. In total, 3019 data points are available to train and test the models, and a split of 75/25 was used.
From this dataset we construct 4 training datasets each with a different sequence length N=7,15,30,365 selected consecutively, denoted by D_N=7,D_N=15,D_N=30,D_N=365.
Three models were trained, namely Latent ODE-RNN, Latent ODE-LSTM and Latent ODE-LSTM+GC, and their performance was evaluated by computing the mean MSE and its standard deviation for the test set, Table <ref>.
From Table <ref>, we can see that Latent ODE-RNN is slightly better in all experiments, except for the model trained with D_N=365. As expected, the Latent ODE-LSTM models show better results for training with D_N=365, since the LSTM architecture is devoted to long-term sequences.
The models clearly show the difficulty in gradually predicting longer sequences, ranging from an MSE on the order of 10^-2 for a short-term prediction of 7 days to 10^-1 for a longer-term prediction of 60 days.
§ CONCLUSIONS
In this paper, to overcome the vanishing and exploding gradients during training of Latent ODE-RNN, we proposed Latent ODE-LSTM. This architecture is a Variational Autoencoder with an ODE-LSTM encoder, in which the state transitions between LSTM cells are given by a Neural ODE, and a Neural ODE decoder.
By using an LSTM in the encoder, our architecture is more flexible and better retains the information carried by past values.
We proved that the Latent ODE-LSTM network mitigates the vanishing gradient problem.
However, Latent ODE-LSTM does not solve the exploding gradient problem, for which a proof was also derived.
To limit the growth of the gradients, the norm gradient clipping strategy was embedded in the model.
This strategy provides explicit control over the gradients by rescaling them when their norm is greater than a predefined threshold.
To evaluate the performance of the Latent ODE-LSTM models, we performed two types of evaluation, qualitative and quantitative, using a Latent ODE-RNN baseline.
The qualitative approach was to visually test the reconstruction and extrapolation of the dynamics of bidirectional clockwise and counterclockwise spirals using synthetic datasets with irregularly sampled time points and gradually sparser data.
The results show that Latent ODE-LSTM has the best performance in reconstruction and that sparser data does not affect performance. Latent ODE-LSTM + GC has the best performance in extrapolation backward, t< 0 and forward, t > 0, in time.
A quantitative evaluation was performed by considering two real-life time series, one sampled regularly and one sampled irregularly. The data of these datasets were split into smaller sequences to test both short- and longer-term predictions.
The numerical experiments showed that the models have similar performance. Contrary to the Latent ODE-RNN model, the Latent ODE-LSTM model mitigates the vanishing gradient problem. Furthermore, Latent ODE-LSTM + GC does not suffer from the exploding gradient problem.
There are several directions that can be taken in the future.
The choice of the numerical solver of the Neural ODE encoder and decoder has proved to be challenging. In the future, it would be important to study the correlation between the hyperparameters of the architecture (number of layers, neurons, activation functions) and the characteristics of the dataset, as well as the performance of the ODE solver and the best choice of the numerical method.
§ ACKNOWLEDGEMENTS
The authors acknowledge the funding by Fundação para a Ciência e Tecnologia (Portuguese Foundation for Science
and Technology) through CMAT projects UIDB/00013/2020 and UIDP/00013/2020.
C. Coelho would like to thank FCT for the funding through the scholarship with reference 2021.05201.BD.
elmanFindingStructureTime1990
J. L. Elman, “Finding Structure in Time,” Cognitive Science,
vol. 14, no. 2, pp. 179–211, 1990.
chenNeuralOrdinaryDifferential2019
R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud, “Neural ordinary
differential equations,” vol. 31, 2018.
rubanovaLatentOrdinaryDifferential2019
Y. Rubanova, R. T. Q. Chen, and D. K. Duvenaud, “Latent Ordinary
Differential Equations for Irregularly-Sampled Time Series,” in Advances in Neural Information Processing Systems, vol. 32, Curran
Associates, Inc., 2019.
kingmaIntroductionVariationalAutoencoders2019
D. P. Kingma, M. Welling, et al., “An introduction to variational
autoencoders,” Foundations and Trends® in Machine
Learning, vol. 12, no. 4, pp. 307–392, 2019.
bengioLearningLongtermDependencies1994
Y. Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies with
gradient descent is difficult,” IEEE transactions on neural networks /
a publication of the IEEE Neural Networks Council, vol. 5, pp. 157–66, Feb.
1994.
hochreiterLongShorttermMemory1997
S. Hochreiter and J. Schmidhuber, “Long Short-term Memory,” Neural
computation, vol. 9, pp. 1735–80, Dec. 1997.
lechnerLearningLongTermDependencies2020
M. Lechner and R. Hasani, “Learning Long-Term Dependencies in
Irregularly-Sampled Time Series,” Dec. 2020.
sutskeverSequenceSequenceLearning2014
I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with
neural networks,” Advances in neural information processing systems,
vol. 27, 2014.
pascanuDifficultyTrainingRecurrent2013
R. Pascanu, T. Mikolov, and Y. Bengio, “On the difficulty of training
recurrent neural networks,” in International conference on machine
learning, pp. 1310–1318, Pmlr, 2013.
lawrenceUsingNeuralNetworks1997
R. Lawrence, “Using Neural Networks to Forecast Stock Market Prices,”
University of Manitoba, vol. 333, pp. 2006–2013, 1997.
vaizHybridModelForecast2016
J. S. Vaiz and M. Ramaswami, “A Hybrid Model to Forecast Stock Trend
Using Support Vector Machine and Neural Networks,” International
Journal of Engineering Research and Development, vol. 13, pp. 52–59, 2016.
waibelPhonemeRecognitionUsing1989
A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. Lang, “Phoneme
recognition using time-delay neural networks,” Acoustics, Speech and
Signal Processing, IEEE Transactions on, vol. 37, pp. 328–339, Apr. 1989.
pontryaginMathematicalTheoryOptimal2018
L. Pontryagin, Mathematical Theory of Optimal Processes.
Routledge, first ed., May 2018.
Goodfellow-et-al-2016
I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning.
MIT Press, 2016.
<http://www.deeplearningbook.org>.
kingmaAutoEncodingVariationalBayes2014
D. P. Kingma and M. Welling, “Auto-Encoding Variational Bayes,” May 2014.
kaggle
G. LLC, “Kaggle.” Online resource: https://www.kaggle.com, last accessed on 2022/09/26, 2010.
climate
S. V. Rao, “Daily climate time series data.” Online resource: https://www.kaggle.com/datasets/sumanthvrao/daily-climate-time-series-data/code?datasetId=312121&sortBy=voteCount&select=DailyDelhiClimateTrain.csv, last accessed on 2022/09/26, 2019.
djia
szrlee, “DJIA 30 stock time series.” Online resource: https://www.kaggle.com/datasets/szrlee/stock-time-series-20050101-to-20171231?select=AAPL_2006-01-01_to_2018-01-01.csv, last accessed on 2022/09/26, 2018.
§ VANISHING AND EXPLODING GRADIENTS PROBLEM IN RNN-BASED NETWORKS
In this section we show that Latent ODE-RNNs suffer from the vanishing and exploding gradients problem. To prove this, we first show that RNNs suffer from the vanishing and exploding problem <cit.>, and then that ODE-RNNs, Latent ODEs and Latent ODE-RNNs inherit this problem.
Let l(ŷ_i) be the loss function value at time step i, then backpropagation through time (BPTT) is used to optimise the parameters θ of an RNN, with the total loss given by the sum of losses of all time steps,
ℒ_θ = 1/N∑_i=1^N l(ŷ_i).
To find the parameters θ that minimise the loss ℒ_θ, the gradients are then calculated as in Equation <ref>,
∂ℒ/∂θ = 1/N∑_i=1^N ∂ l(ŷ_i)/∂θ.
The term ∂ l(ŷ_i)/∂θ is a sum of products that gives the gradients at time step i (considering all the contributions of the previous time steps k, where k<i) (<ref>),
∂ l(ŷ_i)/∂θ= ∑_k=1^i ( ∂ l(ŷ_i)/∂ŷ_i ∂ŷ_i/∂ h_i ∂ h_i/∂ h_k ∂^+ h_k/∂θ),
where ∂^+ h_k∂θ is the immediate partial derivative of h_k with respect to the parameters θ <cit.>.
§.§ RNNs and the vanishing/exploding gradient problem
As seen in Section <ref>, RNNs are neural networks with feedback loops that, when unfolded, can be viewed as a Feed-forward Neural Network with N layers, where each layer is an RNN cell.
Since the layers are copies of the same RNN cell, the parameters θ=(w_input,w_feedback,b, w_output, b_output) are shared across the network depth, so that the computation of the hidden states h_i, i∈(1 … N) consists of multiple instances of the same values <cit.>.
The term ∂ h_i∂ h_k is a chain of products of all the hidden states that contribute to the hidden state at time step i (see also the RNN update (<ref>)),
∂ h_i∂ h_k = ( ∏_i≥ j>k∂ h_j∂ h_j-1) = ( ∏_i≥ j>kw_feedback diag(σ'( w_feedback h_j-1+ w_input x_j + b)) ).
The chain of products in(<ref>) can be rewritten as a power of i-k terms, where diag() is the transformation of the vector of derivatives of the activation function σ into a diagonal matrix <cit.>,
( w_feedback diag(σ'(w_feedback h_j-1 + w_input x_j + b )))^i-k
with
[ ‖∂ h_j∂ h_j-1‖=‖w_feedback diag(σ'( w_feedback h_j-1+ w_input x_j + b )) ‖≤; ‖w_feedback‖‖ diag(σ'( w_feedback h_j-1 + w_input x_j+ b )) ‖. ]
Let σ be any activation function such that σ' is its derivative, bounded by the absolute value γ and therefore ‖ diag(σ'(.)‖≤γ. Thus ‖ diag(σ'( w_feedback h_j-1 + w_input x_j + b)) ‖≤γ <cit.>.
§.§.§ Vanishing gradient
The vanishing gradient problem occurs when the norm of gradients decreases exponentially and tends to zero during the training of a NN. This prevents the optimisation process from learning the best parameters θ that minimise the loss function <cit.>.
Following the proof in <cit.>, we take λ to be the absolute value of the largest eigenvalue of the feedback matrix, w_feedback, and for all j, it is sufficient to have
∀ j, ‖∂ h_j/∂ h_j-1‖ < (1/γ) γ < 1.
For all j, taking η∈ℝ such that η < 1, it follows that ‖∂ h_j/∂ h_j-1‖≤η < 1. By induction over j we obtain,
∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂ h_i( ∏_i ≥ j ≥ k∂ h_j∂ h_j-1) ≤∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂ h_iη^i-k.
As η < 1 it follows from <ref> that in the presence of long-term sequences of data (for which i-k is large) the term η^i-k decreases exponentially fast to zero leading to the vanishing of the gradients <cit.>.
§.§.§ Exploding gradient
The problem of exploding gradients arises when the norm of gradients increases exponentially towards infinity <cit.>. This makes the optimisation steps very large and it is impossible to reach an optimal point.
Inverting the vanishing gradient proof, it is necessary to have
∀ j, ‖∂ h_j/∂ h_j-1‖ > (1/γ) γ > 1.
Taking η∈ℝ, for all j, such that η>1, it follows that ‖∂ h_j/∂ h_j-1‖≥η > 1. By induction over j we obtain,
∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂ h_i( ∏_i ≥ j ≥ k∂ h_j∂ h_j-1) ≥∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂ h_iη^i-k.
As η > 1 it follows from <ref> that in the presence of long-term sequences of data (for which i-k is large) the term η^i-k increases exponentially fast leading to the explosion of the gradients <cit.>.
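A small numerical illustration of the two regimes: linearising around h=0 (where tanh'(0)=1, i.e. γ=1) the product of i-k Jacobians reduces to w_feedback^(i-k), so its norm vanishes or explodes exponentially depending on the largest eigenvalue λ of w_feedback; the dimensions and values below are arbitrary.

import torch

n, steps = 16, 60
for lam in (0.5, 2.0):                    # lambda*gamma < 1  versus  lambda*gamma > 1
    w_feedback = lam * torch.eye(n)       # feedback matrix with known largest eigenvalue
    jac = torch.eye(n)
    for _ in range(steps):
        d_sigma = torch.eye(n)            # diag(tanh'(0)) = I, i.e. gamma = 1
        jac = w_feedback @ d_sigma @ jac  # one factor of dh_j/dh_{j-1}
    print(f"lambda = {lam}: ||product of Jacobians|| = {torch.linalg.norm(jac).item():.3e}")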
§.§ ODE-RNNs and the vanishing/exploding gradient problem
As seen in Section <ref>, the difference between RNN and ODE-RNN is that to calculate the hidden state h_i, an intermediate state h'_i is computed by solving an ODE (when solving the NN adjusted ODE f_θ) using the previous state, h_i-1, (given by an RNN) as initial condition (see Figure <ref>),
h_i = σ(w_feedbackODESolve(f_θ,h_i-1,(t_i-1,t_i))_h'_i + w_input x_i+b).
To find the parameters θ in the ODE-RNN that minimise the loss ℒ_θ, the term ∂ h_i∂ h_k in (<ref>) must take into account the intermediate state h'_i. Converting ∂ h'_j∂ h_j-1 into a diagonal matrix, ∂ h_i∂ h_k is given by:
[ ∂ h_i∂ h_k = ( ∏_i≥ j>k∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1) ); = ( ∏_i≥ j>kw_feedback diag(σ'( w_feedback h_j-1 + w_input x_j + b)) diag ( ∂ h'_j∂ h_j-1) ). ]
As in RNNs, the term ∂ h_i∂ h_k in ODE-RNNs is a chain of products, that now also has the partial derivative of the ODE solver, ∂ h'_j∂ h_j-1. Thus, (<ref>) can be rewritten as a power of i-k terms,
( w_feedback diag(σ'(w_feedback h_j-1 + w_input x_j + b)) diag ( ∂ h'_j∂ h_j-1) )^i-k
with
[ ‖∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1) ‖=‖w_feedback diag(σ'(w_feedback h_j-1 + w_input x_j + b)) diag (∂ h'_j∂ h_i-j) ‖≤; ‖w_feedback diag(σ'(w_feedback h_j-1 + w_input x_j + b)) ‖‖ diag ( ∂ h'_j∂ h_j-1) ‖ ]
where the first term, ‖w_feedback diag(σ'( w_feedback h_j-1 + w_input x_j + b)) ‖ is the same as in (<ref>), while the second term ‖ diag ( ∂ h'_j∂ h_j-1) ‖ appears due to the ODE solver used to compute the hidden states h'_j in the RNN update (<ref>).
§.§.§ Vanishing gradient
Consider λ and λ' as the absolute values of the largest eigenvalue of the feedback matrix, w_feedback, and the Jacobian matrix diag(∂ h'_j∂ h_j-1), respectively.
Following the vanishing gradient proof done for RNNs, <ref>, it is sufficient to have λλ' < 1γ for
∀ j, ‖∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1) ‖ < 1γγ < 1.
For all j, taking η∈ℝ such that η<1 comes ‖∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1) ‖≤η < 1. By induction over j we obtain
∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂ h_i( ∏_i ≥ j ≥ k∂ h_j∂ h'_j∂ h'_j∂ h_j-1) ≤∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂ h_iη^i-k.
As η < 1 it follows from <ref> that in the presence of long-term sequences of data (for which i-k is large) the product decreases exponentially fast to zero leading to the vanishing of the gradients.
§.§.§ Exploding gradient
Inverting the vanishing gradient proof, it is necessary that λλ' > 1γ for
∀ j, ‖∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1) ‖ > 1γγ > 1.
For all j, taking η∈ℝ so that η>1 comes ‖∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1) ‖≥η > 1. By induction over j we obtain
∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂ h_i( ∏_i ≥ j ≥ k∂ h_j∂ h'_j∂ h'_j∂ h_j-1) ≥∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂ h_iη^i-k.
As η > 1 it follows from <ref> that in the presence of long-term sequences of data (for which i-k is large) the product increases exponentially fast leading to the explosion of the gradients.
§.§ Latent ODEs and the vanishing/exploding gradient problem
Let l(ŷ_i) to be the contribution of the loss function at time step i to the total loss ℒ_Θ, where Θ=(ϕ, α) are the encoder and decoder parameters, respectively.
To find the parameters Θ that minimise the total loss, backpropagation is done:
∂ l(ŷ_i)∂Θ = ∑_k=1^i ∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g∂ g∂ h_i + ∂ z_0∂σ∂σ∂ g∂ g∂ h_i) ∂ h_i∂ h_k∂^+ h_k ∂Θ.
§.§.§ Vanishing gradient
Due to the RNN encoder, the term ∂ h_i∂ h_k has the same form as (<ref>), so, following the proof <ref>, for all j, taking η∈ℝ so that η<1 comes
∀ j, ‖∂ h_j∂ h_j-1‖≤η < 1. By induction over j we obtain,
∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g∂ g∂ h_i + ∂ z_0∂σ∂σ∂ g∂ g∂ h_i) ( ∏_i ≥ j ≥ k∂ h_j∂ h_j-1) ≤
≤∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g∂ g∂ h_i + ∂ z_0∂σ∂σ∂ g∂ g∂ h_i)η^i-k.
As η < 1 it follows from <ref> that in the presence of long-term sequences of data (for which i-k is large) the term η^i-k decreases exponentially fast to zero leading to the vanishing of the gradients.
§.§.§ Exploding gradient
Similarly, for all j, taking η∈ℝ so that η>1 comes
∀ j, ‖∂ h_j∂ h_j-1‖≥η > 1. By induction over j we obtain,
∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g∂ g∂ h_i + ∂ z_0∂σ∂σ∂ g∂ g∂ h_i) ( ∏_i ≥ j ≥ k∂ h_j∂ h_j-1) ≥
≥∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g∂ g∂ h_i + ∂ z_0∂σ∂σ∂ g∂ g∂ h_i)η^i-k.
As η > 1 it follows from <ref> that in the presence of long-term sequences of data (for which i-k is large) the term η^i-k increases exponentially fast leading to the explosion of the gradients.
Thus, Latent ODEs suffer from the vanishing and exploding gradient problem due to the RNN used in the encoder.
§.§ Latent ODE-RNNs and the vanishing/exploding gradient problem
Latent ODE-RNNs are Latent ODEs that have an ODE-RNN encoder instead of a RNN.
The computation of the gradient of the loss at time step i, ∂ l(ŷ_i)∂Θ, is given by
∂ l(ŷ_i)∂Θ = ∑_k=1^i ∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i + ∂ z_0∂σ∂σ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i) ∂ h_i ∂ h_k∂^+ h_k∂Θ.
§.§.§ Vanishing gradient
Due to the ODE-RNN encoder, the term ∂ h_i∂ h_k has the same form of (<ref>) thus, it is sufficient to have λλ' < 1γ for
∀ j, ‖∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1) ‖ < 1γγ < 1
.
For all j, taking η∈ℝ so that η<1 comes ‖∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1) ‖≤η < 1. By induction over j we obtain
∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i + ∂ z_0∂σ∂σ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i) ( ∏_i ≥ j ≥ k∂ h_j∂ h'_j∂ h'_j∂ h_j-1) ≤
≤∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i + ∂ z_0∂σ∂σ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i)η^i-k.
As η < 1 it follows from <ref> that in the presence of long-term sequences of data (for which i-k is large) the term η^i-k decreases exponentially fast to zero leading to the vanishing of the gradients.
§.§.§ Exploding gradient
Inverting the vanishing gradients proof, it is necessary to have λλ' > 1γ for
∀ j, ‖∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1) ‖ > 1γγ > 1
.
For all j, taking η∈ℝ such that η>1 comes ‖∂ h_j∂ h'_j diag(∂ h'_j∂ h_j-1) ‖≥η > 1. By induction over j we obtain
∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i + ∂ z_0∂σ∂σ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i) ( ∏_i ≥ j ≥ k∂ h_j∂ h'_j∂ h'_j∂ h_j-1) ≥
≥∂ l(ŷ_i)∂ŷ_i∂ŷ_i∂O_NN∂O_NN∂ z_t_i∂ z_t_i∂ z_0(∂ z_0∂μ∂μ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i + ∂ z_0∂σ∂σ∂ g_NN∂ g_NN∂ z'_0∂ z'_0∂ h_i)η^i-k.
As η > 1 it follows from <ref> that in the presence of long-term sequences of data (for which i-k is large) the term η^i-k increases exponentially fast leading to the explosion of the gradients.
arXiv:2307.05678v1 (published 11 July 2023): "A fermion-parity qubit in a proximitized double quantum dot"
Max Geier, Rubén Seoane Souto, Jens Schulenborg, Serwan Asaad, Martin Leijnse, and Karsten Flensberg
Primary category: cond-mat.mes-hall; categories: cond-mat.mes-hall, quant-ph
Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark
Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark
Division of Solid State Physics and NanoLund, Lund University, 22100 Lund, Sweden
Departamento de Física Teórica de la Materia Condensada, Condensed Matter Physics Center (IFIMAC) and Instituto Nicolás Cabrera, Universidad Autónoma de Madrid, 28049 Madrid, Spain
Instituto de Ciencia de Materiales de Madrid (ICMM), Consejo Superior de Investigaciones Científicas (CSIC),
Sor Juana Inés de la Cruz 3, 28049 Madrid, Spain.
Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark
Department of Microtechnology and Nanoscience (MC2), Chalmers University of Technology, S-412 96 Göteborg, Sweden
Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark
Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark
Division of Solid State Physics and NanoLund, Lund University, 22100 Lund, Sweden
Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark
NNF Quantum Computing Programme, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark
Bound states in quantum dots coupled to superconductors can be in a coherent superposition of states with different electron number but with the same number parity.
Electrostatic gating can tune this superposition to a sweet spot,
where the quantum dot has the same mean electric charge independent of its electron-number parity.
Here, we propose to encode quantum information in the local fermion parity of two tunnel-coupled quantum dots embedded in a Josephson junction.
At the sweet spot, the qubit states have zero charge dipole moment. This protects the qubit from dephasing due to electric field fluctuations.
Depending on the strength of the tunnel coupling between the dots, the system is further protected towards either relaxation (weak tunneling) or dephasing (strong tunneling) from noise coupling separately to each quantum dot.
We describe initialization and readout as well as single-qubit and two-qubit gates by pulsing gate voltages.
A fermion-parity qubit in a proximitized double quantum dot
Karsten Flensberg
August 12, 2023
===========================================================
§ INTRODUCTION
Quantum dots coupled to superconductors host bound states with energies below the superconducting gap. They are known as Yu-Shiba-Rusinov states <cit.> for large charging energy or Andreev bound states <cit.> with small charging energy compared to the superconducting gap. These bound states are superpositions with different particle number due to so-called Andreev tunnel events where pairs of electrons in the quantum dot are transferred as a Cooper pair in the superconductor. This process thus preserves the total fermion parity of the system. In recent years, hybrid superconductor-semiconductor structures have proven to be a reliable platform to realize Josephson junctions, qubits, and quantum dot systems whose properties depend on the occupation of the in-gap bound states. These systems have been extensively studied experimentally<cit.>, and theoretically <cit.>.
Electrostatic gating can control the mean electric charge of the subgap states <cit.>. The quantum dot can be tuned to a “sweet spot” where it has the same mean electric charge for both ground states with an even or odd electron number parity. As a consequence, both fermion parity sectors have the same response to small electric fields.
Here, we propose to leverage the protection of the local fermion parity together with the tunability of the charge expectation value to define a qubit in a pair of electrostatically controlled quantum dots embedded in a superconducting loop, see sketch in Fig. <ref>(a). The two qubit states are encoded in the local fermion parity of the two dots: the state |L⟩ (|R⟩) is a product of the left (right) quantum dot hosting an odd number of fermions while the right (left) quantum dot hosts an even number, as depicted in Fig. <ref>(b). Both quantum dots are tuned to the sweet spot where even and odd fermion parity sectors have the same mean electric charge, causing insensitivity to small fluctuations of the electrostatic environment. The quantum dot with odd fermion parity has a spin 1/2 degree of freedom. When the coupling of the spin to the environment is negligible, the qubit can be operated with spin-degenerate levels. Otherwise, the spin can be polarized by an applied magnetic field.
The coherence properties of this system depend on the hybridization between the quantum dots. For weak hybridization, the qubit eigenstates are localized in |L⟩ and |R⟩. Depolarization of the qubit state requires a quasiparticle to tunnel between the quantum dots, which is suppressed for weak tunneling.
In this regime, dephasing due to noise coupling to individual quantum dots (such as local magnetic field or level energy fluctuations) are further suppressed. Instead, the qubit is sensitive to fluctuations in the tunneling strength.
When charging energy can be neglected, the qubit states can be described in terms of a single Bogoliubov quasiparticle shared between the quantum dots. This quasiparticle has zero electric charge when the quantum dots are tuned to the sweet spot. From this point of view, our proposal can be considered a “chargeless” variant of a charge qubit <cit.>, where the electron position encodes quantum information. In our parity qubit, the superconducting correlations strip of the electron's charge.
In the following Sec. <ref>, we define the system Hamiltonian and the regimes for qubit operations, as well as microwave controlled single- and two-qubit rotations. A detailed discussion on the effects of noise on the qubit is contained in Sec. <ref>. In Sec. <ref>, we discuss the relation of our proposal to other qubit realizations. We comment on the feasibility of our proposal in currently available material platforms in Sec. <ref>.
§ THE FERMION-PARITY QUBIT
In this section, we define the Hamiltonian describing the pair of quantum dots connected to superconductors (Sec. <ref>), identify the operating regimes (Sec. <ref>), and describe initialization and read-out (Sec. <ref>) as well as single- and two-qubit rotations (Sec. <ref> and <ref>).
§.§ System Hamiltonian
Figure <ref>(a) shows a sketch of the fermion-parity qubit setup with the relevant control parameters. The system Hamiltonian,
Ĥ=∑_νĤ_ν+Ĥ_T ,
is composed of terms Ĥ_ν, ν = L,R describing the two individual quantum dots and their tunnel coupling Ĥ_T. The terms Ĥ_ν describing the individual quantum dots read
Ĥ_ν=∑_σε_νn̂_σν+U_νn̂_↑νn̂_↓ν+Γ_ν( ĉ_↑νĉ_↓ν+ H.c.) ,
where n̂_σν=ĉ_σν^†ĉ_σν, with the annihilation operator ĉ_σν for an electron on the dot ν with spin σ = ↑, ↓. The level energy ε_ν can be controlled using electrostatic gates, U_ν is the Coulomb repulsion strength, and Γ_ν describes the proximity-induced superconducting correlations in the quantum dots, assuming Δ≫Γ_ν, ε_ν, U_ν [see App. <ref> for a discussion on the validity of the approximation].
The eigenstates of the Ĥ_ν with even local fermion parity are superpositions of zero |0⟩_ν and two excess electrons of opposite spin |2⟩_ν = ĉ_ν↑^†ĉ_ν↓^† |0 ⟩_ν. The odd local fermion parity subspace contains the states where a single electron of spin σ occupies the quantum dot, |σ⟩_ν = ĉ_νσ^† |0 ⟩_ν.
The tunnel coupling between the dots is given by
Ĥ_T=τ∑_σ e^iϕ/2ĉ^†_σ Rĉ_σ L + H.c. ,
where τ is the tunneling amplitude and ϕ the superconducting phase difference.
The Hamiltonian is written in a gauge where the pairing amplitudes Γ_ν are real and positive while the superconducting phase difference is included in the tunneling term. For now, spin-orbit coupling and Zeeman field are neglected as these terms are not required for operating the fermion-parity qubit. These effects are included in the discussion on dephasing and depolarization due to parameter fluctuations in Sec. <ref>.
Single-particle picture in the absence of charging energy.—
For U_L=U_R = 0, the system can be described in terms of Bogoliubov quasiparticles γ̂_σν obtained as the eigenstates of each quantum dot Hamiltonian Eq. (<ref>) after a Bogoliubov transformation.
At the sweet spot, the Bogoliubov quasiparticles γ̂_σν = u^* ĉ_σν + v ĉ^†_σ̅ν are equal superpositions |u|^2 = |v|^2 of electrons and holes of opposite spin σ̅ with mean electric charge -|e| (|u|^2 - |v|^2) = 0. The qubit basis states are constructed with the Bogoliubov quasiparticles as excitations |ν⟩ = γ̂^†_σν |BCS⟩ from the ground state |BCS⟩ = γ̂_↓ Lγ̂_↑ Lγ̂_↓ Rγ̂_↑ R |0⟩. Including tunneling between the quantum dots, the eigenstates of the full Hamiltonian Eq. (<ref>) are superpositions of γ̂_σ L and γ̂_σ R. In this picture, the qubit is defined by the position of a single, chargeless Bogoliubov quasiparticle shared between the two quantum dots.
§.§ Sweet spot
Here, we describe the sweet spot where the qubit is optimally operated.
Level energies ε_ν.—
The sweet spot, at which both fermion-parity sectors of a quantum dot have the same mean charge, is reached by setting the level energy to ε_ν=-U_ν/2.
At this point, the quantum dot eigenstates with even fermion parity are symmetric (λ_ν =) and antisymmetric (λ_ν =) superpositions
|⟩_ν = 1/√(2)(|0⟩_ν± | 2⟩_ν) with energies ±Γ_ν.
Thus the ground state of individual quantum dots is |⟩_ν.
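As a minimal numerical check of these statements (with illustrative parameter values, and with the sign convention for |2⟩_ν that makes the even-sector off-diagonal element +Γ_ν), one can diagonalize the even-parity block of Ĥ_ν in the basis {|0⟩_ν, |2⟩_ν} and compare the mean charge of the two parity sectors at ε_ν = -U_ν/2:

import numpy as np

U, Gamma = 2.0, 0.8                      # illustrative values of U_nu and Gamma_nu
eps = -U / 2                             # sweet spot epsilon_nu = -U_nu/2
# Even-parity block of H_nu in the basis {|0>, |2>}: diagonal (0, 2*eps + U), off-diagonal Gamma.
H_even = np.array([[0.0, Gamma], [Gamma, 2 * eps + U]])
evals, evecs = np.linalg.eigh(H_even)
ground = evecs[:, 0]                     # lowest even-parity state, here (|0> - |2>)/sqrt(2)
charge_even = 2 * abs(ground[1]) ** 2    # mean electron number 2|<2|psi>|^2
print("even-sector energies:", evals)            # -> [-Gamma, +Gamma]
print("mean charge (even ground state):", charge_even, "; odd sector: 1")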
Superconducting phase difference ϕ.—
The phase difference ϕ determines the tunneling between the quantum dots. At ϕ = 0, tunneling of an electron switches the symmetry of the even-parity bound state, due to a fermion sign acquired during tunneling. At ϕ = π, this sign is cancelled and the symmetry of the even-parity bound state is preserved, thus coupling the two qubit states, which have λ_R = λ_L =. We set ϕ = π such that tunneling acts within the qubit subspace. Furthermore, at ϕ = π, the spectrum is first-order insensitive to the fluctuations in ϕ (see Eq. (<ref>)).
Tunneling strength τ.—
Tunneling between the quantum dots hybridizes the bound states in the two dots. Without tunnel coupling, the qubit eigenstates are the product states |L⟩ = |σ⟩_L ⊗ |⟩_R, |R⟩ = |⟩_L ⊗ |σ⟩_R with energies -U_L/R/2 - Γ_R/L (at the sweet spots). With finite tunneling strength and ϕ = π, the qubit eigenstates are |⟩ = sinη/2 |L⟩ + i cosη/2 |R⟩, |⟩ = cosη/2 |L⟩ - i sinη/2 |R⟩ with angle
tanη = 2 τ/(U_R - U_L)/2 -(Γ_R - Γ_L),
and energies E_ρ = -(U_R + U_L)/4 - (Γ_R + Γ_L)/2 + ρτ/sinη, where ρ = labels the qubit states |⟩.
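These expressions can be checked with a short numerical sketch of the two-level subspace (illustrative parameter values; not the authors' code):

```python
import numpy as np

# Sketch of the qubit subspace at the sweet spot (eps_nu = -U_nu/2, phi = pi):
# |L> has energy -U_L/2 - Gamma_R, |R> has energy -U_R/2 - Gamma_L, and the
# two states are tunnel-coupled with strength tau.  All numbers illustrative.
U_L, U_R = 1.0, 1.4
G_L, G_R = 0.40, 0.50
tau = 0.05

H = np.array([[-U_L / 2 - G_R, tau],
              [tau, -U_R / 2 - G_L]])
E_minus, E_plus = np.linalg.eigvalsh(H)

# Closed-form expressions quoted in the text
eta = np.arctan2(2 * tau, (U_R - U_L) / 2 - (G_R - G_L))
E_rho = lambda rho: -(U_R + U_L) / 4 - (G_R + G_L) / 2 + rho * tau / np.sin(eta)
print(E_minus, E_rho(-1))   # should agree
print(E_plus, E_rho(+1))    # should agree
```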
To summarize this section, setting both quantum dots to the sweet spot leads to a first-order insensitivity of the qubit frequency to the dot potentials. Qubit operation is optimal at phase bias ϕ = π.
Depending on whether the qubit coherence is limited by relaxation or dephasing due to noise coupling, the system can be operated in two regimes set by the tunneling strength. For weak tunneling (|η| ≪ 1) the system is protected against relaxation, but sensitive to dephasing from fluctuations of the energy difference between the even and odd states of the individual dots. In contrast, for strong tunneling (|η| ≃π/2) the sensitivity to these fluctuations is suppressed by a factor |tanη| ≫ 1 (but the qubit is no longer protected against relaxation). Analytic results for the decoherence rates are presented in Sec. <ref>.
The protection from dephasing follows from the dispersion of the qubit spectrum <cit.>. Figure <ref>(c) and (d) show the many-body spectra as a function of the level energy ε_R for the two regimes. Analogous results hold for ε_L. For weak tunneling [|η| ≪ 1, Figure <ref>(c)] the slope of the energy of the two states |L⟩ and |R⟩ as a function of ε_R aligns at the sweet spot ε_R = - U_R/2. At this point, the qubit frequency (ħω_0, given by the energy difference of the two lowest states) is insensitive to first order in ε_R. For strong tunneling [|η| ≃π/2, Figure <ref>(d)] the fluctuations of qubit frequency are further suppressed by a factor |(U_R-U_L)/2 - (Γ_R - Γ_L)|/τ.
§.§ Initialization and read-out
The operation of our qubit proposal is restricted to total odd fermion parity in the two quantum dots. Changing the total parity in the pair of quantum dots requires a quasiparticle from the superconducting leads to enter the quantum dots, similar to a quasiparticle poisoning event <cit.>. It can be expected that there are (almost) no quasiparticles present in the superconducting leads for temperature much below their superconducting gap. However, recent experiments <cit.> have shown that superconductors exhibit a density of “hot” quasiparticles at high energy that persists for small temperatures and dominates over thermally excited quasiparticles below T ≈ 35mK <cit.> or T ≈ 150mK <cit.>. Below, we discuss two ways to control fermion-parity changing events to initialize the fermion parity of the individual quantum dots.
Initialization by detuning ε_ν.—
Depending on the system parameters, it may be energetically favorable for a quasiparticle from the environment to enter a quantum dot and, thereby, flip its fermion parity. For large charging energy U > Γ_ν, the ground state of the quantum dot switches from odd fermion parity around the sweet spot (in our model: (ε_ν - U_ν/2)^2 ≤Γ_ν^2) to even at larger level energy ε_ν <cit.>. The energy released by a quasiparticle entering the quantum dot depends on the parity of the quantum dot, resulting in different quasiparticle trapping rates <cit.>. This transition has been applied experimentally to initialize the fermion parity in a quantum dot <cit.>. A follow-up work, Ref. Pita_arXiv2022, applied this procedure to initialize a quantum dot in the odd-parity sector, with measured even-to-odd and odd-to-even switching rates of 17 kHz and 0.36 kHz, respectively. If the fermion parity lifetimes cannot be tuned to differ significantly, one can alternatively monitor the fermion parity on the quantum dots in real time.
Initialization by microwave drive.—
Alternatively, the local fermion parity can be polarized in the odd parity state by a microwave pulse on a local gate that supplies the energy to split a Cooper pair from the condensate into one electron in the dot and one in the continuum of the superconductor <cit.>. Similarly, the local fermion parity can be polarized in the even state by a microwave pulse that supplies the energy to excite an electron from the dot into the continuum of the superconductor.
Comment on spin initialization.—
In the absence of Zeeman fields, the above initialization procedures do not favor a particular spin direction in the quantum dot. In this case, the spin of the quasiparticle is irrelevant and plays no role during operations.
Read-out by charge measurements.—
The qubit state can be read out by converting the parity information to charge. This is done by detuning the level energy of one of the quantum dots away from the sweet spot (to |ε_ν + U_ν/2| ≳Γ_ν), resulting in a different charge in the dots for the even and odd parity sectors. Once tuned away, the state can be read by conventional charge-detection methods <cit.>.
§.§ Single-qubit gates
Single-qubit gates can be achieved by driving either the tunneling amplitude between the dots τ(t) = τ + δτcos(Ω t + φ_0) or the level energy of one of the quantum dots ε_ν(t) = ε_ν + δε_νcos(Ω t + φ_0) at the resonance frequency Ω = ω_0 = (E_ - E_)/ħ [Fig. <ref>(a)]. The two computational qubit eigenstates |⟩ and |⟩ are given by the two lowest-energy eigenstates in the global odd-parity sector of Ĥ(t) [defined around Eq. (<ref>)].
The amplitudes δτ, δε determine the Rabi frequency, and the phase φ_0 sets the axis of rotation within the X-Y-plane <cit.>.
We obtain the response of the qubit states to the driving protocol from the exact unitary Schrödinger evolution with respect to Ĥ(t) [Eq. (<ref>)].
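A minimal sketch of such a time evolution, truncated to the two qubit levels (the paper evolves the full many-body Hamiltonian; ħ = 1 and all numbers below are illustrative), is:

```python
import numpy as np

# Two-level sketch of the driving protocol (the paper evolves the full
# many-body Hamiltonian; here only the qubit subspace is kept, hbar = 1):
# H(t) = -(w0/2) sigma_z + d0 * cos(w0*t + phi0) * sigma_x
w0 = 1.0                      # qubit splitting
d0, phi0 = 0.02, 0.0          # weak resonant drive; phi0 sets the rotation axis
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

dt, n_steps = 0.01, 40000
psi = np.array([1.0, 0.0], dtype=complex)   # ground qubit state
pop_excited = []
for n in range(n_steps):
    t = n * dt
    H = -0.5 * w0 * sz + d0 * np.cos(w0 * t + phi0) * sx
    evals, evecs = np.linalg.eigh(H)         # small-step propagator exp(-i H dt)
    U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T
    psi = U @ psi
    pop_excited.append(abs(psi[1]) ** 2)

# pop_excited shows Rabi oscillations between 0 and ~1 at a Rabi frequency ~ d0,
# together with the small Bloch-Siegert modulation mentioned in the text.
print(max(pop_excited))
```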
Weak tunneling regime, |η| ≪ 1.—
In the weak tunneling regime, the qubit eigenbasis |⟩ is approximately given by the product states |⟩≈ |L⟩ and |⟩≈ |R⟩, up to perturbative corrections in |tanη| ≪ 1. In this basis, the tunnel coupling Ĥ_T at ϕ = π is off-diagonal, Ĥ_T|L⟩∝ |R⟩. Thus, a drive in the tunnel strength leads to qubit rotations.
The qubit rotation by driving τ is demonstrated in Fig. <ref>(b). There, we furthermore include a ramp of the level energy ε_R at the beginning and end of the protocol to demonstrate a read-out scenario where adiabatic detuning of ε_R changes the mean charge of the even-parity state detectable by a charge measurement.
The small modulation on top of the Rabi oscillations is due to the Bloch-Siegert effect <cit.> occurring for sizable ratios δτ/Ω between Rabi and driving frequency. These oscillations are often neglected when employing a rotating-wave approximation for driven quantum systems, but are included in our exact time-evolution.
Due to the finite energy difference between the two qubit states, any superposition between the qubit states Larmor precesses at frequency ω_0 around the Z-axis [Fig. <ref>(a)].
Changing the phase ϕ_0 of the pulse relative to the Larmor precession of the qubit changes the axis of rotation. This is demonstrated in Figs. <ref>(c,d): In Fig. <ref>(c), two X_π/2 pulses are applied in sequence separated by a waiting time t_ w. The waiting time is chosen as ω_0 t_ w = 2π n such that the rotation Z_ω_0 t_ w = 1. The result is X_π/2 Z_ω_0 t_ w X_π/2 = X_π. In Fig. <ref>(d), the phase of the second pulse is shifted by ϕ_0 = π such that its rotation in the opposite direction X_-π/2 brings the qubit back into its initial state.
Strong tunneling regime, |η| ≈π/2.—
For strong tunneling, the eigenstates |⟩ are approximately equal superpositions of the product states |L⟩ and |R⟩, up to perturbative corrections in η. In this case, driving the amplitude τ only weakly affects the qubit states via perturbative processes in η. Instead, driving the level energy ε_ν of one of the quantum dots couples strongly to the qubit states. A numerical demonstration of the resulting Rabi oscillations is shown in Fig. <ref>.
Away from the sweet spot, the optimal driving frequency for Rabi processes equals the qubit frequency. At the sweet spot, the optimal driving frequency to achieve complete population transfer is shifted to Ω = 1/2ω_0 + 1/8∂^2 ω_0/∂ε_ν^2δε^2_ν. The second term in the previous expression accounts for the shift of the mean qubit frequency at the sweet spot in the presence of the drive with amplitude δε_ν in second-order perturbation theory, see App. <ref> for a derivation. The second derivative of the qubit frequency ∂^2 ω_0/∂ε_ν^2 follows directly from the perturbative result contained in Eq. (<ref>) below. Again, two-axis control is achieved by setting the phase ϕ_0 of the pulse or, equivalently, by Larmor precession due to the energy difference of the two qubit states.
§.§ Two-qubit gates
We describe two-qubit gates that arise from inductive coupling of the superconducting loops or capacitive coupling between quantum dots of two distinct qubits indexed by j = 1,2. Figure <ref> shows a sketch of a setup for realizing two-qubit gates.
Capacitive coupling.—
Mutual capacitive coupling between quantum dots ν_1, ν_2 of adjacent qubits can be described by an interaction term U_12n̂_ν_1,1n̂_ν_2,2, where n̂_ν_j, j = ∑_σn̂_σ, ν_j, j is the occupation of the ν_j = L, R quantum dot of qubit j = 1,2. For concreteness, we consider capacitive coupling between the right quantum dot ν_R,1 of qubit 1 and the left quantum dot ν_L,2 of qubit 2. Using n̂_ν,j = d Ĥ_j/d ε_ν, j with Ĥ_j being the Hamiltonian Eq. (<ref>) for each qubit, we apply the Hellmann-Feynman theorem ⟨ n | dĤ/dε_ν, j|n⟩ = d E_n/dε_ν, j where |n⟩ labels the n'th eigenstate. The capacitive coupling projected onto the qubit eigenspace is
P U_12n̂_R1n̂_L2 P=ħ^2 U_12d ω_0,1/d ε_R,1d ω_0,2/d ε_L,2ρ̂_1^z⊗ρ̂_2^z,
where ρ̂_j^z are Pauli-z operators in the space of eigenstates |ρ_j ⟩_j, ρ_j = of qubit j (see around Eq. (<ref>) for a definition of the qubit eigenstates at the operating point), P = ∑_ρ_1, ρ_2 |ρ_1⟩_1 |ρ_2⟩_2 ⟨ρ_1|_1 ⟨ρ_2 |_2 the projector onto these states, and the qubit frequency ħω_0,j to second order is given in Eq. (<ref>) below. At the sweet spot, ε_ν, j = -U_ν, j/2, the capacitive coupling does not differentiate between the qubit states as the charge dipole moment of both qubit states vanishes. The charge dipole moment increases linearly with the detuning of the level energy ε_ν, j away from the sweet spot, which allows to switch on the two-qubit coupling using electrostatic control of the level energy ε_ν, j. [We note that single-qubit gates in the strong tunneling regime could also be performed by detuning ε_ν. If the detuned quantum dot is capacitively coupled to a quantum dot from another qubit, a two-qubit rotation results only if the other quantum dot is also detuned (see Eq. (<ref>)). To avoid accidental two-qubit gates when performing single-qubit operations, one can consider a design with divided tasks where only one quantum dot of each qubit is used for read-out and single-qubit gates by detuning ε_ν, while the other quantum dot is capacitively coupled to other qubits.] Long distance capacitive coupling between quantum dots can be mediated by floating gates <cit.>, schematically depicted in Fig. <ref> in green.
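Since the projected interaction is of Ising (ZZ) form, free evolution under it generates an entangling gate in the standard way; the sketch below (with J standing in for the prefactor ħ^2 U_12 dω_0,1/dε_R,1 dω_0,2/dε_L,2, ħ = 1, and an illustrative value) shows that a controlled-phase gate is obtained up to single-qubit Z rotations:

```python
import numpy as np
from scipy.linalg import expm

# The projected capacitive coupling is an Ising (ZZ) interaction.  A standard
# way to turn it into an entangling gate: evolve under H_int = J * Z(x)Z for
# t = pi/(4J) (hbar = 1); up to single-qubit Z rotations this is a CZ gate.
J = 0.01                                   # stands in for the prefactor above
Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
H_int = J * np.kron(Z, Z)

U = expm(-1j * H_int * np.pi / (4 * J))
single_qubit_Z = expm(1j * (np.pi / 4) * (np.kron(Z, I2) + np.kron(I2, Z)))
print(np.round(single_qubit_Z @ U, 3))     # = diag(1, 1, 1, -1) up to a global phase
```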
Inductive coupling.—
Inductive coupling via the superconducting loops is described by a term Ĥ_L = L Î_1 Î_2 where Î_j = 2e/ħdĤ_j/dϕ_j is the supercurrent operator in the loop and L the mutual inductance. However, the inductive coupling is not well-suited for our qubit design because at the phase sweet spot ϕ = π, the derivatives, and thereby the inductive coupling, are zero. Moreover, around ϕ = π, the coupling is proportional to τsinη [see Eq. (<ref>)] and, thus, suppressed in the weak tunneling regime |η| ≪ 1. It therefore requires strong tunneling between the dots to give a significant contribution.
§ PARAMETER NOISE
In this section, we quantify the susceptibility of the parity qubit to noise around the sweet spot. We use Fermi's golden rule and the Bloch-Redfield approximation to determine depolarization and dephasing rates. We use lowest-order perturbation theory for the qubit frequency ω_0 as a function of the fluctuating parameters. Here we focus on the relevant noise parameters. A discussion including fluctuations in all system parameters is contained in App. <ref>.
We include terms coupling to the local spin,
Ĥ^B_ν = ∑_σ s_σ B_z, νn̂_σν + [(B_x,ν-iB_y,ν)ĉ_↑ν^†ĉ_↓ν+H.c.],
with s_σ = ± 1 for σ = ↑,↓,
which describes Zeeman coupling to magnetic fields as well as spin-spin exchange coupling to a nuclear spin bath.
We further include spin-orbit coupling, which replaces the tunneling term in Eq. (<ref>),
Ĥ_T^SOC= τ e^i ϕ[ĉ_↑ L^†, ĉ_↓ L^†]e^i θn⃗·σ⃗([ ĉ_↑ R; ĉ_↓ R ])+H.c.
with the vector of Pauli matrices σ⃗ = (σ_x, σ_y, σ_z)^ T in spin space.
The matrix e^i θn⃗·σ⃗ describes a rotation of the electron's spin by an angle θ around the axis n⃗ as they tunnel from the right to left dot. For θ = 0, the spin-orbit coupling is zero and Eq. (<ref>) is recovered. The axis n⃗ is called the spin-orbit direction. By choosing the spin quantization axis in both quantum dots to be aligned with n⃗, the spin-orbit coupling becomes diagonal, τ e^i θσ_z. We choose this basis for the following, keeping in mind that the Zeeman fields are now given with respect to this basis, i.e. B_z points parallel to the spin-orbit axis n⃗.
In the presence of Zeeman fields, the angle η determining the qubit eigenstates and operating regimes (as defined in Eq. (<ref>)) is modified,
tanη_σ,λ = 2 τ/(U_R - U_L)/2 + s_σ(B_z L - B_z R) + s_λ(Γ_R - Γ_L)
where s_λ = ± 1 for λ = ,.
Equation (<ref>) is recovered from Eq. (<ref>) for B_zL=B_zR and λ=, see Sec. <ref> [This equation is valid in the presence of an exact or approximate U(1) spin-rotation symmetry around the axis of an applied Zeeman field or around the spin-orbit direction, such that perpendicular components can be treated perturbatively.].
Dephasing and relaxation rates.— Assuming the computational qubit subspace to be decoupled from the remaining states governed by Ĥ(t) at any time t, the dephasing rate Γ_φ^χ due to a noisy, linearly coupled parameter χ is given in Bloch-Redfield theory as <cit.>
Γ_φ^χ = π( ∂ω_0/∂χ)^2 S_χ(ω→ 0).
This presupposes that the noise spectral density
S_χ (ω) = ∫_-∞^∞ dτ⟨χ(0) χ(τ) ⟩ e^- i ωτ
is regular near ω≈ 0 up to frequencies of order of Γ_φ, where ⟨χ(0) χ(τ) ⟩ is the autocorrelation function of the fluctuating parameter χ with respect to its underlying statistical distribution.
The relaxation rate Γ_rel^χ is given in Fermi's golden rule <cit.>,
Γ_rel^χ(ω_0) = π/2 ħ^2| ⟨ | d Ĥ/d χ | ⟩|^2 S_χ(ω_0).
Similarly, the excitation rate is given by Γ_exc^χ(ω_0)=Γ_rel^χ(-ω_0). Both processes contribute to the depolarization rate Γ_1^χ (ω_0)= Γ_rel^χ(ω_0) + Γ_exc^χ(ω_0). At temperatures k_ B T ≪ħω_0, the excitation rate Γ_exc^χ(ω_0) is exponentially suppressed <cit.>.
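As an illustration of how the dephasing and relaxation formulas above are evaluated in practice, the following sketch uses a toy white-noise spectral density and placeholder values for ∂ω_0/∂χ and the matrix element (in an actual calculation these follow from the model, e.g. by finite differences):

```python
import numpy as np

# Sketch: evaluating the dephasing and relaxation formulas above for a generic
# fluctuating parameter chi.  The white-noise spectral density and the numbers
# for the derivative and matrix element are placeholders.
hbar, omega0 = 1.0, 1.0

def S_chi(omega, S0=1e-6):
    return S0                          # toy power spectral density

domega0_dchi = 0.3                     # placeholder for d(omega_0)/d(chi)
matrix_element = 0.1                   # placeholder for |<+| dH/dchi |->|

Gamma_phi = np.pi * domega0_dchi**2 * S_chi(0.0)
Gamma_rel = np.pi / (2 * hbar**2) * matrix_element**2 * S_chi(omega0)
print(Gamma_phi, Gamma_rel)
```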
Level energy fluctuations.—
Electric field fluctuations can couple to the level energy ε_ν of the quantum dots. To quantify their effect,
we calculate the qubit frequency, ħω_0^(2) = E_^(2)-E_^(2), to second order in the detuning of the level energy ε_ν away from the sweet spot,
ħω_0^(2) =ħω_0^(0)
+((ε_L+U_L/2)^2/(2Γ_L) - (ε_R+U_R/2)^2/(2Γ_R)) cosη_σ,
-((ε_L+U_L/2)/(2Γ_L) - (ε_R+U_R/2)/(2Γ_R))^2 τsinη_σ, .
The first correction arises from the quadratic dependence of the energy of the local even parity states of the individual dots on the level energy ε_ν + U_ν/2 around the sweet spot. The second correction describes a modification of the tunneling between these states due to the change of the Andreev bound state superposition by detuning ε_ν. In Eq. (<ref>), we neglected a term describing second-order processes involving high-energy states |⟩_ν with a symmetric superposition of |0⟩_ν and |2⟩_ν, see App. <ref> for the full result. These states are separated from the qubit subspace by an energy difference of the order of 2Γ_ν.
The matrix element ⟨ | dĤ/dε_ν | ⟩ = 0 at the sweet spot.
Thus we have Γ_rel^ε_ν = 0
independent of the tunneling strength.
Both the Bloch-Redfield dephasing rate and relaxation rate are zero at the sweet spot, ε_ν + U_ν/2 = 0 and ϕ = π. We note that due to the quadratic dependence of the spectrum on the level energy ε_ν, there are higher-order contributions to the dephasing that are not captured by the Bloch-Redfield dephasing rate. These contributions however depend on the detailed form of the noise power spectral density and the time scale. Corresponding expressions can be found in Ref. <cit.>.
Phase fluctuations.— The phase difference between the superconductors can fluctuate due to magnetic flux variations. To second order in the superconducting phase difference, the qubit frequency can be written as
ħω_0^(2)=ħω_0^(0)
-δϕ^2/4τsinη_σ, .
Also here, we neglected a term describing second-order processes via the high-energy states |⟩_ν, c.f. App. <ref>.
At the sweet spot ε_ν + U_ν/2 = 0 and ϕ = π, also the matrix element ⟨ | d Ĥ/d ϕ | ⟩ = 0, such that both Bloch-Redfield dephasing and the relaxation rate in Fermi's golden rule are zero.
Magnetic field fluctuations.— Fluctuating magnetic fields coupling to the electron spin include Zeeman coupling to external magnetic fields as well as spin-spin exchange interaction with the nuclear spin bath. Magnetic fields along the spin-orbit axis have a linear contribution to the qubit frequency
ħω_0^(1)=ħω_0^(0) + s_σ (B_z, L - B_z, R) cosη_σ,.
This linear contribution is large for weak tunneling, |η_σ,| ≪ 1, but goes to zero for strong tunnel coupling with |η_σ,| ≃π/2.
The relaxation rate Γ_rel^B_z,ν =π/8ħ^2sin^2η_σ,S_B_z,ν(ω_0) behaves oppositely: It is large for strong tunneling, but approaches zero for weak tunneling with |tanη_σ,| ≪ 1.
Magnetic field fluctuations perpendicular to the spin-orbit direction induce transitions between the two spin sectors. Choosing one of the spin sectors as the computational space, these fluctuations would take the qubit out of the computational space. This can be avoided by applying an external Zeeman field B_z^ext. Ideally, this Zeeman field should be aligned with the spin-orbit direction n⃗ to avoid spin-flipping processes from the spin-orbit coupling. Then, fluctuations in the orthogonal directions enter only to second order,
ħω_0^(2)=ħω_0^(0) - δ B_x^2/(B_z^ext)^2τsin(η_σ,)sin^2θ/1-τ^2(sinη_σ,)^-2(B_z^ext)^-2 ,
and similarly for B_y.
This dephasing contribution decreases quadratically with the applied magnetic field and requires spin-orbit coupling, as expressed by the proportionality sin^2 θ to the spin-orbit angle θ. Also here, the perpendicular fluctuations do not contribute to the relaxation rate in Fermi's golden rule, Γ_rel^B_x = 0.
Remaining parameters.— While the parity qubit is linearly protected against fluctuations in the level energies and the magnetic flux, fluctuations in other parameters can affect qubit performance. Fluctuations of local parameters of one of the dots, such as the charging energy U_ν and the induced pairing Γ_ν, contribute to the dephasing rate with a term proportional to cos^2(η_σ,) and to the relaxation rate proportional to sin^2(η_σ,). Fluctuations in the tunneling strength τ enter oppositely: They contribute to the dephasing rate proportional to sin^2(η_σ,) and to the relaxation rate proportional to cos^2(η_σ,).
Furthermore, a mutual charging energy U_LR∑_σ, σ'n̂_σ Ln̂_σ' R modifies the system Hamiltonian in the odd fermion-parity subspace by replacing U_ν→ U_ν + 2 U_LR. Accounting for the modified sweet spot, fluctuations in U_LR enter only to second order in the qubit frequency by detuning the system from the sweet spot as described by Eq. (<ref>).
Beyond the decoherence channels discussed here, the depolarization rate of our proposed qubit is lower bounded by the fermion-parity lifetime of the individual quantum dots.
§ COMPARISON TO OTHER QUBIT REALIZATIONS
Semiconductor spin qubit.—
A common type of single-electron qubit in semiconductors is the spin qubit <cit.>. Spin qubits are vulnerable to magnetic fluctuations created by the nuclear spins in the host material, which is why efforts are now focused on (possibly isotopically purified) silicon- or germanium-based dots <cit.>. One complication with spin qubits is that their spin state cannot be directly manipulated with electric fields, but instead requires oscillating magnetic fields, spin-orbit coupling, or gradients in the magnetic field. In our proposal, local magnetic fields couple to the spin of the states with odd fermion parity. We can reduce the effect of this coupling: (i) by applying a magnetic field larger than the local fluctuations, or (ii) by increasing the tunnel coupling of the quantum dots such that the qubit states are superpositions of |L⟩ and |R⟩ [see discussion around Eqs. (<ref>) and (<ref>)]. Moreover, our proposed gates use electrical control without the need for spin-orbit coupling or gradients in the magnetic field.
Andreev and Andreev spin qubits.—
In superconductors, coherent control of single fermionic quasiparticles in Andreev bound states has been demonstrated in “Andreev qubits” where the computational states are given by the occupation of spin-degenerate Andreev bound states <cit.>. Further experiments on “Andreev spin qubits” demonstrated coherent manipulation of the spin 1/2 degree of freedom of a quantum dot with odd fermion parity in contact to superconductors <cit.>. These experiments have been performed in aluminum-based structures where relatively long parity and T_1 lifetimes were observed, while the dephasing times were shorter <cit.>. Our proposal can be considered as a tuned double-dot version of these experiments.
Pairs of quantum dots coupled to superconductors.—
Recently, other proposals for qubits in pairs of quantum dots coupled to superconductors have appeared. Ref. MalinowskiarXiv2023Mar discusses two quantum dots coupled to a single superconducting island. By tuning the level energies of the quantum dots, a similar charge-insensitive sweet spot can be reached. An ongoing project <cit.> studies a pair of quantum dots in a Josephson junction where the qubit is encoded in Yu-Shiba-Rusinov states.
Majorana qubit.—
Our proposal shares two essential properties with topological Majorana qubits <cit.>, namely that the information is encoded in the fermion parity of localized modes and that the information is separated from the electronic charge. However, there are fundamental differences. In Majorana qubits, the quantum information is stored in spatially separated Majorana bound states, where a pair of Majorana bound states composes a single complex fermion. Individual Majorana bound states are chargeless quasiparticles. Dephasing of quantum information stored in Majorana bound states due to local fluctuations occurs only via modifications of the hybridization to other Majorana bound states (which is exponentially suppressed in their distance in the topological case). In contrast, in our proposal the quantum dots are fine-tuned to a sweet spot with zero charge dipole moment. In spirit, our proposal is thus related to two-dot Kitaev chain proposals <cit.>. However, this so-called "poor man's" Majorana system has qubit states defined by the total parity of two dots (and therefore needs four dots to make a workable qubit), whereas for our proposal the qubit states are defined by local parity (and therefore two dots are sufficient).
§ EXPERIMENTAL PLATFORMS
We expect that our qubit proposal can be implemented in current superconductor-semiconductor platforms. One of the most promising platforms for demonstration purposes is aluminum superconductors deposited on top of an electron gas formed in a III-V semiconductor such as indium arsenide <cit.> or indium antimonide <cit.>. This platform hosts very clean interfaces between the Al superconductor and the indium-arsenide electron gas, such that hopping rates Γ_ν electrostatically tunable from zero to values larger than the superconducting gap Δ≈ 0.2meV of Al, as well as quantum dots with level spacings much larger than the hopping rates (justifying the single-level approximation), are achievable <cit.>. In this platform, all of the requirements for our proposal have been demonstrated: Coherent manipulation of Andreev bound states and electron spins in quantum dots <cit.>, fermion-parity initialization <cit.>,
and good control of the hybridization of Andreev bound states in quantum dots <cit.>. Furthermore, this platform can be integrated with superconducting circuits <cit.>, which enhances the range of possibilities for qubit addressing and readout, and interfaces with other superconducting qubits. A challenge in this platform for our qubit proposal is the nuclear spin bath, which causes dephasing by inducing fluctuating spin-spin exchange fields in the quantum dots via the Overhauser effect <cit.>. Therefore, this platform cannot be expected to be viable as a long coherence time qubit, but is well-suited for demonstration experiments. Another possible platform in III-V semiconductors is gallium arsenide with niobium nitride superconductors <cit.>, which realizes high-mobility electron gases with very good electrostatic tunability.
A promising platform with higher coherence are group IV semiconductors such as germanium / silicon germanium heterostructures which make good interfaces with aluminum <cit.> or germanosilicide <cit.> superconductors. In group IV
semiconductors, most isotopes have zero nuclear spin which suppresses exchange field fluctuations from the Overhauser effect. In carbon-based materials, quantum dots have been realized in Bernal-stacked bilayer graphene <cit.>, where long-lived valley degrees of freedom have been reported <cit.>. Bilayer graphene itself can turn superconducting <cit.> by electrostatic gating, and the superconductivity can be enhanced by coupling to a tungsten diselenide monolayer <cit.>. Alternatively, graphene can be interfaced with a niobium nitride superconductor by etching the graphene and depositing the superconductor <cit.>.
§ CONCLUSIONS AND DISCUSSIONS
We have proposed a qubit where quantum information is encoded in the fermion parity of two quantum dots constituting a weak link in a superconducting loop.
The qubit states are defined by the fermion number parity of the two quantum dots. By electrostatic tuning to a sweet spot, the quantum dots have the same electric charge, independently of their fermion parity.
Thereby, the qubit states are protected from dephasing to first order in fluctuations of the electric field that lead to variations of the quantum dot level energy much smaller than the induced superconducting correlations.
By tuning the tunneling strength between the quantum dots, the encoded quantum information is further protected against relaxation (weak tunneling) or dephasing (strong tunneling) induced by environmental fluctuations coupling to individual quantum dots, including electric and magnetic fields, and nuclear spins. The reduced sensitivity to local noise may mitigate the main decoherence sources in qubits in quantum dot-superconductor heterostructures, such as Andreev <cit.> and Andreev spin qubits <cit.> [see Sec. <ref>]. Strong tunneling increases the dephasing due to fluctuations of the tunneling amplitude, which could be reduced by lowering the coupling between gate voltage and tunneling strength.
The fermion-parity qubit is controllable with electric gates to perform single- and two-qubit gates, as well as initialization and read-out.
The qubit states are well separated from the nearest non-computational states by an energy difference of the order of the induced pairing potential 2Γ_ν, which permits fast gate operations while avoiding population of non-computational states. Taking aluminium as a host superconductor with gap Δ≈ 0.2meV and estimating Γ_ν≈Δ/4 = 2πħ× 12GHz, we expect that single-qubit gates can be achieved on a nanosecond scale.
Future theoretical directions of this work could address how large charging energy U_ν≫Δ, Γ_ν or induced pairing strength Γ_ν≫Δ modifies the qubit spectrum and decoherence properties in detail. We expect that the qualitative features of our proposal persist for large charging energy, with quantitative modifications as the even-parity bound states become singlets composed of an electron on the dot and a quasiparticle from the superconductor <cit.>.
An experimental realization of our proposal would demonstrate encoding of quantum information in the fermionic parity degree of freedom separated from its electric charge. This property is shared by the topological Majorana qubits and is an essential element in their anticipated decoupling of the encoded quantum information from the environmental noise. While topological Majorana qubits <cit.> have proven difficult to realize, our proposal uses currently available technology. We hope our proposal inspires experimental realization and further studies on how superconductivity can decouple quantum information from the environment.
§ ACKNOWLEDGEMENTS
We thank Anasua Chatterjee, Ferdinand Kuemmeth, Charles M. Marcus, Jens Paaske, and Gorm Ole Steffensen for discussions. We acknowledge support from the Danish National Research Foundation, the Danish Council for Independent Research Natural Sciences, the European Research Council (Grant Agreement No. 856526), Spanish CM “Talento Program” (project No. 2022-T1/IND-24070), the Swedish Research Council (Grant Agreement No. 2020-03412), and NanoLund. This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 10103324 and funded by the European Union under the Grant Agreement no. 101063135.
§ VALIDITY OF THE EFFECTIVE HAMILTONIAN
The Hamiltonian Ĥ_ν defined in Eq. (<ref>) is an accurate description for individual quantum dots when the gap Δ in the surrounding superconductor is large, Δ≫Γ_ν, U_ν.
At large charging energy, U_ν≫Δ, Γ_ν, an accurate description of the individual quantum dots coupled to superconductors requires to take the superconductors and the hopping to them explicitly into account <cit.>. In comparison to the description using Ĥ_ν where coupling to the superconductors is included in lowest-order perturbation theory via Γ_ν, the even-parity bound states get modified: Double-occupation of the quantum dot is avoided by forming a singlet with a quasiparticle from the superconductor. This leads to a renormalization of the even-fermion-parity bound state energies <cit.>. We expect that while changing the quantitative details, the qualitative features of our results should persist in this limit: A sweet spot where both fermion parity sectors of a quantum dot have the same mean charge can also be reached for large charging energy.
§ RABI DRIVING AT THE SWEET SPOT
Rabi transitions of the qubit states can also be achieved by driving the level energy ε_ν at the sweet spot. In this regime, we have to account for the quadratic dependence of the qubit frequency on the detuning of the level energy around the sweet spot, ħω_0(ε_ν) = ħω_0(-U_ν/2) + 1/2∂^2 ω_0/∂ε_ν^2 (ε_ν + U_ν/2)^2 + 𝒪((ε_ν + U_ν/2)^4), where the factor 1/2∂^2 ω_0/∂ε_ν^2 can be determined from perturbation theory, see App. <ref> for an explicit expression.
Due to the quadratic dependence of the low-energy Hamiltonian projected on the qubit subspace in the driven parameter ε_ν(t) = -U_ν/2 + δε_νcos(Ω t), the drive enters the qubit subspace as 1/2∂^2 ω_0/∂ε_ν^2 (ε_ν(t) + U_ν/2)^2 = 1/4∂^2 ω_0/∂ε_ν^2δε_ν^2 [1 + cos(2 Ω t)] with doubled frequency.
For small driving amplitudes, 1/4∂^2 ω_0/∂ε_ν^2δε_ν^2 ≪ω_0, the driving frequency Ω needs to be set to half of the qubit frequency at the sweet spot, Ω = ω_0/2, to achieve complete population transfer. For finite driving amplitudes, 1/4∂^2 ω_0/∂ε_ν^2δε_ν^2 ≳ω_0, the qubit frequency ω̃_0 = 1/T∫_0^T dt ω_0[ε_ν(t)] = ω_0(-U_ν/2) + 1/4∂^2 ω_0/∂ε_ν^2δε_ν^2 averaged over one period of the drive T = 2 π/Ω deviates from the non-driven value ω_0, such that the driving frequency needs to be corrected as Ω = 1/2ω̃_0 = 1/2ω_0 + 1/8∂^2 ω_0/∂ε_ν^2δε_ν^2 to allow complete population transfer. The sweet spot driving is demonstrated in Fig. <ref>(b). A similar result also allows driving of the phase difference ϕ at the sweet spot ϕ = π to perform qubit rotations.
§ PERTURBATION THEORY AND DECOHERENCE RATES
Here we give the full result of the second-order perturbation theory for the qubit frequency around the sweet spot ε_ν = - U_ν/2 and at the operating point ϕ = π. These results are employed to calculate the Bloch-Redfield dephasing rates and depolarization rates from Eqs. (<ref>) and (<ref>).
§.§ Diagonalization at the sweet spot
The full qubit Hamiltonian including spin-orbit coupling and Zeeman fields,
Ĥ = ∑_ν (Ĥ_ν + Ĥ^B_ν) + Ĥ_T^soc ,
is conveniently diagonalized in the eigenbasis of proximitized dots Ĥ_ν,
|λ_ν⟩_ν = |0⟩ + (sinh(ξ_ν) + s_λ_νcosh(ξ_ν)) c^†_↑νc^†_↓ν|0⟩/N_λ_ν
with labels λ_ν = , as in Sec. <ref>, N_λ_ν = √(2 coshξ_ν (coshξ_ν + s_λ_νsinhξ_ν)), sinhξ_ν = (2ε_ν+U_ν)/(2Γ_ν) and energy E_λ_ν = Γ_ν(sinhξ_ν + s_λ_νcoshξ_ν).
In the eigenbasis of proximitized dots Ĥ_ν as given in Eq. (<ref>), the Hamiltonian within a spin sector σ can be written in matrix form,
Ĥ_σ = [ ε_σ L+E__R 0 τ e^i σθ f_ τ e^i σθf_; 0 ε_σ L+E__R τ e^i σθf_ τ e^i σθf_; ε_σ R+E__L 0; H.c. 0 ε_σ R+E__L ]
acting on the basis (ĉ^†_σ L|⟩_R, ĉ^†_σ L|⟩_R, ĉ^†_σ R|⟩_L, ĉ^†_σ R|⟩_L)^ T, where ε_σν = ε_ν + s_σ B_z, ν and
f_λ_Rλ_L =e^iϕ/2/N_λ_RN_λ_L
- s_λ_R s_λ_Le^-iϕ/2/N_λ_RN_λ_L( cosh( s_λ_Lξ_L+ s_λ_Rξ_R)
+sinh( s_λ_Lξ_L+ s_λ_Rξ_R) )
At the sweet spot ξ_L = ξ_R = 0, the function f_λ_Rλ_L simplifies as f^(0)_λλ=i sinϕ/2 and f^(0)_λλ̅=cosϕ/2 where / =. Thus, the tunnel-coupling of the quantum dot states can be controlled by the phase-bias ϕ: At ϕ = 0, the low-energy state |⟩_L of the left dot is coupled to the high-energy state |⟩_R of the right dot, and vice versa. At ϕ = π, the low-energy states |⟩_L, |⟩_R and high-energy states |⟩_L, |⟩_R are coupled with each other. The perpendicular components of the Zeeman field B_x, ν, B_y, ν remain unaffected by this basis transformation.
At the sweet spot, for B_x, ν = B_y, ν = 0, and at ϕ = π, the eigenstates of the full Hamiltonian Ĥ are,
|σ,λ,-⟩ = sinη_σ,λ/2ĉ^†_σ L|λ⟩_R + i e^-i s_σθcosη_σ,λ/2ĉ^†_σ R|λ⟩_L
|σ,λ,+⟩ = cosη_σ,λ/2ĉ^†_σ L|λ⟩_R - i e^-i s_σθsinη_σ,λ/2ĉ^†_σ R|λ⟩_L
with energy
E_σ,λ,ρ^(0) = [-(U_R + U_L)/2 + s_σ(B_z L + B_z R) + s_λ(Γ_R + Γ_L)]/2
+ ρτ/sinη_σ,λ,
with ρ = ± labeling the corresponding eigenstate and the superscript (0) indicates the sweet spot. The angle
η_σ, λ is given in Eq. (<ref>). It determines the operating regime: For small |η_σ,| ≪π/2 (weak tunneling), the eigenstates Eqs. (<ref>) are localized either left or right, while for large |η_σ,| ≈π/2 (strong tunneling), the eigenstates are bonding and anti-bonding superpositions between left and right.
The two qubit states are given by the two lowest energy eigenstates of the system within a spin sector σ, |σ,,-⟩ and |σ,,+⟩. The qubit eigenfrequency is
ħω_0^(0) = E^(0)_σ, , + - E^(0)_σ, , - = 2τ/sinη_σ,.
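This result can be verified numerically by building the 4×4 matrix Ĥ_σ at the sweet spot and ϕ = π; in the sketch below the basis ordering and the placement of the f factors are assumptions based on the structure described above (B⃗ = 0, θ = 0, illustrative parameters):

```python
import numpy as np

# Numerical check of the sweet-spot spectrum (xi_L = xi_R = 0, phi = pi,
# B = 0, theta = 0).  Basis ordering and f-factor placement follow the
# structure described in the text (assumptions of this sketch).
U_L, U_R, G_L, G_R, tau, phi = 1.0, 1.4, 0.40, 0.50, 0.05, np.pi

f_same, f_cross = 1j * np.sin(phi / 2), np.cos(phi / 2)   # f^(0) at the sweet spot
eps_L, eps_R = -U_L / 2, -U_R / 2

H = np.diag(np.array([eps_L - G_R, eps_L + G_R, eps_R - G_L, eps_R + G_L],
                     dtype=complex))
H[0, 2], H[1, 3] = tau * f_same, tau * f_same     # couples equal-lambda states
H[0, 3], H[1, 2] = tau * f_cross, tau * f_cross   # vanishes at phi = pi
H += np.triu(H, 1).conj().T

E = np.linalg.eigvalsh(H)
eta = np.arctan2(2 * tau, (U_R - U_L) / 2 - (G_R - G_L))
print(E[1] - E[0], 2 * tau / np.sin(eta))   # qubit splitting: should agree
```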
For Zeeman fields (and fluctuations thereof) much smaller than ħω_0^(0), the system can be operated in the spin degenerate regime. For sizeable fluctuations of the Zeeman fields, it is advantageous to apply an external Zeeman field B^ext_zν much larger than the variance of the fluctuations to suppress spin flips. Differences in the electronic g-factor g_ν in the two quantum dots lead to different resulting Zeeman fields B^ext_zν.
§.§ Deviations from the operating regime
Here we summarize the results for the lowest order corrections to the qubit spectrum due to perturbations in the system parameters away from the operating point at the sweet spot ε_ν = - U_ν/2 and ϕ = π.
Detuning ε_ν.— We calculate the energy difference ħω_0^(2) = E_σ,,+^(2)-E_σ,,-^(2) of the qubit states to second order in the detuning,
ħω_0^(2) =ħω_0^(0)
+(Γ_Lξ_L^2 - Γ_Rξ_R^2)/2 cosη_σ,
-((ξ_L-ξ_R)/2)^2(1 + 2 Ξ_σ,^2 )τsinη_σ,
where ξ_ν = arsinh[(2ε_ν+U_ν)/(2Γ_ν)]≈(2ε_ν+U_ν)/(2Γ_ν) and Ξ_σ,λ = τ/[(Γ_R + Γ_L)sinη_σ,λ]. The first correction arises from the quadratic dependence of the energy E_λ_ν of the local even parity states of the individual dots. The second correction describes a modification of the tunneling between these states due to the dependence of the Andreev bound state wavefunction (Eq. (<ref>)) on the level energy ε_ν. The term proportional to Ξ_σ, describes second-order processes involving the symmetric even-fermion-parity bound states λ_ν = at high energy λ_νΓ_ν
at the sweet spot. The above result is furthermore expanded to lowest order in Ξ_σ,λ, which is small Ξ_σ,λ≪ 1 when the qubit states λ_ν = are well separated from the high-energy states λ_ν =.
Neglecting the coupling to the symmetric even-fermion-parity bound states (Ξ_σ,→ 0) reproduces Eq. (<ref>).
Phase difference ϕ.— Similarly, the variations of the phase difference ϕ = π + δϕ away from the operating point ϕ = π modify the qubit frequency as
ħω_0^(2)=ħω_0^(0)
-δϕ^2/4(1 + 2 Ξ_σ,^2 )τsinη_σ,
which, again, is expanded to second order in Ξ_σ,λ. Setting Ξ_σ,→ 0 reproduces Eq. (<ref>).
Tunneling strength τ.— Variations in the magnitude of the tunneling amplitude τ→τ + δτ enter linearly in the qubit frequency,
ħω_0^(1)=ħω_0^(0) + δτsinη_σ,.
Charging energy U_ν.— Fluctuations in the charging energy U_ν on either dot modify the qubit frequency to linear order,
ħω_0^(1)=ħω_0^(0) + (δ U_R/2-δ U_L/2)cosη_σ,.
Induced pairing potential Γ_ν.— At the sweet spot, fluctuations in the induced pairing strength Γ_ν→Γ_ν+δΓ_ν modify only the diagonal elements of the Hamiltonian. The offdiagonal terms remain unchanged as ξ_ν=0. The energy depends linearly on the perturbation,
ħω_0^(1)=ħω_0^(0) + (δΓ_L-δΓ_R)cosη_σ,.
Zeeman fields B⃗_ν.—
Dephasing and depolarization due to Zeeman field fluctuations are discussed around Eqs. (<ref>) and (<ref>) in the main text.
Spin-orbit coupling.—
Spin-orbit coupling enters the Hamiltonian via the tunneling term Eq. (<ref>). Without Zeeman fields, or for Zeeman fields pointing only along the spin-orbit direction n⃗, the spin-orbit coupling does not modify the spectrum. The spin orbit angle θ enters only in the perturbative corrections due to orthogonal fluctuations of the Zeeman field [see Eq. (<ref>)].
§.§ Dephasing and depolarization
We derive Bloch-Redfield dephasing rates and depolarization rates using Eqs. (<ref>) and (<ref>) from the main text.
Detuning ε_ν.— Using Eq. (<ref>), the dephasing rate due to fluctuations in the detuning ε_ν around the sweet spot is
2 ħ^2/π S_ε(ω→0)Γ_φ^ε_ν=
(ξ_νcosη_σ,+ξ_L-ξ_R/2Γ_ν(1 + 2Ξ_σ,^2 )τsinη_σ,)^2.
To compute the relaxation rate, we calculate at the sweet spot and ϕ = π
dĤ/dε_ν = 𝟙_4+τ/(2Γ_ν)λ_y⊗ν_x
where λ_l and ν_l, l=0,x,y,z are Pauli matrices in λ space and quantum dot space, respectively, and "⊗" denotes the direct product. The matrix element ⟨ + | dĤ/dε_ν | - ⟩ = 0 due to orthogonality of the wavefunction and because the operator λ_y⊗ν_x relates wavefunctions with opposite λ, while the two qubit states have the same λ = -. Thus we have
Γ_rel^ε_ν = 0.
Both the Bloch-Redfield dephasing rate and relaxation rate are zero at the sweet spot ξ_ν = 0 and ϕ = π.
Phase difference ϕ.— The dephasing rate due to fluctuations in ϕ = π + δϕ is
2 ħ^2/π S_ϕ(ω→0)Γ_φ^ϕ=(δϕ(1 + 2 Ξ_σ,^2 )τsinη_σ,/2)^2.
With dĤ/dϕ = -λ_x ⊗ν_x at ϕ = π we find ⟨ + | dĤ/dϕ | - ⟩ = 0 because the operator λ_x ⊗ν_x couples only states with opposite λ, such that
Γ_rel^ϕ = 0.
Tunneling strength τ.— Fluctuations in the tunneling strength lead to a dephasing and relaxation rate,
Γ_φ^τ =π/ħ^2sin^2η_σ, S_τ(ω→0)
Γ_rel^τ =π/2ħ^2cos^2η_σ,S_τ(ω_0)
where we used that dĤ/dτ_σ = - λ_0 ⊗ν_y at ϕ = π.
Charging energy U_ν.— Fluctuating charging energies U_ν on either dot result in dephasing and relaxation rates, to first order around the sweet spot U_ν = - 2 ε_ν,
Γ_φ^U_ν =π/4 ħ^2cos^2η_σ, S_U_ν(ω→0)
Γ_rel^U_ν =π/32 ħ^2sin^2η_σ,S_U_ν(ω_0) .
where we used that dĤ/dU_L/R = λ_0 ⊗ (ν_0 ±ν_z)/4.
Induced pairing strength Γ_ν.— The dephasing and relaxation rates due to fluctuating Γ_ν
Γ_φ^Γ_ν =π/ħ^2cos^2η_σ, S_Γ_ν(ω→0)
Γ_rel^Γ_ν =π/8ħ^2sin^2η_σ,S_Γ_ν(ω_0) .
using dĤ/dΓ_L/R = λ_0 ⊗ (ν_0 ±ν_z)/2.
Zeeman fields B⃗.— We again distinguish fluctuations B_z along the spin quantization axis in the two dots, and perpendicular fluctuations B_x, B_y. The spin quantization axis is either set by the direction of the spin-orbit coupling n⃗ and/or the direction of an externally applied Zeeman field B_z^ext, see Sec. <ref>. Parallel fluctuations δ B_z result in dephasing and relaxation rates
Γ_φ^B_z,ν =π/ħ^2cos^2η_σ, S_B_z,ν(ω→0)
Γ_rel^B_z,ν =π/8ħ^2sin^2η_σ,S_B_z,ν(ω_0) .
In the presence of an externally applied Zeeman field much larger than the variance of the fluctuations of B_x and B_y, fluctuations in these components yield dephasing and relaxation rates
Γ_φ^B_x,ν =π/ħ^24 δ B_x^2/(B_z^ext)^4( τsinη_σ,sin^2θ/1-(τ/B_z^extsinη_σ,)^2)^2 S_B_x,ν(ω→0)
Γ_rel^B_x,ν = 0 .
The dephasing rate is zero in the absence of a bias δ B_x = 0 and / or in the absence of spin-orbit coupling θ = 0.
Relative dephasing of multiple qubits.—
When operating all qubits at the sweet spot, the charge expectation value of all quantum dots is independent of their fermion parity. Therefore, any charge dipole or multipole moments between different qubits vanish as well, such that also a multi-qubit system remains linearly insensitive to fluctuations in the level energy. Similar arguments hold for the other parameters.
§ PULSE PARAMETERS
Here, we define the precise form and parameters for the pulses applied in Fig. <ref> and <ref> in the main text.
The pulses are of the form
f(t) = s(t;t_ pulse, t_ rise) cos (Ω t + ϕ_0)
with the driving frequency Ω, pulse phase ϕ_0 and envelope function
s(t;t_ pulse, t_ rise) = ϑ(t + t_ pulse/2; t_ rise) - ϑ(t - t_ pulse/2; t_ rise)
setting the pulse duration t_ pulse. The pulse is turned on smoothly over a rise time t_ rise using the smooth step function
ϑ(t;t_ rise) = (1 + exp[ t_ rise/(t + t_ rise/2) - t_ rise/(t_ rise/2 - t)] )^-1
which is defined on the interval -t_ rise/2 < t < t_ rise/2.
This smooth step function connects continuously in derivatives of all orders to a flat line ϑ(t) = 0 for t < -t_ rise/2 and ϑ(t) = 1 for t > t_ rise/2.
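A direct implementation of this pulse shape (with the envelope sign chosen so that s ≃ 1 during the pulse, and times in the same units as the parameter lists below) could look as follows:

```python
import numpy as np

# Sketch of the pulse shape defined above; times in units of h/Gamma_L as in
# the parameter lists below.
def smooth_step(t, t_rise):
    """C-infinity step: 0 for t < -t_rise/2 and 1 for t > t_rise/2."""
    t = np.asarray(t, dtype=float)
    out = np.where(t <= -t_rise / 2, 0.0, 1.0)
    inside = (t > -t_rise / 2) & (t < t_rise / 2)
    ti = t[inside]
    # overflow of exp near the left edge is harmless here (it yields 0)
    out[inside] = 1.0 / (1.0 + np.exp(t_rise / (ti + t_rise / 2)
                                      - t_rise / (t_rise / 2 - ti)))
    return out

def envelope(t, t_pulse, t_rise):
    return smooth_step(t + t_pulse / 2, t_rise) - smooth_step(t - t_pulse / 2, t_rise)

def pulse(t, t_pulse, t_rise, omega, phi0=0.0):
    return envelope(t, t_pulse, t_rise) * np.cos(omega * t + phi0)

t = np.linspace(-80.0, 80.0, 2001)
f = pulse(t, t_pulse=115.0, t_rise=10.0, omega=1.0)   # illustrative numbers
```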
Weak tunneling, X_π rotation [Fig. <ref>(b)].—
The pulse on the tunneling amplitude is of the form τ(t) = τ + δτ f(t) with amplitude δτ = 0.006 Γ_L, frequency Ω = ω_0, pulse time t_ pulse = 115 h/Γ_L and rise time t_ rise = 10 h/Γ_L. The ramp on the level energy ε_R is parameterized using the smooth step functions, ε_R(t) = - U_R/2 + δε_R, ramp (1 - s(t;t_ pulse + t_ ramp + t_ rise, t_ ramp)) with δε_R, ramp = Γ_L and ramp time t_ ramp = 30 h/Γ_L. The pulse starts directly after the ramp is completed.
Weak tunneling, X_π/2 Z_2π X_π/2 sequence of rotations [Fig. <ref>(c)].—
The two consecutive pulses on the tunneling amplitude are performed with δτ = 0.006 Γ_L, frequency Ω = ω_0, pulse time t_ pulse = 60 h/Γ_L and rise time t_ rise = 10 h/Γ_L. The pulses are separated by a waiting time t_ w = 7.75 h/Γ_L and have the same phase ϕ_0.
Weak tunneling, X_π/2 Z_2π X_π/2 sequence of rotations [Fig. <ref>(d)].—
Same parameters as in Fig. <ref>(c), except that the phase of the second pulse is shifted ϕ_0 →ϕ_0 + π.
Strong tunneling, X_π rotation with detuned ε_R [Fig. <ref>(a)].—
The pulse on the level energy ε_R is applied for a detuned ε_R(t) = -U_R/2 + δε_ detune + δε f(t), with detuning δε_ detune = 0.2Γ_L, pulse amplitude δε_R = 0.1 Γ_L, frequency Ω = ω_0, pulse time t_ pulse = 55 h/Γ_L and rise time t_ rise = 10 h/Γ_L.
Strong tunneling, X_π rotation by driving the level energy at the sweet spot [Fig. <ref>(a)].—
The pulse to perform an X_π rotation by driving the level energy at the sweet spot is of the form ε_R(t) = -U_R/2 + δε f(t) with amplitude δε_R = 0.3 Γ_L, frequency Ω = 1/2ω_0 + 1/8∂^2 ω_0/∂ε_ν^2δε_ν^2 ≈ 0.500214 ω_0 [see App. <ref>], pulse time t_ pulse = 50 h/Γ_L and rise time t_ rise = 10 h/Γ_L.
|
http://arxiv.org/abs/2307.04976v1 | 20230711023334 | Multi-fidelity Emulator for Cosmological Large Scale 21 cm Lightcone Images: a Few-shot Transfer Learning Approach with GAN | [
"Kangning Diao",
"Yi Mao"
] | astro-ph.CO | [
"astro-ph.CO",
"astro-ph.IM"
] |
Multi-fidelity Emulator for Cosmological Large Scale 21 cm Lightcone Images:
a Few-shot Transfer Learning Approach with GAN
Kangning Diao and Yi Mao
Department of Astronomy, Tsinghua University, Beijing, China
Correspondence to: Kangning Diao <[email protected]>, Yi Mao <[email protected]>
Keywords: Machine Learning, ICML
Large-scale numerical simulations (≳ 500Mpc) of cosmic reionization are required to match the large survey volume of the upcoming Square Kilometre Array (SKA). We present a multi-fidelity emulation technique for generating large-scale lightcone images of cosmic reionization. We first train generative adversarial networks (GAN) on small-scale simulations and transfer that knowledge to large-scale simulations with hundreds of training images. Our method achieves high accuracy in generating lightcone images, as measured by various statistics, with errors mostly at the percent level. This approach saves computational resources by 90% compared to conventional training methods. Our technique enables efficient and accurate emulation of large-scale images of the Universe.
§ INTRODUCTION
In preparation for the upcoming era of 21 cm cosmology, many models have been developed to extract information from observations. These models range from semi-numerical simulations, e.g. <cit.>, to hydrodynamical radiative transfer simulations, e.g. <cit.>, with varying levels of accuracy and computational cost. In addition, different approaches have been applied to infer cosmological and astrophysical parameters, ranging from Markov Chain Monte Carlo (MCMC) codes, e.g. <cit.>, to machine-learning-boosted simulation-based inference <cit.>. However, parameter inference typically requires many forward simulations. Given the large field of view of next-generation telescopes, large-scale simulations are required to fully exploit the information contained in the observations. However, these large-scale simulations are computationally expensive, which has inspired the development of emulators as an alternative.
Building emulators typically requires numerous training samples. For large-scale simulations, the cost of obtaining these training samples can be prohibitive in and of itself. To address this issue, the concept of multi-fidelity emulation <cit.> has been proposed. This approach first uses low-cost (low-fidelity) simulations to create an emulator. The emulator is then calibrated with a small number of high-cost (high-fidelity) simulations, reducing the computational cost while still maintaining the output quality.
Here we choose the GAN <cit.> as our emulation model. GAN emulation has previously demonstrated the ability to produce high-quality samples. However, GAN training is known to suffer frequently from mode collapse, especially with a dataset of fewer than ∼ 1000 images. In the context of 21 cm lightcone emulation, avoiding this regime would typically require ≳ 1000 expensive simulations, which can be prohibitively costly. In this paper, we propose to use few-shot transfer learning <cit.> to train a faithful large-scale 21 cm lightcone image emulator with a limited number of simulations. Few-shot transfer learning allows us to learn a new task with a limited number of samples, and it serves as the `calibrating' procedure in multi-fidelity emulation. This multi-fidelity emulation allows us to significantly reduce the number of simulations required to train an accurate lightcone image emulator.
§ METHODOLOGY
Our approach involves a two-step process. First, we train our GAN with 120000 small-scale (size of (2,64,512)) images. In the second step, we train our large-scale GAN on 320 large-scale (size of (2,256,512)) images while preserving the diversity of GAN results. We will explain our approach in detail in the following.
StyleGAN 2:
The GAN architecture used in this work is StyleGAN 2 <cit.>. Our generator G consists of two parts: First, a mapping network f takes the astrophysical parameter 𝐜 and a random vector 𝐳 and returns a style vector 𝐰. Second, a synthesis network g uses the style vector 𝐰 to shift the weights in convolution kernels, and Gaussian random noise is injected into the feature map right after each convolution to provide variations in different scales of features. Our discriminator D has a ResNet <cit.>-like architecture.
Cross-Domain Correspondence (CDC): Assuming we have a good small-scale StyleGAN emulator, we expand the size of the generator's first layer, resulting in a final output size of (2,256,512).
Next, we retrain our GAN with large-scale images. We employ the patch-level discriminator and cross-domain correspondence as described in <cit.>. We denote the small-scale GAN as our source model G_s and the large-scale GAN as the target model G_t. First, we feed the same batch of vectors (𝐳,𝐜) to both G_s and G_t, obtaining the corresponding small-scale images G_s(𝐳,𝐜) and large-scale images G_t(𝐳,𝐜). Then we calculate the cosine similarity s_(i,j) between any pair of images in G_s(𝐳,𝐜) as
𝐒_s(𝐳,𝐜)={cos(G_s(z_i,c_i),G_s(z_j,c_j))_∀ i≠ j}
and similarly for G_t we have:
𝐒_t(𝐳,𝐜)={cos(G_t(z_i,c_i),G_t(z_j,c_j))_∀ i≠ j}
Here the cos denotes the cosine similarity. Next, we normalize these two vectors using softmax and calculate the KL divergence between vectors:
ℒ_ CDC = D_ KL(Softmax(𝐒_s),Softmax(𝐒_t))
In this way, one can encourage the G_t to generate samples with a diversity similar to G_s, relieving the mode collapse problem.
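A minimal sketch of this loss (operating on raw generated images; an actual implementation may instead use intermediate feature maps and batched GPU tensors) is:

```python
import numpy as np

def cdc_loss(source_images, target_images):
    """Sketch of the cross-domain-correspondence loss defined above.

    Both inputs are arrays of shape (N, ...) generated from the same batch of
    (z, c); images are flattened before computing pairwise cosine similarities.
    """
    def pairwise_cos(x):
        x = x.reshape(len(x), -1)
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
        iu = np.triu_indices(len(x), k=1)      # all pairs i != j (symmetric)
        return (x @ x.T)[iu]

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    p = softmax(pairwise_cos(source_images))   # Softmax(S_s)
    q = softmax(pairwise_cos(target_images))   # Softmax(S_t)
    return float(np.sum(p * np.log(p / q)))    # D_KL(Softmax(S_s) || Softmax(S_t))

# usage: loss_cdc = cdc_loss(G_s_images, G_t_images), added to the generator loss
```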
Other Techniques: A patch-level discriminator is also adopted in this work. We divide the astrophysical parameter space into two parts: the anchor region and the rest. The anchor region consists of spherical regions of small radius around the training-set parameters. In this region, the GAN image G_t(𝐳,𝐜_ anch) has a good training sample to compare with. Thus, we apply the full discriminator at these parameters. If 𝐜 is located outside the anchor region, we only apply a patch discriminator: in this case, the discriminator does not compute the loss on the whole image but on individual patches of the image.
Since the small-scale information in both training sets is identical, we freeze the first two layers of the discriminator <cit.>. We add the small-scale discriminator D_s loss to ensure the correctness of small-scale information. Our code is publicly available in this GitHub repo[<https://github.com/dkn16/multi-fidel-gan-21cm>].
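The layer freezing can be implemented in a few lines; the sketch below assumes a PyTorch discriminator whose child modules are registered in order from the input resolution downwards:

```python
import torch.nn as nn

def freeze_first_layers(discriminator: nn.Module, n_frozen: int = 2) -> None:
    """FreezeD-style freezing of the first `n_frozen` child blocks.

    Assumes the child modules are registered in order from the input
    resolution downwards, as in a sequential ResNet-style discriminator.
    """
    for i, block in enumerate(discriminator.children()):
        if i >= n_frozen:
            break
        for p in block.parameters():
            p.requires_grad = False
```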
§ DATASET
The training dataset for this project consists of two parts: a small-scale dataset and a large-scale dataset. All the data are generated with <cit.>, and each simulation has distinct reionization parameters. Our parameters are the ionizing efficiency ζ and the minimum virial temperature T_ vir. We explored a range of 10<ζ<250 and 4<log T_ vir<6, and the parameters are sampled with Latin-Hypercube Sampling<cit.>.
The small-scale dataset has a resolution of (64,64,512) and consists of 30,000 simulations with a comoving box length of (128,128,1024) Mpc. The third axis (z-axis) is along the line of sight (LoS), spanning a redshift range of 7.51<z<11.93. For each redshift, we run a realization
and select the corresponding slice for our final data. We include the matter overdensity field δ_m and the 21 cm brightness temperature T_b field for training. Since the overdensity field is highly correlated with other intensity mappings (IM) like CO and C[II] lines, we expect our method can be transferred to other IM images smoothly. For each sample, we cut four image slices, resulting in 120000 lightcone images with a size of (2,64,512) in our small-scale dataset, containing both the overdensity and brightness temperature field.
The large-scale dataset has a (256,256,512) resolution and consists of 80 simulations with a comoving box length of (512,512,1024) Mpc, covering the same redshift range. As before, for each sample, we cut four slices and obtained 320 lightcone images with a size of (2,256,512) in our large-scale dataset.
§ RESULTS
Here we present the evaluation of our model results. A visual inspection of generated samples is shown in Fig. <ref>. We tested our result on 3 combinations of parameters, each having distinct evolution history. For each parameter combination, we run 4 simulations with distinct initial conditions generated with different random seeds for testing.
Global Signal:
We calculate the global 21 cm signal of the GAN results. Limited by the size of the test set, the mean value is calculated with 1024 image samples. Our result is shown in Fig. <ref>. We see that the GAN works well, with errors mostly below 5% and a well-matched 2σ region.
Power spectrum (PS):
Fig. <ref> shows the T_b auto-PS, T_b-δ_m cross-PS and δ_m auto-PS. GAN results perform well on small scales, with an error of less than 10%, except when the PS is close to 0. On extremely large scales, the error can exceed 50%. This is unsurprising because we lack training samples. The GAN still captures the large-scale power when the T_b signal has a high amplitude. Moreover, the relative error is insignificant compared with the sampling variance.
The T_b auto-PS (Fig. <ref>, top row) shows that, as time evolves, power is transferred from small scales to large scales. Again, the accuracy of the cross-PS (Fig. <ref>, middle row) confirms that the correlation between T_b and δ_m is reproduced. At early stages, the HI traces the matter field well, and the GAN T_b and δ_m fields have a positive cross-correlation on all scales. Later, the cross-correlation becomes negative because dense regions hosted ionizing sources earlier and were ionized first. Our GAN performs well in reproducing these features. The GAN samples with different parameters have similar matter PS (Fig. <ref>, bottom row), in agreement with the ground truth.
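For reference, a minimal estimator of the binned 2D auto- and cross-power spectra of lightcone slices (with simplified normalization and binning compared to standard 21 cm pipelines) is sketched below:

```python
import numpy as np

def cross_power_spectrum_2d(field_a, field_b, box_lengths, n_bins=16):
    """Binned 2D cross-power spectrum of two fields on a periodic box.

    Setting field_b = field_a gives the auto-spectrum.  Normalisation and
    binning are simplified relative to standard 21 cm analysis pipelines.
    """
    n_x, n_y = field_a.shape
    l_x, l_y = box_lengths
    a = field_a - field_a.mean()
    b = field_b - field_b.mean()
    fa = np.fft.fftn(a) * (l_x * l_y) / (n_x * n_y)
    fb = np.fft.fftn(b) * (l_x * l_y) / (n_x * n_y)
    power = (fa * np.conj(fb)).real / (l_x * l_y)

    kx = 2 * np.pi * np.fft.fftfreq(n_x, d=l_x / n_x)
    ky = 2 * np.pi * np.fft.fftfreq(n_y, d=l_y / n_y)
    k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)

    bins = np.linspace(k[k > 0].min(), k.max(), n_bins + 1)
    which = np.digitize(k.ravel(), bins)
    ps = np.full(n_bins, np.nan)
    for i in range(1, n_bins + 1):
        sel = which == i
        if sel.any():
            ps[i - 1] = power.ravel()[sel].mean()
    k_mid = 0.5 * (bins[1:] + bins[:-1])
    return k_mid, ps
```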
Non-Gaussianity:
Here we employ the scattering transform (ST) <cit.> coefficients as a non-Gaussian statistic to evaluate our GAN. A detailed description can be found in, e.g., <cit.>. We calculate the second-order ST coefficients S_2 as measures of non-Gaussianity with Kymatio <cit.>. Since the images are larger than in the small-scale dataset, we set the kernel scales to j=0,3,6 to capture more large-scale information. Results are shown in Fig. <ref>. When (j_1,j_2) = (0,3), the error is less significant, ≲ 10%. When j_2=6, the error exceeds 20%.
§ SUMMARY
In this paper, we introduce the few-shot transfer learning technique to build an emulator for large-scale 21 cm simulations. The large-scale GAN is trained with 80 simulations, and the relative error of statistics is less than 10% on small scales. On large scales, a mild increase in error arises due to insufficient training samples.
Generating our multi-fidelity dataset requires ∼ 1.2× 10^5 CPU hours, while a purely large scale dataset requires ∼ 1.5× 10^5 CPU hours, with 5000 simulations, an optimistic estimate of dataset size consistent with e.g. <cit.>. Our method reduces the computational cost by 90%, which will enable us to emulate more complex simulations in the future.
§ ACKNOWLEDGEMENTS
This work is supported by the National SKA Program of China (grant No. 2020SKA0110401), NSFC (grant No. 11821303), and the National Key R&D Program of China (grant No. 2018YFA0404502). We thank Xiaosheng Zhao, Ce Sui, and especially Richard Grumitt for inspiring discussions.
We acknowledge the Tsinghua Astrophysics High-Performance Computing platform at Tsinghua University for providing computational and data storage resources that have contributed to the research results reported within this paper.
§ COMPARISON WITH PREVIOUS WORK
Several noteworthy applications of GAN in astronomy have been extensively explored in previous studies <cit.>. Previous works have made significant progress in utilizing innovative GAN structures such as the progressively growing GAN (PGGAN) <cit.> and stabilized GAN <cit.>. These studies have demonstrated sub-percent-level accuracy, as assessed by various statistical measures, for unconditional emulation, and achieved accuracy at the ten percent level for conditional emulation. A comparison between our results and previous findings is presented in Table <ref>. By employing the StyleGAN2 architecture, we have achieved percent-level accuracy in conditional emulation with sufficient training samples, as validated by various statistical measures. In the few-shot learning scenario, our GAN exhibits similar accuracy on a small scale and demonstrates a moderate increase on a larger scale. Furthermore, our large-scale GAN, combined with few-shot transfer learning techniques, allows for computational resource savings ranging from 90% to 99%, depending on different estimations.
§ TEST ON MODE COLLAPSE
§.§ Visual inspection
To assess the diversity of our model, we conducted a visual inspection. We generated multiple realizations for both GAN samples and simulation samples, as illustrated in Fig. <ref>. Upon careful observation, we observed that the shape and size of ionized bubbles exhibit variation across different GAN samples, indicating the absence of any specific preference for bubble features. Furthermore, the locations of ionized bubbles also appear random, as no discernible trend or pattern was observed among the samples we examined.
§.§ Pixel level variance
In addition to visual inspections, we also computed the standard deviation of the T_b field for each pixel, as depicted in Fig. <ref>. Our aim was to observe any potential decrease in the standard deviation, which could indicate mode collapse. Upon analyzing the results in Fig. <ref>, we noticed that the variance for both GAN and simulation samples appeared similar, particularly for higher T_b values. However, we observed mild fluctuations in the standard deviation when the T_b value was low. Based on this analysis, we can conclude that there is no clear evidence of significant mode collapse at the pixel level.
§.§ Feature level variance
Lastly, we computed the 2σ scatter of the second-order ST coefficients (S_2) for the T_b field, which serves as a representation of image features. The results are presented in Figures <ref>-<ref>. Consistent with the analysis in Section <ref>, we selected the scales (j_1,j_2) as (0,3), (0,6), and (3,6) to capture both small and large-scale features.
Upon examination, we observed that in most cases, the 2σ scatter of GAN features overlapped with that of simulation samples, indicating the absence of mode collapse at the feature level. However, in the bottom subplot of Fig. <ref>, we noticed a deviation in both the mean value and 2σ scatter for certain features at the super-large scale. This suggests a slight mode collapse issue in the generated images at that particular scale.
In conclusion, our analysis indicates that there is no strong evidence of mode collapse at the feature level. The GAN samples generally mimic the behavior of the simulation samples quite well, except when the T_b approaches zero.
|
http://arxiv.org/abs/2307.03959v1 | 20230708115509 | Understanding the power-law nature of participation in community sports organizations | [
"Jia Yu",
"Mengjun Ding",
"Weiqiang Sun",
"Weisheng Hu",
"Huiru Wang"
] | cs.SI | [
"cs.SI",
"physics.soc-ph"
] |
Understanding the power-law nature of participation in community sports organizations
Jia Yu, Mengjun Ding, Weiqiang Sun, Senior Member, IEEE,
Weisheng Hu, Member, IEEE, Huiru Wang
Manuscript received June, 2023. (Corresponding author: Weiqiang Sun.)
Jia Yu, Mengjun Ding, Weiqiang Sun, and Weisheng Hu are with the School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China. Huiru Wang is with the Department of Physical Education, Shanghai Jiaotong University, Shanghai 200240, China.
(e-mail: {yujia543, mengjun_ding, sunwq, wshu, wanghr}@sjtu.edu.cn).
August 12, 2023
The improvement of living standards and awareness of chronic diseases have increased the importance of community sports organizations in promoting the physical activity levels of the public. However, limited understanding of human behavior in this context often leads to suboptimal resource utilization. In this study, we analyzed the participation behavior of 2,956 members with a time span of 6 years in a community sports organization. Our study reveals that, at the population level, the participation frequency in activities adheres to a power-law distribution. To understand the underlying mechanisms driving crowd participation, we introduce a novel behavioral model called HFBI (Habit-Formation and Behavioral Inertia), demonstrating a robust fit to the observed power-law distribution. The habit formation mechanism indicates that individuals who are more engaged are more likely to maintain participation, while the behavioral inertia mechanism suggests that individuals' willingness to participate in activities diminishes with their absences from activities. At the individual level, our analysis reveals a burst-quiet participation pattern, with bursts often commencing with incentive activities. We also find a power-law distribution in the intervals between individual participations. Our research offers valuable insights into the complex dynamics of human participation in community sports activity and provides a theoretical foundation to inform intervention design. Furthermore, the flexibility of our model enables its application to other data exhibiting power-law properties, broadening its potential impact beyond the realm of community sports.
human behavior, power law, habit formation, behavioral inertia, burst timing, community sports activity.
§ INTRODUCTION
Globalization, urbanization, and increased wealth have led to significant lifestyle changes, causing a wide decrease in physical activity. According to the World Health Organization (WHO), inactivity rates can climb as high as 70% in certain countries, primarily due to shifts in transportation habits, heightened reliance on technology, and urbanization <cit.>. Physical inactivity, which has been identified as a global pandemic, is responsible for up to 8% of non-communicable diseases and deaths globally <cit.>. Conservatively estimated, physical inactivity cost health-care systems INT$53.8 billion worldwide in 2013 <cit.>. Additionally, if the prevalence of physical inactivity remains unchanged, it is projected that by 2030, there will be around 499.2 million new cases of preventable major NCDs worldwide, resulting in direct health-care costs of INT$ 520 billion. The annual global cost of not taking action on physical inactivity is anticipated to reach approximately $47.6 billion <cit.>.
In an effort to improve physical activity participation, community sports organizations have achieved remarkable results in recent years. Many concur that community sport, as a low-threshold physical activity, is a powerful tool for targeting socially vulnerable groups <cit.>. Moreover, community sport has been recognized as a policy area and a social field that goes beyond “just" providing opportunities for groups to participate in sports. It also encompasses functions such as social care and crime reduction <cit.>. Today, being non-profit by nature, community sports organizations face greater challenges, such as competition for limited resources, volunteer availability, and capacity, and the impact of pandemics (such as COVID-19) <cit.>. Understanding the nature of the population participating in community sports is thus pivotal to making the best use of limited resources.
The interest in the data-driven exploration of human behavior has been persistent. Very early on, power-law distributions were found in certain human behaviors, such as the intervals between emails <cit.>, the pattern of phone calls <cit.>, and complex social networks <cit.>. Efforts have been made to understand the principles behind the emergence of these power-law distributions <cit.>. Classical models such as the decision-based queuing process <cit.> and preferential attachment <cit.> have been proposed to explain the power-law distributions observed in the waiting times for processing emails and the degree distribution in complex networks, respectively. Research on community sports organizations is usually conducted from an organizational management perspective, providing high-level guidance for organizational development by quantifying aspects such as resources, program design, diversity, life cycle, and resilience <cit.>. However, very few, if any, models are population-based and consider when, how, and who participates in community-level sports activities <cit.>.
In this study, with the data from 2,956 users collected over a span of six years, we discovered a power-law distribution of population participation in community sports activities. To explain this power-law distribution, we proposed the hypothesis of habit formation and behavioral inertia in community sports activity participation. Previous research has indicated that physical activity behavior can be developed through repeated experience of the activity in stable contexts <cit.>. Human behavior does exhibit inertia, as evidenced by the tendency for users to stick with default options <cit.> and purchase habits <cit.>. Our empirical data provides evidence of habit formation and behavioral inertia in community sports participation. It may help to address the question, “What is the typical `shape' of within-person real-world habit growth with repetition over the long-term" identified in the 2019 European Health Psychology Society Synergy Expert Meeting <cit.>. Based on these two mechanisms, we designed a behavioral model called HFBI that can robustly fit the power-law distribution of the empirical data. A power-law distribution is also observed in the intervals of individual participation, signifying a burst-quiet pattern of activity participation. With the relevant activity information, we found that bursts tend to be initiated by activities with incentive rewards, suggesting that incentive activities can help call people back for sustained engagements. The main contributions of this article are described as follows.
* For the first time, we discovered that the frequency of population participation in community physical activities and the intervals between an individual's participations obey power-law distributions.
* We proposed an intuitive model to explain the power-law distribution of population participation in community physical activities, by taking into account habit formation and behavioral inertia. We demonstrated good fitting performance and statistical significance with real-world data. The model may also be used in other domains where power-law distributions with low power-law exponents are observed.
* The intervals between an individual's participations exhibit a power-law distribution, with a pattern of bursts followed by periods of inactivity (a burst-quiet pattern). We observed that bursts often start with incentive activities at their head. This implies that incentive activities not only attract more participants but also have the potential to call users back from a quiet state to an active state, thereby promoting sustained engagement.
The rest of this article is organized as follows. In Section II, we demonstrate the power-law phenomenon of participation frequency in activities at the population level. In Section III, we introduce the proposed HFBI model and present the evidence. In Section IV, we verify the participation patterns at the individual level and the role of incentive activities. In Section V, we present the related work. Finally, we summarize this paper in Section VI.
§ POWER-LAW DISTRIBUTION OF PARTICIPATION FREQUENCY AT THE POPULATION LEVEL
§.§ Data Description
The data used in our research was sourced from a university-based community sports platform that we develop and operate, which allows individuals to initiate or participate in sports activities. The initiator of the activity can choose whether or not to provide rewards as incentives for the activity. Over the course of 6 years, from May 2015 to May 2021, our dataset captured 28,714 records of activity participation in 770 activities (including 110 activities with incentives), involving a total of 2,956 individuals. Each record in the dataset contains the participant's ID, activity ID, team ID, and type of activity (whether to provide incentives or not). The activity IDs are consecutive natural numbers starting from 0 and arranged in the order of their occurrence (numbered from 0 to 769).
§.§ Fitting the Empirical Data
The frequency of user i participating in activities over the entire period is denoted as q_i. For the sequence of activity participation frequency { q_i}, we assume that the frequency larger than a truncated value q_min is described by the power law distribution,
p(q) ∼ q^-γ, q ≥ q_min.
In the Kolmogorov-Smirnov (KS) test, p>0.1 (or p>0.05) suggests that the data can be considered to follow a power-law distribution. We select the smallest value of q that satisfies the KS test with p>0.1 as q_min, and the data above q_min can be plausibly modeled as a power-law distribution. The estimate γ is chosen by maximum likelihood (MLE) <cit.>.
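A minimal sketch of this fitting procedure is given below; it uses the standard discrete-MLE approximation of Clauset et al. and a continuous-CDF approximation for the KS distance, while the bootstrap needed to turn the KS distance into a p-value is omitted for brevity.

```python
import numpy as np

def mle_exponent(q, q_min):
    """Approximate discrete MLE of gamma for p(q) ~ q^-gamma with q >= q_min."""
    tail = np.asarray(q, dtype=float)
    tail = tail[tail >= q_min]
    gamma = 1.0 + tail.size / np.sum(np.log(tail / (q_min - 0.5)))
    return gamma, tail

def ks_distance(tail, q_min, gamma):
    """Maximum distance between the empirical tail CDF and the fitted power law."""
    x = np.sort(tail)
    emp = np.arange(1, x.size + 1) / x.size
    model = 1.0 - (x / (q_min - 0.5)) ** (1.0 - gamma)  # continuous approximation
    return np.max(np.abs(emp - model))

def scan_qmin(q, q_min_candidates=range(1, 11)):
    """Fit gamma and the KS distance for each candidate q_min.

    The procedure above keeps the smallest q_min whose (bootstrap) KS p-value
    exceeds 0.1; generating that p-value requires resampling synthetic
    power-law data sets, which is not shown here.
    """
    out = []
    for q_min in q_min_candidates:
        gamma, tail = mle_exponent(q, q_min)
        out.append((q_min, gamma, ks_distance(tail, q_min, gamma)))
    return out
```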
§.§ Power-law Distribution of Participation Frequency
The participation frequency of the population follows a power-law distribution. Fig. <ref> shows the empirical distribution of user participation frequency in activities in a complementary cumulative way to enhance the statistical significance <cit.>. The complementary cumulative function can be represented as F(q)=∑_q^'=q^∞ p(q^'), where p(q) denotes the proportion of individuals who participated in activities q times. A clear straight-line trend can be observed on the double logarithmic axes, indicating a power-law distribution of the data. Kolmogorov–Smirnov (KS) tests and maximum likelihood estimation (MLE) fits are employed to check whether the empirical distributions obey a power law and to estimate the related parameters. The result shows that the frequency of population participation in activities is in line with a power-law distribution (p=0.18, q_min=2) with power-law exponent γ=1.76. The cutoff of the tail indicates that there are fewer individuals participating in an exceptionally large number of activities than a power-law distribution would expect, a phenomenon commonly observed in real-world systems. Fig. <ref> shows the relationship between the fraction P of total participation and the most active fraction p of the population. The 80/20 rule is evident: the top 20% of the most active users contributed approximately 84% of the total activity participation. Theoretically, the case is even more extreme for power-law distributions with γ less than 2. However, the finite number of activities and the cutoff of the tail bring the ratio close to the classical Pareto law.
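The 80/20 check itself is straightforward to reproduce; a small sketch follows (the array name is illustrative).

```python
import numpy as np

def participation_share(q, top_fraction=0.2):
    """Fraction of all participations contributed by the most active `top_fraction` of users."""
    q_sorted = np.sort(np.asarray(q))[::-1]
    n_top = max(1, int(round(top_fraction * q_sorted.size)))
    return q_sorted[:n_top].sum() / q_sorted.sum()

# e.g. participation_share(q_counts, 0.2) should return roughly 0.84 for this dataset.
```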
To demonstrate that the power-law distribution of the participation frequency is not a momentary coincidence, we analyzed the data for each activity node after the platform scale reached 1000. All samples (287 (88.9%) with q_min=1 and 36 (11.1%) with q_min=2) conformed to a power-law distribution according to the KS test, with p-values all greater than 0.1. Fig. <ref> presents the γ for all samples of 323 activity nodes. The range of γ spans from 1.66 to 1.81 with a mean of 1.72. It changes slowly with each activity held, first decreasing steadily and then fluctuating upward. A γ below 2 indicates a significant “heavy tail" phenomenon in the frequency of participation.
§ HFBI-A BEHAVIORAL MODEL BASED ON HABIT FORMATION AND BEHAVIORAL INERTIA
To explore the principle behind the power-law distribution of the participation frequency, we propose a behavioral model named HFBI, which is based on the assumptions of habit formation and behavioral inertia. Intuitively, people who have participated in activities frequently or have just participated in an activity are more likely to participate in subsequent activities. Both assumptions are supported by convincing evidence from our data.
§.§ Evidence for Habit Formation and Behavioral Inertia
To provide evidence for the habit formation and behavioral inertia mechanisms, we performed a statistical analysis of all activities in the dataset. The proportion of people who have participated in q activities and would choose to participate in a new available activity can be represented as
prop .(q)=∑_j=0^N-1 m_q^j/∑_j=0^N-1 n_q^j.
Here, n_q^j represents the number of individuals who have participated in q activities before a new activity j, m_q^j represents the number of individuals among them who choose to participate in the activity j, and N is the total number of activities in the dataset. The denominator represents the total number of individuals who have participated in q activities for all activities, while the numerator represents the number of individuals who choose to continue to participate in an activity after participating in q activities.
Similarly, the proportion of people who have been away from activities for d sessions and would choose to participate in a new available activity can be represented as
prop .(d)=∑_j=0^N-1 m_d^j/∑_j=0^N-1 n_d^j.
n_d^j represents the number of individuals who have been away from activities for d sessions for activity j, m_d^j represents the number of individuals among them who choose to participate in the activity j. The denominator represents the total number of individuals who have been away from activities for d sessions for all activities, while the numerator represents the number of individuals who choose to continue to participate in an activity after being away from activities for d sessions.
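The two proportions can be computed directly from the participation records; a sketch is given below, assuming the records are (user ID, activity ID) pairs with activity IDs numbered consecutively in chronological order, as in the dataset described above.

```python
from collections import defaultdict

def habit_and_inertia_curves(records, n_activities):
    """Empirical prop.(q) and prop.(d), as defined above, from (user_id, activity_id) records."""
    by_activity = defaultdict(set)
    for user, act in records:
        by_activity[act].add(user)

    count = {}                       # activities each known user has joined so far
    last = {}                        # activity ID of each user's latest participation
    num_q, part_q = defaultdict(int), defaultdict(int)
    num_d, part_d = defaultdict(int), defaultdict(int)

    for j in range(n_activities):
        attendees = by_activity[j]
        for user, q in count.items():          # only users seen before activity j
            d = j - last[user]
            num_q[q] += 1
            num_d[d] += 1
            if user in attendees:
                part_q[q] += 1
                part_d[d] += 1
        for user in attendees:                  # update participation histories
            count[user] = count.get(user, 0) + 1
            last[user] = j

    prop_q = {q: part_q[q] / num_q[q] for q in sorted(num_q)}
    prop_d = {d: part_d[d] / num_d[d] for d in sorted(num_d)}
    return prop_q, prop_d
```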
Fig. <ref> shows the proportion of people who have participated in q activities and would choose to participate in a new available activity. As shown, the proportion of individuals opting to continue participation increases almost linearly with the number of activities participated in the early stage. Fig. <ref> illustrates the proportion of people who have been away from activities for d sessions and would choose to participate in a new available activity. As the number of sessions away from activities increases, the proportion of people choosing to back to participating in activities sharply decreases. These provide solid evidence for the existence of habit formation and behavioral inertia in community sports participation.
§.§ The HFBI Model
Based on the evidence presented, we propose the HFBI model, which incorporates habit formation and behavioral inertia, to simulate user participation in activities. The experimental results demonstrate that the model can accurately simulate user participation in activities with only four parameters.
§.§.§ Parameter Settings
The HFBI model only requires four parameters: n, c, m, and α. n represents the number of activities held, i.e., the model's iteration count. c and m refer to the quantities of new and existing users participating in an activity (added in a round of iterations), respectively. α is a parameter that adjusts the ratio of habit formation and behavioral inertia to achieve a better fit with the empirical data. The parameters of c and m can be derived from the mean values of the dataset. Note that since the parameters are natural integers, the values of c and m will be rounded. To ensure consistency in the scale of the population, n is calculated based on the number of population, c, and m. Additionally, we initiate the iteration process with m pre-existing users to enable the selection of existing users at the start of the iteration.
§.§.§ Model Description
The model is characterized by adding users in a sequential and batched manner, which aligns with many real-life situations. Initially, we make the assumption that for every activity, there will be c new users and m existing users participating. For a new available activity and an existing user i, q_i represents the total number of activities that user i has participated in before, and d_i represents the interval between the last activity they participated in and the current new activity. User i participating in the activity can be attributed to two mechanisms. (1) User i has a probability of α to participate in the activity due to habit formation, which means the probability of participating is proportional to q_i:
q_i/∑_i ∈ I q_i.
(2) Additionally, there is a probability of 1-α for user i to participate in the activity due to behavioral inertia, which means the probability is a decreasing function of d_i:
(1/d_i)/∑_i ∈ I (1/d_i).
Therefore, the total probability of user i participating in the activity is:
ϕ_i = α q_i/∑_i ∈ I q_i + (1-α) (1/d_i)/∑_i ∈ I (1/d_i).
I is the set of all existing users. The model will perform n rounds of iterations, adding c new users and selecting m existing users based on Eq. <ref> in each round. The c new users will be added to the existing user pool in each round. The overall process of the model is shown in Algorithm <ref>. Note that the specific form of the decreasing function for d_i is not unique, as it can be adjusted by the parameter α.
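A compact sketch of this iteration is given below; the initialization of the m pre-existing users with q_i = d_i = 1 and the sampling of the m existing participants without replacement are our assumptions where the description leaves details open.

```python
import numpy as np

def simulate_hfbi(n, c, m, alpha, rng=None):
    """Simulate HFBI: n activities, each joined by c new and m existing users.

    Existing user i is selected with probability
    alpha * q_i / sum(q) + (1 - alpha) * (1/d_i) / sum(1/d).
    Returns the final participation count q_i of every user.
    """
    rng = np.random.default_rng() if rng is None else rng
    q = np.ones(m)          # m pre-existing users (assumed q_i = 1)
    d = np.ones(m)          # sessions since each user's last participation (assumed 1)

    for _ in range(n):
        prob = alpha * q / q.sum() + (1.0 - alpha) * (1.0 / d) / (1.0 / d).sum()
        chosen = rng.choice(q.size, size=m, replace=False, p=prob / prob.sum())
        q[chosen] += 1
        d += 1              # everyone is one session further from this activity...
        d[chosen] = 1       # ...except the participants, whose gap resets to 1
        q = np.concatenate([q, np.ones(c)])   # c brand-new users join this activity
        d = np.concatenate([d, np.ones(c)])
    return q.astype(int)

# Parameters quoted for the full data set in the text: simulate_hfbi(731, 4, 33, 0.9)
```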
§.§.§ Proof of Power-Law Distribution and Exponent in Habit Formation
When only considering the habit formation, that is ϕ_i=q_i/∑_i ∈ I q_i, the model can generate power-law distribution data with a power exponent γ=2+c/m. The proof process is similar to the Price model <cit.>. In the HFBI, for every activity held, there will be c new users and m existing users participating, and the participation probability of existing users is proportional to the number of activities they have participated in before. Let p_q(n) be the fraction of users that have participated q times when the platform contains n users, which is also the probability distribution of participation frequency. q_i represents the number of activities participated by user i. When organizing an activity where only one user among all existing users will participate, the probability of existing user i participating in the activity is
q_i/∑_i q_i = q_i/(n⟨ q⟩) = c q_i/[n(m+c)].
where ⟨ q⟩ represents the average number of activities each person participates in, ⟨ q⟩=n^-1∑_i q_i. The number of people who have participated in q activities is np_q(n). When there is a new activity, the expected number of people who have participated in q activities and will join the new activity is
n p_q(n) × m × c q/[n(m+c)] = m c q p_q(n)/(m+c).
Then the master equation for the evolution of the participation frequency distribution is
(n+c) p_q(n+c) = n p_q(n) + (q-1) mc/(m+c) p_q-1(n) - q mc/(m+c) p_q(n).
The left side of the equation is the expected number of people participating in the activity q times after adding an activity. The first term on the right-hand side here represents the number of users with previous q participation. The second term refers to the expected number of users who have a participation frequency of q-1 and join the activity and become q times, while the third term refers to the expected number of users who have a participation frequency of q and participate in this activity and are no longer q times.
Eq. <ref> is applicable for all cases where q ≠ 1. When q = 1, the right side of the equation will increase by c new users whose participation frequency becomes 1, instead of the second term in Eq. <ref>, and the equation for q=1 is
(n+c) p_1(n+c) = n p_1(n) + c - mc/(m+c) p_1(n).
When considering the limit of large population size n →∞ and calculating the asymptotic form of the distribution participation frequency in this limit, we take the limit n →∞ and use the shorthand p_q=p_q(∞). Eqs. <ref> and <ref> become
p_q = (q-1) mc/[c(m+c) + q mc] p_q-1 for q>1,
p_1 = (m+c)/(2m+c) for q = 1.
Let k = c/m, then
p_1 = (1+k)/(2+k) for q = 1,
p_q = (q-1)/(1+k+q) p_q-1 for q>1.
With Eqs. <ref> and <ref>, we can iteratively determine p_q for all values of q, beginning with our initial solution for p_1. The results are as follows:
p_1 = (1+k)/(2+k),
p_2 = [1/(2+k+1)] × [(1+k)/(2+k)],
p_3 = [2/(3+k+1)] × [1/(2+k+1)] × [(1+k)/(2+k)],
p_4 = [3/(4+k+1)] × [2/(3+k+1)] × [1/(2+k+1)] × [(1+k)/(2+k)],
...
The expression for general q can be successively derived as:
p_q = [(q-1) × (q-2) ⋯× 1 × (1+k)] / [(q+k+1) × (q-1+k+1) ⋯× (2+k+1) × (2+k)].
It is known that the gamma function is
Γ(x) = ∫_0^∞ t^(x-1) e^(-t) dt,
and it has the property that
Γ(x+1)=x Γ(x) for x > 0.
Applying this equation iteratively, we find that
Γ(x+n)/Γ(x)=(x+n-1)(x+n-2) … x.
Using this result, we can rewrite Eq. <ref> as
p_q = (1+k) Γ(q) Γ(2+k)/[Γ(1) Γ(2+k+q)].
By further employing Euler's formula
B(x, y)=Γ(x) Γ(y)/Γ(x+y),
Eq. <ref> can be simplified to
p_q=(1+k)/Γ(1) B(q, 2+k) .
Using Stirling’s approximation for the gamma
function, the beta function B(x, y) falls off as a power law for large values of x, with exponent y <cit.>,
B(x, y) ≃ x^-yΓ(y).
Applying this finding to Eq. <ref>, for large values of q, the distribution of participation frequency goes as
p_q∼ q^-γ=q^-(2+k)=q^-(2+c/m),
where the exponent γ is
γ=2+k=2+c/m.
Therefore, by only considering habit formation, represented by ϕ_i=q_i/∑_i ∈ I q_i, the model is able to generate data with a power-law distribution, where the power exponent is given by γ=2+c/m.
§.§.§ Experimental Results on the Real Dataset
We conducted experiments on real data, and the results show that HFBI is capable of generating data with only four parameters derived from the mean values of the empirical data and also exhibits good statistical significance.
The Kolmogorov-Smirnov (KS) test is used to assess whether the data generated by the model and empirical data are drawn from the same distribution. The KS statistic is a value that measures the maximum distance between two cumulative distribution functions (CDFs) of two samples, which is used to determine if two samples are drawn from the same underlying probability distribution or not. The null hypothesis is that the two distributions are identical. If p > 0.1, we cannot reject the null hypothesis, which suggests that the data generating process is plausible.
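This two-sample comparison is a one-liner with SciPy; a minimal sketch, with the 0.1 threshold taken from the criterion above:

```python
from scipy.stats import ks_2samp

def consistent_with_model(q_model, q_empirical, threshold=0.1):
    """Two-sample KS test between simulated and observed participation frequencies."""
    statistic, p_value = ks_2samp(q_model, q_empirical)
    return p_value > threshold, statistic, p_value
```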
The experiment is first performed on the largest-scale data, that is, the data up to the last activity node. The parameter values for c, m, and n are derived from the mean values of the data and are determined as 4, 33, and 731, respectively. In Fig. <ref>, a comparison is shown between the generated data from HFBI and the real data. It can be seen that the distribution of the simulated data and the real data are very close. The model achieves the best fit when α is set to 0.9. The α values within the range of 0 to 1 suggest that the results of the empirical distribution are attributed to the combined effects of both habit formation and behavioral inertia mechanisms. The habit formation mechanism described by Eq. <ref> can be demonstrated to generate data with a power-law distribution for γ=2+c/m, which is strictly greater than 2 and differs from the empirical data. The participation frequency with γ less than 2 implies that the frequency of participation in activities is slightly more than what can be explained by the habit formation mechanism alone. The behavioral inertia mechanism precisely compensates for this deficiency, as it captures the situation of individuals who have just participated in an activity being highly likely to continue participating in one or two due to inertia. It effectively adjusts the exponent while preserving the power-law distribution. It is the joint effect of both mechanisms that generate data that closely fit the empirical data.
The data produced by the model cannot include the extremely rare users who have engaged in activities excessively. One possible explanation is that these individuals usually have strong self-motivation to participate in activities, which cannot be captured by habit formation, as evidenced by the non-steady growth in the later stage of Fig. <ref>. In addition, since the parameters have to be integers and the number of users is kept consistent between the generated data and the empirical data, there is a small difference between the model's n and the actual number of activities. This is considered acceptable since the proportion of these individuals is extremely low.
To demonstrate its robustness, the model was also employed to fit the participation frequency up to each activity node. As the generated data can be slightly different each time, we conducted 5 runs for each possible value of α and selected the optimal α value with the highest average p-value among the 5 runs. The average p-values and corresponding optimal α of the model fits for the 323 samples are shown in Fig. <ref>. In Fig. <ref> and Fig. <ref>, the behavioral inertia mechanism is represented by (1/d_i)/∑_i ∈ I (1/d_i) and e^-d_i/∑_i ∈ I e^-d_i, respectively. It shows that different functional forms can achieve a good fit at different values of α. The model shows good fitting performance (p>0.1) for all empirical data samples, indicating its correctness and robustness. The range of α values from 0.69 to 1 suggests that the proportion of habit formation and behavioral inertia may vary in different situations. We can observe clear downward trends in α around activity nodes 450 to 600, indicating that the proportion of behavioral inertia gradually increases during this stage. Combining this with Fig. <ref>, it can be observed that there is also a decreasing trend of γ. This indicates that behavioral inertia can effectively help to capture situations with smaller γ.
§ PARTICIPATION PATTERNS AT THE INDIVIDUAL LEVEL
At the population level, the frequency of participation in activities follows a power-law distribution. At the individual level, the pattern of activity participation, specifically the intervals between each user's participation, is also worth studying. Similarly, we investigated the distribution of intervals between each individual's activity participation and discovered that they also exhibit a power-law distribution. In terms of activity participation patterns, it is a burst-quiet mode where individuals alternate between periods of high activity and periods of low activity.
§.§ The Burst-Quiet Pattern
The interval between an individual's participations is defined as the difference between the IDs of two consecutive activities in which they have participated, denoted by r. Considering the requirement of a sufficient amount of interval sequence data, we focused on 58 loyal users who participated in more than 100 activities for the individual-level analysis. Fig. <ref> shows an example of a real user's participation in activities. It is evident that the intervals of individual participation in activities vary greatly in size, with a majority being small and some being large. The participation of individuals is characterized by alternating bursts of high activity and long periods of low activity, similar to the outgoing mobile phone call sequence of an individual <cit.>. This burst-quiet pattern is common among the group of loyal users. We studied the interval sequences of all 58 users and discovered that they also follow power-law distributions (p>0.1 for 54 users, p>0.05 for all 58 users, r_min=1 for 48 users, and r_min=2 for 10 users).
The power-law distribution thus also governs the intervals of individual participation in activities. Fig. <ref> shows examples of complementary cumulative probability distributions of the intervals for three users. The participation intervals of each of the three individuals obey power-law distributions with different exponents. Fig. <ref> plots the probability distribution of the estimated power-law exponents γ for all loyal individuals, revealing a range from 1.6 to 3.25 and a mean of 2.35. Although their activity participation intervals all follow power-law distributions, the differences in the power-law exponent are quite significant. The range of γ is surprisingly consistent with that reported by Jiang et al. <cit.> for individuals whose intraday inter-call durations follow a power-law distribution. The probability distributions are also somewhat similar, which may suggest a potential connection between the intervals of different human behaviors.
§.§ The Role of Incentive Activities in Bursts
Burst, characterized by frequent participation in activities with short intervals within a specific period, has a significant impact on improving individuals' overall fitness level. Therefore, it is important to explore the factors associated with this pattern to promote physical activity among the population. In this study, a burst is defined as a period in which the interval between consecutive activities a user participates in is less than a threshold value Δ. The specific value of Δ is arbitrarily set in empirical analysis <cit.>.
Organizations often invest resources to provide incentives for activities to attract users to participate. Incentives are crucial in promoting physical activity. Typically, physical activity behavior is initially motivated by incentives, and as habits form, it shifts towards unconscious and automatic processes <cit.>. The effectiveness of incentives can be immediately reflected in the number of participants in the activity. However, the benefits in other aspects are yet to be discovered. Our study has made some findings by observing the position of incentive activities in bursts. At thresholds of Δ=8, 9, and 10, we identified a total of 433, 399, and 378 bursts for all individuals, respectively, and recorded the positions of the first occurrence of the incentive activity within each burst. As shown in Fig. <ref>, the majority of bursts are observed to start with incentive activities. Table <ref> shows the number and percentage of bursts with the first incentive activity appearing at the head position in the bursts. Over 50% of bursts have their first incentive activity in the first position, and over 65% in the first three positions at different Δ. Note that only about one in seven activities is incentivized. The proportion of incentive activities at the head of bursts is much higher than this baseline, indicating a correlation between the occurrence of incentive activities and bursts. This phenomenon suggests that in addition to increasing the number of participants in an activity, incentive activities may also play a role in calling users back from a quiet state to a burst state for sustained engagement.
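A sketch of this burst bookkeeping for a single user is shown below; how single-activity bursts are counted and how bursts without any incentive activity are treated are our assumptions.

```python
import numpy as np

def first_incentive_positions(activity_ids, incentive_ids, delta=8):
    """Positions (1-based) of the first incentive activity within each burst of one user.

    A burst is a maximal run of a user's activities in which consecutive
    participation intervals are smaller than `delta`; bursts containing no
    incentive activity contribute nothing to the returned list.
    """
    acts = np.sort(np.asarray(activity_ids))
    incentives = set(incentive_ids)
    cut = np.where(np.diff(acts) >= delta)[0] + 1   # start a new burst at gaps >= delta
    positions = []
    for burst in np.split(acts, cut):
        for pos, act in enumerate(burst, start=1):
            if act in incentives:
                positions.append(pos)
                break
    return positions
```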
§ RELATED WORK
Power law distributions have been observed in various domains and contexts, such as biology <cit.>, general science <cit.>, economics <cit.> and the social sciences <cit.>. Many human behaviors, such as the intervals between sending emails <cit.> and the pattern of phone calls <cit.>, have also been identified as following power-law distributions. Our work has discovered that the participation frequency of the population and the intervals between individual participation in activities exhibit power-law distributions in the context of community sports organizations.
Over the years, there have been continuous efforts to propose diverse models aimed at replicating and explaining data characterized by power-law distributions. Barabási proposed the classic preferential attachment model, which can generate data exhibiting a power-law distribution with an exponent of 3 <cit.>. There are also derivative models that can generate data with power-law distributions with exponents between 2 and 3 <cit.>. They have been widely used to explain the power-law distribution of node degrees observed in social networks. The decision-based queuing process <cit.> simulates the power-law distribution of waiting times for emails by randomly assigning priorities to each incoming task and following a rule of processing tasks in priority order. This suggests that the power-law distribution of waiting times for emails may be attributed to human decision-making based on priorities. The preferential attachment model suggests that the power-law distribution of node degrees in networks may be due to the preferential connection of newly added nodes to high-degree nodes in the network <cit.>. In our HFBI model, the habit formation mechanism exhibits similarities to the preferential attachment model and can be proven to generate data conforming to a power-law distribution. In addition, the behavioral inertia component of the HFBI model introduces effective modifications, leading to a slight decrease in the exponent of the data while preserving its essential power-law characteristics.
Community sports organizations have been receiving increasing attention for their significant contributions to public health and social harmony. Klenk et al. <cit.> investigated the participation of people with disabilities in community sports activities from three aspects: (1) social contacts, interactions, and friendships, (2) self-perception and identity formation, and (3) social acceptance, support, and embeddedness. Hanlon et al. <cit.> conducted a questionnaire survey to investigate the needs and initiatives for women's participation in community sports activities. Zhou et al's survey <cit.> revealed a correlation between the provision of community-sport services (both core and peripheral services) and participants’ satisfaction levels. To the best of our knowledge, there is no research that explores and comprehensively understands individual participation in community sports organizations from a data-driven and modeling approach.
§ CONCLUSION
Our study has identified new members of the power-law data family: a) the frequency of community sports participation among populations, and b) the intervals of individual activity participation. The participation frequency exhibits a power-law distribution with a tail cutoff and an exponent less than 2. We have proposed HFBI, a model based on habit formation and behavioral inertia, to uncover the underlying causes of this power-law distribution. In the model, the behavioral inertia mechanism effectively complements the habit formation mechanism, which alone can only generate power-law distributions with an exponent greater than 2. The model provides a robust fit to the empirical data. Furthermore, individual participation in community sports activities exhibits a burst-quiet pattern. Importantly, our study suggests that bursts of high activity are often driven by incentive activities, highlighting the importance of incentive activities in sustaining long-term physical activity behavior.
Our results have important implications for the design of interventions aimed at promoting sustainable physical activity behavior. Interventions can be better tailored to align with individuals' behavioral tendencies by gaining insights into habit formation, behavior inertia, and incentive activities. Additionally, the classic preferential attachment process restricts the power law exponent to γ>2 <cit.>, while many real-world networks exhibit γ<2 <cit.>. Our HFBI model based on habit formation and behavior inertia can be valuable in other domains where power-law distributions with low power-law exponents are observed, such as the population of cities <cit.>, short-message communication <cit.>, and corporate innovative patent counts <cit.>.
Despite the strengths of our study, there are limitations that should be noted. First, our study only focused on a sports community in a university, whose members are mostly well-educated university faculty and staff and may differ from the general population in their perception of self-motivated exercise. Further research is needed to understand how our study may be generalized to other community sports organizations. Secondly, the model cannot capture the behavior of the extremely rare individuals who engage in activities excessively. As reflected in the 80/20 rule reported in this study, active individuals make a significant contribution to community activity participation, and future research should pay more attention to this group.
In conclusion, our study provides novel insights into the principle underlying human participation in community sports activities and offers practical implications for the design of interventions to promote sustained physical activity behavior and human health. Our findings may also have broader implications for other fields where power-law distributions are commonly observed.
§ ACKNOWLEDGMENTS
We would like to thank every member of the SJTU Health Community for their selfless commitment in building a supportive community and providing help to those in need.
|
http://arxiv.org/abs/2307.05291v1 | 20230711143636 | An update of the catalog of radial velocity standard stars from the APOGEE DR17 | [
"Qing-Zheng Li",
"Yang Huang",
"Xiao-Bo Dong"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
Yang Huang
[email protected]
Qing-Zheng Li (ORCID: 0000-0002-4033-2208)
Yunnan Observatories, Chinese Academy of Sciences, Kunming, Yunnan 650011, People's Republic of China
School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
Yang Huang (ORCID: 0000-0003-3250-2876)
School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
Key Lab of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, People's Republic of China
Xiao-Bo Dong (ORCID: 0000-0002-2449-9550)
Yunnan Observatories, Chinese Academy of Sciences, Kunming, Yunnan 650011, People's Republic of China
We present an updated catalog of 46,753 radial velocity (RV) standard stars selected from the APOGEE DR17. These stars cover the Northern and Southern Hemispheres almost evenly, with 62% being red giants and 38% being main-sequence stars. These RV standard stars are stable on a baseline longer than 200 days (54% longer than one year and 10% longer than five years) with a median stability better than 215 m s^-1. The average number of observations of these stars is 5, and each observation is required to have a signal-to-noise ratio (SNR) greater than 50 and an RV measurement error smaller than 500 m s^-1. Based on the new APOGEE RV standard star catalog, we have checked the RV zero points (RVZPs) for current large-scale stellar spectroscopic surveys including RAVE, LAMOST, GALAH and Gaia. By careful analysis, we estimate their mean RVZPs to be +0.149 km s^-1, +4.574 km s^-1 (for LRS), -0.031 km s^-1 and +0.014 km s^-1, respectively, for the four surveys. In the RAVE, LAMOST (for MRS), GALAH and Gaia surveys, the RVZP exhibits systematic trends with stellar parameters (mainly [Fe/H], T_eff, log g, G_BP-G_RP and G_RVS). The corrections of these small but clear RVZPs are of vital importance for these massive spectroscopic surveys in various studies that require extremely high radial velocity accuracy.
§ INTRODUCTION
The velocity component of a star in the line-of-sight direction can be defined by the Doppler shift of the spectrum captured by the telescope. It can be converted into the framework of the Solar System's center of mass, called the “barycentric" or “heliocentric" radial velocity (RV). The “barycentric" or “heliocentric" RV represents the rate of change of the distance between the Sun and the star <cit.>. The measurement of RV is essential to the construction of a complete stellar 6D information (3D position and 3D velocity). Its accuracy is required to be better than several km s^-1, or even a few m s^-1, for various Galactic studies such as understanding the structure and assembly history of the Milky Way <cit.>, estimating the mass of the Milky Way <cit.>, defining orbital parameters and characteristics of binary systems <cit.>, identification of exoplanets <cit.> and systematic searching for hypervelocity stars <cit.>.
In the past decades, RVs have been measured for tens of millions of stars from a series of large-scale spectroscopic surveys, including the ground-based surveys, such as the GALAH <cit.>, the SDSS/APOGEE <cit.>, the RAVE <cit.>, the SDSS/SEGUE <cit.>, the LAMOST <cit.>, and the space-based surveys, i.e., the Gaia-RVS <cit.>. In the near future, more stellar RVs will be obtained thanks to the ongoing/planned massive spectroscopic surveys, such as the SDSS-V <cit.>, 4MOST <cit.>, and DESI <cit.>.
The measurement of RV can be influenced by various factors, including the type of instrument, the spectral resolution, the accuracy of wavelength calibration, the methodology used to derive RV, and even observation conditions and environments. These factors can lead to significant variations in RV measurements. To correct for these effects, it is necessary to construct a set of RV standard stars which are stable enough in a long observation baseline.
At present, tens of thousands of RV standard stars have been defined by various efforts, including about 5000 bright RV standard stars with an extreme stability of 15 m s^-1 over an average baseline of six years constructed by a long monitoring project <cit.> and over 18,000 standard stars with a median stability of 240 m s^-1 over a one-year baseline, covering a large color and magnitude range, constructed from the APOGEE DR14 <cit.>. However, the spatial number density of current RV standard stars is too low to calibrate the RV zero points (RVZPs) of the RV measurements from future massive spectroscopic surveys.
This paper is an update of . Thanks to the long-term repeated observations and more southern stars observed during SDSS-IV, the number of RV standard stars has been trebled, with a much larger sky coverage, from the APOGEE DR17, compared to the previous version of . The paper is structured as follows. In Section <ref>, we correct the possible RVZPs of the RV measurements of APOGEE DR17. In Section <ref>, we describe the details of the selection of RV standard stars from the APOGEE DR17. In Section <ref>, we use the selected APOGEE RV standard stars to calibrate the RVZPs of the RAVE, GALAH, LAMOST and Gaia surveys. Finally, we conclude in Section <ref>.
§ CORRECTIONS OF APOGEE RV MEASUREMENTS
As found in , the measured RVs of the APOGEE surveys exhibit a systematic trend as a function of T_ eff. To correct this trend, 1611 reference RV standard stars, collected from the literature <cit.>, were adopted to calibrate the RVZP of APOGEE DR14 <cit.>. In this paper, we apply these reference RV standard stars to check the RVZP of the APOGEE DR17 <cit.>. The details of the compilation of these reference RV standard stars are described in . Generally, the RVs of these 1611 reference stars are required to have a stability better than 100 m s^-1 over a baseline of at least one year.
By cross-matching the 1611 reference RV standard stars with APOGEE DR17, 118 common stars are found to check the RVZP of APOGEE DR17.
The RV differences between APOGEE and the reference RV standard stars show a significant systematic trend with stellar effective temperature (see Fig. <ref>).
To describe this trend, a simple linear fit is adopted:
Δ RV = - 1.2146 + 0.2885 × (T_ eff/10^3 K) km s^-1.
The coefficients found here are similar to those reported in , implying the robustness of the instruments and the RV measurements.
We also cross-matched the 4813 RV standard stars constructed by <cit.> with APOGEE DR17, leaving 205 common stars. The systematic trend found with these common stars is generally consistent with that shown in Fig. <ref> for stars with T_ eff > 4000 K. For cooler stars with T_ eff < 4000 K, few stars are found.
§ APOGEE RADIAL VELOCITY STANDARD STARS
§.§ APOGEE Survey
APOGEE <cit.> is a large-scale high-resolution (R∼ 22,500) spectroscopic survey in the near-infrared (H band; 1.51-1.70 μm), carried out with the 2.5-meter Sloan Foundation Telescope <cit.> and the 1-meter NMSU Telescope <cit.> at the Apache Point Observatory (APO) in the Northern Hemisphere, and the 2.5-meter Irénée du Pont Telescope <cit.> at the Las Campanas Observatory (LCO) in the Southern Hemisphere. The APOGEE survey is an important part of the SDSS-III <cit.> and SDSS-IV <cit.> programs. The APOGEE survey is called “APOGEE" or “APOGEE-1" in SDSS-III, and it is called “APOGEE-2" in SDSS-IV. APOGEE-1 started its data collection in 2011 and ended in 2014. The SDSS DR10 publicly released the APOGEE-1 three-year dataset, which was subsequently followed by two additional releases in 2015 and 2016. This accomplishment successfully fulfilled the stated objective of observing over 100,000 stars with a limiting magnitude of H=12.2 mag and spectral signal-to-noise ratio (SNR) greater than 100. APOGEE-2 is a constituent program of the SDSS-IV initiative, which commenced in 2014 and finished in 2021. In addition to collecting data in the Northern Hemisphere, APOGEE-2 also adopted the 2.5-meter Irénée du Pont Telescope mounted at the LCO to expand the observational sky coverage to the Southern Hemisphere. The recently released SDSS DR17 <cit.> includes the latest version of the APOGEE survey, covering more than 657,000 stars, and this version is also the final version of all APOGEE-1 and APOGEE-2 data. The measurement of RV has an uncertainty of 100 m s^-1 and a zero point offset of 500 m s^-1 <cit.>. Typical uncertainties for T_eff, log g, and [Fe/H] are better than 150 K, 0.2 dex, and 0.1 dex, respectively <cit.>.
§.§ Selecting RV standard stars from the APOGEE DR17
Following , to select RV standard stars from APOGEE DR17, we defined the weighted mean RV (RV), the internal error of RV (I_ ERV), the RV weighted standard deviation (σ_ RV^2), and the uncertainty of the mean RV (σ_ RV) for each star separately:
* RV = ∑_i=1^n RV_iw_i/∑_i = 1^nw_i, where w_i is the weight assigned by the individual RV measurement error ϵ_i, that is, 1/ϵ_i^2, n is the total number of observations;
* I_ ERV = ∑_i=1^nϵ_iw_i/∑_i = 1^nw_i;
* σ_ RV^2 = [∑_i=1^nw_i/((∑_i=1^nw_i)^2-∑_i=1^nw_i^2)] ∑_i=1^nw_i( RV_i - RV)^2;
* σ_ RV= max(σ_ RV/√(n), I_ ERV/√(n)), i.e., the larger of the weighted standard deviation and the internal error, each divided by √(n).
We utilize the symbol Δ T to denote the time baseline and MJD to represent the mean Modified Julian Date of the n observations. We then selected RV standard stars based on the following criteria: Δ T >200 days, n≥3, SNR_ low≥50 and σ_ RV≤ 200 m s^-1, where SNR_ low represents the lowest SNR among the multiple spectroscopic visits of each star.
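The per-star statistics and selection cuts defined above translate directly into code; a minimal sketch follows (units in km s^-1, so the 200 m s^-1 limit becomes 0.2; we interpret the limit as applying to the uncertainty of the mean defined in the last item above).

```python
import numpy as np

def rv_statistics(rv, err):
    """Weighted mean RV, internal error, weighted scatter and uncertainty of the mean."""
    rv, err = np.asarray(rv, float), np.asarray(err, float)
    w = 1.0 / err**2
    mean_rv = np.sum(rv * w) / np.sum(w)
    i_erv = np.sum(err * w) / np.sum(w)
    # unbiased weighted variance (requires at least two visits)
    var = np.sum(w) / (np.sum(w)**2 - np.sum(w**2)) * np.sum(w * (rv - mean_rv)**2)
    sigma_rv = np.sqrt(var)
    n = rv.size
    sigma_mean = max(sigma_rv / np.sqrt(n), i_erv / np.sqrt(n))
    return mean_rv, i_erv, sigma_rv, sigma_mean

def is_rv_standard(rv, err, snr, mjd, sigma_limit=0.2):
    """Selection cuts: baseline > 200 d, n >= 3, SNR_low >= 50, sigma <= 200 m/s."""
    mjd = np.asarray(mjd, float)
    *_, sigma_mean = rv_statistics(rv, err)
    return (mjd.max() - mjd.min() > 200 and len(rv) >= 3
            and np.min(snr) >= 50 and sigma_mean <= sigma_limit)
```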
Through the above cuts, a total of 46,753 APOGEE RV standard stars were selected. The spatial distribution is shown in Fig. <ref>, with a full sky coverage. The distributions of time baseline Δ T, number of observations n and σ_ RV for these RV standard stars are shown in Fig. <ref>. Their Δ T are all greater than 200 days (54% longer than one year and 10% longer than five years). The average number of observations for these stars is 5. The median σ_ RV of all standard stars is 71.75 m s^-1, corresponding to a median stability (3σ_ RV) of 215.25 m s^-1, better than 240 m s^-1 of the sample. We show the color-(absolute) magnitude distributions of these stars, see Fig. <ref> for H against J-K_s, and Fig. <ref> for absolute M_G against G_BP-G_RP. Due to the selection effect of the APOGEE survey, 81% of RV standard stars are redder than 0.5 on the J-K_s. The Fig. <ref> contains 30,268 RV standard stars with measurable distances, all of which have been corrected for interstellar extinction using the 2D dust map from <cit.>. We employ the empirical relation M_G=3.53× (G_BP-G_RP)-0.06 to distinguish giants from main-sequence dwarf stars (see dashed line). Amongst them, 62% are red giants and 38% are main-sequence dwarf stars. We list 46,753 APOGEE RV standard stars including name, H, J-K_s, T_eff, RV (after RVZP correction by Equation <ref>), I_ERV, σ_RV, n, σ_ RV, Δ T and mean MJD information in Table <ref>.
§ CALIBRATIONS OF RADIAL VELOCITY SCALES FOR LARGE-SCALE STELLAR SPECTROSCOPIC SURVEYS
Next, we use these 46,753 APOGEE RV standard stars to check the RVZPs of the RAVE, LAMOST, GALAH, and Gaia surveys. The calibration results are shown in Figs. <ref>, <ref> and Table <ref>.
(i) The RAVE survey: The RAVE survey has collected 520,781 medium-resolution (R ∼ 7500) spectra centered on the Ca I triplet (8410–8795Å) range. The survey has released 457,588 individual stars randomly selected from the Southern Hemisphere stars with 9 < I < 12 using the multi-object spectrograph 6dF on the Australian Astronomical Observatory's 1.2m UK Schmidt Telescope. Estimations of RV, atmospheric parameters (T_ eff, log g and [Fe/H]), and α element abundances were described in <cit.>.
We cross-matched RAVE DR5 with our 46,753 APOGEE RV standard stars, resulting in a total of 1284 common stars with SNR>10. The comparisons show a mean ΔRV (APOGEE RVs minus RAVE) of +0.149 km s^-1, with a standard deviation of 1.358 km s^-1. We show the systematic trends of ΔRV with T_ eff, log g, [Fe/H] and SNR in Fig. <ref>. There is no obvious systematic trend of ΔRV with T_ eff, log g and SNR, however, ΔRV shows a weak linear trend with [Fe/H]. This trend can be described by ΔRV=0.1058 + 0.4175× [Fe/H] (see Table <ref>).
(ii) The LAMOST survey: LAMOST is a 4-meter quasi-meridian reflecting Schmidt telescope <cit.>. The telescope is equipped with 4000 fibers distributed in a field of view with a diameter of 5^∘. Within one exposure, LAMOST can obtain 4000 optical low-resolution spectra (LRS; R∼2000; with wavelength coverage between 3700 and 9000Å) or medium-resolution spectra (MRS; R∼7500; with two wavelength windows of 4950-5350Å and 6300-6800Å, respectively).
Over ten million LRS spectra have been released in the recent DR9 of LAMOST (<http://www.lamost.org/dr9/v1.1/>). A total of 7,060,436 stars in the AFGK Stellar Parameters catalog of LAMOST DR9 LRS have measurements of RV and stellar atmospheric parameters, which are derived by the official stellar parameter pipeline: the LAMOST Stellar Parameter Pipeline <cit.>. To check the RVZP of the LAMOST LRS RVs, we cross-matched the LAMOST DR9 AFGK Stellar Parameters catalog with the RV standard stars. 15,600 common stars are found with SNR>10 (average SNR=96, see Table <ref>). The comparisons show a mean ΔRV (APOGEE RVs minus LAMOST) of +4.574 km s^-1 and a scatter of 3.844 km s^-1. No obvious systematic trends with T_ eff, log g, [Fe/H] and SNR are detected for the LAMOST LRS RVs (see middle panels of Fig. <ref>).
The MRS parameter catalog released in LAMOST DR9 contains measurements of stellar atmospheric parameters and RV for over 1.6 million stars from 8 million MRS spectra. To check the RVZPs of the LAMOST MRS RVs, we cross-matched the LAMOST DR9 MRS parameter catalog with the RV standard stars, resulting in 6,431 common stars with SNR>10 (see Table <ref>). By comparing their RVs (measurements from LASP) with the standard stars, multiple peaks are found in the RV difference distribution. These peaks are dominated by two main ones, one occurring before October 19, 2018 (MJD=58,410) and another after this date. Prior to October 19, 2018, the mean ΔRV (APOGEE RVs minus LAMOST) was 6.843 km s^-1 with a standard deviation of 1.202 km s^-1, while after that date, the mean ΔRV was 0.727 km s^-1 with a standard deviation of 1.183 km s^-1. The main reason for such a significant transition in mean ΔRV arises from the use of different wavelength calibration lamps. Prior to October 19, 2018, the Sc lamp was employed to calibrate the wavelength of the LAMOST test observation spectra, whereas the Th–Ar lamp has been used since then <cit.>. LAMOST MRS provides zero-point corrected RV measurements, with the aforementioned offsets largely corrected. If we consider only the formal survey, which started on October 19, 2018, the offset-corrected RVs from LAMOST MRS agree very well with those of the APOGEE RV standard stars, with a nil zero-point and a scatter of 1.05 km s^-1 (see middle panel of Fig. <ref>). However, the mean RV differences still show a systematic trend with T_eff (see middle panels of Fig. <ref>). This trend can be described by a fourth-order polynomial in T_eff (see Table <ref>).
(iii) The GALAH survey: The GALAH survey is a large-scale stellar spectroscopic survey. The aim is to collect high-resolution (R = 28,000) spectra of approximately one million stars in the optical band (four discrete optical wavelength ranges: 4713–4903Å, 5648–5873Å, 6478–6737Å, and 7585–7887Å) using the HERMES spectrograph installed on the 3.9 m Anglo-Australian Telescope (AAT) at the Siding Spring Observatory <cit.>. In the third data release <cit.>, GALAH provided a total of 678,423 spectra of 588,571 unique stars, including measurements RVs, stellar atmospheric parameters and individual element abundances.
We cross-matched the APOGEE RV standard stars with GALAH DR3 to examine the RVZP of the GALAH RVs. A total of 1839 common stars with SNR>10 were found, with an average SNR of 40 (Table <ref>). The mean value and standard deviation of the ΔRV (APOGEE RVs minus GALAH) are -0.031 km s^-1 and 0.299 km s^-1, respectively. Fig. <ref> (bottom panel) shows the systematic trends of ΔRV with T_ eff, log g, [Fe/H] and SNR. It can be seen from the plot that ΔRV has no trend with [Fe/H] and SNR. However, ΔRV exhibits a curved trend with both T_ eff and log g. Through our validation, we find that the systematic trend of the GALAH RVs is dominated by log g. The trend can be described by a sixth-order polynomial in log g, the coefficients of which are presented in Table <ref>.
(iv) The Gaia survey: The European Space Agency (ESA) satellite Gaia <cit.> recently released the Data Release 3 <cit.>, which provides astrometric and photometric data for more than 1.8 billion sources. Compared with Gaia DR2, Gaia DR3 provides more than 33 million stars with measurements of RV <cit.> and more than 470 million stars with measurements of atmospheric parameters <cit.>. The median value of RV measurement accuracy is 1.3 km s^-1 at G_RVS=12 mag and 6.4 km s^-1 at G_RVS=14 mag. The RVZP of the Gaia DR2 has a systematic trend with G_RVS, which shows ΔRV = 0 km s^-1 at G_RVS=11 mag and ΔRV=0.40 km s^-1 at G_RVS=14 mag <cit.>.
To check the RVZP of the Gaia DR3, we cross-matched our APOGEE RV standard stars with Gaia DR3 to obtain 43,214 common stars with 3200 K<T_eff<6400 K and SNR>5. The comparison shows a tiny offset of +0.014 km s^-1 (APOGEE RVs minus Gaia), with a small scatter of 0.561 km s^-1. The systematic trend of ΔRV with color G_BP-G_RP, magnitude G_RVS and number of transits (N_obs) is shown in Fig. <ref>. Significant systematic trends of ΔRV with color and magnitude are detected. For G_BP-G_RP, systematic deviations are clearly detected at G_BP-G_RP<1 mag and G_BP-G_RP>2 mag. For G_RVS, a significant systematic trend is found at G_RVS>11 mag. We first adopted a fourth-order polynomial to correct the trend with G_RVS. After correcting the G_RVS-dependent systematics, the trend with G_BP-G_RP was further corrected by a fourth-order polynomial fit. The resulting coefficients are presented in Table <ref>. It is worth noting that <cit.> also identified the G_RVS-dependent trend and provided a second-order polynomial to describe it for G_RVS>11 mag (as shown in Fig. <ref>). That second-order polynomial can partially capture the systematic trend we discovered, but it cannot correct the trend at the faint end (G_RVS>13 mag).
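As an illustration of this two-step correction, the minimal Python sketch below fits and applies the sequential polynomial zero-points; the array names (drv, grvs, bp_rp) are placeholders for the matched-catalog columns, and only the fourth-order choice described above is assumed.
```python
import numpy as np
from numpy.polynomial import polynomial as P

# drv = RV(APOGEE standard) - RV(Gaia) for the common stars; grvs and bp_rp are
# the Gaia G_RVS magnitudes and BP-RP colours.  All array names are placeholders.
def fit_rvzp(drv, grvs, bp_rp, deg=4):
    """Fit the G_RVS-dependent zero-point first, then the colour-dependent
    residual trend, each with a fourth-order polynomial."""
    c_grvs = P.polyfit(grvs, drv, deg)
    resid = drv - P.polyval(grvs, c_grvs)
    c_col = P.polyfit(bp_rp, resid, deg)
    return c_grvs, c_col

def correct_rv(rv_gaia, grvs, bp_rp, c_grvs, c_col):
    """Shift the Gaia RVs onto the APOGEE standard-star scale."""
    zp = P.polyval(grvs, c_grvs) + P.polyval(bp_rp, c_col)
    return rv_gaia + zp
```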
Based on the common stars between our APOGEE RV standard stars and Gaia DR3, we study the precision of the Gaia RV measurements. As shown in the bottom panels of Fig. <ref>, the precision of the Gaia RV measurements generally degrades with increasing T_eff and G_RVS and improves slightly with increasing [Fe/H]. The RV precision in the bright range (G_RVS < 10 mag) is several hundred m s^-1, and a few km s^-1 at the faint end (G_RVS > 12 mag). This result is consistent with the prediction of <cit.>.
§ SUMMARY
We have constructed a catalog of 46,753 RV standard stars from the 657,000 near-infrared (H band; 1.51–1.70 μm) high-resolution (R ∼ 22 500) spectra provided by APOGEE DR17. They are almost evenly distributed in the Northern and Southern Hemispheres, with 62% red giants, and 38% main-sequence dwarf stars. They were observed with a time baseline of at least 200 days (54% longer than one year and 10% longer than five years) and were observed more than 3 times. The median RV stability was 215.25 m s^-1. Using the catalog of RV standard stars, we calibrated the RVZPs of four large-scale stellar spectroscopic surveys: the RAVE, LAMOST, GALAH and Gaia. By careful comparisons, we found the mean RVZPs are +0.149 km s^-1, +4.574 km s^-1 (for LRS), -0.031 km s^-1 and +0.014 km s^-1, for RAVE, LAMOST, GALAH and Gaia, respectively. In addition to an overall constant offset, RVZPs of part of these surveys show moderate dependences on stellar parameters (e.g., Teff, log g, [Fe/H], color or magnitude). We further provide corrections by simple polynomial fits with coefficients listed in Table <ref>. Our studies show that the small but clear RVZPs in these large-scale spectroscopic surveys can be well detected and properly corrected by our RV standard stars, which is believed to be useful for their further applications in various studies. The complete APOGEE RV standard star catalog in Table <ref> is publicly available on the <https://nadc.china-vo.org/res/r101244/>.
This work is supported by National Key R & D Program of China No. 2019YFA0405500, and National Natural Science Foundation of China grants 11903027, 11973001, 11833006, U1731108, 12090040, 12090044.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org.
astropy <cit.>, TOPCAT <cit.>
|
http://arxiv.org/abs/2307.05274v1 | 20230711140606 | Variability, flaring and coherence -- the complementarity of the maser and superradiance regimes | [
"Martin Houde",
"Fereshteh Rajabi",
"Gordon C. MacLeod",
"Sharmila Goedhart",
"Yoshihiro Tanabe",
"Stefanus P. van den Heever",
"Christopher M. Wyenberg",
"Yoshinori Yonekura"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Houde et al.
Variability, flaring and coherence – the maser and superradiance regimes
17
2023
TBD
Proceedings IAU Symposium
T. Hirota, H. Imai, K. Menten, & Y. Pihlström, eds.
^1The University of Western Ontario, London, Ontario N6A 3K7, Canada
^2McMaster University, Hamilton, Ontario L8S 4M1, Canada
^3The Open University of Tanzania, P.O. Box 23409, Dar-Es-Salaam, Tanzania
^4Hartebeesthoek Radio Astronomy Observatory, P.O. Box 443, Krugersdorp, 1741, South Africa
^5South African Radio Astronomy Observatory, 2 Fir Street, Black River Park, Observatory 7925, South Africa
^6Center for Space Research, North-West University, Potchefstroom Campus, Private Bag X6001, Potchefstroom 2520, South Africa
^7Center for Astronomy, Ibaraki University, 2-1-1 Bunkyo, Mito, Ibaraki 310-8512, Japan
^8Institute for Quantum Computing and Department of Physics and Astronomy, The University of Waterloo, 200 University Ave. West, Waterloo, Ontario N2L 3G1, Canada
We discuss the role that coherence phenomena can have on the intensity variability of spectral lines associated with maser radiation. We do so by introducing the fundamental cooperative radiation phenomenon of (Dicke’s) superradiance and discuss its complementary nature to the maser action, as well as its role in the flaring behaviour of some maser sources. We will consider examples of observational diagnostics that can help discriminate between the two, and identify superradiance as the source of the latter. More precisely, we show how superradiance readily accounts for the different time-scales observed in the multi-wavelength monitoring of the periodic flaring in G9.62+0.20E.
Radiative processes: non-thermal – masers – ISM: molecules – ISM: G9.62+0.20E
Variability, flaring and coherence – the complementarity of the maser and superradiance regimes
Martin Houde^1,
Fereshteh Rajabi^2,
Gordon C. MacLeod^3,4,
Sharmila Goedhart^5,6,
Yoshihiro Tanabe^7,
Stefanus P. van den Heever^4,
Christopher M. Wyenberg^8,
and Yoshinori Yonekura^7
August 12, 2023
§ INTRODUCTION
The monitoring of astronomical objects harboring sources of maser radiation reveals ubiquitous variability in the measured intensities over a range of time-scales and light curve patterns. Most intriguing are strong flaring behaviours, which can consist of isolated or recurring events. Periodic flaring sources are of particular interest because of, among other things, their potential for shedding light on the nature of the engines at the root of the observed periodicities. Accordingly, a growing number of such objects have been discovered and closely monitored with several spectral transitions in the recent past.
Recurring and periodic flaring sources also often display behaviours that can seem difficult to explain theoretically. Examples can involve one spectral transition (observed at different systemic velocities) or the comparison between several lines. Figure <ref> shows the case of a recurring flare observed by <cit.> in the methanol 6.7 GHz transition for the G33.64–0.21 high mass star-forming region. Although six different masing spectral features are detected in this line, only one (Feature II) shows two strong flares with an amplification of a factor of approximately eight relative to the quiescent flux level and exceedingly fast rise times. The challenge in making sense of these observations, besides the nature of the flares themselves and their source, resides in explaining why features from the same spectral transition but at different velocities could exhibit such vastly different responses to a common excitation.
Figure <ref> shows observations obtained by <cit.> in methanol 6.7 GHz and water 22 GHz during a monitoring campaign of the G107.298+5.639 star-forming region (see also ). This source exhibits flaring in multiple spectral transitions at a common period of 34.4 d. More interestingly, and as shown in the figure, the methanol 6.7 GHz and water 22 GHz flares are seen to alternate with the flux density of one transition peaking when the other reaches a minimum. Also to be noted are the different time-scales of the flares, where the methanol 6.7 GHz flares are consistently of shorter duration than the water 22 GHz features. While the alternation of the flares between the two species can perhaps be explained by the effect of an infrared (methanol-) pumping source on the dust temperature and its quenching effect on the water 22 GHz population inversion <cit.>, their different time-scales are more problematic.
There are also more fundamental questions pertaining to the physics underlying the flaring phenomenon. That is, when subjected to an excitation signal (e.g., the infrared pump source or a seed electric field signal for a maser-hosting region) a physical system will often exhibit a transient response before settling into a steady-state regime. It has long been established in the quantum optics community that for an ensemble of radiators (i.e., atoms or molecules) exhibiting a population inversion the two regimes are associated with different radiation processes. That is, stimulated emission rules the quasi steady-state regime while Dicke's superradiance is at play during the transient phase <cit.>. One may wonder whether this dichotomy also applies to astrophysical sources and could be essential for explaining some of the characteristics of flares emanating from maser-hosting sources.
In this review, we will use observations from multi-transition monitoring of the G9.62+0.20E high mass star-forming region to probe the physics underlying the periodic flaring observed in this source and determine whether different processes characterize the bursting and quiescent phases. The paper is structured as follows. We first introduce Dicke's superradiance and its study at the laboratory level in Sec. <ref>, while we discuss the association of the transient and quasi steady-state regimes with the superradiance and maser phenomena, respectively, as well as their manifestation as two independent limits of the Maxwell-Bloch equations in Sec. <ref>. Finally, in Sec. <ref> we model existing multi-transition data obtained for G9.62+0.20E and show that superradiance can naturally account for the diversity of time-scales observed in this source.
§ DICKE'S SUPERRADIANCE
Given that masers were first discovered in the laboratory <cit.>, it is natural to ask whether scientists in the quantum optics community have studied or performed experiments on “flaring” with such systems. The answer to this question is a resounding “yes.” This is due to the introduction of the superradiance phenomenon by <cit.>, which, it is interesting to note, actually predates that of the maser. In his foundational paper, Dicke pointed out that molecules in a gas (atoms could also be used but we focus on molecules as we will be dealing with corresponding transitions in this review) may not always radiate independently, whereas their interaction with a common electromagnetic field will bring a quantum mechanical entanglement between them. Dicke thus treated the ensemble of molecules as a “single quantum mechanical system” and proceeded to calculate the spontaneous emission rate for the gas, focusing on a single spectral transition.
Under conditions where velocity coherence and population inversion prevail, he showed that the commonly assumed independent spontaneous emission process is replaced by a transient phenomenon he termed “superradiance.” With superradiance, radiation from the ensemble of molecules proceeds through a photon cascade. For example, when the N≫ 1 molecules are all in the excited state the emission of photons takes place at transition rates ranging from NΓ_0 (at the beginning and the end of the cascade) to ∼ N^2Γ_0/4 (in the middle of the cascade), with Γ_0 the single-molecule transition rate (i.e., the spontaneous emission rate of the transition). This results in powerful bursts of radiation intensity lasting for a time on the order of T_R∝ 1/(NΓ_0)[We will soon give a more precise definition of the superradiance time-scale.], while the coherent nature of the radiation and of the underlying process is attested by the scaling of the peak intensity with N^2. Evidently, coherence in the radiation cannot occur in all directions whenever the molecules are separated by a finite distance. In this case, Dicke showed that superradiance happens only in well-defined radiation modes with successive photons exhibiting increasing correlation in their propagation direction, and therefore results in highly collimated beams of radiation. In the process, Dicke thus introduced and described the “photon bunching” phenomenon before it became established within the physics community <cit.>.
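These limiting rates follow from the standard Dicke-state result that the emission rate at step m of the cascade is Γ_0(J+m)(J-m+1) with J=N/2; the short Python sketch below, with Γ_0 set to unity, recovers the NΓ_0 and ∼ N^2Γ_0/4 values quoted above.
```python
import numpy as np

N = 1000                      # molecules in the sample (illustrative value)
Gamma0 = 1.0                  # single-molecule spontaneous emission rate (arbitrary units)
J = N / 2
m = np.arange(J, -J, -1.0)    # Dicke quantum number, from all excited (m = J) down to m = -J + 1
rate = Gamma0 * (J + m) * (J - m + 1)   # emission rate for the cascade step m -> m - 1

print(rate[0], rate[-1])      # N * Gamma0 at the beginning and at the end of the cascade
print(rate.max(), N**2 / 4)   # ~ N^2 Gamma0 / 4 near the middle of the cascade
```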
While, as previously mentioned, the theoretical introduction of superradiance preceded that of the maser in the literature, its experimental verification comparatively lagged significantly as we had to wait for almost 20 years before it could be realized in the laboratory. The left panel of Figure <ref> shows the first experimental verification of superradiance obtained by <cit.> in a rotational transition of a vibrationally excited HF gas. A near complete population inversion was achieved with an intense pump pulse at λ∼ 2.5 μm (the small peak at τ=0 in the top graph), after which followed a superradiance burst at 84 μm. The time delay between the pump pulse signal and the intensity burst, as well as its “ringing,” are telling characteristics of superradiance for these experimental conditions. Superradiance has since become an area of sustained and intense research in the quantum optics community. It has been tested and used to probe fundamental physics questions for a wide range of systems under a vast array of conditions (see Chap. 2 of for an early summary). An example of a recent experiment by <cit.> is shown in the right panel of Figure <ref>, where the superradiance responses from assemblies containing different numbers of laser-cooled ^87Rb atoms were probed and compared for different types of pumping conditions.
It is interesting to note that despite the intense amount of research accomplished in quantum optics laboratories over several decades, superradiance has remained largely unnoticed by the astronomical community until the recent work of <cit.> (see also ).[Dicke's superradiance, as discussed here, should not be confused with “black hole superradiance,” which is connected to Hawking radiation; see for a review of black hole superradiance and its historical connection to Dicke's superradiance.]
§ THE COMPLEMENTARITY OF THE MASER AND SUPERRADIANCE REGIMES
The temporal evolution of a gas, whose components (e.g., molecules) are modeled as two-level systems, interacting with an ambient electromagnetic field is described by the so-called Maxwell-Bloch equations (MBE). The application of the MBE is not limited to systems hosting a population inversion but is general and applies to a wide range of conditions where numerous fundamental physical phenomena can be theoretically studied and modeled. For the problem at hand, we will consider a gas whose (single) molecular constituents exhibit velocity coherence and occupy a linear volume configuration, e.g., a circular or rectangular cylinder as shown in Figure <ref>, and focus on a single transition between two specific energy levels.
While the MBE can in principle be numerically solved in three dimensions, it is simpler (both analytically and computationally) and adequate to limit the analysis to a one-dimensional problem <cit.>. With this approximation the only remaining spatial variable, z, is that defining the symmetry axis of the cylinder while the temporal evolution is tracked as a function of the retarded time τ=t-z/c. The three-dimensional reality of the problem is recovered “by hand” after the computations are effected by imposing a Fresnel number of unity, where the width of the cylinder is set to w=√(λ L) with L the length of the system. This condition limits the radiation intensity of the one-dimensional system to that contained to a solid angle ΔΩ∼(λ/w)^2 at one end-fire of the cylinder (see the bottom schematic in Figure <ref>) and restricts phase coherence to a volume of physically allowable dimensions. We will refer to such system as a “sample.”
Under the slowly varying envelope (SVEA) and rotating wave approximations, the one-dimensional MBE in the rest frame of the gas take the form
∂ n^'/∂τ = i/ħ(P^+E^+-E^-P^-)-n^'/T_1+ Λ_n
∂ P^+/∂τ = 2id^2/ħE^-n^'-P^+/T_2+Λ_P
∂ E^+/∂ z = iω_0/2ϵ_0c P^-,
where n^' is (half of) the population density difference between the upper and lower energy levels, while P^+ and E^+ are the amplitudes of the molecular polarization and the electric field, respectively defined by
𝐏^±(z ,τ) = P^±(z ,τ) e^± iω_0τϵ_d
𝐄^±(z ,τ) = E^±(z ,τ) e^∓ iω_0τϵ_d.
The unit polarization vector ϵ_d = 𝐝/d is associated with the molecular transition of dipole moment d=| 𝐝| at frequency ω_0. The superscript “+” in equations (<ref>)-(<ref>) is for the polarization corresponding to a transition from the lower to the upper level and the positive frequency component of the electric field <cit.>. Any non-coherent phenomena (e.g., collisions) that cause relaxation of the population density and de-phasing of the polarization are respectively accounted for through the phenomenological time-scales T_1 and T_2. The evolution of the system is initiated through internal fluctuations in n^' and P^+ modeled with an initial non-zero Bloch angle
θ_0=2/√(N),
with N the number of molecules in the sample (see for more details). The polarization pump Λ_P in equation (<ref>) consists entirely of those fluctuations (i.e., for P^±). The inversion pump Λ_n in equation (<ref>) is for an effective pump signal that directly drives a population inversion.
In this section we will endeavour to establish the complementarity between the maser action and superradiance, and demonstrate how they arise from the MBE. To do so, we will follow the discussion presented in Sec. 3.1 of <cit.> for a system that is “instantaneously inverted” at τ=0. That is, we set the inversion pump to
Λ_n(z,τ) = 0 for τ < 0 and Λ_n(z,τ) = Λ_0 for τ≥ 0,
with Λ_0 a constant level, and fix an initial population inversion level n^'_0 ≡ n^'(z,τ=0)=Λ_0 T_1 at all positions z. The inversion pump signal would then compensate for any non-coherent relaxation in the weak intensity limit (i.e., when ∂ n^'/∂τ≈ 0), while setting a non-zero initial population inversion level is equivalent to having an instantaneous inversion at τ=0.[The nature of Λ_n will change in the next section when modeling data for G9.62+0.20E.] With this set-up, we let the system evolve and monitor n≡ 2n^', P^+ and E^+ at all positions z and retarded times τ.
We show in Figure <ref> the evolution at two positions of such a sample for a methanol gas at the 6.7 GHz spectral transition. For this example we set L=2× 10^15 cm and n_0≡ 2n_0^'=3.3× 10^-12 cm^-3, which corresponds to a population inversion of approximately 0.1 cm^-3 for a 1 km s^-1 velocity distribution. These two parameters, along with the Einstein spontaneous emission coefficient for that line (Γ_6.7 GHz=1.56× 10^-9 s^-1), set the superradiance characteristic time-scale for this sample through
T_R = 8πτ_sp/(3 n_0λ^2 L),
where τ_sp=Γ_6.7 GHz^-1 leads to T_R, 6.7 GHz=4× 10^4 s <cit.>. Finally, the time-scales for non-coherent relaxation and de-phasing were fixed to T_1=1.64× 10^7 s and T_2=1.55× 10^6 s, respectively. Before we study the response of the sample we note that the initial column density of the population inversion n_0L appears in the denominator of T_R, implying that an increased length for the system will lead to a faster response. This characteristic will be essential for understanding the sample's behaviour and will lead to the notion of a critical threshold for the appearance of superradiance. We also note that in the figure the intensity I=cϵ_0|E^+|^2/2 is normalized to NI_nc with the non-coherent intensity (i.e., that expected from the sample when the molecules are radiating independently) given by
I_nc = 2ħω_0/(3 A T_R),
where A=λ L is the cross-section of the sample's end-fire (for a Fresnel number of unity; see ).
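To make the numerical procedure concrete, the following Python sketch marches the one-dimensional MBE in the retarded time: at each step the field is obtained by integrating ∂ E^+/∂ z along the sample for the current polarization, after which n^' and P^+ are advanced with an Euler step. This is only a schematic illustration of the scheme, not the code used for the figures: the initial tipping of the polarization (set through θ_0), the grid resolutions and the integration window are assumptions that may require tuning, and the polarization pump Λ_P is folded into that initial seed.
```python
import numpy as np

hbar, eps0, c = 1.0546e-34, 8.854e-12, 2.998e8        # SI units

# Methanol 6.7 GHz sample of the text (lengths converted from cgs to SI)
w0 = 2 * np.pi * 6.7e9            # transition angular frequency
d = 0.7 * 3.336e-30               # dipole moment, 0.7 D in C m
L = 2.0e13                        # sample length, 2e15 cm
T1, T2 = 1.64e7, 1.55e6           # relaxation and de-phasing time-scales (s)
npr0 = 0.5 * 3.3e-12 * 1e6        # initial n' = n0/2, converted to m^-3
N = 5.9e19                        # molecules in the sample (Fresnel number of unity)
theta0 = 2 / np.sqrt(N)           # initial Bloch angle

nz, nt, dtau = 200, 40000, 100.0  # grid sizes and tau step (assumed values)
dz = L / nz

npr = np.full(nz, npr0)                                # n'(z)
Pp = np.full(nz, npr0 * d * theta0, dtype=complex)     # P+(z), small assumed initial tipping
I_end = np.zeros(nt)                                   # intensity at the end-fire z = L

for k in range(nt):
    # field from dE+/dz = i w0 / (2 eps0 c) P-  with E+(z = 0) = 0
    Ep = 1j * w0 / (2 * eps0 * c) * np.cumsum(np.conj(Pp)) * dz
    I_end[k] = 0.5 * c * eps0 * np.abs(Ep[-1]) ** 2
    # Euler step of the Bloch equations in tau, with Lambda_n = n'_0 / T1
    dn = (1j / hbar) * (Pp * Ep - np.conj(Ep) * np.conj(Pp)) - npr / T1 + npr0 / T1
    dP = (2j * d ** 2 / hbar) * np.conj(Ep) * npr - Pp / T2
    npr = npr + dn.real * dtau
    Pp = Pp + dP * dtau
```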
In Figure <ref> we see that the response of the system has an entirely different character depending on the position where it is probed. At z=0.4 L the intensity (top right) exhibits a smooth transition from I≈ 0 at τ=0 to its steady-state value, which is very weak at I∼ 10^-10NI_nc. This low intensity is due to the absence of coherence in the gas, as can also be assessed from the low level of polarization P^+∼10^-15 D cm^-3≪ n_0d (top left; d≃ 0.7 D for this transition). For reasons that will soon become clear we will term this smooth transient regime as “non-superradiance” (or “non-SR” in the figure). The sample's response is completely different at its end-fire, i.e., at z = L, where a strong oscillatory transient regime is seen in the intensity, peaking at I≃ 1.6× 10^-3NI_nc before reaching a steady-state value of I≃ 0.5× 10^-3NI_nc. The strong transient is also seen in n and P^+ with the latter reaching a peak value of P^+∼ n_0d. This is indicative of the presence of coherence in the gas during the transient regime, which we qualify as “SR transient.”
Our usage of the “non-SR” and “SR transient” terms is based on the fact that it can be analytically shown that there exist two distinct limits or regimes discernible within the framework of the MBE for an initially inverted system such as the one considered here <cit.>. In the limit when the population density and the polarization are changing on a time-scale much shorter than T_1 and T_2, i.e.,
∂ n^'/∂τ≫n^'/T_1 and ∂ P^+/∂τ≫P^+/T_2,
the sample enters a rapid transient regime that is characterized by the presence of coherence in the gas and strong intensities scaling with N^2 (see below). This is the domain of Dicke's superradiance. The bottom graphs in Figure <ref> indeed form a good example of an “SR transient.” The slower, non-coherent transient regime observed in the top graphs of Figure <ref> is thus a “non-SR transient.”
On the other hand, at the opposite limit when the population density and the polarization are varying on a time-scale much longer than T_1 and T_2, i.e.,
∂ n^'/∂τ≪n^'/T_1 and ∂ P^+/∂τ≪P^+/T_2,
it can be shown that the MBE simplify to the maser equation (see Sec. 2.1 of for a derivation). This is the so-called “steady-state limit” where the maser action (and stimulated emission) is at play. This steady-state regime is identified in Figure <ref> in the intensity graphs on the right. More precisely, at z=0.4 L (top) the low intensity suggests the existence of a (steady-state) unsaturated maser, while at z=L (bottom) we are in the presence of a saturated maser.
The fact that the slow limit characterized by equation (<ref>) is associated with the maser action can be readily verified by plotting the (normalized) steady-state intensity I_steady/(NI_nc)≡ I(τ=1000T_0)/(NI_nc) for all positions z, as shown in Figure <ref>. We can clearly verify this association by the presence of the exponential growth unsaturated maser and linear gain saturated maser regimes, which also justify our earlier attributions for z=0.4 L and z=L.
We can also verify the association between Dicke's superradiance and the fast transient limit by tracking the peak intensity, the minimum population density and the peak polarization as a function of the position z along the sample. This is shown in Figure <ref>, where these quantities are respectively plotted from top to bottom in light/cyan along with the corresponding steady-state (i.e., maser) values in black. In the top graph for the intensities we can clearly see the linear gain for the saturated maser intensity I_steady∝ z∝ N and the quadratic behaviour for the transient superradiance regime I_SR∝ z^2∝ N^2. This functionality, as well as those for the population density and the polarization, are the hallmark of Dicke's superradiance.
One more important piece of information can be gathered from Figures <ref> and <ref>. That is, it can be seen through their comparison that the transition between the unsaturated and saturated maser regimes (where I_steady=I_sat in Figure <ref>, with I_sat the saturation intensity) and the appearance of superradiance (from the top graph of Figure <ref>) happen at the same position z≡ z_crit. It can be shown that this position is associated with the reaching of a critical threshold for the column density
n_0z_crit = 4πτ_sp/(3λ^2 T_2) ln(T_2/(T_1θ_0^2)),
which sufficiently reduces the superradiance characteristic time-scale T_R to reach the fast limit of equation (<ref>) <cit.>. In other words, the appearances of superradiance in the transient response and a saturated maser in the steady-state regime are closely linked and will take place whenever L≥ z_crit. Systems of shorter lengths will not host the superradiance phenomenon and will be restricted to the unsaturated maser regime in the steady state. We can also combine equations (<ref>) and (<ref>) to reformulate this critical threshold as
T_R,crit = 2T_2/ln[T_2/(T_1θ_0^2)].
For our system we find T_R<T_R,crit≈ 0.05 T_2≃7× 10^4 s, which satisfies the requirement for superradiance.
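These numbers are straightforward to reproduce; the short Python calculation below evaluates T_R and T_R,crit for the parameters quoted above, with the sample volume taken as AL = λ L^2 for a Fresnel number of unity.
```python
import numpy as np

c_cgs = 2.9979e10                  # cm/s
lam = c_cgs / 6.7e9                # wavelength of the methanol 6.7 GHz line, ~4.5 cm
tau_sp = 1.0 / 1.56e-9             # 1 / Gamma_(6.7 GHz)
L = 2.0e15                         # cm
n0 = 3.3e-12                       # cm^-3
T1, T2 = 1.64e7, 1.55e6            # s

T_R = 8 * np.pi * tau_sp / (3 * n0 * lam ** 2 * L)        # ~4e4 s
N = n0 * lam * L ** 2                                     # molecules in the sample (V = A L = lam L^2)
theta0 = 2 / np.sqrt(N)
T_R_crit = 2 * T2 / np.log(T2 / (T1 * theta0 ** 2))       # ~7e4 s, i.e. ~0.05 T2

print(f"T_R = {T_R:.1e} s, T_R,crit = {T_R_crit:.1e} s ({T_R_crit / T2:.3f} T2)")
print("superradiance threshold satisfied:", T_R < T_R_crit)
```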
§ MODELING OF G9.62+0.20E
One further characteristic of the superradiance transient response not discussed in Sec. <ref> is that the shape of the intensity curve as a function of time is largely independent of that of the excitation (i.e., the pump signal or the seed electric field). An example is shown in Figure <ref> where a strong 6.7 GHz methanol flare in S255IR-NIRS3 is modeled using two different inversion pump signals <cit.>. Other parameters for the model (i.e., L, T_1 and T_2) are similar to those used in Sec. <ref>. We thus see that the shape of the fast transient superradiance response is minimally affected by the detailed nature of the pump signal. This behaviour will be advantageous in this section as we endeavour to model existing data from the G9.62+0.20E star-forming region. That is, while we do not know the nature of the excitation responsible for the periodic flaring observed in this source, the fact that the shapes of the measured light curves result from the natural (or characteristic) transient responses of the corresponding systems will allow us to perform meaningful comparisons between spectral transitions from different molecular species.
G9.62+0.20E is a high mass star-forming region located 5.2 kpc away <cit.>. Methanol 6.7 GHz monitoring by <cit.> (and subsequent studies) has revealed this source to be flaring with a main period of approximately 243 d (a second period of 52 d was recently discovered by ). This characteristic has since led to a significant number of studies, observational and theoretical, which resulted in increased monitoring with other molecular species/transitions and models aimed at explaining the source of the periodicity as well as the physical mechanisms behind the flaring activity (see for a recent review). The availability of multi-transition monitoring data for G9.62+0.20E is particularly interesting to us, as it will allow us to probe the transient (i.e., flaring) regimes for the different molecular species and transitions, as well as apply and test the model based on the MBE discussed in Sec. <ref>. Here, we will present some of the results from <cit.>.
The data sets we used were published elsewhere in the literature prior to the work of <cit.>. That is, the OH 1665 MHz and 1667 MHz data were obtained with KAT-7 <cit.> and initially published in <cit.>, as were the methanol 12.2 GHz observations from the HartRAO 26-m telescope. The methanol 6.7 GHz observations are those from <cit.>, which resulted from the corresponding monitoring of G9.62+0.20E performed with the Hitachi 32 m telescope of the Ibaraki station from the NAOJ Mizusawa VLBI Observatory <cit.>. The reader is referred to the cited references for more details concerning these data and to <cit.> for a more comprehensive discussion of our analysis.
One interesting feature resulting from the multi-transition monitoring data of G9.62+0.20E uncovered by <cit.> is the existence of different durations for the flares from different molecular species and transitions. An example is shown in Figure <ref>, which displays one flare each from OH 1665 MHz, OH 1667 MHz, methanol 6.7 GHz and methanol 12.2 GHz. We can clearly see from the position of the peaks that the flare duration decreases from top to bottom in the figure (i.e., from OH 1665 MHz → OH 1667 MHz → methanol 6.7 GHz → methanol 12.2 GHz). At first sight, this behaviour may seem difficult to explain as the sources for these features are likely to be subjected to a common excitation (e.g., these flares are all recurrent at the same period of 243.3 d). However, this is the kind of result we should expect within the framework of the MBE, in view of the existence of the superradiance transient regime.
More precisely, using equation (<ref>) we can estimate the relative superradiance time-scales for the different spectral lines pertaining to Figure <ref> to be
T_R,1665 MHz/T_R,6.7 GHz ≃ 1.4(nL)_6.7 GHz/(nL)_1665 MHz
T_R,1667 MHz/T_R,6.7 GHz ≃ 1.3(nL)_6.7 GHz/(nL)_1667 MHz
T_R,12.2 GHz/T_R,6.7 GHz ≃ 0.6(nL)_6.7 GHz/(nL)_12.2 GHz,
which, interestingly, scale in the same manner as the observations for equal inversion column densities. Of course, there is no guarantee that all sources will share the same value for the inversion column density but, still, this pattern can certainly serve as a motivation for verifying if our MBE/superradiance approach can help elucidate these observational results.
We therefore proceeded to numerically model several flaring features using the MBE in the manner explained in <cit.>. Results are shown in Figure <ref> for features in OH 1665 MHz (v_lsr = +1.7 km s^-1), methanol 12.2 GHz (v_lsr = +1.7 km s^-1) and methanol 6.7 GHz (v_lsr = +5.0 km s^-1 and +8.0 km s^-1). To do so we fixed the length of the systems to L=50 au[The chosen value of L=50 au is to some extent arbitrary since the inversion column density is the relevant parameter for setting the superradiance time-scale (see equation <ref>). That is, any variation in L could be compensated by applying the opposite change in the population inversion density n.] and set the excitation from a population inversion pump to
Λ_n(z,τ) = Λ_0 + ∑_m=0^∞Λ_1,m/cosh^2[(τ-τ_0-mτ_1)/T_p].
The constant pump rate Λ_0, the amplitude Λ_1,m of pump pulse m at τ=τ_0+mτ_1 and the delay τ_0, which lines our fit up with the corresponding data, are adjusted for all models. Unchanged are the period τ_1=243.3 d and the width of the pump pulse T_p=4 d. According to our previous discussion, the choice of the pump pulse's profile is arbitrary. While the relaxation and de-phasing time-scales are set independently for all transitions, little variation was observed between the different fits. More precisely, we have T_1=210 d and T_2=13.5 d for OH 1665 MHz, and T_1=220 d and T_2=12.6 d for methanol 6.7 GHz (both velocities) and 12.2 GHz.
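For reference, the pump of the previous equation can be evaluated with the short Python function below; the amplitudes used here are arbitrary placeholders rather than the fitted values.
```python
import numpy as np

def inversion_pump(tau, Lambda0, Lambda1, tau0, tau1=243.3, Tp=4.0, n_pulses=10):
    """Constant pump level plus a train of sech^2 pulses (all times in days).
    Lambda1 can be a scalar or an array of per-pulse amplitudes Lambda_{1,m}."""
    amps = np.broadcast_to(Lambda1, (n_pulses,))
    pulses = sum(a / np.cosh((tau - tau0 - m * tau1) / Tp) ** 2
                 for m, a in enumerate(amps))
    return Lambda0 + pulses

tau = np.linspace(0.0, 4 * 243.3, 4000)                 # four flaring cycles
pump = inversion_pump(tau, Lambda0=1.0, Lambda1=5.0, tau0=50.0)
```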
Despite the small changes in parameters, the resulting fits in Figure <ref> are found to recover the observed time-scales well by simply modulating the amplitude of the pump components Λ_0 and Λ_1,m. This is because in doing so we are in effect changing the inversion column density nL, which in turn directly affects the characteristic time-scale of superradiance T_R (see equation <ref>). This also explains, for example, why the methanol 6.7 GHz line can exhibit significantly varying time-scales at different velocities, as seen for the corresponding v_lsr = +5.0 km s^-1 and +8.0 km s^-1 features in Figure <ref>. More comparisons of models to observations are presented in <cit.>. Most notable are the cases of unusually shaped flares in methanol 6.7 GHz and OH 1667 MHz; we refer the reader to the corresponding section in that paper.
Although the flaring periodicity in G9.62+0.20E necessitates an excitation that is different from the case of instantaneous inversion discussed in Sec. <ref>, the same type of behaviour is observed. Most notable is the clear disconnect between the duration of the inversion pump pulse (i.e., 4 d) and that of the bursts (e.g., from ∼40 d for methanol 6.7 GHz at v_lsr = +8.0 km s^-1 to on the order of 100 d for OH 1665 MHz). This is a clear manifestation of a superradiance transient response in the gas hosting the flaring events. As seen in Figure <ref>, while the interaction of the fast and brief pump pulses drives a rapid increase in the population inversion level and subsequently stimulates a transient superradiance response in the system, the duration of the superradiance burst bears no relation to that of the excitation. We also note that adopting a fixed value for the duration of the pump pulses is a reasonable assumption, as it is likely that a single source is at the root of the 243 d periodicity of the flares. Time-scales intrinsic to the pump (i.e., the period and pulse duration) should be the same for all transition/species used for the monitoring. On the other hand, the coupling of the inversion pump signal is bound to vary with position within G9.62+0.20E and wavelength. The dependence of the superradiance response on the strength of the excitation and the parameters characterizing the molecular transition/species used to probe the regions under study renders it an attractive and powerful mechanism to make sense of the diverse flaring time-scales and profiles observed.
Finally, we end with a word concerning the effects that the coherent nature of superradiance might be expected to have on the characteristics of observations such as those presented in this section. That is, we have already stated that superradiance bursts scale in intensity with the square of the number of molecules involved in the process and inversely with that number in duration <cit.>. However, one should not necessarily expect coherence to be readily detected in astronomical sources hosting flares such as those analyzed here. The reason for this is presented in Figure <ref> and rests on the relative size of a coherent superradiance sample and an astronomical maser-hosting region. Considering as an example the OH 1665 MHz sample resulting from the fit presented in Figure <ref>, we find that its radius is limited to ∼ 10^-5 au to ensure a Fresnel number of unity. Evidently, this size is several orders of magnitude smaller than the spot size of a maser-hosting region. For example, a region of 10 au undergoing a flaring event would necessarily break up into a very large number of independent and uncorrelated samples, in the manner shown in Figure <ref>. Although individually radiating coherently, these samples combine in a non-coherent manner to make up the total intensity measured at the output. For an astronomical maser-hosting source, this intensity is thus likely to appear non-coherent to an observer. Our superradiance OH 1665 MHz sample of Figure <ref> outputs a peak flux of ≈ 5× 10^-36 erg s^-1 cm^-2 at 5.2 kpc (or a flux density of ∼ 10^-5 Jy over the bandwidth of ∼ 10^-7 Hz approximately associated with the duration of a flare). Only a small fraction of a typical maser-hosting region is therefore needed to account for the detected radiation intensities.
§ ACKNOWLEDGEMENTS
M.H.'s research is funded through the Natural Sciences and Engineering Research Council of Canada Discovery Grant RGPIN-2016-04460. The 6.7 GHz methanol maser data obtained by the Hitachi 32-m telescope is a part of the Ibaraki 6.7 GHz Methanol Maser Monitor (iMet) program. The iMet program is partially supported by the Inter-university collaborative project `Japanese VLBI Network (JVN)’ of NAOJ and JSPS KAKENHI grant no. JP24340034, JP21H01120, and JP21H00032 (YY).
|
http://arxiv.org/abs/2307.05338v1 | 20230710173004 | Root Causal Inference from Single Cell RNA Sequencing with the Negative Binomial | [
"Eric V. Strobl"
] | q-bio.GN | [
"q-bio.GN"
] |
Root Causal Inference from Single Cell RNA Sequencing
with the Negative Binomial
Accurately inferring the root causes of disease from sequencing data can improve the discovery of novel therapeutic targets. However, existing root causal inference algorithms require perfectly measured continuous random variables. Single cell RNA sequencing (scRNA-seq) datasets contain large numbers of cells but non-negative counts measured by an error prone process. We therefore introduce an algorithm called Root Causal Inference with Negative Binomials (RCI-NB) that accounts for count-based measurement error by separating negative binomial distributions into their gamma and Poisson components; the gamma distributions form a fully identifiable but latent post non-linear causal model representing the true RNA expression levels, which we only observe with Poisson corruption. RCI-NB identifies patient-specific root causal contributions from scRNA-seq datasets by integrating novel sparse regression and goodness of fit testing procedures that bypass Poisson measurement error. Experiments demonstrate significant improvements over existing alternatives.
Eric V. Strobl
§ INTRODUCTION
Causal inference algorithms identify causal relations from data. Most investigators infer causation using randomized controlled trials (RCTs). However, an RCT cannot distinguish between a cause and a root cause of disease, or the initial perturbation to a biological system that ultimately induces a diagnostic label. Identifying the root causes of disease is critical for (a) understanding disease mechanisms and (b) discovering drug targets that treat disease at its biological onset.
Single cell RNA sequencing (scRNA-seq) datasets represent prime targets for root causal inference because they provide global but fine-grained snapshots of gene expression with ample numbers of cells. scRNA-seq also provides a functional read-out more proximal to the clinical phenotype than single nucleotide polymorphisms. Accurately inferring patient-specific root causes from scRNA-seq therefore has the potential to improve the discovery of novel therapeutic targets that significantly impact patient symptoms.
Unfortunately, most existing root causal inference algorithms assume perfectly measured, continuous random variables <cit.>. Sequencing datasets contain counts measured by an error-prone sequencing process. Moreover, modern single cell pipelines cannot replicate the same measurements per cell <cit.>. Customized methods that appropriately account for non-negativity and measurement error – without relying on technical replicates – have the potential to substantially improve the performance of existing methods from scRNA-seq.
The negative binomial distribution models counts as a mixture of the Poisson and gamma distributions, where the Poisson component can represent measurement error and the gamma distribution the expression level of an RNA molecule. The negative binomial also fits scRNA-seq data well by accounting for overdisperson and a high proportion of zeros <cit.>. As a result, many scientists analyze RNA-seq data with the negative binomial in the context of regression, normalization or differential hypothesis testing <cit.>. None however have utilized the negative binomial for identifying the root causes of disease.
We therefore extend the negative binomial to root causal inference as follows:
* We propose a post-nonlinear causal model with gamma distributed error terms representing true continuous RNA expression levels. We can however only measure the expression levels as counts using a noisy sequencing process, which we model by the Poisson. The resultant Poisson-gamma mixture is the negative binomial.
* We introduce a negative binomial regression procedure and goodness of fit hypothesis test that both bypass Poisson measurement error without technical replicates.
* We integrate the regression procedure into an algorithm that identifies the parameters of the gamma distributions and latent causal graph.
* We finally utilize the recovered parameters to identify the root causes of disease unique to each patient.
The resultant method called Root Causal Inference with Negative Binomials (RCI-NB) identifies patient-specific root causes of disease more accurately than existing alternatives from both simulated and real scRNA-seq datasets.
§ BACKGROUND
§.§ Structural Equations
We can formalize causal inference under the framework of structural equation models (SEMs), or a set of deterministic equations over p random variables X such that:
X_i = f_i(Pa(X_i),E_i), ∀X_i ∈X.
The random vector E denotes a set of mutually independent error terms, and Pa(X_i) ⊆X∖X_i the parents, or direct causes, of X_i. We can therefore associate a directed graph 𝔾 over X to an SEM by drawing a directed edge from each member of Pa(X_i) to X_i. A directed path in 𝔾 from X_i to X_j refers to a sequence of adjacent directed edges from X_i to X_j. We say that X_i is an ancestor of X_j in 𝔾 if there exists a directed path from X_i to X_j or X_i = X_j; similarly, X_j is a descendant of X_i. A cycle occurs when X_i is an ancestor of X_j and we have X_j →X_i. We call 𝔾 a directed acyclic graph (DAG) if it contains no cycles. The joint distribution ℙ_X over X satisfies the causal Markov condition if every variable in X is independent of its non-descendants given its parents. Furthermore, ℙ_X is causally minimal if it satisfies the causal Markov condition relative to 𝔾 but not to any proper sub-graph of 𝔾.
§.§ Related Work
Root causal analysis refers to a suite of methods designed to detect the root causes of undesired outcomes, typically in man-made systems within the industrial or healthcare industry <cit.>. The methods require a painstaking manual approach that implicitly or explicitly reconstructs the underlying causal graph. Strategies also rely on participants with deep knowledge of the underlying causal processes and therefore falter when applied to biological systems that remain largely unknown.
A second line of work takes a similar approach by assuming a known set of structural equations but formalizes root causal analysis using the error terms of SEMs <cit.>. These works unfortunately do not define patient-specific root causes of disease properly. For example, Root Causal Analysis of Outliers recovers root causal contribution scores for symptoms that are worse than the symptoms of a given patient <cit.>. We do not want to eliminate just the worse symptoms of a patient, but all of his symptoms. Attempting to correct the method with a predetermined cut-off score unfortunately foregoes patient-specificity. The Model Substitution
algorithm proposed in <cit.> also loses specificity by identifying the root causes of changes in the marginal distribution of the diagnosis. Moreover, both MS and RCAO assume that the user has knowledge of the structural equations and the “normal” counterfactual distributions of the error terms. The methods further require that the diagnosis correspond to a noiseless cutoff score, even though a diagnosis is noisy because it depends on the diagnostician in practice. RCAO and MS therefore utilize improper definitions of patient-specific root causes of disease and require a noiseless label, a known SEM as well as known counterfactual error term distributions.
A third line of work instead identifies patient-specific root causes of disease using the conditional distribution of the diagnosis given the error terms. The authors do not require access to the underlying structural equations or error term distributions. <cit.> performed independent component analysis (ICA) on electronic health record data and correctly recovered the top five root causes of hepatocellular carcinoma. The approach achieved clinical face validity, but the authors did not connect the strategy to causality. <cit.> later extended the idea to root causal analysis and introduced a more efficient algorithm called Root Causal Inference (RCI). The same authors later created a related procedure for handling latent confounding <cit.>. All three of these algorithms assume linear relationships and continuous additive noise. Investigators thus later extended the work to the non-linear setting with the heteroscedastic noise model that allows non-linear conditional expectations and variances <cit.>. Unfortunately, even the non-linear approach assumes continuous random variables and no measurement error. The above algorithms therefore perform poorly when directly run on scRNA-seq datasets.
We improve on the aforementioned works by introducing an algorithm called Root Causal Inference with Negative Binomials (RCI-NB) that accounts for the measurement error and counts of scRNA-seq by bypassing the Poisson. The algorithm utilizes novel simulation-based regression and goodness of fit testing procedures. RCI-NB automatically recovers all parameters needed for the simulations using a top-down procedure introduced in Section <ref>. As a result, the algorithm requires no prior knowledge about the underlying structural equations or counterfactual distributions. Furthermore, RCI-NB allows a noisy label and maintains patient-specificity by identifying changes in the conditional distribution of the diagnosis.
§ NEGATIVE BINOMIAL MODEL
We begin the development of RCI-NB by introducing a negative binomial SEM. We model the expression levels of RNA molecules X using the following post non-linear SEM:
X_i = exp( Xβ_· i ) E_i = exp( Xβ_· i + ln(E_i)),
for each X_i ∈X similar to Equation (<ref>), where post non-linearity refers to the outer exponentiation. Exponentiation ensures that all variables in X are positive and enforces faithfulness to the inverse canonical link function of the negative binomial generalized linear model <cit.>. The entry β_ji≠ 0 if and only if X_j ∈Pa(X_i). We write β_· i to refer to the i^th column of β, and β_A i to rows associated with A⊆X in the i^th column. <cit.> proved full identifiability of β except in a few scenarios not applicable to this work.
Many RNA molecules have low expression levels. The gamma distribution places larger probability mass near zero than the log-normal with equal mean and variance. We therefore further assume that each E_i ∈E follows the gamma distribution Γ(r_i,r_i/exp(Pγ_· i)) with shape r_i and rate r_i/exp(Pγ_· i). The set P contains q binary variables each indicating a patient from which we harvest cells. The error terms are therefore mutually independent given P, or within each patient.
We unfortunately cannot observe X in practice. Sequencing technologies instead approximate the expression level of each RNA by reverse transcribing and amplifying the molecules. Most technologies then count the number of complementary DNA sequences that align to a reference genome <cit.>. As a result, sequencing technologies such as scRNA-seq only approximate reference RNA expression levels by counts <cit.>.
The efficiency of the above process may differ between cells depending on, e.g., cell diameter and the amount of reagents used. We can also only detect a small proportion of the RNA molecules existing in a cell in general <cit.>. We therefore apply the law of rare events <cit.> and henceforth assume that we observe Poisson-corrupted counts X̃, where a tilde distinguishes measured counts from the corresponding true expression levels, with each X̃_i ∈X̃ drawn according to:
X̃_i ∼Pois(X_i C)
= (X_i C)^X̃_iexp(-X_i C)/X̃_i!.
The random variable C>0 denotes the cell-specific efficiencies of the sequencing process. The efficiencies differ due to the technology – not due to the biological system modeled by Equation (<ref>). We can therefore approximate C to high accuracy by a variety of control methodologies such as estimated library sizes or RNA spike-ins <cit.>.
Recall that X_i =exp(Xβ_· i)E_i from Equation (<ref>). We derive the conditional distribution of X̃_i given Pa(X_i) ∪P∪ C by marginalizing out Γ(r_i,r_i). The resultant Poisson-gamma mixture, or negative binomial distribution, obeys the probability mass function:
ℙ(X̃_i | Pa(X_i), P,C) = Γ(X̃_i+r_i)/[Γ(X̃_i + 1) Γ(r_i)] ( r_i/(r_i+μ_i))^r_i ( μ_i/(r_i+μ_i))^X̃_i,
with dispersion parameter r_i ∈r, conditional expectation μ_i=exp(Xβ_· i + Pγ_· i)C and variance μ_i + μ_i^2/r_i. Several groups have shown that the quadratic variance accurately accounts for the overdispersion seen in real scRNA-seq data <cit.>. We will drop the subscripts of r_i and μ_i to prevent notational cluttering, when it is clear that we focus on one X_i ∈X.
In summary, we assume X follows the fully identifiable SEM in Equation (<ref>) with gamma distributed error terms. We can however only observe the Poisson-corrupted counts X̃. We now seek to recover (β,r,γ) from X̃∪P∪ C alone using negative binomial regression and goodness of fit testing, which we describe in the next two sections.
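To fix ideas, the following Python sketch draws synthetic data from this model for a small assumed graph X_1 →X_2 →X_3 and q=4 patients; all parameter values are arbitrary illustrations rather than estimates, and X_tilde plays the role of the observed counts X̃.
```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, q, p = 5000, 4, 3

# Assumed toy DAG X1 -> X2 -> X3 encoded in an upper-triangular beta
beta = np.array([[0.0, 0.6, 0.0],
                 [0.0, 0.0, 0.4],
                 [0.0, 0.0, 0.0]])
gamma = rng.normal(0.0, 0.3, (q, p))         # patient-specific effects on the error terms
r = np.array([8.0, 5.0, 10.0])               # gamma shapes (dispersions)

P = np.eye(q)[rng.integers(0, q, n_cells)]   # one-hot patient indicators
C = rng.uniform(0.3, 1.5, n_cells)           # cell-specific sequencing efficiencies

X = np.zeros((n_cells, p))                   # true (latent) expression levels
for i in range(p):                           # variables are already in topological order
    E_i = rng.gamma(r[i], np.exp(P @ gamma[:, i]) / r[i])   # Gamma(r_i, r_i / exp(P gamma_i))
    X[:, i] = np.exp(X @ beta[:, i]) * E_i   # post non-linear structural equation

X_tilde = rng.poisson(X * C[:, None])        # Poisson-corrupted counts actually observed
```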
§ NEGATIVE BINOMIAL REGRESSION
§.§ Corrected Score Equations
We first develop a negative binomial regression procedure that bypasses Poisson measurement error among the predictors. Most existing negative binomial regressors erroneously assume perfectly measured predictors or only Gaussian measurement error <cit.>.
We in particular seek to regress X̃_i on P and the perfectly measured non-descendants of X_i, denoted by A⊆X∖X_i, but we only have access to the Poisson-corrupted counterparts Ã. Let Z̃ = (Ã/C, P) and Z = (A, P). Further let α = (β_A i, γ_· i)^T. We can then write the logarithm of the negative binomial probability mass function L(α, r) as follows:
ln[Γ(X̃_i + r)/(Γ(X̃_i + 1) Γ(r))] + r ln(r) + X̃_i Zα - (X̃_i + r) ln(r+μ).
Directly maximizing the expectation of the above expression requires access to Z. <cit.> showed that, if we can construct a corrected function L(α,r) where:
𝔼[L(α,r)|X_i, Z,C] = L(α,r),
then maximizing the (unconditional) expectation of L(α,r) still yields unbiased estimates of α and r. Observe that 𝔼(X_iZ | X_i, Z,C) = X_iZ for the term X_i Zα in Expression (<ref>), so L(α,r) satisfies:
lnΓ(X_i + r)/Γ(X_i + 1) Γ(r) +r ln(r) + X_iZα - f(X_i, Z, C),
such that 𝔼(f|X_i, Z,C) = (X_i + r) ln(r+μ) for some function f.
We find f difficult to derive analytically. We can however simplify (X_i + r) ln(r+μ) in Expression (<ref>) as follows:
𝔼_X_i Z C((X_i + r) ln(r+μ) ) =𝔼_ZC(𝔼_X_i|ZC (X_i + r) ln(r+μ))
= 𝔼_ZC((μ + r)ln(r+μ)),
so that we can approximate the last expectation by averaging over s samples drawn from the density p(Z,C)=p(Z)p(C). We will show how to estimate p(Z,C) from data in Section <ref>. We therefore equivalently consider the following corrected function L(α,r):
lnΓ(X_i + r)/Γ(X_i + 1) Γ(r) +r ln(r) + X_iZα - 𝔼((μ + r)ln(r+μ)),
which satisfies Equation (<ref>) as required.
We then set the expectation of the derivatives of L(α,r) to zero:
α : 𝔼(X_i Z) - 𝔼(μZ) = 0,
r : 𝔼ψ(X_i + r) - ψ(r) + ln(r) - 𝔼ln(r + μ) = 0,
where ψ denotes the digamma function. We replace the expectations with sample averages and quickly obtain the roots (α_n,r_n) = θ_n of the corresponding score equations with n samples by the Newton-Raphson method. Let θ_0 denote the ground truth parameter values. The proposed approach achieves asymptotic normality:
(Asymptotic normality) Assume n →∞, s →∞ and n/s → 0. Further assume that Var(μZ, ln(r+μ)) and Σ = -𝔼 S^'(θ_0) are positive definite. Then √(n)(θ_n - θ_0) →𝒩(0,Σ^-1(J_1 + J_2 + J_3) Σ^-1).
We define S^', J_1, J_2 and J_3 as well as detail longer proofs in the Appendix.
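As a minimal numerical illustration of these estimating equations, the Python sketch below simulates a single gamma-distributed cause, treats the required draws from p(Z,C) as known, and solves the two corrected score equations with a standard root finder. The parameter values are arbitrary, and the sketch is not the NB-EM implementation itself.
```python
import numpy as np
from scipy.optimize import root
from scipy.special import digamma

rng = np.random.default_rng(1)
n = s = 20000
beta_t, gamma_t, r_t = 0.8, 1.0, 5.0                  # assumed ground truth

# Toy data for one child with a single parent A (one patient, so P = 1)
c = rng.uniform(0.5, 1.5, n)                          # known cell efficiencies
a = rng.gamma(4.0, 0.25, n)                           # true parent expression levels
e = rng.gamma(r_t, 1.0 / r_t, n)                      # gamma error term with mean 1
x_obs = rng.poisson(np.exp(beta_t * a + gamma_t) * e * c)   # observed child counts
a_obs = rng.poisson(a * c)                            # observed (corrupted) parent counts
z_obs = np.column_stack([a_obs / c, np.ones(n)])      # corrupted predictors (A/C, P)

# Monte Carlo draws from p(Z, C), assumed available (provided by RP in practice)
z_sim = np.column_stack([rng.gamma(4.0, 0.25, s), np.ones(s)])
c_sim = rng.uniform(0.5, 1.5, s)

def corrected_score(theta):
    alpha, r = theta[:2], np.exp(theta[2])            # log-parametrize r to keep it positive
    mu = np.exp(z_sim @ alpha) * c_sim
    s_alpha = x_obs @ z_obs / n - (mu[:, None] * z_sim).sum(0) / s
    s_r = (digamma(x_obs + r) - digamma(r)).mean() + np.log(r) - np.log(r + mu).mean()
    return np.append(s_alpha, s_r)

sol = root(corrected_score, x0=np.zeros(3))
print(sol.x[:2], np.exp(sol.x[2]))                    # approaches (0.8, 1.0) and 5.0
```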
§.§ Regularization
Causal graphs in biology are frequently sparse, so we next introduce sparsity promoting regularization into the above negative binomial regressor. The equation for α in Equation (<ref>) does not depend on r and is asymptotically equivalent to the score equation of the negative binomial with r fixed. Recall that the negative binomial is a member of the exponential family with r fixed. We therefore introduce regularization via the Bayesian information criterion (BIC) score <cit.>.
Let L_j(α,r) denote the corrected log-likelihood for sample j. We maximize:
(1/n∑_j=1^n L_j(α,r)) - λ_n/2 (‖β_A i‖_0 + ‖γ_· i - γ̄_· i‖_2^2),
where λ_n = ln(n)/n according to BIC and γ̄_· i = 1/q∑_k=1^q γ_ki. We optimize the above expression quickly by customizing the expectation-maximization (EM) approach proposed in <cit.>. The following equivalence relation holds:
‖β_A i‖_0 = ∑_j ∈ Rβ_j i^2 /β_j i^2 = ∑_j ∈ Rβ_j i^2/η_j^2 = ‖β_Ri/η_R‖_2^2,
where η = (|β_A i|,1) and R indexes the non-zero elements in β_A i. We collect β_R i into the first |R| entries of β_A i for ease of notation. The ones in η correspond to γ_· i. Assume now that η is a latent variable. The EM algorithm successively approximates α by iterating between expectation:
(1/n∑_j=1^n L_j(α,r)) - λ_n/2( ‖β_Ri/η_R‖_2^2 + ‖γ_· i - γ̄_· i‖_2^2 ),
and maximization via the equation:
1/n∑_j=1^n x_ijz_j - 1/s∑_j=1^s μ_jz_j - λ_n ( β_Ri/η_R, 0, γ_· i - γ̄_· i) = 0.
The zero vector on the left hand side corresponds to elements not in R. The above equation is potentially unstable due to division by entries in η_R close to zero. Further, we do not know the indices R in practice. We resolve both of these issues by element-wise multiplying both sides of the score equation by η and instead solve:
(1/n∑_j=1^n x_ijz_j - 1/s∑_j=1^s μ_jz_j ) ⊙η - λ_n (β_A i, γ_· i - γ̄_· i) = 0.
We summarize the EM algorithm in Algorithm <ref>; it almost always converges with a finite number of samples in practice.
§ GOODNESS OF FIT TEST
We have thus far assumed that p(X_i|Z,C) indeed follows a negative binomial distribution. We now address the problem of determining whether the negative binomial distribution holds in this section by constructing a score-based goodness of fit test.
Assume for now that we have access to Z∪ C in order to compute μ. We construct a hypothesis test with a flexible order-k alternative probability mass function:
p_k(X_i|Z,C) = N(h,ϕ,μ, r) exp( ∑_j=1^k h_j(X_i,μ, r) ϕ_j ) p_0(X_i|Z,C),
where p_0(X_i|Z,C) denotes the negative binomial probability mass function under the null hypothesis. We now suppress the inputs to some functions for cleaner exposition. The function exp( ∑_j=1^k h_j ϕ_j ) =exp(hϕ) is non-negative and equal to one under the null hypothesis that ϕ = 0, or when the negative binomial p_0(X_i|Z,C) holds. The normalizing function N ensures that p_k integrates to one. Each function h_j must have zero expectation under the null hypothesis, denoted by 𝔼_0(h_j)=0. We will show how to intelligently choose such functions shortly.
We now take the expectation of the logarithm of p_k. The normalizing function N has derivative ∂log N/∂ϕ equal to -𝔼_k(h), or the negative expectation under the order-k alternative (Lemma 4.2.1 in <cit.>). As a result, ∂log N/∂ϕ = 0 under the null hypothesis, so we can write the population score equation with respect to ϕ under the null as 𝔼_0(h) = 0. This implies that:
U = 1/n( ∑_j=1^n h_j)^T Π^-1( ∑_j=1^n h_j) →χ^2_k,
by the central limit theorem. Here, we index the samples of h rather than its entries. The matrix Π denotes the sample covariance matrix of the vector h, which we will describe soon.
The χ^2 test loses power with too many functions in h and may not converge to the asymptotic distribution fast enough with large variances for realistic sample sizes. We therefore fix k=2 and utilize the bounded functions h_j = m_j - 𝔼(m_j | μ, r), where m_1 = exp(-X_i) ∈ (0,1] and m_2 = sin(X_i) ∈ [-1,1]. The conditional expectations of m_1 and m_2 admit closed forms under the negative binomial:
𝔼(m_1 | μ, r) = exp(r) (r/((exp(1) - 1) μ + exp(1) r))^r,
𝔼(m_2 | μ, r) = ir^r/2((-exp(-i)μ + μ + r)^-r - (-exp(i)μ + μ + r)^-r),
where i = √(-1).
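These closed forms follow from the probability generating function of the negative binomial, 𝔼(t^X) = (r/(r+μ(1-t)))^r, evaluated at t=e^{-1} and t=e^{± i}; the short Monte Carlo check below, with arbitrary test values of μ and r, confirms them numerically.
```python
import numpy as np

rng = np.random.default_rng(2)
mu, r = 3.0, 4.0                                     # arbitrary test values
x = rng.negative_binomial(r, r / (r + mu), 1_000_000)

e_m1 = np.exp(r) * (r / ((np.e - 1) * mu + np.e * r)) ** r
# the bracketed difference is purely imaginary, so .real only drops numerical residue
e_m2 = (1j * r ** r / 2 * ((mu + r - np.exp(-1j) * mu) ** -r
                           - (mu + r - np.exp(1j) * mu) ** -r)).real

print(np.exp(-x).mean(), e_m1)                       # both ~0.21 for these values
print(np.sin(x).mean(), e_m2)
```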
Recall that we estimate (α, r) = θ in practice by NB-EM. Boos <cit.> used a first-order Taylor expansion to account for the non-maximum likelihood estimation using an adjusted covariance matrix Π equal to:
B_ϕϕ - A_ϕθA_θθ^-1B_θϕ - B_ϕθ(A_θθ^-1)^TA_ϕθ^T + A_ϕθA_θθ^-1B_θθ(A_θθ^-1)^TA_ϕθ^T
where:
S = ( h_1-𝔼(m_1 | μ, r), h_2-𝔼(m_2 | μ, r), X_iZ-μZ)^T,
A = -𝔼( ∂ S/∂ (ϕ,θ)), B = 𝔼(S S^T ).
We must finally account for the fact that we observe Z but simulate Z. We split the functions in S into two groups:
S_Z = ( h_1, h_2, X_iZ)^T,
S_Z = ( -𝔼(m_1 | μ, r), -𝔼(m_2 | μ, r), -μZ)^T,
yielding the new matrices:
A = -𝔼_Z( ∂ S_Z/∂ (ϕ,θ)), B = 𝔼_Z(S_ZS^T_Z) +𝔼_Z(S_ZS_Z^T ).
We then reject the null hypothesis that the negative binomial holds when the statistic U in Equation (<ref>) falls above the critical value determined by the Type I error rate.
§ CAUSAL INFERENCE
§.§ Parameter Estimation
We have thus far created a sparse negative binomial regressor and score-based goodness of fit test that both bypass Poisson measurement error. They however require access to p(Z,C). We now design an algorithm called Recover Parameters (RP) that utilizes regression and goodness of fit testing to systematically identify the β, r and γ parameters of p(Z,C). We summarize RP in Algorithm <ref>.
RP performs causal discovery in a top-down fashion; the algorithm discovers the roots, then the children of the roots, and so forth. The algorithm first fits negative binomial distributions on each random variable given P∪ C in Line <ref>. RP then tests whether each variable follows a negative binomial in Line <ref>. If a variable does, then RP places it into A and eliminates it from X in Line <ref>. We eliminate the variable from X with the smallest U statistic in practice to avoid dependence on a pre-specified Type I error rate.
When the negative binomial holds, the set (β_A i,r_i,γ_· i) obtained in Line <ref> contains the gamma distribution parameters of E_i because E_i ∼Γ(r_i,r_i/exp(Pγ_· i)). RP can therefore simulate values from p(A) in Line <ref> by drawing P as well as the samples of the corresponding error terms of A, and then passing the values through the structural equations associated with A in Equation (<ref>).
RP can now discover the children of the roots by independently sampling C by bootstrap, regressing on A∪P∪ C and testing whether the model fits a negative binomial. The algorithm repeats the above process of simulation, sparse regression and goodness of fit testing until it moves all variables from X into A. The algorithm is formally sound and complete:
(Identifiability) If ℙ_X|P is causally minimal and X_i ∼Pois(X_i C) for each X_i ∈X, then RP recovers (β,r,γ) with regression and goodness of fit oracles.
We cannot reach the conclusion directly from the results of <cit.> due to the Poisson measurement error. We therefore instead prove the theorem in the Appendix using an overdispersion score developed for quadratic variance functions <cit.>.
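The top-down loop of RP can be rendered schematically as follows; nb_em_fit, gof_statistic and simulate_from_fitted are hypothetical placeholders (not a fixed API) for the NB-EM regression, the score statistic U of the previous section, and forward simulation through the recovered structural equations.

def recover_parameters(X, P, C, nb_em_fit, gof_statistic, simulate_from_fitted):
    # Sketch of RP: top-down recovery of (beta, r, gamma). X maps variable names
    # to observed count vectors; the three callables are placeholders for NB-EM
    # regression, the goodness of fit statistic U, and simulation from the fit.
    remaining = set(X)
    ancestors, params, simulated = [], {}, {}
    while remaining:
        fits = {i: nb_em_fit(X[i], simulated, P, C) for i in remaining}
        stats = {i: gof_statistic(X[i], fits[i]) for i in remaining}
        best = min(stats, key=stats.get)   # smallest U: best negative binomial fit
        params[best] = fits[best]          # (beta_{A i}, r_i, gamma_{. i})
        simulated[best] = simulate_from_fitted(fits[best], simulated, P)
        ancestors.append(best)
        remaining.remove(best)
    return params, ancestors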
§.§ Root Causal Contributions
We would like to utilize the recovered parameters from RP to identify root causes of disease specific to each patient. A root cause of disease intuitively corresponds to an initial perturbation to a biological system that ultimately induces a diagnostic label (Figure <ref>). We can formulate this intuition mathematically by first introducing a binary label D labeling a sample with a diagnosis of a certain illness (D=1) or a healthy control (D=0). We assume that
D is a terminal vertex such that ℙ(D|X,P) = logistic(Xβ_· D + Pγ_· D).
The logistic function emphasizes that the diagnosis is a noisy label of the predictors X. We may likewise consider other functions for a binary target, such as the probit. We can associate an error term E_D to D but reserve the notation E for the error terms of X so that E_D ∉E.
A root cause of disease for a specific patient then corresponds to a natural intervention on the error term of an ancestor of D. In particular, consider X_i = exp( Xβ_· i) E_i from Equation (<ref>) and suppose E_i = e̅_i for a healthy control. We can interpret each error term E_i ∈E as the combined effects of unobserved variables lying upstream of only X_i – such as the DNA sequence, acetylation or methylation status of a gene. An exogenous insult – such as a somatic mutation or toxin – then changes the value of E_i from e̅_i to an “unhealthy” one e_i. The change of E_i from e̅_i to e_i affects downstream variables, ultimately impacting variables involved in the diagnostic criteria and therefore the diagnosis D itself (Figure <ref>).
We can quantify the change in probability of D using the following logarithmic odds:
D_0 = ln( ℙ(D=1|X,P)/ℙ(D=0|X,P)) = Xβ_· D + Pγ_· D.
The equation depends on the variables X∪P, but we would like to quantify the causal effect of each error term E_i ∈E on D for a specific patient.
The above logarithmic odds of the logistic regression model of D on X∪P admits a linear form. However, the logarithmic odds of D on E for patient j, denoted by f^j(E), generally requires a non-linear function. We learn f^j(E) by performing non-linear logistic regression with the cells associated with patient j. Let v^j(W) correspond to the conditional expectation of the non-linear model 𝔼(f^j(E)|W) for some W⊆E∖ E_i. We can measure the change in probability when intervening on some E_i ∈E for patient j via the difference δ^j_E_i W = v^j(E_i,W)-v^j(W). We have δ^j_e_i w >0 when E_i = e_i increases the probability that D=1 because v^j(e_i,w) is larger than v^j(w).
We do not a priori know which W to choose, so we average over all possible W⊆E∖ E_i as follows:
S^j_i = 1/p∑_W⊆ (E∖ E_i)\binom{p-1}{|W|}^{-1}δ^j_E_i W.
An instantiation of the above quantity corresponds precisely to the Shapley value of <cit.> which, as the reader may recall, is the only additive feature attribution measure satisfying the local accuracy, missingness and consistency desiderata. We can thus quantify the root causal contribution of E_i on D using S^j_i.
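For a small number of error terms, S^j_i can be computed by direct enumeration over the subsets W, as in the sketch below; the callable v, which returns the conditional expectation 𝔼(f^j(E)|E_W) for the patient's model, is assumed to be given and is not part of the original specification.

from itertools import combinations
from math import comb

def shapley_contribution(i, error_terms, v):
    # Exact Shapley value S^j_i of error term i for one sample; v maps a tuple
    # of error-term indices W to the conditional expectation E[f^j(E) | E_W].
    others = [k for k in error_terms if k != i]
    p = len(error_terms)
    total = 0.0
    for size in range(len(others) + 1):
        for W in combinations(others, size):
            delta = v(tuple(sorted(W + (i,)))) - v(W)   # delta^j_{E_i W}
            total += delta / comb(p - 1, size)
    return total / p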
Measurement error precludes recovery of the exact values of E and therefore of S^j_i. We instead compute the expected Shapley for patient j given by 𝔼(S^j_i|D=1) = Υ_i^j. This expected Shapley also satisfies the three desiderata by linearity of expectation:
* Local accuracy: ∑_i=1^p Υ_i^j = 𝔼 [f^j(E)|D=1] - 𝔼 f^j(E);
* Missingness: if E_i ∉E, then Υ_i^j = 0;
* Consistency: We have Ϋ_i^j ≥Υ_i^j for any two models f̈^j and f^j where 𝔼(δ̈^j_E_iW|D=1) ≥𝔼(δ^j_E_iW|D=1) for all W⊆E∖ E_i.
The first criterion ensures that the total score ∑_i=1^p Υ_i^j remains invariant to changes in the patient-specific disease prevalence rate 𝔼 f^j(E). The second criterion implies Υ_D^j = 0 because E_D ∉E. The first and third criteria together imply that each Υ_i^j is also invariant to changes in the disease prevalence rate, since we must have δ̈^j_E_iW≥δ^j_E_iW for all W⊆E∖ E_i and X_i ∈X. The three desiderata are therefore necessary.
We now introduce the following definition:
The root causal contribution of X_i for patient j is Υ_i^j. Similarly, X_i is a root cause of disease (D=1) for patient j if Υ_i^j > 0.
X_i is not a root cause of disease for patient j if Υ_i^j ≤ 0 because E_i does not on average increase the probability that D=1 in this case.
§.§ Root Causal Inference
We estimate Υ_i^j for each variable X_i ∈X and patient P_j ∈P using the Root Causal Inference with Negative Binomials (RCI-NB) algorithm summarized in Algorithm <ref>. RCI-NB first runs RP in Line <ref> to estimate the coefficients β_·X and gamma distribution parameters (r,γ_·X) in order to simulate samples from p(E,X,P). The algorithm then obtains (β_· D, γ_· D) by regressing D on X∪P with the Logistic Regression Expectation Maximization (LR-EM) algorithm in Line <ref>. LR-EM proceeds just like NB-EM but with Line <ref> removed and Equation (<ref>) replaced by:
(1/n∑_j=1^n d_jz_j - 1/s∑_j=1^s μ_j z_j/1+μ_j)⊙η - λ_n (β_· D, γ_· D - γ_· D) = 0,
or the corresponding score equations for logistic regression.
The recovered parameters in turn enable simulation of D_0. RCI-NB therefore non-linearly regresses D_0 on E for each patient j. We use XGBoost in this paper, so we can quickly compute the expected Shapley values for each patient using TreeSHAP in Line <ref> <cit.>.
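As an illustration of this step, the following sketch fits the non-linear regression with XGBoost and averages TreeSHAP attributions over the diseased cells of one patient; the arrays E_sim, D0_sim and d, as well as the hyperparameters, are placeholders rather than the paper's exact settings.

import numpy as np
import xgboost
import shap

def expected_shapley_for_patient(E_sim, D0_sim, d):
    # Regress the simulated log-odds D_0 on the error terms E with XGBoost,
    # then average the TreeSHAP attributions over the diseased cells (D = 1).
    model = xgboost.XGBRegressor(n_estimators=200, max_depth=4)
    model.fit(E_sim, D0_sim)
    phi = shap.TreeExplainer(model).shap_values(E_sim)   # one row per cell
    return phi[np.asarray(d) == 1].mean(axis=0)          # estimates E(S_i^j | D=1)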
A Shapley oracle outputs the true expected Shapley for each patient given p(E,P,D_0). The RCI-NB algorithm is sound with oracle information:
RCI-NB outputs the true expected Shapley values, or Υ_i^j for each variable X_i ∈X and patient j given regression, goodness of fit and Shapley oracles.
RP recovers the parameters (β_·X, r, γ_·X) with negative binomial regression and goodness of fit oracles per Theorem <ref>. Similarly, RCI-NB recovers β_· D and γ_· D with a logistic regression oracle. Lines 2 and 4 therefore simulate samples from p(E,P,D_0) and recover 𝔼(S_i^j | D=1) with a Shapley oracle in Line <ref>.
§ EXPERIMENTS
§.§ Algorithms
We compared RCI-NB against the following four algorithms representing the state of the art in inference for patient-specific root causes of disease:
* Root Causal Inference (RCI): an efficient top-down algorithm that infers patient-specific root causes assuming a linear model with non-Gaussian error terms <cit.>.
* Independent Component Analysis (ICA): utilizes a general purpose ICA algorithm to extract the error term values also assuming a linear model with non-Gaussian error terms <cit.>.
* Generalized Root Causal Inference with the Additive Noise Model (ANM): a bottom-up algorithm that generalizes RCI to non-linear models <cit.>. We equipped GRCI with ANM by solving for (β, γ) with NB-EM. We then subtracted out the conditional means to recover the error term values.
* Generalized Root Causal Inference with the Heteroscedastic Noise Model (HNM): same as ANM, but we solved for both (β, r,γ) with NB-EM. We then subtracted out the conditional means and divided by the conditional standard deviations to recover the error term values.
We equipped RCI-NB, ANM and HNM with the same XGBoost TreeSHAP procedure for estimating the expected Shapley values with extracted error terms <cit.>. ANM and HNM both utilize NB-EM and LR-EM like RCI-NB. RCI and ICA use linear logistic regression models for inferring the expected Shapley values in accordance with their linearity assumption. No algorithm except RCI-NB takes measurement error into account.
Reproducibility. All code needed to replicate the experimental results is available at https://github.com/ericstrobl/RCINB.
§.§ Evaluation Criteria
All of the above algorithms output an expected Shapley value for each patient and each variable. Moreover, the Shapley values involve a predictive model fit on the error terms. We therefore compared the outputs of the algorithms utilizing the root mean squared error (RMSE):
√(1/qp∑_j=1^q ∑_i=1^p (Υ̂_i^j - Υ_i^j)^2),
where lower is better. If an algorithm only estimates expected Shapley values for a subset of variables, then we set the values of the missing variables to zero. We computed the ground truth Shapley values Υ^j to negligible error by running XGBoost TreeSHAP on 100,000 ground truth values of (E, P, D_0). We also measured the average running time of each algorithm in seconds.
§.§ Synthetic Data
§.§.§ Data Generation
We generated structural equation models obeying Equation (<ref>) as follows. We first generated DAGs with p=7 or p=12 variables in X and an expected neighborhood size of two. We created a random adjacency matrix by sampling from a Bernoulli(2/(p-1)) distribution in the upper triangular portion of the matrix. We introduced weights β and offsets γ by drawing values from the uniform distribution on [-1,-0.25] in order to stay within machine precision after exponentiation – except for the terminal vertex D whose incoming edges β_D were drawn from the uniform distribution on [-1,-0.25] ∪ [0.25,1]. We drew D randomly from the set of terminal vertices with at least one parent. We similarly set the shape parameters of each gamma distribution by drawing from a uniform distribution on [0.1, 1]. We drew C from a gamma distribution with shape and rate equal to one. We generated n=10,000 or 100,000 cell samples from 2 to 10 individuals in P again sampled uniformly. We repeated the above procedure 250 times and therefore generated a total of 250 × 2 × 2 = 1000 independent datasets.
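The following sketch reproduces the main steps of this generation procedure for a single dataset; the diagnosis label D and its logistic model are omitted for brevity, and the variable names are ours rather than the paper's.

import numpy as np

def generate_dataset(p=7, n=10_000, n_patients=3, seed=0):
    # One synthetic dataset following the procedure above (D omitted): random
    # upper-triangular DAG, gamma errors with patient-specific rates,
    # exponentiated-linear structural equations, and Poisson measurement error.
    rng = np.random.default_rng(seed)
    adj = np.triu(rng.random((p, p)) < 2.0 / (p - 1), k=1)       # DAG edges
    beta = np.where(adj, rng.uniform(-1.0, -0.25, (p, p)), 0.0)  # edge weights
    gamma = rng.uniform(-1.0, -0.25, (n_patients, p))            # patient offsets
    r = rng.uniform(0.1, 1.0, p)                                 # gamma shapes
    patient = rng.integers(0, n_patients, n)                     # cell -> patient
    C = rng.gamma(shape=1.0, scale=1.0, size=n)                  # normalization constants
    X_true = np.zeros((n, p))
    for i in range(p):                                           # topological order
        rate = r[i] / np.exp(gamma[patient, i])                  # E_i ~ Gamma(r_i, rate)
        X_true[:, i] = np.exp(X_true @ beta[:, i]) * rng.gamma(r[i], 1.0 / rate)
    X_obs = rng.poisson(X_true * C[:, None])                     # Poisson measurement error
    return X_obs, X_true, patient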
§.§.§ Results
We summarize the results in Table 1. Bolded values denote the best performance per dimension and sample size. All bolded values are significant at a Bonferonni corrected threshold of 0.05/5 using paired t-tests, since we compared the performance of five algorithms.
RCI-NB achieved the lowest mean RMSE across all dimension numbers and sample sizes. Moreover, the algorithm continued to improve with increasing sample sizes. The other algorithms all performed worse but comparably across dimensions; their performances also did not improve with increasing sample sizes. Accounting for Poisson measurement error thus steadily improves performance with more samples.
RCI and ICA both completed within two seconds due to the computational efficiencies gained by assuming linearity. RCI-NB took significantly longer than both RCI and ICA but completed approximately two times as quickly as the alternative non-linear approaches ANM and HNM. We conclude that RCI-NB is slower than linear methods but faster than alternative non-linear ones.
§.§ Real Data
§.§.§ Lung Adenocarcinoma
We evaluated the algorithms on their ability to discover the root causes of lung adenocarcinoma. The GSE-123904 scRNA-seq dataset of <cit.> contains RNA counts from 17,502 single cells derived from cancerous and normal adjacent tissue of three patients (IDs 675, 682 and 684). The mitogen-activated protein kinase (MAPK) pathway plays an important role in lung carcinogenesis <cit.>. KRAS and EGFR comprise the top driver genes in lung adenocarcinoma <cit.>. EGFR had nearly all zero counts in the data, so we included downstream genes GRB2, HRAS, ARAF and CCND1 instead. We added KRAS and TP53 as well.
We do not have access to the ground truth expected Shapley values with real data. However, we can estimate them to high accuracy using the ground truth causal graph. We obtained the causal relations between the genes using the KEGG pathway of non-small cell lung cancer (HSA05223) and plot the pathway in Figure <ref> <cit.>. We estimated the ground truth Shapley values by (1) fitting negative binomial regression models using the sample versions of Equation (<ref>) on the ground truth parent set of each variable, (2) sampling from the de-noised gamma distributions and (3) running XGBoost TreeSHAP on the data of each patient. Recall that RCI-NB, ANM and HNM all use NB-EM, LR-EM and TreeSHAP.
We plot the RMSE and timing results of the algorithms in Figure <ref> as averaged over 50 bootstrapped samples. RCI-NB achieved the lowest RMSE by a large margin. The algorithms utilizing ANM and HNM achieved lower accuracy than the linear methods ICA and RCI. All non-linear algorithms took substantially longer than the linear ones, but RCI-NB still completed faster than both HNM and ANM. We conclude that RCI-NB achieves the highest accuracy and completes the fastest among the non-linear methods in this dataset.
§.§.§ Cervical Carcinoma
We next evaluated the ability of the algorithms to discover the root causes of cervical squamous cell carcinoma. We downloaded scRNA-seq data from E-MTAB-11948 used in <cit.>. The dataset contains 69,938 cells from cancerous and normal adjacent tissue of three patients with cervical cancer. PIK3CA is the most frequently mutated gene in cervical carcinoma <cit.>. All three patients also tested positive for HPV type 16 that produces oncoproteins E5, E6 and E7 known to affect EGFR and the PI3K signaling pathway <cit.>. The PI3K signaling pathway affects cell cycle progression via GSK3B and FOXO1 as well as cell survival via MDM2 and TP53 according to the HPV KEGG pathway (HSA05165). We plot the ground truth causal graph in Figure <ref>.
We summarize the results in Figure <ref> as averaged over 50 bootstrapped draws. RCI-NB again achieved the lowest average RMSE by a large margin. The linear algorithms did not consistently outperform non-linear ANM and HNM. Instead, all algorithms besides RCI-NB performed comparably. RCI-NB took 202.7 seconds to complete on average, on-par with ANM and HNM in this case. We conclude that RCI-NB again achieves the highest accuracy with timing comparable to other non-linear algorithms. The real data results therefore mimic those seen with synthetic data.
§ CONCLUSION
We presented a post non-linear SEM consisting of gamma distributed error terms and random variables corrupted by Poisson measurement error. We then showed that each variable admits a negative binomial distribution when conditioned on its parents and patient. We used this fact to derive novel regression and goodness of fit testing procedures that bypass Poisson measurement error. The test requires samples from the joint distribution of the parents, which we recovered using the top-down RCI-NB algorithm. Experimental results highlighted the superiority of RCI-NB in recovering the true root causal contributions – quantified using expected Shapley values – in both synthetic and real data. Future work could improve the scalability of the method and accommodate latent confounding not related to measurement error.
§ APPENDIX
§.§ Proposition 1
Let θ=(α,r) and write Y = X_i. Consider the derivative of the corrected log-likelihood, given by 1/n∑_i=1^n S_i(θ), where:
S_i(α) = y_i z_i - 1/s∑_j=1^s μ_j z_j
S_i(r) = ψ(y_i + r) - ψ(r) + ln(r) - 1/s∑_j=1^s ln(r + μ_j).
The original uncorrected versions of the score equations correspond to:
S_i^*(α) = y_i z_i - (r + y_i)/(r + μ_i) μ_i z_i
S_i^*(r) = 1 - (r + y_i)/(r + μ_i) + ψ(y_i + r) - ψ(r) + ln(r) - ln(r + μ_i),
The following conclusion holds:
(Asymptotic normality) Assume n →∞, s →∞ and n/s → 0. Further assume that Ω = Var(μZ, ln(r+μ)) and Σ = -𝔼 S^'(θ_0) are positive definite. Then √(n)(θ_n - θ_0) →𝒩(0,Σ^-1(J_1 + J_2 + J_3) Σ^-1).
We can write:
1/√(n)∑_i=1^n S_i (θ) = 1/√(n)∑_i=1^n S^*_i (θ) + 1/√(n)∑_i=1^n A_i(θ) + √(n)/s∑_j=1^s B_j(θ),
where:
A_i(θ) = ( (r + y_i)/(r + μ_i) μ_i z_i - 𝔼μZ, (r + y_i)/(r + μ_i) - 1 + ln(r + μ_i) - 𝔼ln(r + μ) )^T,
and:
B_j(θ) = ( 𝔼μZ - μ_j z_j, 𝔼ln(r+μ) - ln(r+μ_j) )^T.
We consider a compact neighborhood Q_ρ = {θ : |θ - θ_0| ≤ρ} for some ρ >0. We now invoke the integral form of the mean value theorem <cit.>:
1/√(n)∑_i=1^n S_i (θ_n) = 1/√(n)∑_i=1^n S_i (θ_0) - √(n)(θ_n - θ_0)C_n,
where C_n = - ∫_0^1 1/n∑_i=1^n S^'_i (θ_0 + u(θ_n - θ_0)) du.
We have sup_θ∈ Q_ρ |1/n∑_i=1^n S_i(θ) - 𝔼_θ_0S(θ)| → 0 almost surely by the uniform strong law of large numbers with n →∞ and s →∞ <cit.>. We then invoke Theorem 2.1 in <cit.> to conclude that θ_n is a strongly consistent sequence satisfying ∑_i=1^n S_i (θ_n) = 0. Therefore 1/√(n)∑_i=1^n S_i (θ_0) = √(n)(θ_n - θ_0)C_n.
We consider the right hand side of Equation (<ref>). We have: 1/√(n)∑_i=1^n S^*_i (θ_0) ⇝𝒩(0,J_1), where ⇝ denotes convergence in distribution and J_1 = 𝔼 S^*(θ_0) S^*T(θ_0). We also have 1/√(n)∑_i=1^n A_i (θ_0) ⇝𝒩(0,J_2), where J_2 = 𝔼 A(θ_0) A^T(θ_0). Thus: 1/√(n)∑_i=1^n (S^*_i (θ_0) + A_i (θ_0)) ⇝𝒩(0,J_1+J_2 + J_3), where J_3 = 𝔼 A(θ_0)S^*T(θ_0) + 𝔼 S^*(θ_0)A^T(θ_0). We finally have: 1/√(s)∑_j=1^s B_j (θ_0) ⇝𝒩(0,Ω), since Ω is positive definite. We invoke Slutsky's lemma so that: √(n)/s∑_j=1^s B_j (θ_0) = √(n/s)·1/√(s)∑_j=1^s B_j (θ_0) ⇝ 0, because n/s → 0. As a result: 1/√(n)∑_i=1^n S_i (θ_0) ⇝𝒩(0,J_1 + J_2 + J_3).
We next show that C_n →Σ almost surely. We let ε > 0. The function 𝔼_θ_0 S^'(θ) is continuous in θ. We can therefore identify a ρ >0 such that | θ - θ_0 | < ρ implies | 𝔼_θ_0 S^' (θ) + Σ | < ε/2.
Again by the uniform strong law of large numbers, there exists an integer N such that the following holds with probability one for all n > N: sup_θ∈ Q_ρ| 1/n∑_i=1^n S_i^'(θ) - 𝔼_θ_0 S^'(θ) | < ε/2. Now assume N is large enough that for all n > N we have | θ_n - θ_0 | < ρ. Hence, for all n > N, we have:
| C_n - Σ | ≤∫_0^1 | 1/n∑_i=1^n S^'_i (θ_0 + u(θ_n - θ_0)) + Σ| du
≤ ∫_0^1 sup_θ∈ Q_ρ| 1/n∑_i=1^n S_i^'(θ) - 𝔼_θ_0 S^'(θ) | + | 𝔼_θ_0 S^'(θ) + Σ | du < ε.
We conclude that C_n →Σ almost surely because we chose ε arbitrarily. We now invoke Slutsky's lemma:
√(n)(θ_n - θ_0) = C_n^-1·1/√(n)∑_i=1^n S_i(θ_0) ⇝𝒩(0,Σ^-1(J_1 + J_2 + J_3) Σ^-1).
§.§ Theorem 1
We consider the following overdispersion score:
T(X_i, U) = w^2_iUVar(X_i | U) - w_i U𝔼(X_i | U),
where w^-1_iU = 1 + 𝔼(X_i | U)/r and U⊆X∪P∪ C. The negative binomial of X_i conditional on U has mean 𝔼(X_i | U) and variance 𝔼(X_i | U) + 𝔼(X_i | U)^2/r. Thus T(X_i, U) = 0 in this case.
Let U more specifically correspond to a subset of the non-descendants of X_i always including P∪ C. We have:
If ℙ_X|P is causally minimal and X_i ∼Pois(X_i C) for each X_i ∈X, then T(X_i, U) = 0 if and only if Pa(X_i)⊆U.
We can write the following sequence:
T(X_i, U) = w^2_iUVar(X_i | U) - w_iU𝔼(X_i | U)
(a)= w^2_iU[ Var(𝔼(X_i | Pa(X_i),P,C)|U) + 𝔼(Var(X_i|Pa(X_i),P,C)|U) - w^-1_iU𝔼(X_i|U) ]
(b)= w^2_iU[ Var(𝔼(X_i | Pa(X_i),P,C)|U) + 𝔼(𝔼(X_i|Pa(X_i),P,C)|U) + 𝔼(𝔼(X_i|Pa(X_i),P,C)^2/r|U) - (1 + 𝔼(X_i | U)/r)𝔼(X_i|U) ]
= w^2_iU[ Var(𝔼(X_i | Pa(X_i),P,C)|U) + 𝔼(𝔼(X_i|Pa(X_i),P,C)^2/r|U) - 𝔼^2(X_i|U)/r ]
= w^2_iU(1+1/r) Var(𝔼(X_i | Pa(X_i),P,C)|U),
where (a) follows from the variance decomposition formula, and (b) from the quadratic variance property of the negative binomial.
For the backward direction, if Pa(X_i)⊆U, then
Var(𝔼(X_i | Pa(X_i),P,C)|U) = 0, so T(X_i, U) = 0. For the forward direction, assume by contrapositive that U does not contain all members of Pa(X_i). Then Var(𝔼(X_i | Pa(X_i),P,C)|U) > 0 by causal minimality, so T(X_i, U) > 0.
If ℙ_X|P is causally minimal and X_i ∼Pois(X_i C) for each X_i ∈X, then RP recovers (β,r,γ) with regression and goodness of fit oracles.
We prove the statement by induction. Base: suppose |X| = 1. Then X_i ∈X is a Poisson-gamma mixture and therefore a negative binomial. Hence, RP recovers (β_· i,r_i,γ_· i) in Line <ref>.
Induction: suppose the conclusion holds with |X| = p. We need to prove the statement when |X∪X_i| = p+1. We have two situations:
* Assume that A contains all of the parents of X_i and none of its descendants. Then X_i is a Poisson-gamma mixture given A∪P∪ C and hence a negative binomial. RP again recovers (β_· i,r_i,γ_· i) in Line <ref>. The algorithm then places X_i into A and removes it from X in Line <ref>.
* Assume that A (a) does not contain all of the parents of X_i or (b) contains at least one of the descendants of X_i from X∖X_i (or both). For (a), assume for a contradiction that RP removes X_i from X in Line <ref>. Then T(X_i, A∪P∪ C)>0 by Lemma <ref>, but this contradicts the fact that T(X_i, A∪P∪ C)=0 because X_i follows a negative binomial. Thus RP does not place X_i into A and remove it from X in Line <ref>. For (b), assume for a contradiction that A contains a descendant of X_i from X∖X_i. But then there exists at least one descendant X_j of X_i (X_j ≠X_i) whose parents are not all in A, and X_j was removed by RP in a previous iteration. We therefore arrive at a contradiction again by Lemma <ref>. We conclude that RP does not place X_i into A and remove it from X in Line <ref> in either case.
The conclusion follows by the inductive hypothesis.
|
http://arxiv.org/abs/2307.05577v1 | 20230710131915 | Comment on "Nuclear Excitation by Free Muon Capture" | [
"Natalia S. Oreshkina",
"Julian C. Berengut"
] | nucl-th | [
"nucl-th",
"physics.atom-ph"
] |
Comment on “Nuclear Excitation by Free Muon Capture"
In the paper <cit.> the process of free muon capture with simultaneous excitation of a nuclear isomer has been suggested, claiming that “the effect can be detectable for selected isotopes”.
Here, we argue that this claim cannot be confirmed.
Briefly, the process is far from the dominant mechanism for nuclear excitation; it excites high energy nuclear levels that will not generally decay to the isomer; the proposal assumes all incident muons will fulfil energy criteria, ignoring dominant capture paths; and nuclei excited by muons will have a shortened lifetime due to muonic capture.
Let us start by discussing an important technical point. As stressed in <cit.>, for a free muon to be captured to its ground or first excited state, it should have a well-defined energy close to the nuclear resonance. Coupled with the small size of the target orbital, such low-energy, non-relativistic scattering will be dominated by the s-wave cross-section; however, this is in conflict with angular momentum and parity selection rules for most of the considered transitions in Table I.
Instead these must originate from p or d-wave muons, where the rate will be suppressed. On the other hand, radiative and Auger capture via dominant channels are always available (see Fig. <ref>) and can involve any free muon energies and bound muon states.
After a muon is captured in a highly excited state with a statistically distributed angular momentum, the dominant process is cascade towards the ground state, first via Auger, then via radiative decay <cit.>. Due to the similar energy scale for muonic and nuclear states, there is strong mixing of muon-nucleus levels driven by hyperfine interaction. This so-called dynamical splitting was discussed in <cit.> and improved later in <cit.>. The correctness of the theoretical prediction was fully confirmed by the experiment <cit.>. As a result, all states of the muonic cascade include a superposition of a few low-lying nuclear states. Decays in the cascade occur spontaneously via photon emission, without the requirement of precise energy matching, and nuclear excitations are merely a side effect of this cascade in muonic atoms. This is a vastly different physical reality from that presented in <cit.>, where the muonic and nuclear degrees of freedom are considered highly separable.
A major drawback of the paper is that it compares the probability of nuclear excitation by muon capture (NEμC) almost exclusively with that of nuclear excitation by electron capture (NEEC). These are two very different physical systems: NEEC occurs with no competing nuclear excitation mechanisms. On the other hand, in order to properly evaluate the experimental feasibility of the proposal, the process should be compared with other decay channels of the same system to establish the hierarchy. The dominant process for muonic atoms, namely excitation upon muon cascade, has not received enough attention in the paper <cit.>.
Even if one allows that the mechanism might occur in some systems as a sub-dominant effect, the nuclear levels that are excited by this method are relatively high-energy, and do not necessarily cascade to the metastable isomer. For instance, the suggested nuclear state of ^207Pb at 4980.5 keV is highly excited and the direct photo-excitation rate is low. However, the excited state is separated from the ground state by over 100 levels and it is not metastable, so it would uncontrollably decay in gamma cascade. Therefore, the process will not enable the preferential feeding of nuclear isomers.
Finally, excited nuclei produced by NEμC will generally be destroyed by nuclear muon capture, which is the dominant decay mechanism for muonic atoms with heavy nuclei <cit.>.
Overall, the paper <cit.> presents an interesting mechanism for manipulating nuclear states via interaction with a muon, but ignores the dominant mechanism of muon-nucleus interaction, namely the muonic dynamical-structure cascade, and gives a misleading impression that NEμC could be observed. Despite the idea's attractiveness, based on the points mentioned above, it is highly improbable that NEμC can be visible in an experiment.
N.S.O. thanks the Gordon Godfrey fund for the financial support of the visit to UNSW Sydney, Australia.
Natalia S. Oreshkina^1 and Julian C. Berengut^2
^1 Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany
^2 School of Physics, University of New South Wales, Sydney NSW 2052, Australia
|
http://arxiv.org/abs/2307.04108v1 | 20230709063120 | Asynchronous Proportional Response Dynamics in Markets with Adversarial Scheduling | [
"Yoav Kolumbus",
"Menahem Levy",
"Noam Nisan"
] | cs.GT | [
"cs.GT",
"cs.MA",
"econ.TH",
"math.DS"
] |
We study Proportional Response Dynamics (PRD) in linear Fisher markets where participants act asynchronously. We model this scenario as a sequential process in which in every step, an adversary selects a subset of the players that will update their bids, subject to liveness constraints. We show that if every bidder individually uses the PRD update rule whenever they are included in the group of bidders selected by the adversary, then (in the generic case) the entire dynamic converges to a competitive equilibrium of the market. Our proof technique uncovers further properties of linear Fisher markets, such as the uniqueness of the equilibrium for generic parameters and the convergence of associated best-response dynamics and no-swap regret dynamics under certain conditions.
§ INTRODUCTION
A central notion in the study of markets is the equilibrium: a state of affairs where no single party wishes to unilaterally deviate from it. The main benefit of focusing on the notion of equilibria is in what it ignores: how the market can reach an equilibrium (if at all). This latter question is obviously of much interest as well, especially if you wish to consider computational aspects,[As we know that finding an equilibrium may be computationally intractable in general.] and a significant amount of research has been devoted to studying “market dynamics” and their possible convergence to an equilibrium. Almost all works that study market dynamics consider synchronous dynamics.
Synchronous Dynamics:
Every time step t, all participants update, simultaneously, their behavior based on the state
at time t-1.
Such synchronization is clearly difficult to achieve in real markets, and so one might naturally wonder to what extent is full synchrony needed or whether convergence of market dynamics occurs even asynchronously. There are various possible levels of generality of asynchrony to consider. The simplest model considers a sequential scenario where at every time step t, an adversary chooses a single participant, and only this participant updates their behavior based on the state at time t-1. The adversary is limited to adhere to some liveness condition, such as scheduling every participant infinitely often or at least once every T steps. In the most general model <cit.>, the adversary may also delay messages, causing players to reply to dated information. In this paper, we focus on an intermediate level of allowed asynchrony, where updates may happen in an arbitrary asynchronous manner, but message delays are always smaller than the granularity of activation.
Activation Asynchrony:[In <cit.> this was termed “simultaneous.”] Every time step t, an arbitrary subset of participants is chosen by an adversary and all of these participants update their behavior based on the state at time t-1. The adversary must adhere to the liveness condition where for every participant some set that includes him must be chosen at least once every T consecutive steps.
The market dynamics that we study in this paper are linear Fisher markets with proportional response dynamics (PRD), a model that has received much previous attention <cit.> and for which synchronous convergence to equilibrium is known.
While there are a few asynchronous convergence results known for other dynamics, specifically for tatonnement dynamics <cit.>, there are no such results known for proportional response dynamics, and achieving such results has been mentioned as an open problem in <cit.>.
Fisher Market with Linear Utilities:
There are n players and m goods. Each player i has a budget B_i and each good j has, w.l.o.g., a total quantity of 1. Buyer i's utility from getting an allocation
x_i=(x_i1,...,x_im)
is given by u_i(x_i) = ∑_j a_ij x_ij,
where the parameters a_ij≥ 0
are part of the definition of the market. A market equilibrium is an allocation
X = (x_ij) (where 0 ≤ x_ij≤ 1) and a pricing
p = (p_j) with the following properties.
(1) Market clearing: for every good j it holds that
∑_i x_ij = 1;
(2) Budget feasibility: for every player i it holds that ∑_j x_ij p_j ≤ B_i; and
(3) Utility maximization: for every player i and every alternative allocation
y = (y_1,...,y_m) with ∑_j y_j p_j ≤ B_i we have that u_i(x_i) ≥ u_i(y).
Proportional Response Dynamics:
At each time step t, each player i will make a bid b^t_ij≥ 0 for every good j, where ∑_j b^t_ij = B_i. In the first step, the bid is arbitrary. Once bids for time t are announced, we calculate p^t_j = ∑_i b^t_ij and allocate the items proportionally: x^t_ij = b^t_ij/p^t_j, providing each player i with utility u^t_i = ∑_j a_ijx^t_ij. At this point, player i updates his bids for the next step by bidding on each item proportionally to the utility he obtained from the item: b^t+1_ij = B_i · a_ijx^t_ij / u^t_i.
From the perspective of the player, proportional response updates can be thought of as a simple parameter-free online learning heuristic, with some similarity to regret-matching <cit.> in its proportional updates, but considers the utilities directly, rather than the more sophisticated regret vector loss.
It is not difficult to see that a fixed point of this proportional response dynamic is indeed an equilibrium of the Fisher market. Significantly, it was shown by <cit.>
that this dynamic does converge,
in the synchronous model, to an equilibrium. As mentioned, the question of asynchronous convergence
was left open.
We provide the first analysis of proportional response dynamics in the asynchronous setting, and provide a positive answer to this open question in our “intermdiate” level of asynchrony.
For generic linear Fisher markets, proportional response dynamics with adversarial activation asynchrony, where each player is activated at least once every T steps, converge to the unique market equilibrium.
“Generic” means except for measure zero of possible (a_ij)'s,
and the uniqueness of the
market equilibrium is due to this genericity. We do not know whether the genericity condition is required for
asynchronous convergence, and we leave this as a minor open problem. We did not analyze the rate of convergence to equilibrium;
we leave such analysis as a second open problem.
Our main open problem, however, is the generalization to full asynchrony.
Open Problem: Does such convergence occur also in the full asynchronous model where the adversary may introduce arbitrary message delays?
Our techniques rely on considering an associated game
obtained by using “modified” utility functions for
each of the players: ũ_̃ĩ(b)=∑_jb_ijln(a_ij) + ∑_jp_j(1 - ln(p_j)). We show that a competitive market equilibrium (with the original utility
functions) corresponds to a Nash equilibrium in the associated game.[It is
worthwhile to emphasize, though, that a competitive market equilibrium is not
a Nash equilibrium in the original market since
the players are price takers rather than fully rational. See Section <ref>.]
These modified utility functions are an adaptation to an individual utility
of a
function Φ(b)=∑_ijb_ijln(a_ij) + ∑_jp_j(1 - ln(p_j))
that was proposed in <cit.> as an objective for a convex program for equilibrium computation.[Notice that Φ is not the sum of the ũ_̃ĩ's, as
the second term appears only once.] This function was first linked with proportional
response dynamics in <cit.> where it was proven
that synchronous proportional
response dynamics act as mirror descent on this function.
The following three sets of bid profiles are identical: (1) the set of pure strategy Nash equilibria of the associated game; (2) the set of market equilibria of the Fisher market; and (3) the maximizing set of the potential function Φ.
The technical core of our proof is to show that not only does a synchronized
proportional response step by all the players increase the potential
function but, in
fact, every proportional response step by any subset of the players increases this potential function.
The point of view of market equilibria as Nash equilibria of the associated game offers several other advantages, e.g., suggesting several other dynamics that are related to proportional response that can be of interest. For example, we show that letting players best-respond in the game corresponds to the limit of a sequence of proportional response steps by a single player, but can be implemented as a single step of such a best-response in the game, which can be computed efficiently by the players and
may converge faster to the market equilibrium. Another possibility is using some (internal) regret minimization dynamics (for the game), which would also
converge to equilibrium in the generic case since, applying <cit.>,
it is the unique Correlated Equilibrium as well.
The structure of the rest of the paper is as follows. In Section <ref> we provide further formal details and notations that will be useful for our analysis. In Section <ref> we present the associated game and its relation to the competitive equilibria of the market. In Section <ref> we study best response dynamics in the associated game and their relation to PRD. In Section <ref> we show a key lemma regarding the potential function of the associated game under bid updates by subsets of the players, and then, in Section <ref> we show the uniqueness of the market equilibrium for generic markets and complete our proof of convergence for asynchronous PRD. In Section <ref> we provide simulation results that compare the convergence of proportional response dynamics with best response dynamics in the associated game in terms of the actual economic parameters in the market (namely, the social welfare and the convergence of the bid profiles). Finally, in Section <ref> we conclude and discuss limitations of our technique and open questions.
All proofs in this paper are deferred to the appendix.
§.§ Further Related Work
Proportional response dynamics (PRD) were originally studied in the context of bandwidth allocation in file-sharing systems, where it was proven to converge to equilibrium, albeit only for a restrictive setting <cit.>.
Since then, PRD has been studied in a variety of other contexts, including Fisher markets, linear exchange economies, and production markets. See <cit.> for further references.
In Fisher markets, synchronous PRD has been shown to converge to market equilibrium for Constant Elasticity of Substitution (CES) utilities in the substitutes regime <cit.>.
For the linear Fisher setting, synchronous PRD was explained as mirror descent <cit.> on a convex program, previously discovered while developing an algorithm to compute the market equilibrium <cit.>,
and later proven to be equivalent to the famous Eisenberg-Gale program <cit.>.
By advancing the approach of <cit.>, synchronous PRD with mild modifications was proven to converge to a market equilibrium for CES utilities in the complements regime as well <cit.>.
In linear exchange economies, synchronous PRD has been shown to converge to equilibrium in the space of utilities and allocations while prices do not converge and may cycle, whereas for a damped version of PRD, also the prices converge <cit.>.
In production markets, synchronous PRD has been shown to increase both growth and inequalities in the market <cit.>. PRD was also shown to converge with quasi-linear utilities in <cit.> and shown to stay close to market equilibrium for markets with parameters varying over time <cit.>.
All the above works consider simultaneous updates by all the players, and the question of analyzing asynchronous dynamics and whether they converge was raised by several authors as an open problem <cit.>.
Asynchronous dynamics in markets have been studied in several recent works. However, these works consider different models and dynamics from ours, and to our knowledge, our work presents the first analysis of asynchronous proportional response bidding dynamics. In <cit.>, it is shown that tatonnement dynamics under the activation asynchrony model converge to equilibrium, with results for settings both with and without carryover effects between time units. A later work, <cit.>, showed that tatonnement price dynamics converge to a market equilibrium under a model of sequential activation where in every step a single agent is activated, and where additionally, the information available to the activated seller about the current demand may be inaccurate within some interval that depends on the range of past demands. A different approach taken in <cit.> assumes that each seller has a set of rules affected by the other players' actions and governing its price updates; it is shown that the dynamics in which sellers update the prices based on such rules converge to a unique equilibrium of prices in the activation asynchrony model.
Classic results regarding the computation of competitive equilibria in markets mostly consider centralized computation and vary from combinatorial approaches using flow networks <cit.>, interior point <cit.>, and ellipsoid <cit.> methods, and many more <cit.>. Eisenberg and Gale devised a convex program which captures competitive equilibria of the Fisher model as its solution <cit.>. Notable also is the tatonnement process of price convergence in markets dated back to Walras <cit.> and studied extensively from Arrow <cit.> and in later works.
More broadly, in the game theoretic literature, our study is related to a long line of work on learning in games, starting from seminal works in the 1950s <cit.>, and continuing to be an active field of theoretical research <cit.>, also covering a wide range of classic economic settings including competition in markets <cit.>, bilateral trade <cit.>, and auctions <cit.>, as well as applications such as blockchain fee markets <cit.> and strategic queuing systems <cit.>. For a broad introduction to the field of learning in games, see <cit.>. The vast majority of this literature studies repeated games under the synchronous dynamics model. Notable examples of analyses of games with asynchronous dynamics are <cit.>, which study best response dynamics with sequential activation, and <cit.>, which explore best response dynamics in a full asynchrony setting which includes also information delays, and show that in a class of games called max-solvable, convergence of best response dynamics is guaranteed. Our analysis of best response dynamics in Section <ref> takes a different route, and does not conclude whether the associated game that we study is max-solvable or not; such an analysis seems to require new ideas.
Our work is also related to a large literature on asynchronous distributed algorithms.
We refer to a survey on this literature <cit.>.
The liveness constraint that we consider in the dynamics[Intuitively, if one allows some of the parameters in the dynamic not to update, these parameters become irrelevant, as they will remain frozen, and thus one cannot hope to see any convergence of the entire system.] is related to those, e.g., in <cit.>.
Recent works that are conceptually more closely related are <cit.>, which propose asynchronous distributed algorithms for computing Nash equilibria in network games. Notably, <cit.> propose an algorithm that converges to an equilibrium in a large class of games in asynchronous settings with information delays. Their approach, however, does not capture proportional response dynamics and does not apply to our case of linear Fisher markets.
§ MODEL AND PRELIMINARIES
The Fisher market: We consider the classic Fisher model of a networked market in which there is a set of buyers ℬ and a set of divisible goods 𝒢. We denote the number of buyers and number of goods as n = |ℬ|, m = |𝒢|, respectively, and index buyers with i and goods with j. Buyers are assigned budgets B_i∈ℝ^+ and have some value[
For ease of exposition, our proofs use w.l.o.g. a_ij > 0. This is because wherever a_ij = 0 could affect the proof (e.g., in terms such as ln(a_ij)), the corresponding expressions are multiplied by zero in our dynamics.]
a_ij≥ 0 for each good j.
Buyers' valuations are normalized such that ∑_j a_ij = 1.
It is convenient to write the budgets as a vector B=(B_i) and the valuations as a matrix A_n × m=(a_ij), such that A,B are the parameters defining the market.
We denote the allocation of goods to buyers as a matrix X = (x_ij) where x_ij≥ 0 is the (fractional) amount of good j that buyer i obtained.
We assume w.l.o.g. (by proper normalization) that there is a unit quantity of each good. The price of good j (which depends on the players' actions in the market, as explained below) is denoted by p_j≥ 0 and prices are listed as a vector p=(p_j). Buyers have a linear utility function u_i(x_i)=∑_j a_ijx_ij with the budget constraint ∑_j x_ij p_j ≤ B_i. We assume w.l.o.g. that the economy is normalized, i.e., ∑_i B_i = ∑_j p_j = 1.
Market equilibrium: The competitive equilibrium (or “market equilibrium”) is defined in terms of allocations and prices as follows.
(Market Equilibrium): A pair of allocations and prices (X^*,p^*) is said to be market equilibrium if the following properties hold:
* Market clearing: ∀ j, ∑_i x_ij^* = 1,
* Budget feasibility: ∀ i, ∑_j x_ij^* p_j^*≤ B_i,
* Utility maximization: ∀ i, x_i^* ∈max_x_i u_i(x_i).
In other words, under equilibrium prices all the goods are allocated, all budgets are used, and no player has an incentive to change their bids given that the prices remain fixed.
Notice that this notion of equilibrium is different from a Nash equilibrium of the game where the buyers select their bids strategically, since in the former case, players do not consider the direct effect of possible deviation in their bids on the prices. We discuss this further in Section <ref>.
For linear Fisher markets, it is well established that competitive equilibrium utilities u^* and prices p^* are unique, equilibrium allocations are known to form a convex set, and the following conditions are satisfied.
∀ i,j: a_ij/p^*_j≤u_i^*/B_i, and x_ij > 0 ⟹ a_ij/p^*_j = u_i^*/B_i.
This is a detailed characterization of the equilibrium allocation: every buyer gets a bundle of goods in which all goods maximize the value per unit of money. The quantity a_ij/p^*_j is informally known as “bang-per-buck” (ch. 5 & 6 in <cit.>), the marginal profit from adding a small investment in good j.
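These conditions are straightforward to verify numerically for a candidate pair of allocations and prices, as in the following illustrative sketch (the tolerance is arbitrary and the function names are ours).

import numpy as np

def is_market_equilibrium(X, p, A, B, tol=1e-8):
    # Check market clearing, budget feasibility, and the bang-per-buck
    # conditions above for a candidate allocation X (n x m) and prices p.
    u = (A * X).sum(axis=1)                        # utilities at the candidate allocation
    bpb = A / p                                    # bang-per-buck a_ij / p_j
    clearing = np.allclose(X.sum(axis=0), 1.0, atol=tol)
    budgets = np.all(X @ p <= B + tol)
    upper = np.all(bpb <= (u / B)[:, None] + tol)
    tight = np.all(np.abs(bpb - (u / B)[:, None])[X > tol] <= tol)
    return clearing and budgets and upper and tight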
Market equilibrium bids are also known to maximize the Nash social welfare function (see <cit.>) NSW(X)=∏_i∈ℬ u_i(x_i)^B_i and to be Pareto efficient, i.e., no buyer can improve their utility without making anyone else worse off (as stated in the first welfare theorem).
The trading post mechanism and the market game (Shapley-Shubik):
First described in <cit.> and studied under different names <cit.>, the trading post mechanism is an allocation and pricing mechanism which attempts to capture how a price is modified by demand. Buyers place bids on goods, where buyer i places bid b_ij on good j. Then, the mechanism computes the good's price as the total amount spent on that good and allocates the good proportionally to the bids, i.e., for bids b:
p_j = ∑_i=1^n b_ij, x_ij = b_ij/p_j if b_ij > 0, and x_ij = 0 otherwise.
Note that the trading post mechanism guarantees market clearing for every bid profile b in which all goods have at least one buyer who is interested in buying. The feasible bid set of a buyer under the budget constraint is S_i={b_i∈ℝ^m | ∀ j b_ij≥ 0 ∑_j b_ij=B_i}, i.e., a scaled simplex. Denote S=∏_i∈ℬ S_i and S_-i=∏_k∈ℬ∖{i} S_k.
Considering the buyers as strategic, one can define the market game as G={ℬ,(S_i)_i∈ℬ,(u_i)_i∈ℬ} where the utility functions can be written explicitly as u_i(b)=u_i(x_i(b))=∑_j=1^ma_ijb_ij/p_j.
We sometimes use the notation u_i(b_i, b_-i), where b_i is the bid vector of player i and b_-i denotes the bids of the other players.
Potential function and Nash equilibrium: For completeness, we add the following definitions.
Potential function: A function Φ is an exact potential function<cit.> if ∀ i∈ℬ,∀ b_-i∈ S_-i and ∀ b_i,b_i'∈ S_i we have that Φ(b_i',b_-i) - Φ(b_i,b_-i) = u_i(b_i',b_-i) - u_i(b_i,b_-i), with u_i being i's utility function in the game.
Best response: b_i^* is a best response to b_-i if ∀ b_i∈ S_i u_i(b^*_i,b_-i)≥ u_i(b_i,b_-i). That is, no other response of i can yield a higher utility.
Nash equilibrium: b^* is Nash equilibrium if ∀ i b^*_i is a best response to b^*_-i (no player is incentivized to change their strategy).
Proportional response dynamics:
As explained in the introduction, the proportional response dynamic is specified by an initial bid profile b^0, with b_ij^0 > 0 whenever a_ij > 0, and the following update rule for every player that is activated by the adversary: b^t+1_ij = a_ijx^t_ij/u_i(x^t_i) B_i. See Section <ref> for further details on activation of subsets of the players.
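For concreteness, the following Python sketch implements the trading post mechanism and one synchronous proportional response step for a valuation matrix A (n × m, with a_ij > 0) and a budget vector B; it is an illustrative sketch rather than code from the paper.

import numpy as np

def trading_post(b):
    # Prices p_j = sum_i b_ij and proportional allocation x_ij = b_ij / p_j.
    p = b.sum(axis=0)
    x = np.divide(b, p, out=np.zeros_like(b), where=p > 0)
    return p, x

def proportional_response_step(b, A, B):
    # One synchronous PRD update: b_ij <- B_i * a_ij * x_ij / u_i.
    _, x = trading_post(b)
    u = (A * x).sum(axis=1)                  # u_i = sum_j a_ij x_ij
    return B[:, None] * (A * x) / u[:, None]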
§ THE ASSOCIATED GAME
The Fisher market can be naturally thought of as a game in which every one of the n players aims to optimize their individual utility u_i(b_i, b_-i), as defined in Section <ref>. However, it is known that the set of Nash equilibria of this game does not coincide with the set of market equilibria <cit.>, and so a solution to this game (if indeed the players reach a Nash equilibrium) is economically inefficient <cit.>.
A natural question that arises is whether there is some other objective for an individual player that, when maximized by all the players, yields the market equilibrium. We answer this question positively and show that there is a family of utility functions such that in the "associated games" with these utilities for the players, the set of Nash equilibria is identical to the set of market equilibria of the original game
(for further details, see also the appendix).
However, the fact that a Nash equilibrium of an associated game is a market equilibrium still does not guarantee that the players' dynamics will indeed reach this equilibrium.
A key element in our proof technique is that we identify, among this family of associated games, a single game, defined by the "associated utility" ũ_i(b)=∑_jb_ijln(a_ij) + ∑_jp_j(1 - ln(p_j)), which admits an exact potential. We then use a relation which we show between this game and the proportional response update rule to prove the convergence of our dynamics (Theorem <ref>).
(The Associated Game):
Let G be a market game. Define the associated utility of a player i as ũ_i(b)=∑_jb_ijln(a_ij) + ∑_jp_j(1 - ln(p_j)).
The associated game G̃ is the game with the associated utilities for the players and the same parameters as in G.
G̃ is constructed such that the function Φ is its potential. Note that although they have a similar structure, the ũ_i and Φ differ only in the first term's summation over i (Φ is not the sum of the players' utilities).
For every Fisher market, the associated game G̃ admits an exact potential function that is given by[Since we discuss the players' associated utilities, we consider maximization of this potential. Of course, if the reader feels more comfortable with minimizing the potential, one can think of the negative function.]
Φ(b)=∑_ijb_ijln(a_ij) + ∑_jp_j(1 - ln(p_j)).
Once the potential function is defined, the proof is straightforward: the derivatives of the associated utilities ũ_i and of the potential Φ with respect to b_i are equal for all i.
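Both the associated utility and the potential are easy to write down explicitly, and the exact-potential property can be checked numerically; the snippet below reuses the trading_post helper from the earlier sketch and assumes all valuations and prices are strictly positive.

import numpy as np

def associated_utility(i, b, A):
    # u~_i(b) = sum_j b_ij ln(a_ij) + sum_j p_j (1 - ln(p_j)); assumes a_ij, p_j > 0.
    p, _ = trading_post(b)
    return (b[i] * np.log(A[i])).sum() + (p * (1.0 - np.log(p))).sum()

def potential(b, A):
    # Phi(b) = sum_ij b_ij ln(a_ij) + sum_j p_j (1 - ln(p_j)).
    p, _ = trading_post(b)
    return (b * np.log(A)).sum() + (p * (1.0 - np.log(p))).sum()

# Exact potential property: for any unilateral deviation b_i -> b_i' of player i,
# potential(b', A) - potential(b, A) equals
# associated_utility(i, b', A) - associated_utility(i, b, A).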
(Restatement of Theorem <ref>).
The following three sets of bid profiles are equal. (1) The set of pure-strategy Nash equilibria of the associated game: NE(G̃) = {b^* | ∀ i, ∀ b_i∈ S_i: ũ_i(b^*)≥ũ_i(b_i, b^*_-i)}; (2) the set of market equilibrium bid profiles of the Fisher market: {b^*| (x(b^*),p(b^*)) satisfy Def. <ref>}; and (3) the maximizing set of the potential from Theorem <ref>: argmax_b∈ SΦ(b).
The proof uses a different associated game G' that has simpler structure than G̃, but does not have an exact potential, and shows that: (i) Nash equilibria of G' identify with the market equilibria; (ii) all the best responses of players i to bid profiles b_-i in G' identify with those of G̃; and (iii) every equilibrium of G̃ maximizes the potential Φ (immediate by the definition of potential).
§ BEST RESPONSE DYNAMICS
In this section we explore another property of the associated game: we show that if instead of using the proportional response update rule, each player myopically plays their best response to the last bid profile with respect to their associated utility, then the entire asynchronous sequence of bids converges to a market equilibrium, as stated in the following theorem. We then show that there is a close relation between best response and proportional response dynamics.
For generic linear Fisher markets in a sequential asynchrony model where in every step a single player is activated, best response dynamics converge to the Market Equilibrium. For non-generic markets the prices are guaranteed to converge to the equilibrium prices.
The idea of the proof is to show that the best-response functions are single valued
(for all i and b_-i, the function ũ_i(·, b_-i) has a unique maximizer) and continuous (using the structure of the best-response bids). Together with the existence of the potential function Φ, the analysis of <cit.> applies to these dynamics, and thus convergence is guaranteed.
One of the appealing points about proportional response dynamics is their simplicity — in each update, a player observes the obtained utilities and can easily compute the next set of bids. We show that also the best response of a player can be computed efficiently by reducing the calculation to a search over a small part of the subsets of all goods which can be solved by a simple iterative process.
For every player i and any fixed bid profile b_-i for the other players, the best response of i is unique and can be computed in 𝒪(mlog(m)) time.
Roughly, best responses are characterized uniquely by a one-dimensional variable c^*. For every subset of goods s we define a variable c_s and prove that c^* is the maximum amongst all c_s. So finding c^* is equivalent to searching a specific subset with maximal c_s. The optimal subset of goods admits a certain property that allows to narrow-down the search domain from all subsets to only m subsets.
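One natural way to implement this search, derived from the first-order conditions of ũ_i, is to sort goods by their bang-per-buck against the other players' spending and take the prefix attaining the maximal c_s; the sketch below is our reading of this characterization and may differ in detail from the procedure in the appendix of the paper. It reuses the trading_post helper from the earlier sketch.

import numpy as np

def best_response(i, b, A, B):
    # Water-filling best response of player i to the others' fixed bids, with respect
    # to the associated utility; assumes the others spend q_j > 0 on every good.
    p, _ = trading_post(b)
    q = p - b[i]                                  # other players' spending per good
    order = np.argsort(-A[i] / q)                 # goods by bang-per-buck, descending
    c = np.cumsum(A[i][order]) / (B[i] + np.cumsum(q[order]))   # candidate rates c_s
    k = int(np.argmax(c))                         # prefix attaining c* = max_s c_s
    new_bid = np.zeros_like(b[i])
    chosen = order[: k + 1]
    new_bid[chosen] = A[i][chosen] / c[k] - q[chosen]   # equalize a_ij / p_j = c*
    return new_bid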
The relation between the best response and proportional response updates can intuitively be thought of as follows. While in PRD players split their budget between all the goods according the utility that each good yields, and so gradually shift more budget to the more profitable subset of goods,
best response bids of player i with respect to ũ_i can be understood as spending the entire budget on a subset of goods which, after bidding so (considering the effect of bids on prices), will jointly have the maximum bang-per-buck (in our notation a_ij/p_j) amongst all subsets of goods,
given the bids b_-i^t of the other players.
Those bids can be regarded as “water-filling” bids as they level the bang-per-buck amongst all goods purchased by player i (for more information see the appendix).
It turns out that there is a clear formal connection between the best response of a player in the associated game and the proportional response update rule in the true game: the best response bids are the limit point of an infinite sequence of proportional response updates by the same player, as expressed in the following proposition.
Fix any player i and fix any bid profile b_-i for the other players. Let b_i^* = argmax_b_i ∈ S_iũ_i(b_i, b_-i) and let (b_i^t)_t=1^∞ be a sequence of consecutive proportional response steps applied by player i, where b_-i is held fixed at all times t. Then lim_t →∞ b_i^t = b_i^*.
§ SIMULTANEOUS PLAY BY SUBSETS OF AGENTS
In this section, we shift our focus back to proportional response dynamics under the activation asynchrony model in which the adversary can choose in every step any subset of players to update their bids. Towards proving that proportional response dynamics converges to a market equilibrium in this setting, we utilize the associated game and potential function presented in Section <ref> to show that any activated subset of players performing a PRD step will increase the potential.
Formally, let v⊆ℬ be a subset of players activated by the adversary and let f_v(b) be a function that applies proportional response to members of v and acts as the identity function for all the other players. The update for time t+1 when the adversary activates a subset of the players v^t ⊆ℬ is therefore:
b_ij^t+1=(f_v^t(b^t))_ij =
a_ijx_ij^t/u_i^tB_i if i ∈ v^t
b_ij^t otherwise.
For all v ⊆ℬ and for all b∈ S it holds that Φ(f_v(b)) > Φ(b), unless f_v(b)=b.
The proof shows that for any subset v^t, a PRD step b^t+1 is the solution to some maximization problem of a function g^t(b) different from Φ, such that Φ(b^t+1)>g^t(b^t+1)≥ g^t(b^t)=Φ(b^t).
Notable to mention is the sequential case where all subsets are singletons, i.e., for all t, v^t={i^t} for some i^t ∈ℬ.
In that case, the above result yields that the best-response bids can be expressed as the solution to an optimization problem over the bids b on a function that is monotone in the KL divergence between the prices induced by b and the current prices,
whereas PRD is the solution to an optimization problem on a similar function, but one that depends on the KL divergence between the bids b and the current bids. Thus, sequential PRD can be regarded as a relaxation of best response; on the one hand, it is somewhat simpler to compute a step, and on the other hand, it takes more steps to reach the best response (see Proposition <ref> and the simulations in Section <ref>).
§ GENERIC MARKETS
Here we show that in the generic case, linear Fisher markets have a unique equilibrium bid profile.
While it is well known that in linear Fisher markets equilibrium prices and utilities are unique, and the equilibrium bids and allocations form convex sets (see
section <ref>), we show that multiplicity of equilibrium bid profiles can result only from a special degeneracy in the game parameters that has measure zero in the parameter space. In other words, if the game parameters are not carefully tailored to satisfy a special equation (formally described below), or, equivalently, if the parameters are slightly perturbed, the market will have a unique equilibrium. Similar property was known for the linear exchange market<cit.> and we bring a simple and concise proof for the Fisher model.
A Fisher market is called generic if the non-zero valuations of the buyers (a_ij) do not admit any multiplicative equality. That is, for any distinct and non empty K, K'⊆ℬ×𝒢 it holds that ∏_(i,j)∈ Ka_ij≠∏_(i',j')∈ K'a_i'j'.
Any generic linear fisher market has a unique market equilibrium bid profile b^*.
Before discussing the proof of Theorem <ref>, we present the following corollary.
In generic linear Fisher markets, no-swap regret dynamics in the associated game converge to the market equilibrium.
This follows from <cit.> which states that in games with convex strategy sets and continuously differentiable potential function Φ, as in our case, the set of correlated equilibria are mixtures of elements in max_b Φ. Theorem <ref> yields that max_b Φ = {b^*} is the unique market equilibrium in the generic case, and so for a unique correlated equilibrium we have a that no-swap regret guarantees convergence.
To prove Theorem <ref>, we use the representation of the bids in the market as a bipartite graph of players and goods Γ(b)={ V,E} with V=ℬ∪𝒢 and E={(i,j) | b_ij > 0}. The proof shows that if a market has more than one equilibrium bid profile, then there has to be an equilibrium b with Γ(b) containing a cycle, whereas the following lemma forbids this for generic markets.
If b^* are equilibrium bids in a generic linear Fisher market, then Γ(b^*) has no cycles.
A key observation for proving this lemma is that at a market equilibrium, a_ij/p^*_j is constant amongst goods purchased, and so it is possible to trace a cycle and have all the p^*_j cancel out and obtain an equation contradicting the genericity condition.
An observation that arises from Lemma <ref> is that when the number of buyers in the market is of the same order of magnitude as the number of goods or larger, then in equilibrium most buyers will only buy a small number of goods. Since there are no cycles in Γ(b^*) and there are n+m vertices, there are at most n+m-1 edges. Thus, with n buyers, the average degree of a buyer is 1 + m-1/n.
Proof idea of Theorem <ref>:
With Theorem <ref> and the construction from the previous sections under our belts (namely, the associated game, Theorems <ref>, <ref> about its potential and equilibria, and Lemma <ref> about updates by several players simultaneously), we are now ready to complete the proof of Theorem <ref> on the convergence of asynchronous proportional response dynamics.
The idea is that we now know that PRD steps by subsets of players increase the potential, and so the bids should somehow converge to reach the maximum potential, which is obtained at the unique market equilibrium.
Technically, since the sequence of bids b^t is bounded, it must have condensation points. The proof then proceeds by way of contradiction. If the sequence does not converge to the equilibrium bid profile b^*, then there is some subsequence that converges to a different bid profile b^**, which by Theorem <ref>, must have lower potential than b^* (since it is not a market equilibrium). The main idea is to show that if players are not “starved” in the dynamic, i.e., if the maximum time interval between consecutive updates of a player is bounded by some constant T, then the dynamic must reach a point where the bids are sufficiently close to b^** such that there must be some future update by some subset of the players under which the potential increases to more than Φ(b^**), thus contradicting the existence of condensation points other than the market equilibrium.
To show this, the proof requires several additional arguments on the continuity of compositions of PRD update functions that arise under adversarial scheduling, and the impact of such compositions on the potential function. The full proof is found in the appendix.
§ SIMULATIONS
Next, we look at simulations of the dynamics that we study and compare the convergence of proportional response dynamics to best response dynamics in the associated game, as discussed in Section <ref>.
The metrics we focus on here for every dynamic are the Nash social welfare, which, as mentioned in Section <ref>, is maximized at the market equilibrium, and the Euclidean distance between the bids at time t and the equilibrium bids. Additionally, we look at the progression over time of the value of the potential Φ(b^t) (for the definition, see Section <ref>).
Figure <ref>, presents simulations of an ensemble of markets, each with ten buyers and ten goods, where the parameters in each market (defined in the matrices A,B) are sampled uniformly (and so the genericity condition <ref> holds with probability one) and normalized as explained in Section <ref>. For each market, the parameters remain fixed throughout the dynamic. The initial condition in all simulation runs is the uniform distribution of bids over items, and the schedule is sequential, such that a single player updates its bids in every time step.
Figure <ref> (main figure) shows our metrics, averaged over a sample of 300 such simulations. The insets show the plots of a sample of 50 individual simulations (without averaging) over a longer time period.
Figure <ref> show similar plots for best response dynamics.
As could be expected in light of our analysis in Section <ref> — best response dynamics converge faster than PRD, as seen in the different time scales on the horizontal axes.
A close look at the individual bid dynamics depicted in the insets shows a qualitative difference between the two types of dynamics: in PRD the bids in each dynamic smoothly approach the equilibrium profile, whereas best response bid dynamics are more irregular.
Additionally, the collection of curves for the individual simulations shows that under uniformly distributed market parameters, in both dynamics there is variance in convergence times, with a skewed distribution such that in most markets the dynamics converge quickly, but there is a distribution tail of slower-converging dynamics.
§ CONCLUSION
We have shown that proportional response bid dynamics converge to a market equilibrium in a setting where the schedule of bid updates can be chosen adversarially, allowing for sequential or simultaneous updates for any subset of players. We proposed a novel approach to address this problem by identifying a family of associated games related to proportional response dynamics, showing their relation to the competitive equilibria of the market, and leveraging these relations to prove convergence of the dynamics.
En route, we showed that other types of dynamics, such as myopic best response and no-swap regret, also converge in the associated game. Additionally, we note that our result on the uniqueness of market equilibria in the generic case (e.g., if the market parameters have some element of randomness) may also be of interest for future research on the Fisher market setting.
One main open question that we did not analyze is whether proportional response dynamics converge under the full asynchrony model, which includes information delays. The analysis of this model raises several complications, as it creates further coupling between past and current bid profiles. We conjecture that if information delays are bounded, then convergence also occurs in this model. However, it is not clear whether our approach could be extended to argue that proportional response updates by subsets of players with respect to delayed information increase the potential in our associated game, or whether proving convergence in this setting will require new methods.
One limitation of our analysis is that we provide a guarantee that under any bid update by any subset of players chosen by an adversary, the potential function of the associated game increases, but our technique does not specify by how much the potential increases in every step, and therefore, we cannot provide speed of convergence results. Such analysis seems to require new techniques, and we see this as an interesting problem for further work.
plain
§ APPENDICES
In the following sections we provide the proofs for the results presented in the main text as well as further technical details and explanations.
*definition*Definition
Notation: In the following, we use ∇_b_if to denote the gradient of a function f with respect to the bids b_i of player i only, ∂_b_ij f to denote the partial derivative by the bid of player i on good j and ∂_b_ij^2 f to denote the second derivative. We denote by θ_i = ∑_k ≠ i b_i the `pre-prices' which are the prices excluding the bids b_i (and so for every player i and every bid profile, p=θ_i + b_i). In some of the proofs, we use for a function f the abbreviated notation (f)^+ = max(f,0). All other notations are as defined in the main text.
§ THE ASSOCIATED GAME
(Theorem <ref>):
A sufficient condition for Φ being an exact potential<cit.> is
∀ i ∇_b_iΦ(b_i, b_-i)= ∇_b_i(b_i,b_i).
And indeed, in our case we have:
∂ b_ijΦ(b_i, b_-i) = ln(a_ij) - ln(p_j)=ln(a_ij/p_j),
∂ b_iju_i(b_i,b_i) = ln(a_ij) - ln(p_j)=ln(a_ij/p_j).
In order to prove Theorem <ref>,
we first define a different associated game denoted G' that differs from G only in having a different associated utility function u_i'=∑_j a_ijln(p_j).
In fact, G̃ and G' a part of a family
of associated games of the market game G,
which have the property that they all share the same best responses to bid profiles (and therefore, also the same Nash equilibria) and for all these games, the function Φ is a best-response potential (see <cit.> for the definition of best-response potential games). Among this family of games, we are particularly interested in the games G̃ and G', since the first admits Φ as an exact potential, and the latter has a particularly simple derivative for its utility , which has a clear economic interpretation:
∂_b_ij(b) = a_ij/p_j is simply the bang-per-buck of player i from good j (see the model section in the main text).
Next, we present several technical lemmas that will assist us in proving Theorem 2 and which will also be useful in our proofs later on.
For any player i and fixed b_-i, both (b_i,b_-i) and (b_i,b_-i) are strictly concave in b_i.
We will show the proof for .
We compute the Hessian and show that it is negative definite.
The diagonal elements are
∂^2_b_ij(b_i,b_i) = -1/p_j,
and all of the off-diagonal elements are
∂_b_ik∂_b_ij(b_i,b_i) = 0.
Therefore, the Hessian is a diagonal matrix with all of its elements being negative, and thus, is strictly concave. The same argument works for as well.
Fix a player i and any bid profile b_-i∈ S_-i of the other players, then the following two facts hold.
* b_i^'* = max_b_i ∈ S_i(b_i, b_-i) if and only if it holds that
∀ b_i' ∈ S_i ∑_j a_ij/p_j^'* b_ij^'*≥∑_j a_ij/p_j^'* b_ij', where p_j^'*=θ_ij +b^'*_ij.
* b̃_i^* = max_b_i ∈ S_i(b_i, b_-i) if and only if it holds that ∀b̃_̃ĩ∈ S_i ∑_j ln(a_ij/p̃_j^*) b̃_ij^* ≥∑_j ln(a_ij/p̃_j^*) b̃_ij, where p̃_j^*=θ_ij +b̃^*_ij.
We will show the proof for (1), and the proof for (2) is similar.
Let b_i^* be a best response to b_-i and let b'_i be some other strategy. Consider the restriction of (b_i) to the line segment [b_i', b_i^*] as follows; define f(ξ)=(b_i(ξ)) for b_i(ξ)=b_i' + ξ (b_i^* - b_i') where ξ∈ [0,1]. As is strictly concave and b_i^* is the unique maximizer of , it holds that f is strictly concave and monotone increasing in ξ. Therefore the derivative of f must satisfy at the maximum point ξ = 1 that ξ f(1) ≥ 0. This is explicitly given by
ξ f(ξ) = ∇_b_i(b_i(ξ))(b_i^* - b_i').
Therefore, when deriving and substituting ξ=1 we get b_i(1)=b_i^*, and
0 ≤ξ f(1) = ∇_b_i(b^*_i)(b_i^* - b_i')
= ∑_ja_ij/p^*_jb^*_ij - ∑_ja_ij/p^*_jb'_ij,
which implies ∑_ja_ij/p^*_jb'_ij≤a_ij/p^*_jb^*_ij, as required.
To complete the other direction of the proof, consider b^*_i for which the expression stated in the lemma is true for all b'_i. Then, fix any such b'_i and again consider the restriction of to [b'_i, b^*_i]. By direct calculation, as before but in the inverse direction, it holds that f(1) ≥ 0, and as and f(ξ) are strictly concave, it thus must be that ξ f(ξ) is monotone decreasing in ξ. Thus, for all ξ we have ξf(ξ) ≥ξ f(1) ≥ 0. This must mean that ξ=1 is the maximizer of f(ξ) since for all ξ, ξf(ξ) ≥ 0 implies that f(ξ) is monotone increasing and therefore (b^*_i) ≥(b'_i). Finally, note that this holds for any b'_i and hence b^*_i must be a global maximum of .
Let (c_j)_j∈[m]∈ℝ^m, if there exists x^* ∈Δ^m (the m-dimensional simplex) such that ∀ x ∈Δ^m it holds that ∑_j c_j x_j ≤∑_j c_j x^*_j := α then:
* for all j we have that c_j≤α, and
* if x^*_j > 0 then c_j=α.
* Assume for the sake of contradiction that there exists k with c_k >α then x=e_k (the “one-hot” vector with 1 at the k'th coordinate and 0 in all other coordinates) yields ∑_j c_j x_j = c_k > α = ∑_j c_j x^*_j, a contradiction.
* Assume for the sake of contradiction that there exists k with x^*_k >0 and c_k<α c_k x^*_k<α x^*_k. From (1) we have that c_j ≤α c_j x^*_j ≤α x^*_j, summing the strict inequality with the weak ones over all j yields ∑_j c_j x^*_j < ∑_j α x^*_j = α, a contradiction.
Fix a player i and any bid profile b_-i∈ S_-i of the other players, then the following properties of best-response bids hold in the modified games G̃ and G'.
* The support set of b^*_i, defined as s^*_i={j | b^*_ij > 0}, is equal to the set {j | a_ij > c^* θ_ij}, and for every j ∈ s^*_i we have that a_ij/p^*_j=c^*.
* Best-response bids with respect to the utilities and are equal and unique. That is, in the definition from Lemma <ref> we have b'^*_i = b_i^* (denoted simply as b^*_i).
* Best-response bids are given by b^*_ij=(a_ij/c^* - θ_ij)^+ for a unique constant c^* ∈ (0,m/B_i).
By Lemma <ref>, and are strictly concave in b_i for any fixed b_-i, and so each admits a unique maximizer. To see that they are equal, we use Lemma <ref> and introduce constants c, d to obtain
∀ b_i ∑_j a_ij/p'^*_jb_ij/B_i≤∑_j a_ij/p'^*_jb'^*_ij/B_i = c,
∀ b_i ∑_j ln(a_ij/p^*_j)b_ij/B_i≤∑_j ln(a_ij/p^*_j)b^*_ij/B_i = d,
where p'^*_j = θ_ij + b'^*_ij and p^*_j = θ_ij + b^*_ij.
For the ease of exposition, we assume that θ_ij >0 for all j. All the results stated below remain valid also when θ_ij = 0 for some j.
Proof of (1): Applying Lemma <ref> to each of those inequalities (once with x^*=1/B_ib'^*_i and twice with x^*=1/B_ib^*_i) and denoting the support sets of b'^*_ij, b^*_ij as s'^*,s^*, respectively, we obtain the following. (1) ∀ j ∈ s'^* we have a_ij/p'^*_j = c and ∀ j ∉ s'^* we have a_ij/p'^*_j≤ c. Therefore, ∀ j ∈ s'^* bids are positive and c = a_ij/p'^*_j = a_ij/θ_ij + b'^*_ij < a_ij/θ_ij while ∀ j ∉ s'^* the bids are zero, and a_ij/θ_ij≤a_ij/θ_ij + 0 = a_ij/p'^*_j≤ c, hence s'^*={j | c < a_ij/θ_ij}. (2) By the same argument but with d=ln(a_ij/p̃^*_j), we also have that s^*={j | e^d < a_ij/θ_ij}.
Proof of (2): We will show that c=e^d and thus obtain that the vectors b'^*_i, b_i^* are identical. Assume by way of contradiction that c < e^d, then j∈s^* c < e^d < a_ij/θ_ij j ∈ s'^*, i.e., s^* ⊆ s'^*. For all j ∈s^* it holds that a_ij/p'^*_j = c < e^d = a_ij/p^*_jp^*_j < p'^*_j b^*_ij < b'^*_ij. Now we sum those inequalities over s^* and extend to the support s'^*. By using the subset relation we proved, we obtain a contradiction:
B_i = ∑_j∈s^*b^*_ij < ∑_j∈s^* b'^*_ij≤∑_j∈ s'^* b'^*_ij =B_i.
The case where e^d < c follows similar arguments with inverse roles of s'^*, s^*. Thus, c=e^d and s'^* =s^*, which implies a_ij/p'^*_j = a_ij/p̃^*_j, meaning that the prices are equal as well for all goods purchased. Therefore b'^*_i = b_i^*.
Proof of (3): Finally, observe that for j ∈ s'^* c = a_ij/p'^*_j = a_ij/θ_ij + b'^*_ij b^*_ij=a_ij/c^* - θ_ij, while otherwise b^*_ij=0, and a_ij/c^* - θ_ij≤ 0. For the bounds on c, notice that by definition it is equal to u^*_i/B_i and that u^*_i∈ (0, m), as i can receive as little as almost nothing (by the definition of the allocation mechanism, if i places a bid on a good it will receive a fraction of this good, no matter how tiny) and receive at most (almost) all the goods.
The above Lemma <ref> shows a property of the structure of best-response bids. If we consider all the goods sorted by the parameter a_ij/θ_ij, then the best-response bids are characterized by some value c^* which partitions the goods into two parts: goods that can offer the player a bang-per-buck of value c^* and those that cannot. The former set of goods is exactly the support s^*. When a player increases its bid on some good j, the bang-per-buck offered by that good decreases, so clearly, any good with c^* ≤a_ij/θ_ij cannot be considered in any optimal bundle. Consider the situation where the player has started spending it's money on goods with a_ij/θ_ij > c^*, and the for some goods j and k we have that a_ij/p_j=a_ik/θ_ik, then if the player increases it's bid on j without increasing the bid on k, this means that its bids are not optimal since the player could have received higher bang-per-buck by bidding on k. The optimal option is a `water-filling` one: to split the remaining budget and use it to place bids on both j and k, yielding equal bang-per-buck for both (as Lemma <ref> shows).
With the above lemmas, we are now ready to prove Theorem <ref>.
(Theorem <ref>):
We start by making the following claim.
Claim: The set of Market equilibria is equal to the set of Nash equilibria in the game G'.
Proof:
By definition, b^* is a Nash equilibrium of G' if and only if for every i it holds that b_i^* = max_b_i ∈ S_i(b_i, b^*_-i), where by Lemma <ref>, for any fixed b_-i, the bid profile b_i^* is unique. By Lemma <ref>, we have that for x^*_ij=b^*_ij p_j^* and any other x'_ij=b'_ijp^*_j , (x^*_i) ≥(x'_i), if and only if (X^*, p^*) is a market equilibrium (market clearing and budget feasibility hold trivially). That is, the set of Nash equilibria of the game G' corresponds to the set of market equilibria (i.e., every bid profile b^* which is a market equilibrium must be a Nash equilibrium of G', and vice versa).
Then, by Lemma <ref>, best responses by every player i to any bid profile b_-i of the other players with respect to and with respect to are the same. Therefore, every Nash equilibrium in one game must be a Nash equilibrium in the other. Thus, we have that Nash equilibria of the game G̃ are market equilibria, and vice versa – every market equilibrium must be Nash equilibrium of G̃. Finally, at a Nash equilibrium, no player can unilaterally improve their utility, so no improvement is possible to the potential, and in the converse, if the potential is not maximized, then there exists some player with an action that improves the potential, and so by definition their utility function as well, thus contradicting the definition of a Nash equilibrium.
Therefore, we have that every bid profile that maximizes the potential is a Nash equilibrium of G̃ and a market equilibrium (and vice versa).
§ BEST RESPONSE DYNAMICS
We start with the following characterizations of best-response bids in the games G̃ and G'.
Fix θ_i and let b^*_i be i's best response to θ_i with support s^*. Define c_s = ∑_j∈ s a_ij/B_i+∑_j∈ sθ_ij for every subset s⊆ [m]. Let c^* be as described in Lemma <ref> Then, it holds that c^* = c_s^*≥ c_s for all s⊆[m].
Furthermore, if s^* ⊄s then c_s^* > c_s.
Let b^*_i be a best response to θ_i with support s^*. By lemma <ref> we have that b^*_ij=(a_ij/c^*-θ_ij)^+, by summing over s^* we obtain that B_i=∑_j_∈ s^*a_ij/c^*-θ_ij. Rearranging yields c^*=∑_j∈ s^* a_ij/B_i + ∑_j∈ s^*θ_ij which is c_s^* by definition. Now we prove that c^*=max_s ⊆ [m] c_s.
A key observation to the proof is that, by Lemma <ref>, if j∈ s^* then c^*θ_ij < a_ij and otherwise c^*θ_ij≥ a_ij.
For a set s' distinct from s^* we consider two cases:
Case (1): s^* ⊄s'
Consider a bid profile b'_i that for every good j in s' ∩ s^* (if the intersection is not empty) places a bid higher by ϵ > 0 than b^*_ij and distributes the rest of i's budget uniformly between all other goods in s:
b'_ij =
b^*_ij + ϵ if j ∈ s' ∩ s^*,
B_i - ∑_j ∈ s' ∩ s^* (b^*_ij + ϵ)/|s' ∖ s^*| otherwise.
For ϵ small enough, we have ∑_j∈ s' b'_ij = B_i and the support of b'_i is indeed s'.
For every j ∈ s^* ∩ s' we have b'_ij > b^*_ij and by adding θ_ij to both sides we obtain p'_j > p^*_j; multiplying both sides by c^* yields (i) c^* p'_j > c^* p^*_j = a_ij, where the equality is by Lemma <ref>, while for every j ∈ s' ∖ s^* it holds that c^* θ_ij≥ a_ij by which adding c^* b'_ij to the left hand side only increases it and implies (ii) c^* p'_j > a_ij. Summing over inequalities (i) and (ii) for all j appropriately, we obtain c^* ∑_j∈ s' p'_j > ∑_j∈ s' a_ij, observe that ∑_j∈ s' p'_j = ∑_j∈ s' (b'_ij+θ_ij)=B_i + ∑_j∈ s'θ_ij, and thus by division, we obtain the result: c^* > ∑_j∈ s' a_ij/B_i + ∑_j∈ s'θ_ij = c_s'.
Case (2): s^* ⊂ s'
In this case, the idea used above can not be applied since adding ϵ to every bid b^*_ij would create bids b'_ij that exceed the budget B_i.
As stated above, the equality c^* = ∑_j∈ s^* a_ij/B_i + ∑_j∈ s^*θ_ij holds where the sums are taken over all members of s^*, by rearranging we get c^*B_i + c^* ∑_j∈ s^*θ_ij=∑_j∈ s^* a_ij. For all j ∈ s' ∖ s^* it holds that c^* θ_ij≥ a_ij and by summing those inequalities for all j and adding the equality above we obtain: c^*B_i + c^* ∑_j∈ s'θ_ij≥∑_j∈ s' a_ij Rearranging yields the result: c^* ≥∑_j∈ s' a_ij/B_i + ∑_j∈ s'θ_ij = c_s'.
And so, c^* is obtained as the maximum over all c_s, as required.
The function BR_i:S_-i→ S_i which maps b_-i to the best response b_i^* is continuous.
By Lemma <ref>, best-response bids are given by b^*_ij=max{a_ij/c^* - θ_ij, 0}, with support s^*_i. We wish to show that b^*_i is a continuous in b_-i. We do so by showing that b^*_ij is obtained by a composition of continuous functions. As θ_i is a sum of elements from b_-i, it suffices to prove continuity in the variable θ_i. The expression for b^*_ij is the maximum between zero and a continuous function of θ_ij, which is continuous in θ_i, and so we are left to prove that a_ij/c^* - θ_ij is continuous in θ_i. More specifically, it suffices to show that c^* as defined in Lemma <ref> is continuous in θ_i.
By Lemma <ref>, c^* is obtained as the maximum over all c_s functions, where each is a continuous function itself in θ_i, and thus c^* is continuous in θ_i.
To prove Theorem <ref> on the convergence of best-response dynamics we use the following known result (for further details, see Jensen 2009 <cit.>).
Theorem (Jensen 2009 <cit.>): Let G be a best-reply potential game with single-valued, continuous best-reply functions and compact strategy sets. Then any admissible sequential best-reply path converges to the set of pure strategy Nash equilibria.
(Theorem <ref>):
G̃ is a potential game, which is a stricter notion than being a best-reply potential game (i.e., every potential game is also a best-reply potential game). By Lemma <ref>, best replies are unique, and so the function BR_i is single valued. Furthermore, Lemma <ref> shows that it is also a continuous function. By definition, every i's strategy set S_i is compact. Admissibility of the dynamics is also guaranteed by the liveness constraint on adversarial scheduling of the dynamics, and thus by the theorem cited above, best-reply dynamics converges to the set of Nash equilibria of G̃.
Since every element in this set is market equilibrium (by Theorem <ref>) and equilibrium prices are unique (see the model section in the main text), we have that any dynamic of the prices are guaranteed to converge to equilibrium prices. Furthermore if the market is generic then there is a unique market equilibrium (by Theorem <ref>) and convergence to the set in fact means convergence to the point b^*, the market-equilibrium bids.
(Proposition <ref>):
Fix a player i, fix any bid profile b_-i of the other players and let b^*_i be i's best response to b_-i, by Lemma <ref>, b^*_ij=(a_ij/c^*-θ_ij)^+ for c^* being a unique constant. We present a simple algorithm (Algorithm <ref>) which computes c^* and has a run-time of 𝒪(mlog(m)).
To see that this process indeed reaches c^*, assume w.l.o.g. that the goods are sorted by a_ij/θ_ij in a descending order. For ease of exposition, assume θ_ij > 0 for all j; the case with θ_ij = 0 for some goods is similar. By Lemma <ref> we have s^* = {j| a_ij > c^* θ_ij}. And so, if k < j and j∈ s^* then k∈ s^*, since in this case a_ik/θ_ik >a_ij/θ_ij > c^*. Therefore, s^* must be one of the following sets: [1], [2], [3], …, [m]. By Lemma <ref> we have c^*=max_s ⊆ [m] c_s. For any set mentioned, the algorithm computes c_s=∑_j∈ s a_ij/B_i + ∑_j∈ sθ_ij and finds the maximal among all such c_s. Therefore it finds c^*.
As for the running time of the algorithm, it is dominated by the running time of the sorting operation which is 𝒪(mlog(m)).
After proving that the best response to a bid profile can be computed efficiently, we can prove now that proportional response, applied by a single player while all the other players' bids are held fix, converges in the limit to that best response.
(Proposition <ref>):
Fix a player i and fix any bid profile b_-i of the other players, let b^*_i be the best response of i to b_-i with support s^* and let (b^t_i)_t=1^∞ be a sequence of consecutive proportional responses made by i. That is, b^t+1_i = f_i(b^t_i). We start the proof with several claims proving that any sub-sequence of (b^t_i)_t=1^∞ cannot converge to any fixed point of f_i other than b^*_i. After establishing this, we prove that the sequence indeed converges to b^*_i.
Claim 1: Every fixed point of Proportional Response Dynamic has equal `bang-per-buck` for all goods with a positive bid. That is, if b^**_i is a fixed point of f_i then a_ij/p^**_j=u^**_i/B_i for every good j with b^**_ij > 0, where u^**_i is the utility achieved for i with the bids b^**_i.
Proof: By substituting b^**_i into the PRD update rule, we have
b^**_i = f_i(b^**_i)
∀ j b^**_ij = a_ij/p^**_j/u^**_i/B_i b^**_ij
either b^**_ij = 0 or a_ij/p^**_j = u^**_i/B_i.
Claim 2: The following properties of b^*_i hold.
* Except b^*_i, there are no other fixed points of f_i with a support that contains the support of b^*_i. Formally, there are no fixed points b^**_i ≠ b^*_i of f_i with support s^** such that s^* ⊂ s^**.
* The bids b^*_i achieve a higher utility in the original game G, denoted u^*_i, than any other fixed point of Proportional Response Dynamics. Formally, let b^**_i be any fixed point other than b^*_i, with utility u^**_i in the original game G, then u^*_i > u^**_i.
Proof:
Let b_i be any fixed point of f_i. By the previous claim it holds that a_ij/p_j = u_i/B_i whenever b_ij > 0. Multiplying by p_j yields a_ij =u_i/B_i p_j. Summing over j with b_ij>0 and rearranging yields u_i/B_i = ∑_j∈ s a_ij/∑_j∈ s p_j = c_s as defined in Lemma <ref> with support s. By that lemma, we have that c_s^*≥ c_s for any set s distinct from s^*. Thus, we have that u^*_i/B_i = c_s^*≥ c_s^** = u^**_i/B_i for b^**_i being a fixed point of f_i other than b^*_i with support s^** and utility value u^**_i.
Assume for the sake of contradiction that s^*⊂ s^**. If j∈ s^* then j∈ s^**. By Claim 1 for every such j the following inequality holds,
a_ij/p^*_j = u^*_i/B_i≥u^**_i/B_i = a_ij/p^**_j,
implying that p^**_j ≥ p^*_j. Subtracting θ_ij from both sides yields b^**_ij≥ b^*_ij. Summing over j∈ s^* yields a contradiction:
B_i = ∑_j∈ s^* b^*_ij≤∑_j∈ s^* b^**_ij < ∑_j∈ s^** b^**_ij = B_i,
where the first inequality is as explained above, and the last by the strict set containment s^* ⊂ s^**.
Finally, as there are no fixed points with support s^** containing s^*, by Lemma <ref>, the inequality stated above is strict, that is c_s^* > c_s^** and so u^*_i > u^**_i.
Claim 3: If b^**_i ≠ b^*_i is a fixed point of f_i then b^**_i is not a limit point of any sub-sequence of (b^t_i)_t=0^∞.
Proof:
The proof considers two cases:
(1) When u_i is continuous at b^** (2) when continuity doesn't hold.
Let (b^t_k)_k=1^∞ be a converging subsequence of (b^t_i)_t=0^∞.
Case (1):
The utility function u_i is continuous at b^**_i when for every good j it holds that θ_ij > 0 or b^**_ij > 0. i.e. that there is no good j with both θ_ij = 0 and b^**_ij = 0. This is implied directly from the allocation rule x_ij=b_ij/θ_ij + b_ij (see the formal definition in Section <ref>) and the fact that u_i=∑_j a_ijx_ij.
Examine the support of b^**_i, by Claim 2 there are no fixed points with support set s^** containing s^*. Therefore s^* ⊄s^** implying that there exists a good j∈ s^* ∖ s^**. That is, by definition of the supports, there exists j with b^*_ij >0 and b^**_ij = 0. Consider such j and assume for the sake of contradiction that b^**_i is indeed a limit point. Then, by definition, for every δ^**>0 exists a T s.t. if t> T then b^t_k_i - b^**_ij< δ^**. Specifically it means that |b^t_k_ij - b^**_ij| < δ^** whenever t>T.
By Claim 2, u^*_i > u^**_i. Then, by continuity there exists a δ' s.t. if b^**_i - b_i< δ' then |u_i(b_i) - u^**_i| < u^*_i - u^**_i.
Take δ^** < min{δ', b^*_ij} and, by the assumption of convergence, there is a T s.t. for t > T and we have that b^t_k_i - b^**_i < δ^**. This implies
(I) |b^t_k_ij - 0| < δ^**< b^*_ij as b^**_ij=0 and (II) |u^t_k_i - u^**_i| < u^*_i - u^**_i u^t_k_i < u^*_i. From these two, we can conclude that
a_ij/p^t_k_j = a_ij/θ_ij + b^t_k_ij > a_ij/θ_ij + b^*_ij = u^*_i/B_i > u^t_k_i/B_i.
Finally, observe that by rearranging the PRD update rule we get b^t+1_ij = a_ij/p^t_k_j/u^t_k_i/B_i b^t_k_ij, implying that b^t_k+1_ij > b^t_k_ij since a_ij/p^t_k_j/u^t_k_i/B_i > 1 for t> T and b^0_ij > 0. This means that for all t_k> T we have b^t_k_ij > b^T+1_ij. That is, b^t_k_ij cannot converge to zero and thus the subsequence thus cannot converge to b^**_i, a contradiction.
Case (2): When there exists a good j with θ_ij = 0 and b^**_ij = 0 we have that u_i is not continuous at b^**_i and the previous idea doesn't work. Instead we will contradict the PRD update rule. Assume of the sake of contradiction that b^**_i is a limit point of a subsequence of PRD updates. Therefore for every ϵ exists a T s.t. if t_k>T then |b^t_k_ij-b^**_ij|<ϵ. Note that b^**_ij=0 in this case and set ϵ < a_ij/mB_i and so, for t_k>T it holds that a_ij/b^t_k_ij> m/B_i. Also note that p^t_k_j = θ^t_k_ij + b^t_k_ij = b^t_k_ij and that the maximal utility a buyer may have is m (when it is allocated every good entirely). Then overall we have that a_ij/p^t_k_j>m/B_i > u^t_k_i/B_i.
The PRD update rule is b^t_k+1_ij=a_ij/p^t_k_j/u^t_k_i/B_ib^t_k_i. But since the ratio a_ij/p^t_k_ij/u^t_k_i/B_i is greater than 1 it must be that b^t_k+1_ij> b^t_k_ij. And so every subsequent element of the subsequence is bounded below by b^T+1_ij>0 and as before, we reach a contradiction as the subsequence cannot converge to b^**_i.
Finally we can prove the convergence of the sequence (b^t_i)_t=1^∞. As the action space S_i is compact, there exists a converging subsequence b^t_k_i with the limit b^**_i. If b^**_i = b^*_i for any such subsequence, then clearly we are done. Otherwise, assume b^**_i ≠ b^*_i. By the previous claim any fixed point of f_i other than b^*_i is not a limit point of any subsequence, thus b^**_i is not a fixed point of f_i. By Lemma <ref>, any subset of players performing proportional response, strictly increase the potential function unless performed at a fixed point. When discussing a proportional response of a single player, with all others remaining fixed, this implies, by the definition of potential function, that is increased at each such step. Let ϵ < (f_i(b^**)) - (b^**), this quantity is positive since b^**_i is not a fixed point. The function ∘ f_i is a continuous function and b^t_k_i converges to b^**_i therefore there exists a T such that for all t_k > T we have that |(f_i(b^**)) - (f_i(b^t_k))| < ϵ. Substituting ϵ yields (f_i(b^**)) - (f_i(b^t_k)) < (f_i(b^**)) - (b^**) which implies (b^**) < (f_i(b^t_k)) = (b^t_k+1) ≤(b^t_k+1). That is, the sequence u_i(b^t_k_i) is bounded away from (b^**) and since is a continuous function, this implies that b^t_k_i is bounded away from b^**_i — a contradiction to convergence.
§ SIMULTANEOUS PLAY BY SUBSETS OF AGENTS
In order to prove Lemma <ref>, we first need some further definitions and technical lemmas. We use the notation D(xy) to denote the KL divergence between the vectors x and y, i.e., D(xy)=∑_j x_j ln(x_j/y_j).
For a subset of the players v⊆ℬ, the subscript v on vectors denotes the restriction of the vector to the coordinates of the players in v, that is, for a vector b we use the notation b_v=(b_ij)_i∈ v, j∈ [m] to express the restriction to the subset. ℓ_Φ(b_v;b'_v) denotes the linear approximation of Φ;
that is, ℓ_Φ(b_v;b'_v)=Φ(b'_v) + ∇_b_vΦ(b'_v)(b_v-b'_v).
The idea described in the next lemma to present the potential function as a linear approximation term and a divergence term was first described in <cit.> for a different scenario when all agents act together in a synchronized manner using mirror descent; we extend it to any subset of players which required us to introduce different proof methods and as well as to embed it in a game.
Fix a subset of the players v ⊂ℬ and a bid profile b_-v of the other players. Then, for all b_v, b'_v ∈ S_v we have that Φ(b_v) = ℓ_Φ(b_v; b'_v) - D(p p'), where p=∑_i∉ v b_ij + ∑_i∈ v b_ij and p'=∑_i∉ v b_ij + ∑_i∈ v b'_ij.
Calculating the difference Φ(b_v) - ℓ_Φ(b_v;b'_v) yields
Φ(b_v) - ℓ_Φ(b_v;b'_v) = Φ(b_v) - Φ(b'_v) - ∇_b_vΦ(b'_v)(b_v - b'_v).
We rearrange the term Φ(b_v) - Φ(b'_v) as follows.
Φ(b_v) - Φ(b'_v) = ∑_i∈ v, j∈[m] (b_ij - b'_ij)ln(a_ij)-∑_j(p_jln(p_j)-p'_jln(p'_j)) -∑_j(p_j-p'_j)
= ∑_i∈ v, j∈[m] (b_ij - b'_ij)ln(a_ij)-∑_j(p_jln(p_j)-p'_jln(p'_j)),
where the last equality is since ∑_j p_j = 1 for any set of prices because the economy is normalized (see the model section in the main text).
The term ∇_b_vΦ(b'_v)(b_v - b'_v) is expanded as follows.
∇_b_vΦ(b'_v)(b_v - b'_v) = ∑_i∈ v, j∈[m]ln(a_ij/p'_j)(b_ij-b'_ij)
=∑_i∈ v, j∈[m]ln(a_ij)(b_ij-b'_ij) - ∑_i∈ v, j∈[m]ln(p'_j)(b_ij-b'_ij)
Subtracting the latter from the former cancels out the term ∑_i∈ v, j∈[m]ln(a_ij)(b_ij-b'_ij), and we are left with the following.
Φ(b_v) - ℓ_Φ(b_v;b'_v) = Φ(b_v) - Φ(b'_v) - ∇_b_vΦ(b'_v)(b_v - b'_v)
=∑_i∈ v, j∈[m]ln(p'_j)(b_ij-b'_ij) -∑_j(p_jln(p_j)-p'_jln(p'_j))
= -∑_jp_jln(p_j)-(p'_j - ∑_i ∈ v b'_ij + ∑_i ∈ v b_ij)ln(p'_j)
= -∑_jp_jln(p_j)-(θ_vj + ∑_i ∈ v b_ij)ln(p'_j)
= -∑_j p_jln(p_j) - p_jln(p'_j)
=-D(pp').
For any subset of the players v ⊂ℬ and any bid profile b_-v of the other players and for every b_v, b'_v ∈ S_v it holds that D(pp') ≤ D(b_vb'_v), with equality only when b_v=b'_v.
We begin by proving a simpler case where v={i} for some player i and use it to prove the more general statement. Fix i and b_-i, which implies fixing some θ_i. KL divergence is convex in both arguments with equality only if the arguments are equal; formally, for λ∈ (0,1) it holds that D(λθ_i + (1-λ)b_iλθ_i + (1-λ)b'_i) ≤λ D(θ_i θ_i) + (1-λ) D(b_i b'_i), which is equivalent to D(λθ_i + (1-λ)b_iλθ_i + (1-λ)b'_i) ≤ (1-λ) D(b_i b'_i), with equality only if b_i=b'_i (since D(θ_i θ_i) = 0). Substituting λ=1/2 and noting that p_j=θ_ij + b_ij (and the same for p'_j and b'_ij), we obtain the following relation.
D(1/2 p 1/2 p') = D(1/2θ_i + 1/2b_i 1/2θ_i + 1/2b'_i)
≤1/2 D(b_i b'_i).
On the other hand, the expression D(1/2 p 1/2 p') can be evaluated as follows.
D(1/2 p 1/2 p') = ∑_j 1/2p_jln(1/2p_j/1/2p'_j)
=1/2∑_j p_jln(p_j/p'_j)
=1/2 D(p p').
And therefore, we have
D(p p') ≤ D(b_i b'_i), with equality only if b_i = b'_i.
Now we can prove the general case, as stated fix v and b_-v and let b_v, b'_v∈ S_v. We know that for all i∈ v it is true that D(pp')≤ D(b_i b'_i), summing those inequalities for all i∈ v yields |v|D(pp')≤∑_i∈ v D(b_i b'_i), on the one hand clearly D(pp') ≤ |v|D(pp') and on the other hand ∑_i∈ v D(b_i b'_i) = ∑_i∈ v∑_j b_ijln(b_ij/b'_ij) = D(b_v b'_v) and the result is obtained.
Let v⊆ℬ, let f_v:S→ S be a proportional response update function for members of v and identity for the others, and let b'∈ S be some bid profile. Then, (f_v(b'))_v=max_b_v∈ S_v{ℓ_Φ(b_v; b'_v) - D(b_v b'_v) }.
By adding and removing constants that do not change the maximizer of the expression on the right hand side, we obtain that the maximizer is exactly the proportional response update rule:
max_b_v∈ S_v{ℓ_Φ(b_v; b'_v) - D(b_v b'_v) } = max_b_v∈ S_v{Φ(b'_v) + ∇_b_vΦ(b'_v)(b_v-b'_v) - D(b_v b'_v) }
= max_b_v∈ S_v{∇_b_vΦ(b'_v)b_v - D(b_v b'_v) }
= max_b_v∈ S_v{∇_b_vΦ(b'_v)b_v - D(b_v b'_v) - ∑_i∈ v B_i ln(u'_i/B_i)}.
Rearranging the last expression by elements yields the following result,
∇_b_vΦ(b'_v)b_v - D(b_v b'_v) - ∑_i∈ v B_i ln(u'_i/B_i) =
= ∑_i∈ v, j∈ [m] b_ijln(a_ij/p'_j) - ∑_i∈ v, j∈ [m] b_ijln(b_ij/b'_ij)- ∑_i∈ v, j∈[m] b_ijln(u'_i/B_i)
=∑_i ∈ v, j ∈ [m] b_ijln(a_ij/p'_jb'_ij/b_ijB_i/u'_i)
=∑_i ∈ v, j ∈ [m] b_ijln(a_ijx'_ij/u'_iB_i/b_ij)
=-∑_i ∈ v, j ∈ [m] b_ijln(b_ij/a_ijx'_ij/u'_iB_i),
which is exactly -D(b_v (f_v(b'))_v), since (f_v(b'))_ij = a_ijx'_ij/u'_iB_i for i∈ v by definition. That is, our maximization problem is equivalent to min{ D(b_v (f_v(b'))_v) }. Finally, note that KL divergence is minimized when both of its arguments are identical, and (f_v(b'_v))_v ∈ S_v, the domain of the minimization.
(Lemma <ref>):
Let v ⊆ℬ be a subset of players and let b ∈ S be some bid profile. By combining the lemmas proved in this section have that
Φ(f_v(b)) ≥ℓ_Φ(f_v(b); b) - D(f_v(b) b)
≥ℓ_Φ(b; b) - D(b b)
= Φ(b),
where the first inequality is by Lemmas <ref> and <ref> with the inequality being strict whenever f_v(b) ≠ b, and the second inequality is by Lemma <ref>, as f_v(b) was shown to be the maximizer of this expression over all b∈ S.
An interesting case to note here is when v=i. In this case, the lemmas above show that if the players' bids are b^t and i is being activated by the adversary, then the best response bids of i to b^t_-i are the solutions to the optimization problem max_b_i∈ S_i{ℓ_(b_i;b^t_i) - D(pp^t)}. On the other hand, the proportional response to b^t_-i is the solution to the optimization problem max_b_i∈ S_i{ℓ_(b_i;b^t_i) - D(b_ib^t_i)}. This can be seen as a relaxation of the former, as proportional response does not increase (or equivalently the potential) as much as best response does. However, proportional response is somewhat easier to compute.
§ GENERIC MARKETS
(Theorem <ref>): Assume by way of contradiction that a generic linear Fisher market has two distinct market equilibrium bid profiles b^* ≠ b^**. For any market equilibrium b it must hold that: (1) ∀ j ∑_i b_ij = p^*_j since equilibrium prices are unique; and (2) ∀ i ∑_j b_ij = B_i by budget feasibility.
As b^* ≠ b^**, there exists a pair (i,j) with b^*_ij≠ b^**_ij, meaning that buyer i has a different bid on good j between b^* and b^**, and so by (1) it must be that exists a buyer k whose bid on good j was also changed so that the price p^*_j remains fixed; formally, b^*_kj≠ b^**_kj. In such case, by (2) there must be a good ℓ for which buyer k has a different bid as well, since it's budget B_k is fixed and fully utilized; formally b^*_kℓ≠ b^**_kℓ. As the graph Γ={ℬ∪𝒢, E} with E={{i,j} | b'_ij≠ b^*_ij} is finite, following the process described above while obeying the constraints (1) and (2) must lead to a cycle in the graph Γ.
Finally, we will show that there exists a market equilibrium with a cycle in its corresponding graph. Define b'=λ b^* + (1-λ) b^** for some λ∈ (0,1) and note that b' is also market equilibrium as the set of market equilibria is a convex set (see the model section in the main text). Let Γ(b')={ℬ∪𝒢, E(b')} with E(b')={{i,j} | b'_ij > 0} be the corresponding graph of b'. Observe that E ⊆ E(b') since if b^*_ij≠ b^**_ij then it must be that b^*_ij > 0 or b^**_ij > 0 and in any such case b'_ij > 0.
Thus, the graph Γ(b') contains a cycle, contradicting Lemma <ref> from the main text
(Lemma <ref>): Assume for the sake of contradiction that exists a cycle C in Γ(b^*), w.l.o.g. name the vertices of buyers and goods participating in the cycle in an ascending order; that is, C=b_1g_1b_2g_2… b_k-1g_kb_1, where b_i and g_i represent buyers and goods i, respectively. Recall that for any market equilibrium if x^*_ij >0 then a_ij/p^*_j = c_i for some constant c_i (see the model section in the main text). Applying this to the cycle C yields the following equations. (1) By considering edges from buyers to goods b_i → g_i we obtain for i ∈ [k-1] a_i,i = c_i p^*_i; and (2) by considering edges from goods to buyers g_i → b_i+1 we obtain for i ∈ [k-1] a_i+1,i= c_i+1 p_i^* and the edge closing the cycle yields a_1,k= c_1 p_k^*. Finally, by considering the product of ratios between valuations of buyers participating in the cycle we have the following condition.
a_21/a_11a_32/a_22a_43/a_33…a_i+1,i/a_i,i…a_k,k-1/a_k-1,k-1a_1,k/a_k,k = c_2 p^*_1/c_1 p^*_1c_3 p^*_2/c_2 p^*_2c_4 p^*_3/c_3 p^*_3…c_i+1 p^*_i/c_i p^*_i…c_k p^*_k-1/c_k-1 p^*_k-1c_1 p^*_k/c_k p^*_k
= c_2/c_1c_3/c_2c_4/c_3…c_i+1/c_i…c_k/c_k-1c_1/c_k
= 1,
which contradicts the genericity of the market.
§ CONVERGENCE OF ASYNCHRONOUS PROPORTIONAL RESPONSE DYNAMICS
(Theorem <ref>): Assume that Φ: [0,1]^n→ℝ_≥ 0 is some continuous function with a single maximum point b^* (as is the case with out potential). We start with the following lemma.
For every ϵ>0 there exists δ>0 such that
Φ(b) > Φ(b^*) - δ
implies |b-b^*|<ϵ.
Assume otherwise that for some ϵ_0 there exist a sequence (b_t) such that Φ(b_t)→Φ(b^*) but |b_t-b^*| ≥ϵ_0 for all t. Take a condensation point b^** of this sequence and a subsequence (t_j) that converges to b^**. We have Φ(b^**)= limΦ(b_t_j) = Φ(b^*) and |b^**-b^*| = lim |b_t_j-b^*| ≥ϵ_0 > 0. The former equality must imply b^*=b^**, but the latter implies b^* b^**.
Next, for a subset of players A ⊂ℬ let f_A: [0,1]^n → [0,1]^n be the continuous function where j ∈ A do a proportional response update and the other players play the identity function.
(i.e., do not change their bids, see Section <ref> in the main text).
By Lemma <ref> from the main text we have that
(i) For all A we have that f_A(b)=b if and only if for all i ∈ A it holds that f_i(b)=b; and
(ii) Φ(f_A(b)) > Φ(b) unless f_A(b)=b.
The stable set of b^** is defined to be S(b^**)={i | f_i(b^**)=b^**}.
A corollary (i) and (ii) above is that if A ⊆ S(b^**) then f_A(b^**)=b^**, but if A ∖ S(b^**) ∅ then Φ(f_A(b^**)) > Φ(b^**).
: Let ϕ(b^**) < Φ(b^*). Then there exists δ>0 such that for every |b-b^**|≤δ and every A ∖ S(b^**) ∅ we have that Φ(f_A(b)) > Φ(b^**).
Fix a set A such that A ∖ S(b^**) ∅ and let α = Φ(f_A(b^**)) - Φ(b^**) >0. Since Φ(f_A(·)) is continuous, there exists δ so that |b-b^**|≤δ implies Φ(f_A(b^**)) - Φ(f_A(b)) < α and thus Φ(f_A(b)) > Φ(b^**). Now take the minimum δ for all finitely many A.
Let Φ(b^**) < Φ(b^*) and let F be a finite family of continuous functions such that for every f ∈ F we have that f(b^**)=b^**. Then there exists ϵ>0 such that for every b such that |b-b^**|≤ϵ and every f ∈ F and every A ∖ S(x^**) ∅ we have that Φ(f_A(f(b))) > Φ(b^**).
Fix f ∈ F and let δ be as promised by the previous lemma, i.e. for every |z-b^**|≤δ and every A ∖ S(b^**) ∅ we have that Φ(f_A(z)) > Φ(b^**). Since f(b^**)=b^** and f is continuous there exists ϵ >0 so that |b-b^**| ≤ϵ implies |f(b)-f(b^**)| = |f(b)-b^**|≤δ and thus Φ(f_A(f(b))) > Φ(b^**). Now take the minimum ϵ over the finitely many f ∈ F.
a sequence of sets A_t ⊆ℬ is called T-live if for every i and for every t there exists some t ≤ t^* ≤ t+T such that i ∈ S_t^*.
Fix a sequence b = (b_t) where b_t+1 = f_A_t(b_t) such that the sequence A_t is T-live. Then it holds that
lim_t→∞ b_t = b^*.
Otherwise there exists a subsequence that converges to some other b^** where Φ(b^**)<Φ(b^*). Notice that as Φ(b_t) is increasing then Φ(b_t) ≤Φ(b^**) for all t.
Let F be a set of functions achieved by composition of at most T functions from {f_A | A ⊂ S(b^**)}.
So for every f ∈ F we have that f_A(b^**)=b^**, while for every B = A ∖ S(b^**) ∅ we have that Φ(f_B(b^**)) > Φ(b^**).
Let ϵ be as promised by the previous lemma, i.e., for every |b-b^**|≤ϵ and every f ∈ F and every A such that A ∖ S(b^**) ∅ we have that Φ(f_A(f(b))) > Φ(b^**). Since the subsequence converges to b^** there exists t_j in the subsequence so that |b_t_j-b^**| ≤ϵ. Now let t>t_j be the first time that A_t ∖ S(b^**) ∅. Now b_t+1 = f_A_t(f(b_t_j), where f is the composition of all f_A for the times t_j to t. We can now apply the previous lemma to get that Φ(b_t+1) = Φ(f_A_t(f(b_t_j)) > Φ(b^**) a contradiction.
The last lemma concludes our proof of Theorem <ref>.
|
http://arxiv.org/abs/2307.03888v1 | 20230708034255 | Spectral radius, fractional $[a,b]$-factor and ID-factor-critical graphs | [
"Ao Fan",
"Ruifang Liu",
"Guoyan Ao"
] | math.CO | [
"math.CO",
"05C50, 05C35"
] |
Spectral radius, fractional [a,b]-factor and ID-factor-critical graphs[Supported by National Natural Science Foundation of China
(Nos. 11971445 and 12171440),
Henan Natural Science Foundation (No. 202300410377) and
Research Program of Science and Technology at Universities of Inner Mongolia Autonomous Region (No. NJZY22280).]
Ao Fan^a, Ruifang Liu^aCorresponding author.
E-mail addresses: [email protected], [email protected], [email protected]., Guoyan Ao^a, b
^a School of Mathematics and Statistics, Zhengzhou University, Zhengzhou, Henan 450001, China
^b School of Mathematics and Physics, Hulunbuir University, Hailar, Inner Mongolia 021008, China
==========================================================================================================================================================================================================================================================================================================================================
Abstract
Let G be a graph and h: E(G)→ [0,1] be a function.
For any two positive integers a and b with a≤ b, a fractional [a,b]-factor of G with the indicator function h is
a spanning subgraph with vertex set V(G) and edge set E_h such that a≤∑_e∈ E_G(v)h(e)≤ b for any vertex v∈ V(G),
where E_h = {e∈ E(G)|h(e)>0} and E_G(v)={e∈ E(G)| e v G}.
A graph G is ID-factor-critical if for every independent set I of G whose size has the same parity as |V(G)|, G-I has a perfect matching.
In this paper, we present a tight sufficient condition based on the spectral radius for a graph to contain a fractional [a,b]-factor,
which extends the result of Wei and Zhang [Discrete Math. 346 (2023) 113269].
Furthermore, we also prove a tight sufficient condition in terms of the spectral radius for a graph with minimum degree δ to be ID-factor-critical.
Keywords: Spectral radius, Fractional [a,b]-factor, ID-factor-critical, Minimum degree
AMS Classification: 05C50; 05C35
§ INTRODUCTION
Let G be a finite, undirected and simple graph with vertex set V(G) and edge set E(G).
The order and size of G are denoted by |V(G)|=n and |E(G)|=e(G), respectively.
We denote by δ(G), i(G) and o(G) the minimum degree,
the number of isolated vertices and the number of odd components of G, respectively.
We use K_n and I_n to denote the complete graph of order n and the complement of K_n.
For a vertex subset S of G, let G[S] be the subgraph of G induced by S.
Let G_1 and G_2 be two vertex-disjoint graphs.
We denote by G_1+G_2 the disjoint union of G_1 and G_2.
The join G_1∨ G_2 is the graph obtained from G_1+G_2 by adding all possible edges between V(G_1) and V(G_2).
For undefined terms and notions, one can refer to <cit.>.
Given a graph G of order n, the adjacency matrix of G is the 0-1 matrix A(G)=(a_ij)_n× n indexed
by the vertex set V(G) of G, where a_ij=1 when v_i and v_j are adjacent and a_ij=0 otherwise.
The eigenvalues of A(G) are also called the eigenvalues of G.
Note that A(G) is a real nonnegative symmetric matrix. Hence its eigenvalues are real,
which can be arranged in non-increasing order as λ_1(G)≥λ_2(G) ≥⋯≥λ_n(G).
The largest eigenvalue of A(G), denoted by ρ(G), is called the spectral radius of G.
Let g and f be two integer-valued functions defined on V(G) such that 0≤ g(v)≤ f(v) for each vertex v in V(G).
A (g,f)-factor of G is a spanning subgraph F of G satisfying g(v)≤ d_F(v)≤ f(v) for any vertex v in V(G).
Let a and b be two positive integers with a≤ b.
A (g,f)-factor is called an [a,b]-factor if g(v)≡ a and f(v)≡ b for any v∈ V(G).
An [a,b]-factor is called a 1-factor (also called a perfect matching) if a=b=1.
Let h : E(G)→ [0,1] be a function and E_G(v)={e∈ E(G)| e v G}.
If g(v)≤∑_e∈ E_G(v)h(e)≤ f(v) holds for any vertex v∈ V(G),
then we call a subgraph F with vertex set V(G) and edge set E_h a fractional (g,f)-factor of G with indicator function h,
where E_h = {e∈ E(G)|h(e)>0}.
A fractional (g,f)-factor is called a fractional [a,b]-factor if g(v)≡ a and f(v)≡ b.
In particular, for a positive integer k, a fractional [k, k]-factor of a graph G is called a fractional k-factor of G.
A fractional 1-factor is also called a fractional perfect matching.
Note that if G contains a (g,f)-factor, then it also contains a fractional (g,f)-factor.
However, if G has a fractional (g,f)-factor, G may not have a (g,f)-factor.
We start with the following well-known fractional (g,f)-factor theorem.
Let G be a graph and g,f: V(G)→ Z^+ be two integer functions such that g(v)≤ f(v) for all v∈ V(G).
Then G has a fractional (g,f)-factor if and only if for any subset S⊆ V(G), we have
f(S)-g(T)+∑_v∈ Td_G-S(v)≥0,
where T={v|v∈ V(G)-S d_G-S(v)<g(v)}.
If g(v)≡ a and f(v)≡ b, then by Theorem <ref>, we obtain the following result.
Let G be a graph and let a, b be two positive integers with a≤ b.
Then G has a fractional [a,b]-factor if and only if for any subset S⊆ V(G), we have
b|S|-a|T|+∑_v∈ Td_G-S(v)≥0,
where T={v|v∈ V(G)-S d_G-S(v)<a}.
There are many sufficient conditions which can assure a graph to have a fractional [a,b]-factors
(see for example, <cit.>).
Cho, Hyun, O and Park <cit.> posed the spectral version conjecture for the existence of [a,b]-factors in graphs.
Fan, Lin and Lu <cit.> proved that the conjecture holds for the case n≥ 3a+b-1.
Very recently, Wei and Zhang <cit.> confirmed the full conjecture.
Let a, b be two positive integers with a≤ b, and let G be a graph of order n≥ a+1.
If ρ(G)>ρ(K_a-1∨(K_n-a+K_1)) and na≡ 0 (mod 2) when a=b, then G has an [a,b]-factor.
It is well known that if G contains an [a,b]-factor, then it contains a fractional [a,b]-factor.
Inspired by the work of Wei and Zhang <cit.>,
we obtain a tight sufficient condition in terms of the spectral radius for a graph to contain a fractional [a,b]-factor.
Let a, b be two positive integers with a≤ b, and let G be a graph of order n≥ a+1.
If
ρ(G)≥ρ(K_a-1∨(K_n-a+K_1))
and na≡0 (mod 2) when a=b, then G has a fractional [a,b]-factor
unless G≅ K_a-1∨(K_n-a+K_1).
Note that 4+√(32a^2+24a+5)> a+1. Our Theorem <ref> improves the following result.
Let b≥ a≥ 1 be two integers, and let G be a graph of order n≥ 4+√(32a^2+24a+5).
If ρ(G)≥ρ(K_a-1∨(K_n-a+K_1)), then G has a fractional [a,b]-factor unless G≅ K_a-1∨(K_n-a+K_1).
A graph G is independent-set-deletable factor-critical, shortly ID-factor-critical, if for every
independent set I of G whose size has the same parity as |V(G)|, G-I has a perfect matching.
Let S_n, k be the join of a clique on k vertices with an independent set of n-k vertices for n>k. That is to say, S_n, k=K_k∨ I_n-k.
A graph G has a perfect matching if and only if o(G-S)≤|S| for every S⊆ V(G).
The following theorem is a direct consequence of Tutte's Theorem.
A graph G is ID-factor-critical if and only if o(G-I-S)≤|S| for every independent set I such that |I| has the same parity as |V(G)| and every subset S⊆ V(G)-I.
Using Theorem <ref>, we prove a tight spectral condition for a graph with minimum degree δ to be ID-factor-critical.
Let G be a graph of order n with minimum degree δ≥3r+1, where r≥1 is an integer.
If n≥ max{20δ+r+8, δ^3-r-3/2δ^2-r^2-2r-4/2δ-r^2-3r-3/2} and
ρ(G)≥ρ(S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1)),
then G is ID-factor-critical unless G≅ S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1).
§ PROOF OF THEOREM <REF>
Before presenting our proof, we introduce some necessary lemmas.
Let G be a graph of order n≥3. If e(G)≥n-12+1, then G has a Hamilton path.
Although the following Lemma <ref> can be obtained directly from Theorem 2 in <cit.>,
here we can present a much simpler proof of Lemma <ref> for a fractional [a,b]-factor.
Let a and b be two positive integers with a≤ b, and let G be a graph of order n≥ a+1 and minimum degree δ≥ a. If
e(G)≥n-12+a+1/2
and na≡0 (mod 2) when a=b, then G has a fractional [a,b]-factor.
For any two disjoint vertex subsets S and T in G, let
φ(S,T)=b|S|-a|T|+∑_v∈ Td_G-S(v).
Suppose to the contrary that G has no fractional [a,b]-factor. By Corollary <ref>,
there exist two disjoint subsets S and T of V(G) such that
φ(S,T)≤-1,
where T={v|v∈ V(G)-S d_G-S(v)<a}.
n≥ a+2 and b≥2.
Note that δ≥ a. If n=a+1, then G is a complete graph. It is well known that the complete graph contains an [a,b]-factor,
and hence G contains a fractional [a,b]-factor, a contradiction. So we have n≥ a+2.
If b=1, then a=b=1, and thus e(G)≥n-12+1.
By Lemma <ref>, G has a Hamilton path.
Note that n is even. Then G contains a 1-factor, and hence G contains a fractional 1-factor, a contradiction.
Hence b≥2.
S≠∅
Assume that S=∅.
Note that G-S=G and δ(G)≥ a. Then δ(G-S)≥ a. Recall that T={v|v∈ V(G)-S d_G-S(v)<a}.
Then T=∅, and thus
φ(∅,∅)=0,
which is contrary to (<ref>).
Next we will evaluate the value of |T|.
Case 1. 0≤|T|≤ b.
Note that δ≥ a. Then
φ(S,T) = b|S|-a|T|+∑_v∈ Td_G-S(v)
= b|S|-a|T|+∑_v∈ Td_G(v)-e_G(S,T)
≥ b|S|-a|T|+a|T|-|T||S|
= (b-|T|)|S|
≥ 0,
which contradicts (<ref>).
Case 2. |T|≥ b+1.
Since S and T are two disjoint subsets of V(G), n≥|S|+|T|≥|S|+b+1.
By the assumption e(G)≥n-12+a+1/2, there exist at most n-1-a+1/2 edges which are not in E[V(G-T-S),T]∪ E(G[T]).
Hence
∑_v∈ Td_G-S(v)≥(n-1-|S|)|T|-2(n-1-a+1/2).
Subcase 2.1. a<b.
Combining Claim <ref>, we have
φ(S,T)
= b|S|-a|T|+∑_v∈ Td_G-S(v)
≥ b|S|-a|T|+(n-1-|S|)|T|-2(n-1-a+1/2)
= (n-1-|S|-a)|T|+b|S|-2n+a+3
≥ (n-1-|S|-a)(b+1)+b|S|-2n+a+3
= (b-2)n+n-|S|-ab-b+2
≥ (b-2)n+(|S|+b+1)-|S|-ab-b+2
= (b-2)n-ab+3
≥ (b-2)(a+2)-ab+3
= 2b-2a-1
≥ 1,
which is contrary to (<ref>).
Subcase 2.2. a=b.
Recall that n≥|S|+b+1=|S|+a+1 and na ≡0 (mod 2). If a is odd, then n is even.
By Claim <ref>, we have n≥ a+3 and a≥3.
Then
φ(S,T) = a|S|-a|T|+∑_v∈ Td_G-S(v)
≥ a|S|-a|T|+(n-1-|S|)|T|-2(n-1-a+1/2)
= (n-1-|S|-a)|T|+a|S|-2n+a+3
≥ (n-1-|S|-a)(a+1)+a|S|-2n+a+3
= (a-2)n+n-|S|-a^2-a+2
≥ (a-2)n+(|S|+a+1)-|S|-a^2-a+2
= (a-2)n-a^2+3
≥ (a-2)(a+3)-a^2+3
= a-3
≥ 0,
a contradiction.
Next we consider the case where a is even. Since e(G)≥\binom{n-1}{2}+\frac{a+1}{2} and e(G) is an integer, we obtain that e(G)≥\binom{n-1}{2}+\frac{a+2}{2},
and hence ∑_v∈ Td_G-S(v)≥(n-1-|S|)|T|-2(n-1-\frac{a+2}{2}).
By Claim <ref>, we have n≥ a+2. Then
φ(S,T) = a|S|-a|T|+∑_v∈ Td_G-S(v)
≥ a|S|-a|T|+(n-1-|S|)|T|-2(n-1-a+2/2)
= (n-1-|S|-a)|T|+a|S|-2n+a+4
≥ (n-1-|S|-a)(a+1)+a|S|-2n+a+4
= (a-2)n+n-|S|-a^2-a+3
≥ (a-2)n+(|S|+a+1)-|S|-a^2-a+3
= (a-2)n-a^2+4
≥ (a-2)(a+2)-a^2+4
= 0,
which contradicts (<ref>).
Let A=(a_ij) and B=(b_ij) be two n× n matrices.
Define A≤ B if a_ij≤ b_ij for all i and j, and define A< B if A≤ B and A≠ B.
Let A=(a_ij) and B=(b_ij) be two n× n matrices with the spectral radii λ(A) and λ(B).
If 0≤ A≤ B, then λ(A)≤λ(B).
Furthermore, if B is irreducible and 0≤ A < B, then λ(A)<λ(B).
We will use the following lemma in the proof of Theorem <ref>.
Let G be a graph with minimum degree δ. Then
ρ(G)≤\frac{δ-1}{2}+√(2e(G)-δ n+\frac{(δ+1)^2}{4}).
[Hong, Shu and Fang <cit.>, Nikiforov <cit.>]
For graph G with 2e(G)≤ n(n-1), the function
f(x)=\frac{x-1}{2}+√(2e(G)-nx+\frac{(x+1)^2}{4})
is decreasing with respect to x for 0≤ x≤ n-1.
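As an aside (not part of the original text), the spectral radius bound quoted above is easy to check numerically; the following sketch on an arbitrary random graph is only illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 0.3
A = np.triu(rng.random((n, n)) < p, k=1).astype(float)
A = A + A.T                            # symmetric Erdos-Renyi adjacency matrix

delta = A.sum(axis=1).min()            # minimum degree
e = A.sum() / 2                        # number of edges
rho = np.linalg.eigvalsh(A).max()      # spectral radius
bound = (delta - 1) / 2 + np.sqrt(2 * e - delta * n + (delta + 1) ** 2 / 4)
print(f"rho(G) = {rho:.3f} <= bound = {bound:.3f}")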
Proof of Theorem <ref>. Let G be a graph of order n≥ a+1.
Note that the minimum degree of K_a-1∨ (K_n-a+K_1) is a-1.
Let h: E(G)→ [0,1] be a function. Then for v∈ V(K_1), we have ∑_e∈ E_G(v)h(e)≤ a-1.
By the definition of a fractional [a, b]-factor, then K_a-1∨ (K_n-a+K_1) has no fractional [a, b]-factor.
Assume that G≇ K_a-1∨ (K_n-a+K_1) (see Fig. <ref>). It suffices to prove that G contains a fractional [a, b]-factor.
First we prove the following claim.
δ≥ a.
If δ≤ a-1, then there exists a vertex v∈ V(G) such that d(v)≤ a-1.
This means that G is a subgraph of K_a-1∨ (K_n-a+K_1).
By Lemma <ref>, we have
ρ(G)≤ρ(K_a-1∨ (K_n-a+K_1)).
By the assumption ρ(G)≥ρ(K_a-1∨ (K_n-a+K_1)), we have G≅ K_a-1∨ (K_n-a+K_1), a contradiction.
Hence δ≥ a.
We distinguish the proof into the following two cases.
Case 1. a=1.
By the assumption, we have ρ(G)≥ρ(K_a-1∨ (K_n-a+K_1))= ρ(K_n-1+K_1)=n-2.
By Claim <ref>, Lemma <ref> and Proposition <ref>, we obtain that
n-2≤ρ(G)≤√(2e(G)-n+1).
It follows that e(G)≥\binom{n-1}{2}+\frac{1}{2}, and hence e(G)≥\binom{n-1}{2}+1.
By Lemma <ref>, then G contains a fractional [a,b]-factor.
Case 2. a≥ 2.
Note that K_n-1 is a proper subgraph of K_a-1∨ (K_n-a+K_1).
By the assumption and Lemma <ref>, we have ρ(G)≥ρ(K_a-1∨ (K_n-a+K_1))>ρ(K_n-1)=n-2.
By Claim <ref>, Lemma <ref> and Proposition <ref>, we have
n-2<ρ(G)≤\frac{a-1}{2}+√(2e(G)-an+\frac{(a+1)^2}{4}).
It follows that e(G)>\binom{n-1}{2}+\frac{a}{2}. That is to say, e(G)≥\binom{n-1}{2}+\frac{a+1}{2}.
By Lemma <ref>, then G contains a fractional [a,b]-factor.
§ PROOF OF THEOREM <REF>
By the Perron-Frobenius Theorem, ρ(G) is always a positive number (unless G is an empty graph),
and there exists a unique positive unit eigenvector corresponding to ρ(G), which is called the Perron vector of G.
Let n=∑_i=1^tn_i+s. If n_1≥ n_2≥⋯≥ n_t≥ p and n_1<n-s-p(t-1), then
ρ(K_s∨(K_n_1+ K_n_2 + ⋯ + K_n_t))<ρ(K_s∨(K_n-s-p(t-1)+ (t-1)K_p)).
Graph S_δ+r,δ∨(K_n-2δ-r-1+I_δ+1) is not ID-factor-critical.
Let G= S_δ+r,δ∨(K_n-2δ-r-1+I_δ+1) (see Fig. <ref>).
Suppose to the contrary that G is ID-factor-critical.
By the definition of an ID-factor-critical graph,
G-I has a perfect matching for any independent set I of G whose size has the same parity as |V(G)|.
Note that S_δ+r, δ=K_δ∨ I_r. Take I=I_r and let H=G-I.
Then
H≅ K_δ∨(K_n-2δ-r-1+I_δ+1).
Note that the vertices of I_δ+1 are only adjacent to the vertices of K_δ. Hence H has no perfect matching, a contradiction.
Now, we are in a position to present the proof of Theorem <ref>.
Proof of Theorem <ref>.
Assume that G is not ID-factor-critical. According to Theorem <ref>, there exist an independent set I whose size has the same parity as
|V(G)|=n and a subset S⊆ V(G)-I such that o(G-I-S)≥ |S|+1. Let |I|=r and |S|=s. Then o(G-I-S)≥ s+1.
Since n-r is even, o(G-I-S) and s have the same parity. Hence we have o(G-I-S)≥ s+2.
It is clear that G is a spanning subgraph of G'=I_r∨ (K_s∨(K_n_1+ K_n_2+⋯+K_n_s+2))
for some odd integers n_1≥ n_2≥⋯≥ n_s+2>0 with ∑_i=1^s+2n_i=n-r-s. Then we have
ρ(G)≤ρ(G'),
where equality holds if and only if G≅ G'.
Let G”=S_s+r, s∨(K_n-2s-r-1+I_s+1). By Lemma <ref>, we obtain that
ρ(G')≤ρ(G”),
where equality holds if and only if (n_1, n_2, … ,n_s+2)=(n-2s-r-1,1,… ,1).
Case 1. s=δ.
Combining (<ref>) and (<ref>), we have
ρ(G)≤ρ(G')≤ρ(G”)=ρ(S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1)).
By the assumption ρ(G)≥ρ(S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1)),
we have G≅ S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1).
By Lemma <ref>, S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1) is not ID-factor-critical.
Hence G≅ S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1).
Case 2. s≥δ+1.
Recall that G”=S_s+r, s∨(K_n-2s-r-1+I_s+1).
The vertex set of G” can be divided into V(G”)=V(K_s)∪ V(I_s+1)∪ V(I_r)∪ V(K_n-2s-r-1),
where V(K_s)={u_1, u_2, … ,u_s},
V(I_s+1)={v_1, v_2, … ,v_s+1}, V(I_r)={w_1, w_2, … ,w_r}
and V(K_n-2s-r-1)={z_1, z_2, … ,z_n-2s-r-1}.
Let E_1={v_iz_j|δ+2≤ i≤ s+1, 1≤ j≤ n-2s-r-1}
∪{v_iv_j|δ+2≤ i≤ s,i+1≤ j≤ s+1}
and E_2={u_iv_j|δ+1≤ i≤ s,1≤ j≤δ+1}.
Let G^*=G”+E_1-E_2. Obviously, G^*≅ S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1).
Let x be the Perron vector of A(G”), and let ρ”=ρ(G”).
By symmetry, x takes the same value on the vertices of V(K_s), V(I_s+1), V(I_r) and V(K_n-2s-r-1), respectively.
It is easy to see that
A(G”)=[
[ (J-I)_s× s J_s× (s+1) J_s× r J_s× (n-2s-r-1); J_(s+1)× s O_(s+1)× (s+1) J_(s+1)× r O_(s+1)× (n-2s-r-1); J_r× s J_r× (s+1) O_r× r J_r×(n-2s-r-1); J_(n-2s-r-1)× s O_(n-2s-r-1)× (s+1) J_(n-2s-r-1)× r (J-I)_(n-2s-r-1)×(n-2s-r-1) ]].
We denote the entry of x by x_1, x_2, x_3 and x_4 corresponding to the vertices in the above four vertex sets, respectively.
By A(G”)x=ρ” x, we have
ρ” x_2=sx_1+rx_3,
ρ” x_3=sx_1+(s+1)x_2+(n-2s-r-1)x_4,
ρ” x_4=sx_1+rx_3+(n-2s-r-2)x_4.
Observe that n≥2s+r+2. According to (<ref>) and (<ref>), we obtain that x_4≥ x_2.
By (<ref>) and (<ref>), we have ρ” x_3-ρ” x_4=(s+1)x_2-rx_3+x_4.
It follows that x_4=\frac{(ρ”+r)x_3-(s+1)x_2}{ρ”+1}≥ x_2.
Then we have x_3≥\frac{ρ”+s+2}{ρ”+r}x_2.
Note that s≥δ+1 and δ≥3r+1. Then ρ”+s+2≥ρ”+δ+3>ρ”+r, and hence x_3>x_2.
Combining (<ref>), we have
x_2>\frac{sx_1}{ρ”-r}.
Recall that G^*≅ S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1).
Similarly, let y be the Perron vector of A(G^*), and let ρ^*=ρ(G^*).
Note that G^* contains K_n-2δ-r-1 as a proper subgraph. Then ρ^*>n-2δ-r-2.
By symmetry, y takes the same value (say y_1, y_2, y_3 and y_4) on the vertices of
V(K_δ), V(I_δ+1), V(I_r) and V(K_n-2δ-r-1).
By A(G^*)y=ρ^*y, we have
ρ^*y_2=δ y_1+ry_3,
ρ^*y_4=δ y_1+ry_3+(n-2δ-r-2)y_4.
Combining (<ref>) and (<ref>), we have
y_4=\frac{ρ^*y_2}{ρ^*-(n-2δ-r-2)}.
Note that n≥2s+r+2. Then δ+1≤ s≤\frac{n-r-2}{2}. Since G” is not a complete graph, ρ”< n-1.
ρ”<ρ^*.
Suppose that ρ”≥ρ^*. By x_4≥ x_2, (<ref>) and (<ref>), we have
y^T(ρ^*-ρ”)x
= y^T(A(G^*)-A(G”))x
= ∑_i=δ+2^s+1∑_j=1^n-2s-r-1(x_v_iy_z_j+x_z_jy_v_i)+∑_i=δ+2^s∑_j=i+1^s+1(x_v_iy_v_j+x_v_jy_v_i)-∑_i=δ+1^s∑_j=1^δ+1(x_u_iy_v_j+x_v_jy_u_i)
= (n-2s-r-1)(s-δ)(x_2y_4+x_4y_4)+(s-δ-1)(s-δ)x_2y_4-(s-δ)(δ+1)(x_1y_2+x_2y_4)
≥ (s-δ)[2(n-2s-r-1)x_2y_4+(s-δ-1)x_2y_4-(δ+1)x_2y_4-(δ+1)x_1y_2]
= (s-δ)[(2n-3s-2δ-2r-4)x_2y_4-(δ+1)x_1y_2]
> (s-δ)[(2n-3s-2δ-2r-4)·\frac{sx_1}{ρ”-r}·\frac{ρ^*y_2}{ρ^*-(n-2δ-r-2)}-(δ+1)x_1y_2]
= \frac{(s-δ)x_1y_2}{(ρ”-r)(ρ^*-(n-2δ-r-2))}[(2n-3s-2δ-2r-4)sρ^*-(δ+1)(ρ”-r)(ρ^*-(n-2δ-r-2))]
= \frac{(s-δ)(δ+1)x_1y_2}{(ρ”-r)(ρ^*-(n-2δ-r-2))}[ρ^*(2n-2δ-3s-2r-4)·\frac{s}{δ+1}-(ρ”-r)(ρ^*-(n-2δ-r-2))].
Note that s≥δ+1, ρ”≥ρ^* and ρ^*>δ-1≥3r. Then
y^T(ρ^*-ρ”)x
> \frac{(s-δ)(δ+1)x_1y_2}{(ρ”-r)(ρ^*-(n-2δ-r-2))}[ρ^*(2n-2δ-3s-2r-4)-ρ”ρ^*+ρ”(n-2δ-r-2)+rρ^*-r(n-2δ-r-2)]
= \frac{ρ^*(s-δ)(δ+1)x_1y_2}{(ρ”-r)(ρ^*-(n-2δ-r-2))}[(2n-2δ-3s-2r-4)-ρ”+\frac{ρ”}{ρ^*}(n-2δ-r-2)+r-\frac{r}{ρ^*}(n-2δ-r-2)]
> \frac{ρ^*(s-δ)(δ+1)x_1y_2}{(ρ”-r)(ρ^*-(n-2δ-r-2))}[(2n-2δ-3s-2r-4)-ρ”+(n-2δ-r-2)+r-\frac{1}{3}(n-2δ-r-2)]
= \frac{ρ^*(s-δ)(δ+1)x_1y_2}{(ρ”-r)(ρ^*-(n-2δ-r-2))}(\frac{8}{3}n-\frac{10}{3}δ-3s-\frac{5}{3}r-\frac{16}{3}-ρ”).
Since K_s⊂ G” and δ≥3r+1, ρ”> ρ(K_s)=s-1≥δ >r.
Note that s≤\frac{n-r-2}{2}, ρ”< n-1, ρ^*>n-2δ-r-2 and n≥ 20δ+r+8. Then
y^T(ρ^*-ρ”)x
> \frac{ρ^*(s-δ)(δ+1)x_1y_2}{(ρ”-r)(ρ^*-(n-2δ-r-2))}(\frac{8}{3}n-\frac{10}{3}δ-3·\frac{n-r-2}{2}-\frac{5}{3}r-\frac{16}{3}-ρ”)
= \frac{ρ^*(s-δ)(δ+1)x_1y_2}{(ρ”-r)(ρ^*-(n-2δ-r-2))}(\frac{7}{6}n-\frac{10}{3}δ-\frac{1}{6}r-\frac{7}{3}-ρ”)
= \frac{ρ^*(s-δ)(δ+1)x_1y_2}{(ρ”-r)(ρ^*-(n-2δ-r-2))}(\frac{1}{6}n-\frac{10}{3}δ-\frac{1}{6}r-\frac{4}{3}+(n-1)-ρ”)
> \frac{ρ^*(s-δ)(δ+1)x_1y_2}{(ρ”-r)(ρ^*-(n-2δ-r-2))}·\frac{n-20δ-r-8}{6}
≥ 0.
This implies that ρ^*>ρ”, which contradicts the assumption ρ”≥ρ^*.
By Claim <ref>, (<ref>) and (<ref>), we have
ρ(G)≤ρ(G')≤ρ(G”)<ρ(G^*)=ρ(S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1)),
which contradicts ρ(G)≥ρ(S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1)).
Case 3. s<δ.
Recall that G'=I_r∨ (K_s∨(K_n_1+ K_n_2+⋯+K_n_s+2)).
Then d_G'(v)=n_s+2+s+r-1≥δ for v ∈ V(K_n_s+2), and hence n_s+2≥δ-s-r+1.
Let G”'=I_r∨(K_s∨(K_n-s-r-(s+1)(δ-s-r+1)+(s+1)K_δ-s-r+1)).
By Lemma <ref>, we have
ρ(G')≤ρ(G”'),
where equality holds if and only if (n_1,n_2,…,n_s+2)=(n-s-r-(s+1)(δ-s-r+1),δ-s-r+1,…,δ-s-r+1).
Let ρ”'=ρ(G”').
ρ”'< n-r-1-(s+1)(δ-s+1).
Suppose to the contrary that ρ”'≥ n-r-1-(s+1)(δ-s+1).
Let x be the Perron vector of A(G”'). By symmetry, x takes the same values x_1, x_2, x_3 and x_4
on the vertices of K_s, (s+1)K_δ-s-r+1, I_r and K_n-s-r-(s+1)(δ-s-r+1), respectively.
According to A(G”')x=ρ”' x, we obtain that
ρ”'x_1=(s-1)x_1+(s+1)(δ-s-r+1)x_2+rx_3+(n-s-r-(s+1)(δ-s-r+1))x_4,
ρ”'x_2=sx_1+(δ-s-r)x_2+rx_3,
ρ”'x_3=sx_1+(s+1)(δ-s-r+1)x_2+(n-s-r-(s+1)(δ-s-r+1))x_4,
ρ”'x_4=sx_1+rx_3+(n-s-r-1-(s+1)(δ-s-r+1))x_4.
By (<ref>) and (<ref>), we have
x_3=\frac{(ρ”'+1)x_1}{ρ”'+r}.
Substituting (<ref>) into (<ref>) and (<ref>), we have
x_2=\frac{sx_1+\frac{r(ρ”'+1)}{ρ”'+r}x_1}{ρ”'-δ+s+r},
x_4=\frac{sx_1+\frac{r(ρ”'+1)}{ρ”'+r}x_1}{ρ”'-[n-s-r-1-(s+1)(δ-s-r+1)]}.
Since n≥δ^3-\frac{r-3}{2}δ^2-\frac{r^2-2r-4}{2}δ-\frac{r^2-3r-3}{2}, we have
ρ”'≥ n-r-1-(s+1)(δ-s+1)>δ-r+1.
Substituting (<ref>), (<ref>) and (<ref>) into (<ref>), we have
ρ”'+1 = s+\frac{(s+1)(δ-s-r+1)(s+\frac{r(ρ”'+1)}{ρ”'+r})}{ρ”'-δ+s+r}+\frac{r(ρ”'+1)}{ρ”'+r}+\frac{[n-s-r-(s+1)(δ-s-r+1)](s+\frac{r(ρ”'+1)}{ρ”'+r})}{ρ”'-(n-s-r-1-(s+1)(δ-s-r+1))}
≤ s+\frac{(s+1)(δ-s-r+1)(s+r)}{ρ”'-δ+s+r}+r+\frac{[n-s-r-(s+1)(δ-s-r+1)](s+r)}{ρ”'-(n-s-r-1-(s+1)(δ-s-r+1))}
< s+\frac{(s+1)(δ-s-r+1)(s+r)}{(δ-r+1)-δ+s+r}+r+\frac{[n-s-r-(s+1)(δ-s-r+1)](s+r)}{[n-r-1-(s+1)(δ-s+1)]-(n-s-r-1-(s+1)(δ-s-r+1))}
= s+(s+r)(δ-s-r+1)+r+\frac{[n-s-r-(s+1)(δ-s-r+1)](s+r)}{s-sr-r}
= n-r-1-(s+1)(δ-s+1)-\frac{1}{sr-s+r}[(sr+2r)n+(2r-1)s^3+(2r^2-2δ r+δ+1)s^2+(r^3-δ r^2-r^2-3δ r-2r+1)s+r^3-δ r^2-3r^2-2δ r-3r].
Let f(n)=(sr+2r)n+(2r-1)s^3+(2r^2-2δ r+δ+1)s^2+(r^3-δ r^2-r^2-3δ r-2r+1)s+r^3-δ r^2-3r^2-2δ r-3r.
We assert that f(n)≥ 0. Suppose that f(n)<0. Then
n<\frac{1}{sr+2r}[(-2r+1)s^3+(-2r^2+2δ r-δ-1)s^2+(-r^3+δ r^2+r^2+3δ r+2r-1)s-r^3+δ r^2+3r^2+2δ r+3r].
Note that 0≤ s<δ, -2r+1<0, -2r^2+2δ r-δ-1>0 and -r^3+δ r^2+r^2+3δ r+2r-1>0. Then
n < \frac{1}{sr+2r}[(-2r+1)s^3+(-2r^2+2δ r-δ-1)s^2+(-r^3+δ r^2+r^2+3δ r+2r-1)s-r^3+δ r^2+3r^2+2δ r+3r]
< \frac{1}{2r}[(-2r^2+2δ r-δ-1)δ^2+(-r^3+δ r^2+r^2+3δ r+2r-1)δ-r^3+δ r^2+3r^2+2δ r+3r]
= \frac{1}{2r}[(2r-1)δ^3+(-r^2+3r-1)δ^2+(-r^3+2r^2+4r-1)δ-r^3+3r^2+3r]
< \frac{1}{2r}[2rδ^3+(-r^2+3r)δ^2+(-r^3+2r^2+4r)δ-r^3+3r^2+3r]
= δ^3-\frac{r-3}{2}δ^2-\frac{r^2-2r-4}{2}δ-\frac{r^2-3r-3}{2},
which contradicts n≥δ^3-\frac{r-3}{2}δ^2-\frac{r^2-2r-4}{2}δ-\frac{r^2-3r-3}{2}. Hence f(n)≥ 0.
Then
ρ”'+1 < n-r-1-(s+1)(δ-s+1)-\frac{1}{sr-s+r}f(n)
< n-r-1-(s+1)(δ-s+1)
≤ ρ”',
a contradiction. Therefore, we have ρ”'<n-r-1-(s+1)(δ-s+1).
By Claim <ref> and s< δ, we obtain that
ρ”' < n-r-1-(s+1)(δ-s+1)
= n-δ-r-1-[(δ-s)s+1]
< n-δ-r-1.
Note that K_n-δ-r⊂ S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1). Then
n-δ-r-1=ρ(K_n-δ-r)<ρ(S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1)).
Combining (<ref>) and (<ref>), we have
ρ(G)≤ρ(G')≤ρ(G”')<n-δ-r-1<ρ(S_δ+r, δ∨(K_n-2δ-r-1+I_δ+1)),
which contradicts the assumption, as desired.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
99
Anstee1990 R.P. Anstee, Simplified existence theorems for (g,f)-factor, Discrete Appl. Math. 27 (1990) 29–38.
Berman1979 A. Berman, R.J. Plemmons, Nonnegative matrices in the mathematical sciences, Academic Press, New York, 1979.
Bondy2008 J.A. Bondy, U.S.R. Murty, Graph Theory, Grad. Texts in Math. vol. 244, Springer, New York, 2008.
Cho2021 E. Cho, J. Hyun, S. O, J. Park, Sharp conditions for the existence of an even [a,b]-factor in a graph,
Bull. Korean Math. Soc. 58 (2021) 31–46.
Fan D.D. Fan, H.Q. Lin, Spectral conditions for k-extendability and k-factors of bipartite graphs, arXiv:2211.09304.
Fan2021 D.D. Fan, S. Goryainov, X.Y. Huang, H.Q. Lin, The spanning k-tree, perfect matchings and spectral radius of graphs,
Linear Multilinear Algebra 70 (2022) 7264–7275.
Fan2022 D.D. Fan, H.Q. Lin, H.L. Lu, Spectral radius and [a,b]-factors in graphs, Discrete Math. 345 (2022) 112892.
Hong2001 Y. Hong, J.L. Shu, K.F. Fang, A sharp upper bound of the spectral radius of graph,
J. Combin. Theory, Ser. B 81 (2001) 177–183.
Horn1986 R.A. Horn, C.R. Johnson, Matrix analysis, Cambridge University Press, New York, 1986.
Liu2001 G.Z. Liu, L.J. Zhang, Fractional (g,f)-factors of graphs, Acta Math. Sci. 21 (2001) 541–545.
Liu2008 G.Z. Liu, L.J. Zhang, Toughness and the existence of fractional k-factors of graphs, Discrete Math. 308 (2008) 1741–1748.
Lu2013 H.L. Lu, Simplified existence theorems on all fractional [a,b]-factors,
Discrete Appl. Math. 161 (2013) 2075–2078.
Nikiforov2002 V. Nikiforov, Some inequalities for the largest eigenvalue of a graph,
Combin. Probab. Comput. 11 (2002) 179–189.
Tutte1947 W.T. Tutte, The factorization of linear graphs, J. London Math. Soc. 22 (1947) 107–111.
Wang2023 J.J. Wang, J.X. Zheng, Y.L. Chen, Spectral radius conditions for fractional [a, b]-covered graphs,
Linear Algebra Appl. 666 (2023) 1–10.
Wei2023 J. Wei, S.G. Zhang, Proof of a conjecture on the spectral radius condition for [a, b]-factors,
Discrete Math. 346 (2023) 113269.
|
http://arxiv.org/abs/2307.04898v1 | 20230710204908 | Slitless spectrophotometry with forward modelling: principles and application to atmospheric transmission measurement | [
"Jérémy Neveu",
"Vincent Brémaud",
"Pierre Antilogus",
"Florent Barret",
"Sébastien Bongard",
"Yannick Copin",
"Sylvie Dagoret-Campagne",
"Claire Juramy",
"Laurent Le-Guillou",
"Marc Moniez",
"Eduardo Sepulveda",
"The LSST Dark Energy Science Collaboration"
] | astro-ph.IM | [
"astro-ph.IM"
] |
Sorbonne Université, CNRS, Université de Paris, LPNHE, 75252 Paris Cedex 05, France
Université Paris-Saclay, CNRS, IJCLab, 91405, Orsay, France
MODAL'X, UPL, Univ. Paris Nanterre, CNRS, F92000 Nanterre France
Univ Lyon, Univ Claude Bernard Lyon 1, CNRS/IN2P3, IP2I Lyon, UMR 5822, F-69622, Villeurbanne, France
In the next decade, many optical surveys will aim to tackle the question
of the nature of dark energy by measuring its equation-of-state parameter at the permil
level. This requires trusting the photometric calibration of the survey with
a precision never reached so far, controlling many sources of systematic
uncertainties. The measurement of the on-site atmospheric transmission for each exposure, or
on average for each season or for the full survey, can help reach
the permil precision on magnitudes.
This work aims at demonstrating the ability of slitless spectroscopy to perform standard star spectrophotometry and to monitor the on-site
atmospheric transmission, as needed for example by the Vera C. Rubin Observatory Legacy Survey of Space and Time supernova cosmology program. We fully deal with the case of a disperser in the filter wheel, which is the configuration chosen for the Rubin Auxiliary Telescope.
The theoretical basis of slitless spectrophotometry is at the heart of
our forward model approach to extract spectroscopic information from slitless
data. We developed a publicly available software package called Spectractor (<https://github.com/LSSTDESC/Spectractor>) that implements each
ingredient of the model and finally performs a fit of a spectrogram model directly on image data to get the spectrum.
We show on simulations that our model allows us to understand the structure of
spectrophotometric exposures. We also demonstrate its use on real data,
solving specific issues and illustrating how our procedure allows us to improve
the model describing the data. Finally, we discuss how this approach can be
used to directly extract atmospheric transmission parameters from data, and
thus provide the basis for on-site atmosphere monitoring. We show the
efficiency of the procedure on simulations, and test it on the limited
data set available.
Slitless spectrophotometry with forward modelling: principles and application to atmospheric transmission measurement
J. Neveu1,2V. Brémaud2P. Antilogus1F. Barret3S. Bongard1Y. Copin4S. Dagoret-Campagne2C. Juramy1L. Le Guillou1M. Moniez2E. Sepulveda1The LSST Dark Energy Science Collaboration
August 12, 2023
==================================================================================================================================================================================
§ INTRODUCTION
Cosmology measures and interprets the evolution of the whole universe. To probe
its dynamics and understand the nature of dark energy, observers need to
compute distances at different epochs, from the light they receive in
telescopes. The evolution of cosmological distances with time tells how dark
energy, dark matter and matter interact and how they can be modelled.
Optical surveys use magnitude and colour comparisons to build a relative
distance scale. For instance, type Ia supernovae (SNe Ia) revealed the presence
of a dark energy component because they appeared fainter in the early universe
than expected <cit.>. More precisely, because SN Ia colours redshift with
universe expansion, high redshift supernovae were fainter in red bands than
what can be inferred from low redshift supernovae observed in blue bands. This
case underlines that colours need to be accurately calibrated in an optical
survey to display the universe dynamics (see e.g., ). Every chromatic effect that alters the
astral light distorts our dynamic perception of the universe expansion, like
the galactic dust, the instrumental response or the local atmospheric
conditions.
In this paper we present a forward modelling method to analyse and extract data
gathered with a dispersing element (grating or hologram) in the filter wheel of
a telescope. We label our approach as forward modelling because we
implement a numerical simulation of the data taking procedure including as much
a priori knowledge as is available, and then estimate model parameters
from likelihood maximisation. This method is fundamentally different from the traditional flux weighted sum orthogonally to the dispersion axis <cit.> or algebraic method using multiple images <cit.> as it relies on physical modelling to describe directly the footprint of the spectrum on the imaging sensor. Deconvolution techniques and PSF modelling have been explored for optical fibre spectrographs <cit.> but in our forward model we aim at going further in building a physical model for the extraction of the spectra, and in particular the atmospheric transmission from spectra. Our approach was inspired by the forward modelling developed in <cit.>, and we applied it for punctual sources, with the ultimate goal to measure atmospheric transmission.
The scientific context of our work is the study of the atmospheric transmission
variability via the repeated observation of stable (aka standard) stars.
<cit.> opens the path to control the optical survey photometry with dedicated
measurements of atmospheric components, observing standard stars. Then
<cit.> reached a 5 permil (i.e., “per thousand”)
relative photometric calibration between filters covering the full optical and
the near-infrared (near-IR) range, accounting for linear temporal variations of the atmosphere
transmission over the nights. Given the scale of the new SNe Ia surveys, the
number of observed objects reaches a point where even such an exquisite
calibration stands as the dominant source of systematics. Being able to
accurately measure and estimate the chromatic variations of the atmospheric
transmission that allow us to probe for systematic variations, either nightly,
seasonal or directional, at the per thousand level thus becomes one of the
challenges of modern SNe Ia cosmology.
While spectrophotometry at the required precision has been hinted at in
<cit.>, it relies on
the dedicated use of a specifically designed integral field spectrograph. Our
current approach instead focusses on exploring the spectrophotometric
possibilities offered by a much simpler design: we consider a slitless
spectrograph, where a disperser (either a grating or a hologram) is inserted in
the converging beam of a telescope in a regular filter wheel. This
implementation is used in the Rubin Auxiliary Telescope (AuxTel) <cit.>, as well as on the Star Direct Illumination Calibration Experiment (StarDICE) telescope, an experiment aiming at
transferring to stars the unit of optical power (watt) defined at the National Institute of Standards and Technology (NIST) with a reference cryogenic radiometer, the Primary Optical Watt Radiometer (POWR) <cit.>.
This paper describes the preparatory work for those projects, presenting the
analysis procedure developed and tested on a few nights of data gathered at the
Cerro Tololo Inter-American Observatory (CTIO). It describes some implementation choices, and demonstrates how the forward
modelling approach allows us to incrementally build a detailed understanding of the
data that in the end can permit the direct extraction of the parameters used to
describe the atmospheric transmission variability.
The first section of this paper describes the theory of slitless spectrophotometry, the basic implementation
in and the data and simulations sets to assess the quality of the algorithm. Section <ref>
details the different ingredients of the software, the assumptions and the implementation choices. In particular we detail the regularised deconvolution technique at the heart of the process to get a prior for the forward model, and qualify the code on simulations. The application of to extract spectra from on-sky CTIO data is described in section <ref>,
while section <ref> is focussed on the measurement of the atmospheric transmission. Discussions and summaries conclude the paper in section <ref>.
§ FORWARD MODELLING OF A SLITLESS SPECTRUM
There are many different possible configurations to gather spectroscopic data
without the use of a slit.
The slitless spectrograph configuration that will be considered in this paper (a grating or hologram in a converging beam) can be implemented in different
ways that could require special care in the forward-modelling analysis. For
example, the field of view can be small and contain only one star, or it can be
crowded, with many star spectrograms super-imposing on each other. There
might be different detectors spanning the field of view, with responses that
need to be mapped and gaps that need to be accounted for. We haven't tried to solve abstractly all different situations. We therefore defer
to further work all the technical issues that we didn't encounter, and
concentrate on the ones we did and solved.
Among those restrictions, we consider in the following only the case of point
sources and won’t discuss, aside from a passing mention in the next section,
the case of extended sources like galaxies or resolved planetary
nebulae, nor the deblending of such extended sources with point sources.
§.§ Description and geometry of a slitless spectrograph
The slitless spectrograph we consider can be seen as a grating with N grooves
per millimetre, placed in a telescope beam at a distance from
a Charge-Coupled Device (CCD) sensor. In the following, the positions in the sensor plane are parametrised with
the coordinates r⃗ = (x, y) and the z axis points orthogonally toward
the CCD. The disperser can be more or less complex, used in a convergent or in
a parallel beam, and it can have a varying resolution, but in the end it will
spread the source light in different diffraction
orders superimposed on the CCD, with a sky background also
diffracted. Depending on the choice of N, on the size N_CCD (in pixels) of
the sensor and on D_CCD, different diffraction orders will end up being
recorded by the sensor.
The special case of the 0th order is worth mentioning: while its presence on the
image is not mandatory, knowing its centroid position r⃗_0 can ease
significantly the setting of the zero of the wavelength calibration.
The positive and negative diffraction orders are placed on each side of the 0th
order on a line forming an angle α with the x axis. We parametrize the
position along this dispersion axis with the coordinate u and transversally
with the coordinate v. The zeroth order stands at coordinates (u_0,0) in the
(u,v) coordinate system. If the instrument wavelength coverage spans more
than one octave in wavelength, the different diffraction orders superimpose
on each other.
The total 2D CCD image formed by the cumulation of the collected light is called
hereafter a spectrogram. All the notations are illustrated in
Figure <ref>.
For a periodic grating placed inside a convergent beam instead of a parallel beam, this optical system response is astigmatic i.e., the image of a point source
like a star is not point-like on the sensor. Usually, the image
is elliptical, and the redder the wavelength the wider the ellipse (see e.g., ).
However, the centroid of these ellipses is still given by the classical grating formula (see
e.g., ) :
sinθ_p(λ) - sinθ_0 = p N_ effλ,
tanθ_p(λ) = u(λ)/D_CCD, tanθ_0 = u_0/D_CCD.
The angles are those of the projection in the plane perpendicular to the grating
lines (see Figure <ref>); θ_0 is the angle of the
projected telescope beam axis with respect to the normal to the grating surface,
p is the diffraction order, θ_p(λ) is the corresponding projected
diffracted angle, and is the effective spatial frequency of
grating lines at the position of the central ray[For some dispersers
this number of lines per millimetre can depend on the beam position on the
grating.] of the light beam (hereafter called chief ray).
The observer has the liberty to choose the focus of the telescope. One common
choice is to set the focus on the order 0, but then the spectrogram can suffer from defocusing effects toward redder wavelengths (for periodic gratings). To minimize the defocusing effect
and increase the spectrograph resolution, it is possible to optimize the focus
for a particular wavelength, but then it is more difficult to set the zero of
the wavelength calibration with a defocused order 0 image. In the following we
assume that the focus has been made on the 0th order, unless otherwise
specified.
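To make the grating formula above concrete, here is a minimal Python sketch (not the Spectractor implementation) computing the position u(λ) along the dispersion axis; the disperser frequency, distance and zeroth-order position used below are illustrative assumptions, not measured CTIO or AuxTel values.
import numpy as np

def dispersion_position(lambdas_nm, D_ccd_mm, N_eff, u0_mm, order=1):
    # Grating formula: sin(theta_p) - sin(theta_0) = p * N_eff * lambda,
    # with tan(theta_p) = u(lambda) / D_CCD and tan(theta_0) = u_0 / D_CCD.
    theta0 = np.arctan2(u0_mm, D_ccd_mm)
    sin_theta_p = order * N_eff * lambdas_nm * 1e-6 + np.sin(theta0)  # N_eff in lines/mm, lambda in nm
    return D_ccd_mm * np.tan(np.arcsin(sin_theta_p))

# illustrative numbers only (roughly hologram-like): 350 lines/mm, disperser ~55 mm from the CCD
lambdas = np.array([350.0, 550.0, 750.0, 1100.0])
u = dispersion_position(lambdas, D_ccd_mm=55.0, N_eff=350.0, u0_mm=0.0)
print(np.round(u, 2))   # positions in mm along the dispersion axis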
§.§ Theoretical model of a spectrogram
A theoretically perfect image of a light source can be modelled by its spatio-spectral flux
density C(r⃗, λ). For a point source, we can separate the spectral
and spatial distributions as:
S_*(r⃗,λ) =
S_*(λ) ×δ(r⃗ - r⃗_0)
with S_*(λ) the astrophysical object spectral
energy density (SED) and δ the Dirac
distribution. The observed spectral and spatial distribution is then:
C_p(r⃗,λ) =
T_inst, p(λ) T_atm(λ | P⃗_a)
S_*(r⃗,λ)
where T_ atm(λ| P⃗_a) is the atmospheric
transmission, depending on a set of atmospheric parameters P⃗_a,
T_ inst, p(λ) is the instrumental transmission (including the CCD
quantum efficiency) for the diffraction order p.
In the case of a monochromatic point source we have C_0(r⃗,λ) =
A δ(λ- λ_0)×δ(r⃗ - r⃗_0) with A the received source flux at λ_0. The Point Spread Function (PSF) describes the
optical response of the telescope and of the atmospheric turbulence on a
sensitive surface like a CCD (see
Figure <ref> (a)). It depends a priori on the
wavelength λ and can be modelled by a function ϕ_0(r⃗,
λ) whose spatial integral is normalised to one. Therefore, by definition
of the PSF, the image of a monochromatic point source centred on r⃗_0 on
the CCD can be described as a convolution product:
I_0(r⃗) = ∫dλ∬d^2 r⃗' C_0(r⃗', λ) ϕ_0(r⃗ - r⃗', λ)
= A ϕ_0(r⃗ - r⃗_0, λ_0).
With a slitless spectrograph, the mechanism is the same but the incoming light
is dispersed in several diffraction orders p, and light from all diffraction orders of the sky background is superimposed. The position of the point source
image depends on the wavelength and the order p. Also the PSF shape itself can
depend on the order p and the wavelength λ. Here and everywhere else, the dispersed imaging PSF integrates both the seeing and the instrumental PSF. Let's introduce the
dispersion relation Δ⃗_p(λ) as the 2D vectorial quantity that
describes the position of the PSF centroids { x_c,p(λ)-x_0,
y_c,p(λ)-y_0 } on the CCD with respect to the zeroth order position (x_0,y_0), for a diffraction order p and a
wavelength λ. This quantity can be computed by applying
the classical grating formula <ref>. With a point-like monochromatic
source, the image recorded on the CCD is modelled as:
I(r⃗) = ∑_p ∫dλ∬d^2 r⃗' C_p(r⃗', λ) ϕ_p(r⃗ - Δ⃗_p(λ) - r⃗', λ)
= ∑_p A_p ϕ_p(r⃗ - Δ⃗_p(λ_0), λ_0),
with
Δ⃗_0(λ_0) = r⃗_0 and A_p the flux density at
wavelength λ_0 for the order p. On the image, one expects a series of
spots, one per order p, of different intensities, with different sizes, but
containing the same spectral information S_*(λ_0) about the source.
The observed sources are naturally polychromatic. For point-like sources, the
theoretical description above holds at each wavelength and the image can be
described as:
I(r⃗) = ∑_p∫dλ S_p(λ)ϕ_p(r⃗ - Δ⃗_p(λ), λ),
with
Δ⃗_p(λ) = {
x_c,p(λ) -x_0,
y_c,p(λ) - y_0
},
S_p(λ) = T_inst, p(λ) T_atm(λ | P⃗_a) S_*(λ).
Therefore, the spectrogram of a polychromatic source can be viewed as a
stack of monochromatic images with different centroids, or as a dispersed image with a very chromatic PSF (see
Figure <ref> (b)). This description of the image
and of the spectrogram formation is the basis of our forward model of slitless spectrograph data.
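The stacking picture above can be illustrated with a minimal toy simulation (this is not the simulator described later, which uses the full forward model); the Gaussian kernel, dispersion law, shear and seeing values below are placeholder assumptions.
import numpy as np

def gaussian_psf(x, y, xc, yc, sigma):
    # placeholder circular PSF kernel, normalised to a unit integral per pixel area
    return np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

def toy_spectrogram(lambdas, sed, dx, dy, sigma, shape=(60, 800)):
    # I(r) = sum over wavelengths of S(lambda) * phi(r - Delta(lambda)), sampled per pixel
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx]
    image = np.zeros(shape)
    for s, ddx, ddy, sig in zip(sed, dx, dy, sigma):
        image += s * gaussian_psf(x, y, ddx, ny / 2 + ddy, sig)
    return image

lam = np.linspace(350, 1100, 751)
sed = np.exp(-0.5 * ((lam - 600) / 200) ** 2)    # featureless toy spectrum
dx = (lam - 350) * 1.0                           # toy linear dispersion: 1 pixel per nm
dy = 0.005 * (lam - 350)                         # toy transverse shear mimicking ADR
sigma = 3.0 * (lam / 550.0) ** (-0.2)            # toy chromatic seeing
img = toy_spectrogram(lam, sed, dx, dy, sigma)
print(img.shape, round(float(img.sum()), 2))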
To obtain the S_p(λ) spectra, a process to un-stack
the monochromatic images spread over the image by the disperser is needed
(Figure <ref> (c)). This in turn requires that
such ingredients as the PSF ϕ_p(r⃗, λ) and the dispersion
relation Δ⃗_p(λ) are sufficiently well known, either a priori, either thanks to specific data analysis, or directly fitted on data. The hardest point is usually the determination of the PSF kernel as a function of wavelength. Indeed, as illustrated in Figures <ref> and <ref>, in the general case the PSF is blurred and defocused. Using a simple Moffat profile to model the PSF is suitable as long as the defocus is small; we did this approximation along this paper and we leave the general defocus case for future work. Nevertheless, we studied and discussed two cases: the use of a standard grating and the use of an holographic disperser that limits the defocus.
In this paper we describe our implementation of such a process in the form of
the code <cit.> (see Section <ref>). We also show
how we tested it on simulations and data, as well as showing how it could be
used in order to ingest lacking modelling information from specific data analysis.
§.§ Spectractor
We call Spectractor[<https://github.com/LSSTDESC/Spectractor>]^,[<https://spectractor.readthedocs.io>]
the computer suite we wrote to analyse the future AuxTel images, as well as the
images obtained on the Cerro Tololo Inter-American Observatory (CTIO) 0.9m
telescope. It was trained on CTIO data but with the purpose to be easily configurable for slitless spectrophotometry with other telescopes. The main steps, inputs and outputs of the extraction part are illustrated in Figure <ref> and will be described in details in Section <ref>. To help the reading of the following, here is a summary of the main steps:
Order 0 centroid:
The main inputs of are a pre-processed image (overscan subtracted,
debiased, spatial flat fielded) obtained with a slitless spectrograph, and a
configuration file setting the main geometrical and spectrographic properties of
the instrument (D_CCD, N, telescope diameter, pixel size, the PSF model
etc.) At the time of writing this paper, the PSF models implemented are either a
Gaussian profile, a Moffat profile or a Moffat minus a Gaussian profile. Further
more detailed PSF models are planned to describe more accurately the AuxTel data
as it is analysed. We are in the situation where the centroid of the zeroth
order is contained in the image. We thus implemented a search procedure to
locate it, which sets the origin of the spectrum and the zero of the
wavelength scale.
Rotation:
Considering the geometry of the spectrograph and the dispersion properties of
the grating, the spectrogram is cropped from the image and de-rotated to have
the dispersion axis along the horizontal axis of the cropped image. Note that
the rotation angle is fitted later in the full model without explicitly rotating
the image.
Geometric wavelength calibration:
On that image, a first fit of a 1D sliced PSF model transverse to the
dispersion axis is performed, with PSF shape parameters represented by a
polynomial function [A polynomial function of the fourth order is
sufficient to absorb the main chromatic variations of the PSF shape.] as a
function of the distance to the order 0.
PSF deconvolution:
The procedure is continued by a deconvolution that uses a 2D PSF model and the 1D result as a
prior to regularize it.
Wavelength calibration:
A first wavelength calibration is performed using the detection of the principal
absorption or emission lines in the extracted spectrum (astrophysical or
telluric lines).
Flux calibration: the spectrum flux in ADU is converted into flux density units using the telescope collecting area, CCD gain and exposure time.
Full forward model:
Finally, given all these prior ingredients, a full forward model is initiated
on the preprocessed[For future developments, one
could model directly the unpreprocessed
exposure, for instance introducing the chromatic flat fields directly in
the forward model.]
but not rotated exposure using a 2D PSF model, and a model for the Atmospheric
Differential Refraction (ADR)[ADR is also called Differential Chromatic Refraction (DCR).]. The PSF shape parameters as well as the
wavelength calibration are refitted in the process. The main output is a
calibrated spectrum in wavelength and amplitude, but Spectractor also returns
a host of useful fitted parameters, like the PSF shape chromaticity or the D_CCD
distance, to perform extraction quality analysis and in the end
to improve the forward modelling or its initialisation.
Note that all the steps but the last are implemented in order to provide the
required ingredients to the full forward models, and are thus completely
contingent to the availability of understanding of the instrument. This
will again be addressed later on when we discuss how to obtain the optical
transmission of the instrument.
§.§ Data examples
While we use the natural ability of the forward model implementation to easily
provide simulations to validate the code, we also present the use of
on real data.
§.§.§ CTIO data
In order to test the slitless spectroscopy approach, in particular using a
holographic disperser, we benefited of a run of 17 nights in May-June 2017 at the
Cerro Tololo Inter-American Observatory (CTIO) 0.9m Cassegrain telescope
(f/13.7, scale at focal plane ≈60 μm per arcsecond). This telescope is equipped
with a cooled Tek2K CCD device of 2048× 2046
pixels, read by four amplifiers[<https://noirlab.edu/science/programs/ctio/instruments/Tek2K>]. Two filter wheels are installed. The first one
in the light path was used to insert broad band filters, while the second one
hold different dispersers.
While many gratings were tried, in this paper we focused on two of them: a
Thorlabs blazed grating (300 lines/mm) ref. GT50-03 and an amplitude holographic
optical element (around 350 lines/mm) especially designed for this
telescope. This hologram is fully described and analysed thanks to the CTIO
data in <cit.>. Its main advantage is that the defocusing described in
Figure <ref> is very limited, which allowed us to model its
chromatic PSF with simpler mathematical models.
By using those dispersers in the upstream filter wheel, we readily transformed the
CTIO 0.9 telescope into a spectrophotometric instrument with a resolution of about
150 – 200 <cit.>.
Figure <ref> shows an example of the data obtained: The
dispersion axes are nearly horizontal along the x axis of the CCD, and for
optimal focusing of the amplitude hologram the target star was placed around pixel coordinates
(750,700). The spectrum covers two amplifiers. Field stars are present and
the sky background is also dispersed (brighter in the middle).
Dome flats were taken with different filters, and we used a red one to flatten
our exposures (λ > 715 nm). Combined bias frames were taken at the beginning of each night for bias subtraction. We
made sure that the meta data contains informations about the CCD properties
and from the on-site meteorological station.
We mainly analysed the performances of the holographic element we brought during
this run. Fortunately, we had one very good night on 2017 May 30th with very
stable conditions in terms of temperature and seeing, which we can exploit to
estimate atmospheric transmission. During that night we essentially monitored the
CALSPEC[<https://www.stsci.edu/hst/instrumentation/reference-data-for-calibration-and-tools/astronomical-catalogs/calspec>] <cit.> star HD111980 with an amplitude hologram. The main characteristics of
those data are
summarised in Table <ref>.
§.§.§ Simulations
To test pipeline, we use the full forward model for CTIO spectrograms (see
section <ref>) to simulate observations of CALSPEC stars (in particular
HD111980).
The simulation used in Section <ref> shares the same known
characteristics as the real data image presented in Table <ref>, but with variations concerning unknown parameters like the PSF
model, the amount of second order diffraction contamination and the atmospheric
parameters.
For simulations, a 2D Moffat circular
PSF kernel ϕ(x,y, λ) is chosen.
To model the widening of the PSF due to defocusing or chromatic seeing effects, shape parameters γ and α
evolve with wavelength as polynomial functions of order n_PSF:
ϕ(x, y |r⃗_c, P⃗) = A[1+(\frac{x-x_c}{γ(z(x))})^2+(\frac{y-y_c}{γ(z(x))})^2]^{-α(z(x))}
A = \frac{α-1}{πγ^2} with α > 1,
γ(z) = ∑_i=0^n_PSFγ_i L_i(z), α(z) = ∑_i=0^n_PSFα_i L_i(z).
The integral of this PSF kernel is exactly A,
and its centre is at r⃗_c = (x_c,y_c). The PSF shape parameters
(γ(x), α(x)) are themselves sets of polynomial
coefficients γ_i and α_i respectively. The L_i(x) functions are
the order i Legendre polynomials. Let's call x_min and x_max the left and right pixel
positions of the spectrogram edges on the x axis. The parameter z∈[-1,1] is rescaled
proportionally on the desired pixel range
[x_min, x_max], set to encompass roughly the wavelength
range [350, 1100] with the formula
z(x) = \frac{x-(x_max+x_min)/2}{(x_max-x_min)/2}.
Parametrisation with Legendre
polynomials has the advantage of giving equal weight to all polynomial
coefficients during χ^2 minimisation, whatever the degree of the polynomial
functions. The chosen n_PSF, γ_i and α_i values are quoted for each simulation. The simulation suite is fully available in the code.
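For reference, a minimal implementation of this PSF parametrisation could look as follows (a sketch, not the Spectractor code itself); the shape coefficient values below are arbitrary examples.
import numpy as np
from numpy.polynomial import legendre

def moffat_psf(x, y, xc, yc, gamma, alpha):
    # 2D circular Moffat kernel with unit analytical integral (requires alpha > 1)
    A = (alpha - 1) / (np.pi * gamma ** 2)
    return A * (1 + ((x - xc) / gamma) ** 2 + ((y - yc) / gamma) ** 2) ** (-alpha)

def psf_shape(x, x_min, x_max, gamma_coeffs, alpha_coeffs):
    # gamma(z) and alpha(z) as Legendre series of the rescaled coordinate z in [-1, 1]
    z = (x - 0.5 * (x_max + x_min)) / (0.5 * (x_max - x_min))
    return legendre.legval(z, gamma_coeffs), legendre.legval(z, alpha_coeffs)

# arbitrary example coefficients for a PSF that widens along the dispersion axis
gamma_coeffs, alpha_coeffs = [4.0, 1.0, 0.5], [2.5, 0.0, 0.0]
y, x = np.mgrid[0:41, 0:41]
g, a = psf_shape(600.0, 0.0, 800.0, gamma_coeffs, alpha_coeffs)
kernel = moffat_psf(x, y, 20.0, 20.0, g, a)
print(f"gamma={g:.2f}, alpha={a:.2f}, sampled integral={kernel.sum():.3f}")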
§ FORWARD MODEL EXTRACTION OF A SPECTRUM
In this section we follow and describe in detail the steps of the pipeline in order to get the first order spectrum S_1(λ) of a point
source, calibrated in flux and wavelength. Those steps cover the first line of
orange boxes of figure <ref>, and leave to a later
discussion the full forward model. The process starts from a preprocessed image containing the 2D image formed by a star observed through a slitless spectrograph that we call a spectrogram.
§.§ Uncertainty evaluation
Once the exposure is pre-processed, if not given we must start to build the
uncertainty map of the exposure. The uncertainties on the pixel values are
estimated using the CCD gain g(x,y) (in electrons/ADU) and its read-out noise
σ_ro (in electrons). The exposure
unit is considered to be in ADU at that point. The uncertainty σ(x,y) on
the pixel value I(x,y) is then:
σ(x,y) = \frac{1}{g(x,y)}√(σ_ro^2 + g(x,y)I(x,y))
as we assume that the number of photoelectrons in each pixel follows a Poisson
distribution. The uncertainty map can be inverted to get a weight map, on which
we can superimpose a mask to remove bad pixels. Assuming no correlations between
pixels, we then assemble a weight matrix 𝐖 as a diagonal matrix of
the inverted pixel variances.
The computed noise variance uses the pixel value itself, which incorporates the noise fluctuation. Using weights that incorporate the fluctuations is prone to introduce a bias on the recovered quantities, because a positive fluctuation decreases the weight while a negative fluctuation increases it. The bias is smaller for high signal-to-noise spectrograms. For this reason, in simulations and data we only consider the high signal-to-noise case and thus avoid the bias due to this uncertainty evaluation (even at the spectrum edges where the flux is low). The low signal-to-noise case is postponed to future developments.
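A minimal sketch of this uncertainty and weight computation (illustrative, not the actual implementation; the gain and read-out noise values are arbitrary):
import numpy as np

def pixel_uncertainty(image_adu, gain, sigma_ro):
    # sigma(x, y) = sqrt(sigma_ro^2 + gain * I(x, y)) / gain, in ADU;
    # negative sky-subtracted values are clipped before the Poisson term as a practical guard
    return np.sqrt(sigma_ro ** 2 + gain * np.clip(image_adu, 0, None)) / gain

def weights(image_adu, gain, sigma_ro, bad_pixel_mask=None):
    # diagonal of the weight matrix W (inverse pixel variances), flattened like the data vector
    w = 1.0 / pixel_uncertainty(image_adu, gain, sigma_ro).ravel() ** 2
    if bad_pixel_mask is not None:
        w[bad_pixel_mask.ravel()] = 0.0   # masked pixels do not contribute
    return w

img = np.array([[100.0, 2000.0], [50.0, 0.0]])   # ADU; arbitrary example values
print(weights(img, gain=3.0, sigma_ro=8.0))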
§.§ Zeroth order centroid
Since the zeroth order is included in the observed images, we
use its centroid to set the zero of the wavelength scale. Therefore, an error in the
determination of the zeroth order centroid (x_0, y_0) results in a systematic
shift of the wavelength calibration.
A subset of different situations encountered in the CTIO data is presented in
Figure <ref>. To get a high
signal-to-noise in the spectrogram, the exposure time is set at such a value
that the zeroth order is saturated, causing bleeding spikes. If the exposure comes with a World
Coordinate System (WCS), then one can get the precise position of the star on
the CCD. However, in most images we get from CTIO, the WCS associated with the
images is wrong because of an
imprecise mount calibration.
While not directly part of the extraction
procedure, we present the algorithm we implemented to find the zeroth order
centroid in such difficult cases as an useful example of how to improve the
starting point from preliminary analysis of the data.
First, the image is cropped around the supposed position of the zeroth order
image, as close as needed so that the targeted star is the brightest
object. Then the cropped image is projected onto the x and y axis: the
maximum of the two projections sets a new approximation of the order 0
position. From there, saturated pixels are detected and a null weight is
affected to them. A 2D second order polynomial background with a 3σ
outlier removal is fitted and subtracted from the cropped image. At last, a 2D
circular Moffat profile is fitted on the weighted pixels: only the crown of
non-saturated pixels counts and locks the fit. This last step is then repeated a
second time on a new cropped image of width and height divided by two, centred
in the last fitted centroid. This step is illustrated
Figure <ref>. We tested this process on many images,
most of them pathological, and visually confirmed that the accuracy of this
algorithm is finer than the pixel size on CTIO images.
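The centroid search can be sketched as follows (a simplified illustration of the procedure described above, not the actual code; the background model is reduced to a constant and the saturation level and starting values are arbitrary):
import numpy as np
from scipy import optimize

def find_order0_centroid(image, x_guess, y_guess, half_size=50, saturation=60000.0):
    # Crop around the guess, refine with the x/y projections, give null weight to
    # saturated pixels, subtract a crude background and fit a circular Moffat profile.
    sub = image[y_guess - half_size:y_guess + half_size, x_guess - half_size:x_guess + half_size]
    x0 = float(np.argmax(sub.sum(axis=0)))
    y0 = float(np.argmax(sub.sum(axis=1)))
    good = sub < saturation
    bkg = np.median(sub[good])          # the pipeline uses a 2D polynomial with sigma clipping
    y, x = np.mgrid[0:sub.shape[0], 0:sub.shape[1]]

    def residuals(p):
        amp, xc, yc, gamma, alpha = p
        model = amp * (1 + ((x - xc) / gamma) ** 2 + ((y - yc) / gamma) ** 2) ** (-alpha)
        return (sub - bkg - model)[good]

    p0 = [float(sub.max() - bkg), x0, y0, 3.0, 2.0]
    fit = optimize.least_squares(residuals, p0)
    return x_guess - half_size + fit.x[1], y_guess - half_size + fit.x[2]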
Another issue we had to cope with is that when adding a disperser on the
telescope beam path, the WCS associated with the image can be shifted or
distorted. In the case of crowded fields, faint objects or a very pathological
order 0, we developed a method to estimate the WCS using the field stars, the
library
[<http://astrometry.net/doc/readme.html>]
<cit.> and the Gaia DR2 catalogue. The process is described in
appendix <ref>, but not used by default to deal with the CALSPEC
stars we observed. However, when comparing the centroids obtained with both
methods, their difference has a scatter of 0.15″ (around one half of a pixel) on CTIO
data, which confirms that both methods are accurate at least at the 0.15″
level (Figure <ref>).
This accuracy is converted into a prior on the order 0 shift δ u_0 used
during the wavelength calibration process to account for a mistake in the order
0 centroid evaluation.
§.§ Rotation
Unless special care has been taken to that end when mounting the disperser in
the filter wheel, the spectrogram image can be tilted (possibly intentionally) with respect to the x
axis, with an angle which can be poorly known depending on the mounting of the
disperser into the telescope beam.
This dispersion direction can be either known a priori, or fitted in the full
forward model step. Since we are in the later case, we need a supplementary step to estimate a
good starting point for this angle.
In addition, in the following we found it extremely useful to have the
spectrogram to be roughly aligned with the x axis of the exposure, with the
wavelength increasing with x. To that end exposure must be flipped and rotated
accordingly before continuing the process of extracting more information useful
both to diagnose the data quality, and to refine the starting point of the full
forward model.
The spectrogram of sources sufficiently continuous in wavelength, like the
thermal emission component of stars for example, displays filament shapes on the
2D image that can be detected using a Hessian analysis inspired by the one
developed in <cit.>. The advantage of this technique is that it
comes with an analytical expression of the angle of the detected shape with
respect to the horizontal or vertical axis of the CCD grid.
The Hessian matrix H(x,y) of the image is computed for each pixel value I(x,y) as:
H(x,y) = [ H_xx H_xy; H_xy H_yy ] = [ ∂^2 I∂ x^2 ∂^2 I∂ x ∂ y; ∂^2 I∂ x ∂ y ∂^2 I∂ y^2 ].
The two eigenvalues of the Hessian matrix H are calculated as
λ_±(x,y) = \frac{1}{2}(H_xx + H_yy± h )
with h = √((H_xx-H_yy)^2 + 4H_xy^2). The eigenvalue λ_- is
associated with the eigenvector directed along the spectrum dispersion axis while
λ_+ corresponds to the eigenvector with the largest change in intensity
value i.e., transverse to the dispersion axis. The orientation angle of these
eigenvectors with respect to the x axis can be analytically computed. For
instance for λ_- we have:
α(x,y) = arctan( \frac{H_yy - H_xx - h}{2 H_xy})
= \frac{1}{2}arctan( \frac{2 H_xy}{H_xx - H_yy})
with the trigonometric formula tan 2α = 2 tanα / (1- tan^2
α). After a selection of the 5% pixels with the highest λ_- value
above a reasonable threshold, the median α of the remaining α(x,y)
values gives the orientation of the spectrum with respect to the x axis. A
linear fit can also be performed across the selected pixels and the slope gives
an angle very close to the one estimated with the median of the angle
values. This process is illustrated in Figure <ref>.
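A compact sketch of this Hessian-based angle estimation (illustrative only; it uses the equivalent ½ arctan form of the angle quoted above, and the selection quantile is an arbitrary stand-in for the 5% cut):
import numpy as np

def dispersion_angle(image, quantile=0.95):
    # Hessian from finite differences, then the orientation of the lambda_- eigenvector,
    # keeping only the pixels with the strongest filament response
    gy, gx = np.gradient(image.astype(float))
    hxx = np.gradient(gx, axis=1)
    hxy = np.gradient(gx, axis=0)
    hyy = np.gradient(gy, axis=0)
    h = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
    lam_minus = 0.5 * (hxx + hyy - h)
    alpha = 0.5 * np.arctan2(2 * hxy, hxx - hyy)       # angle of the lambda_- eigenvector
    selected = np.abs(lam_minus) >= np.quantile(np.abs(lam_minus), quantile)
    return np.degrees(np.median(alpha[selected]))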
Note that because of the Atmospheric Differential Refraction (ADR) the
spectrogram can be sheared transversally to the dispersion axis. The ADR shear
is of the order of 2″ across the visible spectrum, which is about 5 pixels at
CTIO while the spectrogram is ≈ 700 pixels long and it is thus neglected
at this step of this analysis. On the other hand it will be fully accounted for
in the full forward model.
§.§ Background estimation
The background of the image must be carefully subtracted to avoid bias in the
estimation of the spectrum flux and its PSF. However, due to optical vignetting
it can be non-flat, and it is dispersed. It can also contain additional field stars
and their corresponding spectrograms.
To estimate the spectrogram background, we first select two lateral bands above and
below the spectrogram region, of the same length and width
N_y^(bgd) (see Figure <ref>). First we mask the sources detected above a 3σ
threshold. Then we divide the two lateral regions in a few square boxes of size
(N_y^(bgd)/2, N_y^(bgd)/2). The box dimensions must be
larger than the typical size of a field star PSF or of a spectrogram width, but
small enough to account for the spatial variations of the background.
To get a first estimate of the background, we use a python wrapper of the
[<http://www.astromatic.net/software/sextractor>
] algorithm for background extraction which is roughly based on the bilinear
interpolation of the median value inside the boxes, after a sigma-clipping
rejection of the outlier pixels (more details in
). This process is illustrated in
Figure <ref>.
In a second step, we analyse the distribution of the background residuals
normalised with their uncertainties in the two regions, with the sources being
masked. If the histogram of the residuals normalised by their errors (aka
the “pull distribution”) departs from a distribution of zero mean and
standard deviation equal to 1, we refine the background estimate by dividing the box size by 2 and the process is
continued iteratively until the mean is below 1 and the standard deviation is
below 2, or until the box size goes below a threshold of 5 pixels.
At the end of this process, the estimated background is interpolated between the
two lateral bands to get the background below the spectrogram, and finally
subtracted. We call B(r⃗) the background map. The background RMS is
also evaluated by the background estimator and is quadratically added as a background uncertainty to
the error budget of the spectrogram.
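The first, SExtractor-like step of this background estimation can be sketched with the sep package, assuming it is an acceptable stand-in for the wrapper used here; the lateral-band selection, the pull-distribution iteration and the interpolation under the spectrogram are omitted in this sketch.
import numpy as np
import sep   # python wrapper around the SExtractor background algorithm

def lateral_background(band, box_size=20, mask=None):
    # mesh of median boxes with sigma clipping, then a smooth interpolation;
    # mask can flag sources detected above ~3 sigma before the estimate
    data = np.ascontiguousarray(band, dtype=np.float64)
    bkg = sep.Background(data, mask=mask, bw=box_size, bh=box_size)
    return bkg.back(), bkg.rms()   # background map B(r) and its RMS map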
At this stage of the pipeline, with α and r⃗_0 given and using a
first geometrical wavelength calibration to define roughly the left and right
margins of the spectrogram, we can crop the exposure to extract a background
subtracted spectrogram. This is extremely useful for diagnosis purposes, and can also be directly studied with an atmospheric forward model (see Section <ref>).
§.§ Spectrum first extraction
The next step of the pipeline we implemented is devoted to the pursuit of the
extraction of the first order spectrum S_1(λ) and the estimate of the
wavelength dependent PSF. The extraction of the spectral information S_1(λ) from the spectrogram
image is a delicate process. The intuitive and traditional way to extract slitless spectra at this stage of the pipeline is to sum over cross-dispersion direction, eventually with weights, to form what we call a cross-dispersion spectrum. <cit.> and <cit.> present an optimal method to achieve this. However, this method leads to distorted spectra in case of a wavelength dependent PSF (neighbour wavelengths contaminate each other on the sensor), which is not an issue for spectroscopy but is problematic for spectrophotometry. In the following, we describe first a method to deconvolve the spectrum for the PSF (Section <ref>). The products of that process are then very useful to inform the full forward model finalizing the unbiased extraction of the spectrum (Section <ref>).
§.§.§ Spectrogram first order model
To start with, we consider only the spectrogram of the first
diffraction order, potentially with the superposition of a second diffraction order, as
provided by the pipeline previous step. The cropped spectrogram has a shape
(N_x, N_y) pixels.
Inspired by equation (<ref>), we model the spectrogram as a
discrete stack of N_x 2D PSF realisations of amplitude A_i, separated by one
pixel along the x axis:
I⃗_1(r⃗ | A⃗, r⃗_c, P⃗) = ∑_i=0^N_x A_i ϕ(r⃗ | r⃗_c,i, P⃗_i)
with r⃗=(x,y) the vector of the pixel coordinates, A⃗ an amplitude
parameter vector along the dispersion axis of the spectrogram, ϕ(r⃗
| r⃗_c,i, P⃗_i) the 2D PSF kernel whose integral is
normalised to one. This kernel depends non-linearly on a shape parameter vector
P⃗_i and on a centroid position vector r⃗_c,i=(x_c,i,
y_c,i) where only the y_c,i coordinate is considered unknown. The
x_c,i coordinate is set directly to the pixel index i. This choice of
implementation can be discussed and changed, but it was found to be practical
since the PSF is then well sampled by the pixel grid. Yet, in theory one could
choose another sampling for x_c,i, to increase the speed of the
spectrum extraction or try to enhance the spectral resolution.
Somehow, the array of vectors r⃗_c is a sampled precursor of the
dispersion relation Δ⃗_1(λ) and the vector A is the flux
density S_1(λ) integrated within the pixels. If we index all the N_x N_y
spectrogram pixels as a long vector
Z⃗ = (ζ_1, ⋯, ζ_N_x N_y), the
equation <ref> takes a matricial form:
I⃗_1(Z⃗ | A⃗,r⃗_c, P⃗) =
𝐌(Z⃗ | r⃗_c, P⃗ ) A⃗
with
𝐌(Z⃗ | r⃗_c, P⃗ ) =
([ ϕ(ζ_1 | r⃗_c,1, P⃗_1) ⋯ ϕ(ζ_1 | r⃗_c,N_x, P⃗_N_x); ⋮ ⋱ ⋮; ϕ(ζ_N_x N_y | r⃗_c,1, P⃗_1) ⋯ ϕ(ζ_N_x N_y | r⃗_c,N_x, P⃗_N_x); ])
The (N_x N_y, N_x) matrix 𝐌 is called the design matrix.
This model of the spectrogram is designed to deconvolve the spectrum
S_1(λ) from the PSF. In principle, one can choose to sample it with PSF
kernels separated by arbitrary distances. However, if the PSF is correctly
sampled by the pixel grid, it is difficult to extract the spectrum with a
spatial resolution below the typical PSF width, and therefore below the pixel
size.
Note that at this stage we want to extract the spectrum from a spectrogram potentially contaminated by higher diffraction orders, as yielded
by the pipeline steps discussed above, or distorted by atmospheric differential refraction.
The final extraction using a full forward model that fully deals with these physical effects is the last step of the
pipeline, presented in section <ref>.
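A direct transcription of the design matrix defined above could look as follows (a sketch, not the implementation; in practice only pixels within a few PSF widths of each centroid are kept, which makes 𝐌 effectively sparse). The psf argument can be any kernel of unit integral, for instance the Moffat function sketched earlier.
import numpy as np

def build_design_matrix(psf, nx, ny, y_c, psf_params):
    # M has shape (Nx*Ny, Nx): column i is the flattened image of a unit-integral
    # PSF kernel centred at (x_c,i = i, y_c,i) with shape parameters psf_params[i]
    y, x = np.mgrid[0:ny, 0:nx]
    M = np.zeros((nx * ny, nx))
    for i in range(nx):
        M[:, i] = psf(x, y, float(i), y_c[i], *psf_params[i]).ravel()
    return M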
§.§.§ Preparation for deconvolution
To initialize the deconvolution with parameters close to the final best model, a
first PSF fit is performed transversally to the dispersion axis with 1D PSF
kernels on the rotated spectrogram image:
I⃗_1^(1D)(r⃗ | A⃗^(1D), r⃗_c, P⃗) = ∑_i=0^N_x A_i^(1D) ϕ^(1D)(r⃗ | r⃗_c,i, P⃗_i).
This procedure is done in two steps. The first step fits the 1D parameters
independently for each pixel column, applying a 5σ clipping to reject
field stars and other CCD defects. In the second step, a polynomial evolution of
the 1D PSF parameter vector along the dispersion axis is proposed. For
instance, a polynomial evolution of the y_c,i(x_c,i) positions can model
the transverse ADR and a polynomial evolution of the width of PSF can model a
defocusing effect. The polynomial coefficients are fitted using a
Gauss-Newton minimisation of a χ^2
(equation <ref>) for the non-linear parameters (see
appendix <ref>), alternating with a linear resolution for the
amplitude parameters as follows.
Assuming that we gather all the spectrogram pixel values in a long vector D⃗ (as Z⃗), we can model it as:
D⃗ = 𝐌(Z⃗ | r⃗_c, P⃗ ) A⃗ + ϵ⃗,
with ϵ⃗ a random noise vector; the χ^2 function to minimise is:
χ^2(A⃗ | P⃗)= (D⃗ - 𝐌 A⃗)^T 𝐖(D⃗ - 𝐌 A⃗)
with 𝐖 the weight matrix of dimension (N_xN_y, N_xN_y), inverse of the data covariance matrix (see
section <ref>). In most cases this matrix is diagonal as the
pixels are considered all independent. The minimum of equation
<ref> is reached for the set of amplitude parameters
Â⃗̂ given by:
Â⃗̂^(1D) = (𝐌^T 𝐖𝐌)^-1𝐌^T 𝐖D⃗.
The covariance matrix associated with the Â⃗̂ coefficients is
𝐂^(1D) = (𝐌^T 𝐖𝐌)^-1. At the end of this process, we get
a first guess of the r⃗_c and P⃗ parameters, and a first guess
of the amplitudes Â⃗̂^(1D) with their uncertainties σ⃗_A^(1D), which form what we call a transverse cross-spectrum.
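The linear step above amounts to a weighted least-squares solve; a minimal sketch (illustrative, assuming the diagonal weight matrix 𝐖 is stored as a vector w of inverse pixel variances):
import numpy as np

def fit_amplitudes(M, D, w):
    # A_hat = (M^T W M)^(-1) M^T W D, with W = diag(w); also returns the covariance matrix
    MtW = M.T * w                 # same as M.T @ np.diag(w), without building the full matrix
    cov = np.linalg.inv(MtW @ M)
    return cov @ (MtW @ D), cov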
The result of this extraction is illustrated on simulated data in
Figure <ref> left. For this simulation, we chose to ignore the second order diffraction and we used a wide PSF kernel with lots of chromatic
variations to enhance the visibility of the residuals for the 1D extraction:
(γ_0, γ_1, γ_2)=(10,2,5) and (α_0,α_1,
α_2)=(2,0,0). At first glance in the top panel, the model seems accurate
but looking at the residuals we see that this 1D model failed to capture the
true evolution of the PSF shape with wavelength. As expected, the residuals are
less dramatic with a thinner PSF, but remain measurable, demonstrating the need
to use a 2D PSF extraction method.
At this stage, we obtained a spectrum close to a cross-dispersion spectrum as in <cit.> and <cit.> but informed with a fitted PSF model having a smooth polynomial wavelength evolution.
§.§.§ PSF deconvolution
Having provided ourselves with this first 1D estimate of the spectrum, we can
resort to a more accurate 2D PSF modelling. When using 2D PSF kernels, the
latter linear regression method enters in the category of the deconvolution
problems or inverse problems. Since the spectral amplitude information is mixed
and diluted at a scale below the typical size of the PSF, the computation of the
Â⃗̂ vector sampled at the pixel scale inverting
equation <ref> using 2D PSF kernels
may lead to results far from the reality while giving a seemingly good
fit to data (low χ^2). Doing so, a common symptom is the alternation of
positive and negative values in Â⃗̂, or at least with large
variations, demonstrating that the problem is ill-posed. As we know by the
physics that well-sampled spectra are rather continuous and differentiable
functions, we enforce a regularisation method to smooth the resulting
Â⃗̂ vector.
A first fit to the data, without any prior on A⃗, using a 1D transverse
PSF model fitted independently to each column of data along the dispersion axis
thus yields a vector Â⃗̂^(1D) containing most of the spectral
flux, especially in the smooth parts of the spectrum, but lacking precision in
the rapidly varying parts.
Indeed, the most visible effect of the PSF is to smooth the absorption
lines and more generally to deform the spectral information where the spectral
energy density evolves rapidly (for instance in the blue part of the
spectrum), while conserving the total flux. It provides nonetheless useful information that we
use as a prior A⃗_0 on A⃗ when performing a fit using a 2D PSF
kernel with a Tikhonov regularisation.
The Tikhonov regularisation method proposes to add a regularisation quantity to the χ^2 and minimise a new cost function:
ℰ(A⃗ | r⃗_c, P⃗) = (D⃗ - 𝐌 A⃗)^T 𝐖(D⃗ - 𝐌 A⃗)
+ r (A⃗ - A⃗_0 )^T 𝐐 (A⃗ - A⃗_0)
= χ^2(A⃗ | r⃗_c, P⃗) + r χ^2_ pen(A⃗ | A⃗_0),
A⃗_0 = Â⃗̂^(1D)
where 𝐐 is a weight matrix. The last term favours Â⃗̂ to
be close to prior vector A⃗_0, with a positive regularisation
hyper-parameter r.
Minimising this ℰ(A⃗ | r⃗_c, P⃗) function is still a linear regression for the A⃗ parameters, whose optimal value is now:
Â⃗̂ = (𝐌^T 𝐖𝐌 + r𝐐)^-1 (𝐌^T 𝐖D⃗ +r𝐐A⃗_0 )
The covariance matrix associated with Â⃗̂ is directly:
𝐂 = (𝐌^T 𝐖𝐌 + r𝐐)^-1
We tested different 𝐐 matrices on CTIO simulations and the one that
gives the most satisfying results when comparing Â⃗̂ to the true
amplitudes uses the Laplacian operator 𝐋:
𝐋 = [ -1 1 0 0 ⋯ 0 0; 1 -2 1 0 ⋯ 0 0; 0 1 -2 1 ⋯ 0 0; ⋮ ⋱ ⋱ ⋱ ⋮ ⋮; 0 0 0 0 ⋯ -2 1; 0 0 0 0 ⋯ 1 -1; ]
𝐐 = 𝐋^T 𝐔^T 𝐔𝐋,
with 𝐔^T 𝐔 the matrix proposed in
equation <ref>:
𝐔^T 𝐔 = [ 1/σ^2_A_1D^(1) 0 ⋯ 0; 0 1/σ^2_A_1D^(2) ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ 1/σ^2_A_1D^(N_x); ].
The total variation regularisation is known to be able to retrieve very sharp features (like steps or edges) when deconvolving an image. The Laplacian regularisation cannot do the same, but discontinuities are not expected in the physical spectra we are observing. Moreover, the norm-2 regularisation offers an analytical solution, while a norm-1 regularisation requires an iterative minimisation process (more details in appendix <ref>).
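As an illustration, a minimal numpy sketch of this regularised solve, building 𝐐 from the Laplacian operator and the 1D prior uncertainties defined above (the names are ours and purely illustrative), could read:

import numpy as np

def tikhonov_solve(M, D, W, A0, sigma_A0, r):
    # Tikhonov-regularised amplitude solve with a Laplacian smoothness operator.
    # A0       : prior amplitude vector (the 1D transverse cross-spectrum)
    # sigma_A0 : its uncertainties, entering the U^T U weighting
    # r        : regularisation hyper-parameter
    N = A0.size
    L = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
    L[0, 0] = L[-1, -1] = -1.0        # same edge convention as the matrix above
    U = np.diag(1.0 / sigma_A0)
    Q = L.T @ U.T @ U @ L
    MtW = M.T @ W
    C = np.linalg.inv(MtW @ M + r * Q)
    A_hat = C @ (MtW @ D + r * Q @ A0)
    return A_hat, C, Q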
As for the 1D transverse estimate, the deconvolution fit is performed using a
Gauss-Newton minimisation of ℰ(A⃗ | P⃗) for the
non-linear P⃗ parameters (see appendix <ref>), and a
linear solve for the A⃗ parameters (see section <ref>). The Gauss-Newton minimisation is repeated 3 times with a clipping rejection of bad pixels to remove field stars or CCD defects that can pull the final parameters in undesired directions. At the end of this process, we get the measured r̂⃗̂_c and P̂⃗̂ parameters, and a measurement of the amplitudes Â⃗̂(r) with their covariance matrix 𝐂(r), which form what we can call a spectrum, possibly contaminated by the second diffraction order. To accelerate the computation, regions farther than two times the PSF FWHM from the centroids are masked and set to zero with null weights (grey regions in Figure <ref>). For this step, a default value of the r regularisation parameter has been chosen, without fully exploring how it could be optimised.
The PSF deconvolution problem was tackled in another way in <cit.> for slitless spectroscopy. Instead of using a PSF model, the mathematical model is inverted using multiple exposures of the same spectrum, ideally taken at different orientations, dispersion directions, or dithered positions. With this amount of data the problem becomes invertible, but it still needs a damping hyper-parameter equivalent to the r introduced in equation <ref>. In <cit.>, the tuning of this hyper-parameter is performed manually by finding the breaking point of a specific L-curve (see ). This method is mathematically close to the one we use (described in Section <ref>), as it leads to an equilibrium between the information coming from the prior and from the data.
§.§.§ Optimisation of the regularisation
How much information can be extracted from the data? It is reasonable to assume that, with a single spectrogram, it is hopeless to recover information at spatial scales below the typical width of the convolution kernel. How, then, can the optimal hyper-parameter r be found?
The optimisation of the regularisation parameter r is performed via the study of the resolution operator
𝐑 = 𝐈 - r 𝐂𝐐
with 𝐈 the identity matrix. A fruitful interpretation of the
𝐑 operator is given in <cit.> with
Tr 𝐈 = Tr 𝐑 + Tr(r 𝐂𝐐) ⇔
[# parameters] = [# parameters resolved by data]
+ [# parameters resolved by prior],
where the trace of the resolution operator gives the effective number of
degrees of freedom that can be extracted from the data for a given amount of
prior information. We set
N_dof = Tr 𝐑.
The optimal r parameter is found minimizing the following G(r) function, which behaves like a reduced χ^2:
G(r) = χ^2(Â⃗̂ | r̂⃗̂_c,P̂⃗̂)/(N_x N_y - N_dof)^2.
This method, known as Generalised Cross-Validation (GCV), is extensively presented in e.g., <cit.>. It can be demonstrated that the minimum of G(r) corresponds to the minimum of the distance |𝐌Â⃗̂ - 𝐌A⃗_truth|^2, where A⃗_truth is the true amplitude vector hidden in the data.
We tested different ways to implement the optimisation of the regularisation process in the pipeline. The one that gives the most efficient result (in terms of speed and bias of the final result) is to set a reasonable default regularisation parameter for the fitting procedure of the amplitude Â⃗̂ and PSF r̂⃗̂_c, P̂⃗̂ parameters, and finally to find the minimum of the G(r) function. We observed on simulations that, whatever r hyper-parameter is chosen at the beginning, the process reconstructs an unbiased spectrum at the end of the ℰ(A⃗ | r⃗_c,P⃗) minimisation. The level of regularisation of the solution can thus be set a posteriori by finding the optimal r at the minimum of the G(r) function.
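Schematically, and reusing the matrices introduced above, this a posteriori optimisation reduces to a one-dimensional scan of G(r); a minimal sketch could be:

import numpy as np

def gcv_score(M, D, W, A0, Q, r):
    # G(r) and the effective number of degrees of freedom N_dof = Tr(R)
    # for one value of the regularisation hyper-parameter r.
    MtW = M.T @ W
    C = np.linalg.inv(MtW @ M + r * Q)
    A_hat = C @ (MtW @ D + r * Q @ A0)
    resid = D - M @ A_hat
    chi2 = resid @ W @ resid
    n_dof = A_hat.size - r * np.trace(C @ Q)      # Tr(I - r C Q)
    return chi2 / (D.size - n_dof) ** 2, n_dof

# a simple grid scan; the minimum of G(r) gives the optimal r
# rs = np.logspace(-3, 3, 25)
# r_opt = rs[int(np.argmin([gcv_score(M, D, W, A0, Q, r)[0] for r in rs]))]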
The result of a 2D deconvolution process is presented in the right panel of Figure <ref> for a simulation with a wide PSF kernel but without second diffraction order, to stress the benefit of the deconvolution. The residual map between the best-fitting model and the simulated spectrogram shows that the 1D transverse fit cannot correctly extract the spectrum from the spectrogram image (Figure <ref> left), while the 2D deconvolution ends with a nearly unstructured residual map following the expected Gaussian distribution (Figure <ref> right). As our model is informed by a PSF model, we are able to extract spectra from a single exposure involving rather small matrices compared with <cit.>. This allows us to tune the r hyper-parameter automatically in a few seconds on a standard laptop.
If the spectrogram is not fully contained in the sensor area, the spectrum exhibits a discontinuity which makes the norm-2 regularisation fail (second derivative from the Laplacian operator is undefined). Then, for a given instrumental PSF, it is better to use a more dispersive grating to feed the deconvolution with more data and increase the wavelength resolution, but the spectrogram must fit inside the sensor area to use regularisation techniques.
§.§.§ The spectrophotometric uncertainty principle
The regularity of the deconvolved solution depends on the hyper-parameter
r. The optimal r parameter is chosen as the minimum of the G(r) function
represented on the top panel of Figure <ref>
for a simple simulation with n_PSF=0, γ_0=5, α_0=3 (and
same characteristics as in Table <ref>):
ϕ(x, y| r⃗_c, P⃗) = α_0-1/πγ_0^2[1+(x-x_c/γ_0)^2+(y-y_c/γ_0)^2]^-α_0.
The second panel displays the χ^2(Â⃗̂(r) | r̂⃗̂_c,P̂⃗̂) function, which shows that the optimal A⃗(r) solution is not the one that best fits the data (minimum χ^2) but the one that makes a compromise with the regularisation scheme (modelled by the χ^2_pen(A⃗ | A⃗_0) penalty term). The lower panel illustrates that the effective number of amplitude parameters constrained by the data with the optimal regularisation hyper-parameter is approximately 180. Note that for this simulation, around 680 amplitude parameters were fitted in a spectrogram built with a constant PSF FWHM of around 5.5 pixels. Intuitively, we can conjecture that there must exist an optimal relationship between the typical width of the PSF kernel and the amount of information that can be extracted from the data:
[PSF width] ×[# effective degrees of freedom]
≈[# parameters].
If the number of searched parameters is too high, the problem
becomes ill-posed. If it is too low, then one should compensate with a wide PSF
kernel. We thus postulate that we should have a spectrophotometric uncertainty
principle of the type:
σ_PSF×N_dof/N_x≳ h
where the optimum is reached at equality and h is a number with the same units as σ_PSF, the width of the PSF kernel. This formula gives the minimum number of degrees of freedom needed to describe the data, given a PSF width.
We tested the deconvolution and regularisation process on a large number of
simulated spectrograms of constant width (n_PSF=0), but without
second diffraction order, for the sake of simplicity without any loss of
generality. We tried a Gaussian PSF kernel, as well as a Moffat kernel with two
different exponents (α_0=2 and 3) and various γ_0 values. The
results are summarised in Figure <ref>. For the three models, σ_PSF was chosen to be half of the PSF
FWHM. The figure shows that, for any PSF kernel, the measured number of degrees of freedom N_dof/N_x scales as the inverse of the PSF width. The product σ_PSF× N_dof/N_x shows a definite trend: it appears to be asymptotically constant and equal to a number h close to 0.8 pixel when the PSF size is significantly larger than a few pixels. This h value varies with the signal-to-noise ratio of the spectrogram, but for a given situation it sets the relationship between σ_PSF and N_dof. The flatness of this relationship shows that our procedure is consistent with considering as optimal the extraction of information at the scale of the PSF kernel.
It is also noteworthy that this relation could be exploited to accelerate the computation of the PSF cubes: instead of computing a PSF kernel for each pixel column i, we could compute one every N_x/N_dof pixels. As computation time was not an issue in this paper, we leave this investigation for a future project where computation speed counts.
§.§ Wavelength calibration
Despite the astigmatism of the system, to first approximation the slitless spectrograph obeys the usual grating formula (Eq. <ref>, see e.g., ). Using the notations of Figure <ref>, the grating formula can be inverted to find the relation between the u coordinate along the dispersion axis and λ.
First, we assume that the true 0th order position is at u_0 along the
dispersion axis, but that a misfit of its centroid (see
Section <ref>) can shift the position by a quantity δ
u_0^(fit).
The ADR also slightly spreads the order 0
image along the local constant azimuth line in a deterministic way depending on
the airmass and the spectrum of the source. It also depends on the
parallactic and camera angles, the atmosphere temperature, pressure and humidity. It is incorporated in the wavelength calibration process as a wavelength
dependent shift δ
u^(ADR)(λ) of the PSF centroid position along the dispersion axis with respect to a reference wavelength λ_ref: δ
u^(ADR)(λ_ref) = 0.
We model this effect using the NIST metrology
toolbox[<https://emtoolbox.nist.gov/Wavelength/Documentation.asp>]
recommendation of using a modified version of the Edlén
equation <cit.> by Birch and Downs
<cit.> (see appendix <ref>).
The distance d(λ) between an abscissa of the spectrogram and the zeroth order then reads:
δ u(λ) = δ u_0^(fit) + δ u^(ADR)(λ),
u_0 = D_CCD tanθ_0,
d(λ) = u(λ) - u_0 - δ u(λ),
so:
d(λ | D_CCD, δ u_0^(fit)) = D_CCD [tan(arcsin(p λ + sinθ_0)) - tanθ_0] - δ u^(ADR)(λ) - δ u_0^(fit),
with D_CCD the distance between the disperser and the CCD.
The bijection between the position on the CCD and the wavelength is thus parametrised by two unknown parameters, D_CCD and δ u_0^(fit), that need to be fitted.
As a starting point, we compute a first wavelength array λ_0 from the array of distances d to the order 0 along the dispersion axis, assuming δ u_0=0 and given a prior value of D_CCD. To get a wavelength array given D_CCD and δ u_0, equation <ref> is inverted as
λ = 1/p [sin(arctan((d + δ u^(ADR)(λ) + δ u_0^(fit))/D_CCD)) - sinθ_0 ].
To remove the ambiguity with the ADR, which also depends on wavelength, we iterate this computation 5 times, starting from λ_0 and updating λ. We checked that this is enough to converge toward a stable wavelength solution. At this stage, we have associated a wavelength array λ with the amplitude array A⃗. From this calibration, λ_ref is computed as the mean wavelength weighted by the spectrum A⃗ itself.
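For illustration, this fixed-point iteration can be sketched as follows, assuming a hypothetical adr_shift callable implementing the ADR model described above, and consistent units for p and λ (the names are ours):

import numpy as np

def wavelength_solution(d, D_ccd, delta_u0, p, theta0, adr_shift, n_iter=5):
    # Invert the grating formula iteratively to account for the
    # wavelength-dependent ADR shift along the dispersion axis.
    # d        : distances to the 0th order [pix]
    # D_ccd    : disperser-to-CCD distance expressed in pixels
    # delta_u0 : fitted shift of the 0th order centroid [pix]
    # p, theta0: grating line density and incidence angle (consistent units)
    # adr_shift: callable, wavelength array -> ADR shift [pix] (assumed available)
    lambdas = (np.sin(np.arctan((d + delta_u0) / D_ccd)) - np.sin(theta0)) / p
    for _ in range(n_iter):
        u = d + adr_shift(lambdas) + delta_u0
        lambdas = (np.sin(np.arctan(u / D_ccd)) - np.sin(theta0)) / p
    return lambdas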
The parameters D_CCD and δ u_0^(fit) are fitted on data using the most prominent absorption (or emission) lines of the observed stellar spectrum (typically the hydrogen lines Hα and Hβ, and the dioxygen lines at 762.1 nm and 686.7 nm). The lines are locally fitted with a polynomial background plus a Gaussian profile of unknown height, centroid and width. A partial χ^2 quantity is computed for each spectroscopic line and added into a global χ^2.
A penalty defined as the squared distance between the fitted Gaussian centroids
and the tabulated values for the detected lines, weighted by the squared signal-to-noise ratio, is then added to the χ^2.
Finally, the full χ^2 and its penalty are normalised by the number of detected lines. This normalisation of the global χ^2 is necessary to avoid solutions that favour a lower number of detected lines, while the penalty gives more weight to the well-detected lines and anchors them on their tabulated values. The two parameters δ u_0^(fit) and D_CCD are varied to minimise the penalised global χ^2 and find the best solution for the wavelength calibration.
The result of this process is illustrated in Figure <ref>. On top, the global χ^2 is represented for the wavelength calibration of the planetary nebula PNG321.0+3.9 observed at CTIO. The sharp steps at high or low parameter values reflect situations where some emission lines are detected or not, which emphasises the need to normalise the global χ^2 by the number of detected lines. The smoothness around the minimum is due to the penalty term on δ u_0^(fit). In the calibrated spectrum, we can observe many emission lines detected by the algorithm, and a good alignment between the tabulated values (represented by the vertical lines), the extrema of the Gaussian profiles and those of the data curve.
§.§ Flux calibration
With the fitted wavelength solution, the spectrum amplitude Â⃗̂ can be converted from ADU units to flux densities in erg s^-1 cm^-2 nm^-1, assuming that the telescope collecting area 𝒮_T, the exposure time τ and the CCD gain g (in e^-/ADU) are known:
S_1(λ) = Â⃗̂ g hc / (𝒮_T τ λ δλ),
where δλ is the local variation of λ within one pixel.
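A minimal sketch of this conversion (assuming one photo-electron per detected photon and wavelengths in nm; the names are ours, not those of the pipeline) could be:

import numpy as np
from scipy import constants

def adu_to_flux_density(A_hat, lambdas, gain, surface_cm2, exposure_s):
    # Convert fitted amplitudes (ADU) to flux densities in erg / s / cm^2 / nm.
    # The local bin width delta_lambda is taken from the wavelength sampling.
    hc = constants.h * constants.c * 1e7            # J m -> erg m
    energy_per_photon = hc / (lambdas * 1e-9)       # erg per photon at each lambda
    dlambda = np.gradient(lambdas)                  # nm per pixel column
    return A_hat * gain * energy_per_photon / (surface_cm2 * exposure_s * dlambda)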
The end product of this pipeline is thus a background-subtracted spectrum, calibrated in wavelength and flux, which is the product of three quantities: the object SED, the instrumental transmission and the atmospheric transmission, potentially contaminated by a second diffraction order, since the latter has not yet been taken into account.
§.§ Spectrogram full forward model
At the end of the previous steps, we also have in hand a first estimate of the two instrumental model functions ϕ_1(r⃗, λ) and Δ⃗_p(λ), together with geometric parameters such as the zeroth order position r⃗_0 and the dispersion angle α. With all these ingredients, we can implement a full forward model of the data, also taking into account the atmospheric differential refraction (ADR) and the superimposition of the second diffraction order (last stage of Figure <ref>).
In practice, we enrich the forward model described in the steps above with the knowledge of the ADR physics (see Sections <ref> and <ref>) and with the knowledge of the second order to first order transmission ratio r_2/1(λ) of the spectrograph disperser. The ADR model replaces the polynomial approach to predict Δ⃗_p(λ). In other words, it is used to predict the trace of the spectrogram on the sensor with no free parameter, as long as the airmass, outside pressure, outside temperature and humidity are given. With that, two free parameters remain to fully constrain the spectrogram trace on the sensor: the dispersion axis angle α and δ y^(fit), which compensates for a misfit of the zeroth order centroid along the y axis. The model also needs the ratio r_2/1(λ), which can be measured on an optical test bench (see Section <ref>) or from on-sky data. In the full forward model, we use a new design matrix M⃗̃⃗ defined as:
M⃗̃⃗ = M⃗(Z⃗| r⃗_c, P⃗ ) + A_2R⃗_2/1M⃗(Z⃗| r⃗_c^p=2, P⃗^p=2 ),
where A_2 is a “safety” normalisation parameter,
R⃗_2/1 is the transmission ratio vector computed for a given wavelength calibration, r⃗_c^p=2 are the centroid positions
of the second diffraction order PSF kernels and P⃗^p=2 their shape
parameters. The vector r⃗_c^p=2 is computed using the grating
formula <ref> and the ADR model. P⃗^p=2 can be fitted
independently of P⃗^p=1, but we chose to assume that the PSF shape depends more on the spectrograph defocusing towards the infrared than on the atmospheric chromatic seeing. The PSF shape parameters for the second diffraction order are thus considered the same as for the first diffraction order at the same distance from the order 0. We set the P⃗^p=2 vector accordingly[Another choice could have been to assume that the spectrograph does not suffer from defocus, and thus that the PSF shape parameters for the second diffraction order are the same as those of the first diffraction order at the same wavelength, whatever the distance to the order 0. For CTIO images, our first choice leads to better fits to the data.]. Therefore, the full forward model now includes both first and second order spectrogram models as:
I⃗_1(Z⃗ | A⃗,r⃗_c, P⃗) + I⃗_2(Z⃗ | A⃗,r⃗_c^p=2, P⃗^p=2) =
M⃗̃⃗ A⃗.
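Schematically, and assuming that the ratio vector R⃗_2/1 acts column-wise on the second-order design matrix (one column per wavelength), the combined design matrix can be assembled as in the following sketch:

def full_design_matrix(M1, M2, r21, A2=1.0):
    # Combine the first- and second-order design matrices into one model.
    # M1, M2 : (N_pix, N_x) matrices of first- and second-order PSF kernels
    #          (second-order centroids come from the grating formula and the ADR model)
    # r21    : (N_x,) second-to-first order transmission ratio at each wavelength
    # A2     : global normalisation of the second order
    # Broadcasting scales each column i of M2 by r21[i]; the spectrogram model is
    # then full_design_matrix(...) @ A for a single amplitude vector A.
    return M1 + A2 * (M2 * r21)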
We implemented a two-step iterative method, alternating the wavelength calibration described in Section <ref> and the full forward model described just above (combining a linear fit for the A⃗ spectrum amplitudes and a Gauss-Newton descent for the non-linear parameters P⃗), with the same r hyper-parameter as fitted in Section <ref>. The A⃗ parameter vector previously determined with the 2D PSF deconvolution (Section <ref>) is seeded into the forward model fit as a new prior A⃗_0. A 20σ clipping is performed to reject the field stars and their concomitant spectrograms, as well as other sensor defects.
This procedure ensures that all the forward model parameters are fitted again together on the data, using the more complete model including A⃗, P⃗, D_CCD, δ u_0^(fit) and A_2. Their values replace all the ones fitted previously. The residual map obtained is flatter than before, with an even smaller final χ^2, and consequently a better accuracy of all the fitted parameters.
The two main products of this step are a first order diffraction spectrum S_1(λ), separated from the second order spectrum, and the second order spectrum S_2(λ) with
S_2(λ) = r_2/1(λ) S_1(λ),
in erg s^-1 cm^-2 nm^-1 (following formula (<ref>)).
It is at this point worthwhile to reconsider the second diffraction order not as a nuisance but as a useful signal. With a strong bending due to ADR (for instance with the dispersion axis orthogonal to the zenith direction), it can be detached on purpose from the first diffraction order to maximise the statistical power of the exposure.
In summary, a full forward model takes advantage of the higher diffraction orders as a redundant piece of data to fit all parameters, especially in the bluer part where absorption lines are twice as wide in pixels as in the first order spectrum. This is a key advantage of the forward approach over the direct approach.
§.§ Validation on simulations
To test the full forward model, we simulated a spectrogram with a second
diffraction order and a sharp Moffat PSF kernel to increase the spectral
resolution (see the PSF parameter values in Table <ref>), whose
shape evolves as a second order polynomial
function. Figure <ref> compares simulated data with the
fitted spectrogram model, focussing on some atmospheric absorption lines: the
residuals follow the expected Gaussian distribution again, even in those fast varying regions
of the spectrum.
In Figure <ref>, we show that the process recovered the
true spectrum injected in the simulation within the estimated uncertainties
(diagonal elements from the 𝐂 matrix from
equation <ref>).
The agreement is excellent at all wavelengths, even around the fast varying
absorption lines. On the right panel of the figure, the FWHM of the true PSF is
also represented, showing that the reconstructed PSF displays the same
wavelength-dependent PSF FWHM as the simulated one.
It is also remarkable that, while the cross-spectrum obtained from the transverse 1D fit described in Section <ref> failed to recover the true spectrum and the true PSF profile because of the presence of the second diffraction order (orange curves), it still proved to be an important seed for the regularisation process, since only its regularity is used, thanks to the Laplacian operator 𝐋.
The recovered parameters are compared with the simulation values in Table <ref>. They are fitted together with their uncertainties in the full forward model minimisation, which provides their full covariance matrix (Figure <ref>), while the other parameters like the star centroid are just estimated on data.
The regularisation quantities are given in Figure <ref>. We see once again that for a PSF FWHM between 2 and 4 pixels we obtain N_dof≈ 300 out of ≈ 700 parameters, confirming the rule of thumb given by the spectrophotometric uncertainty principle. For spectrograms like the ones presented in the previous figures, the end-to-end pipeline takes 2 minutes on a standard laptop.
The implementation has been tested on many simulations and
recovered the simulation parameters within the estimated uncertainties (68% confidence interval) for sets of parameters not too extreme (smooth wavelength dependence of the PSF, PSF kernel sampled over a few pixels). We also evaluated the extraction bias b between the true spectrum given in the simulation S_1^ truth(λ) and the extracted spectrum S_1(λ) as
b = ∫ (S_1(λ) - S_1^truth(λ)) dλ / ∫ S_1^truth(λ) dλ.
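For reference, this bias can be computed with a simple trapezoidal integration (a minimal sketch):

import numpy as np

def extraction_bias(lambdas, S1, S1_truth):
    # Relative extraction bias b between the extracted and true spectra,
    # integrated over wavelength with a trapezoidal rule.
    num = np.trapz(S1 - S1_truth, x=lambdas)
    den = np.trapz(S1_truth, x=lambdas)
    return num / den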
The extraction bias was evaluated with many simulations in different cases in terms of signal-to-noise ratio, resolution and geometry. For the first case, the signal-to-noise ratio variation was simulated by multiplying the simulated spectrum by an arbitrary grey factor A_1, keeping the image background at the same level. The signal-to-noise ratio of the simulation presented in Figure <ref> and Table <ref> corresponds to A_1=1. We found no significant bias in the A_1>1 regime (Figure <ref>), but a small bias at the percent level appears for lower signal-to-noise spectrograms, coming from treating the Poisson noise as Gaussian in the evaluation of the uncertainty map (see <ref>). The spectrogram resolution variation was simulated by changing the PSF width γ_0, and a small negative bias appears for low resolution spectra (large γ_0). Indeed, the latter exhibit wider and shallower absorption lines than the true spectrum, which leads to these negative b values, although at the sub-percent level. However, except for the absorption lines, the overall spectrum shape from blue to red is perfectly recovered. Finally, geometry variations were simulated using different dispersion axis angles α, and no bias was found. In conclusion, the most important condition to extract unbiased spectra from slitless spectrophotometry is a sufficient signal-to-noise ratio, closely followed by a sufficiently fine spectral resolution. The adequacy of these parameters is easy to test a priori with a forward model simulation.
For completeness, we represent the reconstructed number of degrees of freedom for all these simulations in Figure <ref>. As expected, for very low signal-to-noise spectrograms this number is close to zero and the extracted spectra come only from the 1D extraction; as the signal increases, N_dof strongly increases until it saturates because of pixelisation. Conversely, this number decreases when the PSF width increases, because the spectrogram then has a lower spectral resolution.
§ SPECTRA EXTRACTION ON DATA
The success of the spectrum extraction on data depends mostly on the model of the wavelength-dependent PSF of the telescope. If the PSF model correctly represents reality, the residuals after the spectrogram full forward model fit converge towards the expected Gaussian distribution. Otherwise, the extracted spectrum is distorted where the PSF is too far from reality.
This is illustrated in Figure <ref>. On the left panels, a blazed Thorlabs grating with 300 lines/mm was chosen to observe the CALSPEC star HD111980 at CTIO on 2017, May 30th. This grating, directly placed in the filter wheel at ≈ 55 mm from the CCD, presents a rather strong defocusing that is not well modelled by our default circular Moffat PSF, even with an order 4 polynomial evolution with wavelength.
The building of an appropriate model for these highly defocused PSFs is left for future work, and is presented here as an illustration of how much the prior understanding of the telescope can be worthwhile in a forward fitting approach. These extractions used a sigma clipping procedure with a 20σ threshold, in order to reject only the field stars and not the poorly modelled spectrogram pixels.
However, the same PSF kernel used to treat the same star observed 5 minutes later, but with an amplitude hologram optimised to correctly focus the spectrogram at all wavelengths <cit.>, leads to residuals between ±5σ (right figures), mostly dominated by a field star contaminating the spectrogram around 530 nm.
Parameters of interest for those two extractions are summarised in
Table <ref>. We realised a posteriori that the signal to noise
ratio was not sufficient to fit A_2, and decided to keep it fixed at 1. The way
the ratio r_2/1(λ) was obtained for these exposures is explained in
Section <ref>.
The calibrated spectra produced by the pipeline at the end of the extraction process are shown in Figure <ref>. We can see that, due to a strong defocusing, the spectrum from the Thorlabs grating presents broadened absorption lines, while the amplitude hologram yields sharper absorption lines. The latter displays a better spectral resolution, limited by the atmospheric seeing, which argues in favour of either controlling the spectrograph PSF at the hardware level or being able to model accurately a defocused PSF kernel. At this point, it seems easier to adjust the spectrograph to get a simple PSF model than to guess the complexity of the chromatic PSF from on-sky data. From these spectra or spectrograms, we show in Section <ref> how to measure the atmospheric transmission or the instrumental transmission via forward modelling.
§ A PATH TOWARD ATMOSPHERIC TRANSMISSION MEASUREMENT
One of the main objectives in building a spectrophotometric instrument and its analysis pipeline is to be able to measure accurately the on-site atmospheric transmission, so as to improve the photometric calibration of other telescopes on the same site. For instance, the aim of the Auxiliary Telescope at Cerro Pachón is to measure the atmospheric transmission to correct the photometry of the LSST survey.
In order to discuss the capabilities of the pipeline in measuring atmospheric quantities, we first recall that its main output is a first diffraction order
spectrum:
S_1(λ) = T_inst, 1(λ) T_atm(λ | P⃗_a) S_*(λ).
To be able to get the atmospheric transmission T_atm(λ |
P⃗_a), we need to know the star SED and the instrumental
transmission, and to have an accurate full forward model for the spectrograph.
Since the most accurate PSF model is achieved with the amplitude hologram at
CTIO thanks to its focusing properties, we expect better results from its
analysis than from the data acquired with the Thorlabs grating.
In order to inform our forward model, we need to know the on-sky first order transmission of both dispersers and their r_2/1(λ) ratio. While we will show how to use on-sky data to that end, we also managed to secure the Thorlabs grating and bring it back to an optical bench at the Laboratoire de Physique Nucléaire et de Hautes Énergies (LPNHE) to measure its transmission. This was not possible for the prototype hologram used at CTIO, and we had to recover its transmission from on-sky data.
While in theory the forward modelling of the atmospheric transmission could be based on a perfect a priori knowledge of the instrument and would simply need to fit each star exposure, in practice things tend to be a bit more complex. This is why we decided to show how our pipeline can be used with intermediate steps in order to gain more and more understanding of the data, up to the point where the full forward modelling of the atmospheric parameters becomes possible. Note that, due to the limited telescope time and observations, this first paper relies on a limited amount of data and aims at presenting the algorithms and procedures, it being understood that much more data would be needed for accurate results.
The different steps that we undertook and that will be detailed are as follows:
* <ref> laboratory measurement of the blazed grating transmission as a function of λ;
* <ref> inference of the CTIO 0.9 telescope transmission using data taken with the blazed grating during a stable night;
* <ref> with the CTIO 0.9 telescope transmission, inference of the amplitude hologram transmission during the same stable night;
* <ref>, <ref> with T_inst, 1(λ), S_*(λ), and a Moffat PSF kernel, measurement of T_atm(λ | P⃗_a) with data using the amplitude hologram.
§.§ Disperser transmission measurement
Our blazed Thorlabs 300 lines/mm grating was studied two years after the CTIO
campaign on the LPNHE optical test bench. The optical bench description can
be summarised as simulating a f/D=18 telescope beam of known amplitude with
parabolic off-axis mirrors. A monochromator is used to select a narrow
accurately known wavelength interval, and the light, after passing through the
grating mounted on an xyz support, is collected on a CCD device.
The transmissions for orders 0, 1 and 2 were measured using aperture photometry at many wavelengths. Unfortunately, since the laboratory bench had been designed to measure filter transmissions, it could not be adjusted to allow for a high enough signal-to-noise ratio in the regions below 450 nm and above 1000 nm. For these wavelength ranges we thus resort to the grating manufacturer's spreadsheet.
Concerning the measurement of the r_2/1(λ) ratio, we use the optical bench measurement above 450 nm and the CTIO on-sky measurement from <cit.> below. In order to cover wavelengths bluer than 400 nm, we extrapolate the r_2/1(λ) function with an exponential model C exp[-(λ-λ_0)/τ] with three free parameters C, λ_0 and τ fitted on the lab data.
The first order efficiency curve and the r_2/1(λ) curve for the blazed Thorlabs 300 lines/mm grating are represented in Figure <ref> and are used by the pipeline when measuring spectra taken with this grating (as in Figure <ref> left).
§.§ Analysis of a photometric night
To measure the CTIO 0.9 telescope transmission, we made use of a set of spectra acquired at different airmasses during a night with stable photometric conditions. The multiplicity of the airmass conditions and the hypothesis that the atmospheric transmission spectrum only varies with the quantity of atmosphere between the source and the observer allow us to factorise the measured transmission into an airmass-dependent atmospheric term and an instrumental transmission term.
In order to disentangle the average atmospheric transmission spectrum from the instrumental transmission spectrum, we performed the fit over the N_s available spectra by simulating the atmospheric transmission with Libradtran[<http://www.libradtran.org>] <cit.>, jointly with N_i arbitrary coefficients sampling the T_inst, 1(λ) curve.
At CTIO on 2017, May 30th, the night presented very stable conditions according
to the in situ meteorological measurements of temperature, pressure,
humidity, and also a stable seeing around 0.8".
We analysed the spectra of the CALSPEC star HD111980 acquired with the Thorlabs and holographic dispersers, under the hypothesis that this night was photometric. The observations cover an airmass range from 1 to 2 (see
Figure <ref>).
For each of the dispersers, N_s spectra were acquired and extracted. They were averaged in 3 nm bins, over which the instrumental transmission is supposed to be very smooth. The main atmospheric and hydrogen absorption lines have been masked in this process.
§.§.§ Analysis of a photometric night to extract the instrumental transmission
Since the only a priori partial information we have in hand about the telescope
transmission is the laboratory measurement of the Thorlabs grating, we start by
fitting a telescope transmission model and one atmospheric model on the
collection of spectra extracted from the data collected with this disperser.
From 300 to 1100 nm we fit simultaneously the N_s=20 good blazed grating
spectra observed with the model from equation (<ref>) for p=1,
S_1(λ) = T_inst, 1(λ) T_atm(λ | P⃗_a) S_*(λ),
S_*(λ) being the binned CALSPEC star SED, and T_inst,
1(λ) a vector of N_i =250 free linear amplitude parameters.
The Libradtran atmospheric transmission simulation T_atm(λ)
uses the in situ pressure, temperature and airmass given in each
exposure meta-data. In addition 3 common parameters P_a are fitted:
* the Precipitable Water Vapour (PWV, in mm);
* the ozone quantity (in dobson db);
* the Vertical Aerosols Optical Depth (VAOD).
In order to account for a possible small grey variation of the atmospheric
transmission, each spectrum is weighted by a grey factor A_1^(n), with
their average constrained to one:
⟨ A_1^(n)⟩_n=1
The χ^2 to be minimised thus reads:
χ^2 = ∑_n=1^N_s[D⃗_n -A_1^(n)S_1(λ)]^T 𝐂_n^-1[D⃗_n -A_1^(n)S_1(λ)]
where D⃗_n is the data vector for spectrum number n and C_n its
covariance matrix estimated by the extraction pipeline. The
A_1^(n) and P_a parameters are fitted jointly via a Gauss-Newton
descent and come with their covariance matrix, while the T_inst,
1(λ) linear parameters are computed analytically via the usual algebra
at each descent step. Since the spectrum of the star is supposed known, no
regularisation is needed. As the instrumental transmission is assumed to be smooth,
the descent is repeated with a 5σ clipping to remove outliers.
This procedure has been tested on simulations, and we checked that it recovered
the injected parameters for instrumental transmission, grey factors and
atmospheric quantities within the uncertainty ranges.
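The structure of this joint fit can be sketched as follows, with a hypothetical T_atm(P⃗_a, airmass) callable standing in for the Libradtran simulation (an illustration of the χ^2 above, not the actual implementation):

import numpy as np

def photometric_night_chi2(spectra, covariances, airmasses, A1, T_inst, S_star, T_atm, P_a):
    # Joint chi^2 over N_s spectra sharing one instrumental transmission
    # T_inst and one set of atmospheric parameters P_a.
    # spectra, covariances : lists of data vectors D_n and covariance matrices C_n
    # airmasses            : airmass of each exposure
    # A1                   : grey factors A_1^(n) (their mean is constrained to 1 elsewhere)
    # S_star               : binned CALSPEC SED on the common wavelength grid
    # T_atm                : hypothetical callable (P_a, airmass) -> transmission array
    chi2 = 0.0
    for D, C, z, a1 in zip(spectra, covariances, airmasses, A1):
        model = a1 * T_inst * T_atm(P_a, z) * S_star
        resid = D - model
        chi2 += resid @ np.linalg.solve(C, resid)
    return chi2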
The results obtained on the CTIO data are presented in
Figure <ref>. Approximately 15% of the 5000 spectral data
points end up masked, either because they are close to a spectral line or
because they are 5σ outliers. Residuals are structured below the
2σ level in the red part of the spectra, either because of an incorrect
PSF model for the redder wavelengths due to defocusing or because of PWV
variations in the atmosphere, as hinted by the spectra, vertically ordered in
time.
The best T_inst, 1(λ) solution that we managed to extract is
presented in Figure <ref>. The black points represent the
raw fitted T_inst, 1(λ) vector,
and the red curve is smoothed with a Savitzky–Golay filter of order 1 and
window size 17. Error bars result from the combination of raw uncertainties from the fit plus the difference between the smoothed curve and the scattered raw black points. This leads to larger uncertainties for the instrumental throughput where the spectra were masked, around the main absorption lines.
The transmission curve presents the expected decrease due to the loss of efficiency of the CCD. Given the laboratory measurement of the blazed Thorlabs grating (Figure <ref>), we extracted the CTIO 0.9 instrumental throughput from the smoothed T_inst, 1(λ) curve. This fills the gap in our a priori knowledge of the telescope throughput needed to inform our forward model, and will be used in the following analysis.
We obtain the telescope throughput by dividing the fitted instrumental
transmission by the first order efficiency of the blazed Thorlabs grating. The atmospheric transmission results are detailed in
Section <ref>.
A more accurate estimate of the instrumental transmission would need more data
both to inform a better PSF model, and to constrain more closely the
atmospheric transmission variations. Given the limited amount of data available,
our goal was to illustrate how a forward model approach can be adjusted to
gain more information on the different components of the model.
We also find it noteworthy that the procedure is symmetric with respect to the atmospheric and telescope transmissions: the need for the Libradtran model as a prior to constrain the atmospheric transmission shape could be replaced by an a priori measurement of the telescope transmission.
§.§.§ Analysis of a photometric night to get amplitude hologram transmissions
The next step needed to further inform our forward model, in order to use the best quality data to constrain the atmospheric transmission, is to estimate the holographic disperser transmission. If this transmission is known, it becomes possible to use the data gathered with this disperser and take advantage of the fact that its PSF can be fairly well modelled with a Moffat.
In order to obtain the hologram transmission, we use the same procedure as described for the Thorlabs disperser above, on data collected during the same photometric night but with the holographic disperser.
However, the ratio r_2/1(λ) is still a prior piece of information needed for the full forward model. For the holographic disperser, we built r_2/1(λ) using only the interpolated on-sky data presented in Figure 21 of <cit.>.
The N_s=27 spectra are presented in Figure <ref>
and the results are presented in
Figure <ref>. Approximately 0.5% of the 6723 data
points end up masked. Residuals are below 3σ. A deep absorption feature
is visible around the water absorption band at 950 nm.
As in the previous section, from the T_inst, 1(λ) best
fit (see Figure <ref>) we deduced the transmission of
the first diffraction order for the holographic disperser, using the CTIO 0.9
telescope transmission curve determined previously.
We are well aware of the systematic errors present in these results, and stress
that they are presented here to illustrate how the forward model approach we
implemented can be used when lacking a priori information on crucial components
of the model.
§.§.§ Analysis of a photometric night to extract atmospheric parameters
In addition to the instrumental transmissions of both dispersers, the procedures
above also yield the parameters describing the mean atmospheric transmission of
the night. These results, under the assumption that the night was photometric,
are presented in Table <ref>.
The rather low value of the reduced χ^2 for the amplitude hologram
illustrates the focussing properties of this disperser, which allow us to describe its PSF quite accurately with a simple 2D Moffat. Quantities obtained from the blazed Thorlabs grating data show lower statistical uncertainties than the amplitude hologram data, as their signal-to-noise ratio is much higher (because of the grating's much higher transmission). However, they certainly suffer from larger unevaluated PSF systematics than the hologram measurements. The difference between the two estimates of the atmospheric transmission in Table <ref> leads to variations in synthesised broadband magnitudes for the LSST filters of about 8 mmag in the u, g and r filters, 3 mmag in i, 1.5 mmag in z and 4 mmag in y, for various standard CALSPEC SEDs and supernovae at redshift 0. Milli-magnitude accuracy on the atmospheric transmission can thus be reached provided that the accuracy on the atmospheric parameters gets below the differences shown in Table <ref>: we found that the PWV must be fitted with an accuracy better than ≈0.05 mm to reach milli-magnitude accuracy in the y band. For the VAOD, uncertainties of about 0.001 are required for the u, g and r bands. For ozone, a precision of 10 db is enough to reach the milli-magnitude level in the r band.
We also find it interesting to note that the ozone and VAOD parameters we fitted are similar to what the global meteorological network MERRA-2[<https://gmao.gsfc.nasa.gov/reanalysis/MERRA-2/>] <cit.> estimates for the CTIO site during that night. The MERRA-2 PWV value ranges from 4 to 5 mm during the 2017 May 30th night. As MERRA-2 averages atmospheric quantities over cells about 60 km wide, it can be expected that quantities with large local variations, like water vapour, could differ from on-site measurements. This is even more true for CTIO, located at the top of a Chilean mountain. On the other hand, a high-atmosphere quantity would be expected to differ less between on-site and satellite measurements.
We report the MERRA-2 values and compare them to what we extract with our forward model to illustrate how, given a detailed knowledge of the telescope, one can tackle the challenging problem of on-site atmospheric transmission measurement. While the quoted error bars only propagate statistical uncertainties, and are probably dominated by systematics, the tentative concordance between the parameters measured by MERRA-2 and our forward model results supports the algorithm developed.
For completeness, we also present, for the holographic disperser data, the evolution of the grey parameters A_1^(n) through the night in Figure <ref>, together with the final correlation matrix of the fitted parameters in Figure <ref>. The variation of the 27 A_1^(n) factors is less than 1%. This supports the first order approximation of the night as being photometric, and shows again the ability of the procedure to improve our understanding of the data, offering avenues to improve the model.
Finally, we note that the correlation matrix shows that the VAOD aerosol parameter is particularly correlated with the spectrum amplitudes. This does not come as a surprise, since this quantity is mostly determined by the spectrum slope at λ≈400 nm. Any systematic error on the amplitude of the spectrum therefore directly affects the estimate of the aerosols.
§.§ Atmospheric forward model approach
After illustrating how the forward model approach could be used to measure the
telescope and disperser transmissions, yielding a set of stellar and
atmospheric transmission spectra, we can go one step further. Assuming that
our measurement of those crucial components of the forward model had been done
with enough data to be accurate, it becomes possible to skip the spectrum
extraction part and directly fit the atmospheric parameters on the raw
spectrogram.
Provided we have access to the aforementioned transmissions, if we model the spectrum S_1(λ)
as the product of a known instrumental transmission T_inst, 1(λ), a Libradtran atmospheric model T_atm(λ |
P⃗_a) and a known CALSPEC star SED S_*(λ), we can describe
any observed spectrogram as:
I⃗(Z⃗ | A⃗,r⃗_c, P⃗) = 𝐌̃(Z⃗ | r⃗_c, P⃗ ) A⃗
A⃗ (λ) = A_1 T_inst, 1(λ) T_atm(λ | P⃗_a) S_*(λ)
with A_1 a grey factor.
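Schematically, the spectrogram prediction of this atmospheric forward model reduces to the following sketch, again with a hypothetical T_atm callable wrapping the atmospheric simulation (all arrays are numpy arrays on a common wavelength grid):

def spectrogram_model(M_tilde, A1, T_inst, S_star, T_atm, P_a, airmass):
    # Spectrogram prediction when the amplitudes are no longer free parameters
    # but derived from physical models (the spectrogram fit).
    A = A1 * T_inst * T_atm(P_a, airmass) * S_star   # A(lambda) = A_1 T_inst T_atm S_*
    return M_tilde @ A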
As before, the parameters and their covariance matrix are estimated via a
Gauss-Newton descent minimizing a χ^2 calculated over a single
spectrogram. The parameters fitted are A_1, A_2, δ y^(fit),
P_a, , α and all the polynomial coefficients modelling the
wavelength dependence of the PSF kernel. Each spectrogram is fitted
independently. We call this a spectrogram fit. As a comparison, and a way to assess the quality of the stellar spectra
forward model, we also fit S_1(λ) and the atmospheric transmission
directly on the stellar spectra extracted at that step. We call this a
spectrum fit.
§.§.§ Qualification on simulations
The atmospheric parameter extraction directly from a spectrogram has been tested
on simulations. The parameters chosen for these simulations were those extracted from fitting all the data spectrograms of the "photometric" night of 2017, May 30th involving the amplitude hologram.
With these parameters, we simulated spectrograms of a CALSPEC star spanning airmasses ≈ 1 to ≈ 2 with the same seeing and atmospheric conditions as our data. We arbitrarily fixed the unknown P_a parameters to 300 db for ozone, 0.03 for VAOD and 5 mm for PWV.
The result of the spectrogram fit is very similar to what was presented in
Figure <ref>. For comparison we present the result
of the spectrum fit of the extracted spectrum from one of the simulated
spectrograms in Figure <ref>. We see that the extracted spectrum (red points), the best fitting spectrum model
(blue) and the true spectrum (green) are all in agreement within the quoted
uncertainties.
In addition, the recovered atmospheric values are compatible with the true
injected values within uncertainties, with a strong correlation between the
grey parameter A_1 and the aerosols, as seen before on real data.
The nightly behaviour is presented in Figure <ref>. All
values agree with the true values for both methods, correctly accounting for the
variable simulated conditions.
We note again a strong correlation between the VAOD and the A_1
parameter. As mentioned before, this is an expected behaviour as aerosols
specifically affect spectrum slope in the blue and the global spectrum amplitude.
In addition to validating the spectrogram fit, these results also show that the
forward model process and all the pipeline steps presented above do not bias the
measurement of the atmospheric parameters.
§.§.§ Data analysis
The individual fits of the spectrograms and spectra extracted from CTIO data are
presented in Figures <ref>,
<ref> and <ref>.
We see that the atmospheric forward modelling fits the data at the 5σ uncertainty level but that the PSF model imprints structured residuals similarly to what happens in the full forward model case (Figure <ref>). This effect is visible along the spectrogram and inside the dioxygen absorption line.
The spectrum presented in Figure <ref> is globally well fitted by the S_1(λ) model. The fit residuals around the main dioxygen line, for data as well as for the simulation, are compatible with the instrumental throughput uncertainties. All spectrograms and spectra of the night show the same residual patterns.
Concerning the atmospheric parameters, we see that both methods yield very similar values. The spectrogram fit values are smooth in time, with a visible correlation between the VAOD and A_1 parameters, while the spectrum fit values are shifted and more scattered, probably due to the higher sensitivity of this simpler procedure to outliers such as the field star contamination of the spectra around 530 nm.
We remind here that in the spectrogram fit the raw spectrogram data are directly
fitted with a model that contains the instrumental transmission for diffraction
orders 1 and 2, a Libradtran atmospheric model, and models for the dispersion
relationship and PSF kernel.
At this point, the smoothness of the atmospheric parameter curves and the
reasonable values that we obtained (low ozone, a few millimetres of precipitable
water vapour) are the closest to reality that we can expect to get with the
quality and size of the data set at hand.
We again acknowledge that these atmospheric results are affected by systematic uncertainties and modelling choices (such as the circularity of the PSF model, the second diffraction order PSF size or the blazed grating transmission model) that affect the absolute value of the quoted parameters. Unfortunately, we do not have enough data to estimate the systematics and go further. We leave the careful analysis of the atmospheric transmission for a future paper, for example using the high quality data set promised by AuxTel, which is dedicated to measuring the atmospheric transmission at the Rubin Observatory site.
§ SUMMARY AND CONCLUSIONS
Slitless spectrophotometry with forward modelling opens a path toward the acquisition of spectra with imaging telescopes simply transformed into spectrographs by inserting a disperser in the light path.
We demonstrated on simulations that building a forward model of a spectrogram
allows for accurate spectrophotometry, with a spectral resolution that only
depends on the PSF width along the dispersion axis. The key of the process is a
regularisation algorithm, fed with as much prior information as
possible (regularity of the searched spectrum, PSF parametrisation, ADR, grating
efficiency). The two key functions of the model are the dispersion function
Δ_p(λ) and the PSF model ϕ(r⃗, λ), plus the knowledge of the r_p/1(λ) ratio of diffraction order transmissions.
We showed how this procedure performs on real data, with tentatively very promising results. Being aware of the limits of the data set in our possession, we took great care to exemplify how the forward model procedure can be used to improve our knowledge of the data, and by doing so, to inform the forward model.
We can also summarize some of the important lessons learned while implementing
the pipeline as follows.
* Forward modelling provides a modular approach where each brick is a
physical or empirical model that can be changed or improved depending on the
data particularities and signal to noise ratio. Residuals indicate how to
improve the model (data rules).
* Once implemented, forward modelling easily provides the capability to simulate data
sets to test new algorithms.
* The second diffraction order is not a contamination but a signal that helps recover the blue part of the order 1 spectrum. It should be taken advantage of whenever possible. In particular, this requires rethinking the common wisdom of spectroscopy, by increasing the efficiency of the grating in the second order and by using a field rotator (if available) to more easily separate the different diffraction orders on the sensor thanks to the ADR.
* The accurate knowledge of the PSF is
thus crucial and requires dedicated data and analysis that need to be
carefully budgeted for. The PSF width sets the spectral resolution and the number of degrees of
freedom that can be extracted from data, thus decreasing the width is crucial.
Since the main scientific driver for the development of this pipeline is the measurement of the on-site atmospheric transmission, we pushed our analysis all the way to that point. We showed how our procedure allows us to measure the on-sky telescope transmission, all the way to the direct extraction of atmospheric parameters from spectrograms.
While the atmospheric parameters are dominated by systematic uncertainties, in particular from our partial knowledge of the instrumental transmission and of the PSF, the comparison with satellite data shows a promising tentative agreement. We defer a more intensive study of the on-site atmospheric transmission to work beyond this paper, which is devoted to presenting the spectrophotometry method. This will in particular require access to much more data, and specific detailed analyses to obtain accurate instrumental transmissions and PSF models.
We finally would like to acknowledge that many elements of the forward model can, and will, be improved as new data become available. In particular, we can imagine estimating the background directly in the forward model, including other diffraction orders, modelling the field star spectrogram contaminations, integrating the chromatic flat-fielding into the model to account for pixel efficiencies, etc.
These ideas are worth implementing in the algorithm if required by the data. On the other hand, there are also many hardware solutions that can be implemented to increase the a priori knowledge of the instrument and greatly improve the forward model analysis. For instance, we showed that holographic dispersers like those presented in <cit.> improve the focusing of the spectrogram on the sensor over the whole visible and near infra-red range, which eases the modelling of the PSF. Also, their narrow width allows a better spectral resolution. Another improvement would be to use a Collimated Beam Projector <cit.> to measure the telescope transmission at the per-mil level, and to monitor its evolution with time.
In conclusion, we have presented in this paper the theoretical tools, together with a detailed implementation example, to add spectrophotometric capability to an imager by inserting a disperser in the light path. This comes at some computational cost, readily affordable nowadays, but also requires either a priori knowledge of the instrument or dedicated data and analysis to bring the model to the required level of accuracy.
As a closing remark, we would like to stress that the forward model approach is extremely powerful, but requires a deep focus on analysing the data in hand and solving the problems that are present. We found many times that trying to put together a generic forward model that suits all needs is a fool's quest, and realised again and again that implementation choices matter.
We are grateful to the CTIO technical staff members
Hernan Tirado and Manuel Hernandez for their help during our tests with the CTIO 0.9 m telescope. We also thank Mélanie Chevance for her participation to the observations and Augustin Guyonnet for fruitful advice for the CTIO image reduction. The cost of the observations have been shared by the IJCLab (IN2P3-CNRS) and the Department of Physics and Harvard-Smithsonian Center for Astrophysics, Harvard University. F.B. is part of the FP2M federation (CNRS FR 2036) and of the project Labex MME-DII (ANR11-LBX-0023-01).
This paper has undergone internal review in the LSST Dark En-
ergy Science Collaboration. The internal reviewers were Marc Betoule, Andres Plazas-Malagon and David Rubin.
J.Neveu is the primary author of the paper and of the Spectractor software, leading the analysis toward measurement of atmosphere transmission measurement. V.Brémaud implemented the atmospheric differential refraction in the forward model, and wrote the pipeline to determine the CTIO telescope throughput. F.Barret brought the mathematical frame for the regularisation procedure. S.Bongard contributed with general discussions on the spectrum extraction and atmospheric physics. Y.Copin developed the theoretical framework of slitless spectrophotometry and of the forward modelling. S.Dagoret-Campagne and M.Moniez contributed with general discussions on spectrum extraction, the data taking and analysis of CTIO data. L.Le Guillou built the specific system to measure the disperser transmission on the LPNHE optical bench and measured the dispersers. P.Antilogus, C.Juramy and E.Sepulveda built and maintained the LPNHE optical bench.
The DESC acknowledges ongoing support from the Institut National de
Physique Nucléaire et de Physique des Particules in France; the
Science & Technology Facilities Council in the United Kingdom; and the
Department of Energy, the National Science Foundation, and the LSST
Corporation in the United States. DESC uses resources of the IN2P3
Computing Center (CC-IN2P3–Lyon/Villeurbanne - France) funded by the
Centre National de la Recherche Scientifique; the National Energy
Research Scientific Computing Center, a DOE Office of Science User
Facility supported by the Office of Science of the U.S. Department of
Energy under Contract No. DE-AC02-05CH11231; STFC DiRAC HPC Facilities,
funded by UK BEIS National E-infrastructure capital grants; and the UK
particle physics grid, supported by the GridPP Collaboration. This
work was performed in part under DOE Contract DE-AC02-76SF00515.
§ ASTROMETRY
Accurately anchoring the wavelength calibration is crucial in many regards. For the atmospheric transmission measurement, a small shift of about 1 pixel can significantly bias the estimate of the aerosol parameters. Unfortunately, such a shift can happen because of the poor determination of the position of the order 0 of the spectrum, which is usually saturated with long bleeding spikes. Localising it accurately is difficult, and might not be robust enough to achieve a centroid determination precision better than the pixel scale for every image in all circumstances.
We thus found it useful to explore how to use the field stars to set a precise astrometry using the astrometry.net library[<http://astrometry.net/>]. The field star centroids are first extracted from the image using the source detection method of <cit.>, with a 5σ clipping above threshold. The astrometric solver is then called: patches of stars are compared to known asterisms to obtain the precise location of the image on the sky, as well as the transformation between image coordinates and sky coordinates in the form of a World Coordinate System (WCS) description.
This procedure may not yield sub-pixel precision for the target star centroid, in particular if it has a high proper motion. To improve the precision, we compare the first source catalogue, whose positions are converted into sky coordinates, with star positions from the Gaia DR2 catalogue corrected for their proper motions. The difference between the coordinates of the 50 brightest field stars in the two catalogues is subtracted from the WCS solution in order to lock it onto the Gaia catalogue.
We then remove the star with the largest distance to the Gaia
catalogue, and reapply followed by the Gaia
catalogue centering, and repeat 10 times this operation. The
astrometric solution where the distance between the image stars and the
Gaia stars is the smallest is kept. This procedure minimizes the effect of stars with bad reconstructed centroids (due to saturation
effects) or high proper motions. It ends with a scatter evaluated at ≈ 0.15 RMS (Figure <ref>).
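For illustration, the iterative Gaia-locking procedure described above can be sketched as follows. This is a simplified NumPy sketch and not the actual Spectractor implementation: it assumes flat-sky (small-angle) matching, centroids sorted by decreasing brightness, and a hypothetical `solve_wcs` callable standing in for the Astrometry.net plate solve.

```python
import numpy as np

def refine_wcs_on_gaia(star_xy, solve_wcs, gaia_radec, n_bright=50, n_iter=10):
    """Iteratively lock the astrometric solution onto the Gaia catalogue.

    star_xy    : (N, 2) detected field-star centroids (pixels), sorted by decreasing brightness
    solve_wcs  : callable returning the sky coordinates (deg) of the centroids
                 (hypothetical stand-in for the Astrometry.net plate solve)
    gaia_radec : (M, 2) Gaia positions corrected for proper motion (deg)
    """
    best_rms, best_offset = np.inf, None
    xy = np.asarray(star_xy, dtype=float)
    for _ in range(n_iter):
        radec = solve_wcs(xy)                                   # (N, 2) solved sky positions
        # nearest Gaia neighbour of every detected star (flat-sky approximation)
        d = np.linalg.norm(radec[:, None, :] - gaia_radec[None, :, :], axis=-1)
        nearest_idx, nearest_dist = d.argmin(axis=1), d.min(axis=1)
        # lock onto Gaia: median offset of the brightest matched stars
        offset = np.median(radec[:n_bright] - gaia_radec[nearest_idx[:n_bright]], axis=0)
        resid = np.linalg.norm((radec - offset) - gaia_radec[nearest_idx], axis=1)
        rms = np.sqrt(np.mean(resid ** 2))
        if rms < best_rms:                                      # keep the best solution
            best_rms, best_offset = rms, offset
        # reject the worst-matched star (saturation, high proper motion) and redo the solve
        xy = np.delete(xy, nearest_dist.argmax(), axis=0)
    return best_offset, best_rms
```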
§ GAUSS-NEWTON MINIMISATION ALGORITHM
The gradient descent to minimize a χ^2 using the Gauss-Newton algorithm
works as follows. Let us consider a data set with N data points gathered into a
vector D⃗, with their uncertainties (correlated or not) encoded in a
weight matrix W (the inverse of their covariance matrix). We want to model these data with a model
m(P⃗) depending on a set of parameters P⃗. For a parameter vector P⃗, the
χ^2 is defined as:
χ^2(P⃗) = (M⃗(P⃗) - D⃗)^T W (M⃗(P⃗) - D⃗) = R⃗^T(P⃗) W R⃗(P⃗)
where M⃗(P⃗) is the vector of the model predicted values for the
N data points. The vector R⃗(P⃗) is the residuals vector.
In order to find the set of parameters P̂⃗̂ that minimizes the
χ^2 function, we search for the zero of the χ^2 gradient that verifies
∇⃗_P⃗χ^2(P̂⃗̂) = 0⃗. The algorithm used is the
iterative multi-dimensional Gauss-Newton method, which we describe hereafter.
We start the minimisation with a first guessed value for the parameters
P⃗_0. A Taylor expansion at first order of the ∇⃗_P⃗χ^2 function can be performed around the starting point
P⃗_0 and gives:
∇⃗_P⃗χ^2(P⃗_1) ≈ 2 J⃗_0^T W⃗R⃗_0 + 2 J⃗_0^T W⃗J⃗_0 δP⃗_1 + ⋯, for P⃗_1 close to P⃗_0,
with δP⃗_1 = P⃗_1 - P⃗_0 and J⃗_0=∇⃗_P⃗M⃗(P⃗_0) the Jacobian matrix of the model
evaluated at P⃗_0. Note that for a linear model, i.e., a model that can
be written as m(P⃗ | Z⃗) = ∑_i=1^N P_i f_i(Z⃗), the gradient
∇⃗_P⃗χ^2(P⃗) is exactly equal to its first-order
Taylor expansion.
The zero of the function is then approached by solving the equation ∇⃗_P⃗χ^2 (P⃗_1) = 0⃗:
∇⃗_P⃗χ^2(P⃗_1) = 0⃗⇒P⃗_1 = P⃗_0 - ( J⃗_0^T W⃗J⃗_0)^-1J⃗_0^T W⃗R⃗_0.
Because of the approximation coming from the Taylor expansion and of the finite
numerical accuracy of the Jacobian matrix computation, it is unlikely that the
P⃗_1 found exactly cancels the χ^2 gradient.
We then search for the value α̂_1 that minimizes the χ^2 function along
the line parametrised by α_1 δP⃗_1, where
α_1 is a real number. The solution P⃗_1 then reads:
P⃗_1 = P⃗_0 - α̂_1 ( J⃗_0^T W⃗J⃗_0)^-1J⃗_0^T W⃗R⃗_0.
The process is iterated K times:
P⃗_k+1 = P⃗_k - α̂_k+1( J⃗_k^T W⃗J⃗_k)^-1J⃗_k^T W⃗R⃗_k
until a convergence criterion is reached, for example when the change of
χ^2(P⃗_k) or of P⃗_k with k falls below a certain
threshold.
The best fitting model is then considered to be the one parametrised by the
kth vector: P̂⃗̂≈P⃗_k. The covariance matrix of
the P⃗_k parameters is obtained as the inverse of the approximate Hessian matrix at the
minimum χ^2:
C⃗(P̂⃗̂) = ( J⃗_k^T W⃗J⃗_k)^-1.
In Spectractor, we also implemented the possibility to limit the P⃗
search within given bounds (for instance, we can impose that the amplitudes are all
positive). We found that such bounds help the algorithm converge.
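The iteration above translates directly into code. The following is a minimal NumPy/SciPy sketch of the Gauss-Newton loop with the one-dimensional line search on α; the `model` and `jacobian` callables are placeholders for the actual forward model and its Jacobian.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gauss_newton(model, jacobian, D, W, P0, n_iter=20, tol=1e-6):
    """Minimise chi2(P) = (M(P) - D)^T W (M(P) - D) with Gauss-Newton steps
    and a line search on the step length alpha, as described above."""
    P = np.asarray(P0, dtype=float)
    chi2 = lambda p: (model(p) - D) @ W @ (model(p) - D)
    for _ in range(n_iter):
        R = model(P) - D                                   # residual vector R(P)
        J = jacobian(P)                                    # Jacobian, shape (N_data, N_params)
        # Gauss-Newton direction: dP = -(J^T W J)^{-1} J^T W R
        dP = -np.linalg.solve(J.T @ W @ J, J.T @ W @ R)
        # line search: alpha_hat minimising chi2 along P + alpha * dP
        alpha = minimize_scalar(lambda a: chi2(P + a * dP),
                                bounds=(0.0, 2.0), method="bounded").x
        if chi2(P) - chi2(P + alpha * dP) < tol:           # convergence criterion
            break
        P = P + alpha * dP
    cov = np.linalg.inv(jacobian(P).T @ W @ jacobian(P))   # C(P_hat) = (J^T W J)^{-1}
    return P, cov
```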
§ SECOND ORDER PENALISATION
Regularisation techniques involving the total variation are often used in image analysis for de-noising or deconvolution while recovering sharp edges. One way to justify the use of a penalisation with the discretised Laplacian is to observe that it entails an automatic bound on the total variation. We show this in the following.
Let us recall some notations. The complete cost to minimize is:
ℰ(A⃗ | r⃗_c, P⃗) = (D⃗ - 𝐌 A⃗)^T 𝐖(D⃗ - 𝐌 A⃗)
+ r (A⃗ - A⃗_0 )^T 𝐐 (A⃗ - A⃗_0)
= χ^2(A⃗ | r⃗_c, P⃗) + r χ^2_ pen(A⃗ | A⃗_0),
A⃗_0 = A⃗^(1D)
D⃗ is the data vector, 𝐌 the design matrix and A⃗ is the amplitude vector which gives the spectrogram and that we wish to obtain.
𝐖 is the inverse of the covariance matrix so that if we have all the true parameters for A⃗, r⃗_c and P⃗, then χ^2(A⃗ | r⃗_c, P⃗) is the sum of the squares of the residuals and thus is the realisation of a random variable following a χ^2 law with N_xN_y degrees of freedom, hence the name of the cost as χ^2.
The penalisation term χ^2_ pen(A⃗ | A⃗_0)=(A⃗ - A⃗_0 )^T 𝐐 (A⃗ - A⃗_0) is also a quadratic term and for 𝐐=𝐋^T 𝐔^T 𝐔𝐋 with the Laplacian operator 𝐋=-𝐃^T𝐃:
𝐋 = [ -1 1 0 0 ⋯ 0 0; 1 -2 1 0 ⋯ 0 0; 0 1 -2 1 ⋯ 0 0; ⋮ ⋱ ⋱ ⋱ ⋮ ⋮; 0 0 0 0 ⋯ -2 1; 0 0 0 0 ⋯ 1 -1; ]
𝐃 = [ 1 -1 0 0 ⋯ 0 0; 0 1 -1 0 ⋯ 0 0; 0 0 1 -1 ⋯ 0 0; ⋮ ⋱ ⋱ ⋱ ⋮ ⋮; 0 0 0 0 ⋯ 1 -1; 0 0 0 0 ⋯ 0 1; ]
and 𝐔 is such that
𝐔 = [ 1/σ_A_1D^(1) 0 ⋯ 0; 0 1/σ_A_1D^(2) ⋯ 0; ⋮ ⋱ ⋱ ⋮; 0 0 ⋯ 1/σ_A_1D^(N_x); ]
we get χ^2_ pen(A⃗ | A⃗_0)=(𝐔𝐋(A⃗ - A⃗_0) )^T (𝐔𝐋(A⃗ - A⃗_0)), which is the quadratic norm (i.e., the squared Euclidean norm) of the vector 𝐔𝐋(A⃗ - A⃗_0), denoted ‖𝐔𝐋(A⃗ - A⃗_0)‖_2^2.
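As an illustration, the penalisation term can be computed with a few lines of NumPy. The sketch below builds the 𝐋 and 𝐔 matrices exactly as printed above; it is a toy reimplementation, not the Spectractor code.

```python
import numpy as np

def penalty_chi2(A, A0, sigma_A1D):
    """chi2_pen(A | A0) = || U L (A - A0) ||_2^2 for 1-D amplitude vectors."""
    n = np.asarray(A).size
    # discrete Laplacian L with the boundary rows as printed above
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, 0] = -1.0
    L[-1, -1] = -1.0
    U = np.diag(1.0 / np.asarray(sigma_A1D, dtype=float))  # U = diag(1 / sigma_A1D)
    r = U @ L @ (np.asarray(A, float) - np.asarray(A0, float))
    return float(r @ r)

# Equivalently, with Q = L^T U^T U L, chi2_pen = (A - A0)^T Q (A - A0).
```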
If we interpret A⃗ as the discretisation of a continuous spectrogram a by setting A^(i)=a(i/N_x), and σ is a function such that σ(i/N_x)=σ_A_1D^(i), a continuous analogue of this term would be a term of the form
‖(a-a_0)”‖_2,σ^2=∫_0^1[(a-a_0)”(x)]^2/σ^2(x) dx
where a, a_0 would be functions whose A⃗, A⃗_0 are discretisations. As a simple consequence, one has
lim_N_x→∞N^3_xχ^2_ pen(A⃗ | A⃗_0) =‖(a-a_0)”‖_2,σ^2
since N_x𝐃∼-d/dx.
The total variation distance is defined as the (weighted) norm-1 of the gradient operator, in functional terms:
‖(a-a_0)'‖_1,σ=∫_0^1|(a-a_0)'(x)|/σ(x) dx.
Note also that
lim_N_x→∞∑_i=1^N_x|𝐔𝐃(A⃗ -A⃗_0)|_i =‖(a-a_0)'‖_1,σ.
However, by a simple argument, one can show that the 2-norm of the second derivative controls the 1-norm of the first derivative, so that minimizing the former also minimizes the latter. To prove it, let f(x)=a(x)-a_0(x); by the Cauchy-Schwarz inequality we get
|f'(x)| =|f'(0)+∫_0^xf”(s) ds|=|f'(0)+∫_0^xf”(s)/σ(s)σ(s) ds|
⩽ |f'(0)|+ (∫_0^x(f”(s))^2/σ(s)^2 ds)^1/2(∫_0^xσ^2(s) ds)^1/2.
Thus, under the supplementary constraint that a'(0)=a_0'(0), we have
‖(a-a_0)'‖_1,σ ⩽(∫_0^1σ^2(s) ds)^1/2‖(a-a_0)”‖_2,σ.
The discrete analogue is, asymptotically in N_x→ +∞:
∑_i=1^N_x|𝐔 𝐃(A⃗-A⃗_0)|_1,σ ⩽ N_x√(Tr(𝐔 ^-2)χ^2_ pen(A⃗ | A⃗_0)).
This shows that regularisation with the weighted quadratic norm of the second-order derivative automatically ensures an upper bound on the weighted total-variation norm, while being computationally much faster. Since the usual advantages of the weighted total-variation norm (no assumption on the existence of a second-order derivative, or the search for a sparse minimizer) are not important here, the choice of the weighted quadratic norm of the second-order derivative as a loss function is entirely pertinent.
§ ATMOSPHERIC DIFFERENTIAL REFRACTION
The Atmospheric Differential Refraction (ADR) depends mostly on the pressure,
the temperature and the airmass, and only weakly on the atmospheric humidity.
In our wavelength calibration process for CALSPEC stars, the absorption line
that weighs most in the fit is the main dioxygen line at 762.1 nm. If the ADR
is not correctly modelled and taken into account in the wavelength calibration,
shifts of the absorption line minima towards the blue part of the spectrum can
be observed through the night while the airmass of the star changes.
This is illustrated in the left panels of Figures <ref>
and <ref>. In the right panels, the ADR effect is included in the
wavelength calibration process through a wavelength-dependent shift of the order-0
centroid δ u_0^(ADR)(λ). We observe that this
procedure absorbs most of the line shifts when the dispersion axis is not orthogonal to the zenith direction.
For completeness, in Figure <ref> we represent the
angle conventions used in Spectractor to correctly compute the zenith direction
in the image.
|
http://arxiv.org/abs/2307.05695v2 | 20230711180209 | Stack More Layers Differently: High-Rank Training Through Low-Rank Updates | [
"Vladislav Lialin",
"Namrata Shivagunde",
"Sherin Muckatira",
"Anna Rumshisky"
] | cs.CL | [
"cs.CL",
"cs.LG"
] |
Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
Vladislav Lialin, Namrata Shivagunde, Sherin Muckatira, Anna Rumshisky
=================================================================
Despite the dominance and effectiveness of scaling, resulting in large networks with hundreds of billions of parameters,
the necessity to train overparametrized models remains poorly understood, and alternative approaches do not necessarily make it cheaper to train high-performance models.
In this paper, we explore low-rank training techniques as an alternative approach to training large neural networks. We introduce a novel method called ReLoRA, which utilizes low-rank updates to train high-rank networks.
We apply ReLoRA to pre-training transformer language models with up to 350M parameters and demonstrate comparable performance to regular neural network training. Furthermore, we observe that the efficiency of ReLoRA increases with model size, making it a promising approach for training multi-billion-parameter networks efficiently. Our findings shed light on the potential of low-rank training techniques and their implications for scaling laws.
Code is available on GitHub.[https://github.com/guitaricet/peft_pretraining]
§ INTRODUCTION
Over the past decade, the machine learning field has been dominated by the trend of training increasingly overparametrized networks or adopting the "stack more layers" approach <cit.>. The definition of a large network has evolved from models with 100 million <cit.> to hundreds of billions <cit.> of parameters, which has made the computational costs associated with training such networks prohibitive for most research groups. Despite this, the necessity to train models which can have orders of magnitude more parameters than the training examples <cit.> is poorly understood theoretically <cit.>.
Alternative approaches to scaling, such as more compute-efficient scaling optima <cit.>,
retrieval-augmented models <cit.>,
and the simple approach of training smaller models for longer <cit.>,
have offered new interesting trade-offs. However, they do not bring us closer to understanding why we need overparametrized models and rarely democratize the training of these models.
For example, training RETRO <cit.> requires a complex training setup and infrastructure capable of quickly searching over trillions of tokens, while training LLaMA-7B <cit.> still requires hundreds of GPUs.
In contrast, approaches like zero-redundancy optimizers <cit.>, 16-bit training <cit.>, 8-bit inference <cit.>, and parameter-efficient fine-tuning (PEFT) <cit.> have played a crucial role in making large models more accessible. Specifically, PEFT methods have enabled fine-tuning of billion-scale language or diffusion models on consumer hardware. This raises the question: Can these approaches also benefit pre-training?
On the one hand, pre-training is exactly the step that allows for small modifications to the network to adapt it to new tasks. <cit.> demonstrated that the rank of the changes required to learn a task decreases the more you pre-train the network. On the other hand, multiple studies have demonstrated the simplicity of features extracted and utilized by language and vision models, along with their low intrinsic dimensionality <cit.>. For instance, attention patterns in transformers <cit.> often exhibit a small rank, which has been successfully leveraged to develop more efficient variants of attention <cit.>.
Moreover, overparametrization is also not necessary for training. The Lottery Ticket Hypothesis <cit.> empirically demonstrates that during initialization (or early in training <cit.>), there exist sub-networks – winning tickets – that when trained in isolation reach the performance of the full network.
In this study, we focus on low-rank training techniques and introduce ReLoRA, which uses low-rank updates to train a high-rank network.
We empirically demonstrate that ReLoRA performs a high-rank update and achieves performance similar to regular neural network training.
The components of ReLoRA include initial full-rank training of the neural network (similar to <cit.>), LoRA training, restarts, a jagged learning rate schedule, and partial optimizer resets.
We evaluate ReLoRA on transformer language models with up to 350M parameters. We chose to focus on autoregressive language modeling, as this approach has demonstrated its universality in most of the applications of neural networks <cit.>.
Finally, we observe that the efficiency of ReLoRA increases with model size, making it a viable option for efficient training of multi-billion-parameter networks.
Each experiment in this study has used no more than 8 GPU days of compute.
§ RELATED WORK
Scaling versus Efficiency
The relationship between overparametrization and neural network trainability and generalization has been extensively studied <cit.>, yet it remains a mystery <cit.>.
Moreover, scaling laws <cit.> demonstrate a simple and strong power-law dependence between network size and its performance across a variety of modalities.
This finding not only supports overparametrization but also encourages the training of extraordinarily resource-intensive neural networks <cit.>.
Nonetheless, the Lottery Ticket Hypothesis <cit.> suggests that overparametrization could, in principle, be minimized. Specifically, it shows that early in training, subnetworks exist that can be trained to achieve the performance of the full network (winning tickets).
Parameter-efficient fine-tuning
<cit.> found that pre-training reduces the amount of change to the network, or its intrinsic dimensionality, required to learn a new task through fine-tuning.
That is, larger networks or networks pre-trained on more data require smaller modifications, in terms of rank, to learn a new task.
This explains the success of parameter-efficient fine-tuning methods <cit.> and has also motivated the development of low-rank fine-tuning methods such as LoRA <cit.> and Compacter <cit.>.
Low-rank neural network training
Training low-rank representations has been explored in the context of CNN compression, regularization, and efficient training <cit.>.
However, most of these methods are either specific to CNNs, do not scale well, or have not been evaluated on large transformers <cit.> with hundreds of millions of parameters, which can benefit greatly from efficient training.
While transformers have been shown to have a low-rank internal dimensionality and representations <cit.>, the study by <cit.> demonstrated that the low rank of key and query projections in multi-head attention bottlenecks the performance of transformers.
Our own experiments (Section <ref>) also demonstrate that low-rank transformers perform significantly worse compared to the full-rank baseline and ReLoRA.
§ METHOD
Let's start by revisiting linear algebra 101. In particular, we are interested in the rank of the sum of two matrices:
rank(A + B) ≤ rank(A) + rank(B).
This bound on the rank of the sum is tight: for a matrix 𝐀 with rank(𝐀) < dim(𝐀), there exists 𝐁 with rank(𝐁) < dim(𝐁) such that the sum of the matrices has a higher rank than either 𝐀 or 𝐁.
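A two-line numerical check of this property (a toy example, not part of the original paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2
A = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))   # rank 2
B = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))   # rank 2
# generically the sum has rank 4, higher than either summand
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B), np.linalg.matrix_rank(A + B))
```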
We want to exploit this property to make a flexible parameter-efficient training method.
We start with LoRA <cit.>, which is a parameter-efficient fine-tuning method based on the idea of low-rank updates. LoRA can be applied to any linear operation parametrized through W ∈ℝ^m × n. Specifically, LoRA decomposes the weight update δ W into a low-rank product W_A W_B as shown in Equation <ref>, where s ∈ℝ is a fixed scaling factor usually equal to 1/r.
δ W = s W_A W_B
W_A ∈ℝ^in× r, W_B ∈ℝ^r ×out
In practice, LoRA is usually implemented by adding new trainable parameters W_A and W_B, which could be merged back into the original parameters after training. Thus, even though Equation <ref> allows the total update over training time ∑_t δ W_t to have a higher rank than any of the individual matrices, LoRA implementations are restricted by the rank r = max_W_A,W_B rank(W_A W_B).
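The following PyTorch sketch shows such a LoRA-wrapped linear layer with a merge operation; it is a minimal illustration of Equation <ref>, and the released ReLoRA code may differ in details such as dropout, bias handling, and initialization.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update dW = s * W_A W_B."""

    def __init__(self, linear: nn.Linear, r: int, s: float = None):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad_(False)                       # freeze the original weights
        self.W_A = nn.Parameter(torch.empty(linear.in_features, r))
        self.W_B = nn.Parameter(torch.zeros(r, linear.out_features))  # dW = 0 at start
        nn.init.kaiming_uniform_(self.W_A, a=5 ** 0.5)
        self.s = 1.0 / r if s is None else s

    def forward(self, x):
        return self.linear(x) + self.s * (x @ self.W_A) @ self.W_B

    @torch.no_grad()
    def merge(self):
        """Fold the current low-rank update back into the frozen weight matrix."""
        self.linear.weight += self.s * (self.W_A @ self.W_B).T
```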
If we could restart LoRA, meaning we merge W_A and W_B during training and reset the values of these matrices, we could increase the total rank of the update. Doing this multiple times brings the total neural network update to
Δ W = ∑_t = 0^T_1δ W_t + ∑_t = T_1^T_2δ W_t + … + ∑_t = T_N-1^T_Nδ W_t = s W_A^1 W_B^1 + s W_A^2 W_B^2 + … + s W_A^N W_B^N
where the sums are independent enough, meaning that rank(W_A^i W_B^i) + rank(W_A^j W_B^j) ≥ r.
However, implementing restarts is not trivial in practice and requires certain modifications to the optimization procedure. A naïve implementation causes the model to diverge right after the restart.
Unlike plain stochastic gradient descent, which solely relies on the value of the gradient at the current optimization timestep, the Adam <cit.> update is guided mainly by the first and second moments of the gradient accumulated over the previous steps. In practice, the gradient moment smoothing parameters β_1 and β_2 are usually very high, 0.9-0.999.
Let's assume that at the reinitialization boundary W_A^1 and the corresponding gradient moments m_A and v_A, are full-rank (r).
Then, after the merge-and-reinit, continuing to use old gradient moments for W_A^2 will guide it in the same direction as W_A^1 and optimize the same subspace.
To resolve this issue, we propose ReLoRA, which performs a partial reset of the optimizer state during merge-and-reinit and sets the learning rate to 0 with a subsequent warmup.
Specifically, we set 99% of low-magnitude optimizer state values to zero and use a jagged-cosine learning rate schedule (Figure <ref>).
Our ablation studies (Section <ref>) show that both of these modifications are required to improve the performance over vanilla LoRA.
To reiterate, ReLoRA is a low-rank training method inspired by LoRA that uses restarts to increase the effective rank of the update, a partial optimizer reset, a jagged scheduler to stabilize training, and a warm start. All of this allows ReLoRA to achieve performance comparable to full-rank training, especially in large transformer networks, by only training a small set of parameters at a time. ReLoRA is described in Algorithm <ref>.
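A condensed sketch of one restart and of the jagged-cosine schedule is given below. It assumes LoRA-wrapped modules exposing `merge()`, `W_A`, and `W_B` (as in the sketch above) and an Adam-family optimizer; the exact pruning and scheduling details of the released implementation may differ.

```python
import math
import torch

@torch.no_grad()
def relora_restart(model, optimizer, prune_ratio=0.99):
    """Merge-and-reinit plus partial optimizer reset (one ReLoRA restart)."""
    for m in model.modules():
        if hasattr(m, "merge"):
            m.merge()                                    # fold dW into the frozen weights
            torch.nn.init.kaiming_uniform_(m.W_A, a=5 ** 0.5)
            torch.nn.init.zeros_(m.W_B)
    # zero out 99% of the low-magnitude Adam moments so that the next cycle
    # is not pulled back into the previous low-rank subspace
    for state in optimizer.state.values():
        for key in ("exp_avg", "exp_avg_sq"):
            if key in state:
                t = state[key]
                k = int(prune_ratio * t.numel())
                if k > 0:
                    thresh = t.abs().flatten().kthvalue(k).values
                    t.mul_((t.abs() > thresh).to(t.dtype))

def jagged_cosine_lr(step, total_steps, restart_every, warmup=100, lr_max=1e-3):
    """Cosine decay with a short linear re-warmup (from 0) after every restart."""
    cosine = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    ramp = 1.0
    if step >= restart_every:                            # after the first restart
        ramp = min(1.0, (step % restart_every) / warmup)
    return lr_max * cosine * ramp
```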
Enhancing computational efficiency
Unlike other low-rank training techniques <cit.>, ReLoRA follows the LoRA approach by maintaining the frozen weights of the original network and adding new trainable parameters.
At first glance, this may appear computationally inefficient; however, the differentiation between frozen and trainable parameters plays a crucial role in parameter-efficient fine-tuning <cit.>.
These methods achieve significant improvements in training time and memory efficiency by reducing the size of the gradients and the optimizer states.
Notably, Adam states consume twice as much memory as the model weights. Moreover, it is common practice to maintain gradient accumulation buffers in 32-bit precision for large networks, thereby adding significant overhead to the memory consumption of gradients.
By substantially reducing the number of trainable parameters, ReLoRA enables the utilization of larger batch sizes, maximizing hardware efficiency. Additionally, it reduces the bandwidth requirements in distributed setups, which are often the limiting factor in large-scale training.
Furthermore, since the frozen parameters are not being updated between restarts, they can be kept in a low-precision quantized format, further reducing their memory and computational impact. This additional optimization contributes to the overall memory and compute efficiency of ReLoRA, and the benefit increases at scale.
§ EXPERIMENTS
To evaluate the effectiveness of ReLoRA, we apply it to train a transformer language model on the C4 dataset <cit.> using various model sizes: 60M, 130M, 250M, and 350M.
Language modeling has been shown to be a fundamental task in machine learning <cit.>: it enables text and image classification <cit.>, translation <cit.>, programming <cit.>, in-context learning, step-by-step reasoning <cit.>, and many other emergent abilities <cit.>.
Given its significance, we focus solely on language modeling for the purposes of this paper.
Architecture and training hyperparameters
Our architecture is based on transformer <cit.> and closely resembles LLaMA <cit.>. Namely, we use pre-normalization, RMSNorm <cit.>, SwiGLU activations <cit.>, 8/3h fully-connected hidden state size <cit.>, and rotary embeddings <cit.>.
All hyperparameters are presented in Table <ref>.
We use bfloat16 for all floating point operations and Flash attention <cit.> for effective attention computation. Compared to attention in LLaMA, which uses float32 for softmax computation, this increased training throughput by 50-100% without any training stability issues.
Most of our models were trained on 8 RTX 4090 for one day or less.
Due to computational constraints, we train much smaller models than LLaMA, with the largest model having 350M parameters, the same as BERT Large <cit.>.
We select the number of pre-training tokens based on the Chinchilla scaling laws <cit.> for all models, except for the largest one, which we train for 6.8B tokens while 9.5B tokens are Chinchilla-optimal.
and baselines setup
In our low-rank training experiments, ReLoRA replaces all attention and fully-connected network parameters, while keeping the embeddings full-rank. The RMSNorm parametrization remains unchanged. Since ReLoRA-wrapped models have fewer trainable parameters than full-rank training, we include a Control baseline, which is a full-rank transformer with the same number of trainable parameters as ReLoRA.
We initialize ReLoRA from a checkpoint of full-rank training at 5,000 update steps and reset it every 5,000 steps thereafter, 3 times in total. After each reset, 99% of the optimizer state is pruned based on magnitude, and the learning rate is warmed up over the next 100 iterations. ReLoRA parameters are reinitialized following LoRA best practices: Kaiming initialization <cit.> for the A-matrix and zeros for the B-matrix. When restarts are not used, the B-matrix also uses Kaiming initialization to avoid gradient-symmetry issues.
§ RESULTS
Parameter-efficient pre-training
Our main results are presented in Table <ref>.
ReLoRA significantly outperforms low-rank LoRA training, demonstrating the effectiveness of our proposed modifications (ablated in Section <ref>). Furthermore, ReLoRA achieves similar performance to full-rank training, and the performance gap diminishes as network size increases.
Interestingly, the only model in which ReLoRA could not surpass the Control baseline was our smallest model with 60M parameters. This observation suggests that ReLoRA is particularly effective in improving the training of large networks, which aligns with our goal of developing a method that improves large-network training.
High-rank training through low-rank updates
To determine whether ReLoRA performs a higher-rank update than LoRA, we plot the singular value spectrum of the difference between the warm-start weights and the final weights for ReLoRA, LoRA, and full-rank training.
Figure <ref> illustrates significant qualitative differences between LoRA and ReLoRA for the singular values of W_Q, W_K, W_V, and W_down.
While most of the singular values for LoRA are zero (Figure <ref>), with a noticeable number of exceptionally high values above 1.5, ReLoRA exhibits a higher distribution mass between 0.1 and 1.0, reminiscent of full-rank training.
This observation emphasizes the significance of high-rank updates and demonstrates the qualitative efficacy of ReLoRA, which accomplishes a high-rank update by performing multiple low-rank updates.
§.§ Ablation studies
We conduct ablation studies on all four crucial components of ReLoRA: restarts, jagged schedule, optimizer resets, and warm starts, utilizing the 130M-sized model. The results are presented in Table <ref>. In this section, we will focus on and analyze certain combinations of these components.
LoRA
ReLoRA, without the aforementioned components, is essentially equivalent to training a low-rank network parameterized by LoRA. This approach yields remarkably high perplexity, indicating that a simple matrix decomposition has significantly different training dynamics from full-rank training.
Adding restarts and optimizer resets
ReLoRA, without a jagged schedule and optimizer reset, performs similarly to LoRA because old optimizer states force the newly initialized parameters into the same subspace as the prior weights, limiting the model's capacity.
However, doing a naive optimizer reset with ReLoRA causes the model to diverge.
A jagged schedule helps to stabilize training and has a positive impact on the mixture.
In our initial experiments, we also observed that a combination of partial optimizer reset and jagged scheduler allows for a quicker warm-up, as low as 50 steps, instead of hundreds of steps required when the optimizer is initialized from scratch.
Warm start
The warm start shows the most significant improvement, dropping perplexity by almost 10 points.
To investigate whether post-warmup training contributes to the loss, we measured the perplexity of the warmed-up network, which equals 27.03. It outperforms all low-rank methods except for our final recipe but still demonstrates a significant difference from the final network.
This demonstrates the importance of early training, similar to the concept of the lottery ticket hypothesis with rewinding <cit.>.
§ CONCLUSION
In this paper, we investigated low-rank training techniques for large transformer language models. We first examined the limitations of a simple low-rank matrix factorization (LoRA) approach and observed that it struggles to effectively train high-performing transformer models. To address this issue, we proposed a novel method called ReLoRA, which leverages the rank-of-sum property to train a high-rank network through multiple low-rank updates. Similar to the lottery ticket hypothesis with rewinding, we employ a full-rank training warm start before transitioning to ReLoRA. Additionally, ReLoRA introduces a merge-and-reinit (restart) strategy, a jagged learning rate scheduler, and partial optimizer resets, which collectively enhance the efficiency of ReLoRA and bring it closer to full-rank training, particularly in large networks.
ReLoRA's efficiency increases with the network size, making it a viable candidate for multi-billion-scale training.
We firmly believe that the development of low-rank training methods holds great promise for improving the efficiency of training large language models and neural networks in general. Furthermore, low-rank training has the potential to provide valuable insights for the advancement of deep learning theories, aiding our understanding of neural network trainability through gradient descent and their exceptional generalization capabilities in the overparametrized regime.
§ LIMITATIONS AND FUTURE WORK
Scaling beyond 350M
Due to limited computational resources, our experiments were constrained to training language models with up to 350M parameters. Nonetheless, ReLoRA already demonstrates promising results at this scale. However, we anticipate its true potential will be realized in the 1B+ parameter region.
Additionally, while the 350M ReLoRA model outperforms the Control baseline, it does not continue the trend of narrowing the gap between ReLoRA and full-rank training. We attribute this to a suboptimal hyperparameter choice, which requires further investigation.
Furthermore, in the 60-350M experiments, even though ReLoRA significantly reduces the number of trainable parameters, we did not observe substantial improvements in memory and computation for networks of this size.
To evaluate the efficiency of our current implementation at a larger scale, we trained the 1.3B-parameter model for a small number of iterations to estimate memory and compute improvements of ReLoRA. At this scale, we observe 30% memory consumption reduction and 52% training throughput increase.
We expect to observe even bigger improvements over the full-training baseline for larger networks since the number of trainable parameters for ReLoRA, similar to LoRA, increases at a much slower rate compared to the number of frozen parameters.
The ReLoRA implementation could be further improved by effectively utilizing gradient checkpointing for ReLoRA layers, custom backward functions, and converting frozen model weights to int8 or int4 quantized format <cit.>.
Comparison to other low-rank training methods
A number of approaches to low-rank training have been explored with other model architectures in earlier work <cit.>. Two aspects set our work apart from these earlier efforts. First, the approach we propose performs high-rank updates through low-rank training. Second, our work demonstrates competitiveness of the low-rank training methods in large-scale transformer language models with 100M+ parameters.
We thank the Google Cloud for Research Program, Eric Lehman, and Artem Krovosheev for helping with computational resources for this paper.
|
http://arxiv.org/abs/2307.04470v1 | 20230710104044 | Test-Time Adaptation for Nighttime Color-Thermal Semantic Segmentation | [
"Yexin Liu",
"Weiming Zhang",
"Guoyang Zhao",
"Jinjing Zhu",
"Athanasios Vasilakos",
"Lin Wang"
] | cs.CV | [
"cs.CV"
] |
Test-Time Adaptation for Nighttime Color-Thermal Semantic Segmentation
Yexin Liu, Weiming Zhang, Guoyang Zhao, Jinjing Zhu, Athanasios Vasilakos, and Lin Wang^†
Manuscript received April 19, 2023. ^† corresponding author
Y. Liu, W. Zhang, and Jingjin Zhu are with the Artificial Intelligence Thrust, HKUST(GZ), Guangzhou, China. E-mail:[email protected], [email protected], and [email protected]
G. Zhao is with the Robotics and Autonomous Systems Thrust, HKUST(GZ), Guangzhou, China. E-mail:[email protected]
Athanasios V. Vasilakos is with the Center for AI Research (CAIR), University of Agder(UiA), Grimstad, Norway. Email: [email protected]
L. Wang is with the Artificial Intelligence Thrust, HKUST(GZ), Guangzhou, and Dept. of Computer Science and Engineering, HKUST, Hong Kong SAR, China. E-mail: [email protected]
August 12, 2023
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The ability to understand scenes in adverse visual conditions, e.g., nighttime, has sparked active research on color-thermal semantic segmentation.
However, it is essentially hampered by two critical problems: 1) the day-night gap of color images is larger than that of thermal images, and 2) the class-wise performance of color images at night is not consistently higher or lower than that of thermal images.
We propose the first test-time adaptation (TTA) framework, dubbed Night-TTA, to address the problems for nighttime color-thermal semantic segmentation without access to the source (daytime) data during adaptation.
Our method enjoys three key technical parts.
Firstly, as one modality (i.e., color) suffers from a larger domain gap than the other (i.e., thermal),
Imaging Heterogeneity Refinement (IHR) employs an interaction branch, on the basis of the color and thermal branches, to prevent cross-modal discrepancy and performance degradation.
Then, Class Aware Refinement (CAR) is introduced to obtain reliable ensemble logits based on pixel-level distribution aggregation of the three branches.
In addition, we also design a specific learning scheme for our TTA framework, which enables the ensemble logits and the three student logits to collaboratively learn to improve the quality of predictions during the testing phase of our Night-TTA.
Extensive experiments show that our method achieves state-of-the-art (SoTA) performance with a 13.07% boost in mIoU.
Night-time segmentation, TTA, Cross-modal learning.
Night-time segmentation is a critical task for autonomous driving under challenging visual conditions. Existing methods mostly focus on daytime segmentation with perfect illumination. This has inspired active research on color-thermal semantic segmentation, as thermal cameras are less affected by illumination changes and can complement the color modality.
However, thermal images suffer from a lack of large-scale labeled datasets, which are labor-intensive to obtain. TTA allows for the on-the-fly adaptation to different target domains at the testing phase while protecting data privacy. In light of this, we propose the first TTA framework that achieves SoTA nighttime color-thermal segmentation performance at the testing phase without relying on the source (daytime) data. This is practically valuable for real-world application scenarios.
The proposed method presents a robust solution for all-day scene understanding, which may hopefully inspire more research in the community.
§ INTRODUCTION
Recent years have witnessed the success of deep neural networks (DNNs) for color image semantic segmentation, which is crucial for scene understanding, e.g., in autonomous driving <cit.>.
However, models trained in favorable lighting conditions show poor generalization ability to nighttime data. Thus, nighttime image semantic segmentation has become a challenging problem.
Recently, increasing attention has been paid to thermal images because they are inherently robust to illumination changes and may provide complementary semantic information to color images (especially nighttime images) <cit.>. This has sparked research on supervised <cit.> and unsupervised <cit.> color-thermal semantic segmentation, as both modalities can compensate for each other’s deficiencies.
However, existing supervised methods necessitate well-labeled annotations, particularly for thermal images captured during nighttime, which poses significant labor-intensive challenges.
Meanwhile, most unsupervised methods (e.g., unsupervised domain adaptation (UDA)) entail the drawbacks of time-consuming offline domain adaptation training, and their performance is greatly affected by the domain gap, leading to limited adaptation in diverse testing environments.
Therefore, adaptation is non-trivial when only nighttime color-thermal data is available under a limited adaptation overhead. This motivates us to explore a suitable adaptation strategy for nighttime color-thermal semantic segmentation.
Test-Time Adaptation (TTA) <cit.> presents a practical domain adaptation approach that enables the seamless adaptation of pre-trained models to the target domain in real-time during the testing phase. TTA is different from the UDA-based semantic segmentation setting <cit.>: TTA does not need to access source data during adaptation. Moreover, the TTA framework can achieve privacy protection while allowing for on-the-fly adaptation to different target domains during the testing phase without the need for offline domain adaptation training. This is practically valuable for real-world applications. However, directly extending existing TTA methods to color-thermal semantic segmentation leads to suboptimal performance, as demonstrated in Tab. <ref> in the experiments. For example, the entropy minimization of TENT <cit.> generates overconfident predictions; therefore, applying it individually to the color and thermal branches aggravates the color-thermal discrepancy.
Motivation: In this paper, we, for the first time, explore a TTA framework for nighttime color-thermal semantic segmentation without access to the source (daytime) color-thermal data.
Our work addresses two challenges for nighttime color-thermal semantic segmentation arising from the modality differences during TTA, as shown in Fig. <ref>.
(1) Due to the different imaging mechanisms, the day-night domain gap of color images, denoted as G_color, is larger than that of thermal images, denoted as G_T (see Fig. <ref>(a)). This unbalanced difference between G_color and G_T leads to considerable cross-modal discrepancy and performance degradation in the adaptation process.
We refer to this issue as imaging heterogeneity. (2) Existing color-thermal segmentation methods, e.g., <cit.>, apply the same weights to all classes. However, we find that the class-wise performance of color images at night (denoted as P_color) is not consistently higher or lower than that of thermal images (denoted as P_T). Therefore, these methods might neglect the discriminative features of the modality with the smaller weights during the color-thermal nighttime segmentation ensemble process. An example is shown in Fig. <ref>(b), where the performance P_T^person on the class `person' in the thermal image is larger than P_color^person of the color image. We refer to this as class-wise prediction heterogeneity.
To address aforementioned challenges, we propose a novel nighttime TTA framework, called Night-TTA, which consists of three key technical components: (1) Imaging Heterogeneity Refinement (IHR) (Sec. <ref>) and (2) Class Aware Refinement (CAR) (Sec. <ref>) and (3) a learning scheme (Sec. <ref>), as shown in Fig. <ref>(c). For IHR, we propose an interaction branch to obtain the color-thermal cross-modal invariant feature to prevent the performance degradation in the adaptation process caused by the difference in the cross-modal domain gap (G_color>G_T).
Specifically, we first take the color-thermal image pairs as input to the interaction branch and then use the two encoders to obtain the color and thermal features that need to be fused. However, directly fusing the color and thermal features induces inconsistent noises due to the private information in the two individual branches. Therefore, we introduce a novel cross-modal shared attention (CMSA) module to aggregate the cross-modal invariant features while suppressing the noisy ones between the two modalities.
The CAR strategy employs an element-wise entropy-based fusion (EEF) module to generate reliable ensemble logits. This subtly avoids neglecting the discriminative feature information of each class in each branch. Specifically, we first evaluate Shannon entropy in the channel dimension of each student's logits. Then, we re-weight the students' logits to generate more reliable ensemble logits (, teacher) based on the pixel-level distribution of three students. By performing pixel-wise re-weight on the logits of the three branches, the performance advantages of different modalities in different classes can be utilized, and more reliable ensemble logits can be obtained.
Lastly, we present a novel learning scheme to overcome the potential problematic segmentation results during TTA. By utilizing the reliable ensemble logits generated by the EEF module as a self-supervised signal, we enable three student networks to learn from each other through online distillation <cit.> during the adaptation process. This allows our Night-TTA model to fully utilize the discriminative information in each branch, thus preventing the ensemble logits from making false predictions among the categories.
Contribution: In summary, our major contributions are four-fold: (I) We make the first attempt and propose a novel TTA framework for color-thermal semantic segmentation. (II) We propose an IHR strategy with the CMSA module, to reduce the imaging heterogeneity during TTA. We also propose the CAR strategy to take advantage of the segmentation performance of different modalities in different classes and then generate reliable ensemble logits.
(III) For cross-modal ensemble distillation of our Night-TTA framework, we propose a novel learning scheme to achieve cross-modal ensemble distillation in the testing phase. (IV) Extensive experiments demonstrate that our method significantly surpasses the baselines and prior methods (at least 3.11% mIoU improvement on the MF-1 dataset, and 2.69% mIoU improvement on the KP dataset).
§ RELATED WORK
Color-Thermal Image Semantic Segmentation.
Color-thermal segmentation methods can be divided into two main categories: supervised methods and unsupervised methods. The former includes the fusion of multi-modalities using multiple encoders with a shared decoder <cit.> and the translation between the RGB and thermal images <cit.>. MFNet <cit.> extracts features from the color and thermal images using two encoders and expands the receptive field by using the 'mini-inception' module. ABMDRNet <cit.> solves the problems of multimodal disparity and multi-scale contextual information fusion by using a bridging-then-fuse strategy to obtain more discriminative cross-modal information.
UDA-based methods, e.g., HeatNet <cit.>, propose a teacher-student learning method <cit.> to transfer the knowledge from the daytime color image domain to the nighttime thermal image domain to avoid expensive nighttime image annotation. MS-UDA <cit.> enhances the performance of thermal segmentation by transferring knowledge from the color to the thermal modality.
By contrast, we propose the first color-thermal TTA framework that consists of triple student networks for nighttime image semantic segmentation without access to the source domain (daytime) data. Moreover, our TTA framework not only considers the difficulty of the domain gap faced by UDA but also proposes and solves the two novel problems based on the differences between modalities.
Test-Time Adaptation (TTA).
TTA methods enable the model to adapt quickly to the target domain without requiring access to source domain data <cit.>. TTA has been applied to unimodal <cit.> and cross-modal <cit.> segmentation tasks.
For the former task, the typical model Tent<cit.> presents an entropy minimization strategy to optimize affine parameters during testing.
For the Cross-modal segmentation task, xMUDA<cit.> allows the 2D and 3D modalities to learn from each other via imitation, disentangled from the segmentation objective to prevent false predictions. MM-TTA<cit.> proposes two complementary modules to obtain and select more reliable pseudo-labels (from 2D and 3D modalities) as self-learning signals during TTA.
However, directly using previous TTA methods for color-thermal semantic segmentation leads to less optimal performance. Therefore, we propose the IHR and CAR strategies to make our color-thermal TTA framework more robust and generalized, with a unique learning scheme that can perform better in both the training and testing phases.
Ensemble distillation.
Compared with the standard knowledge distillation (KD) paradigm<cit.>, online KD (ensemble distillation)<cit.> enables efficient and single-stage training via collaborative learning among the student networks. Collaborative learning relies on two main ways: students learn from each other <cit.> or generate ensemble logits to supervise their learning<cit.>.
The former methods facilitate peers' mutual learning by sharing knowledge among the student networks. For example, CLNN<cit.> allows multiple classifier heads to share intermediate-level representation for collaborative learning to reduce generalization errors.
The latter methods focus on generating ensemble logits that update each student's network based on the contributions shared by the students. In particular, <cit.> select the logits based on the cross-entropy loss of each student with the true label. However, we cannot access the labels during test time. Therefore, we propose the CAR strategy to generate reliable ensemble logits, which considers the different class-wise performance between the two modalities.
§ METHOD
Overview. In multi-modal TTA for color-thermal image semantic segmentation, we consider a source domain dataset, where each sample consists of daytime paired color images (x_s^color∈ℝ^H × W × 3), thermal images (x_s^T∈ℝ^H × W × 1), and corresponding segmentation ground truth (GT). A source model is trained on the labeled source domain dataset. Usually, the source model consists of a color encoder E_color, a thermal encoder E_T, and the decoder D utilized to generate pixel-level semantic labels. The source model can be denoted as f_θ=D(E_color(x_s^color), E_T(x_s^T)).
Typically, the performance of the source model f_θ is unsatisfactory when confronted with new test data characterized by a different distribution from the source samples. The primary objective of TTA is to enhance the prediction performance in the target domain by conducting model adaptation solely on unlabeled target data. Specifically, given a target dataset t, which comprises nighttime paired color images (x_t^color) and thermal images (x_t^T).
The model is updated by solving min_θ̃ ℒ(𝐱;θ), 𝐱∼ t,
where θ̃⊆θ represents the model parameters that should be updated (e.g., the batch normalization layers) and ℒ denotes the self-supervised loss function.
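In code, restricting the adaptation to the batch-normalization affine parameters amounts to the following helper (a generic PyTorch sketch, not tied to a specific backbone):

```python
import torch
import torch.nn as nn

def collect_bn_affine_params(model):
    """Freeze everything except the BatchNorm affine parameters (gamma, beta),
    which are the only parameters updated during test-time adaptation."""
    params = []
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            for p in (module.weight, module.bias):
                if p is not None:
                    p.requires_grad_(True)
                    params.append(p)
        else:
            for p in module.parameters(recurse=False):
                p.requires_grad_(False)
    return params

# usage: optimizer = torch.optim.Adam(collect_bn_affine_params(model), lr=1e-5)
```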
Prior research works on TTA have employed entropy minimization for single-modality (e.g., color image) semantic segmentation <cit.> or utilized consistency losses and pseudo-labels for cross-modal (e.g., 2D-3D) segmentation <cit.>.
However, as discussed above, applying existing TTA methods directly to color-thermal semantic segmentation poses challenges due to two main factors: imaging heterogeneity and class-wise prediction heterogeneity.
To this end, we propose a novel TTA framework for nighttime color-thermal image semantic segmentation. Specifically, as depicted in Fig. <ref>, the proposed TTA framework consists of color, thermal, and interaction branches, representing three separate student networks.
The color, thermal, and interaction branches take x_t^color, x_t^T, and both as input, respectively.
There are two novel technical components: IHR (Sec. <ref>) and CAR (Sec. <ref>).
To solve the problems caused by imaging heterogeneity, the IHR employs an interaction branch with a novel cross-modal shared attention (CMSA) module to generate reliable pseudo labels. The CMSA module is introduced before the decoder to aggregate the complementary features and suppress the noisy features of the color and thermal modalities.
To solve the problems caused by class-wise prediction heterogeneity, the CAR is buttressed by an element-wise entropy-based fusion (EEF) module to generate the ensemble logits by aggregating the reliable logits from three branches. We also propose a specific learning scheme that enables the three student networks to collaboratively learn to improve the quality of predictions during adaptation.
§.§ Imaging Heterogeneity Refinement (IHR)
The straightforward fusion of the color and thermal branches leads to a noticeable degradation in the segmentation performance due to the significant domain gap between the two modalities, as evidenced by the results presented in Tab. <ref>. To address this challenge, we propose the integration of an interaction branch to facilitate the extraction of cross-modal invariant features, which are crucial for generating reliable pseudo labels.
Specifically, color images provide abundant textual information that is valuable for segmentation tasks, particularly in well-illuminated daytime scenarios. However, their performance suffers greatly when confronted with adverse lighting conditions. On the contrary, thermal images exhibit robustness to illumination changes but exhibit limitations such as lower resolution and ambiguous object boundaries. Therefore, a direct fusion of color and thermal features may introduce inconsistencies caused by the individual characteristics of each modality, undermining segmentation accuracy.
To mitigate these issues, the introduction of the interaction branch aims to exploit the complementary nature of color and thermal modalities. This branch facilitates the extraction of cross-modal invariant features that are resilient to domain gaps, enabling the generation of more reliable pseudo labels. By integrating these cross-modal invariant features with the individual modalities, we can effectively capture both shared and unique information, leading to improved segmentation performance in color-thermal images.
Such inconsistencies may lead to unreliable pseudo labels.
For this reason, we design the CMSA module (see Fig. <ref>) to rectify the noisy features and extract the cross-modal invariant features.
For the CMSA, we first embed both the color (F_color∈ℝ^H × W × C) and thermal (F_T∈ℝ^H × W × C) features into two individual channel (C) attention vectors, V_color^C∈ℝ^C and V_T^C∈ℝ^C. Unlike <cit.>, which rectifies features by utilizing the individual vectors, we generate a shared channel attention vector (V_shared^C∈ℝ^C) by aggregating the vectors from the color-thermal features, so as to maintain the shared features while suppressing the noisy ones.
The channel-wise feature rectification can be described as:
F^C_color =V_shared^C ⊙ F_color +F_color,
F^C_T =V_shared^C ⊙ F_T + F_T.
Similar to the channel-wise rectification, a shared spatial (S) attention vector (V_shared^S∈ℝ^H × W) is embedded to calibrate the local information, which is formulated as follows:
F^S_color =V_shared^S ⊙ F^C_color + F^C_color,
F^S_T =V_shared^S ⊙ F^C_T + F^C_T.
F^S_color and F^S_T are the rectified features after the CMSA module, which are then aggregated and fed to the decoder of the interaction branch. Once the logits are obtained in each branch, pseudo-labels are provided for the CAR.
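A minimal PyTorch sketch of this shared rectification is given below (channel-first tensors; the specific layers used to produce the shared attention vectors are assumptions of this sketch, since the text above does not fully specify them):

```python
import torch
import torch.nn as nn

class CMSA(nn.Module):
    """Cross-Modal Shared Attention: a shared channel vector and a shared
    spatial map, built from both modalities, rectify each feature map."""

    def __init__(self, c):
        super().__init__()
        self.channel_mlp = nn.Sequential(nn.Linear(2 * c, c), nn.ReLU(), nn.Linear(c, c))
        self.spatial_conv = nn.Conv2d(2 * c, 1, kernel_size=7, padding=3)

    def forward(self, f_color, f_t):                       # (B, C, H, W) each
        b, c, _, _ = f_color.shape
        # shared channel attention vector V^C_shared
        pooled = torch.cat([f_color, f_t], dim=1).mean(dim=(2, 3))       # (B, 2C)
        v_c = torch.sigmoid(self.channel_mlp(pooled)).view(b, c, 1, 1)
        f_color_c = v_c * f_color + f_color                 # channel-wise rectification
        f_t_c = v_c * f_t + f_t
        # shared spatial attention map V^S_shared
        v_s = torch.sigmoid(self.spatial_conv(torch.cat([f_color_c, f_t_c], dim=1)))
        return v_s * f_color_c + f_color_c, v_s * f_t_c + f_t_c   # spatial rectification
```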
§.§ Class Aware Refinement (CAR)
To generate ensemble logits, previous methods, e.g., <cit.>, usually assign an image-level weight to each branch by measuring the consistency between the cross-modal branches. This may encounter class performance imbalance problems for color-thermal segmentation due to the class-wise prediction heterogeneity across modalities. Take the cross-modal branches as an example (see Fig. <ref>). We assume that the weights calculated by the existing method for the color and thermal branches are 0.7 and 0.3, respectively. When generating the ensemble logits, all classes in the color branch are assigned a weight of 0.7, while those of the thermal branch are assigned 0.3. This leads to poor segmentation performance for some classes that were originally better segmented in the thermal branch (e.g., person). To alleviate this problem, we propose the EEF module to refine the ensemble logits, as shown in Fig. <ref>.
§.§.§ Element-wise Entropy-Based Fusion (EEF)
The EEF module uses the outputs of the three branches as input, which are denoted as ỹ_1^M, ỹ_2^M, and ỹ_3^M (ỹ_1^M, ỹ_2^M, ỹ_3^M ∈ℝ^H × W × C), respectively, where M ∈{s, t} and C denotes the number of channels. To assign the weight W_i to branch i, the softmax is first computed along the channel dimension. Then, we calculate the Shannon entropy (H(ỹ_i^M) ∈ℝ^H × W × 1) of the logits ỹ_i^M. For each pixel (i, j) ∈ H × W, we can obtain a vector v_i,j∈ 1 × C consisting of the elements of the logits at position (i, j) for all channels. Then, we calculate the Shannon entropy H(v_i,j) of the vector v_i,j:
H(v_i,j)=-∑_C=1^Nsoftmax(v_i,j^C) · log softmax(v_i,j^C),
where v_i,j^C denotes the value of vector v_i,j in channel C. H(ỹ_i^M) is composed of the Shannon entropy (SE) of all vectors v_i,j. Assume that the true label at position (i, j) is k. When the value on the k-th channel becomes larger, the value on other channels diminishes. Then the cross entropy (CE) loss with the label decreases, which means the segmentation performance becomes better. The ideal probability distribution is that the prediction on the k-th channel is close to 1, while the prediction on the other channels is close to 0. In this situation, Shannon entropy will be kept to a relatively small extent. An effective way to generate teacher logits is to re-weight the student's logits based on the element-wise Shannon entropy. For each element in the teacher's logits, the smaller the Shannon entropy in the channel dimension, the greater the weight of the branch. We define the teacher's logits as the combination of all students' weighted logits. The pixel-wise weights W_i of branch i are calculated as:
W_i=e^(1-H(ỹ_i^M))/temp/∑_i=1^3 e^(1-H(ỹ_i^M))/temp,
where W_i ∈ℝ^H × W × 1, temp denotes the temperature. Finally, the teacher's logits are as follows:
ỹ^EN=∑_i=1^3 W_i*ỹ_i^M.
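The EEF computation can be summarized by the following PyTorch sketch (channel-first logits of shape B × C × H × W; a small constant is added before the logarithm for numerical stability):

```python
import torch
import torch.nn.functional as F

def eef_ensemble(logits_list, temp=2.0):
    """Element-wise entropy-based fusion of the three student logits.

    logits_list : list of three tensors of shape (B, C, H, W).
    Returns the pixel-wise weights W_i and the ensemble (teacher) logits.
    """
    entropies = []
    for y in logits_list:
        p = F.softmax(y, dim=1)
        h = -(p * torch.log(p.clamp_min(1e-12))).sum(dim=1, keepdim=True)  # (B, 1, H, W)
        entropies.append(h)
    h = torch.stack(entropies, dim=0)                     # (3, B, 1, H, W)
    w = F.softmax((1.0 - h) / temp, dim=0)                # smaller entropy -> larger weight
    y_en = (w * torch.stack(logits_list, dim=0)).sum(dim=0)
    return w, y_en
```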
§.§ Learning Scheme
For TTA, we denote the updated parameters of the Batch normalization layer of color, interaction, and thermal branch as γ^color, γ^Int, and γ^T, respectively. Given paired color-thermal images, there are i classes in the image. The predictions of different branches can be denoted as P_color={P_color^1,P_color^2,..., P_color^i}, P_Int={P_Int^1,P_Int^2,..., P_Int^i}, and P_T={P_T^1,P_T^2,..., P_T^i}. During TTA, the class-wise segmentation performance of one branch is not consistently higher or lower than the other branches. For some classes, one branch can achieve the best segmentation performance while the other branch could achieve the best performance in other classes. Without loss of generality, we consider the case of three classes where the color, interaction, and thermal branch achieves the best performance on class 1, 2, and 3, respectively. The ensemble logits of traditional methods are calculated by P_EN=P_color+PInt+P_T/3. Then, the consistency loss ℒ_KL^tta, which achieves knowledge distillation from ensemble logits to student logits, is used to train the three branches. During TTA, the parameters of the batch normalization layer γ are updated by:
γ_t^color=γ_t-1^color-β·▽_γℒ_KL^tta(color,EN)
γ_t^Int=γ_t-1^Int-β·▽_γℒ_KL^tta(Int,EN)
γ_t^T=γ_t-1^T-β·▽_γℒ_KL^tta(T,EN)
Based on our assumptions, for class 1, the entropy of the color branch is smaller than that of the ensemble logits (SE(P_color) < SE(P_EN)), whereas the entropies of the interaction and thermal branches are larger than that of the ensemble logits (SE(P_Int) > SE(P_EN) and SE(P_T) > SE(P_EN)). Therefore, although the interaction and thermal branches will improve their segmentation performance, the color branch will suffer performance degradation after optimization. The other two classes have similar results.
To mitigate the issues mentioned above, we propose the EEF module and a learning scheme (See Fig. <ref>).
During TTA, we consider the teacher logits as the self-training signals to update the model. We define KL loss as ℒ_KL^tta(i,EN)= KL(ỹ_i^s,ỹ^EN) to ensure collaborative learning of these three students. Moreover, to boost the performance of all three student networks, we introduce the Shannon entropy loss ℒ_i^tta= SE(ỹ_i^t), and ℒ_EN^tta= SE(ỹ^EN).
For each student network i, the final learning objective is:
ℒ^tta=∑_i=1^3 ℒ_i^tta+λ_1 ℒ_EN^tta+λ_2 ∑_i=1^3 ℒ_KL^tta(i,EN),
where λ_1 and λ_2 are hyperparameters.
Dynamic Weighting of Each Branch. Existing methods for multi-modal test-time adaptation typically assign the same weights to all branches. However, for color-thermal segmentation, the day-night domain gap in color images is more significant than in thermal images. Consequently, utilizing identical weights for all branches can lead to instability during adaptation. To address this issue, we propose a dynamic weighting scheme for these branches, which exclusively affects the loss function without incurring additional computational overhead for model adaptation. Specifically, we introduce a weight ω_i for each branch according to its adaptation extent.
Measuring the extent of adaptation typically relies on labeled samples, which presents a challenge in our problem scenario where training data is unavailable, and the test samples remain unlabeled. Consequently, quantifying the extent of adaptation becomes non-trivial. To address this issue, we propose a novel approach that leverages ensemble logits to estimate the extent of adaptation. In particular, we initially compute the distance between the student logits and the ensemble logits of each branch within a batch. This computation can be formulated as follows:
D_i=1/B∑_b=1^B 1/2(KL(ỹ^EN||ỹ^M)+KL(ỹ^M||ỹ^EN)),
Then we calculate the weights of each branch as follows:
ω_i = D_i/min{D_1, D_2,D_3}
Then, the final objective is:
ℒ^tta=∑_i=1^3 ω_i ℒ_i^tta+λ_1 ℒ_EN^tta+λ_2 ∑_i=1^3 ω_i ℒ_KL^tta(i,EN),
where λ_1 and λ_2 are hyperparameters. With the EEF module, we can generate ensemble logits with small entropy at the pixel level. Then, for each class i, we have SE(P_EN) < SE(P_color), SE(P_EN) < SE(P_Int), and SE(P_EN) < SE(P_T), which means that we have better ensemble logits to train the three branches. Adaptation with our learning scheme can continuously improve the segmentation performance of the three student branches through ensemble distillation, so as to gradually produce more accurate segmentation results.
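A compact sketch of the dynamic branch weights and of the total test-time objective is given below (PyTorch; treating the ensemble logits as a fixed distillation target is one possible design choice of this sketch rather than a detail specified above):

```python
import torch
import torch.nn.functional as F

def branch_weights(student_logits, ensemble_logits):
    """D_i = batch-averaged symmetric KL between branch i and the ensemble,
    then w_i = D_i / min_j D_j."""
    log_en = F.log_softmax(ensemble_logits, dim=1)
    dists = []
    for y in student_logits:
        log_s = F.log_softmax(y, dim=1)
        kl_1 = F.kl_div(log_s, log_en, log_target=True, reduction="batchmean")
        kl_2 = F.kl_div(log_en, log_s, log_target=True, reduction="batchmean")
        dists.append(0.5 * (kl_1 + kl_2))
    d = torch.stack(dists)
    return d / d.min()

def tta_loss(student_logits, ensemble_logits, lam1=1.0, lam2=1.0):
    """L^tta = sum_i w_i SE_i + lam1 * SE_EN + lam2 * sum_i w_i KL(i, EN)."""
    def shannon_entropy(y):
        return -(F.softmax(y, dim=1) * F.log_softmax(y, dim=1)).sum(dim=1).mean()

    w = branch_weights(student_logits, ensemble_logits).detach()
    teacher = F.softmax(ensemble_logits, dim=1).detach()
    loss = lam1 * shannon_entropy(ensemble_logits)
    for w_i, y in zip(w, student_logits):
        kl = F.kl_div(F.log_softmax(y, dim=1), teacher, reduction="batchmean")
        loss = loss + w_i * (shannon_entropy(y) + lam2 * kl)
    return loss
```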
§ EXPERIMENTS
§.§ Datasets
MF dataset. It contains 1569 images (784 for training, 392 for validation, and 393 for test) in which 820 daytime and 749 nighttime images are mixed in training, validation, and test sets. The resolution of images is 480×640 with annotated semantic labels for 8 classes.
To evaluate our method, we remove the nighttime color-thermal image pairs from the original training and validation sets and remove the daytime color-thermal image pairs from the original test set to form a new dataset (410 for training, 205 for validation, and 188 for test), which is denoted as MF-1. For UDA methods, to our knowledge there exist only two UDA methods (HeatNet and MS-UDA) for nighttime image semantic segmentation leveraging color and thermal images. Thus, we compare the segmentation performance with these two methods. For a fair comparison, we use the same training and testing sets as MS-UDA: we reorganize the daytime and nighttime images in the MF dataset as training and testing sets (820 daytime images for training and 749 nighttime images for testing), which is denoted as MF-2. Three categories of labels overlapping with the KP dataset (i.e., car, person, and bike) are used for evaluation.
The modified KP dataset. The KAIST Multispectral Pedestrian Detection (KP) dataset <cit.> is a color-thermal paired urban driving dataset without semantic segmentation labels. Kim et al. <cit.> created a modified KP dataset with manually annotated 503 daytime and 447 nighttime color-thermal image pairs and pixel-level labels for 19 classes consistent with Cityscapes <cit.>. The resolutions of the color and thermal images are 512 × 640 × 3 and 512 × 640 × 1, respectively.
§.§ Implementation Details
The proposed method is implemented using PyTorch libraries with a single A6000 GPU.
Source model.
As the first TTA framework for nighttime color-thermal semantic segmentation, our approach adopts a three-branch network structure. Each branch adopts the (initially untrained) encoder and decoder from FEANet <cit.>, which, after supervised training, already reach good performance, to obtain the logits. We use the FEANet encoder and decoder as the source model without changing the network architecture.
Pre-training the source model.
In our experimental setting, we use daytime data for training and nighttime data for testing. However, the source model from FEANet was trained and tested on a day-night mixed dataset, which follows a different dataset-splitting scheme from ours. Therefore, we pre-train the source encoder and decoder on the source-domain dataset.
For a fair comparison, we follow the training details of FEANet, differing only in the dataset split used.
Test-time Adaptation Details.
We apply the source model, trained only on daytime data, to each branch and use unlabeled nighttime paired data as input for test-time adaptation. Similar to previous TTA methods <cit.>, we only optimize the batch-normalization affine parameters, for one epoch. The learning rate for the three sub-networks is set to 1e-5. The temperature is set to 2.
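As a sketch of this adaptation setup, only the BatchNorm affine parameters of each branch are exposed to the optimizer; the helper below and the choice of Adam are our own illustration (a Tent-style convention), not taken from the released code.

    import torch

    def bn_affine_parameters(model):
        # Freeze everything, then re-enable only the BatchNorm affine parameters
        # (gamma/beta), which are the parameters updated during TTA.
        for p in model.parameters():
            p.requires_grad_(False)
        params = []
        for m in model.modules():
            if isinstance(m, torch.nn.BatchNorm2d):
                m.weight.requires_grad_(True)
                m.bias.requires_grad_(True)
                params += [m.weight, m.bias]
        return params

    # One optimizer over the three branches, matching the 1e-5 learning rate above
    # (optimizer choice is illustrative):
    # optimizer = torch.optim.Adam(
    #     bn_affine_parameters(color_net) + bn_affine_parameters(int_net)
    #     + bn_affine_parameters(thermal_net), lr=1e-5)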
§.§ Comparative Studies
We evaluate the proposed framework against state-of-the-art TTA methods on the MF-1, MF-2, and modified KP datasets.
MF-1 dataset. We compare our TTA framework with uni-modal and multi-modal TTA frameworks on the MF-1 dataset. The quantitative and qualitative results are shown in Tab. <ref> and Fig. <ref>. The proposed Night-TTA brings a significant adaptation gain on nighttime color-thermal image semantic segmentation compared to the source model (a 13.07% mIoU improvement). Specifically, in Tab. <ref>, we compare the segmentation performance of different TTA frameworks across three categories: Car, Person, and Bike. The experimental data show that our TTA framework yields a notable improvement in segmentation performance for all three categories. Moreover, our Night-TTA achieves a substantial advantage over both uni-modal TTA methods, with an improvement of over 17.34% in mIoU. It should be noted that directly applying the uni-modal TTA methods would degrade the segmentation performance. Our method also surpasses multi-modal TTA methods with an improvement of over 3.01% in mIoU.
MF-2 dataset. We also compare our method with existing UDA methods. The results are shown in Tab. <ref>. In the MF-2 setting, where training is conducted on daytime data and testing on nighttime data, our Night-TTA approach shows a clear advantage over UDA methods, achieving a significant 6.05% improvement over MS-UDA. These results highlight the effectiveness of our Night-TTA framework in addressing domain adaptation for nighttime semantic segmentation.
The modified KP dataset. Tab. <ref> and Fig. <ref> show the quantitative and qualitative results. The proposed Night-TTA performs better than existing nighttime color-thermal image semantic segmentation methods and achieves the best segmentation performance in most categories. In addition, our proposed learning scheme improves the segmentation performance of the source model (from 36.35% mIoU to 47.77% mIoU) more significantly than other TTA methods (which reach at most 45.08% mIoU).
§.§ Ablation Studies and Analysis
1) Imaging Heterogeneity Refinement
1) Interaction Branch.
We validate the effectiveness of the proposed interaction branch on the MF-1 dataset. The results are shown in Tab. <ref>. For single-modal nighttime semantic segmentation, thermal imaging performs better than color imaging, highlighting its robustness and reliability in low-light environments. Compared with single-modal segmentation, the multi-modal (color-thermal) setting achieves better performance. Moreover, the dual-path variant (without the interaction branch) worsens the segmentation performance (from 49.71% mIoU to 32.16% mIoU when using EEF), demonstrating the interaction branch's effectiveness.
2) CMSA.
We conduct additional experiments to validate the efficacy of the CMSA module, comparing its performance in an interaction-only network and a complete network. The results, presented in Tab. <ref>, demonstrate the significant improvements achieved by the CMSA module in both the interaction-only network (from 35.82% mIoU to 41.26% mIoU) and the triple branches networks (from 49.71% mIoU to 52.06%).
2) Class Aware Refinement
1) EEF module.
We compare EEF module against different methods of generating the ensemble logits (as shown in Tab. <ref>).
The 'Merge' approach denotes taking the mean of the logits from the three branches, while 'IE' refers to methods based on image-level entropy <cit.>. The results demonstrate that our EEF module performs better than the other strategies, with an increase of 5.64% (from 47.52% mIoU to 53.16% mIoU) over 'Merge' and 6.79% (from 46.37% mIoU to 53.16% mIoU) over 'IE'. This highlights the superior performance of our EEF module in ensemble logits generation.
2) Learning Scheme.
In this experiment, λ_1 and λ_2 are set to 1. Tab. <ref> shows the quantitative results. The data indicate that using any individual loss alone, or combining any two losses, already improves adaptation performance. The three losses ℒ_i^tta, ℒ_EN^tta, and ℒ_KL^tta(i,EN) contribute similarly during TTA, with ℒ_KL^tta(i,EN) playing a slightly more important role than the others. It should be noted that our full learning scheme significantly improves the performance of the source model (by 13.07% mIoU).
3) Sensitivity Analysis
1) Batch size.
We explore the impact of batch size on the semantic segmentation performance of different TTA methods (as shown in Tab. <ref>). The results indicate that a small batch size (1 or 2) leads to degraded segmentation performance, while a larger batch size (4 or 8) results in improved performance. Tab. <ref> shows that the TTA method looks very sensitive to batch size. This sensitivity can be attributed to the parameters updated by the TTA method during the test phase, primarily within the batch normalization layer. Increasing the batch size brings the testing data in a batch closer to the real data contribution during the adaptation process, thus improving the segmentation performance. The proposed method consistently performs well across different batch sizes.
It outperforms the other evaluated TTA methods in terms of mIoU, showcasing its effectiveness in semantic segmentation tasks. For example, at a batch size of 8, the proposed method achieves mIoU of 53.16, surpassing the mIoU of the other methods (ranging from 49.28 to 50.05).
2) Robustness to perturbations.
We further evaluate the robustness of our method on the MF dataset. We conduct an ablation study to evaluate the impact of different input perturbations during test-time adaptation. Three types of perturbations are applied: image cropping, brightness adjustment, and the addition of Gaussian noise. Specifically, we crop the image at a rate of 0.2, randomly add Gaussian noise (noise range set to 5), or adjust the brightness of the images to build three new test sets. Tab. <ref> shows the quantitative results of the different TTA methods. We conclude that our method is more robust to noise and image corruption.
3) Parameters updated in TTA.
We analyze the TTA performance with respect to which network layers are updated during adaptation. Three scenarios are considered: updating only the encoder parameters, updating only the decoder parameters, and updating both the encoder and decoder parameters. The experiment is conducted with a batch size of 8. Tab. <ref> presents the results of updating the affine parameters in different network parts. When only the encoder parameters are updated during TTA, the method achieves an mIoU of 48.71%. Updating only the decoder parameters results in the best performance, with an mIoU of 53.16%.
§ DISCUSSION
For the IHR strategy, naively combining the individual color and thermal branches yields subpar performance due to modality gap and noise (Fig. <ref>). The proposed IHR strategy enhances prediction reliability by incorporating an interaction branch and a CMSA module. The CMSA module effectively combines cross-modal invariant features while suppressing noisy information between color and thermal modalities. Evaluating with nighttime color-thermal image pairs, we observe a performance gap between color and thermal branch logits without IHR, along with considerable noise in ensemble logits. By introducing the interaction branch and CMSA module, the discrepancy between color and thermal branch logits decreases, resulting in ensemble logits that align better with ground truth labels. This reduction in cross-modal discrepancy highlights the effectiveness of the interaction branch in mitigating the influence of image heterogeneity.
As the first TTA framework for this task, we design three branches to generate reliable pseudo labels without heavily optimizing parameter counts and computational cost, which is typical of other cross-modal TTA methods, e.g., <cit.>. Future work will focus on designing more compact frameworks. Moreover, while our TTA framework is specifically designed for nighttime color-thermal semantic segmentation, it could potentially be applied to other types of multi-modality data, for instance color and event data or color and depth data, opening up opportunities for broader applicability.
§ CONCLUSION
In this paper, we addressed two potential problems of nighttime color-thermal image semantic segmentation to reduce the cross-modal discrepancy via test time adaptation (TTA) with cross-modal ensemble distillation. We presented a novel TTA framework, dubbed Night-TTA, with two novel refinement strategies: imaging heterogeneity refinement (IHR) and class aware refinement (CAR). In the experiments, both strategies were shown effective in achieving credible performance. The experimental results also proved the benefits of our learning scheme. Moreover, for nighttime color-thermal semantic segmentation, Night-TTA outperformed the existing methods by a considerable margin.
Yexin Liu is an MPhil student in the Visual Learning and Intelligent Systems Lab, Artificial Intelligence Thrust, The Hong Kong University of Science and Technology, Guangzhou (HKUST-GZ). His research interests include infrared- and event-based vision, and unsupervised domain adaptation.

Weiming Zhang is a research assistant in the Visual Learning and Intelligent Systems Lab, Artificial Intelligence Thrust, The Hong Kong University of Science and Technology, Guangzhou (HKUST-GZ). His research interests include event-based vision, deep learning, etc.

Guoyang ZHAO is an MPhil student in the Intelligent Autonomous Driving Center, Thrust of Robotics and Autonomous Systems, The Hong Kong University of Science and Technology, Guangzhou (HKUST-GZ). His research interests include vision-based perception systems and deep learning.

Jinjing Zhu is a Ph.D. student in the Visual Learning and Intelligent Systems Lab, Artificial Intelligence Thrust, The Hong Kong University of Science and Technology, Guangzhou (HKUST-GZ). His research interests include CV (image classification, person re-identification, action recognition, etc.), DL (especially transfer learning, knowledge distillation, multi-task learning, semi-/self-unsupervised learning, etc.), omnidirectional vision, and event-based vision.

Athanasios V. Vasilakos is with the Center for AI Research (CAIR), University of Agder (UiA), Grimstad, Norway. He served or is serving as an Editor for many technical journals, such as the IEEE TRANSACTIONS ON AI, IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, IEEE TRANSACTIONS ON CLOUD COMPUTING, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, IEEE TRANSACTIONS ON CYBERNETICS, IEEE TRANSACTIONS ON NANOBIOSCIENCE, IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, ACM Transactions on Autonomous and Adaptive Systems, and the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS. He is a WoS highly cited researcher (HC).

Lin Wang (IEEE Member) is an assistant professor in the AI Thrust, HKUST-GZ, HKUST FYTRI, and an affiliate assistant professor in the Dept. of CSE, HKUST. He did his Postdoc at the Korea Advanced Institute of Science and Technology (KAIST). He received his Ph.D. (with honors) and M.S. from KAIST, Korea. He has rich cross-disciplinary research experience, covering mechanical, industrial, and computer engineering. His research interests lie in computer and robotic vision, machine learning, intelligent systems (XR, vision for HCI), etc.
| http://arxiv.org/abs/2307.04274v1 | 20230709223246 | Assessing the efficacy of large language models in generating accurate teacher responses | Yann Hicke, Abhishek Masand, Wentao Guo, Tushaar Gangavarapu | cs.CL | cs.CL, cs.LG |
Automated Essay Scoring in Argumentative Writing: DeBERTeachingAssistant
August 12, 2023
========================================================================
<cit.> organized the shared task hosted by the 18th Workshop on Innovative Use of NLP for Building Educational Applications on generation of teacher language in educational dialogues. Following the structure of the shared task, in this study, we attempt to assess the generative abilities of large language models in providing informative and helpful insights to students, thereby simulating the role of a knowledgeable teacher. To this end, we present an extensive evaluation of several benchmarking generative models, including GPT-4 (few-shot, in-context learning), fine-tuned GPT-2, and fine-tuned DialoGPT. Additionally, to optimize for pedagogical quality, we fine-tuned the Flan-T5 model using reinforcement learning. Our experimental findings on the Teacher-Student Chatroom Corpus subset indicate the efficacy of GPT-4 over other fine-tuned models, measured using BERTScore and DialogRPT.
We hypothesize that several dataset characteristics, including sampling, representativeness, and dialog completeness, pose significant challenges to fine-tuning, thus contributing to the poor generalizability of the fine-tuned models. Finally, we note the need for these generative models to be evaluated with a metric that relies not only on dialog coherence and matched language modeling distribution but also on the model's ability to showcase pedagogical skills.
§ INTRODUCTION
The advent of powerful open-source generative language models such as GPT-2 <cit.>, T5 <cit.>, OPT <cit.>, BLOOM <cit.>, Flan-T5 <cit.>, or LLaMA <cit.> has led to significant developments in conversational agents, opening avenues for various applications in education <cit.>. Such AI-driven educational dialogues offer the potential for skill improvement and personalized learning experiences, with intelligent tutoring systems increasingly gaining traction <cit.>. However, deploying AI-based teachers in the educational ecosystem demands the careful modeling and evaluation of these agents to ensure their capability to address critical pedagogical concerns.
<cit.> created the AI teacher test challenge which follows the recommendations from <cit.> (pp. 67-72) stating that, if we want to put generative models into practice as AI teachers, it is imperative to determine whether they can (a) speak to students like a teacher, (b) understand students, and (c) help students improve their understanding.
Taking inspiration from the AI teacher test challenge, which asks whether state-of-the-art generative models are good AI teachers capable of replying to a student in an educational dialogue, this paper investigates the applicability of reinforcement learning (RL) techniques to the generation of AI teacher responses within educational dialogues. The AI teacher test challenge emphasizes the need for a systematic evaluation of generative models to ensure that they can effectively communicate with students, comprehend their needs, and facilitate their academic improvement. Can we guide the language generator with RL to help it focus on these pedagogical requirements?
<cit.> organized the shared task hosted by the 18th Workshop on Innovative Use of NLP for Building Educational Applications on the generation of teacher language in educational dialogues. Following the structure of the shared task, in this study, we aim to evaluate the potential of combining state-of-the-art generative language models with reinforcement learning algorithms to generate AI teacher responses in the context of real-world educational dialogues sourced from the Teacher-Student Chatroom Corpus <cit.>. The natural baselines for the task at hand are SOTA closed-source models such as GPT-4, and fine-tuned open-source pre-trained models such as GPT-2 <cit.>. We first evaluate these natural baselines before evaluating pre-trained models fine-tuned with RL techniques that optimize for pedagogical quality.
By exploring the role of reinforcement learning in guiding the generation of AI teacher responses, we aim to advance the discourse on the utilization of conversational agents in educational settings and contribute innovative ideas to the ongoing shared task on the generation of teacher language in educational dialogues at the 18th Workshop on Innovative Use of NLP for Building Educational Applications.
The rest of this paper is structured as follows. Section 2 offers a comprehensive review of relevant literature in the areas of AI-driven educational dialogues and reinforcement learning-based language generation. Section 3 discusses the analysis and processing of the dataset prior to conducting any language modeling tasks. In Section 4, the proposed model and its methodology for generating AI teacher responses in educational interactions are introduced. Section 5 evaluates the effects of our approach on the quality and relevance of the generated AI teacher responses and highlights key observations. Finally, Section 6 concludes the paper and explores potential directions for future research.
§ RELATED WORK
A variety of related literature exists in the realm of conversational teaching between a student and a teacher. In this section, we review several notable works addressing aspects of teacher-student dialogues, foundation models, and conversational datasets, which have contributed to the progress and understanding of generative models in educational contexts.
Teacher-Student Dialogues
One prominent resource in educational dialogues is the National Council of Teachers of English (NCTE) dataset <cit.>. It includes numerous examples of teacher-student interactions, which can serve as a valuable resource for the training and evaluation of generative models in an educational context.
The SimTeacher dataset <cit.> is an assemblage of information obtained through a "mixed-reality" simulation platform. This unique environment aids beginner educators in honing essential skills for classroom settings by employing student avatars managed by human actors. All aspiring teachers from a prominent public university participate in several brief simulation sessions throughout their educational preparation program, focusing on improving their ability to encourage more profound textual understanding among students. The original researchers annotated a variable called "quality of feedback" within the transcript, determining how effectively teachers proactively assist students.
In <cit.>, we can find a dataset collected from an education technology company that provides on-demand text-based tutoring for math and science. With a mobile application, a student can take a picture of a problem or write it down and is then connected to a professional tutor who guides the student to solve the problem. The dataset represents, after some selection, 108 tutors and 1821 students. Each session is associated with two outcome measures: (1) student satisfaction scores (1-5 scale) and (2) a rating by the tutor manager based on an evaluation rubric (0-1 scale).
Foundation Models
<cit.> provided a comprehensive analysis of the opportunities and risks of foundation models, including insights into their use in educational applications. They identified potential benefits, such as personalized learning and accessibility, while also highlighting the major risks, such as unfair biases and the generation of harmful content. This work establishes the need for carefully crafted benchmarks and evaluations to assess the potential of generative models in education.
The AI Teacher Test <cit.> builds on this idea by examining the performance of generative models such as GPT3 <cit.> and Blender <cit.> in generating appropriate and informative responses in a teacher-student dialogue.
Kasneci et al. <cit.> conducted an investigation to understand the effectiveness of ChatGPT <cit.> as a tool for educational support. They analyzed the model's performance in a student-tutoring context, examining its ability to provide accurate, relevant, and engaging responses for learners. By identifying the strengths and weaknesses of ChatGPT in this specific setting, they contributed to a better understanding of how generative models can be successfully deployed in educational applications.
Our work builds on these foundations by evaluating the potential of combining reinforcement learning with generative models to enhance the performance of AI teacher agents in educational dialogues.
Conversational Uptake
<cit.> introduced the concept of uptake as a way to comprehend the effectiveness of conversational responses in a teacher-student dialogue. It laid the groundwork for the evaluation of generative models in dialogues by taking into account the relevance and appropriateness of model-generated responses.
Demszky et al. <cit.> further explored the concept of Conversational Uptake by proposing metrics to assess the success of responses in maintaining and advancing a conversation. By applying these metrics to AI-generated responses, their work contributes to the evaluation of models in realistic conversation settings, including teacher-student dialogues. Our work attempts to guide the language generation process with similar goals in mind. We hope to find proxies of pedagogical quality through NLP metrics such as BERTScore combined with DialogRPT.
We continue by reviewing the literature utilizing reinforcement learning as a guide for language generation.
Reinforcement Learning for language generation
Policy gradient-based algorithms and their variants have been widely used in text generation to optimize sequence-level metrics <cit.>. Off-policy Reinforcement Learning (RL) is also commonly used in dialogue applications where online interaction with users is expensive <cit.>. The main difference in our work is that we take advantage of demonstrations and design generic reward functions for generation tasks.
We extend this concept to educational contexts by employing reinforcement learning to guide the generation of AI teacher responses in educational dialogues. We focus on optimizing the responses of fine-tuned generative models based on a reward system designed to enhance the pedagogical quality of the generated responses. Recently, Ramamurthy et al. <cit.> explored the efficacy of using RL to optimize language models in several natural language processing tasks, including text classification, sentiment analysis, and language generation. They developed a library, RL4LMs, which provides a generic framework for deploying RL-based language models for various tasks. We build on top of the RL4LMs framework by adding a new task to its existing array of tasks which we hope can be added as a standard for any future RLHF benchmark.
§ DATA
The shared task for BEA 2023 is based on the Teacher-Student Chatroom Corpus (TSCC) <cit.>. This corpus comprises data collected from 102 chatrooms where English as a Second Language (ESL) teachers interact with students to work on language exercises and assess the students' English language proficiency.
§.§ Data Extraction and Format
From each dialogue in the TSCC, several shorter passages were extracted. Each passage is at most 100 tokens long, consisting of several sequential teacher-student turns (i.e., the preceding dialogue context) and ending with a teacher utterance (i.e., the reference response). These short passages are the data samples used in this shared task.
The data samples are formatted using a JSON structure inspired by ConvoKit <cit.>; an illustrative sample is sketched after this list. Each training sample is represented as a JSON object with three fields:
* id: a unique identifier for the sample.
* utterances: a list of utterances corresponding to the preceding dialogue context. Each utterance is a JSON object with a "text" field containing the utterance and a "speaker" field containing a unique label for the speaker.
* response: a reference response, which corresponds to the final teacher's utterance. This utterance is a JSON object with a "text" field containing the utterance and a "speaker" field containing a unique label for the speaker.
Each test sample is represented as a JSON object that uses the same format as the training sample but excludes the reference response. As a result, each test sample has two fields:
* id: a unique identifier for the sample.
* utterances: a list of utterances, which corresponds to the preceding dialogue context. Each utterance is a JSON object with a "text" field containing the utterance and a "speaker" field containing a unique label for the speaker.
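For concreteness, a minimal training sample in this format could look as follows (shown here as a Python dictionary); the identifier and dialogue content are invented purely for illustration and do not come from the TSCC.

    sample = {
        "id": "train_0001",  # hypothetical identifier
        "utterances": [
            {"text": "I writed my essay yesterday.", "speaker": "student"},
            {"text": "Nice work finishing it! Which past form of 'write' should we use?",
             "speaker": "teacher"},
            {"text": "Oh, I wrote my essay yesterday.", "speaker": "student"},
        ],
        "response": {
            "text": "Exactly, 'wrote' is the irregular past tense of 'write'.",
            "speaker": "teacher",
        },
    }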
§.§ Data Distribution and Characteristics
The TSCC corpus is divided into three sets: train, dev, and test, each comprising 2747, 305 and 273 of the samples, respectively.
The corpus has 3325 samples, and each sample has an average length of 7.52 turns, with about 7.33 tokens per turn on average.
Table <ref> presents a summary of the statistics of the TSCC corpus across the training, development, and testing sets.
The TSCC corpus exhibits several characteristics that are specific to educational dialogues and pose challenges to natural language generation models. For instance, the dialogues often include technical vocabulary and idiomatic expressions related to English language learning. Additionally, the dialogues can be highly varied in terms of topic, complexity, and participant proficiency. Finally, the dialogues can include challenging responses which are based on out-of-context information, posing challenges for conversational agents. These characteristics must be taken into consideration when selecting and evaluating generative models for the TSCC corpus.
§.§ Data Overlap and Challenges
It is worth noting that the released development and training sets in the TSCC dataset have some overlaps, as individual conversation samples within these sets have been generated by creating chunks from larger conversations. This overlap may lead to potential biases and overfitting when training and evaluating models on this dataset. However, the test set for the BEA 2023 shared task is free of overlaps, allowing for a more accurate assessment of the model's performance in generating AI teacher responses.
The presence of overlaps in the development and training sets posed a challenge, as models inadvertently learned to predict teacher responses based on the similarities between the samples rather than genuinely understanding the context and dynamics of the teacher-student interaction. It is essential to be aware of this issue and devise strategies to mitigate the risks associated with such overlaps and ensure that the models are robust and capable of handling diverse and unseen scenarios.
To ensure the validity of our model on the validation set, we employed an iterative inclusion process to create a train-val split without any overlap between them. This process involved carefully selecting and excluding samples from the training set that had any similarity or overlap with the samples in the development set. This approach aimed to minimize the risk of data leakage and ensure that our model was evaluated on a truly unseen set of dialogues.
§ METHODS
The primary objective of our study is to investigate the potential of using in-context learning, supervised fine-tuning, and reinforcement learning to generate AI teacher responses in educational dialogues. Our proposed methods will be evaluated using the Teacher Student Chatroom Corpus (TSCC) dataset. In this section, we provide an overview of the three main parts of our methodology: in-context learning using GPT-4, supervised fine-tuning with existing models such as GPT-2 and DialoGPT, and supervised fine-tuning with Reinforcement Learning using the RL4LMs library <cit.>.
§.§ In-context Learning
§.§.§ GPT-4
As a preliminary step, we investigate the potential of in-context learning using GPT-4, a state-of-the-art language model. It generates educational dialogues based on its pre-trained knowledge, which has been acquired from a vast corpus of text data during its training process (the pre-training data might have included the test set; we will address this issue in the discussion section).
To evaluate the performance of GPT-4, we prompted it in a few-shot fashion. We retrieved the 5 most similar teacher-student conversations from the TSCC dataset and provided them to the model in addition to the current conversation and instructions about the model's role as a teacher. Details about the prompt construction that helps guide the model toward generating suitable responses as a teacher can be found in Appendix <ref>.
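The retrieval-augmented prompting can be sketched as follows, using the legacy (pre-1.0) openai Python interface; the exact prompt wording and generation hyperparameters are listed in the Appendix, and the helper names here are our own.

    import numpy as np
    import openai

    def embed(texts):
        # text-embedding-ada-002 embeddings for a list of conversation strings.
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return np.array([d["embedding"] for d in resp["data"]])

    def top_k_similar(query, corpus, k=5):
        # Cosine similarity between the current context and each TSCC conversation.
        vecs = embed([query] + corpus)
        q, m = vecs[0], vecs[1:]
        sims = m @ q / (np.linalg.norm(m, axis=1) * np.linalg.norm(q) + 1e-8)
        return [corpus[i] for i in np.argsort(-sims)[:k]]

    def teacher_reply(context, corpus, system_prompt):
        examples = "\n\n".join(top_k_similar(context, corpus, k=5))
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": examples + "\n\n" + context},
        ]
        resp = openai.ChatCompletion.create(model="gpt-4-0314", messages=messages,
                                            temperature=1, max_tokens=100, top_p=1)
        return resp["choices"][0]["message"]["content"]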
§.§ Supervised Fine-tuning
To further adapt pre-trained language models to the specific educational context and generate more accurate and context-aware teacher responses, we explore supervised fine-tuning using GPT-2 and DialoGPT models.
§.§.§ GPT-2
GPT-2 <cit.> is a decoder-only large language model pre-trained on WebText; we used GPT-2 Large, which has 36 transformer decoder blocks and 774 million parameters.
We fine-tune the GPT-2 model <cit.> on the Teacher-Student Chatroom Corpus (TSCC) dataset using the Hugging Face library. For each educational dialogue, we concatenate all dialogue turns into a single string annotated with speaker roles, i.e., student or teacher. The input to the GPT-2 model is thus a sequence of text representing the conversation history, culminating in the teacher's response. We then fine-tune GPT-2 Large <cit.> with a causal language modeling objective. The exact hyperparameters used during fine-tuning can be found in the Appendix.
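A condensed Hugging Face sketch of this fine-tuning step is given below. The "speaker: text" flattening format and the 4 x 8 gradient-accumulation split of the effective batch size of 32 are our own illustrative choices; other hyperparameters follow the Appendix.

    from transformers import (AutoTokenizer, AutoModelForCausalLM, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained("gpt2-large")

    def flatten(sample):
        # Concatenate context turns and the reference teacher response into one string.
        turns = [f'{u["speaker"]}: {u["text"]}' for u in sample["utterances"]]
        turns.append(f'{sample["response"]["speaker"]}: {sample["response"]["text"]}')
        return "\n".join(turns) + tokenizer.eos_token

    def build_trainer(train_samples):
        # train_samples: parsed TSCC training JSON objects (loaded elsewhere).
        encoded = [tokenizer(flatten(s), truncation=True, max_length=1024)
                   for s in train_samples]
        args = TrainingArguments(output_dir="gpt2-tscc", num_train_epochs=10,
                                 per_device_train_batch_size=4,
                                 gradient_accumulation_steps=8,   # effective batch of 32
                                 learning_rate=1e-5)
        return Trainer(model=model, args=args, train_dataset=encoded,
                       data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))

    # build_trainer(train_samples).train()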
After the fine-tuning process, we evaluated the fine-tuned GPT-2 model's performance on the test set by comparing its generated teacher responses to reference responses, assessing the model's ability to generate context-aware and educationally relevant responses in line with the teacher's role in the TSCC dataset.
§.§.§ DialoGPT
DialoGPT <cit.> is a dialogue model based on the GPT-2 architecture, specifically designed for generating conversational responses. It is trained on 147M conversation-like exchanges extracted from Reddit <cit.> with a causal language modeling objective over multi-turn dialogues. We format our training data in the same way as DialoGPT's pretraining data and then prompt DialoGPT to generate the teacher utterances in the validation set. After training, we follow the same evaluation methodology as for GPT-2, discussed in the previous section.
§.§ Supervised Fine-tuning with Reinforcement Learning
§.§.§ Flan-T5 Fine-tuned with RL4LMs
To optimize the generative models for pedagogical quality, we explore the use of reinforcement learning techniques in the fine-tuning process. We employ the RL4LMs library <cit.>, which provides an efficient and scalable framework for reinforcement learning-based language model fine-tuning.
The RL4LMs library incorporates Proximal Policy Optimization (PPO) <cit.> as the reinforcement learning algorithm, which is known for its stability and sample efficiency. The library also supports the integration of custom reward functions, allowing us to design rewards that encourage the generation of pedagogically sound teacher responses.
To implement the reinforcement learning-based fine-tuning, we first fine-tune the Flan-T5 <cit.> model on the TSCC dataset using supervised learning, as described in the previous section. Next, we utilize the RL4LMs library to fine-tune the model further using the PPO algorithm. As the reward function, we use an equally weighted combination of the F1 computed by the roberta-large version of BERTScore and the DialogRPT-updown score. More details about the reinforcement learning fine-tuning process can be found in the Appendix.
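The reward used for PPO can be sketched as follows. The way DialogRPT consumes the context (joined to the response with the end-of-text token, score via a sigmoid over the single logit) follows the model card's usage; batching and normalization details in the actual RL4LMs run may differ, and the function name is our own.

    import torch
    from bert_score import score as bert_score
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    rpt_tok = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
    rpt_model = AutoModelForSequenceClassification.from_pretrained(
        "microsoft/DialogRPT-updown").eval()

    def pedagogical_reward(context, generated, reference):
        # BERTScore F1 (roberta-large) of the generated vs. reference teacher turn.
        _, _, f1 = bert_score([generated], [reference], model_type="roberta-large")
        # DialogRPT-updown: how likely the response is to be upvoted given the context.
        ids = rpt_tok.encode(context + "<|endoftext|>" + generated, return_tensors="pt")
        with torch.no_grad():
            updown = torch.sigmoid(rpt_model(ids).logits[0, 0])
        return 0.5 * f1.item() + 0.5 * updown.item()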
The subsequent evaluation of the fine-tuned Flan-T5 model reveals the benefits of incorporating reinforcement learning into the fine-tuning process, contributing to more context-aware, relevant, and pedagogically effective AI teacher responses.
§ RESULTS
In this section, we present the results and discuss the performance of GPT-4, fine-tuned GPT-2, and fine-tuned DialoGPT models on the TSCC dataset. We analyze the strengths and weaknesses of each approach and provide insights into their potential applications and limitations in an educational context.
§.§ GPT-4
The GPT-4 model, without fine-tuning on the TSCC dataset, demonstrates a relatively strong performance in generating educational dialogues. The generated teacher responses are generally fluent and contextually relevant, indicating that GPT-4 has a good understanding of the educational context based on its pre-trained knowledge. However, some limitations are observed in the model's ability to generate accurate and pedagogically sound responses consistently.
The carefully crafted prompt provided to the model plays a crucial role in guiding GPT-4 toward generating suitable responses as a teacher. Although the model is capable of generating contextually relevant and linguistically correct responses, it may not always produce the most pedagogically sound or helpful responses for the students. This limitation highlights the importance of fine-tuning the model on a specific educational dataset, such as TSCC, to further enhance its performance in generating AI teacher responses.
Additionally, due to the nature of the dataset, where conversations were often cut off, the model sometimes lacked the full context needed to generate meaningful responses that accurately represented the ground truth. Despite this limitation, GPT-4's responses were generally sensible and appropriate given the available context.
§.§ Finetuned GPT-2
We observe that compared with DialoGPT, GPT-2 usually generates longer and more formal responses, even with the same generation hyperparameters.
§.§ Finetuned DialoGPT
We observe that DialoGPT usually generates shorter and more vernacular responses. It fits better in a conversational setting, but sometimes the educational uptakes are not satisfactory since the responses are not guiding students to learn the language.
§.§ Finetuned Flan-T5 w/ RL
We observe that the results of Flan-T5 w/ RL on the validation set are very strong, suggesting that the model was able to exploit the metrics used as the reward. In contrast, it performs poorly on the test set, suggesting that it overfits the validation set. We hypothesize two reasons for this: the way conversations are split into chunks in the training dataset, and the difference in distribution between the training set and the test set.
§ DISCUSSION
Conversational agents have the potential to revolutionize the teaching landscape by addressing several challenges and enhancing the overall learning experience for both students and educators <cit.>. However, developing conversational agents that can behave like human teachers requires addressing several challenges <cit.>.
Data challenges. As noted in the subsections above, the generations from the GPT-4 model outperformed all the fine-tuned models, with and without reinforcement learning. To this end,
we put forward the proposition that an array of dataset features plays a crucial role in posing significant challenges to the fine-tuning process of generative models. These features include sampling, representativeness, prompt and response lengths, and dialogue completeness (upon manual inspection, we identified several dialogues to be cut off), all of which hinder superior performance with fine-tuning. Furthermore, upon random inspection of the generations from the fine-tuned models, we found that these models seem to have learned simple, generic, often unhelpful yet superficially correct responses such as "thank you" and "okay." While more recent language models have been shown to have high few-shot performance, we believe that fine-tuned models could, in comparison, adapt better to provide domain-specific responses. To achieve this, we emphasize the need to extend the current dataset to include longer prompts with more context.
It is important to acknowledge that these models might not be as effective as desired in their response generation due to these intricacies. We assess that the main focus the community should adopt is the ongoing effort to collect and build quality datasets that encompass enough information about the educational task to enable AI teacher generative models to fully generalize in any context <cit.>.
Evaluation metrics.
In addition, we emphasize that to truly gauge the efficiency of these AI-powered teaching models, it is vital to go a step further and examine their ability to comprehend the unique nuances in the students' queries and cater to their particular educational requirements. This implies the need for a pedagogically meaningful evaluation metric. We believe that it is crucial for the research community to embrace this as the second primary focus.
While common evaluation metrics such as BERTScore and DialogRPT are commonly used in several language and dialog modeling tasks, it is important to note that these metrics were not fundamentally designed to capture the level of pedagogical meaningfulness in the generated responses. As an example, consider the dialog shown in Figure <ref>—depending on the given context, only one of the responses (option (b): disconnected) is correct, while both the responses are ranked as equally correct by the BERTScore metric. Commonly-used domain-agnostic metrics often serve as a proxy for how coherent and human-like the generated responses are. However, for more goal-oriented tasks such as modeling teacher-student conversational dialogues, these metrics seem to fall short. This generalization gap becomes more apparent on analyzing the results from the fine-tuned Flan-T5 model with a feedback loop based on BERTScore and DialogRPT scores—despite the model performing significantly well on training and validation sets, it failed to generalize on unseen test data.
In an effort to advance research on this front, we note the need for auxiliary training-level metrics, including the faithfulness of the generation to the true response, to ensure that the generations are context-aware and factually accurate (e.g., correct option (b) vs. incorrect option (a) in Figure <ref>).
GPT-4 unknown pre-training data. We understand that the use of GPT-4 as a baseline in our study presents challenges due to its unknown training data. Yet, whether or not GPT-4 has seen parts of the TSCC dataset during its pre-training, its improvement over the reference responses with regard to the DialogRPT scores and the human evaluation scores attached to the leaderboard of the shared task suggests that the potential of using such high-performing models in this domain warrants further exploration.
§ CONCLUSION
In this paper, we explored the potential of using large pre-trained language models and reinforcement learning for generating AI teacher responses in an educational context. We first presented a few-shot approach using the GPT-4 model, which demonstrated promising results in generating contextually relevant and fluent responses, but with limitations in generating pedagogically sound responses consistently. We then fine-tuned GPT-2 and DialoGPT on the TSCC dataset and evaluated their performance using BERTScore and DialogRPT metrics. We also proposed an approach using RL to optimize directly for pedagogical values. We hypothesized that several dataset characteristics (e.g., dialog completeness, sampling) pose challenges to achieving superior performance with fine-tuning. To this end, we recommend the extension of the dataset to include longer prompts with extended context. Finally, we also draw attention to the need for more domain-specific metrics (in both evaluation and reward-based training) in enabling the generation of accurate, context-aware, and factually correct teacher responses.
§ APPENDIX
§.§ GPT-4 Prompt Construction
To evaluate the performance of GPT-4, we provided it with a few-shot prompt that includes a selection of similar teacher-student conversations from the TSCC dataset. This approach helps guide the model toward generating suitable responses as a teacher. The prompt is constructed as follows:
* We direct the system role to act as a teacher and encourage learning by using the prompt as given below.
* Retrieve the 5 most similar teacher-student conversations from the TSCC dataset. This is done by computing the cosine similarity between the input conversation context and the current conversation context in the dataset using embeddings generated by the text-embedding-ada-002 model.
* Concatenate the selected conversations with the input conversation, separated by special tokens to indicate the beginning and end of a new sample conversation.
This prompt construction aims to provide GPT-4 with the necessary context and guidance to generate accurate and pedagogically relevant responses in the context of teacher-student dialogues. The prompt is designed as follows:
You are acting as a teacher, and you are helping a student learn. Be patient, helpful, and kind. Don't be superimposing; give short responses to encourage learning. Make the student feel comfortable and confident, and help them learn. Now, join the following conversation: <conversation context>
The prompt is designed with the following directives in mind:
* We instruct the system with several indicators to act as a teacher and provide helpful advice to the student.
* To mitigate the challenge of generating teacher-like responses, we advise the model to be patient, kind, and helpful to the student.
* Through the directive to keep responses short and encouraging, we guide the model toward generating suitable responses that might help the student learn effectively.
* The model is also instructed to make the student feel comfortable and confident in their learning process, providing an overall supportive environment for the student.
* Finally, the conversation context is provided to the model to set the context for the given student query, allowing the model to generate appropriate responses given the conversation context.
Combining all these aspects together, we aim to guide the model toward generating contextually relevant and pedagogically meaningful responses in the given teacher-student dialogue.
We use the following hyperparameters for querying the GPT-4 model:
* Model: gpt-4-0314
* Temperature: 1
* Max Tokens: 100
* Top p: 1
§.§ Fine-tuning Exact Parameters
For our supervised fine-tuning experiments, we used the following hyperparameters:
§.§.§ GPT-2
* Learning rate: 1e-5
* Batch size: 32
* Epochs: 10
* Max sequence length: 1024
* Optimizer: AdamW
* Scheduler: linear learning rate scheduler
§.§.§ DialoGPT
* Learning rate: 1e-5
* Batch size: 32
* Epochs: 10
* Max sequence length: 1024
* Optimizer: AdamW
* Scheduler: linear learning rate scheduler
§.§ Supervised Fine-tuning with Reinforcement Learning Details
To implement the reinforcement learning-based fine-tuning using the RL4LMs library, we first fine-tuned the Flan-T5 model on the TSCC dataset using supervised learning. After this initial fine-tuning step, we utilized the RL4LMs library to fine-tune the model further using reinforcement learning. We used an equally weighted combination of BERTScore and DialogRPT as the reward function to optimize the model for pedagogical quality. The following hyperparameters were used for the reinforcement learning fine-tuning process:
* Learning rate: 1e-6
* Batch size: 64
* Epochs: 5
* Max prompt length: 512
* Max episode length: 100
* Optimizer: AdamW
* Scheduler: linear learning rate scheduler
The YAML configuration file for the RL4LMs script is as follows:
    tokenizer:
      model_name: google/flan-t5-small
      padding_side: left
      truncation_side: left
      pad_token_as_eos_token: False

    reward_fn:
      id: dialog_rpt_bert
      args:
        BERTScore_coeff: 0.5
        DialogRPT_coeff: 0.5

    datapool:
      id: bea
      truncate: False
      args:

    env:
      n_envs: 1
      args:
        max_prompt_length: 100
        max_episode_length: 20
        terminate_on_eos: True
        context_start_token: 0
        prompt_truncation_side: "right"

    alg:
      id: ppo_separate
      args:
        n_steps: 20
        batch_size: 64
        verbose: 1
        learning_rate: 0.000001
        clip_range: 0.2
        n_epochs: 1
        value_update_epochs: 3
        # batchify: False
        gae_lambda: 0.95
        gamma: 0.99
        ent_coef: 0.01
      kl_div:
        coeff: 0.001
        target_kl: 2.0
      policy:
        id: seq2seq_lm_actor_critic_policy
        args:
          model_name: google/flan-t5-small
          apply_model_parallel: True
          prompt_truncation_side: "right"
          generation_kwargs:
            do_sample: True
            top_k: 0
            min_length: 9
            max_new_tokens: 20

    train_evaluation:
      eval_batch_size: 64
      n_iters: 200
      eval_every: 20
      save_every: 10
      metrics:
        - id: bert_score
          args:
            language: en
        - id: dialog_rpt
          args:
            model_name: "microsoft/DialogRPT-updown"
            label_ix: 0
            batch_size: 1
        # - id: uptake
        #   args:
        #     model_name: None
        #     label_ix: 0
        #     batch_size: 1
      generation_kwargs:
        num_beams: 5
        min_length: 9
        max_new_tokens: 20
| http://arxiv.org/abs/2307.04118v1 | 20230709081438 | Twotier -- A Layered Analysis of Backbone Members in a Moderate Sized Community Sports Organization | Qingran Wang, Jia Yu, Mengjun Ding, Weiqiang Sun | cs.SI | cs.SI |
Twotier - A Layered Analysis of Backbone Members in a Moderate Sized Community Sports Organization
Qingran Wang and Jia Yu contributed equally to this paper. We would like to thank all members of the SJTU Health community for their selfless commitments in building a strong community.
Qingran Wang, Jia Yu, Mengjun Ding, Weiqiang Sun, Senior Member, IEEE
August 12, 2023
==============================================================================================================================================================================================================================================================================================
Backbone members are recognized as essential parts of an organization, yet their role and mechanisms of functioning in networks are not fully understood. In this paper, we propose a new framework called Twotier to analyze the evolution of community sports organizations (CSOs) and the role of backbone members. Tier-one establishes a dynamic user interaction network based on grouping relationships, and weighted k-shell decomposition is used to select backbone members. We perform community detection and capture the evolution of two separate sub-networks: one formed by backbone members and the other formed by other members. In Tier-two, the sub-networks are abstracted, revealing a core-periphery structure in the organization where backbone members serve as bridges connecting all parts of the network. Our findings suggest that relying on backbone members can keep newcomers actively involved in rewarding activities, while non-rewarding activities solidify relations between backbone members.
community sports organizations(CSOs), backbone, two-tier analysis, core-periphery structure
§ INTRODUCTION
Community sports organizations (CSOs) are non-profit, voluntary organizations whose primary responsibility is to provide sports services to their members, often with a low threshold to entry <cit.>.
Despite the huge physical and psychological benefits CSOs can bring to their community members, the development of CSOs is often constrained by their voluntary nature and the limited resources available to them <cit.>. It is thus important to understand the development principles of CSOs such that the limited resources may be put to the most effective use.
It has long been intuitively felt that there is usually a group of highly active and influential people who actuate and drive the development of a network. In product marketing based on human interaction networks, marketers take nodes occupying structural-hole positions in the network as the influential initial nodes, in order to achieve the greatest influence in the network <cit.>. In online social networks such as Weibo, Twitter, etc., users with a large number of followers are considered influential users, and the topics they publish tend to generate large network effects <cit.>. Similarly, in CSOs, there are also influential users who have “the right and the ability to influence in an indirect or intangible way" <cit.>, and their presence and activities have significantly greater effects on the operation and development of the organization. Backbone members are important nodes in a network that are well-connected to other members and play a crucial role in facilitating communication and information flow. They are defined as members who are relatively more important, active, and have a greater number of friends compared to other members. The identification and analysis of backbone members can provide insights into the structure and dynamics of the network, which is valuable for understanding its behavior and performance.
The problem of vital node identification has attracted increasing attention in different fields <cit.>.
Typically, researchers build user social networks based on participant interaction data collected over a period of time and then work to identify key nodes in the network. In this scenario, various centrality measures <cit.> such as degree centrality, closeness centrality, and betweenness centrality can be used to indicate the importance of nodes. With the rich set of metrics introduced in <cit.> we can also identify important nodes in CSOs. However, little attention has been paid to the role that backbone members play, nor the mechanisms by which they function in the network. At the same time, studies are often focused on the group of backbone members themselves, and interactions between the backbone and non-backbone members are largely neglected. In addition to the internal forces generated by the backbone members, external interventions such as rewards and penalties may also be crucial for network development <cit.>. Targeting interventions on leaders have been shown to be more effective than applying them to random individuals for community health campaigns <cit.>. Understanding the mechanisms by which internal forces work can help us better implement external interventions <cit.>. And, if these two forces work together, they can bring even greater developmental benefits.
In this research, with longitudinal data recorded, we focus on the development of CSOs, with a particular emphasis on backbone members, defined as the top X% of influential members based on coreness centrality. We introduce Twotier, a new framework for analyzing dynamic networks, which allows us to study both the evolutionary characteristics of components and their connections. Our main finding is that backbone members play a critical role as the trunk of the network, while others act as leaves and are regularly updated. Rewarding activities and backbone members are essential for organization expansion, while non-rewarding activities solidify the backbone group.
The main contributions of this work are three-fold. Firstly, we introduce Twotier, a novel mathematical framework for analyzing dynamic networks. Secondly, we demonstrate its applicability in a moderate-sized CSO. Finally, using our framework and numerical results, we provide practitioners with tailored approaches to improve outcomes for different groups within their organization.
The remainder of this paper is organized as follows. In Section II, we introduce Twotier, the main method for network analysis in this work, and explain its specific procedure. In Section III, we present an overview of the dataset and the experimental results obtained in each tier, including the role of backbone members in network development and their performance under external factors. Section IV provides an overview of related work. Finally, in Section V, we conclude the paper.
§ TWOTIER: A LAYERED ANALYSIS ON CSOS
This section introduces the Twotier framework, which analyzes the role of backbone members in a moderate-sized community sports organization. In Tier-one, we build a dynamic network based on team-wise links between members and classify them into two groups: backbone members and general members, using the dynamic W-KS algorithm to calculate their influence. Community detection is performed to transition from individuals to communities. In Tier-two, we analyze the evolutionary regularities of different types of communities by abstracting the dynamic network into the network of communities extracted in Tier-one. The framework is illustrated in Fig. <ref>. To explore the influence of different types of activities on organizational development, we separate the network into two sub-networks: one formed under rewarding activities and the other under non-rewarding activities. Table <ref>
summarizes the symbols used in this paper.
§.§ Tier-one Analysis
§.§.§ Evaluating Social Influence by Dynamic W-KS
In a CSO with teaming-based relationships, user interactions can change over time, resulting in a dynamic network. Therefore, it is not advisable to apply vital node identification approaches designed for static networks. For example, it may be challenging to determine the importance of a node that is active during some time periods but inactive in others. To address this issue, we extend the weighted k-shell decomposition method to be applied on dynamic networks as a series of static networks.
Considering the duration of activities and the fact that the study is conducted under a six-year time span, we take a three-month time window to build a dynamic network containing 24 consecutive equal-length time frames, in each of which the network is considered non-evolving. The validity of this partitioning approach has been demonstrated in our previous work <cit.>. In this case, the network is expressed as
G={G_t=(V_t, E_t), for all t in [0, T]},
where V_t is the node set and E_t is the set of edges. The weight of an edge is set to the number of links that connect the same node pair within a time frame.
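As an illustration, the following minimal Python sketch (our own, not the authors' code) builds such a sequence of weighted time-frame graphs from raw team-participation records using networkx; the record layout (member list plus timestamp), the roughly three-month window length and the start date are assumptions made only for this example.

from datetime import datetime
from itertools import combinations
import networkx as nx

def build_dynamic_network(records, start, n_frames=24, frame_days=91):
    """records: iterable of (team_members, timestamp) pairs; one graph per time frame."""
    frames = [nx.Graph() for _ in range(n_frames)]
    for members, timestamp in records:
        t = (timestamp - start).days // frame_days
        if not 0 <= t < n_frames:
            continue
        G_t = frames[t]
        for u, v in combinations(sorted(members), 2):
            # edge weight = number of team-wise links between the same node pair
            w = G_t[u][v]["weight"] + 1 if G_t.has_edge(u, v) else 1
            G_t.add_edge(u, v, weight=w)
    return frames

# hypothetical usage: frames = build_dynamic_network(records, start=datetime(2015, 5, 1))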
Generally speaking, in a static network, the hubs are the key players when networks have a broad degree distribution. In addition, the topology of the network is also an important measure <cit.>.
With the weighted k-shell decomposition method (W-KS) proposed in <cit.>, we shall be able to rank the nodes according to both the degree and position of the node in each static undirected weighted network. Define the weighted degree D_t^i of node i in time frame t as
D^i_t = [ √( d^i_t ·∑_j∈ n_t^i w^ij_t ) ],
i.e., the combination of the degree d^i_t and the sum of all its link weights ∑_j w^ij_t over its neighbourhood n_t^i, rounded to the nearest integer.
In W-KS, all nodes with D not greater than 1 are removed first. Then, the D of other nodes is recalculated in the trimmed network and the pruning process is repeated until no nodes with D less than or equal to 1 are left in the network. The pruned nodes are grouped in the first shell with k=1.
Then the next k-shell with k=2 and further higher k-shells are separated from the remaining network iteratively until no nodes remain.
Finally each node has a value k, with larger k indicating greater node influence.
The influence of node i in the t-th time frame, I_t^i, can be represented by
I_t^i =
k_t^i, if node i appears in the t-th network,
0, if node i does not appear in the t-th network.
The influence of node i in the dynamic network is defined as the sum of its influence in each time frame
I^i=∑_t=0^TI_t^i,
where I_t^i is derived from the methods of static networks, i.e., in our case, weighted k-shell decomposition.
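A minimal sketch of this procedure (our own illustration, not the authors' implementation) is given below; it consumes the time-frame graphs from the earlier sketch and follows the weighted-degree, pruning and summation steps defined above.

import math
import networkx as nx

def weighted_degree(G, n):
    d = G.degree(n)
    s = sum(data.get("weight", 1) for _, _, data in G.edges(n, data=True))
    return round(math.sqrt(d * s)) if d else 0

def weighted_kshell(G):
    """W-KS: iteratively prune nodes with weighted degree D <= k; returns {node: k}."""
    G = G.copy()
    shells, k = {}, 1
    while G.number_of_nodes() > 0:
        pruned = True
        while pruned:
            low = [n for n in G if weighted_degree(G, n) <= k]
            pruned = len(low) > 0
            shells.update({n: k for n in low})
            G.remove_nodes_from(low)
        k += 1
    return shells

def dynamic_influence(frames):
    """I^i: sum over time frames of the shell index k_t^i, counting 0 when node i is absent."""
    influence = {}
    for G_t in frames:
        for n, k in weighted_kshell(G_t).items():
            influence[n] = influence.get(n, 0) + k
    return influence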
In Fig. <ref>, we show a particular example in which W-KS is applied to a static (accumulated) network, and the extended dynamic W-KS is applied to the corresponding dynamic network. It can be seen that dynamic W-KS better captures the temporal nature of the CSO and can thus characterize node influences more accurately.
§.§.§ Community structure detection
Backbone members (BMs) are identified as the top X% of influential members, measured by any metric. We use the dynamic W-KS algorithm to calculate node influence, specifically coreness centrality, to select BMs. All other members are classified as general members (GMs). In turn, the network can be divided into three components: a) The sub-network formed by BMs and the interactions between them (BSN); b) The sub-network formed by GMs and the interactions between them (GSN); and c) The links that connect BSN and GSN.
We use the quality metric modularity Q defined in <cit.> to explore the community structure in the two sub-networks of the CSO respectively:
Q_t = 1/(2m) ∑_i,j [ w^ij_t - e_t^i e_t^j/(2m) ] δ(i, j),
where e^i_t = ∑_j w^ij_t is the sum of the weights of the edges attached to node i, and m = (1/2)∑_ij w_t^ij is the sum of the weights of all edges in G_t. The δ-function equals 1 if nodes i and j belong to the same community, and 0 otherwise. A Q value higher than 0.3 suggests that distinct community structures do exist in the network <cit.>.
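This check can be sketched as follows; the paper does not name the specific detection algorithm, so the Louvain method (which optimizes the same modularity Q) is used here as a plausible stand-in, assuming networkx >= 2.7.

import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

def detect_communities(G_t, q_threshold=0.3, seed=0):
    """Return the detected communities, their modularity Q_t, and whether Q_t exceeds 0.3."""
    communities = louvain_communities(G_t, weight="weight", seed=seed)
    Q_t = modularity(G_t, communities, weight="weight")
    return communities, Q_t, Q_t > q_threshold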
§.§ Tier-two Analysis
§.§.§ Community evolution
Once the presence of community structure is confirmed, we can proceed with the analysis of community evolution. To capture the intermittent participation that is often seen in CSOs, we extend the community evolution events used in our previous study <cit.> by adding Suspend and Re-emerge. A community is said to be suspended if it appears in a time frame, disappears for some time, and then re-emerges. A community is said to be re-emerging if it did not exist in the previous time frame but has appeared at least once in past time frames. According to their effects on the community structure, the evolution events other than Continue may be roughly classified into two categories: a) events that bring significant structural changes to the network (V - Violent), mainly through adding/removing nodes from the network - Form, Dissolve, Suspend and Re-emerge; and b) events that cause marginal structural changes to the network (S - Stable), mainly through adding/removing links between existing nodes - Grow, Merge, Shrink and Split. Communities that undergo S-type evolution are relatively closed groups with close internal interactions but limited communication with other external communities. V-type evolution, in contrast, brings about more diverse changes in the network structure, which promotes communication among different groups and plays an important role in the stability and development of the network. The community evolution events are described in detail in Table <ref>.
§.§.§ Community abstraction
To explore the interactions of different communities on a horizontal level, we abstract the network of communities by hiding the details of the original connections between individuals within communities. The network described by Eq. (<ref>) is similar to Eq. (<ref>); however, the nodes are now the communities detected in Tier-one and the edges are connections between communities:
G^com={G_t^com=(V^com_t, E^com_t), for all t in [0, T]}.
There are two kinds of nodes in the network: a) communities formed by backbone members (BCs); and b) communities formed by general members (GCs). The connections in the network are classified into three categories: a) edges between BCs (BBEs); b) edges between GCs (GGEs); and c) edges between BCs and GCs (BGEs).
We characterize the structure of the network using network density and betweenness centrality, where the network density is defined as
Density(G^com) = 2L^com/(N^com(N^com-1)),
where N^com denotes the number of communities in the network and L^com denotes the number of connected edges between communities in the network.
The betweenness centrality is formulated as
BC^com_z=∑_m,n ∈ V^comσ(m, n|z)/σ(m, n),
where m and n denote any pair of communities, σ(m, n) is the number of shortest paths between them, and σ(m, n|z) is the number of those paths that pass through community z.
A network with a core-periphery structure has high backbone-node centrality and a dense BSN, whereas the connections among general nodes alone can hardly form a connected network.
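The abstraction step and the two structural metrics can be sketched as follows (an illustrative implementation, not the authors' code); nx.density reproduces 2L^com/(N^com(N^com-1)) for simple graphs and nx.betweenness_centrality implements BC^com_z.

import networkx as nx

def abstract_communities(G_t, communities):
    """Collapse each community to one node and aggregate inter-community link weights."""
    node2com = {n: c for c, com in enumerate(communities) for n in com}
    G_com = nx.Graph()
    G_com.add_nodes_from(range(len(communities)))
    for u, v, data in G_t.edges(data=True):
        cu, cv = node2com[u], node2com[v]
        if cu == cv:
            continue  # intra-community detail is hidden
        w = data.get("weight", 1)
        if G_com.has_edge(cu, cv):
            G_com[cu][cv]["weight"] += w
        else:
            G_com.add_edge(cu, cv, weight=w)
    return G_com

def tier_two_metrics(G_com):
    density = nx.density(G_com)            # 2 L^com / (N^com (N^com - 1))
    bc = nx.betweenness_centrality(G_com)  # BC^com_z for each community z
    return density, bc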
§ EXPERIMENTS AND RESULTS
In this section, we apply the proposed Twotier framework to a sports organization and obtain information about the evolution of the organization and its structural characteristics. The experimental steps are illustrated in Fig. <ref>.
The data that we use are collected from a non-profit sports organization, through an online platform serving a community with more than 10,000 members. Users can organize communal activities, most of which require users to participate in teams with less than 10 members. Necessary tools are provided for team members to communicate with each other. The platform went online in May 2015, and by June 2021, 790 activities have been held, with 4879 different individual participants in 6426 teams. The activities can be rewarding, denoted as Type-A, with a total of 119 activities, or without any reward, with a total of 671, denoted as Type-B.
According to the teaming relations over the entire time span, each distinct user is represented by a node, and team-wise relations are represented by undirected links with weight 1. A link in fact represents the tuple ⟨node pair, id of the team, id of the activity, time of the activity, type of the activity⟩. Over the entire time span, there are 73813 such links.
§.§ Influence of Nodes by dynamic weighted k-shell decomposition (dynamic W-KS)
While traditional weighted k-shell decomposition (W-KS) divides all nodes into 87 shells, dynamic W-KS divides all nodes into 457 shells, allowing for a more detailed division with more prominent gaps between nodes. Furthermore, for nodes within the same layer, we sort them based on their degree. To verify that the extended node influence determination approach is superior, we compare the coverage when the top X% members are selected as backbone members. The coverage reflects the range of influence in the network. It is defined as the proportion of the number of selected kernel members, and their neighbors, to the size of the given network.
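The coverage comparison can be reproduced schematically as below (our own sketch; the tie-breaking rule and the inclusion of the selected members themselves in the covered set are assumptions).

def coverage(G, influence, top_percent):
    """Share of nodes that are either a selected backbone member or a neighbour of one."""
    ranked = sorted(G.nodes(), key=lambda n: influence.get(n, 0), reverse=True)
    n_selected = max(1, int(len(ranked) * top_percent / 100))
    selected = set(ranked[:n_selected])
    covered = set(selected)
    for n in selected:
        covered.update(G.neighbors(n))
    return len(covered) / G.number_of_nodes()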
We compare two network scenarios: one ignores temporal properties and aggregates all members who have ever appeared in the organization into an aggregated network; the other creates time-frame networks at three-month intervals. As shown in Fig. <ref>, as the selected proportion X increases from 1 to 50, the coverage under both methods increases, but dynamic W-KS generally gives higher coverage than W-KS. Considering degree, position in the network topology and activeness, dynamic W-KS gives a more comprehensive picture of the importance of a node and is therefore chosen in our further analysis.
In the following parts, we choose the cases X = 5, X=10, and X=20 to carry out the experiment respectively.
After classifying the network members, we find that the BMs have a greater degree and are in a more important position in the network topology than GMs, as the results in Table <ref> show. In addition, they are involved in a larger number of activities and active in the network for a longer period of time than GMs. On average, the participation of BMs in non-rewarding activities is higher than in rewarding activities, which is the opposite of GMs. The network structure formed by each of these two groups is also different.
§.§ The Evolution of communities
In each time frame, the BSN has a giant component, occasionally accompanied by a few small groups detached from it, while the GSN consists of many small groups that are separate from each other. The average Q of the two sub-networks across all time frames is higher than 0.3, suggesting that both have a distinct community structure.
Further, we present the evolutionary relationships of the detected communities after dividing the 9 evolution events into three categories. Fig. <ref> illustrates the percentage of evolution events for BSN and GSN, with rewarding activities (type-A), with non-rewarding activities (type-B), and with both, respectively. It can be observed that in the GSN under type-B activities, Form and Dissolve are always the dominant ones among the 9 evolution events. However, this is very different in the GSN under type-A activities, where the events are diverse and abundant. With no stimulus, the GMs are less willing to participate, resulting in a large number of dropouts after attending the activities. This suggests that type-B activities are suitable to be held regularly to consolidate the connections between BMs rather than to absorb new members. It can also be seen that the BSN has more S-type community evolution events (light green area), while there is a higher number of V-type community evolution events (light red area) in the GSN. The mobility within the backbone member groups is stronger than that of general members. This indicates that the backbone groups play the role of the trunk, while other members renew at a faster rate and act like leaves.
§.§ The Structure of Abstracted Network
We present the graph of the abstracted network in some time frames while X=10 in Fig. <ref>, where the red circles represent BCs and the green circles are GCs. The red, green and brown edges are BBEs, GGEs and BGEs, respectively. The size of the circles reflects the number of members in the community, and the thickness of the edges indicates the intensity of interaction between members of communities.
It is interesting that the network, which is star-shaped, contains a dense cohesive core and a sparsely connected periphery. This observation is further verified in Fig. <ref>, <ref> and <ref>.
It can be seen in Fig. <ref> (for X=5, 10 and 20) that the number of BCs is stable, while the number of GCs varies more and follows the changes in network size. GCs outnumber BCs in most time frames, but the former are far more sparsely connected than the latter. Fig. <ref> (for X=5, 10 and 20) shows the proportion of the weights of the three types of edges to the total weight of all edges; the edges are mostly between BCs or between BCs and GCs. Fig. <ref> presents the network density of the two sub-networks, containing either BCs and BBEs or GCs and GGEs. We can see that the sub-network formed by BCs has a high density, while the density of the sub-network formed by GCs is always low. In Fig. <ref>, the betweenness centralities of backbone members and general members are displayed respectively. By comparison, it can be found that backbone members have higher betweenness centrality, playing an important role as mediators, while GCs are isolated from each other and must be bridged by BCs.
This also suggests that promoting communications between groups at the periphery may be effective to increase network connectivity.
A further look into the structure of the networks under type-A and type-B activities shows that both networks also exhibit a clear core-periphery structure. With type-A activities, the number of connections between BCs and GCs (BGEs) significantly outnumbers those among BCs (BBEs) or among GCs (GGEs) (Fig. <ref>), while with type-B activities, the number of BBEs outnumbers that of BGEs (Fig. <ref>). This indicates that while BMs tend to be active in both types of activities, they may behave very differently. In type-A, i.e., rewarding activities, BMs are more likely to connect with GMs, but in type-B activities, BMs tend to connect with other BMs. On the one hand, this validates the trunk-like function that the BMs play. On the other hand, it suggests that even though rewarding activities are generally more popular than non-rewarding ones and participation is much higher, the backbone members are still important convening points for the activities, and are thus very important to the development of the CSO. It can be seen in Fig. <ref> that in both types of activities, connections between GCs are very rare, again validating the leaf-like behavior of GCs and GMs. It can also be observed from Fig. <ref> that non-rewarding activities provide an important vehicle for backbone members to develop and consolidate very close relations, and should be regarded as an important tool in the entire CSO development toolset.
§ RELATED WORK
Research in the domain of “who are the most important nodes in the network" has considered various criteria to define influential participants. One stream of relevant research is node centrality <cit.>, such as degree centrality, closeness centrality and so on, which attempts to quantify the structural importance of actors in a network. Considering two metrics, the degree of the node and its position in the network topology, Kitsak et al. <cit.> designed a k-shell decomposition method to divide the network nodes into different layers, i.e., to determine the importance of the nodes hierarchically at the group level. Subsequently, authors in <cit.> introduced a generalized method
for calculating the k-shell structure of weighted networks. Our work goes a step further to improve the existing weighted k-shell method by taking the temporal nature of the network into account.
Taking influential nodes as research objects, researchers have drawn many interesting conclusions. Kerlund <cit.> shows that influential users have a narrow focus in terms of the content they post and how they profile themselves and tend to produce more original content than other users.
Zhao et al. <cit.> find that for the entertainment news, the influential spreaders may appear at the later stage of spreading. Borgatti et al. present an intuitive description of the core-periphery structure in <cit.>, that is, the network contains a dense, cohesive core and a sparse, unconnected periphery, and then quantified the core-periphery structure using the quadratic assignment procedure. Authors of <cit.> performed a detailed analysis on the key topological properties of the friendship graph for three different user categories of Leaders, Followers and Neutrals and yielded interesting insights. Wang et al. <cit.> divided the nodes in the network into two categories: central nodes and ordinary nodes, further they found that the information dissemination mode can be summarized into three specific patterns. By decomposing the complex network structure into different parts and analyzing them systematically, it is possible to grasp the development pattern of the network in more detail.
Social Network Analysis (SNA)<cit.>, focusing on understanding the nature and consequences of relations between individuals or groups, has been widely used to study social networking platforms.
Newman proposed that social networks can be naturally divided into communities or modules in <cit.>. Blondel et al. <cit.> proposed a method known as the Louvain algorithm for community detection, with the core idea of optimizing the quality function known as modularity. Frequent changes in the activity and communication patterns of network members result in the associated social and communication network being subject to constant evolution. For a deeper understanding of network development, Palla et al. quantified network evolution from the perspective of social group evolution in <cit.>. The evolution events are specifically classified into seven categories by the group evolution discovery (GED) method, based on the changes of members in the communities, in <cit.>. Based on this, <cit.> introduced an improved GED algorithm, describing network evolution in the context of CSOs. These works, from the perspective of community evolution in networks, have inspired the analysis of dynamic network structures on a vertical timeline.
§ CONCLUSIONS
We propose a new framework called Twotier for analyzing the network structure and apply it to probe a non-profit sports organization. Firstly we establish a time-evolving network based on the team-wise relationships of participants. Taking the degree and topological position of nodes as well as the temporal nature of the CSO into account, we extend the weighted k-shell decomposition to determine the influence of the nodes and next classify the participants into two categories: backbone members and general members. Further the network of communities is abstracted by hiding the connections within communities. We not only analyze the development of the organization from the perspective of community evolution on the vertical timeline, but also pay attention to the connections between different groups in the horizontal time frames. In addition we have discussed the effect of the external stimulus on both groups.
Our findings are summarized as follows.
The backbone members of the CSO are characterized not only by their high degree and closeness centrality, but also by the fact that their average numbers of attended activities and active time frames are much higher than those of the general members. On average, the participation of backbone members in non-rewarding activities is higher than in rewarding activities, which is the opposite of the general members.
Through Tier-one analysis, we reveal that the two sub-networks, containing either backbone or general members, both have a clear community structure. The groups of backbone members play the role of the trunk, while the general members renew frequently and act like leaves.
Through Tier-two analysis, we identify that there is a core-periphery structure in the organization. Backbone members serve as a critical link between different groups within an organization, and organizational leaders should pay special attention to their role in managing the organization effectively. However, we also note the potential negative impact of few ties between communities of general members on membership stability and organizational development. Therefore, it is necessary to implement measures to strengthen interaction between these groups and break down isolation.
More importantly, we observe that external stimulus affects the organization in different ways. Even though rewarding activities are generally more popular than non-rewarding ones and participation is much higher, the backbone members are still important convening points for the activities, and are thus very important to the development of the CSO. The non-rewarding activities provide an important vehicle for backbone members to develop and consolidate very close relations and should be regarded as an important tool in the entire CSO development toolset. These insights can help practitioners develop tailored approaches for different groups within their organization to ensure better outcomes.
IEEEtran
|
http://arxiv.org/abs/2307.03928v1 | 20230708080247 | Bounding data reconstruction attacks with the hypothesis testing interpretation of differential privacy | [
"Georgios Kaissis",
"Jamie Hayes",
"Alexander Ziller",
"Daniel Rueckert"
] | cs.CR | [
"cs.CR",
"cs.AI"
] |
We explore Reconstruction Robustness (ReRo), which was recently proposed as an upper bound on the success of data reconstruction attacks against machine learning models.
Previous research has demonstrated that differential privacy (DP) mechanisms also provide ReRo, but so far, only asymptotic Monte Carlo estimates of a tight ReRo bound have been shown.
Directly computable ReRo bounds for general DP mechanisms are thus desirable.
In this work, we establish a connection between hypothesis testing DP and ReRo and derive closed-form, analytic or numerical ReRo bounds for the Laplace and Gaussian mechanisms and their subsampled variants.
§ INTRODUCTION
In the rapidly advancing field of machine learning (ML), the importance of preserving privacy cannot be understated, particularly in critical tasks where privacy may be compromised through attacks on unprotected ML models.
Among these, membership inference (MI) poses a considerable risk <cit.>.
Here, an adversary attempts to determine whether a candidate record was part of the model's training database.
Differential privacy (DP) <cit.> plays a crucial role as a safeguard against privacy risks in ML.
Its guarantees can be interpreted in terms of the protection it offers against MI, a notion termed the hypothesis testing interpretation of DP <cit.>.
Broadly speaking, protecting against MI also serves to protect against all weaker forms of attack <cit.>.
For example, data reconstruction (DR) attacks <cit.>, where adversaries attempt to extract records from the model's weights or gradients <cit.>, are also prevented by DP mechanisms.
In fact, it can be shown that protecting against DR requires substantially less noise than protecting against MI <cit.>.
Recent works have proposed formal bounds tailored to DR.
For instance, Guo et al. <cit.> frame DR as a signal estimation problem and use the properties of the Fisher information matrix to lower-bound reconstruction error.
Moreover, Guo et al. <cit.> utilise Fano's inequality to bound the mutual information between the training data and the model's parameters.
Last but not least, Balle et al. <cit.> recently proposed Reconstruction Robustness (ReRo), which serves as a high-probability bound on successful DR.
Moreover, this work's authors prove a strong relationship between DP and ReRo in the sense that (Rényi-)DP <cit.> implies ReRo (and vice versa under some preconditions).
Very recently, Hayes et al. <cit.> strengthened the aforementioned results by circumventing the utilisation of Rényi-DP and bounding ReRo directly.
In this work, we expand upon the previous investigations on ReRo, which we regard as the most promising DR bound (as it both outperforms previous DR guarantees and is closely matched by the results of empirical DR attacks against ML models).
The aforementioned work by Hayes et al. <cit.> limits its purview to DP-SGD <cit.> and utilises a Monte Carlo (MC) technique to estimate the ReRo bound.
This MC bound only holds asymptotically and cannot be used efficiently in workflows involving large datasets.
Methods to directly obtain ReRo upper bounds for arbitrary datasets and mechanisms (e.g. also the Laplace mechanism and its subsampled variant), would thus be of value to practitioners.
Contributions
The contributions of our work are as follows:
(1) We extend the work of Hayes et al. by proposing ReRo bounds derived from the hypothesis testing interpretation of DP.
(2) We furnish closed-form bounds for the Gaussian and Laplace mechanisms and provide an analytic formulation for the Poisson-sampled Gaussian and Laplace mechanisms using an Edgeworth series.
Both techniques are very efficient in terms of memory and run time, even for very large datasets and across broad ranges of the mechanism parameters.
(3) We experimentally corroborate the accuracy of our bounds against a numerical ground truth, provide the first ReRo bounds for ImageNet-scale workflows and explain a finding by <cit.> regarding differences in ReRo bounds when DP-SGD parameters are varied at a fixed (ε, δ)-value.
Background
We assume familiarity with the fundamentals of DP and omit a detailed introduction due to space constraints.
In brief, we will focus on the global model of DP and the add/remove one adjacency relation between databases D and D'.
An extension to replacement adjacency is straightforward.
We will denote the deterministic query function (e.g. a single step of SGD outputting a gradient containing sensitive information) by q and its global sensitivity by Δ with an appropriate subscript to indicate the order of the norm it is measured in.
We will use ℳ for an (additive noise) mechanism, i.e. the Laplace mechanism (LM), Gaussian mechanism (GM) or their Poisson-subsampled variants (SLM and SGM).
For details on subsampling, we refer to <cit.>; in brief, to realise Poisson subsampling, each record in a database participates in the query with individual probability p.
In the hypothesis testing interpretation of DP, we presume that an adversary 𝒜 who has complete knowledge of D, D', q, and all specifications of ℳ observes a mechanism output y and must decide: ℋ_0: y ∼ℳ(D) vs. ℋ_1: y ∼ℳ(D').
ℋ_0 and ℋ_1 are called the null and alternative hypothesis, respectively.
We stress that the only unknown in the aforementioned hypothesis testing problem is the exact noise draw realised by ℳ.
The privacy guarantee of ℳ thus expresses how difficult it is to distinguish between the distributions ℳ(D) and ℳ(D') as measured in terms of trade-off between the fundamental errors of hypothesis testing: the Type-I error α and the Type-II error β.
Since the aforementioned hypothesis testing problem is one between two simple hypotheses, 𝒜 is endowed with the optimality properties furnished by the Neyman-Pearson (NP) lemma <cit.>.
In other words, their test has the highest power 1-β at any given level α∈ [0, 1].
f-DP <cit.> utilises a trade-off function T: α↦β to express DP guarantees.
Concretely, let ϕ be a rejection rule for the aforementioned hypothesis testing problem.
Then, T(ℳ(D), ℳ(D'))(α) = inf_ϕ{β_ϕ|α_ϕ≤α}.
A mechanism is said to satisfy f-DP, if, for all α∈ [0,1] and all adjacent D, D' it holds that T(ℳ(D), ℳ(D'))(α) ≥ f(α), where f is some reference trade-off function.
The inf_ϕ means that, by definition, f-DP only considers the rejection rule with the highest power among all realisable rejection rules at the same level α, which is consistent the optimality properties of 𝒜.
For rejection rules with asymmetric trade-off functions (e.g. for sub-sampled mechanisms), one must also consider T^-1=T(ℳ(D'), ℳ(D)) and obtain the symmetrised/convexified curve C(T, T^-1).
This is important as the DP guarantee must hold identically for the add one and the remove one adjacency relations.
A mechanism whose trade-off function is β(α)=1-α, i.e. the off-diagonal of the unit square, offers perfect privacy.
As a worst-case guarantee, f-DP thus additionally only considers the trade-off function which is farthest from this off-diagonal, corresponding to the pair of mechanism distributions exhibiting the greatest effect size.
This pair is called the dominating pair of a mechanism <cit.>.
For the GM, the dominating pair is (𝒩(0, σ^2), 𝒩(Δ_2, σ^2)) and for the LM it is (Lap(0, b), Lap(Δ_1, b)).
For the SGM two pairs must be considered: (𝒩(0, σ^2), (1-p)𝒩(0, σ^2)+p𝒩(Δ_2, σ^2)) and ((1-p)𝒩(Δ_2, σ^2)+p𝒩(0, σ^2), 𝒩(Δ_2, σ^2)); this transfers to the SLM by replacing the Gaussian by the Laplace density.
ReRo <cit.> is an upper bound on the probability of a successful DR attack.
In this work, we will study ReRo under a pessimistic threat model which is very similar to that of DP:
𝒜 has access to all database records and executes a DR attack R on a model w outputting a reconstructed record z^∗∼ R(w).
The goal of 𝒜 is to select the correct database record z corresponding to z^∗ (i.e. record matching).
Formally, let π denote 𝒜's prior distribution (i.e. auxiliary information) and let ρ be a reconstruction error function.
Then, ℳ satisfies (η, γ)-ReRo if, for any fixed D, it holds that ℙ_z∼π, w ∼ℳ(D ∪{ z })(ρ(z, R(w))≤η) ≤γ.
Note the difference to DP: ReRo is defined purely through the add one adjacency relation.
The authors of <cit.> directly show that mechanisms whose output distributions satisfy a bound on the so-called blow-up function ℬ_κ(η) also satisfy ReRo.
Concretely, let μ and ν be ℳ's dominating pair distributions for the add one adjacency relation and E be a measurable event.
Then, ℳ satisfies (η, γ)-ReRo with respect to a prior κ(η) with γ = ℬ_κ(η)(μ, ν) = sup{ℙ_μ(E) |ℙ_ν(E) ≤κ(η) }.
Throughout, we follow <cit.> and let ρ=1(z≠ z^∗) (i.e. an exact match) and assign a uniform prior κ(η)=1/n, where n can e.g. be the cardinality of the database, since 𝒜 has an a priori probability of 1/n to select the correct candidate record without observing R(w), or some more pessimistic fixed prior, e.g. 1/10.
Although general hypothesis testing theory is used in <cit.> to prove the ReRo bound for DP mechanisms, the authors do not directly use f-DP to bound ReRo and instead estimate γ using MC (Algorithm 1 of <cit.>).
This strategy has the drawback of holding only at the limit as the number of MC samples approaches infinity and is impracticable for very large n or very small κ.
Next, we will show that ℬ_κ(η)(μ, ν) has a natural hypothesis testing interpretation, allowing us to circumvent the MC procedure and directly bound γ.
§ RERO BOUNDS FOR DP MECHANISMS THROUGH HYPOTHESIS TESTING
We begin by expressing ℬ_κ(η) in terms of the hypothesis testing problem between ℳ(D) and ℳ(D').
Assume that 𝒜 employs a rejection rule ϕ with power 1-β_ϕ(α) at a pre-selected level α.
Consistent with the worst-case guarantee, we will only consider the rejection rule with the highest power among all realisable rejection rules and denote this supremum power as 𝒫(α).
Two remarks are in order. (1) We make no further specifications about the rejection rule.
Therefore, although we will consider the DP threat model which assumes an optimal ϕ using the likelihood ratio test statistic evaluated at the dominating pair, all results transfer to threat model relaxations, provided the realisable rejection rules and their corresponding test statistics can be specified.
(2) We formulate our results in terms of the test ℋ_0:ℳ(D) vs. ℋ_1:ℳ(D') because we only need to bound the add one adjacency relation to bound ReRo.
The upshot of this choice can be seen in Figure 1, panel f.
If ℳ upper-bounds the adversary's supremum power 𝒫(α), then it also satisfies (η, 𝒫(κ(η))-ReRo for a prior κ.
In particular, if ℳ satisfies f-DP, it also satisfies (η, 1-f(κ(η)))-ReRo and if it satisfies (ε, δ)-DP, it also satisfies (η, min{e^εκ(η) + δ, 1})-ReRo.
The special case of (ε, 0)-DP appeared previously in <cit.>.
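As a direct numerical illustration of the (ε, δ)-DP case of the theorem, the bound is a one-line computation (a sketch with our own function names):

import math

def rero_from_eps_delta(kappa, eps, delta):
    """gamma = min(e^eps * kappa + delta, 1): the ReRo bound implied by (eps, delta)-DP."""
    return min(math.exp(eps) * kappa + delta, 1.0)

# e.g. a uniform prior kappa = 1/10 over ten candidate records:
# gamma = rero_from_eps_delta(kappa=0.1, eps=1.0, delta=1e-6)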
The theorem's main advantage is that it allows us to think about the relationship between DP and ReRo in terms of statistical power analysis, for which robust tools and an extensive body of theory exist.
Moreover, it explains the finding by <cit.> that directly bounding ReRo using ℬ_κ(η)(μ, ν) instead of taking a detour via Rényi DP results in a tighter bound: ReRo has a natural hypothesis testing interpretation, whereas Rényi DP does not <cit.>.
Furthermore, the theorem establishes ReRo as a weaker guarantee than f-DP in the sense that f-DP bounds 𝒜's supremum power at all levels α∈ [0,1], whereas ReRo is a bound on the supremum power at a single level α = κ(η).
Consequently, achieving ReRo is easier (i.e. requires less noise) than achieving f-DP.
In terms of concrete mechanisms, we obtain the following results:
Let μ_1 = Δ_1/b and
f_Lap(α, μ_1) =
1 - α e^μ_1, if α < (e^-μ_1)/2,
e^-μ_1/(4α), if (e^-μ_1)/2 ≤α≤ 1/2,
(1-α) e^-μ_1, if α > 1/2.
Then, the LM satisfies (η, γ)-ReRo with γ = 1-f_Lap(κ(η), μ_1).
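A minimal numerical sketch of this corollary follows (function names are ours):

import math

def f_lap(alpha, mu1):
    """Trade-off function of the Laplace mechanism, mu1 = Delta_1 / b."""
    if alpha < math.exp(-mu1) / 2:
        return 1.0 - alpha * math.exp(mu1)
    if alpha <= 0.5:
        return math.exp(-mu1) / (4.0 * alpha)
    return (1.0 - alpha) * math.exp(-mu1)

def rero_laplace(kappa, mu1):
    """(eta, gamma)-ReRo of the LM: gamma = 1 - f_Lap(kappa, mu1)."""
    return 1.0 - f_lap(kappa, mu1)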
Let μ_2 = Δ_2/σ and f_Gauss(α, μ_2) = Φ(Φ^-1(1-α) - μ_2), where Φ and Φ^-1 are the cumulative distribution and quantile function of the standard normal distribution. Under N-fold homogeneous composition, the GM satisfies (η, γ)-ReRo with γ = 1-f_Gauss(κ(η), √(N)μ_2).
Under heterogeneous composition of mechanisms with μ_a, μ_b, …, we have γ = 1-f_Gauss(κ(η), √(μ_a^2+μ_b^2+…)).
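Correspondingly, the Gaussian case can be evaluated with scipy (again a sketch with our own function names):

import math
from scipy.stats import norm

def f_gauss(alpha, mu):
    """f_Gauss(alpha, mu) = Phi(Phi^{-1}(1 - alpha) - mu)."""
    return norm.cdf(norm.ppf(1.0 - alpha) - mu)

def rero_gaussian(kappa, mu2, n_steps=1):
    """gamma = 1 - f_Gauss(kappa, sqrt(N) * mu2) under N-fold homogeneous composition."""
    return 1.0 - f_gauss(kappa, math.sqrt(n_steps) * mu2)

def rero_gaussian_hetero(kappa, mus):
    """Heterogeneous composition: gamma = 1 - f_Gauss(kappa, sqrt(mu_a^2 + mu_b^2 + ...))."""
    return 1.0 - f_gauss(kappa, math.sqrt(sum(m * m for m in mus)))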
These two corollaries allow us to obtain an exact bound on ReRo for the respective mechanisms.
Unfortunately, the trade-off functions for the LM under composition and for the SLM and SGM are not available in closed form.
Three distinct options exist for evaluating these functions:
(1) Compute the trade-off functions numerically either through direct numerical integration or e.g. using the technique by <cit.>.
This approach can be optimal in the sense that it can provide an exact bound up to numerical precision (or with a controlled error tolerance).
To obtain a valid ground truth, we use direct numerical integration by performing a grid discretisation over G points and using an arbitrary-precision floating point library such as <cit.>.
This technique is extremely time-consuming, as (for N composition steps) it requires G · N numerical integrations (in neural network applications N = 𝒪(10^4)) and thus only serves as a gold standard.
An approach using the technique by <cit.> can be found in the appendix.
(2) One can leverage an analytic (e.g. Edgeworth or saddle-point) finite sample approximation to the trade-off function which can be computed in constant time for homogeneous composition.
Such approximations are a cornerstone of statistical power analysis <cit.>, and have been previously used for (ε, δ)-DP accounting <cit.>.
For our experiments, we use an improved version of the technique proposed by <cit.>, i.e. a fourth order Edgeworth approximation, which has error 𝒪(N^-2).
(3) Asymptotically, the trade-off function of a (Poisson-)subsampled mechanism with sampling rate p converges to f_Gauss(α, μ̃) with μ̃ = p√(N(e^(μ_2^2)-1)) when p√(N) converges to a positive constant as the number of compositions N→∞ <cit.>.
This so-called CLT approximation is essentially an order zero Edgeworth approximation and has an error of 𝒪(N^-1/2).
We note that, although the MC technique of <cit.> has a nominally even higher error rate of 𝒪((κ N)^-1/2), it performs better than the CLT approximation in practice because it is unbiased, whereas the CLT approximation presupposes that the approximated trade-off function is Gaussian, which leads to poor performance when its assumptions are violated (see experiments below and <cit.> for discussion).
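Under option (3), the asymptotic ReRo bound for the SGM reduces to a two-line computation; the sketch below assumes the CLT expression above and is only as accurate as that approximation.

import math
from scipy.stats import norm

def rero_sgm_clt(kappa, mu2, n_steps, p):
    """CLT (order-zero Edgeworth) approximation: gamma ~ 1 - f_Gauss(kappa, mu_tilde)."""
    mu_tilde = p * math.sqrt(n_steps * (math.exp(mu2 ** 2) - 1.0))
    return 1.0 - norm.cdf(norm.ppf(1.0 - kappa) - mu_tilde)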
Independent of the technique used to approximate the trade-off function, we can formulate the following results:
Let f̃_SLM(α, μ_1, N, p) denote the approximate trade-off function for the SLM with sampling rate 0<p≤ 1 under N-fold composition using one of the approximation techniques above.
Then, the SLM satisfies (η, γ)-ReRo with γ≈ 1-f̃_SLM(κ(η), μ_1, N, p).
Similarly, let f̃_SGM(α, μ_2, N, p) denote the approximate trade-off function for the SGM with sampling rate 0<p<1.
Then, the SGM satisfies (η, γ)-ReRo with γ≈ 1-f̃_SGM(κ(η), μ_2, N, p).
Note that for the SGM, when p=1, we revert to the GM and can use the closed-form bound from Corollary 2 (see Figure 1c below).
We remark for completeness that heterogeneous composition is also possible using the techniques above and that approximations are not necessarily valid upper bounds unless verified, e.g. using the technique by <cit.>.
We omit a detailed discussion of these points due to space constraints.
§ EXPERIMENTAL EVALUATION AND CONCLUSION
Figure <ref> compares the MC estimate <cit.> of γ (10^6 samples) at a fixed prior κ to the asymptotic CLT approximation <cit.>, the fourth-order Edgeworth approximation and the Ground Truth computed by numerical integration (a,b,d) or in closed form (c).
γ is plotted against the effect size (Δ_1/b or Δ_2/σ), corresponding to increasing privacy loss: a: ε_max=20, b/c: ε_max=100, d: ε_max=𝒪(10^8) at δ=10^-6 for c/d.
Observe that in panel d, the MC algorithm of <cit.> already has too high variance to provide an accurate estimate of γ.
This means that the analysis of ImageNet-sized datasets where the values of κ and p are very low and the number of steps N is very high is infeasible using MC (or the Ground Truth) due to memory or time constraints.
In contrast, estimating γ using the Edgeworth approximation yields excellent precision at a constant memory consumption and run time of only about 1.5s, exactly matching the Ground Truth.
Panel e shows γ as a function of κ for a very low p and a very large N, similar to the hyperparameters used by <cit.> when training ImageNet from scratch.
Even at κ=10^-7, our presented techniques allow for estimating γ, and the CLT approximation matches the Edgeworth approximation very well.
Further examples of CIFAR-10 and ImageNet workflows are shown in the Appendix.
Panel f explains the observation by <cit.>, where, at a constant (ε, δ), the authors find that different sampling rates p lead to different values of γ.
The crux of this finding is that the authors of <cit.> choose mechanism parameter combinations which result in the same privacy guarantee in terms of a single (ε, δ)-pair (recall that SGMs are only identical if they coincide for all possible (ε, δ)-pairs).
Thus, mechanisms with different p are fundamentally distinct and thus lead to different γ values across the range of κ.
In particular, the trade-off function (and thus the ReRo bound function) is increasingly asymmetric at low values of p.
As seen in the figure, for κ=0.1 (used by <cit.>), γ is lower at p=0.1 (blue) compared to 0.9 (lavender), matching Figure 6 of <cit.>.
A detailed discussion on this topic can be found in the Appendix.
Conclusion
In this work, we expanded on the connection between ReRo and DP by leveraging hypothesis testing theory and techniques from statistical power estimation.
This allowed us to formulate refined ReRo bounds for relevant DP mechanisms and propose techniques to estimate them with high precision across a broad range of use-cases.
Our results can thus help ML practitioners to evaluate the vulnerability of sensitive data processing systems against data reconstruction attacks, thereby increasing user trust.
In future work, we intend to assess ReRo bound tightness for large vision and language models/datasets, provide ReRo bounds in the shuffle model of DP and for individual privacy accounting schemes, expand our analysis to non-uniform priors other reconstruction error functions and heterogeneous compositions.
§ APPENDIX
§.§ Proofs
Proof of Theorem 1
Let y be a mechanism output, μ, ν be the dominating pair distributions of ℳ and κ(η) ∈ [0,1] be a prior.
Since E is an arbitrary measurable event, we can fix E to be the event of rejecting ℋ_0 (this mirrors the event definitions in Corollary 3 of <cit.> and standard hypothesis testing theory).
Moreover, let ϕ be a rejection rule for ℋ_0: y ∼ν and ℋ_1: y ∼μ.
This is without loss of generality since f can always be considered (or made) symmetric, and thus the following statements also hold when the role of the hypotheses is exchanged, although this is not required to bound ReRo, which only considers the add one adjacency relation.
From the definition of ReRo, γ = ℬ_κ(η)(μ, ν) = sup{ℙ_μ(E) |ℙ_ν(E) ≤κ(η) }.
From our assumption above, ℙ_μ(E)=1-β_ϕ (correctly reject ℋ_0 given ℋ_1) and ℙ_ν(E) = α_ϕ (wrongly reject ℋ_0 given ℋ_0).
Substituting, we obtain γ = sup{ 1-β_ϕ | α_ϕ≤κ(η) }.
In other words, γ exactly corresponds to the supremum power of ϕ given a pre-selected bound on Type-I error rate, i.e. γ = 𝒫(α), and thus a bound on γ is implied by a bound on 𝒫(α) with α=κ(η).
To prove the ReRo bound implied by f-DP, we consider the definition of the trade-off function: f(κ(η)) = inf{β_ϕ | α_ϕ≤κ(η)}.
Since f is convex, continuous and non-increasing on the unit square, 1-f(κ(η)) = sup{ 1-β_ϕ | α_ϕ≤κ(η)} = 𝒫(κ(η)) = γ.
We note that the reverse does not hold in general: bounding ReRo through a bound on γ implies a bound on 𝒫(α) for a specific level α, whereas f-DP implies a bound on 𝒫(α) at all levels α∈ [0,1].
To prove the ReRo bound implied by (ε, δ)-DP, we leverage a result by <cit.>, who show that, if a mechanism satisfies (ε, δ)-DP, it imposes a bound on the power 1-β at a level α of the optimal hypothesis test ϕ such that 1-β_ϕ(α_ϕ) ≤ e^εα_ϕ + δ, i.e. 𝒫(α_ϕ) ≤ e^εα_ϕ + δ.
Finally, we substitute κ(η) as the desired level α_ϕ and take the min since γ is a probability, from which the claim follows.
Algorithm 1 of <cit.> essentially computes an MC estimate of the complementary trade-off function for 𝒩(0, σ^2) vs. (1-p)𝒩(0, σ^2)+p𝒩(Δ_2, σ^2).
The sampling inefficiency and high variance at small values of κ is due to the fact that the algorithm draws S MC samples but discards all but ⌈κ· S ⌉ of them.
This percolates to extreme parameter regimes such as the ones discussed above, necessitating orders of magnitude more samples to be drawn to correctly estimate the bound, which eventually becomes infeasible due to memory constraints.
In terms of the distributions of the likelihood ratio test statistics under ℋ_0 and ℋ_1, constructing 𝒫(α) corresponds to the following steps:
Per the Neyman-Pearson lemma, the optimal test ϕ is realised by thresholding the likelihood ratio test statistics.
Let c be the critical value for rejecting ℋ_0.
Then, (1) determine the value of c for which α_ϕ(c)<κ(η) by computing the quantile function of the test statistic under ℋ_0 at 1-κ(η) and
(2) compute the value of the cumulative distribution function of the test statistic under ℋ_1 evaluated at c.
The likelihood ratios under ℋ_0 and ℋ_1 are also called the privacy loss random variables in DP.
The equivalence between the privacy loss random variables and the test statistics from which the trade-off function f is computed represents the intuitive link between f-DP, (ε, δ)-DP and ReRo and reinforces the pivotal role of the privacy loss random variable.
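The two-step recipe can be made concrete with a small Monte Carlo sketch for the SGM dominating pair discussed above (our own illustration rather than Algorithm 1 verbatim; Δ_2 = 1 is assumed, and the memory footprint scales as n_samples × n_steps, mirroring the S · N array discussed below).

import numpy as np

def rero_sgm_mc(kappa, sigma, p, n_steps, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)

    def log_lr(y):
        # per-step log-likelihood ratio of (1-p) N(0, sigma^2) + p N(1, sigma^2) vs N(0, sigma^2)
        return np.log1p(p * (np.exp((2.0 * y - 1.0) / (2.0 * sigma ** 2)) - 1.0))

    # composed test statistics under H0 (y ~ N(0, sigma^2)) and H1 (y ~ the mixture)
    t0 = log_lr(rng.normal(0.0, sigma, (n_samples, n_steps))).sum(axis=1)
    means = np.where(rng.random((n_samples, n_steps)) < p, 1.0, 0.0)
    t1 = log_lr(rng.normal(means, sigma)).sum(axis=1)

    c = np.quantile(t0, 1.0 - kappa)  # step (1): critical value at level kappa
    return float(np.mean(t1 > c))     # step (2): supremum power, i.e. gamma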
The Edgeworth approximation utilises the cumulant generating functions of the likelihood ratio test statistics computed numerically, followed by a series approximation combined with a numerical inverse for the quantile function.
The CLT approximation is equivalent to an Edgeworth approximation of order zero, rendering it quite inflexible, which explains its poor performance when its assumptions are violated.
Proof of Corollaries 1 and 2
The claims of both corollaries follow directly from the closed-form expressions of the trade-off functions of the LM and the GM.
The derivations of the trade-off functions themselves can be found e.g. in <cit.>.
Proof of Corollary 3
The claims follow from the ReRo bound implied by f-DP proven in Theorem <ref>.
We remark that since we are dealing with trade-off function approximations, minimising the approximation error is crucial for obtaining an exact bound on γ.
§.§ Supplementary Figure
The following figure illustrates further scenarios in which the Edgeworth and CLT approximation yield excellent results, whereas the MC technique of <cit.> would not be usable due to an impracticably high number of MC samples required to obtain an accurate estimate.
Moreover, in these scenarios, the numerical ground truth would take on the order of weeks to compute and is thus unavailable.
In contrast, the Edgeworth and CLT approximations are computable in constant time.
Moreover, the assumptions of the CLT approximation kick in for these parameter values and thus the two methods yield identical results.
The top figure row shows ReRo bounds for CIFAR-10-style workflows with hyperparameters taken from Table 13 of <cit.> (left) and an even smaller sampling rate (right), whereas the bottom row shows ImageNet-style workflows with the hyperparameters from Table 15 of <cit.> (left) and an even smaller batch size (right).
The bottom right panel is identical to Figure 1, panel e in the main manuscript.
For all panels, κ∈ [10^-7, 10^-1].
§.§ Experimental details
Conversion to (ε, δ)-DP
Conversions to (ε, δ)-DP were performed as follows:
* For the LM, following the simple composition theorem: ε = NΔ_1/b.
* For the SLM, following <cit.>: ε = log(1 + p(e^NΔ_1/b-1)).
* For the GM, following <cit.>: Compute δ(ε) = Φ(-σε/Δ_2 + Δ_2/2σ) - e^εΦ(-σε/Δ_2 - Δ_2/2σ), then solve for ε at a given δ numerically.
* For the SGM, following <cit.>: Compute the (symmetrised) trade-off function, then compute the convex conjugate numerically and solve for ε at a given δ.
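As an illustration of the third item, the Gaussian-mechanism conversion can be inverted numerically as follows (a sketch with our own function names; the bracketing interval for the root finder is an assumption).

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def gm_delta(eps, mu):
    """delta(eps) for the Gaussian mechanism, mu = Delta_2 / sigma."""
    return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)

def gm_eps(delta, mu, eps_hi=200.0):
    """Numerically solve delta(eps) = delta for eps at a given delta."""
    return brentq(lambda e: gm_delta(e, mu) - delta, 0.0, eps_hi)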
Details of numerical techniques
The numerical Ground Truth was evaluated by using the technique proposed in Section 5.1 of <cit.> with G=1 000 grid points and using 25 digits of numerical precision in <cit.> (for reference, a 64-bit floating point value provides ≈ 15 digits of precision).
We recall that this technique requires one numerical integral per step N and grid point, rendering it extremely time consuming and impracticable for any use beyond establishing a gold standard.
The fourth-order Edgeworth approximation was computed as previously described (see Section 3.1 of <cit.>).
However, we expanded the Edgeworth series up to order four as described in the main manuscript.
Moreover, the original work <cit.> only approximates the trade-off function for only one of the two dominating pairs of the SGM (𝒩(0, σ^2), (1-p)𝒩(0, σ^2)+p𝒩(Δ_2, σ^2)).
Whenever required (e.g. for conversions to (ε, δ)-DP or for Figure 1, panel e), we also instantiated the trade-off function for the other dominating pair ((1-p)𝒩(Δ_2, σ^2)+p𝒩(0, σ^2), 𝒩(Δ_2, σ^2)) and obtained the symmetrisation/convexification of the two trade-off functions, in line with the assumption that the trade-off function is symmetric.
Monte Carlo (MC) estimation of γ was performed according to Algorithm 1 of <cit.>.
All MC experiments were performed with S=1 000 000 samples.
We used multi-core sampling with 16 concurrent processes on a single 2019 Apple MacBook Pro with an 8 core Intel i9 CPU and 64 GB of memory.
The CLT and Edgeworth approximations have constant run time, the latter provided the composition is homogeneous (i.e. the effect size is constant over all N).
In terms of memory usage, the MC algorithm allocates an array of size S · N, where S is the number of MC samples and N is the number of SGM steps.
The numerical Ground Truth, Edgeworth and CLT approximations require constant memory.
§.§ Discussion of ReRo bound sensitivity to subsampling probability
In <cit.>, the authors found that the ReRo upper bound is dependent on the subsampling probability, p.
They showed this by fixing the number of steps in DP-SGD and the gradient clipping norm, and finding a σ that would give a fixed (ε, δ)-DP guarantee across different subsampling rates.
In <cit.>, authors chose a small number of steps (100) for this experiment, due to the computational overhead of their MC estimation method.
For this relatively small number of compositions, the CLT assumption for Gaussian DP is not yet fully in effect, meaning that the mechanisms the authors selected at different values of p are not identical; they only intersect at a specific choice of ε and δ.
We plot this in <Ref>, and show the corresponding trade-off curves for the add one and remove one adjacency relations along with their symmetrised version (see Definition F.1 in <cit.>).
When the CLT does not apply, this comparison is across fundamentally distinct mechanisms with different trade-off curves, and so the upper bounds for ReRo are different.
We compare different values of subsampling probabilities when the CLT is assumed to hold (number of steps N=10,000).
From <Ref>, the three mechanisms intersect at identical (ε, δ) pairs, and so they are identical mechanisms.
In the right figure, we plot the trade-off curves under the assumption that the CLT is valid [We use μ̃ = p√(N(e^(1/σ^2) - 1)), so the collection of all σ that yield the same μ̃ can be found through σ = 1/√(log(1 + μ̃^2/(N p^2))); see <cit.> for details.].
For each mechanism, we also numerically compute its privacy profile using <cit.> and convert to a trade-off curve using the (ε, δ) trade-off function (Eq. 5 in <cit.>).
These all coincide perfectly.
When the CLT holds, the mechanisms are identical, the trade-off curves are independent of p, and since the curves are symmetric, the add one and the remove one curves are one and the same.
|
http://arxiv.org/abs/2307.04290v1 | 20230710004342 | The Electronic Structure of the Hydrogen Molecule: A Tutorial Exercise in Classical and Quantum Computation | [
"Vincent Graves",
"Christoph Sünderhauf",
"Nick S. Blunt",
"Róbert Izsák",
"Milán Szőri"
] | physics.chem-ph | [
"physics.chem-ph",
"quant-ph"
] |
[Figure: The potential energy curves of H_2 and a simple quantum circuit associated with them.]
In this educational paper, we will discuss calculations on the hydrogen molecule both on classical and quantum computers. In the former case, we will discuss the calculation of molecular integrals that can then be used to calculate potential energy curves at the Hartree–Fock level and to correct them by obtaining the exact results for all states in the minimal basis. Some aspects of spin-symmetry will also be discussed. In the case of quantum computing, we will start out from the second-quantized Hamiltonian and qubit mappings. Using quantum phase estimation, we then provide the circuits for two different algorithms: Trotteization and qubitization. Finally, the significance of quantum error correction will be briefly discussed.
§ INTRODUCTION
It has been almost 20 years since Prof. Csizmadia, known simply to his students as IGC, gave a series of lectures at the University of Szeged about theoretical calculations and their relevance to organic chemistry. As students (R. I., M. Sz.) attending these lectures, we knew that he had studied with Slater and was a professor of international standing who had been associated with the University of Toronto for a long time. While we had already heard of the basics of quantum mechanics and quantum chemistry, we were all eager to know more about their applications to chemical problems that we had also encountered by then in the organic chemistry lab. Who better to tell us about that than IGC, who was among the pioneers of applying Gaussian orbitals to organic molecules and was among the authors of POLYATOM<cit.>, the first program package that could carry out such calculations? Despite his many scientific achievements, IGC never talked much about the past except to explain something to his students. He had a unique style that can be discerned from some of his writings<cit.> but that worked best in the classroom. We all remember his simple explanations of complicated mathematical subjects, usually accompanied by student-friendly illustrations that he simply called Mickey Mouse Figures. Apart from his knowledge on chemical calculation and his accessible lecturing style, his dedication set him apart from most teachers we had known: few people would have given a two-hour long lecture when struggling with a whooping cough that threatened to strangle him. In our contribution to this special issue commemorating his achievements, we would like to pay tribute to him as an educator by providing an educational introduction to quantum chemistry methods, using the hydrogen molecule as an important first example. While IGC would have probably preferred an organic molecule and might have given less detail about the calculation than we intend to, he would have certainly approved of our using the simplest Gaussian orbital basis possible and we hope that such a simple model calculation will help the determined student to understand the machinery underlying modern quantum chemistry calculations. It is in this spirit that we offer this contribution to his memory.
§ THEORETICAL BACKGROUND
§.§ The Hartree–Fock Method
Chemistry investigates the myriad of ways molecules may interact. With the advent of quantum mechanics, it became possible to explain these interactions in terms of those between the electrons and nuclei that make up molecules. Unfortunately, this leaves us with a large number of variables to consider if we want to describe everything that takes place in a chemist's flask. To make the problem easier to solve, several further assumptions are made beyond the axioms of quantum mechanics and special relativity needed to describe chemical systems. To start with, akin to usual practice in thermodynamics, we may divide the universe into a system and its environment. The system is simply the part of the world we are interested in, and, for chemical purposes, this might be a number of atoms and molecules. As a first approximation, we will consider only particles in the system and neglect interactions with the environment. We will further neglect relativistic effects and the time dependence of the states of the system. Within the Born-Oppenheimer approximation, the nuclear and electronic variables are separated and the electronic problem is solved for fixed nuclear coordinates. The electronic Hamiltonian then takes the form
Ĥ=
-∑_i1/2∇_i^2
+∑_A<BZ_A Z_B/|𝐑_A-𝐑_B|
-∑_iAZ_A/|𝐫_i-𝐑_A|
+∑_i<j1/|𝐫_i-𝐫_j|,
where the indices i,j denote electrons and A,B nuclei, 𝐫_i is the position of an electron and 𝐑_A is that of a nucleus, Z_A is its charge number. The terms in the order they appear are the kinetic energy of the electrons, the potential energy of nuclear-nuclear, nuclear-electron and electron-electron interactions.
Although the approximations so far simplify the problem considerably, the resulting quantum mechanical problem remains intractable. In the next step, the variables describing individual electrons are also separated. To fulfil the condition of antisymmetry required by the exclusion principle, an approximate many-electron wavefunction is constructed as the antisymmetrized product of functions describing a single electron
Φ = 1/√(N!)
| ϕ_1(𝐱_1) ϕ_2(𝐱_1) ⋯ ϕ_N(𝐱_1) |
| ϕ_1(𝐱_2) ϕ_2(𝐱_2) ⋯ ϕ_N(𝐱_2) |
| ⋮ ⋮ ⋱ ⋮ |
| ϕ_1(𝐱_N) ϕ_2(𝐱_N) ⋯ ϕ_N(𝐱_N) |
The function Φ is called the Slater determinant and the one-electron functions ϕ_p are the spin orbitals.<cit.> Since the latter are orthonormal, the norm of Φ is also 1. The energy of the system can then be obtained as the expectation value of of the Hamiltonian with respect to the Slater determinant,
E = ⟨Φ|Ĥ|Φ⟩,
implying an integration over all electronic coordinates 𝐱_p. At this point, the spin variable of an electron (s_p) can also be separated from the spatial coordinates (𝐫_p), to yield spatial orbitals φ_p,
ϕ_p(𝐱_p) = φ_p(𝐫_p)σ_p(s_p).
For labeling the orbitals, we will use the convention that i,j,… refer to orbitals occupied in the Hartree–Fock ground state, a,b,… are unoccupied and p,q,… could refer to any molecular orbital. The spin-function σ can denote a spin-up (α) or a spin-down (β) state of a single electron identified by s_p. Often, the above product is denoted simply as pσ. If necessary, spin-orbital and spatial orbital labels can be distinguished by capitalizing one of them. Here, we opt for capitalizing the spin-orbital labels, which leads to the compact notation P=pσ or P=pσ_p. Sometimes it is convenient to refer to spin orbitals in terms of their spatial component. This purpose is served by the `relative' spin notation in which the product pα is simply referred to as p and pβ as p̅. Using the fact that the spin functions α and β are orthonormal, the following expression can be obtained for the Hartree–Fock energy using the Slater-Condon rules<cit.>
E = ⟨Φ |Ĥ|Φ⟩ = E_n + 2∑_i (i|ĥ|i) + 2∑_ij(ii|jj)-∑_ij(ij|ij),
where E_n is simply the nuclear-nuclear interaction term in Eq. (<ref>). The one electron integral reads
(p|ĥ|q) = ∫φ_p^*(𝐫) ĥ(𝐫) φ_q(𝐫) d𝐫,
with ĥ containing the kinetic energy term of the electrons and the nuclear-electron interaction energy in Eq. (<ref>). Finally, the two-electron term is
(pq|rs) = ∬φ^*_p(𝐫_1)φ_q(𝐫_1)φ^*_r(𝐫_2)φ_s(𝐫_2)/|𝐫_1-𝐫_2| d𝐫_1d𝐫_2,
and represents the remaining electron-electron interaction term in Eq. (<ref>). Note that these integrals are defined in terms of spatial orbitals but the spin-orbital equivalents are easily defined as (P|ĥ|Q)=(p|ĥ|q)δ_σ_pσ_q and (PQ|RS)=(pq|rs)δ_σ_pσ_qδ_σ_rσ_s.
To find the lowest-energy determinant Φ, the energy needs to be minimized under the constraint that Φ is normalized. This is a quite involved task in general, and in a final approximation,
the molecular orbitals (MO) φ_p are expanded in terms of known atomic orbitals (AO) serving as basis functions,
φ_p = ∑_μ C_μ pχ_μ.
Here C_μ p is an element of the MO coefficient matrix 𝐂. This yields the algebraic form of the Hartree–Fock equations, sometimes called the Hartree–Fock-Roothaan-Hall equations,<cit.>
𝐅𝐂 = 𝐒𝐂𝐄,
with the elements of the Fock matrix 𝐅 and the overlap matrix 𝐒 defined as
F_μν = ∫χ_μ(𝐫)f̂(𝐫)χ_ν(𝐫) d𝐫,
S_μν = ∫χ_μ(𝐫)χ_ν(𝐫) d𝐫,
and 𝐄 being the diagonal matrix containing the molecular orbital energies. Note that from this point on we will assume that quantities are real and will not denote complex conjugation any more. As the Fock operator
f̂[{φ_i}](𝐫_1) = ĥ(𝐫_1) + ∑_j∫φ_j(𝐫_2;𝐑)2-P̂_12/|𝐫_1-𝐫_2|φ_j(𝐫_2;𝐑) d𝐫_2,
itself depends on the orbitals that we seek to optimize, this is a self-consistent eigenvalue problem, the solutions of which must be found in an iterative manner. The operator P̂_12 swaps the coordinate labels to account for antisymmetry. The resulting Fock matrix has the general form
F_μν = h_μν + G_μν,
where the core term h_μν itself consists of two contributions,
h_μν = T_μν + V_μν,
with
T_μν = -1/2∫χ_μ(𝐫)∇^2 χ_ν(𝐫) d𝐫,
and
V_μν = ∑_A V_μν(A),
V_μν(A) = -Z_A∫χ_μ(𝐫)χ_ν(𝐫)/|𝐫-𝐑_A| d𝐫,
while the electronic interaction term consists of a direct Coulomb term and an exchange term,
G_μν = ∑_κλP_κλ (μν|κλ)
-1/2∑_κλP_κλ (μκ|νλ)
with
(μν|κλ) = ∬χ_μ(𝐫_1)χ_ν(𝐫_1)χ_κ(𝐫_2)χ_λ(𝐫_2)/|𝐫_1-𝐫_2| d𝐫_1d𝐫_2,
where the charge-density matrix is defined as
P_κλ = 2∑_i C_κ i C_λ i.
Finally, the spin-restricted Hartree–Fock (RHF) energy can be written in terms of the AO quantities as
E_RHF = E_n + 1/2∑_μνP_μν(h_μν+F_μν).
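The self-consistent procedure described above can also be summarized in a short computational sketch. The following Python fragment is an illustrative addition, not part of the original derivation; it assumes that the AO quantities are supplied as numpy arrays named h (core Hamiltonian), S (overlap) and eri (two-electron integrals in chemist notation), with n_occ doubly occupied orbitals.

import numpy as np
from scipy.linalg import eigh  # generalized eigensolver for F C = S C E

def rhf_scf(h, S, eri, n_occ, E_nuc=0.0, max_iter=50, tol=1e-10):
    """Minimal restricted Hartree-Fock loop; h, S, eri and n_occ are assumed inputs."""
    n = h.shape[0]
    P = np.zeros((n, n))                        # initial guess for the charge-density matrix
    E_old = 0.0
    for _ in range(max_iter):
        # G_{mu nu} = sum_{kl} P_{kl} [ (mu nu|kappa lambda) - 1/2 (mu kappa|nu lambda) ]
        G = np.einsum('kl,mnkl->mn', P, eri) - 0.5 * np.einsum('kl,mknl->mn', P, eri)
        F = h + G                               # Fock matrix
        eps, C = eigh(F, S)                     # Roothaan-Hall equations F C = S C E
        C_occ = C[:, :n_occ]
        P = 2.0 * C_occ @ C_occ.T               # P_{kappa lambda} = 2 sum_i C_{kappa i} C_{lambda i}
        E = E_nuc + 0.5 * np.sum(P * (h + F))   # E_RHF = E_n + 1/2 sum P (h + F)
        if abs(E - E_old) < tol:
            break
        E_old = E
    return E, C, eps

For the minimal-basis H_2 problem of the following sections the same quantities can be written down analytically, but the loop shows the general structure of the iteration.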
§.§ Electron Correlation
The Hartree–Fock solution has several deficiencies that originate in the approximations made. Over the decades several methods have been devised that improve one or more of these approximations, starting from the Hartree–Fock solution.<cit.> Other than the choice of the AO basis, improving on the treatment of interelectronic interactions in the Hartree–Fock method is the most important issue in practical calculations. In particular, in the Hartree–Fock model, the electrons with parallel and anti-parallel spins are treated differently in that the probability of finding two electrons at the same place is zero in the former case (presence of a Fermi hole) and non-zero in the latter (lack of a Coulomb hole). This is a consequence of representing the many-body wavefunction using a single Slater determinant. However, the manifold of Slater determinants can be used to build an improved wavefunction Ψ as a linear combination
Ψ = ∑_I 𝒞_IΦ_I.
If the expansion contains all possible Slater determinants in a given basis, then this full configuration interaction (FCI) expansion will yield the exact solution in that basis, provided the normalized coefficients 𝒞_I are optimized. To find these, the following eigenproblem must be solved
𝐇𝒞=ℰ𝒞,
where 𝐇 is the matrix representation of the Hamiltonian with elements
H_IJ = ⟨Φ_I | Ĥ | Φ_J⟩.
The correlation energy E_c is then the difference between the exact energy ℰ and the Hartree–Fock energy E,
E_c = ℰ - E.
However, determinants are not the only basis in which Ψ can be expanded. Unlike determinants, configuration state functions (CSF)<cit.> are eigenfunctions of the total spin squared operator,
Θ = ∑_I D_IΦ_I,
where D_I are fixed coefficients and Θ is a CSF. For a given spin sublevel, there are in fact fewer CSFs than there are determinants. Using CSFs instead of determinants may change how many basis states have large coefficients in the FCI expansion, as does rotating the orbitals between occupied and virtual spaces.
§ THE HYDROGEN MOLECULE IN A MINIMAL BASIS
§.§ Possible States and Their Long Range Behaviour
Consider a single H_2 molecule and let χ_μ be an atomic orbital<cit.> (AO) on one of the H atoms, and χ_ν another AO on the other H atom. Because this simple problem has only two basis functions and is highly symmetric, it is possible to describe some of its properties, especially at large internuclear separations, even without solving the HF and FCI equations. In this section, we will discuss such general considerations, then move on to the actual calculations.
Due to the symmetries of H_2, we know that there are only two possibilities of combining χ_μ and χ_ν into molecular orbitals. The MO coefficients must have the same magnitude with the same or opposite signs. After normalization, this yields the bonding orbital φ_i
φ_i = 1/√(2(1+S_μν))(χ_μ + χ_ν),
while the anti-bonding orbital φ_a has the form
φ_a = 1/√(2(1-S_μν))(χ_μ - χ_ν).
In terms of the MO coefficient matrix 𝐂, this means that
𝐂 =
[ 1/√(2(1+S_μν)) 1/√(2(1-S_μν)); 1/√(2(1+S_μν)) -1/√(2(1-S_μν)); ].
For the purposes of analysing long range behaviour, the AOs can be assumed to be orthonormal, since the two AOs barely overlap at large bond lengths. Thus, for the remainder of this section, we will assume that S_μν≈ 0. The lowest-energy RHF determinant is then
Φ_0 = |ii̅|,
where
|ii̅|≡ |φ_iφ̅_i| =
1/√(2)(φ_i(1)φ̅_i(2) - φ_i(2)φ̅_i(1)).
The bar in φ̅_i denotes that the spatial orbital φ_i is occupied by an electron with β spin. Substituting Eq. (<ref>), one gets
Φ_0 = Φ_C0 + Φ_I0,
where Φ_0 consists of a covalent part
Φ_C0 = 1/2(|μν̅| + |νμ̅|),
and an ionic part
Φ_I0 = 1/2(|μμ̅| + |νν̅|).
The covalent contribution consists of AO basis determinants in which one electron is assigned to one H atom via μ and the other electron to the other H atom via ν, which is what is expected in a homolytic dissociation process. The ionic contribution on the other hand consists of AO determinants in which both electrons are assigned to one atom only. The fact that in Φ_0 the two contributions come with an equal weight leads to what is known as the dissociation catastrophe of Hartree–Fock theory. Since Φ_C0 describes a homolytic and Φ_I0 a heterolytic process and since the latter requires a much higher energy, the total dissociation curve produces an artificially large dissociation energy for the homolytic process. The customary solution is to construct the doubly excited determinant
Φ_1 = |aa̅|,
which, after following a similar procedure as above, is found to be
Φ_1 = -Φ_C0 + Φ_I0.
We may now define a two-determinant trial wavefunction
Ψ_0 = 𝒞_0Φ_0 + 𝒞_1Φ_1,
which can be written as
Ψ_0 = (𝒞_0-𝒞_1)Φ_C0 + (𝒞_0+𝒞_1)Φ_I0.
It is clear that if 𝒞_0=-𝒞_1, the ionic contribution vanishes and the covalent contribution survives. This trial function has the necessary flexibility to describe the entire curve in a qualitatively correct way: close to the equilibrium, 𝒞_0≈ 1, which agrees well with the fact that HF is a good description of the H_2 molecule at equilibrium distance. This analysis is also identical to the result obtained from a valence bond (VB) construction of the wavefunction. Similar results can be obtained for the correct heterolytic curve starting from the doubly excited determinant Φ_1.
So far we have only considered the ground state and the doubly excited state within the minimal basis. When it comes to singly excited states, it is useful to represent them using CSFs, i.e., linear combinations of determinants that are spin-eigenstates, as mentioned above. For a singlet state, this has the form
Θ_S = 1/√(2)(|ia̅| - |i̅a|).
This wavefunction can also be analyzed in terms of AO basis determinants,
|ia̅| = -Φ_C1 + Φ_I1,
|ai̅| = Φ_C1 + Φ_I1,
where the ionic and covalent contributions are
Φ_C1 = 1/2(|μν̅| - |νμ̅|),
Φ_I1 = 1/2(|μμ̅| - |νν̅|).
Therefore,
Θ_S = √(2)Φ_I1,
which means that this is a fully ionic solution at long distance. Similarly, the three degenerate triplet states,
Φ_T^+ = |ia|=-|μν|,
Θ_T = 1/√(2)(|ia̅| + |i̅a|)=√(2)Φ_C1,
Φ_T^- = |i̅a̅|=-|μ̅ν̅|,
all of which have a covalent character.
§.§ The Necessary Integrals
The calculation of the electronic energy requires the construction of the integrals in Eq. (<ref>). The simplest model that can be evaluated without the aid of a computer assumes that the basis functions χ_μ and χ_ν are simple normalized Gaussians<cit.>
χ_μ = (2α/π)^3/4e^-α(𝐫+𝐑)^2,
χ_ν = (2α/π)^3/4e^-α(𝐫-𝐑)^2,
where we have assumed that the two atoms are at an equal distance 𝐑 from the origin. Without loss of generality, we may choose the molecule to lie along the x-axis, i.e., that 𝐑=(R,0,0). Thus, the above Gaussians decompose into
(2α/π)^3/4 e^-α(𝐫±𝐑)^2
=(2α/π)^3/4
e^-α(x± R)^2
e^-α y^2
e^-α z^2.
While the multiplication of the same Gaussians is easily evaluated, when different Gaussians are multiplied, the Gaussian product theorem applies
e^-α(𝐫±𝐑)^2e^-α(𝐫∓𝐑)^2 = e^-2α R^2 e^-2α𝐫^2.
The overlap integrals are then
S_μμ = S_νν =
(2α/π)^3/2∫ e^-2α(𝐫±𝐑)^2 d𝐫=1,
by normalization, and
S_μν = S_νμ =
(2α/π)^3/2 e^-2α R^2∫ e^-2α𝐫^2 d𝐫=e^-2α R^2.
The two unique values of the kinetic energy integral T_μν can be determined by differentiation and integration by parts. The remaining integrals contain the Coulomb operator in some form. The nuclear-electronic attraction term also depends on the position of the nuclei, yielding integrals of the form V_μν(A) and V_μν(B), where A and B denote the nuclei on which χ_μ and χ_ν are centered, respectively. There are altogether three unique values of these integrals, while the two-electron integrals (μν|κλ) may assume four distinct values, all listed in the Supplementary Material. We note that the evaluation of the Coulomb integrals is significantly simplified by the application of the Gaussian integral, e.g.,
1/|𝐫±𝐑| =
1/√(π)∫^+∞_-∞
e^-(𝐫±𝐑)^2 t^2 d t,
as discussed in detail elsewhere.<cit.> This introduces another Gaussian function beyond the AOs χ_μ and χ_ν, and thus the usual product rules and integration techniques apply. In particular, the change of variables of the type u^2 = t^2/(2α + t^2) simplifies the evaluation of the integrals significantly.
Once the two-electron integrals are known, the most general form of the effective two-body term for two atomic orbitals can be written as
G_μμ(𝐏)
= 1/2 (μμ|μμ) P_μμ
+ (μμ|μν) P_μν
+ (μμ|νν) P_νν
- 1/2 (μν|μν) P_νν,
G_νν(𝐏)
= (νν|μμ) P_μμ
+ (νν|μν) P_μν
+ 1/2 (νν|νν) P_νν
- 1/2 (νμ|νμ) P_μμ,
G_μν(𝐏) = G_νμ(𝐏)
= 1/2 (μν|μμ) P_μμ
+ 3/2 (μν|μν) P_μν
+ 1/2 (μν|νν) P_νν
- 1/2 (μμ|νν) P_μν.
Here, we have only used the fact P_μν is symmetric.
§.§ The Orbital Exponent
These formulae can be evaluated for any R once the exponent α is known. We may determine this by assuming that the single Gaussian considered here is an STO-1G orbital, i.e., one in which a single Gaussian (1G) is used to fit a Slater type orbital (STO). The exponent α_0 may be obtained by maximizing the overlap<cit.>
⟨ψ_1s|χ_μ⟩ = √(ζ_0^3/π)(2α_0/π)^3/4∫ e^-ζ_0 |𝐫|e^-α_0 𝐫^2 d𝐫.
Assuming that ζ_0=1, as in the H atom, this yields α_0≈ 0.270950. As α_0 is associated with |𝐫|^2 and ζ_0 with |𝐫|, it is customary<cit.> to rescale α using
α = α_0ζ^2,
if the value of ζ is different from ζ_0=1. A change in the value of ζ would reflect the change in the STO as a result of the molecular environment. Thus, to find an optimal ζ, the energy of an H atom may be optimized as a function of ζ. This energy is simply given as
E_H = T_μμ + V_μμ(A) = 3/2α -2√(2α/π),
and the optimization yields
ζ = 2/3√(2/πα_0),
yielding
α = 8/9π,
which is approximately α≈ 0.282942. This choice of α yields the best energy value obtainable for the H atom using a single atom-centered Gaussian, E_H=-4/3π≈ -0.424413, still relatively far off from the exact value of -1/2 in atomic units.
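This optimization is easy to reproduce numerically. The short sketch below is an illustrative addition; it only uses the energy expression E_H(α) = 3α/2 - 2√(2α/π) quoted above and compares the numerical optimum with the closed-form values.

import numpy as np
from scipy.optimize import minimize_scalar

E_H = lambda a: 1.5 * a - 2.0 * np.sqrt(2.0 * a / np.pi)   # H-atom energy with a single Gaussian
res = minimize_scalar(E_H, bounds=(0.01, 2.0), method='bounded')
print(res.x, 8.0 / (9.0 * np.pi))       # both are approximately 0.282942
print(res.fun, -4.0 / (3.0 * np.pi))    # both are approximately -0.424413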
§.§ The RHF Potential Energy Curves
The energy expression in Eq. (<ref>) is particularly simple for the ground state of H_2,
E_0 = ⟨Φ_0|Ĥ|Φ_0⟩ = E_n + 2 (i|ĥ|i) + (ii|ii).
Once the integrals are constructed and an initial guess of 𝐂 is found, the next step should be to build the Fock matrix and optimize P_μν iteratively. Fortunately, the symmetry adapted orbitals in Eq. (<ref>) and Eq. (<ref>) turn out to be the self-consistent solutions of the Hartree–Fock equations. To see this, it is enough to show that G_μμ=G_νν, since for a real symmetric two-by-two matrix with identical diagonal elements, the eigenvectors have the form (a,a) or (a,-a) for some value a, usually fixed by normalization. With these assumptions, the charge density matrix is
𝐏 = 1/1+S_μν[ 1 1; 1 1 ].
Substituting this into Eqs. (<ref>), (<ref>), (<ref>) leads to simplifications which are discussed in more detail in the Supplementary Material, the most important of which is that G_μμ(𝐏) = G_νν(𝐏). Once these quantities are available, the Fock matrix can be built as in Eq. (<ref>), while the energy can be obtained as in Eq. (<ref>), using the integrals discussed in Sec. <ref>. These steps and the final analytic formulae are discussed in more detail in the Supplementary Material.
To obtain the doubly excited state Φ_1, Eq. (<ref>) should be evaluated. Fortunately, for H_2 in the minimal basis, a simpler route is available by simply relabeling all i to a in the energy formula,
E_1 = ⟨Φ_1|Ĥ|Φ_1⟩ = E_n + 2 (a|ĥ|a) + (aa|aa).
This amounts to constructing a new density,
𝐏̅ = 1/1-S_μν[ 1 -1; -1 1; ],
which then produces a modified G(𝐏̅) matrix. The procedure from this point is very similar to the case of E_0 and is detailed in the Supplementary Material.
Finally, the HF energies, E_S and E_T, of the singly excited singlet and triplet states can be obtained from Eq. (<ref>), by using the Slater-Condon rules,<cit.>
E_S = ⟨Θ_S|Ĥ|Θ_S⟩ = E_n + (i|ĥ|i) + (a|ĥ|a) + (ii|aa) + (ia|ia),
and
E_T = ⟨Θ_T|Ĥ|Θ_T⟩ = E_n + (i|ĥ|i) + (a|ĥ|a) + (ii|aa) - (ia|ia).
The AO expressions and the final analytical formulae are again given in the Supplementary Material.
Fig. <ref> displays the dissociation curves Δ E_X = E_X - 2E_H for all the possible Hartree–Fock states in the minimal basis. Thus, E_X can be the RHF energy of the 1^1Σ^+_g singlet ground state (E_0), the doubly excited 2^1Σ^+_g singlet state (E_1), the singly excited 1^1Σ^+_u singlet state (E_S) and one of the degenerate 1^3Σ^+_u triplet states (E_T). Around the equilibrium distance, all curves behave reasonably, the ground state and the singly excited state have a minimum indicating a stable structure for H_2 in these states. As the two H atoms are pulled apart, the ground state and the doubly excited states converge. From the formulae provided in the Supplementary Material, it is easily seen that Δ E_0 and Δ E_1 both converge to the value √(α/π) as the internuclear distance D goes to infinity, while Δ E_S approaches 2√(α/π) and Δ E_T goes to 0 as D→∞. The fact that the ground state curve in particular does not approach zero is often referred to as the `dissociation catastrophe' of the RHF method.<cit.> It shows that RHF does not produce two H atoms at infinite distance, but due to the weight of the ionic contributions mentioned in Sec. <ref>, it significantly overshoots, although it should be noted that it is still well under the purely ionic limit at 2√(α/π). One way to solve this problem is to mix various states of the same spin and spatial symmetry; we will consider this approach in the next section.
§.§ The FCI Potential Energy Curves
To overcome the problems of the RHF method, the wavefunction can also be expanded as in Eq. (<ref>). This means that the matrix Hamiltonian in the basis of many-electron basis states shown in Eq. (<ref>) must be diagonalized. Notice that none of the singlet RHF states mixes with the triplet states, as they have different spin symmetry, and Θ_S does not mix with Φ_0 or Φ_1, as they have different spatial symmetry. Thus, the only non-zero off-diagonal elements in Eq. (<ref>) are those between Φ_0 and Φ_1, yielding a conveniently simple two-by-two matrix
𝐇=
[ E_0 g; g E_1; ],
where g = ⟨Φ_1|Ĥ|Φ_0⟩ = (ia|ia), discussed more explicitly in the Supplementary Material.
As discussed before in Sec. <ref>, the mixture of Φ_0 and Φ_1 is enough to produce the correct ground-state solution in the minimal basis, due to the cancellation of ionic terms. The eigenvalues of 𝐇, shown in Fig. <ref>, are
E_± = A ± 1/2√(ω^2+4g^2),
A = 1/2(E_0 + E_1), ω = E_1-E_0,
where A is the average RHF energy of the two states, while ω is the excitation energy. Using the formulae of the Supplementary Material, it is now easy to show that the FCI solution with the minus sign, E_-, approaches 0 as D→∞, corresponding to the correct covalent dissociation limit. Furthermore, the other solution, E_+, converges to the correct ionic limit 2√(α/π). Thus, within the minimal basis, only the triplet and the singly excited singlet states are described correctly at the RHF level; it is necessary to mix two RHF states to recover the exact solutions for the other two. As we will see in the next section, there is an alternative: breaking the spin symmetry also removes the dissociation catastrophe.
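The two-by-two eigenvalue problem can be checked directly. The following sketch, an illustrative addition with arbitrary placeholder values for E_0, E_1 and g (not the actual H_2 quantities), compares a numerical diagonalization of 𝐇 with the closed-form expression for E_±.

import numpy as np

E0, E1, g = -1.10, -0.35, 0.18                 # illustrative values only
H = np.array([[E0, g], [g, E1]])
A, w = 0.5 * (E0 + E1), E1 - E0
E_minus = A - 0.5 * np.sqrt(w**2 + 4.0 * g**2)
E_plus  = A + 0.5 * np.sqrt(w**2 + 4.0 * g**2)
print(np.linalg.eigvalsh(H))                   # agrees with (E_minus, E_plus)
print(E_minus, E_plus)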
§.§ The UHF Potential Energy Curves
The RHF solution in Eq. (<ref>) has the property that C_μ i = C_ν i for the occupied MO i. The unrestricted Hartree–Fock (UHF) model differs from RHF in that there are two different sets of spatial orbitals for electrons with alpha and beta spins, i_α and i_β. Since these MOs span the same space as the RHF solution, we may represent them using the RHF orbitals as a basis,<cit.>
φ_i_α = U^α_iiφ_i + U^α_iaφ_a,
φ_i_β = U^β_iiφ_i + U^β_iaφ_a.
with U^α and U^β being the unitary transformations that yield i_α and i_β. In this case, the MO coefficients belonging to the two AOs are not fixed by symmetry and need not have the same magnitude, i.e., C_μ i_σ≠ C_ν i_σ, for σ=α, β. On the other hand, the number of alpha electrons equals the number of beta electrons, which is reflected in the solution: both alpha and beta orbitals can be obtained from the RHF ones by a rotation of the same angle but opposite direction, i.e., U^α=U^β†, U^α_ii=U^β_ii≡ U_ii and U^α_ia=-U^β_ia≡ U_ia. Substitution into the UHF determinant gives
Φ_UHF = |i_αi̅_β| = U_ii^2 Φ_0 - U_ia^2Φ_1 +
√(2)U_iiU_iaΘ_T,
which reveals the spin-symmetry-broken nature of the UHF wavefunction since it mixes singlet and triplet states. Evaluating the energy as an expectation value gives
E_UHF =
U_ii^4 (E_0 + E_1 +2g -2E_T) +2U^2_ii(E_T -E_1 - g) + E_1,
where the normalization condition was used to eliminate U_ia. This expression can be minimized as a function of U_ii with the result
U_ii^2 = E_1 + g -E_T/E_0 + E_1 +2g -2E_T.
Note that because of normalization, it must be true that U_ii^2≤ 1. It turns out that the above function decreases monotonically as a function of D and approaches 0.5 at infinity. Thus, we need only find out where U_ii^2=1, which is the case if
E_0 +g -E_T = 0.
This happens at the Coulson-Fischer point at a distance of D_CF = 2.4653.
Thus, the optimized UHF energy for the ground state becomes
E_UHF =
E_0, D<D_CF,
E_1 - (E_1+g-E_T)^2/E_0+E_1+2g-2E_T, D_CF≤ D.
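The branching of the UHF energy can be illustrated with a small numerical sketch. The values of E_0, E_1, E_T and g below are placeholders for a single bond length; in an actual calculation they would be evaluated from the closed-form expressions of the Supplementary Material.

# Placeholder values at one bond length (not the actual H2 numbers).
E0, E1, ET, g = -1.05, -0.20, -0.80, 0.15

Uii2 = (E1 + g - ET) / (E0 + E1 + 2.0 * g - 2.0 * ET)
if Uii2 >= 1.0:
    E_UHF = E0                                  # before the Coulson-Fischer point: RHF is recovered
else:
    E_UHF = E1 - (E1 + g - ET)**2 / (E0 + E1 + 2.0 * g - 2.0 * ET)   # symmetry-broken branch
print(Uii2, E_UHF)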
Fig. <ref> shows the RHF, UHF and FCI ground-state solutions. Unlike the RHF solution, the UHF curve indeed approaches the FCI limit at infinite distance. The UHF solution is often a convenient starting point for electron correlation methods since it is a much more flexible reference point than the spin-restricted alternative, which can often only achieve a qualitatively correct starting point by mixing several determinants or CSFs. While the spin-symmetry-broken character of UHF can also be problematic,<cit.> the FCI solution in Eq. (<ref>) is much harder to obtain. Consequently, many approximate approaches have been developed on classical computers to tackle this problem, and more recently the potential benefit of quantum computers in solving this problem has also been investigated. In the next section, we will continue the discussion of the H_2 molecule from the perspective of quantum computers. In doing so, we will provide an introduction to the topic of quantum algorithms for solving quantum chemistry problems, which is currently an active and growing area of research.
§ THE HYDROGEN MOLECULE ON THE QUANTUM COMPUTER
§.§ The Second-Quantized Hamiltonian
Second quantization is a technique in which the evaluation of matrix elements is performed through algebraic operations. To achieve this, one switches from the Hilbert-space representation to a Fock-space representation. Within the Fock space, Slater determinants are represented as occupation number vectors (ONV), i.e., as the list of the occupation numbers of the orbitals in their canonical order. Next, fermionic creation and annihilation operators are defined that map ONVs onto other ONVs. If n_P=0,1 is the occupation number of spin-orbital P, which may be labelled using integers P=0,1,2,…, then the annihilation and creation operators are defined by their action as
â_P^† | n_0, …, n_P, …, n_N ⟩ = δ_0n_PΓ_P | n_0, …, 1_P, …, n_N ⟩,
â_P | n_0, …, n_P, …, n_N ⟩ = δ_1n_PΓ_P | n_0, …, 0_P, …, n_N ⟩,
where Γ_P=∏_Q=0^Q=P-1(-1)^n_Q is just a sign factor. These operators obey the following anti-commutation relations,
{â_P,â_Q} = 0,
{â_P^†, â_Q^†} = 0,
{â_P^†, â_Q} = δ_PQ,
and it is worth pointing out that δ_PQ=δ_pqδ_σ_pσ_q, see discussion on Eq. (<ref>). Then, the Hamiltonian can be written in terms of creation and annihilation operators, which is its second-quantized form,
Ĥ =
E_n
+ ∑_PQ (P|ĥ|Q)
â_P^†â_Q
+ 1/2∑_PQRS (PS|RQ)
â_P^†â_Q^†â_Râ_S,
where the connection with the MO integrals defined previously is (P|ĥ|Q)=(p|ĥ|q)δ_σ_pσ_q and (PS|RQ)=(ps|rq)δ_σ_pσ_sδ_σ_rσ_q.
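The action of the creation and annihilation operators is easy to mimic on a computer. The sketch below is an illustrative addition: it represents an ONV as a tuple of 0/1 occupation numbers and applies the operators together with the sign factor Γ_P defined above.

def create(P, onv):
    """Apply a_P^dagger to an occupation-number vector given as a tuple of 0/1."""
    if onv[P] == 1:
        return None                         # delta_{0 n_P}: the state is annihilated
    sign = (-1) ** sum(onv[:P])             # Gamma_P = prod_{Q<P} (-1)^{n_Q}
    return sign, onv[:P] + (1,) + onv[P + 1:]

def annihilate(P, onv):
    """Apply a_P to an occupation-number vector."""
    if onv[P] == 0:
        return None                         # delta_{1 n_P}: the state is annihilated
    sign = (-1) ** sum(onv[:P])
    return sign, onv[:P] + (0,) + onv[P + 1:]

# Example: remove the electron in spin orbital 1 from |1100>.
print(annihilate(1, (1, 1, 0, 0)))          # (-1, (1, 0, 0, 0)); the sign comes from orbital 0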
We next again consider the H_2 example in particular. Using the minimal basis, each of the 2 electrons can be in 4 possible states, the canonical order of which is φ_iα, φ_iβ, φ_aα, φ_aβ. Relabelling these as P = 0,1,2,3, a possible two-electron state has the form |n_0, n_1, n_2, n_3 ⟩ (with the sum of occupation numbers, i.e., the number of electrons, being 2). Due to the Pauli exclusion principle, each occupation number can be equal to 0 or 1. Thus, the lowest-energy determinant is simply |1100⟩, and other determinants can be written similarly. Setting up the FCI problem would correspond to evaluating matrix elements of Ĥ with respect to these Slater determinants. As one example, calculating ⟨ 1100 |Ĥ|1100⟩ would yield the result in Eq. (<ref>). We can also write the full H_2 Hamiltonian in second-quantized form. In the H_2 case, it is obvious that some of the one- and two-body integrals vanish due to spin-integration. Other terms are zero due to spatial symmetry. After simplifications, the H_2 Hamiltonian takes the form<cit.>
Ĥ =
E_n
+ (i|ĥ|i) (a_0^† a_0 + a_1^† a_1)
+ (a|ĥ|a) (a_2^† a_2 + a_3^† a_3)
+ (ii|ii) a_0^† a_1^† a_1 a_0
+[(ii|aa) - (ia|ia)](a_0^† a_2^† a_2 a_0 + a_1^† a_3^† a_3 a_1)
+(ii|aa) (a_0^† a_3^† a_3 a_0 + a_1^† a_2^† a_2 a_1)
+ (aa|aa) a_2^† a_3^† a_3 a_2
+ (ia|ia) ( a_0^† a_3^† a_1 a_2 + a_2^† a_1^† a_3 a_0 + a_0^† a_1^† a_3 a_2 + a_2^† a_3^† a_1 a_0 ),
where the same-spin terms carry the antisymmetrized combination (ii|aa) - (ia|ia), written out explicitly above. All of these integrals are known from the classical calculations in the previous section.
§.§ Second-Quantized Qubit Mappings
The basic operations on a quantum computer are carried out on two-state quantum systems called qubits. A general qubit state is an arbitrary linear combination of the |0⟩ and |1⟩ states, i.e. |ψ⟩ = α |0⟩ + β|1⟩, with normalization |α|^2 + |β|^2 = 1.
Notice that, because each spin orbital in a chemical system can be in state |0⟩ or |1⟩, it seems reasonable that we can map Slater determinants, and fermionic Hamiltonians, to a qubit representation. However, the operators that act on qubits are written in terms of Pauli matrices (defined in Supplementary Material), which follow a different algebra compared to the fermionic creation and annihilation operators. Therefore, we need a way to convert the fermionic operators in the second-quantized Hamiltonian to the Pauli representation. The oldest and simplest mapping is due to Jordan and Wigner,<cit.>
a_P^† →1/2(X_P - iY_P ) ⊗_Q<P Z_Q,
a_P →1/2(X_P + iY_P) ⊗_Q<P Z_Q,
where the sub-index indicates the qubit the matrix is acting on. Here, the string of Z operators is needed to enforce the fermionic anti-commutation relations defined in Eqs. (<ref>) to (<ref>).
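As a numerical sanity check, not part of the original text, the mapped operators can be built as explicit matrices with Kronecker products and tested against the fermionic anticommutation relations; the convention below places qubit 0 in the leftmost tensor factor.

import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def a_op(P, n_qubits):
    """Jordan-Wigner image of a_P: (X_P + iY_P)/2 with Z on every qubit Q < P."""
    ops = [Z] * P + [0.5 * (X + 1j * Y)] + [I2] * (n_qubits - P - 1)
    return reduce(np.kron, ops)

n = 4
for P in range(n):
    for Q in range(n):
        aP, aQd = a_op(P, n), a_op(Q, n).conj().T
        anti = aP @ aQd + aQd @ aP
        assert np.allclose(anti, np.eye(2 ** n) * (P == Q))
print("fermionic anticommutation relations verified")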
Applying this mapping to Eq. (<ref>) leads to the qubit Hamiltonian H, the explicit form of which can be found in the Supplementary Material for Hamiltonians with real coefficients. While the qubit Hamiltonian is quite lengthy in the general case, it assumes a relatively simple form in the H_2 case,<cit.>
H = H_0 + H_ii [Z_0 + Z_1] + H_aa [Z_2 + Z_3]
+ 1/4(ii|ii) Z_0 Z_1
+ 1/4[(ii|aa) - (ia|ia)] [Z_0 Z_2 + Z_1 Z_3]
+ 1/4(ii|aa) [Z_0 Z_3 + Z_1 Z_2]
+ 1/4(aa|aa) Z_2 Z_3
- 1/4(ia|ia)
[
X_0 X_1 Y_2 Y_3
+ Y_0 Y_1 X_2 X_3
- X_0 Y_1 Y_2 X_3
- Y_0 X_1 X_2 Y_3
],
with coefficients
H_0 =
E_n + (i|ĥ|i) + (a|ĥ|a)
+ 1/4(ii|ii) + 1/2[2(ii|aa) - (ia|ia)] + 1/4(aa|aa),
H_ii =
1/2(i|ĥ|i)
+ 1/4[(ii|ii) + 2(ii|aa) - (ia|ia)],
H_aa =
1/2(a|ĥ|a)
+ 1/4[2(ii|aa) - (ia|ia) + (aa|aa)].
Here, the combination 2(ii|aa) - (ia|ia) is the spin-summed two-electron integral between the two spatial orbitals, written out explicitly.
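The qubit Hamiltonian can be assembled as an explicit 16×16 matrix by summing the Pauli strings displayed above. In the sketch below the coefficient values are placeholders only (the actual values follow from the integrals of the preceding sections), and the helper pauli_string is an illustrative construction, not a library routine.

import numpy as np
from functools import reduce

PAULI = {'I': np.eye(2, dtype=complex),
         'X': np.array([[0, 1], [1, 0]], dtype=complex),
         'Y': np.array([[0, -1j], [1j, 0]]),
         'Z': np.array([[1, 0], [0, -1]], dtype=complex)}

def pauli_string(label):
    """'ZIZI' -> Z_0 x I_1 x Z_2 x I_3 as a matrix (qubit 0 leftmost)."""
    return reduce(np.kron, [PAULI[c] for c in label])

# Placeholder coefficient values; in practice they are built from the MO integrals.
H0, Hii, Haa = -1.0, 0.4, 0.3
iiii, iiaa, iaia, aaaa = 0.6, 0.5, 0.2, 0.6

terms = [(H0, 'IIII'),
         (Hii, 'ZIII'), (Hii, 'IZII'), (Haa, 'IIZI'), (Haa, 'IIIZ'),
         (0.25 * iiii, 'ZZII'), (0.25 * aaaa, 'IIZZ'),
         (0.25 * (iiaa - iaia), 'ZIZI'), (0.25 * (iiaa - iaia), 'IZIZ'),
         (0.25 * iiaa, 'ZIIZ'), (0.25 * iiaa, 'IZZI'),
         (-0.25 * iaia, 'XXYY'), (-0.25 * iaia, 'YYXX'),
         (0.25 * iaia, 'XYYX'), (0.25 * iaia, 'YXXY')]
H = sum(c * pauli_string(p) for c, p in terms)
print(np.linalg.eigvalsh(H))                    # spectrum of the four-qubit Hamiltonian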
There have been several alternative proposals to improve on the Jordan-Wigner mapping, both in terms of the number of qubits used and in terms of the length of the Pauli strings. A simple improvement to reduce the number of qubits required is the Qubit Efficient Encoding (QEE)<cit.>. In this case, the mapping focuses on the fermionic ladder operators â^†_P â_Q which correspond to sums of dyadic products of basis vectors of the type |𝐧⟩⟨𝐧'|, where 𝐧 and 𝐧' are sequences of occupation numbers that differ at positions P and Q (n_P=n'_Q=1, n_Q=n'_P=0). To proceed further, the basis vectors are converted into a binary form based on their ordering. In the general case, the number of qubits required is only logarithmic in the number of spin orbitals. In the H_2 case, there are six possible two-electron basis states of the type |𝐧⟩ =|n_0,n_1,n_2,n_3⟩ such that the occupation numbers add up to 2. In the occupation number representation, encoding |𝐧⟩ requires 4 qubits. However, the six possible basis states may also be labeled as 0,…,5 by some convention. Since the binary representation of the largest ordinal, 5, is 101 and this requires only three digits, all six states can also be represented as |𝐛⟩ = |b_0,b_1,b_2⟩ using only 3 qubits. The fermionic ladder operators then also have the form |𝐛⟩⟨𝐛'| which can be decomposed into direct products of |0⟩⟨ 0|, |0⟩⟨ 1|, |1⟩⟨ 0| and |1⟩⟨ 1|.
A separate issue with the Jordan-Wigner encoding is the long string of anti-symmetrizing Z operators that appears after the mapping, which leads to undesirable scaling. Bravyi and Kitaev proposed a new mapping which encoded this anti-symmetrization in a more efficient way<cit.>. The number of qubits required still depends on the number of spin orbitals, N, but this time, the information that is stored on these qubits depends on the qubit index, starting from 0. If the index is even then the qubit is encoded with the orbital occupation, much like in Jordan-Wigner. If the index is odd then the anti-symmetrization of a subset of orbitals is encoded. Finally, when log_2(i+1) is an integer (where i is the qubit index) then the anti-symmetrization of all orbitals with an index lower than or equal to the current index is encoded. All sums are performed in modulo 2. The Jordan-Wigner and Bravyi-Kitaev mapping have been compared in the literature for chemical calculations.<cit.> Although the Bravyi-Kitaev approach certainly has its advantages, for our purposes, the Jordan-Wigner mapping is a sufficient starting point.
§.§ The 1-Qubit Hydrogen Hamiltonian
The symmetries present in the Hamiltonian can be exploited to reduce (or “taper”) the number of qubits required for a calculation. For the case of H_2, note that only two Slater determinants, |1100⟩ and |0011⟩, can contribute to the ground-state wavefunction, due to particle number, spin and spatial symmetries. Since only two states can contribute, this suggests that the corresponding Hamiltonian can be represented by just a single qubit.
The general procedure to reduce the Hamiltonian is beyond the scope of this paper and is discussed elsewhere.<cit.> Here, it is enough to note that we are looking for a transformation of the type
H'=U^† H U,
where U is unitary. Since H and H' are unitarily equivalent, their eigenvalues are also the same. The main requirement that should make this transformation worthwhile is that H' should commute with Pauli X matrices for at least some of the qubits. If this holds, then for the purposes of determining the ground-state energy, these qubits can be replaced by the eigenvalues of the corresponding X matrices, i.e., either +1 or -1. In the H_2 case, U can be written<cit.> as U=U_1 U_2 U_3, with
U_P = 1/√(2)(X_P + Z_0 Z_P),
for P = 1, 2, 3. The transformation 𝒫'=U^†𝒫U can now be performed for each Pauli string 𝒫 in the Jordan-Wigner qubit Hamiltonian in Eq. (<ref>). The results are summarized in the Supplementary Material. As a consequence, only X or I matrices act on qubits 1, 2 and 3 in H'. For example, for 𝒫 = Z_2, 𝒫' = Z_0 X_2; although there is a Z acting on qubit 0, only X or I Paulis act on qubits 1, 2 and 3.
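The quoted transformation can be verified numerically. The following check is an illustrative addition: it constructs U = U_1 U_2 U_3 from the definition above and confirms the example 𝒫 = Z_2 → 𝒫' = Z_0 X_2.

import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, pos, n=4):
    """Single-qubit operator acting on qubit `pos` of an n-qubit register."""
    return reduce(np.kron, [single if k == pos else I2 for k in range(n)])

def U_P(P, n=4):
    return (op(X, P, n) + op(Z, 0, n) @ op(Z, P, n)) / np.sqrt(2.0)   # U_P = (X_P + Z_0 Z_P)/sqrt(2)

U = U_P(1) @ U_P(2) @ U_P(3)
lhs = U.conj().T @ op(Z, 2) @ U
rhs = op(Z, 0) @ op(X, 2)                       # the claimed image Z_0 X_2
print(np.allclose(lhs, rhs))                    # True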
Before these qubits can be tapered, the corresponding eigenvalues of X matrices should be known for the eigenstate of H' that we seek, which will be the ground state. Here, symmetry can again be exploited since, as noted above, it only allows the configurations |1100⟩ and |0011⟩ to contribute to the ground state, |Ψ⟩. Therefore, the eigenvalue of the operator Z_0 Z_1 must be equal to +1, while the eigenvalues of Z_0 Z_2 and Z_0 Z_3 will equal -1. Taking Z_0 Z_1 as an example, we may write
Z_0 Z_1 |Ψ⟩ = | Ψ⟩.
Inserting U U^† = I,
Z_0 Z_1 U U^† | Ψ⟩ = | Ψ⟩,
and applying U^† to both sides gives
( U^† Z_0Z_1 U ) U^† |Ψ⟩ = U^† | Ψ⟩.
From the above, U^† | Ψ⟩ is an eigenvector of the transformed Hamiltonian H' and based on the results shown in the Supplementary Material, U^† Z_0 Z_1 U = X_1. Therefore, the eigenstates of H' are also eigenstates of X_1 with eigenvalue +1, and all instances of X_1 in the Hamiltonian can be replaced by +1. The same argument can be worked through for X_2 and X_3, which will be replaced by eigenvalues -1.
Thus, the only operators remaining in the transformed Hamiltonian are Z_0 and X_0 acting on qubit 0, and qubits 1, 2 and 3 can be removed. The final single-qubit Hamiltonian has the form
H' = c_0 + c_1 Z_0 + c_2 X_0,
with
c_0 = H_0 + 1/4[(ii|ii) + (aa|aa)] - 1/2[2(ii|aa) - (ia|ia)],
c_1 = 2(H_ii - H_aa),
c_2 = (ia|ia).
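At any given bond length, the ground-state energy then follows from a trivial two-by-two diagonalization, since the eigenvalues of c_0 + c_1 Z_0 + c_2 X_0 are c_0 ± √(c_1^2 + c_2^2). The sketch below uses illustrative placeholder values for c_0, c_1 and c_2.

import numpy as np

c0, c1, c2 = -0.35, -0.45, 0.20                         # illustrative values only
H1 = np.array([[c0 + c1, c2], [c2, c0 - c1]])           # c0 I + c1 Z + c2 X
print(np.linalg.eigvalsh(H1)[0])                        # ground-state energy
print(c0 - np.sqrt(c1**2 + c2**2))                      # same value in closed form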
§.§ Quantum Algorithms
Once the qubit Hamiltonian is available, the question still remains of how the energy calculation is to be carried out. In the current era of noisy intermediate scale quantum (NISQ) devices, the program depth measured in terms of the number of gates in the quantum circuit must be short enough so that the program can run before device errors ruin the result. This has led to a search for algorithmic solutions that satisfy this criterion, the most important of them for chemistry being the variational quantum eigensolver (VQE) algorithm<cit.>. In VQE, the wavefunction is parametrized in a similar way as in traditional approaches of quantum chemistry, such as coupled cluster theory and variational Monte Carlo, except that the quantum implementation should be unitary. Such an approach relies on Ansätze, i.e., the wavefunction is parametrized using a reference function (typically the HF solution) and parametrized quantum gates acting on it. This leads to a linear combination of excited determinants. VQE is a hybrid classical-quantum algorithm in which the energy evaluations happen on the quantum computer, while the optimization of the wavefunction coefficients is performed on the classical computer. Although this approach is more familiar to computational chemists, and for H_2 in the minimal basis it could even yield the exact energy, it has steep scaling with system size<cit.>, and so we do not consider it further here.
Quantum phase estimation (QPE), on the other hand, is a purely quantum algorithm, first introduced by Kitaev in 1995<cit.>. The QPE method can be used to determine the eigenvalues of a unitary operator U,
U |Ψ_k⟩ = e^2π i θ_k |Ψ_k⟩,
where θ_k is the phase corresponding to the k'th eigenstate of U, |Ψ_k⟩. The quantum circuit diagram for the “textbook” QPE algorithm<cit.> is shown in Fig. <ref>. The top m qubits are ancilla qubits, which are measured at the end of the circuit to obtain the first m bits of an eigenphase θ_k of U. The bottom n qubits (represented in this circuit diagram by a single line), to which the unitary U is applied, are prepared in an initial state, |ψ⟩. Here, n is the number of qubits in the Hamiltonian, which is equal to 1 for the H_2 Hamiltonian in Eq. <ref>. The initial state |ψ⟩ should be a good approximation to the exact eigenstate |Ψ_k⟩, whose eigenphase we want to estimate. The larger the overlap, the higher the probability of measuring the desired θ_k. However, in general there is a chance that the wavefunction will collapse to an undesired |Ψ_k⟩ upon measurement.
Remember that our goal is to estimate the eigenvalues of H, but QPE provides the eigenphases of a unitary U. In order to apply QPE to the energy estimation problem, the eigenvalues of H must be encoded in the phases of U. Performing QPE with U will then allow estimation of the desired energies. The most common encoding of H in U is through the time evolution operator[Usually the time evolution operator would be e^-iHt, but the minus sign is unimportant in QPE, as the phases can be extracted regardless. We call U(t) the time evolution operator for brevity.],
U(t) = e^i H t,
where t is a scalar parameter. If the eigenvalues of H are denoted E_k, then the eigenvalues of U will be of the form e^i E_k t, and it is trivial to obtain the desired energy from the measured eigenphase. An alternative encoding is sometimes considered in more sophisticated implementations of QPE, which is discussed in Section <ref>. To begin with, we will turn our attention to the implementation of the time evolution operator in Eq. <ref>. For most instances of H for chemistry problems, this cannot be implemented exactly on a quantum computer, and we instead must consider approximate approaches such as Trotterization.
§.§ Trotterization
As described in Section <ref>, we would like to perform QPE, the circuit diagram for which is shown in Fig. <ref>. We wish to encode the Hamiltonian in the unitary U through time evolution, as defined in Eq. <ref>.
For the case of H_2 in a minimal basis, the Hamiltonian consists of two single-qubit Pauli operators and an identity contribution, as in Eq. <ref>. We drop the constant shift c_0, so that
H = c_1 Z + c_2 X,
and
U(t) = e^i(c_1 Z + c_2 X) t.
Therefore we have to consider the question, how can this operator be implemented on a quantum computer? For a more general chemical Hamiltonian, its qubit form can be written
H = ∑_j=1^L H_j,
where each H_j consists of an n-qubit Pauli, P_j, and a coefficient c_j, so that H_j = c_j P_j, for example[More generally, each H_j might be a linear combination of commuting Paulis<cit.>; time-evolving a Hamiltonian of fully-commuting Pauli terms can be performed efficiently. Such H_j terms are called fast-forwardable Hamiltonians.].
In theory, a quantum computer is capable of implementing a general unitary operation; however, the number of gates required to do so may be extremely large. In practice, a finite set of basis gates is defined, from which all other unitary operations are constructed. These basis gates ultimately correspond to operations that are performed on the physical qubits. On current quantum computers, such as a superconducting quantum processor, a common set of native operations might include Pauli rotation gates, R_P(θ) = e^-i (θ/2) P, and a CZ (controlled Z) gate. For fault-tolerant quantum computers, arbitrary rotation gates cannot be protected, and one might instead work with the Hadamard gate, the phase gates S and T, and the CNOT gate. However, in both cases, complex multi-qubit operations such as e^iHt, for a general H, cannot be performed directly.
The most common solution to approximately implement U = e^iHt for H as in Eq. <ref> is through Trotter product formulas, or Trotterization. The simplest of these is the first-order Trotter expansion
e^iHt≈ U_1(t) = ∏_j=1^L e^i H_j t.
The second-order Trotter expansion is defined by
e^iHt≈ U_2(t) = ∏_j=1^L e^i H_j t/2∏_j=L^1 e^i H_j t/2.
The benefit of these expansions is that each term e^i H_j t can now be implemented in a fairly direct manner on a quantum computer. However, these product formulas are approximate, unless all the terms in H commute with each other. In particular, the error on the p'th-order Trotter expansion is<cit.>
‖ U_p(t) - e^iHt‖ = 𝒪 (α t^p+1).
This means that the error in the first-order expansion is 𝒪(t^2), while the error in the second-order expansion is 𝒪(t^3). The value α depends on the commutator of the terms in the partitioning of H.
In order to manage this error, we split the time evolution operator into m steps, each of length t/m:
e^i ∑_j H_j t = ( e^i ∑_j H_j t/m)^m.
Each of the steps is then approximated by a Trotter formula, U_p(t/m), which will become exact in the large-m limit.
An important question is how many rotation gates of the form e^i H_j t/m are needed to perform time evolution up to time t with error ϵ (in the trace distance), which we denote N_gates(t,ϵ). This question has been studied in detail. For the first-order Trotter formula the number of required gates is<cit.> N_gates, 1(t,ϵ) = 𝒪(t^2/ϵ) while for the second-order formula<cit.> N_gates, 2(t,ϵ) = 𝒪(t^1.5/√(ϵ)). For a comparison of N_gates(t, ϵ) for different simulation methods, see Ref. .
Having discussed Trotterization for general Hamiltonians, we now consider the specific case of H_2, taking the first-order Trotter expansion. Here we approximate U(t) by
U(t) ≈( e^i Z c_1 t/m e^i X c_2 t/m)^m,
so that each term is a Pauli rotation gate. In particular, the Pauli-Z rotation is defined R_Z(θ) = e^-i Z θ/2, and the Pauli-X rotation is R_X(θ) = e^-i X θ/2. Thus we have
U(t) ≈[ R_Z ( - 2 c_1 t/m) R_X ( - 2 c_2 t/m) ]^m.
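The quality of this approximation is easy to probe numerically. The sketch below, an illustrative addition with placeholder values of c_1, c_2 and t, compares the exact propagator with the first-order product formula for increasing m.

import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
c1, c2, t = -0.45, 0.20, 1.0                            # illustrative values only

U_exact = expm(1j * (c1 * Z + c2 * X) * t)
for m in (1, 4, 16, 64):
    step = expm(1j * c1 * Z * t / m) @ expm(1j * c2 * X * t / m)
    U_trotter = np.linalg.matrix_power(step, m)
    print(m, np.linalg.norm(U_trotter - U_exact))       # the error decreases roughly as 1/m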
Lastly, note from Fig. <ref> that the U operators must each be controlled on an ancilla qubit. The circuit diagram for the controlled-U operation is shown in Figure <ref>.
§.§ Qubitisation
Quantum phase estimation allows one to measure the eigenvalues of a unitary operator U. Above, this was used to determine energies by choosing U=e^iHt and implementing the exponential in a quantum circuit using the Trotter product formula.
Alternatively, a different unitary operator U can be chosen for phase estimation. In qubitisation <cit.>, U is chosen to be the walk operator
U = e^iarccos (H/λ), λ = |c_1| + |c_2|
with the subnormalisation λ being the 1-norm of the Hamiltonian coefficients[To be precise, the walk operator also has eigenvalues e^-iarccos(E_k/λ).]. Performing phase estimation on the walk operator also allows one to determine the energies of H. Here we will specifically consider the H_2 Hamiltonian, and the constant shift c_0 will again be ignored throughout.
The walk operator can be constructed from circuits called PREPARE and SELECT using an additional ancilla qubit. The state of the ancilla qubit indexes the two terms c_1Z and c_2X of the Hamiltonian. (For larger Hamiltonians with more terms, more than one ancilla qubit would be required.)
The PREPARE operator acts on the ancilla qubit and prepares a state corresponding to the terms' coefficients c_1 and c_2:
PREPARE|0⟩ = √(|c_1|/λ)|0⟩ + √(|c_2|/λ)|1⟩= R_Y(α)|0⟩, α = 2arctan√(|c_2/c_1|).
The amplitudes are chosen such that the measurement probabilities are |c_1|/λ and |c_2|/λ (here, c_1 and c_2 have the same sign, otherwise slight adaptations are necessary below), with the subnormalisation λ being required to ensure the right-hand state is normalised. PREPARE is implemented with a single-qubit rotation gate R_Y(α).
The
SELECT operator acts on the system qubit and selects the operator for the term corresponding to the state of the ancilla qubit. It applies either Z or X on the system qubit:
SELECT|0⟩|ψ⟩ = |0⟩Z|ψ⟩, SELECT|1⟩|ψ⟩ = |1⟩X|ψ⟩.
Qubitisation theory shows that the circuit for the walk operator can be constructed from these operators together with a reflection around |0⟩⟨0|: acting on the ancilla qubit a and the system qubit |ψ⟩, the walk operator U is implemented by applying PREPARE to the ancilla, then SELECT to the ancilla-system pair, then PREPARE^† to the ancilla, and finally a Z gate to the ancilla, which realizes the reflection 2|0⟩⟨0|-1.
For usage in the QPE circuit, the walk operator must be controlled on the ancillas that allow the readout of the phase. Inserting the circuits for PREPARE and SELECT in our example, the controlled walk operator acts on the control qubit c, the ancilla a and the system qubit |ψ⟩ as follows: R_Y(α) is applied to a, the SELECT operation (Z or X on |ψ⟩, selected by the state of a) is applied with an additional control on c, R_Y(α)^† is applied to a, and finally a Z gate controlled on c is applied to a. The PREPARE rotations themselves do not need to be controlled on c.
The advantage over Trotterisation is that this circuit does not suffer from a Trotterisation error. Instead, it is exact (up to the finite precision of the rotation by α). Thus, it avoids the lengthy circuit repetitions stemming from a small time step in the Trotter product formula, at the cost of an ancilla qubit. Many recent large-scale chemical quantum algorithms<cit.> are based on qubitisation due to the shorter circuits.
We will now explain that the walk operator U has the eigenvalues e^± i arccos(E_k/λ) by resorting to arguments that generalise to other and larger Hamiltonians.
Geometrically, a product of reflections about axes of relative angle β results in a rotation by angle 2β.
Both Z and PREPARE^†·SELECT·PREPARE are reflections, because their squares are the identity operator. Hence, their product is a rotation. In fact, this is true individually for each eigenvalue E_k of H on the two-dimensional subspace generated by |0⟩|E_k⟩. A general two-dimensional reflection matrix about an axis at inclination β has the
form
[ cos 2β -2cosβsinβ; -2cosβsinβ -cos 2β ].
From the top left matrix element in the relevant basis generated by |0⟩|E_k⟩, we can determine the angles of the reflection axes: cos 2β_1=⟨0|⟨E_k|Z⊗ I|0⟩|E_k⟩=+1 for Z and
cos(2β_2) =⟨0|⟨E_k|PREPARE^†·SELECT·PREPARE|0⟩|E_k⟩ = E_k/λ.
Hence, the walk operator in this basis is a rotation by angle
β_rot = 2(β_2-β_1) = arccos(E_k/λ)-arccos(+1) = arccos(E_k/λ).
Since a rotation by angle β_rot has eigenvalues e^± i β_rot, the walk operator has the eigenvalues e^± iarccos(E_k/λ) for each energy E_k of the Hamiltonian.
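This statement can be confirmed with a direct matrix construction. The sketch below is an illustrative addition (with placeholder values of c_1 and c_2 of the same sign, as above): it builds PREPARE, SELECT and the ancilla reflection as 4×4 matrices and compares the cosines of the walk-operator eigenphases with E_k/λ.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

c1, c2 = 0.60, 0.25                                     # illustrative values, same sign
lam = abs(c1) + abs(c2)
alpha = 2.0 * np.arctan(np.sqrt(abs(c2 / c1)))
Ry = np.array([[np.cos(alpha / 2), -np.sin(alpha / 2)],
               [np.sin(alpha / 2),  np.cos(alpha / 2)]], dtype=complex)

PREP = np.kron(Ry, I2)                                  # PREPARE acts on the ancilla (first factor)
SEL = np.kron(np.diag([1.0, 0.0]).astype(complex), Z) + np.kron(np.diag([0.0, 1.0]).astype(complex), X)
REFL = np.kron(Z, I2)                                   # Z on the ancilla = 2|0><0| - 1
W = REFL @ PREP.conj().T @ SEL @ PREP                   # walk operator

E = np.linalg.eigvalsh(c1 * Z + c2 * X)                 # eigenvalues of the Hamiltonian
phases = np.angle(np.linalg.eigvals(W))
print(np.sort(np.cos(phases)))                          # matches E_k / lambda, each appearing twice
print(np.sort(np.repeat(E / lam, 2)))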
§.§ Quantum Error Correction
Quantum computers are affected by noise. For example, the latest IBM quantum computer<cit.> has a median error rate of ∼ 0.66% for CNOT gates (noise varies strongly for different gates and gate types, but this will suffice for the following back-of-the-envelope estimation). Roughly, this means that a quantum algorithm run on such a NISQ device could only use circuits with a depth of about 103 operations before the error rate becomes larger than 50%. However, 103 gates are far too few for any useful quantum algorithm.
While the error rates of qubits are expected to decrease as technology progresses, they will always stay significant compared to error rates in classical computing. This is because qubits are inherently small quantum systems and even a tiny perturbation from the environment can have a disastrous effect on the qubit's state.
Luckily, quantum error correction provides a pathway to run useful longer circuits, despite the errors affecting the qubits.
To explain the concept of error correction, let us resort to the common experience of a noisy telephone line. When spelling out a name, the letters b and p can easily be confused. The error can be corrected by referring to each letter by a longer name according to the standard phonetic alphabet, like bravo for b, papa for p. This reduces the possibility of error, while increasing the length of the information transmitted.
Similarly, in quantum error correction, multiple physical qubits are used to represent one logical qubit, which has a reduced error rate compared to the physical qubit.
In quantum computing, once you measure a qubit, the wavefunction collapses and the state is destroyed. This makes it difficult to correct errors that occur in the midst of a computation. However, a theory of quantum error correction has been developed and shows intricate methods to perform measurements that reveal information about errors (if any) that have occurred, without destroying the information that is encoded. This information is sufficient to correct the errors, provided there are not too many.
For quantum algorithms of any reasonable length, we will have to resort to quantum error correction<cit.>. While this results in an overhead in the number of physical qubits (many physical qubits encode one logical qubit) and run-time, at least it offers a chance to escape the limited fidelity of quantum computers.
§ SUMMARY
In this contribution to the memory of Prof. Csizmadia, we have provided a detailed discussion of quantum chemical calculations on the simplest diatomic molecule, H_2. Such a simple exercise is not only useful as an elucidation of the traditional methods of quantum chemistry, but it also serves as an introduction to the emerging field of quantum computing, as applied to chemistry. Thus, after providing a high-level overview of the theoretical basis of molecular calculations which led us to the Hartree–Fock model and to the notion of electron correlation, we turned to the evaluation of the necessary equations in the minimal basis. After a general discussion of the long-distance properties of the possible states in the minimal basis, the necessary integral calculations were outlined and the orbital exponent was determined by standard methods. Next, we have compared the spin-restricted Hartree–Fock and the exact solutions to discover that the exact solution removes the artifacts of the Hartree–Fock model and finds the proper covalent ground state. As a final contribution to our description of traditional methods, we have also discussed the effects of breaking spin-symmetry in the Hartree–Fock model. Next, we turned our attention to quantum computing and gave a brief discussion on second quantization in order to rewrite the Hamiltonian in terms of fermionic operators for the H_2 problem. We then used the Jordan-Wigner mapping to recast this Hamiltonian as a sum of Pauli-strings (products of Pauli spin-matrices) which can be implemented on a quantum computer. We have also made use of spatial symmetry to reduce the Hamiltonian to a form that acts on a single qubit. A discussion of quantum algorithms for chemistry followed and we decided to focus on variants of quantum phase estimation in the remainder of this paper. Trotterization and qubitization were introduced as two distinct algorithms for translating the single-qubit Hamiltonian into phase estimation circuits that can in principle be run on current quantum hardware. However, in the last section on quantum error correction we also discussed why such a calculation cannot be expected to yield accurate results without applying methods to reduce noise on quantum hardware. Quantum error correction is a very active field of research and its detailed discussion is outside the scope of the present paper, although another paper is in preparation outlining its application in the case of the hydrogen molecule<cit.>.
[Barnett(1963)]barnett1963mechanized
Barnett, M. P. Mechanized Molecular Calculations—The POLYATOM System.
Rev. Mod. Phys. 1963, 35, 571–572
[Csizmadia et al.()Csizmadia, Harrison, Moskowitz, Seung,
Sutcliffe, and Barnett]POLYATOM
Csizmadia, I. G.; Harrison, M. C.; Moskowitz, J. W.; Seung, S.;
Sutcliffe, B. T.; Barnett, M. P. QCPE #47.1 POLYATOM – Program set for
nonempirical molecular calculations, Quantum Chemistry Exchange Program,
Indiana University, Bloomington, Indiana 47401.
[Csizmadia et al.(1966)Csizmadia, Harrison, Moskowitz, and
Sutcliffe]csizmadia1966non
Csizmadia, I. G.; Harrison, M. C.; Moskowitz, J. W.; Sutcliffe, B. T.
Non-empirical LCAO-MO-SCF-CI calculations on organic molecules with Gaussian
type functions. Theoretica chimica acta 1966, 6,
191–216
[Csizmadia(1991)]csizmadia1991some
Csizmadia, I. G. Some Fundamentals of Molecular Orbital Computations. In
Computational Advances in Organic Chemistry: Molecular Structure and
Reactivity; Springer Netherlands: Dordrecht, 1991; pp 1–165
[Mayer(2003)]mayer2003simple
Mayer, I. Simple theorems, proofs, and derivations in quantum chemistry;
Springer Science & Business Media: New York, 2003
[Szabo and Ostlund(2012)Szabo, and Ostlund]szabo2012modern
Szabo, A.; Ostlund, N. S. Modern quantum chemistry: introduction to
advanced electronic structure theory; Dover Publications: New York,
2012
[Helgaker et al.(2014)Helgaker, Jorgensen, and
Olsen]helgaker2014molecular
Helgaker, T.; Jorgensen, P.; Olsen, J. Molecular electronic-structure
theory; John Wiley & Sons: New Jersey, 2014
[Bartlett and Stanton(1994)Bartlett, and
Stanton]bartlett1994applications
Bartlett, R. J.; Stanton, J. F. Applications of Post-Hartree—Fock Methods: A
Tutorial. In Reviews in Computational Chemistry; VCH Publishers: New
York, 1994; Chapter 2, pp 65–169
[Knowles et al.(2000)Knowles, Schütz, and
Werner]knowles2000ab
Knowles, P. J.; Schütz, M.; Werner, H.-J. Ab Initio Methods for Electron
Correlation in Molecules. In Mod. Methods Algorithms Quantum Chem.;
Grotendorst, J., Ed.; NIC: Jülich, 2000; Vol. 1; pp 61–151
[Whitfield et al.(2011)Whitfield, Biamonte, and
Aspuru-Guzik]whitfield_simulation_2011
Whitfield, J. D.; Biamonte, J.; Aspuru-Guzik, A. Simulation of Electronic
Structure Hamiltonians Using Quantum Computers. Mol. Phys.
2011, 109, 735–750
[Jordan and Wigner(1928)Jordan, and Wigner]jordan1928ueber
Jordan, P.; Wigner, E. Über das Paulische Äquivalenzverbot.
Zeitschrift für Physik 1928, 47, 631–651
[Shee et al.(2022)Shee, Tsai, Hong, Cheng, and
Goan]shee_qubit-efficient_2022
Shee, Y.; Tsai, P.-K.; Hong, C.-L.; Cheng, H.-C.; Goan, H.-S. Qubit-efficient
encoding scheme for quantum simulations of electronic structure. Phys.
Rev. Research 2022, 4, 023154
[Bravyi and Kitaev(2002)Bravyi, and Kitaev]bravyi_2002
Bravyi, S. B.; Kitaev, A. Y. Fermionic Quantum Computation. Annals of
Physics 2002, 298, 210–226
[Tranter et al.(2018)Tranter, Love, Mintert, and
Coveney]tranter_comparison_2018
Tranter, A.; Love, P. J.; Mintert, F.; Coveney, P. V. A comparison of the
Bravyi-Kitaev and Jordan-Wigner transformations for the quantum simulation of
quantum chemistry. J. Chem. Theory Comput. 2018, 14,
5617–5630
[Bravyi et al.()Bravyi, Gambetta, Mezzacapo, and
Temme]bravyi_tapering_2017
Bravyi, S.; Gambetta, J. M.; Mezzacapo, A.; Temme, K. Tapering off qubits to
simulate fermionic Hamiltonians. <http://arxiv.org/abs/1701.08213>
[Setia et al.(2020)Setia, Chen, Rice, Mezzacapo, Pistoia, and
Whitfield]setia_reducing_2020
Setia, K.; Chen, R.; Rice, J. E.; Mezzacapo, A.; Pistoia, M.; Whitfield, J.
Reducing qubit requirements for quantum simulation using molecular point
group symmetries. J. Chem. Theory Comput. 2020, 16,
6091–6097
[Peruzzo et al.(2014)Peruzzo, McClean, Shadbolt, Yung, Zhou,
Love, Aspuru-Guzik, and O'Brien]vqe_2014
Peruzzo, A.; McClean, J.; Shadbolt, P.; Yung, M.-H.; Zhou, X.-Q.;
Love, P. J.; Aspuru-Guzik, A.; O'Brien, J. L. A variational eigenvalue solver
on a quantum processor. Nat. Commun. 2014, 5,
4213
[Blunt et al.(2023)Blunt, Camps, Crawford, Izsák, Leontica,
Mirani, Moylett, Scivier, Sünderhauf, Schopf, Taylor, and
Holzmann]bluntPerspectiveCurrentStateoftheArt2022
Blunt, N. S.; Camps, J.; Crawford, O.; Izsák, R.; Leontica, S.; Mirani, A.;
Moylett, A. E.; Scivier, S. A.; Sünderhauf, C.; Schopf, P.; Taylor, J. M.;
Holzmann, N. Perspective on the Current State-of-the-Art of Quantum
Computing for Drug Discovery Applications. J. Chem. Theory
Comput. 2023, 18, 7001–7023
[Kitaev(1995)]kitaev_quantum_1995
Kitaev, A. Y. Quantum measurements and the Abelian Stabilizer Problem.
arXiv:quant-ph/9511026 1995,
[Nielsen and Chuang(2010)Nielsen, and Chuang]nielsen_quantum_2010
Nielsen, M. A.; Chuang, I. L. Quantum computation and quantum
information, 10th ed.; Cambridge University Press, 2010
[Martínez-Martínez et al.(2022)Martínez-Martínez, Yen, and
Izmaylov]Luis2022
Martínez-Martínez, L. A.; Yen, T.-C.; Izmaylov, A. F. Assessment of various
Hamiltonian partitionings for the electronic structure problem on a quantum
computer using the Trotter approximation. arXiv:2210.10189 [quant-ph]
2022,
[Childs et al.(2021)Childs, Su, Tran, Wiebe, and
Zhu]Childs2021
Childs, A. M.; Su, Y.; Tran, M. C.; Wiebe, N.; Zhu, S. Theory of Trotter Error
with Commutator Scaling. Phys. Rev. X 2021, 11,
011020
[Lloyd(1996)]Lloyd1996
Lloyd, S. Universal Quantum Simulators. Science 1996,
273, 1073–1078
[Berry et al.(2006)Berry, Ahokas, Cleve, and
Sanders]Berry2006
Berry, D. W.; Ahokas, G.; Cleve, R.; Sanders, B. C. Efficient Quantum
Algorithms for Simulating Sparse Hamiltonians. Commun. Math. Phys.
2006, 270, 359
[Childs et al.(2018)Childs, Maslov, Nam, Ross, and
Su]Childs2018
Childs, A. M.; Maslov, D.; Nam, Y.; Ross, N. J.; Su, Y. Toward the first
quantum simulation with quantum speedup. PNAS 2018,
115, 9456
[Poulin et al.(2018)Poulin, Kitaev, Steiger, Hastings, and
Troyer]poulinQuantumAlgorithmSpectral2018
Poulin, D.; Kitaev, A.; Steiger, D. S.; Hastings, M. B.; Troyer, M. Quantum
Algorithm for Spectral Measurement with Lower Gate Count.
Phys. Rev. Lett. 2018, 121, 010501
[Berry et al.(2018)Berry, Kieferová, Scherer, Sanders, Low,
Wiebe, Gidney, and Babbush]berryImprovedTechniquesPreparing2018
Berry, D. W.; Kieferová, M.; Scherer, A.; Sanders, Y. R.; Low, G. H.;
Wiebe, N.; Gidney, C.; Babbush, R. Improved Techniques for Preparing
Eigenstates of Fermionic Hamiltonians. npj Quantum Inf
2018, 4, 22
[Ivanov et al.(2023)Ivanov, Sünderhauf, Holzmann, Ellaby,
Kerber, Jones, and Camps]ivanovQuantumComputationPeriodic2023a
Ivanov, A. V.; Sünderhauf, C.; Holzmann, N.; Ellaby, T.; Kerber, R. N.;
Jones, G.; Camps, J. Quantum Computation for Periodic Solids in Second
Quantization. Phys. Rev. Res. 2023, 5, 013200
[Lee et al.(2021)Lee, Berry, Gidney, Huggins, McClean, Wiebe,
and Babbush]leeEvenMoreEfficient2021
Lee, J.; Berry, D. W.; Gidney, C.; Huggins, W. J.; McClean, J. R.; Wiebe, N.;
Babbush, R. Even More Efficient Quantum Computations of Chemistry through
Tensor Hypercontraction. PRX Quantum 2021, 2,
030305
[IBM()]IBMQuantumHighest2021
IBM Quantum’s Highest Performant System, Yet.
<https://research.ibm.com/blog/eagle-quantum-error-mitigation>
[Blunt et al.()Blunt, Gehér, and Moylett]blunt2023_h2
Blunt, N. S.; Gehér, G. P.; Moylett, A. E. Compilation of a simple
chemistry application to quantum error correction primitives. In
preparation
§ FORMULAE FOR ENERGY CURVES AND INTEGRALS
The kinetic energy of the electrons T_μν is calculated as
T_μμ = T_νν = -1/2(2α/π)^3/2∫ e^-α(𝐫±𝐑)^2∇^2 e^-α(𝐫±𝐑)^2 d𝐫=3/2α,
T_μν = T_νμ = -1/2(2α/π)^3/2∫ e^-α(𝐫±𝐑)^2∇^2 e^-α(𝐫∓𝐑)^2 d𝐫=(3/2α - 2α^2R^2)e^-2α R^2.
The nuclear-electronic attraction term also depends on the position of the nuclei. Let A be the atom on which χ_μ is centered and B the center of χ_ν. Then, the total potential has the form
V_μν = V_μν(A) + V_μν(B),
where the unique contributions are
V_μμ(A) = V_νν(B) = -(2α/π)^3/2∫e^-2α(𝐫±𝐑)^2/|𝐫±𝐑| d𝐫=-2√(2α/π),
V_μμ(B) = V_νν(A) = -(2α/π)^3/2∫e^-2α(𝐫±𝐑)^2/|𝐫∓𝐑| d𝐫=-erf(2√(2α)R)/2R,
V_μν(A) = V_μν(B) = -(2α/π)^3/2 e^-2α R^2∫e^-2α𝐫^2/|𝐫±𝐑| d𝐫=-erf(√(2α)R)/Re^-2α R^2.
The two-body terms can be dealt with similarly. Since there are only two basis functions, there are only four unique integrals,
(μμ|μμ) = (νν|νν) = (2α/π)^3 ∬e^-2α(𝐫_1±𝐑)^2e^-2α(𝐫_2±𝐑)^2/|𝐫_1 - 𝐫_2| d𝐫_1d𝐫_2 = 2√(α/π),
(μμ|μν) = (νν|μν) = (2α/π)^3 e^-2α R^2∬e^-2α(𝐫_1±𝐑)^2e^-2α𝐫_2^2/|𝐫_1 - 𝐫_2| d𝐫_1d𝐫_2 = erf(√(α)R)/Re^-2α R^2,
(μμ|νν) = (νν|μμ) = (2α/π)^3 ∬e^-2α(𝐫_1±𝐑)^2e^-2α(𝐫_2∓𝐑)^2/|𝐫_1 - 𝐫_2| d𝐫_1d𝐫_2 = erf(2√(α)R)/2R,
(μν|μν) = (νμ|νμ) = (2α/π)^3 e^-4α R^2∬e^-2α𝐫_1^2e^-2α𝐫_2^2/|𝐫_1 - 𝐫_2| d𝐫_1d𝐫_2 = 2√(α/π)e^-4α R^2.
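For convenience, the unique integrals above can be transcribed into a short Python function of the half-distance R and the exponent α (the dictionary keys below are merely labels chosen for this sketch).

import math

def sto1g_integrals(R, alpha=8.0 / (9.0 * math.pi)):
    """Unique STO-1G integrals for H2 with the nuclei at +/-R on the x-axis (atomic units)."""
    e2 = math.exp(-2.0 * alpha * R**2)
    e4 = math.exp(-4.0 * alpha * R**2)
    return {
        'S_mn':    e2,
        'T_mm':    1.5 * alpha,
        'T_mn':    (1.5 * alpha - 2.0 * alpha**2 * R**2) * e2,
        'V_mm_A':  -2.0 * math.sqrt(2.0 * alpha / math.pi),
        'V_mm_B':  -math.erf(2.0 * math.sqrt(2.0 * alpha) * R) / (2.0 * R),
        'V_mn_A':  -math.erf(math.sqrt(2.0 * alpha) * R) / R * e2,
        '(mm|mm)': 2.0 * math.sqrt(alpha / math.pi),
        '(mm|mn)': math.erf(math.sqrt(alpha) * R) / R * e2,
        '(mm|nn)': math.erf(2.0 * math.sqrt(alpha) * R) / (2.0 * R),
        '(mn|mn)': 2.0 * math.sqrt(alpha / math.pi) * e4,
    }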
Assuming the special form of Eq. (61) for the charge-density matrix, Eqs. (52), (53), (54) of the main text take the following special form,
G_μμ(𝐏) = G_νν(𝐏) = 1/1+S_μν(1/2(μμ|μμ) + (μμ|μν) + 1/2(μμ|νν)),
G_μν(𝐏) = G_νμ(𝐏) = 1/1+S_μν((μμ|μν) + (μν|μν)),
where we have also neglected the exchange contributions in Eqs. (52), (53), and (54), as they cancel some of the Coulomb terms when both electrons occupy the same spatial orbital. Putting these results together, the Hartree-Fock energy of the hydrogen molecule ground state can be calculated from the special case of Eq. (19),
E_0 = E_n + 1/1+S_μν(h_μμ+h_μν+F_μμ+F_μν),
leading to
E_0 = 1/D + 1/(1+e^-α D^2/2)[ 3α - 4√(2α/π) - 2 erf(2√(α)D)/D + (3α - α^2D^2 - 8 erf(√(α)D)/D)e^-α D^2/2
+ 1/(1+e^-α D^2/2)( √(α/π) + 4 erf(√(α)/2D)/D e^-α D^2/2 + erf(√(α)D)/2D + 2√(α/π)e^-α D^2 ) ].
Here a change of variables 2R=D was also introduced so that the expressions depend directly on the internuclear distance D.
A similar process yields the following simplified G-elements corresponding to 𝐏̅ in Eq. (63),
G_μμ(𝐏̅) = G_νν(𝐏̅) = 1/1-S_μν(1/2(μμ|μμ) - (μμ|μν) + 1/2(μμ|νν)),
G_μν(𝐏̅) = G_νμ(𝐏̅) = 1/1-S_μν((μμ|μν) - (μν|μν)),
and a new energy expression
E_1 = E_nn + 1/1-S_μν(h_μμ-h_μν+F_μμ-F_μν),
and finally,
E_1 = 1/D + 1/(1-e^-α D^2/2)[ 3α - 4√(2α/π) - 2 erf(2√(α)D)/D - (3α - α^2D^2 - 8 erf(√(α)D)/D)e^-α D^2/2
+ 1/(1-e^-α D^2/2)( √(α/π) - 4 erf(√(α)/2D)/D e^-α D^2/2 + erf(√(α)D)/2D + 2√(α/π)e^-α D^2 ) ].
For the singly-excited singlet state, the AO basis expression has the form
E_S = ⟨Θ_S|Ĥ|Θ_S⟩ = E_n + (P_μμ+P̅_μμ)(μ|ĥ|μ) + (P_μν+P̅_μν)(μ|ĥ|ν)
+ P_μμP̅_μμ(μμ|μμ) + P_μνP̅_μν(μν|μν),
yielding
E_S = 1/D + 1/(1-e^-α D^2/2)( 3α - 4√(2α/π) - 2 erf(2√(α)D)/D ) - e^-α D^2/2/(1-e^-α D^2/2)( 3α - α^2D^2 - 8 erf(√(α)D)/D ) + 2√(α/π).
Similarly, for the triplet state,
E_T = ⟨Θ_T|Ĥ|Θ_T⟩ = E_n + (P_μμ+P̅_μμ)(μ|ĥ|μ) + (P_μν+P̅_μν)(μ|ĥ|ν)
+ P_μμP̅_νν(μμ|νν) + P_μνP̅_μν(μν|μν),
so that
E_T = 1/D + 1/(1-e^-α D^2/2)( 3α - 4√(2α/π) - 2 erf(2√(α)D)/D ) - e^-α D^2/2/(1-e^-α D^2/2)( 3α - α^2D^2 - 8 erf(√(α)D)/D ) + 1/(1-e^-α D^2/2)( erf(√(α)D)/D - 2√(α/π)e^-α D^2/2 ).
Finally, the off-diagonal element in the FCI-matrix in Eq. (66) is simply given as
g = ⟨Φ_1|Ĥ|Φ_0⟩ = (ia|ia) = 1/(1-e^-α D^2)( √(α/π) - erf(√(α)D)/D ).
§ QUBIT MAPPINGS
Any 2 × 2 matrix can be written as a linear combination of the Pauli spin-matrices X, Y, and Z and the identity matrix I given by
X = [ 0 1; 1 0 ] Y = [ 0 -i; i 0 ]
Z = [ 1 0; 0 -1 ] I = [ 1 0; 0 1 ]
For a general chemical Hamiltonian with real coefficients, the explicit form of the qubit Hamiltonian in terms of MO integrals, after applying the Jordan-Wigner mapping, is
ℋ = E_n + 1/2[∑_P(P|ĥ|P) + 1/4∑_PQ(PP|QQ)]
-1/2∑_P [(P|ĥ|P) + 1/2∑_Q(PP|QQ)] Z_P + 1/4∑_Q<P(PP|QQ) Z_P Z_Q
+1/2∑_Q<P[(P|ĥ|Q) + 1/4∑_R(PQ|RR)] (X_P.X_Q + Y_P.Y_Q)
-1/4∑_P<Q<R(PQ|RR) Z_R(X_Q.X_P + Y_Q.Y_P)
-1/4∑_P<R<Q(PQ|RR)(X_Q.R.X_P + Y_Q.R.Y_P)
-1/4∑_R<P<Q(PQ|RR)(X_Q.X_P + Y_Q.Y_P)Z_R
-1/4∑_S<R<Q<P [(PS|QR)-(PQ|SR)] (X_P.X_Q X_R.X_S + Y_P.Y_Q Y_R.Y_S)
-1/4∑_S<R<Q<P [(PR|QS)-(PQ|RS)] (X_P.X_Q Y_R.Y_S + Y_P.Y_Q X_R.X_S)
-1/4∑_S<R<Q<P [(PS|QR)-(PR|QS)] (X_P.Y_Q Y_R.X_S + Y_P.X_Q X_R.Y_S).
Here the notation X_Q.X_P = X_Q Z_Q-1… Z_P+1 X_P indicates a product of Z_S matrices such that P<S<Q, while X_Q.R.X_P denotes a similar string, except that S≠ R.
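To make the dotted-string notation concrete, here is a small illustrative Python helper (our own sketch, not code from the paper) that builds the Pauli string for X_Q.X_P, i.e., X on qubits P and Q with Z on every qubit strictly between them, and for X_Q.R.X_P, where the intermediate qubit R is skipped.

def dotted_string(P, Q, n_qubits, skip=None, op="X"):
    # Returns a length-n_qubits string with qubit 0 first; 'skip' is the index R to omit
    paulis = ["I"] * n_qubits
    lo, hi = sorted((P, Q))
    paulis[lo] = paulis[hi] = op
    for s in range(lo + 1, hi):
        if s != skip:
            paulis[s] = "Z"
    return "".join(paulis)

# Example: dotted_string(0, 3, 4) -> "XZZX" (X_3.X_0); dotted_string(0, 3, 4, skip=1) -> "XIZX"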
§ THE 1-QUBIT HYDROGEN HAMILTONIAN
The transformed Pauli strings in the Jordan-Wigner Hamiltonian, after performing 𝒫→𝒫' = U^†𝒫 U as defined in the main text, are as follows:
Z_0 → Z_0,  Z_1 → Z_0X_1,  Z_2 → Z_0X_2,  Z_3 → Z_0X_3,
Z_0Z_1 → X_1,  Z_0Z_2 → X_2,  Z_0Z_3 → X_3,
Z_1Z_2 → X_1X_2,  Z_1Z_3 → X_1X_3,  Z_2Z_3 → X_2X_3,
Y_0Y_1X_2X_3 → X_0X_2X_3,  X_0Y_1Y_2X_3 → X_0X_3,
Y_0X_1X_2Y_3 → X_0X_1X_2,  X_0X_1Y_2Y_3 → X_0X_1.
http://arxiv.org/abs/2307.04604v1 | 2023-07-10 | EchoVest: Real-Time Sound Classification and Depth Perception Expressed through Transcutaneous Electrical Nerve Stimulation | Jesse Choe, Siddhant Sood, Ryan Park | cs.SD | cs.SD, cs.LG, eess.AS, eess.SP
EchoVest: Real-Time Sound Classification and Depth Perception Expressed through Transcutaneous Electrical Nerve Stimulation
Jesse Choe, Siddhant Sood, Ryan Park
August 12, 2023
============================================================================================================================
Over 1.5 billion people worldwide live with hearing impairment [17]. Despite various technologies that have been created for individuals with such disabilities, most of these technologies are either extremely expensive or inaccessible for everyday use in low- and middle-income countries. In order to combat this issue, we have developed a new assistive device, EchoVest, for blind/deaf people to intuitively become more aware of their environment. EchoVest transmits vibrations to the user’s body by utilizing transcutaneous electric nerve stimulation (TENS) based on the source of the sounds. EchoVest also provides various features, including sound localization, sound classification, noise reduction, and depth perception. We aimed to outperform CNN-based machine-learning models, the most commonly used models for classification tasks, in both accuracy and computational cost. To do so, we developed and employed a novel audio pipeline that adapts the Audio Spectrogram Transformer (AST) model, an attention-based model, for our sound classification purposes, and Fast Fourier Transforms for noise reduction. The application of Otsu’s Method helped us find the optimal thresholds for background-noise filtering and gave us much greater accuracy. In order to calculate direction and depth accurately, we applied Complex Time Difference of Arrival algorithms and SOTA localization. Our last improvement was to use blind source separation to make our algorithms applicable to multiple microphone inputs. The final algorithm achieved state-of-the-art results on numerous checkpoints, including a 95.7% accuracy on the ESC-50 dataset for environmental sound classification.
§ INTRODUCTION
According to the World Health Organization, if a person's hearing thresholds are below 20 dB, they are said to have hearing loss [17]. The consequences of this condition can vary in their severity and may include difficulties in communication, leading to social isolation for older individuals, decreased academic performance in children, and limited job opportunities for adults in areas without adequate accommodations for those with hearing loss. Currently, 1.5 billion people globally, or one in every five people, live with hearing loss and this is projected to increase to 2.5 billion people, or 25%, of the world population by 2050. The majority of individuals with hearing loss, 80%, live in low and middle-income countries. Despite this, a significant amount of hearing loss goes unaddressed, costing governments around the world nearly $980 billion annually, with the majority of these costs incurred in low and middle-income countries. This high cost is attributed to the expensive nature of hearing impairment devices such as hearing aids and cochlear implants, which range in cost from $2,000 to $7,000 for hearing aids [16] and $30,000 to $50,000 for cochlear implants [10].
We aim to address the problem of unaddressed hearing impairment by creating EchoVest, a cost-effective wearable alternative to current hearing impairment solutions. EchoVest utilizes sound localization and depth perception through the use of TENS pads and sound classification through Audio Spectrogram Transformers (ASTs). To implement sound localization and depth perception, EchoVest selectively activates TENS pads [14] with amplified signals based on the distance and location of the sound source. With a total cost of manufacturing of $98.90, EchoVest is a significantly more affordable and effective option than existing hearing impairment technologies.
§ MATERIALS
The primary objective was to create an inexpensive, durable, and wearable device for the user’s day to day activities. We used a mesh vest as the base with wiring interwoven throughout the mesh. A mesh vest is ideal for contact between skin and the output nodes. On the back of the mesh, we employed a Raspberry Pi 3B+ as our central computer in order to control all of the output devices. The Machine-Learning libraries and other built-in software that we integrated into our algorithms all required a 64-bit system and the Raspberry Pi 3B+ was the cheapest processor on the market that we were able to obtain. In order to record sounds, we used a ReSpeaker 4-mic Array due to the fact that we could get 4 different streams of audio input. The continuous stream of 4 mics made it possible for us to calculate the direction and distance of sounds. The last piece of significant hardware that we used were TENS electrodes. TENS is a service that delivers mild electrical currents through electrodes placed on the body. By applying TENS to our vest, we are able to directly stimulate the user’s nerves and leave them with a multi-dimensional feeling. A full materials list breakdown with all other assorted materials can be seen in the Figure 1 below. Our essential pieces of hardware and their application can be seen in Figure 2.
§ METHODS
We designed EchoVest to determine the relative location and distance of a sound source from each microphone in our vest for audio spatial awareness. We were able to triangulate the relative location of the sound source in real time and determine the sound's arrival angle using the Open embeddeD Auditory System's (ODAS) built-in sound localization algorithms [4]. We utilized Time Difference of Arrival (TDoA) with Generalized Cross-Correlation with Phase Transforms (GCC-PHAT) to calculate the distance from the sound source to the microphones in real time. After recording the audio signals coming from each microphone, we calculated the cross-correlation function by sliding one signal in relation to the other for each time step to see how similar their waveforms are to one another. The time delay between the signals, or TDoA value, between the two microphones is represented by the time step with the highest cross-correlation value [8]. We were able to select the appropriate pad based on the angle of sound arrival and alter the strength of our electrical pad signals in response to the distance from the microphone, calculated using the distance-rate-time equation.
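A minimal Python sketch of the GCC-PHAT/TDoA step described above is given below; it is our own illustration of the standard algorithm, not the authors' implementation, and parameter names (e.g., interp) are ours.

import numpy as np

def gcc_phat_tdoa(sig, ref, fs, max_tau=None, interp=16):
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                     # PHAT weighting keeps only phase information
    cc = np.fft.irfft(R, n=interp * n)         # cross-correlation in the time domain
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift  # lag with the highest correlation
    return shift / float(interp * fs)          # TDoA in seconds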
In order to enhance EchoVest's sound localization and depth perception, we utilized a Blind Source Separation (BSS) approach that combined Principal Component Analysis (PCA), Non-Negative Matrix Factorization (NMF), and Independent Component Analysis (ICA) to separate the combined sound file from the microphone array into individual sound files for each microphone. We first reduced the dimensionality of the sound input with PCA and NMF. NMF factorized the sound input into two smaller matrices, as seen in Figure 3, with non-negative elements [15], while PCA transformed the sound data into a new coordinate system with axes along the directions of maximum variance [7]. ICA then separated the mixed sound signal into subcomponents by assuming that only one component was Gaussian and that the components were independent from each other [4]. The cross-correlation matrix and TDOA values from sound localization allowed us to identify each sound with its corresponding microphone, resulting in enhanced sound localization and depth perception.
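The sketch below illustrates the separation step in Python with scikit-learn: PCA whitening followed by FastICA unmixing of the four microphone channels. It is a simplified stand-in for the PCA/NMF/ICA pipeline described above (the NMF stage is omitted for brevity), not the authors' code.

import numpy as np
from sklearn.decomposition import PCA, FastICA

def separate_sources(mixture, n_sources=4):
    # mixture: array of shape (n_samples, n_channels) from the 4-mic array
    reduced = PCA(n_components=n_sources, whiten=True).fit_transform(mixture)
    sources = FastICA(n_components=n_sources, max_iter=1000).fit_transform(reduced)
    return sources                             # shape (n_samples, n_sources)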
We employed a signal processing approach that combined Fast Fourier Transforms (FFTs) with Otsu's method to effectively remove background noise from our sound input and enhance the sound classification accuracy. First, we converted the sound input into its frequency domain using FFT, an efficient algorithm that computes the discrete Fourier transform in real-time. We then implemented Otsu's Method, a commonly used image processing algorithm, to denoise the sound input by selecting an optimal noise threshold. Otsu's Method classifies pixels into background and foreground based on their intensity levels and was applied to the audio by converting it into a frequency histogram from the Fourier transform [2]. This effectively filtered out white noise from the input signals and improved the sound classification accuracy. Figures comparing the sound before and after applying Fast Fourier Transform (FFT) with Otsu's Method are presented below in Figure 4.
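A minimal Python sketch of this denoising step is shown below: take the FFT, choose a magnitude threshold with Otsu's method, zero out bins below it, and invert. This mirrors the idea described above; the exact thresholding details of the authors' implementation may differ.

import numpy as np
from skimage.filters import threshold_otsu

def fft_otsu_denoise(signal):
    spectrum = np.fft.rfft(signal)
    magnitude = np.abs(spectrum)
    thresh = threshold_otsu(magnitude)         # Otsu threshold on the magnitude histogram
    spectrum[magnitude < thresh] = 0.0         # suppress low-energy (noise) bins
    return np.fft.irfft(spectrum, n=len(signal))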
We implemented an Audio Spectrogram Transformer (AST) trained on the ESC-50 dataset, a benchmark for environmental sound classification consisting of 2000 environmental audio recordings, using 5-fold cross-validation to prevent overfitting. This dataset was also chosen to limit the number of semantic classes to sounds frequently encountered in real-time environments. As shown in Figure 5, the transformer takes the audio waveform of T seconds output by Otsu’s Method and converts it into a 128x100t spectrogram of log Mel filterbank features computed with a 25 ms Hamming window every 10 ms. The model outputs the Transformer encoder's [CLS] token, which serves as the audio spectrogram representation for classification, and the corresponding label is obtained through a linear layer with sigmoid activation.
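The featurization described above can be sketched in Python as follows; the 25 ms Hamming window, 10 ms hop, and 128 Mel bins come from the text, while the sample rate and the log offset are illustrative assumptions rather than values stated in the paper.

import torch
import torchaudio

def ast_features(waveform, sample_rate=16000):
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate,
        n_fft=int(0.025 * sample_rate),        # 25 ms window
        hop_length=int(0.010 * sample_rate),   # 10 ms hop
        n_mels=128,
        window_fn=torch.hamming_window,
    )(waveform)
    return torch.log(mel + 1e-6)               # (128, ~100 frames per second) log-Mel features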
To assess the accuracy of the classification data produced by the transformer model in a real-time environment, the resultant semantic class was paired with a timestamp associated with the live sound data. This live sound data was simulated by playing a variety of sounds around the mic array with white noise that included people talking, laughter, and the air conditioner running. The aligned time series data were then used to calculate errors and determine accuracy of the real-time classification system. The sample capture rate for the audio input was 0.205 seconds.
The only constraint for the electrical system was that we needed to provide an equal amount of current and voltage to each of the output nodes located on the vest. The intensity of the stimulation delivered by the TENS electrodes is directly related to the signal from the Raspberry Pi, which outputs a maximum pulse frequency of 50 Hz through the 5 V pin-outs. By applying two strategic parallel circuits, as demonstrated in Figure 6, we ensured that each output node received a maximum of 12.5 Hz. Although 12.5 Hz is underpowered compared with the typical TENS electrode (50 Hz), the stimulation is still strong enough to be felt and also guarantees user safety.
The process of setting up the OS and drivers was challenging because many libraries were outdated and the ReSpeaker 4-Mic Array drivers were written for a 32-bit version of Raspberry Pi OS. We downloaded specific versions of packages and moved our system to the 64-bit Raspberry Pi OS Bullseye 11.0 (Debian) release. However, the ReSpeaker 4-Mic Array drivers target a 32-bit system with Linux kernel version 4.9.80+, while the earliest 64-bit Raspberry Pi OS release ships 5.10.40+ kernels. We addressed this issue by modifying the ReSpeaker driver scripts and creating overlay diversions for every component of the driver.
§ RESULTS
Referencing Figure 7, Prototype 1 had a simple design which served as a basic setup on a breadboard, comprising the ReSpeaker, LEDs, and the Raspberry Pi. This prototype allowed us to test our ML algorithms and hardware, which enabled accurate node activations and the assessment of our model's accuracy in the presence of live background noise. The next stage of prototyping involved integrating our breadboard design with the mesh vest to test its practical functionality and identify any issues with our original mesh design. Prototype 3 was focused on making the vest portable by incorporating a battery pack, streamlining the wiring, and correcting node positioning. Additionally, the position of the ReSpeaker was adjusted to eliminate significant vest feedback that was observed in the previous prototype. As a result, the device is now easy to use: simply put on the vest and switch on the battery pack. We tested the device with LEDs by having a test subject put on the vest while a person played sounds on a phone from 2 meters away. By visually observing the changes in luminosity of the LEDs, we could confirm that our depth and direction algorithms were working.
As shown in Table 1, the implementation of Fast Fourier Transforms with Otsu's Method outperformed three other noise reduction techniques with a peak signal-to-noise ratio (PSNR) of 57.5 dB. FFT with Otsu's Method is an uncommon technique for noise reduction, but it proved to be more effective than each of the other algorithms due to its higher PSNR value, which indicates the algorithm's ability to reduce noise. Referencing Table 2, our Audio Spectrogram Transformer model outperformed many traditional sound classification models in terms of accuracy, with higher accuracies on the ESC-50 dataset and higher mean average precisions (mAPs) on the AudioSet dataset. Specifically, the Audio Spectrogram Transformer achieved an accuracy of 95.7% on the ESC-50 dataset and a 0.485 mAP on the AudioSet.
Also, we conducted a small test of the electrical stimulation of the TENS electrodes on the vest. We had 10 different human volunteers (55-65 years old) report the amount of stimulation that they received on a scale from 0-10 (with 0 being no electrical stimulation and 10 being the stimulation with the maximum current output) when all the TENS were activated. This process was then repeated in 3 different environments.
§ DISCUSSION
Our implementation of the Fast Fourier Transform (FFT) algorithm using Otsu's Method for noise thresholding effectively removed white noise from our audio input, which preserved the model’s sound classification accuracy of 95.7%. This confirmed that our preprocessing pipeline was efficient in accurate sound classification in real-time environments. This implementation was much more effective than other methods, since the peak signal-to-noise ratio (PSNR) was higher than other methods without requiring machine learning, which allowed our algorithm to process under the limited computational powers. We tested the different noise reduction algorithms in Table 1 by calculating the PSNR (using the PSNR formula) of the original and denoised sounds for each of the four different noise reduction algorithms we considered.
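For reference, the PSNR values reported here follow the standard formula, which can be computed as in the short Python sketch below (our own helper, with the peak taken from the clean reference signal).

import numpy as np

def psnr(reference, test):
    ref = np.asarray(reference, dtype=np.float64)
    mse = np.mean((ref - np.asarray(test, dtype=np.float64)) ** 2)
    peak = np.max(np.abs(ref))
    return 10.0 * np.log10(peak**2 / mse)      # in dB; larger means less residual noise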
As shown in Table 3, the TENS stimulation is largely unaffected by changes in temperature and other environmental factors. Because we lacked the professional expertise in wiring and Raspberry Pi current control needed to rule out rapid changes in current, we could not guarantee user safety. Thus, we replaced the TENS electrodes with LEDs, the closest alternative, as they can convey distance and direction visually: the brightness of an LED indicates distance and its placement represents direction. As a result, EchoVest serves as a proof of concept that the various components and the pipeline accomplish the task.
Currently, we are creating an app that hosts the classification model and sound preprocessing on the cloud because of potential for customizable features. However, EchoVest will still be completely localized and will not require the app to function. The app serves as a means to increase functionality with wifi and bluetooth capabilities by giving the user the ability to not be notified of certain sounds and change the various strengths of the TENS to the person's suitability. Furthermore, our future goal includes a partnership with companies like Ring that would allow us to utilize and implement our sound classification and preprocessing pipeline to their system. Their systems are outdated and do not take out sufficient white noise and, therefore, limit the functionality of their doorbell system. In addition to company partnerships, EchoVest can be directly applied to search and rescue operations as it gives personnel a heightened sense of their surroundings. EchoVest has multiple functionalities, which makes EchoVest a multi-purpose solution to a variety of real world problems.
§ CONCLUSION
In this paper, we developed an accessible, cost-efficient wearable product for localizing and classifying sounds. We further demonstrate that our preprocessing pipeline, consisting of Otsu’s Method and FFT, sufficiently preserves sound classification accuracy in a real-time environment. EchoVest utilizes optimized machine learning to efficiently and effectively lower computation costs, thereby reducing product costs. EchoVest costs a maximum of $98.90 per unit to manufacture and is easily accessible to the general public due to the lack of customization needed. Traditional hearing aid devices require numerous prerequisites, such as a hearing test, medical clearance, and a hearing aid evaluation. In contrast, EchoVest will be available to the public without any specialization or prerequisites because its only variable would be the size of the vest.
17
bisgaard2021
Bisgaard, N., Zimmer, S., Laureyns, M., & Groth, J. (2021).
A model for estimating hearing aid coverage world-wide using historical data on hearing aid sales.
International Journal of Audiology, 61(10), 841–849.
<https://doi.org/10.1080/14992027.2021.1962551>
chen2012
Chen, H., & Gururajan, R. (2012).
Otsu’s Threshold Selection Method Applied in De-noising Heart Sound of the Digital Stethoscope Record.
Lecture Notes in Electrical Engineering, 239–244.
<https://doi.org/10.1007/978-3-642-26001-8_31>
ast
Gong, Y., Chung, Y.-A., & Glass, J. (2021).
AST: Audio Spectrogram Transformer.
ArXiv:2104.01778
<https://arxiv.org/abs/2104.01778>
sloc
Grondin, F., & Michaud, F. (2019).
Lightweight and optimized sound source localization and tracking methods for open and closed microphone array configurations.
Robotics and Autonomous Systems, 113, 63–80.
<https://doi.org/10.1016/j.robot.2019.01.002>
fdahear
Health, C. for D. and R. (2019, April 24).
Hearing Aids. FDA.
<https://www.fda.gov/medical-devices/consumer-products/hearing-aids>
ica
Hyvärinen, A., & Oja, E. (2000).
Independent Component Analysis: Algorithms and Applications.
Neural Networks, 13(45), 411–430.
<https://www.cs.helsinki.fi/u/ahyvarin/papers/NN00new.pdf>
pca
Jolliffe, I. T., & Cadima, J. (2016).
Principal component analysis: a review and recent developments.
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2065), 20150202.
<https://doi.org/10.1098/rsta.2015.0202>
gccphat
Kwon, B., Park, Y., & Park, Y. (2010, October 1).
Analysis of the GCC-PHAT technique for multiple sources.
IEEE Xplore.
<https://doi.org/10.1109/ICCAS.2010.5670137>
hearlos
McKee, M. M., Choi, H., Wilson, S., DeJonckheere, M. J., Zazove, P., & Levy, H. (2018).
Determinants of Hearing Aid Use Among Older Americans With Hearing Loss. The Gerontologist.
<https://doi.org/10.1093/geront/gny051>
cochlear
National Institute on Deafness and other Communication Disorders. (2018, June 15).
Cochlear Implants.
NIDCD.
<https://www.nidcd.nih.gov/health/cochlear-implants>
procon
Nunez, K. (2020, February 27).
Cochlear Implant: Cost, Pros, Cons, Risks, How It Works.
Healthline.
<https://www.healthline.com/health/cochlear-implant>
ppwc
Papers with Code - ESC-50 Benchmark (Audio Classification). (n.d.).
Paperswithcode.com.
Retrieved February 2, 2023, from
<https://paperswithcode.com/sota/audio-classification-on-esc-50>
speechrec
Rev, & Rev. (2021, May 17).
Exploring Your Speech-to-text Options: Advantages and Disadvantages of Speech Recognition Software. Rev.
<https://www.rev.com/blog/speech-to-text-technology/advantages-and-disadvantages-of-speech-recognition-software>
tens
Transcutaneous electrical nerve stimulator (TENS). (2018, April 4).
University of Iowa Hospitals & Clinics.
<https://uihc.org/health-topics/transcutaneous-electrical-nerve-stimulator-tens>
nmf
Wang, Y.-X., & Zhang, Y.-J. (2013).
Nonnegative Matrix Factorization: A Comprehensive Review.
IEEE Transactions on Knowledge and Data Engineering, 25(6), 1336–1353.
<https://doi.org/10.1109/tkde.2012.51>
cost
Watson, A. M. (2022, August 16).
How Much Do Hearing Aids Cost?
GoodRx.
<https://www.goodrx.com/health-topic/ear/hearing-aid-cost>
who
World Health Organization: WHO. (2019, September 18).
Hearing loss. Who.int; World Health Organization: WHO.
<https://www.who.int/health-topics/hearing-loss#tab=tab_1>
http://arxiv.org/abs/2307.04081v1 | 2023-07-09 | Score-based Conditional Generation with Fewer Labeled Data by Self-calibrating Classifier Guidance | Paul Kuo-Ming Huang, Si-An Chen, Hsuan-Tien Lin | cs.CV | cs.CV, cs.LG
Score-based Conditional Generation with Fewer Labeled Data by Self-calibrating Classifier Guidance
Paul Kuo-Ming Huang, Si-An Chen, Hsuan-Tien Lin
August 12, 2023
====================================================================================================
Score-based Generative Models (SGMs) are a popular family of deep generative models that achieves leading image generation quality. Earlier studies have extended SGMs to tackle class-conditional generation by coupling an unconditional SGM with the guidance of a trained classifier. Nevertheless, such classifier-guided SGMs do not always achieve accurate conditional generation, especially when trained with fewer labeled data. We argue that the issue is rooted in unreliable gradients of the classifier and the inability to fully utilize unlabeled data during training. We then propose to improve classifier-guided SGMs by letting the classifier calibrate itself. Our key idea is to use principles from energy-based models to convert the classifier as another view of the unconditional SGM. Then, existing loss for the unconditional SGM can be adopted to calibrate the classifier using both labeled and unlabeled data. Empirical results validate that the proposed approach significantly improves the conditional generation quality across different percentages of labeled data. The improved performance makes the proposed approach consistently superior to other conditional SGMs when using fewer labeled data. The results confirm the potential of the proposed approach for generative modeling with limited labeled data.
§ INTRODUCTION
Score-based Generative Models (SGMs) capture the underlying data distribution by learning the gradient function of the log-likelihood on data, also known as the score function. SGMs, when coupled with a diffusion process that gradually converts noise to data, can often synthesize higher-quality images than other popular alternatives, such as generative adversarial networks <cit.>. SGMs attracted research attention and demonstrated promising performance not only in image generation <cit.> but also in audio synthesis <cit.>, natural language generation <cit.>, and various other fields.
Many successful SGMs above focus on unconditional generation, which models the data distribution without considering other variables <cit.>. When aiming to generate data with some control, it is necessary to model the conditional distribution concerning another variable, such as the class label for generating images from a particular class.
Such conditional SGMs will be the focus of this paper. They have achieved cutting-edge performance for class-conditional generation <cit.>, image inpainting <cit.>, and audio upsampling <cit.>.
There are two major families of conditional SGMs. The family of Classifier-Free SGMs designs specific conditional network architectures with their losses derived from the conditional score functions <cit.>. Such SGMs are known to generate high-fidelity images in fully-supervised settings where all data are labeled. Nevertheless, they are often criticized for generating data with less diversity, favoring some easier classes while being inaccurate for some harder classes. Furthermore, their performance drops significantly as the proportion of labeled data decreases, making them less preferable in semi-supervised settings.
Classifier-Guided SGMs (CGSGMs) form another family of conditional SGMs that address the aforementioned issues by decomposing the conditional score function into a mixture of the unconditional score function and the gradient of an auxiliary classifier <cit.>.
For instance, the vanilla CGSGM <cit.> trains the unconditional SGM with the popular Denoising Score Matching (DSM) <cit.> technique that learns the score function from noise-perturbed data, and the classifier with the usual cross-entropy loss from labeled data. The additional classifier improves the accuracy of conditional generation and allows better control of the trade-off between generation diversity and fidelity <cit.>. Furthermore, because the unconditional SGM can be trained with either labeled or unlabeled data in principle, CGSGMs potentially fit the semi-supervised setting better by requiring fewer labeled data.
The quality of the auxiliary classifier is critical for CGSGMs. If the classifier is overly confident in its predictions, as often happens with cross-entropy loss <cit.>, the resulting conditional scores may be unreliable. This, in turn, leads to low generation accuracy, even if the unconditional scores are reliable enough to ensure decent generation fidelity.
Robust CGSGM <cit.> trains an adversarially robust classifier instead of a usual one to improve the quality of the auxiliary classifier. However, there is no theoretical guarantee that adversarial robustness is related to reliable conditional scores. Denoising Likelihood Score Matching <cit.> proposes to calibrate the classifier on the labeled data externally, leveraging the help of the unconditional SGM. As a result, the training of the classifier depends on first having a trained unconditional SGM.
Our proposed approach is aligned with both techniques above to design a better loss to train the classifier. Still, it significantly differs from them by letting the classifier self-calibrate. Unlike the robust CGSGM, the self-calibration technique carries a sound theoretical guarantee by converting the classifier to another view of the unconditional SGM when reinterpreting the classifier through the angle of energy-based models. The novel view allows reusing DSM seamlessly to design a Self-Calibration (SC) loss (as illustrated with ℒ_SC in Fig. <ref>) that can be used on the classifier without dependence to the unconditional SGM. Furthermore, the SC loss can be effortlessly applied to both labeled and unlabeled data, resulting in immediate advantages in the semi-supervised setting.
We demonstrate the effect of self-calibration by visualizations on a toy data set. The results justify that our proposed CGSGM with the SC loss (CGSGM-SC) approach results in more accurate classifier gradients, thus enhancing the estimation of the conditional scores.
We further conduct thorough experiments on CIFAR-10 and CIFAR-100 datasets to validate the advantages of the proposed approach. The results confirm that CGSGM-SC is superior to the vanilla CGSGM and state-of-the-art techniques in the CGSGM family. Furthermore, in an extreme setting of having only 5% of the data being labeled, CGSGM-SC, which can use unlabeled data to self-calibrate the classifier, is significantly better than both classifier-guided and classifier-free SGMs, which cannot easily take the unlabeled data into account. The results confirm the potential of CGSGM-SC in scenarios where labeled data are costly to obtain.
§ BACKGROUND
Consider a data distribution p(x) where x∈ℝ^d. SGMs aim to generate samples from p(x) via the information contained in the score function ∇_xlog p(x), which is learned from data. We first introduce how the score function can be efficiently learned from data in Section <ref>, which is related to the derivation of our proposed loss. Then, we discuss how a diffusion process can be combined with learning a score function to effectively sample from p(x) in Section <ref>. Finally, we review studies that extend SGMs to conditional SGMs in Section <ref>.
§.§ Learning the score function
Learning the score function aims to choose the best function from a family of functions {s_θ(x)}_θ, such as deep learning models parameterized by θ, to approximate the score function ∇_x log p(x) of interest. The learning is based on some data {x_n}_n=1^N that are assumed to be sampled from p(x). It has been shown that the aim can be achieved by optimizing the in-sample version of the following score-matching loss over θ:
ℒ_SM=𝔼_p(x)[tr(∇_x s_θ(x))+1/2‖ s_θ(x)‖^2_2],
where tr(·) denotes the trace of a matrix and ∇_x s_θ(x)=∇^2_x log p(x) is the Hessian matrix of the log-likelihood p(x). However, calculating the score-matching loss requires O(d) passes of computation for x ∈ℝ^d, which makes the optimization process computationally prohibitive on high-dimensional data.
Several previous studies <cit.> attempted to resolve the computational issue by approximating or transforming score matching into equivalent objectives. One standard approach nowadays is called Denoise Score Matching (DSM) <cit.>, which learns the score function of a noise-perturbed data distribution q(x̃) instead. DSM typically assumes that q(x̃) comes from the original distribution p(x)
injected with a pre-specified noise q(x̃|x). Then, it has been proved <cit.> that the score function can be learned by minimizing the in-sample version of
𝔼_q(x̃|x)p(x)[1/2‖ s_θ(x̃) - ∇_x̃log q(x̃|x)‖_2^2],
where ∇_x̃log q(x̃|x) is the score function of the noise distribution centered at x. DSM is generally more efficient than the original score matching and is scalable to high-dimensional data as it replaces the heavy computation on the Hessian matrix with simple perturbations that can be efficiently computed from data.
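For concreteness, a minimal PyTorch sketch of this DSM objective for a single Gaussian noise level σ is given below (our illustration, not code from the cited works); for q(x̃|x)=N(x, σ²I), the target score is ∇_x̃ log q(x̃|x) = -(x̃-x)/σ².

import torch

def dsm_loss(score_model, x, sigma=0.1):
    # x: batch of data with shape (batch, ...)
    noise = torch.randn_like(x) * sigma
    x_tilde = x + noise
    target = -noise / sigma**2                 # score of the Gaussian perturbation
    diff = score_model(x_tilde) - target
    return 0.5 * (diff ** 2).flatten(1).sum(dim=1).mean()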
§.§ Generating from the score function by diffusion
Assume that we hope to sample from some unknown target distribution p(x) = p_0(x), and the distribution can be transited to a known prior distribution p_T(x) through a Markov chain that is described with
some stochastic differential equation (SDE) <cit.>:
dx(t)=f(x(t),t)dt+g(t)dw,
where the Markov chain is computed for 0 ≤ t < T using the drift function f(x(t),t) that describes the overall movement and the dispersion function g(t) that describes how the noise w from a standard Wiener process enters the system.
To sample from p(x) = p_0(x), the VE-SDE framework <cit.> proposes
to reverse the SDE from p_T(x) to p_0(x), which turns out to operate with another SDE <cit.>:
dx=[f(x(t),t)-g(t)^2 s(x(t), t)]dt+g(t)dw̅
where w̅ is a standard Wiener process as the time step flows from T back to 0 and s(x(t), t) ≡∇_xlog p_t(x(t)) denotes the time-dependent score function. If we can learn the score function s(x(t), t), the diffusion process in (<ref>) can then be used to take any instance sampled from the known p_T(x) to a sample from the unknown p(x) = p_0(x).
Learning the time-dependent score function s(x(t), t)) can be done by minimizing an time-generalized (in-sample) version of the DSM loss because the diffusion process can be viewed as one particular way of injecting noise. The extended DSM loss is defined as
ℒ_DSM(θ)=𝔼_t[λ(t)𝔼_x^(t),x^(0)[1/2‖ s_θ(x^(t),t) - s_t(x^(t)|x^(0))‖_2^2]],
where t is selected uniformly between 1 and T, x^(t)∼ p_t(x), x^(0)∼ p_0(x), s_t(x |x^(0)) denotes the score function of p_t(x | x^(0)), and λ(t) is a weighting function that balances the loss of different time steps.
In this paper, we take the same drift, dispersion, and weighting functions f(x,t), g(t), and λ(t) as the original VE-SDE framework <cit.>.
§.§ Related studies of conditional score-based generative models
In conditional SGMs, we are given some labeled data {(x_m, y_m)}_m=1^M in addition to the unlabeled data {x_n}_n=M+1^M+N, where y ∈{1, 2, …, K} denotes the class label. The case of N = 0 is called the fully-supervised setting, while we focus on the more challenging semi-supervised setting with N > 0 (and possibly N ≫ M) in this paper.
Conditional score-based generative models aim to learn the conditional score function ∇_x log p(x | y) from the data and then generate samples from p(x | y). Previous studies <cit.> showed how to decompose the conditional score function using Bayes' theorem:
∇_x log p(x|y) =∇_x[log p(x) + log p(y|x)- log p(y)]= ∇_xlog p(x) + ∇_xlog p(y|x).
The term log p(y) can be dropped because it is not a function of x and is thus of gradient 0. The decomposition shows that conditional generation can be achieved by an unconditional SGM that learns the score function ∇_x log p(x) plus an extra conditional gradient term ∇_xlog p(y|x).
The vanilla form of Classifier Guidance (CG) for SGM estimates ∇_xlog p(y|x) with an auxiliary classifier trained from the cross-entropy loss on the labeled data and learns the unconditional score function by the DSM loss ℒ_DSM that can in principle be applied on both the labeled and unlabeled data.
Nevertheless, the classifier within the vanilla CG approach is known to be potentially over-confident <cit.> on its predictions, which in turn results in inaccurate gradients.
The issue can mislead the conditional generation process and decrease class-conditional generation quality.
<cit.> propose to address the issue by tuning the term ∇_xlog p(y|x) with a scaling parameter λ_CG≠ 1.
∇_x log p(x|y) = ∇_xlog p(x) + λ_CG∇_xlog p_ϕ(y|x),
where p_ϕ(y|x) is the posterior probability distribution outputted by a classifier parameterized by ϕ. Increasing λ_CG sharpens the distribution p_ϕ(y | x), guiding the generation process to produce less diverse but higher fidelity samples. While the tuning heuristic is effective in improving the vanilla CG approach, it is not backed by sound theoretical explanations.
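A sketch of how the combined score in the equation above can be computed at sampling time is shown below (our illustration; the classifier is assumed to take the noisy input and the time step, and λ_CG is the scaling parameter).

import torch
import torch.nn.functional as F

def guided_score(score_model, classifier, x, t, y, lambda_cg=1.0):
    x = x.detach().requires_grad_(True)
    log_probs = F.log_softmax(classifier(x, t), dim=-1)
    selected = log_probs[torch.arange(x.shape[0], device=x.device), y].sum()
    grad = torch.autograd.grad(selected, x)[0]   # grad_x log p_phi(y | x)
    return score_model(x, t) + lambda_cg * grad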
<cit.> propose to resolve the issue differently by enhancing the adversarial robustness of the classifier. It is empirically observed that adversarially robust classifiers produce more interpretable and perceptually more aligned <cit.> gradients. However, it remains theoretically unclear whether robust classifiers are truly more accurate for capturing the true data distribution.
<cit.> propose the Denoising Likelihood Score Matching (CG-DLSM) approach that calibrates the classifiers to resolve the issues. The calibration is done by designing a loss computed from the outputs of a trained unconditional SGM to regularize the classifier during training. CG-DLSM achieves state-of-the-art performance within the CGSGM family in the fully-supervised setting. However, because of this design, the unconditional SGM and the classifier need to be trained in sequential steps, losing the computational advantage of the original vanilla CGSGM of being able to train the two components in parallel. Furthermore, it is not clear whether the unlabeled data in the semi-supervised setting could be helpful in improving the classifier under the design.
The approaches above are all CGSGMs. Another popular approach for conditional SGM is Classifier-Free Guidance (CFG) <cit.>. The approach parameterizes its deep learning model with more sophisticated architectures such that the class labels y can be included as inputs to calculate the score. A null token y_nil is used to indicate unconditional score calculation, which is linearly combined with conditional score calculation for some specific y to form the final estimate of s(x | y). CFG is a state-of-the-art conditional SGM in the fully-supervised setting. Nevertheless, as we shall show in our experiments, its performance drops significantly in the semi-supervised setting, as the conditional parts of the architecture may not get enough labeled data during training. The disadvantages of CFG and other CGSGMs in the semi-supervised setting motivate us to design another CGSGM that (1) comes with theoretical justifications; (2) includes a classifier that can be trained in parallel to the unconditional SGM; (3) can leverage both the unlabeled and labeled data to achieve better performance in the semi-supervised setting.
§ SELF-CALIBRATION FOR CLASSIFIER GUIDANCE
§.§ Motivation
As mentioned in Section <ref>, inaccurate gradients of classifiers could potentially misguide the conditional generation process.
Therefore, we need an efficient way to calibrate the classifiers.
Motivated by JEM <cit.> where the classifiers are calibrated by being reinterpreted as an energy-based model (EBM), we propose to connect the EBM and SGM and calibrate the classifiers by interpreting them as EBMs in a similar approach.
To be more specific, we formulate a self-calibration loss that utilizes denoising score matching to calibrate the score function estimated by the classifier.
§.§ Formulation of self-calibration loss
In this work, we adopted the framework of score-based generative modeling using stochastic differential equations (SDEs) <cit.>. Given a target distribution p_0(x) and a known prior distribution p_T(x) (typically a Gaussian distribution) where the transition between them is a diffusion process with timestep 0≤ t< T, we can describe the diffusion process and its reverse process using SDEs. To incorporate the results of Section <ref> into this framework, we introduce the time-dependent versions of ∇_x log p(x) and ∇_x log p(y|x), that is, ∇_x log p_t(x(t)) and ∇_x log p_t(y|x(t)), respectively, where x(t)∼ p_t. Denoising score matching (DSM) <cit.> is often utilized to train the score-based model under this framework due to its close relationship with diffusion process modeling. A time-generalized cross-entropy loss is adopted to train the classifier.
Inspired by JEM <cit.>, we propose to improve CGSGM through self-calibration during the training stage. We reinterpret the classifier as a time-dependent EBM and obtain the score function by calculating the gradient. Since both energy function -log p(x) and score function ∇_x log p(x) are calculated from the log-likelihood function, we hypothesize that integrating EBM-related objectives into classifier training can be beneficial to CGSGM. To incorporate the energy function into our framework, we used a time-dependent version of the transformation described in JEM <cit.>:
E_ϕ,t(x) = -log∑_yexp(f_ϕ,t(x)[y])= -LogSumExp_y(f_ϕ,t(x)[y])
where f_ϕ,t(x)[y] is the output logit of the classifier for class y. The score function can then be computed as follows:
s_ϕ(x,t) =∇_x LogSumExp_y(f_ϕ,t(x)[y])
To calibrate this score estimated by the classifier, we adopt DSM to calculate the Self-calibration Loss (SC loss):
ℒ_SC(ϕ)=𝔼_t[λ(t)𝔼_x_t,x_0[1/2‖ s_ϕ(x_t,t)-s_t(x_t|x_0)‖_2^2]]
where x_t∼ p_t, x_0∼ p_0, and s_t(x_t|x_0) denotes the score function of the noise centered at x_0. Fig. <ref> summarizes the calculation of the proposed SC loss. After the self-calibration loss is obtained, it is summed with the cross-entropy loss to train the classifier. The total loss can be written as:
ℒ_CLS(ϕ)=ℒ_CE(ϕ)+λ_SCℒ_SC(ϕ)
where ℒ_CE is the cross-entropy loss and λ_SC is a hyperparameter. By applying self-calibration, the classifier should be able to more accurately estimate the score function of the underlying data distribution, which implies the underlying data distribution itself is also more accurately estimated. As a result, the gradients of the classifiers should be more aligned with the ground truth as it is calculated from the estimated distribution.
After self-calibration, the classifier then can be used just like the original classifier to guide an unconditional SGM to achieve conditional generation. Note that since our method calibrates the classifier in training time and scaling classifier gradient is done in sampling time, we can easily combine the two methods to achieve better performance.
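A minimal PyTorch sketch of the proposed training objective is given below. It reflects our reading of the equations above rather than the authors' released code: the classifier logits define a time-dependent energy via LogSumExp, its input gradient is matched to the denoising target of the perturbation kernel (here a simple Gaussian with scale σ_t), and the result is added to the cross-entropy term with weight λ_SC.

import torch
import torch.nn.functional as F

def classifier_loss(classifier, x0, y, t, sigma_t, lambda_sc=1.0):
    # x0: clean batch, y: labels, t: time steps, sigma_t: noise scale at t (broadcastable)
    noise = torch.randn_like(x0)
    xt = (x0 + sigma_t * noise).requires_grad_(True)
    logits = classifier(xt, t)
    energy = torch.logsumexp(logits, dim=-1).sum()            # -E_phi,t summed over the batch
    s_phi = torch.autograd.grad(energy, xt, create_graph=True)[0]
    target = -noise / sigma_t                                 # score of the perturbation kernel
    loss_sc = 0.5 * ((s_phi - target) ** 2).flatten(1).sum(dim=1).mean()
    loss_ce = F.cross_entropy(logits, y)
    return loss_ce + lambda_sc * loss_sc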
§.§ 2D toy dataset
We use a 2D toy dataset containing two classes to demonstrate the effects of the self-calibration loss. The data distribution is shown in Fig. <ref>, where the two classes are shown in two different colors. After training the classifiers on the toy dataset with (1) only cross-entropy loss and (2) both cross-entropy loss and self-calibration loss, we plot the gradients ∇_x log p(y|x) estimated by the classifiers and compare them with the ground truth. Also, we added the ground truth unconditional score to the estimated gradients, just like CGSGM, and compared the results with the real conditional score. Additional quantitative measurements of the toy dataset are included in Appendix <ref>.
Fig. <ref> shows the ground truth classifier gradient (Fig. <ref>) and the gradients estimated by classifiers trained on the toy dataset (1) without self-calibration (Fig. <ref>) and (2) with self-calibration (Fig. <ref>).
Uncalibrated classifiers produce gradients that contain rapid changes in magnitude across the 2D space, with frequent fluctuations and mismatches with the ground truth.
Such fluctuations can impede the convergence of the reverse diffusion process to a stable data point, leading SGMs to generate noisier samples.
Moreover, the divergence from the ground truth gradient can misguide the SGM, leading to generation of samples from incorrect classes.
Uncalibrated classifiers also tend to generate large gradients near the distribution borders and tiny gradients elsewhere.
This implies that when the sampling process is heading toward the incorrect class, such classifiers are not able to “guide" the sampling process back toward the desired class.
In contrast, the introduction of self-calibration results in estimated gradients that are more stable, continuous across the 2D space, and better aligned with the ground truth.
This stability results in a smoother generation process and contributes to the production of higher-quality samples.
§.§ Using self-calibration loss on semi-supervised learning
In this work, we also explore the benefit of self-calibration loss in semi-supervised setting where only a small proportion of data are labeled.
In the original classifier guidance, the classifiers are trained solely on labeled data.
The lack of labels in the semi-supervised setting makes it more challenging to learn an unbiased classifier.
With self-calibration, we are able to better utilize the large amount of unlabeled data by calculating the self-calibration loss on all data.
To incorporate the loss and utilize the unlabeled samples during training, we changed the way ℒ_CLS in Eq. <ref> is calculated. As illustrated in Fig. <ref>, the entire batch of data is used to calculate ℒ_SC, but only the labeled data is used to calculate ℒ_CE. During training, we observed that when the majority of the data is unlabeled, the cross-entropy loss does not converge to a low and steady value if the algorithm samples batches randomly from all training data. We suspect this is due to the low percentage of labeled data in each batch. Therefore, we changed the way batches are sampled: we always ensure that half of each batch is labeled while the other half is not. Appendix <ref> summarizes the semi-supervised training process of the classifier.
Note that even though the classifier is learning a time-generalized classification task, we can still make it perform as an ordinary classifier that classifies the unperturbed data by setting the input timestep t=0. Therefore, we can easily incorporate many other common semi-supervised classification methods like pseudo-labeling <cit.>, self-training, and noisy student <cit.>.
§ EXPERIMENTS
We have tested our method on a toy dataset (Section <ref>) to provide a high-level view of how self-calibration can improve classifiers in terms of producing accurate gradients. In this section, we present the experimental results on the CIFAR-10 and CIFAR-100 datasets to demonstrate the improvement of CGSGM after incorporating our method on different percentage of labeled data (Section <ref>). Randomly selected images of CGSGM before and after self-calibration on the dataset CIFAR-10 are shown in Appendix <ref>. For conditional metrics, we report the average scores across all classes. Results of individual classes on the CIFAR-10 dataset are included in Appendix <ref>.
§.§ Experimental setup
In the following sections, we tested our methods on the CIFAR-10 and CIFAR-100 datasets for image generation. We demonstrate that our methods are able to improve generation quality both conditionally and unconditionally with different percentage of labeled data.
Implementation details
We follow NCSN++ <cit.> to implement the unconditional score estimation model. We also adapted the encoder part of NCSN++ as the classifier used in CGSGM <cit.>.
Sampling method: We used Predictor-Corrector (PC) samplers <cit.> with 1000 sampling steps.
Evaluation metrics: Besides commonly used metrics Frechet Inception Distance (FID) <cit.> and Inception Score (IS) <cit.>, we also evaluated class-conditional performance of our methods using several different methods. This includes intra-FID, which measures the average FID for each class, and generation accuracy (on the CIFAR-10 dataset), which uses a pre-trained ViT <cit.> classifier to check whether the samples are generated in the correct class. The test accuracy of the pre-trained ViT is 98.52% on the CIFAR-10 dataset.
Baseline methods:
The baseline methods used in our work include:
* Cond: Adopts conditional SGMs by conditional normalization techniques <cit.> rather than classifier guidance.
* CFG-labeled: Classifier-free guidance<cit.> using only labeled data is applied.
* CFG-all: Classifier-free guidance<cit.> using only labeled data to train the conditional part of the model and all data to train the unconditional part of the model.
* CG: Vanilla classifier guidance.
* CG-DLSM: Classifier guidance with DLSM loss <cit.> applied.
§.§ Experiment Result
Table <ref> and Fig. <ref> present the performance of all methods when applied to varying percentages of labeled data.
Notice that it includes the fully-supervised setting when 100% of data are labeled.
CG-SC-labeled implies self-calibration is only applied on labeled data while CG-SC-all implies self-calibration is applied on all data.
Conditional SGMs vs Unconditional SGMs.
The first observation from our results is that conditional SGMs, including Cond, CFG-labeled, and CFG-all, consistently excel in generation accuracy.
However, when the quantity of labeled data decreases below 40%, a significant performance drop is witnessed in these models.
These conditional SGMs, while generating high-quality images, tend to lose diversity when working with fewer labeled data.
This occurs mainly because of the lack of labeled data in training phase, leading them to generate samples closely mirroring the distribution of the labeled data instead of all data.
In contrast, unconditional SGMs, such as CG, demonstrate superior performance when the majority of the data is unlabeled, as they are capable of leveraging both labeled and unlabeled data during training.
Classifier-Guided SGMs (CGSGMs) vs Conditional SGMs
Our experimental results align with our expectations that CGSGMs produce improved performance compared to conditional SGMs.
The CG method exhibits a consistent performance in terms of FID and inception scores across varying percentages of labeled data when evaluated using unconditional metrics.
Notably, when unlabeled data is in the majority, we observe a 16% drop in generation accuracy on the CIFAR-10 dataset.
Despite this, the intra-FID of CG significantly outperforms that of conditional SGMs on both datasets.
As for the proposed method, incorporating self-calibration with labeled data does not majorly affect unconditional metrics but substantially improves conditional metrics.
This process reduces intra-FID by 8.25 and 17.86 on the CIFAR-10 and CIFAR-100 dataset respectively and increases generation accuracy on CIFAR-10 by up to 23%.
The results demonstrate that with self-calibration, the classifier can better represent the class-conditional distribution even when labeled data is limited.
Leverage unlabeled data for semi-supervised conditional generation
Intuitively, incorporating unlabeled data into the computation of self-calibration loss would enhance the quality of conditional generation, because the classifier can exploit additional information from unlabeled data during the training phase.
As the proportion of labeled data decreases, this benefit of leveraging unlabel data should become more significant.
As our experimental results show, conditional metrics do not differ greatly when the proportion of labeled data ranges between 40% and 100%.
However, when the percentage of labeled data falls below 40%, the use of unlabeled data significantly improves intra-FID and generation accuracy.
Specifically, with just 5% labeled data, intra-FID improves by 12.22, and generation accuracy increases by 22.8% compared to the original CG.
These results affirm our expectation that as the quantity of labeled data decreases, the beneficial impact of utilizing unlabeled data increases.
§ CONCLUSION
In this work, we verify that the existing CGSGM approach results in a high generation fidelity but low accuracy. We hypothesize that the root cause lies in the unreliable scores produced by the classifiers and design a Self-Calibration Loss to enhance the classifier directly towards better scores without resorting to an external SGM. The Self-Calibration Loss is derived from rigorous principles when viewing the classifier as an energy-based model. We demonstrate three immediate benefits of the proposed Self-Calibrating CGSGM approach. Using the toy dataset, we show that the scores computed from the approach are indeed closer to the ground-truth scores. Secondly, across all percentages of labeled data, our proposed approach outperforms the existing CGSGM in the semi-supervised setting. Lastly, our empirical study justifies that our proposed approach can consistently reach the best intra-FID by seamlessly leveraging the power of unlabeled data, when compared to other conditional SGMs. The benefits establish the rich potential of the proposed approach.
§ LIMITATIONS
The major limitation of our work lies in the selection of datasets. We can only afford to conduct experiments on smaller and lower-resolution datasets (CIFAR-10 and CIFAR-100) because of limited computational resources. In particular, even with those smaller data, training, sampling, and testing a single approach on a single setting once requires up to 210 hours (more than a week) with 4 NVIDIA Tesla V100 GPUs. We understand that conducting more experiments on larger and higher-resolution datasets can further strengthen our claims, but those experiments are not affordable to us. While we tested on only two datasets, the observed results are consistent—our proposed approach achieves the best class-conditional performance in the semi-supervised setting with much fewer labeled data.
§ DETAILED CLASS-CONDITIONAL GENERATION MEASUREMENTS OF CIFAR-10
Section <ref> contains the class-conditional measurements averaged among all classes of CIFAR-10. This section includes a more detailed result that contains the measurement of each class.
§ TRAINING ALGORITHM FOR SEMI-SUPERVISED SELF-CALIBRATING CLASSIFIER
Semi-supervised classifier training with self-calibration loss
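The algorithm listing itself was not recovered in this extraction, so the following Python sketch is our hedged reconstruction of the procedure described in Section 3.4: each batch is assembled half from labeled and half from unlabeled data, the cross-entropy term uses only the labeled half, and the self-calibration term uses the whole batch. The loader, noise-schedule, and classifier interfaces are assumptions for illustration only.

import torch
import torch.nn.functional as F

def train_epoch(classifier, optimizer, labeled_loader, unlabeled_loader,
                sigma_fn, T=1.0, lambda_sc=1.0):
    for (x_lab, y_lab), x_unlab in zip(labeled_loader, unlabeled_loader):
        x_all = torch.cat([x_lab, x_unlab], dim=0)           # half labeled, half unlabeled
        t = torch.rand(x_all.shape[0], device=x_all.device) * T
        sigma_t = sigma_fn(t).view(-1, 1, 1, 1)              # assumes image tensors (B, C, H, W)
        noise = torch.randn_like(x_all)
        x_t = (x_all + sigma_t * noise).requires_grad_(True)
        logits = classifier(x_t, t)
        energy = torch.logsumexp(logits, dim=-1).sum()
        s_phi = torch.autograd.grad(energy, x_t, create_graph=True)[0]
        loss_sc = 0.5 * ((s_phi + noise / sigma_t) ** 2).flatten(1).sum(dim=1).mean()
        loss_ce = F.cross_entropy(logits[: x_lab.shape[0]], y_lab)   # labeled half only
        loss = loss_ce + lambda_sc * loss_sc
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()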
§ QUANTITATIVE MEASUREMENTS OF TOY DATASET
Table <ref> shows the quantitative measurements of the methods on the toy dataset. First, we compared the gradients ∇_x log p(y|x) estimated by the classifiers with the ground truth by calculating the mean squared error (first column) and cosine similarity (second column). We observed that after self-calibration, the mean squared error of estimated gradients can be lowered by 18%, and tuning the scaling factor can further improve it to 36%. This improvement after scaling implies that the direction of gradients is more aligned with the ground truth, and scaling can further reduce the mismatch between the magnitude of the classifier and the ground truth. In terms of cosine similarity, self-calibration grants the classifiers an improvement of 42%. The numerical results agree with our previous observation that after self-calibration, classifiers align better with the ground truth in terms of both direction and magnitude.
Then, we add the unconditional score of the training data distribution to the classifier gradients to calculate the conditional scores and compare the results with the ground truth. As we can see, the classifiers are able to estimate conditional scores with a cosine similarity of 0.9175 even without self-calibration. The result shows that with a well-trained unconditional SGM, in which we use the ground truth unconditional score in this case, CGSGM is able to produce conditional scores pointing in the correct directions in most cases. This explains why the original CGSGM is able to generate samples with decent quality. After applying the self-calibration loss and scaling method, we can further improve the cosine similarity to 0.9689, which we believe can enhance the quality of class-conditional generation.
§ TUNING THE SCALING FACTOR FOR CLASSIFIER GUIDANCE
This section includes the experimental results of tuning the scaling factor λ_CG for classifier guidance with and without self-calibration under the fully-supervised setting.
Fig. <ref> shows the result of tuning the scaling factor λ_CG for classifier guidance. While tuning λ_CG with and without self-calibration, we can see that self-calibration does not affect unconditional performance by much. However, when evaluated with conditional metrics, the improvement after incorporating self-calibration becomes more significant. The improvement in intra-FID is up to 7.9, while the generation accuracy can improve by up to 13%.
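The scaling factor enters the sampler through the standard classifier-guidance combination of scores; a sketch of that combination, with a hypothetical score_model and clf interface, is shown below.

import torch

def guided_score(score_model, clf, x, sigma, y, lam_cg=1.0):
    """Classifier-guided conditional score:
    s(x, y) = s_uncond(x) + lam_cg * grad_x log p(y | x)."""
    x = x.detach().requires_grad_(True)
    log_prob = torch.log_softmax(clf(x, sigma), dim=1)
    selected = log_prob[torch.arange(x.shape[0], device=x.device), y].sum()
    grad = torch.autograd.grad(selected, x)[0]
    return score_model(x, sigma) + lam_cg * grad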
§ IMAGES GENERATED BY CLASSIFIER GUIDANCE WITH AND WITHOUT SELF-CALIBRATION
This section includes images generated by classifier guidance with (first 6 images) and without (last 6 images) self-calibration after training on different percentages of labeled data. Each row corresponds to a class in the CIFAR-10 dataset.
|
http://arxiv.org/abs/2307.07210v1 | 20230714080538 | Registry-dependent potential energy and lattice corrugation of twisted bilayer graphene from quantum Monte Carlo | [
"Kittithat Krongchon",
"Tawfiqur Rakib",
"Shivesh Pathak",
"Elif Ertekin",
"Harley T. Johnson",
"Lucas K. Wagner"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
] |
Department of Physics, Institute for Condensed Matter Theory, University of Illinois at Urbana-Champaign
Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign
Sandia National Laboratories
Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign
Materials Research Laboratory, University of Illinois at Urbana-Champaign
Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign
Department of Materials Science and Engineering, University of Illinois at Urbana-Champaign
Department of Physics, Institute for Condensed Matter Theory, University of Illinois at Urbana-Champaign
An uncertainty in studying twisted bilayer graphene (TBG) is the minimum energy geometry, which strongly affects the electronic structure.
The minimum energy geometry is determined by the potential energy surface, which is dominated by van der Waals (vdW) interactions.
In this work, large-scale diffusion quantum Monte Carlo (QMC) simulations are performed to evaluate the energy of bilayer graphene at various interlayer distances for four stacking registries.
An accurate registry-dependent potential is fit to the QMC data and is used to describe interlayer interactions in the geometry of near-magic-angle TBG.
The band structure for the optimized geometry is evaluated using the accurate local-environment tight-binding model.
We find that compared to QMC, DFT-based vdW interactions can result in errors in the corrugation magnitude by a factor of 2 or more near the magic angle.
The error in corrugation then propagates to the flat bands in twisted bilayer graphene, where it can affect the bandwidth by about 30% and can change the nature and degeneracy of the flat bands.
Registry-dependent potential energy and lattice corrugation of twisted bilayer graphene from quantum Monte Carlo
Lucas K. Wagner
August 12, 2023
================================================================================================================
§ INTRODUCTION
Twisted bilayer graphene (TBG) exhibits a multitude of correlated electronic phases and has emerged as a platform for studying correlated electron physics <cit.>.
These correlation-driven phases are attributed to flattening of the band structure near the Fermi level due to the superlattice interaction in moiré patterns <cit.>.
However, a complete model Hamiltonian for these systems remains unknown.
An important piece of the puzzle of predicting flat bands is the van der Waals (vdW) interactions, which lead to symmetry breaking, corrugation, and other distortions in the layers.
The lattice reconstruction in the moiré superlattices in turn significantly affects the electronic behavior of this system <cit.>.
Therefore, the interplay between lattice corrugation and electronic structure provides strong impetus to accurately model vdW contributions in TBG.
Evaluating vdW interactions, however, is a challenging task for two reasons.
First, this type of interaction results from long-range electron correlations, which means that local or semi-local exchange-correlation functionals from density functional theory (DFT) cannot describe them <cit.>.
Although a large set of computational methods has been developed in the DFT regime to account for the long-range interactions <cit.>, they are empirical in nature.
Within various iterations of the so-called DFT-D scheme, which adds the dispersion corrections to the standard Kohn–Sham DFT energy, the estimated binding energy errors can differ by up to 250% <cit.> in bilayer graphene.
Thus, the large uncertainty in this set of techniques warrants a more accurate first-principles approach.
The second reason for the difficulty is that accurate treatment of vdW interactions becomes computationally intractable for a large system, such as small-angle TBG, which consists of ∼ 10^4 atoms.
A systematic potential training approach is needed to model lattice and electronic degrees of freedom for vdW systems, while maintaining first-principles accuracy.
Diffusion quantum Monte Carlo (QMC) has been shown to closely reproduce experimental values for vdW materials due to explicit treatment of electron interactions <cit.>.
Specifically, in AB-stacked bilayer graphene, the binding-energy curve from QMC is able to predict the out-of-plane phonon frequency and relaxed interlayer spacing in agreement with available experimental results <cit.>, which shows that QMC is a promising technique for investigating bilayer graphene.
The interlayer energy curve from QMC data for this system is available for only a single registry: AB-stacked bilayer graphene.
To fit a registry-dependent potential, multiple registries are needed.
Therefore, more QMC data is needed to accurately parameterize the potential energy surface of the entire moiré superlattice.
In this manuscript, we use large-scale QMC simulations to compute the Born-Oppenheimer ground state energy for bilayer graphene as a function of the displacement for four stacking registries, which allows for an accurate assessment of the stacking fault energy.
We find that the QMC data is closer to RPA <cit.> and a previous atomic potential parameterization <cit.> than to DFT with dispersion corrections.
The data is well-fit by the Kolmogorov–Crespi (KC) potential model <cit.>, whose refined parameters for the KC potential are made available in the supplementary information in LAMMPS <cit.> format.
We then assess the importance of an accurate vdW energy calculation by investigating the effect of corrugation computed by QMC, DFT-D2 <cit.>, DFT-D3 <cit.>, and Ref. <cit.> (labeled as “KC-Ouyang”).
For each relaxed structure from different parameterizations of the KC potential, the band structure is evaluated using the accurate local-environment tight-binding (LETB) model <cit.>.
The DFT-D2 and DFT-D3 vdW interactions result in large errors in the band structure due to the poor estimation of the structural corrugation.
We show that the difference in corrugation due to QMC evaluation of the vdW interaction leads to significant changes in the electronic band structure.
§ METHOD
The procedure for parameterizing the interlayer interaction of TBG and evaluating the sensitivity of the minimum energy structure and electronic structure is outlined as follows.
* First, the energy of rigid bilayer graphene is sampled as a function of the interlayer distance using QMC for four stacking registries, defined in Fig. <ref>.
* The sampled QMC energy data are used to fit the KC potential.
* Using the fitted KC potential, the structure of TBG is optimized by minimizing the total energy.
* The electronic structure is calculated on the relaxed structure of TBG using the LETB model <cit.>.
The steps are illustrated in Fig. <ref> and are discussed in further details in the following sections.
§.§ QMC calculations
We consider 4 stacking registries of bilayer graphene, which are constructed by performing rigid translations of the top layer of AB-stacked bilayer graphene along the armchair direction by different distances.
We label these registries according to the translation distances s as listed in <ref>.
The sliding parameter s is defined such that starting from the AB structure (s = 0), the translation of s = 1 brings the structure back to AB.
The nomenclature of these stacking orders is described in Ref. <cit.>, except for the “Mid” stacking type, which is additionally defined here to be s = 1/2.
For each of the four registries, we perform QMC to sample the energy at 11 interlayer distances, ranging from d = 3 Å to 7 Å, for a total of 44 energy data points.
In the fixed-node QMC scheme as implemented in the QMCPACK software package <cit.>, the ground-state wave function is projected out of the Slater–Jastrow trial wave function of the form
Ψ(𝐑) = Det [ϕ_i^↑(𝐫_j^↑)] Det [ϕ_i^↓(𝐫_j^↓)] exp(J),
where 𝐑 = {𝐫_1, …, 𝐫_M} is the collection of electron coordinates of the M-electron system, ϕ is the Kohn–Sham orbital, i and j denote electron indices, ↑ and ↓ indicate spins, and J is the Jastrow correlation factor as defined in Ref. <cit.>.
The set of Kohn–Sham orbitals is produced by the Quantum ESPRESSO plane-wave DFT code <cit.>, using 200 Ry kinetic energy cutoff and the 𝐤-point mesh of 20 × 20 × 1.
The core electrons are removed using Dirac–Fock pseudopotentials described by Ref. <cit.>.
We verify that the orbitals generated by the Perdew–Burke–Ernzerhof approximation <cit.> and two vdW functionals, namely DFT-D2 <cit.> and DFT-D3 <cit.>, result in the same QMC energy within error bars.
The QMC energy is twist-averaged <cit.> over 4 × 4 𝐤-point mesh.
The finite-size errors are eliminated by extrapolating the twist-averaged QMC energies of 3 × 3, 4 × 4, 5 × 5, and 6 × 6 bilayer graphene supercells to the thermodynamic limit.
The full analysis of the QMC errors is provided in the Appendix.
§.§ Potential parameterization
The stacking-dependent KC potential <cit.> is fitted to the 44 calculated QMC data points to obtain a smooth curve that can describe the energy at any registry and interlayer distance.
In this work, we assume that this is enough to estimate the interlayer interaction in the twisted bilayer case; essentially we are assuming that the interaction energy only depends on the local stacking.
We thus expect the potential to be most accurate for small twist angles, where the local alignment of the layers varies slowly.
The detailed procedure to select the most suitable model for a low energy structure is described in the Appendix.
The KC potential model fit to QMC data is labeled as KC-QMC.
We perform a similar procedure for DFT-D2 <cit.> and DFT-D3 <cit.>, whose functional forms are reproduced in the Appendix, but with many more samples of interlayer distances (0.01 Å apart) as they are not computationally expensive.
The KC potentials fitted to DFT-D2 and DFT-D3 are labeled as KC-DFT-D2 and KC-DFT-D3 respectively in order to make distinctions between the raw training data and the fitted curves.
§.§ Stacking-fault energy calculation
The stacking-fault energy (SFE) is defined as the difference between the energy of a displaced configuration (s ≠ 0) and the energy of the most stable configuration (s = 0), which is the AB-stacked bilayer graphene in this case.
We plot the stacking-fault energy as a function of registry along the armchair direction for our results from KC-QMC, KC-DFT-D2, and KC-DFT-D3 in Fig. <ref>(a).
The KC parameters for KC-QMC, KC-DFT-D2, and KC-DFT-D3 are obtained by the fitting procedure as described in the Appendix, while the KC parameters for KC-Ouyang are taken from Ref. <cit.>.
The data points for KC-QMC, KC-Ouyang, KC-DFT-D2, and KC-DFT-D3 are obtained from a quadratic fit within the range of 0.05 Å from the minimum of the potential energy surface for each stacking registry.
The data points for RPA are taken directly from Ref. <cit.>.
We fit the SFE as a function of registry using the formula <cit.>
F(s) = c_0
+ c_1 [ 2cos(2π s) + cos(4π s) ]
+ c_2 [ 2cos(6π s) + 1 ]
+ c_3 [ 2cos(4π s) + cos(8π s) ]
+ √(3) c_1 [ -2sin(2π s) + sin(4π s) ]
- √(3) c_3 [ -2sin(4π s) + sin(8π s) ],
where F(s) is the SFE, s is the registry in the armchair direction in units of √(3) a, and a is the in-plane lattice constant.
The fitting constants c_0, c_1, c_2, c_3 are reported in Table <ref>.
We also fit the interlayer distance d_min for a given registry using the same functional form as Eq. (<ref>).
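A fit of the SFE functional form above to tabulated values can be reproduced with a short script; s_data and F_data below are placeholder arrays for the registry values and stacking-fault energies extracted from the relaxed potential-energy surfaces.

import numpy as np
from scipy.optimize import curve_fit

def sfe_model(s, c0, c1, c2, c3):
    """Stacking-fault energy as a function of registry s (in units of sqrt(3)*a)."""
    return (c0
            + c1 * (2 * np.cos(2 * np.pi * s) + np.cos(4 * np.pi * s))
            + c2 * (2 * np.cos(6 * np.pi * s) + 1)
            + c3 * (2 * np.cos(4 * np.pi * s) + np.cos(8 * np.pi * s))
            + np.sqrt(3) * c1 * (-2 * np.sin(2 * np.pi * s) + np.sin(4 * np.pi * s))
            - np.sqrt(3) * c3 * (-2 * np.sin(4 * np.pi * s) + np.sin(8 * np.pi * s)))

# popt, pcov = curve_fit(sfe_model, s_data, F_data)
# The same functional form can be reused for the relaxed interlayer distance d_min(s).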
§.§ Structural relaxation
The relaxed geometries of TBG are obtained using the conjugate gradient method with a stopping tolerance of 10^-11 eV as implemented in the LAMMPS molecular dynamics program.
The intralayer interaction is given by the reactive empirical bond order potential <cit.>, while the interlayer interaction is described by one of the four KC potentials that we have parameterized, namely KC-QMC, KC-Ouyang, KC-DFT-D2, and KC-DFT-D3.
The initial structures are defined by two rigid sheets of graphene at 3.4 Å interlayer distance.
The top layer is rotated at a twist angle θ with respect to the bottom layer.
For this work, we perform relaxation calculations for the commensurate twist angles of θ = 0.84^∘, 0.93^∘, 0.99^∘, 1.05^∘, 1.08^∘, and 1.16^∘.
Despite the large number of atoms in a simulation cell, which is on the order of 10^4, the geometry optimizations remain computationally tractable due to the low cost of the classical potentials.
§.§ Local-environment tight-binding model
To determine the band structure of the relaxed geometry, the local-environment tight-binding model is employed due to its high accuracy within twisted bilayer graphene <cit.>.
This model accounts for the detailed local environment of the nearby atoms and has the following form.
H_LETB = ∑_ij σ t_ij^LETB (𝐑_i, 𝐑_j, {𝐑_ij}) c_i σ^† c_j σ + h.c.,
where σ denotes the spin index, 𝐑_i and 𝐑_j represent the location of atoms i and j, {𝐑_ij} is a set of atomic positions in the vicinity of atoms i and j.
The functional form of the hopping parameters t_ij^LETB is classified into intralayer and interlayer contributions based on the z projection of the distance 𝐑_i - 𝐑_j. The detailed description of the functional form of t_ij^LETB is provided in Ref. <cit.>.
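The LETB hopping parameters themselves come from the cited parameterization; the generic step that follows, building the Bloch Hamiltonian from a list of hoppings and extracting the bands closest to the Fermi level with a sparse eigensolver, can be sketched as below. The inputs (pairs, dvecs, hoppings) are hypothetical arrays assumed to be produced by an LETB implementation.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def bands_at_k(kvec, n_atoms, pairs, dvecs, hoppings, n_bands=8, e_center=0.0):
    """Diagonalize a tight-binding Bloch Hamiltonian at wavevector kvec.

    pairs    : (M, 2) atom indices (i, j), each bond listed once
    dvecs    : (M, 2) in-plane bond vectors R_j - R_i (including periodic images)
    hoppings : (M,) hopping amplitudes t_ij, e.g. from the LETB model
    Returns the n_bands eigenvalues closest to e_center (e.g. the flat bands
    when e_center is set near the Fermi energy)."""
    i, j = pairs[:, 0], pairs[:, 1]
    data = hoppings * np.exp(1j * dvecs @ kvec)
    H = sp.coo_matrix((data, (i, j)), shape=(n_atoms, n_atoms)).tocsc()
    H = H + H.getH()  # add the Hermitian conjugate (h.c.) part
    vals = eigsh(H, k=n_bands, sigma=e_center, return_eigenvectors=False)
    return np.sort(vals.real)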
§ RESULTS
The results of our calculations are discussed using QMC as the reference.
This decision is based on the ability to reproduce the experimental values of relaxed interlayer spacing in the AB stacking and out-of-plane zone-center optical phonon frequency in bilayer graphene <cit.>.
§.§ Stacking-fault energy
Figure <ref>(a) shows the SFE as a function of registry.
We find a close agreement between KC-QMC, KC-Ouyang, and RPA.
Let us consider the AA stacking, shown by the vertical dotted line at the registry of s = 2/3, where the differences are the most notable.
The difference between KC-QMC and RPA is 0.2 meV in the stacking-fault energy.
We consider the agreement between KC-QMC and RPA to be an indication that both might be accurate in this case.
A similar agreement between QMC and RPA was found for water on boron nitride <cit.>.
The difference between KC-QMC and KC-Ouyang is 0.3 meV in the stacking-fault energy.
While the agreement might appear to be fortuitous, a similar agreement between QMC and the so-called DFT-MBD <cit.>, on which the KC-Ouyang potential was trained, was found in DNA-ellipticine molecules <cit.>.
Relative to KC-QMC, and hence to the consensus of KC-QMC, RPA, and KC-Ouyang, KC-DFT-D2 overestimates the SFE for the AA stacking by 1.7 meV, while KC-DFT-D3 underestimates it by 1.5 meV.
This result agrees with previous comparisons to RPA only <cit.>, so our results increase confidence in the RPA results.
Figure <ref>(b) shows the relaxed interlayer distance, d_min, as a function of registry.
Similar to the case of SFE in Fig. <ref>(a), there is an agreement between KC-QMC, KC-Ouyang, and RPA.
In this case, however, both RPA and KC-Ouyang underestimate the relaxed interlayer distance for the AA stacking by a small amount of 0.03 and 0.04 Å, respectively.
Meanwhile, the two dispersion-corrected DFT methods show the opposite trend of SFE, as shown by the fact that KC-DFT-D2 has the smallest relaxed interlayer distances overall.
KC-DFT-D2 underestimates the relaxed interlayer distance by 0.12 Å, while KC-DFT-D3 overestimates it by 0.07 Å.
This result is consistent with the SFE shown in Fig. <ref>(a) in the sense that a weaker interaction leads to a larger interlayer distance, which in turn results in a smaller SFE.
§.§ Effects of the more accurate interlayer potential on the minimum energy structure of TBG
The corrugation δ z of both layers, defined as the deviation of the z-coordinate of a carbon atom from the average of the smallest and largest z-coordinates within the layer, is visualized for each interlayer potential in Fig. <ref>(a, b, c, d).
In the AA regions, the relaxed structures manifest an upward bulge in the top layer and a downward bulge in the bottom layer, which are denoted by the most prominent red and blue regions in the visualization.
Around these AA peaks, the 6-fold symmetric structure is observed.
The formation of small-amplitude structure around the AA regions results in the 3-fold symmetry around the AB/BA regions, which can be identified by the centroid of three adjacent AA nodes as labeled in Fig. <ref>(a).
This result is in qualitative agreement with previous studies of breathing mode structure <cit.>.
The heights of these peaks depend on the potential being used to describe the interlayer interactions.
In the case of KC-QMC (Fig. <ref>(a)), the maximum out-of-plane corrugation is δ z_KC-QMC = 0.075 Å.
The maximum corrugation of KC-Ouyang is δ z_KC-Ouyang = 0.071 Å, or 5% smaller than δ z_KC-QMC.
This similarity is expected from the agreement in SFE (Fig. <ref>).
For KC-DFT-D2, the chiral relaxations are also observed between layers, where the top layer has the opposite chirality to the bottom layer, in agreement with KC-QMC and KC-Ouyang.
However, the difference in maximum out-of-plane corrugation from KC-QMC is more pronounced as δ z_KC-DFT-D2 is 0.112 Å, which is 50% larger than δ z_KC-QMC.
A small difference from KC-QMC is also observed around the AA nodes, where an inner 6-fold symmetric structure appears, while in the case of KC-QMC and KC-Ouyang the AA regions have smooth, round-shaped bulges.
On the other hand, the structure from KC-DFT-D3 has much smaller overall corrugation, with the maximum of only δ z_KC-DFT-D3 = 0.045 Å.
As a result, the chiral symmetry between both layers and 6-fold symmetric structure around AA nodes are not observed in KC-DFT-D3.
§.§ Effects of the more accurate minimum energy structure on the band structure of TBG
To investigate the effects of the more accurate minimum energy structure on the band structure, the LETB model is employed <cit.>.
In Fig. <ref>(e, f, g, h), the LETB band structures are plotted for four twisted bilayer structures at 0.99^∘, relaxed using KC parameters from four different sources, namely KC-QMC, KC-Ouyang, KC-DFT-D2, and KC-DFT-D3.
The band structures from KC-QMC, KC-Ouyang, and KC-DFT-D3 show similar features, in which the four bands exhibit two degenerate energy states at the gamma point, as reported in previous studies <cit.>.
The band structure from the KC-DFT-D2 geometry shows a slightly different feature from the other three structures: in this case, all four bands seem to form a single degenerate energy state at the gamma point.
The flat bandwidths, defined as the difference between the maximum energy and the minimum energy of the flat bands, for different potentials and twist angles are reported in Table <ref>.
At the twist angle of 0.99^∘, the bandwidth for KC-QMC is 7% larger than KC-Ouyang and KC-DFT-D2, and 33% larger than KC-DFT-D3.
In Fig. <ref>(a), the flat bandwidth is plotted as a function of the twist angle.
As the twist angle is reduced from 1.16^∘ to 0.99^∘, all four potentials show a similar downward trend with an inflection point that defines the first magic twist angle.
The identification of the magic angle at 0.99^∘ agrees with the previous result <cit.>.
The electron and hole gaps are defined by the separation of the flat bands from the remote bands, and have been noted <cit.> to be particularly sensitive to the corrugation.
For our geometries, these gaps are presented in Fig. <ref>(b, c).
The treatment of the vdW interaction can change the estimated hole and electron gaps by a significant amount.
For example, near the magic angle of 0.99^∘, the gaps can vary by almost a factor of two.
Corrugation tends to increase the electron and hole gaps, so the underestimation of corrugation in DFT-D3 results in gaps that are too small, while the reverse in DFT-D2 results in gaps that are too large.
§ CONCLUSION
As has been noted previously in the literature, the careful treatment of the van der Waals interaction appears to be very important to obtain accurate corrugation in twisted bilayer graphene, and small changes in the corrugation can affect the electronic structure significantly.
In this work, we provided a state-of-the-art benchmark of the van der Waals interaction in bilayer graphene and found that while a commonly used atomic potential <cit.> is fairly accurate, DFT-based methods may not improve the description, and in fact may lead to worse results.
Models of the electronic structure which depend on van Hove singularities and other features of the flat bands should take this into consideration.
This study has focused on freestanding bilayer graphene; encapsulated layers in boron nitride may change the corrugation and result in different electronic structure behavior.
Regarding the accuracy of the interlayer interaction in graphene, we found that the commonly used Kolmogorov–Crespi <cit.> model as parameterized in Ref. <cit.> was surprisingly accurate compared to the QMC results, as well as to the RPA estimation.
On the other hand, commonly used corrections to DFT, the DFT-D2 and DFT-D3 functionals, resulted in large errors in the interlayer potential and stacking fault energy, which leads to an over (DFT-D2) or under (DFT-D3) estimation of the degree of corrugation.
We found that the electronic band structure varied significantly depending on the treatment of the corrugation.
While structures relaxed from different potentials result in the same prediction of the magic twist angle at 0.99^∘, the error in the electronic structure due to using DFT geometries near this angle results in rearrangements of bands of up to around 50% (≈ 3 meV) in the two lower flat bands for the DFT-D2 structure.
While the separation of the flat bands from the other bands as quantified by electron and hole gaps shows the same inflection point at 0.99^∘, at the magic twist angle, the relaxed structures from DFT-D2 and DFT-D3 result in a similar error of roughly 10 meV, which is an error of about 50%.
Alongside this paper, we provide the QMC data and a potential fit to the QMC data suitable for use in LAMMPS.
This data is suitable for use to perform atomic scale simulations of bilayer graphene, and has the level of accuracy comparable to the underlying quantum Monte Carlo calculations in the binding region.
§ ACKNOWLEDGEMENTS
This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Computational Materials Sciences Program, under Award No. DE-SC0020177.
This work made use of the Illinois Campus Cluster, a computing resource that is operated by the Illinois Campus Cluster Program (ICCP) in conjunction with the National Center for Supercomputing Applications (NCSA) and which is supported by funds from the University of Illinois at Urbana-Champaign.
This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
§ APPENDIX
§.§ Errors in diffusion quantum Monte Carlo data
Figure <ref> shows that the results for the time step of 0.02 Ha^-1 have roughly 0.01 to 0.02 meV errors from the energies for the time step of 0.01 Ha^-1.
Since the discrepancy is small at small interlayer distances, we consider the time step of 0.02 Ha^-1 a good approximation for describing the potential energy near equilibrium distances.
The diffusion quantum Monte Carlo (QMC) simulations are performed using a 4 × 4 twist grid on the supercells N = 3 × 3, 4 × 4, 5 × 5, and 6 × 6, where N is the number of unit cells in a simulation cell.
The energy is extrapolated to the infinite system size by fitting the following linear equation <cit.>
E(N) = E(∞) + cN^-5/4,
where E(∞) is a fitting parameter, which represents the energy extrapolated to the thermodynamic limit, and N is the number of primitive cells in a simulation cell.
The error bars of the extrapolated energies are calculated using the bootstrapping technique.
The linear model fits our energy data well, as shown in Fig. <ref> for AB-stacked bilayer graphene with 3.35 Å interlayer distance.
The time step is chosen to be 0.02 Ha^-1.
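The finite-size extrapolation and its bootstrapped uncertainty can be reproduced with a short script; this is a sketch that assumes a parametric bootstrap, resampling each twist-averaged energy within its QMC error bar, which may differ in detail from the resampling used here.

import numpy as np
from scipy.optimize import curve_fit

def extrapolate_fss(N, E, E_err, n_boot=1000, seed=None):
    """Fit E(N) = E_inf + c * N**(-5/4) and bootstrap the error on E_inf."""
    rng = np.random.default_rng(seed)
    model = lambda n, e_inf, c: e_inf + c * np.asarray(n, dtype=float) ** (-1.25)
    popt, _ = curve_fit(model, N, E, sigma=E_err, absolute_sigma=True)
    boots = []
    for _ in range(n_boot):
        E_resampled = rng.normal(E, E_err)  # resample within the QMC error bars
        p, _ = curve_fit(model, N, E_resampled, sigma=E_err, absolute_sigma=True)
        boots.append(p[0])
    return popt[0], np.std(boots)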
§.§ Fitting procedure
We sampled 44 energy data points using QMC and fit them to the KC potential <cit.>.
The KC model is a registry-dependent potential designed to improve on the classical potential by taking into account the anisotropy of the π overlap between layers.
The full model is given by
E = 1/2∑_i ∑_j ≠ iTap(r_ij) V_ij.
V_ij = e^-λ(r_ij - z_0) [C + f(ρ_ij) + f(ρ_ji)] - A(r_ij/z_0)^-6.
ρ_ij^2 = r_ij^2 - (𝐫_ij·𝐧_i)^2.
ρ_ji^2 = r_ij^2 - (𝐫_ij·𝐧_j)^2.
f(ρ) = e^-(ρ/δ)^2∑_n=0^2 C_2n(ρ/δ)^2n,
where 𝐫_ij is the distance vector pointing from atom i to atom j in different layers,
𝐧_k is the surface normal at atom k, and ρ_ij is the transverse distance from atom i to j.
The taper function given by
Tap(x_ij) = 20 x_ij^7 - 70 x_ij^6 + 84 x_ij^5 - 35 x_ij^4 + 1,
x_ij = r_ij/R_cut,
provides a continuous long-range cutoff, and R_cut is fixed to 16 Å throughout all the calculations.
The raw QMC energies for four stacking types are displayed as black data points in Fig. <ref>(a, b, c, d).
We fit the KC model to the data points weighted by the energy and a tunable parameter k_B T according to the formula
w_i = exp(-E_i - E_min/k_BT),
where E_i is the raw energy from QMC,
E_min is the smallest data point, i.e. the equilibrium point of the AB stacking potential energy surface,
k_B is the Boltzmann constant,
and T is the temperature, which serves as a parameter that controls the weights in this context.
The weights are introduced so that we can tune how much the fitting model favors the data points near equilibrium by adjusting the value of k_B T.
A smaller value of k_B T corresponds to a model that highly favors the points near equilibrium, while giving up the goodness of fit at very small or large interlayer distances.
Thus, the model is selected according to the goodness of fit in the region of interest, which can be quantified by the root mean square (RMS) as described in the next section.
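In practice the Boltzmann weights can be folded into a standard least-squares fit by converting them to effective uncertainties, since minimizing the weighted sum of squared residuals is equivalent to using sigma_i proportional to w_i^(-1/2). The sketch below assumes a placeholder kc_model(X, ...) that returns the KC interlayer energy for geometries X given the fit parameters.

import numpy as np
from scipy.optimize import curve_fit

def weighted_fit(kc_model, X, E, kT=4.0, p0=None):
    """Fit an interlayer energy model to QMC energies E, weighting each point
    by w_i = exp(-(E_i - E_min)/kT); E and kT share the same units (here meV).
    Larger weights are mimicked by smaller effective sigma (sigma_i = w_i**-0.5)."""
    E = np.asarray(E, dtype=float)
    w = np.exp(-(E - E.min()) / kT)
    sigma_eff = 1.0 / np.sqrt(w)
    popt, pcov = curve_fit(kc_model, X, E, p0=p0, sigma=sigma_eff)
    return popt, pcov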
The values of fitting parameters for different values of k_B T are reported in Table <ref> along with the KC parameters from Ref. <cit.> labeled as KC-Ouyang.
The fitted curves for three different sets of weights, i.e. k_B T = 2, 4, and ∞ meV (unweighted) are shown in Fig. <ref>(a, b, c, d).
We find that while the KC potential fits well to the energy data from the DFT-D scheme because they both have the functional form of r^-6, the QMC potential energy is not exactly proportional to r^-6, which is why the KC potential struggles to describe the entire potential energy surface as shown in Fig. <ref>.
Modifications to the model might be needed to describe the potential energy surface at any interlayer separation range.
§.§ Choosing the KC parameter set
In order to decide which KC parameter set to use, we investigate the RMS errors between the QMC data and the fitting within the region of interest.
The RMS errors for the data within d = 3.2 to 3.8 Å, as illustrated by the highlighted regions in Fig. <ref>(a, b, c, d), are plotted as a function of k_B T in Fig. <ref>(e).
This range of interlayer distances is chosen based upon the fact that the relaxed interlayer distances of all the methods we explore fall within this range as shown in Fig. <ref>(b).
Figure <ref>(e) suggests that the most preferable model is the one corresponding to k_BT = 4 meV, as it has the smallest RMS.
Therefore, all the subsequent calculations of relaxed structures in this work are performed using this set of KC parameters to describe the interlayer interactions.
In the case of KC-QMC, the chosen parameters are the third row of Table <ref>.
The minimum interlayer spacing d_min for different stacking registries and potentials is reported in Table <ref>.
On the other hand, the binding energy (BE) is defined as the energy required to separate the two graphene sheets in equilibrium to the infinite interlayer distance.
Therefore, the BE is a property at large interlayer distance and is evaluated using KC parameters fitted with no weights, because data points at large interlayer distances now deserve the same weight as data points near the minimum interlayer spacing.
Our BEs from QMC for AB-stacked and AA-stacked bilayer graphene are 21.9 meV and 17.4 meV, while the BEs for these two stacking configurations are reported to be 17.7(9) and 11.5(9) meV in the previous QMC study <cit.>, which are slightly smaller than our BE results.
This discrepancy could result from the small number of QMC data points and less generic fitting function in the previous study, which could significantly affect the binding energy and minimum interlayer spacing.
Our BEs for different stacking registries and potentials are reported in Table <ref>.
|
http://arxiv.org/abs/2307.04878v1 | 20230710195656 | The Impact of Black Hole Scaling Relation Assumptions on the Mass Density of Black Holes | [
"Cayenne Matt",
"Kayhan Gültekin",
"Joseph Simon"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.HE"
] |
The Impact of Black Hole Scaling Relation Assumptions on the Mass Density of Black Holes
=========================================================================================
We examine the effect of supermassive black hole (SMBH) mass scaling relation choice on the inferred SMBH mass population since redshift z ∼ 3. To make robust predictions for the gravitational wave background (GWB) we must have a solid understanding of the underlying SMBH demographics. Using the SDSS and 3D HST+CANDELS surveys for 0 < z < 3 we evaluate the inferred SMBH masses from two SMBH–galaxy scaling relations: M_BH–σ and M_BH–M_bulge. Our SMBH mass functions come directly from stellar mass measurements for M_BH–M_bulge, and indirectly from stellar mass and galaxy radius measurements along with the galaxy mass fundamental plane for M_BH–σ.
We find that there is a substantial difference in predictions especially for z > 1, and this difference increases out to z = 3. In particular we find that using velocity dispersion predicts a greater number of SMBHs with masses greater than 10^9 M_⊙. The GWB that pulsar timing arrays find evidence for is higher in amplitude than expected from GWB predictions which rely on high redshift extrapolations of local SMBH mass-galaxy scaling relations. The difference in SMBH demographics resulting from different scaling relations may be the origin for the mismatch between the signal amplitude and predictions. Generally, our results suggest that a deeper understanding of the potential redshift evolution of these relations is needed if we are to draw significant insight from their predictions at z > 1.
black hole physics – gravitational waves
§ INTRODUCTION
Supermassive black holes (SMBHs) reside in the nuclei of nearly all massive galaxies <cit.>. Through galaxy mergers, these SMBHs can form dual and binary SMBHs <cit.>. In the final stages of their evolution, before coalescence, SMBH binaries lose energy and angular momentum purely through gravitational waves (GW). The combined GW signal from SMBH binaries is expected to be a stochastic background known as the gravitational wave background <cit.>. Though GW detectors such as LIGO, VIRGO, and KAGRA have successfully detected many GW events from stellar mass compact objects <cit.>, the frequency range of GWs emitted by SMBH binaries is far below even the lowest detectable limit for Earth-based detectors. For such GWs, a much longer baseline is needed. To achieve this, pulsar timing arrays <cit.> use high-precision time-of-arrival measurements of millisecond pulsars to measure the change in Earth–pulsar distances for ∼kpc-scale baselines. There are several years-long PTA campaigns, including the North American Nanohertz Observatory for Gravitational Waves <cit.>, the European Pulsar Timing Array <cit.>, the Parkes Pulsar Timing Array <cit.>, the Chinese Pulsar Timing Array <cit.>, the Indian Pulsar Timing Array <cit.>, and the South Africa Pulsar Timing Array <cit.>.
Several PTAs have individually made significant progress towards detecting the GWB with evidence for a GWB with the characteristic quadrupolar signal of GWs <cit.>. Previously, the NANOGrav 12.5-year data <cit.>, while not having sufficient signal-to-noise to see the <cit.> correlation, showed a common red noise process that shared many traits characteristic of the expected GWB. NANOGrav's signal, however, is significantly higher in amplitude than many predictions of the GWB <cit.>. The newest PTA data increase the significance of the high-amplitude GWB with support for characteristic strain amplitude of h_c ∼ 2 × 10^-15 consistent in all of the data sets finding evidence for <cit.> correlations <cit.>. In fact, three of the analyses are inconsistent with h_c ≤ 1 × 10^-15 <cit.>. The discrepancy between high amplitude observed and that expected from SMBH binaries has been explained with exotic theories such as cosmic strings <cit.> and inflationary universe models <cit.>, or extreme parameterizations of our current models <cit.>. This opens the possibility that the explanations for the GWB signal should be revised <cit.>.
Though there are many SMBH properties that influence the emitted GWs, the mass distribution of SMBHs is fundamentally linked to the characteristic strain amplitude of the GWB and may be the most significant contributor to the amplitude we observe. <cit.> noted that the characteristic strain amplitude from an isotropic background of binary SMBHs depends on four key quantities:
(i) the chirp mass of the binary, ℳ^5/3 ≡ M_1 M_2 (M_1 + M_2)^-1/3, where M_1 and M_2 are the masses of the SMBHs in the system with M_1 ≥ M_2; (ii) the frequency of the emitted GWs, f, which is twice the orbital frequency; (iii) the present-day comoving number density of merged remnants, N_0; and (iv) the redshift, z, as
h_c ∼ ℳ^5/6 f^-2/3 N_0^1/2 ⟨(1+z)^-1/6⟩.
Note that the amplitude has the strongest dependence on chirp mass, and so the signal is dominated by the most massive black holes. Below z = 1 the PTA band is dominated by local SMBH binaries, but the GWB amplitude is additionally influenced by galaxies that merged at higher redshifts. SMBH evolution is determined, among other things, by mass and so a higher mass population of SMBHs at z > 1 may reflect a higher redshift evolution, thus the astrophysical history of SMBH mass evolution is encoded in the GWB.
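The scaling above can be turned into a quick relative estimate of how a change in the inferred SMBH mass population moves the background amplitude. The sketch below reads the relation as h_c^2 ∝ N_0 ⟨ℳ^5/3 (1+z)^-1/3⟩ f^-4/3 and drops all prefactors, so only ratios between two candidate populations are meaningful.

import numpy as np

def relative_hc(m1, m2, z, n0, f=1.0):
    """Relative characteristic strain for a population of binaries with
    component masses m1, m2 and redshifts z, merged-remnant density n0,
    evaluated at GW frequency f (arbitrary prefactors dropped)."""
    mchirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
    pop_avg = np.mean(mchirp ** (5.0 / 3.0) * (1.0 + z) ** (-1.0 / 3.0))
    return np.sqrt(n0 * pop_avg) * f ** (-2.0 / 3.0)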
Since direct measurements of SMBH masses are only possible for nearby sources, we are often left to infer masses from properties of their host galaxies <cit.>. There exists a wealth of relations between galaxy properties and the mass of their central black hole, all with varying degrees of scatter <cit.>. Here we focus on two relations in particular: the correlations of SMBH mass with velocity dispersion (σ) and with bulge stellar mass (M_bulge). In the local universe, despite M_BH–σ having lower scatter <cit.>, both relations were found to be remarkably accurate when reproducing known SMBH masses from either stellar mass or velocity dispersion. These scaling relations are based on direct, dynamical mass measurements, which have been shown to be robust. For example, SMBH mass estimates in M87 have previously had discrepancies up to a factor of 2.5 when using stellar kinematics <cit.> versus gas dynamics <cit.>. These are now seen as due to gas filaments <cit.>, which agrees with the mass found by the Event Horizon Telescope collaboration <cit.>.
While there is general agreement in the local universe between SMBH masses predicted from stellar mass and velocity dispersion, it is worth discussing instances where these relations are thought to break down. Though we do not investigate it in this paper, SMBH mass is well-predicted from host luminosity. When investigating SMBH masses of large, luminous, brightest cluster galaxies (BCGs), <cit.> found that M_BH–σ fails to reproduce the extreme masses above M_BH ∼ 3 × 10^9 M_⊙ measured and predicted from M_BH–L. Similarly, <cit.> discuss this same trend, which they call a “saturation” effect, for which not only M_BH–σ but also M_BH–M_bulge under-predicts the highest mass SMBHs in core galaxies. Both relations display this saturation at their high ends, which is not seen in M_BH–L.
We see a strikingly different pattern, however, when considering red nugget galaxies—galaxies with relatively small radii for their masses and high velocity dispersions that are more typical of younger galaxies.
Red nugget galaxies may be representative of the high-redshift galaxy population, possibly because they have avoided mergers for a large portion of their lives <cit.>. One red nugget is NGC 1277, which hosts a SMBH with a mass of (4.9 ± 1.6) × 10^9 M_⊙ <cit.>. NGC 1277's SMBH is over massive compared to the total stellar mass of the galaxy (1.2 × 10^11 M_⊙) and is an outlier in the M_BH–M_bulge relation, which predicts a mass of around (4.9–6.23) × 10^8 M_⊙. However, because of its high velocity dispersion, M_BH–σ reproduces the measured SMBH mass more accurately, predicting a mass of (2.9–3.7) × 10^9 M_⊙, and the dynamical mass lies within the intrinsic scatter of the relation <cit.>. Recently, it has been found that NGC 1277 may have lost the majority of its dark matter, suggesting an alternative evolutionary path <cit.>, but NGC 1277 is not the only galaxy for which σ has been found to be a better predictor of SMBH mass. MRK 1216 is another one of several well studied examples of this type of object which exhibit similar traits <cit.>.
Despite the great promise of the M_BH–σ relation as a SMBH mass predictor, it is resource intensive to measure velocity dispersion at high redshift due to the spectral quality required to resolve the necessary spectral features. To overcome this, the M_BH–M_bulge relation is commonly used because it relates the relatively easily measured bulge stellar mass directly to the SMBH mass. This relationship is well measured within our local universe, but a more accurate mass predictor may be needed for high redshifts (z > 1), where a significant fraction of the GWB signal originates.
To circumvent the spectral limitations on measuring velocity dispersion, in this paper we use the mass fundamental plane (MFP) of galaxies, which links total stellar mass and half-light radius to stellar velocity dispersion. The MFP therefore allows us to infer velocity dispersion for distant galaxies and thus extend the M_BH–σ relationship to higher redshifts. <cit.> investigated the evolution of the relationship between galaxy total stellar mass (M_*) and effective radius (R_eff). They found that galaxy masses do not evolve along the z = 0 M_*–R_eff relation, but from redshift 0 to 3, the effective radii decrease substantially. This evolution of the M_*–R_eff relationship indicates that galaxies start off relatively compact and become more diffuse as they age as a result of mergers, feedback processes, and other galaxy interactions. This change in radius is not incorporated in any way into the M_BH–M_bulge relation. Applying the local M_BH–M_bulge relationship to high-redshift galaxies therefore results in a relatively unchanging SMBH mass population throughout time.
Because of the known evolution of the M_*–R_eff relationship, the lack of evolution in the MFP is not immediately obvious. Velocity dispersions tend to be higher, however, for more compact galaxies, which would suggest that younger galaxies have higher velocity dispersions and therefore higher SMBH masses. This does not mean that black holes decrease in mass, of course, but suggests that black holes grow faster (relatively) than their host galaxies at first. This inference is supported by observations of red nugget galaxies. We therefore investigate how the assumption of SMBH mass galaxy scaling relation affects the inferred SMBH mass population.
The structure of this paper is as follows: In section <ref> we describe the data we used. Section <ref> provides the details of our methods and choices of scaling relations. We present the results of our analysis in section <ref>. We discuss the implications of our results in section <ref> and then summarize our work in section <ref>. Tables of our fit posterior values can be found in the appendix. Throughout this work we adopted a WMAP9 cosmology <cit.> with H_0 = 69.33 km s^-1 Mpc^-1, Ω_b = 0.0472, and Ω_c = 0.2408.
§ DATA
The data we use in this work come from SDSS <cit.> and the 3D-HST+CANDELS survey <cit.>. A summary of the data is presented in the mass–radius plots in Figure <ref>.
§.§ Local Sample from SDSS
<cit.> did not provide mass estimates for galaxies below a redshift of 0.5 so, to supplement this, we compiled a sample of local galaxies with velocity dispersion measurements from the 7th data release of SDSS <cit.> at 0.05 < z < 0.07 (top-left panel in Fig. <ref>). All galaxies were selected from the SDSS Main Galaxy Sample <cit.>, which is ∼95% complete <cit.>. We cross-matched our initial sample with galaxies that had circularized half-light radii and stellar mass estimates from <cit.> and <cit.>, respectively. Quiescent and star-forming galaxies were separated using their u-r and r-z colors, using the criteria in <cit.>. These criteria are nearly identical to those laid out in <cit.>, and we found them to be consistent with other methods of separation based on, e.g., star formation rates. The data were selected for reliability of measurements and completeness of the sample from the SDSS DR7 database. We excluded flagged galaxies using the same criteria detailed in <cit.>. For plotting purposes we include galaxies below log(M_* / M_⊙) = 10.5 which <cit.> removed from their sample entirely. Our sample contains 10,863 galaxies split into 1,241 star-forming and 9,622 quiescent galaxies.
§.§ 0.5 < z < 3 Sample from 3D-HST+CANDELS
For our high-redshift sample (all panels except top-left in Fig. <ref>), we use data from the 3D-HST+CANDELS survey. For this work we infer SMBH mass from stellar mass and velocity dispersion, the latter of which can be calculated from stellar mass and half-light radius. Half-light radii used here are those determined by <cit.>. Half-light radius estimates can differ when measured at one wavelength versus another so we normalized these radii to a rest frame of 5000 Å following equation 2 in <cit.>. We circularized the radii according to R_eff = R_hl q^1/2 where R_hl is the wavelength-corrected half-light radius and q is the axis ratio reported by <cit.>. We also made cuts to the data according to <cit.> and <cit.> based on, e.g., completion limits resulting in a sample that is ≥ 95% complete <cit.>.
Masses for each galaxy were determined by <cit.> using the galaxy SED-fitting code <cit.>. In their work, <cit.> report that the mass-radius relationship evolves as R_eff = 5.6 ( M_* / 5 × 10^9 M_⊙)^0.8 (1 + z)^-1.48 for quiescent galaxies and R_eff = 8.9 ( M_* / 5 × 10^9 M_⊙)^0.2 (1 + z)^-0.75 for star forming galaxies. Because their analysis was performed with different mass estimates, we provide our own fits to the data to demonstrate this evolution. Those interested in the evolution of this relationship should refer to <cit.> for a more rigorous characterization of this relationship. Our final sample consists of 13,232 galaxies from the UDS, GOODS-S, and COSMOS fields. For all galaxies in this sample, <cit.> determined star formation rates from infrared (IR) and ultraviolet (UV) luminosity. We followed their galaxy type selection criteria shown in their figure 5 resulting in a final sample of 11,107 star-forming and 2,125 quiescent galaxies.
§ METHODS
Here we describe how we use the <ref> to infer velocity dispersions for all galaxies in our sample, as well as the two methods of predicting SMBH mass that are our main focus of this paper. The resulting SMBH mass predictions are converted to number density functions, the process for which is detailed at the end of this section.
§.§ Scaling Relations
In this section we give the relations for the MFP, M_BH–σ, and M_BH–M_bulge.
§.§.§ High Redshift Velocity Dispersion
We infer velocity dispersions for our sample using the galaxy MFP; a three-dimensional relation between galaxy stellar mass, half-light radius, and stellar velocity dispersion <cit.>. This relation can be used reliably to predict any of the three properties if the other two are known. Several works in the last decade have investigated both the possibility of an evolution in the MFP and the effect galaxy type may have on the parameterization <cit.>. Now, with large volumes of deep data a picture is emerging where all galaxies lie on one plane that does not evolve <cit.>. In particular, <cit.> recently performed a thorough analysis of the galaxy type dependence and redshift evolution and came to this same conclusion. Motivated by these results we used the MFP described described by
logσ = (log R_eff - β logΣ_⋆ - γ) / α
and
Σ_⋆≡M_* / (2 πR_eff^2),
where α = 1.6287 and β = -0.84 as determined by <cit.> and the offset is γ= 4.482 <cit.>.
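For reference, the MFP prediction can be written as a one-line function; the unit convention is assumed here to be R_eff in kpc, Σ_⋆ in M_⊙ kpc^-2, and σ in km s^-1, matching the quoted coefficients.

import numpy as np

def sigma_from_mfp(m_star, r_eff, alpha=1.6287, beta=-0.84, gamma=4.482):
    """Velocity dispersion from the mass fundamental plane, with m_star the
    total stellar mass and r_eff the circularized half-light radius."""
    sigma_star = m_star / (2.0 * np.pi * r_eff ** 2)  # stellar mass surface density
    log_sigma = (np.log10(r_eff) - beta * np.log10(sigma_star) - gamma) / alpha
    return 10.0 ** log_sigma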
If the MFP is a valid prescription, we should be able to reproduce measured velocity dispersions using the stellar mass and effective radii of each galaxy. We compare the measured velocity dispersions from galaxies in both the SDSS and LEGA-C surveys to those we predict using the MFP. We plot the results of these comparisons in Fig. <ref> for each set of galaxies. We find that our predicted values are consistent with measurements for all galaxy types across both samples (0.1 dex or below), even with scatter introduced (0.16 dex or lower). Because our predictions are able to reproduce the measured values, we can treat the MFP velocity dispersions functionally as measured velocity dispersions. From here on we use σ to indicate the velocity dispersion predicted from the MFP unless otherwise specified.
§.§.§ Supermassive Black Hole Mass
To infer SMBH mass from host galaxy properties we used the relations presented in <cit.> for the M_BH–M_bulge and M_BH–σ scaling relationships, given by
M_BH/10^9 M_⊙=α_1 (M_bulge/10^11M_⊙)^β_1
and
M_BH/10^9 M_⊙=α_2(σ/200 km s^-1)^β_2.
The two relations are well studied in the local universe, but there is a lack of consensus surrounding the evolution (or lack thereof) of either relation beyond nearby galaxies <cit.>. For this work we assumed the local parametrization [α_1, β_1] = [0.49, 1.16] and [α_2, β_2] = [0.309, 4.38] to be non-evolving with redshift. We revisit this assumption in section <ref>. When using mass and radius to predict velocity dispersion, the M_BH–σ relation becomes a function of both bulge mass and radius, therefore including an additional galaxy property in the mass estimation in contrast with M_BH–M_bulge. Because of this consideration of galactic radius, M_BH–σ implicitly incorporates the evolution of the M_*–R_eff relationship with redshift without defining an explicit redshift evolution <cit.>.
Because SMBH mass is derived from host bulge properties, we assigned each star-forming galaxy a bulge mass equal to 40% of its total stellar mass. Our choice of bulge mass fraction has an effect on the degree to which the two relationships disagree, but our overall results do not change when using significantly higher or lower fractions. We also performed our analysis for each galaxy type separately, so results including only quiescent galaxies are not affected by this choice.
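With the quoted parameters, the two mass predictors reduce to short functions; the 40% bulge fraction for star-forming galaxies enters only the stellar-mass route.

import numpy as np

def mbh_from_mbulge(m_bulge, alpha1=0.49, beta1=1.16):
    """M_BH-M_bulge relation: M_BH/1e9 Msun = alpha1 * (M_bulge/1e11 Msun)**beta1."""
    return 1e9 * alpha1 * (np.asarray(m_bulge) / 1e11) ** beta1

def mbh_from_sigma(sigma, alpha2=0.309, beta2=4.38):
    """M_BH-sigma relation: M_BH/1e9 Msun = alpha2 * (sigma / 200 km/s)**beta2."""
    return 1e9 * alpha2 * (np.asarray(sigma) / 200.0) ** beta2

# Example for a star-forming galaxy of total stellar mass m_star (in Msun):
# m_bh = mbh_from_mbulge(0.4 * m_star)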
§.§ Number Density Functions
The stellar mass function (SMF) of galaxies is a useful tool for understanding galaxy formation and evolution. The SMF informs us of the total number of galaxies per unit volume per logarithmic mass interval as a function of stellar mass. Though stellar mass and luminosity are the most commonly discussed, this type of number density function, Φ(X), can be constructed for virtually any galaxy property.
There are several ways of estimating Φ(X), but the most straightforward is Schmidt's 1 / V_max method <cit.>. We calculate the density functions as
V_max,i = (Ω/3) [r(z_max,i)^3 - r(z_min,i)^3]
and
Φ(X) = ∑_i 1/(V_max,i ΔX),
where X represents the property in question, e.g., stellar mass, velocity dispersion, or SMBH mass and V_max , i is the co-moving volume between redshifts z_min , i and z_max , i. The solid angle subtended by the survey is represented by Ω, and ΔX is the width of the bins. This method is functionally similar to a histogram making it computationally efficient and it is robust against bias as long as no clustering is present <cit.>. Given the high completeness of the data sets we use, this is sufficient for our purposes.
Because Φ(X) is a function of redshift, it is common to split the data into narrow redshift bins and fit each independently. We used the survey areas listed in <cit.> to calculate our co-moving volume for each redshift bin.
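A minimal implementation of the 1/V_max estimator is given below; astropy's built-in WMAP9 cosmology is used as a stand-in for the adopted one, and z_min and z_max may be per-galaxy arrays or the edges of a redshift bin.

import numpy as np
from astropy.cosmology import WMAP9

def vmax_number_density(log_x, z_min, z_max, omega_sr, bins):
    """1/V_max estimate of Phi(X) in number per dex per Mpc^3.

    log_x    : log10 of the property (stellar mass, sigma, or SMBH mass) per galaxy
    omega_sr : survey solid angle in steradians
    bins     : bin edges in log10(X)"""
    v_max = (omega_sr / (4.0 * np.pi)) * (
        WMAP9.comoving_volume(z_max) - WMAP9.comoving_volume(z_min)).value  # Mpc^3
    weights = np.ones_like(log_x) / v_max
    counts, _ = np.histogram(log_x, bins=bins, weights=weights)
    return counts / np.diff(bins)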
The number of galaxies within a given volume is expected to undergo an overall decline with increasing redshift and with increasing extremity of the property in question (e.g., very high mass or luminosity). Distributions of Φ(X) of this sort are well described by Schechter functions. The logarithmic form of a “single Schechter”, which we used for all our fitting, is described by
Φ(Y) = ln(10) ϕ_* 10^{(Y - Y_c)(α_s + 1)} exp(-10^{Y - Y_c}),
where Y is the base 10 logarithm of the property in question, i.e. Y = log_10(X), Y_c is the (log) characteristic value of said property, α_s is the slope of the lower power-law, and ϕ_* is density normalization. Especially in the local universe, a “double Schechter” is sometimes used which is simply the sum of two single Schechter functions.
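The fitting form is simple enough to state directly in code; the double Schechter is just the sum of two single components.

import numpy as np

def log_schechter(y, phi_star, y_c, alpha_s):
    """Single Schechter function in its logarithmic form, with y = log10(X)
    and y_c the log of the characteristic value."""
    return (np.log(10.0) * phi_star
            * 10.0 ** ((y - y_c) * (alpha_s + 1.0))
            * np.exp(-10.0 ** (y - y_c)))

def log_double_schechter(y, phi1, yc1, a1, phi2, yc2, a2):
    """Double Schechter: the sum of two single Schechter components."""
    return log_schechter(y, phi1, yc1, a1) + log_schechter(y, phi2, yc2, a2)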
After obtaining values for our stellar mass functions, we compared our estimates to those obtained in <cit.>. We compiled the data into one figure and over plotted our SMF estimates and found that we were in good agreement (Fig. <ref>).
We repeated the same process to produce number density functions for velocity dispersion and SMBH mass predicted from both M_BH–σ and M_BH–M_bulge. Our parameterization for the Schechter fits was found using PyMC <cit.>, a modeling software that uses Markov chain Monte Carlo sampling. The priors we used are listed in Table <ref>. We used four chains with 15,000 total steps, the first 5,000 of which were tuning steps. In all cases, the data were not fitted for values below the completion limits. We determined our completion limits for stellar mass from <cit.> and converted these into SMBH mass completion limits using the M_BH–M_bulge relation. Velocity dispersion completion limits are informed by the aforementioned limits on stellar mass and the completion limits for effective radius used by <cit.>. A more complete breakdown can be found in Table <ref>.
Error estimates were obtained by performing 100 fits to the data where we introduced random scatter into the data based on the errors of the values involved in the fits and the known intrinsic scatter of the relations used for our inferred quantities. Cosmic variance estimates were obtained following the methods outlined in <cit.>. Because accurate determinations of cosmic variance for velocity dispersion and SMBH mass would require a large volume of in-depth measurements for each of these values, an exact estimate does not exist. For these values we approximated the cosmic variance based on the values we calculated for stellar mass.
§ RESULTS
In Figures <ref>, <ref>, <ref>, and <ref> we present the number density functions of galaxy stellar mass, MFP velocity dispersion, and inferred SMBH mass from both the M_BH–M_bulge and M_BH–σ scaling relations.
§.§ Stellar Mass and Velocity Dispersion Functions
Our stellar mass and velocity dispersion function fits to all galaxies are shown in Figures <ref>–<ref>. The stellar mass functions (Figs. <ref> and <ref>) are described here by a double Schechter function at all redshifts. At the highest redshifts the data are well described by a single Schechter, which is consistent with others' results <cit.>, but we chose to fit these with a double Schechter to maintain consistency within our results across all redshifts. There is a general decline in the total number density between the lowest and highest redshifts: the number of galaxies with log(M_* / M_⊙) > 11.5 is 8.3 times higher at z̅ = 0.65 than at z̅ = 2.8. The distribution Φ(M_*) drops off steeply for log(M_* / M_⊙) ≳ 11, but the slope at lower masses is much flatter, with no clear trends across time.
The velocity dispersion functions (Figs. <ref> and <ref>) are parameterized by a single Schechter function across all redshifts. We see an overall decrease in the number density of galaxies as redshift increases. There appears to be a mild change in the slope of the distribution, which is steepest at z̅ = 0.65 and at its shallowest for 1.6 < z̅ < 2.0. This flattening of the curve leads to an apparent broadening of the whole distribution, though we cannot be sure whether the flattening of the values to the left of the completion limits is reliable. Perhaps the most notable result of these fits is the evolution of the characteristic (logarithmic) velocity dispersion, which increases from 1.6 to 1.9 over the entire redshift range. An increase of the characteristic velocity dispersion suggests that galaxy velocity dispersion is increasing with increasing redshift.
The large difference between the results of <cit.> and our functions (Fig. <ref>) has several possible explanations. First, their results consider only quiescent galaxies while ours are for combined galaxy types. Number density functions of separate galaxy types often have different shapes from the combined functions, as we find in this paper and as was found by, e.g., <cit.>. There is also a large gap in cosmic time between their z̅ = 0.07 results and our lowest redshift sample at z̅ = 0.65, a gap that corresponds to approximately 5.2 Gyr. Because we see lower characteristic velocity dispersions with lower redshift, it is possible that the relation evolves in this time. Additionally, <cit.> found an increase in the number of galaxies with high velocity dispersions for z > 0.6, which could indicate an evolution in the intrinsic scatter of the relation they used to infer velocity dispersion. Though they used dynamical mass to infer virial velocity dispersions, which is different from what we do here, a similar scatter evolution could be affecting this difference since we include the measured intrinsic scatter from <cit.>, which was measured for z ∼ 0.8.
§.§ Supermassive Black Hole Mass Functions
We show histograms of resulting distributions of SMBH masses in Figure <ref>. As we look back to earlier times the shape of the histogram of SMBH masses inferred from velocity dispersions flattens out leading to a lower peak, but a much thicker and longer tail than for SMBH masses inferred from stellar mass. These same data are shown in Figure <ref> showing only our quiescent galaxy population. We see the same trends here despite having far fewer galaxies; the high mass tail of the distribution is larger for masses predicted from velocity dispersion than from stellar mass. It is from these same data that we constructed the mass functions for each relationship for star-forming, quiescent, and combined galaxy types.
If our results are to be trusted, they should be independent of survey choice. We can compare CANDELS to the LEGA-C survey for quiescent galaxies at 0.5 ≲ z ≲ 1. In this redshift range, the two surveys have comparable coverage, and even though our results are robust to the choice of bulge fraction, we see these same results even when restricting to quiescent galaxies only. When repeating our analysis on LEGA-C (Fig. <ref>), we get SMBH mass distributions that have all of the same properties we have highlighted. Namely, M_BH–σ predicts a larger number of SMBHs with masses greater than ∼ 10^9 M_⊙ and also extends to higher masses than M_BH–M_bulge. The fact that we find similar trends between both data sets with quiescent galaxies suggests that our results are both reproducible and unbiased by survey choice or bulge stellar mass fraction.
The resulting SMBH mass functions for combined galaxy types, as well as for quiescent and star-forming galaxies separately, are shown in Figures <ref>, <ref>, and <ref>, respectively. Here median fits and errors are presented in the same way as for the stellar mass and velocity dispersion fits. We find that, independent of galaxy type, there are significant differences between the SMBH masses predicted from the M_BH-σ and M_BH-M_bulge relations, especially for redshifts above 1. For all redshift bins higher than z ∼ 1, the M_BH-σ relation predicts a notably higher number density of large (M_BH > 10^9 M_⊙) SMBHs. While both relationships undergo a decrease in total number density with increasing redshift, the balance of their predictions between high and low masses evolves. The number density of the highest mass black holes derived from stellar mass does not change significantly. The slope of the distribution around M_BH∼ 10^8 M_⊙ and higher remains consistent across all snapshots until a slight flattening in the two highest redshift bins. The characteristic logarithmic SMBH mass is also highest at these two times, while it does not follow a noticeable trend in either direction for redshifts below z̅ = 2.5. The characteristic logarithmic SMBH mass for masses derived from velocity dispersion increases from 9.8 to 10.8 over the range of redshifts considered here. This change is related to the similar increase we see in the characteristic velocity dispersion. The highest SMBH masses in this distribution tend towards higher values with increasing redshift, which leads to a growing divergence between the two predictions further back in time.
Especially at z ∼ 3, the distributions of SMBH masses inferred from galaxy stellar mass and from velocity dispersion do not agree. This tension is apparent when considering galaxy types both separately and together, and it is present across at least two different high-redshift samples (Fig. <ref>). The bulk of the distributions overlap (Fig. <ref>), so the two relations suggest similar populations of SMBHs for the majority of galaxies. The amplitude of the GWB, however, is most impacted by the largest SMBHs, where the distributions differ most significantly, so an accurate picture of the high-mass population is necessary. Further study and high-redshift tests of the MFP are needed.
§ DISCUSSION
We derive the distribution of SMBH mass for 0 < z < 3. The masses we used were inferred from either the host bulge stellar mass or velocity dispersion, the latter being inferred from host stellar mass and radius using the MFP. When comparing these mass distributions we find that using MFP velocity dispersion implies a greater number density of SMBHs at the high mass end, particularly for M_BH > 10^9 M_⊙.
Throughout the course of this work we checked our methods against others (Figs. <ref>, <ref>, <ref>) and we were able to consistently reproduce their results and/or measured values. We additionally demonstrated that our results are not limited or biased by our choice of sample. Because higher numbers of high-mass SMBHs are predicted by the M_BH-σ relation even when only considering quiescent galaxies, we can also be confident that our choice of bulge fraction is not the reason for this difference. Additionally, these results are not sensitive to which version of the SMBH mass scaling relationship is used. When comparing to other forms of these relations, such as those determined by <cit.> or <cit.>, we found no significant differences in the respective SMBH mass distributions. Finally, assuming larger values for the intrinsic scatter in the MFP and SMBH mass relations does not change our predicted values unless the scatter is made unphysically large.
Given the known observed evolution of galaxy properties, it is not possible for the z = 0 M_BH-M_bulge and M_BH-σ relations to both be correct and non-evolving at high redshift. There have been observational studies investigating the evolution of black hole scaling relations, with sometimes contradictory results <cit.>. A recent study by <cit.> uses results from HETDEX and takes into account a number of potential observational biases, including the potential selection bias discussed in <cit.>; they find a 0.52 ± 0.14 dex offset between the local relation and the one at z ∼ 2. This alone, however, does not entirely bridge the gap we find at z ∼ 2. Moreover, their results primarily concern SMBHs with masses below 10^9 M_⊙, so their applicability to the population of large SMBHs we discuss here is limited. Very little comparable analysis has been performed for the M_BH-σ relation, though <cit.> found no evolution in it using observational data out to z ∼ 1. Without a high-redshift survey of velocity dispersions for galaxies with known SMBH mass, we have extremely limited insight into how this relation may or may not evolve.
If the observed lack of evolution in the MFP out to redshift 1 is a robust result, then any evolution we infer in the MFP velocity dispersions out to this same redshift should reflect a physical reality. Because we see an increasing difference between the distributions of SMBH masses predicted from bulge mass and from velocity dispersion even below z = 1, it is likely that one (or both) of these scaling relationships evolves with redshift.
We find an inescapable tension between predictions made with the M_BH-σ relation versus the M_BH-M_bulge relation that cannot be otherwise explained given our modest assumptions. This difference in the number density of high-mass SMBHs has several implications, for example for the predicted sizes of galactic cores. Galaxies with more massive central SMBHs have larger cores <cit.>, so using the M_BH-σ relation may predict a population of galaxies with larger cores than using the M_BH-M_bulge relation.
Our results indicate that an analysis similar to <cit.> would point to a larger GWB amplitude when using the M_BH-σ relation. For masses above 10^9 M_⊙ we can make an approximate calculation of the GWB amplitude suggested by these number densities. Following the relation between number density and GWB amplitude given in equation (<ref>), the amplitude depends on number density as h_c ∝ N_0^1/2. Using this, the ratio of the amplitudes predicted from velocity dispersion versus bulge mass equals the square root of the ratio of the SMBH number densities predicted from each relation, i.e.,
h_c(σ)/h_c(M_bulge) = √(N_0(σ)/N_0(M_bulge)).
Using our reported number densities (Table <ref>) we find that using the M_BH-σ relation implies a higher amplitude by a factor of 2.1 on average across 0.5 < z < 3.0.
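A minimal sketch of this ratio estimate is given below; the number densities are hypothetical placeholders standing in for the tabulated N_0 values (which are not reproduced here), since only the square-root scaling matters.

```python
import numpy as np

# Hypothetical number densities (Mpc^-3) of M_BH > 1e9 M_sun black holes per
# redshift bin, inferred from the velocity-dispersion- and bulge-mass-based
# relations.  These are illustrative placeholders, not the paper's values.
N0_sigma = np.array([4.0e-4, 4.6e-4, 5.2e-4])
N0_mbulge = np.array([1.0e-4, 1.1e-4, 1.2e-4])

# h_c scales as N_0^(1/2), so the amplitude ratio is the square root
# of the ratio of number densities.
ratio = np.sqrt(N0_sigma / N0_mbulge)
print(ratio, ratio.mean())
```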
From the 15-year results of NANOGrav's PTA, the offset between the measured signal amplitude and the highest predicted values of the GWB amplitude is at least a factor of 2, and potentially more <cit.>. An in-depth analysis of how our results affect predictions for the GWB will be presented in future work, but the initial estimate we provide here suggests an origin for this difference. It is uncertain at this point whether velocity dispersion or stellar mass is the better SMBH mass indicator. It is clear, however, that further investigation is necessary to understand why these relations differ so greatly.
Future work investigating our findings is necessary. A good test of the MFP would involve obtaining velocity dispersion measurements for a sub-sample of the galaxies in this survey at z > 1; even with a relatively small sample it would be possible to quantify the accuracy of the MFP at z > 1. Measured velocity dispersion estimates are the first step for evaluating the potential evolution of the MFP, but to thoroughly analyze how SMBH mass scaling relations may change with time, dynamical mass estimates at z > 1 are needed. 30-m class telescopes, suitable for high-redshift observations, make this feat a realistic goal and will expand our understanding of how galaxies and their SMBHs evolve <cit.>. Aside from tests of the results we show here, extending our work to include a robust analysis of lower-mass (M_BH < 10^8 M_⊙) black holes will inform our predictions for the Laser Interferometer Space Antenna (LISA) mission, which will be vital in characterizing black hole seed formation. With upcoming missions and the continued refinement of GWB detection efforts, a full picture of the potential evolution of galaxy SMBH scaling relations can emerge.
§ SUMMARY
In this paper we examined the difference between SMBH mass predictions when assuming the local M_BH-M_bulge relation versus the local M_BH-σ relation. To do this we used the three-parameter relationship between galaxy stellar mass, effective radius, and velocity dispersion to infer velocity dispersions for galaxies up to z = 3. We created SMBH mass density functions for all galaxies in our sample for 0.5 < z < 3 and compared how using stellar mass versus MFP velocity dispersion affected the inferred SMBH demographics. We found that the number of SMBHs with masses M_BH > 10^9 M_⊙ differs between these relations, especially for z > 1; in particular, the M_BH-σ relation predicts a greater number of these high-mass SMBHs. Our results suggest that the relationship between SMBH mass and stellar mass and/or velocity dispersion must evolve at high redshift. Assuming the local relations to be constant across time leads to substantial differences when extrapolated beyond z = 0.5, and these differences must be reconciled.
Our results do not tell us which relation is more accurate, and it remains unclear whether one or both relations are evolving. Recent work has found that the stellar mass to SMBH mass relation may have evolved at least since z ∼ 2 <cit.>, but the corresponding evolution has not been investigated for velocity dispersion. Circumstantial evidence from, e.g., red nugget galaxies points toward velocity dispersion being the more accurate predictor of SMBH mass at these higher redshifts <cit.>. Prediction and interpretation of the GWB from PTAs rely heavily on the assumptions made for the SMBH demographics at high redshift. Here we have shown that the choice of scaling relation used to infer high-redshift SMBH masses can lead to meaningfully different demographics. If we are to refine our ability to explore the physics of galaxy and SMBH evolution at z > 1, we must also re-examine how the local scaling relations may evolve.
§ ACKNOWLEDGEMENTS
The authors would like to thank Eric Bell and Rachel Bezanson for their helpful conversations. We additionally thank Anna de Graaff, Joel Leja, and Arjen van der Wel for readily sharing their knowledge and data with us.
CM acknowledges financial support through the University of Michigan’s Rackham Merit Fellowship Program.
JS is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2202388.
We thank the anonymous referee for their insightful comments.
Anishinaabeg gaa bi dinokiiwaad temigad manda Michigan Kichi Kinoomaagegamig. Mdaaswi nshwaaswaak shi mdaaswi shi niizhawaaswi gii-sababoonagak, Ojibweg, Odawaag, minwaa Bodwe’aadamiig wiiba gii-miigwenaa’aa maamoonjiniibina Kichi Kinoomaagegamigoong wi pii-gaa aanjibiigaadeg Kichi-Naakonigewinning, debendang manda aki, mampii Niisaajiwan, gewiinwaa niijaansiwaan ji kinoomaagaazinid. Daapanaming ninda kidwinan, megwaa minwaa gaa bi aankoosejig zhinda akiing minwaa gii-miigwewaad Kichi-Kinoomaagegamigoong aanji-daapinanigaade minwaa mshkowenjigaade.
The University of Michigan is located on the traditional territory of the Anishinaabe people. In 1817, the Ojibwe, Odawa, and Bodewadami Nations made the largest single land transfer to the University of Michigan. This was offered ceremonially as a gift through the Treaty at the Foot of the Rapids so that their children could be educated. Through these words of acknowledgment, their contemporary and ancestral ties to the land and their contributions to the University are renewed and reaffirmed.
§ DATA AVAILABILITY
The data generated through this project will be deposited into Deep Blue Data, the University of Michigan's institutional data repository. Data that we supply but that are based on formatted versions of others' work will include attribution and notices that they are downstream products of others' work.
§ FIT PARAMETERS
The posterior fit parameters for the stellar mass, velocity dispersion, and black hole mass functions are presented in Tables <ref>, <ref>, <ref>, and <ref>. The errors listed are 68% confidence intervals. Because of degeneracy between some of the fit parameters, e.g., ϕ_* and α, the errors reported here are the confidence intervals on a given variable and are not the same as the 68% confidence fits shown by the darker shaded region in each plot.
|
http://arxiv.org/abs/2307.04984v1 | 20230711024736 | Turán number of the odd-ballooning of complete bipartite graphs | ["Xing Peng", "Mengjie Xia"] | math.CO | ["math.CO", "05C35, 05C38"] |
Turán number of the odd-ballooning of complete bipartite graphs
Xing Peng and Mengjie Xia[Center for Pure Mathematics, School of Mathematical Sciences, Anhui University,
Hefei 230601, P. R. China. Email: [email protected]. Supported in part by the National Natural Science Foundation of China
(No. 12071002) and the Anhui Provincial Natural Science Foundation (No. 2208085J22). ]
================================================================================================================================================================================================================================================================================================================================
Given a graph L, the Turán number ex(n,L) is the maximum possible number of edges in an n-vertex L-free graph.
The study of Turán numbers of graphs is a central topic in extremal graph theory.
Although the celebrated Erdős-Stone-Simonovits theorem gives the asymptotic value of ex(n,L) for nonbipartite L, it is challenging in general to determine the exact value of ex(n,L) for χ(L) ≥ 3.
The odd-ballooning of H is a graph such that each edge of H is replaced by an odd cycle and all new vertices of odd cycles are distinct. Here the lengths of the odd cycles are not necessarily equal. The exact value of the Turán number of the odd-ballooning of H was previously known for H being a cycle, a path, a tree (with some assumptions), and K_2,3. In this paper, we obtain the exact value of the Turán number of the odd-ballooning of K_s,t with 2≤ s ≤ t, where (s,t) ∉{(2,2),(2,3)} and each odd cycle has length at least five.
Keywords: Turán number, odd-ballooning of graphs, decomposition family.
§ INTRODUCTION
Let ł be a family of graphs. A graph G is ł-free if G does not contain any L ∈ł as a subgraph. The Turán number ex(n,ł) is the maximum number of edges in an n-vertex ł-free graph. If an n-vertex ł-free graph G contains exactly ex(n,ł) edges, then G is called an extremal graph for ł. The set of extremal graphs with n vertices is denoted by EX(n,ł).
For the case where ł contains only one graph L, we write ex(n,L) and EX(n,L) for the Turán number of L and the set of extremal graphs for L, respectively. The well-known result by Mantel asserts that ex(n,K_3)=⌊ n^2/4⌋ and the unique extremal graph is the balanced complete bipartite graph. Turán's theorem <cit.> gives the value of ex(n,K_r+1) for all r ≥ 2. Moreover,
EX(n,K_r+1) contains only the balanced complete r-partite graph, which is known as the Turán graph T_r(n). The number of edges in the Turán graph is denoted by t_r(n). Turán's theorem is considered the origin of extremal graph theory. For the Turán number of general graphs, the Erdős-Stone-Simonovits theorem <cit.> gives that
ex(n, L) = (1 - 1/(χ(L)-1)) \binom{n}{2}+o(n^2).
Here χ(L) is the chromatic number of L. This theorem is one of the cornerstones of extremal graph theory.
For χ(L) ≥ 3, the asymptotic value of ex(n, L) is given by the Erdős-Stone-Simonovits theorem. Thus it is meaningful to determine the exact value of ex(n, L) for such graphs, which is quite challenging in general. The notion of the decomposition family introduced by Simonovits <cit.> turns out to be very useful; see, for example, <cit.>. Given two graphs G and H, the join G ⊗ H is the graph obtained from the vertex-disjoint union of G and H by connecting each vertex of G to each vertex of H.
Given a graph L with χ(L)=p+1,
the decomposition family ℳ(L) is the set of minimal graphs M such that L⊂ (M∪K̅_t)⊗ T_p-1((p-1)t), where t=t(L) is a constant and K̅_t denotes the empty graph on t vertices.
Roughly speaking, M ∈ℳ(L) if the graph obtained from adding a copy of M (but not any of its proper subgraphs) into a class of T_p(n) with n large enough contains L as a subgraph. We remark that the decomposition family ℳ(L) always contains bipartite graphs.
The following inequality can be checked easily:
ex(n,L) ≥ t_p(n)+ex(⌈ n/p⌉, ℳ(L)).
To see this, let G be a graph obtained from the Turán graph T_p(n) by adding a graph from EX(⌈ n/p⌉, ℳ(L)) to a largest part of T_p(n). Thus e(G)=t_p(n)+ex(⌈ n/p⌉, ℳ(L)) and G is L-free by the definition of ℳ(L). Surprisingly, the above construction indeed gives the true value of ex(n,L) for many graphs, i.e.,
ex(n,L)=t_p(n)+ex(⌈ n/p⌉, ℳ(L)). Before we state examples, we introduce the following definitions and a related result. A matching M_k is a set of k disjoint edges. For a graph G, the matching number ν(G) is the number of edges in a maximum matching of G. For positive integers ν and Δ, let
f(ν,Δ)=max{ e(G): ν(G) ≤ν, Δ(G) ≤Δ}.
The following result determines f(ν,Δ) exactly.
f(ν,Δ)=νΔ+⌊Δ/2⌋⌊ν/⌈Δ/2 ⌉⌋≤νΔ+ν.
A special case where ν=Δ=k-1 was first proved by Abbott, Hanson, and Sauer <cit.> as follows:
f(k-1, k-1)= k^2-k if k is odd, and
f(k-1, k-1)= k^2-(3/2)k if k is even.
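As a quick sanity check (ours, not part of the paper), the short script below evaluates the displayed formula for f(ν,Δ) and confirms that it reduces to the two-case expression for f(k-1,k-1).

```python
def f(nu, delta):
    """nu*delta + floor(delta/2) * floor(nu / ceil(delta/2)), written with integer ops."""
    return nu * delta + (delta // 2) * (nu // ((delta + 1) // 2))

def f_diagonal(k):
    """The two-case expression for f(k-1, k-1)."""
    return k * k - k if k % 2 == 1 else k * k - (3 * k) // 2

for k in range(2, 200):
    assert f(k - 1, k - 1) == f_diagonal(k)
print("f(k-1, k-1) matches the two-case formula for 2 <= k < 200")
```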
Let F_k be the graph which consists of k triangles sharing a common vertex.
For k ≥ 1 and n ≥ 50k^2,
ex(n,F_k)=t_2(n)+f(k-1, k-1).
One can see that ℳ(F_k)={M_k,K_1,k}, where K_1,k is a star with k+1 vertices. Moreover, ex(⌈ n/2 ⌉, {M_k,K_1,k})=f(k-1, k-1). Thus equality holds in (<ref>)
for F_k.
Given a graph G, let kG denote the graph consisting of k vertex disjoint copies of G.
As K_3 is a clique, Theorem <ref> was generalized as follows.
For any p ≥ 2 and k ≥ 1, if n ≥ 16 k^3(p+1)^8, then
ex(n, K_1 ⊗ k K_p)=t_p(n)+f(k-1, k-1).
It is clear that F_k is a special case of K_1 ⊗ k K_p where p=2. One can observe that ℳ(K_1 ⊗ k K_p)={M_k,K_1,k}, and then equality also holds in (<ref>) for K_1 ⊗ k K_p.
Motivated by this result, Liu <cit.> introduced the concept of the edge blow-up of graphs. For a graph G, the edge blow-up G^p+1 is the graph obtained from G such that each edge is replaced by a clique with p+1 vertices and all new vertices for different cliques are distinct. One can see F_k=K_1,k^3 and K_1 ⊗ k K_p=K_1,k^p+1. So far, the Turán numbers of the edge blow-ups of many families of graphs are known, for example trees, cycles, keyrings, cliques K_r with p ≥ r+1, and complete bipartite graphs K_s,t with p ≥ 3; see <cit.>. In general, a remarkable result by Yuan <cit.> gives the range of ex(n,G^p+1) for p ≥χ(G)+1.
If one views K_3 as an odd cycle, then Theorem <ref> can be generalized in another way. Let G be a graph. The odd-ballooning of G is a graph obtained from G such that each edge of G is replaced by an odd cycle and all new vertices for different odd cycles are distinct. Clearly, F_k is the odd-ballooning of K_1,k in which all odd cycles are triangles. Notice that it is only meaningful to consider the odd-ballooning of bipartite graphs, and stars are among the simplest bipartite graphs.
Hou, Qiu, and Liu <cit.> first studied the Turán number of the odd-ballooning of K_1,k in which the length of all odd cycles is the same and is at least five. Later, they <cit.> considered the general case in which triangles are allowed. Zhu, Kang, and Shan <cit.> determined the Turán number of the odd-ballooning of paths and cycles.
Recently, Zhu and Chen <cit.> obtained the Turán number of the odd-ballooning of trees under some assumptions. It is nice that previous results for paths and stars are special cases of the result by Zhu and Chen <cit.>.
As the Turán number of the odd-ballooning of a cycle is known, the next step is to study this problem for bipartite graphs with many cycles. Natural candidates are complete bipartite graphs. Note that K_2,2 is C_4, so the simplest open case is K_2,3; this case was solved by Yan <cit.> previously.
The goal of this paper is to extend this result to all complete bipartite graphs.
To state our result, we need to define several graphs. Let H be the graph obtained from K_t-1,t-1 by removing a matching M_t-2 (see Figure <ref>), where M_t-2={u_2v_2,…,u_t-1v_t-1}. Note that H is P_4 for t=3.
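As an aside (ours, not part of the paper), the brute-force check below constructs H for small t and verifies that e(H)=t^2-3t+3 and that H contains no member of {K_1,a∪ M_t-a: 0 ≤ a ≤ t}; the latter is the property of H invoked later in the lower-bound argument. It assumes the networkx package is available.

```python
from itertools import combinations
import networkx as nx

def build_H(t):
    """K_{t-1,t-1} on parts {v_1,...,v_{t-1}}, {u_1,...,u_{t-1}}
    minus the matching {u_i v_i : 2 <= i <= t-1}."""
    G = nx.Graph()
    for i in range(1, t):
        for j in range(1, t):
            if not (i == j and i >= 2):
                G.add_edge(("v", i), ("u", j))
    return G

def matching_number(G):
    return len(nx.max_weight_matching(G, maxcardinality=True))

def contains_star_plus_matching(G, a, m):
    """Does G contain vertex-disjoint copies of K_{1,a} and M_m?"""
    if a == 0:
        return matching_number(G) >= m
    for c in G:
        for leaves in combinations(G[c], a):
            rest = G.copy()
            rest.remove_nodes_from((c,) + leaves)
            if matching_number(rest) >= m:
                return True
    return False

for t in range(3, 7):
    H = build_H(t)
    assert H.number_of_edges() == t * t - 3 * t + 3
    assert all(not contains_star_plus_matching(H, a, t - a) for a in range(t + 1))
print("H is {K_{1,a} U M_{t-a}}-free for t = 3,...,6")
```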
For 2 ≤ s ≤ t, let G_s,t be the graph obtained from T_2(n-s+1) ⊗ K_s-1 by embedding H into one class of T_2(n-s+1). Similarly, let G_3,3' be the graph obtained from T_2(n-2) ⊗ K_2 by embedding a triangle into one class of T_2(n-2). The following theorem is our main result.
Let F_s,t be the odd-ballooning of K_s,t with t ≥ s ≥ 2, where (s,t) ∉{(2,2),(2,3)} and each odd cycle has length at least 5. Then for n large enough,
ex(n,F_s,t)=⌈n-s+1/2⌉⌊n-s+1/2⌋+(s-1)(n-s+1)+\binom{s-1}{2}+t^2-3t+3.
Moreover, G_s,t is the only extremal graph for t ≥ 4. For t=3, there are at least two extremal graphs G_3,3 and G'_3,3.
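As a consistency check of the construction (ours, not part of the paper), the sketch below recomputes e(G_s,t) piece by piece from its definition, using e(H)=(t-1)^2-(t-2)=t^2-3t+3, and compares it with the closed form above.

```python
from math import comb

def edges_of_construction(n, s, t):
    """e(G_{s,t}): T_2(n-s+1), its join with K_{s-1}, the edges inside K_{s-1},
    and the embedded copy of H."""
    m = n - s + 1
    e_T2 = (m // 2) * (m - m // 2)    # balanced complete bipartite graph T_2(m)
    e_join = (s - 1) * m              # join edges between K_{s-1} and T_2(m)
    e_Ks1 = comb(s - 1, 2)            # edges inside K_{s-1}
    e_H = (t - 1) ** 2 - (t - 2)      # K_{t-1,t-1} minus a matching M_{t-2}
    return e_T2 + e_join + e_Ks1 + e_H

def stated_formula(n, s, t):
    m = n - s + 1
    return (m // 2) * (m - m // 2) + (s - 1) * m + comb(s - 1, 2) + t * t - 3 * t + 3

# This is a purely arithmetic identity, so the excluded pairs (2,2), (2,3)
# play no role here; n is taken large so that H fits in one part.
for s in range(2, 7):
    for t in range(max(s, 4), 9):
        for n in range(100, 120):
            assert edges_of_construction(n, s, t) == stated_formula(n, s, t)
print("construction edge count matches the stated formula on the tested range")
```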
The notation in this paper is standard. For a graph G and a vertex v ∈ V(G), let N_G(v)={u: u is adjacent to v} be the neighborhood of v and d_G(v)=|N_G(v)| be the degree of v. If X ⊂ V(G), then let d_G(v,X)=|N_G(v) ∩ X| denote the number of neighbors of v in X. Additionally, G[X] is the subgraph induced by X. Let e(X) and e̅(X) be the number of edges and non-edges in X, respectively. If X and Y are disjoint subsets of V(G), then e(X,Y) and e̅(X,Y) are the number of edges and non-edges between X and Y, respectively. If e(X,Y)=|X||Y|, then we say X is completely adjacent to Y.
We use uv to denote an edge. If u and v are not adjacent, then we use u ≁v to denote it.
The rest of this paper is organized as follows. In Section 2, we will recall a few results and prove some lemmas. In Section 3, we will present the proof of Theorem <ref>.
§ PRELIMINARIES
The following definition of symmetric graphs was introduced by Simonovits.
Let T_1 and T_2 be connected subgraphs of G. They are called symmetric in G if either T_1=T_2 or:
(1) T_1 ∩ T_2=∅; and
(2) (x, y) ∉ G if x ∈ T_1, y ∈ T_2; and
(3) there exists an isomorphism ψ_2: T_1 → T_2 such that for every x ∈ T_1 and u ∈ G-T_1-T_2, x is joined to u if and only if ψ_2(x) is joined to u.
Note that T_1, …, T_k are symmetric if for every 1 ≤ i<j ≤ k, graphs T_i and T_j are symmetric.
We also need to define a special family of graphs.
Let 𝒟(n, p, r) be the family of n-vertex graphs G satisfying the following symmetric conditions:
(1) It is possible to omit at most r vertices of G such that the remaining graph G^' is a join of graphs of almost equal order, i.e. G^'=⊗_i=1^p G^i where |V(G^i)|=n_i and |n_i-n / p| ≤ r for any i ∈[p]. The vertices in V(G) \ V(G^') are called the exceptional vertices.
(2) For every i ∈[p], there exist connected graphs H_i such that G^i=k_i H_i where k_i=n_i /|H_i| and |V(H_i)| ≤ r. Moreover, any two copies H_i^j and H_i^ℓ in G^i(1 ≤ j<ℓ≤ k_i) are symmetric subgraphs of G.
Our proof relies on the following theorem by Simonovits.
For a given graph F with χ(F)=p+1 ≥ 3, if (F) contains a linear forest, then there exist r=r(F) and n_0=n_0(r) such that 𝒟(n, p, r) contains an extremal graph for F and n ≥ n_0. Furthermore, if this is the only extremal graph in 𝒟(n, p, r), then it is the unique extremal graph for every sufficiently large n.
Let F be a graph with chromatic number t≥ 3. If G is an extremal graph for F with n large enough, then δ(G)= ((t-2)/(t-1) )n+o(n).
We conclude this section with two more lemmas.
Let ℳ(F_s,t) be the decomposition family of F_s,t. Then
{ K_s,t,K_1,p_1∪ K_1,p_2∪⋯∪ K_1,p_s∪ M_q }⊂ℳ(F_s,t),
where ∑_i=1^s p_i+q=st and 0 ≤ p_i ≤ t for 1 ≤ i ≤ s.
Proof: Let A and B be the two classes of K_s,t. For each a ∈ A and b ∈ B, let ℓ_a,b≥ 5 be the length of the odd cycle in F_s,t associated with a and b. As a complete bipartite graph does not contain any odd cycles, if M ∈ℳ(F_s,t), then M contains at least one edge of each odd cycle and |E(M)| ≥ st.
Given T_2(n) with n large, put K_1,p_1∪ K_1,p_2∪⋯∪ K_1,p_s∪ M_q in one class.
View the centers of the s stars as the vertices of A and place the vertices of B in the other class arbitrarily. Then for each a ∈ A and b ∈ B, we can get an odd cycle C_a,b either by including an edge incident with a or by taking an edge from M_q. In addition, picking extra vertices appropriately, we can ensure that the length of C_a,b equals ℓ_a,b. This graph is minimal as it contains exactly st edges. Similarly, we are able to show K_s,t∈ℳ(F_s,t). □
Note that if p_i=0 for each 1 ≤ i ≤ s, then K_1,p_1∪ K_1,p_2∪⋯∪ K_1,p_s∪ M_q is the matching M_st. Similarly, we can determine the decomposition family of F_1,t.
Let ℳ(F_1,t) be the decomposition family of F_1,t. Then
ℳ(F_1,t)={K_1,a∪ M_t-a: 0 ≤ a ≤ t}.
Proof: We have shown that each M ∈ℳ(F_1,t) contains at least one edge of each odd cycle. Since M is minimal, each odd cycle contributes exactly one edge to E(M). Note that edges that are incident with the center of K_1,a span a star and other edges form a matching. Meanwhile, one can find F_1,t if one class of T_2(n) contains K_1,a∪ M_t-a. The lemma is proved. □
Remark 1: From the proof of Lemma <ref>, one can see the assumption ℓ_a,b≥ 5 is crucial. Actually, if one can prove the theorem by assuming ℓ_a,b=5 for each a ∈ A and b ∈ B, then the proof also works for the case where ℓ_a,b≥ 5 and the length of odd cycles may be different.
§ PROOF OF THEOREM <REF>
§.§ Proof of the lower bound
We start to prove an auxiliary result. Recall F_s,t is the odd-ballooning of K_s,t and G_s,t is the graph obtained from T_2(n-s+1) ⊗ K_s-1 by embedding H into one class of T_2(n-s+1). In particular, G_1,t is the graph obtained from T_2(n) by embedding H into one class.
For any 1 ≤ a ≤ (t+1)/2, the graph G_1,t does not contain F_a,t+1-a as a subgraph, where all odd cycles have length at least five.
Proof: Assume V(K_a,t+1-a)=A ∪ B, where |A|=a and |B|=t+1-a.
Let L ∪ R be the partition of G_1,t such that H ⊂ L. In addition, set L'={v_1,…,v_t-1} and R'=V(H) ∖ L'. Suppose that F_a,t+1-a is a subgraph of G_1,t for some a.
Note that there are no odd cycles in a bipartite graph. Thus each odd cycle in F_a,t+1-a must contain an edge in H. In addition, all new vertices through the operation of odd-ballooning are distinct. There are two cases.
Case 1: A ∩ L ≠∅. If A ∩ V(H) ≠∅, then we may assume v_1 ∈ A for a moment. As A and B are completely adjacent in F_a,t+1-a, one may assume B=B' ∪ B”, where B' ⊂ R' and B”⊂ R.
We begin with the case where both B' and B” are not empty. This implies that A ⊂ L', say v_1,…,v_a.
Let b'=|B'|.
As there are t+1-a odd cycles associated with v_1 and each of them contains an edge in H, there is a (t+1-a-b')-set E_v_1 of edges in H that are associated with v_1.
As H is bipartite, let B_1 ⊂ R' ∖ B' be the set which contains exactly one endpoint of edges in E_v_1. Note that vertices in B_1 are new in the operation of odd-ballooning.
It is clear that |B_1|=t+1-a-b'. For each 2 ≤ i ≤ a, if we repeat the analysis above for v_i, then there exist a subset B_i ∈ R'∖ B' associated with v_i such that B_i and B_j are pairwise disjoint for 1 ≤ i <j ≤ a. Here vertices from B_i and B_j are new and of course are distinct. Therefore,
t-1=|R'| ≥ |B' ∪ B_1 ∪⋯∪ B_a|
=|B'|+|B_1|+⋯+|B_a|
=t+1-a+|B_2|+⋯+|B_a|
≥ t+1-a+a-1=t.
This is a contradiction.
If B” is empty, then we consider locations of vertices of A. If A ∩ R=∅, then A ⊂ L', say A={v_1,…,v_a}. Then {v_2,…,v_a} and B form a K_a-1,t+1-a which is a subgraph of H-v_1. The definition of H gives that each v_i (here 2 ≤ i ≤ t-1) has a unique non-neighbor u_i in R'. Furthermore, u_i and u_j are distinct. Therefore, vertices from {v_2,…,v_a} have at most t-1-(a-1)=t-a common neighbors in R', a contradiction. If there is a vertex r ∈ A ∩ R, then there is a (t+1-a)-set E_r of edges from H associated with r as there are t+1-a odd cycles containing r in F_a,t+1-a. Thus there is a (t+1-a)-subset L_r' ⊂ L' which consists of exactly one endpoint of each edge from E_r. Note that vertices from L_r' are new with respect to the operation of odd-ballooning. As the assumptions of a and |L'|=t-1, there is only one such vertex. That is A ∖ r ⊂ L'. Note that L_r' and A ∖ r are disjoint. It holds that
t-1=|L'| ≥ |(A∖ r) ∪ L_r'|=|A∖ r|+|L_r'|=a-1+t+1-a=t,
a contradiction.
If B' is empty, then A ⊂ L. Reusing the argument above, for each vertex a ∈ A, there is a (t+1-a)-set V_a ⊂ V(H) such that vertices from V_a are new with respect to odd-ballooning. Thus V_a and V_a' are disjoint for a ≠ a' ∈ A. Recall that we assume v_1 ∈ A. Thus v_1 ∪_a ∈ A V_a ⊂ V(H).
It follows that 2(t-1) ≥ a(t+1-a)+1. Let f(a)=-a^2+(t+1)a-2t+3. Then f(a) is concave down. As a ≤ (t+1)/2, one can check that f(a)>0 for 2 ≤ a ≤ (t+1)/2, which contradicts the inequality above. For the case of a=1, the graph H must contain M_p ∪ K_1,t-p for some 0 ≤ p ≤ t as a subgraph by Lemma <ref>, which is impossible by the definition of H.
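For the record (this numerical check is ours and not part of the proof), the positivity of f(a) on the relevant integer range is easy to confirm:

```python
# f(a) = -a^2 + (t+1)a - 2t + 3 should be strictly positive for all integers
# 2 <= a <= (t+1)/2, which contradicts 2(t-1) >= a(t+1-a) + 1, i.e. f(a) <= 0.
def f(a, t):
    return -a * a + (t + 1) * a - 2 * t + 3

for t in range(3, 500):
    for a in range(2, (t + 1) // 2 + 1):
        assert f(a, t) > 0
print("f(a) > 0 for all integers 2 <= a <= (t+1)/2 and 3 <= t < 500")
```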
To prove the case where A ∩ V(H)≠∅, it suffices to consider the case of v_2 ∈ A by the symmetry of vertices in H. The proof runs the same lines as above and it is skipped here.
To complete the proof for Case 1, it remains to consider the case where A ∩ V(H)=∅. Note that A ⊂ L∖ V(H) and B ⊂ R. For each a ∈ A, if we let E_a ⊂ E(H) be the set of edges which consists of one edge from each odd cycle associated with a in F_a,t+1-a, then E_a is a matching with t+1-a edges. Moreover, E_a and E_a' are disjoint for a ≠ a' ∈ A. This is impossible by the definition of H. We complete the proof for Case 1.
Case 2: A ∩ L=∅. Equivalently, A ⊂ R and B ⊂ L. By the symmetry of A and B, this case can be proved by repeating the one for Case 1. □
We are ready to prove the lower bound for ex(n,F_s,t).
Proof of the lower bound: For positive integers 1 ≤ k ≤ t, we apply the induction on k to show G_k,t is F_k,t-free. The base case where k=1 follows from Lemma <ref> in which a=1. Assume it is true for small k. To show the induction step, let A and B be the two classes of K_k,t, where |A|=k and |B|=t. The vertex set of K_k-1 in G_k,t is denoted by W. We view A and B as subsets of V(F_k,t). Suppose F_k,t is a subgraph of G_k,t.
If W ∩ V(F_k,t)=∅, then it means that F_k,t is a subgraph of G_k,t-W. Note that G_k,t-W is a subgraph of G_1,t and F_1,t⊂ F_k,t, then it must be the case where F_1,t⊂ G_1,t. This is a contradiction by Lemma <ref>. Therefore, W ∩ V(F_k,t) ≠∅.
If there is a vertex v ∈ V(F_k,t)∖ B such that v ∈ W, then F_k,t-v ⊂ G_k,t-v.
Note that F_k,t-v contains F_k-1,t as a subgraph no matter v ∈ A or v is a new vertex with respect to odd-ballooning. Thus G_k,t-v contains F_k-1,t as a subgraph. However, this is impossible by the induction hypothesis and the observation G_k,t-v ⊂ G_k-1,t.
It is left to consider the case where W ∩ F_k,t⊂ B.
The assumption F_k,t⊂ G_k,t implies that F_k,t-W ⊂ G_k,t-W.
Note that |W|=k-1 and W ∩ F_k,t⊂ B. It follows that F_k,t-W contains F_k,t-k+1 as a subgraph. Note that G_k,t-W is a subgraph of G_1,t. We get that F_k,t-k+1 is a subgraph of G_1,t. This is a contradiction to Lemma <ref>. There is a contradiction in each case and the proof for the induction step is complete. Therefore, G_s,t does not contain F_s,t as a subgraph.
As G_n is an extremal graph, we get e(G_n) ≥ e(G_s,t) and the lower bound for e(G_n) follows.
§.§ Proof of the upper bound
Observe that χ(F_s,t)=3. Lemma <ref> gives that ℳ(F_s,t) contains a matching. By Theorem <ref>, 𝒟(n,2,r) contains an extremal graph for F_s,t, say G_n, provided n is large enough. Here r is a constant.
Assume V(G_n)=A_1 ∪ A_2 ∪ R, where A_1 and A_2 form a complete bipartite graph and R is the set of exceptional vertices. By the definition of D(n,2,r), for i ∈{1,2}, the subgraph G[A_i]=k_iH_i. If H_i is nontrivial for some i, then k_i ≥ |A_i|/r ≥ st as |V(H_i)| ≤ r. This indicates that M_st⊂ G[A_i], a contradiction to Lemma <ref>. Therefore, both A_1 and A_2 are independent sets. Recall the definition of symmetric graphs. For each 1 ≤ i ≤ 2 and each vertex v ∈ R, we get either v is adjacent to all vertices in A_i or v has no neighbors in A_i. For 1 ≤ i ≤ 2, we define
B_i={v ∈ R: v is adjacent to all vertices in A_3-i but to no vertex in A_i}.
Similarly, let
W={v ∈ R: v is adjacent to all vertices in A_1 ∪ A_2}
and
W'={v ∈ R: v has no neighbors in A_1 ∪ A_2}.
Apparently, R=B_1 ∪ B_2 ∪ W ∪ W' is a partition of R. If W'≠∅, then for each vertex v ∈ W', we have d_G_n(v) ≤ r
as N_G_n(v) ⊂ R by the definition of W'. As r is a constant, this is a contradiction to Theorem <ref>. Thus W'=∅.
Recall F_1,t is the odd-ballooning of K_1,t. Let T be the set of t leaves of K_1,t.
The graph G_n-W does not contain F_1,t such that either T ⊂ A_1 or T ⊂ A_2.
Proof: Note that each vertex w ∈ W is adjacent to all vertices of A_1 ∪ A_2. Suppose that G_n-W contains a copy of F_1,t such that T ⊂ A_1. Note that W ∪ T forms a K_s-1,t. By the definition of W, we get that F_s-1,t is a subgraph of G_n. Together with the copy of F_1,t, this yields an F_s,t in G_n, a contradiction. □
Let ł={K_1,p∪ M_t-p: 0 ≤ p ≤ t}. To make the notation simple, we will use B_1 and B_2 to denote subgraphs of G_n induced by B_1 and B_2 respectively.
Both B_1 and B_2 are ł-free.
Proof: Suppose that B_1 contains K_1,p∪ M_t-p for some 0 ≤ p ≤ t. Consider the subgraph induced by A_1∪ A_2 ∪ B_1. Observe that A_1 ∪ B_1 and A_2 form a complete bipartite graph. As K_1,p∪ M_t-p∈ℳ(F_1,t), the subgraph induced by A_1∪ A_2 ∪ B_1 contains a copy of F_1,t, where we can choose t vertices from A_2 as the leaves of K_1,t, i.e., T ⊂ A_2. This contradicts Claim <ref>. We can show that B_2 is ł-free similarly. □
Let ν_1=ν(B_1) and ν_2=ν(B_2).
ν_1+ν_2 ≤ t-1.
The proof is similar to that of the previous claim and is omitted here.
|W|=s-1.
Proof: Notice that W ∪ A_1 and A_2 form a complete bipartite graph. As K_s,t∈ℳ(F_s,t), we get |W| ≤ s-1. Suppose that |W|=w ≤ s-2.
Note that
e(G_n) ≤⌊n-w/2⌋⌈n-w/2⌉+e(B_1)+e(B_2)+∑_r ∈ W d_G_n(r)
≤⌊n-w/2⌋⌈n-w/2⌉+\binom{|B_1|}{2}+\binom{|B_2|}{2}+(n-1)w
≤⌊n-w/2⌋⌈n-w/2⌉+\binom{|B_1|+|B_2|}{2}+(n-1)w
≤⌊n-w/2⌋⌈n-w/2⌉+\binom{r}{2}+(n-1)w
≤⌊n-s+1/2⌋⌈n-s+1/2⌉+n(s-1-w)/2+\binom{r}{2}+(n-1)w
=⌊n-s+1/2⌋⌈n-s+1/2⌉+n(s-1-w)/2+\binom{r}{2}+(n-1)(s-1)+(n-1)(w-(s-1))
< ⌊n-s+1/2⌋⌈n-s+1/2⌉ +(s-1)(n-s+1)+\binom{s-1}{2}+t^2-3t+3,
for the last step, we note that r and t are constants, w ≤ s-2, and n is large enough. This is a contradiction to the lower bound for e(G_n) and the claim is proved. □
Assume that M_p⊂ G_n[B_i] and there is a vertex v ∈ B_3-i with at least q neighbors in B_3-i, where i ∈{1,2}, 1 ≤ p,q ≤ t-1 and p+q ≥ t. Then v is adjacent to neither endpoint of at least p+q-t+1 edges from M_p. Equivalently, v has at least 2(p+q-t+1) non-neighbors in B_i.
Proof: Suppose that M_p ⊂ B_2 and v ∈ B_1. Let M_p={x_1y_1,…,x_py_p} and M'={x_iy_i: 1 ≤ i ≤ p, either vx_i ∈ E(G_n) or vy_i ∈ E(G_n)}. Note that K_1,q⊂ B_1 with the center v. We claim |M'| ≤ t-1-q. Otherwise, suppose that {x_1y_1,…,x_t-qy_t-q}⊆ M'. Consider the subgraph induced by A_1 ∪ A_2 ∪ B_1 ∪ B_2. Fix a t-subset T of A_2. We can find the F_1,t as follows. Actually, to get an odd cycle, we include an edge from the star K_1,q and M' one by one. For an edge x_iy_i ∈ M', if assume vx_i is an edge and the length of the odd cycle associated with x_iy_i is 2k+1, then we can find an odd cycle vx_iy_ia_1a_2⋯ a_2k-2v, here a_j∈ A_1 for odd j, a_j ∈ A_2 for even j, and a_2k-2∈ T. This is a contradiction to Claim <ref> and the claim is proved. □
Similarly, we can show the following variant and the proof is omitted here.
Let v ∈ B_i and let ν_i' be the matching number of the subgraph induced by B_i- (v ∪ N_B_i(v)). If d_B_i(v)+ν'_i=t-1, then no neighbor of v in B_3-i is incident with an edge of B_3-i.
e(B_1 ∪ B_2) ≥ |B_1||B_2|+t^2-3t+3.
Proof: Recall the lower bound for e(G_n). On the one hand,
e(G_n-W) ≥⌊n-s+1/2⌋⌈n-s+1/2⌉+t^2-3t+3.
On the other hand,
e(G_n-W) =|A_1 ∪ B_1||A_2|+|A_2 ∪ B_2||A_1|+e(B_1 ∪ B_2)
=|A_1 ∪ B_1||A_2 ∪ B_2|-|B_1||B_2|+e(B_1 ∪ B_2)
≤⌊n-s+1/2⌋⌈n-s+1/2⌉-|B_1||B_2|+e(B_1 ∪ B_2).
The desired lower bound for e(B_1 ∪ B_2) follows.
Assume that both B_1 and B_2 contain edges. Let ν_1=ν(B_1) and ν_2=ν(B_2) with ν_1 ≥ν_2 and ν_1+ν_2 ≤ t-1. If either (ν_1,ν_2)=(t-2,1) with t ≥ 5 or t ∈{3,4}, then e(B_1∪ B_2) <|B_1||B_2|+t^2-3t+3.
Proof: Let E' be the set of non-edges between B_1 and B_2.
The proof is split into the following cases.
Case 1: (ν_1,ν_2)=(t-2,1) and t ≥ 5.
If Δ(B_1)=t-1, then let v be a vertex with the maximum degree and N_B_1(v)={v_1,…,v_t-1}. By Claim <ref>, B_1-{v,v_1,…,v_t-1} is an independent set. Thus e(B_1) ≤∑_i=1^t-1 d_B_1(v_i). If d_B_1(v_i) ≤ t-2 for each 1 ≤ i ≤ t-1, then e(B_1) ≤ t^2-3t+2. As ν_2=1, edges in B_2 span a star or a triangle. Let B_2' ⊂ B_2 be the set of vertices with degree at least one.
Then |B_2'| ≥ e(B_2) and the vertex v is not adjacent to any vertex from B_2' by Claim <ref>, i.e., |E'| ≥ e(B_2). It is clear that e(B_1∪ B_2) ≤ e(B_1)+e(B_2)+|B_1||B_2|-|E'| < |B_1||B_2|+t^2-3t+3. If there is a vertex v_i such that d_B_1(v_i)=t-1, then let I={1 ≤ i ≤ t-1: d_B_1(v_i)=t-1}. As above, for each i ∈ I, the vertex v_i is not adjacent to any vertex from B_2'. Thus
|E'| ≥ (|I|+1) e(B_2). Note that e(B_1) ≤ t^2-3t+2+|I| in this case. Therefore,
e(B_1∪ B_2) ≤ e(B_1)+e(B_2)+|B_1||B_2|-|E'|
≤ |B_1||B_2|+t^2-3t+2+|I|+e(B_2)-(|I|+1)e(B_2)
<|B_1||B_2|+t^2-3t+3.
If Δ(B_1)=t-2, then let v be a vertex with the maximum degree and N_B_1(v)={v_1,…,v_t-2}, here d_B_1(v_i) ≤ t-2 for each 1 ≤ i ≤ t-2. Set B_1'=B_1-v ∪ N_B_1(v). If B_1' is an independent set, then
e(B_1) ≤∑_i=1^t-2 d_B_1(v_i) ≤ t^2-4t+4. If further e(B_2) ≤ t-2, then the claim follows. Recall that t ≥ 5 in this case. If e(B_2)=t-1 ≥ 4, then the edges in B_2 span a star since ν_2=1. Let u be the center of the star; then u is not adjacent to any vertex from v ∪ N_B_1(v), i.e., |E'| ≥ t-1. Now, e(B_1∪ B_2) ≤
|B_1||B_2|+t^2-4t+4+t-1-|E'|<|B_1||B_2|+t^2-3t+3. If B_1' contains edges, then ν(B_1')=1 by Claim <ref>.
For e(B_1') ≤ t-2, we get e(B_1) ≤∑_i=1^t-2 d_B_1(v_i)+e(B_1') ≤ t^2-3t+2. Note that there are at least e(B_2) vertices which have degree at least one in B_2. Claim <ref> gives that |E'| ≥ e(B_2) and then e(B_1 ∪ B_2) ≤ |B_1||B_2|+t^2-3t+2. If e(B_1')=t-1, then B_1' is a star as t ≥ 5. This is impossible as it contradicts the assumption Δ(B_1)=t-2.
If Δ(B_1) ≤ t-3, then e(B_1) ≤ t^2-4t+4 by Theorem <ref>. Repeating the argument above, we can show the upper bound for e(B_1 ∪ B_2).
Case 2: t=4. It is clear that either (ν_1,ν_2)=(2,1) or (ν_1,ν_2)=(1,1). For the case of (ν_1,ν_2)=(1,1), note that e(B_1) ≤ 3 and e(B_2) ≤ 3. This implies that e(B_1∪ B_2) ≤ |B_1||B_2|+6<|B_1||B_2|+7. For the case of (ν_1,ν_2)=(2,1), we can prove the upper bound for e(B_1∪ B_2) by repeating the argument in Case 1.
Case 3: t=3. Apparently, (ν_1,ν_2)=(1,1) in this case. Reusing the argument in Case 2, one can show the desired upper bound for e(B_1∪ B_2). □
One of B_1 and B_2 is an independent set.
Proof: Suppose that both B_1 and B_2 contain edges. Recall ν_1=ν(B_1) and ν_2=ν(B_2). We assume that ν_1 ≥ν_2 >0. Claim <ref> implies that Δ(B_1) ≤ t-1 and Δ(B_2) ≤ t-1. If ν_1+ν_2 ≤ t-3, then by Theorem <ref>, we get
e(B_1 ∪ B_2) ≤ |B_1||B_2|+ t ν_1+t ν_2 ≤ |B_1||B_2|+ t^2-3t.
A contradiction to Claim <ref>. Thus it remains to consider the case where t-2 ≤ν_1+ν_2 ≤ t-1. To simply the notation, let K=G_n[B_1 ∪ B_2] and show an upper bound for e(K). Observe that
2e(K)=∑_v ∈ V(K) d_K(v)=∑_v ∈ B_1d_K(v)+ ∑_v ∈ B_2d_K(v).
We aim to establish ∑_v ∈ B_1d_K(v) ≤ |B_1||B_2|+2(t-ν_2)ν_1. If d_K(v,B_1) ≤ t-1-ν_2 for each vertex v ∈ B_1, then Theorem <ref> gives that e(B_1) ≤ (t-ν_2)ν_1, which yields that
∑_v ∈ B_1d_K(v)=∑_v ∈ B_1 d_K(v,B_1)+ ∑_v ∈ B_1 d_K(v,B_2) = 2e(B_1)+e(B_1,B_2) ≤ 2(t-ν_2)ν_1+|B_1||B_2|.
Thus we assume that there is a vertex v ∈ B_1 such that d_K(v,B_1) ≥ t-ν_2. Let v_1 be such a vertex with d_K(v_1,B_1)=t-1+j_1-ν_2, here j_1 ≥ 1. Applying Claim <ref> with i=1, p=ν_2, and q=t-1+j_1-ν_2, we get that v_1 has at least 2j_1 non-neighbors in B_2.
We remove j_1 neighbors of v_1 from B_1 arbitrarily and turn 2j_1 non-neighbors of v_1 in B_2 as neighbors.
Let K_1 be the resulting graph. Observe that
∑_v ∈ B_1 d_K_1(v)=∑_v ∈ B_1 d_K(v) and d_K_1(v,B_2) ≤ |B_2| for each v ∈ B_1.
Actually, the removal of j_1 neighbors of v_1 makes the degree sum decreased by 2j_1 while the adding of 2j_1 neighbors of v_1 contributes 2j_1 to the degree sum. Because of the choice of v_1, it is clear that d_K_1(v,B_2) ≤ |B_2| for each v ∈ B_1.
For i ≥ 2,
we shall define a vertex v_i and a graph K_i recursively such that ∑_v ∈ B_1 d_K_i(v)=∑_v ∈ B_1 d_K_i-1(v) and d_K_i(v,B_2) ≤ |B_2| for each v ∈ B_1. If d_K_i-1(v,B_1) ≤ t-1-ν_2 for each v ∈ B_1, then we stop. Otherwise, let v_i be a vertex such that d_K_i-1(v_i,B_1)= t-1+j_i-ν_2 with j_i ≥ 1. Observe that the added crossing edges so far are not incident with v_i. That is N_K_i-1(v_i,B_2)=N_K(v_i,B_2).
Thus we can apply Claim <ref> again to show that v_i has at least 2j_i non-neighbors in B_2 (in K_i-1 and K). We repeat the operation above to get K_i which satisfies desired properties.
Assume that the process terminates after ℓ steps. We remark that all new crossing edges are distinct as they are associated with distinct vertices v_1,…,v_ℓ in B_1.
Note that d_K_ℓ(v,B_1)≤ t-1-ν_2 for each v ∈ B_1. Therefore, by Theorem <ref> and the definition of K_i for 1 ≤ i ≤ℓ, we get
∑_v ∈ B_1 d_K(v) =∑_v ∈ B_1 d_K(v,B_1)+∑_v ∈ B_1 d_K(v,B_2)
=∑_v ∈ B_1 d_K_ℓ(v,B_1)+∑_v ∈ B_1 d_K_ℓ(v,B_2)
≤∑_v ∈ B_1 d_K_ℓ(v,B_1)+|B_1||B_2|
=2e_K_ℓ(B_1)+|B_1||B_2|
≤ |B_1||B_2|+2(t-ν_2)ν_1.
Repeating the argument above, we can show
∑_v ∈ B_2 d_K(v) ≤ |B_1||B_2|+2(t-ν_1)ν_2.
Therefore,
e(B_1∪ B_2)=E(K)=1/2∑_v ∈ B_1d_K(v)+ 1/2∑_v ∈ B_2d_K(v) ≤ |B_1||B_2|+(t-ν_2)ν_1+(t-ν_1)ν_2.
The case where either t ∈{3,4} or t≥ 5 with (ν_1,ν_2) = (t-2,1) is proved by Claim <ref>. We next assume t ≥ 5 and (ν_1,ν_2) ≠ (t-2,1).
For the case of ν_1+ν_2=t-1, let ν_2=t-1-ν_1 and g(ν_1)=(t-ν_2)ν_1+(t-ν_1)ν_2=2ν_1^2+(2-2t)ν_1+t^2-t. Clearly, g(ν_1) is concave up. As assumptions
ν_1 ≥ν_2 and (ν_1,ν_2) ≠ (t-2,1), the maximum value of g(ν_1) is g(t-3)=t^2-5t+12. Note that g(t-3)<t^2-3t+3 when t ≥ 5.
For the case of ν_1+ν_2=t-2, let ν_2=t-2-ν_1 and h(ν_1)=2ν_1^2+(4-2t)ν_1+t^2-2t. Similarly, the maximum value of h(ν_1) is h(t-3)=t^2-4t+6<t^2-3t+3 for t ≥ 4.
We established e(B_1 ∪ B_2) < |B_1||B_2|+t^2-3t+3 in each case. There is a contradiction to Claim <ref> in each case and the proof is complete. □
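As a numerical sanity check of the two parabola bounds used in the case analysis above (ours, not part of the proof):

```python
# g corresponds to nu_1 + nu_2 = t - 1, h to nu_1 + nu_2 = t - 2, with
# nu_1 >= nu_2 >= 1 and (in the g case) (nu_1, nu_2) != (t-2, 1).
def g(nu1, t):
    return 2 * nu1 ** 2 + (2 - 2 * t) * nu1 + t * t - t

def h(nu1, t):
    return 2 * nu1 ** 2 + (4 - 2 * t) * nu1 + t * t - 2 * t

for t in range(5, 200):
    target = t * t - 3 * t + 3
    for nu1 in range(t // 2, t - 2):          # nu_2 = t-1-nu_1, so nu_1 <= t-3
        assert g(nu1, t) <= t * t - 5 * t + 12 < target
    for nu1 in range((t - 1) // 2, t - 2):    # nu_2 = t-2-nu_1, so nu_1 <= t-3
        assert h(nu1, t) <= t * t - 4 * t + 6 < target
print("g and h stay below t^2 - 3t + 3 on the admissible range for 5 <= t < 200")
```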
From now on, assume B_2 is an independent set. Let Δ_1 be the maximum degree of B_1. Recall ν_1 is the matching number of B_1.
We have e(B_1) ≥ t^2-3t+3.
Furthermore, if e(B_1) ≤ t^2-3t+3+k for an integer 0 ≤ k ≤ s-1, then e̅(W)+e̅(W,B_1) ≤ k. As a special case where k=0, i.e., e(B_1)=t^2-3t+3, then e(B_1,B_2)=|B_1||B_2| and each vertex w ∈ W has degree n-1.
Proof: As G_n is an extremal graph, it satisfies that
e(G_n) ≥⌊n-s+1/2⌋⌈n-s+1/2⌉+(s-1)(n-s+1)+s-12+t^2-3t+3.
Let C_1=A_1 ∪ B_1 and C_2=A_2 ∪ B_2. Observe that
e(G_n)=e(C_1, C_2)+e(B_1)+e(W,C_1 ∪ C_2)+e(W).
Notice that e(C_1, C_2) ≤⌊n-s+1/2⌋⌈n-s+1/2⌉ and
e(W,C_1 ∪ C_2)+e(W) ≤ (s-1)(n-s+1)+\binom{s-1}{2}.
Therefore, e(B_1) ≥ t^2-3t+3.
If e(B_1) ≤ t^2-3t+3+k for an integer 0 ≤ k ≤ s-1, then
e(W,C_1 ∪ C_2)+e(W) ≤ (s-1)(n-s+1)+\binom{s-1}{2}-e̅(W)-e̅(W,B_1). Combined with the lower bound for e(G_n), we get e̅(W)+e̅(W,B_1)≤ k and the second part follows. The special case of k=0 can be shown similarly. □
The subgraph B_1 has exactly one connected component and Δ_1=t-1.
Proof: Claim <ref> gives that Δ_1 ≤ t-1 and ν_1 ≤ t-1. We assert that Δ_1 ≥ t-2. Otherwise, e(B_1) ≤ t^2-3t+2 by Theorem <ref>. This is a contradiction to Claim <ref> and the assertion follows. Let v be a vertex with maximum degree in B_1 and C_1 be the connected component containing v.
If Δ_1=t-1, then C_1 is the only connected component in B_1. Otherwise, K_1,t-1∪ M_1 ⊂ B_1. This is a contradiction to Claim <ref>.
If Δ_1=t-2, then let N(v) be the neighborhood of v in B_1.
Suppose B_1 has another connected component C_2. Then ν(C_2)=1. Otherwise, K_1,t-2∪ M_2 ⊂ B_1, which is a contradiction to Claim <ref>. Similarly, we can show B_1-(v ∪ N(v)) is an independent set, i.e., each edge in C_1 is incident with a vertex from N(v). As Δ_1=t-2, we get e(C_2) ≤ t-2 for t ≥ 5. Therefore, e(B_1) ≤∑_i=1^t-2 d_B_1(v_i)+e(C_2) ≤ (t-2)^2+t-2=t^2-3t+2 provided t ≥ 5.
For t=4, if e(C_2) ≤ 2, then e(B_1) ≤ 6 which leads to the same contradiction. If t=4 and e(C_2)=3, then C_2 is a triangle. If v_1v_2 is an edge, then e(C_1)=3 and e(B_1)=6, a contradiction to Claim <ref>. If v_1 is not adjacent to v_2, then ν(C_1)=2 and Δ(C_2)=2 which imply that K_1,2∪ M_2 ⊂ B_1. This is a contradiction to Claim <ref>.
For t=3, as Δ_1=1, we get e(B_1)=2<3 and obtain the same contradiction. Therefore, B_1 has exactly one connected component.
Notice that we already showed Δ_1 ≥ t-2 and B_1 contains exactly one connected component. One can repeat the argument above to show Δ_1=t-1. □
If t=3, then the unique connected component in B_1 either is a triangle or is a P_4.
Proof: Note that s=t=3 now. By Claim <ref>, there is a vertex v with two neighbors in B_1, say v_1 and v_2. Claim <ref> yields that B_1 ∖ (N_B_1(v_1) ∪ N_B_1(v_2)) is an independent set.
If v_1v_2 is an edge, then vv_1v_2 is a triangle and there is no other edge as Claim <ref>. If v_1 ≁v_2, then both v_1 and v_2 have at most one neighbor other than v as Δ_1=2. If one of v_1 and v_2 has degree one, then we get a P_4 and we are done. Assume v_1u_1 and v_2u_2 are two edges. If u_1≠ u_2, then K_1,2∪ M_1 ⊂ B_1, this is a contradiction to Claim <ref>. If u_1=u_2, then e(B_1)=4. Note that s-1=2 and assume W={w_1,w_2}.
If both w_1 and w_2 are completely adjacent to B_1, then {v,u_1,w_1} and {v_1,v_2,w_2} form a K_3,3. Otherwise, as the lower bound and the upper bound for e(G_n), only one of w_1 and w_2 has a unique non-neighbor in B_1, say w_1. Thus w_1 is completely adjacent to one of {v_1,v_2} and {v,u_1}, say {v,u_1}. Note that w_2 is completely adjacent to B_1.
Now {v,u_1,w_2} and {v_1,v_2,w_1} form a K_3,3. Thus F_3,3⊂ G_n by Lemma <ref>. This is a contradiction and the claim is proved. □
Let u and v be two vertices from B_1 such that u and v are not adjacent. If d_B_1(u)=t-1, then N_B_1(v) ⊆ N_B_1(u).
Proof: Suppose that v has a neighbor x such that x ∉N_B_1(u). Then observe that K_1,t-1∪ M_1 ⊂ B_1, here K_1,t-1 is formed by u and its neighbors in B_1 while M_1 is the edge vx. This is a contradiction to Claim <ref>. □
Let v be a vertex from B_1 with d_B_1(v)=t-1. Then N_B_1(v) is an independent set for t ≥ 4.
Proof: Let N_B_1(v)={v_1,…,v_t-1} and B_1'=B_1-(v ∪ N_B_1(v)). Without causing any confusion, for each u ∈ B_1, we will use N(u) to denote N_B_1(u) in the proof. Claim <ref> gives us that B_1' is an independent set. Thus
e(B_1)=∑_i=1^t-1 d_B_1(v_i)-e(N(v)).
This equation will be used frequently in the proof.
If N(v) contains an edge, then there is a vertex v_i ∈ N(v) such that d_B_1(v_i)=t-1. Otherwise, the equation (<ref>) gives that e(B_1) ≤ (t-1)(t-2)-1=t^2-3t+2 and this is a contradiction to Claim <ref>. Without loss of generality, let v_1 be such a vertex.
The vertex v_1 is adjacent to some v_j ∈ N(v). If it is not the case, then N(v_1)=v ∪ T, where T ⊂ B_1' and |T|=t-2. By Claim <ref>, N(v_i) ⊆ N(v_1)=v ∪ T for each 2 ≤ i ≤ t-1 and then N(v) is an independent set, a contradiction to the assumption. Let {v_2,…,v_p} be the set of vertices that are adjacent to v_1. We next show p<t-1. Otherwise, e(N(v)) ≥ t-2 as p=t-1. By equation (<ref>) and Claim <ref>,
t^2-3t+3 ≤ e(B_1) ≤ (t-1)^2-e(N(v)) ≤ t^2-3t+3,
which implies that d_B_1(v_i)=t-1 for each 1 ≤ i ≤ t-1, e(B_1)=t^2-3t+3, and e(N(v))=t-2, i.e., {v_2,…,v_t-1} is an independent set. By Claim <ref>, there is a subset D ⊂ B_1' with t-3 vertices such that N(v_i)={v,v_1}∪ D for each 2 ≤ i ≤ t-1. Note that N(v_1)={v,v_2,…,v_t-1}.
If we let L={v_2,…,v_t-1} and R={v,v_1}∪ D, then L ∪ R form a K_t-2,t-1.
As e(B_1)=t^2-3t+3, each vertex from W has degree n-1 by Claim <ref>. Thus including vertices from W properly, there is a K_s,t in B_1∪ W. As A_1 ∪ B_1 ∪ W and A_2 form a complete bipartite graph, then F_s,t⊂ G_n by Lemma <ref>, a contradiction. Thus p<t-1.
There is a vertex v_i such that p+1 ≤ i ≤ t-1 and d_B_1(v_i)=t-1. If there is no such a vertex, then by equation (<ref>)
e(B_1) =∑_i=1^t-1 d_B_1(v_i)-e(N(v))
=∑_i=1^p d_B_1(v_i)+∑_i=p+1^t-1 d_B_1(v_i)-e(N(v))
≤ p(t-1)+(t-1-p)(t-2)-e(N(v))
≤ p(t-1)+(t-1-p)(t-2)-(p-1)
=t^2-3t+3,
here e(N(v)) ≥ p-1 as v_1 is adjacent to v_i for each 1 ≤ i ≤ p . Together with Claim <ref>, we get
(1) d_B_1(v_i)=t-1 for 1 ≤ i ≤ p,
(2) d_B_1(v_i)=t-2 for each p+1 ≤ i ≤ t-1,
(3) e(N(v))=p-1, i.e., E(N(v))={v_1v_2,…,v_1v_p}.
Assume N(v_1)={v,v_2,…,v_p}∪ T, where T ⊂ B_1' with |T|=t-p-1.
Claim <ref> implies that N(v_i) ⊂ v ∪ T for each p+1 ≤ i ≤ t-1.
Since d_B_1(v_i)=t-2 and |v ∪ T| = t-p ≤ t-2, we get p=2 and N(v_i) =v ∪ T for each 3 ≤ i ≤ t-1.
Similarly, observe that N(v_2)={v,v_1}∪ T. Now v ∪ T and {v_1,v_2,…,v_t-1} form a K_t-2,t-1.
Notice that each vertex from W has degree n-1 as e(B_1)=t^2-3t+3. We can show F_s,t⊂ G_n similarly and this is a contradiction.
In the following, we assume that d_B_1(v_i)=t-1 for p+1 ≤ i ≤ p+q.
We next show p=2. Recall that N(v_1)={v,v_2,…,v_p}∪ T, where T ⊂ B_1' with |T|=t-p-1.
As v_1 ≁v_i for p+1 ≤ i ≤ p+q, Claim <ref> gives that N(v_i) ⊆ N(v_1)={v,v_2,…,v_p}∪ T. The assumption d_B_1(v_1)=d_B_1(v_i)=t-1 indicates that v_i is adjacent to all vertices from {v_2,…,v_p} for each p+1 ≤ i ≤ p+q. Thus e(N(v)) ≥ p-1+q(p-1). If p>2, then by equation (<ref>), we get e(B_1) ≤ (t-1)(p+q)+(t-1-p-q)(t-2)-(p-1)(q+1)=t^2-3t+3+(2-p)q<t^2-3t+3, which is a contradiction to Claim <ref>. Therefore, p=2 and e(B_1)=t^2-3t+3. This implies that d_G(v_i)=t-2 for q+3 ≤ i ≤ t-1 and each w ∈ W has degree n-1 by Claim <ref>. Notice that |T|=t-3 now.
If t ≥ 5, then reusing Claim <ref>, we get N_G_n(v_i)=v ∪ T for q+3 ≤ i ≤ t-1 and T ⊂ N_G_n(v_2). Now N_G_n(v_2)={v,v_1,v_3}∪ T and d_G_n(v_2)=t, a contradiction to Claim <ref>.
If t=4, then N_G_n(v_2)={v,v_1,v_3}. Now {v_1,v_3} and {v,v_2,u_1} form a K_2,3, here T={u_1}.
Recall each vertex w ∈ W has degree n-1 and |W|=s-1. Thus taking vertices from W properly, we can see v ∪ T ∪ N(v) ∪ W contains a K_s,4 for any 2 ≤ s ≤ 4. Since A_1 ∪ B_1 ∪ W and A_2 is a complete bipartite graph, F_s,t⊂ G_n by Lemma <ref>. This is a contradiction and the claim is proved.
□
The subgraph B_1 contains exactly one copy of H as a subgraph for t ≥ 4.
Proof: We reuse the assumptions in the proof of Claim <ref>. Then v is a vertex from B_1 with maximum degree, N_B_1(v)={v_1,…,v_t-1}, and B_1'=B_1-(v ∪ N_B_1(v)). Claim <ref> tells us that N_B_1(v) is an independent set.
Notice that e(B_1)=∑_i=1^t-1 d_B_1(v_i). Claim <ref> implies that there is at least one v_i ∈ N(v) such d_B_1(v_i)=t-1. Assume that d_B_1(v_i)=t-1 for each 1 ≤ i ≤ k. We claim k=1. Otherwise,
Claim <ref> yields that N(v_i)=v ∪ T for each 1 ≤ i ≤ k, where T ⊂ B_1' and |T|=t-2. Then L={v_1,…,v_k} and R=v ∪ T form a K_k,t-1 with k ≥ 2. In the following, we will show K_s,t⊂ B_1 ∪ W. As A_1 ∪ B_1 ∪ W and A_2 is a complete bipartite graph, then F_s,t is a subgraph of G_n by Lemma <ref> which is a contradiction to the assumption.
There are two cases depending on k.
Case 1: 2 ≤ k ≤ s-1. Let
W'={w ∈ W: w is adjacent to all vertices in R}.
Note that each w ∈ W ∖ W' has a non-neighbor in B_1. As e(B_1) ≤ t^2-3t+2+k, it follows that |W∖ W'| ≤e̅(W,B_1)+e̅(W) ≤ k-1 by Claim <ref>, i.e., |W'| ≥ s-k. Assume |W'|=s-k+j.
For j=0, each vertex from W∖ W' has exactly one non-neighbor in R and has degree n-2.
Note that |W'| ≤ s-2. Pick w ∈ W∖ W' arbitrarily and notice that w is adjacent to all vertices in L ∪ W'. Now, L ∪ W' and R ∪ w form a K_s,t.
For j>0, we have |W∖ W'|=k-j-1.
As e̅(W ∖ W',B_1) ≥ k-j-1, we get e̅(W')+e̅(W',L) ≤ j by Claim <ref>. If e̅(W',L)=0, then there is a vertex w ∈ W' which has at least s-k neighbors in W', provided s-k+j>2. Otherwise, e(W') ≤ (s-k-1)(s-k+j)/2 < \binom{s-k+j}{2}-j, a contradiction. Let w ∈ W' be such a vertex and let W” be a set of s-k neighbors of w in W'. Then L ∪ W” and R ∪ w form a K_s,t.
If s-k=j=1, then k=s-1, |W'|=2, and |W∖ W'|=s-3. Assume W'={w_1,w_2}. If w_1 is adjacent to w_2, then L ∪ w_1 and R ∪ w_2 is a K_s,t. Thus we assume w_1 is not adjacent to w_2. For W∖ W' ≠∅, then each w ∈ W∖ W' is completely adjacent to L ∪ W'.
We can find a K_s,t in B_1 ∪ W as we did in the case where j=0. For W∖ W' = ∅, i.e., s=3 and W=W'={w_1,w_2}. As w_1 is not adjacent to w_2, we get both w_1 and w_2 are completely adjacent to L ∪ R by Claim <ref>.
Note that t>3 by the assumption. Then d_G_n(v_i)=t-2 for 3 ≤ i ≤ t-1 and v_i has a unique non-neighbor in T by Claim <ref>. Let Y={v_3,…,v_t-2}.
Thus there are two vertices t_1,t_2 ∈ T such that both t_1 and t_2 are completely adjacent to Y as |T|=t-2. Now {v,t_1,t_2} and W ∪{v_1,v_2}∪ Y form a K_3,t.
If e̅(W',L)>0, then let Z ⊂ W' be the set of vertices which have non-neighbors in L. Thus e̅(W') ≤ j-|Z|. Recall |W'|=s-k+j.
Removing vertices of W' that have non-neighbors in W' one by one, we get a subset W” such that |W”| ≥ s-k+|Z| and W” is a clique. Let w ∈ W”∖ Z and let W”' ⊂ W” be a set of s-k neighbors of w in W”. Then L ∪ W”' and R ∪ w form a K_s,t.
Case 2: k ≥ s. We first consider the case where k ≥ s+1. We claim that there is a vertex w ∈ W such that w has at least s neighbors in L. If there is a such vertex w, then let L' be the set of s neighbors of w in L. Observe that L' and R ∪ w is K_s,t. It is left to show the existence of w. Let
W'={w ∈ W: w has at most s-1 neighbors in L}.
As e(B_1) ≤ t^2-3t+2+k and each w ∈ W' has at least k-s+1 non-neighbors in L, it follows that (k-s+1)|W'| ≤ k-1. Thus |W'| ≤ 1+(s-2)/(k-s+1)≤ 1+(s-2)/2 since k ≥ s+1.
If s ≥ 4, then |W'| ≤ s-2 and the desired vertex w ∈ W exists. If s=3, then the inequality above gives |W'| ≤ 1 and the existence of w also follows. For s=2, note that W contains only one vertex, say w, and e(B_1) ≤ t^2-3t+2+k. As w has at most one neighbor in L. Equivalently, d_G_n(w) ≤ n-k. Combining the upper bound and the lower bound for e(G_n), we get that d_G_n(w) = n-k and d_G_n(v_i)=t-2 for each k+1 ≤ i ≤ t-1, here we assume k<t-1 for a while. Claim <ref> give that N_G_n(v_i)=v ∪ T_i such that T_i ⊂ T and |T_i|=t-3, i.e., v_i has exactly one non-neighbor in T. As |T|=t-2 and k ≥ s+1, there is a vertex t ∈ T such that t is adjacent to each vertex v_i for k+1 ≤ i ≤ t-1. As d_G_n(w) = n-k and w has at least k-1 non-neighbors in L, then w is completely adjacent to R. Now {v,t} and w ∪ N_B_1(v) form a K_2,t.
For k=t-1, then note that R and L ∪ w is a K_t-1,t and we can find a K_s,t easily.
It remains to prove the case of k=s. As above, if |W'| ≤ s-2, then we are done. Thus we assume |W'|=s-1, i.e., W'=W. Similarly, one can show d_G_n(w)=n-2 for each w ∈ W and the unique non-neighbor of w is in L. In addition, d_G_n(v_i)=t-2 for each s+1 ≤ i ≤ t-1, here we also assume s<t-1 for a moment. As above, we can find an (s-1)-subset T' ⊂ T such that T' and {v_s+1,…,v_t-1} form a complete bipartite graph. Now v∪ T' and w ∪ N_B_1(v) is a K_s,t for any w ∈ W.
For k=s=t-1, as above R and L ∪ w is a K_t-1,t for any w ∈ W and we are able to find K_s,t easily.
There is a contradiction in each case and then k=1 follows. Assume v_1 is the unique vertex such that d_B_1(v_1)=t-1 and N(v_1)=v ∪ T, where T ⊂ B_1' with |T|=t-2. Meanwhile, e(B_1)=t^2-3t+3 and d_B_1(v_i)=t-2 for each 2 ≤ i ≤ t-1. By Claim <ref>, N_B_1(v_i) ⊂ v ∪ T for each 2 ≤ i ≤ t-1. Actually, for each v_i, there is a unique vertex u_i ∈ T such that v_i ≁u_i. Moreover, we assert that u_i ≠ u_j for 2 ≤ i ≠ j ≤ t-2. Otherwise, suppose u_2=u_3 without losing any generality. Observe that {v_1,v_2,v_3} and v ∪ (T ∖ u_2) is K_3,t-2. Claim <ref> together with e(B_1)=t^2-3t+3 imply that each vertex from W has degree n-1. Let W' be an (s-3)-subset of W.
Then {v_1,v_2,v_3}∪ W' and v ∪ (T ∖ u_2) ∪ (W∖ W') form a K_s,t for s ≥ 3. For s=2, there is a vertex u_1 ∈ T which is completely adjacent to {v_2,…,v_t-1}. Now {v,u_1} and N(v) ∪ W is a K_2,t.
By Lemma <ref>, F_2,t⊂ G_n. This is a contradiction and the proof is complete.
Proof of Theorem <ref>: For (s,t) ≠ (3,3), Claim <ref> gives that B_1 contains exactly one copy of H as a subgraph. Let C_1=A_1 ∪ B_1 and C_2=A_2 ∪ B_2. Reusing the proof for Claim <ref>, we get that
e(G_n) ≤⌊n-s+1/2⌋⌈n-s+1/2⌉+(n-s+1)(s-1)+\binom{s-1}{2}+t^2-3t+3.
Comparing with the lower bound for e(G_n), it follows that d_G_n(w)=n-1 for each w ∈ W. In addition, e(C_1,C_2)=⌊n-s+1/2⌋⌈n-s+1/2⌉, i.e., C_1 and C_2 form a balanced complete bipartite graph on n-s+1 vertices. Therefore, G_n=G_s,t and it is the unique extremal graph by Theorem <ref>. For (s,t)=(3,3), one can show that either G_n=G_3,3 or G_n=G_3,3'. For this case, we are not able to characterize all extremal graphs. □
|
http://arxiv.org/abs/2307.07620v2 | 20230714203907 | Generalizable Embeddings with Cross-batch Metric Learning | [
"Yeti Z. Gurbuz",
"A. Aydin Alatan"
] | cs.LG | [
"cs.LG",
"cs.CV"
] |
Generalizable Embeddings with Cross-batch Metric Learning
August 12, 2023
=============================================================================================================================================
Global average pooling (GAP) is a popular component in deep metric learning (DML) for aggregating features. Its effectiveness is often attributed to treating each feature vector as a distinct semantic entity and GAP as a combination of them. Albeit substantiated, such an explanation's algorithmic implications to learn generalizable entities to represent unseen classes, a crucial DML goal, remain unclear. To address this, we formulate GAP as a convex combination of learnable prototypes. We then show that the prototype learning can be expressed as a recursive process fitting a linear predictor to a batch of samples. Building on that perspective, we consider two batches of disjoint classes at each iteration and regularize the learning by expressing the samples of a batch with the prototypes that are fitted to the other batch. We validate our approach on 4 popular DML benchmarks.
Metric learning, zero-shot learning
Accepted as a conference paper at ICIP 2023
§ INTRODUCTION
Deep metric learning (DML) considers image-label pairs (I,L) and aims to learn an embedding function I→ y that maps images I to vectors y such that the Euclidean distance in the space of embeddings is consistent with the label information. More specifically, ‖ y_i - y_j‖_2 is small whenever L_i=L_j, and large whenever L_i ≠ L_j. To enable learning, this requirement is represented via a loss function ℓ((y_i, L_i), (y_j,L_j)) (e.g., contrastive <cit.>, triplet <cit.>, multi-similarity <cit.>), and the typical learning mechanism is gradient descent of an empirical risk function defined over a batch of data points: ℒ_DML = Σ_ijℓ((y_i, L_i), (y_j,L_j)).
Primary thrusts in DML include tailoring pairwise loss terms <cit.>, pair mining <cit.> and data augmentation with either synthesizing informative samples <cit.> or with virtual embeddings called proxies <cit.>. To improve generalization, training strategies built upon characterizations of the generalization bounds <cit.>, separating unique and shared characteristics among classes <cit.>, intra-batch feature aggregation <cit.>, ranking surrogates <cit.>, further regularization terms <cit.>, and various architectural designs such as ensemble <cit.> and multi-task <cit.> models are utilized in the prolific DML literature. A shared component of these diverse methods is the embedding function, which is a convolutional neural network (CNN) followed by global average pooling (GAP) <cit.>.
Though simple, GAP is a highly effective way to aggregate information. Empirically validated folklore <cit.> to explain the effectiveness of GAP is considering each pixel of the CNN feature map as corresponding to a separate semantic entity and GAP as the combination of them <cit.>. A critical desideratum of DML is generalizing the learned embedding function to unseen classes. Thus, the learned semantic entities should be able to express novel classes, e.g., learning "tire" and "window" to represent "car" instead of learning "car". However, no explicit mechanism exists in the current DML approaches to enforce this behavior. Moreover, supervised DML losses provide guidance for seen classes, which yields entities fitted to those classes and possibly hinders generalization capability. In this paper, we address explicitly learning generalizable semantic entities in the context of GAP.
Briefly, our contributions include the following: i) we formulate GAP as a convex combination of learnable prototypes (<ref>) to enable explicit learning of the semantic entities, ii) we show that the prototype learning can be expressed as a recursive process fitting a linear predictor to a batch of samples, and iii) we tailor a regularization loss (<ref>) built on expressing one set of classes with the prototypes fitted to another set of classes. Through rigorous experimentation, we validate our theoretical claims and demonstrate the effectiveness of our approach.
§ METHOD
We propose a regularization loss to learn transferable features. Our loss is built on solving a metric learning problem on a batch and then evaluating the learned metric on another batch of unseen classes. We first express GAP as a convex combination of learnable prototypes in <ref>. We then associate prototype learning with a recursive process fitting a linear predictor to a batch of samples in <ref>. Building on that, we formulate our loss in <ref>. We defer all the upcoming proofs to the appendix.
§.§ GAP as Convex Combination of Prototypes
We consider embedding functions that are implemented as a CNN followed by GAP, i.e., the CNN maps an image I to a w×h feature map X, and GAP maps X to y = (1/n)Σ_i x_i, where n=wh. We introduce the following operator to compose a histogram representation from the collection of features.
For n-many d-dimensional features X=[ x_i∈^d ]_i=1^n and m-many prototype features 𝒱=[ ν_i∈^d ]_i=1^m of the same dimension, the histogram of X on 𝒱 is denoted as z^∗ which is computed as the minimizer of the following problem:
(z^∗,π^∗) = argmax_z∈𝒮^m, π⩾0∑_ijν_i x_jπ_ij s.t. Σ_iπ_ij=1/n, Σ_jπ_ij=z_i
where 𝒮^m = { p ∈^m_⩾ 0|Σ_i p_i = 1 }.
The solution of the problem in (<ref>) reads:
π^∗_ij = 1/n1(i = argmax_k{ν_k x_j })
where 1(c) is 1 whenever c is true and 0 otherwise.
In words, histogram operator basically assigns each feature to their nearest prototype and accumulates 1/n mass for each assigned feature.
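To make the operator concrete, the following NumPy sketch (our illustration, not code from the paper) computes the hard assignment of (<ref>) and the resulting histogram for a toy feature map, and compares the prototype convex combination with GAP; the feature and prototype values are random placeholders.

```python
import numpy as np

def histogram_on_prototypes(X, V):
    """Hard histogram of features X (n x d) on prototypes V (m x d).

    Each feature is assigned to the prototype with the largest inner
    product, and every assignment contributes mass 1/n to that bin.
    """
    n = X.shape[0]
    sims = X @ V.T                      # (n, m) inner products nu_k . x_j
    nearest = np.argmax(sims, axis=1)   # index of the assigned prototype
    z = np.bincount(nearest, minlength=V.shape[0]) / n
    return z, nearest

# toy example: 6 features and 4 prototypes in 5 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 5))
V = rng.normal(size=(4, 5))
z, _ = histogram_on_prototypes(X, V)

gap = X.mean(axis=0)                    # global average pooling
pcc = z @ V                             # prototype convex combination
print(z.sum(), np.linalg.norm(pcc - gap))  # z sums to 1; GAP vs. PCC gap
```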
We now consider a set of prototypes in the feature space 𝒳 where the convolutional features x_i lie. We consider m-many prototype features 𝒱 = {ν_i}_i=1^m so that the set 𝒱 is a δ-cover of the feature space 𝒳. Namely, for any x∈𝒳, there is a prototype ν_x such that ‖ x - ν_x‖_2 ⩽δ.
Given n-many convolutional features X=[ x_i]_i=1^n, we compute the histogram of X on 𝒱 (i.e., z^∗) using (<ref>) and obtain the global representation ŷ as:
ŷ = ∑_k=1^m z^∗_k ν_k .
Note that the GAP representation is y=(1/n)Σ_i=1^n x_i. With the following lemma, we show that GAP is approximately equivalent to ŷ, i.e., to a convex combination of prototypes.
Given n-many convolutional features X=[ x_i∈𝒳]_i=1^n and m-many prototype features 𝒱=[ ν_i]_i=1^m with {ν_i}_i=1^m being a δ-cover of 𝒳, if z^∗ is the histogram of X on V defined in (<ref>), then we have:
‖∑_i=1^m z^∗_iν_i - ∑_j=1^n (1/n) x_j ‖_2 ⩽δ
We visualize the result of <Ref> in <ref>, which implies that with GAP each image is represented as a convex combination of the prototype vectors. To generalize DML to unseen classes, we want the prototypes to represent transferable entities such as "tire" and "window" rather than the classes themselves (e.g., "car"). To enforce that, we first formulate the histogram operator as a trainable layer by smoothing the objective of (<ref>) with entropy:
(z^',π^') = argmax_z∈𝒮^m, π>0, Σ_iπ_ij=1/n, Σ_jπ_ij=z_i∑_ijν_i x_jπ_ij - (1/ε)∑_ijπ_ijlogπ_ij
which admits a soft-max solution: z^'_i=(1/n)Σ_j exp(εν_i x_j)/Σ_kexp(εν_k x_j). Thus, it can be implemented with 1×1 convolution and soft-max layers (<ref>). In the following sections, we derive a loss to regularize the learning of the prototypes.
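A minimal PyTorch sketch of this layer is given below; the channel count, the number of prototypes m and the temperature ε are placeholder values, and the 1×1 convolution weights play the role of the prototypes ν_i.

```python
import torch
import torch.nn as nn

class SoftHistogram(nn.Module):
    """Soft histogram z' via a 1x1 convolution and a soft-max over prototypes,
    a sketch of the entropy-smoothed operator; hyperparameters are illustrative."""

    def __init__(self, d, m, eps=10.0):
        super().__init__()
        self.proj = nn.Conv2d(d, m, kernel_size=1, bias=False)  # rows act as prototypes
        self.eps = eps

    def forward(self, feat):                         # feat: (B, d, h, w) CNN feature map
        logits = self.eps * self.proj(feat)          # (B, m, h, w): eps * <nu_i, x_j>
        assign = torch.softmax(logits, dim=1)        # soft assignment over the m prototypes
        return assign.mean(dim=(2, 3))               # average over the n = h*w locations

# usage on a dummy feature map
z = SoftHistogram(d=64, m=16)(torch.randn(2, 64, 7, 7))
print(z.shape, z.sum(dim=1))                         # (2, 16), each row sums to ~1
```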
§.§ Learning the Prototypes
Given Z=[z_i]_i and Y=[y_i]_i denoting the histograms obtained by (<ref>) and the GAP representations of a batch, respectively, we can learn the prototypes jointly with the embedding function by augmenting ‖𝒱Z - Y ‖_F^2 to the DML loss. However, that does not guarantee transferable representations. We now alternatively express the learning mechanism of the prototypes as a recursive process and derive a loss to regularize the learning.
Let (Z_1,Y_1), (Z_2,Y_2),…, (Z_K,Y_K) be the representations we obtain during the course of K-step training. We can obtain 𝒱^(K), , the prototypes at K, as the solution of the following problem:
𝒱^(K)=_A∑_i=1^K α^K-i‖ A Z_i - Y_i‖_F^2 + β‖ A‖_F^2
where 0<α⩽ 1 is the forgetting factor to put more emphasis on the recent representations, and β‖ A‖_F^2 is to improve robustness. We can obtain the solution as <cit.>:
𝒱^(K) = R_K^-1 Q_K
where R_K =Σ_iα^K-iZ_iZ_i+β I and Q_K = Σ_iα^K-iZ_iY_i. For a new batch (Z,Y) at step K+1, we can update the solution as:
𝒱^(K+1) = W_K𝒱^(K) + (I - W_K)𝒱
where 𝒱 = argmin_A ‖ A Z-Y‖^2_F+(1-α)β‖ A‖_F^2 is the prototypes fitted to the current batch as 𝒱=R^-1Z Y with R=Z Z+(1-α)β I, and W_K=R^-1(R_K^-1+α R^-1)^-1. The results mainly come from the Woodbury identity, similar to the derivation of the RLS filter <cit.>.
Practically, learning prototypes with gradient descent of ‖𝒱Z Y ‖_F^2 is more appealing. That said, the form of the recursive update in (<ref>) reveals that the learned prototypes are the weighted combinations of the prototypes fitted to the batch of samples. Thus, imposing constraints on per-batch-fitted prototypes can be a decisive step to obtain a batch-based regularization loss. In the following section, we build on that perspective to formulate our loss to regularize prototype learning.
§.§ Cross-batch Metric Learning
The formulation in (<ref>) reinterprets the learning mechanism of prototypes, that is based on iteratively fitting prototypes to batch of samples (Z, Y) as:
𝒱 = argmin_A ‖ A Z - Y ‖_F^2 + ϵ‖ A ‖_F^2
Assuming that representations in Y are consistent with the label information, expression in (<ref>) is equivalent to solving a metric learning problem for (Z,Y) tuples <cit.>. We now exploit this observation to derive our loss.
We first split the batch (Z, Y) into two as (Z_1, Y_1) and (Z_2, Y_2) such that class sets of the two batches are disjoint. Similar to (<ref>), we express (<ref>) as:
𝒱 = W𝒱_1 + (I-W)𝒱_2
where 𝒱_k = argmin_A ‖ A Z_k - Y_k ‖_F^2 + ϵ/2‖ A ‖_F^2, and W=R_2^-1(R_1^-1+ R_2^-1)^-1 with R_k = Z_k Z_k^T + ϵ/2 I. Hence, we express the learning mechanism at each batch as the weighted combination of the two metrics fitted to the different sets of classes, which sets the stage for the rest of the formulation.
Consider the prototypes 𝒱_1 fitted to (Z_1,Y_1). If those prototypes 𝒱_1 correspond to transferable entities, then their combination with the weights in Z_2 should yield embeddings that are consistent with the label information. Specifically, Ŷ_2 = 𝒱_1 Z_2 should also minimize the DML loss.
Formally, given a batch (Z=[Z_1 Z_2], Y=[Y_1 Y_2]), we first obtain the prototypes as 𝒱_k=(Z_k Z_k+ϵ I)^-1Z_k Y_k for k∈{1,2}, or equivalently 𝒱_k=Y_k(Z_k Z_k+ϵ I)^-1Z_k if the batch size is less than the number of prototypes, for computational efficiency. Given a DML loss function ℓ((y_i,L_i), (y_j,L_j)), e.g., contrastive <cit.>, we formulate our cross-batch metric learning (XML) loss as:
ℒ_XML = ∑_k∑_ŷ_i,ŷ_j∈Ŷ_kℓ((ŷ_i,L_i), (ŷ_j,L_j))
for k=1,2 where Ŷ_1 = 𝒱_2 Z_1 and Ŷ_2 = 𝒱_1 Z_2. In words, we solve a metric learning problem for a set of classes and then compute its performance on another set of unseen classes. Having closed form solution for 𝒱_k in terms of (Z_k, Y_k) enables us to express the metric learning problem as a differentiable operation. Hence, unseen class performance can be explicitly enforced through a batch-based loss term (, ℒ_XML) that can be jointly optimized with gradient descent of any DML loss. In particular, we combine this loss with the metric learning loss as:
ℒ = (1-λ)ℒ_DML + λℒ_XML
The proposed loss assesses the unseen class generalization performance of locally fitted prototypes. Intuitively, such a regularization in learning should be useful in better generalization of the CNN features as well as GAP embeddings since prototypes are connected to CNN features and GAP embeddings through analytical operations.
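The following PyTorch sketch illustrates the batch-level computation under our own shape convention (histograms Z and embeddings Y stored column-wise) and with a simple contrastive surrogate standing in for the generic pair loss ℓ; the margin value is an assumption rather than a setting from the paper.

```python
import torch
import torch.nn.functional as F

def fit_prototypes(Z, Y, eps=0.05):
    """Ridge fit of prototypes A minimizing ||A Z - Y||_F^2 + eps ||A||_F^2.
    Z: (m, B) histograms as columns, Y: (d, B) embeddings as columns."""
    m = Z.shape[0]
    G = Z @ Z.T + eps * torch.eye(m, device=Z.device)     # (m, m)
    return torch.linalg.solve(G, Z @ Y.T).T               # A: (d, m)

def contrastive(Y, labels, margin=0.5):
    """A simple contrastive surrogate for the generic DML loss l(.,.)."""
    D = torch.cdist(Y.T, Y.T)                             # pairwise embedding distances
    same = labels[:, None].eq(labels[None, :]).float()
    return (same * D.pow(2) + (1 - same) * F.relu(margin - D).pow(2)).mean()

def xml_loss(Z1, Y1, L1, Z2, Y2, L2, eps=0.05):
    """Cross-batch metric learning: prototypes fitted on one half predict the
    embeddings of the other half, whose classes are disjoint from the first."""
    V1, V2 = fit_prototypes(Z1, Y1, eps), fit_prototypes(Z2, Y2, eps)
    Y1_hat, Y2_hat = V2 @ Z1, V1 @ Z2                     # cross predictions
    return contrastive(Y1_hat, L1) + contrastive(Y2_hat, L2)
```

Since the ridge solution is obtained with a differentiable linear solve, gradients flow through the fitted prototypes back to the histograms and the CNN features.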
§ EXPERIMENTAL WORK
We start our empirical study with evaluations on DML benchmarks to show the effectiveness of XML. We extend our study further to validate the role of XML in learning.
§.§ Deep Metric Learning Experiments
Setup. We evaluate our method on CUB <cit.>, Cars <cit.>, InShop <cit.>, and SOP <cit.>. To minimize the confounding factors other than our proposed method, we keep the comparisons as fair as possible by following the MLRC <cit.> procedures with BNInception embeddings <cit.>. We additionally evaluate XML following the conventional settings <cit.> with ResNet50 <cit.> embeddings. For XML, we set ε=10, λ=0.01, ϵ=0.05, and m=64 in CUB & Cars and m=128 in SOP & InShop, based on our empirical analysis.
Results. We apply XML with contrastive <cit.> (C+XML) and ProxyAnchor <cit.> (PA+XML) losses in the MLRC setting, and with LIBC <cit.> in the conventional setting. For MLRC, we report the average (128D) and concatenated (512D) model MAP@R <cit.> performance, and R@1 for the conventional evaluation, in <ref> (higher is better). We observe consistent improvements upon direct application of the DML losses on all datasets and improve on the state of the art.
§.§ Proof of the Concept
For the following, we perform DML trainings with XML on Cifar10 <cit.> dataset using ResNet20 <cit.> architecture.
GAP and prototypes. To empirically verify <ref>, we use 2D feature embeddings for direct visualization. We sample 64 images from each class and obtain the local CNN features as well as the GAP features. We compute 48-many prototypes among the local features using greedy k-center <cit.>. We plot the prototypes in <ref> where we see that prototypes correspond to generalizable semantic entities. We also provide the covering radius (, δ in δ-cover) of the prototype set and the discrepancy between GAP and prototype convex combination (PCC) embeddings, which is less than δ as <ref> claims.
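For reference, a standard greedy k-center routine of the kind used to select the prototypes could look as follows (a generic sketch, not the authors' implementation); it also returns the covering radius, which plays the role of δ.

```python
import numpy as np

def greedy_k_center(X, k, seed=0):
    """Greedy k-center: repeatedly add the point farthest from the current centers."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]              # arbitrary first center
    dists = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))                    # farthest remaining point
        centers.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return X[centers], float(dists.max())              # prototypes and covering radius delta

# e.g., 48 prototypes over 640 two-dimensional local features
feats = np.random.default_rng(1).normal(size=(640, 2))
protos, delta = greedy_k_center(feats, 48)
print(protos.shape, delta)
```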
Prototypes with XML. We test the impact of XML on learned prototypes by performing DML on Cifar10 with 8 prototypes. We compare results with and without ℒ_XML and visualize the prototype histograms for each class in <ref>. With ℒ_XML, we observe transferable representations and that the prototypes are fit to transferable entities while they are fit to classes without it. For instance, XML prototypes represent a "car" in terms of parts and use some of them in the representation of "cat" as well. We quantitatively evaluate this behavior by randomly splitting the classes in half and using cross-batch metric learning in <ref>. Our evaluation shows that the features and prototypes with XML have superior unseen class generalization (MAP_x) while the seen class performances (MAP_c) are similar. We repeated the experiment 1000 times to ensure validity.
§ CONCLUSION
Building on the perspective explaining GAP as the convex combination of prototypes, we formulated learning of the prototypes and proposed cross-batch metric learning loss to regularize the learning for transferable prototypes. With extensive empirical studies, we validated the effectiveness of our method in various DML benchmarks.
§ APPENDIX
§.§ Preliminaries
[Optimal Transport Distance] The optimal transport (OT) distance between two probability mass distributions (p,X) and (q,Y) is:
‖ (p,X) - (q,Y) ‖_OT = min_π⩾0, Σ_iπ_ij=q_j, Σ_jπ_ij=p_i∑_ijc_ijπ_ij
where c_ij=‖ x_i - y_j ‖_2, (p,X)∈Σ_n ×^d× n denotes a probability mass distribution with masses p∈Σ_n in the probability simplex (i.e., Σ_n = { p ∈^n_⩾ 0|∑_i p_i = 1 }), and d-dimensional support X=[x_i]_i∈[n]∈^d× n.
[Maximum Mean Discrepancy] Maximum mean discrepancy (MMD) between two probability mass distributions (p,X) and (q,Y) is:
‖ (p,X) - (q,Y) ‖_MMD = max_f∈𝒞(X,Y)∑_i p_i f(x_i) - ∑_j q_j f(y_j)
where 𝒞(X,Y) is the set of continuous and bounded functions defined on a set covering the column vectors of X and Y.
[Optimal Transport Distance Dual] The Lagrangian dual of the optimal transport distance defined in <Ref> reads:
‖ (p,X) - (q,Y) ‖_OT = max_f_i + g_j ⩽ c_ij∑_i p_i f_i + ∑_j q_j g_j
with the dual variables λ={ f,g}.
Note that x_i=y_j implies f_i = -g_j, and from the fact that c_ij=c_ji, we can express the problem in (<ref>) as:
‖ (p,X) - (q,Y) ‖_OT = max_f∈𝔏_1∑_i p_i f(x_i) - ∑_j q_j f(x_j)
where 𝔏_1={ f | sup_x,y | f(x) - f(y)| / ‖ x - y‖_2⩽ 1 } is the set of 1-Lipschitz functions.
§.§ Proofs
[Histogram Operator] For n-many d-dimensional features X=[ x_i∈^d ]_i=1^n and m-many prototype features 𝒱=[ ν_i∈^d ]_i=1^m of the same dimension, the histogram of X on 𝒱 is denoted as z^∗ which is computed as the minimizer of the following problem:
(z^∗,π^∗) = argmax_z∈𝒮^m, π⩾0∑_ijν_i x_jπ_ij s.t. Σ_iπ_ij=1/n, Σ_jπ_ij=z_i
where 𝒮^m = { p ∈^m_⩾ 0|Σ_i p_i = 1 }.
The solution of the problem in (<ref>) reads:
π^∗_ij = 1/n1(i = argmax_k{ν_k x_j })
where 1(c) is 1 whenever c is true and 0 otherwise.
We prove our claim by contradiction. Denoting c_ij= - ν_i x_j, for any j, we express a solution as π^∗_ij=ϵ_i with ϵ_i ⩾ 0 and ∑_i ϵ_i = 1/n. Let i^∗ = argmin_k{ c_kj}. We can write π^∗_i^∗ j=1/n - ∑_i| i≠ i^∗ϵ_i. Our claim states that ϵ_i =0 for i≠ i^∗. We assume an optimal solution π^' with ϵ_i > 0 for some i≠ i^∗. Since π^' is optimal, we must have ∑_ijπ^'_ijc_ij⩽∑_ijπ_ijc_ij for any π. For the j^th column we have,
∑_iπ^'_ijc_ij = (1/n-∑_i^'| i^'≠ i^∗ϵ_i^') c_i^∗ j + ∑_i^'| i^'≠ i^∗ϵ_i^' c_i^' j
= (1/n) c_i^∗ j + ∑_i^'| i^'≠ i^∗ϵ_i^' (c_i^' j-c_i^∗ j) (a)>∑_iπ^∗_ijc_ij
where in (a) we use the fact that (c_i^' j-c_i^∗ j) > 0 and ϵ_i^'>0 for some i^' by the assumption. Hence, ∑_ijπ^'_ijc_ij > ∑_ijπ^∗_ijc_ij, which is a contradiction. Therefore, ϵ_i^'=0 must hold for all i^'≠ i^∗.
Given n-many convolutional features X=[ x_i∈𝒳]_i=1^n, and m-many prototype features 𝒱=[ ν_i]_i=1^m with {ν_i}_i=1^m being a δ-cover of 𝒳. If z^∗ is the histogram of X on V, defined in (<ref>), then we have:
‖∑_i=1^m z^∗_iν_i - ∑_j=1^n (1/n) x_j ‖_2 ⩽δ
We can express
‖∑_i∈[m] z^∗_iν_i - ∑_j∈[n] (1/n) x_j ‖_2^2 = ∑_i∈[m] z^∗_i f(ν_i) - ∑_j∈[n] q_j f(x_j)
where f(x) = x^⊤(∑_i z^∗_iν_i - ∑_j (1/n) x_j), and [n]={1,…,n}. Note that
f is a continuous and bounded operator for 𝒳 ={ x|‖ x ‖_2 ⩽ 1 } (we can always map the features inside the unit sphere without losing the relative distances). Moreover, the operator norm of f, ‖ f ‖, which is ‖∑_i z^∗_iν_i - ∑_j (1/n) x_j ‖_2, is less than or equal to 1.
∑_i∈[m] z^∗_i f(ν_i) - ∑_j∈[n] q_j f(x_j) ⩽‖ (z^∗, V) - (q, X) ‖_MMD
where q_i=1/n for all i. For continuous and bounded functions of operator norm less than 1, MMD is a lower bound for OT <cit.>. Namely,
∑_i∈[m] z^∗_i f(ν_i) - ∑_j∈[n] q_j f(x_j) ⩽‖ (z^∗, V) - (q, X) ‖_MMD
⩽‖ (z^∗, V) - (q, X) ‖_OT
Since the columns of V form a δ-cover of the set 𝒳, the optimal transport distance between the two distributions is bounded by δ, i.e., ‖ (z^∗, V) - (q, X) ‖_OT⩽δ. Thus, we finally have:
‖∑_i∈[m] z^∗_iν_i - ∑_j∈[n] (1/n) x_j ‖_2 ⩽δ.
|
http://arxiv.org/abs/2308.01917v1 | 20230711075627 | PePNet: A Periodicity-Perceived Workload Prediction Network Supporting Rare Occurrence of Heavy Workload | [
"Feiyi Chen",
"Zhen Qin",
"Hailiang Zhao",
"Mengchu Zhou",
"Shuiguang Deng"
] | cs.DC | [
"cs.DC",
"cs.LG"
] |
PePNet: A Periodicity-Perceived Workload Prediction Network Supporting Rare Occurrence of Heavy Workload
1st Feiyi Chen
College of Computer Science and Technology
Zhejiang University
Hangzhou, China
[email protected]
2nd Zhen Qin
College of Computer Science and Technology
Zhejiang University
Hangzhou, China
[email protected]
3rd Hailiang Zhao
College of Computer Science and Technology
Zhejiang University
Hangzhou, China
[email protected]
4th Mengchu Zhou
Aritificial Intelligence Institute
Zhejiang Gongshang University
Hangzhou, China
[email protected]
5th Shuiguang Deng^∗ *Corresponding author
College of Computer Science and Technology
Zhejiang University
Hangzhou, China
[email protected]
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Cloud providers can greatly benefit from accurate workload prediction. However, the workload of cloud servers is highly variable, with occasional heavy workload bursts. This makes workload prediction challenging.
There are mainly two categories of workload prediction methods: statistical methods and neural-network-based ones. The former ones rely on strong mathematical assumptions and have reported low accuracy when predicting highly variable workload. The latter ones offer higher overall accuracy, yet they are vulnerable to data imbalance between heavy workload and common one. This impairs the prediction accuracy of neural network-based models on heavy workload.
Either the overall inaccuracy of statistical methods or the heavy-workload inaccuracy of neural-network-based models can cause service level agreement violations.
Thus, we propose PePNet to improve overall, and especially heavy, workload prediction accuracy. It has two distinctive characteristics:
(i) A Periodicity-Perceived Mechanism to detect the existence of periodicity and the length of one period automatically, without any prior knowledge. Furthermore, it fuses periodic information adaptively, which makes it suitable for periodic, lax periodic and aperiodic time series.
(ii) An Achilles' Heel Loss Function that iteratively optimizes the most under-fitting part of the predicted sequence at each step, which significantly improves the prediction accuracy of heavy load.
Extensive experiments conducted on the Alibaba2018, SMD and Dinda's datasets demonstrate that PePNet improves MAPE for overall workload by 20.0% on average, compared with state-of-the-art methods. In particular, PePNet improves MAPE for heavy workload by 23.9% on average.
Time Series Prediction, Heavy Workload, Periodicity
§ INTRODUCTION
Accurate workload prediction brings huge economic benefits to cloud providers <cit.> and many cloud frameworks make real-time adjustments based on the workload prediction results <cit.>.
On the one hand, workload prediction provides meaningful insights to improve the utilization of cloud servers while ensuring quality of service (QoS) <cit.> <cit.>.
On the other hand, it can also alarm the forthcoming uncommon heavy workload, thus helping to avoid service level agreement (SLA) violations.
However, the high variability of cloud-server workload <cit.> <cit.> makes workload prediction challenging. Statistical methods and neural-network-based models are two mainstream workload predicting methods.
The former ones require the time series to satisfy strong mathematical assumptions and are uncompetitive when predicting highly variable workload <cit.>.
For example, the classical statistical method ARIMA <cit.> requires the time series to be stationary after differencing and shows unsatisfactory results when predicting highly variable workload <cit.>.
The neural-network-based models are more suitable to predict highly variable workloads,
but they are vulnerable to data imbalance between heavy workload and common one (the heavy workload is much more uncommon <cit.>). To demonstrate this, we present statistics of the heavy workload proportion of different datasets in Tab. <ref>, where the heavy workload is defined as the workload greater than the average workload plus one standard deviation for each machine.
It has also been proven that the data imbalance between heavy workload and common workload impairs the former's prediction accuracy <cit.>.
As an empirical example, Fig. <ref> shows the overall prediction accuracy and heavy-workload prediction accuracy of some prevailing methods (ARIMA <cit.>, LSTM with attention (LSTMa) <cit.>, Informer <cit.>, Learning based Prediction Algorithm for cloud workload (L-PAW) <cit.>, LSTM and GRU) on the Alibaba2018 dataset, where the heavy-workload prediction error is nearly twice as large as the overall prediction error.
The inaccurate prediction will not only reduce the utilization of cloud servers but also bring SLA violations, as it provides wrong information to the scheduler.
To predict overall workload and heavy workload accurately, we propose a Periodicity-Perceived Workload Prediction Network (PePNet), which supports the prediction of highly variable workload with rarely-occurring heavy workload.
PePNet mainly improves the prediction accuracy in two aspects: 1) using the periodically recurrent patterns to guide the workload prediction; and 2) using an Achilles' Heel Loss Function[Achilles' Heel originates from Greek mythology and is the fatal weakness of the hero Achilles], which pays more attention to the under-fitting part to offset the negative influence of data imbalance.
One challenge of utilizing the periodic information is that we have no prior knowledge of the periodicity (i.e., we neither know whether the time series is periodic nor do we know the length of its period). Besides, periodic and aperiodic time series are mixed, and it is better to manipulate them with a unified architecture for convenience. Thus, we propose a Periodicity-Perceived Mechanism, which consists of a Periodicity-Mining Module and a Periodicity-Fusing Module. The Periodicity-Mining Module detects the periodicity and period length, and the Periodicity-Fusing Module adaptively fuses the periodic information for periodic and aperiodic time series. One challenge of using the Periodicity-Mining Module is that
one of its hyperparameters depends on expert experience of workload. To solve this problem, we propose an automatic hyperparameter determination method for it based on statistical observations.
The Achilles' Heel Loss Function specifically targets the upper bound of the prediction error, which enables PePNet to strengthen and improve the most vulnerable part of the prediction.
The most under-fitting part shifts among different steps, and each part in the predicted sequence is fitted alternately. In this way, the Achilles' Heel Loss Function solves the under-fitting problem of heavy load and improves the heavy-load prediction accuracy as well as the overall prediction accuracy. The challenge in designing such a loss function is that the trivial operation of picking the most under-fitting part is non-differentiable during backpropagation. Thus, we propose a smooth operator for this operation. Furthermore, we prove that the smooth operator can approximate the trivial operation of picking the most under-fitting part arbitrarily closely.
The main contributions of our work are summarized as:
* designing a Periodicity-Perceived Mechanism, which mines and adaptively fuses the periodic information;
* proving an error bound of periodic information extracted by Periodicity-Fusing Module;
* providing an automatic hyperparameter determination method based on statistical observations;
* designing an Achilles' Heel loss function by using a smooth operator to improve heavy-workload prediction accuracy and overall prediction accuracy;
* conducting extensive experiments on the Alibaba2018, SMD and Dinda's datasets and demonstrating that PePNet improves the heavy-workload prediction accuracy (MAPE) by up to 40.8% and by 23.9% on average, while promoting the overall prediction accuracy (MAPE) by 20.0% on average.
§ RELATED WORK
We first review existing methods for workload and time series prediction. Since uses periodic information, we also summarize related work for this topic.
§.§ Workload and Time Series Prediction
Time series prediction aims to predict future time series based on the observed workload data.
Based on its aims, workload prediction can be divided into: cloud resource allocation <cit.>, autoscaling <cit.>, etc.
Based on users' interests, the prediction can be divided into: the prediction of GPU workload <cit.>, the prediction of CPU utilization <cit.>, the prediction of virtual desktop infrastructure pool workload <cit.>, the prediction of disk healthy states <cit.>, workload prediction framework <cit.>, etc.
As workload prediction is a branch of time series prediction, it is necessary to review the development of time series prediction. There are several research directions. To improve the prediction accuracy of dynamic real-world time series, researchers propose some traditional recurrent neural network <cit.> <cit.> to replace classical statistic methods <cit.> <cit.> <cit.>. To improve the model memory of long input sequences, researchers propose models combining recurrent neural networks and attention mechanism <cit.> <cit.>. To improve the prediction accuracy of a long sequence, researchers propose models based on variants of transformer <cit.> <cit.> <cit.> and auto-encoder <cit.>. To make use of the relevance among different channels in multi-variant time series, researchers propose models combining GNN or CNN with time series manipulating networks <cit.> <cit.>.
Another research direction is improving the robustness of deep networks dealing with time series. The robustness includes five implications: the robustness for missing data, the robustness for the malicious or limited labels, the generalization of the trained model, the robustness toward noise in time series, and the robustness for the irregular data sampling period. For example, Belkhouja et al. propose a Robust Training for Time-Series (ROTS) to improve deep neural network robustness for generalization and noise <cit.>; Tan et al. propose Dual-Attention Time-Aware Gated Recurrent Unit (DATA-GRU) to manipulate time series sampled in an irregular period <cit.>; Zhang et al. propose Multivariate Time Series
Classification with Attentional Prototypical Network (TapNet) to deal with time series with limited labels <cit.>; Tang et al. propose network modeling local and global temporal dynamics (LGnet) to deal with a data-missing problem in time series <cit.>; Luo et al. propose E^2GAN, which uses adversarial learning to deal with data missing problem <cit.>; Yao et al. propose Spatial-Temporal Dynamic Network (STDN) to deal with lax-strict periodicity caused by noise <cit.>; Luo et al. propose Uncertainty-Aware Heuristic Search (UAHS) to deal with uncertain prediction error caused by noise <cit.>.
But none of these methods focus on dealing with the prediction accuracy of heavy workload.
§.§ Periodicity Information Extraction
In the field of traffic forecasting, there is also significant periodicity and we summarize some recent popular methods below.
There are several mechanisms to fuse periodic information. For example, Guo et al. propose a novel attention-based spatial-temporal graph convolution network <cit.>; Lv et al. use a fully-connected layer and take advantage of both CNN and RNN to deal with periodic time series <cit.>; Chen et al. propose Hop Res-RGNN to deal with periodic patterns <cit.>.
But these methods do not consider the lax periodicity of time series.
Yao et al. propose an attention mechanism to tackle periodic shift <cit.>.
These studies use the prior knowledge of the daily and weekly periodicity of traffic loads.
However, in the scenario of service load prediction, there is usually no prior knowledge about the periodicity. Besides, the periodicity of the time series in these studies is fixed, while the periodicity of the workload for different machines is variable.
§ METHODOLOGY
§.§ Overview of
PePNet is based on an encoder-decoder <cit.> architecture and fuses three kinds of information from raw data: the long-term tendency information, the short-term dependent information and the periodic information. Besides, the whole model is trained with an Achilles' Heel Loss Function to offset the negative impact of data imbalance. The overview of PePNet is shown in Fig. <ref>.
The input data X is divided into three parts, which are denoted by X^enc_short∈ℝ^I × d, X^enc_long∈ℝ^M × d and X_period∈ℝ^P × d, respectively. Here, d is the dimension of the feature at each time slot, and I, M and P stand for the lengths of the short-term dependent information, the long-term tendency information and the periodic information, respectively. The data division is shown in Fig. <ref>: Y is the ground truth value of the predicted workload;
X^enc_short is the nearest workload time series before the predicting part and shows high relevance to predicting workload;
X^enc_long is a bit longer workload time series before X^enc_short and reflects the long term tendency in workload variation;
X_period is the first period of workload for each machine; The y_period is the workload sequence corresponding to Y in the period of X_period. The extracting process of X_period and y_period is illustrated in section <ref>.
We use X^enc_long,i and X^enc_short,i to denote the i-th time slot's workload in X^enc_long and X^enc_short respectively.
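As a simple illustration of this data division, the NumPy sketch below slices one training sample out of a univariate workload series; the concrete lengths I, M and J are placeholders and the exact indexing convention is our assumption.

```python
import numpy as np

def split_sample(series, t, I, M, J):
    """Slice one training sample whose prediction window starts at index t:
    X_long (M steps), then X_short (I steps), then the target Y (J steps)."""
    x_short = series[t - I:t]            # nearest history, length I
    x_long = series[t - I - M:t - I]     # longer history preceding it, length M
    y = series[t:t + J]                  # ground truth to be predicted
    return x_long, x_short, y

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)
x_long, x_short, y = split_sample(series, t=400, I=50, M=100, J=10)
print(x_long.shape, x_short.shape, y.shape)
```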
These three kinds of information contribute differently to workload prediction. The short-term dependent information X^enc_short is highly related to the predicted sequence, as the customers' behaviors are continuous and the workload is similar within a short time slot. The long-term tendency information indicates forthcoming heavy load, as heavy load is more likely to happen in an ascending tendency. This tendency information is usually masked by noise in the short term and has to be extracted from a long-term time series. As shown in Fig. <ref>, we pick a small segment of the workload from a long-term-ascending sequence. Despite the long-term ascending tendency, the short-term workload just fluctuates around a stable value and does not show any increasing tendency.
The periodic information provides the recurrent patterns in history and promotes both overall prediction accuracy and heavy-load prediction accuracy.
According to the different properties of these three kinds of information, PePNet uses different modules to deal with them in the encoder. PePNet uses an LSTM to capture short-term dependent information from X^enc_short and uses the Periodicity-Mining Module to detect and extract periodic information from X_period. As for the long-term tendency information, PePNet first downsamples X^enc_long, as this helps to improve the processing efficiency as well as maintain and magnify the trend information of the time series. Then, PePNet uses self-attention to manipulate the down-sampled sequence. In this way, PePNet reduces the forgetting effect <cit.> in long sequence processing.
The processes of extracting short-term, long-term and periodic information are shown in Eq.<ref>-<ref>. In the function LSTM(x,h,c), x, h, c respectively stand for the input of LSTM, the hidden state and the cell state<cit.>. In function Attention(q,k,v), q, k, v respectively stand for the query, key and value<cit.>.
X^enc_short,i+1, h_i+1, c_i+1 = LSTM(X^enc_short,i, h_i, c_i)
y_period,X_period=PeriodicityMining(X,X^enc_short)
X^enc_long= Downsample(Attention(X^enc_long,X^enc_long,X^enc_long))
In the decoder, PePNet uses an LSTM to generate ŷ∈ℝ^J × d, where J stands for the prediction length. After that, PePNet uses the Periodicity-Fusing Module to estimate the reliability of ŷ and fuses it with y_period according to their reliability. Then, the Periodicity-Fusing Module generates the final prediction y. The process of the decoder is shown in Eq.<ref>-<ref>, where W_h and W_c are both model parameters. The symbol ⊕ stands for concatenation.
ŷ_j+1, h_j+1, c_j+1 = LSTM(ŷ_j,h_j+1/2,c_j+1/2)
h_j+1/2=W_h · (Attention(h_j,X^enc_long,X^enc_long)⊕ h_j)
c_j+1/2=W_c · (Attention(c_j,X^enc_long,X^enc_long)⊕ c_j)
y=PeriodicityFusing(X^enc_short,y_period,ŷ)
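A compact PyTorch sketch of one decoder step in Eq.<ref>-<ref> is given below; it assumes the long-term features have already been projected to the hidden size, uses single-head dot-product attention, and omits the output projection producing ŷ_j+1, so the dimensions and layer choices are illustrative only.

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    """One decoder step: hidden and cell states attend to the long-term
    features before the LSTM update (a sketch of the decoder equations)."""

    def __init__(self, d, hidden):
        super().__init__()
        self.cell = nn.LSTMCell(d, hidden)
        self.W_h = nn.Linear(2 * hidden, hidden, bias=False)
        self.W_c = nn.Linear(2 * hidden, hidden, bias=False)

    @staticmethod
    def attend(q, kv):                                    # q: (B, H), kv: (B, M, H)
        w = torch.softmax((kv @ q.unsqueeze(-1)).squeeze(-1), dim=1)   # (B, M)
        return (w.unsqueeze(-1) * kv).sum(dim=1)                       # (B, H)

    def forward(self, y_prev, h, c, x_long):
        h_half = self.W_h(torch.cat([self.attend(h, x_long), h], dim=-1))
        c_half = self.W_c(torch.cat([self.attend(c, x_long), c], dim=-1))
        h_next, c_next = self.cell(y_prev, (h_half, c_half))
        return h_next, c_next             # the next prediction would be a projection of h_next

step = DecoderStep(d=1, hidden=32)
h, c = step(torch.randn(4, 1), torch.randn(4, 32), torch.randn(4, 32), torch.randn(4, 7, 32))
print(h.shape, c.shape)
```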
When training the model, the imbalance between heavy load and normal workload usually leads to unsatisfactory accuracy of heavy load prediction. Thus, we propose an Achilles' Heel Loss function, which aims to minimize the upper bound of prediction error. At each step, Achilles' Heel Loss Function iteratively optimizes the most under-fitting part for this step, in order to improve the prediction accuracy of the heavy load when keeping the overall prediction accurate. It is named after Achilles' Heel as it focuses on the most vulnerable portion along the predicting sequence.
Thus, the heavy-load prediction becomes more accurate while the overall prediction remains accurate. One challenge in minimizing the upper bound is that the operation of picking the upper bound is usually non-differentiable. Thus, we propose a smooth operator instead of a hard max operation. It is proved that the smooth operator can be made arbitrarily close to the operation of picking the upper bound.
The most distinctive characteristics of PePNet are: 1) it is equipped with the Periodicity-Perceived Mechanism to mine the periodicity (Periodicity-Mining Module) and adaptively fuse the periodic information (Periodicity-Fusing Module); 2) it theoretically proves the error bound of the periodic information extracted by the Periodicity-Mining Module; 3) it uses an automatic hyperparameter determination method for the Periodicity-Mining Module; 4) it uses an Achilles' Heel Loss Function to mitigate the negative effect of data imbalance on model performance.
§.§ Periodicity-Perceived Mechanism
Periodic information can effectively improve the overall prediction accuracy as well as the heavy workload prediction accuracy. It consists of Periodicity-Mining Module and Periodicity-Fusing Module.
There are three main challenges to fuse periodic information.
(1) We have no prior knowledge about the periodicity of the workload for different machines, which calls for the Periodicity-Mining Module to detect the existence of periodicity and the length of one period.
(2) The periodicity for different machines is variable (i.e., strict periodic, lax periodic, aperiodic), which calls for an adaptive Periodicity-Fusing Module.
(3) It is hard to fuse the periodic information in lax periodic series, because lax periodic series are sometimes periodic and sometimes not. In Fig. <ref>, we overlap a series shifted one period ahead and the original series. As Fig. <ref> shows, there are mainly two obstacles to fuse periodic information in lax periodic series: periodic shift and local periodicity violation. The former one can be solved by dynamic matching which is depicted in section Periodicity-Mining Module. The latter one is caused by noise and external events. Therefore, we filter out the noise in periodic information in Periodicity-Mining Module and evaluate the reliability of periodic information in the Periodicity-Fusing Module.
§.§.§ Periodicity-Mining Module
PePNet calculates the time series autocorrelation coefficient ρ_k for each machine, as shown in Eq.<ref>, which represents the linear correlation among all workloads with interval k (i.e., the linear correlation between X_t and X_t-k, for every t∈{t∈ℕ^+|t<sequence length}, where X_t stands for the workload at the t-th time slot). As shown in Fig. <ref>, when the workload is periodic, the autocorrelation coefficient rises to a large value again after the first decline, which represents a high hop relevance of the workload. When the sequence is aperiodic, the autocorrelation coefficients are shown in Fig. <ref>, which have a distinct pattern. Therefore, PePNet sets a hyperparameter 𝒯 and judges whether a time series is periodic by detecting whether the autocorrelation coefficient crosses over 𝒯 again. The value of 𝒯 denotes the acceptable range of periodicity strictness.
ρ_k=cov(X_t-k,X_t)/√(cov(X_t,X_t)cov(X_t-k,X_t-k))
The position of the first peak in autocorrelation coefficients of periodic sequence strikes the highest hop relevance and denotes the length of one period. We illustrate it in Fig. <ref>. The reason why the autocorrelation coefficient shows such a pattern is that the user behavior is continuous in time, so the autocorrelation coefficients of small k are large. As the time interval k increases, the relevance drops down. But when the time interval k reaches around the integer multiple of period length, the workload at the two moments X_t and X_t-k shows a high linear correlation again.
Based on these observations, as shown in Algorithm <ref>, PePNet traverses the value of k from small to large. If PePNet finds the smallest k that makes the autocorrelation coefficient ρ_k satisfy the conditions ρ_k>ρ_k-1, ρ_k>ρ_k+1 and ρ_k> 𝒯, the time series is periodic and the length of one period is k. Otherwise, the time series is aperiodic. We cut out the first period of the machines whose workloads are periodic in the training set as a periodic-information knowledge base, which is denoted by X_period. To solve the issue of period shift, PePNet uses dynamic matching. If the workload is periodic, PePNet finds the sequence {X_t_a,…,X_t_a+I-1} in X_period whose distance from X^enc_short is the smallest. The distance can be computed either by Mean Square Error (MSE) or by Dynamic Time Warping (DTW). Then, PePNet sets y_period={X_t_a+I,…,X_t_a+I+J}. Otherwise, {-1,-1,…,-1} is set as y_period. Such a process is illustrated in Fig. <ref>.
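A NumPy sketch of this mining procedure is given below; it uses the standard sample autocorrelation as an approximation of Eq.<ref>, returns None instead of the {-1,…,-1} placeholder for aperiodic series, and its boundary handling is our own simplification.

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation coefficients rho_k for k = 1 .. max_lag."""
    a = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(a, a)
    return np.array([np.dot(a[:-k], a[k:]) / denom for k in range(1, max_lag + 1)])

def mine_period(x, threshold, max_lag=None):
    """Return the period length if some rho_k is a local peak above the
    threshold (the role of T in the text); otherwise None (aperiodic)."""
    max_lag = max_lag or len(x) // 2
    rho = autocorr(x, max_lag)
    for k in range(1, len(rho) - 1):                  # rho[k] corresponds to lag k + 1
        if rho[k] > rho[k - 1] and rho[k] > rho[k + 1] and rho[k] > threshold:
            return k + 1
    return None

def match_periodic_target(x_period, x_short, J):
    """Dynamic matching: slide over the stored period, pick the window closest
    to x_short under MSE, and return the following J values as y_period."""
    I = len(x_short)
    errs = [np.mean((x_period[a:a + I] - x_short) ** 2)
            for a in range(len(x_period) - I - J + 1)]
    a = int(np.argmin(errs))
    return x_period[a + I:a + I + J]

# e.g., a noisy sine of period 100 should typically yield a period near 100
t = np.arange(2000)
x = np.sin(2 * np.pi * t / 100) + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(mine_period(x, threshold=0.85))
```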
Error estimation. We use E[(X_t-k-X_t)^2] to measure the quality of periodic length k found by the Periodicity-Mining Module. We have the following result.
Theorem 1. When using Algorithm <ref>, E[(X_t-k-X_t)^2] is upper bounded by (Δσ)^2+(Δμ)^2+2(1-𝒯)σ_t-kσ_t, where σ_t-k, σ_t, μ_t-k and μ_t are the standard deviations and expectations at the t-k and t time slots, respectively, and Δσ and Δμ are σ_t-k-σ_t and μ_t-k-μ_t, respectively. Furthermore, when the time series is stationary, E[(X_t-k-X_t)^2] is upper bounded by 2(1-𝒯)σ^2, where σ is the standard deviation of the time series.
Proof of Theorem 1. According to Algorithm <ref>, when PePNet finds k, Eq.<ref>-<ref> hold, where a_i=X_i-μ_i and t̃=t-k. E[(X_t̃-X_t)^2] can be transformed into Eq.<ref>. Then, we substitute Eq.<ref> into Eq.<ref> and obtain Eq.<ref>. Eq.<ref> can be further reduced to Eq.<ref>, and the first part of Theorem 1 is proven. When the time series is stationary, Δσ^2 and Δμ^2 are zero, so the upper bound reduces to 2(1-𝒯)σ^2.
E[a_t̃a_t]/σ_t̃σ_t>𝒯
E[a_t̃a_t]>𝒯* σ_t̃σ_t
E[(X_t̃-X_t)^2] = E[a_t̃^2+a_t^2-2a_t̃a_t+2ΔμΔ a+Δμ^2]
E[(X_t̃-X_t)^2] < E[a_t̃^2]+E[a_t^2]-2*𝒯√(E[a_t̃^2]E[a_t^2])+Δμ^2
E[(X_t̃-X_t)^2] < Δσ^2+Δμ^2+2(1-𝒯)σ_t̃σ_t
Taking a further look at the upper bound, the first and second terms of the upper bound are constants. Thus, the value of the error depends on the last term, which is determined by the value of 𝒯. Considering that 𝒯 is less than or equal to one, the closer 𝒯 is to one, the smaller the error is. However, most time series are lax periodic in reality, and setting 𝒯 to 1 may miss many lax periodic time series. Thus, there is a tradeoff between improving the accuracy of the periodic length and finding all the periodic time series (including lax periodic ones).
Since this tradeoff may make it challenging to choose a proper value of 𝒯, we provide an automatic setting method for it based on statistical observations. For finer tuning, this method also provides a searching range for 𝒫.
The hyperparameter 𝒫 is a threshold set for the first peak of the autocorrelation coefficient. Thus, we look into the distribution of the first-peak values. Taking Alibaba2018-CPU[https://github.com/alibaba/clusterdata] as an example, we take 75 machines in it and plot a histogram of the first-peak values in Fig. <ref>. There are two properties of the distribution: it is clustered, and each cluster approximately follows a Gaussian distribution. We use the Gaussian Mixture Model (GMM) algorithm to fit the mixed Gaussian distribution of the first-peak values and plot the fitted Gaussian curves on the histogram in Fig. <ref>. The parameters for this GMM are given in Tab. <ref>. Different clusters have different degrees of periodicity: the higher the expectation of the cluster is, the stricter its periodicity is. To guarantee the quality of the periodicity the Periodicity-Mining Module detects, it picks the workload in the cluster with the highest expected value as periodic and filters out the others.
Let μ and σ denote the expectation and standard deviation of the cluster with the highest expectation. In industrial applications, μ-σ can be directly used as 𝒫 to save labor costs, as the probability that the first peak is greater than μ-σ in this cluster is 84.2% according to the Gaussian distribution. For finer tuning, 𝒫 can be explored between μ-σ and μ+σ.
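The following sketch implements this automatic setting with scikit-learn's GaussianMixture; the two-component choice follows the Alibaba2018-CPU example and may need adjusting for other datasets.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def auto_threshold(first_peaks, n_components=2):
    """Fit a GMM to the first-peak values and return mu - sigma of the cluster
    with the highest mean as the default threshold, plus the search range."""
    peaks = np.asarray(first_peaks, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(peaks)
    top = int(np.argmax(gmm.means_.ravel()))
    mu = float(gmm.means_.ravel()[top])
    sigma = float(np.sqrt(gmm.covariances_.ravel()[top]))
    return mu - sigma, (mu - sigma, mu + sigma)

# usage with synthetic first-peak values from two clusters
rng = np.random.default_rng(0)
peaks = np.concatenate([rng.normal(0.55, 0.05, 40), rng.normal(0.9, 0.03, 35)])
default_P, search_range = auto_threshold(peaks)
print(default_P, search_range)
```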
§.§.§ Periodicity-Fusing Module
To filter out the random noise in the periodic information, PePNet first uses an auto-encoder <cit.>.
To solve the problem of variable periodicity for different machines and the problem of local periodicity violation, PePNet uses an attention mechanism to evaluate the reliability of the periodic information. The final prediction y is given in Eq.<ref>-<ref>.
y_period^'= Autoencoder(y_period)
y=Attention(X^enc_short,(ŷ,y_period^'),(ŷ,y_period^'))
§.§ Achilles' Heel Loss Function
As mentioned before, heavy load rarely occurs, which leads to poor performance of heavy-load prediction due to data imbalance. To solve this problem, PePNet minimizes the worst case in a predicted sequence at each step. When iteratively training the model, PePNet optimizes the worst case in one step, and in the next step the worst case may occur at another place. In this way, the Achilles' Heel Loss Function optimizes the most under-fitting part at each training step and effectively improves the accuracy of heavy load.
A naive idea is to optimize the prediction with the highest error in each predicted sequence, as shown in Eq.<ref>.
In Eq.<ref>, T equals {t_1,…,t_1+J}, which are the time slots of a forecasting sequence. y_t denotes the predicted workload at time t (the prediction length for one sample should be at least 2), and Y_t denotes the ground-truth workload at time t.
l(Y,y)=max_t∈ T (y_t-Y_t)^2
However, the function in Eq.<ref> is not smooth and cannot be differentiated for backpropagation. Inspired by <cit.>, we propose a smooth max operator in Eq.<ref>, where γ is a hyperparameter and A_i can be any variable. Then, the loss function in Eq.<ref> can be transformed into Eq.<ref>.
max (A_1,A_2,…,A_N)=γlog(∑^N_i exp(A_i/γ)), γ>0
l(Y,y)=γlog(∑^T_t exp((y_t-Y_t)^2/γ)), γ>0
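In code, the loss is a temperature-scaled log-sum-exp over the squared errors of each predicted sequence; the PyTorch sketch below is our illustration, with γ=0.1 taken from the settings discussed below rather than prescribed here.

```python
import torch

def achilles_heel_loss(y_pred, y_true, gamma=0.1):
    """Smooth-max loss: gamma * log(sum_t exp((y_t - Y_t)^2 / gamma)) per
    predicted sequence, averaged over the batch; logsumexp keeps it stable."""
    sq_err = (y_pred - y_true) ** 2                              # (batch, J)
    return (gamma * torch.logsumexp(sq_err / gamma, dim=-1)).mean()

# the weights alpha_t implied by the gradient are a softmax over the errors
y_pred, y_true = torch.randn(8, 10), torch.randn(8, 10)
alpha = torch.softmax((y_pred - y_true) ** 2 / 0.1, dim=-1)
print(achilles_heel_loss(y_pred, y_true), alpha.sum(dim=-1))     # weights sum to 1
```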
Let us take a further look at Eq.<ref>. It actually distributes more weight to the gradient of a higher prediction error when backpropagating. This becomes obvious by differentiating Eq.<ref>, as shown in Eq.<ref>, where α_t=exp((y_t(𝒫)-Y_t)^2/γ) / ∑_t=1^T exp((y_t(𝒫)-Y_t)^2/γ) and 𝒫 stands for the parameters of PePNet. α_t is a normalized weight whose value depends on the prediction error at time slot t, and γ is a scale factor. When γ is large, the differences in prediction errors across time slots are narrowed and the weights are relatively uniform over time slots. When γ is small, the differences in prediction errors across time slots are amplified and the weights become polarized. To visually demonstrate this point, we plot the value of α_t for different combinations of γ and prediction error y_t-Y_t in Fig. <ref>. In Fig. <ref>, each row shows the value of α_t for a set of prediction errors {y_t-Y_t|0 ≤ t ≤ 9} for the specified γ.
As Fig. <ref> shown, for a specific γ, the larger the prediction error is, the bigger the α_t is. Besides, for a specific prediction error in a specific error set, the smaller the γ is, the bigger the α_t is. When γ is set to 0.1, α_t for the largest prediction error in this row approximates 1.
∂ l(Y,y)/∂𝒫
=∑_t=1^T(α_t ·∂(y_t(𝒫)-Y_t)^2/∂𝒫)
To theoretically justify this tendency, let us look into the extreme situation where γ approaches 0. We have the following result, which implies that as γ approaches 0, only the gradient for the maximum prediction error is assigned a weight of 1, while the others are assigned a weight of 0.
Theorem 2. As γ approaches 0, the gradient of the loss function approaches Eq.<ref>, where t^* = argmax_t (y_t-Y_t)^2 and g(t^*,𝒫) = (y[t^*]-Y[t^*])^2.
∂ l(Y,y)/∂𝒫=∂ g(t^*,𝒫)/∂𝒫
Proof of Theorem 2.
The derivative of l(Y,y) is shown in Eq.<ref>. Then, both the numerator and the denominator on the right-hand side are divided by exp(g(t^*,𝒫)/γ) simultaneously, and the first step of Eq.<ref> is obtained. By the definition of t^*, g(t^*,𝒫) is larger than g(t,𝒫) for all t≠ t^*. Thus, g(t,𝒫)-g(t^*,𝒫)<0, ∀ t≠ t^*. When γ (γ>0) approaches zero, exp((g(t,𝒫)-g(t^*,𝒫))/γ) also approaches zero. In this way, the second step of Eq.<ref> is obtained.
∂ l(Y,y)/∂𝒫=∑_t^T[exp(g(t,𝒫)/γ)∂ g(t,𝒫)/∂𝒫]/∑_t^T exp(g(t,𝒫)/γ)
lim_γ→ 0∂ l(Y,y)/∂𝒫
=lim_γ→ 0∂ g(t^*,𝒫)/∂𝒫+∑_t,t≠ t^*^T [exp(g(t,𝒫)-g(t^*,𝒫)/γ) ∂ g(t,𝒫)/∂𝒫]/1+∑_t,t≠ t^*^T exp(g(t,𝒫)-g(t^*,𝒫)/γ)
=∂ g(t^*,𝒫)/∂𝒫
§ EXPERIMENT
In this section, we conduct extensive experiments to validate the following findings:
* PePNet improves the accuracy of heavy-workload prediction as well as the overall prediction accuracy with respect to the state of the art.
* PePNet only introduces a slight time overhead for training and inference.
* PePNet is insensitive to hyperparameters and is highly robust.
* Each mechanism in PePNet is shown to play an important role via ablation experiments.
* Due to its adaptability to periodicity, PePNet still works well when the time series is aperiodic.
§.§ Experiment Setup
Hyperparameters. We summarize the most important hyperparameters of PePNet in Tab. <ref>.
Baseline Methods. We compare PePNet with state-of-the-art time series prediction methods, popular workload prediction models, and the variants of PePNet.
* Autoformer (Autof) <cit.>: Autoformer is a novel decomposition architecture with an Auto-Correlation mechanism. It breaks the pre-processing convention of series decomposition and renovating it as a basic inner block of deep models. Further, Autoformer also has an Auto-Correlation mechanism based on the series periodicity.
* LSTM with attention mehcanism (LSTMa) <cit.>: The approach extracts the sequential and contextual features of the historical workload data by an encoder network and integrates an attention mechanism into the decoder network.
* Informer (Inf) <cit.>: The method uses a ProbSparse attention mechanism and a halving cascading layer to extract the input information and use a generative style decoder to predict. Informer is one of the most recognized methods of time series prediction in recent years.
* L-PAW <cit.>: The method integrates top-sparse auto-encoder (TSA) and gated recurrent unit (GRU) block into RNN to achieve the adaptive and accurate prediction for highly-variable workloads. L-PAW is a model specifically designed for workload prediction and is also widely used.
* LSTM <cit.>: A classic time series processing model, which uses the hidden state to store long sequences of information.
* GRU <cit.>: A variant of the LSTM which modifies the forget gate structure in LSTM to make the model simpler.
* Reformer (Ref) <cit.>: Reformer introduces two techniques to improve the efficiency of Transformers. Firstly, it replaces dot-product attention with one that uses locality-sensitive hashing. Secondly, it uses reversible residual layers instead of standard residuals to reduce the memory consumption of the model. It is a widely recognized method in time series prediction.
* Fedformer (Fedf) <cit.>: Fedformer combines Transformer with a seasonal-trend decomposition method, in which the decomposition method captures the global profile of time series while Transformers capture more detailed structures. Fedformer is an effective and novel method in time series prediction.
* Variants of PePNet: These approaches are introduced for the ablation study. We use PePNet^- to denote the PePNet that removes the Periodicity-Perceived Mechanism, and PePNet^† to denote the PePNet that uses MSE as the loss function. A third variant removes the Periodicity-Perceived Mechanism and does not use the Achilles' Heel loss function.
Datasets. We perform experiments on three public datasets: Alibaba's cluster trace v2018[https://github.com/alibaba/clusterdata], Dinda's dataset <cit.> and Server Machine Dataset (SMD) <cit.>. These datasets are collected by different organizations with different system configurations. Their use can help to verify the generalization of .
We plot the data distribution of different datasets in Fig. <ref>. It is worth noting that different datasets represent different data distribution: long-tailed distribution (Dinda's dataset), centralized distribution (memory usage of Alibaba2018 and SMD), and relatively even data distribution (CPU usage of Alibaba2018).
* Dinda's dataset (long-tailed distribution): Dinda's dataset is collected from two groups of machines. The first is the Alpha cluster at the Pittsburgh Supercomputing Center (PSC). The second one includes computing servers (Mojave, Sahara), a testbed (manchester1-8), and desktop workstations at Carnegie Mellon University. The workload in Dinda's dataset shows significant periodicity and sometimes there are large peaks far above average (long-tailed distribution).
* Alibaba2018-Memory (centralized distribution): Alibaba's cluster trace is sampled from one of Alibaba's production clusters, which includes about 4000 machines' workload in 8 days. Memory usage of some machines in this dataset shows relatively significant periodicity and others are aperiodic. Memory usage is generally stable and is distributed around a certain value (centralized distribution).
* Alibaba2018-CPU (relatively even distribution): The CPU usage in Alibaba2018 shows more significant periodicity on heavy workload than light load. Besides, it is highly variable and is distributed relatively even.
* SMD dataset (centralized distribution with rare extreme values): SMD is a new 5-week-long dataset, which is collected from a top 10 Internet company. SMD is made up of data from 28 different machines. It consists of 38 features, including CPU utilization, memory utilization, disk I/O, etc. The workload in this dataset shows high periodicity. Besides, most of workload of this dataset is concentrated around 10-20, while others are extremely big.
We use the first 1,000,000 lines in Alibaba's dataset and axp0 in Dinda's dataset. The values in Alibaba2018 are utilization percentage and we uniformly scale it to range [0, 1].
Evaluation metrics. We use three metrics to evaluate 's performance: Mean Squared Error (MSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE), which are calculated as in Eq. <ref>-<ref>. These metrics are among the most recognized and widely used ones in time series prediction. There are many influential works using these metrics, such as <cit.>, <cit.>, <cit.>. Besides, these metrics measure not only the relative error (MAPE) but also the absolute error (MSE, MAE).
MSE = (1/n) ∑_i=1^n (p[i]-y[i])^2
MAE = (1/n) ∑_i=1^n |p[i]-y[i]|
MAPE = (1/n) ∑_i=1^n |p[i]-y[i]| / y[i]
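For concreteness, the three metrics can be computed directly from the predicted series p and the observed series y; the following minimal NumPy sketch is purely illustrative and is not part of the original implementation (the small constant added to the denominator of MAPE is our own guard against division by zero):

```python
import numpy as np

def mse(p, y):
    # Mean Squared Error: average squared deviation between prediction and truth
    return np.mean((p - y) ** 2)

def mae(p, y):
    # Mean Absolute Error: average absolute deviation
    return np.mean(np.abs(p - y))

def mape(p, y, eps=1e-8):
    # Mean Absolute Percentage Error: relative error; eps avoids division by zero
    return np.mean(np.abs(p - y) / (np.abs(y) + eps))

# Toy example with dummy predictions and ground truth
p = np.array([0.42, 0.55, 0.93, 0.20])
y = np.array([0.40, 0.60, 0.90, 0.25])
print(mse(p, y), mae(p, y), mape(p, y))
```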
§.§ Prediction Accuracy
We summarize the performance of all the methods in Tab. <ref>. We use the above metrics to evaluate heavy-workload forecasting accuracy as well as overall forecasting accuracy. We highlight the highest accuracy in boldface. Besides, if achieves the highest accuracy, we highlight the best baseline result with underlines.
As shown in Tab. <ref>, achieves the best heavy-workload prediction accuracy as well as the best overall prediction accuracy on all three workload datasets, with few exceptions. Comparing the prediction accuracy across the three datasets, works best on the long-tail distributed Dinda's dataset, while being less advantageous on the relatively evenly distributed Alibaba2018-CPU dataset.
Long-tail data distribution. works best in this case. Compared with the best performance among the baselines, improves MAPE, MSE, and MAE by 17.9%, 24.0%, and 29.5% respectively for overall prediction. For heavy-workload prediction, improves MAPE, MSE and MAE by 40.8%, 10.4% and 38.7% respectively over the best method in the baseline.
Centralized data distribution with rare extreme value. also has obvious advantages in this case, because SMD dataset has extremely heavy workload. Compared with the best performance in the baseline, improves MAPE, MSE and MAE by 14.0%, 25.2% and 26.7% respectively for overall prediction. For heavy-workload prediction, improves MAPE and MAE by 25.3% and 25.0% respectively.
Centralized data distribution. is also effective in this case. Compared with the best performance in the baseline, improves MAPE, MSE and MAE by 33.8%, 13.9% and 9.5% for overall prediction. For heavy-workload prediction, improves MAPE, MSE and MAE by 15.2%, 24.4% and 8.7% respectively, compared with the best method in the baseline.
Relatively even data distribution. In this case, maintains high overall prediction accuracy and achieves the best heavy workload prediction performance among neural network-based methods. improves MAPE, MSE and MAE by 25.7%, 28.3% and 13.9% for heavy-workload prediction over the best method in the baseline.
§.§ Time Overhead
We use an Intel(R) Xeon(R) E5-2620 @ 2.10GHz CPU and a K80 GPU to record the time spent on training and inference for all methods. We show the time overhead of all the methods in Fig. <ref>. brings only an acceptable extra time overhead compared to the most efficient network (GRU) and greatly improves the overall and heavy-workload prediction accuracy, especially on Dinda's dataset (40.8% for MAPE of heavy-workload prediction).
§.§ Hyperparameter Sensitivity
The impact of input length and prediction length.
We use grid search to explore the impact of X^enc_short's length and of y's length on 's performance. We show the results in Fig. <ref>-Fig. <ref>. We perform experiments with Cartesian combinations of input lengths from 30 to 80 and prediction lengths from 2 to 12. Generally, the error grows gradually as the prediction length increases. The best input length differs across datasets: the best input length for Alibaba2018-Memory and Alibaba2018-CPU is 50, while the best input length for Dinda's dataset is 30.
Dinda's dataset.
For overall workload prediction, MAE rises slowly as the prediction length increases and is stable for any input length. For heavy-workload prediction, MAE is more stable with respect to the prediction length when the input length is longer. As the prediction length grows, the MAE for overall accuracy increases by no more than 0.051 at each input length, while the MAE for heavy workload increases by no more than 0.062.
Alibaba2018-Memory. The long-term dependency of Alibaba's memory usage is weak; thus, when the input length is too large, increasing the input length increases the noise level, which could corrupt the performance of . Thus, when the input length is greater than 50, the MAE of is unstable. But when the input length is less than 50, the performance of is stable, and the prediction error only increases slowly with the prediction length. For overall memory usage prediction, the MAE with a prediction length of 12 is only 0.006 larger than the MAE with a prediction length of 2. For heavy-workload memory usage prediction, the MAE with a prediction length of 12 is only 0.012 larger than that with a prediction length of 2.
Alibaba2018-CPU. Long-term dependency is more pronounced in CPU usage. Thus, when we increase the input length, the performance of is always stable. For overall CPU usage prediction, the MAE with a prediction length of 12 is only 0.016 larger than that with a prediction length of 2. For heavy-workload CPU usage prediction, the MAE with a prediction length of 12 is only 0.034 larger than that with a prediction length of 2.
The impact of γ.
We also explore the impact of γ on three datasets, as shown in Fig. <ref>.
There are slight fluctuations of overall accuracy for different γ.
Besides, consistent with the theoretical analysis in Section <ref>, the heavy-workload accuracy is higher when γ is smaller. It is also intuitive that the heavy-workload MAE of Dinda's dataset dips most sharply, as Dinda's dataset is long-tail distributed. The extremely heavy workload in the long tail greatly reduces the prediction accuracy. But as γ gets smaller, more attention is put on these heavy workloads during training. According to our experimental results, setting γ to a value approaching zero leads to the best accuracy.
§.§ Ablation Experiment
We validate the effect of the Periodicity-Perceived Mechanism and heavy-workload-focused loss function by comparing the performance of with that of ^- and ^†.
We show the performance of these models in Tab. <ref> and summarize the improvement ratio of compared with ^- and ^† in Tab. <ref> and Tab. <ref> respectively. Overall, the Periodicity-Perceived Mechanism contributes more to the prediction accuracy than the Achilles' Heel loss function. However, the Achilles' Heel loss function can improve the accuracy on both periodic and aperiodic time series, while the Periodicity-Perceived Mechanism can only improve the accuracy on periodic and lax-periodic data. Furthermore, we also test the prediction accuracy of on aperiodic data.
The performance of Periodicity-Perceived Mechanism.
Dinda's dataset and the SMD dataset have more periodicity than Alibaba2018 does <cit.>. Thus, the Periodicity-Perceived Mechanism improves the prediction accuracy most on Dinda's dataset and the SMD dataset.
For datasets with lax periodicity, such as memory usage of Alibaba2018, the Periodicity-Perceived Mechanism also promotes overall and heavy-workload prediction accuracy.
As for the CPU usage of Alibaba2018, which has no significant periodicity on light load but more significant periodicity on heavy workload, the Periodicity-Perceived Mechanism can improve the heavy-workload prediction accuracy, while the performance of is about the same as ^- on light load. This observation confirms that the Periodicity-Perceived Mechanism has little negative impact on aperiodic data.
The performance of Achilles' Heel loss function.
Our loss function can improve both the overall prediction accuracy and the heavy-workload prediction accuracy on all three datasets, compared with ^†. Except for the MAPE of heavy-workload prediction on the memory usage of Alibaba2018, all of the evaluation metrics of are better than those of ^†. As the Achilles' Heel loss function apparently reduces the error for the extremely heavy workload in the long tail, which is also shown in Fig. <ref>, it improves the accuracy on Dinda's dataset and the SMD dataset more than on Alibaba2018.
The performance of on aperiodic data.
There is a major concern about whether still works well on aperiodic data. To test the prediction accuracy on aperiodic data, we divide the Alibaba2018-CPU and Alibaba2018-Memory by periodicity and collect the prediction accuracy for periodic data and aperiodic data respectively.
Tab. <ref> reports the prediction accuracy for periodic data and aperiodic data. For convenience, the overall accuracy over both periodic and aperiodic data is also listed at the bottom. On the whole, the accuracy on periodic data is slightly higher than the overall accuracy, while the accuracy on aperiodic data is slightly lower than the overall accuracy. However, the MAPE of periodic data in Alibaba2018-Memory is larger than the overall MAPE, while its MAE and MSE are much lower than the overall values. That is because, in the Alibaba2018-Memory dataset, the periodic workload is much smaller than the aperiodic one; thus, even slight errors in periodic data prediction become much larger after the division in the computation of MAPE.
§ CONCLUSION
In this paper, we study the problem of improving overall workload prediction accuracy as well as heavy-workload prediction accuracy and propose . makes use of short-term dependent information, long-term tendency information and periodic information.
Within , we propose two mechanisms for better prediction accuracy of workloads: (1) a Periodicity-Perceived Mechanism to guide heavy-workload prediction, which can mine the periodic information and adaptively fuse periodic information for periodic, lax periodic and aperiodic time series; and (2) an Achilles' Heel Loss Function to offset the negative effect of data imbalance. We also provide theoretical support for the above design by: (1) providing a theoretically proven error bound of periodic information extracted by Periodicity-Perceived Mechanism; and (2) providing an automatic hyperparameter determination method for periodicity threshold 𝒯. Compared with existing methods, extensive experiments conducted on Alibaba2018, Dinda's dataset and SMD dataset demonstrate that improves MAPE for overall workload prediction by 20.0% on average.
Especially, improves MAPE for heavy workload prediction by 23.9% on average.
|
http://arxiv.org/abs/2307.05556v1 | 20230709154956 | A multitype Fiksel interaction model for tumour immune microenvironments | [
"Jonatan A. González",
"Paula Moraga"
] | stat.AP | [
"stat.AP"
] |
A multitype Fiksel interaction model for tumour immune microenvironments
Jonatan A. González
Computer, Electrical and Mathematical Science and Engineering Division,
King Abdullah University of Science and Technology (KAUST),
Thuwal 23955-6900, Saudi Arabia
and
Paula Moraga
Computer, Electrical and Mathematical Science and Engineering Division,
King Abdullah University of Science and Technology (KAUST),
Thuwal 23955-6900, Saudi Arabia
August 12, 2023
=======================================================================================================================================================================================================================================================================================================================================================================================================================================
The tumour microenvironment plays a fundamental role in understanding the development and progression of cancer. This paper proposes a novel spatial point process model that accounts for inhomogeneity and interaction to flexibly model a complex database of cells in the tumour immune microenvironments of a cohort of patients with non-small-cell lung cancer whose samples have been processed using digital pathology techniques. Specifically, an inhomogeneous multitype Gibbs point process model with an associated Fiksel-type interaction function is proposed. Estimation and inference procedures are conducted through maximum pseudolikelihood, considering replicated multitype point patterns.
Keywords: Digital pathology; Gibbs models; Non-small cell lung cancer, Point process models; Pseudolikelihood; Replicated point patterns.
§ INTRODUCTION
Tumours are complex ecosystems that consist of much more than a collection of cancer cells. For example, they contain epithelial cells, fibroblasts, blood and lymphatic vessels, and infiltrating hematopoietic cells, among others <cit.>. These structural elements affect the growth and clinical conditions of the tumour. The ecosystem surrounding a tumour within the body is usually known as the tumour microenvironment. It is a set of infiltrating and resident host cells, secreted factors, and extracellular matrix <cit.>. Tumour cells stimulate essential molecular, cellular, and physical changes within host tissues to support tumour growth and progression. The composition of the tumour microenvironment varies between tumour types, but distinctive features include immune and stromal cells, among others. The tumour microenvironment characterises the tumour and its environment and plays a fundamental role in understanding the development and progression of cancer. One of the key components of the tumour microenvironment is the tumour immune microenvironment, which has a highly diverse composition, including various populations of T-cells, B-cells, dendritic cells, natural killer cells, myeloid-derived suppressor cells, neutrophils, or macrophages <cit.>.
Recent advances in imaging techniques allow scientists to study the spatial structure of the tumour microenvironment or tumour immune microenvironment at a level of detail down to a single cell <cit.>. This data (several antibody markers) has complex acquisition and processing. Processing them involves various efforts in the laboratory and applying various numerical and statistical methods to reduce non-biological variability, i.e., the variability due to the computational procedures to process the data <cit.>. Some research teams have developed suitable methods and software to address these challenges.
In this context, image normalisation is a technique that adjusts an image's input pixel- or image-level values to remove noise and improve image quality. Some statistical tools for normalisation improve the similarity across images by removing the unknown effect of technical variability. To normalise multiplex image data, <cit.> implement and compare data transformations and normalisation algorithms in multiplexed imaging data providing a foundation for better data quality and evaluation criteria in multiplexed imaging. <cit.> propose a density-based method for distinguishing the difference between the subjects concerning the distribution of a functional marker in the tumour microenvironment or tumour immune microenvironment. <cit.> examine how spatial interactions among different immune cells in the ovarian cancer tumour microenvironment are associated with overall survival using scalar spatial summaries.
Currently, spatial statistical techniques are preferred when analysing this type of data <cit.>. The locations of the immune cells in the tumour immune microenvironment can be assumed as a spatial point pattern in a predefined observation window, usually given by the limits in which the image of the biological sample was processed. The most straightforward point process is completely spatial random (CSR or stationary Poisson process), where the expected value of the number of immune cells is assumed to be constant throughout the region of interest and where the cells do not interact with each other <cit.>. This model, however, is unrealistic in practice since cells easily violate both assumptions <cit.>, accumulate in certain preferred regions of the tissue (inhomogeneity), repel or attract each other, or even attract each other on one scale and repel each other on another scale (interaction). Therefore, we can study the tumour microenvironment or tumour immune microenvironment from the spatial point processes point of view by employing some tools to deal with inhomogeneity and interaction. We consider the different cell sub-types within the tumour microenvironment or tumour immune microenvironment to provide helpful information on how cells behave and how their distribution is affected. We also consider various exogenous factors simultaneously. This could allow future medical or clinical decisions regarding the patient to be positively influenced by the knowledge acquired about this cellular dynamic.
Analysing densities and interactions between points in some spatial domain is a primary pursuit in spatial statistics <cit.>. Some real datasets have motivated these analyses; for example, biology <cit.>, neuroscience <cit.> and ecology <cit.>. Commonly, the literature describes multivariate point patterns through second-order summary descriptors such as the K- or J- functions in their multitype versions <cit.>. There are other methods for multitype point patterns, such as the mark connection function more suitable for detecting mark correlation in an exploratory analysis <cit.>. Testing spatial independence between two components of a stationary bivariate spatial process is a well-known problem in the literature <cit.>.
Gibbs point processes are a wide class that includes, for example, all Cox processes and all finite point processes having a density with respect to the Poisson process <cit.>. Gibbs processes are motivated by statistical physics and arise from the forces acting on and between particles in a fluid or gas. We start by assuming that the total potential energy V(·) corresponding to a given configuration of particles, that is, an instantaneous snapshot, can be disaggregated into different terms that represent the potential energies of the individual particles (which can come from external force fields), the interactions between particles taken in pairs, triples, etc. Often it is assumed that only the first and second-order terms need to be included. Then, a representation of the total potential energy for n particles X={ξ_i}_i=1^n would be given by <cit.>
V(X)=V(ξ_1,…,ξ_n)=∑_j=1^n ∑_1≤ i_1< ⋯ < i_j≤ n V_j(ξ_i_1,…,ξ_i_j),
where V_j(·) is the interaction potential of order j. One of the principles of statistical mechanics establishes that, in equilibrium, the probability density of a point pattern, that is, a particular configuration of points, is inversely proportional to the exponential of the potential energy; that is, proportional to e^-V(X)/T, where T is the temperature. Potential energy is the total work required to move the particles to form the point pattern X. Markov point processes, a virtual subclass of these Gibbs processes where the interaction range of particles is assumed to be finite, are flexible statistical models for spatial point patterns <cit.>.
In this paper, we propose a novel approach that leverages several spatial statistical techniques to model the distributions of cells in tumour immune microenvironments flexibly. We employ a non-small cell lung cancer (NSCLC) dataset collected by multiplex immunohistochemistry (mIHC) <cit.>. The data provide tissue samples collected from 122 non-small cell lung cancer patients. These samples were processed to isolate the tumour immune microenvironments and obtain marked point patterns where each point represents an immune cell, which is marked as belonging to one of five immunity markers (see Section <ref>). Similarly, the dataset includes some clinical factors such as age, whether the patient has undergone chemotherapy, the stage of the disease and survival time. Our main objective is to develop a multitype inhomogeneous point process model for the cells of the tumour immune microenvironment that includes the acquired contextual knowledge, i.e., including the marks of immunity, the clinical factors of the patients and the possible interaction between cells of the same and different types.
We formulate an accurate statistical model and validate it to answer the scientific question behind our research objective, taking advantage of all the data components. To do this, we start from a general principle of points interacting in Gibbs' fashion and their probability distribution. We incorporate several factors, such as the trend, whose baseline is estimated using non-parametric techniques. We add a multitype pairwise interaction component inspired by cell dynamics. We use several methods for estimation and inference: the descriptive input of second-order statistics such as Ripley's K-function, the profile pseudolikelihood, and maximum pseudolikelihood estimation. In addition, since the data were collected from several patients, we take advantage of methods related to replicated point patterns to feed the model and gain more robust estimates of the model parameters.
The remainder of this article is organised as follows. We describe the tumour immune microenvironment dataset in Section <ref>. Section <ref> contains the fundamental notions about Gibbs's processes. In Section <ref>, we introduce the Fiksel interaction function in its multitype version and describe the methods we utilise to make statistical inference. In Section <ref>, we detail the analysis of the tumour immune microenvironment dataset step by step. We estimate the model's components by combining several techniques starting from the corresponding geometric considerations and also compare the model's performance with several other alternative models. We define a type of root mean square error (RMSE) based on residuals to facilitate comparisons. We end with some comments, directions for future research, and final considerations in Section <ref>.
§ IMMUNE CELLS DATA
The cellular composition of the tumour immune microenvironment can be studied through multiple well-known techniques in digital pathology <cit.>. In this work, the data come from multiplex immunohistochemistry (mIHC), which allows the evaluation of multiple markers in a single experiment and may detect the spatial location of multiple cell types.
<cit.> used multispectral quantitative imaging on the lung adenocarcinoma tumour microenvironment in 153 patients with resected tumours. The data consist of a single slide per patient, where they evaluated the tumour microenvironment with markers for CD3, CD8, CD14, CD19, major histocompatibility complex II (MHCII), cytokeratin, and 4′,6-diamidino-2-phenylindole (DAPI). Then they performed image analysis, including tissue segmentation, phenotyping, and attached spatial coordinates. The data are available at <cit.>.
Specifically, the data associated with each patient comes in a point pattern format representing the phenotype map of CK^+ cancer cells, CD4^+ (CD3^+CD8^-) T-cells, CD8^+ T-cells, CD14^+ cells and CD19^+ B-cells. A random (patient 45) processed tissue sample is displayed in Figure <ref>.
The data for this study came from the Mayo Clinic Lung Cancer Repository, where they ensured compliance with applicable ethical and data protection protocols. The selected patients (a total of 122) underwent curative surgical resection of lung adenocarcinoma between 2004 and 2007. These patients had not received targeted anticancer therapy and had available residual tumour specimens.
In addition, extra information related to each patient was extracted, which will be considered design covariates (non-spatial). The covariates are: gender (56% women), age at the time of surgery (a mean of 68 years), stage of cancer (42% IA, 23% IB, 10% IIA, 10% IIB, 12% IIIA, 1% IIIB, 3% IV), cancer cell MHCII status (67% high (≥ 0.5%)), survival days (a mean of 2389 days, i.e., approx. six and a half years), death (44% dead), recurrence or dead event (38% of no recurrence), adjuvant therapy (86% of no therapy). To explain the state of cancer, we follow <cit.>. Stage I is divided into stages IA and IB. In stage IA, the tumour is only in the lung and up to 3 cm. At this stage, cancer has not spread to the lymph nodes. In stage IB, the tumour size lies between 3 and 4 cm, and cancer has not spread to the lymph nodes. Stage II is also divided into two categories, IIA, where the tumour lies between 4 and 5 cm and cancer has not spread to the lymph nodes, and IIB, where the tumour lies between 4 and 5 cm and cancer has spread to lymph nodes on the same side of the chest as the primary tumour; the lymph nodes with cancer are in the lung or near the bronchus. Stage III is divided into three categories, IIIA, where the tumour is up to 5 cm and cancer has spread to lymph nodes on the same side of the chest as the primary tumour; the lymph nodes with cancer are around the trachea or aorta, or where the trachea divides into the bronchi. In IIIB, the tumour is up to 5 cm, and cancer has spread to lymph nodes above the collarbone on the same side of the chest as the primary tumour or to any lymph nodes on the opposite side of the chest as the primary tumour. In IIIC, the tumour may be any size, and cancer has spread to lymph nodes above the collarbone on the same side of the chest as the primary tumour or to any lymph nodes on the opposite side of the chest as the primary tumour. Finally, in stage IV, the tumour may be any size and cancer may have spread to the lymph nodes.
MHCII status indicates the presence of MHCII molecules, a class of major histocompatibility complex (MHC) very important in initiating immune responses <cit.>. The next factor is survival days, the time in days from the date of diagnosis to the date of death or the date when data collection stopped (censoring). The death factor establishes whether the participant passed away during the data collection. The recurrence or dead event informs about whether or not the participant had a recurrence or died. Finally, adjuvant therapy establishes whether or not the participant received adjuvant therapy, understood as any additional cancer treatment given after the primary treatment. Figure <ref> summarises this clinical information.
§ GIBBS MODELS
Interaction is a fundamental concept between points that often cannot be observed through second-order descriptors as these functions measure correlation and not causal interaction <cit.>. Gibbs point processes models, also called Markov point processes, explicitly hypothesise that interactions occur between points in the process. These models can mimic a wide range of point patterns and can easily combine repulsion and attraction at different scales. In practice, Gibbs models only produce weak inhibition or clustering; they are helpful for modelling non-strongly clustered patterns. These models can be built based on the concept of Papangelou conditional intensity <cit.>.
§.§ Papangelou conditional intensity
The conditional intensity function is a valuable statistic for studying and modelling point patterns <cit.>. It describes the probability of observing a cell of type m∈ℳ conditional on the configuration of its neighbours in the observation window W. We consider a realisation of a marked (multitype) point process 𝐗, as a set X={(𝐱_i, m_i)}_i=1^n, where 𝐱_i∈ W are the spatial locations, and m_i∈{1,2,…, M} are the types. For simplicity, we may denote a marked location (𝐱_i,u_i) as ξ_i. If f(X) is the joint probability density function of a multitype point pattern X, this function can be written as <cit.>,
f(X)= ζ[∏_i=1 ^n B_m_i(𝐱_i)][ ∏_i<jΦ_m_i,m_j(𝐱_i,𝐱_j)],
where ζ is known as the normalising constant, and it is usually intractable <cit.>, B_m(𝐱) is a non-negative first-order trend of points of type m, and Φ_m,m'(𝐱,𝐱'), m,m'∈{1,…,M} are pairwise interaction functions for points of types m and m'. Then the Papangelou conditional intensity, or simply the conditional intensity, at any point 𝐱 of type m of the observation window W is defined as
λ(ξ|X)= λ((𝐱,m)|X)= f(X ∪{ (𝐱,m)} )/f(X ) = B_m(𝐱)∏_i=1^nΦ_m,m_i(𝐱,𝐱_i),
where X ∪{ (𝐱,m)} is the extended point pattern obtained adding (𝐱,m)=ξ to the coordinates set. The conditional intensity at a data point ξ_i is defined as λ(ξ_i|X):= λ(ξ_i|X \ξ_i), i.e., removing the data point ξ_i in the denominator.
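As an illustration of Eq. (<ref>), the conditional intensity of a multitype pairwise interaction process can be evaluated by multiplying the first-order trend at the candidate marked location by the pairwise interaction terms against all data points. The sketch below is only a schematic Python translation, where B and Phi are user-supplied placeholders for the trend and the (multiplicative) pairwise interaction function:

```python
import numpy as np

def conditional_intensity(u, m, points, marks, B, Phi):
    """Papangelou conditional intensity lambda((u, m) | X) of a pairwise
    interaction process: B_m(u) * prod_i Phi_{m, m_i}(u, x_i).
    `B(u, m)` and `Phi(r, m, m_i)` are placeholders for the trend and the
    multiplicative pair interaction (they are not specified here)."""
    value = B(u, m)
    for x_i, m_i in zip(points, marks):
        r = np.linalg.norm(np.asarray(u) - np.asarray(x_i))
        value *= Phi(r, m, m_i)
    return value
```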
§.§ Potential energy
We can specify a Gibbs model by writing a formula for the probability density f(X) as a product of the terms associated with each interaction. We call the logarithm of the probability density, V(X) = log f(X), the (negative) potential of the model. The potential might be written as
V(X) = V_0 + V_1(X) + V_[≥ 2](X),
where V_0 is a constant and V_1 and V_[≥ 2] represent the spatial trend and the spatial interactions (of all orders), respectively. This enables us to define, for instance, a pairwise interaction model; we would define the log trend V_1(·) and the pair potential V_2(·,·) for all points.
There is a significant technical piece behind this theory <cit.>. Still, the key point is that we may formulate Gibbs models with an arbitrary first-order spatial trend term V_1({ξ}) = Z(ξ ). Then, we have to choose the interaction term V_[≥ 2](·) from a list of well-studied higher-order potentials whose integrability properties are known. For the conditional intensity, we have
λ(ξ |X)=exp{Δ_ξ V(X) },
where Δ_ξ V(X)=V(X ∪{ξ})- V(X).
§.§ Fitting Gibbs models
The idea is to model the conditional intensity in the following way,
logλ(ξ |X) = B(ξ ) + η^⊤ Z(ξ ) + ϕ^⊤T(ξ|X),
where B(ξ) is an optional, real-valued function representing a baseline or offset. The first-order term Z(ξ ) describes spatial inhomogeneity or covariate effects. The higher-order term T(ξ |X) describes interpoint interaction, and η and ϕ are parameters.
§ FIKSEL INTERACTIONS
Motivated by some exponential decay observed in cellular contexts <cit.>, we assume that the interaction energy between every pair of immune cells (the pair potential) decreases exponentially. <cit.> proposed a bivariate pair potential function. As a step further, we display its straightforward multitype version given by
Φ_ij(r) :=
  -∞                          if 0 ≤ r < h_ij,
  c_ij · exp{-γ_ij · r}       if h_ij ≤ r < R_ij,
  0                           if R_ij ≤ r,
where {h_ij}, {c_ij}, {γ_ij} and {R_ij} are parameters, and i,j∈ℳ. The parameter h_ij is the hardcore distance between types; points of types i and j must be separated at least by a distance h_ij. The interaction strength parameter c_ij controls the type of interaction; it is zero for independent processes, positive for attractive processes and negative for repulsive processes. The rate or slope γ_ij controls the decay of the interaction between i and j as the distance increases. The interaction range R_ij means that points beyond this distance do not interact.
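A direct translation of this piecewise definition into code may help fix ideas; the parameter matrices below are purely illustrative (they are not the fitted values reported later), and exp{Φ_ij(r)} would be the corresponding multiplicative interaction term:

```python
import numpy as np

def fiksel_pair_potential(r, i, j, h, c, gamma, R):
    """Multitype Fiksel pair potential Phi_ij(r); h, c, gamma and R are
    (M x M) parameter matrices and i, j are type indices."""
    if r < h[i, j]:
        return -np.inf                             # hardcore: forbidden distances
    elif r < R[i, j]:
        return c[i, j] * np.exp(-gamma[i, j] * r)  # exponentially decaying interaction
    else:
        return 0.0                                 # no interaction beyond the range

# Illustrative (not fitted) parameters for a two-type process
h = np.full((2, 2), 0.5)
c = np.array([[1.3, 1.0], [1.0, 1.2]])
gamma = np.array([[0.11, -0.07], [-0.07, 0.07]])
R = np.full((2, 2), 27.0)
print(fiksel_pair_potential(0.3, 0, 0, h, c, gamma, R))   # -inf (within hardcore)
print(fiksel_pair_potential(5.0, 0, 1, h, c, gamma, R))   # exponential regime
print(fiksel_pair_potential(30.0, 0, 0, h, c, gamma, R))  # 0 (beyond the range)
```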
§.§ Inference
§.§.§ Pseudolikelihood
For a multitype pairwise interaction process, the pseudolikelihood can be written as <cit.>
PL = [∏_i=1 ^n B_m_i(𝐱_i)][ ∏_i<jΦ_m_i,m_j(𝐱_i,𝐱_j)] ·exp{- ∑_m∈ℳ∫_W B_m(𝐮)∏_i=1^nΦ_m,m_i(𝐮,𝐱_i) 𝐮},
where ℳ is the set of types.
§.§.§ Berman and Turner's device
A Berman-Turner device is a computational tool for approximating maximum pseudolikelihood estimates <cit.>. It generates a set of marked points, including both a set of dummy points and the data points and forms a good quadrature rule for W ×ℳ. A quadrature rule is an approximation of an integral ∫_W f(𝐮) 𝐮 as a weighted sum ∑_j w_j f(𝐮_j) of the function values at specified points (quadrature points) within the integration domain. The weights w_j are quadrature weights that sum to |W|. <cit.> proposed a practical scheme to select the weights; it partitions W into tiles of an equal area where each tile has a dummy point.
Consider the Cartesian product of a set of quadrature points in W and the set ℳ. We write the marked points as (𝐮_j, k_ℓ) for j=1,…,J and ℓ =1,…,L where 𝐮_j ∈ W and k_ℓ∈ℳ. Then we define the indicator z_j ℓ to equal one if (𝐮_j,k_ℓ) is a data point and zero if it is a dummy point. Let w_j ℓ be the corresponding weights for a linear quadrature rule in W×ℳ. Then the pseudolikelihood is approximated by
logPL≈∑_ℓ = 1^L ∑_j=1^J (υ_jℓlogλ_j ℓ - λ_j ℓ) w_j ℓ,
where λ_j ℓ:= λ((𝐮_j,k_ℓ)|X), υ_j ℓ:=z_j ℓ / w_j ℓ, z_j ℓ:= 1 if 𝐮_j is a data point, z_j ℓ:= 0 if 𝐮_j is a dummy point
and the weights w_j ℓ are the areas of the tiles. The log-likelihood given in (<ref>) has the form of a weighted (w_j ℓ) log-likelihood of Poisson random variables 𝒴_j ℓ with expected values λ_j ℓ <cit.>. It can be handled computationally using generalised linear models techniques <cit.> or even generalised additive models <cit.>.
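The approximation in Eq. (<ref>) thus amounts to evaluating a weighted Poisson log-likelihood over all quadrature (data and dummy) points. A schematic NumPy version is given below; the conditional intensities `lam` would come from the model currently being fitted, and the function only illustrates the formula, not any specific software implementation:

```python
import numpy as np

def berman_turner_logpl(lam, is_data, weights):
    """Approximate log-pseudolikelihood sum_j w_j (v_j log(lam_j) - lam_j),
    with v_j = z_j / w_j, z_j = 1 for data points and 0 for dummy points.
    lam, is_data and weights are flat arrays over all quadrature points."""
    z = np.asarray(is_data, dtype=float)
    lam = np.asarray(lam, dtype=float)
    w = np.asarray(weights, dtype=float)
    v = z / w
    log_lam = np.zeros_like(lam)
    mask = z > 0
    log_lam[mask] = np.log(lam[mask])  # the v_j log(lam_j) term vanishes at dummy points
    return np.sum((v * log_lam - lam) * w)
```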
§.§ Replicated point patterns methodology
Assume that we have g experimental units and that the response from unit k≤ g is a multitype point pattern X^k observed in a window W^k. We assume that the point patterns { X^k}_k=1^g are independent conditional on the covariates and random effects. The conditional intensity for the kth point pattern is
λ^k(ξ |X)=exp{Δ_ξ B^k(X) + θ^⊤Δ_ξ Y^k(X)},
where
Y^k(X) := (Y_1^k(X), …, Y_p^k(X) ),
are vector-valued functions representing fixed effects. Notice that random effects can be included easily as an additional term in the right of Eq. (<ref>). Furthermore, notice that every function f(X) of a point pattern X can be expressed as f(X)=f_[1](X) + f_[≥ 2](X), where
f_[1](X)=∑_ξ_i∈ X f({ξ_i}),
is the first-order component and f_[≥ 2](X)=f(X) - f_[1](X) is the interaction term <cit.>. When we apply this decomposition to the functions B^k(X) and Y^k(X), we retrieve the first-order components B_[1]^k(ξ), Y_[1]^k(ξ) that resemble the offset and the covariate effects in Eq.(<ref>), and the interaction components B_[≥ 2]^k(ξ) and Y_[≥ 2]^k(ξ). In our particular case, we assume that the interaction has canonical parameters; therefore, B_[≥ 2]^k(ξ) vanishes and Y^k_[≥ 2](X) comes from a pairwise interaction Fiksel form (see Section <ref>).
The log-pseudolikelihood is given by
logPL = ∑_k=1^g ∑_(𝐮_i,m_i) ∈ X^k logλ^k((𝐮_i,m_i)|X^k) - ∑_k=1^g ∑_m∈ℳ∫_W^kλ^k((𝐮,m)| X^k) d𝐮,
this expression is equivalent to the pseudolikelihood of a Gibbs process on the disjoint union of the windows <cit.>.
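In practice, and under the conditional independence assumption, this means the total log-pseudolikelihood is just the sum of the per-patient contributions; schematically (with a hypothetical per-pattern routine such as the Berman–Turner sum above):

```python
def replicated_logpl(patterns, logpl_single, theta):
    """Total log-pseudolikelihood over g replicated point patterns.
    `logpl_single(pattern, theta)` is a placeholder for a per-pattern
    approximation (e.g. a Berman-Turner sum); patterns are assumed independent."""
    return sum(logpl_single(pattern, theta) for pattern in patterns)
```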
§ IMMUNE CELLS MODEL
In this section, we develop a multitype Fiksel interaction model for lung cancer patients' tumour immune microenvironments. First, we define a common window for all the observations of different patients. We then combine two techniques, maximum profile pseudolikelihood and maximum pseudolikelihood, to estimate the model parameters. We then proceed to propose a set of extra models in order to compare the performance of our model with those of others. We make the comparison using residual measures and the RMSE (or its analogue in this context) that we define to be able to summarise the residuals.
§.§ Observation window and edge correction
Since the tissue block extraction process was consistently done on 5 slides, the observation windows are the same in theory but slightly different in practice due to measurement errors and precision. To alleviate this effect, we will consider each patient's observation window W^ℓ as a dilation of the convex hull that contains the data (by 1/√(1 - ω_ℓ / n_ℓ), where n_ℓ is the number of points of X^ℓ, and ω_ℓ is the number of vertices of the convex hull of the patient ℓ) <cit.>. Once we have these windows, the final observation window, which is also common for all patients, is defined as
W:=⋂_ℓ = 1^151 W^ℓ.
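The geometry of this construction can be sketched with standard computational-geometry tools; the Python/Shapely code below is only an illustration (it is not the implementation used in the analysis, and the choice of the centroid as the dilation origin is our own assumption):

```python
import numpy as np
from shapely.geometry import MultiPoint
from shapely.affinity import scale

def patient_window(points):
    """Convex hull of one patient's cell locations, dilated by 1/sqrt(1 - w/n),
    with n the number of points and w the number of hull vertices."""
    hull = MultiPoint([tuple(p) for p in points]).convex_hull
    n = len(points)
    w = len(hull.exterior.coords) - 1  # closing vertex repeated at the end
    f = 1.0 / np.sqrt(1.0 - w / n)
    return scale(hull, xfact=f, yfact=f, origin='centroid')

def common_window(point_sets):
    """Intersection of the dilated windows of all patients."""
    windows = [patient_window(p) for p in point_sets]
    W = windows[0]
    for Wl in windows[1:]:
        W = W.intersection(Wl)
    return W
```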
When the goal is to make inference, it is important to establish whether the data is a realisation of a finite point process defined only within W (bounded case) or a partially observed realisation of a point process that extends over a larger domain and is observed only through the window W (unbounded case). In our context, we must assume that our point patterns are partially observed realisations. This is due to two reasons: first, the tissues analysed before imaging are just samples of a larger tissue (the lungs in this case); second, we cut the windows to obtain a common window through Eq. (<ref>).
There can be edge-effect problems in the unbounded case <cit.> since some information might come from unobserved points outside the final observation window. There are several methods in the literature to alleviate this type of effect <cit.>. In our case, we use the well-known border method <cit.>, which obtains the pseudolikelihood integration domain by trimming a margin of width r from the original observation window.
§.§ Trend and interaction terms
We incorporate inhomogeneity into our model through an offset in which we non-parametrically estimate the total first-order intensity of each of our point patterns. The total intensity function is defined as
B_∙(𝐮)=∑_m∈ℳB_m(𝐮),
<cit.>. In this way, we generate a smooth estimate of the expected value of the number of immune cells at each point in the observation region, considering all cell types simultaneously. This estimation is made through a spatial Gaussian kernel with adaptive bandwidth <cit.>. This estimator is defined as follows,
B̂_m(𝐮) =1/e_ϵ(𝐮) ∑_𝐮_i ∈ X_mK_ϵ(𝐮_i)(𝐮-𝐮_i), 𝐮∈ W, m∈ℳ,
where K(·) is a Gaussian kernel, ϵ(·) is a bandwidth function and e_ϵ(𝐮) is an edge correction <cit.>. The estimates for B_∙(𝐮) and for B_CD14^+(𝐮), B_CD19^+(𝐮), B_CD4^+(𝐮), B_CD8^+(𝐮) and B_CK^+(𝐮) are shown in Figure <ref>.
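A bare-bones version of this estimator at a single location 𝐮 is given below; for simplicity the edge-correction factor e_ϵ(𝐮) is set to one, and the per-point bandwidths are taken as an input rather than being estimated:

```python
import numpy as np

def kernel_intensity(u, data, bandwidths):
    """Adaptive Gaussian kernel estimate of B_m(u) (edge correction omitted).
    `data` is an (n, 2) array of locations of type-m cells and `bandwidths`
    contains one bandwidth eps(x_i) per data point."""
    u = np.asarray(u, dtype=float)
    d2 = np.sum((np.asarray(data, dtype=float) - u) ** 2, axis=1)
    k = np.exp(-0.5 * d2 / bandwidths ** 2) / (2.0 * np.pi * bandwidths ** 2)
    return np.sum(k)
```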
Additionally, we consider the design covariates, which, although not spatial, correspond to factors that influence the overall conditional intensity. These factors have been explained in Section <ref> and are related to patients' clinical information; we denote them by Z.
We introduce the interaction between cells into the model through the term T(𝐮,m). This inclusion of interaction entails the assumption of several facts; in this case, we assume that we have the same interaction for all patients, that is, the Fiksel interaction defined by the Φ_ij(r) function given in Eq. (<ref>). We also assume we have the same sets of parameters {h_ij}, {γ_ij},{R_ij} and {c_ij} for all patients. Modifying this assumption would correspond to having a previously identified mechanism that could alter these parameters for different groups of patients or, in the worst case, assuming that each patient has their own isolated set of parameters. This decision would make the model highly complex and likely cause overfitting problems. Therefore, we opt to choose the most parsimonious model in this case.
§.§.§ Irregular parameters
When a parameter of a point process model does not appear in the log-linear form (<ref>), it is called irregular <cit.>. In contrast, the other parameters are called regular. Consider, for example, a model of the form,
logλ_ϑ((𝐮,m)|X) = φ^⊤· Z((𝐮,m), ψ|X)
where ψ is the vector of irregular parameters, φ is the vector of regular ones and ϑ := (φ, ψ). For every fixed value of the irregular parameters, the model is log-linear in the regular ones, i.e., if we fix the values of ψ, then the model is log-linear in φ; this model can be fitted using maximum pseudolikelihood over φ <cit.>. For retrieving a maximum profile pseudolikelihood estimate, we assign a value to ψ, then, the pseudolikelihood PL(φ, ψ) can be maximised over all possible values of φ,
PPL(ψ)=max_φPL(φ, ψ).
The maximum pseudolikelihood estimate of ϑ can be obtained by maximising the profile pseudolikelihood over ψ.
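The resulting procedure is a simple outer search over the irregular parameters with an inner maximisation over the regular ones; a generic sketch (both the grid of candidate values and the inner fitting routine are placeholders) is:

```python
def profile_pseudolikelihood(psi_grid, fit_regular):
    """For each candidate irregular parameter psi, `fit_regular(psi)` maximises
    the pseudolikelihood over the regular parameters phi and returns
    (phi_hat, pl_value). The psi with the largest profiled value is returned."""
    best = None
    for psi in psi_grid:
        phi_hat, pl_value = fit_regular(psi)
        if best is None or pl_value > best[2]:
            best = (psi, phi_hat, pl_value)
    return best  # (psi_hat, phi_hat, maximum profile pseudolikelihood)
```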
Hardcore distances
A maximum likelihood estimator of the hardcore radii is the minimum nearest-neighbour distance amongst the points with different labels <cit.>. Given that we have replicated patterns, we choose the minimum across the replicates for every matrix entry, so distinct points are not permitted to come closer than this minimum apart. The estimate is given by
ĥ_ij =
[        CD14    CD19    CD4     CD8     CK
  CD14   0.498   0.496   0.497   0.497   0.495
  CD19   0.496   0.499   0.496   0.495   0.481
  CD4    0.497   0.496   0.498   0.496   0.496
  CD8    0.497   0.495   0.496   0.498   0.497
  CK     0.495   0.481   0.496   0.497   0.499 ].
We observe similar values in the ĥ_ij entries. This means that the tumour immune microenvironment cells could share a common hardcore distance, which could simplify the model since instead of considering a matrix of hardcore distances, we could only consider a positive scalar, given, for example, by min_ij{ĥ_ij}=0.481. Models with this type of simplification are shown in Section <ref>.
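The estimate above can be reproduced, in principle, by taking elementwise minima of pairwise distances over patients and cell types; the following SciPy sketch (with an illustrative data layout, not the one used in the study) makes this explicit:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hardcore_matrix(patterns, types):
    """patterns: list (one per patient) of dicts mapping each cell type to an
    (n, 2) array of locations. Returns the minimum, over patients, of the
    minimum distance between points of types i and j."""
    M = len(types)
    h = np.full((M, M), np.inf)
    for pat in patterns:
        for a, ti in enumerate(types):
            for b, tj in enumerate(types):
                if len(pat[ti]) == 0 or len(pat[tj]) == 0:
                    continue  # type absent in this pattern
                d = cdist(pat[ti], pat[tj])
                if ti == tj:
                    np.fill_diagonal(d, np.inf)  # exclude zero self-distances
                h[a, b] = min(h[a, b], d.min())
    return h
```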
Interaction range and rate or slope
We must provide a suitable range of values for the parameters to apply the maximisation over the profile pseudolikelihood. For the interaction range, we can inspect Ripley's K-function K_ij(r) (or its variance-stabilised version, the L-function, L_ij(r)) in its multitype inhomogeneous version <cit.>. Roughly speaking, this function represents the expected value of the count of events of type j, weighted by the reciprocal of the intensity at each point of type j, within distance r of an arbitrary event of type i. It could be estimated by
K̂_ij(r)=1/|W|∑_𝐮_ℓ∈ X_i∑_𝐮_k∈ X_j1{||𝐮_ℓ-𝐮_k|| ≤ r}/B̂_i(𝐮_ℓ)B̂_j(𝐮_k)e(𝐮_ℓ, 𝐮_k;r), i,j∈ℳ,
where 1{·} is the indicator function, and e(·) is an edge correction <cit.>. The L-function is intended to stabilise the variance of the K-function and is defined as L_ij(r):=√(K_ij(r)/ π). For illustration purposes, Figure <ref> displays the L-functions of the cells of the same type.
L-functions of the same type (L_ii(r)) are the usual L-functions of the process X_i of points of type i, meaning that the same interpretation applies as if there was no labelling. For example, the typical benchmark of L(r)=r for Poisson processes still applies here. We select a maximum possible interaction range such that the mean L-function (understood as the classical functional mean) across the patients shows a stable behaviour beyond the selected maximum; this value is 33.81 and it is displayed as a vertical black line in Figure <ref>.
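For reference, the estimator in Eq. (<ref>) can be coded in a few lines; the sketch below ignores the edge correction e(·) (it is set to one) and takes the fitted intensities at the data points as inputs, so it is only a simplified illustration:

```python
import numpy as np
from scipy.spatial.distance import cdist

def K_inhom(Xi, Xj, Bi, Bj, r_values, area, same_type=False):
    """Inhomogeneous (cross) K-function between types i and j, no edge correction.
    Xi, Xj: (n, 2) arrays of locations; Bi, Bj: intensities at those points."""
    d = cdist(Xi, Xj)
    w = 1.0 / np.outer(Bi, Bj)
    if same_type:
        np.fill_diagonal(w, 0.0)  # a point is not counted as its own neighbour
    K = np.array([np.sum(w[d <= r]) for r in r_values]) / area
    return K

def L_from_K(K):
    # variance-stabilised version L(r) = sqrt(K(r) / pi)
    return np.sqrt(K / np.pi)
```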
For a range for the rate or slope, we take into account the scale of the data and assign γ_ij∈ [-0.2, 0.2]. After applying the procedure of maximising the profile pseudolikelihood, we retrieve the estimates of R_ij and γ_ij
R̂_ij =
[ 27.11   21.00   18.03   20.03   24.42
  21.00   27.11   19.08   19.32   25.92
  18.03   19.08   27.11   16.40   23.13
  20.03   19.32   16.40   27.11   24.55
  24.42   25.92   23.13   24.55   27.21 ],
γ̂_ij =
[  0.110  -0.066  -0.031  -0.041  -0.082
  -0.066   0.073  -0.071  -0.080  -0.077
  -0.031  -0.071   0.111  -0.052  -0.048
  -0.041  -0.080  -0.052   0.111  -0.043
  -0.082  -0.077  -0.048  -0.043   0.200 ].
§.§.§ Regular parameters
There are ten regular parameters ν_1,…,ν_X and c_ij, i.e., all terms that appear in the log-linear form of the conditional intensity: one of them is the intercept and eight of them are the coefficients of the clinical covariates. The estimation procedure is done through the pseudolikelihood and the Berman-Turner approximation, considering the replicates. The estimated coefficients are shown in Table <ref>.
All model design covariates were statistically associated with the conditional intensity except for the recurrence variable (p-value of 0.911); i.e., the factor that reports whether the patient had a recurrence or died has no statistically significant impact on the conditional intensity. The values in Table <ref> come from a generalised linear model, as detailed in Section <ref>. Therefore, it should be noted that the p-values are calculated based on traditional mechanisms. This means that the significance depends on the number of observations, roughly seven million in this case. With such a large number of observations, it is logical and expected that almost all the factors become statistically significant <cit.>, which is what happens in this case. To avoid a vague interpretation, we focus on the regression coefficients (exp{η}). Factors associated with reduced conditional intensity are gender, where men generally have lower immune cell counts than women; MHCII status, where a low MHCII status is associated with lower intensity than a high MHCII status; and death, where those who died showed lower immune cell density than those who did not. Patients who received adjuvant therapy also show lower counts than those who did not; this may occur as immune cells may be found within cancerous tissues targeted by adjuvant therapies to be removed or killed. Regarding the disease status, we can observe an intensity increase in patients in stage IV compared to those in stage IA and a decrease in patients in stage III.
Fitted interaction strength
The other parameter of the model is the strength of the Fiksel interaction term c_ij.
ĉ_ij =
[ 1.3052   0.9995   0.9994   0.9998   0.9996
  0.9995   1.2171   0.9997   0.9996   0.9996
  0.9993   0.9997   1.1951   0.9988   0.9999
  0.9998   0.9996   0.9988   1.4694   0.9993
  0.9996   0.9996   0.9999   0.9993   1.0473 ].
For illustration purposes, in Figure <ref>, we show the conditional intensities of CD14^+ cells considering their interaction with cells of the same type and their interaction with CD19^+ cells of a single patient included in our sample.
The small size of the white dots represents the minimum distance of repulsion ĥ_ij, i.e., CD14^+ cells are prohibited from locating within 0.498 of other CD14^+ cells and within 0.499 of CD19^+ cells. Beyond this distance, the attraction decays exponentially with the distance according to the Φ_ij(r) function given in Eq. (<ref>) for cells of the same type. In the case of cells of a different type, the Φ function does not decay; instead, it increases due to the sign of the γ_ij parameter. However, the magnitude of this quantity is generally smaller for different cells, which makes the interaction's strength less in these cases. This type of behaviour, where the magnitudes of interaction are observed to be so small for cross-terms, makes us think of simpler alternative models, for example, an interaction model only within types. These models are discussed in Section <ref>.
Figure <ref> shows each cell type's fitted conditional intensity logλ̂(ξ|X) for an arbitrarily chosen patient. This conditional intensity is evaluated on a regular mesh in the observation window W. We may see that the fit is satisfactory even in cases with few points, such as CD14^+, CD19^+ and CD4^+. We see a better fit for cell types with more points, CD8^+ and CK^+. This evaluated conditional intensity strongly suggests that the fit is adequate, considering that the model is fitted simultaneously to all patients. This gives us an idea of how suitable the model with the Fiksel interaction is for tumour immune microenvironment modelling.
§.§ Assessing the model
We want to test whether or not the model is working well; this model evaluation can be done in many ways. In this work, we compute some residual summaries of the proposed model. Residuals from point process models are a proper diagnostic measure for comparisons <cit.>.
§.§.§ Comparing several models
We consider several models to be able to compare their performance and finally opt for one or some. To do this, we rely on the fact that our model comprises three fundamental parts: an offset, other first-order effects (design covariates), and second-order effects (Fiksel's interaction). In order to understand how good the model is, we will propose several models where these different parts are included or not.
We then consider three sets of models. In the first one, we include four models that have in common the multitype Fiksel-type interaction function that we propose in this article. The first is the model described in Section <ref>, containing all the components (Fiksel 1). The second is a model without first-order effects (Fiksel 2); the third considers only the effects of the design covariates but not the offset (Fiksel 3). Finally, the fourth model considers first-order effects, but an interaction function that, although it is Fiksel type, assumes that the interaction does not depend on different types of immune cells; that is, the interactions only occur between cells of the same type (Fiksel 4).
The literature on multitype Gibbs models is not very extensive, as far as we know. Some classical univariate interaction functions have been extended to the multitype case <cit.>. For example, multitype Strauss, Hardcore and Strauss Hardcore models <cit.>. For the next set of models for comparison, we opt to keep the first-order terms and change the pairwise interaction functions. So we choose multitype Strauss (Strauss), Hardcore (Hardcore), and Strauss Hardcore (Srt Hardcore) models. The pairwise interaction functions for these models are shown in Table <ref>.
We have also fitted a last model with the first-order effects but no associated interaction function (Poisson). This model corresponds to a Poisson model for the conditional intensity. It should be noted that, under this assumption of no interaction between points, the conditional intensity coincides with the first-order intensity of an inhomogeneous Poisson process.
§.§.§ Residuals
<cit.> defined residuals and residual plots for Gibbs models for spatial point processes, providing a strategy for model criticism in spatial point process models. Their techniques resemble the existing methods for linear models, i.e., they represent the differences between the data and the fitted model. The raw residual measure can be defined as
ℛ_m^k(B)=N(X_m^k∩ B) - ∫_B λ̂^k((𝐮,m)|X^k) 𝐮, ∀ B ⊆ W, m∈ℳ, k≤ g.
This function can be estimated in any subset of the observation window; that is the rationale behind the term “measure”. Usually, a regular window partition is set to estimate the measure in each pixel as per density estimations.
In practice, the residuals are often scaled to calculate, for example, standardised residuals. The analogue to Pearson's residuals in this context is given by
ℛ^⋆ k_m (B)=∑_𝐮_i∈ X_mλ̂^k ((𝐮_i,m)|X^k)^-1/2 - ∫_B λ̂^k((𝐮,m)|X^k)^1/2𝐮.
There is a third version of the residual measure called inverse λ residuals
ℛ^† k_m (B)=∑_𝐮_i∈ X_m1{λ̂((𝐮_i,m)|X^k)>0}/λ̂^k((𝐮_i,m)|X^k) - ∫_B 1{λ̂^k((𝐮,m)|X^k)>0}𝐮.
For comparison purposes, we need to summarise a residual measure ℛ(B), ℛ^⋆(B) or ℛ^†(B); thus we consider the total value (the integral) of these measures over the observation window W. As we have five different types of cells, we obtain a total value for each patient and cell type. Then we retrieve 122× 5 total residuals. Figure <ref> summarises these residuals for each proposed model.
From Figure <ref>, we can observe several interesting things; we see how the three residuals provide roughly the same information, although on different scales. We also see how CK^+ cancer cells seem the most difficult to model since their residuals are the furthest from zero in all models. The model that we have proposed and its variants, that is, those with multitype Fiksel interactions, generally present a similar and very adequate behaviour, except perhaps for the Fiksel 4 model, that is, the one where it is assumed that immune cells of different types do not interact with each other; this model has greater variability than its counterparts. This good behaviour of the models of the first set suggests that this interaction function is appropriate for modelling this type of cells, which is in harmony with our motivation (see Section <ref>) to use this type of interaction.
The Poisson model is the most inadequate since it is not only the one that is furthest from zero but also presents the greatest variability; this suggests that the interaction between cells must be a fundamental part of any model proposed for this type of tumour immune microenvironment. The models whose interaction functions are Strauss or Hardcore do not perform well either. Although the residuals in these cases are closer to zero than in the Poisson case, their variability is greater than that of the other cases considered. Of the alternative interaction functions, the Strauss Hardcore is the one that best manages to model immune cells; this model is the most competitive that we can find among the multitype models that are currently known.
We especially highlight the Fiksel 2 model since, although far from being the best, it is surprisingly good at modelling cells without having first-order information. If we wanted to simplify the model, we could do without the first-order information and still obtain a successful model. This has positive implications in practice; for example, we do not need prior knowledge of the patients' clinical conditions to obtain information on the distribution of cells in the tumour immune microenvironment. Although we strive to obtain reasonable estimates of the offset (the expected value of the counts per unit area at each point in the observation window), it does not appear critical to obtaining a good model; the model Fiksel 3 confirms this fact as well.
Root mean square error
We wish to summarise our residuals in order to provide an overall notion of the performance of the models. To do so, we first consider an overall residual measure across cell types given by ℛ^k_∙:=∑_m∈ℳℛ^k_m. We then define the root mean square error in this context as
RMSE = √(1/g∑_k=1^g(∫_Wℛ^k_∙)^2).
Notice that we can straightforwardly extend this definition to Pearson's and inverse residuals. We then compute these residuals for every one of the considered alternative models.
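Computationally, the total raw residual of each patient reduces to a count minus a numerical integral of the fitted conditional intensity, and the RMSE of Eq. (<ref>) then follows directly; a schematic version using a pixel grid for the integral (all names are illustrative) is:

```python
import numpy as np

def total_raw_residual(n_observed, lam_grid, pixel_area):
    """Raw residual over the whole window W for one patient, summed over types:
    total observed count minus the integral of the fitted conditional intensity,
    approximated on a pixel grid."""
    return n_observed - np.sum(lam_grid) * pixel_area

def rmse(total_residuals):
    """RMSE across the g patients, as in Eq. (<ref>)."""
    res = np.asarray(total_residuals, dtype=float)
    return np.sqrt(np.mean(res ** 2))
```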
Table <ref> shows that the different types of residuals do not agree on a single best model. However, we can highlight our base model (Fiksel 1) as the best overall since it maintains low RMSE values across all three residual types. The model that assumes no interaction between cells of different types performs better regarding raw residuals; however, the Pearson and inverse residuals do not support this finding as much. This may be due to the variability in Figure <ref>. The Strauss Hardcore model is highly competitive; the Pearson and inverse residuals favour this model despite it having a slightly higher RMSE based on raw residuals than the Fiksel 1 model.
§ DISCUSSION
In this article, we have proposed a multitype Fiksel interaction model for tumour immune microenvironments and applied it to understand inhomogeneity and interaction patterns of a sample of digitalised tissues through digital pathology techniques from 122 patients with lung cancer.
Throughout this article, we have explored various tools connected through a statistical model that includes several components, a first-order component also called a trend that includes, in turn, the estimate of the expected value of the number of cells in each one of the tumour immune microenvironments through a non-parametric kernel and several design covariates. We have included all possible interactions between cells of the same type and cells of different types in a single component that describes the interaction; this term is a Fiksel-type pairwise interaction function, and it comes from the Gibbs and Markov pairwise interaction processes <cit.>. In summary, we have shown that inhomogeneous multitype Gibbs processes provide effective tools for analysing tumour immune microenvironments. It is the first time this multitype version has been used in practice since only the bivariate version was initially proposed <cit.>.
Given that the images processed in digital pathology are relatively new, little has been studied from a statistical and probabilistic perspective on the distribution of immune cells within the tumour immune microenvironment <cit.>. Then, some open questions could give rise to new and exciting research fields. For example, are there asymmetric interactions between cell types? In other words, does one type of cell appear or become located within the tumour immune microenvironment first, with the other types then distributed conditionally on the first type? Hierarchical interaction models might account for this type of cell behaviour and assign, for example, our multitype Fiksel interaction function in a conditional way. A conditional hierarchical model would express the probability function such that
f(X)=f_1(X_1)f_2|1(X_2|X_1)f_3|1,2(X_3|X_1,X_2)⋯ f_M|1,2,…,M-1(X_M|X_1,…,X_M-1),
where X=∪_m∈ℳX_m has M point types, and X_m-1 takes precedence over X_m for every m∈ℳ. Each probability density f_m|1,2,…,m-1 is a pairwise interaction density that assembles all the information about the preceding terms, including the normalising constant, the trend and the interaction function <cit.>.
On the other hand, the alternative models (see Section <ref>), particularly Fiksel 2 and 3, have shown that the assumption of homogeneity could be reasonable in this context. Unfortunately, a homogeneity test through quadrat counting <cit.> would be inadequate given the dependency between cells; therefore, we cannot formally rule this homogeneity out. Nevertheless, although the first-order factors are statistically significant (see Section <ref>), the performance of these models remains similar to those that include first-order terms. Therefore, a homogeneous, more parsimonious model, such as the ones we have offered, may be adequate in this case.
It is important to highlight that our estimates are point estimates and could be improved, for example, through simulation using Metropolis-Hastings algorithms for Gibbs processes <cit.>. These algorithms can provide confidence intervals for the associated parameters. As an interesting future research direction, the proposed model also offers a good opportunity to estimate parameters using other approximate Bayesian computational methods, such as the variational Bayesian method <cit.>.
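As a rough illustration of the simulation route mentioned above, the sketch below implements a standard birth-death Metropolis-Hastings sampler for a pairwise interaction Gibbs process on the unit square. For brevity it uses a Strauss-type interaction as a stand-in for our multitype Fiksel model, and all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def strauss_conditional_intensity(u, X, beta, gamma, R):
    """Papangelou conditional intensity of a Strauss process at location u given points X."""
    if len(X) == 0:
        return beta
    close = np.sum(np.linalg.norm(np.asarray(X) - u, axis=1) < R)
    return beta * gamma ** close

def birth_death_mh(beta=100.0, gamma=0.5, R=0.05, n_iter=50_000, area=1.0):
    """Birth-death Metropolis-Hastings sampler on the unit square."""
    X = []
    for _ in range(n_iter):
        if rng.random() < 0.5:                          # birth proposal
            u = rng.random(2)
            lam = strauss_conditional_intensity(u, X, beta, gamma, R)
            if rng.random() < min(1.0, lam * area / (len(X) + 1)):
                X.append(u)
        elif len(X) > 0:                                # death proposal (no-op from an empty pattern)
            i = rng.integers(len(X))
            x_i, rest = X[i], X[:i] + X[i + 1:]
            lam = strauss_conditional_intensity(x_i, rest, beta, gamma, R)
            if rng.random() < min(1.0, len(X) / (lam * area)):
                X = rest
    return np.asarray(X)

pattern = birth_death_mh()
print(pattern.shape)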
One of the problems we face is the amount of data generated when fitting generalised linear models computationally. Although software is very well optimised nowadays, it is sometimes not enough. In our case, the regression algorithm must process about seven million records, in addition to the other point process techniques described throughout the paper, such as kernel smoothing and K-function calculation, which fortunately were computed only once per patient. The problem we have faced goes beyond processing speed; it is the vast amount of memory required to do all the calculations, which demands serious computational resources. That is why an interesting line of research could include how to bring these computations, in the context of this type of digital pathology image, to the comfort of a conventional laptop.
We conclude that multitype inhomogeneous Gibbs models are a convenient statistical option for tumour immune microenvironment analysis. In particular, the Fiksel interaction function is satisfactory for studying the interaction between cells of the tumour immune microenvironment. These models can easily include extra clinical information available per individual, although they are robust enough to provide good results even without this first-order information. These models also allow estimation and inference through the computational simplification offered by pseudolikelihood methods.
|
http://arxiv.org/abs/2307.07359v1 | 20230714140401 | From Multilayer Perceptron to GPT: A Reflection on Deep Learning Research for Wireless Physical Layer | [
"Mohamed Akrout",
"Amine Mezghani",
"Ekram Hossain",
"Faouzi Bellili",
"Robert W. Heath"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
From Multilayer Perceptron to GPT: A Reflection on Deep Learning Research for
Wireless Physical Layer
Mohamed Akrout, Amine Mezghani, Member, IEEE, Ekram Hossain, Fellow, IEEE,
Faouzi Bellili, Member, IEEE, Robert W. Heath, Fellow, IEEE
August 12, 2023
=======================================================================================================================================================
Most research studies on deep learning (DL) applied to the physical layer of wireless communication do not put forward the critical role of the accuracy-generalization trade-off in developing and evaluating practical algorithms. To highlight the disadvantage of this common practice, we revisit a data decoding example from one of the first papers introducing DL-based end-to-end wireless communication systems to the research community and promoting the use of artificial intelligence (AI)/DL for the wireless physical layer. We then put forward two key trade-offs in designing DL models for communication, namely, accuracy versus generalization and compression versus latency. We discuss their relevance in the context of wireless communications use cases using emerging DL models including large language models (LLMs). Finally, we summarize our proposed evaluation guidelines to enhance the research impact of DL on wireless communications. These guidelines are an attempt to reconcile the empirical nature of DL research with the rigorous requirement metrics of wireless communications systems.
§ INTRODUCTION
Researchers are developing use cases where DL can potentially enhance the system performance and reduce complexity/overhead compared to classical methods (see envisioned examples for standardization in the 3GPP Release 18 <cit.>). Since the introduction of the AlexNet network in 2012, the use of deep neural networks (DNNs) has skyrocketed within the communications community, substituting conventional optimization solvers with generative and/or discriminative DL techniques. Fig. <ref> summarizes some of the key DL models applied to communications problems, starting from the multilayer perceptron (MLP) model <cit.> to the diffusion model <cit.>. We refer the reader to <cit.> for a comprehensive survey on the applications of DL for wireless physical layer design. In Fig. <ref>, we also include the recent generative pre-trained transformer (GPT) models <cit.> as they are currently initiating many discussions within the communications research community about the data compression properties of large language models (LLMs) and their role in replicating digital twins (cf. Section <ref>).
Because the sixth-generation (6G) networks are envisaged as multi-band, decentralized, fully autonomous, and hyper-flexible user-centric systems encompassing satellite, aerial, terrestrial, underwater, and underground communications, and because DL techniques are expected to partially or fully substitute classical methods, their assessment metrics should comply with rigorous evaluation guidelines that equally address latency, complexity, generalization, and accuracy. In this paper, we reflect on the last decade of research on DL for wireless communications with a focus on the physical and link layers. We start by highlighting the limitations of state-of-the-art DL methods for wireless communications. We do so by revisiting one of the first published papers <cit.> introducing DL-based end-to-end communications systems over additive white Gaussian noise (AWGN) channels. In doing so, we pinpoint how <cit.>, like many of the subsequent DL papers for wireless, turns the spotlight on the accuracy of DL methods at the cost of sacrificing their generalization capabilities. We also highlight how the open literature does not draw enough attention to important practical considerations for designing DL systems for wireless communications, such as data acquisition and adaptation to new system dimensions. We then describe the two key trade-offs in designing DL-aided wireless communications systems, namely, accuracy versus generalization and compression versus latency. These two trade-offs offer new evaluation guidelines to assess future DL research directions for wireless communications.
§ LIMITATIONS OF THE STATE-OF-THE-ART
DL TECHNIQUES FOR WIRELESS COMMUNICATIONS
In this section, we describe the limitations of neglecting the data distribution shift when evaluating DL models for wireless communications at the physical layer. Specifically, we revisit an example from <cit.> to illustrate the drawbacks of blindly applying black-box DL models in a plug-and-play manner when only the model accuracy is assessed. We also describe other challenges arising from the use of DL techniques that are usually not examined in the open literature.
§.§ Evaluation of a Single Metric
Given a known model, it is always possible to beat a classical method that solves a problem without a closed-form solution by using DL techniques based on deep neural networks (DNNs) trained and evaluated on datasets generated from that same model. For this reason, it is critical to use both the accuracy and the generalization of DNNs to assess their performance on a variety of:
* in-distribution (ID) scenarios where the training and testing datasets are generated from the same distribution (e.g., the same user speed is assumed to generate mobility data to be used for training and testing).
* out-of-distribution (OOD) scenarios where a distribution shift occurs between training and testing datasets (e.g., different user speeds are assumed to generate mobility data to be used for training and testing).
In other words, DL models that are either accurate and non-robust or highly biased and robust are equally worthless for real-time physical/link layer applications.
To illustrate this idea, we revisit the study <cit.>, which is one of the first papers proposing the design of communications systems as an autoencoder for reconstruction tasks that jointly optimizes transmitter and receiver components in a single end-to-end process. As shown in Fig. <ref>, the physical communication chains of the transmitter and the receiver are substituted by an encoder and a decoder, respectively. The channel is represented as a noisy non-parameterized layer that injects additive white Gaussian noise at a specific energy per bit to noise power spectral density ratio, E_b/N_0.
In <cit.>, the training of the autoencoder was performed on a dataset generated at E_b/N_0 = 7 dB. However, the evaluation was conducted over a range of E_b/N_0 values in [-4 dB, 8 dB]. By doing so, the authors obtained a lower block error rate (BLER) than the Hamming code with rate R=4/7 and concluded that the autoencoder “has learned some joint coding and modulation scheme, such that a coding gain is achieved”.
From a machine learning theory perspective, this conclusion is questionable. For this reason, we train multiple autoencoders, each with a training E_b/N_0 ∈{-4,0,5,7,8} dB. We then evaluate each autoencoder on the testing range E_b/N_0 ∈ [-4 dB, 8 dB] as shown in Fig. <ref>. All autoencoders in Fig. <ref> exhibit a decreasing BLER over the entire test E_b/N_0 range even though they were trained on a single E_b/N_0 value. While this suggests that autoencoders over AWGN channels generalize well to out-of-distribution decoding scenarios, it can also indicate that the problem at hand is easy to solve because the distribution shift between the training data distribution associated with E_b/N_0 = 7 dB and the test data distributions with E_b/N_0 ∈ [-4 dB, 8 dB] is minimal. To confirm this, we report in Table <ref> the percentage of the area overlap between the data distributions of the received signal y∼𝒩(x, 𝐈/(2 R E_b/N_0)) for the training and testing E_b/N_0 values [To quantify the shift between two distributions, their area overlap is more informative than their KL-divergence because the latter only accounts for the region where both distributions are non-zero.]. We observe that testing values of E_b/N_0 lower than the training value E_b/N_0 = 7 dB yield a lower area overlap, or equivalently, a higher OOD shift. However, it is interesting to note the significantly high overall overlap. This suggests that the AWGN channel is too simplistic a model to assess the generalization performance of DL models. One can perceive the AWGN channel model in communications as the MNIST dataset of computer vision, on which any great performance is considered unremarkable by the computer science community due to the intrinsic simplicity of the handwritten digit classification task.
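For readers who wish to reproduce the flavour of this computation, the following Python sketch gives one possible Monte Carlo estimator of the area overlap ∫ min(p_train, p_test) dy between the two received-signal distributions. The dimension corresponds to the n=7 channel uses of the (7,4) setup; the exact procedure used for Table <ref> may differ.

import numpy as np
from scipy.stats import multivariate_normal

def noise_var(ebno_db, rate=4/7):
    """Per-dimension noise variance 1 / (2 * R * Eb/N0) of the AWGN channel."""
    return 1.0 / (2.0 * rate * 10.0 ** (ebno_db / 10.0))

def overlap(ebno_train_db, ebno_test_db, dim=7, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the area overlap  int min(p_train, p_test) dy,
    using  E_{y ~ p_train}[min(1, p_test(y) / p_train(y))]."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)                      # the transmitted codeword x cancels out
    p_tr = multivariate_normal(mean, noise_var(ebno_train_db) * np.eye(dim))
    p_te = multivariate_normal(mean, noise_var(ebno_test_db) * np.eye(dim))
    y = p_tr.rvs(size=n_samples, random_state=rng)
    ratio = np.exp(p_te.logpdf(y) - p_tr.logpdf(y))
    return np.mean(np.minimum(1.0, ratio))

for ebno_test in [-4, 0, 5, 7, 8]:
    print(ebno_test, "dB:", round(100 * overlap(7, ebno_test), 1), "% overlap")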
It is also unclear whether the chosen value E_b/N_0 = 7 dB is the best one to select. Training autoencoders on lower E_b/N_0 values in [-4 dB, 8 dB] (i.e., with higher noise levels) leads to a smaller BLER across the entire test E_b/N_0 interval. This result not only shows that the training value E_b/N_0 = 7 dB selected in <cit.> is not the best choice, but also demonstrates how noisier training can be beneficial for generalization.
Given all the aforementioned reasons, the fact that the autoencoder outperformed the Hamming code in <cit.> is better explained by the simplicity of the AWGN channel model, which does not significantly shift the received signal distribution over the testing E_b/N_0 interval. As a matter of fact, adding noise correlation to the AWGN channel, or accounting for fading effects by replacing the AWGN channel with a Rayleigh one, breaks down the decoding performance of the autoencoder. In summary, analysing the results as a function of the trade-off between generalization and accuracy metrics opens the door to future rigorous investigations and reveals more insights about better data generation and model training choices.
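A minimal PyTorch sketch of the end-to-end autoencoder experiment discussed above is given below; the layer sizes, normalization choice, and training hyperparameters are illustrative and are not those of <cit.>.

import torch
import torch.nn as nn
import torch.nn.functional as F

k, n = 4, 7                         # (7,4) setup: 2^k messages over n real channel uses
M, rate = 2 ** k, k / n

class AWGNAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(M, M), nn.ReLU(), nn.Linear(M, n))
        self.dec = nn.Sequential(nn.Linear(n, M), nn.ReLU(), nn.Linear(M, M))

    def forward(self, msgs, ebno_db):
        x = self.enc(F.one_hot(msgs, M).float())
        x = (n ** 0.5) * x / x.norm(dim=1, keepdim=True)     # energy constraint ||x||^2 = n
        sigma = (2 * rate * 10 ** (ebno_db / 10)) ** -0.5    # noise std at the given Eb/N0
        y = x + sigma * torch.randn_like(x)                  # AWGN channel (non-trainable layer)
        return self.dec(y)                                   # logits over the M messages

def train(model, train_ebno_db, steps=5000, batch=256):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        msgs = torch.randint(0, M, (batch,))
        loss = F.cross_entropy(model(msgs, train_ebno_db), msgs)
        opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def bler(model, test_ebno_db, trials=100_000):
    msgs = torch.randint(0, M, (trials,))
    return (model(msgs, test_ebno_db).argmax(dim=1) != msgs).float().mean().item()

model = AWGNAutoencoder()
train(model, train_ebno_db=7.0)                            # train at a single Eb/N0 ...
print([round(bler(model, e), 4) for e in range(-4, 9)])    # ... evaluate over [-4, 8] dB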
§.§ Is Meta-learning Sufficient for Generalization?
To sidestep the need for large amounts of training data for DL models, meta-learning optimizes a general model using samples from multiple tasks (a.k.a. meta-tasks) so that it can adapt to new unseen tasks. By designing meta-tasks associated with specific communication scenarios using the model-agnostic meta-learning (MAML) framework <cit.>, prior work reported better performance for meta-learning solutions compared to standard DL methods. A few attempts bypass the black-box nature of DL models and connect communication system models with meta-learning, which enables learning a subset of the model-based parameters and thereby shrinks the search space induced by black-box DL models <cit.>. However, the large body of research about generalization in wireless is limited to creating multiple meta-tasks with different communication conditions following the MAML-like framework. Because enumerating and generating meta-tasks does not scale to some complex communication scenarios, understanding which features are invariant across different domains becomes critical for scaling DL techniques. In doing so, one relies on domain knowledge in addition to the standard two-step optimization of meta-learning. For instance, variations of complex signals related to the phase generalize better than those related to the amplitude. Overall, the study of features in the context of meta-learning (a.k.a. meta-features) for wireless problems has not been explored by the wireless communications community. Existing studies rely on back-propagation to extract suitable correlations characterizing specific meta-tasks. This is different from work in the ML community that investigates data properties affecting the learning performance and measures similarities between datasets and meta-features using multiple criteria such as mutual information and density skewness <cit.>.
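For concreteness, the sketch below shows a first-order MAML meta-update in PyTorch (the full second-order variant requires differentiating through the inner loop). The construction of the support/query batches per meta-task, e.g., from different SNR or Doppler conditions, is left abstract and is our assumption.

import copy
import torch

def fomaml_step(model, meta_opt, tasks, loss_fn, inner_lr=1e-2, inner_steps=1):
    """One first-order MAML meta-update. `tasks` is a list of ((x_s, y_s), (x_q, y_q))
    support/query pairs, each drawn under one communication condition."""
    meta_opt.zero_grad()
    for (x_s, y_s), (x_q, y_q) in tasks:
        adapted = copy.deepcopy(model)                      # task-specific fast weights
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                        # inner-loop adaptation on the support set
            inner_opt.zero_grad()
            loss_fn(adapted(x_s), y_s).backward()
            inner_opt.step()
        q_loss = loss_fn(adapted(x_q), y_q)                 # evaluate adapted weights on the query set
        grads = torch.autograd.grad(q_loss, list(adapted.parameters()))
        for p, g in zip(model.parameters(), grads):         # first-order approximation: reuse the
            g = g.detach() / len(tasks)                     # query gradient for the initial weights
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()

# Usage: meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3); call fomaml_step per meta-batch.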
§.§ Unquestioned Sources of Dataset
In conventional communications protocols, a significant portion of a transmission interval (e.g., a time slot) is usually used to send training sequences for channel estimation. The real-time computation involved during these training time slots defines the computational complexity of many communication methods. For possible future AI-aided communication protocols, it is still unclear whether they will be designed based on fully offline training procedures or whether they will additionally rely on extra fine-tuning steps. The latter scenario raises the question of real-time data collection for adaptation purposes, and of the related overhead when determining the complexity of DL techniques. The existing research in the open literature disregards these practical challenges and focuses entirely on fully offline-trained DL methods, which, at the current state, cannot fulfill the adaptation capabilities of wireless systems envisioned for 6G.
When accuracy is the only metric of evaluation of DL models, ignoring these practical challenges seems acceptable because offline training is enough to judge the DL performance. However, when the assessment of DL techniques accounts for their generalization issues, the availability of data sources and their properties becomes a central component of the analysis.
§.§ Use of Reinforcement Learning for Optimization
In the pre-DL era (i.e., before 2012), optimizing non-convex problems using gradient descent (GD) algorithms was not very popular in the wireless research community. Instead of using GD-based optimization, researchers convexified the non-convex problems in order to solve them. DL research has introduced a plethora of optimizers to train DNNs, which has made GD a popular technique. This is partly due to the unique properties of flat minima characterizing DNNs' loss functions <cit.>. Consequently, DL techniques have been used to solve optimization problems such as channel and power allocation, beamforming/precoding, user association, and trajectory planning for unmanned aerial vehicles (UAVs) in the physical/link layers in various contexts and system models.
A common practice to tackle non-convex communication problems in the DL era is to resort to the reinforcement learning (RL) paradigm. RL agents are trained to learn the optimal policy for acting on an environment so as to maximize the sum of instantaneous reward signals received from that environment. We refer the reader to <cit.> for a rigorous treatment of the RL formulation in terms of Markov decision processes (MDPs). By substituting the reward signal within the RL formalism with a convex or non-convex cost function to be optimized, the RL agent can find the best policy to optimize it. For instance, one can associate the beamforming vector with the action of the RL agent and the achievable rate of the communication system with the reward signal <cit.>.
Because wireless communications problems must satisfy multiple constraints simultaneously (e.g., power, latency, signal-to-interference-plus-noise ratio [SINR]), prior work made use of clipping strategies to enforce constraints on the RL agent's output. While this strategy provides good results in some scenarios, it does not guarantee an optimal solution. A better choice would be to cast the communication problem within a constrained MDP formalism <cit.>. However, little effort has been dedicated to properly incorporating the constraints, and the unconstrained MDP approach remains a popular data-driven approach to optimize non-convex communication problems. In addition, the discussion of RL challenges in terms of sample efficiency and generalization is usually neglected despite the fact that it is an active research area within the ML research community.
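As a minimal sketch of the Lagrangian route to constrained MDPs mentioned above (parameter values and names are illustrative), the penalized reward and dual update could look as follows:

class LagrangianConstraint:
    """Lagrangian relaxation of a constrained MDP: maximize E[rate] subject to
    E[tx_power] <= P_max. The penalized reward can be fed to any standard RL agent,
    while the multiplier is updated by dual ascent on the observed violation."""
    def __init__(self, p_max=1.0, dual_lr=1e-3):
        self.p_max, self.dual_lr, self.lam = p_max, dual_lr, 0.0

    def penalized_reward(self, rate, tx_power):
        return rate - self.lam * (tx_power - self.p_max)    # replaces output clipping

    def dual_update(self, avg_tx_power):                    # called once per batch/episode
        self.lam = max(0.0, self.lam + self.dual_lr * (avg_tx_power - self.p_max))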
§.§ Fixed Structures of Deep Neural Networks
DNNs have fixed structures, and it is not possible to change them after initialization. This inflexibility is a drawback because many wireless communications problems may require different input/output sizes over time. As one example, consider the problem of channel estimation at the base station based on the SINR vector, with each component being associated with a specific user. In this case, the DNN input is the SINR vector while the estimated channel matrix represents the DNN output. Across multiple transmission blocks, the number of users changes and so does the size of the SINR vector. Using the dropout method <cit.>, it is possible to randomly drop the contribution of some neurons during DNN training. While this provides flexibility in the structure of hidden layers, the input and output layers still have a fixed size, and hence limited flexibility in practice. Actively altering the network structure by adding neurons is both a promising and challenging direction. Because DL research for wireless communications generally treats DNNs as black-box modules, this direction has not attracted the attention of the wireless research community, which has instead opted for constant-padding the input and output layers to the expected maximum size. Another unexplored area in this direction is the mapping of different vector sizes to the same latent space, thereby unifying the DNN input into the same feature space. While feature extractors for computer vision and natural language processing (NLP) tasks are abundant, wireless communications problems require signal-based feature extractors, which have not been well examined by our research community yet. It is worth noting that the recent wave of NLP models has given DNNs an apparent flexibility in input and output sizes, since LLMs are trained to sequentially predict the next token given the current context. This is in contrast to the application of DL for wireless, where the output of DNNs is obtained with a one-shot inference run.
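A minimal sketch of the constant-padding workaround mentioned above is given below; the maximum number of users and the mask mechanism are our illustrative assumptions.

import torch

MAX_USERS = 16          # assumed upper bound on the number of active users

def pad_sinr(sinr, pad_value=0.0):
    """Constant-pad a variable-length SINR vector to the DNN's fixed input size,
    returning a boolean mask so downstream layers can ignore the padded entries."""
    k = sinr.numel()
    padded = torch.full((MAX_USERS,), pad_value)
    padded[:k] = sinr
    mask = torch.zeros(MAX_USERS, dtype=torch.bool)
    mask[:k] = True
    return padded, mask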
§ TWO KEY TRADE-OFFS FOR DL
APPLIED TO WIRELESS PHYSICAL LAYER
In this section, we highlight the importance of two fundamental learning trade-offs in the assessment of DL models, namely, accuracy versus generalization and compression versus latency. In particular, we discuss how the accuracy-generalization trade-off should guide the evaluation of smaller DL models and highlight the importance of the compression-latency trade-off for practical use cases of LLMs. Throughout this section, generalization does not refer to the stability of the model's performance under noise and adversarial examples (i.e., adversarial robustness), but rather to the ability of the model to generalize to unseen scenarios.
§.§ Accuracy Versus Generalization
In classical machine learning, the trade-off between accuracy and generalization pertains to the fundamental bias–variance trade-off, which characterizes the generalization capabilities of predictive models <cit.>. The bias–variance trade-off stipulates that a model must have enough parameters to capture the underlying structure of the dataset without over-fitting spurious patterns. Fig. <ref> depicts the expected U-shaped variation of the test loss, whose minimum represents the sweet spot in the number of parameters between under-fitting and over-fitting. In DL practice, however, deeper networks with a large number of parameters are trained to interpolate between the training samples and still maintain a low test loss, as depicted by the “double descent” curve in Fig. <ref>. While this over-parameterized regime might look contradictory to the classical (i.e., under-parameterized) regime, it is not well understood and is being actively investigated within the theoretical DL community <cit.>. This is because the double descent behavior does not consistently occur for every DNN, and some of them, even very deep ones, still empirically obey the bias-variance trade-off <cit.>. Whether a DNN follows the bias-variance trade-off or the double descent behavior depends on factors such as the number of input data points, the number of layers, and the overall number of parameters.
For the above reasons, evaluating the performance of DL models must be entirely tied to the employed DNN architecture. For example, one cannot claim that DL decoding is superior to classical methods because the mean-square error of one specific DNN tested on a few evaluation scenarios is lower than that of a classical method (cf. Section <ref>). Recent efforts from the communication community initiated the application of different learning paradigms, including transfer learning, meta-learning, and continual learning, to investigate the generalization of DNNs <cit.>. Some studies also showed that even the incorporation of domain knowledge in supervised learning, without the explicit use of other learning paradigms, can improve the DNN's generalization. For instance, by alternating between the time-domain and frequency-domain representations of signals and using the idea of successive estimation and cancellation, it is possible to design DNNs that handle multiple sinusoids and improve their estimation performance on out-of-distribution samples <cit.>.
In summary, by blindly applying DNNs without accounting for their accuracy-generalization trade-off, most research results become biased and less impactful in the long term.
At the time of writing, potential leaks about the undisclosed GPT-4 architecture on Reddit <cit.> suggested that GPT-4 is not one giant monolithic lossy dataset compressor but rather an ensemble of eight 220B-parameter LLMs, each trained with different data/task distributions. This would make GPT-4 a mixture of experts (unlike GPT-3.5 and GPT-3) operating in a much smaller over-parameterized regime, which challenges the myth of very large and fully end-to-end DNNs.
§.§ Compression Versus Latency
The study of the trade-off between lossless/lossy compression and latency has historically revolved around reaching the source coding limit established in 1948 by Shannon's seminal work <cit.>. In a nutshell, it stipulates that higher compression ratios require longer block lengths and thus higher encoding and decoding time complexity. Specifically, Shannon showed that the codeword length of an optimal prefix-free code is approximately the negative logarithm of the codeword's probability. He also proved that the expected message length of an optimal prefix-free code is close to the entropy of the message. Shannon further explored the information-theoretic relationship between compression and (next-letter) prediction by estimating the entropy of the English language <cit.>. Similar to the connection between next-letter predictors and data compressors, the emerging LLMs act as data-driven lossy compressors that exploit the inherent redundancy within human language to embed massive datasets into a significantly smaller DNN model <cit.>. They are giant DNNs with billions of parameters compressing a large number of datasets by learning to predict the next token from the previous context composed of one or multiple tokens.
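As a toy character-level illustration of these quantities (not tied to any specific LLM or code construction), the snippet below computes the empirical entropy of a short message and the ideal codeword length -log2 p of one symbol:

import math
from collections import Counter

def empirical_entropy_bits(text):
    """Empirical per-symbol entropy H = -sum p log2 p, a lower bound on the average
    number of bits per symbol achievable by any prefix-free code under this source model."""
    counts = Counter(text)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

msg = "data compression and prediction are two sides of the same coin"
print(f"entropy: {empirical_entropy_bits(msg):.2f} bits/char")
print(f"ideal codeword length of 'e': {-math.log2(msg.count('e') / len(msg)):.2f} bits")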
Given the tight information-theoretic relationship between the compression ratio and the decoding latency, it is therefore natural at this time to review some of the envisioned communication use cases of LLMs. The fascination of the general public with the quality of the output text of LLMs suggests that their compression quality will be at the cost of their decoding latency. Indeed, an LLM with a few billion parameters requires around 100 GB of RAM. Moreover, the current optimized deployment of LLMs for efficient inference comes at the expense of high backend infrastructure costs and significant latency. Aware of these issues, important research efforts from the LLM community are actively examining the effect of quantization on the inference time of LLMs without significantly affecting their accuracies. For these reasons, the discussed applications of LLMs by the communications research community focus on latency-tolerant use cases at the application layer.
The first use case of LLMs for communication is image transfer. As shown in Fig. <ref>, a user equipped with an LLM obtains the text prompt of the image to transfer. Only the prompt is sent through the channel as a bit stream. The prompt is then recovered from the received bits to probe the LLM, which in turn outputs the image to the receiver. This is unlike conventional data transfer, where the entire bit stream of the image is transmitted through the channel as in Fig. <ref>. Here, the LLM-aided image transfer can provide a significant data compression rate under two conditions. The first one is that the parameters of the transmit and receive LLMs must be identical, or at least that their embeddings yield consistent encoded and decoded images.
The second one is related to the compression-latency trade-off where both the encoding and decoding time of LLMs must provide a significant advantage over the improvement of the compression ratio between an image and a text prompt.
Another use case of LLMs is to minimize the need for communication between two users by learning a personalized LLM as a digital twin for each one of them, as shown in Fig. <ref>. This means that an LLM must be able to reliably mimic the user's interactions. From a probabilistic perspective, this corresponds to reliably learning the joint distribution of reactions, preferences, and thoughts of any user. This scenario is still far from being realizable in the near future for multiple reasons, including data privacy and security concerns, as well as the hallucination effects exhibited by LLMs, which tend to output text that appears to be correct but is actually false or not grounded in the given input. In summary, short-cutting the cost of communication must come with the benefit of highly accurate digital twins.
Another important limitation to implementing these use cases is the lack of rigorous metrics to assess the performance of LLMs. In fact, existing evaluation protocols are either automatic (e.g., the ROUGE metric for text summarization), exhibiting poor correlation with human judgments, or manual, yielding noisy and potentially biased annotations. In summary, the adaptation of LLMs to the unique signal-based nature of communication datasets remains vague and uncertain and is still in its infancy.
§ CONCLUSION
The ever-increasing requirements for future 6G wireless applications call for the urgent need to go beyond the accuracy metric to assess DL methods for communication. We have described the importance of the accuracy-generalization and compression-latency trade-offs in shaping the future evaluation guidelines of DL techniques for wireless problems as summarized in Fig. <ref>.
These evaluation criteria should be continuously updated in light of new understanding of the specific challenges facing the application of deep learning models to wireless communication problems. We have also discussed how these metrics are critical in evaluating the relevance of emerging deep learning models, including large language models. We believe these trade-offs bridge the gap between the empirical nature of deep learning models applied to communication problems and the challenging technical requirements of future communication systems.
§ ACKNOWLEDGMENTS
The work of M. Akrout was supported by the doctoral scholarship of the Natural Sciences and Engineering Research Council of Canada (NSERC). The work of A. Mezghani, E. Hossain, and F. Bellili was supported by Discovery Grants from NSERC. The work of R. Heath was supported by the National Science Foundation (grant nos. NSF-ECCS-2153698, NSF-CCF-2225555, NSF-CNS-2147955) and in part by funds from federal agency and industry partners as specified in the Resilient & Intelligent NextG Systems (RINGS) program. The authors also acknowledge the insightful comments and suggestions from Osvaldo Simeone.
|
http://arxiv.org/abs/2307.05959v1 | 20230712070453 | Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations | [
"Moo Jin Kim",
"Jiajun Wu",
"Chelsea Finn"
] | cs.RO | [
"cs.RO",
"cs.AI",
"cs.LG"
] |
Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations
    August 12, 2023
========================================================================================
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation. However, for robotic imitation, it is still expensive to have a human teleoperator collect large amounts of expert demonstrations with a real robot. Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation and can be quickly captured in a wide range of scenarios. Therefore, human video demonstrations are a promising data source for learning generalizable robotic manipulation policies at scale. In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies. Although a clear visual domain gap exists between human and robot data, our framework does not need to employ any explicit domain adaptation method, as we leverage the partial observability of eye-in-hand cameras as well as a simple fixed image masking scheme. On a suite of eight real-world tasks involving both 3-DoF and 6-DoF robot arm control, our method improves the success rates of eye-in-hand manipulation policies by 58% (absolute) on average, enabling robots to generalize to both new environment configurations and new tasks that are unseen in the robot demonstration data. See video results at <https://giving-robots-a-hand.github.io/>.
§ INTRODUCTION
Recent works in vision-based robotic manipulation have shown significant performance gains realized by using eye-in-hand cameras in addition to, or in replacement of, static third-person cameras <cit.>. Despite these benefits, eye-in-hand cameras alone do not guarantee robust policies, as vision-based models tend to be brittle against real-world variation, such as changes in background, lighting, and object appearances <cit.>. Therefore, one natural approach for improving generalization is to train policies on large, diverse robot demonstration datasets <cit.>. However, collecting such data on a real robot is expensive, as it often requires practitioners to either perform kinesthetic teaching <cit.> or robotic teleoperation <cit.> via virtual reality headsets or joystick controllers.
In contrast, collecting videos of humans completing tasks is much less expensive because a human operator can rapidly capture many demonstrations without having to constantly reset the robot to some initial state, debug hardware-related issues, or arduously relocate the robot to varied settings to increase visual diversity. Consequently, human video demonstrations are a promising data source that could improve the generalization capabilities of vision-based robotic manipulators at scale.
Despite this enticing potential, a central challenge in learning from human video demonstrations is the difference in appearance between human and robot morphologies, which creates a distribution shift that must be accounted for. Prior works that utilize a third-person camera perspective have aimed to mitigate this domain gap by taking explicit domain adaptation approaches, such as performing human-to-robot image translation, learning domain-invariant visual representations, and leveraging keypoint representations of human and robot states (see Section <ref> for details). In contrast, since we learn policies from an eye-in-hand visual perspective, we close the domain gap in a far less involved way: we simply mask a fixed portion of every image such that the human hand or robotic end-effector is no longer visible. As a result, we do not need to employ any domain adaptation method and can learn vision-based manipulation policies end-to-end directly from human videos (where actions are inferred by an inverse dynamics model, which we discuss later). We can thus avoid errors produced by explicit domain adaptation methods, e.g., conspicuous visual artifacts from human-to-robot image translations <cit.>.
The main contribution of this work is the study of a simple, novel method that incorporates diverse eye-in-hand human video demonstrations to improve environment and task generalization. Across several real-world robotic manipulation tasks, including reaching, grasping, pick-and-place, cube stacking, plate clearing, and toy packing, we observe that our method leads to significant improvements in generalization. Our policies generalize to both new environments and new tasks that are not seen in the robot demonstrations, even in tasks with heavy visual occlusion and multiple stages. On average, we observe a 58% improvement in absolute success rates across unseen environments and tasks when comparing against policies trained only on robot demonstrations.
§ RELATED WORK
Imitation learning is a powerful paradigm for training an agent to complete a task by learning a mapping between observations and actions. Traditional approaches to robotic imitation assume access to expert demonstrations collected from the robot's observation and action spaces <cit.>. Since collecting expert trajectories with a real robot can be physically demanding or require special teleoperation equipment and training <cit.>, we study the setting of training robots to complete tasks by watching videos of a human demonstrator. One central challenge here is the distribution shift caused by apparent visual differences between human and robot structures.
Past works have addressed this distribution shift in various ways. Some have employed explicit domain adaptation techniques such as human-to-robot context translation <cit.> and pixel-level image translation <cit.>, commonly using generative models like CycleGAN, which can learn mappings between domains given unpaired data <cit.>. Other works have explicitly specified the correspondences between human and robot embodiments and behaviors by, e.g., employing pose and object detection techniques <cit.> and learning keypoint-based state representations of human and robot observations <cit.>. Some have taken a more implicit approach and learned domain-invariant visual representations or reward functions that are useful for solving downstream tasks <cit.>. Yet another class of works used robotic end-effectors more closely resembling the human hand (e.g., Allegro Hand) to train dexterous manipulation policies via hand pose estimation and kinematic retargeting <cit.>.
In contrast to most of these works, which use human demonstrations captured from third-person cameras, we avoid the need to apply any explicit domain adaptation or human-robot correspondence mapping method by utilizing masked eye-in-hand visual inputs. We also train policies that generalize to new settings and tasks without having to learn intermediate representations or reward functions. Further, unlike prior works utilizing manipulators with more than two fingers, we employ a parallel-jaw robotic end-effector despite it being visually and kinematically dissimilar from the human hand.
Relatedly, <cit.> and <cit.> amass diverse manipulation data using “reacher-grabber” tools. To minimize domain shift, these tools are attached to the robot arms or engineered to closely resemble real parallel-jaw end-effectors. In contrast, we collect demonstrations with the human hand, which is faster and more flexible than these tools, and test our policies directly on a robot with a structurally dissimilar gripper. Further, our lightweight eye-in-hand camera configuration for human demonstrations is simple to assemble and has nearly zero cost (aside from purchasing the camera itself), while the reacher-grabber tool proposed by <cit.> requires more sophisticated assembly and costs approximately $450 USD (excluding the cost of the camera).
Lastly, the purpose of this work is not to perform a head-to-head comparison between eye-in-hand and third-person perspective methods. We defer such discussion to <cit.> and <cit.>, which already analyze and compare eye-in-hand and third-person methods in robotic manipulation comprehensively. In this paper, we focus specifically on the eye-in-hand visual setting and compare our approach to methods that are compatible to this regime.
§ PRELIMINARIES
Observation and action spaces. The observation spaces of the robot and human, 𝒪^r and 𝒪^h respectively, consist of eye-in-hand RGB image observations o^r ∈𝒪^r, o^h ∈𝒪^h. The robot's action space 𝒜^r either has four dimensions, consisting of 3-DoF end-effector position control and 1-DoF gripper control, or seven dimensions, consisting of additional 3-DoF end-effector rotation/orientation control. We assume that the human's action space 𝒜^h is the same as the robot's: 𝒜^h = 𝒜^r.
Problem definition. Our objective is to incorporate broad human data to train a policy that generalizes better than one that is trained solely on robot data. While broad data can improve generalization along a number of axes, we specifically aim to improve performance in terms of environment generalization and task generalization. We define environment generalization as the ability to execute a learned task in a new environment unseen in the robot demonstrations. We define task generalization as the ability to execute a new, longer-horizon task when the robot demonstrations only perform an easier, shorter-horizon task.
§ LEARNING FROM EYE-IN-HAND HUMAN VIDEO DEMONSTRATIONS
We now discuss each module of our framework (Figure <ref>). We first collect eye-in-hand human demonstrations with a simple low-cost setup (Section <ref>). We then label human demonstrations with actions using an inverse dynamics model trained on robot “play” data (Section <ref>). Finally, we utilize human demonstrations to train generalizable imitation learning policies (Section <ref>).
§.§ Eye-in-Hand Video Data Collection
Data collection setup. As shown in Figure <ref>, we secure an RGB camera to a human demonstrator's forearm with two rubber bands, and the demonstrator is immediately ready to collect video demonstrations of a task. While more secure ways of fastening the camera exist, we find that this simple configuration is sufficient and only takes a few seconds to prepare. The same camera is mounted onto a Franka Emika Panda robot arm via an L-bracket assemblage (see Figure <ref>). To control the robot, we perform teleoperation with a virtual reality controller (Oculus Quest).
Masking the hand and end-effector. To close the gap between human and robot domains, we mask a fixed region of all image observations o^h, o^r captured by the eye-in-hand human and robot cameras to hide the agent's embodiment. Specifically, we capture images of size 100 × 100 and zero out the top 36 rows of pixels with a script; we denote the resulting human and robot observations as o̅^h, o̅^r, respectively. This transformation is shown in Figure <ref>. We train inverse dynamics models and imitation learning policies (discussed in subsequent sections) solely on masked images.
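A minimal sketch of this masking operation on a channel-first frame is given below (the function name is ours):

import numpy as np

MASK_ROWS = 36                       # top rows hidden in every (3, 100, 100) frame

def mask_embodiment(image):
    """Zero out the top rows of a channel-first eye-in-hand frame so that the human
    hand or robot gripper is no longer visible."""
    masked = image.copy()
    masked[:, :MASK_ROWS, :] = 0
    return masked

o_bar = mask_embodiment(np.random.randint(0, 256, (3, 100, 100), dtype=np.uint8))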
At first glance, it may seem impossible to learn with o̅^h, o̅^r given that the hand or end-effector is not visible. However, we observe that inverse models trained on data in this format can reasonably infer environment dynamics nonetheless due to the presence of certain visual cues. For example, the grasping and lifting of an object can be inferred even when the gripper is not visible due to visual signals such as the object “locking” into place as it is secured in the hand, the object beginning to levitate, shadows forming underneath the object, and neighboring objects shrinking in size in the eye-in-hand camera's field of view. Similarly, imitation learning policies can also succeed at various tasks without seeing the hand or end-effector in the frame after a small modification to the policies' inputs (see Section <ref> for details). Nonetheless, masking the image does place some limitations on the tasks that can be performed, which we discuss further in Section <ref>.
§.§ Action Labeling of Human Video Demonstrations via Inverse Dynamics
Suppose we have a diverse set of eye-in-hand human video demonstrations for a manipulation task: 𝒟_exp^h = {o̅_t^h }_1...M, where M is the total number of timesteps. Since human videos only contain sequences of images, we cannot train an imitation learning policy on this dataset until we generate action labels. The inverse dynamics model serves this precise purpose: Given image observations o̅_t^h and o̅_t+1^h at timesteps t and t+1, the inverse model predicts the action a_t giving rise to the change in observations <cit.>. See Appendix <ref> for details on the inverse model architecture.
Robot play data.
An inverse model should be trained on data with sufficient diversity such that it can make accurate predictions on diverse human demonstration data. In this paper, we choose to train the inverse model using visually and behaviorally diverse, task-agnostic robot “play” data that is collected in a similar manner as <cit.>. See Appendix <ref> for details on how we collect the play data and why it is easy to collect in large quantities.
Inverse dynamics model training. Given robot play data, we now have observation-action transitions (o̅_t^r, a_t^r, o̅_t+1^r) ∈𝒟_play^r. The inverse model, parameterized by θ, takes as input (o̅_t^r, o̅_t+1^r) and outputs a prediction â_t^r = f_θ(o̅_t^r, o̅_t+1^r). We optimize the parameters θ to minimize the L_1 difference between â_t^r and a_t^r for K transitions sampled from the play dataset, using Adam optimization <cit.>:
ℒ(â_t^r, a_t^r; θ)_1...K = ∑_t=1^K || â_t^r - a_t^r ||_1.
Labeling human video demonstrations.
Once we have trained an inverse model, we run it on all pairs of observations in the human demonstration dataset, (o̅_t^h, o̅_t+1^h) ∈𝒟_exp^h, to automatically generate action labels for the demonstrations (see Appendix <ref> for sample inverse model predictions and analysis). We then have a labeled set of human observation-action pairs, which we denote as 𝒟_exp^h = { (o̅_t^h, â_t^h) }_1...M, where M is the total number of such pairs. We use this dataset to train an imitation learning policy, as described in the next section.
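A minimal PyTorch sketch of these two steps is given below; the optimizer settings and the exact signature of the inverse model f_theta are assumptions, and the L1 loss is averaged over the batch rather than summed.

import torch
import torch.nn.functional as F

def train_inverse_model(f_theta, play_loader, epochs=10, lr=3e-4):
    """Fit the inverse dynamics model on masked robot play transitions (o_t, a_t, o_t+1)
    by minimizing the L1 objective above (batch mean instead of sum)."""
    opt = torch.optim.Adam(f_theta.parameters(), lr=lr)
    for _ in range(epochs):
        for o_t, a_t, o_tp1 in play_loader:
            loss = F.l1_loss(f_theta(o_t, o_tp1), a_t)
            opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def label_human_demo(f_theta, frames):
    """Infer an action label for every consecutive pair of masked human frames."""
    return [f_theta(frames[t].unsqueeze(0), frames[t + 1].unsqueeze(0)).squeeze(0)
            for t in range(len(frames) - 1)]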
§.§ Imitation Learning with Human and Robot Demonstrations
Behavioral cloning. Given a dataset of human video demonstrations with inferred action labels 𝒟_exp^h = { (o̅_t^h, â_t^h)_1...M}, we train a manipulation policy via behavioral cloning (BC), which learns a mapping between observations encountered by an expert demonstrator and their corresponding actions <cit.>. In this case, we treat actions â_t^h inferred by the inverse model as “ground truth” labels representing the demonstrator's actions. The BC policy π_ϕ takes as input an RGB image observation o̅_t^h and outputs an action ã_t^h to best match â_t^h. We minimize the negative log-likelihood of the predictions to find the optimal policy parameters ϕ^*, using Adam optimization <cit.> to train the model.
Conditioning the behavioral cloning policy on grasp state. We modify the BC policy to be conditioned on an additional binary variable s_t^h representing the grasp state at time t (open/closed). This variable provides proprioceptive information about the manipulator that was removed from the image observations by the image masking scheme discussed in Section <ref>; without knowing the grasp state, the policy may not be able to discern whether it has already grasped an object and could fail to proceed to complete the task. We automatically estimate s_t^h by setting it as the prior timestep's grasping action, which is inferred by the inverse model when labeling human demonstrations with actions. We then concatenate s_t^h to the latent image embedding and feed the result into the policy network (see Appendix <ref> for model architecture details). The resulting policy is π_ϕ(ã_t^h | o̅_t^h, s_t^h), and we optimize ϕ as described before.
Generalizing beyond narrow robot demonstrations. As discussed in Section <ref>, we collect and train a BC policy on a narrow set of robot demonstrations and a broader set of human demonstrations with the goal of generalizing to the environments or tasks covered by the human data. The final objective, given N robot samples and M human samples, is to find:
ϕ^* = arg min_ϕ - ∑_t=1^Nlogπ_ϕ(ã_t^r | o̅_t^r, s_t^r) - ∑_t=1^Mlogπ_ϕ(ã_t^h | o̅_t^h, s_t^h).
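A minimal PyTorch sketch of the grasp-state conditioning is shown below. The encoder is a stand-in (see Appendix <ref> for the actual architecture), and a deterministic regression head is used for brevity, whereas the policy described above is trained with a negative log-likelihood objective.

import torch
import torch.nn as nn

class GraspConditionedPolicy(nn.Module):
    """BC policy that concatenates the binary grasp state to the image embedding
    before the action head; layer sizes here are illustrative."""
    def __init__(self, embed_dim=50, action_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(                       # stand-in image encoder
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(embed_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(embed_dim + 1, 64), nn.ReLU(),
            nn.Linear(64, action_dim))

    def forward(self, masked_image, grasp_state):
        z = self.encoder(masked_image)                      # (B, embed_dim)
        z = torch.cat([z, grasp_state.unsqueeze(-1)], dim=-1)
        return self.head(z)                                 # predicted action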
§ EXPERIMENTS
We execute a set of experiments to study whether our framework for incorporating broad eye-in-hand human video demonstrations can be used to improve environment generalization and task generalization, as defined in Section <ref>. We then ablate key components of our framework, such as image masking and grasp state conditioning, to study their contributions to the final performance.
§.§ Experimental Setup
As it is difficult to generate realistic human data in simulation, we perform all experiments in the real world. All observations o^h ∈𝒪^h, o^r ∈𝒪^r are (3, 100, 100) RGB images. Raw image pixels range between [0, 255], but we normalize them to [-0.5, 0.5]. We use 6-DoF end-effector control for the toy packing task and 3-DoF control for the rest. In 3-DoF control, the three actions are continuous values ranging between [-1,1] that command the change in the end-effector's position in the Cartesian space. In 6-DoF control, the additional three actions command the end-effector's change in orientation (in extrinsic Euler angles). Lastly, one degree of freedom represents the binary gripper action (-1: close, 1: open).
§.§ Environment Generalization Experiments
Recall that environment generalization (Section <ref>) is the ability to complete a learned manipulation task in a new environment unseen in the robot demonstration dataset.
Tasks. The tasks include reaching towards a red cube in the presence of different distractor objects, grasping a red cube placed on various environment backgrounds, clearing different objects off of a plate, and packing different toys into a box. See Figure <ref> for a visualization of these tasks and Appendix <ref> for details about each task. The tasks are ordered by increasing difficulty and complexity. The final 6-DoF toy packing task is particularly challenging because it involves heavy occlusion (from a wall positioned between the end-effector and target object); execution of a multi-stage trajectory (reaching around wall, grasping toy, lifting toy, reaching box, dropping toy into box); and three more degrees of freedom than the previous tasks (for end-effector orientation control).
Datasets. For each task, we collect narrow robot demonstrations in one environment, and broad human demonstrations in multiple environments (shown in Figure <ref>). We also collect a robot play dataset for an inverse model that is shared with a task generalization experiment involving similar objects. See Appendix <ref> for details on all expert demonstration datasets and robot play datasets.
Methods.
In our method, we train a BC policy on robot demonstrations and human demonstrations with the image masking scheme discussed in Section <ref>. As we wish to study whether incorporating broad human demonstrations into training achieves increased environment generalization, we compare our method against a baseline policy trained only on narrow robot demonstrations. In addition, to assess whether any improvements in generalization are simply correlated to the increase in training dataset
size, we also compare against a policy trained on both robot demonstrations and robot play data, as the play datasets are larger than the human demonstration datasets. Lastly, we evaluate how effective our image masking method is compared to explicit domain adaptation approaches such as pixel-level image translation by comparing against a policy trained on both human and robot demonstrations, where a CycleGAN is used to translate human images into robot images (as in <cit.> and <cit.>). To summarize, we evaluate the following four methods in our experiments, where each one is a BC policy trained on a different set of data:
* robot: robot demos only
* robot + play: robot demos and robot play data
* robot + human w/ CycleGAN: robot demos and CycleGAN-translated human demos
* robot + human w/ mask (ours): robot demos and human demos with image masking
Results. As shown in Table <ref>, incorporating diverse human video demonstrations into policy training with image masking significantly improves generalization. The policy generalizes to new environment configurations unseen in the robot demonstrations (see fine-grained results in Table <ref>). To our knowledge, this marks the first time that a real robot policy is directly trained end-to-end on eye-in-hand human demonstrations. On the other hand, the policy trained only on a limited set of robot demonstrations fails completely in many cases, as shown in Figure <ref>(c), since novel out-of-distribution visual inputs confuse the policy. In addition, we see that a policy also trained on the full play dataset, which is larger than the set of human demonstrations, does not perform as well as one trained on the human demonstrations, verifying that generalization performance is not simply a function of training dataset size. Further, while using CycleGAN-translated human demonstrations generally leads to greater performance than using only robot demonstration or play data, it is not as effective as our image masking method. In particular, while the CycleGAN image translations are successful in some cases, they are noisy in other cases (sample translations are shown in Figure <ref> in Appendix <ref>); such noise hinders final policy performance. Videos of the policies and extensive qualitative analysis of individual methods are available on our https://giving-robots-a-hand.github.io/project website.
§.§ Task Generalization Experiments
Recall that task generalization (Section <ref>) is the ability to complete a task that is unseen and longer-horizon than those in the robot demonstrations.
Tasks. The tasks we test on include stacking a red cube on top of a blue cube, picking-and-placing a red cube onto a green plate, clearing a green sponge from a plate, and packing a small black suit vampire wind-up toy into a box. See Figure <ref> for a visualization of these tasks.
Datasets. As in Section <ref>, we collect robot demonstrations, human demonstrations, and shared robot play data. Robot demonstrations perform a simple, short-horizon task (e.g., cube grasping), and human demonstrations perform one of the more difficult, longer-horizon tasks above (e.g., cube stacking). Appendix <ref> gives full details on all datasets used in the experiments.
Methods. We evaluate the task generalization of the same four methods discussed in Section <ref>.
Results. As shown in Table <ref>, training the policy on the eye-in-hand human video demonstrations with image masking substantially improves task generalization compared to using robot data alone. Intuitively, a policy trained on robot demonstrations that never perform the desired multi-stage task is incapable of performing the task at test time. A policy that is also trained on robot play data can occasionally execute the desired task since the play dataset contains a collection of behaviors, some of which can be useful for solving the task. However, as the play dataset is task-agnostic, BC often struggles to learn one coherent sequence of actions for solving a specific multi-stage task. Lastly, a policy trained on human demonstrations translated to the robot domain via CycleGAN can generalize to new tasks, but it performs worse than simply using our proposed image masking scheme due to the aforementioned errors in image translation (Figure <ref> in Appendix <ref>). See further qualitative analyses of individual methods and videos of learned policies on our https://giving-robots-a-hand.github.io/project website.
§.§ Ablation Experiments
Training with unmasked images. We remove the image masking entirely to assess whether it is an important component of our framework. Given unmasked robot play data where the end-effector is now visible, we train an inverse model to predict the dynamics and use the model to infer action labels for unmasked human demonstrations, regardless of the domain shift caused by visual differences between the human hand and the robot gripper. (Note that we do not use CycleGAN image translation here.) We train a BC policy on unmasked versions of the robot and human demonstrations used in our previous experiments, and compare this to our original method.
Ablation experiments results.
We observe that removing either the image masking or grasp state conditioning generally leads to greatly reduced success rates, validating their important contributions to the final generalization performance. Success rates and standard errors are computed by aggregating the finer-grained results in Table <ref>.

                                          success rate (%)
robot + human w/ mask (ours)              54.29 ± 5.95
robot + human, no mask                    24.29 ± 5.13
robot + human w/ mask, no grasp state     28.57 ± 5.40
Behavioral cloning without conditioning on grasp state. In a separate ablation, we modify the BC policy such that it is no longer conditioned on the binary (open/close) grasp state. We reuse the robot and human demonstrations from the previous experiments and simply train a new BC policy without grasp state inputs. Note that we are still masking images here, as in our original framework.
Results. As shown in Table <ref> (detailed results are shown in Table <ref>), removing either image masking or grasp state reduces overall performance. Qualitatively, the policy often fails to even reach the target object in several cases when using unmasked images; we attribute this to the distribution shift between human and robot observations.
Without conditioning on grasp state, a common failure mode we observe is repeatedly attempting to grasp an object rather than lifting it, as the robot does not know that it has already secured the object (an illustration of this behavior is shown in Figure <ref>(b)). Overall, both components here are important to successfully leverage eye-in-hand human video demonstrations.
§ CONCLUSION
This work presents a novel yet simple framework for leveraging diverse eye-in-hand human video demonstrations and displays its potential to enhance the generalization of vision-based manipulators. We utilize eye-in-hand cameras and image masking to largely close the domain gap between human and robot data and bypass explicit domain adaptation entirely. Our framework enables an imitation learning policy to generalize to new environments and new tasks unseen in the robot demonstrations.
Limitations and future work.
Our image masking scheme may not be as effective if the target object is so minuscule (<1.5 cm long on each side) that it is not visible in the unmasked portion of the image, because it may be difficult for the inverse model to infer actions that manipulate the object due to insufficient visual cues. Optimizing the camera angle so that a smaller portion of the image can be masked could mitigate this issue. Additionally, our method involves collecting a robot play dataset to train the inverse model. While this process is inexpensive (details discussed in Appendix <ref>), in the future we hope to automate play data collection nonetheless, e.g., by training a BC policy on a small play dataset and sampling actions during inference to encourage exploration (as in <cit.>).
We thank Alexander Khazatsky, Tony Zhao, Suraj Nair, Kaylee Burns, Maximilian Du, and other members of the Stanford IRIS Lab for insightful discussions and helpful feedback. Moo Jin Kim gratefully acknowledges the financial support of the Siebel Scholarship. This work was supported by ONR grant N00014-21-1-2685.
§ APPENDIX
§.§ Model Architectures
In this section, we discuss model architecture details. We implement and train all models using PyTorch <cit.>.
§.§.§ Inverse Dynamics Model Architecture
The inverse dynamics model is a convolutional neural network with 4 convolutional layers followed by 2 feedforward layers. Each convolutional and feedforward layer is followed by a batch normalization layer and a ReLU activation layer. For every convolutional layer, the number of convolutional filters is 128, kernel size is 3, stride is 1 (except for the first layer, whose stride is 2), and padding is 0. The latent embedding size of the second feedforward layer is 200. We use early fusion, i.e., two consecutive image observations are concatenated channel-wise and then fed into the first convolutional layer. The full network outputs an action prediction that takes the agent from one observation to the next timestep's observation, where the action is 3-DoF or 6-DoF with an additional binary gripper action. (Recall that we use 6-DoF position and orientation control for the toy packing task, and 3-DoF position control for the other tasks.)
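As a concrete illustration, the following is a minimal PyTorch sketch of this architecture rather than the exact implementation; the 512-wide first feedforward layer and the final linear action head are assumptions, since only the 200-dimensional second feedforward embedding is specified above.

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    """4 conv layers (128 filters, kernel 3, stride 1 except the first with
    stride 2, padding 0), each followed by batch norm + ReLU, then feedforward
    layers; two consecutive observations are fused channel-wise at the input."""

    def __init__(self, action_dim=4):  # e.g., 3-DoF position + 1 gripper dim
        super().__init__()
        layers, in_ch = [], 6          # early fusion: two RGB frames -> 6 channels
        for i in range(4):
            layers += [nn.Conv2d(in_ch, 128, kernel_size=3,
                                 stride=2 if i == 0 else 1, padding=0),
                       nn.BatchNorm2d(128), nn.ReLU()]
            in_ch = 128
        self.encoder = nn.Sequential(*layers, nn.Flatten())
        # Two feedforward layers; the 512-wide first layer and the final linear
        # action head are assumptions (only the 200-dim embedding is specified).
        self.head = nn.Sequential(
            nn.LazyLinear(512), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Linear(512, 200), nn.BatchNorm1d(200), nn.ReLU(),
            nn.Linear(200, action_dim))

    def forward(self, obs_t, obs_tp1):
        x = torch.cat([obs_t, obs_tp1], dim=1)      # (B, 6, 100, 100)
        return self.head(self.encoder(x))

model = InverseDynamicsModel(action_dim=4)
pred = model(torch.rand(2, 3, 100, 100), torch.rand(2, 3, 100, 100))
print(pred.shape)  # torch.Size([2, 4])
```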
We train every inverse model with random shifts data augmentation. For every pair of 100 × 100 image observations, we pad each side by 4 pixels and randomly crop a 100 × 100 region out of the result. The same augmentation is applied to both images in a given pair so as to not perturb the original dynamics captured in the images. We only apply this augmentation with 80% probability, as we found that the resulting model is as accurate as one trained with 100% probability, yet it trains faster because it does not need to compute the augmentation 20 percent of the time.
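A minimal sketch of this pad-and-crop augmentation is shown below; the padding mode and the use of a single shared shift for the whole batch are simplifying assumptions, while the shared crop across the two images of a pair follows the description above.

```python
import torch
import torch.nn.functional as F

def random_shift_pair(img_a, img_b, pad=4, p=0.8):
    """Pad every side by `pad` pixels and take the SAME random crop from both
    images of a pair, so the relative motion between frames is preserved.
    Applied with probability p (0.8 as described above)."""
    if torch.rand(()) > p:
        return img_a, img_b
    _, _, h, w = img_a.shape
    a = F.pad(img_a, [pad] * 4, mode="replicate")
    b = F.pad(img_b, [pad] * 4, mode="replicate")
    top = int(torch.randint(0, 2 * pad + 1, ()))
    left = int(torch.randint(0, 2 * pad + 1, ()))
    return (a[..., top:top + h, left:left + w],
            b[..., top:top + h, left:left + w])

x_t, x_tp1 = torch.rand(8, 3, 100, 100), torch.rand(8, 3, 100, 100)
aug_t, aug_tp1 = random_shift_pair(x_t, x_tp1)
print(aug_t.shape, aug_tp1.shape)  # both stay (8, 3, 100, 100)
```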
§.§.§ Behavioral Cloning Policy Network Architecture
The BC policy network consists of an image encoder with mostly the same architecture as the inverse model, except that the number of convolutional filters per layer is 32, and the hidden size of the second feedforward layer is 50. Unlike the inverse model, the policy network acts on one image at a time rather than a pair. After the image encoder portion, the policy network consists of an additional two feedforward layers (with a latent dimensionality of 64) representing the policy head. Further, the policy is conditioned on a 1-dimensional grasp state variable as described in Section <ref>; this variable is concatenated with the 50-dimensional latent embedding output by the second feedforward layer of the image encoder, and the resulting 51-dimensional embedding is passed on to the policy head, which outputs an action prediction that best imitates the expert demonstrator's action given some input observation.
As with the inverse model, we apply random shifts data augmentation while training the BC policy.
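For illustration, a sketch of the policy network and its grasp-state conditioning is given below; it is not the exact implementation, and the width of the first feedforward layer in the encoder is an assumption.

```python
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Image encoder (32-filter conv stack, 50-dim embedding) plus a policy
    head conditioned on the 1-D binary grasp state."""

    def __init__(self, action_dim=4):
        super().__init__()
        layers, in_ch = [], 3          # acts on a single image, not a pair
        for i in range(4):
            layers += [nn.Conv2d(in_ch, 32, kernel_size=3,
                                 stride=2 if i == 0 else 1, padding=0),
                       nn.BatchNorm2d(32), nn.ReLU()]
            in_ch = 32
        self.encoder = nn.Sequential(
            *layers, nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),   # width of this layer is an assumption
            nn.Linear(256, 50), nn.ReLU())   # 50-dim image embedding
        # Policy head: two feedforward layers with latent dimensionality 64.
        self.policy_head = nn.Sequential(
            nn.Linear(51, 64), nn.ReLU(),
            nn.Linear(64, action_dim))

    def forward(self, obs, grasp_state):
        z = self.encoder(obs)                       # (B, 50)
        z = torch.cat([z, grasp_state], dim=-1)     # (B, 51) after concatenation
        return self.policy_head(z)

policy = BCPolicy(action_dim=4)
action = policy(torch.rand(2, 3, 100, 100), torch.zeros(2, 1))
print(action.shape)  # torch.Size([2, 4])
```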
§.§ Tasks
In this section, we discuss the tasks introduced in Section <ref> in more detail. All tasks involve 3-DoF position control (and 1-DoF binary gripper control), except for the toy packing tasks, which involve 6-DoF position/orientation control.
§.§.§ Environment Generalization Tasks
The tasks used for the environment generalization experiments include the following:
* reaching: The goal is to reach the end-effector towards the red cube. The environment contains just the red cube; or the red cube and a blue cube distractor; or the red cube and a green sponge distractor; or all three objects. The initial positions of the objects are randomized within a 50 × 50 cm section of the environment.
* cube grasping: The goal is to grasp the red cube and lift it off the ground. The cube is the only object in the environment. The environment background can be one of seven: plain white background, rainbow floral texture, green floral texture, blue floral texture, orange plate, green plate, or blue plate. The initial position of the cube is randomized within a 30 × 30 cm section of the environment.
* plate clearing: The goal is to grasp a target object resting on a plate, lift it up, and transfer it to a location off to the right of the plate. The target object is either a green sponge, yellow sponge, blue towel, or pink towel. The initial position of the target object is randomized within a 20 × 20 cm section of the plate.
* toy packing: The goal is to maneuver around a wall, reach towards the toy, grasp it, lift it up, move over to the open box, and release the toy into the box. In the initial environment setting, the end-effector is positioned behind a thin cardboard box (which we call the “wall”) such that the toy is fully occluded in the eye-in-hand image observations at the beginning of the episode. There are twelve types of toys that our policies are evaluated against (see Figure <ref> for pictures of the toys):
black suit vampire toy,
white mummy toy,
orange body jack-o'-lantern toy,
red cape vampire toy,
purple body green zombie toy,
crazy witch toy,
green body jack-o'-lantern toy,
purple body jack-o'-lantern,
red dentures w/ USA hat toy,
green Christmas tree toy,
Santa Claus toy, and
brown reindeer toy.
The initial positions of the toy, end-effector, and two boxes are all randomized within a 40 × 10 cm section of the table, and the toy's position relative to the boxes is also randomized within a 15 × 10 cm section in front of the open box. In addition, the end-effector is initially angled towards the wall, as opposed to being oriented top-down as in the other tasks (see Figure <ref> for an illustration). Therefore, 6-DoF control is necessary for solving the task.
§.§.§ Task Generalization Tasks
We now describe the tasks used in the task generalization experiments:
* cube stacking: The goal is to grasp the red cube, lift it up, stack it on top of the blue cube, and release the red cube. The initial positions of the cubes are fixed relative to each other, but vary relative to the blue floral texture background within a 20 × 20 cm section of the environment. Robot demonstrations perform cube grasping, while human demonstrations perform full cube stacking or portions of the task that follow the grasp.
* cube pick-and-place: The goal is to grasp the red cube, lift it up, move it over to the green plate, and release it onto the plate. The initial positions of the cube and plate are fixed relative to each other, but vary relative to the plain white background within a 20 × 20 cm section of the environment. Robot demonstrations perform cube grasping, while human demonstrations perform full cube pick-and-place or portions of the task that follow the grasp.
* plate clearing: The goal is the same as described earlier for the plate clearing environment generalization task. However, here we only manipulate one target object: the green sponge. The initial position of the sponge is randomized within a 20 × 20 cm section of the plate. Robot demonstrations perform sponge grasping, while human demonstrations perform full plate clearing or portions of the task that follow the grasp.
* toy packing: The goal is the same as described earlier for the toy packing environment generalization task. However, here we only manipulate one target object: the black suit vampire toy. As before, the initial position of the toy, end-effector, and two boxes are randomized within a 40 × 10 cm section of the table, and the toy's position relative to the boxes is also randomized within a 15 × 10 cm section in front of the open box. Robot demonstrations perform toy grasping, while human demonstrations perform full toy packing or portions of the task that follow the grasp.
Please see our https://giving-robots-a-hand.github.io/project website for further details and visualizations of data collected in these tasks and environments (expand the page using the button at the very bottom).
§.§ Datasets
§.§.§ Robot Play Datasets
How play data is collected. We gather play data in a similar manner as <cit.>: a human teleoperator controlling a Franka Emika Panda robot arm executes a diverse repertoire of behaviors in an environment, exploring the observation and action spaces while interacting with objects in the scene. For example, in an environment containing two cubes, the teleoperator may wave the robotic end-effector around, reach towards a cube, grasp and lift up a cube, release and drop the cube, stack one cube on top of the other, and so on. The continuous sequences of observations captured by the eye-in-hand camera and the actions commanded by the teleoperator are logged and stored into a replay buffer 𝒟_play^r for inverse model training. See the subsections below and the https://giving-robots-a-hand.github.io/project website for examples of play datasets.
Why play data is easy to collect. The key advantage of using play data is that it is easy to collect meaningful interaction data in large quantities <cit.> due to the following:
* There is no need to frequently reset the manipulator and objects to some initial state (which is typically necessary when collecting expert demonstrations).
* There is no notion of maximum episode length or time limit (allowing a teleoperator to execute a variety of behaviors in a single contiguous stretch of time, pausing only when desired).
* The teleoperator's knowledge of object affordances leads to interesting interactions with objects (as opposed to a script that executes purely random actions, which leads to slower exploration of the interaction space unless the data collection process is manually biased towards more meaningful interactions, as in <cit.>).
* The play behaviors do not have to solve any particular task (which makes it easier to collect play data than expert task-specific demonstrations).
As a result, we can quickly collect a play dataset for a given environment, or set of environments, that is sufficient for training the inverse dynamics model. In addition, a single play dataset could in principle be used to develop an inverse model that is reused for many different downstream tasks, effectively amortizing the cost of collecting it.
Details on collected play datasets. We collect four robot play datasets and train four corresponding inverse models. Each inverse model is shared across one environment generalization experiment and one task generalization experiment. We discuss the details of each play dataset below:
* reaching and cube stacking dataset: We collect 20,000 timesteps of play data at 5 Hz (approximately 67 minutes) in an environment with a blue floral background and three objects: a red cube, a blue cube, and a green sponge. The play data behaviors include waving the end-effector around, reaching towards each object, grasping and lifting up each object, releasing and dropping an object, stacking an object on top of another, and so on. This play dataset is shared for the reaching environment generalization and cube stacking task generalization tasks.
* cube grasping and cube pick-and-place dataset: We collect 52,400 steps of play data at 5 Hz (approximately 171 minutes) in multiple environments containing a red cube, each having a different background that the red cube rests on: plain white background, rainbow floral texture, green floral texture, blue floral texture, orange plate, green plate, or blue plate. The play data behaviors include waving the end-effector around, reaching towards the cube, grasping and lifting up the cube, releasing and dropping the cube, and so on. This play dataset is shared for the cube grasping environment generalization and cube pick-and-place task generalization tasks.
* plate clearing environment generalization and task generalization dataset: We collect 20,000 steps of play data at 5 Hz (approximately 67 minutes) in multiple environment configurations, each containing a different target object: green sponge, yellow sponge, blue towel, and pink towel. The play data behaviors include waving the end-effector around, reaching toward the objects, grasping and lifting up the objects, releasing and dropping the objects, and so on. This play dataset is shared for both plate clearing environment generalization and task generalization experiments.
* toy packing environment generalization and task generalization dataset: We collect 10,000 steps of play data at 4 Hz (approximately 42 minutes) in multiple environment configurations, each containing a different target toy. The toys included in this dataset are the following:
white mummy toy,
orange body jack-o'-lantern toy,
green body jack-o'-lantern toy,
crazy witch toy,
purple body green zombie toy,
purple body jack-o'-lantern,
black suit vampire toy,
red dentures w/ USA hat toy,
red cape vampire toy,
pirate bomb toy,
green witch w/ broomstick toy,
purple dentures w/ eyes toy,
eyeball toy,
skull toy, and
X-ray skeleton toy.
The play data behaviors include waving the end-effector around, reaching around the wall and towards the boxes, reaching towards the toys, grasping and lifting up the toys, releasing and dropping the toys, and so on. This play dataset is shared for both toy packing environment generalization and task generalization experiments.
Please see our https://giving-robots-a-hand.github.io/project website for visualizations of these play datasets (expand the page using the button at the very end).
§.§.§ Expert Demonstration Datasets
In each environment generalization or task generalization experiment, we collect a set of expert robot demonstrations and a set of expert human demonstrations. Below we discuss details of the datasets collected for each experiment. Please refer to Figure <ref> and Figure <ref> for a visualization of the distribution of environments or tasks that the robot and human datasets are each collected from. All demonstrations are collected at 5 Hz (or 4 Hz for the toy packing tasks), as is done while collecting the play datasets.
* reaching (environment generalization): We collect 60 robot demonstrations with no distractor objects and 100 human demonstrations with both the blue cube and green sponge as distractors.
* cube grasping (environment generalization): We collect 100 robot demonstrations only in an environment with a plain white background and 20 human demonstrations from each of the following environment backgrounds: rainbow floral texture, green floral texture, blue floral texture, orange plate, green plate, and blue plate.
* plate clearing (environment generalization): We collect 30 robot demonstrations with just the green sponge as a target object and 20 human demonstrations with each of the following target objects: yellow sponge, blue towel, and pink towel.
* toy packing (environment generalization): We collect 100 robot demonstrations with just the black suit vampire toy and 20 human demonstrations with each of the following toys: white mummy toy, orange body jack-o'-lantern toy, red cape vampire toy, purple body green zombie toy, and crazy witch toy.
* cube stacking (task generalization): We collect 25 robot demonstrations and 130 human demonstrations. The robot demonstrations perform red cube grasping; the human demonstrations perform cube stacking (stack red cube onto blue cube) or portions of the task that follow the grasp. For this task, a majority of the human demonstrations do the latter and are thus able to be collected very quickly.
* cube pick-and-place (task generalization): We collect 20 robot demonstrations and 70 human demonstrations. The robot demonstrations perform cube grasping; the human demonstrations perform cube pick-and-place (place cube onto plate) or portions of the task that follow the grasp.
* plate clearing (task generalization): We collect 40 robot demonstrations and 25 human demonstrations. The robot demonstrations perform sponge grasping; the human demonstrations perform plate clearing (remove green sponge off of plate) or portions of the task that follow the grasp.
* toy packing (task generalization): We collect 100 robot demonstrations and 100 human demonstrations. The robot demonstrations perform toy grasping; the human demonstrations perform toy packing (lift the toy and drop it into the box) or portions of the task that follow the grasp.
Please see our https://giving-robots-a-hand.github.io/project website for visualizations of these expert demonstration datasets (expand the page using the button at the very end).
§.§ CycleGAN Analysis
In Figure <ref>, we show sample human-to-robot image translations output by CycleGAN <cit.>. The translations are successful in some cases but noisy in others. Noisy translations hinder final BC policy performance, resulting in lower performance than simple image masking.
§.§ Inverse Dynamics Model Analysis
§.§.§ Sample Inverse Dynamics Model Action Predictions
In Figure <ref>, we show sample outputs from the inverse dynamics model when labeling human video demonstrations with actions for the toy packing task. The middle column contains the original human images. The leftmost column contains human images translated to the robot domain via CycleGAN. The rightmost column contains human images masked according to our proposed image masking scheme. We highlight major mistakes made by the inverse model in red. Due to noise in the CycleGAN translations, we see that there exists a significant nonzero rotation component in several of the action predictions in the leftmost column, which causes the robot gripper to rotate excessively in some cases (we show such behavior in the videos for robot + human w/ CycleGAN on our https://giving-robots-a-hand.github.io/project website). In contrast, we avoid such issues using our image masking method.
§.§.§ Validating Inverse Dynamics Model Accuracy
Learning an accurate inverse dynamics model is not unusually challenging given that we leverage eye-in-hand camera observations in this work. Suppose we have a tuple (o, a, o^'), where o represents the current image observation, a represents the current action, and o^' represents the next image observation. Recall that the inverse dynamics modeling problem is to predict the action a giving rise to the change in observations. Predicting the action is fairly intuitive in our framework: for example, if an object in the eye-in-hand camera view is moving to the right, we can infer that the hand or gripper is moving to the left. The inverse model can use any visual cue in the scene as a reference point while learning to predict the dynamics.
Regardless of the perceived difficulty, there are several ways we can validate the behavior of a learned inverse model. Quantitatively, we can check the performance of the inverse dynamics model on a validation set (e.g., held-out robot play data). Qualitatively, we can check whether the predicted actions for a human video are sensible. For instance, we can verify that the action predictions are smooth and not noisy, e.g., by outputting a chunk of observation-action pairs and observing a coherent action trajectory over a continuous stretch of time.
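For illustration, these two checks could look roughly as follows, assuming an inverse model with the two-image interface sketched earlier (in eval mode) and observation tensors shaped (1, 3, 100, 100); the helper names are hypothetical.

```python
import torch

@torch.no_grad()
def validation_mae(inv_model, val_pairs):
    """Mean absolute error of predicted actions on held-out robot play data.
    `val_pairs` yields (obs_t, obs_tp1, action) tensors."""
    errors = [(inv_model(o, o_next) - action).abs().mean()
              for o, o_next, action in val_pairs]
    return torch.stack(errors).mean()

@torch.no_grad()
def label_and_check_smoothness(inv_model, frames):
    """Label consecutive (masked) human frames with actions and report the
    mean frame-to-frame jump; large jitter flags a noisy inverse model."""
    actions = torch.cat([inv_model(frames[t], frames[t + 1])
                         for t in range(len(frames) - 1)])
    jitter = (actions[1:] - actions[:-1]).abs().mean()
    return actions, jitter
```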
Lastly, we note that, if desired, one could avoid collecting play data and training an inverse model by inferring actions via visual odometry or Structure-from-Motion pose estimation methods.
§.§ Detailed Experimental Results
Table <ref> contains the full experimental results that were aggregated to produce Table <ref> in Section <ref>. All success rates are evaluated over 10 trials for all tasks (except for the toy packing task, in which we use 20 trials to compute each success rate). Initial object positions are distributed according to the configurations described in Appendix <ref>. Please see our https://giving-robots-a-hand.github.io/project website for videos of the learned policies.
In addition, Table <ref> contains detailed ablation experiment results that were summarized to produce Table <ref> in Section <ref>.
|
http://arxiv.org/abs/2307.04351v1 | 20230710052343 | MD-HIT: Machine learning for materials property prediction with dataset redundancy control | [
"Qin Li",
"Nihang Fu",
"Sadman Sadeed Omee",
"Jianjun Hu"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cs.LG"
] |
Materials datasets typically contain many redundant (highly similar) materials because of the tinkering material design practice used throughout the history of materials research. For example, the Materials Project database contains many perovskite cubic structure materials similar to SrTiO_3. This sample redundancy makes random train-test splitting unreliable for machine learning model evaluation, so the ML models tend to achieve over-estimated predictive performance, which is misleading for the materials science community. This issue is well known in the field of bioinformatics for protein function prediction, in which a redundancy reduction procedure (CD-HIT <cit.>) is routinely applied to reduce sample redundancy by ensuring that no pair of samples has a sequence similarity greater than a given threshold. This paper surveys the over-estimated ML performance reported in the literature for both composition based and structure based material property prediction. We then propose a material dataset redundancy reduction algorithm called MD-HIT and evaluate it with several composition based and structure based distance thresholds for reducing dataset sample redundancy. We show that with this control, the predicted performance tends to better reflect the models' true prediction capability. Our MD-HIT code can be freely accessed at <https://github.com/usccolumbia/MD-HIT>
§ INTRODUCTION
Density functional theory (DFT) level accuracy of material property prediction <cit.> and >0.95 R^2 for thermal conductivity prediction <cit.> with less than a hundred training samples have been routinely reported recently by an increasing list of machine learning algorithms in the materials informatics community. In <cit.>, an AI model was shown to be able to predict the formation energy of a hold-out test set containing 137 entries from their structure and composition with a mean absolute error (MAE) of 0.064 eV/atom, which significantly outperforms DFT computations for the same task (discrepancies of >0.076 eV/atom). In another related work in Nature Communications by the same group <cit.>, a mean absolute error (MAE) of 0.07 eV/atom was achieved for composition-only formation energy prediction using deep transfer learning, which is comparable to the MAE of DFT computation. Pasini et al. <cit.> reported that their multitasking neural networks can estimate the material properties (total energy, charge density and magnetic moment) for a specific configuration hundreds of times faster than first-principles DFT calculations while achieving comparable accuracy. In <cit.>, the authors claimed their graph neural network models can predict the formation energies, band gaps, and elastic moduli of crystals with better than DFT accuracy over a much larger data set. In <cit.>, Farb et al. showed numerical evidence that ML model predictions deviate from DFT less than DFT deviates from experiment for all nine properties that they evaluated over the QM9 molecule dataset. They also claimed the out-of-sample prediction errors with respect to the hybrid DFT reference were on par with, or close to, chemical accuracy. In <cit.>, Tian et al. reported that current ML models can achieve accurate property prediction (formation energy, band gap, bulk and shear moduli) using composition alone without using structure information, especially for compounds close to the thermodynamic convex hull. However, this good performance may be partially due to the over-represented redundancy in their test samples obtained with 6:2:2 random selection from matminer datasets without redundancy control. To illustrate this point, Figure <ref> shows the formation energy and band gap landscapes over the MP composition space, which are generated by mapping the Magpie features of all unique MP compositions to the 2D space using t-SNE and then plotting the property surface. Both figures show that there exists a large number of local regions with smooth or similar property values. Random splitting of samples in those regions into training and test sets may lead to information leakage and over-estimation of the prediction performance.
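Such a composition map can be produced, for example, by featurizing each formula with matminer's Magpie preset and embedding the features with scikit-learn's t-SNE; the sketch below is illustrative only, and the column name and t-SNE settings are assumptions rather than the exact procedure used here.

```python
import pandas as pd
from matminer.featurizers.composition import ElementProperty
from pymatgen.core import Composition
from sklearn.manifold import TSNE

def composition_map(df, formula_col="formula"):
    """Embed Magpie composition features into 2D with t-SNE; the returned
    (x, y) columns can be scatter-plotted and colored by a property such as
    formation energy or band gap to visualize the property landscape."""
    featurizer = ElementProperty.from_preset("magpie")
    feats = [featurizer.featurize(Composition(f)) for f in df[formula_col]]
    coords = TSNE(n_components=2, init="pca", random_state=0).fit_transform(
        pd.DataFrame(feats).values)
    out = df.copy()
    out["x"], out["y"] = coords[:, 0], coords[:, 1]
    return out
```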
Despite these encouraging successes, the DFT-accuracy reports of these ML models for material property prediction should be interpreted cautiously, as they are all average performances evaluated over mostly randomly held-out samples that come from unexpectedly highly redundant datasets. Materials databases such as the Materials Project and OQMD are characterized by the existence of many redundant (highly similar) materials due to the tinkering material design practice over the history of materials research. For example, the Materials Project database has many perovskite cubic structure materials similar to SrTiO_3. This sample redundancy makes random train-test splitting unreliable for machine learning model evaluation, so that the ML models tend to achieve over-estimated predictive performance, which is misleading for the materials science community. This issue is well known in the areas of ecology <cit.> and bioinformatics for protein function prediction, in which a redundancy reduction procedure (CD-HIT <cit.>) is required to reduce the sample redundancy by ensuring no pair of samples has a sequence similarity greater than a given threshold, e.g., 95% sequence identity. In a recent work in 2023, it was also shown that excellent benchmark scores may not imply good generalization performance <cit.>.
The over-estimation of ML performance for materials has been investigated in a few studies. In <cit.>, Meredig et al. examined the extrapolation performance of ML methods for materials discovery. They found that traditional ML metrics (even with cross-validation (CV)) overestimate model performance for materials discovery and introduced leave-one-(material)-cluster-out cross-validation (LOCO CV) to objectively evaluate the extrapolation performance of ML models. They especially highlighted that materials scientists often intend to extrapolate with trained ML models, rather than interpolate, to find new functional materials, and that sampling in materials training data is typically highly non-uniform. So the high interpolation performance of ML models trained with datasets with high sample redundancy (e.g., due to doping) does not indicate a strong capability to discover new materials (out-of-domain (OOD) samples). They showed that current ML models have much greater difficulty generalizing from the training clusters to a distinct test cluster. They suggested the use of uncertainty quantification (UQ) on top of ML models to evaluate and explore candidates in new regions of the design space. Stanev et al. <cit.> also discussed this generalization issue across different superconductor families. In <cit.>, Xiong et al. propose K-fold forward cross-validation (FCV) as a new way of evaluating exploration performance in materials property prediction by first sorting the samples by their property values before CV splitting. They showed that current ML models' prediction performance is actually very low as shown by their proposed
FCV evaluation method and the proposed exploratory prediction accuracy. A similar study for thermal conductivity prediction <cit.> also showed that when ML models are trained with low property values, they are usually not good at predicting samples with high property values, indicating the weak extrapolation capability. These studies show the need for the material property model developers to focus more on extrapolative prediction performance rather than average interpolation performance over test samples with high similarity to training samples due to dataset redundancy.
The material dataset redundancy issue has also been studied recently from the point of view of training efficient ML models or achieving sample efficiency. In <cit.>, Magar and Farimani proposed an adaptive sampling strategy to generate/sample informative samples for training machine learning models with the least amount of data. They assumed that informative samples for a model are those with the highest K (e.g., 250) MAEs in the test set, which are iteratively added to the initial training set of 1000 samples. Another selection approach is to add samples similar to the data points of the training set having the maximum MAE during training. They showed that their sampling algorithms can create smaller training sets that obtain better performance than the baseline CGCNN model trained with all training samples. This approach can be used with active learning to build high-performance ML models in a data-efficient way. In a more recent work <cit.>, Li et al. studied the redundancy in large material datasets and found that a significant degree of redundancy is present across multiple large datasets for various material properties and that up to 95% of the data can be removed from ML model training with little impact on prediction performance for test sets sampled randomly from the same distribution. They further showed that the redundant data is due to over-represented material types and does not help improve the low performance on out-of-distribution samples. They proposed a pruning algorithm similar to <cit.>, which first splits the training set into A and B, then trains an ML model on A and evaluates the prediction errors on samples in B. After that, the test samples with low MAEs are pruned and the remaining samples are merged and split into A and B again, and so on. Both approaches rely on the iterative training of ML models and are specific to a given material property. They also proposed an uncertainty quantification based active learning method to generate sample-efficient training sets for model training. While these works recognize the possibility of building data-efficient training sets, they did not address how redundancy can lead to the over-estimated ML model performance commonly seen in the literature. Moreover, all their approaches for building informative training sets are material-property specific, making it difficult to generate a single non-redundant benchmark dataset for benchmarking material property prediction algorithms across all material properties. Another limitation of these methods is that they use different similarity thresholds when applied to different datasets, which makes the resulting non-redundant datasets have different minimum distances among the samples.
Since material property prediction research is now pivoting toward developing high-accuracy ML models that are generalizable and transferable between different materials (including materials of different families), healthy evaluation of ML algorithms is needed to recognize the limitations of existing ML models and to guide the development of new models with real progress. Within this context, reducing the dataset redundancy of both training and test sets can avoid over-estimation of ML model performance, ameliorate the training bias towards samples in crowded areas, and push model developers to focus on improving extrapolation performance instead of only interpolation performance.
In this paper, we argue for the importance of redundancy control in training and test set selection to achieve objective performance evaluation. Neglecting this has led to many over-estimated ML performances reported in the literature for both composition based and structure based material property prediction. We conduct ML experiments to show that the over-estimated models usually fail for samples that are distant from the training samples (lack of extrapolation performance). We then developed two redundancy reduction algorithms (MD-HIT-composition and MD-HIT-structure) with open-source code for reducing the dataset redundancy of both composition datasets and structure datasets. These two algorithms are based on composition and structure based distance metrics, which are used to add only samples that are above a defined distance threshold from those already selected. After this data redundancy control, the dataset can be split randomly into training, validation, and test sets to achieve objective performance evaluation. We show that with this dataset redundancy control, the reported performance tends to reflect the models' true prediction capability.
§ METHOD
§.§ MD-HIT-composition algorithm for redundancy reduction of composition datasets
The early version of CD-HIT algorithm <cit.> of bioinformatics was originally developed to handle large-scale sequence datasets efficiently. It employs a clustering approach to group similar sequences together based on a defined sequence identity threshold. Within each cluster, only one representative sequence, called the "centroid," is retained, while the rest of the highly similar sequences are considered duplicates and removed. However, the clustering approach is still inefficient to deal with datasets with hundreds of thousands of sequences. The next generation of CD-HIT further improved the efficiency by using a greedy algorithm <cit.>.
Both our MD-HIT-composition and MD-HIT-structure redundancy reduction algorithms are greedy incremental algorithms designed based on this idea. In our case, MD-HIT starts the selection process with a seed material (H2O by default). It then sorts the remaining materials by the number of atoms (instead of formula length) and one-by-one classifies each of them as a redundant or representative material based on its similarities to the representatives already selected. The composition similarities are estimated using the ElMD (Earth Movers' Distance) package, which provides options to choose linear, chemically derived, and machine-learned similarity measures. By default, we used the mendeleev similarity and the magpie similarity <cit.> for our non-redundant composition dataset generation. The magpie distance is defined as the Euclidean distance between the widely used Magpie composition feature vectors <cit.> of two materials. Several other material composition descriptors available in the matminer materials informatics package can also be used. Here we focused on ElMD and the magpie feature based distance function for redundancy control of composition datasets for materials property prediction.
The complete composition similarity metrics can be found in Table <ref>.
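The following is a minimal, illustrative sketch of the greedy selection logic described above; the distance function is passed in as a callable (e.g., Euclidean distance between Magpie feature vectors or the ElMD composition distance), the candidate list is assumed to be pre-sorted by number of atoms, and the random feature vectors in the toy usage are placeholders.

```python
import numpy as np

def md_hit(items, distance, threshold, seed_index=0):
    """Greedy incremental redundancy reduction (MD-HIT-style sketch).

    `items` is assumed to be pre-sorted by number of atoms (seed first);
    `distance` is any pairwise metric between two items."""
    representatives = [items[seed_index]]
    for i, candidate in enumerate(items):
        if i == seed_index:
            continue
        # Keep the candidate only if it is farther than `threshold` from
        # every representative selected so far; otherwise it is redundant.
        if all(distance(candidate, rep) > threshold for rep in representatives):
            representatives.append(candidate)
    return representatives

# Toy usage with placeholder random "Magpie" feature vectors:
feats = {f: np.random.rand(132) for f in ["H2O", "SrTiO3", "BaTiO3", "NaCl"]}
euclid = lambda a, b: float(np.linalg.norm(feats[a] - feats[b]))
print(md_hit(list(feats), euclid, threshold=1.0))
```

Note that larger thresholds discard more candidates and hence yield smaller, less redundant datasets, which is exactly how the non-redundant datasets of different sizes are generated below.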
§.§ MD-HIT-Structure algorithm for redundancy reduction of structure datasets
The MD-HIT-structure algorithm uses the same greedy adding approach as MD-HIT-composition, except that it uses a structure based distance metric. However, due to the varying numbers of atoms in different crystals, it is non-trivial to compare the similarity of two given structures, since most structure descriptors have different dimensions for structures with different numbers of atoms. Here we chose two structure distances for redundancy reduction. One is a distance metric based on XRD features calculated from crystal structures: we first smooth the XRD pattern calculated with the Pymatgen XRDCalculator module using a Gaussian smoothing operation and then sample 900 points evenly distributed between 0 and 90 degrees, which leads to XRD feature vectors of a fixed dimension of 900.
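A sketch of such a fixed-length XRD descriptor is shown below; the use of pymatgen's XRDCalculator and the 900-point grid over 0–90 degrees follow the description above, while the binning scheme, smoothing width, and normalization are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from pymatgen.analysis.diffraction.xrd import XRDCalculator

def xrd_feature(structure, n_points=900, two_theta_max=90.0, sigma=2.0):
    """Fixed-length XRD descriptor: bin the stick pattern from pymatgen's
    XRDCalculator onto `n_points` evenly spaced 2-theta values in
    [0, two_theta_max] degrees and apply Gaussian smoothing."""
    pattern = XRDCalculator().get_pattern(structure,
                                          two_theta_range=(0, two_theta_max))
    grid = np.zeros(n_points)
    for angle, intensity in zip(pattern.x, pattern.y):
        idx = min(int(angle / two_theta_max * n_points), n_points - 1)
        grid[idx] += intensity
    feat = gaussian_filter1d(grid, sigma=sigma)
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat
```

The Euclidean distance between two such vectors is then used as the structure similarity in the greedy selection sketched earlier.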
We also selected the OrbitalFieldMatrix (OFM) feature to calculate the distance between two structures. This feature has also been used in <cit.> to select informative samples for ML model training. It is a set of descriptors that encodes the distribution of electrons in different atomic orbitals within a crystal structure, providing a comprehensive representation of the electronic structure and bonding characteristics of a material, and it has a fixed dimension (1024).
Similar to MD-HIT-composition, the MD-HIT-structure algorithm starts the selection process with a seed material (H2O by default) placed in the non-redundant set. It then sorts the remaining materials in the candidate set by the number of atoms (instead of formula length) and one-by-one classifies each of them as a redundant or representative material based on its similarities (the Euclidean distance of XRD features or OFM features) to the representatives already selected into the non-redundant set. Redundant samples are discarded while non-redundant ones are added to the non-redundant set, until the candidate set is empty.
§.§ Composition based materials property prediction algorithms
We evaluate two state-of-the-art composition based material property prediction algorithms including Roost <cit.> and Crabnet (the Compositionally Restricted Attention-Based network)<cit.> to study the impact of dataset redundancy on their performance. The Roost algorithm is a machine learning approach specifically designed for materials property prediction based on the material composition. It utilizes a graph neural network framework to learn relationships between material compositions and their corresponding properties. CrabNet is a transformer self-attention based model for composition only material property prediction. It matches or exceeds current best-practice methods on nearly all of 28 total benchmark datasets.
§.§ Structure based material property prediction algorithms
We evaluate two state-of-the-art structure based material property prediction algorithms, ALIGNN (Atomistic Line Graph Neural Network) <cit.> and DeeperGATGNN <cit.>, to study the impact of dataset redundancy on their performance. The ALIGNN model addresses a major limitation of the majority of current Graph Neural Network (GNN) models used for atomistic predictions, which rely only on atomic distances while overlooking bond angles. Bond angles play a crucial role in distinguishing various atomic structures, and small deviations in bond angles can significantly impact several material properties. ALIGNN is a GNN architecture that conducts message passing on both the interatomic bond graph and its corresponding line graph, which is specifically designed for bond angles. It has achieved state-of-the-art performance on most benchmark problems of Matbench <cit.>. The DeeperGATGNN algorithm is a global attention based graph neural network that uses differentiable group normalization and residual connections to build high-performance deep graph neural networks without performance degradation. It has achieved superior results on a set of material property prediction benchmarks.
§.§ Evaluation criteria
We use the following performance metrics for evaluating dataset redundancy's impact on model performance: Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and R-squared (R^2).
Mean Absolute Error (MAE):
MAE = 1/n∑_i=1^n| y_i - ŷ_i |
Root Mean Squared Error (RMSE):
RMSE = √(1/n∑_i=1^n (y_i - ŷ_i)^2)
R-squared (R^2):
R^2 = 1 - ∑_i=1^n (y_i - ŷ_i)^2/∑_i=1^n (y_i - y̅)^2
Where y_i represents the observed or true values, ŷ_i represents the predicted values, and y̅ represents the mean of the observed values. The summation symbol ∑ is used to calculate the sum of values, and n represents the number of data points in the dataset.
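For reference, these metrics can be computed with a few lines of NumPy; the snippet below is a direct transcription of the formulas above.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, RMSE and R^2 exactly as defined above."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    r2 = 1.0 - (err ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
    return mae, rmse, r2

print(regression_metrics([0.10, -0.20, 0.30], [0.12, -0.25, 0.28]))
```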
§ RESULTS
§.§ Datasets generation
We downloaded 125,619 cif structures from the Materials Project database, which correspond to 89,354 unique compositions. For compositions that correspond to multiple polymorphs, we choose the average material property value as the default property value for that composition, except for formation energy, for which we use the minimum value. We also dropped mp-101974 (HeSiO2), whose Magpie features could not be computed. We then removed all formulas with more than 50 atoms and obtained a non-duplicate composition dataset with 86,741 samples. We then use different similarity (distance) thresholds to generate non-redundant datasets. For the mendeleev similarity, we use distance thresholds of 0.5, 0.8, 1, 1.5, 2, 2.5 and 3 to generate seven non-redundant datasets, whose sizes range from 86,740 to 3,177. Similarly, we generate eight matscholar non-redundant datasets, whose sizes range from 50.82% to 2.33% of the total. We also applied the MD-HIT-structure algorithm to all 125,619 cif structures and used different thresholds to generate seven XRD non-redundant datasets and eight OFM non-redundant datasets.
After removal of redundancy based on varying degree of sample identity using MD-HIT algorithms, the details of all non-redundant datasets are shown in Table 2.
To visually understand the effect of redundancy removal, Figure <ref> shows the material distribution t-SNE maps of the full dataset and two non-redundant datasets. For each dataset, we calculate the Magpie composition features for all its samples and then use the t-SNE dimension reduction algorithm to map the features to a two-dimensional space. Figure 2(a) shows the distribution of the whole dataset, which is densely filled with highly redundant samples. Figure 2(b) shows the less redundant dataset Matscholar-nr generated with a threshold of 0.1; it contains only 50.82% of the samples while still covering the whole map. Figure 2(c) shows the Mendeleev-nr non-redundant dataset with only 4,930 samples, only 5.68% of the whole dataset, while still covering the whole map with much lower redundancy. The non-redundant datasets thus allow us to test the true generalization capability of models trained and tested on them.
§.§ Composition based material property prediction with redundancy control
To examine the material properties prediction performance of ML models using datasets with Mendeleev distance and Matscholar distance based redundancy control, we conducted a series of experiments to explore how the degree of redundancy affects the ML performance for formation energy and band gap prediction. The non-redundant datasets derived from the whole MP composition dataset with 86,740 samples using different thresholds were divided into training, validation, and testing sets with a ratio of 8:1:1, respectively. Figure <ref> and <ref> show a comparison of the performances of Roost and CrabNet for formation energy and band gap prediction on datasets of different sizes, filtered by Mendeleev distance thresholds of 0, 0.5, 0.8, 1, 1.5, 2, 2.5 and 3 and Matscholar distance thresholds of 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35 and 0.4.
Figure <ref>(a) shows the prediction performances (MAE and R^2) of Roost and CrabNet for formation energy prediction evaluated over the whole dataset and six non-redundant datasets. The performance of both models exhibits a deteriorating trend with increasing thresholds, which correspond to lower degrees of data redundancy, as evidenced by the diminishing R^2 and increasing MAE scores. For band gap prediction (Figure <ref>(b)), the R^2 scores of both models decrease gradually with the increase of the threshold. While the MAE scores exhibit a general uptrend, the increase is not monotonic with respect to the threshold; instead, they exhibit abrupt jumps at certain points. This could be due to outliers in the band gap datasets, which also shows that band gap prediction is more challenging.
Figure <ref> shows the ML performances over the matscholar-controlled non-redundant datasets. In Figure <ref>(a), we found that the correlations between the prediction performances of both Roost and CrabNet and the thresholds (or data redundancy) are much higher than those shown in Figure <ref>(a), indicating that the matscholar distance tends to generate more evenly distributed non-redundant datasets compared to the Mendeleev distance. However, these consistent trends of MAE and R^2 do not hold for the band gap prediction performance shown in Figure <ref>(b), in which the R^2 curves are similar to those found in Figure <ref>(b) while the band gap MAEs vary strongly across different thresholds. We have checked this phenomenon by running multiple experiments for each threshold and obtained similar results. One possible reason is that a large percentage of band gap samples have zero values. Overall, we found that removing the redundancy of the datasets allows us to obtain more objective performance estimates for ML models.
Through these experiments, we observe that without redundancy reduction, a significant portion of test samples is concentrated in crowded regions with low prediction errors. This occurs because the model may rely too heavily on the information from these redundant samples during the learning process while disregarding other, more diverse data features. Excessive sample redundancy can therefore lead to deceptively optimistic performance estimates on the test set.
§.§ Structure based material property prediction with redundancy control
To investigate redundancy control for structure-based material datasets, we downloaded the whole Materials Project database of 123,108 crystal structures along with their formation energies per atom and band gaps. We then use the XRD and OFM features of the crystal structures to define the similarity between pairs of structures, and control the structure redundancy with thresholds on the minimum XRD/OFM distance between any pair of samples. For XRD based non-redundant datasets, we used thresholds of 0.5, 0.6, 0.8, and 0.9. We then evaluated the material property prediction performances of two state-of-the-art graph neural network algorithms, DeeperGATGNN and ALIGNN. The results are shown in Figure <ref>(a) for formation energy prediction and Figure <ref>(b) for band gap prediction.
First, we found that the XRD distance provides good control of data redundancy, as the MAEs of both algorithms gradually increase with increasing XRD thresholds, which correspond to lower dataset redundancy (Figure <ref>(a)). Simultaneously, the R^2 scores decrease as the thresholds go up. For the band gap prediction results in Figure <ref>(b), the degree of dataset redundancy also affects the performance of both algorithms, though with a more complex effect compared to the formation energy prediction results. First, the R^2 scores of both algorithms drop with increasing thresholds. However, while the MAEs of DeeperGATGNN go up overall with increasing thresholds, the MAEs of ALIGNN over the non-redundant datasets with thresholds 0.8 and 0.9 are actually lower than the result over the dataset with threshold 0.6, even though the R^2 scores are lower. This discrepancy indicates that the band gap prediction problem involves a higher degree of nonlinearity and that outlier band gap values may also play a role here. This phenomenon is also observed in the composition based results in Figure <ref> and Figure <ref>.
We further evaluated how OFM-controlled data redundancy affects the algorithms' performance. Figure <ref>(a) and (b) show how the performances in terms of MAE and R^2 change with decreasing redundancy (or increasing thresholds). First, we found that both algorithms showed high consistency in formation energy prediction (Figure <ref>(a)): for both algorithms, the R^2 scores decrease in general with increasing thresholds while the MAE scores increase. This indicates that the OFM distance metric can be used as a good redundancy control method for crystal structure datasets. However, for band gap prediction, Figure <ref>(b) shows a surprising result: the R^2 scores go down with increasing threshold as expected for both algorithms, but the MAE scores also go down, which is unexpected since lower redundancy should make property prediction more challenging. To investigate this issue, we counted the percentages of near-zero band gap (<0.01 eV) samples in the test sets of all five datasets with thresholds 0, 0.15, 0.2, 0.45, and 0.7, and found that while the whole redundant dataset contains only 48.64% near-zero band gap samples, our MD-HIT algorithm happens to pick higher percentages of near-zero band gap samples, namely 64.09%, 67.81%, 84.52%, and 92.43% for thresholds 0.15, 0.2, 0.45, and 0.7, respectively, which makes the prediction much easier and explains why the MAEs drop. To further illustrate this data bias, we plotted scatter plots of the band gaps predicted by DeeperGATGNN over the whole dataset and two non-redundant datasets. We can clearly see the dominance (92.43%) of near-zero samples in the non-redundant dataset with threshold 0.7, which makes the prediction much easier compared to the whole dataset. This data bias may be reduced by choosing a different seed structure rather than the SrTiO_3 used in this experiment. It also shows the importance of watching for data bias, which can easily lead to over-estimated ML model performance in material property prediction.
§ CONCLUSION
Large material databases such as the Materials Project usually contain a high degree of redundancy, which causes biased ML models and over-estimated performance evaluations due to the redundancy between randomly selected test samples and the remaining training samples. The DFT-level accuracy claimed in the literature, averaged over all data samples, does not match the common needs of materials scientists, who usually want to discover new materials that are different from the known training samples; this makes it important to evaluate and report extrapolation rather than interpolation performance for material property prediction.
Here we propose and develop two material dataset redundancy reduction algorithms based on a greedy strategy inspired by the CD-HIT algorithm from bioinformatics. We use two composition distance metrics and two structure distance metrics, with distance thresholds, to control the sample redundancy of our composition and structure datasets. Our benchmark results for two composition based and two structure based material property prediction algorithms on two material properties (formation energy and band gap) showed that the prediction performance of current ML models tends to degrade when redundant samples are removed, leading to a more realistic measure of the prediction performance of current ML material property models. The availability of our easy-to-use open-source code for MD-HIT-composition and MD-HIT-structure makes it easy for researchers to conduct objective evaluations and report realistic performance of their ML models for material property prediction. It should also be noted that the current multi-threaded implementation of our MD-HIT algorithms is still slow, and further improvements are highly desirable.
§ DATA AND CODE AVAILABILITY
The source code and the non-redundant datasets can be freely accessed at https://github.com/usccolumbia/MD-HIT
§ CONTRIBUTION
Conceptualization, J.H.; methodology,J.H. Q.L.,S.L.,E.S.,Y.Z.; software, J.H., S.S.,Y.S., S.O.; resources, J.H.; writing–original draft preparation, J.H., S.S., Y.S.,S.O.,S.L.,E.S.,Y.Z.; writing–review and editing, J.H; visualization, J.H. and S.S.; supervision, J.H.; funding acquisition, J.H.
§ ACKNOWLEDGEMENT
Qin Li would like to thank for the computing support of the State Key Laboratory of Public Big Data, Guizhou University.
|
http://arxiv.org/abs/2307.04496v1 | 20230710113448 | Distinguishing between Dirac and Majorana neutrinos using temporal correlations | [
"Bhavya Soni",
"Sheeba Shafaq",
"Poonam Mehta"
] | hep-ph | [
"hep-ph",
"quant-ph"
] |
|
http://arxiv.org/abs/2307.03979v1 | 20230708140755 | Attacking (EC)DSA scheme with ephemeral keys sharing specific bits | [
"M. Adamoudis",
"K. A. Draziotis",
"D. Poulakis"
] | cs.CR | [
"cs.CR",
"94A60"
] |
Attacking (EC)DSA scheme with ephemeral keys sharing specific bits
2010 Mathematics Subject Classification: 94A60.
In this paper, we present a deterministic attack on the (EC)DSA signature scheme, provided that several signatures are known such that the corresponding ephemeral keys share a certain number of bits without their values being known. By eliminating the shared blocks of bits between the ephemeral keys, we obtain a lattice of dimension equal to the number of signatures, which contains a vector whose components reveal the private key. We compute an upper bound for the distance of this vector from a target vector, and next, using Kannan's enumeration algorithm, we determine it and hence the secret key. The attack can be made highly efficient by appropriately selecting the number of shared bits and the number of signatures.
§ INTRODUCTION - STATEMENT OF RESULTS
In August 1991, the U.S. government's National Institute of
Standards and Technology (NIST) proposed an algorithm for digital
signatures. The algorithm is known as DSA, for Digital Signature
Algorithm <cit.>. It is an efficient
variant of the ElGamal digital signature scheme <cit.>
intended for use in electronic mail, electronic funds transfer,
electronic data interchange, software distribution, data storage,
and other applications which require data integrity assurance and
data authentication. In 1998, an elliptic curve analogue called
Elliptic Curve Digital Signature Algorithm (ECDSA) was proposed
and standardized <cit.>.
§.§ The (EC)DSA Signature Scheme
First, we recall the DSA scheme. The
signer selects a prime p of size between 1024 and 3072 bits with
increments of 1024, as recommended in FIPS 186-3 <cit.>.
Also, he selects a prime q of size 160, 224 or 256 bits, with q|p-1
and a generator g of the
unique order q subgroup G of the multiplicative group 𝔽_p^*
of the prime finite field 𝔽_p. Furthermore, he
selects randomly a ∈{1,…,q-1} and computes R = g^a mod p.
The public key of the signer is (p,q,g,R) and his private
key a. He also publishes a hash function
h : {0,1}^* →{0,…,q-1}.
To sign a message m∈{0,1}^*, he selects randomly
k ∈{1,…,q-1} which is the ephemeral key, and
computes
r = (g^k mod p) mod q and
s = k^-1(h(m)+ar) mod q.
The signature of m is (r,s). The signature is accepted as
valid if and only if the following holds:
r = ((g^{s^-1h(m) mod q} R^{s^-1r mod q}) mod p) mod q.
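To make the scheme concrete, the following is a self-contained toy sketch of DSA signing and verification exactly as written above; the parameters (p = 23, q = 11, g = 2) and the fixed hash value are illustrative only and offer no security.

```python
import random

def dsa_sign(h_m, a, p, q, g):
    """Return (r, s) with r = (g^k mod p) mod q, s = k^-1 (h(m) + a r) mod q."""
    while True:
        k = random.randrange(1, q)                    # ephemeral key
        r = pow(g, k, p) % q
        s = pow(k, -1, q) * (h_m + a * r) % q
        if r != 0 and s != 0:
            return r, s

def dsa_verify(h_m, r, s, R, p, q, g):
    """Accept iff r == ((g^(s^-1 h(m) mod q) * R^(s^-1 r mod q)) mod p) mod q."""
    w = pow(s, -1, q)
    u1, u2 = h_m * w % q, r * w % q
    return r == (pow(g, u1, p) * pow(R, u2, p) % p) % q

# Toy parameters: q = 11 divides p - 1 = 22, and g = 2 has order 11 mod 23.
p, q, g = 23, 11, 2
a = random.randrange(1, q)          # private key
R = pow(g, a, p)                    # public key
sig = dsa_sign(h_m=3, a=a, p=p, q=q, g=g)
print(dsa_verify(3, *sig, R=R, p=p, q=q, g=g))   # True
```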
Next, let us recall the ECDSA scheme. The signer selects an elliptic curve
E over 𝔽_p, a point P∈ E(𝔽_p) with order a prime
q of size at least 160 bits.
Following FIPS 186-3, the size of the prime p must belong to the set
{160,224,256,512}. Further, he chooses randomly
a ∈{1,…,q-1} and computes Q = aP.
Finally, he publishes a hash
function h : {0,1}^* →{0,…,q-1}.
The public
key of the signer is (E,p,q,P,Q) and his private key a.
To sign a message m, he selects randomly
k ∈{1,…,q-1} which is the ephemeral key and computes
kP = (x,y) (where x and y are regarded as integers between 0 and
p-1).
He computes
r = x mod q and
s = k^-1(h(m)+ar) mod q.
The signature of m is (r,s). The verifier computes
u_1 = s^-1h(m) mod q, u_2 = s^-1r mod q, u_1P+u_2Q = (x_0,y_0).
He accepts the signature if and only if r = x_0 mod q.
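An analogous toy sketch of ECDSA is given below; it uses the small textbook curve y^2 = x^3 + 2x + 2 over F_17 with base point (5, 1) of prime order 19, which is assumed here purely for illustration and is of course far too small for any real use.

```python
import random

# Toy curve y^2 = x^3 + 2x + 2 over F_17; P_BASE = (5, 1) has prime order 19.
A, FP, QO, P_BASE = 2, 17, 19, (5, 1)

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % FP == 0:
        return None                                   # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow((2 * y1) % FP, -1, FP) % FP
    else:
        lam = (y2 - y1) * pow((x2 - x1) % FP, -1, FP) % FP
    x3 = (lam * lam - x1 - x2) % FP
    return x3, (lam * (x1 - x3) - y1) % FP

def ec_mul(k, P):                                     # double-and-add
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P, k = ec_add(P, P), k >> 1
    return R

def ecdsa_sign(h, a):
    while True:
        k = random.randrange(1, QO)                   # ephemeral key
        x, _ = ec_mul(k, P_BASE)
        r = x % QO
        s = pow(k, -1, QO) * (h + a * r) % QO
        if r and s:
            return r, s

def ecdsa_verify(h, r, s, Qpub):
    w = pow(s, -1, QO)
    x0, _ = ec_add(ec_mul(h * w % QO, P_BASE), ec_mul(r * w % QO, Qpub))
    return r == x0 % QO

a = random.randrange(1, QO)         # private key
Qpub = ec_mul(a, P_BASE)            # public key Q = aP
r, s = ecdsa_sign(h=5, a=a)
print(ecdsa_verify(5, r, s, Qpub))  # True
```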
§.§ Previous Results
Researchers have explored various attacks on DSA schemes by analyzing the signature equation s= k^-1(h(m)+ar) mod q and using lattice reduction techniques such as LLL and CVP algorithms. One study focused on the use of a linear congruential pseudorandom number generator (LCG) for generating random numbers in DSA <cit.>, showing that combining the DSA signature equations with LCG generation equations can lead to a system of equations that provide the secret key. To recover the secret key, several heuristic attacks have been proposed <cit.> in another study, which assume the revelation of a small fraction of the corresponding nonce k. However, these attacks are based on heuristic assumptions, making it difficult to make precise statements on their theoretical behavior.
The first rigorous lattice attack on (EC)DSA was presented in <cit.>. The authors successfully decreased the security of (EC)DSA to a Hidden Number Problem (HNP), which can then be further reduced to an approximation Closest Vector Problem (CVP) for a specific lattice. The signer's secret key a can be computed using this reduction in polynomial time. The attack was also adapted to the case of ECDSA, as described in <cit.>.
The paper <cit.> describes an attack on DSA schemes that uses the LLL reduction method and requires one message. By computing two short vectors of a three-dimensional lattice, the attack derives two intersecting lines in (a, k), provided that a and k are sufficiently small, and the second shortest vector is sufficiently short. If two messages are available, the same attack can be applied to derive a linear congruence relating to the corresponding ephemeral keys.
The papers <cit.> and <cit.> describe attacks on DSA schemes using the LLL algorithm and one or two messages. In <cit.>, the combination of LLL with algorithms for finding integral points of two classes of conics yields the secret key a, provided that
at least one of the sets {a, k^-1 mod q}, {k, a^-1 mod q}, {a^-1 mod q, k^-1 mod q} is sufficiently small.
In <cit.>, the Lagrange Reduction algorithm is applied
on a 2-dimensional lattice defined by a signed message, and provides
two straight lines intersecting at (a, k). Similar attacks can be applied to the pairs (k^-1 mod q, k^-1a mod q) and (a^-1 mod q, a^-1k mod q). If two signed messages are available, the above two attacks can be applied to the equation relating the two ephemeral keys.
The article <cit.> presents an attack using Coppersmith's method to compute the secret key a. The attack works when a and k satisfy a specific inequality, and in this case, the secret key a can be efficiently computed.
The article <cit.> describes an attack that involves constructing a system of linear congruences using signed messages. This system has at most one unique solution below a certain bound, which can be computed efficiently. Thus, if the length of a vector containing the secret and ephemeral keys of a signed message is quite small, the secret key can be computed using the above system. The article <cit.> presents an improved version of this attack.
In <cit.>, the proposed attacks
take advantage of bits of the ephemeral key and of the Fast Fourier Transform.
In <cit.>, it is shown that, using lattice reduction under some heuristic assumptions, partial information about the nonces of multiple signatures can lead to recovery of the full private key. The original approach to doing so is based on discrete Fourier analysis techniques <cit.>.
A very important issue is that of attacks on cryptosystems based on the malicious modification of memory registers. Such attacks may affect the randomness of the secret parameters, forcing certain bits of the ephemeral keys to be equal without their values being known. In <cit.>, it is discussed how such attacks could occur in a real-life scenario.
Following the line of research from <cit.>, the authors of <cit.> focus on an attack scenario where the ephemeral keys share specific bits, such as least significant bits (LSB) and/or most significant bits (MSB), possibly spread over multiple blocks.
By eliminating the shared blocks
of bits between the ephemeral
keys, a lattice of dimension equal to the number of signatures is provided, which
contains a quite short vector with components that reveal the secret key.
Then, the LLL algorithm is used for the computation of this vector.
Note that these attacks are based on heuristic assumptions.
Later, in <cit.>, the authors further improved upon the attack proposed in <cit.> by providing a probabilistic attack with a success probability approaching 1 when the pair (δ,n) is appropriately selected, where n represents the number of signatures, and δ represents the number of shared bits in the ephemeral keys. This attack relies on a mild assumption regarding the hash function used in (EC)DSA.
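To illustrate the shared-bits scenario underlying these attacks, the following Python sketch generates ephemeral keys of bit length ell that share their delta_M most significant and delta_L least significant bits, and checks that the shared blocks cancel in the differences k_j - k_0. The parameter names and sizes are arbitrary toy choices, not values fixed by the schemes.

```python
# Illustrative generation of ephemeral keys of bit length ell that share
# their delta_M most significant and delta_L least significant bits.
import random

def shared_bit_nonces(n, ell, delta_M, delta_L):
    top = random.getrandbits(delta_M)            # shared MSB block
    low = random.getrandbits(delta_L)            # shared LSB block
    mid_len = ell - delta_M - delta_L
    return [(top << (ell - delta_M)) | (random.getrandbits(mid_len) << delta_L) | low
            for _ in range(n + 1)]

ks = shared_bit_nonces(n=4, ell=64, delta_M=6, delta_L=4)
k0 = min(ks)
for k in ks:
    z = k - k0
    # the shared blocks cancel: z is a multiple of 2^delta_L and z < 2^(ell - delta_M)
    assert z % 2**4 == 0 and z < 2**(64 - 6)
```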
§.§ Our Contribution
Our study builds on the research presented in <cit.>, and we present a deterministic attack that, although not always polynomial in complexity, proves to be highly efficient in practical scenarios. Instead of using methods like LLL, approximate, or exact CVP, which were employed in previous attacks, we use enumeration on a suitable lattice to find lattice vectors that are close to a specific target vector. From these solutions, we can readily extract the secret key to the system.
It is important to highlight that the attacks presented in <cit.> rely on heuristic assumptions that aim to force the presence of a vector containing the private key as a solution to the Shortest Vector Problem (SVP) in a relatively large lattice. In <cit.>, the authors provide a probabilistic approach to <cit.>, where an assumption on the hash function is made and the attack is modelled as a Closest Vector Problem (CVP). Due to the computational complexity of finding such a vector using a deterministic algorithm, an approximation algorithm can be used instead.
Our approach takes a different path. We calculate a bound for the distance between the vector of the lattice containing the private key and a target vector. Then, we leverage Kannan's enumeration algorithm to determine this vector and, consequently, extract the secret key. Our experiments demonstrate that the attack can be made highly efficient by appropriately selecting values for δ and n. Finally, we improve the results provided in <cit.>.
§.§ Our results
In the subsequent Theorem, we apply the framework suggested by <cit.>, which presupposes that we have access to a collection of signed messages with ephemeral keys that are shorter than q. These messages have some of their most and least significant bits in common, with a total of δ bits shared.
Suppose we have an (EC)DSA scheme
with a prime q of binary length ℓ and secret key a. Let m_j (j=0,…,n) be
messages signed with this scheme, (r_j,s_j) their signatures, and
k_j = ∑_i=1^ℓ k_j,i 2^ℓ-i (where k_j,i∈{0,1}) are
the corresponding ephemeral keys, respectively.
Set A_j = -r_js_j^-1 mod q.
Suppose that 0< k_j < q (j=0,…,n), and there are integers δ >0 and
0 ≤δ_L≤δ such that the
following conditions hold:
* k_0,i+1 = ⋯ = k_n,i+1 for i = 1,…,δ-δ_L and i = ℓ-δ_L,…,ℓ-1.
* For i = 0,…,n, set
C_i,j = (A_j-1 -A_i) 2^-δ_L mod q (j=1,…,i), and
C_i,j = (A_j -A_i) 2^-δ_L mod q
(j=i+1,…,n).
The shortest vector of the lattice ℒ_i spanned by the vectors
(2^δ+1q,0,…, 0),…,
(0,…, 0, 2^δ+1q , 0),
(2^δ+1C_i,1, …, 2^δ+1C_i,n, 1)
has length
> (1/2) (2^δ+1q)^(n/(n+1)).
Then, the secret key a can be computed in
𝒪(2^(ℓ-δ n+2n) n ( (nℓ)^c 2^𝒪(n)
+ℓ^4 2^n (n+1)^((n+1)/2)))
bit operations, for some c > 0.
By the Gaussian heuristic <cit.>, the length of a shortest vector of the lattice
ℒ is expected to exceed q^(n/(n+1)).
Thus, hypothesis (2) of Theorem <ref> will very often be
satisfied.
In the above complexity estimate, if ℓ≤δ n, then
the time complexity is polynomial in ℓ.
Roadmap. The paper is structured as follows:
Section 2 presents an auxiliary lemma that will prove crucial in the proof of Theorem <ref>.
Section 3 is dedicated to the proof of Theorem <ref>, providing a detailed explanation and justification.
In Section 4, an attack on (EC)DSA, derived from Theorem <ref>, is presented. Additionally, several experiments are conducted to illustrate the effectiveness of the attack.
Finally, Section 5 concludes the paper, summarizing the main findings and discussing potential avenues for future research.
§ LATTICES
Let ℬ = { b_1, …, b_n}⊂ℝ^n be a basis of ℝ^n.
The n-dimensional lattice spanned by ℬ is the set
ℒ = {z_1 b_1+⋯ +z_n b_n : z_1,…,z_n ∈ℤ}.
Recall that the scalar product of two vectors 𝐮 = (u_1,…,u_n)
and 𝐯 = (v_1,…,v_n) in ℝ^n is the quantity ⟨𝐮,𝐯⟩ = u_1v_1+⋯ + u_nv_n, and
the Euclidean norm of a vector 𝐯 = (v_1,…,v_n) ∈ℝ^n is
the quantity
‖𝐯‖ = ⟨𝐯,𝐯⟩^1/2 =
(v_1^2+⋯ +v_n^2)^1/2.
The Gram-Schmidt orthogonalisation (GSO) of the basis ℬ
is the orthogonal family
{𝐛_1^⋆,…,𝐛_n^⋆}
defined as follows:
𝐛_i^⋆ =
𝐛_i-∑_j=1^i-1μ_i,j𝐛_j^⋆, with μ_i,j = ⟨𝐛_i,𝐛_j^⋆⟩/‖𝐛_j^⋆‖^2 (j= 1,…,i-1).
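For concreteness, a short Python sketch of the GSO computation defined above is given below; the basis is assumed to be a list of linearly independent vectors represented as lists of floats.

```python
# Illustrative Python computation of the GSO defined above; `basis` is a
# list of linearly independent vectors given as lists of floats.
def gso(basis):
    ortho, mu = [], {}
    for i, b in enumerate(basis):
        b_star = list(b)
        for j in range(i):
            # mu_{i,j} = <b_i, b_j^*> / ||b_j^*||^2
            m = sum(x * y for x, y in zip(b, ortho[j])) / sum(y * y for y in ortho[j])
            mu[(i, j)] = m
            b_star = [x - m * y for x, y in zip(b_star, ortho[j])]
        ortho.append(b_star)
    return ortho, mu
```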
Let L be a lattice. If K is a convex body
in ℝ^n+1 symmetric about the origin, we denote by
λ_i(K,L) (i=1,…,n+1)
the i-th successive minimum of K with respect to L,
which is defined as follows:
λ_i(K, L) = inf{λ > 0 : (λ K) ∩ L contains i
linearly independent points}.
Further, we denote by s(L) the length of a shortest vector in L.
Let B_𝐯(R) be the closed ball of center 𝐯 and
radius R in ℝ^n+1 and L a lattice. Then, we have:
|B_𝐯(R)∩ L | < ( 2R/s(L)+1)^n+1.
Set
𝒟_𝐯(R) =
{𝐱-𝐲 : 𝐱,𝐲∈ B_𝐯(R)}.
Then, 𝒟_𝐯(R) is a convex body, symmetric about the
origin.
Then <cit.> implies:
|B_𝐯(R)∩ L | <
∏_i=1^n+1(1/λ_i(𝒟_𝐯(R),L)+1).
Let 𝐱,𝐲∈ B_𝐯(R). Then, we have:
𝐱-𝐲≤𝐱-𝐯+
𝐯-𝐲≤ 2R.
It follows that 𝒟_𝐯(R)⊆ B_0(2R),
and so we deduce
λ_1(B_0(2R),L) ≤λ_i(𝒟_𝐯(R),L) (i=1,…,n+1).
Further, we have
λ_1(B_0(2R),L) ≥ s(L)/2R.
Combining the inequalities (<ref>), (<ref>)
and (<ref>), we obtain:
|B_𝐯(R)∩ L | < ( 2R/s(L)+1)^n+1.
§ PROOF OF THEOREM 1.1
Let a be the secret key and k_j, j = 0,…,n the ephemeral keys. We put A_j = -r_js_j^-1 mod q and B_j = -h(m_j) s_j^-1 mod q for j = 0,…,n.
The signing equation for (EC)DSA provides that,
k_j+A_j a +B_j ≡ 0 (mod q) (j=0,…,n).
Suppose first that k_0 = min{k_0,…,k_n}.
We set δ_M=δ-δ_L. From the hypothesis of the Theorem we get
z_j=k_j-k_0=ε 2^ℓ-δ_M-1+⋯+ε' 2^δ_L,
for some ε, ε'∈{0,1}.
Since z_j>0, we get 0<z_j<2^ℓ-δ_M, and there exists a positive integer z_j' such that
z_j = 2^δ_Lz^'_j.
Furthermore, we set
C_j = (A_j-A_0)2^-δ_L mod q and
D_j = (B_j-B_0)2^-δ_L mod q.
From (<ref>) we have the congruences:
z_j^'+C_j a +D_j ≡ 0 (mod q) (j=1,…,n).
Since z_j^' is positive, there is a positive integer c_j such that
-C_ja-D_j+c_jq= z_j^'.
Thus, we obtain:
0 < c_jq-C_j a-D_j < 2^ℓ-δ.
It follows that
-2^ℓ-δ-1 < c_jq-C_j a-D_j-2^ℓ-δ-1 < 2^ℓ-δ-1,
whence we get
0 < |c_jq-C_j a-D_j-2^ℓ-δ-1| < 2^ℓ-δ-1.
Therefore, we have:
0 < |c_jq2^δ+1 -C_j2^δ+1 a-D_j2^δ+1-2^ℓ|
< 2^ℓ.
We consider the lattice ℒ spanned by the rows of the matrix
𝒥 = ( [ 2^δ+1q 0 0 … 0 0; 0 2^δ+1q 0 … 0 0; 0 0 2^δ+1q … 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 … 2^δ+1q 0; 2^δ+1C_1 2^δ+1C_2 2^δ+1C_3 … 2^δ+1C_n 1 ]).
The vectors of the lattice ℒ are of the form
(2^δ+1(qx_1+x_n+1C_1),2^δ+1(qx_2+x_n+1C_2),…,2^δ+1(qx_n+x_n+1C_n),x_n+1),
for some integers x_1,…,x_n+1. By setting
(x_1,…,x_n+1)=(c_1,…,c_n,-a), we get the lattice vector
𝐮 =
(2^δ+1(c_1q-C_1a),…,2^δ+1(c_nq-C_na),-a).
Further we consider the vector in the span of ℒ,
𝐯 = (D_12^δ+1+2^ℓ,…,2^δ+1D_n+2^ℓ,0).
Now, we have
u- v=(2^δ+1(qc_1-C_1a-D_1)-2^ℓ,…,2^δ+1(qc_n-C_na-D_n)-2^ℓ,-a),
and inequalities (<ref>) yield:
‖𝐮-𝐯‖ < 2^ℓ√(n+1).
Put R = 2^ℓ√(n+1). Then 𝐮∈ B_𝐯(R).
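For concreteness, the following Sage sketch builds the basis matrix 𝒥 and the target vector 𝐯 defined above from given values of q, δ, ℓ and the lists C = (C_1,…,C_n), D = (D_1,…,D_n); the helper name proof_lattice is ours and purely illustrative.

```python
# Sage sketch of the lattice basis J and target vector v introduced above,
# for given q, delta, ell and lists C = [C_1,...,C_n], D = [D_1,...,D_n].
def proof_lattice(q, delta, ell, C, D):
    n = len(C)
    w = 2**(delta + 1)
    rows = [[w * q if j == i else 0 for j in range(n + 1)] for i in range(n)]
    rows.append([w * c for c in C] + [1])          # last row: (wC_1,...,wC_n, 1)
    J = matrix(ZZ, rows)
    v = vector(ZZ, [w * d + 2**ell for d in D] + [0])
    return J, v
```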
Next, we compute a LLL-reduced
basis for ℒ, say ℬ = {𝐛_1,…,𝐛_n+1}. This can be done in time 𝒪(n^6 (log q)^3). By hypothesis (2) of Theorem,
we have:
s(ℒ) > 1/2 (2^δ+1 q)^n/n+1.
Let {𝐛_1^*,…,𝐛_n+1^*} the Gram-Schmidt orthogonalisation of ℬ.
By
<cit.>, we get:
4‖b_i^*‖^2 ≥ 2‖b_i-1^*‖^2 ≥‖b_i-1‖^2 ≥ s(L)^2.
Thus, we obtain:
(1/4) (2^δ+1q)^(n/(n+1))≤‖𝐛_i^*‖ (i=1,…,n+1).
Next, using Kannan's enumeration algorithm <cit.>, we compute
all the elements of B_𝐯(R)∩ℒ.
Combining <cit.> with the inequality (<ref>),
we obtain that the bit complexity of the procedure is
(nlog q)^c 2^𝒪(n)(2^ℓ+2/(2^δ+1q)^n/n+1)^n+1 ,
where c is a constant >0.
Then, for each 𝐮∈ B_𝐯(R)∩ℒ, we check whether its last
coefficient u_n+1 yields the secret key, that is, whether a = -u_n+1 mod q.
Every such check needs 𝒪((log q)^4) bit operations
<cit.>. If none of the elements of
B_𝐯(R)∩ℒ gives the secret key, then
we repeat the procedure assuming that k_1 = min{k_0,…,k_n}, and we continue until
we find the secret key.
By Lemma <ref>, we have:
|B_𝐯(R)∩ℒ | < (
2^ℓ+2√(n+1)/ (2^δ+1q)^n/n+1 +1)^n+1.
Thus, the overall bit complexity of the computation of a is
𝒪(n(nlog q)^c 2^𝒪(n)(2^ℓ+2/(2^δ+1q)^n/n+1)^n+1 +n
(
2^ℓ+2√(n+1)/ (2^δ+1q)^n/n+1 +1)^n+1 (log q)^4),
whence the result.
§ THE ATTACK
The proof of Theorem 1.1 yields the following attack:
ATTACK-DSA
Input: Messages m_j (j=0,…,n) and
(r_j,s_j) their (EC)DSA signatures and integers δ >0 and
0 ≤δ_L≤δ and the public key (p,q,g,R)
(resp. (E,p,q,P,Q)).
Output: The private key a.
* For j=0,…, n compute A_j = -r_js_j^-1 mod q,
B_j = -h(m_j) s_j^-1 mod q.
* For i=0,…,n,
* For j=1,…,i compute
C_i,j = (A_j-1 -A_i) 2^-δ_L mod q, D_i,j = (B_j-1 -B_i) 2^-δ_L mod q,
and for j= i+1,…,n compute
C_i,j = (A_j -A_i) 2^-δ_L mod q,
D_i,j = (B_j -B_i) 2^-δ_L mod q.
* Consider the lattice ℒ_i spanned by the rows of the matrix
J_i = ( [ 2^δ+1q 0 0 … 0 0; 0 2^δ+1 q 0 … 0 0; 0 0 2^δ+1 q … 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 … 2^δ+1 q 0; 2^δ+1C_i,1 2^δ+1C_i,2 2^δ+1C_i,3 … 2^δ+1C_i,n 1 ])
and compute a LLL-basis ℬ_i for ℒ_i.
* Consider the vector
𝐯_i = (2^δ+1D_i,1+2^ℓ,…,2^δ+1D_i,n+2^ℓ,0), and
using Kannan's enumeration algorithm
with basis ℬ_i, compute all
𝐮∈ℒ_i satisfying
‖𝐮-𝐯_i‖ < 2^ℓ√(n+1).
* Check whether the last coordinate of 𝐮, say u_n+1, satisfies g^-u_n+1≡ R (mod p) (resp. (-u_n+1)P = Q).
If so, then return the secret key a = -u_n+1 mod q.
For the Pseudocode of Kannan's Enumeration Algorithm, one
can see <cit.>.
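A possible Sage implementation of the attack loop is sketched below, reusing the proof_lattice helper from the earlier sketch in the proof of Theorem 1.1. For brevity it replaces Kannan's enumeration by Babai's nearest-plane rounding on the LLL-reduced basis, which is only a cheap approximation and may miss the key in borderline cases; the key check shown is the DSA one, and all function and variable names are illustrative.

```python
# Sage sketch of the attack loop, reusing proof_lattice from the earlier sketch.
def babai(B, t):
    # nearest-plane: returns a vector of the row lattice of B close to t
    G, _ = B.change_ring(QQ).gram_schmidt()
    b = t
    for i in range(B.nrows() - 1, -1, -1):
        c = round((b * G[i]) / (G[i] * G[i]))
        b = b - c * B[i]
    return t - b

def recover_key(r, s, h, p, q, g, R, delta, delta_L, ell):
    n = len(r) - 1
    A = [(-r[j] * inverse_mod(s[j], q)) % q for j in range(n + 1)]
    Bc = [(-h[j] * inverse_mod(s[j], q)) % q for j in range(n + 1)]
    inv = inverse_mod(2**delta_L, q)
    for i in range(n + 1):                    # guess which nonce is minimal
        idx = [j for j in range(n + 1) if j != i]
        C = [((A[j] - A[i]) * inv) % q for j in idx]
        D = [((Bc[j] - Bc[i]) * inv) % q for j in idx]
        J, v = proof_lattice(q, delta, ell, C, D)
        u = babai(J.LLL(), v)
        a = ZZ(-u[-1]) % q
        if power_mod(g, a, p) == R:           # DSA check; use (-u[-1])*P == Q for ECDSA
            return a
    return None
```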
Supposing that condition (2) is satisfied, taking n quite small and nδ≥ℓ, Theorem <ref> implies that the attack is polynomial
in ℓ. Furthermore, if s(L) is close to the Gaussian heuristic, then the upper bound for the number of points of B_𝐯(R)∩ℒ will be as small as possible, and so
it is expected that the attack will be quite efficient.
Experiments.
We conducted a thorough analysis of our experiments, and we compared our results with those presented by Gomez et al. <cit.>. Our findings indicate a significant improvement in almost all cases. Our experiments were conducted on a Linux machine with an i5-12400 CPU, using Sagemath 9.8 <cit.>. We made the assumption that we already knew the minimum ephemeral key. However, in the general case, where the minimum key is unknown, we would need to perform n executions, where n+1 represents the number of signatures. This worst-case scenario would require multiplying the execution time of each experiment by n. Overall, our results demonstrate a notable improvement compared to the previous findings (see the Table below). Finally, we have successfully found the secret key even when the shared bits in the ephemeral keys are only 5. Remarkably, in this case, we only needed a minimum of 58 signatures. It is worth noting that in <cit.>, no successful attack was provided for the specific scenario where δ=5.
§ CONCLUSION
Attacks based on the malicious modification of memory registers
are a topic of high importance, since such modifications
may affect the randomness of the secret parameters by forcing a limited number of bits to a certain value, which can be unknown to the attacker.
In this context, we developed a deterministic attack on the DSA schemes,
provided that several signatures are available whose corresponding
ephemeral keys share a number of bits without their values being known.
Our attack is deterministic, meaning that it always produces the same result for a given input. Deterministic attacks on (EC)DSA are relatively rare, since most known attacks rely on heuristic assumptions. Although it may not always be practical to execute, our attack demonstrates practical feasibility in specific scenarios, surpassing previous results (see Table <ref>). However, its practicality and effectiveness may vary depending on the specific choice of (δ,n).
Acknowledgement
The author, Marios Adamoudis is co-financed by Greece
and the European Union (European Social Fund-ESF) through the Operational
Programme ”Human Resources Development, Education and Lifelong Learning”
in the context of the Act ”Enhancing Human Resources Research Potential by
undertaking a Doctoral Research” Sub-action 2: IKY Scholarship Programme for
PhD candidates in the Greek Universities.
99
marios M. Adamoudis, K. A. Draziotis and D. Poulakis, Enhancing a DSA attack, CAI 2019, p. 13-25. LNCS 11545, Springer 2019.
Aranha
D. F. Aranha, F. R. Novaes, Akira Takahashi, M. Tibouchi,
and Y. Yarom. LadderLeak: Breaking ECDSA with less than one bit of
nonce leakage. In Jay Ligatti, Xinming Ou, Jonathan Katz, and Giovanni
Vigna, editors, ACM CCS 2020, pages 225-242. ACM Press, November 2020.
Bellare M. Bellare, S. Goldwasser and Micciancio,
“Pseudo-random" number generation within cryptographic
algorithms: the DSS case. In Proc. of Crypto '97, LNCS 1294
IACR, Palo Alto, CA. Springer-Verlag, Berlin 1997.
Blake I. F. Blake and T. Garefalakis,
On the security of the digital signature algorithm.
Des. Codes Cryptogr., 26, no. 1-3 (2002), 87-96.
Bleichenbacher
D. Bleichenbacher. On the generation of one-time keys in DL signature
schemes. In Presentation at IEEE P1363 working group meeting, page 81,
2000.
Draziotis K. A. Draziotis and D. Poulakis, Lattice attacks on DSA schemes based on Lagrange's algorithm.
5th international Conference on Algebraic Informatics, CAI 2013. Berlin: Springer. LNCS 8080, 119-131 (2013).
Draziotis2 K. A. Draziotis, (EC)DSA lattice attacks based on Coppersmith's method, Information Processing Letters 116(8), Elsevier (2016), Pages 541-545.
ElGamal T. ElGamal, A public key cryptosystem and a signature scheme
based on discrete logarithm, IEEE
Transactions on Information Theory, 31 (1985), 469-472.
fips FIPS PUB 186-3, Federal Information Processing Standards
Publication, Digital Signature Standard (DSS).
Faugere J. -L. Faugère, C. Goyet, and G. Renault, Attacking
(EC)DSA Given Only an Implicit Hint, Selected Area of Cryptography, LNCS 7707, p. 252–274, Springer-Verlag, Berlin - Heidelberg 2013.
Gomez Ana I. Gomez, D. Gomez-Perez, and G. Renault, A probabilistic analysis on a lattice attack against DSA. Des. Codes Cryptogr. 87, 2469-2488 (2019).
Hanrot G. Hanrot and D. Stehlé, Improved analysis of kannan’s shortest lattice vector algorithm. In
Proceedings of Crypto, LNCS 4622, 170-186. Springer, 2007.
Hanrot2 G. Hanrot, X. Pujol and D. Stehlé,
Algorithms for the shortest and closest lattice vector problems.
Chee, Yeow Meng (ed.) et al., Coding and cryptology. Third international workshop, IWCC 2011, Qingdao, China, May 30 – June 3, 2011. Proceedings. Berlin: Springer. Lecture Notes in Computer Science 6639, 159-190 (2011).
Hoffstein J.
Hoffstein, J. Pipher, H. H. Silverman, An introduction to mathematical cryptography. 2nd ed.
Undergraduate Texts in Mathematics. New York, NY: Springer 2014.
Howgrave N. A. Howgrave-Graham and N. P. Smart, Lattice
Attacks on Digital Signature Schemes, Des. Codes Cryptogr.
23 (2001) 283-290.
Johnson D. Johnson, A. J. Menezes and S. A. Vastone, The
elliptic curve digital signature algorithm (ECDSA), Intern.
J. of Information Security, 1 (2001) 36-63.
Koblitz N. Koblitz, A. J. Menezes and S. A. Vastone,
The state of elliptic curve cryptography, Des. Codes
Cryptogr. 19 (2000), 173-193.
Koblitz2 N. Koblitz and A. J. Menezes, A survey of Public-Key
Cryptosystems,
SIAM REVIEW, 46, No. 4 (2004), 599-634.
Leadbitter P.J. Leadbitter, D. Page, N.P. Smart. Attacking DSA Under a Repeated Bits Assumption. In: Joye, M., Quisquater, JJ. (eds) Cryptographic
Hardware and Embedded Systems - CHES 2004. CHES 2004. Lecture Notes in Computer
Science, vol 3156, (2004) 428-440. Springer, Berlin, Heidelberg.
Lenstra A. K. Lenstra, H. W. Lenstra Jr., and L. Lovász, Factoring
polynomials
with rational coefficients, Math. Ann., 261 (1982), 513-534.
Malikiosis R.-D. Malikiosis,
Lattice-point enumerators of ellipsoids, Combinatorica 33, No. 6 (2013) 733-744.
Menezes A. J. Menezes, P. C. van Oorschot and S. A.
Vanstone, Handbook of Applied Cryptography, CRC Press, Boca
Raton, Florida, 1997.
Micciancio D. Micciancio and P. Voulgaris. A deterministic single
exponential time algorithm
for most lattice problems based on Voronoi cell computations. In Proc. of
STOC, ACM, (2010) pages 351-358.
Mulder1
E. De Mulder, M. Hutter, M. E. Marson, and P. Pearson. Using Bleichenbacher's solution
to the Hidden Number Problem to attack nonce leaks in 384-bit ECDSA. In Cryptographic Hardware
and Embedded Systems-CHES 2013, 435-452. Springer, 2013.
Mulder2
E. De Mulder, M. Hutter, M. E. Marson, and P. Pearson. Using Bleichenbacher's solution
to the hidden number problem to attack nonce leaks in 384-bit ecdsa: extended version. Journal of
Cryptographic Engineering, 4(1):33-45, 2014.
National National Institute of Standards and Technology
(NIST). FIPS Publication 186: Digital Signature
Standard. May 1994.
Nguyen P. Nguyen and I. E. Shparlinski, The Insecurity
of the Digital Signature Algorithm with Partially Known Nonces,
J. Cryptology, 15 (2002), 151-176.
Nguyen2 P. Nguyen and I. E. Shparlinski,
The Insecurity of the Elliptic Curve Digital Signature Algorithm
with Partially Known Nonces, Des. Codes Cryptogr. 30,
(2003), 201-217.
Poulakis D. Poulakis, Some Lattice Attacks on DSA and ECDSA, Applicable Algebra in Engineering, Communication and Computing,
22, (2011), 347-358.
Poulakis1 D. Poulakis, New lattice attacks on DSA schemes,
J. Math. Cryptol. 10 (2) (2016), 135–144.
sage Sage Mathematics Software, The Sage Development Team. <http://www.sagemath.org>.
Sun
C. Sun, T. Espitau, M. Tibouchi, and M. Abe, Guessing Bits: Improved
Lattice Attacks on (EC)DSA with Nonce Leakage,
IACR Transactions on Cryptographic Hardware and Embedded Systems,
ISSN 2569-2925, Vol. 2022, No. 1, pp. 391-413.
Zheng Z. Zheng, Modern Cryptography, Volume 1,
Springer 2021.
|
http://arxiv.org/abs/2307.07534v1 | 20230714091328 | Masked Autoencoders for Unsupervised Anomaly Detection in Medical Images | [
"Mariana-Iuliana Georgescu"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
27th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems (KES 2023)
Mariana-Iuliana Georgescu
Faculty of Mathematics and Computer Science, University of Bucharest, 14 Academiei, Bucharest, Romania
Pathological anomalies exhibit diverse appearances in medical imaging, making it difficult to collect and annotate a representative amount of data required to train deep learning models in a supervised setting. Therefore, in this work, we tackle anomaly detection in medical images training our framework using only healthy samples. We propose to use the Masked Autoencoder model to learn the structure of the normal samples, then train an anomaly classifier on top of the difference between the original image and the reconstruction provided by the masked autoencoder. We train the anomaly classifier in a supervised manner using as negative samples the reconstruction of the healthy scans, while as positive samples, we use pseudo-abnormal scans obtained via our novel pseudo-abnormal module. The pseudo-abnormal module alters the reconstruction of the normal samples by changing the intensity of several regions. We conduct experiments on two medical image data sets, namely BRATS2020 and LUNA16 and compare our method with four state-of-the-art anomaly detection frameworks, namely AST, RD4AD, AnoVAEGAN and f-AnoGAN.
anomaly detection medical imaging self-supervised learning deep learning masked autoencoders
[email protected]
§ INTRODUCTION
Masked data modeling which employs the Transformer architecture has been vastly adopted by the deep learning community ranging from text <cit.> to images <cit.>, videos <cit.> and sound <cit.>. The masked autoencoder models were mainly used for pre-training, due to their ability to leverage the structure already existing in data.
Consequently, in this work, we employ them for anomaly detection in medical images to learn the underlying structure of normal data.
Meanwhile, the medical imaging domain has witnessed significant advancement with applications ranging from medical image super-resolution to organ segmentation, disease progress prediction, tumor detection, etc. Researchers have invested a lot of effort in applying deep learning methods in medical imaging, obtaining outstanding results using supervised learning for tumor detection <cit.>, organ segmentation, cancer prediction, etc. Even though supervised methods obtained noteworthy results, the acquisition of supervised annotations for medical applications remains a challenging task, requiring specialized expertise. However, there is a lot of unlabeled data which can be used to train deep learning models. In order to leverage the available unlabeled data, researchers <cit.> have focused on unsupervised anomaly detection in medical images, where a model is trained using only normal data (these techniques are sometimes considered semi-supervised). Researchers trained Autoencoders <cit.>, Variational Autoencoders <cit.>, Generative Adversarial Networks <cit.> and Denoising Diffusion Models <cit.> to capture the structure of normal data. Their approaches are based on the concept that the models should reconstruct a normal sample with high fidelity, but they should face difficulties in accurately reconstructing an abnormal instance. Therefore, the anomaly score is computed using a function that measures the difference between the original image and the reconstructed image. Using this approach of computing the anomaly score, the anomaly decision is only based on the reconstruction fidelity of the underlying model. To overcome the limitation of relying entirely on the reconstruction fidelity of the model, we propose an anomaly classifier to predict the probability of a sample to be abnormal. Our anomaly classifier is trained using supervised learning.
In this work, we learn informative and discriminative features from medical images under weak annotations by using only normal data (healthy scans) at training time. We employ the masked autoencoder model to learn the underlying patterns of data in a self-supervised manner. The masked autoencoder model reconstructs a given image using only a small portion (usually 25%) of its pixels, thus during training it must capture the structure of data in order to accurately reconstruct the masked input. Training the masked autoencoder model only using healthy scans, forces the model to reconstruct with high fidelity only normal patterns, therefore it should face difficulties in accurately reconstructing an abnormal sample. We then propose to train an anomaly classifier in a supervised fashion. To obtain abnormal training data, we apply a novel pseudo-abnormal module to alter the reconstruction of the normal samples obtaining pseudo-abnormal examples.
We test our method on two medical image data sets, namely BRATS2020 <cit.> and LUNA16 <cit.>, outperforming four state-of-the-art anomaly detection frameworks <cit.>.
To the best of our knowledge, our contributions are twofold:
* We apply the MAE framework for anomaly detection in medical imaging.
* We proposed an anomaly classifier to further boost the performance of frameworks which rely on the reconstruction fidelity of the underlying model.
§ RELATED WORK
Due to the success of unsupervised and self-supervised algorithms in computer vision applications, many researchers have started to apply unsupervised algorithms in medical imaging. Baur et al. <cit.> presented in their survey study that the works that tackle the medical image anomaly detection can be divided into three categories based on the architecture. The architectures used are: Autoencoder (AE) <cit.>, Variational Autoencoder (VAE) <cit.> and Generative Adversarial Networks (GAN) <cit.>. With respect to the addressed task, related works can be divided into methods that perform anomaly detection (i.e. the model only predicts if an image is abnormal) <cit.> or methods that perform anomaly segmentation (i.e. the model also localizes the anomalous regions) <cit.>.
Similar to <cit.>, we propose an unsupervised algorithm that performs anomaly detection in medical images.
Chen et al. <cit.> proposed an algorithm to detect the anomalies in MRI brain scans by adding an adversarial constraint on the latent space of the AE model. The constraint enforces that an abnormal image be mapped in the same point as a normal image, such that the difference between the reconstruction of an abnormal image and the original image will highlight the lesion. To apply the constraint, the algorithm requires adversarial (abnormal) data. Chen et al. <cit.> used as adversarial samples, the reconstructions of the healthy samples. Different from the other works that employ AE to segment anomalies in medical images, Baur et al. <cit.> learned to compress and reconstruct different frequency bands of healthy brain MRI scans using the Laplacian pyramid.
Kascenas et al. <cit.> showed that a simple yet effective, denoising autoencoder architecture obtains better results than more sophisticated methods <cit.> on the brain lesion segmentation task.
To improve the anomaly segmentation performance, Baur et al. <cit.> proposed AnoVAEGAN. AnoVAEGAN trained the decoder model with the help of an adversarial network, which discriminates between real images and images reconstructed by the VAE model.
Instead of treating the anomaly detection task as an image reconstruction task, Chen et al. <cit.> approached the unsupervised lesion detection as an image restoration problem proposing a probabilistic model to detect brain lesions.
Different from the aforementioned works <cit.> which only use the MRI scans as input to detect the anomalies, Bengs et al. <cit.> proposed a model for unsupervised 3D brain MRI anomaly detection considering the age as additional information. Bengs et al. <cit.> considered age prediction as an additional task and attached a regression layer, just before the latent space of the VAE model, which predicted the age of the patient. Bengs et al. <cit.> observed that the prediction error of the age was increased by a factor of two on abnormal data.
More recently, generative adversarial networks have been surpassed by the denoising diffusion probabilistic models in the computer vision community. <cit.> proposed to use a denoising diffusion probabilistic model to segment anomalies in brain imaging. Wolleb et al. <cit.> presented a novel pixel-wise anomaly detection approach based on Denoising Diffusion Implicit Models.
Pinaya et al. <cit.> proposed to employ a Transformer model to learn the probability density function only from healthy brain scans. Pinaya et al. <cit.> integrated the Transformer into the vector quantized variational autoencoder model showing the benefits of using a self-attention model on the brain anomaly segmentation task. Instead of training the model to reconstruct the input, Pinaya et al. <cit.> optimized the model towards outputting the likelihood of the input.
Most of the aforementioned works <cit.> rely on the reconstruction fidelity to compute the anomaly score. Different from such works, we employ a Transformer model, which is trained in a supervised manner, to discriminate between normal and abnormal reconstructions, alleviating the limitation of relying only on the reconstruction fidelity of the underlying model. Similar to <cit.>, we use adversarial samples, but we build our own pseudo-abnormal sample pool instead of using the reconstructions of the samples as abnormal examples.
Similar to <cit.>, our underlying model is a Transformer. Different from <cit.>, we employ the Masked Autoencoder (MAE) framework to learn discriminative features from healthy scans. To the best of our knowledge, we are the first to propose the use of masked autoencoders to detect anomalies in medical images.
§ METHOD
We begin with an overview of the proposed method (Section <ref>), then we describe each component of our approach, namely the Masked Autoencoder framework (Section <ref>), the Pseudo-abnormal Module (Section <ref>) and the Anomaly Classifier (Section <ref>).
§.§ Overview
We illustrate our approach of detecting anomalies in medical images in Figure <ref>. We start by applying the Masked Autoencoder framework (MAE) to obtain the reconstruction of the medical image input. Afterwards, the absolute difference between the reconstructed input and the actual input is used to train the anomaly classifier. The binary anomaly classifier is trained in a supervised manner requiring both positive and negative examples. As negative examples, we use the reconstructed images, while for the positive class, we generate pseudo-abnormal samples.
We use a pseudo-abnormal module which is applied on the reconstructed input to create a set of pseudo-abnormal samples. The anomaly score for a sample is given by the probability of being labeled as pseudo-abnormal by the anomaly classifier.
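As a rough illustration of this pipeline, the following PyTorch sketch computes the anomaly score of an input; mae and classifier stand for the trained masked autoencoder and anomaly classifier, and the classifier is assumed to output a single logit per sample.

```python
# Rough PyTorch sketch of the inference pipeline; `mae` and `classifier`
# are assumed to be the trained modules.
import torch

def anomaly_score(x, mae, classifier):
    with torch.no_grad():
        x_rec = mae(x)                           # reconstruction of the input
        diff = (x - x_rec).abs()                 # residual image
        return torch.sigmoid(classifier(diff))   # probability of being (pseudo-)abnormal
```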
§.§ Masked Autoencoders
Transformer models have recently received a lot of attention from the deep learning community and have started being the standard architecture used in computer vision and natural language processing models. According to Shamshad et al. <cit.>, Vision Transformer models (ViT) <cit.> are heavily used in medical imaging processing too. Transformers use as input a 1D sequence of token embeddings. In order to apply a Transformer model on an image, the image is divided into patches, then each patch is projected into a 1D embedding space forming the tokens.
Transformer models need huge amounts of training data to avoid overfitting. To overcome this challenge, He et al. <cit.> proposed the masked autoencoder framework, a self-supervised method to pre-train the ViT model on small data sets.
He et al. <cit.> designed an asymmetric encoder-decoder model that operates on patches. The key concept of the MAE framework is to randomly remove tokens from the input, then process the remaining tokens with the encoder, and finally reconstruct the unseen tokens with a lightweight decoder from the latent representation and the learned mask tokens.
In this work, we do not employ the MAE framework as a self-supervised method, but we employ it for its capability to learn the structure of the data set. We train the MAE framework using only normal data in order to capture the patterns of normal samples.
The MAE framework <cit.> works as follows. Let X be the input image and X be the version of the input X, which is divided into patches. X is transformed into embedding tokens through a linear projection obtaining P=[p_1, p_2, .., p_n] where P∈ℛ ^ n× d, n is the number of patches and d is the embedding dimension. Additionally, a positional embedding is added to the embeddings input P. The token sequence becomes P = P + pos, where pos is the positional embedding.
A random subset of tokens from P are removed (masked) and the remaining (visible) tokens are processed by the encoder obtaining the encoded representation. More formally, this operation can be expressed as V = mask(P, α) and E = encoder(V), where mask is the masking operation where the tokens are dropped and α is the masking ratio.
The last component of the MAE framework is the lightweight decoder. The decoder is applied on top of the encoded representation E and the learned mask tokens. The learned mask tokens m ∈ℛ ^ d are placed in the location of the unseen tokens forming the encoded token sequence Z∈ℛ ^ n × d. A new positional embedding is added to the encoded token sequence Z, resulting Z = Z + pos_d, where pos_d is the positional embedding. Eventually, the reconstruction is obtained as P̂ = decoder(Z). By re-arranging the tokens P̂, we obtain the reconstruction X̂ of the input image X.
The MAE framework is optimized by employing the mean squared error loss between the reconstructed tokens P̂ and the input tokens X in the pixel space. The employed loss function is:
ℒ(X, P̂) = 1/α· n∑_i ∈ℳ || x_i - p̂_i ||^2,
where ℳ denotes the set of masked token indices, as we only compute the reconstruction loss on the masked tokens. In Section <ref>, we present an ablation experiment showing that the anomaly detection performance is higher when we apply the loss only on the masked tokens.
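The following PyTorch sketch illustrates the random masking and the masked-token loss described above. It is a simplification of the MAE implementation: positional embeddings are omitted, the encoder is assumed to preserve the token dimension d, and mask_token is assumed to be a learned parameter of shape (1, 1, d).

```python
# Simplified PyTorch sketch of the masking step and the masked-token loss.
import torch

def mae_step(tokens, encoder, decoder, mask_token, alpha=0.75):
    bsz, n, d = tokens.shape
    n_mask = int(alpha * n)
    perm = torch.rand(bsz, n).argsort(dim=1)               # random order per sample
    masked_idx, visible_idx = perm[:, :n_mask], perm[:, n_mask:]

    gather = lambda src, idx: torch.gather(src, 1, idx.unsqueeze(-1).expand(-1, -1, d))
    encoded = encoder(gather(tokens, visible_idx))          # encode visible tokens only

    # place encoded tokens back, learned mask tokens at the masked positions
    full = mask_token.expand(bsz, n, d).clone()
    full.scatter_(1, visible_idx.unsqueeze(-1).expand(-1, -1, d), encoded)
    recon = decoder(full)

    # reconstruction loss only on the masked tokens, as in the equation above
    diff = gather(recon, masked_idx) - gather(tokens, masked_idx)
    return (diff ** 2).mean()
```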
§.§ Pseudo-abnormal Module
In order to train the anomaly classifier in a supervised setting, both positive and negative samples are required. As negative (normal) samples, we use the reconstructed images. To overcome the limitation of not having real positive (abnormal) samples, we create pseudo-abnormal samples which we use as positive samples during training. In order to create positive samples, we intentionally alter the reconstructed images. The concept behind this design choice is that if an abnormal example is presented during inference, the reconstructed image should exhibit artifacts, therefore, we intentionally add artifacts to the reconstructed image. For each negative reconstructed image X̂, we produce a positive sample X̂'̂. To create X̂'̂, we first select k random bounding boxes of different dimensions inside X̂. For each selected bounding box, we change the intensity of the pixels inside the bounding box by multiplying them by β. The multiplier factor β is randomly selected from the uniform distribution U(0, 1).
Our design choice of altering the reconstruction using randomly generated boxes is based on the concept that the tokens (image patches) which are included in an abnormal region should not be accurately reconstructed by the MAE framework. Therefore, we simulate this scenario by altering random patches from the reconstructed image.
In Figure <ref>, we illustrate a few examples of positive and negative samples used as training data for the anomaly classifier.
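A minimal PyTorch sketch of the pseudo-abnormal module is given below; the box-size range follows the BRATS2020 setting reported in the implementation details (10 to 40 pixels), and the argument names are illustrative.

```python
# Sketch of the pseudo-abnormal module: plant k random boxes in a
# reconstructed image and rescale the intensities inside each box by a
# single factor beta ~ U(0, 1).
import torch

def make_pseudo_abnormal(x_rec, k_max=10, box_range=(10, 40)):
    x_ab = x_rec.clone()                       # x_rec: (1, H, W)
    _, H, W = x_ab.shape
    k = int(torch.randint(1, k_max + 1, (1,)))
    for _ in range(k):
        h = int(torch.randint(box_range[0], box_range[1] + 1, (1,)))
        w = int(torch.randint(box_range[0], box_range[1] + 1, (1,)))
        top = int(torch.randint(0, H - h + 1, (1,)))
        left = int(torch.randint(0, W - w + 1, (1,)))
        beta = float(torch.rand(1))            # one intensity factor per box
        x_ab[:, top:top + h, left:left + w] *= beta
    return x_ab
```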
§.§ Anomaly Classifier
The last component of our method is the Anomaly Classifier. We employ the ViT model as the underlying architecture. We train the classifier to distinguish between positive (pseudo-abnormal) and negative (normal) samples. The input to the classifier, denoted as X, is the absolute difference between the reconstructed sample X̂ (for negative) or X̂'̂ (for positive) and the original image X.
The classifier is optimized using the binary cross-entropy function:
ℒ(y, ŷ) = - [ y ·log(ŷ) + (1 - y) ·log(1 - ŷ) ],
where y is the ground-truth label (0 for negative samples and 1 for positive samples) and ŷ represents the prediction of sample X.
At inference time, when we have both normal and abnormal samples, the anomaly score is interpreted as the probability of a sample to be positive (pseudo-abnormal).
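The following PyTorch sketch shows one training step of the anomaly classifier on a single normal image x, reusing the make_pseudo_abnormal sketch above; the classifier is assumed to output one logit per sample, and the binary cross-entropy above is applied in its logit form.

```python
# Sketch of one training step of the anomaly classifier on a normal image x
# of shape (1, H, W), reusing make_pseudo_abnormal from the sketch above.
import torch
import torch.nn.functional as F

def classifier_step(x, mae, classifier, optimizer):
    with torch.no_grad():
        x_rec = mae(x)
    x_ab = make_pseudo_abnormal(x_rec)                   # pseudo-abnormal sample
    inputs = torch.stack([(x_rec - x).abs(),             # negative (label 0)
                          (x_ab - x).abs()])             # positive (label 1)
    labels = torch.tensor([0.0, 1.0])
    loss = F.binary_cross_entropy_with_logits(classifier(inputs).squeeze(-1), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```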
§ EXPERIMENTS
We start by describing our experimental setup (Sections <ref>, <ref> and <ref>), then we compare our approach with state-of-the-art methods in Section <ref> and present our ablation study in Section <ref>.
Table: Anomaly detection results in terms of AUROC on the BRATS2020 <cit.> and LUNA16 <cit.> data sets. The top results are highlighted in bold.

Method                  AUROC (BRATS2020)   AUROC (LUNA16)
AnoVAEGAN <cit.>        0.872               0.583
f-AnoGAN <cit.>         0.863               0.535
RD4AD <cit.>            0.886               0.521
AST <cit.>              0.895               0.619
Proposed method         0.899               0.634

Figure: The ROC curves obtained by our proposed method on the BRATS2020 <cit.> and LUNA16 <cit.> data sets. The dashed line is the ROC curve obtained by a random classifier.
§.§ Data Sets
BRATS2020. Multimodal Brain Tumor Segmentation Challenge 2020 <cit.> is an MRI data set containing 369 scans. The tumors presented in the data set are glioblastoma and lower grade glioma. One of the main tasks performed on this data set is tumor segmentation. In this work, we tackle an easier problem, namely tumor detection. We treat the tumors as anomalies, but we train our model using only healthy scans. We annotate a slice as normal if there is no region containing tumor, otherwise we consider it as abnormal. We split the 369 scans into 60% scans for training and 40% for test. We keep 20% of the training scans for validation. We apply our algorithm at slice level, predicting if each slice is anomalous. After splitting the data set, we obtain 15,557, 4,013, 13,203 normal slices for training, validation, and test, respectively, as well as 2,962, 9,737 abnormal slices for validation and test. The resolution of each scan is 240 × 240 pixels. In our experiments, we use only the native modality to determine if a slice is abnormal.
LUNA16. LUng Nodule Analysis-2016 <cit.> contains 888 lung CT scans with some of the scans containing pulmonary nodules. The data set provides a list of 551,065 2D regions which are candidates for being pulmonary nodules. One of the main tasks is the false positive reduction track, where the nodule candidates should be classified accordingly. Out of 551,065 candidate regions, only 1,351 are labeled as pulmonary nodules. In this work, we consider the pulmonary nodules as anomalies. We keep all abnormal regions for validation and testing and use only non-nodule regions for training. We randomly select 50,000 out of 551,065 non-nodule regions for training. We randomly select 3,000 and 10,000 normal regions for validation and test, while splitting the abnormal regions into 351 and 1,000 regions for validation and test, respectively. The resolution of each region is 64 × 64 pixels.
To facilitate further comparisons, we release the split of each data set used in our experiments at https://github.com/lilygeorgescu/MAE-medical-anomaly-detection.
§.§ Evaluation Metrics
As the evaluation metric, we employ the Area Under the Receiver Operating Characteristic curve (AUROC). The AUROC is calculated as the area under the receiver operating characteristic (ROC) curve. A ROC curve shows the trade-off between the true positive rate and false positive rate across different decision thresholds. The true positive rate is the proportion of samples correctly classified as abnormal, while the false positive rate is the proportion of normal samples misclassified as abnormal. The AUROC ranges from 0 to 1, with higher values indicating better performance.
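In practice the AUROC can be computed directly with scikit-learn, as in the short sketch below, where y_true are the ground-truth normal/abnormal labels of the test samples and scores are the anomaly scores produced by the classifier.

```python
# AUROC evaluation with scikit-learn; y_true holds the 0/1 normal-abnormal
# labels and scores the predicted anomaly scores.
from sklearn.metrics import roc_auc_score, roc_curve

def evaluate(y_true, scores):
    fpr, tpr, _ = roc_curve(y_true, scores)   # points of the ROC curve
    return roc_auc_score(y_true, scores), (fpr, tpr)
```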
§.§ Implementation Details
In order to train the MAE framework, we use the official PyTorch implementation[<https://github.com/facebookresearch/mae>] provided by the authors <cit.>. We use ViT-Base <cit.> as encoder for the MAE framework. The decoder architecture follows the setting proposed by He et al. <cit.> which has the embeddings dimension equal to 512, the number of Transformer blocks set to 8, and 16 attention heads. We train the autoencoder only using normal samples for 1600 epochs, setting the mask ratio to 0.75. The input size is set to 224 × 224 × 1 for the BRATS2020 data set, while for the LUNA16 benchmark, we set the input size to 64 × 64 × 1. In order to obtain the reconstruction of a sample, we replaced the unmasked tokens with the original tokens. To obtain the final reconstruction of a sample, we pass it through the MAE framework 4 times and average the resulting outputs.
The number of bounding boxes (k) for the pseudo-abnormal module is uniformly sampled between 1 and 10. The width and height of a bounding box are also randomly selected between 10 and 40 pixels for BRATS2020 and between 5 and 12 pixels for LUNA16, respectively.
The underlying architecture of the anomaly classifier is also ViT-Base <cit.>. We train the anomaly classifier to discriminate between normal and pseudo-abnormal samples for 100 epochs. The model is optimized using AdamW <cit.> with the learning rate set to 0.001 and the weight decay set to 0.05.
The experiments are performed on a single GeForce GTX 3090 GPU with 24 GB of VRAM. Our code is available online at https://github.com/lilygeorgescu/MAE-medical-anomaly-detection.
§.§ Results
We present the results of our proposed method along with other state-of-the-art methods <cit.> for image anomaly detection on the BRATS2020 <cit.> and LUNA16 <cit.> data sets in Table <ref>. We also illustrate the ROC curves obtained by our proposed method in Figure <ref>. We emphasize that the analysis is performed at the slice level, therefore the results reported in Table <ref> are the slice-wise AUC scores. In order to compute the performance of AnoVAEGAN, f-AnoGAN, RD4AD and AST on the BRATS2020 and LUNA16 data sets, we use the official code released by the authors[<https://github.com/hq-deng/RD4AD>]^,[<https://github.com/marco-rudolph/ast>]^,[<https://github.com/StefanDenn3r/Unsupervised_Anomaly_Detection_Brain_MRI>]. On the BRATS2020 data set, our method attains an AUROC score of 0.899, surpassing the AST <cit.> method by 0.004 and the RD4AD <cit.> framework by 0.013. On the LUNA16 benchmark, our method reaches an AUROC score of 0.634 while RD4AD <cit.> reaches only 0.521. The improved performance proves the benefits of introducing supervised signal in the training process.
§.§ Ablation Study
Anomaly classifier. In order to assess the performance brought by the anomaly classifier, we remove it and compute the AUROC score by employing only the remaining components. When we remove the anomaly classifier, the resulting framework contains only the MAE component. In order to compute the anomaly score, we use the mean absolute error, mean squared error or structural similarity index measure (SSIM) between the reconstructed image and the original image. We compute the performance obtained by the MAE component on the BRATS2020 data set, presenting the results in Table <ref>. We observe that the highest performance of 0.877 in terms of AUROC score is obtained when SSIM is employed as the anomaly measure, showing that MAE alone is 0.012 below our proposed framework, highlighting the significance of the anomaly classifier. In Figure <ref>, we show the MAE reconstruction of normal and abnormal samples from the BRATS2020 benchmark. We can easily notice that the abnormal samples have a higher reconstruction error than the normal samples.
We also ablate the choice of using the absolute difference between the reconstructed and the original image as the input for the anomaly classifier. We present the results obtained on the BRATS2020 data set in Table <ref>. We test the absolute difference (L_1) and the Euclidean distance (L_2) between the reconstructed and the original image. We also perform an experiment directly using the reconstructed image as input to the anomaly classifier. We observe that the highest performance is obtained when the absolute difference between the samples (original and reconstructed) is employed, while the lowest performance of 0.821 is obtained when the reconstructed image is used as input to the anomaly classifier.
Pseudo-abnormal Module.
In the pseudo-abnormal module, we adjust the reconstructed samples to simulate abnormalities. We change the pixel intensities inside each selected region by multiplying the pixels with β, a random single number drawn from a uniform distribution. Our first ablation experiment is to change the intensity of each pixel in the selected region independently (i.e. to generate a matrix of β, one for each pixel inside the bounding box). By independently changing the pixel intensity the AUROC score drops to 0.835 from 0.899 on the BRATS2020 data set. We conclude that independently altering each pixel inside a region is too severe, resulting in unnatural reconstruction artifacts that are not aligned with the reconstruction errors that occur when a real anomaly is encountered, therefore, the performance decreases significantly.
We also ablate the hyper-parameter k (the number of pseudo-abnormal bounding boxes) which is uniformly sampled between [ k_1, k_2 ]. We report the results obtained on the BRATS2020 data set in Table <ref>. We observe that if we use too many abnormal regions (at least 5 per sample) the performance drops to 0.889. We also observe that using maximum 5 abnormal regions is slightly better than using maximum 10 regions per sample. We carried on the experiments with the latter setting since we tuned the hyper-parameters on the validation set, where we obtained slightly better performance when using maximum 10 regions per sample.
Masked Autoencoders.
In our main experiment, we do not apply the loss on the unmasked tokens as detailed in Eq. (<ref>). When we applied the loss on the unmasked tokens and used the reconstructed tokens instead of the original tokens, the performance drops from 0.899 to 0.864 in terms of AUROC on the BRATS2020 data set. Furthermore, the MAE framework has two significant hyper-parameters, namely the masking ratio α and the numbers of training epochs. We studied the influence of these two hyper-parameters on the medical anomaly detection performance and reported the results on the BRATS2020 data set in Figure <ref>. We notice that the masking ratio equals to 0.75 is the optimal masking ratio in order to capture representative patterns from the normal training data. We conducted an ablation experiment where we substituted the MAE framework with an Autoencoder having identical architecture (i.e. we did not mask any token and reconstruct all tokens during training) while keeping the rest of our framework unchanged. This ablated version is illustrated in the left side of Figure <ref> with masking ratio equals 0. When we replaced the MAE model with the standard AE model the performance dropped to 0.735 in terms of AUROC on the BRATS2020 data set. To further assess the significance of the MAE framework, we removed it altogether and applied the anomaly classifier directly to the original images. This ablated version obtained an AUROC score of 0.874 on the BRATS2020 data set. Based on the results obtained by the two ablation experiments, we conjecture that the MAE framework is a significant component of our framework.
As illustrated in Figure <ref>, we observed that training the MAE framework longer (up to 1600 epochs) attains better performance, however after a certain point (1600 epochs), training MAE longer does not increase the performance of the anomaly detection framework.
§ CONCLUSIONS
In this paper, we proposed a framework for unsupervised anomaly detection in medical images.
We tested our framework on two data sets, namely BRATS2020 <cit.> and LUNA16 <cit.> obtaining higher performance than four state-of-the-art anomaly detection frameworks <cit.>. We performed an extensive ablation study showing the benefits of each design choice of our framework. Different from the related works <cit.>, we added supervised signal to the learning process through our pseudo-abnormal module and anomaly classifier, which helped us achieve state-of-the-art results. Our method has the advantage of not depending solely on the accuracy of the underlying model, leveraging the supervised signal obtained from the anomaly classifier. By overcoming the limitation of the previous methods (i.e. the reliance on the reconstruction model), we introduce a new challenge namely, the simulation of anomalies in healthy scans. In future work, we aim to study different approaches to simulate anomalies in medical images to further boost the performance.
§ ACKNOWLEDGEMENTS
The research leading to these results has received funding from the NO Grants 2014-2021, under project ELO-Hyp contract no. 24/2020.
|
http://arxiv.org/abs/2307.05542v2 | 20230708193421 | Geometric parametrization of $SO(D+1)$ phase space of all dimensional loop quantum gravity: II. Beyond the simplicity constraint surface | [
"Gaoping Long"
] | gr-qc | [
"gr-qc"
] |
The regularization of the scalar constraint and the Fermion coupling problem indicate that it is necessary to consider some kind of gauge fixing methods to deal with the simplicity constraint in all dimensional SO(D+1) loop quantum gravity. The coherent state with well-behaved peakedness property is an essential ingredient to carry out the gauge fixing method. To provide the basic tool for constructing such kind of coherent state, we generalize the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space of (1+D)-dimensional loop quantum gravity from the edge simplicity constraint surface to a dense subspace in the SO(D+1) holonomy-flux phase space. The symplectic structure on the twisted geometric parameter space and the Poisson structure in terms of the twisted geometric variables are analyzed. Besides, we discuss the relation between the two twisted geometry parametrizations constructed respectively on the
edge simplicity constraint surface and the dense subspace of the SO(D+1) holonomy-flux phase space. Our results show that these two types of parametrization are equivalent to each other upon carrying out the gauge reduction with respect to the edge simplicity constraint.
§ INTRODUCTION
As a non-perturbative and background-independent approach to unify general relativity (GR) and quantum
mechanics, loop quantum gravity (LQG) has made remarkable progress in several aspects <cit.><cit.><cit.><cit.>. For instance, various symmetry-reduced models have been established in the framework of LQG to address the resolution of singularities <cit.>, and various attempts have been made in the framework of
LQG to account for the BH entropy <cit.>.
Loop quantum gravity in all-dimensional spacetime is also of interest because of its potential for absorbing valuable ideas from other gravity theories (e.g. supersymmetry and extra dimensions <cit.>) into the loop quantization framework of GR. The loop quantization approach for GR in all dimensions was first developed by Bodendorfer, Thiemann and Thurn <cit.><cit.><cit.>. In detail, the all-dimensional LQG is based on the connection formulation of (1+D)-dimensional GR in the form of an SO(D+1) Yang-Mills theory, with the kinematic phase space coordinatized by the canonical pairs (A_aIJ,π^bKL), consisting of the spatial SO(D+1) connection fields A_aIJ and the vector fields π^bKL. In this formulation, the theory is governed by the first-class system of the SO(D+1) Gaussian constraints, the (D+1)-dimensional ADM constraints and the additional simplicity constraints. Similar to the Gaussian constraints, the simplicity constraints, taking the form S^ab_IJKL:=π^a[IJπ^|b|KL], generate extra gauge symmetries in the SO(D+1) Yang-Mills phase space. It has been shown that the connection phase space correctly reduces to the familiar ADM phase space by carrying out the symplectic reductions with respect to the Gaussian and simplicity constraints. Similar to the case of SU(2) LQG, the loop quantization of the SO(D+1) Yang-Mills theory leads to the spin-network states of SO(D+1) holonomies on graphs, which carry the quanta of the flux operators representing the fluxes of π^bKL over some (D-1)-dimensional faces. The Hilbert space composed of the spin-network states indicates the holonomy-flux phase space associated to each graph, with the Poisson algebras among holonomies and fluxes in the holonomy-flux phase space being isomorphic to the quantum algebras among them in the quantum Hilbert space. To look for the all-dimensional Regge ADM data encoded in the SO(D+1) spin-network states, it is necessary to find the degrees of freedom of discrete geometries encoded in the SO(D+1) holonomy-flux variables, by considering a gauge reduction procedure with respect to both the SO(D+1) Gaussian constraints and the simplicity constraints in the holonomy-flux phase space.
A series of studies in this direction was first carried out in the SU(2) formulation of (1+3)-dimensional LQG <cit.><cit.><cit.><cit.><cit.>, and was then generalized to the SO(D+1) holonomy-flux phase space in our companion paper <cit.>. Specifically, since the simplicity constraints become anomalous at the vertices of the graphs, the reductions with respect to the Gaussian and simplicity constraints are guided by the twisted geometry parametrization of the edge simplicity constraint surface in the holonomy-flux phase space of SO(D+1) LQG.
In particular, the twisted geometry interpretation of the holonomy-flux variables suggests that the Gaussian and edge simplicity constraints should be imposed strongly, since they generate true gauge transformations, while the vertex simplicity constraints should be imposed weakly. The reduced space parametrized by the twisted geometric parameters gives a discrete Regge geometry picture, which can be regarded as the discrete version of the ADM phase space of GR.
An important application of the twisted geometry parametrization is the construction of the twisted geometry coherent states. Such coherent states were first established in SU(2) LQG <cit.>, and then generalized to SO(D+1) LQG with the restriction to the simple representations <cit.>. Specifically, based on the twisted geometry parameters, the simple twisted geometry coherent state in the strong solution space of the quantum edge simplicity constraints is established by selecting the dominant terms (referred to as Perelomov type coherent states <cit.>) with simple representations of SO(D+1) in the decomposition of the heat-kernel coherent state of SO(D+1) <cit.>. It has been shown that the simple twisted geometry coherent states take the form of Gaussian superpositions. In particular, the simple twisted geometry coherent states provide an over-complete basis of the strong solution space of the quantum edge simplicity constraints, and their wave functions have well-behaved peakedness and Ehrenfest properties in the reduced phase space with respect to the edge simplicity constraints <cit.>.
In fact, the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space discussed in Ref.<cit.> concerns only the constraint surface of the edge simplicity constraint, and the resulting twisted geometry variables only parametrize the reduced phase space with respect to the edge simplicity constraint. Correspondingly, the simple twisted geometry coherent states constructed from this parametrization of the reduced phase space are gauge invariant (with respect to the edge simplicity constraint) coherent states <cit.>. In other words, the wave functions of these gauge invariant coherent states are constant along the corresponding gauge orbits, so that each of them peaks at a gauge orbit instead of a point in the phase space <cit.>.
As mentioned above, the edge simplicity constraint should be imposed strongly following the twisted geometry interpretation of the holonomy-flux variables. Thus, it may seem that all studies of all dimensional SO(D+1) LQG can be carried out in the strong solution space of the quantum edge simplicity constraint, which is the gauge invariant (with respect to the simplicity constraint) subspace of the full Hilbert space of all dimensional SO(D+1) LQG. Nevertheless, several discussions have shown that it is necessary to consider some kind of gauge-fixed solution space with respect to the simplicity constraint in order to deal with some of the issues that appear in all dimensional SO(D+1) LQG.
Let us introduce two issues to explain this necessity. First, the regularization of the scalar constraint can be carried out by following the standard loop regularization method <cit.><cit.><cit.>. The resulting regularized scalar constraint contains the Euclidean term, which is given by the antisymmetric contraction of the holonomies along some closed loops and the fluxes at the beginning and target points of these loops. Classically, this Euclidean term captures the information of both the intrinsic and extrinsic curvature along these closed loops. However, it has been shown that the Euclidean term in the quantized scalar constraint cannot capture this intrinsic and extrinsic curvature in the strong solution space of the quantum edge simplicity constraint, since the strong imposition of the quantum edge simplicity constraint leads to a gauge averaging which eliminates some critical ingredients of the holonomies <cit.>. Thus, the standard loop regularization method conflicts with the strong imposition of the edge simplicity constraint. To deal with this issue, one may consider a gauge-fixed solution of the edge simplicity constraint to avoid the gauge averaging, so that the scalar constraint operator given by the standard loop regularization method captures the intrinsic and extrinsic curvature correctly. This is the first issue pointing out the necessity of considering a gauge-fixed solution space with respect to the simplicity constraint. The second issue pointing out this necessity is the fermion coupling problem in all dimensional LQG <cit.>. Specifically, the strong imposition of the quantum edge simplicity constraint restricts the holonomies in all dimensional LQG to be represented in the simple representation spaces of SO(D+1), which implies that the holonomies cannot transform the fermions, which take values in the spinor representation space of SO(D+1) for D≥4. An alternative scheme to deal with this issue is to consider a gauge-fixed solution of the quantum edge simplicity constraint based on the coherent states, which ensures that the holonomies can take matrices in the spinor representation space of SO(D+1), so that they are able to describe the transport of fermions along edges.
Usually, in the classical theory, gauge fixing can be realized by restricting the physical considerations to a section of the gauge orbits on the constraint surface of the edge simplicity constraint. However, this is not valid in the quantum theory, since the wave functions of the quantum states which sharply converge to the constraint surface of the edge simplicity constraint are always dispersed along the gauge orbits.
To overcome this problem, it is reasonable to consider a coherent state whose wave function peaks at a point in the phase space, so that one has a state whose wave function converges to both the constraint surface of the edge simplicity constraint and a section of the gauge orbits, with this convergence controlled by the width of the wave function of the coherent state.
Such a coherent state, whose wave function peaks at a point in the SO(D+1) holonomy-flux phase space, can be constructed by following a procedure similar to the construction of the simple twisted geometry coherent state in the strong solution space of the quantum edge simplicity constraint <cit.>. More specifically, one needs to consider a more general twisted geometry parametrization, which is able to coordinatize (almost) the whole SO(D+1) holonomy-flux phase space instead of only the reduced phase space. Then, based on this more general twisted geometry parametrization, one can decompose the heat-kernel coherent state of SO(D+1) and select certain dominant terms to formulate the twisted geometry coherent state involving the non-simple representations of SO(D+1), which will be referred to as the non-simple twisted geometry coherent state in all dimensional LQG.
As the first step towards establishing the non-simple twisted geometry coherent state in all dimensional LQG, it is necessary to extend the twisted geometry parametrization to the full SO(D+1) holonomy-flux phase space. In this article, we will establish the twisted geometry parametrization of a dense subspace of the full SO(D+1) holonomy-flux phase space, and extend this parametrization to a symplectomorphism. Besides, we will show that the twisted geometry parametrization of the edge simplicity constraint surface introduced in our previous work <cit.> can be regarded as a special case of the construction in this article.
This article is organized as follows. In our brief review of the classical connection formulation of all dimensional GR in Section <ref>, we will also introduce the SO(D+1) holonomy-flux phase space and the discretized formulation of the kinematical constraints. In Section <ref> and Section <ref> we will introduce the twisted geometry parametrization for a dense subspace of the SO(D+1) phase space, and analyze the Poisson structure among the new geometric parametrization variables. Then, in Section <ref> we will discuss the relation between the twisted geometry parametrizations of the edge simplicity constraint surface and of the dense subspace of the SO(D+1) holonomy-flux phase space. Finally, we will conclude with an outlook on possible next steps for future research.
§ PHASE SPACE OF ALL DIMENSIONAL LOOP QUANTUM GRAVITY
§.§ Connection phase space
The classical connection formulation of GR with arbitrary spacetime dimensionality (1+D) was first developed by Bodendorfer, Thiemann and Thurn in Ref.<cit.>. This continuum connection phase space is coordinatized by an so(D+1) valued 1-form field A_aIJ and a vector field π^bKL on the D-dimensional spatial manifold Σ, with the non-trivial Poisson brackets between them given by
{A_aIJ(x), π^bKL(y)}=2κβδ_a^bδ_[I^Kδ_J]^Lδ^(D)(x-y),
where β is the Barbero-Immirzi parameter and κ is the gravitational constant. It is known that this connection phase space correctly reduces to the familiar ADM phase space after the standard symplectic reduction procedure with respect to the first-class constraint system composed of the Gauss
constraints 𝒢^IJ≈0 and simplicity constraints S^ab_IJKL:=π^a[IJπ^|b|KL]≈0. Specifically, the simplicity constraint can be solved as π^aIJ=2√(q)n^[Ie^|a|J], where e^a_I is a dual D-bein field, n^I satisfying n^In_I=1 is determined by e^a_I with n^Ie_aI=0, and q is the determinant of the spatial metric q_ab which is determined by π^aIJ with q^ab=e^aIe^b_I on the simplicity constraint surface. One can split A_aIJ as
A_aIJ≡Γ_aIJ(π)+β K_aIJ
where Γ_aIJ(π) is a functional of π^aIJ which satisfies Γ_aIJ(π)=Γ_aIJ(e) on the simplicity constraint surface, with Γ_aIJ(e) being the unique torsionless spin connection compatible with the D-bein e_aI. Then, the densitized extrinsic curvature is given by K̃_a^ b=K_aIJπ^bIJ on the constraint surfaces of both the Gaussian and simplicity constraints.
It is easy to check that the Gaussian constraints generate the standard SO(D+1) gauge transformations of the connection field and its conjugate momentum. Now, let us consider the simplicity constraints from the perspective of the corresponding gauge transformations. First, the solution π^aIJ=2√(q)n^[Ie^|a|J] to the simplicity constraints introduced above defines the constraint surface of the simplicity constraints. Then, one can verify that the infinitesimal gauge transformations induced by the simplicity constraints are given by <cit.>
δ K_c^PQ={∫_Σd^Dxf_ab^IJKLπ^a_[IJπ^b_KL](x), K_c^PQ(y)}=4κ f_cb^[PQKL]π^b_KL(y).
Notice that, on the simplicity constraint surface we have π^aIJ=2√(q)n^[Ie^|a|J] so that
δ K_c^IJn_I=0. Further, by introducing the decomposition
K_aIJ≡ 2n_[IK_|a|J]+K̅_aIJ,
where K̅_aIJ:=η̅_I^Kη̅_J^LK_aKL with η̅^I_J=δ^I_J-n^I n_J and K̅_aIJn^I=0, we immediately find that K̅_aIJ is the pure-gauge component, while the components 2n_[IK_|a|J] are gauge invariant with respect to the transformations given in (<ref>). From the expressions of the ADM variables qq^ab=1/2π^aIJπ^b_IJ and K̃_a^ b=K_aIJπ^bIJ, it is easy to see that these variables are indeed gauge invariant with respect to the simplicity constraints on the constraint surface. Thus, through the symplectic gauge reduction procedure, the simplicity constraints eliminate two sets of degrees of freedom—restricting π̅^aIJ:=π^aIJ-2√(q)n^[Ie^|a|J]=0 by the constraint equation and removing the pure-gauge components K̅_aIJ:=η̅_I^Kη̅_J^LK_aKL. Following these results, the geometric variables constructed from the ADM variables (q_ab,K̃^cd) can be extended as functionals on the connection phase space, with their original geometric interpretation retained on the constraint surface.
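As a short explicit check (ours; it is not spelled out in the original derivation), inserting the solution π^aIJ=2√(q)n^[Ie^|a|J] into the first of these expressions gives

1/2π^aIJπ^b_IJ = 1/2 q (n^Ie^aJ-n^Je^aI)(n_Ie^b_J-n_Je^b_I) = q e^aIe^b_I = q q^ab,

using n^In_I=1 and n^Ie_aI=0. Since this combination is built purely from π^aIJ, it is manifestly unchanged by the transformations (<ref>), which act only on K_aIJ, in accordance with the gauge invariance claimed above.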
§.§ Holonomy-flux phase space
The quantization of the connection formulation of (1+D)-dimensional GR can be carried out by following the standard loop quantization procedure, which leads to a Hilbert space ℋ given by the completion of the space of cylindrical functions on the quantum configuration space <cit.>. This Hilbert space ℋ can be regarded as a union of the spaces ℋ_γ=L^2((SO(D+1))^|E(γ)|,dμ_Haar^|E(γ)|) over all possible graphs γ, where E(γ) denotes the set of edges of γ and dμ_Haar^|E(γ)| denotes the product of the Haar measures on SO(D+1). The Gaussian constraint and simplicity constraint can be promoted to constraint operators in this Hilbert space. However, it turns out that the quantum brackets among these constraints give an open and anomalous quantum algebra, which differs from the corresponding first-class constraint algebra in the connection phase space <cit.>. Hence, it is necessary to propose a proper treatment of these quantum constraints, in order to reduce the gauge degrees of freedom and retain the physical degrees of freedom correctly. A reasonable method to reach this goal is to construct the gauge reductions with respect to the Gaussian and simplicity constraints in the holonomy-flux phase space. More specifically, since the classical constraint algebra in the holonomy-flux phase space is isomorphic to the quantum constraint algebra in the quantum theory, one can treat the Gaussian and simplicity constraints in the holonomy-flux phase space and in the quantum theory on the same footing. Then, the degrees of freedom removed in the imposition of the quantum constraint operators can be reflected in the gauge reductions with respect to the Gaussian and simplicity constraints in the holonomy-flux phase space. Through these gauge reductions, one can clarify the gauge degrees of freedom and verify whether the treatment of these constraints retains the correct physical degrees of freedom. Now, let us first give a brief review of the holonomy-flux phase space.
The quantum geometry of loop quantum gravity is described in terms of spatially smeared variables — the D-bein fluxes over (D-1)-dimensional faces and the connection holonomies over paths — as the conjugate pairs of elementary variables. In the following we will focus on the holonomies and fluxes based on one specific graph. The edges of the given graph naturally provide the set of paths for a fixed set of holonomies, and the cell decomposition dual to the graph provides the set of (D-1)-faces specifying a fixed set of fluxes. In this setting, the holonomy over one of the edges is naturally conjugate to the flux over the (D-1)-face traversed by the edge, with this pair satisfying the smeared version of the Poisson algebra (<ref>), thus forming a new phase space. More precisely, given the graph γ embedded in the spatial manifold, we consider a new algebra given by the holonomy-flux variables (h_e, X_e)∈ SO(D+1)× so(D+1) over all edges e of γ. These pairs of variables represent the discretized version of the connection A_aIJ and its conjugate momentum π^bKL. Specifically, the holonomy of A_aIJ along an edge e∈γ is defined by
h_e[A]:=𝒫exp(∫_eA)=1+∑_n=1^∞∫_0^1dt_n∫_0^t_ndt_n-1...∫_0^t_2 dt_1A(t_1)...A(t_n),
where A(t):=1/2ė^aA_aIJτ^IJ, ė^a is the tangent vector field of e, τ^IJ is a basis of so(D+1) given by (τ^IJ)^def._KL=2δ^[I_Kδ^J]_L in definition representation space of SO(D+1), and 𝒫 denotes the path-ordered product.
The flux X^IJ_e of π^aIJ through the (D-1)-dimensional face dual to edge e is defined by
X^IJ_e:=-1/4β a^D-1tr(τ^IJ∫_e^⋆ϵ_aa_1...a_D-1h(ρ^s_e(σ)) π^aKL(σ)τ_KLh(ρ^s_e(σ)^-1)),
where a is an arbitrary but fixed constant with the dimension of length, e^⋆ is the (D-1)-face traversed by e in the dual lattice of γ, ρ_e^s(σ): [0,1]→Σ is a path connecting the source point s_e∈ e to σ∈ e^⋆ such that ρ_e^s(σ): [0,1/2]→ e and ρ_e^s(σ): [1/2, 1]→ e^⋆. The Poisson algebra between the holonomy-flux variables can be induced from the Poisson bracket (<ref>) between the connection variables, which reads
{h_e, h_e'}=0, {h_e, X^IJ_e'}=δ_e,e'κ/a^D-1d/dλ(e^λτ^IJh_e)|_λ=0,
{X^IJ_e, X^KL_e'}=δ_e,e'κ/2a^D-1(-δ^IKX_e^JL-δ^JL X^IK_e+δ^ILX_e^JK+δ^JKX_e^ IL).
Noticing that h_e∈ SO(D+1), X_e^IJ∈ so(D+1) and SO(D+1)× so(D+1)≅ T^∗ SO(D+1), the new discrete phase space, called the holonomy-flux phase space of SO(D+1) loop quantum gravity on a fixed graph, is a direct product of SO(D+1) cotangent bundles. Finally, the complete phase space of the theory is given by taking the union over the holonomy-flux phase spaces of all possible graphs. Similar to the SU(2) case, the phase space coordinatized by the holonomy-flux variables (h_e, X_e) of SO(D+1) loop quantum gravity can be regarded as the discretized version of the continuum phase space.
The (discretized) Gaussian and simplicity constraints in the holonomy-flux phase space are constructed in agreement with the corresponding quantum constraints. With X_-e=-h_e^-1X_eh_e≡X̃_e, the (discretized) Gaussian constraints G_v^IJ≈0 for each vertex v∈γ of the graph take the form <cit.>
G_v^IJ=∑_e|s(e)=vX_e^IJ+∑_e|t(e)=vX̃_e^IJ≈0,
where s(e) and t(e) denote the source and target points of the oriented edge e respectively. The (discretized) simplicity constraints consist of the edge simplicity constraints S^IJKL_e≈0 and vertex simplicity constraints S^IJKL_v,e,e'≈0, which take the forms <cit.>
S_e^IJKL≡ X^[IJ_e X^KL]_e≈0, ∀ e∈γ, S_v,e,e'^IJKL≡ X^[IJ_e X^KL]_e'≈0, ∀ e,e'∈γ, s(e)=s(e')=v.
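For orientation we add an elementary remark (not contained in the original text): any flux of the bivector form X_e^IJ=2N^[IM^J], built from two vectors N,M∈ℝ^D+1, automatically satisfies the edge simplicity constraint, since

X_e^[IJX_e^KL] = 4 N^[IM^JN^KM^L] = 0

by total antisymmetrization of an expression containing the same vector twice. Conversely, the edge simplicity constraint forces each X_e to be of this decomposable (simple) bivector form, which is the notion of simple flux used later in this article.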
It has been shown that, since the commutative Poisson algebra of the conjugate momentum variables {π^bKL} becomes a non-commutative Poisson algebra of the flux variables { X^KL_e} after the smearing, the Poisson algebra among the discrete versions of the simplicity constraints becomes non-closed and thus anomalous, which makes the symplectic reductions in the holonomy-flux phase space difficult to implement <cit.>. To deal with this issue, the twisted geometry parametrization of the holonomy-flux phase space is constructed, which ensures that the gauge reductions with respect to the Gaussian and simplicity constraints in the holonomy-flux phase space can be carried out with the guidance of the twisted geometric interpretation of the holonomy-flux variables <cit.>.
The twisted geometry parametrization for the SU(2) holonomy-flux variables of (1+3)-dimensional LQG was first introduced in a series of studies following the original works by Freidel and Speziale <cit.><cit.>. The space of the twisted geometry for SU(2) LQG can undergo a symplectic reduction with respect to the discretized Gauss constraints, giving rise to a reduced phase space containing the discretized ADM data of a polyhedral Regge hypersurface. Following a similar procedure, the twisted geometry parametrization in all dimensional SO(D+1) LQG has been constructed on the edge simplicity constraint surface in the SO(D+1) holonomy-flux phase space in our companion paper <cit.>. It has been shown that the gauge reductions with respect to the simplicity constraints and Gaussian constraints in SO(D+1) LQG can be carried out properly in the twisted geometry parametrization space, which leads to a clear correspondence between the original holonomy-flux variables (h_e, X_e) on the edge simplicity constraint surface and the D-hypersurface discrete geometry data in the Regge geometry formulation. Nevertheless, it is not enough to construct the twisted geometric parametrization only on the edge simplicity constraint surface in the SO(D+1) holonomy-flux phase space.
As we have mentioned in the introduction, several explorations in the quantum theory of SO(D+1) LQG require us to consider quantum states whose wave functions are dispersed beyond the edge simplicity constraint surface. Hence, it is necessary to extend the twisted geometry parametrization to interpret the phase space points which are not located on the edge simplicity constraint surface.
§ GEOMETRIC PARAMETRIZATION OF SO(D+1) HOLONOMY-FLUX PHASE SPACE
To make our statements and notation clearer, we will first generalize the twisted geometry parametrization to a dense subspace of T^∗ SO(D+1) in this section. The discussion of the relation between the twisted geometry parametrizations constructed in this article and in previous works <cit.> is left to Section 5.
§.§ Beyond the edge-simplicity constraint surface
Recall the SO(D+1) holonomy-flux phase space ×_e∈γT^∗ SO(D+1)_e associated to the given graph γ. Let us focus on the holonomy-flux phase space T^∗ SO(D+1) associated to a single edge without loss of generality. Notice that the semi-simple elements in so(D+1) compose a dense subset so(D+1)_ss⊂ so(D+1) and we have T^∗ SO(D+1)≅ SO(D+1)× so(D+1). Then, we can define a dense subspace of T^∗ SO(D+1) as
T_ss^∗ SO(D+1):={(h, X)| h∈ SO(D+1), X is a semi-simple element of so(D+1)}.
To give the explicit formulation of the twisted geometric parametrization of T_ss^∗ SO(D+1), let us first introduce some new notation. Considering the orthonormal basis {δ_1^I,δ_2^I,...,δ_D+1^I} of ℝ^D+1, one has the basis {τ_IJ} of so(D+1) given by τ_IJ=(τ_IJ)^KL_def.:=2δ_I^[Kδ_J^L] in the definition representation space of SO(D+1), where (τ_IJ)^KL_def. is the generator of the infinitesimal rotation in the 2-dimensional vector space spanned by the two vectors δ_I^K and δ_J^L.
Then, let us introduce the maximum commutative sub-Lie algebra of so(D+1) spanned by {τ_1, τ_2,...,τ_m} with m=[D+1/2], where we define
τ_1:=τ_12, τ_2:= τ_34, ..., τ_m:= τ_D,D+1
for D+1 being even, and
τ_1:=τ_12, τ_2:= τ_34, ..., τ_m:= τ_D-1,D
for D+1 being odd.
This maximum commutative sub-Lie algebra of so(D+1) generates the maximum commutative subgroup 𝕋^m:=×_=1^m SO(2)_, m=[D+1/2]. Then, SO(D+1) can be regarded as a fiber bundle with fibers 𝕋^m over the base manifold ℚ_m:=SO(D+1)/𝕋^m, which can also be given by ℚ_m={𝕍:=(V_1,...,V_m)|V_=gτ_ g^-1, ∈{1,...,m}, g∈ SO(D+1)}. One can choose a Hopf section n: ℚ_m↦ SO(D+1), 𝕍↦ n(𝕍)
and another Hopf section ñ: ℚ̃_m↦ SO(D+1), 𝕍̃↦ñ(𝕍̃) for the copy ℚ̃_m of ℚ_m, which satisfy
V_1=nτ_1n^-1,...,V_m=nτ_mn^-1,
and
Ṽ_1=-ñτ_1ñ^-1,...,Ṽ_m=-ñτ_mñ^-1
with ℚ_m∋𝕍:=(V_1,...,V_m) and ℚ̃_m∋𝕍̃:=(Ṽ_1,...,Ṽ_m).
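As a concrete illustration (added here for orientation), consider the lowest-dimensional case D=3: then m=[D+1/2]=2, the commuting generators are τ_1=τ_12 and τ_2=τ_34, the maximal torus is 𝕋^2=SO(2)× SO(2)⊂ SO(4), and ℚ_2=SO(4)/𝕋^2 is a (6-2)=4-dimensional manifold whose points 𝕍=(V_1,V_2)=(gτ_1g^-1,gτ_2g^-1) are pairs of commuting so(4) elements.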
Observe that the choice for the Hopf sections is clearly non-unique, and from now on our parametrization will be given under one fixed choice of {n_e,ñ_e} for each edge e.
Then, in the subspace T_ss^∗ SO(D+1)_e associated to each edge e, the generalized twisted geometry parametrization can be given by the map
(𝕍_e,𝕍̃_e,η⃗_e,ξ⃗_e)↦(h_e, X_e)∈ T_ss^∗ SO(D+1)_e: X_e=1/2n_e(η_e^1 τ_1+...+η_e^m τ_m)n_e^-1
h_e=n_ee^ξ_e^1τ_1...e^ξ_e^mτ_mñ_e^-1,
where we have defined η⃗_e:=(η_e^1,...,η_e^m), η_e^1,η_e^2,...,η_e^m∈ℝ with η_e^1≥η_e^2≥...≥η_e^m≥0 and ξ⃗_e:=(ξ_e^1,...,ξ_e^m) with ξ_e^1,...,ξ_e^m∈(-π,π]. By defining η_e^1=:χ_e^1+...+χ_e^m, η_e^2 =:χ_e^2+...+χ_e^m, ..., η_e^m-1=:χ_e^m-1+χ_e^m, η_e^m=:χ_e^m with χ_e^1,...,χ_e^m≥ 0, one can replace η⃗_e by χ⃗_e:=(χ_e^1,...,χ_e^m) in the parametrization (<ref>).
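A simple parameter count (ours, as a consistency check) shows that no degrees of freedom are lost or added by this parametrization: dim T^∗ SO(D+1)=D(D+1), while the parameters (𝕍_e,𝕍̃_e,η⃗_e,ξ⃗_e) contribute 2 dim ℚ_m+2m=2(D(D+1)/2-m)+2m=D(D+1), since dim ℚ_m=dim SO(D+1)-m.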
The twisted geometry parametrization (<ref>) of T_ss^∗ SO(D+1)_e associated to a single edge can be directly extended to the whole graph γ.
Correspondingly, one can introduce the Levi-Civita holonomies {h^Γ_e|e∈γ} determined by the fluxes {X_e∈ so(D+1)_ss|e∈γ} and {X̃_e∈ so(D+1)_ss|e∈γ}, which take the form
h^Γ_e≡ n_ee^ζ_e^1τ_1...e^ζ_e^mτ_mñ_e^-1.
Note that the variables (ζ_e^1,...,ζ_e^m) are well-defined via the given h^Γ_e and the chosen Hopf sections; thus
(ζ_e^1,...,ζ_e^m) are already fixed by the given {X_e∈ so(D+1)_ss|e∈γ} and {X̃_e∈ so(D+1)_ss|e∈γ}. Then, one can factor out h^Γ_e from h_e through the expressions
h_e= (e^(ξ_e^1-ζ_e^1)n_eτ_1n_e^-1...e^(ξ_e^m-ζ_e^m)n_eτ_mn_e^-1) h^Γ_e =h^Γ_e(e^(ξ_e^1-ζ_e^1)ñ_eτ_1ñ_e^-1... e^(ξ_e^m-ζ_e^m)ñ_eτ_mñ_e^-1)
in the perspectives of the source point and target point of e respectively.
The above decomposition in terms of twisted geometry parameters can be adapted to the splitting of the Ashtekar connection as A_a=Γ_a+β K_a on a given graph. Specifically, one can consider the integral of A_a=Γ_a+β K_a∈ so(D+1) along an infinitesimal edge direction ℓ^a_e, which leads to A_e≡ A_aℓ^a_e, Γ_e≡Γ_aℓ^a_e and K_e≡ K_aℓ^a_e. Clearly, we can establish the following correspondence
h_e= e^A_e and h^Γ_e= e^Γ_e.
The remaining factor should account for the K_e. According to the above discussion, the value of K_e may thus be expressed in the perspectives of the source point and target point of e, respectively as
(e^(ξ_e^1-ζ_e^1)n_eτ_1n_e^-1...e^(ξ_e^m-ζ_e^m)n_eτ_mn_e^-1) =e^β K_e
or
(e^(ξ_e^1-ζ_e^1)ñ_eτ_1ñ_e^-1... e^(ξ_e^m-ζ_e^m)ñ_eτ_mñ_e^-1)= e^β K_e .
Further, we have
K_e =1/βn_e((ξ_e^1-ζ_e^1)τ_1+...+(ξ_e^m-ζ_e^m)τ_m)n_e^-1
or
K_e =1/βñ_e((ξ_e^1-ζ_e^1)τ_1+...+(ξ_e^m-ζ_e^m)τ_m)ñ_e^-1
when it is expressed in the perspectives of the source point and target point of e respectively.
The set of variables ((η_e^1,...,η_e^m), (ξ_e^1,...,ξ_e^m),𝕍_e, 𝕍̃_e) gives the generalization of the twisted geometry parametrization to the SO(D+1) holonomy-flux phase space. Compared with the twisted geometry parametrization of the edge-simplicity constraint surface in the SO(D+1) holonomy-flux phase space introduced in our companion paper <cit.>, this generalized parametrization scheme covers a dense subset of the SO(D+1) holonomy-flux phase space, which extends far beyond the edge-simplicity constraint surface. We will now carry out an analysis of the symplectic structure of the SO(D+1) holonomy-flux phase space based on the variables ((η_e^1,...,η_e^m), (ξ_e^1,...,ξ_e^m),𝕍_e, 𝕍̃_e), before coming back to provide more support for the relation between the generalized parametrization scheme of this paper and the one for the edge simplicity constraint surface given in our companion paper <cit.>.
§ SYMPLECTIC ANALYSIS OF SO(D+1) HOLONOMY-FLUX PHASE SPACE
Notice that the discussions in this section only depend on each single edge of the graph. To simplify our notations, we will focus on the analysis on a single edge and omit the label e without loss of generality.
§.§ Symplectic structure of SO(D+1) holonomy-flux phase space
The symplectic structure of the SO(D+1) holonomy-flux phase space has been discussed in our companion paper <cit.>; let us give a brief review of the main notations as follows.
Recall that the SO(D+1) holonomy-flux phase space associated with each edge of a given graph is given by the group cotangent space T^*SO(D+1); as a phase space it enjoys the natural symplectic structure of T^*SO(D+1). To give the explicit formulation of this symplectic structure, let us introduce the function f(h) on SO(D+1)∋ h, and the element p_X∈ so(D +1)^∗, which is a linear function of Y∈ so(D+1) defined by
p_X(Y)≡ X^KLY_KL,
where X=X^KL∈ so(D+1).
A right-invariant vector field X̂ associated to the Lie algebra element X∈ so(D+1), acts on a function f(h) via the right derivative ∇_X^R as
∇_X^Rf(h)≡d/dtf(e^-tXh)|_t=0;
under the adjoint transformation X↦ -hXh^-1, we obtain the corresponding left derivative
∇_X^Lf(h)≡d/dtf(he^tX)|_t=0=-∇^R_hXh^-1f(h).
One can straightforwardly show that the map from the right invariant vector fields X̂ to the corresponding elements X∈ so(D+1) is given by the algebra-valued, right-invariant 1-form dhh^-1, which reads
i_X̂(dhh^-1)=(ℒ_X̂h)h^-1=-X,
where i denotes the interior product, and ℒ_Ŷ≡ i_Ŷd+di_Ŷ denotes the Lie derivative.
Now, the natural symplectic potential for T^∗ SO(D+1) can be expressed as
Θ≡ X^IJ(dhh^-1)_IJ≡Tr(Xdhh^-1).
The symplectic 2-form then follows as
Ω≡ -dΘ=- dTr(Xdhh^-1)=1/2Tr(dX̃∧ h^-1dh-dX∧ dhh^-1)
where we have introduced X̃≡-h^-1Xh. From the symplectic 2-form, the Poisson brackets among the interesting phase space functions f≡ f(h) and p_Y≡ p_Y(X)=Y^IJX_IJ is given by <cit.>
{p_Y,p_Z}=p_[Y,Z], {p_Y,f(h)}=∇^R_Yf(h), {f(h),f'(h)}=0.
One can see from the brackets (<ref>) that the Poisson action of p_Y(X) generates left derivatives. Similarly, it is easy to check that the action of p̃_Y(X)≡ Y^IJX̃_IJ with X̃=-h^-1Xh generate the right derivative {p̃_Y,f(h)}=∇^L_Yf(h). Moreover, one can check the commutative relation {p_Y,p̃_Z}=0. Finally, it is easy to verify that, by setting 2κ/a^D-1=1, the Poisson brackets (<ref>) given by the natural symplectic potential (<ref>) for T^∗ SO(D+1) are identical with the one (<ref>) induced by the symplectic structure (<ref>) in the SO(D+1) connection phase space <cit.>. In the following part of this article, we will analyze the symplectic structure on T^∗ SO(D+1) based on the symplectic potential Θ without loss of generality.
§.§ Symplectomorphism between SO(D+1) holonomy-flux phase space and generalized twisted geometry parameter space
From now on, let us focus on the analysis of one single edge e of the given graph γ, and we omit the label e in all of the notations.
Denote by B:=ℚ_m×ℚ̃_m × (×_=1^m ℝ^_+)×(× _=1^m S^1_) the collection of the generalized twisted geometric parameters (𝕍,𝕍̃,χ⃗,ξ⃗). It is easy to see that the map (<ref>) is not a one to one mapping. More explicitly, one can decompose B=B_0∪Ḃ with
Ḃ:= B|_η_m> 0
and
B_0:= B∖Ḃ.
Then, one can find that the map (<ref>) is a one to one mapping between Ḃ and its image Ḃ^∗⊂ T_ss^∗ SO(D+1), while it is a many to one mapping between
B_0 and its image B_0^∗⊂ T_ss^∗ SO(D+1). We will first focus on the symplectic structure on B in this subsection, and then go back to consider the many to one mapping between
B_0 and its image B_0^∗ in section <ref>.
The one to one mapping between Ḃ and its image Ḃ^∗⊂ T_ss^∗ SO(D+1) is also an isomorphism
Ḃ→Ḃ^∗⊂ T_ss^∗ SO(D+1).
Based on the isomorphism (<ref>), we may use the generalized twisted geometric parameters to express the induced symplectic structure of Ḃ^∗⊂ T_ss^∗ SO(D+1) inherited from the phase space T^*SO(D+1). First, the induced symplectic potential can be expressed as
Θ_Ḃ^∗ = Tr(Xdhh^-1)|_Ḃ^∗⊂ T_ss^∗ SO(D+1)⊂ T^∗ SO(D+1)
= 1/2∑_'=1^mη_'Tr(nτ_'n^-1 (dnn^-1+n(∑_dξ^τ_)n^-1-ne^∑_ξ^τ_ñ^-1dññ^-1ñe^-∑_ξ^τ_ n^-1))
= 1/2∑_=1^mη_Tr(V_ dnn^-1)+ ∑_=1^mη_dξ^- 1/2∑_=1^mη_Tr(Ṽ_ dññ^-1).
In the space B, one can extend the potential Θ_Ḃ=Θ_Ḃ^∗ in the limit η_m→0 and define
Θ_B≡1/2∑_=1^mη_Tr(V_ dnn^-1)+ ∑_=1^mη_dξ^- 1/2∑_=1^mη_Tr(Ṽ_ dññ^-1)
as the symplectic potential on B. This potential gives the sympletic form Ω_B as
Ω_B=-dΘ_B = 1/2∑_=1^mη_Tr(V_ dnn^-1∧ dnn^-1)-1/2∑_=1^mη_Tr(Ṽ_ dññ^-1∧ dññ^-1)
-∑_=1^mdη_∧ (dξ_+1/2Tr(V_ dnn^-1)-1/2Tr(Ṽ_ dññ^-1)).
It is clear that in the η_m=0 region the above (pre-)symplectic structure is degenerate, as expected due to the degeneracy of the parametrization itself in the η_m= 0 region of T_ss^∗ SO(D+1).
We are interested in the Poisson algebra between these twisted-geometry variables determined by the presymplectic form Ω_B. In order to give the explicit Poisson brackets, in the following section we will study the Hopf sections n(𝕍) and ñ(𝕍̃) from the perspective of their contributions to the Hamiltonian fields on B defined by Ω_B.
§.§ Geometric action on the Hopf section and its decomposition
§.§.§ Geometric action on the Hopf section
The Hopf map is defined as a special projection map π: SO(D+1)↦ℚ_m with ℚ_m:=SO(D+1)/𝕋^m, such that every element in ℚ_m comes from an orbit generated by the maximal subgroup 𝕋^m of SO(D+1) that fixed all of the elements in the set {τ_1,τ_2,...,τ_m}. In the definition representation of SO(D+1) the Hopf map reads
π: SO(D+1) → ℚ_m
g → 𝕍(g)=(gτ_1g^-1, gτ_2g^-1,...).
Note that 𝕍(g) is invariant under g↦ g^α_1,α_2,...,α_m=ge^α_1τ_1+α_2τ_2+...α_mτ_m, thus it is a function of D(D+1)/2-[D+1/2] variables only. This result shows that SO(D+1) can be seen as a bundle (which is referred to as Hopf bundle) over ℚ_m with the 𝕋^m
fibers. On this bundle we can introduce the Hopf sections, each as an inverse map to the above projection
n: ℚ_m → SO(D+1)
𝕍 ↦ n(𝕍),
such that π(n(𝕍))=𝕍. This section assigns a specific SO(D+1) element n to each member of the ℚ_m, and it is easy to see that any given section n is related to all other sections via n^α_1,α_2,...,α_m≡ ne^α_1τ_1+α_2τ_2+...α_mτ_m; hence the free angles {α_1,α_2,...,α_m} parametrize the set of all possible Hopf sections.
Notice that each algebra element X∈ so(D+1) can be associated to a vector field X̂ on ℚ_m, which acts on a function f(𝕍) of ℚ_m as
ℒ_X̂f(𝕍):=d/dtf(e^-tX𝕍e^tX)|_t=0,
where g𝕍g^-1:=(gV_1g^-1, gV_2g^-1,...,gV_mg^-1) with g∈ SO(D+1). Similarly, for a so(D+1) valued function S=S(𝕍) on ℚ_m, it can be also associated to a vector field Ŝ on ℚ_m, , which acts on the function f(𝕍) of ℚ_m as
ℒ_Ŝf(𝕍):=d/dtf(e^-tS𝕍e^tS)|_t=0.
Specifically, for the linear functions we have
ℒ_X̂𝕍:=(ℒ_X̂V_1,..., ℒ_X̂V_m)=(-[X,V_1],...,-[X,V_m])=:-[X,𝕍].
Especially, we are interested in the action of the vector fields on the Hopf section n. Notice that we have
ℒ_X̂V_(n)=(ℒ_X̂n)τ_ n^-1 +nτ_(ℒ_X̂n^-1)=[(ℒ_X̂n)n^-1, V_], ∀∈{1,...,m}.
Comparing this result with (<ref>), we deduce that
(ℒ_X̂n)n^-1=-X+∑_V_ F^_X(𝕍),
where F^_X(𝕍) are functions on ℚ_m, so that V_ F^_X(𝕍) commuting with the element 𝕍 for all .
Lemma.
The solution functions L_^IJ≡ L^: ℚ_m↦ so(D+1) of the equations
Tr(L^ dnn^-1)=0, L_^IJV_',IJ=δ_,',
appear in the Lie derivative of the Hopf section n(𝕍) as
L^_X:=L^IJ_ X_IJ=F^_X
and it satisfies the key coherence identity
ℒ_X̂L^_Y-ℒ_ŶL^_X=L^_[X,Y].
Finally, the general solution to this identity satisfying the conditions L_^IJV_',IJ=δ_,' is given by
L'^_X=L^_X+ℒ_X̂α^
where α^ is a function on ℚ_m.
Proof.
To prove Eq.(<ref>), let us take the interior product of an arbitrary vector field X̂ with the definition Tr(L^ dnn^-1)=0 and use (ℒ_X̂n)n^-1=i_X̂(dnn^-1), which follows from the definition of the Lie derivative; we then have
0=i_X̂Tr(L^ dnn^-1)=Tr(L^(ℒ_X̂n)n^-1) =-Tr(L^ X)+∑_'=1^mF^'_XTr(L^ V_')=-L^_X+F^_X,
where we used Tr(L^ V_')=L_^IJV_',IJ=δ_,' and (<ref>). Thus, we proved F^_X=L^_X.
To prove Eq.(<ref>), we first consider that
ℒ_X̂(dnn^-1) = i_X̂(dnn^-1∧ dnn^-1)+d[(ℒ_X̂n)n^-1]
= [-X+∑_V_ L^_X,dnn^-1]+d(-X+∑_V_ L^_X)
= ∑_V_ dL^_X-[X,dnn^-1],
where we used the definition of Lie derivative in the first equality, Eq.(<ref>) in the second and dV_=[dnn^-1,V_] in the third. Then, the above equation leads to
0=ℒ_X̂Tr(L^ dnn^-1) =Tr((ℒ_X̂L^-[L^,X])dnn^-1) +dL^_X
by using the equalities Tr(L^ V_')=δ_,'.
Further, let us take the interior product of Eq.(<ref>) with Ŷ and we get
ℒ_ŶL^_X = Tr((ℒ_X̂L^-[L^,X] )(Y-∑_'V_' L^'_Y))
= ℒ_X̂L^_Y-L^_[X,Y]-∑_'L^'_Y(Tr((ℒ_X̂L^)V_') -Tr(L^[X,V_']))
= ℒ_X̂L^_Y-L^_[X,Y]-∑_'L^'_Yℒ_X̂(Tr(L^ V_') ),
where the last term vanishes, thus we obtain the coherence identity (<ref>).
To show Eq.(<ref>), let us suppose that we have another solution L'^ to the coherence identity and also the condition Tr(L'^ V_')=L'^IJ_ V_',IJ=δ_,'. Considering the 1-form ϕ^≡ -Tr(L'^ dnn^-1), one can see that its contraction with X̂
ϕ^_X≡ i_X̂ϕ^=-Tr(L'^ (ℒ_X̂n)n^-1)=L'^ _X-L^_X
is the difference between the two solutions L'^ _X and L^_X. Thus, ϕ^_X is also a solution to the coherence identity (<ref>). This result together with the
definition of the differential i_X̂i_Ŷdϕ^=ℒ_Ŷϕ^_X -ℒ_X̂ϕ^_Y+ϕ^_[X,Y] implies that dϕ^=0, which means that there exists a function α^ locally at least, such that ϕ^=dα^ and thus L'^_X=L^_X+ℒ_X̂α^. This proves the Eq. (<ref>).
□
Finally, let us recall that the freedom in choosing the Hopf section lies in the function parameters α^(𝕍) in the expression n'(𝕍)≡ n(𝕍)e^∑_α^(𝕍)τ_ for all possible choices of the sections. By applying Eq.(<ref>) to this n', we immediately get L'^_X= L^_X+ i_X̂dα^. Referring to (<ref>), we can conclude that the function L^ is exactly the function coefficient for the component of (dn)n^-1 in the V_ direction, which is determined by a choice of the Hopf section n.
§.§.§ Decomposition and sequence of the Hopf section
As we will see in the following part of this article, the Hopf section n and the geometric action on it are closely related to the symplectic structure and the symplectic reduction on B. To analyze the Hopf section over ℚ_m more explicitly, let us consider the decomposition of the Hopf section n. Recalling the definition ℚ_m:=SO(D+1)/𝕋^m, one can decompose ℚ_m as
ℚ_m=𝔻_1×𝔻_2×...×𝔻_m
with
𝔻_1:=SO(D+1)/(SO(2)_τ_1× SO(D-1)_[τ_1]),
𝔻_2:=SO(D-1)_[τ_1]/(SO(2)_τ_2× SO(D-3)_[τ_2]),
...
𝔻_m:=SO(D+3-2m)_[τ_(m-1)]/SO(2)_τ_m,
where SO(2)_τ_ is the group generated by τ_ and SO(D+1-2)_[τ_] is the maximal subgroup of SO(D+1) which preserves (τ_1,...,τ_) and has the Cartan subalgebra spanned by (τ_(+1),...,τ_m). Here one should notice that both of SO(2)_τ_ and SO(D+1-2)_[τ_] preserve (τ_1,...,τ_). Then, the
Hopf section n can be decomposed as
n=n_1n_2...n_m.
This decomposition gives a sequence of the Hopf sections, which reads
n_1, n_1n_2, n_1n_2n_3, ..., n_1...n_m.
For a specific one n_1...n_ with ∈{1,...,m}, it gives
n_1...n_: 𝔻_1×...×𝔻_→ SO(D+1)
(V_1,...,V_)↦ n_1(V_1)n_2(V_1,V_2)...n_(V_1,...,V_),
where
V_1=n_1n_2...n_τ_1 n_^-1...n_2^-1n_1^-1=n_1τ_1 n_1^-1,
V_2=n_1n_2...n_τ_2 n_^-1...n_2^-1n_1^-1=n_1n_2τ_2n_2^-1n_1^-1,
...,
V_=n_1n_2...n_τ_ n_^-1...n_2^-1n_1^-1.
Here one should notice that the decomposition n=n_1...n_m is not unique. For instance, one can carry out the transformation
n_→ n_ g, n_+1→ g^-1n_+1
with g∈ SO(D+1) being an arbitrary element which preserves (τ_1,...,τ_), and it is easy to verify that the transformation (<ref>) preserves the Hopf section n but changes n_ and n_+1 in the decomposition n=n_1...n_m. We can also establish the geometric actions on the Hopf section n_1. Specifically, one can write
(ℒ_X̂n_1)n_1^-1=-X+V_1L̅^1_X (V_1)+∑_μV̅^μ_1 L̅^μ_X(V_1)
based on Eqs.(<ref>), (<ref>) and V_1=n_1τ_1n_1^-1, where V̅^μ_1=n_1τ̅^μ n_1^-1 with {τ̅^μ} being a basis of so(D-1)_τ_1, L̅^1_X (V_1)=L̅^1_IJ(V_1)X^IJ and L̅^μ_X(V_1)=L̅^μ_IJ(V_1) X^IJ are functions of V_1∈𝔻_1 <cit.>. It has been shown that
L̅^1_IJ(V_1) is the solution of the equations <cit.>
Tr(L̅^1 dn_1 n_1^-1)=0, Tr(L̅^1V_1)=1, and Tr(L̅^1 V̅^μ_1)=0, ∀μ.
By comparing Eq.(<ref>) and Eq.(<ref>), it is easy to see that L^1=L̅^1 is a solution of L^1 in Eq.(<ref>). This result will be a key ingredient in discussions in the next section.
Now, by applying the results of this section to the presymplectic form Ω_B, we will identify the Hamiltonian fields in B and compute the Poisson brackets.
§.§ Computation of Hamiltonian vector fields in pre-symplectic manifold B
Let us recall the pre-symplectic potential Θ_B≡1/2∑_=1^mη_Tr(V_ dnn^-1)+ ∑_=1^mη_dξ^- 1/2∑_=1^mη_Tr(Ṽ_ dññ^-1) induced from the SO(D+1) holonomy-flux phase space, which defines the pre-sympletic form Ω_B as
Ω_B=-dΘ_B = 1/2∑_=1^mη_Tr(V_ dnn^-1∧ dnn^-1)-1/2∑_=1^mη_Tr(Ṽ_ dññ^-1∧ dññ^-1)
-∑_=1^mdη_∧(dξ_+1/2Tr(V_ dnn^-1)-1/2Tr(Ṽ_ dññ^-1)).
The associated Poisson brackets can be calculated by considering the Hamiltonian vector fields on B. Let us denote the Hamiltonian vector field for the function f as ψ_f , where f∈{η_, ξ_, p_X≡1/2∑_η_ V^_X=1/2∑_η_ V^_IJX^IJ, p̃_X≡1/2∑_η_Ṽ^_X=1/2∑_η_Ṽ^_IJX^IJ}. Then, using i_ψ_fΩ_B=-df, the vector fields could be checked to be given by
ψ_p_X = X̂-∑_L^_X(𝕍)∂_ξ_, ψ_p̃_X = - X̂̃̂-∑_L^_X(𝕍̃)∂_ξ_, ψ_η_= -∂_ξ_.
Here X̂ are the vector fields generating the adjoint action on ℚ_m labelled by 𝕍, associated to the algebra elements X. Similarly, X̂̃̂ are the vector fields generating the adjoint action on ℚ_m labelled by 𝕍̃, associated to the algebra elements X.
Proof. The first equation of (<ref>) can be checked by considering
i_X̂Ω_B=-1/2∑_Tr(d(η_ V_)X)+∑_dη_ L^_X(𝕍).
Notice that we have i_∂_ξ_Ω_B=dη_; the first equation of (<ref>) then follows immediately. The computation for ψ_p̃_X can be carried out similarly, with an opposite sign due to the reversal of the orientation.
□
§.§ Reduction of the pre-symplectic manifold B
Recall that in the η_m=0 region Ω_B is degenerate, as expected due to the degeneracy of the parametrization (<ref>) in the η_m= 0 region.
Let us now address this degeneracy to get a true symplectic manifold. We can reduce the pre-symplectic manifold B with respect to the vector fields Ê in the kernel of Ω_B, i.e. to consider the quotient manifold B̅≡ B/Ker(Ω_B). The result would be a symplectic manifold with non-degenerate 2-form given by the quotient projection of Ω_B.
In obtaining the space B̅, we can introduce the equivalence classes under the equivalence relation p∼ p' whenever p'=e^Êp, with Ê∈Ker(Ω_B) and p, p'∈ B. The operation is thus determined by the vector fields in the kernel of Ω_B. Since it is obvious that the vector fields Ê∈Ker(Ω_B) appear in the region with η_m=0, we look for the vector fields preserving the region while having the interior products with Ω_B proportional to η_. Let us first consider the vector fields
Ê_X≡ψ_p_X-ψ_p̃_Y,
where X∈ so(D+1), Y=-h^-1Xh with h being a group element rotating V^ to Ṽ^=-h^-1V^ h. Indeed, using the fact that V^_X=Ṽ^_Y, the interior product of the field Ê_X with the symplectic 2-form is
i_Ê_XΩ_B=-1/2∑_d(η_ V^_X-η_Ṽ^_Y)-1/2∑_η_Tr(Ṽ^ dY) =-1/2∑_η_Tr([V^,X]dnn^-1).
Now, let us analyze the degeneracy of i_Ê_XΩ_B. Denote by K^ the subspace of B defined by η_=η_+1=...=η_m=0. Consider the so(D+1) valued functions F(V_1,...,V_(-1)) on K^ which satisfy
n_(-1)^-1...n_2^-1n_1^-1F(V_1,...,V_(-1))n_1n_2...n_(-1)∈ so(D+3-2)_[τ_(-1)],
where n_1n_2...n_(-1) determined by (V_1,...,V_(-1)) is from the sequence of the Hopf sections (<ref>), SO(D+1-2)_[τ_] is the maximal subgroup of SO(D+1) which preserves (τ_1,...,τ_) and has the Cartan subalgebra spanned by (τ_(+1),...,τ_m).
Then, we can define the vector fields Ê^_F by
Ê^_F:=Ê_X|_X=F(V_1,...,V_(-1)),
and one can verify i_Ê^_FΩ_B=0 on K^ by using Eq.(<ref>). Thus, noticing the relation K^1⊂ K^2⊂...⊂ K^m, we have
Ker(Ω_B)≡{Ê^_F| ∈{1,...,m}}
on K^m.
Next, to find the equivalence classes generated by the vector fields Ê^_F on K^, we note that the actions of the fields jointly rotate the vectors (V_,..,V_m) and (Ṽ_,...,Ṽ_m); that is, we have Ê^_F (V_')=-[F(V_1,...,V_(-1)),V_'], Ê^_F(Ṽ_')=-h^-1[F(V_1,...,V_(-1)),V_']h. Further, the actions preserve the group element h, since
Ê_X(h)=-Xh-hY=0
which ensures that Ê^_F(h)=0.
Therefore, given p and p' on K^, we have p'∼ p if and only if the two are related by a joint rotation in (V_,..,V_m) and (Ṽ_,...,Ṽ_m) and an h-preserving translation in (ξ_1,...,ξ_m). It is easy to see that the parametrization (<ref>) maps p and p'∼ p to the same image in T^∗_ssSO(D+1), as expected, since the equivalence class generated by the vector fields Ê^_F on K^ also describes the degeneracy of the
parametrization (<ref>). After the quotient with respect to Ê^_F on each K^, we are left with a manifold K̅^ parametrized by only (η_1,...,η_(-1)), (V_1,...,V_m), (Ṽ_1,...,Ṽ_(-1)) and (ξ_1,...,ξ_m). Recalling that B≡ B|_η_m>0∪ K^m and K^1⊂ K^2⊂...⊂ K^m, let us define
K̇^m:=K^m/Ker(Ω_B)
and then the quotient space B̅≡ B|_η_m>0∪K̇^m. Finally, we conclude that the parametrization (<ref>) gives a one-to-one map between B̅ and its image T^∗_ssSO(D+1), and it can be extended to a symplectomorphism, with B̅ being equipped with the symplectic structure Ω_B.
§.§ Poisson algebra among the twisted geometry parameters
Based on the Hamiltonian vector fields given by the pre-symplectic potential Θ_B, the Poisson brackets between the twisted geometry parameters can be given by
{ξ_,η_}=δ_,,
{p_X, p_Y}=p_[X,Y], {p̃_X, p̃_Y}=p̃_[X,Y]
{V^,η_}= {Ṽ^,η_}=0,
and
{V^,Ṽ^}=0.
Moreover, one can show that the Poisson brackets given by Θ_B between ξ_ and p_X, or the ones between ξ_ and p̃_X are non-trivial, and they are given by the function L^: ℚ_m→ so(D+1) in the form
{ξ_,p_X}= L^_X(𝕍), {ξ_,p̃_X}= L^_X(𝕍̃),
where L^_X≡Tr(L^ X) is the component of L^ along the algebra element X.
In particular, Eqs. (<ref>), taken as the defining equations of the functions L^, together with the Poisson brackets (<ref>), already determine L^ to be exactly the result of the brackets {ξ_,p_X} and {ξ_,p̃_X} given by the potential Θ_B corresponding to our choice of the Hopf sections. This can be shown from the fact that the function L^ defined by Eqs.(<ref>) is constrained by two conditions given by the above Poisson brackets (<ref>), and these two conditions are exactly the defining properties of L^ in the Lemma in section <ref>. Let us illustrate the details of this fact as follows. The first of the two conditions comes from the equation
p_IJL_^IJ=p_IJ{ξ_,p^IJ}=1/2{ξ_,p^IJp_IJ}= 1/4{ξ_,∑_η^2_} =1/2η_,
with p_IJ:=1/2∑_(η_ V^_IJ),
which gives the normalization condition L_^IJV^_IJ=δ_^ in Lemma in section <ref>. The second one of the two conditions just comes from the Jacobi identity
{ξ_,{p_X,p_Y}}+{p_X,{p_Y,ξ_}}+{p_Y,{ξ_,p_X}}=0,
from which we get
L^_[X,Y]-{p_X,L_Y^}+{p_Y,L_X^}=0,
By using
{p_X,L_Y^}=i_ψ_p_XdL_Y^=ℒ_X̂L_Y^,
one can write the identity (<ref>) as an identity involving
Lie derivatives and we get
ℒ_X̂L^_Y-ℒ_ŶL^_X=L^_[X,Y],
which is just the coherence identity in Lemma in section <ref>.
Now, it is easy to see that these two conditions make the Lemma in section <ref> applicable, and we can verify the result stated at the beginning of this paragraph.
§ RELATION WITH THE TWISTED GEOMETRY PARAMETRIZATIONS ON EDGE SIMPLICITY CONSTRAINT SURFACE
The twisted geometry parametrization introduced in this article is constructed on the space ×_e∈γT^∗_ssSO(D+1)_e, while we have introduced the twisted geometry parametrization of the edge simplicity constraint surface ×_e∈γT^∗_esSO(D+1)_e in our companion paper <cit.>. Thus, it is worthwhile to discuss the relation between these two types of parametrizations.
We also focus on the twisted geometry parametrizations of the space T^∗_ssSO(D+1) on a single edge without loss of generality. Then, by setting η_2=...=η_m=0 in Eq.(<ref>), we get
X=1/2η_1nτ_1n^-1
which parametrizes all of the simple fluxes satisfying X^[IJX^KL]=0 in so(D+1). Besides, recalling the decomposition n=n_1...n_m of the Hopf section n, we get
X = 1/2η_1n_1τ_1n_1^-1
h = n_1e^ξ^1τ_1n̅ñ_1^-1
with n̅=n_2...n_me^ξ^2τ_2...e^ξ^mτ_m(ñ_2...ñ_m)^-1. Recall the edge simplicity constraint surface T_es^∗ SO(D+1) defined by
T_es^∗ SO(D+1)={(h,X)∈ T^∗ SO(D+1)|X^[IJX^KL]=0},
it is easy to see that T_es^∗ SO(D+1)⊂ T_ss^∗ SO(D+1) is parametrized by (η_1,ξ_1, V_1, Ṽ_1, n̅) based on Eq.(<ref>), where V_1=n_1τ_1n_1^-1, Ṽ_1=ñ_1τ_1ñ_1^-1 with the Hopf sections n_1 and ñ_1 being given by the decompositions
n=n_1...n_m and ñ=ñ_1...ñ_m respectively.
Thus, by restricting the consideration to the edge simplicity constraint surface, the parametrization (<ref>) reproduces the twisted geometry parametrization introduced in our companion paper <cit.>.
We can further consider the symplectic reduction with respect to the edge simplicity constraint, which can be expressed as 𝒮_IJKL≡ p_[IJp_KL]=0 with p_IJ:=1/2∑_η_ V^_IJ in twisted geometry parameters. Notice that the Hamiltonian vector field of edge simplicity constraint is spanned by
ψ^𝒮_IJKL=2p_[IJ(X̂_KL]-∑_L^_KL]∂ _ξ_),
where X̂_KL is the vector field generating the adjoint action of X_KL on ℚ_m labelled by 𝕍, with X_KL is the so(D+1) algebra element given by
X_KL≡ X^IJ_KL=δ^I_[Kδ^J_L]. It is easy to verify that the vector field (<ref>) only induces the transformation of holonomy on the edge simplicity constraint surface, which reads
ℒ_α^IJKLψ^𝒮_IJKLh= 1/2η_1 α^IJKLV^1_[IJτ_KL]h= 1/2η_1 α̅^KLn_1(τ̅_KLn̅)e^ξ^1τ_1n_1^-1,
where α^IJKL is an arbitrary tensor satisfying α^IJKL=α^[IJKL] and α̅^KLτ̅_KL≡α^IJKLV^1_[IJ(n^-1_1τ_KL]n_1)∈ so(D-1)_τ_1. Thus, the component n̅ is just the gauge component with respect to edge simplicity constraint. By reducing the edge simplicity constraint surface with respect to the gauge orbit generated by ψ^𝒮_IJKL, we get the simplicity reduced phase space B_es given by
B_es≡ℝ_+× S^1×𝔻_1×𝔻̃_1 ≡{(η_1,ξ_1,V_1, Ṽ_1)},
where
η_1∈ [0,+∞), ξ_1∈[-π,π), V_1∈𝔻_1, Ṽ_1∈𝔻̃_1 with 𝔻_1 and 𝔻̃_1 are defined by Eq.(<ref>).
Correspondingly, the reduced symplectic structure on B_es
gives the Poisson brackets
{p̅_X, p̅_Y}= p̅_[X,Y], {p̃̅̃_X, p̃̅̃_Y}=p̃̅̃_[X,Y], {ξ_1,η_1}=1,
where p̅_X≡1/2η_1V^1_X=1/2η_1 V^1_IJX^IJ and p̃̅̃_X≡1/2η_1Ṽ^1_X=1/2η_1Ṽ^1_IJX^IJ. Specifically, the Poisson bracket between ξ_1 and (p̅_X, p̃̅̃_X) are given by
{ξ_1, p̅_X}=L^1_X(𝕍), {ξ_1, p̃̅̃_X}=L^1_X(𝕍̃).
Notice that these Poisson brackets are not independent of (V_2,..V_m) and (Ṽ_2,...,Ṽ_m), since ξ_1 contains the information about the choices of the Hopf sections n and ñ, which depend on 𝕍 and 𝕍̃. Recalling the result of section <ref>, by using the decompositions n=n_1...n_m and ñ=ñ_1...ñ_m, one can choose the Hopf sections n and ñ to ensure that
L^1(𝕍)=L̅^1(V_1), and L^1(𝕍̃)=L̅^1(Ṽ_1).
Then, the symplectic structure on the reduced phase space B_es is given by Eqs.(<ref>), (<ref>) and (<ref>), which is identical to that given in our companion paper <cit.>. Further, the gauge reduction with respect to the Gaussian constraint and the treatment of the vertex simplicity constraint can be carried out following the same procedures as in <cit.>.
§ CONCLUSION AND OUTLOOK
The realization of gauge fixing in the quantum gauge reduction and the fermion coupling in all dimensional LQG require us to construct coherent states in the full Hilbert space involving the non-simple representations of SO(D+1). Following previous experience, it is reasonable to consider the generalized twisted geometry coherent state, and thus it is necessary to establish the twisted geometry parametrization of the full SO(D+1) holonomy-flux phase space.
We established the generalized twisted geometry parametrization for a dense subspace of the full SO(D+1) holonomy-flux phase space. In particular, the twisted geometry parameters are adapted to the splitting of the Ashtekar connection to capture the degrees of freedom of the intrinsic and extrinsic part of the spatial geometry respectively. Moreover, the symplectic structure on the SO(D+1) holonomy-flux phase space is re-expressed based on the twisted geometry parameters.
Through studying the properties of the Hopf sections in the SO(D+1) Hopf fibre bundle, we obtained the Poisson algebra among the twisted geometry parameters. In particular, the relation between the twisted geometry parametrizations for
the edge simplicity constraint surface and for the dense subspace ×_e∈γT^∗_ss SO(D+1)_e is discussed. We pointed out that the twisted geometry parametrization for ×_e∈γT^∗_ss SO(D+1)_e is equivalent to that for the edge simplicity constraint surface after carrying out the gauge reduction with respect to the edge simplicity constraint, which ensures that the treatment of the anomalous vertex simplicity constraint proposed in our companion paper <cit.> is still valid for the more general case considered in this article.
The twisted geometry parametrization for the dense subspace ×_e∈γT^∗_ss SO(D+1)_e provides us with the tools necessary to construct the twisted geometry coherent state in the full Hilbert space of all dimensional LQG. More explicitly, similar to the construction of the twisted geometry coherent state in the solution space of the edge simplicity constraint, one can decompose the heat-kernel coherent state of SO(D+1) based on the twisted geometry parametrization for ×_e∈γT^∗_ss SO(D+1)_e, and then select the terms dominated by the highest and lowest weights in each representation of SO(D+1), to form the twisted geometry coherent state in the full Hilbert space of all dimensional LQG. This will be the subject of a follow-up work <cit.>.
It should be remarked that the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space is also valid for a general SO(D+1) Yang-Mills gauge theory. Though the “geometry” may be meaningless outside the framework of gravity, the twisted geometry parameters provide a new perspective for analyzing the Poisson structure of the SO(D+1) holonomy-flux phase space, which could help us to understand the quantum aspects of the corresponding SO(D+1) Yang-Mills gauge theory.
§ ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China (NSFC) with Grants No. 12047519, No. 11775082, No. 11875006 and No. 11961131013.
|
http://arxiv.org/abs/2307.04852v1 | 20230710184746 | $\texttt{AlgRel.wl}$: Algebraic Relations for the Product of Propagators in Feynman integrals | [
"B. Ananthanarayan",
"Souvik Bera",
"Tanay Pathak"
] | hep-ph | [
"hep-ph",
"hep-th",
"math-ph",
"math.MP"
] |
$\texttt{AlgRel.wl}$: Algebraic Relations for the Product of Propagators in Feynman integrals
August 12, 2023
============================================================
§ INTRODUCTION
In this work, we consider the formalism first proposed by Tarasov to derive algebraic relations for the product of propagators for functional reduction <cit.>. We systematically develop an algorithm inspired by the original work and
present a realization of it in Mathematica, which is provided to the user as the package $\texttt{AlgRel.wl}$.
We have used the package to simplify and analyze many important and interesting Feynman Integrals that are
amenable to treatment using this formalism.
Feynman integrals play an important role in precision calculations in quantum field theory. There are various methods to evaluate them <cit.>. Even with all these methods, it is at times still challenging to compute Feynman integrals. More often, other techniques are used to facilitate this computation. In <cit.> the method of functional reduction was introduced to derive functional relations between Feynman integrals. These relations reduce the original integral into a sum of integrals which are easier to evaluate. The focus of the present work is this new way to obtain functional relations by deriving the algebraic relations for the product of propagators. This method in turn then leaves some undetermined free parameters which can be chosen at will. Appropriate choices of these parameters result in various functional equations for Feynman integrals<cit.>.
The method can be applied to any one-loop diagram, as already pointed out in detail by Tarasov. Despite this, no working code has been provided in the past. In the present work, we provide an automated package to derive the algebraic relation for the product of propagators. Our code fills this gap and opens the possibility of widespread use of the formalism. Since our goal is an efficient algorithmic implementation to find the algebraic relation, we introduce a recursive formulation of the method. The free parameters in the resulting relation can then be chosen in an appropriate manner to derive the functional equations for the Feynman integrals. More specifically, for presentation purposes, we will focus on the cases when all these free parameters are zero and the original Feynman integral with many massive propagators can be written as a sum of integrals with fewer massive propagators, which was also pointed out in <cit.>[We also briefly discuss a case when we choose a non-zero parameter in Appendix <ref>]. For the one-loop integrals, this procedure can be used to reduce the original integral to a sum of integrals with at most one massive propagator. We apply the method for up to 6-point, one-loop integrals and show that the N-point one-loop integral with all massive propagators and general external momenta can be written as a sum of 2^{N-1} integrals with just one massive propagator. Though the method is not readily generalizable to higher loops, we nevertheless extend its use to cover certain cases of 2- and even 3-loop Feynman integrals. In a similar manner, this approach is also applicable to higher loops. Our findings show that we require at least 4 propagators in order for the formalism to be viable and to be of utility as far as the simplification is concerned. We explain this feature in some detail.
We note, however, that such a functional reduction is only one of the many possibilities obtained by choosing the free parameters of the algebraic relation <cit.>.
In view of the proposed method of functional reduction of Feynman integrals, the package has been built in such a way that the final result still has arbitrary parameters which can be chosen suitably for the functional reduction procedure. Using a few of the analytical results available for the one-loop integrals, we explicitly show how the complexity in the evaluation of these integrals can be reduced. Since the Feynman integrals can be written in terms of hypergeometric functions <cit.> this reduction in complexity gives rise to reduction formulae for the hypergeometric functions. Thus it can be used to establish new relations between multi-variable hypergeometric functions. We discovered many new reduction formulae for such hypergeometric functions, which, to the
best of our knowledge, have not appeared anywhere in the literature. We also discuss in detail how further reduction formulae can be obtained from already available results for the one-loop cases. Such relations between hypergeometric functions were also obtained in <cit.>, where explicit relations between hypergeometric functions were derived via the evaluation of Feynman integrals. In order to make the results accessible, we provide several examples in a single notebook that allows
the reader to appreciate the power of the formalism, based on the code that is provided along with it.
The article is organized as follows. In section <ref> we discuss the method in detail with the one-loop bubble integral as an example and explicitly present how the reduction in complexity is achieved for this integral. In section <ref>, we present the algorithm of the package and discuss its usage in detail. In section <ref>, various results obtained for one-, two-, and three-loop integrals are presented. In section <ref>, we discuss the various analytic results in terms of multi-variable hypergeometric functions already derived for the one-loop N-point integrals <cit.> and show how the present work helps in deriving reduction formulae for the multi-variable hypergeometric functions using them. Finally, we conclude the paper with a summary and discussion in section <ref>. In Appendix <ref>, we provide a list of various reduction formulae that we derive, along with some details on how to further extend the list given there.
The package, along with a notebook that contains all the examples discussed in the paper, can be found in the GitHub repository https://github.com/TanayPathak-17/Algebraic-relation-for-the-product-of-propagators.
§ THE METHOD
We now explain the method to find the algebraic relation of the product of propagators with the help of the one-loop bubble integral.
Consider the one-loop bubble integral corresponding to the bubble diagram in Fig.<ref>,
I_2(p^2,m_1,m_2)= ∫d^4k/((k^2-m_1^2)((k-p)^2-m_2^2))
To find the algebraic relation for the product of propagators, we instead consider a more general propagator, depending on only one loop momentum, of the following form
d_i= (k+q_i)^2-m_i^2
where k is the loop momentum, the q_i's depend on the external momenta and can also be zero, and m_i is the mass of the propagator.
With the general propagators, we now have
I_2((q_1-q_2)^2,m_1,m_2) = ∫d^4k/d_1d_2
where substituting q_1=0 and q_2=-p we recover Eq.(<ref>).
We seek an algebraic relation for the integrand by introducing a new denominator D_1, along with coefficients x_1 and x_2, of the following form
1/d_1d_2 = x_1/D_1 d_2 + x_2/D_1 d_1
where D_i= (k+P_i)^2-M_i^2 is defined similarly to Eq.(<ref>).
The unknowns that are introduced can be fixed using the above equation, while the remaining parameters are arbitrary and can be chosen at will, in such a way that the resulting relation gives rise to integrals that are easier to compute.
Using Eq.(<ref>) we get
D_1= x_1 d_1 +x_2 d_2
Comparing the coefficients of k^2, k and the remaining k-independent term, we get
x_1+x_2 =1
x_1 q_1 + x_2 q_2 = P_1
-M_1^2 + P_1^2 - (-m_1^2 + q_1^2) x_1 - (-m_2^2 + q_2^2) x_2 =0
Solving for x_1, x_2 and P_1 we get
x_1 = √((m_1^2-m_2^2+(q_1-q_2)^2)^2-4 (m_1^2-m_2^2) (q_1-q_2)^2)+m_1^2-m_2^2+q_1^2+q_2^2-2 q_1 q_2/2 (q_1-q_2)^2
x_2 = -√((m_1^2-m_2^2+(q_1-q_2)^2)^2-4 (m_1^2-m_2^2) (q_1-q_2)^2)-m_1^2+m_2^2+q_1^2+q_2^2-2 q_1 q_2/2 (q_1-q_2)^2
P_1 = -√((m_1^2-m_2^2+(q_1-q_2)^2)^2-4 (m_1^2-m_2^2) (q_1-q_2)^2)+m_1^2-m_2^2-q_1^2+q_2^2/2 (q_1-q_2)^2(q_1-q_2)
In the above equation, M_1 is still an arbitrary variable that can be chosen at will. Choosing various values of M_1 will result in different functional equations <cit.> for the bubble integral. For the present work, we focus on one of the simplest choices, namely M_1=0. Integrating Eq.(<ref>) and substituting q_1=0 and q_2= -p, we have
I_2(p^2,m_1,m_2)= x_1 I_2((P_1-p)^2,0,m_2)+ x_2 I_2(P_1^2,m_1,0)
Hence we see that the general two-point bubble integral with non-zero masses can be written in terms of two integrals with just one mass. Diagrammatically, Eq.(<ref>) can be represented as in Fig.<ref>.
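For concreteness, the construction above can be spot-checked symbolically. The following minimal SymPy sketch (an illustration only, not part of the accompanying package, using one-dimensional toy momenta) solves the coefficient-matching conditions for x_1, x_2 and P_1 with M_1 left free, and then verifies the resulting algebraic relation numerically at a random point:

```python
import sympy as sp

k, q1, q2, m1, m2, P1, M1, x1, x2 = sp.symbols('k q1 q2 m1 m2 P1 M1 x1 x2')

d1 = (k + q1)**2 - m1**2          # toy 1D analogues of the propagators d_1, d_2
d2 = (k + q2)**2 - m2**2
D1 = (k + P1)**2 - M1**2          # auxiliary denominator

# Match the k^2, k and k-independent coefficients of x1*d1 + x2*d2 = D1
eqs = sp.Poly(sp.expand(x1*d1 + x2*d2 - D1), k).all_coeffs()
sol = sp.solve(eqs, [x1, x2, P1], dict=True)[0]   # M1 stays a free parameter

# Numerical spot-check of 1/(d1 d2) = x1/(D1 d2) + x2/(D1 d1)
vals = {q1: sp.Rational(1, 3), q2: -2, m1: sp.Rational(1, 2), m2: 3,
        M1: 0, k: sp.Rational(7, 5)}
lhs = (1/(d1*d2)).subs(vals)
rhs = (x1/(D1*d2) + x2/(D1*d1)).subs(sol).subs(vals)
print(sp.simplify(lhs - rhs))     # -> 0
```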
To see how the complexity of the computation is reduced by Eq.(<ref>), we refer to a few analytic results. The general result for the massive bubble diagram can be written in terms of the Appell F_4 function <cit.>.
I_2(p,m_1,m_2) = (m_2^2)^d/2-2Γ (d/2-1) Γ (2-d/2)/Γ (d/2) F_4(2-d/2,1;d/2,2-d/2;p^2/m_2^2,m_1^2/m_2^2)
+(m_1^2)^d/2-1Γ (1-d/2) /m_2^2 F_4(d/2,1;d/2,d/2;p^2/m_2^2,m_1^2/m_2^2)
where,
F_4(a,b;c,d;x,y)= ∑_m,n=0^∞(a)_m+n(b)_m+n/(c)_m(d)_n m!n! x^m y^n
is the Appell F_4 hypergeometric function with region of convergence (ROC) given by √(|x|)+ √(|y|) < 1.
The analytic expression for I_2(p,m,0) is readily available in <cit.>.
I_2^(d)( p^2; m^2, 0 )=-Γ(1-d/2) m^d-4 _2F_1[[ 1,2-d/2 ;; d/2 ; ]p^2/m^2]
Using the above relation in Eq.(<ref>), we get the following for the right-hand side
-m_1^d-4Γ (1-d/2) (-√((-m_1^2+m_2^2+p^2)^2 -4 m_2^2 p^2)+m_1^2-m_2^2+p^2) /2 p^2
× _2F_1[[ 1,2-d/2 ;; d/2 ; ](p^2+m_1^2-m_2^2-√((p^2-m_1^2+m_2^2)^2-4 p^2 m_2^2))^2/4 p^2 m_1^2]
-m_2^d-4Γ (1-d/2)/2 p^2
(√((-m_1^2+m_2^2+p^2)^2-4 m_2^2 p^2)-m_1^2+m_2^2+p^2) _2F_1[[ 1,2-d/2 ;; d/2 ; ](p^2+m_1^2-m_2^2-√((p^2-m_1^2+m_2^2)^2-4 p^2 m_2^2)/2 p-p)^2/m_2^2]
The above relation can be viewed as a reduction formula without making reference to the underlying Feynman integral, and the result is shown in Eq.(<ref>) and Eq. (<ref>). In a similar manner, the evaluation of other Feynman integrals can be used to obtain relationships between hypergeometric functions <cit.>. Such a reduction of hypergeometric functions with a higher number of variables to those with fewer variables also helps when an analytic continuation has to be performed to reach a certain kinematical region. For the case of Appell F_4 the elaborate analytic continuation has been performed explicitly in <cit.>, or using automated algorithms <cit.> for more general multi-variable hypergeometric functions; this whole process still does not guarantee convergence for all values of the parameter space <cit.>. For the case of _2F_1, in contrast, a complete table of analytic continuations is available <cit.>. The procedure to find the analytic continuations also gets more complicated as the number of variables increases, even with the use of automated packages.
§ PACKAGE : ALGORITHM AND USAGE
§.§ Algorithm
We now present a general algorithm to find the algebraic relation recursively for the case of N denominators.
Consider the general situation with a product of N denominators, 1/(d_1⋯ d_N).
* We first find the algebraic relation by taking d_1 and d_2
1/d_1d_2 = x_1/D_1 d_2 + x_2/D_1 d_1
* We then multiply the above equation by 1/d_3
1/d_1d_2d_3 = x_1/D_1 d_2d_3 + x_2/D_1 d_1d_3
* We then find the algebraic relation for each pair of d_i's again using Eq.(<ref>).
* Then, in the resulting relation, we repeat this process until all the denominators are exhausted.
The final result will be a sum of 2^N-1 terms where N is the total number of denominators we started with.
It is to be noted that the above procedure is a slight modification of the original method <cit.>. In <cit.>, one starts by seeking the following algebraic relation for the product of N propagators
1/d_1⋯ d_N = x_1/D_1d_2⋯ d_N + ⋯ + x_N/d_1⋯ d_N-1D_1
Comparing the coefficients of k^2 and k, and using the constant term, we get an under-determined set of equations; such a system leaves x_3, x_4, ⋯, x_N undetermined. Such a procedure, when used recursively with each term on the RHS of the above equation, finally results in N! terms in total, unlike the 2^N-1 terms of the procedure presented here. Also, the arbitrariness in the choice of the coefficients x_i in the original algorithm is now present in the choice of the parameters M_i.
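As an illustration of the bookkeeping behind this recursion, the following Python snippet (a structural sketch only, not the package itself; the coefficient labels are hypothetical placeholders rather than the actual x_i) enumerates the terms generated by the pairwise reduction and confirms the 2^N-1 counting:

```python
from itertools import count

def reduce_product(massive, aux=None, coeff=()):
    """Recursively apply the pairwise relation 1/(d_i d_j) = x/(D d_j) + x'/(D d_i).

    `massive` is a list of labels such as 'd1'; each returned term is a tuple
    (coefficient labels, auxiliary massless denominators, surviving massive one).
    """
    if aux is None:
        aux = count(1)                      # shared counter for the auxiliary D's
    if len(massive) == 1:
        return [(coeff, (), massive[0])]
    d_i, d_j, rest = massive[0], massive[1], massive[2:]
    D = 'D%d' % next(aux)
    terms = []
    for kept, x in ((d_j, 'x(%s,%s)' % (d_i, d_j)), (d_i, 'x(%s,%s)' % (d_j, d_i))):
        for c, auxs, surviving in reduce_product([kept] + rest, aux, coeff + (x,)):
            terms.append((c, (D,) + auxs, surviving))
    return terms

terms = reduce_product(['d1', 'd2', 'd3', 'd4'])    # one-loop box
print(len(terms))                                   # -> 8 = 2**(4-1)
for c, auxs, surviving in terms[:2]:
    print(' * '.join(c), '|', ' '.join(auxs), '|', surviving)
```

Each final term carries N-1 auxiliary denominators and exactly one of the original massive propagators, in line with the counting above.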
§.§ Usage
The recursive algorithm presented previously has been automated in the accompanying package. Below we demonstrate its usage.
After downloading the package and placing it in the same directory as the notebook, we can call it as follows: Input
SetDirectory[NotebookDirectory[]];
AlgRel.wl;
Input
<<AlgRel.wl
Print
AlgRel.wl v1.0
Authors : B. Ananthanarayan, Souvik Bera, Tanay Pathak
ROC2.wl v1.0
The package assumes the form d_i= (k+p_i)^2-m_i^2 for the propagator, where k, p and m can be changed as per the convenience of the user. The only command of the package is AlgRel, which can be called as follows
Input
AlgRel[Propagator numbers,{k,q,m},{P,M},x,Substitutions]
Output
{{Algebraic relation},{Values}}
The various elements of the input are as follows
* : It is a list of numbers denoting the various propagators. The numbering need not be sequential; this is to ease the use of the package in the case of many propagators (see Section <ref> for an example).
* : It is a list containing three variables corresponding to the general propagator d_i= (k+p_i)^2-m_i^2. The first denotes the loop momentum, the second denotes the combination of external momenta (which can also be zero), and the third denotes the mass of the propagator.
* : It is a list containing two variables. They are used to set the variables for the auxiliary propagator introduced for obtaining the algebraic relation, D_i= (k+P_i)^2-M_i^2. The loop momentum is automatically taken from the previous list.
* : It is used to denote the variable for the coefficients in the algebraic relation, Eq.(<ref>).
* : It is a list of substitutions for the p_i and M_i.
The output of the above command is a nested list with the following two sub-lists
* : It gives the algebraic relation for the product of propagators, Eq.(<ref>).
* : It is a list of the values obtained for P_i and x_i.
Consider the example of the bubble integrand. To obtain the result for it, we can use the following command
Input
AlgRel[{1, 2},{k,q,m},{P, M}, x,{q[1]-> 0,q[2]->-p,M[1]->0}]
Output
{{x[1]/((k+P[1])^2(-m[1]^2+(k)^2))+x[2]/((k+P[1])^2(-m[2]^2+(k-p)^2))},
{x[1]->(p^2+m[1]^2-m[2]^2+Sqrt[(p^2+m[1]^2-m[2]^2)^2-4p^2 m[1]^2])/(2p^2),...}}
Due to its length, the second element of the output (i.e., the substitution list) is not shown fully. It contains the values of the x_i and P_i as given in Eq.(<ref>).
In the next section, we look at a few one-loop and two-loop examples where such a procedure is helpful. Numerical checks at the integrand level have been performed for all the algebraic relations as a check of their correctness. Wherever it was possible to achieve numerical stability, we have also checked these relations by carrying out explicit numerical integration using <cit.>.
§ RESULTS
We will now look at results for one-loop and higher-loop cases obtained with the help of the package. All the results are also presented in the accompanying file.
§.§ One-loop vertex integral
We next consider the reduction of the one-loop vertex integral, which is given by
I_3= ∫d^4k/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+p_1+p_2)^2 -m_3^2)
We proceed as described in the previous section. We use the generalized propagators and perform the substitutions accordingly, so that the result reduces to Eq.(<ref>). This can be done using the following command
Input
AlgRel[{1,2,3},{k,q,m},{P,M},x,{q[1]->0,q[2]->p1,q[3]->p1+p2}]
The result is the following relation, which is a sum of 4 terms
x_1 x_3/(k^2-m_1^2) (k+P_1)^2 (k+P_2)^2 +x_1 x_4/(k+P_1)^2 (k+P_2)^2 ((k+p_1+p_2)^2-m_3^2)
+x_2 x_5/(k+P_1)^2 (k+P_3)^2 ((k+p_1)^2-m_2^2)
+x_2 x_6/(k+P_1)^2 (k+P_3)^2 ((k+p_1+p_2)^2-m_3^2)
where
x_1 = √((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)+m_1^2-m_2^2+p_1^2/2 p_1^2,
x_2 = -√((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)-m_1^2+m_2^2+p_1^2/2 p_1^2,
P_1 = √((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)+m_1^2-m_2^2+p_1^2/2 p_1^2p_1,
x_3 =√((m_1^2-m_3^2+(-p_1-p_2)^2)^2-4 m_1^2 (-p_1-p_2)^2)+m_1^2-m_3^2+(p_1+p_2)^2/2 (-p_1-p_2)^2,
x_4 = -√((m_1^2-m_3^2+(-p_1-p_2)^2)^2-4 m_1^2 (-p_1-p_2)^2)-m_1^2+m_3^2+(p_1+p_2)^2/2 (-p_1-p_2)^2,
P_2 = √((m_1^2-m_3^2+(p_1+p_2)^2)^2-4 m_1^2 (-p_1-p_2)^2)+m_1^2-m_3^2+(p_1+p_2)^2/2 (p_1+p_2)^2 (p_1+p_2) ,
x_5 = √((m_2^2-m_3^2+p_2^2)^2-4 m_2^2 p_2^2)+m_2^2-m_3^2+p_1^2+(p_1+p_2)^2-2 p_1 (p_1+p_2)/2 p_2^2,
x_6 = -√((m_2^2-m_3^2+p_2^2)^2-4 m_2^2 p_2^2)-m_2^2+m_3^2+p_1^2+(p_1+p_2)^2-2 p_1 (p_1+p_2)/2 p_2^2,
P_3 = √((m_2^2-m_3^2+p_2^2)^2-4 m_2^2 p_2^2)+m_2^2-m_3^2-p_1^2+(p_1+p_2)^2/2 p_2^2p_2
Integrating Eq.(<ref>) over the loop momentum k, we get the vertex integral written as a sum of vertex integrals, each with just one massive propagator.
§.§ One loop box integral
We now consider the one-loop box integral, which can be written as
I_4= ∫d^4k/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+ p_1 +p_2)^2 -m_3^2)((k+ p_1 +p_2+p_3)^2 -m_4^2)
We can get the algebraic relation using the following command
Input
AlgRel[{1,2,3,4},{k,q,m},{P,M},x,{q[1]->0,q[2]->p1,q[3]->p1+p2}]
Substituting q_1=0, q_2 = p_1, q_3= p_1+p_2, q_4= p_1+p_2+p_3 and M_i=0, i=1 ⋯ 7, and simplifying, we get
1/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+ p_1 +p_2)^2 -m_3^2)((k+ p_1 +p_2+p_3)^2 -m_4^2) =
x_1 x_3 x_7/(k^2-m_1^2) (k+P_1)^2 (k+P_2)^2 (k+P_4)^2+x_1 x_3 x_8/(k+P_1)^2 (k+P_2)^2 (k+P_4)^2 ((k+p_1+p_2+p_3)^2-m_4^2)
+x_1 x_4 x_9/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 ((k+p_1+p_2)^2-m_3^2)
+x_1 x_4 x_10/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 ((k+p_1+p_2+p_3)^2-m_4^2)
+x_2 x_5 x_11/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 ((k+p_1)^2-m_2^2)
+x_2 x_5 x_12/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 ((k+p_1+p_2+p_3)^2-m_4^2)
+x_2 x_6 x_13/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 ((k+p_1+p_2)^2-m_3^2)+
x_2 x_6 x_14/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 ((k+p_1+p_2+p_3)^2-m_4^2)
where the values of the unknowns can be obtained from the accompanying notebook. Integrating Eq.(<ref>) over the loop momentum k, we get the box integral written as a sum of 8 box integrals, each with just one massive propagator.
§.§ One-loop pentagon integral
The one-loop pentagon integral is given by [For this and the subsequent subsection we will use the shorthand notation p_i_1+p_i_2 + ⋯ = p_i_1 i_2⋯, so as to avoid very lengthy expressions. ]
I_5= ∫d^4k/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+ p_12)^2 -m_3^2)((k+ p_123)^2 -m_4^2)((k+ p_1234)^2 -m_5^2)
We can get the algebraic relation using the following command
Input
AlgRel[{1,2,3,4,5},{k,q,m},{P,M},x,{q[1]->0,q[2]->p1,q[3]->p1+p2
,q[4]->p1+p2+p3,q[5]->p1+p2+p3+p4}]
Doing the substitution as before and simplifying we get
1/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+ p_12)^2 -m_3^2)((k+ p_123)^2 -m_4^2)((k+ p_1234)^2 -m_5^2)
= x_1 x_3 x_7 x_15/(k^2-m_1^2) (k+P_1)^2 (k+P_2)^2 (k+P_4)^2 (k+P_8)^2
+x_1 x_3 x_7 x_16/(k+P_1)^2 (k+P_2)^2 (k+P_4)^2 (k+P_8)^2 ((k+p_1234)^2-m_5^2)
+x_1 x_3 x_8 x_17/(k+P_1)^2 (k+P_2)^2 (k+P_4)^2 (k+P_9)^2 ((k+p_123)^2-m_4^2)
+x_1 x_3 x_8 x_18/(k+P_1)^2 (k+P_2)^2 (k+P_4)^2 (k+P_9)^2 ((k+p_1234)^2-m_5^2)
+x_1 x_4 x_9 x_19/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 (k+P_10)^2 ((k+p_12)^2-m_3^2)
+x_1 x_4 x_9 x_20/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 (k+P_10)^2 ((k+p_1234)^2-m_5^2)
+x_1 x_4 x_10 x_21/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 (k+P_11)^2 ((k+p_123)^2-m_4^2)
+x_1 x_4 x_10 x_22/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 (k+P_11)^2 ((k+p_1234)^2-m_5^2)
+x_2 x_5 x_11 x_23/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 (k+P_12)^2 ((k+p_1)^2-m_2^2)
+x_2 x_5 x_11 x_24/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 (k+P_12)^2 ((k+p_1234)^2-m_5^2)
+x_2 x_5 x_12 x_25/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 (k+P_13)^2 ((k+p_123)^2-m_4^2)
+x_2 x_5 x_12 x_26/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 (k+P_13)^2 ((k+p_1234)^2-m_5^2)
+x_2 x_6 x_13 x_27/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 (k+P_14)^2 ((k+p_12)^2-m_3^2)
+x_2 x_6 x_13 x_28/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 (k+P_14)^2 ((k+p_1234)^2-m_5^2)
+x_2 x_6 x_14 x_29/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 (k+P_15)^2 ((k+p_123)^2-m_4^2)+
x_2 x_6 x_14 x_30/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 (k+P_15)^2 ((k+p_1234)^2-m_5^2)
where the values of x_i and P_i can be obtained from the accompanying notebook. Integrating Eq.(<ref>) over the loop momentum k, we get the pentagon integral written as a sum of 16 pentagon integrals, each with just one massive propagator.
§.§ Six-point integral
The six-point integral, corresponding to Fig.<ref>, is
I_6= ∫d^4k/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+ p_1 +p_2)^2 -m_3^2)((k+ p_123)^2 -m_4^2)
×1/((k+ p_1234)^2 -m_5^2)((k+ p_12345 )^2 -m_6^2)
As in the previous examples we use the following command to obtain the algebraic relations
Input
AlgRel[{1,2,3,4,5,6},{k,q,m},{P,M},x,{q[1]->0,q[2]->p1,q[3]->p1+p2
,q[4]->p1+p2+p3,q[5]->p1+p2+p3+p4,q[6]->p1+p2+p3+p4+p5}]
We omit the result as it is lengthy. The full result can be obtained from the accompanying notebook.
§.§ Two-loop box integral
To find the algebraic relation for the product of propagators of a two-loop integral, we use a loop-by-loop approach. To illustrate the method, let us consider a two-loop example, the two-loop box integral, corresponding to the diagram in Fig.<ref>. The integral is as follows
I_4,2= ∫∫d^4k_1d^4k_2/(k_1^2-m_1^2)((k_1+p_1)^2-m_2^2)(k_2^2-m_3^2)((k_2+p_3)^2-m_4^2)((k_1-k_2+p_1+p_2)^2-m_5^2)
The propagators are numbered such that i represents the propagator d_i.
Firstly, we find the algebraic relation for the product of the propagators numbered 1 and 2, which depend only on the loop momentum k_1. For this we can use the following command
Input
AlgRel[{1,2},{k1,q,m},{P,M},x,{q[1]->0,q[2]->p1}]
Similarly, for propagators numbered 3 and 4 we can use the following command
Input
AlgRel[{3,4},{k2,q,m},{Q,M},y,{q[3]->0,q[4]->p3}]
The final relation that we obtain, with M_i=0, is (see the accompanying notebook)
1/(k_1^2-m_1^2)((k_1+p_1)^2-m_2^2)(k_2^2-m_3^2)((k_2+p_3)^2-m_4^2) = x_1 y_1/(k_1^2-m_1^2) (k_2^2-m_3^2) (k_1+P_1)^2 (k_2+Q_1)^2
+x_2 y_1/(k_2^2-m_3^2) (k_1+P_1)^2 (k_2+Q_1)^2 ((k_1+p_1)^2-m_2^2)+x_1 y_2/(k_1^2-m_1^2) (k_1+P_1)^2 (k_2+Q_1)^2 ((k_2+p_3)^2-m_4^2)
+x_2 y_2/(k_1+P_1)^2 (k_2+Q_1)^2 ((k_1+p_1)^2-m_2^2) ((k_2+p_3)^2-m_4^2)
where
x_1= √((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)+m_1^2-m_2^2+p_1^2/2 p_1^2,
x_2= -√((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)-m_1^2+m_2^2+p_1^2/2 p_1^2
P_1= √((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)+m_1^2-m_2^2+p_1^2/2 p_1^2p_1,
y_1=√((m_3^2-m_4^2+p_3^2)^2-4 m_3^2 p_3^2)+m_3^2-m_4^2+p_3^2/2 p_3^2,
y_2= -√((m_3^2-m_4^2+p_3^2)^2-4 m_3^2 p_3^2)-m_3^2+m_4^2+p_3^2/2 p_3^2,
Q_1= √((m_3^2-m_4^2+p_3^2)^2-4 m_3^2 p_3^2)+m_3^2-m_4^2+p_3^2/2 p_3^2p_3
Multiplying both sides of Eq.(<ref>) by 1/((k_1-k_2+p_1+p_2)^2-m_5^2) will give the required algebraic relation for the two-loop box integral.
§.§ Two-loop double box integral
Next, we consider the two-loop double-box integral corresponding to the diagram Fig.<ref>. The integral is as follows
I_4,2= ∫∫d^4k_1d^4k_2/(k_1^2-m_1^2)(k_2^2-m_2^2)((k_2+p_2)^2-m_3^2)((k_2+p_23)^2-m_4^2)((k_1+ p_23)^2-m_5^2)
×1/((k_1+ p_234)^2-m_6^2)((k_1-k_2)^2-m_7^2)
The propagators are numbered such that i represents the propagator d_i.
To find the algebraic relation for the product of propagators numbered 1,5 and 6 we can use the following command
Input
AlgRel[{1,5,6},{k1,q,m},{P,M},x,{q[1]->0,q[5]->p2+p3,q[6]->p2+p3+p4}]
Substituting value of q_1,q_5 and q_6 corresponding to the Feynman integral we get
x_1 x_4/(k_1+P_1)^2 (k_1+P_2)^2 ((k_1+p_234)^2-m_6^2)+x_2 x_5/(k_1+P_1)^2 (k_1+P_3)^2 ((k_1+p_23)^2-m_5^2)
+x_1 x_3/(k_1^2-m_1^2) (k_1+P_1)^2 (k_1+P_2)^2+x_2 x_6/(k_1+P_1)^2 (k_1+P_3)^2 ((k_1+p_234)^2-m_6^2)
Similarly, for propagators numbered 2,3 and 4, we can use the following command
Input
AlgRel[{2,3,4},{k2,q,m},{Q,M},y,{q[2]->0,q[3]->p2,q[4]->p2+p3}]
which gives the following result after substituting the value of q_2,q_3 and q_4 corresponding to the Feynman integral
y_1 y_4/(k_2+Q_1)^2 (k_2+Q_2)^2 ((k_2+p_23)^2-m_4^2)+y_1 y_3/(k_2^2-m_2^2) (k_2+Q_1)^2 (k_2+Q_2)^2
+y_2 y_5/(k_2+Q_1)^2 (k_2+Q_3)^2 ((k_2+p_2)^2-m_3^2)+y_2 y_6/(k_2+Q_1)^2 (k_2+Q_3)^2 ((k_2+p_23)^2-m_4^2)
All the values of the parameters P_i, Q_i, x_i and y_i can be obtained from the accompanying notebook. To get the algebraic relation for the integrand in Eq.(<ref>), we multiply Eq.(<ref>) and (<ref>) together and then multiply both sides of the resulting equation by 1/((k_1-k_2)^2-m_7^2).
We see that, unlike the one-loop case, we will have 3 massive propagators in each term. In fact, with the present procedure, the algebraic relation for any two-loop integral will have at least 3 massive propagators in each integral. For this reason, the present procedure is not helpful for integrals like the sunset integral, where there are only 3 propagators.
§.§ Three-loop ladder integral
I_4,3 = ∫∫∫d^4k_1 d^4k_2 d^4k_3/(k_1^2-m_1^2)(k_2^2-m_2^2)(k_3^2-m_3^2)((k_3+p_2)^2-m_4^2)((k_3+p_23)^2-m_5^2)((k_2+ p_23)^2-m_6^2)
× 1/((k_1+ p_23)^2-m_7^2)((k_1+ p_234)^2-m_8^2)((k_1-k_2)^2-m_9^2)((k_2-k_3)^2-m_10^2)
We use a similar strategy as before for this case too to obtain the algebraic relation. The result contains 32 terms and is presented in the accompanying file.
§ REDUCTION OF HYPERGEOMETRIC FUNCTIONS
The evaluation of Feynman integrals gives results in terms of hypergeometric functions. The formalism for finding the algebraic relation for the product of propagators of Feynman integrals can therefore be employed to find relations between hypergeometric functions <cit.>.
In this section, we point out some analytic results on the N-point function <cit.> and various hypergeometric relations that can be obtained from them with the present analysis.
It is well known that the general one-loop N-point function with zero external momenta and different masses m_i, with unit powers of propagators, can be expressed in terms of the Lauricella F_D function <cit.>
I^(N)(m_1,…, m_N-1) = π^d / 2 i^1-d(-m_N^2)^d / 2-NΓ(N-d / 2)/Γ(N)
× F_D^(N-1)(N-d/2, 1, …, 1 ; N | 1-m_1^2/m_N^2, …, 1-m_N-1^2/m_N^2)
where F_D^L is the Lauricella function of L-variables given by
F_D^(L) (a, b_1, …, b_L ; c | z_1, …, z_L) = ∑_j_1=0^∞⋯∑_j_L=0^∞(a)_j_1+⋯+j_L(b_1)_j_1⋯(b_L)_j_L/(c)_j_1+⋯+j_L×z_1^j_1⋯ z_L^j_L/j_1 ! ⋯ j_L ! .
and d is the dimension. The general result (i.e., Eq. (<ref>)) is an (N-1)-fold hypergeometric series. If one of the masses m_1, m_2, ⋯, m_N-1 vanishes, then the function F^(N-1)_D reduces to F^(N-2)_D using the following relation
F_D^(L) (a, b_1, …, b_L-1, b_L ; c | z_1, …, z_L-1, 1)
= Γ(c) Γ(c-a-b_L)/Γ(c-a) Γ(c-b_L) F_D^(L-1)(a, b_1, …, b_L-1 ; c-b_L | z_1, …, z_L-1)
Using the method presented, N-1 masses vanish; hence the result reduces to a 0-fold series and is just a factor depending on the m_i. To evaluate the result (<ref>) outside its associated ROC, one has to explicitly perform an analytic continuation, which is at times difficult to obtain for multi-variable hypergeometric functions. Thus a reduction of the result to just a mass-dependent constant is helpful when the analytic continuation of Eq.(<ref>) is required. Such a result should also be viewed as a reduction formula for the Lauricella F_D^L, obtained here from a physical problem <cit.>, which might otherwise be hard to obtain.
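As a minimal numerical illustration of this reduction chain (a side check only, not part of the package), the L=1 instance of the relation above, with the natural convention F_D^(0) ≡ 1, is just the Gauss summation theorem for _2F_1 at unit argument, which can be verified with Python's mpmath:

```python
from mpmath import hyp2f1, gamma

a, b, c = 0.3, 0.2, 1.7        # arbitrary values with Re(c - a - b) > 0
print(hyp2f1(a, b, c, 1))
print(gamma(c)*gamma(c - a - b)/(gamma(c - a)*gamma(c - b)))   # Gauss: same value
```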
For a general one-loop N-point function with non-zero external momenta, the result can be written as a generalized Lauricella hypergeometric function with (N-1)(N+2)/2 variables <cit.>. For the case of the general vertex integral, this results in a generalized Lauricella function of 5 variables <cit.>. On the other hand, using Eq.(<ref>) the result can be written in terms of a hypergeometric function of 3 variables <cit.>.
Comparing Eq. (<ref>) and (<ref>), we see that the evaluation of the bubble integral has been reduced from the evaluation of the Appell F_4, which has two variables, to that of the hypergeometric _2F_1 of one variable. Such a result can be viewed as a general reduction formula without any explicit relation to the Feynman integrals it has been obtained from. Substituting a=d/2, p^2/m_2^2=x and m_1^2/m_2^2=y, we get the following relation
y^a-1 F_4(a,1,a,a,x,y)-F_4(2-a,1,a,2-a,x,y) =1/2 x((-x+y-1)-√(-2 (x+1) y+(x-1)^2+y^2))
× _2F_1[[ 1,2-a ;; a ; ]((x-y+1)+√((x-1)^2+y^2-2 (x+1) y))^2/4 x]+((1-x-y)
+√(-2 (x+1) y+(x-1)^2+y^2))× y^a-2 _2F_1[[ 1,2-a ;; a ; ](√((x-1)^2+y^2-2 (x+1) y)-(x+y-1))^2/4 x y])
Here a can take any value except negative integers and positive integers greater than 2.
We can further simplify the above relation by using the following relation of F_4 <cit.>
F_4(α, β ; β, β ;-x/(1-x)(1-y),-y/(1-x)(1-y)) =(1-x)^α(1-y)^α_2 F_1(α, α-β+1 ; β ; x y),
For our case α=1, β=a. Thus we get
F_4(2-a,1,a,2-a,x,y) =1/2 x((-x+y-1)-√(-2 (x+1) y+(x-1)^2+y^2))
× _2F_1[[ 1,2-a ;; a ; ]((x-y+1)+√((x-1)^2+y^2-2 (x+1) y))^2/4 x]+((1-x-y)
+√(-2 (x+1) y+(x-1)^2+y^2))× y^a-2 _2F_1[[ 1,2-a ;; a ; ](√((x-1)^2+y^2-2 (x+1) y)-(x+y-1))^2/4 x y])
+ y^a-1(1-x-y-√(x^2-2 x (y+1)+(y-1)^2)/2 x y) _2F_1[[ 1,2-a ;; a ; ](√((x+y-1)^2-4 x y)+x+y-1)^2/4 x y]
As a consequence of this we get F_4(1,1;1,1;x,y)
F_4(1,1;1,1;x,y)= 1/√((x+y-1)^2-4 x y)
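Since the parameters here are purely numeric, this particular identity is easy to spot-check by truncating the defining double series of F_4 inside its ROC. The short Python check below (an illustration only) uses the fact that for unit parameters the series coefficient collapses to a squared binomial coefficient:

```python
from math import comb, sqrt

def f4_1111(x, y, order=80):
    # truncated series: (1)_{m+n}^2 / ((1)_m (1)_n m! n!) = binomial(m+n, m)^2
    return sum(comb(m + n, m)**2 * x**m * y**n
               for m in range(order) for n in range(order - m))

x, y = 0.05, 0.08                        # sqrt(x) + sqrt(y) < 1, inside the ROC
print(f4_1111(x, y))
print(1.0/sqrt((x + y - 1)**2 - 4*x*y))  # closed form above: same value
```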
We can also consider the bubble integral with general masses and unit powers of the propagators, for which the result is given as follows <cit.>
I_2= (m_2^2)^a-2Γ(2-a) ×∑_j=0^∞∑_l=0^∞1/j ! l !(x)^j(1-y)^l ×(2-a)_j+l(1)_j+l(1)_j/(2)_2 j+l
With the help of the reduction procedure, the result for the bubble integral is given by Eq.(<ref>). The equality of Eq.(<ref>) and (<ref>) thus provides a reduction formula for the hypergeometric series in Eq.(<ref>), which can be written as follows
∑_j=0^∞∑_l=0^∞(2-a)_j+l(1)_j+l(1)_j/(2)_2 j+l(x)^j/j!(1-y)^l/l! =1/2 x((-x+y-1)-√(-2 (x+1) y+(x-1)^2+y^2))
× _2F_1[[ 1,2-a ;; a ; ]((x-y+1)+√((x-1)^2+y^2-2 (x+1) y))^2/4 x]+((1-x-y)+
+√(-2 (x+1) y+(x-1)^2+y^2))× y^a-2 _2F_1[[ 1,2-a ;; a ; ](√((x-1)^2+y^2-2 (x+1) y)-(x+y-1))^2/4 x y])
We can also obtain new hypergeometric relations by deriving other functional equations for the Feynman integrals (see Appendix <ref>). Using Eq.(<ref>), (<ref>) and the results of Appendix <ref>, we get
y^a-1 F_4(a,1;a,a;x,y)-F_4(2-a,1;a,2-a;x,y) =
(a-1)/2 x( y^a-2 (x+y-1) _2 F_1[[ 1,2-a; 3/2 ; ](x+y-1)^2/4 x y ;
] +
(x-y+1) _2 F_1[[ 1,2-a; 3/2 ; ](x-y+1)^2/4 x ;
]
We further obtain
F_4(2-a,1;a,2-a;x,y) =
(a-1)/2 x( y^a-2 (x+y-1) _2 F_1[[ 1,2-a; 3/2 ; ](x+y-1)^2/4 x y ;
] +
(x-y+1) _2 F_1[[ 1,2-a; 3/2 ; ](x-y+1)^2/4 x ;
]+
y^a-1 (1-x-y-√(x^2-2 x (y+1)+(y-1)^2)/2 x y) _2F_1[[ 1,2-a ;; a ; ](√((x+y-1)^2-4 x y)+x+y-1)^2/4 x y]
We provide a list of various reduction formulae that can be derived using Eq.(<ref>) and (<ref>) in the appendix <ref>.
The right-hand sides of Eq.(<ref>) and (<ref>) can further be equated to give a relation between sums of hypergeometric _2F_1 functions. An interesting consequence of this relation is obtained with a= 3/2
-tanh ^-1(x-y+1/2 √(x))+ ^-1(2 √(x)√(y)/x+y-1)/2 = ^-1(2 √(x)√(y)/√(x^2-2 x (y+1)+(y-1)^2)-x-y+1)-
tanh ^-1(√(-2 (x+1) y+(x-1)^2+y^2)+x-y+1/2 √(x))
As before, such a reduction also helps if an analytic continuation has to be performed to reach a certain kinematical region. We can find the analytic continuations for the series in Eq.(<ref>) using automated tools <cit.>, but this still does not guarantee that the whole parameter space has been covered. In contrast, the complete list of analytic continuations for the hypergeometric _2F_1 <cit.> is available and well implemented in existing software. The complexity of the analytic continuation procedure also increases with the number of variables of the hypergeometric function, due to the increasing difficulty of finding the ROC of the resulting series.
We notice that the procedure is sufficiently general and one can obtain a large number of reduction formulae using it by doing the following steps
* We take Eq.(<ref>) or any other general result for the N-point integral from <cit.>.
* For an N-point function we have a product of N propagators. We take any two propagators and find the algebraic relation. This results in a sum of 2 terms, for which the number of variables in the result, as in Eq.(<ref>), is reduced by one. This gives a reduction formula between an L-variable hypergeometric function (where L is a function of N) and an (L-1)-variable hypergeometric function.
* We apply the previous step again, thus resulting in a relation between the L-variable hypergeometric function and an (L-2)-variable hypergeometric function. Combined with the previous step, this also gives a relation between the (L-1)-variable and (L-2)-variable hypergeometric functions.
* We apply the procedure recursively until we have an algebraic relation for the product of N massive propagators as a sum of 2^N-1 terms, such that each term contains a product of N propagators with just one massive propagator.
* The final result of the procedure is a collection of relations between the L-, (L-1)-, …, 1-variable hypergeometric functions.
§ SUMMARY AND DISCUSSION
We have presented an automated package for finding the algebraic relation for the product of propagators. These relations were used by Tarasov <cit.> to derive functional relations for Feynman integrals. The results obtained using the package are sufficiently general and can be used further to obtain functional relations for the Feynman integrals by appropriately choosing the arbitrary parameters. In the present work we focused on automating the method to derive the algebraic relation for the propagators by suitably implementing a recursive algorithm (a slight modification of Tarasov's algorithm <cit.>). Furthermore, using a loop-by-loop approach, we provided a systematic way to use these relations for higher-loop integrals too. These relations come with free parameters, which can be chosen suitably. Using various examples up to three loops, we showed how, with a simple choice of these free parameters, we can reduce integrals with large numbers of massive propagators to integrals with fewer massive propagators <cit.>, which can thus be computed more easily. For the one-loop case, we obtained results up to the 6-point integral and wrote them as a sum of 2^N-1 (for the N-point integral) integrals with one massive propagator. We also showed how the procedure can be used for higher-loop integrals, where a loop-by-loop strategy has been applied for finding the relations.
Since the general results for the one-loop N-point integrals are explicitly known for various cases in terms of multi-variable hypergeometric functions, we showed how the present work can be used to obtain a large list of reduction formulae for these functions. As a demonstrative example, we used the one-loop bubble integral, for which the reduction of the Appell F_4 to the hypergeometric _2F_1 can be obtained. We also derived another reduction formula for a 2-variable hypergeometric series, Eq.(<ref>), in terms of the hypergeometric _2F_1. The relations thus obtained can be treated as general reduction formulae for these functions, without reference to the Feynman integral they were derived from. These relations hence provide a way to derive non-trivial reduction formulae for multi-variable hypergeometric functions using physical problems. They are also helpful, especially in situations where the analytic continuation of multi-variable hypergeometric functions has to be obtained to evaluate them outside their ROC, which is not easy to derive otherwise.
The present procedure for finding the algebraic relation for the product of propagators can be used only if the propagators depend on just one loop momentum. For this reason, the procedure cannot be applied in full generality to multi-loop integrals, and a loop-by-loop approach has to be adopted. Hence the procedure is not helpful for integrals such as the sunset integral, or in cases where for each loop momentum k_i there is just one propagator. To apply such a procedure to sunset-like integrals, a generalization to the multi-variable case, where the propagators can depend on more than one loop momentum, has to be developed.
As we have seen, the algebraic relations obtained reduce the complexity of the Feynman integral. Specifically, for the simple case of the one-loop bubble (in Section <ref>), we saw that the result for the general bubble integral, which is expressed in terms of the two-variable hypergeometric Appell F_4 function, is reduced to the single-variable hypergeometric _2F_1. It would be worth studying such reductions in complexity for other non-trivial cases of Feynman integrals that result in multi-variable hypergeometric functions of even more variables. Since obtaining analytic expressions might not be feasible for such cases, a detailed numerical study would be an important application of these algebraic relations, once the proper functional relations have been obtained by a suitable choice of the arbitrary parameters.
§ FUNCTIONAL REDUCTION WITH M_I≠ 0
In this appendix, we point out other possibilities for the choice of the arbitrary parameters M_i <cit.>. These choices lead to functional reduction equations different from those already presented. They also give rise to different reduction formulae, as in Section <ref>.
Consider the bubble integral of Section <ref>. This time we choose a non-zero value of M_1. Since Feynman integrals are relatively easier to compute with equal masses, a suitable choice is M_1= m_1. With this choice we get, similarly to Eq.(<ref>), the following relation
I_2(p^2,m_1,m_2)= x_1 I_2((P_1-p)^2,m_1,m_2)+ x_2 I_2(P_1^2,m_1,m_1)
with
x_1= m_1^2-m_2^2+p^2/p^2, x_2= m_2^2-m_1^2/p^2, P_1= -m_1^2-m_2^2+p^2/p^2 p
We see that on the right-hand side of Eq.(<ref>) we have a partial simplification. We can then exploit the symmetry of the I_2 integral under the exchange m_1↔ m_2. We perform the exchange m_1↔ m_2 in Eq.(<ref>) and add the resulting equation to it. Simplifying, we get
I_2(p^2,m_1,m_2)= p^2+m_1^2-m_2^2/2p^2 I_2((p^2+m_1^2-m_2^2/p)^2,m_1,m_1) +
p^2-m_1^2+m_2^2/2p^2 I_2(( p^2-m_1^2+m_2^2/p)^2,m_2,m_2)
The value of I_2(p^2,m,m) is <cit.>
I_2(p^2,m,m)= m^d-4Γ(2-d/2)_2 F_1[[ 1,2-d/2; 3/2 ; ]p^2/4 m^2 ;
]
Substituting this in Eq.(<ref>) we get another functional equation for the bubble Feynman integral.
§ REDUCTION FORMULAE
F_4(1,1;1,1;x,y)=1/√((x+y-1)^2-4 x y)
F_4(3/2,1;1/2,3/2;x,y)= x-y+1/x^2-2 x (y+1)+(y-1)^2
F_4(5/2,1;-1/2,5/2;x,y)= (x-y+1) (x^2-2 x (y+5)+(y-1)^2)/(x^2-2 x (y+1)+(y-1)^2)^2
F_4(1/2,1;3/2,1/2;x,y)= -1/√(x)(-tanh ^-1(√(x^2-2 x (y+1)+(y-1)^2)+x-y+1/2 √(x))+
^-1(2 √(x)√(y)/√(x^2-2 x (y+1)+(y-1)^2)-x-y+1)+ ^-1(2 √(x)√(y)/√((x+y-1)^2-4 x y)+x+y-1))
F_4(1/2,1,3/2;1/2;x,y)=1/2√(x)(tanh ^-1(x-y+1/2 √(x))-2 ^-1(2 √(x)√(y)/√((x+y-1)^2-4 x y)+x+y-1)
+ ^-1(2 √(x)√(y)/x+y-1) )
F_4(0,1;2,0;x,y)= √(-2 (x+1) y+(x-1)^2+y^2)+x-y+1/2 x
F_4(2-a,1,a,2-a,1,1)= 1/2(1-i √(3)) _2F_1(1,2-a;a;-√(-1))
_2F_1(1,2-a;3/2;(x-y+1)^2/4 x) = (√(-2 (x+1) y+(x-1)^2+y^2)+(y-x-1)) /(1-a) (x-y+1)
_2F_1(1,2-a;a;((x-y+1)+√((x-1)^2+y^2-2 (x+1) y))^2/4 x)
_2F_1(1,2-a;3/2;(x+y-1)^2/4 x y) = √(-2 (x+1) y+(x-1)^2+y^2)-(x+y-1)/(1-a) (x+y-1)
_2F_1(1,2-a;a;(√((x-1)^2+y^2-2 (x+1) y)-(x+y-1))^2/4 x y)
We can use the following relation as given in <cit.> and obtain formulae for F_1
F_4 (α, β ; γ, β ;-x/(1-x)(1-y),-y/(1-x)(1-y)) =(1-x)^α(1-y)^α F_1(α, γ-β, α-γ+1 ; γ ; x, x y)
We can further exploit the relation between F_1 and F_2 to derive reduction formulae for F_2. F_2 can further be related to other hypergeometric functions, including F_3, G_1, G_2, G_3, H_2, H_3, H_4, H_6 and H_7. So we can derive reduction formulae for all of these two-variable hypergeometric functions.
In a similar manner, using the following relation <cit.>, we can obtain formulae for H_3
F_4(α, β ; γ, β ; x, y) =(1-x-y)^-α H_3(α, γ-β ; γ ; x y/(x+y-1)^2, x/x+y-1)
|
http://arxiv.org/abs/2307.05322v1 | 20230711150910 | Class Instance Balanced Learning for Long-Tailed Classification | [
"Marc-Antoine Lavoie",
"Steven Waslander"
] | cs.CV | [
"cs.CV"
] |
Class Instance Balanced Learning for Long-Tailed Classification
Marc-Antoine Lavoie and Steven L. Waslander
Institute for Aerospace Studies
University of Toronto
Toronto, Canada
{ marc-antoine.lavoie, steven.waslander }@robotics.utias.utoronto.ca
August 12, 2023
=====================================================================================================================================================================================================
The long-tailed image classification task remains important in the development of deep neural networks as it explicitly deals with large imbalances in the class frequencies of the training data. While uncommon in engineered datasets, this imbalance is almost always present in real-world data. Previous approaches have shown that combining cross-entropy and contrastive learning can improve performance on the long-tailed task, but they do not explore the tradeoff between head and tail classes. We propose a novel class instance balanced loss (CIBL), which reweights the relative contributions of a cross-entropy and a contrastive loss as a function of the frequency of class instances in the training batch. This balancing favours the contrastive loss for more common classes, leading to a learned classifier with a more balanced performance across all class frequencies. Furthermore, increasing the relative weight on the contrastive head shifts performance from common (head) to rare (tail) classes, allowing the user to skew the performance towards these classes if desired. We also show that changing the linear classifier head with a cosine classifier yields a network that can be trained to similar performance in substantially fewer epochs. We obtain competitive results on both CIFAR-100-LT and ImageNet-LT.
§ INTRODUCTION
The rapid development of convolutional neural network (CNN) architectures has allowed great progress in many computer vision tasks, such as classification, object detection and segmentation. However, moving from the hand-engineered benchmark datasets used in most image classification tasks to unfiltered datasets collected from real-world experiments presents a problem. Real datasets tend to be significantly more imbalanced, with some classes being much more common than others. Consider for instance nuImages <cit.>, an autonomous driving dataset obtained from real-world driving scenes, in which there are 36314 truck instances but only 42 ambulances. Standard training approaches will generate classifiers biased toward the most common classes when applied to this imbalanced dataset.
In long-tailed classification, we observe two main problems on the rarest classes: underfitting and overfitting. When underfit, the classifier does not learn to classify rare class examples during training and performs poorly on both the training and testing datasets. On the other hand, when overfit, the network performance is good on training examples but does not generalize to the unseen test data.
Common strategies to improve performance on the rare classes involve correcting for the data imbalance in the training signal, either by adjusting the sampling by undersampling frequent classes <cit.> or oversampling rare classes <cit.>, or by balancing the gradient of the loss by weighting the individual losses according to class frequencies <cit.>. These methods generally trade performance on the most frequent classes to improve results on the rarer ones <cit.>.
A refinement of the oversampling approaches involves the generation of new samples, particularly from the rare classes, to correct the initial imbalance. These new samples can be generated in the image space using more robust augmentations <cit.> or generative models <cit.>, or in feature space by transforming existing features with the addition of noise <cit.>. Similarly, some developments on the loss gradient balancing approach are the margin <cit.> and logit adjustment methods <cit.>. Instead of simply increasing the magnitude of the gradient for rare classes, the training task is made harder and more robust for these classes.
Recently, there has been an interest in using contrastive losses <cit.> instead of the standard cross-entropy formulation. These losses have been shown to be effective in long-tailed classification, learning more balanced feature representations <cit.>. Some approaches <cit.> now combine contrastive and cross-entropy terms and obtain state-of-the-art performance.
Following the strong results shown by PaCo <cit.>, we propose a similar structure combining cross-entropy and supervised contrastive losses. In contrast to that approach, we explicitly decouple the two losses and thus consider a simple weighted sum of both. The weights bias the loss function towards the contrastive loss for classes with many examples in the batch and momentum queue. We show that this reweighting, and not simply the sum of the two losses, improves performance and gives the user a single parameter to tune the network's performance towards the rarest classes. Next, we show that replacing the final linear classifier with a cosine classifier yields a network with competitive performance that requires significantly fewer epochs to train. Our main contributions are:
* We show that when combining cross-entropy and contrastive losses, increasing the relative importance of the contrastive loss on the head classes effectively balances the performance across all classes, and this can be done by tuning a single value.
* Next, we show that simply modifying the linear classifier to a cosine classifier gives us a similar performance with much fewer training epochs.
* The proposed approach achieves competitive results on two popular long-tailed benchmarks.
§ RELATED WORKS
§.§ Long-Tailed Classification
§.§.§ Resampling and Reweighting
Classical approaches in long-tailed classification rely on directly compensating for the skew in training data. This can be solved by undersampling common classes <cit.> or oversampling rare classes <cit.>, but the first can lead to overfitting while the second can lead to poor representation learning. Similar issues are observed with the direct loss reweighting approaches <cit.>.
§.§.§ Decoupled Learning
To reduce these effects, LDAM <cit.> proposes a two-stage training process, decoupling feature and classifier learning and ensuring that good features are learned before the balanced classifier is trained in a second step, using a more balanced sampling or weighting. In addition to simply training using a more balanced distribution, some methods <cit.> consider post hoc adjustments that attempt to find an optimal reweighting of the logits to improve the final performance. DLSA <cit.> trains a Gaussian mixture flow model as a second step to detect and send the tail instances to a separate classifying branch.
§.§.§ Margins and Logit Adjustment
Instead of scaling the loss according to class frequencies, some approaches instead attempt to make the classification task harder for instances of rare classes. This can be done by subtracting an offset from the true logit during training <cit.>, forcing the classifier to separate classes by an additional margin, which scales inversely with the number of instances in a class. Alternatively, logit adjustment methods <cit.> opt instead to rescale all logits during training such that the most frequent ones are larger, again making the task more difficult for tail instances. ALA <cit.> uses the instance difficulty in addition to class frequency to scale the added margins.
§.§.§ Additional Augmentations and Sample Generation
Some approaches use stronger augmentations or attempt to generate entirely new images to deal with the long tail. M2M <cit.> transforms frequent class images by pushing them towards rare classes in image space, generating new rare images. CMO <cit.> generates new tail samples by mixing tail and head images using Cutmix <cit.>. GAMO <cit.> and AFHN <cit.> generate new images using a GAN. MetaSAug <cit.> perturbs the feature representation of rare class examples according to a learned covariance. A recent approach, OPeN <cit.>, adds pure noise images to the original instances from rare classes.
§.§.§ Ensemble Methods
Long-tailed learning can also be improved using an ensemble of models instead of a single one. In RIDE <cit.>, multiple models are trained with a diversity maximization loss, ensuring the models generate different outputs, and are deployed sequentially. LFME <cit.> and NCL <cit.> train multiple experts and use knowledge distillation to improve performance on a single model. ResLT <cit.> uses a shared backbone but adds additional parameters to the heads trained on tail classes.
§.§ Contrastive Learning
§.§.§ Pure Contrastive
Contrastive learning is an alternative to cross-entropy training originally developed for self-supervised training without labels <cit.>. These methods learn features by clustering different augmented views of the same image while pushing away all other images. Some works have shown that self-supervised contrastive pre-training can improve performance <cit.> on the long-tailed task. Supervised Contrastive Learning (SCL) <cit.> is an extension of self-supervised contrastive learning when labels are available. It has been used successfully for classification on the full ImageNet dataset. However, this approach still suffers from issues with the long-tailed task and produces skewed results. To remedy this, KCL <cit.> proposes to use a fixed number of positive samples, reducing the imbalance in training. TSC <cit.> adds fixed targets in the contrastive loss to act as class centers, ensuring a uniform class distribution in the feature space.
§.§.§ Mixed Cross-Entropy and Contrastive Losses
Other approaches consider a mix of supervised contrastive and cross-entropy losses. Hybrid-SC <cit.> applies the loss in two independent branches and gradually decays the contrastive component of the loss in favour of the cross-entropy, allowing the network to first learn good features and then optimize the classifier for classification performance. PaCo <cit.> adds the cross-entropy directly to the contrastive loss as additional terms, while BCL <cit.> first rescales the contribution of each class in the denominator of the contrastive loss and then adds learnable class centers derived from the cross-entropy classifier vectors.
§ METHOD
§.§ Logit Adjustment
Balanced Softmax <cit.> is a variation of the standard cross-entropy (CE) loss that adjusts the logits during training to account for the imbalance in the training set. The loss for instance i is
ℒ_i^CE = -logn_c_iexp(x_i^θ_c_i)/∑_c ∈ C n_cexp(x_i^θ_c)
= -logη_i,c_i/∑_c ∈ Cη_i,c,
where x_i is the feature vector, C the set of all classes, c_i the true class of instance x_i, θ_c the linear classifier associated with class c, n_c the number of instances of class c in the training set and η_i,c the logit associated with class c. This formulation greatly improves performance on a balanced test set. Unless otherwise noted, cross-entropy losses in the following are corrected with Balanced Softmax.
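As a side illustration (not the authors' released code), the Balanced Softmax adjustment above can be implemented in PyTorch by shifting the logits by log n_c, under the assumption that a tensor of per-class training counts is available:

```python
import torch
import torch.nn.functional as F

def balanced_softmax_ce(logits, targets, class_counts, reduction='mean'):
    # Multiplying exp(logit) by n_c is the same as adding log(n_c) to the logit.
    adjusted = logits + torch.log(class_counts.float()).unsqueeze(0)
    return F.cross_entropy(adjusted, targets, reduction=reduction)
```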
§.§ Supervised Contrastive Learning
Supervised Contrastive Learning (SCL) <cit.> is a modification of standard contrastive loss that considers all samples of the same class in a batch to the positives. The supervised contrastive loss for instance i has the form
ℒ_i^SCL = -1/P_i^-∑_j ∈ P_i^-logexp(z_i·z_j / τ)/∑_k ∈ A^-exp(z_i·z_k / τ),
where z_i is the contrastive embedding vector for the i^th instance, P_i^- is the set of indices of the instances with the same class as instance i excluding instance i, P_i^- is its cardinality, A^- the set of all indices in the batch and queue excluding i, and τ is a temperature scaling parameter. Because contrastive losses tend to learn more reliably with additional negative examples, we keep a memory queue of past image features generated from a second copy of the encoder following the MoCo framework <cit.>. Both P_i^- and A^- contain examples from the current batch and the momentum queue.
The contrastive embedding vectors z_ℓ are given as
z_ℓ = h(x_ℓ)/h(x_ℓ)_2,
where h is a MLP projection layer as per SimCLR <cit.>.
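A simplified PyTorch sketch of this per-instance supervised contrastive term is given below; it assumes the MoCo queue features and labels are passed in explicitly and that all embeddings are already L2-normalized, which is a simplification of the actual MoCo-based implementation:

```python
import torch

def supervised_contrastive(z, labels, queue_z, queue_labels, tau=0.05):
    all_z = torch.cat([z, queue_z], dim=0)                  # A: batch + queue
    all_labels = torch.cat([labels, queue_labels], dim=0)
    sim = z @ all_z.T / tau                                 # (B, B + Q)
    # exclude each instance's similarity with itself (batch part of A only)
    self_mask = torch.zeros_like(sim, dtype=torch.bool)
    self_mask[:, :z.size(0)] = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(1) == all_labels.unsqueeze(0)) & ~self_mask
    n_pos = pos_mask.sum(dim=1).clamp(min=1)                # |P_i^-|
    loss_i = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / n_pos
    return loss_i, n_pos               # per-instance loss, reused for CIBL below
```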
§.§ Combined Losses
As discussed in Section <ref>, methods like Hybrid-SC <cit.> and PaCo <cit.> combine both a cross-entropy and a supervised contrastive loss.
§.§.§ Summed Loss
We first consider a scalar-weighted sum of both losses given by eqs. (<ref>) and (<ref>), similar to Hybrid-SC <cit.> but with fixed weights
ℒ^sum_i = λ^CEℒ^CE_i + λ^SCLℒ^SCL_i.
In practice, this simple summed formulation has a limited effect on performance, with final results similar to using only the cross-entropy term. We discuss this further in Section <ref>.
§.§.§ PaCo Loss
Another approach, presented in PaCo <cit.>, is to treat the cross-entropy logits as additional positive and negative samples in the contrastive loss. The loss in PaCo has the form
ℒ^PaCo_i =- γ_i ( α∑_j ∈ P_i^-logexp(z_i·z_j / τ)/∑_k ∈ A^-exp(z_i·z_k / τ)+ ∑_c ∈ Cη_i,c
+ βlogη_i,c_i/∑_k ∈ A^-exp(z_i·z_k / τ) + ∑_c ∈ Cη_i,c) ,
γ_i = 1/αP_i^- +β,
where α and β control the relative importance of the contrastive and cross-entropy positives. The cross-entropy and contrastive terms have the same definitions as in eq. (<ref>) and eq. (<ref>) respectively. The numerator term is a weighted sum between a cross-entropy and a contrastive term from the other positive instances. This is then normalized by the sum of all positive and negative terms in both losses. Because each sample has only a single positive logit in the cross-entropy but can have many positives in the contrastive loss, the summed loss skews towards the contrastive term as the number of positive examples in the batch and momentum queue P_i^- increases.
§.§.§ CIBL Loss
Inspired by the PaCO loss above, we consider a new class instance balanced loss (CIBL) that explicitly considers the sum of a cross-entropy and a contrastive term, with relative importance proportional to the number of instances in the batch and queue
ℒ_i^CIBL =-1/λ^CE + λ^SCLP_i^-( λ^CElogη_i,c_i/∑_c ∈ Cη_i,c
+ λ^SCL∑_j ∈ P_i^-logexp(z_i·z_j / τ)/∑_k ∈ A^-exp(z_i·z_k / τ)).
Figure <ref> presents a schematic of the CIBL formulation.
In contrast to the PaCo formulation in eq. (<ref>), the cross-entropy and contrastive terms are uncoupled in CIBL and only dependent on the number of other positive samples in the batch and queue P_i^-. Because of this, it remains a weighted sum of two independent losses. This also means that we can change the formulation of either loss with fewer repercussions on the other. CIBL is compared to PaCo and other methods in Section <ref>. A comparison between the baseline given in eq. (<ref>), the simple summed loss given by eq. (<ref>) and the proposed CIBL loss given by eq. (<ref>) is given in Section <ref>.
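Combining the two sketches above, the CIBL weighting can be written as a per-instance convex combination whose contrastive weight grows with the number of positives |P_i^-| in the batch and queue. This is an illustrative sketch, with the λ values here merely echoing those used later in the experiments:

```python
def cibl_loss(ce_per_instance, scl_per_instance, n_pos, lam_ce=1.0, lam_scl=0.03):
    # ce_per_instance: balanced-softmax CE computed with reduction='none'
    # scl_per_instance, n_pos: outputs of the supervised_contrastive sketch above;
    # scl_per_instance already carries the 1/|P_i^-| average, so multiplying by
    # |P_i^-| recovers the plain sum over positives used in the CIBL formulation.
    w_scl = lam_scl * n_pos.float()
    loss = (lam_ce * ce_per_instance + w_scl * scl_per_instance) / (lam_ce + w_scl)
    return loss.mean()
```

For rare classes (small |P_i^-|) the cross-entropy term dominates, while for frequent classes the contrastive term takes over, which is the balancing effect discussed above.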
§.§ Logit Adjustment for Cosine Classifiers
Recent state-of-the-art methods <cit.> use a normalized classifier for the cross-entropy and obtain improved performance. Here, we also implement the cross-entropy loss with a cosine distance to see if this improvement is also observed, giving us the normalized CIBL (NCIBL). This gives a normalized cross-entropy (NCE) loss of the form
ℒ_i^NCE = -logn_c_iexp(sim(x_i,θ_c_i) /γ)/∑_c ∈ C n_cexp( sim(x_i,θ_c) /γ),
sim(x_i,θ_c_i) = x_i^θ_c_i/x_i_2 θ_c_i_2,
where n_c, x_i, θ_c are defined as in eq (<ref>) and where γ is a temperature parameter that scales the sharpness of the softmax and controls how sensitive the loss is towards the largest terms. Note that because the magnitude of the temperature directly affects the magnitude of the logits, the relative effects of the added logit adjustment depend on temperature. This is obvious if the logit adjustment term n_c is moved inside the exponential, acting as a margin on both the positive and negative logits.
ℒ_i^NCE = -logexp(sim(x_i,θ_c_i) /γ + log n_c_i)/∑_c ∈ Cexp( sim(x_i,θ_c) /γ + log n_c),
In practice, the performance of NCIBL is sensitive to the temperature value γ used, but a reasonable range of values gives results that are better than the baseline of pure cross-entropy. Section <ref> presents a discussion of this.
In CIBL, pulling the number of class instances n_c into the exponential function as in eq. (<ref>) adds a margin log(n_c) that has a constant magnitude for all instances of a given class, regardless of the original logit x_i^θ_c. In NCIBL however, the effect of the added margin depends on the temperature hyperparameter γ. Moreover, it cannot be interpreted as a fixed margin on the angle between x_i and θ_c. In the related task of face recognition, ArcFace <cit.> showed that a constant angular margin, with the margin adjustment done inside the similarity sim function, was beneficial over having the margin added on the output of the similarity as done here, but further analysis of this result in the context of long-tailed image classification is outside the scope of this work.
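For the normalized variant, only the computation of the logits changes; a minimal sketch (assuming weight is the |C| × d classifier matrix) replaces the inner products with scaled cosine similarities, which can then be fed to the balanced-softmax cross-entropy above:

```python
import torch.nn.functional as F

def cosine_logits(features, weight, gamma=0.05):
    f = F.normalize(features, dim=1)          # x_i / ||x_i||
    w = F.normalize(weight, dim=1)            # theta_c / ||theta_c||
    return f @ w.T / gamma                    # pass to balanced_softmax_ce as before
```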
§ EXPERIMENTS
We compare the performance of CIBL and NCIBL against other current state-of-the-art methods on two standard long-tailed datasets: CIFAR-100-LT <cit.> and ImageNet-LT <cit.>. We follow the standard evaluation protocol by testing on a class-balanced test set. The total classification accuracy is usually considered the primary metric of performance, but we also report the mean class-wise accuracy for three subsets of classes when testing on ImageNet: The Many category includes classes with n > 100 images, Medium those with 100 ≥ n > 20 and Few those with n ≤ 20. While we rank the different methods according to the average accuracy, we are also interested in identifying if the proposed approach improves performance on the tail classes specifically.
§.§ Datasets
§.§.§ Long-Tailed CIFAR-100
CIFAR-100-LT is a subsampled version of the original CIFAR-100 dataset <cit.>, which is composed of 50,000 training and 10,000 validation images of size 32 × 32, equally divided into 100 classes. The long-tailed version of the training dataset is subsampled with an exponential factor, and we present results for imbalance factors of 100, 50 and 10.
§.§.§ Long-Tailed ImageNet
ImageNet-LT is a subsampling of the original ImageNet 2012 <cit.> following a Pareto distribution, with 1000 classes having from 1280 to 5 instances in the training set. The test set is balanced, with 50 images per class. The original images have varying dimensions, but all of them are cropped for testing and training to a size of 224 × 224, with the crop being centred for the test images.
§.§ Implementation Details
To allow a fair comparison with PaCo <cit.>, which is the closest method to our proposed CIBL, we generally follow the same training setups and schedules. This means that the supervised head uses logit adjustment following Balanced Softmax <cit.> as described in Section <ref> above and that the supervised contrastive loss is based on the MoCov2 <cit.> approach. All the training is done end-to-end on a single Nvidia 3090 GPU using SGD with a momentum of 0.9.
§.§.§ CIFAR-100-LT
We use ResNet-32 as the backbone. Note that this is the smaller version of ResNet as per the test setting in LDAM <cit.>. Following PaCo, we augment the main branch with Cutout <cit.> and AutoAugment <cit.> and the second momentum branch with SimAugment <cit.>. The class instance balance parameters are set to λ^CE = 1.0 and λ^SCL = 0.03 respectively. We train for 400 epochs, with a learning rate of 0.1, decaying to 0.01 at 320 and 0.001 at 360 epochs, and with a linear warmup for the first 10 epochs. We use a batch size of 128, a MoCo queue length of 1024 and a contrastive temperature of τ=0.05. For NCIBL, we use a cross-entropy temperature of γ = 0.05.
§.§.§ ImageNet-LT
We use ResNet-50 as the backbone. We augment the main branch with RandAugment <cit.> and the momentum branch with SimAugment. The class instance balance parameters are set to λ^CE = 1.0 and λ^SCL = 0.05 respectively. We present results for a training time of 400 epochs for CIBL, following PaCo. However, this schedule overfits when using NCIBL, and so we present results for 200 epochs instead for NCIBL. See Section <ref> for further discussion. For CIBL, we use a learning rate of 0.02, and for NCIBL, a learning rate of 0.04. Both decay to 0 using a cosine schedule and a linear warmup period of 10 epochs. We use a batch size of 128, a MoCo queue of 8192 and a contrastive temperature of τ=0.2, again following the implementation in PaCo. NCIBL uses a temperature parameter of γ=0.05.
§.§ Results
§.§.§ CIFAR-100-LT
Table <ref> presents the results on CIFAR-100 for our proposed CIBL method with and without the normalized classifier. Here, both proposed approaches outperform the baselines Balanced Softmax and PaCo and all the other methods for all the considered imbalance ratios. NCIBL performs similarly for an imbalance ratio of 100 but underperforms for smaller imbalances ratios.
§.§.§ ImageNet-LT
Table <ref> presents the results on ImageNet-LT for CIBL and NCIBL. Here, our approaches perform similarly to PaCo, and both proposed methods outperform all other methods. As on CIFAR-100, NCIBL has slightly lower accuracy than CIBL on Imagenet-LT, but the differences of -0.1% are likely not significant. However, NCIBL has -1.5% lower accuracy on the tail classes compared to CIBL and so presents a significantly less balanced performance. This is due to overfitting on the tail data, which is discussed in additional detail in Section <ref> below.
§.§ A Discussion on Training Length
In image classification, increasing the number of training epochs generally leads to improved performance. However, there are diminishing returns on extended training and an eventual risk of overfitting to the training data, and the number of training epochs required to reach overfitting depends on the method used.
Table <ref> compares the performances on ImageNet-LT of CIBL and NCIBL as the number of training epochs is increased. The implementation is the same as in Section <ref>, except NCIBL at 100 epochs is trained with a learning rate of 0.02. For both CIFAR-100 and ImageNet, increasing the learning rate when close to the best operating point for maximum accuracy tends to decrease overfitting with respect to the test set. Thus, we increase the learning rate for the longer training schedules of 200 and 400 epochs for NCIBL.
Here, we see that NCIBL has higher accuracy for shorter training times, providing an improvement of +3.3% at 100 epochs. At 200 epochs, the gain is still +1.4%, and only -0.1% behind CIBL at 400 epochs. However, note that the performance on rare classes has dropped compared to NCIBL at 100 epochs, indicating the method overfitting these classes. For the 400 epoch run, NCIBL performs worse than CIBL by -0.6% and worse than the NCIBL run at 200 epochs by -0.5%, and there is a loss in performance on both the medium and tail classes, again indicating overfit. Further increases in epochs are likely to continue to reduce performance. In the end, the normalized formulation allows a significantly faster training, with similar performance to CIBL with only half the number of epochs, but not necessarily a better one.
Finally, we see that as the number of training epochs increases, improvements in accuracy on the training data are less transferable to the test set. Figure <ref> presents the difference between training and testing accuracies at the last training epoch for all classes, ordered by test set frequency, and linear fits to the data. For both CIBL and NCIBL, the increase in training epochs leads to a higher y-intercept and a steeper slope on the linear fits. Overfitting tends to worsen as the number of epochs increases, and this result is accentuated on tail classes.
§.§ Ablations on the Choice of Loss Formulation
We compare the baseline cross-entropy loss with and without logit adjustment given by eq. (<ref>), the summed loss given by eq. (<ref>) and our CIBL loss given by eq. (<ref>) to identify the effects of the contrastive scaling term λ^SCL and of our specific formulation. Results are presented in Table <ref> and are obtained using the same test setting as the experiments on CIFAR-100-LT discussed in Section <ref>.
We see that including the contrastive loss can improve performance for both formulations as long as λ^SCL is not too large, after which the average performance degrades. For CIBL, increasing λ^SCL also transfers performance from frequent classes towards rare classes and reduces the overfit on the training set. These effects are also seen with the summed loss, but the magnitudes are much smaller. This is likely because in CIBL, the relative importance of the contrastive term is scaled by the class frequency in a batch P_i^-. This means that changes in λ^SCL mainly affect the loss on frequent classes, in contrast to the loss for rare classes, which is always dominated by cross-entropy. This difference in training objective allows the network to learn a more balanced representation, unlike simply using a sum of the losses with the same weights for all instances. Finally, the performance on the tail classes increases with the magnitude of λ^SCL, regardless of the change in average accuracy. This means that the end user can sacrifice some average performance for rare class performance if desired by increasing λ^SCL.
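To make the comparison above concrete, the following minimal PyTorch-style sketch contrasts a plain summed combination with a combination whose contrastive term is scaled by the per-sample class frequency in the batch (our stand-in for P_i^-). This is only our reading of the description above, not the paper's exact formulation, which is given by eqs. (<ref>) and (<ref>); the helper functions ce_loss and supcon_loss are assumed to return per-sample losses and are placeholders.

import torch

def combined_loss(logits, features, labels, lam_scl, ce_loss, supcon_loss,
                  frequency_scaled=True):
    """Hedged sketch: cross-entropy plus a scaled supervised contrastive term.

    With frequency_scaled=True, the contrastive weight follows the per-sample
    class frequency in the batch, so frequent classes lean on the contrastive
    term while rare classes stay dominated by cross-entropy.
    """
    ce = ce_loss(logits, labels)             # per-sample cross-entropy
    scl = supcon_loss(features, labels)      # per-sample contrastive loss
    if frequency_scaled:
        # fraction of the batch sharing each sample's label
        counts = (labels.unsqueeze(0) == labels.unsqueeze(1)).float().mean(dim=1)
        weight = lam_scl * counts
    else:
        weight = lam_scl * torch.ones_like(ce)   # plain summed loss
    return (ce + weight * scl).mean()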
§.§ Ablations on the Temperature Scaling γ for NCIBL
Table <ref> presents the performance of the NCIBL loss on CIFAR-100 for different values of γ. It has been shown <cit.> that decreasing the temperature tends to improve the fit on the dataset at the cost of a loss of generalizability. Thus, we expect the average performance to increase as the temperature decreases, but the performance on rare classes may also decrease as overfitting worsens. Here, the network trains poorly for large values of γ (1, 0.2), with training and testing accuracies significantly under the unnormalized cross-entropy baseline. As the value of γ gets smaller, the testing accuracy increases to a maximum at γ = 0.05. At γ = 0.025, the performance is still better than the baseline, but the test accuracy is worse than at γ = 0.05, particularly for the rare classes where it drops by -3.5%. On the other hand, the training accuracy continues to increase, indicating overfitting. The optimal temperature for average accuracy is also where the performance on the rarest classes is maximized. Note also that the optimal temperature γ of 0.05 is the same as the contrastive temperature τ used in the contrastive loss for CIFAR as described in Section <ref>. While the performance of NCIBL is dependent on γ, this parameter takes on values similar to other temperature scaling parameters used in normalized losses.
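As a concrete illustration of the normalized classifier whose temperature γ is studied above, the sketch below shows one common construction of a cosine classifier: class scores are cosine similarities between L2-normalized features and L2-normalized class weights, divided by γ before the cross-entropy. The exact classifier used in NCIBL may differ in detail; feat_dim and num_classes are placeholder parameters.

import torch
import torch.nn.functional as F

class CosineClassifier(torch.nn.Module):
    """Normalized (cosine) classifier with temperature gamma -- a generic sketch."""

    def __init__(self, feat_dim: int, num_classes: int, gamma: float = 0.05):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.gamma = gamma

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # cosine similarity between normalized features and class weights,
        # scaled by 1/gamma; a smaller gamma sharpens the softmax distribution
        cos = F.linear(F.normalize(features, dim=-1),
                       F.normalize(self.weight, dim=-1))
        return cos / self.gamma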
§ CONCLUSION
In this paper, we propose Class Instance Balanced Loss (CIBL), a method of combining cross-entropy and contrastive losses to improve long-tailed image classification. Our analysis shows that by forcing the loss on frequent class samples to favour the contrastive loss over the cross-entropy loss, we obtain a more accurate and more balanced final classifier compared to using a fixed weighting for all classes while also enabling a principled approach to the tradeoff between head and tail performance, which can be tuned using a single parameter. We also show that replacing the linear classifier with a cosine classifier allows us to train a network with similar average performance in significantly fewer epochs. Our method obtains competitive results compared to state-of-the-art methods on CIFAR-100-LT and ImageNet-LT.
|
http://arxiv.org/abs/2307.07577v1 | 20230714184101 | Decomposition Based Refinement for the Network Interdiction Problem | [
"Krish Matta",
"Xiaoyuan Liu",
"Ilya Safro"
] | cs.DS | [
"cs.DS"
] |
Decomposition Based Refinement for the Network Interdiction Problem
Krish Matta
School of Computer Science
Carnegie Mellon University
Pittsburgh, USA
[email protected]
Xiaoyuan Liu
Fujitsu Research of America
Sunnyvale, USA
[email protected]
Ilya Safro
Department of Computer and Information Sciences
University of Delaware
Newark, USA
[email protected]
==================================================================================================================================================================================================================================================================================================================
The shortest path network interdiction (SPNI) problem poses significant computational challenges due to its NP-hardness. Current solutions, primarily based on integer programming methods, are inefficient for large-scale instances. In this paper, we introduce a novel hybrid algorithm that can utilize Ising Processing Units (IPUs) alongside classical solvers. This approach decomposes the problem into manageable sub-problems, which are then offloaded to slow but high-quality classical solvers or to an IPU. Results are subsequently recombined to form a global solution. Our method demonstrates comparable quality to existing whole problem solvers while reducing computational time for large-scale instances. Furthermore, our approach is amenable to parallelization, allowing for simultaneous processing of decomposed sub-problems.
Reproducibility: Our source code and experimental results are available at <https://github.com/krishxmatta/network-interdiction>.
decomposition, Ising processing hardware, network interdiction, refinement, integer programming
§ INTRODUCTION
Network interdiction refers to a class of challenging combinatorial optimization problems that involve strategically disrupting flows in a network to achieve specific objectives <cit.>. These problems can be characterized as a game between attackers and defenders of a network. The defender seeks to optimize a predefined objective across the network, such as maximizing flow between two nodes, while the attacker aims to impede the defender's objective by inflicting maximum disruption upon the network. This disruption may manifest in a variety of forms, such as targeted attacks on the network arcs aiming to destroy or impair them.
The topic of network interdiction has gained significant attention due to its ability to model complex situations in the defense domain. For example, Ghare et al. <cit.> demonstrated how network interdiction can be used in wartime to degrade the capacity of an enemy's supply network and thus maximally disrupt the flow of enemy troops. Network interdiction can additionally be used to identify weaknesses in critical infrastructure (e.g., the electric grid) to make it more resilient to terrorist attacks and natural disasters. Such problems are also extremely useful outside of military applications; e.g., Goldberg et al. <cit.> show how network interdiction can be used to minimize the spread of infection within hospitals.
Within this paper, we study the Shortest Path Network Interdiction (SPNI) problem. In this problem, the defender wishes to traverse the minimum length path between two specified nodes s and t in a directed network. The attacker uses their limited resources to destroy certain arcs, or increase their effective length, to increase the defender's shortest s-t path length. Each arc is either interdicted or not, and additionally, it is known beforehand how much an arc costs to interdict and how much interdiction of an arc causes its effective length to increase. When solving SPNI, we take the viewpoint of the attacker, and thus our goal is to maximize the shortest s-t path length.
Our contribution The SPNI is an 𝒩𝒫-Hard problem <cit.>.
To the best of our knowledge, almost all current SPNI algorithms rely on integer programming methods. On large problem instances, these methods are time costly; for example, in one of the most important works regarding SPNI, the largest network solved by Israeli et al. had 240 nodes and 1,042 arcs <cit.>. Consequently, existing methods are either slow, or fast but with large optimality gaps and low-quality results.
To ameliorate this issue, we propose a novel algorithm which may leverage Ising processing units (IPUs) <cit.>, specialized computational devices tailored to solve the Ising model, as well as classical solvers that can produce high-quality solutions for small instances in a reasonable computational time. Examples of IPUs include, but are not limited to, quantum annealers, gate-based quantum machines equipped with Ising model solvers, and digital and analog annealers <cit.>. Since physical limitations of existing IPU hardware severely restrict the size of problems they can handle and the connectivity between variables, we adopt a hybrid IPU-classical approach in which a classical computer decomposes SPNI into subproblems small enough for an IPU, offloads them for computation, then combines the IPU's results into a global solution. For the proof of concept we use exact solvers instead of a real IPU. Our decomposition-based results demonstrate an ability to achieve almost the same quality as slow whole-problem solvers, which are prohibitive for large-scale instances and for IPU hardware.
Additionally, our approach is parallelization friendly, namely, sub-problems obtained as a result of SPNI decomposition can be tackled in parallel.
§ RELATED WORK
In one of the earliest works studying SPNI, Malik et al. focused on a variant termed the k most vital arcs problem in which the interdiction of each arc requires exactly one unit of the attacker's budget and interdiction of an arc results in its complete removal <cit.>. Ball et al. show that the k most vital arcs problem is 𝒩𝒫-Hard. Since this is a variant of SPNI, it follows that SPNI is 𝒩𝒫-Hard as well <cit.>. Corley and Shaw <cit.> investigated the single-most-vital-arc problem, which is the k most vital arcs problem where k = 1, but this problem is a simple variant of SPNI that lies in 𝒫. Rather than study the full removal of arcs, Fulkerson and Harding <cit.> and Golden <cit.> study related problems in which the length of an arc increases as more budget is allocated to its interdiction. In one of the most prominent works studying SPNI, Israeli and Wood <cit.> study the problem in its more general form, proposing two integer-programming based algorithms of different quality depending on whether arcs are fully removed or if arc length is increased as a result of interdiction.
More recent work on SPNI has since shifted away from integer-programming algorithms. Huang et al., for example, approximate SPNI using a reinforcement learning framework <cit.>. However, this approach generates quite significant gaps and the reinforcement learning is not particularly scalable.
Rocco et al. <cit.> study an extension of SPNI where the attacker is not solely focused on maximizing the s-t path length, but also wishes to minimize interdiction cost, and propose an evolutionary algorithm to do so. This is a multiobjective problem that is different than ours.
For more details, we refer the reader to the recent comprehensive survey on the network interdiction problem <cit.>. To our knowledge, however, no attempts have been made at a parallelization-friendly decomposition of SPNI for IPU hardware.
Several frameworks have been proposed to solve large instances of combinatorial optimization problems on small IPU hardware. The quadratic unconstrained binary optimization (QUBO) formulation is a popular formulation for combinatorial optimization problems that is equivalent to the Ising model,
and thus is solvable on all IPU hardware <cit.>. QUBO formulations contain only binary variables with at most quadratic interactions and no constraints. A large body of work in the quantum computing space focuses on formulating problems into one large QUBO, then identifying sub-QUBOs that can be solved directly on an IPU. The qbsolv tool developed by D-Wave accomplishes exactly this, once allowing researchers to solve a 1254 binary variable problem using quantum hardware that can only solve problems with at most 64 binary variables <cit.>. An alternative approach is, instead of first formulating the problem as a QUBO then decomposing the QUBO itself, to decompose the problem and then formulate these subproblems into QUBOs. For example, Shaydulin et al. <cit.> take this approach to solve the community detection problem on quantum hardware. This is the approach we opt to take in this paper, applied to SPNI.
§ PROBLEM FORMULATION
Let G = (N, A) be a directed network with node set N and arc set A. The length of arc k ∈ A is c_k ≥ 0, and the added interdiction length is d_k ≥ 0, meaning that if arc k is interdicted, then its effective length becomes c_k + d_k. If interdiction destroys arc k, then d_k can be set to a sufficiently large value. For any node i ∈ N, let A^+(i) ⊆ A denote the set of all arcs directed out of node i, and similarly let A^-(i) ⊆ A denote the set of all arcs directed into node i. While other forms of SPNI allow for multiple resource types and variant interdiction costs, for our algorithmic specializations we model a single resource type and the scenario in which interdiction of any arc in A costs only one unit of this resource. Thus, let r_0 denote the amount of budget we have available. Let 𝐱 = {x_k}_k=1^|A| where for each arc k ∈ A, x_k is a binary variable denoting whether arc k is interdicted or not by the attacker, and let 𝐲 = {y_k}_k=1^|A| where y_k is a binary variable denoting whether arc k is traversed by the defender. Finally, let s, t ∈ N denote the source and sink node in G, respectively. The integer programming formulation of SPNI is
max_𝐱 ∈ X min_𝐲 ∑_k ∈ A (c_k + x_k d_k) y_k
s.t. ∑_k ∈ A^+(s) y_k - ∑_k ∈ A^-(s) y_k = 1
     ∑_k ∈ A^+(t) y_k - ∑_k ∈ A^-(t) y_k = -1
     ∑_k ∈ A^+(i) y_k - ∑_k ∈ A^-(i) y_k = 0, ∀ i ∈ N ∖ { s, t } ,
where X = {𝐱∈{0, 1}^|A| | ∑_i=1^|A|𝐱_i ≤ r_0 }.
Note that the above formulation is biobjective, and thus can not be directly converted into a QUBO model. As Israeli et al. <cit.> show, we may fix 𝐱, take the dual of the inner minimization, make some modifications, then release 𝐱, and obtain a single objective formulation
max_𝐱, π  π_t
s.t. π_j - π_i - d_k x_k ≤ c_k, ∀ k = (i, j) ∈ A
     π_s = 0
     𝐱 ∈ X .
We may interpret π_i as the post-interdiction shortest-path length from s to i. As such, we may impose bounds on π_i ∈ [0, |N|max_k' ∈ A{c_k' + d_k'}]. Since each variable is constrained and we have a single objective, we may now convert each constraint to a penalty function and decompose bounded variables into several binary variables, for example using the mapping proposed by Karimi et al. <cit.>. Letting P be a sufficiently large positive penalty value, our QUBO formulation is
max_𝐱, π, 𝐦, n  π_t - P ∑_k = (i, j) ∈ A (c_k - (m_k + π_j - π_i - d_k x_k))^2
                - P (π_s)^2 - P (r_0 - (n + ∑_k ∈ A x_k))^2
s.t. π_i ∈ [0, |N| max_k' ∈ A {c_k' + d_k'}], ∀ i ∈ N
     m_k ∈ [0, |N| max_k' ∈ A {c_k' + d_k'} + d_k], ∀ k ∈ A
     n ∈ [0, r_0]
     𝐱 ∈ { 0, 1 }^|A| .
Note that we have written bounded non-binary variables for simplicity; additionally, we have included the variables 𝐦 and n to act as slack variables for the inequality constraints. In the actual QUBO formulation, all bounded variables and slack variables should be mapped to several binary variables. The resulting QUBO formulation is directly solvable on an IPU.
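As an illustration of this mapping, the sketch below uses a simple base-2 (binary) expansion of a bounded integer variable. This is only one common encoding and is given as an assumption-laden example; the paper relies on the mapping proposed by Karimi et al., which may differ in detail.

import math

def binary_expansion(upper_bound: int):
    """Coefficients w so that sum_i w[i]*b[i], with b[i] in {0,1}, spans 0..upper_bound.

    A standard bounded-integer-to-binary encoding: powers of two plus a final
    coefficient that caps the range exactly at upper_bound.
    """
    if upper_bound <= 0:
        return []
    num_bits = int(math.floor(math.log2(upper_bound))) + 1
    weights = [2 ** i for i in range(num_bits - 1)]
    weights.append(upper_bound - sum(weights))   # cap so the maximum is exact
    return weights

# Example: a slack variable n in [0, r_0] with r_0 = 5 becomes
# n = 1*b0 + 2*b1 + 2*b2, which reaches every integer in 0..5.
print(binary_expansion(5))   # [1, 2, 2]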
We now describe how to formulate subproblems of SPNI for the purposes of our algorithm. We define a connected partition of a network G = (N, A) as a subset of the node set N whose induced subgraph is connected.
Say we have a connected partition of G, i.e., a subset of nodes N_p ⊆ N, a sink node t' ∈ N_p, and let A_p represent all arcs associated with this subset of nodes, i.e. A_p = { (u, v) ∈ A | v ∈ N_p }. Note that we may partition A_p = A_p,1 ∪ A_p,2 where A_p,1 = { (u, v) ∈ A | u ∈ N_p and v ∈ N_p } and A_p,2 = { (u, v) ∈ A | u ∉ N_p and v ∈ N_p }. We define a subproblem of SPNI over this partition N_p and sink node t' by restricting our free variables to only those associated with N_p—rather than define π_i for all i ∈ N, we will only define π_i for all i ∈ N_p, and similarly only define x_k for all k ∈ A_p. Additionally, we change the objective formula to maximize π_t' rather than π_t.
For those variables not associated with N_p, we may fix their values to constants. Assume that we have a current solution A' ⊆ A—by “solution,” we refer to a subset of arcs chosen for interdiction. We use Γ_i to denote a constant representing the shortest-path length from node s to i after all arcs in A' have been interdicted. Since π_i represents the post-interdiction shortest-path length from node s to i, we can fix π_i to the value Γ_i for all nodes i not in N_p. Additionally, we limit our budget from r_0 to r_0 - |A' ∖ A_p|, ensuring we use only the budget we have allocated within N_p. The subproblem formulation over N_p with sink node t' is thus
max_𝐱, π, 𝐦, n  π_t' - P ∑_k = (i, j) ∈ A_p,1 (c_k - (m_k + π_j - π_i - d_k x_k))^2
                - P ∑_k = (i, j) ∈ A_p,2 (c_k - (m_k + π_j - Γ_i - d_k x_k))^2
                - P (π_s)^2
                - P ((r_0 - |A' ∖ A_p|) - (n + ∑_k ∈ A_p x_k))^2
s.t. π_i ∈ [0, |N| max_k' ∈ A {c_k' + d_k'}], ∀ i ∈ N_p
     m_k ∈ [0, |N| max_k' ∈ A {c_k' + d_k'} + d_k], ∀ k ∈ A_p
     n ∈ [0, r_0] .
Note that in the above formulation, we have restricted our free variables to those associated with N_p, thereby decreasing computational load and potentially allowing the problem to be solved on an IPU depending on the size of N_p.
§ ALGORITHM
In this section we present an algorithm for approximating SPNI by leveraging small IPU hardware. Our algorithm first proceeds by generating an initial greedy solution, then refining this solution over several iterations.
First we define terminology and helper functions. A problem instance of SPNI is a tuple 𝒫 = (G, 𝐜, 𝐝, s, t, r_0) where G = (N, A) is a directed network, 𝐜 is a vector representing arc lengths of A, 𝐝 is a vector representing added interdiction arc lengths of A, s ∈ N is the source node, t is the sink node, and r_0 is the interdiction budget.
We define PARTITION to be a function that takes in a graph G = (N, A) and an integer n as inputs, then returns P ⊆ 𝒫(N), a connected partitioning of G where ∀ N_p ∈ P, |N_p| ≈ n. These partitions are how we will choose to define subproblems. The purpose of the size restriction is to ensure that subproblems are sufficiently small for IPU hardware. Additionally, PARTITION is necessarily non-deterministic, ensuring different partitions on each call to prevent the solver from being stuck in a local attraction basin.
We define CALC-LENGTH, a function that takes in a problem instance 𝒫 = (G = (N, A), 𝐜, 𝐝, s, t, r_0), and a set A' ⊆ A where |A'| ≤ r_0. It returns the shortest s-t path length in G after all arcs in A' have been interdicted.
We also define CALC-PATH that takes in a problem instance 𝒫 = (G = (N, A), 𝐜, 𝐝, s, t, r_0), and a set A' ⊆ A where |A'| ≤ r_0. It returns the set N' ⊆ N consisting of all nodes in the shortest s-t path in G after all arcs in A' have been interdicted.
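A minimal NetworkX-based sketch of these two helpers is given below; it assumes the problem instance is passed as its individual components and that interdiction simply adds d_k to the length of each interdicted arc, consistent with the definitions above. The edge attribute name "w" is our own choice.

import networkx as nx

def calc_length(G, c, d, s, t, interdicted):
    """Shortest s-t path length after interdicting the arcs in `interdicted`."""
    H = G.copy()
    for (u, v) in H.edges():
        H[u][v]["w"] = c[(u, v)] + (d[(u, v)] if (u, v) in interdicted else 0.0)
    return nx.dijkstra_path_length(H, s, t, weight="w")

def calc_path(G, c, d, s, t, interdicted):
    """Set of nodes on the shortest s-t path after interdicting `interdicted`."""
    H = G.copy()
    for (u, v) in H.edges():
        H[u][v]["w"] = c[(u, v)] + (d[(u, v)] if (u, v) in interdicted else 0.0)
    return set(nx.dijkstra_path(H, s, t, weight="w"))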
Finally, the function SOLVE takes in a problem instance 𝒫 = (G = (N, A), 𝐜, 𝐝, s, t, r_0), a set N_p ⊆ N representing a connected partition of G, a node t' ∈ N_p for the sink node of the subproblem, and a set of arcs A' ⊆ A where |A'| ≤ r_0 representing the current working solution. It formulates a subproblem based on these parameters according to (<ref>), solves the subproblem, and returns a new arc set A” ⊆ A representing which arcs to interdict to maximize π_t'.
Here is where computational work may be offloaded to an IPU. Note that due to the subproblem restriction, SOLVE can only modify the interdiction status of arcs associated with N_p, and thus all arcs not associated with N_p but in A' will remain in A”.
Initial solution generation The pseudocode is given in Algorithm <ref>. It receives as input a problem instance 𝒫 = (G = (N, A), 𝐜, 𝐝, s, t, r_0) alongside an integer n representing the approximate number of nodes allowed per partition. To generate an initial solution to SPNI, the algorithm takes a greedy approach. As each arc costs exactly one unit of the budget to interdict, the algorithm iterates over each unit of the budget, deciding which arc to allocate this unit to. To make this decision, the algorithm first partitions the network using PARTITION and additionally calculates the shortest s-t path with CALC-PATH. Note that it is sufficient to only consider arcs along the s-t path for interdiction, since arcs outside of this path will not affect the s-t path length after interdiction. For each node in the s-t path, the algorithm determines which partition this node belongs to, then solves a subproblem over that partition using this node as the sink node by calling SOLVE. It then takes the output solution, and calls CALC-LENGTH to determine this solution's impact on the s-t path length. If this path length is greater than the current maximum path length, the algorithm notes it as a potential candidate solution and updates the maximum. After repeating this process for each node in the s-t path, the algorithm chooses a candidate solution randomly, then continues this process until the budget has run out. Note that in each iteration, the algorithm incrementally adds one arc to the current solution.
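The sketch below condenses this greedy routine into Python; it is a schematic reading of Algorithm <ref> rather than a faithful transcription, and the exact tie-breaking among candidate solutions is our own assumption. Here calc_length and calc_path follow the paper's signatures (problem instance plus arc set) and are assumed to be thin wrappers around the NetworkX helpers sketched earlier.

import random

def initial_solution(P, n, partition, calc_path, calc_length, solve):
    """Greedy initial solution: allocate one unit of budget per outer iteration."""
    G, c, d, s, t, r0 = P
    current = set()                              # arcs interdicted so far
    for _ in range(r0):                          # one arc added per budget unit
        parts = partition(G, n)                  # fresh random connected partitioning
        best_len = calc_length(P, current)
        candidates = []
        for v in calc_path(P, current):          # only nodes on the current s-t path
            N_p = next(p for p in parts if v in p)
            proposal = solve(P, N_p, v, current)   # subproblem with sink v
            length = calc_length(P, proposal)
            if length > best_len:                # strictly improves the s-t path
                best_len = length
                candidates.append(proposal)
        if candidates:
            current = random.choice(candidates)
    return current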
Solution refinement The pseudocode is given in Algorithm <ref>. It receives in input a problem instance 𝒫 = (G = (N, A), 𝐜, 𝐝, s, t, r_0), an integer n representing the approximate number of nodes per partition, an integer λ for the number of iterations to run the algorithm, and a solution A' ⊆ A to improve. Note that the given solution A' is expected to be the output of Algorithm <ref>. This algorithm attempts to improve the initial solution. To do so, it iterates λ times, and in each iteration, partitions the graph using PARTITION and calculates nodes in the current s-t path given the current interdiction solution using CALC-PATH. For each of these nodes, alongside all nodes found in the shortest path for the previous iteration, it determines what partition the node lies in and solves a subproblem where this node is the target node using SOLVE. CALC-LENGTH is then called to determine whether the outputted solution's s-t path length is greater than the current s-t path length, and if so, marks this solution as a potential candidate solution. After iterating over each node, if a better s-t path length is found, the current solution is chosen to be a randomly selected candidate solution. If a better s-t path length is not found, the algorithm attempts to explore alternate solutions by iterating over each arc in the current solution, temporarily deleting that arc from the solution, and observing whether this deletion can result in a better solution through a partitioning process similar to the above. If a better solution is found, the deletion of this arc becomes permanent and the better solution is adopted. Note that to prevent excessive computation, the algorithm keeps track of arcs which, upon being temporarily deleted, have not previously increased the s-t path length in a set named good-arcs. In future iterations, the algorithm does not consider temporarily deleting arcs found in good-arcs.
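The refinement loop can be sketched in the same style; again this is a schematic reading of Algorithm <ref> under our own simplifying assumptions, and in particular the bookkeeping of previously visited path nodes and of good_arcs is abbreviated.

import random

def refine(P, n, num_iters, A0, partition, calc_path, calc_length, solve):
    """Iteratively improve the initial interdiction set A0."""
    G, c, d, s, t, r0 = P
    current, good_arcs = set(A0), set()

    def best_candidates(base_solution, nodes, parts):
        # solve one subproblem per node and keep strictly improving proposals
        best_len, candidates = calc_length(P, current), []
        for v in nodes:
            N_p = next(p for p in parts if v in p)
            proposal = solve(P, N_p, v, base_solution)
            length = calc_length(P, proposal)
            if length > best_len:
                best_len = length
                candidates.append(proposal)
        return candidates

    prev_path = set()
    for _ in range(num_iters):
        parts = partition(G, n)
        path = calc_path(P, current)
        candidates = best_candidates(current, path | prev_path, parts)
        prev_path = path
        if candidates:
            current = random.choice(candidates)
            continue
        # no direct improvement: try releasing one interdicted arc at a time
        for arc in list(current - good_arcs):
            trial = current - {arc}
            candidates = best_candidates(trial, calc_path(P, trial), parts)
            if candidates:
                current = random.choice(candidates)
                break
            good_arcs.add(arc)   # deleting this arc did not help; skip it later
    return current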
§ COMPUTATIONAL RESULTS
To evaluate the performance of our algorithm, and similarly to other network interdiction literature, we have generated grid networks with randomized arc lengths and randomized interdiction lengths. To each grid we added the source node s connected to all nodes in the first column, and the sink node t connected to all nodes in the last column, with all of these arcs having length 0 and interdiction length 0 to make the problem more challenging. All horizontal arcs are oriented forward and all vertical arcs are oriented down. All arcs not connected to s or t have a randomly assigned length and interdiction length, each in the range [1, 10]. Figure <ref> depicts an example of a 3x3 grid.
Models were built using Pyomo, and were solved using the open-source solver CBC <cit.>. No significant difference was observed in experiments with Gurobi solver <cit.>.
Graph representation and utilities such as Dijkstra's algorithm were handled using NetworkX <cit.>. Graph partitioning was done using METIS <cit.>. All problems were run on the University of Delaware's Caviness cluster, which uses a Linux system and has exclusively Intel E5-2695 v4 18-core processors in its pool.
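For reference, a minimal Pyomo sketch of the full-problem model of Eq. (<ref>), as it could be handed to CBC, is shown below; the variable bounds and the exact construction used in our experiments may differ, so this should be read as an illustrative assumption rather than the production code.

import pyomo.environ as pyo

def build_full_spni_model(G, c, d, s, t, r0, big_m):
    """Single-objective SPNI model: maximize the post-interdiction s-t distance."""
    model = pyo.ConcreteModel()
    arcs = list(G.edges())
    model.x = pyo.Var(arcs, domain=pyo.Binary)              # interdiction decisions
    model.pi = pyo.Var(list(G.nodes()), bounds=(0, big_m))  # shortest-path labels

    model.obj = pyo.Objective(expr=model.pi[t], sense=pyo.maximize)
    model.source = pyo.Constraint(expr=model.pi[s] == 0)
    model.budget = pyo.Constraint(expr=sum(model.x[a] for a in arcs) <= r0)

    def path_rule(m, i, j):
        # pi_j - pi_i - d_ij * x_ij <= c_ij for every arc (i, j)
        return m.pi[j] - m.pi[i] - d[(i, j)] * m.x[(i, j)] <= c[(i, j)]
    model.path = pyo.Constraint(arcs, rule=path_rule)
    return model

# Example usage with a hypothetical instance:
# model = build_full_spni_model(G, c, d, "s", "t", r0=5, big_m=10_000)
# pyo.SolverFactory("cbc").solve(model)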
Below we plot the results of our computational experiments. The “Size” axis represents the number of nodes in the experiment's network (| N|). Note that by the structure of the grid network, the number of arcs can be calculated by | A | = 2(| N| - 2). The “Quality” axis displays a metric to evaluate the quality of our solutions (defined in Eq. (<ref>)).
We run both our refinement algorithm and CBC attempting to solve Equation (<ref>) across the entire network—we shall hereby refer to the latter as the “full problem solver.” Note that the full problem solver is significantly slower than the refinement algorithm, and thus, we time out the full problem solver once it has exceeded the time that the refinement algorithm takes, then proceed to take the current best solution. In every problem instance below, the full problem solver always timed out. For any given experiment, if r is the shortest s-t path length after interdicting the arcs from the solution of our refinement algorithm and f the shortest s-t path length after interdicting the arcs from the full problem solver's solution, the quality can be calculated as
(r - f)/max(r, f).
Thus, the higher the quality, the better—positive quality indicates that the refinement solver performed better than the full problem solver.
We run three sets of experiments. In each experiment, we set the number of refinement iterations λ to 50. (However, the convergence is often observed earlier.) In the first set of experiments, represented by Figure <ref>, we set the interdiction budget r_0 to 0.25% of the number of arcs in the network and limit our subproblem size to n=20 nodes. The average quality for this set of experiments is -0.008, implying that a subproblem size of only 20 nodes is sufficient for a good estimation of SPNI. In the second set of experiments, represented by Figure <ref>, we set the interdiction budget r_0 to 0.5% of the number of arcs in the network and limit our subproblem size to n=20 nodes. The average quality for this set of experiments is -0.013, indicating relative consistency in estimation regardless of budget. In the final set of experiments, we set the interdiction budget r_0 to 0.25% of the number of arcs in the network and limit our subproblem size to n = 40 nodes. The average quality for this set of experiments is -0.012, implying that an increase in subproblem size is not necessary for estimation and limited capacity IPU machines can already be integrated to solve this problem using decomposition-based refinement.
Out of all 39 experiments run, the difference between the s-t path length produced by the refinement solver and the path length produced by the full problem solver was ≥ -10 for 35 of the experiments. In 10 of these experiments, the refinement solver performed better than the full problem solver. These results are significant, as they indicate that in the cases where the full problem solver was better, the refinement solver fell short by no more than the length of a single arc.
§ DISCUSSION
§.§ Solving on Ising Processing Hardware
The current results shown in this paper rely on using an exact solver to solve subproblems. We have previously attempted to utilize classical heuristic QUBO solvers, such as D-Wave's qbsolv and D-Wave Ocean, to solve these subproblems, but such solvers provided poor quality solutions even on much smaller problem sizes than those we experiment with.
Additionally, due to the lack of access to IPU hardware for large-scale experiments, utilizing exact solvers is necessary to demonstrate the efficiency of our algorithm. We hope that as IPUs continue to grow in capacity, their superior solution quality and speed compared to classical heuristic solvers will make full use of the algorithm presented in this paper, generating solutions that consistently beat those of classical exact solvers.
§.§ Algorithmic Challenges and Obstacles
Refinement algorithms for solving SPNI may face several challenges, the most notable being the problem of proper budget allocation amongst partitions. If an algorithm decides to improperly allocate budget to a subproblem, for example deciding to interdict an arc where this unit of budget would ideally be spent much further away in the graph, it proves to be very difficult to shift this budget significantly far away from its current position. Within our refinement approach, the movement of interdicted arcs is primarily dictated by random partitioning—for example, in one iteration, the subproblem solver may determine that a specific arc should be interdicted under the context of the current partitioning configuration, i.e., within the context of the current subproblem the arc lies in. But when we repartition in the next iteration, the subproblem solver reevaluates the placement of interdicted arcs—the cost unit used to interdict the previous arc may be relocated elsewhere based on the new subproblem formulation. Since the cost unit used for an interdicted arc may only shift to arcs relatively near this arc, it is unlikely for the cost unit to travel far away in the graph. To resolve this issue, a multilevel approach widely known in combinatorial scientific computing and applied graph algorithms <cit.> may be taken. Multilevel algorithms create a hierarchy of coarse problem representations, find a solution to the coarsest (smallest and most compressed) problem, then gradually derive a solution to the original finest problem by projecting the solution back from the coarse to fine levels and refining it using our algorithm, thereby potentially aiding in proper budget allocation. This approach has seen great success in solving various (hyper)graph problems <cit.> including hybrid quantum-classical multi- and single-level refinement for partitioning and community detection <cit.>, indicating that its application to SPNI may improve solution quality.
§.§ Performance
One of our main goals was to develop an algorithm that is amenable to trivial parallelization across IPUs. Our current implementation of the proposed refinement does not take advantage of the fact that subproblems produced by a given partitioning can be solved independently of one another, since they only depend on the current working solution.
Consequently, subproblems can be solved in parallel rather than sequentially, offering a potentially large optimization to the current performance of the algorithm.
§.§ Generalizability
The algorithm introduced in this paper does not handle SPNI problem instances in which the binary interdiction of a given arc may cost more than one unit of the interdictor's budget. The algorithm can currently handle instances where an arc between two nodes can take on multiple levels of interdiction (e.g., if interdicted once add 1 to the length, if interdicted twice add 2 to the length) through the introduction of multi-edges with appropriate lengths, but instances in which the interdiction decision is binary (i.e., interdicted or not) while interdiction costs differ across arcs are not supported.
Another scenario which the algorithm is unable to handle is the case of different resource types. As an example of such a scenario, let r_0 and r_1 represent the budgets for two different types of resources; then arc k_0 may cost 1 of r_0 and 2 of r_1, while arc k_1 may cost 0 of r_0 and 3 of r_1. This restriction is not much of a concern, however, considering that in one of the most prominent works regarding SPNI, the authors of <cit.> also do not accommodate this case in their algorithmic specializations.
§ CONCLUSION
Solving SPNI holds crucial importance in the defense, engineering and other domains, addressing challenges such as securing critical infrastructure against terrorist attacks and natural disasters and disrupting enemy supply networks, among other applications. Current solutions to SPNI, however, are often too slow to scale to large networks, and thus prove to be impractical for real world purposes. In this paper, we have introduced a novel decomposition approach to SPNI that can harness the rapid solving abilities of IPU hardware to yield an approximate solution. We have further shown that solutions generated by our decomposition algorithm are extremely close in quality to those of state-of-the-art integer programming methods. We anticipate that as IPU hardware advancements continue to improve the speed of solving optimization problems, our algorithm will enable an even quicker approximation of SPNI, thereby facilitating a more efficient approach for real world scenarios of SPNI. More work is under way to produce higher quality approximations in even shorter amounts of time and to extend the algorithm to more general forms of SPNI.
Reproducibility: Our results and code are available at <https://github.com/krishxmatta/network-interdiction>.
§ ACKNOWLEDGEMENTS
We would like to thank J. Cole Smith and Yongjia Song, the authors of <cit.>, for their input and discussion about the current status of the network interdiction solvers.
This work was supported in part with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S.Government.
|
http://arxiv.org/abs/2307.07335v1 | 20230714133041 | Benchmarking Digital-Analog Quantum Computation | [
"Vicente Pina Canelles",
"Manuel G. Algaba",
"Hermanni Heimonen",
"Miha Papič",
"Mario Ponce",
"Jami Rönkkö",
"Manish J. Thapa",
"Inés de Vega",
"Adrian Auer"
] | quant-ph | [
"quant-ph"
] |
[email protected]
IQM Germany GmbH, Nymphenburgerstrasse 86, 80636 Munich, Germany
Department of Physics and Arnold Sommerfeld Center for Theoretical Physics, Ludwig-Maximilians-Universität München, Theresienstrasse 37, 80333 Munich, Germany
IQM Finland Oy, Keilaranta 19, 02150 Espoo, Finland.
Digital-Analog Quantum Computation (DAQC) has recently been proposed as an alternative to the standard paradigm of digital quantum computation. DAQC creates entanglement through a continuous or analog evolution of the whole device, rather than by applying two-qubit gates. This manuscript describes an in-depth analysis of DAQC by extending its implementation to arbitrary connectivities and by performing the first systematic study of its scaling properties. We specify the analysis for three examples of quantum algorithms, showing that except for a few specific cases, DAQC is in fact disadvantageous with respect to the digital case.
Benchmarking Digital-Analog Quantum Computation
Adrian Auer
August 12, 2023
===============================================
§ INTRODUCTION
Digital-Analog Quantum Computation (DAQC) has recently emerged as a new paradigm, posing an alternative to standard Digital Quantum Computation (DQC) <cit.>. The objective of DAQC is to utilize the natural evolution of a quantum device, generated by a given entangling Hamiltonian, with engineered control only over single-qubit gates (SQGs) to perform quantum computations. The reasoning is that such systems ought to be much simpler to operate experimentally than systems requiring the fine control of digital quantum computation. The analog evolution of the device has also been argued to be more robust against control errors than two-qubit gates (TQGs) <cit.>, which are the typical entangling operations utilized in DQC. While this analog evolution is not universal (it cannot implement an arbitrary unitary evolution), combining it with SQGs can make it universal, by effectively engineering the evolution under arbitrary Hamiltonians.
Although ideas similar to DAQC have been discussed in Refs. <cit.>, where multi-qubit gates combined with single qubit gates are explored, the proposal in Refs. <cit.> is particularly interesting because it does not require a specific qubit connectivity, as we show in this work, and can therefore be applied to many different promising quantum computing architectures (such as trapped ions, superconducting circuits or Rydberg atoms). In this regard, Refs. <cit.> have described the method for implementing universal quantum computation using DAQC for two types of devices, defined by their connectivities: all-to-all (ATA), in which all qubits are directly coupled to all other qubits <cit.>, and an open linear chain, in which all qubits are coupled to their nearest neighbors in a one-dimensional array <cit.>. In addition to several simulation protocols <cit.>, DAQC algorithms have been proposed to implement the Quantum Fourier Transform (QFT) routine <cit.>, an instance of the Quantum Phase Estimation (QPE) algorithm <cit.>, the Harrow-Hassidim-Lloyd algorithm for solving linear systems of equations <cit.>, and the Quantum Approximate Optimization Algorithm (QAOA) <cit.>. These works offer preliminary analyses on the scaling properties of DAQC with respect to DQC, as well as on the impact of noise and errors on their performance.
Thus, while DAQC appears as a promising alternative to the digital case, a complete analysis of its implementation in arbitrary device connectivities, its scaling properties, as well as the impact of errors on its performance is still missing in the literature <cit.>. This manuscript tackles this problem by providing the first systematic study of the limitations and potential of DAQC: (i) we provide a generalization of this paradigm for any type of connectivity, (ii) we provide an analysis of the error scaling of DAQC with the number of qubits of the device, considering a detailed account of the number of operations introduced, identifying and accounting for major sources of error, and (iii) focusing on the QFT algorithm and the Greenberger-Horne-Zeilinger (GHZ) state <cit.> preparation, we show how specific connectivities, in this case a star layout, may have a positive impact on the scaling properties of DAQC due to a reduction in the number of analog blocks. Throughout our study, we consider the two versions of DAQC proposed in the literature: stepwise DAQC, consisting of a sequential approach where all the interactions are simultaneously switched on and off between layers of single qubit gates, simplifying the theoretical analysis, and the experimentally attractive banged DAQC where an always-on multi-qubit interaction is overlaid with fast single-qubit pulses.
Our analysis allows us to conclude that, in general, DAQC scales unfavorably with respect to the digital paradigm, except for some cases in which a specific algorithm is implemented on a tailored device connectivity. In this regard, the closer the computation's Hamiltonian is to that of the device, the better the scaling properties of DAQC.
The manuscript is organized as follows. In <Ref>, we develop general stepwise and banged DAQC methods for devices with an arbitrary connectivity. In <Ref>, we perform a theoretical analysis of the error scaling of DAQC, and compare it to DQC. In <Ref>, we write a specific DAQC protocol for a star-connectivity device, similar to the one for a one-dimensional chain of <cit.>, that significantly improves the results of DAQC. In <Ref>, we perform an error analysis of the QFT algorithm, and, in <Ref>, of the GHZ state preparation routine. Finally, in <Ref>, we perform numerical simulations for three digital-analog algorithms and compare their performance against their digital counterparts, in terms of fidelity and time of execution.
§ DIGITAL-ANALOG QUANTUM COMPUTATION WITH ARBITRARY CONNECTIVITY
Currently existing DAQC algorithms have been developed for an ATA qubit connectivity <cit.> and for a one-dimensional qubit chain with nearest neighbor couplings <cit.>. However, promising quantum computing architectures like those based on superconducting qubits consist of planar devices where only local interactions with nearest neighbors can be natively implemented. In this section we provide a generalization by developing a protocol for implementing universal digital-analog quantum computation on a device with an arbitrary connectivity. A more succinct method to achieve such a general protocol is described in <cit.>, which utilizes the ATA case as a starting point. For the sake of completeness, in this section we describe our method from the ground up.
§.§ Resource and Target Hamiltonian
Throughout this manuscript, we distinguish between resource and target Hamiltonians:
* The resource Hamiltonian is the entangling Hamiltonian according to which the qubits of a device evolve naturally, when all interactions are turned on <cit.>. Its coupling coefficients are assumed to be constant and non-tunable during the computation, though they can be turned on or off simultaneously as desired. In the following, we denote resource Hamiltonians as H̅.
* The target Hamiltonian is the entangling Hamiltonian that generates a specific unitary which we wish to implement. Its coupling coefficients can be chosen arbitrarily, depending on the computation to be implemented. We denote target Hamiltonians as H.
We assume that the resource Hamiltonians are of ZZ-Ising type and that the target Hamiltonians that we wish to implement are also of the ZZ-Ising type,
H̅_𝒞 = ∑_(j, k) ∈𝒞g̅_jk Z^j Z^k ,
H_𝒞 = ∑_(j, k) ∈𝒞 g_jk Z^j Z^k ,
where, formally, we have defined the connectivity of a device (i.e., that of its resource Hamiltonian) as the collection of c pairs of qubits that are connected, and we write it as 𝒞 = {(j, k)}, where j, k are qubit indices and k>j. Additionally, Z^j is the Pauli-Z operator acting on qubit j,
Z = [ 1 0; 0 -1 ],
and g̅_jk (g_jk) are the coupling coefficients of the resource (target) Hamiltonian. A method for other types of two-body Hamiltonians is given in Ref. <cit.>, which, utilizing significantly more resources, is able to engineer target Hamiltonians with arbitrary Pauli operators using resource Hamiltonians with other arbitrary Pauli operators. In this manuscript we study algorithms that require only ZZ-type target Hamiltonians, so the assumptions of Eqs. (<ref>) and (<ref>) are valid for our purposes.
An analog block is the multi-qubit entangling operation consisting of the evolution of all qubits under the resource Hamiltonian, for a finite and tunable time t,
U_H̅_𝒞(t) = exp(-i t H̅_𝒞) ,
where (and from now on in this manuscript) we have set ħ=1 and work in natural units.
The evolution unitary U_H_𝒞 under the target Hamiltonian H_𝒞, for some time t_f, is given by
U_H_𝒞(t_f) = exp(i t_f H_𝒞)
= exp(i t_f ∑_(j, k) ∈𝒞 g_jk Z^j Z^k)
= ∏_(j, k) ∈𝒞exp(i t_f g_jk Z^j Z^k) ,
which is equivalent to implementing c two-qubit gates (TQGs) of the form
ZZ^jk(ϕ_jk) = e^i ϕ_jk Z^j Z^k ,
with phases ϕ_jk = t_f g_jk (mod 2π) (see <Ref>). The set of operations comprising such unitaries U_H_𝒞 and arbitrary SQGs is universal <cit.>. Therefore, any quantum algorithm can be written as a combination of SQGs and the evolution under such target Hamiltonians, which themselves can be expressed as combinations of SQGs and analog blocks as we will explain in the following subsections. Thus, analog blocks along with SQGs are universal. We provide an example of how a quantum algorithm can be decomposed into such operations in <Ref>.
In the following subsections, we explain how one can effectively implement an arbitrary target Hamiltonian by making use of SQGs and a given resource Hamiltonian.
§.§ The stepwise digital-analog quantum circuit
The digital-analog quantum circuit we will describe in this subsection is constructed in the so-called stepwise DAQC (sDAQC) paradigm <cit.>, as opposed to the banged DAQC (bDAQC) paradigm <cit.> that will be discussed in <ref>. The defining characteristic of sDAQC is our ability to implement analog blocks with a defined beginning and end, by turning on and off all the interactions simultaneously.
The resource Hamiltonian's coupling coefficients {g̅_jk}_(j,k)∈𝒞 are fixed by definition, though we assume that the qubits of the device can interact for a certain time t under the resource Hamiltonian in Eq.(<ref>) <cit.>. Because of this, we are only left with tuning the time of the evolution. The core idea of a DAQC protocol is to find a way to effectively engineer the desired coefficients of the target Hamiltonian, {g_jk}, by tuning the times of the analog blocks of a digital-analog quantum circuit (see Eq. (<ref>)), which can comprise analog blocks and SQGs. Inspired by the methods of <cit.>, we construct a digital-analog quantum circuit which contains c analog blocks, each running for some time t_mn (with the indices m, n running over the number of qubits, similarly to j, k), that implies a transformation
{t_mn}_(m,n)∈𝒞⟶{g_jk}_(j,k)∈𝒞 .
We will provide the method for calculating the appropriate runtimes {t_mn} which effectively implement the correct coefficients {g_jk} in Sec. <ref>, and for now concentrate on the construction of the DAQC circuit.
For the sake of clarity, we denote qubit indices as (j,k) or (m,n) as a shorthand notation for connected qubit pairs in the set 𝒞. We start our considerations with a quantum circuit that consists of c analog blocks, each running for some time t_mn. In order to be able to implement a target Hamiltonian with arbitrary coefficients, firstly we need to effectively modify the signs of the coupling coefficients within each analog block. This is because we use the combined evolution of these different effective analog blocks, with modified signs, to engineer the arbitrary target Hamiltonian. To this end, we will interleave X gates in between the analog blocks, and make use of the identity <cit.>
X^a Z^b X^a = (-1)^δ_ab Z^b ,
where X is the Pauli-X operator,
X = [ 0 1; 1 0 ].
Then, placing an X^a gate before and after an analog block has the effect of flipping the signs of all the terms in H̅_𝒞 involving the qubit a, effectively implementing a different Hamiltonian, H̅_𝒞^'
U_H̅_𝒞^'(t) = X^a exp(-i t H̅_𝒞) X^a
= X^a exp(-i t ∑_(j, k)g̅_jk Z^j Z^k) X^a
= exp(-i t ∑_(j, k)g̅_jk X^a Z^j Z^k X^a)
= exp(-i t ∑_(j, k) (-1)^δ_aj + δ_akg̅_jk Z^j Z^k) ,
where we have used the property R e^i t H R^† = e^i t R H R^†, provided that R is unitary <cit.>. Using this procedure, we can implement effective Hamiltonians that differ from the resource Hamiltonian by one or more sign flips, in each of the c analog blocks of the circuit.
Assume our quantum circuit is similar to that of <Ref>, where each of the analog blocks is preceded and followed by X gates placed on the same connected qubits appearing in the connectivity 𝒞. This specific way of placing the X gates will allow us in the next subsection to derive the explicit relationship between the times of the analog blocks and the coefficients of the target Hamiltonian. The evolution of a quantum state according to this circuit is given by
U_DAQC = ∏_(m, n) X^m X^n exp(-i t_mn H̅_𝒞) X^m X^n
       = ∏_(m, n) exp(-i t_mn X^m X^n H̅_𝒞 X^m X^n)
       = ∏_(m, n) exp(-i ∑_(j, k) t_mn g̅_jk X^m X^n Z^j Z^k X^m X^n) .
§.§ Runtimes of the analog blocks in stepwise DAQC
We now turn towards the calculation of the runtimes t_mn of the analog blocks. Utilizing Eq. (<ref>), we can write Eq. (<ref>) as
U_DAQC = ∏_(m, n) exp(-i ∑_(j, k) t_mn g̅_jk (-1)^(δ_mj + δ_mk + δ_nj + δ_nk) Z^j Z^k)
       = ∏_(m, n) exp(-i ∑_(j, k) t_mn g̅_jk M_mnjk Z^j Z^k)
       = exp(-i ∑_(m, n) ∑_(j, k) t_mn g̅_jk M_mnjk Z^j Z^k) .
We have defined the tensor M_mnjk≡ (-1)^δ_mj+δ_mk+δ_nj+δ_nk containing c elements taking the values ± 1. We can convert these elements M_mnjk into a c × c matrix with entries M_αβ by “vectorizing” the pairs of coupled qubits (m, n) →α; (j, k) →β characterized by a single index each, as explained in <Ref>. This also “vectorizes” the times t_mn→𝐭 and the coupling coefficients g̅_jk→𝐠̅, g_jk→𝐠.
The interpretation of the sign of a given element M_αβ is the following: if M_αβ = +1 (-1), it means that the effective coupling corresponding to the α-th connection, during the β-th analog block, is positive (negative).
Let us compare now Eq. (<ref>), which is the evolution we implement through the DAQC protocol, with Eq. (<ref>), which is the evolution under the target Hamiltonian we wish to simulate. They are equal if the following vector equation is fulfilled,
𝐆 t_f = M 𝐭 ,
where we define each element of 𝐆 as G_β≡g_β/g̅_β.
The runtimes of each analog block can therefore be calculated, such that, effectively, the time evolution under the target Hamiltonian is implemented, by inverting the matrix M,
𝐭 = M^-1𝐆 t_f .
Eq. (<ref>) allows us to find a vector of times 𝐭 of the analog blocks such that the circuit described above effectively implements the evolution under the desired target Hamiltonian, provided that the matrix M is invertible. This invertibility must be studied on a case-by-case basis, and the SQG placement may be shifted to produce a different matrix M, in this case invertible, while still making the DAQC protocol universal <cit.>.
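A minimal NumPy sketch of this construction, for a ZZ resource/target pair on an arbitrary connectivity, could read as follows; the ordering convention (rows indexed by couplings, columns by analog blocks) follows the definition of M above, and the function simply assumes that M is invertible.

import numpy as np

def analog_block_times(connectivity, g_bar, g_target, t_f):
    """Solve G t_f = M t for the analog-block times t of a stepwise DAQC circuit.

    connectivity: list of coupled qubit pairs (j, k); g_bar / g_target: dicts
    mapping each pair to the resource / target coupling coefficient.
    """
    pairs = list(connectivity)
    c = len(pairs)
    M = np.empty((c, c))
    for beta, (m, n) in enumerate(pairs):        # analog block with X gates on m, n
        for alpha, (j, k) in enumerate(pairs):   # coupling term Z^j Z^k
            flips = (m == j) + (m == k) + (n == j) + (n == k)
            M[alpha, beta] = (-1) ** flips       # M_{alpha,beta} = (-1)^(sum of deltas)
    G = np.array([g_target[p] / g_bar[p] for p in pairs])
    return np.linalg.solve(M, G * t_f)           # times of the c analog blocks

# Example: 3-qubit all-to-all device with uniform resource couplings
# pairs = [(0, 1), (0, 2), (1, 2)]
# t = analog_block_times(pairs, {p: 1.0 for p in pairs},
#                        {(0, 1): 0.5, (0, 2): 0.0, (1, 2): -0.3}, t_f=1.0)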
The case in which the resource Hamiltonian does not have only 2-body terms, but rather up to M-body terms with M ≥ 3, is described in <Ref>. Alternatively, an efficient way to effectively get rid of all odd-body terms (if present) in the resource Hamiltonian is explained in <Ref>.
§.§ Banged DAQC
In addition to sDAQC, which was described in the subsection above, another paradigm exists to perform an approximate digital-analog quantum computation, called banged DAQC (bDAQC) <cit.>. The idea is that SQGs are applied simultaneously to an analog block, which runs throughout the whole circuit (see <Ref>). The motivation behind bDAQC is that it does not require us to “turn on and off” the analog Hamiltonian throughout the quantum circuit, but rather it stays constantly on from beginning to end. Repeatedly turning the analog blocks on and off introduces, for example, coherent errors such as leakage to non-computational states <cit.>. In addition, such a procedure also suffers from calibration errors because each time an analog block is turned on, it needs a fine-tuned calibration of the control pulse parameters, upon which the unitary evolution is sensitive <cit.>.
Consequently, a slight modification in the analog times between the layers of SQGs is required <cit.>. Specifically, for a quantum circuit with l analog blocks, the first and last (referred to as “boundary”) analog block times are modified by the single qubit gate duration Δ t to
t_1, l^' = t_1, l - 3/2Δ t ,
and the rest (referred to as “central”) of analog blocks' times are modified to
t^'_α = t_α - Δ t , α ∈ {2, …, l-1} .
The evolution under the simultaneous SQGs and analog block is given by
U_H̅+H_s(Δ t) = exp(-i Δ t [H̅ + H_s]) ,
where H_s is the Hamiltonian that generates the SQGs. In general, H̅ and H_s might not commute. This introduces a reverse Trotter error <cit.>, due to which the bDAQC computation is not exactly equal to the evolution generated by the target Hamiltonian anymore. This error depends, among other things, on the duration of the SQGs Δ t, and it is different for the boundary analog blocks and for the central analog blocks, due to different Trotterization methods. Keep in mind that, usually, the term Trotter error is used in the case in which the ideal evolution is that of non-commuting Hamiltonians acting simultaneously, and is introduced when “splitting” it into sequential evolutions under each individual Hamiltonian <cit.>. However, we use it in the reverse case: the ideal evolution is that of the sequential application of the Hamiltonians, and the error is introduced when applying them simultaneously.
Consequently, there is a trade-off between eliminating the errors that arise from turning the analog blocks on and off, and introducing the Trotter error.
We study this intrinsic error associated with bDAQC, and its scaling, in more detail in <Ref>.
§ ERROR SCALING IN DAQC
The method proposed in <Ref> for performing DAQC introduces errors that differ from those in DQC in several ways. In this section, we study how these errors scale with the number of connections and qubits of the device, and how they compare to the DQC paradigm.
We focus our analysis on the errors related to imperfect control parameters, given that these are ubiquitous across quantum computing platforms, whereas the nature of environmental errors can vastly change across them. However, we make some general remarks on the latter in <Ref>.
§.§ Analog blocks
Performing a DAQC algorithm requires to construct a circuit with c analog blocks for each target Hamiltonian that needs to be simulated, where c is the total number of connections. This is due to the fact that, in Eq. (<ref>), we obtain a vector 𝐭 of the analog block times which contains c elements. It is important to note that, in general, c analog blocks are required to implement a target Hamiltonian, independently of how many coefficients of said Hamiltonian are equal to 0. As an example, we provide the digital-analog circuit for implementing a target Hamiltonian on a 4-qubit device with ATA connectivity, for which c=6, in <Ref>.
Although one might expect analog blocks to perform well because they implement the natural dynamics of a device, they may still be subject to significant errors. These errors may have higher or lower relevance depending on whether we are considering bDAQC or sDAQC, and are described in the following subsections.
§.§.§ Ramp-up and ramp-down errors
Calibration errors can be produced when switching on and off the analog blocks <cit.>, a process during which the evolution differs from the ideal square pulse assumed in <Ref>. Such errors can be modeled as an uncertainty in the time t of application of the resource Hamiltonian (see <Ref>). Additionally, the ramp-up and ramp-down procedure can introduce other types of error, such as leakage to non-computational states <cit.>.
These errors are particularly relevant within sDAQC, which requires ramping up and down the resource Hamiltonian's coupling coefficients repeatedly during the algorithm, whereas bDAQC mitigates this error by requiring it only at the beginning and end of the execution of the circuit.
§.§.§ Two-qubit terms in analog blocks
The resource Hamiltonian considered to implement a DAQC algorithm should be descriptive of the natural dynamics of the device. However, there might be several sources of characterization errors associated to their implementation:
* The resource Hamiltonian might still be an approximation to the actual dynamics for some quantum computing platforms. This is the case, for example, in superconducting qubit devices where the native dynamics are described with a Bose-Hubbard Hamiltonian <cit.>, and the qubitized form of the Hamiltonian is still an approximation to it <cit.>. In this case, a qubitized form of the Hamiltonian reduced to two-qubit interactions is in general valid only at relatively short times after the activation of an analog block.
* The parameters of the resource Hamiltonian might be inaccurately characterized. All together, the resource Hamiltonian H̅_𝒞 contains c coupling coefficients, g̅_jk. This means that the execution of each analog block introduces c terms that have a potential mischaracterization error, even assuming that the physical Hamiltonian will always have the exact form as in Eq. (<ref>). Such an error can be modeled as an uncertainty in the coupling coefficients g̅ of the resource Hamiltonian (see <Ref>).
As an example of the latter, we show the equivalence of one analog block as two-qubit terms on a 4-qubit device with ATA connectivity in <Ref>. In general, the calibration of a large number of digital gates is simpler compared to the calibration of an analog block of the same size, since we can calibrate each gate individually. In this regard, the precise many-body Hamiltonian identification needed for the successful characterization of an analog block is still the subject of ongoing research <cit.>.
In addition, we know that a target Hamiltonian requires c analog blocks (<Ref>), and that each analog block introduces c two-qubit terms (<Ref>), so the total number of two-qubit terms needed to implement a target Hamiltonian is c^2. However, the error in the coefficients of the resource Hamiltonian is multiplied by the runtime of each analog block, t_α (see Eq. (<ref>)). Therefore, the error is not necessarily proportional to the number of two-qubit terms, and the runtimes of the analog blocks must be considered in the error scaling analysis.
§.§.§ Environmental errors
As in the digital case, the dynamics of analog blocks is subject to the impact of its environment, which produces decoherence and information loss. While the environment responsible for the coherence decay is the same in both the digital and DAQC cases, the analog blocks may dissipate in a more complex and potentially faster way, specifically at longer timescales, where the presence of non-local decay channels involving multiple neighboring qubits may become increasingly relevant <cit.>. Depending on the physical implementation of the qubit states, many-body effects related to collective decay can arise in a variety of physical systems, such as atom arrays <cit.>, quantum dots <cit.> and also superconducting circuits <cit.>.
§.§ Depth and duration
The depth of a digital quantum circuit is defined as the number of distinct timesteps at which gates are applied <cit.>. It constitutes a measure of how long it takes to execute the quantum circuit, because each gate generally has a fixed duration.
In the DAQC framework, the number of distinct timesteps is not directly related to how long it takes to execute a quantum circuit, because each analog block in general has a different duration. Therefore, we need to sum the duration of the analog blocks and layers of single gates. Recall that the vector of the analog block times is calculated via the matrix M^-1 (see Eq. (<ref>)), and we cannot make any general statements on the form of M^-1. Thus, the duration of DAQC algorithms must be calculated and studied on a case-by-case basis. In <Ref>, we calculate numerically the total runtime of the algorithms that we explore in this manuscript.
§.§ Single-qubit gates
The DAQC method also introduces extra X gates to simulate one target Hamiltonian. The exact number depends on the device's connectivity. However, from the method presented in <Ref> or from <Ref>, it is straightforward to see that a constant number of X gates is introduced per analog block. Thus, the number of extra SQGs introduced per target Hamiltonian is proportional to c.
§.§ bDAQC non-commutativity errors
As discussed above, the bDAQC paradigm does not require switching the analog blocks on and off repeatedly; it only requires the activation of a single block during the whole protocol, thus reducing the corresponding calibration errors. However, in bDAQC the non-commutativity of SQGs with the resource Hamiltonian also introduces an error (see <Ref>), which would only disappear for infinitely fast SQGs <cit.>. The non-commutativity error is different for the boundary analog blocks (at the beginning and the end of the DAQC circuit) and the central analog blocks. In this section, we only focus on the central analog blocks because they appear significantly more often in DAQC circuits and therefore have a larger total error contribution.
Specifically, when a SQG generated by a Hamiltonian H_s^a applied for some time Δ t, U^a = exp(-i H_s^a Δ t), is applied on qubit a, the error introduced is given by <cit.>
e_central = 1-e^-i H̅Δ t/2 e^-i H_s^a Δ t e^-i H̅Δ t /2 e^i (H̅ + H_s^a) Δ t
= (Δ t)^3/4[[H̅, H_s^a], H̅ + 2 H_s^a] + 𝒪((Δ t)^4) .
In the following, we work out the explicit dependence on Δ t by carefully analyzing Eq. (<ref>). If a SQG has a given rotation angle (for example, if it is an X gate), the amplitude of its generator Hamiltonian is inversely proportional to the SQG's time: H_s^a = π/(2 Δ t) X. Since the Hamiltonian that generates the SQG, H_s^a, appears twice in the nested commutators of Eq. (<ref>), we find that the explicit dependence of the error e_central on the SQG gate time Δ t is linear, e_central∝Δ t, in contrast to previous literature <cit.>.
Additionally, the resource Hamiltonian H̅ also appears twice in the nested commutators. Thus, there is an additional dependence with the degree of qubit a (i.e., the number of couplings) and with the resource Hamiltonian's coupling coefficients.
Specifically, the infidelity introduced by the non-commutativity of a gate X^a with the resource Hamiltonian H̅ is
ϵ_central = 𝒪(d_a g̅Δ t + d_a^2 g̅^2 Δ t^2) ,
where d_a is the degree of qubit a, and g̅ is the coupling coefficient of the resource Hamiltonian (assumed to be homogeneous for simplicity). An upper bound with a similar scaling is given in Ref. <cit.>, and a detailed derivation of the scaling given in Eq. (<ref>) is provided in <Ref>.
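As a rough sanity check of this scaling, the following minimal Python sketch (our own illustration, not code from the paper) compares the "banged" evolution of a central block with the ideal simultaneous evolution for a qubit of degree d and reports the average-gate infidelity for a few values of Δ t; the values of d, g̅ and Δ t are arbitrary placeholders.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])

def op(single, site, n):
    """Tensor product acting with `single` on qubit `site` of an n-qubit register."""
    mats = [single if q == site else I2 for q in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def central_block_infidelity(d, g_bar, dt):
    n = d + 1                                   # qubit a (= 0) plus its d neighbours
    H_res = g_bar * sum(op(Z, 0, n) @ op(Z, k, n) for k in range(1, n))
    H_sqg = (np.pi / (2 * dt)) * op(X, 0, n)    # generator of an X gate of duration dt
    U_bang = (expm(-1j * H_res * dt / 2) @ expm(-1j * H_sqg * dt)
              @ expm(-1j * H_res * dt / 2))
    U_ideal = expm(-1j * (H_res + H_sqg) * dt)
    dim = 2 ** n
    fid = (dim + abs(np.trace(U_ideal.conj().T @ U_bang)) ** 2) / (dim * (dim + 1))
    return 1 - fid

for dt in [5e-3, 1e-2, 2e-2]:                   # in units where g_bar = 1
    print(dt, central_block_infidelity(d=2, g_bar=1.0, dt=dt))
```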
§.§ Compound fidelity
In this subsection, we aim to write approximate formulas for the fidelities of DQC, sDAQC and bDAQC that account for the scaling of all the sources of error studied in this section, and their individual infidelity contributions. In order to do so, we make two assumptions:
* Each one- and two-qubit term in an evolution operator U corresponding to a SQG, TQG or analog block has a fidelity f < 1 arising from control errors, which is independent of all other operations.
* The main source of decoherence is thermal relaxation, and we consider a simple Markovian model for it, such that the fidelity per qubit for an algorithm that requires a time t has the approximate form F_T_1≈ e^-t/T_1, where T_1 is the relaxation time. Additionally, we consider this infidelity to be independent for each qubit, and also independent from their unitary dynamics (disregarding the complex decaying channels that can arise in DAQC, as discussed in <Ref>).
Under these assumptions, the approximate total fidelity of a digital circuit implementing one given target Hamiltonian with c terms on a device with c connections is
F_DQC≈ (f_TQG)^n_TQT
× e^-N t_tot /T_1
= (f_TQG)^c
× e^-N t_tot /T_1 ,
where f_TQG is the fidelity of each TQG, n_TQT is the number of two-qubit terms (i.e., of TQGs) and t_tot is the total execution time of the circuit. The compound fidelity F_DQC in Eq. (<ref>) accounts for that of the TQGs, and decoherence due to thermal relaxation.
On the other hand, the approximate fidelity of a stepwise digital-analog circuit implementing the same target Hamiltonian is given by
F_sDAQC≈ [(f_ramp) × (f_coupling) ]^n_TQT
× (f_SQG)^n_SQG
× e^-Nt_tot/T_1 ,
= [(f_ramp) × (f_coupling) ]^c^2
× (f_SQG)^𝒪(c)
× e^-Nt_tot/T_1 ,
where f_ramp is the fidelity associated with the ramp-up and ramp-down errors, f_coupling is the fidelity associated with the mischaracterization of g̅, f_SQG is the fidelity of SQGs, and n_SQG is the number of SQGs.
Finally, the approximate fidelity of a banged digital-analog circuit implementing said target Hamiltonian is
F_bDAQC≈ [(f_ramp)^c × (f_coupling)^n_TQT]
× (f_SQG)^n_SQG
× e^-Nt_tot/T_1
× (1-ϵ_central)^n_AB .
= [(f_ramp)^c× (f_coupling)^c^2]
× (f_SQG)^𝒪(c)
× e^-Nt_tot/T_1
× (1-ϵ_central)^c ,
where n_AB is the number of analog blocks. In this case, the contribution to infidelity from ramp-up and ramp-down errors gets significantly reduced, while the infidelity from non-commutativity is introduced.
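The sketch below (an illustration under the independence assumptions above) turns these approximate formulas into simple estimator functions; the fidelities, times and operation counts used in the example call are placeholders of our own choosing, not values taken from the paper.

```python
import numpy as np

def f_dqc(f_tqg, c, n_qubits, t_tot, T1):
    return f_tqg ** c * np.exp(-n_qubits * t_tot / T1)

def f_sdaqc(f_ramp, f_coupling, f_sqg, n_tqt, n_sqg, n_qubits, t_tot, T1):
    return ((f_ramp * f_coupling) ** n_tqt * f_sqg ** n_sqg
            * np.exp(-n_qubits * t_tot / T1))

def f_bdaqc(f_ramp, f_coupling, f_sqg, eps_central,
            c, n_tqt, n_sqg, n_ab, n_qubits, t_tot, T1):
    return (f_ramp ** c * f_coupling ** n_tqt * f_sqg ** n_sqg
            * np.exp(-n_qubits * t_tot / T1) * (1 - eps_central) ** n_ab)

# Example: one target Hamiltonian on a 4-qubit ATA device (c = 6).
c = 6
print(f_dqc(f_tqg=0.999, c=c, n_qubits=4, t_tot=300e-9, T1=50e-6))
print(f_sdaqc(f_ramp=0.9999, f_coupling=0.9995, f_sqg=0.9999,
              n_tqt=c**2, n_sqg=4 * c, n_qubits=4, t_tot=600e-9, T1=50e-6))
print(f_bdaqc(f_ramp=0.9999, f_coupling=0.9995, f_sqg=0.9999, eps_central=1e-3,
              c=c, n_tqt=c**2, n_sqg=4 * c, n_ab=c, n_qubits=4,
              t_tot=600e-9, T1=50e-6))
```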
§ OPTIMIZED DAQC ON A DEVICE WITH A STAR-CONNECTIVITY
As discussed in the previous section, some of the error sources present in DAQC are sensitive to the number of analog blocks required within the protocol, and to the total time of the quantum circuit.
While the protocol described in <Ref> is general for any arbitrary connectivity, ad hoc protocols can be developed for specific connectivities using fewer analog blocks and, consequently, shorter algorithm runtimes. For example, in Ref. <cit.>, an optimized DAQC protocol is developed for a device with a nearest-neighbors connectivity in an open, one-dimensional graph, which reduces the number of analog blocks, and also their runtimes.
On the other hand, we focus on a device with a so-called star-connectivity, where a central qubit is coupled to N-1 other external qubits (see <Ref>). We can write said connectivity as 𝒮 = {(0, 1), (0, 2), …, (0, N-1)}, where we label the central qubit with index 0. Ref. <cit.>, e.g., describes how an effective star-connectivity device can be built out of superconducting circuits.
The main idea behind the optimized DAQC protocol <cit.> is to place the X gates in such a way that we obtain an (N-1) × (N-1) sign matrix M, that relates the coupling coefficients of the resource and target Hamiltonians to the analog times according to Eq. (<ref>), of the form
M = [ 1 1 1 ⋯ 1 1 1; -1 1 1 ⋯ 1 1 1; -1 -1 1 ⋯ 1 1 1; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; -1 -1 -1 ⋯ 1 1 1; -1 -1 -1 ⋯ -1 1 1; -1 -1 -1 ⋯ -1 -1 1 ],
i.e. a matrix with its elements being 1 on and above the diagonal, and -1 below the diagonal.
Recall the definition of the vector 𝐆 from Eq. (<ref>), with elements G_β = g_β / g̅_β. We reorder and express, without loss of generality, the elements of vector 𝐆 in such a way that the following conditions are met,
G_β ≥ 0 ,
G_β ≥ G_β+1 .
For the first condition to be met, we may need to shift the phase of the evolution, ϕ_β→ϕ_β^' = ϕ_β - 2π (see Eq. (<ref>)), in order to change the signs of the target coefficients (recall that ϕ_β = t_f g_β (2π)). Through this transformation, we can change the sign of G_β without affecting said unitary evolution. For the second condition, we may need to change the order of the labels of the coefficients in 𝐆. Under these conditions, it is proven in Ref. <cit.> that the inverse M^-1 yields runtimes for the analog blocks (see Eq. (<ref>)) given by
t_α/t_f = (G_α - G_α+1)/2 ,
t_N-1/t_f = (G_1 + G_N-1)/2 .
Also in Ref. <cit.>, it is proven that these equations lead to the minimum number of analog blocks, running for a minimal time, required to implement a given target Hamiltonian. One can see in Eq. (<ref>) how the number of analog blocks gets reduced if k elements of 𝐆 are equal, which makes k-1 elements of 𝐭 equal to zero. Also, one can see how the time of each analog block t_α gets reduced as the difference between G_α and G_α+1 gets smaller. Then, our task is to find the correct placement of the X gates in our digital-analog circuit, in order to obtain an M matrix of the form (<ref>) for a star-connectivity.
Recalling the interpretation of the M matrix from <Ref>, a digital-analog quantum circuit corresponding to the matrix M can be constructed by flipping all but one of the connections in the first analog block, and flipping one fewer connection in each subsequent analog block. An example of such a circuit for 5 qubits is given in <Ref>.
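As an illustration of this construction, the following sketch (our own, with example coupling ratios) builds the sign matrix M of the form above, solves t = M^{-1} G t_f numerically, and compares the result with the closed-form times quoted above.

```python
import numpy as np

def star_sign_matrix(n_ext):
    """(N-1) x (N-1) matrix with +1 on and above the diagonal, -1 below."""
    M = np.ones((n_ext, n_ext))
    M[np.tril_indices(n_ext, k=-1)] = -1.0
    return M

def analog_times(G, t_f=1.0):
    """Solve M t = t_f G for the analog-block runtimes."""
    M = star_sign_matrix(len(G))
    return np.linalg.solve(M, np.asarray(G, dtype=float) * t_f)

# Example ratios G_beta = g_beta / g_bar, already non-negative and sorted in
# non-increasing order as required above (placeholder numbers).
G = [1.0, 0.7, 0.7, 0.2]
t = analog_times(G)
closed_form = [(G[i] - G[i + 1]) / 2 for i in range(len(G) - 1)] + [(G[0] + G[-1]) / 2]
print(t)                      # [0.15, 0.0, 0.25, 0.6]; t_2 = 0 because G_2 = G_3
assert np.allclose(t, closed_form)
```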
Additionally, recall our discussion of non-commutativity errors in bDAQC in <Ref> and their dependence on the qubits' degree. This protocol also minimizes the number of overlapping Z^j Z^a terms with the X^a gates introduced, given that they are only acting on the external qubits, which have degree d=1. Compared against other connectivities, qubits in an ATA connectivity have d=N-1, and in a one-dimensional chain they have d=2. Thus, this protocol also minimizes the error introduced by the non-commutativity of the resource Hamiltonian and the single-qubit terms.
Such optimized protocols have been described only for the one-dimensional open chain (in Ref. <cit.>) and for the star-connectivity (in this manuscript) so far. This is because, in general, it is not possible to change the sign of just one connection in an arbitrary connectivity without changing the others, which is required to get the necessary M matrix (<ref>). Take, for example, a square lattice: to flip a connection between two qubits, we place X gates on one of the qubits involved, but this flips three additional connections. To correct these additional flipped signs, we can place X gates on the three other qubits involved, but this flips three additional signs each. For such a reason, it is not possible to flip the sign of a connection in an isolated way in an arbitrary connectivity.
§ DIGITAL-ANALOG QUANTUM FOURIER TRANSFORM
After introducing the theoretical framework of DAQC, we now present the details of the implementation of a specific quantum algorithm utilizing DAQC: the digital-analog Quantum Fourier Transform (QFT). This algorithm was described and studied in Refs. <cit.> for an ATA connectivity, along with simulations of control errors and environmental noise. We extend the analysis through a full study of the error scaling with the system size. We provide such an analysis in <Ref>, and also analyze our developed implementation of digital-analog QFT on a star-connectivity, using the optimized protocol from <Ref>, in <Ref>. In addition, for illustrative purposes, we further present numerical simulations of the fidelity for a few qubits in <Ref>.
The QFT is a quantum routine that acts on a quantum state |x⟩ = ∑_i=0^2^N-1 x_i |i⟩, where |i⟩ are computational basis states, and maps it to a Fourier-transformed quantum state ∑_i=0^2^N-1 y_i |i⟩, with
y_k = 1/√(N)∑_j=0^N-1 x_j ω_N^jk , k = 0, 1, 2, …, N-1 ,
where ω_N = e^2 π i/N. A digital quantum circuit for this routine is depicted in <Ref> using the Hadamard gate (H) and phase gate R_z(θ),
H =1/√(2)[ 1 1; 1 -1 ],
R_z(θ) = e^-i θ/2 Z = [ 1 0; 0 e^i θ ],
as well as the ZZ(ϕ) gate defined in Eq. (<ref>).
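For concreteness, a short numpy sketch of this map is given below; it reads the N appearing in the equation as the Hilbert-space dimension 2^n for n qubits (an interpretation on our part), builds the corresponding DFT matrix, checks unitarity, and applies it to a random state.

```python
import numpy as np

n_qubits = 3
dim = 2 ** n_qubits
omega = np.exp(2j * np.pi / dim)
W = np.array([[omega ** (j * k) for j in range(dim)] for k in range(dim)]) / np.sqrt(dim)

assert np.allclose(W.conj().T @ W, np.eye(dim))      # the QFT is unitary

x = np.random.randn(dim) + 1j * np.random.randn(dim)
x /= np.linalg.norm(x)
y = W @ x                                            # Fourier-transformed amplitudes y_k
print(np.linalg.norm(y))                             # still normalized
```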
§.§ ATA-QFT error analysis
In order to translate the digital QFT into the digital-analog paradigm, we must first identify the target Hamiltonians we need to implement. We do that by taking the biggest blocks of consecutive ZZ(ϕ) gates. As an example, we show the first and last such target Hamiltonians for the ATA-QFT in <Ref>, and from that it becomes clear that N-1 such blocks are needed.
The sources of possible errors were specified in <Ref> and their effects on the compound fidelity summarized in Eqs. (<ref>) and (<ref>). For an ATA device, the number of connections c is given by c=N(N-1)/2.
For the digital-analog ATA-QFT, we therefore find the following scaling behaviour with the number of qubits N:
* Number of analog blocks: The ATA-QFT circuit is constructed from 𝒪(N) target Hamiltonians. Each target Hamiltonian requires c = 𝒪(N^2) analog blocks (see <Ref>). Thus, the total number of analog blocks is 𝒪(N^3).
* Number of two-qubit terms: Each analog block contains c = 𝒪(N^2) two-qubit terms (see <Ref>). Thus, the total number of two qubit terms in all the analog blocks is 𝒪(N^5).
* Duration: The digital ATA-QFT can be implemented in depth 𝒪(N) <cit.>. On the other hand, the total duration of the DAQC algorithm depends on the resulting matrix M for each case, and thus we cannot say anything about it a priori (see <Ref>). We numerically compute the duration of this algorithm in <Ref>, and, by fitting a curve to the simulated data, we extract a scaling of the duration 𝒪(N^2.05).
* Number of single-qubit gates: Each target Hamiltonian requires c = 𝒪(N^2) X gates (see <Ref>). Thus, the total number of SQGs is 𝒪(N^3).
* bDAQC non-commutativity: Each qubit has a degree d=𝒪(N), so from Eq. (<ref>), and setting Δ t, g̅ to be constant, each analog block introduces an error that scales as ϵ_central = 𝒪(N^2) (see <Ref>). There are 𝒪(N^3) analog blocks, so the contribution from bDAQC to the compound fidelity in Eq. (<ref>) scales as (1-𝒪(N^2))^𝒪(N^3).
We summarize these scalings, and compare them to those of a purely digital implementation, in <Ref>(a).
The worse error scaling for DAQC in the case of ATA-QFT is, in part, a result of the sparsity of two-qubit terms in QFT. DQC requires only one TQG for each non-zero term of the target Hamiltonians; however, DAQC is introducing superfluous analog blocks needed to effectively cancel the non-zero couplings of the resource Hamiltonian. On top of that, each of these analog blocks introduces a large number of two-qubit terms, as compared to just one two-qubit term per TQG.
§.§ Star-QFT error analysis
In this subsection, we analyze the scaling of the error sources for the digital-analog QFT implemented on a star-connectivity, using the optimized protocol of <Ref>, and work out the improvement compared to the ATA-QFT.
Implementing the QFT on a star-connectivity introduces the need for SWAP gates placed between each target Hamiltonian, as can be seen in <Ref>, and each SWAP gate requires six additional analog blocks when translated to a digital-analog implementation (see Appendix <ref>).
For a star-connectivity device, the number of connections c is again given by c=N-1.
For the digital-analog Star-QFT, we therefore find the following scaling behaviour with the number of qubits N:
* Number of analog blocks: Similarly to the ATA-QFT, the Star-QFT circuit is constructed from 𝒪(N) target Hamiltonians. However, in this case, each target Hamiltonian requires c=𝒪(N) analog blocks, because the other analog blocks get cancelled. In addition, the 𝒪(N) SWAP gates introduce another 𝒪(N) analog blocks in total. Thus, the total number of analog blocks is 𝒪(N^2).
* Number of two-qubit terms: Each analog block contains c = 𝒪(N) two-qubit terms. Therefore, the total number of two-qubit terms in all the analog blocks is 𝒪(N^3).
* Duration: The digital Star-QFT can be implemented in depth 𝒪(N^2). In the digital-analog circuit, the n-th target Hamiltonian has n null coupling coefficients (meaning that g_jk = 0), which eliminates n-1 analog blocks (recalling the discussion of Eq. (<ref>)). In addition, the difference between one coupling coefficient and the next decreases exponentially (see the exponentially decreasing phases in <Ref>). Recall from Eq. (<ref>) that the analog times are proportional to the difference between the coefficient of each term in the Hamiltonian and the following. Thus, the contribution to the analog times of each target Hamiltonian is asymptotically constant, and the duration of the whole algorithm is decreased to 𝒪(N).
* Number of single-qubit gates: Each target Hamiltonian requires 𝒪(N) X gates. Thus, the total number of SQGs is 𝒪(N^2).
* bDAQC non-commutativity: Each external qubit, on which X gates are applied, has one coupling, so from Eq. (<ref>), each analog block introduces an infidelity that scales as 𝒪(1), when Δ t, g̅ are set to be constant. There are 𝒪(N^2) analog blocks, so the contribution to the compound fidelity (see Eq. (<ref>)) introduced by bDAQC scales as (1-𝒪(1))^𝒪(N^2).
A full comparison of these scalings to the purely digital implementation of the Star-QFT can be found in <Ref>(b).
The errors for Star-QFT scale slower when compared to the ATA-QFT, but the scaling is generally still worse than in DQC. This is because, even though the number of analog blocks scales similarly to the number of TQGs required for the DQC algorithm, each analog block introduces more two-qubit terms that are prone to mischaracterization.
In this case, the duration of the algorithm scales linearly in DAQC while it scales quadratically in DQC, so it presents an advantage in that regard. Additionally, the intrinsic error introduced by bDAQC is smaller than that introduced by the two-qubit terms, so bDAQC has the potential of a bigger improvement than in the ATA case.
§ DIGITAL-ANALOG GHZ STATE PREPARATION
In this section, we introduce another example of a digital-analog quantum algorithm and describe the DAQC protocol for generating the maximally entangled Greenberger-Horne-Zeilinger (GHZ) state <cit.> in a star-connectivity device with N qubits,
|GHZ_N⟩ = |0⟩^⊗ N + |1⟩^⊗ N/√(2) .
In <Ref>, we show the digital circuit for generating the GHZ state on N=5 qubits by utilizing ZZ(-π/4) gates.
We can write the N-1 consecutive ZZ gates appearing in the digital circuit as the evolution operator
U = e^ -i ∑_j=1^N π/4 Z^0 Z^j ,
where we have set the time of the evolution t_f = π/(4g). Writing a digital-analog circuit for this algorithm is now possible following the method described in <Ref>.
In this section we assume the resource Hamiltonian is homogeneous, i.e., all its coupling coefficients g̅≡g̅_0j are equal, and also that they are independent of the number of qubits N. Then, all elements of the vector G are equal (see Sec. <ref>, Eq. (<ref>)), given that both the resource and the target Hamiltonians are homogeneous. This means that there is only one distinct G_β and the digital-analog quantum circuit, therefore, only requires a single analog block with appropriate runtime (see <Ref>).
The runtime of the analog block is given by the relation between the coefficients of the target and the resource Hamiltonians, t = g / g̅. This runtime t is independent of the number of qubits, whereas the number of TQGs needed in the digital paradigm, and thus also the runtime of the algorithm, scales linearly with the number of qubits. A comparison of the scaling of the digital-analog and the purely digital implementations of this algorithm can be found in <Ref>(c). In this case, we do not consider bDAQC, because only one analog block is present and thus bDAQC presents no advantage.
For this algorithm, we can see that DAQC scales favorably when compared to the DQC paradigm. This is because, in this case, the target and resource Hamiltonians are related just by a multiplicative factor, and thus just one analog block is required to simulate the target Hamiltonian. This way, DAQC is not introducing superfluous resources as it was for the two previous examples, while it is actually reducing the duration of the circuit execution.
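The parallelization argument can be checked numerically with the following sketch (our own illustration). It assumes the analog-block runtime t = π/(4g̅) implied by t_f g = π/4 and verifies that a single block of the star resource Hamiltonian reproduces the product of the N-1 individual ZZ rotations of Eq. (<ref>).

```python
import numpy as np
from scipy.linalg import expm

I2, Z = np.eye(2), np.diag([1.0, -1.0])

def zz(i, j, n):
    """Z^i Z^j on an n-qubit register."""
    mats = [Z if q in (i, j) else I2 for q in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n, g_bar = 4, 1.0
t = np.pi / (4 * g_bar)                        # analog-block runtime, independent of n
H_res = g_bar * sum(zz(0, j, n) for j in range(1, n))
U_block = expm(-1j * t * H_res)
U_gates = np.eye(2 ** n, dtype=complex)
for j in range(1, n):                          # the n-1 individual exp(-i pi/4 Z^0 Z^j) factors
    U_gates = expm(-1j * (np.pi / 4) * zz(0, j, n)) @ U_gates
print(np.max(np.abs(U_block - U_gates)))       # ~ 1e-15: identical evolutions
```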
§ NUMERICAL SIMULATIONS
After deriving the scaling of the sources of error of the three algorithms in <Ref> and <Ref>, we validate our derivations through numerical simulations under a certain error model for different numbers of qubits, and extract average execution fidelities as well as durations of circuit execution.
§.§ Error model and methods
DQC employs single- and two-qubit gates, whereas DAQC employs single-qubit gates and analog blocks. As discussed in <Ref>, we model the errors caused by all these operations by introducing errors in their control parameters. We introduce a coherent and an incoherent contribution of these control errors to the total infidelity by implementing two different modifications to the control parameters: (1) systematic errors that are constant throughout each noisy simulation of the quantum circuit and (2) stochastic errors that are randomly chosen every time an operation gets applied, for every run of circuit simulation.
For each SQG generated by a Hamiltonian H_s^a, U_H_s^a(θ) = exp(-iθ H_s^a), we modify the angle of the rotation as
θ→θ^' = θ× (1 + Δθ + δθ) ,
where Δθ is the systematic error, and δθ is the stochastic error.
For each TQG, ZZ(ϕ) = exp(-iϕ Z^j Z^k), we modify the phase of the rotation as
ϕ→ϕ^' = ϕ× (1 + Δϕ + δϕ) ,
where Δϕ is the systematic error, and δϕ is the stochastic error.
Finally, for each analog block, U_H̅_𝒞(t) = exp(-i t ∑g̅_jk Z^j Z^k), we modify the runtime and coupling coefficients of the resource Hamiltonian as
t → t^' = t × (1 + Δ t + δ t) ,
g̅ →g̅^' = g̅× (1 + Δg̅ + δg̅) ,
where, again, Δ t, Δg̅ (systematic), and δ t, δg̅ (stochastic) are unitless.
For the case of QFT, because we are interested in the fidelity of the process regardless of the initial state, we compute the ideal unitary implemented by the quantum circuit, and average over the erroneous unitaries' fidelities. We define the fidelity of one erroneous unitary U with respect to its ideal Ũ as its average fidelity over all possible initial states <cit.>,
F_U = n + |Tr(Ũ^† U)|^2/n(n+1) ,
where n = 2^N is the dimensionality of the Hilbert space.
On the other hand, for the case of the GHZ state preparation, because we are interested in the final state only, we compute the ideal state and average over the erroneous states' fidelities. We define the fidelity of one erroneous final state |ψ⟩ with respect to its ideal state (<ref>) as:
F_ψ = |⟨ψ|GHZ_N⟩|^2 .
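For reference, both fidelity measures are straightforward to implement; the snippet below is a minimal sketch, with a slightly over-rotated ZZ gate as an example of our own choosing.

```python
import numpy as np

def average_gate_fidelity(U_ideal, U_noisy):
    n = U_ideal.shape[0]                        # Hilbert-space dimension
    return (n + abs(np.trace(U_ideal.conj().T @ U_noisy)) ** 2) / (n * (n + 1))

def state_fidelity(psi, psi_ideal):
    return abs(np.vdot(psi_ideal, psi)) ** 2

# Example: a ZZ(pi/4) gate with a 1% over-rotated phase (placeholder numbers).
zz_diag = np.array([1.0, -1.0, -1.0, 1.0])      # eigenvalues of Z (x) Z
U_ideal = np.diag(np.exp(-1j * (np.pi / 4) * zz_diag))
U_noisy = np.diag(np.exp(-1j * (np.pi / 4) * 1.01 * zz_diag))
print(average_gate_fidelity(U_ideal, U_noisy))
```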
For the sake of specificity, we choose the error parameters of the simulations to match those of a state-of-the-art superconducting QPU. We sample all errors from a Gaussian distribution 𝒩(μ=0, σ) centered around 0, where σ is chosen so that each type of operation has a given average fidelity: 99.99% for SQGs, 99.9% for TQGs, and 99.95% for each two-qubit term in analog blocks. These figures are calculated by executing the erroneous gates 10000 times and averaging the resulting erroneous unitaries' fidelities according to Eq. (<ref>).
This choice for the fidelities of each operation entails considerably better SQGs than TQGs, and analog blocks that introduce less error per two-qubit term than each TQG. Additionally, in the case of bDAQC, the error associated with the runtimes is applied only in the first and last analog blocks of the quantum circuit (see <Ref>). Finally, all the values of σ are also chosen so that the coherent errors account for 25% of the infidelity per operation, and incoherent errors account for 75% of it.
We simulate the circuits for digital and digital-analog ATA-QFT, Star-QFT and Star-GHZ, and compute the noisy fidelities for each case, after applying the errors described above. We do this by running 1000 iterations of noisy circuits, computing the resulting erroneous unitaries and final states according to Eqs. (<ref>) and (<ref>), respectively, and averaging to obtain ⟨ F_U ⟩ in the case of QFT or ⟨ F_ψ⟩ in the case of the GHZ state preparation.
The quantum circuits must be compiled to a specific basis gate set, which may be different for each quantum computing platform. For specificity, we focus our simulations on a platform consisting of superconducting qubits. Therefore, the native SQGs that we assume can be implemented in the devices are the R_xy and R_z gates,
R_xy^a(θ, ϕ) = e^-i θ/2( cosϕ X^a + sinϕ Y^a ) ,
R_z^a(θ) = e^-i θ/2 Z^a ,
where X and Z are the Pauli-X and Pauli-Z matrices, respectively (see Eqs. (<ref>) and (<ref>)), and Y is the Pauli-Y matrix,
Y = [ 0 -i; i 0 ] .
For superconducting quantum computers, the R_xy^a(θ, ϕ) gate can be physically implemented via a microwave drive <cit.>. The gate R_z^a(θ) does not need to be physically implemented because it can be accounted for virtually by readjusting the phase of the subsequent gates applied on qubit a <cit.>. On the other hand, the native TQG we assume is the ZZ^jk(ϕ) gate previously defined in Eq. (<ref>).
We set the resource Hamiltonians to be homogeneous, with coupling coefficient g̅ = 10 MHz and the SQG times Δ t = 5 ns. On the other hand, we assume that the time it takes to implement a TQG does not depend on the phase of its rotation, and it is t_TQG = 50 ns, as is realistic for superconducting transmon qubits coupled via tuneable couplers.
§.§ Results
Considering the error model and methods described in the previous subsection, we have performed our simulations using the open-source quantum computing library Qiskit <cit.> and the results are presented in Fig. <ref>.
In the top row of <Ref>, we plot the fidelity of the digital and digital-analog computations for each algorithm we have studied (ATA-QFT, Star-QFT and Star-GHZ), as a function of the number of qubits.
For ATA-QFT, we skip the case of N=4. This is because, as explained in Ref. <cit.>, the matrix M that results for the ATA connectivity with N=4 is not invertible, and thus the times of the analog blocks cannot be calculated via Eq. (<ref>). Furthermore, also for ATA-QFT, we simulate the bDAQC circuit only for 3, 5 and 6 qubits as those are the only cases in which the compilation described in Ref. <cit.> can be applied. For any N>6, Eq. (<ref>) may return negative runtimes for the analog blocks. Indeed, the ATA-QFT needs the implementation of analog blocks with negative runtimes, which is not physical. A protocol for obtaining a digital-analog circuit with only non-negative runtimes is given in Ref. <cit.>, though it requires the construction of an M matrix whose size grows exponentially with the number of qubits, thus rendering it impractical.
Likewise, in the bottom row of <Ref>, we plot the time of execution of the quantum circuits as a function of the number of qubits. Because we do not need to actually simulate the quantum circuits, we can extend these figures to a higher number of qubits. The times of bDAQC circuits are not represented because they are very similar to those of sDAQC, undergoing just the minor modifications of Eqs. (<ref>) and (<ref>).
We now discuss these results in detail for each algorithm.
§.§.§ All-to-all Quantum Fourier Transform
As can be seen from Fig. <ref>, the fidelities of QFT in both DAQC paradigms are below the fidelity in DQC over the entire range of the number of qubits N studied. One reason is that, even though each two-qubit term in analog blocks is more error-robust, the number of two-qubit terms is much smaller in DQC. For example, in the case of N=3, DQC has 3 two-qubit terms whereas DAQC has 18 of them. Additionally, in DAQC, the two-qubit terms are repeatedly applied on the same pairs of qubits, leading to a higher accumulation of coherent errors throughout the computation <cit.>. The much worse scaling of DAQC compared to DQC only exacerbates the difference in fidelity for a bigger number of qubits. Recall from <Ref> that part of the difference in scaling comes from the fact that each target Hamiltonian has many null coupling coefficients, (i.e., g_jk=0, see <Ref>), which means that a small number of TQGs is needed to implement them in the DQC paradigm, while no such reduction of resources occurs necessarily in DAQC.
Regarding the time of the computation, plotted in <Ref>, we have fit the data to the best signomial expression, which gives a scaling of 𝒪(N^2.05), whereas the computation time for DQC scales linearly as expected. The detrimental impact of decoherence is therefore much larger for DAQC than for DQC. Additionally, a spike is present in the range N ∈ (5, 8). This non-monotonic behavior is due to a property of the matrix M for the ATA connectivity: usually, its inverse M^-1 has a balance of positive and negative elements, so that the contributions to the analog times partially cancel in Eq. (<ref>); however, for N=6 and N=7 in the ATA connectivity, its elements are all negative and all positive, respectively.
§.§.§ Star Quantum Fourier Transform
From Fig. <ref>, one can see that the fidelities for DAQC are also below the DQC fidelity for the whole range of N studied in this case. However, the fidelities of DAQC are higher with respect to the ATA case, as was expected from the analysis of the error scaling in <Ref>. Thus, we see explicitly the dependence of the performance of DAQC on the connectivity and the compilation used for the digital-analog circuit. Additionally, in this case, the intrinsic error associated with bDAQC scales slower than the error associated with the analog blocks (see <Ref>), so the trade-off is favorable to bDAQC and it outperforms sDAQC for N>6.
As for the time of computation, in <Ref>, we see that for DAQC, it grows linearly, while for DQC, it grows quadratically, in such a way that for N > 25, the duration of the digital algorithm surpasses that of the digital-analog. Recall from <Ref> that this is because the coupling coefficients of each target Hamiltonian in QFT decrease exponentially, and so does the difference between one and the next. This means, according to Eq. (<ref>), that the times of the analog blocks for each target Hamiltonian also decrease exponentially. This leads to a total contribution of the times that is asymptotically constant for each target Hamiltonian, and therefore linear for the whole algorithm. On the other hand, we assume that TQGs require a constant time no matter how small their phase is, and the quadratic contribution to time arises.
In this case, while the infidelity coming from control errors is greater for DAQC, there may be a trade-off with the infidelity arising from decoherence and other environmental noise related to the time of execution of the quantum circuits, which is greater for DQC than for DAQC for a big enough number of qubits. The total fidelity under both sources of noise is calculated approximately in <Ref>, where we conclude that the trade-off can be favorable for DAQC for certain ranges of parameters, for example, if the execution of TQGs is very slow, and/or if the relaxation time T_1 of the qubits is very short.
It is important to keep in mind the difficulty of implementing analog blocks with exponentially decreasing runtimes. On the same note, the digital QFT algorithm can be approximated to a very good degree by the Approximate Quantum Fourier Transform (AQFT) <cit.>, which ignores some of the TQGs, whose phases decrease exponentially.
§.§.§ Star GHZ state preparation
In <Ref>, we can see that the fidelities for sDAQC are better than those of DQC, because the number of two-qubit terms is the same and the analog evolution is more resilient to the control errors.
Additionally, and as expected, the time of the digital-analog algorithm is constant whereas that of the digital algorithm scales linearly with N (see <Ref>). This is because, in a way, the digital-analog algorithm is parallelizing all the two-qubit terms while the digital algorithm requires that we apply them sequentially, one after the other.
Therefore, DAQC presents an advantage with respect to DQC when we can express the evolution of many consecutive two-qubit gates in a digital algorithm as a greatly reduced number of analog blocks, combined with SQGs.
§ CONCLUSIONS
In the past few years the DAQC paradigm has been presented as an alternative path to perform universal quantum computation that combines the robustness of analog quantum computing with the flexibility of the digital approach.
In this manuscript we have systematically analyzed its performance by studying and simulating the scaling of errors with respect to the digital case. Furthermore, we have considered the most general situation, i.e. regardless of the connectivity or the algorithm to be implemented. Our analysis shows a clearly disadvantageous error scaling of DAQC with respect to the digital case, coming mainly from the number of analog blocks needed to engineer one target Hamiltonian, and the number of two-qubit terms introduced per each analog block. While for DAQC the implementation of one target Hamiltonian entails the introduction of c^2 two-qubit terms (where c is the number of connections of the device), for DQC it only entails, at most, the introduction of c two-qubit terms.
To illustrate our scaling analysis, we have analyzed the performance of DAQC with respect to the digital case for two different algorithms, the QFT and the GHZ state preparation algorithm, on two different connectivities: ATA and a star configuration. We have consistently found DAQC to be less efficient in terms of fidelities, except for the case in which the device's resource Hamiltonian closely matches the algorithm's target Hamiltonian. In this situation, it can be argued that the resulting quantum circuit corresponds to a purely analog implementation, with SQGs applied before and after the analog evolution (see, e.g., <Ref>). While this implies the need for tailoring the device's connectivity to match that of the algorithm, this case shows a promising advantage as it parallelizes the two-qubit interactions that would otherwise be applied sequentially in the digital paradigm, and takes full advantage of the potentially more error-resilient analog evolution. Thus, we foresee potential areas of application of DAQC in quantum simulation <cit.> and variational algorithms in which fast generation of entanglement across the whole device is desirable <cit.>.
We would like to acknowledge the support of our colleagues in IQM, and specially thank T. Liu and B. G. Taketani. We also thank M. Sanz for fruitful discussions at the early stages of our work, as well as P. García-Molina. Finally, we acknowledge the support from the German Federal Ministry of Education and Research (BMBF) under DAQC (grant No. 13N15686) and Q-Exa (grant No. 13N16062).
§ THE SIGN MATRIX M
The set of elements M_mnjk has four indices. Let us “reorder” these elements in such a way that they can be arranged into a matrix, so that we will be able to invert it. In order to do that, we vectorize the pairs of indices (m,n) →α and (j, k) →β, assigning to each pair a single number, ordered from smallest to biggest m (j), then from smallest to biggest n (k). For example, for an ATA 3-qubit device:
(m, n) = (0, 1) →α = 1 ,
(m, n) = (0, 2) →α = 2 ,
(m, n) = (1, 2) →α = 3 .
This way, each pair of indices (m, n) is uniquely mapped to a single index α. The same is done with each pair of indices (j, k), which is uniquely mapped into a single index β. This way, we are also able to map M_mnjk to M_αβ.
The general formula for this mapping in the ATA case for N qubits is <cit.>
(m, n) →α = N (m-1) - m(m+1)/2 + n ,
(j, k) →β = N (j-1) - j(j+1)/2 + k .
On the other hand, the inverse transformation α→ (m, n) is given by:
n = 1 + H_1 [ α/N ] + H_1[ α/(2N - 2)]
+ H_1[ α/(3N - 5)] + ⋯ + H_1[ α/(N(N-1)/2)],
m = α - N(n - 1) + n(n+1)/2,
where H_1 is the Heaviside step function at 1,
H_1[x] = {[ 0 x < 1; 1 x ≥ 1; ].
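The following small sketch (our own check) implements the forward map and verifies that it is a bijection onto {1, …, N(N-1)/2} for a small N; note that the closed-form expression assumes 1-based qubit labels, whereas the worked example above labels qubits from 0.

```python
from itertools import combinations

def alpha(m, n, N):
    """Vectorized index of the pair (m, n), m < n, with 1-based labels."""
    return N * (m - 1) - m * (m + 1) // 2 + n

N = 4
mapping = {(m, n): alpha(m, n, N) for (m, n) in combinations(range(1, N + 1), 2)}
print(mapping)
# {(1,2): 1, (1,3): 2, (1,4): 3, (2,3): 4, (2,4): 5, (3,4): 6}
assert sorted(mapping.values()) == list(range(1, N * (N - 1) // 2 + 1))
```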
§ DAQC FOR HAMILTONIANS WITH UP TO M-BODY TERMS
In <Ref>, we have laid out a method for performing DAQC using a Hamiltonian with 2-body terms. In this section, we generalize this method for the case of resource Hamiltonians that have additional, up to M-body terms. We characterize such a Hamiltonian by the collection of pairs of connected qubits (j, k) ∈𝒞_2, the collection of triplets of connected qubits (j, k, l) ∈𝒞_3... in general, 𝒞_b, and their corresponding coupling strengths g̅^b
H̅_M = ∑_(j, k)g̅^2_jk Z^j Z^k + ∑_(j, k, l)g̅^3_jkl Z^j Z^k Z^l + …
Each 𝒞_b has c_b elements. This way, the total connectivity of the M-body resource Hamiltonian is
𝒞 = ⋃_b=2^M𝒞_b ,
which has a total of c = ∑_b^M c_b elements. As a specific example, the total number of terms, c, appearing in an ATA Hamiltonian with up to M-body terms is given by:
c = \binom{N}{2} + \binom{N}{3} + … + \binom{N}{M} .
If we are able to use the resource Hamiltonian (<ref>) to implement an arbitrary target Hamiltonian with the same structure,
H_M = ∑_(j, k)g^2_jk Z^j Z^k + ∑_(j, k, l)g^3_jkl Z^j Z^k Z^l + … ,
then we can get rid of the higher body terms by setting g^b = 0 for all b > 2. Alternatively, if our problem at hand has such higher body terms, we can use them to our advantage. Such interaction terms may appear, e.g., in fermionic <cit.> and lattice gauge theory quantum simulations <cit.>, and quantum optimization <cit.>.
Let us construct a digital-analog quantum circuit consisting of c analog blocks. The first c_2 analog blocks are preceded and followed by X^m X^n gates, in exactly the same way as described in <Ref> (see <Ref>). The following c_3 analog blocks are preceded and followed by X^m X^n X^p gates, with (m, n, p) ∈𝒞_3. This pattern is repeated until the analog blocks are exhausted. This quantum circuit is a generalization of the one described in <Ref>, for which we were restricting ourselves to 𝒞 = 𝒞_2.
This way, the unitary evolution of such a circuit is
U_DAQC = ∏_(m, n) X^m X^n exp(i t_mn^2 [ ∑_(j, k)g̅_jk^2 Z^j Z^k + ∑_(j, k, l)g̅_jkl^3 Z^j Z^k Z^l + …] ) X^m X^n
×∏_(m, n, p) X^m X^n X^p exp(i t_mnp^3 [ ∑_(j, k)g̅_jk^2 Z^j Z^k + ∑_(j, k, l)g̅_jkl^3 Z^j Z^k Z^l + …] ) X^m X^n X^p
×⋯
= ∏_(m, n)exp(i t_mn^2 [ ∑_(j, k) M_jkmn^(2,2)g̅_jk^2 Z^j Z^k + ∑_(j, k, l) M_jklmn^(3,2)g̅_jkl^3 Z^j Z^k Z^l + …] )
×∏_(m, n, p)exp(i t_mnp^3 [ ∑_(j, k) M_jkmnp^(2,3)g̅_jk^2 Z^j Z^k + ∑_(j, k, l) M_jklmnp^(3,3)g̅_jkl^3 Z^j Z^k Z^l + …] )
×⋯ ,
where we have introduced the collections of elements M^(a,b)_jk… mn…, which can take on the values ± 1. These elements are calculated as
M^(a,b)_jk… mn… = (-1)^α, where α = ∑_ν∈{j, k…}∑_μ∈{m, n…}δ_νμ ,
for which the ones in Eq. (<ref>) are a special case with ν = {i, j} and μ = {m, n}. Now, each of these collections of elements can be rearranged into a matrix M^(a,b) of dimensions c_a × c_b, following a process similar to the one in <Ref>. When comparing our DAQC evolution (<ref>) and the evolution under the target Hamiltonian (<ref>) for some time t_f, we can see that they are equal when the following set of vector equations holds:
t_f 𝐆^2 = M^(2,2)𝐭^2 + M^(2,3)𝐭^3 + ⋯
t_f 𝐆^3 = M^(3,2)𝐭^2 + M^(3,3)𝐭^3 + ⋯
⋮ = ⋮ + ⋮ + ⋱
Again, Eq. (<ref>) is a special case of this set of equations, in which we restrict ourselves only to the first term of the RHS of the first equation. This set of equations can be written as just one vector equation, where a joint matrix of dimensions c × c appears, comprising all the M^(a,b) matrices:
t_f [ 𝐆^2; 𝐆^3; ⋮ ]
=
[ M^(2,2) M^(2,3) ⋯; M^(3,2) M^(3,3) ⋯; ⋮ ⋮ ⋱ ][ 𝐭^2; 𝐭^3; ⋮ ],
which we can write as t_f 𝐆_joint = M_joint𝐭_joint for compactness. By solving this equation through the inversion of the joint matrix M_joint, we can calculate the time each analog block must run for in our digital-analog circuit:
𝐭_joint = M_joint^-1𝐆_joint t_f .
Expressing the relationship between the times and the couplings of the target Hamiltonian in this single equation is very useful, because then the only condition we need to impose for this relation to hold is the invertibility of the joint matrix, as opposed to the invertibility of each individual M^(a,b) matrix.
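As a concrete illustration (our own, with placeholder target couplings), the sketch below follows the construction above for a 3-qubit ATA device with all 2- and 3-body terms (c = 4): each analog block is preceded and followed by X gates on one connection tuple, so the entries of the joint matrix are (-1)^{|ν ∩ μ|}. The code checks invertibility numerically and recovers the analog-block times.

```python
import numpy as np

terms = [frozenset(s) for s in [(0, 1), (0, 2), (1, 2), (0, 1, 2)]]   # nu: Hamiltonian terms
blocks = terms                                                        # mu: X-gate pattern per block

M_joint = np.array([[(-1) ** len(nu & mu) for mu in blocks] for nu in terms], dtype=float)
assert abs(np.linalg.det(M_joint)) > 1e-9        # invertible for this layout

t_f = 1.0
G = np.array([0.8, 0.5, 0.3, 0.1])               # target/resource coupling ratios (example)
t = np.linalg.solve(M_joint, G) * t_f            # analog-block runtimes
print(t)
print(M_joint @ t)                               # reproduces t_f * G
```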
§ CANCELLING UNDESIRED ODD-BODY TERMS IN THE RESOURCE HAMILTONIAN
By substituting an analog block of time t by two analog blocks of time t/2 each, and placing X gates on all qubits before and after one of the two analog blocks (see <Ref>), we can effectively cancel all odd-body terms. This is because flipping the sign of the connections of all qubits leaves the even-body terms untouched, but flips the sign of all odd-body terms, as exemplified here for two- and three-body terms:
( ∏_m=0^N X^m ) Z^j Z^k ( ∏_m=0^N X^m ) = (-1)^2 Z^j Z^k = Z^j Z^k ,
( ∏_m=0^N X^m ) Z^j Z^k Z^l ( ∏_m=0^N X^m ) = (-1)^3 Z^j Z^k Z^l = - Z^j Z^k Z^l .
Thus, evolving by times t/2 with the original and flipped signs cancels all odd-body terms, while evolving according to all even-body terms for a total time t. This procedure introduces, at most, 2N single-qubit gates per analog block, while leaving the total analog times intact. The single-qubit gate depth is increased, at most, by two per analog block.
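This sign-flip argument is easy to verify numerically; the following short sketch (illustration only) checks it on three qubits.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

def kron_all(mats):
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

X_all = kron_all([X, X, X])
ZZ_I = kron_all([Z, Z, I2])
ZZZ = kron_all([Z, Z, Z])
assert np.allclose(X_all @ ZZ_I @ X_all, ZZ_I)      # even-body term untouched
assert np.allclose(X_all @ ZZZ @ X_all, -ZZZ)       # odd-body term flips sign
```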
§ DERIVATION OF BDAQC NON-COMMUTATIVITY ERROR
In Ref. <cit.>, it is shown that the first and last analog blocks (i.e., a constant number of analog blocks) of a digital-analog circuit introduce errors e_boundary = 𝒪(Δ t^2).
Each central analog block, however, is shown to introduce an error
e_central = 1-e^-i H̅Δ t/2 e^-i H_s Δ t e^-i H̅Δ t/2 e^i(H̅ + H_s)Δ t
=(Δ t)^3/4[ [ H̅, H_s], H̅ + 2H_s ] + 𝒪(Δ t^4) ,
where H_s is the Hamiltonian generating the SQG that overlaps with the resource Hamiltonian. Let us assume that this SQG is an X^a gate, and thus H_s = π/(2Δ t) X^a. By explicitly plugging this and the resource Hamiltonian H̅ = ∑_j,k g̅_jk Z^j Z^k into (<ref>), we can get the explicit expression of e_central. The innermost commutator in Eq. (<ref>) is
[H̅, H_s] = [∑_k=1^d g̅ Z^a Z^k, π/(2 Δ t) X^a]
= d g̅ π/(2 Δ t) [ Z^a Z^k, X^a ]
= d g̅ (iπ/Δ t) Y^a Z^k .
Plugging this into the outermost commutator yields
[[H̅, H_s], H̅ + 2H_s] = d g̅π i/Δ t[Y^a Z^k, H̅ + 2H_s ]
= d g̅π i/Δ t([Y^a Z^k, H̅] + [Y^a Z^k, 2H_s]) .
Each of the two commutators above yields:
[Y^a Z^k, H̅] = [Y^a Z^k, ∑_k=1^d g̅ Z^a Z^k ]
= 2 d g̅ i X^a I^k
[Y^a Z^k, 2H_s ] = [Y^a Z^k, π/Δ t X^a ]
= -2π i/Δ t Z^a Z^k .
Computing the total infidelity, we get
e_central = (Δ t)^3/4√((dg̅)^4 ( 2π/Δ t)^2 + (2dg̅)^2 ( π/Δ t)^4) .
§ DIGITAL-ANALOG SWAP GATES ON A STAR-CONNECTIVITY
The SWAP gate acts on two qubits by exchanging their states, |j⟩|k⟩→|k⟩|j⟩
SWAP = [ 1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1 ].
A SWAP gate applied on the central qubit and one of the external qubits can be expressed in terms of two-qubit Pauli rotations as:
SWAP = exp(i π/4 (X^0 X^k +Y^0 Y^k+Z^0 Z^k))
= exp(i π/4 X^0 X^k) exp(i π/4 Y^0 Y^k) exp(i π/4 Z^0 Z^k)
= [H^0 H^k exp(i π/4 Z^0 Z^k) H^0 H^k ]
×[ S^0 S^k H^0 H^k exp(i π/4 Z^0 Z^k) H^k H^0 S^† k S^† 0]
×exp(i π/4 Z^0 Z^k) .
In the last equality, we have decomposed the SWAP gate into three different evolutions under a Z^0 Z^k Hamiltonian, each of which we can interpret as a target Hamiltonian with all couplings g_jk = 0 except for g_0k = π/4 t_f. Following the optimized DAQC protocol described in <Ref>, each of these target Hamiltonians requires two analog blocks, accounting for a total of 6 analog blocks needed to implement a SWAP gate.
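The first line of this decomposition can be verified numerically; the sketch below (our own check) confirms that exp(iπ/4(XX+YY+ZZ)) equals SWAP up to a global phase, from which the three-factor form follows because the three terms commute.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)
U = expm(1j * np.pi / 4 * H)
phase = np.trace(SWAP.conj().T @ U) / 4        # global phase e^{i pi/4}
print(abs(phase))                              # = 1, i.e. equal up to that phase
assert np.allclose(U, phase * SWAP)
```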
§ TRADE-OFF BETWEEN CONTROL ERRORS AND ENVIRONMENTAL NOISE IN STAR-QFT
While the DQC algorithm for the Star-QFT has a better performance than DAQC regarding the infidelity coming from control errors (see <Ref>), it has a longer execution time (see <Ref>). In turn, long execution times imply that the algorithm becomes more affected by environmental decoherence, so a trade-off may arise for a large enough number of qubits, in which decoherence accounts for a bigger effect on the infidelity.
In order to analyze such a trade-off, we study the relationship between the scaling of both sources of infidelity. We assume, like we did in <Ref>, that the main source of decoherence is thermal relaxation, and consider a simple Markovian model for it. Additionally, we consider this infidelity to be independent for each qubit, and also independent from their unitary dynamics. Therefore, the total fidelity of the computation is given by
F_total≈⟨ F_U ⟩× e^-N t/T_1 ,
where F_U is the unitary evolution's fidelity, as defined in Eq. (<ref>) and represented in <Ref>, N is the number of qubits, t is the execution time of the quantum circuit and T_1 is the thermal relaxation time. While approximate, this expression can give us insight into the interplay between the scaling of the two sources of infidelity.
In order to extend ⟨ F_U ⟩ to a higher number of qubits, for which the effects of decoherence become more relevant, we fit the fidelity data of <Ref> for DQC, sDAQC and bDAQC to a function of the form
⟨ F_U ⟩≈ f^a (N^b) + c ,
where we have assumed that each operation incurs an independent infidelity, and where f, a, b, c are the parameters resulting from the function fitting, which are given in <Ref>. We plot the fitted curves on top of the simulated data in <Ref>.
Finally, in <Ref>, we plot the resulting total fidelity calculated as in Eq. (<ref>) for different scenarios, in which TQGs have execution times of 50, 150 and 300 ns, and in which T_1 is either 50 or 500μ s. For the regime in which T_1 is very short, and the time of the TQGs is very long, the trade-off is favorable to DAQC, as can be seen in <Ref>, for which T_1=50 μ s, and DAQC outperforms DQC for t_TQG = 150 ns and t_TQG = 300 ns.
|
http://arxiv.org/abs/2307.05125v1 | 20230711090113 | Linearization via Ordering Variables in Binary Optimization for Ising Machines | [
"Kentaro Ohno",
"Nozomu Togawa"
] | math.OC | [
"math.OC",
"cond-mat.stat-mech",
"cs.ET",
"physics.app-ph"
] |
Linearization via Ordering Variables in Binary Optimization for Ising Machines
Kentaro Ohno, Nozomu Togawa
August 12, 2023
==========================================================================
Ising machines are next-generation computers expected to efficiently sample near-optimal solutions of combinatorial optimization problems.
Combinatorial optimization problems are
modeled as quadratic unconstrained binary optimization (QUBO) problems
to apply an Ising machine.
However, current state-of-the-art Ising machines still often fail to output near-optimal solutions due to the complicated energy landscape of QUBO problems.
Furthermore, physical implementation of Ising machines severely restricts the size of QUBO problems to be input as a result of limited hardware graph structures.
In this study,
we take a new approach to these challenges by injecting auxiliary penalties preserving the optimum,
which reduces quadratic terms in QUBO objective functions.
The process simultaneously simplifies the energy landscape of QUBO problems, allowing search for near-optimal solutions, and makes QUBO problems sparser, facilitating encoding into Ising machines with restriction on the hardware graph structure.
We propose linearization via ordering variables of QUBO problems as an outcome of the approach.
By applying the proposed method to synthetic QUBO instances and to multi-dimensional knapsack problems,
we empirically
validate
the effects on enhancing minor embedding of QUBO problems and performance of Ising machines.
§ INTRODUCTION
Combinatorial optimization forms an important and well-studied research area
with abundant applications in the field of operations research.
For example, a knapsack problem and its variants are
famous combinatorial optimization problems with
applications including production planning, resource-allocation and portfolio selection <cit.>.
Combinatorial optimization problems are often hard for traditional
von Neumann-type computers to deal with due to their NP-hardness.
Various heuristics and meta-heuristics have been developed for handling large-scale combinatorial optimization problems.
Ising machines are attracting interests as a next-generation computing paradigm, especially for tackling hard combinatorial optimization problems <cit.>.
Ising machines find
heuristic solutions of combinatorial optimization problems in a class called quadratic unconstrained binary optimization (QUBO).
There are several types of Ising machines, including quantum annealing machines <cit.>, coherent Ising machines <cit.> and specialized-circuit-based digital machines <cit.>, depending on their way of physical implementation.
When utilizing an Ising machine, a combinatorial optimization problem is converted to a QUBO problem <cit.>.
Discrete variables are represented via binary variables and constraints are encoded into the objective function as penalties.
The total objective function should be quadratic so that the resulted model is indeed a QUBO problem.
After
conversion to the QUBO problem,
binary variables are assigned to physical spins in the Ising machine.
The Ising machine is then executed to sample a solution by minimizing energy, that is, the value of the objective function.
The performance of Ising machines in solving a QUBO problem typically involves the energy landscape <cit.> of the objective function.
It also involves a graph structure associated with the QUBO problem, in which nodes and edges correspond respectively to variables and quadratic terms with non-zero coefficients, from a combinatorial point of view.
There are two major challenges for utilizing Ising machines.
One is that Ising machines suffer from finding near-optimal solutions when the QUBO problem involves complex energy landscape.
Ising machines often output solutions with a large energy gap (e.g., more than 10% optimality gap) to an optimal solution of the problem <cit.>, on which a simple heuristic might achieve a smaller gap.
The issue casts question on practical utility of Ising machines
as meta-heuristic solvers and the gap is required to be filled.
Another is that some of the major Ising machines have physical restriction on the structure of QUBO problems.
For example, on quantum annealing machines, a densely connected QUBO problem cannot be input directly to them, since quantum bits on the machines corresponding to variables in the problem only have interaction with a limited group of other quantum bits <cit.> according to specific hardware graphs.
It severely limits the potential applicability of Ising machines to combinatorial optimization problems.
For the former issue, attempts to tame the complexity of the energy landscape have been made. For example, merging several variables into one variable has been proposed to improve the quality of solutions for single-spin-flip-based Ising machines by deforming the energy landscape <cit.>. However, the process might change the optimum, and thus requires repeated application of an Ising machine while varying the set of merged variables.
For the latter issue,
minor embedding <cit.>
is proposed to embed a QUBO problem with arbitrary graph structure in a hardware graph.
That is, a variable in the QUBO problem is represented by a chain, which is a set of possibly multiple connected physical variables.
However, a dense QUBO problem still requires a huge number of auxiliary variables to form chains, which degrades the performance of Ising machines <cit.>.
Moreover, such a requirement restricts QUBO problems that can be input to a small size.
In this study, we take a new approach to tackle the issues by considering auxiliary constraint conditions for QUBO.
That is, we extract conditions that the optimal solution must satisfy, consider them as constraint conditions, and add them to the QUBO objective function as penalties.
By taking appropriate penalty terms and their coefficients, the quadratic terms in the objective function are reduced without changing the optimum.
The above process should
have two effects.
First, the auxiliary penalties simplify the energy landscape associated with the QUBO problem, allowing an efficient search for a near-optimal solution via Ising machines.
Second, the graph associated with the QUBO problem becomes sparse by the reduction of quadratic terms, thereby reducing the number of variables added by minor embedding.
Reduction of the number of additional variables for minor embedding would result in alleviating the degradation of Ising machine performance and in improving embeddability of QUBO problems of large size.
Therefore,
establishing such an approach is expected to mitigate
the aforementioned issues on Ising machines.
We propose linearization via ordering variables of QUBO problems, a method realizing the above process by considering ordinal conditions of precedence
on binary variables as auxiliary constraints.
The proposed method is illustrated using the knapsack problem as an example.
In the knapsack problem, given multiple items with two scalars representing value and weight, we seek to select items that maximizes the sum of the values so that the sum of the weights is less than a prescribed upper bound.
When modeling with QUBO, a binary variable x_i is associated with each item i, and item i is interpreted as selected when x_i=1.
If one item j is more valuable and weighs less than another item i, then item j should be selected in preference to item i.
That is, the condition x_i=1⇒ x_j=1 is satisfied by any optimal solution.
Given an order on a set of variables with certificate that imposing the ordinal constraints preserves the optimum of the problem, we inject penalties of form x_i - x_i x_j corresponding to the constraint x_i=1 ⇒ x_j=1.
Taking appropriate coefficients for the penalties, we can eliminate the corresponding quadratic terms x_i x_j in the objective function of the QUBO problem.
Based on this fact, the original QUBO problem is transformed into an equivalent and sparser QUBO problem.
The proposed method
consists of the following two steps.
Step 1 is ordering variables, that is, to find an order of variables from a given optimization problem so that corresponding constraints
preserve the optimum.
By considering a sufficient condition for the requirement, we develop a general strategy and fast algorithm for extracting a valid order.
Step 2 is linearization of the QUBO problem with auxiliary penalties corresponding to the extracted order.
Overall, the proposed method is expected to mitigate the
issues for Ising machines with feasible computational complexity.
We conduct experiments on synthetic QUBO problems and multi-dimensional knapsack problems (MKPs) to validate the effects of the proposed method.
The results show that
the proposed method effectively mitigates the defects of minor embedding of introducing additional variables and substantially improves Ising machine performance on the benchmark MKP instances.
Our contribution is summarized as follows:
* We propose a method of linearization of QUBO problems
which improves minor embedding and Ising machine performance, preserving the optimum of the
problem.
* We provide solid theoretical results and efficient algorithms for the proposed method with practical applications.
* We validate the effects of the proposed method on improving
minor embedding and Ising machine performance through comprehensive experiments on synthetic QUBO problems and MKPs.
The rest of the paper is organized as follows.
Background on QUBO and Ising machines are explained in Section <ref>.
The notion of ordering variables is introduced and
theoretical results are shown in Section <ref>.
We explain the proposed method in Section <ref>.
Application of the method to practical problems are discussed in Section <ref>.
Experimental results are given in Section <ref>.
We summarize
related work and further discussion in Section <ref>.
Section <ref> concludes this paper.
§ QUBO AND ISING MACHINES
We
review related notions on Ising machines.
Throughout the paper, n is a positive integer denoting the problem size, and B_n := {0,1}^n denotes the space of binary vectors.
§.§ Quadratic Unconstrained Binary Optimization
Quadratic unconstrained binary optimization (QUBO) is a class of optimization problems over binary variables defined by a square matrix Q ∈ℝ^n× n as follows:
minimize ϕ(x) := x^⊤ Q x
subject to x ∈ B_n.
We also call the value ϕ(x) the energy of x.
A QUBO problem is naturally associated with an undirected graph whose nodes and edges correspond to variables and non-zero off-diagonal entries of Q, respectively.
Various combinatorial optimization problems can be modelled as QUBO problems <cit.>.
When there are constraints on binary variables, penalty terms are introduced to represent those constraints in a QUBO form.
If binary variables are constrained to a subset C⊂ B_n, i.e., C is a set of feasible solutions, then a penalty term corresponding to C is a function ψ: B_n →ℝ satisfying both ψ(x)=0 for all x∈ C and ψ(x)>0 for all x∈ B_n∖ C.
The QUBO formulation of the constrained optimization problem
minimize ϕ(x)
subject to x ∈ C
with a quadratic objective function ϕ of x is obtained as
minimize ϕ(x) + λψ (x)
subject to x ∈ B_n.
The coefficient λ > 0 of the penalty term is set sufficiently large so that any x violating the constraints is sufficiently penalized.
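As a toy illustration of such a penalty term (our own example, not tied to any particular application in this paper), the constraint that exactly one of three binary variables equals 1 admits the quadratic penalty ψ(x) = (x_1 + x_2 + x_3 - 1)^2, which can be checked by enumeration:

```python
from itertools import product

# Hypothetical one-hot constraint C = {x : exactly one x_i equals 1} and a
# quadratic penalty that vanishes exactly on C.
psi = lambda x: (sum(x) - 1) ** 2
feasible = [x for x in product((0, 1), repeat=3) if psi(x) == 0]
assert feasible == [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```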
§.§ Ising Machines
It is expected that near-optimal solutions of QUBO problems are efficiently sampled via computers called Ising machines.
Generally, an Ising machine takes a QUBO problem as an input and returns heuristic solutions of the QUBO problem.
Ising machines tend to perform poorly
when the input QUBO problem involves a complex energy landscape <cit.>,
which is a major issue of Ising machines for practical use.
Furthermore, several Ising machines are physically implemented with limited graph structures which we call hardware graphs.
For example, D-Wave Advantage <cit.> and Advantage2 <cit.> machines are associated with the Pegasus and Zephyr graphs, respectively.
The hardware graph restricts the structure of an input QUBO problem in a sense that the graph associated with the QUBO problem must be a subgraph of the hardware graph.
§.§ Minor Embedding
To embed a QUBO problem with arbitrary graph structure to Ising machines with sparse hardware graphs, a technique called minor embedding <cit.> has been developed.
We call the graph structure associated with the QUBO problem the input graph.
Minor embedding represents the input graph I as a minor of the hardware graph H.
Namely, a node in I is represented by a set of connected nodes (called a chain) in H.
The variables corresponding to nodes in a chain are penalized by their interaction so that they take the same values.
The strength of the penalty is called chain strength.
When a QUBO problem is dense, i.e., the input graph has dense edges, minor embedding requires a large number of hardware nodes to form chains.
Such large-sized chains lead to degradation of performance of Ising machines <cit.>.
Furthermore, the size of the hardware graph severely restricts the size of a dense input graph that can be embedded.
For a complete input graph, minor embedding is also called clique embedding.
Clique embedding (with relatively short chains) of complete graphs of size 180 and 232 has been found for
16-Pegasus graph P_16 of D-Wave Advantage <cit.> with 5640 nodes and
15-Zephyr graph Z_15 of D-Wave Advantage2 <cit.> with 7440 nodes, respectively.
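The following sketch (our own toy setup, not the configuration used in our experiments) illustrates how a dense input graph is minor-embedded into a sparse target graph with the minorminer library, mapping each input node to a chain of hardware nodes:

```python
import networkx as nx
import dwave_networkx as dnx
import minorminer

# Toy setup: embed a small complete graph into a 4x4 Chimera graph,
# which stands in for a sparse hardware graph.
source = nx.complete_graph(8)
target = dnx.chimera_graph(4)
embedding = minorminer.find_embedding(source.edges, target.edges)
# `embedding` maps each source node to a chain (a list of target nodes);
# the heuristic search may fail and return an empty dict.
if embedding:
    max_chain_length = max(len(chain) for chain in embedding.values())
```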
§ ORDER OF VARIABLES AND LINEARIZATION
We first give a motivating example of linearization via ordering variables.
Take a QUBO matrix over 3 variables
Q = [ -3 2 7; 0 -5 7; 0 0 -8; ].
The QUBO problem min_x ϕ(x) with ϕ(x) ≔ x^⊤ Q x has two local minima:
x=(1,1,0) with ϕ(x)=-6 and x=(0,0,1) with the global minimum ϕ(x)=-8.
We consider
conditions x_1 = 1 ⇒ x_2 = 1 and x_1 = 1 ⇒ x_3 = 1 as auxiliary constraints.
Such precedence conditions are obtained on knapsack problems, for example, as we discuss in Section <ref>.
Adding corresponding penalty terms x_1 - x_1 x_2 and x_1 - x_1 x_3 to
ϕ(x)
with coefficients equal to
Q_1,2=2 and Q_1,3=7 yields
ϕ̃(x) ≔ ϕ(x) + 2(x_1 - x_1 x_2) + 7(x_1 - x_1 x_3)
= -3 x_1 - 5 x_2 - 8 x_3 + 2 x_1 x_2 + 7 x_2 x_3 + 7 x_1 x_3
+ 2(x_1 - x_1 x_2) + 7(x_1 - x_1 x_3)
= 6 x_1 - 5 x_2 - 8 x_3 + 7 x_2 x_3 = x^⊤ Q̃ x
with a new QUBO matrix
Q̃ ≔ [ 6 0 0; 0 -5 7; 0 0 -8 ],
which has a smaller number of quadratic terms (the corresponding entries have become zero in Eq. (<ref>)).
Actually, the new objective function ϕ̃(x) has the same minimum as the original function ϕ(x), that is, ϕ̃(x)=-8 at x=(0,0,1), which follows from Theorem <ref> and Theorem <ref> below.
A comparison of the energy landscapes of the original QUBO problem and the new problem min_x ϕ̃(x) is shown in Fig. <ref>.
One of the local minima of the original QUBO problem is eliminated by adding the penalties.
Thus, the linearization process above yields an equivalent and sparser QUBO problem with simplified energy landscape.
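The claim can be verified by brute force; the following snippet (our own check, using the coefficients of Q and Q̃ above) confirms that ϕ and ϕ̃ share the minimum value -8 attained at (0,0,1):

```python
from itertools import product

def phi(x1, x2, x3):
    return -3*x1 - 5*x2 - 8*x3 + 2*x1*x2 + 7*x2*x3 + 7*x1*x3

def phi_tilde(x1, x2, x3):
    # phi plus the two penalties with coefficients Q_{1,2} = 2 and Q_{1,3} = 7
    return phi(x1, x2, x3) + 2*(x1 - x1*x2) + 7*(x1 - x1*x3)

points = list(product((0, 1), repeat=3))
assert min(phi(*x) for x in points) == -8
assert min(phi_tilde(*x) for x in points) == -8
assert min(points, key=lambda x: phi_tilde(*x)) == (0, 0, 1)
```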
We first give a motivating example of an order of variables on a knapsack problem.
In a knapsack problem, given multiple items, each with two scalars representing a value and a weight, we seek to select items that maximize the sum of values while the sum of weights does not exceed a prescribed upper bound called the capacity.
When modeling it as a QUBO problem, a binary variable x_i is associated with each item i, and item i is interpreted as selected when x_i=1.
If one item j is more valuable and weighs less than another item i, then item j should be selected in preference to item i.
That is, an implication x_i=1⇒ x_j=1 is satisfied for an optimal solution.
Once we find such an ordinal condition preserving the optimum, we may inject a penalty corresponding to the condition viewed as an auxiliary constraint.
On the other hand, a quadratic term x_i x_j appears with a positive coefficient in a QUBO objective function for the knapsack problem.
The linearization process eliminates the quadratic term exploiting the auxiliary penalty, which is the core of our method.
In the following sections,
we give a formal description of generalization of the above example.
We define the notion of ordering variables
and explain the associated auxiliary penalties.
We then introduce linearization of QUBO problems and explain related theoretical results.
§.§ Order of Variables and Associated Auxiliary Penalty
We define notions concerning precedence orders on variables.
An order of n binary variables x=(x_1,⋯,x_n) is a directed acyclic graph G=(V,E) with a vertex set V={x_1,⋯,x_n}[
Precisely, the nodes x_1, ⋯, x_n ∈ V should be distinguished with the variables (x_1,⋯,x_n)∈ B_n in a sense that x_i is just a symbol as a node, not a value of 0 or 1.
Although it might be consistent to label nodes as 1,⋯,n, we use x_1, ⋯, x_n to emphasize that G is an order of variables.
]
and an edge set E⊂ V× V.
We define a subset B_n^G ⊂ B_n ordered by an order G of variables as
B_n^G ≔ { x ∈ B_n | ∀ (x_i, x_j) ∈ E, x_i=1 ⇒ x_j=1 }.
The order G is valid with respect to minimization of a function ϕ: B_n →ℝ
if the following equality holds:
min{ϕ(x) | x ∈ B_n } =
min{ϕ(x) | x ∈ B_n^G }.
By definition, a valid order of variables
with respect to minimization of
ϕ induces auxiliary constraints on the optimization problem min_x ϕ(x) preserving the optimum.
Finding a valid order for given ϕ is a non-trivial task, which we argue later.
We discuss injection of penalties according to a given order.
Let ϕ: B_n →ℝ be an objective function of a minimization problem.
We consider an ordinal condition x_i=1 ⇒ x_j=1
in an order G of variables which is valid with respect to minimization of ϕ.
Since imposing the condition as a constraint preserves the minimum of ϕ, so does adding a penalty term corresponding to the constraint.
A penalty term representing the precedence condition x_i=1 ⇒ x_j=1 is defined by
x_i - x_i x_j.
Indeed, it takes 1 only if x_i=1 and x_j=0, and 0 otherwise.
Therefore, we have the following result.
Let G be an order of variables valid with respect to minimization of a function ϕ: B_n →ℝ.
Then, for any non-negative function c: E × B_n → ℝ, we have
min_x∈B_n ϕ(x) = min_x∈B_n ( ϕ(x) + ∑_e=(x_i,x_j)∈E c(e, x) (x_i - x_i x_j) ).
Moreover, if x^* ∈ B_n attains the minimum of the right hand side, then it attains the minimum of the left hand side.
We refer to Appendix <ref> for a proof, as it is straightforward.
The function c(e,x) in Proposition <ref> represents coefficients
of auxiliary penalties.
Our key insight is that QUBO problems can be simplified by taking appropriate
c(e,x) depending on the objective function
ϕ.
§.§ Linearization of QUBO Problems
We introduce linearization of QUBO matrix based on an order of variables.
Then, we prove the equivalence of the linearized and original QUBO problems.
Let ϕ(x) = x^⊤ Q x be a quadratic function of binary variables
defined by an upper-triangle matrix Q ∈ℝ^n× n.
Let G=(V,E) be an order of variables valid with respect to minimization of ϕ.
For i=1,2,⋯,n, we define
U_i^+ ≔ { j ∈ {1,2,⋯,n} | (x_i,x_j) ∈ E, Q_i,j > 0 },
U_i^- ≔ { j ∈ {1,2,⋯,n} | (x_i,x_j) ∈ E, Q_j,i > 0 }.
Then, we define a new QUBO matrix Q^G ∈ ℝ^n×n by
Q^G_i,j ≔
 0 if j ∈ U_i^+ or i ∈ U_j^-,
 Q_i,i + ∑_j'∈U_i^+ Q_i,j' + ∑_j'∈U_i^- Q_j',i if i=j,
 Q_i,j otherwise.
The process to obtain Q^G from Q is called linearization of Q with respect to G.
Recall that off-diagonal and diagonal entries of a QUBO matrix corresponds to quadratic and linear terms of the QUBO objective function, respectively.
The new QUBO matrix Q^G is obtained by converting each quadratic term Q_i,jx_i x_j or Q_j,ix_i x_j in the objective function with a positive coefficient satisfying (x_i, x_j) ∈ E to a linear term Q_i,ix_i, which is the reason we call the process linearization.
Note that the number of off-diagonal entries of the QUBO matrix is reduced by ∑_i ( |U^+_i| + |U^-_i| ) through linearization.
Consider the QUBO matrix Q given in Eq. (<ref>).
Two precedence conditions x_1 = 1 ⇒ x_2 = 1 and x_1 = 1 ⇒ x_3 = 1 correspond to an order G=(V,E) with V={x_1, x_2, x_3} and E={(x_1, x_2), (x_1,x_3)}.
In this setting, U_1^+ = {2,3} and all other U_i^+ and U_i^- are empty.
Then, it can be easily checked that the linearized QUBO matrix Q^G obtained by Eq. (<ref>) is equal to Q̃ given in Eq. (<ref>).
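A minimal NumPy sketch of the linearization map (our own illustration of the definition of Q^G above, assuming an upper-triangular Q and a 0-based directed edge list) reads as follows; applied to the matrix Q of the example with E corresponding to {(x_1, x_2), (x_1, x_3)}, it reproduces Q̃.

```python
import numpy as np

def linearize(Q, E):
    """Compute Q^G from an upper-triangular QUBO matrix Q and a directed
    edge list E of 0-based pairs (i, j) taken from a valid order."""
    QG = Q.astype(float).copy()
    for (i, j) in E:
        if i < j and Q[i, j] > 0:      # j belongs to U_i^+
            QG[i, i] += Q[i, j]
            QG[i, j] = 0.0
        elif j < i and Q[j, i] > 0:    # j belongs to U_i^-
            QG[i, i] += Q[j, i]
            QG[j, i] = 0.0
    return QG

Q = np.array([[-3, 2, 7], [0, -5, 7], [0, 0, -8]], dtype=float)
QG = linearize(Q, [(0, 1), (0, 2)])    # equals the matrix Q-tilde shown earlier
```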
Based on Proposition <ref>, we have the following result.
Let Q∈ℝ^n × n be an upper-triangle matrix
and G be an order of variables valid with respect to minimization of a function ϕ(x) = x^⊤ Q x.
Then, the following equality holds:
min_x ∈ B_n x^⊤ Q x = min_x ∈ B_n x^⊤ Q^G x.
Moreover, if x^* ∈ B_n attains the minimum of the right hand side, then it attains the minimum of the left hand side.
We define a non-negative function c: E × B_n → ℝ by
c((x_i,x_j),x) ≔
 Q_i,j if i<j and Q_i,j>0,
 Q_j,i if i>j and Q_j,i>0,
 0 otherwise.
Then, we have
x^⊤ Q x + ∑_e∈ E c(e, x) (x_i - x_i x_j)
= ∑_i ( ∑_j Q_i,jx_i x_j + ∑_j ∈ U_i^+ Q_i,j (x_i - x_i x_j)
+ ∑_j ∈ U_i^- Q_j,i (x_i - x_i x_j) )
= ∑_i ∑_j Q^G_i,j x_i x_j = x^⊤ Q^G x.
Thus, we get the assertion by applying Proposition <ref>.
§.§ Sufficient Condition for Validity of Order
Since the definition of validity of an order G of variables with respect to minimization of a function ϕ depends on the minimum of ϕ, determining validity of G might be as difficult as obtaining the minimum of ϕ.
Therefore, it seems impractical to exactly determine validity of a given order of variables in a reasonable time.
Instead, we give a sufficient condition for validity of G when ϕ is a quadratic function.
Let G=(V,E) be an order of n variables and
ϕ(x)=x^⊤ Q x be a quadratic function with Q∈^n× n.
For i,j=1,2,⋯,n, we define
a_i,j ≔ Q_i,j + Q_j,i if i ≠ j, and a_i,i ≔ Q_i,i.
That is, a_i,j is the coefficient of x_i x_j (or of x_i if i=j) in ϕ(x).
If the inequality
S_i,j ≔ ∑_k=1, k≠i,j^n max{ 0, a_j,k - a_i,k } + a_j,j - a_i,i ≤ 0
holds for every directed edge (x_i,x_j) ∈ E, then G is valid with respect to minimization of ϕ.
Eq. (<ref>) assures that ϕ(x) with (x_i, x_j) = (0,1) is not larger than ϕ(x) with (x_i, x_j) = (1,0).
Under such a situation, if there exists an optimal solution with (x_i, x_j) = (1,0), then an optimal solution with (x_i, x_j) = (0,1) also exists.
Therefore, by checking Eq. (<ref>) for all edges in E, it is concluded that G is valid.
A detailed proof of Theorem <ref> is in Appendix <ref>.
For the QUBO matrix Q and order G in Example <ref>, we have S_1,2 = -2 and S_1,3=0, which implies the validity of G.
We provide another
useful and instructive application of Theorem <ref> in the following example.
Let d≤ n be a positive integer.
Assume that a function ϕ: B_n → is symmetric with respect to variables x_1,⋯, x_d, that is, invariant under permutation of values of x_1,⋯, x_d.
Then, we have S_i,j = 0 for i,j=1,⋯,d with i ≠ j, since a_i,i = a_j,j and a_i,k = a_j,k hold for any k ≠ i,j.
Therefore, by Theorem <ref>, the order defined by the edge set
E = { (x_i, x_j) | 1 ≤ i < j ≤ d }
is valid with respect to minimization of ϕ.
Note that E defines a total order on {x_1,⋯,x_d}; when d=n, it attains the largest possible edge set on n nodes.
§ PROPOSED METHOD
We propose to apply an Ising machine to a QUBO problem after linearizing it.
Specifically, the proposed method consists of the following steps.
First, we extract a valid order of variables from a given optimization problem.
Then, we linearize the QUBO matrix based on the extracted order.
After that, the linearized QUBO problem is input to an Ising machine to sample a solution.
In the following,
we discuss expected effects and algorithms of the proposed method.
§.§ Effects on Application of Ising Machines
There are two aspects regarding effects of the proposed method on application of Ising machines.
One is on the energy landscape of QUBO problems, and the other is on minor embedding.
We expect linearization to have greater effects in both aspects when the number of edges |E| in the extracted order G of variables is larger.
We explain
the effects
below.
First, the proposed method improves solutions obtained with Ising machines by modifying the energy landscape.
Linearization is derived by adding auxiliary penalties to the objective function.
The penalties change the energy landscape of the QUBO problem as shown in Fig. <ref>.
They restrict the low-energy region by increasing the energy on B_n ∖ B_n^G.
Since Ising machines sample lower-energy solutions with higher probability, restricting the low-energy region helps them sample near-optimal solutions.
Furthermore, linearization eliminates quadratic terms and thus possibly removes local minima, which is favorable for both local and global search algorithms.
For these reasons, we expect that linearization enables Ising machines to output lower energy solutions.
Second, the proposed method mitigates defects of minor embedding by making a QUBO matrix sparser.
A linearized QUBO matrix Q^G has less off-diagonal elements than the original Q.
Therefore,
linearization would reduce
the number of auxiliary variables required for minor embedding.
This would further result in mitigating performance degradation of Ising machines due to embedding with large-sized chains as well as enabling embedding larger QUBO problems.
§.§ Extraction of Valid Order
In the first step of the proposed method, we extract a valid order of variables from a given problem.
This might be done in a problem-specific way or in a general way.
In the following, we explain a general algorithm to extract a valid order from a given QUBO problem on the basis of Theorem <ref>.
We discuss problem-specific cases in Section <ref> where a valid order can be extracted in a more computationally efficient way.
We describe an algorithm that takes a QUBO matrix as an input and returns the edge set of a valid order in Algorithm <ref>.
The algorithm is constructed based on
Theorem <ref>.
That is, we extract directed edges satisfying Eq. (<ref>) to form a set of edges.
An edge set E is initialized as an empty set.
In lines <ref>-<ref>,
S_i,j in Eq. (<ref>) for each pair (x_i, x_j) of variables is calculated, and
the pair is added to E if it passes the test, i.e., S_i,j≤ 0.
A condition (x_j, x_i)∉ E is imposed in line <ref> to ensure that the obtained directed graph G=(V,E) does not have a cycle.
Note that
(x_i, x_j) should satisfy a_j,j≤ a_i,i for Eq. (<ref>) to be satisfied,
since max{0, a_j,k - a_i,k}≥ 0.
Therefore, we check this inequality in line <ref> to omit redundant computation.
In lines <ref>-<ref>, S_i,j in Eq. (<ref>) is calculated as S.
We break the loop in lines <ref>-<ref> as soon as S gets larger than 0, since Eq. (<ref>) never holds then.
This pruning again omits redundant computation.
If S_i,j ≤ 0 is confirmed for the pair, it represents a valid precedence and is added to E in lines <ref>-<ref>.
Algorithm <ref> has a worst-case computational complexity of O(n^3).
If the pruning is triggered early for most of the loops over k, the complexity approaches O(n^2).
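A minimal NumPy sketch of this extraction procedure (our own rendering of the steps described above, not the implementation used in the experiments) is given below; applied to the 3-variable matrix Q of the earlier example, it returns the 0-based pairs [(0, 1), (0, 2)], i.e., the order E = {(x_1, x_2), (x_1, x_3)}.

```python
import numpy as np

def extract_order(Q):
    """Extract directed pairs (i, j), read as x_i = 1 => x_j = 1, from an
    upper-triangular QUBO matrix Q by testing S_{i,j} <= 0 with pruning."""
    n = Q.shape[0]
    A = Q + Q.T                        # a_{i,j} = Q_{i,j} + Q_{j,i} for i != j
    np.fill_diagonal(A, np.diag(Q))    # a_{i,i} = Q_{i,i}
    E = set()
    for i in range(n):
        for j in range(n):
            if i == j or (j, i) in E:  # skip self-pairs, keep the order acyclic
                continue
            if A[j, j] > A[i, i]:      # necessary condition for S_{i,j} <= 0
                continue
            S = A[j, j] - A[i, i]
            for k in range(n):
                if k == i or k == j:
                    continue
                S += max(0.0, A[j, k] - A[i, k])
                if S > 0:              # pruning: S can only grow from here
                    break
            if S <= 0:
                E.add((i, j))
    return sorted(E)
```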
Correctness of Algorithm <ref> is summarized as follows.
See
Appendix <ref> for a proof.
Let Q∈^n × n be a square real matrix and x=(x_1, ⋯, x_n) be a vector of binary variables.
Then, a graph G=(V, E) with a node set V={x_1, ⋯, x_n} and an edge set E obtained by running Algorithm <ref> with Q given as the input is a
directed acyclic graph, i.e., order of variables x.
Moreover, G is valid with respect to minimization of
ϕ(x) = x^⊤ Q x.
We remark that Algorithm <ref> can obviously be further optimized for sparse matrices by restricting the iteration range of j to e(i) and that of k to the union e(i) ∪ e(j), where e(i) denotes the set of nodes adjacent to x_i in the graph associated with Q.
The worst-case complexity of this extended version of Algorithm <ref> is O(OD(Q)·d), where OD(Q) is the number of non-zero off-diagonal entries of Q and d is the maximum degree of the graph associated with Q.
The pruning in the loop over k should reduce the complexity to nearly O(OD(Q)) for QUBO instances whose variables are hard to order.
Since the extension is straightforward, we omit further discussion.
In both dense and sparse settings, we expect the algorithms to run fast in practice for most cases, since O(OD(Q)) (= O(n^2) for dense Q) is typically the same complexity as preparing the QUBO matrix Q itself.
When the running time exceeds this complexity, the obtained edge set E should account for a substantial fraction of all edges in the graph associated with Q.
Based on the assumption that linearization has larger positive effects for larger |E|, the proposed method should have greater impact precisely when the running time is long, which yields a good trade-off between time and effectiveness.
We demonstrate the effect of the pruning in
Algorithm <ref>.
Running times of Algorithm <ref> on synthetic QUBO instances of various sizes over several problem classes are shown in Fig. <ref>.
A parameter p of problem classes controls the number |E| of ordered pairs extracted via Algorithm <ref>.
Namely, |E| is large for large p.
We refer to Section <ref> for details of synthetic QUBO problems and dependence of |E| on p.
Fig. <ref> indicates that small p leads to short processing time.
This is because the pruning in the loop over k in Algorithm <ref> is triggered almost every time for small p, since most pairs (x_i, x_j) satisfy S_i,j > 0 for small p.
We also prepared another set of synthetic QUBO instances by sampling entries Q_i,j of an upper triangle QUBO matrix uniformly from {-1, 0, 1}.
We call them hard instances, since we cannot expect variables of
the problems to be ordered at all by Algorithm <ref>.
On hard instances, positivity of S_i,j is detected at an early stage of the loop over k for each pair (x_i, x_j), so the total processing time should be near O(n^2) rather than O(n^3).
We indeed observe in Fig. <ref> that the running time on hard instances is shorter by an order of magnitude than that on the other synthetic QUBO instances.
We performed power-law regression on the running times; the exponents are summarized in Table <ref> and the fitted curves are plotted in Fig. <ref>.
As expected, the running time scales approximately as n^3 for large p and with somewhat smaller exponents for small p.
It scales with an exponent less than 2 on hard instances, which indicates that, thanks to the pruning, Algorithm <ref> runs fast on QUBO instances whose variables are clearly hard to order.
This is preferable for practical use, since complex practical problems typically involve mostly variables that are hard to order and only a few variables that may be ordered.
§.§ Linearization Algorithm
We proceed to the second step in the proposed method, i.e., linearization.
Algorithm <ref> shows a pseudo-code for linearization of a QUBO matrix.
As it simply computes Q^G following the definition given in Eq. (<ref>),
the correctness of Algorithm <ref> is straightforward and we omit the proof.
If one uses Algorithm <ref> to create an order G as an input of Algorithm <ref>, then a combined process which takes Q as an input and outputs a linearized QUBO matrix can be implemented in one algorithm in a more optimized way.
That is, instead of linearizing Q after computing the whole edge set E, we may linearize Q each time an ordered pair (x_i, x_j) is found (lines <ref>-<ref> in Algorithm <ref>).
This also removes the use of E altogether in Algorithm <ref>, which simplifies the algorithm.
§ APPLICATION TO PRACTICAL PROBLEMS
We illustrate applications of the proposed method to practical problems.
We take (multi-dimensional) knapsack problems as a typical class of combinatorial optimization problems, show an example of problem-specific ways to extract an order of variables and explain applicability of the proposed method.
§.§ Knapsack Problems
A knapsack problem is a combinatorial optimization problem defined as follows.
A set of n items is given, and each item i is associated with a value v_i and a weight w_i.
The capacity C of the knapsack is also given.
Here, we assume all these values are positive integers.
The objective is to obtain a subset of items with the maximum total value under the constraint that the total weight does not exceed the capacity.
A knapsack problem is mathematically modeled as the following:
max_x∈ B_n{∑ v_i x_i | ∑ w_i x_i ≤ C}.
Binary variables x=(x_1, ⋯, x_n)∈ B_n are decision variables, and x_i=1 is interpreted that item i is selected.
A knapsack problem is converted to a QUBO problem by representing an inequality constraint ∑ w_i x_i ≤ C as a penalty term.
The penalty term is defined as the following <cit.>:
k = ⌊log_2 C⌋ + 1,
R = C + 1 - 2^{k-1},
H_ineq = ( ∑_{i=1}^n w_i x_i - ∑_{i=1}^{k-1} 2^{i-1} y_i - R y_k )^2.
Here, y_1,⋯, y_k are auxiliary binary variables.
Since a knapsack problem is a maximization problem, we flip the sign of the objective function.
By taking a sufficiently large penalty coefficient λ, we obtain a QUBO problem:
min_(x,y)∈ B_n+k( -∑_i=1^n v_i x_i + λ H_ ineq).
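To make the encoding concrete, the following sketch (our own illustration; the function name, variable labels, and dictionary output format are not from the paper, and the default λ=1 is only a placeholder since λ must be chosen sufficiently large in practice) builds the QUBO coefficients of the formulation above, with the capacity constraint encoded by the log-sized slack variables y_1,…,y_k:

```python
import math
from itertools import combinations

def knapsack_qubo(values, weights, capacity, lam=1.0):
    """Return QUBO coefficients {(u, v): coeff} for a 0-1 knapsack problem,
    encoding the capacity constraint with the log-sized slack of H_ineq."""
    n = len(values)
    k = int(math.log2(capacity)) + 1
    R = capacity + 1 - 2 ** (k - 1)
    slack = [2 ** j for j in range(k - 1)] + [R]   # coefficients of y_1, ..., y_k

    # signed linear coefficients appearing inside the squared penalty
    terms = [(('x', i), weights[i]) for i in range(n)] + \
            [(('y', j), -c) for j, c in enumerate(slack)]

    Q = {}
    def add(u, v, c):
        key = (u, v) if u <= v else (v, u)
        Q[key] = Q.get(key, 0.0) + c

    for i in range(n):                 # maximize values -> minimize -sum_i v_i x_i
        add(('x', i), ('x', i), -values[i])
    for (u, a) in terms:               # expand lam * (sum_t a_t z_t)^2 with z^2 = z
        add(u, u, lam * a * a)
    for (u, a), (v, b) in combinations(terms, 2):
        add(u, v, 2 * lam * a * b)
    return Q
```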
§.§ Order of Variables in Knapsack Problems
For the QUBO problem Eq. (<ref>),
we define an order G=(V,E) on a set V={x_1,⋯,x_n} of variables as follows:
E ≔ { (x_i, x_j) ∈ V × V | v_i ≤ v_j, w_i ≥ w_j, and (v_i = v_j, w_i = w_j ⇒ i < j) }.
The order G is valid with respect to minimization of the objective function in Eq. (<ref>).
A proof sketch, in a manner similar to the proof of Theorem <ref>, is as follows.
Fix a set of selected items except items i and j.
When item j is at least as valuable as item i, we obtain at least as high a total value by selecting item j rather than item i, provided the knapsack admits it.
If item j weighs no more than item i, then item j can be selected while satisfying the constraint whenever item i can be selected.
Therefore, if both conditions on values and weights hold between item i and j, then we may select item j prior to item i.
In other words, we may add an auxiliary constraint x_i = 1 ⇒ x_j = 1.
A rigorous proof is given by extending Theorem <ref> to a constrained setting with inequalities, which is
left to Appendix <ref>.
Note that when items i and j have exactly the same value and weight,
there is a subtlety in considering the
two-way constraints x_i = 1 ⇒ x_j = 1 and x_j = 1 ⇒ x_i = 1.
Adding both constraints prohibits selecting only either item i or j, which possibly changes the optimum of the knapsack problem.
To avoid this, we add restriction that x_i = 1 ⇒ x_j = 1 only when i<j if the values and weights coincide.
We remark that this corresponds to removing cycles in G, which illustrates why we impose acyclicity of a graph to be an order of variables in Definition <ref>.
Every quadratic term of form x_i x_j appears in the objective function in Eq. (<ref>) with a positive coefficient.
Therefore, each quadratic term corresponding to an edge (x_i, x_j)∈ E is reduced to a linear term through linearization.
§.§ Generalization to Multi-dimensional Knapsack Problems
The order of variables on a knapsack problem given in the previous section is easily extended to a multi-dimensional knapsack problem (MKP).
An MKP is a generalization of a knapsack problem in which multiple knapsack constraints are given.
Namely,
m types of capacities C_k (k=1,⋯,m) and m types of weights w_k,i (k=1,⋯,m) of each item i are given where m is a positive integer.
The objective is to maximize the total value ∑_i v_i x_i under m constraints ∑_i w_k,i x_i ≤ C_k for k=1,⋯,m instead of only one constraint.
In a similar manner to knapsack problems, a QUBO objective function H and a valid order G=(V,E) are defined as follows:
H = -∑_i v_i x_i + λ( H^(1)_ ineq + ⋯ + H^(m)_ ineq),
E ≔ { (x_i, x_j) | v_i ≤ v_j, w_k,i ≥ w_k,j ∀ k, and (v_i = v_j, w_k,i = w_k,j ∀ k ⇒ i < j) }.
Here, H^(k)_ ineq denotes a penalty term corresponding to k-th weights defined similarly as in Eq. (<ref>) for k=1,⋯,m.
The edge set E is extracted by comparing (m+1) scalars for each pair (x_i, x_j).
Computational complexity of such an algorithm is O(n^2 m).
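A direct sketch of this problem-specific extraction (our own illustration, using 0-based indices; the tie-breaking i < j mirrors the acyclicity condition above) compares the (m+1) scalars of every pair of items and runs in O(n^2 m):

```python
def extract_mkp_order(values, weights):
    """Return the directed pairs (i, j) of the order defined above, where
    weights[k][i] is the k-th weight of item i; runs in O(n^2 m) time."""
    n, m = len(values), len(weights)
    E = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dominated = values[i] <= values[j] and all(
                weights[k][i] >= weights[k][j] for k in range(m))
            tie = values[i] == values[j] and all(
                weights[k][i] == weights[k][j] for k in range(m))
            if dominated and (not tie or i < j):
                E.append((i, j))
    return E
```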
§ EXPERIMENTS
We conduct numerical experiments to evaluate the effects of the proposed method on synthetic QUBO instances and MKP instances.
We first introduce problem instances, then explain experimental setup.
The effects on (i) degradation of Ising machine performance via minor embedding, (ii) improvement of embeddability of large-sized problems, and (iii) performance improvement on practical problems are examined respectively.
§.§ Problem Instances
We introduce data sets of problem instances on which the proposed method is evaluated.
As a notational convention, U(l,u) denotes a uniform probability distribution over integers in an interval [l, u] with integers l,u and ⌊ x ⌋ denotes the largest integer such that ⌊ x ⌋≤ x for a real number x.
§.§.§ Synthetic QUBO Instances
Let n be a positive integer.
An upper-triangle QUBO matrix Q∈ℝ^n × n is generated as follows with a positive integer s and positive real number p as inputs.
Set o ≔ sp.
Every off-diagonal upper-triangle entry of Q is sampled independently from U(1+o, s+o).
Every diagonal entry of Q is sampled independently from U(-⌊ p(n-1)(s+1) ⌋ -o,-1-o).
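A sketch of this generator (our own rendering of the steps above; the cast of o = sp to an integer is exact for the parameter values used in our experiments) is as follows:

```python
import numpy as np

def synthetic_qubo(n, s=10, p=1.0, seed=None):
    """Generate an upper-triangular synthetic QUBO matrix as described above."""
    rng = np.random.default_rng(seed)
    o = int(s * p)                      # integer for the parameter values used here
    Q = np.zeros((n, n), dtype=int)
    iu = np.triu_indices(n, k=1)
    # off-diagonal upper-triangle entries ~ U(1 + o, s + o)
    Q[iu] = rng.integers(1 + o, s + o, size=len(iu[0]), endpoint=True)
    # diagonal entries ~ U(-floor(p(n-1)(s+1)) - o, -1 - o)
    lo = -int(np.floor(p * (n - 1) * (s + 1))) - o
    np.fill_diagonal(Q, rng.integers(lo, -1 - o, size=n, endpoint=True))
    return Q
```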
We explain some properties of QUBO instances generated as above.
First, we note that the expectation value of sum ∑_j=1^i-1 Q_j,i + ∑_j=i+1^n Q_i,j of interactions between a variable x_i and the others is 0.5(n-1)(s+1)+(n-1)o.
Suppose we took the offset o of each entry to be o=0 when generating QUBO instances.
If p is larger than 0.5, the scale p(n-1)(s+1) of the linear-term coefficients exceeds the expected total interaction above.
Then, for large p, the values of several variables can be fixed to 1 without solving the problem, since the diagonal entry Q_i,i is too negative compared with the off-diagonal entries Q_i,j, Q_j,i.
To avoid such trivial problems, we set a positive offset o=sp.
A generated QUBO matrix has the following properties:
* All off-diagonal upper-triangle entries have positive values. In particular, a graph associated to the matrix is complete.
* For large p, the variance of diagonal entries is large.
In particular, difference |Q_i,i-Q_j,j| of two diagonal entries tends to be large.
From Property 1, a generated QUBO instance requires a number of additional variables for minor embedding to a sparse hardware graph.
From Property 2, the number of pairs (x_i, x_j) with S_i,j≤ 0 is expected to increase by increasing p.
Based on Theorem <ref>, this indicates that large p leads to a large number of ordered pairs of variables obtained by Algorithm <ref>.
Moreover, from Property 1, all off-diagonal entries corresponding to ordered pairs are reduced to diagonal entries through linearization.
In summary, by setting p larger, the proposed method is expected to have greater effects.
Note that the other parameter s determines the whole scale of QUBO matrices, and considered to have no impact on effects of the proposed methods or on solvability of QUBO matrices.
We generated 10 QUBO instances for every combination of s=10, n∈{180, 232} and p ∈{0.1, 0.2, 0.5, 1.0, 1.5, 2.0} following the above process.
Note that n=180, 232 are the (near-)maximum size of complete graphs embeddable to the 16-Pegasus graph P_16 and 15-Zephyr graph Z_15, respectively.
Furthermore, we generated QUBO instances increasing n from n=190 to n=500 with the same set of parameters p and s to examine scaling of embeddability.
These instances were also used to analyze the running time of Algorithm <ref> in Section <ref>.
§.§.§ MKP Instances
We use the OR-Library data set <cit.> of MKP instances.
It consists of 30 randomly generated problem instances for every combination of the number of items n ∈{100, 250, 500} and the number of types of weights m ∈{5,10,30}.
More specifically, there are 10 instances generated for each m, n and the tightness parameter α∈{0.25, 0.5, 0.75} of inequality constraints in the following process:
Weight w_k,i is independently sampled from U(1,1000) for each i=1,⋯,n and k=1,⋯,m.
Capacity C_k is defined as C_k = α∑_i w_k,i.
A real number q_i ∈ [0,1] is sampled from a uniform distribution for each item i, and the value v_i is defined as v_i = ∑_{k=1}^m w_k,i/m + 500 q_i.
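For reference, a sketch of this generation process (our own illustration; the rounding of capacities to integers is our assumption and not taken from the data set documentation) is:

```python
import numpy as np

def generate_mkp(n, m, alpha, seed=None):
    """Generate an MKP instance following the process described above."""
    rng = np.random.default_rng(seed)
    w = rng.integers(1, 1000, size=(m, n), endpoint=True)  # weights w_{k,i}
    C = np.floor(alpha * w.sum(axis=1)).astype(int)        # capacities C_k
    q = rng.random(n)                                      # q_i uniform in [0, 1]
    v = w.sum(axis=0) / m + 500 * q                        # values v_i
    return v, w, C
```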
In addition to the above instances, we create knapsack problem instances (which we also call MKPs with m=1 for simplicity) by ignoring constraints ∑_iw_k,i x_i ≤ C_k except k=1.
We use instances with m=1,5,10 in our experiments.
Instances with m=30 are not used since we found that no ordered pairs are obtained for them, that is, |E|=0, and thus linearization has no effect on those instances.
§.§ Computational Setup
Source code for experiments is written with Python 3.9.16 using D-Wave Ocean SDK[https://github.com/dwavesystems/dwave-ocean-sdk] 6.3.0 and Fixstars Amplify SDK <cit.> 0.11.1 libraries, except for Algorithm <ref>.
To measure processing time,
Algorithm <ref> is implemented with Cython 0.29.35 and compiled as code of C.
Programs are run on MacBook Pro with Apple M2 chip and 8GB memory.
We use Amplify Annealing Engine (AE) <cit.> of version v0.7.3-A100 as an Ising machine with execution time of 1 second.
For minor embedding search, we use minorminer Algorithm <cit.> implemented on Ocean SDK with timeout of 1000 second.
Chain strength for minor embedding is calculated with
in Ocean SDK.
§.§ Performance of Ising Machine with Minor Embedding
We evaluate the effect of the proposed method on quality of minor embedding and on performance of the Ising machine combined with minor embedding.
We conduct experiments using the synthetic QUBO matrices with n=180 and 232 described in the previous section.
§.§.§ Setting and Metrics
We apply Algorithm <ref>
and Algorithm <ref> to linearize the generated QUBO matrices.
We evaluate reduction rates of non-zero off-diagonal entries of QUBO matrices.
Then, we run minor embedding search on each linearized QUBO instance setting target graphs as
P_16 for n=180 and
Z_15 for n=232.
We evaluate quality of obtained embedding by the number of auxiliary variables and the maximum chain length.
All of the metrics are averaged over 10 QUBO instances.
We set clique embedding <cit.> as a baseline, since the original QUBO instances have a fully connected graph structure.
After obtaining minor embedding for linearized QUBO instances, we apply the Ising machine 10 times to each of the QUBO problems of three types:
the original problems, equivalent problems embedded by clique embedding and linearized problems embedded by the obtained minor embedding.
We evaluate performance of the Ising machine by energy of the best solution averaged over 10 instances.
We remark several comments on the experimental setting.
First,
Amplify SDK and AE provide an interface that accepts a fully connected QUBO problem without minor embedding.
Nevertheless, we intentionally input sparse minor-embedded QUBO problems to AE,
since
our aim in the experiment is to evaluate performance degradation of Ising machines due to minor embedding with various hardware graphs in a unified way.
Second, performance of the search algorithm for minor embedding might have significant effect on experimental results.
To enhance the performance of minorminer Algorithm, we adopt a method setting clique embedding as an initial state of the algorithm, which is simple yet effective as shown in the previous study <cit.>.
In preliminary experiments, we found an interesting fact that setting clique embedding of a complete graph of size slightly less than n as an initial state occasionally yields better results.
Thus, for QUBO instances of size n=180 and n=232, we run the minorminer Algorithm with clique embeddings of complete graphs of sizes in {160, 180} and {200, 232} as initial states, respectively, and report the result with fewer auxiliary variables.
As a result, we observed that clique embedding of the smaller size indeed gives better results for instances of p=1.5, 2.0.
Lastly, energy for an embedded QUBO problem can be defined in two ways: raw energy of solutions of the embedded problem and unembedded energy of solutions on the original problem decoded by fixing broken chains in some way.
We observed that no broken chains appear in any solution in our experiments, and thus we do not distinguish the two below.
§.§.§ Results
We show results on the number of non-zero off-diagonal entries of QUBO matrices and on the quality of the minor embedding in Table <ref>.
The column label H denotes a target hardware graph.
The “clique” row for each target graph shows the results of the baseline, that is, clique embedding.
OD(Q^G) denotes the number of non-zero off-diagonal entries of linearized QUBO matrices and of total off-diagonal entries on the “clique” rows.
The value of OD(Q^G) is smaller for larger p, since more pairs of variables are ordered.
Note that the reduction rates of
non-zero off-diagonal entries
depend on p and not on n.
For p=1.5 and p=2.0, we observe that the numbers of auxiliary variables and maximum chain lengths of the obtained minor embeddings are significantly reduced.
For the other cases, such drastic changes are not observed.
We show results on the performance of the Ising machine in Table <ref>.
We exclude the results of p=0.1, since
we obtained |E|=0 and thus linearization does not change the QUBO problems.
On the original QUBO problems without minor embedding, the Ising machine produced solutions with the same energy on all 10
executions for each instance.
This indicates that the generated QUBO problems are relatively easy to solve and that the Ising machine always outputs optimal solutions on these problems.
We also applied the Ising machine to the linearized QUBO problems without minor embedding and observed that the result (omitted from Table <ref>) is exactly the same as that of the original problems.
In Table <ref>,
the energy of solutions of QUBO problems embedded with clique embedding shown in the
“Baseline”
column
is larger than that of the
problems without minor embedding.
This implies that minor embedding degrades the performance of the Ising machine even on those easy QUBO problems.
The energy of solutions of linearized
problems shown in the “Linearized” column
is smaller than
the baselines for all cases.
This implies that the proposed method improves performance of the Ising machine for minor-embedded QUBO problems.
It is an interesting phenomenon, considering the fact that the quality of minor embedding is not changed by linearization for p≤ 1.
Namely, the results suggest that there are other reasons for the improvement of performance than improving the quality of minor embedding.
It is further supported by additional experiments on clique embedding with linearization in Appendix <ref>, where similar performance improvement via linearization is observed even for a fixed embedding.
One possible reason for the improvement is that the energy landscape of the minor-embedded QUBO problems are simplified by linearization.
§.§ Embeddability of Large-sized Problems
We evaluate success rates of
minor embedding of linearized QUBO problems increasing the problem size n.
We conduct experiments using synthetic QUBO matrices increasing n on Pegasus graph P_16 and Zephyr graph Z_15 of the fixed size.
§.§.§ Setting and Metrics
We start increasing the problem size n from n=180 for P_16 and n=232 for Z_15 with step size 10.
Note that smaller instances can be trivially embedded via clique embedding.
For each combination of n and p∈{0.2, 0.5, 1.0, 1.5, 2.0}, we apply Algorithm <ref> and Algorithm <ref> to the generated 10 QUBO instances to obtain linearized QUBO instances.
For each target graph P_16 and Z_15,
we run minor embedding search on those linearized QUBO instances and count the number of instances for which a valid minor embedding is found.
In a similar manner as in the previous experiment, we set clique embedding of a complete graph of size 180 and 232 as an initial state of minorminer Algorithm for P_16 and Z_15, respectively.
We remark that since only the graph structures of QUBO matrices matter in this experiment, and the density of linearized synthetic QUBO problems depends only on p, the linearized synthetic QUBO instances can be viewed as random graphs with almost constant density for a fixed p.
§.§.§ Results
We plot the number of instances for which valid minor embedding is found in Fig. <ref>.
For p ≥ 0.5, the QUBO instances with larger n can be minor-embedded into the target graph on both setting of P_16 and Z_15.
Note that no tested QUBO instances can be embedded without linearization, since the clique embedding gives a near-optimal upper bound of the size of dense QUBO problems that can be embedded.
We observe that larger p results in a higher upper limit of the size of embeddable QUBO problems.
For example, all 10 instances of size n=360 are minor-embedded into P_16 when p=2.0, while no valid embedding is found for instances of size n=220 when p=0.5.
This is because more pairs of variables are ordered in Algorithm <ref> for larger p, and thus the QUBO matrices get sparser by linearization.
§.§ Performance of Ising Machine on MKPs
We evaluate the effect of the proposed method on performance of the Ising machine on MKPs.
Note that excellent algorithms to solve MKPs heuristically or exactly have been well developed through decades of research.
Our aim in this experiment is not to beat them, but to show to what extent our proposed method enables one of the state-of-the-art Ising machines to obtain near-optimal solutions under typical setting where the method can be applied.
The experiment is conducted on MKP instances described in the previous section.
§.§.§ Setting and Metrics
We solve MKPs by encoding them to QUBO problems as in Eq. (<ref>) with and without linearization with respect to the order given by Eq. (<ref>) and applying the Ising machine.
We adopt the following two slightly different ways of implementation of each penalty term Eq. (<ref>) of the inequality constraints:
* Penalty terms
are programmed as-is by explicitly defining auxiliary variables.
* Penalty terms are defined using function in Amplify SDK
without explicitly defining auxiliary variables.
We explain each method in detail in the next paragraph.
For each method, we execute the Ising machine 50 times to sample 50 solutions on every MKP instance with and without the proposed method applied.
Quality of solutions of an MKP instance is evaluated by the number of feasible solutions and optimality gaps of solutions.
The averaged or best optimality gap of solutions is defined by
Optimality Gap = (S_best - S_Ising) / S_best × 100,
where S_best is the best known score ∑_i v_i x_i obtained using other solvers and S_Ising is the averaged or best score over feasible solutions obtained by the Ising machine.
In this experiment, S_ best is obtained by running Gurobi Optimizer <cit.> of version 9.1.2 for 10 seconds.
The obtained S_ best for small-scale instances such as n=100 or m=1 are proved to be optimal by the solver.
All metrics are reported by averaging over 10 instances for each combination of n, m, α.
We provide some details on each encoding method.
On Method 1, the whole objective function of a MKP in QUBO form with the proposed method is given as follows:
H_ lin,1 = -∑ v_i x_i
+λ∑_k=1^m (H^(k)_ ineq + ∑_(i,j)∈ E 2w_k,iw_k,j(x_i - x_i x_j) ).
A solution is feasible if ∑_i w_k,ix_i ≤ C_k holds for all k=1,⋯,m.
Note that a solution can be feasible even when H^(k)_ ineq>0, in which case the auxiliary variables for H^(k)_ ineq
are not completely optimized.
On Method 2,
function in Amplify SDK serves functionality to conceal auxiliary variables and enable users to avoid tedious tasks to program penalties of constraints.
The implementation of the function is undisclosed and might be specially designed to enhance performance of the Ising machine.
While an object returned by the function can be added to usual polynomial objects or multiplied by a scalar, users cannot access and change the penalty term.
Therefore, we cannot linearize each penalty as in Eq. (<ref>).
In this experiment, we apply the proposed method in Method 2 by replacing H^(k)_ ineq by function in Eq. (<ref>) and reformulating as follows:
H_ lin,2 = -∑ v_i x_i
+ λ∑_k=1^m (∑_i w_k,ix_i, C_k )
+λ∑_k=1^m ∑_(i,j)∈ E 2w_k,iw_k,j(x_i - x_i x_j),
where (∑_i w_k,ix_i, C_k ) represents the inequality constraint ∑_i w_k,ix_i ≤ C_k.
On the penalty coefficient λ, we found that a sufficient number of feasible solutions are obtained with λ=1 on both methods in a preliminary experiment.
Thus, we adopt λ=1 in this experiment.
We refer to Appendix <ref> for results based on baselines tuned with respect to λ.
§.§.§ Results
Results are summarized in Table <ref>.
#FS stands for the number of feasible solutions.
Gap denotes the best optimality gap of solutions.
The results for the averaged optimality gap are omitted due to space limitations and are similar to those for the best optimality gap; see Appendix <ref> for the full results.
|E| denotes the number of ordered pairs of variables.
All metrics are averaged over 10 instances.
|E| is large when n is large and m is small.
This is a natural consequence, since
the number of candidate pairs of variables to be ordered increases for increasing n and conversely, larger m leads to a tighter condition for a pair of variables to be ordered following Eq. (<ref>).
Meanwhile, Eq. (<ref>) depends only on v_i and w_k,i and not on C_k, and thus expectation values of |E| do not depend on α.
On Method 1, the number of feasible solutions tends to slightly decrease by applying the proposed method.
We consider that this is because the Ising machine outputs solutions violating constraints in order to avoid the auxiliary penalties introduced by the proposed method.
The increase in the number of infeasible solutions is negligibly small, and thus is not a practical issue.
We do not observe such increase of infeasible solutions on Method 2.
Regarding optimality gap, the proposed method substantially decreases the gap for both Method 1 and Method 2 on instances with large |E|.
In particular, the proposed method enables the Ising machine to reach the exact optimum on all instances of m=1 and n=100 with Method 2.
On instances of m=10 and n=100,250, we do not observe consistent improvement in performance due to small |E|.
Interestingly, we observe that the impact of the proposed method tends to be greater for smaller α.
This suggests that the proposed method is particularly effective on problems with relatively tight inequality constraints.
Although the reason for this phenomenon is not clear, we consider it might be related to the scale of coefficients of the QUBO form Eq. (<ref>).
Since the coefficients of the auxiliary variables in Eq. (<ref>) scale proportionally to the knapsack capacity, large α leads to large coefficients in the QUBO objective function of Eq. (<ref>).
On the other hand, the proposed method also increases the coefficients of linear terms through linearization.
Assuming that the distribution and scale of coefficients in the QUBO objective function influence the performance of Ising machines, the difference in the impact of the proposed method across α may be explained in terms of the coefficient distribution.
An analysis of the α-dependence of the baseline behavior (without linearization) is needed in the first place to verify this hypothesis.
Since such a study does not yet exist and is beyond the scope of this paper, we leave a precise analysis of the dependence of performance on the tightness of inequality constraints as future work.
§ RELATED WORK AND DISCUSSION
We have proposed a method for deriving QUBO matrices suitable for Ising machines by introducing auxiliary penalties which preserves the optimum of given QUBO problems.
No previous studies have taken such an approach so far.
Several studies consider ordinal conditions in knapsack problems <cit.> or in representation of integer variables via binary variables <cit.>.
These studies impose the ordinal conditions on variables as hard constraints given by the problem definition.
Our approach differs in that ordinal conditions are derived as auxiliary penalties to improve the applicability of Ising machines.
It is an interesting question whether linearization induces feasible solutions even on problems with hard ordinal constraints, which will be explored in future work.
There are various directions of extension and application of the proposed method.
We explain some possible directions below to illustrate the potential impact of our approach utilizing auxiliary penalties.
Exploration of each direction goes beyond the scope of this paper, which is to verify the effectiveness of linearization in improving minor embedding and Ising machine performance, and is thus left as future work.
One straightforward extension is to consider implications other than x_i=1 ⇒ x_j=1, such as x_i=1 ⇒ x_j=0.
Note that x_i=1 ⇒ x_j=0 is equivalent to x_i x_j=0, which yields a corresponding penalty term in QUBO form.
When an auxiliary constraint x_i=1 ⇒ x_j=0 can be imposed without changing the optimum of the QUBO problem, the corresponding quadratic term x_i x_j with a negative coefficient can be removed by adding the penalty x_i x_j with a suitable coefficient.
We expect the process has similar effects on sparsity and energy landscape of QUBO problems.
Exploring practical application of such an extension is an important direction of future studies.
There is another extension of the method to Ising machines that can directly handle inequality constraints
without encoding them to a QUBO objective function as in Eq. (<ref>) and Eq. (<ref>).
The 3rd-generation Fujitsu Digital Annealer <cit.> is one such Ising machine.
As we have seen in the case of MKPs, quadratic terms to be linearized via ordering often come from the expanded polynomials of penalties of inequality constraints.
In such a case, the proposed method might not be applicable as-is to Ising machines that directly handle inequality constraints.
Meanwhile, the method can be extended to the situation by considering an auxiliary constraint x_i=1 ⇒ x_j = 1 as an inequality constraint x_i ≤ x_j and encoding it to the Ising machines, instead of converting it to a penalty term.
It is an interesting direction of investigation to evaluate performance of Ising machines with the extended method.
A possible application of the proposed method is efficient processing of QUBO matrices.
When dealing with numerous variables in a QUBO problem, their dense interactions result in a large computational overhead for constructing the QUBO matrix, both in time and space.
Symmetric variables admit a total order as in Example <ref>, and this property might enable efficient construction of the linearized QUBO matrix, possibly skipping the calculation of dense interactions.
With the emergence of large-scale Ising machines, the construction of QUBO matrices is expected to become a bottleneck of the whole workflow of using Ising machines.
Thus, this direction of application of the proposed method seems a promising approach to tackle the bottleneck.
§ CONCLUSION
We proposed linearization via ordering variables of QUBO problems to improve applicability and performance of Ising machines.
Linearization eliminates quadratic terms in the QUBO objective function by introducing auxiliary penalty of ordinal conditions with suitable coefficients, thereby simplifying energy landscape of the QUBO problem and enhancing minor embedding.
We developed general and practical algorithms to extract a valid order and analyzed their computational complexity.
Through experiments on synthetic QUBO problems and MKPs, we validated the effects of the proposed method.
The results confirm that linearization mitigates the performance degradation of Ising machines due to minor embedding (possibly not by improving the quality of minor embedding but by simplifying the energy landscape), enables minor embedding of larger QUBO instances, and substantially reduces optimality gaps on practical problems.
§ PROOFS OF THEORETICAL RESULTS
§.§ Proof of Proposition <ref>
[= Proposition <ref>]
Let G be an order of variables valid with respect to minimization of a function ϕ: B_n →ℝ.
Then, for any non-negative function c: E × B_n → ℝ, we have
min_x∈B_n ϕ(x) = min_x∈B_n ( ϕ(x) + ∑_e=(x_i,x_j)∈E c(e, x) (x_i - x_i x_j) ).
Moreover, if x^* ∈ B_n attains the minimum of the right hand side, then it attains the minimum of the left hand side.
We define ψ(x) ≔ ϕ(x) + ∑_e=(x_i,x_j)∈E c(e, x) (x_i - x_i x_j).
Then ψ(x) ≥ ϕ(x) always holds.
Since G is valid with respect to minimization of ϕ, there exists x^* ∈ B_n^G such that ϕ(x^*) = min_x∈B_n ϕ(x).
We have an equality ϕ(x^*) = ψ(x^*) since ϕ(x) = ψ(x) for every x ∈ B_n^G.
For any x∈ B_n, we have ψ(x) ≥ϕ(x) ≥ϕ(x^*) = ψ(x^*), so x^* minimizes ψ.
Therefore, Eq. (<ref>) is proved.
Conversely, if x^* minimizes ψ, then
ψ(x^*) ≥ϕ(x^*) and ψ(x^*) =
min_x∈ B_nψ(x) = min_x∈ B_nϕ(x) holds.
Thus, we have ϕ(x^*) ≤min_x∈ B_nϕ(x), and so x^* minimizes ϕ.
§.§ Proof of Theorem <ref>
[= Theorem <ref>]
Let G=(V,E) be an order of n variables and
ϕ(x)=x^⊤ Q x be a quadratic function with Q∈^n× n.
For i,j=1,2,⋯,n, we define
a_i,j ≔ Q_i,j + Q_j,i if i ≠ j, and a_i,i ≔ Q_i,i.
That is, a_i,j is the coefficient of x_i x_j (or of x_i if i=j) in ϕ(x).
If the inequality
S_i,j ≔ ∑_k=1, k≠i,j^n max{ 0, a_j,k - a_i,k } + a_j,j - a_i,i ≤ 0
holds for every directed edge (x_i,x_j) ∈ E, then G is valid with respect to minimization of ϕ.
Assume S_i,j≤ 0 for any directed edge (x_i, x_j) ∈ E.
It suffices to show that for any x∈ B_n, there exists x̃∈ B_n^G such that ϕ(x̃)≤ϕ(x).
Take x ∈ B_n.
If x ∈ B_n^G, then there is nothing to show.
Otherwise, there exists a directed edge (x_i, x_j) ∈ E such that x_i=1, x_j=0.
Let F be a set of all directed edges (x_i, x_j) ∈ E such that x_i=1, x_j=0.
Take some (x_i, x_j) ∈ F and define x'∈ B_n by
x_k' = 0 if k=i, 1 if k=j, and x_k if k ≠ i,j.
Then, we have
ϕ(x') - ϕ(x) = ∑_k=1, k≠i,j^n (a_j,k - a_i,k) x_k + a_j,j - a_i,i ≤ S_i,j ≤ 0.
Therefore, we may replace x by x'.
We claim that repetition of this replacement terminates within a finite number of iterations.
To show this, we observe that the subgraph G_F=(V,F) of G=(V,E) is acyclic.
With each replacement of x, the number of paths in G_F decreases by at least 1 since G_F is acyclic.
Thus, the number of repetitions cannot exceed the number of paths in G_F, which is finite.
After the repetition terminates, we obtain G_F with no edges.
This implies that F is empty, so we conclude that x ∈ B_n^G.
The above theorem and proof can be straightforwardly generalized to the following constrained setting,
which shows validity of the order of variables given in Eq. (<ref>) as a direct corollary.
Let G=(V,E) be an order of n variables and
ϕ(x)=x^⊤ Q x be a quadratic function with Q∈^n× n.
Let
0 ≤∑_i w_k,i x_i ≤ C_k (k=1,2,⋯,m) be linear inequality constraints with positive integers w_k,i and C_k.
Let H_ ineq^(k) be the corresponding penalties given as in Eq. (<ref>) for each k.
Let ϕ̃(x) ≔ ϕ(x) + ∑_k=1^m λ_k H_ineq^(k) be the QUBO objective function with sufficiently large λ_k (k=1,2,⋯,m) so that an optimal solution satisfies all constraints.
For i,j=1,2,⋯,n, we define
a_i,j ≔ Q_i,j + Q_j,i if i ≠ j, and a_i,i ≔ Q_i,i.
We also define
S_i,j ≔ ∑_l=1, l≠i,j^n max{ 0, a_j,l - a_i,l } + a_j,j - a_i,i.
If inequalities
S_i,j≤ 0 and w_k,i≥ w_k,j
for all k=1,2,⋯,m hold for every directed edge (x_i,x_j)∈ E, then G is valid with respect to minimization of ϕ̃.
We set C ≔ { x ∈ B_n | 0 ≤ ∑_i w_k,i x_i ≤ C_k ∀ k=1,2,⋯,m } ⊂ B_n and C^G ≔ C ∩ B_n^G.
Assume S_i,j≤ 0 and w_k,i≥ w_k,j
for all k=1,2,⋯,m for any directed edge (x_i, x_j) ∈ E.
It suffices to show that for any x∈ C, there exists x̃∈ C^G such that ϕ(x̃)≤ϕ(x).
Take x ∈ C.
If x ∈ C^G, then there is nothing to show.
Otherwise, there exists a directed edge (x_i, x_j) ∈ E such that x_i=1, x_j=0.
Let F be a set of all directed edges (x_i, x_j) ∈ E such that x_i=1, x_j=0.
Take some (x_i, x_j) ∈ F and define x'∈ B_n by
x_k' = 0 if k=i, 1 if k=j, and x_k if k ≠ i,j.
Then, x' satisfies all constraints by the inequalities w_k,i ≥ w_k,j.
Thus, we have x' ∈ C.
Furthermore, we have
ϕ(x') - ϕ(x) = ∑_k=1, k≠i,j^n (a_j,k - a_i,k) x_k + a_j,j - a_i,i ≤ S_i,j ≤ 0.
Therefore, we may replace x by x'.
We claim that repetition of this replacement terminates within a finite number of iterations.
To show this, we observe that the subgraph G_F=(V,F) of G=(V,E) is acyclic.
With each replacement of x, the number of paths in G_F decreases by at least 1 since G_F is acyclic.
Thus, the number of repetitions cannot exceed the number of paths in G_F, which is finite.
After the repetition terminates, we obtain G_F with no edges.
This implies that F is empty, so we conclude that x ∈ C^G.
§.§ Proof of Theorem <ref>
We provide a proof of Theorem <ref>.
For ease of notation, we define a_i ≔ a_i,i and c^+ ≔ max{0, c} for a real number c ∈ ℝ.
We first show a lemma used in the proof.
Let l be a positive integer and (a_i,j)_i,j=1,⋯,l∈ℝ^l × l be a symmetric matrix.
Assume that the inequality a_i,k ≥ a_i+1,k holds for any i and k=1,2,⋯,l with k ≠ i, i+1.
Then, for any i, we have the chain of inequalities
a_1,i ≥ ⋯ ≥ a_i-1,i ≥ a_i+1,i ≥ ⋯ ≥ a_l,i.
Take any i∈{1,⋯,l}.
From the assumption, we have
a_1,i≥⋯≥ a_i-1,i
and
a_i+1,i≥⋯≥ a_l,i.
Moreover, by symmetry of (a_i,j) and the assumption, we have
a_i-1,i = a_i,i-1≥ a_i+1,i-1
= a_i-1, i+1≥ a_i,i+1
= a_i+1, i.
Thus, we have a chain of inequality as desired.
[= Theorem <ref>]
Let Q∈^n × n be a square real matrix and x=(x_1, ⋯, x_n) be a vector of binary variables.
Then, a graph G=(V, E) with a node set V={x_1, ⋯, x_n} and an edge set E obtained by running Algorithm <ref> with Q given as the input is a
directed acyclic graph.
Moreover, G is valid with respect to minimization of a quadratic function ϕ(x) = x^⊤ Q x.
To show that G is acyclic, we first prove that a cycle contained in G must consists of symmetric variables.
Assume that edges (x_i_1, x_i_2), ⋯, (x_i_l-1, x_i_l), (x_i_l, x_i_1) ∈ E form a cycle.
Since each edge satisfies Eq. (<ref>), we have a_i_1≥ a_i_2≥⋯≥ a_i_l≥ a_i_1.
Therefore, equality a_i_1 = a_i_2 = ⋯ = a_i_l holds.
Furthermore, again by Eq. (<ref>), we have (a_{i_{j+1}, k} - a_{i_j, k})^+ = 0 for each j=1,⋯,l and any k=1,⋯,n with k ≠ i_j, i_{j+1}.
Here, we set i_{l+1} ≔ i_1.
Therefore, for k ∈ {1,⋯,n} ∖ {i_1, ⋯, i_l}, we have a_{i_1, k} ≥ a_{i_2, k} ≥ ⋯ ≥ a_{i_l, k} ≥ a_{i_1, k}, which then must be an equality a_{i_1, k} = a_{i_2, k} = ⋯ = a_{i_l, k}.
For k ∈{i_1, ⋯, i_l}, we take t such that i_t =k.
Applying Lemma <ref> with suitable reindexing, we get
a_i_1, k≥⋯≥ a_i_t-1, k≥ a_i_t+1, k≥⋯≥ a_i_l, k≥ a_i_1, k,
which must be equality.
Thus, ϕ(x) is symmetric with respect to x_i_1, ⋯, x_i_l.
In Algorithm <ref>, if ϕ(x) is symmetric with respect to x_i and x_j and i<j, then (x_i, x_j) ∈ E and (x_j, x_i) ∉ E are ensured by line <ref>.
Thus, G is acyclic.
Note that this proof explains that Algorithm <ref> outputs a total order over symmetric variables x_i_1, ⋯, x_i_l described in Example <ref>.
Validity of G directly follows from Theorem <ref>, since the edge set E consists of edges satisfying Eq. (<ref>).
§ COMBINED ALGORITHM
§ ADDITIONAL EXPERIMENTS
§.§ Minor Embedding to King's Graph
We repeat the first and second experiments from the main text, setting the target hardware graph to a King's graph, which is adopted in the current commercial version of the Hitachi CMOS Annealer <cit.>.
Vertices in a King's graph are arranged on a rectangular grid and connected to the vertices in horizontally, vertically, or diagonally adjacent positions (see the sketch below).
For a square King's graph KG_L,L of size L × L, a clique embedding of a complete graph of size L+1 is known <cit.>.
Since the Ising machine (AE) in our experiments accepts QUBO problems of size up to 65536, we set the size of the King's graph to 256 × 256 = 65536 and use n=257 to generate synthetic QUBO instances.
The experimental setting and metrics are the same as in the main text except the target graph and n.
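For reference, a King's graph of this form can be generated with networkx as in the sketch below (our own illustration; the function name is ours). The resulting graph can then be handed to the minor embedding search as the target graph.

```python
import networkx as nx

def king_graph(L):
    """L x L King's graph: grid edges plus both diagonals in every unit cell."""
    G = nx.grid_2d_graph(L, L)                  # horizontal and vertical neighbours
    for r in range(L - 1):
        for c in range(L - 1):
            G.add_edge((r, c), (r + 1, c + 1))  # "\" diagonal
            G.add_edge((r + 1, c), (r, c + 1))  # "/" diagonal
    return G

KG = king_graph(256)   # 256 x 256 = 65536 vertices, matching the AE problem size
```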
Table <ref> shows the results for the evaluation of the effect of the proposed method on minor embedding.
In contrast to the cases of P_16 and Z_15 in the main text, the number of auxiliary variables and the maximum chain length are not reduced, even for p=1.5 and p=2.0.
This is probably because the minorminer algorithm is not well suited to finding a good minor embedding on the King's graph.
As we have mentioned in Section <ref> in the main text, the performance of the minor embedding search can have a significant impact on this experiment.
We hypothesize that an improvement in the quality of minor embedding would be observed for p=1.5 and p=2.0 if a search algorithm suited to the King's graph, such as probabilistic-swap-shift-annealing (PSSA) <cit.>, were used.
However, since the other experimental results suggest that the performance improvement of the Ising machine in our experiments might not be related to the quality of the minor embedding, we do not delve into further experiments with other search algorithms.
Table <ref> shows the performance of the Ising machine on the original and minor-embedded synthetic QUBO problems.
In this experiment, we observed that several chains in the solutions of the embedded problems are broken, so we adopt a majority-decision heuristic to fix the chains (sketched below) when computing the energies for the embedded problems.
We observed similar trends to those in the main text;
the energies for the baselines are much higher than those for the original QUBO problems, and the linearization substantially mitigates the degradation.
Note that since the minor embeddings for the linearized problems are almost the same as those for the baselines (from Table <ref>), the mitigation apparently comes from factors other than the quality of the minor embedding.
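The majority-decision heuristic mentioned above can be sketched as follows; this is our own minimal illustration, and the data layout and tie-breaking rule are assumptions rather than a description of the actual implementation.

```python
def unembed_majority(physical_solution, chains):
    """Map an embedded solution back to logical variables.
    physical_solution: dict physical variable -> 0/1,
    chains: dict logical variable -> list of physical variables.
    Broken chains are repaired by majority vote (ties rounded up here)."""
    logical = {}
    for var, chain in chains.items():
        votes = sum(physical_solution[p] for p in chain)
        logical[var] = int(votes * 2 >= len(chain))
    return logical
```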
Fig. <ref> shows the result of the second experiment for the King's graph KG_256,256.
In contrast to the cases of P_16 and Z_15, we do not observe a significant improvement in embeddability for KG_256,256, as all instances of size 260, except for two with p=2.0, cannot be embedded into KG_256,256 even with linearization.
We again hypothesize that this is due to the low performance of the minor embedding search, and that using other algorithms such as PSSA would improve the embeddability of the linearized problems.
§.§ Ising Machine Performance with Linearization for Fixed Embedding
The experimental results (Tables <ref> and <ref>) in the main text suggest that
the performance improvement of the Ising machine on the minor-embedded linearized problems might not be related to the quality of the minor embedding.
To confirm this, we evaluate the Ising machine performance on linearized problems embedded with a fixed embedding.
If we see a similar performance improvement over the baselines in this setting, it serves as evidence that the quality of the minor embedding is not what drives the solving performance.
As in the main text, we sample 10 solutions of the linearized QUBO problems embedded with the clique embedding for each combination of p∈{1.5, 2.0} (for which the quality improvement of minor embedding was observed) and n∈{180, 232}, and report the averaged energies.
Table <ref> summarizes the results.
We re-print the results from the main text in the “Without Embedding” and “With Embedding” columns for comparison.
The energies for the linearized problems with clique embedding achieve the same values as those of the original problems without embedding.
The performance improvement is similar to that for the linearized problems with the minor embedding obtained by the minorminer algorithm.
Therefore, in our experimental settings, the performance improvement of the Ising machine does not seem to come from differences in the minor embedding mapping.
§.§ Full Results on MKPs
Tables <ref> and <ref> present the full results on MKPs, including the averaged optimality gap (shown in the “Avg.” column).
The results for the number of feasible solutions and the best optimality gap are re-printed for convenience.
We observe that the averaged gap improves in a similar way to the best gap.
§.§ Results on MKPs with Tuning Penalty Coefficient
For a fairer comparison, we evaluate the proposed method on MKPs against stronger baselines obtained by tuning the penalty coefficients.
When applying an Ising machine to constrained optimization problems, feasible solutions are not necessarily obtained.
It is desirable to obtain feasible solutions with sufficiently high probability while achieving high optimization scores.
The performance of Ising machines in this respect mostly depends on the values of the penalty coefficients for the constraints <cit.>.
On MKPs, λ in Eq. (<ref>) should be taken sufficiently large for constraints on solutions to be satisfied.
On the other hand, large λ tends to degrade the objective score.
To tune baselines with respect to this aspect, we take one instance from each combination of (m,n, α) and apply the Ising machine to it 10 times for each λ∈{1, 0.2, 0.05, 0.01, 0.002, 0.0005, 0.0001}.
We report the number of feasible solutions and the best optimization score ∑_i v_i x_i over the feasible solutions (the sweep is sketched below).
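Schematically, the tuning loop looks like the following sketch, where `build_mkp_qubo` and `solve_qubo` stand in for the QUBO construction of Eq. (<ref>) and the Ising machine call; both names, as well as the explicit feasibility check, are our own placeholders.

```python
import numpy as np

# values: (n,) item values, weights: (m, n) constraint weights, capacities: (m,)
lambdas = [1, 0.2, 0.05, 0.01, 0.002, 0.0005, 0.0001]
results = {}
for lam in lambdas:
    feasible, scores = 0, []
    for _ in range(10):                                    # 10 samples per penalty value
        x = solve_qubo(build_mkp_qubo(values, weights, capacities, lam))
        if np.all(weights @ x <= capacities):              # all knapsack constraints satisfied
            feasible += 1
            scores.append(values @ x)                      # objective sum_i v_i x_i
    results[lam] = (feasible, max(scores) if scores else None)
```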
The tuning results are shown in Fig. <ref>.
As expected, small λ leads to a decrease in the number of feasible solutions and to higher optimization scores.
When λ is too small to obtain a sufficient number of feasible solutions, the variance of the scores increases and the best scores tend to decrease.
For Method 2, there are several cases where the scores decrease while the number of feasible solutions does not.
This is probably due to the specification of the function, which is undisclosed.
We repeat the third experiment from the main text, setting λ to the value that achieved the highest best score in the tuning experiment.
Note that the proposed method is evaluated with the same value of λ as the tuned baselines.
The results are shown in Tables <ref> and <ref>.
We observe that the proposed method improves the averaged and best gaps in most cases where |E| is larger than one hundred, even against the strongly tuned baselines.
One exception is the case of m=5, n=500 and α=0.75 for Method 2, where the solutions with linearization have averaged and best gaps more than 1.3x those of the baseline.
The cause of this phenomenon is difficult to analyze, as the specification of the function is undisclosed.
Overall, the proposed method generally improves the performance of the Ising machine on MKPs, even in the well-tuned setting.
|
http://arxiv.org/abs/2307.04776v1 | 20230709142916 | A new Machine Learning-based method for identification of time-correlated events at tagged photon facilities | [
"V. Sokhoyan",
"E. Mornacchi"
] | physics.data-an | [
"physics.data-an",
"nucl-ex"
] |
A new Machine Learning-based method for identification of time-correlated events at tagged photon facilities
V. Sokhoyan and E. Mornacchi
=============================================================================================================
§ INTRODUCTION
The accuracy of the measurements performed at electron accelerators with bremsstrahlung-based tagged photons is limited, among other factors, by the presence of the time-uncorrelated background in the tagging system at high rates, complicating unambiguous matching of the initial electron and the bremsstrahlung photon inducing the reaction in the target. The widely used sampling and subtraction of time-uncorrelated events leads to a reduction in the measurement accuracy, depending on the degree of contamination of the data sample with time-uncorrelated background.
The method presented in this paper allows for the identification of the time-correlated events without subtraction of the random (uncorrelated) background.
To illustrate the application of the method, we have chosen the reaction γ p → p π^0 for a selected photon beam energy range of 240-260 MeV, where no other reactions are expected to contribute to the sample with two photons and one proton candidate in the final state.
In addition, kinematic cuts were applied for reaction identification and background suppression (see section <ref>).
After the initial event selection, the time-correlated (signal) events in the experimental data are expected to have the properties of a simulated γ p → p π^0 reaction, under the assumption that the simulation reasonably describes the data.
Afterwards, the samples obtained from Monte Carlo simulation of the signal and experimental measurement of the time-uncorrelated (random) background are used to create Machine Learning (ML) models, which are applied to distinguish between time-correlated and uncorrelated events in the experimental data.
This paper is organized as follows. Section <ref> introduces the experimental setup and the conventional method for the subtraction of the random background coincidences in the tagging system.
Section <ref> explains the concept of the new method, while the application of the ML models, including the comparison of the new method with the conventional subtraction approach, is discussed in section <ref>.
The outcome of this work is summarized in section <ref>.
§ SELECTION OF TIME-CORRELATED HITS WITH A CONVENTIONAL APPROACH
In this paper, the application of a new method for the identification of time-correlated events with ML-based models is illustrated for experimental data obtained with the Crystal Ball/TAPS setup at the Mainz Microtron (MAMI) <cit.>. Figure <ref> shows a schematic picture of the Crystal Ball/TAPS experiment. The photon beam is produced via bremsstrahlung by the electron beam from MAMI on a thin radiator. The outgoing electrons are bent by a dipole magnet and detected by the tagger spectrometer <cit.>, while the remaining part of the electron beam is directed to the electron beam dump. The energy of the bremsstrahlung photons is determined as the difference between the energy of the electron beam and the energy of the deflected electrons, measured by the tagger. The current setup of the tagger includes 408 channels, divided into 51 modules. Each channel is composed of a plastic scintillator (EJ200) rod, 30 mm long with a 6 × 6 mm base, read out by a 6 × 6 mm SensL-SiPM with a bias voltage of 25 mV. The signal is then guided outside the region with intense radiation by long Ethernet cables and is fed to a Constant Fraction Discriminator (CFD). This configuration ensures a single-counter time resolution of δ t = 0.1 ns. The bremsstrahlung photons can be tagged at 4.3% to 93.0% of the incoming electron beam energy E_e. The energy resolution relative to E_e varies over the energy spectrum, from low to high photon energies, from 0.4% to 0.11%, and the absolute energy resolution varies from 3.47 MeV to 1.03 MeV respectively <cit.>. The resulting photon beam (after collimation) impinges on a 10 cm-long LH_2 target and the particles in the final state are detected by the nearly 4π Crystal Ball/TAPS detector system consisting of the Crystal Ball (CB) <cit.> and TAPS <cit.> calorimeters (and other charged particle detectors), covering ≈ 97% of the solid angle.
Additional information on the apparatus can be found in refs. <cit.>.
Due to the relatively high electron beam current, the number of hits in the tagger associated with the same event in the CB/TAPS setup can reach large numbers (typically up to ≈ 140 electrons in the same trigger window), making it difficult to correlate the event with the correct hit in the tagger.
In a conventional approach, this is achieved by calculating the time difference Δ t between each hit in the tagger and the event in the CB/TAPS systems.
This allows for the selection of the time-coincident events with a subsequent sampling and subtraction of the remaining time-uncorrelated background.
The left panel of figure <ref> shows a sample of the time difference Δ t distribution between the reconstructed π^0 in the calorimeters and each hit in the tagger. The so-called prompt peak around 0 ns corresponding to the coincident hits is well visible. The flat background outside and below the peak instead corresponds to the time-uncorrelated events. The spectrum is generated for the reaction γ p → p π^0 at a photon beam energy of 240 – 260 MeV after the application of the kinematic cuts described in subsection <ref>, used to reduce the initial background contamination of the data.
The prompt time-correlated events are selected in the peak region within an interval Δ t ∈ [-2,2] ns (highlighted in blue in the left panel of figure <ref>). To remove the uncorrelated background in the prompt region, the random contribution is modeled by selecting two time windows, one on the left Δ t_l∈ [-200,-50] ns, and one on the right Δ t_r∈ [50,600] ns of the peak (both highlighted in gray in the left panel of figure <ref>).
The obtained background sample is normalized according to the width of the selected time windows, and then subtracted from the prompt peak sample.
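A minimal sketch of this procedure (our own illustration; the window boundaries are taken from the text) assigns each tagger hit a weight of +1 in the prompt window and a negative, width-normalized weight in the random windows, so that filling any spectrum with these weights directly yields the background-subtracted distribution.

```python
PROMPT = (-2.0, 2.0)                                  # ns
RANDOM = [(-200.0, -50.0), (50.0, 600.0)]             # ns

def subtraction_weight(dt):
    """Weight of a tagger hit with time difference dt (ns) to the CB/TAPS event."""
    random_width = sum(hi - lo for lo, hi in RANDOM)
    scale = (PROMPT[1] - PROMPT[0]) / random_width    # normalize random sample to prompt width
    if PROMPT[0] <= dt <= PROMPT[1]:
        return 1.0
    if any(lo <= dt <= hi for lo, hi in RANDOM):
        return -scale
    return 0.0
```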
To illustrate the performance of this method, the missing mass distribution was calculated as
M_miss = √((E_γ + m_p - E_γ')^2 - (k⃗ - k⃗')^2 ),
where k = (E_γ, k⃗) and k' = (E_γ', k⃗'⃗) are the incoming and scattered photon four-momenta, respectively, and m_p is the target proton mass at rest.
For a γ p → p π^0 event, M_miss is expected to be in agreement with the proton mass. The obtained distribution is shown in the right panel of figure <ref>, for the prompt and the random sample in blue and gray, respectively, while the distribution obtained after subtracting the uncorrelated events is shown in red. As expected, the shoulder on the left of the total distribution (blue) is well described by the uncorrelated background, resulting in a final distribution well peaked around the proton mass after random background subtraction.
Although this method has been widely used with reliable performance at photon tagging facilities for the past few decades, the need to subtract the random background limits the achievable measurement accuracy, especially for low-energy measurements of processes with small cross sections. In addition, the information about all correlations between variables for a single event is lost in the subtraction process (unless the subtraction is performed in multiple dimensions). Thus, a new method allowing for unambiguous matching of the time-correlated tagger hit with the event in the calorimeter without the need for sampling and subtraction of the time-uncorrelated background is highly desirable, especially now that we are entering the so-called precision era of nuclear and hadron physics. This paper presents a new ML-based multivariate analysis method, applicable for the selection of time-correlated hits without subtraction of the time-uncorrelated background.
§ NEW ANALYSIS CONCEPT
The goal of this work is to distinguish between signal and background events in the region of the prompt peak, shown in the left panel of figure <ref>.
In the newly developed method, ML-based models are created and trained using a simulated data sample with a signature of the reaction of interest in combination with the experimentally measured uncorrelated background[The simulation of the random background is extremely challenging due to multiple non-linear effects in the experiment.]. The prerequisite for using this method is that the sample of the (potentially) time-correlated events shows patterns similar to the simulated sample. This can be verified, e.g., by comparison of the simulated distributions with the distributions obtained from the experimental data after conventional random background subtraction described in section <ref>. At the same time, the patterns in the simulated data should be significantly different from the patterns in the random background sample. The selection of input variables (input features for ML models) meeting these criteria allows us to create ML models, trained to distinguish between signal and background events. The performance of the created ML models is tested at first on a data sample consisting of simulated signal events and experimentally measured random background by comparing the initial and predicted labels for both classes of events. After the initial tests, the created ML model is used for the separation of experimentally measured signal and background events in the region of the prompt peak of the time spectrum (discussed in section <ref> and shown in figure <ref>). Finally, the performance of the ML-based models is compared with the conventional random background subtraction method.
§ APPLICATION OF THE MACHINE LEARNING-BASED METHOD
This section illustrates the application of the developed method for the reaction γ p → p π^0 measured with the Crystal Ball/TAPS setup at MAMI for the incoming photon energy range of 240 – 260 MeV. The simulated Monte Carlo sample, generated with the GEANT4-based <cit.> package A2Geant4 <cit.> used to create ML models, consists of 10^6 γ p → p π^0 events at E_γ = 240 - 260 MeV[At these energies, there is a significant amount of time-uncorrelated background remaining after the application of routinely used kinematic cuts such as the invariant mass cut, the coplanarity cut, and the opening angle cut.]. The second component used to create ML-based models is the experimentally measured random background sample, described in section <ref>.
§.§ Preparation of the input data
To select the γ p → p π^0 reaction, the following cuts were applied both to the simulated signal and the measured random background. At first, the events with two neutral particles (photon candidates) and one charged particle (proton candidate) were retained. The invariant mass of the two neutral particles had to agree with the nominal π^0 mass within 15 MeV. The difference between the azimuthal angles of the charged particle (proton candidate) and the π^0 (reconstructed from the neutral particles) had to fulfill the condition |ϕ_p - ϕ_π0| = 180^∘± 10^∘. The measured polar angle of the proton candidate was matched to the polar angle of the "missing particle" (calculated from the photons in the final and initial states) within 5^∘. Under these conditions, the events with the signature of the reaction γ p → p π^0 can be clearly identified.
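For illustration, these cuts can be expressed as boolean masks over per-event arrays, as in the sketch below (our own simplified version; the array names are placeholders, and the azimuthal wrap-around at 0/360° is ignored for brevity).

```python
import numpy as np

M_PI0 = 134.977   # MeV

def kinematic_cuts(m_gg, phi_p, phi_pi0, theta_p, theta_miss):
    """Return a boolean mask selecting gamma p -> p pi0 candidates (angles in degrees)."""
    invariant_mass = np.abs(m_gg - M_PI0) < 15.0                   # pi0 candidates
    coplanarity = np.abs(np.abs(phi_p - phi_pi0) - 180.0) < 10.0   # back-to-back in phi
    polar_match = np.abs(theta_p - theta_miss) < 5.0               # proton vs. missing particle
    return invariant_mass & coplanarity & polar_match
```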
After the event selection described above, an ML model was built using five selected input variables (features). Figure <ref> shows the comparison between simulated γ p → p π^0 sample (green) and experimentally measured time-uncorrelated background (gray) at E_γ = 240 - 260 MeV. These signal and background samples were used to build an ML-based model trained to distinguish between these two classes of events. In addition to the missing mass, the input variables (features) are the polar angle of π^0 (θ_π^0), the z-component of the missing momentum (P_z (miss)), the difference between the energy sum measured for the initial and final states (E_i - E_f), and the invariant mass of the pπ^0 pair (M_pπ^0).
§.§ Application of ensemble learning with boosted decision trees using CatBoost
The ML algorithm used in this work relies on ensemble learning with gradient boosting for decision trees, where the errors are reduced at each learning step based on the previous step. Generally, gradient boosting algorithms are well suited for solving classification and regression tasks with tabular data. We used the package CatBoost, which is one of the state-of-the-art gradient boosting algorithms, and utilizes symmetric decision trees <cit.>. The CatBoost-based models were created using the data sample shown in figure <ref> as an input.
To introduce a realistic ratio for the number of correlated and uncorrelated events in the region of the prompt peak, the ratio of simulated events to random background was chosen to match the ratio of random background to signal events in the prompt peak region (determined from the spectrum shown in figure <ref>). The simulated sample and the measured random background were mixed according to this ratio. Then, the data were randomly reshuffled and split in two parts. The first part containing 2/3 of the data was used to train the ML models, while the remaining 1/3 was used to test the model performance. The models showed high performance with the default settings provided by CatBoost, and were additionally improved by tuning the hyperparameters of the model [The following hyperparameters were tuned with the so-called Random Search method: number of iterations, learning rate, bagging temperature, random strength, and L2 regularization (for details on hyperparameters see ref. <cit.>).].
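A minimal training sketch with CatBoost could look as follows; the feature matrix X (five input features per event) and labels y (1 for simulated signal, 0 for measured random background) are assumed to be prepared beforehand, and the hyperparameter values shown are illustrative rather than the tuned ones.

```python
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3,
                                                    shuffle=True, random_state=0)

model = CatBoostClassifier(iterations=1000,        # tuned via random search in practice
                           learning_rate=0.1,
                           l2_leaf_reg=3.0,
                           random_strength=1.0,
                           bagging_temperature=1.0,
                           verbose=False)
model.fit(X_train, y_train, eval_set=(X_test, y_test))
predicted_labels = model.predict(X_test)           # 1 = time-correlated signal candidate
```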
The created models were at first used to separate the events simulated with GEANT4 from the random background.
Figure <ref> shows the correlations of the polar angle of the π^0 vs. the missing mass (top) and the invariant mass of the proton and pion (bottom), both for the training sample (left panels) and for the evaluation sample as predicted by the CatBoost-based model (right panels). Part of the background can be linearly separated; however, a significant overlap between the two data sets is also present. The shapes of the time-correlated signal (green area in figure <ref>) are generally well reproduced by the model. The corresponding values of the precision (number of true positives divided by the sum of true positives and false positives) and recall (number of true positives divided by the sum of true positives and false negatives), both for the simulated sample and the measured random background, are summarized in table <ref> for different prompt peak cuts, corresponding to different levels of background contamination. The precision for the signal events varies from 97.5%, for a background-to-signal ratio of ≈ 48%, down to 95% when the amount of background increases to 111.5% of the number of signal events. As described above, the expected ratios of the signal and background samples were determined for the prompt peak region of the time spectrum (see figure <ref>). In this case, the cut of 0 ± 2 ns allowed the inclusion of most of the events in the prompt peak, while the broader cuts were used to test the performance of the models in the presence of different amounts of background[For each of the time cuts, a separate model was created, in order to take the ratio of background and signal into account in each case.].
Finally, the trained and evaluated model was used to distinguish between the time-correlated signal and the uncorrelated background in the prompt peak (see figure <ref>). The performance of the CatBoost-based model is compared with the conventional random background subtraction method in figure <ref> for the five variables used as input features for the model (shown in figure <ref>) and for the total energy of the neutral pion E_π^0, which was not used as an input for the model. The new and the conventional background handling methods show consistent results for most of the bins. The differences observed in the region corresponding to low missing mass (appearing in the other variables as well) are mainly due to the remaining differences between the simulation (used to build the model) and the experimental data, and can be reduced by further fine-tuning of the simulation (depending on the goals of the analysis). In the missing mass region above 925 MeV/c^2, the integrals of the distributions obtained with both methods agree at the sub-percent level.
It is important to note that the developed method is not only applicable for the variables used as input features to the model, but can also be used to predict distributions of other variables, which are correlated with the input features used to build the models, as can be seen
for the total energy of the neutral pion E_π^0 in the lower right panel of figure <ref>. Moreover, since the new method does not require subtraction of the random background, the resulting uncertainties are smaller compared to the uncertainties for the conventional method, even though the magnitude of the reduction depends strongly on the amount of background in the corresponding analysis.
In addition, the performance of the developed method and conventional background subtraction were compared at different levels of background contamination.
Figure <ref> shows the comparison between missing mass spectra corresponding to different prompt peak cut widths, resulting in different background contamination of the data. For each of these cases, corresponding to different prompt peak cuts, CatBoost-based models were built separately, taking into account the expected ratio of the correlated signal to uncorrelated background (dependent on the width of the selected time window). Generally, the comparison of the results obtained with the new ML-based approach with the conventional random background subtraction indicates stable model performance at significantly different levels of random background.
The differences at low missing mass values, as mentioned above, can be additionally suppressed (depending on the goals of the analysis) by further reducing the differences between the GEANT simulation and the experimental data (used to build the ML-based models). The integrals shown in table <ref> for the new and conventional analysis methods are in agreement within ≈ 1% for the missing mass range of 925 - 960 MeV/c^2.
§ SUMMARY
A newly developed Machine Learning-based method for the selection of the time-correlated signal at tagged photon facilities is presented. The application of this method makes it possible to improve the precision of experiments where the conventional sampling and subtraction of the uncorrelated background restricts the accuracy of the measurements. Moreover, the developed method preserves the information about the correlations of the variables for individual events, in contrast to the standard subtraction method. The new method shows stable performance in handling data with different levels of background contamination. One of the future applications of this method will be the analysis of the Compton scattering data taken with hydrogen and light nuclear targets in order to improve the accuracy of the extraction of the scalar and spin polarizabilities of the nucleons.
The data used in this work were taken by the A2 Collaboration with
the Crystal Ball/TAPS setup at MAMI in March 2018.
99
bib:Jankowiak:2006
A. Jankowiak, The Mainz Microtron MAMI —Past and future, Eur. Phys. J. A 28S1, 149-160 (2006).
bib:Kaiser:2008
K. H. Kaiser et al., The 1.5 GeV harmonic double-sided microtron at Mainz University, Nucl. Instrum. Meth. A 593, 159-170 (2008).
bib:Mornacchi:2021
E. Mornacchi, Ph.D. Thesis, University of Mainz,
http://doi.org/10.25358/openscience-6051doi:10.25358/openscience-6051 (2021).
bib:McGeorge:2007
J. C. McGeorge et al., Upgrade of the Glasgow photon tagging spectrometer for Mainz MAMI-C, Eur. Phys. J. A 37, 129-137 (2008).
bib:Nefkens:1995
B. M. K. Nefkens, The Crystal Ball Technical Report 1, UCLA (1995).
bib:Gabler:1994
A. Gabler et al., Response of TAPS to monochromatic photons with energies between 45 MeV and 790 MeV, Nucl. Instrum. Meth. A 346, 168–176 (1994).
bib:Novotny:1998
R. Novotny, Performance of the BaF-2 calorimeter TAPS, Nucl. Phys. B Proc. Suppl. 61, 137–142 (1998).
bib:Unverzagt:2008
M. Unverzagt et al., Determination of the Dalitz plot parameter α for the decay η→ 3π^0 with the Crystal Ball at MAMI-B, Eur. Phys. J. A 39, 169-177 (2009).
bib:Geant4
S. Agostinelli et al., Geant4 - A Simulation Toolkit, Nucl. Instrum. Meth. A 506, 250-303 (2003).
bib:A2Geant4
A2Geant4 - Official GitHub repository,
https://github.com/A2-Collaboration/A2Geant4github.com/A2-Collaboration/A2Geant4.
bib:catboost
CatBoost - open-source gradient boosting library,
https://catboost.ai/https://catboost.ai/.
bib:catboost:arx
Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika Dorogush, Andrey Gulin,
CatBoost: unbiased boosting with categorical features, arXiv:1706.09516 (2017).
|
http://arxiv.org/abs/2307.04381v1 | 20230710072646 | ADAQ-SYM: Automated Symmetry Analysis of Defect Orbitals | [
"William Stenlund",
"Joel Davidsson",
"Viktor Ivády",
"Rickard Armiento",
"Igor A. Abrikosov"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
[email protected]
Department of Physics, Chemistry and Biology, Linköping
University, Linköping, Sweden
Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden
Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden
Department of Physics of Complex Systems, Eötvös Loránd University, Egyetem tér 1-3, H-1053 Budapest, Hungary
Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden
MTA–ELTE Lendület "Momentum" NewQubit Research Group, Pázmány Péter, Sétány 1/A, 1117 Budapest, Hungary
Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden
Quantum technologies like single photon emitters and qubits can be enabled by point defects in semiconductors, with the NV-center in diamond being the most prominent example. There are many different semiconductors, each potentially hosting interesting defects. High-throughput methods and automated workflows become necessary when searching for novel point defects in a large chemical space.
The symmetry properties of the point defect orbitals can yield useful information about the behavior of the system, such as the interaction with polarized light.
We have developed an automated code to perform symmetry analysis of point defect orbitals obtained by plane-wave density functional theory simulations.
The code, named ADAQ-SYM, calculates the characters for each orbital, finds the irreducible representations, and uses selection rules to find which optical transitions are allowed.
The capabilities of ADAQ-SYM are demonstrated on several defects in diamond and 4H-SiC.
The symmetry analysis explains the different zero phonon line (ZPL) polarization of the hk and kh divacancies in 4H-SiC.
ADAQ-SYM is automated, making it suitable for high-throughput screening of point defects.
ADAQ-SYM: Automated Symmetry Analysis of Defect Orbitals
Igor A. Abrikosov
August 12, 2023
========================================================
§ INTRODUCTION
Point defects in semiconductors can provide a platform for solid state quantum technology, with applications such as qubits<cit.>, sensors <cit.> and single photon emitters <cit.>. One significant benefit of quantum applications made with solid state point defects is room temperature operation <cit.>.
Theoretical calculations have been proven useful for identification of potentially interesting defects in wide-band gap semiconductors and quantitative estimations of their properties <cit.>.
Indeed, first principles methods based on density functional theory (DFT) can simulate the electronic structures and predict multiple properties <cit.>.
Each semiconductor material may host a multitude of intrinsic and extrinsic point defects.
To probe the large combinatorially complex chemical space in an efficient manner, high-throughput workflows have been developed <cit.> to simulate thousands of defect combinations, calculate relevant properties and store the results into a searchable database <cit.>.
Automatic Defect Analysis and Qualification (ADAQ) <cit.> is one such high-throughput workflow.
There are many relevant properties to study, such as how the defect interacts with light.
By analyzing symmetry of the defect orbitals and selection rules, one can deduce polarization of incoming and outgoing light.
With symmetry analysis on the theoretical side, polarization specific PL measurements can be more accurately matched with simulated defects and orientation for single defects can be identified <cit.>.
Before analyzing the orbitals, the point group symmetry of the crystal hosting the defect needs to be found. There are two broadly used codes for this, spglib <cit.> and AFLOW-SYM <cit.>. We use AFLOW-SYM because of its reported lowest mismatch when finding symmetry for known crystals <cit.>.
In addition, there are several codes that calculate irreducible representations of bands but mostly with focus on topological insulators <cit.>.
Within quantum chemistry, one method to quantitatively analyse the symmetry of molecular orbitals is with continuous symmetry measures (CSM) <cit.>, which provide a numerical measure of how close molecular orbitals are to certain irreducible representations.
Defect orbitals in the band gap are localized much like molecular orbitals, yet methods similar to CSM have not been applied to point defects in solid host materials. Presently, a common method of symmetry analysis of defect orbitals is visually inspecting an isosurface of the wave function and observing how it behaves under the symmetry transformations; this may be prone to human error, especially for high-symmetry structures, and is not applicable in high-throughput workflows.
Another method of analyzing the symmetry is to describe the defect orbitals as a linear combination of atomic orbitals and to carry out the group theory manually <cit.>. This method may be applicable in high-throughput settings; however, since the structure has already been relaxed with plane waves, we focus on analyzing these directly without projecting onto atomic orbitals. By omitting the projection we can keep all information present in the plane waves.
This paper presents a quantitative symmetry analysis method for defect orbitals in solid host materials simulated with a plane wave basis set and the selection rules of optical transitions between defect orbitals. We introduce ADAQ-SYM, a Python implementation of this method. The code is fast and automated, requiring little user input, making it applicable as an analysis tool for high-throughput simulations of defects.
Section <ref> presents an introduction to the group theory, specifically applied to defects.
Section <ref> describes the ADAQ-SYM algorithm that performs the symmetry analysis, and Appendix <ref> deals with how the code is constructed and what approximations are used. Computational details of the simulations in this paper are described in Section <ref>.
Section <ref> presents the results from symmetry analyses of several known defects; nitrogen vacancy (NV) center and silicon vacancy (SiV) center in diamond and the silicon vacancy (V_Si) and several divacancy (V_SiV_C) configurations in 4H-SiC. Section <ref> discusses these results and Appendix <ref> presents the recommended best practices when using the software.
§ THEORETICAL BACKGROUND
In this paper we consider point groups. For convenience we summarize basic concepts following Ref. <cit.>.
We refer to symmetry transformations as unitary transformations in three dimensional space which have at least one fixed point, meaning no stretching or translation. In the Schönflies notation, these transformations are:
* Identity, E.
* Rotation of 2π/n or 2π m/n, where n and m are integers, C_n or C_n^(m).
* Reflection in a plane, σ_x. x = h, v or d, denoting reflection in a horizontal, vertical or diagonal plane.
* Inversion, i.
* Improper rotation of 2π/n or 2π m/n, where n and m are integers, which is a rotation 2π/n or 2π m/n followed by a reflection in a horizontal plane, S_n or S_n^(m).
A set of these symmetry transformations, if they have a common fixed point and all leave the system or crystal structure invariant, constitutes the point group of that system or crystal structure.
The axis around which the rotation with the largest n occurs is called the principal axis of that point group.
The point groups relevant to solid materials are the 32 crystallographic point groups, of which the following four are used in this paper, C_1h, C_2h, C_3v and D_3d.
In brief, character describes how a physical object transforms under a symmetry transformation, (1 = symmetric, -1 = anti-symmetric, 0 = orthogonal), and representation Γ describes how an object transforms under the set of symmetry transformations in a point group.
Each point group has a character table which has classes of symmetry transformations on the columns and irreducible representation (IR) on the rows, with the entries in the table being characters. IRs can be seen as basis vectors for representations.
Each point group has an identity representation, which is an IR that is symmetric with respect to all transformations of that point group.
Character tables can have additional columns with rotations and polynomial functions, showing which IR they transform as.
Appendix <ref> contains the character tables of the point groups used in this paper, these character tables also show how the linear polynomials (x, y and z) transform.
For defects in solids, the point group is determined by the crystal structure, and the symmetry of the orbitals can be described by characters and IRs. Figure <ref> shows a divacancy defect in silicon carbide with the point group C_3v as an example.
Comparing with the character table for C_3v, Table <ref> in Appendix <ref>, one sees that the orbital has the IR a_1.
When defects are simulated with DFT, one obtains (one-electron) orbital wave functions ϕ_i and corresponding eigenvalues ϵ_i.
Optical transitions, where an electron moves from an initial state with orbital i to a final state with orbital f, have an associated transition dipole moment (TDM) μ⃗, which is expressed as:
μ⃗ = ⟨ϕ_f|er⃗|ϕ_i⟩,
where e is the electron charge and r⃗ is the position operator.
Selection rules can be formulated with group theory <cit.>. For the TDM the following applies:
for an optical transition to be allowed, the representation of the TDM, Γ_μ, must contain the identity representation, where
Γ_μ = Γ_f ⊗Γ_r ⊗Γ_i,
with ⊗ being the direct product, and Γ_r is the IR of the polarization direction of the light, corresponding to the linear functions in the character tables.
§ METHODOLOGY
Figure <ref> shows the symmetry analysis process of ADAQ-SYM. Here, we describe the steps in detail.
First, we perform a DFT simulation on a defect in a semiconductor host material. This produces a relaxed crystal structure and a set of orbital wave functions and their corresponding eigenvalues. These are the main inputs for ADAQ-SYM.
The orbitals to be analyzed are chosen by the user. The electron orbitals associated with defects are localized
around the defect, and the inverse participation ratio (IPR) is a good measure of how localized an orbital is <cit.>. The discrete evaluation of the IPR is
χ = ∑_r |ϕ_i(r⃗)|^4/(∑_r |ϕ_i(r⃗)|^2)^2,
and can be used to identify defect orbitals in the band gap, since they have a much higher IPR than the bulk orbitals. There are also defect orbitals in the bands which are hybridized with the delocalized orbitals; their IPR is lower than that of the orbitals in the band gap, but still higher than that of the other orbitals in the bands. We employ the IPR as a tool for identifying defect orbitals in the bands by spotting outliers.
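A minimal sketch of Eq. <ref> for an orbital sampled on a real-space grid (our own illustration, not the ADAQ-SYM code itself) is:

```python
import numpy as np

def ipr(phi):
    """Inverse participation ratio of an orbital; phi is a (complex) array of
    wave-function values on a real-space grid. Large values indicate localization."""
    density = np.abs(phi) ** 2
    return np.sum(density ** 2) / np.sum(density) ** 2
```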
After the careful selection of ab initio data and inputs, ADAQ-SYM is able to perform the symmetry analysis.
Second, the "center of mass" c⃗ of the each orbital is calculated according to
c⃗ = ⟨ϕ(r⃗)|r⃗|ϕ(r⃗)⟩ = ∫ dr^3 ϕ^*(r⃗) r ϕ(r⃗)) = ∑_rϕ^*(r⃗) r ϕ(r⃗).
These centers are used as the fixed points for the symmetry transformations. Orbitals are considered degenerate if the difference in their eigenvalues are less than a threshold. When calculating c for degenerate orbitals, they are considered together and the average center is used.
This method does not consider periodic boundary conditions and necessitates that the defect be in the middle of the unit cell. To mitigate skew of the center of mass, the wave function is sampled in real space, and points with moduli under a certain percentage p of the maximum are set to zero according to
ϕ_trunc(r⃗) = 0 if |ϕ(r⃗)| < p max_r⃗ |ϕ(r⃗)|, and ϕ_trunc(r⃗) = ϕ(r⃗) otherwise.
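A sketch of the truncated center-of-mass calculation (our own illustration; the wave function and the Cartesian grid coordinates are assumed to be given as flat arrays, and the result is normalized by the truncated density for safety) is:

```python
import numpy as np

def orbital_center(phi, coords, p=0.1):
    """'Center of mass' of an orbital. phi: wave-function values on a real-space
    grid (length N); coords: Cartesian coordinates of the grid points, shape (N, 3);
    p: truncation fraction of Eq. <ref> (illustrative default value)."""
    phi_t = np.where(np.abs(phi) < p * np.abs(phi).max(), 0.0, phi)
    density = np.abs(phi_t) ** 2
    return (coords * density[:, None]).sum(axis=0) / density.sum()
```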
Third, the point group and symmetry transformations of the crystal structure are found via existing codes. Each symmetry transformation has an operator Û. To get characters, the overlap of an orbital wave function and its symmetry transformed counterpart, the symmetry operator expectation value (SOEV), is calculated
⟨Û⟩ = ⟨ϕ(r⃗)|Ûϕ(r⃗)⟩ = ∫ dr^3 ϕ^*(r⃗) (Ûϕ(r⃗)) ,
for each orbital and symmetry transformation. The wave function is expanded in a plane wave basis set, with G-vectors within the energy cutoff radius. Therefore, Eq. <ref> can be rewritten to be evaluated by summing over these G-vectors only once; the plane wave expansion is also truncated by reducing the cutoff radius when reading the wave function and renormalizing <cit.>.
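The essence of the G-vector evaluation can be sketched as follows (our own simplified illustration): the coefficients of the transformed orbital are the original coefficients at mapped G-vectors, so the overlap reduces to a single sum. The sketch assumes the fixed point is at the origin, so the extra phase factor needed for a general center is omitted, and M denotes the integer matrix mapping each G-vector of the transformed orbital back onto a G-vector of the original one.

```python
import numpy as np

def soev(gvecs, coeffs, M):
    """Symmetry operator expectation value <phi|U phi> in the plane-wave basis.
    gvecs: integer G-vectors (N, 3) in reciprocal-lattice coordinates;
    coeffs: corresponding plane-wave coefficients; M: integer mapping matrix of
    the operation in the same coordinates (see the assumptions above)."""
    table = {tuple(g): c for g, c in zip(gvecs, coeffs)}
    overlap = 0.0 + 0.0j
    for g, c in table.items():
        overlap += np.conj(c) * table.get(tuple(M @ np.array(g)), 0.0)
    return overlap
```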
Fourth, the character of a conjugacy class is taken to be the mean of the overlaps of operators within that class, and the overlaps of degenerate orbitals are added.
To find the representation of a set of characters, the row of characters is projected on each IR, resulting in how many of each IR the representation contains. Consider an IR Γ, where W⃗_Γ is a vector with the characters of Γ times their order.
For example, for C_3v the vector for a_2 is W⃗_a_2 = (1·1, 1·2, -1·3) = (1,2,-3). Let V⃗ be the vector with a row of characters and h be the order of the point group, then N_Γ is the number of times the IR Γ occurs which is calculated as follows
N_Γ = (W⃗_Γ·V⃗)/h .
For degenerate states the found representation should be an IR with dimension equal to the degeneracy, e.g., a doubly degenerate orbital should correspond to a two-dimensional e representation.
If an IR is not found, the overlap calculation is rerun with the center of another orbital as the fixed point.
The CSM S for the IRs of molecular orbitals <cit.> is used for the defect orbitals and calculated with
S(ϕ, Γ) = 100(1 - N_Γ),
which produces a number between 0 and 100. S(ϕ, Γ)=0 means that the orbital is completely consistent with IR Γ, and S(ϕ, Γ)=100 means that the orbital is completely inconsistent with the IR Γ.
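The reduction of Eq. <ref> and the CSM of Eq. <ref> amount to a few lines of linear algebra; the sketch below is our own illustration (not the actual ADAQ-SYM code), taking the character table and the number of operations per class as plain arrays.

```python
import numpy as np

def reduce_representation(chars, char_table, class_orders):
    """N_Gamma for every IR: project the character row `chars` (one entry per class)
    onto each row of `char_table`, weighting by the class orders."""
    h = sum(class_orders)                                 # order of the point group
    W = np.array(char_table) * np.array(class_orders)     # characters times class order
    return W @ np.asarray(chars) / h

def csm(chars, char_table, class_orders, ir_index):
    """Continuous symmetry measure S = 100 (1 - N_Gamma) for the chosen IR."""
    return 100.0 * (1.0 - reduce_representation(chars, char_table, class_orders)[ir_index].real)
```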
Fifth, to calculate the IR of the TDM and find the allowed transitions, the characters of the TDM are calculated by taking the Hadamard (element-wise) product of the character vectors of each 'factor'
V⃗_μ = V⃗_f ∘V⃗_r ∘V⃗_i
and Eq. <ref> is used to calculate Γ_μ.
The representation of the resulting character vector is found in the same way as the IR of the orbital was found.
As an example, consider the group C_3v with the three IRs a_1, a_2 and e. If some TDM in this group has the character vector V⃗_μ = (4,1,0), calculating the representation would look like:
W⃗_a_1 = (1,2,3), W⃗_a_2 = (1,2,-3), W⃗_e = (2,-2,0), h = 6 ,
Γ_μ = [N_a_1,N_a_2,N_e] = [(4+2)/6, (4+2)/6, (8-2)/6] = [1,1,1] .
Since Γ_μ contains a_1, the transition is allowed. The code contains a function to convert a representation array of the above format to a string such as "a_1 + a_2 + e".
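Reusing the hypothetical reduce_representation function from the earlier sketch, the worked example above (V⃗_μ = (4,1,0) in C_3v, here produced by one possible choice of factor characters, a_2, e and e) becomes:

```python
import numpy as np

c3v_table = [[1, 1, 1],     # a_1
             [1, 1, -1],    # a_2
             [2, -1, 0]]    # e
class_orders = [1, 2, 3]    # E, 2C_3, 3 sigma_v

V_f, V_r, V_i = np.array([1, 1, -1]), np.array([2, -1, 0]), np.array([2, -1, 0])
V_mu = V_f * V_r * V_i                                       # Hadamard product -> (4, 1, 0)
N = reduce_representation(V_mu, c3v_table, class_orders)     # -> [1, 1, 1]
allowed = N[0].real > 0.5                                    # contains a_1, so the transition is allowed
```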
Finally, the information produced by ADAQ-SYM is entered into a script to produce an energy level diagram which shows the position in the band gap, orbital occupation, IR and allowed transitions.
§ COMPUTATIONAL DETAILS
The DFT simulations are executed with VASP <cit.>, using the projector augmented-wave method <cit.>.
We apply the periodic boundary conditions, and the defects in the adjacent supercells cause a degree of self-interaction. To limit this, the supercell needs to be sufficiently large. In our case supercells containing more than 500 atoms are used.
The defects are simulated with the semi-local Perdew, Burke and Ernzerhof (PBE) exchange-correlation functional <cit.>. These simulations only include the gamma point, run with a plane-wave cutoff energy of 600 eV, with the energy convergence parameters 1×10^-6 eV and 5×10^-5 eV for the electronic and ionic relaxations, respectively. The simulations are done without symmetry constraints so symmetry breaking due to the Jahn-Teller effect can occur when relaxing the crystal structure.
Excited states are simulated by constraining the electron occupation <cit.>.
§ RESULTS AND DISCUSSION
To illustrate the capability of our method, we apply ADAQ-SYM to several defects in two different host materials, diamond and 4H-SiC, and analyze the symmetry properties for the defects orbitals.
The symmetry analysis provides a coherent picture of the known defects, finds the allowed optical transitions between defect orbitals, and, specifically, explains the different ZPL polarization of the hk and kh divacancies in 4H-SiC.
§.§ Diamond Defects
We first analyze the symmetry of the ground states of the NV-, SiV0 and SiV- centers, i.e., the nitrogen vacancy and silicon vacancy centers, in diamond.
These defects were simulated in a cubic (4a,4a,4a) supercell containing 512 atoms, where a=3.57 Å.
§.§.§ Negatively Charged NV Center
Figure <ref> shows the ground state crystal structure and electronic structure of the NV- center in diamond.
Figure <ref> (b) is the generated output from ADAQ-SYM, for each orbital in the band gap. It shows the eigenvalue, occupation and IR, as well as the allowed transitions for each polarization. In this case, the found IRs are in accordance with previous work <cit.>, and only one allowed transition is found, where the light is polarized perpendicular (⊥) to the principal axis. This selection rule has been experimentally confirmed <cit.>.
§.§.§ Silicon Vacancy Center
Figure <ref> shows the ground state electronic structure of the neutral (a) and negatively charged (b) silicon vacancy center in diamond, and the IPR for 30 KS-orbitals around the band gap.
Our DFT calculations show that most orbitals in the VB are delocalized and have a low IPR.
However, some orbitals have larger IPRs, meaning that they are more localized, which indicates that they are defect states. These defect states in the VB are ungerade (u), meaning anti-symmetric with respect to inversion.
Both charge states considered in this work have point groups with inversion symmetry, which only allow optical transitions between orbitals of different symmetry with respect to inversion. To populate an orbital that is gerade (g), that is, symmetric with respect to inversion, an electron from a u-state must be excited. When some orbitals in the valence band are taken into account, ADAQ-SYM finds two allowed transitions from defect states in the valence band to an empty state in the band gap, in agreement with previous calculations <cit.>.
For SiV^- the behavior of the orbitals under inversion is clear. The IR of these states will depend on the point group being analyzed, and the CSM is used to measure how well the orbitals conform to different IRs. Table <ref> shows the CSM of the defect orbitals of SiV^- in different point groups. The orbitals conform well to the C_i point group, and with some tolerance they also conform to the IRs of C_2h. The orbitals do not conform to IRs in D_3d, unless one considers the orbitals degenerate despite the difference in eigenvalues.
§.§ Silicon Carbide
In this subsection, we carry out the symmetry analysis of defects in 4H-SiC with ADAQ-SYM, in both the ground state and the lowest excited state. The IR of each KS orbital in the band gap and the allowed polarization of light for both absorption and emission are shown in the figures below. 4H-SiC consists of alternating hexagonal (h) and quasi-cubic (k) layers, resulting in different defect configurations for the same stoichiometry. The defects were simulated in a hexagonal (6a,6a,2c) supercell containing 576 atoms, where a=3.09 Å and c=10.12 Å. For 4H-SiC, "in-plane" refers to the plane perpendicular to the c-axis.
§.§.§ Negatively Charged Silicon Vacancy
We simulated the ground and excited state of the negatively charged silicon vacancy in the h site.
Figure <ref> (a) shows two allowed transitions with different polarization, where the parallel polarized transition has a slightly lower energy than the perpendicular one; this corresponds well to the V1 and V1' absorption lines <cit.> associated with the silicon vacancy in the h site <cit.>.
Figure <ref> (b) shows that the transition back to the ground state emits light polarized parallel to the c-axis, in agreement with previous calculations and measurements of the V1 ZPL <cit.>.
§.§.§ High Symmetry Divacancy
Figure <ref> shows the ground and excited state of the hh configuration of the divacancy, and the allowed transitions. In the excited state, one electron occupies what was previously an empty degenerate state and causes an Jahn-Teller effect. Because of this, the point group symmetry is reduced from C_3v to C_1h and degenerate states split when the system is relaxed in our simulations. This also changes the principal axis from being parallel to the c-axis to being perpendicular to it, that is the principal axis now lies in-plane. The selection rule tells us that absorption (to the lowest excited state) happens only for light polarized perpendicular to the c-axis, and the transition from the excited state emits light polarized parallel to the in-plane principal axis, thus also perpendicular to the c-axis. This behavior corresponds well to previous calculations and measurements <cit.>.
The kk divacancy is basically identical to the hh divacancy with respect to symmetry.
§.§.§ Low Symmetry Divacancies
The two low symmetry divacancy configurations hk and kh exhibit different behavior regarding the polarization of the ZPL <cit.>. Examining the symmetry of the orbitals and applying selection rules regarding the TDM allows us to distinguish between these configurations. For both of these low symmetry configurations, the only symmetry transformation is a reflection in a plane where the principal axis lies in-plane.
Figure <ref> shows crystal- and electronic structure information of the hk divacancy. From panel (d) one sees that the relaxation to the ground state only emits light polarized parallel to the in-plane principal axis.
Figure <ref> shows crystal- and electronic structure information of the kh divacancy. Panel (d) demonstrates that the relaxation to the ground state only emits light polarized perpendicular to the in-plane principal axis, meaning there are components both in-plane and along the c-axis.
From the symmetry analysis by ADAQ-SYM, one can attribute the differing polarization behavior of the hk and kh configurations to the symmetry of the lowest excited state (symmetric and anti-symmetric, respectively). Due to the principal axis lying in-plane, it is possible to experimentally determine the orientation of individual defects by measuring the in-plane polarization angle of the PL detected along the c-axis, in an experiment similar to Alegre et al. <cit.>. In such an experiment, the hk divacancy will exhibit a luminescence intensity maximum when the polarization is parallel to the principal axis, and a minimum when the polarization is perpendicular. The opposite would be true for the kh divacancy, and the two configurations could be distinguished by the approximately 30 meV difference in ZPL <cit.>, or by the 30-degree polarization difference between the respective maxima.
§ DISCUSSION
The orbitals of the NV- center, seen in Figure <ref> (c)-(d), are slightly asymmetric; despite this, ADAQ-SYM reproduces the results of previous calculations <cit.>, because there is a tolerance when finding the characters of an orbital. This shows that the code can produce correct IRs even for systems that are not simulated with symmetry constraints and not very tightly converged, making this a useful tool for high-throughput calculations of defects, where high convergence becomes costly.
One issue that arose when analyzing SiV- is that the crystal symmetry was slightly inconsistent with the point group to which the electronic structure seemed to conform. Depending on the tolerance, AFLOW-SYM found either C_i or D_3d as the point group. The orbitals seem to conform to a C_2h point group, although with a strict tolerance on the IR, they only match with C_i.
The crystal structure seems to be distorted in a way that breaks the symmetry only slightly, and the distortion that would reduce D_3d to C_2h is of similar magnitude to the distortion that reduces the symmetry to C_i, meaning that both fall within or outside the tolerance of AFLOW-SYM together. A more accurate DFT simulation might address this and make the distortions distinguishable. In this case, the issue was resolved manually by calculating the overlaps in D_3d and then calculating the CSM for various subgroups of D_3d, checking in which subgroup the orbitals conformed reasonably to IRs.
Having a loose tolerance parameter for the AFLOW-SYM crystal symmetry finder can be useful in ambiguous cases, since ADAQ-SYM will then run for a larger set of symmetry operators, which gives an overview and can provide insight into the extent to which the orbitals are asymmetric with regard to each operator. It is also recommended to do this when multiple gradual distortions of the same defect are examined.
The initial excited state calculation of the silicon vacancy seemed to show a case of the pseudo Jahn-Teller effect
where the symmetry was reduced and the degenerate states split despite not being partially occupied in either spin channel. Upon running a simulation with more accurate tolerance parameters the point group remained C_3v and the splitting reduced to less than the threshold of 10 meV. For cases like this, convergence becomes more important and looser high-throughput simulations may exaggerate these effects, to resolve this one can have a higher degeneracy tolerance parameter which will cause more states to be grouped together as degenerate.
§ CONCLUSION
We have presented a method of determining the symmetry of defect orbitals, and implemented this method in the software ADAQ-SYM.
The implementation calculates the characters and irreducible representations of defect orbitals; the continuous symmetry measure is also calculated to obtain a numerical measure of how closely the orbitals are described by the irreducible representations. Finally, ADAQ-SYM applies selection rules to the optical transitions between the orbitals. The code is applicable to efficient analysis of defects.
We have applied the software to a variety of known defects with different point groups and host materials, and it reliably reproduces their symmetry properties.
It is found that the polarization of the allowed transition for hk (kh) is parallel (perpendicular) to the in-plane principal axis, in accordance with experiments. A method to determine the orientation of individual hk and kh divacancies is also proposed.
In summary, ADAQ-SYM is an automated defect symmetry analysis code which is useful for both manual and high-throughput calculations.
§ SOFTWARE AVAILABILITY
For availability of ADAQ-SYM and instructions, see https://httk.org/adaq/https://httk.org/adaq/.
§ ACKNOWLEDGEMENTS
This work was partially supported by the Knut and Alice Wallenberg Foundation through the Wallenberg Centre for Quantum Technology (WACQT).
We acknowledge support from the Knut and Alice Wallenberg Foundation (Grant No. 2018.0071).
Support from the Swedish Government Strategic Research Area Swedish e-science Research Centre (SeRC) and the Swedish Government Strategic Research Area in Materials Science on Functional Materials at Linköping University (Faculty Grant SFO-Mat-LiU No. 2009 00971) are gratefully acknowledged.
JD and RA acknowledge support from the Swedish Research Council (VR) Grant No. 2022-00276 and 2020-05402, respectively.
The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at NSC, partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973.
This research was supported by the National Research, Development, and Innovation Office of Hungary within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004) and within grant FK 145395.
§ IMPLEMENTATION
ADAQ-SYM is written in python using functional programming. Table <ref> provides an overview of the principal functions, and Table <ref> shows the settings ADAQ-SYM uses. To run the code, the user needs to provide three files from a VASP simulation; POSCAR or CONTCAR, the crystal structure; WAVECAR, wave function; EIGENVAL, eigenvalues and occupation of the bands. The user must also define which bands should be considered for the analysis. This should be a list of indices for each of the spin channels in EIGENVAL. In most cases, one should list the indices of the bands in the band gap.
The functions and call AFLOW-SYM <cit.> to find the point group and symmetry operators of the input crystal structure; the symmetry operators are then sorted by their conjugacy class and arranged in the order the classes appear in the character table. These functions use the setting which determines the tolerance for asymmetry that AFLOW-SYM uses; its value may be "tight" or "loose".
The point group is used to load the right character table from text files by Gernot Katzer <cit.>. The vaspwfc module in the VaspBandUnfolding package <cit.> is used for reading the WAVECAR file and working with the plane wave expansion of the wave function, and it also serves as the basis of the IPR calculations.
The calculates the "center of mass" of each of the considered bands using Eq. <ref> and <ref>, where the cutoff percentage p is read from the setting. The wave function is sampled in a real space grid where the setting makes the grid denser.
The function loops through all considered orbitals and all symmetry operators and calculates the overlap, how Eq. <ref> is computed is described in more detail in <cit.> and Numpy <cit.> is used to accelerate the evaluation.
The evaluation time of the overlap calculation scales linearly with the number of G-vectors in the plane wave expansion. To speed up the code, the series is truncated by multiplying the cutoff energy by the factor . The cutoff energy corresponds to a radius in k-space and only G-vectors within that radius are used, so halving the cutoff energy gives roughly one eighth as many G-vectors. Truncating the series produces some error in the overlap; this error is relatively small for factors larger than 0.1 <cit.>, since the symmetry does not depend strongly on the high frequency components of the plane wave expansion. Note that the overlap calculation will in general produce a complex number.
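As a rough illustration of this truncation step (a minimal sketch, not the actual ADAQ-SYM routine; the function name, the array layout, the unit conventions and the approximate constant are our assumptions):

import numpy as np

def truncate_plane_waves(gvecs, coeffs, encut, factor=0.5):
    """Keep only plane-wave components inside a reduced cutoff sphere.

    gvecs  : (N, 3) Cartesian G-vectors in 1/Angstrom (illustrative layout)
    coeffs : (N,) plane-wave coefficients of the orbital
    encut  : original cutoff energy in eV
    factor : multiplier applied to the cutoff energy (e.g. 0.1-1.0)
    """
    hbar2_over_2me = 3.81  # approximately hbar^2 / (2 m_e) in eV * Angstrom^2
    kinetic = hbar2_over_2me * np.sum(gvecs**2, axis=1)
    keep = kinetic <= factor * encut
    return gvecs[keep], coeffs[keep]

Since the kinetic energy grows with |G|^2, scaling the cutoff energy by one half keeps roughly 1/(2^(3/2)) of the k-space sphere volume, hence the rapid reduction in the number of G-vectors mentioned above.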
The function reads the EIGENVAL file and groups the considered bands by degeneracy. Two bands are considered degenerate if the difference in eigenvalue is less than . This function also outputs the eigenvalue and occupation of the considered bands.
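A minimal sketch of such a grouping step (illustrative only; the function name, data layout and default tolerance are placeholders, not the actual ADAQ-SYM identifiers; 0.01 eV corresponds to the 10 meV threshold mentioned earlier):

def group_by_degeneracy(bands, tol=0.01):
    """Group bands whose eigenvalues differ by less than tol (in eV).

    bands: list of (band_index, eigenvalue) pairs.
    Returns a list of groups of band indices treated as degenerate.
    """
    groups, last_energy = [], None
    for band, energy in sorted(bands, key=lambda item: item[1]):
        if last_energy is not None and abs(energy - last_energy) < tol:
            groups[-1].append(band)   # close to the previous band: same group
        else:
            groups.append([band])     # start a new degenerate group
        last_energy = energy
    return groups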
The function takes the overlaps and the bands grouped by degeneracy and first adds the overlaps of degenerate bands for each symmetry operator, then the overlaps within each conjugacy class is averaged to produce the character. At this point, the character is complex valued but this is resolved with the following function.
The function takes a set of characters and computes Eq. <ref> for all IRs of a point group; since the overlaps are in general complex, N_Γ will also be a complex number. Doing this for a truly symmetric orbital will produce a complex number with a small imaginary component and a real component close to an integer. For a set of characters to be said to transform as IR Γ, the imaginary component must be smaller than and the real component must be within of a non-zero integer. For example, with a tolerance of 0.05, characters producing N_Γ = 0.99 + 0.02i will be interpreted as transforming as IR Γ, while characters producing N_Γ = 0.96 + 0.07i or N_Γ = 0.92 + 0.03i will not. The same procedure is used when the CSM is calculated, since Eq. <ref> uses N_Γ.
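For concreteness, a minimal sketch of this test using the standard character reduction formula N_Γ = (1/|G|) ∑_C |C| χ_Γ(C)* χ(C) (our own illustration; the function and variable names are placeholders and not the actual ADAQ-SYM code):

import numpy as np

def ir_multiplicities(characters, char_table, class_sizes, tol=0.05):
    """Multiplicity of each IR in a set of (generally complex) characters.

    characters  : computed character for each conjugacy class
    char_table  : rows = tabulated characters of each IR over the classes
    class_sizes : number of symmetry operators in each conjugacy class
    Only entries whose real part is within tol of a non-zero integer and
    whose imaginary part is below tol are reported.
    """
    characters = np.asarray(characters, dtype=complex)
    class_sizes = np.asarray(class_sizes)
    order = class_sizes.sum()  # order of the point group
    found = {}
    for idx, row in enumerate(np.asarray(char_table, dtype=complex)):
        n = np.sum(class_sizes * np.conj(row) * characters) / order
        if abs(n.imag) < tol and abs(n.real - round(n.real)) < tol and round(n.real) != 0:
            found[idx] = int(round(n.real))
    return found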
The function calculates Eq. <ref> for each occupied state i, each non-full state f and each linear function r. The representation is found with and if the trivial representation is contained, the transition is marked as allowed.
The function uses Matplotlib <cit.> to create energy level diagrams of the considered states with the occupation and IR drawn, and all allowed transitions represented by arrows between the bands. The color of the arrow differs on the polarization of the transition.
§ BEST PRACTICES
The following summarizes our recommendations when running the software:
If no IR is found and several bands are close, increase the degeneracy tolerance, which will cause more states to be grouped together as degenerate. This may be preferable since actually degenerate orbitals that are split apart will not be assigned any IR, while accidentally degenerate orbitals grouped together as degenerate will be assigned an IR which is the sum of the individual orbitals' IRs, such as a_g+b_g, making it clear that the orbitals are accidentally degenerate.
If no IR is found, check that the centers of mass are close to your defect. If not, recalculate the centers with higher grid density, setting 6 or 8. There is also an automated fallback where the atomic position of any unique atomic species will be used.
If the crystal symmetry is unclear or you think it should be higher, increase AFLOW's tolerance. This way, the overlaps will be calculated for a larger set of symmetry operators. Then, check the overlaps manually and look for subsets where the characters are close to integers; any such subset should correspond to a point group which is a subgroup of the larger group.
§ CHARACTER TABLES
The character tables used in this paper are presented here.
|
http://arxiv.org/abs/2307.05940v1 | 20230712061101 | Hochschild polytopes | [
"Vincent Pilaud",
"Daria Poliakova"
] | math.CO | [
"math.CO",
"52B11, 52B12, 05A15, 05E99, 06B10"
] |
The (m,n)-multiplihedron is a polytope whose faces correspond to m-painted n-trees, and whose oriented skeleton is the Hasse diagram of the rotation lattice on binary m-painted n-trees.
Deleting certain inequalities from the facet description of the (m,n)-multiplihedron, we construct the (m,n)-Hochschild polytope whose faces correspond to m-lighted n-shades, and whose oriented skeleton is the Hasse diagram of the rotation lattice on unary m-lighted n-shades.
Moreover, there is a natural shadow map from m-painted n-trees to m-lighted n-shades, which turns out to define a meet semilattice morphism of rotation lattices.
In particular, when m=1, our Hochschild polytope is a deformed permutahedron whose oriented skeleton is the Hasse diagram of the Hochschild lattice.
§ INTRODUCTION
We present a remake of the famous combinatorial, geometric, and algebraic interplay between permutations and binary trees.
In the original story, the central character is the surjective map from permutations to binary trees (given by successive binary search tree insertions <cit.>).
This map enables us to construct the Tamari lattice <cit.> as a lattice quotient of the weak order <cit.>, the sylvester fan as a quotient fan of the braid fan <cit.>, Loday's associahedron <cit.> as a removahedron of the permutahedron, and the Loday–Ronco Hopf algebra <cit.> as a Hopf subalgebra of the Malvenuto–Reutenauer Hopf algebra <cit.>.
Many variations of this saga have been further investigated, notably for other lattice quotients of the weak order <cit.> and for generalized associahedra arizing from finite type cluster algebras <cit.>.
See <cit.> for a recent survey on this topic.
In the present remake, permutations are replaced by binary m-painted n-trees (binary trees with n nodes with m horizontal labeled edge cuts), while binary trees are replaced by unary m-lighted n-shades (partitions of n with m labels inside its gaps).
While their precise definitions are delayed to <ref>, these combinatorial objects are already illustrated in <ref> for m = 1 and n = 3.
The m-painted n-trees already appeared in <cit.>, inspired from the case m = 1 studied in <cit.>.
They are mixtures between the permutations of [m] and the binary trees with n nodes (here, mixture is meant in the precise sense of shuffle <cit.>, which is very different from other interpolations of permutations and binary trees, notably permutrees <cit.>).
The m-lighted n-shades are introduced in this paper, inspired from the case m = 1 studied in <cit.>.
Here again, the central character is a natural surjective map from the former to the latter.
Namely, the shadow map sends an m-painted n-tree to the m-lighted n-shade obtained by collecting the arity sequence along the right branch.
In other words, this map records the shadow projected on the right of the tree when the sun sets on the left of the tree.
We first use this map for lattice purposes.
It was proved in <cit.> that the right rotation digraph on binary m-painted n-trees (a mixture of the simple transposition digraph on permutations and the right rotation digraph on binary trees) defines a lattice.
We consider here the right rotation digraph on unary m-lighted n-shades.
In contrast to the rotation graph on binary m-painted n-trees, the rotation graph on unary m-lighted n-shades is regular (each node has m+n-1 incoming plus outgoing neighbors).
We prove that it defines as well a lattice by showing that the shadow map is a meet semilattice morphism (but not a lattice morphism).
When m = 0, this gives an unusual meet semilattice morphism from the Tamari lattice to the boolean lattice (distinct from the usual lattice morphism given by the canopy map).
When m = 1, this gives a connection, reminiscent of <cit.>, between the painted tree rotation lattice and the Hochschild lattice introduced in <cit.> and studied in <cit.>.
The Hochschild lattice has nice lattice properties: it was proved to be congruence uniform and extremal in <cit.>, its Galois graph, its canonical join complex and its core label order were described in <cit.>, and its Coxeter polynomial was conjectured to be a product of cyclotomic polynomials <cit.>.
For m > 1, computational experiments indicate that the m-lighted n-shade rotation lattice is still constructible by interval doubling (hence semi-distributive and congruence uniform), but it is not extremal and its Coxeter polynomial is not a product of cyclotomic polynomials.
However, its subposet induced by unary m-lighted n-shades where the labels of the lights are ordered seems to enjoy all these nice properties.
The lattice theory of the m-lighted n-shade right rotations certainly deserves to be investigated further.
We then use the shadow map for polytopal purposes.
It was proved in <cit.> that the refinement poset on all m-painted n-trees is isomorphic to the face lattice of a polytope, called the (m,n)-multiplihedron .
This polytope is a deformed permutahedron (polymatroid <cit.>, or generalized permutahedron <cit.>) obtained as the shuffle product <cit.> of an m-permutahedron with an n-associahedron of J.-L. Loday <cit.>.
Oriented in a suitable direction, the skeleton of the (m,n)-multiplihedron is isomorphic to the right rotation digraph on binary m-painted n-trees <cit.>.
Similarly, we show here that the refinement poset on all m-lighted n-shades is isomorphic to the face lattice of a polytope, called the (m,n)-Hochschild polytope .
We obtain this polytope by deleting some inequalities in the facet description of the (m,n)-multiplihedron.
We also work out the vertex description of the (m,n)-Hochschild polytope and its decomposition as Minkowski sum of faces of the standard simplex.
We obtain a deformed permutahedron whose oriented skeleton is isomorphic to the right rotation digraph on unary m-lighted n-shades.
When m = 0, the (0,n)-multiplihedron is the associahedron and the (0,n)-Hochschild polytope is a skew cube (which is not a parallelotope).
When m = 1, the (1,n)-multiplihedron is the classical multiplihedron introduced and studied in <cit.>, and the (1,n)-Hochschild polytope is a deformed permutahedron realizing the Hochschild lattice.
Let us insist here on the fact that our Hochschild polytope provides a much stronger geometric realization of the Hochschild lattice than the already two existing ones.
Namely, the Hochschild lattice is known to be realized
* on the one hand, as the standard orientation of a graph drawn on the boundary of an hypercube (see <ref>), but this graph is not the skeleton of a convex polytope,
* on the other hand, as an orientation of the skeleton of a convex polytope called freehedron and obtained as a truncation of the standard simplex <cit.> (or equivalently as the Minkowski sum of the faces of the standard simplex corresponding to all initial and final intervals), but this orientation cannot be obtained as a Morse orientation given by a linear functional (see <ref>).
Finding a deformed permutahedron whose skeleton oriented in the standard linear direction is isomorphic to the Hasse diagram of the Hochschild lattice was an open question raised by F. Chapoton.
The groupies of the permutahedron–associahedron saga probably wonder about properties of the singletons of the shadow map (a unary m-lighted n-shade whose shadow fiber consists of a single binary m-painted n-tree).
Interestingly, these singletons are counted by binomial transforms of Fibonacci numbers.
Moreover, the common facet defining inequalities of and are precisely those that contain a common vertex of and .
This property was essential in the original realization of the Cambrian fans of <cit.> as generalized associahedra <cit.>.
Somewhat independently, we also show that the right rotation digraph on unary m-lighted n-shades can also be realized on the boundary of an hypercube, generalizing the existing cubic coordinates for the Hochschild lattice <cit.>.
Cubic coordinates are well known for many famous lattices (they are called Lehmer codes for weak Bruhat lattices, and bracket vectors for Tamari lattices).
In <cit.> a stronger notion of cubic subdivisions was used to construct combinatorial diagonals for the corresponding polytopes.
When available, cubic coordinates also provide an elegant alternative proof of the lattice property.
We conclude this introduction by a glance at the algebraic motivation for painted trees and lighted shades, coming from homological algebra.
The family of multiplihedra controls the notion of A_∞-morphisms.
If A and B are two A_∞-algebras and f : A → B is an A_∞-morphism, then each face of the multiplihedron encodes an operation A^⊗ n→ B, with the cellular differential taking care of the relations.
Equivalently, one can view the faces of the multiplihedron as encoding the operations A^⊗ n-1⊗ M → N, where A is an A_∞-algebra and M and N are A_∞-modules over A.
Now if one assumes A strictly associative (DG instead of A_∞), there are clearly less such operations.
A universal basis for such operations was constructed in <cit.> in the form of short forest-tree-forest triples, and it was observed in <cit.> that these items are nothing else but the faces of the freehedra of <cit.>.
This gave the case m=1 of the shadow map <cit.>.
The paper is organized as follows.
In <ref>, we survey the m-painted n-trees from <cit.> and introduce the m-lighted n-shades, and we consider the shadow map sending the former to the latter.
In <ref>, we recall the descriptions of the (m,n)-multiplihedron, realizing the n-tree refinement lattice, from which we derive the construction of the (m,n)-Hochschild polytope, realizing the m-lighted n-shade refinement lattice.
Finally, we discuss in <ref> the cubic coordinates for m-painted n-trees and m-lighted n-shades.
§ PAINTED TREES AND LIGHTED SHADES
In this section, we first recall the combinatorics of m-painted n-trees (<ref>) and introduce that of n-shades (<ref>).
We then analyse the natural shadow map from m-painted n-trees to m-lighted n-shades (<ref>), with a particular focus on its singletons (<ref>).
§.§ m-painted n-trees
We start with the combinatorics of m-painted n-trees already studied in details in <cit.>.
It was inspired from the case m = 1 studied in <cit.>.
An n-tree is a rooted plane tree with n+1 leaves.
As usual, we orient such a tree towards its root and label its vertices in inorder.
Namely, each node of degree ℓ is labeled by an (ℓ-1)-subset {x_1, …, x_ℓ-1} such that all labels in its ith subtree are larger than x_i-1 and smaller than x_i (where by convention x_0 = 0 and x_ℓ = n+1). Note in particular that unary nodes receive an empty label.
A cut of an n-tree T is a subset c of nodes of T containing precisely one node along the path from the root to any leaf of T.
A cut c is below a cut c' if the unique node of c is after the unique node of c' along any path from the root to a leaf of T (note that we draw trees growing downward).
An m-painted n-tree (T, C, μ) is an n-tree T together with a sequence C (c_1, …, c_k) of k cuts of T and an ordered partition μ of [m] into k parts for some k ∈ [m], such that
* c_i is below c_i+1 for all i ∈ [k-1],
* ⋃ C c_1 ∪…∪ c_k contains all unary nodes of T.
We represent an m-painted n-tree (T, C, μ) as a downward growing tree T, where the cuts of C are red horizontal lines, labeled by the corresponding parts of μ. As there is no ambiguity, we write 12 for the set {1,2}. See <ref> for illustrations.
We now associate to each m-painted n-tree a preposet (a reflexive and transitive binary relation) on [m+n].
These preposets will be helpful in several places.
Consider an m-painted n-tree (T, C, μ).
Orient T towards its root, label each node x of T by the union of the part in μ corresponding to the cut of C passing through x (empty set if x is in no cut of C) and the inorder label of x in the tree T shifted by m, and finally merge together all nodes contained in each cut.
We then define ≼_ as the preposet on [m+n] where i ≼_ j if there is a (possibly empty) oriented path from the node containing i to the node containing j in the resulting oriented graph.
See <ref>.
We now use these preposets to define the refinement poset on m-painted n-trees.
The m-painted n-tree refinement poset is the poset on m-painted n-trees ordered by refinement of their corresponding preposets, that is, ≤' if ≼_⊇≼_'.
Alternatively <cit.>, we could describe the cover relations of the n-tree refinement poset combinatorially by three types of operations, as was done in <cit.> and illustrated in <ref>.
Namely, to obtain the elements covered by an m-painted n-tree, one can
* contract an edge whose child is contained in no cut,
* contract all edges from a parent in no cut to its children all in the same cut,
* join together two consecutive cuts with no node in between them.
In the following statement, we denote by |T| the number of nodes of a tree T (including unary nodes), and define |C| k and |⋃ C| |c_1 ∪…∪ c_k| for C = (c_1, …, c_k).
The m-painted n-tree refinement poset is a meet semilattice ranked by m+n-|T|-|C|+|⋃ C|.
We now define another lattice structure, but on minimal m-painted n-trees.
See <ref>.
An m-painted n-tree (T, C, μ) is binary if it has rank 0, meaning that all nodes in ⋃ C are unary, while all nodes not in ⋃ C are binary.
The binary m-painted n-tree right rotation digraph is the directed graph on binary m-painted n-tree with an edge (, ') if and only if there exists 1 ≤ i < j ≤ m+n such that ≼_{(i,j)} = ≼_'{(j,i)}.
Again, we could alternatively describe the right rotations on binary m-painted n-trees combinatorially by three types of operations, as was done in <cit.> and illustrated in <ref>.
Namely, the cover relations correspond to
* right rotate an edge joining two binary nodes,
* sweep a binary node by a cut below it,
* exchange the labels of two consecutive cuts with no node in between them, passing the small label above the large label.
The binary m-painted n-tree right rotation digraph is the Hasse diagram of a lattice.
When m = 0, the 0-painted n-tree rotation lattice is the Tamari lattice <cit.>.
When m = 1, the 1-painted n-tree rotation lattice is the multiplihedron lattice introduced in <cit.>.
Note that the m-painted n-tree rotation lattice is meet semidistributive, but not join semidistributive when m ≥ 1.
We conclude this recollection on m-painted n-trees by some enumerative observations.
See also <ref> in <ref>.
The number of binary m-painted n-trees is
m! [y^{n+1}] C^{(m+1)}(y),
where [y^{n+1}] selects the coefficient of y^{n+1}, and C^{(i)}(y) is defined for i ≥ 1 by
C^{(1)}(y) := C(y)
and C^{(i+1)}(y) := C( C^{(i)}(y) ),
where
C(y) = ( 1 - √(1-4y) ) / 2
is the Catalan generating function.
See <ref> in <ref> for the first few numbers.
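For small parameters, the coefficient extraction can be checked with a short computer algebra sketch (illustrative only; the symbol C below is the Catalan generating function introduced above, and the function name is ours):

import sympy as sp

y = sp.symbols('y')
C = (1 - sp.sqrt(1 - 4*y)) / 2   # Catalan generating function C(y)

def binary_painted_trees(m, n, order=12):
    # evaluate m! [y^(n+1)] C^((m+1))(y); C^((m+1)) is C composed with itself m+1 times
    f = C
    for _ in range(m):
        f = C.subs(y, f)
    coeff = sp.series(f, y, 0, order).removeO().coeff(y, n + 1)
    return sp.factorial(m) * coeff

# m = 0 recovers the Catalan numbers 1, 2, 5, 14 for n = 1, ..., 4
print([binary_painted_trees(0, n) for n in range(1, 5)])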
The number of rank m+n-2 m-painted n-trees is
\binom{n+1}{2} - 1 + 2^{m+n} - 2^n.
See <ref> in <ref> for the first few numbers.
The generating function PT(x,y,z) := ∑_{m,n,p} PT(m,n,p) x^m y^n z^p of the number of rank p m-painted n-trees is given by
PT(x,y,z) = ∑_m x^m ∑_{k = 0}^m S( S^{(k)}(y,z), z ) Surj(m,k) z^{m-k},
where Surj(m,k) is the number of surjections from [m] to [k],
S(y,z) = ( 1 + yz - √(1 - 4y - 2yz + y^2 z^2) ) / ( 2(z+1) )
is the Schröder generating function, and S^{(i)}(y,z) is defined for i ≥ 0 by
S^{(0)}(y,z) := y,
S^{(1)}(y,z) := (1+z) S(y,z) - yz
and S^{(i+1)}(y,z) := S^{(i)}( S^{(1)}(y,z), z ).
See <ref> in <ref> for the first few numbers.
When m = 0, the number of 0-painted n-trees of rank 0, rank n-2 and arbitrary rank are respectively given by the classical Catalan numbers (A000108), the interval numbers (A000096) and the Schröder numbers (A001003).
§.§ m-lighted n-shades
We now introduce the main new characters of this paper, which will later appear as certain shadows of m-painted n-trees.
An n-shade is a sequence of (possibly empty) tuples of integers, whose total sum is n.
An m-lighted n-shade (S, C, μ) is an n-shade S together with a set C of k distinguished positions in S, containing all positions of empty tuples of S, and an ordered partition μ of [m] into k parts for some k ∈ [m].
Alternatively, we could define an m-lighted n-shade as a pair (S,C) of sequences of the same length, where S contains (possibly empty) tuples of integers and has total sum n, while C contains (possibly empty) subsets of [m] whose union is [m], and c_i is nonempty when s_i is the empty tuple.
We preferred the version of <ref> to be more parallel to <ref>.
We represent an m-lighted n-shade (S, C, μ) as a vertical line, with the tuples of the sequence S in black on the left, and the cuts of C in red on the right. As there is no ambiguity, we write 12 for the tuple (1,2) or the set {1,2}. See <ref> for illustrations.
We now associate to each m-lighted n-shade a preposet on [m+n].
These preposets will be helpful in several places.
Consider an m-lighted n-shade (S, C, μ).
The preceding sum ps(x) of an entry x in a tuple of S is m plus the sum of all entries that appear weakly before x in S (meaning either the entries in a strictly earlier tuple of S, or the weakly earlier entries in the same tuple as x).
We then define ≼_ as the preposet on [m+n] given by the relations
* i ≼_ j if i,j ∈ [m] and i appears weakly after j in μ,
* k ≼_ ps(y) if x and y are elements of tuples of S such that the tuple of x appears weakly after the tuple of y, and ps(x)-x < k ≤ ps(x),
* i ≼_ ps(x) if i ∈ [m] and x is an element of a tuple of S which appears weakly before the cut containing i,
* k ≼_ i if i ∈ [m] and ps(x)-x < k ≤ ps(x) for some element x of a tuple of S which appears weakly after the cut containing i.
See <ref>.
Define the Hasse diagram of a preposet ≼ on X to be the Hasse diagram of the poset ≼ / ≡ on the classes of the equivalence relation ≡(x,y) ∈ X × Xx ≼ y and y ≼ x defined by ≼.
In contrast to the preposet ≼_ of an m-painted n-tree , note that by definition the Hasse diagram of the preposet ≼_ of an m-lighted n-shade is always a forest.
More precisely, the Hasse diagram of ≼_ is a caterpillar forest, whose path contains one node {ps(x_1), …, ps(x_k)} for each tuple (x_1, …, x_k) of .
We now use these preposets to define the refinement poset on m-lighted n-shades.
The m-lighted n-shade refinement poset is the poset on m-lighted n-shades defined by refinement of their corresponding preposets, that is, ≤' if ≼_⊇≼_'.
Alternatively, we could describe the cover relations of the m-lighted n-shades refinement poset combinatorially by two types of operations, as illustrated in <ref>.
Namely, to obtain the elements covered by an m-lighted n-shade, one can
* concatenate two consecutive (possibly empty) tuples, and merge their (possibly empty) cuts,
* replace one of the integers x inside a tuple by two integers y,z with x = y + z and y ≥ 1 and z ≥ 1.
For a sequence S = (s_1, …, s_ℓ) of tuples, we define |S| := ℓ and ‖S‖ := ∑_{i ∈ [ℓ]} |s_i|, where |s_i| is the length of the tuple s_i.
The m-lighted n-shade refinement poset is a meet semilattice ranked by m - |S| + ‖S‖.
For the rank, if (S,C,μ) and ' (S',C',μ') are obtained by one of the two operations of <ref>, then we have
* |S'| = |S|-1 and ‖S'‖ = ‖S‖ when we concatenate two consecutive tuples,
* |S'| = |S| and ‖S'‖ = ‖S‖+1 when we refine an integer into two inside one of the tuples.
In both situations, we get (') = ()+1.
Finally, the meet semilattice property will follow from <ref>.
We now define another lattice structure, but on minimal m-lighted n-shades.
An m-lighted n-shade (S, C, μ) is unary if it has rank 0, meaning that all tuples in ⋃ C are empty tuples, while all tuples not in ⋃ C are singletons.
The unary m-lighted n-shade right rotation digraph is the directed graph on unary m-lighted n-shades with an edge (, ') if and only if there exists 1 ≤ i < j ≤ m+n such that ≼_{(i,j)} = ≼_'{(j,i)}.
Again, we could alternatively describe the right rotations on unary m-lighted n-shades combinatorially by three types of operations, as illustrated in <ref>.
Namely, the cover relations correspond to
* replace a singleton (r) by two singletons (s), (t) with r = s + t and s ≥ 1 and t ≥ 1,
* exchange a singleton with a cut below it,
* exchange the labels of two consecutive cuts with no singleton in between them, passing the small label above the large label.
From <ref>, we observe that any unary m-lighted n-shade with singleton tuples s_1, …, s_k admits m+k-1+∑_i ∈ [k] (s_i-1) = m+n-1 (left or right) rotations.
In other words, the (undirected) rotation graph is regular of degree m+n-1.
Note that this can also be seen as a consequence of <ref>.
The next statement will follow from <ref>.
The unary m-lighted n-shade right rotation digraph is the Hasse diagram of a lattice.
When m = 0, the 0-lighted n-shade rotation lattice is the boolean lattice.
When m = 1, the 1-lighted n-shade rotation lattice is the Hochschild lattice studied in <cit.>.
Computational experiments indicate that the m-lighted n-shade rotation lattice is constructible by interval doubling (hence semidistributive and congruence uniform).
However, in contrast to the case when m ≤ 1, it is not extremal (see <cit.> for context), and its Coxeter polynomial is not a product of cyclotomic polynomials (see <cit.> and <cit.> for context).
Nevertheless, its subposet induced by unary m-lighted n-shades where the labels of the lights are ordered (see also <ref>) seems to enjoy all these nice properties.
The lattice properties of the m-lighted n-shade rotation lattice and its subposet deserve to be investigated further.
We conclude this section on m-lighted n-shades by some enumerative observations.
See also <ref> in <ref>.
The number of unary m-lighted n-shades is
m! ∑_{ℓ = 1}^{n} \binom{m+ℓ}{m} \binom{n-1}{ℓ-1}.
See <ref> in <ref> for the first few numbers.
The number of unary m-lighted n-shades with ℓ singletons is given by m! \binom{m+ℓ}{ℓ} \binom{n-1}{ℓ-1}.
Namely, choose the order of the cuts (hence m! choices), the positions of the m cuts and ℓ singletons (hence \binom{m+ℓ}{ℓ} choices), and the values of the ℓ singletons (hence \binom{n-1}{ℓ-1} choices).
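The closed formula is straightforward to evaluate; the following sketch (plain Python, for illustration only; the function name is ours) reproduces for m = 1 the values 2, 5, 12, 28, … quoted in the remark at the end of this subsection:

from math import comb, factorial

def unary_lighted_shades(m, n):
    """m! * sum over l of binom(m+l, m) * binom(n-1, l-1)."""
    return factorial(m) * sum(comb(m + l, m) * comb(n - 1, l - 1)
                              for l in range(1, n + 1))

# m = 1 gives 2, 5, 12, 28, ..., i.e. 2^(n-2) (n+3)
print([unary_lighted_shades(1, n) for n in range(1, 5)])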
The number of rank m+n-2 m-lighted n-shades is
(2^m+1)(n+1) - 4 + δ_n=0.
See <ref> in <ref> for the first few numbers.
Consider an m-lighted n-shade (S, C, μ) with S = (s_1, …, s_ℓ).
According to <ref>, has rank m+n-2 if and only if n-2 = ‖S‖ - |S| = ∑_{i ∈ [ℓ]} (|s_i|-1), or equivalently if and only if one of the following holds:
* either ℓ = 1 and s_1 = 1^i-1 2 1^n-1-i for some i ∈ [n-1] (hence n-1 choices),
* or ℓ = 2 and s_1 = 1^i while s_2 = 1^n-i for some i ∈ [n-1] and the m labels are allocated arbitrarily on the two positions (hence 2^m(n-1) choices),
* or ℓ = 2 and s_1 = ∅ while s_2 = 1^n and the m labels are allocated on the two tuples, with at least one label on the first tuple (hence 2^m-1 choices),
* or ℓ = 2 and s_1 = 1^n while s_2 = ∅ and the m labels are allocated on the two tuples, with at least one label on the second tuple (hence 2^m-1 choices).
We obtain that there are n-1+2^m(n-1)+2(2^m-1) = (2^m+1)(n+1) - 4 rank m+n-2 m-lighted n-shades.
The correction δ_n=0 comes from the fact that the last three situations overlap when n = 0.
The generating function LS(x,y,z) := ∑_{m,n,p} LS(m,n,p) x^m y^n z^p of the number of rank p m-lighted n-shades is given by
LS(x,y,z) = ∑_m x^m ∑_{k = 0}^m ( (1-y)^k ( 1-y(z+1) ) / ( 1-y(z+2) )^{k+1} ) Surj(m,k) z^{m-k},
where Surj(m,k) is the number of surjections from [m] to [k].
See <ref> in <ref> for the first few numbers.
Denote by
τ^≥(y,z) = 1/( 1 - yz/(1-y) ) = (1-y)/( 1-y(z+1) ) (resp. τ^>(y,z) = y/( 1-y(z+1) ) )
the generating function of all (resp. nonempty) tuples of integers, where y counts the sum, and z counts the length.
For fixed integers 0 ≤ k ≤ m, we claim that the generating function ∑_n,p LS(m,n,p) y^n z^p of rank p m-lighted n-shades with k cuts is given by
τ^≥(y,z)^k ( 1/( 1-τ^>(y,z) ) )^{k+1} Surj(m,k) z^{m-k} = ( (1-y)^k ( 1-y(z+1) ) / ( 1-y(z+2) )^{k+1} ) Surj(m,k) z^{m-k}
Indeed, to construct a rank p m-lighted shade with k cuts, we need to choose
* k possibly empty tuples for the cuts, hence k factors τ^≥(y,z),
* k+1 possibly empty sequences of nonempty tuples in between the cuts (including before the first cut and after the last cut), hence k+1 factors 1/( 1-τ^>(y,z) ),
* an ordered partition of [m] into k parts, hence a factor Surj(m,k).
When m = 0, the numbers of 0-lighted n-shades of rank 0, rank n-2 and arbitrary rank are respectively given by 2^{n-1} (A000079), 2(n-1) (A005843) and 3^{n-1} (A000244).
When m = 1, the numbers of 1-lighted n-shades of rank 0 and rank n-1 are respectively given by 2^{n-2}(n+3) (A045623) and 3n-1 (A016789).
§.§ Shadow map
We now describe a natural shadow map sending an m-painted n-tree to an m-lighted n-shade.
Intuitively, the shadow is what you see on the right of the tree when the sun sets on its left.
For instance, the m-painted n-trees of <ref> are sent to the m-lighted n-shade of <ref>.
We call right branch of a tree T the path from the root to the rightmost leaf of T.
The shadow of an n-tree T is the n-shade (T) obtained by
* contracting all edges joining a child to a parent which does not lie on the right branch of T,
* replacing each node on the right branch of T by the tuple of the arities of its children except its rightmost.
The shadow of a cut c in T is the position (c) in (T) of the unique node of the right branch of T contained in c.
For a sequence C = (c_1, …, c_k), define (C) ((c_1), …, (c_k)).
The shadow of an m-painted n-tree (T, C, μ) is the m-lighted n-shade () ((T), (C), μ).
The shadow congruence is the equivalence class on m-painted n-trees whose equivalence classes are the fibers of the shadow map. In other words, two m-painted n-trees are shadow congruent if they have the same shadow.
Given two meet semilattices (M, ) and (M', '), a map f : M → M' is a meet semilattice morphism if f(x y) = f(x) ' f(y) for all x,y ∈ M.
The fibers of f are the classes of a meet semilattice congruence ≡_f on M, and the image of f is then called the meet semilattice quotient of M by ≡_f.
In other words, an equivalence relation ≡ on M is a meet semilattice congruence when x_1 ≡ x_2 and y_1 ≡ y_2 implies x_1 y_1 ≡ x_2 y_2, and the quotient M / ≡ is the meet semilattice on the ≡-equivalence classes, where for two ≡-equivalence classes X and Y,
* the order relation is given by X ≤ Y if there exist representatives x ∈ X and y ∈ Y such that x ≤ y,
* the meet X Y is the ≡-equivalence class of x y for any representatives x ∈ X and y ∈ Y.
The following classical criterion will be fundamental.
An equivalence relation ≡ on a meet semilattice (M, ) is a meet semilattice congruence if and only if
* each ≡-equivalence class admits a unique minimal element,
* the map : M → M sending an element of M to the minimal element of its ≡-equivalence class is order preserving.
We will now apply this characterization to the shadow congruence on the binary m-painted n-tree rotation lattice.
This will prove along the way that the unary m-lighted n-shade rotation poset is a meet semilattice quotient, hence a meet semilattice, hence a lattice as it is bounded.
This completes the proof of <ref>.
The shadow map is a surjective meet semilattice morphism from the binary m-painted n-tree rotation lattice to the unary m-lighted n-shade rotation lattice.
The shadow fiber of a unary m-lighted n-shade (S, C, μ) clearly has a unique minimal element (obtained by replacing each element x of S by a left comb on x leaves, cut at the level of its leaves by all lines of C below x).
Consider now two m-painted n-trees (T, C, μ) and ' (T', C', μ') connected by a right rotation.
If this rotation does not affect the right branch of , then and ' are shadow congruent, so that () = (').
Assume now that this rotation affects the right branch.
There are three possible such flips:
* Assume first that we rotate an edge i → j in (with j on the right branch of ) to an edge i ← j in ' (with both i and j on the right branch of '). Then () and (') coincide except that () has a left comb at j (cut at the level of its leaves by all lines of C below j) while (') has a left comb at i and a left comb at j (both cut at the level of their leaves by all lines of C below j). As the left comb is the rotation minimal binary tree, we can perform a sequence of right rotations in () to obtain ('). Note here that it is crucial that the cuts appear in the left combs of () and (') at the level of their leaves so that these binary tree rotations are indeed painted tree rotations.
* Assume now that we sweep a binary node i (on the right branch) by a cut c to pass from to '. Then () and (') coincide except that the left comb at vertex i of () is completely above c while the left comb at vertex i of (') is completely below c. Hence, we can successively sweep all nodes of the left comb at vertex i of () by the cut c to obtain (').
* Assume finally that we exchange the labels of two consecutive cuts with no node in between them to pass from to '. Then we can exchange the labels of the same cuts to pass from () to ('), since they are still consecutive and still have no node between them.
In all cases, we obtain that () < (').
We conclude that is order preserving, so that the shadow map is a meet semilattice morphism by <ref>.
When m = 0, we obtain an unusual meet semilattice morphism from the Tamari lattice to the boolean lattice (distinct from the usual lattice morphism given by the canopy map).
When m = 1, we obtain a meet semilattice morphism from the multiplihedron lattice to the Hochschild lattice, reminiscent of <cit.>.
Note that the shadow map is not a join semilattice morphism.
For instance,
< g r a p h i c s >
while
< g r a p h i c s >
Note that there is already a counter-example with (m,n) = (0,3), see <ref> (left).
If we tried to apply the (dual) characterization of <ref>, we would observe that, even if the fiber of a unary m-lighted n-shade (S, C, μ) has a unique maximal element (obtained by replacing each element x of S by a right comb on x leaves, cut at the level of its root by all lines of C below x), the projection map is not order preserving.
Note that we could also consider the left shadow map, given by the arity sequence on the left branch of the m-painted n-tree.
It defines a join semilattice morphism, which is not a meet semilattice morphism.
It would also be interesting to consider the map that records the arity sequence along the path from the root to the ith leaf.
And of course all intersections of the equivalence relations arising from these arity sequence maps.
Note that it is crucial here that our orientation of the skeleton of the (m,n)-multiplihedron gives advantage to the permutation part over the binary tree part (in other words, that we consider the shuffle of the m-permutahedron with n-associahedron).
Indeed, as observed in the proof of <ref>, we need that the cuts appear at the level of the leaves of the left combs in ().
Had we considered instead the shuffle of the n-associahedron with the m-permutahedron (or equivalently, the shuffle of the m-permutahedron with the anti-n-associahedron), the (left or right) shadow map would be neither a join nor a meet semilattice morphism.
When m ≤ 1, the shadow map is a surjective meet semilattice morphism from the m-painted n-tree refinement meet semilattice to the m-lighted n-shade refinement meet semilattice.
Indeed, the minimal element () in the shadow class of an m-painted n-tree is obtained by contracting all edges between two nodes that are not on the right branch of .
It fails when m ≥ 2 as edges between two consecutive cuts cannot be contracted.
§.§ Singletons
We now study the fibers of the shadow map which consist in a single m-painted n-tree.
They are analogous to the classical singleton permutations used to construct associahedra, see <ref>.
An (m,n)-singleton is a binary m-painted n-tree which is alone in its shadow congruence class.
The following conditions are equivalent for a binary m-painted n-tree :
* is an (m,n)-singleton,
* each binary node of lies on the right branch, or its parent lies on the right branch if it is below the last cut,
* each tuple of the shadow () is reduced to a single 1, or either to a single 1 or a single 2 if it is below the last cut.
Assume that has a binary node i which is not on the right branch, and let j be the parent of i.
If j is a unary node contained in a cut c, then sweeping i with c preserves the shadow of .
If j is a binary node not on the right branch, then rotating the edge i → j preserves the shadow of .
If j is on the right branch but above a cut, sweeping a node in the left branch of j with c preserves the shadow of .
Finally, if j is on the right branch and below all cuts, then the only possible rotation in the left branch of j modifies the shadow of .
This proves that (i) (ii).
Finally, (ii) clearly translates to (iii) via the shadow map.
The number of singletons of the (m,n)-shadow map is
m! ∑_{k = 0}^{n} \binom{m+k-1}{k} F_{n-k+1},
where F_k denotes the kth Fibonacci number (defined by F_0 = F_1 = 1 and F_{k+2} = F_{k+1} + F_k for k ≥ 0, see A000045).
See <ref> in <ref> for the first few numbers.
To count the number of singletons, we can simply count the number of shadows of singletons.
From their description in <ref> (iii), we obtain that the number of such shades with k entries above the last cut is given by m! \binom{m+k-1}{k} F_{n-k+1}.
Namely, choose the order of the cuts (hence m! choices), insert k tuples reduced to a single 1 before the last cut (hence \binom{m+k-1}{k} choices), and finish with a sequence of tuples reduced to a single 1 or a single 2, whose total sum is n-k (hence F_{n-k+1} choices).
When m = 0, the number of singletons is the Fibonacci number F_n (A000045).
When m = 1, the number of singletons is F_{n+2} - 1 (A000071).
§ MULTIPLIHEDRA AND HOCHSCHILD POLYTOPES
In this section, we construct polyhedral fans and polytopes whose face lattices are isomorphic to the refinement posets on m-painted n-trees and m-lighted n-shades respectively.
We start with a brief recollection on polyhedral geometry (<ref>).
We then present the vertex and facet descriptions of the (m,n)-multiplihedron (<ref>) and of the (m,n)-Hochschild polytope (<ref>).
We conclude by gathering all necessary proofs on Hochschild polytopes (<ref>).
§.§ Recollection on polyhedral geometry
We start with a brief reminder on fans and polytopes, with a particular attention to deformed permutahedra.
We invite the reader familiar with these notions to jump directly to <ref>.
§.§.§ Fans and polytopes
A (polyhedral) cone is the positive span _≥0Ṟ of a finite set Ṟ of vectors of ^d or equivalently, the intersection of finitely many closed linear half-spaces of ^d.
The faces of a cone are its intersections with its supporting hyperplanes.
The rays (resp. facets) are the faces of dimension 1 (resp. codimension 1).
A cone is simplicial if its rays are linearly independent.
A (polyhedral) fan F̧ is a set of cones such that any face of a cone of F̧ belongs to F̧, and any two cones of F̧ intersect along a face of both.
A fan is essential if the intersection of its cones is the origin, complete if the union of its cones covers ^d, and simplicial if all its cones are simplicial.
A polytope is the convex hull of finitely many points of ^d or equivalently, a bounded intersection of finitely many closed affine half-spaces of ^d.
The faces of a polytope are its intersections with its supporting hyperplanes.
The vertices (resp. edges, resp. facets) are the faces of dimension 0 (resp. dimension 1, resp. codimension 1).
The normal cone of a face F of a polytope P is the cone generated by the normal vectors to the supporting hyperplanes of P containing F.
Said differently, it is the cone of vectors c̱ of ^d such that the linear form x̱↦c̱x̱ on P is maximized by all points of the face F.
The normal fan of P is the set of normal cones of all its faces.
The Minkowski sum of two polytopes P, Q ⊆ ^n is the polytope P + Q := { p + q : p ∈ P, q ∈ Q }.
The normal fan of P + Q is the common refinement of the normal fans of P and Q.
We write P = Q - R when P + R = Q.
A deformation of a polytope P is a polytope Q satisfying the following equivalent conditions:
* the normal fan of Q coarsens the normal fan of P,
* Q is a weak Minkowski summand of P, i.e. there exists a polytope R and a positive real number λ such that λP = Q + R,
* Q can be obtained from P by gliding its facets in the direction of its normal vectors without passing a vertex.
§.§.§ The braid fan, the permutahedron, and its deformations
We denote by (e̱_i)_i ∈ [d] the canonical basis of ^d and we define 1̱_X := ∑_{i ∈ X} e̱_i for X ⊆ [d], and 1̱ := 1̱_{[d]}.
All our polytopal constructions will lie in the affine subspace _d := { x̱ ∈ ^d : ⟨ x̱, 1̱ ⟩ = ∑_{i ∈ [d]} x_i = \binom{d+1}{2} }, and their normal fans will lie in the vector subspace ^⊥ := { x̱ ∈ ^d : ⟨ x̱, 1̱ ⟩ = 0 }.
The braid arrangement is the arrangement formed by the hyperplanes { x̱ ∈ ^⊥ : x_i = x_j } for all 1 ≤ i < j ≤ d.
Its regions (the closures of the connected components of the complement of the union of its hyperplanes) are the maximal cones of the braid fan B̧_d.
This fan has a k-dimensional cone for each ordered partition of [d] into k+1 parts.
In particular, it has a region for each permutation of [d], and a ray for each proper nonempty subset of [d].
(Note that we work in the subspace ^⊥ of ^d so that the braid arrangement is essential and indeed has rays.)
The permutahedron is the polytope defined equivalently as
* the convex hull of the points ∑_{i ∈ [d]} i e̱_σ(i) for all permutations σ of [d], see <cit.>,
* the intersection of the hyperplane _d with the halfspaces { x̱ ∈ ^d : ∑_{i ∈ I} x_i ≥ \binom{|I|+1}{2} } for all ∅ ≠ I ⊊ [d], see <cit.>.
The braid fan B̧_d is the normal fan of the permutahedron .
When oriented in the direction ω̱ (n,…,1) - (1,…,n) = ∑_i ∈ [n] (n+1-2i) e̱_i, the skeleton of the permutahedron is isomorphic to the Hasse diagram of the classical weak order on permutations of [d].
§.§.§ Deformed permutahedra
A deformed permutahedron (polymatroid <cit.>, or generalized permutahedron <cit.>) is a deformation of the permutahedron.
The normal fan of a deformed permutahedron is a collection of preposet cones <cit.>.
The preposet cone of a preposet ≼ on [d] is the cone { x̱ ∈ ^d : x_i ≤ x_j if i ≼ j }.
For instance, the cones of the braid fan are precisely the preposet cones of the total preposets (those where i ≼ j or j ≼ i for any i,j ∈ [d]).
There are two standard parametrizations of the deformed permutahedra.
Namely, for a deformed permutahedron P in ^d, we define:
* its Minkowski coefficients ( y̱_I(P) )_{∅ ≠ I ⊆ [d]} such that P is the Minkowski sum and difference ∑_{∅ ≠ I ⊆ [d]} y̱_I(P) Δ_I, where Δ_I := conv{ e̱_i : i ∈ I } is the face of the standard simplex Δ_[d] := conv{ e̱_i : i ∈ [d] } corresponding to I,
* its tight right hand sides ( ẕ_J(P) )_{∅ ≠ J ⊆ [d]} such that ẕ_J(P) := min{ ⟨ 1̱_J, p̱ ⟩ : p̱ ∈ P }.
As proved in <cit.>, these two parametrizations are related by boolean Möbius inversion:
ẕ_J(P) = ∑_{I ⊆ J} y̱_I(P)
and y̱_I(P) = ∑_{J ⊆ I} (-1)^{|I ∖ J|} ẕ_J(P).
For instance, for the classical permutahedron ,
* its Minkowski coefficients are y̱_I ( ) = 1 if |I| ≤ 2 and 0 otherwise,
* its tight right hand sides are ẕ_J ( ) = \binom{|J|+1}{2}.
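This inversion is easy to check computationally on the permutahedron values above; the following sketch (illustrative Python, with subsets encoded as frozensets) verifies that the Minkowski coefficients y_I = 1 for |I| ≤ 2 indeed yield ẕ_J = \binom{|J|+1}{2}:

from itertools import combinations
from math import comb

d = 4
ground = range(1, d + 1)
subsets = [frozenset(c) for k in range(1, d + 1) for c in combinations(ground, k)]

# Minkowski coefficients of the permutahedron: y_I = 1 if |I| <= 2, else 0
y = {I: (1 if len(I) <= 2 else 0) for I in subsets}

# tight right hand sides via boolean Moebius summation: z_J = sum over I contained in J of y_I
z = {J: sum(y[I] for I in subsets if I <= J) for J in subsets}

# recovers z_J = binom(|J|+1, 2), as stated above
assert all(z[J] == comb(len(J) + 1, 2) for J in subsets)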
As another illustration, recall that the n-associahedron is the deformed permutahedron defined equivalently as
* the convex hull of the points ∑_i ∈ [n]ℓ(T,i) r(T,i) e̱_i for all binary trees T with n internal nodes, where ℓ(T,i) and r(T,i) respectively denote the numbers of leaves in the left and right subtrees of the ith node of T in infix labeling, see <cit.>,
* the intersection of the hyperplane _d with the halfspaces { x̱ ∈ ^n : ∑_{i ≤ ℓ ≤ j} x_ℓ ≥ \binom{j-i+2}{2} } for all 1 ≤ i ≤ j ≤ n, see <cit.>.
Moreover,
* its Minkowski coefficients are y̱_I ( ) = 1 if I is an interval of [n] and 0 otherwise, see <cit.>,
* its tight right hand sides are ẕ_J ( ) = \binom{|J_1|+1}{2} + … + \binom{|J_k|+1}{2} where J = J_1 ∪ … ∪ J_k is the decomposition of J into maximal intervals of [n], see <cit.>.
§.§ Multiplihedra
We now consider the (m,n)-multiplihedron which realizes the m-painted n-tree refinement semilattice.
These polytopes are illustrated in <ref>.
Although they were previously constructed when m = 1 in <cit.>, we use here the construction of <cit.>.
This construction is just an example of the shuffle product on deformed permutahedra, introduced in <cit.>.
However, we do not really need the generality of this operation and define here the (m,n)-multiplihedron using its vertex and facet descriptions.
Consider a binary m-painted n-tree (T, C, μ).
We associate to a point a̱() whose pth coordinate is
* if p ≤ m, the number of binary nodes and cuts weakly below the cut labeled by p,
* if p ≥ m+1, the number of cuts below plus the product of the numbers of leaves in the left and right subtrees the node of T labeled by p-m in inorder.
See <ref> for some examples.
Consider the hyperplane _m+n of ^m+n defined by the equality
⟨ x̱, 1̱_{[m+n]} ⟩ = \binom{m+n+1}{2}.
Moreover, for each rank m+n-2 m-painted n-tree (T, C, μ), consider the halfspace H̱() of ^m+n defined by the inequality
∑_{i ∈ A ∪ B} x_i ≥ \binom{|A|+1}{2} + \binom{|B_1|+1}{2} + … + \binom{|B_k|+1}{2} + |A| · |B|,
where
* A denotes the set of elements of [m] which label the cut not containing the root of T (hence, A = ∅ if C has only one cut, which contains the root of T),
* B B_1 ∪…∪ B_k where B_1, …, B_k are the inorder labels shifted by m of the non-unary nodes of T distinct from the root of T.
See <ref> for some examples.
The m-painted n-tree refinement lattice is anti-isomorphic to the face lattice of the (m,n)-multiplihedron , defined equivalently as
* the convex hull of the vertices a̱() for all binary m-painted n-trees ,
* the intersection of the hyperplane _m+n with the halfspaces H̱() for all rank m+n-2 n-trees .
The normal fan of the (m,n)-multiplihedron is the fan whose cones are the preposet cones of the preposets ≼_ of all m-painted n-trees .
When oriented in the direction ω̱ (n,…,1) - (1,…,n), the skeleton of the (m,n)-multiplihedron is isomorphic to the right rotation digraph on binary m-painted n-trees.
As observed in <cit.>, it is straightforward to obtain the y̱ and ẕ parametrizations of the (m,n)-multiplihedron .
Namely, for I ⊆ [m+n], we have
y̱_I ( ) =
1 if |I| ≤ 2 and |I ∩ [n]^+m| ≤ 1, or I is a subinterval of [n]^+m
0 otherwise
and
ẕ_J ( ) = \binom{|A|+1}{2} + \binom{|B_1|+1}{2} + … + \binom{|B_k|+1}{2} + |A| · |B|,
where A := J ∩ [m] and B := B_1 ∪ … ∪ B_k is the coarsest interval decomposition of J ∖ [m].
When m = 0, the (0,n)-multiplihedron is Loday's associahedron <cit.>.
When m = 1, the (1,n)-multiplihedron is the classical multiplihedron alternatively constructed in <cit.>.
§.§ Hochschild polytopes
We now construct the (m,n)-Hochschild polytope which realizes the m-lighted n-shade refinement semilattice.
These polytopes are illustrated in <ref>.
Recall that we denote by ps(x) the preceding sum of an entry x in an m-lighted n-shade (see <ref>).
Consider a unary m-lighted n-shade (S, C, μ) and denote by s_1, s_2, …, s_k the values of the singleton tuples of S.
We associate to a point a̱() whose pth coordinate is
* if p ≤ m, then the number of cuts plus the sum of the entries s_i which are weakly below the cut labeled p,
* if there is j ∈ [k] such that p = ps(s_j), then 1 + s_j ( m + n - p + c_p ) + \binom{s_j}{2}, where c_p is the number of cuts below s_j.
* 1 otherwise.
See <ref> for some examples.
We still denote by _m+n the hyperplane of ^m+n defined by the equality
⟨ x̱, 1̱_{[m+n]} ⟩ = \binom{m+n+1}{2}.
Moreover, for each rank m+n-2 m-lighted n-shade (S, C, μ), consider the halfspace H̱() of ^m+n defined by the inequality
∑_{i ∈ A ∪ B} x_i ≥ \binom{|A|+|B|+1}{2},
where
* A denotes the set of elements of [m] which label the cut not containing the first tuple of S (hence, A = ∅ if C has only one cut, which contains the first tuple of S),
* B = {m+q} if S is a single tuple with the 2 in position q, and B = {m+q+1, …, m+n} if S = (s_1, s_2) is a pair of tuples with |s_1| = q.
See <ref> for some examples.
The inequalities of <ref> form a subset of the inequalities of <ref>.
We postpone the proofs of the next three statements to <ref>.
The m-lighted n-shade refinement lattice is anti-isomorphic to the face lattice of the (m,n)-Hochschild polytope , defined equivalently as
* the convex hull of the vertices a̱() for all unary m-lighted n-shades ,
* the intersection of the hyperplane _m+n with the halfspaces H̱() for all rank m+n-2 m-lighted n-shades .
The normal fan of the (m,n)-Hochschild polytope is the fan whose cones are the preposet cones of the preposets ≼_ of all m-lighted n-shades .
When oriented in the direction ω̱ (n,…,1) - (1,…,n), the skeleton of the (m,n)-Hochschild polytope is isomorphic to the right rotation digraph on unary m-lighted n-shades.
It follows from <ref> that the (m,n)-Hochschild polytope is simple and the m-lighted n-shade fan is simplicial.
This will simplify our proofs in <ref>.
As in <ref>, one can compute the y̱ and ẕ parametrizations of the (m,n)-Hochschild polytope .
Namely, for I ⊆ [m+n], we have
y̱_I ( ) =
1 if |I| = 1, or |I| = 2 and I ⊆ [m],
or I = {i, m+j, m+j+1, …, m+n} for some i ∈ [m] and j ∈ [n]
n-j if I = {m+j, m+j+1, …, m+n} for some j ∈ [n]
0 otherwise
and
ẕ_J ( ) = \binom{|A|+|C|+1}{2} + |B|,
where A := J ∩ [m], and B ∪ C := J ∖ [m] such that C is the largest interval of J ∖ [m] containing m+n.
As mentioned in the introduction, there are deep similarities between the behaviors of
* the permutahedron and the associahedron ,
* the multiplihedron and the Hochschild polytope .
We conclude with a few comments on the behavior of the later for the reader familiar with the behavior of the former:
* As observed in <ref>, the (m,n)-Hochschild polytope can be obtained by deleting inequalities in the facet description of the (m,n)-multiplihedron .
* The common facet defining inequalities of and are precisely those that contain a common vertex of and (the singletons of <ref>).
* In contrast, the vertex barycenters of the (m,n)-multiplihedron and of the (m,n)-Hochschild polytope do not coincide.
* When m = 0, the (0,n)-Hochschild polytope is a skew cube distinct from the parallelepiped obtained by considering the canopy congruence on binary trees (which is a lattice congruence, in contrast to the shadow meet semilattice congruence).
When m = 0, the (0,n)-Hochschild polytope is a skew cube.
Note that it is distinct from the parallelotope ∑_i ∈ [n-1] [e̱_i, e̱_i+1].
When m = 1, the (1,n)-Hochschild polytope gives a realization of the Hochschild lattice <cit.>.
Note that the unoriented rotation graph on 1-lighted n-shades was already known to be isomorphic to the unoriented skeleton of a deformed permutahedron called freehedron and obtained as a truncation of the standard simplex <cit.>, or more precisely as the Minkowski sum ∑_{i ∈ [n]} Δ_{{1, …, i}} + ∑_{i ∈ [n]} Δ_{{i, …, n}} of the faces of the standard simplex corresponding to initial and final intervals, see <ref>.
However, orienting the skeleton of the freehedron in direction ω̱, we obtain a poset different from the Hochschild lattice, and which is not even a lattice.
Indeed, in <ref> (left) the two blue vertices have no join while the two red vertices have no meet.
In fact, the Hasse diagram of the Hochschild lattice cannot be obtained as a Morse orientation given by a linear functional on the freehedron.
Finally, observe that the freehedron cannot be obtained by removing inequalities in the facet description of the permutahedron or of the multiplihedron.
See <ref> (middle and right) where the resulting removahedra have the wrong combinatorics (look at the 4-valent vertex on the right of the polytopes).
§.§ Proof of <ref>
Our proof strategy follows that of <cit.>.
First, we will prove that the collection of cones described in <ref> indeed defines a fan.
The preposet cones of the preposets ≼_ for all m-lighted n-shades define a complete simplicial fan realizing the m-lighted n-shade refinement semilattice.
By <ref>, the Hasse diagram of the preposets ≼_ of each m-lighted n-shade is a forest, so that the corresponding preposet cone is simplicial.
Moreover, contracting any edge in this forest gives rise to the Hasse diagram of the preposet ≼_' of an m-lighted n-shade ' refined by , so that this collection of cones is closed by faces.
Finally, we have a well-defined shadow map , which sends an m-painted n-tree to the refinement maximal m-lighted n-shade such that ≼_⊆≼_.
Since the preposet cones of the preposets ≼_ for all m-painted n-trees form a complete fan F̧, we conclude that the preposet cones of the preposets ≼_ for all m-lighted n-shades also form a complete fan refined by F̧.
Next, we apply the following characterization to realize a complete simplicial fan as the normal fan of a convex polytope. A proof of this statement can be found in <cit.>.
Consider a complete simplicial fan F̧ in ^d, and choose
* a point a̱(C) for each maximal cone C of F̧,
* a half-space H̱(ρ) of ^d containing the origin for each ray ρ of F̧,
such that a̱(C) belongs to the hyperplane defining H̱^=(ρ) when ρ∈ C.
Then the following assertions are equivalent:
* the vector a̱(C') - a̱(C) points from C to C' for any two adjacent maximal cones C, C' of F̧,
* the polytopes
conv{ a̱(C) : C maximal cone of F̧ } and ⋂_{ρ ray of F̧} H̱(ρ)
coincide and their normal fan is F̧.
In the next two lemmas, we check the conditions of application of <ref>.
For any m-lighted n-shades and ', of rank 0 and m+n-2 respectively, such that ≼_ refines ≼_', the point a̱() belongs to the hyperplane defining H̱(').
Denote by s_1, …, s_k the values of the singleton tuples of .
We distinguish two cases:
* Assume first that ' contains a single tuple with the 2 in position q, so that A = ∅ and B = {m+q} in <ref>.
Since refines ', there is no j so that m+q = ps(s_j), so that a̱()_m+q = 1 in <ref>.
We conclude that
∑_{ℓ ∈ A ∪ B} a̱()_ℓ = a̱()_{m+q} = 1 = \binom{|A|+|B|+1}{2}.
* Assume now that ' is a pair of tuples (s'_1,s'_2) with |s'_1| = q, so that A ⊆ [m] are the labels of the cut containing s'_2, and B = {m+q+1, …, m+n} in <ref>.
Since refines ', there is j such that q = ps(s_j).
We conclude that
∑_{ℓ ∈ A ∪ B} a̱()_ℓ = \binom{|A|+1}{2} + |A||B| + ∑_{i = j+1}^{k} ( s_i - 1 + s_i ( n - ps(s_i) ) + \binom{s_i}{2} )
= \binom{|A|+1}{2} + |A||B| + \binom{|B|+1}{2} = \binom{|A|+|B|+1}{2}.
We now check that for a rotation sending to ', the direction between the two points a̱() and a̱(') of <ref> points from the poset cone ≼_ to the poset cone ≼_' of <ref>.
For any unary m-lighted n-shades and ' related by a rotation, the vector a̱(') - a̱() points from the poset cone ≼_ to the poset cone ≼_'.
We distinguish three cases according to <ref>.
Namely, if we obtain ' from by:
* replacing a singleton (r) by two singletons (s), (t) with r = s + t, then
a̱(') - a̱() = ( s(m+n-p+t+c_p) + \binom{s}{2} )( e̱_{p-t} - e̱_p ),
and we have p-t ≺_ p while p ≺_' p-t, where p ps(r) is the preceeding sum of r in .
* exchanging a singleton (s) with a cut c (with (s) above c in ), then a̱(') - a̱() = e̱_c - e̱_p and we have c ≺_ p while p ≺_' c, where p ps(s).
* exchanging the labels of two consecutive cuts c,c' with no singleton in between them (with c above c' in ), then a̱(') - a̱() = e̱_c' - e̱_c and we have c' ≺_ c while c ≺_' c'.
In all cases, the vector a̱(') - a̱() points from the poset cone ≼_ to the poset cone ≼_'.
We have seen in <ref> that the preposet cones of the preposets ≼_ for all m-lighted n-shades define a complete simplicial fan.
By <ref>, whose conditions of application are checked in <ref>, we thus obtain <ref>.
Finally, <ref> is a direct consequence of <ref>, since ⟨ a̱(') - a̱(), ω̱ ⟩ > 0 for and ' related by a right rotation.
§ CUBIC REALIZATIONS
In this section we give an alternative description of the m-painted n-tree and m-lighted n-shade rotation lattices, generalizing the triword description of the Hochschild lattice <cit.>.
We also construct the cubic subdivisions realizing the face posets of the (m,n)-multiplihedron and of the (m,n)-Hochschild polytope, generalizing the original construction of <cit.>.
We first fix our conventions and give examples of cubic realizations (<ref>), then recall the cubic (m,n)-multiplihedron (<ref>) and finally construct the cubic (m,n)-Hochschild polytope (<ref>).
§.§ Cubic realizations of posets
We first propose formal definitions of two types of cubic realizations of posets.
The first is the cubic analogue of the face lattice of a polytope while the second is the cubic analogue of the oriented skeleton of a polytope.
These definitions are illustrated in <ref> with the permutahedron and the associahedron.
We call cube any axis parallel parallelepiped in ^d.
If x̱, y̱∈^d are such that x_i ≤ y_i for all i ∈ [d], we denote by (x̱,y̱) the cube ∏_i ∈ [d] [x_i,y_i].
A subcube of C is a cube included in C whose vertices all lie on the boundary of C.
A cubic subdivision of C is a collection Ḑ of subcubes of C such that
* The boundary of C is the union of all the subcubes of Ḑ,
* for any subcubes C', C”∈Ḑ, the intersection C' ∩ C” is either empty or a subcube of Ḑ, with (C' ∩ C”) < min((C), min(C')).
The subcube poset of the cubic subdivision Ḑ is the poset on Ḑ∪{C} ordered by inclusion.
A cubic subdivision realizes a poset P if its subcube poset is isomorphic to P.
A cubic realization of a poset P is a map γ : P →^d such that
* for any cover relation p ⋖ q in P, the difference γ(p) - γ(q) is a positive multiple of some basis vector e̱_i,
* γ(P) lies on the boundary of ( γ(min(P)), γ(max(P)) ).
Note that our conventions are slightly unusual: we require that the cubic coordinates are decreasing along the poset, so that the maximum of the poset P has minimal cubic coordinates.
Our choice is driven by the fact that we want our cubic coordinates for 1-lighted n-shades to coincide with the triwords of <cit.>.
We next illustrate these two notions of cubic realizations with the Lehmer code of a permutation and the bracket vector of a binary tree (or we should say adaptations of them, in order to stick with our conventions).
The Lehmer code <cit.> of a permutation σ of [m] is the vector Ḻ(σ) := ( L_j(σ) )_j ∈ [m] where L_j(σ) = #{ i < j : σ^-1(i) < σ^-1(j) }.
Note that L_j(σ) ∈{0, …, j-1}, so that it is standard to forget the first coordinate (which is always 0).
These Lehmer codes define
* a cubic realization of the weak order on the permutations of [m],
* a cubic subdivision realizing the face lattice of the permutahedron by the set of cubes ( Ḻ(σ), Ḻ(τ) ) for σ≤τ defining a face of .
See <ref> (left) for illustration.
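To make the convention concrete, here is a minimal Python sketch of the Lehmer code as defined above; the encoding of σ as a list in one-line notation is our own choice of representation, not notation from the text.

```python
def lehmer_code(sigma):
    """Lehmer code of a permutation sigma of [m] in one-line notation, with the
    convention L_j(sigma) = #{ i < j : sigma^{-1}(i) < sigma^{-1}(j) }."""
    pos = {v: i for i, v in enumerate(sigma)}  # pos[v] = position of the value v
    return [sum(1 for i in range(1, j) if pos[i] < pos[j])
            for j in range(1, len(sigma) + 1)]

# lehmer_code([1, 2, 3]) == [0, 1, 2] and lehmer_code([3, 2, 1]) == [0, 0, 0]:
# the identity (minimum of the weak order) gets the largest coordinates, matching
# the convention that cubic coordinates decrease along the poset.
```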
The bracket vector <cit.> of a binary tree T with n internal nodes is the vector Ḇ(T) := ( B_j(T) )_j ∈ [n] where B_j(T) is the number of descendants of j which are smaller than j (for the usual inorder labeling of T).
Equivalently, B_j(T) is the number of leaves minus 1 in the left subtree of j.
Note that B_j(T) ∈{0, …, j-1}, so that it is standard to forget the first coordinate (which is always 0).
The bracket vectors define
* a cubic realization of the Tamari lattice on the binary trees with n nodes,
* a cubic subdivision realizing the face lattice of the associahedron by the set of cubes ( Ḇ(S), Ḇ(T) ) for S ≤ T defining a face of .
See <ref> (right) for illustration.
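Similarly, the bracket vector admits a short recursive sketch; the encoding of a binary tree as nested pairs (left, right) with None for leaves is our own convenience.

```python
def bracket_vector(tree):
    """Bracket vector of a binary tree given as nested pairs (left, right), with
    None for leaves: B_j is the number of leaves of the left subtree of the j-th
    internal node (in inorder) minus 1."""
    vec = []

    def leaves(t):
        if t is None:
            return 1
        left = leaves(t[0])        # leaves of the left subtree
        vec.append(left - 1)       # this node is visited in inorder position j
        return left + leaves(t[1])

    leaves(tree)
    return vec

# Left comb ((None, None), None) -> [0, 1]; right comb (None, (None, None)) -> [0, 0].
```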
§.§ Cubic (m,n)-multiplihedron
We now briefly present the cubic realizations of the m-painted n-tree refinement poset and rotation lattice.
These are a mixture of the Lehmer codes of the permutations of [m] and the bracket vectors of the binary trees with n nodes.
The case m = 1 was already discussed in <cit.>.
It is convenient to use the poset ≺_ to define the cubic vector of an m-painted n-tree.
The cubic vector of a binary m-painted n-tree is the vector C̱() := ( C_j() )_j ∈ [m+n] where C_j() := #{ i < j : i ≺_ j }.
Note that C_j() ∈{0, …, j-1}, so that it is standard to forget the first coordinate (which is always 0).
Observe that
* when n = 0, we have the Lehmer code of a permutation presented in <ref>,
* when m = 0, we have the bracket vector of a binary tree presented in <ref>.
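If the relation ≺_ is given extensionally as a set of ordered pairs (a representation chosen here only for illustration), the cubic vector is a direct count, specializing to the two sketches above when n = 0 or m = 0.

```python
def cubic_vector(ground_size, strictly_below):
    """Cubic vector (C_1, ..., C_{m+n}) with C_j = #{ i < j : i precedes j },
    where strictly_below is the set of ordered pairs (i, j) of the relation."""
    return [sum(1 for i in range(1, j) if (i, j) in strictly_below)
            for j in range(1, ground_size + 1)]
```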
The following statement is illustrated in <ref> (top).
We skip its proof as it is a straightforward generalization of the permutahedron and associahedron cases.
The cubic vectors of m-painted n-trees define
* a cubic realization of the right rotation lattice on m-painted n-trees,
* a cubic subdivision realizing the face lattice of the (m,n)-multiplihedron by the set of cubes ( C̱(), C̱(') ) for ≤' defining a face of .
§.§ Cubic (m,n)-Hochschild polytope
We now provide cubic realizations for the (m,n)-Hochschild polytope.
Unfortunately, the formula for the cubic coordinates of an m-lighted n-shade is not just obtained by counting non-inversions in ≺_ (pairs i < j with i ≺_ j).
We thus first introduce a bijection between the m-lighted n-shades and the (m,n)-Hochschild words, generalizing the triwords of <cit.>.
We then use these (m,n)-Hochschild words to obtain cubic realizations.
§.§.§ (m,n)-Hochschild words
We start with (m,n)-words, defined as follows.
An (m,n)-word is a word w := w_1 … w_n of length n on the alphabet {0,1, ... , m+1} such that
* w_1 ≠ m+1
* for s ∈ [1,m], w_i = s implies w_j ≥ s for all j<i
We denote by the poset of (m,n)-words ordered componentwise (w ≤ w' if and only if w_i ≤ w'_i for all i ∈ [n]).
When m = 0, the second condition is empty, so that the (0,n)-words are binary words of length n starting with a 0, and [0][n] is isomorphic to the boolean lattice on n-1 letters.
When m = 1, the (1,n)-words are precisely the triwords of <cit.>, and [1][n] is isomorphic to the Hochschild lattice.
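As a sanity check, the two defining conditions can be tested directly; the list encoding of w is again ours.

```python
def is_mn_word(w, m):
    """Test the defining conditions of an (m,n)-word w = w_1 ... w_n on the
    alphabet {0, ..., m+1}: w_1 != m+1, and w_i = s with 1 <= s <= m forces
    w_j >= s for all j < i."""
    if not w or any(not 0 <= letter <= m + 1 for letter in w) or w[0] == m + 1:
        return False
    return all(not (1 <= w[i] <= m and any(w[j] < w[i] for j in range(i)))
               for i in range(len(w)))

# For m = 1 this tests membership among the triwords: is_mn_word([1, 2, 0], 1)
# is True, while is_mn_word([0, 2, 1], 1) is False since the 1 is preceded by a 0.
```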
An (m,n)-Hochschild word is a pair (σ, w), where σ is a permutation of [m] and w is an (m,n)-word.
We now define a bijection between the m-lighted n-shades and the (m,n)-Hochschild words.
Recall that we denote by ps(x) the preceding sum of an entry x in an m-lighted n-shade (see <ref>).
Consider a unary m-lighted n-shade (S, C, σ) and denote by s_1, …, s_k the values of the singleton tuples of S.
We associate to an (m,n)-Hochschild word (σ, w), where the permutation σ is given by the labels of the cuts of , and the (m,n)-word w has p-th entry w_p given by
* if there is j ∈ [k] such that p = ps(s_j)-m-s_j+1, then the number of cuts below s_j,
* m+1 otherwise.
In other words, for each s_j, we write the number of cuts below s_j followed by s_j-1 copies of m+1.
See <ref> for some examples.
Conversely, we associate to an (m,n)-Hochschild word (σ, w) a unary m-lighted n-shade (S, C, σ) where the labels of the cuts of is given by the permutation σ, and the n-shade S is the sequence of (either singleton or empty) tuples
S (s_m,1) … (s_m,k_m)(∅) … (∅) (s_i,1) … (s_i,k_i)(∅) ⋯ (∅)(s_0,1) … (s_0,k_0),
where the s_i,j≥ 1 are such that
w = m(m+1)^s_m,1-1… i(m+1)^s_i,1-1… i(m+1)^s_i,k_i-1… 0(m+1)^s_0,k_0-1.
In other words, we place the m cuts-to-be, and place a tuple (s) before the (m-i+1)st cut for each maximal subword of w of the form i(m+1)^s-1.
See <ref> for some examples.
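The two directions of this correspondence are summarized by the following sketch; a unary shade is represented here only through the data the rule uses, namely the top-to-bottom list of pairs (number of cuts below s_j, value s_j), which is a simplified stand-in of ours for the full (S, C, σ) structure (the permutation σ is simply carried along unchanged).

```python
def shade_to_word(singletons, m):
    """Forward direction: for each singleton value s (top to bottom), write the
    number of cuts below it, followed by s-1 copies of m+1."""
    w = []
    for cuts_below, s in singletons:
        w.append(cuts_below)
        w.extend([m + 1] * (s - 1))
    return w


def word_to_singletons(w, m):
    """Inverse direction: split w into maximal blocks i (m+1)^(s-1) and return
    the list of pairs (number of cuts below, s)."""
    out, i = [], 0
    while i < len(w):
        j = i + 1
        while j < len(w) and w[j] == m + 1:
            j += 1
        out.append((w[i], j - i))
        i = j
    return out
```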
The maps of <ref> are inverse bijections between the unary m-lighted n-shades and the (m,n)-Hochschild words.
First, the word associated to a unary m-lighted n-shade is an (m,n)-word.
Indeed,
* the first letter is not m+1, because there are only m cuts,
* as we are reading the shade from top to bottom, the numbers written before the number s ∈ [1,m] come from higher entries that have at least s cuts below them, so these numbers are at least s.
Conversely, the sequence of tuples associated to an (m,n)-Hochschild word is a unary m-lighted n-shade.
Indeed, the total sum is the length of w, and each tuple is either empty or a singleton contained in a singleton cut.
Finally, it is immediate to check that the two maps are inverse to each other.
Through the bijection of <ref>, we can thus transport the rotation lattice on unary m-lighted n-shades to a lattice on (m,n)-Hochschild words.
The relation between words can be described as follows.
For two Hochschild words (σ,v) and (τ,w), we have (σ,v) ≤ (τ,w), if
* σ≤τ in weak order,
* v ≤ w coordinatewise,
* there exists a reduced expression σ^-1∘τ = τ_i_1∘…∘τ_i_k (a path in the permutahedron from σ to τ) and a sequence of (m,n)-words v = h_0 ≤ h_1 ≤ h_2 …≤ h_k = w such that h_l does not have the entry i_l.
The lattice is thus a subposet of the Cartesian product of the weak order on permutations of [m] and the (m,n)-word poset .
It would be nice to have a more explicit formulation of the last condition in the description of the relation (σ,v) ≤ (τ,w), but we were not able to find it.
Finally, as it is a fiber of the lattice morphism (σ, w) ↦σ from the (m,n)-Hochschild word rotation lattice to the weak order, we obtain that the (m,n)-word poset is a lattice.
As mentioned in <ref>, it seems to have much more interesting properties than the word rotation lattice (for instance, it seems to be extremal, and its Coxeter polynomial seems to be a product of cyclotomic polynomials).
The (m,n)-word poset is a lattice.
The lattice [1][n] has a geometric interpretation in the context of homotopical algebra.
Specifically, the Hochschild polytope Hoch(1,n) has a polytopal subdivision whose directed 1-skeleton is [1][n].
The Hochschild polytopes Hoch(1,n) form an operadic bimodule over the operad of skew cubes Hoch(0,n) in the category of CW-spaces <cit.>, and tensor powers of this bimodule over the operad are CW-isomorphic to this subdivision.
Algebraically this allows for the composition of sequences of morphisms of A_∞-modules over DG-algebras (or of representations up to homotopy <cit.>).
§.§.§ Cubic realizations
Passing from the unary m-lighted n-shades to the (m,n)-Hochschild words allows us to construct cubic realizations for the m-lighted n-shade refinement lattice and rotation lattice.
The cubic vector of an (m,n)-Hochschild word is the vector obtained by the concatenation of the Lehmer code of σ (forgetting the first coordinates which is always 0) with the (m,n)-word w.
The cubic vector C̱() of a unary m-lighted n-shade is the cubic vector of the associated (m,n)-Hochschild word via the bijection of <ref>.
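Concretely, and re-deriving the Lehmer code inline so that the sketch is self-contained (the one-line encoding of σ is again our own convention):

```python
def hochschild_cubic_vector(sigma, w):
    """Cubic vector of an (m,n)-Hochschild word (sigma, w): the Lehmer code of
    sigma with its first (always zero) coordinate dropped, followed by the word w."""
    pos = {v: i for i, v in enumerate(sigma)}
    lehmer = [sum(1 for i in range(1, j) if pos[i] < pos[j])
              for j in range(1, len(sigma) + 1)]
    return lehmer[1:] + list(w)
```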
The following statement is illustrated in <ref> (top).
The cubic vectors of m-lighted n-shades define
* a cubic realization of the right rotation lattice on m-lighted n-shades,
* a cubic subdivision realizing the face lattice of the (m,n)-Hochschild polytope by the set of cubes ( C̱(), C̱(') ) for ≤' defining a face of .
We proceed by induction, starting from the Lehmer–Saneblidze–Umble realization of permutahedra for the case Hoch(m,0). Suppose that the cubic subdivision for Hoch(m,n-1) is already constructed. To obtain the cubic subdivision of , we further subdivide the boundary of Hoch(m,n-1) × [0,n].
Let (S,C,μ) be a shade corresponding to a d-dimensional face in Hoch(m,n-1), which can also be viewed as a subcube. For a cut c_i ∈ C below all of S, let p_i represent the total size of all μ-parts on c_i and all the cuts below. The subdivision of [0,n] relative to is denoted by [0,n]_, and it consists of the subdivision of [0,n] by the points p_i. The intervals in [0,n]_ correspond to the (d+1)-dimensional shades that map to under the “forgetting the last leaf” map. Refinement of shades corresponds to refinement of relative interval subdivisions. Then Hoch(m,n-1) × [0,n] subdivides as ⋃_× [0,n]_. The vertex coordinates coincide with those of the construction above, by the description of the points p_i for unary shades, where each cut carries a μ-part of size 1.
This cubic realization provides an alternative proof of the lattice property, using induction and the fact that fibers of the map π: Hoch(m,n) →Hoch(m,n-1) are totally ordered.
Indeed, assuming that Hoch(m,n-1) is known to be a lattice, let and ' be two elements in Hoch(m,n) whose meet we want to find.
Observe that the point with coordinates (π() ∧π('), 0) is smaller than both and '.
The meet of and ' is thus the largest element in the fiber of π() ∧π(') which is smaller than both and ' (it exists as the fiber is totally ordered).
§ ACKNOWLEDGEMENTS
We thank Frédéric Chapoton for suggesting to look for polytopal realizations of the Hochschild lattices.
This work started at the workshop “Combinatorics and Geometry of Convex Polyhedra” held at the Simons Center for Geometry and Physics in March 2023.
We are grateful to the organizers (Karim Adiprasito, Alexey Glazyrin, Isabella Novik, and Igor Pak) for this inspiring event, and to all participants for the wonderful atmosphere.
§ ENUMERATION TABLES
All references like A000142 are entries of the Online Encyclopedia of Integer Sequences <cit.>.
§.§ Multiplihedra
§.§ Hochschild polytopes
§.§ Singletons
|
http://arxiv.org/abs/2307.04434v1 | 20230710091850 | One-arm exponent of critical level-set for metric graph Gaussian free field in high dimensions | [
"Zhenhao Cai",
"Jian Ding"
] | math.PR | [
"math.PR"
] |
[Zhenhao Cai]School of Mathematical Sciences, Peking University
[email protected]
^1School of Mathematical Sciences, Peking University
[Jian Ding]School of Mathematical Sciences, Peking University
[email protected]
One-arm exponent of critical level-set for metric graph Gaussian free field in high dimensions
Jian Ding^1
August 12, 2023
==============================================================================================
In this paper, we study the critical level-set of Gaussian free field (GFF) on the metric graph ℤ^d,d>6. We prove that the one-arm probability (i.e. the probability of the event that the origin is connected to the boundary of the box B(N)) is proportional to N^-2, where B(N) is centered at the origin and has side length 2⌊ N ⌋. Our proof is hugely inspired by Kozma and Nachmias <cit.> which proves the analogous result of the critical bond percolation for d≥ 11, and by Werner <cit.> which conjectures the similarity between the GFF level-set and the bond percolation in general and proves this connection for various geometric aspects.
§ INTRODUCTION
In this paper, we study the Gaussian free field (GFF) on the metric graph ℤ^d. To define it precisely, we first review the definition of discrete Gaussian free field (DGFF) on the lattice ℤ^d, where we assume d≥ 3 in this paper. For any x∈ℤ^d, let ℙ_x be the law of a continuous-time simple random walk {S_t}_t≥ 0 on ℤ^d with starting point x and transition rate 1/2d in each direction. We denote the corresponding expectation of ℙ_x by 𝔼_x. The Green's function is defined as
G(x,y):= 𝔼_x( ∫_0^∞1_S_t=ydt ) , ∀ x,y∈ℤ^d.
The DGFF {ϕ_x}_x∈ℤ^d is a mean-zero Gaussian field, whose covariance is given by
𝔼( ϕ_x_1ϕ_x_2) =G(x_1,x_2), ∀ x_1,x_2∈ℤ^d.
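Since the holding times of the walk are i.i.d. exponential with mean 1 and independent of the jump chain, G(x,y) equals the expected number of visits to y by the embedded discrete-time walk started at x. The following Monte Carlo sketch (sample size and truncation are arbitrary choices of ours, and the truncation slightly underestimates G) is only meant to make the definition concrete.

```python
import random

def green_mc(x, y, d=3, n_walks=5000, max_steps=2000, seed=0):
    """Estimate G(x, y) as the expected number of visits to y by the discrete-time
    simple random walk started at x, truncated after max_steps jumps."""
    random.seed(seed)
    visits = 0
    for _ in range(n_walks):
        pos = list(x)
        for _ in range(max_steps):
            if pos == list(y):
                visits += 1
            axis = random.randrange(d)
            pos[axis] += random.choice((-1, 1))
    return visits / n_walks

# For d = 3, green_mc((0, 0, 0), (0, 0, 0)) should be close to 1/(1 - p_return),
# roughly 1.52, the expected number of visits of the walk to its starting point.
```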
We denote the ℓ^1 and ℓ^∞ norms by |·|_1 and |·| respectively.
The level-set E^≥ h:={x∈ℤ^d: ϕ_x≥ h} (h∈ℝ) of DGFF has been extensively studied. It was proved that E^≥ h exhibits a non-trivial phase transition as the level h varies, and that the critical level h_*(d) is positive for all d≥ 3 (see Bricmont, Lebowitz and Maes <cit.>, Rodriguez and Sznitman <cit.>, Drewitz, Prévost and Rodriguez <cit.>). Drewitz and Rodriguez <cit.> further proved that h_*(d) is asymptotic to √(2log(d)) as d→∞. In their celebrated work <cit.>, Duminil-Copin, Goswami, Rodriguez and Severo established that h_*(d) also serves as the critical threshold between the strongly non-percolative regime and the strongly percolative regime:
* For any h>h_*, the probability of the existence of a cluster (i.e. connected component) of E^≥ h that crosses the annulus B(2N)∖ B(N) (where B(M):={x∈ℤ^d:|x|≤ M}) converges to 0 as N→∞;
* For any h<h_*, with probability at least 1-e^-cN^c' there exists a cluster of E^≥ h∩ B(N) with diameter at least N/5, and moreover, any two clusters of E^≥ h∩ B(N) with diameter at least N/10 are connected by E^≥ h∩ B(2N).
In addition, much has been understood regarding the percolative properties of the level-set in both the supercritical and subcritical regimes (see e.g. Drewitz, Ráth and Sapozhnikov <cit.>, Popov and Ráth <cit.>, Popov and Teixeira <cit.>, Goswami, Rodriguez and Severo <cit.>). Despite these extensive works, our understanding in the critical regime remains limited. In this work, we focus on the critical behavior of the level-set for the GFF on the metric graph, which is much more tractable.
Let 𝕃^d:={{x,y}:x,y∈ℤ^d,|x-y|_1=1} be the edge set of ℤ^d. For each e={x,y}∈𝕃^d, we consider I_e as a compact interval of length d with two endpoints identical to x and y respectively. The metric graph generated by ℤ^d is defined as ℤ^d:= ∪_e∈𝕃^dI_e.
The GFF {ϕ_v}_v∈ℤ^d on the metric graph, as an
extension of the DGFF, is defined as follows. Given a DGFF {ϕ_x}_x∈ℤ^d, set ϕ_v=ϕ_v for all lattice points v∈ℤ^d. For each interval I_e with e={x_1,x_2}, {ϕ_v}_v∈ I_e is given by an independent Brownian bridge of length d with variance 2 at time 1, conditioned on ϕ_x_1=ϕ_x_1 and ϕ_x_2=ϕ_x_2. Readers may refer to <cit.> for more details of the construction of {ϕ_v}_v∈ℤ^d. For any h∈ℝ, we denote the level-set of ϕ above h by
E^≥ h:={v∈ℤ^d:ϕ_v≥ h }.
Lupu <cit.> proved that the critical level h_* of E^≥ h exactly equals to 0. More precisely, for any h<0, the level-set E^≥ h almost surely percolates (i.e. contains an infinite connected component). Moreover, at the critical level h=h_*=0, E^≥ 0 does not percolate. As a corollary, the so-called one-arm probability ℙ[0[]E^≥ 0∂ B(N)] converges to 0 as N→∞, where 0 is the origin of ℤ^d, ∂ A:={x∈ A:∃ y∈ℤ^d∖ A such that {x,y}∈𝕃^d}, and “A_1[]E^≥ 0 A_2” denotes the event that there exists a path on ℤ^d in E^≥ 0 joining A_1 and A_2 (see Section <ref> for the precise definition). As for quantitative estimates, Ding and Wirth <cit.> employed a martingale argument and proved various polynomial bounds for the one-arm probability in various dimensions. Precisely, they proved that for any d≥ 3, there exist constants C(d), C'(d)>0 such that for all N>1,
* when d=3,
C(3)/√(N)≤ℙ[0[]E^≥ 0∂ B(N)] ≤ C'(3)√(log(N)/N);
* when d>3,
C(d)/N^d/2-1≤ℙ[ 0[]E^≥ 0∂ B(N)] ≤C'(d)/√(N).
After that, with a different approach, Drewitz, Prévost and Rodriguez <cit.> substantially improved <cit.> by extending the estimates to a large class of transient graphs (note their bounds for lattices are the same as in <cit.>).
Generally, physicists and mathematicians may conjecture that there exists a constant ρ(d)>0 called critical exponent such that ℙ[ 0[]E^≥ 0∂ B(N)]= N^-1/ρ+o(1). As mentioned in (<ref>), the 3-dimensional critical exponent has been proved to be 2 while the parallel problems in other dimensions remain open. In this paper, we prove that the critical exponents for d>6 equal to 1/2.
For d>6, there exist constants C_1(d),C_2(d)>0 such that for all N>1,
C_1N^-2≤ℙ[ 0[]E^≥ 0∂ B(N)] ≤ C_2N^-2.
In light of Lupu's coupling (<cit.>) between the GFF and the critical loop soup (see (<ref>)), the analogue of Theorem <ref> also holds for the critical loop soup cluster on ℤ^d (d>6).
The parallel result of Theorem <ref> for the critical bond percolation was conjectured to be true for d>6 (see e.g. Grimmett <cit.>). Up to now, this conjecture was only proved partially. In <cit.>, Kozma and Nachmias proved it under the following assumptions: (i) d>6; (ii) when p=p_c(d) (where p_c is the critical percolation probability), the two-point function ℙ[x[]y] satisfies ℙ[x[]y]≍ |x-y|^2-d (“f≍ g” means ∃ constants C'>C>0 such that Cg≤ f≤ C'g). Extensive efforts have been made on proving the assumption (ii) (see e.g. Fitzner and van der Hofstad <cit.>, Hara and Slade <cit.>). Currently, the best result is given by <cit.>, which confirms Assumption (ii) for d≥ 11. It is worth mentioning that Hara, Slade and van der Hofstad <cit.> verified Assumption (ii) for the percolation on the sufficiently spread-out lattice for d>6, where two non-adjacent points within a bounded distance can also form an edge opened with a polynomial decaying probability (with respect to the distance). In summary, for 7≤ d≤ 10, the counterpart of Theorem <ref> for bond percolation remains open.
Let us turn back to the case of the GFF on the metric graph ℤ^d. In <cit.>, Werner asserted that in high dimensions (i.e. d>6), the GFF on ℤ^d
becomes asymptotically independent and thus shares similar behavior with
the bond percolation. Our Theorem <ref> confirms his conjecture and heuristics from the perspective of the one-arm probability, and in fact our proof of Theorem <ref> hugely benefits from some of the many valuable ideas presented in <cit.> (see e.g. Section <ref> for more detailed discussions). However, compared to the bond percolation, GFF has a considerable polynomial correlation between the values of different sites, which causes numerous technical difficulties. How to handle the correlation and build local independence is the core of proving Theorem <ref>. In earlier works on the level-set of GFF (see e.g. <cit.>), one usually employs the idea of decoupling to obtain the desired independence, along which usually one has to slightly change the threshold for the level-set in order to get an inequality in the desired direction. Since we work at the criticality, it is essential that we stick to the critical threshold throughout our analysis. In order to address this challenge, we will follow the Kozma–Nachmias framework as in <cit.> in the overview level, and in its implementation, we combine in a novel way many ingredients such as tree expansion, exploration processes, multi-scale analysis, and so on.
An interesting direction for future research is to establish the existence of the incipient infinite cluster (IIC) for the GFF on ℤ^d for d>6, which can be understood as the critical cluster under the conditioning of growing to infinity. Provided with the construction of the IIC, it would then be natural to study its geometric properties, including the two-point function, the dimension and the random walk on the IIC. We remark that there have been numerous studies on the IIC of the bond percolation (d≥ 19) and the percolation on the sufficiently spread-out lattice (d>6). Notably, van der Hofstad and Járai <cit.> and Heydenreich, van der Hofstad and Hulshof <cit.> constructed the IIC through applications of lace expansion, with also a computation of the two-point function. Additionally, van Batenburg <cit.> computed the mass dimension and volume growth exponent of the IIC. Furthermore, Kozma and Nachmias <cit.> studied the random walk on the IIC, and computed its spectral dimension as well as the diameter and range for the random walk thereon. See also Heydenreich, van der Hofstad and Hulshof <cit.> for further progress on this.
§ PRELIMINARIES
For the convenience and preciseness of exposition, we record some necessary notations, definitions and well-known results for random walks, Brownian motions, Gaussian free fields and loop soups in this section.
§.§ Graph, path and set
We denote by ℕ (resp. ℕ^+) the set of non-negative integers (resp. positive integers). We also denote by ℝ^+ the set of positive real numbers. For any x, y∈ℤ^d, we write x∼ y if {x,y}∈𝕃^d. Recall that ℤ^d=∪_e∈𝕃^dI_e, where each I_e=I_{x,y} is an interval with length d and endpoints x,y. Note that ℤ^d is a subset of ℤ^d. For any v_1,v_2∈ I_e, we denote the sub-interval of I_e with endpoints v_1 and v_2 by I_[v_1,v_2]. For any t∈ [0,1] and any x,y∈ℤ^d with x∼ y, let x+t· (y-x) be the point in I_{x,y} such that the length of the sub-interval I_[x,x+t· (y-x)] equals to td.
For a subset A⊂ℤ^d, we write |A| for the number of lattice points included in A. The diameter of A is diam(A):=sup_v_1,v_2∈ A∩ℤ^d|v_1-v_2|. The boundary of A is defined as
∂ A:= {x∈ A:∃ y∈ℤ^d∖ A such that x∼ y }.
A (time-parametrized) path on ℤ^d is a function η:[0,T ) →ℤ^d where T∈ℝ^+ (or T=∞) such that there exist m∈ℕ (or m=∞), 0=T_0<...<T_m+1=T and {x_i}_0≤ i<m+1 ⊂ℤ^d with x_i∼ x_i+1 for all 0≤ i< m such that
η(t)=x_i, ∀ 0≤ i< m+1, t∈[T_i,T_i+1).
For such a path η, its length is len(η)=m. Note that if m=0, η is also a path (although it contains only one point) and its length is 0. The range of η is ran(η):={x_0,...,x_m}. For 0≤ i< m+1, we say that T_i(η) is the i-th jumping time, η^(i):=x_i is the i-th position, and H_i(η):=T_i+1(η)-T_i(η) is the i-th holding time.
A path on ℤ^d is a continuous function η:[0,T ) →ℤ^d, T∈ℝ^+∪{∞}. When T is finite, we may denote η(T)=lim_t→ Tη(t). From now, we always use the notation η for a path on ℤ^d and η for a path on ℤ^d. With a slight abuse of notations, let ran(η):={η(t):0≤ t< T} be the range of η. For 0≤ t_1<t_2≤ T, we define the sub-path η[t_1,t_2 ) : [0,t_2-t_1 ) →ℤ^d of η as
η[t_1,t_2 ) (s)= η(s+t_1), ∀ s∈[0,t_2-t_1 ) .
For any subsets A_1,A_2,F⊂ℤ^d, we say A_1 and A_2 are connected by F if either A_1∩ A_2≠∅, or there is a path η contained in F that intersects A_1 and A_2. We write this connection relation as A_1[]FA_2. Especially, when A_i={v} for some i∈{1,2} and v∈ℤ^d, we may omit the braces.
For any x∈ℤ^d and N>0, let B_x(N):= { y∈ℤ^d: |x-y|≤ N} be the box in ℤ^d with center x and side length 2⌊ N ⌋. We also define the box in ℤ^d as follows:
B_x(N):=⋃_y_1,y_2∈ B_x(N):y_1∼ y_2,{y_1,y_2}∩ B_x(N-1)≠∅I_{y_1,y_2}.
Note that any interval I_{y_1,y_2} with y_1,y_2∈∂ B_x(N) is not contained in B_x(N). Especially, when x is exactly the origin, we may omit the subscript and write B(N):=B_0(N), B(N):=B_0(N).
§.§ Statements about constants
We use notations C,C',c,c',... for the local constants with values changing according to the context. The numbered notations C_1,C_2,c_1,c_2,... are used for global constants, which are fixed throughout the paper. We usually use the upper-case letter C (maybe with some superscript or subscript) for large constants and use the lower-case c for small ones. In addition, we may also use some other letters such as K,λ,δ... for constants. When a constant depends on some parameter or variable, we will point it out in brackets. A constant without additional specification can only depend on the dimension d.
§.§ Stretched exponential and sub-polynomial functions
We say a function f(·) is stretched exponentially small if there exist constants C,c,δ>0 such that f(n)≤ Ce^-cn^δ for all n≥ 1. If f(·) is stretched exponentially small, we may write “f(·)=s.e.(·)”. We also say a function is super-polynomially small if there exist constants C,c,δ>0 such that f(n)≤ Ce^-clog^1+δ(n) for all n≥ 1. Similarly, we may use the notation “f(·)=s.p.(·)” for such a function.
§.§ Random walk, bridge measure, stopping time and capacity
Recall that ℙ_x is the law of the continuous-time simple random walk {S_t}_t≥ 0 starting from x. For i∈ℕ, let T_i be the i-th jumping time, S^(i) be the i-th position and H_i be the i-th holding time (note that {S_t}_t≥ 0 is a.s. a path on ℤ^d). Then ℙ_x satisfies ℙ_x(S^(0)=x)=1 and ℙ_x(S^(n+1)=y_2|S^(n)=y_1)=(2d)^-1·1_y_1∼ y_2. In addition, the holding times {H_i}_i∈ℕ are independent exponential random variables with rate 1. We denote the expectation under ℙ_x by 𝔼_x. If the starting point is exactly the origin, we may omit the subscript.
The transition probability is denoted by p_t(x,y):= ℙ_x(S_t=y ) for all x,y∈ℤ^d and t≥ 0. The (normalized) bridge measure ℙ_x,y^t(·) is the conditional distribution of {S_t'}_0≤ t'≤ t (starting from x) given {S_t=y}.
For A⊂ℤ^d, we denote the first time that {S_t}_t≥ 0 intersects A by τ_A:=inf{t≥ 0:S_t∈ A}. We also denote the hitting time by τ_A^+:=inf{t≥ T_1:S_t∈ A}. For completeness, we set inf∅=∞. Especially, when A={x} for some x∈ℤ^d, we may omit the brackets.
For any non-empty subset A⊂ℤ^d and x∈ A, the escape probability of x with respect to A is esc_A(x):=ℙ_x(τ_A^+=∞). The capacity of A is defined as cap(A):=∑_x∈ Aesc_A(x). By Lawler and Limic <cit.>, one has
cap( B(N)) ≍ N^d-2.
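For a small finite set A, the escape probabilities, and hence cap(A), can be approximated by simulation; the sketch below (with a truncation of ours that biases the estimate slightly upward) is only meant to make the definition concrete.

```python
import random

def capacity_mc(A, d=3, n_walks=2000, max_steps=2000, seed=0):
    """Monte Carlo sketch of cap(A) = sum_{x in A} esc_A(x) for a small finite
    set A in Z^d (d >= 3): esc_A(x) is approximated by the probability that the
    walk started at x does not return to A within max_steps jumps."""
    random.seed(seed)
    A = {tuple(a) for a in A}
    cap = 0.0
    for x in A:
        escaped = 0
        for _ in range(n_walks):
            pos = list(x)
            returned = False
            for _ in range(max_steps):
                axis = random.randrange(d)
                pos[axis] += random.choice((-1, 1))
                if tuple(pos) in A:
                    returned = True
                    break
            escaped += 0 if returned else 1
        cap += escaped / n_walks
    return cap

# For A = {(0, 0, 0)} in d = 3, cap(A) = 1/G(0, 0), roughly 0.66.
```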
§.§ Brownian motion S_t on ℤ^d
{S_t}_t≥ 0 is a continuous-time Markov process on ℤ^d. When S_t is in the interior of some interval I_e, it behaves as a one-dimensional standard Brownian motion. Every time when S_t visits a lattice point x, it will uniformly choose a segment from {I_{x,y}}_y∈ℤ^d:y∼ x and behave as a Brownian excursion from x in this interval. Once there is an excursion hitting some y with x∼ y, the next step continues as the same process from the new starting point y. The total local time of all Brownian excursions at x in this single step (i.e. the part of S_t from x to one of its neighbors y) is an independent exponential random variable with rate 1. We denote the law of {S_t}_t≥ 0 starting from v∈ℤ^d by ℙ_v. Let 𝔼_v be the expectation under ℙ_v. Further details about the construction of S_t can be found in Folz <cit.>.
By the aforementioned construction, given {S_t}_t≥ 0∼ℙ_x, the range of the Brownian motion {S_t}_t≥ 0∼ℙ_x can be recovered by taking the union of all the edges traversed by S_t as well as additional Brownian excursions at each S^(i) (for i∈ℕ) where the excursions are conditioned on returning to S^(i) before hitting one of its neighbors and the total local time at S^(i) is H_i.
For any A⊂ℤ^d, similar to τ_A, we denote the first time that S_t intersects A by τ_A:=inf{t≥ 0:S_t∈ A }.
§.§ Loop, loop measure and loop soup
In this part, we introduce some basic definitions and properties about loops on both ℤ^d and ℤ^d. As we will discuss in Section <ref>, the isomorphism theorem for GFF in <cit.> is one of the main tools in this paper. Loops are core elements of this tool. We hereby give a partial list of literatures about the isomorphism theorem (see Le Jan <cit.>, Marcus and Rosen <cit.>, Rosen <cit.> and Sznitman <cit.> for an excellent account on this topic). In its earlier form, isomorphism theorems connect the law of the Gaussian free field and local times for random walks (see e.g. Ray <cit.> and Knight <cit.>, Dynkin <cit.>, Marcus and Rosen <cit.>, Eisenbaum <cit.> and Eisenbaum, Kaspi, Marcus, Rosen and Shi <cit.>). Extensions of isomorphism theorems were a topic of interest in the past decade, including for random interlacements by Sznitman <cit.> and for permanent processes by Fitzsimmons and Rosen <cit.>, Le Jan, Marcus and Rosen <cit.>. Of particular interest to our work is the isomorphism theorem discovered in Lupu <cit.>, which developed the method in <cit.> and presented a coupling where the sign clusters of the GFF on the metric graph are the same as the loop soup clusters at criticality (i.e., when the intensity α equals to 1/2). The coupling of <cit.> is very powerful and has inspired many subsequent works. It is also worth pointing out that a weaker form of this coupling was questioned in Ding <cit.> and was proved in Zhai <cit.> (which was completed independently, after the completion of <cit.>).
§.§.§ Time-parametrized loop on ℤ^d
A (time-parametrized) rooted loop ϱ on ℤ^d is a path on ℤ^d whose 0-th position and len(ϱ)-th position are the same point. We continue to use notations such as T(ρ) and T_i(ρ) for paths as introduced in Section <ref>. Two rooted loops are equivalent if they equal to each other after a time-shift. Each equivalent class ℓ of such rooted loops are called a loop on ℤ^d. As defined in <cit.>, the loop measure μ on the space of rooted loops is
μ(·) = ∑_x∈ℤ^d∫_0^∞ t^-1ℙ_x,x^t(·)p_t(x,x)dt.
Referring to <cit.>, μ is invariant under the time-shift. Thus, μ induces a measure on the space of loops, which is also denoted by μ. Since len(·) and ran(·) are invariant under the time-shift, we can define the length and range of ℓ as len(ℓ):=len(ϱ) and ran(ℓ):=ran(ϱ) for any ϱ∈ℓ.
We cite some formulas about μ in <cit.> as follows. For any integer k≥ 2, any 0=t_0<t_1<...<t_k<t and any sequence of lattice points x_0,...,x_k with x_0=x_k and x_i∼ x_i+1 for all 0≤ i≤ k-1, one has
μ(T(ϱ)∈ dt and ∀ 0≤ i≤ k=len(ϱ), ϱ^(i)=x_i,T_i(ϱ)∈ dt_i )
= t^-1e^-t(2d)^-k dt_1...dt_kdt.
For each aforementioned sequence x_0,...,x_k, its multiplicity J=J(x_0,...,x_k) is the maximal integer such that the sub-sequences (x_(j-1)kJ^-1,x_(j-1)kJ^-1+1,...,x_jkJ^-1) for 1≤ j≤ J are identical. Then we have
μ({ℓ : ∃ϱ∈ℓ such that ∀ 0≤ i≤ k=len(ϱ), ϱ^(i)=x_i})= J^-1(2d)^-k.
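For instance, the back-and-forth sequence (0,e_1,0), where e_1 is the first coordinate vector, has multiplicity J=1 and thus receives loop measure (2d)^-2, while the doubled sequence (0,e_1,0,e_1,0) has J=2 and receives loop measure 2^-1(2d)^-4.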
In addition, for any x∈ℤ^d and t>0,
μ( len(ϱ)=0,ϱ^(0)=x, T∈ dt)= t^-1e^-tdt.
For any α>0, the loop soup ℒ_α is defined as the Poisson point process in the space of loops on ℤ^d with intensity measure αμ.
§.§.§ Continuous loop on ℤ^d
In this subsection, we review the construction of continuous loop, loop measure and loop soup introduced in <cit.>. We only focus on the case of ℤ^d here, and we refer to <cit.> for more details on the construction for general graphs.
A rooted loop on ℤ^d is a path ϱ:[0,T ) →ℤ^d such that ϱ(0)=ϱ(T). Similarly, a loop on ℤ^d is an equivalent class of rooted loops such that each of them can be transformed into another by a time-shift. In this paper, we use the notations η, ϱ and ℓ for a path, a rooted loop and a loop on ℤ^d respectively. We also use η, ϱ and ℓ for their counterparts on ℤ^d. Since ran(ϱ) is invariant for all ϱ∈ℓ, we denote the range of ℓ by ran(ℓ):=ran(ϱ) for some ϱ∈ℓ.
In fact, the loops on ℤ^d and ℤ^d can be divided into the following types:
* fundamental loop: a loop that visits at least two lattice points;
* point loop: a loop that visits exactly one lattice point;
* edge loop (only for loops on ℤ^d): a loop that is contained by a single interval I_e and visits no lattice point.
By the method in <cit.>, one can use {S_t}_t≥ 0 to contruct a measure μ on the space of the continuous loops on ℤ^d. For each α>0, the loop soup on ℤ^d of parameter α, denoted by ℒ_α, is the Poisson point process with intensity measure αμ. Actually, we always focus on ℒ_1/2 in this paper. Let ℒ_1/2^f (resp. ℒ_1/2^p, ℒ_1/2^e) be the point measure composed of fundamental loops (resp. point loops, edge loops) in ℒ_1/2. We also denote by ℒ_1/2^f (resp. ℒ_1/2^p) the counterpart of ℒ_1/2^f (resp. ℒ_1/2^p) for ℒ_1/2. By the thinning property of Possion point processes, ℒ_1/2^f, ℒ_1/2^p and ℒ_1/2^e (resp. ℒ_1/2^f and ℒ_1/2^p) are independent.
For the sake of brevity, we do not distinguish a point measure ℒ from the support of ℒ in notation. Hence, we may write “ℓ∈ℒ” for “ℓ is in the support of ℒ”.
In what follows, we review a construction of ℒ_1/2 in <cit.>, by which ℒ_1/2 can be obtained by adding Brownian excursions to the loops in ℒ_1/2.
For ℒ_1/2^f: There is a coupling of ℒ_1/2^f and ℒ_1/2^f, which is equipped with a one-to-one mapping π between their loops (from ℒ_1/2^f to ℒ_1/2^f). Moreover, given ℒ_1/2^f, the range of each π(ℓ) for ℓ∈ℒ_1/2^f can be recovered as follows (this is parallel to the discussions at the end of Section <ref>). Arbitrarily take ϱ∈ℓ. For each 0≤ i≤len(ϱ), let ℬ_i be the union of Brownian excursions with total local time H_i(ϱ), starting from ϱ^(i) and conditioning on hitting ϱ^(i) before its lattice neighbors. Note that ℬ_i has the same distribution as ∪_z∈ℤ^d:z∼ϱ^(i)I_[ϱ^(i), ϱ^(i)+d^-1M_i^z· (z-ϱ^(i))], where M_i^z is the maximum of the square of a Bessel-0 process with initial value H_i(ϱ), conditioning on hitting 0 before time d. Then the range of π(ℓ) is
⋃_0≤ i≤len(ϱ)-1 I_{ϱ^(i),ϱ^(i+1)}∪⋃_0≤ i≤len(ϱ) ℬ_i.
As a corollary, one has that a.s.
ran(ℓ) ⊂ran(π(ℓ))⊂∪_x∈ran(ℓ) B_x(1).
For ℒ_1/2^p: Recall that the distribution of loops in ℒ_1/2^p is given by (<ref>). For any x∈ℤ^d, let γ_x^p be the union of ranges of loops in ℒ_1/2^p including x. Given the total holding time H_x of loops in ℒ_1/2^p including x, then γ_x^p has the same distribution as the union of Brownian excursions with total local time H_x, starting from x and conditioning on hitting x before its lattice neighbors.
For ℒ_1/2^e: For any {x,y}∈𝕃^d, we denote by γ_{x,y}^e the union of ranges of loops in ℒ_1/2^e whose range is contained in I_{x,y}. Each γ_{x,y}^e has the same distribution as the non-zero points of a standard Brownian bridge in I_{x,y} of length d, from 0 at x to 0 at y.
§.§.§ Decomposition of loops on ℤ^d
We present an approach to decompose a loop ℓ. This decomposition is a continuous analogue of that introduced in Chang and Sapozhnikov <cit.> and is closely related to the spatial Markov property of (both discrete and continuous) loop soups. Further discussions about this property can be found in Werner <cit.>.
For two disjoint subsets A_1,A_2⊂ℤ^d, consider a mapping L(A_1,A_2) as follows. For a loop ℓ, define L(A_1,A_2)(ℓ) as the collection of rooted loops ϱ:[0,T ) →ℤ^d in the equivalence class ℓ such that
* ϱ(0)∈ A_1;
* ∃ t∈ (0,T) such that ϱ(t)∈ A_2 and for all t'∈ (t,T), ϱ(t')∉ A_1∪ A_2.
Note that ℓ intersects A_1 and A_2 if and only if L(A_1,A_2)(ℓ)≠∅. For each ϱ∈ L(A_1,A_2)(ℓ), one can define a sequence of stopping time as follows:
* τ_0=0;
* ∀ k≥ 0, τ_2k+1:= inf{t>τ_2k: ϱ(t)∈ A_2};
* ∀ k≥ 0, τ_2k+2:= inf{t>τ_2k+1: ϱ(t)∈ A_1}.
Let κ(ϱ)=κ(ϱ;A_1,A_2) be the unique integer such that τ_2κ=T. Since κ(ϱ) is constant for all ϱ∈ L(A_1,A_2)(ℓ), we also denote it by κ(ℓ). Note that 2κ(ℓ) is the number of excursions in ℓ between A_1 and A_2. For 1≤ i≤κ(ϱ), we define the i-th forward crossing path as the sub-path η^F_i:= ϱ[ τ_2i-2,τ_2i-1), and define the i-th backward crossing path as the sub-path η^B_i:= ϱ[ τ_2i-1,τ_2i). In fact, for any ϱ_1,ϱ_2∈ L(A_1,A_2)(ℓ), the sequences of the forward crossing paths (also backward crossing paths) of ϱ_1 and ϱ_2, say {η^F_1,i}_1≤ i≤κ(ℓ) and {η^F_2,i}_1≤ i≤κ(ℓ), are identical to each other under an index translation. I.e., there is an integer a_*∈ [1,κ(ℓ)-1] such that η^F_1,i=η^F_2,i_* for all 1≤ i≤κ(ℓ), where i_*≡ i+a_* mod κ(ℓ). Note that only forward crossing paths can intersect A_1 and no backward crossing path can. I.e., ran(ℓ)∩ A_1= ∪_i=1^κ(ℓ)ran(η^F_i)∩ A_1. See Figure <ref> for an illustration for this decomposition.
For any loop ℓ, as the counterpart of κ(ℓ), we also define κ(ℓ)=κ(ℓ;A_1,A_2) as the integer such that 2κ(ℓ) is the number of excursions in ℓ between A_1 and A_2. By the relation between the loops on ℤ^d and ℤ^d presented in Section <ref>, we have: for any j∈ℕ^+,
μ({ℓ:κ(ℓ;A_1,A_2 )=j})=μ({ℓ:κ(ℓ;A_1,A_2 )=j}).
We define a sequence of stopping times for the simple random walk as follows. For {S_t}_t≥ 0∼ℙ_x with x∈∂ A_1, we set τ̂_0:=0. For i∈ℕ^+, let τ̂_2i-1:=inf{t>τ̂_2i-2:S_t∈ A_2} and τ̂_2i:=inf{t>τ̂_2i-1:S_t∈ A_1}. The following lemma is useful to the subsequent proof.
For any disjoint subsets A_1,A_2⊂ℤ^d and j∈ℕ^+,
μ({ℓ:κ(ℓ;A_1,A_2 )=j}) = j^-1∑_x∈∂ A_1ℙ_x(S_τ̂_2j=x ).
For any N≥ 1, x∈∂ B(N) and A⊂ B(N/2), by <cit.> and (<ref>),
ℙ_x(τ_A<∞)≍cap(A)N^2-d.
Taking A_1= A and A_2=∂ B(N), by (<ref>), Lemma <ref> and (<ref>), we get the following corollary, which is frequently-used in this paper. A similar result of this corollary can be found in <cit.>.
For any N≥ 1, A⊂ B(N/2) and j∈ℕ^+, we have
[ μ({ℓ:κ(ℓ;A,∂ B(N) )=j})]^j^-1≍cap(A) N^2-d.
As a direct consequence, one has
μ({ℓ:ran(ℓ)∩ A≠∅,ran(ℓ)∩∂ B(N)≠∅})≍cap(A) N^2-d.
For any disjoint subsets A_1,A_2⊂ℤ^d and a sequence of paths η_i:[ 0,T^i) →ℤ^d for 1≤ i≤ k such that η_i(0)∈∂ A_2, ran(η_i)∩ A_1=∅ and η_i(T^i)∈ A_1, let 𝔏' = 𝔏'(η_1,... η_k) be the collection of loops ℓ with κ(ℓ)=k such that there exists a rooted loop ϱ∈ L(A_1,A_2)(ℓ), whose backward crossing paths are exactly η_i for 1≤ i≤ k. Unless otherwise stated, we assume that η_1,... η_k are different from one another to avoid the issue of periodicity. This assumption will not make any essential difference since the loop measure of all loops violating this property is 0.
We conclude this section by presenting the following lemma.
We keep the notations in the last paragraph. Under the measure μ, conditioning on {ℓ∈𝔏'}, the forward crossing paths, say η_i^F for 1≤ i≤ k, are independently distributed. Moreover, for the forward crossing path that starts from x_i-1^+:=η_i-1(T^i-1) (where x_0^+:=η_k(T^k)) and ends at x_i^-:=η_i(0), its conditional distribution is given by
ℙ_x_i-1^+( · |τ_A_2=τ_x_i^- ).
<cit.> proved the following property of ℒ_1/2 on ℤ^d. For any disjoint subsets A_1,A_2⊂ℤ^d, conditioning on all excursions of loops in ℒ_1/2 that start from A_1, then intersect A_2 and finally return to A_1, the missing parts of the loops in ℒ_1/2 intersecting A_1 and A_2 can be sampled in two steps as follows:
* Suppose that the returning points and departure points of these excursions are {x_i}_i=1^k and {y_i}_i=1^k respectively. Sample a pairing {(x_i,y_σ_i)}_i=1^k (where (σ_1,...,σ_k) is a permutation of (1,...,k)) with probability proportional to ∏_i=1^kG_A_2(x_i,y_σ_i) (where G_A(x,y):=𝔼_x(∫_0^τ_A1_S_t=y dt ) is the Green's function restricted in ℤ^d∖ A).
* Given the pairing sampled above, the missing parts (i.e. the paths {η_i}_i=1^k where η_i starts from x_i, ends at y_σ_i and does not intersect A_2) are independent, and in addition the law of each η_i is given by ℙ(·|τ_y_σ_i < τ_A_2).
In <cit.>, it is also stated that the analogous proposition holds for ℒ_1/2. In fact, this proposition provides even more information than Lemma <ref> since it not only ensures the independence between different remaining paths, but also describes the probabilities of various loop structures. Although Lemma <ref> and <cit.> slightly differ in the definitions of excursions between disjoint subsets, their proofs are highly similar and thus we omit proof details for Lemma <ref>. More general statements for the spatial Markov property can be found in <cit.>.
§ MAIN TOOLS
§.§ Isomorphism theorem: a coupling between GFF and loop soup
In <cit.>, Lupu showed a coupling between two continuous random fields on the metric graph: the GFF {ϕ_v}_v∈ℤ^d and the occupation field {ℒ^v_1/2}_v∈ℤ^d of the loop soup ℒ_1/2. Precisely, for any v∈ℤ^d, ℒ^v_1/2 is the sum of local times of all loops in ℒ_1/2 at v. In this paper, it is sufficient to note that the collection of v∈ℤ^d with ℒ^v_1/2>0 is exactly ∪ℒ_1/2 (for convenience, we denote the union of ranges of loops in a point measure ℒ (resp. a collection 𝔏) by ∪ℒ (resp. ∪𝔏)).
There is a coupling between the loop soup ℒ_1/2 and the GFF {ϕ_v}_v∈ℤ^d such that
* for any v∈ℤ^d, ℒ^v_1/2= 1/2ϕ_v^2;
* the clusters composed of loops in ℒ_1/2 are exactly the sign clusters of {ϕ_v}_v∈ℤ^d, where a “sign cluster” is a maximal connected subgraph on which ϕ has the same sign (every v∈ℤ^d with ϕ_v=0 does not belong to any sign cluster).
Through Lemma <ref>, a profound link between the GFF and the loop soup is unveiled, enabling a rich interplay between the GFF and the loop soup. Of particular interest to us, we get from the symmetry of GFF and Lemma <ref> that
ℙ[ 0[]E^≥ 0∂ B(N)] = ℙ[ 0[]E^> 0∂ B(N)]
= 1/2ℙ[ 0[]the union of all sign clusters of ϕ∂ B(N)]
= 1/2ℙ[ 0[]∪ℒ_1/2∂ B(N)].
Note that the first line of (<ref>) follows from the following two facts:
* For any x∈ℤ^d, ℙ[ϕ_x=0]=0;
* For any interval I_{x,y}, arbitrarily given the values of endpoints ϕ_x,ϕ_y≠ 0, one has that {ϕ_v}_v∈ I_{x,y} (which is given by the Brownian bridge) a.s. does not have extremum 0 in I_{x,y}.
§.§ Two-point function
Using the isomorphism theorem, Lupu <cit.> proved an explicit formula of the probability that two points are connected by ∪ℒ_1/2.
For any x,y∈ℤ^d,
ℙ( x[]∪ℒ_1/2 y) = 2/πarcsin(G(x,y)/√(G(x,x)G(y,y))).
It is well-known that the Green's function satisfies G(x,y)≍ |x-y|^2-d. Thus, by Lemma <ref> we have
ℙ( x[]∪ℒ_1/2 y) ≍ |x-y|^2-d, ∀ x,y∈ℤ^d.
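Lupu's formula is explicit enough to evaluate directly; since arcsin(u)=u+O(u^3) for small u, it also makes the |x-y|^2-d behaviour above transparent. A minimal sketch:

```python
import math

def two_point_probability(g_xy, g_xx, g_yy):
    """Connection probability of x and y through the metric-graph loop-soup
    clusters, via Lupu's formula (2/pi) * arcsin( G(x,y) / sqrt(G(x,x) G(y,y)) )."""
    return (2.0 / math.pi) * math.asin(g_xy / math.sqrt(g_xx * g_yy))

# For distant x and y the argument is small and arcsin(u) ~ u, so the probability
# is comparable to G(x, y), i.e. to |x - y|^(2 - d).
```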
§.§ BKR inequality
In this subsection, we introduce another useful tool, the van den Berg-Kesten-Reimer inequality. This inequality was conjectured in van den Berg and Kesten <cit.> and then was proved by van den Berg and Fiebig <cit.> and Reimer <cit.>. Borgs, Chayes and Randall <cit.> provided a nice exposition for this inequality.
Recall the notations γ_x^p and γ_{x,y}^e in Section <ref>. For any connected A⊂ℤ^d with |A|≥ 2, let γ_A^f be the union of ranges of loops in ℒ_1/2^f that visit every point in A and do not visit any other lattice point. For each of γ_x^p, γ_{x,y}^e and γ_A^f, we call it a glued loop. Note that each glued loop is a random subset of ℤ^d, but not a loop on ℤ^d. We say a collection of glued loops certifies an event 𝖠 if on the realization of this collection of glued loops, 𝖠 happens regardless of the realization of all other glued loops. For two events 𝖠 and 𝖡, let 𝖠∘𝖡 be the event that there exist two disjoint collections of glued loops such that one collection certifies 𝖠, and the other certifies 𝖡. Note that in this context, “two disjoint collections” implies that the collections do not contain any glued loops with matching subscripts and superscripts, but it does not necessarily mean that every glued loop in one collection does not intersect any glued loop in the other collection.
Recall in Section <ref> that each glued loop is measurable with respect to several random variables, whose distributions have been written down rigorously. Therefore, this satisfies the requirements of the framework introduced in Arratia, Garibaldi and Hales <cit.> for the BKR inequality on continuous spaces. Thus, we have the following lemma:
If events 𝖠 and 𝖡 both depend on finitely many glued loops, then
ℙ( 𝖠∘𝖡) ≤ℙ( 𝖠) ·ℙ( 𝖡).
However, sometimes the events that we want to study do not satisfy the condition of Lemma <ref>. Nevertheless, the BKR inequality can be applied via taking a limit. We present the following corollary as an extension of Lemma <ref>, which is adequate for this paper.
We say an event 𝖠 is a connecting event if there exist two finite subsets A_1,A_2⊂ℤ^d such that 𝖠={A_1[]∪ℒ_1/2A_2}.
If events 𝖠_1,𝖠_2,...,𝖠_m (m≥ 2) are connecting events, then we have
ℙ( 𝖠_1∘𝖠_2 ∘ ... ∘𝖠_m ) ≤∏_i=1^mℙ( 𝖠_i).
Note that we cannot apply Lemma <ref> directly since a connecting event does not only depend on finitely many glued loops. Suppose that 𝖠_i={A_i,1[]∪ℒ_1/2A_i,2} for 1≤ i≤ m. Arbitrarily take M∈ℕ^+. For each 1≤ i≤ m, we consider the truncated event
𝖠̂_i=𝖠̂_i(M):= { A_i,1[]∪ℒ_1/2·1_ran(ℓ)⊂B(M) A_i,2}.
If 𝖠_i∩𝖠̂_i^c happens, then one of A_i,1,A_i,2 is connected to ∂ B(M). In addition, each 𝖠̂_i only depends on
{γ_x^p}_x∈ B(M-1)∪{γ_{x,y}^e}_x,y∈ℤ^d:I_{x,y}∈B(M)∪{γ_A^f}_A⊂ B(M-1),
and therefore satisfies the requirement of Lemma <ref> (since the number of these glued loops is finite). Thus, we have
ℙ( 𝖠_1∘𝖠_2 ∘ ... ∘𝖠_m )
≤ ℙ( 𝖠̂_1∘𝖠̂_2 ∘ ... ∘𝖠̂_m ) + ∑_i=1^mℙ(𝖠_i∩𝖠̂_i^c)
≤ ℙ( 𝖠̂_1∘𝖠̂_2 ∘ ... ∘𝖠̂_m )+∑_i=1^m∑_j=1^2∑_z∈ℤ^d:B_z(1)∩ A_i,j≠∅ℙ[ z []∪ℒ_1/2∂ B(M) ]
≤ ∏_i=1^mℙ( 𝖠̂_i)+∑_i=1^m∑_j=1^2∑_z∈ℤ^d:B_z(1)∩ A_i,j≠∅ℙ[ z []∪ℒ_1/2∂ B(M) ].
The first term on the RHS is upper-bounded by ∏_i=1^mℙ( 𝖠_i) since 𝖠̂_i⊂𝖠_i for 1≤ i≤ m. Moreover, by (<ref>) and (<ref>), the second term can be arbitrarily close to 0 if we take sufficiently large M. Now the proof is complete.
§.§ Tree expansion
We now review a combinatorial approach called tree expansion introduced in Aizenman and Newman <cit.>. This approach, usually applied together with the BKR inequality, has proved to be a powerful tool in the study of percolation models (see e.g. Barsky and Aizenman <cit.> for its application in bond percolation). In this paper we only review the version mentioned in the proof of <cit.>.
For any x∈ℤ^d and A_1,A_2⊂ℤ^d, where {x},A_1 and A_2 are disjoint to one another, if x is connected to both A_1,A_2 by ∪ℒ_1/2, then there exists a glued loop γ_* such that {γ_* []∪ℒ_1/2 x}∘{γ_* []∪ℒ_1/2 A_1}∘{γ_* []∪ℒ_1/2 A_2} happens.
For j∈{1,2}, since x is connected to A_j by ∪ℒ_1/2, there must exist a finite sequence of different glued loops, say γ^j_1,...,γ^j_m_j, such that x∈γ^j_1, A_j∩γ^j_m_j≠∅, and γ_i^j∩γ_i+1^j≠∅ for all 1≤ i≤ m_j-1.
If {γ^1_1,...,γ^1_m_1} and {γ^2_1,...,γ^2_m_2} are two disjoint collections, then we have x∈γ^1_1, γ^1_1[]∪_i=2^m_1γ^1_iA_1 and γ^1_1[]∪_i=1^m_2γ^2_iA_2. Thus, we only need to take γ_*=γ^1_1.
Otherwise, we take γ_*=γ^1_m_*, where m_* is the maximal integer in [1,m_1] such that γ^1_m_*∈{γ^2_1,...,γ^2_m_2}. The reasons are as follows. Let m_† be an integer in [1,m_2] such that γ^1_m_*=γ^2_m_†. Then we have γ^1_m_*[]∪_i=1^m_†-1γ^2_i x, γ^1_m_*[]∪_i=m_*+1^m_1γ^1_i A_1 and γ^1_m_*[]∪_i=m_†+1^m_2γ^2_i A_2. By the maximality of m_*, one has {γ^1_m_*+1,...,γ^1_m_1}∩{γ^2_1,...,γ^2_m_2}=∅. Thus, the event {γ^1_m_*[]∪ℒ_1/2 x}∘{γ^1_m_*[]∪ℒ_1/2 A_1}∘{γ^1_m_*[]∪ℒ_1/2 A_2} occurs.
§ PROOF OF THE LOWER BOUND
In this section, we show the proof of the lower bound in Theorem <ref>. This proof shares the same spirit as <cit.>. Moreover, the main step (i.e. Lemma <ref>) was essentially sketched in <cit.>.
To simplify the formulation, we abbreviate “[]∪ℒ_1/2” as “[]”.
For d>6, there exists C_3(d)>0 such that for any N≥ 1,
∑_x_1,x_2∈∂ B(N)ℙ( 0[]x_1,0[]x_2 ) ≤ C_3N^4.
With Lemma <ref>, proving the lower bound in Theorem <ref> is straightforward.
Let X:= ∑_x∈∂ B(N)1_0[] x. By |∂ B(N)|≍ N^d-1 and the two-point function estimate (<ref>), we have
𝔼X≥ cN^2-d· N^d-1= cN.
Recall that for any non-negative random variable Y, ℙ(Y>0)≥ (𝔼Y)^2/𝔼(Y^2). Thus, by Lemma <ref> and (<ref>), we have
ℙ[0[]∂ B(N) ]≥(𝔼X)^2/𝔼(X^2)≥ c^2C_3^-1 N^-2.
We present some inequalities that will be used for multiple times in the subsequent proof. For convenience, we set 0^-a=1 for a>0 in this paper.
For d≥ 3 and a∈ℝ, there exists C(d,a)>0 such that the following holds:
* When a>d, for any M≥ 1,
∑_x∈ℤ^d∖ B(M) |x|^-a≤ CM^d-a;
* When a≠ d-1, for any M≥ 1,
max_y∈ℤ^d∑_x∈∂ B(M)|x-y|^-a≤ CM^(d-1-a)∨ 0.
* When a≠ d, for any M≥ 1,
max_y∈ℤ^d∑_x∈ B(M) |x-y|^-a≤ CM^(d-a)∨ 0;
For (<ref>), since |∂ B(k)|≤ Ck^d-1, we have
∑_x∈ℤ^d∖ B(M) |x|^-a= ∑_k=M+1^∞∑_x∈∂ B(k)|x|^-a≤ C∑_k=M+1^∞k^d-1· k^-a≤ CM^d-a.
Now we focus on the proof of (<ref>). When y∈ [ℤ^d∖ B(1.5M)]∪ B(0.5M), since |x-y|≥ 0.4M for all x∈∂ B(M), we have
∑_x∈∂ B(M) |x-y|^-a≤ CM^-a· |∂ B(M)| ≤ CM^d-1-a.
For the remaining case (i.e. y∈ B(1.5M)∖ B(0.5M)), we observe that |∂ B_y(k) ∩∂ B(M)|≤ Ck^d-2 for all 0≤ k≤ 5M, and that ∂ B_y(k) ∩∂ B(M)=∅ for all k>5M. Therefore, we have
∑_x∈∂ B(M) |x-y|^-a≤ C∑_k=1^5M k^d-2· k^-a≤ CM^(d-1-a)∨ 0.
Combining (<ref>) and (<ref>), we obtain (<ref>).
The proof of (<ref>) can be approached similarly to (<ref>). Specifically, we can estimate the sum on the LHS of (<ref>) by separately considering two cases: when y∈ℤ^d∖ B(2M) and when y∈ B(2M). Further details are omitted since the calculations parallel those in (<ref>) and (<ref>).
For d>6, there exists C(d)>0 such that for all x,y∈ℤ^d,
∑_z∈ℤ^d |z-x|^2-d|z-y|^2-d≤ C|x-y|^4-d.
When |x-y|≤ 100, one has |z-y|≥ |z-x|-|x-y|≥1/2|z-x| for all z∈ℤ^d∖ B_x(200). Thus, we have
∑_z∈ℤ^d |z-x|^2-d|z-y|^2-d≤ C+2^d-2∑_z∈ℤ^d∖ B_x(200) |z-x|^2(2-d)<∞.
By choosing a sufficiently large constant C in (<ref>), we can establish that this lemma holds for all x,y∈ℤ^d with |x-y|≤ 100.
For the remaining case (i.e. |x-y|>100), we denote n:=⌊1/2|x-y| ⌋, A_1:= B_x(n), A_2:=B_y(n) and A_3:=ℤ^d∖ (A_1∪ A_2). Since |z-y|≥ |x-y|-|x-z|≥ n for all z∈ A_1, by (<ref>) we have
∑_z∈ A_1 |z-x|^2-d|z-y|^2-d≤ Cn^2-d∑_z∈ A_1 |z-x|^2-d≤ Cn^4-d.
For the same reason, the sum over z∈ A_2 is also upper-bounded by Cn^4-d. Since min{|z-x|, |z-y|}≥ n for z∈ A_3, we have
A_3⊂⋃_k≥ nA_3,k:=⋃_k≥ n{ z∈ℤ^d:min{|z-x|, |z-y|}=k }.
Combining this inclusion with |A_3,k|≤ |∂ B_x(k)|+|∂ B_y(k)|≤ Ck^d-1, we obtain
∑_z∈ A_3 |z-x|^2-d|z-y|^2-d≤ ∑_k≥ n∑_z∈ A_3,k |z-x|^2-d|z-y|^2-d
≤ C∑_k≥ nk^d-1· k^2(2-d)≤ Cn^4-d.
By these estimates for the sums over A_1, A_2 and A_3, we conclude this lemma.
By separating the sum over ℤ^d in the same way as above, we also have the following estimates. For the sake of brevity, we will not provide further details of these proofs since they are parallel to that of Lemma <ref>.
For d>6, there exists C(d)>0 such that for all x,y∈ℤ^d,
∑_z∈ℤ^d |z-x|^2-d|z-y|^6-2d≤ C|x-y|^2-d,
∑_z∈ℤ^d |z-x|^2-d|z-y|^4-d≤ C|x-y|^6-d.
In the following corollary of Lemmas <ref> and <ref>, we provide several estimates that will be used repeatedly in the subsequent proof.
For d>6, there exists C(d)>0 such that for all x,y∈ℤ^d,
∑_z_1,z_2∈ℤ^d |x-z_1|^2-d|z_1-z_2|^2-d|z_2-x|^2-d|z_1-y|^2-d≤ C|x-y|^2-d,
∑_z_1,z_2,z_3∈ℤ^d |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|x-z_1|^2-d|y-z_2|^2-d≤ C|x-y|^4-d.
For (<ref>), by summing over z_2 and z_1 in turn, we have
∑_z_1,z_2∈ℤ^d |x-z_1|^2-d|z_1-z_2|^2-d|z_2-x|^2-d|z_1-y|^2-d
≤ ∑_z_1∈ℤ^d |x-z_1|^6-2d|z_1-y|^2-d (by Lemma <ref>)
≤ C|x-y|^2-d (by (<ref>)).
For (<ref>), we sum over z_3,z_2 and z_1 in turn, and then obtain
∑_z_1,z_2,z_3∈ℤ^d |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|x-z_1|^2-d|y-z_2|^2-d
≤ ∑_z_1,z_2∈ℤ^d |z_1-z_2|^6-2d |x-z_1|^2-d|y-z_2|^2-d (by Lemma <ref>)
≤ ∑_z_1∈ℤ^d|x-z_1|^2-d |z_1-y|^2-d (by (<ref>))
≤ C|x-y|^4-d (by Lemma <ref>).
Now we are ready to prove Lemma <ref>.
When the event 𝖠_x_1,x_2:={0[]x_1,0[]x_2} happens, by the tree expansion (Lemma <ref>), there exists a glued loop γ_* such that {γ_* []∪ℒ_1/20}∘{γ_* []∪ℒ_1/2 x_1}∘{γ_* []∪ℒ_1/2 x_2} happens. We denote by 𝖠_x_1,x_2^f (resp. 𝖠_x_1,x_2^p, 𝖠_x_1,x_2^e) the event that 𝖠_x_1,x_2 happens and the selected glued loop γ_* can be the one composed of fundamental loops (resp. point loops, edge loops). It follows that (note that the RHS below is not necessarily a disjoint union)
𝖠_x_1,x_2⊂𝖠_x_1,x_2^f∪𝖠_x_1,x_2^p∪𝖠_x_1,x_2^e.
When the event 𝖠_x_1,x_2^f happens, there exist x_3,x_4,x_5∈ℤ^d and a fundamental loop ℓ∈ℒ_1/2 such that ran(ℓ) ∩B_x_j(1)≠∅ for j∈{3,4,5}, and that {0[]x_3}∘{x_1[]x_4}∘{x_2[]x_5} happens. By the analogous result of Lemma <ref> for three disjoint subsets of ℤ^d (see <cit.>) and the relation between the loops on ℤ^d and ℤ^d presented in Section <ref>, the loop measure of fundamental loops that intersect B_x_3(1), B_x_4(1) and B_x_5(1) is bounded from above by C|x_3-x_4|^2-d|x_4-x_5|^2-d|x_5-x_3|^2-d. Thus, by the BKR inequality (Corollary <ref>) and the two-point function estimate (<ref>) , we have
ℙ( 𝖠_x_1,x_2^f)
≤ C ∑_x_3,x_4,x_5∈ℤ^d|x_3-x_4|^2-d|x_3-x_5|^2-d|x_4-x_5|^2-d
· |x_3|^2-d|x_1-x_4|^2-d|x_2-x_5|^2-d
:= C ∑_x_3,x_4,x_5∈ℤ^d𝒯^x_1,x_2_x_3,x_4,x_5.
When the event 𝖠_x_1,x_2^p (or 𝖠_x_1,x_2^e) happens, since every γ^p_· (or γ^e_·) is contained by some B_x(1), x∈ℤ^d, there exist x_3,x_4,x_5∈ℤ^d with max{|x_3-x_4|,|x_3-x_5|}≤ 2 such that {0[]x_3}∘{x_1[]x_4}∘{x_2[]x_5} happens. Similar to (<ref>), we have
max{ℙ( 𝖠_x_1,x_2^p),ℙ( 𝖠_x_1,x_2^e)}
≤ C∑_x_3∈ℤ^d∑_x_4,x_5∈ B_x_3(2) |x_3|^2-d|x_1-x_4|^2-d|x_2-x_5|^2-d,
which implies that ℙ( 𝖠_x_1,x_2^p) and ℙ( 𝖠_x_1,x_2^e) are also bounded from above by C ∑_x_3,x_4,x_5∈ℤ^d𝒯^x_1,x_2_x_3,x_4,x_5 since min{|x_3-x_4|^2-d,|x_3-x_5|^2-d,|x_4-x_5|^2-d}≥ 4^2-d for all x_4,x_5∈ B_x_3(2). Thus, by (<ref>) we obtain
∑_x_1,x_2∈∂ B(N)ℙ( 0[]x_1,0[]x_2 )≤ C∑_x_1,x_2∈∂ B(N)∑_x_3,x_4,x_5∈ℤ^d𝒯^x_1,x_2_x_3,x_4,x_5.
For the case when x_3 ∈ℤ^d∖ B(2N), by Corollary <ref> and Lemma <ref>, we have
∑_x_1,x_2∈∂ B(N)∑_x_3 ∈ℤ^d∖ B(2N)∑_x_4,x_5∈ℤ^d𝒯^x_1,x_2_x_3,x_4,x_5
≤ CN^2-d∑_x_1,x_2∈∂ B(N)∑_x_3 ∈ℤ^d∖ B(2N)∑_x_4,x_5∈ℤ^d |x_4-x_5|^2-d|x_1-x_4|^2-d
· |x_2-x_5|^2-d|x_3-x_4|^2-d|x_3-x_5|^2-d (by |x_3|^2-d≤ CN^2-d)
≤ CN^2-d∑_x_1,x_2∈∂ B(N)|x_1-x_2|^4-d (by (<ref>))
≤ CN^2-d· N^3 · N^d-1= CN^4 (by (<ref>)).
When x_3∈ B(2N), by summing over x_1, x_2, x_5, x_4 and x_3 in turn, and using Lemmas <ref> and <ref>, we get
∑_x_1,x_2∈∂ B(N)∑_x_3∈ B(2N)∑_x_4,x_5∈ℤ^d𝒯^x_1,x_2_x_3,x_4,x_5
≤ CN^2 ∑_x_3∈ B(2N)∑_x_4,x_5∈ℤ^d|x_3-x_4|^2-d|x_3|^2-d |x_3-x_5|^2-d|x_4-x_5|^2-d (by (<ref>))
≤ CN^2 ∑_x_3∈ B(2N)∑_x_4∈ℤ^d|x_3-x_4|^6-2d|x_3|^2-d (by Lemma <ref>)
≤ CN^2 ∑_x_3∈ B(2N) |x_3|^2-d (by (<ref>))
≤ CN^4 (by (<ref>)) .
Combined with (<ref>) and (<ref>), it concludes Lemma <ref>, and thus we complete the proof of the lower bound of Theorem <ref>.
§ THE ERROR OF DELETING LARGE LOOPS
In <cit.>, Werner presented the following heuristic:
“In fact, when a∈ (0,d), the N^a-th largest Brownian loop will have a diameter of the order of N× N^-a/d+o(1). This means for instance that an overwhelming fraction of the numerous large clusters will contain no loop of diameter greater than N^b for b>6/d. In other words, if we remove all loops of diameter greater than N^b, one will still have at least N^d-6+o(1) large clusters, and the estimates for the two-point function will actually remain valid.”
To sum up, Werner described a strategy to prove the following conjecture. For each fixed b∈ (6/d,1) and any x,y∈ℤ^d with |x-y|=N, one has
ℙ( x []∪ℒ_1/2^≤ N^b y ) ≍ N^2-d,
where ℒ_1/2^≤ M:= ℒ_1/2·1_diam(ran(ℓ))≤ M is the point measure composed of loops in ℒ_1/2 with diameter at most M.
Inspired by the heuristics mentioned above, we prove the analogous result with respect to the one-arm probability, which is not only useful in the proof of Theorem <ref>, but also interesting in its own right.
For d>6 and any b∈ (6/d,1), there exist C_4(d),c_1(d,b)>0 such that for all N≥ 1,
0≤ℙ[ 0[]∪ℒ_1/2∂ B(N) ] - ℙ[ 0[]∪ℒ_1/2^≤ N^b∂ B(N) ] ≤ C_4N^-2-c_1.
To prove Proposition <ref>, we need some preparations. For any x∈ℤ^d, we denote by 𝐂(x) the cluster of ∪ℒ_1/2 containing x.
For d>6, there exists C_5(d)>0 such that for any x∈ℤ^d, M≥ 1 and k∈ℕ^+,
𝔼( |𝐂(x)∩ B(M) |^k ) ≤ C_5^kk!M^4k-2.
By Lemma <ref>, we can prove a large deviation bound for |𝐂(x)∩ B(M) |. The proof is parallel to <cit.>.
For d>6, there exist C(d),c(d)>0 such that for all M≥ 1 and s>0,
ℙ( max_x∈ B(M)|𝐂(x)∩ B(M) |>sM^4 ) ≤ C M^d-6 e^-cs.
We enumerate the clusters of ∪ℒ_1/2 that intersect B(M) by 𝐂_1',...,𝐂_m_*'. Note that for any l≥ 2, one has a.s.
∑_m=1^m_*|𝐂_m'∩ B(M)|^l = ∑_x∈ B(M)|𝐂(x) ∩ B(M) |^l-1
Therefore, by applying Lemma <ref> for k=l-1, we have
𝔼( ∑_m=1^m_*|𝐂_m'∩ B(M)|^l ) = 𝔼( ∑_x∈ B(M)|𝐂(x) ∩ B(M) |^l-1)
≤ CC_5^l-1(l-1)!M^4l+d-6.
Let l_†=l_†(s):=⌈s/2C_5+1⌉. By Markov's inequality, we have
ℙ( max_x∈ B(M)|𝐂(x)∩ B(M) |>sM^4 )
≤ (sM^4)^-l_†𝔼( max_x∈ B(M)|𝐂(x)∩ B(M) |^l_†)
≤ (sM^4)^-l_†𝔼( ∑_m=1^m_*|𝐂_m'∩ B(M)|^l_†)
≤ CM^d-6e^-cs,
where we applied (<ref>) and Stirling's formula in the last inequality.
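For concreteness, here is a sketch of that last step, with all constants absorbed into C and c: by Stirling's formula, (l_†-1)!≤ el_†^l_†e^-l_†, so
(sM^4)^-l_†· CC_5^l_†-1(l_†-1)!M^4l_†+d-6≤ CM^d-6( C_5l_†/(es) )^l_†≤ CM^d-6e^-cs,
since C_5l_†/s≤ 1/2+2C_5/s≤ 3/4 once s≥ 8C_5 (so the base is at most 3/(4e)<e^-1 while l_†≥ s/(2C_5)), and smaller values of s are absorbed into the constant C.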
We need the following estimate on the loop measure of all oversized loops in a large box.
For any b∈ (0,1), α>0 and ϵ>0, we have
μ[ {ℓ: ℓ⊂B(N^1+α),diam(ran( ℓ))≥ N^b }] ≤ o( N^(1-b+α)d+ϵ).
Recall the notations L(A_1,A_2)(·) and τ_i in Section <ref>. For each loop ℓ involved in the LHS of (<ref>), we say it is a type I loop if there exist x∈ B(N^1+α) and ϱ∈ L({x},∂ B_x(1/2N^b))(ℓ) such that |{ϱ(t):0≤ t≤τ_1}|≤ N^2b-3ϵ/4; otherwise, we say ℓ is a type II loop. We denote the collections of these two types of loops by 𝔏_I and 𝔏_II respectively.
For type I loops, by (<ref>) and the relation between loops on ℤ^d and ℤ^d presented in Section <ref>, we know that μ( 𝔏_I) is upper-bounded by
∑_x∈ B(N^1+α)ℙ_x[ | {S^(i)}_0≤ i≤τ_∂ B_x(1/2N^b)|≤ N^2b-3ϵ/4]
≤ | B(N^1+α)| (ℙ[ τ_∂ B(1/2N^b)≤ N^2b-ϵ/4] + ℙ[|{S^(i)}_0≤ i≤ N^2b-ϵ/4|≤ N^2b-3ϵ/4] ).
By Lawler <cit.>, the first probability on the RHS of (<ref>) is bounded from above by s.e.(N). Moreover, for the second probability, by the Markov property and the law of large numbers for the range of a simple random walk (see e.g. Spitzer <cit.>), we have
ℙ( |{S^(i)}_0≤ i≤ N^2b-ϵ/4|≤ N^2b-3ϵ/4)
≤ ℙ( ⋂_j=1^N^ϵ/4{|{S^(i)}_(j-1)N^2b-ϵ/2≤ i≤ jN^2b-ϵ/2|≤ N^2b-3ϵ/4})
= [ ℙ( |{S^(i)}_0≤ i≤ N^2b-ϵ/2|≤ N^2b-3ϵ/4) ]^N^ϵ/4≤s.e.(N).
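Here the last inequality follows from the law of large numbers for the range cited above: since N^2b-3ϵ/4=o(N^2b-ϵ/2), each factor ℙ( |{S^(i)}_0≤ i≤ N^2b-ϵ/2|≤ N^2b-3ϵ/4) is at most 1/2 for all sufficiently large N, so the product is bounded by 2^-N^ϵ/4=s.e.(N).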
Thus, the loop measure of 𝔏_I is stretched exponentially small.
For 𝔏_II, let us consider the following summation:
𝒯:= ∑_x∈ B(N^1+α)μ[{ℓ: ran(ℓ)⊂B(N^1+α), diam(ran(ℓ))≥ N^b,
|ran(ℓ)|≥ N^2b-3ϵ/4,x∈ran(ℓ)}].
For any type II loop ℓ, since ran(ℓ) contains at least N^2b-3ϵ/4 different points in B(N^1+α), ℓ must be counted by 𝒯 at least N^2b-3ϵ/4 times. Therefore
𝒯≥ N^2b-3ϵ/4·μ( 𝔏_II).
Moreover, by |B(N^1+α)|≍ N^(1+α)d and (<ref>), we have
𝒯≤ CN^(1+α)d· N^-b(d-2)= C N^(1-b+α)d+2b.
Combining (<ref>) and (<ref>), we obtain the following estimate of μ( 𝔏_II), and thus complete the proof:
μ( 𝔏_II)≤ C N^(1-b+α)d+2b· N^-2b+3ϵ/4=o( N^(1-b+α)d+ϵ).
Now we are ready to prove Proposition <ref>. Recall that we abbreviate “[]∪ℒ_1/2” as “[]”. In addition, we may also write “[]∪ℒ_1/2^≤ M” as “[]≤ M”.
For any b∈ (6/d,1), we choose sufficiently small α,ϵ>0 such that
4(1+α)+(-b+α)d+2ϵ=4-bd+α(d+4)+2ϵ<-2.
Let M=N^1+α. By Lemma <ref>, we have
ℙ[𝖦] :=ℙ[max_x∈ B(M)| 𝐂(x)∩ B(M) |≤ M^4log^2(M) ]≥ 1-s.p.(N).
Let V_1:={x∈ B(N): x[]∂ B_x(N) } and V_2:={x∈ B(N): x[]≤ N^b∂ B_x(N) }. By (<ref>) and the translation invariance of ℒ_1/2 and ℒ_1/2^≤ N^b, we have
0≤ ℙ[0[]∂ B(N)] - ℙ[0[]≤ N^b∂ B(N)]
= |B(N) |^-1𝔼|V_1∖ V_2|
≤ |B(N) |^-1𝔼( |V_1∖ V_2|·1_𝖦) + s.p.(N).
We denote by 𝔏 the collection of loops ℓ∈ℒ_1/2 such that ran(ℓ)∩ B(2N)≠∅ and diam(ran(ℓ))>N^b. Like in the proof of Lemma <ref>, we enumerate the clusters of ∪ℒ_1/2 intersecting B(M) by 𝐂_1',...,𝐂_m_*'. Note that for any 1≤ m≤ m_*, if 𝐂_m' does not intersect any loop of 𝔏, then 𝐂_m'∩ (V_1∖ V_2)=∅. Therefore,
V_1∖ V_2 ⊂⋃_m∈ [1,m_*]: ∃ℓ∈𝔏 with 𝐂_m'∩ran(ℓ)≠∅𝐂_m'∩ B(N).
Also note that each ℓ intersects at most one 𝐂_m'. In addition, on the event 𝖦, one has |𝐂_m'∩ B(N)|≤ M^4log^2(M) for all m∈ [1,m_*]. Thus, we have
| V_1∖ V_2|·1_𝖦≤|𝔏| · M^4log^2(M).
Let 𝔏_1:= {ℓ∈𝔏: ran(ℓ)⊂B(M) } and 𝔏_2:= 𝔏∖𝔏_1. Since each ℓ∈𝔏_1 is involved in the LHS of (<ref>), by Lemma <ref> we have
𝔼|𝔏_1 | ≤ o( N^(1-b+α)d+ϵ).
In addition, since each ℓ∈𝔏_2 intersects both B(2N) and ∂ B(M), by (<ref>) and (<ref>) we have
𝔼|𝔏_2 |≤ CN^-α(d-2).
Combining (<ref>), (<ref>) and (<ref>), we get
𝔼( |V_1∖ V_2|·1_𝖦) ≤ M^4log^2(M)[ o( N^(1-b+α)d+ϵ) +CN^-α(d-2)].
This implies that the RHS of (<ref>) is upper-bounded by
CN^-d· N^4(1+α)+ϵ· N^(1-b+α)d+ϵ= CN^4(1+α)+(-b+α)d+2ϵ:= CN^-2-c_1,
where the existence of c_1 is ensured by the requirement in (<ref>).
§ OUTLINE OF THE PROOF OF THE UPPER BOUND
Now we describe our strategy to prove the upper bound of Theorem <ref>. The framework we use here is inspired by <cit.>. The key novelty of our proof lies in a new exploration process, which is precisely described in Section <ref>.
For n≥ 1, let θ(n):=ℙ[0[]∂ B(n) ]. We aim to prove
For any d>6, there exist constants c_2(d)∈ (0,1),C_6(d)>0 such that for any λ∈(0,1 ], there exists c_3(d,λ)>0 such that for all ϵ∈ (0,c_3) and N≥ 1,
θ( (1+λ)N) ≤ C_6ϵ^-1/2N^-2+ 3dϵ^3/5N^2 θ(λ N/2)θ(N)+ (1-c_2)θ(N)+C_4/[(1+λ)N]^{2+c_1}.
The bound with c_1=0 would already suffice for our proof, but we keep this stronger form in the statement in case the improvement is useful for future work.
With Proposition <ref> at hand, proving the desired upper bound in Theorem <ref> is straightforward by induction.
We choose a small enough λ∈( 0,1 ] such that
(1+λ)^2≤ 2, (1-c_2) (1+λ)^2 ≤ 1-2c_2/3.
Meanwhile, we also take a sufficiently large M_0(d,λ) such that
(2C_6+ 24d λ^-2)M_0^-1/11≤c_2/3, M_0^-20/11≤ c_3, C_4M_0^-1≤c_2/3.
Let us prove θ(N)≤ M_0N^-2 by induction. For the base, we note that the desired bound holds obviously for N ≤√(M_0). Assume the bound θ(s)≤ M_0s^-2 holds for all s<(1+λ)N. By Proposition <ref> with ϵ=M_0^-20/11 and the induction hypothesis, we have
θ((1+λ)N)
≤ C_6ϵ^-1/2N^-2+ 3dϵ^3/5N^2 θ(N)θ(λ N/2)+ (1-c_2)θ(N)+C_4[(1+λ)N]^-2
≤ C_6M_0^10/11N^-2+ 12dM_0^10/11λ^-2N^-2+(1-c_2)M_0N^-2+C_4[(1+λ)N]^-2
= M_0/[(1+λ)N]^2[ (1+λ)^2(C_6+12d λ^-2)M_0^-1/11+(1-c_2)(1+λ)^2+C_4M_0^-1].
By the requirement of λ in (<ref>), the RHS is upper-bounded by
M_0[(1+λ)N]^-2[(2C_6+24d λ^-2)M_0^-1/11+(1-2c_2/3)+ C_4M_0^-1].
Combined with (<ref>), this yields that
θ((1+λ)N)≤ M_0[(1+λ)N]^-2[c_2/3+(1-2c_2/3)+ c_2/3]
= M_0[(1+λ)N]^-2.
Now we finish the induction and conclude the upper bound in Theorem <ref>.
For m∈ℕ^+ and x∈ℤ^d, let B̂_x(m):={y∈ℤ^d:|y-x|≤ m, |y-x|_1<md } be the box obtained by removing all corner points of B_x(m). When x is the origin, we may write B̂(m):=B̂_0(m). For any x∈∂B̂(m), we denote by x^in the unique point in ∂ B(m-1) with x^in∼ x. Note that no corner point of B(m) (i.e. y∈ℤ^d with |y|_1=md) is adjacent to B(m-1). This is why we need to restrict the definition of x^in to x∈∂B̂(m).
The following definition is crucial for our proof.
(1) For n∈ℕ^+ and M∈ [1,∞], let Ψ_n,M^1 be the cluster containing 0 and composed of the following types of loops in ℒ_1/2^≤ M:
* fundamental loops intersecting B(n);
* point loops intersecting some x∈ B(n-1);
* edge loops contained in B(n).
We call these loops “involved loops". Let Ψ_n,M:=(Ψ_n,M^1,Ψ_n,M^2), where
Ψ_n,M^2:= { x∈∂B̂(n) :x∉Ψ_n,M^1, I_{x,x^in}⊂γ_x^p∪Ψ_n,M^1 }.
(2) Let Ψ_n,M:=[Ψ_n,M^1 ∩ℤ^d∖ B(n-1)]∪Ψ_n,M^2, ψ_n,M:= |Ψ_n,M| and
Ψ_n,M:= Ψ_n,M^1∪⋃_x∈Ψ_n,M^2γ_x^p.
See Figure <ref> for an illustration of this definition. Note that Ψ_n,M is a cluster of ∪ℒ_1/2^≤ M. In addition, for any D⊂ℤ^d,
Ψ_n,M∩ D= (Ψ_n,M^1∪Ψ_n,M^2 )∩ D,
which is measurable with respect to Ψ_n,M (but Ψ_n,M is not).
(1) For any x∈Ψ^2_n,M, the glued point loop γ_x^p is only known to intersect I_{x,x^in}∩Ψ^1_n,M. Moreover, for x∈Ψ^1_n,M∩∂B̂(n), γ_x^p is independent of Ψ_n,M. Thus, given Ψ_n,M, by the FKG inequality, the conditional distribution of γ_x^p∩ I_{x,y} (where x∈Ψ_n,M∩∂B̂(n), y∈∂B̂(n+1) and x∼ y) stochastically dominates the one without conditioning.
(2) At the first glance (or even the second glance), it seems more natural and also simpler to define Ψ^♢_n, M (as the replacement for the more complicated Ψ_n, M) to be the cluster containing 0 and composed of all involved loops and γ_x^p∩ I_{x,x^in} for x∈∂B̂(n). However, Ψ_n,M^♢ does not have the property as in Item (1), which is crucial in the subsequent proof. To see this, let us look at the following scenario. Arbitrarily take x∈∂B̂(n) and then assume that x∈Ψ_n,M^♢ and x^in∉Ψ_n,M^♢. By the definition of Ψ_n,M^♢, γ_x^p∩ I_{x,x^in} is sampled and depends on the configuration of Ψ_n,M^♢. In addition, for any y∈∂B̂(n+1) with x∼ y, γ_x^p∩ I_{x,y} has a positive correlation with γ_x^p∩ I_{x,x^in} since both of them are positively correlated to the total local time of point loops in ℒ_1/2^≤ M intersecting x. Thus, arbitrarily given Ψ_n,M^♢ (note that γ_x^p∩ I_{x,x^in} may be arbitrarily small), one cannot ensure the stochastic domination of γ_x^p∩ I_{x,y} as in Item (1).
We say 𝐀=(𝐀^1,𝐀^2) is an admissible tuple if it is a possible configuration of Ψ_n,M. Parallel to (<ref>), we define a random subset
𝐀:= 𝐀^1∪⋃_x∈𝐀^2γ_x^p(𝐀^1),
where {γ_x^p(𝐀^1)}_x∈𝐀^2 is independent of ℒ_1/2, and has the same distribution as {γ_x^p}_x∈𝐀^2 conditioned on the event ∩_x∈𝐀^2{I_{x,x^in}⊂γ_x^p∪𝐀^1}.
(1) For any admissible 𝐀=(𝐀^1,𝐀^2), we denote by ℒ_𝐀,M^U the point measure composed of the following types of loops in ℒ_1/2^≤ M:
* involved loops ℓ with ran(ℓ)∩𝐀^1=∅;
* loops ℓ with ran(ℓ)∩B(n)=∅;
* point loops including some x∈𝐀^1∩∂B̂(n);
* point loops that include some x∈∂B̂(n)∖ (𝐀^1∪𝐀^2) and do not intersect I_{x,x^in}∩𝐀^1.
(2) We define ℒ^U_M as ℒ_𝐀,M^U on the event {Ψ_n,M= 𝐀}.
When M=∞ (i.e. there is no restriction on the diameter of ℓ), we may omit the subscript ∞ and denote Ψ_n^1:=Ψ_n,∞^1, Ψ_n^2:=Ψ_n,∞^2, Ψ_n:=Ψ_n,∞, Ψ_n:=Ψ_n,∞, ψ_n:=ψ_n,∞, Ψ_n:=Ψ_n,∞, ℒ_𝐀^U:=ℒ_𝐀,∞^U and ℒ^U:=ℒ^U_∞.
We have some useful observations about Ψ_n,M as follows:
* For any admissible tuple 𝐀=(𝐀^1,𝐀^2), when {Ψ_n,M= 𝐀} happens, ℒ_1/2^≤ M-ℒ^U_𝐀,M contains all the loops used to construct Ψ_n,M. In light of this, we call the loops in ℒ^U_𝐀,M unused loops.
* By the thinning property of Poisson point processes, given {Ψ_n,M= 𝐀} (which is measurable with respect to σ(ℒ_1/2^≤ M-ℒ^U_M)), the conditional distribution of ℒ^U_M is the same as ℒ_𝐀,M^U without conditioning.
* Since every loop ℓ included in Ψ_n,M^1 has diameter at most M and must intersect B(n) (by Definition <ref>), we have Ψ_n,M^1∩ℤ^d⊂ B(n+M) and thus Ψ_n,M^1∩ℤ^d⊂Ψ_n^1∩ B(n+M). For any x∈Ψ_n,M^2, we have I_{x,x^in}⊂γ_x^p∪Ψ_n,M^1⊂γ_x^p∪Ψ_n^1, which implies that x is either in Ψ_n^1∩∂B̂(n) or in Ψ_n^2. In conclusion,
Ψ_n,M⊂Ψ_n ∩ B(n+M).
* If the event {0[]≤ M∂ B(m)} happens for some m>n+M, then there exists v_†∈Ψ_n,M such that v_† is connected to ∂ B(m) by ∪ℒ^U_M. Suppose that v_† is in the interval I_{x_†,y_†}. We claim that either x_† or y_† is in Ψ_n,M. When v_†∈γ_z^p for some z∈Ψ_n,M^2, we know that either x_† or y_† is z, which is contained in Ψ_n,M. When v_†∈Ψ_n,M^1, we verify the claim separately in the following subcases.
* Both x_† and y_† are in B(n-1): We will show that this case cannot occur by contradiction. Since v_†∈Ψ_n,M^1∩ [∪ℒ^U_M], there exists a loop ℓ_†∈ℒ^U_M intersecting v_†, which implies ran(ℓ_†)∩B(n) ∩Ψ_n,M^1≠∅. In addition, ℓ_† must be an involved loop since a point loop including some x∈∂B̂(n) cannot intersect B(n-1). These two facts cause a contradiction with ℓ_†∈ℒ^U_M.
* x_†∈∂ B(n-1) and y_†∈∂B̂(n): With the same argument as in Subcase (a), there exists ℓ_†∈ℒ^U_M intersecting v_† and B(n) ∩Ψ_n,M^1. To avoid the same contradiction as in Subcase (a), it is necessary for ℓ_† to be a point loop including y_†. We now prove that y_†∈Ψ^1_n, M by contradiction (this then yields the claim since Ψ^1_n, M∩ℤ^d∖ B(n-1)⊂Ψ_n,M). Suppose that y_†∉Ψ_n,M^1, then we have x_†∈Ψ_n,M^1 and therefore, I_{x_†,y_†}⊂ran(ℓ_†) ∪Ψ_n,M^1 ⊂γ_y_†^p∪Ψ_n,M^1. Thus, ℓ_† is a point loop containing y_†∈Ψ_n,M^2, which arrives at a contradiction with ℓ_†∈ℒ^U_M.
* y_†∈∂ B(n-1) and x_†∈∂B̂(n): For the same reason as in subcase (b), the claim is valid.
* x_†, y_†∉B(n-1): Since Ψ_n,M^1 is connected and v_†∈Ψ^1_n,M, we know that either x_† or y_† is in Ψ_n,M^1, and thus is in Ψ_n,M.
To sum up, we now conclude this claim (i.e. either x_† or y_† is in Ψ_n,M). Meanwhile, either x_† or y_† is connected to ∂ B(m) by ∪ℒ^U_M since v_† does so. Putting these two results together, we have: for any m>n+M,
{0[]≤ M∂ B(m)}⊂⋃_z_1∈Ψ_n,M⋃_z_2∈ℤ^d:|z_1-z_2|_1≤ 1{ z_2[ ]∪ℒ^U_M∂ B(m)}.
Recall that 𝐂(x) is the cluster of ∪ℒ_1/2 containing x. We take constants b∈ (6/d,1) and λ∈(0,1 ], and fix a large integer N. We also take a constant ϵ>0 and denote L= ϵ^3/10N. Let Ψ_n^*:= Ψ_n ∩ B(n+[(1+λ)N]^b) and ψ_n^*= |Ψ_n^*|. When {0 []∂ B((1+λ)N) } happens, one of the following events occurs:
* 𝖡_0: {0[]∂ B((1+λ)N) }∩{0[]≤ [(1+λ)N]^b∂ B((1+λ)N) }^c.
* 𝖡_1: |𝐂(0)|≥ϵ N^4.
* 𝖡_2: ∃ n∈[(1+λ/4)N,(1+λ/3)N] such that 0<ψ_n^*≤ L^2 and 0[]≤ [(1+λ)N]^b∂ B((1+λ)N).
* 𝖡_3: ∀ n∈[(1+λ/4)N,(1+λ/3)N], ψ_n^*> L^2 and |𝐂(0)|< ϵ N^4.
Thus, to prove Proposition <ref>, we only need to control the probabilities of these four events.
For 𝖡_0, by Proposition <ref>, we have
ℙ(𝖡_0)= ℙ[ 0[]∂ B((1+λ)N) ] - ℙ[ 0[]≤ [(1+λ)N]^b∂ B((1+λ)N) ]
≤ C_4/[(1+λ)N]^{2+c_1}.
For 𝖡_1, we use the decay rate of |𝐂(0)| in the following proposition, which will be proved in Section <ref>.
For d>6, there exists C_6(d)>0 such that for all M≥ 1,
ℙ( |𝐂(0) |≥ M ) ≤ C_6M^-1/2.
By Proposition <ref>, we have
ℙ(𝖡_1) ≤ C_6ϵ^-1/2N^-2.
For 𝖡_2, we denote Ψ^i,▴_n:=Ψ_n,[(1+λ)N]^b^i for i∈{1,2}, Ψ_n^▴:=Ψ_n,[(1+λ)N]^b, Ψ_n^▴:=Ψ_n,[(1+λ)N]^b, Ψ_n^▴:=Ψ_n,[(1+λ)N]^b and ψ_n^▴:=|Ψ_n^▴|. Let k_†:=min{k≥ 0: 0<ψ_n_k^▴≤ L^2 }, where n_k:=⌈ (1+λ/4)N ⌉+k.
The event 𝖡_2 ensures the following two events:
* {ψ_n_k^▴>0} for any n_k∈[(1+λ/4)N,(1+λ/3)N] (since the event {0[]≤ [(1+λ)N]^b∂ B((1+λ)N)} happens).
* There exists some n_k∈[(1+λ/4)N,(1+λ/3)N] such that 0≤ψ^▴_n_k≤ L^2 (since ψ_n_k^*≥ψ_n_k^▴ for all k≥ 0 (by (<ref>))).
To sum up, one has 𝖡_2 ⊂{n_k_†∈[(1+λ/4)N,(1+λ/3)N]}. Thus, by (<ref>), we have
ℙ(𝖡_2)≤ ∑_k∈ℕ:n_k∈ [(1+λ/4)N,(1+λ/3)N]ℙ(k_†=k ) 𝔼{∑_z_1∈Ψ_n_k^▴∑_z_2∈ℤ^d:|z_2-z_1|_1≤ 1
ℙ[ z_2 []ℒ^U_[(1+λ)N]^b∂ B((1+λ)N) | Ψ_n_k^▴,k_†=k ]}.
In fact, given Ψ_n_k^▴, the unused loops ℒ^U_[(1+λ)N]^b (with respect to Ψ_n_k^▴) are independent of Ψ_n_k'^▴ for all 0≤ k'<k. To see this, we only need to check the loops in ℒ^U_[(1+λ)N]^b (see Definition <ref>) as follows:
* involved loops ℓ with ran(ℓ)∩Ψ_n_k^1,▴=∅: Since Ψ_n_k'^1,▴⊂Ψ_n_k^1,▴, we have ran(ℓ)∩Ψ_n_k'^1,▴=∅. Therefore, ℓ is independent of Ψ_n_k'^1,▴.
* loops ℓ with ran(ℓ)∩B(n_k)=∅: Since B(n_k') ⊂B(n_k), we know that ℓ is disjoint from B(n_k') and thus is independent of Ψ_n_k'^1,▴.
* Every remaining loop ℓ is a point loop including some x ∈∂B̂(n_k), which is also disjoint from B(n_k') and is independent of Ψ_n_k'^1,▴.
As a result, given Ψ_n_k^▴ and the occurrence of {k_†=k}, the conditional distribution of ℒ^U_[(1+λ)N]^b (with respect to Ψ_n_k^▴) is the same as the one only given Ψ_n_k^▴. Combined with Item (2) in Remark <ref>, this yields that for each z_2 involved in the RHS of (<ref>), we have
ℙ[ z_2 []ℒ^U_[(1+λ)N]^b∂ B((1+λ)N) | Ψ_n_k^▴,k_†=k ]
≤ ℙ[ z_2 []∂ B((1+λ)N) ]≤θ(λ N/2),
where in the last inequality we used
Ψ_n_k^▴⊂ B(n_k+[(1+λ)N]^b)⊂ B((1+λ/3)N+[(1+λ)N]^b).
Combining (<ref>), (<ref>) and 0<ψ_n_k_†^▴≤ L^2, we get
ℙ(𝖡_2) ≤ (2d+1)L^2 θ(λ N/2)ℙ( n_k_†∈ [(1+λ/4)N,(1+λ/3)N])
≤ 3dϵ^3/5N^2 θ(λ N/2)θ(N).
Finally, let us consider the event 𝖡_3. For any n∈ℕ^+, let
χ_n= |{x∈ B(n+L)∖ B(n): 0[] x }|.
We need the following theorem, which is the core of this paper.
For d>6, there exist c_4(d)>0,c_5(d)∈ (0,1) such that for each fixed λ∈( 0, 1 ] and sufficiently small fixed ϵ>0, the following holds for any large enough N≥ 1 and any n∈[(1+λ/4)N,(1+λ/3)N]:
ℙ( ψ_n^*≥ L^2, χ_n≤ c_4L^4 ) ≤ (1-c_5)θ(N).
Now we estimate the probability of 𝖡_3 based on Theorem <ref>. For any integer i ∈ [0,(λ/12)ϵ^-3/10-1], let n_i' := ⌈ (1+ λ/4) N + iL⌉. Note that each n_i'∈[ (1+ λ/4) N, (1+ λ/3) N]. We also define
I :=| { i∈ [0,(λ/12)ϵ^-3/10-1]∩ℕ : ψ_n_i'^* ≥ L^2,
χ_n_i'≤ c_4L^4 }|.
If 𝖡_3 happens, then we have |𝐂(0)|<ϵ N^4 and thus
|{i∈ [0,(λ/12)ϵ^-3/10-1]∩ℕ : χ_n_i' > c_4L^4 }|< ϵ N^4 /c_4 L^4= c_4^-1ϵ^-1/5.
Therefore, by Markov's inequality and Theorem <ref>, we have
ℙ(𝖡_3)≤ ℙ( I ≥ (λ/12)ϵ^-3/10- c_4^-1ϵ^-1/5-1)
≤ 𝔼I/( (λ/12)ϵ^-3/10- c_4^-1ϵ^-1/5-1)
≤ (λ/12)ϵ^-3/10(1-c_5)/( (λ/12)ϵ^-3/10- c_4^-1ϵ^-1/5-1)·θ(N).
For each fixed λ∈(0,1 ], by taking a small enough ϵ, we can require that
(λ/12)ϵ^-3/10(1-c_5)/( (λ/12)ϵ^-3/10- c_4^-1ϵ^-1/5-1)< 1-c_5/2:= 1-c_2.
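Such a choice of ϵ is indeed possible: since ϵ^-1/5=o(ϵ^-3/10) as ϵ↓ 0, the left-hand side equals
(1-c_5)/( 1- (c_4^-1ϵ^-1/5+1)/((λ/12)ϵ^-3/10) ),
which tends to 1-c_5<1-c_5/2 as ϵ↓ 0.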
By (<ref>) and (<ref>), we obtain the desired estimate for ℙ(𝖡_3) as follows:
ℙ(𝖡_3)≤ (1-c_2) θ(N).
In conclusion, by (<ref>), (<ref>), (<ref>) and (<ref>), we obtain Proposition <ref>, and thus complete the proof of Theorem <ref> assuming Proposition <ref> and Theorem <ref>. We will prove Proposition <ref> in Section <ref>. The proof of Theorem <ref> will be established in Sections <ref> and <ref>. Specifically, we will prove a core lemma in Section <ref> and then conclude Theorem <ref> in Section <ref>.
§ GOOD POINTS, LOCALLY GOOD POINTS AND QUALIFIED POINTS
As in the last section, we fix b∈ (6/d,1), λ∈( 0,1] and a sufficiently small ϵ>0. We also take a sufficiently large constant K(d)>0. For any m≥ 1, we denote r_m:= K2^m-1. Recall the notations 𝐀 and ℒ^U_𝐀 in (<ref>) and Definition <ref> respectively.
For any x∈ℤ^d, m≥ 1 and admissible 𝐀=(𝐀^1,𝐀^2) (i.e. a possible configuration of Ψ_n), we define the function
Δ_x,m(𝐀):= 𝔼(∑_y∈ B_x(r_m)1_y[]∪ℒ^U_𝐀𝐀∩ B_x(r_m^4d) ).
For l≥ 1, we say 𝐀 is (x,m,l)-nice if Δ_x,m(𝐀)≤ r_m^4log^l(r_m).
Recall the notation Ψ_n in Item (2) of Definition <ref>, and also recall that Ψ_n^*=Ψ_n∩ B(n+[(1+λ)N]^b).
* For any m≥ 1 and x∈ℤ^d, we say x is m-good if Ψ_n is (x,m,16)-nice. We also say x is m-bad if it is not m-good.
* If x is m-good for all m≥ 1, then we say x is regular. Otherwise, we call x an irregular point.
* We say x is strongly regular if y is regular for all y∈ B_x(K^10d).
* We denote the numbers of irregular, strongly regular and m-bad points in Ψ_n^* by ψ_n^irr, ψ_n^SR and ψ_n^mbad respectively.
(1) If y∈ B_x(r_m)∩𝐀, then { y[]∪ℒ^U_𝐀𝐀∩ B_x(r_m^4d) } a.s. happens. Therefore, we have
Δ_x,m(𝐀) ≥| 𝐀∩ B_x(r_m) |.
Thus, when 𝐀 is (x,m,l)-nice, one has |𝐀∩ B_x(r_m) |≤ r_m^4log^l(r_m). As a result, when x is m-good, we have
|Ψ_n∩ B_x(r_m)| ≤Δ_x,m(Ψ_n)≤ r_m^4log^16(r_m).
(2) For any admissible tuple 𝐀, by Item (2) in Remark <ref>, we have
𝔼(∑_y∈ B_x(r_m)1_y[]∪ℒ^UΨ_n∩ B_x(r_m^4d) |Ψ_n= 𝐀)= Δ_x,m(𝐀).
The main goal of this section is to prove the following lemma, which will be a crucial ingredient in proving Theorem <ref>.
With the same conditions as in Theorem <ref>, we have
ℙ( ψ_n^*≥ L^2, ψ_n^irr≥ K^-20d^2ψ_n^* ) ≤s.p.(N).
The lemma above implies that when ψ_n^* is at least L^2, with high probability, at least half of the points in Ψ_n^* are strongly regular.
With the same conditions as in Theorem <ref>, we have
ℙ( ψ_n^*≥ L^2, ψ_n^SR≤12ψ_n^* ) ≤s.p.(N).
For any x∈ℤ^d, if x is not strongly regular, then there must exist an irregular point y such that x∈ B_y(K^10d). Thus, we have
ψ_n^*- ψ_n^SR≤|B(K^10d) |·ψ_n^irr.
Therefore, when {ψ_n^*≥ L^2, ψ_n^SR≤1/2ψ_n^*} happens, one has
ψ_n^irr≥|B(K^10d)|^-1(ψ_n^*- ψ_n^SR) ≥ cK^-10d^2ψ_n^* ≥ K^-20d^2ψ_n^*.
By Lemma <ref> and (<ref>), we immediately get the corollary.
We next describe the proof of Lemma <ref>. Recalling Definition <ref>, one has the following deterministic inequality:
ψ_n^irr≤∑_m=1^∞ψ_n^mbad.
Combined with ∑_m=1^∞ m^-2 < 2, it suffices to prove that for any m≥ 1,
ℙ( ψ_n^*≥ L^2, ψ_n^mbad≥ (1/2)m^-2 K^-20d^2ψ_n^* ) ≤s.p.(N).
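To spell out the reduction: if ψ_n^mbad< (1/2)m^-2 K^-20d^2ψ_n^* held for every m≥ 1, then by (<ref>) we would get
ψ_n^irr≤∑_m=1^∞ψ_n^mbad < (1/2)K^-20d^2ψ_n^*∑_m=1^∞ m^-2 < K^-20d^2ψ_n^*,
so the event in Lemma <ref> forces the event in (<ref>) for at least one m≥ 1.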
It turns out that for large m, the proof of (<ref>) is fairly simple since the probability of the existence of a single m-bad point already decays super-polynomially, as incorporated in Lemma <ref> below. For small m, however, the proof is much more delicate since it requires controlling many points simultaneously, and it occupies almost the rest of this section.
For any d>6, there exist constants c(d),C(d)>0 such that for any x∈ℤ^d and m≥ 1,
ℙ( x is mbad) ≤ Ce^-clog^16(r_m).
We denote the event 𝖡:= {x is mbad}. By (<ref>) and the definition of m-bad points, one has
𝔼( ∑_y∈ B_x(r_m)1_y[]∪ℒ^UΨ_n∩ B_x(r_m^4d) |𝖡) ≥ r_m^4log^16(r_m).
In addition, since ∑_y∈ B_x(r_m)1_y[]∪ℒ^UΨ_n∩ B_x(r_m^4d) ≤ |B_x(r_m)|=(2r_m+1)^d, we have
𝔼( ∑_y∈ B_x(r_m)1_y[]∪ℒ^UΨ_n∩ B_x(r_m^4d) |𝖡)
≤ Cr_m^d ℙ( ∑_y∈ B_x(r_m)1_y[]∪ℒ^UΨ_n∩ B_x(r_m^4d) ≥ (1/2)r_m^4log^16(r_m)|𝖡)+ (1/2)r_m^4log^16(r_m).
Combining these two estimates, we get
ℙ( ∑_y∈ B_x(r_m)1_y[]∪ℒ^UΨ_n∩ B_x(r_m^4d) ≥ (1/2)r_m^4log^16(r_m)|𝖡)≥ cr_m^4-dlog^16(r_m).
Since Ψ_n is connected, all points connected to Ψ_n must be connected to each other. Thus, by Lemma <ref> we have
ℙ( ∑_y∈ B_x(r_m)1_y[]∪ℒ^UΨ_n∩ B_x(r_m^4d) ≥ (1/2)r_m^4log^16(r_m))
≤ ℙ( max_y∈ B_x(r_m)|𝐂(y)∩ B(r_m) |≥ (1/2)r_m^4log^16(r_m) )
≤ Ce^-clog^16(r_m).
Combined with (<ref>), the desired bound follows.
By Lemma <ref> and Ψ_n^*⊂ B(n+[(1+λ)N]^b), we have
ℙ( ∑_m≥ m_0ψ_n^mbad≥ 1) ≤s.p.(N),
where m_0:= min{m: r_m≥ e^log^1/4(N)}. We now need to control the probability for small m as promised. To this end, we fix an arbitrary m∈ [1, m_0-1].
For ψ_n^mbad, we make a further decomposition as follows. Let D_N:=⌊ e^log^1/3.5(N)⌋. Note that r_m_0<D_N. For any w∈[-D_N,D_N )^d ∩ℤ^d, we define
F(w):={x∈ w+2D_N·ℤ^d:x∈ B(n+[(1+λ)N]^b)∖ B(n-1) }.
We also define
ζ_w=ζ_w(n):= |Ψ_n^*∩ F(w)|,
ζ_w^mbad=ζ_w^mbad(n):= |{x∈Ψ_n^*∩ F(w):x is mbad}|.
It follows from the definition that
ψ_n^*= ∑_w∈[-D_N,D_N )^d ∩ℤ^dζ_w, ψ_n^mbad= ∑_w∈[-D_N,D_N )^d ∩ℤ^dζ_w^mbad.
We claim the following inclusion relation:
{ψ_n^*≥ L^2, ψ_n^mbad≥12m^-2 K^-20d^2ψ_n^* }
⊂ ⋃_w∈[-D_N,D_N )^d ∩ℤ^d{ζ_w≥L^2/2^d+2K^20d^2m^2D_N^d, ζ_w^mbad≥ζ_w/4 K^20d^2m^2}.
We will prove a contrapositive statement of (<ref>). To this end, denote
W_1:= { w∈ B(D_N): ζ_w< L^2/2^d+2K^20d^2m^2D_N^d},
W_2 := {w∈ B(D_N)∖ W_1 : ζ_w^mbad< ζ_w/4K^20d^2m^2}.
In fact, when the event on the RHS of (<ref>) does not happen, one has W_1∪ W_2=[-D_N,D_N )^d ∩ℤ^d. Thus, by (<ref>) and |W_1|≤|[-D_N,D_N )^d|= (2D_N)^d, we have
ψ_n^mbad= ∑_w∈ W_1ζ_w^mbad+∑_w∈ W_2ζ_w^mbad
< (2D_N)^d·L^2/2^d+2K^20d^2m^2D_N^d+ ∑_w∈[-D_N,D_N )^d ∩ℤ^dζ_w/4 K^20d^2m^2
= [ 4K^20d^2m^2] ^-1(L^2+ ψ_n^* ),
which is incompatible with the event on the LHS of (<ref>), thereby completing the proof (for the contrapositive statement) of (<ref>). Therefore, to get (<ref>) (which implies Lemma <ref>), it is sufficient to prove the following lemma (since then (<ref>) follows via a simple union bound).
With the same conditions as in Theorem <ref>, we have
max_1≤ m≤ m_0-1,w∈[-D_N,D_N )^d ∩ℤ^dℙ[ζ_w≥L^2/2^d+2K^20d^2m^2D_N^d, ζ_w^mbad≥ζ_w/4 K^20d^2m^2] ≤s.p.(N).
The rest of this section is devoted to the proof of Lemma <ref>.
§.§ Qualified points
We arbitrarily fix m∈ [1,m_0-1] and w∈[-D_N,D_N )^d ∩ℤ^d. For any k∈ℕ, let s_k=s_k(m):=r_m^{(4d)^{k+1}}. Note that s_k+1=(s_k)^4d. We denote
k_0=k_0(m):=min{k≥ 1: s_k+2≥√(D_N)}.
Recall the notation κ(ℓ;A_1,A_2) in Section <ref>.
* For any k≥ 1 and x∈ F(w), we say x is k-qualified if the total number of forward crossing paths (with A_1=B_x(s_k) and A_2=∂ B_x(s_k+1)) of loops in ℒ_1/2 is at most log^6(s_k). I.e.,
∑_ℓ∈ℒ_1/2: ran(ℓ)∩ B_x(s_k)≠∅,ran(ℓ)∩∂ B_x(s_k+1)≠∅κ(ℓ;B_x(s_k),∂ B_x(s_k+1)) ≤log^6(s_k).
* We say x is k-unqualified if it is not k-qualified.
* We denote the number of k-unqualified points in Ψ_n^* by ψ_n^kUQ.
We first show that each lattice point is k-unqualified only with small probability.
There exist C(d),c(d)>0 such that for any x∈ F(w) and k≥ 1,
ℙ( x is k-unqualified) ≤ Ce^-clog^4(s_k).
Let N_x,k be the number of loops in ℒ_1/2 that cross B_x(s_k+1)∖ B_x(s_k). By Definition <ref> we have
ℙ( x is k-unqualified)
≤ ℙ[N_x,k < log^3(s_k), ∃ ℓ∈ℒ_1/2 with more than log^3(s_k) forward crossing
paths with A_1=B_x(s_k),A_2=∂ B_x(s_k+1)]+ℙ[N_x,k≥log^3(s_k) ].
Let μ_x,k be the loop measure of loops with more than log^3(s_k) forward crossing paths with A_1=B_x(s_k) and A_2=∂ B_x(s_k+1). By (<ref>), we have
μ_x,k≤[C·cap(B_x(s_k))· (s_k+1)^2-d]^log^3(s_k)≤ Ce^-clog^4(s_k),
which implies that the first term on the RHS of (<ref>) is bounded from above by
1-e^-1/2μ_x,k≤12μ_x,k≤ Ce^-clog^4(s_k).
For the second term, by (<ref>), the loop measure of loops crossing B_x(s_k+1)∖ B_x(s_k) is at most
C·cap(B_x(s_k))· (s_k+1)^2-d≤ C's_k^-(4d-1)(d-2).
Therefore, N_x,k is stochastically dominated by Pois(λ_k), where λ_k=(1/2)C's_k^-(4d-1)(d-2). Recall that for any ξ,λ>0 and a Poisson random variable Y∼Pois(λ), one has 𝔼[exp(ξ Y)]=exp(λ(e^ξ-1)). Thus, by the exponential Markov inequality, taking ξ=log(λ_k^-1+1), λ=λ_k and Y=N_x,k, we have
ℙ[N_x,k≥log^3(s_k)]≤ e^-log(λ_k^-1+1)log^3(s_k)𝔼[e^log(λ_k^-1+1) N_x,k] ≤ Ce^-clog^4(s_k).
Combining (<ref>), (<ref>) and (<ref>), we complete the proof.
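We record the elementary computation behind the last display: with ξ=log(λ_k^-1+1) one has e^ξ-1=λ_k^-1, hence λ_k(e^ξ-1)=1 and, by the stochastic domination by Pois(λ_k),
𝔼[e^log(λ_k^-1+1) N_x,k]≤exp(λ_k(e^ξ-1))=e, so that ℙ[N_x,k≥log^3(s_k)]≤ e·exp(-log(λ_k^-1+1)log^3(s_k))≤ Ce^-clog^4(s_k),
where the last inequality uses log(λ_k^-1+1)≥ clog(s_k), which follows from λ_k^-1≥ cs_k^(4d-1)(d-2) (small values of s_k are absorbed into the constant C).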
Recalling that D_N=⌊ e^log^1/3.5(N)⌋ and k_0=min{k≥ 1: s_k+2≥√(D_N)}, one has
∑_k=k_0^∞exp(-clog^4(s_k))≤ Cexp(-clog^4/3.5(N))=s.p.(N).
Thus, by Lemma <ref> and Ψ_n^*⊂ B(n+[(1+λ)N]^b), we have
ℙ( ∃ x∈Ψ_n^* and k≥ k_0 such that x is k-unqualified) ≤s.p.(N).
Next, we will demonstrate the “inheritability” of qualified points. I.e., given that a lattice point x is (k+1)-qualified, the conditional probability for x to be k-qualified is close to 1. Before proving that, we need the following technical lemma, whose proof is deferred to Section <ref>.
Let 𝔑 be the number of times that the Brownian motion S_t on ℤ^d crosses B_x(s_k+1)∖ B_x(s_k) before hitting ∂ B_x(s_k+2). Then there exists c(d)>0 such that for any x∈ℤ^d, y∈∂ B_x(s_k+1), z∈∂B̂_x(s_k+2) and l∈ℕ^+,
ℙ_y[𝔑= l| τ_∂ B_x(s_k+2)=τ_z] ≤ s_k+1^-cl.
As a direct consequence, for any γ>0 with e^γ<s_k+1^c,
𝔼_y[ exp(γ𝔑)| τ_∂ B_x(s_k+2)= τ_z ] ≤ s_k+1^c/(s_k+1^c-e^γ).
For any x∈ℤ^d and j∈ℕ, let Ω_x,j:=∪_l∈ℕ[∂ B_x(s_j) ×∂B̂_x(s_j+1)]^l be the collection of possible configurations of starting and ending points of all forward crossing paths (with A_1=B_x(s_j) and A_2=∂ B_x(s_j+1)) in a collection of loops.
For any ω_x,j∈Ω_x,j, we denote by ℙ(·|ω_x,j) the conditional measure given that the configuration of starting and ending points of all forward crossing paths (with A_1=B_x(s_j) and A_2=∂ B_x(s_j+1)) in ℒ_1/2 is equal to ω_x,j.
We claim that under ℙ(·|ω_x,j) where ω_x,j=((x_1,y_1),...,(x_l,y_l) ), all the l forward crossing paths are independent and their marginal distribution is given by ℙ_x_i( · |τ_∂ B_x(s_j+1)=τ_y_i ) for 1≤ i≤ l. In fact, the conditioning of ℙ(·|ω_x,j) is equivalent to “the backward crossing paths η^B_i for 1≤ i≤ l are compatible with ω_x,j (i.e. each η^B_i starts from y_i-1 (where y_-1:=y_l) and ends at x_i)”. At this point, the claim follows by recalling Lemma <ref>.
The next lemma shows the inheritability of k-qualified points.
For any d>6, there exist C(d),c(d)>0 such that the following holds: for every ω_x,k+1∈Ω_x,k+1 such that x is (k+1)-qualified with respect to ω_x,k+1, we have
ℙ( x is k-unqualified|ω_x,k+1) ≤ Ce^-clog^4(s_k).
Note that the loops in ℒ_1/2 crossing B_x(s_k+1)∖ B_x(s_k) can be divided into the following two types:
𝔏^cro:={ℓ∈ℒ_1/2:∀ j∈{0,1,2},ran(ℓ)∩∂ B_x(s_k+j)≠∅},
𝔏^in:={ℓ∈ℒ_1/2:ran(ℓ)⊂B_x(s_k+2) and ∀ j∈{0,1},ran(ℓ)∩∂ B_x(s_k+j)≠∅}.
We denote by ξ^cro:=∑_ℓ∈𝔏^croκ(ℓ) the total number of forward crossing paths (with A_1=B_x(s_k), A_2=∂ B_x(s_k+1)) of loops in 𝔏^cro. Similarly, let ξ^in:=∑_ℓ∈𝔏^inκ(ℓ).
We enumerate the forward crossing paths (with A_1=B_x(s_k+1) and A_2=B_x(s_k+2)) of loops in 𝔏^cro as η_i^F for 1≤ i≤ q(ω_x,k+1), which starts from x_i(ω_x,k+1)∈∂ B_x(s_k+1) and ends at y_i(ω_x,k+1)∈∂B̂_x(s_k+2). In the rest of this proof, we write q:=q(ω_x,k+1), x_i:=x_i(ω_x,k+1) and y_i:=y_i(ω_x,k+1) for short. Since x is (k+1)-qualified with respect to ω_x,k+1, we know that q≤log^6(s_k+1). By Remark <ref>, η_i^F for 1≤ i≤ q are conditionally independent and their conditional distributions are given by ℙ_x_i(· |τ_∂ B_x(s_k+2)=τ_y_i). We denote by 𝔑_i the number of times that η_i^F crosses B_x(s_k+1)∖ B_x(s_k). Note that ξ^cro=∑_i=1^q𝔑_i. By the exponential Markov's inequality, Lemma <ref> and q≤log^6(s_k+1), we have
ℙ[ξ^cro≥ (1/2)log^6(s_k)|ω_x,k+1]
≤ e^-(1/2)log^6(s_k)∏_i=1^q𝔼( e^𝔑_i|ω_x,k+1)
≤ e^-(1/2)log^6(s_k)( s_k+1^c/(s_k+1^c-e) )^log^6(s_k+1)≤ e^-(1/4)log^6(s_k).
Now we consider ξ^in, which is determined by 𝔏^in. Since the loops in 𝔏^in all belong to ℒ_1/2 and are independent of ω_x,k+1, using the same argument in the proof of Lemma <ref>, we have
ℙ[ξ^in≥ (1/2)log^6(s_k)|ω_x,k+1]≤ Ce^-clog^4(s_k).
Combining (<ref>) and (<ref>), we complete the proof.
§.§ Locally good points
By Definition <ref>, whether a lattice point x is m-good depends on the whole configuration of Ψ_n. This global dependence causes significant difficulty in the analysis. To circumvent it, we approximate m-good points by locally good points, defined next. Before that, we introduce some notation to simplify the presentation:
* Let η^F_x,i for 1≤ i≤ q^F_x be the forward crossing paths (with A_1=B_x(s_1), A_2=∂ B_x(s_2)) of loops in ℒ_1/2 (recall that m is fixed and s_k=r_m^(4d)^k+1). Let 𝒬_x^F be the collection of subsets of {1,2,...,q^F_x}. For any Q∈𝒬_x^F, we denote by 𝔏^F_x(Q) the collection of all forward crossing paths η^F_x,i with i∈ Q.
* We denote by 𝔏_x^inv the collection of involved loops ℓ with ran(ℓ)⊂B_x(s_2) (recall the definition of “involved loops” in Definition <ref>).
* For any z∈∂ B_x(s_1) and Q∈𝒬_x^F, we denote by Φ_x,z^1=Φ_x,z^1(Q) the cluster of ∪ (𝔏^F_x(Q)∪𝔏_x^inv) containing z. Let Φ_x,z^2=Φ_x,z^2(Q) be the collection of points y∈ B_x(s_0)∩∂B̂(n)∖Φ_x,z^1 such that I_{y,y^in}⊂γ_x^p∪Φ_x,z^1. Then we define Φ_x,z=Φ_x,z(Q):=(Φ_x,z^1,Φ_x,z^2) and
Φ_x,z =Φ_x,z(Q):= Φ_x,z^1 ∪⋃_x∈Φ_x,z^2γ_x^p.
For completeness, when z∉∪( 𝔏^F_x(Q)∪𝔏_x^inv), let Φ_x,z^1,Φ_x,z^2,Φ_x,z,Φ_x,z=∅.
* We define a local version of ℒ_𝐀^U (recall Definition <ref>) as follows. For any possible configuration 𝐃=(𝐃^1,𝐃^2) of some Φ_x,z(Q), let ℒ_x,𝐃^LU be the point measure composed of the following types of loops in ℒ_1/2, which are contained in B_x(s_2):
* involved loops ℓ with ran(ℓ)∩𝐃^1=∅;
* loops ℓ with ran(ℓ)∩B(n)=∅;
* point loops including some point y∈ [∂B̂(n)∖ B_x(s_0)]∪ [𝐃^1∩∂B̂(n)];
* point loops that include some point y∈ B_x(s_0)∩∂B̂(n)∖ (𝐃^1∪𝐃^2) and do not intersect I_{y,y^in}∩𝐃^1.
* We define ℒ_x,z^LU=ℒ_x,z^LU(Q) as ℒ_x,𝐃^LU on the event {Φ_x,z(Q)=𝐃}.
* Parallel to (<ref>), we define
𝐃:= 𝐃^1∪⋃_y∈𝐃^2γ_y^p(𝐃^1),
where {γ_x^p(𝐃^1)}_x∈𝐃^2 is independent of ℒ_1/2, and has the same distribution as {γ_x^p}_x∈𝐃^2 given ∩_x∈𝐃^2{I_{x,x^in}⊂γ_x^p∪𝐃^1}.
* Let Q^ be the collection of integers i∈ [1,q^F_x] such that η^F_x,i is contained in an involved loop. Note that Q^∈𝒬_x^F. We denote Φ_x,z^:=Φ_x,z(Q^), Φ_x,z^i,:=Φ_x,z^i(Q^) for i∈{1,2}, Φ_x,z^:=Φ_x,z(Q^) and ℒ_x,z^LU,:= ℒ_x,z^LU(Q^).
Here are some useful relations between Ψ_n and Φ_x,z^:
* If we delete all loops ℓ included in Ψ_n^1 with ran(ℓ)∩B_x(s_1)=∅, and all backward crossing paths with A_1= B_x(s_1) and A_2=∂ B_x(s_2), then the remaining part of Ψ_n^1 that intersects B_x(s_1) is composed of several clusters of the form Φ_x,z^1, for z∈∂ B_x(s_1) (but not every Φ_x,z^1, necessarily intersects Ψ_n^1). Let U_x^ be the collection of z∈∂ B_x(s_1) such that Φ_x,z^1,⊂Ψ_n^1. Since the deleted loops and paths are disjoint from B(s_1), we have
Ψ_n^1∩B(s_1) = ∪_z∈ U_x^Φ_x,z^1,∩B(s_1).
Thus, if y∈ B_x(s_0)∩∂B̂(n)∖Ψ_n^1 satisfies I_{y,y^in}⊂γ_y^p∪Ψ_n^1, then there exists some z∈ U_x^ with I_{y,y^in}⊂γ_y^p∪Φ_x,z^1,. In addition, by (<ref>) one has y ∉Φ_x,z^1, for all z∈ U_x^. These two facts yield that
Ψ_n^2∩ B_x(s_0)⊂∪_z∈ U_x^Φ_x,z^2,.
By (<ref>) and (<ref>), one has
Ψ_n∩ B_x(s_0) ⊂∪_z∈ U_x^Φ_x,z^∩ B_x(s_0).
* We claim that every loop ℓ∈ℒ^U with ran(ℓ)⊂B_x(s_2) is in one of the following cases:
* ℓ is a point loop including some y∈Ψ_n^1∩ B_x(s_0)∩∂B̂(n);
* ℓ is contained in ℒ_x,z^LU, for all z∈ U_x^.
To verify the claim, it suffices to check each type of loops ℓ with ran(ℓ)⊂B_x(s_2) in Definition <ref> as follows.
* involved loops ℓ with ran(ℓ)∩Ψ_n^1=∅: For any z∈ U_x^, as mentioned in Item (1), one has Φ_x,z^1,⊂Ψ_n^1. Therefore, we have ran(ℓ)∩Φ_x,z^1,=∅, and thus ℓ∈ℒ_x,z^LU,.
* loops ℓ with ran(ℓ)∩B(n)=∅: Obviously, one has ℓ∈ℒ_x,z^LU, for all z∈ U_x^.
* point loops ℓ including some y∈Ψ_n^1∩∂B̂(n): If y∈ B_x(s_0), these loops are in Case (a) of the claim. Otherwise, one has y∈∂B̂(n) ∖ B_x(s_0), and therefore ℓ∈ℒ_x,z^LU, for all z∈ U_x^.
* point loops ℓ that include some y∈∂B̂(n)∖ (Ψ_n^1∪Ψ_n^2) and do not intersect I_{y,y^in}∩Ψ_n^1: For any z∈ U_x^, since y∉Ψ_n^1, one has y∉Φ_x,z^1,. In addition, since y∉Ψ_n^2, we have I_{y,y^in}⊄γ_y^p∪Ψ_n^1, which yields I_{y,y^in}⊄γ_y^p∪Φ_x,z^1,, and thus y∉Φ_x,z^2,. Furthermore, since ℓ does not intersect I_{y,y^in}∩Ψ_n^1, ℓ does not intersect I_{y,y^in}∩Φ_x,z^1, either. These three facts imply that ℓ∈ℒ_x,z^LU,.
To sum up, we conclude the claim.
* We have the following inclusion: for any y∈ B_x(r_m),
{ y[]∪ℒ^U_𝐀·1_ℓ⊂B_x(s_2)Ψ_n∩ B_x(s_0) }⊂⋃_z∈ U_x^{ y[]∪ℒ_x,z^LU,Φ_x,z^∩ B_x(s_0) }.
In fact, when the event on the LHS happens, in ℒ^U_𝐀·1_ℓ⊂B_x(s_2) there is a finite sequence of loops ℓ_i for 1≤ i≤ M such that ℓ_1 intersects y, ℓ_M intersects Ψ_n∩ B_x(s_0), and ran(ℓ_i)∩ran(ℓ_i+1)≠∅ for all 1≤ i≤ M-1. Let i_†:=min{i∈ [1,M]:ℓ_i is a point loop including some v∈Ψ_n^1∩ B_x(s_0)∩∂B̂(n)}, where we set min∅=∞ for completeness. There are two cases as follows.
* If i_†=∞, by Item (2), we know that y is connected to Ψ_n∩ B_x(s_0) by ∪ℒ_x,z^LU, for all z∈ U_x^. Combined with (<ref>), this yields that the event on the RHS of (<ref>) happens.
* If i_†∈ [1,M], then by (<ref>), ℓ_i_† is a point loop including some v_†∈Φ_x,z_†^1,∩ B_x(s_0)∩∂B̂(n) for some z_†∈ U_x^. Note that y is connected to Φ_x,z_†^∩ B_x(s_0) by ∪{ℓ_1,...,ℓ_i_†}. By Item (2) and the minimality of i_†, we have ℓ_i∈ℒ_x,z_†^LU, for all 1≤ i< i_†. Thus, since ℓ_i_† is also in ℒ_x,z_†^LU, (by v_†∈Φ_x,z_†^1,∩∂B̂(n)), y is connected to Φ_x,z_†^∩ B_x(s_0) by ∪ℒ_x,z_†^LU,, which implies the occurrence of the event on the RHS of (<ref>).
To sum up, we conclude (<ref>).
Recall that s_k=r_m^{(4d)^{k+1}}. We also introduce a local version of Definition <ref>:
For any x∈ℤ^d, m≥ 1 and tuple 𝐃=(𝐃^1,𝐃^2) (which is a possible configuration of some Φ_x,z(Q)), we define
Δ_x,m^loc(𝐃):= 𝔼(∑_y∈ B_x(r_m)1_y[]ℒ_x,𝐃^LU𝐃∩ B_x(s_0) ).
For l≥ 1, we say 𝐃 is (x,m,l)-locally nice if Δ_x,m^loc(𝐃)≤ r_m^4log^l(r_m).
Let V_x=V_x(Q) be the collection of z∈∂ B_x(s_1) such that Φ_x,z^1 intersects B_x(s_0+1).
* For any m∈ [1,m_0-1], w∈[-D_N,D_N )^d∩ℤ^d and x∈ F(w), we say x is m-locally good if for any Q∈𝒬_x^F, the following events occur:
* |V_x|≤ 2log^6(r_m).
* Every tuple Φ_x,z is (x,m,9)-locally nice. I.e., for any z∈∂ B_x(s_1),
Δ_x,m^loc(Φ_x,z)≤ r_m^4log^9(r_m).
* We say x is m-locally bad if it is not m-locally good.
* For convenience, we also say a point x is 0-qualified (resp. 0-unqualified) if x is m-locally good (resp. m-locally bad). We remind the readers that the k-unqualified points defined in Definition <ref> are only valid for k≥ 1.
* We denote the number of m-locally bad points in Ψ_n^* by ψ_n^0UQ.
Recall the notation Q^ before Remark <ref>. We denote V_x^:=V_x(Q^).
For any x∈ℤ^d, if x is m-bad, then x is m-locally bad.
Arbitrarily fix a configuration of Ψ_n, say 𝐀=(𝐀^1,𝐀^2), such that x is m-bad. I.e., Δ_x,m(𝐀)>r_m^4log^16(r_m). On the event {Ψ_n=𝐀}, U_x^, V_x^ and Φ_x,z^ for z∈ U_x^ are all deterministic. For any y∈ B_x(r_m), if {y[]∪ℒ^U_𝐀Ψ_n∩ B_x(s_0)} happens, then either y is connected to Ψ_n∩ B_x(s_0) by ∪ℒ_𝐀^U·1_ran(ℓ)⊂B_x(s_2), or y is connected to ∂ B_x(s_2) by ∪ℒ^U_𝐀. Recall that the former event implies the one on the RHS of (<ref>). Thus, we have
Δ_x,m(𝐀)
≤ 𝔼(∑_y∈ B_x(r_m)1_y[]∪ℒ^U_𝐀·1_ran(ℓ)⊂B_x(s_2)𝐀∩ B_x(s_0) )
+ ∑_y∈ B_x(r_m)ℙ[ y []∪ℒ^U_𝐀∂ B_x(s_2)]
≤ ∑_z∈ U_x^Δ_x,m^loc( Φ_x,z^) + ∑_y∈ B_x(r_m)ℙ[ y []∪ℒ^U_𝐀∂ B_x(s_2)].
We denote Û_x^:= U_x^∩ V_x^. For any z∈ U_x^, if Δ_x,m^loc( Φ_x,z^)≠ 0, then we have Φ^_x,z∩ B_x(s_0)≠∅, which implies that Φ_x,z^1,∩ B_x(s_0+1)≠∅, and thus z∈Û_x^. Therefore, the first term on the RHS of (<ref>) is equal to ∑_z∈Û_x^Δ_x,m^loc( Φ_x,z^). In addition, by (<ref>) and |B_x(r_m)|≤ Cr_m^d, the second term on the RHS of (<ref>) is bounded from above by Cr_m^ds_2^-1/2<r_m^-30d^3. In conclusion,
Δ_x,m(𝐀) ≤∑_z∈Û_x^Δ_x,m^loc(Φ_x,z^)+ r_m^-30d^3.
We conclude this lemma by proving its contrapositive statement as follows. Assume that x is m-locally good. Then one has |Û_x^|≤ |V_x^| ≤ 2log^6(r_m) and Δ_x,m^loc(Φ_x,z^)≤ r_m^4log^9(r_m) for all z∈Û_x^. Thus, by (<ref>), we obtain that x is m-good since
Δ_x,m(𝐀)≤ 2log^6(r_m)· r_m^4log^9(r_m)< r_m^4log^16(r_m).
Next, we show the inheritability of 0-qualified points. I.e., conditioned on the event that x is 1-qualified, x is also m-locally good (i.e., 0-qualified) with uniformly high probability. Recall that the inheritability of k-qualified points (k≥ 1) has been proved in Lemma <ref>.
We first record a technical lemma, where the bound is suboptimal but suffices
for our purpose. The proof can be carried out in the same way as <cit.>, so we just omit it.
There exist c(d),C(d)>0 such that for any R>1 and any y∈∂ B(R-1),
ℙ( τ_y <τ_∂ B(R)) ≥ ce^-Clog^2(R).
As a direct consequence, for any z∈∂B̂(R),
ℙ( τ_∂ B(R)=τ_z) ≥ ce^-Clog^2(R).
The following lemma presents the inheritability of 0-qualified points. Recall the notations Ω_x,j and ℙ(·|ω_x,j) in the paragraphs before Remark <ref>.
For any d>6, there exist C(d)>0,c(d)>0 such that for any m∈ [1,m_0-1], w∈[-D_N,D_N )^d∩ℤ^d, x∈ F(w), and any configuration ω_x,1∈Ω_x,1 such that x is 1-qualified with respect to ω_x,1, we have
ℙ( x is m-locally bad|ω_x,1) ≤ Ce^-clog^7(r_m).
Recall the notations η^F_x,i, q_x^F, 𝒬_x^F, 𝔏_x^F, 𝔏_x^inv and Φ_x,z below the first paragraph of Section <ref>. Also recall V_x in the sentence before Definition <ref>.
Since x is 1-qualified, we have q_x^F≤log^6(s_1), which implies |𝒬_x^F|≤ 2^log^6(s_1). We denote the starting point and the ending point of η^F_i,x by y_i and z_i respectively, where y_i and z_i are deterministic given ω_x,1. For any Q∈𝒬_x^F, let 𝖡_1^Q:={V_x(Q)>2log^6(s_1)} and 𝖡_2,z^Q:= {Φ_x,z(Q) is not (x,m,9)-nice} for z∈∂ B_x(s_1). By Definition <ref>, if x is m-locally bad, then there exists Q∈𝒬_x^F such that either 𝖡_1^Q happens, or 𝖡_2,z^Q happens for some z∈∂ B_x(s_1).
On the event 𝖡_1^Q, since each η^F_x,i can be contained in at most one cluster of the form Φ_x,z^1, the number of clusters Φ_x,z^1 that intersect B_x(s_0+1) and do not contain any forward crossing path η^F_x,i is at least 2log^6(s_1)-log^6(s_1)=log^6(s_1). Since these clusters do not share a common glued loop (we excluded clusters with forward crossing paths exactly to achieve this property), their existence ensures that there are at least log^6(s_1) disjoint collections of glued loops certifying {B(s_0+1)[]∂ B(s_1)}. Thus, by the BKR inequality and (<ref>), we have (recalling s_0=r_m^4d and s_1=r_m^16d^2)
ℙ( 𝖡_1^Q|ω_x,1) ≤ {ℙ[B(s_0+1)[]∂ B(s_1) ] }^log^6(s_1)
≤ ( Cr_m^4d(d-1)· s_1^-1/2) ^log^6(s_1)≤ Ce^-clog^7(r_m).
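Indeed, since s_1=r_m^{16d^2}, we have Cr_m^{4d(d-1)}· s_1^{-1/2}≤ Cr_m^{-4d^2-4d}≤ r_m^{-1} once K (and hence r_m≥ K) is large enough, while log^6(s_1)=(16d^2)^6log^6(r_m); raising the former quantity to the latter power yields the stated bound Ce^-clog^7(r_m).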
Now let us focus on 𝖡_2,z^Q. Similar to Item (2) in Remark <ref>, we know that given {Φ_x,z(Q)=𝐃}, the conditional distribution of ℒ^LU_x,z is the same as ℒ_x,𝐃^LU without conditioning. Moreover, ℒ^LU_x,z is independent of the conditioning ω_x,1 since all loops in ℒ^LU_x,z are contained in B_x(s_k+2). Thus, for any 𝐃 such that {Φ_x,z(Q)=𝐃}⊂𝖡_2,z^Q, we have
𝔼( ∑_y∈ B_x(r_m)1_y[]∪ℒ^LU_x,zΦ_x,z∩ B_x(s_0) | Φ_x,z(Q)=𝐃,ω_x,1)
= Δ_x,m^loc(𝐃) ≥ r_m^4log^9(r_m).
Recall that for any random variable Y with 0≤ Y≤ a a.s. and 𝔼(Y)≥ b, one has ℙ(Y≥ b')≥ (b-b')/a for all b'∈ (0,b). Thus, by taking a=|B_x(r_m)|≤ Cr_m^d, b=r_m^4log^9(r_m) and b'=(1/2)r_m^4log^9(r_m), we have
ℙ( ∑_y∈ B_x(r_m)1_y[]ℒ^LU_x,zΦ_x,z∩ B_x(s_0) ≥ (1/2)r_m^4log^9(r_m) | Φ_x,z(Q)=𝐃,ω_x,1)
≥ cr_m^4-dlog^9(r_m).
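For completeness, the elementary fact recalled above has a one-line proof:
𝔼(Y)≤ b'ℙ(Y<b')+aℙ(Y≥ b')≤ b'+aℙ(Y≥ b'), so that ℙ(Y≥ b')≥ (𝔼(Y)-b')/a≥ (b-b')/a.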
Recall that 𝐂(y) is the cluster of ∪ℒ_1/2 containing y. Note that all the points y∈ B_x(r_m) that are connected to Φ_x,z∩ B_x(s_0) are connected to each other. Therefore, by taking integral over the event 𝖡_2,z^Q conditioning on ω_x,1 for both sides of (<ref>), we have
ℙ( max_y∈ B_x(r_m)|𝐂(y) ∩ B_x(r_m) |> (1/2)r_m^4log^9(r_m) | ω_x,1)
≥ cr_m^4-dlog^9(r_m)·ℙ( 𝖡_2,z^Q|ω_x,1).
Now let us control the LHS of (<ref>). Recall L(A_1,A_2)(ℓ), κ(ℓ;A_1,A_2), and τ_i for 1≤ i≤ 2κ(ℓ) in Section <ref>. We denote by 𝔏(ω_x,1) the collection of loops ℓ with κ(ℓ;B_x(s_0),∂ B_x(s_1))=q_x^F such that there exists ϱ∈ L(B_x(s_0),∂ B_x(s_1))(ℓ) satisfying ϱ(τ_2i)= y_i and ϱ(τ_2i+1)= z_i for 1≤ i≤ q_x^F. In fact, for any ℓ∈𝔏(ω_x,1), the multiplicity (recalling J in (<ref>)) of its projection on ℤ^d is upper-bounded by the number of crossings κ=q_x^F, and therefore is at most log^6(s_1). Thus, by (<ref>) and the relation between loops on ℤ^d and ℤ^d mentioned in Section <ref>, we have
μ[ 𝔏(ω_x,1)] ≥log^-6(s_1) ∏_i=1^q_x^Fℙ_y_i( τ_∂ B_x(s_2)= τ_z_i) ·ℙ_z_i( τ_∂ B_x(s_1)= τ_y_i<∞).
For the first part of the product on the RHS of (<ref>), by the strong Markov property, we have
ℙ_y_i( τ_∂ B_x(s_2)= τ_z_i)
≥ ℙ_y_i( τ_0<τ_∂ B_x(s_2))·ℙ_0(τ_∂ B_x(s_2)= τ_z_i)
= ℙ_0( τ_y_i<τ_∂ B_x(s_2))·ℙ_0(τ_∂ B_x(s_2)= τ_z_i) (by reversing the random walk)
≥ ce^-Clog^2(s_2) (by Lemma <ref>).
For the second part, note that we can find v_i∈∂ B_x(s_2) such that y_i∈B̂_v_i(s_2-s_1) for each y_i∈ B_x(s_1). Therefore, by the strong Markov property, we have
ℙ_z_i( τ_∂ B_x(s_1)= τ_y_i<∞)
≥ ℙ_z_i( τ_∂ B_v_i(0.5s_2)< τ_B_x(s_1)) ·min_v∈∂ B_v_i(0.5s_2)ℙ_v( τ_v_i<τ_∂ B_v_i(s_2-s_1))
·ℙ_v_i( τ_∂ B_v_i(s_2-s_1)=τ_y_i)
≥ c e^-Clog^2(s_2) (by the invariance principle and Lemma <ref>).
Combining (<ref>), (<ref>), (<ref>) and the fact that q_x^F≤log^6(s_1), we get
μ[ 𝔏(ω_x,1)] ≥ c e^-Clog^8(s_2).
Let 𝔏^c(ω_x,1) be the collection of loops in the complement of 𝔏(ω_x,1) that cross B_x(s_k+2)∖ B_x(s_k+1). By (<ref>), we have
μ[ 𝔏^c(ω_x,1)]≤ Cs_1^d-2s_2^2-d.
We denote by 𝖠_† the event that in ℒ_1/2 there is exactly one loop in 𝔏(ω_x,1) and there is no loop in 𝔏^c(ω_x,1). By (<ref>) and (<ref>),
ℙ(𝖠_†)= (1/2)μ[ 𝔏(ω_x,1)] · e^{-(1/2)μ[ 𝔏(ω_x,1)]}· e^{-(1/2)μ[ 𝔏^c(ω_x,1)]}≥ c e^-Clog^8(s_2).
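Here we used that ℒ_1/2 is a Poisson point process with intensity (1/2)μ, so the numbers of its points in the disjoint collections 𝔏(ω_x,1) and 𝔏^c(ω_x,1) are independent Poisson variables with means (1/2)μ[ 𝔏(ω_x,1)] and (1/2)μ[ 𝔏^c(ω_x,1)] respectively; the displayed probability is then the product of ℙ[Pois((1/2)μ[ 𝔏(ω_x,1)])=1] and ℙ[Pois((1/2)μ[ 𝔏^c(ω_x,1)])=0].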
Since 𝖠_† implies the conditioning ω_x,1, by Lemma <ref> and (<ref>), we have
ℙ( max_y∈ B_x(r_m)|𝐂(y) ∩ B_x(r_m) |> (1/2)r_m^4log^9(r_m) | ω_x,1)
≤ ℙ( max_y∈ B_x(r_m)|𝐂(y) ∩ B_x(r_m) |> (1/2)r_m^4log^9(r_m) ) [ ℙ(𝖠_†) ]^-1≤ Ce^-clog^9(r_m).
Combined with (<ref>), this gives us
ℙ(𝖡_2,z^Q| ω_x,1) ≤ Ce^-clog^9(r_m).
Finally, we conclude the desired bound as follows:
ℙ( x is m-locally bad|ω_x,1)
≤ ∑_Q∈𝒬_x^Fℙ( 𝖡_1^Q|ω_x,1) + ∑_Q∈𝒬_x^F∑_z∈∂ B_x(s_1)ℙ(𝖡_2,z^Q| ω_x,1)
≤ C|𝒬_x^F|·(e^-clog^7(r_m)+s_1^d-1e^-clog^9(r_m)) (by (<ref>) and (<ref>))
≤ Ce^-clog^7(r_m) (by |𝒬_x^F|≤ 2^log^6(s_1)).
§.§ Exploration processes
In this subsection, we introduce the exploration process 𝒯_n, which completes a construction of Ψ_n^1 (recall Definition <ref>) upon termination. As an additional feature, during the process 𝒯_n we will keep track of the order in which loops appear and record some statistics, which will be very useful for the proof of Lemma <ref>.
Recall that we already fixed m∈ [1,m_0-1] and w∈[-D_N,D_N )^d∩ℤ^d. Now we also fix an arbitrary integer k∈ [0,k_0-1]. Recall the definitions of F(w) and k_0 in (<ref>) and (<ref>) respectively. We divide F(w) into F^I(w):= {x∈ F(w): B_x(s_k+2)∩B(n)=∅} and F^II(w):=F(w)∖ F^I(w).
Unless otherwise specified, in the construction of 𝒯_n, when we refer to a forward or backward crossing path, we always assume A_1=∪_x∈ F(w) B_x(s_k+1) and A_2=∪_x∈ F(w)∂ B_x(s_k+2). Note that A_1 and A_2 are disjoint since |x_1-x_2|≥ 2D_N>2s_{k+2} for any distinct points x_1,x_2∈ F(w) and any k∈ [0,k_0-1]. As a result, for any x∈ F(w), the forward and backward crossings in the annulus B_x(s_k+2)∖ B_x(s_k+1) are the same as with respect to A_1 = B_x (s_k+1) and A_2 = ∂ B_x(s_k+2). Recall the definition of involved loops in Definition <ref>. We say a crossing path is involved if it is included in some involved loop.
The exploration process 𝒯_n is described as follows.
Step 0: We define the collection 𝔏^w by
𝔏^w:={ℓ∈ℒ_1/2:ℓ intersects B_x(s_k+1) and ∂ B_x(s_k+2) for some x∈ F(w) }.
We sample every backward crossing path η^B of every loop in 𝔏^w except its Brownian excursions at ∪_x∈ F(w)∂B̂_x(s_k+2) (i.e. we reserve the randomness of these Brownian excursions and only sample the remaining part of η^B). For each forward crossing path η^F of a loop in 𝔏^w, its starting point and ending point are now fixed. Thus, now we can determine the collection of (k+1)-unqualified points in F(w) and denote it by 𝒟.
We also sample all forward crossing paths η^F contained in ∪_x∈ F^II(w)B_x(s_k+2). Note that a loop ℓ∈𝔏^w is involved if and only if it contains a forward or backward crossing path that intersects B(n), and that the Brownian excursions of a fundamental loop ℓ do not make any difference on whether ℓ intersects B(n). In addition, every η^F contained in ∪_x∈ F^I(w)B_x(s_k+2) cannot intersect B(n) (since B_x(s_k+2)∩B(n)=∅ for all x∈ F^I(w)). Thus, now we can determine which loop in 𝔏^w is involved. Let ℰ be the collection of all involved crossing paths sampled up until now.
Since B_x(s_k+2)∩B(n)=∅ for all x∈ F^I(w), every loop contained in B_x(s_k+2) is not involved. In light of this, we say a point x∈ F^I(w) is inactive if there is no involved forward crossing path in B_x(s_k+2). We also say the remaining points in F(w) are active. Especially, all points in F^II(w) are active. We denote by 𝒜 the collection of all active points. Note that 𝒜 is already determined. Then we sample the Brownian excursions of all backward crossing paths in ℰ at ∪_F(w)∖𝒜∂B̂_x(s_k+2). See Figure <ref> for an illustration of Step 0.
The following statistics for 𝒯_n will be recorded. We will provide their initial values, and then describe how they are updated as the construction of 𝒯_n proceeds:
* 𝐂_p with 𝐂_0={0}⊂ℤ^d: the existing cluster. I.e., the cluster containing 0 and composed of the collection of all involved loops (or their crossing paths) which have been sampled.
P.S.: The subscript p of 𝐂_p indicates that 𝐂_p is the existing cluster after Step p. The subscript p in notations for other statistics also has the same meaning. As we will show later, there is an intermediate cluster 𝐂_p^† in each step. We also call this intermediate cluster “existing cluster” although it will only be used in the construction but not in the analysis later.
While two crossing paths of a loop may not be connected to each other by themselves, they are connected in the loop cluster (since they are from the same loop). Thus, when referring to the cluster including paths in ℰ, we always consider all crossing paths from the same loop as connected.
* F_p with F_0=𝒜: the collection of all unvisited active points x∈ F(w).
P.S.: We hereby introduce the definition of a visited point x. For any p∈ℕ^+ and x∈ F_p-1 (i.e. x is active and is not visited up to Step p-1), we say x is visited in Step p if the aforementioned existing cluster 𝐂_p^† (which grows as 𝒯_n progresses) intersects ∂B̂_x(s_k+2). During the construction of 𝒯_n, we say x is unvisited if it is not visited yet.
Intuitively, “x is visited in Step p” indicates that with a positive probability x is connected to 𝐂_p^† by the involved loops and forward crossing paths in B_x(s_k+2) (which justifies our choice of the word “visited”). See Lemma <ref> for a precise statement on a uniform lower bound on this probability. Note that the reason we maintain the randomness of the Brownian excursions at ∪_x∈𝒜∂B̂_x(s_k+2) in Step 0 (also in some subsequent steps) is to ensure this uniform lower bound.
* N_p^vis with N_0^vis=0: the number of visited points.
* N_p^k-UQ with N_0^k-UQ=0: the number of visited, k-unqualified points.
* N_p^vis-𝒟 with N_0^vis-𝒟=0: the number of visited points in 𝒟.
* N_p^con with N_0^con=0: the number of visited points x∈𝒜 such that x is connected to the existing cluster 𝐂_p^† by ∪ℑ_p^x, where ℑ_p^x is the collection of the involved loops and involved forward crossing paths in B_x(s_k+2) and the Brownian excursions of sampled loops at ∂B̂_x(s_k+2).
* N_p^con-𝒟 with N_0^con-𝒟=0: the number of points counted by both N_p^vis-𝒟 and N_p^con.
Step p (p ≥ 1): Suppose that we have completed the (p-1)-th step of 𝒯_n and as a result have obtained 𝐂_p-1, F_p-1, N_p-1^vis, N_p-1^k-UQ, N_p-1^vis-𝒟, N_p-1^con and N_p-1^con-𝒟. Now we describe the p-th step as follows.
Firstly, we sample all unsampled fundamental loops ℓ in the following collection except their Brownian excursions at ∪_x∈𝒜∂B̂_x(s_k+2):
{ℓ∉𝔏^w: ran(ℓ)∩B(n)≠∅, ran(ℓ)∩𝐂_p-1≠∅ and ∀ x∈𝒜, ran(ℓ) ⊄B_x(s_k+2) }.
P.S.: We say a fundamental loop is unsampled if none of its edges has been sampled.
Secondly, we sample all unsampled glued point loops γ_y^p with 𝐂_p-1∩γ_y^p≠∅ for y∈ B(n-1)∖∪_x∈𝒜B̂_x(s_k+2). Moreover, for each y∈∪_x∈ F_p-1∂B̂_x(s_k+2)∩ B(n-1), we sample whether the glued point loop γ_y^p or the Brownian excursions of all sampled loops at y intersect 𝐂_p-1 (but do not sample the whole configuration of γ_y^p or these Brownian excursions).
Thirdly, for each y∈ℤ^d and each of its incident edges e with I_e⊂B(n) and I_e⊄∪_x∈𝒜B_x(s_k+2), if y∈𝐂_p-1 then we let v_e,y be the furthest point in I_e connected to y by 𝐂_p-1∩ I_e. We sample the cluster of the glued loop γ_e^e containing v_e,y.
Let 𝐂_p^† be the cluster composed of 𝐂_p-1 and all these sampled loops (or partial loops). (P.S.: For each y∈∪_x∈ F_p-1∂B̂_x(s_k+2)∩ B(n-1), if γ_y^p is sampled to intersect 𝐂_p-1∩ I_e for some edge e, then we include I_e in 𝐂_p^†. In addition, if the Brownian excursions of all sampled loops at y are sampled to intersect 𝐂_p-1∩ I_e for some edge e, then we add both I_e and the sampled part of every loop that intersects y to 𝐂_p^†.)
There are two sub-cases for the subsequent construction:
Case p.1: If 𝐂_p^† does not intersect ∪_x∈ F_p-1B̂_x(s_k+2), then we set 𝐂_p=𝐂_p^†, maintain all other statistics and go to the next step.
Case p.2: Otherwise, we enumerate all points x∈ F_p-1 with B̂_x(s_k+2) ∩𝐂_p^†≠∅ as {x_l}_1≤ l≤ l_p. Then we sample all forward crossing paths and loops contained in every B_x_l(s_k+2), and sample all Brownian excursions of sampled loops at every ∂B̂_x_l(s_k+2). Let 𝐂_p be the cluster composed of 𝐂_p-1, and all involved loops and involved crossing paths sampled up until now. If 𝐂_p=𝐂_p-1, then we maintain all statistics and stop the process. Otherwise, we update the values of our statistics in the following way and then go to the next step:
- F_p:=F_p-1∖{x_1,...,x_l_p} and N^vis_p:= N^vis_p-1+l_p.
- N_p^k-UQ:=N_p-1^k-UQ+|{1≤ l≤ l_p:x_l is k-unqualified}|.
- N_p^vis-𝒟:=N_p-1^vis-𝒟+|{1≤ l≤ l_p:x_l∈𝒟}|.
- N_p^con:=N_p-1^con+|{1≤ l≤ l_p:x_l and 𝐂_p^† are connected by ∪ℑ_p^x_l}|.
- N_p^con-𝒟:=N_p-1^con-𝒟+|{1≤ l≤ l_p:x_l is counted by both N_p^vis-𝒟 and N_p^con}|.
This completes the construction of our exploration process. It is easy to see that each process 𝒯_n a.s. stops after a finite number of steps, which is denoted as p_*. Let 𝐂_*, F_*, N_*^vis, N_*^k-UQ, N_*^vis-𝒟, N_*^con and N_*^con-𝒟 be the corresponding statistics of 𝐂_p, F_p, N_p^vis, N_p^k-UQ, N_p^vis-𝒟, N_p^con and N_p^con-𝒟 when 𝒯_n stops.
Recall Ψ_n^* in the paragraph after Remark <ref>.
(1) If an involved loop intersects 𝐂_*, it is included in 𝐂_*.
(2) If 𝐂_* intersects an involved loop or involved forward crossing path in B_x(s_k+2) for some x∈ F(w), then x is visited.
(3) 𝒯_n constructs Ψ_n^1 eventually. I.e., 𝐂_*=Ψ_n^1.
(4) For any x∈Ψ_n^*∩ F(w), x must be visited in some step of 𝒯_n.
We prove all these four items one by one.
(1) We divide the involved loops into four types and prove Item (1) separately:
Type 1 (All involved loops ℓ∈ℒ_1/2^f with ℓ∉𝔏^w and ran(ℓ) ⊄B_x(s_k+2) for all x∈𝒜): If such a loop ℓ intersects 𝐂_*, then it also intersects the existing cluster in some step of 𝒯_n, and thus is sampled and contained in 𝐂_*.
Type 2 (All involved edge loops and point loops that are not contained in ∪_x∈𝒜B_x(s_k+2)): For the same reason as in Type 1, these loops are included by 𝐂_*.
Type 3 (All involved loops ℓ contained in some B_x(s_k+2), x∈𝒜): Since ℓ intersects 𝐂_*, we know that ℓ intersects 𝐂_p'^† for some p'∈ [0,p_*-1]. Since 𝐂_p'^† is connected, we see that 𝐂_p'^† intersects ∂B̂_x(s_k+2) and as a result, x is visited. Thus, ℓ is sampled and included in 𝐂_*.
Type 4 (All involved loops ℓ∈𝔏^w): We assume that 𝐂_* intersects some involved loops ℓ∈𝔏^w, and we next prove that ℓ is included in 𝐂_*. Since ℓ is fully decomposed into backward and forward crossing paths, 𝐂_* either intersects some backward crossing path η^B or forward crossing path η^F of ℓ. We next consider these two (possibly overlapping) subcases separately.
(a) 𝐂_* intersects η^B: By the construction of 𝒯_n, η^B (and also every other backward crossing path of ℓ) must be contained in 𝐂_*. Therefore, for every x_♢∈ F(w) such that B_x_♢(s_k+2) contains some forward crossing path of ℓ, x_♢ will be visited, which implies that ℓ must be completely sampled, and thus is included in 𝐂_*.
(b) 𝐂_* intersects η^F: Suppose that η^F is contained in B_x(s_k+2) for some x∈ F(w). For the same reason as in Type 3, one can show that x is visited, and thus η^F is sampled and contained in 𝐂_*. With the same argument as in Subcase(a), ℓ is also included in 𝐂_*.
To sum up, we conclude Item (1).
(2) Since there is no involved loop in B_x(s_k+2) for any x∈ F(w)∖𝒜, this follows directly by combining the above analysis for Type 3 and Subcase (b) for Type 4.
(3) It is clear from the definition of 𝒯_n that 𝐂_* is composed of involved loops. Therefore, 𝐂_*⊂Ψ_n^1. If 𝐂_*⊊Ψ_n^1, since 𝐂_* and Ψ_n^1 are both connected subsets, in Ψ_n^1 there exists an involved loop ℓ that intersects 𝐂_* and is not included in 𝐂_*. However, by Item (1), such ℓ does not exist. By contradiction, we get Item (3).
(4) For any x∈Ψ_n^*∩ F(w), B_x(1) must intersect some involved loop ℓ or forward crossing path η^F in B_x(s_k+2), which is included in Ψ_n^1(=𝐂_*, by Item (3)). By Item (2), the existence of such ℓ or η^F implies that x is visited. Now the proof is complete.
Recall ζ_w in (<ref>). For any k∈ℕ, we define
ζ_w^kUQ:= |{x∈Ψ_n^*∩ F(w):x is k-unqualified}|.
Almost surely we have
ζ_w^kUQ≤ N_*^kUQ≤ N_*^vis,
N_*^con≤ζ_w≤ N_*^vis,
N_*^con𝒟≤ζ_w^(k+1)UQ.
Proof of (<ref>): Since every point x counted by ζ_w^k-UQ is in Ψ_n^*∩ F(w), by Item (4) of Lemma <ref>, x is visited in some step of 𝒯_n. Thus, x is also counted by N_*^k-UQ since it is k-unqualified. As a result, we obtain ζ_w^k-UQ≤ N_*^k-UQ. The second inequality of (<ref>) is straightforward since every point counted by N_*^k-UQ is visited.
Proof of (<ref>): Recall that for each x counted by N_*^con, x is connected to the existing cluster by the involved loops and involved forward crossing paths in B_x(s_k+2) and the Brownian excursions of sampled loops at ∂B̂_x(s_k+2). This implies that x is counted by ζ_w, and thus N_*^con≤ζ_w. In addition, By Item (4) of Lemma <ref>, for every x counted by ζ_w (i.e. x∈Ψ_n^*∩ F(w)), x must be visited in some step of 𝒯_n, which implies ζ_w≤ N_*^vis.
Proof of (<ref>): For the same reason as proving N_*^con≤ζ_w, the points counted by N_*^con𝒟 are in Ψ_n^*∩ F(w). Thus, all these points are also counted by ζ_w^(k+1)UQ since they are (k+1)-unqualified. Now we also conclude (<ref>).
Recall ℑ_p^x in the definition of N_p^con, and 𝐂_p^† in the construction of Step p. As promised, in the following lemma we prove a uniform lower bound for the probability that a visited point x is counted by N_*^con. Its proof is deferred to Section <ref>.
Let ℭ_p^x be the collection of all possible configurations of 𝒯_n up to sampling 𝐂_p^† such that x is visited in Step p. For any ω∈ℭ_p^x, let ℙ(·|ω) be the conditional measure given that the configuration of 𝒯_n up to sampling 𝐂_p^† is exactly ω.
There exist c(d),C(d)>0 such that for any x∈ F(w), p∈ℕ^+ and ω∈ℭ_p^x, we have
ℙ( x[]ℑ_p^x𝐂_p^†|ω)≥ ce^-Clog^2(s_k+2).
For every (k+1)-qualified point x which is counted by N_*^vis (note that the number of such x is N_*^vis-N_*^vis-𝒟), by the inheritability property of k-qualified points (see Lemmas <ref> and <ref>), the probability that x is k-unqualified is at most Ce^-clog^4(s_k). As a result, N_*^k-UQ-N_*^vis-𝒟 is stochastically dominated by the sum of N_*^vis-N_*^vis-𝒟 i.i.d. Bernoulli random variables with parameter Ce^-clog^4(s_k). Thus, by Hoeffding’s inequality (see e.g. Vershynin <cit.>), we have: for any M>e^log^2.9(s_k),
ℙ[N_*^k-UQ-N_*^vis-𝒟≥ e^-log^2.8(s_k)(N_*^vis-N_*^vis-𝒟), N_*^vis-N_*^vis-𝒟≥ M ]
≤ exp(-M^0.8).
Similarly, by Lemma <ref>, each time when a point y is counted by N_*^vis, at least with ce^-Clog^2(s_k+2) probability it is also counted by N_*^con. Therefore, N_*^con stochastically dominates the sum of N_*^vis i.i.d Bernoulli random variables with parameter ce^-Clog^2(s_k+2). Consequently, for any M>e^log^2.2(s_k), we have
ℙ( N_*^con≤ e^-log^2.1(s_k)N_*^vis, N_*^vis≥ M ) ≤exp(-M^0.8).
For the same reason, we also have: for any M>e^log^2.2(s_k),
ℙ( N_*^con-𝒟≤ e^-log^2.1(s_k)N_*^vis-𝒟, N_*^vis-𝒟≥ M ) ≤exp(-M^0.8).
For any m∈ [1,m_0-1], w∈[-D_N,D_N )^d∩ℤ^d and k∈ [0,k_0-1],
ℙ[ ζ_w≥ L^1.5,ζ_w^kUQ≥ζ_w/e^log^2.2(s_k) , ζ_w^(k+1)UQ<ζ_w/e^log^2.2(s_k+1)]
≤ e^-N.
We denote the events in the LHS of (<ref>), (<ref>) and (<ref>) by 𝖠^k-UQ(M), 𝖠^con(M) and 𝖠^con-𝒟(M) respectively. We claim some inclusion relations as follows:
𝖠_1:= { N_*^vis≥ L^1.5, ζ_w^kUQ≥ e^-log^2.4(s_k)ζ_w+N_*^vis-𝒟,N_*^vis-𝒟≤ (1/2)N_*^vis}
⊂ 𝖠^k-UQ( (1/2)L^1.5)∪𝖠^con(L^1.5),
𝖠_2:= { N_*^vis≥ L^1.5, N_*^vis-𝒟> (1/2)N_*^vis, ζ_w^(k+1)-UQ< ζ_w/e^log^2.2(s_k+1)}
⊂ 𝖠^con-𝒟( (1/2)L^1.5),
𝖠_3:= { N_*^vis≥ L^1.5, ζ_w^kUQ≥ζ_w/e^log^2.2(s_k), ζ_w^(k+1)-UQ< ζ_w/e^log^2.2(s_k+1)}
⊂ 𝖠^con-𝒟( (1/2)L^1.5)∪ 𝖠^k-UQ( (1/2)L^1.5) ∪ 𝖠^con(L^1.5).
We start with confirming (<ref>). Since 𝖠_1 implies N_*^vis≥ L^1.5 and N_*^vis-N_*^vis-𝒟≥1/2N_*^vis≥1/2L^1.5 (where 1/2L^1.5>e^log^2.9(s_k) for all k≤ k_0), on the event 𝖠_1∩[𝖠^kUQ((1/2)L^1.5)∪𝖠^con(L^1.5)]^c, one has
N_*^k-UQ-N_*^vis-𝒟< e^-log^2.8(s_k)(N_*^vis-N_*^vis-𝒟),
N_*^con> e^-log^2.1(s_k)N_*^vis.
Thus, 𝖠_1∩[𝖠^kUQ(1/2L^1.5)∪𝖠^con(L^1.5)]^c implies
ζ_w^kUQ≤ (N_*^k-UQ-N_*^vis-𝒟)+N_*^vis-𝒟 (by (<ref>) and (<ref>))
< e^-log^2.8(s_k)(N_*^vis-N_*^vis-𝒟)+N_*^vis-𝒟 (by (<ref>))
≤ e^-log^2.8(s_k)N_*^vis+N_*^vis-𝒟
< e^log^2.1(s_k)· e^-log^2.8(s_k) N_*^con+N_*^vis-𝒟 (by (<ref>))
≤ e^log^2.1(s_k)· e^-log^2.8(s_k)ζ_w+N_*^vis-𝒟 (by (<ref>)).
In addition, note that 𝖠_1⊂{ζ_w^kUQ≥ e^-log^2.4(s_k)ζ_w+N_*^vis-𝒟}. Therefore, on the event 𝖠_1∩[𝖠^kUQ(1/2L^1.5)∪𝖠^con(L^1.5)]^c, ζ_w^kUQ satisfies
e^-log^2.4(s_k)ζ_w+N_*^vis-𝒟≤ζ_w^kUQ < e^log^2.1(s_k)· e^-log^2.8(s_k)ζ_w+N_*^vis-𝒟,
which is a contradiction and in turn implies that 𝖠_1∩[𝖠^kUQ(1/2L^1.5)∪𝖠^con(L^1.5)]^c=∅ (and thus verifies (<ref>)).
For (<ref>), when 𝖠_2∩[𝖠^con-𝒟(1/2L^1.5)]^c happens, since N_*^vis-𝒟> 1/2N_*^vis≥1/2L^1.5>e^log^2.2(s_k) for all k≤ k_0, we have
N_*^con-𝒟> e^-log^2.1(s_k)N_*^vis-𝒟.
Thus, the event 𝖠_2∩[𝖠^con-𝒟(1/2L^1.5)]^c implies
ζ_w^(k+1)UQ≥ N_*^con-𝒟 (by (<ref>))
> e^-log^2.1(s_k)N_*^vis-𝒟 (by (<ref>))
> 1/2e^-log^2.1(s_k)N_*^vis (by N_*^vis-𝒟> 1/2N_*^vis)
≥ 1/2e^-log^2.1(s_k)ζ_w (by (<ref>)),
which is incompatible with 𝖠_2⊂{ζ_w^(k+1)-UQ< e^-log^2.2(s_k+1)ζ_w}. Consequently, we obtain the inclusion in (<ref>).
For (<ref>), by the definition of 𝖠_2 in (<ref>) one has
𝖠_2^c= {N_*^vis< L^1.5}∪{N_*^vis-𝒟≤12N_*^vis}∪{ζ_w^(k+1)-UQ≥ e^-log^2.2(s_k+1)ζ_w},
where {N_*^vis< L^1.5} and {ζ_w^(k+1)-UQ≥ e^-log^2.2(s_k+1)ζ_w} are both incompatible with 𝖠_3. Therefore, 𝖠_3∩𝖠_2^c implies N_*^vis-𝒟≤1/2N_*^vis. Furthermore, when 𝖠_3∩𝖠_2^c∩[𝖠^con-𝒟( 1/2L^1.5)]^c happens, we have
e^-log^2.4(s_k)ζ_w+N_*^vis-𝒟
< e^-log^2.4(s_k)ζ_w+e^log^2.1(s_k)N_*^con-𝒟 (since {N_*^vis≥ L^1.5}∩[𝖠^con-𝒟(1/2L^1.5)]^c happens)
≤ e^-log^2.4(s_k)· e^log^2.2(s_k)ζ_w^kUQ +e^log^2.1(s_k)N_*^con-𝒟 (since 𝖠_3 happens)
≤ e^-log^2.4(s_k)· e^log^2.2(s_k)ζ_w^kUQ +e^log^2.1(s_k)ζ_w^(k+1)UQ (by (<ref>))
< [e^-log^2.4(s_k)· e^log^2.2(s_k)+e^log^2.1(s_k)· e^log^2.2(s_k)-log^2.2(s_k+1)] ζ_w^kUQ (since 𝖠_3 happens)
< ζ_w^kUQ.
In conclusion, we have 𝖠_3∩𝖠_2^c∩[𝖠^con-𝒟( 1/2L^1.5)]^c⊂𝖠_1, which implies
𝖠_3 ⊂𝖠_1∪𝖠_2 ∪𝖠^con-𝒟(1/2L^1.5).
Combining (<ref>), (<ref>) and (<ref>), we obtain (<ref>).
We denote the event on the LHS of (<ref>) by 𝖠_0. Since ζ_w≤ N_*^vis (by (<ref>)), we have 𝖠_0⊂𝖠_3. Thus, by (<ref>), (<ref>), (<ref>) and (<ref>), we get
ℙ( 𝖠_0)≤ℙ( 𝖠_3) ≤ 2e^-(L^1.5/2)^0.8+e^-(L^1.5)^0.8≤ e^-N.
With Lemma <ref> in hand, we are ready to prove Lemma <ref>.
By Lemma <ref>, we know that ζ_w^mbad≤ζ_w^0UQ. Therefore, by the inequalities L^2/2^d+2K^20d^2m^2D_N^d>L^1.5 and e^-log^2.2(s_0)< (4 K^20d^2m^2)^-1, we have
ℙ[ζ_w≥L^2/2^d+2K^20d^2m^2D_N^d, ζ_w^mbad≥ζ_w/4 K^20d^2m^2]
≤ ∑_k=0^k_0-1ℙ[ ζ_w≥ L^1.5,ζ_w^kUQ≥ζ_w/e^log^2.2(s_k) , ζ_w^(k+1)UQ<ζ_w/e^log^2.2(s_k+1)]
+ℙ[∃ x∈ F(w) and k≥ k_0 such that x is k-unqualified].
By (<ref>) and Lemma <ref>, the RHS of (<ref>) is upper-bounded by s.p.(N).
Since we have confirmed Lemma <ref>, the estimate in (<ref>) is now proved. As a result, Lemma <ref> and Corollary <ref> follow as well.
§.§ Proof of technical lemmas
This subsection includes the proofs of Lemmas <ref> and <ref>, which are related to some estimates about random walks and loop measure.
§.§.§ Proof of Lemma <ref>
We first focus on the proof of (<ref>). By the relation (presented in Section <ref>) between the Brownian motion S̃_t on ℤ̃^d and the continuous-time simple random walk S_t on ℤ^d, it suffices to prove that
ℙ_y[𝔑= l| τ_∂ B_x(s_k+2)=τ_z] ≤ s_k+1^-cl,
where we also denote by 𝔑 the number of times that S_t crosses B_x(s_k+1)∖ B_x(s_k) before hitting ∂ B_x(s_k+2).
Without loss of generality, we assume x=0. Similar to τ̂_· (defined before Lemma <ref>), we define a sequence of stopping times as follows. Let τ̅_0=0. For any p∈ℕ, we define τ̅_2p+1:=min{τ̅_2p<t< τ_∂ B(s_k+2):S_t∈∂ B(s_k) } and τ̅_2p+2:=min{t>τ̅_2p+1:S_t∈∂ B(s_k+1) }. Note that 𝔑 is the smallest integer such that τ̅_2𝔑+1=∞. By the law of total probability, we have
ℙ_y[𝔑= l| τ_∂ B(s_k+2)=τ_z]
= ∑_y_*∈∂ B(s_k+1)ℙ_y[𝔑= l,S_τ̅_2𝔑=y_* ,τ_∂ B(s_k+2)=τ_z]/∑_y_*∈∂ B(s_k+1)ℙ_y[S_τ̅_2𝔑=y_* ,τ_∂ B(s_k+2)=τ_z]
≤ ∑_y_*∈∂ B(s_k+1)ℙ_y[𝔑= l,S_τ̅_2𝔑=y_* ,τ_∂ B(s_k+2)=τ_z]/∑_y_*∈∂ B(s_k+1)ℙ_y[𝔑= 0,S_τ̅_2𝔑=y_* ,τ_∂ B(s_k+2)=τ_z]
= ∑_y_*∈∂ B(s_k+1)ℙ_y[𝔑= l,S_τ̅_2𝔑=y_* ,τ_∂ B(s_k+2)=τ_z]/ℙ_y[τ_z=τ_∂ B(s_k+2)<τ_∂ B(s_k)].
For each term on the numerator, by the strong Markov property, we have
ℙ_y[𝔑= l,S_τ̅_2𝔑=y_* ,τ_∂ B(s_k+2)=τ_z]
= ∑_y_1∈∂ B(s_k+1)∑_x_1∈∂ B(s_k)ℙ_y[S_τ̅_2l-2=y_1 ]ℙ_y_1[τ_x_1=τ_∂ B(s_k)<τ_∂ B(s_k+2)]
·ℙ_x_1[τ_∂ B(s_k+1)=τ_y_*] ℙ_y_*[τ_z=τ_∂ B(s_k+2)<τ_∂ B(s_k)].
In addition, by Harnack's inequality (see e.g. <cit.>), one has
max_x_1∈∂ B(s_k)ℙ_x_1[τ_∂ B(s_k+1)=τ_y_*] ≤ C ℙ[τ_∂ B(s_k+1)=τ_y_*],
max_y_*∈∂ B(s_k+1)ℙ_y_*[τ_z=τ_∂ B(s_k+2)<τ_∂ B(s_k)]≤ C ℙ_y[τ_z=τ_∂ B(s_k+2)<τ_∂ B(s_k)].
By (<ref>), (<ref>) and (<ref>), we get
∑_y_*∈∂ B(s_k+1)ℙ_y[𝔑= l,S_τ̅_2𝔑=y_* ,τ_∂ B(s_k+2)=τ_z]
≤ Cℙ_y[τ_z=τ_∂ B(s_k+2)<τ_∂ B(s_k)]·(∑_y_*∈∂ B(s_k+1)ℙ[τ_∂ B(s_k+1)=τ_y_*])
·( ∑_y_1∈∂ B(s_k+1)ℙ_y[S_τ̅_2l-2=y_1 ] ·∑_x_1∈∂ B(s_k)ℙ_y_1[τ_x_1=τ_∂ B(s_k)<τ_∂ B(s_k+2)])
≤ Cℙ_y[τ_z=τ_∂ B(s_k+2)<τ_∂ B(s_k)]·ℙ_y[τ̅_2l-2<∞]
·max_y_1∈∂ B(s_k+1)ℙ_y_1[τ_∂ B(s_k)<τ_∂ B(s_k+2)].
Furthermore, by <cit.> and (<ref>), one has
max_y_1∈∂ B(s_k+1)ℙ_y_1[τ_∂ B(s_k)<τ_∂ B(s_k+2)]≤ Cs_k^-(4d-1)(d-2).
Combining (<ref>), (<ref>) and (<ref>), we obtain
ℙ_y[𝔑= l| τ_∂ B(s_k+2)=τ_z]≤ Cs_k^-(4d-1)(d-2)·ℙ_y[τ̅_2l-2<∞].
Repeating the argument in proving (<ref>) for l-1 times, we also have
ℙ_y[τ̅_2l-2<∞]≤[Cs_k^-(4d-1)(d-2)]^l-1.
By (<ref>) and (<ref>), we get (<ref>) and thus conclude (<ref>). Based on (<ref>), the proof of (<ref>) is a straightforward
calculation as follows:
𝔼_y[ exp(γ𝔑)| τ_∂ B_x(s_k+2)= τ_z ] ≤ 1+∑_l≥ 1e^γ lℙ_y[𝔑= l| τ_∂ B_x(s_k+2)=τ_z]
≤ 1+ ∑_l≥ 1e^γ ls_k+1^-cl = s_k+1^c/s_k+1^c-e^γ.
This completes the proof of Lemma <ref>.
§.§.§ Proof of Lemma <ref>
Before proving Lemma <ref>, we need the following lemma as preparation.
There exist c(d),C(d)>0 such that:
* For any y∈∂ B(s_k+1), z∈∂B̂(s_k+2) and v∈∂ B(s_k+2-1),
ℙ_y[τ_0<τ_v< τ_∂ B(s_k+2)|τ_∂ B(s_k+2)=τ_z ] ≥ ce^-Clog^2(s_k+2).
* For any v_1,v_2∈ B(s_k+2-1),
μ({ℓ: 0,v_1,v_2∈ran(ℓ) and ran(ℓ)⊂ B(s_k+2-1) })≥ ce^-Clog^2(s_k+2).
(1) The inequality (<ref>) can be proved as follows:
ℙ_y[τ_0<τ_v< τ_∂ B(s_k+2)|τ_∂ B(s_k+2)=τ_z ]
≥ ℙ_y[τ_0<τ_v< τ_∂ B(s_k+2)=τ_z ]
≥ ℙ_y[τ_0<τ_∂ B(s_k+2-1)]·ℙ_0[τ_v<τ_∂ B(s_k+2)] ·ℙ_v[τ_0<τ_∂ B(s_k+2)]
·ℙ_0[τ_∂ B(s_k+2)=τ_z] (by strong Markov property)
= ℙ_0[τ_y<τ_∂ B(s_k+2-1)]·ℙ_0[τ_v<τ_∂ B(s_k+2)] ·ℙ_0[τ_v<τ_∂ B(s_k+2)]
·ℙ_0[τ_∂ B(s_k+2)=τ_z] (by reversing the random walk)
≥ ce^-Clog^2(s_k+2) (by Lemma <ref>).
(2) We denote by 𝔏_v_1,v_2 the collection of loops ℓ that satisfy the following: there exists ϱ∈ℓ such that before intersecting ∂ B(s_k+2), ϱ starts from 0, first hits v_1, then hits v_2 and finally return to 0. By (<ref>), the loop measure of 𝔏_v_1,v_2 is bounded from below by
ℙ_0[ τ_v_1<τ_∂ B(s_k+2)] ·ℙ_v_1[τ_v_2<τ_∂ B(s_k+2)]·ℙ_v_2[τ_0<τ_∂ B(s_k+2)]
≥ ℙ_0[ τ_v_1<τ_∂ B(s_k+2)] ·ℙ_v_1[τ_0<τ_∂ B(s_k+2)]·ℙ_0[τ_v_2<τ_∂ B(s_k+2)]
·ℙ_v_2[τ_0<τ_∂ B(s_k+2)] (by strong Markov property)
= ℙ_0[ τ_v_1<τ_∂ B(s_k+2)] ·ℙ_0[τ_v_1<τ_∂ B(s_k+2)]·ℙ_0[τ_v_2<τ_∂ B(s_k+2)]
·ℙ_0[τ_v_2<τ_∂ B(s_k+2)] (by reversing the random walk)
≥ ce^-Clog^2(s_k+2) (by Lemma <ref>).
This implies (<ref>) since for every ℓ∈𝔏_v_1,v_2, one has 0,v_1,v_2∈ran(ℓ) and ran(ℓ)⊂ B(s_k+2-1).
Recall in the construction of Step 0 in 𝒯_n that 𝒜 is the collection of active points in F(w), and that ℰ is the collection of involved crossing paths sampled in Step 0. Now we are ready to prove Lemma <ref>.
Arbitrarily take ω∈ℭ^x_p. Recall that on the conditioning of ω, one has that x∈𝒜, and that 𝐂_p^† is determined and intersects ∂B̂_x(s_k+2). We arbitrarily choose a point y_♢∈𝐂_p^†∩∂B̂_x(s_k+2) in some prefixed manner. Then one of the following happens:
* 𝐂_p^† includes an involved fundamental loop ℓ intersecting y_♢.
* y_♢∈ B(n-1), and 𝐂_p^† includes the glued point loop γ_y_♢^p.
We denote by y_♢' the unique point in ∂ B_x(s_k+2-1) such that y_♢'∼ y_♢. Let y_♢:= 1/2(y_♢+y_♢'). We also denote by y_♢” the unique point in ∂B̂_x(s_k+2+1) such that y_♢”∼ y_♢.
In Case (1), recall that the Brownian excursions of ℓ at y_♢ are either not sampled, or are sampled to intersect 𝐂_p-1∩ I_{y_♢,y_♢”}. If these Brownian excursions are not sampled, then (recalling Section <ref>) the conditional distribution (given ω) of the union of these Brownian excursions can be described as a function of an exponential random variable and a Bessel-0 process. Thus, conditioning on ω, {y_♢∈ran(ℓ)} happens with at least probability c(d)>0. Otherwise (i.e. these Brownian excursions are sampled to intersect 𝐂_p-1∩ I_{y_♢,y_♢”}), by the FKG inequality, {y_♢∈ran(ℓ)} also happens with at least probability c(d)>0. In Case (2), it also follows from the FKG inequality that {y_♢∈γ_y_♢^p} happens with at least probability c(d)>0. In conclusion, to verify this lemma, it suffices to prove that
ℙ(∃ involved ℓ or η^F in B_x(s_k+2) intersecting x and y_♢ | ω)
≥ ce^-Clog^2(s_k+2).
In what follows, we prove (<ref>) separately in the two cases x∈ F^I(w) and x∈ F^II(w).
When x∈ F^I(w): Since x∈ F^I(w)∩𝒜, there exists an involved forward crossing path η^F in B_x(s_k+2). Suppose that η^F starts from z_1∈∂ B_x(s_k+1) and ends at z_2∈∂B̂(s_k+2-1). According to the construction of 𝒯_n, the conditional distribution (given ω) of η^F is exactly ℙ_z_1(·|τ_∂ B_x(s_k+2)=τ_z_2). Thus, the LHS of (<ref>) is at least
ℙ_z_1(τ_x<τ_y_♢< τ_∂ B_x(s_k+2)|τ_∂ B_x(s_k+2)=τ_z_2).
By the relation between the random walk on ℤ^d and the Brownian motion presented in Section <ref>, the probability above is bounded from below by
c·ℙ_z_1(τ_x<τ_y_♢'< τ_∂ B_x(s_k+2)|τ_∂ B_x(s_k+2)=τ_z_2).
Combined with (<ref>), it concludes (<ref>) for x∈ F^I(w).
When x∈ F^II(w): We arbitrarily take a lattice point v∈ B_x(s_k+2-1)∩B̂(n).
Since the loops in B_x(s_k+2) are independent of the conditioning ω, the LHS of (<ref>) is at least
1/2μ({ℓ: x,y_♢,v∈ran(ℓ) and ran(ℓ)⊂B_x(s_k+2) }).
By the relation between the loops on ℤ̃^d and ℤ^d presented in Section <ref>, this loop measure is bounded from below by
c·μ({ℓ: x,y_♢',v∈ran(ℓ) and ran(ℓ)⊂ B_x(s_k+2-1) }).
Therefore, by (<ref>), we also conclude (<ref>) for x∈ F^II(w), and thus complete the proof.
§ PROOF OF THEOREM <REF>
In this section, we aim to prove Theorem <ref> by using Corollary <ref>. This part of the proof is inspired by <cit.>. Here is an overview of this section. Our main aim is to give a lower bound for the probability of {ψ_n^SR≥1/2L^2, χ_n≤ cL^4 } (see Lemma <ref>), which indicates that each strongly regular point roughly generates O(L^2) points in the loop cluster. To achieve this, we employ the second moment method. On the one hand, we will prove a lower bound in Lemma <ref> for the first moment of the number of points that are connected to some regular points, where a pivotal property is employed to prevent excessive duplication in counting (see Definition <ref>); on the other hand, we will prove an upper bound in Lemma <ref> for the second moment of the aforementioned number, and thus obtain Lemma <ref>. Finally, we conclude Theorem <ref> by combining Corollary <ref> and Lemma <ref>.
Recall that K>0 is a sufficiently large constant and r_m=K· 2^m-1. Let m_*:=min{m∈ℕ^+:r_m≥ K^2d} and K_*:=r_m_*. Note that K_*∈ [K^2d,2K^2d]. For any x∈ℤ^d∖ B(n-1), at least one of the 2d faces of ∂ B_x(K^4_*) is disjoint from B(n). We choose one such face, denoted by 𝒮_x=𝒮_x(n,K), in an arbitrary and prefixed manner.
For d>6, x∈ℤ^d∖ B(n-1) and sufficiently large K, if x is a strongly regular point, then there exists x'∈∂ B_x(K_*^4) such that Ψ_n∩ B_x'(K_*)=∅.
By a simple volume consideration, we can find cK_*^3(d-1) points x_1',...,x_cK_*^3(d-1)' in 𝒮_x such that the minimal pairwise distance is at least 3K_* (so in particular B_x'_i(K_*) ∩ B_x'_j(K_*) = ∅ for all i≠ j). Since x is strongly regular, by Item (1) of Remark <ref> we have
| B_x(2K_*^4)∩Ψ_n | ≤ CK_*^16log^16(K_*^4).
Combined with the fact that CK_*^16log^16(K^4_*)< cK_*^3(d-1) for all large enough K, it yields that there exists some x_i' such that B_x_i'(K_*)∩Ψ_n=∅.
In light of Lemma <ref>, for each strongly regular point x∈Ψ_n^*, we may define x'=x'(Ψ_n) to be the first point (in some arbitrary and prefixed order) such that Ψ_n ∩ B_x'(K_*)=∅. Note that x' is regular since x'∈ B_x(K_*^4)⊂ B_x(K^10d).
For any A_1,A_2,A_3⊂ℤ^d, we say A_1 and A_2 are connected (by ∪ℒ_1/2) off A_3 if there exists a collection 𝔏 of loops in ℒ_1/2 disjoint from A_3 such that A_1 []∪𝔏 A_2. We write it as “A_1[] A_2 off A_3”. We may omit the braces when A_i={v} for some i∈{1,2} and v∈ℤ^d.
For any ℓ∈ℒ_1/2 with ran(ℓ)∩Ψ_n=∅, we have ℓ∈ℒ^U. As an immediate consequence, for any A_1, A_2 ⊂ℤ^d, the event {A_1[] A_2 off Ψ_n} is measurable with respect to ℒ^U.
We prove this lemma by contradiction. Suppose that there is ℓ_♢∈ℒ_1/2-ℒ^U such that ran(ℓ_♢)∩Ψ_n=∅. By Definition <ref>, loops in ℒ_1/2-ℒ^U can be divided into the following types:
* a fundamental or edge loop intersecting both B(n) and Ψ_n^1;
* a point loop that includes some x∈ B(n-1) and intersects Ψ_n^1;
* a point loop that includes some x∈∂B̂(n)∖Ψ_n^1 and satisfies I_{x,x^in}⊂γ^p_x∪Ψ_n^1.
On the one hand, ℓ_♢ does not belong to Type (1) or (2) since ran(ℓ_♢)∩Ψ_n^1 ⊂ran(ℓ_♢)∩Ψ_n=∅. On the other hand, if ℓ_♢ belongs to Type (3), then ℓ_♢ is a point loop including some x∈Ψ_n^2. Since Ψ_n^2 ⊂Ψ_n, this is contradictory with ran(ℓ_♢)∩Ψ_n=∅.
We recall some necessary notations before presenting the next definition:
* For any x∈ℤ^d, we denote by 𝐂(x) the cluster of ∪ℒ_1/2 containing x;
* L= ϵ^3/10 N for some constant ϵ>0;
* n∈ [(1+λ/4)N,(1+λ/3)N] for some constant λ∈(0,1 ];
* Ψ_n^*:= Ψ_n ∩ B(n_*) where n_*:=n+[(1+λ)N]^b, and ψ_n^*= |Ψ_n^*|.
For any x∈ℤ^d∖ B(n-1), we define the point x_†=x_†(x,n) as follows: when x∈∂B̂(n), let x_† be the unique point in ∂B̂(n+1) such that x_†∼ x; otherwise, let x_†=x. Then we define x_†:=1/2(x+x_†).
For any x∈ℤ^d∖ B(n-1), y∈ B_x(1/2L)∖ B(n_*) and integer M∈ℕ^+, we say (x,y) is an M-potential pair if the following events happen:
𝖯_1(x,M):= { x∈Ψ_n^*}∩{ x is strongly regular}∩{ψ_n^SR=M }∩{x_†∈Ψ_n } ,
𝖯_2(x,y):= { x'[]y off Ψ_n },
𝖯_3(x):= {𝐂(x)∩𝐂(x')=∅}.
Note that the M-potential pair is not a symmetric relation since y is not even necessarily strongly regular when (x, y) is an M-potential pair.
For d>6, there exists c_6(d)>0 such that for all large enough K, any x∈ℤ^d∖ B(n-1) and M∈ℕ^+,
∑_y∈ B_x(1/2L)∖ B(n_*)ℙ[ 𝖯_1(x,M)∩𝖯_2(x,y) ] ≥ c_6 L^2 ℙ[𝖯_1(x,M)].
In order to prove the lemma, it suffices to show that for any sufficiently large K, and an arbitrary realization 𝐀 for Ψ_n on which 𝖯_1(x,M) occurs, we have
∑_y∈ B_x(1/2L)∖ B(n_*)ℙ[𝖯_2(x,y) |Ψ_n =𝐀] ≥ cL^2
(since we can obtain (<ref>) by averaging over 𝐀). In what follows, we prove (<ref>).
By Lemma <ref> and Item (2) of Remark <ref> we have
ℙ[𝖯_2(x,y) |Ψ_n =𝐀]
= ℙ( x'[]y off 𝐀|Ψ_n =𝐀)
= ℙ( x'[]y off 𝐀)
= ℙ( x'[]y) - ℙ( x'[]y only by 𝐀) ,
where “only by 𝐀” means that in any collection of loops connecting x' and y, there is at least one loop intersecting 𝐀. On the event {x'[]y only by 𝐀}, there exists a glued loop γ_* intersecting 𝐀 such that {x'[]γ_*}∘{y[]γ_*} happens. Similar to (<ref>), by the BKR inequality and the two-point function estimate, we have
ℙ( x'[]y only by 𝐀)
≤ C∑_z_1∈𝐀∩ℤ^d,z_2,z_3∈ℤ^dμ( {ℓ: ℓ∩B_z_i(1)≠∅,∀ i=1,2,3})
·ℙ[B_z_2(1)[] x' ]·ℙ[B_z_3(1)[] y ]
≤ C∑_z_1∈𝐀∩ℤ^d,z_2,z_3∈ℤ^d |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|z_2-x'|^2-d|z_3-y|^2-d.
Therefore, by Lemma <ref> and Corollary <ref>, we have
∑_y∈ B_x(1/2L)∖ B(n_*)ℙ( x'[]y only by 𝐀)
≤ CL^2∑_z_1∈𝐀∩ℤ^d,z_2,z_3∈ℤ^d |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d
· |z_2-x'|^2-d (by (<ref>))
≤ CL^2∑_z_1∈𝐀∩ℤ^d |z_1-x'|^2-d (by (<ref>))
≤ CL^2 (|𝐀∩ B_x'(K_*)|+ ∑_m=m_*+1^∞|𝐀∩ B_x'(r_m)|· r_m-1^2-d).
It follows from the definition of x' that |𝐀∩ B_x'(K_*)|=0. Moreover, since x is strongly regular and x'∈ B_x(K^10d), we know that x' is regular, and thus |𝐀∩ B_x'(r_m)|≤ r_m^4log^16(r_m). In conclusion, the RHS of (<ref>) can be upper-bounded by
∑_m=m_*+1^∞ r_m^4log^16(r_m)· r_m-1^2-d≤ CK^6-dlog^16(K).
Combining (<ref>) and (<ref>), we obtain
∑_y∈ B_x(1/2L)∖ B(n_*)ℙ( x'[]y only by 𝐀) ≤ CK^6-dlog^16(K)L^2.
Meanwhile, by the two-point function estimate, we have
∑_y∈ B_x(1/2L)∖ B(n_*)ℙ( x'[]y) ≥ c∑_y∈ B_x(1/2L)∖ B(n_*)|x'-y|^2-d≥ cL^2.
Since K^6-dlog^16(K) converges to 0 as K→∞, by (<ref>), (<ref>) and (<ref>), there exists a constant K_0(d)>0 such that (<ref>) holds for all K≥ K_0.
For d>6, there exists c_7(d)>0 such that for all large enough K, any x∈ℤ^d∖ B(n-1) and any M∈ℕ^+,
∑_y∈ B_x(1/2L)∖ B(n_*)ℙ[𝖯_1(x,M)∩𝖯_2(x,y)∩𝖯_3(x) ] ≥ c_7 L^2 ℙ[𝖯_1(x,M)].
To verify this lemma, it suffices to prove that for any large enough K and any realization 𝐀 for Ψ_n on which 𝖯_1(x,M) happens, one has
∑_y∈ B_x(1/2L)∖ B(n_*)ℙ[ 𝖯_2(x,y)∩ [𝖯_3(x)]^c |Ψ_n =𝐀] ≤ CL^2K^6-dlog^16(K).
In fact, by averaging over 𝐀 in (<ref>), we have
∑_y∈ B_x(1/2L)∖ B(n_*)ℙ[𝖯_1(x,M)∩𝖯_2(x,y)∩[𝖯_3(x)]^c ]
≤ CL^2K^6-dlog^16(K)ℙ[𝖯_1(x,M)].
For all sufficiently large K with CK^6-dlog^16(K)<1/2c_6, (<ref>) follows from Lemma <ref> and (<ref>). We proceed to show (<ref>) in the remainder of this proof.
On the event 𝖯_2(x,y)∩ [𝖯_3(x)]^c, by Lemma <ref>, x' is connected to both 𝐀 and y by ℒ^U_𝐀. Therefore, by the tree expansion, there exists a glued loop γ_* such that {𝐀[]ℒ^U_𝐀γ_*}∘{x'[]γ_* }∘{y[]γ_* } happens. Thus, similar to (<ref>), we have
ℙ[ 𝖯_2(x,y)∩ [𝖯_3(x)]^c |Ψ_n =𝐀]
≤ C ∑_z_1,z_2,z_3∈ℤ^dℙ( 𝐀[]ℒ^U_𝐀 z_1) |z_1-z_2|^2-d|z_2-z_3|^2-d
· |z_3-z_1|^2-d|z_2-x'|^2-d|z_3-y|^2-d.
Therefore, by (<ref>) and (<ref>), we have
∑_y∈ B_x(1/2L)∖ B(n_*)ℙ[ 𝖯_2(x,y)∩ [𝖯_3(x)]^c |Ψ_n =𝐀]
≤ CL^2∑_z_1∈ℤ^dℙ( 𝐀[]ℒ^U_𝐀 z_1) |z_1-x'|^2-d.
The sum on the RHS can be decomposed into 𝕀^(1)+∑_m=1^∞𝕀^(2)_m, where (recall r_1=K)
𝕀^(1):= ∑_z_1∈ B_x'(K)ℙ( 𝐀[]ℒ^U_𝐀 z_1) |z_1-x'|^2-d,
𝕀^(2)_m:= ∑_z_1∈ B_x'(r_m+1)∖ B_x'(r_m) ℙ( 𝐀[]ℒ^U_𝐀 z_1) |z_1-x'|^2-d.
For 𝕀^(1), since |B_x'(K)|≍ K^d, |z_1-x'|^2-d≤ 1 and 𝐀∩ B_x'(K_*)=∅, we have
𝕀^(1)≤ CK^d max_z_1∈ B_x'(K)ℙ[ z_1 []𝐀∖ B_x'(K_*)]
≤ CK^dmax_z_1∈ B_x'(K)∑_m=m_*+1^∞∑_z_4∈𝐀∩ B_x'(r_m)∖ B_x'(r_m-1) |z_4-z_1|^2-d.
For any z_1∈ B_x'(K) and z_4∈𝐀∩ B_x'(r_m)∖ B_x'(r_m-1) for m>m_*, we have
|z_4-z_1|≥ |z_4-x'|-|x'-z_1|≥ |z_4-x'|-K≥ cr_m.
In addition, since x is strongly regular (and thus x' is regular), one has
|𝐀∩ B_x'(r_m)∖ B_x'(r_m-1)|≤ r_m^4log^16(r_m).
Thus, the RHS of (<ref>) is bounded from above by
CK^d∑_m=m_*+1^∞ r_m^4log^16(r_m)· r_m^2-d≤ CK^d· K_*^6-dlog(K_*)≤ CK^1-d,
where we used K_*≥ K^2d in the last inequality.
For each 𝕀^(2)_m, since |z_1-x'|≥ r_m for all z_1∈ B_x'(r_m+1)∖ B_x'(r_m), we have (recalling Definition <ref>)
𝕀^(2)_m≤ Cr_m^2-d∑_z_1∈ B_x'(r_m+1)∖ B_x'(r_m) ℙ( 𝐀[]ℒ^U_𝐀 z_1)
≤ Cr_m^2-d[Δ_x',m+1(𝐀)+∑_z_1∈ B_x'(r_m+1) ℙ( z_1[]ℒ^U_𝐀𝐀∖ B_x'(r_m+1^4d) ) ].
Since x' is regular, one has Δ_x',m+1(𝐀)≤ r_m+1^4log^16(r_m+1). In addition, since ∪ℒ^U_𝐀 is stochastically dominated by ∪ℒ_1/2, by (<ref>) we have
∑_z_1∈ B_x'(r_m+1) ℙ(z_1 []ℒ^U_𝐀𝐀∖ B_x'(r_m+1^4d) ) ≤ Cr_m+1^d· r_m+1^-1/2· 4d<1.
Consequently, 𝕀^(2)_m is upper-bounded by
Cr_m^2-d·[ r_m+1^4log^16(r_m+1)+1] ≤ Cr_m^6-dlog^16(r_m).
By (<ref>), (<ref>) and (<ref>), we obtain (<ref>) and finally conclude the lemma:
∑_y∈ B_x(1/2L)∖ B(n_+)ℙ[ 𝖯_2(x,y)∩ [𝖯_3(x)]^c |Ψ_n =𝐀]
≤ CL^2( 𝕀^(1)+∑_m=1^∞𝕀^(2)_m)
≤ CL^2 (K^1-d+ ∑_m=1^∞r_m^6-dlog^16(r_m))
≤ CL^2K^6-dlog^16(K).
In order to introduce our aforementioned pivotal event, for any x∈ℤ^d∖ B(n-1), we define 𝔏_x,K as the collection of all fundamental loops that are contained in B_x(K^4_*+1)∖B(n), and visit x_† and every point in 𝒮_x (and thus also visits x'). Note that there exists some constant u_0(K,d)>0 such that μ(𝔏_x,K)≥ u_0(K,d) for all x∈ℤ^d∖ B(n-1). Let γ_x,K^f be the union of ranges of loops contained in both ℒ_1/2 and 𝔏_x,K. It follows from the definition that every loop in 𝔏_x,K is not involved (recalling Definition <ref>), and thus γ_x,K^f is independent of Ψ_n.
For any x∈ℤ^d∖ B(n-1) and y∈ B_x(1/2L)∖ B(n_*), we say (x,y) is admissible if the following events happen:
* x∈Ψ_n^* and x is strongly regular.
* The event {y[]Ψ_n} happens and is pivotal with respect to γ_x,K^f. Precisely, “pivotal” means that if we delete all loops included in γ_x,K^f from ℒ_1/2, then the event {y[]Ψ_n} no longer happens.
We denote the total number of all admissible pairs by
W=|{(x,y):x∈ℤ^d∖ B(n-1),y∈ B_x(1/2L)∖ B(n_*),(x,y) is admissible}|.
For all sufficiently large K and any M∈ℕ^+, there exists c_8(K,d)>0 such that
𝔼[W·1_ψ_n^SR=M] ≥ c_8L^2Mℙ( ψ_n^SR=M).
Recall the events 𝖯_1, 𝖯_2 and 𝖯_3 in Definition <ref>. If (x,y) is an M-potential pair, then we have:
* The event {γ_x,K^f=∅} happens. Otherwise, since x_†∈Ψ_n∩γ_x,K^f, we get 𝐂(x)∩𝐂(x')≠∅ and thus 𝖯_3(x) fails.
* If we add one loop ℓ∈𝔏_x,K into the configuration of ℒ_1/2, then (x,y) becomes an admissible pair.
We define a mapping π_x,K^f as follows, which maps a configuration of ℒ_1/2 to a collection of configurations of ℒ_1/2. Precisely, for any ω, which is a configuration of ℒ_1/2 such that γ_x,K^f=∅, we define
π_x,K^f(ω):= {ω+ ∑_i∈ I1_ℓ_i: ∅≠ I ⊂ℕ,ℓ_i∈𝔏_x,K}.
Note that π_x,K^f is an injection. By the aforementioned observations, for any ω such that (x,y) is an M-potential pair, any configuration in π_x,K^f(ω) satisfies that (x,y) is admissible and ψ_n^SR=M (recalling that γ_x,K^f is independent of Ψ_n). As a result,
ℙ[(x,y) is admissible, ψ_n^SR=M ]
≥ ℙ[π_x,K^f(ω):ω such that 𝖯_1(x,M)∩𝖯_2(x,y)∩𝖯_3(x) happens]
= ℙ( γ_x,K^f≠∅) /ℙ( γ_x,K^f=∅)·ℙ[𝖯_1(x,M)∩𝖯_2(x,y)∩𝖯_3(x) ]
≥ c(K,d)·ℙ[𝖯_1(x,M)∩𝖯_2(x,y)∩𝖯_3(x) ] (by μ(𝔏_x,K)≥ u_0(K,d)).
By summing over all x∈ℤ^d∖ B(n-1) and y∈ B_x(1/2L)∖ B(n_*), we have
𝔼[W·1_ψ_n^SR=M]
= ∑_x∈ℤ^d∖ B(n-1),y∈ B_x(1/2L)∖ B(n_*)ℙ[(x,y) is admissible, ψ_n^SR=M ]
≥ c(K,d) ∑_x∈ℤ^d∖ B(n-1),y∈ B_x(1/2L)∖ B(n_*)ℙ[𝖯_1(x,M)∩𝖯_2(x,y)∩𝖯_3(x) ]
≥ c(K,d)c_7L^2 ∑_x∈ℤ^d∖ B(n-1)ℙ[𝖯_1(x,M) ] (by Lemma <ref>).
By Item (1) of Remark <ref>, given an arbitrary configuration of Ψ_n with x∈Ψ_n^*, the event {x_†∈Ψ_n} occurs with probability at least c'(d)>0. Therefore, the sum on the RHS of (<ref>) is bounded from below by
c'∑_x∈ℤ^d∖ B(n-1)ℙ[ x∈Ψ_n^*, x is strongly regular, and ψ_n^SR=M ] =c'M ℙ( ψ_n^SR=M).
Combined with (<ref>), the proof of this lemma is complete.
For any K>0 and M≥1/2L^2, there exists C_7(K,d)>0 such that
𝔼[W^2·1_ψ_n^SR=M] ≤ C_7L^4M^2ℙ( ψ_n^SR=M).
Recall that L≍ n. The term on the LHS of (<ref>) can be written as
∑_∀ i∈{1,2},x_i∈ℤ^d∖ B(n-1), y_i∈ B_x_i(1/2L)∖ B(n_*)ℙ[∀ i∈{1,2},(x_i,y_i) is admissible,ψ_n^SR=M].
We divide the sum above into the following three parts, which we denote by 𝒮_1, 𝒮_2 and 𝒮_3 respectively:
Part 1: x_1=x_2, y_1=y_2;
Part 2: x_1=x_2, y_1≠ y_2;
Part 3: x_1≠ x_2.
In what follows, we prove the upper bounds for 𝒮_1, 𝒮_2 and 𝒮_3 separately. Assume that {ψ_n^SR=M} occurs, and (x_1,y_1) and (x_2,y_2) are both admissible. We denote 𝖯(x,M):={ x∈Ψ_n^*, x is strongly regular, ψ_n^SR=M}.
Part 1: Since 𝖯(x_1,M) and {B_x_1(2K^4_*)[] y_1 off Ψ_n} both happen (note that they are certified by two disjoint collections of glued loops), by the BKR inequality and the two-point function estimate, we have
𝒮_1≤ C(K,d)∑_x_1∈ℤ^d∖ B(n-1), y_1∈ B_x_1(1/2L)∖ B(n_*)ℙ[ 𝖯(x_1,M)] · |x_1-y_1|^2-d
≤ C(K,d) L^2 ∑_x_1∈ℤ^d∖ B(n-1)ℙ[ 𝖯(x_1,M)] (by (<ref>))
= C(K,d) L^2M·ℙ( ψ_n^SR=M).
Part 2: Note that 𝖯(x_1,M), {B_x_1(2K^4_*)[] y_1 off Ψ_n} and {B_x_1(2K^4_*)[] y_2 off Ψ_n} happen. By the tree expansion, there exists a glued loop γ_* such that
{γ_*[]B_x_1(2K^4_*) off Ψ_n }∘{γ_* [] y_1 off Ψ_n }∘{γ_* [] y_2 off Ψ_n }
happens. Since the event 𝖯(x_1,M) is certified by a disjoint collection of glued loops, by the BKR inequality and the two-point function estimate, we have
𝒮_2 ≤ C(K,d)∑_x_1∈ℤ^d∖ B(n-1)∑_y_1,y_2∈ B_x_1(1/2L)∖ B(n_*)∑_z_1,z_2,z_3∈ℤ^dℙ[ 𝖯(x_1,M)]|z_1-z_2|^2-d
· |z_2-z_3|^2-d|z_3-z_1|^2-d |z_1-x_1|^2-d|z_2-y_1|^2-d|z_3-y_2|^2-d.
If the sum on the RHS is also over the restriction |z_1-x_1|≤ L (we denote this part of sum by 𝒮_2^(1)), then we sum over y_1 and y_2, and apply Lemma <ref> and Corollary <ref> to get its upper bound as follows:
𝒮_2^(1)≤ C(K,d)L^4∑_x_1∈ℤ^d∖ B(n-1)∑_z_1,z_2,z_3∈ℤ^d:|z_1-x_1|≤ Lℙ[ 𝖯(x_1,M)]|z_1-z_2|^2-d
· |z_2-z_3|^2-d|z_3-z_1|^2-d |z_1-x_1|^2-d (by (<ref>))
≤ C(K,d)L^4∑_x_1∈ℤ^d∖ B(n-1)∑_z_1∈ℤ^d: |z_1-x_1|≤ L ℙ[ 𝖯(x_1,M)] |z_1-x_1|^2-d (by (<ref>))
≤ C(K,d)L^6M ·ℙ( ψ_n^SR=M) (by (<ref>)).
In the remaining case (i.e. |z_1-x_1|> L; let this part of sum be 𝒮_2^(2)), we sum over y_2, and then apply Corollary <ref> to obtain
𝒮_2^(2)≤ C(K,d) L^2 ∑_x_1∈ℤ^d∖ B(n-1), y_1∈ B_x_1(1/2L)∖ B(n_*)∑_z_1,z_2,z_3∈ℤ^d: |z_1-x_1|> Lℙ[ 𝖯(x_1,M)]
· |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d |z_1-x_1|^2-d|z_2-y_1|^2-d (by (<ref>))
≤ C(K,d) L^2 ∑_x_1∈ℤ^d∖ B(n-1), y_1∈ B_x_1(1/2L)∖ B(n_*)∑_z_1∈ℤ^d∖ B_x_1(L)ℙ[ 𝖯(x_1,M)]
· |z_1-x_1|^2-d|z_1-y_1|^2-d (by (<ref>)).
For any x_1∈ℤ^d∖ B(n-1), y_1∈ B_x_1(1/2L)∖ B(n_*) and z_1∈ℤ^d∖ B_x_1(L), by |z_1-y_1|≥ |z_1-x_1|-|x_1-y_1|≥1/2|z_1-x_1| and (<ref>), we have
∑_z_1∈ℤ^d∖ B_x_1(L)|z_1-x_1|^2-d|z_1-y_1|^2-d≤ C∑_z_1∈ℤ^d∖ B_x_1(L) |z_1-x_1|^4-2d≤ CL^4-d.
Combined with the previous upper bound for 𝒮_2^(2), it yields that
𝒮_2^(2)≤ C(K,d)L^6-d|B(1/2L)|·∑_x_1∈ℤ^d∖ B(n-1)ℙ[ 𝖯(x_1,M)]
≤ C(K,d)L^6M·ℙ( ψ_n^SR=M).
Combining these two estimates for 𝒮_2^(1) and 𝒮_2^(2), we obtain
𝒮_2≤ C(K,d)L^6M·ℙ( ψ_n^SR=M).
Part 3: Since x_1≠ x_2, γ_x_1,K^f is independent of γ_x_2,K^f. For each i∈{1,2}, we denote by 𝐂_y_i,K (resp. 𝐂_y_i,K^*) the cluster containing y_i and composed of loops in ℒ_1/2·1_ℓ∉𝔏_x_1,K∪𝔏_x_2,K (resp. ℒ_1/2·1_ℓ∉𝔏_x_i,K).
Here are some useful observations.
* For each i∈{1,2}, 𝐂_y_i,K=𝐂_y_i,K^*. To see this, we only need to prove that 𝐂_y_i,K is disjoint from γ_x_3-i,K^f. We prove this by contradiction. Without loss of generality, assume that 𝐂_y_1,K intersects γ_x_2,K^f. Then y_1 can be connected to Ψ_n without γ_x_1,K^f (since γ_x_2,K^f intersects Ψ_n), which is contradictory with the pivotality of γ_x_1,K^f.
* For each i∈{1,2}, 𝐂_y_i,K intersects γ_x_i,K^f, and thus also intersects B_x_i(2K^4_*). In fact, since (x_i,y_i) is admissible, 𝐂_y_i,K^* must intersect γ_x_i,K^f. Combined with Observation (1), it implies this observation.
* 𝐂_y_1,K is disjoint from 𝐂_y_2,K. Otherwise, one has 𝐂_y_1,K=𝐂_y_2,K and therefore, 𝐂_y_1,K intersects both γ_x_1,K^f and γ_x_2,K^f (by Observation (2)). As a result, 𝐂_y_1,K and Ψ_n can be connected by either γ_x_1,K^f or γ_x_2,K^f, which is contradictory with the fact that γ_x_1,K^f is pivotal.
Observations (1) and (2) imply that
{y_1[] B_x_1(2K^4_*) off Ψ_n }∘{y_2[] B_x_2(2K^4_*) off Ψ_n }
happens. Since in addition the event 𝖯(x_1,M) ∩𝖯(x_2,M) is certified by a disjoint collection of glued loops, by the BKR inequality and the two-point function estimate, we have
𝒮_3≤ C(K,d)∑_∀ i∈{1,2},x_i∈ℤ^d∖ B(n-1), y_i∈ B_x_i(1/2L)∖ B(n_*)ℙ[ 𝖯(x_1,M) ∩𝖯(x_2,M)]
· |x_1-y_1|^2-d|x_2-y_2|^2-d
≤ C(K,d)L^4∑_x_1,x_2∈ℤ^d∖ B(n-1)ℙ[ 𝖯(x_1,M) ∩𝖯(x_2,M)] (by (<ref>))
≤ C(K,d)L^4M^2 ·ℙ( ψ_n^SR=M).
Finally, we put (<ref>), (<ref>), (<ref>) and the requirement M≥1/2L^2 together, and then the proof is complete.
Recall that Observation (3) in the analysis of Part 3 above indicates that if (x_1,y_1) and (x_2,y_2) are both admissible, and x_1≠ x_2, then 𝐂_y_1,K is disjoint from 𝐂_y_2,K, which implies that y_1≠ y_2. As a result, if (x_1,y) and (x_2,y) are both admissible (i.e. taking y_1=y_2=y), then we must have x_1=x_2.
Recall that χ_n=|{x∈ B(n+L)∖ B(n): 0[] x }|.
There exist c_9(d)>0 and c_10(d)∈ (0,1) such that under the same conditions as Theorem <ref>, we have
ℙ( ψ_n^SR≥1/2L^2, χ_n≤ c_9L^4 ) ≤ (1-c_10) θ(N).
Recall the total number of admissible pairs W in (<ref>).
We claim that W≤χ_n. In fact, for each admissible pair (x,y), one has y∈ B(n+L)∖ B(n) and y[]0, and thus y is counted by χ_n. Moreover, by Remark <ref>, each y counted by χ_n can be contained in at most one admissible pair. As a result, we obtain W≤χ_n.
Recall that for any u>0 and random variable Z≥ 0 with 0<u <𝔼Z<∞, one has ℙ[Z> u]≥ (𝔼Z-u)^2/𝔼[Z^2]. Arbitrarily take c_9∈ (0, 1/2c_8). Note that c_8L^2M>c_9L^4 for all M≥1/2L^2. Applying the general inequality above with u=c_9L^4 and Z being the random variable W conditioning on {ψ_n^SR=M}, we have: for any integer M≥1/2L^2,
ℙ( χ_n > c_9L^4 | ψ_n^SR=M )
≥ ℙ[ W> c_9L^4 | ψ_n^SR=M ] (by W≤χ_n)
≥ {𝔼[W|ψ_n^SR=M ]-c_9L^4}^2/𝔼[W^2|ψ_n^SR=M ]
≥ [c_8L^2M-c_9L^4]^2 /C_7L^4M^2≥ c∈ (0,1) (by Lemmas <ref> and <ref>).
By (<ref>), we get the desired bound as follows:
ℙ( ψ_n^SR≥1/2L^2, χ_n≤ c_9L^4 ) = ∑_M≥1/2L^2ℙ( ψ_n^SR=M, χ_n≤ c_9L^4 )
≤ (1-c)∑_M≥1/2L^2ℙ( ψ_n^SR=M)
≤ (1-c)·θ(N),
where in the last line we used the fact that {ψ_n^SR>0}⊂{0[]∂ B(N)}.
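For completeness, the elementary second-moment bound invoked at the beginning of the above proof follows from the Cauchy–Schwarz inequality: for any random variable Z≥ 0 and any 0<u<𝔼Z,
𝔼Z = 𝔼[Z1_Z≤ u]+𝔼[Z1_Z> u] ≤ u+ (𝔼[Z^2])^1/2(ℙ[Z>u])^1/2,
and rearranging gives ℙ[Z> u]≥ (𝔼Z-u)^2/𝔼[Z^2].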
Based on Corollary <ref> and Lemma <ref>, we are ready to prove Theorem <ref>.
By Corollary <ref> and Lemma <ref>, we have
ℙ( ψ_n^*≥ L^2, χ_n≤ c_9L^4 )
≤ ℙ( ψ_n^*≥ L^2, ψ_n^SR≤1/2ψ_n^* ) + ℙ( ψ_n^SR≥1/2L^2, χ_n≤ c_9L^4 )
≤ s.p.(N) + (1-c_10) θ(N).
For all large enough N, by the polynomial lower bound of θ(N) in (<ref>), the RHS of (<ref>) is dominated by 1/2c_10θ(N)+ (1-c_10) θ(N)= (1-1/2c_10)θ(N). Then Theorem <ref> follows by setting c_4 = c_9 and c_5 = 1/2c_10.
§ DECAY RATE OF THE CLUSTER VOLUME
In this section, we will prove Proposition <ref>, which then completes the proof of Theorem <ref>. This proof is inspired by <cit.>.
Recall that for any x∈ℤ^d, 𝐂(x) is the cluster of ∪ℒ_1/2 containing x. Also recall that for any A⊂ℤ^d, |A| is the number of lattice points in A. For any m∈ℕ, we denote P_m=ℙ(|𝐂(0)|=m). For any h≥ 0, let ℜ(h):= ∑_m=1^∞P_m(1-e^-mh). The key is the following upper bound on ℜ(h).
For d>6, there exists C_8(d)>0 such that for any h> 0,
ℜ(h) ≤ C_8h^1/2.
For any M≥ 1, applying Lemma <ref> with h=M^-1, we get that
C_8M^-1/2≥ℜ(M^-1)≥∑_m=M^∞P_m(1-e^-M^-1m)≥ (1-e^-1) ℙ(|𝐂(0)|≥ M ),
thereby completing the proof of Proposition <ref>.
To prove Lemma <ref>, we need to consider the so-called ghost field {𝒢_x^h}_x∈ℤ^d with parameter h≥ 0. Precisely, {𝒢_x^h}_x∈ℤ^d is independent of ℒ_1/2 and is a collection of i.i.d. {0,1}-valued Bernoulli variables which take value 0 with probability e^-h. Let 𝒢^h:={x∈ℤ^d:𝒢_x^h=1}. With the help of the ghost field, we can write ℜ(h) as the probability of a connecting event as follows:
ℜ(h)=∑_m=1^∞ P_m(1-e^-mh)= ℙ(0[]𝒢^h),
where “[]” is “[]∪ℒ_1/2”, and the probability space of the RHS is the product space for ℒ_1/2 and {𝒢_x^h}_x∈ℤ^d. The next lemma provides a geometric interpretation for the derivative ℜ'(h), i.e., the derivative of ℜ(h) with respect to h.
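Before doing so, we note for completeness why (<ref>) holds: conditionally on {|𝐂(0)|=m}, each of the m lattice points of 𝐂(0) belongs to 𝒢^h independently (of each other and of ℒ_1/2) with probability 1-e^-h, so
ℙ(0[]𝒢^h) = ∑_m=1^∞ P_m(1-(e^-h)^m)= ∑_m=1^∞ P_m(1-e^-mh)=ℜ(h).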
For any A_1,A_2⊂ℤ^d, we use the notation A_1 ↮ A_2:= {A_1 [] A_2}^c.
For any h≥ 0, we have
(e^h-1) ℜ'(h) = ℙ(|𝐂(0)∩𝒢^h|=1 ),
ℜ'(h) = ∑_x∈ℤ^dℙ( 0[] x,0↮𝒢^h ).
For the LHS of (<ref>), by (<ref>) one has
ℜ'(h) = ∑_m=1^∞ P_m · m e^-mh.
For the RHS of (<ref>), we have
ℙ(|𝐂(0)∩𝒢^h|=1 )
= ∑_m=1^∞ℙ(|𝐂(0)|=m )·ℙ(|𝐂(0)∩𝒢^h|=1 ||𝐂(0)|=m )
= ∑_m=1^∞ P_m· e^-(m-1)h(1-e^-h)
=(e^h-1)∑_m=1^∞ P_m· me^-mh.
By (<ref>) and (<ref>), we get (<ref>).
We now prove (<ref>). Similar to (<ref>), we also have
∑_x∈ℤ^dℙ( 0[] x,0↮𝒢^h )
= ∑_m=1^∞∑_x∈ℤ^dℙ(0[] x, |𝐂(0)|=m ) ℙ( 0↮𝒢^h | 0[] x,| 𝐂(0)|=m )
= ∑_m=1^∞e^-mh∑_x∈ℤ^dℙ(0[] x, |𝐂(0)|=m )
= ∑_m=1^∞ P_m· me^-mh.
By (<ref>) and (<ref>), we obtain (<ref>).
We introduce some more notations below for further analysis:
* Recall that for any A_1,A_2,A_3⊂ℤ^d, {A_1[] A_2 off A_3} is the event that there exists a collection 𝔏 of loops in ℒ_1/2 disjoint from A_3 such that A_1[]∪𝔏 A_2. For any x∈ℤ^d and A⊂ℤ^d, we denote
𝐂_A(x):={v∈ℤ^d: v[] x off A}.
* For three different x,y,z ∈ℤ^d, the event 𝖤(x,y,z) is defined to be the intersection of the following three events:
* 0[] x and 0↮𝒢^h;
* y[]𝒢^h and z[]𝒢^h;
* The clusters 𝐂(x), 𝐂(y) and 𝐂(z) are disjoint to each other.
See Figure <ref> for an illustration of this event.
* For any x∈ℤ^d and i∈ℕ^+, let x_i^+=x+(i,0,...,0)∈ℤ^d and x_i^-=x+(-i,0,...,0)∈ℤ^d. We denote the subset
A_J^x:=∪_i=1^J{x_i^+,x_i^- }∪{x}.
* The event 𝖥_J^x is the intersection of the following two events:
* γ^f_A_J^x≠∅ (recalling in Section <ref> that γ^f_A is the glued loop composed of fundamental loops in ℒ_1/2 that visit every point in A and do not visit any other lattice point);
* After deleting all loops ℓ included in γ^f_A_J^x (i.e. ℓ is one of the loops that construct γ^f_A_J^x) from ℒ_1/2, the event 𝖤_J^x:=𝖤(x,x_J^-,x_J^+) occurs.
Note that 𝖥_J^x implies |𝐂(0)∩𝒢^h|≥ 2, which is incompatible with the event 𝖤_J^y for each y∈ℤ^d.
For any lattice points y_1≠ y_2, 𝖥_J^y_1∩𝖥_J^y_2=∅.
For any w∈ℤ^d and i∈{1,2}, let 𝐂_w (resp. 𝐂_w^i) be the cluster containing w and composed of loops in ℒ_1/2 that are not included in γ^f_A_J^y_1 or γ^f_A_J^y_2 (resp. not included in γ^f_A_J^y_i). We abbreviate y_i^†:=(y_i)_J^- and y_i^♢:=(y_i)_J^+.
Assume the event 𝖥_J^y_1∩𝖥_J^y_2 occurs. Here are some useful observations.
* For i∈{1,2}, it follows from the definition of 𝖥_J^y_i that: (a) 𝐂_y_i^i, 𝐂_y_i^†^i and 𝐂_y_i^♢^i are disjoint from one another; (b) 𝐂_y_i^i contains 0 and is disjoint from 𝒢^h; (c) 𝐂_y_i^†^i and 𝐂_y_i^♢^i both intersect 𝒢^h.
* For i∈{1,2}, either 𝐂_y_i^†^i=𝐂_y_i^† or 𝐂_y_i^♢^i=𝐂_y_i^♢ since at most one of the clusters 𝐂_y_i^†^i and 𝐂_y_i^♢^i can intersect γ^f_A_J^y_3-i.
* For i∈{1,2}, 𝐂_y_i^i=𝐂_y_i. We prove this observation by contradiction. Assume that 𝐂_y_i^i≠𝐂_y_i, then 𝐂_y_i must intersect γ^f_A_J^y_3-i. Moreover, by Observation (2), without loss of generality we can also assume that 𝐂_y_3-i^†^3-i=𝐂_y_3-i^†. Therefore, since y_i can be connected to 𝒢^h by 𝐂_y_i^i∪γ^f_A_J^y_3-i∪𝐂_y_3-i^†^3-i (which equals to 𝐂_y_i∪γ^f_A_J^y_3-i∪𝐂_y_3-i^† and thus is contained in 𝐂_y_i^i), we have that 𝐂_y_i^i intersects 𝒢^h, which is contradictory with Observation (1b).
* For i∈{1,2}, 𝐂_y_i^†^i and 𝐂_y_i^♢^i both intersect γ^f_A_J^y_3-i. We prove this by contradiction. If 𝐂_y_i^†^i∩γ^f_A_J^y_3-i=∅, then one has 𝐂_y_i^†=𝐂_y_i^†^i. In addition, by Observations (1b) and (1c), 0 is connected to 𝒢^h by 𝐂_y_i^†^i∪γ^f_A_J^y_i∪𝐂_y_i^i, which is contained in 𝐂_0^3-i since 𝐂_y_i^†^i=𝐂_y_i^† and 𝐂_y_i^i=𝐂_y_i (by Observation (3)). However, this implies that 𝐂_0^3-i∩𝒢^h≠∅, and thus arrives at a contradiction with Observation (1b).
We next prove the lemma. Since 𝐂_y_1^†^1 and 𝐂_y_1^♢^1 both intersect γ^f_A_J^y_2 (by Observation (4)), one has that y_1^† and y_1^♢ are connected by 𝐂_y_1^†^1∪𝐂_y_1^♢^1∪γ^f_A_J^y_2, which is incompatible with Observation (1a). Thus, we complete the proof by contradiction.
For any d>6 and J∈ℕ^+, there exists C_9(d,J)>0 such that for any h≥ 0,
∑_x∈ℤ^dℙ( 𝖤_J^x) ≤ C_9[ℜ(h)-(e^h-1)ℜ'(h)].
For any x∈ℤ^d, when 𝖤_J^x happens, one has γ^f_A_J^x= ∅. Moreover, if we add a loop ℓ, which constructs γ^f_A_J^x, into the configuration of ℒ_1/2, then the event 𝖥_J^x occurs. Therefore, we have
ℙ( 𝖤_J^x)≤ℙ(γ^f_A_J^x= ∅)/ℙ(γ^f_A_J^x≠∅)·ℙ( 𝖥_J^x)≤ C(d,J) ℙ( 𝖥_J^x).
Recall that Lemma <ref> shows that the events 𝖥_J^x for x∈ℤ^d are disjoint from one another. Thus, since 𝖥_J^x⊂{|𝐂(0)∩𝒢^h|≥ 2}, we have
∑_x∈ℤ^dℙ( 𝖥_J^x)= ℙ( ∪_x∈ℤ^d𝖥_J^x)≤ℙ(|𝐂(0)∩𝒢^h|≥ 2 ),
where the RHS is equal to ℜ(h)-(e^h-1)ℜ'(h) by (<ref>) and (<ref>). Thus, combining (<ref>) and (<ref>), we conclude this lemma.
Recall that “x[] y only by A” means that x[] y, and in every collection of loops in ℒ_1/2 connecting x and y there must be some loop intersecting A. Next, we give three technical lemmas.
For any d>6, there exists C(d)>0 such that for any y∈ℤ^d and A⊂ℤ^d,
ℙ(y[]𝒢^h only by A ) ≤ C(d)ℜ(h)∑_v∈ A∩ℤ^d |y-v|^2-d.
This proof is parallel to that of (<ref>). On the event {y[]𝒢^h only by A}, there exists a glued loop γ_* intersecting A such that {γ_*[]y}∘{γ_*[]𝒢^h} happens. By the same arguments as in the proof of (<ref>) (replacing 𝐀, x' and y in (<ref>) by A, y and 𝒢^h respectively), we have
ℙ(y[]𝒢^h only by A )
≤ C∑_z_1∈ A∩ℤ^d,z_2,z_3∈ℤ^d |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|z_2-y|^2-dℙ(z_3[]𝒢^h )
= Cℜ(h)∑_z_1∈ A∩ℤ^d,z_2,z_3∈ℤ^d |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|z_2-y|^2-d (by (<ref>))
≤ Cℜ(h)∑_z_1∈ A∩ℤ^d|z_1-y|^2-d (by (<ref>)).
For any d>6, there exists C(d)>0 such that for any J∈ℕ^+,
∑_x,y∈ℤ^dℙ(0[]x,0[]y,0↮𝒢^h )|x_J^–y|^2-d≤ CJ^6-dℜ'(h).
Let 𝒢^h_*:= {v∈ℤ^d: v[]𝒢^h }. We claim that
{0[]x,0[]y,0↮𝒢^h} = {0[]x off 𝒢^h_* }∩{0[]y off 𝒢^h_* }.
On the one hand, when {0[]x,0[]y,0↮𝒢^h} occurs, 𝐂(0) does not contain any loop intersecting 𝒢^h_* (otherwise, 0[]𝒢^h). Therefore, since x,y∈𝐂(0) (ensured by 0[]x and 0[]y), we have 0[]x off 𝒢^h_* and 0[]y off 𝒢^h_*. On the other hand, on the event {0[]x off 𝒢^h_* }∩{0[]y off 𝒢^h_* }, we directly have 0[]x and 0[]y. In addition, we also have 0↮𝒢^h; otherwise, one has 0∈𝒢^h_*, which is incompatible with both {0[]x off 𝒢^h_*} and {0[]y off 𝒢^h_* }. To sum up, the event on the LHS of (<ref>) is contained in and contains the RHS, therefore (<ref>) follows.
On the event {0[]x off 𝒢^h_* }∩{0[]y off 𝒢^h_* }, by the tree expansion, there exists a glued loop γ_* disjoint from 𝒢^h_* such that {γ_*[]0 off 𝒢^h_*}∘{γ_*[]x off 𝒢^h_* }∘{γ_*[]y off 𝒢^h_*} happens. Moreover, arbitrarily given {𝒢^h_*=𝒦}, the connection off 𝒢^h_* only depends on the loops in ℒ_1/2 disjoint from 𝒦, which are independent from the event {𝒢^h_*=𝒦}. This implies that for any D_1,D_2⊂ℤ^d,
ℙ(D_1[]D_2 off 𝒢^h_*|𝒢^h_* )≤ℙ(D_1[]D_2).
Thus, with the same argument as proving (<ref>), we have
ℙ(0[]x off 𝒢^h_* ,0[]y off 𝒢^h_*| 𝒢^h_* )
≤ C∑_z_1,z_2,z_3∈ℤ^d |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|z_2-x|^2-d |z_3-y|^2-d
·𝔼[ℙ(0[]z_1 off 𝒢^h_*| 𝒢^h_* ) ],
where we bounded ℙ(z_2[]x off 𝒢^h_* |𝒢^h_* ) and ℙ(z_3[]y off 𝒢^h_* |𝒢^h_* ) from above by C|z_2-x|^2-d and C|z_3-y|^2-d respectively through applying (<ref>).
For the same reason as proving (<ref>), one has {0[]z_1 off 𝒢^h_*}= {0[]z_1, 0↮𝒢^h}. Therefore, by taking integral on both sides of (<ref>), the LHS of (<ref>) is bounded from above by
𝕀:= C∑_x,y∈ℤ^d∑_z_1,z_2,z_3∈ℤ^d |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|z_2-x|^2-d
·ℙ(0[]z_1, 0↮𝒢^h )|z_3-y|^2-d|x_J^–y|^2-d.
Since |z_3-y|=|(z_3)_J^+-y_J^+| and |x_J^–y|=|x-y_J^+|, we have
𝕀
=C∑_x,y∈ℤ^d∑_z_1,z_2,z_3∈ℤ^d |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|z_2-x|^2-d
·ℙ(0[]z_1, 0↮𝒢^h )|(z_3)_J^+-y_J^+|^2-d|x-y_J^+|^2-d.
By calculating the sum over y and x in turn, we get
𝕀≤ C∑_x∈ℤ^d∑_z_1,z_2,z_3∈ℤ^d|z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|z_2-x|^2-d
·ℙ(0[]z_1, 0↮𝒢^h )|(z_3)_J^+-x|^4-d (by (<ref>))
≤ 𝕀'(ℤ^d ×ℤ^d ×ℤ^d) (by (<ref>)),
where for any A ⊂ℤ^d ×ℤ^d ×ℤ^d, we define
𝕀'(A)= ∑_(z_1, z_2, z_3) ∈ A|z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|z_2-(z_3)_J^+|^6-dℙ(0[]z_1, 0↮𝒢^h ).
We decompose ℤ^d ×ℤ^d ×ℤ^d = A_1 ∪ A_2 ∪ A_3 where
A_1:={(z_1,z_2,z_3):|z_2-(z_3)_J^+|≥ 0.5J},
A_2:={(z_1,z_2,z_3):|z_2-(z_3)_J^+|< 0.5J,|z_1-z_2|≥ 2J},
A_3:={(z_1,z_2,z_3):|z_2-(z_3)_J^+|< 0.5J,|z_1-z_2|< 2J}.
We next bound 𝕀'(A_1), 𝕀'(A_2) and 𝕀'(A_3) one after another. For (z_1, z_2, z_3)∈ A_1, since |z_2-(z_3)_J^+|≥ 0.5J, 𝕀'(A_1) is bounded from above by
CJ^6-d∑_z_1,z_2,z_3∈ A_i |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-dℙ(0[]z_1, 0↮𝒢^h )
≤ CJ^6-d∑_z_1,z_2∈ℤ^d |z_1-z_2|^6-2dℙ(0[]z_1, 0↮𝒢^h ) (by (<ref>))
≤ CJ^6-dℜ'(h) (by (<ref>) and (<ref>)).
When (z_1,z_2,z_3)∉ A_1, one has |z_2-(z_3)_J^+|<0.5J. Therefore, by the triangle inequality,
|z_2-z_3|≥ |(z_3)_J^+-z_3|- |z_2-(z_3)_J^+|≥ J-0.5J= 0.5J.
Thus, for i∈{2,3}, we have
𝕀'(A_i) ≤ CJ^2-d∑_z_1,z_2,z_3∈ A_i |z_1-z_2|^2-d|z_3-z_1|^2-d|z_2-(z_3)_J^+|^6-d
·ℙ(0[]z_1, 0↮𝒢^h ).
For (z_1, z_2, z_3)∈ A_2, one has z_2∈ℤ^d∖ B_z_1(2J), z_3∈ B_z_2(2J) and
|z_3-z_1|≥ |z_2-z_1|-|z_2-(z_3)_J^+|-|z_3-(z_3)_J^+|≥ (1-0.25-0.5)|z_2-z_1|=0.25|z_2-z_1|.
Therefore, by (<ref>), 𝕀'(A_2) is upper-bounded by
CJ^2-d∑_z_1∈ℤ^d,z_2∈ℤ^d∖ B_z_1(2J),z_3∈ B_z_2(2J)|z_1-z_2|^4-2d |z_2-(z_3)_J^+|^6-d
·ℙ(0[]z_1, 0↮𝒢^h )
= CJ^2-dℜ'(h) ∑_z∈ℤ^d∖ B(2J) |z|^4-2d∑_z∈ B(2J) |z|^6-d (by (<ref>))
≤ CJ^12-2dℜ'(h)≤ CJ^6-dℜ'(h) (by (<ref>), (<ref>) and d>6).
For (z_1,z_2,z_3)∈ A_3, one has z_2∈ B_z_1(2J) and z_3∈ B_z_2(1.5J)⊂ B_z_1(4J). Therefore, by (<ref>) and |z_2-(z_3)_J^+|^6-d≤ 1 we have
𝕀'(A_3)≤ CJ^2-d∑_z_1∈ℤ^d∑_z_2∈ B_z_1(2J),z_3∈ B_z_1(4J) |z_1-z_2|^2-d|z_3-z_1|^2-dℙ(0[]z_1, 0↮𝒢^h )
= CJ^2-dℜ'(h) ∑_z∈ B(2J)|z|^2-d∑_z∈ B(4J)|z|^2-d (by (<ref>))
≤ CJ^6-dℜ'(h) (by (<ref>)).
Combined with (<ref>) and (<ref>), it concludes this lemma.
For any d>6, there exists C(d)>0 such that for any J∈ℕ^+,
∑_w∈ℤ^dℙ(0[] w, 0[]𝒢^h ) |w_2J^-|^2-d≤ CJ^6-dℜ(h).
By the same arguments as in the proof of (<ref>) (replacing x_1 and x_2 in (<ref>) by w and 𝒢^h respectively), we have
ℙ(0[] w, 0[]𝒢^h )
≤ C ∑_z_1,z_2,z_3∈ℤ^d |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|z_1-w|^2-d|z_2|^2-dℙ(z_3[]𝒢^h )
= Cℜ(h) ∑_z_1,z_2,z_3∈ℤ^d |z_1-z_2|^2-d|z_2-z_3|^2-d|z_3-z_1|^2-d|z_1-w|^2-d|z_2|^2-d (by (<ref>))
≤ Cℜ(h) |w|^4-d (by (<ref>)).
Combined with |w_2J^-|=|w-0_2J^+|, this yields the desired bound:
∑_w∈ℤ^dℙ(0[] w, 0[]𝒢^h ) |w_2J^-|^2-d≤ Cℜ(h) ∑_w∈ℤ^d |w|^4-d|w-0_2J^+|^2-d
≤ CJ^6-dℜ(h) (by (<ref>)).
Using these three technical lemmas, we can prove the following lower bound for ∑_x∈ℤ^dℙ( 𝖤_J^x).
For any d>6, there exist C_10(d),c_11(d)>0 such that for all J≥ C_10 and h≥ 0,
∑_x∈ℤ^dℙ( 𝖤_J^x) ≥ c_11ℜ(h)ℜ'(h).
For any x∈ℤ^d, we define 𝒜_1,𝒜_2 and 𝒜 as follows:
* 𝒜_1= 𝒜_1(x,𝒢^h):={connected 𝐂_1⊂ℤ^d:0,x∈𝐂_1,𝐂_1∩𝒢^h=∅};
* For any 𝐂_1∈𝒜_1,
𝒜_2= 𝒜_2(x,𝒢^h,𝐂_1):={connected 𝐂_2⊂ℤ^d:𝐂_2∩𝐂_1=∅,𝐂_2∩𝒢^h≠∅}.
* 𝒜=𝒜(x,𝒢^h):={(𝐂_1,𝐂_2):𝐂_1∈𝒜_1, 𝐂_2∈𝒜_2}.
Recall the definition of 𝖤_J^x below (<ref>). Then it follows that 𝖤_J^x∩{𝐂(0)=𝐂_1, 𝐂(x_J^-)=𝐂_2}≠∅ if and only if (𝐂_1, 𝐂_2)∈𝒜. Moreover, on the event {𝐂(0)=𝐂_1, 𝐂(x_J^-)=𝐂_2, (𝐂_1,𝐂_2)∈𝒜}, 𝖤_J^x happens if and only if {x_J^+[]𝒢^h off 𝐂_1∪𝐂_2}. For any fixed 𝐂_1,𝐂_2⊂ℤ^d, the event {𝐂(0)=𝐂_1,𝐂(x_J^-)=𝐂_2, (𝐂_1,𝐂_2)∈𝒜} only depends on 𝒢^h∩ (𝐂_1∪𝐂_2) and the loops in ℒ_1/2 intersecting 𝐂_1∪𝐂_2, which are independent from the event {x_J^+[]𝒢^h off 𝐂_1∪𝐂_2}. Therefore, we have
ℙ( 𝖤_x,J|𝐂(0)=𝐂_1,𝐂(x_J^-)=𝐂_2,(𝐂_1,𝐂_2)∈𝒜)
= ℙ( x_J^+[]𝒢^h off 𝐂_1∪𝐂_2 |𝐂(0)=𝐂_1,𝐂(x_J^-)=𝐂_2,(𝐂_1,𝐂_2)∈𝒜)
= ℙ( x_J^+[]𝒢^h off 𝐂_1∪𝐂_2 )
= ℜ(h)-ℙ( x_J^+[]𝒢^h only by 𝐂_1∪𝐂_2 ) (by (<ref>)).
By Lemma <ref>, one has
ℙ( x_J^+[]𝒢^h only by 𝐂_1∪𝐂_2 )
≤ Cℜ(h)∑_v∈(𝐂_1∪𝐂_2)∩ℤ^d |x_J^+-v|^2-d
≤ Cℜ(h)∑_i=1^2∑_v∈𝐂_i∩ℤ^d |x_J^+-v|^2-d.
By taking integral in (<ref>) over {𝐂(x_J^-)∈𝒜_2} conditioning on {𝐂(0)=𝐂_1,𝐂_1∈𝒜_1} and using (<ref>), we have
ℙ( 𝖤_x,J|𝐂(0)=𝐂_1,𝐂_1∈𝒜_1 )
≥ ℜ(h)ℙ( 𝐂(x_J^-)∈𝒜_2 |𝐂(0)=𝐂_1,𝐂_1∈𝒜_1 )
- Cℜ(h)ℙ( 𝐂(x_J^-)∈𝒜_2 |𝐂(0)=𝐂_1,𝐂_1∈𝒜_1 ) ∑_v∈𝐂_1∩ℤ^d |x_J^+-v|^2-d
-Cℜ(h)𝔼[1_𝐂(x_J^-)∈𝒜_2∑_v∈𝐂(x_J^-)∩ℤ^d |x_J^+-v|^2-d| 𝐂(0)=𝐂_1 ,𝐂_1∈𝒜_1]
:= 𝕁_1(x,𝐂_1)-𝕁_2(x,𝐂_1)-𝕁_3(x,𝐂_1),
which implies that
∑_x∈ℤ^dℙ( 𝖤_x,J)≥𝕁_1-𝕁_2-𝕁_3,
where 𝕁_i:= ∑_x∈ℤ^d𝔼[ 𝕁_i(x,𝐂(0))·1_𝐂(0)∈𝒜_1] for i∈{1,2,3}. In what follows, we estimate them separately.
For 𝕁_1, with the same arguments as in (<ref>), we have
𝕁_1(x,𝐂_1)= ℜ(h) ℙ( x_J^- []𝒢^h off 𝐂_1 |𝐂(0)=𝐂_1,𝐂_1∈𝒜_1 )
= ℜ^2(h)-ℜ(h)ℙ( x_J^- []𝒢^h only by 𝐂_1 ).
By the definition of 𝒜_1, one has
∑_x∈ℤ^d𝔼[ ℜ(h)ℙ( x_J^- []𝒢^h only by 𝐂(0) )·1_𝐂(0)∈𝒜_1]
≤ Cℜ^2(h)∑_x∈ℤ^d𝔼[∑_v∈𝐂(0)∩ℤ^d |x_J^–v|^2-d·1_0[]x,0↮𝒢^h] (by Lemma <ref>)
≤ Cℜ^2(h)∑_x∈ℤ^d∑_v∈ℤ^d |x_J^–v|^2-dℙ(0[]v,0[]x,0↮𝒢^h)
≤ CJ^6-dℜ^2(h)ℜ'(h) (by Lemma <ref>).
Combining (<ref>), (<ref>) and ℙ(𝐂(0)∈𝒜_1)=ℜ'(h) (by (<ref>)), we obtain
𝕁_1 ≥(1-CJ^6-d) ℜ^2(h)ℜ'(h).
For 𝕁_2, since ℙ( 𝐂(x_J^-)∈𝒜_2 |𝐂(0)=𝐂_1,𝐂_1∈𝒜_1 )≤ℙ( x_J^-[]𝒢^h )=ℜ(h), we have
𝕁_2(x,𝐂_1)≤ Cℜ^2(h)∑_v∈𝐂_1∩ℤ^d |x_J^+-v|^2-d.
By taking integral over the event {𝐂(0)∈𝒜_1} (i.e. {0[]x,0↮𝒢^h}) and summing over x∈ℤ^d, one has
𝕁_2 ≤ Cℜ^2(h)∑_x∈ℤ^d𝔼[∑_v∈𝐂(0)∩ℤ^d |x_J^+-v|^2-d·1_0[]x,0↮𝒢^h].
For the same reason as in the third and fourth line of (<ref>), the RHS is also bounded from above by CJ^6-dℜ^2(h)ℜ'(h). To sum up, we have
𝕁_2 ≤ CJ^6-dℜ^2(h)ℜ'(h).
Now we consider 𝕁_3. Recall in (<ref>) that for any y∈ℤ^d, 𝐂_A(y):={v∈ℤ^d: v[] y off A}. On the event {𝐂(0)=𝐂_1,𝐂_1∈𝒜_1,𝐂(x_J^-)∈𝒜_2}, every loop contained in 𝐂(x_J^-) does not intersect 𝐂_1, and thus 𝐂(x_J^-)=𝐂_𝐂_1(x_J^-). In addition, since the event {𝐂_𝐂_1(x_J^-)∈𝒜_2} only depends on 𝒢^h∖𝐂_1 and the loops in ℒ_1/2 disjoint from 𝐂_1 (both of which are independent of {𝐂(0)=𝐂_1,𝐂_1∈𝒜_1}), we have
𝔼[1_𝐂(x_J^-)∈𝒜_2∑_v∈𝐂(x_J^-)∩ℤ^d |x_J^+-v|^2-d| 𝐂(0)=𝐂_1,𝐂_1∈𝒜_1 ]
= 𝔼[1_𝐂_𝐂_1(x_J^-)∩𝒢^h≠∅∑_v∈𝐂_𝐂_1(x_J^-)∩ℤ^d |x_J^+-v|^2-d],
which implies that
𝕁_3(x,𝐂_1)≤ Cℜ(h)𝔼[1_𝐂_𝐂_1(x_J^-)∩𝒢^h≠∅∑_v∈𝐂_𝐂_1(x_J^-)∩ℤ^d |x_J^+-v|^2-d].
Therefore, since ∑_v∈𝐂_𝐂_1(x_J^-)∩ℤ^d |x_J^+-v|^2-d≤∑_v∈ℤ^d1_v[]x_J^-· |x_J^+-v|^2-d and 1_𝐂_𝐂_1(x_J^-)∩𝒢^h≠∅≤1_x_J^-[]𝒢^h, we have
𝕁_3(x,𝐂_1)≤ Cℜ(h)∑_v∈ℤ^d |v-x_J^+|^2-dℙ(x_J^-[]v,x_J^-[]𝒢^h )
= Cℜ(h)∑_v∈ℤ^d |(v-x_J^-)-0_2J^+|^2-dℙ(0[]v-x_J^-,0[]𝒢^h )
≤ CJ^6-dℜ^2(h) (by Lemma <ref>).
Recalling that ℙ(𝐂(0)∈𝒜_1)=ℜ'(h), by (<ref>) we get
𝕁_3
≤ CJ^6-dℜ^2(h)ℜ'(h).
Combining (<ref>), (<ref>) and (<ref>), and taking a large enough J, we finally complete the proof.
After getting Lemmas <ref> and <ref>, now we are ready to prove Lemma <ref>:
By Lemmas <ref> and <ref>, we have: for any h≥ 0,
d ℜ^2(h)/d h=2 ℜ(h)ℜ'(h)≤ 2C_9c_11^-1[ℜ(h)-(e^h-1)ℜ'(h)],
where the RHS is upper-bounded by 2C_9c_11^-1 since ℜ(h) is increasing and is at most 1 (see (<ref>)). Integrating over [0,h] and using ℜ(0)=0, we obtain ℜ^2(h)≤ 2C_9c_11^-1h, which gives the lemma with C_8=(2C_9c_11^-1)^1/2.
Recalling that Lemma <ref> is sufficient for Proposition <ref>, we eventually conclude the main result Theorem <ref>.
§ ACKNOWLEDGMENTS
J. Ding is partially supported by NSFC Key Program Project No. 12231002.
|
http://arxiv.org/abs/2307.04320v1 | 20230710032804 | Collimated hot electron generation from sub-wavelength grating target irradiated by a femtosecond laser pulse of relativistic intensity | [
"Kamalesh Jana",
"Amit D. Lad",
"Guo-Bo Zhang",
"Bo-Yuan Li",
"V. Rakesh Kumar",
"Moniruzzaman Shaikh",
"Yash M. Ved",
"Min Chen",
"G. Ravindra Kumar"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
|
http://arxiv.org/abs/2307.03967v1 | 20230708124657 | End-to-End Supervised Multilabel Contrastive Learning | [
"Ahmad Sajedi",
"Samir Khaki",
"Konstantinos N. Plataniotis",
"Mahdi S. Hosseini"
] | cs.CV | [
"cs.CV"
] |
End-to-End Supervised Multilabel Contrastive Learning
Ahmad Sajedi, Samir Khaki, Konstantinos N. Plataniotis, Mahdi S. Hosseini
August 12, 2023
===================================================================
Multilabel representation learning is recognized as a challenging problem that can be associated with either label dependencies between object categories or data-related issues such as the inherent imbalance of positive/negative samples. Recent advances address these challenges from model- and data-centric viewpoints. In model-centric approaches, the label correlation is obtained by an external model design (e.g., graph CNN) to incorporate an inductive bias for training. However, they fail to design an end-to-end training framework, leading to high computational complexity. On the contrary, in data-centric approaches, the realistic nature of the dataset is considered for improving the classification while ignoring the label dependencies. In this paper, we propose a new end-to-end training framework–dubbed KMCL (Kernel-based Multilabel Contrastive Learning)–to address the shortcomings of both model- and data-centric designs. The KMCL first transforms the embedded features into a mixture of exponential kernels in Gaussian RKHS. It is then followed by encoding an objective loss that is comprised of (a) a reconstruction loss to reconstruct the kernel representation, (b) an asymmetric classification loss to address the inherent imbalance problem, and (c) a contrastive loss to capture label correlation. The KMCL models the uncertainty of the feature encoder while maintaining a low computational footprint. Extensive experiments are conducted on image classification tasks to showcase the consistent improvements of KMCL over the SOTA methods. PyTorch implementation is provided in <https://github.com/mahdihosseini/KMCL>.
§ INTRODUCTION
Learning from multilabel representation is a common practice that is considered in both computer vision <cit.> and medical image <cit.> application domains. Images usually contain more than one object for classification, where they can be semantically related to each other. The idea is to create an embedded feature space that can capture label dependencies to improve the classification task <cit.>. However, effectively learning such embedded space is known to be a challenging problem and various methods have been proposed over the past few years, including sequence-to-sequence modeling <cit.>, graph approaches <cit.>, and new loss-function designs <cit.>. Generally, there are two main approaches to addressing the multilabel representation learning problem: the data-centric approach and the model-centric approach. The data-centric approach focuses on addressing data-related issues like inherent imbalance <cit.>, impartial label training <cit.>, and hierarchical relationships <cit.> while ignoring label dependencies. On the contrary, the model-centric approach aims to capture label interactions for semantic embedding such as graph convolutional networks <cit.>, attention mechanisms <cit.>, and transformer-based learning <cit.>. Despite the benefits, they fail to design an end-to-end learning framework due to their high computational costs or the laborious task of capturing heuristic label dependencies like using correlation matrices. These limitations make them challenging to implement, optimize, and interpret.
In this paper, we aim to combine the benefits of both data-centric and model-centric approaches while addressing their potential drawbacks. The solution builds on the foundation of the asymmetric loss <cit.>, which tackles the imbalance between positive and negative samples in multilabel classification. Our design augments this loss function by capturing the semantic relationships between labels using a kernel-based contrastive loss. This is achieved through two steps: (a) leveraging a Kernel Mixture Module (KMM) to explore the epistemic uncertainty of the feature encoder (see Figs. <ref> and <ref>). This is done by converting the embedded features of multilabel images into a Gaussian Reproducing Kernel Hilbert Space (RKHS) ℋ, and (b) employing a contrastive learning framework on the Gaussian RKHS to capture label dependencies through a weighted loss-function design (see Fig. <ref>). The resulting loss is trainable end-to-end, providing high numerical stability during training. The following summarizes the contribution of the paper:
[C1]: We propose a novel end-to-end framework –dubbed KMCL– to strike a balance between model-centric and data-centric approaches using a new contrastive loss augmented on asymmetric classification loss from <cit.>. KMCL is capable of capturing both the epistemic uncertainty of the model and label dependencies between classes simultaneously.
[C2]: We introduce a KMM block design within the KMCL framework to generate a mixture of exponential kernels in Gaussian RKHS to model the uncertainty of the feature encoder and improve the robustness of the classification task. To reconstruct the mixture kernels from data, we propose a loss function ℒ_REC (in Eq. <ref>) as an alternative to the negative log-likelihood loss that addresses the numerical instabilities mentioned in <cit.>.
[C3]: We construct the ℒ_KMCL (in Eq. <ref>) as a complementary loss to ℒ_ASL <cit.> to capture label dependencies and enhance classification performance. We utilize the Bhattacharyya coefficient (ρ) as a similarity metric between two kernel representations to pull together similar classes (positive) from a pair of multilabel images while contrasting dissimilar ones (negative) in Gaussian RKHS.
[C4]: We consistently improve classification performance on both computer vision and medical imaging tasks with low computational footprints. Our loss design yields robust behavior toward a range of hyperparameters that are fixed across all experiments.
§.§ Related Work
Multilabel Image Representation. Multilabel image representation problems have been extensively studied, focusing on exploiting label dependencies within semantically aware regions. Previous approaches include RNN-CNN models for sequence-to-sequence modeling <cit.>, transforming the problem into a multi-instance problem <cit.>, and using recurrent attention reinforcement learning <cit.>. Later, efforts were made to incorporate linguistic embedding of training labels into graph neural network designs <cit.>. However, graph-based approaches assume the presence of coexisting label dependencies, which may not hold true when labels co-occur infrequently. Attention mechanisms have been introduced in dynamic graph modeling networks to address this issue <cit.>. Despite their effectiveness, these approaches often result in complex models with heavy computational requirements and limited generalization in different domains. A residual attention mechanism was introduced <cit.> to reduce such complexities by augmenting independent class feature scores using a class-agnostic average pooling method for aggregation scoring. Recent developments in this field emphasize the realistic nature of multilabel data representation. For example, the design proposed in <cit.> introduces an asymmetric loss function to balance the frequency of positive and negative classes. Other approaches include class-aware loss design for impartial label training <cit.> and exploring hierarchical relationships of multilabel data in a contrastive learning framework <cit.>. In this paper, we leverage both data- and model-centric approaches to reduce the above-mentioned complexities.
Contrastive Learning. Self-supervised learning methods primarily focus on contrastive learning, which involves capturing inter-relational object information in image representation. This is achieved through the use of contrastive loss functions, either in unsupervised contrastive learning where labels are absent <cit.>, or in supervised contrastive learning where labels are available <cit.>. The framework has been extended to multilabel representation learning <cit.> by considering shared label images as positive and unshared label images as negative. The existing multilabel contrastive loss designs rely on hard-coded features and lack flexibility in representing semantically aware objects and their label dependencies. However, we propose transforming embedded features into a mixture of exponential kernels in Gaussian RKHS to account for the potential uncertainty of model parameters and accordingly relax the embeddings.
§ BACKGROUND ON BHATTACHARYYA COEFFICIENT BETWEEN EXPONENTIAL KERNELS
The Bhattacharyya coefficient is a widely used metric to measure the similarity between probability distributions in various fields, including computer vision, pattern recognition, and statistical analysis <cit.>. Normal distributions are commonly evaluated using this metric to determine class separability in transfer learning <cit.>, perform point cloud instance segmentation <cit.>, and employ pseudo-labels for semi-supervised classification <cit.>. However, the Gaussian probability may not always be the best option for estimating the target variable due to normality assumptions which leads to numerical instabilities such as singularity <cit.>. A mixture of exponential kernels can be used as a reliable alternative to estimate the relative likelihood of the target variable, especially when the distribution is unknown or multimodal. In such cases, the Bhattacharyya coefficient ρ between the normalized versions of the kernel components can assess the geometric similarity and degree of overlap. Compared to Kullback-Leibler divergence <cit.> or L_p norms, ρ takes values in the range of [0, 1], which makes it a practical choice for comparing two statistical samples. In the following remark, we will elaborate on the closed-form expression of ρ between two exponential kernels.
Let p(𝐱):= K_Σ_p(𝐱, μ_p) = exp(-1/2‖𝐱 - μ_p‖^2_Σ_p^-1) and q(𝐱) := K_Σ_q(𝐱, μ_q) = exp(-1/2‖𝐱 - μ_q‖^2_Σ_q^-1) be anisotropic multivariate squared exponential kernels that define a Gaussian RKHS ℋ <cit.>. Then, the Bhattacharyya coefficient between the normalized p(𝐱) and q(𝐱)
is:
ρ(p(𝐱), q(𝐱) ) = ∫(p(𝐱)/∫p(𝐱) d𝐱)^1/2(q(𝐱)/∫q(𝐱) d𝐱)^1/2d𝐱 = |Σ_p|^1/4|Σ_q|^1/4/|Σ|^1/2exp(-1/8‖μ_p-μ_q‖^2_Σ^-1),
where ‖μ_p-μ_q‖^2_Σ^-1 = (μ_p-μ_q)^TΣ^-1(μ_p-μ_q) and Σ = (Σ_p+Σ_q)/2. The μ_p, μ_q∈ℝ^M and Σ_p, Σ_q∈𝕊_++^M are the mean vectors and the covariance matrices, respectively, and the operation |·| represents the determinant of a matrix. The proof of Remark <ref> is provided in Supplementary material.
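As a concrete illustration of the closed form in Remark <ref>, we include a minimal NumPy sketch below; the function name and interface are ours (not taken from the released implementation), and the code simply evaluates the scale factor and the exponential term of (<ref>).

import numpy as np

def bhattacharyya_coefficient(mu_p, cov_p, mu_q, cov_q):
    # Closed form of Remark 1: scale factor times the Mahalanobis-type exponential term.
    cov = 0.5 * (cov_p + cov_q)                       # Sigma = (Sigma_p + Sigma_q) / 2
    diff = mu_p - mu_q
    scale = (np.linalg.det(cov_p) ** 0.25) * (np.linalg.det(cov_q) ** 0.25) \
            / np.sqrt(np.linalg.det(cov))             # |Sigma_p|^{1/4} |Sigma_q|^{1/4} / |Sigma|^{1/2}
    maha = diff @ np.linalg.solve(cov, diff)          # (mu_p - mu_q)^T Sigma^{-1} (mu_p - mu_q)
    return scale * np.exp(-0.125 * maha)              # rho in [0, 1]

For symmetric positive definite covariance matrices the returned value lies in [0, 1] and equals 1 exactly when the two kernels coincide.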
The Bhattacharyya coefficient, also known as the Hellinger affinity <cit.>, measures the normalized correlation between the square roots of kernels over the entire space. This similarity metric compares p(𝐱) and q(𝐱) by projecting their square roots onto a unit hypersphere and measuring the cosine of the angle between them in the complete inner product space ℋ.
A careful examination of Equation <ref> reveals that the Bhattacharyya coefficient between normalized p(𝐱) and q(𝐱) consists of two terms: a scale factor and an exponential component. The scale factor measures overlap by comparing the generalized variances of the kernels, which are determined by the determinant of their covariance matrices. The scale factor converges to one when the covariance matrices of the two kernels are similar, indicating an overlap between them. The generalized variance of a kernel is related to its entropy and power entropy <cit.>, which measure uncertainty and spread. This allows the scale factor to consider differences in information content and orientation, resulting in separability due to covariance dissimilarity. On the other hand, the second term measures the similarity between the means μ_p and μ_q weighted by the precision matrix Σ^-1, providing separability based on positional differences. This exponential component represents the Mahalanobis kernel similarity <cit.> between μ_p and μ_q with respect to Σ^-1. The following corollary will further elucidate the connection of the Bhattacharyya coefficient with the Mahalanobis and Gaussian similarities.
Let p(𝐱) := K_Σ_p(𝐱, μ_p) and q(𝐱) := K_Σ_q(𝐱, μ_q) be multivariate kernels defined in Remark <ref>. The Bhattacharyya coefficient between normalized p(𝐱) and q(𝐱) can be reduced to either the Mahalanobis or the RBF kernel similarity, depending on the covariance matrices:
(i) The Mahalanobis kernel similarity, Sim_M(p(𝐱), q(𝐱)), is obtained when the covariance matrices are homoscedastic, i.e., Σ_p = Σ_q = Σ. It has the following closed-form expression:
Sim_M(p(𝐱), q(𝐱)) = ρ(K_Σ(𝐱, μ_p), K_Σ(𝐱, μ_q)) = exp( -(1/(2(2)^2)) ‖μ_p - μ_q‖^2_Σ^-1 ) = exp( -(1/8) ‖μ_p - μ_q‖^2_Σ^-1 ).
The described Mahalanobis metric evaluates the similarity between p(𝐱) and q(𝐱) based on their mean difference and relative positions (see Fig. <ref>d).
(ii) The Gaussian kernel similarity, Sim_G(p(𝐱), q(𝐱)), is obtained when the covariance matrices are equal and isotropic, meaning Σ_p = Σ_q = σ^2I. The closed-form expression will be:
Sim_G(p(𝐱), q(𝐱)) = ρ(K_Σ(𝐱, μ_p), K_Σ(𝐱, μ_q)) = exp( -‖μ_p - μ_q‖^2 / (8σ^2) ).
In cases where two kernels have similar means but different covariance matrices, the Mahalanobis and Gaussian kernel similarities often exhibit a perfect correlation that may not precisely reflect true similarities (Figs. <ref>a and c). Instead, the Bhattacharyya coefficient evaluates the generalized variances of the kernels and identifies similarities in their orientation, shape, and means (Figs. <ref>a and c). Therefore, it is often a superior metric to the Mahalanobis and the Gaussian kernel similarities.
The process of computing the final value of the closed-form expression between high-dimensional kernels can be time-consuming and resource-intensive. This problem can be alleviated by imposing constraints on the mean vectors and/or the covariance matrices. Following <cit.>, we will cover how specific constraints can be applied to improve computational efficiency in a subsequent corollary.
Let p(𝐱) := K_Σ_p(𝐱, μ_p) and q(𝐱) := K_Σ_q(𝐱, μ_q) be two multivariate kernels as defined in Remark <ref>. The following statements hold:
(i) If the covariance matrices are diagonal, meaning that
Σ_p = diag(σ_p,1^2, ⋯, σ_p,M^2) and Σ_q = diag(σ_q,1^2, ⋯, σ_q,M^2), the Bhattacharyya coefficient between normalized p(𝐱) and q(𝐱) will be
ρ(p(𝐱), q(𝐱)) = ( ∏_i=1^M ( (σ_p,i^2 + σ_q,i^2) / (2σ_p,iσ_q,i) )^-1/2 ) exp( -(1/4) ∑_i=1^M (μ_p,i - μ_q,i)^2 / (σ_p,i^2 + σ_q,i^2) ). (Anisotropic)
(ii) If the mean vectors have identical values across all dimensions (μ_p = μ_p1, μ_q = μ_q1, where 1 = [1, ⋯, 1]^T∈ℝ^M is the one vector), and the covariance matrices are diagonal with homogeneous variances (Σ_p = σ_p^2I, Σ_q = σ_q^2I, where I∈𝕊^M_++ is the identity matrix), then the Bhattacharyya coefficient between two normalized isotropic kernels p(𝐱) and q(𝐱) can be calculated as
ρ(p(𝐱), q(𝐱)) = ( (σ_p^2 + σ_q^2) / (2σ_pσ_q) )^-M/2 exp( -(M/4) (μ_p - μ_q)^2 / (σ_p^2 + σ_q^2) ). (Isotropic)
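A small sketch of both special cases follows (our own helper functions, with var_p and var_q holding the per-dimension variances σ²; the numeric arguments are illustrative).

```python
import numpy as np

def rho_anisotropic_diag(mu_p, var_p, mu_q, var_q):
    """Corollary (i): diagonal covariances; var_* hold per-dimension variances."""
    s = var_p + var_q
    scale = np.prod((s / (2.0 * np.sqrt(var_p * var_q))) ** -0.5)
    return scale * np.exp(-0.25 * np.sum((mu_p - mu_q) ** 2 / s))

def rho_isotropic(mu_p, var_p, mu_q, var_q, M):
    """Corollary (ii): scalar mean value and scalar variance per kernel, dimension M."""
    s = var_p + var_q
    scale = (s / (2.0 * np.sqrt(var_p * var_q))) ** (-M / 2.0)
    return scale * np.exp(-0.25 * M * (mu_p - mu_q) ** 2 / s)

# Example usage: the isotropic form only needs three scalars per kernel plus M.
print(rho_anisotropic_diag(np.array([0.0, 1.0]), np.array([1.0, 0.5]),
                           np.array([0.2, 0.8]), np.array([0.9, 0.7])))
print(rho_isotropic(0.3, 1.2, 0.1, 0.8, M=512))
```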
§ PROPOSED METHOD
Figure: Overview of the KMCL framework. The training pipeline comprises a feature encoder that feeds into the KMM, which outputs the parameters of a mixture model in the Gaussian RKHS ℋ. These parameters then define the objective function that captures label correlation to aid in training the model for multi-label classification.
The multi-label classification task involves assigning multiple labels to an image 𝐱^n from sample space 𝐗. These labels are typically correlated with each other and represented by a multi-hot binary vector 𝐲^n∈{0,1}^K, where K denotes the number of labels. In this section, we propose an end-to-end multi-label learning framework–dubbed Kernel-based multi-label Contrastive Learning (KMCL), that captures label correlations to improve recognition performance. Given an input batch of data, we first propagate it through the encoder network to obtain the feature embedding. The embedding is then inputted into a novel fully connected layer called the Kernel Mixture Module (KMM), which produces a Gaussian Reproducing Kernel Hilbert Space ℋ. The Gaussian RKHS embedding can handle higher-order statistics of the features and has a complete inner product that enables linear geometry, making it richer than the deterministic feature embedding. Finally, we compute the loss function using the KMM outputs on space ℋ to capture label correlation and train the model for multi-label classification. Figure <ref> provides a visual explanation.
§.§ KMCL Framework
The main components of the KMCL framework are:
Figure: Internal architecture of the KMM.
Feature Encoder. The encoder network takes two samples from the input batch separately and generates corresponding feature representation vectors 𝐟∈ℝ^M. The dimension of the feature vector depends on the encoder type.
KMM.
Most feature encoders produce deterministic results that do not quantify or control uncertainty, leading to low confidence in robust multi-label classification tasks and errors in interpreting the output predictions. Uncertainty in deep learning arises from two sources: epistemic uncertainty (model uncertainty), resulting from uncertainty in model parameters, and aleatoric uncertainty (data uncertainty), which stems from the inherent noise in data and label ambiguity. In this study, we propose the Kernel Mixture Module (KMM) to estimate epistemic uncertainty in predictions. The KMM takes the feature vector 𝐟 from the encoder network and generates a mixture of exponential kernels within the Hilbert space, each corresponding to a specific class in an image. Specifically, the fully connected layer in the KMM utilizes learnable weights and biases to produce three outputs for each unimodal exponential kernel component: the mixture coefficient π_k, mean vector μ_k, and covariance matrix Σ_k (Fig. <ref>). The parameters π_k, μ_k, and Σ_k quantify the existence, relative spatial positioning, and relative statistical complexities (measures of spread and uncertainty) of the kth class membership. These parameters are then used to model the label representation of a given sample 𝐱^n associated with a class vector 𝐲^n using the following expression:
𝒢_𝒮(𝐟^n) := ∑_k∈𝒮 π_k^n g_k(𝐟^n) = ∑_k∈𝒮 π_k^n exp( -‖𝐟^n - μ_k^n 1‖^2 / (2(σ_k^n)^2) ),
where, 𝒮 = {k: y_k^n = 1} and 𝐟^n is the extracted feature vector of the input sample. The component g_k(𝐟^n) := K_Σ_k^n(𝐟^n, μ_k^n) is an isotropic exponential kernel where μ_k^n = μ_k^n1, Σ_k^n = (σ^n_k)^2I, and π_k^n∈ [0, 1]. These adaptive parameters i.e., θ_k^n = [μ_k^n, (σ^n_k)^2, π_k^n] are calculated through forward propagation, using suitable activation functions to ensure that the parameters adhere to their constraints. The sigmoid activation function is used to normalize the mixture coefficient for efficient multi-label classification, accurately predicting the likelihood of multiple labels. The modified version of the exponential linear unit (ELU) <cit.> is also used as an activation function for variances, ensuring their semi-positivity. The detailed architecture of KMM can be found in Fig. <ref> and Supplementary material.
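A minimal PyTorch sketch of a KMM-style head is given below; it is our reconstruction from the description above and the activation choices detailed in the Supplementary material, not the authors' released code, and the single fully connected layer, names, and sizes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelMixtureModule(nn.Module):
    """Maps an encoder feature f in R^M to 3K kernel-mixture parameters:
    mixture coefficients pi (sigmoid), means mu (linear), and variances
    sigma^2 (modified ELU, keeping them strictly positive)."""

    def __init__(self, feat_dim: int, num_classes: int, eps: float = 1e-4):
        super().__init__()
        self.num_classes = num_classes
        self.eps = eps
        # One fully connected layer producing (pi, mu, sigma^2) per class.
        self.fc = nn.Linear(feat_dim, 3 * num_classes)

    def forward(self, f: torch.Tensor):
        a = self.fc(f)                                  # (N, 3K) activations
        a_pi, a_mu, a_var = torch.chunk(a, 3, dim=-1)   # each (N, K)
        pi = torch.sigmoid(a_pi)                        # existence of each class
        mu = a_mu                                       # unconstrained means
        var = F.elu(a_var) + 2.0 + self.eps             # ELU(x) > -1, so var > 1 + eps
        return pi, mu, var

# Usage sketch: K isotropic kernels g_k(f) = exp(-||f - mu_k * 1||^2 / (2 sigma_k^2)).
def kernel_responses(f: torch.Tensor, mu: torch.Tensor, var: torch.Tensor):
    # f: (N, M); mu, var: (N, K). Broadcast f against each per-class scalar mean.
    sq_dist = ((f.unsqueeze(1) - mu.unsqueeze(-1)) ** 2).sum(-1)   # (N, K)
    return torch.exp(-sq_dist / (2.0 * var))
```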
§.§ Multi-label Learning with KMCL
Building upon the KMCL framework, we aim to provide insights into the learning process of multi-label tasks. To achieve this, we introduce the details of our objective function, which comprises three components: reconstruction loss, classification loss, and contrastive loss. Throughout this paper, we use N and K to denote the mini-batch size and the total number of classes, respectively.
Reconstruction Loss.
Figure: Relative frequency histograms of class distributions in four datasets show that most images have 2, 2, 4, and 1 labels in Pascal-VOC, MS-COCO, ADP, and ChestX-ray14, respectively.
It is straightforward to compute the mixture model defined in Equation <ref> using the KMM output parameters, which provide 3K values for each input sample. Following this calculation, the model can be used to learn label-level representations in the Hilbert space ℋ by minimizing its negative log-likelihood. Therefore, we propose to optimize the following reconstruction loss over the data batch to train the mixture model
ℒ_REC = (1/N) ∑_n=1^N -log( 𝒢_𝒮(𝐟^n) / 𝒢_𝒴(𝐟^n) ),
where 𝒢_𝒴(𝐟^n) := ∑_k∈𝒴={1, ⋯, K} π_k g_k and 𝒢_𝒮(𝐟^n) denotes the kernel mixture associated with image 𝐱^n defined in Equation <ref>. The log-ratio term in Equation <ref> is always non-positive, i.e., 𝒢_𝒮(𝐟^n) ≤ 𝒢_𝒴(𝐟^n), so the loss is guided by the supervised labels for reconstruction. We propose this as an alternative to the reconstruction losses commonly used in the literature <cit.>; our new loss ℒ_REC exhibits robust behavior without relying on numerical tricks for stabilization.
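A sketch of ℒ_REC under the isotropic parameterization is shown below (our code, reusing the kernel_responses helper sketched earlier; y is a float multi-hot label matrix, and the small eps is our own numerical safeguard rather than part of the paper's formulation).

```python
import torch

def reconstruction_loss(pi, g, y, eps: float = 1e-12):
    """L_REC = -(1/N) sum_n log( G_S(f^n) / G_Y(f^n) ), where
    G_S sums pi_k * g_k over the ground-truth classes S = {k : y_k = 1}
    and G_Y sums over all K classes."""
    weighted = pi * g                        # (N, K): pi_k^n * g_k(f^n)
    g_s = (weighted * y).sum(dim=1)          # numerator restricted to true labels
    g_y = weighted.sum(dim=1)                # denominator over all classes
    return -(torch.log(g_s + eps) - torch.log(g_y + eps)).mean()
```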
Classification Loss.
The analysis in Figure <ref> reveals that despite varying statistical and conceptual properties across datasets, most images have only a fraction of labels, causing a significant imbalance between positive and negative samples. This imbalance can lead to poor training accuracy as gradients from positive labels may be underemphasized. To mitigate this issue, we use ASL <cit.> as a classification loss function that adjusts the contributions of positive and negative samples by down-weighting easy negative samples and focusing on the hard ones. Therefore, given the predictive mixture of coefficients π^n from KMM and the ground-truth multi-hot label vector 𝐲^n, the classification loss for a batch is obtained as
ℒ_ASL = 1/N∑_n=1^N∑_k=1^K -y_k^n(L_k^n)_+-(1-y_k^n)(L_k^n)_-,
where, (L_k^n)_+ = (1-π_k^n)^γ_+log (π_k^n), and (L_k^n)_- = (max(π_k^n-m, 0))^γ_-log (1-max(π_k^n-m, 0)) represent the positive and negative loss parts, respectively, such that γ_+, γ_-, and m are the hyper-parameters used to balance the loss. For additional information on ℒ_ASL, please refer to <cit.>.
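For concreteness, here is a simplified transcription of the (L_k^n)_± terms above into PyTorch; this is a sketch, not the reference ASL implementation of <cit.>, and the default values for γ_+, γ_-, and m are illustrative.

```python
import torch

def asl_loss(pi, y, gamma_pos: float = 0.0, gamma_neg: float = 4.0,
             m: float = 0.05, eps: float = 1e-8):
    """Asymmetric loss over predicted coefficients pi in [0, 1] and
    multi-hot targets y, following the (L+)/(L-) terms in the text."""
    pi_neg = torch.clamp(pi - m, min=0.0)                      # shifted negatives
    loss_pos = ((1.0 - pi) ** gamma_pos) * torch.log(pi + eps)
    loss_neg = (pi_neg ** gamma_neg) * torch.log(1.0 - pi_neg + eps)
    loss = -(y * loss_pos + (1.0 - y) * loss_neg)
    return loss.sum(dim=1).mean()
```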
Kernel-based Contrastive Loss.
The ASL loss function classifies labels independently, making it difficult to capture correlations between co-occurring semantic labels. Moreover, it fails to account for uncertainty in predictions, which can undermine decision-making confidence. To address these limitations, we propose a new loss function, ℒ_KMCL, which incorporates label correlation and epistemic uncertainty into supervised contrastive learning to improve representation.
The objective of kernel-based multi-label contrastive loss ℒ_KMCL is to pull together the kernel representations of positive images that have shared classes with the anchor image 𝐱^n in the embedding space ℋ, while pushing apart negative samples that do not share any classes. This approach differs from deterministic supervised contrastive losses <cit.> as ℒ_KMCL constructs the positive and negative pairs using similarity measures that consider the uncertainty of kernel representations. The similarity is measured by a Bhattacharyya coefficient discussed in Corollary <ref> (isotropic), which determines the overlap between these exponential kernels and their confidence in proximity. Essentially, the kernel-based contrastive loss optimizes the similarity of frequently co-occurring labels and captures their statistical dependencies, making it a valuable complement to ASL. The contrastive loss is defined for the entire minibatch as follows:
ℒ_KMCL = (1/N) ∑_n=1^N ( -1/|𝒜(n)| ) ∑_m∈𝒜(n) J(n, m) ( ∑_k∈𝒦(n, m) log[ exp(ρ_k,k^n,m/τ) / ∑_i∈{N∖n} exp(ρ_k,k^n,i/τ) ] ),
where ρ_k,l^n,m := ρ(g_k(𝐟^n), g_l(𝐟^m)) denotes the Bhattacharyya coefficient between the normalized exponential kernels g_k(𝐟^n) and g_l(𝐟^m) (see Corollary <ref>) and τ is the temperature parameter. The positive set 𝒜(n) = {m ∈ {N∖n} : 𝐲^n·𝐲^m ≠ 0}, where · is the dot product, includes samples that share at least one label with the anchor image 𝐱^n, while 𝒦(n,m) = {k ∈ 𝒴 : y_m^k = y_n^k = 1} represents the indices of shared labels between 𝐱^n and 𝐱^m. The Jaccard index J(n,m) = 𝐲^n·𝐲^m / (‖𝐲^n‖^2 + ‖𝐲^m‖^2 - 𝐲^n·𝐲^m) serves as a weighting factor for positive samples based on the number of labels they share with the anchor. It measures the intersection over union (IoU) of the label vectors of the anchor and a positive image, taking object co-occurrences into account. In this way, ℒ_KMCL prioritizes positive samples with a high Jaccard index for a given anchor while downplaying samples with few shared labels.
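A loop-based sketch of ℒ_KMCL follows (our simplification for readability rather than an efficient implementation; it assumes the isotropic Bhattacharyya coefficient of Corollary <ref>, KMM outputs mu and var of shape (N, K), and float multi-hot labels y).

```python
import torch

def bhattacharyya_isotropic(mu_n, var_n, mu_m, var_m, M: int):
    # Corollary (ii) for scalar per-class means/variances (broadcasts over tensors).
    s = var_n + var_m
    scale = (s / (2.0 * torch.sqrt(var_n * var_m))) ** (-M / 2.0)
    return scale * torch.exp(-0.25 * M * (mu_n - mu_m) ** 2 / s)

def kmcl_contrastive_loss(mu, var, y, feat_dim: int, tau: float = 0.2):
    N, K = y.shape
    # Pairwise rho_{k,k}^{n,i} for every pair of samples and every class k: (N, N, K).
    rho = bhattacharyya_isotropic(mu.unsqueeze(1), var.unsqueeze(1),
                                  mu.unsqueeze(0), var.unsqueeze(0), feat_dim)
    total = mu.new_zeros(())
    for n in range(N):
        others = [i for i in range(N) if i != n]
        positives = [m for m in others if (y[n] * y[m]).sum() > 0]
        if not positives:
            continue
        inner = mu.new_zeros(())
        for m in positives:
            inter = (y[n] * y[m]).sum()
            union = y[n].sum() + y[m].sum() - inter
            jaccard = inter / union                           # J(n, m)
            shared = (y[n] * y[m]).nonzero(as_tuple=True)[0]  # K(n, m)
            denom = torch.exp(rho[n, others, :][:, shared] / tau).sum(dim=0)
            num = torch.exp(rho[n, m, shared] / tau)
            inner = inner + jaccard * torch.log(num / denom).sum()
        total = total - inner / len(positives)
    return total / N
```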
Figure: (a) Training loss over training epochs; the plots show the normalized total loss ℒ as well as the different normalized sub-losses. (b) Training accuracy of the KMCL pipeline over training epochs.
Objective Function. The overall training loss of the KMCL is the augmented Lagrangian of the three aforementioned losses, which can be expressed as:
ℒ = ℒ_REC + λ_1 ℒ_ASL + λ_2 ℒ_KMCL,
where λ_1 and λ_2 are the Lagrangian multipliers used to balance the gradients of ℒ_ASL and ℒ_KMCL, respectively. We use an end-to-end pipeline to incorporate contrastive learning into supervised classification, which simultaneously trains the feature encoder and classification parts. This approach is different from previous methods that use contrastive losses <cit.>. In those methods, the encoder is trained with a contrastive loss and then frozen before being transferred to the classifier for tuning. Instead, the KMCL framework combines these training regimes into one formulation, enabling us to learn multi-label classification and label correlations with data-driven techniques.
§.§ KMCL Algorithm
The pseudo-code of the proposed KMCL framework is outlined in Algorithm <ref>, which takes a set of batches and a specified number of epochs as inputs. The pair of anchor images and their positive set are fed through the network depicted in Figure <ref> to obtain the feature vectors and parameters of the corresponding kernel mixtures (lines <ref>-<ref>). The overall loss is then computed as an augmented Lagrangian of the ℒ_REC, ℒ_ASL, and ℒ_KMCL using the KMM parameters (lines <ref>-<ref>). Finally, the objective function is back-propagated through the KMM and the feature encoder for each iteration to update the weights based on the gradients associated with the subsequent forward pass (line <ref>). This iterative process continues until convergence is reached.
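Putting the pieces together, here is a condensed sketch of one training epoch implied by Algorithm <ref> (our pseudocode-level PyTorch, reusing the loss helpers sketched in the previous subsections; the encoder, data loader, and feature dimension are placeholders, with λ_1 = 0.1 and λ_2 = 0.3 as reported in the training details).

```python
import torch

def train_one_epoch(encoder, kmm, loader, optimizer,
                    lambda1=0.1, lambda2=0.3, feat_dim=2048, tau=0.2, device="cuda"):
    """One pass over the data: encode, produce KMM parameters, combine the
    three losses, and back-propagate through both the KMM and the encoder."""
    encoder.train()
    kmm.train()
    for images, y in loader:                    # y: float multi-hot labels (N, K)
        images, y = images.to(device), y.to(device).float()
        f = encoder(images)                     # (N, M) feature vectors
        pi, mu, var = kmm(f)                    # KMM outputs, each (N, K)
        g = kernel_responses(f, mu, var)        # isotropic kernel values (N, K)

        loss = (reconstruction_loss(pi, g, y)
                + lambda1 * asl_loss(pi, y)
                + lambda2 * kmcl_contrastive_loss(mu, var, y, feat_dim, tau))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```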
Figures <ref> (a) and (b) demonstrate the results of implementing the KMCL framework with TResNet-L <cit.> as the encoder network on the Pascal-VOC dataset <cit.>. Fig. <ref> (b) displays the objective loss behavior along with the evolution of the three loss terms for the training and test sets, whereas the mean average precision (mAP) accuracy is presented in Fig. <ref> (a). The losses decrease with different multiplicative factors due to the tuned Lagrangian multipliers. The convergence speed of the method on multi-label tasks is impressive, reaching 96.2% mAP accuracy in fewer than 30 epochs.
§ EXPERIMENTS
In this section, we present the experimental setup and demonstrate the superior performance of KMCL in both general computer vision and medical imaging domains. To ensure robust feature extraction, we utilized TResNet-M and TResNet-L <cit.>, state-of-the-art architectures designed for different image resolutions (224 and 448, respectively). The features are then passed through the KMM to obtain the mixture parameters π, μ, and Σ. Additional information regarding the encoders, KMM, datasets, evaluation metrics, and training details can be found in Supplementary material.
Datasets. We evaluate the KMCL's performance on popular computer vision datasets, PASCAL-VOC <cit.> and MS-COCO <cit.>, as well as on medical datasets, ADP <cit.> and ChestX-ray14 <cit.>.
Evaluation Metrics.
Following SOTA <cit.>, we report the standard metrics of mean average precision (mAP), average overall precision (OP), recall (OR), and F1 score (OF1) in addition to per-class precision (CP), recall (CR), and F1 score (CF1). We considered the number of parameters (M) and GMAC as measures of computational costs. Finally, for the ChestX-ray14 dataset <cit.>, we reported per-class AUC scores to assess model discriminability for specific classes.
Training Details.
We implemented the KMCL framework using PyTorch, following Alg. <ref>. The backbone feature encoders were initialized with pre-trained architectures, while the mixture parameters were initialized by applying a uniform distribution to π and μ and setting Σ to a constant value of 1. In all experiments, we assign fixed values of 0.1 and 0.3 to λ_1 and λ_2 respectively, as specified in Eq. <ref>. The Adam optimizer <cit.> was used with an initial learning rate of 2e-4, and the OneCycleLR scheduler <cit.> for 40 epochs. Standard augmentations from RandAugment policy <cit.> were applied to the training data. Experiments were conducted on four NVIDIA GeForce RTX 2080Ti GPUs.
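As a sketch of this setup (our helper function, not the released training script; the module list and steps_per_epoch would come from the surrounding code):

```python
import torch

def build_optimizer(modules, steps_per_epoch: int, epochs: int = 40, lr: float = 2e-4):
    """Adam + OneCycleLR as reported in the training details above."""
    params = [p for m in modules for p in m.parameters()]
    optimizer = torch.optim.Adam(params, lr=lr)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=lr, epochs=epochs, steps_per_epoch=steps_per_epoch)
    return optimizer, scheduler
```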
How does KMCL compare to SOTA methods on computer vision datasets? We compare KMCL with SOTA methods on computer vision datasets in Table <ref> and Fig. <ref>. KMCL outperforms the best competitors on PascalVOC and MS-COCO by margins of 0.4% and 0.2% in mAP, respectively. In particular, KMCL excels in challenging classes on PascalVOC, such as the sofa and bus classes, with improvements of over 3.0%. On MS-COCO, KMCL demonstrates significant improvements across multiple metrics, including mAP, OF1, and CF1. Using the TResNet-M encoder at resolution 224, we achieve state-of-the-art results with a 5.0% increase in mAP compared to the best method. Similarly, with TResNet-L at a resolution of 448, KMCL surpasses other methods in overall and per-class metrics. These gains come from integrating the proposed contrastive learning with the ASL classification loss to capture label correlation and enhance prediction accuracy. This is illustrated by the Top-3 metrics on MS-COCO, where the top 3 classes are better selected by considering label correlation when ranking the predictions.
How well does KMCL generalize to medical imaging datasets?
Table: Comparisons with state-of-the-art methods on the ADP dataset.

Method                   mAP   OP    OR    OF1   CP    CR    CF1   Params (M)  GMAC
ML-GCN (Binary) <cit.>   94.9  92.0  86.9  89.7  91.8  87.0  89.3  44.90       31.39
ASL (TResNet-L) <cit.>   96.1  92.1  90.7  91.4  92.5  89.2  90.8  44.14       35.28
TDRG <cit.>              95.5  94.3  86.2  90.5  94.6  84.8  89.4  75.20       64.40
CSRA <cit.>              96.1  93.0  89.7  91.7  93.1  88.6  90.8  42.52       31.39
KMCL (TResNet-M)         95.1  94.2  91.0  90.4  94.7  88.9  89.8  29.41       5.74
KMCL (TResNet-L)         96.5  92.7  92.9  92.8  92.6  92.0  92.3  44.20       35.28
We evaluate KMCL against SOTA methods on the medical imaging datasets presented in Tables <ref> and <ref>. The recall is a crucial factor in these datasets, as it reflects the likelihood of missing a medical diagnosis. The proposed method achieves a superior tradeoff between precision and recall by significantly improving recall metrics while maintaining competitive precision scores, including SOTA mAP. On the ADP dataset, KMCL outperforms the surveyed SOTA with margins of 0.4%, 2.2%, and 2.8% for mAP, OR, and CR, respectively. Similarly, on the ChestX-ray14 dataset, both TResNet-M and TResNet-L models exhibit significant improvements, with our best model surpassing SOTA results by 5.2%, 7.0%, and 11.6% in mAP, OR, and CR, respectively. In comparison, competing methods such as ML-GCN <cit.> use label correlation but suffer from increased computational complexity and a multi-stage approach, as shown in Table <ref>. However, our method surpasses the SOTA while maintaining a small model size and low GMAC scores. These findings highlight the advantage of KMCL in computationally constrained environments.
How does KMCL's performance vary with different similarity measures? In this ablation study, we examine the impact of replacing the Bhattacharyya coefficient with either the Mahalanobis kernel similarity or the Gaussian kernel similarity in the KMCL framework (Corollary <ref> (i) and (ii)). Under the Mahalanobis kernel similarity, performance decreases across PascalVOC and ADP, as indicated in Table <ref>. This is likely due to the constraint that the variance must be identical across all classes, leading to an inability to capture entropy and uncertainty, as reported in Section <ref>.
Table: Ablative comparison for similarity measures and kernel representation cases.

Similarity Metric  Case         ADP: mAP  OP    OR    OF1   CP    CR    CF1   PascalVOC: mAP  Params (M)  GMAC
Bhattacharyya      Anisotropic  95.4      94.0  92.7  90.6  94.8  90.7  90.5  95.4            104.91      5.81
Bhattacharyya      Isotropic    95.1      94.2  91.0  90.4  94.7  88.9  89.8  95.2            29.41       5.74
Mahalanobis        -            94.7      92.0  92.4  90.9  92.6  90.5  90.4  95.1            71.34       5.78
Gaussian Kernel    -            94.5      91.5  89.7  90.6  92.3  86.5  89.3  95.0            29.40       5.74
Similarly, when utilizing the Gaussian kernel similarity, performance deteriorates further because the model is constrained to learn a single variance value that applies to both the label classes and the feature dimensions. It is therefore more meaningful to use the Bhattacharyya coefficient, since it evaluates the generalized variances of the kernels and identifies similarities in their orientation, shape, and means (Eq. <ref>). We further investigate the isotropic and anisotropic cases of the exponential kernel representations in the KMCL framework, as discussed in Corollary <ref>. The anisotropic case leads to improved performance, as shown in Table <ref>, but results in an increase in learnable parameters at the cost of higher computational complexity. By incorporating variances over the feature dimension, we better capture epistemic uncertainty and achieve enhanced overall results. Thus, if computational resources are available, one can best leverage our framework in the anisotropic case to achieve SOTA results.
Figure: Reduced t-SNEs for ASL (left) and KMCL (center) on PascalVOC, color-coded by user-defined super-classes in the legend; (right) ground-truth correlation matrix for PascalVOC.
Intuitive Visualizations.
KMCL presents an end-to-end framework for contrastive learning that has achieved quantitatively significant results compared to existing methods. In this section, we visualize how the learned feature representation incorporates label correlation and epistemic uncertainty. Figure <ref> shows a reduced t-SNE <cit.> visualization of the feature representation for ASL and KMCL on the Pascal VOC dataset. Both methods accurately discriminate between different classes, as seen from the plotted centroids of each cluster. Notably, both methods exhibit a clustering pattern based on user-defined super-classes (e.g., car and bus are both forms of Transportation). Upon analyzing the ground truth correlation matrix, it becomes apparent that KMCL captures label correlation more effectively. Specifically, the sofa class exhibits the highest correlation with the chair class, resulting in their closer proximity in the t-SNE visualization for KMCL compared to ASL.
Figure: GradCam visualization of KMCL and a competing SOTA method. Bolded class labels indicate instances where KMCL outperforms SOTA by a large margin.
Figure <ref> showcases the GradCam visualization for KMCL and a competing SOTA method. KMCL effectively distinguishes the sofa and chair classes, consistent with the t-SNE visualization results. Moreover, by capturing epistemic uncertainty from the kernel representation, our method accurately identifies the correct classes in the ADP sample with minimal extraneous activations. For more visualizations, please refer to the Supplementary material.
§ BROADER IMPACT
KMCL provides an end-to-end supervised contrastive learning framework for multilabel datasets. It requires fewer resources for the design and implementation of downstream tasks such as classification. Contrastive learning methods like <cit.> typically involve two stages of encoder training and fine-tuning for the task, which can take several hundred epochs. In contrast, KMCL only requires one stage of training with significantly fewer epochs. This translates into a much smaller carbon emission footprint, as highlighted in <cit.> for using more compact models for training. Although KMCL has been successfully applied in computer vision and medical imaging domains, its effectiveness has not yet been tested for segmentation/detection tasks or in other modalities like natural language processing. In future work, we will consider broadening our experiments for further validation. Additionally, we believe that society can benefit from the theoretical analysis of the similarity metrics presented in this paper, which can be adapted to different application domains.
§ ACKNOWLEDGMENT
The authors would like to thank Rahavi Selvarajan, Xiao Hu, and Jiarui Zhang for their assistance and helpful discussions.
§ APPENDIX
§.§ Proof of Remark 1.
The Bhattacharyya coefficient between the normalized p(𝐱) := K_Σ_p(𝐱, μ_p) = exp(-(1/2)‖𝐱 - μ_p‖^2_Σ_p^-1) and q(𝐱) := K_Σ_q(𝐱, μ_q) = exp(-(1/2)‖𝐱 - μ_q‖^2_Σ_q^-1) is defined as
ρ(p(𝐱), q(𝐱)) = ∫_𝒳 ( p(𝐱) / ∫_𝒳 p(𝐱) d𝐱 )^1/2 ( q(𝐱) / ∫_𝒳 q(𝐱) d𝐱 )^1/2 d𝐱 = ∫_𝒳 p(𝐱)^1/2 q(𝐱)^1/2 d𝐱 / ( √(∫_𝒳 p(𝐱) d𝐱) √(∫_𝒳 q(𝐱) d𝐱) ).
To begin, we expand the integrand of the numerator, i.e., √(p(𝐱)q(𝐱)), as follows:
exp( -(1/4)𝐱^T(Σ_p^-1+Σ_q^-1)𝐱 + (1/2)(Σ_p^-1μ_p+Σ_q^-1μ_q)^T𝐱 - (1/4)(μ_p^TΣ_p^-1μ_p + μ_q^TΣ_q^-1μ_q) ).
In order to overcome the challenge of integrating the derived integrand in Equation <ref>, we will introduce a new approach. We will represent √(p(𝐱)q(𝐱)) as the product of a constant value, denoted as h(μ_p, μ_q, Σ_p, Σ_q), and a newly defined anisotropic multivariate squared exponential kernels, denoted as r(𝐱):= K_Σ_r(𝐱, μ_r). This formal representation can be expressed as follows:
√(p(𝐱)q(𝐱)) = h(μ_p, μ_q, Σ_p, Σ_q)r(𝐱).
We defined the new exponential kernel of Equation <ref> as
r(𝐱) := K_Σ_r(𝐱, μ_r) = exp(-(1/2)‖𝐱 - μ_r‖^2_Σ_r^-1) = exp( -(1/2)(𝐱-μ_r)^TΣ_r^-1(𝐱-μ_r) ),
where Σ_r ≜ ( (1/2)Σ_p^-1 + (1/2)Σ_q^-1 )^-1 and μ_r ≜ Σ_r( (1/2)Σ_p^-1μ_p + (1/2)Σ_q^-1μ_q ). Once the values of Σ_r and μ_r are substituted in Equation <ref>, the kernel r(𝐱) becomes
r(𝐱) = exp( -(1/4)𝐱^T(Σ_p^-1+Σ_q^-1)𝐱 + (1/2)(Σ_p^-1μ_p+Σ_q^-1μ_q)^T𝐱 - (1/4)(Σ_p^-1μ_p+Σ_q^-1μ_q)^T(Σ_p^-1+Σ_q^-1)^-1(Σ_p^-1μ_p+Σ_q^-1μ_q) ).
By substituting Equations <ref> and <ref> into Equation <ref>, we obtain the closed-form expression of h(μ_p, μ_q, Σ_p, Σ_q) as presented below.
h(μ_p, μ_q, Σ_p, Σ_q) = exp( -(1/4)[ μ_p^T(Σ_p^-1 - Σ_p^-1(Σ_p^-1+Σ_q^-1)^-1Σ_p^-1)μ_p + μ_q^T(Σ_q^-1 - Σ_q^-1(Σ_p^-1+Σ_q^-1)^-1Σ_q^-1)μ_q - μ_p^T(Σ_p^-1(Σ_p^-1+Σ_q^-1)^-1Σ_q^-1)μ_q - μ_q^T(Σ_q^-1(Σ_p^-1+Σ_q^-1)^-1Σ_p^-1)μ_p ] ).
Given the fact that Σ_p^-1-Σ_p^-1(Σ_p^-1+Σ_q^-1)^-1Σ_p^-1 = Σ_q^-1-Σ_q^-1(Σ_p^-1+Σ_q^-1)^-1Σ_q^-1 = Σ_p^-1(Σ_p^-1+Σ_q^-1)^-1Σ_q^-1 = Σ_q^-1(Σ_p^-1+Σ_q^-1)^-1Σ_p^-1 = (Σ_p+Σ_q)^-1 <cit.>, we can simplify Equation <ref> and derive
exp( -(1/4)[ μ_p^T(Σ_p+Σ_q)^-1μ_p + μ_q^T(Σ_p+Σ_q)^-1μ_q - μ_p^T(Σ_p+Σ_q)^-1μ_q - μ_q^T(Σ_p+Σ_q)^-1μ_p ] ),
which can be further simplified to yield the following expression:
h(μ_p, μ_q, Σ_p, Σ_q) = exp( -(1/8)(μ_p-μ_q)^TΣ^-1(μ_p-μ_q) ),
where Σ = (Σ_p+Σ_q)/2. Ultimately, by utilizing the definition of the Bhattacharyya coefficient, Equation <ref>, and Equation <ref>, we can deduce the following conclusion:
ρ(p(𝐱), q(𝐱)) = ∫_ℝ^M p(𝐱)^1/2 q(𝐱)^1/2 d𝐱 / ( √(∫_ℝ^M p(𝐱) d𝐱) √(∫_ℝ^M q(𝐱) d𝐱) ) = ∫_ℝ^M h(μ_p, μ_q, Σ_p, Σ_q) r(𝐱) d𝐱 / ( √(∫_ℝ^M p(𝐱) d𝐱) √(∫_ℝ^M q(𝐱) d𝐱) )
= h(μ_p, μ_q, Σ_p, Σ_q) ∫_ℝ^M |2πΣ_r|^1/2 𝒩(𝐱; μ_r, Σ_r) d𝐱 / ( √(∫_ℝ^M |2πΣ_p|^1/2 𝒩(𝐱; μ_p, Σ_p) d𝐱) √(∫_ℝ^M |2πΣ_q|^1/2 𝒩(𝐱; μ_q, Σ_q) d𝐱) )
= ( |Σ_r|^1/2 / (|Σ_p|^1/4 |Σ_q|^1/4) ) h(μ_p, μ_q, Σ_p, Σ_q) = ( |2Σ_p(Σ_p+Σ_q)^-1Σ_q|^1/2 / (|Σ_p|^1/4 |Σ_q|^1/4) ) h(μ_p, μ_q, Σ_p, Σ_q)
(a)= ( |Σ_p|^1/4 |Σ_q|^1/4 / |Σ|^1/2 ) exp( -(1/8)(μ_p-μ_q)^TΣ^-1(μ_p-μ_q) ),
where Σ = (Σ_p+Σ_q)/2 and (a) follows from the property that the total area under a probability density function is 1. The notation 𝒩(𝐱; μ, Σ) represents a multivariate Gaussian probability distribution in M dimensions, characterized by a mean vector μ and a covariance matrix Σ. This completes the proof of Remark 1.
§.§ Forward Propagation in KMM.
The KMM (Kernel Mixture Module) takes the feature vector 𝐟^n∈ℝ^M as input from the encoder network and produces the parameters for each exponential kernel component in the kernel mixture model. This transformation converts the feature vector into 3K values, where each K represents the parameters for the kth kernel component (existing class), such as μ_k^n∈ℝ, σ_k^n∈ℝ^+, π_k^n∈ [0, 1]. The adaptive parameters are computed through forward propagation, employing suitable activation functions to ensure that the parameters satisfy their respective constraints. The activations corresponding to the parameters of the kth component for the KMM ((a_k^μ)^n,(a_k^σ^2)^n, (a_k^π)^n) are used to accomplish this, and they are calculated through the forward propagation of a fully connected layer by
(a_k^μ)^n = 𝐰_k^μ𝐟^n + b^μ_k, (a_k^σ^2)^n = 𝐰_k^σ^2𝐟^n + b^σ^2_k, (a_k^π)^n = 𝐰_k^π𝐟^n + b^π_k,
where {𝐰_k^μ, 𝐰_k^σ^2, 𝐰_k^π}∈ℝ^M are the weights, and {b^μ_k, b^σ^2_k, b^π_k}∈ℝ represent the biases associated with {(a_k^μ)^n, (a_k^σ^2)^n, (a_k^π)^n}, respectively. We make a minor revision to the idea of using nonlinear activations from <cit.> by replacing softmax with sigmoid to normalize the mixture coefficients and address the multi-label setting. In the following, we define the nonlinear and linear transformations applied to (a_k^μ)^n, (a_k^σ^2)^n, and (a_k^π)^n using
π_k^n = 1 / (1 + exp(-(a_k^π)^n)),
μ_k^n = (a_k^μ)^n, (σ_k^n)^2 = ELU((a_k^σ^2)^n) + 2 + ϵ,
where ELU(·) and ϵ are the exponential linear unit function <cit.> and the hyperparameter used to ensure training stability, respectively. We use a modified ELU function rather than the exponential function as the activation on (a_k^σ^2)^n in order to ensure that variances remain non-negative ((σ_k^n)^2≥ 0). This modification is necessary because the vanilla exponential function exhibits rapid growth for larger values, which can lead to training instability, particularly when dealing with high-variance datasets. It is important to note that there is no constraint on the mean μ_k^n, as it is obtained directly from the activation (a_k^μ)^n.
§ DATASETS
PASCAL-VOC
The PASCAL Visual Object Classes Challenge (2007) <cit.> is a common computer vision dataset used in multi-label classification. It contains a total of 9963 images over 20 classes, including 'cat', 'bottle', and 'person'. Being consistent with the state of the art, we trained our architecture on the trainval set and evaluated it on the test set with a total of 5011 and 4952 images in each set, respectively. Referencing the relative frequency in the main paper, we can see that the number of classes per image to the total number of classes is heavily unbalanced, with the majority of images having only 2-4 classes.
MS-COCO
The Microsoft COCO dataset <cit.> is another common computer vision dataset used in multi-label classification. This dataset includes 82,081 training and 40,504 validation images across 80 different classes including 'person', 'bicycle', and 'elephant'. Following the state of the art, we test our method on the validation dataset making it comparable with competitive approaches.
ADP
The Atlas of Digital Pathology for Histological Tissue Type Classification <cit.> is composed of digital histology images taken from several organ tissues, including the colon, brain, stomach, etc. These images were generated via a Whole Slide Image (WSI) scanner. This database includes 17,668 image patches that are multi-label in nature. The training, validation, and test sets contain 14,134, 1767, and 1767 images, respectively. The labeling scheme follows a three-tier hierarchy: L1 (9 labels), L2 (11 labels), and L3 (22 labels). As we progress down the levels, the features being annotated gradually progress from coarse to fine detail. The highest level (L1) contains classes that amalgamate several lower-level classes.
For example, Dense Regular Connective (C.D.R) is an L3 precise label that falls under the more coarse L1 category of Connective (C). For the purpose of our work, we have selected L1 as it seems to be the most statistically significant selection with a better balance of per-class distribution.
ChestXray-14
The ChestX-ray14 dataset contains hospital-scale frontal-view chest X-ray images from 30,805 unique patients. Each image either contains multiple common thoracic illnesses, including 'cardiomegaly' or 'pneumonia', or is designated 'normal', indicating no illness. The released version of the dataset catalogs 14 common illnesses to date, as opposed to the original 8 that were released at the time of publication.
§.§ Hyperparameters & Tuning
In this section, we list all the necessary parameters for the reproducibility of our method. We have categorized our hyperparameters depending on which part of the pipeline they relate to (i.e., Training Optimization refers to any parameters used in setting up the training phase). A special note is made for the Loss Development λ values. In order to best tune our method, we sampled a 15-point log-random search in a subset of the provided range to best adapt our model to the given datasets. See Table <ref>.
§.§ Additional Information on Metrics
Being consistent with state-of-the-art methods, we calculate the average overall precision (OP), recall (OR), and F1 score (OF1), in addition to the average per-class precision (CP), recall (CR), and F1 score (CF1), as metrics for evaluating the different methods on the datasets <cit.>. Overall these metrics challenge the model’s ability to accurately discriminate the class of interest in terms of measuring false positives and false negatives. Superior OF1 and CF1 indicate that the model is well-tuned for class discrimination as this metric encompasses both recall and precision in the calculation. For some experiments, we include the following computational complexity measures: Parameters (MM) to indicate model size, and GMAC to indicate the forward computational resource required. The motivation behind these metrics is to illustrate that performance is not only measured through how well the method discriminates classes but also through the complexity of deploying said method in the real world. Finally, due to the increased difficulty of the ChestX-ray14 dataset, we additionally report per class AUC scores to identify model discriminability for the class of interest, this has been a common trend in papers that have cited results on this dataset <cit.>.
§.§ Additional Visualizations
To further augment the main paper visualizations, we attach supplemental visualizations on the two additional datasets: MS-COCO and ChestXray-14. As can be seen, by the visualizations, our model is more precise at localizing the correct features. Due to capturing the epistemic uncertainty from the kernel representation, our method is able to focus the activation on the correct class, limiting extraneous false positive results. See Figure <ref>.
|
http://arxiv.org/abs/2307.04518v2 | 20230710123127 | On the Computational Modeling of Meaning: Embodied Cognition Intertwined with Emotion | [
"Casey Kennington"
] | cs.CL | [
"cs.CL"
] |
On the Computational Modeling of Meaning: Embodied Cognition Intertwined with Emotion
Casey Kennington
=====================================================================================
This document chronicles this author's historical attempt to explore how words come to mean what they do, with a particular focus on child language acquisition and what that means for models of language understanding.[I say historical because I synthesize the ideas based on when I discovered them and how those ideas influenced my later thinking.] I explain the setting for child language learning, how embodiment—being able to perceive and enact in the world, including knowledge of concrete and abstract concepts—is crucial, and how emotion and cognition relate to each other and the language learning process. I end with what I think are some of the requirements for a language-learning agent that learns language in a setting similar to that of children. This paper can act as a potential guide for ongoing and future work in modeling language.
§ INTRODUCTION
How can machines understand language? is a question that many have asked, and represents an important facet of artificial intelligence. Large language models like ChatGPT seem to understand language, but as has been pointed out <cit.>, even large, powerful language models trained on huge amounts of data are likely missing key information to allow them to reach the depth of understanding that humans have. What information are they missing, and, perhaps more importantly, what information do they have that enables them to understand, to the degree that they do? Current computational models of semantic meaning can be broken down into three paradigms:
* distributional paradigms where meaning is derived from how words are used in text (i.e., the notion that the meaning of a word depends on the “company it keeps," following <cit.>)
* grounded paradigms where aspects of the physical world are linked to language (i.e., the symbol grounding problem following <cit.>), motivated by the observation that the meaningfulness of language lies in the fact that it is about the world <cit.>
* formal paradigms where meaning is a logical form (e.g., first order logic as in <cit.>)
Figure <ref> shows examples of the three paradigms of computational semantics and the kinds of language phenomena that they model well. These paradigms have been applied in various models that represent remarkable progress in recent years. However, now that large language models and other AI models are more widely used, it is clear that there are limits to their `understanding' (if they fully understood, then why would prompt engineering be necessary?), which has prompted some to claim that a full, unified model of computational semantics is only possible if it goes through the same language acquisition process that children do.
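To make the contrast concrete, here is a deliberately toy sketch (illustrative only; the tiny hand-made vectors, the threshold rule for red, and the logic string are stand-ins, not real models): distributional meaning compares word vectors, grounded meaning ties a word to perception, and formal meaning maps a sentence to a logical form.

```python
import numpy as np

# Distributional: meaning-as-usage, approximated by cosine similarity of word
# vectors (tiny hand-made vectors stand in for real embeddings).
vec = {"dog": np.array([0.9, 0.1, 0.3]), "puppy": np.array([0.8, 0.2, 0.35])}
cos = vec["dog"] @ vec["puppy"] / (np.linalg.norm(vec["dog"]) * np.linalg.norm(vec["puppy"]))

# Grounded: the word "red" as a classifier over perceptual (RGB) features.
def is_red(rgb):
    r, g, b = rgb
    return r > 0.6 and g < 0.4 and b < 0.4   # a stand-in for a learned classifier

# Formal: a logical form for "every dog barks".
logical_form = "forall x. dog(x) -> barks(x)"

print(cos, is_red((0.8, 0.2, 0.1)), logical_form)
```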
Even if computational models of language meaning do not need to learn in the same settings and progression that children do, it is useful to make an appeal to what is known about how children do learn language in order to guide current and future modeling efforts to enable models to have a more holistic understanding of language.[Why? Because understanding and misunderstanding is a vexing societal problem and a scientific understanding of how to acquire, represent, and apply language in a computational model will tell us something about what language is, which may help us overcome those vexing misunderstandings.]
This paper represents such an appeal. At least it represents this author's attempt in the past decade to synthesize what is known about child language acquisition and model semantics more holistically.
§ WHY DO CHILDREN SPEAK?
One goal of my research is to determine the setting and requirements for language learning; specifically I have been searching for environmental reinforcement signals that a child could use to know that they were aligning with language speakers, and what the parameters of the pre-linguistic (i.e., before the first real words—not just babbling—are uttered with some kind of intent) language learner might be.
In “Child's Talk: Learning to Use Language" <cit.>, we get at some important basics (the following are quotes from Bruner): it is banal to say that infants (and, generally, humans) are social; they are geared to respond to the human face, voice, action, and gesture. Children seem to want to coordinate their actions with, or at the very least mimic, the behavior that they see in conspecific entities, i.e., their mothers. <cit.> noted that children have basic needs that might contribute to spoken interaction, namely that children aspire to affection and intimacy with their caregivers. Mothers are able to track a child's progress and act accordingly.
Mothers seem to follow the Gricean maxims of quantity, quality, relation, and manner <cit.>. The initial cognitive endowment appears to be that it is goal-driven activity, is social and communicative—self-propelled and self-rewarding—constrained, ordered, systematic, familiar, often referring, and surprisingly abstract (as opposed to concrete, which is usually what is assumed when considering that children first refer to physical objects).
A book more directly related to what I was looking for was How Children Learn to Learn Language by Lorraine McCune <cit.>, who claims that the basis of meaning is grounded in embodiment, something I had not really considered before. This way of thinking was quite different from the general NLP thesis that meaning can be derived from text alone, with word embeddings dominating the “semantic" side of the field. Text isn't embodied, and children don't learn language via the medium of text. In fact, if words are to be learned, then children must attend to physical objects (including self and others), and one thing that makes objects salient to children is the fact that they move <cit.>.
McCune makes the case (synthesizing other work) that linguistic patterns and order emerge not necessarily with explicit instruction—language learning doesn't require a curriculum, though caregivers use simple words and phrases at first. Children are interested in novelty, which gives them sensitivities to information coming through all of the sensory inputs and internally (e.g., proprioception) from their own bodies. McCune noted four stages that children go through as they learn language (p.27):
* Sensitivity to expressiveness (i.e., movement and sound)
* Transcendence of expressive qualities and knowing attitude (i.e., the child recognizes that actions and sounds are communicative)
* Denotative reference and semantics correspondence (i.e., children begin to refer to objects and learn their designations)
* Shared perceptual and representational settings (i.e., children learn language in a shared space where they and caregivers directly perceive objects and each other)
To learn language, eyes are important; children can follow the parental line of regard (i.e., triangulate what another person is looking at) by just seven months. Children somehow know that the eyes are an important modality of attention, and they wonder why the eyes of others aren't pointed at themselves, because the attention of others is something that children seem to innately seek. Reference to visually present objects (the subject of my PhD dissertation) is an important step in the developmental process, which coincides with the development of theory-of-mind (i.e., the child comes to the realization that they are an individual separate and distinct from others, and they allow others the same endowment of distinct individual identity and frame of reference to the world). But reference doesn't come first; there are other pre-linguistic parameters that must be in place before reference to visual objects can begin.
My primary takeaway from McCune's work was that children are motivated, intrinsically, to interact with other human beings and that language learning likely would not take place without interaction, nor without the motivation to interact. Other work reinforces spoken interaction <cit.>, and another reinforces that there is no overt supervision signal; children just need to explore and observe to find patterns and regularities, and once a regularity is learned, exploit it to learn more abstract regularities <cit.>.
§.§ Nature or Nurture?
Part of my quest to understand the settings and parameters for language learning has meant taking a stance on the nature (à la Chomsky) vs. nurture debate (spoiler alert: both are required, but with some nuance). It is clear that there is some degree of cognitive scaffolding that uniquely affords humans the ability to think and talk about abstract ideas using speech and other communicative mediums. Furthermore, it is known that pre-linguistic infants possess “highly developed perceptual mechanisms for the perception of speech" <cit.>. It is also clear that a fairly substantial degree of linguistic exposure is necessary for children to learn language, and by language I don't just mean syntax. Important to this debate was an observation made in <cit.> that learning is not the antithesis of innateness, but one of its important products. <cit.> makes a strong case that language requires experience and that languages are socially constructed even between the child language learner and the parent.
It is known that comprehension of speech occurs ahead of production of speech, and that visual, physical context is critical to learning language. An important takeaway from that book is that adults are not simply performing random behaviors, they are performing intentful behaviors, and children pick up on those intents. Understanding intent first seems to be a precursor to language.
For example, a child who is old enough to make use of hands to reach objects understands the intent to make an effort to reach for an object, so when that child sees another person reaching for an object there is an understanding of intent behind that other person (an important part of theory-of-mind); i.e., the person wants to grab the object. Sounds that come from human mouths that accompany those kinds of actions form the basis for language because language builds on understanding of intent. That, once again, simply means that the child learner requires a body to make utterances, to enact intentful behaviors, and to experience them personally in order to recognize them in others.
§ MODELING CHALLENGES
If embodiment is necessary, does it matter what kind of body? Searching for an embodied setting led me to explore robots as a body for my computational model, but I had no experience with robots. I did not want to build one. I was, however, an interested consumer that wanted to put my incremental (i.e., processes at the word level) dialogue system onto a robot platform so the physical interaction could take place. I opted to purchase several Anki Cozmo robots because they were affordable, small, and had a Python SDK that gave me access to sensors and control over actions.
Learning how to use the robot took longer than it should have, requiring branching into the field of human-robot interaction (HRI) because we had to establish that the Cozmo robot was the right one for the job of first language learning, and that people would actually treat it like a child that did not have adult-level cognitive capabilities <cit.>. There were technical challenges in this regard; getting our spoken dialogue systems to play nicely with the robot SDK took a lot of effort, and we knew that the model of semantics that we had espoused wasn't quite right to work well with the robot's sensors. The importance of good technical infrastructure cannot be overstated, yet building it takes up a lot of time, and without it we cannot do productive research.
§.§ Objects in the World
How Children Learn the Meanings of Words by Paul Bloom <cit.> focuses somewhat more on the child who was ready to learn words and what those words might mean. Some words refer to objects, so what do children assume about those objects? Bloom mentioned Spelke objects, which obey four important principles:
* cohesion — objects hold together
* continuity — objects remain even if they disappear from view
* solidity — objects are solid
* contact — objects can interact with each other including people
This concept is important because children interact with objects and people before they begin referring to objects using words, which requires that they have some kind of understanding of those objects: at least how they feel and their potential affordances (e.g., a ball is roll-able, a box can hold objects), which is what is potentially grounded into when children begin to learn reference words. This puts reference (and affordances) in a central position to meaning, at least when children are learning words that refer to concrete, physical objects. Bloom states that if reference is central to meaning, then meaning is not determined by mental representations. This is an important point because that affects the model (whatever a mental representation is).
Similar to Bloom, <cit.> looked at the literature on early word learning in children. Children learn words slowly; if children could learn words quickly, we would not see a strong correlation between how much parents talk to their young children and a child's vocabulary scores, but we do (nod to Tomasello). There is such a thing as fast-mapping in older (though still young) children, but the initial words are not so easy.
Production (i.e., how a word is used / uttered) is the ultimate demonstration that a child has learned a word. Interaction requires speech, and speech unfolds phonetically over time, so listeners must interpret words incrementally, one syllable at a time. <cit.> is an entire book on this subject, with the thesis that the give-and-take of interaction, and the timing of that give-and-take, are foundational before other cognitive development can take place. Not just spoken language: two objects can't occupy the same space, so there is a give-and-take in the use of spaces, and a give-and-take in the use of things like speech as a communication medium, because we can only attend to one voice at a time, and because speech is manifested as compressed air within a specific space, so only one person can talk at a time if anyone is to be understood. This is either a handy thing because human attention is limited to one thing at a time, or it might actually be one of the things that limits human attention to one thing at a time.
Relatedly, one important point that is the main thesis of <cit.> is that our cognitive functions are housed in a body that lives in time and a kind of space with specific degrees of freedom (e.g., three axes of movement are allowed in that space) and our cognitive machinery builds its understanding (and as Johnson and Lakoff observed, metaphors <cit.>) upon the spatial foundation of language and cognition.
Imagine communicating without prepositions, or the information encoded in action verbs that connote activities that occur in time and space. This is where mothers shine: mothers take note of the space-time constraints then have simulated dialogues with their babies; when a child doesn't respond to a turn, the mother still allows the duration of what might have been a turn to elapse before taking the dialogue floor again. Mothers' high responsivity facilitates their infants' cognitive development. This is the literal cradle of social adaptation; cognition and cognitive development are inseparable from social adaptation—but it must be interactive in that cognitive beings are participating in the social interaction.
Digging deeper into child psychology, <cit.> explain a few important things. First, parents repeat what children say, they don't overlap their speech with children (i.e., the dialogue has very easy-to-distinguish turns), the speech is simplified with primary content at the end of what is being said, and parents repeat what children mean by rephrasing in a grammatically correct way. Thus parents assume the child has an egocentric frame of reference of the world (i.e., they can only take on their own frame of reference—they haven't discovered that others have their own frame of reference). Parents keep a level of complexity just ahead of the child, which gives the child enough novelty, thereby holding attention and supporting learning.
Taken together, physical, co-located interaction between parent and child is key. Children are motivated to interact, and caregivers assume an egocentric frame of reference for the children, meaning that parents don't refer to the objects, they often name the objects that the children are already attending. Learning that was helpful for me because those are some of the parameters that need to be in place for a computational model:
* it must be in an interactive setting
* the learner should probably be embodied (at the very least it should have sensory input)
* because of the ego-centricity we could assume that when objects are referred, it is because they are already salient to the child
Which computational model can fulfill those requirements?
§.§ Deep Learning and Transformer Language Models
Deep neural networks are the mainstay of most NLP tasks, and the latest architectures of the time led to a new kind of language model that dramatically altered everything. <cit.> introduced the attention mechanism, and <cit.> made attention and transformer architectures work in NLP, yielding a pre-trained language model called BERT that could be fine-tuned to do nearly anything. The caveat was that it was trained on large amounts of text. Broad-sweeping claims followed: BERT and more powerful derivatives were the basis for artificial general intelligence, etc., etc. That caused me to raise an eyebrow because throwing text from books and websites at a model and using a learning regime of guess-the-word within a sentence wasn't anywhere close to how children learn language, if all of those books I had been reading about child language learning (and my own experience with my children) were to be believed.
But do we really need to be concerned with mimicking exactly how humans learn language? After all, airplanes fly without flapping their wings. Two responses to that: first, the reason deep learning works so well is because it is bio-inspired, so there is something potentially useful about trying to mimic biological processes. Second, language is an ability that is so uniquely human that understanding it means understanding how humans acquire and use language.
Thankfully, I wasn't the only one who had reservations about BERT and derivatives thereof. <cit.> highlights some of the important reservations that many in the field have with assigning so much meaning and understanding to BERT-like transformer language models. Others have followed with their own skepticisms since phrases like large language models like ChatGPT became part of everyday vernacular.
§ MEANING AND EMBODIMENT
§.§ What is Context?
What possibly annoyed me most in my investigations was the claim that the language models were following a Wittgensteinian view of meaning in language: that meaning is derived from how a word is used in a context. What context? The assumption is lexical context; i.e., words used in the textual context of other words. But there is also physical context, and I believe that is more likely what Wittgenstein meant. I picked up his Philosophical Investigations <cit.> in 2019 and, what luck, our library had it in English and its original German. I read both in tandem, and while I found the translation reliable (mostly), it bothered me that the accepted interpretation of Wittgenstein's stance on language meaning was text-centric, or at least that context only meant what was spoken or written previously. This was late Wittgenstein: earlier he thought he had settled language as a more formal system, but then he spent time with children and (I conjecture) realized that the child mind is not the same as the adult mind.
More evidence that he meant that meaning comes from physical context: “There is a gulf between an order and its execution. It has to be filled by the act of understanding" (1.431) and not disconnected from the body (1.339). I interpret this to mean that meaning requires action, or a body to act in, because meaning is grounded in bodily movement. The word throw, for example, isn't just an idea and it's not just something we see someone else do; we have muscle memory of throwing that is part of the meaning of throw. He also brings up color and shape (1.72-74), that words refer to objects which themselves have affordances (1.11), and mentions that language use is first in reference to deictic (i.e., pointing) gestures. In other words, at first, language is grounded in the physical world. Only after a conceptual foundation of concrete concepts do we get to abstract language games (1.270+) and thinking about thought (1.428); i.e., we use language to construct meanings of abstract words and abstract thought only after concrete scaffolding. In contrast, language models focus only on the last bit: they use a language game of “guess the word that I randomly covered" to get at abstract thought, but in a model trained only on text, all words are treated as if they were abstract. This idea was hard to convey in some of my (rejected) papers. Wittgenstein did explore how words come to arrive at a shared meaning between speakers who don't have observable thoughts, which is what language games are for, an observation explored deeply in <cit.>.
In any case, language models are here, making an enormous splash and winning all of the benchmarks. If you aren't using language models in your research then at least one reviewer will use that as a justification for rejecting your paper. That's not to say that transformer language models don't have merit—they really do—but try using one out of the box on a robot, show the robot an apple, then ask if the apple is red.[Though there is now a trend in multimodal language models that at least bring the visual modality into language, see <cit.> for a review.] The language model doesn't know anything about redness; it only knows that red is a color and might be able to list some objects that are typically red. That is changing with visual and other multimodal language models, but as observed by <cit.>, the language “learning" progression made in the NLP community—starting with transformer language models and then working towards more embodied notions of meaning—is the opposite direction of how children learn language. Clearly there is top-down processing that happens in cognitive processes, and it is in play early on in a child's life, but large language models were completely lacking anything bottom-up from the physical world.
§.§ Embodied Cognition
Johnson, along with George Lakoff, was an early proponent of embodied cognition and carries that research forward in his later book <cit.>. <cit.> puts language and bodies together. Both make a strong claim that our bodies being unique and distinct from other bodies is what allows interaction to take place, putting language at the social level, between linguistic bodies. Moreover, the categorical gap between sensorimotor life and the life of language is not only big, it is largely uncharted scientifically.
We cannot separate bodies from what they do, making people with bodies agents (in that they act), where agency is the active regulation of tensions between different negative tendencies; the actions of the agent are guided by positive norms that emerge dialectically out of opposing negative ones. Di Paolo of course mentions reference and social interaction, but the main thrust is that without the precarious materiality of bodies, there would be no meaning and no minds (p.110).
The idea that bodies are important to meaning is not new. <cit.> predicted what the world of AI might look like in the future (i.e., today, 20 years after its publication) and embodiment was not out of the equation. Brooks mentions Kismet, a simple robot that could respond to stimuli in ways that humans interpreted as somewhat intelligible. But, as the author admits, what Kismet cannot do is actually understand what is said to it. Nor can it say anything meaningful, and it turns out that neither of these restrictions seems to be much of an impediment to good conversation (sorry, chatbots).
Moreover, according to Brooks, researchers are operating in an underconstrained environment, and as they follow up interesting research ideas, they are tempted—and succumb to the temptation—to make their abstract world more interesting for their research ideas, rather than being faithful to the reality of the physical world. This is exactly the issue I take with language models that are perfectly satisfied with deriving meaning through abstract text, partly because such datasets are easier to come by than the painful collection of often stilted spoken dialogues accompanied by a recording of the physical environment in which the dialogue took place.
Embodied cognition is not without its critics, and there are plenty of theories of cognition that don't require a body. <cit.> in Meaning in the Brain takes a step back and looks at meaning from a different perspective: meaning is not a given, but rather the result of a constructive process that uses knowledge to make sense of sensory signals. So there is sense information, but the mechanisms in the brain aren't just reading sensory input in and finding patterns; rather, the brain is actively trying to find meaning. That's partly because predicting possible pathways of a conversation is a fundamental part of what the brain is doing during conversation, and that prediction often drives the meaning of a given situation, linguistic or not. Embodied perceptual activations are not required for representing an input's meaning, which is what makes language so ultimately useful. That of course only counts at the adult level, but each language learner needs to arrive at that point individually. Baggio does take that into consideration, noting that first words, in particular common nouns and their meanings, are learned in the first year of life, in social contexts where coordination between the infant and caregiver is generally the primary goal of interaction.
In the first two to three years of life, children learn largely by observing or imitating adults. Furthermore, considering the world outside just a single brain, Baggio quotes Miller's Law: In order to understand what another person is saying, you must assume it is true and try to imagine what it could be true of (Grice's maxims apply here, mostly the maxim of quality). Children must do that or they might not learn to speak at all. They only learn later about lies and manipulation—unfortunate aspects of the human condition—but no one would learn language if children assumed a-priori that nothing was true.
§.§ Which Body?
If embodiment is required for holistic computational modeling of language, then which body? Virtual agents offer a kind of body that could be an important stepping stone. They can be made to look human, which may also be important. The main drawback for me in my quest for holistic semantics was the fact that virtual agents exist in a virtual world. What is required is a body that can enact in the world that humans share with each other. That left robots.
<cit.> and more recently <cit.> bring some clarity to the stance that embodiment is crucial to cognitive development of minds. Like humans, robots are a kind of body that don't just observe the world; a robot is an autonomous physical device, of any shape or form, that can sense and perform actions in its working environment <cit.> (p.20). That means that the robot has to be able to act, at least to some degree, where it is physically located. Humans have the same limitation, though of course humans can control things remotely using technology, but our own limbs are limited to what they can reach here and now.
<cit.> makes the case that the Turing Test is not a proper test of intelligence because words and concepts, including the most abstract, must have their meanings ultimately grounded in sensory-motor experience. That's not a reductionist account that all meaning is eventually grounded; it's more of an account of a proper progression: one cannot come to learn the meaning (connotation) of abstract ideas like democracy without the vocabulary required to define democracy, and so on until one reaches the point where words are not learned by other words. Rather, they are concrete words that denote physical objects or object attributes. <cit.> showed how recursively considering words that define other words in a dictionary eventually leads to a core subset of words upon which all other words are defined.
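The dictionary observation lends itself to a small, self-contained illustration. The sketch below is not the cited authors' algorithm—the toy dictionary and seed vocabulary are invented—but it shows the basic point: chasing definitions only reaches words whose definitions eventually bottom out in an already-grounded core, while circularly defined words reveal exactly where that core must be grounded outside of language.

    # Toy illustration: learn any word whose definition uses only known words.
    # Words whose definitions are circular, and not anchored in the grounded seed,
    # can never be reached through definitions alone.
    toy_dictionary = {
        "dog":        ["animal", "small"],
        "animal":     ["thing", "alive"],
        "democracy":  ["government", "people"],
        "government": ["democracy", "rule"],
        "rule":       ["people", "decide"],
    }
    grounded = {"thing", "alive", "small", "people", "decide"}  # assumed concrete seed

    learned, changed = set(grounded), True
    while changed:
        changed = False
        for word, definition in toy_dictionary.items():
            if word not in learned and all(w in learned for w in definition):
                learned.add(word)
                changed = True

    print("learned via definitions:", sorted(learned - grounded))
    print("never grounded:", sorted(set(toy_dictionary) - learned))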
Modern robotics has shown how important embodiment is <cit.>. Without a sensory-rich body, perception (as we know it) is impossible. And enaction (i.e., acting intentfully and not just sensing) is also vital—our actions are entangled with our thoughts just as much as perception. The life process, the life cycle of the individual, cannot be separated from embodied cognition. This is the difference between biological brains and computer brains. Indeed, <cit.>, required reading for anyone interested in first language acquisition, mentions that children seem to act out things as they are learning words, like opening and closing doors as they learn words open and close. Transformer models definitely don't do that, resulting in a meaningful lack of meaning.
§ EMOTION AND LANGUAGE
Though not strictly a book about child language development, <cit.> reported a longitudinal study of a group of people across decades to track their development from birth to adulthood that gives a broader picture of how humans develop in an individual, familial, and societal context. One of the main themes of the book is behavior (since behavior is something that can be observed), and what behavior means to the organization of an individual. How does this relate to language?
Central aspects of individual organization originate in the organization of early relationships. Language is part of that organization, since it is a method of communication that maintains, fosters, or harms those relationships. Another theme—the main theoretical thrust of the book—is that organization is the fundamental feature of behavior; i.e., language is part of the organizational structure itself. Organization is revealed in the interplay of emotion, cognition, and social behavior; development is defined by changes in the organization of behavior over time, and the organization of behavior is central to defining individual differences.
If the development of an individual human means that they are organizing their behaviors (emotion, cognition, and social behavior, and the interplay among them), then language development—which is itself an organizing behavior—plays a central role in the organization of emotion, cognition, and social behavior.
The idea that language plays a role in the organization of social behavior was clearly laid out in <cit.>, along with its accompanying idea that social behavior in turn plays a role in the organization of language because people have to coordinate what they mean when they speak with each other. Furthermore, that cognition plays a role in language and that language plays a role in cognition is well-established—language and cognition are often considered one and the same. But what about emotion? If emotion plays a role in the organization of behavior, and if emotion has a tight interplay with cognition and social behavior, what does emotion have to do with language?
Like many, I considered emotion to be part of the human experience, but clearly separate and distinct from cognition.[This is perhaps in part due to my affinity for Commander Data on Star Trek: The Next Generation who was a conscious and highly intelligent android, yet emotionless.] In fact, emotion was in my view often a hindrance to true linguistic understanding because emotion colors understanding in potentially the “wrong" way. The more we could separate emotion from meaning the better. That researchers were trying to model the ability to recover emotional content from text (e.g., a short post on a social media site) did seem useful, but the goal there was utilitarian, not to uncover the meaning of language.
My stance on emotion began to change when I took on a master's student, David, to whom I handed the Cozmo robot and tasked him with putting everything together we knew about language, meaning, spoken dialogue, and placing the robot in a setting where it could learn language from people without knowing any language a priori. After considering the task, he asked a question I had not considered (students tend to do that): “If we bring in people to talk to Cozmo, why would they care to help it learn language?" I don't think I grasped the question fully at the time. It partially meant that the science we were trying to advance was different from other science in that we weren't just observing people behaving, we were asking them to behave in a way such that they might help this little robot learn some words; i.e., we were asking them to be caregivers for an hour. David was concerned that paying participants for an hour of their time wasn't enough because there was no connection between them and the robot to care that the robot actually learned anything.
All of the literature I had read up until that point about what mothers do to foster development, particularly language development, backed up that concern. The robot had no mother. What we possibly lacked from the participants was “buy-in" to the needs of the robot to learn language. One way to potentially convince people to buy in was to make the robot display behaviors that would motivate the participants to buy in, in a way that capitalized on a general human decency to help others.[The ethical implications of this were always a concern to us.] But the displays had to be age-appropriate for the robot; it was supposed to be like a pre-linguistic child, after all, and what kinds of behaviors can pre-linguistic children engage in to capitalize on the decency of others to help? Well, children smile and they cry—they display emotion. David did the due diligence in the relevant literature and found that a number of important behaviors had emotional underpinnings that could help facilitate language learning; for example, confusion and curiosity.
That put us in a difficult position that we had been in before with human perceptions of the robot's age and cognitive level: what behaviors could we make the robot do in order to make people think that the robot was displaying some kind of emotional state? With Cozmo we struck gold because Cozmo had nearly 1,000 short, pre-defined behaviors that we could easily invoke and some of them were designed to have emotion content (e.g., the robot smiles, makes a “happy" noise, and moves its lift was meant to display happiness). David painstakingly video/audio-recorded all of the behaviors and put the recordings on a crowd-sourcing website, asking people to rate the behaviors for their emotional qualities. That work led to a model of emotion recognition, not from humans (!) but inferring what emotion people would attribute to a robot behavior based on the behavior itself (i.e., the movements, face, and sounds from the robot). That led to <cit.>, which was only the beginning of what we thought was a temporary, minor detour down the path of emotion and what it might have to do with language.
§.§ Concrete Affect, Abstract Emotion
My original hypothesis about emotion and language was that, because emotion exists and is used to display information about the state of a child before language is learned, emotion must be something that, like other perceptual modalities, language grounds into (i.e., the modality itself is part of the meaning) as language is learned. To find out more about emotion in general (not just how it relates to language), I picked up <cit.>, a dense but thorough read, and had to step back—way back—from what I thought I understood about emotion. That led me to read a few papers on the subject of language and emotion with a more open mind; these were neuroscience and psychology papers (the NLP community was only interested in how to infer the emotion of the writer of a piece of text, not in how it relates to meaning).
Two papers, both originating from Gabriella Vigliocco, really changed my understanding because they made a strong case that abstract words are more directly tied to emotion than concrete words <cit.>. This agreed, it seemed, with <cit.>, which synthesizes the latest emotion research. At the very least, I came to understand the difference between emotion and affect (most people use the two terms interchangeably): emotion is tied to the linguistic and cognitive system, making emotions to a large degree socially constructed, whereas affect is more basic, grounded in embodiment.
§.§ Emotion and Cognition
A Child's Path to Spoken Language by John L. Locke <cit.> makes some important arguments vis-à-vis cognition and emotion. As has been noted by others (some of them citing Locke's work), access to the vocalizations of one's species is simply not enough. Furthermore, an inclination to imitate and all the good communicative intentions in the world are insufficient for being a speaker of a language. Rather, the availability of appropriately interactive tutors is part of the story. That means putting an agent (or computational model) in a place where it can observe language, be it text or even referring expressions made to visually present objects, does not bring the child to language capabilities as much as participatory interaction. But interaction doesn't take place with just anyone—language is developmentally additive in the sense that the multiplicity of cues that trip off phonetic categories are piled on top of the prosodic, affective, and speaker-identifying cues that form the infralinguistic (i.e., non-linguistic) core of our vocal messages. It seems that infants need to know and have experience with the identity and intentions of those who are speaking to them. This could be because mothers imitate children 90% of the time, not the other way around.
Reinforcing some of the themes mentioned above, Locke further argues that language development proceeds from the general to the specific, and complex structures evolve by differentiation of a larger entity into smaller parts or functions (words to phonemes, phrases with prosody to words). From its earliest opportunity, the infant seeks out the particular kinds of stimulation that it enjoys and that its brain may need in order to develop maximally. Young humans express more interest in the eyes than in any other region of the face, and it seems that the human infant is largely preadapted to indexical and affective communication. Even little monkeys and apes do not “while away the hours in idle vocalization;" quite the contrary, but little humans babble.
An anecdote illustrates this. When my daughter was learning to speak, her word for all non-flying animals was cow because we lived near a farm and she saw cows a lot. Only later did she use size to distinguish between big animals and small ones, thus the concept dog emerged for the latter category. As she learned more words for more specific species, she picked up on the details that distinguished them.
Studies reveal that it is not just the sound of speech that sets infants to vocalizing or reinforces them for doing so: the person doing the speaking must be physically present and it may help if the speaker is visibly looking at the child; vocal imitation may occur more commonly when the baby can see the person who is talking or see a person while there is talking. It's worth noting research that sorts children into two cognitive camps: some children are more referential so their first words largely refer to physical objects, while other children are more expressive so their first words are more egocentric about their own feelings and needs.
In <cit.>, an entire chapter is dedicated to emotion and language development. Effectance (i.e., motivation to act and interact) and affect play several major roles in human communication. Certainly they expand the infant's capacity for intelligent behavior by pushing it to explore and to interact with the people and objects in its environment. Affective displays provide parents with a basis for social responding and cues they may use in adjusting the psychological and physical care of their infant. The experience of emotion fills infants with energies that are dissipated by behaviors (such as squealing), which are by their very nature communicative (p.328). Piaget said that affect plays an essential role in the functioning of intelligence. Without affect there would be no interest, no need, no motivation; and, consequently, questions or problems would never be posed, and there would be no intelligence. Affectivity is a necessary condition in the constitution of intelligence.
Locke also cites Sroufe and Waters, who worked early on in the longitudinal study mentioned above <cit.>, where they note that cognitive advances “promote exploration, social development, and the differentiation of affect; and affective-social growth leads cognitive development [...] neither the cognitive nor the affective system can be considered dominant or more basic than the other; they are inseparable manifestations of the same integrated process [...] It is as valid to say that cognition is in the service of affect as to say that affect reflects cognitive processes." Moreover, in the real speech of sophisticated speakers, where both linguistic content and vocal affect are present, one type of cue does not preempt the other, and for speech to work this must be the case.
Listeners must know both what the speaker is saying and what he intends by saying it. Listeners duplexly pick up information about the linguistic content and the speaker's affect because the cues to these things are of different sorts and are processed by different brain mechanisms. Thus, according to Locke, the meaning of an utterance is in the linguistic content, but the intent of the speaker who made the utterance is in the affect and emotion. In fact, children are adept at reading the intents of others via affect and emotion before they can even speak or really understand words.
§.§ Modeling Emotion
Taken together, the above discussion means that the separation of language from emotion in computational models is going to lead to something that is only an approximation of what a language model should encode, if any claim is to be made that a model has any degree of semantic meaning. However, emotion is not just another modality like vision through a camera or haptic sensations through a robotic hand; emotion is communicative on its own, albeit with limited (but important) social signals, pre-linguistic in that it helps scaffold the language learning process especially early on, and emotion is later intertwined with cognitive development and abstract linguistic meaning.
How, then, could we represent affect and/or emotion computationally? Just as we cannot derive meaning from text alone, we cannot derive emotion from text alone either. Pulvermüller <cit.> I think gives us a hint: the only way we can arrive at a representation of emotion that we could possibly make use of computationally is if we tie emotion to behavior, which is how affect and emotion are signalled between humans. That means we need something to produce that behavior—we've already established that embodiment and interaction are crucial, and that robots are the only computational devices that fulfill the requirements of embodiment because they can act in the physical world. Thus we need humans to watch robots and record their appraisals of robot behaviors for emotional content, then link the behaviors to the emotion. That's only an approximation of emotion through the back door, but it's a start.
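A schematic of that back-door representation might look like the following; the behavior names and numbers are invented placeholders (the actual study crowd-sourced appraisals of Cozmo's pre-defined behaviors), so this is only a sketch of the data structure, not the published model.

    # Schematic: link robot behaviors to the affect humans attribute to them.
    # Behavior IDs and ratings are invented placeholders, not data from the study.
    crowd_ratings = {
        # behavior_id: (valence, arousal) appraisals collected from human raters
        "lift_up_happy_chirp": [(0.8, 0.6), (0.7, 0.5), (0.9, 0.7)],
        "slump_low_tone":      [(-0.6, 0.2), (-0.5, 0.3)],
    }

    def affect_of(behavior_id):
        ratings = crowd_ratings[behavior_id]
        valence = sum(v for v, _ in ratings) / len(ratings)
        arousal = sum(a for _, a in ratings) / len(ratings)
        return {"valence": valence, "arousal": arousal}

    print(affect_of("lift_up_happy_chirp"))  # appraised as positive, mildly arousing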
§ MEANING AND THE BRAIN
Explaining what was missing from computational models of language (like large language models) was easy when I explained the difference between concrete and abstract words <cit.>. It's no dichotomy either; some concepts are very concrete in that they exist physically (chair), a bit less concrete in that they exist physically but also have abstract properties that make them what they are (farm, city), and some words are abstract in that there is no physical denotation where the meanings are built upon meanings of other concepts (democracy).
From my readings above, learning that concreteness and abstractness play with emotion in different ways was additional evidence that the concreteness/abstractness dimension of language was something worth my attention. Neuroscience literature further showed that abstract and concrete concepts have different representational frameworks in the brain <cit.>. However, I found the neuroscience literature difficult to digest because of the terminology.
A book by Iain McGilchrist helped me grasp some of the neuroscience terminology <cit.>, and I found that it fed my obsession with the concrete-abstract dimension of language. The main thesis is that the left and right brain hemispheres, while very similar in function, have some notable differences with many important implications. Of course, I was primarily interested in how those differences might affect the language acquisition process. The following largely deal with the concrete-abstract nature of the hemispheres and the role emotion plays:
* The left hemisphere is the hemisphere of abstraction, which, as the word itself tells us, is the process of wresting things from their context. Thus the right hemisphere does have a vocabulary: it certainly has a lexicon of concrete nouns and imageable words which it shares with the left hemisphere; but, more than that, perceptual links between words are made primarily by the right hemisphere (p.50). In general, then, the left hemisphere’s tendency is to classify, where the right hemisphere’s is to identify individuals (p.52). It has been suggested that our concepts are determined by the language that we speak (the Sapir–Whorf hypothesis). However, this is no more than a half or quarter truth. Children certainly often get the concept first and then quickly learn the word to describe it, which is the wrong way round from the Sapir–Whorf point of view. Moreover there is evidence that five-month-old babies have a concept, to do with tightness of fit, which they subsequently lose if their native language does not embody the same concept (p.110).
* ... the right hemisphere’s interest in language lies in all the things that help to take it beyond the limiting effects of denotation to connotation: it acknowledges the importance of ambiguity. It therefore is virtually silent, relatively shifting and uncertain, where the left hemisphere, by contrast, may be unreasonably, even stubbornly, convinced of its own correctness (p.80).
* `emotion binds together virtually every type of information the brain can encode... [it is] part of the glue that holds the whole system together’ (p.88; quoting Douglas Watt)
* To recapitulate, then: language originates as an embodied expression of emotion, that is communicated by one individual `inhabiting’ the body, and therefore the emotional world, of another; a bodily skill, further, that is acquired by each of us through imitation, by the emotional identification and intuitive harmonisation of the bodily states of the one who learns with the one from whom it is learnt; a skill moreover that originates in the brain as an analogue of bodily movement, and involves the same processes, and even the same brain areas, as certain highly expressive gestures, as well as involving neurones (mirror neurones) that are activated equally when we carry out an action and when we see another carry it out (so that in the process we can almost literally be said to share one another’s bodily experience and inhabit one another’s bodies) [...] which binds us together as physically embodied beings through a form of extended body language that is emotionally compelling across a large number of individuals within the group (p.122).
There are other excerpts from the book that relate to language, but these suffice for my purposes here. My primary takeaway is that the concrete-abstract dimension of language is one of the most fundamental aspects of language itself; certainly also of cognition and emotion. In fact, the neurological hardware upon which human thought takes place has, it seems, split the hemispheres to capitalize on the interplay between concreteness and abstractness. Pre-linguistic indeed. As for computational models, large language models like ChatGPT that are trained only on text are purely left-brain models.
§.§ Implications
Concreteness and abstractness are well studied in some fields, but taken for granted in NLP research. The concreteness-abstractness divide can help us understand meaning and the assumptions we are making in our models (e.g., that large language models trained on text are purely abstract). Moreover, cognition (which is often equated with language and vice versa) doesn't stand on its own: cognition needs emotion.
Incorporating emotion into the language learning process adds yet another challenging requirement for putting a computational agent into a setting similar to the one children are in when they learn their first words. The requirements gathered so far, sketched as a minimal interface after the list, are that the child-like agent must:
* be embodied—the agent must be able to act in its environment and potentially manipulate objects, and the language model needs access to internal embodied states and to sensory modalities of the external world
* interact using speech—the agent must use speech as the primary modality for acquiring language, partly because prosody helps carry affective information, and it must be motivated to interact with others
* be physically co-located with language speakers—the agent must be able to visually and auditorially perceive the person(s) it is learning language from, partly because physical behaviors carry emotional information
* distinguish concrete and abstract concepts—the agent must be able to learn concrete concepts that denote physical, individual things, but also be able to use those concrete concepts abstractly and to learn abstract concepts from existing knowledge
* use affect and emotion—the agent must use affective displays to facilitate language learning in the early stages; language must ground into affect, and emotion concepts are acquired in lock-step with cognitive and abstract language development
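One way to read those requirements as a (purely hypothetical) agent interface is sketched below; every name is illustrative and nothing here is an existing API.

    # Hypothetical interface collecting the requirements listed above.
    from abc import ABC, abstractmethod

    class ChildLikeAgent(ABC):
        @abstractmethod
        def sense(self):
            """Return sensory input: vision, audio (speech and prosody),
            proprioception, and the agent's internal affective state."""

        @abstractmethod
        def act(self, action):
            """Physically act in the environment shared with the caregiver."""

        @abstractmethod
        def speak(self, utterance, prosody=None):
            """Produce speech, optionally with affective prosody."""

        @abstractmethod
        def display_affect(self, valence, arousal):
            """Show an affective signal (e.g., confusion, curiosity) to elicit
            caregiver buy-in and scaffold early language learning."""

        @abstractmethod
        def ground_word(self, word, percept):
            """Attach a concrete word to a physically present referent; abstract
            words are built later on top of these grounded concepts."""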
There are certainly other aspects that I did not explore or find in my years-long search for relevant literature; for example, theory-of-mind needs to be modeled, and recent work is exploring it more deeply, but the entire notion of theory-of-mind needs to be ironed out to the degree that it could be modeled. Likewise, play is an important part of cognitive development, in part because it gives children a chance to enact meaning with their bodies, to make mistakes with language as they are learning it, and to discover affordances of objects in the world. But I think that theory-of-mind, affordance, and play are, like language, intertwined with emotion.
Acknowledgements I would like to thank Patty Kennington-Rooks and Vanessa Christensen for helpful and detailed feedback, as well as fruitful discussion. I would also like to thank members of the Speech, Language, and Interactive Machines research group at Boise State University for helping to fine-tune some of the ideas.
|
http://arxiv.org/abs/2307.04134v1 | 20230709092249 | Stimulated Brillouin scattering at 1 nm-1 wavevector by extreme ultraviolet transient gratings | [
"Danny Fainozzi",
"Laura Foglia",
"Riccardo Mincigrucci",
"Nupur N. Khatu",
"Ettore Paltanin",
"Claudio Masciovecchio",
"Filippo Bencivenga"
] | physics.optics | [
"physics.optics",
"cond-mat.mes-hall"
] |
APS/123-QED
[email protected]
Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy.
Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy.
Department of Molecular Sciences and Nanosystems, Ca’ Foscari University of Venice, Venice, Italy.
European XFEL, Holzkoppel 4, 22869 Schenefeld, Germany
Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy.
Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy.
Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy.
Elettra-Sincrotrone Trieste, SS 14 km 163,5 in AREA Science Park, 34149, Trieste, Italy.
We crossed two femtosecond extreme ultraviolet (EUV) pulses in a β-Ga_2O_3 (001) single crystal to create transient gratings (TG) of light intensity with sub-100 nm spatial periodicity. The EUV TG excitation launches phonon modes, whose dynamics were revealed via the backward diffraction of a third, time-delayed, EUV probe pulse. In addition to the modes typically observed in this kind of experiment, the phase-matching condition imposed by the TG, combined with the sharp penetration depth of the EUV excitation pulses, made it possible to generate and detect phonons with a wavevector tangibly larger (≈ 1 nm^-1) than that of the EUV TG, via stimulated Brillouin back-scattering (SBBS) of the EUV probe. While SBBS of an optical probe was reported in previous EUV TG experiments, the extension of SBBS to short-wavelength radiation can be used as a contact-less experimental tool for filling the gap between the wavevector range accessible through inelastic hard X-ray and thermal neutron scattering techniques, and the one accessible through Brillouin scattering of visible and UV light.
Stimulated Brillouin scattering at 1 nm^-1 wavevector
by extreme ultraviolet transient gratings
Filippo Bencivenga
August 12, 2023
================================================================================================
Studying thermal and vibrational dynamics in nanoscale materials is critical for advancing the technological applications of faster, more efficient and more compact nanoelectronic devices, such as smartphone and computer chips, as well as for thermal barrier
coatings <cit.>, heat-assisted magnetic recording <cit.>, nano-enhanced photovoltaics and thermoelectric energy conversion, to name a few. To achieve this, layer upon layer of very thin films are often used, with impurities added to tailor their function <cit.>. However, the complex structure of these materials makes it challenging to predict and characterise their thermoelastic properties.
Material properties such as elasticity, thermal conductivity and heat capacity are mostly determined by collective lattice dynamics that exhibit strong length-scale dependencies, which can drastically differ when the spatial dimensions reduce from macroscopic to microscopic scales, i.e., to sizes comparable with the characteristic length scales of nanostructures.
Over the years, an obstacle to the full description of thermoelastic responses at the 10s-of-nm length scale has been the lack of experimental techniques capable of accessing such a range <cit.> without the requirement of
modifying or physically touching the sample. This inherently introduces limitations in the experiment design and complicates data interpretation. Collective lattice dynamics in condensed matter at wavevector q > 1 nm^-1 can be measured by inelastic scattering of hard X-rays and thermal neutrons, while Brillouin scattering and optical transient grating (TG) can be used for q < 0.1 nm^-1. The intermediate range q = 0.1-1 nm^-1 is hardly accessible, despite efforts to expand the capabilities of Brillouin spectroscopy in the UV range <cit.> and to improve the performance of X-ray spectrometers <cit.>. In addition, these spectroscopic methods are inherently limited by the instrumental resolution when measuring narrow lines, i.e., long dynamics. This limitation does not affect time-domain techniques, such as picosecond ultrasonics and time-domain thermoreflectance. In these techniques, metal films or other nanostructures are fabricated on the sample for transducing an ultrafast optical excitation into a short-wavelength thermoelastic perturbation <cit.>. However, this intrinsically modifies the sample under investigation.
The advent of free-electron laser (FEL) sources has recently permitted the usage of extreme ultraviolet (EUV) pulses for extending the TG approach to shorter wavelengths, i.e. in the 10-100 nm range, enabling the excitation and probing of nanoscale thermoelasticity in a contact-less fashion <cit.>. The EUV TG approach has been pioneered at the FERMI FEL (Trieste, Italy) with the dedicated endstation TIMER <cit.>, capable of incisively and selectively studying bulk and surface phonons <cit.>, thermal transport kinetics <cit.> and magnetic dynamics <cit.>.
In this paper, we exploit EUV TG to probe acoustic phonons in β-Ga_2O_3. In particular, by taking advantage of the phase-matching conditions imposed by the nanoscale EUV TG, we demonstrate the possibility of detecting stimulated Brillouin back-scattering (SBBS) of an EUV pulse at 13.3 nm wavelength. This enabled us to probe the dynamics of phonon modes with a wavelength as short as ≈ 6 nm. The employed sample was an Mg-doped β-Ga_2O_3 (001)-oriented bulk crystal with monoclinic structure (space group C2/m), grown by the Czochralski method at the Leibniz-Institut für Kristallzüchtung <cit.>. The excellent surface quality and well-known elastic parameters made this sample well suited for the present EUV TG experiment.
TG is a third-order non-linear optical technique (four-wave-mixing), wherein two pulses of equal wavelength λ (referred to as pumps) are temporally and spatially
overlapped on the sample at a crossing angle of 2θ. The interference between these two pulses, assuming parallel polarization of the beams, induces a spatial modulation in the intensity of light. This modulation exhibits a periodicity Λ_TG=λ/(2sinθ); see Figs. <ref>a)-<ref>b). Such a patterned excitation acts as a transient diffraction grating for a third variably-delayed pulse (probe), with wavelength λ_pr, giving rise to a fourth pulse: the diffracted beam (signal).
The experiment was performed at the TIMER beamline at the FERMI FEL, which is described in detail elsewhere <cit.>. Two time-coincident ≈ 60 fs (FWHM) EUV pulses were crossed on a crystalline β-Ga_2O_3 (001) sample at the angle 2θ=27.6^∘ (set with 2% accuracy), generating a transient grating in the [100] direction. Two values of λ were used: 39.9 nm and 26.6 nm, resulting in corresponding grating periods of Λ_TG≈ 84 nm and ≈ 56 nm, respectively. In the following, we will refer to the 39.9 nm and 26.6 nm pump-related quantities with the superscripts ^39 and ^26, respectively. The probe pulse (≈ 40 fs FWHM) impinged on the sample with an angle of 4.6^∘ and λ_pr=13.3 nm (hereafter denoted as ^13). The backwards-diffracted signal beam was collected by an EUV mirror and detected by a CCD camera, as outlined in <cit.>. The beamline is designed to satisfy the TG phase matching conditions at the Bragg angle (i.e., θ_i = θ_o = sin^-1(λ_pr/2Λ_TG), where θ_i and θ_o are the incidence and diffraction angles of the probe beam, respectively) for λ=3λ_pr. However, since the excitation light is absorbed in a subsurface layer shorter than Λ_TG (the absorption lengths of the pumps are L_abs^39∼ 12.9 nm and L_abs^26∼ 15.9 nm), the phase matching conditions are relaxed. In this case only the wavevector component parallel to the sample surface (q_TG^39=2π/Λ^39_TG≈ 0.075 nm^-1, and q_TG^26≈ 0.113 nm^-1) is well-defined <cit.>, while the component perpendicular to the surface (q_z) results in a broad spectrum; see Fig. <ref>. Therefore, acoustic waves with a well-defined wavevector equal to q_TG (parallel to the surface) are launched. In contrast, waves with a broad spectrum in q_z are generated along the z-direction. However, as shown in Ref. <cit.>, only two values of q_z satisfy the TG phase-matching conditions, i.e., q_z=0, which yields a signal in the forward direction, and
q_z=2k√(1-q_TG^2/4k^2)
yielding a back-scattered signal, that encodes the dynamics of SBBS modes. Here, k=2π n /λ_pr is the wavevector of the probe in the medium, where n is the refractive index at λ_pr. Thus, the modulus of the acoustic wavevector for the SBBS signal is:
q_SBBS=√(q_z^2+q_TG^2)=2k=4π n/λ_pr
which is independent of q_TG and is collinear with the backward diffracted signal from the TG.
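A compact numeric check of these relations, using the values quoted above, is sketched below. The EUV refractive index n of β-Ga_2O_3 at 13.3 nm is not given in the text, so a value n ≈ 0.95 is assumed here purely for illustration.

    # Sanity check of Lambda_TG = lambda/(2 sin(theta)) and q_SBBS = 4*pi*n/lambda_pr,
    # using the pump/probe parameters quoted in the text. The EUV index n ~ 0.95 of
    # beta-Ga2O3 at 13.3 nm is an assumed value, not one given in the text.
    from math import sin, radians, pi, sqrt, degrees, atan2

    two_theta = radians(27.6)                      # crossing angle of the pumps
    for lam_nm in (39.9, 26.6):
        print(f"lambda = {lam_nm} nm -> Lambda_TG ~ "
              f"{lam_nm / (2 * sin(two_theta / 2)):.0f} nm")   # ~84 nm and ~56 nm

    n, lam_pr = 0.95, 13.3                         # assumed index, probe wavelength (nm)
    q_sbbs = 4 * pi * n / lam_pr                   # ~0.9 nm^-1, the ~1 nm^-1 scale
    for q_tg in (0.075, 0.113):                    # EUV TG wavevectors (nm^-1)
        tilt = degrees(atan2(q_tg, sqrt(q_sbbs**2 - q_tg**2)))
        print(f"q_TG = {q_tg} nm^-1 -> q_SBBS ~ {q_sbbs:.2f} nm^-1, tilt ~ {tilt:.1f} deg")
    # With this assumed n, the tilt angles come out close to the ~4.8 and ~7.2 deg
    # quoted further below for the two pump configurations.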
Therefore, under the current experimental conditions, the combination of the sharp penetration depth of the EUV TG pump in the material and the short-wavelength EUV probe enables the excitation and detection of phonons with q as large as q_SBBS≈ 1 nm^-1 (Fig. <ref>b). To further illustrate the excitation mechanism, Fig. <ref>c displays the EUV TG generated on the sample in the 26.6/13.3 configuration plotted against the (x,z) coordinates, taking into account the finite value of L^26_abs. We note that the modulation along x extends over a much larger range, comparable with the width (FWHM_x≈ 100s of μm) of the excitation pulses. This is what we usually call TG. Additionally, there is a steep gradient along z. Such a gradient launches acoustic waves in a broad range of Δ q (roughly extending up to ∼ 2π /L^pump_abs). This is represented in Fig. <ref>e as the Fourier transform (FT) along z of the EUV TG intensity profile shown in Fig. <ref>d. In an excitation scheme relying on a single EUV pump, there is no capability to selectively choose a specific phonon wavevector along the z-axis. However, in the current scenario, the phase matching condition imposed by the EUV TG (see Eq. <ref>) selects a specific phonon wavevector with q_SBBS∼ 1 nm^-1 from the wide range of available phonons. This is illustrated by the vertical segment in Fig. <ref>e.
We detected the EUV TG signal by varying the time delay (Δ t) between the EUV TG excitation and the probe pulse. Measurements were conducted at both long timescales (Figs. <ref>a and <ref>e) and short timescales (Figs. <ref>c and <ref>g). As expected, at long timescales the overall signal is characterized by a slow decay, which can be attributed to the thermal relaxation of the EUV TG, modulated by phonon oscillations <cit.>. After a few oscillations, these modulations become highly regular. For larger Δ t values, once the slow relaxation has decayed, double-frequency oscillations become visible, indicating the long-lived nature of this dominant mode <cit.>. Conversely, the irregular shape of the initial oscillations suggests the presence of additional dynamics that damp out after some 10s of ps. The EUV TG data obtained at short timescales (Figs. <ref>c and <ref>g) were sampled with finer steps and exhibit modulations at significantly higher frequencies. These higher-frequency modulations are compatible with the previously mentioned mixing between the SBBS signal and the backward diffracted signal from the EUV TG.
In order to quantitatively describe the waveforms at both long (blue line in Fig. <ref>a and <ref>e) and short timescale (black line in Fig. <ref>c and <ref>g) an initial fitting procedure was conducted using Eq. <ref>:
I(t) = |1/2[1+erf(Δ t/σ)] · A e^-Δ t/τ|^2,
where the erf function accounts for a sudden rise of the signal (with σ representing the width of the rise), followed by an exponential decay with a time constant τ. Subsequently, FTs were computed on the differences between the measured traces and their respective exponential fits. The obtained results are illustrated in Figs. <ref>b, <ref>d, <ref>f and <ref>h.
The FTs of the long timescale waveforms present a well-defined mode and its second harmonic, plus a weaker and spectrally broader feature. All frequencies in these FTs vary proportionally to q_TG, as depicted in Figs. <ref>b) and <ref>f). The presence of this broad feature confirms the existence of a damped mode, which predominantly affects the initial portion of the waveform, as already evident from the raw data.
To comprehensively describe the signal, the complete fitting procedure incorporated these two vibrational modes, specifically a damped sinusoidal term and an undamped sinusoidal term:
I(t) = |1/2[1+erf(Δ t/σ)] ·[A e^-Δ t/τ - A_SAW sin(2π ν_SAW Δ t + ϕ_SAW) - A_LA sin(2π ν_LA Δ t + ϕ_LA) e^-Δ t/τ_LA]|^2
The resulting best-fit curves are reported as black lines in Figs. <ref>a and <ref>e. All parameters and errors mentioned further below have been obtained using Eq. <ref>. The values obtained from the preliminary fitting of the EUV TG signal with Eq. <ref> and from the FTs were used as initial guesses for fitting the data with Eq. <ref>.
The results concerning the oscillation frequencies are shown in Fig. <ref>a. The undamped mode is compatible with a Surface Acoustic Wave (SAW), which exhibits a linear dispersion relation as a function of q_TG. From the slope of such a linear dispersion a value for the sound velocity of c_SAW^[100] = 3.15 ± 0.01 km/s is obtained. This value is close to the estimated velocity of 3.24 km/s, as evaluated by using the transverse acoustic (TA) phonon velocity c_TA^[100]=3.57 km/s <cit.> and the Poisson's ratio ν_p=0.2 <cit.> of β-Ga_2O_3 [100], through the relation c_SAW≈ c_TA· (0.862+1.14ν_p)/(1+ν_p) <cit.>. SAW modes represent long-lived coherent surface displacements characterized by mechanical energy confined to the surface. In the employed backward diffraction geometry, these modes are expected to be the dominant contribution to the EUV TG signal, as observed experimentally.
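A quick cross-check of the quoted estimate above (a sketch using only the material parameters given in the text):

    # Viktorov-type estimate c_SAW ~ c_TA*(0.862 + 1.14*nu)/(1 + nu) and the
    # corresponding SAW frequency at the larger grating period.
    from math import pi

    c_ta, nu_p = 3.57, 0.2                              # km/s and Poisson's ratio, [100]
    c_saw = c_ta * (0.862 + 1.14 * nu_p) / (1 + nu_p)   # ~3.24 km/s, as quoted
    f_saw_ghz = c_saw * 0.075 * 1e3 / (2 * pi)          # km/s * nm^-1 -> GHz, at q_TG^39
    print(f"c_SAW estimate ~ {c_saw:.2f} km/s, f_SAW(q_TG = 0.075/nm) ~ {f_saw_ghz:.0f} GHz")
    # The measured SAW velocity of 3.15 km/s puts the observed SAW frequencies in
    # the same few-tens-of-GHz range.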
The damped mode also presents a linear dispersion, with a velocity c_LA^[100]=5.97 ± 0.14 km/s, which is similar to the expected value (6.18 km/s) for longitudinal acoustic (LA) phonons <cit.>. Such marginal deviations between the expected and observed velocities may arise from factors such as slight misalignment of the sample relative to the [100] crystallographic direction, sample heating caused by the FEL, or the 10^∘ tilt in the (x,y)-plane necessary for collecting the backward diffracted signal <cit.>.
Surface-skimming LA modes and, more in general, bulk waves are expected in these types of TG experiments <cit.>, although they do not contribute significantly in the employed geometry and are often disregarded. Furthermore, at these q values bulk modes are not expected to show tangible damping in the probed Δ t range. However, EUV TG data indicate a quite fast decay time, i.e.: τ^39_LA = 22.5 ± 1.5 ps and τ^26_LA = 17.6 ± 1.5 ps, which is compatible with the broad feature observed in the FT (see Fig. <ref>b and <ref>f). The finite decay time can be explained by the fact that we are observing a thin region below the surface, with thickness ≈ L_abs^13∼ 26.3 nm < Λ_TG, and the excitation intensity steeply varies along the sample depth. Consequently, LA modes are strongly influenced by the surface and manifest as leaky waves, such as surface-skimming longitudinal waves, which rapidly transfer mechanical energy away from the subsurface region toward the bulk.
The FTs of the short timescale waveforms exhibit two peaks (Figs. <ref>d and <ref>h) located at considerably higher frequencies compared to SAW and LA modes. Furthermore, these peaks do not show dispersion vs q_TG, as shown in Fig. <ref>b. This behaviour is indeed expected from the SBBS of the EUV probe, since the phonon wavevector is given by q_SBBS and in this specific case the dependence on q_TG can be neglected (see Eq. <ref>). The absence of dispersion vs q_TG of phonon modes detected via SBBS does not imply that they do not exhibit dispersion; rather, it indicates that the changes in q_TG allowed under the specific experimental conditions were not sufficient to significantly alter q_SBBS. A more effective approach to modifying q_SBBS would be to vary λ_pr, as in this case, q_SBBS∝λ_pr^-1 (see Eq. <ref>).
On the other hand, the observed frequencies (ν_SBBS, as extracted from the FT) match the ones expected by considering the sound velocities of TA (c_TA^[001]=4.01 km/s) and LA (c_LA^[001]=7.55 km/s) modes along the relevant crystallographic direction <cit.>; see Fig. <ref>b. Indeed, the LA mode detected via SBBS propagates along q_SBBS, i.e., with a small tilt angle (ϕ^39=4.8^∘ and ϕ^26=7.2^∘) with respect to q_z, essentially towards the bulk of the sample ([001]). This is a different crystallographic direction with respect to the leaky LA mode detected at long timescales (see Figs. <ref>a and <ref>e), which essentially propagates beneath the surface ([100]) with wavevector q_TG≪ q_SBBS. However, since the employed setup did not allow us to precisely select crystallographic directions, such modes have to be regarded as quasi-LA and quasi-TA. It is worth mentioning that Brillouin back-scattering from quasi-TA modes can be observed in monoclinic crystals, exhibiting signal amplitudes (in the optical regime) comparable to those from quasi-LA modes <cit.>. However, while EUV Brillouin scattering reasonably relies on the same selection rules as in the optical regime, the signal amplitude may differ due to potential wavelength-dependent variations in the photoelastic constants. Most likely, the modes associated with larger density variations provide stronger signals, as the EUV refractive index (far from core-hole resonances) primarily depends on density <cit.>. Nevertheless, further experiments beyond the scope of this study are required to investigate these aspects.
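For reference, the expected SBBS frequencies follow from ν = c·q_SBBS/(2π); the sketch below uses the [001] sound velocities quoted above and the same assumed EUV index n ≈ 0.95 as in the earlier sketch (not a value given in the text).

    # Expected SBBS frequencies nu = c * q_SBBS / (2*pi) for the quasi-LA and
    # quasi-TA modes; n ~ 0.95 is an assumed EUV refractive index.
    from math import pi

    q_sbbs = 4 * pi * 0.95 / 13.3                               # nm^-1
    for label, c in (("quasi-LA", 7.55), ("quasi-TA", 4.01)):   # km/s along [001]
        nu_thz = c * q_sbbs / (2 * pi)                          # km/s * nm^-1 = THz
        print(f"{label}: ~{nu_thz:.2f} THz")
    # Roughly ~1.1 THz and ~0.6 THz, i.e. well above the SAW and leaky-LA
    # frequencies probed at the EUV TG wavevector itself.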
The combination of the sharp penetration depth of EUV excitation pulses and the phase-matching conditions imposed by the EUV TG permitted the detection of stimulated backscattered Brillouin oscillations with a wavevector as large as ≈ 1 nm^-1. This wavevector range overlaps with the lower limit of the wavevector range covered by inelastic scattering of hard X-rays and thermal neutrons. In this case, the limitations on q_SBBS and the SBBS signal come from the wavelength of the probe, rather than from the EUV TG periodicity (see Eq. <ref>). This limit can be straightforwardly overcome by using a shorter probe wavelength, which can be envisioned to extend all the way to the X-ray spectral range <cit.>. This would provide a longer penetration depth and an increased range in q_SBBS.
Furthermore, the described approach also allowed for the detection of high-frequency surface acoustic waves and longitudinal acoustic phonons propagating below the surface, without the need for nanofabrication and in a broad range of materials. In fact, unlike optical laser excitation, EUV photons are highly absorbed by virtually any material. The current setup at FERMI already makes it possible to conduct transient grating measurements at grating periods as short as 24 nm <cit.>, and a further extension down to approximately 10 nm is feasible, pushing the SAW frequency close to the THz region and q_SBBS above 1 nm^-1.
The authors thank Z. Galazka from Leibniz-Institut für Kristallzüchtung for providing the β-Ga_2O_3 (001) sample and Alexei Maznev (MIT, Boston) for useful discussions. E. P. acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860553.
|
http://arxiv.org/abs/2307.05762v1 | 20230711193940 | Approximating the Value of Energy-Parity Objectives in Simple Stochastic Games | [
"Mohan Dantam",
"Richard Mayr"
] | cs.GT | [
"cs.GT",
"math.PR",
"91A35, 91A15",
"G.3"
] |
School of Informatics, University of Edinburgh, UK
School of Informatics, University of Edinburgh, UK
Mohan Dantam and Richard Mayr
Computing methodologies → Stochastic games
Full version of a paper presented at MFCS 2023.
48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023), August 28 – September 1, 2023, Bordeaux, France. Editors: Jérôme Leroux, Sylvain Lombardy, and David Peleg.
Approximating the Value of Energy-Parity Objectives in Simple Stochastic Games
Richard Mayr
August 12, 2023
==============================================================================
We consider simple stochastic games with energy-parity objectives,
a combination of quantitative rewards with a qualitative parity condition.
The Maximizer tries to avoid running out of energy while simultaneously
satisfying a parity condition.
We present an algorithm to approximate the value of a given configuration
in 2-. Moreover, ε-optimal strategies for either player
require at most 2-𝖤𝖷𝖯·log(1/ε)
memory modes.
§ INTRODUCTION
Background.
Simple stochastic games (SSGs)
are 2-player turn-based perfect information stochastic games played on finite graphs.
They are also called competitive Markov decision processes <cit.>,
or 2½-player games <cit.>.
Introduced by Shapley <cit.> in 1953, they
have since played a central role in the solution of many problems, e.g.,
synthesis of reactive systems
<cit.>
and formal specification and verification
<cit.>.
Every state either belongs to one of the players (Maximizer or Minimizer)
or is a random state. In each round of the game the player who owns the current
state gets to choose the successor state along the game graph.
For random states the successor is chosen according to a predefined distribution.
Given a start state and strategies of Maximizer and Minimizer, this yields a
distribution over induced infinite plays.
We consider objectives that are measurable subsets of the set of possible plays, and the players
try to maximize (resp. minimize) the probability of the objective.
Many different objectives for SSGs have been studied in the literature.
Here we focus on parity, mean-payoff and energy objectives.
We assign numeric rewards to transitions and priorities
(aka colors), encoded by bounded non-negative numbers, to states.
A play satisfies the (min-even) parity objective iff
the minimal priority that appears infinitely often in a play is even.
It subsumes all ω-regular objectives, and in particular safety,
liveness, fairness, etc.
On finite SSGs, the parity objective can be seen as a special case of
the mean-payoff objective which requires the limit average reward per
transition along a play to be positive (or non-negative).
Mean-payoff objectives in SSGs
go back to a 1957 paper by Gillette <cit.> and have
been widely studied, due to their relevance for efficient control.
The energy objective <cit.> requires that the
accumulated reward at any time in a play stays above some finite threshold.
The intuition is that a controlled system has some finite initial energy level
that must never become depleted.
Since the accumulated reward is not bounded a-priori, this essentially turns a finite-state game into an infinite-state one.
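As a concrete illustration of the three objectives (a sketch, not taken from the paper), consider an ultimately periodic play given by a finite prefix of transition rewards followed by an infinitely repeated cycle, with the priorities of the states visited on the cycle listed separately. The energy threshold is fixed at 0 here, the min-even parity convention from the text is used, and all numbers are made up.

    # Evaluate parity, mean-payoff and energy objectives on a "lasso" play.
    from itertools import accumulate

    def min_even_parity(cycle_priorities):
        # Min-even parity: the smallest priority occurring infinitely often is even.
        return min(cycle_priorities) % 2 == 0

    def mean_payoff(cycle_rewards):
        # The limit average reward is determined by the repeated cycle.
        return sum(cycle_rewards) / len(cycle_rewards)

    def energy_ok(initial_energy, prefix_rewards, cycle_rewards):
        # Energy objective (threshold 0): the running total must always stay > 0.
        # For a lasso it suffices that the prefix plus one cycle traversal never
        # drops to 0 and that the cycle's total reward is non-negative.
        levels = accumulate(prefix_rewards + cycle_rewards, initial=initial_energy)
        return min(levels) > 0 and sum(cycle_rewards) >= 0

    prefix, cycle = [-1, +2], [+1, -1, +1]          # transition rewards
    print(min_even_parity([3, 2, 5]),               # min priority on the cycle is 2
          mean_payoff(cycle),                       # 1/3 > 0
          energy_ok(2, prefix, cycle))              # energy level stays positive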
Energy-parity.
We consider SSGs with energy-parity objectives,
where plays need to satisfy both an energy and a parity objective.
The parity objective specifies functional correctness, while the energy
condition can encode efficiency or risk considerations, e.g., the system
should not run out of energy since manually recharging would be costly or risky.
Previous work.
Much work on combined objectives for stochastic systems is
restricted to Markov decision processes (MDPs)
<cit.>.
For (stochastic) games, the computational complexity of single
objectives is often in NP ∩ coNP, e.g., for parity or mean-payoff
objectives <cit.>.
Multi-objective games can be harder, e.g.,
satisfying two different parity objectives
leads to coNP-completeness <cit.>.
Stochastic mean-payoff parity games can be solved in NP ∩ coNP
<cit.>. However, this does not imply a solution for
stochastic energy-parity games, since, unlike in the non-stochastic
case <cit.>, there is no known reduction from energy-parity to mean-payoff parity
in stochastic games.
The reduction in <cit.> relies on the fact that
Maximizer has a winning finite-memory strategy for energy-parity, which does
not generally hold for stochastic games, or even MDPs <cit.>.
For the same reason, the direct reduction from stochastic energy-parity to ordinary
energy games proposed in <cit.> does not work
for general energy-parity but only for energy-Büchi; cf. <cit.>.
Non-stochastic energy-parity games can be solved in ∩
(and even in pseudo-quasi-polynomial time <cit.>)
and Maximizer strategies require only finite (but exponential) memory <cit.>.
Stochastic energy-parity games have been studied in <cit.>,
where it was shown that the almost-sure problem is decidable and in ∩.
That is, given an initial configuration (control-state plus current energy level),
does Maximizer have a strategy to ensure
that energy-parity is satisfied with probability 1 against any Minimizer
strategy?
Unlike in many single-objective games, such an almost-surely winning Maximizer strategy
(if it exists) requires infinite memory in general.
This holds even in MDPs and for energy-coBüchi objectives <cit.>.
However, <cit.> did not address quantitative questions about energy-parity
objectives, such as computing/approximating the value of a given
configuration, or the decidability of exact questions like
“Is the value of this configuration ≥ k ?” for some constant k (e.g., k=1/2).
The decidability of the latter type of exact question about the energy-parity value is open,
but there are strong indications that it is very hard.
In fact, even simpler subproblems are already at least as hard as the
positivity problem for linear recurrence sequences,
which in turn is at least as hard as the Skolem problem
<cit.>. (The decidability of these problems has been open for
decades; see <cit.> for an overview.)
Given an SSG with an energy-parity objective, suppose we remove the parity
condition (assume it is always true) and also suppose that Maximizer is passive
(does not get to make any decisions). Then we obtain an MDP where the only
active player (the Minimizer in the SSG) has a termination objective,
i.e., to reach a configuration where the energy level is ≤ 0.
Exact questions about the value of the termination objective in MDPs are
already at least as hard as the positivity problem
<cit.> (see also
<cit.>).
Thus exact questions about the energy-parity value in SSGs are also
at least as hard as the positivity problem.
Our contributions.
Since exact questions about the energy-parity value in SSGs are
positivity-hard,
we consider the problem of computing approximations of the value.
We present an algorithm that, given an SSG and error ,
computes -close approximations
of the energy-parity value of any given configuration in 2-.
Moreover, we show that -optimal Maximizer (resp. Minimizer)
strategies can be chosen as deterministic and using only finite memory
with 2-𝖤𝖷𝖯·log1/
memory modes.
One can understand the idea as a constructive upper bound on the accuracy
with which the players need to remember the current energy level in
the game.
(This is in contrast to the result in <cit.> that
almost-surely winning Maximizer strategies require infinite
memory in general.)
Once the upper bound on Maximizer's memory for -optimal strategies
is established, one might attempt a reduction from energy-parity to
mean-payoff parity, along similar lines as for non-stochastic games in
<cit.>. However, instead we use a more direct reduction
from energy-parity to parity in a derived SSG for our approximation algorithm.
§ PRELIMINARIES
A probability distribution over a countable set S is a function
f : S → [0,1] with ∑_{x ∈ S} f(x) = 1.
(f) denotes the support of
f and (S) is the set of all probability distributions over S.
Given an alphabet Σ,
let Σ^ and Σ^* (Σ^+) denote the set of infinite
and finite (non-empty) sequences over Σ, respectively.
Elements of Σ^ or Σ^* are called words.
Games, MDPs and Markov chains.
A Simple Stochastic Game (SSG) is a finite-state 2-player turn-based
perfect-information stochastic game =
where the finite set of states is partitioned into the states of
the player (Maximizer),
states of player (Minimizer),
and chance vertices (aka random states) .
Let ⊆ be the transition relation.
We write ' if ,'∈ and
assume that
{' |'}≠∅
for every state .
The probability function
assigns each random state ∈ a distribution over
its successor states, i.e., () ∈().
For ease of presentation, we extend the domain of to
^* by () ()
for all ∈^+.
An MDP is a game where one of the two players does not control any
states. An MDP is maximizing (resp. minimizing)
iff = ∅ (resp. = ∅).
A Markov chain is a game with only random states,
i.e., = = ∅.
Strategies.
A play is an infinite sequence _0_1 …∈^ω
such that _i _i+1 for all i ≥ 0.
A path is a finite prefix of a play.
Let * = q_i_i ∈ | q_i q_i+1
denote the set of all possible plays.
A strategy of the player () is a function
: ^* →()
(: ^* →())
that assigns to every path
w∈^* (∈^*)
a probability distribution over the successors of .
If these distributions are always Dirac then the strategy is called
deterministic (aka pure), otherwise it is called randomized
(aka mixed).
The set of all strategies of player and in
is denoted by and ,
respectively.
A play/path _0_1 … is compatible with a pair of strategies
(,) if _i+1∈((_0 …_i))
whenever _i ∈ and
_i+1∈((_0 …_i)) whenever _i ∈.
Finite-memory deterministic (FD) strategies are a subclass of
strategies described by deterministic transducers
= where is a finite set
of memory modes with initial mode _0,
: ↦
updates the memory mode upon observing a transition and
: ↦
chooses the successor state based on the current memory mode and state.
FD strategies without memory (||=1) are
called memoryless deterministic (MD).
For deterministic strategies, there is no difference between public memory
(observable by the other player) and private memory.
Measure. A game with initial state _0 and strategies
(,) yields a probability space
(_0^,__0, [],,_0)
where __0 is the σ-algebra generated by the cylinder sets
_0_1…_n^ for n ≥ 0.
The probability measure [],,_0 is
first defined on the cylinder sets.
For = _0…_n, let
[],,_0() 0 if
is not compatible with , and
otherwise
[],,_0(^) ∏_i=0^n-1(_0…_i)(_i+1) where
is or or depending on
whether _i ∈ or or , respectively.
By Carathéodory's extension
theorem <cit.>, this defines a unique probability
measure on the σ-algebra.
Objectives and Payoff functions.
General objectives are defined by real-valued measurable functions.
However, we only consider indicator functions of measurable sets.
Hence our objectives can be described by measurable subsets ⊆^ of plays.
The payoff, under strategies (,),
is the probability that plays belong to .
We use the syntax and semantics of the LTL operators <cit.>
, (always) and (next)
to specify some conditions on plays.
Reachability & Safety.
A reachability objective is defined by a set of target states
⊆. A play = _0s_1 …
belongs to iff ∃ i ∈ _i ∈.
Similarly, belongs to ^≤ n
(resp. ^≥ n) iff
∃ i ≤ n (resp. i ≥ n) such that _i ∈.
Dually, the safety objective
consists of all plays
which never leave . We have =.
Parity. A parity objective is defined via bounded function
: → that assigns non-negative priorities
(aka colors) to states. Given an infinite play
= _0s_1 …, let Inf()
denote the set of numbers that occur infinitely often in the sequence
(_0)(_1)….
A play satisfies even parity
w.r.t. iff the minimum of Inf() is even.
Otherwise, satisfies odd parity.
The objective even parity is denoted by ()
and odd parity is denoted by ().
Most of the time, we implicitly assume that the coloring function is known
and just write and .
Observe that, given any coloring , we have
= and
() = ( + 1)
where + 1 is the function which adds 1 to the color of every
state. This justifies to consider only one of the even/odd parity objectives,
but, for the sake of clarity, we distinguish these objectives wherever necessary.
Energy/Reward/Counter based objectives.
Let r: E →-R,…,0,…,R be a bounded function that assigns weights
to transitions. Depending on context, the sum of these weights in a path
can be viewed as energy, cost/reward or a counter.
If ' and r((,')) = c, we write c'. Let = _0 c_0_1 c_1… be a play.
We say that satisfies
* the k-energy objective k iff k + ∑_i=0^n-1 c_i > 0 for all n ≥ 0.
* the l-storage condition if l+∑_i=m^n-1 c_i ≥ 0
holds for every infix s_m c_m_m+1… s_n of the play.
Let k,l denote the set of plays that satisfy both the k-energy
and the l-storage condition. Let k⋃_l k,l. Clearly, k⊆k.
* k-Termination (k) iff there exists n ≥ 0 such that k + ∑_i=0^n-1 c_i≤ 0.
* Limit objective z iff
lim inf_n ∞∑_i=0^n-1 c_i z
for ∈<,≤,=,≥,> and z ∈∪∞,-∞ and similarly for z.
* Mean payoff c for some constant c ∈
iff lim inf_n ∞1/n∑_i=0^n-1 c_i c.
Observe that the objectives k-energy and k-termination are mutually
exclusive and cover all of the plays.
A different way to consider these objectives is to encode the energy level
(the sum of the transition weights so far) into the state space and
then consider the obtained infinite-state game with safety/reachability objective, respectively.
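As an illustration of the reward-based objectives above, the following Python sketch checks their finite-prefix versions on a list of transition rewards. The function names are ours; note that a finite prefix can only witness a violation of the k-energy or l-storage condition (equivalently, satisfaction of k-termination), while for mean payoff only the running average whose liminf the objective constrains is shown.

# Illustrative sketch: prefix checks for the reward-based objectives above, on a
# finite play prefix given as a list of transition rewards c_0, c_1, ...
from itertools import accumulate

def violates_energy(k: int, rewards: list) -> bool:
    """True iff k + c_0 + ... + c_{n-1} <= 0 for some n, i.e. the k-energy objective is already violated."""
    return any(k + total <= 0 for total in accumulate(rewards, initial=0))

def satisfies_termination(k: int, rewards: list) -> bool:
    """k-termination and k-energy are mutually exclusive and cover all plays."""
    return violates_energy(k, rewards)

def violates_storage(l: int, rewards: list) -> bool:
    """True iff some infix c_m .. c_{n-1} has l + sum < 0."""
    return any(l + sum(rewards[m:n]) < 0
               for m in range(len(rewards) + 1)
               for n in range(m, len(rewards) + 1))

def mean_payoff_prefix(rewards: list) -> float:
    """Average reward of the prefix; the objective constrains the liminf of this as n grows."""
    return sum(rewards) / len(rewards) if rewards else 0.0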
An objective is called shift-invariant
iff for all finite paths and plays ' ∈^ω,
we have ' ∈' ∈.
Parity and mean payoff objectives are shift-invariant, but energy and
termination objectives are not.
Objective is called submixing iff for all sequences of finite
non-empty words u_0, v_0, u_1, v_1 … we have
u_0v_0u_1v_1 …∈((u_0u_1…∈) ∨ (v_0v_1 …∈)).
Determinacy.
Given an objective and a game , state has value (w.r.t to ) iff
sup_∈inf_∈[],,() = inf_∈sup_∈[],,().
If has value then [][] denotes the value of
defined by the above equality. A game with an objective is called
weakly determined if every state has value.
Stochastic games with Borel objectives are weakly
determined <cit.>.
Our objectives above are Borel, hence any boolean combination of them is also
weakly determined.
For > 0 and state , a strategy
* ∈ is -optimal (maximizing) iff [],,() ≥[][] - for all ∈.
* ∈ is -optimal (minimizing) iff [],,() ≤[][] + for all ∈.
A 0-optimal strategy is called optimal.
An MD strategy is called uniformly -optimal (resp. uniformly optimal)
if it is so from every start state.
An optimal strategy for player from state
is almost surely winning if [][]=1.
By we denote the set of states that have an almost surely winning
strategy for objective . For ease of presentation, we drop subscripts and superscripts wherever possible if they are clear from the context.
Energy-parity.
We are concerned with approximating the value for the combined energy-parity
objective (k) ∩ and building -optimal strategies.
In our constructions we use some auxiliary objectives.
Following <cit.>, these are defined as
>-∞ ∩
and
= =-∞ ∪.
For finite-state SSGs and the following objectives
there exist optimal MD strategies for both players.
Moreover, if the SSG is just a maximizing MDP then the set of states
that are almost surely winning for Maximizer can be computed in polynomial
time.
* <cit.>
* -∞,
∞,
-∞,
∞,
>0 <cit.>
* <cit.>
§ THE MAIN RESULT
The following theorem states our main result.
theoremthmapproxenpar
Let = be an SSG with transition rewards in unary
assigned by function r and colors assigned to states by function .
For every state ∈, initial energy level i ≥ 0 and error margin
ε > 0, one can compute
* a rational number v' such that
0 ≤ v'-[][(i) ∩]≤ ε in
2-.
[We write “computing a number v' in 2-” as a shorthand for the
property that questions like v' ≤ c for constants c
are decidable in 2-.]
* ε-optimal FD strategies _ and _ for Maximizer
and Minimizer, resp., in 2-.
These strategies use
2-𝖤𝖷𝖯·log(1/ε)
memory modes.
For rewards in binary, the bounds above increase by one exponential.
We outline the main steps of the proof; details in the following sections.
We begin with the observation that (i) ⊆(j) for
i ≤ j, and thus for all states we have
[][(i) ∩ ]≤[][(j) ∩ ]≤ 1.
So lim_n ∞[][(n) ∩ ]
exists. We define
[] lim_n ∞[][(n) ∩ ].
We will see that [] and [][] are in fact equal
(a consequence of <Ref>) and [][]
can be computed in nondeterministic polynomial time (<Ref>).
Intuitively, for high energy levels, the precise energy level does not matter
much for the value.
The main steps of the approximation algorithm are as follows.
* Compute FD strategies ()
that are optimal maximizing for the objective starting from state in .
Compute an MD strategy
that is uniformly optimal minimizing
for the objective .
Compute the value [][] for every
∈.
See <ref>.
* Compute a natural number N such that for all ∈ and
all i ≥ N we have
0 ≤[][] - [][(i) ∩ ]≤.
N will be doubly exponential. See <ref>.
* Consider the finite-state parity game ' derived from by encoding the energy
level up-to N into the states, i.e., the states of ' are of the form (s,k)
for s∈ and 0 ≤ k ≤ N, and colors are inherited from .
Moreover, we add gadgets that ensure that
states (,N) at the upper end win with probability
[][] and states (,0) at the lower end lose.
By the previous item, [][] is -close to
[][(N) ∩ ].
Thus, for k < N we can -approximate the value
v = [][(k) ∩ ] by
v' ['][](,k).
If k ≥ N we can -approximate v
by v' [][].
Moreover, we obtain -optimal FD strategies _
for Maximizer (resp. _ for Minimizer) for
(k) ∩ in .
Let (resp. ) be optimal MD strategies
for Maximizer (resp. Minimizer) for the objective in '.
Then _ plays as follows.
While the current energy level j (k plus the sum of the rewards so far)
stays <N, then, at any state ', play like at state
(',j) in '.
Once the energy level reaches a value ≥ N at some state '
for the first time, then play like (s') forever.
Similarly, _ plays as follows.
While the current energy level j (k plus the sum of the rewards so far)
stays <N, then, at any state ', play like at state
(',j) in '.
Once the energy level reaches a value ≥ N (at any state)
for the first time, then play like forever.
See <ref>.
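The following Python-style sketch summarizes how the three steps above fit together. The callables passed in (solve_gain, compute_bound_N, unfold_energy, solve_parity_game) stand for the constructions of the later sections; they are parameters of this sketch, not implemented here, and the names are ours.

# Schematic sketch of the three-step approximation above.

def approximate_energy_parity(game, state, energy_i, eps,
                              solve_gain, compute_bound_N,
                              unfold_energy, solve_parity_game):
    # Step 1: Gain values (and the associated optimal strategies) for all states.
    gain_value = solve_gain(game)

    # Step 2: a threshold N above which the energy-parity value is eps-close to
    # the Gain value (doubly exponential for unary rewards).
    N = compute_bound_N(game, eps)

    # Step 3: unfold the energy level up to N into a finite parity game G',
    # with the Gain values used at the upper boundary, and read off the value.
    if energy_i >= N:
        return gain_value[state]
    unfolded = unfold_energy(game, N, gain_value)      # states (s, k), 0 <= k <= N
    return solve_parity_game(unfolded)[(state, energy_i)]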
As a technical tool, we sometimes consider the dual of a game
(resp. the dual maximizing MDP of some minimizing MDP).
Consider ^d
with the complement objective
(k) ∩ = (k) ∪,
where ^d is simply the game with the roles of Maximizer and
Minimizer reversed, i.e.,
' = ' = ' = ' = ' = ' =
Hence ^d = and
^d =.
It is easy to see that for any objective and start state
* [][] + [^d][] = 1.
* is -optimal maximizing for in iff it is -optimal minimizing for in ^d.
* is -optimal minimizing for in iff it is -optimal maximizing for in ^d.
So approximating the value of (k) ∩
in can be reduced in linear time to approximating
the value of (k) ∪ in ^d.
§ COMPUTING VALUE OF GAIN
Given an SSG = and a start state ,
we will show how to compute [][]
and the optimal strategies for both players.
We start with the case of maximizing MDPs.
The following lemma summarizes some previous results
(<cit.>, <cit.>,
<cit.>).
lemmalemmaxmdp
Let be a maximizing MDP.
* [] = [][] for all states ∈.
* Optimal strategies for in exist and can be chosen FD,
with (exp(||^(1))) memory modes,
and exponential memory is also necessary.
* For any state ∈, [] is rational and can be computed in
(||^8) deterministic
polynomial time if rewards are in unary, and in and if
rewards are in binary.
<ref> holds by <cit.>.
Towards <ref>, we follow the proof of <cit.>.
Since = >-∞ ∩ is shift-invariant,
there exist optimal strategies by <cit.>.
By <cit.> and <ref>, an optimal strategy
for can be constructed as follows.
Let A⋃_k∈k∩
and B=∞∩
be the subsets of states from which there exist almost surely winning
strategies for the objectives k∩ and
=∞∩, respectively.
By <cit.>, we can restrict the values k in the
definition of A by some k' = (||· R), i.e.,
A = ⋃_k ≤ k'k∩.
An optimal strategy for works in two phases.
First it plays an optimal strategy _R towards reaching the set A ∪ B,
where _R can be chosen MD by <ref>.
Then, upon reaching A (resp. B), it plays an almost surely winning
strategy _A for the objective k∩
(resp. _B for the objective =∞∩).
By <cit.>,
the strategy _A requires (k· ||) memory modes
for a given k and thus at most (||^2 · R), since we
can assume that k ≤ k'.
Towards the strategy _B, we first observe that
in finite MDPs a strategy is almost-surely winning for =∞∩
iff it is almost-surely winning for >0∩.
By <cit.>, there exist optimal
deterministic strategies for >0∩ that use exponential
memory, i.e., (exp(||^(1))) memory modes.
The memory required for _B exceeds that of _R and
_A (even when R is given in binary), and the one extra memory mode to
record the switch from _R to _A (resp. _B) is
negligible in comparison.
Thus we can conclude that uses (exp(||^(1))) memory modes.
<cit.> shows that exponential memory is necessary.
Towards <ref>, let d |()| be the
number of priorities in the parity condition.
By <cit.>, for each ∈,
[] is rational and can be computed in deterministic time
Õ(|E| · d · ||^4 · R + d · ||^3.5· (|P| + |r|)^2)
(and still in and if R is given in binary).
So [] can be computed in (||^8) deterministic
polynomial time if weights are given in unary, and in and if
weights are given in binary.
In order to extend <ref> from MDPs to games,
we need the notion of derived MDPs, obtained by fixing the choices of one
player according to some FD strategy.
Given an SSG = and a finite memory deterministic (FD)
strategy for Minimizer (resp. for Maximizer) from a state
, described by ,
let _ (resp. ^) be the maximizing
(resp. minimizing) MDP with state space
obtained by fixing Minimizer's (resp. Maximizer's) choices according to
(resp. ).
For every SSG , objective and
Minimizer (resp. Maximizer) FD strategy = (resp. ),
from state we get
[^][],≤[][]≤[_][],
and equality holds if (resp. ) is optimal from state .
Consider an SSG = with the objective.
*
Optimal Minimizer strategies exist and can be chosen uniform MD.
*
[][] is rational and
questions about it, i.e., [][]≤ c for
constants c, are decidable in .
*
Optimal Maximizer strategies exist and can be chosen FD,
with (exp(||^(1))) memory modes.
Moreover, exponential memory is also necessary.
Towards <ref>, observe that since both the objectives =-∞ and
are shift-invariant and submixing, so is their union, i.e.,
is shift-invariant and submixing.
Hence, by <cit.>,
an optimal MD strategy _ for Minimizer
exists from any state ∈.
Since is finite and is shift-invariant,
we can also obtain a uniformly optimal MD strategy
, i.e., is optimal from every state.
Towards <ref>,
consider the maximizing MDP _ obtained from by
fixing (cf. <ref>).
Since is MD, the states of _ are the
same as the states as .
Since is optimal for Minimizer from every state ,
we obtain that [][] =
[_][] for every state by
<ref>.
By <ref>, the latter is rational and can be computed in polynomial time for
weights in unary (resp. in and for weights in binary).
Thus, by guessing , we can decide questions
[][]≤ c in .
Towards <ref>, we again use the property
that is shift-invariant and submixing (see above).
By <cit.>, optimal FD Maximizer strategies for in an SSG require
only || ·⌈log(|E|)⌉ many extra bits of memory above the memory required for
optimal Maximizer strategies in any derived MDP where Minimizer's choices
are fixed.
Hence, by <ref>, one can obtain optimal FD Maximizer strategies in
that use at most
2^||·⌈log(|E|)⌉·(exp(||^(1))) = (exp(||^(1)))
memory modes.
The corresponding exponential lower bound on the memory holds already for MDPs
by <ref>.
§ COMPUTING THE UPPER BOUND N
We show how to compute the upper bound N, up-to which Maximizer needs to
remember the energy level, for any given error margin >0.
Similarly as in <ref>, we first solve the problem for maximizing
MDPs and then extend the solution to SSGs.
§.§ Computing N for maximizing MDPs
Given a maximizing MDP = and > 0,
we will compute an N ∈ such that for all ∈ and all j ≥ N
0 ≤[][(j) ∪ ] - [][]≤.
Recall that = =-∞ ∪.
We now define the sets of states
W_0, W_1 =-∞ and W_2.
By <ref>, there exist optimal MD strategies for
=-∞ and .
Since is shift-invariant and submixing, there exists an optimal MD
strategy for it by <cit.>.
For every state in the MDP we have
* W_1 ∪ W_2 ⊆ W_0
* [][ W_0]≤[][]
* [][ ∩ W_2] = 0
* for every initial energy level j ≥ 0
[][(j) ∪ ∩ W_0] = [][ W_0]
[][]≤[][(j) ∪ ]≤[][] + sup_[],((j) ∩ W_1)
* This follows directly from the definitions of W_0,W_1,W_2.
* Let ' be an optimal MD strategy for W_0 from and
” be an almost surely winning MD strategy for from any state in W_0.
Let be the strategy that plays ' until reaching W_0 and
then switches to ”.
We have [][]≥,() ≥',( W_0)
= [][ W_0].
* For ∈ W_2 the statement is obvious.
So let ∉ W_2 and consider the modified
MDP ' = where all states in W_2 are collapsed into a
losing sink.
I.e., ' (∖ W_2) ⊎*𝑡𝑟𝑎𝑝,
with 𝑡𝑟𝑎𝑝 a new random sink state having color 0
(thus losing for objective ),
' contains all of
(∩*(∖ W_2)(∖
W_2)∪ (𝑡𝑟𝑎𝑝,𝑡𝑟𝑎𝑝) ) and all transitions to
W_2 are redirected to 𝑡𝑟𝑎𝑝 and ' is derived accordingly
from .
Then ['][] = [][ ∩ W_2]
for all states ∈∖ W_2.
Towards a contradiction, assume that
[][ ∩ W_2] > 0.
Hence ['][] > 0.
Then, by <cit.>, there exists a state ' ∈'
such that ['][]' = 1,
and it is easy to see that ' ≠𝑡𝑟𝑎𝑝 and thus
' ∈∖ W_2.
But this implies that [][]' = 1 and thus ' ∈ W_2, a contradiction.
* Let (j) ∪.
For <ref>, the first inequality
[][ ∩ W_0]≤[][
W_0] is trivial, since ∩ W_0 ⊆ W_0.
To show the reverse inequality, consider the strategy
that first plays like an optimal MD strategy ' for the objective
W_0 and after reaching W_0 switches to an almost surely winning MD
strategy ” for the objective .
Then
[][ ∩ W_0]≥[],( ∩ W_0)
≥[],( ∩ W_0)
=
[]',( W_0)
=
[][ W_0],
where the second inequality is due to =-∞⊆(j).
For <ref>, the first inequality is again due to the fact
that =-∞⊆(j) for all j ≥ 0.
Towards the second inequality of <ref> we have
[][]
= sup_[],()
= sup_(,( ∩ W_0) + ,( ∩ W_0) ) Law of total probability
≤sup_,( ∩ W_0) + sup_,( ∩ W_0) supf+g≤sup f + sup g
= sup_,( W_0)+sup_,( ∩ W_0) <ref>
≤[][] + sup_,( ∩ W_0)
<ref>
We can upper-bound the second summand above as follows.
sup_, ( ∩ W_0)
= sup_,(((j) ∪ ) ∩ W_0)
≤sup_,((j) ∩ W_0) +
sup_,( ∩ W_0) Union bound
≤sup_,((j) ∩ W_1) + sup_,( ∩ W_2) <ref>
= sup_,((j) ∩ W_1) <ref>
We show that the term
sup_[],((j) ∩ W_1)
in <ref> can be made arbitrarily small for large j.
To this end, we use <cit.> (adapted to our notation).
<cit.>
Let = be a maximizing finite MDP with rewards in unary and
W_1 =-∞.
One can compute, in polynomial time, a rational constant c < 1,
and an integer h ≥ 0
such that for all j ≥ h
and ∈
sup_,(j) ∩ W_1 ≤ c^j/(1-c).
Moreover, 1/(1-c) ∈(exp(^(1))) and
h ∈(exp^(1)).
lemmalemMDPN
Consider a maximizing MDP =, ε > 0 and the constants c,h from <ref>.
For rewards in unary and i ≥ N we have
[][(i) ∪ ] - [][] ≤ ε, where
N ≔ max{h, log_c(ε(1-c))} ∈ (exp(^(1))·log(1/ε)).
For rewards in binary we have
N ∈ (exp(exp(^(1)))·log(1/ε)), i.e.,
the size of N increases by one exponential.
For rewards in unary, the result follows from
<ref>(<ref>) and <ref>.
For rewards in binary, the constants increase by one exponential via
encoding binary rewards into unary rewards in a modified MDP.
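The bound of the lemma can be evaluated directly once c and h are known. The short Python sketch below does so; the constants c, h and the error margin used here are made up purely for illustration and do not come from any particular MDP.

# Numeric sketch of the bound from the lemma above: N = max{h, log_c(eps*(1-c))}.
import math

def bound_N(c: float, h: int, eps: float) -> int:
    """An integer N >= h with c**N / (1 - c) <= eps, via the closed form of the lemma."""
    assert 0 < c < 1 and eps > 0
    n_from_tail = math.log(eps * (1 - c), c)      # log base c of eps*(1-c)
    return max(h, math.ceil(n_from_tail))

# Example with illustrative constants:
c, h, eps = 0.999, 50, 1e-3
N = bound_N(c, h, eps)
assert c**N / (1 - c) <= eps
print(N)   # grows like log(1/eps) / (1 - c)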
§.§ Computing N for SSGs
In order to compute the bound N for an SSG ,
we first consider bounds N() for individual states
and then take their maximum.
Given a state ,
we can use <ref>(<ref>)
to obtain an optimal FD strategy (with (exp(||^(1))) memory modes)
() = for Maximizer from state w.r.t. the
objective.
<ref>(<ref>)
yields a uniform MD strategy
that is optimal for Minimizer from all states w.r.t. the
objective.
lemmalemGameN
Given an SSG = and >0, we can compute a number N ∈
such that for all i ≥ N and states ∈ we have
[][(i) ∩ ] - ≤[][] - ≤inf_[](),,(i) ∩ ≤[][(i) ∩ ]
i.e., () is -optimal for Maximizer for (i) ∩ for all i ≥ N.
In particular, 0 ≤[][] - [][(i) ∩ ]≤.
Moreover, is -optimal for Minimizer from any state for i ≥ N.
sup_[],,(i) ∩ ≤sup_[],,
=
[][]≤[][(i) ∩ ] +
For rewards in unary, N is doubly exponential, i.e.,
N ∈(exp(exp(^(1)))·log1/)
and it can be computed in exponential time.
For rewards in binary, the size of N and its computation time increase
by one exponential, respectively.
Assume that rewards are in unary.
The first inequality of (<ref>) holds because
(i) ∩ ⊆ for any i.
The third inequality of (<ref>) follows from the definition of
the value.
Towards the second inequality of (<ref>),
we consider the minimizing MDP
() ^()
obtained by fixing the Maximizer strategy ().
Since () is optimal for Maximizer from state
wrt. the objective , <Ref> yields that
[][] = [()][],.
Since () has (exp(||^(1))) memory modes,
the size of () is exponential in || and () can be computed in
exponential time.
Now we consider the dual maximizing MDP ()^d and the objectives
(i) ∪ and . (Note that ()^d has the same
size as ().)
From <Ref>, we obtain
a bound N() ∈ such that for all i ≥ N()
0≤[()^d][(i) ∪ ], -[()^d][],≤.
By <Ref> and <Ref>,
N() is exponential in |()^d| and thus doubly exponential
in ||, i.e.,
N() ∈(exp(exp(^(1)))·log1/).
Moreover, N() can be computed in time polynomial in |()^d|
and thus in time exponential in ||.
By duality, we can rewrite <Ref> for () as follows.
For all i ≥ N()
0
≤[()][], -
[()][(i) ∩ ],≤.
In order to get a uniform upper bound that holds for all states,
let N max_∈ N().
Since || is linear, we still have
N ∈(exp(exp(^(1)))·log1/)
and it can be computed in exponential time in ||.
Finally, we can show the second inequality of (<ref>).
inf_[](),,(i) ∩
= inf_[()],,(i) ∩
= [()][(i) ∩ ],
≥[()][], -
= [][] -
The first inequality of (<ref>) holds because
(i) ∩ ⊆ for any i.
The equality in (<ref>) holds by the
optimality of .
The second inequality of (<ref>) follows from the previously
stated consequence of (<ref>).
For rewards in binary, the sizes of the numbers N() (and hence N)
and the time to compute it increase by one exponential by <Ref>.
§ UNFOLDING THE GAME TO ENERGY LEVEL N
Given an SSG = and error tolerance >0,
for each state ∈ and energy level i ≥ 0,
we want to compute a rational number
v' which satisfies
0 ≤ v'- [][(i) ∩ ]≤,
and -optimal FD strategies _ and _ for
Maximizer and Minimizer, resp.
We achieve this by constructing a finite-state parity game '
that closely approximates the original game ,
as described in <Ref>(<ref>).
For clarity, we explain the construction in two steps.
In the first step, we consider a finite-state parity game N.
(Unlike ', the game N is not actually constructed.
It just serves as a part of the correctness proof.)
N encodes the energy level up-to N+R (where R is the maximal
transition reward) into the states, i.e., it has states
of the form (,k) with k ≤ N+R.
It imitates the original game till energy level N+R,
but at any state ,i with energy level i ≥ N it jumps
to a winning state with probability [][(i) ∩ ] and to a losing state with probability
1-[][(i) ∩ ].
(We need the margin up-to N+R, because transitions can have rewards >1, so
the level N might not be hit exactly.)
Similarly, at states ,0 with energy level 0,
we jump to a losing state.
The coloring function in the new game N
derives its colors from the colors in the original game
, i.e., all states ,i have the same color as in .
By construction of N, for i ≤ N, the value of ,i in
N coincides with [][(i) ∩ ].
In the second step, since we do not know the exact values
[][(i) ∩ ] for N+R ≥ i > N,
we approximate these by the slightly larger [][].
I.e., we modify N by replacing the probability values
[][(i) ∩ ] for the jumps to the winning
state by [][]. Let ' be the resulting
finite-state parity game.
It follows from <Ref> that
0 ≤[][] -
[][(i) ∩ ]≤ for i ≥ N and
[][ ∩ ] = [][].
Thus ' -over-approximates N and , and we obtain the
following lemma.
For all states and all 0 ≤ i ≤ N
[N][],i =
[][(i) ∩ ],
0 ≤[^'][],i - [N][],i≤.
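A structural sketch of the unfolded game may help. The Python fragment below builds the state space (s, k) together with the boundary gadgets described above; the argument names (succ, reward, colors, nu) are this sketch's own, and the bookkeeping of state ownership and of the original probabilities at random states is only indicated in comments.

# Illustrative sketch of the unfolded finite parity game described above:
# energy levels are encoded into the states, level 0 is redirected to a losing
# sink, and levels at or above N jump to a winning sink with probability nu[s].

def unfold(states, succ, reward, colors, nu, N, R):
    """succ[s]: successors of s in the original game; reward[(s, t)] in [-R, R];
    nu[s]: the value used at the upper boundary.  The owner of each state and
    the probabilities at random states carry over unchanged (not tracked here)."""
    WIN, LOSE = "win", "lose"
    color = {WIN: 0, LOSE: 1}          # even self-loop wins the parity objective, odd loses
    moves, gadget = {}, {}
    for s in states:
        for k in range(N + R + 1):
            color[(s, k)] = colors[s]
            if k == 0:
                gadget[(s, k)] = [(LOSE, 1.0)]                         # energy depleted
            elif k >= N:
                gadget[(s, k)] = [(WIN, nu[s]), (LOSE, 1.0 - nu[s])]   # boundary jump
            else:                                                      # 0 < k < N: imitate the game
                moves[(s, k)] = [(t, max(0, min(N + R, k + reward[(s, t)])))
                                 for t in succ[s]]
    return moves, gadget, color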
Now we are ready to prove the main theorem.
*
For i > N we output v' = [][], which satisfies the
condition by <Ref>.
For i ≤ N we output v' = ['][],i, which
satisfies the condition by <Ref>.
By <Ref>, the values [][] are
rational for all states . Therefore all probability values in ' are
rational and thus the values of all states in ' are rational.
Hence our numbers v' are always rational.
By <Ref>, the values [][] for
all states ∈ can be computed in exponential time.
By <Ref>, N ∈(exp(exp(^(1)))·log1/)
is doubly exponential.
Therefore, we can construct ' in
(exp(exp(^(1)))·log1/) time and space.
Questions about the parity values of states in ' can be decided in
nondeterministic time polynomial in |'|. Thus the numbers v' are computed in 2-.
Towards Item 2, we construct -optimal FD strategies _
for Maximizer (resp. _ for Minimizer) for
(i) ∩ in .
Let
(resp. )
be optimal MD strategies
for Maximizer (resp. Minimizer) for the objective in ',
which exist by <Ref>.
Since these strategies are MD, they can be guessed in nondeterministic time
polynomial in the size |'|, and thus
in (exp(exp(^(1)))·log1/) nondeterministic time.
Then _ plays as follows.
While the current energy level j (i plus the sum of the rewards so far)
stays <N, then, at any state ', play like at state
(',j) in '.
Once the energy level reaches a value ≥ N at some state '
for the first time, then play like (s') forever.
(Recall that (s') is the optimal FD Maximizer strategy for
from state s' from <ref>.)
_ is -optimal by <Ref>
and <Ref>.
It needs to remember the energy level up-to N while simulating
. Moreover, (s') needs
(exp(||^(1))) memory modes by <Ref>.
Finally, it needs to remember the switch from to (s').
Since N ∈(exp(exp(^(1)))·log1/)
dominates the rest, _ uses
(exp(exp(^(1)))·log1/) memory modes.
Similarly, _ plays as follows.
While the current energy level j
stays <N, at any state ', play like at state
(',j) in '.
Once the energy level reaches a value ≥ N (at any state)
for the first time, then play like forever
(where is the uniform optimal MD Minimizer strategy for
from <ref>.)
_ is -optimal by <Ref>
and <Ref>.
While is MD and does not use any memory, _ still
needs to remember the energy level up-to N
while simulating ,
and thus it uses
(exp(exp(^(1)))·log1/) memory modes.
For rewards in binary, all bounds increase by one exponential via an encoding
of into an exponentially larger but equivalent game with rewards in unary.
No nontrivial lower bounds are known on the computational complexity of
approximating [][(i) ∩].
However, even without the parity part, the problem appears to be hard.
The best known algorithm for approximating the value of the energy objective
(resp. the dual termination objective) runs in for SSGs with
rewards in unary <cit.>.
As for lower bounds on the strategy complexity,
-optimal Maximizer strategies need at least an exponential number of
memory modes (for any 0 < < 1) even in maximizing MDPs.
This can easily be shown by extending the example in
<Ref>(<Ref>) and
<cit.>
that shows the lower bound for the objective.
First loop in a state with an unfavorable color to accumulate a
sufficiently large reward (depending on ) and then switch to the MDP in
<cit.> to play for
(since (i) ∩ will be very close to then).
Even the latter part requires exponentially many memory modes.
§ CONCLUSION & EXTENSIONS
We gave a procedure to compute -approximations of the value of combined
energy-parity objectives in SSGs.
The decidability of questions about the exact values is open, but the problem
is at least as hard as the positivity problem for linear recurrence sequences
<cit.>.
Unlike almost surely winning Maximizer strategies which
require infinite memory in general <cit.>,
-optimal strategies for either player require only finite memory
with at most doubly exponentially many memory modes.
An interesting topic for further study is whether these results can be extended
to other combined objectives where the parity part is replaced by something
else, i.e., energy-X for some objective X
(e.g., some other color-based condition like Rabin/Streett, or a quantitative
objective about multi-dimensional transition rewards).
While our proofs are not completely
specific to parity, they do use many strong properties that parity satisfies.
* Shift-invariance of is used in several places,
e.g. in <Ref> (and thus its consequences) and
for the correctness of the constructions in <Ref>.
* We use the fact that goes well together with
>-∞,
i.e., the objective = >-∞ ∩ allows
optimal FD strategies for Maximizer in MDPs; cf. <Ref>.
* The submixing property of = is used in <ref>
to lift <Ref> from MDPs to SSGs.
plainurl
§ APPENDIX FOR <REF>
Given an SSG = and a finite memory deterministic (FD)
strategy = for Minimizer
let _
be the maximizing
MDP with state space
obtained by fixing Minimizer's
choices according to
.
The transition rules ' in the derived MDP _ are given as follows.
* If ∈
for every (,') ∈, ∈,
we have (,) ' ((,(,')),'),
i.e., Maximizer determines the successor state and Minimizer updates its
memory according to the observed transition.
* Similarly if ∈
for every (,') ∈, ∈
we have
(,) ' ((,(,s')),')
and
((,))(((,(,s')),')) = ()('),
i.e., transition probabilities are inherited and Minimizer's memory is
updated according to the observed transition.
* If ∈ then
(,) ' ((,(,')), ') where ' = (,),
i.e., Minimizer chooses the successor state according to the
strategy and updates its memory accordingly.
The reward of each transition is the same as the reward of the
transition in from which it is derived. Similarly for the priorities
(aka coloring) of the states.
The ownership of the vertices (, ) in _ is as follows.
If ∈ then (, ) belongs to Maximizer.
If ∈ then (, ) is also a chance vertex.
If ∈ then (, ) also becomes a chance vertex (with
exactly one successor), since Minimizer's choice has been fixed.
In the dual case where a FD strategy for Maximizer is fixed,
we obtain a minimizing MDP ^. The construction is the same as
above, with the roles of Minimizer and Maximizer swapped.
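The product construction can be sketched in a few lines of Python. The field and function names used here (game.succ, game.owner, pi.up, pi.nxt) are assumptions of this sketch rather than notation from the text, and the transfer of probabilities at random states is only noted in a comment.

# Illustrative sketch: fixing an FD Minimizer strategy pi in an SSG yields a
# maximizing MDP over pairs (state, memory mode), as described above.

def fix_minimizer(game, pi):
    """game.states, game.succ[s], game.owner[s] in {'max', 'min', 'random'};
    pi.modes, pi.up(m, s, t) and pi.nxt(m, s) as in the transducer definition."""
    succ = {}
    for s in game.states:
        for m in pi.modes:
            if game.owner[s] == 'min':
                t = pi.nxt(m, s)                        # Minimizer's choice is fixed by pi
                succ[(s, m)] = [(t, pi.up(m, s, t))]    # single successor: a trivial chance node
            else:
                # Maximizer states keep their choices; random states keep all successors
                # together with their original probabilities P(s)(t) (not tracked here).
                succ[(s, m)] = [(t, pi.up(m, s, t)) for t in game.succ[s]]
    return succ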
§ APPENDIX FOR <REF>
*
By <ref>(<ref>) and
<ref>, we have
[][(i) ∪ ] - [][] ≤ sup_,(i) ∩ W_1 ≤ c^i/(1-c)
for all i ≥ h and ∈.
To obtain a bound N ≥ h with c^N/(1-c) ≤ ε, it suffices to choose
N ≔ max{h, log_c(ε(1-c))}.
We observe that
log_c(ε(1-c)) = -ln(ε(1-c)) · (-ln(c))^{-1}.
However, -ln(c) = -ln(1-(1-c)) ≥ (1-c).
Thus log_c(ε(1-c)) ≤ ln(1/ε) · (1/(1-c)) · (1/(1-c)).
For rewards in unary, by <ref>, we have
1/(1-c) ∈ (exp(^(1))) and
h is only (exp^(1)).
Thus N ∈ (exp(^(1))·log(1/ε)).
Now consider the case where rewards are given in binary.
Following the proof of <cit.>,
the bounds are derived from the size of solutions of the constructed linear
program. While the MDPs in <cit.> only consider unary rewards
from *-1,0,1,
one can extend it to the case where the rewards come from the set
*-R,…,0,…,R in a natural way.
This affects the complexity of the above computed constants and thereby size of N.
More precisely, the proof of <ref>
can be split into three steps.
Firstly, given an MDP construct a new “rising”
MDP '. Then from this derived ',
construct a linear program.
From the solutions of constructed LP,
compute the required c and h.
We evaluate the effect of having non-unary rewards in each of these steps.
When rewards are given in unary, the resulting ' has overall size
'≤ 10 ^4. More exactly, '≤
10*^3*+ and similarly for
'. When the rewards are given in binary, the construction results in an additional R^2 factor. So the resulting ' is pseudo-polynomially big when compared to in our case.
The constructed LP (cf. <cit.>) has ' + 2 variables (z_ for each state, x for the mean payoff and ξ for converting the constraint x > 0 to x ≥ξ). Moreover all variables can be assumed non-negative. The number of constraints is bounded by ' + 1. Furthermore all the constants appearing in the constraints are either constants in the original MDP or 1 or 0.
Finally, from an optimal solution of the LP z_,x,ξ one can
compute exp-x^2/2·(z_max+x+R)^2 and to get c, then
take a rational over-approximation and also take h as z_max where z_maxmax_∈'z_ - min_∈'z_. The only difference compared to the unary rewards case here is that the one step change of the submartingale is bounded by z_max+x+R instead of z_max+x+1.
From the complexity point of view, both the construction of the LP and
the computation from its optimal solutions aren't affected by
changes in the rewards, i.e., the
previous bounds for c, h and N in terms of ' still hold.
In particular, c ∈(exp1/2^'^(1)), h ∈(exp'^(1)) and thus
N ∈exp'^(1)·log1/ by
<cit.>.
While previously, ' is only polynomially larger
than , introducing binary rewards blows up the
construction (cf. <cit.>).
As a result we have that
'∈2^^(1).
Therefore N can be doubly exponential in the size of the original MDP
,
i.e.,
N ∈(exp(exp(^(1)))·log1/).
§ APPENDIX FOR <REF>
We present formally the definition of the game N, which unfolds the energy level in till N
N = N,N,N,N,N, N
where
* N×*0,…,N+R⊎*_win,_lose, the set of states is the tuple with the game state and energy level until N+R as the maximum change in a single step is R and since we are only interested in energy levels ≤ N, it suffices to consider till N+R.
* N×*1,…, N, both players control their respective states until energy level N. Every state with energy > N becomes a chance node. Consequently,
* N×*1,…, N∪×*0,N+1,…,N+R∪*_win,_lose, since the Maximizer loses when the energy level becomes ≤ 0, we make these states as a chance vertex which go to a losing loop.
* N, N
* For 0 < i ≤ N, ,i',max(0,j) iff j-i' ∈, this is just simulating the transitions of the game until energy level N and taking care of border cases. When energy drops below 0, we move to level 0 as there is no difference. When it shoots above N, it cannot go beyond N+R and thus the transition is well defined.
* If ∈ above, then the probability is carried over.
* ,0_lose with probability 1.
* ,N+k_win with probability [][(N+k) ∩ ] and with remaining probability moves to _lose for 1 ≤ k ≤ R
* _lose_lose with probability 1. Similarly for _win.
To start off, we introduce some notation and define a few reachability objectives on the infinite state game induced by 𝒢 for ∨.
𝐂 Q ×
𝐂_⋆ Q_⋆×_> 0
𝐂_♢ Q_♢×_> 0
_N⋃_q,N
_(q,N)*ω∈ | ∃ j ω(j) = (q,N) ∧∀ i < j (ω(i)) < N
Given 𝒢 = Q,→, P and N, construct 𝒢^N = Q',⇝, P' as follows
* Q' = Q ×*1,⋯, N⊔*s_0,s_1
* For every 0 < i < N, q ∈ Q_⋆ ( Q_♢ or Q_∘) (q,i) ∈ Q'_⋆ ( Q'_♢ or Q'_∘)
* each (q,0), (q,N) and s_0,s_1 ∈ Q'_∘
* Also the colors of every q ∈ Q is carried as is and color of s_0 is 2 and color of s_1 is 1
* for every 0 < i < N, (q,i) ⇝ (q',i+k) (q,k,q') ∈ → with same probability wherever applicable
* (q,0) 1⇝ s_0, (q,N) ν_q⇝ s_0, (q,N) 1-ν_q⇝ s_1, s_0 1⇝ s_0, s_1 1⇝ s_1
* The winning condition in this game is given by
Let Σ(𝒢), Σ(𝒢^N) ( resp. Π(𝒢 ), Π(𝒢^N)) be the set of pure strategies for player ⋆ ( ♢ ) in games 𝒢, 𝒢^N respectively. Define two mappings †, †^-1 ( , ^-1) as follows
* Given σ : 𝐂^*𝐂_⋆→𝐂, σ^† is the restriction of σ on (Q' ∖*s_0,s_1)^*Q'_⋆
* Similarly given σ : Q'^* Q'_⋆→ Q', σ^†^-1 is the extension of σ with σ^*_q for words in _q,N
Let (E_i)_i∈ I be an at most countable set of measurable events which partition the set Q^ω. Then
_q(O) = ∑_i ∈ I_q(O ∧ E_i)
For every σ, π in game 𝒢, 0 < i < N,
* ℙ^σ,π_q,i((∨) ∧_N) = ℙ^σ^†,π^_q,i(∧_N)
* ℙ^σ,π_q,i(_q',N) = ℙ^σ^†,π^_q,i(_q',N) = ℙ^σ^†,π^_q,i(_q',N)
For every σ, π in game 𝒢^N, 0 < i < N,
* ℙ^σ^†^-1,π^^-1_q,i((∨) ∧_N) = ℙ^σ,π_q,i(∧_N)
* ℙ^σ^†^-1,π^^-1_q,i(_q',N) = ℙ^σ,π_q,i(_q',N) = ℙ^σ,π_q,i(_q',N)
* 0 ≤ℙ^σ^†^-1,π^^-1_q,i(∨) - ℙ^σ,π_q,i() ≤ϵ
Since the game 𝒢^N is finite, this implies there exist strategies σ_p, π_p which are optimal from every state in Q'. Also let υ_q,i be the value for parity and σ = σ_p^†^-1, π = π_p^^-1. Then applying <ref> on σ_p, π_p, we get that
0 ≤ℙ^σ,π_q,i(∨) - υ_q,i≤
Also for any σ, π in 𝒢, O = ∨, 0<i<N, ℙ^σ,π_q,i(O)
= ℙ^σ,π_q,i(O ∧_N) + ∑_q' ∈ Qℙ^σ,π_q,i(O ∧_q',N)
= ℙ^σ^†,π^†_q,i(∧_N) + ∑_q' ∈ Qℙ^σ,π_q,i(O | _q',N) * ℙ^σ^†,π^†_q,i(_q',N)
Since _N ⋃∪_q' ∈ Q_q',N partition the infinite plays in both games 𝒢 and 𝒢^N, and the mappings †, are surjective, we have ^𝒢_q,i(O)
= ^𝒢_q,i(O ∧_N) + ∑_q' ∈ Q^𝒢_q,i(O ∧_q',N)
= ^𝒢^N_q,i(∧_N) + ∑_q' ∈ Q^𝒢_q',N(O) * ^𝒢_q,i(_q',N)
= ^𝒢^N_q,i(∧_N) + ∑_q' ∈ Q^𝒢_q',N(O) * ^𝒢^N_q,i(_q',N)
Since ν_q ≤^𝒢_q',N(O) ≤ν_q+, we get that
υ_q,i≤^𝒢_q,i(O) ≤υ_q,i +
The strategies σ and π are - optimal
Let O = ∨. Consider an arbitrary strategy π for ♢. We want to show that
ℙ^σ,π_q,i(O) + ≥_q,i(O)
Case i ≥ N : In this case we have that
ℙ^σ,π_q,i(O) + = ℙ^σ^*_q,π_q,i(O) +
≥ℙ^σ^*_q,π_q,i() +
≥ν_q +
≥_q,i(O)
Case i < N: ℙ^σ,π_q,i(O)
= ℙ^σ,π_q,i(O ∧_N) + ∑_q' ∈ Qℙ^σ,π_q,i(O ∧_q',N)
= ℙ^σ_p,π^_q,i(∧_N) + ∑_q' ∈ Qℙ^σ,π_q,i(O | _q'N) * ℙ^σ,π_q,i(_q',N)
≥ℙ^σ_p,π^_q,i(∧_N) + ∑_q' ∈ Qν_q' * ℙ^σ,π_q,i(_q',N)
= ℙ^σ_p,π^_q,i(∧_N) + ∑_q' ∈ Qν_q' * ℙ^σ_p,π^_q,i(_q',N)
= ℙ^σ_p,π^_q,i()
≥υ_q,i
Given , N and ν_s, we define a new game N as follows
* The states of N is the set *0,1,…, N∪_,_
* Player states are *1,…, N-1
* For 0 < i < N (,i) (',j) in N j-i' in
* If is a random state above, then the probability of the edge in the new game is the same as the one from which it is derived
* (,0) _ with probability 1, (,N) _ with probability ν_ and with remaining probability it goes to _
* _ and _ are trap states with self loops
* priority of a state (,i) is same as in , priority of _ is 1 and that of _ is 0
|
http://arxiv.org/abs/2307.04876v1 | 20230710195438 | Density and Velocity Correlations in Isothermal Supersonic Turbulence | ["Branislav Rabatin", "David C. Collins"] | astro-ph.GA | ["astro-ph.GA"] |
§ INTRODUCTION
Star-forming clouds of molecular hydrogen, which are known to be undergoing turbulent supersonic motion, are often modeled as isothermal in astrophysical simulations. This approximation is facilitated by rapid cooling rates of the molecular clouds <cit.>, which keeps the temperature roughly constant.
This reasonably simple yet powerful model is capable of explaining the observed density fluctuations within the molecular clouds, which can be used to predict many properties of star formation, such as the star formation rate <cit.> and the initial stellar mass distribution <cit.>. While supersonic turbulent motion inhibits the collapse and star formation by increasing the effective Jeans mass, at the same time it gives rise to large density variations allowing for a local collapse <cit.>.
The interplay between density and velocity fluctuations is fundamental to understanding star formation <cit.>. Describing the statistics of the fundamental dynamical quantities including the correlations between them reveals the statistical behavior of all derived quantities, including kinetic energy and the joint PDF of kinetic and thermal energy.
The main purpose of this work is to explore f_sv(s,v), the joint probability distribution
function (PDF) between the log of density, s=logρ, and speed, v. The simplest
assumption is that s and v are independent of one another, in which case the
joint distribution is the product of the marginalized distributions:
f_(s,v) = f_s(s) f_v(v)
f_s(s) =∫_-∞^∞ v f_(s,v)(s,v)
f_v(v) =∫_0^∞ s f_(s,v)(s,v).
The density PDF is typically treated as lognormal, f_s(s) = 𝒩(s;μ, σ), a Gaussian 𝒩 with mean μ and variance σ.
Speed, v, is usually modeled with a Maxwellian distribution; f_v (v) = ℳ (v; M) with the 1D Mach number M = √(⟨ v^2 ⟩ / 3). In this work, we improve on all three assumptions.
The finite shock model <cit.> as an extension of a simple Gaussian PDF of density is discussed in Section <ref>. In Section <ref> we introduce a tilted Maxwellian to better fit the statistics of speed. Finally, we find a correction to the joint PDF in Section <ref>.
Figure <ref> shows three models for the joint distribution along with simulated data. The color and solid contours are taken from simulations
described in Section <ref>. In the left panel, the dashed contours show the simple assumption of uncorrelated variables. Clearly the
shape of the model does not agree with the simulated data. The second panel shows our first correction to the joint PDF, which introduces a correlation between density and speed, but continues to assume a lognormal for density and Maxwellian for speed. The third panel shows our detailed model, with the corrected joint PDF and improved density and speed PDFs.
An important aspect of this work is the lack of fitting of any kind. All of the results come from moments of the data, and not by fitting a model to the simulated histograms.
The paper is organized as follows. In Section <ref> we discuss the code, simulations, and analysis. In
Section <ref> we describe the finite shock model for the density
PDF. In Section <ref> we discuss our updated distribution of speed. In Sections <ref> and <ref> we show our new joint
distribution. In Section <ref> we show that our model works
well even for higher order moments of the distribution. Finally we conclude in
Section <ref>.
§ METHODS
The suite of numerical simulations was performed using the hydrodynamic code Enzo <cit.> using the piecewise parabolic method <cit.>. The simulation domain consists of a cube of unit length with periodic boundary conditions. Each simulation is described by two parameters, the forcing mode ξ and Mach number M, both introduced via the Stochastic forcing module implemented within Enzo (Schmidt, Federrath, 2008). The forcing mode ξ∈ [0,1] is the weight of the solenoidal components of the forcing field. The value of ξ = 0 corresponds to the purely compressive forcing field, whereas ξ = 1 represents the purely solenoidal forcing.
The target Mach number is achieved by adding energy at the large scale at a rate equal to the Mach-number dissipation rate, ϵ ∼ M^3/L <cit.>.
For each Mach number M we define the turnover time τ as the time scale over which two frames become statistically uncorrelated. The turnover time is roughly equal to the turbulent crossing time τ_turb. = (L/2)/M, where L is the size of the box, L/2 is the size of the driving pattern, and M is the 1D r.m.s. Mach number, M = √(⟨ v^2 ⟩ / 3). Each simulation is run for 9 τ with outputs every 0.1 τ. For statistical purposes, only frames with t ≥ 2 τ are considered, after the fluid has settled into its chaotic turbulent motion. This yields 71 snapshots per simulation for statistical analysis. This approach to obtaining statistical data is common in similar astrophysical simulations <cit.>.
The simulation grid consists of N = 1024^3 cells with each cell ℓ containing the same volume δ V_ℓ = 1/1024^3. Our suite of simulations employed 1D r.m.s. Mach numbers 1, 2, 4, 8, and three values of the forcing parameter, ξ = 0, 1/2, 1.
Table <ref> describes the simulations and the resulting parameters. The first column names the simulation by way of forcing parameter and target Mach number. The second column shows the actual 1d Mach number realized by the simulation. The third column shows the ratio of volume-weighted Mach number to mass-weighted Mach number, 𝔛. The following two columns show the volume-weighted mean speed ⟨ v ⟩ and its mass-weighted counterpart ⟨ρ v ⟩. The final three columns show the volume-weighted mean and variance of s, μ and σ, and the number of shocks.
§.§ Analysis
The probability distribution function, f_Q(q), for a random quantity, Q, is
the probability that Q will realize a value within the interval [q,q+dq].
This can be found as
f_Q(q) =1/V∫_V d^3 x δ( q-Q(x⃗)),
where V is the volume of the sample.
We can alternatively weight our PDF with other quantities, W, as
f^(W)_Q(q) = 1/W_net∫_V d^3 x W(x⃗) δ( q -
Q(x⃗)),
where W_net is the total of W on the domain. This is useful as it gives
an alternative view of the variable.
We will find it valuable to explore weighting by volume (V), mass (M), and kinetic
energy (E). 2D PDFs weighted by different quantities are related to one another by the following useful formulae:
f^(M)_(s,v) (s, v) = e^s f^(V)_(s,v) (s, v)
f^(E)_(s,v) (s, v) = e^s v^2/⟨ e^s v^2 ⟩ f^(V)_(s,v) (s, v)
f^(E)_(s,v) (s, v) = v^2/⟨ e^s v^2 ⟩ f^(M)_(s,v) (s, v)
For 1D PDFs, the only simple analytic expressions possible are the following
f^(M)_s (s) = e^s f^(V)_s (s)
f^(E)_v (v) = v^2/⟨ e^s v^2 ⟩ f^(M)_v (v).
Relationships between other weights and quantities, e.g., f_v^(M)(v) and f_v^(V)(v), are only possible by integrating the joint distributions.
The ratio of volume-weighted Mach number to its mass-weighted counterpart will prove to be a useful quantity:
𝔛 = ⟨ v^2 ⟩/⟨ e^s v^2 ⟩ = M^2/M_M^2
which serves as a loose measure of the correlation between density and velocity. Here we have introduced the mass-weighted Mach number, M_M = √(⟨ρ v^2 ⟩/3).
For the purposes of numerically comparing histograms binned from data, f^(data), with a theoretical model f^(theory) we employ the L_1 norm
δ = ∑_bin b| f^(data)_b - f^(theory) (b_cen.) | |b|
where the model function is evaluated at the bin center b_cen. and |b| indicates the bin measure (length, area, volume, ...). This formula closely mimics the analogous integral L_1 norm.
§ DENSITY IN SUPERSONIC ISOTHERMAL TURBULENCE
The knowledge of the statistical properties of density within the star-forming clouds is one of the cornerstones of many star formation theories <cit.>. A turbulent medium without self-gravity can be shown to exhibit near lognormal density fluctuations, a result of the self-similar statistics of isothermal, supersonic flows <cit.>, later also extended to flows magnetized with ideal MHD <cit.>. In the scope of isothermal turbulence the PDF of log density s = logρ / ρ_0 can be approximated by a Gaussian
f_s (s; σ) = 𝒩 (s; - σ^2 / 2, σ) = 1/√(2 πσ^2) exp( - ( s + σ^2 / 2 )^2 / (2 σ^2) )
with variance σ^2 = ⟨ s^2 ⟩ - ⟨ s ⟩^2 and mean value μ = ⟨ s ⟩ = - σ^2 / 2, which fixes the mean density, ⟨ e^s ⟩ = 1. In the lognormal approximation, the variance is known to depend on the r.m.s. sonic Mach number, √(⟨ v^2 ⟩), and the weight of the solenoidal components of the forcing, ξ; σ^2 ≈ log( 1 + b^2 ⟨ v^2 ⟩ ) <cit.>.
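A minimal numerical sketch of this lognormal approximation follows, taking velocities in units of c_s so that ⟨v²⟩ is the squared r.m.s. Mach number; the specific b and Mach values are illustrative only (the commonly quoted range in the cited literature is b ≈ 1/3 for solenoidal to b ≈ 1 for compressive forcing).

# Sketch of the lognormal approximation above, with sigma^2 = log(1 + b^2 <v^2>)
# and the mean fixed so that <e^s> = 1.  Parameter values are illustrative.
import numpy as np

def lognormal_s_pdf(s, mach_rms, b=0.4):
    sigma2 = np.log(1.0 + b**2 * mach_rms**2)
    mu = -0.5 * sigma2                      # fixes the mean density <e^s> = 1
    return np.exp(-(s - mu)**2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

# Sanity checks of the normalisation and the mass constraint:
s = np.linspace(-20, 20, 100001)
f = lognormal_s_pdf(s, mach_rms=6.0, b=0.4)
print(np.trapz(f, s))                 # ~ 1
print(np.trapz(np.exp(s) * f, s))     # ~ 1, i.e. <rho>/rho_0 = 1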
While the lognormal approximation already provides a reasonably accurate picture of the density fluctuations, several works propose various corrections to the PDF of density, either purely within the context of turbulence <cit.>, or due to other phenomena extending beyond the framework of isothermal turbulence <cit.>.
In this work we make use of the finite shock model of density fluctuations <cit.>, that describes the PDF of log density s arising from a series of shocks traversing the turbulent medium, each adjusting the local density by a factor proportional to the local sonic Mach number, drawn from an idealized Maxwell distribution. When the number of the shocks grows to infinity, the PDF of density approaches a lognormal. However, for a finite number of shocks n, the distribution in s can be described via its characteristic function, ϕ
(s; μ, σ, n) = (1/σ) ∫_-∞^∞ ω ϕ (ω; n) exp( - i ω (s - μ)/σ )
where the parameters μ≡⟨ s ⟩ and σ^2 ≡⟨ s^2 ⟩ - μ^2 are the mean value of s and variance in s. The additional parameter n represents the number of shocks giving rise to a distribution with a negative skew. More details, along with the explicit form for ϕ can be found in <cit.>.
By default, the finite shock model PDF without a superscript is assumed to describe the volume-weighted statistics of log density s. To obtain its mass-weighted counterpart, we employ (<ref>)
f^(M)_s (s; μ, σ, n) = e^s (s; μ, σ, n)
The kinetic energy-weighted PDF of log density is derived in sec. <ref>.
§.§ Generating function of the finite shock model
For the purposes of calculating various expectation values within the finite shock model, we introduce the following parametric expectation value involving only (log) density
E (u, k; μ, σ, n) ≡⟨ s^k e^u s⟩ = ∫_-∞^∞ s^k e^u s (s; μ, σ, n)
Using the analytic properties of the characteristic function, we can easily calculate the expectation value for k = 0. Moreover, differentiation with respect to u brings down one power of s, increasing k by 1, which gives rise to a recurrent formula for k ≥ 1,
E (u, 0; μ, σ, n) = e^u μϕ (- i u σ; n)
E (u, k+1; μ, σ, n) = / u E (u, k; μ, σ, n)
In order to extract useful quantities from the characteristic function, we introduce two normalized functions, Φ_k(x) and F(Δ), which normalize out the first and second arguments of ϕ(ω;n) as follows
Φ_0 (x) ≡1/nlogϕ (- i √(n) x; n)
F (Δ) = 1/σ^2logϕ (- i σ; σ^2 / Δ^2).
If μ, σ, n are parameters of the volume-based distribution of log density, the conservation of total mass, ⟨ e^s ⟩ = 1, following equations (<ref>) and (<ref>), constraints μ as follows
μ = - logϕ (- i σ; n) = - n Φ_0 (σ / √(n))
which, as expected, reduces to - σ^2 / 2 when n →∞.
The number of shocks, n, for given values μ, σ can be estimated from equation (<ref>) and by inverting equation (<ref>)
n = σ^2/Δ(- μ / σ^2 )
where Δ≡ F^-1 denotes the solution to equation (<ref>).
Φ_k(x) for k > 0 are calculated as the derivative of Φ_0, and their explicit form for k = 1,2 is
Φ_1 (x) ≡Φ^' (x) = - i/√(n)ϕ^' (- i √(n) x; n)/ϕ (- i √(n) x; n)
Φ_2 (x) ≡Φ^'' (x) = - ϕ^'' (- i √(n) x; n)/ϕ (- i √(n) x; n) + ( ϕ^' (- i √(n) x; n)/ϕ (- i √(n) x; n))^2
The mass-weighted counterpart of the average log density, μ_M ≡⟨ s ⟩_M = ⟨ρ s ⟩ can be calculated using the generating function E with u = 1, k = 1, utilizing equation (<ref>),
μ_M = μ + √(n) σ Φ_1 (σ / √(n))
reducing to + σ^2 / 2 when n →∞.
Finally, it is possible to express the variance in s weighted by mass, σ_M^2 = ⟨ρ s^2 ⟩ - ⟨ρ s ⟩^2, using equation (<ref>) as follows
σ_M^2 = σ^2 Φ_2 (σ / √(n))
which reduces to σ_M = σ in the lognormal limit.
§.§ Energy-weighted density PDF
For the construction of the joint PDF of density and speed as outlined in sec. <ref>, the kinetic energy-weighted histogram of density must be known. We already explored the mass-weighted PDF, f^(M)_s (s; μ, σ, n) = e^s (s; μ, σ, n), and its statistics in the previous paragraph. However, equation (<ref>) indicates that the conversion from the mass-weighted to the energy-weighted instance of the density PDF would require marginalization of the full joint PDF weighted by a factor of v^2. Since the full PDF is not known, this approach is not feasible. To sidestep this problem, we propose an explicit form for the energy-weighted PDF based on the finite shock model. First, we notice that the mass- and energy-weighted standard deviations of logρ are approximately equal,
σ_E ≈σ_M
to a high degree of accuracy. The highest relative difference between the two is observed to be less than 3% in the compressive simulation with Mach number 2 (see Figure <ref>). This remarkable match allows for the following educated guess; since the width of the log density PDF does not change between the mass- and energy-weighted instances, we assume, that the two share the same general shape. The only freedom left after this assumption has been made is an arbitrary argument shift, that can be expressed as
f^(E)_s (s) = f^(M)_s (s + δ s) = e^{s + δ s} (s + δ s; μ, σ, n).
As a consequence, the difference between the mean of s weighted by energy and mass is δ s; μ_M - μ_E = δ s. To determine δ s we look at the energy-weighted mean of 1/ρ,
⟨ e^-s⟩_E = ⟨ v^2 ⟩/⟨ e^s v^2 ⟩ = 𝔛
where 𝔛 = ⟨ v^2 ⟩ / ⟨ρ v^2 ⟩ was introduced in equation <ref>.
Going back to our proposed shape for f^(E)_s, we use this newly found mean value to determine δ s
𝔛 = ⟨ e^-s⟩_E = ∫_-∞^∞ s e^-s f^(E)_s (s) = e^δ s ⟹ δ s = log𝔛
which translates to the following shift in μ_E
μ_E = μ_M - log𝔛
Given the shift, the energy-weighted PDF can be written using the finite shock model as
f^(E)_s (s; 𝔛, μ, σ, n) = 𝔛 e^s (s; μ - log𝔛, σ, n)
Figure <ref> shows the relative error between the estimators for σ_M, E and the values measured from the simulations as filled circles. The calculated value for σ_M was obtained from μ, σ, n using equation (<ref>), where n is given by equation (<ref>). Subsequently, σ_E is assumed to be equal to σ_M per equation (<ref>). Figure <ref> also shows the error between the estimated and measured means μ_M, E (filled stars) obtained from equations (<ref>, <ref>). These errors are taken relative to their respective σ_M,E, | μ_M, E^(data) - μ_M, E^(est.)| / σ_M, E^(data). This reduction was chosen due to the overall scale of a Gaussian-like distribution being set by its respective standard deviation σ; two Gaussian distributions with equal widths σ only differ substantially from each other if their means μ disagree significantly on the scale given by σ. The difference between the estimated and measured mass- and energy-weighted values of mean and standard deviation of log density is below 5 % for all simulations, demonstrating the accuracy and consistency of the approximations derived in this section.
Figure <ref> shows the plots of f^(E)_s (s; 𝔛, μ, σ, n) compared to the histograms extracted from the simulations, by using the values of 𝔛, μ, σ directly measured from the histograms. These values are used to determine n using equation (<ref>). Subsequently, equation (<ref>) with the determined parameters and the finite shock model for the volume-weighted basis is plotted alongside the data. The match between the model equipped by estimated parameters and the histograms is remarkable, considering the approximations made along the way.
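The shift δs = log 𝔛 can also be verified numerically. The sketch below does so in the lognormal (n → ∞) limit, used here only as a stand-in for the finite shock model; the chosen values of 𝔛 and σ are illustrative.

# Numerical sanity check of the shift delta_s = log(X) above, in the lognormal limit.
import numpy as np

def f_s_V(s, mu, sigma):                       # volume-weighted PDF (lognormal limit)
    return np.exp(-(s - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

X, sigma = 1.8, 1.5                            # illustrative values of X and sigma
mu = -0.5 * sigma**2                           # volume-weighted mean, so that <e^s> = 1
s = np.linspace(-25, 25, 200001)

f_E = X * np.exp(s) * f_s_V(s, mu - np.log(X), sigma)   # energy-weighted PDF as above
print(np.trapz(f_E, s))                        # ~ 1   (normalised)
print(np.trapz(np.exp(-s) * f_E, s))           # ~ X   (recovers <e^-s>_E = X)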
§ PDF OF SPEED
The velocity field within in isothermally turbulent medium can, due to the chaotic nature of turbulence, also be treated as a random variable with certain statistical properties. While the exact distribution depends on the driving, several assumptions can be made to derive a simple distribution for the magnitude of velocity.
Assuming independence of all components of the velocity and isotropic driving, an argument similar to that of <cit.> can be used to infer that the velocity is Gaussian in each direction, with equal variance in every component. Thus, the speed is drawn from the following Maxwellian distribution
f_v (v; M) = ℳ (v; M) = 4 π v^2/(2 π c_s^2 M^2)^3/2exp( - v^2/2 c_s^2 M^2),
where M is the 1D r.m.s. Mach number.
In what follows we will set c_s = 1 for the sake of brevity.
While the vast majority of the literature on velocity fluctuations focuses on two-point statistics and power spectra, several previous works address the deviations from the ideal Maxwellian shape of the PDF of speed in compressible and incompressible isothermal turbulence <cit.>. The slope of the distribution above the maximum is observed to be steepened compared to the ideal Maxwellian, as can be seen from a direct comparison in Figure <ref>. The three-dimensional geometry of the simulation implies that the prefactor v^2 is preserved under very general assumptions about the original distribution of the velocity, f (v⃗) → f (v) ∼ v^2 + ⋯. Thus, the steepening can only be reflected in a higher-order term, for example a quartic correction inside the exponential,
f^(V,M)_v (v; M, b) = (v; M, b) ∝ v^2 exp[ - v^2/2 a^2( 1 - b + b v^2/a^2) ]
where a is a parameter carrying the units of speed, which is adjusted so that the root mean square of v matches the desired Mach number, 3 M^2 = ⟨ v^2 ⟩. The parameter b ∈ [0, 1] adjusts the amount of steepening; when b = 0, the ideal Maxwellian shape is restored, whereas for b = 1, the tail behaves like ∼ v^2 e^- v^4.
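To make the role of a and b concrete, here is a small numerical sketch that normalizes the quartic-corrected Maxwellian of equation (<ref>) and calibrates a so that ⟨ v^2 ⟩ = 3 M^2; the bisection bracket and the example parameter values are ad hoc choices made for this illustration, not the procedure used for the datasets.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def tilted_maxwellian(v, a, b):
    """Unnormalized v^2 exp[-v^2/(2 a^2) (1 - b + b v^2/a^2)]."""
    x2 = (v / a) ** 2
    return v**2 * np.exp(-0.5 * x2 * (1.0 - b + b * x2))

def calibrate_a(M, b):
    """Find the scale a such that <v^2> = 3 M^2 (with c_s = 1), plus the normalization."""
    def mean_v2(a):
        norm = quad(tilted_maxwellian, 0, np.inf, args=(a, b))[0]
        m2 = quad(lambda v: v**2 * tilted_maxwellian(v, a, b), 0, np.inf)[0]
        return m2 / norm, norm
    a = brentq(lambda a: mean_v2(a)[0] - 3.0 * M**2, 0.1 * M, 10.0 * M)
    return a, mean_v2(a)[1]

# b = 0 recovers the ideal Maxwellian, for which a = M exactly; for b > 0 the quartic
# term suppresses the tail, so a larger scale a is needed to keep the same r.m.s. speed
a0, _ = calibrate_a(M=2.0, b=0.0)
a1, _ = calibrate_a(M=2.0, b=0.5)
print(a0, a1)
```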
Note that the functional form of equation (<ref>) can be used to describe both the volume- and the mass-weighted PDF of speed, with unique parameters M, b in each case. The kinetic energy-weighted histogram of speed can be determined using equation (<ref>).
The difference between the newly introduced correction and its Maxwellian counterpart when b = 0, apart from the shape of the PDF, manifests in the following ratio of the expectation values of powers of magnitude of speed
⟨( v⃗·v⃗)^α⟩/⟨( v⃗·v⃗)^α⟩_(b=0)≡ h_α (b).
The function h_α only depends on the power, α, and the tilt parameter, b. While it does not have an analytic form, it can easily be tabulated and inverted numerically.
Specifically, for the pure Maxwellian, the expected results are
⟨( v⃗·v⃗)^α⟩_(b=0) = ∫_0^∞ v^(2α) f_v (v; M) dv = (2^(α+1)/√(π)) M^(2α) Γ (α + 3/2)
which simplifies to (2n+1)!! M^2n for integer α = n; however, extra care should be taken for half-integer α, as the double-factorial formula does not match the form in equation (<ref>). Lower values of α are numerically the most reliable; for example, for α = 1/2, we can relate the ensemble average ⟨ v ⟩ to the sloping parameter b as follows
√(π/8)⟨ v ⟩/M = √(π/8)/M⟨√(v⃗·v⃗)⟩ = h_1/2 (b) → b = h_1/2^-1( √(π/8)⟨ v ⟩/M)
This equation can be used to estimate the value of the parameter b for a given set of measured ensemble averages ⟨ v ⟩ and the Mach number M. Table <ref> lists the simulation parameters along with the ensemble averages of v and the Mach number (both volume- and mass-weighted). Figure <ref> compares the perfect Maxwellian shape, obtained from the Mach number M alone, with the correction (<ref>) obtained by additionally measuring the parameter ⟨ v ⟩ for each simulation. While the Maxwellian form fails to fit the data for v > M due to the prominent steepening of the slope of the distribution in this region, the quartic correction approximates the dataset much better.
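The tabulation and numerical inversion of h_{1/2} described above can be sketched as follows; the grid resolution and the crude nearest-grid-point inversion are illustrative simplifications.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def h_alpha(b, alpha):
    """h_alpha(b) = <(v.v)^alpha>_b / <(v.v)^alpha>_{b=0} at fixed r.m.s. Mach number.
    In reduced units u = v/a the scale a cancels against M^(2 alpha)."""
    w = lambda u, p: u**(2 + p) * np.exp(-0.5 * u**2 * (1.0 - b + b * u**2))
    norm = quad(w, 0, np.inf, args=(0,))[0]
    m2a = quad(w, 0, np.inf, args=(2.0 * alpha,))[0] / norm   # <u^(2 alpha)>
    m2 = quad(w, 0, np.inf, args=(2,))[0] / norm              # <u^2> = 3 (M/a)^2
    maxwell = 2.0**(alpha + 1.0) / np.sqrt(np.pi) * gamma(alpha + 1.5)  # b = 0 value at M = 1
    return m2a * (3.0 / m2) ** alpha / maxwell

# tabulate h_{1/2} on a grid of b and invert it by a nearest-grid-point lookup
b_grid = np.linspace(0.0, 1.0, 201)
h_grid = np.array([h_alpha(b, 0.5) for b in b_grid])

def estimate_b(mean_v, M):
    """Tilt parameter implied by sqrt(pi/8) <v> / M = h_{1/2}(b)."""
    target = np.sqrt(np.pi / 8.0) * mean_v / M
    return float(b_grid[np.argmin(np.abs(h_grid - target))])

print(h_alpha(0.0, 0.5))                               # -> 1.0 by construction
print(estimate_b(np.sqrt(8.0 / np.pi) * 2.0, 2.0))     # ideal Maxwellian input -> b ~ 0
```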
In line with the original argument for the Maxwellian distribution of speeds, based on rotational symmetry and the independence of the individual components of velocity, one might wonder which assumption (if not both) is violated. Arguments from the power spectrum of velocity <cit.> and direct numerical simulations <cit.> show that the tails of the PDFs of the individual velocity components are sub-Gaussian, which by itself does not settle the question of dependence or independence of the components. A full study of the velocity statistics is interesting, but outside the scope of this work.
§ JOINT PDF OF DENSITY AND SPEED: GENERAL THEORY
We now turn to the joint distribution of density and speed, f_(s,v)(s,v). Having already described the statistics of each variable separately, we must now ask whether the two are dependent, since
(s, v) independent ⟺ f_(s,v) (s,v) = f_s (s) f_v (v).
If the random variables are truly independent, the joint PDF is fully described by the product of its marginalized parts. Conversely, if there is dependence between s and v, f_(s,v) is not the product of the marginalized distributions. We will first show that the latter is in fact the case; our correction to the product of the marginals will then be developed in the next section.
To demonstrate the dependence between s and v, we exploit another, equivalent, definition of independence of random variables: s and v are independent iff ⟨ h_1 (s) h_2 (v) ⟩ = ⟨ h_1 (s) ⟩⟨ h_2 (v) ⟩ for any two functions h_1(s), h_2(v), i.e., iff the average of the product equals the product of the averages for every such pair. Consequently, if we find a single combination for which ⟨ h_1 (s) h_2 (v) ⟩≠⟨ h_1 (s) ⟩⟨ h_2 (v) ⟩, the variables must be dependent.
One such choice is h_1(s)=e^s=ρ and h_2(v)=v^2. We will show that ⟨ρ v^2 ⟩≠⟨ρ⟩⟨ v^2 ⟩. The quantity ⟨ρ v^2 ⟩ can be interpreted in terms of the mass-weighted r.m.s. Mach number, and is related to the mean kinetic energy density ε,
ε = E / V = ⟨1/2ρ v^2 ⟩ = 3/2ρ_0 M_M^2
where E is the total kinetic energy, E = ε V. We parameterize the correlation using 𝔛=⟨ v^2⟩/⟨ρ v^2⟩ and show that it is different from one, demonstrating dependence.
Table <ref> features all parameters measured from the simulations. As seen from the values of 𝔛, the values of ⟨ v^2 ⟩ and ⟨ρ v^2 ⟩ differ by at least 10% in all simulations, which indicates that density and speed are correlated and therefore, to some extent, dependent quantities.
The non-zero correlation between density and speed complicates the joint statistics, since the joint PDF cannot be written as a product of the 1D marginalized PDFs. However, motivated by the fact that the product of the 1D marginalized PDFs is relatively close to the joint PDF, in the following section <ref> we propose a simple correction term added to the product of marginalized distributions, allowing for a simple, consistent description of the joint statistics.
§.§ Correction term to the joint PDF
The relative proximity between the true joint PDF and the product of its marginalized subparts leads us to believe that a simple, small correction to the latter can be used to model the dependence between s and v,
f_(s,v) (s,v) = f_s (s) f_v (v) + g (s, v).
Given full freedom in g, this approach can describe the joint PDF perfectly. However, full knowledge of such a correction is akin to knowing the joint PDF itself. Instead, we resort to a reasonable approximation; let us assume that the function g can also be written as a product of two single-variable functions,
g (s, v) = g_s (s) g_v (v).
The main task is to determine the single-variable functions g_s, v using the various methods of weighting outlined in <ref>. Note that, since integrating out one of the variables must yield the marginalized PDF of the other variable, the integral over each single-variable g_s,v must be equal to zero. Therefore, to reveal the correction term in each variable, we need a way to break this symmetry by introducing a factor involving one of the variables. This can be done using the paradigm of weighted histograms, as weighting by different positive quantities naturally imposes factors involving density and speed.
To proceed, we consider the mass-weighted joint PDF of s and v as the basis for our calculations,
f_(s,v)^(M) (s, v) = f_s^(M) (s) f_v^(M) (v) + g_s^(M) (s) g_v^(M) (v),
and compare it to the volume- and kinetic energy-weighted joint PDFs, which can be related to the mass-weighted basis using equations (<ref>, <ref>)
f_(s,v)^(V) (s, v) = e^-s[ f_s^(M) (s) f_v^(M) (v) + g_s^(M) (s) g_v^(M) (v) ]
f_(s,v)^(E) (s, v) = v^2/3 M_M^2[ f_s^(M) (s) f_v^(M) (v) + g_s^(M) (s) g_v^(M) (v) ]
The factors introduced this way break the symmetry of the correction terms under integration over the involved variable. Firstly, by definition, integrating the mass-weighted instance of the joint PDF over one of the variables yields the baseline mass-weighted marginalized distribution of the other variable,
∫_-∞^∞ f_(s,v)^(M) (s, v) ds = f_v^(M) (v)
∫_0^∞ f_(s,v)^(M) (s, v) dv = f_s^(M) (s)
If we now use the fact that ⟨ e^-s⟩_M = ⟨ 1 ⟩ = 1 and ⟨ v^2 ⟩_M = ⟨ e^s v^2 ⟩ = 3 M_M^2 = 2 ε, we can explicitly integrate out s in the volume-weighted case and v in the energy-weighted instance to get
∫_-∞^∞ f_(s,v)^(V) (s, v) ds ≡ f_v^(V) (v) = f_v^(M) (v) + A g_v^(M) (v)
∫_0^∞ f_(s,v)^(E) (s, v) dv ≡ f_s^(E) (s) = f_s^(M) (s) + B g_s^(M) (s)
where A, B are non-zero constants given by the integrals of the mass-weighted g-functions of s and v with the additional factors of e^-s and v^2, respectively. As we can see, the factors associated with the different weightings break the symmetry of an otherwise identically vanishing integral. Solving equations (<ref>) and (<ref>) for the g-functions, we find:
g_s^(M) (s) ∼ f_s^(E) (s) - f_s^(M) (s)
g_v^(M) (v) ∼ f_v^(V) (v) - f_v^(M) (v).
The corrected joint PDF of logρ and v is found by inserting these into equation (<ref>):
f_(s,v)^(M) (s,v) = f_s^(M) (s) f_v^(M) (v) +
+ C ( f_s^(E) (s) - f_s^(M) (s) ) ( f_v^(V) (v) - f_v^(M) (v) )
where C is a constant accommodating the proportionality relation of the g-terms to the differences in the brackets.
This method is successful under two conditions: first, we had to assume that the correction g can be written as a product of two single-variable functions. Second, the single-variable functions must be well described by the finite shock model function and the tilted Maxwellian, for some choice of the parameters, regardless of the method of weighting. It should be noted that, although the derivation mainly focuses on the mass-weighted version of the histogram, this functional form can be converted to the volume-weighted instance of the joint PDF by multiplying by a factor e^-s. Since f^(M)_s (s) = e^s f^(V)_s (s), we can write the volume-weighted joint PDF as follows
f_(s,v)^(V) (s,v) ≈ f_s^(V) (s) f_v^(M) (v) +
+ C ( e^-s f_s^(E) (s) - f_s^(V) (s) ) ( f_v^(V) (v) - f_v^(M) (v) )
The expression for C,
C = (𝔛 - 1)^-1,
can be found by multiplying equation (<ref>) by v^2, integrating over speed and demanding that both sides equal 3 M_M^2 f^(E)_s (s).
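For bookkeeping purposes, the assembly of the corrected joint PDF with C = (𝔛 - 1)^-1 can be written compactly as in the short Python sketch below; the marginal PDFs enter only as callables, so any of the 1D models discussed above can be plugged in.

```python
import numpy as np

def corrected_joint_pdf(s, v, X, f_s_V, f_s_E, f_v_V, f_v_M):
    """Volume-weighted joint PDF of s = log(rho) and v on a grid: the product of the
    marginals plus the separable correction term with C = 1/(X - 1).
    f_s_V, f_s_E -- volume- and energy-weighted marginal PDFs of s (callables)
    f_v_V, f_v_M -- volume- and mass-weighted marginal PDFs of v (callables)"""
    S, V = np.meshgrid(s, v, indexing="ij")
    C = 1.0 / (X - 1.0)
    product = f_s_V(S) * f_v_M(V)
    correction = (np.exp(-S) * f_s_E(S) - f_s_V(S)) * (f_v_V(V) - f_v_M(V))
    return product + C * correction

# Both speed marginals are normalized, so the v-integral of the correction vanishes and
# marginalizing the result over v returns f_s_V, as required.
```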
§ JOINT PDF: SPECIFIC REALIZATIONS
In what follows we suggest several choices of basis functions to build up the joint distribution: first, we use the simplest basis possible, consisting of a Gaussian in s and a Maxwellian in v. We then utilize our updated marginalized pictures, using the finite shock model and a tilted Maxwellian, to obtain a much better description of the joint distribution.
§.§ Minimal model
In this section we describe the joint PDF using the simplest basis distributions: the normal distribution 𝒩 (s; μ, σ) with mean μ and variance σ^2, and a simple Maxwellian ℳ (v; M), where M is the 1D r.m.s. Mach number. The minimum number of parameters needed to describe the distribution is three: M, 𝔛, σ. These three allow us to directly describe the volume-weighted distribution of density, approximated by 𝒩 (s; μ, σ) with μ = -σ^2 / 2, the volume-weighted distribution of speed, approximated by ℳ (v; M), and also the mass-weighted distribution of speed, using the Maxwellian with the parameter M_M = M / √(𝔛). The energy-weighted distribution of log density is approximated as exp (s + log𝔛) 𝒩 (s; μ - log𝔛, σ). With these considerations in mind, the joint PDF can then be written as
f^(V)_(s,v) (s,v; M, 𝔛, σ) = 𝒩 (s; μ, σ) ℳ (v, M_M) +
+ (𝔛 - 1)^-1( 𝔛 𝒩 (s; μ - log𝔛, σ) - 𝒩 (s; μ, σ) ) ×
× ( ℳ (v; M) - ℳ (v; M_M) )
While this model does not aspire to fit the true shape of the 2D histogram, it fully preserves the measured parameters and expected relations between them.
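A minimal numerical sketch of this three-parameter model reads as follows; the parameter values in the usage example are illustrative and are not taken from the simulations.

```python
import numpy as np

def normal_pdf(s, mu, sigma):
    return np.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def maxwellian_pdf(v, M):
    # c_s = 1; M is the 1D r.m.s. Mach number
    return 4.0 * np.pi * v**2 / (2.0 * np.pi * M**2) ** 1.5 * np.exp(-(v**2) / (2.0 * M**2))

def minimal_joint_pdf(s, v, M, X, sigma):
    """Volume-weighted joint PDF of the minimal model: Gaussian in s and Maxwellian in v,
    with mu = -sigma^2/2 and M_M = M / sqrt(X)."""
    mu = -0.5 * sigma**2
    M_M = M / np.sqrt(X)
    S, V = np.meshgrid(s, v, indexing="ij")
    C = 1.0 / (X - 1.0)
    return (normal_pdf(S, mu, sigma) * maxwellian_pdf(V, M_M)
            + C * (X * normal_pdf(S, mu - np.log(X), sigma) - normal_pdf(S, mu, sigma))
                * (maxwellian_pdf(V, M) - maxwellian_pdf(V, M_M)))

# illustrative parameters (not taken from the simulations discussed in the text)
s = np.linspace(-10.0, 6.0, 321)
v = np.linspace(0.0, 12.0, 241)
pdf = minimal_joint_pdf(s, v, M=2.0, X=1.2, sigma=1.5)
print(np.trapz(np.trapz(pdf, v, axis=1), s))   # ~ 1: the joint PDF is normalized
```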
Figure <ref> shows the joint PDF of s (horizontal axis) and v (vertical axis). Histograms obtained from the simulated data are displayed via solid contours, with color denoting the fraction of probability; our minimal model of the joint PDF is overlaid as dashed contours. Since the minimal model only uses three parameters directly measured from the data, it cannot, in its simplicity, fully capture the joint PDF. The most jarring difference occurs in the compressively driven simulations with high r.m.s. Mach number, manifesting as a large shift of the maximum. This is due to the crude approximation μ = - σ^2 / 2. In reality, μ is far from this value; moreover, the true maximum of the density PDF is further shifted to the right due to the very low number of shocks inferred from these datasets.
While the maximum of the proposed simple model is shifted with respect to the true maximum of the distribution due to the approximations we used, the general shape matches that of the measured histograms.
§.§ Detailed basis
For the final, most detailed form of our model of the joint distribution, we replace each function with its more detailed counterpart: the finite shock model function (s; μ, σ, n) instead of a simple Gaussian, and the tilted Maxwellian for speed (v; M, b) in place of the ideal Maxwellian. This way, we need to provide six parameters to fully describe the joint distribution: M, 𝔛, u, u_M, μ, σ, where u, u_M are two new measured quantities equal to u=⟨√(v⃗·v⃗)⟩ and u_M=⟨√(v⃗·v⃗)⟩_M = ⟨ρ√(v⃗·v⃗)⟩, which define b and b_M via equation (<ref>). The parameter n is inferred from μ, σ using equation (<ref>).
The function can then be written as
f^(V)_(s, v) (s, v; M, 𝔛, u, u_M, μ, σ) = (s; μ, σ, n) (v; M_M, b_M) +
+ (𝔛 - 1)^-1( 𝔛 (s; μ - log𝔛, σ, n) - (s; μ, σ, n) ) ×
×( (v; M, b) - (v; M_M, b_M) )
Figure <ref> shows the comparison between the model with the detailed basis and the histograms extracted from the datasets. Notice the remarkable match between the two without any additional fitting. Even the noisiest dataset, the compressively driven Mach 8 simulation, is described very closely by our model in the regions with low noise, and the model extrapolates naturally into the region with larger density and higher noise.
§ MOMENTS OF THE JOINT DISTRIBUTION
To corroborate our model of the joint distribution, we compare various moments, C_ℓ, m = ⟨ s^ℓ v^2m⟩, between our model and the data. The moments implied by our model can be expressed via the measured quantities as follows
C_ℓ,m = (2m+1)!! M_M^2m[ E (0, ℓ; μ, σ, n) h_m (b_M) +
(𝔛 - 1)^-1( 𝔛 E (0, ℓ; μ - log𝔛, σ, n) - E (0, ℓ; μ, σ, n) ) ×
( 𝔛^m h_m (b) - h_m (b_M) ) ].
In the case of the simple model using the parameters M, 𝔛, σ, the correlators can be obtained from the same formula by taking n →∞, μ = -σ^2 / 2 and b = b_M = 0.
Figure <ref> shows the ratio of the calculated vs. simulated moments of the joint distribution, C_ℓ,m for integers 1 ≤ℓ, m ≤ 5. For the sake of clarity, all moments are normalized by their uncorrelated value assuming lognormal density and Maxwellian speed, C̃_ℓ, m = C_ℓ, m / ( ⟨ s^ℓ⟩⟨ v^2m⟩ ). Moments generated using the simple model are depicted by red points, those of the detailed model by blue points. The shape of the points represents the size of ℓ^2+m^2; the lowest powers are denoted by circles, intermediate powers by diamonds and the highest combinations of powers by stars. It can be seen that for most combinations of exponents, the detailed model matches the simulated moments substantially better than the simple model.
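The measured side of this comparison amounts to simple weighted averages over the snapshot; a minimal sketch is given below, using toy data in place of the simulation snapshots and assuming a uniform grid so that plain means are volume-weighted.

```python
import numpy as np

def normalized_moments(rho, v, l_max=5, m_max=5):
    """Measured C_{l,m} = <s^l v^(2m)> normalized by the uncorrelated value <s^l><v^(2m)>,
    with s = log(rho); plain cell averages assume a uniform grid."""
    s = np.log(rho)
    C = np.empty((l_max, m_max))
    for l in range(1, l_max + 1):
        for m in range(1, m_max + 1):
            C[l - 1, m - 1] = (np.mean(s**l * v**(2 * m))
                               / (np.mean(s**l) * np.mean(v**(2 * m))))
    return C

# toy check: for statistically independent rho and v all entries are ~ 1
rng = np.random.default_rng(1)
rho = np.exp(rng.normal(-0.5, 1.0, size=200_000))
v = np.linalg.norm(rng.normal(0.0, 2.0, size=(200_000, 3)), axis=1)
print(normalized_moments(rho, v).round(2))
```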
§.§ Correlation coefficient
The Pearson correlation coefficient corr (s,v) is a special case of a normalized moment of the joint distribution and can be expressed using our model. The term ⟨ s v ⟩ needed to calculate corr (s,v) can be obtained from equation (<ref>) by setting ℓ = 1, m = 1/2,
corr (s,v) = (⟨ s v ⟩ - ⟨ s ⟩⟨ v ⟩)/(σ_s σ_v) = - (u - u_M)/(σ√(3 M^2 - u^2)) · 𝔛log𝔛/(𝔛 - 1)
This expression is compared to the measured correlation coefficients in Figure <ref>. The largest correlation coefficients, occurring in the datasets with the lowest Mach numbers, match the measurement more accurately, whereas with increasing Mach number and decreasing correlation, the estimate of the correlation deviates somewhat from the measured value.
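For completeness, the estimator above and the directly measured correlation coefficient can be evaluated as follows; the parameter values in the usage line are illustrative only.

```python
import numpy as np

def model_corr_sv(u, u_M, sigma, M, X):
    """corr(s, v) implied by the model:
    -[(u - u_M) / (sigma * sqrt(3 M^2 - u^2))] * [X log(X) / (X - 1)]."""
    return -(u - u_M) / (sigma * np.sqrt(3.0 * M**2 - u**2)) * X * np.log(X) / (X - 1.0)

def sample_corr_sv(rho, v):
    """Direct (volume-weighted) sample Pearson correlation of s = log(rho) and v."""
    return np.corrcoef(np.log(rho), v)[0, 1]

print(model_corr_sv(u=3.2, u_M=3.0, sigma=1.5, M=2.0, X=1.2))  # illustrative values
```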
§ CONCLUSIONS
In the present work we developed a new model of the joint distribution of log density s and speed v by introducing a correction term to the product of the marginalized 1D PDFs of the individual variables. By marginalizing over differently weighted instances of the proposed 2-dimensional PDF we were able to describe the correction term using a simple set of 1D distributions of each variable weighted by volume, mass or kinetic energy. We proposed 3 different shapes of the overall distribution, depending on the complexity of the basis functions, ranging from the simplest Gaussian in s and Maxwellian in v to the most detailed basis comprised of the finite shock model in s and the tilted Maxwellian (with a quartic correction) in v. Along the way we found that the kinetic energy-weighted histogram of log density has the same overall shape as its mass-weighted counterpart, and is shifted by δ s = log𝔛 = log( ⟨ v^2 ⟩ / ⟨ρ v^2 ⟩) to the left. The overall match between the shapes is closely related to the fact that σ_M = σ_E, i.e. the mass- and kinetic energy-weighted variances of log density are equal to each other. The shift between the PDFs can be interpreted as the difference between the mass- and kinetic energy-weighted means of log density, μ_M - μ_E = log𝔛.
Our model was confronted with simulated data from Enzo with compressive, mixed and solenoidal driving, each at 4 different 1D sonic Mach numbers M = 1, 2, 4, 8. The parameters of the model are directly measured from each simulation, with no additional fitting needed. The model using the detailed basis functions matches the simulated histograms to a high degree of precision, even when density and speed are correlated to a considerable degree. The match between each model and the histograms is measured by the L_1 norm; for the detailed basis, the overall difference is at most 4.5% in the worst case scenario. It should be noted that feeding the model parameters taken from an ensemble leads to a reasonable match even upon re-weighting by mass or energy, e.g. see Figure <ref>. This is in contrast to fitting one of the instances (for example the volume-weighted histogram) by varying the parameters of the model, which makes the match between a differently weighted histogram and its measured counterpart suboptimal.
In addition to matching the histograms, we computed a set of 25 moments for each model, ⟨ s^ℓ v^2 m⟩ (1 ≤ℓ, m ≤ 5), which are compared to the moments measured directly from each simulation. Unsurprisingly, the model utilizing the detailed basis functions provides the closest match between the estimated values and their measured counterparts, with a discrepancy of at most a factor of 2, occurring for the highest powers of ℓ, m.
In this work we focused on supersonic turbulent flows, in which density and speed become less correlated with increasing Mach number, regardless of the forcing mode. At the same time, the number of shocks, inferred from the statistics of density alone, decreases with Mach number, resulting in a more tilted distribution. Both of these effects can be explained in the same framework of shocks and rarefaction waves. The shock waves propagating through a supersonic, turbulent medium exhibit, on average, higher density with increasing Mach number. However, due to overall mass conservation, the volume available for such a shock to occupy is smaller, resulting in a limited longitudinal size of the shock wave. On the other hand, rarefaction waves, following behind the shocks, tend to reset the density towards the mean. Since the shock waves are faster and smaller in more turbulent gas, the number of shocks experienced by the gas before it resets to the ambient density is smaller. This is paralleled by the weakening correlation between density and speed.
Overall, our model suggests that the correlations between density and speed are an integral part of the complete picture of the statistics of a turbulent, supersonic, isothermal flow. Moreover, with the knowledge of the full joint PDF of density and speed, further insight into the statistics of turbulence can be attained, such as exploring the statistics of thermal and kinetic energy.
§ DATA AVAILABILITY
Simulation data present here is available on request ([email protected]).
§ ACKNOWLEDGEMENTS
Support for this work was provided in part by the National Science Foundation
under Grant AAG-1616026 and AAG-2009870. Simulations were performed on Stampede2, part of the Extreme Science and Engineering Discovery Environment <cit.>, which is supported by National Science Foundation grant number
ACI-1548562, under XSEDE allocation TG-AST140008.
|
http://arxiv.org/abs/2307.05566v1 | 20230710013526 | Scalable Protocol for ZZ-Crosstalk Mitigation in Universal Quantum Gates | [
"Yan Liang",
"Ming-Jie Liang",
"Sai Li",
"Z. D. Wang",
"Zheng-Yuan Xue"
] | quant-ph | [
"quant-ph"
] |
Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education),
and School of Physics, South China Normal University, Guangzhou 510006, China
Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education),
and School of Physics, South China Normal University, Guangzhou 510006, China
Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials,
Guangdong-Hong Kong Joint Laboratory of Quantum Matter, and Frontier Research Institute for Physics,
South China Normal University, Guangzhou 510006, China
[email protected]
Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Department of Physics,
and HK Institute of Quantum Science & Technology, The University of Hong Kong, Pokfulam Road, Hong Kong, China
[email protected]
Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education),
and School of Physics, South China Normal University, Guangzhou 510006, China
Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials,
Guangdong-Hong Kong Joint Laboratory of Quantum Matter, and Frontier Research Institute for Physics,
South China Normal University, Guangzhou 510006, China
Hefei National Laboratory, Hefei 230088, China
High-fidelity universal quantum gates are widely acknowledged as essential for scalable quantum computation. However, in solid-state quantum systems, which hold promise as physical implementation platforms for quantum computation, the inevitable ZZ-crosstalk resulting from inter-qubit interactions significantly impairs quantum operation performance. Here we propose a scalable protocol to achieve ZZ-crosstalk mitigation in universal quantum gates. This method converts the noisy Hamiltonian with ZZ-crosstalk into a framework that efficiently suppresses all ZZ-crosstalk effects, leading to ideal target quantum operations. Specifically, we first analytically derive the ZZ-crosstalk mitigation conditions and then apply them to enhance the performance of target universal quantum gates. Moreover, numerical simulations validate the effectiveness of ZZ-crosstalk mitigation when multiple qubit gates operate concurrently. As a result, our protocol presents a promising approach for implementing practical parallel quantum gates in large-scale quantum computation scenarios.
Scalable Protocol for ZZ-Crosstalk Mitigation in Universal Quantum Gates
Zheng-Yuan Xue
August 12, 2023
=========================================================================
§ INTRODUCTION
Quantum computation is an emerging technology that leverages the principles of quantum mechanics to tackle problems that are intractable for classical computation, such as factorization of large integers <cit.> and database searching <cit.>. The success of quantum computation relies on the implementation of high-fidelity quantum operations. However, quantum crosstalk in large-scale quantum systems influences parallel quantum operations, leading to error accumulation and propagation <cit.>, which can ultimately cause the quantum computation process to fail. The ZZ crosstalk resulting from inter-qubit interactions is prevalent in various quantum systems, such as semiconductor and superconducting qubits <cit.>, and leads to correlated and nonlocal errors <cit.>, as well as spectator errors <cit.>. Therefore, the development of universal quantum gates that can withstand the effects of ZZ crosstalk is crucial for large-scale quantum computation.
Recently, significant efforts have been devoted to mitigating the ZZ crosstalk effect in quantum information processing. Firstly, various hardware-based strategies have been proposed, including the integration of tunable couplers or buses <cit.>, and the utilization of qubits with opposite anharmonicity <cit.>. However, these strategies heavily rely on the precision of hardware manufacturing, posing substantial challenges. Secondly, quantum control strategies, such as the simultaneous ac Stark effect on coupled qubits <cit.> and dynamical decoupling <cit.>, offer alternative approaches that can alleviate hardware requirements while suppressing ZZ crosstalk. However, their extensibility is limited due to the lack of control freedom, and they cannot be utilized for constructing universal quantum gates.
In this paper, we introduce a scalable protocol for implementing universal quantum gates with ZZ-crosstalk mitigation (ZZCM) in large-scale quantum systems. The quantum system with a ZZ-crosstalk Hamiltonian is transformed into a framework that suppresses all ZZ-crosstalk effects between qubits, yielding ideal quantum operations. We analytically derive the conditions that the transformed operator must satisfy and apply it to enhance the performance of universal quantum gates. We demonstrate that, for single-qubit gates, the application of a continuous external drive field to the gate qubit can effectively suppress ZZ crosstalk from all spectators, i.e., nearby qubits, significantly reducing experimental complexity and yielding high-fidelity quantum gates. For parallel quantum gates, the mitigation of ZZ crosstalk between adjacent qubits can be achieved through the application of continuous external driving fields only to the qubits involved in the gate operation. Consequently, this approach holds great promise for practical large-scale parallel quantum computation.
§ QUANTUM GATES WITH ZZCM
In this section, we first present the considered lattice model for universal quantum gates with specification on the explicit form of the ZZ-crosstalk. Then, we propose the general scheme for constructing universal quantum gates with ZZCM.
§.§ ZZ-Crosstalk Model
We consider a general two-dimensional lattice model for scalable quantum computation. For demonstration purposes and without loss of generality, we assume the lattice consists of N× N physical qubits, as depicted in Fig. <ref>(a). In this lattice, we label the qubit at the ith row and jth column as Q_i,j. For typical solid-state quantum systems, two nearest-neighbor qubits are coupled through the XY type of interaction, and each qubit can be driven independently. The static ZZ coupling is considered the dominant source of noise. The dynamics of the system is governed by the total Hamiltonian H_t(t)=H_0(t)+H_zz, with H_0(t) =H_d(t)+H_J(t), where H_d(t) represents the Hamiltonian with individual single-qubit drives, H_J(t) describes the interaction between nearest-neighbor qubits of the system, and H_zz accounts for the ZZ-crosstalk Hamiltonian. Setting ħ = 1 hereafter, in the interaction picture, the Hamiltonian of a single qubit under resonant driving is given by
H_d(t)=∑_i, jΩ_i,j (t)(cosϕ_i,jσ_i,j ^x+sinϕ_i,jσ_i,j ^y),
where Ω_i,j (t) and ϕ_i,j are the time-dependent driving amplitudes and phases acting on Q_i,j individually, and σ_i,j=(σ_i,j ^x, σ_i,j ^y, σ_i,j ^z) is the vector of Pauli operators. The XY interaction between nearest-neighbor qubits is
H_J(t) = ∑_i, jJ_i,j^x(t)/2 (σ_i,j^x σ_i+1,j^x +σ_i,j^y σ_i+1,j^y)
+∑_i, jJ_i,j^y(t)/2 (σ_i,j^x σ_i,j+1^x +σ_i,j^y σ_i,j+1^y),
where {i, j}∈{1, 2, ...N}, and J_i,j^x(t) and J_i,j^y(t) are the controllable coupling strengths between Q_i,j and its nearest-neighbor qubits along the row and column directions, respectively. The adjacent ZZ-crosstalk Hamiltonian is
H_zz=∑_i, j(η_i,j^xσ_i,j^z σ_i+1,j^z + η_i,j^yσ_i,j^z σ_i,j+1^z),
where η_i,j^x,y characterize the coupling strength of ZZ interactions between nearby qubits.
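As a concrete illustration of the model, the following Python sketch assembles H_d, H_J and H_zz for a small N× N patch at a fixed time; open boundaries and a uniform crosstalk strength η are simplifying assumptions made here for brevity, not restrictions of the protocol.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def op_on(site, op, n_sites):
    """Embed a single-qubit operator on a given site of an n_sites-qubit register."""
    mats = [I2] * n_sites
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def lattice_hamiltonians(N, Omega, phi, Jx, Jy, eta):
    """H_d, H_J and H_zz for an N x N lattice at one instant, from arrays Omega[i,j],
    phi[i,j] (drive amplitudes/phases), Jx[i,j], Jy[i,j] (row/column bonds) and a
    uniform ZZ strength eta; open boundaries are assumed."""
    n = N * N
    idx = lambda i, j: i * N + j
    dim = 2**n
    H_d = np.zeros((dim, dim), dtype=complex)
    H_J = np.zeros((dim, dim), dtype=complex)
    H_zz = np.zeros((dim, dim), dtype=complex)
    for i in range(N):
        for j in range(N):
            q = idx(i, j)
            H_d += Omega[i, j] * (np.cos(phi[i, j]) * op_on(q, sx, n)
                                  + np.sin(phi[i, j]) * op_on(q, sy, n))
            if i + 1 < N:                      # bond along the row direction
                p = idx(i + 1, j)
                H_J += 0.5 * Jx[i, j] * (op_on(q, sx, n) @ op_on(p, sx, n)
                                         + op_on(q, sy, n) @ op_on(p, sy, n))
                H_zz += eta * op_on(q, sz, n) @ op_on(p, sz, n)
            if j + 1 < N:                      # bond along the column direction
                p = idx(i, j + 1)
                H_J += 0.5 * Jy[i, j] * (op_on(q, sx, n) @ op_on(p, sx, n)
                                         + op_on(q, sy, n) @ op_on(p, sy, n))
                H_zz += eta * op_on(q, sz, n) @ op_on(p, sz, n)
    return H_d, H_J, H_zz

# a 2 x 2 patch as a usage example (16-dimensional Hilbert space)
N = 2
H_d, H_J, H_zz = lattice_hamiltonians(N, Omega=np.ones((N, N)), phi=np.zeros((N, N)),
                                      Jx=np.ones((N, N)), Jy=np.ones((N, N)), eta=0.05)
print(H_d.shape, np.allclose(H_d + H_J + H_zz, (H_d + H_J + H_zz).conj().T))
```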
§.§ The general scheme
To eliminate the unwanted ZZ crosstalk,
we rotate the system to the framework defined by a time-dependent unitary transformation 𝒜(t) as
H_𝒜(t) = 𝒜^†(t)H_t(t)𝒜(t)+ i𝒜̇^†(t)𝒜(t).
Our goal is to devise the form of 𝒜(t) such that the resulting evolution operator of H_𝒜(t) yields an ideal gate operation at the final moment, i.e.,
U_𝒜(T,0) =𝒯e^- i∫^T_0H_𝒜(t)dt=U_0
where 𝒯 denotes time-ordering, T represents the duration of the gate operation, and U_0 signifies the ideal gate operation free from the influence of ZZ crosstalk. By imposing the boundary condition of 𝒜(T)=𝒜(0)=I, the evolution operator generated by H_t(t) in the interaction picture can be expressed as
U_t(T,0) = 𝒜(T)U_𝒜(T,0)=I· U_0,
which implies that we eliminate the adverse effect of ZZ crosstalk and realize the ideal gate operation.
To determine the specific form of 𝒜(t) satisfying Eq. (<ref>), we divide the evolution into k segments (k is a positive integer), that is T=k τ, and expand U_𝒜(T,0) as
U_𝒜(T,0) = ∏_n=1^kU_𝒜[n τ, (n-1)τ].
For the nth period U_𝒜[nτ, (n-1)τ]=𝒯 exp [- i∫_(n-1)τ^nτH_𝒜(t)dt ],
by using the Magnus expansion <cit.>, the unitary evolution
operator U_𝒜[nτ, (n-1)τ] corresponding to a time-dependent Hamiltonian is
U_𝒜[nτ, (n-1)τ]= exp{𝒢[nτ, (n-1)τ]},
where 𝒢[nτ, (n-1)τ]=∑_l=1^∞τ^l𝒢_l[nτ, (n-1)τ] is the Magnus series.
When τ is sufficiently small, we can neglect the higher-order terms in τ and consider only the first-order term, which has the largest influence,
𝒢_1[nτ, (n-1)τ]=- i/τ∫_(n-1)τ^nτH_𝒜(t)dt
=- i/τ [∫_(n-1)τ^nτH_𝒜0(t)dt+∫_(n-1)τ^nτ𝒜^†(t)H_zz𝒜(t)dt ],
where
H_𝒜0(t)=𝒜^†(t)H_0(t)𝒜(t)+ i𝒜̇^†(t)𝒜(t)
is the target Hamiltonian without ZZ crosstalk for H_0(t), in the 𝒜(t) framework, which can realize the ideal gate U_0.
To eliminate the influence of H_zz, we impose two conditions on 𝒜(t), i.e.,
𝒜(t)=𝒜(t+τ),
∫_(n-1)τ^nτ𝒜^†(t)H_zz𝒜(t)dt=0,
which indicate that 𝒜(t) is periodic with period τ, and that the integral of H_zz in the framework of 𝒜(t) is zero within any period.
By these settings, Eq. (<ref>) becomes
U_𝒜(T,0) ≈∏_n=1^ke^- i∫_(n-1)τ^nτH_𝒜0(t)dt≈ U_0,
which is the ideal rotation operation. Moving back to the interaction picture, the evolution operator generated by H_t(t) is U_t(T,0)=𝒜(T)U_𝒜(T,0)=U_0. Overall, the key to mitigating ZZ crosstalk and realizing ideal quantum gates is to find a transformed operator 𝒜(t) that satisfies the boundary condition 𝒜(T)=𝒜(0)=I and Eq. (<ref>).
§ EXAMPLES OF UNIVERSAL QUANTUM GATES
In this section, we exemplify the construction of universal quantum gates with ZZCM in the considered lattice model, which can support scalable universal quantum computation.
§.§ Single-qubit gate
As shown in Fig. <ref>(a), we utilize the construction of a single-qubit σ^x/2 gate on qubit Q_i,j as an example to provide a detailed explanation of how to eliminate ZZ crosstalk from the surrounding four spectator qubits, i.e., Q_i-1,j, Q_i+1,j, Q_i,j-1, Q_i,j+1. We start from Eq. (<ref>), where the single-qubit driven Hamiltonian in the framework of 𝒜_1(t) reads
H_𝒜1(t)=Ω_01(t )σ_i,j^x,
where Ω_01(t)=Ω_01sin^2(π t/T_1) with Ω_01 being the pulse amplitude, and T_1 being the gate operation time satisfying ∫_0^T_1Ω_01(t)dt=π/4. The nearest ZZ-crosstalk Hamiltonian can be written as
H_zz1=η_zzσ_i,j^z Z_i, j,
where
Z_i, j=σ_i-1,j^z+σ_i+1,j^z+σ_i,j-1^z+σ_i,j+1^z
is the sum of the σ^z operators of the four nearest-neighbor qubits of Q_i, j, and we assume that the ZZ-crosstalk strengths are the same for the sake of simplicity.
For elimination of ZZ crosstalk, we choose the time-dependent transformed operator 𝒜_1(t) to be
𝒜_1(t)=exp[- iω_1 τ_1/πsin^2 ( π t/τ_1) σ_i,j^x],
where τ_1=T_1/k is the period, and ω_1 is the parameter used to satisfy Eq. (<ref>b). It is obvious that 𝒜_1(t) satisfies the boundary condition 𝒜_1(0)=𝒜_1(T_1)=I and Eq. (<ref>a).
In addition, the condition in Eq. (<ref>b) can be written as
∫_(n-1)τ_1^nτ_1𝒜_1^†(t)H_zz1𝒜_1(t)dt
=η_zz∫_(n-1)τ_1^nτ_1(cosχσ_i,j^z+sinχσ_i,j^y)Z_i, j dt,
where χ=2ω_1τ_1/πsin^2( π t/τ_1). We define the error cumulant during the nth period as
EC=η_zz[|∫_(n-1)τ_1^nτ_1cosχ dt | +|∫_(n-1)τ_1^nτ_1sinχ dt |].
Through numerical simulation, it can be concluded that ω_1=4.81 k Ω_01 is the optimal choice for making the error cumulant vanish; see Appendix <ref> for details.
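The optimization of ω_1 can be reproduced with a short numerical scan. Inserting ω_1=γ k Ω_01 and the pulse-area condition Ω_01τ_1=π/(2k) gives χ=γsin^2(π t/τ_1), so the dimensionless cumulant depends only on γ; the sketch below scans γ and notes that the period integral reduces to the Bessel function J_0(γ/2), whose first zero reproduces γ≈4.81 (the analogous scan for the σ^x pulse area gives γ≈2.4).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def error_cumulant(gamma):
    """Dimensionless cumulant EC/(eta_zz tau_1) over one period for the sigma^x/2 gate,
    with chi(t) = gamma sin^2(pi t / tau_1); analytically it equals
    |J0(gamma/2)| (|cos(gamma/2)| + |sin(gamma/2)|)."""
    c = quad(lambda x: np.cos(gamma * np.sin(np.pi * x) ** 2), 0.0, 1.0)[0]
    s = quad(lambda x: np.sin(gamma * np.sin(np.pi * x) ** 2), 0.0, 1.0)[0]
    return abs(c) + abs(s)

gammas = np.linspace(0.0, 8.0, 801)
ec = np.array([error_cumulant(g) for g in gammas])
print(gammas[np.argmin(ec)])                  # ~ 4.81, i.e. twice the first zero of J0
print(error_cumulant(4.81), j0(4.81 / 2.0))   # both ~ 0
```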
Once the form of 𝒜_1(t) is determined, we can obtain the total Hamiltonian H_1(t) in the interaction picture by inverting Eq. (<ref>), i.e.,
H_1(t) = 𝒜_1(t)H_𝒜1(t)𝒜_1^†(t)+ id𝒜_1(t)/dt𝒜_1^†(t)+H_zz1
= Ω_1(t)σ_i,j^x+H_zz1,
with Ω_1(t)=Ω_01sin^2( π t/T_1)+ω_1 sin(2π t/τ_1). This indicates that we can mitigate the ZZ crosstalk from all spectators only by modulating the shape of the external drive on the gate qubit from Ω_01(t) to Ω_1(t).
We numerically quantify the gate-robustness by using the gate-fidelity of F(U_0)=|Tr(U_0^†U_zz)|/|Tr(U_0^†U_0)| <cit.>, where U_0 and U_zz are the ideal and error-affected evolution operators, respectively.
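The following self-contained sketch evaluates this fidelity numerically for the σ^x/2 gate with a single spectator qubit (rather than the four of Fig. <ref>(a)) and compares the modulated pulse with a plain sin^2 pulse of the same base amplitude instead of the Ω_m-normalized DY pulse of the appendix; it is meant only to illustrate the mechanism and the metric, not to reproduce the figures.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
X1 = np.kron(sx, I2)          # drive on the gate qubit
ZZ = np.kron(sz, sz)          # crosstalk to a single spectator

def evolve(pulse, T, eta, steps=4000):
    """Piecewise-constant (midpoint) integration of H(t) = pulse(t) X1 + eta ZZ."""
    dt = T / steps
    U = np.eye(4, dtype=complex)
    for n in range(steps):
        t = (n + 0.5) * dt
        U = expm(-1j * dt * (pulse(t) * X1 + eta * ZZ)) @ U
    return U

def fidelity(U0, U):
    return abs(np.trace(U0.conj().T @ U)) / abs(np.trace(U0.conj().T @ U0))

Omega0 = 1.0                          # base pulse amplitude (sets the time scale)
T = np.pi / (2.0 * Omega0)            # pulse area Omega0*T/2 = pi/4 -> sigma^x/2 gate
k = 4
tau = T / k
omega1 = 4.81 * k * Omega0            # modulation amplitude from the error-cumulant condition

pulse_dy = lambda t: Omega0 * np.sin(np.pi * t / T) ** 2
pulse_zzcm = lambda t: pulse_dy(t) + omega1 * np.sin(2.0 * np.pi * t / tau)

U_ideal = np.kron(expm(-1j * (np.pi / 4.0) * sx), I2)

for eta in [0.02, 0.05, 0.10]:
    F_dy = fidelity(U_ideal, evolve(pulse_dy, T, eta * Omega0))
    F_zzcm = fidelity(U_ideal, evolve(pulse_zzcm, T, eta * Omega0))
    # the modulated (ZZCM) pulse should show the smaller infidelity at each eta
    print(f"eta/Omega0 = {eta:.2f}: 1-F plain = {1 - F_dy:.2e}, 1-F ZZCM = {1 - F_zzcm:.2e}")
```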
Figure <ref>(b) shows a comparison of the robustness to ZZ crosstalk of ZZCM schemes with different k and the DY scheme, where the DY scheme is implemented by using a pulse of Ω_d1(t)=Ω_01sin^2(π t/T_d1), with T_d1 being the gate time, see Appendix <ref> for details. The numerical results reveal a positive correlation between an increase in k and an enhancement of the resilience to ZZ crosstalk. Additionally, we can observe that the gate infidelity as a function of η_zz/Ω_01 (η_zz/Ω_01∈ [-0.5,0.5]) can be smaller than 10^-4 throughout the entire range of errors for k≥4. Notably, compared to the DY scheme, the gate infidelity of the ZZCM schemes reduces by at least two orders of magnitude when the error ratio exceeds |0.02|.
However, we can infer from Eq. (<ref>) that the modulated coupling strength Ω_1(t) increases as k increases, which is not favorable for implementation in an actual experimental system. To address this issue, we set the maximum value of Ω_1(t) as Ω_m, with η_zz/Ω_m ranging from -0.05 to 0.05. Under these conditions, the optimal choice for k is found to be 4, as shown in Appendix <ref>.
§.§ Parallel single-qubit gates
We can construct the ZZCM single-qubit gates on any two physical qubits at the same time. Here we take the parallel single-qubit gates σ^x on Q_i,j and σ^y on Q_i+1,j+1, applied simultaneously, as an example, as shown in the dotted box in Fig. <ref>(a). Note that parallel single-qubit gates on nearest-neighbour qubits can also be implemented, as detailed in Appendix <ref>.
Starting from Eq. (<ref>), the individual single-qubit driven Hamiltonian is
H_𝒜2(t)=Ω_02(t)(σ_i,j^x+σ_i+1,j+1^y)
with Ω_02(t)=Ω_02sin^2( π t/T_2), where the gate operation time T_2 satisfies ∫_0^T_2Ω_02(t) dt=π/2.
In this case, the Hamiltonian of nearest ZZ crosstalk can be written as
H_zz2=η_zz(σ_i,j^z Z_i,j+σ_i+1,j+1^z Z_i+1,j+1).
When we choose the transformed operator as
𝒜_2(t)= exp [- iω_2τ_2/πsin^2 ( π t/τ_2)(σ_i,j^x+σ_i+1,j+1^y) ],
the Hamiltonian in the interaction picture can be obtained by inverting Eq. (<ref>), i.e.,
H_2(t)= Ω_2(t)(σ_i,j^x+σ_i+1,j+1^y )+H_zz2,
where the equivalent coupling strength is Ω_2(t)=Ω_02sin^2( π t/T_2)+ω_2sin(2π t/τ_2). Here, τ_2=T_2/k is the period, and ω_2 is the parameter used to satisfy Eq. (<ref>b).
By utilizing numerical simulations, we determine that the optimal value of ω_2 is 2.4 kΩ_02, which makes the error cumulant vanish; see Appendix <ref> for details. By setting the maximum value of Ω_2(t) to Ω_m, and imposing a constraint on η_zz/Ω_m in the range of [-0.05, 0.05], we can identify the optimal value of k to be 4.
A comparison between the robustness of the ZZCM scheme with k=4 and the DY scheme against ZZ crosstalk is presented in Fig. <ref>(b), where the DY scheme is implemented by using a pulse of Ω_d2(t)=Ω_msin^2(π t/T_d2), with T_d2 being the gate time, see Appendix <ref> for details. The results indicate a meaningful increase in robustness for the entire error range when implementing the ZZCM proposal.
§.§ The two-qubit Swap gate
Compared to single-qubit gates, two-qubit gates are more susceptible to parasitic ZZ crosstalk, making it significantly more challenging to achieve high-performance two-qubit gates. Therefore, suppressing ZZ crosstalk is crucial for implementing high-performance two-qubit gates. In this regard, we devise the two-qubit Swap gate with ZZ-crosstalk mitigation effects. We consider a quantum system consisting of eight physical qubits enclosed in a dotted box in Fig. <ref>(a), where the Swap gate U_(i,j),(i,j+1)^S acts on qubits Q_i,j and Q_i,j+1, and the remaining six qubits, Q_i-1,j,Q_i-1,j+1,Q_i,j-1,Q_i,j+2,Q_i+1,j,Q_i+1,j+1, serve as spectators. The interaction between qubits is indicated by the solid and dashed lines, which correspond to the XY and ZZ interactions, respectively.
The Hamiltonian for this quantum system in the interaction picture is given by
H_3(t)=H_J(t)+H_zz3+H_A(t),
where
H_J(t)=J(t)/2 (σ_i,j^xσ_i,j+1^x+σ_i,j^yσ_i,j+1^y)
is the XY interaction Hamiltonian between qubits Q_i,j and Q_i,j+1, with J(t)=J_0sin^2(π t/T_3) being the time-dependent interaction strength. The Hamiltonian
H_zz3=η_zz[σ_i,j^z Z_i,j+σ_i,j+1^z (Z_i,j+1-σ_i,j^z)]
represents the ZZ crosstalk between qubits. To suppress the ZZ crosstalk, we introduce an additional Hamiltonian H_A(t)= i𝒜̇_3(t)𝒜^†_3(t), where the explicit form of 𝒜_3(t) is presented in Appendix <ref>.
To improve the performance of the Swap gate and minimize the ZZ crosstalk, additional single-qubit operations are required on the spectator qubits Q_i-1,j+1, Q_i,j+2, and Q_i+1,j+1, as detailed in Appendix <ref>. However, these qubits do not introduce any extra overhead in terms of physical resources, since arbitrary quantum gate operations can still be performed on them independently.
This means that not only can the Swap gate be implemented, but a parallel gate consisting of one Swap gate and three single-qubit gates can also be achieved.
To demonstrate this, we introduce a parallel gate U_(i,j),(i,j+1)^S⊗σ^x_i-1,j+1⊗σ^y_i,j+2⊗ I_i+1,j+1, which consists of a Swap gate on qubits Q_i,j and Q_i,j+1, a σ^x gate on qubit Q_i-1,j+1, a σ^y gate on qubit Q_i,j+2, and an identity gate on qubit Q_i+1,j+1. In this scenario, the total Hamiltonian is represented asH'_3(t)=H_3(t)+H_s(t), where
H_s(t)=J(t)/2 (σ_i-1,j+1^x+σ_i,j+2^y)
is the Hamiltonian for constructing the single-qubit gates.
To achieve the parallel gate while suppressing the ZZ crosstalk, the evolution is divided into two steps. In the first segment, with ∫_0^T_3J(t)dt=π/2, the time-dependent transformed operator is chosen to be
𝒜_31(t) = exp [- iω_3τ_3/πsin^2 ( π t/τ_3) (σ_i,j^x+σ_i-1,j+1^x
+σ_i,j+2^y+σ_i+1,j+1^x)],
with τ_3=T_3/4 (meaning k=4), and ω_3=4×2.4 J_0. In the second segment, with ∫_T_3^2T_3J(t)dt=π/2, the operator 𝒜_32(t) is chosen as
𝒜_32(t) = exp[- iω_3τ_3/πsin^2 ( π t/τ_3) (σ_i,j^y+σ_i-1,j+1^x
+σ_i,j+2^y+σ_i+1,j+1^x) ].
Figure <ref>(b) displays the numerically simulated gate fidelity as a function of ZZ crosstalk. The comparison between the performance of the ZZCM and DY schemes is presented, where the latter is implemented via the Hamiltonian
H_d3(t) = J(t)/2 (σ_i,j^xσ_i,j+1^x+σ_i,j^yσ_i,j+1^y)
+ J(t) (σ_i-1,j+1^x+σ_i,j+2^y),
see Appendix <ref> for details. The figure demonstrates that the ZZCM scheme exhibits remarkable robustness against ZZ crosstalk, outperforming the DY scheme. The results reveal that the two-qubit gate is much more sensitive to ZZ crosstalk than the single-qubit gate and that the incorporation of the ZZCM method can efficiently mitigate this issue. Specifically, the incorporation of the ZZCM approach reduces the infidelity of the parallel gate U_(i,j),(i,j+1)^S⊗σ^x_i-1,j+1⊗σ^y_i,j+2⊗ I_i+1,j+1 by three orders of magnitude, compared to the DY scheme, when the ZZ crosstalk ratio is 0.05.
Parallel SWAP gates can also be implemented, see Appendix <ref> for details.
§ CONCLUSIONS
We have presented a protocol for implementing ZZ-crosstalk-mitigation universal quantum gates in a two-dimensional square lattice. This approach is also applicable to other kinds of lattices, encompassing three-dimensional qubit arrays. It eliminates the need for auxiliary qubits, simplifying the implementation process and reducing resource requirements. Moreover, the method is compatible with different types of quantum processors, accommodating both direct and bus-based qubit interactions. These features contribute to the versatility and scalability of the ZZ-crosstalk mitigation scheme, making it a promising approach for a wide range of quantum computation platforms.
By employing a time-dependent unitary transformation operator, we successfully realize high-performance isolated and parallel quantum gates while mitigating the ZZ crosstalk between qubits. Notably, the ZZCM proposal can be utilized to prevent the accumulation and propagation of errors induced by ZZ crosstalk, making it a promising solution for constructing deep quantum circuits and simulating quantum algorithms. Consequently, our protocol may lay the groundwork for practical, scalable, and fault-tolerant quantum computation.
§ ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China (Grant No. 12275090), the Guangdong Provincial Key Laboratory (Grant No. 2020B1212060066), the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302300), NSFC/RGC JRS grant (N-HKU774/21), and the CRF of Hong Kong (C6009-20G).
§ NUMERICAL SIMULATION FOR THE ERROR CUMULANT
To mitigate the influence of ZZ crosstalk, the time-dependent transformed operator 𝒜(t) needs to satisfy the requirement stated in Eq. (<ref>) of the main text.
For single-qubit gates U_(l=1,2) (U_1=σ^x/2, U_2=σ^x) on qubit Q_i,j, the corresponding operation durations satisfy ∫_0^T_1^1Ω_01sin^2( π t/T_1^1)dt=π/4 and ∫_0^T_1^2Ω_01sin^2( π t/T_1^2)dt=π/2, respectively. The Hamiltonian describing the ZZ crosstalk between nearest qubits is given in Eq. (<ref>).
To mitigate the ZZ crosstalk from nearest qubits, we choose 𝒜_1^l(t)= exp [- iω_1^l τ_1^l/πsin^2( π t/τ_1^l) σ_i,j^x ], where τ_1^l=T_1^l/k is the period, and ω_1^l=γ_1^l k Ω_01 (k is positive integer) is the parameter used to satisfy Eq. (<ref>b). Therefore, the condition in Eq. (<ref>b) can be expressed as
∫_(n-1)τ_1^l^nτ_1^l𝒜_1^l†(t)H_zz1𝒜_1^l(t)dt
=η_zz∫_(n-1)τ_1^l^nτ_1^l(cosχ_1^l σ_i,j^z+sinχ_1^l σ_i,j^y) Z_i, j dt,
where χ_1^l=2γ_1^l k Ω_01τ_1^l/πsin^2( π t/τ_1^l). We define the error cumulant during the nth period as
EC_1^l=η_zz[|∫_(n-1)τ_1^l^nτ_1^lcosχ_1^l dt | +|∫_(n-1)τ_1^l^nτ_1^lsinχ_1^l dt |].
Figures <ref>(a) and (b) display the error cumulant as a function of γ_1^l, indicating that for the σ^x/2 (σ^x) gate, γ_1^1=|4.81| (γ_1^2=|2.4|) is the optimal choice for eliminating the ZZ crosstalk.
For the parallel single-qubit gate on next-nearest-neighbor qubits Q_i,j and Q_i+1,j+1, the ZZ crosstalk Hamiltonian is in the form in Eq. (<ref>).
The gate operation time T_2 satisfies ∫_0^T_2Ω_02sin^2( π t/T_2)dt=π/2. When we choose the transformed operator in the form in Eq. (<ref>), with τ_2=T_2/k being the period, and ω_2=γ_2 k Ω_02 used to satisfy Eq. (<ref>b). In this case, the condition in Eq. (<ref>b) can be written as
∫_(n-1)τ_2^nτ_2𝒜_2^†(t)H_zz2𝒜_2(t)dt
=η_zz [∫_(n-1)τ_2^nτ_2(cosχ_2σ_i,j^z+sinχ_2σ_i,j^y) Z_i,jdt
+∫_(n-1)τ_2^nτ_2(cosχ_2σ_i+1,j+1^z -sinχ_2σ_i+1,j+1^x)Z_i+1,j+1 dt ],
with χ_2=2γ_2 k Ω_02τ_2/πsin^2( π t/τ_2). We can also define the error cumulant during the nth period as
EC_2=η_zz[|∫_(n-1)τ_2^nτ_2cosχ_2 dt | +|∫_(n-1)τ_2^nτ_2sinχ_2 dt |],
which is consistent with the error cumulant in isolated σ^x gate.
§ THE CONSTRUCTION OF DYNAMICAL GATES
Here, we present the construction of dynamical (DY) gates with simple resonant interaction. They are implemented by using time-dependent driving fields, and the corresponding Hamiltonians are H_d1(t)= Ω_dsin^2( π t/T_d1) σ_i,j^x and H_d2(t)=Ω_dsin^2( π t/T_d2)(σ_i,j^x+σ_i',j'^y) for single-qubit gate on qubit Q_i,j, and parallel single-qubit gate σ^x_i,j⊗σ^y _i',j' between nearest or next-nearest neighboring qubits Q_i,j and Q_i',j', respectively. To construct the isolated gate σ^x/2 (σ^x), the corresponding evolution time T_d1 satisfies ∫_0^T_d1Ω_dsin^2( π t/T_d1)dt=π/4 (π/2). Similarly, for the parallel single-qubit gate, the evolution time is ∫_0^T_d2Ω_dsin^2( π t/T_d2)dt= π/2. The amplitude of the driven field is set to be equal to the amplitude of the equivalent coupling strength in the ZZCM scheme, i.e., Ω_d=Ω_m.
We also present the implementation of the two-qubit Swap gate using a simple XY interaction. The corresponding Hamiltonian is given by
H_d3(t)=J_d(t) (σ_i,j^xσ_i',j'^x+σ_i,j^yσ_i',j'^y)/2,
where J_d(t)=J_0sin^2(π t/T_d3) is the time-dependent XY interaction between qubits Q_i,j and Q_i',j', and the evolution time satisfies ∫_0^T_d3J_d(t)dt=π/2.
Moreover, we construct parallel gates of U_(i,j),(i,j+1)^S⊗σ^x_i-1,j+1⊗σ^y_i,j+2⊗ I_i+1,j+1 and U_(i,j),(i,j+1)^S⊗ U_(i+1,j),(i+1,j+1)^S by using the DY method. The corresponding Hamiltonian is given by Eq. (<ref>)
and
H_d5(t) = J_d(t)/2 (σ_i,j^xσ_i,j+1^x+σ_i,j^yσ_i,j+1^y
+σ_i+1,j^xσ_i+1,j+1^x+σ_i+1,j^yσ_i+1,j+1^y),
respectively, where the evolution time is chosen to satisfy ∫_0^T_d4(5)J_d(t)dt=π/2. We note that the form of the ZZ-crosstalk Hamiltonian is the same as in the ZZCM scheme.
§ THE OPTIMAL K FOR THE ZZCM SCHEME
We draw attention to the incorporation of the ZZCM control, which induces an increase in the effective coupling strength as k grows, as established by Eq. (<ref>) in the main manuscript. The corresponding equivalent coupling strength for the ZZCM approach is provided by Ω_1(t)=Ω_01sin^2( π t/T_1)+ω_1sin(2π t/τ_1), where ω_1=γ_1 k Ω_01. To prevent excessive driving field amplitude in experiments, the amplitude of the equivalent coupling strength is constrained to Ω_m, and the ZZ crosstalk ratio η_zz/Ω_m is within the range of [-0.05,0.05]. Under these conditions, as k increases, the gate driven field amplitude Ω_01 diminishes, consequently resulting in increased relative noise, quantified by the parameter η_zz/Ω_01. Therefore, there exists a trade-off between the enhancement in robustness with increasing k and the proportion of noise η_zz/Ω_01. Figures <ref>(a) and (b) depict the infidelities of σ^x/2 and σ^x gates, respectively, as a function of η_zz/Ω_m, with an optimal value of k=4 identified for the ZZCM scheme. Additionally, Figs. <ref>(c) and (d) contrast the robustness of the ZZCM scheme, with k=4, against the DY scheme concerning ZZ crosstalk when the equivalent coupling strength amplitude is Ω_m. The plots distinctly showcase the outstanding suppression of ZZ crosstalk achieved by the ZZCM scheme.
§ THE CONSTRUCTION OF PARALLEL GATES ON NEARBY QUBITS
Here, we present the construction of a parallel single-qubit gate between nearest-neighbor qubits in an eight-qubit system, which is enclosed by the dotted box in Fig. <ref>(a). We apply σ^x and σ^y gates simultaneously on qubits Q_i,j and Q_i,j+1, and the remaining six qubits serve as spectators. Starting from Eq. (<ref>) in the main text, the individual single-qubit driven Hamiltonian in the 𝒜 framework is H_𝒜2'(t)=Ω_02sin^2( π t/T_2)(σ_i,j^x+σ_i,j+1^y), and the ZZ interaction Hamiltonian is
H_zz2'=η_zz[σ_i,j^z Z_i,j+σ_i,j+1^z(Z_i,j+1 -σ_i,j^z)].
To suppress the ZZ crosstalk between qubits, we choose
𝒜_2'(t) = exp [- i (ω_2τ_2/π) sin^2 (π t/τ_2) (σ_i-1,j+1^x+σ_i,j^x+σ_i,j+2^x+σ_i+1,j+1^x) ].
The total Hamiltonian,
H_2'(t) = H_zz2' + Ω_2(t)σ_i,j^x
+Ω_02sin^2 (π t/T_2) σ_i,j+1^y
+Ω_𝒜(σ_i-1,j+1^x+σ_i,j+2^x+σ_i+1,j+1^x),
in the interaction picture can be obtained by inverting Eq. (4) of the main text, where Ω_𝒜=ω_2sin(2π t/τ_2), and Ω_2(t)=Ω_02sin^2( π t/T_2)+Ω_𝒜 is the equivalent coupling strength applied to qubit Q_i,j. This indicates that, to mitigate ZZ crosstalk, we need additional control fields applied to qubits Q_i-1,j+1, Q_i,j, Q_i,j+2, and Q_i+1,j+1. However, it is worth noting that qubits Q_i-1,j+1, Q_i,j+2, and Q_i+1,j+1 do not incur extra resource consumption on the quantum processor, as we can still implement single-qubit gates independently on these qubits.
We conduct a comparative analysis of the ZZCM scheme and the DY scheme, focusing on their robustness against ZZ crosstalk. We here impose a constraint on the maximum value of the equivalent coupling strength Ω_2(t), limiting it to Ω_m. Comparing Fig. <ref>(b) and Figs. <ref>(c) and (d), we observe that the susceptibility of the parallel single-qubit gate to ZZ crosstalk exceeds that of isolated single-qubit gates. Fortunately, the proposed ZZCM scheme effectively mitigates the adverse effects of ZZ crosstalk. Consequently, the parallel single-qubit gate achieves comparable performance to isolated single-qubit gates.
§ THE CONSTRUCTION OF TWO-QUBIT SWAP GATE
Next, we present a detailed description of the construction of a two-qubit Swap gate in an eight-qubit system, enclosed in a dotted box as shown in Fig. <ref>(a). Qubits Q_i,j and Q_i,j+1 serve as gate qubits for the Swap gate U_(i,j),(i,j+1)^S, and qubits Q_i-1,j, Q_i-1,j+1, Q_i,j-1, Q_i,j+2, Q_i+1,j, and Q_i+1,j+1 act as spectators. The interactions between the qubits are represented by the solid and dashed lines, which correspond to the XY and ZZ interactions, respectively. The Hamiltonian for this quantum system in the interaction picture is given by Eq. (<ref>).
To achieve the Swap gate while suppressing ZZ crosstalk, we divide the evolution into two steps. In the first step, we choose 𝒜_31(t)= exp [- iω_3τ_3/πsin^2( π t/τ_3) (σ_i-1,j+1^x+σ_i,j^x+σ_i,j+2^x+σ_i+1,j+1^x) ], with τ_3=T_3/k, and ω_3= 2.4 k J_0. By rotating the system to the framework defined by 𝒜_31(t), we obtain
H^1_𝒜3(t) = 𝒜_31^†(t)H_3(t)𝒜_31(t)+ i𝒜^†_31(t)𝒜_31(t)
= J(t)/2σ_i,j^xσ_i,j+1^x
+ 𝒜_31^†(t)[J(t)/2σ_i,j^yσ_i,j+1^y+H_zz3]𝒜_31(t).
It is not difficult to prove that ∫_(n-1)τ_3^nτ_3𝒜_31^†(t) [ J(t)σ_i,j^yσ_i,j+1^y/2 +H_zz3 ]𝒜_31(t)dt=0. Hence, when ∫_0^T_3J(t)dt=π/2, in the framework of 𝒜_31(t), the evolution operator in the first step is U^1_𝒜3(T_3,0)= exp[- iπσ_i,j^xσ_i,j+1^x/4 ].
In the second step, we choose 𝒜_32(t)= exp [- iω_3τ_3/πsin^2( π t/τ_3) (σ_i-1,j+1^y+σ_i,j^y+σ_i,j+2^y+σ_i+1,j+1^y) ]. The Hamiltonian in the framework defined by 𝒜_32(t) is
H^2_𝒜3(t) = 𝒜_32^†(t)H_3(t)𝒜_32(t)+ i𝒜̇^†_32(t)𝒜_32(t)
= J(t)/2σ_i,j^yσ_i,j+1^y
+ 𝒜_32^†(t)[J(t)/2σ_i,j^xσ_i,j+1^x+H_zz3]𝒜_32(t).
Since ∫_(n-1)τ_3^nτ_3𝒜_32^†(t) [ J(t)σ_i,j^xσ_i,j+1^x/2+H_zz3]𝒜_32(t)dt=0, the evolution operator in the second step becomes U^2_𝒜3(2T_3,T_3)= exp[- iπσ_i,j^yσ_i,j+1^y/4 ]. As a result, the evolution operator of the whole process is
U_𝒜3(2T_3,0)=U_𝒜3(2T_3,T_3)U_𝒜3(T_3,0)=U^S_(i,j),(i,j+1),
which is a Swap gate between qubits Q_i,j and Q_i,j+1. Moving back to the interaction picture, the evolution operator generated by H_3(t) at the end time reads
U_3(2T_3)=𝒜_32(2T_3)U^S_(i,j),(i,j+1)=U^S_(i,j),(i,j+1).
It should be noted that, to suppress ZZ crosstalk, an additional Hamiltonian H_A(t)= i𝒜̇_3(t)𝒜^†_3(t) is introduced, as shown in Eq. (<ref>). This extra Hamiltonian requires additional drives not only on qubit Q_i,j but also on the spectator qubits Q_i-1,j+1, Q_i,j+2, and Q_i+1,j+1. We emphasize that these spectator qubits do not represent any additional physical-qubit overhead on the quantum processor, as we can still perform independent quantum gate operations on them. In addition, the realization of a parallel quantum gate U_(i,j),(i,j+1)^S⊗σ^x_i-1,j+1⊗σ^y_i,j+2⊗ I_i+1,j+1 that consists of one Swap gate and three single-qubit gates has been presented in the main text.
We investigate the robustness of the Swap gate within the ZZCM scheme (with k=4) and conduct a comparative analysis against the DY scheme in the presence of ZZ crosstalk. Figure <ref>(b) displays the results of this comparison, where we observe that the susceptibility of the two-qubit gate to ZZ crosstalk surpasses that of a single-qubit gate. However, we address this concern by proposing the ZZCM scheme, which effectively mitigates the detrimental effects. Notably, the fidelity of the two-qubit gate within the ZZCM scheme remains consistently above 99.99% across the entire range of errors examined in this study.
§ THE CONSTRUCTION OF PARALLEL SWAP GATES
We next demonstrate the application of the ZZCM control for realizing the parallel two-qubit Swap gate in an eight-qubit system, as shown in the dotted box in Fig. <ref>(c).
The Swap gates on qubits Q_i,j and Q_i,j+1, Q_i+1,j and Q_i+1,j+1 are implemented by turning on the XY interaction between the respective qubit pairs. The total Hamiltonian of this quantum system in the interaction picture is
H_4(t)=H'_J(t)+H_zz4+H'_A(t),
which is composed of the XY interaction Hamiltonian
H'_J(t) = J(t)'/2 (σ_i,j^xσ_i,j+1^x+σ_i,j^yσ_i,j+1^y
+σ_i+1,j^xσ_i+1,j+1^x+σ_i+1,j^yσ_i+1,j+1^y)
between qubits Q_i,j and Q_i,j+1, Q_i+1,j and Q_i+1,j+1, with J(t)'=J_0sin^2(π t/T_4) as the time-dependent interaction strength. The ZZ-crosstalk Hamiltonian in this system is described by H_zz4=η_zz [σ_i,j^z(Z_i,j-σ_i,j-1^z)+σ_i,j+1^z(σ_i-1,j+1^z+σ_i+1,j+1^z) +σ_i+1,j^z(σ_i+1,j+1^z+σ_i+2,j^z)+σ_i+1,j+1^zσ_i+2,j+1^z ]. To suppress the ZZ crosstalk, we use the additional Hamiltonian H'_A(t)= i𝒜̇_4(t)𝒜^†_4(t).
The procedure of constructing the parallel Swap gate shares similarities with the construction process of an isolated Swap gate. However, there is a key difference lies in the selection of transformed operator 𝒜(t). Specifically, we select 𝒜_41(t)= exp [- iω_4τ_4/πsin^2( π t/τ_4) (σ_i-1,j+1^x+σ_i,j^x+σ_i+1,j+1^x+σ_i+2,j^x) ] in the first step, followed by 𝒜_42(t)= exp [- iω_4τ_4/πsin^2( π t/τ_4) (σ_i-1,j+1^y+σ_i,j^y+σ_i+1,j+1^y+σ_i+2,j^y) ] in the second step, with τ_4=T_4/k, and ω_4=2.4 k J_0. Under these settings, the ZZCM scheme exhibits superior robustness against ZZ crosstalk as expected, as shown in Fig. <ref>(d). In contrast, the gate fidelity experiences a rapid decline as η_zz/J_0 increases when the ZZCM control is absent. However, by employing the ZZCM method, we successfully enhance the fidelity of the parallel Swap gate from an initial value of 93.02% to 99.99% when the ZZ crosstalk ratio is η_zz/J_0=|0.05|.
|
http://arxiv.org/abs/2307.07387v1 | 20230714145653 | Locking-free HDG methods for Reissner-Mindlin plates equations on polygonal meshes | [
"Gang Chen",
"Lu Zhang",
"Shangyou Zhang"
] | math.NA | [
"math.NA",
"cs.NA"
] |
Locking-free HDG methods for Reissner–Mindlin plates equations on polygonal meshes
Gang Chen, School of Mathematics, Sichuan University, Chengdu, China, email: [email protected]
Lu Zhang, School of Mathematics, Sichuan University, Chengdu, China, email: [email protected]
Shangyou Zhang, Department of Mathematical Sciences, University of Delaware, Newark, USA, email: [email protected]
==================================================================================================================================================================================================================================================================================================================================
We present and analyze a new hybridizable discontinuous Galerkin (HDG) method for the Reissner–Mindlin plate bending system. Our method is based on a formulation that uses a Helmholtz decomposition, by which the system is decomposed into three problems: two trivial Poisson problems and a perturbed saddle-point problem. We apply the HDG scheme to all three problems. The scheme yields optimal convergence rates ((k+1)-th order in the L^2 norm) that are uniform with respect to the plate thickness (locking-free) on general polygonal meshes. We further analyze the matrix properties and precondition the resulting finite element system. Numerical experiments are presented to confirm our theoretical analysis.
keywords: Reissner–Mindlin plate, hybridizable discontinuous Galerkin method, error analysis, locking-free
§ INTRODUCTION
In this paper we present a hybridizable discontinuous Galerkin (HDG) method for the Reissner–Mindlin (RM) plate bending system. The RM equations can be written as follows:
-∇·(𝒞ϵ(θ)) - λ t^-2(∇ω-θ) = f, in Ω,
-λ t^-2∇·(∇ω-θ) = g, in Ω,
with the hard clamped boundary condition
θ= 0, ω=0, on ∂Ω.
Here t denotes the plate thickness, and the unknowns ω and θ denote the transverse displacement of the midplane and the rotation of the fibers normal to it, respectively. The functions f∈[L^2(Ω)]^2 and g∈L^2(Ω) denote a body force and the transverse loading acting on a polygonal region Ω⊂ℝ^2 with boundary ∂Ω. The symmetric gradient ϵ(ϕ):= 1/2(∇ϕ +(∇ϕ)^T) takes values in the space 𝕊=ℝ_ sym^2×2. The operator 𝒞: 𝕊→𝕊 is the positive definite plane stress constitutive tensor given by
𝒞τ = E/(12(1-ν^2)) [(1-ν)τ+ν tr(τ) I],
with λ=κ E/(2(1+ν)) and τ∈𝕊, where E>0 is the Young's modulus, ν∈(0,1/2] is the Poisson ratio, and the shear correction factor κ is usually chosen as 5/6. More details about the RM model can be found in <cit.>. A simple calculation shows that
(12(1-ν)/E)‖τ‖_0^2 ≤ (𝒞^-1τ,τ) ≤ (12(1+ν)/E)‖τ‖_0^2,
holds true for any τ∈𝕊.

Standard low-order conforming finite element methods are known to exhibit shear locking and yield poor results for small thickness t→ 0 (see <cit.>). To deal with the shear-locking phenomenon, it is useful to introduce the shear stress γ:= λ t^-2(∇ω-θ) together with a corresponding finite-dimensional space Γ_h. Many successful conforming finite element methods then introduce an interpolation operator Π: H_0^1(Ω)→Γ_h so that ∇ω-Πθ→ 0 as t→ 0; see <cit.> for triangular elements, <cit.> for rectangular elements and <cit.> for quadrilateral elements. Many other methods have also been developed. A nonconforming finite element for ω is obtained from an equivalent formulation of RM using a discrete Helmholtz decomposition and the MINI element for the Stokes problem, see <cit.>. Low-order nonconforming elements, for both θ and ω, obtain the desired stability by adding suitable inter-element terms, see <cit.> for details. Hu and Shi generalized the rectangular nonconforming Wilson element method, dropping the requirement of H^3(Ω) regularity on the solution, see <cit.>. Stenberg <cit.> and Hughes <cit.> applied least-squares stabilization schemes. Kiendl et al. <cit.> developed two collocation schemes based on the equilibrium equations of the Reissner–Mindlin plate in a primal formulation. Kikis et al. <cit.> implemented an anisotropic phase-field model of brittle fracture in an isogeometric Reissner–Mindlin shell formulation.

A large body of literature shows that the numerical trace or numerical flux is crucial for obtaining a stable numerical solution. The discontinuous Galerkin (DG) method thus appears naturally; it was first introduced as an interior penalty method in <cit.> and has been applied to various problems, see <cit.> for elliptic problems and <cit.> for RM problems. DG is not required to satisfy the standard inter-element continuity condition of conforming finite element methods, but usually adds penalty terms on piecewise numerical fluxes, so it applies flexibly on general polygonal meshes. However, this flexibility comes at a cost: DG requires more computation and memory owing to the extra degrees of freedom, especially for high-degree polynomial approximations. Within the DG framework, many other methods for RM have been developed. Di Pietro and Droniou <cit.> recently used an enrichment of the two-dimensional discrete de Rham space with edge unknowns for the discretization. Führer and Heuer <cit.> applied a discontinuous Petrov–Galerkin method to RM with both the numerical flux and the numerical trace as unknowns, which is the main difference from HDG, where only the numerical trace is an unknown.

The method presented in this paper, namely HDG, is based on the local discontinuous Galerkin (LDG) method, see <cit.>. LDG first introduces an auxiliary variable to convert a high-order system into a system of first-order equations, and then defines numerical traces for all unknowns. The important point is that the auxiliary variable can be eliminated locally, because the prescribed numerical trace of the primal unknown does not depend on the auxiliary variable. In this sense, LDG decreases the computational expense compared with DG (assuming the local systems can be solved in parallel) while retaining its flexibility. Combining this idea with other continuous hybridized methods, HDG was introduced in <cit.>, where it is called LDG-H. Corollary 3.2 of <cit.> pointed out that HDG is not identical to LDG, because the numerical trace, as a newly introduced unknown, depends on the numerical flux.
The new numerical trace leads to a smaller, better system which involves only the numerical trace as unknown. This is much more efficient for high-degree polynomials, see <cit.>. Over the past decade, the application of HDG has extended more and more widely, in particular to RM in <cit.>. In this paper, we continue to extend HDG to RM plate problems on polygonal meshes by utilizing a Helmholtz decomposition. The system is then decomposed into three problems: two trivial Poisson problems and a perturbed saddle-point problem; the main analysis concerns the perturbed saddle-point problem. The method retains the flexibility of DG and allows the error to be analyzed locally. We show that the new HDG method is locking-free and produces optimal-order solutions. The paper is organized as follows. Section 2 recalls some necessary notation and preliminary results. Sections 3 and 4 contain our method and the main results: we fully carry out the analysis and obtain optimal convergence rates, both in the norm defined by the discrete formulation and in the L^2 norm, for arbitrary polynomial degree (greater than zero). Section 5 analyzes the properties of the matrix arising from our method; the main improvement over <cit.> is embodied here. We end in Section 6 by presenting several numerical experiments verifying our theoretical results.
§ NOTATION AND PRELIMINARY RESULTS
§.§ Notation
To introduce the fully discrete HDG formulation for the RM plate model, we first fix some notation. Let T⊂ℝ^s, s=1,2, be a bounded simply connected Lipschitz domain (in particular, T may be the computational domain Ω⊂ℝ^2, an element, or an edge). We use the usual Sobolev spaces H^m(T) and H^m_0(T) of m-th order on T, and ‖·‖_m,T, |·|_m,T denote the norm and semi-norm on these spaces. We use (·,·) to denote the inner product on H^m(T). When T=Ω, we abbreviate ‖·‖_m:=‖·‖_m,Ω and |·|_m:=|·|_m,Ω. In particular, when s=1 (in 1D), we use ⟨·,·⟩ in place of (·,·). By convention, we use boldface type for the vector (or tensor) analogues of these spaces and for vector-valued functions. For an integer k≥ 0, P_k(T) denotes the set of all polynomials defined on T with degree at most k. For clarity, we also recall the operators ∇, ∇^⊥ acting on a scalar variable ω,
∇ω=(∂_xω,∂_yω), ∇^⊥ω=(∂_yω,-∂_xω),
and ∇·, ∇× with vector variable ϕ=(ϕ_1,ϕ_2)
∇·ϕ=∂_xϕ_1+∂_yϕ_2, ∇×ϕ=∂_xϕ_2-∂_yϕ_1.
Let 𝒯_h=∪K be a shape-regular partition of the domain Ω (in the sense of Lemma 2.1 of <cit.>), consisting of a finite (and uniformly bounded) number of star-shaped elements. For any K ∈𝒯_h, we let h_K be the minimum diameter of the circles containing K and denote the mesh size by h:=max_K ∈𝒯_h h_K. An element K is star-shaped if and only if there exists x_0 ∈ K such that for every x ∈ K the line segment from x to x_0 lies in K. Let ∂𝒯_h=∪F be the union of all edges F of the elements K∈𝒯_h. We denote by h_F the length of an edge F, and each edge F of K satisfies that h_F is greater than or equal to a uniform constant multiple of h_K. For each K ∈𝒯_h, we denote by n the unit outward normal vector of K. Furthermore, we introduce the discrete inner products
(μ,v)_𝒯_h:=∑_K∈𝒯_h(μ,v)_K =∑_K∈𝒯_h∫_K μ v d x, ⟨ζ,ρ⟩_∂𝒯_h:=∑_K∈𝒯_h⟨ζ,ρ⟩_∂ K =∑_K∈𝒯_h∫_∂ K ζρ d s.
For any element K∈𝒯_h, any edge F ∈∂𝒯_h and any non-negative integer j, let Π_j^o: L^2(K)→𝒫_j(K) and Π_j^∂: L^2(F)→𝒫_j(F) be the usual L^2 projections, with vector-valued counterparts Π_j^o: L^2(K)→ [𝒫_j(K)]^2 and Π_j^∂: L^2(F)→ [𝒫_j(F)]^2. Then we have the following approximation and boundedness results (<ref>): (<ref>) and (<ref>) are direct consequences of the definition of the L^2 projection, (<ref>) follows from (<ref>), which holds by Theorem 3.22 of <cit.> and Corollary 2.1 of <cit.>, (<ref>) holds by Lemma 3.3 of <cit.>, and (<ref>) holds by Lemma 2.3 of <cit.>. It is important to note that throughout this paper C denotes a generic positive constant, independent of t and h, which may differ at each occurrence.
Let m be an integer with 1≤ m ≤ j+1. For all K ∈𝒯_h and any edge F of K, it holds
‖Π_j^o v‖_0,K ≤ ‖v‖_0,K ∀ v ∈L^2(K),
‖Π_j^∂ v‖_F ≤ ‖v‖_F ∀ v ∈L^2(F),
‖v-Π_j^∂v‖_∂ K ≤ C h^m-1/2_K |v|_m,K ∀ v ∈ H^m(K),
|v|_1,K ≤ C h^-1_K ‖v‖_0,K ∀ v ∈𝒫_j(K),
‖∇^s(v-Π_j^o v)‖_K ≤ C h^m-s_K |v|_m,K ∀ v ∈ H^m(K), 1≤ s+1≤ m,
‖v‖_∂ K ≤ C h^-1/2_K ‖v‖_0,K ∀ v ∈ H^m(K).
§.§ Helmholtz Decomposition
Let us introduce a new unknown, the so-called shear stress, which is a useful tool for many methods:
γ:= λ t^-2(∇ω-θ).
Using the Helmholtz decomposition, see <cit.>, in which (with a slight abuse of notation) H^m(Ω):=H^m(Ω)/ℝ denotes the corresponding quotient space of functions defined up to an additive constant,
L^2(Ω):=[L^2(Ω)]^2=∇ H_0^1(Ω) ⊕∇^⊥H^1(Ω),
and supposing γ is decomposed as γ=∇ r+∇^⊥ p, we get the following weak formulation:
find (r, θ, σ, p, ω) ∈ H_0^1(Ω) ×[H_0^1(Ω)]^2 ×([L^2(Ω)]^2 × 2∩𝕊) ×H^1(Ω) × H_0^1(Ω) such that
(∇ r, ∇μ) =(g, μ), ∀μ∈ H_0^1(Ω),
(𝒞^-1σ, τ)-(∇θ, τ) =0, ∀τ∈[L^2(Ω)]^2 × 2∩𝕊,
(σ, ∇ϕ)-(ϕ, ∇^⊥ p) =(∇ r, ϕ)+(f,ϕ), ∀ϕ∈[H_0^1(Ω)]^2,
-(θ,∇^⊥ q)-λ^-1 t^2(∇^⊥ p,∇^⊥ q) =0, ∀ q ∈ H^1(Ω) ,
(∇ω,∇ s) =(θ,∇ s)+λ^-1t^2(g,s), ∀ s ∈ H_0^1(Ω) .
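The following informal sketch (not needed for the analysis, and only restating the standard argument behind the Helmholtz splitting) indicates where these equations come from. Substituting γ=∇ r+∇^⊥ p into the RM system and using ∇·∇^⊥ p=0, the equation -∇·γ=g reduces to -Δ r=g, which is the first (Poisson) equation. Writing σ=𝒞ϵ(θ), the equation -∇·σ= f+∇ r+∇^⊥ p tested with ϕ∈[H_0^1(Ω)]^2 gives the second and third equations. Finally, the definition of γ reads ∇ω-θ=λ^-1t^2(∇ r+∇^⊥ p); testing with ∇^⊥ q and using (∇ω,∇^⊥ q)=(∇ r,∇^⊥ q)=0 gives the fourth equation, while testing with ∇ s, s∈ H_0^1(Ω), and using (∇^⊥ p,∇ s)=0 together with (∇ r,∇ s)=(g,s) gives the last one.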
§.§ Continuous Form
We introduce L=∇ r, G=∇ω and R=λ^-1t^2∇^⊥ p, and rewrite the above formulation as
( L, M)-(∇ r, M) =0, ∀ M ∈L^2(Ω),
( L, ∇μ) =(g, μ), ∀μ∈ H_0^1(Ω),
(𝒞^-1σ, τ)-(∇θ, τ) =0, ∀τ∈[L^2(Ω)]^2 × 2∩𝕊,
(λ t^-2 R, S)
-(∇^⊥p, S) =0, ∀ S ∈L^2(Ω),
(σ, ∇ϕ)-(ϕ, ∇^⊥ p) =( L, ϕ)+(f, ϕ), ∀ϕ∈[H_0^1(Ω)]^2,
-(θ, ∇^⊥ q)-( R, ∇^⊥ q) =0, ∀ q ∈H^1(Ω),
( G, H)-( ∇ω, H) =0, ∀ H ∈L^2(Ω),
( G, ∇ s) =(θ, ∇ s)+λ^-1t^2(g,s), ∀ s ∈ H_0^1(Ω),
γ = L+λ t^-2 R.
For abbreviation, we introduce the following continuous bilinear forms:
𝔞( L,r; M,μ) :=( L, M)-(∇ r, M)+( L, ∇μ),
𝔟(σ, R,θ,p;τ, S,ϕ,q) :=(𝒞^-1σ, τ)-(∇θ, τ)
+(λ t^-2 R, S)+(σ, ∇ϕ)
-(∇^⊥p, S)-(ϕ, ∇^⊥ p)
+(∇^⊥q, R)+(θ, ∇^⊥ q).
So (<ref>) can be reformulated as follows:
𝔞( L,r; M,μ) =(g, μ), ∀ ( M,μ ) ∈L^2(Ω) × H_0^1(Ω),
𝔟(σ, R,θ,p;τ, S,ϕ,q) =( L, ϕ)+(f, ϕ),
∀ (τ, S,ϕ,q) ∈ ([L^2(Ω)]^2 × 2∩𝕊)×L^2(Ω) ×[H_0^1(Ω)]^2×H^1(Ω),
𝔞( G,ω; H,s) = (θ, ∇ s)+λ^-1t^2(g,s), ∀ ( H,s)∈L^2(Ω) × H_0^1(Ω),
γ = L+λ t^-2 R.
§.§ Well-posedness and Regularity Result
In <cit.>, the authors established the estimates below for the case of the clamped plate, which will be used later in our proofs.
Let Ω be a convex polygon or a smoothly bounded domain in the plane. For any t∈(0,1], f∈ [H^-1(Ω)]^2, and g ∈ H^-1(Ω), there exists a unique solution (r,σ,θ,p,ω)∈ H_0^1(Ω)×([L^2(Ω)]^2 × 2∩𝕊)×[H_0^1(Ω)]^2×H^1(Ω)× H_0^1(Ω) satisfying (<ref>), (<ref>) and (<ref>), and there exists a constant C independent of t, f and g such that
‖r‖_2+‖σ‖_1+‖θ‖_2+‖p‖_1+t‖p‖_2+‖ω‖_2≤ C(‖ f‖_0+‖g‖_0).
In what follows we suppose that the following regularity result holds:
Under the condition of <Ref>, and assuming f∈ [H^k-1(Ω)]^2 and g ∈ H^k-1(Ω) with k≥1, the solution of (<ref>), (<ref>) and (<ref>) satisfies
‖r‖_k+1+‖σ‖_k+‖θ‖_k+1+‖p‖_k+t‖p‖_k+1+‖ω‖_k+1≤ C(‖ f‖_k-1+‖g‖_k-1).
§ OUR METHOD
§.§ HDG Formulation
For any integer k≥1,max(1,k-1)≤ℓ≤ k, we introduce the following finite dimensional function spaces:
ℒ_h : = { M_h∈𝐋 ^2(Ω ): M_h|_K∈ [𝒫^k-1(K)]^2,∀ K∈𝒯_h},
ℛ_h : = {μ_h∈L^2(Ω): μ_h|_K∈𝒫^k(K),∀ K∈𝒯_h},
ℛ_h := {μ_h∈L^2(ℱ_h ): μ_h|_F∈𝒫^k-1(F),∀ F∈ℱ_h, μ_h|_∂Ω=0 },
Σ_h : = {τ_h∈[L^2(Ω)]^2 × 2: τ_h|_K∈ [𝒫^k-1(K)]^2× 2∩𝕊,∀ K∈𝒯_h},
𝒴_h :={ϕ_h∈𝐋^2(Ω ): ϕ_h|_K∈ [𝒫^k(K)]^2,∀ K∈𝒯_h},
𝒴_h :={ϕ _h∈𝐋^2(ℱ _h):ϕ _h|_F∈[ 𝒫^ℓ(F)]^2,∀ F∈ℱ_h, ϕ|_∂Ω= 0 },
𝒮_h :={ S_h∈𝐋^2(Ω): S_h|_K ∈ [𝒫^k-1(K)]^2,∀ K ∈𝒯_h},
𝒫_h :={ q_h∈L_0^2(Ω ): q_h|_K∈𝒫^k(K),∀ K∈𝒯_h},
𝒫_h := {q_h∈L^2(ℱ_h ): q_h|_F∈𝒫^k-1(F),∀ F∈ℱ_h },
𝒢_h : = { H_h∈𝐋 ^2(Ω ): H_h|_K∈ [𝒫^k-1(K)]^2,∀ K∈𝒯_h} ,
𝒲_h : = { s_h∈L^2(Ω): s_h|_K∈𝒫^k(K),∀ K∈𝒯_h},
𝒲_h := {s_h∈L^2(ℱ_h ): s_h|_F∈𝒫^k-1(F),∀ F∈ℱ_h,s_h|_∂Ω=0 }.
Our numerical scheme reads as follows.
Step One: find ( L_h,r_h,r_h) ∈ℒ_h ×ℛ_h ×ℛ_h such that
( L_h, M_h)_𝒯_h+(r_h,∇· M_h)_𝒯_h
-⟨r_h, M_h· n⟩_∂𝒯_h =0,
-(∇· L_h,μ_h)_𝒯_h
+⟨ L_h· n,μ_h⟩_∂𝒯_h
+⟨α_1(Π_k-1^∂r_h-r_h),
Π_k-1^∂μ_h-μ_h
⟩_∂𝒯_h =(g,μ_h)_𝒯_h.
Step Two: find (σ_h, R_h,θ_h,θ_h,p_h,p_h) ∈Σ_h ×𝒮_h×𝒴_h ×𝒴_h×𝒫_h×𝒫_h such that
(𝒞^-1σ_h,τ_h)_𝒯_h+(θ_h,∇·τ)_𝒯_h
-⟨θ_h,τ_h n
⟩_∂𝒯_h =0,
(λ t^-2 R_h, S_h)_𝒯_h
+(p_h,∇× S_h)_𝒯_h
-⟨p_h, S_h· t⟩_∂𝒯_h =0,
-(∇·σ_h,ϕ_h)_𝒯_h
+⟨σ_h n,ϕ_h
⟩_∂𝒯_h
+(∇×ϕ_h,p_h)_𝒯_h
-⟨ϕ_h· t,p_h
⟩_∂𝒯_h
+⟨α_2(Π_ℓ^∂θ_h-θ_h),
Π_ℓ^∂ϕ_h-ϕ_h
⟩_∂𝒯_h
=( L_h,ϕ_h)_𝒯_h
+( f,ϕ_h)_𝒯_h,
-(∇×θ_h,q_h)_𝒯_h
+⟨θ_h· t,q_h
⟩_∂𝒯_h
-(∇× R_h,q_h)_𝒯_h
+⟨ R_h· t,q_h⟩_∂𝒯_h
+⟨α_3(Π_k-1^∂p_h-p_h),
Π_k-1^∂q_h-q_h
⟩_∂𝒯_h =0.
Step Three: find ( G_h,ω_h,ω_h)∈𝒢_h×𝒲_h ×𝒲_h such that
( G_h, H_h)_𝒯_h+(ω_h,∇· H_h)_𝒯_h
-⟨ω_h, H_h· n⟩_∂𝒯_h =0,
-(∇· G_h,s_h)_𝒯_h
+⟨ G_h· n,s_h⟩_∂𝒯_h
+⟨α_1(Π_k-1^∂ω_h-ω_h), Π_k-1^∂s_h-s_h
⟩_∂𝒯_h
=λ^-1t^2(g,s_h)+⟨θ_h· n,s_h⟩_∂𝒯_h -(∇·θ_h, s_h)_𝒯_h.
Step Four: we simply set
γ_h= L_h+λ t^-2 R_h.
Likewise, we introduce discrete inner products and bi-linear forms as follows:
𝔞_h( L_h,r_h,r_h; M_h,μ_h,μ_h)
:=( L_h, M_h)_𝒯_h+(r_h,∇· M_h)_𝒯_h-⟨r_h, M_h· n⟩_∂𝒯_h
-(∇· L_h,μ_h)_𝒯_h+⟨ L_h· n,μ_h⟩_∂𝒯_h
+⟨α_1(Π_k-1^∂r_h-r_h),Π_k-1^∂μ_h-μ_h
⟩_∂𝒯_h,
𝔟_h(σ_h, R_h,θ_h,θ_h,p_h,p_h ;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)
: =(𝒞^-1σ_h,τ_h)_𝒯_h+(θ_h,∇·τ)_𝒯_h-⟨θ_h,τ_h n⟩_∂𝒯_h
+(λ t^-2 R_h, S_h)_𝒯_h-(ϕ_h,∇·σ_h)_𝒯_h+⟨ϕ_h,σ_h n⟩_∂𝒯_h
+⟨α_2(Π_ℓ^∂θ_h-θ_h),Π_ℓ^∂ϕ_h-ϕ_h⟩_∂𝒯_h
+(p_h,∇× S_h)_𝒯_h-⟨p_h, S_h· t⟩_∂𝒯_h+(∇×ϕ_h,p_h)_𝒯_h
-⟨ϕ_h· t,p_h⟩_∂𝒯_h-(q_h,∇× R_h)_𝒯_h
+⟨q_h, R_h· t⟩_∂𝒯_h
-(∇×θ_h,q_h)_𝒯_h+⟨θ_h· t,q_h⟩_∂𝒯_h
+⟨α_3(Π_k-1^∂p_h-p_h),Π_k-1^∂q_h-q_h⟩_∂𝒯_h.
So, the HDG formulation could be rewritten in the following equivalent compact form:
𝔞_h( L_h,r_h,r_h; M_h,μ_h,μ_h)=(g,μ)_𝒯_h, ∀ ( M_h,μ_h,μ_h)∈ℒ_h ×ℛ_h ×ℛ_h,
𝔟_h(σ_h, R_h,θ_h,θ_h,p_h,p_h ;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)=( L_h,ϕ_h)_𝒯_h
+( f,ϕ_h)_𝒯_h
∀ (τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)
∈Σ_h ×𝒮_h×𝒴_h ×𝒴_h×𝒫_h×𝒫_h,
𝔞_h( G_h, ω_h,ω_h; H_h,s_h,s_h) = λ^-1t^2(g,s_h)+⟨θ_h· n,s_h⟩_∂𝒯_h-(∇·θ_h,s_h)_𝒯_h
∀ ( H_h,s_h,s_h)∈𝒢_h×𝒲_h ×𝒲_h,
γ_h= L_h+λ t^-2 R_h.
So we split the process of solving the discrete system (<ref>) into three problems. For the choice of the stabilization coefficients, <cit.>, <cit.> and <cit.> give discussions for the Poisson problem and the Stokes flow problem. In this paper, we take
α_1|_K:=h_K^-1, α_2|_K:=h_K^-1, α_3|_K:=h_K+t^2h_K^-1, ∀ K ∈𝒯_h.
In the following, we also use the notation 𝔥|_K=h_K for all K∈𝒯_h.
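As a purely illustrative aside (the struct and function names below are hypothetical and not taken from our implementation), the stabilization coefficients above can be evaluated element by element as follows.

#include <cmath>

// Per-element HDG stabilization coefficients; h_K is the element size and t the
// plate thickness.  Encodes alpha_1|_K = alpha_2|_K = h_K^{-1}, alpha_3|_K = h_K + t^2 h_K^{-1}.
struct Stabilization {
  double alpha1, alpha2, alpha3;
};

inline Stabilization stabilization_coefficients(double h_K, double t) {
  Stabilization s;
  s.alpha1 = 1.0 / h_K;
  s.alpha2 = 1.0 / h_K;
  s.alpha3 = h_K + t * t / h_K;
  return s;
}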
§.§ A Priori Error for Step One
We firstly introduce a norm on space ℒ_h ×ℛ_h ×ℛ_h as
( M_h, μ_h,μ_h)^2_𝔞_h= M_h^2_𝒯_h+∇μ_h^2_𝒯_h+α_1^1/2(Π_k-1^∂μ_h-μ_h)^2_∂𝒯_h,
that appears in natural way. By a standard argument, we can obtain the following lemma.
For all ( L_h,r_h,r_h)∈ℒ_h ×ℛ_h ×ℛ_h , we have
sup_ 0≠( M_h, μ_h,μ_h)∈ℒ_h ×ℛ_h ×ℛ_h𝔞_h( L_h,r_h,r_h; M_h,μ_h,μ_h)/( M_h, μ_h,μ_h)_𝔞_h≥ C( L_h,r_h,r_h)_𝔞_h.
For the analysis, we state the following interpolation, see definition and more details in <cit.>. We here give some properties of the interpolation.
If 𝔫 is a sufficiently large positive integer, then there exists an interpolation operator ℐ_h: L^2(Ω)×L^2(ℱ_h)→ H_0^1(Ω)∩𝒫_k+𝔫(𝒯_h) such that for all (r_h,r_h)∈L^2(Ω)×L^2(ℱ_h), all K∈𝒯_h, and all F∈ℱ_h, we have
(ℐ_h(r_h,r_h),μ_h)_K =(r_h,μ)_K, ∀μ∈𝒫_k(K),
(ℐ_h(r_h,r_h),μ)_F =(r_h,μ)_F, ∀μ∈𝒫_k(F),
∇ℐ_h(r_h,r_h)_𝒯_h ≤ C(∇ r_h_𝒯_h+𝔥^-1/2(r_h-r_h)_∂𝒯_h),
r_h-ℐ_h(r_h,r_h)_𝒯_h ≤ C h(∇ r_h_𝒯_h+𝔥^-1/2(r_h-r_h)_∂𝒯_h),
where 𝒫_k+𝔫(𝒯_h)={r_h∈L^2(Ω):r_h∈𝒫_k+𝔫(K), ∀ K ∈𝒯_h}. Boldface fonts ℐ_h will be used for vector interpolation operator counterpart.
Actually, we can restrict the range of ℐ_h to more specific spaces. For instance, ℐ_h maps into H_0^1(Ω)∩𝒫_k+𝔫(𝒯_h) when r_h|_∂Ω=0; this holds trivially by (<ref>) and will be used in the argument of <Ref>. Likewise, ℐ_h maps into H^1(Ω)∩𝒫_k+𝔫(𝒯_h) when r_h∈L_0^2(Ω); this follows by taking μ=1 in (<ref>) and will be used in the argument of <Ref>.
Let ( L,r) be the solution of (<ref>) and let ( L_h,r_h,r_h) be the numerical solution of (<ref>), then we have
𝔞_h(Π_k-1^o L,Π_k^o r,Π_k-1^∂ r; M_h,μ_h,μ_h)
=(g,ℐ_h(μ_h,μ_h))_𝒯_h+(Π_k-1^o L- L,∇ℐ_h(μ_h,μ_h))_𝒯_h
+⟨α_1(Π_k-1^∂Π_k^o r-Π_k-1^∂ r),Π_k-1^∂μ_h-μ_h⟩_∂𝒯_h.
By the definition of 𝔞_h, we can obtain
𝔞_h(Π_k-1^o L,Π_k^o r,Π_k-1^∂ r; M_h,μ_h,μ_h)
=(Π_k-1^o L, M_h)_𝒯_h+(Π_k^o r,∇· M_h)_𝒯_h-⟨Π_k-1^∂r, M_h· n⟩_∂𝒯_h
-(∇·Π_k-1^o L,μ_h)_𝒯_h+⟨Π_k-1^o L· n,μ_h⟩_∂𝒯_h
+⟨α_1(Π_k-1^∂Π_k^o r-Π_k-1^∂ r),Π_k-1^∂μ_h-μ_h⟩_∂𝒯_h
=(Π_k-1^o L, M_h)_𝒯_h+(Π_k^o r,∇· M_h)_𝒯_h-⟨Π_k-1^∂r, M_h· n⟩_∂𝒯_h
-(∇·Π_k-1^o L,ℐ_h(μ_h,μ_h))_𝒯_h+⟨Π_k-1^o L· n,ℐ_h(μ_h,μ_h)⟩_∂𝒯_h
+⟨α_1(Π_k-1^∂Π_k^o r-Π_k-1^∂ r),Π_k-1^∂μ_h-μ_h⟩_∂𝒯_h
Applying the orthogonality of L^2 projections and the properties of ℐ_h (<ref>),
𝔞_h(Π_k-1^o L,Π_k^o r,Π_k-1^∂ r; M_h,μ_h,μ_h)
=( L, M_h)_𝒯_h+(r,∇· M_h)_𝒯_h-⟨ r, M_h· n⟩_∂𝒯_h
+( L,∇ℐ_h(μ_h,μ_h))_𝒯_h+(Π_k-1^o L- L,∇ℐ_h(μ_h,μ_h))_𝒯_h
+⟨α_1(Π_k-1^∂Π_k^o r-Π_k-1^∂ r),Π_k-1^∂μ_h-μ_h⟩_∂𝒯_h.
Then we use the relations of (<ref>), we get
𝔞_h(Π_k-1^o L,Π_k^o r,Π_k-1^∂ r; M_h,μ_h,μ_h)
=(g,ℐ_h(μ_h,μ_h))_𝒯_h+(Π_k-1^o L- L,∇ℐ_h(μ_h,μ_h))_𝒯_h
+⟨α_1(Π_k-1^∂Π_k^o r-Π_k-1^∂ r),Π_k-1^∂μ_h-μ_h⟩_∂𝒯_h.
This completes the proof.
Under condition of <Ref> , it holds
L- L_h_𝒯_h+∇ r-∇ r_h_𝒯_h≤ Ch^k(r_k+1+g_k-1).
We introduce the following expressions to make the argument more concise:
ξ_ L:= Π_k-1^o L- L_h,
ξ_r:= Π_k^o r-r_h,
ξ_r:= Π_k-1^∂ r-r_h.
Then, we get
𝔞_h(ξ_ L,ξ_r,ξ_r; M_h,μ_h,μ_h)
=(g,ℐ_h(μ_h,μ_h)-μ_h)_𝒯_h+(Π_k-1^o L- L,∇ℐ_h(μ_h,μ_h))_𝒯_h
+⟨α_1(Π_k-1^∂Π_k^o r-Π_k-1^∂ r),Π_k-1^∂μ_h-μ_h⟩_∂𝒯_h
:=E_1+E_2+E_3.
For the first term, we have the following result by Cauchy-Schwartz’s inequality and (<ref>) and the properties of ℐ_h (<ref>)
E_1 =(g-Π^o_k-1g,ℐ_h(μ_h,μ_h)-μ_h)_𝒯_h
≤ Ch^kg_k-1(∇μ_h_𝒯_h+𝔥^-1/2(μ_h-μ_h)_∂𝒯_h)
≤ Ch^kg_k-1(∇μ_h_𝒯_h+α_1^1/2(Π_k-1^∂μ_h-μ_h)_∂𝒯_h).
For the second term, we use the same argument as E_1 with the properties of ℐ_h (<ref>)
E_2 ≤ Ch^k L_k(∇μ_h_𝒯_h+α_1^1/2(Π_k-1^∂μ_h-μ_h)_∂𝒯_h)
For the third term, we obtain by Cauchy-Schwartz’s inequality, (<ref>) and (<ref>):
E_3 ≤α_1^1/2(Π_k-1^∂Π_k^o r-Π_ℓ^∂ r)_∂𝒯_hα_1^1/2(Π_k-1^∂μ_h-μ_h)_∂𝒯_h
≤ C h^kr_k+1α_1^1/2(Π_k-1^∂μ_h-μ_h)_∂𝒯_h.
Then, using (<ref>), we obtain the boundedness of (ξ_ L,ξ_r,ξ_r)_𝔞_h:
(ξ_ L,ξ_r,ξ_r)_𝔞_h≤ Ch^k(r_k+1+g_k-1).
Finally, after using the triangle inequality, we finish the proof.
§.§ A Priori Error for Step Two
Now we mainly consider the second problem (<ref>). Again, we introduce a norm that appears naturally in the analysis. Let (σ, R,θ, p) be the solution of (<ref>) and let (σ_h, R_h,θ_h,θ_h,p_h,p_h) be the numerical solution of (<ref>).
We first introduce the HDG-Korn's inequality and the HDG-Poincaré’s inequality, see Lemma 3.3 of <cit.>.
For any θ_h∈𝒴_h, θ_h∈𝒴_h, it holds
θ_h^2_𝒯_h≤ C(∇θ_h^2_𝒯_h+𝔥^-1/2(Π_ℓ^∂θ_h-θ_h)^2_∂𝒯_h).
For any θ_h∈𝒴_h,θ_h∈𝒴_h and sufficiently small h, it holds
∇θ_h^2_𝒯_h≤ C(ϵ(θ_h)^2_𝒯_h+𝔥^-1/2(Π_ℓ^∂θ_h-θ_h)^2_∂𝒯_h).
We next introduce a semi-norm ‖·‖_𝔟_h: Σ_h ×𝒮_h×𝒴_h ×𝒴_h×𝒫_h×𝒫_h →ℝ used in the analysis of our method. Furthermore, we show that ‖·‖_𝔟_h is indeed a norm, and then state the a priori error estimate in the norm ‖·‖_𝔟_h.
For all (σ_h, R_h,θ_h,θ_h,p_h,p_h) ∈Σ_h ×𝒮_h×𝒴_h ×𝒴_h×𝒫_h×𝒫_h, the definition of
(σ_h, R_h,θ_h,θ_h,p_h,p_h)^2_𝔟_h =σ_h^2_𝒯_h+t^-2 R_h^2_𝒯_h+∇θ_h^2_𝒯_h+α_2^1/2(Π_ℓ^∂θ_h-θ_h)^2_∂𝒯_h
+t^2∇^⊥p_h^2_𝒯_h+α_3^1/2(Π_k-1^∂ p_h-p_h)^2_∂𝒯_h,
is a norm for spaces Σ_h ×𝒮_h×𝒴_h ×𝒴_h×𝒫_h×𝒫_h.
To show that (<ref>) is a norm on Σ_h ×𝒮_h×𝒴_h ×𝒴_h×𝒫_h×𝒫_h, assume that (σ_h, R_h,θ_h,θ_h,p_h,p_h)_𝔟_h=0; we show that σ_h, R_h,θ_h,θ_h,p_h,p_h all vanish. Thanks to (<ref>), we only need to show that θ_h,θ_h,p_h,p_h vanish, because σ_h_𝒯_h=0 and R_h_𝒯_h=0 directly imply σ_h= 0, R_h= 0 for (σ_h, R_h)∈Σ_h ×𝒮_h. It follows readily that ∇θ_h= 0, ∇^⊥p_h= 0 in 𝒯_h and Π_ℓ^∂θ_h=θ_h, Π_k-1^∂ p_h=p_h on ∂𝒯_h. Hence θ_h is constant in 𝒯_h and Π_ℓ^∂θ_h= 0 on ∂Ω because θ_h = 0 on ∂Ω. We conclude that θ_h= 0 in 𝒯_h and θ_h= 0 on ∂𝒯_h. For the scalar variables p_h,p_h, we note that p_h∈L_0^2(Ω). Since p_h is also constant in 𝒯_h, a simple integration gives p_h=0 in 𝒯_h and p_h=0 on ∂𝒯_h. This completes the proof.
Following the well-known theory of <cit.>, the discrete LBB condition below is necessary for the well-posedness of our method and ensures the stability of 𝔟_h.
For all (σ_h, R_h,θ_h, θ_h,p_h,p_h) ∈Σ_h ×𝒮_h×𝒴_h ×𝒴_h×𝒬_h×𝒬_h, we have
sup_ 0≠ (τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)∈Σ_h ×𝒮_h×𝒴_h ×𝒴_h×𝒬_h×𝒬_h 𝔟_h(σ_h, R_h,θ_h, θ_h,p_h,p_h;τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)/(τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)_𝔟_h
≥ C(σ_h, R_h,θ_h,θ_h,p_h,p_h)_𝔟_h.
For any fixed (σ_h, R_h,θ_h, θ_h,p_h,p_h) ∈Σ_h ×𝒮_h×𝒴_h ×𝒴_h×𝒬_h×𝒬_h, let β_1,β_2 be two constants that will be specified below, we take τ_h=σ_h+β_1ϵ(θ_h), S_h= R_h+β_2λ^-1t^2∇^⊥p_h, ϕ_h=θ_h, ϕ_h=θ_h, q_h=p_h, q_h=p_h and we get
𝔟_h(σ_h, R_h,θ_h, θ_h,p_h,p_h;τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)
=(𝒞^-1σ_h,σ_h+β_1ϵ(θ_h))_𝒯_h+(θ_h,∇·(σ_h+β_1ϵ(θ_h)))_𝒯_h
-⟨θ_h,(σ_h+β_1ϵ(θ_h)) n⟩_∂𝒯_h+(λ t^-2 R_h, R_h+β_2λ^-1t^2∇^⊥p_h)_𝒯_h
-(θ_h,∇·σ_h)_𝒯_h+⟨θ_h,σ_h n⟩_∂𝒯_h+⟨α_2(Π_ℓ^∂θ_h-θ_h),Π_ℓ^∂θ_h-θ_h⟩_∂𝒯_h
+(p_h,∇×( R_h+β_2λ^-1t^2∇^⊥p_h))_𝒯_h-⟨p_h,( R_h+β_2λ^-1t^2∇^⊥p_h)· t⟩_∂𝒯_h
+(∇×θ_h,p_h)_𝒯_h-⟨θ_h· t,p_h⟩_∂𝒯_h-(p_h,∇× R_h)_𝒯_h
+⟨p_h, R_h· t⟩_∂𝒯_h
-(∇×θ_h,p_h)_𝒯_h+⟨θ_h· t,p_h⟩_∂𝒯_h+⟨α_3(Π_k-1^∂p_h-p_h),Π_k-1^∂p_h-p_h⟩_∂𝒯_h.
Then applying the orthogonality of L^2 projections, we get
𝔟_h(σ_h, R_h,θ_h, θ_h,p_h,p_h;τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)
=(𝒞^-1σ_h,σ_h)_𝒯_h+(𝒞^-1σ_h,β_1ϵ(θ_h))_𝒯_h+(-β_1ϵ(θ_h),ϵ(θ_h))_𝒯_h
+⟨Π_ℓ^∂θ_h-θ_h,β_1ϵ(θ_h) n⟩_∂𝒯_h+(λ t^-2 R_h, R_h)_𝒯_h+( R_h,β_2∇^⊥p_h)_𝒯_h
-(∇^⊥p_h,β_2λ^-1t^2∇^⊥p_h)_𝒯_h+⟨Π_k-1^∂p_h-p_h,β_2λ^-1t^2∇^⊥p_h· t⟩_∂𝒯_h
+⟨α_2(Π_ℓ^∂θ_h-θ_h),Π_ℓ^∂θ_h-θ_h⟩_∂𝒯_h+⟨α_3(Π_k-1^∂p_h-p_h),Π_k-1^∂p_h-p_h⟩_∂𝒯_h.
Next, using Cauchy-Schwartz inequality and (<ref>) with β_1<0,β_2<0, we get
𝔟_h(σ_h, R_h,θ_h, θ_h,p_h,p_h;τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)
≥σ_h^2_𝒯_h+β_1σ_h_𝒯_hϵ(θ_h)_𝒯_h-β_1ϵ(θ_h)^2_𝒯_h
+β_1𝔥^-1/2(Π_ℓ^∂θ_h-θ_h)_∂𝒯_h𝔥^1/2ϵ(θ_h)_∂𝒯_h+λ t^-2 R_h^2_𝒯_h
+β_2 t^-1 R_h_𝒯_h· t∇^⊥p_h_𝒯_h-β_2λ^-1 t^2∇^⊥p_h^2_𝒯_h
+β_2t𝔥^-1/2(Π_k-1^∂p_h-p)_∂𝒯_h·λ^-1t𝔥^1/2∇^⊥p_h_∂𝒯_h
+α_2^1/2(Π_ℓ^∂θ_h-θ_h)^2_∂𝒯_h+α_3^1/2(Π_k-1^∂p_h-p_h)^2_∂𝒯_h.
Using (<ref>) and arithmetic and geometric means inequality, we get
𝔟_h(σ_h, R_h,θ_h, θ_h,p_h,p_h;τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)
≥σ_h^2_𝒯_h+α_2^1/2(Π_ℓ^∂θ_h-θ_h)^2_∂𝒯_h-β_1/2(ϵ(θ_h)^2_𝒯_h+𝔥^-1/2(Π_ℓ^∂θ_h-θ_h)^2_∂𝒯_h)
+Cβ_1(σ_h^2_𝒯_h+𝔥^-1/2(Π_ℓ^∂θ_h-θ_h)^2_∂𝒯_h)
+λ t^-2 R_h^2_𝒯_h+α_3^1/2(Π_k-1^∂ p_h-p_h)^2_∂𝒯_h-β_2/2λ^-1t^2∇^⊥p_h^2_𝒯_h
+Cβ_2(λ t^-2 R_h^2_𝒯_h+t^2𝔥^-1/2(Π_k-1^∂ p_h-p_h)^2_∂𝒯_h).
Using HDG-Korn’s inequality (<ref>), it holds
𝔟_h(σ_h, R_h,θ_h, θ_h,p_h,p_h;τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)
≥(1+Cβ_1)(σ_h^2_𝒯_h+α_2^1/2(Π_ℓ^∂θ_h-θ_h)^2)-β_1/2∇θ_h^2_𝒯_h
+(1+Cβ_2)(λ t^-2 R_h^2_𝒯_h+α_3^1/2(Π_k-1^∂ p_h-p_h)^2_∂𝒯_h)-β_2/2λ^-1t^2∇^⊥p_h^2_𝒯_h.
Next, we take 1+Cβ_1>0,1+Cβ_2>0 to conclude that
𝔟_h(σ_h, R_h,θ_h, θ_h,p_h,p_h;τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)
≥ C(σ_h, R_h,θ_h, θ_h,p_h,p_h)^2_𝔟_h.
With τ_h=σ_h+β_1ϵ(θ_h),
S_h= R_h+β_2λ^-1t^2∇^⊥p_h,
ϕ_h=θ_h,
ϕ_h=θ_h and
q_h=p_h,q_h=p_h, the triangle inequality implies
(τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)_𝔟_h
≤max{1+|β_1|,1+|β_2|}(σ_h, R_h,θ_h, θ_h,p_h,p_h)_𝔟_h
≤ C(σ_h, R_h,θ_h, θ_h,p_h,p_h)_𝔟_h.
Then the result (<ref>) follows immediately.
Let (σ, R,θ,p) ∈ ([L^2(Ω)]^2 × 2∩𝕊)×[L^2(Ω)]^2 ×[H_0^1(Ω)]^2×H^1(Ω) be the weak solution to the (<ref>), then for all (τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h) ∈𝒯_h ×𝒮_h×𝒴_h ×𝒴_h×𝒬_h×𝒬_h, it holds
𝔟_h(Π_k-1^oσ,Π_k-1^o R,Π_k^o θ,Π_ℓ^∂θ,Π_k^o p,Π_k-1^∂ p;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)
=( L+ f, ℐ_h(ϕ_h,ϕ_h))_𝒯_h+(∇ℐ_h(ϕ_h,ϕ_h),Π_k-1^oσ-σ)_𝒯_h
+⟨α_2(Π_ℓ^∂Π_k^oθ-Π_ℓ^∂θ),Π_ℓ^∂ϕ_h-ϕ_h⟩_∂𝒯_h-⟨ (ϕ_h-Π^o_k-1ϕ_h)· t, Π_k-1^∂p-p⟩_∂𝒯_h
+(∇×ℐ_h(ϕ_h,ϕ_h),Π_k^o p-p)_𝒯_h-⟨ (ℐ_h(ϕ_h,ϕ_h)-ϕ_h)· t,Π_k^o p-p⟩_∂𝒯_h
+(∇^⊥ℐ_h(q_h,q_h),Π_k-1^o R- R)_𝒯_h+(Π_k^oθ-θ,∇^⊥ℐ_h(q_h,q_h))_𝒯_h
+⟨α_3(Π_k-1^∂Π_k^op-Π_k-1^∂ p),Π_k-1^∂q_h-q_h⟩_∂𝒯_h.
By the definition of 𝔟_h, we get
𝔟_h(Π_k-1^oσ,Π_k-1^o R,Π_k^o θ,Π_ℓ^∂θ,Π_k^o p,Π_k-1^∂ p;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)
=(𝒞^-1Π_k-1^oσ,τ_h)_𝒯_h+(Π_k^oθ,∇·τ_h)_𝒯_h-⟨Π_ℓ^∂θ,τ_h n⟩_∂𝒯_h
+(λ t^-2Π_k-1^o R, S_h)_𝒯_h-(ϕ_h,∇·Π_k-1^oσ)_𝒯_h+⟨ϕ_h,Π_k-1^oσ n⟩_∂𝒯_h
+⟨α_2(Π_ℓ^∂Π_k^oθ-Π_ℓ^∂θ),Π_ℓ^∂ϕ_h-ϕ_h⟩_∂𝒯_h
+(Π_k^o p,∇× S_h)_𝒯_h-⟨Π_k-1^∂ p, S_h· t⟩_∂𝒯_h+(∇×ϕ_h,Π_k^o p)_𝒯_h
-⟨ϕ_h· t,Π_k-1^∂ p⟩_∂𝒯_h-(q_h,∇×Π_k-1^o R)_𝒯_h+⟨q_h,Π_k-1^o R· t⟩_∂𝒯_h
-(∇×Π_k^oθ,q_h)_𝒯_h+⟨Π_k^oθ· t,q_h⟩_∂𝒯_h
+⟨α_3(Π_k-1^∂Π_k^op-Π_k-1^∂ p),Π_k-1^∂q_h-q_h⟩_∂𝒯_h.
then we introduce ℐ_h to get
𝔟_h(Π_k-1^oσ,Π_k-1^o R,Π_k^o θ,Π_ℓ^∂θ,Π_k^o p,Π_k-1^∂ p;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)
=(𝒞^-1Π_k-1^oσ,τ_h)_𝒯_h+(Π_k^oθ,∇·τ_h)_𝒯_h-⟨Π_ℓ^∂θ,τ_h n⟩_∂𝒯_h
+(λ t^-2Π_k-1^o R, S_h)_𝒯_h-( ℐ_h(ϕ_h,ϕ_h),∇·Π_k-1^oσ)_𝒯_h
+⟨ ℐ_h(ϕ_h,ϕ_h),Π_k-1^oσ n⟩_∂𝒯_h+⟨α_2(Π_ℓ^∂Π_k^oθ-Π_ℓ^∂θ),Π_ℓ^∂ϕ_h-ϕ_h⟩_∂𝒯_h
+(Π_k^o p,∇× S_h)_𝒯_h-⟨Π_k-1^∂ p, S_h· t⟩_∂𝒯_h-⟨ϕ_h· t, Π_k-1^∂p-p⟩_∂𝒯_h
+(∇× ℐ_h(ϕ_h,ϕ_h),Π_k^o p-p)_𝒯_h-(ℐ_h(ϕ_h,ϕ_h),∇^⊥p)_𝒯_h
-⟨ ℐ_h(ϕ_h,ϕ_h)· t,Π_k^o p-p⟩_∂𝒯_h-(ℐ_h(q_h,q_h),∇×Π_k-1^o R)_𝒯_h
+⟨ℐ_h(q_h,q_h),Π_k-1^o R· t⟩_∂𝒯_h-(∇×Π_k^oθ,ℐ_h(q_h,q_h))_𝒯_h
+⟨Π_k^oθ· t,ℐ_h(q_h,q_h)⟩_∂𝒯_h+⟨α_3(Π_k-1^∂Π_k^op-Π_k-1^∂ p),Π_k-1^∂q_h-q_h⟩_∂𝒯_h.
Then apply the orthogonality of projections L^2 and ℐ_h such as (<ref>) and (<ref>) to get
𝔟_h(Π_k-1^oσ,Π_k-1^o R,Π_k^o θ,Π_ℓ^∂θ,Π_k^o p,Π_k-1^∂ p;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)
=(𝒞^-1σ,τ_h)_𝒯_h-(∇θ,τ_h)_𝒯_h+(λ t^-2 R, S_h)_𝒯_h
+(∇ℐ_h(ϕ_h,ϕ_h),Π_k-1^oσ-σ)_𝒯_h+(∇ℐ_h(ϕ_h,ϕ_h),σ)_𝒯_h
+⟨α_2(Π_ℓ^∂Π_k^oθ-Π_ℓ^∂θ),Π_ℓ^∂ϕ_h-ϕ_h⟩_∂𝒯_h-⟨ (ϕ_h-Π^o_k-1ϕ_h)· t, Π_k-1^∂p-p⟩_∂𝒯_h
-(∇^⊥p, S_h)_𝒯_h+(∇×ℐ_h(ϕ_h,ϕ_h),Π_k^o p-p)_𝒯_h-(ℐ_h(ϕ_h,ϕ_h),∇^⊥p)_𝒯_h
-⟨ℐ_h(ϕ_h,ϕ_h)· t,Π_k^o p-p⟩_∂𝒯_h+(∇^⊥ℐ_h(q_h,q_h),Π_k-1^o R- R)_𝒯_h
+(∇^⊥ℐ_h(q_h,q_h), R)_𝒯_h+(Π_k^oθ-θ,∇^⊥ℐ_h(q_h,q_h))_𝒯_h
+(θ,∇^⊥ℐ_h(q_h,q_h))_𝒯_h+⟨α_3(Π_k-1^∂Π_k^op-Π_k-1^∂ p),Π_k-1^∂q_h-q_h⟩_∂𝒯_h.
The last step is straightforward consequence of (<ref>)
𝔟_h(Π_k-1^oσ,Π_k-1^o R,Π_k^o θ,Π_ℓ^∂θ,Π_k^o p,Π_k-1^∂ p;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)
=(∇ℐ_h(ϕ_h,ϕ_h),Π_k-1^oσ-σ)_𝒯_h+(ℐ_h(ϕ_h,ϕ_h), L+ f)_𝒯_h
+⟨α_2(Π_ℓ^∂Π_k^oθ-Π_ℓ^∂θ),Π_ℓ^∂ϕ_h-ϕ_h⟩_∂𝒯_h-⟨ (ϕ_h-Π^o_k-1ϕ_h)· t, Π_k-1^∂p-p⟩_∂𝒯_h
+(∇×ℐ_h(ϕ_h,ϕ_h),Π_k^o p-p)_𝒯_h-⟨ (ℐ_h(ϕ_h,ϕ_h)-ϕ_h)· t,Π_k^o p-p⟩_∂𝒯_h
+(∇^⊥ℐ_h(q_h,q_h),Π_k-1^o R- R)_𝒯_h+(Π_k^oθ-θ,∇^⊥ℐ_h(q_h,q_h))_𝒯_h
+⟨α_3(Π_k-1^∂Π_k^op-Π_k-1^∂ p),Π_k-1^∂q_h-q_h⟩_∂𝒯_h.
Also, we introduce the following expressions to make the argument more concise.
ξ_σ:= Π_k-1^o σ -σ_h, ξ_ R:= Π_k-1^o R- R_h, ξ_θ:=Π_k^o θ- θ_h,
ξ_p:= Π_k^o p-p_h, ξ_θ:= Π_ℓ^∂θ-θ_h, ξ_p:= Π_k-1^∂ p-p_h.
Let (σ, R,θ, p) be the solution of (<ref>) and let (σ_h, R_h,θ_h,θ_h,p_h,p_h) be the numerical solution of (<ref>). Then, under the
regularity condition, the following error estimate holds:
(ξ_σ,ξ_ R,ξ_θ,ξ_θ,ξ_p,ξ_p)_𝔟_h≤ C h^k( f_k-1+g_k-1+r_k+1+σ_k+θ_k+1+tp_k+1).
By the equation (<ref>), we get the following error equation
𝔟_h( ξ_σ,ξ_ R,ξ_θ, ξ_θ, ξ_p, ξ_p;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)
=[( L- L_h, ℐ_h(ϕ_h,ϕ_h))_𝒯_h+( f-Π_k-1^o f, ℐ_h(ϕ_h,ϕ_h)-ϕ_h)_𝒯_h
+(∇ℐ_h(ϕ_h,ϕ_h),Π_k-1^oσ-σ)_𝒯_h-⟨ (ϕ_h-Π^o_k-1ϕ_h)· t, Π_k-1^∂p-p⟩_∂𝒯_h
+(∇×ℐ_h(ϕ_h,ϕ_h),Π_k^o p-p)_𝒯_h-⟨ (ℐ_h(ϕ_h,ϕ_h)-ϕ_h)· t,Π_k^o p-p⟩_∂𝒯_h]
+⟨α_2(Π_ℓ^∂Π_k^oθ-Π_ℓ^∂θ),Π_ℓ^∂ϕ_h-ϕ_h⟩_∂𝒯_h
+[(∇^⊥ℐ_h(q_h,q_h),Π_k-1^o R- R)_𝒯_h+(Π_k^oθ-θ,∇^⊥ℐ_h(q_h,q_h))_𝒯_h]
+⟨α_3(Π_k-1^∂Π_k^op-Π_k-1^∂ p),Π_k-1^∂q_h-q_h⟩_∂𝒯_h
:=E_4+E_5+E_6+E_7.
For the term E_4, we use the same argument as E_1,E_2 with the estimate ⟨ (ϕ_h-Π^o_k-1ϕ_h)· t, Π_k-1^∂p-p⟩_∂𝒯_h≤ Ch^k∇ϕ_h_𝒯_h|p|_k to get the following estimate
E_4 ≤ Ch^k( f_k-1+g_k-1+r_k+1+σ_k+p_k+ R_k)
·(∇ϕ_h_𝒯_h+α_2^1/2(Π_ℓ^∂ϕ_h-ϕ_h)_∂𝒯_h).
For the term E_5, it holds
E_5 =⟨α_2(Π_ℓ^∂Π_k^o θ-Π_ℓ^∂θ),Π_ℓ^∂ϕ_h-ϕ_h⟩_∂𝒯_h
≤α_2^1/2(Π_k^o θ-θ)_∂𝒯_hα_2^1/2(Π_ℓ^∂ϕ_h-ϕ_h)^2_∂𝒯_h
≤ C h^kθ_k+1α_2^1/2(Π_ℓ^∂ϕ_h-ϕ_h)_∂𝒯_h.
For the term E_6, the main steps of the analysis are slightly different from those for E_1,E_2, so we give the details:
E_6 =(∇^⊥ℐ_h(q_h,q_h),Π_k-1^o R- R)_𝒯_h+(Π_k^oθ-θ,∇^⊥ℐ_h(q_h,q_h))_𝒯_h
=(t∇^⊥ℐ_h(q_h,q_h),t^-1(Π_k-1^o R- R))_𝒯_h+(𝔥^-1(Π_k^oθ-θ),𝔥∇^⊥ℐ_h(q_h,q_h))_𝒯_h
(using Cauchy-Schwartz’s inequality and (<ref>) and (<ref>))
≤ C h^kθ_k+1(𝔥∇^⊥ q_h_𝒯_h+𝔥^1/2(q_h-q_h)_∂𝒯_h)
+C h^kt^-1 R_k (t∇^⊥ q_h_𝒯_h+t𝔥^-1/2(q_h-q_h)_∂𝒯_h)
(using (<ref>))
≤ C h^kθ_k+1(𝔥∇^⊥ q_h_𝒯_h+α_3^1/2(Π^∂_k-1q_h-q_h)_∂𝒯_h)
+C h^kt^-1 R_k (t∇^⊥ q_h_𝒯_h+α_3^1/2(Π^∂_k-1q_h-q_h)_∂𝒯_h).
To deal with the term 𝔥∇^⊥q_h_𝒯_h, we proceed as follows. First, it is easy to see that
𝔥∇^⊥q_h_𝒯_h =(∇^⊥q_h,𝔥^2∇^⊥q_h)_𝒯_h/𝔥∇^⊥q_h_𝒯_h≤sup_ 0≠θ_h∈𝒴_h(∇^⊥q_h,θ_h)_𝒯_h/𝔥^-1θ_h_𝒯_h,
Next, we work on the numerator in the above expression. We have
(∇^⊥q_h,θ_h) =𝔟_h( 0, 0,θ_h, 0, 0, 0;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)
-(θ_h,∇·τ)_𝒯_h-⟨α_2(Π_ℓ^∂θ_h,Π_ℓ^∂ϕ_h-ϕ_h⟩_∂𝒯_h+⟨ θ_h· t , Π_k-1^∂ q_h-q_h⟩_∂𝒯_h.
Last, by the triangle inequality, (<ref>), (<ref>), (<ref>) and Cauchy-Schwartz’s inequality, we conclude that
𝔥∇^⊥q_h_𝒯_h ≤sup_ 0≠θ_h∈𝒴_h(θ_h,∇·τ_h)_𝒯_h+⟨α_2(Π_ℓ^∂θ_h,Π_ℓ^∂ϕ_h-ϕ_h⟩_∂𝒯_h+⟨ θ_h· t ,Π_k-1^∂ q_h-q_h⟩_∂𝒯_h/𝔥^-1θ_h_𝒯_h
+sup_ 0≠θ_h∈𝒴_h𝔟_h( 0, 0,θ_h, 0, 0, 0;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)/𝔥^-1θ_h_𝒯_h
≤ Csup_ 0≠θ_h∈𝒴_h𝔟_h( 0, 0,θ_h, 0, 0, 0;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)/∇θ_h_𝒯_h
+C(τ_h_𝒯_h+α_2^1/2(Π_ℓ^∂ϕ_h-ϕ_h)_∂𝒯_h+α_3^1/2(Π_k-1^∂ q_h-q_h)_∂𝒯_h)
≤ Csup_ 0≠ (σ_h, R_h,θ_h, θ_h,p_h,p_h)∈Σ_h ×𝒮_h×𝒴_h ×𝒴_h×𝒬_h×𝒬_h𝔟_h(σ_h, R_h,θ_h, θ_h,p_h,p_h;τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)/(σ_h, R_h,θ_h, θ_h,p_h,p_h)_𝔟_h
+C(τ_h_𝒯_h+α_2^1/2(Π_ℓ^∂ϕ_h-ϕ_h)_∂𝒯_h+α_3^1/2(Π_k-1^∂ q_h-q_h)_∂𝒯_h).
As for the term E_7, we use the same arguments as E_6 to get
E_7 =⟨α_3(Π_k-1^∂Π_k^o p-Π_k-1^∂p),Π_k-1^∂q_h-q_h⟩_∂𝒯_h
≤ Ch^k tp_k+1α_3^1/2(Π_k-1^∂q_h-q_h)_∂𝒯_h.
Combining the above estimates of E_i, we get
𝔟_h( ξ_σ, e_ R,ξ_θ, ξ_θ, ξ_p, ξ_p;τ_h, S_h,ϕ_h,ϕ_h,q_h,q_h)
≤ C h^k(r_k+1+σ_k+t^-1 R _k+θ_k+1+tp_k+1)
· (τ_h_𝒯_h+∇ϕ_h_𝒯_h+α_2^1/2(Π_ℓ^∂ϕ_h-ϕ_h)_∂𝒯_h+t∇^⊥q_h_𝒯_h+α_3^1/2(Π_k-1^∂ q_h-q_h)_∂𝒯_h)
+sup_ 0≠ (σ_h, R_h,θ_h, θ_h,p_h,p_h)∈Σ_h ×𝒮_h×𝒴_h ×𝒴_h×𝒬_h×𝒬_h𝔟_h(σ_h, R_h,θ_h, θ_h,p_h,p_h;τ_h, S_h,ϕ_h, ϕ_h,q_h,q_h)/(σ_h, R_h,θ_h, θ_h,p_h,p_h)_𝔟_h· C h^kθ_k+1.
Finally, by discrete LBB condition (<ref>), we could prove the desired conclusion.
In light of <Ref> and the triangle inequality, we easily obtain the following error estimate:
Under the condition of <Ref>, it holds
σ-σ_h_𝒯_h+t^-1 R- R_h_𝒯_h+∇θ-∇θ_h_𝒯_h+t∇^⊥p-∇^⊥p_h_𝒯_h
≤ C h^k( f_k-1+g_k-1+r_k+1+σ_k+θ_k+1+tp_k+1).
§.§ Main Result
The third problem is again a standard Poisson problem, so the a priori estimate follows by the same argument as for (<ref>).
Let ( G,ω) be the solution of (<ref>) and let ( G_h,ω_h,ω_h) be the numerical solution of (<ref>), then we have
G- G_h_𝒯_h+∇ω-∇ω_h_𝒯_h
≤ Ch^k( f_k-1+g_k-1+r_k+1+σ_k+θ_k+1+tp_k+1+ω_k+1).
For the shear stress γ, we directly have the following estimate
tγ-γ_h_𝒯_h ≤ t L- L_h_𝒯_h+λ t^-1 R- R_h_𝒯_h
≤ C h^k( f_k-1+g_k-1+r_k+1+σ_k+θ_k+1+tp_k+1).
Under the <Ref> and combining <Ref> with (<ref>), the main result is straightforward.
Let ( L,r,σ, R,θ,p, G,ω,γ) be the solution of (<ref>). Let ( L_h,r_h,r_h,σ_h, R_h,θ_h,θ_h,p_h,p_h, G_h,ω_h,ω_h,γ_h) be the numerical solution of (<ref>), then we have
L- L_h_𝒯_h+∇ r-∇ r_h_𝒯_h+σ-σ_h_𝒯_h+t^-1 R- R_h_𝒯_h+∇θ-∇θ_h_𝒯_h
+t∇^⊥p-∇^⊥p_h_𝒯_h+ G- G_h_𝒯_h+∇ω-∇ω_h_𝒯_h+tγ-γ_h_𝒯_h
≤ C h^k(f_k-1+g_k-1).
§ L^2 ERROR ESTIMATES
We introduce the following auxiliary (dual) problems to help us establish the L^2 error estimates; here e_ L= L- L_h, e_σ=σ-σ_h, e_ R= R- R_h.
(ℵ, M)-(∇ι, M) =( e_ L, M), ∀ M ∈[L^2(Ω)]^2,
(ℵ, ∇μ) =(-ξ_r, μ), ∀μ∈ H_0^1(Ω),
(𝒞^-1ζ, τ)+(∇ψ, τ) =0, ∀τ∈[L^2(Ω)]^2 × 2∩𝕊,
(λ t^-2Υ, S)
+(∇^⊥ρ, S) =0, ∀ S ∈[L^2(Ω)]^2,
(ζ, ∇ϕ)-(ϕ, ∇^⊥ρ) =(-ξ_θ, ϕ), ∀ϕ∈[H_0^1(Ω)]^2,
-(ψ, ∇^⊥ q)-(Υ, ∇^⊥ q) =0, ∀ q ∈H^1(Ω),
(ℷ, H)+( ∇ϰ, H) =0, ∀ H ∈[L^2(Ω)]^2,
(ℷ, ∇ s) =(-ξ_ω,s), ∀ s ∈ H_0^1(Ω).
When Ω is a convex polygon, by <Ref> we have
ℵ_1+ι_2 ≤ C (ξ_r_0+ e_ L_0),
ζ_1+t^-1Υ_1+ψ_2+ρ_1+tρ_2 ≤ C ξ_θ_0,
ℷ_1+ϰ_2 ≤ C ξ_ω_0.
When Ω is convex and
under the condition of <Ref>, it holds
r-r_h_𝒯_h ≤ Ch^k+1(r_k+1+g_k-1) ,
θ-θ_h_𝒯_h ≤ Ch^k+1 (g_k-1+ f_k-1+r_k+1+σ_k+θ_k+1+p_k+tp_k+1),
ω-ω_h_𝒯_h ≤ Ch^k+1(g_k-1+ f_k-1+r_k+1+σ_k
+θ_k+1+p_k+tp_k+1+ω_k+1).
Moreover, under the <Ref>, we have
r-r_h_𝒯_h
+θ-θ_h_𝒯_h
+ω-ω_h_𝒯_h≤ C h^k+1(f_k-1+g_k-1).
We only give the proofs of (<ref>) and (<ref>), because the proof of (<ref>) is similar to that of (<ref>).
Proof of (<ref>): Using the relation (<ref>) and the orthogonality of the L^2 projections, we get
ξ_r^2_𝒯_h+ e_ L^2_𝒯_h =( e_ L, e_ L)_𝒯_h+(ξ_r-ℐ_h(ξ_r,ξ_r),ξ_r)_𝒯_h+(ℐ_h(ξ_r,ξ_r),ξ_r)_𝒯_h
=(ξ_r-ℐ_h(ξ_r,ξ_r),ξ_r)_𝒯_h+(ℵ, e_ L)-(∇ι, e_ L)-(ℵ, ∇ℐ_h(ξ_r,ξ_r))
=(ξ_r-ℐ_h(ξ_r,ξ_r),ξ_r)_𝒯_h+(ℵ-Π^o_0ℵ, e_ L)_𝒯_h+(Π^o_0ℵ, e_ L)_𝒯_h
-(∇ι, e_ L)_𝒯_h-(ℵ-Π^o_0ℵ, ∇ℐ_h(ξ_r,ξ_r))_𝒯_h-(Π^o_0ℵ, ∇ℐ_h(ξ_r,ξ_r))_𝒯_h,
then, we mainly consider the following three terms
(Π^o_0ℵ, e_ L)_𝒯_h-(∇ι, e_ L)_𝒯_h-(Π^o_0ℵ, ∇ℐ_h(ξ_r,ξ_r))_𝒯_h
(by ∇·Π^o_0=0 and (<ref>), (<ref>) and integration by parts)
=(Π^o_0ℵ, e_ L)_𝒯_h+(ι,∇· e_ L)_𝒯_h-⟨ e_ L· n,ι⟩_∂𝒯_h-⟨Π^o_0ℵ· t, ξ_r⟩_∂𝒯_h,
=(Π^o_0ℵ, L- L_h)_𝒯_h+(ι,∇·( L- L_h))_𝒯_h
-⟨ L- L_h· n,ι⟩_∂𝒯_h-⟨Π^o_0ℵ· t, Π_k-1^∂ r-r_h⟩_∂𝒯_h
(by the orthogonality of projections L^2 and integration by parts)
=(Π^o_0ℵ, L)_𝒯_h-(Π^o_0ℵ, L_h)_𝒯_h-(∇ι, L)_𝒯_h
-(ι-Π^o_kι,∇· L_h)_𝒯_h-(Π^o_kι,∇· L_h)_𝒯_h+⟨ L_h· n,ι-Π^∂_k-1ι⟩_∂𝒯_h
+⟨ L_h· n,Π^∂_k-1ι⟩_∂𝒯_h-(Π^o_0ℵ,∇ r)_𝒯_h+⟨Π^o_0ℵ· t,r_h⟩_∂𝒯_h
(by the continuous form (<ref>) and the discrete form (<ref>))
=(g,ι-Π^o_kι)-⟨ L_h· n,ι-Π^∂_k-1ι⟩_∂𝒯_h-(ι-Π^o_kι,∇· L_h)_𝒯_h
-⟨α_1(Π_k-1^∂r_h-r_h),Π_k-1^∂Π^o_kι-Π^∂_k-1ι⟩_∂𝒯_h.
So the above equation leads to
ξ_r^2_𝒯_h+ e_ L^2_𝒯_h =(ξ_r-ℐ_h(ξ_r,ξ_r),ξ_r)_𝒯_h+(ℵ-Π^o_0ℵ, e_ L)_𝒯_h-(ℵ-Π^o_0ℵ, ∇ℐ_h(ξ_r,ξ_r))_𝒯_h
+(g-Π^o_k-1g,ι-Π^o_kι)-⟨ L_h· n,ι-Π^∂_k-1ι⟩_∂𝒯_h-(ι-Π^o_kι,∇· L_h)_𝒯_h
+⟨α_1(Π_k-1^∂ξ_r-ξ_r),Π_k-1^∂Π^o_kι-Π^∂_k-1ι⟩_∂𝒯_h
-⟨α_1(Π_k^o r-r),Π_k-1^∂Π^o_kι-Π^∂_k-1ι⟩_∂𝒯_h.
We denote
𝔼_1 :=|(ξ_r-ℐ_h(ξ_r,ξ_r),ξ_r)_𝒯_h-(ℵ-Π^o_0ℵ, ∇ℐ_h(ξ_r,ξ_r))_𝒯_h|
𝔼_2 :=|(ℵ-Π^o_0ℵ, e_ L)_𝒯_h+(g-Π^o_k-1g,ι-Π^o_kι)
-⟨ L_h· n,ι-Π^∂_k-1ι⟩_∂𝒯_h-(ι-Π^o_kι,∇· L_h)_𝒯_h|
𝔼_3 :=|⟨α_1(Π_k-1^∂ξ_r-ξ_r),Π_k-1^∂Π^o_kι-Π^∂_k-1ι⟩_∂𝒯_h
-⟨α_1(Π_k^o r-r),Π_k-1^∂Π^o_kι-Π^∂_k-1ι⟩_∂𝒯_h|.
By Cauchy-Schwartz's inequality, (<ref>), (<ref>), (<ref>), (<ref>) and <Ref>, it holds
𝔼_1 ≤ C h(ξ_r_𝒯_h+ℵ_1)(∇ξ_r_𝒯_h+𝔥^-1/2(ξ_r-ξ_r)_∂𝒯_h)
≤ C h^k+1(ξ_r_𝒯_h+ e_ L_𝒯_h)(r_k+1+g_k-1), (by (<ref>) and <Ref>).
By Cauchy-Schwartz's inequality and the orthogonality of projections L^2 and (<ref>) and (<ref>), it holds
𝔼_2 ≤ C hℵ_1 e_ L_𝒯_h
+C h^k+1g_k-1ι_2+Chι_2ξ_ L_𝒯_h
≤ ch^k+1(ξ_r_𝒯_h+ e_ L_𝒯_h)(r_k+1+g_k-1), (also by (<ref>) and <Ref>).
Using the same argument as 𝔼_2, we also get
𝔼_3 ≤ C hι_2(h^kr_k+1+α_1^1/2(Π_k-1^∂ξ_r-ξ_r)_∂𝒯_h)
≤ ch^k+1(ξ_r_𝒯_h+ e_ L_𝒯_h)(r_k+1+g_k-1).
So we can prove the desired conclusion
ξ_r_𝒯_h+ e_ L_𝒯_h≤ ch^k+1(r_k+1+g_k-1)
Proof of (<ref>): We give the proof of (<ref>), which is nearly the same as that of (<ref>), but some details should be clarified. By (<ref>), it holds
ξ_θ^2_𝒯_h =(ξ_θ-ℐ_h(ξ_θ,ξ_θ),ξ_θ)_𝒯_h+(ℐ_h(ξ_θ,ξ_θ),ξ_θ)_𝒯_h
=(ξ_θ-ℐ_h(ξ_θ,ξ_θ),ξ_θ)_𝒯_h+(𝒞^-1ζ, e_σ)_𝒯_h+(∇ψ, e_σ)+(λ t^-2Υ, e_ R)_𝒯_h
+(∇^⊥ρ, e_ R)_𝒯_h-(ζ, ∇ℐ_h(ξ_θ,ξ_θ))_𝒯_h+(ℐ_h(ξ_θ,ξ_θ), ∇^⊥ρ)_𝒯_h
-(ψ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h-(Υ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h
(by the orthogonality of projections L^2 and integration by parts)
=(ξ_θ-ℐ_h(ξ_θ,ξ_θ),ξ_θ)_𝒯_h+(𝒞^-1ζ-𝒞^-1Π^o_0ζ, e_σ)_𝒯_h+(𝒞^-1Π^o_0ζ, e_σ)_𝒯_h
+(∇ψ, e_σ)_𝒯_h+(λ t^-2( Υ-Π^o_0Υ), e_ R)_𝒯_h+(λ t^-2Π^o_0Υ, e_ R)_𝒯_h
+(∇^⊥ρ, e_ R)_𝒯_h-(ζ-Π^o_0ζ, ∇ℐ_h(ξ_θ,ξ_θ))_𝒯_h-(Π^o_0ζ, ∇ℐ_h(ξ_θ,ξ_θ))_𝒯_h
+(ℐ_h(ξ_θ,ξ_θ), ∇^⊥ρ)_𝒯_h-(ψ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h
-(Υ-Π^o_0Υ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h-(Π^o_0Υ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h.
To deduce the desired estimate, we focus on the following terms
(𝒞^-1Π^o_0ζ, e_σ)_𝒯_h+(∇ψ, e_σ)+(λ t^-2Π^o_0Υ, e_ R)_𝒯_h+(∇^⊥ρ, e_ R)_𝒯_h-(Π^o_0ζ, ∇ℐ_h(ξ_θ,ξ_θ))_𝒯_h
+(ℐ_h(ξ_θ,ξ_θ), ∇^⊥ρ)_𝒯_h-(ψ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h-(Π^o_0Υ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h
(by ∇·Π^o_0=0 and (<ref>), (<ref>) and integration by parts)
=(𝒞^-1Π^o_0ζ, σ)_𝒯_h-(𝒞^-1Π^o_0ζ, σ_h)_𝒯_h+(∇ψ, σ)_𝒯_h-(∇ψ,σ_h)_𝒯_h
+(λ t^-2Π^o_0Υ, R)_𝒯_h-(λ t^-2Π^o_0Υ, R_h)_𝒯_h+(∇^⊥ρ, R)_𝒯_h-(∇^⊥ρ, R_h)_𝒯_h
-⟨Π^o_0ζ n, θ⟩_∂𝒯_h+⟨Π^o_0ζ n, θ_h⟩_∂𝒯_h+(ℐ_h(ξ_θ,ξ_θ), ∇^⊥ρ-∇^⊥Π^o_kρ)_𝒯_h
+(θ, ∇^⊥Π^o_kρ)_𝒯_h-(θ_h, ∇^⊥Π^o_kρ)_𝒯_h-(ψ-Π^o_kψ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h
-(Π^o_kψ, ∇^⊥p)_𝒯_h-(∇×Π^o_kψ, p_h)_𝒯_h-⟨Π^o_kψ· t,Π^∂_k-1p-p⟩_∂𝒯_h
+⟨Π^o_kψ· t,p_h⟩_∂𝒯_h-⟨Π^o_0Υ· t,p⟩_∂𝒯_h+⟨Π^o_0Υ· t,p_h⟩_∂𝒯_h
(by the orthogonality of projections L^2 and integration by parts)
=(𝒞^-1σ, Π^o_0ζ)_𝒯_h+(σ,∇ψ )_𝒯_h+(λ t^-2 R,Π^o_0Υ)_𝒯_h+( R,∇^⊥ρ)_𝒯_h-(Π^o_0ζ,∇θ)_𝒯_h
+(θ,∇^⊥ρ)_𝒯_h-(ψ,∇^⊥p)-(Π^o_0Υ,∇^⊥p)_𝒯_h-(θ,∇^⊥ρ-∇^⊥Π^o_kρ)_𝒯_h
-(Π^o_kψ-ψ,∇^⊥p)_𝒯_h-⟨ψ, σ_h n⟩_∂𝒯_h-⟨ρ, R_h t⟩_∂𝒯_h
-(𝒞^-1Π^o_0ζ, σ_h)_𝒯_h+(Π^o_kψ,∇·σ_h)_𝒯_h-(λ t^-2Π^o_0Υ, R_h)_𝒯_h+(Π^o_kρ,∇× R_h)
-(∇·Π^o_0ζ,θ_h)_𝒯_h+⟨Π^o_0ζ n, θ_h⟩_∂𝒯_h-(θ_h, ∇^⊥Π^o_kρ)_𝒯_h-(∇×Π^o_kψ, p_h)_𝒯_h
+⟨Π^o_kψ· t,p_h⟩_∂𝒯_h-(∇×Π^o_0Υ,p_h)_𝒯_h+⟨Π^o_0Υ· t,p_h⟩_∂𝒯_h-⟨Π^o_kψ· t,Π^∂_k-1p-p⟩_∂𝒯_h
+(ℐ_h(ξ_θ,ξ_θ), ∇^⊥ρ-∇^⊥Π^o_kρ)_𝒯_h-(ψ-Π^o_kψ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h
(by the continuous form (<ref>) and the discrete form (<ref>))
=( L+ f,ψ)_𝒯_h-( L_h+ f,Π^o_kψ)_𝒯_h+⟨θ_h· t,Π^∂_k-1ρ-Π^o_kρ⟩_∂𝒯_h
-⟨Π^o_kψ· t,Π^∂_k-1p-p⟩_∂𝒯_h-(θ,∇^⊥ρ-∇^⊥Π^o_kρ)_𝒯_h-(Π^o_kψ-ψ,∇^⊥p)_𝒯_h
+(ℐ_h(ξ_θ,ξ_θ), ∇^⊥ρ-∇^⊥Π^o_kρ)_𝒯_h-(ψ-Π^o_kψ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h
+⟨α_2(Π_ℓ^∂θ_h-θ_h),Π_ℓ^∂Π^o_kψ-Π_ℓ^∂ψ⟩_∂𝒯_h+⟨α_3(Π_k-1^∂p_h-p_h),Π_k-1^∂Π^o_kρ-Π_k-1^∂ρ⟩_∂𝒯_h
so, we can prove
ξ_θ^2_𝒯_h =(ξ_θ-ℐ_h(ξ_θ,ξ_θ),ξ_θ)_𝒯_h+(𝒞^-1ζ-𝒞^-1Π^o_0ζ, e_σ)_𝒯_h+(λ t^-2( Υ-Π^o_0Υ), e_ R)_𝒯_h
-(ζ-Π^o_0ζ, ∇ℐ_h(ξ_θ,ξ_θ))_𝒯_h-(Υ-Π^o_0Υ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h+( L+ f,ψ)_𝒯_h
-( L_h+ f,Π^o_kψ)_𝒯_h+⟨θ_h· t,Π^∂_k-1ρ-ρ⟩_∂𝒯_h-⟨Π^o_kψ· t,Π^∂_k-1p-p⟩_∂𝒯_h
-(θ,∇^⊥ρ-∇^⊥Π^o_kρ)_𝒯_h-(Π^o_kψ-ψ,∇^⊥p)_𝒯_h+(ℐ_h(ξ_θ,ξ_θ), ∇^⊥ρ-∇^⊥Π^o_kρ)_𝒯_h
-(ψ-Π^o_kψ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h+⟨α_2(Π_ℓ^∂θ_h-θ_h),Π_ℓ^∂Π^o_kψ-Π_ℓ^∂ψ⟩_∂𝒯_h
+⟨α_3(Π_k-1^∂p_h-p_h),Π_k-1^∂Π^o_kρ-Π_k-1^∂ρ⟩_∂𝒯_h.
Likewise, we also denote
𝔼_4 :=|(ξ_θ-ℐ_h(ξ_θ,ξ_θ),ξ_θ)_𝒯_h-(Υ-Π^o_0Υ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h-(ζ-Π^o_0ζ, ∇ℐ_h(ξ_θ,ξ_θ))_𝒯_h
+(ℐ_h(ξ_θ,ξ_θ), ∇^⊥ρ-∇^⊥Π^o_kρ)_𝒯_h-(ψ-Π^o_kψ, ∇^⊥ℐ_h(ξ_p,ξ_p))_𝒯_h|
𝔼_5 :=|( L+ f,ψ)_𝒯_h-( L_h+ f,Π^o_kψ)_𝒯_h+(𝒞^-1ζ-𝒞^-1Π^o_0ζ, e_σ)_𝒯_h
+(λ t^-2( Υ-Π^o_0Υ), e_ R)_𝒯_h-(θ,∇^⊥ρ-∇^⊥Π^o_kρ)_𝒯_h-(Π^o_kψ-ψ,∇^⊥p)_𝒯_h|
𝔼_6 :=|⟨θ_h· t,Π^∂_k-1ρ-ρ⟩_∂𝒯_h-⟨Π^o_kψ· t,Π^∂_k-1p-p⟩_∂𝒯_h|
𝔼_7 :=|⟨α_2(Π_ℓ^∂θ_h-θ_h),Π_ℓ^∂Π^o_kψ-Π_ℓ^∂ψ⟩_∂𝒯_h+⟨α_3(Π_k-1^∂p_h-p_h),Π_k-1^∂Π^o_kρ-Π_k-1^∂ρ⟩_∂𝒯_h|.
By Cauchy-Schwartz's inequality, (<ref>), (<ref>), (<ref>), (<ref>), <Ref> and HDG-Poincare's inequality (<ref>), it holds
𝔼_4 ≤ C h(ξ_θ_𝒯_h+ρ_1+ζ_1)(∇ξ_θ_𝒯_h+𝔥^-1/2(Π^∂_ℓξ_θ-ξ_θ)_𝒯_h)
+C h t^-1Υ_1· t(∇ξ_p_𝒯_h+𝔥^-1/2(ξ_p-ξ_p)_𝒯_h)
+C hψ_2(𝔥∇ξ_p_𝒯_h+𝔥^1/2(ξ_p-ξ_p)_𝒯_h)
≤ Ch^k+1ξ_θ_𝒯_h(r_k+1+| f|_k-1+σ_k+θ_k+1+tp_k+1)
(using the same argument for E_6 and by (<ref>) and <Ref>).
We note (Π^o_k-1 L+Π^o_k-1 f, ψ-Π^o_kψ)=0 and by Cauchy-Schwartz's inequality, the orthogonality of projections L^2, (<ref>), (<ref>) and (<ref>), it holds
𝔼_5 ≤ C h^k+1(ψ_2+ρ_1)(r_k+1+g_k-1+| L|_k+ f_k-1+θ_k+1+p_k)
+C hζ_1 e_σ_0+C hΥ_1 e_ R_0
≤ C h^k+1ξ_θ_𝒯_h(g_k-1+ f_k-1+r_k+1+| L|_k+σ_k+t^-1 R _k+θ_k+1+p_k).
By the orthogonality of projections L^2, it holds
𝔼_6 =|-⟨(ξ_θ-Π^o_k-1ξ_θ)· t,Π^∂_k-1ρ-ρ⟩_∂𝒯_h+⟨(θ-Π^o_kθ)· t,Π^∂_k-1ρ-ρ⟩_∂𝒯_h
-⟨(Π^o_kψ-ψ)· t,Π^∂_k-1p-p⟩_∂𝒯_h|.
So by (<ref>) and (<ref>) and (<ref>)
𝔼_6 ≤ Ch∇ξ_θ_𝒯_hρ_1+Ch^k+1θ_k+1ρ_1+Ch^k+1ψ_2p_k
≤ C h^k+1ξ_θ_𝒯_h(r_k+1+| f|_k-1+σ_k+t^-1 R _k+θ_k+1+tp_k+1+p_k).
By the orthogonality of projections L^2, it holds
𝔼_7 =|⟨α_2(Π_ℓ^∂ξ_θ-ξ_θ),Π_ℓ^∂Π^o_kψ-Π_ℓ^∂ψ⟩_∂𝒯_h-⟨α_2(Π_k^oθ- θ),Π_ℓ^∂Π^o_kψ-Π_ℓ^∂ψ⟩_∂𝒯_h
+⟨α_3(Π_k-1^∂ξ_p-ξ_p),Π_k-1^∂Π^o_kρ-Π_k-1^∂ρ⟩_∂𝒯_h-⟨α_3(Π_k^op-p),Π_k-1^∂Π^o_kρ-Π_k-1^∂ρ⟩_∂𝒯_h|.
so we get by (<ref>), (<ref>) and <Ref>
𝔼_7 ≤ C hψ_2α_2^1/2(Π_ℓ^∂ξ_θ-ξ_θ)_∂𝒯_h+Ch^k+1θ_k+1ψ_2
+Ch(ρ_1+tρ_2)α_3^1/2(Π_k-1^∂ξ_p-ξ_p)_∂𝒯_h+Ch^k+1(p_k+tp_k+1)(ρ_1+tρ_2)
≤ Ch^k+1ξ_θ_𝒯_h(r_k+1+| f|_k-1+σ_k+t^-1 R _k+θ_k+1+tp_k+1+p_k).
Combining the estimates of 𝔼_i, we immediately derive (<ref>), which completes the proof.
§ IMPLEMENTATION
For (<ref>) and (<ref>), we arrange the interior unknowns and boundary unknowns separately, then we can get the following linear system:
[ A_11 A_12; A_12^T -A_22 ][ x_1; x_2 ]
=
[ b_1; b_2 ],
where A_11 is piecewise (element-wise) block diagonal, which means the inverse of A_11 can be obtained efficiently. Therefore, instead of solving (<ref>), we solve the Schur complement system
(A_22+A_12^TA_11^-1A_12)x_2=-b_2+A_12^TA_11^-1b_1,
which is an SPD system, and any AMG solver or AMG-preconditioned solver can solve it efficiently.
After solving (<ref>), we obtain x_1 by
x_1=A_11^-1(b_1-A_12x_2).
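As an illustration, a minimal Eigen-based sketch of this elimination is given below. The matrix and vector names are placeholders (not taken from our code), and in practice A_11^-1 is applied block by block rather than through a global factorization, with the Schur system handed to an AMG-preconditioned solver such as hypre's BoomerAMG.

#include <Eigen/Sparse>
#include <Eigen/SparseCholesky>
#include <Eigen/IterativeLinearSolvers>

using SpMat = Eigen::SparseMatrix<double>;
using Vec   = Eigen::VectorXd;

// Solve [A11 A12; A12^T -A22] [x1; x2] = [b1; b2] by static condensation:
//   (A22 + A12^T A11^{-1} A12) x2 = -b2 + A12^T A11^{-1} b1,
//   x1 = A11^{-1} (b1 - A12 x2).
void solve_condensed(const SpMat& A11, const SpMat& A12, const SpMat& A22,
                     const Vec& b1, const Vec& b2, Vec& x1, Vec& x2) {
  // A11 is element-wise block diagonal, so this factorization is cheap;
  // a production code would simply invert it block by block.
  Eigen::SimplicialLDLT<SpMat> A11solver(A11);

  SpMat A11inv_A12 = A11solver.solve(A12);               // A11^{-1} A12 (sparse rhs)
  SpMat S = A22 + SpMat(A12.transpose()) * A11inv_A12;   // SPD Schur complement
  Vec   A11inv_b1 = A11solver.solve(b1);
  Vec   rhs = -b2 + A12.transpose() * A11inv_b1;

  // Any SPD solver works here; in practice an AMG-preconditioned CG is used.
  Eigen::ConjugateGradient<SpMat, Eigen::Lower | Eigen::Upper> cg(S);
  x2 = cg.solve(rhs);

  Vec r1 = b1 - A12 * x2;
  x1 = A11solver.solve(r1);
}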
For (<ref>), we still arrange the interior unknowns and boundary unknowns separately. Therefore, we still have a system of the form (<ref>), and we again turn to solving (<ref>) and (<ref>). The difference is that here (<ref>) is not an SPD system; the system (<ref>) is a saddle-point system of the form
[ B_11 B_12; B_12^T -B_22 ][ θ; p ]
=
[ c_1; c_2 ],
This time the inverse of B_11 cannot be obtained efficiently. Still, we can solve the system
(B_22+B_12^TB_11^-1B_12)p=-c_2+B_12^TB_11^-1c_1,
by iterative methods, for example the conjugate gradient (CG) method. Remember that we do not need to compute B_11^-1 explicitly; instead, an AMG-preconditioned solver provides the matrix-vector product B_11^-1v for any vector v of matching dimension.
At last we simply use the following formulation to recover θ:
θ=B_11^-1(c_1-B_12p).
We notice that in all steps of our calculation, the time is mainly spent on solving the system (<ref>).
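A hedged sketch of this step is given below: the Schur complement B_22+B_12^T B_11^-1 B_12 is never assembled, and each CG iteration applies it through an inner solve with B_11 (in our setting an AMG-preconditioned solve, e.g. via hypre). The function and variable names are ours, for illustration only.

#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <cmath>
#include <functional>

using SpMat = Eigen::SparseMatrix<double>;
using Vec   = Eigen::VectorXd;

// Plain CG for S p = c, where S is available only through its action apply_S.
Vec schur_cg(const std::function<Vec(const Vec&)>& apply_S,
             const Vec& c, double tol = 1e-10, int max_iter = 2000) {
  Vec p = Vec::Zero(c.size());
  Vec r = c;                // residual of the zero initial guess
  Vec d = r;
  double rs_old = r.squaredNorm();
  for (int it = 0; it < max_iter && std::sqrt(rs_old) > tol; ++it) {
    Vec Sd = apply_S(d);
    const double alpha = rs_old / d.dot(Sd);
    p += alpha * d;
    r -= alpha * Sd;
    const double rs_new = r.squaredNorm();
    d = r + (rs_new / rs_old) * d;
    rs_old = rs_new;
  }
  return p;
}

// Wiring (schematic): solve_B11 stands for the inner AMG-preconditioned solve.
//   Vec p = schur_cg([&](const Vec& v) {
//             return Vec(B22 * v + B12.transpose() * solve_B11(B12 * v)); },
//             -c2 + B12.transpose() * solve_B11(c1));
//   Vec theta = solve_B11(c1 - B12 * p);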
§ NUMERICAL EXPERIMENTS
This section provides some numerical results to verify the performance of the HDG scheme. All examples are coded in C++ with the library Eigen<cit.> and the library Hypre<cit.>.
§.§ Numerical Results
We compute a square plate with a smooth exact solution to illustrate the numerical results. This solution is taken from <cit.>. The domain Ω is the unit square (0, 1)^2, and the corresponding parameters are taken as E = 1.0, ν= 0.3 and κ=5/6. We mainly consider the hard clamped boundary condition. The exact solution θ,ω is of the form
{ θ=(100y^3(y-1)^3x^2(x-1)^2(2x-1), 100x^3(x-1)^3y^2(y-1)^2(2y-1))^T,
ω=100(1/3x^3(x-1)^3y^3(y-1)^3-2t^2/5(1-ν)[y^3(y-1)^3x(x-1)(5x^2-5x+1)
+x^3(x-1)^3y(y-1)(5y^2-5y+1)] ).
.
Therefore, the body force and transverse loading are
{ f=(0,0)^T,
g = (x^3(x-1)^3(5y^2-5y+1)+y^3(y-1)^3(5x^2-5x+1)
+x(x-1)y(y-1)(5x^2-5x+1)(5y^2-5y+1)).
.
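For reference, the exact rotation and deflection above can be evaluated pointwise as follows; this is a direct transcription of the displayed formulas (the function names are ours, and only the stated values E=1, ν=0.3 are assumed).

#include <array>

// Exact rotation theta = (theta_1, theta_2) of the clamped test problem.
std::array<double, 2> exact_theta(double x, double y) {
  const double x2 = x * x * (x - 1.0) * (x - 1.0);       // x^2 (x-1)^2
  const double y2 = y * y * (y - 1.0) * (y - 1.0);       // y^2 (y-1)^2
  const double x3 = x2 * x * (x - 1.0);                  // x^3 (x-1)^3
  const double y3 = y2 * y * (y - 1.0);                  // y^3 (y-1)^3
  return {100.0 * y3 * x2 * (2.0 * x - 1.0),
          100.0 * x3 * y2 * (2.0 * y - 1.0)};
}

// Exact deflection omega; t is the plate thickness and nu the Poisson ratio.
double exact_omega(double x, double y, double t, double nu = 0.3) {
  const double x3 = x * x * x * (x - 1.0) * (x - 1.0) * (x - 1.0);  // x^3 (x-1)^3
  const double y3 = y * y * y * (y - 1.0) * (y - 1.0) * (y - 1.0);  // y^3 (y-1)^3
  const double bracket =
      y3 * x * (x - 1.0) * (5.0 * x * x - 5.0 * x + 1.0) +
      x3 * y * (y - 1.0) * (5.0 * y * y - 5.0 * y + 1.0);
  return 100.0 * (x3 * y3 / 3.0 - 2.0 * t * t / (5.0 * (1.0 - nu)) * bracket);
}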
We compute three plates with plate thickness t=1, t=0.1 and t=0.01. The results are displayed in the following tables. From the results we can see that our HDG scheme, with k=1,2,3, yields optimal convergence rates which are uniform with respect to the plate thickness t: (k+1)-th order for θ-θ_h_𝒯_h and ω-ω_h_𝒯_h, and k-th order for σ-σ_h_𝒯_h and γ-γ_h_𝒯_h. These agree with the theoretical results of <Ref> and <Ref>. "Iter" in the tables denotes the number of iterations for the linear system (<ref>). Here we use a Schur-complement CG iteration to solve the linear system.
§.§ Contrast of Iterations
We next consider the convergence rate of solving the discrete system of our method and compare it with the method presented in <cit.>, called the "Old Method" in <Ref>. In practice the system would typically be solved iteratively, so it is of interest to examine the condition number; it is well known that the convergence can be slow if the condition number is large. Let us briefly analyze the results displayed in <Ref>. We use the same stopping criterion, 1.0E-10, for both methods and solve the problem introduced in <Ref>. As the mesh is refined, doubling the number of elements per line, the iteration count of the old method approximately doubles, which simply shows that the iterations increase as h decreases. However, the iterations of our method grow more slowly than those of the old method. This suggests that our new method may have a better condition number than the old method.
§ ACKNOWLEDGMENTS
We would like to acknowledge the assistance of volunteers in putting
together this example manuscript and supplement.
10
200359
R. A. Adams and J. J. Fournier, The sobolev spaces
w^m,p(ω), in Sobolev Spaces, vol. 140 of Pure and Applied
Mathematics, Elsevier, 2003, pp. 59–78,
<https://doi.org/10.1016/S0079-8169(03)80005-3>.
Arnold1981
D. Arnold, Discretization by finite elements of a model parameter
dependent problem., Numerische Mathematik, 37 (1981), pp. 405–422,
<http://eudml.org/doc/132740>.
arnold2002approximation
D. Arnold, D. Boffi, and R. Falk, Approximation by quadrilateral
finite elements, Mathematics of computation, 71 (2002), pp. 909–922.
doi:10.1137/S0036142901384162
D. N. Arnold, F. Brezzi, B. Cockburn, and L. D. Marini, Unified
analysis of discontinuous galerkin methods for elliptic problems, SIAM
Journal on Numerical Analysis, 39 (2002), pp. 1749–1779,
<https://doi.org/10.1137/S0036142901384162>.
ARNOLD20073660
D. N. Arnold, F. Brezzi, R. S. Falk, and L. D. Marini, Locking-free
reissner–mindlin elements without reduced integration, Computer Methods in
Applied Mechanics and Engineering, 196 (2007), pp. 3660–3671,
<https://doi.org/10.1016/j.cma.2006.10.023>.
Special Issue Honoring the 80th Birthday of Professor Ivo Babuška.
Arnold2005AFO
D. N. Arnold, F. Brezzi, and L. D. Marini, A family of discontinuous
galerkin finite elements for the reissner–mindlin plate, Journal of
Scientific Computing, 22-23 (2005), pp. 25–45,
<https://doi.org/10.1007/s10915-004-4134-8>.
arnold1989uniformly
D. N. Arnold and R. S. Falk, A uniformly accurate finite element
method for the reissner–mindlin plate, SIAM Journal on Numerical Analysis,
26 (1989), pp. 1276–1290.
zbMATH03340747
I. Babuška, Error-bounds for finite element method, Numer.
Math., 16 (1971), pp. 322–333, <https://doi.org/10.1007/BF02165003>.
doi:10.1137/0710071
I. Babuška and M. Zlámal, Nonconforming elements in the
finite element method with penalty, SIAM Journal on Numerical Analysis, 10
(1973), pp. 863–875, <https://doi.org/10.1137/0710071>.
Bathe2
K.-J. Bathe and F. Brezzi, A simplified analysis of two plate
|
http://arxiv.org/abs/2307.03958v1 | 20230708114851 | Secrets Revealed in Container Images: An Internet-wide Study on Occurrence and Impact | [
"Markus Dahlmanns",
"Constantin Sander",
"Robin Decker",
"Klaus Wehrle"
] | cs.CR | [
"cs.CR",
"cs.NI"
] |
An Internet-wide Study on Secrets in Container Images]Secrets Revealed in Container Images:
An Internet-wide Study on Occurrence and Impact
Markus Dahlmanns, Constantin Sander, Robin Decker, Klaus Wehrle
Communication and Distributed Systems, RWTH Aachen University, Aachen, Germany
{dahlmanns, sander, decker, wehrle}@comsys.rwth-aachen.de
[email protected]
RWTH Aachen University
Germany
[email protected]
RWTH Aachen University
Germany
[email protected]
RWTH Aachen University
Germany
[email protected]
RWTH Aachen University
Germany
Containerization allows bundling applications and their dependencies into a single image.
The containerization framework Docker eases the use of this concept and enables sharing images publicly, gaining high momentum.
However, it can lead to users creating and sharing images that include private keys or API secrets—either by mistake or out of negligence.
This leakage impairs the creator's security and that of everyone using the image.
Yet, the extent of this practice and how to counteract it remains unclear.
In this paper, we analyze numnonemptyimages images from Docker Hub and from privatemeasurementnumtotalmax other private registries, unveiling that pctaffectedimages of images indeed include secrets.
Specifically, we find validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches leaked API secrets, both opening a large attack surface, i.e., putting authentication and confidentiality of privacy-sensitive data at stake and even allowing active attacks.
We further document that those leaked keys are used in the wild:
While we discovered casignedcerts certificates relying on compromised keys being issued by public certificate authorities, based on further active Internet measurements, we find 20220901numuniquehosts TLS and SSH hosts using leaked private keys for authentication.
To counteract this issue, we discuss how our methodology can be used to prevent secret leakage and reuse.
[500]Security and privacy Network security
[500]Security and privacy Key management
Klaus Wehrle
August 12, 2023
===================
§ INTRODUCTION
While originally developed to isolate applications <cit.>, containerization has become a new cornerstone of interconnected services as it significantly eases their deployment <cit.>.
To this end, Docker, the most prominent containerization framework <cit.>, uses prebuilt images that include all software dependencies necessary to deploy an application <cit.>.
Users only need to download an image from a registry or can derive their own image by adapting its configuration and included files.
These new images can then again be uploaded building a whole ecosystem of containerized applications.
For example, Docker Hub, the official Docker registry, comprises more than 9000000 images <cit.> anybody can use.
With this level of public exposure, any mistake during image creation can have drastic consequences.
Most notably, including confidential secrets such as cryptographic keys or API secrets, by mistake or out of negligence, can introduce two security issues:
(i) attackers can misuse compromised secrets, leading to potential loss of data, money, privacy, or control, and
(ii) administrators instantiating images can rely on broken security, e.g., paving the way for Man-in-the-Middle attacks.
Aggravatingly, there is no easy tooling to show which files have been added—accidentally adding a secret is thus much easier than identifying such an incident.
Indeed, related work traced three reused private keys authenticating 6000 (Industrial) Internet of Things services back to their occurrence in a Docker image <cit.>.
Additionally, blog entries produced anecdotal evidence that Docker images include further confidential security material <cit.>.
However, comprehensive analyses on revealed security secrets at scale do not exist in this realm.
Instead, such analyses focus on GitHub repositories <cit.>.
Hence, the extent for container images is unknown.
In this paper, we thus comprehensively study whether Docker images include confidential security material and whether administrators reuse these compromised secrets at large scale by
(i) scanning publicly available Docker images for confidential security material, and
(ii) measuring whether these secrets are used in practice on production deployments.
To this end, we analyze images available on the official and largest registry Docker Hub as well as examine the entire IPv4 address space for public registries and services relying their security on compromised secrets.
Contributions. Our main contributions are as follows.
* We found privatemeasurementnumtotalmax Docker registries in the IPv4 address space that contain not only secrets but also potentially confidential software and likely allow attackers to replace images, e.g., with malware.
* After filtering test secrets, we identified totalvalidmatches leaked distinct secrets, i.e., validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches API secrets, in numaffectedimages images (pctaffectedimages of images we scanned are affected).
* We show that operators use 20220901corrFingerprint compromised private keys in practice affecting the authenticity of 20220901numuniquehosts Internet-reachable hosts providing, i.a., HTTP, AMQP, MQTT, and LDAP services.
* We discuss improvements of the Docker paradigm to prevent secret leakage and reuse in the future as well as provide our software used to find and verify secrets <cit.> to support mitigation.
§ A PRIMER ON THE DOCKER PARADIGM
In contrast to other containerization frameworks, Docker <cit.> not only provides an isolated execution environment for applications.
Instead, Docker specifies an easy-to-use paradigm to create, share and deploy ready-to-run container images <cit.>.
These images constitute the filesystems of the containers and include all dependencies necessary for the actual applications, i.e., they can include all kinds of files added during creation.
The completeness of these images allows to share them via (publicly accessible) registries.
Figure <ref> shows the structure and lifecycle of Docker images in detail, from creating images to sharing and running them.
Image Creation
To create an image, Docker uses a user-defined Dockerfile <cit.> to specify the image ingredients.
First 1, the Dockerfile references another image, the base image, which is downloaded from a registry and comprises the initial file system of the new image.
Second 2, image layers consisting of differential snapshots of the file system after running commands from the Dockerfile are created and stacked on each other <cit.>.
These commands can include shell statements to, e.g., compile an application running in the container.
Furthermore, specific commands exist to embed environment variables or to add files from the host system into the image <cit.>.
While the files can be, e.g., source code or further dependencies, image creators can also easily and accidentally include (cryptographic) secrets into the image or its environment variables, putting the service's security at risk when leaked.
Once an image has been fully created, it exists as a self-containing unit, which is ready-to-run but also allows little insight on what has been added.
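As a small illustration of how little insight a finished image offers, the following Python sketch (our illustration, not part of the paper's tooling; the image name is a placeholder) lists the layer-creating build steps and the environment variables baked into a locally available image via the Docker CLI.

import json
import subprocess

IMAGE = "example/webapp:latest"  # hypothetical image name

# One history entry per layer-creating instruction (FROM, RUN, COPY, ENV, ...).
steps = subprocess.run(
    ["docker", "history", "--no-trunc", "--format", "{{.CreatedBy}}", IMAGE],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# The image configuration carries the environment variables set via ENV.
inspect = json.loads(subprocess.run(
    ["docker", "image", "inspect", IMAGE],
    capture_output=True, text=True, check=True,
).stdout)

for step in steps:
    print("build step:", step)
for env in inspect[0]["Config"]["Env"]:
    print("env:", env)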
Image Push
After generating the image, creators can push it to a registry <cit.>, e.g., the official and largest registry Docker Hub <cit.>, allowing them to easily deploy containers among their own fleet of servers, but also to share the image with other users <cit.>.
To this end, the image layers are uploaded to the registry under a repository name and tag 3.
Thereby, the repository name typically represents the application in the image, and the tag describes a version.
Conventionally, creators tag the newest image in a repository with latest.
Container Deployment
To run a Docker container, users pull an image from a registry.
When pulling, users first request an image manifest <cit.> from the registry, including meta information about the image and its layers.
After downloading all layers 4, Docker merges the content composing the file system for the new container 5 <cit.>.
The application then finds an unchanged file system with all content provided by the image creator, i.e., all dependencies but also potentially added secrets, and can very likely provide services to the public Internet.
Since numerous containers of various users can be based on a single image, any included, and thus compromised, secret could affect several deployments.
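For concreteness, the following Python sketch mirrors the metadata part of such a pull against a registry speaking the Docker Registry HTTP API v2; the registry address, repository, and tag are placeholders, and unauthenticated access is an assumption that only holds for open registries.

import requests

REGISTRY = "http://registry.example.org:5000"  # placeholder registry
REPO, TAG = "webapp", "latest"                 # placeholder repository and tag
MANIFEST_TYPE = "application/vnd.docker.distribution.manifest.v2+json"

# Step 1: the manifest lists the configuration blob and all layer digests.
manifest = requests.get(
    f"{REGISTRY}/v2/{REPO}/manifests/{TAG}",
    headers={"Accept": MANIFEST_TYPE}, timeout=10,
).json()

# Step 2: the small configuration blob alone already exposes ENV variables.
config = requests.get(
    f"{REGISTRY}/v2/{REPO}/blobs/{manifest['config']['digest']}", timeout=10,
).json()
print(config["config"].get("Env", []))

# Step 3: the referenced layers are what a pull downloads and merges into the
# container's file system (and what can later be scanned for secrets).
for layer in manifest["layers"]:
    print(layer["digest"], layer["size"])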
The Docker paradigm eases distribution and deployment of applications.
However, insight into what is added in images and up- or downloaded from a registry can be lost.
Thus, secrets can be leaked and reused, impairing Internet-reachable services at scale.
§ RELATED WORK
Three streams of research motivate our analysis of confidential security material in Docker images: studies that detect leaked security material, research on publicly available Docker images, and Internet-wide scans disclosing security weaknesses at scale.
Actively Leaked Security Material
Currently, the search for leaked security material focuses on code repositories.
Several studies detected the leakage of passwords <cit.>, SSH private keys <cit.>, Amazon Cloud API keys <cit.>, and Slack API keys <cit.>, using the built-in search of GitHub.
To allow broader searches, researchers employed regular expressions but focused on specific file types <cit.> or code snippets <cit.>, i.e., the scale of this research was limited.
In contrast, Meli et al. performed a large scale study without focusing on specific file types, showing that ∼3.5 of the 4 analyzed code repositories on GitHub included leaked secrets <cit.>.
Further approaches use machine learning to improve the detection by relying on code semantics <cit.>, false-positive detection <cit.>, or both requiring further user input <cit.>.
Away from GitHub, research proposed methods to investigate various platforms <cit.> and proved the presence of secrets in publicly available Android apps <cit.>.
A recent study underlines that most developers experienced secret leakage, and guidelines are insufficient for prevention <cit.>.
While retroactively deleting leaked secrets does not help <cit.>, (non)-commercial approaches, e.g., GitGuardian <cit.>, TruffleHog <cit.>, or Gitrob <cit.>, aim at preventing secret leakage for Git.
Docker Images
Besides Git, researchers and developers early on assumed, without evidence, that images for virtual machines or Docker leak secrets and provided countermeasures <cit.>.
Nevertheless, non-academic Web-blog studies <cit.> still find leaked secrets in images on Docker Hub.
However, these studies either limit their scale <cit.> to a few thousand images/secrets or restrict their methodology <cit.> to process large amounts of available images.
The latter study <cit.> finds 46076 affected images among 6.3 images on Docker Hub, but only considers information available in Dockerfiles, e.g., specific file paths.
Meanwhile, SecretScanner <cit.>, a smaller secret search tool, implements a function allowing users to find secrets in Docker images.
Still, a comprehensive, large-scale, and methodology-driven analysis of the security weaknesses introduced by leaked security material is missing.
Instead, large-scale studies on Docker images focused on data compression <cit.>, software vulnerabilities <cit.>, or typosquatting of image names <cit.>.
Hence, as of now, it is unclear how widespread secret leakage is in images on Docker Hub as well as private Internet-reachable registries.
Moreover, it is unknown to what extent these compromised images are then used on the Internet and whether they weaken security at scale.
Internet Measurements
For understanding deployment security at scale, Internet-wide measurements have been a valuable tool in the past.
Internet scan services, such as Shodan <cit.> or Censys <cit.>, fetch and publish meta-information, e.g., security configurations, on Internet-reachable services.
Although these services often helped researchers analyze the security of connected devices, e.g., cars <cit.> or (insecure) Industrial IoT (IIoT) deployments <cit.>, they usually do not see all deployments <cit.>.
Hence, researchers frequently conduct their own active Internet measurements, e.g., using ZMap <cit.>.
On the web, these measurements allowed to analyze the deployment of new TLS versions <cit.> and revealed wide security configuration mistakes <cit.> or implementation deficits <cit.>.
Aside the web, researchers assessed the security of SSH services <cit.> and key-value stores leaking confidential data <cit.>.
For the IoT and IIoT, research revealed many deployments relying on vulnerable software <cit.> and communicating without any security mechanism <cit.>, e.g., access control.
Even with built-in security features, operators often configure such services insecurely <cit.>.
For example, a massive reuse of certificates was traced back to a Docker image including certificates and corresponding private keys <cit.> jeopardizing the authenticity of numerous deployments.
Based on this, we claim that it is probable that there are further public Docker images that wrongly include confidential secrets and harm security on the Internet—especially when looking at the sheer size of Docker and Docker Hub.
Although the broad leakage of security secrets in code repositories is well understood, the spread of revealed secrets in Docker images and the introduced security risk for the Internet are unknown.
However, known secret leakage detection techniques and Internet measurements are predestined to shed light on these issues.
§ COMPOSING OUR DATASET
To answer whether Docker image creators actively compromise security secrets by publishing them in openly available Docker images, we set out and retrieve images from Docker Hub (Section <ref>) and publicly reachable private registries (Section <ref>).
§.§ Retrieving Images from Docker Hub
Table <ref> guides through our composition process on Docker Hub, which has three tasks:
(i) composing a list of repositories,
(ii) selecting one image per repository to spread our analysis widely, and
(iii) identifying the layers the images consist of.
§.§.§ Repositories
While Docker Hub limits the number of image downloads <cit.> and we cannot download and analyze all 15 of images available on Docker Hub <cit.> due to runtime and bandwidth restrictions, our analysis requires a selection of repositories of interest.
Furthermore, Docker Hub does not support listing all available images to choose from.
Hence, we use specific search terms to get images users retrieve when searching via the Web interface.
Our search terms (which we elaborate in more detail in Appendix <ref>) build two query groups (Table <ref> (left));
Standard comprises mainstream communication protocol names <cit.> and frequently used technologies <cit.> for a wide analysis of images referencing current issues.
For comparison and more focusing on a specific area, we choose the Industrial Internet of Things (IIoT) as past studies showed a great susceptibility to security faults <cit.>, i.e., IIoT includes protocol names from this area.
We list the number of repositories covered by our analysis per query group, i.e., the sum of found repositories of all search terms of a group, in Table <ref> (column Repositories-#).
To further convey the prevalence of our search terms, we indicate the minimum, maximum, and 25-, 50-, and 75-percentiles of search results for included terms, i.e., higher values of lower percentiles would imply a higher prevalence.
While both query groups contain terms that lead to no results (min), i.e., the term is not mentioned in any repository name or description, terms in the standard group generate more results due to their closer correlation to frequently used technologies than IIoT protocols (p_25, p_50, p_75).
Docker Hub's API limits the number of results to 10000 (max).
As different search terms lead to overlapping repositories, we further report on the distinct number of repositories gradually, i.e., per query group, and overall.
In total, we gathered distinctnumrepooverall distinct repositories subject to our study of which standarddistinctpctrepopergrouponly are uniquely added by our standard search terms and iiotdistinctpctrepopergrouponly by IIoT related search queries.
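A minimal sketch of such a repository search is shown below; the search endpoint, its parameters, and the example terms are assumptions for illustration and not necessarily the exact interface or term list used in the study.

import requests

SEARCH_URL = "https://hub.docker.com/v2/search/repositories/"  # assumed endpoint

def search_repositories(term, max_pages=100):
    # Collect repository names returned for one search term, page by page.
    repos = set()
    for page in range(1, max_pages + 1):
        resp = requests.get(SEARCH_URL, timeout=10,
                            params={"query": term, "page": page, "page_size": 100})
        if resp.status_code != 200:
            break  # the API caps the number of retrievable results
        results = resp.json().get("results", [])
        if not results:
            break
        repos.update(r["repo_name"] for r in results)
    return repos

# Overlapping results of different terms are deduplicated into one repository set.
terms = ["mqtt", "opcua", "postgres", "nginx"]  # placeholder search terms
distinct_repos = set().union(*(search_repositories(t) for t in terms))
print(len(distinct_repos), "distinct repositories")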
§.§.§ Images
Table <ref> (column Images-#) indicates how many images were available in total over the distinct repositories of a search group.
Since repositories mostly contain several images that include the same software in different versions and thereby comprise similar files, we choose to analyze one tag per repository to spread our analysis as widely as possible.
Here, we select images tagged with latest, which is used as Docker's default and typically includes the newest version of an image.
However, not all repositories contain images tagged with latest (as shown in Table <ref>, column Images-latest).
Here, we select the image with the latest changes (as reported by Docker Hub's API).
Empty repositories (Table <ref>, column Images-none), i.e., repositories with no image layers available, cannot include any secrets.
Besides the number of images that are covered by our study (column Images-analyzed), we also report on the age of the images to analyze how long they are already available on Docker Hub.
The ages of images in both query groups roughly follow the same distribution, indicating that, although the number of images found by our IIoT-related queries is lower, their creators update images at the same frequency as the creators of images in our Standard group.
§.§.§ Layers
While we report on the number of layers included in all images (Table <ref> column Layers-#), different images often share the same layers, e.g., layers from frequently used base images.
Hence, to speed up our search for leaked secrets, we analyze each distinct layer only once.
We show the distinct number of layers gradually, i.e., per query group, and overall.
To cover all distinctnumrepooverall repositories, we analyze distinctnumlayersoverall layers (standarddistinctpctlayersgroup uniquely added by Standard-related, iiotdistinctpctlayersgroup by IIoT-related repositories).
§.§ Images from Private Docker Registries
Since image creators might upload sensitive images preferably to private registries, we want to include images from these registries in our analysis.
Table <ref> shows our steps taken to extend our dataset with images from private registries, i.e., we search private registries, and, subsequently, include a subset of available layers.
§.§.§ Find Private Registries and Repositories
To find publicly reachable Docker registries, we scan the complete IPv4 address space for services running on the standard port for Docker registries, i.e., TCP port 5000, under comprehensive ethical measures (cf. Appendix <ref>) twice to analyze short-term fluctuations (Table <ref> (left)).
Both times, we perform a TCP SYN scan <cit.> to identify hosts running a service behind this port and subsequently send an HTTP request as defined by Docker's Registry API <cit.> for verification.
Whenever we do not receive a valid HTTP response, we retry via HTTPS.
While we found up to privatemeasurementnumtotalmax private registries on privatemeasurementdatemax, the difference in found registries in comparison to our scan on privatemeasurementdatemin is due to registries in Amazon AWS-related ASes that no longer replied after our first scan.
Since these registries all contain only the same single image (uhttpd), they might relate to another research project, e.g., one implementing a registry honeypot.
Contrarily to Docker Hub's API, the API of private registries allows listing available repositories without search terms.
However, we limit our requests to receive a maximum of 100 repositories per registry to prevent any overloads.
As such, the found private registries provide privatemeasurement220801repositorysum resp. privatemeasurement220806repositorysum repositories.
Since the registries do not implement access control for read access, clients are able to download all included images.
Notably, by default also write access is not restricted <cit.>, i.e., attackers might be able to inject malware.
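The verification and repository listing can be sketched as follows, assuming the standard Registry HTTP API v2 endpoints (a base check on /v2/ and the paginated /_catalog listing); the probed address is a placeholder from a reserved test range.

import requests

def probe_registry(host, port=5000):
    # Check whether a host speaks the Registry HTTP API v2 and list up to 100 repositories.
    for scheme in ("http", "https"):  # retry via HTTPS if plain HTTP fails
        base = f"{scheme}://{host}:{port}"
        try:
            resp = requests.get(f"{base}/v2/", timeout=5, verify=False)
        except requests.RequestException:
            continue
        if resp.headers.get("Docker-Distribution-Api-Version", "").startswith("registry/2"):
            catalog = requests.get(f"{base}/v2/_catalog",
                                   params={"n": 100}, timeout=5, verify=False)
            return catalog.json().get("repositories", [])
    return None

print(probe_registry("192.0.2.1"))  # placeholder address from TEST-NET-1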
While being publicly available on private registries but not filtered by any search terms, the content of these images is of special interest.
Here, the repository name often indicates the image's content and thus allows conclusions on widely distributed applications: over both measurements, uhttpd is the most reoccurring repository name (occurring privatemeasurement0sum times, but only during our first scan).
The repository names in second and third place, i.e., nginx and redis, indicate proxy and cloud services where image creators might have included security secrets before uploading them to their registry.
Beyond the scope of security secrets, other less frequent repository names imply that image creators might include confidential software, source code, private data, or information on systems especially worthy of protection in openly available Docker images.
§.§.§ Image and Layer Selection
For all found repositories, we collect the lists of available images and their tags (Table <ref> (center)).
Although private registries typically do not implement any rate limiting like Docker Hub, we do not want to overload found registries or their Internet connections.
Hence, to spread our analysis as far as possible but limit the load on each registry, we choose one tag per image.
Similar to our selection process on Docker Hub, we typically select the image tagged as latest in each repository to download the corresponding manifest.
Whenever no latest image is available, we sort all available images naturally by their tag (to account for version numbers as tags) and select the maximum (i.e., the newest version), as the API does not provide any information on the latest changes.
Subsequently, we download the corresponding image manifests to retrieve accompanying layers.
To further limit load on Internet connections of found registries, we do not download all available layers for included secrets.
Instead, we randomly select layers of chosen images such that the sum of their sizes does not exceed 250 per registry and per measurement.
All in all, we added privatenumdistinctlayersselected layers from private registries to our dataset.
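A sketch of this randomized, size-bounded layer selection is shown below; the byte value of the budget is our assumption standing in for the per-registry limit described above.

import random

def select_layers(layers, budget_bytes=250 * 1024 * 1024):
    # Randomly pick layer descriptors (dicts with 'digest' and 'size') while the
    # accumulated size stays within the per-registry budget.
    selected, used = [], 0
    for layer in random.sample(layers, k=len(layers)):
        if used + layer["size"] <= budget_bytes:
            selected.append(layer)
            used += layer["size"]
    return selected

# Hypothetical layer descriptors taken from downloaded manifests:
layers = [{"digest": "sha256:aaa", "size": 180_000_000},
          {"digest": "sha256:bbb", "size": 90_000_000},
          {"digest": "sha256:ccc", "size": 40_000_000}]
print([layer["digest"] for layer in select_layers(layers)])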
In parallel to Docker Hub, numerous private registries exist that provide images to the public.
Overall, we assemble a dataset of numconsideredlayersoverall layers from numnonemptyimages images subject to our future research.
Furthermore, private registries might allow attackers to, e.g., inject malware, potentially infecting container deployments at scale as well.
§ LEAKED SECRETS IN DOCKER IMAGES
Next, we search the considered images for included secrets (Section <ref>), discuss the origin of affected images to later evaluate remedies (Section <ref>), and analyze found certificates compromised due to private key leakage to estimate the arising risks (Section <ref>).
§.§ Searching for Secrets
To analyze available images for included secrets, we align our approach to established methods <cit.>, i.e., we choose and extend regular expressions identifying specific secrets and match these on files and environment variables.
Additionally, we extensively filter our matches to exclude false positives.
§.§.§ Regular Expression Selection
We base our selection of regular expressions on previous work to find secrets in code repositories <cit.> (we further elaborate on our election process and expressions in Appendix <ref>).
Table <ref> (left) names the domains of secrets that our selected expressions match and indicates how attackers could misuse these secrets.
We start with regular expressions composed by Meli et al. <cit.> due to their selection of unambiguous expressions (reducing false positives) matching secrets with a high threat when leaked.
We extend their expressions for private keys to match a larger variety, e.g., also OpenSSH private keys.
Moreover, we widen the set by expressions matching API secrets of trending technologies <cit.> based on match rules from TruffleHog <cit.>.
However, TruffleHog's rules are relatively ambiguous and incur many false positives, which TruffleHog filters by validating the API secrets against their respective endpoints.
As our ethical considerations do not allow for any further use of the secrets (cf. Appendix <ref>), we focus on rules which expect at least one fixed character and later add further filtering and verification steps.
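To give an impression of such rules, the sketch below shows two simplified, illustrative Python patterns, one for PEM-encoded private keys and one for AWS access key IDs with their fixed prefix; these are stand-ins and not the exact rule set used in the study.

import re

# Simplified, illustrative patterns (not the study's exact rule set).
PATTERNS = {
    "private_key": re.compile(
        r"-----BEGIN (?:RSA |EC |DSA |OPENSSH |ENCRYPTED )?PRIVATE KEY-----"
        r"[\s\S]+?-----END (?:RSA |EC |DSA |OPENSSH |ENCRYPTED )?PRIVATE KEY-----"
    ),
    # AWS access key IDs start with a fixed character sequence,
    # which keeps the rule unambiguous.
    "aws_access_key_id": re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),
}

def find_secrets(text):
    return [(name, m.group(0)[:40]) for name, rx in PATTERNS.items()
            for m in rx.finditer(text)]

sample = "-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----"
print(find_secrets(sample))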
§.§.§ Matching Potential Secrets
To analyze whether image layers include secrets, we match the selected regular expressions on the images as follows (we will open-source our tool on acceptance of this paper):
We download and decompress the image layers and then match our regular expressions on the included files.
Moreover, we recursively extract archive files up to a depth of 3 and match again.
As API documentations often suggest setting secrets in environment variables and not writing them into files, we analyze set variables.
Since Docker allows downloading the small image configuration containing the set variables alongside the image, i.e., potential attackers do not have to download and search through all files to find included secrets, we analyze variables separately:
As such, we only download the image configuration file and iterate our regular expression over set environment variables.
Here, we adapt the API expressions, as some expect a specific term before the secret (cf. Table <ref> in Appendix <ref>), e.g., the service name as part of a variable name.
As the variable names and values are separated in the configuration file, we also split the according expressions and match them individually.
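A condensed Python sketch of both matching paths, scanning the files of a downloaded (and decompressed) layer tarball and the environment variables of an image configuration, is given below; nested-archive handling is omitted and the pattern dictionary is assumed to look like the one sketched in the previous subsection.

import json
import pathlib
import tarfile

def match_layer(layer_tar_path, patterns):
    # Scan every regular file in a (decompressed) layer tarball for secret patterns.
    hits = []
    with tarfile.open(layer_tar_path) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            handle = tar.extractfile(member)
            if handle is None:
                continue
            text = handle.read().decode("utf-8", errors="ignore")
            for name, rx in patterns.items():
                hits.extend((name, member.name, m.group(0)) for m in rx.finditer(text))
    return hits

def match_env(config_json_path, patterns):
    # The image configuration (downloadable without any layer) stores ENV variables.
    config = json.loads(pathlib.Path(config_json_path).read_text())
    env = config.get("config", {}).get("Env", []) or []
    return [(name, var) for var in env
            for name, rx in patterns.items() if rx.search(var)]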
Table <ref> (center) lists for each secret domain how many matches and how many distinct matches we found in both, image content and environment variables.
Notably, while only covering two services, i.e., Facebook and Twitter, the expressions in the Social Media domain matched most often over all domains, which already indicates that API secrets of this domain are frequently subject to leakage.
The high redundancy of the matches, visible as the significant decrease between distinct and non-distinct matches, already hints at invalid matches, e.g., private keys or example API tokens prevalent in unit tests or documentation in several layers.
Indeed, the most reoccurring match (mostreoccurringnumocc times in mostreoccurringnumlayer different layers) is an example key for mostreoccurringrule from a library documentation which creators usually include in their images.
We thus validate our matches extensively.
§.§.§ Match Validation
To exclude test keys for cryptographic libraries, example API secrets, and completely invalid matches to get a near lower bound of harmful leaked secrets in Docker images, we use different filters depending on the secret type.
While we show the number of resulting valid secrets in Table <ref> (right), Figure <ref> details the filtering results separated by the match's origin, i.e., image content or environment variable and domain.
Private Keys
Our regular expressions for private keys match on PEM or XML formatted keys.
Thus, we can first exclude every match that is not parsable (filter Unparsable).
Figure <ref> shows that only a minority of all potential private keys in image layers are unparsable, underlining that image creators include and compromise private keys actually usable in final Docker containers for practical operations.
Contrarily, the single match within the environment variables is only a key fragment and thus not parsable.
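The parsability check can be sketched with the cryptography library as follows; restricting the check to PEM-encoded keys is a simplification of one reasonable implementation, not necessarily the paper's exact procedure.

from cryptography.hazmat.primitives import serialization

def is_parsable_private_key(candidate: bytes) -> bool:
    # Keep a match only if it decodes into an actual (unencrypted) PEM private key.
    try:
        serialization.load_pem_private_key(candidate, password=None)
        return True
    except Exception:
        return False

print(is_parsable_private_key(
    b"-----BEGIN PRIVATE KEY-----\nbm90IGEgcmVhbCBrZXk=\n-----END PRIVATE KEY-----\n"))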
Still, we expect a high number of software test keys in Docker images among found keys, as they are part of several libraries creators might include in their images, e.g., OpenSSL.
Since users will most likely not use such keys to secure their deployments, we filter out test keys that are included in kompromat <cit.>, a repository listing already compromised secrets (filter Kompromat).
More specifically, we filter keys occurring in RFCs (kompromatfoundrfcnumdistinct), libraries for software tests (kompromatfoundsoftwaretestsnumdistinct), or as special test vectors (kompromatfoundtestvectorsnumdistinct).
To also account for software test keys that are not available in kompromat, we analyze the file paths where respective keys were found (filter File).
While we do not generally exclude all paths containing signal words indicating test or example keys, as users might use such paths also for keys they generated and use in practice, we apply different measures.
For instance, based on locations of test keys identified using kompromat, we deliberately exclude matches in similar locations, i.e., keys within directories where we already detected test keys and all parent directories under which we find more than 2/3 test keys.
Last, we exclude file paths typically used by libraries (cf. Appendix <ref>), as there is a lower chance that users place their own keys there.
Figure <ref> shows that these filters process the largest share of excluded private key matches.
It further indicates that kompromat only includes a minority of software test keys, i.e., is not directly usable to exclude all false-positive matches.
Still, many of the found keys are not filtered and are thus most likely not software test keys.
In total, we found validprivatekeyvalidnumdistinctmatchestotal valid private keys potentially in use in practice (cf. Table <ref> (right)).
Since all of these keys are located in files, attackers would have to download the respective image layers to obtain them, rather than only the image's meta information that contains the environment variables.
Still, since these keys are publicly available and thus compromised, usage in production puts authentication at stake, i.e., attackers can perform impersonation attacks.
API Secrets
Since our ethical considerations deter us from validating API secrets against their service endpoints (cf. Appendix <ref>) as applied by TruffleHog <cit.>, and related methods for false positive detection focus on matches in source code <cit.>, which is not prevalent in Docker images, we need alternative measures to filter invalid matches.
By manually supervising our filtering, we ensure that the final set only includes valid-looking API secrets.
Based on invalid matches in GitHub code repositories <cit.>, we expect human-created example keys that contain keywords or consecutive character sequences, which we must exclude (filter Sequence).
To filter consecutive sequences, we search for segments consisting of ascending, descending (both with a length of four), and repeating characters (with a length of three).
Furthermore, we filter matches including sequences that occur unusually often, i.e., we create (frequencyngrammin, frequencyngrammax)-character-grams of all matches, exclude grams created over fixed parts of our regular expressions as well as grams only containing digits, and count the number of occurrences over all API matches.
To account for randomly reoccurring grams, we filter all matches that include grams occurring frequencyNgramsTimeFactor times more often than the average.
We manually ensured that our filter is neither too restrictive nor too loose, i.e., it does not overlook frequently reoccurring grams.
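Both heuristics can be sketched as follows; the sequence lengths, the n-gram size, and the threshold factor are placeholders for the (unexpanded) parameters mentioned in the text.

from collections import Counter

def has_trivial_sequence(s, run_len=4, rep_len=3):
    # Detect ascending/descending runs (length 4) and repeated characters (length 3).
    for i in range(len(s) - run_len + 1):
        diffs = {ord(s[j + 1]) - ord(s[j]) for j in range(i, i + run_len - 1)}
        if diffs == {1} or diffs == {-1}:
            return True
    return any(s[i] == s[i + 1] == s[i + 2] for i in range(len(s) - rep_len + 1))

def frequent_gram_filter(matches, n=5, factor=10):
    # Drop matches containing character n-grams that occur far more often than average.
    grams = Counter(g for m in matches for g in
                    (m[i:i + n] for i in range(len(m) - n + 1)) if not g.isdigit())
    if not grams:
        return matches
    avg = sum(grams.values()) / len(grams)
    suspicious = {g for g, c in grams.items() if c > factor * avg}
    return [m for m in matches
            if not any(m[i:i + n] in suspicious for i in range(len(m) - n + 1))]

candidates = ["abcd1234efgh", "aaaexampleaaa", "q8Zt0vXk2LmP9"]
kept = [m for m in candidates if not has_trivial_sequence(m)]
print(frequent_gram_filter(kept))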
Figure <ref> shows that this filtering excludes a large share of matches.
Interestingly, the most reoccurring gram could be traced back to DNA sequences in images related to bioinformatics, underpinning the large variety of different and unexpected file types occurring in Docker images.
Similar to filtering private key matches by their file paths, we also filter API matches occurring in manually selected paths (filter File, cf. Appendix <ref>).
Essentially, we revisited the location and file types of all matches and excluded paths that most likely do not include any valid secrets compromised by publishing these in Docker images.
Figure <ref> indicates that the filtered paths often also include matches filtered by our sequence filter and thus that libraries include strings similar to secrets, e.g., in their documentation.
Still, after manual revision of the remaining matches, we conclude that rules which match on a fixed term before the secret, e.g., the service name, and then allow a specific length of characters are too ambiguous for usage on files in Docker images as they match on arbitrary content, e.g., on hashes with the service name in front.
We thus decide to exclude matches of these rules from our further analysis (gray in Table <ref> (left)), i.e., consider these matches invalid, to ensure the integrity of our further results.
Still, a minority of these matches might be valid, potentially enabling attackers to compromise production services or access confidential data.
Comparing the filter results of API secret matches in files and environment variables, the share of valid matches in variables is significantly higher than in files, indicating that image creators are less likely to include secret placeholders in variables.
Still, as Table <ref> (right) shows, most secrets are located within the images.
Thus, attackers have a higher chance of finding valid secrets when downloading both environment variables and image content.
In total, we found apinumdistinctmatches distinct API secrets in Docker images, mostly related to services from the cloud domain (validapicloudvalidnumdistinctmatchestotal secrets).
Although we cannot prove the functionality of these secrets, the occurrence of apicloud1numdistinctmatches secrets for the apicloud1rule or apicloud2numdistinctmatches secrets for the apicloud2rule indicate that attackers might be able to reconfigure cloud services maliciously, e.g., by editing DNS or VM options.
Additionally, we found evidence for secrets allowing attackers to access private data from social media (validapisocialmediavalidnumdistinctmatchestotal secrets), or even access financial services (validapifinancialvalidnumdistinctmatchestotal secrets, most matches: apifinancial0rule).
Notably, although we focused our image search partly on IoT terms, we found no valid secrets from selected IoT services.
§.§.§ Secrets Owned by Single Users
Based on findings on leaked secrets found on GitHub <cit.>, we expect most valid secrets to reside in images of single users (as users do not share their secrets intentionally).
Contrarily, invalid matches, e.g., library test keys, would mainly reside in images of multiple owners.
Thus, to check whether the matches we identified as valid secrets are located in images of single users, we analyze the number of different owners that include a specific secret in their images.
To this end, for images from Docker Hub, we consider the repository owner (embedded in the repository name) as the owner of a secret.
For private registries, we consider the registry's IP address as the owner (assuming that owners only run a single registry and neglecting that registries might use different (dynamic) IP addresses).
Figure <ref> shows that the largest share of valid secrets indeed occurs in images of single owners.
validmatchmultiuserprivatekeyFalsepct of private keys (validmatchmultiuserprivatekeyFalsenum keys) and validmatchmultiuserapiFalsepct of API secrets (validmatchmultiuserapiFalsenum secrets) reside in images of single owners underpinning that these should be protected.
Moreover, we can trace validmatchmultiuserlayer0privatekeyTruenum private keys and validmatchmultiuserlayer0apiTruenum API secrets of multiple owners back to inheritance.
These secrets were already included in the base image, but w.r.t. the overall occurrence, we conclude that secret spread due to inheritance is not a major problem.
To responsibly inform image creators about leaked secrets in their images, we reached out to them whenever possible (numemaildisclosure extractable and valid e-mail addresses) and also contacted the operator of Docker Hub (cf. Appendix <ref>).
Early on, we received notifications from creators who removed the found secrets from their images.
totalvalidmatches found secrets show that image creators publish confidential information in their publicly available Docker images.
As attackers have access to these secrets, authentication and other security mechanisms relying on them are futile, potentially leading to compromised servers or leaked privacy-sensitive data.
§.§ Origin of Leaked Secrets
Next, we analyze where the validated secrets stem from to see whether specific images are more affected and why.
To this end, we examine the distribution of affected images and compare between private registries and Docker Hub, as well as IIoT specific and Standard images.
Moreover, we evaluate which operation in the original Dockerfile led to the insertion of secrets and inspect the file paths where they reside to get an intuition for their usage.
§.§.§ Docker Hub Leads Before Private Registries
We already discovered that private registries include potentially sensitive images.
However, until now, it remains unclear whether images on these registries are more often subject to secret leakage than images from Docker Hub, e.g., due to creators believing that these are unavailable for the public.
Thus, we analyze whether leaked secrets occur more often in images from Docker Hub or from private registries.
While we found that numaffectedimages images (pctaffectedimages of images analyzed) contain valid secrets, pctaffectedimagesdockerhub of images from Docker Hub and pctaffectedimagesprivate of images from private registries are affected.
Thus, creators upload secrets to Docker Hub more often than to private registries indicating that private registry users may have a better security understanding, maybe due to a deeper technical understanding required for hosting a registry.
Yet, both categories are far from being leak-free.
For Docker Hub, besides the increased fraction of leaked secrets, we see an issue for others, i.e., other users can easily deploy containers based on these images.
Thus, there is a higher chance their containers rely their security on included and compromised secrets.
For example, a shared certificate private key could lead to an impersonation attack.
In case of shared API secrets, all deployed containers might use the same API token leading to exhausted rate limits in the best case, but maybe also to overwritten or insufficiently secured private data.
As a single API token does not allow fine-granular exclusions, i.e., it is either valid or revoked for all users, a revocation would also interfere with benign users.
Independent of their origin, attackers could equally misuse the secrets we found to leverage authentication or access privacy- or security-sensitive data.
As such, both user groups of Docker Hub and private registries leak sensitive information, be it through unawareness or a deceptive feeling of security.
§.§.§ Domains are Similarly Affected
For our image selection on Docker Hub, we specifically included search terms relating to the IIoT, as past research has shown significant security shortcomings in this area.
However, until now, it has been open whether the found images of a certain domain are subject to revealed secrets more frequently than other images.
To answer this question, we trace images that include secrets back to the query group that led to their inclusion.
We discovered that affectedstandardrepositorypct of the images only found using queries from the Standard query group and affectediiotrepositorypct of images only from the IIoT group include valid secrets[Images found by both query groups are not included.].
Thus, in case of secret leakage via Docker images and based on our selected search terms, the IIoT domain does not perform worse than our Standard domain.
However, it underpins that the problem of secret leakage in Docker images is a prominent issue for all domains.
§.§.§ Fresh Private Keys and Copied API Secrets
To find countermeasures against secret leakage in Docker images, it is important to understand how these leaked secrets became part of Docker images.
More specifically, for private keys, it is unclear whether creators execute commands in the Dockerfile to create fresh keys, which are then published in images, or whether they manually add them, i.e., using COPY or ADD in a Dockerfile.
Additionally, both, private keys and API secrets, could be indirectly included through other means, e.g., by cloning Git repositories or downloading further data.
Figure <ref> shows that while most API secrets are typically inserted by file operations (File), e.g., copied from the image creator's host system, private keys are predominantly included by executing a command within the Dockerfile (Exec.)[Secrets can be associated with both File and Exec. operations, e.g., when a secret is first added to the image and then copied or moved internally by an executed command.].
Thus, private keys might be either downloaded or generated during the creation process.
To further trace the insertion of secrets in Exec. layers back to the responsible executed commands, we analyze these commands.
Since image creators often concatenate several bash commands whose output is then included in a single layer without any opportunity to associate files (and thus secrets) to a specific command, we count each of the commands related to the leakage of a secret.
We show the most prominent of all validmatchnumdistinctcommands commands associated with secret leakage in Figure <ref>.
In fact, privatekeyinstsshdpct of private keys were generated in layers where image creators installed the OpenSSH server.
Since the installation triggers the generation of a fresh host key pair, it is automatically included in the image.
While the procedure of automatic key generation is beneficial on real hardware, i.e., users are not tempted to reuse keys on different hosts, in published Docker images it automatically leads to compromised keys and thus puts the authenticity of all containers relying on this image in danger.
Further privatekeysshkeygenpct of found private keys were generated by a direct call of ssh-keygen, e.g., to generate fresh SSH client key material, implying the planned production use of generated but compromised key material.
Given the massive secret leakage on GitHub <cit.>, we also expect secrets to be included in images by cloning Git repositories.
However, only a minority of secrets can be associated with Git, suggesting that the sets of users leaking secrets via Docker and GitHub are distinct. Furthermore, only a minority of secrets were downloaded during image creation, both indicating that the secrets we found were most likely exclusively leaked in Docker images and underpinning that they are actually worth being protected.
§.§.§ File Paths Indicate Usage
To further reason about the usage of our found secrets, we analyze their file paths within the images assessing where secrets stem from and how services apply them.
Separated by private keys and API secrets, Figure <ref> shows the distribution of secrets throughout the directory structure of all images and focuses on the top seven paths.
We found the majority of private keys in /etc/ssh, underpinning a high prevalence of compromised SSH host keys.
Another large share occurs in /etc/ssl, suggesting compromised keys used for host authentication via TLS.
This path is also the location for TLS default ("snakeoil") keys that are used if no other information is provided.
They are auto-generated when the ssl-cert package is installed such that every host possesses a unique default key pair.
However, when installed during the creation of Docker images, the key is included in the image and, thus, compromised when shared.
Based on the key's filename, indeed, we found numsnakeoiletcssl of such keys, which are potentially used to offer TLS services with broken authenticity to the public Internet.
Even more alarming, we found keys whose paths indicate that they are associated with a Public Key Infrastructure (PKI) and are thus potentially destined to offer services to a larger number of users.
Furthermore, other directories contain private keys used in relation to the IoT and, as per the repository names, for authentication using IoT protocols like CoAP and MQTT.
Thus, attackers possessing these private keys can leverage the authentication of all connections users establish to each container created based on these images.
In fact, attackers then can access or alter transmitted confidential information, e.g., privacy-sensitive user data or commands of IoT services potentially impacting cyber-physical systems.
In addition, we found keys in .ssh directories, i.e., the location where SSH client key pairs typically reside.
Hence, these keys might enable attackers to take over SSH servers that trust these keys and hold confidential data.
Contrarily, found API secrets are distributed more evenly through the directory structure.
We found the largest share in the folder suggested for including own applications in Docker images <cit.>, underlining that image creators compromise their own applications' API secrets.
Another large share of secrets stems from Firefox profiles, which contain Google Service API secrets in cached JavaScript files.
Although these secrets are most likely usable in combination with Google Maps or Google Analytics and thus meant to be shared with website visitors, this leakage implies privacy issues:
An attacker could retrace the creator's browsing history, which apparently exists given the filled cache and could reveal potentially sensitive information.
In addition, we found a large share of Google API secrets (both Cloud and Services) in further locations throughout the images.
Since we do not use API tokens for further validation (cf. Appendix <ref>), we cannot be entirely sure whether these secrets are usable or only generated for testing purposes.
However, manual supervision of the matches and the files containing them suggests that they could actually be in use.
pctaffectedimages of analyzed images contain and thus leak secrets.
While the majority stems from public Docker Hub images regardless of their domain, private registries also leak a significant number of secrets.
Notably, associated file paths and commands imply their production use and that various authentication mechanisms are futile.
§.§ Compromised Certificates
To further understand the severity of potentially compromised systems, we now focus on found certificates as they provide various information on their relations and use cases.
Thus, we research the trust chain, validity, and usage parameters of knowncompromizedcerts compromised certificates occurring in Docker images.
Trust Anchors
While self-signed certificates indicate the usage of certificates in controlled environments, i.e., clients need a safelist with all certificates they can trust, CA-signed certificates imply the usage at larger scale as these are trusted by all clients having a corresponding root certificate installed.
We consider certificates where the issuer and common name are similar as self-signed and CA-signed otherwise.
For CA-signed certificates, we consider those which we can validate against widespread root stores[Stores from Android, iOS/MacOS, Mozilla NSS, OpenJDK, Oracle JDK, and Windows.] as signed by a public CA, and otherwise signed by a private CA.
We discovered that the majority of found compromised certificates (selfsignedcertspct) are self-signed, but also privatecacerts private CA-signed and casignedcerts public CA-signed certificates.
While all systems relying on these certificates open the door for impersonation attacks, the occurrence of CA-signed certificates is especially alarming as such certificates are typically planned to provide authenticity to many clients/users and are universally accepted.
Thus, knowing these certificates' private keys not only allows attackers to perform Man-in-the-Middle attacks but also enables them to sign malicious software to compromise others' systems.
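The classification, together with the validity and usage attributes discussed below, can be extracted with the cryptography library as sketched here; the chain validation against root stores is omitted and only the self-signed heuristic (issuer equals subject), the validity window, and the code-signing usage are shown.

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.x509.oid import ExtendedKeyUsageOID

def classify_certificate(pem_data: bytes, reference: datetime.datetime) -> dict:
    # 'reference' can be the download time or the image's history timestamp.
    cert = x509.load_pem_x509_certificate(pem_data)
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        code_signing = ExtendedKeyUsageOID.CODE_SIGNING in eku
    except x509.ExtensionNotFound:
        code_signing = False
    return {
        "fingerprint": cert.fingerprint(hashes.SHA256()).hex(),
        # Self-signed heuristic; CA-signed certificates additionally need a chain check.
        "self_signed": cert.issuer == cert.subject,
        "valid_at_reference": cert.not_valid_before <= reference <= cert.not_valid_after,
        "allows_code_signing": code_signing,
    }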
Validity
As a countermeasure against key leakage, the certificate's lifetime enforces service operators to request new certificates from time to time, as clients should reject outdated certificates.
Notably, casignedvalidondownload public-CA, privatecavalidondownload private-CA, and selfsignedvalidondownload self-signed certificates were valid when we downloaded their containing image layer, showing that the authenticity of relying services is at stake, i.e., the lifetime does not help in these cases of key leakage.
Interestingly, casignedvalidonhistory public-CA, privatecavalidonhistory private-CA, and selfsignedvalidonhistory self-signed certificates were valid when added to their Docker image (as per the image's history timestamp).
While these larger numbers show that the limited lifetime of certificates helps to mitigate leaked private keys, they also indicate that key leakage in images is an ongoing issue, i.e., more and more private keys are leaked.
Usages
The usage attributes of certificates can optionally indicate the practical use-case of CA-signed certificates and, thus, further help to understand the severity of the private key leakage.
While all public-CA-signed certificates allow for authentication (digital signatures), and casignedparsedFindingextensionsextendedkeyusageserverauth are explicitly declared for server authentication, casignedparsedFindingextensionsextendedkeyusagecodesigning (private-CA: privatecaparsedFindingextensionsextendedkeyusagecodesigning) allow for code-signing.
Thus, knowing the private key of these certificates not only allows attackers to perform Man-in-the-Middle attacks but also enables them to sign malicious software to compromise others' systems.
knowncompromizedcerts found compromised certificates show that leaked private keys can have extensive influence on the authenticity of services and software.
Thus, attackers can impersonate services, decrypt past communications, or sign malware to infect production systems.
§ SECRET USAGE IN THE WILD
Until now, it is open whether the found compromised secrets are used in practice and, if so, to what extent, i.e., whether a single compromised secret is reused due to several Docker containers stemming from the same image.
While we cannot check the validity of API secrets by using them against their destined endpoint due to our ethical guidelines (cf. Appendix <ref>), we can investigate whether hosts on the Internet use found private keys for authentication.
To assess whether Internet-reachable hosts are susceptible to impersonation attacks due to secret leakage in Docker images, we check for TLS- and SSH-enabled hosts relying their authentication on compromised private keys by using the Censys database, i.e., 15 months of active Internet-wide measurement results <cit.>.
Here, we search for hosts presenting a public key, i.e., as SSH host key or within a TLS certificate, that matches one of the found compromised keys.
More specifically, we match the fingerprints of public keys in the Censys database against those extracted from the found private keys.
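The matching boils down to deriving the public key from each found private key and comparing fingerprints; the SHA-256 hash over the DER-encoded SubjectPublicKeyInfo used below is one plausible normalization and an assumption, not necessarily the exact fingerprint format of the database.

import hashlib
from cryptography.hazmat.primitives import serialization

def spki_sha256(private_key_pem: bytes) -> str:
    # Derive the public key and hash its DER-encoded SubjectPublicKeyInfo.
    key = serialization.load_pem_private_key(private_key_pem, password=None)
    spki = key.public_key().public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(spki).hexdigest()

def hosts_with_compromised_keys(leaked_key_pems, observed_fingerprints):
    # observed_fingerprints: mapping of host -> public-key fingerprint seen on that host.
    leaked = {spki_sha256(pem) for pem in leaked_key_pems}
    return {host for host, fp in observed_fingerprints.items() if fp in leaked}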
In Figure <ref>, we detail how many hosts rely their authenticity on found compromised private keys and how often these keys are reused.
While the total number of hosts relying on compromised keys is worrying on its own (20220901numuniquehosts hosts in Oct. 2022), their protocols, even worse, imply sensitive services.
As such, in October 2022, we find MQTT20220901numuniquehosts MQTT and AMQP20220901numuniquehosts AMQP hosts, potentially transferring privacy-sensitive ((I)IoT) data.
Moreover, FTP20220901numuniquehosts FTP, PostgreSQL20220901numuniquehosts PostgreSQL, Elasticsearch20220901numuniquehosts Elasticsearch, and MySQL20220901numuniquehosts MySQL instances serve potentially confidential data.
Regarding Internet communications, we see SIP20220901numuniquehosts SIP hosts used for telephony as well as SMTP20220901numuniquehosts SMTP, POP320220901numuniquehosts POP3, and IMAP20220901numuniquehosts IMAP servers used for email.
Since these hosts are susceptible to impersonation attacks due to their leaked private keys, attackers can eavesdrop, relay, or alter the sensitive data transmitted here.
Aggravatingly, we also find services with administrative relevance:
SSH20220901numuniquehosts SSH servers rely on SSH20220901corrFingerprint compromised host keys and Kubernetes20220901numuniquehosts Kubernetes instances use leaked keys, opening the door to attacks that can lead to remote-shell access, extension of botnets, or further data access.
The comparatively low number of compromised keys used (compared to knowncompromizedhostkeys found SSH host keys) is probably due to a missing need for SSH servers in Docker containers, as other mechanisms, e.g., docker exec, already allow shell access.
Furthermore, we see LDAP20220901numuniquehosts LDAP instances relying on leaked secrets.
As LDAP is used as a base for user authentication on attached systems, the integrity of an unknown number of other clients is at stake.
For instance, attackers could grant themselves root access to a myriad of systems.
The number of actually used keys is low compared to the number of hosts which rely on them, indicating that a few Docker images lead to numerous compromised container deployments.
Thus, the simplicity of deploying services with Docker based on ready-to-use images puts the authenticity of several instances, most likely operated by different users, under threat.
In this regard, HTTPS hosts stand out in particular.
HTTP20220901numuniquehosts HTTPS hosts use HTTP20220901corrFingerprint different compromised private keys, showing that the reuse of these keys is rampant for Web services.
Thus, attackers can perform Man-in-the-Middle attacks to alter webpages on their delivery or data sent to the server.
Figure <ref> also underpins that the usage of compromised keys is long-lasting and rising, i.e., over the complete available period, the number of compromised systems grew from 20210501numuniquehosts (relying on 20210501corrFingerprint compromised keys) to 20220901numuniquehosts hosts (20220901corrFingerprint keys), indicating that container images with compromised certificates or SSH host keys included are increasingly used.
Thus, the authenticity of more and more systems is undermined, offering an ever-growing attack surface.
While our study is significantly driven by compromised keys initially found in Docker images in the area of the IIoT, Censys does not identify secured IIoT protocols other than AMQP and MQTT via TLS.
Thus, we perform our own Internet-wide measurements for a deeper inspection of whether IIoT services also use compromised certificates, e.g., for authentic communication via OPC UA.
To this end, we select ten secure IIoT protocols from recent literature <cit.> and mimic the proposed measurement strategy.
Our results show that besides the already large number of compromised AMQP and MQTT hosts, only 2 CoAP hosts use 2 different leaked keys from Docker containers.
That we do not find substantially more compromised hosts using other IIoT protocols underlines that the issue of key leakage is not an IIoT-specific hotspot but a general problem.
20220901numuniquehosts hosts use 20220901corrFingerprint compromised private keys found in Docker images for authentication on the Internet and encompass deployments using, i.a., MQTT, SMTP, and PostgreSQL.
This widespread usage allows attackers to eavesdrop on confidential information or to alter sensitive data, e.g., from the IoT, webpages, or databases.
§ DISCUSSION, LIMITATIONS & MITIGATIONS
The outcome of our work has different aspects.
We have seen that numerous private keys are compromised by image creators publishing their images via Docker registries and shown that security relies on these secrets in practice.
Still, future work could investigate the limitations of our approach or implement the derived mitigation opportunities from our results.
View on Available Images
Due to rate and computation-time limits and comprehensive ethical considerations (cf. Appendix <ref>), we could not analyze all available images on Docker Hub and private registries.
Thus, we might have missed secrets included in single layers or complete images that were not subject to our study.
In this light, the absolute number of found secrets is already very alerting.
Also, in relative numbers, our results should be representative of the selected groups due to our sampling.
Yet, the selected groups, i.e., our Docker Hub search terms, might lead to skewed results overestimating the overall population.
For instance, images that are not targeted at protocols might have been created with fewer secrets.
Thus, we opted for a broad body of terms based on, i.a., public polls <cit.> to avoid any bias.
Moreover, our private registry analysis was not targeted but included randomly sampled layers, and we still found a similar share of affected images as on Docker Hub.
As such, we believe that our relative results are, at least in their magnitude, representative of the overall population of publicly available Docker images.
Missing Methods to Check API Secrets
While relying on Internet-wide measurements was a suitable measure to assess the usage of compromised private keys for the authenticity of Internet-reachable services, we could not check whether found API secrets are functional.
The only option would be to contact the corresponding API's endpoint to check for the acceptance of found credentials.
However, due to our ethical considerations, we must not use found secrets as such usage might influence other systems or services.
Thus, we cannot validate them against their respective endpoint.
Still, the number of found secrets is worrying and, looking at the usage of compromised private keys, we are convinced that many API secrets are also functional.
Causes & Mitigation Opportunities
We have seen creators both actively copying secrets from their local file system into the image, e.g., most of the API secrets but also private keys, incl. certificates, and passively generating key material during the image creation process, e.g., by installing an OpenSSH server.
Both behaviors lead to compromised secrets and affect the security of both image creators and users who base their containers on an image and its already included secrets.
Most likely, creators and users are unaware of compromising or using compromised foreign secrets.
In fact, compared to GitHub, which provides a graphical interface to browse published files and potentially notice a mistakenly uploaded secret, files in Docker images and containers cannot be browsed easily, i.e., users barely get an overview of the included files.
Furthermore, while Git repositories only include manually added files, images of Docker containers contain a complete system directory tree.
Thus, files with included secrets cannot easily be identified.
The mitigation of these problems must be two-fold.
On the one hand, image creators must be warned that they are uploading their secrets to (publicly reachable) Docker registries.
On the other hand, when deploying containers based on downloaded images, users should be informed that included secrets, especially private keys, might already be compromised, putting the authentication of deployed services at stake.
To this end, credential-finding tools such as TruffleHog <cit.> or SecretScanner <cit.> can be integrated on both sides of the Docker paradigm.
When uploading or downloading an image, these tools could then scan all layers of the image for included secrets.
To reduce the number of false positives, for potential API secrets, the tool can also check the secret's function against the respective endpoint (we consider this ethically acceptable on the side of the user who downloaded the image).
For private keys, the tools could maintain a list of test keys that are usually included in libraries.
Increasing image creators' awareness regarding the leakage of such secrets should decrease their number in uploaded images.
Additionally, performing a second check at the user deploying a container based on a downloaded image should further decrease the number of services relying on already compromised secrets.
An additional help could be an API + graphical view for images on Docker Hub, which shows the included files.
This API could also enable third-party solutions similar to those for GitHub <cit.> to easily search for known secret file paths.
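Returning to the integration of credential-finding tools, one possible client-side check is sketched below (our illustration, not an existing Docker feature): export an image with docker save and scan every file of every layer for PEM private-key markers before pushing or after pulling; the on-disk layout of the exported archive differs between Docker versions and OCI images, so the layer detection is deliberately loose.

    # Sketch (ours) of a client-side check: scan all layers of a locally available
    # image for PEM private keys before pushing or after pulling it.
    import io
    import subprocess
    import tarfile

    PEM_MARKER = b"PRIVATE KEY-----"

    def scan_image(image: str) -> list[str]:
        # "docker save" streams the image as a tar archive that contains one tar per layer.
        proc = subprocess.run(["docker", "save", image], check=True, capture_output=True)
        findings = []
        with tarfile.open(fileobj=io.BytesIO(proc.stdout)) as outer:
            for member in outer.getmembers():
                if not member.isfile():
                    continue
                # Layer naming differs (e.g. "<id>/layer.tar" or "blobs/sha256/<digest>"),
                # so we simply try to open every regular member as a nested tar archive.
                try:
                    layer = tarfile.open(fileobj=io.BytesIO(outer.extractfile(member).read()))
                except tarfile.ReadError:
                    continue  # not a layer tarball (e.g. manifest.json)
                for entry in layer.getmembers():
                    if entry.isfile() and entry.size < 1_000_000:
                        if PEM_MARKER in layer.extractfile(entry).read():
                            findings.append(f"{member.name}:{entry.name}")
        return findings

    # Example: warn before pushing, e.g. print(scan_image("my-service:latest"))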
§ CONCLUSION
Containerization allows integrating applications and their dependencies into self-contained and shareable images, making software deployment easy.
However, when focusing on security, sharing secrets or using already compromised secrets breaks security promises, e.g., authenticity or access control.
Thus, cryptographic secrets must not be included in publicly available container images.
Our analysis of numnonemptyimages images from Docker Hub and privatemeasurementnumtotalmax private registries revealed, however, that pctaffectedimages include secrets that should not be leaked to the public.
More specifically, we found a near-lower bound of validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches API secrets.
validapicloudvalidnumdistinctmatchestotal API secrets belonging to cloud providers, e.g., apicloud1rule (apicloud1numdistinctmatches secrets), or validapifinancialvalidnumdistinctmatchestotal secrets to financial services, e.g., apifinancial0rule (apifinancial0numdistinctmatches secrets), show that attackers can cause immediate damage knowing these secrets.
Focusing on the leaked private keys, we find that these are also in use in practice: 20220901numuniquehosts TLS and SSH hosts on the Internet base their authentication on found keys and are thus susceptible to impersonation attacks.
Notably, many private keys are generated automatically when packages are installed during image creation.
While beneficial when running on real hardware, where every computer generates its own key, in container images this process automatically leads to compromised secrets and potentially a vast number of containers with compromised authenticity.
We further discover that especially private registries serve images with potentially sensitive software, most likely not intended to be publicly shared.
Additionally, these registries might not prevent write access, enabling attackers to add malware to images.
Our work shows that secret leakage in container images is a real threat and not negligible.
Especially the proven usage of leaked private keys in practice confirms many of the introduced attack vectors.
As a countermeasure, the awareness of image creators and users regarding secret compromise must be increased, e.g., by integrating credential search tools into the Docker paradigm.
Funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) — Research Project VeN2uS — 03EI6053K.
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy — EXC-2023 Internet of Production — 390621612.
ACM-Reference-Format
§ ETHICAL CONSIDERATIONS
Our research curates a comprehensive archive of leaked security secrets in Docker images on Docker Hub and private registries, whose leakage is in itself a threat to security.
Moreover, to find private registries and deployments basing their security on leaked secrets, we leverage Internet-wide measurements that can have unintended implications, e.g., high load on single network connections impacting stability, or alerting sysadmins due to unknown traffic.
Thus, we base our research on several ethical considerations.
First, we take well-established guidelines <cit.> and best practices of our institution as the basis for our research.
We handle all collected data with care and inform image creators and Docker Inc., to responsibly disclose our findings (cf. Appendix <ref>).
Moreover, we comply with recognized measurement guidelines <cit.> for our Internet-wide measurements, reducing their impact (cf. Appendix <ref>).
§.§ Handling of Data & Responsibilities
During our research, we always only collect and request publicly available data, i.e., our access is limited to publicly available image repositories.
At no time do we bypass access control, e.g., by guessing passwords.
We, thus, cannot download private images.
Still, we revealed that many of the public images contain sensitive security secrets (cf. Section <ref>) which we stored for further analysis.
All found secrets are stored on secured systems.
Furthermore, we refrain from releasing our dataset, including these secrets or image names, so as not to provide an archive of leaked secrets or pointers for potential attackers.
While this restriction prevents others from independently reproducing our results, we consider this decision to constitute a reasonable trade-off to protect affected users.
Responsible Disclosure
To further support affected users in removing their secrets from publicly available Docker images, we aim to responsibly disclose our findings.
To this end, we extract e-mail addresses from maintainer variables set in Dockerfiles and furthermore derive addresses from Gravatar accounts linked to affected Docker Hub accounts.
In this regard, we identified numemaildisclosure e-mail addresses, which we contacted to notify the owners about our possible findings.
Already after a few hours, we received >30 answers from owners appreciating our efforts, fixing their images, or informing us that the image at hand is no longer used.
A handful informed us that no secrets were leaked, helping us to refine our filtering.
Moreover, we decided to reach out to the operator of Docker Hub, i.e., Docker Inc., to discuss potential further disclosure to unidentifiable creators.
§.§ Reducing Impact of Measurements
To reduce the impact of our active Internet scans, we follow widely accepted Internet measurement guidelines <cit.>.
Coordination
We coordinate our measurements with our Network Operation Center to reduce the impact on the Internet and to react correspondingly.
Abuse emails are handled by informing the senders about the intent of our measurements and how to opt out of them.
As part of this opt-out process, we maintain a blocklist to exclude IPs from our measurements.
External Information
To give external operators information about our research intent, we provide rDNS records for all our scan IPs and transmit contact information in the HTTP header of each request to the registries.
Moreover, we host a webpage on our scan IPs, which gives further information on our project and how to opt-out.
Over time, also due to other measurements, we excluded 5.8 M IP addresses (0.14% of the IPv4 address space).
Limiting Load
To limit load and stress on all systems involved (along the path and the end-host), we deliberately reduce our scan-rate.
Our scans are stretched over the course of one day and use the scanner's address randomization to spread the load evenly.
We further limit the load on single private registries when downloading available images.
While we paid to increase the existing rate limiting for image downloads on Docker Hub (cf. Appendix <ref>), private registries typically do not implement any rate limiting.
Hence, to prevent our scanner from overloading registries running on resource-constrained hardware or connected via slow or volume-billed Internet connections, we decided to only download randomly selected image layers until their sizes sum up to at most 250.
Additionally, we shuffle the downloads of layers of different registries to further distribute the load.
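The sampling strategy can be sketched as follows (ours); since the unit of the 250 limit is elided above, the byte budget in the sketch is an assumption.

    # Sketch (ours) of budget-bounded random layer sampling per registry image.
    import random

    BUDGET_BYTES = 250 * 1024 * 1024  # assumption: the "250" limit above is given in MB

    def sample_layers(manifest_layers: list[dict]) -> list[dict]:
        # manifest_layers: entries from an OCI/Docker image manifest, each carrying
        # at least a "digest" and a declared "size" in bytes.
        layers = manifest_layers[:]
        random.shuffle(layers)
        chosen, total = [], 0
        for layer in layers:
            if total + layer["size"] > BUDGET_BYTES:
                continue  # skip layers that would exceed the per-image budget
            chosen.append(layer)
            total += layer["size"]
        return chosen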
§.§ Overall Considerations
Without taking our goals into account, the sensitive nature and the impact of our measurements could quickly lead to the conclusion that they are not beneficial.
However, we consider it to be in the public interest and fundamental for improving security to know about potential security issues and how widespread they are.
The Docker paradigm does not include any mechanisms to prevent image creators from (accidentally) adding security secrets to their images, and no mechanisms exist that warn users relying on already compromised security secrets.
Hence, we consider it essential to know whether secrets are widely included in publicly available Docker images and whether these are in use at scale to steer future decisions for counter-measures.
To answer this question, we carefully weighed the impact of our measurements against their benefit and have taken sensible measures to reduce the risks of building a large archive of leaked security secrets and risks introduced by active Internet measurements.
§ IMAGE DOWNLOAD FROM DOCKER HUB
The limit of image manifest downloads from Docker Hub depends on the booked plan, e.g., free users are allowed to pull only 800 images per day.
Hence, for a faster analysis of images on Docker Hub, we purchased two Pro accounts, which allow 5000 image downloads per day each.
Still, we had to perform our analysis on a subset of the available images, as downloading one image from each of the 9321726 available repositories would require 933 days under best conditions (9321726 images at 2 × 5000 pulls per day).
Thus, we decided to limit our analysis to two categories:
(i) a context of standard protocols and frequently used technologies, and
(ii) an (Industrial) IoT context for comparison.
Both categories have communication in common, as here security can be affected at Internet scale.
Standard Context
To generate a wide view on secret leakage in Docker images, we create a list of search queries comprising standard protocols <cit.>, and frequently used technologies <cit.>.
To find related images, we employ Docker Hub's API to perform searches over all available images and retrieve the results users would obtain when using the docker search CLI command or Docker Hub's web interface.
To ensure that different handling of special characters in technology and protocol names does not exclude any images, we include different spelling variants in our query list, i.e., we include terms as they are, but also variants in which non-alpha-numeric characters are replaced by other characters, such as a space.
Table <ref> (top) shows our constructed search queries for the standard context.
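Such variants can be generated as sketched below (ours); the concrete replacement characters are elided above, so a space and a hyphen are assumptions.

    # Sketch (ours): derive spelling variants of a search term; the exact replacement
    # characters used in the study are elided above, so space and hyphen are assumptions.
    import re

    def query_variants(term: str) -> set[str]:
        variants = {term.lower()}
        for sep in (" ", "-"):
            variants.add(re.sub(r"[^0-9a-z]+", sep, term.lower()).strip(sep))
        return variants

    # e.g. query_variants("OPC UA") -> {"opc ua", "opc-ua"}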
(Industrial) IoT Context
We extend our analysis to images in the (Industrial) IoT context, as deployments in this area showed massive security deficits in the past <cit.>, in single cases traced back to security secret leakage via GitHub and Docker images <cit.>.
As search terms, we take (Industrial) IoT protocol names that were subject to recent research <cit.>.
We proceed similarly to the standard context, i.e., we include derived spellings of these terms, and show the constructed search queries of this context in Table <ref> (bottom).
§ REGULAR EXPRESSIONS
Following already established procedures to find security secrets in code repositories <cit.>, we build our secret detection in Docker images on regular expressions, i.e., we try to match regular expressions derived from secrets against the content of included files.
Table <ref> shows our composed list of regular expressions covering a variety of secrets, i.e., asymmetric private keys and API keys, as well as accompanying material we use for our analysis, i.e., public keys and certificates.
We orient our expressions towards related work <cit.> and TruffleHog <cit.>, an established tool to find secrets in various sources, i.e., the local file system, Git repositories, S3 storages, and syslogs.
Specifically, we inherit Meli et al.'s <cit.> regular expressions to allow comparisons between the occurrence of leaked secrets in GitHub repositories at scale and our findings.
Furthermore, they composed their expressions comprehensibly, i.e., they included API keys for certain services based on the occurrence of service domains in Alexa's Top 50 Global and United States lists, in combination with a list of well-known APIs manually filtered for services with a high risk of key leakage and keys with a distinctive signature (to reduce the number of false positives).
For private keys, they focus on the most prevalent types and storage forms, i.e., RSA, elliptic curve keys, PGP, and general keys in PEM format.
To broaden our analysis and align our expressions with the scope of our search queries (cf. Appendix <ref>), we adapt our expression for private keys to match every type of private key in PEM format and, furthermore, extend the list of expressions to also match private key blocks, keys in PKCS7 format, and keys stored in XML format (due to their unambiguous signature).
Regarding API secrets to match, we extend our list with expressions from TruffleHog <cit.> on the basis of services that are currently trending among developers <cit.> or have a high risk of misuse and whose regular expressions include a unique signature (also to reduce the number of false positives).
For some services we found more than one type of secret, i.e., secrets for different API versions (GitHub v1 and v2), or different types of keys (Stripe).
Our final list contains 48 expressions, which we match against the content of every file in the images that are part of our study.
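To illustrate what such expressions look like, the sketch below (ours) applies two representative patterns to raw file content: a generic PEM private-key block and a purely hypothetical API-token shape; the 48 expressions actually used in the study are those listed in Table <ref>.

    # Illustrative sketch only: two example expressions in the spirit of Table <ref>.
    import re

    PATTERNS = {
        # Any PEM-encoded private key block ("RSA", "EC", "OPENSSH", ... or no qualifier).
        "pem_private_key": re.compile(
            rb"-----BEGIN [A-Z ]*PRIVATE KEY-----.*?-----END [A-Z ]*PRIVATE KEY-----",
            re.DOTALL,
        ),
        # Hypothetical token shape (prefix plus fixed-length body), used only for illustration.
        "example_api_token": re.compile(rb"\bextok_[A-Za-z0-9]{32}\b"),
    }

    def scan_blob(data: bytes) -> list[tuple[str, bytes]]:
        # Return (pattern name, truncated match) for every candidate secret in a file's content.
        findings = []
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(data):
                findings.append((name, match.group(0)[:64]))
        return findings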
§ FILTERING BASED ON FILEPATHS
After matching our regular expressions against arbitrary file content available in Docker images, extensive filtering is required to exclude false positive matches, i.e., matches that do not contain any secret.
Our File filter is based on file paths derived from matches that our Kompromat filter excluded, i.e., all parent directories under which more than 2/3 of the found keys are test keys known to kompromat <cit.>, and all directories that directly include known test keys.
Additionally, it takes into account manually compiled file paths, e.g., where standard libraries reside or where package managers store their downloads, as well as extensions of database files, which we selected after manually revisiting all matches, as these produced a high number of false positives.
Figure <ref> shows the seven most prevalent file paths that contain matches excluded by our File filter.
Indeed, most of the exclusions are matches included in folders belonging to package managers and are thus most likely test secrets.
The massive filtering of API secret matches in one of these paths is due to the high number of false positives of the Twitter regular expressions on database files.
|
http://arxiv.org/abs/2307.04623v1 | 20230710150910 | The precise form of Ahlfors' second fundamental theorem | [
"GUang-Yuan Zhang"
] | math.CV | [
"math.CV",
"[2020] 30D35, 30D45, 52B60"
] |
The precise form of Ahlfors' second fundamental theorem
Department of Mathematical Sciences, Tsinghua University, Beijing 100084,
China. Email: [email protected]
Project 12171264 & 10971112 supported by NSFC
Let S=ℂ∪{∞} be the unit Riemann sphere. A simply connected
covering surface over S is a pair Σ=( f,U) ,
where U is a Jordan domain in ℂ and f:U→ S
is an orientation-preserving, continuous, open and finite-to-one mapping
(OPCOFOM). Let 𝐅 be the space of all simply connected covering
surfaces over S, for each Σ=( f,U) ∈𝐅 let
A(Σ) and L(∂Σ) be the area and boundary length of
Σ, weighted according to multiplicity, and for each 𝔞∈ S
let n( Σ,𝔞) =#f^-1(
𝔞) ∩ U be the cardinality of the set f^-1(
𝔞) ∩ U. The Second Fundamental Theorem (SFT) of Ahlfors'
covering surface theory is that, for any set E_q={𝔞
_1,𝔞_2,…,𝔞_q} of distinct q(≥3)
points on S, there exists a positive constant h, which depends only on
E_q, such that for any Σ∈𝐅,
(q-2)A(Σ)≤4πn(Σ,E_q)+hL(∂Σ),
where n(Σ,E_q)=∑_v=1^qn(Σ
,𝔞_v).
The goal of this paper is to develop a new method to give the precise bound
H_0 of h. We write R(Σ)=R(Σ,E_q) the error term of
Ahlfors' SFT, say
R(Σ)=( q-2) A(Σ)-4πn( Σ
,E_q) .
Our first main result is that for a.e. L∈(0,+∞), there exists an
extremal surface Σ_0 in 𝐅( L)
={Σ∈𝐅:L(∂Σ)≤ L}, say
H_L=sup{R(Σ)/L(∂Σ):Σ∈𝐅( L) } =R(Σ_0)/L(∂Σ_0).
Our second main result is that there exists a subspace 𝒮_0 of
𝐅, consisting of very simple surfaces, and there exists a 4π-extremal surface Σ_1∈𝒮_0, say,
H_0=lim_L→∞H_L=sup{R(Σ)/L(∂Σ):Σ∈𝐅}=(R(Σ_1)+4π)/L(∂Σ_1).
Our third main result is that among all 4π-extremal surfaces, there
exists one which is the simplest (such surfaces may not be unique). Simplest
4π-extremal surfaces can be used to give the precise bound H_0
as simply as possible.
[2020] 30D35, 30D45, 52B60
Guang Yuan Zhang
August 12, 2023
=====================
§ INTRODUCTION
We first recall some notations used in <cit.>. The
Riemann sphere S is the unit sphere in ℝ^3 centered
at the origin, which is identified with the extended plane ℂ∪{∞} via the stereographic projection P:S→ℂ∪{∞} as in <cit.>. Length and area on S have natural
interpretations using the spherical (chordal) metric on ℂ:
ds=ρ(z)|dz|=2|dz|/(1+|z|^2), z∈ℂ.
For a set V on S, ∂ V denotes its boundary and
V its closure. We write Δ={z∈ℂ
:|z|<1}. Then for the upper and lower open hemispheres S^+
and S^- on S, we have P( S^+) =( ℂ
\Δ) ∪{∞} and
P(S^-)=Δ.
A topological triangle
γ on S is a Jordan curve equipped with three vertices lying
on γ. The three vertices define three (compact) edges of γ, the
two components of S\γ are called topological triangular
domains, and the vertices and edges of γ are also called vertices and
edges of the topological triangular domains. If each of
the three edges of γ is on a great circle on S, then γ is
called a triangle and the two components of S\γ are called
triangular domains.
A (covering) surface Σ (over
the sphere S) is defined as in Ahlfors' paper <cit.>: Σ is sewn
from a finite number of compact topological triangular domains on S (see
Remark <ref> A(3) for details). Equivalently speaking, Σ is defined
to be a pair ( f,U) , where U is a
domain[A domain means a connected open subset of ℂ.] in ℂ so that U has a finite
topological triangulation ∪_j=1^m_0U_j and for each
j=1,…,m_0,f|_U_j:U_j→
f(U_j)⊂ S is a homeomorphism. Moreover, f is locally
homeomorphic on U, except at a finite number of points, which are
some vertices of the triangulation {U_j}.
A mapping from a compact subset K of ℂ
into S is called an orientation-preserving, continuous, open and
finite-to-one mapping (OPCOFOM) if it can be extended to be an
OPCOFOM from a neighborhood of K in ℂ into S.
Then a surface Σ=( f,U) in the
definition can be regarded as an OPCOFOM from U
into S, and vice versa. The term orientation-preserving means that
P^-1∘ f is orientation-preserving. In this setting, we call the pair
∂Σ=(f,∂ U) the boundary of the surface Σ, and
define A(Σ)=A(f,U) and L(∂Σ)=L(f,∂ U),
respectively, area and length, weighted according to multiplicity. For
instance, A(g,Δ)=L(g,∂Δ)=6π when g(z)=z^3
,z∈Δ.
Here we list some elementary results on the sphere
S as in <cit.>: any great circle on S has length 2π,
A(S)=2A(S^+)=2A(S^-)=4π, L([0,+∞])=2L([0,1])=2L([1,+∞
])=π, the circle on S with spherical diameter [0,1] or [1,+∞],
has length √(2)π, and a disk in a hemisphere on S with perimeter L
has area 2π-√(4π^2-L^2).
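These elementary facts can be verified numerically from the chordal metric above; the small Python sketch below is ours, and the helper sph_dist is our own notation, not the paper's.

    # Sketch (ours): numerical check of the elementary spherical quantities listed above.
    import math

    def sph_dist(z: complex, w: complex) -> float:
        # Geodesic distance on S between the points corresponding to z and w:
        # chordal distance 2|z-w|/sqrt((1+|z|^2)(1+|w|^2)), with chord = 2*sin(d/2).
        chord = 2 * abs(z - w) / math.sqrt((1 + abs(z) ** 2) * (1 + abs(w) ** 2))
        return 2 * math.asin(min(1.0, chord / 2))

    assert abs(sph_dist(0, 1) - math.pi / 2) < 1e-12      # L([0,1]) = pi/2
    r = sph_dist(0, 1) / 2                                # radius of the disk with spherical diameter [0,1]
    L = 2 * math.pi * math.sin(r)                         # its perimeter ...
    assert abs(L - math.sqrt(2) * math.pi) < 1e-12        # ... equals sqrt(2)*pi
    area = 2 * math.pi * (1 - math.cos(r))                # spherical cap area
    assert abs(area - (2 * math.pi - math.sqrt(4 * math.pi**2 - L**2))) < 1e-9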
All surfaces in this paper are covering surfaces defined above.
Let Σ=(f,U) be
a surface over S.
(1) Σ is called a closed surface, if U=ℂ=S. For a closed surface Σ, we have ∂Σ=∅, and then
L(∂Σ)=0.
(2) Σ is called a simply-connected surface, if U is a
simply connected domain.
(3) 𝐅 denotes all surfaces such that for each Σ=(
f,U) ∈𝐅, U is a Jordan domain.
In <cit.>, it is assumed that U is a Jordan
domain, when ( f,U) is a (covering) surface. But in
this paper, there is no such restriction. It is permitted that even for a
simply connected surface ( f,U), U is not a
Jordan domain, it may be the domain between two tangent circles, for example.
(A) Let K_1 and K_2 be two domains or two
closed domains on S, such that ∂ K_1 and ∂ K_2 are
both consisted of a finite number of disjoint Jordan curves. A mapping
f:K_1→ K_2 is called a complete covering
mapping (CCM), if (a) for each p∈ K_2 there exists a neighborhood V
of p in K_2 such that f^-1(V)can be expressed as a union
∪_j∈𝒜U_j of disjoint (relative) open sets of K_1, and
(b) f|_U_j:U_j→ V is a homeomorphism for each j∈𝒜.
(B) We call f a branched
complete covering mapping (BCCM), if all conditions of (A) hold, except that
(b) is replaced with (b1) or (b2): (b1) If both K_1 and K_2 are
domains, then for each j∈𝒜, U_j∩ f^-1(p) contains only
one point a_j of f^-1(p), and there exist two homeomorphisms
φ_j:U_j→Δ,ψ_j:V→Δ with
φ_j( a_j) =ψ_j( p) =0, such that
ψ_j∘ f|_U_j∘φ_j^-1(ζ)=ζ^k_j,ζ∈Δ,where k_j is a positive integer; or (b2) if both K_1 and
K_2 are closed domains, then f|_K_1^∘:K_1^∘→
K_2^∘ satisfies (b1) and moreover, f restricted to a neighborhood
of ∂ K_1 in K_1 is a CCM onto a neighborhood of ∂
K_2 in K_2.
(C) For a surface Σ=( f,U)over S, f is in general not a CCM or BCCM. When f(
z) =z^2, both f:Δ→Δand
f:Δ→Δ are BCCMs, but when f( z)
=z^-1( z-1/2/1-z/2) ^2, f:Δ→
f(Δ) is neither a CCM nor a BCCM.
Nevanlinna theory of value distribution of
meromorphic functions (<cit.>, <cit.>, <cit.>, <cit.>) and Ahlfors
theory of covering surfaces (<cit.>, <cit.>, <cit.>, <cit.>) are
two major events in the history of the development of function theory. The
most striking result of Nevanlinna theory is the Second Fundamental Theorem.
Ahlfors' theory is a geometric version of Nevanlinna's, in which Nevanlinna's
Second Fundamental Theorem is reinterpreted as follows.
Theorem A. (Ahlfors' Second
Fundamental Theorem (SFT) <cit.>). For an arbitrarily given set
E_q={𝔞_1,𝔞_2,…,𝔞
_q} of distinct q( ≥3) points on S, there exists a positive constant
h such that for any surface Σ=(f,U)∈𝐅,
(q-2)A(Σ)≤4πn(Σ)+hL(∂Σ),
where
n(Σ)=n(Σ,E_q)=∑_v=1^q n(Σ,𝔞_v), n(Σ,𝔞_v)=#f^-1(𝔞_v)∩ U,
The goal of this paper is to present a method to
identify the precise bound for that h in Ahlfors' SFT for 𝐅. This
problem can be traced back to the early 1940s, when J. Dufresnoy first gave a
numerical estimate of h in <cit.> as follows.
Theorem B. (Dufresnoy
<cit.>) For any surface Σ=(f,U)∈𝐅
,
(q-2)A(Σ)≤4πn(Σ,E_q)+(q-2)(6π/δ_E_q)L(∂Σ),
where δ_E_q=min_1≤ i<j≤ q d(𝔞_i,𝔞_j).
Here d( 𝔞_i,𝔞_j) is the spherical
distance on S, which is the minimum of length of all paths on S from
𝔞_ito 𝔞_j.
In 2011, the author have identified the precise bound for h in a special
case as follows.
Theorem C. (Zhang <cit.>)
For any surface Σ=(f,U)∈𝐅 with
n( Σ,{0,1,∞}) =∅,
we have
A(Σ)≤ h_0L(∂Σ),
where
h_0=max_θ∈[0,π/2] h(θ),
h(θ)=(A(𝔇(0,1,θ,θ))+4π)/L(∂𝔇(0,1,θ,θ))=(π+θ)√(1+sin^2θ)/arctan(√(1+sin^2θ)/cosθ) - sinθ,
and 𝔇( 0,1,θ,θ)
is the lens on the sphere S enclosed by two symmetric
circular arcs with endpoints 0 and 1 and with interior
angle 2θ at the cusps (see Definition <ref>).
Moreover, the bound h_0 is precise: there exists a sequence
Σ_n∈𝐅 with n(Σ_n
,{0,1,∞})=∅ such that A(Σ_n)/L(∂Σ_n)→ h_0 as n→∞.
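For a quick numerical check, h(θ) can be evaluated on a grid and maximized; the short Python sketch below is ours, follows the grouping of the displayed formula above, and uses arctan2 so that the endpoint θ=π/2 stays finite.

    # Numerical sketch (ours): evaluate h(theta) of Theorem C on a grid and take the maximum.
    import numpy as np

    theta = np.linspace(0.0, np.pi / 2, 200_001)
    s = np.sqrt(1.0 + np.sin(theta) ** 2)
    # arctan2(s, cos(theta)) equals arctan(s / cos(theta)) for cos(theta) >= 0
    # and remains finite at theta = pi/2.
    h = (np.pi + theta) * s / np.arctan2(s, np.cos(theta)) - np.sin(theta)

    i = int(np.argmax(h))
    print(f"h_0 is approximately {h[i]:.6f}, attained near theta = {theta[i]:.6f}")
    # Endpoint checks under this grouping: h(0) = 4 and h(pi/2) = 3*sqrt(2) - 1.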
The proof of (<ref>) occupied almost
all space of the long paper <cit.>, in which it is pointed out that
h_0 can be easily found and (<ref>) can be easily proved, provided
that for every given L>0, the extremal surface Σ_L=(
f_L,Δ) with L(∂Σ_L)≤ L so that
f_L(z)≠0,1,∞ and that A(Σ_L) assumes the maximal value
exists. In fact, such extremal surfaces have a lot of good properties, for
example, the boundary ∂Σ_L consists of circular arcs with
the same curvature and the two endpoints of each of these arcs are {0,1} or
{1,∞}. From this property, after a little argument as in the
introduction section of <cit.> one can compute the precise bound. However,
at that time we couldn't prove the existence of the extremal surface
Σ_L and left it as a conjecture. We were lucky to find a substitute
for the extremal surface, and finally we have successfully proved the
optimality of h_0 in <cit.>.
But the method in <cit.> is difficult
to apply to the general case, that is, in (<ref>), q≥3, the
position of 𝔞_vs in E_q={𝔞_1,𝔞
_2,…,𝔞_q} are arbitrarily given and n
(Σ,E_q) is not assumed equal to zero. So, to identify the precise
bound of h in (<ref>) in the general case, it seems the simplest way is to
settle the existence of extremal surfaces.
Consider the general case that q≥3,
E_q={𝔞_1,𝔞_2,…,𝔞_q} and L>0 are given arbitrarily. We introduce the following notations
R(Σ)=R(Σ,E_q)=(q-2)A(Σ)-4πn(Σ,E_q),
which is the error term of Ahlfors' SFT,
𝐅(L)={Σ∈𝐅:L(∂Σ)≤ L},
H(Σ)=H(Σ,E_q)=R(Σ,E_q)/L(∂Σ),
H_0=H_0(E_q)=sup_Σ∈𝐅H(Σ),
and
H_L=H_L(E_q)=sup_Σ∈𝐅(𝐋)H(Σ).
Then it is clear that H_L increase with respect to L,
H_0=lim_L→∞H_L,
and Ahlfors' SFT can be restated as
H_0=H_0(E_q)<+∞.
Then our goal becomes to prove the existence of extremal surfaces in
𝐅( L) and present an achievable method that gives the
precise value of H_0. For this purpose, we introduce some more terminology
and notations.
Let ℒ be the set of
continuous points of H_L=H_L(E_q), with respect to L.
Since H_L increase with respect to L, it is clear that
( 0,+∞) \ℒ is just a countable set.
For any two non-antipodal
points p and q on S, pq is the geodesic on S from p to
q: the shorter of the two arcs with endpoints p and q of the great
circle on S passing through p and q. Thus d(p,q)<π and
pq is uniquely determined by p and q. An arc of a great
circle on S is called a line segment on S, and to emphasize this,
we also refer to it as a straight line segment. For the notation
pq, when p and q are explicit complex numbers we write
p,q, to avoid ambiguity such as 123=12,3
or 1,23. When p and q are two antipodal points on S,
pq is not unique and d( p,q) =π. To avoid
confusions, when we write pq, or say pq is well
defined, we always assume d( p,q) <π.
All paths and curves considered in this paper are oriented and any subarc of a
path or closed curve inherits this orientation. Sometimes paths and curves
will be regarded as sets, but only when we use specific set operations and set
relations. For an oriented circular arc c, the circle C containing c
and oriented by c is called the circle determined by c.
(1) For a Jordan domain D in
ℂ, let h be a Möbius transformation with
h(D)⊂Δ. Then ∂ D is oriented by h and the
anticlockwise orientation of ∂ h(D). The boundary of every Jordan
domain on S is oriented in the same way, via stereographic projection.
(2) For a Jordan curve C on ℂ or S, the domain
T_C bounded by C is called enclosed by C if the boundary
orientation of T_C agrees with the orientation of C.
(3) A domain D on S is called convex if for any two points q_1
and q_2 in D with d(q_1,q_2)<π, q_1q_2⊂ D; a Jordan curve on S is called convex if it encloses a convex
domain on S; a path on S is called convex if it is locally an arc
of a convex Jordan curve.
(4) Let γ:[a,b]→ S be a path on S and p_0∈(a,b).
γ is called convex at p_0, if γ restricted to a
neighborhood I_δ=(p_0-δ,p_0+δ) of p_0 in (a,b) is a convex Jordan path, with respect to the orientation
of γ when t∈ I_δ increases; and γ is called
strictly convex at p_0 if for some δ>0 the restriction
γ|_I_δ is convex and γ|_I_δ∩ S_1
=γ|_I_δ\{γ(p_0)} for some open hemisphere
S_1 on S.
By definition, the disk {z∈ℂ:|z|>2} is viewed as a convex domain on S and its
boundary orientation is clockwise, and thus the circle |z|=2 oriented
clockwise is also convex on S. Also by definition, the disk {z∈ℂ:|z|<2} is not convex on S and its boundary orientation is
anticlockwise, and thus the circle |z|=2 oriented anticlockwise is not
convex on S. By convention, an arc of a curve inherits the orientation of
this curve. A Jordan curve on S is convex and lies in a hemisphere on S if
it is locally convex (see Lemma 4.1 in <cit.> for polygonal Jordan curves
on S). A locally convex curve on S is locally simple and always goes
straight or turns left when viewing S from its interior, such as with planar
convex polygons oriented anticlockwise.
For a curve β, in
ℂ or S, given by z=z(t), t∈[a,b], β^∘
denotes the interior of β, which is the restriction of β to the
open interval (a,b), and ∂β denotes the set of the endpoints of
β, which is either the singleton {z(a)} if β is closed, or the
two points set {z(a),z(b)}. For a surface
Σ=( f,U) , the interior of the surface Σ
is the restricted open surface Σ^∘=( f,U) . The
notation q∈Σ^∘ means a pair ( f,p) with
q=f(p) for some p∈ U.
When β is defined by z=e^it, t∈[0,4π], for example,
β^∘ is still well defined as z=e^it,t∈(0,4π), and thus as
sets, β and β^∘ may coincide.
For a Jordan curve
α in ℂ, its partition is a collection {α_j
}_j=1^n of its subarcs such that α=∪_j=1^nα_j and
α_j^∘ are disjoint and arranged anticlockwise. In this setting
we write α=α_1+α_2+⋯+α_n. Here α
_j^∘ is the interior of α_j, which is now α_j
without endpoints (since α is simple). A partition
∂Σ=γ_1+γ_2+⋯+γ_n
of ∂Σ for a surface Σ=(f,U)∈𝐅 is
equivalent to a partition
∂ U=α_1+α_2+⋯+α_n
of ∂ U such that γ_j=(f,α_j) for j=1,…,n.
Now we can introduce the subspace ℱ of 𝐅:
We denote by
ℱ the subspace of 𝐅 such that for each Σ=(
f,U) ∈ℱ, ∂Σ has a partition
∂Σ=c_1+c_2+⋯+c_n
of simple convex circular (SCC) arcs. This means that ∂ U has a
partition
∂ U=α_1+α_2+⋯+α_n
such that f restricted to each α_j is a homeomorphism onto the SCC
arc c_j.
ℱ( L) is the subspace of ℱ such that for
each Σ∈ℱ( L) , L(∂Σ)≤ L.
Let T be a Jordan domain on S which is a union of a finite number of disks
D_j,j=1,…,m. Then T can be viewed as a surface in
ℱ, if all disks D_j are convex on S, say, all disks are of
diameter ≤π.
For any surface Σ in
𝐅 and any ε>0, to estimate H(Σ) we may
assume L(∂Σ)<+∞, for otherwise we have H(Σ)=0. Then
for any ε>0, there are standard ways to show that there exists
another surface Σ_ε in ℱ such that
∂Σ_ε is consisted of a finite number of line
segments, L(∂Σ_ε)≤ L(∂Σ)+ε, A(Σ_ε)≥ A(Σ)-ε and n(
Σ_ε) ≤n( Σ) . Then we
have
H(Σ_ε)≥( q-2) ( A(Σ
)-ε) -4πn( Σ) /L(∂Σ)+ε→ H(Σ)
as ε→0. Thus, we have
H_0=H_0(E_q)=sup_Σ∈𝐅H(Σ)=sup_Σ∈ℱH(Σ).
Define
H_L=H_L(E_q)=sup_Σ∈ℱ(L)H(Σ).
We call Σ_0∈ℱ(L) an extremal surface of
ℱ(L), if
H(Σ_0)=H_L.
If Σ_0 is an extremal surface of ℱ(L) with minimal
perimeter among all the extremal surfaces of ℱ(L), we call it a
precise extremal surface of ℱ(L).
By Remark <ref>, a precise extremal surface of
ℱ(L) is also a precise extremal surface of
𝐅(L), if L∈ℒ. Our first main result is the following.
For each L∈ℒ
with L>0, there exists a precise extremal surface of 𝐅(L), and
for every precise extremal surface Σ of 𝐅( L)
we have Σ∈ℱ( L), and ∂Σ has a
partition
∂Σ=C_1+C_2+⋯+C_N,
such that the following holds.
(i) All C_j,j=1,…,N, are SCC arcs and have the same curvature.
(ii) At most one of C_j,j=1,…,N, is a major circular arc.
(iii) Either ∂Σ is a simple circle containing at most one point
of E_q, or N>1, #∂ C_j=2, ∂ C_j⊂ E_q and
C_j^∘∩ E_q=∅, for j=1,…,N.
(iv) For every j=1,…,N,C_j is contained in an open hemisphere S_j
on S.
Recall that ∂ C_j denotes the endpoints of C_j. A major
circular arc means more than half of the circle that contains it, and a closed
circular path is regarded major.
From the above theorem we will find the simplest models to compute
H_0=H_0(E_q). But we need introduce some more notations and terminologies.
𝒮_0=𝒮_0(E_q) is the
space of all surfaces Σ=( f,Δ) in
ℱ such that the following hold.
(1) There exists a positive integer q^'=Q(Σ)∈{2,…,q} and
∂Σ has a partition
∂Σ=C_1( p_1,p_2) +⋯+C_q^'(
p_q^',p_1) ,
such that for each j=1,2,…,q^', C_j=C_j( p_j
,p_j+1) is an SCC arc on S (C_q^'+1=C_1).
(2) For each j=1,2,…,q^', the initial point p_j and the
terminal point p_j+1 are two distinct points in E_q, and d(
p_j,p_j+1) <π, say, p_jp_j+1 is well defined.
(3) p_1,…,p_q^' are distinct each other.
(4)
_maxΣ=max{#f^-1(w)∩Δ:w∈ S\∂Σ}≤2q^'-2.
(5) For each j=1,2,…,q^', C_j^∘⊂ S\
E_q.
(6) For each j=1,2,…,q^', C_j is contained in an open
hemisphere S_j on S.
(7) All C_j,j=1,2,…,q, have the same curvature.
(8) For each j=1,2,…,q^', C_j is either a minor arc or half
of a circle.
For each surface Σ in 𝒮_0=𝒮_0(
E_q) , all the circular arcs of ∂Σ have the same
curvature k=k( Σ,E_q) .
Now we can state our second main result:
(i) There exists a surface Σ_0 in
𝒮_0=𝒮_0(E_q) such that
H_0=sup_F∈𝐅 R(F)/L(∂F)=H_𝒮_0^4π=max_F∈𝒮_0 (R(F)+4π)/L(∂F)=(R(Σ_0)+4π)/L(∂Σ_0).
(ii) For each surface of 𝒮_0 satisfying (<ref>) the
boundary is consisted of strictly convex circular arcs.
(iii) k=k( Σ,E_q) is the same for all surfaces Σ
satisfying (<ref>).
For each Σ∈𝒮_0, the partition
(<ref>), ignoring a permutation of subscripts like i_0+1,i_0
+2,…,i_0+q^' ( modq^') , is
unique and so is the number q^' of terms. Then the boundary length of
all surfaces in 𝒮_0 has a common upper bound, say,
L(∂Σ_0)≤ qπ for all Σ_0∈𝒮_0.
Though 𝐅( L) contains extremal surface when
L∈ℒ by Theorem <ref> and (<ref>), it is proved in
<cit.> that 𝐅 contains no extremal surface and, since
H_L=sup{ H( Σ) ∈𝐅:L(∂Σ)≤ L} increases as a function of L∈(
0,∞) , for any sequence F_n∈𝐅,with
H(F_n)=R(F_n)/L(∂F_n)→sup_F∈𝐅
R(F)/L(∂F)=H_0,
we must have L(∂ F_n)→∞( n→∞) . Thus (<ref>) plays the role converting infinite
problems into finite problems.
To make (<ref>) easier to use, we will reduce the space 𝒮_0 as much as possible to make (<ref>) simpler:
A surface Σ in 𝒮
_0=𝒮_0(E_q) satisfying (<ref>) will be called a 4π-extremal surface of 𝒮_0. A surface Σ of
𝒮_0 is called the simplest 4π-extremal surface
of 𝒮_0 if the following hold.
(1) Q(Σ)=Q(E_q)=min{Q(Σ^')=#∂Σ^'∩
E_q:Σ^' are 4π-extremal surface of 𝒮_0}.
(2) L(∂Σ^')≥ L(∂Σ) for any 4π-extremal
surface Σ^' of 𝒮_0 with Q( Σ^') =Q(Σ).
(3) _max(Σ)≤_max(Σ^') for any 4π-extremal extremal surface Σ^' of 𝒮_0 with
Q( Σ^') =Q(Σ)and L(∂Σ^')=L(∂Σ).
𝒮^∗=𝒮^∗(E_q) is defined to be
the space of all the simplest 4π-extremal surfaces of
𝒮_0.
Our third main theorem is the following
For any set E_q of distinct q points on S,q≥3,
𝒮^∗=𝒮^∗(E_q)≠∅.
To present some application of the last two main theorems, we introduce some
more notations. For a path Γ on S or ℂ given by
z=z(t),t∈ t_1,t_2], -Γ is the opposite path of Γ
given by z=z(t_2-t+t_1),t∈ t_1,t_2].
A convex domain enclosed by a convex
circular arc c and its chord I is called a lune and is denoted by
𝔇^'( I,c) ,𝔇^'(
I,θ(c)) , 𝔇^'( I,L(c)) , or
𝔇^'( I,k(c)) , where θ is the interior
angle at the two cusps, k is the curvature of c and I is oriented such
that[The initial and terminal points of I and c are the same,
respectively, in the notation 𝔇^'(I,θ), in other
words, 𝔇^'(I,θ) is on the right hand side of I.]
∂𝔇^'( I,θ) =c-I.
For two lunes 𝔇^'( I,θ_1) and
𝔇^'( -I,θ_2) sharing the common chord
I we write
𝔇( I,θ_1,θ_2) =𝔇^'( I,θ_1) ∪ I^∘∪𝔇^'(
-I,θ_2)
and call the Jordan domain 𝔇( I,θ_1,θ
_2) a lens. Then the notations 𝔇( I,l_1
,l_2), 𝔇( I,c_1,c_2)and
𝔇( I,k_1,k_2) are in sense and denote the same
lens, when l_j=L(c_j) and k_j=k( c_j) are the length
and curvature of c_j, j=1,2, say,
𝔇( I,c_1,c_2) =𝔇(
I,l_1,l_2) =𝔇( I,k_1,k_2)
=𝔇^'( I,l_1) ∪ I^∘∪𝔇^'( -I,l_2)
=𝔇^'( I,c_1) ∪ I^∘∪𝔇^'( -I,c_2) =𝔇^'(
I,k_1) ∪ I^∘∪𝔇^'( -I,k_2
) .
For a lune 𝔇^'( I,τ) , whether τ
denotes the length l, the angle θ, or the curvature k is always
clear from the context, and so is for the lens 𝔇( I,τ
_1,τ_2) .
By definition we have 0<θ_j≤π for j=1,2, since 𝔇
^'( I,θ) is convex, but for the domain
𝔇( I,θ_1,θ_2) it is permitted that
θ_1 or θ_2 is zero, say 𝔇( I,θ
_1,θ_2) reduces to 𝔇^'(
I,θ_1) or 𝔇^'( -I,θ_2)
. By definition of 𝔇(I,θ,θ) we have
𝔇(I,θ,θ)=𝔇^'( I,θ)
∪𝔇^'( -I,θ) ∪ I^∘,
and θ∈(0,π]. If I=1,0,-1 and θ=π/2, for
example, 𝔇( I,θ,θ) =Δ and
𝔇^'( I,π/2) =Δ^+ is the upper half
disk of Δ.
As a consequence of the last two main results we have our forth main result:
Let Σ=( f,Δ)
∈𝒮^∗(E_q). Then ∂Σ has a partition
(<ref>) satisfying Definition <ref> (1)–(8) and one of the
following holds.
(i) If q^'=Q(Σ)=2, then Σ is a closed lens 𝔇( p_1p_2,θ_0,θ_0)
(see Definition <ref>) with {p_1,p_2}⊂ E_q,
θ_0∈(0,π/2) and
d( p_1,p_2) =δ_E_q=min{ d( a,b)
:a∈ E_q,b∈ E_q,a≠ b} .
(ii) If q=3 and E_q is on a great circle on S, then q^'=Q(Σ)=2.
(iii) If q=q^'=Q(Σ)=3, and the three point set (
∂Σ) ∩ E_q={p_1,p_2,p_3} is not in a great
circle on S, then ∂Δ has a partition ∂Δ
=α_1( a_1,a_2) +α_2( a_2,a_3)
+α_3( a_3,a_1) and there is a Jordan curve
α^'=α_1^'( a_1,a_2) +α
_2^'( a_2,a_3) +α_3^'(
a_3,a_1) in Δ such that α^'
∩∂Δ={a_1,a_2,a_3}, f restricted to the domain
enclosed by α^' is a homeomorphism onto the triangular domain
(see Definition <ref>) T enclosed by p_1p_2
p_3p_1, and f restricted to the closed Jordan domain enclosed by
α_j-α_j^' is a homeomorphism onto the closed lune
𝔇^'( p_jp_j+1,θ
_j) enclosed by C_j( p_j,p_j+1) +p_j+1p_j, for j=1,2,3 (see Definition <ref>).
(iv) If q^'=q=3, or if q^'=2 and q≥3, then
n( Σ) =n( Σ,E_3)
=∅.
(1) We call two surfaces Σ
_1=( f_1,U_1) and Σ_2=(
f_2,U_2) equivalent and write
( f_1,U_1) ∼( f_2,U_2
) ,
if there exists an orientation preserving homeomorphism (OPH) φ
:U_1→U_2 such that f_1=f_2
∘φ. If Σ_1 and Σ_2 are equivalent, then it is
clear that H(Σ_1)=H(Σ_2), and thus, it is very useful to
regard Σ_1 and Σ_2 as the same surface in our paper. Then we
can show that 𝒮^∗ contains at most a finite number of surfaces,
say, equivalence classes.
(2) Similar to (1), we call two
curves (α_1,[a_1,b_1]) and (α_2,[a_2,b_2]) on
Sequivalent and write
(α_1,[a_1,b_1])∼(α_2,[a_2,b_2])
if there is an increasing homeomorphism τ:[a_1,b_1]→
a_2,b_2] such that α_2∘τ=α_1.
(3) 𝒮^∗ may contain more than one
equivalence class. When q=3, E_3={0,1,∞}, 𝒮^∗
consists of two equivalence classes, and when E_3={0,r,∞},
𝒮^∗ contains only one equivalence class when d(
0,r) <π/2, by Theorem <ref> and the definition of 𝒮
^∗.
Let 𝐅^∗=𝐅^∗(
E_q) be the subspace of 𝐅 such that for each
Σ∈𝐅^∗( E_q) , n(Σ
,E_q)=0. Define
h_0=h_0(E_q):=sup_Σ∈𝐅^∗( E_q)
H(Σ).
then we have
h_0≤ H_0.
As a consequence of Theorem <ref>–<ref>, we can easily prove the
following at the end of this paper.
If q=3, then
H_0=H_0(E_3)=h_0(E_3)=h_0;
and if q>3, then there exists a set E_q^' of q distinct points
such that
H_0(E_q^')>h_0(E_q^').
Let E_3={0,1,∞}. Then by Theorem
<ref>, Σ_0∈𝒮^∗( E_3)
=𝒮^∗( { 0,1,∞}) is the
closed lens 𝔇(0,1,θ_0,θ_0) or
𝔇(1,∞,θ_0,θ_0) for some
θ_0∈(0,π/2]. Then
H_0=h_0=(A(𝔇(0,1,θ_0,θ_0))+4π)/L(∂𝔇(0,1,θ_0,θ_0))=max_θ∈(0,π/2] (A(𝔇(0,1,θ,θ))+4π)/L(∂𝔇(0,1,θ,θ))
and Theorem C is recovered with more: The inequality (<ref>) of
<cit.> still holds and is sharp without the assumption (<ref>).
(A) We always assume the triangulation {U_j}={U_i}_i=1^m_0 of U in
Definition <ref> satisfy the rules (1)–(3) of triangulation:
(1) There is a closed triangular domain K on
ℂ, such that for each j, there exists a homeomorphism
φ_jfrom U_j to K, the inverse of vertices and
edges of K are called vertices and edges of U_j. We will
write 𝐔={U_i,e_i1,e_i2,e_i3}_i=1^m_0 and
also call 𝐔 a triangulation of U, the
vertices and edges of U_j are also called vertices and edges of
𝐔, respectively, and each U_i is called a
closed topological triangular domain (CTTD), or a face of
𝐔.
(2) For each pair of edges e_i and e_j of 𝐔 with e_i≠
e_j, e_i∩ e_j is empty, or a singleton which is a common
vertex of e_i and e_j.
(3) For each pair of faces U_i and U_j of
𝐔with U_i≠U_j and U_i∩U_j≠∅, U_i∩U_j is either a common vertex, or a common edge, of
U_i and U_j.
Then the surface Σ=( f,U) can be
identified as the disjoint collection
𝐓={T_i,l_i1,l_i2,l_i3}_i=1^m_0={f(U_i
),f(e_i1),f(e_i2),f(e_i3)}_i=1^m_0
of CTTDs and their edges on S with adjacency condition, the equivalent
relation 𝐑 of the collection of edges
𝐄={l_ij,i=1,…,m,j=1,2,3}
of 𝐓:l_ij∼ l_i_1j_1 iff the two CTTDs U_i
and U_i_1 share an common edge e_ij=e_i_1j_1. Thus
f( e_ij) =l_ij=l_i_1j_1=f( e_i_1j_1
)(as sets). Note that this relation depends on the order of
U_i and e_ij. When such identification is established, we can get rid
of U_j and f. That is, we can understand the surface Σ
as the collection 𝐓={T_j,l_j1,l_j2,l_j3}_j=1^m_0
with a relation 𝐑={( i,j,i_1,j_1) }such that l_ij∼ l_i_1j_1 iff ( i,j,i_1,j_1)
∈𝐑. Then, for a pair ( i,j) ∈{
1,2,…,m_0}×{1,2,3}, l_ij is on the boundary of
Σ iff there is no pair ( i_1,j_1) such that
( i,j,i_1,j_1) ∈𝐑, and we can understand
Σ as Σ=(𝐓,𝐑), and regard (
𝐓,𝐑) the Riemann surface of Σ, with a finite
number of branch points. In this way we can understand the relation
Σ_1∼Σ_2 as this: the Riemann surface of Σ_1 and
Σ_2 are the same (when we order U_j and its edges properly).
(B) If Σ=( f,U) is a surface, where U is a
Jordan domain, we should understand the whole boundary ∂Σas a
simple curve on the surface. In fact, we can define the positive distance
d_f( ·,·) in Definition <ref>. But for
simplicity, sometimes we state that ∂Σ contains a proper
closed arc γ( p_1,p_1). This only means
∂ U has a partition ∂ U=α( a_1,a_2)
+β( a_2,a_1) such that a_1≠ a_2 but f(
a_1)=f(a_2) =p_1. That is to say when we project the curve
∂Σ to S, the arc ( f,α_1) is projected
onto the closed path γ( p_1,p_1) .
(C) For convenience, we make the agreement: For two faces T_j=(
f,U_j) of the partition 𝐓 of Σ,j=1,2,
they can be regarded to be closed subdomains of both Σ and S.
Regarded to be in Σ, T_1 and T_2 can not intersect when
U_1∩U_2=∅. But when they are regarded
sets on S, T_1 and T_2 intersect when f( U_1
) ∩ f(U_2)≠∅. For a set K⊂ S, when
we write K⊂ T_1∩ T_2,we only regard T_j as the set
f(U_j) on S for j=1,2, say, K⊂ T_1∩ T_2 iff
K⊂ f(U_1)∩ f(U_2).
§ ELEMENTARY PROPERTIES OF SURFACES OF ℱ
By Stoilow's theorem, every surface (
f,U) is equivalent to a surface whose defining function is
holomorphic on the domain of Definition.
(i). (Stoilow's Theorem <cit.>
pp.120–121) Let U be a domain on ℂ and let
f:U→ S be an open, continuous and discrete mapping. Then there
exist a domain V on ℂ and a homeomorphism
h:V→ U, such that f∘ h:V→ S is a holomorphic mapping.
(ii). Let Σ=(f,U) be a surface
where U is a domain on ℂ. Then there exists a domain
V on ℂ and an OPH h:V→U such that f∘ h:V→ S is a holomorphic mapping.
(iii) Let Σ=(f,U)∈𝐅.
Then there exists an OPH φ:U→U such
that f∘φ is holomorphic on U.
What f is discrete means that f^-1(w)∩ K is finite for any compact
subset K of U.
Let Σ=(f,U) be a
surface where U is a domain on ℂ. Then f:U→ S is the restriction of an OPCOFOM g defined in a
neighborhood U_1 of U, and thus by Stoilow's theorem, there
exists a domain V_1 on ℂ and an OPH h:V_1
→ U_1 such that g∘ h is holomorphic on V_1 and then for
V=h^-1(U), f∘ h is holomorphic on
V, and thus (ii) holds.
Continue the above discussion and assume U is a
Jordan domain. Then V is also a Jordan domain and by Riemann mapping theorem
there exists a conformal mapping h_1 from U onto V and by
Caratheodory's extension theorem h_1 can be extended to be homeomorphic
from U onto V, and thus the extension of h∘
h_1 is the desired mapping φ in (iii).
Since equivalent surfaces have the
same area, boundary length and Ahlfors error term (<ref>), when we study
a surface Σ=(f,U) in 𝐅 in which U is not
specifically given, we can always assume that f is holomorphic on
U.
We shall denote by D(a,δ) the disk on S with center a and spherical
radius δ. Then Δ⊂ S is the disk D( 0,π/2)
.
Let Σ=(
f,U) ∈ℱ and let p∈∂ U. If f is
injective near p, then f is homeomorphic in a closed Jordan neighborhood
N_p of p in U, and then f(N_p) is a closed
Jordan domain on S whose boundary near f(p) is an SCC arc, or two SCC arcs
joint at f(p), and thus the interior angle of f(N_p) at
f(p) is well defined, called the interior angle of Σ at p and
denoted by ∠( Σ,p) .
In general, we can draw some paths {β_j}_j=1^k in U
with ∪_j=1^kβ_j\{p}⊂ U and β_j∩β_i={p}if i≠ j, such that each ( f,β_j)
is a simple line segment on S, ∪_j=1^kβ_j divides a closed
Jordan neighborhood N_p of p in U into k+1 closed Jordan
domains U_jwith p∈U_j,j=1,…,k+1, and
U_i∩ U_j=∅if i≠ j, and f restricted to
U_j is a homeomorphism with ( f,U_j)
∈ℱ for each j. Then the interior angle of Σ at p is
defined by
∠( Σ,p) =∑_j=1^k+1∠( (
f,U_j) ,p) .
The existences of {β_j} _j=1^k will be given later
in Corollary <ref> (v).
This definition is independent of coordinate transform of U, and
thus one can understood it with the assumption that f is holomorphic on
U. The following result is a consequence of the previous theorem.
Let (f,U)be a surface, U be
a domain on ℂ bounded by a finite number of Jordan curves and
( f,∂ U) is consisted of a finite number of simple
circular arcs and let q∈ f(U). Then, for sufficiently small
disk D(q,δ) on Swith δ<π/2, f^-1(D(q,δ
))∩U is a finite union of disjoint sets {U_j
}_1^n in U, where each U_j is a Jordan domain in U,
such that for each j, U_j∩ f^-1(q) contains exactly one
point x_j and (A) or (B) holds:
(A) x_j∈ U_j⊂U_j⊂ U and f:U_j
→D(q,δ) is a BCCM such that x_j is the only
possible branch point.
(B) x_j∈∂ U, f is locally homeomorphic on U_j
\{x_j}, and when ( f,U) ∈ℱ, the following conclusions (B1)–(B3) hold:
(B1) The Jordan curve ∂ U_j has a partition α_1(
p_1,x_j) +α_2( x_j,p_2) +α_3(
p_2,p_1) such that α_1+α_2=( ∂
U) ∩∂ U_j is an arc of ∂ U, α_3^∘⊂ U, c_j=( f,α_j) is an SCC arc for j=1,2,
and c_3=( f,α_3) is a locally SCC[The
condition δ<π/2 makes ∂ D( q,δ) strictly
convex, and it is possible that ( f,α_3^∘) may
describes ∂ D( q,δ) more than one round, and in
this case ( f,α_3^∘) is just locally SCC.] arc in
∂ D( q,δ) from q_2=f( p_2) to
q_1=f( p_1). Moreover, f is homeomorphic in a
neighborhood of α_j\{x_j}, for j=1,2, in U and
∂( f,U_j) =( f,∂ U_j)
=c_1+c_2+c_3.
(B2) The interior angle of ( f,U_j) at p_1
and p_2 are both contained in [7π/16,9π/16].
(B3) There exists a rotation ψ of S with ψ(q)=0 such that the
following conclusion (B3.1) or (B3.2) holds:
(B3.1) q_1=q_2,( f,α_1) =q_1q
=q_2q=-( f,α_2) , say, ( f,α
_1+α_2) =q_1q+qq_1, and (
ψ∘ f,U_j) is equivalent to the
surface[Here δ z^ω_j is regarded as the mapping
z↦δ z^ω_j∈ S,z∈Δ^+, via the
stereographic projection P.] ( δ z^ω_j:Δ^+) on S so that
( δ z^ω_j,[-1,1]) =a_δ
,0+0,a_δ,
where ω_j is an even positive integer and a_δ∈(
0,1) with d( 0,a_δ) =δ.
(B3.2) q_1≠ q_2, as sets c_1∩ c_2={q}, and (
ψ∘ f,U_j) is equivalent to the the surface
( F,Δ^+∪𝔇_1^'
∪𝔇_2^') so that the following holds.
(B3.2.1) 𝔇_1^'=𝔇^'(
-1,0,θ_1)and 𝔇_2^'=𝔇^'( 0,1,θ_2), such that
for each j=1,2, θ_j∈[0,π/4]. Moreover, θ_1=0
(or θ_2=0) when c_1=q_1q (or c_2=qq_2), and in this case 𝔇_1^'=∅ (or
𝔇_2^'=∅). See Definition <ref> for the
notation 𝔇^'( ·,·) .
(B3.2.2) ( F,Δ^+) is the surface T=(
δ z^ω_j,Δ^+), where ω_jis
a positive number which is not an even number and even may not be an integer,
( F,𝔇_1^') is the lune
ψ( 𝔇^'( q_1q
,c_1) ) and ( F,𝔇_2^'
) is the lune ψ( 𝔇^'(
qq_2,c_2) ) . That is to say, (
f,U_j) is obtained by sewing the sector
ψ^-1( T) with center angle[This angle maybe
larger than 2π as the sector ( z^3,Δ^+)
.] ω_jπ, and the closed lunes 𝔇^'( q_1q,c_1) and 𝔇
^'( qq_2,c_2) along q_1q
and qq_2 respectively.
(A) follows from Stoilow's theorem directly when x_j∈ U. (B) follows
from (A) and the assumption ( f,U) ∈ℱ,
by considering the extension of f which is an OPCOFOM in a neighborhood of
x_j in ℂ.
We list more elementary conclusions
deduced from the previous lemma directly and more notations. Let
Σ=(f,U) with Σ∈ℱ, q∈ f(U),
δ, x_j, U_j and α_1+α_2 be given as in Lemma
<ref>.
(A) If for some j, x_j∈Δ, then by Lemma <ref> (A), f is a
BCCM in the neighborhood U_j of x_j in Δ, and the order
v_f(x_j) of f at x_j is well defined, which is a positive integer,
and f is a v_f(x_j)-to-1 CCM on U_j\{x_j}.
(B) If for some j,x_jis contained in ∂Δ, then, using
notations in Lemma <ref> (B), there are two possibilities:
(B1) q_1=q_2, the interior angle of Σ at x_j equals
ω_jπ, and the order v_f( x_j) is defined to be
ω_j/2, which is a positive integer.
(B2) q_1≠ q_2,c_1+c_2 is a simple arc from q_1 to q, and
then to q_2. In this case the interior angle of Σ at x_j equals
ω_jπ+φ_1+φ_2, where φ_1 and φ_2
are the interior angles of 𝔇^'( q_1
q,c_1) and 𝔇^'( qq_2
,c_2) at the cusps, and we defined the order of f at x_j to
be the least integer v_f( x_j) with v_f(
x_j) ≥( ω_jπ+φ_1+φ_2) /2π. Since ω_jπ+φ_1+φ_2≥ω_jπ>0, we have
v_f( x_j) ≥1 and f is injective on U_j
\{ c_1+c_2} iff v_f( x_j) =1.
This is also easy to see by Corollary <ref> (v).
(C) The number v_f(x_j) can be used to count path lifts with the same
initial point x_j: when x_j∈Δ, any sufficiently short line
segment on S starting from q=f(x_j)has exactly v_f(
x_j) f-lifts starting from x_j and disjoint in Δ\{x_j}; and when x_j∈∂Δ, for each arc β
of the two sufficiently short arcs of ∂Δ with initial point
x_j, (f,β) is simple and has exactly v_f(x_j)-1 f-lifts
{β_j} _j=1^v_f( x_j) -1 with the
same initial point x_j, β_j\{x_j}⊂Δ for
each j and they are disjoint in Δ. This is also easy to see by
Corollary <ref> (v).
(D) A point x∈U is called a branch point of f (or
Σ) if v_f(x)>1, or otherwise called a regular point if
v_f( x) =1. We denote by C_f the set of all branch points
of f, and CV_f the set of all branch values of f. For a set
A⊂U, we denote by C_f( A) =C_f∩ A the
set of branch points of f located in A, and by CV_f(K)=CV_f∩
K the set of branch values of f located in K⊂ S. We will
write
C_f^∗( A) =C_f( A) \ f^-1
(E_q) and C_f^∗=C_f\ f^-1(E_q)=C_f(
U) \ f^-1(E_q).
(E) For each x∈U, b_f( x) =v_f(
x) -1is called the branch number of f at x, and for a set
A⊂U we write B_f( A) =∑_x∈ A
b_f( x) . Then we have b_f( x) ≠0 iff
C_f( x) ={x}, and B_f( A) =∑_x∈
C_f( A) b_f(x). We also define
B_f^∗( A) =B_f( A\ f^-1(E_q))
.
Then B_f^∗( A) ≥0, equality holding iff C_f^∗( A) =∅. When A=U is the domain of
definition of f, we write
B_f=B_f( U) and B_f^∗
=B_f^∗( U) .
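For example, for the surface ( f,Δ) with f(z)=z^2, the only branch point of f is z=0, with v_f(0)=2 and b_f(0)=1, so that B_f=1; if moreover 0∉ E_q, then B_f^∗=B_f=1.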
Let Σ=( f,U) be
a surface in ℱ, let x∈U, and let V be a
(relatively)[This means that V=U∩ V^∗, where
V^∗ is an open set on ℂ. Thus when x∈∂ U, V
contains the neighborhood V^∗∩∂ U of x in ∂ U.]
open subset of U. The pair ( x,V) is called a
(relatively) disk of Σ with center x and radius δ, if xand
V satisfy all conclusions in Lemma <ref> (A) or (B) as
x_j and U_jand δ.
If x∈ U and ( x,V) is a disk of
Σ=( f,U) with radius δ, then V is
open in U and V⊂ U.
If x∈∂ U and ( x,V) is a disk of Σ with
radius δ, then ∂ V has a partition α_1(
x^',x) +α_2( x,x^'') +α
_3( x^'',x^') such that α_1
+α_2is the old boundary of V, α_3 is the new
boundary of V, V=V^∘∪( α_1+α_2)
^∘, c_1=( f,α_1) and c_2=(
f,α_2) are SCC arcs, c_3=( f,α_3) is
a locally SCC arc, which may be more than a circle and which is
contained in the circle d( f(x),w) =δ, and the
interior angles of ( f,V) at x^' and
x^'' are contained in [7π/16,9π/16]. If
x is fixed and δ tends to 0, then x^' and x^'' tend to x and the interior angles of ( f,V) at x^' and x^'' both tend to π/2. The
paths -α_1 and α_2 are called boundary radii of the
disk V.
Now we can state a direct Corollary to Lemma <ref>.
Let Σ=( f,U) ∈ℱ and let ( x_1,U_1) be a disk
of Σ with radius δ_1. Then, the following hold.
(i) f is locally homeomorphic on U_1\{x_1}; and
if ( x_1,U_1^') is another disk of Σ with
radius δ_1^'>δ_1, then U_1⊂
U_1^', whether x_1is in ∂ U or U.
(ii) If f is homeomorphic in some neighborhood of x_1 in U
(which may be arbitrarily small), or if f is locally homeomorphic on
U, then the disk ( x_1,U_1) is a
one sheeted closed domain of Σ, say, f restricted to U_1 is a homeomorphism onto f( U_1) .
(iii) For each x_2∈ U_1\{x_1}, any closed disk (
x_2,U_2) of Σ is a one sheeted closed domain of
Σ, moreover, U_2⊂ U_1 when the radius of
( x_2,U_2) is smaller than δ-d( f(x_1
),f(x_2)) .
(iv) If x_1∈∂ U, f is regular at x_1 and (
f,∂ U) is circular near x_1, then ( f,U_1) is a convex and one sheeted closed domain of
Σ, which is in fact the closed lens 𝔇(
I,c_1,c_1^'), where c_1 and c_1^' are
circular subarcs of ∂Σ and the circle ∂ D(
f(x_1),δ_1) , I is the common chord, and the three paths
c_1,-c_1^',I have the same initial point. Moreover, if Σ
is regular at x_1 and ∂Σ is straight near x_1, then
f(U_1)=𝔇^'( -I,c_1^') =𝔇^'( -c_1,c_1^')is "half" of the disk D(f(x_1),δ_1) on the
left hand side of "diameter" c_1 (see Definition <ref> for
lenses and lunes).
(v) For any x∈U_1, there exists a path I(
x_1,x) in U_1 from x_1 to x such that
I( x_1,x) is the unique f-lift of f(x_1
)f(x). That is to say, ( f,U_1) can be foliated
by the family of straight line segments {( f,I( x_1,x)
) :x∈∂ U_1} and for each pair { I(
x_1,x) ,I( x_1,y) } of the family {
I( x_1,x) :x∈∂ U_j} with x≠ y, one
has I( x_1,x) ∩ I( x_1,y) ={x_1}.
Let Σ=(f,U)∈ℱ and
let x∈∂ U. Then ∠(Σ,x)>0.
This is clear by Lemma <ref> (B) and Remark <ref> (B).
If the assumption Σ=(f,U)∈ℱ is not satisfied,
the conclusion may fail. For example, for the convex closed half disk
Δ^+ and the disk B={z∈ℂ:| z-1/2| <1} in ℂ, T=Δ^+\ B
can be regarded as a surface on S (via the sterographic projection), whose
interior angle at the origin equals 0, and it is clear that T
∉ℱ. In fact the part of ∂ T lying on ∂ B is
not convex on S.
Lemma <ref> also directly implies the following lemma.
Let (f,U)∈ℱ. Then the
following hold.
(A) For each p∈U,f restricted to some neighborhood of p in
U is a homeomorphism if one of the following alternatives holds.
(A1) p∈ U and p is a regular point of f.
(A2) p∈∂ U, p is a regular point of f and (f,∂ U) is
simple in a neighborhood of p on ∂ U.
(B) For any SCC arc ( f,α) of ∂Σ=(
f,∂ U) , f restricted to a neighborhood of α^∘
in U is homeomorphic if and only if f has no branch point on
α^∘.
(A) follows from the definition of ℱ. (B) follows from (A). The
hypothesis in (A2) that (f,∂ U) is simple is necessary:
f(z)=z^2,z∈Δ^+, is regular at z=0 but not injective
in any neighborhood of 0.
(<cit.> p. 32–35) Let Σ=(f,Δ)∈ℱ and let β=β( q_1,q_2) be
a path on S from q_1 to q_2. Assume that α=α(
p_1,p) is a path in Δ from p_1 to p which
is an f-lift of a subarc β( q_1,q) of β from
q_1 to q, with α\{p_1}⊂Δ and f(
p_1) =q_1. Then α can be extended to an f-lift
α^'=α( p_1,p^') of a longer subarc
of β with α^'∘⊂Δ, such that either
p^'∈∂Δ, or p^'∈Δ and α^'
is an f-lift of the whole path β.
The following lemma is obvious.
Any two distinct great circles on S intersect at exactly two
points, which are antipodal points on S.
The following result follows from Definition <ref>, which is
essentially Lemma 5.2 of <cit.>.
Let (f,Δ)∈𝐅 and let D be a
Jordan domain on S such that f^-1 has a univalent
branch[Univalent branch for the inverse of an OPCOFOM always means an
OPH in this paper.] g defined on D. Then g can be extended to a
univalent branch of f^-1 defined on D.
The following result follows from the Argument principle.
Let D_1 and D_2 be Jordan domains on ℂ or S
and let f:D_1→D_2 be a mapping such that
f:D_1→ f(D_1) is a homeomorphism. If
f(∂ D_1)⊂∂ D_2, then f(D_1
)=D_2.
The following result is a generalization of the existence of lifts of curves
for a CCM.
Let U be a domain on
S enclosed by a finite number of Jordan curves, f:U→
Sbe a finite-to-one mapping which is locally homeomorphic on
U, Γ_n:[0,1]→ S be a sequence of paths on S
which converges to a path Γ_0:[0,1]→ S uniformly, and let
a∈U. If for each n,Γ_n has an f-lift I_n
:[0,1]→Ufrom a, say, I_n is a path with
f( I_n( s) ) =Γ_n( s)
for all s∈[0,1],
and I_n( 0) =a. Then I_n( s) uniformly
converges to a path I_0( s) ,s∈[0,1], in
U, such that I_0 is an f-lift of Γ_0 with
I_0( a) =a.
We first show the following.
For any s_0∈[0,1], if lim_n→∞I_n( s_0) = a_0, then s_0 has a
neighborhood N_s_0( ε) =[s_0-ε
,s_0+ε]∩0,1] in [0,1], such that I_n(s) converges
to an arc I_0( s) uniformly on N_s_0 with I_0
(s_0)=a_0 and f(I_0( s) )=Γ_0(s) for all s on
N_s_0.
Let w_0=Γ_0( s_0). Then w_0=f(
a_0) and f^-1(w_0)={a_j}_j=0^m is a finite set. Since
f is locally homeomorphic, each a_j has a connected and (relatively)
open neighborhood U_j in U such that U_i∩
U_j=∅ when i≠ j and f:U_j→
f(U_j) is a homeomorphism for each j=0,1,2,…,m. Then we have
w_0 is outside the compact subset f( U\∪_j=0^mU_j)of S, say, there
exists a disk D( w_0,δ) on S
such that f^-1(D( w_0,δ) )⊂∪_j=0^mU_j.
s_0 has a connected neighborhood N_s_0 in [0,1] so that
Γ_0( N_s_0) ⊂ D( w_0,δ/2)
and thus Γ_n( N_s_0) ⊂ D( w_0
,δ) for all n>n_0 for some n_0>0. Then I_n(
N_s_0) ⊂∪_j=0^mU_j and, since I_n( s_0) → a_0∈ U_0 and I_n( N_s_0) is
connected, we have I_n( N_s_0) ⊂ U_0 for all
n>n_0 (enlarging n_0 if necessary). Therefore, Γ_n( N_s_0) ⊂ f(U_0)
for n>n_0. It is clear that Γ_0( N_s_0)
⊂f(U_0) since Γ_n(N_s_0) converges to
Γ_0( N_s_0) . Since f:U_0→
f(U_0) is homeomorphic, we conclude that I_n( s) converges to the path I_0( s) =f^-1(Γ_0(
s) )∩U_0 uniformly for s∈ N_s_0with
I_0(s_0)=a_0. It is obvious that I_0( s) ,s∈
N_s_0, is an f-lift of Γ_0( s) ,s∈ N_s_0
,and Claim <ref> is proved.
Let A be the set of t∈[0,1] such that lim_n→∞I_n( t) exists. Then A is an open subset of [0,1] by
Claim <ref>. Let B=[0,1]\ A. We show that B is also an
open subset of [0,1]. Let s_1∈ B. Since U is compact,
I_n( s_1) has two subsequences I_n_k^j(
s_1) such that I_n_k^j( s_1) →
a_1j( k→∞) with j=1,2 and a_11≠
a_12. Then Claim <ref> applies to Γ_n_k^j and
Γ_0, and s_1 has a neighborhood N_s_1 in [0,1] such that
I_n_k^j( s) converges uniformly to a path I_0j(
s) ,s∈ N_s_1, and I_0j is an f-lift of Γ
_0|_N_S_1,j=1,2. Then both I_0j( s) are continuous
with I_01(s_1)=a_11≠ a_12=I_02(s_1), and then I_01∩
I_02=∅ when N_s_1 is chosen small enough. This implies that
I_n( s) cannot converge as n→∞ for each
s∈ N_s_1 and so B is open.
Since 0∈ A, we have A=[0,1] and B=∅. We have proved that
I_n converges to a path I_0 uniformly on [0,1], and it is clear that
I_0 is the f-lift of Γ_0.
§ SEWING TWO SURFACES ALONG A COMMON BOUNDARY ARC
We now introduce the method to sew two surfaces sharing a common boundary arc.
We let H^+ and H^- be the upper and lower open half planes
of ℂ. Then H^+ and H^- can be regarded as open hemispheres
on S, and \overline{H^+} and \overline{H^-} can be
regarded as the closures of H^+ and H^- on S. For a closed curve γ
in ℂ, we write γ^±=\overline{H^±}∩γ. But
recall that Δ^± always denotes H^±∩Δ,
not \overline{H^±}∩Δ. Then ( ∂Δ)
^±=( ∂Δ) ∩\overline{H^±}={ z=e^±
iθ:θ∈[0,π]}.
A surface Σ=( f,Δ) can be
cut into two subsurfaces Σ_1=( f,Δ^+)
and Σ_2=( f,Δ^-) by the diameter
[-1,1] of Δ. Conversely, we can recover Σ by
sewing Σ_1 and Σ_2 along ( f,[
-1,1] ) . The interval [ -1,1] in
Δ is called the suture line when we sew Σ_1
and Σ_2. This trivial observation can be generalized in Lemma
<ref> and Lemma <ref>.
Let Σ=( f,Δ)be a
surface and B={z∈ℂ:|z-1/2|<1/2}. Then ∂ B cut the
surface Σ into two subsurfaces Σ_1=( f,Δ\ B) and Σ_2=( f,B) .
Conversely, we can glue Σ_1 and Σ_2 to recover the surface
Σ. This trivial observation can be generalized in Corollary
<ref>.
For j=1,2, let Σ_j=(f_j,U_j) be a
surface and let α_j=α_j( x_j1,x_j2) be a
proper arc of ∂ U_j such that ( f_j,α_j)
is a simple arc with distinct endpoints. If
γ=(f_1,α_1)∼-(f_2,α_2),
then (f_1,U_1) and (f_2,U_2)can be
sewn along γ=( f_1,α_1) to
become a surface Σ_3=(f_3,Δ) with suture
line [-1,1], such that the following hold:
(i). There exist orientation-preserving homeomorphisms (OPHs)
h_1:U_1→Δ^+ and
h_2:U_2→Δ^-, called
identification mappings (IMs), such that
(h_1,α_1)∼-1,1]∼-( h_2,α_2)
=( h_2,-α_2) ,
f_1∘ h_1^-1( x) =f_2∘ h_2^-1(x),∀
x∈-1,1],
and
f_3(z)={[ f_1∘ h_1^-1(z),z∈Δ^+,; f_2∘ h_2^-1(z),z∈Δ^-\-1,1], ].
is a well defined OPCOFOM, and we have the equivalent relations
(f_3,Δ^+)∼(f_1,U_1),(f_3
,Δ^-)∼(f_2,U_2),
∂Σ_3=( f_3,( ∂Δ) ^+)
+( f_3,( ∂Δ) ^-) ∼(
f_1,( ∂ U_1) \α_1^∘)
+( f_2,( ∂ U_2) \α_2^∘) ,
and
(f_3,[-1,1])∼(f_1,α_1)∼(f_2,-α_2).
(ii)
L(∂Σ_3)=L(∂Σ_1)+L(∂Σ_2)-2L(γ),
A(Σ_3)=A( Σ_1) +A(Σ_2),
n( Σ_3) =n( Σ_1)
+n( Σ_2) +#( γ^∘∩
E_q) ,
and
R(Σ_3)=R(Σ_1)+R(Σ_2)-4π#( γ^∘∩
E_q) .
(iii). z∈ C_f_3( Δ\{-1,1}) if
and only if h_1^-1(z)∈ C_f_1( U_1\∂α_1) or h_2^-1(z)∈ C_f_2(
U_2\∂α_2). In particular, if
f_1(∂α_1)⊂ E_q, then f_2(∂α
_2)⊂ E_q and in addition
CV_f_3(S\ E_q)=CV_f_1(S\ E_q)∪ CV_f_2
(S\ E_q).
Recall that C_f( A)is the set of branch points of
flocated in A and CV_f( T) is the branch value of f
located in T (see Remark <ref> (D)), ∂α denotes the
endpoints of α and α^∘ denotes the interior of α
(see Definition <ref>). The condition (f_1,α_1)∼
(f_2,-α_2) is crucial (see Remark <ref> for the relation
∼) and note that (f_2,-α_2)=-( f_2,α_2)
and ( f_2,α_2) are the same path with opposite
direction. Two copies of the hemisphere H^+ on S cannot be
sewn along their common boundary section ∞,-1,0⊂ S to
become a surface, but H^+ and H^- can be
sewn along ∞,-1,0 to become the surface
( f_3,Δ) , where f_3|_Δ^± are homeomorphisms from Δ^± onto
H^±, and f_3 maps [-1,1] onto ∞,-1,0,
( ∂Δ) ^+=( ∂Δ)
∩H^+ onto 0,1,∞, and (
∂Δ) ^-=( ∂Δ) ∩H^- onto ∞,1,0.
The conclusion (i) in fact gives a routine how to sew Σ
_1 and Σ_2, which is inspired by Example <ref>. By
(<ref>), there exists an orientation-preserving homeomorphism (OPH)
[Note that -α_2 is the same path with opposite direction,
not the set {-y:y∈α_2}.] φ:α_1→
-α_2 such that
( f_1,α_1) =( f_2∘φ,α_1)
,
that is
f_2( φ(x)) ≡ f_1(x),∀ x∈α_1.
Let h_1:U_1→Δ^+ be any OPH such
that h_1(α_1)=[-1,1]. Then let h_2:U_2
→Δ^- be an OPH such that
h_2( y) ≡ h_1( φ^-1(y)) ,∀
y∈α_2.
In fact, h_2|_α_2 defined by (<ref>) is an OPH from
α_2 onto [1,-1] and can be extended to be an OPH h_2 from
U_2 onto Δ^-. The pair of h_1 and
h_2 are the desired mappings satisfying (i). Then (ii) is trivial to verify.
To prove (iii) we may assume that Σ_1 and Σ_2 are the
surfaces Σ_±=(f_±,Δ^±) such that f_±
agree on [-1,1], and then f_3 defined by f_± on Δ^±is an OPCOFOM. Then f_± are the restrictions of f_3
to Δ^±, and thus x∈( -1,1) is a branch
point of f_3, say x∈ C_f_3∩( -1,1) , iff x is a
branch point of f_+ or f_-, say x∈ C_f_1( -1,1)
∪ C_f_2( -1,1) . In consequence we have
C_f_3( Δ\{-1,1}) =C_f_1(
Δ^+\{-1,1}) ∪ C_f_2(
Δ^-\{-1,1}) ,
and then all conclusions of (iii) follow.
The suture line may not be straight: the sewn surface Σ_3=(
f_3,Δ) can be reparametrized as Σ^'=( f_3^',U) with f_3^'
=f_3∘ψ, where ψ is a OPH from U onto
Δ for some Jordan domain U, and for Σ^',
the suture line becomes h^-1([-1,1]).
In the previous lemma, the sewing process can be
understood as an abstract process via equivalent relations. Let U_j
,Σ_j=( f_j,U_j) ,α_j⊂∂
U_j satisfy all assumption of the previous lemma. Then we can define an
equivalent relation ∼ on the disjoint union U_1
⊔U_2. For any pair of points x and y in U_1⊔U_2, x∼ y if and only if one of the three
conditions holds: (1)x=y∈U_1, (2) x=y∈U_2,
(3) x∈α_1,y∈α_2 and f_1(x)=f_2(y). Since (
f_1,α_1) is simple and ( f_1,α_1)
∼-( f_2,α_2) , ( f_2,α_2)is also simple, and thus for each x∈α_1 that y∈α_2
with f_1(x)=f_2(y) is unique, and vice versa for each y∈α_2.
We write [x] the equivalent class of x:
x]={y∈U_1⊔U_2:y∼ x}.
Then for x∈( U_1\α_1^∘)
⊔( ( U_2\α_2^∘)
) , [x] contains only one point in U_1⊔U_2, and for x∈α_1,[x] contains two points in
U_1⊔U_2, say [x]={x,y} with
f_1(x)=f_2(y) and y∈α_2. The previous lemma show that the
quotient space Q=( U_1⊔U_2
) /∼, with the quotient topology, is topologically equivalent to
the unit disk Δ. Then the sewn surface (
f_3,Δ) can be identified as a representation of the
abstract space ( f̃,Q) , where f̃( [x]) =f_1(x) when x∈ x]∩U_1,
or f̃( [x]) =f_2( y) when y∈
x]∩U_2 ,∀ x]∈Q, is well defined.
In this abstract version, [α_1]=[α_2] is the suture line,
the quotient mapping x→ x] is the IM.
U_1 may intersects U_2, but for the union
U_1⊔U_2 a point in U_1 and a
point in U_2 are always regarded as distinct points. In fact we
may assume U_1 and U_2 are bounded with positive distance.
For the sewn surface ( f_3,Δ) in the lemma
there exists a homeomorphism h from Q=( U
_1⊔U_2) /∼ onto Δ, such that
h( [ α_1] ) =[-1,1], with
h([x])={[ h_1(x),x∈U_1,; h_2( x) ,x∈U_2. ].
The topological equivalence h:Q→Δ is also called
the IM, when we use Δ to represent Q.
All surfaces obtained by sewing surfaces along arcs can be interpreted in this
way. Though this way is abstract, it retains more information about the original
surfaces than the newly sewn surface Σ_3 in Lemma
<ref> does, and moreover, it is easier to state the process in this form than in the
version which involves the concrete IMs and suture lines.
Let γ be a simple arc on S with[Recall that
#∂γ=2means γ contains two distinct endpoints.]
#∂γ=2 (recall that ∂γ is the endpoints of
γ). Let S_γ=( f,U) be a surface in
𝐅 such that f:U→ S\γ is a homeomorphism
and ∂ S_γ=( f,∂ U) =γ-γ. Then
R(S_γ)=4π#γ^∘∩ E_q+4π#( ∂γ) ∩ E_q-8π≤4π#γ^∘∩ E_q,
equality holding if and only if ∂γ⊂ E_q.
Since #E_q=q,we have
n( S_γ) =#( S\γ) ∩ E_q=q-#γ∩ E_q=q-#γ^∘∩
E_q-#( ∂γ) ∩ E_q
=q-2-#γ^∘∩ E_q+2-#( ∂γ) ∩
E_q
Since A(S_γ)=4π, (<ref>) follows from
R(S_γ) =( q-2) A(S_γ)-4πn(
S_γ)
=4π#γ^∘∩ E_q-8π+4π#( ∂γ)
∩ E_q
≤4π#γ^∘∩ E_q,
with equality if and only if ∂γ⊂ E_q
.
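For example, if q=3 and γ is a simple arc with both endpoints in E_q and γ^∘∩ E_q=∅, then n( S_γ) =q-#γ∩ E_q=1 and R(S_γ)=( q-2) 4π-4π·1=0, in accordance with the equality case ∂γ⊂ E_q of the formula above, since #γ^∘∩ E_q=0.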
Here is a useful result directly from Lemma <ref>:
Let Σ=( f,Δ) be a surface
in 𝐅 and α be a simple arc in Δ such that
α^∘⊂Δ,∂α⊂∂Δ and
#∂α=2. If γ=( f,α) is simple, then
α cuts the disk Δ into two Jordan domains Δ_1 and
Δ_2 and for the surfaces Σ_j=( f,Δ_j
) ,j=1,2, we have
R(Σ)=R(Σ_1)+R(Σ_2)-4π#γ^∘∩ E_q.
Moreover, if ∂α⊂ E_q, then we have
CV_f( S\ E_q) =CV_f_1(S\ E_q)∪
CV_f_2( S\ E_q) .
In fact Σ can be recovered by gluing the surfaces (
f,Δ_1) and (f,Δ_2)along
γ, and so the desired equalities follows from (<ref>) and
(<ref>).
For two equivalent curves γ_j:α_j→ S, we will write
γ_1=γ_2 when there is no confusion. Then Lemma <ref>
can be restated as simplified but more useful versions (Lemmas <ref> and
<ref>).
Let Σ_1 and Σ_2 be two surfaces in 𝐅
such that
∂Σ_1=γ+Γ_1 and ∂Σ_2
=-γ+Γ_2,
where γ is a simple and proper arc of ∂Σ_1 which is not
closed, say, it has distinct endpoints. Then -γ is a proper subarc of
∂Σ_2, and Σ_1 and Σ_2 can be sewn along
γ, resulting a surface Σ_3=( f_3,Δ) such that the following hold.
(i).
∂Σ_3=Γ_1+Γ_2,
L(∂Σ_3)=L(∂Σ_1)+L(∂Σ_2)-2L(γ),
A(Σ_3)=A( Σ_1) +A(Σ_2),
n( Σ_3) =n( Σ_1)
+n( Σ_2) +#( γ^∘∩
E_q) ,
R(Σ_3)=R(Σ_1)+R(Σ_2)-4π#( γ^∘∩
E_q) .
(ii). If ∂γ⊂ E_q, then
CV_f_3(S\ E_q)=CV_f_1(S\ E_q)∪ CV_f_2
(S\ E_q).
(iii). If ∂Σ_2=-γ+γ and the interior of Σ_2
is the simple domain S\γ, then
∂Σ_3=∂Σ_1=γ+Γ_1,
and if in addition[note that by assumption #∂γ=2, say,
γ contains two distinct endpoints.] ∂γ⊂ E_q, then
the following two equalities hold:
R( Σ_3) =R( Σ_1) ,
and
CV_f_3(S\ E_q)=CV_f_1(S\ E_q).
All conclusions, except (<ref>) and (<ref>), follow from
Lemma <ref>. Assume ∂Σ_2=-γ+γ, the interior
of Σ_2 is the simple domain S\γ, and ∂γ⊂ E_q. Then we have CV_f_2(S\ E_q)=∅,
and thus (<ref>) follows from (ii). On the other hand, by Lemma
<ref> we have R(Σ_2)=4π#( γ^∘∩
E_q) , which with (<ref>) implies (<ref>).
Σ_3 in (iii) can also be obtained by continuously extending
Σ_1: let the two endpoints of γ be fixed and let γ
continue to move to the right hand side, and return to the initial position of
γ after scanning the whole sphere.
Let Σ=( f,Δ) be a surface
such that ∂Σ=γ+Γ where γ is a simple arc on S
with #∂γ=2. Let T be a closed Jordan domain on S with
∂ T=γ-γ^'. Then Σ and T^c=S\
T^∘ can be sewn along γ, resulting a surface Σ^'=( f^',Δ) such that ∂Σ^'=Γ+γ^'. Moreover, in the case ∂γ⊂ E_q we have
R( Σ) =R(Σ^')+R(T)-4π#E_q∩γ
^'∘,
and
CV_f^'( S\ E_q) =CV_f( S\
E_q) .
It is trivial to see that ∂ T^c=-γ+γ^', and then
by Lemma <ref> (i) we have ∂Σ^'=Γ
+γ^'.Assume ∂γ⊂ E_q. Then by Lemma
<ref> (ii) we have CV_f^'(S\ E_q)=CV_f
(S\ E_q)∪ CV_id(T_c\ E_q
)=CV_f(S\ E_q). The surface Σ^' can also be
obtained in this way: first sew Σ and the surface whose interior is
S\γ and boundary is γ-γ, obtaining a surface
Σ^''=( f^'',Δ) ,
and then cut from the new surface Σ^'' the domain
T^∘, together with the open boundary γ^∘, along
γ^', obtaining the surface Σ^'=( f^',Δ) . Then we have by Lemma <ref> (iii) that
R(Σ^'')=R(Σ). And by Lemma <ref> we have
R(Σ)=R(Σ^'')=R(Σ^')+R(T)-4π#E_q
∩γ^'∘.
Lemma <ref> can be extended to the case that α_2 is the whole
boundary ∂ U_2 but α_1 is a proper arc of ∂
U_1, as the reverse process of Example <ref>.
Let Σ_1=(f_1,U) be a surface in
𝐅 and assume ∂Σ_1 has a partition
∂Σ_1=γ+Γ
such that γ=( f,α) is a Jordan curve on S, where
α=α( x_1,x_2) is a proper arc of ∂ U.
Let T_γ be the closed Jordan domain enclosed by γ and let
T^c=S\ T_γ^∘. Then Σ_1 and T^c can be
sewn along γ becoming a surface Σ_2=(
f_2,Δ)such that for the disk B={z∈ℂ:|z-1/2|<1/2} the following holds:
(i) There exist an OPCOFOM h_1:U→Δ\ B and an OPH f_1^':B→ T^c such
that h_1:U\{x_1,x_2}→(
Δ\ B) \{1} is an OPH,
(h_1,∂ U\α^∘)=∂Δ,
(h_1,α)=-∂ B, h_1( x_1) =h_1(
x_2) =1,
f_1^'( y) =f_1∘ h_1^-1( y)
, y∈∂ B,
and
f_2( z) ={[ f_1∘ h_1^-1( z) ,z∈Δ\ B,; f_1^',z∈ B. ]. .
(ii) p∈Δ\∂ B is a branch point of f_2
if and only if
h_1^-1(p)∈ C_f_1( U\α) .
(iii) p∈( ∂ B) \{1} is a branch point of
f_2 if and only if h_1^-1(p)∈ C_f_1( α^∘) .
(iv) ∂Σ_2=Γ, when Γ is viewed as a closed curve on
S, and moreover
L(∂Σ_1)=L(∂Σ_2)+L(γ),
A(Σ_1)=A(Σ_2)+A(T_γ)-4π,
n( Σ_1) =n( Σ_2)
+n( T_γ) -q+χ_E_q(f( x_1)
),
where
χ_E_q( w) ={[ 0,w∉ E_q,; 1,w∈ E_q. ].
R(Σ_1)=R(Σ_2)+R(T_γ)+8π-4πχ_E_q(f_1
(x_1))≥ R(Σ_2)+R(T_γ)+4π,
equality holding if and only if
f( x_1) =f( x_2) ∈ E_q;
and when (<ref>) holds, we have
CV_f_1( S\ E_q) =CV_f_2( S\
E_q) .
(i) in fact gives the method for sewing Σ_1 and T^c
along γ; all of (i) is easy to see. Then (ii) and (iii) follow from (i).
The relation ∂Σ_2=Γ and (<ref>)are trivial to see.
(<ref>) follows from the equalities A(Σ_1)=A(Σ_2
)-A(T^c)and A(T^c)=A(S\ T_γ)=4π-A(T_γ).
It is clear that[Note that by definition of n,
n( T^c) =n( S\ T_γ) =#( S\ T_γ) ∩ E_q, say, any
point of E_q on the boundary is not counted for n.]
n( T^c) =q-#γ∩ E_q-n(
T_γ) , and that
n( Σ_1) =n( Σ_2)
-n( T^c) -#γ∩ E_q+χ_E_q(f(
x_1) ),
where χ_E_q(f( x_1) ) appears because after the
sewing, the endpoints of α are sewn to one boundary point of Δ.
In consequence we have (<ref>).
(<ref>) follows from (<ref>) and (<ref>). When (<ref>) holds,
(<ref>) follows from (ii) and (iii) directly. Therefore all conclusions
in (iv) hold.
Assume that the arc α=α( x_1,x_2) in Corollary
<ref> is the whole boundary ∂ U, say, x_2=x_1, and
γ=( f,∂ U) =∂Σ is still simple, say,
γ=∂Σ is a Jordan curve. Then we can sew Σ_1 and
the closed domain T_c=S\ T_γ^∘ so that the result
surface Σ_2 is a closed surface, say, Σ_2=(
f_2,S) , where S is the sphere. Then above equations in the proof
all hold when we replace χ_E_q(f(x_1)) by 0, even if f(x_1)∈
E_q. This can be explained in another way: we may choose the end point
x_2=x_1 of α not contained in f_1^-1(E_q), then
χ_E_q(f(x_1))=0 and thus the above argument works. This means that
we have
Let Σ_1=( f_1,Δ) be a
surface such that ∂Σ is a Jordan curve on S and let
T_∂Σ be the closed domain enclosed by ∂Σ on S.
Then for the closed surface Σ_2=( f_2,S) over S
which is obtained by sewing Σ and T^c=S\ T_∂Σ^∘ along ∂Σ, we have
R(Σ)=R(Σ_2)+R(T_∂Σ)+8π.
Moreover, we have CV_f_2=CV_f_1.
The equivalent class argument for two surface in Remark
<ref> can be used for the above method. We left this to the reader.
For a surface Σ=( f,Δ)
∈ℱ, we can cut Δ by the radius [0,1] to
obtain a surface Σ_1 whose interior is ( f,Δ\0,1]) and boundary is ∂Σ_1+(
f,[ 1,0] ) +( f,[0,1]) . Σ_1 can be
expressed as ( f_1,Δ^+) which split the
arc ( f,[0,1]) into two boundary arcs of Σ_1, where
f_1( z) =f( z^2). Conversely, we can sew
Σ_1 along ( f_1,[0,1]) to recover the
surface Σ. This trivial observation is generalized in Lemma <ref>.
Let Σ=(f,U
)be a surface in 𝐅 such that ∂ U has a partition
∂ U=α_1( x_1,x_2) +α_2(
x_2,x_3) +A( x_3,x_1) , ∂Σ has the
corresponding partition
∂Σ=γ-γ+Γ=( f,α_1) +(
f,α_2) +( f,A) ,
and γ=( f,α_1) =-( f,α_2) is a
simple arc with distinct endpoints. Then Σ can be sewn along γ
becoming a surface Σ_1 such that the following hold:
(i) If Γ is just a point, say ∂Σ=γ-γ, then
Σ_1 is a closed surface Σ_1=( f,S) such that
R(Σ)=R(Σ_1)+4π#γ∩ E_q,
and
CV_f=CV_f_1.
(ii) If Γ is not a point, then Σ_1 is a surface Σ
_1=( f_1,Δ) , such that
∂Σ_1=Γ
A(Σ_1)=A( Σ) ,L(∂Σ_1)=L(∂Σ)-2L(γ),
CV_f_1( Δ) =CV_f( Δ) or CV_f_1( Δ)
=CV_f( Δ) ∪{f( x_1) },
R(Σ)=R(Σ_1)+4π#( [ γ\{f(
x_1) }] ∩ E_q) ,
and if f(x_1)∈ E_q, then the following two equalities hold:
CV_f_1(S\ E_q)=CV_f( S\ E_q) ,
and
R(Σ)=R(Σ_1)+4π#γ∩ E_q-4π.
Assume Γ is a point. Then we may
regard the surface Σ as obtained from a closed surface Σ_1=( f_1,S) so that f_1 maps [0,1] homeomorphically
onto γ and that the interior of Σ is the open surface
f_1:S\0,1]→ S and the boundary of ∂Σ is ( f_1,[0,1]) +(f_1,[1,0])=γ-γ. From
this we have the result (i).
Assume Γ is not a point, say, α_1+α_2 is a proper arc
of ∂ U, then (f,U) can be sewn along
γ to become a surface Σ_1=(f_1,Δ): There
exists an OPCOFOM h:U→Δ such that
h|_U\α_1+α_2→Δ\0,1] is an OPH, ( h,α_1)
=-[0,1]=[1,0],( h,α_2) =[0,1],
f(h^-1( y) ∩α_1)=f(h^-1( y) ∩α_2),y∈0,1],
and
f_1( z) =f∘ h^-1( z) ,z∈Δ.
It is clear that Σ_1=( f_1,Δ) is a
well defined surface and (<ref>)–(<ref>) hold. Moreover we have
n( Σ) =n( Σ_1)
-#E_q∩[ γ\{f( x_1) }] ,
which with the first equality in (<ref>) implies (<ref>). Then, under
the special case f( x_1) ∈ E_q, (<ref>) and
(<ref>) are just the special cases of (<ref>) and (<ref>).
§ THE SPHERICAL ISOPERIMETRIC INEQUALITIES
In this section we list some results that follow from Bernstein's isoperimetric inequalities.
Let Γ be a
closed curve on S.
(i) (Bernstein <cit.>) If L=L(Γ)≤2π, Γ is simple and
contained in some hemisphere S_1 on S, then the area A of the Jordan
domain of S_1 bounded by Γ satisfies
A≤2π-√(4π^2-L^2),
with equality if and only if Γ is a circle.
(ii) (Radó <cit.>) If L(Γ)<2π and Γ is simple, then
Γ lies in some open hemisphere on S.
(iii) If L(Γ)<2π and Γ consists of finitely many circular arcs
on S, then Γ also lies in some open hemisphere on S.
(i) and (ii) are known, and (iii) can be proved by (i) and (ii) as in
<cit.>.
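For example, if Γ is a great circle, then L=2π and (i) gives A≤2π, with equality for the bounded hemisphere; and for small L the bound reads A≤ L^2/4π+O(L^4), which is the Euclidean isoperimetric inequality.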
Let f:Δ→ Sbe an OPCOFOM. If
f(Δ)⊂ S\ E_q and f(∂Δ) lies in some open
disk D in S\ E_q, then f(Δ)⊂ D.
By the assumption, f(∂Δ)∩( S\ D)
=∅. If f(Δ)∩( S\ D) ≠∅,
then by the argument principle we have f(Δ)⊃ S\ D⊃
E_q, contradicting the assumption f(Δ)⊂ S\ E_q.
The following lemma is Lemma 3.5 in <cit.>.
For each k=1,…,n, let F_k≠∅be a domain in a
hemisphere S_k on S which is enclosed by a finite number of Jordan
curves and let l_k=L(∂ F_k). If l=∑_k=1^nl_k<2πand
D_l is a disk in some hemisphere on S with L(∂ D_l)=l, then
A(F_1)+⋯+A(F_n)≤ A(D_l)=2π-√(4π^2-l^2),
with equality if and only if n=1 and F_1 is also a disk.
The following result is a consequence of Theorem 3.6 in <cit.>.
Let Σ=(f,U)∈ℱ. Assume f(U) is
contained in some open hemisphere and L(f,∂ U)<2π. Then
A(f,U)≤ A(T)<L(f,∂ U),
where T is a disk in some open hemisphere on S with L(∂
T)=L(f,∂ U).
But the author of <cit.> did not prove that the equality in (<ref>)
holds only if Σ is a simple disk with perimeter L(f,∂ U). We
will improve this and give a self-contained proof after we introduce an area formula.
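In any case, the second inequality A(T)<L(f,∂ U) above is elementary: if T is a disk in an open hemisphere with L(∂ T)=L<2π, then
A(T)=2π-\sqrt{4π^2-L^2}=\frac{L^2}{2π+\sqrt{4π^2-L^2}}<\frac{L^2}{2π}<L.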
For a rectifiable Jordan curve γ in ℂ which is oriented
anticlockwise, the spherical area A(γ) of the domain D_γ in
ℂ enclosed by γ can be defined by
A(γ)=∬_D_γ4dx∧ dy/(1+zz)^2
=2/i∫_γzdz/1+zz,
since the exterior differential of zdz/1+zz
equals
dzdz/1+zz=dz∧
dz/1+zz-zzdz∧ dz/(
1+zz) ^2=dz∧ dz/(
1+zz) ^2=2idx∧ dy/( 1+zz) ^2.
Note that the formula for A(γ) with γ⊂ℂ depends
on the orientation of γ, which is positive when γ is
anticlockwise, or is negative when γ is clockwise.
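For example, for the anticlockwise circle γ(t)=re^{it}, t∈[0,2π], we have \bar z\,dz=\frac{r^2}{z}dz on γ, so that
A(γ)=\frac{2}{i}\int_γ\frac{\bar z\,dz}{1+z\bar z}=\frac{2}{i}\cdot\frac{r^2}{1+r^2}\int_γ\frac{dz}{z}=\frac{4π r^2}{1+r^2},
which is the spherical area of the disk D_γ={|z|<r}, as one also obtains directly from the double integral over D_γ.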
If Σ=(f,U) is a surface over S, then the area may not be
determined by the boundary ∂Σ=( f,∂ U)
only. But if ∞∉∂Σ, then A(Σ) is determined by
∂Σ and the covering number d_f(∞) of f over ∞, where
d_f(∞) is defined to be lim_q_n→∞
#f^-1(q_n), in which q_n is a sequence of regular values of f
converging to ∞. That is to say, we have the following.
Let Σ=( f,U) be a surface in which U is a Jordan domain enclosed by a
finite number of piecewise smooth Jordan curves and ∂Σ=(
f,∂ U) is also piecewise smooth. Assume that ∞∉∂Σ. Then
A(Σ)=4π d_f(∞)+A(∂Σ),
where
A(∂Σ)=2/i∫_∂Σw
dw/1+|w|^2=2/i∫_∂ Uf(z)
df(z)/1+|f(z)|^2.
This essentially follows from the argument principle for meromorphic functions.
By Theorem <ref> (ii), we may assume f is meromorphic on U.
Let p_1,…,p_k be all poles, which are all distinct points of
f^-1(∞), of f with multiplicities m_1,…,m_k. Then
{p_j}⊂ U and ∑_j=1^km_j=d_f(∞). For each
j=1,…,k, let C_j,ε be small circles centered at p_j
with radius ε. Then we have
2/i∫_∂ U-C_1,ε-⋯-C_k,ε
f(z)f^'(z)dz/1+|f(z)|^2=
∬_U\∪_j=1^kD( p_j
,ε)
4| f^'(z)| ^2dx∧ dy/(
1+|f(z)|^2) ^2→ A(Σ) as ε→0.
For each j=1,…,k, we have by Argument principle that
lim_ε→02/i∫_-C_j,ε
f(z)f^'(z)dz/1+|f(z)|^2=lim_ε→02/i∫_-C_j,ε|
f(z)| ^2dlog f(z)/1+|f(z)|^2
=lim_ε→02/i∫_-C_j,εdlog
f(z)=4π m_j.
Then the conclusion follows from ∑ m_j=d_f(∞).
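As a simple check of the formula above, take Σ=( f,Δ) with f(z)=1/z. Then Σ covers the closed cap |w|≥1 exactly once, so A(Σ)=2π and d_f(∞)=1, while ∂Σ traverses the unit circle clockwise, so A(∂Σ)=-2π, and indeed 2π=4π·1-2π.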
Assume Σ_j=(
f_j,Δ) ,j=1,2, are two surfaces with ∂Σ_1=∂Σ_2 and ∂Σ_1 is piecewise smooth.
Then A(Σ_1)-A(Σ_2)=n_04π, where n_0 is an integer.
When ∞∉∂Σ_1, this follows from the previous lemma
directly. When ∞∈∂Σ_1 we can consider the surfaces
Σ_j^'=( φ∘ f_j,Δ)
,j=1,2, where φ is a rotation of S such that ∞∉∂Σ_j^',j=1,2. It is clear that A(Σ_j
)=A(Σ_j^'), and then we can apply the previous lemma to
Σ_j^',j=1,2, to obtain the conclusion.
Let Γ:∂Δ→ S\{∞} be a closed curve in ℂ
consisting of finitely many simple circular arcs. Assume that φ_t,
t∈[0,1], is a family of rotations on S which is continuous with
respect to t and satisfies φ_t( Γ) ∩{∞}=∅ for all t∈[0,1]. Then
A(φ_t∘Γ)=A(Γ).
For each a∈ℂ\Γ let
n( Γ,a) =1/2π i∫_Γdz/z-a.
Then for each component V of ℂ\Γ, n(
Γ,a) is a constant integer n_V=n( Γ,a)
for all a∈ V, and then we can write
n_V=n( Γ,V) .
We call n_V=n( Γ,V) the index of V with respect to
Γ. It is clear that the index of the unbounded component is zero. Let
V_1,…,V_m be all distinct bounded components of ℂ
\Γ. Then we have
A(Γ)=∑_j=1^mn_V_jA(V_j).
By the assumption it is clear that for each t∈[0,1], V_t,j
=φ_t( V_j) ,j=1,…,m, are all distinct bounded
components of ℂ\φ_t(Γ), and we have
n_V_t,j=n( φ_t( Γ) ,V_j)
=n_V_j,A(V_t,j)=A(V_j),t∈[0,1].
Thus
A(φ_t( Γ) )=∑_j=1^mn_V_t,jA(V_t,j
)=∑_j=1^mn_V_jA(V_j)=A(Γ),t∈[0,1].
For any piecewise smooth curve Γ=( f,∂Δ) with
∞∉Γ and any rotation φ of the sphere S with
∞∉φ( Γ) , A(Γ) need not equal
A(( φ∘ f,∂Δ) ). For example, for the
family of congruent circles C_x=∂ D( x,π/4)
in S\{∞} oriented anticlockwise, we have A(C_x)=A(C_0) when d( x,∞) >π/4, but A(C_x)=4π
-A(C_0) when d( x,∞) <π/4. In general we have
Let Γ be a piecewise smooth Jordan curve in
ℂ oriented anticlockwise. Then we have
(i) 0<A(Γ)=A(D)<4π, where D is the Jordan domain in ℂ
bounded by Γ.
(ii) For any rotation φ of S with ∞∉φ(
Γ),
A(φ(Γ))={[ A(Γ), if ∞∉φ( D); A(Γ)-4π, if ∞∈φ( D) ].
Γ divides ℂ into
two components D and D_∞, where D_∞ contains ∞. It
is clear that A( φ(D)) =A(D) and A(φ(
D_∞) )=A(D_∞). If φ( D) does not
contains ∞, then
n_φ(D)=n( φ( Γ) ,φ(
D) ) =n( Γ,D) =n_D=1
and
A(φ(Γ))=A(φ(D))=A(D)=A(Γ).
If ∞∈φ( D) , then ∞∉φ(
D_∞) and n( φ( Γ)
,φ( D_∞) ) =-1. Thus we have
A(φ( Γ) )=-A( φ( D_∞)
) =-A(D_∞)=A(D)-4π=A(Γ)-4π.
Now we prove the following lemma.
For j=0,1, let Γ_j=( f_j,∂Δ) be a closed curve on S consisted of a finite number of
simple circular arcs, such that Γ_j is contained in some open
hemisphere S_j on S with S_j⊂ℂand there exists a
rotation φ of S such that φ∘ f_0=f_1. Then
A(Γ_1)=A(Γ_0).
It is clear that the circle Γ_1=∂ D( 0,π/4) and Γ_2=-∂ D( ∞,π/4) are both in ℂ and oriented anticlockwise, but they do not satisfy
the hypothesis of the lemma, since Γ_2 can not be contained in any
open hemisphere which does not contain ∞.
By the assumption, there exists a family φ_t,t∈[0,1], of
rotations of S so that φ_t( z) is continuous for
( z,t) ∈ S×[0,1], φ_0=id and
φ_1=φ, and the family Γ_t( z)
=φ_t∘Γ_1(z),z∈∂Δ, never meets ∞. This
implies that A( Γ_t) is locally constant in t, and hence constant on
[0,1]. Thus we have A(Γ_0)=A(Γ_1).
Now we can enhance Lemma <ref> as follows.
Let Γ=(
f,∂Δ)be a closed curve consisted of finitely many
simple circular arcs such that ∂Σ is contained in some open
hemisphere S_1 in ℂ and L(Γ)<2π. Then the following hold.
(i) A(Γ)≤ A(T), where T is a disk in some hemisphere on S with
perimeter L, with equality holding if and only if Γ is a circle
oriented anticlockwise (say, Γ is a convex circle).
(ii) If, in addition, Γ is the boundary of a surface Σ=(
f,Δ) ∈ℱ with f(Δ)⊂
S_1⊂ℂ, then A(Σ)≤ A(T)<L(f,∂ U), with
equality holding if and only if Σ is a simple disk.
Since Γ⊂ S_1⊂ℂ, Γ is an anticlockwise
circle if and only if it is a convex circle on S.
If Γ is of the form Γ
=I_1-I_1, then A(Γ)=0 and (i) holds. If Γ is a Jordan curve
and D is the domain bounded by Γ in S_1⊂ℂ, then
D⊂ S_1 and |A(Γ)|=A(D) and (i) follows from Lemma <ref> (i).
In general, ∂Δ has a partition {{α
_ij} _j=1^k_i} _i=1^n of a finite number of
subarcs with α_ij^∘∩α_i_1j_1^∘=∅
when i≠ i_1 or j≠ j_1 such that ( f,α_ij)
are all simple circular arcs and for each i,α_i1+α_i2
+…+α_ik_i may not be a subarc of ∂Δ, but
γ_i=(f,α_i1)+( f,α_i2) +…+(
f,α_ik_i)
is either a Jordan curve on S or is of the form γ_i=I_i-I_i,
where I_i is a simple arc on S. Then we have
A(Γ)=∑_i=1^nA(γ_i),
and by Lemma <ref> (i)
| A(γ_i)| ={[ 0, if γ_i is of the form I_i-I_i; A(D_i), if γ_i
is a Jordan curve bounding a domain D_i in
S_1 ]. ≤ A(T_i),
where T_i is a disk in ℂ with perimeter L(γ_i), with
equality holding if and only if D_i is a disk. Then we have, by Lemma
<ref>,
A(Γ)≤∑_j=1^k| A(γ_j)|≤∑
_j=1^kA(T_j)≤ A(T),
where T is a disk in S_1 with L(∂ T)=L(Γ), and
A(Γ)=A(T) if and only if k=1 and Γ is a circle oriented
anticlockwise. (i) is proved.
Assume Γ=( f,∂Δ) is the boundary of a surface
Σ=( f,Δ)with f(Δ)⊂ S_1⊂ℂ. Then by Lemma <ref> and Lemma
<ref> (i), we have
A(Σ)=A(Γ)≤ A(T),
equality holding if and only if Σ is a simple disk. By Lemma
<ref>, we also have A(T)<L(f,∂Δ) and (ii) is proved.
Let ∂Δ=α( a_1,a_2) +β( a_2,a_1)
be a partition of ∂Δ, and for each j=1,2, let Γ
_j=( f_j,∂Δ) be a closed curve consisted of a
finite number of circular arcs such that Γ_j is contained in an open
hemisphere S_j⊂ℂ, and assume
( f_1,α) =( f_2,α) ,
L( f_1,β) =L( f_2,β) ,
L(f_1,β)+q_1q_2<2π,
and ( f_2,β) is an SCC arc, where q_j=f(a_j) for
j=1,2. Then
A( Γ_1) ≤ A(Γ_2),
equality holding if and only if (f_1,β)=( f_2,β)when q_1≠ q_2 or (f_1,β)is a convex circle when
q_1=q_2.
It is permitted that q_1=q_2, and in this case,
( f_j,α) and ( f_j,β) are both
closed curves for j=1,2, and the conclusion follows from Lemma <ref>
directly. Note that when q_1=q_2, L( f,β_1)
=L( f,β_2) <2π.
Any closed hemisphere on S can not contain
∞ if it contains 0 in its interior. Thus there exists a rotation
φ of S, such that φ( q_1) =0and
∞∉φ(S_j), and thus φ(
S_j) ⊂ℂ. Then ( φ∘
f_j,∂Δ) is contained in the open hemisphere
φ( S_j) ⊂φ( S_j)
⊂ℂ, and thus by Lemma <ref>, for j=1,2, we have
A( ( φ∘ f_j,∂Δ) ) =A(
(f_j,∂Δ)) =A(Γ_j).
So we may assume q_1=0.If c=( f_2,β) is the line
segment q_2q_1=q_2,0, then ( f_1
,β) is also the line segment q_2,0, and then
∂Γ_1∼∂Γ_2 and the conclusion of the lemma
holds with A( Γ_1) =A(Γ_2). Thus we may assume
that c is strictly convex.
Let C be the circle on S determined by c, and let D be the disk
enclosed by C. Then C is strictly convex and D is contained
in an open hemisphere S_C on S. Since q_1=f_1(a_1)=f_2
(a_1)=0, we may take S_C so that S_C⊂ℂ.
Define
Γ_1^'=( f_1^',∂Δ) =(
f_1^',α) +( f_1^',β)
=C\ c^∘+( f_1,β) .
Then L(Γ_1^')=L(C\ c)+L( f_1,β)
=L(C)<2π, which together with Lemma <ref> (ii), implies that
Γ_1^' is contained in some open hemisphere S_1^' on
S. Since 0∈Γ_1^', we have S_1^'
⊂ S\{∞}=ℂ. Therefore Lemma <ref> (i)
implies
A(Γ_1^')≤ A( D) =A(C),
equality holding if and only if Γ_1^' is an anticlockwise
circle in S_1^', say, ( f_1,β) =c if c≠ C
or Γ_1^'=( f_1,β) is an anticlockwise
circle in S_1^' if c=C. Then we have
A(Γ_1) =A(( f_1,α) +( f_1,β))
=A(( f_1,α) -C\ c^∘)+A(Γ
_1^')
≤ A(( f_1,α) -C\ c^∘)+A(C)
=A(( f_1,α) +c)=A( Γ_2) .
say A(Γ_1)≤ A(Γ_2), equality holding if and only if
(f_1,β) is the arc cor (f_1,β) is a convex circle.
Let Σ=(f,Δ)∈ℱ. Assume that the restriction I=-( f,[
-1,1] ) is a simple line segment Ion S, Σ is
contained in some open hemisphere on S and
L(f,∂Δ)≤2πsinL(I)/2.
Then the following hold.
(i) There exists θ_1∈(0,π) such that
L(f,∂Δ)=L(∂𝔇(I,θ_1,θ_1
)) and A(Σ)≤ A(𝔇(I,θ_1,θ_1)).
(ii) A(Σ)=A(𝔇(I,θ_1,θ_1)) if and only if
Σ is a simple closed domain congruent to 𝔇
(I,θ_1,θ_1).
Since Σ is contained in some open hemisphere on S, we have
L(I)<π, and the condition (<ref>) implies that L(f,∂Δ)
is not larger than the perimeter of the circle ∂ D(
0,L(I)/2) with diameter L(I), and so L(f,∂Δ)<2π. See Definition <ref> for the notation 𝔇
(I,θ,θ). Geometrically, it is clear that L(θ)=L(∂( 𝔇(I,θ,θ) ) is strictly increasing on
[0,π/2] as a continuous function of the angle θ, and thus
θ_1 is uniquely determined by L(θ_1)=L(f,∂Δ).
(i) is essentially implied in the proof of Theorem 3.8 in <cit.>, based on
Theorem 3.6 in <cit.>, though in Theorem 3.8 of <cit.>, I is
replaced by the special segment 1,0 and the condition (<ref>)
is replaced by
L(f,∂Δ)≤2πsind(0,1)/2=2πsinπ/4=√(2)π.
But if we use the same method of <cit.> based on Lemma <ref>, we
can prove (ii) by the way.
Recall that for a circular arc c,
k( c) ≥0 denotes the curvature.
Since f(Δ) is contained in a hemisphere on S,
L(f,∂Δ)=2L(I) implies f(Δ)=I, which
contradicts Σ∈ℱ. Thus 2L(I)<L(f,∂Δ)<2π.
We now use Lemma <ref> to give a new proof. Without loss of
generality, assume q_1=f(1)=0. Let q_2=f(-1), γ_1=(
f,( ∂Δ) ^+) ,γ_2=( f,(
∂Δ) ^-) , c_1 and c_1^' be the SCC
arcs from q_1 to q_2 with L(c_1)=L(f,( ∂Δ)
^+) and L(c_1^')=1/2L(f,∂Δ), and let c_2
and c_2^' be the SCC arcs from q_2 to q_1 with
L(c_2)=L( f,( ∂Δ) ^-) and
L(c_2^')=1/2L( f,∂Δ) . Then we
have
L(c_1+c_2)=L(c_1^'+c_2^')=L(γ_1+c_2)=L(
c_1+γ_2) =L( f,∂Δ) <2π,
and thus Σ and the close curves c_1+c_2,c_1^'
+c_2^',γ_1+c_2,c_1+γ_2 are contained in five open
hemispheres on S in ℂ=S\{∞}
respectively. By Lemmas <ref> and <ref> we have
A(Σ)=A(γ_1+γ_2)≤ A(c_1+γ_2)≤ A(c_1
+c_2),
with the second equality holding iff γ_1=c_1 and the last equality
holding iff γ_2=c_2.
We will show that
A(c_1+c_2)≤ A(c_1^'+c_2^'),
with equality if and only if c_1=c_1^' and c_2=c_2^'. It is clear that c_1+c_2 encloses the lens D=𝔇(
I,k_1,k_2) , where k_j is the curvature k(
c_j) of c_j,j=1,2; and c_1^'+c_2^' encloses
the lens D^'=𝔇( I,k_1^',k_1^') , where k_1^'=k( c_1^') =k(
c_2^') . D and D^' are
contained in two open hemispheres S_1⊂ℂ and S_1^'⊂ℂof the five. Thus we have A(D)=A(c_1+c_2) and
A(D^')=A(c_1^'+c_2^').
We show that A(D)≤ A(D^'), equality holding only if c_1
=c_1^' and c_2=c_2^'. Let C_1^' be the
convex circle determined by c_1^'. Then C_1^' is
contained in some open hemisphere S_1^''⊂ℂ on
S, and c_1^' is the arc of C_1^' from q_1 to
q_2. By (<ref>) c_1^' is at most half of C_1^',
and then C_1^'\ c_1^'∘ contains the arc
𝔠_2^'=𝔠_2^'( q_2
,q_3) with d(q_2,q_3)=d( q_1,q_2), and
write 𝔠_3=C_1^'\( c_1^'+𝔠_2^') ^∘. When c_1^' is half
of C_1^', 𝔠_3={q_3}={q_1}. Then
𝔠_2^' is congruent to c_2^' and we have a
partition
C_1^'=c_1^'+𝔠_2^'+𝔠_3.
When we replace c_1^' by c_1, 𝔠_2^' by
the convex arc 𝔠_2=𝔠_2( q_2,q_3)
congruent to c_2=c_2( q_2,q_1) , the convex circle
C_1^' becomes the Jordan curve
C_1=c_1+𝔠_2+𝔠_3,
with
L(C_1)=L(C_1^')=L(f,∂Δ)+L(𝔠_3)<2π.
Then,by Lemma <ref> (i), for the Jordan domain D_1 enclosed by
C_1 and the disk D_1^' enclosed by C_1^', we have
A(D_1)≤ A(D_1^'), equality holding if and only if C_1 is a
circle, say C_1=C_1^', c_1=c_1^', 𝔠
_2=𝔠_2^', which implies c_2=c_2^'.
We have the disjoint unions
D_1=D_c_1∪q_1q_2^∘∪ D_𝔠_2
∪q_2q_3^∘∪ D^'',
D_1^'=D_c_1^'∪q_1q_2^∘∪
D_𝔠_2^'∪q_2q_3^∘∪
D^'',
where D_c_1 and D_𝔠_2 are the disjoint lunes in D_1
of circular arcs c_1( q_1,q_2) and 𝔠
_2( q_2,q_3) , and D_c_1^' and
D_𝔠_2^' are the disjoint lunes in D_1^' of
circular arcs c_1^'( q_1,q_2) and 𝔠
_2^'( q_2,q_3) . Then we have
A( D_c_1∪ D_𝔠_2) =A(D)≤ A(D_c_1
^'∪ D_𝔠_2^')=A(D^'),
equality holding only if c_1^'=c_1 and c_2=c_2^'.
Let l be a positive number and
for j=1,2, let I_j=q_j1q_j2 be a line segment with
L(I_j)<πand let 𝔇_j^'(l_j)be the lune such
that ∂𝔇_j^'( l_j) =c_j
(l_j)-I_j, where c_j is a convex circular arc from q_j1 to
q_j2 with L(c_j(l_j))=l_j. Assume l_1+l_2=l,I_1 and
I_2 are fixed, but l_1 and l_2 vary, and for j=1,2,𝔇_j^'(l_j) is contained in some open hemisphere
S_j on S. Then we have the following.
(A) If the curvature k( c_1(l_1)) of c_1(l_1) is
larger than k( c_2(l_2)) when l_1=l_1^0,l_2
=l_2^0=l-l_2^0. Then there exists a δ>0, such that
A( 𝔇_1^'(l_1)) +A( 𝔇
_2^'(l_2)) =A( 𝔇_1^'
(l_1)) +A( 𝔇_2^'(l-l_1))
is a strictly decreasing function of l_1∈(l_1^0-δ,l_1
^0+δ).
(B) If k( c_1(l_1)) =k( c_2(l_2)) when
l_1=l_1^0 and both c_1(l_1) and c_2(l_2) are major
circular arcs, say, the interior angle of 𝔇_j^'(l_j)
at the cusps >π/2, then there exists a δ>0 such that
A( 𝔇_1^'(l_1)) +A( 𝔇
_2^'(l-l_1)) >A( 𝔇_1^'(l_1
^0)) +A( 𝔇_2^'(l-l_1^0)) ,
when 0<| l_1-l_1^0| <δ.
(C) (A) and (B) still hold when q_11=q_12, or
q_21=q_22, or both, holds (when q_j1=q_j2, 𝔇
_j^'(l_j) becomes into a disk, say, c_j(l_j) is a circle).
Assume
k( c_1(l_1^0)) >k( c_2(l_2^0)) =k(
c_2(l-l_1^0)) .
For sufficiently small ε>0, we may take small arcs
c_j,ε=c_j( q_j1,ε,q_j2,ε) in c_j( l_j^0) so that the center points of
c_j,ε and c_j( l_j^0) are the same and the
chard q_j1,εq_j2,ε has the same length
ε for j=1,2, and let 𝔠_j,ε
=𝔠_j,ε( q_j1,ε,q_j2,ε) be the convex circular arc from q_j1,εto
q_j2,ε so that
L(𝔠_j,ε)=1/2( L(c_1,ε)
+L(c_2,ε)),j=1,2.
When ε is small enough by (<ref>) we have 𝔠
_1,ε^∘⊂𝔇_1^'( l_1
^0) and 𝔠_2,ε^∘ is on the right side
of c_2,ε. Let D_j be the lune enclosed by c_j,ε-q_j1,εq_j2,ε, and let D_j^'
be the lune enclosed by 𝔠_j,ε-q_j1,εq_j2,ε, j=1,2. Then by Lemma <ref> we
have
A(D_1)+A(D_2)<A(D_1^')+A(D_2^'),
and when ε is small enough, the domain T_j=(
𝔇_j^'( l_j^0) \ D_j)
∪ D_j^'is a Jordan domain for j=1,2, and we have by
(<ref>)
A( 𝔇_1^'( c_1( l_1^0)
) ) +A( 𝔇_2^'( c_2(
l_2^0) ) ) <A( T_1) +A(T_2).
It is clear that
∂ T_j=( [ c_j( l_j^0) \
c_j,ε] ∪𝔠_j,ε) -I_j.
Now we replace [ c_j( l_j^0) \
c_j,ε] ∪𝔠_j,ε with the whole SCC
arc c_j( l_j( ε) ) from q_j1 to
q_j2 with
l_j( ε) =L(c_j( l_j( ε) ) )=L( [ c_j( l_j^0) \
c_j,ε] ∪𝔠_j,ε) .
Then l_1( ε) <l_1^0and l_2(
ε) =l-l_1( ε) >l_2^0. By Lemma
<ref>, we have
A(𝔇_j^'( c_j( l_j( ε) ) ) >A(T_j),j=1,2,
which, with (<ref>) implies
∑_j=1^2A(𝔇_j^'( c_j( l_j(
ε) ) ) >∑_j=1^2A(𝔇
_j^'( l_j^0) ).
Since l_1( ε) <l_1^0 and l_1(
ε) depends on ε continuously when
ε>0 is small enough, (<ref>) implies that
∑_j=1^2A(𝔇_j^'( c_j( l_j)
) >∑_j=1^2A(𝔇_j^'( l_j^0)
),
when l_1<l_1^0 and l_1^0-l_1 is small enough. Then we have
proved that there exists a small enough δ>0 such that (A) holds for
l_1∈(l_1^0-δ,l_1^0).
Now, assume k(c_1( l_1^0) )=k( c_2(
l_2^0) ) and both c_1( l_1^0) and
c_2( l_2^0) are major arcs. Then on (l_1^0
-δ,l_1^0], k(c_1( l_1) ) strictly decreases
but k( c_2( l-l_1) ) strictly increases when
δ is small enough. Thus by (A) for sufficient small δ
>0,∑_j=1^2A(𝔇_j^'( c_j( l_j)
) , as a function of l_1, strictly decreases on (l_1^0
-δ,l_1^0) and strictly increase on (l_1^0,l_1^0+δ).
On the other hand, it is clear that ∑_j=1^2A(𝔇_j^'( c_j( l_j) ) is a continuous function of
l_1 when | l_1-l_1^0| small enough. Hence (B) holds.
The proof of (C) is the same as (A) and (B).
Let I=pq be a line segment with L(I)<π,
l∈(2πsinL(I)/2,2π), and let 𝔇(
pq,x,l-x) be the lens enclosed by SCC arcs c_x
=c_x( p,q) and c_l-x^'=c_l-x^'(
q,p) with L(c_x)=x and L(c_l-x^')=l-x. Assume x_0
is the number such that 𝔇( pq,x_0
,l-x_0) is a disk and x_0>l-x_0. Then the following hold.
(i) If x>l-x and ∠( 𝔇( pq
,x,l-x) ,p) >π, then x∈(l/2,x_0).
(ii) The function A( 𝔇( pq,x,l-x)
) is strictly increases for x∈l/2,x_0].
If x>l-x, then x>l/2. If x>x_0, we must have ∠(
𝔇( pq,x,l-x) ,p) <π. Thus by
assumption of (i) we have x∈(l/2,x_0), and thus (i) holds true.
By Lemma <ref> and Corollary <ref> (B), A(
𝔇( pq,x,l-x) ) assumes the minimum
when x=l/2 and the maximum when x=x_0. Either x=l/2 or x=x_0 is
the condition such that c_x and c_l-x^' have the same
curvature, say, when x≠ l/2,x_0, the two circular arc of 𝔇
( pq,x,l-x) have distinct curvature. On the other
hand, the curvature continuously depends on x, and when x increases from
l/2 a little, the curvature of c_x decreases a little (note that
c_l/2 is a major circular arc and so is c_x for x>l/2). Thus when
x increases in (l/2,x_0), the curvature k( c_x) of c_x decreases and the curvature of c_l-x^' increases.
Then (ii) follows from Corollary <ref> (A).
The following lemma is a direct consequence of Lemma <ref>.
Let Σ=( f,Δ^+)
∈ℱ be a surface such that I=( f,[-1,1]) is a line
segment on S and Σ is contained in some open hemisphere S^'
on S. Assume L( ∂Σ) <2π. Then
A(Σ)≤ A(𝔇^'(I,θ))
for the unique θ with L(∂𝔇^'(I,θ
))=L(∂Σ), with equality if and only if Σis a
simple closed domain congruent to 𝔇^'(I,θ
).
Let L<2π be a positive number, x∈0,L] and
T_x and T_L-x be two disks in some open hemisphere on S with
L(∂ T_x)=x and L(∂ T_L-x)=L-x. Then
2A(T_L/2)≤ A(T_x)+A(T_L-x)≤ T_L
and A(T_x)+A(T_L-x) strictly decreases on [0,L/2].
This follows from Corollary <ref> (C). But here we can give a
simpler proof. Let f(x)=A(T_x)+A(T_L-x). Then we have
f(x)=4π-√(4π^2-x^2)-√(4π^2-( L-x) ^2),
and
f^'(x) =x/√(4π^2-x^2)+( x-L)
/√(4π^2-( L-x) ^2)
=x√(4π^2-( L-x) ^2)+( x-L)
√(4π^2-x^2)/√(4π^2-x^2)√(4π^2-( L-x)
^2).
Thus we have f^'( x) <0 on [0,L/2), and so
f(x) strictly decreases on x∈0,L/2].
Let c_n=c_n( q_n^',q_n^'') be a sequence of SCC arcs convergent to an SCC arc c_0
=c_0( q_0^',q_0^'') with
q_0^'≠ q_0^''. Assume that either c_0 is
straight with L(c_0)>π or c_0 is strictly convex. Then
c_0+q_0^''q_0^' is a convex Jordan curve
and, for sufficiently large n, c_n+q_n^''
q_n^' is also a convex Jordan curve and converges to c_0
+q_0^''q_0^'. Thus for sufficiently large
n, the lune enclosed by c_n+q_n^''q_n^' is convex and converges to the lune enclosed by c_0+q_0^''q_0^'.
This is clear when c_0 is strictly convex.
Assume c_0 is straight and L(c_0)>π. Then d( q_0^',q_0^'') <π and thus for sufficiently large n,
d( q_n^',q_n^'') <π and q_n^'q_n^'' converges to q_0^'q_0^''. It is clear that c_0+q_0^''q_0^' is a great circle and thus, by the assumption of
convergence, c_n+q_n^''q_n^'converges
to c_0+q_0^''q_0^'.
§ THE MONOTONICITY OF THE FUNCTION H_1(Δ)
For a line segment I on S with L(I)=δ<π, and a positive number
θ∈0,π] define
A(δ,θ,θ)=A(𝔇( I,θ,θ) ),
and
L(δ,θ,θ)=L(∂𝔇( I,θ,θ)
),
where 𝔇( I,θ,θ) is the lens defined in
Definition <ref>. Then for a constant A_0define
h(A_0,δ,θ,θ)=\frac{A_0+( q-2) A(δ,θ,θ)}{L(δ,θ,θ)}=( q-2) \frac{\frac{A_0}{q-2}+A(δ,θ,θ)}{L(δ,θ,θ)}.
For δ∈(0,π) and θ∈0,π/2] we
have
L(δ,θ,θ)=\frac{4\tan\frac{δ}{2}}{\sqrt{\sin^2θ+\tan^2\frac{δ}{2}}}\arctan\frac{\sqrt{\sin^2θ+\tan^2\frac{δ}{2}}}{\cosθ},
A(δ,θ,θ)=4θ-\frac{\sinθ}{\tan\frac{δ}{2}}L(δ,θ,θ),
and
h(A_0,δ,θ,θ)=\frac{\left( \frac{A_0}{4}+( q-2) θ\right) \sqrt{\sin^2θ+\tan^2\frac{δ}{2}}}{\tan\frac{δ}{2}\arctan\frac{\sqrt{\sin^2θ+\tan^2\frac{δ}{2}}}{\cosθ}}-\frac{( q-2) \sinθ}{\tan\frac{δ}{2}}.
Let a_δ be the positive number such that
δ=∫_0^a_δ2dx/1+x^2=2arctan a_δ,
Then a_δ=tanδ/2, and we have
A(δ,θ,θ)=A(𝔇(0,a_δ,θ,θ))
and
L(δ,θ,θ)=L(∂𝔇(0,a_δ,θ,θ)).
We assume θ∈(0,π/2].
Then, the line segment 0,a_δ on the sphere S divides
𝔇(δ,θ,θ) into two symmetric lunes 𝔇
^'(0,a_δ,θ) and 𝔇^'(a_δ,0,θ), and the upper lune is 𝔇
^'(a_δ,0,θ), as in Figure <ref> which
is in the plane ℂ. As shown in the plane figure, let α be
the circular arc on ℂ of ∂𝔇^'
(a_δ,0,θ) from a_δ to 0, let c_θ
with Imc_θ≤0 be the center of the circle on
ℂ containing α, and let c_θ^'=2c_θ.
Then
|c_θ^'-c_θ|=|0-c_θ|=|c_θ-a_δ|,
and thus c_θ^' is on the circle containing α, and the
triangle in ℂ with vertices 0,a_δ and c_θ^' is a right triangle whose interior angle at c_θ^' equals
θ. Thus we have |c_θ^'|=a_δ/sinθ. On the
other hand, for any point z∈α, the triangle with vertices
0,c_θ^' and z is also a right triangle, as in the figure,
whose angle at c_θ^' has value θ-\arg z. Thus
|z|=\sin(θ-t)|c_θ^'|=\frac{a_δ\sin(θ-t)}{\sinθ},
where t=\arg z. Then we obtain a parametric expression of the circular path
α:
α(t)=\frac{a_δ\sin(θ-t)}{\sinθ}e^{it},\quad t∈[0,θ],
and
|dα(t)|=\frac{a_δ|-e^{it}\cos(θ-t)+ie^{it}\sin(θ-t)|}{\sinθ}dt=\frac{a_δ}{\sinθ}dt,\quad t∈[0,θ].
Therefore, we have
L(δ,θ,θ) =2L(α)=2\int_α\frac{2|dz|}{1+|z|^2}
=\int_0^θ\frac{4|dα(t)|}{1+|α(t)|^2}
=4\int_0^θ\frac{a_δ\sinθ}{\sin^2θ+a_δ^2\sin^2(θ-t)}\,dt
=4\int_0^θ\frac{a_δ\sinθ}{\sin^2θ+a_δ^2\sin^2x}\,dx,
and so (<ref>) follows from
L(δ,θ,θ) =-\frac{4a_δ}{\sqrt{\sin^2θ+a_δ^2}}\int_0^θ\frac{d\left(\frac{\sinθ\cot x}{\sqrt{\sin^2θ+a_δ^2}}\right)}{\left[\frac{\sinθ\cot x}{\sqrt{\sin^2θ+a_δ^2}}\right]^2+1}
=\left.-\frac{4a_δ}{\sqrt{\sin^2θ+a_δ^2}}\arctan\frac{\sinθ\cot x}{\sqrt{\sin^2θ+a_δ^2}}\right|_0^θ
=\frac{2π a_δ}{\sqrt{\sin^2θ+a_δ^2}}-\frac{4a_δ\arctan\frac{\cosθ}{\sqrt{\sin^2θ+a_δ^2}}}{\sqrt{\sin^2θ+a_δ^2}}
=\frac{4a_δ\arctan\frac{\sqrt{\sin^2θ+a_δ^2}}{\cosθ}}{\sqrt{\sin^2θ+a_δ^2}}
=\frac{4\tan\frac{δ}{2}}{\sqrt{\sin^2θ+\tan^2\frac{δ}{2}}}\arctan\frac{\sqrt{\sin^2θ+\tan^2\frac{δ}{2}}}{\cosθ}.
It is then clear that (<ref>) will follow from (<ref>), the first
line of (<ref>) and
A(δ,θ,θ) =2A(𝔇^'(0,a_δ,θ))=2\iint_{𝔇^'(0,a_δ,θ)}\frac{4\,dxdy}{( 1+|z|^2) ^2}
=2\int_0^θ dt\int_0^{|α_θ(t)|}\frac{4r\,dr}{( 1+r^2) ^2}=2\int_0^θ\left( 2-\frac{2}{1+|α_θ(t)|^2}\right) dt
=4θ-4\int_0^θ\frac{dt}{1+|α_θ(t)|^2}
=4θ-\frac{\sinθ}{a_δ}\int_0^θ\frac{4|dα_θ(t)|}{1+|α_θ(t)|^2}
=4θ-\frac{\sinθ}{\tan\frac{δ}{2}}L(δ,θ,θ).
Then
h(A_0,δ,θ,θ) =A_0+( q-2)
A(δ,θ)/L(δ,θ)
=A_0+4θ( q-2) -( q-2)
sinθ/tanδ/2L(δ,θ,θ)/L(δ
,θ,θ)
=A_0+4θ( q-2) /L(δ,θ,θ)
-( q-2) sinθ/tanδ/2
=( A_0+4θ( q-2) ) √(sin
^2θ+tan^2δ/2)/4tanδ/2arctan√(sin^2θ+tan^2δ/2)/cosθ-(
q-2) sinθ/tanδ/2
=( A_0/4+θ( q-2) )
√(sin^2θ+tan^2δ/2)/tanδ/2
arctan√(sin^2θ+tan^2δ/2)/cosθ
-( q-2) sinθ/tanδ/2.
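For example, at θ=π/2, so that 𝔇( I,θ,θ) is the disk with diameter δ, we have \sqrt{\sin^2θ+\tan^2\frac{δ}{2}}=\frac{1}{\cos\frac{δ}{2}} and the arctangent equals π/2 (as a limit, since \cosθ=0), and the formulas for L and A above reduce to
L(δ,π/2,π/2)=2π\sin\frac{δ}{2},\quad A(δ,π/2,π/2)=2π\left( 1-\cos\frac{δ}{2}\right) ,
the perimeter and area of a spherical cap of radius δ/2.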
It is clear that when the boundary length of 𝔇(
I,θ,θ) is fixed while θ increases, δ=L(I) has
to decrease.
If 0<δ_2<δ_1<π, θ_1<θ_2≤π/2 and L( δ_1,θ_1,θ_1)
=L(δ_2,θ_2,θ_2), then
A(δ_1,θ_1,θ_1)<A(δ_2,θ_2,θ_2).
We fix θ_1 and L(δ_1,θ_1,θ_1)=L(δ
_2,θ_2,θ_2)=L, and show that the conclusion holds when
θ_2-θ_1 is small enough. Assume 𝔇(AB,θ_1,θ_1) with L(AB)=δ_1 is given as in
Figure <ref>, in which A and B are the two cusps. For a pair
of points C and C^' on ∂𝔇(AB
,θ_1,θ_1) which are symmetric to AB, we let
γ_CC^' be the circular arc with chord CC^'
and length equal to the part C^'AC of ∂𝔇(AB,θ_1,θ_1). It is clear that for
sufficiently small ε, there exists a pair of C and C^'
such that γ_CC^' intersects AB at A^' with
d(A^',B)=δ_1-ε as in Figure <ref> (2).
Let D_CBC^'A^'C be the domain enclosed by the Jordan curve
consisted of γ_CC^' and the part of ∂𝔇
(AB,θ_1,θ_1) under the segment CC^'. Then by Lemma <ref> we have
A(δ_1,θ_1,θ_1)≤ A(D_CBC^'A^'C),
equality holding only if C^'AC is a circular arc. But by
the hypothesis θ_1<θ_2≤π/2, C^'AC can not be circular, and so the equality can not hold. It is clear that
the boundary length of the domain D_CBC^'A^'C equals L,
and by Lemma <ref> the domain 𝔇(A^'
B,θ_2,θ_2) with L(𝔇(A^'B
,θ_2,θ_2))=L has larger area than D_CBC^'A^'
C. Thus we have
A(δ_1,θ_1,θ_1)<A(D_CBC^'A^'C
)<A(δ_1-ε,θ_2,θ_2).
Since ε is arbitrary, the result follows.
(i) For δ∈δ_E_q,π), let D(δ) be
a closed disk on S with diameter δ and write A(δ)=A(D(δ
)), L(δ)=L(∂ D(δ)). Then
h( δ) =\frac{R(D(δ))+4π}{L(δ)}
=\frac{4π-4πn( D(δ) )+( q-2) A(δ)}{L(δ)}
=h(4π( 1-n( D(δ) ) ),δ,π/2,π/2),
where h( A_0,δ,θ,θ) is defined in (<ref>).
(ii) For any open interval I^∘ in [δ_E_q,π), if
n( D( δ) ) is a constant for each
δ∈ I^∘, then h( δ) =R(D(δ))+4π/L(δ) is real analytic on I^∘ with
h^'( δ) =\frac{-( q-2n( D(δ) )) \cos\frac{δ}{2}+q-2}{2\sin^2\frac{δ}{2}},
and the following hold.
(ii1) If n( D( δ) ) =0 on
I^∘ and δ_E_q≥2arccosq-2/q, then h(
δ) strictly increases on I^∘∩δ_E_q
,π).
(ii2) If n( D( δ) ) =0 on
I^∘ and δ_E_q<2arccosq-2/q, then h(
δ) strictly decreases on I^∘∩δ_E_q
,2arccosq-2/q] and strictly increases on I^∘∩2arccosq-2/q,π].
(ii3) If n( D( δ) ) ≥1 on
I^∘, then h( δ) strictly increases on I^∘.
(ii4) h( δ) cannot assume the maximum value for every
δ∈ I^∘.
By definition, (<ref>) holds trivially. It is clear that
h(δ) =\frac{R(D(δ))+4π}{L(δ)}=h( 4π-4πn( D( δ) ) ,δ,π/2,π/2)
=\frac{4π-4πn( D(δ) )+( q-2) A(δ)}{L(δ)}
=\frac{4π-4πn( D(δ) )+2π( q-2) (1-\cos\frac{δ}{2})}{2π\sin\frac{δ}{2}}
=\frac{q-2n( D(δ) )-( q-2) \cos\frac{δ}{2}}{\sin\frac{δ}{2}}.
Thus we have (<ref>). If h(δ) assumes a maximum value at some
point δ∈ I^∘, then we must have
-( q-2n( D(δ) )) cosδ/2+q-2=0.
Therefore, (ii1)–(ii3) follow from (<ref>) directly. Now, (ii4) follows
from (ii1)–(ii3).
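As a direct check of the derivative formula in (ii), write a=q-2n( D(δ) ) and b=q-2 on I^∘. Then, by the last expression above, h(δ)=\frac{a-b\cos\frac{δ}{2}}{\sin\frac{δ}{2}}, and differentiation gives
h^'( δ) =\frac{\frac{b}{2}\sin^2\frac{δ}{2}-\frac{1}{2}\cos\frac{δ}{2}\left( a-b\cos\frac{δ}{2}\right) }{\sin^2\frac{δ}{2}}=\frac{b-a\cos\frac{δ}{2}}{2\sin^2\frac{δ}{2}},
which is the expression stated in (ii).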
For any pair of
constants δ∈(0,π) and A_0, let h( A_0,δ
,θ,θ) be given by (<ref>) with expression
(<ref>).
Then h(θ)=h( A_0,δ,θ,θ) strictly
increases in an interval [0,θ_0] for some θ_0>0.
Since δ∈(0,π), we have tanδ/2>0, and thus it is
clear that the functions h( θ) is real analytic in a
neighborhood I_0 of 0in (-1,1). It is clear that
d/dθ√(sin^2θ+tan^2δ/2)/tanδ/2[ arctan√(sin^2θ+tan^2
δ/2)/cosθ] =0
at θ=0. Since δ∈(0,π), we have by (<ref>),
h^'( 0) =q-2/δ/2-q-2/tanδ/2>0,
and so the existence of θ_0 follows.
For δ∈( 0,π) and for a constant A_0,
let h(δ)=max_θ∈0,π/2]h(A_0,δ
,θ,θ), where h(A_0,δ,θ,θ) is given by
(<ref>) with expression (<ref>). Let δ_0∈(0,π) and
assume h( δ_0) =h( A_0,δ_0,θ
_0,θ_0) for some θ_0∈( 0,π/2). Then for each δ<δ_0 which is sufficiently close
to δ_0, we have
h( A_0,δ_0,θ_0,θ_0) <h( A_0
,δ,θ_δ,θ_δ) ,
where θ_δ is determined by L(∂𝔇(
δ_0,θ_0,θ_0) )=L(∂𝔇(
δ,θ_δ,θ_δ) ), and thus
h( δ) =max_θ∈0,π/2]h(A_0
,δ,θ,θ)>h( δ_0) .
It is clear that for sufficiently small h>0, there exists θ
_1^'∈(θ_0,π/2) such that
L(δ_0-h,θ_1^',θ_1^')=L(δ_0
,θ_0,θ_0),
and then by Lemma <ref> we have
h(A_0,δ_0,θ_0,θ_0) =A_0/L(δ
_0,θ_0,θ_0)+( q-2) A(δ_0,θ
_0,θ_0)/L(δ_0,θ_0,θ_0)
<A_0/L(δ_0-h,θ_1^',θ_1^'
)+( q-2) A(δ_0-h,θ_1^',θ
_1^')/L(δ_0-h,θ_1^',θ_1^')
=A_0+( q-2) A(δ_0-h,θ_1^'
,θ_1^')/L(δ_0-h,θ_1^',θ_1^')
≤max_θ∈0,π/2]A_0+( q-2)
A(δ_0-h,θ,θ)/L(δ_0-h,θ,θ)
=h(δ_0-h).
Let δ_E_q=min{ d(𝔞_i,𝔞
_j):𝔞_i∈ E_q,𝔞_j∈ E_q,𝔞_i
≠𝔞_j} . Then δ_E_q≤2π/3,
equality holding only if q=3 and 𝔞_1,𝔞
_2,𝔞_3 are on a great circle on S and d(𝔞
_i,𝔞_j)=2π/3 for each pair of i and j with
i≠ j.
Assume δ_E_q>2π/3. Then we may assume 𝔞
_2,𝔞_3,…,𝔞_q are contained in the open disk
D complementary to the disk d(z,𝔞_1)≤2π/3. But
it is clear that the diameter of D is 2π/3 and since q≥3 we
have δ_E_q<2π/3. This is a contradiction.
Assume δ_E_q=2π/3. Then we may assume d(
𝔞_1,𝔞_2) =2π/3 and consider the
great circle C on S passing through 𝔞_1 and 𝔞
_2. If some 𝔞_j of 𝔞_3,…,𝔞_q is contained in S\ C, then we have d( 𝔞
_j,{𝔞_1,𝔞_2}) <2π/3,
contradicting to δ_E_q=2π/3. Thus {𝔞_3,…,𝔞_q}⊂ C, q=3, and
𝔞_1,𝔞_2,𝔞_3 divide C equally.
For the number δ_E_q, there exists a disk
T_0⊂ S with perimeter 3δ_E_q and T_0∩
E_q=∅.
Let 𝔞_1 and 𝔞_2 be two points of E_q such
that d( 𝔞_1,𝔞_2) =δ_E_q.
Then there exists a convex disk T_0 on S whose boundary contains
𝔞_1, 𝔞_2 and some a∈ S such that
𝔞_1,𝔞_2,a divide ∂ T_0 equally. It is
clear that T_0 is the desired disk.
Let T be a convex disk on S with T∩ E_q
=∅. Then H( T) is strictly increase as a function of
L=L(∂ T)∈(0,min{L_0,2π})and H( T)
≤( q-2) , where
L_0=\max\{ L: \text{there is a convex open disk } T \text{ on } S \text{ with } T∩ E_q=∅ \text{ and } L(∂ T)=L\}≥3δ_E_q.
This follows from Lemma <ref>, n( T) =0, and
that, as a function of L=L(∂ T),
H(T) =\frac{( q-2) A(T)}{L}=\frac{( q-2) ( 2π-\sqrt{4π^2-L^2}) }{L}
=\frac{( q-2) L}{2π+\sqrt{4π^2-L^2}}
strictly increases for L∈(0,min{L_0,2π}) and H( T)
≤( q-2) on (0,min{L_0,2π}].
§ THE RIEMANN HURWITZ FORMULA AND BRANCH POINTS
The simplest version of Riemann Hurwitz formula is that for any BCCM
f:S→ S with degree d
B_f=∑_z∈ Sb_f( z) =∑_z∈ S( v_f(
z) -1) =2d-2.
Recall that B_f,b_f( z)and v_f( z) are
defined in Remark <ref> (D) and (E). The formula implies that f has
exactly 2d-2 branch points on S, counted with the branch numbers b_f.
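For example, for f(z)=z^d regarded as a BCCM of S onto itself, the only branch points are 0 and ∞, each with v_f=d and b_f=d-1, so that B_f=2( d-1) =2d-2. This formula implies the following directly.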
Let Σ=(f,S) be a surface, say, f:S→ S is a d to
1 BCCM. Then
∑_v=1^qn̅(Σ,𝔞_v)=qd-∑_z∈ E_q
b_f(z)≥ qd-∑_z∈ Sb_f(z)=( q-2) d+2,
and
R(Σ)=( q-2) A(Σ)-4π∑_v=1^qn
(Σ,𝔞_v)≤-8π.
both equality holding iff C_f^∗=C_f( S\ f^-1
(E_q)) =∅, say, CV_f⊂ E_q (see Remark
<ref> (D) for the notations C_f^∗ and CV_f).
In fact, the first two parts of (<ref>) are trivial, and the third part of
(<ref>) follows from the equation (<ref>). Then (<ref>), together
with the area formula A(Σ)=4π d, implies (<ref>).
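For example, if q=3 and E_q={0,1,∞}, then for the closed surface given by f(z)=z^d we have n̅(Σ,0)=n̅(Σ,∞)=1 and n̅(Σ,1)=d, so ∑_v=1^qn̅(Σ,𝔞_v)=d+2=( q-2) d+2 and R(Σ)=( q-2) 4π d-4π( d+2) =-8π; here both equalities hold since CV_f={0,∞}⊂ E_q.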
Let Σ=( f,Δ) ∈ℱ
such that ∂Σ is a Jordan curve and let T be the closed domain
enclosed by ∂Σ on S. Then for the closed surface Σ
_0=( f_0,S) which is obtained by sewing Σ and the
domain S\ T along ∂Σ, we have
R(Σ)=R(Σ_0)+R(T)+8π≤ R(T),
with equality holding if and only if Σ∈ℱ_r, where
ℱ_r={( f,U) ∈ℱ:C_f^∗=C_f( U\ f^-1(E_q)) =∅}.
The first equality of (<ref>) follows from Corollary <ref>. By
Lemma <ref>, R(Σ_0)+8π≤0, equality holding iff CV_f_0
⊂ E_q. On the other hand, by Corollary <ref> we have
CV_f_0=CV_f. Thus CV_f_0⊂ E_q iff CV_f⊂ E_q,
say, Σ∈ℱ_r.
Let K be a Jordan domain on S, let Σ=(
f,U)be a surface in 𝐅 such that f|_∂
U:∂ U→∂ K is an orientation preserving CCM with
degree d and that f covers K by d_0 times[This only means
that each point of K has d_0 preimages, counted with multiplicity.].
Then the following hold.
(i) In the case E_q⊂ K,
#f^-1(E_q)≥( q-2) d_0+d+1.
(ii) In the case #(E_q∩ K)=q-1 and E_q∩∂ K=∅,
#f^-1(E_q)≥( q-2) d_0+1.
By the convention, ∂ K is oriented in the way that K is on the left
of ∂ K, and so d≤ d_0, with equality only if f(
U) ⊂ K.
This is a consequence of Riemann-Hurwitz formula (<ref>). We may extend f
to be a BCCM F:S=ℂ→ S so that F restricted
to ℂ\U is a BCCM onto S\
K and that F contains no branch point in ℂ
\U when d=1, or contains only one branch point p in
ℂ\U with v_f(p)=d>1.
(i) If E_q⊂ K, then
#f^-1(E_q) =#F^-1(E_q)=d_0q-∑_z∈ F^-1(E_q)(
v_F(z)-1)
≥ d_0q-∑_z∈ U( v_F(z)-1)
=d_0q-[ ∑_z∈ℂ( v_F(z)-1)
-∑_z∈ℂ\U( v_F
(z)-1) ]
=d_0q-[ ( 2d_0-2) -( d-1) ]
=( q-2) d_0+d+1.
(ii) If #(E_q∩ K)=q-1, then we may construct F so that F(p)∈ E_q
and v_F(p)=d for some p∈ℂ\U,
and then we have
#f^-1(E_q) =#F^-1(E_q)-1=[ d_0q-∑_z∈ F^-1
(E_q)( v_F(z)-1) ] -1
≥[ d_0q-( 2d_0-2) ] -1=( q-2)
d_0+1.
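For instance, if E_q⊂ K and f maps U homeomorphically onto K, then d=d_0=1 and (i) reads #f^-1(E_q)≥( q-2) +1+1=q, which holds with equality, since each point of E_q has exactly one preimage.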
Now, we apply the previous lemma to prove the following lemma and theorem.
Let Σ=( f,Δ) ∈𝐅 and
assume that there exists a Jordan curve Γ in S\ E_q such
that for the two components K_1 and K_2 of S\Γ,
K_1is contained in the lower half sphere D(
0,π/2) on S, #E_q∩ K_2≥ q-1, and
∂Σ⊂ K_1. Then
R(Σ)≤( q-2) A( ∂Σ) ,
and moreover, if in addition n( Σ) ≠0, then
R(Σ)≤( q-2) A( ∂Σ) -4π.
Since f has a finite number of branch values, we may assume that Γ
contains no branch value of f. For otherwise, we can replace Γ with
another Jordan curve Γ^' close to Γ enough so that
Γ^' contains no branch value of f and satisfies all the hypothesis.
Firstly, consider the case ∞∉ f(Δ). Then by Lemma <ref>
A(Σ)=A( ∂Σ) +4π d_f( ∞)
=A( ∂Σ) ,
and then
R(Σ)=( q-2) A(∂Σ)-4πn(
Σ) ,
which implies the desired result when ∞∉ f(Δ).
Now assume ∞∈ f(Δ) and let d_f( ∞)
=d_0. Then d_0 is a positive integer and f^-1(Γ) is nonempty
and consists of a finite number of disjoint Jordan curves in Δ.
Then f^-1(Γ)∩∂Δ=∅, and f^-1(Γ)
divides Δ into a finite number of domains, and we let V be the
component of Δ\ f^-1(Γ) with ∂Δ⊂∂ V. Then f(V)∩Γ=∅ and
f(V)⊂ K_1,
and then Δ\V consists of a finite number of
Jordan domains U_j,j=1,…,k, such that for each j, U_j
⊂Δ, U_i∩U_j=∅ if i≠
j, and f|_∂ U_j is a CCM from ∂ U_j onto Γ. We
assume f|_U_j covers K_2 with degree d_j and f|_∂
U_j covers Γ with degree d_j^',j=1,…,k. Then we
have
d_0=d_1+⋯+d_k.
Note that[It is possible that some Jordan curve of f^-1(Γ
) encloses another Jordan curve of f^-1(Γ) in Δ. Thus
U_1,…,U_k may be Jordan domains enclosed by just a part of Jordan
curves of f^-1(Γ). If some U_j contains some Jordan curve of
f^-1(Γ), then d_j>d_j^' and f(U_j)⊃ S.]
d_j>d_j^' when f(U_j)⊃ S and d_j
=d_j^' when f(U_j)=K_2. We write
d_0^'=d_1^'+⋯+d_k^'.
By Lemma <ref> we have
A(Σ)=A(∂Σ)+4π d_0,
where
A(∂Σ)=(2/i)∫_∂Σw̄ dw/( 1+|w|^2) .
The restriction (f,U_j),∂ K_2,d_j,d_j^'
satisfies the hypothesis of Lemma <ref> (as (f,U),∂
K,d_0,dthere). Thus we have
n( U_j) =#f^-1(E_q)∩ U_j≥(
q-2) d_j+1, for j=1,…,k.
Therefore
n( Σ) ≥#( f^-1(E_q)∩∪_j=1^kU_j) =∑_j=1^k#( f^-1(E_q)∩
U_j)
≥∑_j=1^k( ( q-2) d_j+1) =(
q-2) d_0+k≥( q-2) d_0+1.
Consequently, by (<ref>) we have
R(Σ) =( q-2) A(Σ)-4πn(
Σ,E_q)
≤( q-2) [ A( ∂Σ) +4π
d_0] -4π[ ( q-2) d_0+1]
=( q-2) A( ∂Σ) -4π.
This completes the proof.
Let L be a positive number with
L≤2δ_E_q, let Σ=( f,Δ)
∈𝐅 with L( ∂Σ) ≤ L, and let T_0
be a closed disk in some hemisphere on S with perimeter L and T_0∩
E_q=∅. Then
H(Σ)≤ H(T_0)
with equality holding iff Σ is a simple disk with f(Δ)∩
E_q=∅.
By Lemma <ref>, T_0 must exist.
By Rado's theorem, ∂Σ is contained in some open hemisphere
S^' on S. We may assume S^'=D(0,π/2), otherwise we
replace the surface Σ and the set E_q with Σ^'=(
φ∘ f,Δ) and E_q^'=φ
(E_q)so that ∂Σ^'⊂ D(0,π/2), where φ
is a rotation of S so that φ( ∂Σ) ⊂
D(0,π/2).
Let A be the convex hull of ∂Σ in D(0,π/2). Then by the
assumption we have
L(∂ A)≤ L(∂Σ)≤ L
and we can conclude that A contains at most two points of E_q.
If A contains two points 𝔞 and 𝔟 of E_q, then
we have L=2δ_E_q, ∂Σ=𝔞𝔟
+𝔟𝔞 and ( ∂Σ) ∩
E_q={𝔞,𝔟}, and then we can sew
Σ along 𝔞𝔟 to obtain a surface Σ
_0=( F,S) such that F is a BCCM from S onto S. Then
n(Σ_0)=n( Σ) +2,
A(Σ_0)=A(Σ) and thus we have by Lemma <ref>
R(Σ)=R(Σ_0)+8π≤0<R(T_0)=( q-2) A(T_0),
which implies (<ref>).
Assume A contains at most one point of E_q. Then there exists a Jordan
curve Γ in D( 0,π/2) \ A whose interior
domain contains A such that for the doubly connected domain V between
Γ and ∂ A in D( 0,π/2) , V\∂ A contains no point of E_q. Let K_1 be the
component of S\Γ which contains A and K_2 the other
component. Then we have ∞∈ K_2, K_1∪ K_2=S\Γ, and K_2 contains at least q-1 points of E_q. Thus by Lemma
<ref> and Lemma <ref> (i) we have
R(Σ)≤( q-2) A( ∂Σ) ≤(
q-2) A(T_1),
where T_1 is a disk in S\ E_q with perimeter L(
∂Σ), which with Lemma <ref>, implies (<ref>). This
completes the proof.
§ THE SPACES ℱ, ℱ(L), ℱ_r, ℱ_r(L), 𝒞( L,m) , 𝒞^∗( L,m) , ℱ(L,m), ℱ_r(L,m)
Continuing Definition <ref>, in which ℱ and ℱ(
L) have been defined, we introduce some subspaces of ℱ
and ℱ( L) .
(a) 𝒞(L,m) denotes the subspace of ℱ(L)
such that Σ=( f,U) ∈𝒞(L,m) if and
only if ∂ U and ∂Σ have partitions
∂ U=α_1+α_2+⋯+α_m,
∂Σ=c_1+c_2+⋯+c_m,
such that c_j=( f,α_j) is an SCC arc on S
for j=1,…,m (it is permitted that some c_j may be a whole circle).
In this case, the partitions (<ref>) and (<ref>) are both called
𝒞( L,m)-partition of ∂Σ.
(b) 𝒞^∗(L,m) denotes the subspace of 𝒞
(L,m)such that Σ=( f,U)
∈𝒞^∗(L,m) if and only if ∂ U and ∂Σ
have partitions (<ref>) and (<ref>) such that c_j=(
f,α_j) is an SCC arc on S and f has no branch point
in α_j^∘∩ f^-1(E_q), for every j=1,…,m. In this
case, the partitions (<ref>) and (<ref>) are both called
𝒞^∗( L,m)-partition of ∂Σ.
(c) ℱ(L,m) denotes the subspace of 𝒞^∗(L,m) such
that Σ=( f,U) ∈ℱ(L,m)if and only
if ∂ U and ∂Σ have partitions (<ref>) and
(<ref>), and that the following (i)—(iii) hold.
(i) Each c_j=( f,α_j) is an SCC arc.
(ii) For each j=1,2,…,m, f has no branch point in α_j^∘.
(iii) For each j=1,2,…,m, f restricted to a neighborhood D_j of
α_j^∘ in Δ is a homeomorphism onto a one side
neighborhood T_j∪ c_j^∘ of c_j^∘, where T_j is a
Jordan domain enclosed by c_j and another circular arc c_j^',
say, ∂ T_j=c_j-c_j^' and c_j^' is a circular arc.
The partitions (<ref>) and (<ref>) are both called
ℱ(L,m)-partitions of ∂Σ if they satisfy (i)–(iii)
in addition.
(d) ℱ_r, as defined in (<ref>), is the subspace
of ℱ such that Σ=( f,U)
∈ℱ_r if and only if f has no branch point in U\ f^-1(E_q), and define
ℱ_r(L)=ℱ_r∩ℱ(L),
ℱ_r(L,m)=ℱ_r∩ℱ(L,m).
Note that 𝒞^∗( L,m)-partitions
and ℱ( L,m)-partitions involve the interior
information of the corresponding surfaces. But 𝒞( L,m)-partitions do not involve the interior information of the surface, and
thus 𝒞( L,m)-partitions can be defined for closed
curves which may not be boundary curves of surfaces.
The conditions (ii) and (iii) in the definition are equivalent
by Lemma <ref> (also see Lemma <ref> later in this section);
we list (iii) just for emphasis. On the other hand, it is clear that
𝒞( L,m) ⊃𝒞^∗( L,m)
⊃ℱ( L,m) ⊃ℱ_r(
L,m) ,
𝒞( L,m+1) ⊃𝒞( L,m) ,
𝒞^∗( L,m+1) ⊃𝒞^∗(
L,m) ,
ℱ( L,m+1) ⊃ℱ( L,m) ,
ℱ_r( L,m+1) ⊃ℱ_r(
L,m) ,
and all inclusion relationships are strict. On the other hand, we have
∪_m=1^∞𝒞( L,m) =∪_m=1^∞𝒞^∗( L,m) =∪_m=1^∞ℱ(
L,m) =ℱ( L) ,
since for every Σ=( f,U) ∈ℱ(
L) , ∂Σ has an ℱ(L,m)-partition for some
m∈ℕ.
A curve ( f,∂Δ) is called
parametrized by length, if for each θ∈[0,2π] and the arc
Θ_θ={e^√(-1)t:t∈[0,θ]}⊂∂Δ,
L(f,Θ_θ)=(θ/2π)L(f,∂Δ).
When a closed curve Γ=( φ,∂Δ) is parametrized by length and has 𝒞(L,m)
-partitions (<ref>) and (<ref>), the partitions are uniquely
determined by the initial point of α_1 and the length of
c_j,j=1,2,...,m, since we have
L(α_1):L(α_2):…:L(α_m)=L(c_1):L(c_2
):...:L(c_m).
Assume Σ=( f,Δ)
∈ℱ( L,m) and ∂Σ is parametrized by
length. Then for any ℱ( L,m)-partition
∂Δ=α_1( a_1,a_2) +⋯+α_m(
a_m,a_1)
of ∂Σ with a_1=1 and the corresponding ℱ(
L,m)-partition
∂Σ=c_1( q_1,q_2) +⋯+c_m(
q_m,q_1) ,
we have a_j=e^√(-1)θ_jfor some θ_j such that
0=θ_1<θ_2<…<θ_m<θ_m+1=2π,
and
θ_j=2π L(f,Θ_θ_j)/L( ∂Σ) ,j=1,2,…,m+1.
Then for the homeomorphism φ:[1,m+1]→[0,2π] which
maps each interval [j,j+1] linearly onto [θ_j,θ_j+1], we
have a_j=e^√(-1)φ( j) , and for x∈(j,j+1),
a_x=e^√(-1)φ( x) is a point in α_j, and
q_x=f(a_x) is a point in c_j. For example, a_1+1/2 is
the middle point of α_1 and q_1+1/2 is the middle point
of c_1, q_m+1/2 is the middle point of c_m, while
q_m+1=q_1.
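For instance, if ∂Σ is parametrized by length with L( ∂Σ) =4π and L(c_1)=π, then θ_2=2π· L(c_1)/L( ∂Σ) =π/2 and a_2=e^√(-1)π/2, while the middle point a_1+1/2 of α_1 is e^√(-1)π/4.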
Let Σ=( f,Δ) ∈ℱ( L,m) , {a,b}⊂∂Δ,
α be an arc of ∂Δ from a to b, oriented by
∂Δ, β be a simple arc in Δ from a to
b, and assume that the following (a)–(c) hold.
(a) (<ref>) and (<ref>) are ℱ( L,m)-partitions of ∂Σwith c_j=( f,α_j)
,j=1,…,m.
(b) α∩β contains a connected component I=I(a^'
,b^'), oriented by ∂Δ, with b^'≠ a,b, say,
b^'∈α^∘∩β^∘.
(c) ( f,-β) is an SCC arc.
Then the following hold.
(i) b^'∈{a_j}_j=1^m.
(ii) If a^'≠ a, then a^'∈{a_j}_j=1^m, and I
is either a point of {a_j}_j=1^m, or an arc of ∂Δ
which can be written as
I=α_j_1( a_j_1,a_j_1+1) +α_j_1+1(
a_j_1+1,a_j_1+2) +⋯+α_j_1+k( a_j_1
+k,a_j_1+k+1)
for some j_1≤ m and some 0≤ k<m (with a_j=a_j-m for each j
with j>m).
(iii) If b^' is not a branch point of f, then (
f,∂Δ) is not convex at b^'.
(iv) If, in addition to (a)–(c), ( f,-β) is strictly
convex, then a^'=b^'∈{a_j}_j=1^m, say, I is a
point in {a_j}_j=1^m, and moreover β^∘∩∂Δ⊂{a_j}_j=1^m.
Let γ be a simple arc on S. Then for any x∈γ^∘ there
exist a positive number δ_x,γand a subarc γ^' of
γ in D(x,δ_x,γ) such that ∂γ^'⊂∂ D(x,δ_x,γ) and x∈γ
^'∘⊂ D(x,δ_x,γ). Hence γ^' divides
D(x,δ_x,γ) into two Jordan domains D^l( x,δ
_x,γ) and D^r( x,δ_x,γ) , on the
left and right hand side of γ^' (by convention, γ^' inherits the orientation of γ). From this observation, we can
introduce the following definition.
Let γ_1 and γ_2 be two arcs on S with
γ_1^∘∩γ_2^∘≠∅and assume
γ_2 is simple. We say that γ_1 is on the left hand side of
γ_2, if for each x∈γ_1^∘∩γ_2^∘, x
has a neighborhood in γ_1 contained in D^l(
x,δ_x,γ_2) \∂ D( x,δ
_x,γ_2) . When we replace D^l( x,δ
_x,γ_2) with D^r( x,δ_x,γ_2)
for every x∈γ_1^∘∩γ_2^∘, we obtain the
definition of γ_1 being on the right hand side of γ_2.
By this definition, γ_2 is on the left hand side of itself, and on
the right hand side as well.
Let γ_1 and γ_2 be two simple arcs on S and
assume γ_1^∘∩γ_2^∘≠∅. When
γ_1 is on the left (right) hand side of γ_2, and γ
_2 is on the right (left) hand side of γ_1, we say that the
direction of γ_1 is determined by γ_2.
The proof of the previous lemma is similar to that of Lemma 5.4 in
<cit.>, which is based on an observation stated after that lemma in
<cit.>. The proof here is based on a similar observation but is a little
more general:
Let γ_1 and γ_2 be two simple and convex arcs on
S with γ_1^∘∩γ_2^∘≠∅. Assume that
γ_1 is on the left hand side of γ_2 and the direction of
-γ_1 is determined by γ_2 in the sense of Definition
<ref>. Then γ_1∩γ_2 is a simple line segment on S, and each of the two endpoints of γ_1∩γ_2 is an endpoint
of γ_1 or γ_2, say, γ_1^∘∩γ_2
^∘ has no compact connected component.
We may write
I=I( a^',b^') =α( a^',b^') =β( a^',b^') ,
which, by convention, means that I is from a^' to b^' and
I is a subarc of α (and β) whose direction is determined by
α (and β). Since Σ∈ℱ( L,m) and
( f,-β) is simple and circular, α∩β is
consisted of a finite number of connected components. Thus α and
β have subarcs[By convention, subarcs inherits the oriention.]
α_b^'=α( a^'',b^'')
and β_b^'=β(A^'',B^'') which are
neighborhoods of b^' in α and in β, respectively, such that
(d) When b^'≠ a^', we have A^''=a^''∈ I^∘,
α_b^'∩β_b^'=α_b^'∩
I=β_b^'∩ I=α( a^'',b^')
=β( a^'',b^') ;
and when b^'=a^', we have
α_b^'∩β_b^'={b^'}⊂α
_b^'^∘∩β_b^'^∘.
(e) α_b^'\{b^'} and β_b^'
\{b^'} contain neither point of {a_j}_j=1^mnor
branch point of f.
(f) ( f,-β_b^') is an SCC arc.
(g) If b^' is not a branch point of f and ( f,∂Δ) is not folded at b^', then f is an OPH in a
neighborhood[When b^' satisfies the assumption of (g), f
is a homeomorphism in a neighborhood of b^' in Δ,
and so the conclusion holds when α_b^' and β_b^' are contained in that neighborhood.] of α_b^'∪β_b^'
) is a simple arc on the left hand side of γ_2=(
f,-β_b^') , and the direction of γ_1 is
determined by -γ_2.
To prove (i), assume the opposite: b^'∉{a_j}_j=1^m. Then
by (e) γ_1=(f,α_b^') is contained in some c_j of
the partition (<ref>) and thus is convex. Hence, (f), (g) and Observation
<ref> imply that f(b^') is in the interior ( γ
_1∩γ_2) ^∘, and thus b^' is an interior
point of α_b^'∩β_b^', which contradicts (d).
(i) is proved and the proof of (ii) is the same.
To prove (iii), assume that b^' is not a branch point of f and the
conclusion fails, say, ( f,∂Δ) is convex at b^'.
Then ( f,∂Δ) is not folded at b^' and so
all conclusions of (g) hold, and moreover, γ_1=(f,α_b^'
) is convex by (d) and (e). Thus (f), (g) and Observation <ref> again
imply that b^' is an interior point of α_b^'∩β_b^', contradicting (d), as in the proof of (i). This proves (iii).
Assume ( f,-β) is strictly convex, and I is not a point,
say, a^'≠ b^'. Then I has a subarc I^' such that
I^'⊂( ∂Δ) \{a_j}_j=1
^m, and then by (c) and definition of ℱ( L,m) ,
both ( f,I^') and ( f,-I^')
⊂( f,-β) are convex arcs, which implies that (
f,I^') is a line segment and thus ( f,-β)
cannot be strictly convex. This proves (iv).
The following result and the method of its proof will be used many times.
Let { L_j} _j=1^N and
{R_j}_j=1^N be sets of positive numbers and assume R_1
/L_1≥R_j/L_j,j=2,…,N. Then
R_1/L_1≥∑_j=1^NR_j/∑_j=1^NL_j,
with equality holding if and only if
R_1/L_1=R_2/L_2=…=R_N/L_N.
It follows from
∑_j=1^NR_j=∑_j=1^NR_j/L_jL_j≤∑_j=1
^NR_1/L_1L_j=R_1/L_1∑_j=1^NL_j.
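For example, taking N=2 with (R_1,L_1)=(3,1) and (R_2,L_2)=(1,1) gives R_1/L_1=3>( R_1+R_2) /( L_1+L_2) =2, and equality would require R_2/L_2=3=R_1/L_1 as well.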
The following lemma is very useful (recall definitions (<ref>) and
(<ref>) of R(Σ) and H(Σ)).
Let Σ_j,j=0,1,…,n, be surfaces in 𝐅, and
let ε_0be a positive number such that
R(Σ_0)-ε_0≤∑_j=1^nR(Σ_j) and
L(∂Σ_0)+ε_0≥∑_j=1^nL(∂Σ_j).
Then
[ H(Σ_0)-ε_0/L(∂Σ_0)] /[ 1+ε
_0/L(∂Σ_0)] ≤max_1≤ j≤ nH(Σ_j).
This follows from (<ref>), (<ref>) and Lemma <ref> directly.
We define
H_L,m=H_L,m( E_q) =sup_Σ∈ℱ(L,m)
H(Σ).
Then by Definition <ref> we have
H_L=lim_m→+∞H_L,m.
If L≤2δ_E_q, then
H_L=H_L,m=H_L,1=( q-2) A(T)/L=( q-2)
( 2π-√(4π^2-L^2)) /L,
and H_L increases strictly as a function of L∈(0,2δ_E_q], for any
given positive integer m, where T is a disk with perimeter L and
diameter less than π.
By Theorem <ref>, for any L∈(0,2δ_E_q], we have
(<ref>). Thus, by Lemma <ref>, H_L=H_L,m increases strictly on
(0,2δ_E_q].
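The explicit expression for A(T) can be checked directly: a disk on S with diameter less than π and perimeter L is a spherical cap of angular radius θ∈(0,π/2) with L=2πsinθ, so that
A(T)=2π( 1-cosθ) =2π-√(4π^2-L^2),
and H(T)=( q-2) A(T)/L because T can be chosen with T∩ E_q=∅, so that n(T)=0.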
For any L>0, let 𝒯_L be the set of all closed
convex[A convex disk on S is contained in some hemisphere on S,
and vice versa.] disks T with T^∘⊂ S\ E_q and
L(∂ T)≤ L, and let L_0=sup_T∈𝒯_LL(∂
T). Then for any positive integer m and any disk T_L_0 in
𝒯_L with L(∂ T_L_0)=L_0,
H_L≥ H_L,m≥ H_L,1≥sup_T∈𝒯_LH(T)=H(T_L_0)≥ H(T_min{L,2δ_E_q})>0.
This follows from the relation 𝒯_L⊂ℱ(
L,1) ⊂ℱ( L,m) ⊂ℱ(
L), T_L_0∈𝒯_L and Lemma <ref>.
Let L∈ℒ. Then there exists a positive number
δ_L such that for each L^'∈(L-δ_L,L+δ_L),
H_L-π/2L<H_L^'<H_L+π/2L.
This follows from Definition <ref>.
In <cit.> it is proved that for any surface Σ in 𝐅,
there exists a surface Σ^'=( f,Δ) in 𝐅 with piecewise analytic boundary, such that f has no
branch point in Δ\ f^-1(E_q), A(Σ^')≥
A(Σ),L(∂Σ)≥ L(∂Σ^'), and moreover,
∂Σ^' consists of subarcs of ∂Σ. In
<cit.>, using a similar method and more analysis, the authors proved
the following theorem, which is a key step for proving the first main theorem.
Let L∈ℒ, m be a positive integer, Σ=(
f,Δ) ∈𝒞^∗(L,m),
∂Δ=α_1+⋯+α_m,
is a 𝒞^∗( L,m)-partition of ∂Σ
and
∂Σ=c_1+⋯+c_m
is the corresponding 𝒞^∗( L,m)-partition, and
assume that
H(Σ)>H_L-π/2L(∂Σ).
Then there exists a surface Σ^'=( f^',Δ) such that
(i) Σ^'∈ℱ_r(L,m).
(ii) H(Σ^')≥ H(Σ) and L(∂Σ^')≤
L(∂Σ). Moreover, at least one of the inequalities is strict if
Σ∉ℱ_r(L,m).
(iii) When L(∂Σ^')=L(∂Σ) we have
∂Σ^'=∂Σ and (<ref>) and (<ref>) are
ℱ( L,m)-partitions of ∂Σ^'.
The following Theorem is proved in <cit.>, which is also a key step for
proving the first main theorem.
There exists an integer d^∗=d^∗(m,E_q) depending only
on m and E_q such that for any Σ=( f,Δ) ∈ℱ_r( L,m), there exists a surface
Σ_1=( f_1,Δ) in ℱ
_r( L,m) such that
d_max( Σ_1) =d_max( f_1) =max_w∈
S\∂Σ_1#f_1^-1(w)≤ d^∗,
∂Σ_1=∂Σ,
and
H(Σ_1)=H(Σ).
ℱ_r^'(L,m) is defined to be the subspace of
ℱ_r(L,m) such that Σ=( f,U)
∈ℱ_r^'(L,m) iff d_max(Σ)≤ d^∗=d^∗(m,E_q), where d^∗ is the integer determined in Theorem <ref>
.
Let L∈ℒ. Then for
sufficiently large m, we have
H_L,m=sup_Σ∈𝒞^∗(L,m)H(Σ)=sup_Σ∈ℱ(L,m)H(Σ)=sup_Σ∈ℱ_r(L,m)
H(Σ)=sup_Σ∈ℱ_r^'(L,m)H(Σ).
It suffices to show that, for sufficiently large m,
sup_Σ∈𝒞^∗(L,m)H(Σ)≤sup_Σ∈ℱ_r^'(L,m)H(Σ).
By Remark <ref>,
H_L=lim_m→∞sup_Σ∈𝒞^∗(L,m)
H(Σ)>H_L-π/2L.
Let m be any large enough integer in ℕ such that
sup_Σ∈𝒞^∗(L,m)H(Σ)>H_L-π/2L.
Then there exists a sequence Σ_n in 𝒞^∗(L,m) such
that
lim_n→∞H(Σ_n)=sup_Σ∈𝒞^∗
(L,m)H(Σ)
and
H(Σ_n)>H_L-π/2L>H_L-π/2L( ∂Σ_n) .
Thus by Theorems <ref> and <ref>, there exists a sequence Σ
_n^'∈ℱ_r^'(L,m) such that H(Σ_n
^')≥ H(Σ_n). Thus (<ref>) holds.
Let Σ=( f,Δ) ∈𝐅 and
assume either (1) f^-1(E_q)∩Δ≠∅, or (2)
∂Σ contains a simple arc with distinct end points in E_q.
Then
[ R(Σ)+4π] /L(∂Σ)≤ H_0,
where H_0 is defined by (<ref>).
This is Theorem 1.7 in <cit.>. In fact when (1) holds, we may assume
0∈ f^-1(E_q) and let Σ_n=( f_n,Δ) be the surface in 𝐅 with f_n( z)
=f( z^n) ,z∈Δ. Then we have R(Σ
_n)=( q-2) nA(Σ)-4π( nn(
Σ) -( n-1) ) =nR(Σ)+4π(
n-1) , and L(∂Σ_n)=nL(∂Σ), and thus
H_0≥R(Σ_n)/L(∂Σ_n)=[ R(Σ
)+4π( n-1) /n] /L(∂Σ)→[ R(Σ)+4π] /L(∂Σ) as n→∞,
which implies (<ref>). When (2) holds, (<ref>) follows from the
following Lemma which is also proved in <cit.>.
Let Σ=( f,Δ) ∈𝐅 and
assume that ∂Σ contains a simple arc γ with distinct end
points in E_q, say, (2) of the previous lemma holds. Then there exists a
surface Σ_1=( f_1,Δ) ∈𝐅
such that L(∂Σ)=L(∂Σ_1), H(Σ)=H(Σ_1)
and f_1^-1( E_q) ∩Δ≠∅.
In fact Σ_1 can be obtained by sewing Σ and the
surface whose interior is S\γ and boundary is γ-γ. Then we have A(Σ_1)=A(Σ)+4π, n( Σ
_1) =n( Σ) +q-2 and ∂Σ=∂Σ_1. Therefore H(Σ)=H(Σ_1) and
f_1^-1( E_q) ∩Δ≠∅, and then
(<ref>) holds by the discussion of (1) of the previous lemma.
The above discussion about Lemmas <ref> and <ref> implies the following
If Σ∈ℱ satisfies (1) or (2) of Lemma <ref>,
then there exists a sequence Σ_n in ℱ such that H(Σ
_n)=R(Σ_n)/L(∂Σ_n)→[ R(Σ)+4π] /L(∂Σ) as n→∞.
Let Σ=( f,Δ) ∈ℱ_r(L), let α=α( b_1
,b_2) be an arc of ∂Δ such that c=c(
p_1,p_2) =( f,α)is a simple arc (it is
possible that p_1=p_2 even if b_1≠ b_2). If α^∘
contains no branch point of f, then there exists a closed domain T
on S such that
(i) ∂ T=c-c^', where c^' is a simple arc from p_1 to p_2 which is consisted of a finite number of line segments on S
such that c∩ c^'={p_1,p_2}.
(ii) The interior angles of T at p_1 and p_2 are positive.
(iii) f^-1 has a univalent branch g defined on T\{p_1
,p_2} with g(c)=α.
(iv) If p_1≠ p_2, then T is a closed Jordan domain and the
univalent branch g of f^-1 can be extended to a homeomorphism defined on
the closed domain T.
Since f has no branch point on α^∘ and c is simple, the
results follows from Lemmas <ref> and <ref>.
Let Σ=( f,Δ) ∈ℱ.
(i) Σ=( f,Δ) can be understood as a
branched Riemann surface with boundary ∂Σ=( f,∂Δ), such that every point of Σ is in fact a pair
(f,p)=( f(p),p) with p∈Δ: Σ can be
regarded as a union of a finite number of disks ( x_j,U_j)
of Σdefined in Definition <ref>. Then ( f,U_j)
plays the role of charts for Riemann surfaces when Σ is regarded as the
set of pairs ( f,p) =( f(p),p) ,p∈Δ.
(ii) Assume a is a point in Δ and D is a
closed domain in Δ which is a neighborhood of a in
Δ. If the restriction f:D→ Twith
T=f(D) is a homeomorphism, we will call the subsurface K=(
f,D) a simple closed domain of Σ determined by a
(when f(D) is given), and use the pair ( T,a) or (
T,( f,a) )to denote this closed domain. It is clear
that K and D are uniquely determined by T and a.
(ii1) Then the term "( W,a) is a closed simple domain
of Σ" means that "W is a closed domain on S and
f has a univalent branch g defined on W and
g(W) is a neighborhood of a in Δ".
(ii2) Assume that D is a closed Jordan domain in Δ
such that ( ∂ D) ∩∂Δ contains an arc
α of ∂Δ and f:D→ T=f(D)
is a homeomorphism. Then D, as an f-lift of T, is uniquely
determined by the pair (T,( ∂ D) ∩∂Δ), or
(T,α), or ( T,( f,α) ) , or (
T,a) , where ais any interior point of α, say a∈α^∘. So we will write
( f,D) =( T,( ∂Δ)
∩∂ D) )=( T,α) =( T,(
f,α) ) =( T,a) ,
call ( f,( ∂Δ) ∩∂ D) the
old boundary of ( f,D) , and (
f,Δ∩∂ D) the new boundary of (
f,D) .
(iii) Assume Σ,α,c,c^',T,g satisfies all conditions of
Lemma <ref>. If p_1≠ p_2, then g can be extended to T so
that for D=g(T), f(D) is the closed Jordan domain
Tand ( T,a) is a closed simple Jordan domain of Σ,
where a∈α^∘. Then ( T,α)and (
T,c) with c=( f,α) both denote (
f,D) . If p_1=p_2, then D is still a Jordan domain,
while T is not. In this case we still use ( T,α) , or
( T,a) , where a is an interior point of α, to denote
the surface ( f,D) , and call it a simple closed
domain as well. In fact, T can be expressed as a union of Jordan curves,
each pair of which only intersect at f(a).
(iv) Now assume Σ=( f,Δ) ∈ℱ
( L,m) and let
∂Σ =c_1( q_1,q_2) +⋯+c_m(
q_m,q_1)
=( f,α_1( a_1,a_2) ) +⋯+(
f,α_m( a_m,a_1) ) ,
be an ℱ( L,m)-partition of ∂Σ. For
each j, by definition of ℱ( L,m) and
ℱ( L,m)-partitions, f has no branch point in
α_j^∘ and f is homeomorphism in a neighborhood of α
_j^∘ in Δ. Then Lemma <ref> applies to
each c_j, and there exist a positive number θ>0 and a closed domain
T_j,θ on S enclosed by c_j and c_j^' such that:
(iv1) If q_j≠ q_j+1, c_j^' is a circular arc from q_j
to q_j+1, ∂ T_j,θ=c_j-c_j^' and
∠( T_j,θ,q_j) =∠( T_j,θ
,q_j+1) =θ∈(0,min_i=j+1∠( Σ,a_i)
),
and ( T_j,θ,α_j) is a simple Jordan domain of
Σ.
(iv2) If q_j=q_j+1, c_j^' consists of two convex
circular arcs such that ∂ T_j,θ=c_j-c_j^',
c_j^' is contained in the disk enclosed by c_j and
c_j^'∩ c_j={q_j}, the two interior angles of T_j,θ
at q_j are equal to θ, and there exists a Jordan domain D in
Δ such that ∂ D=α_j-α_j^' with
α_j^'∘⊂Δ, and f restricted to D\{a_j,a_j+1} is a homeomorphism onto T_j,θ\{q_i}. We then can use ( T_j,θ,α
_j) or ( T_j,θ,c_j) , in which
c_j=( f,α_j) , to denote the subsurface (
f,D) , and call it a closed simple domain of Σ as in (iii).
Let Σ=( f,Δ) be a surface
of 𝐅.
(a) A point a∈Δ is called a simple point of f if f is
homeomorphic in a neighborhood of a in Δ.
(b) A point ( f,a) of Σ is called a simple point of
Σ if a is a simple point of f.
(c) For a subset A of Δand a point a∈ A,
a is called a simple point of f in A if f is homeomorphic in a
neighborhood of a in A, and ( f,a) is called a simple
point of Σ in ( f,D) if a is a simple point of f in
D.
By definition a point ( f,a) of Σ^∘, say,
a∈Δ, is a simple point of Σ if and only if a is a regular
point of f. A point ( f,a) ∈∂Σ is a simple
point of Σ, if and only if a is a regular point of f and
∂Σ is simple in a neighborhood of a in ∂Δ. Note
that a simple point of a subsurface of Σ needs not be a simple point of
Σ.
Consider a surface Σ=( f,Δ)
∈ℱ( L,m) with ℱ( L,m)-partition (<ref>). Assume that for some pair j_1 and j_2 with
1≤ j_1<j_2≤ m,
L(c_j_i)<π,c_j_i^∘∩ E_q=∅,
and one of the following hold
(a) The curvatures k(c_j_i) of c_j_i are distinct for i=1,2.
(b) k(c_j_1)=k(c_j_2) and both c_j_1 and c_j_2 are major
circular arcs.
We will show that we can deform Σ by changing the two arcs c_j_1
and c_j_2 to obtain a surface Σ^'∈ℱ(
L,m) such that
H(Σ^')>H(Σ),L(∂Σ^')=L(∂Σ).
By Corollary <ref>, using the notations ( T_j_i,θ_i,c_j_i) with old boundary c_j_i and
new boundary c_j_i^'∘,i=1,2, in Remark <ref> (iv2),
we can deform the simple domain ( T_j_i,θ_i
,c_j_i) of Σ, i=1,2, as follows.
We replace ( T_j_i,θ_i,c_j_i) with
( T_j_i,θ_i^'^',𝔠
_j_i) , where T_j_i,θ_i^'^' is a domain
on S enclosed by 𝔠_j_i-c_j_i^' and
𝔠_j_i is a convex circular arc from q_j_i to
q_j_i+1, which is a small perturbation of c_j_i with the same
endpoints and 𝔠_j_i∩ c_j_i^'={q_j_i
,q_j_i+1}. Then by (a), or (b), and Corollary <ref>, we may
choose 𝔠_j_i such that ∑_i=1^2A(T_j_i,θ_i)<∑_i=1^2A(T_j_i,θ_i^'^') and ∑_i=1
^2L(c_j_i)=∑_i=1^2L(𝔠_j_i). After this
deformation we obtain a new surface Σ^'∈ℱ(
L,m)such that the ℱ( L,m)-partition
(<ref>) changes into ℱ( L,m)-partition
∂Σ^'=c_1+⋯+c_j_1-1+𝔠_j_1
+c_j_1+1+⋯+c_j_2-1+𝔠_j_2+c_j_2+1+⋯+c_m
of ∂Σ^'. Thus the surface Σ^' with
H(Σ^') larger and L(∂Σ^') unchanged exists.
Σ^' is in fact obtained by moving c_j_i to its left (or
right) hand side a little to the position of 𝔠_j_i,i=1,2.
§ THE DISTANCES ON SURFACES IN ℱ
We first introduce some results for counting terms of partitions.
Assume that m≥3, Σ=( f,Δ)
∈ℱ_r( L,m) , (<ref>) and (<ref>) are
ℱ( L,m)-partitions of ∂Σwith
c_j( q_j,q_j+1) =( f,α_j( a_j
,a_j+1) ) ,j=1,…,m, and that the following (a) and (b) hold.
(a) a and b are two points on ∂Δ,a≠ b, γ_1 is an
arc on ∂Δ from a to b and γ_0=( ∂Δ) \γ_1^∘, both oriented by ∂Δ.
(b) I is a simple arc in Δ from a to b such that
I^∘⊂Δ, I∩ f^-1(E_q)=∅, and either (b1) (
f,-I) is an SCC arc with L( f,I) ≤ L(f,γ_0),
or (b2) ( f,I) is straight with L(f,I)<π.
Then the following hold.
(i) I divides Δ into two Jordan domains Δ_0 and Δ_1 on the left and right hand side of I, respectively, ∂Δ
_0=γ_0+I, and ∂Δ_1=γ_1-I.
(ii) The surface Σ_1=( f,Δ_1) is
contained in ℱ_r( L,m_1^') with
m_1^'=m+2-#[ γ_0∩{a_j}_j=1^m] .
(iii) When (b2) holds Σ_0=( f,Δ_0) is
also contained in ℱ_r( L,m_0^') with
m_0^'=m+2-#[ γ_1∩{a_j}_j=1^m] .
By the assumption, (i) is trivial to verify.
By (b), f has no branch point in I^∘ and thus we have:
(c) f is homeomorphic in a neighborhood of I^∘ in Δ, and thus Σ_1=( f,Δ_1) ∈ℱ_r, and
when (b2) holds Σ_0∈ℱ_r as well.
It is clear that by (b)
L≥ L( f,∂Δ) =L(γ_1+γ_0)≥
L(γ_1)+L(f,I)=L(∂Σ_1).
and for the same reason L≥ L(∂Σ_0) when (b2) holds.
Therefore we have by (b) and (c) that:
(d) Σ_1∈ℱ_r( L) ; and if (b2) holds, then
Σ_0∈ℱ_r( L) also holds.
The endpoints {a,b} gives a refinement of the ℱ(
L,m)-partition (<ref>) which contains m+2 terms, among which
at most two are just points, and we let 𝐀 be the set of all these
terms. It is easy to see that if γ_0 contains s points of
{a_j}_j=1^m, then γ_1 is a sum of m+1-s terms of
𝐀, no matter what #{a,b}∩{a_j}_j=1^m is equal to.
Thus by (c) and (d) ∂Δ_1 has an ℱ_r(
L,m_1^')-partition consisting of the term I and
m+1-s terms of 𝐀, for Σ_1, with s=#γ_0
∩{a_j}_j=1^m. Thus we have Σ_1∈ℱ_r(
L,m_1^') and for the same reason, Σ_0∈ℱ_r( L,m_0^') in the case (b2).
Assume m>3, Σ=(
f,Δ) ∈ℱ_r( L,m),
(<ref>) and (<ref>) are ℱ( L,m)-partitions
of ∂Σwith c_j( q_j,q_j+1) =(
f,α_j( a_j,a_j+1) ) ,j=1,…,m, and the
following (a)–(e) hold (see Figure <ref> for k=3).
(a) k is a positive integer with 2≤ k<m, b_2 and b_2k+1 are
two points on ∂Δ with b_2≠ b_2k+1; γ_0 is the
arc of ∂Δ from b_2k+1 to b_2and γ_0^c is
the arc ( ∂Δ) \γ_0^∘, both
oriented by ∂Δ; and γ_0^'=γ_0^'( a_i_0,a_i_2) is the smallest arc on ∂Δ
containing γ_0 such that ∂γ_0^'={a_i_0
,a_i_2}⊂{a_j}_j=1^m, that is, γ_0^' is
the union of all the terms in (<ref>) which intersect γ_0^∘.
(b) I=I( b_2,b_2k+1) is a simple arc in Δ from b_2∈∂Δ to b_2k+1∈∂Δ which has the
partition
I( b_2,b_2k+1) =I_2( b_2,b_3)
+I_3( b_3,b_4) +⋯+I_2k( b_2k,b_2k+1
) ,
such that for each j=2,…,k,I_2j-1⊂∂Δ, while for each
j=1,…,k,∅≠ I_2j^∘⊂Δ.
(c) I∩γ_0={b_2,b_2k+1}, say, I_3( b_3
,b_4) ,…,I_2k-1( b_2k-1,b_2k) are all
contained in the open arc ( γ_0^c) ^∘and
b_2,b_3,…,b_2k,b_2k+1 are arranged anticlockwise on
∂Δ.
(d) ( f,-I)is an SCC arc on S, L(f,-I)≤ L(f,γ
_0) and ∪_j=1^kI_2j^∘⊂Δ\ f^-1
(E_q).
(e) One of the conditions (e1)–(e3) holds:
(e1) γ_0^'∩ I^∘=∅, say, I_3∩γ
_0^'=I_2k-1∩γ_0^'=∅, as in Figures
<ref> (1) and (2);
(e2) γ_0^'∩ I^∘=∅ and γ_0^∘
∩{a_j}_j=1^m≠∅;
(e3) γ_0^'∩ I^∘=∅ and γ_0^'=γ_0, as in Figure <ref> (1).
Then the following hold:
(i) For each j=2,…,k, the two end points of I_2j-1 are contained in
{a_j}_j=1^m, say {b_3,…,b_2k}⊂{a_j}_j=1^m.
(ii) I divides Δ into k+1 Jordan domains {Δ
_i} _i=0^k such that Δ_0 is on the left hand side of
I and Δ_1,…,Δ_k are on the right hand side of I.
(iii) For each j=1,2,…,k, one of the following holds.
(iii1) Σ_j=( f,Δ_j) ∈ℱ
_r( L,m) if (e1) holds.
(iii2) Σ_j=( f,Δ_j) ∈ℱ
_r( L,m-1)if (e2) or (e3) holds.
(iv) min{ L(∂Σ_1),L(∂Σ_k)}≥min{L(f,γ_01),L(f,γ_02)} where γ_0i,i=1,2, are
the two components of γ_0^'\γ_0^∘.
(i) follows from Lemma <ref>, and (ii) is trivial.
Let γ_j be the arc of ∂Δ from b_2j to b_2j+1
for j=1,…,k. Then we may arrange Δ_j so that ∂Δ_j=γ_j-I_2j,j=1,2,…,k. It is clear that γ_j
∩γ_0 contains at most one point for j=1,…,k, and then by (d)
we have
L(f,∂Δ_j)=L( f,γ_j-I_2j) ≤
L(f,γ_j)+L(f,γ_0)≤ L(f,∂Δ)≤ L,
and moreover f restricted to a neighborhood of I_2j^∘ in
Δ is a homeomorphism, by (d) and the assumption
Σ=( f,Δ) ∈ℱ_r(L,m).
We may assume
{ a_i_0,a_i_2}⊂{a_j}_j=1^m with
i_0<i_2<m+i_0( a_m+i=a_i) .
Assume (e1) holds. Then {b_3,b_2k}is contained in I^∘
∩{a_j}_j=1^m by (i), and is outside γ_0^' by (e1),
and then a_i_2 and b_3 are distinct and both contained in
γ_1∩{a_j}_j=1^m, and for the same reason, a_i_0 and
b_2k are distinct and both contained in γ_k∩{a_j}_j=1
^m. Then it is easy to see s_1=#[ ( ∂Δ)
\γ_1^∘] ∩{a_j}_j=1^m≥#γ
_k∩{a_j}_j=1^m≥2and Σ_1 is contained in
ℱ_r( L,m) , by applying Lemma <ref> to I_2;
and so is Σ_k for the same reason. It is trivial to see that when
k>2, for j=2,3,…,k-1, s_j=#[ ( ∂Δ)
\γ_j^∘] ∩{a_j}_j=1^m≥#{a_i_2
,b_3,b_2k,a_i_0}=4, and thus by Lemma <ref> Σ_j
∈ℱ_r( L,m-2) . Thus (iii1) holds.
Assume (e2) holds. Then there exists i_1 with i_0<i_1<i_2 and
a_i_1∈γ_0^∘. Consider Δ_1 and Δ_k. It
is clear that a_i_0 and a_i_1 are both contained in (
∂Δ) \γ_1. On the other hand, by (e2) and
(i) b_3∈{∂γ_1}∩{a_j}_j=1^m.
Thus s_1=#[ ( ∂Δ) \γ
_1^∘] ∩{a_j}_j=1^m≥3 and by Lemma <ref> we
have Σ_1∈ℱ_r( L,m+2-s) ⊂ℱ_r( L,m-1) . For the same reason Σ_k
∈ℱ_r( L,m-1) . It is trivial to see that when
k>2, for j=2,3,…,k-1, s_j=#[ ( ∂Δ)
\γ_j^∘] ∩{a_j}_j=1^m≥#{a_i_2
,b_3,b_2k,a_i_0,a_i_1}=5, and thus we have by Lemma <ref>
Σ_j∈ℱ_r( L,m-3) .
Assume (e3) holds. Then we still have s_1=#[ ( ∂Δ) \γ_1^∘] ∩{a_j}_j=1
^m≥3 and thus Σ_1∈ℱ_r( L,m-1) . For
the same reason, we also have Σ_k∈ℱ_r(
L,m-1). Assume k>2 and let j∈{2,…,k-1}. Then by (i)
∂γ_j are contained in {a_j}_j=1^m, and by the
assumptions, a_i_0,a_i_2 are outside γ_j^∘ and thus
s_j=#[ ( ∂Δ) \γ_j^∘] ∩{a_j}_j=1^m≥#{ a_i_2,b_3,b_2k
,a_i_0} =4, and then by Lemma <ref> we have Σ_j
∈ℱ_r( L,m+2-4) =ℱ_r(L,m-2). (iii2) has
been proved.
It is clear that L( ∂Σ_1) ≥ L(
f,γ_01) and L( ∂Σ_k) ≥ L(
f,γ_02) ; this implies that (iv) holds.
Let Σ=( f,Δ) ∈ℱ. For
any two points a and b in Δ, define their d_f-distance d_f( a,b) by
d_f(a,b)=inf{L(f,I):I is a curve in Δ with endpoints a and b};
for any two sets A and B in Δ define their d_f-distance by
d_f( A,B) =inf{ d_f( a,b) :a∈ A,b∈
B} ;
and for any set A in Δ and any ε>0 define the
d_f-ε-neighborhood of A (in Δ) by
N_f(A,ε)={ x∈Δ:d_f(A,x)<ε} .
The distance d_f( a,b) is also called the distance of
Σ between the two points ( f,a) and (
f,b) of Σ. Sometimes we will write d_Σ( (
f,a) ,( f,b) ) =d_f( a,b) . Then
the notation d_Σ( ( f,A) ,( f,B)
) between two sets of Σ, and N_Σ(( f,A)
,ε) is well defined.
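Note that for any curve I in Δ from a to b, the spherical length L(f,I) of its image is at least the spherical distance of its endpoints, so
d_f( a,b) ≥ d( f(a),f(b))
for all a,b∈Δ; this elementary estimate is used below.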
When ε is small enough, (a,N_f( a,ε) )
is the disk of Σ with radius ε (see Lemma <ref>,
Corollary <ref> and Definition <ref>). On the other hand, Corollary
<ref> (iv) directly implies
Let Σ=( f,Δ)
∈ℱ(L,m), let (<ref>) and (<ref>) be ℱ(
L,m)-partitions of ∂Σwith c_j( q_j
,q_j+1) =( f,α_j( a_j,a_j+1) )
,j=1,…,m, for any j let a∈α_j^∘, and finally let I
be a d_f-shortest path in Δ. Then for any disk (
a,U_δ) of Σ with small enough radius δ<π/2 the following hold:
(i) f:U_δ→ f(U_δ) is a
homeomorphism and f(U_δ) is a convex lens. If c_j is
straight, then f(U_δ) is half of a disk whose diameter is
contained in c_j and f(U_δ) is on the left hand side
of c_j.
(ii) For any two points a_1 and a_2 in U_δ, the
d_f-shortest path from a_1 to a_2 exists, which is the unique
f-lift of the line segment f(a_1)f(a_2) in U.
(iii) If I∩U_δ≠∅, then I∩U_δ is a subarc of I.
(iv) If a∈ I^∘∩α_j^∘, then c_j is straight and
I^∘∩α_j^∘ is an open neighborhood of a in
α_j. Thus I^∘∩α_j^∘=∅ if c_j is
strictly convex.
For any Σ=( f,Δ)
∈ℱ, d_f(x,y) is a continuous function on Δ×Δ. In other words, d_Σ( ·
,·) is a continuous function on Σ×Σ.
The proof is simple and standard and left to the reader.
Let Σ=( f,Δ) ∈ℱ and
let ( x,U) be a disk of Σ with radius δ (see
Definition <ref>). Then for any y∈Δ\ U,
d_f(x,y)≥δ, and d_f( x,y) =δ for every
y∈( ∂ U) \∂Δ.
This is trivial by Definition <ref> and Remark <ref>.
Let Σ=( f,Δ) ∈ℱ,
let x∈Δ, and let ( x,U_x), (
x,U_x^') and ( x,U_x^'') be
three disks in Σ with radius δ/4,δ/2, and δ (see
Definition <ref>), respectively. Then for any two distinct points a and
b in U_x, the d_f-shortest path I(a,b) exists; and more precisely,
putting A=f(a),B=f(b),X=f(x), one of the following holds.
(i) If a=x, then the f-lift I( x,b) of XB is
the unique shortest path from x to b.
(ii) If A≠ B and AB has an f-lift I=I(a,b) from a to
b, then I is the unique d_f-shortest path.
(iii) If A=B, or A≠ B but AB has no f-lift from a to
b, then the f-lift I=I(a,b) of AXB, from a to x, and
then to b, is the unique d_f-shortest path.
(i) follows from Corollary <ref> (v). (ii) is trivial. We only prove (iii).
By definition there exists a sequence of paths I_n⊂Δ
from a to b such that for[If I_n is the path I_n
:[0,1]→Δ, the length L( f,I_n)
should be understood to be L(f∘ I_n,[0,1]).] s_n=L(
f,I_n)
lim_n→∞s_n=d_f( a,b) .
It is clear that d_f( a,b) ≤ L(AXB)<δ/2 by
Corollary <ref> (v), since d( X,A) <δ/4,d(
X,B) <δ/4. Thus, for sufficiently large n, we see that
I_n⊂ U_x^',
for otherwise we have d_f( a,b) ≥δ/2.
We may parametrize I_n by length with L( f,I_n|_[0,s])
=s,s∈[0,s_n] and
s_n=L(f,I_n)→ d_f( a,b) >0.
By the Arzelà–Ascoli theorem, we may assume Γ_n=(f,I_n), as a mapping
from [0,s_n] to S, has a subsequence uniformly converging to a path
Γ_0:[0,s_0]→ S from A to B and we assume the
subsequence is Γ_n itself. Then we have
L(Γ_0)≤ d_f( a,b) ≤ L(AXB),
and Γ_0⊂D( x,δ/2) .
If X∈Γ_0, then we have
L(Γ_0)=d_f( a,b) =L(AXB)
and by (i) the f-lift I(a,b) of AXB is a d_f-shortest
path from a to b. Let I^'( a,b) be another d_f-shortest path. If x∈ I^'( a,b) , then x gives a
partition I^'( a,b) =I^'( a,x)
+I^'( x,b) , I^'( a,x) and
I^'( x,b) have to be the d_f-shortest paths from
a to x, and x to b, respectively by (i), and thus I^'(
a,x) +I^'( x,b) =I( a,b) by(i). If
x∉ I^'( a,b) , we can show, as the following
discussion for the case X∉Γ_0, that I^'(
a,b) is the unique f-lift of AB, which implies
d_f( a,b) =L(I^'( a,b) )=L(
AB) =L(AX)+L(XB)=d_f(
a,b) . This is a contradiction, since L( AB)
<L(AX)+L(XB) when X∉AB.
Assume X∉Γ_0. Then there exists a disk ( x,V_x) of Σ in ( x,U_x) such that I_n⊂
U_x\V_x, f is locally homeomorphic on
U_x\ V_x, and Γ_0⊂ f(U_x
\ V_x). Then Γ_0 has an f-lift I_0 such that
I_n uniformly converges to I_0, by Lemma <ref>. Then I_0 is
a d_f-shortest path from a to b. We will show that Γ
_0=AB.
It is clear that I_0 is simple, for otherwise there is another path
I_0^'=I_0^'( a,b) which is obtained from
I_0 by omitting a loop of I_0 so that L( f,I_0^') <d_f( a,b) contradicting I_0 being shortest. It
is also clear that any subarc of I_0 is a d_f-shortest path.
Let y∈ I_0^∘. Then by the assumption we have x∉ I_0 and
by Corollary <ref> (i) and (iv), y has a neighborhood I_y
=I_y( y^',y^'') in I_0 so that I_y
is contained in a disk ( y,U_y) of ( x,U_x^')with x∉ U_y. Then f(U_y) is convex and
f:U_y→ f(U_y) is homeomorphic. Thus both f(
I_y) and f(y^')f(y^'')can be lift
into U_y⊂ U_x^' from y^' to
y^''. Then I_y has to be the lift of f(y^')f(y^''). Thus ( f,I_0) is straight
everywhere and we have ( f,I_0) =Γ_0=AB.
Then in this case, I_0 is the unique d_f-shortest from a to b.
Let Σ=( f,U) ∈ℱ and
let a_1 and a_2 be two distinct points in Δ. Then
the d_f-shortest path I from a_1 to a_2 exists, and for any such
path I, ( f,I) is a polygonal path on S from f(a_1)
to f(a_2).
It is clear that, for some positive number δ and some positive integer s_0, there are 3s_0
disks ( p_s,U_s) , ( p_s,U_s^') ,
( p_s,U_s^'') of Σ, with radius
δ/4,δ/2,δ for each s=1,…,s_0, such that
𝒪={U_s}_s=1^s_0 is an open covering of U.
Note that by Lemma <ref> and Definition <ref>, for each
s=1,…,s_0, the four relations U_s∩∂ U≠∅
,U_s^'∩∂ U≠∅,U_s^''∩∂
U≠∅ and p_s∈∂ U are equivalent.
Write C_s=U∩∂ U_s, the closure of the part of
∂ U_s located in U, for s=1,2,…,s_0. We assume that no
U_s^'' contains U, and then C_s≠∅
and, as α_3 in Lemma <ref> (B) (B1), C_s is connected for
all s=1,…,s_0.
By definition, there exists a sequence of paths J_n in U from
a_1 to a_2 such that
lim_n→∞L(f,J_n)=d_f(a_1,a_2).
It is clear that a_1∈ U_s_1∈𝒪for some s_1≤
s_0. If C_s_1∩ J_n=∅ for some J_n, then
{a_1,a_2}⊂ U_s_1 and the d_f-shortest path I, such
that ( f,I) is polygonal, exists by Lemma <ref>. Thus
we may assume that for each n,C_s_1∩ J_n≠∅, and let
a_n2 be the last point of J_n contained in J_n∩ C_s_1
. By taking a subsequence we may assume that a_n2→ a_02∈
C_s_1, and then for the smaller arc C_s_1^n of C_s_1
between a_02 and a_n2, the length L(f,C_s_1^n) tends to 0,
since we assumed f is holomorphic on U(note that C_s_1
may be a circle). By the previous lemma, the d_f-shortest path
I_01=I_01( a_01,a_02) =I_01( a_1
,a_02) , which means by convention that I_01 is an arc from
a_01=a_1 to a_02, exists, and ( f,I_01) is
polygonal. Let J_n,1=C_s_1^n+J_n( a_n2,a_2) and
J_n^1=I_01( a_01,a_02) +J_n,1.
Then we still have
L(f,J_n^1)→ d_f(a_1,a_2).
Let U_s_2 be an element of 𝒪 such that a_02∈ U_s_2
. Then s_2≠ s_1 and when n is large enough C_s_1^n⊂
U_s_2. Thus we have by definition of a_n2,
J_n,1⊂∪_s∈{1,2,…,s_0}\{ s_1}U_s.
Applying the same argument to J_n,1, we can show that there exist a point
a_03∈ C_s_2, a d_f-shortest path I_02( a_02
,a_03) , a path J_n,2from a_03 to a_2 such that
J_n,2⊂∪_s∈{1,2,…,s_0}\{ s_1
,s_2}U_s,
and for the path J_n^2=I_01( a_01,a_02) +I_02(
a_02,a_03) +J_n,2 from a_1=a_01 to a_2
L(f,J_n^2)→ d_f(a_1,a_2).
Repeating the above method a finite number of times, we can finally prove that
there exists a path
I=I_01+I_02+⋯+I_0s^∗
with s^∗≤ s_0 such that L(f,I)=d_f(a_1,a_2). The existence
of I is proved.
For any d_f-shortest path I from a_1 to a_2, we may apply the
above argument to I to show that (f,I) is polygonal. This completes the proof.
Let m≥3, Σ=( f,Δ) ∈ℱ_r(L,m) and let b_1 and b be two distinct
points on ∂Δ with
d_f(b_1,b)<π.
Let
∂Δ=α_1( a_1,a_2) +α_2(
a_2,a_3) +⋯+α_m( a_m,a_1)
be an ℱ(L,m)-partition of ∂Σ and
∂Σ=c_1( q_1,q_2) +c_2( q_2
,q_3) +⋯+c_m( q_m,q_1)
be the corresponding ℱ(L,m)-partition. Then for any d_f-shortest path I=I( b_1,b) from b_1 to b, say,
L(f,I)=d_f(b_1,b), the following hold:
(i) I is simple.
(ii) For each component J of I∩Δ\ f^-1(E_q), (
f,J) is a simple straight arc with L(f,J)<π.
(iii) For each j∈{1,…,m}, if c_j is strictly convex, then
I∩α_j^∘⊂{b_1,b}, and thus I^∘∩α
_j^∘=∅.
(iv) For each j∈{1,…,m}, if c_j is straight and I^∘
∩α_j^∘≠∅, then one of the following holds:
(iv1) α_j⊂ Ior I⊂α_j;
(iv2) α_j∩ I is an subarc of α_j with α_j∩
I=α_j( a_j,b^') , or α_j∩
I=α_j( b^',a_j+1) , where b^'∈{b_1,b};
(iv3) α_j∩ I consists of two subarcs I^' and
I^'' of α_j, with I^'∩ I^''=∅,I^'=α_j( a_j,b_1^'),
I^''=α_j( b_2^',a_j+1) , and
{b_1^',b_2^'}={b_1,b}. This occurs only if
{b_1,b}⊂α_j and the subarc of c_j with endpoints
f(b_1) and f(b) has length >π;Thus each compact component
of I^∘∩∂Δ is a point of { a_j}
_j=1^m, or an arc of ∂Δ with two endpoints in {
a_j} _j=1^m.
(v) I has a partition I=I_1+I_2+⋯+I_2k+1,k≤ m+1, such that
for each j=1,2,…,k, I_2j^∘≠∅ and I_2j^∘⊂Δ; and for each j=1,2,…,k+1, I_2j-1⊂∂Δ and either I_2j-1 is a point or ( f,I_2j-1
) is an arc of ∂Σ which is a polygonal path on S (if
I∩Δ=∅, k=0 and I=I_1).
(vi) If k≥1, and if for some j=1,2,…,k, I_2j^∘∩
f^-1(E_q)=∅, then ( f,I_2j) is a simple line
segment with L( f,I_2j) <πand f restricted to a
neighborhood of I_2j^∘ is a homeomorphism.
(vii) If k≥1 and I_1 is not a point, then the joint point of I_1
and I_2 is contained in {a_j}_j=1^m, and if I_2k+1 is not a
point, then the joint point of I_2k and I_2k+1 is also contained in
{a_j}_j=1^m.
(viii) If k≥2, then the endpoints of all I_j for j=3,5,…,2k-1,
are contained in {a_j}_j=1^m.
(ix) Assume k≥1. Then I_2 divides Δ into two Jordan domains
Δ_1and Δ_2. Assume, in addition to k≥1,
Δ∩ I_2∩ f^-1(E_q)=∅,
and one of the following (a)–(c) holds:
(a) ( ∂Δ_i) ∩∂Δ contains at least
two points of {a_j}_j=1^m for each i=1,2; or
(b) k>1, and the two endpoints of I_2are not simultaneously contained
in any term α_j in (<ref>), for j=1,…,m; or
(c) k=1, the two endpoints of I_2are not simultaneously contained in
any term α_j in (<ref>), for j=1,…,m, and either one of
I_1 and I_3 is not a point, or one of I_1 and I_3 is a point
contained in {a_j}_j=1^m. Then both ( f,Δ_1) and ( f,Δ_2)are
contained in ℱ_r(L,m).
(x) If b_1∈α_1^∘and
L(f,I_1)<d( { q_1,q_2} ,f(b_1)) ,
then I_1 is a point, and if in addition k≥1, (<ref>) holds,
b_2∉( α_m+α_1+α_2) ^∘,
and
L(f,I_2)<d( { q_1,q_2} ,f(b_1)) ,
then the conclusion of (ix) holds: I_2 cuts Σ into two surfaces
contained in ℱ_r(L,m).
(i) trivially holds.
For each regular point x∈Δ of f, there exists a disk (
x,U_x) of Σ such that f restricted to U_x is a
homeomorphism onto a disk on S. Then for any two points a and b in
U_x, the path I(a,b) such that ( f,I(a,b)) is a simple
straight path is a d_f-shortest path from a to b. On the other hand
all singular points are all contained in f^-1(E_q). Therefore (ii)
follows from Lemma <ref> (ii) and (<ref>).
(iii) and (iv) follows from Lemma <ref> (iv).
It follows from (iii) and (iv) that each component of I∩∂Δ
contains at least one point of {a_j}_j=1^m∪{b_1,b_2}. Thus,
by (i), we conclude that I∩∂Δ contains at most m+2
components, and then k≤ m+1. (v) is proved.
Now, it is clear that (ii) implies (vi), and that (iii) and (iv) imply (vii)
and (viii).
To prove (ix) assume that k≥1 and (<ref>) holds. Then by (vi)
( f,I_2) is a simple line segment and by the assumption
Σ∈ℱ_r(L,m) we have
f restricted to a neighborhood of I_2^∘ in
Δ is a homeomorphism.
We first assume that (a) holds and write I_2=I_2(B_1,B_2). Then
Δ\ I_2 consists of two Jordan domains Δ
_j,j=1,2. By (a) B_1 and B_2 give a refinement of the partition
(<ref>) and each of the two arcs on ∂Δ with endpoints
B_1 and B_2 contains at most m-1 terms. Then Condition <ref>
implies that both Σ_1=( f,Δ_1) and
Σ_2=( f,Δ_2) are contained in
ℱ_r(L,m) if both L(∂Σ_1)≤ L(∂Σ)
and L(∂Σ_2)≤ L(∂Σ) hold. But these two
inequalities easily follow from the fact that ( f,I_2) is straight
and has length less than π. We have proved (ix) when (a) holds.
To complete the proof of (ix), it suffices to show that (b) or (c) implies (a).
When k>1, the terminal point B_2 of I_2 is contained in
{a_j}_j=1^m by (viii), and thus the hypothesis (b) implies (a). When
k=1, by (c) and (vii) at least one endpoints of I_2 is contained in
{a_j}_j=1^mand thus (c) also implies (a). (ix) is proved.
Assume b_1∈α_1^∘ and (<ref>) holds. Then for
δ=d( { q_1,q_2} ,f(b_1))
L(f,I_1)<δ≤min{L(c_1( q_1,f(b_1)) )
,L(c_1( f(b_1),q_2) )},
and then ( f,I_1) ⊂ c_1^∘, which implies
I_1⊂α_1^∘ and thus ∂ I_1 does not intersects
{a_j}_j=1^m, contradicting (vii) if I_1 is not a point. Thus
I_1 is a point and we have
I_1+I_2=I_2( b_1,B_2) =I_2( B_1,B_2)
.
In addition to (<ref>), assume (<ref>), (<ref>) and
(<ref>) hold. Then Condition <ref> also holds. We show that the
condition (a) is satisfied.
First consider the case k>1. Then by (viii) we have B_2∈{a_j
}_j=1^m. If B_2=a_1 or a_2, then we have L(f,I_2
)=L(f,I_2( b_1,B_2) )≥δ since b_1=B_1
∈α_1^∘, contradicting (<ref>), and so B_2∈{a_j}_j=3^m. Thus (a) holds.
Assume k=1. If I_3 is not a point, then by (vii) we have b_3=B_2
∈{a_j}_j=1^m, and if in addition B_2=a_1 or a_2 we can
obtain a contradiction by (<ref>) again, and thus we have B_2∈{a_j}_j=3^m and (a) holds again. If I_3 is a point, then
I_2=I( b_1,b_2) =I( B_1,B_2)
(note that k=1) and (a) also holds, by (<ref>). We have proved that
(<ref>), (<ref>), (<ref>) imply (a), and then all conclusions
of (x) hold, by (ix). The lemma is proved completely.
Let Σ=( f,Δ)be a surface
in ℱ,ε be a positive number and let A,B be two compact
sets in Δ. Then
d_f(N_f(A,ε),N_f(B,ε))≥ d_f(A,N_f
(B,ε))-ε≥ d_f(A,B)-2ε.
It suffices to prove the second inequality. Let a and b^' be any
two points of A and N_f(B,ε). Then there exists b∈ B such
that d_f(b,b^')<ε. Then
d_f(a,b^')≥ d_f(a,b)-d_f(b,b^')≥ d_f
(a,b)-ε≥ d_f(A,B)-ε.
This implies d_f(A,N_f(B,ε))≥ d_f(A,B)-ε.
Let L be a positive number in ℒ (see
Definition <ref> for ℒ) with L≥2δ_E_q. Then there
exists a positive number δ_0 such that
d_f(Δ∩ f^-1(E_q),∂Δ)>δ_0
holds for all surfaces Σ=( f,Δ) in
ℱ(L) with
L(∂Σ)≥δ_E_q,
Δ∩ f^-1(E_q)≠∅,
and with
H(Σ)>H_L-π/2L( ∂Σ) .
Since L∈ℒ, by Lemma <ref>, there exists a sequence
δ_L,n with 0<δ_L,n<1/n such that
H_L+1/n>H_L+δ_L,n>H_L,n=1,2,….
Assume δ_0 does not exist. Then there exists a sequence Σ
_n=( f_n,Δ) ∈ℱ(L) such that for
every n,
L(∂Σ_n)≥δ_E_q,
Δ∩ f_n^-1(E_q)≠∅,
H(Σ_n)>H_L-π/2L( ∂Σ_n) ,
and
d_f_n(a_n,∂Δ)<δ_L,n/2
for some a_n∈Δ∩ f_n^-1(E_q). Then for each n there is an
arc α_n=α( a_n,b_n) in Δ
such that b_n∈∂Δ, α_n\{b_n
}⊂Δ and f_n restricted to α_n is a homeomorphism
onto a polygonal path β_n with
L(β_n)<δ_L,n/2,
and then we consider the new surface Σ_n^' which is obtained by cutting
Σ_n along β_n, so that the interior of Σ_n^'
is equivalent to ( f_n,Δ\α_n) and
∂Σ_n^'=β_n+∂Σ_n-β_n.
Then for all n=1,2,…,
R(Σ_n^')=R(Σ_n)+4π,
and
2δ_E_q<L(∂Σ_n^')=L(∂Σ_n
)+2L(β_n)<L(∂Σ_n)+δ_L,n≤ L+δ_L,n,
Thus by (<ref>)–(<ref>), we have the contradictory estimate
H_L+1/n ≥ H_L+δ_L,n≥ H(Σ_n^'
)=R(Σ_n^')/L(∂Σ_n^')≥[ R(Σ_n)+4π] /[ L(∂Σ_n)+δ_L,n]
=[ H(Σ_n)+4π/L(∂Σ_n)] /[ 1+δ_L,n/L(∂Σ_n)] ≥[ H_L-π/2L(
∂Σ_n) +4π/L(∂Σ_n)] /[ 1+δ_L,n/L(∂Σ_n)]
=[ H_L+7π/2L( ∂Σ_n) ] /[ 1+δ_L,n/L(∂Σ_n)] ≥[ H_L+7π/2L]
/[ 1+δ_L,n/δ_E_q]
→ H_L+7π/2L ( as n→
∞) .
§ UNDECOMPOSABILITY OF PRECISE EXTREMAL SEQUENCES
This section studies precise extremal sequences of ℱ
_r^'(L,m), ℱ_r(L,m), or ℱ(L,m).
(1) A surface Σ_L,m=(
f,Δ) is called extremal in ℱ
_r^'(L,m), if
H(Σ_L,m)=H_L,m=sup_Σ∈ℱ_r^'(L,m)
H(Σ).
(2) If in addition
H(Σ_L,m)>H_L-π/2L( ∂Σ_L,m) ,
and for any other extremal surface Σ in ℱ_r^'
(L,m),
L(∂Σ_L,m)≤ L(∂Σ),
then Σ_L,m is called a precise extremal surface of
ℱ_r^'(L,m).
Extremal surfaces and precise extremal surfaces of
ℱ_r( L,m) (or ℱ( L,m))
are defined when the above ℱ_r^'(L,m) is replaced by
ℱ_r( L,m) (or ℱ( L,m)).
Recall Definition <ref>, where we defined precise extremal surfaces
of ℱ( L) . In that definition, we do not require
(<ref>). But for a precise extremal surface Σ_L,m in ℱ
_r^'(L,m),ℱ_r( L,m) or ℱ(
L,m), we always assume H(Σ_L,m)is sufficiently close to
H_L=sup_Σ∈ℱ(L,m)H(Σ), that is, (<ref>) is satisfied.
By Definition <ref>, the equality (<ref>) and Corollary
<ref>, if m is large enough and Σ_L,m is extremal in
ℱ_r^'(L,m), ℱ_r( L,m) or
ℱ( L,m), then H(Σ_L,m)>H_L-π/2L, which implies (<ref>).
(1) A sequence Σ_n=( f_n,Δ)
∈ℱ_r^'(L,m) is called an extremal sequence in
ℱ_r^'(L,m), if
lim_n→∞H(Σ_n)=H_L,m.
(2) If in addition
H(Σ_n)≥ H_L-2π/L(∂Σ_n),n=1,2,…,
lim_n→∞L(∂Σ_n) exists and for any other
extremal sequence Σ_n^' in ℱ_r^'(L,m),
lim_n→∞inf L(∂Σ_n^')≥lim_n→∞L(∂Σ_n),
Σ_n is called a precise extremal sequence in
ℱ_r^'(L,m).
Extremal sequences and precise extremal sequences of
ℱ_r( L,m) (or ℱ( L,m))
are defined, if ℱ_r^'(L,m) is replaced by ℱ
_r( L,m) (or ℱ( L,m)).
Any precise extremal surface (sequence) of ℱ
_r( L,m) is a precise extremal surface (sequence) of
ℱ(L,m), and any precise extremal surface (sequence) of
ℱ_r^'( L,m) is a precise extremal surface
(sequence) of ℱ_r(L,m) and ℱ( L,m) .
This follows from Corollary <ref>.
Note that surfaces in an extremal sequence need not be extremal, that is, it
is possible that H(Σ_n)<H_L,m for some n. Lemma <ref> can be
extended to extremal sequences of ℱ( L,m) .
Let L≥2δ_E_q be a positive number with
L∈ℒ. Then there exists a positive integer m_0 and a positive
number δ_0 such that: for any m>m_0 and any sequence Σ
_n=( f_n,Δ) ∈ℱ(L,m), if
lim_n→∞H(Σ_n)=H_L,m
f_n^-1(E_q)∩Δ≠∅,
and
L(∂Σ_n)≥δ_E_q,
then
d_f_n(f_n^-1(E_q)∩Δ,∂Δ)≥δ_0,
for sufficiently large n.
By (<ref>) Σ_n is extremal in ℱ(L,m). Then by Remark
<ref>, when m is given large enough, for sufficiently large n, we have
(<ref>), and then by Lemma <ref> we have the conclusion.
(1) For any fixed L>0 and any sufficiently large positive
integer m, there exists a precise extremal sequence in ℱ
_r^'(L,m).
(2) If L<2δ_E_q, then for any extremal sequence Σ_n of
ℱ_r^'(L,m), ℱ_r(L,m) or ℱ
(L,m),
lim_n→∞inf L(∂Σ_n)=L.
(3) If L≥2δ_E_q, then for any extremal sequence Σ_n in
ℱ_r^'(L,m), ℱ_r(L,m) or ℱ
(L,m),
lim_n→∞inf L(∂Σ_n)≥2δ_E_q.
(1) By (<ref>), Definition <ref> and Corollary <ref>, we may assume
that m is sufficiently large such that
H_L,m>H_L-π/2L.
Then by Definition <ref>, Corollary <ref> and (<ref>), there exists
an extremal sequence in ℱ_r^'(L,m), and for any extremal
sequence F_n in ℱ_r^'(L,m), we have
H(F_n)→ H_L,m>H_L-π/2L as n→∞.
Let L_0≤ L be the infimum of all numbers L^' such that there
exists an extremal sequence F_nof ℱ_r^'(L,m) such
that
lim_n→∞inf L(∂ F_n)=L^'.
Then there exists an extremal sequence Σ_n in ℱ
_r^'(L,m)such that lim_n→∞L(∂Σ
_n)=L_0. By (<ref>), for sufficiently large n_0 and any n≥
n_0, Σ_n satisfies
H(Σ_n)>H_L-π/2L≥ H_L-π/2L(
∂Σ_n) ,
and so Σ_n with n≥ n_0 is a precise extremal sequence in
ℱ_r^'(L,m).
(2) Assume L<2δ_E_qand Σ_n is an extremal sequence of
ℱ_r^'(L,m), ℱ_r(L,m) or ℱ(L,m),
with lim_n→∞inf L( ∂Σ_n)
=l_0<L. We will deduce a contradiction. Since subsequences of an extremal
sequence are also extremal, we may assume lim_n→∞L(
∂Σ_n) =l_0. Let L^',L^''∈
(l_0,L),L^'<L^''. Then it is clear that there exist
disks T_L^',T_L^'' and, for each large enough n, a
disk T_n on S with T_n⊂ T_L^'⊂ T_L^''⊂ S\ E_q such that
L(∂Σ_n)=L(∂ T_n)<L^'=L(∂ T_L^'
)<L^''=L(∂ T_L^'').
Then by Theorem <ref> and Lemma <ref>, we have a contradiction:
H_L,m=lim_n→∞H(Σ_n)≤lim_n→∞H(T_n)≤ H(T_L^')<H(T_L^'')≤ H_L,m,
where the last inequality follows from the fact that T_L^''
is a surface of ℱ_r^'(L,m).
(3) Assume L≥2δ_E_q and let Σ_n be any extremal
sequence in ℱ_r^'(L,m), or ℱ_r(L,m), or
ℱ(L,m) with l_0=lim_n→∞inf L(∂Σ_n)<2δ_E_q. Then for some L^' and L^'' in (l_0,2δ_E_q) with L^'<L^'', as
discussed in (2), we have H_L,m<H(T_L^'')≤
H_L^'',m. But this contradicts L^''<L, which
implies T_L^''∈ℱ_r^'(L,m), and
then H_L^'',m≤ H_L,m.
Let Σ_L,m be a precise extremal surface in ℱ_r(L,m).
Then there exists a precise extremal surface Σ_1 of ℱ
_r^'(L,m) such that
∂Σ_1=∂Σ_L,m.
By Corollary <ref>, there exists a surface Σ_1 in
ℱ_r^'(L,m) such that H(Σ_1)≥ H(Σ_L,m) and
L(∂Σ_1)≤ L(∂Σ_L,m). Since ℱ_r^'(L,m)⊂ℱ_r(L,m), Σ_1 is a precise extremal surface
of ℱ_r^'(L,m) which is also precise extremal in
ℱ_r(L,m). Thus L(∂Σ_1)=L(∂Σ_L,m), and
then we have ∂Σ_1=∂Σ_L,m by Theorem <ref>.
Let L∈ℒ be a positive number and
let Σ_n be an extremal sequence of ℱ(L,m). Σ_n
is called decomposable in ℱ(L,m^') if for some positive
integer j_0≥2, there exists a subsequence Σ_n_k of
Σ_n, a sequence {{Σ_n_kj}
_j=1^j_0} _k=1^∞ with {Σ_n_k
j} _j=1^j_0⊂ℱ(L,m^') and a sequence
ε_k of positive numbers such that
lim_k→∞ε_k=0,
∑_j=1^j_0R(Σ_n_kj)≥ R(Σ_n_k)-ε
_k, k=1,2,…,
∑_j=1^j_0L(∂Σ_n_kj)≤ L(∂Σ_n_k
)+ε_k,
and one of the following conditions holds:
(a) For each j≤ j_0, lim_k→∞inf L(∂Σ_n_kj)<lim_k→∞inf L(∂Σ_n_k).
(b) For at least two distinct subscripts j_1 and j_2 in {1,2,…
,j_0}
lim_k→∞inf L(∂Σ_n_kj_i)>0,i=1,2.
(c) For some subscript j_1≤ j_0,
lim_k→∞inf L(∂Σ_n_kj_1)>0 and
lim_k→∞sup H(Σ_n_kj_1)=0.
Let L∈ℒ be a positive number and let
Σ_n be a precise extremal sequence of ℱ(L,m). Then
Σ_n cannot be decomposable in ℱ(L,m^') for any
positive integer m^'≤ m.
By Lemma <ref> we have
L_1=lim_k→∞inf L(∂Σ_n_k)>0.
Assume that the sequence {{Σ_n_kj}
_j=1^j_0} _k=1^∞ satisfying (<ref>), (<ref>)
and one of (a)–(c) in the definition exists. We may further assume
H(Σ_n_k1)=max_1≤ j≤ j_0H(Σ_n_kj).
Then for each k, by (<ref>)–(<ref>) and Lemma <ref>
H(Σ_n_k1)≥[ H(Σ_n_k)-ε_k/L(∂Σ_n_k)] /[ 1+ε_k/L(∂Σ_n_k)] ,
which with (<ref>) implies
lim_k→∞inf H(Σ_n_k1)≥lim_k→∞H(Σ_n_k)=H_L,m.
Thus Σ_n_k1,k=1,2,…, is an extremal sequence of ℱ
(L,m) and in fact the equality holds (note that ℱ(
L,m^') ⊂ℱ(L,m) and we assumed Σ_n_k
j∈ℱ( L,m^')).
Assume (a) or (b) holds. Then, by (<ref>) and (<ref>), we have, for
sufficiently large k,
L(∂Σ_n_k1)≤ L^'<L_1,
for some L^'<L_1. Since Σ_n_k1 is extremal in
ℱ(L,m), (<ref>) and (<ref>) show that Σ_n is not
a precise extremal sequence in ℱ(L,m). This contradicts the
assumption that Σ_n is precise extremal in ℱ(L,m).
Assume (c) holds. Then 1≠ j_1, and then (<ref>) holds for
sufficiently large k and we obtain a contradiction again. We have proved that
Σ_n is not decomposable in ℱ(L,m^') when
m^'≤ m.
As a direct consequence of the previous lemma we have the following.
Let L be a positive number. Then for any positive integer
m^'≤ m, any precise extremal surface Σ of ℱ
(L,m) can't be decomposable in ℱ(L,m^'). That is to say,
for any integer j_0≥2, there do not exist j_0
surfaces Σ_j,j=1,…,j_0, in ℱ(L,m^') such
that
∑_j=1^j_0R(Σ_j)≥ R(Σ), ∑_j=1^j_0
L(∂Σ_j)≤ L(∂Σ),
and that
L(∂Σ_j)>0,j=1,2,…,j_0.
The same conclusion holds when ℱ( L,m) and
ℱ(L,m^') are both replaced by ℱ(L).
§ PRECISE EXTREMAL SEQUENCES IN ℱ_r( L,m)
WITH CONVERGING BOUNDARY
The goal of this section is to prove the following theorem, which plays a key
role in the proof of the main theorems.
For fixed L∈ℒ with L≥2δ_E_q and large
enough m>3, let Σ_n=( f_n,Δ) be a
precise extremal sequence in ℱ_r(L,m), and assume that the
following conditions (A)–(D) hold.
(A) For each n=1,2,…,Γ_n=∂Σ_n has ℱ
(L,m)-partitions
∂Δ=α_n1( a_n1,a_n2) +α_n2
(a_n2,a_n3)+⋯+α_nm(a_nm,a_n1)
and
Γ_n=∂Σ_n=c_n1( q_n1,q_n2)
+c_n2( q_n2,q_n3) +⋯+c_nm( q_nm
,q_n1)
with c_nj=( f,α_nj) ,j=1,…,m.
(B) Γ_0=( f_0,∂Δ) is a curve on S which
has 𝒞(L,m)-partitions
∂Δ=α_01( a_01,a_02) +α_02
(a_02,a_03)+⋯+α_0m(a_0m,a_01),
and
Γ_0=c_01( q_01,q_02) +c_02( q_02
,q_03) +⋯+c_0m( q_0m,q_01) ,
with c_0j=( f_0,α_0j) ,j=1,…,m.
(C) Γ_n=∂Σ_n=( f_n,∂Δ)
uniformly converges to Γ_0=( f_0,∂Δ) , and
moreover, for each j=1,…,m,α_nj uniformly converges to
α_0j, and c_nj uniformly converges to c_0j.
(D) For every n=0,1,…, Γ_n=( f_n,∂Δ) are parametrized by length and a_n1=1.
Then
(i) For any pair of distinct two points a and b in ∂Δ,
lim_n→∞inf d_f_n(a,b)>0.
(ii) For any disjoint compact arcs I and J of ∂Δ,
lim_n→∞inf d_f_n(I,J)>0.
Some conventions for the proof of Theorem <ref>.
We first make some conventions on the assumptions of this theorem.
By Lemma <ref> and the assumption of Theorem <ref> we have
L(Γ_0)=L( f_0,∂Δ) =lim_n→∞L(f_n,∂Δ)=lim_n→∞L(∂Σ_n
)≥2δ_E_q>0,
and then by Lemmas <ref>, for the sequence Σ_n in the theorem
in proof, there exists a positive number δ_0 such that
d_f_n(f_n^-1(E_q)∩Δ,∂Δ)≥δ_0
,n=1,2,…
By the assumption, the restriction f_n|_∂Δ:∂Δ→ S is linear in length, say,
L(f_n,e^√(-1)[ 0,θ] )=θ/2π
L(f_n,∂Δ),θ∈0,2π],n=0,1,2,…,
where e^√(-1)[ 0,θ] denotes the arc {e^√(-1)t:t∈0,θ]} on ∂Δ. It is permitted that some
α_0j, as the limit of α_nj, is a point, and by (C), (D) and
(<ref>) it is clear that α_0j is a point iff c_0j is a
point, for each j=1,…,m.
For each j_0=2,…,m and each n=0,1,2,…, let ϕ
_n(z)=a_nj_0z be the rotation of ℂ (note that a_nj_0
=e^√(-1)θ_nj_0∈∂Δ for some θ_nj_0
∈[0,2π)). Then the sequence Σ_n^j_0=( f_n
∘ϕ_n,Δ) is also a precise extremal sequence
of ℱ_r(L,m) and
∂Δ=α_n1^j_0( a_n1^j_0,a_n2^j_0
) +α_n2^j_0( a_n2^j_0,a_n3^j_0)
+⋯+α_nm^j_0( a_nm^j_0,a_n1^j_0) ,
with
a_nj^j_0=a_n,j_0+j-1/a_nj_0
for j=1,…,m, are ℱ(L,m)-partitions of ∂Σ
_n^j_0 with n≥1, or is a 𝒞(L,m)-partition for
Γ_0^j_0=( f_0∘ϕ_0,∂Δ) . Since
a_n1^j_0=1 for all n=0,1,…, and a_nj_0 converges to
a_0j_0 as n→∞, it is clear that the partition
(<ref>), the sequence Σ_n^j_0 and the curve Γ
_0^j_0 satisfy all hypothesis of the theorem in proof as the partition
(<ref>), the sequence Σ_n and the curve Γ_0. On the
other hand we have
d_f_n∘ϕ_n( ϕ_n^-1(a),ϕ_n^-1(b))
=d_f_n( a,b) ,n=1,2,…,
for any pair of two points a and b in Δ, and by (C) at
least one arc α_0j is not a point. Therefore, to prove (i) of the
theorem, we may always assume that the following conditions hold (by omitting
at most a finite number of terms of {Σ_n}
_n=1^∞).
α_01 is not a point, and the point a in (<ref>) is
fixed and contained in α_01.
If a∈α_01^∘, then we have: (a) there exists a
compact arc
α_a=α_a( a^',a^'') =α
_01( a^',a^'') ⊂α_01^∘
such that α_a is a neighborhood of a in α_01^∘, that
for each n∈ℕ^0 and the two points q_n^'
=f_n(a^'),q_n^''=f_n(a^''), the arc
c_n,a=c_n1( q_n^',q_n^'') =(
f_n,α_a) ⊂ c_n1^∘
is a compact neighborhood of f_n(a) in c_n1^∘, and
L(c_n,a)>π if L(c_01)>π.
(b) there exists a number δ_a∈( 0,δ_0)such
that for eachn=0,1,2,…,
α_a⊂α_n1\ D( { a_n1,a_n2
} ,4δ_a) and
c_n,a⊂ c_n1\ D( { q_n1,q_n2}
,4δ_a) ,
L(c_n1( q_n1,q_n^') )=L(c_n1( q_n
^'',q_n2) )>4δ_a.
Here D( Q,r) =∪_x∈ QD(x,r) is the r-neighborhood of
Q on the sphere S and ∂Δ is regarded as a set on S.
In fact, Condition <ref> follows from (A)–(D) and Condition <ref>.
If a∈α_01^∘, then for every positive number ε≤4δ_a (see Condition <ref>) we introduce:
α_01,ε is the largest connected and compact
neighborhood of α_01 in ∂Δ such that
( f_0,α_01,ε\α_01)
⊂D( {q_01,q_02},ε) .
(i) For any arc γ on ∂Δ with distinct
endpoints, we have
L(f_n,γ)=(L(γ)/2π)L(f_n,∂Δ)>0,n=0,1,2,…
,
and for any two arcs γ_1 and γ_2 on ∂Δ with
L(γ_1)≤ L(γ_2) we have
L(f_n,γ_1)≤ L(f_n,γ_2),n=0,1,2,….
(ii) If {x_n} and {y_n} are two sequences of points on
∂Δ such that lim_n→∞d( x_n
,y_n) =0, then lim_n→∞d_f_n(
x_n,y_n) =0.
(iii) If the given point a is contained in α_01^∘, then for
the points a^',a^'',q_n^',q_n^'',n=0,1,2,…, in Condition <ref> we have,
lim_n→∞L(α_n1( a_n1,a^') )
=L(α_01( a_01,a^') )>4δ_a,
lim_n→∞L(α_n1( a^'',a_n2)
) =L(α_01( a^'',a_02) )>4δ_a;
lim_n→∞L(c_n1( q_n1,q_n^') )
=L(c_01( q_01,q_0^') )>4δ_a,
lim_n→∞L(c_n1( q_n^'',q_n2)
) =L(c_01( q_0^'',q_02) )>4δ_a.
(iv) If a∈α_01^∘and {b_n}_n=1^∞is a
sequence in (∂Δ)\α_01,3δ_a^∘ with
lim_n→∞d_f_n( a,b_n) =0,
then for sufficiently large n,
d_f_n( a,b_n) <min{δ_0, inf{
L(f_n,γ_0( a,b_n)) } _n=0^∞},
where γ_0(a,b_n) is the shorter[Since (
f_n,∂Δ) are parametrized by length, this also means
L(f_n,γ_0)≤ L(f_n,( ∂Δ) \γ_0).] arc of ∂Δ with endpoints a and b_n.
In fact, (i)–(iii) follow from (A)–(D) and Conditions <ref> and
<ref>. Note that Condition <ref> (b) holds for n=0,1,….
Assume that the given point a is contained in α_01^∘. Then
for the sequence { b_n} _n=1^∞⊂
(∂Δ)\α_01,3δ_a^∘, we have by
(<ref>) that L(γ_0( a,b_n) )>4δ_a, and
thus by Remark <ref> we have
L( f_n,γ_0( a,b_n) ) >(4δ_a
/2π)L(f_n,∂Δ)>0
for n=0,1,2,…. Hence, by (<ref>), we have inf{
L(f_n,γ_0( a,b_n)) } _n=0^∞>0, which
implies (<ref>) for sufficiently large n.
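For the reader's convenience, here is a minimal sketch of the computation behind (i); it uses only the standing assumption that each Γ_n=( f_n,∂Δ) is parametrized by length, interpreted, as elsewhere, as a parametrization of constant speed | d f_n(e^√(-1)θ)/dθ| =L(f_n,∂Δ)/2π on ∂Δ:
L(f_n,γ)=∫_γ| d f_n(e^√(-1)θ)/dθ| dθ=(L(f_n,∂Δ)/2π)∫_γ dθ=(L(γ)/2π) L(f_n,∂Δ)>0,
and since the right hand side is nondecreasing in L(γ), the monotonicity statement in (i) follows as well.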
Step 1. Some useful results inspiring Theorem <ref>
In this step we will prove (<ref>) in some special cases, and we will
deduce some contradictions under the condition that (<ref>) fails. These
simple results inspire Theorem <ref>, though the complete proof of
Theorem <ref> is very complicated.
For any two points x and y on ∂Δ with
f_0(x)≠ f_0(y), (<ref>) holds, say,
lim_n→∞inf d_f_n(x,y)≥ d(f_0(x),f_0(y))>0.
By (B)–(D) we have
d_f_n(x,y)≥ d(f_n(x),f_n(y))→ d(f_0(x),f_0(y)),
and then we have (<ref>).
For the given point a in α_01, if there exists a number
δ>0 such that each surface Σ_n contains a disk[See
Definition <ref>.] ( a,U_n^δ) of radius δ,
then (<ref>) holds for all b∈( ∂Δ)
\{a}.
This follows from Lemma <ref>. In fact we can write
∂ U_n^δ=α_1,n^δ+α_2,n^δ
+α_3,n^δ=α_1,n^δ( a_1,n^δ,a)
+α_2,n^δ( a,a_2,n^δ) +α_3,n
^δ( a_2,n^δ,a_1,n^δ)
as in Lemma <ref>, so that -α_1,n^δ and α
_2,n^δ are the boundary radius and α_3,n^δ is the
new boundary (see Definition <ref> and Remark <ref> for the radius,
the boundary radius, the new and the old boundary of a disk of a surface). For
j=1,2, since the spherical distance of the two endpoints of c_j,n
^δ=( f_n,α_j,n^δ) equals δ, we
have, by (B)–(D), that c_j,n^δ=( f_n,α_j,n^δ) converges uniformly to an SCC arc c_j,0^δ, and thus
α_j,n^δ converges to an arc α_j,0^δ on
∂Δ, for j=1,2.
It is clear that for any r∈(0,1], ( a,U_n^δ)
contains the disk ( a,U_n^rδ) of radius rδ of
Σ_n, the corresponding α_j,n^rδ are well defined with
∂ U_n^rδ=α_1,n^rδ+α_2,n^rδ
+α_3,n^rδ for j=1,2,3 and n=0,1,2,…, and the above
argument applies when δ is replaced by rδ. Moreover,
α_1,0^rδ+α_2,0^rδ uniformly converges to the
point a as r→0. Thus, for any b∈( ∂Δ) \{a}, there exists r∈(0,1) such that
b∉α_1,0^rδ+α_2,0^rδ. Then for sufficiently
large n, b∉α_1,n^rδ+α_2,n^rδ, which
implies b∉( a,U_n^rδ) , and thus by Lemma
<ref> we have d_f_n( a,b) >rδ>0. Therefore
(<ref>) holds for all b∈∂Δ with b≠ a.
Lemma <ref> can be used to show Lemma <ref>, which states that
(<ref>) holds when a∈α_01^∘ and c_01 is either
strictly convex, or straight with L(c_01)>π. Lemma <ref> is the
second key ingredient of the proof of Theorem <ref>. By Lemma <ref>
and Condition <ref> we have the following lemma.
If the given point a is contained in α_01^∘, then
for any b∈α_01,4δ_a\{a}
lim_n→∞inf d_f_n( a,b) ≥ d(
f_0(a),f_0(b)) >0.
Let a∈α_01^∘ and let b_n be a sequence in
( ∂Δ) \α_01,3δ_a^∘
which satisfies (<ref>). Then for sufficiently large n and the d_f_n-shortest path I_n=I_n( a,b_n) from a to b_n
I_n∩Δ≠∅ but I_n∩Δ∩ f_n
^-1(E_q)=∅.
Let γ_0( a,b_n) be the shorter arc of ∂Δ with ∂γ_0={a,b_n}. Then for sufficiently large
n, (<ref>) holds. If I_n∩Δ=∅ for some n=n_0
which is so large that (<ref>) holds for this n_0, then we have
I_n_0⊂∂Δ and thus
d_f_n_0( a,b_n_0) =L( f_n_0,I_n_0
) ≥ L( f_n_0,γ_0( a,b_n_0)
)
≥min{ δ_0, inf{ L(f_n,γ_0( a,b_n)) }_n=0^∞ },
contradicting (<ref>). Thus, when n is large enough, I_n∩Δ≠∅, and for any x∈ I_n∩Δ, we have
d_f_n( x,∂Δ) ≤ L(f_n,I_n)=d_f_n(
a,b_n) <δ_0,
and so by (<ref>) we have I_n∩Δ∩ f_n^-1(E_q
)=∅.
(The first key step for the proof of Theorem <ref>)
Let Σ_n=( f_n,Δ) be the precise
extremal sequence satisfying all conditions (A)–(D) in Theorem <ref>.
Assume that the following additional condition holds:
(E) a∈α_01^∘, {b_n} is a sequence in (
∂Δ) \α_0,3δ_a^∘, and for the
d_f_n-shortest path I_n=I_n( a,b_n) from a to
b_n,
lim_n→∞inf d_f_n( a,b_n) =lim
_n→∞inf( f_n,I_n) =0.
Then Σ_n=( f_n,Δ) contains a
subsequence which is still denoted by Σ_n=( f_n,Δ) such that
(i) I_n∩Δ≠∅, and I_n has a partition
I_n=I_n1( a,b_n1) +I_n2( b_n1,b_n2)
+…+I_n,2k+1( b_n,2k,b_n)
given by Lemma <ref> (v): I_n,2j-1⊂∂Δ for
j=1,…,k+1, and I_n,2j^∘⊂Δ for j=1,…,k.
Moreover, k≥1 is independent of n.
(ii) I_n,2j^∘∩ f_n^-1( E_q) =∅,
f_n restricted to a neighborhood of I_n,2j^∘ is a homeomorphism,
( f_n,I_n,2j) is a straight arc on S,j=1,…,k, and
moreover I_n2 divides Δ into two Jordan domains Δ_n1 and
Δ_n2, say, I_n2 divides Σ_n into two surfaces
F_nj=( f_n,Δ_nj) ,j=1,2.
(iii) I_n1 is a point. Thus I_n2=I_n2( b_n1,b_n2)
=I_n2( a,b_n2) =I_n( a,b_n2) .
(iv) b_n2∈{a_nj}_j=3^m and F_nj are both contained in
ℱ_r( L,m) , provided that
{ b_n,b_n2}∩{a_nj}_j=1^m≠∅.
(v) F_nj are both contained in ℱ_r( L,m) ,
provided that
b_n2∉( α_nm+α_n1+α_n2) ^∘
(vi) Σ_n is decomposable in ℱ_r(L,m), provided that
(<ref>) or (<ref>) holds for each n=1,2,….
By (<ref>), taking subsequence, we may assume that (<ref>) holds. Then
(i) follows from Lemma <ref> and Lemma <ref>.
By Lemma <ref>, we have I_n,2j∩ f^-1(E_q)=∅ for
sufficiently large n and each j=1,…,2k. Thus (ii) follows from Lemma
<ref>, and τ_n=( f_n,I_n2) is a simple and
straight arc on S.
By (<ref>) and (<ref>), omitting a finite number of terms of the
sequence {I_n}, we may assume
L(f_n,I_n1( a,b_n1) )<δ_a<min{L(
c_n1(q_n1,f_n(a)) ),L( c_n1(f_n(a),q_n2) )}.
Then by I_n1⊂∂Δ we have ( f_n,I_n1)
⊂ c_n1^∘and I_n1⊂α_n1^∘, and so
I_n1∩{a_nj}_j=1^m=∅. Then by Lemma <ref> (vii)
I_n1={a} is a singleton and (iii) is proved.
By (<ref>) and (ii) we have
lim_n→∞d_f_n( b_n1,b_n2) =lim
_n→∞d_f_n( a,b_n2) =lim_n→∞f( a) f( b_n2) =0.
Assume b_n2∈{a_nj}_j=1^m. For sufficiently large n, by
(<ref>) we have b_n2≠ a_n1,a_n2, since
d( f_n( a) ,{ f_n( a_n1)
,f_n( a_n2) }) >4δ_a
by (<ref>); and thus b_n2∈{a_nj}_j=3^m. Assume b_n
∈{a_nj}_j=1^m. Then we have b_n2∈{a_nj}_j=1^m as well
by Lemma <ref>, and then for sufficiently large n we also have
b_n2∈{a_nj}_j=3^m, by (<ref>).
Now assume (<ref>) holds. Then by the above discussion we may assume
b_n2∈{a_nj}_j=3^m, and then both of the two arcs of
∂Δ with common endpoints a and b_n2 contain at least two
points of {a_nj}_j=1^m, and thus F_n1 and F_n2 are both
contained in ℱ_r(L,m) by Lemma <ref> (ix), and (iv) is
proved completely.
If (<ref>) holds, then each of the two arcs of ∂Δ with
common endpoints a and b_n2 contains at least two points of
{a_nj}_j=1^magain. Thus we also have F_nj∈ℱ
_r(L,m),j=1,2, by Lemma <ref> (ix), and (v) is proved.
Now assume (<ref>) holds. Then F_n1 and F_n2 are both contained
in ℱ_r(L,m). On the other hand by (i) and (ii) and (<ref>) we
have
R(F_n1)+R(F_n2)=R(Σ_n),
and
L(∂ F_n1)+L(∂ F_n2)=L(∂Σ_n)+2L(τ_n),
with
lim_n→∞L(τ_n)=lim_n→∞L(f_n
,I_n)=0.
Let γ_nj,j=1,2, be the two arcs of ∂Δ with endpoints
a and b_n2 such that γ_nj=( ∂Δ_nj)
∩∂Δ. Then by (iv) and (<ref>) we have
lim_n→∞L(∂ F_nj) =lim_n→∞( L(f_n,γ_nj)+L(τ_n))
≥min{L( c_n1(q_n1,f_n(a)) ),L( c_n1
(f_n(a),q_n2) )}≥4δ_a,j=1,2.
Therefore, Σ_n is decomposable in ℱ_r(L,m) by
Definition <ref> (b).
If (<ref>) holds for each large enough n, the above argument can be
used to show that Σ_n is also decomposable in ℱ_r(L,m).
This completes the proof of Lemma <ref>.
For any a∈α_01^∘ and b∈∂Δ\α_0,3δ_a^∘, if the interior I_n
^∘ of the d_f_n-shortest path I_n=I_n( a,b)
is contained in Δ and b∈∂Δ\(
α_nm+α_n1+α_n2) ^∘, then lim
_n→∞inf d_f_n( a,b) >0.
If lim_n→∞inf d_f_n( a,b) =0, then we
have a contradiction by Lemma <ref> and Lemma <ref> (vi) with
b_n=b for every n=1,2,…
Step 2. Proof of Theorem <ref> (i) in a special case
This step is to prove the following lemma, which is the second key to proving
Theorem <ref> (i).
(The second key to the proof of Theorem <ref> (i))
For any j=1,2,…,m, if L(c_0j)>π, or if L(c_0j)>0 and c_0j
is strictly convex, then
lim_n→∞inf d_f_n(x,b)>0
for every x∈α_0j^∘ and every b∈( ∂Δ) \{x}.
By Remark <ref> and Lemma <ref>, it suffices to prove Lemma
<ref> for j=1, x=a∈α_01^∘ (the fixed point) and each
b∈( ∂Δ) \α_01,3δ_a(see
Condition <ref>).
Let C_nj be the circle determined by c_nj, for j=1,…
,m,n=0,1,…. If c_01 is strictly convex, then we may assume, by taking
subsequence of Σ_n, that all c_n1 are strictly convex, and then
c_n,a+q_n^''q_n^' encloses a closed
domain T_n, for all n=0,1,…, and it is clear that q_n^''q_n^' divides the closed disk enclosed by
C_n1 into two lunes, and T_n is the closure of the lune on the right
hand side of q_n^'q_n^''(see Condition
<ref> for the notation c_n,a=c_n,a( q_n^',q_n
^'')). If c_01 is straight, then by the assumption
L(c_01)>π and Condition <ref>, all c_n,a,n=0,1,2,…, have
length >π and thus d( q_n^',q_n^'')
<π, and then c_n,a+q_n^''q_n^' also
encloses a closed and convex domain T_n, and T_n is in fact a closed
hemisphere with q_n^'q_n^''⊂ C_n1
when c_n1 and c_01 are straight.
For n=0,1,2,…, let θ_n be the interior angle of T_n at the
cusps q_n^' and q_n^'' and for each θ∈[0,θ_n] let c_n,a,θ be the circular arc in T_n
from q_n^' to q_n^'' so that the closed domain
T_n,θ enclosed by c_n,a-c_n,a,θ has interior angle
θ at the cusps q_n^' and q_n^''. Then
T_n,θ⊂ T_n, T_n,0 is just the arc c_n,a=c_n,a,0 and
T_n,θ_n=T_n, for n=0,1,2,... It is clear that c_n,a,θ
is strictly convex when θ∈(0,θ_n), whether c_01 is
straight or not. On the other hand by Lemma <ref> we may assume that
θ_n≥θ_0/2= { π/2, if c_01 is straight; θ_0/2>0, if c_01 is strictly convex }, n=1,2,…
Note that each c_nj is convex and so it is either straight or strictly convex.
Since α_a given in Condition <ref> is a compact subarc of all
α_n1^∘,n=0,1,2…, by Definition of ℱ_r(L,m)
and Lemma <ref>, for each n=1,2,…, and sufficiently small
θ=θ(n)∈(0,θ_n], f_n^-1 has a univalent branch
g_n,θ defined on the closed domain T_n,θ such that
g_n,θ( c_n,a) =α_a. Let θ_n^∗ be
the largest positive number in (0,θ_n] such that g_n,θ is
well defined for every θ∈(0,θ_n^∗), say, g_n,θ
is a univalent branch of f_n^-1 defined on T_n,θ with
g_n,θ( c_n,a,0) =α_a. Then it is clear that
g_n,θ^' equals g_n,θ^'' on T_n,θ
^'⊂ T_n,θ^'' for every pair of
θ^' and θ^'' with 0<θ^'
<θ^''<θ_n^∗, and then f_n^-1 has a
univalent branch g_n,θ_n^∗ defined on T_n,θ_n^∗
\ c_n,a,θ_n^∗^∘with g_n,θ_n^∗
( c_n,a) =α_a. This, together with Lemma
<ref>, implies that g_n,θ_n^∗ can extend to a
univalent branch of f_n^-1 defined on the closed Jordan domain
T_n,θ_n^∗. We still use g_n,θ_n^∗ to denote
the extension and let α_a,θ_n^∗=g_n,θ_n^∗
( c_n,a,θ_n^∗) , which is a simple arc in
Δ from a^' to a^''. We summarize this
by a claim:
f_n:D_n=g_n(T_n,θ_n^∗)→
T_n,θ_n^∗ is a homeomorphism and ∂ D_n=α
_a-α_a,θ_n^∗ is a Jordan curve, and thus D_n^∘∪α_a=D_n\α_a,θ_n^∗^∘ contains
no branch point of f_n.
Since α_a=α(a^',a^'') is a compact subarc of
all α_n1^∘,n∈ℕ, by Lemma <ref> and Claim
<ref>, we have
For each n∈ℕ,α_n1^∘ has a neighborhood
N_n in Δ such that f_n:D_n∪ N_n→
T_n,θ_n^∗∪ f_n( N_n) is also a
homeomorphism. Thus, if θ_n^∗∈( 0,θ_n) ,
the part of α_a,θ_n^∗^∘ near its endpoints
a^' and a^'' is contained in N_n^∘
⊂Δ.
We will prove
θ=lim_n→∞infθ_n^∗>0.
We first show that this implies Lemma <ref>. Since θ_n^∗>0
for each n≥1, (<ref>) implies θ_n^∗>φ_0 for
some φ_0>0 and all n≥1. By Lemma <ref> T_n,θ
_n and T_n,φ_0 converge to T_0,θ_0 and
T_0,φ_0. Thus it is clear that there is a positive number
δ_1<min{δ_0,δ_a}
such that for the δ_1-neighborhood V_n=D( f_n
(a),δ_1) ∩ T_n,φ_0 of f_n(a) in T_n,φ
_0and large enough n, V_n is the part of the disk D(
f_n(a),δ_1) on the left hand side of c_n,awith
D( f_n(a),δ_1) ∩ c_n,a⊂ V_n and V_n
does not intersect c_n,a,φ_0. Thus, when we choose δ_1
small enough, (a,U_n) with U_n=g_n,θ_n^∗(V_n) is a
disk of Σ_n with radius δ_1, U_n⊂ D_n and
U_n∩∂Δ⊂α_a, but (a,U_n) depends on n
(see Definition <ref>). Therefore by Lemma <ref>, (<ref>) holds,
and thus (<ref>) implies Lemma <ref>.
Now return to prove (<ref>), and assume that it fails. Then, by
(<ref>), we may assume
θ=lim_n→∞θ_n^∗
=0 and θ_n^∗<inf{θ_n}_n=1^∞
for all n, and then for sufficiently large n and each point y in
T_n,θ_n^∗, there exists a line segment I_n,y=yy^∗⊂ T_n,θ_n^∗ of length <θ_n^∗π
for some y^∗∈ c_n,a. Thus we have
d_f_n( x,∂Δ) ≤ d_f_n( x,α
_a) ≤ L(I_n,f_n(x))<θ_n^∗π→0
for all x∈ D_n\α_a and sufficiently large n, and thus
by (<ref>) we may assume that
D_n∩Δ∩ f_n^-1(E_q)=∅ for all
n=1,2,…, and so f_n has no branch point in D_n∩Δ.
Thus α_a,θ_n^∗∩Δ∩ f_n^-1(E_q)=∅
for all n=1,2,….
Since 0<θ_n^∗<θ_n, when α_a,θ_n^∗
^∘∩∂Δ=∅, f_n is homeomorphic in a
neighborhood of α_a,θ_n^∗ in Δ by
Claims <ref> and <ref>, and so θ_n^∗ can be enlarge,
which contradicts the definition of θ_n^∗. Hence α
_a,θ_n^∗^∘∩∂Δ is not empty, which together
with Lemma <ref> (iv) implies that α_a,θ_n^∗
^∘∩∂Δ is a finite set contained in {a_j}_j=1
^m, and thus we have by Claim <ref>
α_a,θ_n^∗∩∂Δ={a^',a^''}∪{a_ni_j}_j=1^k-1 in which {a_ni_j
}_j=1^k-1=α_a,θ_n^∗^∘∩∂Δ for
some k≥2 is a subset of {a_nj}_j=1^m, and a^'',a_ni_1,a_ni_2,…,a_ni_k-1,a^' are arranged on
∂Δ anticlockwise.
It is clear that {a_ni_j}_j=1^k-1⊂ D_n since D_n is
closed, and thus by (<ref>) d_f_n( a_ni_j,α_a) <θ_n^∗π→0 for every j=1,…,k-1. On
the other hand, we have d_f_n( {a_n1,a_n2},α_a) ≥ d( {q_n1,q_n2},c_n,a) >4δ_a by
(<ref>). Thus for large enough n we have
a_ni_j∉{a_n1,a_n2} for each j=1,…,k-1.
Then α_n,θ_n^∗ cut Δ into k+1 components
{Δ_nj} _j=0^k of which Δ_n0 is on the
left hand side of -α_n,θ_n^∗ and Δ_nj
,j=1,…,k, are on the right hand side of -α_n,θ_n^∗,
and the intersections γ_j=∂Δ_nj∩∂Δ,j=1,…,k, are arcs of ∂Δ arranged on
∂Δ anticlockwise. We may assume k is independent of n.
Since c_a,n and c_a,θ_n^∗ are both convex and share the
same endpoints {q_n^',q_n^''}, c_a,n is on the
convex circle C_n1 and c_a,θ_n^∗^∘ is in the domain
enclosed by C_n1, we have
L(f_n,α_a,θ_n^∗)=L(c_a,θ_n^∗)<L(c_a,n).
Then k,α_a,α_n1,-α_a,θ_n^∗,Σ
_n,a^'',a^',{{a_i_j}} _j=1
^k-1satisfy (a) (b) (c) (d) (e1) of Lemma <ref> as k,γ
_0,γ_0^',I,Σ,b_2,b_2k+1,{I_2j-1}_j=2^k
there, but here all I_2j-1={a_i_j} are points. Then by Lemma
<ref>, Σ_nj=( f_n,Δ_nj)
∈ℱ_r( L,m) for j=1,2,…,k.
It is clear that by Claim <ref> and (<ref>),
∑_j=1^kR(Σ_nj) =R(Σ_n)-( q-2)
A(T_n,θ_n^∗),
∑_j=1^kL(∂Σ_nj) ≤ L(∂Σ)-L(c_a,n
)+L(c_a,θ_n^∗)<L(∂Σ),
with A(T_n,θ_n^∗)→0 as n→∞. On the
other hand it is clear that α_n1( a^''
,a_n2) ⊂∂Δ_n1 and by (<ref>),
L(f_n,α_n1( a^'',a_n2) )=L(c_n1(
q_n^'',q_n2) )≥4δ_a.
Thus L(∂Σ_n1)>4δ_a and, for the same reason,
L(∂Σ_nk)>4δ_a. Hence Σ_n is decomposable by
Definition <ref> with (b), which contradicts Lemma <ref>.
We have proved (<ref>), and then Lemma <ref> is proved
completely.
Step 3. Preliminary discussion in the case a∈α_01^∘
The purpose of this step and the next is to prove Assertion <ref>, introduced
later, which deduces Theorem <ref> (i) in the case a∈α_01^∘.
By Condition <ref> and Lemma <ref>, to prove Theorem <ref> (i),
it suffices to prove the following assertion.
For the fixed a∈α_01 and any b∈( ∂Δ) \{a}, we have
lim_n→∞inf d_f_n(a,b)>0.
We first prove this assertion under the condition
a∈α_01^∘.
Then by Lemma <ref>, to prove Assertion <ref> under (<ref>), it
suffices to prove
(<ref>) holds when a∈α_01^∘ and b∈(
∂Δ) \α_01,4δ_a^∘.
Assume that (<ref>) holds. Then we may assume that α_n1
⊂α_01,δ_a for all n (see Notation <ref> for
α_01,δ_a), since this is true for sufficiently large n.
By Lemma <ref>, for each n, there exists a point b_n in (
∂Δ) \α_01,3δ_a^∘ such that
d_f_n(a,b_n)=min_x∈( ∂Δ) \α_01,3δ_a^∘d_f_n(a,x).
By Lemma <ref>, the d_f_n-shortest path I_n=I_n(
a,b_n) with b_n∈( ∂Δ) \α_01,3δ_a^∘ exists. Then, to prove Assertion <ref>,
it suffices to prove the following assertion.
When the fixed a is contained in α_01^∘,
lim_n→∞inf L(f_n,I_n( a,b_n)
)=lim_n→∞inf d_f_n(a,b_n)>0.
To prove Assertion <ref>, we may assume that
lim_n→∞b_n=b_0∈( ∂Δ)
\α_01,3δ_a^∘,
exists and thus, by Condition <ref> (b), we have
a∈α_n1^∘∩α_01^∘andb_n
∈( ∂Δ) \α_01,3δ_a^∘, for n=0,1,2,…
Assume Assertion <ref> fails. Then by Lemma <ref> (ii) we may assume,
by taking subsequence, that
lim_n→∞d_f_n( a,b_0) =lim_n→∞d_f_n(a,b_n)=lim_n→∞L(f_n,I_n(
a,b_n) )=0.
If b_0∈α_01,4δ_a, then by (<ref>) we have b_0
∈α_01,4δ_a\{a}, and then by Lemma <ref> we
have lim_n→∞inf d_f_n( a,b_0) >0. This
contradicts (<ref>), and so b_0∈( ∂Δ)
\α_01,4δ_a, and so, by taking subsequence,
(<ref>) can be enhanced to be
a∈α_n1^∘∩α_01^∘andb_n
∈( ∂Δ) \α_01,4δ_a
, for n=0,1,2,…
By (<ref>) and Lemma <ref>, we have
I_n∩Δ≠∅, but I_n∩Δ∩ f_n
^-1(E_q)=∅ for n large enough,
and then by Lemma <ref>, taking subsequence, we conclude that I_n has
a partition
I_n=I_n1( a,b_n1) +I_n2( b_n1,b_n2)
+I_n3( b_n2,b_n3) +⋯+I_n,2k+1( b_n,2k
,b_n,2k+1)
(with b_n,2k+1=b_n) satisfying all conclusions of Lemma <ref> for
each n, in which k≥1 is independent of n. Then
I_n1 is the point a in α_01^∘
,I_n2^∘⊂Δ, ( f_n,I_n2) is straight,
∂ I_n2=I_n2∩∂Δ, and
I_n2^∘∩ f^-1(E_q)=∅,n=1,2,…,
It is clear that I_n2 divides Δ into two Jordan domains
Δ_n1 and Δ_n2, and we let Δ_n1 be the one on the
right hand side of I_n2, and write
F_nj=( f_n,Δ_nj) ,j=1,2.
We may assume
lim_n→∞b_n2=b_2 for some b_2∈∂Δ.
Now, it is clear that to prove Assertion <ref>, by taking subsequence, we
may assume that one of the following holds:
Case 1. k=1and b_n2∈{a_nj}_j=1^m for each
n=1,2,…, or k≥2.
Case 2. k=1and b_n2∉{a_nj}_j=1^m, for
each n=1,2,…, but b_2∈{a_0j}_j=1^m.
Case 3. k=1and b_n2∉{a_nj}_j=1^m, for
each n=1,2,…, and b_2∉{a_0j}_j=1^m.
Step 4. Case 1, 2, or 3 implies a contradiction
We will deduce a contradiction in each of Cases 1–3.
Step 4.1 Discussion of Case 1
Case 1 cannot occur.
If k>1 then b_n2∈{a_nj}_j=1^m by Lemma <ref>. Thus
{b_n,b_n2} satisfies (<ref>) in Case 1, and so by Lemma <ref>
(vi), Σ_n is decomposable in ℱ_r( L,m) ,
contradicting Lemma <ref>.
Step 4.2 A general discussion of Case 2 and Case 3
Now assume that Case 2 or Case 3 occurs. Then k=1. We may assume I_n3 is
a point, for otherwise by Lemma <ref> b_n2∈{a_nj}_j=1^m
and Case 1 occurs. Then b_n2=b_n3=b_n and thus, by Claim
<ref>, for n=1,2,…,
I_n=I_n( a,b_n) =I_n2=I_n2(b_n1,b_n2)=I_n2
(a,b_n),
and then by (<ref>) we have b_n2=b_n∈( ∂Δ) \α_01,4δ_a. If b_n2∉(
α_nm+α_n1+α_n2) ^∘ holds for infinitely
many n, then by (<ref>) and Lemma <ref> (v) Σ_n is
decomposable in ℱ( L,m) , contradicting Lemma
<ref>. We can then assume that, for each n, b_n=b_n2 is in
α_n2^∘\α_01,4δ_a or α_nm^∘\α_01,4δ_a, and thus we can further assume
b_n=b_n2∈α_n2^∘\α_01,4δ_a
,n=1,2,….
Then we have,
b_0=b_2=lim_n→∞b_n=lim_n→∞b_n2
∈α_02\α_01,4δ_a^∘,n=1,2,….
By the way we see that α_02, as the limit of α_n2, is not a
point, for otherwise α_n2\α_01,4δ_a is empty
for all large enough n.
By definition of ℱ_r( L,m) ,f_n is homeomorphic
in a neighborhood of α_nj^∘ for each j=1,…,m, which with
(<ref>) and (<ref>) implies that f_n is homeomorphic in some
neighborhoods of the endpoints of I_n=I_n( a,b_n) . On
the other hand, for large enough n, f_n is homeomorphic in a
neighborhood of I_n2^∘by Lemma <ref> (ii). Therefore by
(<ref>) we conclude that
f_n is homeomorphic in a neighborhood of I_n2(
a,b_n2) =I_n( a,b_n) in Δ and
I_n2^∘∩ f_n^-1(E_q)=∅.
Step 4.3 Complete the discussion of Case 2
Case 2 implies a contradiction.
Now assume Case 2 occurs. Then by the condition of Case 2 and (<ref>),
b_n=b_n2→ b_2=a_03∈α_02\α
_01,4δ_a^∘ as n→∞,
which with the condition lim_n→∞a_n3=a_03 implies, for
sufficiently large n,
a_n3∈α_n2\α_01,3δ_a^∘.
By (<ref>) and the condition lim_n→∞a_n3=a_03
we have lim_n→∞d( b_n,a_n3) =0, which
with Lemma <ref> (i) implies lim_n→∞d_f_n(
b_n,a_n3) =0. Thus by (<ref>) we have
lim_n→∞d_f_n( a,a_n3) ≤lim
_n→∞d_f_n( a,b_n) +lim_n→∞d_f_n( b_n,a_n3) =0.
Hence by (<ref>) and Lemma <ref> (vi), by taking subsequence, we may
assume that the d_f_n-shortest path Ĩ_n=Ĩ_n(
a,a_n3) from a to a_n3 has a partition
Ĩ_n=Ĩ_n1( a,b̃_n1) +Ĩ
_n2( b̃_n1,b̃_n2) +Ĩ_n3(
b̃_n2,b̃_n3) +⋯+Ĩ_n,2k̃
+1( b̃_n,2k̃,a_n3) .
satisfying all conclusions of Lemma <ref> for each n, in which
1≤k̃≤ m+1 is independent of n. Then {a_n3,b̃
_n2}∩{a_nj}_j=1^m≠∅, say (<ref>) holds, and
thus by Lemma <ref> (vi) Σ_n is decomposable in ℱ
_r( L,m) , contradicting Lemma <ref>
.
Step 4.4 Discussion of Case 3: (1) Case 3 implies Condition
<ref>
Now we assume that Case 3 occurs. Then by Condition of Case 3 and
(<ref>) we have b_2∈α_02^∘ and thus by (<ref>)
we have
b_2=lim_n→∞b_n2∈α_02^∘\α_01,4δ_a,
and then by (<ref>) we may assume
{ b_2,b_n2}⊂[ α_02^∘∩α_n2^∘] \α_01,4δ_a,n=1,2,….
Then
a≠ b_2,b_n2 for n=1,2,….
By (<ref>) and (<ref>), we have
lim_n→∞d_f_n( a,b_2) =0.
Then by Lemma <ref> we have f_0(a)=f_0(b_2), and then
f_0( a) =f_0( b_2) ∈ c_01^∘∩
c_02^∘.
Thus by (<ref>), (<ref>) and Lemma <ref>, we may assume
c_0j=( f_0,α_0j) are both straight and
L(c_0j)≤π, for j=1,2. Then Case 3 can be discussed under the
following condition.
Both c_01 and c_02 are straight, f_0(a)=f_0
( b_2) ∈ c_01^∘∩ c_02^∘, L(c_01
)≤πand L(c_02)≤π.
Step 4.5 Discussion of Case 3: (2) The closed Jordan domains K_n1, K_n2 and K_n,ψ_n^∗
Let C_nj be the circle determined by c_nj for n=0,1,…, and
j=1,2. Then C_01 and C_02 are great circles on S. By
(<ref>), we have
f_n(a)∈ c_n1\ D( { q_n1,q_n2}
,3δ_a) → c_01\ D( {
q_01,q_02} ,3δ_a) ∋ f_0(a)
as n→∞, which implies that f_0(a) and q_02 are not
antipodal on S by Condition <ref>. Then we have {q_02
,f_0(a)}⊂ C_01∩ C_02, which implies C_01=-C_02, say,
c_01+c_02 is folded at q_02. Let A_n=f_n(a)and B_n
=f_n(b_n). Then τ_n=( f_n,I_n) =(
f_n,I_n2) =A_nB_n→{f_0(a)} by
(<ref>), and we see that, by (<ref>), α_n2^∘
\α_01,3δ_a is an open arc of α_n2^∘
containing b_n (see Notation <ref> for α_01,3δ_a).
Then by (<ref>) and Condition <ref> we must have τ_n⊥
c_n2 at B_n, where τ_n⊥ c_n2 indicates that τ_n
intersects c_n2 at B_n perpendicularly.
Since τ_n⊥ c_n2 at B_n and τ_n is on the left hand
side of c_n1 and c_n2, we see that
η_n=c_n1( A_n,q_n2) +c_n2( q_n2
,B_n) -τ_n( A_n,B_n)
is a convex topological triangle, enclosed by the two convex circular subarcs
of c_n1 and c_n2 and the line segment τ_n. It is clear that
η_n converges to the folded arcs f_0( a)
f_0( a_02) +f_0( a_02)
f_0( a) as n→∞. Then c_n1+c_n2 is
strictly convex at q_n2, the interior angle of η_n at A_n is
almost π/2, C_n1 and C_n2 enclose a "thin" and convex closed lens
K_n which is on the left hand side of c_n1+c_n2, and τ_n
divides K_n into two closed Jordan domains K_n1 and K_n2 such that
K_n1 is on the right hand side of τ_n, for n=1,2,…. Then
∂ K_n1=η_n and both K_n1 and K_n2 are contained in
some open hemispheres, respectively. It is clear that one of the two cusps of
K_n is q_n2, and we let q_n2^∗ be the other cusp.
Assume the interior center of C_n2, the center on the left hand side of
C_n2, is P_n and C_n2=∂ D(P_n,R_n). Since the
orientation of C_n2 is given by c_n2 and all c_nj are convex for
all n=0,1,…, and all j=1,2,…,m, we have R_n≤π/2.
Let R_n,ψ be the radius P_nB_n,ψ of C_n2 such
that B_n,ψ is the point on the arc ∂ K_n∩ C_n2 so that
the angle between the radius P_nq_n2 and P_nB_n,ψ at P_n equals ψ, A_n,ψ be the intersection
of R_n,ψ and c_n1, and write R_n,ψ_n=P_n
q_n2^∗ and R_n,ψ_n,a=P_nB_n. Then
R_n,0=P_nq_2n and 0<ψ_n,a<ψ_n. Let
τ_n,ψ=τ_n,ψ( A_n,ψ,B_n,ψ)
=A_n,ψB_n,ψ=K_n∩ R_n,ψ, ψ∈[0,ψ
_n],
and
K_n,ψ=∪_θ∈[0,ψ]τ_n,θ.
Then τ_n,0={q_n2},
τ_n=τ_n,ψ_n,a=A_n,ψ_n,aB_n,ψ_n,a
=A_nB_n,
K_n,ψ_n,a=K_n1 and K_n,ψ_n=K_n. By the way we see that
I_n is an f_n-lift of τ_n,ψ_n,a=τ_n, and K_n,ψ
is a closed Jordan domain for each ψ∈[0,ψ_n], with an
exception at ψ=0: K_n,0={q_n2}.
Since C_n1 and C_n2 tend to the great circles C_01and C_02
with C_02=-C_01, we have
lim_n→∞max_ψ∈[0,ψ_n]L(τ_n,ψ)=0.
Then we may assume K_n is so thin that
max_w∈ K_nd(w,∂ K_n)<δ_0.
Now we can prove the following
For sufficiently large n, the restriction f_n1=f_n
|_Δ_n1 is a homeomorphism onto K_n,ψ_a=K_n1,
that is to say, F_n1=( f_n,Δ_n1) is the
simple closed domain K_n1.
By Condition <ref> and Definition of ℱ_r(L,m), f_n
restricted to ∂Δ_n1=-I_n2( a,b_n) +α
_n1( a,a_n2) +α_2( a_n2,b_n) , with
b_n=b_n2, is a homeomorphism onto ∂ K_n1 and f_n has no
branch point on ( ∂Δ_n1) \{a_n2}.
Recall that we are still in Case 3, and so I_n( a,b_n)
=I_2n( a,b_n2) . On the other hand, f_n is locally
homeomorphic on ( Δ\ f^-1( E_q) )
∪( ∂Δ) \{a_nj}_j=1^m. Then
f_n1=f_n|_Δ_n1 is homeomorphic in a neighborhood of
( ∂Δ_n1) \{a_n2} in Δ_n1 and ( f_n1,Δ_n1)
∈ℱ_r( L,3), and then f_n1(Δ_n1)⊃ K_n1.
The set of branch values of f_n1 in K_n1\{c_n2} is
contained in E^'=K_n1^∘∩ E_q. If E^'
≠∅, let ψ∈(0,ψ_n,a] decrease continuously from
ψ_n,a to 0 so that τ_n,ψ first meets a value Q∈
E^' for some ψ=ψ^'∈( 0,ψ_n,a) .
Then by Lemma <ref> we see that f_n^-1 has a univalent branch
defined on the closed Jordan domain K_n1^'=∪_ψ∈ψ^',ψ_n,a]τ_n,ψ and so τ_n,ψ^' has
an f_n-lift in Δ_n1 whose endpoints are on
α_n1^∘ and α_n2^∘, but interior in Δ
_n1, and thus f_n^-1(E_q)∩Δ_n1⊃ f_n^-1
(Q)∩Δ_n1≠∅, which with (<ref>) implies that
d_f_n( f_n^-1(E_q) ∩Δ,∂Δ)≤
d_f_n( f_n^-1(Q) ∩Δ_n1,α_n1+α
_n2)<L(τ_n,ψ^')<δ_0,
for sufficiently large n, contradicting (<ref>). Then for sufficiently
large n, f_n has no branch value in K_n1\{q_n2},
and thus f_n^-1 has a univalent branch defined on K_n1\{q_n2}. Therefore Claim <ref> follows from Lemma <ref>.
By Condition <ref> we may discuss Case 3 under the following condition.
For each n=1,2,…, there exists φ_n∈(ψ_n,a
,ψ_n), such that for every ψ∈(ψ_n,a,φ_n],τ
_n,ψ has a unique f_n-lift I_n,ψ in Δ from
a point a_n,ψ∈α_n1^∘ to a point b_n,ψ∈α_n2^∘, I_n,ψ^∘⊂Δ, and K_n,ψ has
an f_n-lift D_n,ψ so that D_n,ψ is a closed Jordan domain
enclosed by
∂ D_n,ψ=α_n1( a_n,ψ,a_n2) +α
_n2( a_n2,b_n,ψ) -I_n,ψ( a_n,ψ
,b_n,ψ) ,
say, f_n restricted to D_n,ψ is a homeomorphism onto K_n,ψ.
For each n=1,2,…, let ψ_n^∗ be the supremum of all
φ_n∈(ψ_n,a,ψ_n) satisfying Condition <ref>. Then
ψ_n^∗<ψ_n,n=1,2,⋯.
Assume ψ_n^∗=ψ_n for some n. Then q_n2^∗∈
c_n1∩ c_n2 and f_n^-1 has a univalent branch g_n defined on
K_n,ψ_n^∗\{q_n2^∗}=K_n,ψ_n
\{q_n2^∗}=K_n\{q_n2^∗}. This, together
with Lemma <ref>, implies that f_n restricted to Δ
is a homeomorphism onto K_n and ∂Σ_n=c_n1+c_n2, and
then we have m=2. This contradicts m>3, and thus (<ref>) is
true.
Step 4.6 Discussion of Case 3: (3) The subsurface F_n0^'
By Claim <ref> and Condition <ref>, it is clear that f_n^-1 has
a univalent branch g_n defined on K_n,ψ_n^∗\τ_n,ψ_n^∗. By Lemma <ref> this branch can be
extended to a univalent branch well defined on K_n,ψ_n^∗, which
we still denote by g_n. Let
I_n,ψ_n^∗=I_n,ψ_n^∗( a_n,ψ_n^∗
,b_n,ψ_n^∗) =g_n(τ_n,ψ_n^∗(
A_n,ψ_n^∗,B_n,ψ_n^∗) )
(see (<ref>)), and let λ_n0 be the arc of ∂Δ from
a_n,ψ_n^∗ to b_n,ψ_n^∗, say,
λ_n0=α_n1( a_n,ψ_n^∗,a_n2)
+α_n2( a_n2,b_n,ψ_n^∗) .
Then it is clear that I_n,ψ_n^∗∩ I_n=∅, and
-I_n,ψ_n^∗+λ_n0 encloses a domain Δ_n0 in
Δ such that
f_n:Δ_n0→ K_n,ψ_n^∗
is a homeomorphism. Since ( f_n,I_n,ψ_n^∗) is
straight, we have
L(f_n,∂Δ_n0)=L(f_n,I_n,ψ_n^∗)+L(f_n
,λ_n0)≤ L(f_n,( ∂Δ) \λ_n0)+L(f_n,λ_n0)≤ L,
and then we have
f_n restricted to Δ_n0 is a homeomorphism
onto K_n,ψ_n^∗, which is the closure of the component of
K_n\τ_n,ψ_n^∗ on the right hand side of
τ_n,ψ_n^∗. Moreover for sufficiently large n
F_n0^'=( f_n,Δ_n0) ∈ℱ_r(L,3)⊂ℱ_r(L,m),
and
lim_n→∞inf L(∂ F_n0^')≥ L(c_01(
A_n,ψ_n^∗,q_n2)) >δ_a>0.
(note that ( f_n,I_n,ψ_n^∗) is straight and we
assumed m>3).
It is clear that the closed Jordan domain Δ_n0 is the
union ∪_ψ∈[0,ψ_n^∗]I_n,ψ with I_n,ψ^∘⊂Δ_n0, ∂ I_n,ψ⊂( ∂Δ_n0) ∩∂Δ and τ_n,ψ^∘=(
f_n,I_n,ψ^∘) ⊂ K_n,ψ_n^∗^∘ for
all ψ∈(0,ψ_n^∗). Then by (<ref>) and (<ref>) we have
Δ_n0∩ f_n^-1(E_q)=∅,
and then by Claim <ref> we have
R(F_n0^')=( q-2) A(F_n0^')<( q-2)
A(K_n,ψ_n^∗)<( q-2) A(K_n)→0
as n→∞, and
H(F_n0^')≤( q-2) A(K_n)/L(∂
F_n0^')≤( q-2) A(K_n)/δ_a
→0( n→∞) .
Step 4.7 Discussion of Case 3: (4) Discussion of the d_f_n-shortest path I_n,ψ_n^∗
If there exists a subsequence {n_s}_s=1^∞ of {n} such that
I_n_s,ψ_n_s^∗⊂∂Δ, then Δ_n_s
0=Δ, and then F_n_s0^'=Σ_n_s=( f_n_s
,Δ) by Claim <ref>. But this implies
A(f_n_s,Δ)=A(K_n_s,ψ_n_s^∗)<A(K_n_s)→0
as s→∞. Then by (<ref>), we have
H(Σ_n_s)≤( q-2) A(f_n_s,Δ)/L(∂Σ_n_s)=( q-2) A(f_n_s,Δ
)/L(∂ F_n0^')→0,
which contradicts that Σ_n is an extremal sequence of ℱ
_r( L,m) (and of ℱ( L,m)) by Lemma
<ref>. Thus we may assume that
I_n,ψ_n^∗∩Δ≠∅,n=1,2,….
By (<ref>) we have
lim_n→∞L(f_n,I_n,ψ_n^∗)=lim_n→∞L(τ_n,ψ_n^∗)=0,
and so by (<ref>) we may assume that
Δ∩ I_n,ψ_n^∗∩ f_n^-1(E_q)=∅,n=1,2,….
Then by Lemma <ref>, as (<ref>) for I_n, we may assume, by
taking subsequence, that I_n,ψ_n^∗ has a partition
I_n,ψ_n^∗=J_n1( a_n1^',a_n2^')
+J_n2( a_n2^',a_n3^') +⋯+J_n,2k^'+1( a_n,2k^'+1^',a_n,2k^'+2^')
satisfying all conclusions of Lemma <ref> when we regard k^' as k there, where k^' is independent of n, a_n1^'=a_n,ψ_n^∗ and a_n,2k^'+2^'=b_n,ψ_n
^∗. Then f_n has no branch point in ∪_j=1^k^'
J_n,2j^∘⊂Δ∩ I_n,ψ_n^∗ and we have
f_n restricted to a neighborhood of J_n,2j^∘ in
Δ is a homeomorphismand ( f_n,J_n,2j) is straight for every j=1,…,k^'.
Step 4.8 Discussion of Case 3: (5) Complete the discussion of Case 3
Corresponding to (<ref>), we write
I_n^∗=J_n2( a_n2^',a_n3^')
+⋯+J_n,2k^'( a_n,2k^'^',a_n,2k^'+1^') .
Let γ_n0=γ_n0( a_n2^',a_n,2k^'
+1^') be the arc on ∂Δ from a_n2^'
to a_n,2k^'+1^' and let γ_n0^'=γ
_n0^'( a_ni_0,a_ni_2) be the smallest arc of
∂Δ containing γ_n0 such that the endpoints of
γ_n0^' are contained in {a_nj}_j=1^m. Then we can
write
γ_n0^' =γ_n0^∗+γ_n0^∗∗,
γ_n0^∗ =γ_n0^∗( a_ni_0,a_n2^') +γ_n0^∗( a_n2^',a_n2)
=γ_n0^∗( a_ni_0,a_n2^') +α
_n1( a_n2^',a_n2)
γ_n0^∗∗ =γ_n0^∗∗( a_n2
,a_n,2k^'+1^') +γ_n0^∗∗(
a_n,2k^'+1,a_ni_2) =α_n2( a_n2
,a_n,2k^'+1^') +γ_n0^∗∗(
a_n,2k^'+1,a_ni_2) .
with
i_0 (mod m) <2<i_2.
Then a_n2∈γ_n0^∘∩{a_nj}_j=1^m≠∅.
If J_n1 is a point, then a_n2^'=a_n1^'∈α_n1
and a_ni_0=a_n1, and thus γ_n0^∗=α_n1 and
f_n(I_n^∗∘∩γ_n0^∗)=τ_n,ψ_n^∗
^∘∩ c_n1( f( a_n2^') ,q_n2)
=∅,
thus
I_n^∗∘∩γ_n0^∗=∅.
If J_n1 is not a point, then a_n1^'=a_n1, a_n2^'=a_ni_0 and γ_n0^∗=-J_n1( a_ni_0,a_n1)
+α_n1( a_n1,a_n2) . Since f_n maps ∂Δ_n0 homeomorphically onto ∂ K_n,ψ_n^∗,
(<ref>) also holds. Thus in both cases (<ref>) holds. For the same
reason, we may show that I_n^∗∘∩γ_n0^∗∗=∅. Thus we have I_n^∗∘∩γ_n0^'=∅, and (e2) of Lemma <ref> holds for Σ=Σ_n.
Now we have proved that -I_n^∗,γ_n0,γ_n0^'
,Σ_n,k^' satisfy all hypothesis of Lemma <ref> as
I,γ_0,γ_0^',Σ,k there. Let {Δ
_nj} _j=0^k^' be the k^'+1 components of
Δ\ I_n^∗ such that Δ_n0 is the the only one on
the right hand side of I_n^∗, which we discussed above, and all
others are on the left hand side of I_n^∗, and moreover we have
F_nj^'=( f_n,Δ_nj) ∈ℱ(L,m-1) for every j=1,…,k^'.
Now we deduce a contradiction. It is clear that we have
∑_j=0^k^'R(F_nj^')=R(Σ_n).
ε_n=∑_j=0^k^'L(f_n,J_n,2j)→0(
n→∞) ,
and
∑_j=0^k^'L(∂ F_nj^')≤ L(∂Σ
_n)+2ε_n,
equality holding if and only if all J_n,2j+1,j=0,…,k^', are
points. Thus we can conclude, by (<ref>) and Claim <ref>, that the
sequence Σ_n is decomposable in ℱ( L,m) (by
Definition <ref> (c)). This contradicts Lemma <ref>. We
have completed the discussion of Case 3 and obtained a contradiction.
We have proved that each of the three cases implies a contradiction, and thus
(<ref>) fails. Therefore Assertion <ref> holds true, and we have
proved Assertion <ref> in the case a∈α_01^∘.
Step 5. Discussion in the case a∈∂α_01
Now we prove Assertion <ref> under the condition
a=a_01∈∂α_01={a_01,a_02}.
The case a=a_02 can be discussed in the same way as the case a=a_01.
Now that we have already proved Assertion <ref> with (<ref>), we
conclude that, under the condition a=a_01∈∂α_01, Assertion
<ref> holds for all b∈( ∂Δ) \{a_0j}_j=1^n. So, to prove Assertion <ref> under (<ref>), it
remains to prove the following special case of (<ref>): for each
j_0=2,3,…,m,
lim_n→∞inf d_f_n(a_01,a_0j_0)>0
if a_0j_0≠ a_01.
Note that j_0≠1 may not imply a_0j_0≠ a_01: when
α_0m is a point, for example, a_0m=a_01.
Since d_f_n(a_nj,a_0j)→0 for each j=1,2,…,m,
(<ref>) is equivalent to
lim_n→∞inf d_f_n(a_n1,a_nj_0)>0if a_0j_0≠ a_01.
Now we fix a j_0 with
a_0j_0≠ a_01.
By Lemma <ref>, there exists a d_f_n-shortest path J_n from
a_n1 to a_nj_0. To prove (<ref>) we assume the contrary
lim_n→∞inf d_f_n(a_n1,a_nj_0)=0. Then by taking
subsequence, we may assume
lim_n→∞d_f_n(a_n1,a_nj_0)=lim_n→∞L(f_n,J_n)=0.
If J_n has a subsequence J_n_k such that J_n_k⊂∂Δ, then we have by (<ref>) that J_n_k tends to the
point a_01, which implies a_0j_0=a_01, contradicting (<ref>)
(note that Γ_n=( f_n,∂Δ) are
parametrized by length for all n=0,1,2,…). So we may assume J_n
∩Δ≠∅ for all n. Then by (<ref>) we may assume
J_n∩ f^-1(E_q)∩Δ=∅,n=1,2,….
By Lemma <ref> and taking subsequence, we can assume that J_n has
a partition
J_n=J_n1( a_n1^',a_n2^') +J_n2(
a_n2^',a_n3^') +J_n3( a_n3^'
,a_n4^') +⋯+J_n,2k^'+1( a_n,2k^'+1^',a_n,2k^'+2^')
with a_n1^'=a_n1, a_n,2k^'+2^'=a_nj_0,
a_nj^'→ a_0i_jfor some a_0i_j∈{a_0j}_j=1^m for each j=1,2,…,2k^'+2, and k^'<m. We show that there exists j_1∈{1,…,k^'}, such that
the two endpoints of J_n,2j_1 converges to distinct points, say,
a_n,2j_1^'→ a_0,i_2j_1≠ a_0i_2j_1+1
← a_n,2j_1+1^' as n→∞.
Assume that this fails. Then each pair a_n,2j^',a_n,2j+1^', the endpoints of J_n,2j, converges to the same point a_0i_2j
=a_0i_2j+1, for j=1,…,k^'; and, as an arc on ∂Δ, by (<ref>) each J_n,2j-1( a_n,2j-1,a_n,2j)
converges to the same point a_n,i_2j-1=a_n,i_2j, for j=1,2,…
,k^'+1. Therefore, all a_nj^' converge to the same point,
and thus
a_01=lim_n→∞a_n1=lim_n→∞a_n1
^'=lim_n→∞a_n,2k+2^'=lim_n→∞a_nj_0=a_0j_0
This contradicts (<ref>).
Thus the segment J_n,2j_1=J_n,2j_1( a_n,2j_1^',a_n,2j_1+1^') of J_n satisfies Lemma <ref>
for the case k=1 there, so that the two endpoints a_0i_2j_1 and
a_0i_2j_1+1 of J_n,2j_1 belong to {a_nj}_j=1^m. Thus
by Lemma <ref> for the case k=1, J_n,2j_1 divides Σ
_n into two subsurfaces Σ_n0 and Σ_n1 contained in
ℱ_r(L,m). By (<ref>) we have
R(Σ_n0)+R(Σ_n1)=R( Σ_n) .
On the other hand we also have
L(∂Σ_n0)+L(∂Σ_n1)=L(∂Σ_n
)+2L(f_n,J_n,2j_1)→ L(f_0,∂Δ)≤ L, and
lim_n→∞inf L(∂Σ_nj)≥min{
L(f_0,γ_0),L(f_0,γ_1)} >0,j=1,2,
where γ_0 and γ_1 are the two arcs of ∂Δ with
the two distinct endpoints a_0i_2j_1 and a_0i_2j_1+1. Then
Σ_n is decomposable in ℱ_r(L,m)⊂ℱ(L,m),
by Definition <ref> (b). This contradicts the conclusion of Lemma
<ref>, and hence (<ref>) cannot hold. We have proved Assertion
<ref> completely and so (i) is completely proved.
Step 6. Theorem <ref> (i) implies Theorem <ref>
(ii)
If (ii) is not true, then there exist sequences x_n1∈ I and x_n2∈
J such that
x_n1→ x_01∈ I,x_n2→ x_02∈ J,
and
d_f_n(x_n1,x_n2)→0,
as n→∞. On the other hand, for the shorter arc α
_j=α_j( x_nj,x_0j) from x_nj to x_0j on
∂Δ,j=1,2, we have by Lemma <ref> (ii) d_f_n(
x_nj,x_0j) →0, as n→∞, for j=1,2.
Then we have
d_f_n(x_01,x_02)≤ d_f_n( x_01,x_n1) +d_f_n
( x_n1,x_n2) +d_f_n( x_n2,x_02)
→0.
But it is clear that x_01≠ x_02, and we obtain a contradiction by (i)
of the Theorem <ref>, and thus Theorem <ref> (ii) holds
true.
§ EXISTENCE AND PROPERTY OF EXTREMAL SURFACES IN ℱ
_R( L,M)
The goal of this section is to prove the following theorem.
Let L∈ℒ be a positive number and m be a sufficiently
large positive integer. Then there exists a unique positive number L_1
with
L_1≤ L
and a precise extremal surface Σ_L_1in ℱ_r(L,m)
such that L(∂Σ_L_1)=L_1.
The proof is divided into 5 steps. The key points of the proof are Theorems
<ref> and <ref>, which follows from Theorem <ref>.
Step 1. Notation, simple discussions and the idea of the proof
of Theorem <ref>.
When L≤2δ_E_q, let L_1=L and let Σ_L be a simple
disk on S whose interior is outside E_q. Then L_1 and Σ
_L_1 are the desired number and surface, by Theorem <ref>. So
throughout the proof, we assume
L≥2δ_E_q.
The proof for this case is complicated. We will first state the idea of the
proof, after some preparation.
By Corollary <ref>, there exists a precise extremal sequence Σ_n
of ℱ_r^'(L,m), such that Σ_n is also a precise
extremal sequence of ℱ_r(L,m) and ℱ(L,m), and,
moreover,
L_1=lim_n→∞inf L(∂Σ_n)≥2δ_E_q.
By definition of ℱ_r^'(L,m), for the number d^∗=d^∗( E_q,m) , which depends only on m and E_q and
given by Theorem <ref>, we have
_maxf_n≤ d^∗for all n=1,2,…
We assume that all ∂Σ_n=( f_n,∂Δ)
are ℱ(L,m)-partitioned, and then by definition, for each n,
∂Δ has an ℱ(L,m)-partition
∂Δ=α_n1( a_n1,a_n2) +α_n2(
a_n2,a_n3) +⋯+α_nm( a_nm,a_n1)
for ∂Σ_n, and ∂Σ_n has the corresponding
ℱ( L,m)-partition
∂Σ_n=c_n1( q_n1,q_n2) +c_n2(
q_n2,q_n3) +⋯+c_nm( q_nm,q_nm) ,
consisting of m circular arcs as in Definition <ref>. Then f_n
restricted to each α_nj is the SCC arc c_nj, and by Remark
<ref>, we may assume
a_n1=1 for all n=1,2,….
Since for any homeomorphism h of Δ onto itself and
Σ_n^h=( f_n∘ h,Δ) ,Σ_n
^h is also a precise extremal sequence of ℱ_r(L,m), we may
assume, by taking a subsequence and applying the Arzelà-Ascoli theorem to curves
parametrized by length, the following.
All Γ_n=∂Σ_n are parametrized by length and
uniformly converge to a closed curve Γ_0=(f_0,∂Δ), and for
each n≥1 and j∈ M={1,2,…,m}, c_nj=( f_n,α
_nj) is an SCC arc.
In fact, parametrized by length, ( f_n,∂Δ) is a
linear mapping in length, and so is (f_0,∂Δ). Thus f_n
restricted to each α_nj is the simple circular arc c_nj for all
n∈ℕ^0={0}∪ℕ, where ℕ is
the set of positive integers, and j∈ M={1,2,…,m}, and we have by
Conditions <ref> and <ref> that:
For each j∈ M={1,2,…,m} and n∈ℕ^0,
f_n^-1 has a unique univalent branch g̃_nj defined on
c_nj^∘ with g̃_nj( c_nj^∘)
=α_nj^∘, such that g̃_nj uniformly converges to
g̃_0j:c_0j^∘→α_0j^∘, and thus for
any interval I_n⊂ c_nj^∘ which converges to I_0⊂
c_0j^∘, g̃_nj(I_n) converges to g̃_0j
(I_0). When c_nj is not closed, g̃_nj can be
extended to be homeomorphic on c_nj.
The idea of proving Theorem <ref>. Though
Γ_n=( f_n,∂Δ) converges to
Γ_0=(f_0,∂Δ) uniformly, f_n may not
converge in Δ. The key to the proof is to construct a surface
Σ_L_1=( f_L_1,Δ) ∈ℱ_r^'( L_1,m) such that
R(Σ_L_1)=lim R(Σ_n)=H_L,m,
f_L_1|_∂Δ=f_0 and L(f_L_1,∂Δ)=Γ_0=L(f_0,∂Δ).
This f_L_1 will be obtained by modifying some f_n for sufficiently large n. The key to this modification is
that Σ_n=( f_n,Δ) has a subsequence,
which will be still denoted by Σ_n, such that the following
holds:
(a) There exists a closed Jordan domain Δ_n contained
in Δsuch that the curves ( f_n,∂Δ
_n) contain no point of E_q and are equivalent
each other, and R( f_n,Δ_n) are equal to each
other.
(b) For 𝒜_n=Δ\Δ_n^∘, there exists a sequence of surfaces B_n=( F_n
,𝒜_n) such that f_n|_∂Δ_n
=F_n|_∂Δ_n, f_0=F_n|_∂Δ,
and R(f_n,A_n)-R(F_n,A_n)→0 as
n→∞.
(c) For the surface Σ_n^∗=( f_n^∗
,Δ) , in which f_n^∗ is
defined by f_n on Δ_n and by F_n on
A_n, H( Σ_n^∗) is the constant
H_L,m for all n.
The key ingredients of this idea are Theorems <ref> and <ref>,
which will be proved later in this section and which, together with Theorem <ref>,
deduce (a)–(c). The proofs of these two theorems are applications of Theorem
<ref>.
By (<ref>), (<ref>) and Conditions <ref> and <ref>,
∂Δ has the 𝒞( L,m)-partition
∂Δ=α_01( a_01,a_02) +α_02(
a_02,a_03) +⋯+α_0m( a_m,a_01) ,
for Γ_0 so that α_01 initiates at a_01=1∈∂Δ, and Γ_0 has the corresponding 𝒞(
L,m)-partition
Γ_0=c_01( q_01,q_02) +c_02( q_02
,q_03) +⋯+c_0m( q_0m,q_01) ,
so that
For each j=1,…,m,f_0 restricted to α_0j is the SCC
arc c_0j, c_nj converges to c_0j uniformly, and thus c_0j is a
point iff α_0j is a point.
By assumption we have
2δ_E_q≤ L(Γ_0)=L_1=lim_n→∞L(Γ
_n).
Since ∂Σ_n and Γ_0 are parametrized by length and
a_n1=a_01=1, we have
For each j=1,2,…,m, α_nj converges to α_0j
as well and thus α_nj converges to a point iff α_0j and
c_0j are both point-arcs. Moreover, for any sequence of intervals
[θ_n1,θ_n2] of real numbers with [θ_n1,θ
_n2]→[ θ_01,θ_02] and for the
sequence of arcs I_n={e^√(-1)θ:θ∈[θ
_n1,θ_n2]} of ∂Δ,
L(f_n,I_n)→ L(f_0,I_0).
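As a minimal sketch of this last convergence (again assuming that Γ_n and Γ_0 are parametrized with constant speed L(Γ_n)/2π and L(Γ_0)/2π respectively), one may write
L(f_n,I_n)=((θ_n2-θ_n1)/2π) L(Γ_n), L(f_0,I_0)=((θ_02-θ_01)/2π) L(Γ_0),
so that
|L(f_n,I_n)-L(f_0,I_0)|≤(1/2π)( |(θ_n2-θ_n1)-(θ_02-θ_01)| L(Γ_n)+(θ_02-θ_01) |L(Γ_n)-L(Γ_0)| ) →0,
since [θ_n1,θ_n2]→[θ_01,θ_02] and L(Γ_n)→ L(Γ_0)=L_1 as stated above.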
Recall that M={1,…,m}. Then there exists a subset M_0={i_1
,i_2,…,i_m_0}of M with
1≤ i_1<i_2<…<i_m_0≤ m,
such that j∈ M_0 iff α_0j is not a point. Note that
α_0j is a point iff c_0j is.
The partition (<ref>) has a simplified partition
∂Δ=α_0i_1( a_0i_1,a_0i_1+1)
+α_0i_2( a_0i_2,a_0i_2+1) +⋯+α
_0m_0( a_i_m_0,a_i_m_0+1) ,
with m_0≤ m and a_01=⋯=a_0i_1-1=a_0i_1=1, such that
all point-arcs in (<ref>) are deleted, and the partition (<ref>) also
has a simplified partition:
Γ_0=c_0i_1( q_0i_1,q_0i_1+1) +c_0i_2
( q_0i_2,q_0i_2+1) +⋯+c_0m_0(
q_i_m_0,q_i_m_0+1) ,
such that all point-arcs in (<ref>) are also deleted and that f_0
restricted to each a_0j for j∈ M_0={i_1,…,i_m_0} is the
SCC arc c_0j.
By Lemma <ref> and Theorem <ref>, there exists a δ_1>0
such that for all n,
δ_1<d_f_n(f^-1(E_q)∩Δ,∂Δ),
and for sufficiently[Do not confuse α with a: although they
look similar from a distance, up close they are different. The three distances
in the curly brackets are between two arcs, between an arc and a point, and between two points.]
large n,
δ_1<min_j∈ M_0{ min_{i,j}⊂ M_0, i≠ j d_f_n( α_0i,α_0j) , min_i∈ M_0, j∈ M, a_0j∉α_0i d_f_n(α_0i,a_0j) , min_{i,j}⊂ M, a_0i≠ a_0j d_f_n(a_0i,a_0j) } .
It is clear that
δ_2=1/3min{ d(w_1,w_2):{w_1,w_2}⊂
E_q∪{q_0j}_j=1^m and w_1≠ w_2} >0.
On the other hand we have
δ_3=min_j∈ M_0L(c_0j)>0.
Let δ be a positive number with
δ<min{δ_1,δ_2,δ_3}/12π( d^∗+1) m.
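As a minimal side computation (using only this choice of δ together with m≥1 and d^∗≥0), we record two consequences of the form used repeatedly in the sequel:
4π( d^∗+1) δ<min{δ_1,δ_2,δ_3}/(3m)≤δ_1/3 and 2πδ<min{δ_1,δ_2,δ_3}/(6( d^∗+1) m)≤δ_2.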
Then we have
( D(q_0j,δ)\{q_0j}) ∩
E_q=∅,j∈ M,
and it is clear that:
For each j∈ M_0, c_0j is divided into three arcs:
c_0j=c_0j,δ^1+c_0j,δ^2+c_0j,δ^3=c_0j,δ
^1( q_0j,q_0j,δ^1) +c_0j,δ^2(
q_0j,δ^1,q_0j,δ^2) +c_0j,δ^3(
q_0j,δ^2,q_0,j+1)
with c_0j,δ^2=c_0j\ D({q_0j,q_0,j+1},δ), and
each α_0j corresponding to c_0j is divided into three arcs
α_0j=α_0j,δ^1+α_0j,δ^2+α_0j,δ^3=α_0j,δ^1( a_0j,a_0j,δ^1)
+α_0j,δ^2( a_0j,δ^1,a_0j,δ^2)
+α_0j,δ^3( a_0j,δ^2,a_0,j+1) ,
with c_0j,δ^i=( f_0,α_0j,δ^i) for
i=1,2,3, say,
f_0(a_0j)=q_0j,f_0(a_0j,δ^1)=q_0j,δ^1,f_0
(a_0j,δ^2)=q_0j,δ^2,f_0(a_0,j+1)=q_0,j+1;
and thus, for sufficiently large n, c_nj=c_nj( q_nj
,q_n,j+1) is divided into three arcs by ∂ D({q_0j
,q_0,j+1},δ):
c_nj=c_nj,δ^1+c_nj,δ^2+c_nj,δ^3=c_nj,δ
^1( q_nj,q_nj,δ^1) +c_nj,δ^2(
q_nj,δ^1,q_nj,δ^2) +c_nj,δ^3(
q_nj,δ^2,q_n,j+1) ,
with d(q_nj,δ^1,q_0j)=δ, d(q_nj,δ^2,q_0,j+1
)=δ and
c_nj,δ^2=c_nj,δ^2( q_nj,δ^1,q_nj,δ
^2) =c_nj\ D({ q_0j,q_0,j+1} ,δ),
and α_nj=α_nj( a_nj,a_nj) , corresponding to
c_nj, is divided into three arcs
α_nj=α_nj,δ^1+α_nj,δ^2+α_nj,δ^3=α_nj,δ^1( a_nj,a_nj,δ^1)
+α_nj,δ^2( a_nj,δ^1,a_nj,δ^2)
+α_nj,δ^3( a_nj,δ^2,a_n,j+1) ,
such that f_nrestricted to α_nj,δ^iis a homeomorphism
onto c_nj,δ^i, say,
c_nj,δ^i=( f_n,α_nj,δ^i) ,i=1,2,3,
and
f_n(a_nj)=q_nj,f_n(a_nj,δ^1)=q_nj,δ^1,f_n
(a_nj,δ^2)=q_nj,δ^2,f_n(a_nj+1)=q_nj+1;
and therefore, we have
c_nj,δ^i→ c_0j,δ^i,α_nj,δ
^i→α_0j,δ^i
as n→∞, for i=1,2,3.
Thus Σ_n and Γ_0 satisfies (A)–(D) in Theorem <ref>.
Then we have
For two sequences I_nj of arcs on ∂Δ such that
I_nj→ I_0j as n→∞ for j=1,2, we have
lim_n→∞inf d_f_n( I_n1,I_n2)
=lim_n→∞inf d_f_n( I_01,I_02) .
By definition, α_0j,δ^2,j∈ M_0, are disjoint compact arcs
in ∂Δ, α_nj→α_0j, α_nj,δ^2→α_0j,δ^2 and α_0j,δ^2
⊂α_0j^∘. Then by Theorem <ref> and Claim <ref> we
may assume by taking subsequence, that
δ_4=min{ min_{i,j}⊂ M_0, i≠ j inf_n∈ℕ{ d_f_n( α_0i,δ^2,α_0j,δ^2) } , min_j∈ M_0 inf_n∈ℕ d_f_n( ( ∂Δ) \α_nj,α_nj,δ^2) } >0.
Then, by the relation α_ni,δ^2⊂(
∂Δ) \α_nj for { i,j}⊂ M_0, i≠ j, we have
min_{ i,j}⊂ M_0,i≠ jd_f_n(
α_ni,δ^2,α_nj,δ^2) ≥δ_4>0.
It is clear that we may assume that δ is small enough at first such
that
L(c_0j,δ^1)=L(c_0j,δ^3)<2πδ for all j∈
M_0.
By Conclusions <ref> and <ref>, Claim <ref>, and (<ref>),
we have for each j∈ M_0,
d_f_n( a_0j,a_nj,δ^1) ≤ d_f_n(
a_0j,a_nj) +d_f_n( a_nj,a_nj,δ^1)
≤ d_f_n( a_0j,a_nj) +L(f_n,α_nj,δ
^1)→0+L(c_0j,δ^1)<2πδ,
as n→∞, and for the same reason
d_f_n( a_0,j+1,a_nj,δ^2) →
L(c_0j,δ^3)<2πδ,
as n→∞, for each j∈ M_0. Thus we have for sufficiently
large n,
max_j∈ M_0max{ d_f_n( a_0j,a_nj,δ
^1) ,d_f_n( a_n,j+1,δ^2,a_0,j+1)
} <2πδ.
For each j∈ M_0, we let C_0j be the circle determined by c_0j and write C_0j=∂ D( p_j,r_j). Then r_j≤π/2 and
p_j is on the left hand side of C_0j. For any positive number
ε≪δ with
ε<min{δ,δ_1,δ_2,δ_3,δ_4
}/12π( d^∗+1) m,
let C_0j,±ε=∂ D(p_j,r_j±ε)and write
𝒜_0j,±ε=D(C_0j,ε), which is the
ε neighborhood of C_0j on S:
𝒜_0j,±ε:r_j-ε<d(w,p_j)<r_j
+ε,
and let
R_j,δ,ε=D(c_0j,ε)\
D({q_0j,q_0,j+1},δ),
which is compact and is the component of 𝒜_0j,±ε\ D({q_0j,q_0,j+1},δ) containing
c_0j\ D({q_0j,q_0,j+1},δ). Then we have
∂𝒜_0j,±ε=C_0j,-ε∪
C_0j,+ε.
Recall that D(X,δ)=∪_x∈ XD(x,δ).
By (<ref>), (<ref>) and (<ref>), 𝒞_δ={∂ D(q_0j,δ):j∈ M} consists of disjoint circles. Then
we may assume that the positive numbers δ and ε≪δ
are small enough such that the following holds.
For each circle J_1 of 𝒞_δ and each circle
J_2of the 3m_0 circles { C_0j,±ε:j∈
M_0}∪{ C_0j:j∈ M_0} , either J_1∩
J_2=∅ or J_1 and J_2 intersect almost perpendicularly; the
closed domain R_j,δ,ε,j∈ M_0, defined by (<ref>),
is a quadrilateral enclosed by four circular arcs contained in
C_0j,-ε, ∂ D(q_0j,δ), C_0j,+ε and
∂ D(q_0,j+1,δ):
∂ R_j,δ,ε =-c_0j,δ,-ε^2(
q_0j,δ,-ε^1,q_0j,δ,-ε^2)
+τ_j,δ,ε^1( q_0j,δ,-ε
^1,q_0j,δ,ε^1)
+c_j,δ,ε^2( q_0j,δ,ε
^1,q_0j,δ,ε^2) +τ_j,δ,ε
^2( q_0j,δ,ε^2,q_0j,δ,-ε
^2) ;
and
min_1≤ j≤ m_0L(c_0j,δ,-ε^2)≥4/5
min_1≤ j≤ m_0L(c_0j)>5δ.
For each j∈ M_0, it is clear that for sufficiently large n,
q_nj,δ^1 and q_nj,δ^2 are contained in the interior of
τ_j,δ,ε^1and τ_j,δ,ε^2
respectively.
Step 2 Two lifting results: Theorems <ref> and <ref>
Let t_0=2π( d^∗+1) sinδ, which is (
d^∗+1) times the length of a circle on S with radius
δ, and for j∈ M let
ζ_nj,δ^t_0=ζ_nj,δ^t_0(t),t∈[
0,t_0] ,
be the locally simple path which describes -∂ D(q_0j,δ)
d^∗+1 times, parametrized by length, oriented clockwise, and from
ζ_nj,δ^t_0(0)=q_n,j-1,δ^2 to itself, say,
ζ_nj,δ^t_0(t_0)=q_n,j-1,δ^2.
For each j∈ M, there exist two numbers 𝔦_1=φ
_1( j) and 𝔦_2=φ_2( j)
in M_0, which are uniquely determined by j, such that 𝔦
_1+1≤ j≤𝔦_2, and that each arc α_0i with i∈
M and 𝔦_1+1≤ i≤𝔦_2-1 is a point, when
𝔦_2>𝔦_1+1. In other words, α_0𝔦
_1 and α_0𝔦_2 are terms in (<ref>) having
positive length and joined at a_0j, and α_0𝔦_1 is
before α_0𝔦_2 in the direction of ∂Δ. For
example, when j∈ M_0, we have φ_2( j) =j and
φ_1( j+1) =j, and it is clear that
φ_1(M)=φ_1(M_0)=φ_2(M)=φ_2(M_0)=M_0.
In the remaining part of this section, the discussion of the case M_0
⫋ M only makes the argument more complicated, without deeper
meaning. The reader may read the arguments in the special case
M_0=M, though we discuss the general case for completeness. When M_0=M, one
has 𝔦_1=φ_1(j)=j-1 and 𝔦_2=φ
_2(j)=j for all j∈ M.
We first prove the following theorem.
Let j∈ M. Then for sufficiently large n, there
exists a number t_nj∈(0,t_0), such that, for 𝔦
_1=φ_1( j) and 𝔦_2=φ_2(
j) , the following hold.
(i) ζ_nj,δ^t_nj=ζ_nj,δ^t_0|_[
0,t_nj] has an f_n-lift η_nj,δ^t_nj
=η_nj,δ^t_nj(t), t∈[0,t_nj], from a_n𝔦
_1,δ^2to a_n𝔦_2,δ^1, say, η
_nj,δ^t_nj(0) is the terminal point a_n𝔦_1,δ^2 of α_n𝔦_1,δ^2, η_nj,δ^t_nj
(t_nj) is the initial pointa_n𝔦_2,δ^1 of
α_n𝔦_2,δ^2, and
f_n( η_nj,δ^t_nj(t)) =ζ_nj,δ^t_nj
(t), t∈[0,t_nj].
(ii) The interior η_nj,δ^t_nj∘ of η_nj,δ^t_njis contained in Δ, say, η_nj,δ^t_nj∘(t)∈Δ for all t∈(0,t_nj).
(iii)
η_nj,δ^t_nj⊂ D_f_n(a_0j,t_nj+2πδ)⊂
D_f_n(a_0j,δ_1/3),
and for each pair {i,j}⊂ M with a_0i≠ a_0j
η_ni,δ^t_ni∩η_nj,δ^t_nj=∅.
(iv) η_nj,δ^t_nj is a simple arc in Δ and
η_nj,δ^t_nj∘∩ f_n^-1(E_q)=∅,
and thus f_n has no branch point on η_nj,δ^t_nj
=η_nj,δ^t_nj( a_n𝔦_1,δ^2
,a_n𝔦_2,δ^1)(by convention, η_nj,δ^t_nj( a_n𝔦_1,δ^2,a_n𝔦_2
,δ^1) indicates η_nj,δ^t_nj is a path from
a_n𝔦_1,δ^2to a_n𝔦_2,δ^1).
By (<ref>) and Definition <ref> (c) (iv) for ℱ(
L,m) ⊃ℱ_r( L,m) we have
ζ_nj,δ^t_0 never passes any point of E_q
and for sufficiently large n, f_n is homeomorphic in neighborhoods of
a_n𝔦_1,δ^2 and a_n𝔦_2,δ^1 in
Δ. Thus ζ_nj,δ^t_0 never passes through any
branch point of f_n.
From this condition we have: there exists a maximal number t_nj∈
(0,t_0] satisfying the following condition.
(A) The part ζ_nj,δ^t_nj of ζ_nj,δ^t_0 has an f_n-lift η_nj,δ^t_nj(t) starting from
a_n𝔦_1,δ^2∈α_n𝔦_1,δ^2(recall that a_n𝔦_1,δ^2 is the terminal point of
α_n𝔦_1,δ^2).
(B) ζ_nj,δ^t_nj∘⊂Δ.
By Condition <ref>, the lift η_nj,δ^t_nj is uniquely
determined by t_nj and the initial point η_nj,δ
(0)=a_n𝔦_1,δ^2and satisfies (<ref>). Since
_maxf_n≤ d^∗and t_nj is maximal, we have by Lemma
<ref> that
t_nj∈(0,t_0) and η_nj,δ^t_nj(t_nj)∈∂Δ,
We have proved that η_nj,δ^t_nj satisfies (ii) and (<ref>).
Now, we show that η_nj,δ^t_nj satisfies (iii). Since
η_nj,δ^t_nj(0)=a_n𝔦_1,δ^2 and
η_nj,δ^t_nj is parametrized by d_f_n-length, by
(<ref>) we have for sufficiently large n,
η_nj,δ^t_nj ⊂D_f_n(a_n𝔦
_1,δ^2,t_nj)⊂D_f_n(a_0j,t_nj+d_f_n
(a_n𝔦_1,δ^2,a_0j))
⊂ D_f_n(a_0j,t_nj+2πδ).
On the other hand, by (<ref>), we have
t_nj+2πδ<2πδ( d^∗+1) +2πδ<4πδ( d^∗+1) <δ_1/3.
Therefore (<ref>) holds, which with (<ref>) and Lemma <ref>,
implies that for each pair {i,j} in M with a_0i≠ a_0j
d_f_n(η_ni,δ^t_ni,η_nj,δ^t_nj) ≥
d_f_n( D_f_n(a_0i,δ_1/3),D_f_n(a_0j
,δ_1/3))
≥ d_f_n(a_0i,a_0j)-2δ_1/3>δ_1/3>0.
That is to say (<ref>) holds, and (iii) is proved.
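In detail, and only as a sketch of the arithmetic behind the last two displays: since t_nj<t_0=2π( d^∗+1) sinδ≤2π( d^∗+1) δ, we have
t_nj+2πδ<2π( d^∗+1) δ+2πδ≤4π( d^∗+1) δ<min{δ_1,δ_2,δ_3}/(3m)≤δ_1/3,
where the last two inequalities use the choice of δ and m≥1.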
By Condition <ref>, the f_n-lift η_nj,δ^t_nj is
simple, and thus (iv) is true.
It remains to prove (i). For sufficiently large n, it is clear that for each
i∈ M_0,α_ni⊂ D_f_n(α_0i,δ), and thus by
(<ref>) and Lemma <ref> we have, for i∈ M_0 with
i≠𝔦_1,𝔦_2, that
d_f_n( η_nj,δ^t_nj,α_ni) ≥
d_f_n( D_f_n(a_0j,δ_1/3),D_f_n(α
_0i,δ))
≥ d_f_n( a_0j,α_0i) -δ_1/3-δ,
and then by (<ref>) d_f_n( η_nj,δ,α
_ni) >δ_1/2. Then for sufficiently large n, η
_nj,δ^t_nj( t_nj) ∩α_ni≠∅
holds only for i=𝔦_1 or 𝔦_2. Thus, when t
tends to t_nj in [0,t_nj], η_nj,δ(t) tends to
α_n𝔦_1 or α_n𝔦_2, and it is clear
that we can only have η_nj,δ(t_nj)=a_n𝔦_2^1∈α_n𝔦_2. (i) has been proved and Theorem <ref> is
proved completely.
By taking subsequence, we may assume that for each j∈ M there exists
v_j∈ℕ^0, independent of n, such that
d^∗+1≥ t_nj/(2π sinδ) > v_j≥ t_nj/(2π sinδ)-1.
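In other words (a restatement only, not used directly elsewhere): since a single description of -∂ D(q_0j,δ) has length 2πsinδ, the display above says that v_j is the unique integer in the interval [ t_nj/(2πsinδ)-1, t_nj/(2πsinδ) ); heuristically, v_j is the number of complete turns around ∂ D(q_0j,δ) made by ζ_nj,δ^t_nj, as is made precise in the decomposition below.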
For each j∈ M_0, we write
τ_nj,δ,ε^1,L=τ_j,δ,ε^1(
q_0j,δ,-ε^1,q_nj,δ^1) ,τ_nj,δ
,ε^2,L=τ_j,δ,ε^2( q_nj,δ
^2,q_0j,δ,-ε^2) ,
say, τ_nj,δ,ε^1,L is the arc of τ_j,δ
,ε^1 from q_0j,δ,-ε^1 to q_nj,δ
^1 and τ_nj,δ,ε^2,L is the arc of τ
_j,δ,ε^2 from q_nj,δ^2to q_0j,δ
,-ε^2. In other words, τ_nj,δ,ε^1,L and
τ_nj,δ,ε^2,L are the parts of τ_nj,δ
,ε^1 and τ_nj,δ,ε^2 on the left hand
side of c_nj, respectively. Then by Theorem <ref> and properties of
path lifts, we have the following.
For each j∈ M,η_nj,δ^t_nj=η_nj,δ
^t_nj(t), t∈[0,t_nj], is a simple path in Δ
from a_nφ_1( j) ,δ^2 to a_nφ
_2(j),δ^1,
η_nj,δ^t_nj∘=η_nj,δ^t_nj|_(0,t_nj)
⊂Δ,
and η_nj,δ^t_nj is the f_n-lift of the path
ζ_nj,δ^t_nj=ζ_nj,δ^t_0|_[0,t_nj]
=τ_nφ_1(j),δ,ε^2,L+ζ_j,δ,ε+τ_nφ_2( j) ,δ,ε^1,L,
where τ_nφ_1(j),δ,ε^2,L and τ
_nφ_2( j) ,δ,ε^1,L are defined by
(<ref>), and ζ_j,δ,ε can be expressed as
ζ_j,δ,ε= -∂ D(q_0j,δ) -∂ D(q_0j,δ)-…-∂ D(q_0j,δ)-κ (with v_j copies of -∂ D(q_0j,δ)),
in which each -∂ D(q_0j,δ) is regarded as a closed simple path
from q_0φ_1(j),δ,-ε^2 to itself, and -κ is
the simple arc of -∂ D(q_0j,δ) from q_0φ_1
(j),δ,-ε^2 to q_0φ_2(j),δ,-ε
^1, and ζ_j,δ,ε=-κ when v_j=0.
Since {v_j}_j=1^M is independent of n, and ζ_j,δ
,ε is starting from the point q_0φ_1(j),δ
,-ε^2∈ C_0φ_1(j),-ε∩∂ D(
q_0j,δ) to the point q_0φ_2(j),δ,-ε^1∈ C_0φ_2(j),-ε∩∂ D( q_0j
,δ) , which are also independent of n, we have:
For each j∈ M,ζ_j,δ,ε is a subarc of
ζ_nj,δ^t_nj|_[0,t_nj]=ζ_nj,δ^t_0
|_[0,t_nj] which is independent of n, and we can write
η_nj,δ,ε^t_nj=𝔱_nφ_1
(j),δ,ε^2+η_nj,δ,ε+𝔱
_nφ_2(j),δ,ε^1,
with
( f_n,𝔱_nφ_1(j),δ,ε^2)
=τ_nφ_1(j),δ,ε^2,L, ( f_n
,𝔱_nφ_2(j),δ,ε^1) =τ
_nφ_2(j),δ,ε^1,L
and
( f_n,η_nj,δ,ε) =ζ_j,δ
,ε=ζ_j,δ,ε( q_0φ_1(
j)) ,δ,-ε^2,q_0φ_2( j))
,δ,-ε^1) .
It is clear that for each j∈ M_0,
max{ L(τ_nj,δ,ε^1,L),L(τ_nj,δ,ε^2,L)}
=max{ L(f_n,𝔱_nj,δ,-ε^1),L(f_n,𝔱_nj,δ,-ε^2)} =ε+o(ε) as ε→0.
We write
η_nj,δ,ε=η_nj,δ,ε(
a_0φ_1( j)) ,δ,-ε^2,a_0φ
_2( j)) ,δ,-ε^1)
Now we prove the following Theorem.
For sufficiently large n, and each j∈ M_0, the
following hold:
(i) c_0j,δ,-ε^2 has an f_n-lift α
_nj,δ,-ε^2 such that the initial point of α
_nj,δ,-ε^2 equals the terminal point a_0φ
_2( j) ,δ,-ε^1=a_0j,δ,-ε
^1 of η_nj,δ,ε, and the terminal point a_0j,δ
,-ε^2 of α_nj,δ,-ε^2 equals the
initial point a_0φ_1(j+1),δ,-ε^2=a_0j,δ
,-ε^2 of η_n,j+1,δ,ε; and moreover,
α_nj,δ,-ε^2⊂ D_f_n(α_0j,δ
^2,2πε)⊂ D_f_n(∂Δ,δ_1).
(ii) For any other i∈ M_0 with i≠ j, α_ni,δ
,ε^2∩α_nj,δ,ε^2=∅.
We first show that (i) implies (ii). By (<ref>) we have for large enough
n and each pair of distinct i and j in M_0 that
d_f_n( α_ni,δ,-ε^2,α_nj,δ
,-ε^2) ≥ d_f_n( D_f_n(α_0i,δ^2,2πε),D_f_n(α_0j,δ^2,2πε
)) ,
and then, by Lemma <ref>, (<ref>) and (<ref>), we have
d_f_n( α_ni,δ,-ε^2,α_nj,δ
,-ε^2) ≥ d_f_n( α_0i,δ^2
,α_0j,δ^2,) -4πε≥δ_4-4πε>δ_4/2>0.
This implies (ii).
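For completeness, here is a minimal check of the last inequality δ_4-4πε>δ_4/2 under the choice of ε above: since ε<min{δ,δ_1,δ_2,δ_3,δ_4}/(12π( d^∗+1) m)≤δ_4/(12π( d^∗+1) m) and ( d^∗+1) m≥1, we have 4πε<δ_4/(3( d^∗+1) m)≤δ_4/3<δ_4/2.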
Consider the closed quadrilateral R_j,δ,ε defined by
(<ref>) for each j∈ M_0. By Conclusion <ref>, it is clear that
for sufficiently large n, the interior c_nj,δ^2∘ of
c_nj,δ^2=c_nj∩ R_j,δ,ε is contained in
R_j,δ,ε^∘ with ∂ c_nj,δ^2
⊂∂ R_j,δ,ε, and thus c_nj,δ^2
divides R_j,δ,ε into two closed quadrilaterals, which are
closed Jordan domains, and the one on the left hand side of c_nj,δ^2 is denoted by R_nj,δ,ε^L.
Since c_nj,δ^2 converges to c_0j,δ^2 as n→∞, c_nj,δ^2∩∂ R_j,δ,ε=c_nj
∩∂ R_j,δ,ε consists of the two endpoints
q_nj,δ^1 and q_nj,δ^2 of c_nj,δ^2, which are
terminal and initial points of τ_nj,δ,ε^1,L and
τ_nj,δ,ε^2,L respectively. By (<ref>) and
(<ref>)–(<ref>), we may assume (by taking subsequence) that
∂ R_nj,δ,ε^L has the partition
∂ R_nj,δ,ε^L=-c_0j,δ,-ε^2
+τ_nj,δ,ε^1,L+c_nj,δ^2+τ_nj,δ
,ε^2,L,
with c_nj,δ^2∩ c_0j,δ,-ε^2=∅,
corresponding to (<ref>).
We will show the following:
f_n^-1 has a univalent branch g̃
_nj,δ,ε defined on the closed quadrilateral R_nj,δ
,ε^L such that
g̃_nj,δ,ε(c_nj,δ^2)=g̃_nj,δ
,ε(c_nj∩ R_j,δ,ε)=g̃_nj,δ
,ε(c_nj∩ R_j,δ,ε^L)=α_nj,δ
^2,
and
g̃_nj,δ,ε(R_nj,δ,ε^L\
c_nj,δ^2)⊂Δ\ f_n^-1(E_q).
It is clear that there exists a family {τ_q,q∈ c_nj,δ^2}
of simple circular arcs which is a continuous fibration of R_nj,δ
,ε^L. Precisely speaking,
R_nj,δ,ε^L=∪_q∈ c_nj,δ^2τ_q,
τ_q_nj,δ^1=-τ_nj,δ,ε^1,L=-τ
_j,δ,ε^1( q_0j,δ,-ε^1
,q_nj,δ^1) ,
τ_q_nj,δ^2=τ_nj,δ,ε^2,L=τ
_j,δ,ε^2( q_nj,δ^2,q_0j,δ
,-ε^2) ,
τ_q∩τ_q^'=∅, for { q,q^'}⊂ c_nj,δ^2 with q≠ q^',
each τ_q is a simple circular path in R_nj,δ,ε^L
from q∈ c_nj,δ^2 to a point ψ( q) ∈
c_j,δ,-ε^2 with
L(τ_q)≤max{L(τ_nj,δ,ε^1,L),L(τ
_nj,δ,ε^2,L)}<πε for q∈
c_nj,δ^2.
Moreover, τ_q\{q}⊂ T_nj, where T_nj is the
(open) disk T_nj enclosed by the circle C_nj determined by c_nj.
It is clear that f_n restricted to a neighborhood of α_nj,δ
c_nj,δ^2 in the closed disk T_nj. Thus, for each
q∈ c_nj,δ^2, it is clear that τ_q=τ_q(
q,ψ( q) ) contains a maximal arc τ_q^'=τ_q( q,q^') such that τ_q^' has an
f_n-lift β_n,a=β_n,a( a,a^') with
τ_q^'=( f_n,β_n,a) and β_n,a
^∘⊂Δ. We show that a^'∈Δ, i.e., β
_n,a\{a}⊂Δ.
Since L(τ_q^')≤ L(τ_q)<πε we have
d_f_n( a,a^') ≤ L(τ_q^')<πε<δ_4/4,
and then, by a∈α_nj,δ^2, we have
d_f_n( ( ∂Δ) \α
_nj,a^') ≥ d_f_n( ( ∂Δ)
\α_nj,α_nj,δ^2) -d_f_n(
a,a^') >δ_4-πε>0.
This implies a^'∉( ∂Δ) \α_nj. On the other hand, q^'∈ T_nj=T_nj\
c_nj and thus a^'∉α_nj. We have proved that
a^'∈Δ. By Lemma <ref>, β_n,a is the f_n-lift of
the whole arc τ_q and f_n(a^')=ψ( q) , the
endpoint of τ_q.
Now that β_n,a is the f_n-lift of τ_q with β
_n,a\{a}⊂Δ we have that for each x∈β
_n,a\{a}, d_f_n( x,∂Δ) ≤
L(τ_q)<επ<πδ<δ_1. Thus we have β
_n,a\{a}∩ f_n^-1( E_q) =∅, and
thus
( R_nj,δ,ε^L\ c_nj,δ^2) ∩
E_q=∅.
and f_n has no branch value on R_nj,δ,ε
^L\ c_nj,δ^2. By Lemma <ref>, Claim <ref>
is proved.
We are still considering the fixed j∈ M_0. Then we have φ
_2( j) =j and φ_1( j+1) =j+1. Then the
edge τ_q_nj,δ^1=τ_q_nj,δ,ε^1^1,L of
∂ R_nj,δ,ε^Lis the arc τ_nφ
_2( j) ,δ,ε^1,L=τ_nj,δ,ε^1,L of ζ_nj,δ|_[0,t_nj] in (<ref>) and the edge
τ_q_nj,δ^2=τ_q_nj,δ,ε^2^2,L of
∂ R_nj,δ,ε^Lis the arc τ_nφ
_1( j+1) ,δ,ε^1,L=τ_n,j+1,δ
,ε^1,L of ζ_n,j+1,δ|_[0,t_n,j+1] in
(<ref>) for j+1. Thus the arc 𝔱_nφ_2
(j),δ,ε^1=𝔱_nj,δ,ε^1 in
(<ref>) equals β_n,a_nj,δ^1=g̃_nj,δ
,ε( τ_q_nj,δ^1^1,L) and
𝔱_nφ_1(j+1),δ,ε^2=𝔱
_n,j+1,δ,ε^2 in (<ref>) equals β_n,a_nj,δ^2=g̃_nj,δ,ε( τ_q_nj,δ^2
^1,L) ,since 𝔱_nj,δ,ε^1
\{ a_nj,δ^1} and 𝔱
_n,j+1,δ,ε^2\{ a_nj,δ^2}
contains no branch point of f_nand f_n is homeomorphic in a
neighborhood of α_nj,δ^2. Let α_nj,δ,-ε^2=g̃_nj,δ,ε( c_j,δ,-ε
^2) . Then α_nj,δ,-ε^2 satisfies (ii),
except (<ref>).
By (<ref>) we have R_nj,δ,ε^L⊂ D(
c_nj,δ^2,επ) , which implies
α_nj,δ,-ε^2⊂g̃_nj,δ.ε( R_nj,δ,ε) ⊂ D_f_n(α
_nj,δ^2,πε).
Since α_nj,δ^2 converges to α_0j,δ^2⊂α
_0j^∘ as n→∞, we have, by Conclusion <ref>,
D_f_n(α_nj,δ^2,πε)⊂ D_f_n
(α_0j,δ^2,2πε)
for sufficiently large n. Then (<ref>) holds and Theorem <ref> is proved.
By the way, by (<ref>) we have
( g̃_nj,δ,ε(R_nj,δ,ε
^L)\α_nj,δ^2) ∩ f_n^-1(E_q
)=∅.
It is clear that the following hold.
For each j∈ M_0, the definition of R_nj,δ
,ε^Lis valid for n=0, say, R_0j,δ,ε^L
is well defined and we have
∂ R_0j,δ,ε^L=-c_0j,δ,-ε^2
+τ_0j,δ,ε^1,L+c_0j,δ^2+τ_0j,δ
,ε^2,L,
where τ_0j,δ,ε^1,Land τ_0j,δ,ε^2,Lare the parts of τ_j,δ,ε^1 and
τ_j,δ,ε^2on the left hand side of c_0j.
Since for any neighborhood V of c_0j,δ^2 on S the closed
domains R_nj,δ,ε^Land R_0j,δ,ε^L
coincide outside V when n is large enough, (<ref>) implies
( R_0j,δ,ε^L\ c_0j,δ^2) ∩
E_q=∅.
Step 3 The construction of the Jordan curve γ_n,δ
,ε in Δ
Let
γ_n,δ,ε=∑_j∈ M_0( α_n,φ
_1( j) ,δ,-ε^2+η_nj,δ,ε) ,
and
Γ̃_δ,ε=∑_j∈ M_0( c_φ
_1( j) ,δ,-ε^2+ζ_j,δ,ε) ,
then by Theorems <ref> and <ref>, Claim <ref> and (<ref>),
we see that γ_n,δ,ε is a closed curve in Δ,
Γ̃_δ,ε is a closed curve on S, and,
Γ̃_δ,ε=( f_n,γ_n,δ
,ε) .
We assume that Γ̃_δ,ε and γ
_n,δ,ε are parametrized by length. We will show the claim:
For sufficiently large n,γ_n,δ,ε is a
simple curve in Δ which depends on n,δ and ε, while
Γ̃_δ,ε=( f_n,γ_n,δ
,ε) is a closed curve on S which is independent of n.
For each j∈ M_0, by Claim <ref> and the fact that c_j-1,δ
,-ε^2 is independent of n, Γ̃_δ
,ε is also independent of n, and so, to prove the claim, it
suffices to prove that γ_n,δ,ε is simple.
For each j∈ M_0, by Theorem <ref> (iv), η_nj,δ^t_nj
is simple, and then by Claim <ref>, η_nj,δ,εas a
subarc of η_nj,δ^t_njis also simple and has distinct
endpoints; and by Theorem <ref> (i) α_nj,δ,-ε^2
is simple with distinct endpoints. On the other hand, it is clear, by
Theorems <ref> and <ref>, that c_nj,δ,-ε^2
∩ζ_nj,δ,ε={q_j,δ,-ε^1} and
c_nφ_1( j) ,δ,-ε^2∩ζ
_nj,δ,ε={q_φ_1( j) ,δ
,-ε^2}. Thus α_nφ_1(j),δ,-ε
^2+η_nj,δ,ε and η_nj,δ,ε
+α_nj,δ,-ε^2 are simple arcs. Therefore by Theorem
<ref> (iii) and Theorem <ref> (ii) we have:
For each j∈ M_0, the arcs α_nφ_1
(j),δ,-ε^2+η_nj,δ,ε+α_nφ
_2(j),δ,ε^2 and η_nφ_1(j),δ
,-ε+α_nφ_1(j),δ,-ε^2+η
_nj,δ,εare simple for sufficiently large n.
Let j∈ M. By (<ref>) and (<ref>), we have η_nj,δ
,ε⊂η_nj,δ⊂ D_f_n(a_0j,δ_1
/3)andα_nφ_l(j),δ,-ε^2⊂
D_f_n(α_0φ_l(j),δ^2,2πε) with l=1,2,
for large enough n. Then, for every pair of i and j with i≠φ_1(j),φ_2(j)and i∈ M_0 we have by Lemma <ref>
and (<ref>)
d_f_n(α_ni,δ,-ε^2,η_nj,δ,ε)
≥ d_f_n( D_f_n(α_0i,δ^2,2πε
),D_f_n(a_0j,δ_1/3))
≥ d_f_n(α_0i,δ^2,a_0j)-2πε-δ
_1/3
≥ d_f_n(α_0i,a_0j)-2πε-δ_1/3
≥δ_1-2πε-δ_1/3>0,
for sufficiently large n. Then for sufficiently large n, we have for
i≠φ_1(j),φ_2(j),
α_ni,δ,-ε^2∩η_nj,δ,ε=∅.
For sufficiently large n, by (<ref>), (<ref>), Conclusion
<ref> and Theorem <ref> (ii) we conclude that γ_n,δ
,ε is a simple curve in Δ, and Claim <ref> is proved
completely.
Step 4. Construction of the sequence Σ_n,δ,ε^∗ with the same Ahlfors error terms
For sufficiently large n, let Δ_n,δ,ε be the
closed Jordan domain in Δ enclosed by γ_n,δ
,ε and let 𝒜_n,δ,ε=Δ\Δ_n,δ,ε^∘. It is clear that
γ_n,δ,ε,{𝔱_nj,δ,ε
^1}_j∈ M_0,{𝔱_nj,δ,ε^2}_j∈ M_0
divide Δ into 2m_0+1 Jordan domains Δ_n,δ,ε,Δ_nj,δ, and Δ_nj,δ,ε^',j∈
M_0, where Δ_nj,δ is the part of Δ on the right hand
side of η_nj,δ and Δ_nj,δ,ε^'=g̃_nj,δ,ε(R_nj,δ,ε^L), with Δ_nj,δ
,ε^'∩Δ_nj,δ=𝔱
_nj,δ,ε^1,and Δ_n,j,δ,ε^'∩Δ_n,j+1,δ
=𝔱_nj,δ,ε^2. Then, for each j∈ M_0, we have
∂Δ_nj,δ=-η_nj,δ+I_nj,δ=-𝔱
_nj,δ,ε^1-η_nj,δ,ε-𝔱
_nφ_1(j),δ,ε^2+I_nj,δ,
where I_nj,δ is the arc of ∂Δ from a_nφ
_1(j),δ^2 to a_nj,δ^1. In fact,
I_nj,δ=α_nφ_1(j),δ^3+α_nφ_1
(j)+1+…+α_n,j-1+α_nj,δ^1,
and as a limit of I_nj,δ we have
I_0j,δ =α_0φ_1(j),δ^3( a_0φ
_1( j) ,δ^2,a_0φ_1( j) +1,δ) +α_nφ_1(j)+1( a_0φ_1( j)
+1,δ,a_0φ_1( j) +2,δ)
+…+α_n,j-1( a_0,j-1,δ,a_0j,δ)
+α_0j,δ^1( a_0j,δ,a_0j,δ^1)
=α_0φ_1(j),δ^3( a_0φ_1(
j) ,δ^2,a_0φ_1( j) +1,δ)
+α_0j,δ^1( a_0j,δ,a_0j,δ^1) ,
where α_nφ_1(j)+1,α_0φ_1( j)
+2,δ,…,α_n,j-1( a_0,j-1,δ,a_0j,δ) are all equal to the point-arc a_0j,δ.
Now we can prove
For sufficiently large n, there exists a surface B_n=(
F_n,δ,ε,𝒜_n,δ,ε) such that
(i) F_n,δ,ε|_∂Δ=f_0, F_n,δ
,ε=f_n in a neighborhood of γ_n,δ,ε in
𝒜_n,δ,ε, and 𝒜_n,δ,ε is contained in D_F_n(∂Δ,2πδ).
(ii) F_n,δ,ε^-1(E_q)∩[ 𝒜
_n,δ,ε\∂Δ] =∅.
(iii) For each j∈ M_0, the restriction F_nj,δ,ε^'=F_n,δ,ε|_Δ_nj,δ,ε^' is a homeomorphism from Δ_nj,δ,ε^' onto
R_0j,δ,ε such that
( F_nj,δ,ε^',α_nj,δ,-ε
^2) =( f_n,α_nj,δ,-ε^2)
=c_0j,δ,-ε^2,
( F_nj,δ,ε^',α_nj,δ^2)
=( f_0,α_0j,δ^2) =c_0j,δ^2,
( F_nj,δ,ε^',𝔱_nj,δ
,ε^1) =τ_0j,δ,ε^1,L
, ( F_nj,δ,ε^',𝔱
_nj,δ,ε^2) =τ_0j,δ,ε^2,L.
(iv) For each j∈ M_0, the restriction ( F_nj,δ,ε,Δ_nj,δ) =( F_n,δ,ε|_Δ
_nj,δ,Δ_nj,δ) is a surface contained in
D(q_0j,δ) such that, corresponding to (<ref>) and
(<ref>),
( F_nj,δ,ε,𝔱_nφ_1( j)
,δ,ε^2) =τ_0nφ_1( j)
,δ,ε^2,L, ( F_nj,δ,ε,𝔱_nj,δ,ε^1) =τ_0j,δ
,ε^1,L,
( F_nj,δ,ε,η_nj,δ,ε)
=( f_n,η_nj,δ,ε) =ζ_j,δ
,ε,
( F_nj,δ,ε,I_nj,δ) =c_0,j-1,δ^3+c_0j,δ^1,
F_nj,δ,ε^-1(q_0j) ={a_0j},
a_0j is the only possible branch point of F_nj,δ,ε and
v_F_nj,δ,ε(a_0j)=v_j+1 (see (<ref>) for v_j).
In fact there exists a Jordan domain U with U⊂Δ,
which contains Δ_n,δ,ε, such that for each
j∈ M_0,U∩Δ_nj,δ and U∩Δ_nj,δ,ε^' are closed Jordan
domains, that the restriction f_n|_U∩Δ_nj,δ,ε^' can be extended to a homeomorphism
F_nj,δ,ε^' from Δ_nj,δ,ε^' onto R_0j,δ,ε^L satisfying (iii), that the
restriction ( f_n|_U∩Δ_nj,δ
,U∩Δ_nj,δ) can be extended to a
surface ( F_nj,δ,ε,Δ_nj,δ
) contained in D(q_0j,δ) satisfying (iv), that
F_nj,δ,ε and F_nj,δ,ε^' agree on
𝔱_nj,δ,ε^1 (note that we assumed j∈ M_0), that F_n,j+1,δ,ε and F_nj,δ,ε
^' agree on 𝔱_nj,δ,ε^2. Then (i) holds
trivially, and, by (<ref>) and (<ref>), (ii) also holds. Then it is
clear that these 2m_0 mappings agree on the intersection boundary, and so
compose the desired global mapping F_n,δ,ε defined on
𝒜_n,δ,ε. The claim is proved.
Let
f_n,δ,ε^∗(z)={[ f_n(z),z∈Δ\𝒜_n,δ,ε,; F_n,δ,ε,z∈𝒜_n,δ,ε. ].
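A remark added for clarity (not part of the original argument): the two cases in this definition are compatible, because by (i) of the preceding claim F_n,δ,ε coincides with f_n in a neighborhood of the common boundary γ_n,δ,ε of Δ_n,δ,ε and 𝒜_n,δ,ε; thus
f_n,δ,ε^∗(z)=f_n(z)=F_n,δ,ε(z) for all z near γ_n,δ,ε,
and f_n,δ,ε^∗ is a well defined continuous map on Δ.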
Then we obtain a sequence of surfaces
Σ_n,δ,ε^∗=( f_n,δ,ε^∗,Δ) ∈ℱ( L,m) ,
with
∂Σ_n,δ,ε^∗=( f_n,δ,ε^∗,∂Δ) =( f_0,∂Σ)
=Γ_0.
It is possible that Σ_n,δ,ε^∗∉ℱ_r( L,m) , which happens only if, for some
a_0j, a_0j∉ f_n,δ,ε^∗-1( E_q) and the integer v_j≥1. But we will show later that this cannot happen.
By Claim <ref> (ii), we have
( f_n,δ,ε^∗-1(E_q)∩𝒜_n,δ
,ε) \∂Δ=∅,
which implies
n( Σ_n,δ,ε^∗)
=#f_n,δ,ε^∗-1(E_q)∩Δ_n,δ,ε=#f_n^-1(E_q)∩Δ_n,δ,ε≤n
(Σ_n)≤ qd^∗,
and then, taking subsequence if necessary, we have
n( Σ_n,δ,ε^∗)
is a constant for all n=1,2,...
For each j∈ M_0, by definition of F_n,δ,ε and by
(<ref>) we have
A(F_n,δ,ε,Δ_nj,δ)≤( v_j+1)
A(D(q_0j,δ))=2π( v_j+1) ( 1-cosδ)
≤π( d^∗+1) δ^2,
and
A(F_n,δ,ε,Δ_nj,δ,ε^')=A(R_0j,δ,ε^L)<2πε L(c_0j,δ
^2)<L(c_0j,δ^2)δ<Lδ.
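For the reader's convenience we record the elementary estimate behind the first of these two bounds (it is implicit in the displayed step, which also uses v_j≤ d^∗): since 1-cosδ≤δ^2/2 for all δ,
2π( v_j+1) ( 1-cosδ) ≤π( v_j+1) δ^2≤π( d^∗+1) δ^2.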
Thus, by (<ref>) and (<ref>), we have δ<1 and
0<R(f_n,δ,ε^∗,𝒜_n,δ,ε)=R(F_n,δ,ε,𝒜_n,δ,ε)=(
q-2) A(F_n,δ,ε,𝒜_n,δ,ε)<Cδ,
where C=[ π( d^∗+1) +L] M_0, which is
independent of δ,ε and n. Then, by Conclusion <ref>, we
may choose δ small enough such that
A(Σ_n,δ,ε^∗)=A(f_n,Δ_n,δ,ε)+A(F_n,δ,ε,𝒜_n,δ,ε)≤4π
d^∗+1.
Hence by Corollary <ref> and (<ref>), we may assume that the Σ_n,δ,ε^∗ have the same area, and by Claim <ref> we have the following.
A(Σ_n,δ,ε^∗), n
(Σ_n,δ,ε^∗) and R(Σ_n,δ,ε^∗) are constants for all n=1,2,…, respectively.
(<ref>) and Claim <ref> imply that
H(Σ_n,δ,ε^∗)=R(Σ_n,δ
,ε^∗)/L(∂Σ_n,δ,ε^∗) is a
constant H for all n=1,2,…
Step 5. Completion of the proof of Theorem <ref>
We will show
H=H(Σ_n,δ,ε^∗)=lim_n→∞
H(Σ_n)=H_L,m, for n=1,2,…
Since Σ_n is an extremal sequence in ℱ( L,m)and L(∂Σ_n)→ L(∂Σ_n,δ
,ε^∗)=L(f_0,∂Δ), by (<ref>), we have
H=H(Σ_n,δ,ε^∗)≤lim_n→∞
H(Σ_n).
So, by (<ref>), to prove (<ref>), it suffices to prove that
for any number μ>0
R(Σ_n)<R(Σ_n,δ,ε^∗)+μ.
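Why this bound suffices (spelled out here; the step is implicit in the text): since L(∂Σ_n)→ L(Γ_0)=L(∂Σ_n,δ,ε^∗) and R(Σ_n,δ,ε^∗)=H· L(Γ_0), the inequality above gives
lim sup_n→∞H(Σ_n)=lim sup_n→∞R(Σ_n)/L(∂Σ_n)≤( R(Σ_n,δ,ε^∗)+μ) /L(Γ_0)=H+μ/L(Γ_0),
and letting μ→0 and combining with the reverse inequality established above yields the desired equality.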
Since Σ_n and Σ_n,δ,ε^∗ coincide on
Δ_n,δ,εand by (<ref>) R(f_n,δ
,ε^∗,𝒜_n,δ,ε)>0, to prove
(<ref>) it suffices to prove
R( f_n,𝒜_n,δ,ε) <μ
, for large enough n.
We will prove the following Claim, which implies (<ref>) by taking
δ small enough.
There exists a constant C_1 independent of n, δ, and
ε, such that
R(f_n,𝒜_n,δ,ε)≤ C_1δ
, for large enough n.
We first show that for each j∈ M_0 and sufficiently large n
R(f_n,Δ_nj,δ)<C_1^'δ,
R(f_n,Δ_nj,δ,ε^')<C_2^'δ,
for some C_1^' and C_2^' independent of n, δ
and ε.
By the assumption that η_nj,δ is parameterized by length and
t_nj<2π( d^∗+1) δ, we have L(f_n
,η_nj,δ)<2π( d^∗+1) δ. By (<ref>)
and (<ref>) we have
L(f_n,I_nj,δ)→ L(c_0,j-1,δ^3)+0+L(c_0j,δ^1)<4πδ.
Thus we have
L(f_n,∂Δ_nj,δ)=L(f_n,I_nj,δ-η_nj,δ)<4πδ+2π( d^∗+1) δ=C_1^''
δ
where C_1^''=( 4π+2π( d^∗+1)
).
By (<ref>) and (<ref>) we have C_1^''δ
<δ_E_q, which with (<ref>), Theorem <ref> and Lemma
<ref>, implies
R(f_n,Δ_nj,δ)=H(f_n,Δ_nj,δ)L(f_n,∂Δ_nj,δ)≤( q-2) L(f_n,∂Δ_nj,δ)≤( q-2) C_1^''δ,
and then, putting C_1^'=( q-2) C_1^'',
we have (<ref>).
By (<ref>) and the fact that R_nj,δ,ε→
R_0j,δ,ε we have
R(f_n,Δ_nj,δ,ε^')=( q-2)
A(R_nj,δ,ε)≤( q-2) π L(c_nj,δ
^2)ε<( q-2) ( π L) δ
when n is large enough by (<ref>). This implies (<ref>).
(<ref>) and (<ref>) imply Claim <ref>, for 𝒜
_n,δ,ε=∪_j∈ M_0[ Δ_nj,δ
,ε^'∪Δ_nj,δ] and
𝒜_n,δ,ε\∂Δ contains no point
of f_n,δ,ε^∗-1(E_q).
Now, (<ref>) is proved completely, and we are in a position to
complete the proof of Theorem <ref>.
Let f_L_1 be any member of the sequence f_n,δ,ε^∗ such that n is large enough, δ is small enough, and ε satisfying (<ref>) is small enough, and let Σ_L_1=( f_L_1,Δ) . Then (<ref>) implies H(Σ
_L_1)=H_L,m, and then by (<ref>), Σ_L_1 is an extremal
surface of ℱ(L,m). Since Σ_n is a precise extremal
sequence, L_1=lim_n→∞inf L(∂Σ_n
)=L(Γ_0) and L(∂Σ_L_1)=L(Γ_0), Σ
_L_1 is a precise extremal surface in ℱ( L,m) .
If f_L_1 has no branch point outside f^-1(E_q), then
Σ_L_1∈ℱ_r(L,m) and thus Σ_L_1 is a precise
extremal surface in ℱ_r(L,m).
As pointed out above, f_L_1 may not be in ℱ_r( L,m) . But when this happens, by Theorem <ref> there exists a
surface Σ^'=( f,Δ) such that
H(Σ^')≥ H(Σ_L_1), L(∂Σ^')≤
L(∂Σ_L_1)and Σ^'∈ℱ_r(L,m). Then
Σ^' is again an extremal surface of ℱ_r(L,m) and
by the definition of L_1 we have L(∂Σ^')≥
L(∂Σ_L_1) and thus L(∂Σ^')=L(∂Σ_L_1) and Σ^' is also a precise extremal surface of
ℱ_r(L,m). This completes the proof of Theorem <ref>.
By Lemma <ref>, Theorem <ref> and (<ref>), we have the following:
Let L∈ℒ be a positive number and m be a
sufficiently large positive integer. Then there exists a precise extremal
surface Σ_L_1of ℱ_r^'(L,m),ℱ
_r(L,m) and ℱ(L,m), such that L(∂Σ_L_1
)=L_1≤ L. For any precise extremal surface Σ of ℱ
(L,m), L(∂Σ)≥2δ_E_q if L≥2δ_E_q.
§ RELATION OF PRECISE EXTREMAL SURFACES OF ℱ(L,m) AND ℱ(L,m-1)
The goal of this long section is to prove the following theorem, followed by
some applications.
Let L∈ℒ be a positive number with L≥
2δ_E_q. Then for sufficiently large integer m and any precise
extremal surface Σ_L_1=( f,Δ) of
ℱ_r(L,m) with
L(∂Σ_L_1)=L_1≤ L, Σ_L_1 is a precise
extremal surface of ℱ_r( L,m-1) .
To prove this theorem we fix a number Lof ℒ with
L≥2δ_E_q,
and let Σ_L_1=( f,Δ) be a precise
extremal surface of ℱ_r(L,m) with L(∂Σ_L_1
)=L_1≤ L, and assume m is large enough,
m>10L/δ_E_q,
and ∂Σ_L_1 is parametrized by length. Then ∂Σ_L_1 has an ℱ(L,m)-partition
∂Σ_L_1=c_1( q_1,q_2) +c_2(
q_2,q_3) +⋯+c_m( q_m,q_1) ,
which in fact means that ∂Δ has an ℱ(L,m)-partition
for ∂Σ_L_1:
∂Δ=α_1( a_1,a_2) +α_2(
a_2,a_3) +⋯+α_m( a_m,a_1) ,
such that f restricted to a neighborhood of α_j^∘ in
Δ is a homeomorphism onto a left hand side neighborhood of
c_j^∘ and c_j=( f,α_j) is a convex circular
arc for j=1,2,…,m, and by Remark <ref> we may assume
a_1=1.
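It is worth recording explicitly (this simple consequence of the assumption m>10L/δ_E_q is used repeatedly below) that the edges of this partition are short on average:
(1/m)∑_j=1^mL(c_j)=L(∂Σ_L_1)/m=L_1/m≤ L/m<δ_E_q/10,
so in particular at least one edge c_j satisfies L(c_j)<δ_E_q/10.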
As in Remark <ref>, we assume a_j=e^√(-1)ψ_j,
0=ψ_1<ψ_2<…<ψ_m<2π and introduce the continuous
subscript x in a_x∈∂Δ and q_x∈∂Σ_L_1
with q_x=f(a_x), but when the letters i,j,k appear as subscripts, they
are always integers.
Under the above assumptions, we will first prove Lemmas <ref>,
<ref>, <ref>, <ref>, <ref>, <ref>, and
<ref>. Then we will prove Theorem <ref> easily from Lemmas
<ref>, <ref> and <ref>.
∂Σ_L_1 cannot be folded at any a_i∈{a_j}_j=1^m\ f^-1(E_q), and thus, for each i with
a_i∈{a_j}_j=1^m\ f^-1(E_q), c_i-1and c_i
intersect only at q_i in a neighborhood of q_ion S.
Assume a_i∈{a_j}_j=1^m\ f^-1(E_q) and
∂Σ_L_1 is folded at a_i. Then c_i-1+c_i contains
an arc of the form c_i^'+c_i+1^'=c_i^'
-c_i^' such that c_i^'∩ E_q=∅. Therefore we
can sew Σ_L_1 along c_i^' to obtain a new
surface Σ^'∈ℱ(L,m) so that R(Σ^')=R(Σ_L_1) and L(∂Σ^')<L(∂Σ_L_1
), say H(Σ^')>H(Σ_L_1), which contradicts the
maximality of Σ_L_1.
(i) The precise extremal surface Σ_L_1 of
ℱ_r( L,m) is also a precise extremal surface of
ℱ( L,m) and L_1=L( ∂Σ_L_1
) ≥2δ_E_q.
(ii) f is locally homeomorphic in [ Δ\ f^-1
(E_q)] ∪[ ( ∂Δ) \[
f^-1(E_q)∩{a_j}_j=1^m] ] .
(i) follows from Lemma <ref>.
By definition of ℱ_r(L,m), each x∈( ∂Δ) \{a_j}_j=1^m, is a simple point of f (see
Definition <ref>), say f is a homeomorphism in a neighborhood of x
in Δ. Assume for some j_0≤ m, f(a_j_0)∉
E_q. Then the interior angle θ_j_0 of Σ_L_1 at
a_j_0is strictly less than or equal to 2π. If θ_j_0
<2π, then a_j_0 is a simple point of f (Definition <ref>
(a)). If θ_j_0=2π, then a_j_0 is a simple point of f if
∂Σ_L_1 is simple in a neighborhood of a_j_0 in
∂Δ. Thus a_j_0 is not a simple point of f iff
∂Σ_L_1 is folded at a_j_0, contradicting Lemma
<ref>. Thus every point a_j∈{a_j}_j=1^moutside
f^-1(E_q)is a simple point of f. Since all branch points of f are
contained in f^-1(E_q), the conclusion (ii)
holds.
ℭ^1=ℭ^1( Σ_L_1)
is the collection of all subarcs of ∂Σ_L_1 such that for
each c=( f,α) ∈ℭ^1, the following (a)–(c) hold:
(a) c is an SCC arc and every point of c^∘ is a simple point of
Σ_L_1, say, f restricted to a neighborhood of α^∘ is
a homeomorphism.
(b) c^∘∩ E_q=∅, say, c∩ E_q⊂∂ c
(∂ c is the set of endpoints of c).
(c) L(c)<π.
ℭ^2=ℭ^2( Σ_L_1) is the
subset of ℭ^1 such that each c∈ℭ^2 has two
distinct endpoints.
(i) All arcs in { c_j}
_j=1^m∩ℭ^1 have the same curvature.
(ii) { c_j} _j=1^m∩ℭ^1 contains at most
one major circular arc (a simple circle is regarded as a major circular arc).
(iii) { c_j} _j=1^m∩ℭ^1contains at
most one closed arc, say, { c_j} _j=1^m∩[
ℭ^1\ℭ^2] is either empty or
contains only one element.
If #{ c_j} _j=1^m∩ℭ^1≤1, then there
is nothing to prove. So we assume #{ c_j} _j=1^m
∩ℭ^1≥2. Assume that (i) or (ii) of the lemma fails. Then
there exist distinct arcs c_j_1 and c_j_2 in {
c_j} _j=1^m∩ℭ^1 such that (a) or (b) in Deformation
<ref> holds. Then there exists a new surface Σ^'
∈ℱ( L,m) such that H(Σ^'
)>H(Σ_L_1). Thus Σ_L_1 is not extremal in ℱ
(L,m), contradicting Lemma <ref> (i), and so (i) and (ii) hold, and
(iii) follows from (ii).
Assume that for some j≤ m,
c_j∈ℭ^2.
Then c_j+c_j+1 is circular at q_j+1 if q_j+1∉ E_q, and
c_j-1+c_j is circular at q_j if q_j∉ E_q. The term
"circular at q_i" means that f restricted to a neighborhood of a_i
in α_i-1+α_i is a homeomorphism onto a simple circular arc,
for i=j or j+1.
By Lemmas <ref> and <ref>, for i=j,j+1, when
q_i∉ E_q, we have that f is not only circular at q_i but also
homeomorphic in a neighborhood of a_i in Δ.
Assume c_j∈ℭ^2 and
q_j+1∉ E_q.
Then c_j is not closed and by Lemma <ref> f is homeomorphic in a
neighborhood of α_j\{a_j}=α_j^∘∪{
a_j+1} in Δ. Then by Lemma <ref> we
have the following claim.
For sufficiently small ε_0>0, and for the arc
C_j=c_j( q_j,q_j+1) +c_j+1( q_j+1
,q_j+1+ε_0) ,
there exist a number θ∈(0,π/2) and a closed simple Jordan domain
( T_j,ε_0,θ,C_j) =( f,D_j,ε_0,θ) of Σ_L_1 with the old
boundary C_j, such that the following hold.
(1) D_j,ε_0,θis a Jordan domain in Δ with
∂ D_j,ε_0,θ=A_j+β_j, in which
A_j=α_j( a_j,a_j+1) +α_j+1(
a_j+1,a_j+1+ε_0) ,
β_j is a simple arc in Δ with β_j^∘⊂Δ.
(2) T_j,ε_0,θ is a closed Jordan domain in some open
hemisphere S_1 on S with
( T_j,ε_0,θ\{q_j}) ∩
E_q=∅,
∂ T_j,ε_0,θ=C_j+τ_j,
in which τ_j is a polygonal path from q_j+1+ε_0 to
q_j.
(3) The interior angle of T_j,ε_0,θ at q_j and
q_j+1 are both θ.
To prove the lemma, we will deduce a contradiction under the assumption:
c_j+c_j+1 is not circular at q_j+1.
Let ε≪ε_0be a positive number which is so small
that for the arc
γ_j,ε=γ_j,ε( q_j,q_j+1+ε) =c_j( q_j,q_j+1) +c_j+1( q_j+1
,q_j+1+ε) ,
L(γ_j,ε)<π.
We first show the following.
For sufficiently small ε<ε_0, there exists
a surface F_ε=( f_ε,D_j,ε
_0,θ) in S_1 such that f_ε agrees with f
in a neighborhood of (β_j( a_j+1+ε_0,a_j)
)\{a_j}in D_j,ε_0,θ,
∂ F_ε=γ_j,ε^'+c_j+1(
q_j+1+ε,q_j+1+ε_0) +τ_j,
where γ_j,ε^'=γ_j,ε^'(
q_j,q_j+1+ε) is the SCC arc from q_j to
q_j+1+ε with L( γ_j,ε^')
=L(γ_j,ε), and moreover,
A(F_ε)>A(T_j,ε_0,θ),
F_ε\{q_j}⊂ S_1\ E_q,
and q_j+1+ε is the only possible branch value of f_ε.
In fact, F_ε is a deformation of T_j,ε
_0,θ=( f,D_j,ε_0,θ) so
that the new boundary τ_j^∘ and the part c_j+1(
q_j+1+ε,q_j+1+ε_0) of the old boundary of
T_j,ε_0,θ remain unchanged, while the part γ
_j,ε of the old boundary is changed into γ_j,ε^'.
It is clear that as ε→0, γ_j,ε
^' converges to c_j and thus the left hand side angle of τ
_j+γ_j,ε^' at q_j tends to θ (this may
fail when c_j∈ℭ^1\ℭ^2, the assumption
c_j∈ℭ^2 is used here). Thus, when ε is small
enough, the circular arc γ_j,ε^' does not intersects
τ_j\{q_j} as sets in S_1, and it is clear that for
sufficiently small ε>0,
γ_j,ε^'+c_j+1( q_j+1+ε,q_j+1+ε_0) +τ_j
encloses a surface F_ε=( f_ε,D_j,ε_0,θ) in S_1, which is just a simple
closed domain in S_1 when γ_j,ε^'+c_j+1(
q_j+1+ε,q_j+1+ε_0) is simple. It is clear
that by Lemma <ref> and Condition <ref> we have (<ref>).
When ε→0, as sets in S_1,F_ε converges
to T_j,ε_0,θ and so (<ref>) holds.
When γ_j,ε^'+c_j+1( q_j+1+ε,q_j+1+ε_0) is not simple, γ_j,ε^'+c_j+1( q_j+1+ε,q_j+1+ε_0) contains a small closed arc, which consists of two short subarcs of
γ_j,ε^' and c_j+1( q_j+1+ε,q_j+1+ε_0) near q_1+j+ε, and which
tends to q_j+1 as ε→0, and we may make
q_j+1+ε to be the only branch value of F_ε. It is
clear that ( f,β_j) is equivalent to (
f_ε,-β_j) and since the interior angle of
F_ε=( f_ε,D_j,ε_0
,θ)at a_j tends to that of ( f,D_j,ε_0,θ) =T_j,ε_0,θ, we can
make F_ε and f agree in a neighborhood of β_j\{a_j} in D_j,ε_0,θ. The
assertion is proved.
Now we can deform Σ_L_1 by replacing the part T_j,ε
_0,θ of Σ_L_1 with F_ε, that is, we cut
( T_j,ε_0,θ,C_j) from Σ_L_1
along the new boundary τ_j^∘ to obtain a surface Σ_1,
and then we sew F_ε and Σ_1 also along the
new boundary τ_j^∘. Then by (<ref>) and (<ref>) we see that
Σ_L_1 becomes a new surface Σ^'=( f^',Δ) such that
∂Σ^'=c_1+⋯+c_j-1+γ_j^'
+c_j+1(q_j+1+ε,q_j+2)+c_j+2+⋯+c_m.
We have to show Σ^'∈ℱ( L,m) . Since
f_ε agrees with f in a neighborhood of β_j
\{a_j} in D_j,ε_0,θand the
partition (<ref>) is an ℱ( L,3) partition of
∂ F_ε, f^'can be defined by f on
Δ\D_j,ε_0,θ and
f_ε on D_j,ε_0,θ, and so
(<ref>) is an ℱ( L,m)-partition of
∂Σ^', by (<ref>) and (<ref>), and so Σ
^'∈ℱ( L,m) . By Assertion <ref> we also
have
L(∂Σ_L_1)=L(∂Σ^'),n̅(
Σ_L_1) =n̅( Σ^') ,A(Σ
^')>A(Σ_L_1),
which implies H(Σ^')>H(Σ_L_1). Then Σ_L_1 is
not an extremal surface of ℱ( L,m) , which
contradicts Lemma <ref> (i).
Assume that
L(c_1)<δ_E_q/2.
Then c_1is not closed, say, q_1≠ q_2.
Though the proof is complicated, the idea is quite simple; it is contained in the proof of Case 1, and the discussion of the other cases is essentially the same, with only small differences.
To prove the result, we assume that the opposite holds, say, c_1is a
whole circle. Then (<ref>) implies the following.
c_1 is a strictly convex circle from q_1 to q_2=q_1,
the length and diameter of c_1 are both less than δ_E_q/2, and
c_1∩ E_q is either empty or a singleton.
We denote by T_0 the domain enclosed by c_1. First of all we have by
Lemma <ref> the following.
Σ_L_1 contains no closed simple domain of the form
( T_0,c_1) =( T_0,α
_1) (as in Remark <ref> (iii), c_1 should be understood
as ( f,α_1)). This in fact means that there is no
subdomain D of Δ such that α_1⊂∂ D and f is a
homeomorphism from D\{a_1,a_2} onto T_0\{q_1}.
We let h_θ,q_1( w) be the rotation
h_θ,q_1( w) =φ_q_1^-1∘φ_θ∘φ_q_1(w),w∈ S,
of S, where φ_q_1 is a rotation of S putting q_1 into 0, and φ_θ is the rotation w↦ e^-iθw of
S,θ∈[0,π]. Recall that, by Remark <ref>,
q_1+1/2 is the middle point of c_1. Write
c_1,θ=h_θ,q_1( c_1) ,
let q_2,θ^'∈ S be the intersection of c_1,θ^∘=c_1,θ\{q_1} and c_1^∘=c_1\{q_1} with q_2,0^'=q_1+1/2, let
c_1 =c_11,θ+c_12,θ,
c_1,θ =c_11,θ^'+c_12,θ^'
be partitions of c_1 and c_1,θ with
c_11,θ=c_1( q_1,q_2,θ^') ,c_12,θ=c_1( q_2,θ^',q_2) ,
c_11,θ^'=c_1,θ( q_1,q_2,θ^')
,c_12,θ^'=c_1,θ( q_2,θ^'
,q_2) ,
let T_θ be the disk enclosed by c_1,θ, T_θ^'=T_0\T_θ and let T_θ^''=T_θ\T_0. Then it is clear that c_12,θ^' divides T_0 into two Jordan domains T_θ^' and
T_θ^''', and c_11,θ divides T_θ
into two Jordan domains T_θ^''' and T_θ^''. By Lemma <ref> we have
For sufficiently small θ>0, ( T_θ^',c_12,θ) is a simple closed Jordan domain of
Σ_L_1 (see Remark <ref> (ii) and (iii)) with the new
boundary c_12,θ^'∘and old boundary c_12,θ. That
is to say, there exist two simple paths α_12,θ and
α_12,θ^' in Δ, both are from a point
a_2,θ^'∈α_1^∘ to a_2, such that
α_12,θ=α_1( a_2,θ^',a_2)
⊂α_1,α_12,θ^'∘=α_12,θ
^'∘( a_2,θ^',a_2) ⊂Δ,
α_12,θ-α_12,θ^' encloses a Jordan domain
D_θ^' in Δ,
c_12,θ=( f,α_12,θ) ,c_12,θ^'=( f,α_12,θ^') ,
and f restricted to D_θ^' is a homeomorphism onto
T_θ^'.
We let
α_11,θ=α_1( a_1,a_2,θ^') .
Then we can cut T_θ^'\ c_12,θ^' from Σ_L_1 along c_12,θ^', and
sew T_θ^'' to Σ_L_1
\ T_θ^' along c_11,θ, to obtain a surface
Σ_θ=( f_θ,Δ) . This
Σ_θ can be obtained in another way as follows.
By Lemma <ref>, there exists a closed path c_1^' in
T_0 from q_1 to q_1, oriented anticlockwise, such that
c_1^'∘⊂ T_0, c_1^' is a polygonal path and
for the domain T_0^' enclosed by c_1-c_1^', the two
interior angles of T_0^' at q_1 are both positive, and that
( T_0^',c_1) is a simple closed domain
of Σ_L_1, say, there exists a Jordan domain D_0^' in
Δ such that ∂ D_0^'∩∂Δ=α_1 and
f:D_0^'\{a_1,a_2}→T_0^'\{q_1} is a homeomorphism (see Remark
<ref> (ii)). We let α_1^'=( ∂
D_0) \α_1^∘, oriented from a_1 to a_2.
Then -c_1^'∘=( f,-α_1^'∘) is
the new boundary of the domain ( T_0^'
,c_1) of Σ_L_1. For a small enough number θ>0,
c_1,θ-c_1^' encloses a domain T_0,θ^' and
( T_0,θ^',c_1,θ) can be
regarded as a deformation of ( T_0^',c_1)
, sharing the same new boundary -c_1^'∘. That is to say, when
we rotate c_1 by h_θ,q_1to the position c_1,θ,
c_1,θ-c_1^' still encloses a simply connected domain
T_0,θ^' which shares the new boundary -c_1^'∘
of the domain ( T_0^',c_1)of
Σ_L_1 and when we replace ( T_0^'
,c_1) with ( T_0,θ^',c_1,θ) , we obtain the new surface Σ_θ=( f_θ,Δ) . It is clear that ( T_0,θ^',c_1,θ) is a simple closed domain of
the new surface Σ_θ. On the other hand, we do not change
Σ_L_1\( T_0^',c_1) in
the deformation. Therefore Σ_θ∈ℱ( L,m)
and the partition (<ref>) becomes the following ℱ(
L,m)-partition of ∂Σ_θ:
∂Σ_θ=c_1,θ+c_2+⋯+c_m.
It is clear that
A(Σ_θ)=A(Σ_L_1),L(∂Σ_θ)=L(∂Σ_L_1).
By Condition <ref>, the disk D(q_1,δ_0) with
δ_0=d( q_1,q_1+1/2) is contained in
D(q_1,δ_E_q/2)⊂ S and contains at most one point of E_q. Then there are only three possibilities:
Case 1. D(q_1,δ_0)∩ E_qis
either empty or is a singleton {q^∗} contained in c_1, and thus
T_0∩ E_q=∅.
Case 2. ( D(q_1,δ_0)
\T_0) ∩ E_q={𝔞} with
𝔞∈ c_1,θ_1∩ c_1,-θ_2, where θ
_1>0,θ_2>0 and π<θ_1+θ_2<2π. That is to say,
θ_1 is the first number in (0,2π) so that 𝔞∈
c_1,θ_1 and θ_2 is the first number in (0,2π) so that
𝔞∈ c_1,-θ_2, and more over, T_0∩
E_q=∅.
Case 3. ( D(q_1,δ_0)\
T_0) ∩ E_q=∅, and T_0 contains at most one point of
E_q.
Assume Case 1 occurs. Recall that c_12,0=c_12,0^'=c_12(
q_1,q_1+1/2) . Then we may assume
D(q_1,δ_0)∩ E_q={q^∗}∈ c_12,0.
When it is empty, the proof is the same, and when q^∗∈ c_11,0, the proof proceeds by considering the rotation φ_-θ,q_1 in a symmetrical way.
By assumption of Case 1, we have n( Σ_θ)
=n( Σ_L_1) , and then by (<ref>) we have
H(Σ_L_1)=H(Σ_θ).
Let θ_0 be the maximal positive number in (0,π] so that
Σ_θ∈ℱ( L,m) is well defined for all
θ∈(0,θ_0). This is equivalent to that θ_0 is the
maximal number such that ( T_θ^',c_12,θ) is a closed simple Jordan domain of Σ_L_1 with the new
boundary -c_12,θ^'∘ and old boundary c_12,θ for
all θ∈(0,θ_0) (see Remark <ref> (ii)), in other words,
for all θ∈(0,θ_0), c_12,θ^'∘ is in the
interior of Σ_L_1^∘, which just means α_12,θ^'∘⊂Δ, and f has no branch point in α
_12,θ^'∘. Then by Claim <ref> we have
θ_0<π,
otherwise, (T_π,c_12,π)=( T_0
,c_1) is a closed simple domain of Σ_L_1, by Lemma
<ref>. Moreover, we can show the following.
( T_θ_0,c_12,θ_0) is
still a simple closed Jordan domain of Σ_L_1 so that the new
boundary is contained in c_12,θ_0^'∘ and the old
boundary contains c_12,θ_0. That is to say, there exist two simple
paths α_12,θ_0 and α_12,θ_0^' in
Δ, both are from the point a_2,θ_0^'
∈α_1^∘ to a_2, such that
α_12,θ_0=α_1( a_2,θ_0^'
,a_2) ⊂α_1,α_12,θ_0^'∘
=α_12,θ_0^'∘( a_2,θ_0^'
,a_2) ⊂Δ,
α_12,θ_0-α_12,θ_0^' encloses a Jordan
domain D_θ_0^' in Δ,
c_12,θ_0=( f,α_12,θ_0) ,c_12,θ_0
^'=( f,α_12,θ_0^') ,
and f restricted to D_θ_0^' is a homeomorphism
onto T_θ_0^'.
The difference between Claims <ref> and <ref> is just that
α_12,θ_0^'∘ may not be the new boundary, but
contains the new boundary, say, Δ in (<ref>) should be replaced by the closure of Δ when θ=θ_0, as in (<ref>). In fact by
definition of θ_0, considering that T_θ_0^'
=∪_θ∈[0,θ_0)T_θ^' and that T_θ^' is an increasing family as θ increases, f^-1 has a
univalent branch g_θ_0 defined on T_θ_0^'. Then
by Lemma <ref> g_θ_0 can be extended to a univalent
branch of f^-1 defined on T_θ_0^'such that
g_θ_0( c_12,θ_0) =α_12,θ_0.
Thus ( T_θ_0^',c_12,θ_0)
is a closed simple Jordan domain of Σ_L_1. Then α
_12,θ_0^'=g_θ_0( c_12,θ_0^') is a well defined arc in Δ from a_2,θ
_0^' to a_2, and D_θ_0^'
=g_θ_0( T_θ_0^') is a
closed Jordan domain in Δ. Therefore Claim <ref> hold.
We let Δ_θ_0=Δ\D_θ_0^'
. By condition of Case 1, c_12,θ_0^'∘∩
E_q=∅, and so we have
f has no branch value on c_12,θ_0^'∘and
thus each component of α_12,θ_0^'\∂Δ has a neighborhood in Δ on which f is a homeomorphism.
If α_12,θ_0^'∘∩∂Δ=∅, then
( f,Δ_θ_0) is a surface in
ℱ( L^',m+1) ,with L^'=L-L(c_12,θ_0)+L(c_12,θ_0^'), and thus for small
enough ρ>0 and the arc 𝔠_ρ=c_1( q_2,θ
_0+ρ^',q_2,θ_0^') +c_12,θ_0
^', by Lemma <ref> ( f,Δ_θ_0
) contains the simple and closed Jordan domain (
K_ρ,𝔠_ρ) with K_ρ=T_θ
_0+ρ^'\T_θ_0^' such that
𝔠_ρ is the old boundary, an arc of ( f,∂Δ_θ_0) , and c_12,θ_0+ρ^'∘ is
the new boundary. Then we have
∂ K_ρ=𝔠_ρ-c_12,θ_0+ρ^'
and ( T_θ_0^',c_12,θ_0)
can be extended to the larger simple closed Jordan domain (
T_θ_0+ρ^',c_12,θ_0+ρ)
=( T_θ_0^',c_12,θ_0)
∪( K_ρ,𝔠_ρ) of
Σ_L_1, and so Σ_θ is well defined for all θ∈[0,θ_0+ρ), contradicting the maximal property of
θ_0. Thus we have
c_12,θ_0^'∘=( f,α_12,θ_0
^') has to intersect ∂Σ_L_1, say,
α_12,θ_0^'∘∩∂Δ≠∅.
By Condition <ref>, c_12,θ_0^' is strictly convex, and
it is clear that α_12,θ_0^'∩α_12,θ_0
={a_2,θ_0^',a_2}. On the other hand, by Claim <ref>,
regarding ( ∂Δ) \α_12,θ_0
^∘ and -α_12,θ_0^' as α and β in
Lemma <ref>, we conclude that α_12,θ_0^'∘∩( ∂Δ) \α_12,θ_0^∘⊂{ a_j} _j=1^m is a finite set {a_i_1
,…,a_i_k}. Thus α_12,θ_0^'∩∂Δ={a_2,a_i_1,…,a_i_k,a_2,θ_0^'},
arranged anticlockwise on ∂Δ. Therefore, by Claim <ref>, we have
α_12,θ_0^'∘∩( ∂Δ) \α_12,θ_0^∘={a_i_1
,…,a_i_k} divides α_12,θ_0^'∘ into k+1
open arcs, each of which has a neighborhood in Δ on which
f is a homeomorphism.
Then Σ_θ_0 is no longer a surface, but consists of k+1
surfaces. We will show that these surfaces are all contained in ℱ
( L,m-1) ⊂ℱ( L,m) . We only prove
this in the case that the finite set α_12,θ_0^'∘
∩∂Δis a singleton {a_i_1}, say, k=1. When k>1,
the discussion is similar and even simpler. It is clear that we can define the
partition
c_1,θ_0=𝔠_1,θ_0^'+𝔠
_1,θ_0^''=c_1,θ_0( q_1,q_i_1)
+c_1,θ_0( q_i_1,q_1) .
Then Σ_θ_0 consists of two surfaces Σ_θ_0
^1 and Σ_θ_0^2 linked at the point ( f,a_i_1
) , so that
∂Σ_θ_0^1=𝔠_1,θ_0^'+c_i_1
+⋯+c_m,
∂Σ_θ_0^2=𝔠_1,θ_0^''+c_2+⋯+c_i_1-1.
It is clear that the partition (<ref>) contains at least 2 terms, and
so does (<ref>). Thus by Claim <ref> we have
Σ_θ_0^j∈ℱ( L,m-1) ,j=1,2,
which implies
{Σ_θ,^1,Σ_θ_0^2}⊂ℱ( L,m) .
We still assume k=1. Then we see that (<ref>) still holds in the
following form
∑_j=1^2A(Σ_θ_0^j)=A(Σ_L_1),∑_j=1
^2L(∂Σ_θ_0^j)=L( ∂Σ_L_1)
,
and by (<ref>) we have ∑_j=1^2n(Σ_θ_0
^j)=n(Σ_L_1), and then ∑_j=1^2R(Σ
_θ_0^j)=R(Σ_L_1). This contradicts Lemma <ref>, and
thus Case 1 cannot occur.
Assume Case 2 occurs. If θ_1≥π, then we can obtain a
contradiction as in Case 1.
Assume θ_1<π. Then we may find the maximum θ_0^' in
(0,θ_1] so that ( T_θ_0^'^'
,c_12,θ_0^') is a simple and closed Jordan domain of
Σ_L_1, by repeating the argument for Claim <ref>. If
θ_0^'<θ_1, or if θ_0^'=θ_1 and
α_12,θ_0^'^'∘∩∂Δ≠∅, then we can find a contradiction again using the same method of
Case 1.
Assume θ_0^'=θ_1and we cannot obtain a contradiction
as in Case 1. Then we have
( T_θ_1^',c_12,θ_1)
=( T_θ_1^',c_12,θ_0^') is a simple closed Jordan domain of Σ_L_1 such that
c_12,θ_1^'∘ is the new boundary, say, α
_12,θ_1^'∘⊂Δ.
Then we apply the above argument to obtain Σ_-θ for θ>0,
by rotating T_0 into T_-θ in the other direction in a symmetrical
way. That is, we can construct surfaces Σ_-θ,θ>0, such that
the partition (<ref>) works, in which c_1,θ becomes the
rotation c_1,-θ of c_1. Then we can either obtain a contradiction
as in the Case 1, or we must have that ( T_-θ_2
^',c_11,-θ_2) =( T_0\
T_-θ_2,c_11,-θ_2) is a simple closed Jordan domain
of Σ_L_1, as Claim <ref>. Then we can see that (
T_-θ_2^'∪T_θ_1^'
,c_1) =( T_0,c_1) is a simple Jordan
domain of Σ_L_1, contradicting Claim <ref>. Thus in Case 2, we
also have a contradiction. Note that θ_1+θ_2≥π and thus
T_-θ_2^'∪T_θ_1^'
=T_0. In fact f^-1 has branches defined on T_-θ_2^' and T_θ_1^' which agree on an arc of c_1 contained in T_-θ_2^'∩T_θ_1^', and thus the two branches agree on T_-θ_2^'∩T_θ_1^', and so these two branches define a global branch on T_0, contradicting Claim <ref>.
Assume Case 3 occurs. Then we may assume
D( q_1,δ_0) ∩ E_q=T_0∩
E_q={q^∗},
otherwise we have
D( q_1,δ_0) ∩ E_q=T_0∩
E_q=∅,
and, in this latter case, we can obtain a contradiction by a discussion entirely the same as in Case 1. Then we may assume q^∗∈ c_12,θ^∗^'∘ for some θ^∗∈(0,π). If θ^∗≥π, we can obtain a contradiction as in Case 1, by Claim <ref>.
Let θ_0 be the maximum in (0,θ^∗] such that
Σ_θ is well defined, say, α_12,θ^'∘⊂Δ, for every θ∈(0,θ_0). If θ_0
<θ^∗, then we can obtain a contradiction as in Case 1. So we may
assume that
θ_0=θ^∗.
The argument in Case 1 for Σ_θ with θ∈(0,θ_0)
applies, and (<ref>) and (<ref>) both hold for θ∈
(0,θ_0), and by (<ref>) we have n( Σ
_θ) =n( Σ_L_1) for all
θ∈[0,θ_0)=[0,θ^∗). Therefore (<ref>)
still holds and thus
Σ_θ is a precise extremal surface of ℱ
( L,m) for every θ∈[0,θ_0)=[0,θ^∗).
As in Case 1, we can show that α_12,θ_0^'∘
∩∂Δ is a finite set {a_i_1,a_i_2,…,a_i_k}
in {a_j}_j=1^m, and then Σ_θ_0 consists of k+1
surfaces {Σ_θ_0^j} _j=1^k+1 of
ℱ( L) linked at q_i_1,…,q_i_k, with
∑_j=1^k+1A(Σ_θ_0^j)=A(Σ_L_1),∑_j=1
^k+1L(∂Σ_θ_0^j)=L(∂Σ_L_1).
Here k=0 and Σ_θ_0^j=Σ_θ_0 when
α_12,θ_0^'∘∩∂Δ=∅. By the
assumption (<ref>), we have
q^∗∈ c_12,θ_0^'∘=c_12,θ^∗^'∘.
We show that
k≠0
and
q^∗∈{q_i_1,q_i_2,…,q_i_k}.
If k=0, then by (<ref>) q^∗ is in the new boundary
c_12,θ^∗^'∘ of ( T_θ_0
^',c_12,θ_0) and
d_f_θ( f_θ^-1(E_q) ,∂Δ)≤
d( q^∗,c_12,θ^') →0
as θ→θ_0. But (<ref>) can't hold by Claim
<ref> and Lemma <ref>. Thus k≥1.
Assume (<ref>) fails. Then q^∗ is again in the new boundary
c_12,θ^∗^'∘ of ( T_θ_0
^',c_12,θ_0) and (<ref>) again holds as
θ→θ_0, which again contradicts Claim <ref> or
Lemma <ref>. Thus we have (<ref>). Then ( α
_12,θ_0^'∘\{ a_i_j} _j=1
^k) ∩ f^-1(E_q)=∅ and each component of
α_12,θ_0^'∘\∂Δ=α
_12,θ_0^'∘\{ a_i_j} _j=1
^k has a neighborhood in Δ on which f is homeomorphic
(by Lemma <ref> (ii)). Then we can show that each Σ_θ_0
^i is contained in ℱ( L,m) for i=1,2,…,k+1
(this is easy to see when k=1 as in Case 1, and when k>1, the proof is
similar). On the other hand, (<ref>) and (<ref>) imply that
∑_j=1^k+1n( Σ_θ_0^j)
=n( Σ_L_1) ,
which with (<ref>) implies that ∑_j=1^k+1R(Σ_θ_0
^j)=R(Σ_L_1)and thus by (<ref>) Σ_L_1 is
decomposable in ℱ( L,m) , contradicting Lemma
<ref>.
We have proved that Cases 1–3 can't occur. Thus Condition <ref> can't be
satisfied by Σ_L_1 and the lemma is proved completely.
The proof of Lemma <ref> is to deduce contradictions when c_1 is a
circle with L(c_1)<δ_E_q/2. If the condition that Σ_L_1 is precise extremal in ℱ_r( L,m) is
strengthened to that Σ_L_1 is precise extremal in ℱ
_r( L) =∪_m=1^∞ℱ_r( L,m), the discussion can be greatly simplified, since we need not discuss the
number of edges. For example, we can easily prove the following:
If Σ_L_1∈ℱ_r( L,m) is a
precise extremal surface of ℱ_r( L) , (<ref>)
is an ℱ( L,m)-partition such that {q_j
}_j=1^m⊂ E_qand that c_1 is a circle with L(c_1)<2π
and
c_1∩ E_q={q_1}.
Then there exists a precise extremal surface Σ_L_1^'of
ℱ_r( L) such that ∂Σ_L_1^' has the following ℱ( L,m)-partition
∂Σ_L_1^'=c_1,θ_0+c_2+⋯+c_m
where c_1,θ_0 is the rotation h_θ_0,q_1(
c_1) of c_1 with angle θ_0∈(0,π)and
c_1,θ_0^∘∩ E_q≠∅, say, c_1,θ_0∩
E_q contains not only q_1, but also a point other than q_1.
In Lemma <ref> we assumed L(c_1)<δ_E_q/2 and permitted
c_1^∘ to contain a point of E_q. But in the Corollary we assume
c_1^∘∩ E_q=∅ but L(c_1)<2π, and require that the
closed arc c_1,θ_0 of the new surface boundary contains not only
the point q_1 of E_q, but also another point of E_q.
Since L(c_1)<2π, c_1 is strictly convex. We still use the notations
in the above proof. For sufficiently small θ>0, by (<ref>) we have
the following conclusions (1)–(3):
(1) T_θ^'∩ E_q={q_1}.
(2) T_θ^''∩ E_q={q_1}.
(3) ( T_θ^',c_12,θ) is a
simple closed Jordan domain of Σ_L_1.
Then Σ_θ is a well defined surface in ℱ(
L,m) . Let θ_1,θ_2,θ_3 be the supremum of
θ in (0,π) such that (1), (2), (3) hold on [0,θ] respectively.
We first consider the case θ_1≤θ_2. This means that the
moving circle c_θ first meets T_0∩ E_q before c_θ
first meets ( S\T_0) ∩ E_q.
If θ_3≤θ_1, then we can show that Claims <ref> and
<ref> hold for c_12,θ_3^'∘=( f,α
_12,θ_3^') and α_12,θ_3^'∘, and thus Σ_θ_3 consists of a finite number, say k+1 with k≥1, of surfaces Σ_θ_3^j in ℱ(
L) with
∑_j=1^k+1A(Σ_θ_3^j)=A(Σ_L_1),∑_j=1
^k+1L(∂Σ_θ_3^j)=L(∂Σ_L_1),
and
∑_j=1^k+1n(Σ_θ_3^j)≤n
(Σ_L_1),
inequality holding if and only if θ_3=θ_1 and f^-1
(E_q)∩α_12,θ_3^'∘\{a_i_1,a_i_2
,…,a_i_k}≠∅, which implies
∑_j=1^k+1R(Σ_θ_3^j)≥ R(Σ_L_1),∑
_j=1^k+1L(∂Σ_θ_3^j)=L(∂Σ_L_1),
contradicting Lemma <ref> for ℱ( L).
If θ_3>θ_1, then Σ_θ_1 is a surface of
ℱ( L) with (<ref>) and (<ref>) holding for
θ=θ_1. But n( Σ_L_1)
>n( Σ_θ_1). Thus H(
Σ_θ_1) >H(Σ_L_1) and Σ_L_1 is not
extremal in ℱ( L) , contradicting the hypothesis. We
have proved the result for the case θ_1≤θ_2.
Assume θ_1>θ_2. This means that the moving circle c_θ first meets E_q from outside of c_θ before c_θ first
meets E_qfrom inside of c_θ.
If θ_3>θ_2, then Σ_θ_2∈ℱ(
L), (<ref>) holds for θ_2 and n(
Σ_L_1) =n( Σ_θ_2), and
thus R(Σ_L_1)=R(Σ_θ_2). It is clear that
c_1,θ_2 contains more than one points of E_q and q_1∈
c_1,θ_2. Therefore Σ_θ_2 satisfies the corollary.
Assume θ_3≤θ_2. Then we can obtain a contradiction as in the
discussion for the case θ_1≤θ_2 and θ_3≤θ_1.
Recall that our goal in this section is to prove Theorem <ref>. We will
in fact deduce a contradiction from the opposite of the conclusion, say under
the condition that Σ_L_1∉ℱ_r( L,m-1)
. This means we assume the following condition, before we prove Theorem
<ref>.
∂Σ_L_1 has no ℱ( L,m-1) partition, say, ∂Σ_L_1∈ℱ( L,m)
\ℱ( L,m-1).
We first show the following.
Under Condition <ref>, every c_j of (<ref>) which
is contained in ℭ^2 satisfies
c_j∩ E_q=∂ c_j,
and
L( c_j) ≥δ_E_q.
It is
trivial that (<ref>) implies (<ref>). Assume that (<ref>) fails
for some c_j_0contained in ℭ^2. Then, by definition of
ℭ^2, c_j_0 is not closed and one endpoint of c_j_0
is not in E_q, and we may assume q_j_0 is not in E_q. Thus by
Lemma <ref> c_j_0-1+c_j_0 is circular at q_j_0 and
f restricted to a neighborhood of a_j_0 in Δ is
homeomorphic, and so ( f,α_j_0-1+α_j_0) is a
locally simple circular arc, and c_j_0-1 and c_j_0 are both in the
same circle. If c_j_0-1+c_j_0 is simple, then c_j_0-1+c_j_0 in (<ref>) can be merged into one edge so that (<ref>) becomes an
ℱ( L,m-1)-partition of ∂Σ_L_1,
contradicting Condition <ref>. Then c_j_0-1+c_j_0 is locally
simple but not globally simple. Thus, considering that both c_j_0-1 and
c_j_0 are simple and c_j_0∈ℭ^2, we have
q_j_0-1∈ c_j_0\{ q_j_0+1}and
q_j_0+1∈ c_j_0-1^∘, and thus we can write
c_j_0-1+c_j_0=C_j_0-1^'+C_j_0,
in which C_j_0-1^'=C_j_0-1^'( q_j_0
-1,q_j_0+1) =c_j_0-1( q_j_0-1,q_j_0+1)
and C_j_0=C_j_0( q_j_0+1,q_j_0+1) is the
circle c_j_0-1( q_j_0+1,q_j_0) +c_j_0(
q_j_0,q_j_0+1) . Therefore we have the ℱ(
L,m)-partition
∂Σ_L_1=c_1+⋯+c_j_0-2+C_j_0-1^'+C_j_0
+c_j_0+1+⋯+c_m,
and as a set on S, C_j_0-1^'⊂ c_j_0 and the initial
point q_j_0-1 of C_j_0-1^' is not in E_q (we have
assumed q_j_0∉ E_q and thus q_j_0-1∈ c_j_0
\{ q_j_0+1}⊂ S\ E_q). We
conclude this by
C_j_0-1^'∈ℭ^2and the initial point
q_j_0-1 of C_j_0-1^'is not in E_q.
By Claim <ref>, the above discussion about c_j_0-1+c_j_0 applies
to c_j_0-2+C_j_0-1^'. Then we can repeat the same argument
m-1 times to obtain an ℱ( L,m)-partition
∂Σ_L_1=C_1( q_j_0+1,q_j_0+1)
+C_2( q_j_0+1,q_j_0+1) +⋯+C_m( q_j_0
+1,q_j_0+1)
such that all C_1,…,C_m are simple circles contained in the same
circle C_j=C, and by Lemma <ref> (ii) each point p∈
C_j\ E_q is a simple point of Σ_L_1for
j=1,…,m (see Definition <ref> (a) and (b)). Then (<ref>) and
(<ref>) imply
L(C_j)=L(C)=L(∂Σ_L_1)/m=L_1/m≤L/m≤δ_E_q/10,
which implies that C contains at most one point of E_q. Then we may
assume C_j=C_j( q_j_0+1^',q_j_0+1^')
such that C_j^∘=C_j\{q_j_0+1^'}⊂
S\ E_q, say, C_j∈ℭ^1\ℭ^2. Then by Lemma <ref>, we have m=1. This contradicts (<ref>) and
Condition <ref>.
Assume that Condition <ref> holds and
L(c_1+c_2)<δ_E_q/2.
Then the following hold.
(i) q_1≠ q_3.
(ii) q_1∉ c_2^∘ and q_3∉ c_1^∘.
(iii) c_1+c_2 cannot contain a closed subarc c_1^'
+c_2^'=c_1( q_1^',q_2) +c_2(
q_2,q_1^') such that
q_1^'∈ E_q and q_1^'∉{q_1,q_2
,q_3}.
The proof of (i) and (ii) is relatively easy, while the proof of (iii) is
quite complicated, but it is very similar to the proof of Case 1 in Lemma
<ref>.
By Lemma <ref> and (<ref>) we have
Neither c_1 nor c_2 is a closed arc, and (
c_1+c_2) ∩ E_q contains at most one point.
To prove (i) we assume q_1=q_3. If q_1or q_2is contained in
E_q, then by Claim <ref>, we have c_1^∘∩ E_q
=∅, say, c_1∈ℭ^2, and then by Lemma <ref>
L(c_1)≥δ_E_q, contradicting (<ref>). So we may
assume that neither q_1 nor q_2 is contained in E_q. If
c_1^∘∩ c_2^∘≠∅, then c_1 and c_2
contains three common points, which implies c_1=-c_2 and c_1+c_2
is folded at q_1∉ E_q, contradicting Lemma <ref>. Thus by
Claim <ref>, either c_1 or c_2 is contained in ℭ
^2, but this, together with Lemma <ref>, implies L(c_1+c_2
)≥δ_E_q, contradicting (<ref>) once more. (i) is proved.
To prove (ii) assume that it fails. Then we may assume q_3∈ c_1^∘. We first show that
Either c_1^∘∩ E_q or c_2^∘∩ E_q
is empty.
Assume neither c_1^∘∩ E_q nor c_2^∘∩ E_q is
empty. Then by Claim <ref> c_1^∘∩ c_2^∘∩ E_q is
a singleton {q^∗} in E_q, q_2∉ E_q, and thus c_1
and c_2 contain three common points, but c_1+c_2 is not folded at
q_2 by Lemma <ref>. Then by Claim <ref>, neither c_1nor
c_2 is closed but c_1+c_2 is contained in the same circle and
c_1+c_2 covers more than that circle since q_3∈ c_1^∘, and by
Lemma <ref>, every point of ( α_1+α_2)
^∘is a simple point of f. Then we can write
c_1+c_2=C_1^'+C_2,
where C_1^'=c_1( q_1,q_3)is the subarc of
c_1 and C_2=c_1( q_3,q_2) +c_2( q_2
,q_3)is a circle, and moreover, C_1^'∘ and
C_2^∘ have neighborhoods in Σ_L_1 which are simple
domains of Σ_L_1 (see Remark <ref> (ii)). Hence the new
partition
∂Σ_L_1=C_1^'+C_2+c_3+c_4+⋯+c_m
is still an ℱ( L,m)-partition of ∂Σ_L_1. But this contradicts Lemma <ref>, since L(C_2
)<L(c_1+c_2)<δ_E_q/2. Thus Claim <ref> holds.
By Claim <ref> we may assume c_1^∘∩ E_q=∅. Then
by Claim <ref> we have c_1∈ℭ^2 and then by Lemma
<ref> c_1 contains two distinct endpoints in E_q. But this
contradicts Claim <ref> and (ii) is proved.
To prove (iii), assume it fails, say, that c_1^' and c_2
^' satisfying the condition of (iii) exist. By (i) and (ii),
c_1+c_2 cannot be contained in a circle and cannot be folded at
closed arc.
Let α_1^'=α_1^'( a_1^'
,a_2) and α_2^'=α_2^'(
a_2,a_1^'') be subarcs of α_1 and
α_2 such that c_1^'=( f,α_1^')
and c_2^'=( f,α_2^')and let T_0
be the domain on S enclosed by c_1^'+c_2^'. Then by
(<ref>) we have
D(q_1^',δ_E_q/2)∩ E_q=T_0∩
E_q={ q_1^'} ,
and that c_1 or c_2 is strictly convex. On the other hand, c_1 and
c_2 cannot be externally tangent at q_2 since they both contain the two
common points q_1^' and q_2. Hence, by definition of
ℱ( L,m)-partition in Definition <ref>, we have that
f is homeomorphic in a neighborhood of ( α_1^'+α_2^') ^∘ in Δ, 0<∠( Σ_L_1,a_2) <2π, and we may assume c_2 is
strictly convex.
Similar to the discussion of Claim <ref>, we have the following.
( T_0,c_1^'+c_2^')
cannot be a simple closed domain of Σ_L_1, say, there is no
univalent branch g of f^-1 defined on T_0\{q_1^'} such that g( c_1^'+c_2^')
=α_1^'+α_2^'.
Now we have only two cases to discuss.
Case A. c_1+c_2 is convex at q_2, say
0<∠( Σ_L_1,a_2) ≤π.
Case B. c_1+c_2 is concave at q_2, say π
<∠( Σ_L_1,a_2) <2π.
The discussion of Case A is essentially a duplication of that for Case 1 (in
the proof of Lemma <ref>), with a little difference, but we will write
it down for completeness.
Assume Case A occurs and let h_θ,q_1^'=φ_q_1
^'^-1∘φ_θ∘φ_q_1^', where
φ_q_1^' is a rotation of S moving q_1^' to 0
and φ_θ is the rotation w↦ e^-iθw of S. Then
the rotation h_θ,q_1^'( c_1^'+c_2^') of c_1^'+c_2^' never meets any point of
E_q other than q_1^', and then we can obtain a contradiction as
in Case 1 (in the proof of Lemma <ref>). But notations here may have
different meaning. For example, here T_0 is the domain on S enclosed by
c_1^'+c_2^',while in the proof of Lemma <ref>,
T_0 is the disk enclosed by the circle c_1. On the other hand one of
the key points in Case 1 of Lemma <ref> is (<ref>), which follows
from <ref>, but here <ref> may no longer hold for
Σ_θ_0^i (the surfaces in (<ref>) and (<ref>)). But we can prove that (<ref>) still holds after (<ref>).
The interior angles of T_0 at q_1^' and q_2 are both equal
to ∠( Σ_L_1,a_2) . For θ∈
(0,∠( Σ_L_1,a_2) ), we introduce more
notations:
T_θ=h_θ,q_1^'( T_0) ,T_θ^'=T_0\T_θ,T_θ^''=T_θ\T_0,
c_1,θ^'=h_θ,q_1^'( c_1^')
,c_2,θ^'=h_θ,q_1^'( c_2^')
,
q_2,θ=h_θ,q_1^'( q_2) ,q_2,θ^'=c_2,θ^'∘∩ c_1^'∘.
Then q_2,θ^' gives the following partitions of c_1^' and c_2,θ^':
c_1^'=c_11,θ^'+c_12,θ^'=c_1^'( q_1^',q_2,θ^') +c_1^'(
q_2,θ^',q_2) ,
c_2,θ^'=c_21,θ^'+c_22,θ^'=c_2,θ^'( q_2,θ,q_2,θ^')
+c_2,θ^'( q_2,θ^',q_1^') ;
α_1^' has a partition
α_1^'=α_11,θ^'( a_1^',a_2,θ^') +α_12,θ^'(
a_2,θ^',a_2) =α_1^'( a_1^',a_2,θ^') +α_1^'( a_2,θ^',a_2)
such that
c_11,θ^'( q_1^',q_2,θ^')
=( f,α_11,θ^'( a_1^',a_2,θ^') ) ,
c_12,θ^'( q_2,θ^',q_2) =(
f,α_12,θ^'( a_2,θ^',a_2)
) ;
and α_2=α_2(a_2,a_3) has a partition
α_2=α_2^'( a_2,a_1^'')
+α_2^''( a_1^'',a_3)
=α_2( a_2,a_1^'') +α_2(
a_1^'',a_3)
such that
c_2^'=c_2^'( q_2,q_1^') =(
f,α_2^'( a_2,a_1^'') ) .
The reader should be aware that the c_1j,θ^' are subarcs of
c_1^' (not c_1,θ^'), but c_2j,θ^'
are subarcs of c_2,θ^' (not c_2^'). It is clear
that by Lemma <ref> (ii) and Claim <ref> when θ>0 and θ
is small enough, f restricted to a neighborhood of α_12,θ^'+α_2^' in Δ is a homeomorphism,
since α_12,θ^'+α_2^' is a subarc of
α_1^'+α_2^' which tends to α_2^'
as θ→0. Thus for small enough θ>0 we have the
following claim similar to Claim <ref>:
( T_θ^',c_12,θ^'+c_2^') is a simple closed Jordan domain of Σ_L_1 such that -c_22,θ^'∘=-c_2,θ^'∘(
q_2,θ^',q_1^')is the new boundary and
c_12,θ^'+c_2^' is the old boundary. That is to say,
there exist a Jordan domain D_θ^'⊂Δand an arc
α_22,θ^'=α_22,θ^'( a_2,θ^',a_1^'') in Δ with
α_22,θ^'∘⊂Δ, such that
∂ D_θ^'=α_12,θ^'( a_2,θ^',a_2) +α_2^'( a_2,a_1^'') -α_22,θ^'( a_2,θ^',a_1^'') ,
c_22,θ^'( q_2,θ^',q_1^')
=( f,α_22,θ^'( a_2,θ^'
,a_1^'') ) ,
and that f restricted to D_θ^' is a homeomorphism
onto T_θ^'.
Let θ_0 be the maximal number in (0,∠( Σ_L_1
,a_2) ] such that all θ∈( 0,θ_0)
satisfy Claim <ref>. Then by Claim <ref>, θ_0<∠(
Σ_L_1,a_2) . Repeating the argument for Claims
<ref>–<ref>, with a little difference, we will show the following
Claims <ref>–<ref>:
Except that α_22,θ_0^'∘⊂Δ may fail, all other conclusions in Claim <ref> hold for
θ_0: ( T_θ_0^',c_12,θ_0
^'+c_2^') is still a simple closed Jordan domain of
Σ_L_1 such that -c_22,θ_0^'∘ contains the
new boundary and c_12,θ_0^'+c_2^' is contained in
the old boundary. That is to say, there exist a Jordan domain D_θ_0
^'⊂Δand an arc α_22,θ_0^'
=α_22,θ_0^'( a_2,θ_0^'
,a_1^'') in Δ, such that ∂
D_θ_0^'=α_12,θ_0^'+α_2^'-α_22,θ_0^', c_22,θ_0^'=(
f,α_22,θ_0^') , and f restricted to
D_θ_0^' is a homeomorphism onto T_θ_0^'.
f has no branch value on c_22,θ_0^'∘and
thus each component of α_22,θ_0^'\∂Δ has a neighborhood in Δ on which f is a homeomorphism.
c_22,θ_0^'∘=( f,α_22,θ_0
^') has to intersect ∂Σ_L_1, say,
α_22,θ_0^'∘∩∂Δ≠∅.
α_22,θ_0^'∘∩∂Δis a
nonempty finite set { a_i_1,…,a_i_k} in
{a_j}_j=1^m, a_1^'',a_i_1,…,a_i_k
,a_2,θ_0^'are arranged on ∂Δ anticlockwise
and divide α_22,θ_0^'∘ into k+1 open arcs, each
of which has a neighborhood in Δ on which f is a homeomorphism.
We repeat the argument for completeness. It is obvious by Claim <ref>
that f^-1 has a univalent branch g defined on T_θ_0
^'\ c_22,θ_0^'∘=∪_θ∈
(0,θ_0)T_θ^' with α_12,θ_0
^'( a_2,θ_0^',a_2) =g(
c_12,θ_0^') ⊂α_1and α_2
^'=g( c_2^') ⊂α_2. By Lemma
<ref>, g can be extended to be a univalent branch of f^-1
defined on T_θ_0^', and thus Claim <ref>
holds for D_θ_0^'=g( T_θ
_0^') =g( T_θ_0^').
By (<ref>), c_22,θ_0^'∘∩ E_q
=∅, which together with Lemma <ref> implies Claim <ref>.
Let Δ_θ_0=Δ\D_θ_0^'.
If α_22,θ_0^'∘∩∂Δ=∅, then
( f,Δ_θ_0) is a surface in
ℱ( L^',m+2) ,with
L^'=L-L(c_12,θ_0^'+c_2^')+L(c_22,θ_0
^'),
and thus by Claim <ref> and Lemma <ref>, for every small enough
ρ>0, ( f,Δ_θ_0) contains the
simple and closed Jordan domain ( T_θ_0+ρ
^'\ T_θ_0^',( c_12,θ_0+ρ^'\ c_12,θ_0^') +c_22,θ_0
^') such that ( c_12,θ_0+ρ^'\ c_12,θ_0^') +c_22,θ_0^' is
the old boundary, say, an arc of ( f,∂Δ_θ_0
) , and c_22,θ_0+ρ^'∘ is the new boundary.
Then we have that ( T_θ_0^',c_12,θ
_0^'+c_2^') can be extended to a larger simple
closed Jordan domain ( T_θ_0+ρ^'
,c_12,θ_0+ρ^'+c_2^') of Σ_L_1
, and that Claim <ref> holds for all θ∈[0,θ_0
+ρ), contradicting the maximal property of θ_0. Thus Claim
<ref> holds.
It is clear that α_22,θ_0^'∩( α
_12,θ_0^'+α_2^') ={a_2,θ_0
^',a_1^''}, which together with that c_22,θ
_0^' is strictly convex and Lemma <ref>, implies that
α_22,θ_0^'∘∩( ( ∂Δ) \( α_12,θ_0^'+α
_2^') ) is a subset of {a_j}_j=1^m, and so
is α_22,θ_0^'∘∩∂Δ. This, together
with Claims <ref> and <ref>, implies Claim <ref>.
For simplicity, we assume that α_22,θ_0^'∘
∩∂Δ={a_i_1} is a singleton. Then q_i_1 gives
partitions
c_22,θ_0^'=c_221,θ_0^'( q_2,θ_0
^',q_i_1) +c_222,θ_0^'( q_i_1
,q_1^') ,
c_2,θ_0^'=𝔠_21,θ_0( q_2,θ_0
,q_i_1) +c_222,θ_0^'( q_i_1
,q_1^') ,
where
𝔠_21,θ_0=c_21,θ_0^'( q_2,θ
_0,q_2,θ_0^') +c_221,θ_0^'(
q_2,θ_0^',q_i_1) .
We can cut ( T_θ_0^'\
c_22,θ_0^',c_2,θ_0^') , the simple
closed Jordan domain of Σ_L_1 with new boundary c_2,θ_0
^'∘, from Σ_L_1 and sew (
T_θ_0^'',c_11,θ_0^') ,
to Σ_L_1\( T_θ_0^'
\ c_22,θ_0^',c_2,θ_0^')
along c_11,θ_0^'=c_1∩T_θ_0, to
obtain two surfaces Σ_θ_0^1 and Σ_θ_0^2,
linked at q_i_1, such that
∂Σ_θ_0^1=( c_1\ c_1^')
+c_1,θ_0^'+𝔠_21,θ_0+c_i_1+⋯
+c_m,
∂Σ_θ_0^2=c_222,θ_0^'+(
c_2\ c_2^') +c_3+⋯+c_i_1-1.
It is clear that the total number of terms in the above two partitions is
m+3, q_i_1 is contained in T_0, and q_1,q_2,θ_0,q_3
are outside T_0 since T_0 is convex. Therefore i_1≠1,2,3,and
i_1≥4, say, the first partition contains at least four terms and the
second partition contains at least three terms, which implies that each of the
partitions has at most m terms. Hence the above two partitions are both
ℱ( L,m) partitions, by Claim <ref>. It is clear
that here (<ref>) still holds and, by (<ref>), we have
n( Σ_L_1) =∑_j=1^2n(
Σ_θ_0^j) .
Then Σ_L_1 is decomposable in ℱ( L,m) ,
contradicting Lemma <ref>, and (iii) is proved in Case A.
Now we assume Case B occurs. By (<ref>), Claim <ref> and the
assumption of Case B, both c_1^' and c_2^' are strictly
convex. Then we may further assume
L(c_1^')≥ L(c_2^').
Let
l=L(c_1^')+L(c_2^'),
and let T_0 be still the domain enclosed by c_1^'+c_2^'. Let I=q_1^'q_2 and let C be the strictly convex
circle passing through q_1^' and q_2 whose length is land
whose arc c_x_0 from q_1^' to q_2 is longer than its
complementary, with L(c_x_0)=x_0, and let c_l-x_0^'=C\ c_x_0^∘. Then c_x_0 is on the right hand side
of the great circle determined by I=q_1^'q_2. Recall
Definition <ref> of lens and let
𝔇_x=𝔇( I,x,l-x) =𝔇(
I,c_x,c_l-x^')
be the lens with ∂𝔇_x=c_x-c_l-x^', where
c_x=c_x( q_1^',q_2) and c_l-x^'
=c_l-x^'( q_2,q_1^') are convex circular arcs
with L(c_x)=x and L(c_l-x^')=l-x. By Corollary
<ref>, we have the following Claim <ref> and <ref>.
The area A( 𝔇( I,x,l-x) )
strictly increases for x∈[l/2,x_0].
The lune 𝔇_x^'=𝔇^'
(I,c_x)=𝔇^'(I,x) strictly increases, and the lune
𝔇_x^''=𝔇^'(-I,c_l-x^')=𝔇^'(-I,l-x) strictly decreases, for all x∈
[l/2,x_0] (see Definition <ref> for the notation 𝔇
^'(I,·)). That is to say, 𝔇_x^'
\ I⊂𝔇_x^'^'and 𝔇_x^'^''\ I⊂𝔇
_x^'' when l/2≤ x<x^'≤ x_0.
Since c_1^' and c_2^' are the circular arcs with the
same endpoints, we have
For any circular arc γ contained in T_0 from
q_1^' to q_2, q_1∈γ if and only if γ is
contained in the circle determined by c_1 (three points determine a unique
circle on S).
Assume L(c_1^')=x_0^'. Since we assumed L(c_1^')≥ L(c_2^'), we have x_0^'∈[l/2,x_0]. For
x∈(x_0^',x_0] let
T_x^'=𝔇_x_0^'^''\𝔇_x^'' and T_x^''=𝔇_x^'\𝔇_x_0^'
^'.
Then by Lemma <ref> we have the following result similar to Claims
<ref> and <ref>:
For every x∈(x_0^',x_0] so that x-x_0^' is small enough, there exist a simple arc α_l-x^'
=α_l-x^'( a_2,a_1^'') in
Δ, with α_l-x^'∘⊂Δ and
c_l-x^'=( f,α_l-x^') , and a Jordan
domain D_x^' in Δ with ∂ D_x^'=α
_2^'-α_l-x^', such that f restricted to
D_x^' is a homeomorphism onto T_x^'(α_2^' is defined just before (<ref>)). In
other words, for each x∈(x_0^',x_0] so that x-x_0^'
is small enough, ( T_x^',c_2^')
is a simple closed Jordan domain of Σ_L_1 with new boundary
c_l-x^'∘ and old boundary c_2^'=c_l-x_0^'
^'.
Then for every x satisfying Claim <ref>, we can cut (
T_x^',c_l-x_0^'^') from
Σ_L_1 and sew ( T_x^''
,c_x) to Σ_L_1\( T_x^',c_l-x_0^'^') along c_x_0^' to
obtain a surface Σ_x=( f_x,Δ) in
ℱ_r( L,m+2) .
It is clear that there exists a maximum x^∗∈(x_0^',x_0]
such that for every x∈(x_0^',x^∗), Σ_x is a
well defined surface in ℱ_r( L,m+2). Then either
x^∗=x_0 or x^∗<x_0. As the argument for Claims <ref>
and <ref>, with a little difference, we can show the following Claims
<ref> and <ref>:
Except that α_l-x^∗^'∘⊂Δ
may fail, all other conclusions of Claim <ref> hold for x^∗:
There exist a simple arc α_l-x^∗^'=α_l-x^∗
^'( a_2,a_1^'') in Δ,
with c_l-x^∗^'=( f,α_l-x^∗^')
, and a Jordan domain D_x^∗^' in Δ with ∂
D_x^∗^'=α_2^'-α_l-x^∗^', such
that f restricted to D_x^∗^' is a homeomorphism
onto T_x^∗^'. In other words, (
T_x^∗^',c_2^') is still a simple
closed Jordan domain of Σ_L_1 with the new boundary contained
in c_1-x^∗^'∘ and the old boundary containing
c_l-x_0^'^'.
f has no branch value on c_l-x^∗^'∘and
thus α_l-x^∗^'∘\∂Δ has a
neighborhood in Δ on which f is a homeomorphism.
By Claim <ref> f^-1 has a univalent branch g defined on
T_x^∗^'\α_l-x^∗^'∘=∪_x∈(x_0^',x^∗)T_x^' with
g( c_2^') =g( c_l-x_0^'^') =α_2^'. By Lemma <ref>, g can be extended
to T_x^∗^', and then Claim <ref> follows.
Claim <ref> is obvious, since c_l-x^∗^'∘∩
E_q=∅, both α_l-x^∗^' and c_l-x^∗
^' are simple arcs and c_l-x^∗ is circular.
Now, there are only three possibilities:
Case BA. x^∗<x_0.
Case BB. x^∗=x_0and α_l-x^∗
^'∘∩∂Δ≠∅.
Case BC. x^∗=x_0and α_l-x^∗
^'∘∩∂Δ=∅.
Assume Case BA occurs. Then as the discussion for Claims <ref> and
<ref>, we can show
c_l-x^∗^'∘=( f,α_l-x^∗
^'∘) has to intersect ∂Σ_L_1, say,
α_l-x^∗^'∘∩∂Δ≠∅.
α_l-x^∗^'∘∩∂Δis a
nonempty finite set { a_i_1,…,a_i_k} in
{a_j}_j=1^m, a_1^'',a_i_1,…,a_i_k
,a_2are arranged on ∂Δ anticlockwise and divide
α_l-x^∗^'∘ into k+1 open arcs, each of which has a
neighborhood in Δ on which f is a homeomorphism.
But the proof of Claim <ref> is simpler: Let Δ_x^∗
=Δ\D_x^∗^'. If α_l-x^∗
^'∘∩∂Δ=∅, then by Claim <ref>,
( f,Δ_x^∗) is a surface in
ℱ( L^',m+1)with L^'=L_1
-L(c_2^')+L(c_l-x^∗^'), and thus by Lemma
<ref>, f restricted to a neighborhood of α_l-x^∗
^' in Δ_x^∗ is a homeomorphism, in other
words, ( f,Δ_x^∗) contains a simple
closed Jordan domain ( K_ε,c_l-x^∗
^') with old boundary c_l-x^∗^', say,
c_l-x^∗^' is an arc of ( f,∂Δ_x^∗
) , where K_ε=𝔇_x^∗^''\𝔇_x^∗+ε^'' for
every small enough ε>0. Then ( T_x^∗+ε^',c_2^') =( T_x^∗^'∪K_ε,c_2^') is a
closed and simple Jordan domain of Σ_L_1, contradicting the
maximality of x^∗. Thus Claim <ref> holds.
It is clear that α_l-x^∗^'∩α_1^'
={a_2,a_1^''} and, on the other hand, c_l-x^∗
^' is strictly convex, since x_0^'<x^∗<x_0.
Therefore, by Lemma <ref>, we have α_l-x^∗^'∘∩∂Δ⊂{a_j}_j=1^m. Thus α_l-x^∗
^'∘∩∂Δ consists of some points of {a_j}_j=1^m, and then Claim <ref> holds.
When α_2,θ_0^'∘∩∂Δ={a_i_1} is a singleton with a_i_1∈{a_j}_j=1^m, we can obtain a contradiction as in Case A, and the same argument applies to Case BB. When α_2,θ_0^'∘∩∂Δ contains more than one point, the argument is similar. We have obtained a contradiction in Cases BA and BB.
Assume Case BC occurs. Then c_x^∗+c_l-x^∗^'=c_x_0
+c_2^' is the circle C and it is clear that Σ_x_0
=Σ_x^∗ is a well defined surface with
L(∂Σ_x_0)=L(∂Σ_L_1).
For all x∈0,θ_0], since L(c_x+c_l-x)=l<δ_E_q/2, (<ref>) implies that ( c_x
+c_l-x) \{q_1^'} never meets E_q, and then
we have n( Σ_x_0) =n(
Σ_L_1) . On the other hand, by Claim <ref>, we have
A(Σ_x_0)>A(Σ_L_1). Therefore
R(Σ_x_0)>R(Σ_L_1),
and moreover ∂Σ_x_0 has the partition
∂Σ_x_0=C_1+C_2+C_3+c_3+⋯+c_m,
where C_1=c_1\ c_1^',C_2=c_x_0^'+c_l-x_0^',C_3=c_2\ c_2^'. It is clear that
C_2 is a simple circle such that C_2^∘ is the old boundary of a
simple domain of Σ_x_0.Thus (<ref>) is an ℱ(
L,m+1) partition of ∂Σ_x_0. It is clear that
Σ_x_0 has no branch value outside E_q, thus
Σ_x_0∈ℱ_r( L,m+1) .
It is clear that
L(C_1)+L(C_2) =L(c_1\ c_1^')+l=L(c_1\
c_1^')+L(c_1^')+L(c_2^')
<L(c_1)+L(c_2)<δ_E_q/2.
By (<ref>), (<ref>) and (<ref>), we can repeat the
argument for Case 1 to show that Σ_x_0, as a surface in
ℱ_r( L,m+1) is decomposable in ℱ(
L,m) (in Case 1 we in fact proved Σ_L_1∈ℱ
( L,m) is decomposable in ℱ( L,m-1),
by (<ref>)). This implies that Σ_L_1 is also decomposable in
ℱ( L,m) by (<ref>) and (<ref>). But this
contradicts Lemma <ref>, and thus Case BC can't occur. We have proved
(iii) in any case and the lemma has been proved completely.
Now we can easily prove Theorem <ref>.
It is clear that there are at most
[ L/( δ_E_q/4) ] +1=[ 4L/δ
_E_q] +1 terms in (<ref>) which have length ≥δ
_E_q/4. Thus, we may assume, after a permutation of the subscripts like
( 1,2,…,m) ↦( j_0,j_0+1,…,m,1,2,…
,j_0-1) , that
L(c_j)<δ_E_q/4,j=1,2.
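A word on why two adjacent short edges exist (a pigeonhole remark spelled out here for convenience): if k of the m edges have length ≥δ_E_q/4, then
k·δ_E_q/4≤ L_1≤ L, and hence k≤4L/δ_E_q<m/2
by the assumption m>10L/δ_E_q; so more than half of the m edges of the cyclic partition of ∂Σ_L_1 have length <δ_E_q/4, and two of them must be adjacent.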
Then, by Lemma <ref>, we have the following
(A) Neither c_1 nor c_2 is closed.
Assume Σ_L_1∉ℱ_r( L,m-1) . For j=1
or 2, if c_j is contained in ℭ^2, then by Lemma
<ref> we have ∂ c_j⊂ E_q and thus L(c_j)≥δ
c_2 is contained in ℭ^2, which, together with (A), implies
c_j^∘∩ E_q≠∅ for j=1 and 2. Then by (<ref>)
we have
c_1∩ E_q=c_2∩ E_q=c_1^∘∩ E_q=c_2^∘∩
E_q={q_1^'}
for some q_1^'∈ E_q. This contradicts Lemma <ref> (iii).
This contradiction comes from Condition <ref> which assumes
Σ_L_1∉ℱ_r( L,m-1), and so Condition
<ref> can't be satisfied. Thus we have Σ_L_1∈ℱ
_r( L,m-1), and Theorem <ref> is proved.
§ PROOF OF THEOREM <REF>
We first prove the following result.
Let Σ be a surface of ℱ( L)
and assume that ∂Σ has a partition ∂Σ=Γ+C
such that C=C( p_1,p_2) is a simple circular arc with
C∩ E_q⊂{p_1,p_2} (it is permitted that p_1=p_2). If
C( p_1,p_2) cannot be contained in any open hemisphere on
S, then there exists a surface Σ^' in ℱ(
L) such that ∂Σ^' has the partition
∂Σ^'=Γ+γ,
where γ is a simple polygonal path γ=p_1
𝔞_1𝔞_2…𝔞_sp_2 with
γ^∘∩ E_q={𝔞_1,𝔞_2,…
,𝔞_s},
L(γ)≤ L(C),
H(Σ^')>H(Σ),
R(Σ^')+4π/L(∂Σ^')>R(Σ)+4π/L(∂Σ),
and
γ^∘∩ E_q≠∅ if d( p_1
,p_2) =π.
We first show the following claim.
There exists a closed polygon T on S with
∂ T=-C+γ
such that γ satisfies (<ref>), (<ref>) and (<ref>), and
moreover,
( T\γ) ∩ E_q=( T^∘∪ C^∘) ∩ E_q=∅.
First assume C=C( p_1,p_2) is not contained in any open
hemisphere on S. Then C( p_1,p_2) is half, or a major
arc, of a great circle c on S, oriented by C.
Assume C( p_1,p_2) is half of a great circle on S. Then
p_1 and p_2 are antipodal. Let T be the largest closed biangular
domain on S so that, ∂ T=-C+γ, -C is one of the two edges of
T, say, T is on the right hand side of C, γ is the other edge of
T from p_1 to p_2, and (<ref>) holds. Since #E_q=q≥3,
we have γ∩ C={p_1,p_2} and γ^∘∩ E_q
≠∅. Then T and γ satisfies Claim <ref> with
L(γ)=L(C).
Assume that C( p_1,p_2) is a major arc of the great circle
c and let S^' be the closed hemisphere enclosed by -c. Then
d( p_1,p_2) <π. Since C∩ E_q⊂{p_1
,p_2}, the convex hull K of ( S^'∩ E_q)
∪{p_1,p_2}is a polygon in S^' with
p_1p_2=( ∂ K) ∩ c⊂ K∩
S^'=K⊂ S^'∘∪p_1p_2
and the vertices of K are all in E_q, except the two points p_1 and
p_2. But K is just the line segment p_1p_2 on S when
S^'∘∩ E_q=∅. Then we have two possibilities to discuss.
First consider the case S^'∘∩ E_q=∅ and let
T=S^'. Then K=p_1p_2, T and γ=p_1p_2=p_1𝔞_1…𝔞_sp_2 with
{𝔞_1,…,𝔞_s}=γ^∘∩ E_q
satisfies Claim <ref>. It is possible that γ^∘∩
E_q=∅.
Second consider the case S^'∘∩ E_q≠∅. In this
case K^∘ is a Jordan domain with ∂ K=-γ+p_1p_2, γ=p_1𝔞_1…𝔞
_sp_2 and
{𝔞_1,…,𝔞_s}=γ^∘∩ E_q=[
( ∂ K) \p_1p_2] ∩
E_q≠∅.
On the other hand, γ is a concave polygonal path in S^' whose
two endpoints are on ∂ S^'=-c, which implies L(γ)<L(C).
Then T=( S^'\ K) ∪γ satisfies Claim
<ref>.
By Claim <ref>, we can sew Σ and T along C to
obtain a surface Σ^' satisfying (<ref>)–(<ref>) and
(<ref>). On the other hand, (<ref>) implies
n( Σ^') =n(
Σ) +n(T^∘)+#C^∘∩ E_q=n( Σ)
and we have
A(Σ^')=A(Σ)+A(T)>A(Σ).
Therefore we have (<ref>) and (<ref>). It is clear that
Σ^'∈ℱ( L). We have proved the lemma completely.
Lemma <ref> has a direct corollary:
Let Σ be an extremal surface of ℱ(
L) and assume that ∂Σ contains an arc C=C(
p_1,p_2) such that C is an SCC arc with C∩ E_q
⊂{p_1,p_2} (it is permitted that p_1=p_2), then C is
contained in some open hemisphere on S.
Instead of proving Theorem <ref>, we prove the following theorem which
implies Theorem <ref> directly.
Let L∈ℒ be given. Then the following conclusions
(A)–(C) hold.
(A) There exists a precise extremal surface of ℱ_r(
L) , and there exists a positive integer m_0=m_0(
L,q) , depending only on L and q, such that every precise extremal
surface of ℱ_r( L) is precise extremal in
ℱ( L) ,ℱ_r( L,m) , and
ℱ( L,m) , respectively, for every integer m≥
m_0.
(B) For any precise extremal surface Σ_0=( f_0,Δ) of ℱ_r( L) , there exists a
positive integer n_0 such that ∂Σ_0 has an ℱ
( L,n_0)-partition
∂Σ_0=C_1( q_1,q_2) +C_2( q_2
,q_3) +⋯+C_n_0( q_n_0,q_1)
satisfying the following (B1)–(B4):
(B1) If n_0>1, then, for j=1,2,…,n_0, C_j^∘∩
E_q=∅and ∂ C_j={q_j,q_j+1}⊂ E_q. If
n_0=1, then either C_1∩ E_q=∅ or C_1∩ E_q is the
singleton {q_1}.
(B2) Each C_j is contained in an open hemisphere S_j on S,
j=1,2,…,n_0.
(B3) At most one of C_j,j=1,…,n_0, is a major circular arc (a closed
circular arc is regarded major).
(B4) All C_j,j=1,…,n_0, have the same curvature.
(C) There exists an integer d^∗=d_L,q depending only on L and
qand there exists a precise extremal surface Σ^∗ of
ℱ( L) such that
_maxΣ^∗≤ d^∗
(see (<ref>) for _max), and either Σ^∗ is a simple
closed disk in S\ E_q, or ∂Σ^∗ has a partition
∂Σ^∗=C_1^'(q_1^',q_2^'
)+C_2^'( q_2^',q_3^') +…
+C_n_0^'^'( q_n_0^'^',q_1^') ,
with n_0^'>1, such that
∂ C_j^'={q_j^',q_j+1^'}⊂
E_q, q_j^'≠ q_j+1^', C_j^∘∩ E_q=∅,
for all j=1,2,…,n_0^'.
Let L∈ℒ. For sufficiently large m_0 and each m≥ m_0,
by Theorem <ref> there exists a precise extremal surface Σ_m of
ℱ_r( L,m) . Assume L(∂Σ_m)=L_m.
Then by Theorem <ref>, we have {Σ_m}_m=1^∞
⊂ℱ_r( L,m_0) , and for every m≥
m_0, since ℱ_r( L,m_0) ⊂ℱ
_r( L,m) , Σ_m is an extremal surface of
ℱ_r( L,m_0) . Therefore we have, for
every m≥ m_0,
L_m≥ L_m_0, H(Σ_m)=H(Σ_m_0),
which with the relation Σ_m_0∈ℱ_r( L,m) implies that Σ_m_0 is an extremal surface of ℱ
_r( L,m) as well, and thus L_m_0≥ L_m. Therefore
we have
L_m=L_m_0 and H( Σ_m) =H(
Σ_m_0) , m=m_0,m_0+1,…
For each Σ∈ℱ_r(L), there exists an integer m>m_0 such
that Σ∈ℱ_r(L,m). Then H(Σ)≤ H(Σ
_m)=H(Σ_m_0), and in consequence Σ_m_0 is an extremal
surface of ℱ_r(L). Assume that Σ^' is any other
extremal surface of ℱ_r(L). Then for some positive integer
m^'>m_0, Σ^' is an extremal surface of ℱ
_r(L,m^') and thus we have L(∂Σ^')≥
L_m=L_m_0, and therefore Σ_m_0 is precise extremal in
ℱ_r( L) .
We in fact proved that Σ_m is precise extremal in ℱ
_r( L) for every m≥ m_0. By Corollary <ref>, each
Σ_m is precise extremal in ℱ( L,m) as well
for each m≥ m_0. On the other hand we have ℱ(
L) =∪_m=1^∞ℱ( L,m), ℱ
_r( L) =∪_m=1^∞ℱ_r( L,m), ℱ_r( L,m) increases as m increases, and so
does ℱ( L,m) . Thus every precise extremal surface of
ℱ_r( L) is precise extremal in ℱ(
L), ℱ_r( L,m) and ℱ(
L,m) , for each m≥ m_0; and (A) is proved.
Let Σ_0=( f_0,Δ) be any precise
extremal surface of ℱ_r( L) with L(∂Σ_0)=L_0. Then (A) implies:
Σ_0=( f_0,Δ) is a precise
extremal surface of every ℱ_r( L,m) and every
ℱ( L,m) for m≥ m_0 and L_0=L(
∂Σ_0) =L_m_0.
Then ∂Δ and ∂Σ_0 have corresponding
ℱ( L,m_0)-partitions
∂Δ=α_1( a_1,a_2) +α_2(
a_2,a_3) +⋯+α_m_0( a_m_0,a_1) ,
∂Σ_0=c_1( q_1,q_2) +c_2( q_2
,q_3) +⋯+c_m_0( q_m_0,q_1) ,
with c_j=( f,α_j) ,j=1,…,m_0. We will show that
∂Σ_0 is circular at each q_j∈{q_j}_j=1^m_0
\ E_q, say, c_j-1+c_j is circular at q_j if
q_j∉ E_q.
For sufficiently small ε>0, q_j-ε is a point in
c_j-1^∘, which tends to q_j as ε→0. Then
for sufficiently small ε>0, we have an ℱ(L,m_0
+1)-partition
∂Σ_0=c_1+⋯+c_j-2+c_j-1^'+c_j-1^''+c_j+⋯+c_m_0,
where c_j-1^'=c_j-1(q_j-1,q_j-ε), c_j-1
^''=c_j-1(q_j-ε,q_j). Then c_j-1^'+c_j-1^''=c_j-1 and for sufficiently small ε>0,
c_j-1^''∈ℭ^2=ℭ^2( Σ
_0)(see Definition <ref> for the notation ℭ^j). Since, by Claim <ref>, Σ_0 is also a precise extremal surface
of ℱ_r(L,m_0+1) and (<ref>) is an ℱ(
L,m_0+1) partition of ∂Σ_0, Lemma <ref>
implies that c_j-1^''+c_j is circular at q_j if
q_j∉ E_q and thus c_j-1+c_j is circular at q_jif
q_j∉ E_q. Therefore, we conclude that ∂Σ_0 is
circular everywhere outside E_q. By Lemma <ref>, f_0 is locally
homeomorphic in [ Δ\ f_0^-1(E_q)]
∪[ ( ∂Δ) \[ f_0^-1
(E_q)∩{a_j}_j=1^m_0] ] . Thus we have:
For each a∈∂Δ, if a∉ f_0^-1(E_q), then
a has a neighborhood α_a in ∂Δ such that (
f_0,α_a) is an SCC arc and f_0 restricted to a
neighborhood of α_a in Δ is a homeomorphism.
Then for sufficiently large m, ∂Σ_0 has an ℱ
(L,m)-partition ∂Σ_0=∑_j=1^m𝔠_j such that
𝔠_j∈ℭ^1( Σ_0)(see
Definition <ref>). Since Σ_0 is also precise extremal in
ℱ_r(L,m), by Lemma <ref> all terms 𝔠_j
have the same curvature. Thus we have
The curvature of ∂Σ_0=( f_0,∂Δ) is a constant function of z∈( ∂Δ)
\ f_0^-1(E_q)
Let n_1=#( ∂Δ) ∩ f_0^-1(E_q). Then
there are three possibilities.
Case 1. n_1=#( ∂Δ) ∩
f_0^-1(E_q)=0.
Case 2. n_1=#( ∂Δ) ∩
f_0^-1(E_q)=1.
Case 3. n_1=#( ∂Δ) ∩
f_0^-1(E_q)≥2.
Assume Case 1 occurs. Then C=f(∂Δ) is a circle with C∩
E_q=∅ and f_0:∂Δ→ C is a CCM of degree
k for some integer k≥1. In this case, ∂Σ_0 has a
partition ∂Σ_0=C_1+C_2+…+C_k, such that every
C_k is a simple closed path and C_k=C. Then by Corollary <ref>
C is contained in an open hemisphere on S. Thus C contains a major
circular arc with length <π.
We will prove k=1. We cannot use Lemma <ref> directly, since it is for
terms of ℱ( L,m)-partitions in ℭ^1,
and every arc of ℭ^1 has length <π. But it applies in this
way: If k>1, then ∂Σ_0 has an ℱ_r(
L,k+2) partition
∂Σ_0=C_1^'+c_1^'+C_2^'+c_2^'+C_3+…+C_k
so that C_1^' and C_2^' are major circular arcs, each of
which has length <π, thus C_1^' and C_2^' are both
in ℭ^1( Σ_0) , contradicting Lemma
<ref>. Thus (B) is proved in Case 1.
Assume Case 2 occurs and let ( ∂Δ) ∩ f_0
^-1(E_q)={a_1}. Then by Claim <ref>, ∂Σ
_0=(f,∂Δ) is a simple circle C_1=C_1( p_1
,p_1) with q_1=f(a_1), and by Corollary <ref> C_1
is contained in an open hemisphere on S. Thus (B) also holds in Case 2.
Assume that Case 1 or 2 occurs. Then we have proved that ∂Σ_0
is a simple circle C_1 with #C_1∩ E_q≤1. In this situation,
we in fact can show that (C) holds. Let T be the closed disk enclosed by
C_1. Then we can sew Σ_0 and S\ T along
C_1 to obtain a closed surface F=( f,S) . Then we have
L(∂ T)=L(Σ_0), A(Σ_0)=A(F)-A(S\
T)=A(F)+A(T)-4π and
n( Σ_0) =n( F)
-n( S\ T^∘) =n(
F) -n( S) +n( T)
=n( F) -q+n( T) .
Note that n( T) =#T^∘∩ E_q, not #T∩
E_q. Therefore we have
R(Σ_0)=R(F)+R(T)+8π.
By Lemma <ref> we have R(F)≤-8π, and thus H( Σ
_0) ≤ H(T). Then H( Σ_0) =H(T), say, both
T and Σ_0 are precise extremal surface of ℱ(
L) . If the diameter of T is equal to, or less than
δ_E_q, then there is another disk Σ^∗ in S\
E_q congruent to T, with H(T)≤ H(Σ^∗). But T is extremal
in ℱ( L) . We have H(T)=H(Σ^∗), and thus
both Σ^∗ and T are precise extremal in ℱ(
L) . If the diameter of T is larger than δ_E_q, then by
moving T continuously we can show that there is also another disk
Σ^∗ on S congruent to T such that n(
Σ^∗) ≤n( T) but ∂Σ^∗ contains at least two points of E_q, and then
∂Σ^∗ has a partition (<ref>) satisfying (<ref>)
and H(T)≤ H(Σ^∗), which implies that Σ^∗ and T
are both precise extremal in ℱ( L). Therefore (C)
holds. We have proved (B) and (C) in Cases 1 and 2.
Assume Case 3 occurs. Then f_0^-1(E_q) divides ∂Δ into
n_1 arcs and thus ∂Σ_L_0 has an ℱ(
L,n_1)-partition
∂Σ_0=C_1+C_2+⋯+C_n_1
which satisfies (B1) and (B4) for n_0=n_1. By Corollary <ref>,
(B2) holds true.
Assume that (<ref>) does not satisfy (B3). Then by (B2), as in the argument
for (<ref>), the partition (<ref>) has a refined ℱ
( L,n_1+2)-partitionwhich contains two terms of length
<π which are major arcs and of class ℭ^1. But this
contradicts Lemma <ref> again. Thus (B3) holds with the partition
(<ref>), and (B) is proved completely.
Now we begin to prove (C) for Case 3. By (B3) we may assume
C_2,C_3,…,C_n1 all satisfy (<ref>), C_1 may
or may not satisfy (<ref>).
We first show the following.
In Case 3, Σ_0 can be deformed to be a surface F_1
∈ℱ( L,n_1+k) such that F_1 has a partition of
the form (<ref>) satisfying (<ref>), with n_0^'=n_1+k
for some integer k≥0.
If C_1 also satisfies (<ref>), then there is nothing to prove. So we
assume C_1 is closed. Then by (B2) L(C_1)<2π, and by (B1) and
Corollary <ref> we can find a precise extremal surface F_1 in
ℱ_r( L) , so that ∂ F_1 has an
ℱ( L,n_1) partition
∂ F_1=C_1^'+C_2+⋯+C_n_1
which is the same as (<ref>), except that C_1 is replaced by a
rotation C_1^' of C_1 with C_1^'∩ E_q
={q_1,q_2^',q_3^',…,q_k^'} containing k
points arranged on C_1^' anticlockwise, for some k≥2. Then
C_1^' can be divided into k arcs by { q_j^'} _j=1^k with q_1^'=q_1, and thus ∂ F_1
has an ℱ( L,n_0^')-partition
∂Σ=∂ F_2=C_1^'( q_1,q_2^') +C_1^'( q_2^',q_3^')
+⋯+C_1^'( q_k^',q_1) +C_2(
q_1,q_2) +⋯+C_n_1( q_n_1,q_1)
satisfying (<ref>) with n_0^'=n_1+k.
By Theorem <ref>, there exists a positive integer d^∗ depending
only on n_0^'=n_1+k and q, which in fact depends only on L
and q since n_0^'<L_0/δ_E_q, and there exists a
surface Σ^∗ in ℱ_r( L,n_0^')
such that
H(Σ^∗)≥ H(F_1),L(∂Σ^∗)≤ L(∂ F_1),
the second equality holding only if ∂Σ^∗=∂ F_1. Then Σ^∗ is a precise extremal
surface of ℱ_r( L,n_0^') and thus
L(∂Σ^∗)≥ L(∂ F_1), which implies L(∂Σ^∗)=L(∂ F_1) and thus ∂Σ^∗=∂
F_1, which has the ℱ( L,n_0^')-partition
satisfying (<ref>), and (C) is proved in Case 3.
§ PROOF OF THEOREMS <REF>, <REF> AND <REF>
In this section we complete the proof of our second and third main theorems.
Let 𝒮_1 be the space that satisfies Definition
<ref> (1)–(4). We denote by 𝒮_1^( 5)
,𝒮_1^(6),𝒮_1^(7),𝒮_1^(8) the
subspaces of 𝒮_1 satisfying (5), (6), (7) and (8) of Definition
<ref> respectively.
Then we have 𝒮_0=𝒮_1^( 5)
∩𝒮_1^(6)∩𝒮_1^(7)∩𝒮_1^(8).
We first introduce a procedure to construct a surface for given boundary.
(Standard solution of surfaces with known boundary) Let Γ be
a closed curve on S with the partition
Γ=( f,∂Δ) =C_1( p_1,p_2)
+⋯+C_q^'( p_q^',p_1) ,q^'≤ q.
satisfying Definition <ref> (1)–(3). Then d( p_j
,p_j+1) <π and p_jp_j+1 is well defined. We can
construct a surface Σ_Γ satisfying the following condition.
Σ_Γ is contained in 𝒮_1∩ℱ_r and ∂Σ_Γ=Γ.
The surface Σ_Γ can be obtained as the output when we input
Γ and execute the following procedure.
For each j=1,2,…,q^', let K_j=𝔇^'( p_jp_j+1,C_j) (see Definition
<ref>), which is the closed convex lune enclosed by the arc
C_j=C_j( p_j,p_j+1) and its chord p_j+1p_j, say ∂ K_j=C_j-p_jp_j+1. If C_j is the line segment p_jp_j+1 for some j, then
K_j^∘=∅ and we just set K_j=p_jp_j+1. If
C_j is a major arc of a great circle on S for some j, then by
definition K_j is the closed hemisphere enclosed by the great circle
C_j-p_jp_j+1=C_j+p_j+1p_j. Note that
K_q^'=𝔇^'( p_q^'p_1,C_q) , say p_q^'+1=p_1.
Let l_2=p_1p_2 and l_q^'=p_1
p_q^', which are well defined since d( p_j,p_j+1)
<π for j=1,…,q^',p_q^'+1=p_1. For each
j=3,…,q^'-1, let l_j=p_1p_j if p_1p_j is well defined, say, p_j is not the antipodal point
p_1^∗ of p_1, or let l_j be the straight line segment
p_1p_j-1p_j if p_j=p_1^∗, which is half of the
great circle from p_1 to p_1^∗ passing through p_j-1. Since
p_1,…,p_q^' are pairwise distinct, if p_j=p_1^∗
for some j of 3,…,q^'-1, we have that l_i=p_1p_i is well defined for each i≠ j with 3≤ i≤ q^'-1 and p_j-1p_j is also well defined.
When q^'=2, let T_2 be the surface whose interior is the simple
domain S\ l_2 and boundary is l_2-l_2.
When q^'≥3, for each j=2,…,q^'-1, we define T_j
to be the closed triangular domain enclosed by
l_j+p_jp_j+1-l_j+1,j=2,…,q^'-1.
Each T_j,j=2,…,q^'-1( q^'≥3) , is a
simple surface in the sense that its interior is a simple nonempty domain on
S. But it is possible that for some j, l_j+p_jp_j+1
-l_j+1 may be contained in a line segment, and in this case the interior
of T_j is S\ l_j when p_j+1∈ l_j^∘, or
S\ l_j+1 when p_j∈ l_j+1^∘. Recall Remark
<ref>, when we regard T_j as a surface, ∂ T_j is a
simple closed curve, and thus l_j and l_j+1, regarded as arcs
in the surface, only intersect at p_1 in the surface T_j, even if
∂ T_j is just a line segment as a set on S.
We sew T_2,…,T_q^'-1 along l_3,…
,l_q^'-1 to obtain a surface P with boundary p_1
p_2… p_q^'p_1. Then we can sew P and K_j
along p_jp_j+1 for each j=1,2,…,q^' to obtain a
surface Σ_Γ with ∂Σ_Γ=Γ and
_maxΣ_Γ≤ q^'-2+q^'≤2q^'-2.
Thus we have Σ_Γ∈𝒮_1.
It is clear that all branch values of the surface P are contained in
{p_1,…,p_q^'}and thus when we patch all lunes K_j to
P along p_jp_j+1, no other branch values appeared. Thus all
branch values of Σ_Γ are contained in {p_1,…
,p_q^'}⊂ E_q, and so Σ_Γ∈ℱ_r
∩𝒮_1.
We will write P=P(p_1,p_2,…,p_q^'). Then it is clear that
P(p_1,p_2, …, p_q^') is determined by the ordered
points p_1,p_2,…,p_q^' uniquely, and thus Σ_Γ is determined by Γ uniquely, when we execute the above procedure.
Uniqueness here is in the sense described in Remark <ref>.
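For readers who want the combinatorial skeleton of Solution <ref> at a glance, the following is a minimal bookkeeping sketch in Python. It is only illustrative: it computes no spherical geometry, covers only the generic case q^'≥3, and the function and class names are ours, not notation of this paper. It merely lists the q^' lunes K_j and the q^'-2 fan triangles T_j that are sewn together to form Σ_Γ, so the number of pieces is q^'+( q^'-2) =2q^'-2, the count appearing in the bound on Σ_Γ above.

# Purely combinatorial sketch of the construction of Sigma_Gamma (illustrative only):
# it lists the pieces (lunes K_j and fan triangles T_j) with their boundary edges,
# and computes no spherical geometry at all.  Assumes q' >= 3.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Piece:
    name: str
    edges: Tuple[str, ...]   # labels of the boundary edges of the piece

def standard_solution_pieces(p: List[str]) -> List[Piece]:
    """p = [p_1, ..., p_q'] are labels of the boundary vertices."""
    qp = len(p)
    seg = lambda a, b: f"seg({a},{b})"      # chord joining two vertices
    diag = lambda j: seg(p[0], p[j - 1])    # diagonal l_j = seg(p_1, p_j)
    pieces = []
    # one lune K_j between the arc C_j(p_j, p_{j+1}) and its chord
    for j in range(1, qp + 1):
        a, b = p[j - 1], p[j % qp]
        pieces.append(Piece(f"K_{j}", (f"C_{j}({a},{b})", seg(a, b))))
    # fan triangles T_2, ..., T_{q'-1} with edges l_j, seg(p_j, p_{j+1}), l_{j+1}
    for j in range(2, qp):
        pieces.append(Piece(f"T_{j}", (diag(j), seg(p[j - 1], p[j]), diag(j + 1))))
    return pieces

pieces = standard_solution_pieces(["p1", "p2", "p3", "p4", "p5"])
for piece in pieces:
    print(piece.name, piece.edges)
print(len(pieces), "pieces in total, i.e. 2q'-2 for q'=5")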
If q^'≥4, we can regard l_j,j=3,…,q^'-1, as simple
arcs in the surface P with l_j^∘⊂ P^∘and ∂
l_j⊂∂ P so that every pair l_i and l_j with 3≤
i<j≤ q^'-1 only intersect at p_1in P (see Remark <ref>
for the convention). Then it is clear that l_3,…,l_q^'-1
divide P into the surfaces T_2,…,T_q^'-1. Then it is clear
that when q^'≥4
R( P) =∑_j=2^q^'-1R(T_j)-4π∑_j=2
^q^'-2#( l_j+1^∘∩ E_q) .
The polygon p_1p_2… p_q^'p_1 divides the
surface Σ_Γ into the surface P( p_1,p_2
,…,p_q^') and the q^' lunes K_j
,j=1,…,q^'. Write
J=J(Γ)={j:1≤ j≤ q^' and
K_j^∘≠∅}.
Note that the condition K_j^∘≠∅ means that ∂
K_j=C_j( p_j,p_j+1) -p_jp_j+1 is a Jordan
curve on S and p_jp_j+1^∘ (regarded as in the surface
Σ_Γ) is in the interior of Σ_Γ. Then we have
R(Σ_Γ)=R(P)+∑_j∈ J[ R(K_j)-4π#(
p_jp_j+1^∘∩ E_q) ] .
Therefore, by (<ref>) we have for q^'≥4 that
R(Σ_Γ)=∑_j=2^q^'-1R( T_j) +∑_j∈
J[ R(K_j)-4π#( p_jp_j+1^∘∩
E_q) ] -4π∑_j=2^q^'-2#( l_j+1^∘∩ E_q) .
For the cases q^'=2,3, by the definition of T_2 and {
K_j} _j=1^q^', we have
R(Σ_Γ)=R(T_2)+∑_j∈ J[ R(K_j)-4π#p_jp_j+1^∘∩ E_q] .
Let Σ=( f,Δ) be a surface in
ℱ such that
∂Σ=Γ_1+Γ_2
and Γ_1 is a closed arc of ∂Σ satisfying Definition
<ref> (1)–(3) with the corresponding partition
Γ_1=( f,∂Δ) =C_1( p_1,p_2)
+⋯+C_q^'( p_q^',p_1) .
Let Σ_Γ_1∈𝒮_1∩ℱ_r be the surface
given by Solution <ref>. Then there exists a surface Σ_2=(
f_2,S) without boundary or Σ_2=( f_2,Δ) with boundary Γ_2 such that the following holds.
(i) If ∂Σ=Γ_1, say, Γ_2 reduces to the point
p_1, then Σ_2=( f_2,S) is a closed surface and
R(Σ)=R(Σ_Γ_1)+R(Σ_2)+8π≤ R(Σ_Γ_1
),
with equality holding if and only if CV_f_2⊂ E_q.
(ii) If Γ_2 is not a point, then Σ_2=( f_2
,Δ) ∈ℱ and
R(Σ)+4π=[ R( Σ_Γ_1) +4π]
+[ R(Σ_2)+4π] .
(iii) CV_f_2⊂ E_q iff Σ∈ℱ_r.
In the proof the notations K_j,T_j,l_j,J are defined in Solution
<ref>. For all j∈ J, we sew the surface Σ and the
surface K_j^c=S\ K_j^∘ along C_j to obtain a
surface G_1^' such that
∂ G_1^'=p_1p_2p_3⋯ p_q^'p_1
+Γ_2.
By Lemma <ref> (regarding K_j, C_j and p_jp_j+1
as T,γ and γ^' in the lemma, and applying the lemma #J
times) we have
R(Σ)=R(G_1^')+∑_j∈ J[ R(K_j)-4π#p_jp_j+1^∘∩ E_q] ,
and Σ∈ℱ_r iff G_1^'∈ℱ_r.
We first consider the case q^'=2. Then we have by (<ref>)
∂ G_1^'=p_1p_2p_1+Γ_2.
Since ∂ T_2=p_1p_2+p_2p_1, we have by
Lemma <ref>
R(T_2)=4π#p_1p_2^∘∩ E_q.
If Γ_2 is a point, then we can sew G_1^' along
p_1p_2 to obtain a surface Σ_2=( f_2
,S) which is closed, say, a complete covering of S onto itself. By
Lemma <ref> (i)
R(G_1^')=R(Σ_2)+4π#[ p_1p_2∩
E_q] =R(Σ_2)+4π#[ p_1p_2^∘∩
E_q] +8π,
and G_1^'∈ℱ_r iff CV_f_2⊂ E_q. Then by
(<ref>) we have
R(G_1^')=R(Σ_2)+R(T_2)+8π,
and then by Lemma <ref>, (<ref>) and (<ref>) we have
R(Σ) =R(G_1^')+∑_j∈ J[ R(K_j)-4π#p_jp_j+1^∘∩ E_q]
=R(T_2)+R(Σ_2)+8π+∑_j∈ J[ R(K_j)-4π#p_jp_j+1^∘∩ E_q]
=R(Σ_Γ)+R(Σ_2)+8π≤ R(Σ_Γ),
equality holding iff CV_f_2⊂ E_q, and so (i) holds when
q^'=2. Note that the three relations CV_f_2⊂ E_q,
G_1^'∈ℱ_rand Σ∈ℱ_r are equivalent.
If Γ_2 is not a point, then by Lemma <ref> (ii), for the surface
Σ_2=( f_2,Δ) obtained from
G_1^' by sewing it along p_1p_2, we
have
R(G_1^')=R(Σ_2)+4π#p_1p_2∩ E_q
-4π=R(Σ_2)+4π#p_1p_2^∘∩ E_q+4π,
and G_1^'∈ℱ_r iff Σ_2∈ℱ_r.
Thus by (<ref>) (<ref>), and (<ref>) we have
R(Σ)=R(Σ_Γ)+R(Σ_2)+4π,
which implies (<ref>). On the other hand, the three relations Σ
_2∈ℱ_r, G_1^'∈ℱ_r and Σ∈
ℱ_r are equivalent again. We have proved the lemma when
From now on we assume q^'≥3. We will construct a sequence
G_1^',G_2^',…,G_q^'-1^' in
ℱ such that,
∂ G_j^'=l_j+1+p_j+1p_j+2… p_q^'
p_1+Γ_2,j=1,…,q^'-1,
and
R(G_j^')=R(G_j+1^')+R(T_j+1)-4π#( l_j+2^∘∩ E_q) ,j=1,…,q^'-2.
By (<ref>), G_1^' already satisfies (<ref>). Assume
G_1^',…,G_k^', satisfying (<ref>) for
j=1,…,k, and (<ref>) for j=1,…,k-1, are already obtained for
some k with 1≤ k≤ q^'-2.
When ∂ T_k+1=l_k+1+p_k+1p_k+2-l_k+2 is a Jordan
curve on S, we sew G_k^' and S\
T_k+1^∘ along l_k+1+p_k+1p_k+2 to obtain a surface
G_k+1^'so that (<ref>) holds for j=k+1. Then by Lemma
<ref>
R(G_k^')=R(G_k+1^')+R(T_k+1)-4π#E_q∩
l_k+2^∘,
say, (<ref>) holds for j=k, and moreover, G_k^'∈ℱ_r iff G_k+1^'∈ℱ_r.
Consider the case that l_k+2=l_k+1∪p_k+1p_k+2, say
l_k+2=l_k+1+p_k+1p_k+2.
In this case we just put G_k+1^'=G_k^'. Then (<ref>)
holds for j=k+1, by the equality
l_k+1+p_k+1p_k+2… p_q^'p_1=l_k+2
+p_k+2… p_q^'p_1.
(<ref>) implies
∂ T_k+1=l_k+1+p_k+1p_k+2-l_k+2=l_k+2-l_k+2,
and thus we have by Lemma <ref> R(T_k+1)=4π#l_k+2^∘∩
E_q, say, (<ref>) holds for j=k as well.
Consider the case
l_k+1=l_k+2+p_k+2p_k+1.
Then
∂ T_k+1=l_k+1+p_k+1p_k+2-l_k+2=l_k+1-l_k+1,
and
∂ G_k^' =l_k+1+p_k+1p_k+2p_k+3…
p_q^'p_1+Γ_2
=l_k+2+p_k+2p_k+1+p_k+1p_k+2p_k+3…
p_q^'p_1+Γ_2
=l_k+2+p_k+2p_k+1+p_k+1p_k+2+p_k+2p_k+3… p_q^'p_1+Γ_2.
Thus we can sew G_k^' itself along p_k+2p_k+1 to obtain G_k+1^' satisfying (<ref>) for
j=k+1. By Lemma <ref> (ii)
R(G_k^')=R(G_k+1^')+4π#p_k+2p_k+1∩
E_q-4π,
and moreover, G_k^'∈ℱ_r iff G_k+1^'
∈ℱ_r. On the other hand
#p_k+2p_k+1∩ E_q=#l_k+1^∘∩
E_q-#l_k+2^∘∩ E_q+1.
Hence we have
R(G_k^')=R(G_k+1^')+4π#l_k+1^∘∩
E_q-4π#l_k+2^∘∩ E_q.
By (<ref>) and Lemma <ref>
R(T_k+1)=4π#l_k+1^∘∩ E_q.
Hence by (<ref>) we have (<ref>) for j=k. We have proved the
existence of the surfaces G_j^',j=1,…,q^'-1, in any
case, and moreover, G_j^'∈ℱ_r iff G_j+1^'∈ℱ_r for j=1,…,q-1.
By (<ref>), we have
∂ G_q^'-1^'=l_q^'+p_q^'p_1
+Γ_2=l_q^'-l_q^'+Γ_2,
and, by (<ref>) and (<ref>), we have
R(Σ)=R(G_q^'-1^')+∑_j=2^q^'-1[
R(T_j)-4π#l_j+1^∘]
+∑_j∈ J[ R(K_j)-4π#p_jp_j+1^∘∩ E_q] ,
which, together with (<ref>) for q^'=3, or (<ref>) for
q^'>3, implies that
R(Σ)=R(G_q^'-1^')+R(Σ_Γ)-4π#l_q^'
^∘∩ E_q.
Note that the term 4π#l_q^'^∘∩ E_q=4π#p_1p_q^'^∘∩ E_q in (<ref>) never appeared in
(<ref>).
Assume Γ_2 is a point. Then by (<ref>) we can sew the
surface G_q^'-1^' along l_q^'=p_1p_q^' so that G_q^'-1^' becomes a closed
surface Σ_2=( f_2,S) . Then by Lemma <ref> (i) we
have
R( G_q^'-1^') =R(Σ_2)+4π#l_q^'
∩ E_q=R(Σ_2)+4π#l_q^'^∘∩ E_q+8π,
and thus by (<ref>)
R(Σ)=R(Σ_Γ_1)+R(Σ_2)+8π.
By Lemma <ref> we have R(Σ_2)≤-8π, with equality holding if
and only if CV_f_2⊂ E_q. On the other hand it is clear that
Σ∈ℱ_r⇔ G_1^'∈ℱ
_r⇔ G_2^'∈ℱ_r⇔…⇔ G_q^'-1^'∈ℱ_r
⇔ CV_f_2⊂ E_q,
(i) is proved completely.
Assume that Γ_2is not a point. Then by (<ref>) we can
sew G_q^'-1^' itself along l_q^'
=p_1p_q^' to obtain a surface Σ_2=(
f_2,Δ) ∈ℱ such that ∂Σ
_2=Γ_2. By Lemma <ref> (ii) we have
R(G_q^'-1^')=R(Σ_2)+4π#l_q^'∩ E_q
-4π=R(Σ_2)+4π#l_q^'^∘∩ E_q+4π,
which with (<ref>) implies
R(Σ)=R(Σ_Γ)+R(Σ_2)+4π.
In consequence we have (<ref>). On the other hand, (<ref>) also holds
and so (ii) is proved completely.
For simplicity, we introduce the following condition:
A curve Γ=( f,∂Δ) is said to satisfy the
( p_1,…,p_m)-Condition if
Γ=( f,∂Δ) has a partition
Γ=C_1( p_1,p_2) +⋯+C_m(
p_m,p_1)
and for each j≤ m, C_j is an SCC arc and the endpoints p_j and
p_j+1 are distinct and contained in E_q, and moreover d(
p_j,p_j+1) <π.
If p_1,…,p_m are pairwise distinct, then Γ satisfies
Definition <ref> (1)–(3) with partition (<ref>). In general it
is clear that the following holds.
Assume Σ∈ℱ such that ∂Σ satisfies
( p_1,…,p_m)-Condition with partition (<ref>).
Then ∂Σ has a partition ∂Σ=Γ_1+Γ_2
satisfying Lemma <ref>, say,
Γ_1=C_j( p_j,p_j+1) +C_j+1( p_j+1
,p_j+2) +…+C_j+k( p_j+k,p_j+k+1)
is a closed arc of ∂Σ satisfying Definition <ref>
(1)–(3) and Γ_2 is either a point when p_1,…,p_m are
pairwise distinct, with j=1 and j+k=m, or
Γ_2 =C_1( p_1,p_2) +C_2( p_2
,p_3) +…+C_j-1( p_j-1,p_j)
+C_j+k+1( p_j+k+1,p_j+k+2) +…+C_m(
p_m,p_1)
which satisfies ( p_1,p_2,…,p_j,p_j+k+1,p_j+k+2
,…,p_m)-Condition.
Let Σ be a surface in ℱ such that
∂Σ satisfying ( p_1,…,p_m)-Condition
with partition (<ref>). Then there exist surfaces Σ_j
∈𝒮_1∩ℱ_r with partitions
∂Σ_j=C_j1( p_j1,p_j2) +⋯+C_jq_j
^'( p_jq_j^',p_j1)
satisfying Definition <ref> (1)–(4) for j=1,…,k, k≥1,
such that
L(∂Σ)=∑_j=1^kL( ∂Σ_j) ,
and
R(Σ)+4π≤∑_j=1^k( R(Σ_j) +4π),
with equality holding if and only if Σ∈ℱ_r. Moreover,
each Σ_j is the solution of ∂Σ_j in Solution
<ref> with respect to ∂Σ_j, say, Σ_j
=Σ_∂Σ_j,
q_1^'+q_2^'+…+q_k^'=q^',
{C_jl:j=1,…,k,l=1,…,q_j^'}={C_j:j=1,…,q^'},
and
max_j=1,…,kR(Σ_j)+4π/L(∂Σ_j)≥R(Σ)+4π/L(∂Σ).
All the conclusions can be obtained by repeating the previous lemma and Claim
<ref> several times, except (<ref>). (<ref>) follows from
(<ref>), (<ref>) and Lemma <ref>.
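For completeness, the elementary estimate behind the last step (presumably the content of the cited Lemma <ref>; we spell it out here only for the reader's convenience) is the following: for positive numbers b_1,…,b_k and arbitrary real a_1,…,a_k,
a_1+⋯+a_k=∑_j=1^kb_j( a_j/b_j) ≤( max_1≤ j≤ ka_j/b_j) ∑_j=1^kb_j,
and thus max_1≤ j≤ ka_j/b_j≥( a_1+⋯+a_k) /( b_1+⋯+b_k) . Taking a_j=R(Σ_j)+4π and b_j=L(∂Σ_j) and using (<ref>) and (<ref>) then gives (<ref>).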
(i) There exists a surface Σ∈ℱ_r which is
a 4π-extremal surface of 𝒮_1, say,
H_𝒮_1=sup_Σ^'∈𝒮_1R(Σ^')+4π/L(∂Σ^')=R(Σ)+4π/L(∂Σ).
(ii) For each surface Σ in 𝒮_1 satisfying (<ref>),
there exists a surface Σ^'in 𝒮_0∩ℱ_r such that
R(Σ^')+4π/L(∂Σ^')≥R(Σ)+4π/L(∂Σ)=H_𝒮_1.
(iii)
H_𝒮_1=sup_Σ∈𝒮_1R(Σ)+4π/L(∂Σ)=sup_Σ∈𝒮_0∩ℱ_r
R(Σ)+4π/L(∂Σ)=sup_Σ∈𝒮_0
R(Σ)+4π/L(∂Σ)=H_𝒮_0.
(iv) For each simplest 4π-extremal surface Σ of 𝒮_0,
its boundary ∂Σ consists of Q(Σ) strictly convex SCC arcs
(see Definitions <ref> and <ref>).
(i) Let
H_𝒮_1=sup_Σ^'∈𝒮_1R(Σ^')+4π/L(∂Σ^').
Then there exists a sequence Σ_n in 𝒮_1 such that
lim_n→∞R(Σ_n)+4π/L(∂Σ_n
)=H_𝒮_1.
By Definition of 𝒮_1, we may assume Γ_n=∂Σ_n has the partition
Γ_n=∂Σ_n=C_n1( p_1,p_2) +C_n2(
p_2,p_3) +…+C_nq^'( p_q^',p_1)
such that p_1,…,p_q^' are pairwise distinct, the endpoint
set {p_j}_j=1^q^' of all C_nj is independent of n, and
∂Σ_n with this partition satisfies Definition <ref>
(1)–(4). Thus we may assume C_nj( p_j,p_j+1) converges
to an SCC arc C_j( p_j,p_j+1) for each j=1,…
,q^'. It is clear that
Γ=C_1( p_1,p_2) +…+C_q^'(
p_q^',p_1)
satisfies Definition <ref> (1)–(3). Then for the surface
Σ_Γ∈𝒮_1∩ℱ_r and its partition
{K_j}_j=1^q^'∪{T_j}_j=2^q^'-1 given by
Solution <ref>, we have (<ref>) for q^'≥4, or (<ref>)
for q^'=2,3.
For the surface Σ_Γ_n and its partition {K_nj
}_j=1^q^'∪{T_j}_j=2^q^'-1 given by Solution
<ref>, we may assume that for each j, either K_nj^∘=∅
for all n, or K_nj^∘≠∅ for all n. Then
J_n=J( Σ_n) ={j:K_nj^∘≠∅
,j∈{1,…,q^'}}=J
is independent of n, and Lemma <ref>, (<ref>) and (<ref>)
imply that (note that T_j is the same for each Σ_n, since
p_1,…,p_q^' are independent of n)
R(Σ_n) ≤ R(Σ_Γ_n) =∑_j=2
^max{2,q^'-1}R( T_j) -4πφ( q^')
+∑_j∈ J[ R(K_nj)-4π#( p_jp_j+1
^∘∩ E_q) ] ,
where φ(2)=φ(3)=0 and φ(q^')=∑_j=2^q^'-2#( l_j+1^∘∩ E_q) for all q^'≥4.
For each j∈ J_n, we may assume K_nj converges to the closed lune
K_j enclosed by C_j+p_j+1p_j, which as a limit may be
just the segment p_jp_j+1 and we have
J_n⊃ J=J(Γ)={j:K_j^∘≠∅,j∈{1,…
,q^'}}.
Then it is clear that A(K_nj)→ A(K_j) and for each
sufficiently large n
n(K_nj)≥n(K_j),
which implies
lim_n→∞R(K_nj)≤ R(K_j).
Therefore by (<ref>) and (<ref>), for the solution Σ_Γ
∈𝒮_1∩ℱ_r to Γ, we have
lim_n→∞R(Σ_n) ≤ R(Σ_Γ)
=∑_j=2^max{2,q^'-1}R( T_j)
-4πφ( q^') +∑_j∈ J[ R(K_j
)-4π#( p_jp_j+1^∘∩ E_q) ] .
On the other hand we have L(∂Σ_n)→ L(∂Σ_Γ)
as n→∞. Thus we have
H_𝒮_1=lim_n→∞R(Σ_n)+4π/L(∂Σ_n)≤R(Σ_Γ)+4π/L(∂Σ_Γ).
Since Σ_Γ∈𝒮_1, we have
H_𝒮_1=R(Σ_Γ)+4π/L(∂Σ_Γ
).
(i) is proved.
(ii) Let Σ be any surface in 𝒮_1∩ℱ_r
satisfying (i). Then ∂Σ has a partition
∂Σ=C_1( p_1,p_2) +…+C_q^'(
p_q^',p_1)
satisfying Definition <ref> (1)–(4). But (5) may fail.
E_q divides each C_j into subarcs each of which contains no point of
E_q in its interior. We will show
The partition (<ref>) has a refinement
∂Σ=C_1^'( p_1^',p_2^')
+⋯+C_q^''^'( p_q^''^',p_1^') ,
such that for each j≤ q^'', C_j^'∘∩
E_q=∅,∂ C_j^'⊂ E_q and the two endpoints of
C_j^' are distinct for each j=1,…,q^'' (note that
q^'' may be larger than q, though q^'≤ q). In
other words, ∂Σ with partition (<ref>) satisfies
(p_1^',p_2^',⋯,p_q^''^'
)-Condition. (It is clear that d( p_j,p_j+1) <π
for all j=1,…,q^'. But this does not imply d(
p_j^',p_j+1^') <π for all j=1,…
,q^''. So Claim <ref> needs a proof.)
We first show that
Each C_j^',j=1,2,…,q^'', is contained in
an open hemisphere on S.
Assume that Claim <ref> fails. Then for some j_0∈{1,…
,q^''}, C_j_0^' is not contained in any open
hemisphere and we may assume j_0=1. Then by Lemma <ref> there
exists a surface Σ^' such that
∂Σ^'=p_1^'𝔞_2^'
+𝔞_2^'𝔞_3^'+⋯
+𝔞_s^'p_2^'+C_2^'
+…+C_q^''^',
L(∂Σ^')<L(∂Σ),R(Σ^')+4π/L(∂Σ^')>R(Σ)+4π/L(∂Σ
)=H_𝒮_1.
We can repeat this method at most q^''-1 times for every edge
C_j^' which is not contained in any open hemisphere on S, to
obtain a surface Σ^'' such that ∂Σ
^'' satisfies the (𝔞_1^',𝔞_2^',…,𝔞_m^')-Condition and
L(∂Σ^'')≤ L(∂Σ^'),R(Σ^'')+4π/L(∂Σ^'')≥R(Σ^')+4π/L(∂Σ^')>R(Σ)+4π/L(∂Σ).
By Corollary <ref>, there exists a surface Σ^'''∈𝒮_1∩ℱ_rsatisfying
H_𝒮_1≥R(Σ^''')+4π/L(∂Σ^''')≥R(Σ^'')+4π/L(∂Σ^'')>R(Σ)+4π/L(∂Σ)=H_𝒮_1.
This contradiction implies Claim <ref>.
By Claim <ref>, we have d( p_j^',p_j+1^')
<π for all j=1,…,q^''. Thus Claim <ref> holds.
By Claim <ref> and Corollary <ref>, there exists a surface
Σ_1∈𝒮_1∩ℱ_r such that
R(Σ_1)+4π/L(∂Σ_1)≥R(Σ)+4π/L(∂Σ)=H_𝒮_1,
and ∂Σ_1 has a partition satisfying Definition <ref>
(1)–(4), and each term of this partition is one of the arc in (<ref>),
and thus we have Σ_1∈𝒮_1^(5). Since Σ_1
∈𝒮_1, (<ref>) in fact implies
R(Σ_1)+4π/L(∂Σ_1)=H_𝒮_1.
Let
∂Σ_1=𝒞_1( 𝔭_1,𝔭
_2) +𝒞_2( 𝔭_2,𝔭_3)
+⋯+𝒞_𝔮^'( 𝔭_𝔮
^',𝔭_1)
be the partition of ∂Σ_1 such that each term is a term in
(<ref>). Then by Claim <ref> Σ_1 with this partition
satisfies Definition <ref> (1)–(6), and thus Σ_1
∈𝒮_1^(6).
(†) Assume that Σ_1∉𝒮_1^(7), say, (<ref>)
does not satisfy Definition <ref> (7). Then there exists a pair
( i,j) with i≠ j, such that the curvature of
𝒞_i is not the same as that of 𝒞_j. By Corollary
<ref> we can change 𝒞_i and 𝒞_j a
little so that Σ_1 becomes another surface Σ_1^' in
𝒮_1 such that R(Σ_1^')>R(Σ_1) and
L(∂Σ_1^')=L(∂Σ_1), which yields the
contradiction H(Σ_1^')>H_𝒮_1. We have proved
that Σ_1∈𝒮_1^(7).
Assume Σ_1∉𝒮_1^(8). Then in the partition
(<ref>), there is an edge 𝒞_j_0 which is a major circular
arc. Then by Lemma <ref>, we may assume that Σ_1 is the surface
Σ_∂Σ_1 given by Solution <ref> and is obtained by
sewing {𝒦_j} and {𝒯_j}, where
𝒦_j and 𝒯_j are defined by the partition
(<ref>) as in Solution <ref> (as K_j and T_j there). Let
Σ_1^' and Σ_2^' be two surfaces in
𝒮_1, both obtained by deforming Σ_1 via pushing
𝒞_j_0 to the right hand side a little, respectively, so that
the endpoints of 𝒞_j_0 remain unchanged, n
(Σ_1^')=n( Σ_2^')
=n( Σ_1) and L(∂Σ_1^')+L(∂Σ_2^')=2L(∂Σ_1). Then we have
A(Σ_1^')+A(Σ_2^')>2A( Σ_1)by Corollary <ref> (B). Thus we have
R(Σ_1^')+4π+R(Σ_2^')+4π/L(∂Σ_1^')+L( ∂Σ_2^')
>2( R(Σ_1)+4π) /2L(∂Σ_1).
which, with Lemma <ref> and (<ref>), implies
max{R(Σ_1^')+4π/L(∂Σ_1^'),R(Σ_2^')+4π/L( ∂Σ_2^') } >R(Σ_1)+4π/L(∂Σ_1
)=H_𝒮_1,
contradicting definition of H_𝒮_1 again. We have proved
Σ_1∈𝒮_1^(8).
Up to now, we have proved Σ_1 satisfies (1)–(8) of Definition
<ref>, say Σ_1∈𝒮_0. Then we have by
(<ref>)
H_𝒮_1≥ H_𝒮_0≥R(Σ_1)+4π/L(∂Σ_1)=H_𝒮_1.
and (ii) is proved, since Σ_1∈ℱ_r. (iii) follows from
(ii) directly.
To prove (iv), let Σ∈𝒮_0 be a simplest 4π-extremal
surface of 𝒮_0 with the corresponding partition (<ref>)
satisfying Definition <ref> (1)–(8). If (iv) fails for Σ,
then all arcs C_j in the partition (<ref>) are straight, say,
C_j=p_jp_j+1,j=1,…,q^'. We may assume Σ
is the solution of Solution <ref> from the partition (<ref>). Then
Σ is obtained by sewing P=P( p_1,p_2
,…,p_q^') and {K_j}_j=1^q^' given by
Solution <ref>, each K_j is just the line segment p_jp_j+1 and we have, by Definition <ref> (5),
p_jp_j+1^∘∩ E_q=C_j^∘∩ E_q=∅.
Then by (<ref>)
1/H_𝒮_1=L(∂Σ)/R(P)+4π=2∑_j=1^q^'L(p_jp_j+1)/2R(P)+8π.
It is clear that R(P)+4π>0. Let θ_1,θ_2,…
,θ_q^' be sufficiently small positive angles such that the
closed lunes K_j^'=𝔇^'(p_jp_j+1,θ_j) (see Definition <ref>) have the same
area, say,
A(K_j^')=A(K_1^'),j=1,2,…,q^',
and let C_j^' be the circular boundary of 𝔇^'( p_jp_j+1,θ_j) from p_j to
p_j+1. Then all θ_j are determined by θ_1 and we can sew
P and the lunes K_j^' along ∂ P to obtain a
surface Σ_θ_1 such that
∂Σ_θ_1=C_1^'( p_1,p_2)
+C_2^'( p_2,p_3) +⋯+C_q^'^'( p_q^',p_1) .
By (<ref>) when θ_1 is small enough we have that
R(K_j^')=( q-2) A(K_j^')
and that
R(Σ_θ_1)=R(P)+∑_j=1^q^'R(K_j^'
)=R(P)+∑_j=1^q^'( q-2) A(K_j^').
Then we have by (<ref>)
R(Σ_θ_1)+4π =R(P)+4π+q^'( q-2)
A(K_1^')
=q^'( q-2) [ R(P)+4π/q^'(
q-2) +A( K_1^') ] ,
and then by (<ref>)
L( ∂Σ_θ_1) /R(Σ_θ_1
)+4π =∑_j=1^q^'L( C_j^')
/q^'( q-2) [ R(P)+4π/q^'(
q-2) +A( K_1^') ]
=1/q^'( q-2) ∑_j=1^q^'
2L( C_j^') /2R(P)+8π/q^'(
q-2) +2A( K_1^')
=1/q^'( q-2) ∑_j=1^q^'
2L( C_j^') /2R(P)+8π/q^'(
q-2) +2A( K_j^') ,
which, with 2L( C_j^') =L( ∂𝔇( p_jp_j+1,θ_j,θ_j)
) and 2A( K_j^') =A( 𝔇(
p_jp_j+1,θ_j,θ_j) ) , implies
L( ∂Σ_θ_1) /R(Σ_θ_1
)+4π=1/q^'( q-2) ∑_j=1^q^'
L( ∂𝔇( p_jp_j+1,θ
_j,θ_j) ) /2R(P)+8π/( q-2)
q^'+A( 𝔇( p_jp_j+1,θ
_j,θ_j) ) .
Therefore by Lemma <ref> (i) we have for small enough
θ_j=θ_j( θ_1) ,
L( ∂Σ_θ_1) /R(Σ_θ_1
)+4π <1/q^'( q-2) ∑_j=1^q^'
L( ∂𝔇( p_jp_j+1
,0,0) ) /[ 2R(P)+8π/( q-2)
q^'+A( 𝔇( p_jp_j+1,0,0)
) ]
=1/( q-2) q^'∑_j=1^q^'
2L( p_jp_j+1) /[ 2R(P)+8π/( q-2) q^'+0]
=∑_j=1^q^'L( p_jp_j+1)
/R(P)+4π=L(∂Σ)/R(Σ)+4π.
Then by (<ref>) we have
H_𝒮_1=R(Σ)+4π/L(∂Σ)<R(Σ_θ_1)+4π/L( ∂Σ_θ_1) .
But this contradicts the definition of H_𝒮_1, since
Σ_θ_1∈𝒮_1. Thus (iv) holds.
Using the argument of the paragraph (†) in the above proof, we in fact have
proved Theorem <ref> (iii):
Assume F_1 and F_2 are any two 4π-extremal surfaces in
𝒮_0. Then
k( F_1,E_q) =k( F_2,E_q) .
Since H_𝒮_0=H_𝒮_1, both F_1 and F_2
satisfy (<ref>) and each ∂ F_j has a partition ∂
F_j=𝒞_j1+𝒞_j2+⋯+𝒞_jk_j
satisfying (1)–(8) of Definition <ref>. If for some i_1 and
i_2, 𝒞_1i_1 and 𝒞_2i_2 have different
curvature, then applying the discussion in the paragraph (†) for
Σ_1∉𝒮_1^(7), we can construct two surfaces
F_1^' and F_2^' contained in 𝒮_1 by
deforming 𝒞_1i_1 and 𝒞_2i_2, as we do for
𝒞_i and 𝒞_j there, so that R(F_1^')+R(F_2^')>R(F_1)+R(F_2) and L(∂ F_1^')+L(∂ F_2^')=L(∂ F_1)+L(∂ F_2). Then we
can use Lemma <ref> to show that R(F_j^')+4π/L(∂ F_j^')>H_𝒮_1 for j=1 or 2. This is a contradiction.
For any closed convex disk T on S,
H_𝒮_0=sup_Σ∈𝒮_0R(Σ)+4π/L(∂Σ)>H(T).
Since T is convex, it is contained in a hemisphere S_1 on S. It is
clear that there exists a disk T_1 contained in S\ E_q whose
boundary is the circumcircle of a regular triangle of edge length
δ_E_q. Then we may assume that two points p_1 and p_2 of
E_q are contained in ∂ T_1 with d( p_1,p_2)
=δ_E_q, and then we have T_1∈𝒮_1. Thus by Lemma
<ref> (iii) we have
H_𝒮_0=H_𝒮_1=sup_Σ∈𝒮_1
R(Σ)+4π/L(∂Σ)≥R(T_1)+4π/L(∂
T_1)=( q-2) A(T_1)+4π/L(∂ T_1).
Then in the case L(∂ T)≤ L(∂ T_1), by Lemma <ref> we
have
H_𝒮_0≥( q-2) A(T_1)+4π/L(∂
T_1)>( q-2) A(T_1)/L(∂ T_1)=H(T_1)≥
H(T).
In the case L(∂ T_1)≤ L(∂ T)≤2π, we may rotate S
to move T to another congruent disk T^' such that n( T^') ≤n(T) and ∂ T^' contains at least two points of E_q with distance <π, and thus
H(T)≤ H(T^')=R(T^')/L(∂ T^')
<R(T^')+4π/L(∂ T^')≤ H_𝒮_1
=H_𝒮_0.
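A simple numerical illustration of the gap in this corollary, in the special case where T is a metric disk (a spherical cap): on the unit sphere a cap T_r of geodesic radius r has A(T_r)=2π( 1-cos r) and L(∂ T_r)=2π sin r (standard formulas for the unit sphere, not taken from the text). If r<π/2, so that T_r lies in an open hemisphere, and if moreover n( T_r) =0, then
H(T_r)=( q-2) A(T_r)/L(∂ T_r)=( q-2) tan( r/2) <q-2,
whereas the quantity ( q-2) A(T_r)+4π/L(∂ T_r), the type of lower bound for H_𝒮_0 used in the proof, satisfies ( q-2) A(T_r)+4π/L(∂ T_r)≥4π/L(∂ T_r)=2/sin r and so becomes arbitrarily large as r→0; for instance r=0.05 gives H(T_r)≈0.025( q-2) while 2/sin r≈40.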
We are now in a position to complete the proof of our second main theorem.
The conclusion (iii) is obtained by Corollary
<ref> directly.
Let Σ be any surface in ℱ. We first show
H(Σ)≤ H_𝒮_0.
By Theorem <ref> and Lemma <ref>, <ref> holds when
L≤2δ_E_q. Thus to prove (<ref>) we may assume
L(∂Σ)>2δ_E_q.
Let L be any positive number in ℒ with L≥ L(∂Σ). Then by Theorem <ref> (A) there exists a precise extremal
surface Σ_L_1 of ℱ( L) such that
L(∂Σ_L_1)=L_1. Then we have H(Σ)≤ H(Σ
_L_1) and thus by definition of H
H(Σ)≤ H(Σ_L_1)=R(Σ_L_1)/L(∂Σ_L_1)<R(Σ_L_1)+4π/L(∂Σ_L_1).
By Theorem <ref> (B), for some positive integer n_0, ∂Σ_L_1 has an ℱ( L,n_0)-partition
∂Σ_L_1=C_1( p_1,p_2) +C_2(
p_2,p_3) +⋯+C_n_0( p_n_0,p_1)
satisfying (B1)–(B4) of Theorem <ref>, which implies that
Σ_L_1 with partition (<ref>) satisfies
( p_1,p_2,⋯,p_n_0)-Condition when n_0>1.
Consider the case that ∂Σ_L_1 contains at most one point of
E_q. Then C_1 is an SCC circle and n_0=1, and thus for the closed
disk T enclosed by C_1 we have by Lemma <ref> that
H(Σ_L_1)=R(Σ_L_1)/L(∂Σ_L_1)≤
H(T),
and thus by Lemma <ref> and (<ref>) we have
H(Σ)≤ H(Σ_L_1)≤ H(T)<H_𝒮_0.
Therefore (<ref>) holds when n_0=1.
Assume n_0≥2. Then Claim <ref> and Corollary <ref> imply
that there exists a surface Σ_1∈𝒮_1 such that
R(Σ_L_1)+4π/L(∂Σ_L_1)≤R(Σ
_1)+4π/L(∂Σ_1)≤ H_𝒮_1,
which, with (<ref>) and Lemma <ref>, implies
H( Σ) <R(Σ_L_1)+4π/L(∂Σ
_L_1)≤R(Σ_1)+4π/L(∂Σ_1)≤
H_𝒮_1=H_𝒮_0,
and (<ref>) follows.
By Lemma <ref>, there exists a surface Σ_0∈𝒮_0
such that
R(Σ_0)+4π/L(∂Σ_0)=H_𝒮_0.
Then ∂Σ_0 has a partition
∂Σ_0=C_1( p_1,p_2) +C_2( p_2
,p_3) +⋯+C_q^'( p_q^',p_1)
satisfying Definition <ref> (1)–(8). Then by Corollary <ref>
there exists a sequence Σ_n in ℱ such that
lim_n→∞H(Σ_n)=R(Σ_0)+4π/L(∂Σ_0)=H_𝒮_0.
Thus by (<ref>) we have proved H_0=sup_Σ∈𝐅
H(Σ)=sup_Σ∈ℱH(Σ)=H_𝒮_0, and
Theorem <ref> (i) is proved. Theorem <ref> (ii) follows from Lemma
<ref> (iv) directly.
Let 𝔖 be the space of all 4π-extremal surfaces of 𝒮_0. By Theorem <ref> (i),
𝔖≠∅ and Q( Σ) ≤ q for all
Σ∈𝔖. Then there exists F∈𝔖 such that
q^'=Q(F)=min_Σ∈𝔖Q(Σ). Let 𝔖
_q^' be the subspace of 𝔖 such that Q(Σ
)=q^' for all Σ∈𝔖_q^'. Then for every
surface Σ∈𝔖_q^',
q^'δ_E_q≤ L(∂Σ)<2π q^',
and then L=inf_Σ∈𝔖_q^'L(∂Σ)≥
q^'δ_E_q≥2δ_E_q. Then there exists a sequence
Σ_n in 𝔖_q^' such that
L(∂Σ_n)=L,
and Σ_n has the partition
∂Σ_n=C_n1( p_1,p_2) +C_n2(
p_2,p_3) +⋯+C_nq^'( p_q^'
,p_1)
satisfying Definition <ref> (1)–(8) and Theorem <ref> (ii),
where p_1,…,p_q^' are independent of n and pairwise distinct.
If Σ_n∉ℱ_r for some n, then by Lemma <ref>
(i), for the surface Σ_∂Σ_n∈𝒮_1 given by
Solution <ref> from ∂Σ_n and the partition (<ref>),
we have R(Σ_n)<R(Σ_∂Σ_n), which implies
H_𝒮_0=H(Σ_n)<H(Σ_∂Σ_n)≤
H_𝒮_1.
This contradicts H_𝒮_0=H_𝒮_1. Thus Σ
_n∈ℱ_r for all n.
By Corollary <ref>, k( Σ_n,E_q) =k is a constant
for all n=1,2,…, say, all C_nj have the same curvature, and then
C_nj=C_1j for all n and all j≤ q^'. Thus, ∂Σ
Σ_1 ∈𝔖_q^' satisfies Definition
<ref> (1) and (2). Since _maxΣ^'≤2q^'-2 for all Σ^'∈𝔖_q^', there exists
Σ∈𝔖_q^' such that _maxΣ≤_maxΣ^' for all Σ^'∈𝔖
_q^'. Therefore 𝒮^∗≠∅.
We first prove (i). Let Σ∈𝒮^∗ and assume Q(Σ)=2. Then ∂Σ has a
partition ∂Σ=C_1( p_1,p_2) +C_2(
p_2,p_1) satisfying Definition <ref> (8), and C_1
and C_2 are both strictly convex, by Theorem <ref> (ii). Then
∂Σ encloses a closed lens Σ^'=𝔇(p_1p_2,θ_0,θ_0) with θ
_0∈(0,π/2], and by Corollary <ref> we have
R(Σ)≤ R(Σ^')
and thus R(Σ)=R(Σ^') and Σ^'∈𝒮
^∗. Therefore Σ and Σ^' both equal the lens
𝔇(p_1p_2,θ_0,θ_0), by
Corollary <ref>, and we have
H_0=H_𝒮_0=R(Σ)+4π/L(∂Σ)
=R(Σ^')+4π/L(∂Σ^')=R(
𝔇(p_1p_2,θ_0,θ_0)) +4π/L( ∂𝔇( p_1p_2,θ_0
,θ_0) ) .
We will show
d( p_1,p_2) =δ_E_q.
Assume this fails. We will deduce a contradiction.
We first show that θ_0<π/2 when d( p_1,p_2)
>δ_E_q. Assume θ_0=π/2. Then T_p_1,p_2
=𝔇(p_1p_2,π/2,π/2) is a disk. Let
δ_0=L(p_1p_2). Then it is clear that for an open
neighborhood I_δ_0^∘=( δ_0-ε,δ
_0+ε) of δ_0 with δ_0-ε
>δ_E_q, there exists a family
𝒯_I_δ_0^∘={𝔇
(p_1p_2,δ,π/2,π/2):δ=d( p_1,p_2,δ) ∈ I_δ_0^∘,}
of convex disks on S such that n( T_p_1,p_2,δ
) =n( T_p_1,p_2) is a constant for
each δ∈ I_δ_0^∘. In fact, ∂ T_p_1,p_2
contains only two points p_1,p_2 in E_q. Then rotating
T_p_1,p_2 a little about p_1 we can obtain the desired family.
Then by Lemma <ref> (ii4) there exists δ_1∈ I_δ_0
^∘ and a disk in the family which can be write as T_p_1
,p_2,δ_1=𝔇(p_1p_2,δ_1
,π/2,π/2) with δ_1=d( p_1,p_2,δ_1) ∈
I_δ_0^∘ such that
H_𝒮_0=R( T_p_1,p_2) +4π/L(
∂ T_p_1,p_2) <R( T_p_1,p_2,δ_1
) +4π/L( ∂ T_p_1,p_2,δ_1) .
Since δ_1>δ_0-ε>δ_E_q, the diameter of
∂ T_p_1,p_2,δ_1 is larger than δ_E_q, and
then we may move the disk T_p_1,p_2,δ_1 congruently to another
closed disk T_δ_1 so that its boundary contains at least two points
of E_qand
n( T_δ_1) ≤n(
T_p_1,p_2,δ) .
Then we have T_δ_1∈𝒮_1 and
R( T_p_1,p_2,δ_1) +4π/L( ∂
T_p_1,p_2,δ_1) ≤R( T_δ_1)
+4π/L( ∂ T_δ_1) ≤ H_𝒮_1,
which with (<ref>) implies H_𝒮_1>H_𝒮_0.
This is a contradiction by Lemma <ref> (iii). We have proved
θ_0<π/2when d( p_1,p_2) >δ_E_q.
Now that we have proved θ_0<π/2, there exists a point
p_2^'∈p_1p_2^∘ near p_2 such that
d( p_1,p_2^') >δ_E_q,
L(∂𝔇( p_1p_2^',θ
_p_2^',θ_p_2^') )=L(𝔇
(p_1p_2,θ_0,θ_0)),
n( 𝔇( p_1p_2^'
,θ_p_2^',θ_p_2^') )
=n( 𝔇( p_1p_2,θ
_0,θ_0) ) ,
and
θ_0<θ_p_2^'<π/2.
Then by Lemma <ref>, putting A_0=4π-4πn(
𝔇(p_1p_2,θ_0,θ_0)) , we have
H_𝒮_0=R( 𝔇(p_1p_2
,θ_0,θ_0)) +4π/L( ∂𝔇(
p_1p_2,θ_0,θ_0) ) <R(
𝔇(p_1p_2^',θ_p_2^'
,θ_p_2^')) +4π/L( ∂𝔇(
p_1p_2^',θ_p_2^',θ_p_2^'
) ) .
Now we let F be the closed domain 𝔇(p_1p_2^',θ_p_2^',θ_p_2^'). Then
F has diameter larger than δ_E_q and ∂ F∩
E_q={p_1}, but we can rotate F on S to another domain F^'
so that H(F^')≥ H(F) and ∂ F^' contains two points
q_1 and q_2 of E_q. Thus ∂ F^' satisfies (2) of
Lemma <ref> and thus we have by (<ref>) that
H_0≥R(F^')+4π/L(∂ F^')≥R(F)+4π/L(∂ F)=R( 𝔇(p_1p_2^'
,θ_p_2^',θ_p_2^')) +4π/L(
∂𝔇( p_1p_2^',θ
_p_2^',θ_p_2^') ) >H_𝒮
_0.
This contradicts Theorem <ref> again, and thus we have (<ref>) and
Corollary <ref> (i) is proved.
Assume q=3 and E_q=E_3 is contained in a great circle on S. We will
show q^'=Q(Σ)=2. It suffices to show a contradiction when we
assume q^'=3. When q^'=3, ∂Σ has a partition
∂Σ=C_1( p_1,p_2) +C_2( p_2
,p_3) +C_3( p_3,p_1) ,
satisfying Definition <ref> (1)–(8) and Definition <ref>
(1)–(3), and by Lemma <ref> we have
R(Σ)≤ R(Σ_∂Σ),∂Σ=∂Σ_∂Σ
where Σ_∂Σ∈𝒮_1 is given by Solution
<ref>, say, Σ is the surface obtained by gluing the closed Jordan
domain T_2,K_1,K_2,K_3 in Solution <ref>. But we have proved
H_𝒮_1=H_𝒮_0=H_0, and then we have
R( Σ) =R(Σ_∂Σ).
By Definition <ref> (8) and by Theorem <ref> (ii), each
C_j is not a major circular arc and is strictly convex. Thus for each
K_j we may write K_j=𝔇^'( p_jp_j+1,θ_j,θ_j) with θ_j∈(0,π/2].
If p_1p_2p_3p_1 is a Jordan curve, then n( 𝔇( p_jp_j+1,θ_j,θ
_j) ) =0 for j=1,2,3, A(T_2)=2π, since E_q is on
a great circle, and by (<ref>), we have
R(Σ)+4π =4π+R(Σ_∂Σ)=4π+A(T_2)+∑
_j=1^3[ R(K_j)-4π#p_jp_j+1^∘∩
E_q]
=6π+∑_j=1^3R(K_j)=1/2∑_j=1^3[ R(
𝔇( p_jp_j+1,θ_j,θ_j)
) +4π] ,
and
H_𝒮_0=R(Σ)+4π/L(∂Σ)=1/2∑_j=1^3R( 𝔇( p_jp_j+1
,θ_j,θ_j) ) +4π/1/2∑ L(
∂𝔇( p_jp_j+1,θ_j,θ
_j) ) .
We may assume R( 𝔇( p_1p_2
,θ_1,θ_1) ) +4π/L( ∂𝔇( p_1p_2,θ_1,θ_1)
) ≥R( 𝔇( p_jp_j+1
,θ_j,θ_j) ) +4π/L( ∂𝔇( p_jp_j+1,θ_j,θ_j)
) ,j=2,3. It is clear that 𝔇(
p_1p_2,θ_1,θ_1) ∈𝒮_0, and
then we have
H_𝒮_0=R(Σ)+4π/L(∂Σ)≤R(
𝔇( p_1p_2,θ_1,θ_1)
) +4π/L( ∂𝔇( p_1p_2
,θ_1,θ_1) ) ≤ H_𝒮_0.
Then 𝔇( p_1p_2,θ_1
,θ_1) ∈𝒮^∗ and L( ∂𝔇( p_1p_2,θ_1,θ_1)
) <L( ∂Σ) . This is a contradiction since we
assumed q^'=3and Σ∈𝒮^∗.
If p_1p_2p_3p_1 is not a Jordan curve, then
A(T_2)=4π and we may assume p_2∈p_1p_3^∘.
Then we have n( 𝔇( p_jp_j+1
,θ_j,θ_j) ) =0 for j=1,2, and n( 𝔇( p_3p_1,θ_3,θ
_3) ) =1, in other words we have p_jp_j+1
^∘∩ E_q=∅ for j=1,2 and p_1p_3^∘∩ E_q=1. Therefore we have R( 𝔇( p_jp_j+1,θ_j,θ_j) ) =2R(K_j) for j=1,2
and R( 𝔇( p_3p_1,θ_3,θ
_3) ) =2R(K_3)-4π, and so, by (<ref>), we have
R(Σ)+4π =4π+R(Σ_∂Σ)=4π+A( T_2)
+∑_j=1^3[ R(K_j)-4π#p_jp_j+1^∘∩
E_q]
=4π+∑_j=1^3R(K_j)=4π+2π+1/2∑_j=1^3[
R( 𝔇( p_jp_j+1,θ_j,θ
_j) ) ]
=1/2∑_j=1^3[ R( 𝔇(
p_jp_j+1,θ_j,θ_j) ) +4π]
and
H_𝒮_0=R(Σ)+4π/L(∂Σ)=1/2∑_j=1^3R( 𝔇( p_jp_j+1
,θ_j,θ_j) ) +4π/1/2∑ L(
∂𝔇( p_jp_j+1,θ_j,θ
_j) ) .
We may assume R( 𝔇( p_j_0
p_j_0+1,θ_j_0,θ_j_0) ) +4π/L(
∂𝔇( p_j_0p_j_0+1,θ_j_0
,θ_j_0) ) ≥R( 𝔇(
p_jp_j+1,θ_j,θ_j) ) +4π/L(
∂𝔇( p_jp_j+1,θ_j,θ
_j) ) ,j=1,2,3. It is clear that T=𝔇
( p_j_0p_j_0+1,θ_j_0,θ_j_0)
∈𝒮_0, and then by Lemma <ref> we have
H_𝒮_0=R(Σ)+4π/L(∂Σ)≤R(
𝔇( p_j_0p_j_0+1,θ_j_0
,θ_j_0) ) +4π/L( ∂𝔇(
p_j_0p_j_0+1,θ_j_0,θ_j_0) )
≤ H_𝒮_0.
Then we have R(T)+4π/L(∂ T)=H_𝒮_0=H_0 and
L(∂ T)<L(∂Σ). Thus T∈𝒮_0 is a 4π-extremal surface in 𝒮_0 and so q^'=Q(Σ)≤
Q(T)=2. This contradicts the assumption q^'=Q(Σ)=3.
Then we in fact proved q^'=Q(Σ)=2 when q=3 and E_q lies
on a great circle, and so (ii) is proved.
Now we assume that q=q^'=Q(Σ)=3 and p_1,p_2,p_3 are not
on a great circle. Then the triangle p_1p_2p_3p_1 is in
an open hemisphere on S. And it is clear that ∂Σ has a
partition ∂Σ=C_1( p_1,p_2) +C_2(
p_2,p_3) +C_3( p_3,p_1) satisfying Definition
<ref> (1)–(8) and Definition <ref> (1)–(3). By Theorem
<ref> (ii) each C_j is strictly convex. By Definition of
𝒮^∗ and Corollary <ref>, there is no other surface
Σ^' in 𝒮_0 such that ∂Σ^'=∂Σ and _maxΣ^'<_maxΣ. Thus
Σ=Σ_∂Σ, which is given in Solution <ref>.
Therefore we have proved Corollary <ref> (iii).
To prove (iv) of the corollary, we will show
n( Σ) =#f^-1(E_3)∩Δ=0,
whether q^'=2 or 3.
If q^'=2, then, by Corollary <ref> (i), we may assume
Σ=𝔇(p_1p_2,θ_0,θ_0)
for some θ_0∈(0,π/2], with d( p_1,p_2)
=δ_E_3, and then (<ref>) holds and ∂Σ has the
partition
∂Σ=C_1( p_1,p_2) +C_2( p_2
,p_1) =∂𝔇(p_1p_2,θ_0
,θ_0),
such that C_j are symmetric strictly convex circular arcs which are not major.
If E_3 is contained in a great circle on S, then q^'=2 by
Corollary <ref> (ii), and thus we may again choose Σ=𝔇(p_1p_2,θ_0,θ_0) for some
θ_0∈(0,π/2] satisfying (<ref>) and (<ref>).
Assume q^'=3=q. Then {p_1,p_2,p_3} are not on a great
circle and
∂Σ_0=C_1( p_1,p_2) +C_2( p_2
,p_3) +C_3( p_3,p_1) ,
with C_j^∘∩ E_q=∅. Then p_1p_2p_3
p_1 is a triangle which is either strictly convex at all vertices or
concave at all vertices.
If p_1p_2p_3p_1 is convex, then (<ref>) holds clearly.
Assume p_1p_2p_3p_1 is concave and (<ref>) fails.
Then n( Σ) ≥1. The closed domain T
enclosed by p_1p_3p_2p_1=-p_1p_2p_3p_1
is contained in some open hemisphere on S containing p_1p_3p_2p_1, so we have T∈𝒮_0 and thus we have
the contradiction (note that q-2=1)
H_𝒮_0 =H_0=R(Σ)+4π/L(∂Σ)
=A(Σ)-4πn( Σ) +4π/L(∂Σ)≤A(Σ)/L(∂Σ)
<4π/L(∂ T)≤A(T)+4π/L(∂ T).
It is clear that T∈𝒮_0. Since the three edges of T are not
strictly convex, by Theorem <ref> (ii) we have A(T)+4π/L(∂ T)<H_𝒮_0. Then by (<ref>) we have the
contradiction H_𝒮_0<H_𝒮_0. We have proved
(<ref>), whether p_1p_2p_3p_1 is convex or concave,
and (iv) has been proved.
Continuing the above argument, we can prove Theorem <ref>.
Assume q=3,E_3={p_1,p_2,p_3}
and Σ_0=( f_0,Δ) ∈𝒮^∗( E_3) . Then we may further assume
δ_E_3=d( p_1,p_2) ≤ d( p_1,p_3)
≤ d( p_2,p_3) ,
and then p_1p_3^∘ does not contain p_2. Therefore
∂Σ_0 has the partition (<ref>) (or (<ref>)),
(<ref>) holds, and C_j^∘∩ E_q=∅ for each term
C_j in the partition. Then by Definition of 𝒮_0 and by
Theorem <ref> we have the following:
H_0( E_3) =H_𝒮_0( E_3)
=R(Σ_0)+4π/L(∂Σ_0)≥ h_0(
E_3) .
Now, inspired by the method on page 215 in <cit.>, we construct a sequence
of surfaces {Σ_n}⊂𝐅^∗
=𝐅^∗(E_q), such that H(Σ_n)→ H_0(
E_3) . Let S^' be the surface with interior S\
C_1 and boundary C_1-C_1. Then we can sew Σ_0
and the surface S^' along C_1 to obtain a surface Σ
_0^'=( f_0^',Δ) . It is clear
that A(Σ_0^')=A(Σ_0)+4π,L(∂Σ_0^')=L(∂Σ_0), and f_0^'-1(E_3)∩Δ contains
only one point a with f_0^'( a) =p_3. So we may
assume a=0. Then the line segment p_1p_3 has an
f_0^'-lift α=α( a_1,0) in
Δ such that α∩∂Δ={a_1}, f(a_1
)=p_1 and f( 0) =p_3, and we may assume that
-α=[0,1].
Let Σ_n=( f_n,Δ^+) with
f_n=f_0^'( z^2n) ,z∈Δ^+. Then
we have
n( Σ_n,E_3) =0, L(∂Σ_n)=nL( ∂Σ_0) +2L(p_1p_3
), A( Σ_n) =n( A( Σ
_0) +4π) ,
and so Σ_n∈𝐅^∗( E_3) (note that
q=3) and we have
h_0( E_3) ≥R(Σ_n,E_3)/L(∂Σ
_n)=nA( Σ_0) +4π n/nL(∂Σ
_0)+2L(p_1p_3)→A( Σ_0)
+4π/L(∂Σ_0)=H_0( E_3)
and so we have (<ref>) when q=3, since H_0( E_3)
≥ h_0( E_3) .
(<ref>) may fail when q≥4, and the following is a counterexample for
(<ref>).
Let q≥4, let E_q be the set {p_1,p_2,…,p_q} with
0=p_1<p_2<…<p_q=1,
E_3={p_2,p_3,p_4}
and
δ=d( p_1,p_2) <1/q^3δ_{p_2
,⋯,p_q},
and let D be the small disk with diameter p_1p_2. Then
D∈𝒮_0(E_q) and thus we have
H_0(E_q)=H_𝒮_0(E_q)≥( q-2)
A(D)+4π/L(∂ D)≥4π/2πδ=2/δ.
It is clear that we have 𝐅^∗( E_q)
⊂𝐅^∗( E_3) . Then for each Σ∈𝐅^∗( E_q) , by the result H_0(E_3
)=h_0( E_3) just proved we have
H(Σ)=H(Σ,E_q)=( q-2) A(Σ)/L(∂Σ)≤( q-2) h_0(E_3)=( q-2) H_0
(E_3).
Thus we have
h_0( E_q) =sup_Σ∈𝐅^∗(
E_q) H(Σ,E_q)≤( q-2) H_0(E_3).
By Corollary <ref>, there exists a surface Σ_0∈𝒮
^∗( E_3) such that Σ_0=𝔇
(δ_E_3,θ_0,θ_0) and
H_0(E_3)=R(Σ_0,E_3)+4π/L(∂Σ_0).
On the other hand we have, by q≥4, that
R(Σ_0,E_3)+4π/L(∂Σ_0) ≤(3-2)A(𝔇(δ_E_3,θ_0,θ_0))+4π/2δ_E_3
≤6π/2δ_E_3≤3π/q^3d( p_1
,p_2) =3π/q^3δ<1/qδ.
Therefore we have H_0(E_3)<1/qδ, and thus
h_0(E_q)≤( q-2) H_0(E_3)<q-2/qδ<1/δ<H_0(E_q).
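For a concrete instance of this chain of estimates (the values are chosen only for illustration), take q=4 and δ=10^-3. Then 3π/q^3δ≈147<1/qδ=250, so that H_0(E_3)<250 and hence
h_0(E_q)≤( q-2) H_0(E_3)<500=( q-2) /qδ<1/δ=1000<2/δ=2000≤ H_0(E_q).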
§ NOTATIONS AND TERMINOLOGY
—–a circle C determined by a circular arc c: C is the circle
containing c and oriented by c, p. determine
—–boundary radius, Remark <ref>, bdr
—–CCM: Complete covering mapping, p. CCM
—–BCCM: branched complete covering mapping, p. BCCM
—–CTTD: closed topological triangular domain, Convention <ref>, p.
conv-1
—–closed domain: closure of a domain, AhlforsS
—–convex (path, arc, curve): Definition <ref>, p. in
—–decomposable sequence, Definition <ref>, p.
undec-seqr
—–decomposable surface, Lemma <ref>, p. undec
—–domain: connected open set, AhlforsS
—–extremal surface, precise extremal surfaces, Definition <ref>, p.
HL
—–interior angle of a surface at a boundary point, Definition
<ref>, p. interiorangle
—–OPH: orientation preserving homeomorphism, p. OPH
—–OPCOFO(M): orientation-preserving, continuous, open and finite-to-one
(mapping), p. OPCOFOM
—–SCC arc: simple, convex circular arc, Definition <ref>, p. F
—–B_f,B_f( A) ,B_f^∗,B_f^∗( A)
,p. BBC
—–C_f,C_f( A) ,C_f^∗,C_f^∗( A)
,p. BBC
—–CV_f,CV_f( K)p. BBC
—–𝒞( L,m) ⊃𝒞^∗(
L,m) ⊃ℱ(L,m)⊃ℱ_r(L,m): subspaces of
ℱ( L) , Definition <ref>, p. circu
—–d( ·,·) the spherical distance on the Riemann
sphere S,
—–d_f( ·,·) the distance defined on the surface
( f,Δ) : Definition <ref>, p. df
—–D(p,δ), the disk on S with center p and spherical radius
δ: p. cov-1
—–𝔇^'( I,θ) ,𝔇^'( I,k) ,𝔇^'( I,c) :lune,
Definition <ref>, p. lune-lens
—–𝔇( I,θ_1,θ_2) ,𝔇(
I,k_1,k_2) ,𝔇( I,c_1,c_2) : lens
Definition <ref>, p. lune-lens
—–Δ: the open disk {z:| z| <1,z∈ℂ}, p. Delta
—–Δ^±: Δ^±=Δ∩ H^±, p. Del+-
—–( ∂Δ) ^±=( ∂Δ)
∩H^±, p. +-arc
—–E_q={𝔞_1,𝔞_2,⋯,𝔞_q}: a
set of distinct q points on the Riemann sphere S, p. Eq
—–𝐅: Definition <ref>, p. family F,D
—–𝐅^∗=𝐅^∗(E_q): subspace of 𝐅,
p. F*
—–ℱ,ℱ( L) : subspace of 𝐅:
Definition <ref>, p. F
—–ℱ_r,ℱ_r( L) ,ℱ
(L,m),ℱ_r( L,m) : subspace of ℱ
, Definition <ref> (c) and (d), p. FrFLMr
—–ℱ_r^'(L,m): subspace of ℱ_r(
L,m) : Definition <ref>, p. FR'
—–h_0=h_0(E_q), p. h0
—–H_L=H_L(E_q), p. HL(E)
—–H_0=H_0(E_q), p. H_0
—–H( Σ) =H(Σ,E_q): H(Σ)=R(Σ
)/L(∂Σ), p. DH
—–H^+: the upper half plane Imz>0, p. H+-
—–H^-: the lower half plane Imz<0, p. H+-
—–H^±: the closure of H^±on S, p. H+-
—–ℒ: the continuous point set of the function H_L of L,
Definition <ref>, p. L
—–n(Σ)=n(Σ,E_q),n
(Σ,𝔞_v), (<ref>) in p. a70
—–ℕ: the set of natural numbers {1,2,…,}, p.
nature
—–ℕ^0:ℕ∪{0}, p. nature
—–P: stereographic projection SP
—–R(Σ)=R(Σ,E_q): Ahlfors error term, (<ref>) p.
Ahero
—–S: unit Riemann sphere with area 4π, p. RS
—–𝒮_1,𝒮_1^(5),𝒮_1^(6)
,𝒮_1^(7),𝒮_1^(8): subspace of ℱ:
Definition <ref>, p. S15678
—–𝒮_0=𝒮_1^(5)∩𝒮_1^(6)
∩𝒮_1^(7)∩𝒮_1^(8): Definition
<ref>, p. S-surface
—–𝒮^∗: subspace of 𝒮_0: Definition
<ref>, p. 4psim
—–( ·) ^∘: interior of an arc (path), p.
boundary, interior of a surface, Definition <ref>, p.
interior
—–∂( ·) : in Definition <ref>: set of
end points of an arc, p. boundaryarc or boundary of a domain on S
or a surface, p. boundary.
—–( ·) : closure of a set, p. cl
—–#( ·) : the cardinality of a set, p. card
—–∼: in Remark <ref>: equivalence of surfaces, or curves,
Remark <ref>, p. finite
§ REFERENCES

[Ah0] Ahlfors, L. V., Complex Analysis, McGraw-Hill, third edition, 1979.
[Ah] Ahlfors, L. V., Zur Theorie der Überlagerungsflächen, Acta Math., 65 (1935), 157-194.
[Ber] Bernstein, F., Über die isoperimetrische Eigenschaft des Kreises auf der Kugeloberfläche und in der Ebene, Math. Ann., vol. 60 (1905), pp. 117-136.
[CLZ2] Chen, Y.-L., Lin, T.-R. & Zhang, G.-Y., Movement of branch points in Ahlfors' theory of covering surfaces, preprint.
[Dr] Drasin, D., The impact of Lars Ahlfors' work in value-distribution theory, Ann. Acad. Sci. Fenn. Ser. A I Math. 13 (1988), no. 3, 329–353.
[Du] Dufresnoy, J., Sur les domaines couverts par les valeurs d'une fonction méromorphe ou algébroïde, Ann. Sci. École Norm. Sup. 58 (1941), 179-259.
[Ere] Eremenko, A., Ahlfors' contribution to the theory of meromorphic functions, Lectures in memory of Lars Ahlfors (Haifa, 1996), 41–63, Israel Math. Conf. Proc., 14, Bar-Ilan Univ., Ramat Gan, 2000.
[Ha] Hayman, W. K., Meromorphic Functions, Oxford, 1964.
[L-S-Z] Li, W., Sun, Z. H. & Zhang, G. Y., Properties of Ahlfors constant in Ahlfors covering surface theory, Front. Math. China 16 (2021), no. 4, 957–977.
[CLZ1] Lin, T.-R., Chen, Y.-L. & Zhang, G.-Y., A finite theorem for Ahlfors' covering surface theory, https://doi.org/10.48550/arXiv.2305.13526.
[N] Nevanlinna, R., Zur Theorie der meromorphen Funktionen, Acta Math. 46, 1-99 (1925).
[Ne] Nevanlinna, R., Analytic Functions, translated from the second German edition by Phillip Emig, Die Grundlehren der mathematischen Wissenschaften, Band 162, Springer-Verlag, New York-Berlin, 1970.
[R] Rado, T., The isoperimetric inequality on the sphere, Amer. J. Math. 57(4), 765-770 (1935).
[Ri] Rickman, S., Quasiregular Mappings (Ergebnisse der Mathematik und ihrer Grenzgebiete 3. Folge), Springer, 1993.
[S] Stoilow, S., Leçons sur les Principes Topologiques de la Théorie des Fonctions Analytiques, Gauthier-Villars, Paris (1956).
[S-Z] Sun, Z. H. & Zhang, G. Y., Branch values in Ahlfors' theory of covering surfaces, Science China Mathematics, Vol. 63, No. 8: 1535-1558.
[Y] Yang, L., Value Distribution Theory, Springer, Berlin (1993).
[SZ] Zhang, G. Y. & Sun, Z. H., The impossibility of the equality in Ahlfors' Second Fundamental Theorem, Scientia Sinica (Mathematica) (in Chinese) (2019), No. 10, 1445–1462.
[Z1] Zhang, G. Y., Curves, domains and Picard's theorem, Bull. London Math. Soc. 34(2), 205-211 (2002).
[Zh1] Zhang, G. Y., The precise bound for the area-length ratio in Ahlfors' theory of covering surfaces, Invent. Math. 191: 197-253 (2013).
|
http://arxiv.org/abs/2307.04461v1 | 20230710101657 | Multi-modal Graph Learning over UMLS Knowledge Graphs | [
"Manuel Burger",
"Gunnar Rätsch",
"Rita Kuznetsova"
] | cs.LG | [
"cs.LG"
] |
Multi-modal Graph Learning over UMLS Knowledge Graphs
Manuel Burger, Gunnar Rätsch, Rita Kuznetsova
August 12, 2023
==================================================================================================================================
Clinicians are increasingly looking towards machine learning to gain insights about patient evolutions. We propose a novel approach named Multi-Modal UMLS Graph Learning (MMUGL) for learning meaningful representations of medical concepts using graph neural networks over knowledge graphs based on the unified medical language system. These representations are aggregated to represent entire patient visits and then fed into a sequence model to perform predictions at the granularity of multiple hospital visits of a patient. We improve performance by incorporating prior medical knowledge and considering multiple modalities. We compare our method to existing architectures proposed to learn representations at different granularities on the MIMIC-III dataset and show that our approach outperforms these methods. The results demonstrate the significance of multi-modal medical concept representations based on prior medical knowledge.
We provide our code here[<https://anonymous.4open.science/r/mmugl/>] and showcase some of our
results with an online demo available under this link[<https://mmugl.dnsalias.org>].
§ INTRODUCTION
Modern healthcare facilities record patient information as Electronic Health Records (EHR). EHR Datasets such as MIMIC-III <cit.>, HiRID <cit.>, and eICU <cit.> enable modeling of disease progressions within a single hospital visit, for example in Intensive Care Units (ICU) <cit.>, or progressions across multiple patient visits <cit.>. These progressions can be meaningfully encoded into patient representations using deep learning as shown by numerous prior works <cit.>. This large body of work highlights the value of strong patient representations which aggregate information across entire patient histories from multiple hospital stays, enabling clinicians to model potential risks in various predictive tasks regarding patients' evolution.
Further, we see advantages of recent multi-modal approaches in the ICU setting <cit.> and visit sequence modeling <cit.>. In multi-modal EHR representation learning <cit.>, we benefit from two modalities: structured EHR data (e.g., billing codes) and unstructured text information stored in rich clinical reports. Other modalities of medical data exist outside of in-hospital datasets, where a vast amount of prior medical knowledge is stored in static form in databases such as the Unified Medical Language System (UMLS <cit.>).
We identify two drawbacks of current UMLS based approaches <cit.>. First, the approaches do not consider a complete set of relational information stored in UMLS (considering multiple vocabularies) and solely use UMLS as a unified concept space. Second, prior solutions <cit.> specify the usage of hierarchical relations, which implies the use of an underlying graph in the form of a tree (single vocabulary). More complex graph structures inside and across vocabularies are thus omitted.
We introduce Multi-Modal UMLS Graph Learning (MMUGL) to overcome the previously stated limitations. MMUGL is a novel approach for learning representations over medical concepts extracted from the UMLS Metathesaurus in the form of a complex knowledge graph and relations; extracted using a simple and ambitious procedure considering a considerable set of vocabularies and all the relations across and within them.
We apply auto-encoder pretraining techniques (e.g., <cit.>). By training a shared latent space <cit.>, we bridge the modality gap between structured EHR codes and unstructured text. The approach includes rich prior knowledge important in the medical domain, deals with sample scarcity by relying on prior knowledge structure and pretraining techniques, and leverages multiple modalities as inputs.
§.§ Generalizable Insights about Machine Learning in the Context of Healthcare
The contributions of our work are threefold:
* In Section <ref>, we introduce a novel medical knowledge representation learning approach with graph neural networks (GNN) over knowledge graphs based on the UMLS Metathesaurus of previously unseen complexity. While modern machine learning techniques are unlocking amazing advancements in health-care, improved precision, early detection, personalized treatments, and democratized access, all by learning from large amounts of data, most of the accumulated medical knowledge often remains untouched by our algorithms. Prior work has considered to tap into this knowledge, but we go one step further and show, that we can extract large and complex knowledge graphs by considering a considerable amount of the entire UMLS Metathesaurus and build a strong structural prior into our machine learning model and gain performance in the process.
* We introduce a shared latent Concept Embedding (Sec. <ref>) space and a shared Visit Encoder (Sec. <ref>) to optimize the single latent space from any modality jointly in a parameter efficient manner. Prior work has established the importance of leveraging EHR records in their entirety, thus incorporating all the available modalities. In our work we show the benefits of grounding all modalities by the same prior knowledge and training a single latent space in end-to-end fashion for all input modalities (structured and unstructured EHR).
* In Section <ref> we demonstrate, that we strongly outperform prior graph-based works in pretraining and downstream tasks and can perform competitively with prior work trained at a much larger scale of data. We show the benefits of our large-scale knowledge graph, shared latent space from multiple modalities and tailored (pre-)training procedure.
§ RELATED WORK
In the following, we introduce related work in EHR modeling, knowledge graph learning, and graph learning in the context of EHRs.
EHR Various types of deep learning architectures have been proposed to learn representations at different granularities (patients, visits, histories, etc.) in EHR datasets. <cit.> propose EHR-specific visit sequence models. <cit.> propose to focus on the inherent structure of EHRs w.r.t. treatments, diagnosis, visits, and patients. <cit.> adapt the masked language modeling approach to learn medical concept embeddings.
Multi-Modality
Prior work has considered learning representations from either structured components of EHR data <cit.> or from unstructured clinical text reports <cit.>. <cit.> have proposed multi-modal architectures and <cit.> go a step further and introduce even stronger structural priors, while considering the two modalities of structured EHR data, as well as unstructured clinical reports.
Knowledge Graphs and GNNs
A vast amount of static prior medical knowledge often remains untouched in current modeling approaches. This prior knowledge can be extracted and transformed into knowledge graphs <cit.>. Existing work in natural language processing has established the benefits of knowledge graph representations to various downstream applications <cit.>; where the most recent approaches include GNNs <cit.>. We aim to leverage the recent success of GNNs, which apply graph convolutions over arbitrary graph structures to learn node (and edge) representations <cit.>.
Graph Learning in EHR
GRAM <cit.> proposed to include prior knowledge from medical ontologies such as the International Classification of Diseases (ICD).
To model structural and relational data explicitly, approaches have started to use GNNs. <cit.> proposed to use the Graph Attention <cit.> operator together with an architecture to pretrain embeddings over two ontologies.
Other works learn over heterogeneous graphs with different types of nodes <cit.>.
<cit.> construct a global graph of diseases, as well as dynamic local (within a single visit) subgraphs.
<cit.> focus on the EHR structure within a single visit.
Finally, <cit.> consider hyperbolic embeddings for medical ontologies. The learned embeddings can then be incorporated into task-specific architectures <cit.> to improve outcome predictions in different healthcare settings.
Previous approaches do consider dataset-specific structures such as the hierarchical organization of EHRs (patients, visits, etc.) and co-occurrence information or structure coming from ontologies. However, the explored set of ontologies is usually kept small and most of them are tree-like structures. To the best of our knowledge, no prior work has considered using a GNN directly on top of a complex large-scale ontology such as the UMLS Metathesaurus and the complete set of unstructured relational information within it.
Further, while previous work considered multiple modalities, they use fusion approaches to join modalities, which can require larger amounts of data to train effectively. Our work proposes to use the learned knowledge representations over the UMLS Metathesaurus as a single shared latent space for information coming from both the structured (billing codes) and unstructured modalities (clinical reports).
§ GLOSSARY
We consider an EHR dataset of multiple patients and present the following terminology:
* Patient: p_i indexed by i
* Visit: a patient p_i has one or multiple visits v_i,t indexed by t. A visit contains a set of medical concepts c ∈𝒞_i, t, the total set of medical concepts over the dataset is then 𝒞 = ∪_∀ i, t𝒞_i,t. A medical concept can be of different types and we distinguish them by index 𝒞(*):
* Disease: indexed by d s.t. 𝒞_i, t(d) and 𝒞(d) = ∪_∀ i, t𝒞_i,t(d) the total set of disease concepts
* Medication: (or prescriptions) with type m, similar to diseases we introduce 𝒞(m) and 𝒞_i,t(m)
* Concept from clinical reports: a set of medical concepts extracted from text data (clinical reports, Sec. <ref>). The total set of considered medical concepts from text 𝒞(n) = ∪_∀ i, t𝒞_i,t(n) where the set 𝒞_i,t(n) is collected from all reports at a specific visit t of patient i. The type is n for text note.
The vector representation of a visit considering data of a specific type * is 𝐯_𝐢,𝐭(*) ∈ℝ^k.
* Ontology: each ontology has a vocabulary 𝒱_Ont and defines relations amongst the members of the vocabulary using an edge set ℰ_Ont, which defines the ontology graph 𝒢_Ont = (𝒱_Ont, ℰ_Ont). We consider the following ontologies/databases:
* 𝒢_ICD (International Classification of Diseases) where 𝒞(d) ⊆𝒱_ICD
* 𝒢_ATC (Anatomical Therapeutic Chemical) where 𝒞(m) ⊆𝒱_ATC
* 𝒢_UMLS (Unified Medical Language System) where {𝒞(d) ∪𝒞(m) ∪𝒞(n)} = 𝒞⊆𝒱_UMLS
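To make the glossary concrete, the following minimal Python sketch mirrors these definitions as plain data structures; the class and field names are our own illustrative choices and do not correspond to any released code.

from dataclasses import dataclass, field

@dataclass
class Visit:
    """One hospital visit v_{i,t} holding the per-type concept sets C_{i,t}(*)."""
    diseases: set[str] = field(default_factory=set)       # C_{i,t}(d), e.g. ICD codes
    medications: set[str] = field(default_factory=set)    # C_{i,t}(m), e.g. ATC codes
    text_concepts: set[str] = field(default_factory=set)  # C_{i,t}(n), UMLS CUIs from reports

    def all_concepts(self) -> set[str]:
        return self.diseases | self.medications | self.text_concepts

@dataclass
class Patient:
    """Patient p_i as an ordered sequence of visits indexed by t."""
    patient_id: str
    visits: list[Visit] = field(default_factory=list)

def dataset_vocabulary(patients: list[Patient]) -> set[str]:
    """C: the union of C_{i,t} over all patients i and visits t."""
    vocab: set[str] = set()
    for p in patients:
        for v in p.visits:
            vocab |= v.all_concepts()
    return vocab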
§ METHOD
The architecture consists of three main components and is derived from the work done by <cit.>; fig:architecture provides an overview.
* Concept embedding module f_θ(c): 𝒞 ↦ ℝ^k (parametrized by θ), which computes a representation for any given medical concept c.
* Visit encoding module: assume q = |𝒞_i, t| and let r ∈{2, 3} be the number of concept types considered (diseases and medications, with or without concepts from text). Then g_ψ(v): ℝ^q × k↦ℝ^r × k (parametrized by ψ), which, given all concept token representations of a single visit v, computes one representation for each type of token therein.
* Predictor module which performs either a pretraining task on a single visit or a downstream fine-tuning task across a sequence of visits. In either case, this module receives representations for each visit of a patient from the previous visit encoding module.
In the following subsections, we introduce the Concept Embedding module (Sec. <ref>), present how we extract richer concepts from clinical reports (Sec. <ref>), encode the information (Sec. <ref>), and perform predictions (Sec. <ref>).
§.§ Concept Embeddings
We consider the following implementations of f_θ(c):
ICD/ATC Hierarchies
Based on the work by <cit.>, we consider the two tree hierarchies ICD[We consider the 9th revision, as we work with MIMIC-III] for diseases and ATC for medications. In this case, we consider c ∈{𝒞(d) ∪𝒞(m)}. We compute the node embeddings 𝐍_* (⊕ denotes concatenation):
𝐍_𝒞(d) = GNN_θ_1(𝒢_ICD),
𝐍_𝒞(m) = GNN_θ_2(𝒢_ATC),
f_θ(c) = Lookup(𝐍_𝒞(d)⊕𝐍_𝒞(m))(c)
where we use a distinct (parametrized by θ_1 and θ_2) multi-layer GNN for each of the two hierarchies (Eqns. <ref>, <ref>) and then perform a lookup (retrieve nodes by index)
against the resulting node embeddings (Eqn. <ref>). In this case, we initialize all of the nodes with randomly initialized trainable embeddings. We refer to this approach to learning concept embeddings as ICD/ATC.
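A minimal sketch of this ICD/ATC variant with PyTorch Geometric is shown below; the two-layer depth, module names, and the offset-based index lookup are our assumptions, while the GraphSAGE convolution follows the operator choice reported in the appendix.

import torch
from torch import nn
from torch_geometric.nn import SAGEConv

class HierarchyEmbedding(nn.Module):
    """Two-layer GNN over a single ontology tree with trainable node embeddings."""
    def __init__(self, num_nodes: int, dim: int):
        super().__init__()
        self.node_emb = nn.Embedding(num_nodes, dim)   # randomly initialized, trainable
        self.conv1 = SAGEConv(dim, dim)
        self.conv2 = SAGEConv(dim, dim)

    def forward(self, edge_index: torch.Tensor) -> torch.Tensor:
        x = self.node_emb.weight
        x = torch.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)               # node embedding table N

class ICDATCConceptEmbedding(nn.Module):
    """f_theta(c): distinct GNNs for ICD and ATC, concatenated node tables + lookup."""
    def __init__(self, n_icd: int, n_atc: int, dim: int):
        super().__init__()
        self.icd = HierarchyEmbedding(n_icd, dim)
        self.atc = HierarchyEmbedding(n_atc, dim)

    def forward(self, icd_edges, atc_edges, concept_idx: torch.Tensor) -> torch.Tensor:
        # ATC concepts are assumed to be indexed with an offset of n_icd
        table = torch.cat([self.icd(icd_edges), self.atc(atc_edges)], dim=0)
        return table[concept_idx]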
We can additionally consider co-occurrence information (e.g., <cit.>) to connect the two hierarchies. We refer to this approach with ICD/ATC-CO (details in Appendix <ref>).
MMUGL
We present our novel approach to rely on the UMLS Metathesaurus as a unified concept space to learn representations
for any general medical concept present in the database based on multiple modalities. Given that, we refer to our approach as Multi-Modal UMLS Graph Learning (MMUGL).
To constrain the number of concepts we consider from the database we use the set of clinical reports present in EHR datasets such as MIMIC-III <cit.>. Using an extraction pipeline (Sec. <ref>) we collect the set of medical concepts 𝒞(n); additionally, we ensure all of the concepts in the ICD and ATC hierarchies are present as well in our final vocabulary.
The final vocabulary {𝒞(d) ∪𝒞(m) ∪𝒞(n)} = 𝒞 = 𝒱_UMLS is used to construct 𝒢_UMLS by extracting all the edges in UMLS fully contained within the vocabulary. To simplify we consider all edges to be undirected.
In UMLS many concepts are annotated with a short natural language description. We use SapBERT <cit.>[<https://github.com/cambridgeltl/sapbert>], a pretrained language model fine-tuned to discriminate amongst UMLS concepts, to initialize the node embeddings from these descriptions. This contributes in two ways: (i) by not using trainable embeddings, we reduce the otherwise huge amount of free parameters given the large vocabulary 𝒱_UMLS (ii) we incorporate prior medical knowledge by considering the concept descriptions. We then train a multi-layer GNN on top of the extracted graph:
f_θ(c) = GNN_θ(𝒢_UMLS)(c)
To retrieve a concept, we return its computed node embedding. We additionally found it to be beneficial for performance to consider two distinct stacks of GNN layers over the same graph and perform a Max-Pooling operation after the final layer across the two stacks. This falls in line with using two distinct
GNNs in the simple ICD/ATC Hierarchy case presented in Eqn. <ref>.
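The sketch below illustrates one way to realize this MMUGL concept embedding: frozen SapBERT description embeddings as initial node features and two GraphSAGE stacks over the UMLS graph, max-pooled after the final layer. The HuggingFace model identifier, layer counts, and helper names are assumptions for illustration; in practice the full vocabulary would have to be embedded in chunks.

import torch
from torch import nn
from torch_geometric.nn import SAGEConv
from transformers import AutoModel, AutoTokenizer

@torch.no_grad()
def sapbert_node_init(descriptions: list[str],
                      model_name: str = "cambridgeltl/SapBERT-from-PubMedBERT-fulltext") -> torch.Tensor:
    """Frozen initial node features from UMLS concept descriptions (CLS embedding)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    lm = AutoModel.from_pretrained(model_name).eval()
    batch = tok(descriptions, padding=True, truncation=True, return_tensors="pt")
    return lm(**batch).last_hidden_state[:, 0]          # one vector per concept

class GNNStack(nn.Module):
    def __init__(self, in_dim: int, dim: int, layers: int = 2):
        super().__init__()
        dims = [in_dim] + [dim] * layers
        self.convs = nn.ModuleList(SAGEConv(a, b) for a, b in zip(dims[:-1], dims[1:]))

    def forward(self, x, edge_index):
        for i, conv in enumerate(self.convs):
            x = conv(x, edge_index)
            if i < len(self.convs) - 1:
                x = torch.relu(x)
        return x

class UMLSConceptEmbedding(nn.Module):
    """f_theta(c): two GNN stacks over the same UMLS graph, max-pooled."""
    def __init__(self, node_feats: torch.Tensor, dim: int):
        super().__init__()
        self.register_buffer("node_feats", node_feats)   # SapBERT-initialized, not trained
        self.stack_a = GNNStack(node_feats.size(1), dim)
        self.stack_b = GNNStack(node_feats.size(1), dim)

    def forward(self, edge_index, concept_idx):
        a = self.stack_a(self.node_feats, edge_index)
        b = self.stack_b(self.node_feats, edge_index)
        return torch.maximum(a, b)[concept_idx]           # element-wise max over the two stacks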
§.§ Concept Extraction
The goal of our approach is to include data from additional modalities such as the clinical reports found in EHR datasets. MMUGL learns modality-agnostic representations of medical concepts based on UMLS knowledge. It fuses discrete code information (e.g., ICD codes) with medical concepts extracted from text. The extraction with QuickUMLS <cit.> yields a set of medical concepts 𝒞_i,t(n) based on the collection of clinical reports of that particular visit. Further, we perform a rule-based negation extraction using NegEx <cit.>; for each concept, we extract a binary feature indicating whether it is negated, and concatenate it with the learned concept embedding (Eqn. <ref>). This is a crucial piece of information, as clinical reports can mention either the presence or the absence of a condition.
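A rough sketch of such an extraction step is given below. The QuickUMLS constructor requires a local installation path (an assumption here), and the keyword-window negation check is only a crude stand-in for the NegEx algorithm actually used; the returned (CUI, negated) pairs correspond to 𝒞_i,t(n) together with the binary negation feature that is concatenated with each concept embedding.

from quickumls import QuickUMLS

NEGATION_CUES = ("no ", "not ", "denies", "without", "negative for")  # crude stand-in for NegEx

def extract_visit_concepts(reports: list[str], quickumls_path: str) -> set[tuple[str, bool]]:
    """Return {(CUI, negated)} over all clinical reports of one visit."""
    matcher = QuickUMLS(quickumls_path)
    concepts: set[tuple[str, bool]] = set()
    for text in reports:
        for match_group in matcher.match(text, best_match=True, ignore_syntax=False):
            for m in match_group:
                # look at a short window preceding the match for a negation cue
                start = max(0, m["start"] - 40)
                window = text[start:m["start"]].lower()
                negated = any(cue in window for cue in NEGATION_CUES)
                concepts.add((m["cui"], negated))
    return concepts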
§.§ Visit Encoder
We present implementations of the function g_ψ(v). In line with the work by <cit.>, we consider a multi-layer transformer without positional encodings. To aggregate a set of concepts into a single representation, we use the transformer output at a learned aggregation token as the aggregate.
For each concept type in a given visit, we encode a separate representation using the same (weight-sharing) Transformer_ψ with parameters ψ where * ∈{d, m, n}. The aggregated representations g_ψ(v) for each modality are considered as the output of this module in MMUGL:
𝐯_𝐢,𝐭(*) = Transformer_ψ( f_θ( 𝒞_i,t(*) ) ),
g_ψ(v) = ( 𝐯_𝐢,𝐭(d), 𝐯_𝐢,𝐭(m), 𝐯_𝐢,𝐭(n) )
We can also consider a case without the information from clinical reports (Eqn. <ref>), e.g., when we use a simpler graph such as ICD/ATC (Eqn. <ref>) or MMUGL without 𝒞(n):
g_ψ(v) = ( 𝐯_𝐢,𝐭(d), 𝐯_𝐢,𝐭(m) )
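A compact sketch of this visit encoder is given below; prepending a learned aggregation token and reading its output as 𝐯_i,t(*) reflects our reading of the text, and the layer and head counts are illustrative.

import torch
from torch import nn

class VisitEncoder(nn.Module):
    """g_psi(v): weight-shared transformer (no positional encodings) per concept type."""
    def __init__(self, dim: int, heads: int = 4, layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.agg_token = nn.Parameter(torch.randn(1, 1, dim))  # learned aggregation token

    def encode_type(self, concept_embs: torch.Tensor) -> torch.Tensor:
        # concept_embs: (batch, num_concepts, dim) for one type in {d, m, n}
        agg = self.agg_token.expand(concept_embs.size(0), -1, -1)
        out = self.encoder(torch.cat([agg, concept_embs], dim=1))
        return out[:, 0]                       # v_{i,t}(*): output at the aggregation token

    def forward(self, per_type_embs: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
        # the same (shared) weights are applied to every concept type
        return {t: self.encode_type(e) for t, e in per_type_embs.items()}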
§.§ Predictors and Training
In the following, we introduce the pretraining module and downstream fine-tuning modules.
§.§.§ Pretraining Module
We replicate the auto-encoding pretraining approach developed by <cit.> with the reconstruction loss ℒ_recon. We perform four different predictions (from each of the two modalities, diseases and prescriptions, as a source to either one as the label) using distinct multi-layer perceptrons (MLP_∙→ * predicting type * from representations of type ∙) and attach a binary cross-entropy loss ℒ_BCE to model the multi-label classification.
ℒ_recon = ∑_∙, * ∈{d, m}ℒ'(∙, *),
ℒ'(∙, *) = ℒ_BCE( MLP_∙→ *( 𝐯(∙) ) , 𝒞(*))
During pretraining, we additionally randomly mask and replace certain tokens at the input in Eqn. <ref> (same as <cit.>, inspired by masked language modeling <cit.>).
Weighted reconstruction pretraining
We consider a weighted version of Eqn. <ref>:
ℒ_recon = ∑_∙, * ∈{d, m} w_∙, * ℒ'(∙, *)
As some of the considered downstream tasks focus on disease diagnosis, we consider a tailored disease-focused pretraining approach. In this setting, we omit the predictions (and loss signal) towards medications and only predict diseases from either the visit's aggregated disease or medication representation; that is, we set w_∙, d = 1 ∧ w_∙, m = 0.
The contributions to the performance of this adaption are presented in Section <ref> and Appendix <ref>.
Sum Aggregation Loss Due to the strong imbalance in the distribution of diseases and medications, we explore an additional loss component to prevent the attention mechanism from overfitting to the most common tokens. Instead of taking the representation of the learned aggregation token, we take the sum over all remaining token representations and again decode this unbiased aggregate using an MLP to predict the set of diseases or prescriptions (∖ for set difference):
ℒ_sum = ∑_* ∈{d, m}ℒ'_sum(*),
ℒ'_sum(*) = ℒ_BCE(MLP^ sum_* → *( 𝐯^sum(*) ) , 𝒞(*) ),
𝐯^sum(*) = ∑( Transformer_ψ(f_θ( 𝒞(*) ) ) ∖{aggregation token})
The idea is to ensure a more unbiased aggregation while still allowing the tokens to interact and impute masked or missing information. With this approach, we can induce a more dispersed distribution in the attention mechanism (Sec. <ref>).
Concepts from clinical reports
In our approach MMUGL we consider additional medical concepts extracted from text (clinical reports) and we concatenate the aggregated representation of these concepts for the respective visit 𝐯(n) to each of the two modalities at the input to the predictor MLP. For example in the case of ℒ_recon:
ℒ_recon = ∑_∙, * ∈{d, m}ℒ'(∙, *),
ℒ'(∙, *) = ℒ_BCE( MLP_∙→ *( 𝐯(∙) ⊕𝐯(n) ) , 𝒞(*) )
The final loss for pretraining ℒ_pre is a combination of ℒ_recon (Eqn. <ref>, <ref>, <ref>) and ℒ_sum (Eqn. <ref>):
ℒ_pre = ℒ_recon + λℒ_sum,
where ℒ_sum is configured as a regularizer with hyperparameter λ (for which we provide an ablation in Sec. <ref>).
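The following sketch assembles ℒ_pre from the pieces above. The head containers, weight dictionary, and default λ are illustrative assumptions; the sum-aggregated representations 𝐯^sum(*) are assumed to be computed upstream by the visit encoder (excluding the aggregation token).

import torch
from torch import nn

bce = nn.BCEWithLogitsLoss()

def pretraining_loss(v, v_sum, v_text, heads, sum_heads, targets, weights, lam=0.1):
    """
    v:         {'d': ..., 'm': ...} aggregation-token visit representations
    v_sum:     {'d': ..., 'm': ...} sum-aggregated representations (agg token excluded)
    v_text:    representation of concepts from clinical reports, or None
    heads:     {('d','d'): MLP, ('d','m'): MLP, ...} reconstruction heads MLP_{src->dst}
    sum_heads: {'d': MLP, 'm': MLP} heads for the sum-aggregation loss
    targets:   {'d': multi-hot tensor, 'm': multi-hot tensor}
    weights:   {('d','d'): w, ...}, e.g. w_{*,m}=0 for disease-focused pretraining
    """
    recon = 0.0
    for (src, dst), head in heads.items():
        inp = v[src] if v_text is None else torch.cat([v[src], v_text], dim=-1)
        recon = recon + weights[(src, dst)] * bce(head(inp), targets[dst])
    sum_loss = sum(bce(sum_heads[t](v_sum[t]), targets[t]) for t in v_sum)
    return recon + lam * sum_loss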
§.§.§ Downstream Modules
In this work, we focus our contribution on learning concept representations over a knowledge graph from multiple modalities. We thus consider two prior architectures to perform time-series modeling and leave them mostly unchanged. It is intentional that we do not propose a novel downstream architecture; we aim to show performance improvements solely through learning more robust and meaningful medical knowledge graph representations and aggregations thereof.
Average Pooling To compare to work by <cit.> in medication recommendation, we consider their downstream architecture. Given a patient history of visits (of which we get the representations using modules from Sec. <ref> and <ref>), we perform the same pooling scheme over the past and current visit to get a final representation which is used as input to an MLP to perform a predictive task.
RNN Based on the architecture by <cit.>, given a patient and a sequence of past visits (encoded as in Sec. <ref>), we feed the visit representations through a GRU <cit.>. The hidden states at the output of the GRU are aggregated using a temporal attention mechanism whose query is a trainable embedding. We perform a minor modification w.r.t. the architecture by <cit.> and introduce a hyperparameter n_q, the number of trainable queries. If more than one query is used, we aggregate the temporal aggregations of the individual queries to obtain a single representation of the entire past of the patient. This representation is used to predict into the future using an MLP.
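A minimal sketch of this downstream RNN module follows; since the text does not specify how the n_q per-query aggregations are combined, the mean taken here is our simplification, and the MLP head is illustrative.

import torch
from torch import nn

class HistoryRNN(nn.Module):
    """GRU over visit representations with n_q trainable temporal-attention queries."""
    def __init__(self, dim: int, n_q: int = 2, out_dim: int = 1):
        super().__init__()
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.queries = nn.Parameter(torch.randn(n_q, dim))
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, out_dim))

    def forward(self, visit_seq: torch.Tensor) -> torch.Tensor:
        # visit_seq: (batch, num_visits, dim) of past visit representations
        h, _ = self.gru(visit_seq)                              # (B, T, dim)
        scores = torch.einsum("qd,btd->bqt", self.queries, h)   # one attention map per query
        attn = torch.softmax(scores, dim=-1)
        per_query = torch.einsum("bqt,btd->bqd", attn, h)       # (B, n_q, dim)
        patient_repr = per_query.mean(dim=1)                    # combine the query aggregations
        return self.head(patient_repr)                          # task logits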
§ EXPERIMENTS
We perform our experiments on the MIMIC-III <cit.> dataset, using its diagnosis, prescription, and clinical report tables.
Medications are mapped to the ATC hierarchy using the approach shared by <cit.>. For all approaches and baselines, pretraining uses the training split of the respective baseline as well as any other patient in the dataset not present in the test or validation splits; this concerns especially patients with only a single visit (who are not usable for fine-tuning sequence tasks). We consider three different downstream tasks, all trained using binary cross-entropy (binary/multi-label). Appendix <ref> shows data statistics for each of them. The result tables show standard deviations over three seeded training runs, and we highlight the best results in bold font. In Appendices <ref>, <ref>, and <ref> we share training, architecture, and task details.
Medication Recommendation
To compare to the work by <cit.> (who have shown improvements over any previously published results on this task) we benchmark the medication recommendation task. We use their provided preprocessed patient data derived from MIMIC-III.
The multi-label prediction task was evaluated on a sample-averaged Area under the precision-recall curve AuPRC, as well as sample-averaged macro F1 score.
Heart Failure
This task has been benchmarked in CGL <cit.> (Collaborative Graph Learning), Chet <cit.> (Context-aware Health Event Prediction via Transition Functions), and Sherbet <cit.> (Self-Supervised Graph Learning With Hyperbolic Embedding for Temporal Health Event Prediction); who have performed extensive benchmarking against prior work.
We run their provided preprocessing and extract the used target code sets, as well as the computed patient splits. The binary classification is evaluated using F1 score and area under the receiver-operator curve AuROC.
Diagnosis
Similar to the previous heart failure task we compare to the results of CGL <cit.>, Chet <cit.>, and Sherbet <cit.>. We extract the target code sets and patient splits by running the provided preprocessing in each of the repositories to ensure comparability.
We consider the thresholded weighted F1 (w-F1) score and, to be comparable to <cit.>, their adapted computation of F1. That variant is slightly inflated because it uses the number of ground-truth positive labels of each sample[<https://github.com/LuChang-CS/CGL/blob/main/metrics.py>]: this avoids the need to set a threshold, but leaks the number of ground-truth positives to the evaluation; we refer to it as w-F1 (infl.). We also report recall among the top k predictions (ranked by model confidence), referred to as R@k (e.g., R@20).
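The sketch below reflects our reading of these two metrics: R@k over confidence-ranked labels, and the inflated F1 variant that predicts, per sample, as many labels as there are ground-truth positives instead of applying a threshold. It is not a copy of the referenced implementation.

import numpy as np

def recall_at_k(y_true: np.ndarray, scores: np.ndarray, k: int = 20) -> float:
    """Mean fraction of ground-truth labels recovered among the top-k scored labels."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    recalls = []
    for true_row, top_row in zip(y_true, topk):
        positives = np.flatnonzero(true_row)
        if positives.size:
            recalls.append(np.isin(positives, top_row).mean())
    return float(np.mean(recalls))

def inflated_f1(y_true: np.ndarray, scores: np.ndarray) -> float:
    """Per-sample F1 where, instead of a threshold, the top-n labels are predicted
    with n = number of ground-truth positives of that sample (hence 'inflated')."""
    f1s = []
    for true_row, score_row in zip(y_true, scores):
        n = int(true_row.sum())
        if n == 0:
            continue
        pred = np.zeros_like(true_row)
        pred[np.argsort(-score_row)[:n]] = 1
        tp = float((pred * true_row).sum())
        precision, recall = tp / max(pred.sum(), 1), tp / n
        f1s.append(0.0 if tp == 0 else 2 * precision * recall / (precision + recall))
    return float(np.mean(f1s))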
§ RESULTS AND DISCUSSION
§.§ Pretraining: Sum Loss
In fig:pretraining-sum-loss-entropy we perform an ablation w.r.t. the hyperparameter λ controlling the contribution of ℒ_sum (Eqn. <ref>) to the total pretraining loss.
<cit.> have computed the entropy of the distribution induced by the attention mechanism to analyze Transformer behavior. Similarly, we show the average (test set) entropy of the distribution induced by attention from the aggregation token to all the other tokens. For larger λ the entropy increases, hence the distribution is more dispersed, and we see an improvement in pretraining performance (shown by the test set reconstruction loss ℒ_recon, corresponding to an improved test log-likelihood of our model). The idea is that a more dispersed distribution is a better aggregator and generalizes better to rare diseases, which might otherwise be overlooked by a peaked (overfitted) attention distribution.
In Appendix <ref> and <ref> we provide further experimental results ablating pretraining and the different loss terms.
§.§ Medication Recommendation
We report our performance on the medication recommendation task (Sec. <ref>) using the average pooling architecture (Sec. <ref>) in tab:baseline-comparison-med. Our method and training approach outperform the previously published state-of-the-art results by <cit.>; however, we note that the multi-modal approach with medical concepts from clinical reports does not provide improvements on this task and data split (patients vary strongly in the richness of available clinical reports); see also Appendix <ref>.
§.§ Disease Tasks
In tab:baseline-comparison-diag we present benchmarking results on two disease-related tasks (Sec. <ref>) using the RNN architecture (Sec. <ref>). We train and evaluate our models on the patient splits and code sets extracted by considering three different prior work implementations, which have performed extensive benchmarking on previous state-of-the-art methods.
For Heart Failure we see that our approach can outperform any previous state-of-the-art published methods. Considering the diagnosis task our method outperforms CGL <cit.> (which considers unstructured text data), as well as Chet <cit.>; to be fair, neither considers a pretraining scheme. We also considerably outperform MedPath <cit.>, which considers personalized graphs to enhance the predictive performance of backbone time-series architectures for EHR. Our general method, considering pretraining tailored to encode both diseases and medications, performs on par with the hyperbolic approach Sherbet <cit.>, which performs pretraining too. However, if we tune our visit representations towards encoding disease-specific information (see Eqn. <ref> with w_∙, d = 1 ∧ w_∙, m = 0) we can also outperform this prior method.
§.§ Concept Embedding Ablation
In tab:concept-embedding-ablation we show ablations over different types of concept embeddings (Sec. <ref>) on a diagnosis task (Sec. <ref>). Our approach strongly benefits from richer multi-modal information coming from clinical reports and thus outperforms prior work (the multi-modal approach can also increase robustness w.r.t. missing and erroneous information, Appendix <ref>). We can see further improvements by tailoring our pretraining towards the downstream task by using disease-focused pretraining (Eqn. <ref> with w_∙, m = 0).
Note that in some cases (e.g., Heart Failure), using MMUGL w/o 𝒞(n) (i.e., without clinical reports) can slightly harm performance and be outperformed by more dataset-specific approaches that use co-occurrence information and rely on simpler ontology structures with trainable embeddings, and can thus adapt better to the dataset than the language-model-initialized MMUGL embeddings. However, our approach enables the use of richer information coming from clinical reports and a larger concept vocabulary without introducing new parameters.
Further, our approach is more general, grounded by prior knowledge, and can hopefully be used to push transfer learning performance in the future. This is crucial in the medical setting, where publicly available training data is scarce and sharing among institutions is difficult because the privacy of individual patients must be protected.
We compare to two alternative approaches for learning concept embeddings by replacing the Concept Embedding module (Sec. <ref>) and performing the same proposed training procedure. First, as presented by <cit.>, we pretrain our knowledge graph concept embeddings using Node2Vec <cit.>. Second, we compare to Cui2Vec <cit.>, a set of medical concept embeddings pretrained on a large-scale corpus using a Word2Vec <cit.>-style objective function. We show that, using our graph on the scale of 100'000 nodes and around 30'000 patients for pretraining, we can compete with an approach whose training data comprised on the order of 60 million patients, 20 million clinical notes, and 1.7 million biomedical journal articles.
§.§ Interpretability Analysis on Clinical Reports
We can use the attention mechanism to interpret the results on a patient level to rank diagnosis and medications, as well as general medical concepts from clinical reports w.r.t. their importance for the prediction using their respective attention score. See Figure <ref> where we show aggregated attention values for disease and prescription categories (Fig. <ref>), as well as the highest ranked concepts inside the highest ranked clinical reports (Fig. <ref>). We can also perform various dataset global analyses. We analyze the distribution of medical concepts extracted from clinical reports w.r.t. MIMIC-III report type and present the results in fig:text-attention-category-distribution; please mind the logarithmic y-scale (Appendix <ref> also shows a linear scale). After pretraining, we can see a very strong shift from the dataset's type distribution toward discharge summaries. This is sensible given the pretraining task is an auto-encoder, essentially training for summarizing the visit. By fine-tuning for specific tasks we can see slight shifts towards more specific report types, which can help provide more detailed insights for a given task; note for example how the focus in the respiratory category increases as we fine-tune for a general diagnosis, but decreases below the pretraining level for a heart failure prediction.
§.§ Limitations
Based on the previously shown results, we see strong benefits from incorporating larger-scale prior knowledge. We conclude that it is feasible to extract a complex graph from the large UMLS database using a fairly simple extraction pipeline and to effectively learn strong medical knowledge representations over it.
We have proposed a simple extraction pipeline, in which we extract an undirected graph from UMLS and ignore potential edge information. A more sophisticated extraction paired with an appropriate GNN should be able to handle the increased heterogeneity of different nodes, edges, and their respective features. However, this will come at a computational cost: one has to navigate the complexities associated with the various node and edge types within the heterogeneous set of subvocabularies present in the UMLS Metathesaurus.
By creating a single shared latent space (our knowledge graph) for multiple modalities, we can achieve improved performance using much less data than prior art or outperform work using the same amounts of data. However, by reducing a clinical report to a set of medical concepts, which we can map onto our graph space, we neglect the natural language context and ordering. As we are already using a Transformer architecture inside our Visit Encoder (Sec. <ref>), we could include the remaining text (without concept matches) to provide the language context to obtain even finer grained final representations of patients and their visits.
§ CONCLUSION
We have introduced a novel way to train a unified latent space for general medical knowledge from multiple modalities. By grounding our representations with prior knowledge from the UMLS Metathesaurus, we have demonstrated improved performance on downstream tasks.
Our extended pretraining approach and the corresponding results emphasize its importance in tackling the scarcity of supervised labels in the medical domain. The more general approach to medical concept representations can aid future designs and explorations of knowledge embedding transferability. Knowledge transfer is an important factor in the medical setting, where publicly available training samples are scarce due to the regulations necessary to protect patient privacy.
Our results pave the way for future research to bridge the gap between within-visit modeling (e.g.,
ICU time-series models <cit.>) and across-visit modeling, such as we benchmarked against in this work. Whereas disease and medication codes are usually assigned post-visit (for billing or archival purposes), many clinical reports are generated during the patient stays. To provide richer context information, future within-visit models might include patient histories and the knowledge captured in our global concept representations.
This project was supported by grant #2022-278 of the Strategic Focus Area “Personalized Health and Related Technologies (PHRT)” of the ETH Domain (Swiss Federal Institutes of Technology).
Further, we would like to thank Hugo Yèche for his feedback during the revision process. Thanks go to Jonas Bokstaller and Severin Husmann whose theses have provided relevant insights.
§ EXPERIMENTAL DETAILS
§.§ Dataset and Split details
A small overview of data and task statistics are provided in tab:apd-data-statistics-disease. Splits and target code sets have been extracted from the respective repositories[<https://github.com/jshang123/G-Bert>,
<https://github.com/LuChang-CS/CGL>,
<https://github.com/LuChang-CS/Chet>,
<https://github.com/LuChang-CS/sherbet>]
§.§ Knowledge Graph Statistics
The extracted knowledge graph contains 87'445 nodes, 261'212 edges with node degrees of 5.97±20.91. The total vocabulary of all considered medical concepts is a subset of 21 UMLS Metathesaurus Vocabularies (percentages in brackets, some concepts belong to multiple): SNOMEDCT_US (46.75%), ICD9_CM (10.44%), CCPSS (7.71%), CSP (6.75%), FMA (6.15%), RXNORM (5.25%), DXP (4.21%), NCI_CDISC (4.13%), WHO (2.40%), ATC (2.12%), DRUGBANK (1.80%), CPT, NOC, BI, CCS, ICNP, NIC, ICF, CCC, PCDS, RAM.
Given a patient split, we compute the coverage of our vocabulary during pretraining and downstream training. The inclusion criteria causing differences between the two are the availability of medications (required for pretraining) and multiple visits (required for downstream training).
* All splits: 91.59% (pre), 71.24% (down)
* Train: 90.30% (pre), 68.21% (down)
* Validation: 16.45% (pre), 16.97% (down)
* Test: 37.68% (pre), 38.47% (down)
A percentage of concepts in the validation and test splits are unseen during training. Because of the graph structure, we can still learn meaningful representations for them:
* Validation: 0.78% (pre), 1.93% (down)
* Test: 3.11% (pre), 7.07% (down)
§.§ Architecture and Training
We perform early stopping based on the validation set loss both during pretraining and fine-tuning. The network is first fully pretrained until early stopped, the concept embedding (Sec. <ref>) backend is then frozen, the visit encoder (Sec. <ref>) is left trainable together with the downstream architecture to allow the attention mechanism to be fine-tuned to perform task-specific aggregations. We find a larger batch size (e.g. 32 or more) to be beneficial for better training stability.
Appendix <ref> shows an overview of the hyperparameters, which have been tuned w.r.t. validation set performance.
§.§.§ GNN Architecture
In Section <ref> we use a parametrized GNN in Eqns. <ref>, <ref>, and <ref>. We use Pytorch Geometric <cit.> to implement these networks and based on our hyperparameter searches in Appendix <ref> we settled on using the graph convolution operator GraphSAGE as introduced by <cit.>.
The ICD and ATC hierarchical ontologies or our complex UMLS based knowledge graph are passed to the GNN considering all edges as undirected. In the case of multiple GNN layers we use a non-linear ReLU activation after all but the last layer. The representations for each medical concept of an ontology or the knowledge graph at the GNN output are cached and used to retrieve concept embeddings for further processing by the Visit Encoder (Sec. <ref>) module.
§.§.§ GNN with Co-Occurrence
Similar to work done by <cit.> or <cit.> we can
additionally consider co-occurrence information present in our dataset. We construct a new graph 𝒢_ICD/ATC-CO which contains multiple sets of nodes and edges. The node sets are the ICD and ATC tree hierarchy nodes, while the edge sets consist of
the two ontologies and four co-occurrence edge sets; one for co-occurrence within each of the two ontologies and one (directed) from
each of the two to the other. We then compute a heterogeneous (nodes of different types) multi-layer GNN (see <cit.>) over these node and edge sets, where each edge set is associated with its own parametrized graph convolution operator. As a result, we compute multiple different embeddings for a given node in each layer, which are summed. Co-Occurrence edges can additionally be weighted by computing a count over the dataset (training split) and normalizing s.t. incoming edges sum to one. Such weights can be considered by the GNN by multiplying messages from neighboring nodes with the corresponding weight. Again we have c ∈{𝒞(d) ∪𝒞(m)}:
f_θ(c) = GNN_hetero(𝒢_ICD/ATC-CO)(c)
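A sketch of such a heterogeneous GNN with PyTorch Geometric is given below. The edge-type names are our own, and the count-normalized co-occurrence edge weights described above are omitted because the SAGEConv operator used here ignores edge weights; an operator supporting weighted messages would be needed to reproduce that detail.

import torch
from torch_geometric.nn import HeteroConv, SAGEConv

def build_hetero_layer(dim: int) -> HeteroConv:
    """One heterogeneous layer: a distinct operator per edge set, summed per node type."""
    edge_types = [
        ("icd", "hierarchy", "icd"),
        ("atc", "hierarchy", "atc"),
        ("icd", "cooccurs", "icd"),
        ("atc", "cooccurs", "atc"),
        ("icd", "cooccurs", "atc"),
        ("atc", "cooccurs", "icd"),
    ]
    return HeteroConv({et: SAGEConv((-1, -1), dim) for et in edge_types}, aggr="sum")

class HeteroCooccurrenceGNN(torch.nn.Module):
    def __init__(self, dim: int, layers: int = 2):
        super().__init__()
        self.layers = torch.nn.ModuleList(build_hetero_layer(dim) for _ in range(layers))

    def forward(self, x_dict, edge_index_dict):
        for i, layer in enumerate(self.layers):
            x_dict = layer(x_dict, edge_index_dict)
            if i < len(self.layers) - 1:
                x_dict = {k: torch.relu(v) for k, v in x_dict.items()}
        return x_dict   # node embeddings per type; concept lookup as in f_theta(c)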
§.§ Hyperparameters
In tab:apd-hp-pretraining,tab:apd-hp-medication-recommend,tab:apd-hp-heart-failure,tab:apd-hp-diagnosis we present an overview of the model hyperparameters. Final choices based on validation set performances have been marked in bold font.
Hardware A typical training is finished in under a day. Depending on the task and set of considered input modalities it can be much faster. We trained our models using mostly GPUs with 11GB of dedicated GPU memory; some larger models, which included medical concepts extracted from text have been trained on GPUs with 24GB of dedicated GPU memory. We use 2-6 worker processes and around 32-64GB of main memory.
§.§ Tasks and Evaluation
In the following, we provide a more detailed overview of the benchmarked downstream tasks (Sec. <ref>) and the evaluation thereof.
§.§.§ Medication Recommendation
We benchmark the medication recommendation task based on preprocessed data by <cit.>. The task is to predict a set of medications (ATC level 4 codes) given a patient's history and the current diagnosis (assigned ICD codes). Given a patient i and a trained predictor ĥ we can formalize as follows:
𝒞̂_i, t(m) = ĥ( 𝒞_i, 0… t-1(*), 𝒞_i, t(d) )
where * ∈{d, m, n}
Given that this is a multi-label prediction we consider sample-averaged scores. Due to a significant imbalance in the distribution of medication codes, we use the F1 score for thresholded hard predictions and the area under the precision-recall curve (AuPRC) for unthresholded confidence scores. This is in line with the evaluation by <cit.>.
§.§.§ Heart Failure
This is a binary prediction task as already benchmarked by many prior works on the MIMIC-III <cit.> dataset. The task is to predict the risk of heart failure for a patient in a future visit given the patient's history. The label is extracted from the set of assigned ICD codes by matching with the prefix 428 after stripping the codes of any special characters. Let y_i, t be the target label and it is 1 if there exists a code c ∈𝒞_i, t(d) which has the prefix 428. For a patient i and trained predictor ĥ we can formalize as follows:
ŷ_i, t = ĥ( 𝒞_i, 0… t-1(*) )
where * ∈{d, m, n}
The task, with mild label imbalance, is evaluated using the F1 score and the area under the receiver-operator curve (AuROC) for unthresholded performance evaluation; this is in line with work by <cit.> and others.
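The label extraction described above can be summarized in a few lines; the function name is ours.

import re

def has_heart_failure(visit_codes: set[str]) -> int:
    """Binary label y_{i,t}: 1 if any assigned ICD-9 code starts with the prefix 428
    after stripping special characters (e.g. '428.0' -> '4280')."""
    for code in visit_codes:
        if re.sub(r"[^0-9A-Za-z]", "", code).startswith("428"):
            return 1
    return 0

# e.g. has_heart_failure({"401.9", "428.0"}) == 1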
§.§.§ Diagnosis
This is a multi-label prediction over a set of diseases. Given a patient's history, we predict the set of potential diseases for an upcoming visit. For a patient i and trained predictor ĥ we can formalize as follows:
𝒞̂_i, t(d) = ĥ( 𝒞_i, 0… t-1(*) )
where * ∈{d, m, n}
This task might not seem very sensible at first, as we cannot expect to reliably predict accidents that cause a hospital visit based on past EHR records. However, it is useful to catch chronic diseases and re-occurring patient patterns. Such a model's predictions could serve as a high-level aggregation of all EHR records for a specific patient: a doctor can get a very quick assessment of the potential risks for a patient upon admission and can tailor further investigations accordingly.
Due to the extreme imbalance over the very large set of potential labels, we use the weighted F1 score. To assess the unthresholded model confidence scores we use a popular metric from information retrieval: recall among the top k predictions (ranked by model confidence scores) gives an intuitive indication of whether the model can retrieve the desired ground-truth diseases. The evaluation is in line with prior work, e.g., by <cit.>.
§.§ Baselines
In this section, we provide a summary overview of the presented baselines and the key points of their architectures.
§.§.§ CGL: Collaborative Graph Learning
In this work, <cit.> propose a collaborative graph learning approach. They consider two graphs, one where patients and diseases are connected based on co-occurrence and one where only diseases are connected amongst each other based on the ICD ontology. GNN layers over the two edge sets and the shared set of nodes are run in an interleaved fashion (collaboratively). The computed embeddings for a certain disease are aggregated to represent patient visits and a sequence model performs task predictions.
§.§.§ Chet: Context aware Health Event Prediction via Transition Functions
The core contribution of this work by <cit.> is to consider a global disease graph, which connects diseases by co-occurrence and ontology relations, as well as a local graph (for each visit), which models the interactions of assigned disease codes within this specific visit. The architecture includes aggregation functions and sequence modeling to perform task-specific predictions.
§.§.§ Sherbet: Self-Supervised Graph Learning With Hyperbolic Embedding for Temporal Health Event Prediction
With Sherbet, <cit.> propose to encode the structure of a disease ontology in hyperbolic space. The hyperbolic embeddings of the respective diseases are used to pretrain (using a patient history reconstruction task) and fine-tune a sequence model architecture to perform task-specific predictions.
§.§.§ MedPath: Augmenting Health Risk Prediction via Medical Knowledge Paths
With MedPath, <cit.> propose to enhance the performance of existing EHR representation learning architectures by incorporating a personalized graph extracted using knowledge from Semantic MEDLINE <cit.>. The extracted graph is dataset- and task-specific and can improve the performance of the backbone architecture. We transformed our data to fit their published pipeline and performed the heart failure prediction task using their implementation. We use HiTANet <cit.> as the backbone architecture, because it performed best on the validation set in our hyperparameter search.
§.§.§ G-BERT: Pre-training of Graph Augmented Transformers for Medication Recommendation
<cit.> show performance improvements on a medication recommendation task by pretraining disease and medication code embeddings using GNNs over two ontologies. The pretraining objective is a reconstruction task of the codes observed during a patient visit and borrows ideas from masked language modeling. The pretrained architecture includes a Transformer-based encoder, which outputs an encoding for each patient visit. The proposed downstream architecture performs a pooling scheme over patient histories and recommends medications for the current patient visit given the patient's history and the current diagnosis of diseases.
§.§.§ Embedding Matrix
In tab:concept-embedding-ablation we show a concept embedding ablation using an Embedding Matrix. This refers to a matrix of trainable parameters 𝐄∈ℝ^|𝒞| × k where |C| is the total number of considered medical concepts and k the embedding dimension. The embedding matrix replaces the Concept Embedding (Sec. <ref>) module and is pretrained and fine-tuned using the same procedure.
§.§.§ Concept Embeddings using Node2Vec
In tab:concept-embedding-ablation we show a concept embedding ablation using Node2Vec <cit.>. We consider our extracted complex UMLS-based knowledge graph and perform Node2Vec-style pretraining to obtain embeddings for each concept in our knowledge graph.
We then initialize an embedding matrix (used to retrieve concept embeddings by index lookup) and use it to replace our proposed GNN-based concept embeddings. For a fair comparison, we then perform the same reconstruction pretraining as for our proposed approach MMUGL, so that the parameters of the Visit Encoder (Sec. <ref>) module are well pretrained too. Similarly, we apply the same pipeline as for our approach during fine-tuning for downstream tasks.
§.§.§ Concept Embeddings using Cui2Vec
Cui2Vec as introduced by <cit.> is a collection of pretrained medical concept embeddings mapped to the space of UMLS. Their training optimizes a Word2Vec <cit.> style objective over a large-scale corpus (60 million patient records, 20 million clinical notes, and 1.7 million full-text biomedical journal articles). We use the Cui2Vec embeddings to initialize a lookup matrix from which concept embeddings are retrieved by index and replace our GNN-based concept embeddings. To ensure fair comparison we apply the same pretraining (reconstruction) and fine-tuning procedure to obtain downstream task performance results.
§ TRAINING AND ARCHITECTURE ABLATIONS
§.§ Clinical Reports Performance Contribution
In this section, we clarify our findings on why the additional modality of concepts extracted from unstructured text (i.e., clinical reports) does not yield a performance improvement in all cases.
Overall, the billing codes represent an aggregate of information for an entire patient's visit to the hospital and the labels are defined based on them. Thus, the billing codes (ICD, ATC codes) are the strongest signal for our predictions. The additional medical concepts from clinical reports can help in two ways. First, they can help to deal with missing or noisy information from the billing codes (see also Appendix <ref>). Second, they can help the model to make more fine-grained predictions due to the higher level of detail.
Heart Failure Here the additional concepts from clinical reports do seem to help, but in most cases only marginally. We hypothesize that this is because we are only performing a binary prediction, and the finer details of the clinical reports do not, in most cases, yield enough additional information to significantly improve our predictions.
Diagnosis This is a very complex classification task and here we see the strongest improvement after adding the concepts extracted from clinical reports. For this task, we can benefit from the higher level of detail present in the clinical reports compared to the billing codes.
Medication Recommendation On this task the strongest signal comes from the current set of diseases. The additional concepts from clinical reports are only present in the representation of the patient's history, where we do not seem to benefit from the more detailed content of the clinical reports. To avoid information leakage, we cannot directly use all concepts from all reports of the current visit when recommending medications. Accommodating this would require adapting the task to a within-visit online medication recommendation: predicting medication based on the patient's global history (past hospital visits) as well as the local history (time already elapsed within the current visit). This would enable the inclusion of already accumulated clinical reports in the local (current visit) context.
§.§ Ablation: SapBERT
We ablate the use of SapBERT <cit.> against training randomly initialized node embeddings. SapBERT performs better in pretraining (the selection criterion), where we see an increase from 49.38±0.49 to 61.77±0.44 AuPRC. The improvement carries over to the downstream performance, where for diagnosis prediction we see an improvement from 25.46±0.50 to 26.19±0.30 in the F1 (inflated) score.
§.§ Pretraining Ablation
In tab:baseline-comparison-diag we can see that prior work including pretraining schemes performs much more strongly than work that does not.
In tab:pretraining-ablation we perform an ablation w.r.t. pretraining different concept embeddings and report performance on the Diagnosis task (Sec. <ref>) for pretrained and for randomly initialized networks. We note that the more structural bias we provide, the better the performance without pretraining.
In Appendix <ref> and <ref> we present further results on exploring modifications to the pretraining loss function.
§.§ Sum Aggregation Loss
We provide further empirical evidence for the contribution of the additional loss term introduced in Eqn. <ref> in tab:sum-loss-diagnosis.
tab:sum-loss-diagnosis shows results on the Diagnosis downstream task across different Concept Embedding implementations
and with different pretraining regimes. We show results without pretraining, pretraining on only the default reconstruction loss ℒ_recon (Eqn. <ref>) and including the additional introduced loss term ℒ_sum (Eqn. <ref>).
We can see that the additional loss component ℒ_sum during pretraining leads to better pretrained representations: across different downstream models we observe either the same or increased performance. This difference is especially notable for the best-performing model implementation MMUGL with w_∙, m = 0 (Eqn. <ref>, pretraining focused on recovering diseases only).
We hypothesize that without the additional loss regularization we experience stronger overfitting to the training distribution during pretraining, as we have more data available (MMUGL includes additional rich information coming from medical concepts in clinical reports) and we have reduced the task complexity (by setting w_∙, m = 0 in the pretraining loss, Eqn. <ref>). We also observe a tendency towards more consistent results when pretraining includes the ℒ_sum loss component, as standard deviations tend to be lower.
This remains consistent on a further task, e.g., Heart Failure. For MMUGL with w_∙, m = 0 and ℒ_sum included in pretraining, we observe a downstream heart failure prediction performance (on the CGL <cit.> patient split) of 87.60±0.40, which drops to 86.93±0.13 if we pretrain without ℒ_sum.
§.§ Reconstruction Loss
In tab:recon-loss-ablation,tab:recon-loss-ablation-med we perform an ablation with respect to the different weights
in the weighted version of the pretraining reconstruction loss ℒ_recon (Eqn. <ref>).
The base version as introduced by <cit.> considers all weights w_∙, * = 1. This is flexible
in the sense that it does not enforce a bias towards encoding information relevant for disease or medication predictions.
However, by weighting (or fully disabling) the different terms, we can tailor our pretraining to different downstream scenarios.
Please also note that the following experiments have been performed without the additional loss component ℒ_sum (Eqn. <ref>) to focus purely on the effects within the reconstruction loss term ℒ_recon (Eqn. <ref>, <ref>).
Downstream Diagnosis tab:recon-loss-ablation shows this effect on the downstream Diagnosis task. We can see that while having all loss terms active yields strong performance, for diagnosis prediction it is beneficial to pretrain only on loss terms that are predictive of diseases, i.e., w_∙, d = 1 ∧ w_∙, m = 0.
This is further supported by results shown in tab:baseline-comparison-diag and tab:concept-embedding-ablation,
where results on the full MMUGL model (including medical concepts from clinical reports) improve by pretraining with w_∙, m = 0.
Downstream Medication Recommendation tab:recon-loss-ablation-med shows the exact same behaviour when performing downstream medication recommendation. The best performance is achieved by only considering loss terms towards predicting the modality relevant for the downstream prediction task. We can conclude, that cross-modality pretraining is beneficial to learn embeddings that can be useful for a yet unspecified downstream application. However, if the nature of
the target modality of the downstream task is known and the cost of pretraining affordable, we can achieve
better performance by adapting the pretraining to the downstream scenario.
§ CLINICAL REPORT CONCEPT CATEGORY DISTRIBUTION
In fig:apd-text-attention-lin-log-comparison we show the plot discussed in Section <ref> with both logarithmic and linear scale. The plot with logarithmic scale in Figure <ref> is better suited to highlight the fine details and changes in categories such as Respiratory or Radiology. The linear scale in fig:apd-text-attention-lin shows the strong changes caused by the pretraining (compared to the actual token distribution per category) in e.g. the discharge summary type of reports.
One might notice a particularly large drop in tokens from reports of the respiratory type. First we would like to highlight that fig:apd-text-attention-log uses a logarithmic y-axis and thus the absolute number of tokens found in the respective report type is comparatively low. Still, we can observe a change over one order of magnitude. This can be explained by looking more in-depth at the reports of this specific type.
In MIMIC-III, the clinical reports of type Respiratory are mostly highly structured status reports assessing a patient's state w.r.t. the respiratory system. Being structured reports, they yield a large set of matched medical concepts that correspond to the field names of the form to be filled with patient information, and most of the provided assessments do not vary much across patients. As such, many of the medical concepts extracted from these reports are not discriminative across patients, and we thus observe a drop in attention to the tokens extracted from these reports after training the model.
§ HEART FAILURE PERFORMANCE DISENTANGLEMENT
Due to the chronic nature of heart failure, we disentangle the performance on the test set with a fixed model for patients with and without reported histories of heart failure (the target codes have appeared in the patient history). The results are shown in Table <ref>.
The model naturally performs much better on the subset of patients with a reported history of heart failure, exploiting the chronic nature of the disease. However, with our proposed multi-modal approach we see a notable performance improvement on the hard cases of patients without a reported history of heart failure. We conclude that using clinical report concepts backed by a knowledge graph, rather than billing codes alone, aids in understanding disease progressions.
§ SINGLE PATIENT INTERPRETABILITY
In fig:apd-single-patient we present various ways how attention scores of our visit encoder (Sec. <ref>) can be used to provide interpretability of our predictions. We provide an example score analysis of visit 121518 by patient 1784 in the MIMIC-III <cit.> dataset. The patient was assigned the following set of codes:
* ICD: 519.1, 496.0, 414.01, 401.9, 443.9, V45.82
* ATC4: N05CD, A02BC, B01AB, A06AD, C07AB, B05CX, G04CA, A07EA
The scores can be used to highlight the most relevant diseases and medications (fig:apd-patient-overview). By grouping scores of individual codes and computing an aggregate for each group (e.g. 90th-percentile of scores) we can highlight the most relevant disease and medication categories for this patient at the given visit.
We can further extract which of the reports collected during the entire visit contain the most predictive identifiers by computing an aggregated score over the scores of all the matched concepts within each report (fig:apd-patient-overview).
In (fig:apd-patient-top-reports) we then highlight the concepts within the two highest-ranked reports with the largest attention scores.
We can see that the scores are consistent across different modalities: consider, for example, the high score given to the Respiratory category for the disease (ICD) codes (fig:apd-patient-overview), as well as the high scores for concepts related to respiratory conditions (e.g., Tracheomalacia and Carinal reconstruction) found in the two highest-ranked clinical reports (fig:apd-patient-top-reports). We conclude that, for this sample, the unified concept latent space promotes consistency across modalities and can improve interpretability.
§ ROBUSTNESS W.R.T. MISSING INFORMATION
In fig:masking-progression we show the results of an experiment in which we progressively mask a larger percentage of input tokens of different modalities. This is done by replacing the respective token identifiers with the masking token used during the masked-language-modeling-style pretraining <cit.>.
Tokens are either masked randomly or sorted with respect to the attention score assigned to them in the visit encoder. The y-axis shows the pretraining performance w.r.t. Eqn. <ref>: decoding to either of the two modalities (diseases, medications) from the visit representation of either.
The results show that, although the auto-encoding objective is only formulated w.r.t. the disease and medication tokens, the additional text information successfully prevents a stronger decay in performance and helps impute the missing or incorrect information.
We can further see that masking tokens according to their attention scores results in a faster overall decrease in performance, highlighting the benefits of using an attention-based encoder, that can focus on relevant medical concepts when encoding a patient's current state.
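The masking protocol can be sketched as follows; the function signature, the flattening of scores, and the source of the attention scores are illustrative assumptions.

import torch

def mask_inputs(token_ids: torch.Tensor, attn_scores: torch.Tensor,
                mask_id: int, fraction: float, by_attention: bool) -> torch.Tensor:
    """Replace a fraction of input concept tokens with the mask token, either
    uniformly at random or in decreasing order of visit-encoder attention score.
    token_ids and attn_scores are assumed to have the same shape."""
    flat = token_ids.reshape(-1).clone()
    scores = attn_scores.reshape(-1)
    k = int(fraction * flat.numel())
    if by_attention:
        idx = torch.argsort(scores, descending=True)[:k]   # most-attended tokens first
    else:
        idx = torch.randperm(flat.numel())[:k]
    flat[idx] = mask_id
    return flat.view(token_ids.shape)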
|
http://arxiv.org/abs/2307.04340v2 | 20230710044840 | Crystal Structure Generation with Autoregressive Large Language Modeling | [
"Luis M. Antunes",
"Keith T. Butler",
"Ricardo Grau-Crespo"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
Crystal Structure Generation with Autoregressive Large Language Modeling
Luis M. Antunes, Keith T. Butler, Ricardo Grau-Crespo
August 12, 2023
================================================================================
The generation of plausible crystal structures is often an important step in the computational prediction of crystal structures from composition. Here, we introduce a methodology for crystal structure generation involving autoregressive large language modeling of the Crystallographic Information File (CIF) format. Our model, CrystaLLM, is trained on a comprehensive dataset of millions of CIF files, and is capable of reliably generating correct CIF syntax and plausible crystal structures for many classes of inorganic compounds. Moreover, we provide general and open access to the model by deploying it as a web application, available to anyone over the internet. Our results indicate that the model promises to be a reliable and efficient tool for both crystallography and materials informatics.
§ INTRODUCTION
The in silico search for new materials often involves the exploration of a space of compositions in a chemical system, and the investigation of various predicted structural phases in that space (see <cit.> and <cit.> for examples). To predict the structures of unknown materials, a Crystal Structure Prediction (CSP) approach is often employed, which attempts to derive the ground state crystal structure for a given chemical composition under specific physical conditions. CSP approaches are relatively computationally expensive, typically involving ab initio techniques. They often begin with the generation of candidate structures. Examples are the AIRSS <cit.> and USPEX <cit.> approaches. Initializing the search space with sensible structures increases the likelihood of success, and decreases the amount of computation required. It is therefore expected that effective Crystal Structure Generation (CSG) tools would help accelerate the prediction of structures using CSP methods.
Increasingly, techniques from Machine Learning (ML) and data science are being used to solve problems in materials science. <cit.> In particular, generative modelling approaches based on autoencoder architectures and generative adversarial networks (GANs) <cit.> have been used to generate crystal structures. <cit.> Indeed, generative modelling has become commonplace, an outcome catalyzed by astounding advancements in the computational generation of images, audio and natural language over the last several years. <cit.> The Large Language Model (LLM), backed by the Transformer architecture <cit.>, is the approach behind state-of-the-art performance on natural language processing tasks. This approach begins with a generative pre-training step, which is autoregressive in nature, involving the unsupervised task of predicting the next token given a sequence of preceding tokens. <cit.> When such models are scaled to billions of parameters, their effectiveness becomes quite remarkable, as tools such as ChatGPT <cit.> demonstrate.
The LLM approach has recently been used in the context of materials science. <cit.> However, these attempts have been focused on either training and tuning the model for natural language tasks, and utilizing the model in natural language generation scenarios involving chemical subject matter, or training the model on a corpus of expanded chemical compositions for the purposes of generating unseen compositions. An alternate perspective, which we present here, is to train the model on textual representations of inorganic crystal structures, such as the Crystallographic Information File (CIF) format, rather than on corpora of natural language, or chemical compositions alone.
The motivation for this perspective originates from two conjectures: The first states that a sequence of symbols (i.e. tokens) is an appropriate representation modality for many predictive tasks (including those involving chemical structure). The idea of representing any domain with a sequence of tokens may at first seem counter-intuitive. However, consider that even images can be represented this way, and be subject to the autoregressive language modelling of pixels <cit.>. This challenges the notion that domain-specific representations, such as graphs for chemical structure, are necessary for superior performance. The second conjecture states that LLMs learn more than simply “surface statistics” and the conditional probability distribution of tokens. Indeed, autoregressive pre-training involving next-token prediction may result in learning an effective world model: an internalized causal model of the processes generating the target phenomena. A model which simply learns spurious correlations in the data is less desirable, as it may have greater difficulty in generalizing beyond the training distribution. Recent studies have demonstrated that LLMs trained on sequences of board game play (e.g. Chess and Othello) do indeed track the state of the board, and probes of the internal activations of the model reveal the existence of representations of various abstract concepts specific to the domain. <cit.> We therefore asked whether a model trained to predict the 3-dimensional coordinates of atoms, digit-by-digit, could learn the chemistry implicit in crystal structures, and generate unseen structures, borrowing from its model of the world of atoms.
As such, we herein describe the CrystaLLM model, a tool for CSG trained on an extensive corpus of CIF files representing the structures of millions of inorganic solid-state materials. Unlike small molecule organic compounds, the generative modelling of inorganic crystals presents unique challenges: the structures are complex and periodic, are not readily described by simple graphs, and are imbued with different forms of symmetry. Moreover, they can be constructed from more than 100 different elements. Even so, the model is capable of reliably generating correct CIF syntax and physically
plausible crystal structures for many classes of inorganic compounds.
§ METHODS
The following terminology is used in the remainder of the document:
A formula, or reduced composition, refers to the empirical formula, or formula unit, which is the simplest, whole-number ratio of atoms in the compound. An example of a formula is Ba2MnCr.
A cell composition is a chemical formula referring to the total number of atoms of each type in the unit cell of a crystal. It represents the chemical formula of the compound as it would appear in the crystal structure, which might contain multiple formula units. An example of a cell composition is Ba6Mn3Cr3.
§.§ Dataset
The dataset was assembled by obtaining structures from the Materials Project <cit.>, the OQMD <cit.>, and NOMAD <cit.>, which were originally optimized using density functional theory (DFT) simulations. In total, approximately 3.6 million structures were obtained. This dataset consists of compounds containing anywhere from 1 to 10 elements, with most consisting of 3 or 4 elements. The elements up to and including atomic number 94 are present, with the exception of polonium, astatine, radon, francium, and radium. The dataset contains roughly 800,000 unique formulas, and 1.2 million unique cell compositions. When paired with space groups, there are 2.3 million unique cell composition-space group pairs. To choose between duplicate structures containing the same cell composition and space group, the structure with the lowest volume per formula unit was selected. The 2.3 million structures in this dataset were converted to CIF files using the pymatgen library <cit.>, and were used for training. The CIF files were created with the pymatgen option for symmetry finding tolerance set to 0.1 Å. All floating point numbers in the files were rounded to 4 decimal places. The dataset was split randomly into train, validation, and test sets, such that the training set consisted of about 2.2 million CIF files, the validation set 35,000 CIF files, and the test set 10,000 CIF files.
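To make the preprocessing concrete, a minimal sketch of the CIF serialization step is shown below, assuming pymatgen's CifWriter with the stated symmetry tolerance; the rounding helper and the volume-per-formula-unit function used for deduplication are illustrative reconstructions, not the authors' code.

```python
import re
from pymatgen.core import Structure
from pymatgen.io.cif import CifWriter

def structure_to_cif(structure: Structure, symprec: float = 0.1) -> str:
    """Serialize a structure to CIF text with a symmetry finding tolerance of 0.1 Angstrom,
    then round every floating point number in the file to 4 decimal places."""
    cif_text = str(CifWriter(structure, symprec=symprec))
    return re.sub(r"-?\d+\.\d+", lambda m: f"{float(m.group()):.4f}", cif_text)

def volume_per_formula_unit(structure: Structure) -> float:
    """Used to pick the lowest-volume duplicate for a given cell composition and space group."""
    _, n_fu = structure.composition.get_reduced_composition_and_factor()
    return structure.volume / n_fu
```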
§.§ Tokenization
The dataset of CIF files was tokenized prior to training. The vocabulary consisted of CIF tags, space group symbols, element symbols, numeric digits, and various punctuation symbols, for a total of 371 symbols. After tokenization, the training set consisted of 768 million tokens.
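The full 371-symbol vocabulary is not reproduced in this text; the sketch below illustrates the kind of longest-match tokenizer implied by the description, with deliberately incomplete, placeholder token lists.

```python
import re

# Illustrative, incomplete token lists; the real vocabulary contains 371 symbols.
CIF_TAGS = ["data_", "_symmetry_space_group_name_H-M", "_cell_length_a",
            "_cell_volume", "_atom_site_fract_x", "loop_"]
SPACE_GROUPS = ["P6/mmm", "Fd-3m", "Fm-3m", "P4_2/mnm", "R-3m"]
ELEMENTS = ["Ba", "Mn", "Cr", "O", "Si", "H"]

# Longest match first so that multi-character symbols win over single characters.
_SYMBOLS = sorted(CIF_TAGS + SPACE_GROUPS + ELEMENTS, key=len, reverse=True)
_TOKEN_RE = re.compile("|".join(map(re.escape, _SYMBOLS)) + r"|\d|\S|\n| ")

def tokenize(cif_text: str) -> list:
    """Split CIF text into vocabulary symbols; numbers become digit tokens."""
    return _TOKEN_RE.findall(cif_text)

print(tokenize("_cell_length_a 4.5940"))
# ['_cell_length_a', ' ', '4', '.', '5', '9', '4', '0']
```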
§.§ Generative Pre-training
The generative pre-training step requires a vocabulary, 𝒱, and an ordered list of tokens 𝒰 = (u_1, ..., u_n), with u_i ∈𝒱. We want to maximize the following likelihood:
ℒ(θ; 𝒰) = ∑_i log P(u_i | u_i-c, ..., u_i-1;θ)
where c is the size of a context window, P is the conditional probability distribution to be modelled, and θ the parameters of a neural network. We therefore minimize 𝒥(θ; 𝒰)=-ℒ, using stochastic gradient descent to adjust the parameters. We use a multi-layer Transformer decoder <cit.> for the neural network, as described in <cit.>. Our model consists of 25 million parameters, with 8 layers, 8 attention heads, and an embedding size of 512. We decay the learning rate from 10^-3 to 10^-4 over the course of training, and use a batch size of 32.
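As a concrete reading of the objective above, the following PyTorch sketch computes the next-token negative log-likelihood and performs one optimization step; `model` is assumed to be any decoder-only Transformer that maps a (batch, context) tensor of token ids to per-position logits.

```python
import torch
import torch.nn.functional as F

def next_token_loss(model: torch.nn.Module, tokens: torch.Tensor) -> torch.Tensor:
    """tokens: (batch, context+1) integer ids; predict u_i from the preceding tokens."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)                      # (batch, context, vocab_size)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

def train_step(model, optimizer, tokens):
    optimizer.zero_grad()
    loss = next_token_loss(model, tokens)       # minimize J = -L
    loss.backward()
    optimizer.step()
    return loss.item()
```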
§.§ Evaluation
To evaluate the generative capabilities of the model, we define two scenarios where the model is tasked with generating the compounds of the held-out test set. The first scenario, which we name the Cell Composition-only scenario, involves prompting the model with each cell composition in the test set, and having it generate up to a maximum of 3000 tokens. The model is prompted with only the first line of a CIF file, which consists of the data block header, containing the cell composition of the structure specified in the rest of the file. The second scenario, which we name the Cell Composition+Space Group scenario, is similar to the first, except that the model is prompted with both the cell composition and space group, for each entry in the test set. Moreover, we perform the generation 3 separate times for each entry.
To assess how well the model performed in the first scenario, we check if a generated CIF file is consistent in terms of space group, if it is consistent in terms of the atom site multiplicity, and if the generated bond lengths are reasonable. To check if the generated structure is consistent with the printed space group, we use the SpacegroupAnalyzer class of the pymatgen library, which uses the spglib library <cit.>. To check if bond lengths are reasonable, we first use a Voronoi-based nearest-neighbour algorithm in pymatgen to define which atoms are bonded together; then, we establish expected bond lengths based on the electronegativity difference between the bonded atoms, and their ionic or covalent radii. We classify a structure as having reasonable bond lengths if all the detected bond lengths are within 30% of the corresponding expected bond lengths.
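A rough sketch of these checks is given below. The space group check uses pymatgen's SpacegroupAnalyzer (spglib under the hood); the bond-length check is a simplified stand-in that takes the sum of atomic radii as the expected length, whereas the criterion described above additionally uses electronegativity differences and ionic versus covalent radii.

```python
import numpy as np
from pymatgen.core import Structure
from pymatgen.analysis.local_env import VoronoiNN
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer

def space_group_is_consistent(structure: Structure, printed_symbol: str) -> bool:
    """Re-detect the space group (via spglib) and compare it to the symbol printed in the CIF."""
    detected = SpacegroupAnalyzer(structure, symprec=0.1).get_space_group_symbol()
    return detected == printed_symbol

def bond_lengths_are_reasonable(structure: Structure, tol: float = 0.30) -> bool:
    """Simplified check: every Voronoi-detected bond must lie within `tol` of an
    expected length, here crudely taken as the sum of atomic radii."""
    nn = VoronoiNN()
    for i, site in enumerate(structure):
        for info in nn.get_nn_info(structure, i):
            neighbor = info["site"]
            d = float(np.linalg.norm(site.coords - neighbor.coords))
            expected = (site.specie.atomic_radius or 1.0) + (neighbor.specie.atomic_radius or 1.0)
            if abs(d - expected) > tol * expected:
                return False
    return True
```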
The goal of the second evaluation scenario is to establish how often the model can recover the unseen structures of the test set, when prompted with a cell composition and space group. To determine whether a generated structure matches the structure in the test set, we use the pymatgen StructureMatcher class, which performs a structural similarity assessment of two crystals. We use a fractional length tolerance of 0.2, a site tolerance of 0.3 Å, and an angle tolerance of 5 degrees, which are the default values in pymatgen. Both structures are reduced to primitive cells before matching, and are scaled to equivalent volume.
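A minimal example of this comparison with pymatgen's StructureMatcher, using the tolerances quoted above, might look as follows; the file names are placeholders.

```python
from pymatgen.core import Structure
from pymatgen.analysis.structure_matcher import StructureMatcher

# Tolerances quoted above (pymatgen defaults); structures are reduced to
# primitive cells and scaled to equivalent volume before comparison.
matcher = StructureMatcher(
    ltol=0.2,        # fractional length tolerance
    stol=0.3,        # site tolerance (Angstrom)
    angle_tol=5,     # degrees
    primitive_cell=True,
    scale=True,
)

generated = Structure.from_file("generated.cif")   # hypothetical file names
reference = Structure.from_file("test_set.cif")
print(matcher.fit(generated, reference))           # True if the structures match
```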
§.§ DFT Calculations
For the pyrochlore case study, a small number of DFT calculations were performed using VASP, following as closely as possible the settings used in the OQMD project (from which most of the pyrochlore structures seen in training were taken). For example, the recommended PAW potential was used for each element: Zr_sv for zirconium, Hf_pv for hafnium, Lu_3 for lutetium, Pr_3 for praseodymium, Ce_3 for cerium (for the remaining elements, the name of the PAW potential simply matched the element's symbol). The Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional <cit.>, in the generalized-gradient approximation, was used in all calculations. Hubbard (PBE+U) corrections were applied for transition metal elements with unfilled d levels (U_eff=3.8 eV for Mn and 3.1 eV for V). Although the cell parameters reported here correspond to the conventional cubic cell with 8 formula units, the DFT calculations were performed using the primitive cell with two formula units, and sampling of the reciprocal space corresponding to that primitive cell was performed using a 7×7×7 grid, as done for all pyrochlore calculations in the OQMD project.
§ RESULTS
§.§ Assessment of Generation Quality
To assess the quality of the model's generated structures, we considered two scenarios, as discussed in section <ref>. The Cell Composition-only scenario involves prompting the model with the first line of the test set CIF file only (which specifies the cell composition), whereas the Cell Composition+Space Group scenario involves prompting the model from the first line of the test set CIF file to the line specifying the space group (inclusive). The fraction of generated structures that are consistent in terms of space group, atom site multiplicity, and have reasonable bond lengths are presented in Table <ref>.
The generated CIF files of the Cell Composition+Space Group scenario were compared to the corresponding CIF files of the test set using a structure matching algorithm (as discussed in section <ref>). The fraction of matching structures is presented in Table <ref>. The Reduced Unseen column represents the results for formulas that were not seen in training with any Z.
We further examined how closely the generated cell parameters resembled the actual cell parameters, for the cases where there was a structural match. We took the first matching structure for samples that had at least one generated structure matching the test set structure, and measured the R^2 and mean absolute error (MAE) for the true versus generated cell lengths, the true versus generated (i.e. printed) volume, and the implied (from cell parameters) versus generated volume. The results are presented in Table <ref> and Figure <ref>.
§.§ Generalizing to Unseen Scenarios
To further examine the model's ability to generalize to unseen scenarios, we prompted the model with various formulas, and examined its output. The results are presented in Figure <ref>.
An example of the model generalizing to a formula that had been seen in training, but with different space groups, is presented in Figure <ref>a. The formula, Ba2MnCr, was in the held-out test set, with the R3̅m space group. That combination of formula and space group had not been seen in training. The model generated a structure matching the one in the test set on the first attempt, when the space group was provided.
The model also demonstrated the ability to generate plausible structures for formulas not seen in training with any Z. An example is the quaternary compound CsCuTePt. This compound was not in the training set, but was in the held-out test set (with Z=4). The model generated a structure matching the one in the test set, in the F4̅3m space group, on the third attempt when the space group was provided. The generated structure is presented in Figure <ref>b.
Finally, in Figure <ref>c is the generated structure of YbMn6Sn6 <cit.>, an example of the model generalizing to structural motifs with atoms not seen in training. This formula was not seen in training for any Z, and was not in the held-out test set. However, ZrMn6Sn6 was seen in training, in the P6/mmm space group. The model generated a structure in the same space group on the first attempt, without the space group being provided. The generated structure matched the ZrMn6Sn6 structure, with Yb substituted for Zr, and with cell parameters and atomic coordinates adjusted accordingly. This demonstrates the model performing a structure prediction by analogy procedure, as commonly used by materials scientists for discovery <cit.>, despite never having been provided with the procedure to do this.
§.§ Generating Known Structural Classes
The CrystaLLM model was trained on an extensive collection of the various structural classes known to inorganic chemistry. We thus investigated its ability to generate unseen members of these classes. We focused on classes of binary, ternary and quaternary compounds.
§.§.§ Rutiles
Rutiles are a class of binary compounds that adopt a tetragonal unit cell, in the P4_2/mnm space group (Z=2), as is seen in TiO2, from which this class of materials adopts its name. The general formula for rutile oxides is MO2, where M is a metallic species in the +4 oxidation state. Rutile fluorides are also known, where the metal is in the +2 oxidation state.
The model's training dataset consisted of essentially all of the rutiles one might expect to be able to find in nature. Therefore, to test the model's ability to generate unseen rutiles, we requested the generation of theoretically possible, but unlikely compounds, such as AuO2. With gold in a highly unlikely +4 oxidation state, AuO2 is not expected to be formed under most conditions. However, the model was able to imagine what the structure of such a compound might be (when the space group was provided). While TiO2 has cell parameters a=4.594Å, c=2.959Å, the generated rutile gold variant has a=4.838Å, c=3.429Å, reflecting the increased volume occupied by the larger gold atoms (Figure <ref>a).
§.§.§ Spinels
The spinels are a group of ternary compounds with the general formula AB2X4, where A is a cation in the +2 oxidation state, B is a cation in the +3 oxidation state, and X, normally a chalcogen, is an anion. Spinels form cubic close-packed structures, with eight tetrahedral, and four octahedral sites, normally in the Fd3̅m space group.
To explore the model's ability to generate unseen spinels, we selected two samarium spinels: Sm2BO4, which was present in the held out test set, and the thiospinel Sm2BS4, which was absent from both the training and test sets. The model was able to generate the expected spinel structures for both compounds when the cell composition and space group were provided (Figures <ref>b and <ref>c). During training, the model encountered a number of different oxy-, thio-, and selenospinels, and this likely contributed to its ability to generate these two compounds.
§.§.§ Elpasolites
The elpasolites are quaternary compounds with the general formula ABC2X6. The A and C species are typically alkali metal cations in the +1 oxidation state, B is usually a transition metal cation in the +3 oxidation state, and X is a halogen anion. The elpasolites are often referred to as “double perovskites”, since their structures are related to perovskites by the doubling of their unit cell dimensions, and the replacement of the M^2+ cation with alternating M^+ and M^3+ cations. Elpasolites crystallize in the Fm3̅m space group, and are the most common quaternary crystal system reported in the Inorganic Crystal Structure Database (ICSD) <cit.>. We wondered if the CrystaLLM model could generate elpasolites not seen during training.
We selected two elpasolites from the held-out test set that were not seen in training: the fluoride KRb2TiF6 and the iodide K2AgMoI6. The model was able to generate the correct elpasolite structure when the cell composition and space group were provided (Figures <ref>d and <ref>e).
§.§.§ Pyrochlores
The general formula for the pyrochlores is A2B2O7, where A, a trivalent cation, and B, a tetravalent cation, are either rare-earths or transition metals (other oxidation states, e.g. combining monovalent and pentavalent cations, are also possible, but we focus here on the trivalent/tetravalent pyrochlores). Pyrochlores crystallize in the Fd3̅m space group (Z=8). There are many combinations of A and B that are possible for this structure, by using lanthanide ions, actinide ions, and Y(III) for the A species, and various transition metal ions, as well as Ti(IV), Zr(IV), and Hf(IV) for the B species. We investigated whether CrystaLLM could generate valid pyrochlore structures for any unseen combinations, and whether it could estimate reasonable cell parameters in line with the trends observed for the pyrochlore series, as the cell parameters are expected to be correlated with the ionic radii of the A and B cations.
We created a space of pyrochlores consisting of 144 compounds by producing different combinations of A and B species. Of these, 54 were seen in training. We selected 10 compounds from among the 90 not seen in training, and attempted 3 generations with the model, for each. The cell composition and space group were included in the prompt. All generations resulted in valid pyrochlore structures (Table <ref>).
We subsequently performed DFT relaxation calculations on the first generated structure for each of the 10 compounds. One case, Ce2V2O7, was problematic and was excluded from further analysis. This result is not very surprising, since both Ce and V are pathological elements in DFT settings. The DFT-derived value of the cell parameter for each of the remaining compounds is plotted against the mean generated value in Figure <ref>. A good agreement exists between the DFT-derived and generated cell lengths, with an R^2 of 0.62 and an MAE of 0.08 Å.
§.§ Problematic Cases
While the model seems capable of generating structures for many different classes of inorganic crystals, it does nonetheless have difficulty in certain cases. All of the cases appear to involve systems that are rare, and under-represented in the training dataset. For example, the model was generally unable to generate a structure for Mg7Pt4Ge4, the structure of which was reported recently to exist in the P6_3mc space group (Z=2). <cit.> In this case, there were only 38 examples of 7:4:4 systems in the training dataset, none contained Mg or Pt, and none were in the P6_3mc space group.
The current version of the model also seems to struggle with generating phosphates, sulfates, carbonates, and organic-inorganic hybrid structures. Examples include carbonate hydroxide minerals, such as Co2CO3(OH)2 <cit.> and Cu2CO3(OH)2 (malachite). While present in the dataset, they belong to a group of analogous structures for which there are only a handful of examples. While the model can generate Ca5(PO4)3(OH) (hydroxyapatite), it generally fails to generate a valid structure for Mn4(PO4)3. A common theme is the appearance of multiple oxyanions, which can give rise to more complex arrangements of atoms, for which the model may not have seen enough examples. In contrast, the model can generate compounds of the perovskite class reliably. However, over 5,000 examples of the ABX3 (X=O,F) system in the Pm3̅m space group were seen in training.
Future versions of the model will consider strategies for addressing these occurrences of class imbalance.
§.§ The CrystaLLM.com Web Application
To allow for general and open access to the CrystaLLM model, we make it available through a web application at https://crystallm.com. The user of the application is presented with a text field requiring a formula to be entered. Optionally, they may provide the number of formula units (Z) and the desired space group (Figure <ref>). Once the request is submitted, it is sent to a GPU server which has the model in memory. The request is converted into a prompt, and the generated contents are returned to the user. If no Z is provided, we scan through Z values of 1, 2, 3, 4, 6, and 8, and return the first valid structure generated by the model. We validate the generated structure using the same procedure described in the Methods section, checking that the generated structure is consistent in terms of the printed space group, and other elements of the CIF file. If no valid structure can be found, the user is presented with an informative error message, including the option to view the generated content. Requests typically take several seconds to process, but can take longer if no Z is provided and the model has trouble finding an appropriate Z value. Generated structures are displayed in a web browser-based 3D structure viewer provided by the Crystal Toolkit framework, upon which the front-end of the web application is built. <cit.>
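A simplified sketch of the server-side Z-scan logic is shown below. The prompt layout (in particular, where the space group line is placed) and the `generate_fn`/`validate_fn` callables are assumptions made for illustration; the actual application code is not included in this text.

```python
from typing import Callable, Optional
from pymatgen.core import Composition

def build_prompt(formula: str, z: int, space_group: Optional[str] = None) -> str:
    """First line(s) of a CIF file used as the model prompt, e.g. 'data_Ba6Mn3Cr3'.
    The placement of the space group line is an assumption for illustration."""
    cell_comp = (Composition(formula) * z).formula.replace(" ", "")
    prompt = f"data_{cell_comp}\n"
    if space_group is not None:
        prompt += f"_symmetry_space_group_name_H-M {space_group}\n"
    return prompt

def find_structure(formula: str,
                   generate_fn: Callable[[str], str],
                   validate_fn: Callable[[str], bool],
                   z: Optional[int] = None,
                   space_group: Optional[str] = None) -> Optional[str]:
    """Scan Z = 1, 2, 3, 4, 6, 8 when none is supplied; return the first valid CIF."""
    for z_try in ([z] if z else [1, 2, 3, 4, 6, 8]):
        cif = generate_fn(build_prompt(formula, z_try, space_group))
        if validate_fn(cif):          # space group / multiplicity / bond-length checks
            return cif
    return None
```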
By making the model easily accessible, we hope to contribute a potentially useful tool to the materials structure research community. We also hope to receive feedback from users that may help improve future versions of the model.
§ DISCUSSION & CONCLUSION
Here, we have shown that LLMs of the CIF format are able to generate inorganic crystal structures for a variety of known classes. Indeed, the model is able to produce valid and sensible arrangements of atoms in 3-dimensional space by generating xyz coordinates digit-by-digit. The model also seems to have captured the relationship between space group symbols and the symmetries inherent in the structures it generates.
We chose to build a language model of the CIF format (instead of a simplified format, for example, which might include a minimal vocabulary) for several reasons. First, the CIF format is not particularly verbose. The model learns the grammatical structure of the format fairly quickly. We can thus avoid having to devise an intermediate format that requires inter-conversion between more common formats, which could also be error prone. Second, we believe that having the model learn to generate the more redundant parts of the CIF format, such as the cell volume, and Z, which are inferable from prior inputs, helps the model to perform better overall.
While the model can generate sensible structures, this does not by itself make it suitable, as is, for CSP. Just as natural language LLMs, such as GPT-3 and -4, are not suitable chatbots without further fine-tuning, the CrystaLLM model will also need to be fine-tuned for more advanced tasks. Fine-tuning involves an additional and separate training step, where the model's parameters are adjusted in the context of a different task. This may also involve altering the model's output layer, such as to make it suitable for a regression task, for example. Models can be fine-tuned using a variety of techniques, but supervised learning and reinforcement learning <cit.> are most common. One might use reinforcement learning, for example, when a task is not clearly defined as a supervised learning problem. When fine-tuning natural language LLMs for chatbot applications, it is common to use Reinforcement Learning from Human Feedback (RLHF). <cit.> With RLHF, the idea is to gather data from human annotators to be used to train a reward model, which scores generated text according to its desirableness. The reward model is then used as part of a reinforcement learning-based tuning of the LLM. In CSP, one would like to produce ground-state structures (for some given physical conditions). One could thus imagine an analogous procedure where CrystaLLM is fine-tuned for the goal of generating low-energy structures, via feedback from an external evaluator of the generated structure's energy. We call this Reinforcement Learning from Thermodynamic Feedback (RLTF). This procedure would also require a reward model, and such a model should ideally provide a timely estimate of a structure's energy. This excludes time-consuming approaches such as DFT. A viable approach could make use of a separate machine learning-based model of formation energy, such as one based on ALIGNN. <cit.> Indeed, neural network potentials have been used to accelerate the prediction of crystal structures. <cit.>
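Purely as an illustration of what such an RLTF loop could look like (this is a proposal, not an implemented method), a REINFORCE-style update with a surrogate energy model as the reward might be sketched as follows; `generate_fn` and `energy_fn` are hypothetical placeholders.

```python
import torch

def rltf_step(model, optimizer, prompts, generate_fn, energy_fn, baseline=0.0):
    """One REINFORCE-style update: the reward is the negative predicted formation
    energy (e.g. from an ALIGNN-like surrogate), so lower-energy structures are
    favored. generate_fn must return generated token ids and their total log-probability."""
    optimizer.zero_grad()
    total = 0.0
    for prompt in prompts:
        tokens, log_prob = generate_fn(model, prompt)   # sample a CIF from the policy
        reward = -energy_fn(tokens)                     # surrogate energy model
        loss = -(reward - baseline) * log_prob          # policy-gradient objective
        loss.backward()
        total += float(loss)
    optimizer.step()
    return total / len(prompts)
```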
There are several limitations with the current approach. First, none of the structures of the dataset have site-occupancy disorder (fractional site occupancies). Therefore, CrystaLLM cannot generate disordered structures, and may not successfully generate structures for combinations of cell composition and space group that imply a disordered structure. An example is K2NaTiOF5, which is reported to be an elpasolite, in the Fm3̅m space group (Z=4), with F and O species sharing the same crystal site <cit.>. Another limitation is that the CIF files of the dataset were not all created using the same level of theory. The training set is derived from a combination of DFT sources using different settings, functionals, etc., which may make it difficult for the model, in some instances, to learn a consistent relationship between cell composition and detailed structure. <cit.>
Nevertheless, we believe that CrystaLLM will be a useful tool for CSG and materials informatics. We plan to explore fine-tuning the model for physical property prediction tasks, such as the prediction of lattice thermal conductivity, where experimental data is relatively scarce. <cit.> The architecture of the model allows it to be fine-tuned for either composition-based or structure-based prediction tasks. This implies that CrystaLLM may be the basis for a general-purpose materials informatics model, which can be used for generative tasks, and fine-tuned for property prediction tasks that require either composition or structure. If the model is able to transfer what it has learned about the world of atoms to these various predictive problems, it may prove to be a quite flexible tool relevant to many aspects of materials chemistry.
§ NOTE
During development of the CrystaLLM model, we became aware of a pre-print by Flam-Shepherd and Aspuru-Guzik that describes the use of autoregressive large language modelling for molecular and crystal structure generation. <cit.> While the fundamental idea of generating the coordinates of atomic systems token-by-token is the same, our work differs in the following ways: 1, we focus exclusively on the generation of the crystal structures of inorganic materials; 2, we train the model directly on CIF files and CIF syntax, with a vocabulary consisting of CIF tags and space group symbols, in addition to atomic symbols and numeric digits; 3, we use a much larger and custom dataset consisting of millions of CIF files for training the model; 4, our model is symmetry-aware, and supports the generation of structures in specified space groups and for specific numbers of formula units. In summary, we develop a model specifically for the purposes of material structure generation, which produces syntactically valid and physically sensible CIF files as an output.
§ DATA AVAILABILITY
The structures used in the experiments described in this work were obtained from the Materials Project (https://materialsproject.org/), the OQMD (https://oqmd.org/), and NOMAD (https://nomad-lab.eu/). All structures were made available by those sources under the Creative Commons Attribution 4.0 License. <cit.>
§ ACKNOWLEDGEMENTS
This work was partially supported by computational resource donations from Amazon Web Services through the AWS Activate program, obtained with assistance from the Communitech Hub. For the DFT calculations, we used the Young supercomputer facility via the UK Materials and Molecular Modelling Hub, which is partially funded by EPSRC (EP/T022213/1, EP/W032260/1).
§ AUTHOR CONTRIBUTIONS
L.M.A. conceived the project, performed the experiments, and drafted the manuscript. L.M.A. and R.G.-C. designed the experiments. R.G-C. carried out the DFT calculations for the pyrochlore case study. R.G.-C. and K.T.B. supervised and guided the project. All authors reviewed, edited and approved the manuscript.
|
http://arxiv.org/abs/2307.06099v1 | 20230712114522 | RFENet: Towards Reciprocal Feature Evolution for Glass Segmentation | ["Ke Fan", "Changan Wang", "Yabiao Wang", "Chengjie Wang", "Ran Yi", "Lizhuang Ma"] | cs.CV | ["cs.CV"] |
RFENet: Towards Reciprocal Feature Evolution for Glass Segmentation
Ke Fan, Changan Wang, Yabiao Wang, Chengjie Wang, Ran Yi, Lizhuang Ma
August 12, 2023
=====================================================
[1]Corresponding authors.
Glass-like objects are widespread in daily life but remain intractable to be segmented for most existing methods. The transparent property makes it difficult to be distinguished from background, while the tiny separation boundary further impedes the acquisition of their exact contour. In this paper, by revealing the key co-evolution demand of semantic and boundary learning, we propose a Selective Mutual Evolution (SME) module to enable the reciprocal feature learning between them. Then to exploit the global shape context, we propose a Structurally Attentive Refinement (SAR) module to conduct a fine-grained feature refinement for those ambiguous points around the boundary. Finally, to further utilize the multi-scale representation, we integrate the above two modules into a cascaded structure and then introduce a Reciprocal Feature Evolution Network (RFENet) for effective glass-like object segmentation. Extensive experiments demonstrate that our RFENet achieves state-of-the-art performance on three popular public datasets. Code is available at <https://github.com/VankouF/RFENet>.
§ INTRODUCTION
Detecting ubiquitous yet fragile glass-like objects is indispensable for vision-based navigation systems. However, unlike most other daily objects, glass-like objects are more difficult to distinguish from the background due to their transparent property. Besides, such objects mostly share an extremely thin separation boundary with the background, making this task even more challenging.
Due to the above challenges, merely relying on either semantic content or separation boundary to segment out the glass regions
is sub-optimal or at least inaccurate.
Although glass segmentation methods mainly rely on semantic features to predict the semantic map, adding boundary information can still help to obtain global shape context, thus improving the performance of segmentation.
Meanwhile, boundary prediction could also be improved by auxiliary semantic assistance, which can help to reduce the false edge prediction and in turn facilitate a more accurate glass segmentation.
Being consistent with the above observation, some previous glass segmentation works explored either assistance from boundary to semantic or assistance from semantic to boundary: 1) Using boundary to assist semantic:
<cit.> introduced an auxiliary boundary supervision as a guidance to conduct glass segmentation refinement, which helps the prediction of those uncertain regions around the boundary;
2) Using semantic to assist boundary: <cit.> proposed to supervise non-edge parts in a residual style to obtain finer edges and eliminate noisy edges in background, instead of directly performing a boundary feature enhancement.
However, the above two paradigms, i.e., using boundary to assist semantic, or using semantic to assist boundary, both ignored the importance of feature co-evolution from the two sides
(i.e., semantic branch and boundary branch).
In other words, previous methods did not conduct the bi-directional assistance between the
two branches,
and simply adopting one-way assistance results in inferior performance.
To address these issues, in this paper, we propose an adaptive mutual learning mechanism to enable the explicit feature co-evolution between semantic branch and boundary branch. Such a mechanism helps to exploit the complementary information from each branch and is achieved by a novel Selective Mutual Evolution (SME) module.
Specifically, the semantic feature is selectively enhanced with the guidance from the boundary branch, highlighting those weak response regions (especially for the pixels around boundary).
In a similar way, the boundary feature is also selectively enhanced with the guidance from the semantic branch,
mitigating the impact of edge noise from background or internal glass.
With the above mutual learning strategy, the two branches reciprocally optimize each other to explore the intrinsic interdependence between them.
Despite the effectiveness of SME module, some regions remain indistinguishable, such as the pixels around boundary. To remedy this problem, we further propose a Structurally Attentive Refinement (SAR) module. To be more specific, we firstly sample a set of most reliable boundary points in the semantic feature, according to the confidence scores on the predicted boundary map, to capture the global shape information. Then a set of most uncertain points on glass segmentation map are enhanced with the semantic features of previous selected boundary points.
Notably, this adaptive enhancement process is conditioned on the contents of those uncertain points, and is achieved with a cross-attention operation.
In a nutshell, such an attentive feature refinement exploits extra boundary cues to help the inference of those ambiguous points, serving as a globally structural guidance.
The proposed SAR module is pluggable with a simple design, and can also be applied to other boundary assisted methods.
Besides, inspired by pioneering exploration in the multi-scale feature representation, we further equip the above two modules with a cascaded style connection, benefiting from the progressive fusion of multiple receptive fields.
Finally, we propose a Reciprocal Feature Evolution Network (RFENet) for glass-like object segmentation. We conduct extensive experiments against recent competitors on three popular glass-like object segmentation datasets, and our RFENet achieves state-of-the-art performance. The visualized results in Figure <ref> also demonstrate the superiority of our RFENet.
Overall, we summarize our contributions as follows:
* We propose RFENet, a novel glass-like object segmentation model,
which achieves state-of-the-art performance on three popular benchmarks.
* We propose a Selective Mutual Evolution (SME) module to encourage the feature co-evolution of the two branches, which effectively addresses the inferior performance caused by the one-way assistance in previous two-stream methods.
* We propose a Structurally Attentive Refinement (SAR) module to conduct further feature refinement for uncertain points with useful global shape context.
§ RELATED WORK
Glass-like Object Segmentation. It is much more challenging to segment out glass-like objects than those common objects, mainly due to that inner glass regions often share extremely confusing appearance with surrounding background. To remedy this problem, some methods <cit.> resorted to exploit additional multi-modal information, such as 4D light-field, refractive flow map, and thermal image. Unfortunately, those multi-modal data is relatively expensive to acquire, which limits the wide applications. Instead, recent works <cit.> contributed large-scale RGB image datasets for glass-like objects to promote research in related fields. However, due to the special property of glass-like objects, the off-the-shelf semantic segmentation methods<cit.> failed to achieve a promising performance. Similarly, many state-of-the-art salient object detection approaches <cit.> also result in an inferior prediction as the glass may not necessarily be salient.
Therefore, the demand for specialized methods recently attracts more attention in the field of glass-like object segmentation. <cit.> tried to integrate abundant contextual or contrasted features to help distinguish glass regions, implying the importance of contextual information. <cit.> utilizes a context and a texture encoder to extend the model from camouflaged object detection field into the glass detection area. Besides, <cit.> proposed to segment glass-like objects under the assistance of boundary cues, benefiting from the high localization accuracy of boundary. Inspired by existing research, we further reveal the importance of feature co-evolution demand for glass segmentation and boundary learning. Based on this observation, we propose an adaptive mutual learning mechanism to effectively exploit the complementary information between semantic and boundary.
Boundary as Assistance. The boundary contour of glass objects clearly defines their distribution range with pixel-level localization ability. As a result, the effective exploitation of boundary cues becomes crucial in high-precision glass segmentation. Actually, previous research has also demonstrated that introducing boundary cues into the model plays an important role in semantic segmentation <cit.> and salient object detection <cit.>. As for glass-like object segmentation, <cit.> proposed to explicitly predict a boundary map or a decoupled boundary map, and utilized it as a guidance to assist the semantic stream. And <cit.> tried to supervise the edge part as well as the non-edge part to explicitly model the glass body and boundary. Both of them have proved the non-negligible performance gain brought by an appropriate exploitation of boundary cues.
However, boundary prediction is often susceptible to background noise, especially for glass-like objects with their transparent property. Different from existing methods, we propose an adaptive mutual learning mechanism to encourage feature co-evolution between the semantic branch and the boundary branch. Such a novel reciprocal structure helps to reduce the impact of background noise or internal glass reflection, with the guidance from semantic features. Notably, the improved boundary prediction will in turn facilitate the glass segmentation in a cyclic enhancement style.
Glass Segmentation Refinement. The adoption of segmentation refinement techniques has been demonstrated to be effective in further boosting a model's prediction accuracy. For example, <cit.> proposed to use Conditional Random Fields and Guided Filter as post-processing to refine segmentation predictions. PointRend <cit.> and MagNet <cit.> proposed to refine a selected point set using local or global context information. In the field of glass-like object segmentation, EBLNet <cit.> proposed a PGM module to exploit the global shape prior, which further improves the edge prediction precision.
Differently, we propose to adaptively aggregate useful shape context for the most ambiguous points instead of certain boundary points. And the refined point set is dynamically sampled along with the optimization process, imitating hard sample mining strategy. The proposed refinement module provides structural context for predictions of some local regions, such as boundary and reflective regions.
§ METHOD
§.§ Overview
The architecture of our RFENet is illustrated in Figure <ref>. There are two parallel branches in our RFENet: semantic branch and boundary branch. Specifically, we firstly adopt
ResNet50 <cit.> with an ASPP module <cit.> as the backbone network to extract multi-scale deep features. Then the Selective Mutual Evolution (SME) module is applied to encourage the feature co-evolution of the two branches, and the Structurally Attentive Refinement (SAR) module further refines those uncertain points with global shape context. Finally, we integrate the two modules into a cascaded structure to exploit the internal hierarchical feature representation.
Formally, we denote the deep feature representations from different stages in backbone network as F_i, where i ∈{1,2,3,4,5} represents the i-th stage.
The semantic branch predicts glass regions based on the last feature map F_in^s (i.e., F_5), which contains the most rich context information.
For the boundary map prediction, we use a concatenation[We perform bilinear interpolation on F_5 so that it has the same resolution as F_1, i.e., 1/4 of the original image size. We use the same operation for other features F_i when necessary.] of F_1 and F_5 as input feature F_in^b to take advantage of the texture information from low-level features.
As shown in Figure <ref>, SME takes semantic feature and boundary feature as inputs, and obtain the co-evolved features by a mutual operation. Then the evolved features are both fed into SAR module to conduct a further refinement for semantic features under the guidance of boundary cues. The final prediction is obtained by a sequentially stacking of the two modules.
During each stacking process, for SME, we fuse lower-level features to recover the textural details for semantic branch, which gradually aggregates useful multi-scale context information.
For boundary prediction, we repeatedly fuse the finest feature F_1 into the input feature of every stage to exploit more detailed texture information.
The whole stacking process within SME module could be formulated as:
F_i^s, F_i^b =
    SME_i (F_i+1, [F_i+1;F_1]),               if i = 4,
    SME_i ([F_i+1^s;F_i+1], [F_i+1^b;F_1]),   otherwise,
where [·] represents the feature concatenation operation, F_i^s and F_i^b represent the co-evolved semantic and boundary features output by SME.
Then F_i^s and F_i^b are input into the SAR module and obtain a refined semantic feature F_i^s' under the guidance of F_i^b, which could be formulated as:
F_i^s' = SAR( F_i^s, F_i^b).
The concatenation of F_i^s', i∈{1,2,3,4} is used as the final semantic feature F_out^s, which is responsible for the prediction of final semantic map. Besides, to encourage the progressive evolution of intermediate semantic and boundary features F_i^s' and F_i^b, we also attach additional prediction heads on them with the supervision from their individual ground truths.
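A schematic PyTorch-style rendering of this stacking rule is given below, assuming all backbone features have already been resized to the F_1 resolution and that `sme_blocks` and `sar_blocks` hold the per-stage modules; channel bookkeeping is omitted.

```python
import torch

def cascaded_forward(feats, sme_blocks, sar_blocks):
    """feats: {1: F_1, ..., 5: F_5} backbone features resized to the F_1 resolution;
    sme_blocks / sar_blocks: per-stage modules indexed by i = 4, 3, 2, 1."""
    F1 = feats[1]
    refined = []
    Fs, Fb = None, None
    for i in range(4, 0, -1):
        if i == 4:
            sem_in = feats[5]
            bnd_in = torch.cat([feats[5], F1], dim=1)
        else:
            sem_in = torch.cat([Fs, feats[i + 1]], dim=1)
            bnd_in = torch.cat([Fb, F1], dim=1)
        Fs, Fb = sme_blocks[i](sem_in, bnd_in)    # co-evolved features F_i^s, F_i^b
        refined.append(sar_blocks[i](Fs, Fb))     # refined semantic feature F_i^s'
    return torch.cat(refined, dim=1)              # final semantic feature F_out^s
```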
§.§ Selective Mutual Evolution (SME) Module
We firstly introduce the key motivation of our reciprocal feature co-evolution mechanism. Compared with the segmentation of other daily objects, glass-like objects are more difficult to be distinguished from background regions, mainly due to their transparent property.
One feasible workaround is trying to exploit useful boundary cues as assistance, which is also consistent with the human visual perception mechanism.
In such a way, the semantic features around potential boundary will be enhanced and get more attention, which helps the model to capture the extent of glass region.
However, the separation boundary between glass and its surroundings is usually too thin to be predicted accurately.
Inspired by human's visual attention mechanism,
semantic information of glass objects can be used to suppress false glass boundaries and highlight the features around real glass boundaries. Therefore,
we propose to encourage feature co-evolution between boundary branch and semantic branch, simultaneously exploiting boundary cues to assist semantic features and exploiting semantic cues to assist boundary features.
In a short word, a more accurate boundary prediction produces a better glass segmentation, which in turns facilitates a more accurate boundary map, and vice versa.
We then introduce the implementation details of our SME module.
As shown in Figure <ref>, each basic mutual block takes F^s_in and F^b_in as inputs and outputs corresponding features F^s and F^b.
Each block generates a two-channel attention map A from the joint feature representation of F^s_in and F^b_in, and mutually enhances the input features using the attention map to capture complementary information from each other.
Specifically, the attention maps A are generated with a multi-branch aggregation operation on the concatenation of F_in^s and F_in^b. The concatenated feature is first passed through a convolution with a kernel size of 3 to gather local spatial context and to fuse the semantic and boundary information along the channel dimension, ensuring a comprehensive view. The fused feature is then fed into two branches with convolutions of different kernel sizes (5 and 9); these large-kernel branches are designed to capture more long-range glass cues, which ensures more reliable attention scores. Finally, we stack several convolutions and a sigmoid operation on the concatenation of the above two branch outputs to predict the attention maps A. The above process can be formulated as:
A = [a^s; a^b] = σ(Aggregate([F^s_in;F^b_in])),
where A ∈[0,1]^2 × h × w, Aggregate represents the aggregation operation, and [·] represents channel-wise feature concatenation. Then each channel of A (i.e., a^s and a^b) is used as the attention map to enhance F^s_in and F^b_in respectively in a residual manner:
F^u = conv(F_in^u ⊙ a^u) + F_in^u, u ∈{s,b},
where ⊙ denotes element-wise multiplication. In this way, useful complementary information from either branch can be effectively exploited to adaptively enhance the features in the other branch without losing the original content.
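A minimal sketch of one SME block consistent with the description above is shown below; the intermediate channel width, normalization, and activation choices are assumptions.

```python
import torch
import torch.nn as nn

class SME(nn.Module):
    """Sketch of a Selective Mutual Evolution block (channel widths assumed)."""
    def __init__(self, sem_ch: int, bnd_ch: int, mid_ch: int = 64):
        super().__init__()
        in_ch = sem_ch + bnd_ch
        self.local = nn.Sequential(                      # 3x3 conv + channel fusion
            nn.Conv2d(in_ch, mid_ch, 3, padding=1), nn.BatchNorm2d(mid_ch), nn.ReLU(True))
        self.branch5 = nn.Conv2d(mid_ch, mid_ch, 5, padding=2)   # long-range cues
        self.branch9 = nn.Conv2d(mid_ch, mid_ch, 9, padding=4)
        self.to_attn = nn.Sequential(
            nn.Conv2d(2 * mid_ch, mid_ch, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(mid_ch, 2, 1), nn.Sigmoid())               # two attention maps
        self.sem_conv = nn.Conv2d(sem_ch, sem_ch, 3, padding=1)
        self.bnd_conv = nn.Conv2d(bnd_ch, bnd_ch, 3, padding=1)

    def forward(self, f_sem, f_bnd):
        x = self.local(torch.cat([f_sem, f_bnd], dim=1))
        a = self.to_attn(torch.cat([self.branch5(x), self.branch9(x)], dim=1))
        a_s, a_b = a[:, 0:1], a[:, 1:2]                          # a^s and a^b
        f_sem = self.sem_conv(f_sem * a_s) + f_sem               # residual enhancement
        f_bnd = self.bnd_conv(f_bnd * a_b) + f_bnd
        return f_sem, f_bnd
```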
We visualize the predicted two attention maps in Figure <ref> to provide an intuitive explanation of our SME module.
As shown in the figure, for the semantic branch, the features around the boundary are highlighted to accurately determine the
distribution range of glass area.
Notably, some regions outside the glass also receive relatively high attention. We assume that the network could attentively focus on some useful contextual information.
As for the boundary branch, irrelevant contours in background regions are suppressed by relatively weak attention score to mitigate the noise disturbance, which is also in line with our analysis.
§.§ Structurally Attentive Refinement (SAR) Module
Despite the effectiveness of our SME module, there are still some difficult pixels that remain hard to distinguish, such as pixels located at boundaries, in reflective regions, and in background regions with smooth surfaces.
The inference of those ambiguous and difficult points is challenging without extra context priors.
Fortunately, for any given difficult point, there are still several useful context cues to assist its inference, such as the distance to its nearest boundary, the curvature of the local boundary, and the overall shape of the glass.
Therefore, inspired by human's visual reasoning mechanism, we propose to adaptively aggregate the glass shape context as structural priors to help the inference of those ambiguous points.
We firstly sample the features located at the top-confidence boundary points to construct a feature set as a compact representation of the glass shape context.
Then we dynamically select the most uncertain points to conduct feature refinement along with the learning process, which acts as a kind of hard sample mining strategy.
The refinement process is conditioned on the content around the current uncertain points, and is guided by the feature similarity with those boundary points.
With such a refinement design, those uncertain points are allowed to freely explore useful shape context cues without the constraint from limited receptive field.
Following the above methodology, we propose a Structurally Attentive Refinement (SAR) module, as illustrated in Figure <ref>.
The SAR module accepts the semantic feature F^s and boundary feature F^b from SME module as inputs, and outputs refined semantic feature F^s'. Based on the original input semantic features F^s, we can obtain initial glass prediction P_s ∈ℝ^n× h × w with n-class glass prediction head, and initial boundary map P_b ∈ℝ^1× h × w with edge prediction head.
We use the prediction scores in P_s and P_b to select those uncertain points and top-confidence boundary points.
Then we propose attentive feature refinement to enhance features of uncertain points under the assistance of top-confidence boundary points.
After the features of those uncertain points are enhanced, we push them back into F^s according to their original indices to get the refined feature F^s'.
Attentive feature refinement. We introduce the details of the core attentive refinement operation as follows.
1) Firstly, we iterate over all pixels in P_s to calculate their entropy as a measure of prediction uncertainty. Then we select the K points with the highest entropy as the most ambiguous point set, whose semantic features Q are later refined with extra shape context.
2) Secondly, we select M boundary points with the top prediction confidence based on P_b to construct the boundary feature set V. Intuitively, the feature set V could be considered as a compact representation of geometric information of glass, providing useful shape context.
3) Finally, for each feature in uncertain semantic feature set Q, we adaptively aggregate the most relevant shape context feature from boundary feature set V based on the feature correlation.
We then obtain the refined features by fusing the aggregated results in a residual manner, which are later used for final semantic prediction.
Notably, the above-mentioned content-aware attentive refinement process can be easily implemented with an off-the-shelf cross-attention module, if we use the semantic features Q as queries and the boundary features V as both keys and values (i.e., K=V). There are usually several parallel attention heads in one cross-attention module, each of which can be formulated as follows:
Attention(q, k, v) = softmax(qk^T / √(d_k)) · v,
where d_k is the dimension of input features q, k and v.
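The following single-image sketch illustrates the refinement: the K most uncertain points (by entropy of P_s) act as queries, the M most confident boundary points (by P_b) act as keys and values, and the attended features are fused back residually. Here `attn` is assumed to be an nn.MultiheadAttention with embed_dim equal to the channel dimension; the values of K and M and the residual fusion are illustrative choices.

```python
import torch
import torch.nn as nn

def sar_refine(f_sem, p_sem, p_bnd, attn: nn.MultiheadAttention, K=512, M=512):
    """f_sem: (C, H, W) semantic feature; p_sem: (n, H, W) class probabilities;
    p_bnd: (H, W) boundary probability map; attn: MultiheadAttention with embed_dim=C."""
    C, H, W = f_sem.shape
    flat = f_sem.view(C, -1).t()                           # (H*W, C)

    entropy = -(p_sem * p_sem.clamp_min(1e-8).log()).sum(0).view(-1)
    unc_idx = entropy.topk(K).indices                      # most ambiguous points
    bnd_idx = p_bnd.view(-1).topk(M).indices               # most confident boundary points

    q = flat[unc_idx].unsqueeze(1)                         # (K, 1, C) queries
    kv = flat[bnd_idx].unsqueeze(1)                        # (M, 1, C) keys = values
    refined, _ = attn(q, kv, kv)                           # cross-attention over shape context

    out = flat.clone()
    out[unc_idx] = flat[unc_idx] + refined.squeeze(1)      # residual fusion
    return out.t().reshape(C, H, W)
```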
§.§ The Cascaded Connection
To fully explore the multi-scale feature representation within the backbone, we integrate our SME and SAR modules into a cascaded structure. We denote the semantic and boundary features output by SME and SAR at stage i as F_i^s, F_i^s' and F_i^b, respectively. As shown in Equations <ref> and <ref>, we sequentially stack the two modules from the higher-level stage to the lower-level stage, forming a cascaded optimization scheme. Besides, to aggregate more detailed information, F_1 from the backbone is always concatenated with F_i+1^b before being fed into the SME module at each stage i. The semantic features F_i^s' output from every stage are concatenated to produce the final semantic map. At the same time, the intermediate predictions of each stage are also supervised to encourage a progressive feature evolution.
§.§ Loss Design
Our RFENet is supervised with a joint function of semantic loss and boundary loss, which can be formulated as:
L = L_s_out + λ_s ∑_i L_s_i + λ_b ∑_i L_b_i, i∈{1,2,3,4},
where the semantic losses L_s_out and L_s_i are Cross-Entropy Losses and supervise the predictions from both the final glass prediction head, and the intermediate predictions of each stage in semantic branch. Meanwhile, the boundary losses L_b_i supervise the intermediate predictions of each stage in boundary branch. Considering that the boundary points only account for a small range in an image, we use the Dice Loss <cit.> as the boundary loss L_b_i to avoid the sampling imbalance problem. We generate the ground-truth boundary maps following <cit.> with the thickness of 8. The λ_s and λ_b are used to balance the effect from L_s and L_b, which are set to 0.01 and 0.25 for all experiments.
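A compact sketch of this loss is given below, assuming the intermediate predictions have already been resized to the ground-truth resolution; the Dice formulation with a unit smoothing term is one common variant.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred_logits, target, eps=1.0):
    """Dice loss for the thin boundary maps (mitigates foreground/background imbalance)."""
    p = torch.sigmoid(pred_logits).flatten(1)
    t = target.flatten(1)
    dice = (2 * (p * t).sum(1) + eps) / (p.sum(1) + t.sum(1) + eps)
    return (1 - dice).mean()

def total_loss(final_logits, stage_sem_logits, stage_bnd_logits,
               sem_gt, bnd_gt, lam_s=0.01, lam_b=0.25):
    """Final semantic CE plus weighted per-stage semantic CE and boundary Dice terms."""
    loss = F.cross_entropy(final_logits, sem_gt)
    loss = loss + lam_s * sum(F.cross_entropy(l, sem_gt) for l in stage_sem_logits)
    loss = loss + lam_b * sum(dice_loss(l, bnd_gt) for l in stage_bnd_logits)
    return loss
```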
§ EXPERIMENT
§.§ Datasets
Glass Datasets: (1) Trans10k <cit.> is a large-scale transparent object segmentation dataset, consisting of 10,428 images with three categories: things, stuff and background. Images are divided into 5,000, 1,000 and 4,428 images for training, validation and test, respectively. It is by far the largest transparent object segmentation dataset with the most detailed annotations. Taking into consideration of the data amount and scene diversity, we conduct most of our experiments on this dataset to ensure a convincing result.
(2) GSD <cit.> is a medium-scale glass segmentation dataset containing 4,098 glass images, covering a diversity of indoor and outdoor scenes. All the data are randomly split into a training set with 3,285 images and a test set with 813 images.
We use GSD to validate the generalization ability of our method.
Mirror Dataset:
PMD <cit.> is a large-scale mirror dataset that contains 5,096 training images and 571 test images. It contains a variety of real-world images that cover diverse scenes and common objects, making it much closer to practical applications. We conduct experiments on the PMD dataset to demonstrate our model's transferability to mirror segmentation, although it is designed for glass-like objects.
§.§ Evaluation Metrics
We follow the previous works to mainly adopt the following metrics to evaluate the performance of our model: mean Intersection over Union (mIoU), Pixel Accuracy (Acc), Mean Absolute Error (mAE), mean Balance Error Rate (mBER) and F-score. The mIoU is widely used
to calculate the ratio of true positive predictions. The mBER measures a more comprehensive error rate by taking the sample imbalance problem into consideration. The Acc is used to provide a rough estimation of the pixel-level classification ability. The mAE provides a measurement of the absolute prediction error of the segmentation map. Besides, we also follow <cit.> to measure the F-score on the PMD and GSD benchmarks, which gives a more comprehensive view of the precision and recall rates.
§.§ Implementation Details
We implement RFENet using the PyTorch framework <cit.>. The backbone network is initialized with ImageNet pre-trained weight, while the remaining parts are randomly initialized. ResNet50 is used for Trans10k and ResNeXt101 is used for GSD and PMD. We launch the training process on 4 GPUs with synchronized batch normalization, unless otherwise mentioned. For simplicity, we use stochastic gradient descent (SGD) as optimizer, which is scheduled with a poly policy with a power of 0.9.
For Trans10k dataset, input images are resized to a size of 512×512 for both training and testing. The initial learning rate is set to 0.04, and weight decay is set to 0.0001. We use a mini-batch size of 4 for each GPU and run for 60 epochs.
For GSD dataset, following the same setting as <cit.>, the input images are firstly resized to 400×400 and then randomly cropped to 384×384.
Random flipping is used for training.
During inference, the test images are also first resized to 384 × 384 before fed into the network.
The initial learning rate is set to 0.01,
and weight decay is set to 0.0005. We run for 80 epochs with a batch size of 6 for each GPU.
For PMD dataset, we adopt the same setting as PMDNet <cit.>, where the input images are resized to 384×384. The initial learning rate is set to 0.03. The other settings remain the same as those on GSD dataset.
§.§ Comparison With the State-of-the-Arts
To demonstrate the superiority of our method,
we conduct extensive experiments on three datasets, where we compare with recent state-of-the-art methods of glass-like object segmentation as well as some representative methods in common objects semantic segmentation.
The semantic segmentation candidates to be compared are selected by referring to <cit.>.
Quantitative Evaluation. Firstly, as shown in Table <ref>, off-the-shelf methods designed for common objects produce inferior performance, even the best one, DeeplabV3+ (ResNet50). This is in line with our analysis that the special transparent property of glass-like objects makes them challenging to segment directly without extra assistance.
Secondly, we make a comparison with the two recent strong competitors that are also designed for glass-like objects, i.e., Translab <cit.> and EBLNet <cit.>. As shown from Table <ref>, our RFENet with the typical output stride of 16 achieves an impressive improvement with at least 1.5% gain in mIoU. Besides, when we adopt a finer feature map resolution of stride 8, our RFENet sets new records for all metrics on the Trans10k dataset. This promising improvement demonstrates the effectiveness of our core claim that encouraging the feature co-evolution between semantic branch and boundary branch helps to maximize the exploitation of their complementary information.
Thirdly, as shown in Table <ref>, on the GSD dataset, our RFENet achieves an improvement of 3.4% in terms of mIoU, compared with previous SOTA method GlassNet <cit.>. The consistent improvement and competitive performance on GSD dataset demonstrates the generalization ability of our RFENet on other datasets, which is crucial for practical applications.
We further extend our glass segmentation method to a mirror dataset, the PMD dataset.
As shown in Table <ref>, our RFENet achieves even more improvement on mIoU, which clearly demonstrates the transferability of our method.
Qualitative Evaluation. Our method also achieves superior qualitative results which we exhibit in the appendix.
§.§ Ablation Study
In this section, we conduct extensive ablation study to demonstrate the effectiveness of SME and SAR modules. All experiments are conducted on a single GPU for efficiency.
Effectiveness of SME module. We use a two-stream network as our baseline method, in which we directly attach the glass prediction head and edge prediction head to the backbone feature F_in^s and F_in^b. As shown in Table <ref>, we firstly add SME_4 to conduct a single-scale mutual learning, which achieves a significant improvement with 1.5% in mIoU.
Notably, similar improvement from the SME module can also be achieved even if we have added the SAR module, which implies that the two proposed modules work in a complementary way. These quantitative results strongly demonstrate the effectiveness of
the feature co-evolution between semantic feature and boundary feature.
For a more in-depth analysis of the mutual learning mechanism, we implement the one-way assistance by replacing the attentive feature enhancement operation in our SME module with an identity connection. To avoid any potential information flow from the other side, we also stop the gradient back-propagation from the generated attention map. As shown in Table <ref>, both one-way assistance strategies produce inferior performance compared with the bi-directional assistance. It is worth noting that the combination of the two one-way assistance strategies achieves a much larger improvement than the sum of their individual improvements, which shows the benefit of the feature co-evolution.
For further qualitative analysis of the attention map, we illustrate it in the appendix.
Effectiveness of SAR module. As shown in Table <ref>, the SAR module can achieve a further improvement in all metrics on the basis of the SME module. Similar improvement can also be consistently observed for the two-stream baseline model. Since the SME module only provides a local feature enhancement, these results effectively demonstrate the indispensability of assistance from global shape context. Qualitatively, from Figure <ref> we can clearly see that without the SAR module to select and refine uncertain points, there are indeed some points that are difficult to predict correctly.
Effectiveness of cascaded connection. As shown in Table <ref>, the cascaded connection achieves a consistent improvement, which implies the necessity of the multi-scale representation.
§ CONCLUSION
In this paper, we tackle the challenging problem of glass-like object
segmentation with our proposed RFENet. The model contains two novel modules: the Selective Mutual Evolution module for reciprocal feature learning between the semantic and boundary branches, and the Structurally Attentive Refinement module for refining ambiguous points with the global shape prior. Extensive experiments show that our model achieves state-of-the-art performance on the Trans10k, GSD and PMD datasets.
§ APPENDIX
§.§ Overview
In this appendix, we provide more visualization results and analysis, as well as additional details, organized as follows:
* Sec. <ref> provides the visualization of attention maps as heatmaps and an analysis of how the two branches interact with each other via the attention mechanism.
* Sec. <ref> provides qualitative comparisons on Trans10k <cit.> (Sec. <ref>) and PMD <cit.> (Sec. <ref>), where our method achieves state-of-the-art results.
* Sec. <ref> provides additional details, including an inference time comparison (Sec. <ref>) with the current state-of-the-art method EBLNet <cit.> and a stability analysis (Sec. <ref>).
§.§ Attention Map Visualization
Our SME module encourages the co-evolution between semantic and boundary features by predicting corresponding attention maps. The semantic information conveyed to the boundary branch helps to suppress false boundaries and highlight the features around real boundaries, while boundary features help the semantic branch determine a more accurate extent of the object. To analyze how the two branches interact with each other via the attention mechanism, we visualize the attention maps as heatmaps. First, we visualize the attention of the model without the cascade in Figure <ref>, where a more saturated red indicates a higher score.
As shown in Figure <ref>, for the semantic branch, the features around the boundary are highlighted to precisely locate the extent of the glass area.
Notably, some regions outside the glass also receive high attention scores, which may indicate that the model selectively focuses on helpful context.
For the boundary branch, the irrelevant contours in background regions are suppressed by relatively weak attention scores. This demonstrates that semantic information can mitigate the impact of false edges in the background, which is in line with the human visual attention mechanism.
Furthermore, we visualize the attention at each stage after adding the cascade structure, to explore how the multi-scale operation promotes our mutual evolution. The images and their order are exactly the same as in Figure <ref>. As demonstrated in Figure <ref>, for the semantic branch, as the stage goes deeper (index i gets smaller), the attention progressively moves closer to the object boundary. The attention at the first two stages (stage 4 and stage 3) tends to focus on both the inner and outer regions of the glass. The attention outside the glass region gets weaker at the penultimate stage (stage 2), which still pays much attention to the inner region. The attention at the final stage further weakens the importance of the inner region, which implies that the model strives to overcome the texture variation caused by transparency. We attribute this to the constraint information received from the boundary branch, which becomes more accurate as the stage goes deeper.
As for the boundary branch, the attention at each stage attaches the most importance to the contour of the target rather than to other irrelevant boundaries, which is in line with our assumption that the semantic information conveyed to the boundary branch mitigates the noise disturbance. Besides, we also notice that as the stage goes deeper, the attention (red) on the boundary gets thinner, and the contrast between the importance attached to the boundary and to other regions gets lower. We assume this is because the boundary feature gradually becomes accurate, so it is no longer necessary to attach markedly different attention to the boundary and to other regions. The model adapts to this dynamically over the whole cascade process.
Moreover, the overall process demonstrates that the cascaded structure promotes model performance in a coarse-to-fine manner, which is essential for the discriminative ability of the network.
In conclusion, our SME module effectively utilizes complementary information from the boundary and semantic features by conveying information with attention maps.
§.§ Qualitative Evaluation
Our RFENet model achieves superior results compared with other methods. We exhibit the visualization results on Trans10k <cit.> (glass dataset) and PMD <cit.> (mirror dataset) respectively.
§.§.§ Visualization on Trans10k dataset
As shown in Figure <ref>, we exhibit the qualitative results of our proposed RFENet and two recent strong competitors, i.e., EBLNet <cit.> and Translab <cit.>.
Specifically, in the first and second rows, the existing methods fail to detect the tiny edges.
In the third and fourth rows, there are highlighted regions that the existing methods fail to distinguish.
The fifth and sixth rows show challenging images with very large glass areas.
The seventh and eighth rows show images with occlusions in front of the glass region, where the existing methods mistake the occlusion for part of the glass.
The last row shows an input with different types of glass overlapping, in which the overlapped areas are more difficult to distinguish.
In contrast, our method segments the glass region correctly in all these challenging cases, which demonstrates the superiority of our method over the state-of-the-art methods.
§.§.§ Visualization on PMD dataset
We then show the visualized results on PMD <cit.> to verify the transferability of our method on the mirror dataset. We compare our RFENet with four state-of-the-art methods <cit.>, as shown in Figure <ref>.
In the first row, the mirror is too small to be easily identified, and most of the existing methods fail to detect it.
The second row shows an image with complex illumination, where the existing methods tend to segment only parts of the mirror region.
The third to fifth rows show images with complex scenarios, where some existing methods fail to detect the less salient mirrors.
In the last two rows, the reflected object overlaps with the mirror over a large area, which also leads to the inferior performance of the existing methods. In contrast, our model still achieves promising results.
In conclusion, all these visualizations demonstrate our model’s transferability even though it is primarily designed for glass-like objects. This indicates our design might also be inspiring for mirror segmentation.
§.§ Additional Details
§.§.§ Inference time
As shown in Table <ref>, compared with the SOTA method, the RFENet with our two key contributions (row 2) already achieves a better result with a comparable computation cost, demonstrating the high efficiency of our key idea (i.e., the reciprocal feature evolution).
§.§.§ Stability analysis
We further run our RFENet on the Trans10k dataset three times. As shown in Table <ref>, the standard deviations of the three main metrics are low enough to show that our model performs well consistently rather than by coincidence.
§ ACKNOWLEDGEMENTS
This work was supported by the
National Natural Science Foundation of China (No. 61972157, No. 72192821),
Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102),
Shanghai Science and Technology Commission (21511101200),
Shanghai Sailing Program (22YF1420300, 23YF1410500),
CCF-Tencent Open Research Fund (RAGR20220121),
Young Elite Scientists Sponsorship Program by CAST (2022QNRC001).
|
http://arxiv.org/abs/2307.04494v1 | 20230710113346 | Enabling Faster Locomotion of Planetary Rovers with a Mechanically-Hybrid Suspension | [
"David Rodríguez-Martínez",
"Kentaro Uno",
"Kenta Sawa",
"Masahiro Uda",
"Gen Kudo",
"Gustavo Hernan Diaz",
"Ayumi Umemura",
"Shreya Santra",
"Kazuya Yoshida"
] | cs.RO | [
"cs.RO"
] |
The exploration of the lunar poles and the collection of samples from the martian surface are characterized by shorter time windows demanding increased autonomy and speeds. Autonomous mobile robots must intrinsically cope with a wider range of disturbances. Faster off-road navigation has been explored for terrestrial applications but the combined effects of increased speeds and reduced gravity fields are yet to be fully studied. In this paper, we design and demonstrate a novel fully passive suspension design for wheeled planetary robots, which couples a high-range passive rocker with elastic in-wheel coil-over shock absorbers. The design was initially conceived and verified in a reduced-gravity (1.625 m/s^2) simulated environment, where three different passive suspension configurations were evaluated against a set of challenges—climbing steep slopes and surmounting unexpected obstacles like rocks and outcrops—and later prototyped and validated in a series of field tests. The proposed mechanically-hybrid suspension proves to mitigate more effectively the negative effects (high-frequency/high-amplitude vibrations and impact loads) of faster locomotion (>1 m/s) over unstructured terrains under varied gravity fields. This lowers the demand on navigation and control systems, impacting the efficiency of exploration missions in the years to come.
§ INTRODUCTION
Robots have eased many of the tasks performed in space. They have assisted humans in building habitable labs in low-Earth orbit and have traversed the deserted lands of Mars in the name of science. Upcoming exploration missions and currently road-mapped space activities require, however, robots capable of performing in domains for which new technological innovations are necessary.
The growing interest in exploring the lunar poles serves as a good example. Hydrogen-rich elements and other volatiles have been identified within the surface and subsurface layers of polar regolith <cit.>. The extraction and use of these compounds could prove essential for the long-term sustainable exploration of space. But unlike equatorial regions previously visited, the poles of the Moon harbor extreme terrain elevation changes, day-night temperature fluctuations of more than 300 K, and a large number of regions rarely struck by natural illumination—all while constrained by a sun that at times barely rises above the horizon <cit.>. These features demand faster, more effective, and highly autonomous robotic platforms capable of coping with a wide range of environmental constraints unfaced by previous missions.
§.§ Contributions
In this paper, we present the first prototype of a new fully passive suspension design capable of enabling planetary robots to safely negotiate unstructured terrains at speeds that approach 1 m/s (see fig:ex1)—two orders of magnitude larger than conventional rover speeds. We strive to understand what effects the combination of increasing speeds and a reduced gravity field has on the locomotor performance of rovers while addressing the following questions:
* What is the level of perturbations endured by free-balancing suspensions when facing some of the salient, unavoidable features of the lunar surface at 1 m/s?
* What degree of improvement could be obtained from the addition of passive energy-dissipation devices?
* Which passive suspension configuration provides the best results?
§.§ Background
Passive, inelastic, free-balancing suspensions in 4-to-8-wheeled chassis configurations have been employed by most of the rovers commissioned to explore the Moon and Mars. These suspensions are optimized for supporting and evenly distributing the weight of the rover, allowing it to overcome irregular terrains and obstacles, and mitigating the effects of impacts and vibrations while isolating the sensitive optics and electronics from these unwanted effects. Additionally, the suspensions of planetary robotic platforms are heavily constrained in terms of mass, volume, and power, making the rocker-bogie (RB) suspension <cit.> the most widely used type of suspension design.
First developed in the frame of NASA's Mars Pathfinder mission <cit.>, the RB suspension consists of a mechanism of two linkages (see Fig. <ref>). In the most commonly used configuration, a larger forward linkage called the rocker is fixed to the front wheel at one end and attached to the smaller rearward linkage, the bogie, at the other end through a free-rotating pivot point. Intended for 6-wheel configurations, the middle and rear wheels are each linked to both ends of the bogie. The rockers of both sides are connected together and attached to the chassis through a differential that maintains the body of the rover at a pitch angle equal to the average rotation of the two rockers.
The RB suspension effectively accomplishes the main functions previously described. Irregular topography and obstacles of a size comparable to the wheel diameter can be overcome without losing contact with the ground. NASA's Sojourner, which presented a reversed RB configuration (i.e., bogie facing forward), Mars Exploration Rovers (Spirit and Opportunity), Mars Science Laboratory (Curiosity), the newest Mars-2020 rover (Perseverance), and China National Space Administration's (CNSA) Yutu-2 rover were all designed with an RB suspension. At the same time, these missions have been characterized, however, by one significant limitation: speed.
The demand for rovers capable of operating at speeds much higher than the ones previously considered is rapidly growing: from speeds of just a few cm/s to ones on the order of 1 m/s <cit.>. At the same time, missions continuously required increased levels of autonomy, which consequently means systems need to reliably cope with a higher degree of perturbations. This must be accomplished while maintaining the mechanical simplicity and reliability of the locomotion system and without excessively increasing the rover mass or its power requirements. At this speed level, inertial effects start to dominate the interaction with the ground <cit.>, which together with increased vibrations and impact loads may require the use of energy dissipation devices.
The RB suspension was designed for operational speeds below ∼10 cm/s. At higher speeds, the structural integrity of the suspension and the stability of the robot cannot be ensured. Attempts have been made to broaden the range of applications of RB suspensions by independently controlling the speed of the wheels <cit.> or dynamically adapting the suspension configuration <cit.>. In an effort to find alternative solutions that are better suited to a wider range of environmental conditions and speeds, actively articulated and adaptive suspension designs have been widely discussed and proposed <cit.>. While most of these solutions could be perfectly employed to maximize traversability and to minimize the detrimental effects resulting from high-speed locomotion, most rely on the optimal performance of other systems (e.g., hazard detection and terrain segmentation) or require an additional, non-negligible supply of power (e.g., to operate additional electromechanical actuators).
Fast lunar vehicles are, however, not completely new to the space exploration scene. The Russian lunokhods and NASA's two-crew-piloted Lunar Roving Vehicle (LRV) were capable of traveling at speeds that far exceeded those of present lunar and martian rovers. The lunokhods were driven at a maximum speed of 0.5 m/s, whereas the LRV was reported to have reached a top speed of ∼5 m/s while commanded by Eugene Cernan during Apollo 17 <cit.>. Despite these numbers, the capability to drive faster seemed to be closely associated with their ample power reserves and the direct human input—both vehicles were either directly piloted or teleoperated from Earth—rather than with variations purposely introduced in their suspension designs <cit.>.
The LRV inherited a suspension frequently used in conventional road vehicles with slight modifications. It consisted of an independent double-wishbone suspension with elasticity provided through transverse torsion bars in both upper and lower control arms, in addition to compliant wheel rims; and damping provided through a conventional silicone-oil damper <cit.>. The vertical stiffness of the suspension-wheel combination was 2.4 kN/m <cit.>. With regards to the performance of the suspension, Apollo 16 astronauts reported feeling “quite at home” traveling over the ridges south of the landing site toward Stone Mountain <cit.>. Despite the generally positive attitude of the astronauts, their reports also describe the tendency of the suspension to bounce uncontrollably when traveling over surfaces with a large density of small craters ( 1 m) and the impossibility to steer effectively, i.e., without excessive side slip, at speeds above 1.4 m/s. The barren and subdued lunar landscape and the need to drive at times directly toward the Sun did not make negotiating these obstacles any easier.
On the other hand, Lunokhod 1 (Luna 17) and Lunokhod 2 (Luna 21) were remotely operated rovers weighing ∼800 kg each. The lunokhods were designed with an 8-wheel suspension consisting of four carriers fixed to the bottom of the chassis <cit.>. A pair of rigid wheels with their respective swing arms were attached to each carrier. To deal with the high speeds and the heavy weight of the lunokhods, mechanical loads were dissipated through 3-beam torsion bars attached to the swing arms. No damping device was introduced in the design. Vertical stiffness varied from 8.8 kN/m of the front suspension-wheel combination to 3.5 kN/m of the middle combination. Similar issues to those experienced during the Apollo missions were found, although these were often associated with the poor illumination conditions of the lunar surface, the limited lookahead distance and deficient feed quality of the navigation cameras, and the inexperience of the operators <cit.>.
§ PRELIMINARY ANALYSIS
During the Apollo and Luna missions, data on the suspension performance were never collected. Subjective evaluations on rideability and operability are insufficient to argue in favor of one or another suspension configuration, particularly for its application to autonomous robots. We, therefore, developed a series of simulation modules to understand the relative improvement and potential limitations that the addition of passive energy dissipation devices may introduce when attempting to travel faster—up to a speed of 1 m/s—on reduced-gravity, unstructured environments compared with the performance of conventional rigid suspensions. These simulation modules were run on Coppelia Robotics' simulator CoppeliaSim <cit.> in combination with CM Labs' high-fidelity physics engine Vortex.
§.§ Multibody dynamic model
As the baseline for our comparative analysis, we defined the dynamic model of a 4-wheel, All-Wheel Drive/2-Wheel Steering (AWD/2WS) rover with three different passive suspension configurations: 1) conventional rocker arms linked by a differential, 2) independent wheel suspensions guided by shock absorbers, and 3) a novel configuration based on the combination of the previous two, i.e., independent in-wheel compliant suspensions connected at each side of the rover by a free-balancing rocker. These configurations are referred to hereafter as 1) dependent-rigid (DR), 2) independent-elastic (IE), and what we named 3) mechanically-hybrid suspension or MHS. General characteristics of the rover model are presented in Table <ref>.
The dynamic model was based on the design of ElDorado-2 <cit.>, a long-standing robotic platform previously used in the Space Robotics Lab at Tohoku University (see Fig. <ref>). Each suspension configuration presented the same mass distribution and suspension kinematics. In the case of the IE configuration, the rocker arms were locked in a horizontal position and a spring-damper system was introduced between the end of the arms and the wheel hub. Spring-damper systems were simulated by means of prismatic joints whose reaction force, F, is controlled by a proportional-derivative controller in which the proportional and derivative gains are replaced by the spring ratio, k, and damping coefficient, c, respectively (Eq. <ref>).
F = k e_i + c (e_i - e_{i-1})/Δ t,
where e_i describes the elongation of the joint at a time i, and Δ t the selected time step. Deformations within the linkages of the suspension were neglected. The defining parameters of the shock absorbers were 2 kN/m for the spring constant and 350 Ns/m for the damping coefficient with 35 mm of free-length. These are generic values intended to provide a stable movement and a limited static deflection. The optimization of suspension parameters requires the specific amplitude and frequency response of the unsprung and sprung masses against a particular set of excitations, which subsequently demands as inputs a list of mission-driven and design-specific requirements—a type of analysis for which this work was not intended.
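For illustration, the spring-damper reaction force above can be implemented in a few lines; the spring constant and damping coefficient reuse the values quoted in the text, while the time step and elongation values below are our own choices for the example.

```python
def suspension_force(e_curr, e_prev, dt, k=2000.0, c=350.0):
    """Reaction force of the simulated prismatic joint: a spring term
    proportional to the elongation plus a damping term proportional to the
    finite-difference elongation rate."""
    return k * e_curr + c * (e_curr - e_prev) / dt

# example: joint compressed from 0 to 5 mm over one 1-ms physics step
f = suspension_force(e_curr=0.005, e_prev=0.0, dt=1e-3)
print(f"reaction force: {f:.1f} N")   # 10 N spring + 1750 N damping = 1760.0 N
```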
§.§ Simulation modules
We developed a series of obstacle negotiation and gradeability modules. The environmental characteristics of each of these modules were based on features commonly found on the surface of the Moon (see Fig. <ref>).
The obstacle negotiation module consisted of three different submodules that vary based on the type of obstacle faced: step obstacles of increasing height, a dynamically-enabled 10-cm hemispherical rock, and a 1.5-m long outcrop—i.e., a partially exposed section of bedrock—with protrusions as high as 10 cm, all modeled within CoppeliaSim.
The gradeability module presented 1.5-m slopes of increasing inclination up to a maximum of 30°. For the sake of maintaining the comparative analysis within reasonable margins, only situations where the robot faced the slopes at a heading angle of 0° (straight climbing) were simulated. The robot initially drove on flat ground until the appropriate speed was reached.
A gravity field of 1.625 m/s^2 representative of the lunar surface was used for all our simulations. No closed-loop traction or motion control was implemented.
§.§ Wheel-soil contact model
The lunar surface is characterized by a top layer of fine-grained, slightly-cohesive regolith. A specific wheel-soil contact model would be necessary to accurately describe the complex behavior of a rigid wheel interacting with this kind of particulate material <cit.>. To the extent of our knowledge, none of the analytical models available in the literature—most of which are based on the Bekker-Reece-Wong terramechanic equations <cit.>—are capable of faithfully representing the full extent of physical phenomena taking place, much less those governing the dynamic interaction of a fast-moving, lightweight vehicle <cit.>. Numerical models were intentionally avoided due to the increased computational load these models require.
For the sake of simplicity, and in order to have a symbolic representation of the frictional behavior of lunar soil, we opted for a Coulomb friction model with an isotropic friction coefficient of 0.4 for the wheel-soil contact—frequently used as representative of metallic wheels rolling over sandy terrains—and 1.0 for the interaction with obstacles such as rocks and outcrops.
§.§ Performance evaluation parameters
The comparative evaluation of the performance was based on the success of each configuration, the maximum vertical load and pitch torque measured at both ends of the rocker arms, and the maximum vertical acceleration experienced by the chassis. Additionally, the trajectory of the robot was recorded to understand the level of longitudinal and lateral slippage future traction control systems would have to overcome.
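As an illustration of how these metrics could be extracted from logged simulation data, consider the sketch below; the dictionary keys are hypothetical names for the recorded time series, not the actual logging interface of the simulator.

```python
import numpy as np

def evaluate_run(log):
    """Reduce one simulated run to the scalar metrics used for comparison.
    `log` is assumed (our naming) to contain equally sampled time series:
    rocker-end vertical loads and pitch torques, chassis vertical
    acceleration, and the planar trajectory of the robot."""
    return {
        "max_vertical_load": float(np.max(np.abs(log["rocker_vertical_load"]))),
        "max_pitch_torque":  float(np.max(np.abs(log["rocker_pitch_torque"]))),
        "max_vertical_acc":  float(np.max(np.abs(log["chassis_acc_z"]))),
        # lateral slippage proxy: worst deviation from the commanded straight
        # path, assuming the path runs along the x-axis (our convention)
        "max_lateral_dev":   float(np.max(np.abs(log["position_xy"][:, 1]))),
    }
```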
§.§ Results and discussion
§.§.§ Obstacle negotiation performance
The heatmaps shown in Fig. <ref> represent the level of success of the different suspension configurations in overcoming perfect steps of height 1–12 cm at speeds ranging from 0.05–1 m/s. Due to their vertical profile, steps are considered the most challenging obstacle to negotiate. Green represents success, red indicates failure to drive over the step, and yellow defines situations in which the front wheel successfully overcame the step but the rear wheel was trapped or completely missed the step due to excessive lateral slippage.
Table <ref> lists the maximum values of vertical load, pitch torque, and acceleration experienced at top speed (1 m/s) in every case and for each configuration. The results obtained from the obstacle negotiation module illustrate the considerable benefit obtained from the addition of passive energy dissipation devices to the suspension design. On average among all the cases analyzed (steps, rocks, and outcrops), a 71% reduction in maximum impact load, a 37% reduction in maximum pitch torque, and a 33% reduction in maximum vertical acceleration of the chassis were observed when elasticity and damping were incorporated into the design. When compliant in-wheel suspensions were then combined with a high-range free-balancing rocker, the MHS outperformed the other two in every situation, overall reducing by 62% and 43% the detrimental effects of an irregular terrain when compared to the DR and IE configurations, respectively. The compliance of the in-wheel suspension attenuates the high-frequency/high-amplitude vibrations while the dependency of the rocker provides a more efficient weight transfer allowing the rover to overcome large obstacles without inducing excessive traction losses or instabilities (see fig:gforce).
Additional evidence of the improved stability brought by the MHS configuration is illustrated by the vertical trajectory of the robot when traversing the outcrop (see Fig. <ref>). With both the DR and IE suspensions, the robot experienced a full take-off (four wheels in the air) followed by a complete rollover, a situation that was avoided in the case of the MHS. The suspension kept the rover stable and the wheels always in contact with the jagged surface except when first impacting the edge of the outcrop when both front wheels were briefly lifted from the ground due to the sudden rebound of the in-wheel suspension; a behavior that could be mitigated with a further optimization of the suspension parameters.
§.§.§ Gradeability
Less variation in the level of success of the different configurations was observed when the rover faced 1.5-m slopes of 5–30° at speeds ranging from 0.05–1 m/s (see Fig. <ref>). At higher speeds and regardless of the configuration, the top of the steepest slopes (20° and 25°) was often reached with just the rear wheels in contact with the ground—a predominant effect in the IE and MHS configurations due to the excessive rebound of the suspension upon first confronting the slope.
The maximum vertical loads, pitch torques, and vertical accelerations experienced by the rover when facing a slope of 20° at 1 m/s are gathered in Table <ref>. We initially expected greater levels of variation in the gradeability performance of the different configurations given the evidence presented in <cit.>. In this work, the climbing ability of rocker arms evidently outperformed that of independent swing arms under every circumstance evaluated. Due to the slip-dominant nature of the vehicle-ground interaction when climbing a slope, it is possible that the absence of a more accurate representation of the wheel-soil contact behavior in our simulation modules and the lack of an active control scheme resulted in the lack of variation in the levels of success of the different suspension configurations. Nonetheless, and in line with the observations previously made, the MHS configuration successfully mitigated the negative effects of impacts and vibrations beyond what was accomplished by either the DR or IE configurations.
§ SYSTEM DESIGN
In light of the evidence provided by the results of the simulations, we conceived a new rover prototype, dubbed Explorer 1 (EX1), based on the principles of the MHS configuration (see Fig. <ref>).
EX1 was designed with a 4-wheel AWD/4WS locomotion configuration capable of achieving a maximum operational speed of 1 m/s. High-travel aluminum rocker arms are linked together and attached to the chassis through a 3-gear differential box housed inside the body frame. These rocker arms have a range of motion of about ±250 mm (2.5 times its wheel radius), only limited by the length of the wire harness of the actuator drive electronics. Attached at both ends of each rocker is a double-coil-over elastic suspension providing a lower travel range of 35 mm. The harmonic drives of the steering motors act as the connecting pieces between the rocker arms and the compliant component of the suspension. This allows the latter part of the suspension to rotate with the wheel during steering, but has the inconvenience of varying the scrub radius—the distance between the steering axis and the vertical centerline of the wheel—based on the level of compression of the suspension; a shortcoming that was assumed in favor of the modularity of the design (i.e., the design is easily adaptable to 6- and 8-wheel configurations) and due to the short free-length of the damper.
The low-travel suspension consists of upper and lower control arms passively commanded by a pair of 104-mm shock absorbers connected to the top of the wheel knuckle (see fig:lts). This arrangement maintains the camber angle nearly constant during wheel travel. Two parallel shock absorbers were used to reduce the stiffness required on the springs while providing a certain level of redundancy in the design. The shock absorbers are formed by a replaceable 2.5 kN/m spring (5 kN/m per wheel) and an adjustable damper and were selected off-the-shelf from a radio-control car manufacturer. Both the bracket and the wheel knuckle were designed with multiple mounting points so that the overall stiffness of the suspension can be slightly modified by tilting the orientation of the dampers. The stability limits of the design were verified in simulations, achieving a static longitudinal/lateral stability under lunar gravity of 30° and a quick and smooth response to dynamic perturbations such as steps and cornering maneuvers (see fig:stability).
§ FIELD TEST RESULTS
To validate the locomotor performance of EX1, we conducted a field test campaign in a representative sandy field. While these tests allow us to functionally validate a new suspension design for ground testing purposes, it should be noted from the outset that this approach is not suited to optimizing design parameters with respect to a potential flight model configuration. The conventional approach to validating the mobility performance of planetary rovers on Earth prior to their missions suffers from a strong limitation in situations when speed plays a role. While gravity scaling is often applied to testing platforms—i.e., adapting the mass of engineering models to represent the overall weight of the flight model at destination—observable behaviors under testing are only representative when the quasi-static approximation can be applied. The moment dynamic effects dominate the behavior of the rover <cit.> and its interaction with the ground <cit.>, as is the case with our experiments, the full-body mass of the rover shall be used for a representative characterization of the performance and subsequent optimization of design parameters. This drastically affects the rover response to environmental and operational stimuli <cit.>. In these cases, gravity offloading must be applied <cit.>, but further work would still be required to properly model the complex interplay of inertial, gravitational, and frictional forces taking place.
§.§ Dynamic stability
The first experiment was aimed at evaluating the contribution of the independent shock absorbers when moving at high speed over a 10-m, nearly-flat, unconsolidated ground in both transient and steady-state conditions. In this case, the rover was commanded to follow a straight trajectory divided into three phases: a) a first phase where the rover accelerates up to 1.0 m/s, b) a second phase where the rover runs at a uniform maximum speed of 1.0 m/s, and c) a final phase where the rover is decelerating to a full stop (see fig:maneuverability_field_test).
We performed these tests with the rover in two different suspension configurations: a representative DR configuration, in which the rocker is free to rotate but the shock absorbers are replaced by rigid elements locking the low-travel suspension in place; and the MHS, with the elastic elements of the suspension free to move. Six runs were conducted for each suspension configuration. Table <ref> lists the result of comparing the two configurations based on the vertical acceleration of the chassis as recorded by an IMU fixed to the top of the attachment element of the left-side rocker (see fig:ex1). To reduce the level of sensor read noise, a 4-point moving average filter was applied before extracting max. and min. values, while the mean of the standard deviation of the vertical accelerations was computed from the original, unfiltered data across all six runs.
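The post-processing just described can be reproduced with a short script of the following form; the array names are ours, and only the 4-point moving average, the max/min extraction, and the averaged standard deviation follow the text.

```python
import numpy as np

def summarize_runs(acc_runs):
    """Summarize the IMU vertical accelerations of repeated runs: smooth each
    run with a 4-point moving average before taking max/min, and average the
    per-run standard deviations of the raw (unfiltered) signals.
    `acc_runs` is a list of 1-D arrays, one per run."""
    kernel = np.ones(4) / 4.0
    maxima, minima, stds = [], [], []
    for acc in acc_runs:
        smoothed = np.convolve(acc, kernel, mode="valid")
        maxima.append(smoothed.max())
        minima.append(smoothed.min())
        stds.append(acc.std())          # std computed from the original data
    return max(maxima), min(minima), float(np.mean(stds))
```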
Results confirm an overall reduction of the vertical accelerations experienced by the chassis when the MHS is used. This is particularly significant during the acceleration phase where the rover experienced greater vibrations due to an observed increase in wheel slippage. This is aligned with a well-established understanding of the performance of elastic suspensions in off-road terrestrial vehicles but it was important to evaluate the potential interference that in-wheel shock absorbers could have on the movement of the free-balancing rockers, less common in terrestrial applications.
§.§ Obstacle negotiation capability
In this second experiment, the rover was commanded to drive its left-side wheels over a 10-cm rock (see fig:obstacle_negotiation_field_test). We now wanted to observe the potential differences in performance when dependency was introduced into the design of a conventional independent suspension configuration. We compared the obstacle negotiation capability between an IE configuration (i.e., rocker rotation locked) and the MHS. Tests were performed at three different speeds (0.2, 0.5, and 1.0 m/s) and each test was conducted three times for each suspension configuration and speed. The magnitude of the force applied to the front wheels was recorded by an in-wheel force/torque sensor and vertical accelerations were measured by the same IMU as in the dynamic stability tests.
Table <ref> also gathers all the IMU measurements recorded during the obstacle negotiation tests. Both the IE and MHS configurations successfully overcome the obstacle at 0.2 and 0.5 m/s, but it was only with the MHS that the rover was capable of seamlessly negotiating the rock at 1.0 m/s. At this speed, the observable impact on the IE configuration was such that no successful runs were ultimately conducted with this configuration due to concerns over safety and the structural integrity of the rover. fig:force_norm_comparison_in_obstacle_negotiation displays the average of the norm of the force vector acting on the front wheels when overcoming the rock for both the IE and MHS configurations. Dampening of impact loads on the front-left wheel (both mean and maximum force) was also greater in the case of the MHS, and the degree of dampening increased with speed, reducing the loads by 24% on average across wheels and speeds. The main benefit associated with the addition of dependency is the increased pressure exerted on the wheels not overcoming the obstacle, providing the right-side wheels with greater traction and drastically improving the obstacle-surmounting capability of the rover (see fig:force_norm_comparison_in_obstacle_negotiation, fig:right_wheel_force).
§ CONCLUSION
The increased autonomy demanded by upcoming missions to the Moon and Mars implies planetary robots have to be capable of coping with a wide range of disturbances. The addition of compliant elements to the suspension system of these robots appeared to be vital in counteracting the detrimental effects of impact loads and vibrations when driving at high velocities (≥ 1 m/s) under weaker gravity fields. But even when these elements are included, the specific configuration of the suspension design plays an important role in the rover ultimate performance. A new passive suspension configuration, so-called mechanically-hybrid suspension (MHS), was proposed and compared with more traditional rocker and independent swing arm suspensions. The MHS combines the functional benefits of both dependent and elastic elements. Simulation results under a lunar-like gravity field confirmed our initial hypothesis. Field test results validated that an MHS configuration could greatly improve stability while successfully isolating the chassis of the rover from unwanted vibrations and impact loads beyond what could be accomplished with either of the other two commonly used passive configurations. An improved suspension design also affects other aspects involved in navigation, lowering the demand on perception and control systems, increasing duty cycles, and enabling higher levels of autonomy. Future work will explore additional improvements and variations in the suspension configuration such as combining the MHS with non-pneumatic, flexible wheels. Their combination could bring about higher levels of stability and terrain compliance while further reducing non-vertical impact loads and vibrations.
§ ACKNOWLEDGMENT
The authors would like to thank Alan Allart, Tristan Lecocq, Kazuki Nakagoshi, Ryusuke Wada, Danishi Ai, and Merlijn Siffels for their invaluable help and support in the development of EX1.
|
http://arxiv.org/abs/2307.03983v1 | 20230708141424 | Hybrid Successive Interference Cancellation and Power Adaptation: a Win-Win Strategy for Robust Uplink NOMA Transmission | [
"Yanshi Sun",
"Wei Cao",
"Momiao Zhou",
"Zhiguo Ding"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Hybrid Successive Interference Cancellation and Power Adaptation: a Win-Win Strategy for Robust Uplink NOMA Transmission
Yanshi Sun, Member, IEEE, Wei Cao, Momiao Zhou, Member, IEEE, Zhiguo Ding, Fellow, IEEE
Y. Sun, Wei Cao and M. Zhou are with the School of Computer Science and Information
Engineering, Hefei University of Technology, Hefei, 230009, China. (email: [email protected], [email protected] and [email protected]).
Z. Ding is with Department of Electrical Engineering and Computer
Science, Khalifa University, Abu Dhabi, UAE, and Department of Electrical
and Electronic Engineering, University of Manchester, Manchester, UK. (email: [email protected]).
The aim of this paper is to reveal the importance of hybrid successive interference cancellation (SIC) and power adaptation (PA) for improving transmission robustness of uplink non-orthogonal multiple access (NOMA).
Particularly, a cognitive radio inspired uplink NOMA communication scenario is considered, where one primary user is allocated one dedicated resource block, while M secondary users compete with each other to be opportunistically served by using the same resource block of the primary user. Two novel schemes are proposed for the considered scenario, namely hybrid SIC with PA (HSIC-PA) scheme and fixed SIC with PA (FSIC-PA) scheme. Both schemes can ensure that the secondary users are served without degrading the transmission reliability of the primary user compared to conventional orthogonal multiple access (OMA) based schemes. Rigorous analytical results are presented to evaluate the performance of the proposed two schemes. It is shown that both schemes can avoid outage probability error floors without any constraints on users' target rates in the high SNR regime. Furthermore, it is shown that the diversity gain achieved by the HSIC-PA scheme is M, while that of the FSIC-PA scheme is only 1. Numerical results are provided to verify the developed analytical results and also demonstrate the superior performance achieved by the proposed schemes by comparing with the existing HSIC without PA (HSIC-NPA) scheme.
The presented simulation results also show that HSIC-PA scheme performs the best among the three schemes, which indicates the importance of the combination of HSIC and PA for improving transmission robustness.
Non-orthogonal multiple access (NOMA), hybrid successive interference cancellation (HSIC), power adaptation, outage probability.
§ INTRODUCTION
Non-orthogonal multiple access (NOMA) has attracted extensive research interest during the past few years, and has been recognized as an important potential enabling technology for future wireless communication systems <cit.>. Compared to conventional orthogonal multiple access (OMA), where one channel resource block can be accessed by a single user only, the key appealing feature of NOMA is that allowing multiple users to simultaneously access the same channel resource block is encouraged <cit.>. Thus, by applying NOMA, larger connectivity and higher spectral efficiency can be obtained.
Existing research works show that NOMA can be compatible with many other advanced technologies, such as multiple input multiple output (MIMO) <cit.>, millimeter wave communications <cit.>, Terahertz communications <cit.>, reconfigurable intelligent surfaces (RIS) <cit.>, satellite communications <cit.> and so on.
Since NOMA allows multiple users to simultaneously occupy one channel resource block, how to address inter-user interference is one of the key issues in NOMA communication systems. To this end, a widely used method in NOMA to address inter-user interference is successive interference cancellation (SIC), where users' signals are decoded in a successive manner <cit.>. Due to the error propagation nature of SIC, how to order users plays a very important role in the performance of SIC. Conventionally, there are two main types of methods for determining the decoding order of users in NOMA. One is known as the channel state information (CSI) based SIC method, where users are ordered according to the quality of their channels <cit.>. The other is known as the quality of service (QoS) based SIC method, where the signals for the users with more stringent QoS are decoded first, while other users are often opportunistically served and their signals are decoded later <cit.>. Note that most existing works on NOMA adopt a prefixed SIC decoding order according to one of the two aforementioned criteria. Unfortunately, a very dispiriting phenomenon exists in the NOMA schemes based on the aforementioned CSI or QoS based methods. Specifically, the outage probability achieved by these schemes suffers from severe error floors, which means that the outage probability achieved by
a certain user does not approach zero as the SNR goes to infinity. Thus, the transmission reliability cannot be guaranteed, which significantly limits the application of NOMA in many practical scenarios.
It was thought that such outage probability error floors are unavoidable in the implementation of NOMA, and that swapping SIC decoding orders dynamically cannot yield a significant performance gain <cit.>.
Motivated by the error floor issue, a new design of SIC namely hybrid SIC (HSIC) was initially proposed for cognitive radio inspired uplink NOMA by <cit.>. In the proposed HSIC scheme, the decoding orders of users are dynamically determined according to the relationship between the instantaneous channel conditions and users' target rates. <cit.> show that the proposed HSIC scheme can avoid outage probability error floors, under some constraints on users' target rates. The most important contributions of the series studies in <cit.> are two folds.
First, <cit.> showed that it is possible to avoid outage error floors, at least under some specific conditions. Second, <cit.> indicated the importance of introducing HSIC to improve transmission robustness of NOMA.
However, as mentioned above, the proposed scheme in <cit.> can only avoid outage probability error floors under some stringent conditions on users' target rates, which may not be met in many realistic scenarios. Thus, it is natural to ask the following two questions.
The first question is whether it is possible to avoid outage probability error floors without any constraints on users' rates. And the second question is whether it is necessary to apply HSIC to avoid outage probability error floors.
This paper aims to answer the two aforementioned questions, and investigate the impact of the combination of HSIC and power adaptation (PA) on improving the transmission robustness in NOMA. Specifically, a cognitive radio inspired uplink NOMA scenario is considered. In the considered scenario, one primary user is allocated one dedicated channel resource block, while there are M secondary users who compete with each other to opportunistically share the primary user's resource block without degrading the outage performance of the primary user. Two new designs of NOMA schemes, namely HSIC with PA (HSIC-PA) and fixed SIC with PA (FSIC-PA) are proposed. Both schemes can avoid outage probability error floors without any constraints on users' target rates. The main contributions of this paper are listed as follows.
* Two novel designs of uplink NOMA schemes are proposed, namely HSIC-PA and FSIC-PA[Note that the
HSIC-PA scheme extends the scheme proposed in our previous work <cit.> where only two users are considered, while the FSIC-PA scheme has not been proposed before, to the best of our knowledge.]. In the proposed HSIC-PA scheme, the decoding order of the secondary user can be dynamically adjusted according to the channel conditions, while in the proposed FSIC-PA scheme, the decoding order of the secondary user is fixed at the second stage of SIC. By rigorous derivation, the closed-form expressions for the outage probabilities achieved by the two proposed schemes are obtained.
* Based on the obtained expressions for the outage probabilities, asymptotic analysis in the high SNR regime is further developed to gain more insights into the proposed two schemes. It is shown that both HSIC-PA scheme and FSIC-PA scheme can avoid outage probability error floors without any constraints on users' target rates. The fact that the proposed FSIC-PA scheme can avoid error floors indicates that HSIC is not necessary to avoid error floors. Furthermore, the diversity gains achieved the proposed two schemes are also provided, respectively. Interestingly, the diversity gain achieved by HSIC-PA scheme is M, whereas that achieved by FSIC-PA scheme is only 1.
* Numerical results are presented to verify the accuracy of the developed analytical results and demonstrate the superior performance of the proposed HSIC-PA scheme and FSIC-PA scheme, by comparing with the benchmark scheme termed HSIC-NPA proposed in <cit.>. In terms of outage probability and ergodic rate, it is shown that FSIC-PA scheme performs better than HSIC-NPA scheme in the high SNR regime, but worse in the low SNR regime. Besides, HSIC-PA scheme performs the best among three schemes at all SNRs in terms of outage probability and ergodic rate, which shows the power of the combination of HSIC and PA in the design of uplink NOMA transmissions. In terms of power consumption, both the proposed HSIC-PA and FSIC-PA schemes consume less power than the existing HSIC-NPA scheme, whereas HSIC-PA scheme is more power-consuming than FSIC-PA scheme.
§ SYSTEM MODEL
Consider an uplink NOMA communication scenario with one base station (BS), one primary user U_0 and M
secondary users U_m, 1≤ m≤ M. Note that, in the considered scenario, ensuring the transmission reliability of U_0 is of the high priority, which has a target data rate denoted by R_0. In conventional OMA based schemes, the primary user is allocated with a dedicated resource block, which cannot be accessed by other users. While in the considered NOMA schemes of this paper, M secondary users compete with each other to opportunistically access the channel resource block which is allocated to the primary user. Note that allowing secondary users to share the channel resource block of the primary user must be done in such a way to ensure that the QoS of the primary user U_0 is not degraded.
The channel gain of the primary user U_0 is denoted by g, and the channel gains of the secondary users are denoted by h_m, 1≤ m≤ M. In this paper, g and h_m are modeled as the normalized Rayleigh fading gains, which means that g and h_m are independent and identically distributed (i.i.d) circular symmetric complex Gaussian (CSCG) random variables with zero mean and unit variance, i.e., g∼𝒞𝒩(0,1) and h_m ∼𝒞𝒩(0,1). The transmit power of the primary user U_0 is denoted by P_0. The transmit power of the secondary user U_m is denoted by β P_s, where
β∈ [0,1 ] is the adjustable power adaptation coefficient of U_m, and P_s is the maximum power of U_m. Without loss of generality, the background noise power is also assumed to be normalized throughout the paper.
In the remainder of the paper, the M secondary users are ordered according to their channel gains:
| h_1 | ^2< ⋯ < | h_M|^2.
In this paper, two novel NOMA schemes are proposed, namely HSIC-PA scheme and FSIC-PA scheme.
It will be shown that both schemes can avoid outage probability error floors.
For each scheme, in each period of transmission, only the secondary user which can achieve the largest instantaneous achievable rate is allowed to transmit signal by sharing the primary user's resource block.
The proposed two schemes are described in the following two subsections.
§.§ HSIC-PA Scheme
To begin with, define an interference threshold denoted by τ (g) as follows:
τ(g)= max{ 0, P_0|g|^2/(2^R_0-1) - 1}.
Note that τ(g) can be interpreted as the maximum interference, with which U_0 can
still achieve the same outage performance as in OMA where the resource block would be solely occupied by U_0. For more details on τ(g), please refer to <cit.>.
Defining ϵ_0=2^R_0-1 and α_0=ϵ_0/P_0, we have
τ(g)=
|g|^2α_0^-1-1 , |g|^2>α_0,
0 , |g|^2<α_0.
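For reference, the threshold defined above can be evaluated numerically as follows (a direct transcription of the definition, with variable names chosen by us):

```python
import numpy as np

def interference_threshold(g_abs2, P0, R0):
    """tau(g): the largest interference power the primary user U_0 can tolerate
    while keeping the same outage performance as in OMA."""
    alpha0 = (2.0 ** R0 - 1.0) / P0          # alpha_0 = epsilon_0 / P_0
    return np.maximum(0.0, g_abs2 / alpha0 - 1.0)
```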
For each secondary user U_m, its instantaneous achievable rate is determined by how its channel
gain compares to τ (g), which can be classified into the following two types:
* Type I: the maximum received signal power of U_m at the BS is less than or equal to τ(g), i.e.,
P_s | h_m |^2 ≤τ (g). For this case, putting U_m at the second stage of SIC
can yield a larger data rate compared to putting U_m at the first stage of SIC, and will not prevent the primary user from successfully decoding its signal. Thus, it is favorable to decode U_m's signal at the second stage of SIC, and the achievable rate of U_m is given by
R_I^m=log(1+P_s|h_m|^2),
which is the same as in HSIC-NPA scheme proposed in <cit.>.
* Type II: the maximum received signal power of U_m at the BS is larger than τ (g), i.e., P_s | h_m |^2 > τ (g). For this case, the benchmark scheme termed HSIC-NPA which is proposed in <cit.>
only considers the case where β is set to be 1. Thus, in order to avoid degrading the QoS of U_0, U_m's signal can only be decoded at the first stage of SIC in HSIC-NPA, yielding the following achievable data rate of U_m:
R_II,1^m=log(1+P_s|h_m|^2/(P_0|g|^2+1)).
Note that the drawback of putting U_m at the first stage of SIC is that, when P_0|g|^2 is large, R_II,1^m might still be small even with a large P_s | h_m |^2.
To this end, the proposed HSIC-PA scheme offers an additional choice where β can be set to be less than 1 so that β P_s|h_m|^2=τ(g), which can provide an opportunity to yield a larger achievable rate. As a result, U_m's signal can be decoded at the second stage of SIC, yielding the following achievable data rate of U_m:
R_II,2^m=log(1+τ(g)).
Thus, in the proposed HSIC-PA scheme, when P_s | h_m |^2 > τ(g), the achievable data rate of U_m is given by:
R_II^m=max{R_II,1^m,R_II,2^m}.
According to the above discussions, the achievable data rate of U_m in HSIC-PA scheme can be concluded as:
R^m=
R_I^m, P_s|h_m|^2 ≤τ(g),
R_II^m, P_s|h_m|^2 >τ(g).
§.§ FSIC-PA Scheme
Another scheme termed FSIC-PA is proposed in this subsection.
Note that in HSIC-PA scheme, the secondary user's signal can be decoded either at the first or second stage of SIC. However, in FSIC-PA scheme, its signal can only be decoded at the second stage of SIC.
In FSIC-PA scheme, for each secondary user U_m, its instantaneous achievable rate can also be determined by considering the following two cases as in the previous subsection.
* Type I: the maximum received signal power of U_m at the BS is less than or equal to τ(g), i.e.,
P_s | h_m |^2 ≤τ (g). For this case, the decoding strategy is as same as in the HSIC-NPA and the proposed HSIC-PA scheme, where U_m is decoded at the second stage of SIC. Thus, the achievable data rate of U_m is R̂^m_I =log(1+P_s|h_m|^2), since the interference from U_0 can be removed by SIC.
* Type II: the maximum received signal power of U_m at the BS is larger than τ (g), i.e., P_s | h_m |^2 > τ (g). For this case, in the proposed FSIC-PA scheme, U_m can only be decoded at the second stage of SIC. To carry out this strategy, β is set to be less than 1 so that β P_s|h_m|^2=τ(g). Thus, the achievable data rate of U_m for type II is R̂_II^m=log(1+τ(g)).
By concluding the above two cases, the achievable data rate of U_m in the FSIC-PA scheme can be expressed as:
R̂^m=
R̂^m_I, P_s|h_m|^2 ≤τ(g),
R̂^m_II, P_s|h_m|^2 >τ(g).
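The case distinctions for the two schemes can be summarized in a single routine, sketched below under the assumption that rates are measured in bits per channel use (log base 2); the function takes the threshold τ(g) as an argument, and all names are our own.

```python
import numpy as np

def secondary_rate(h_abs2, g_abs2, tau, Ps, P0, scheme="HSIC-PA"):
    """Instantaneous achievable rate of one secondary user U_m under the two
    proposed schemes, in bits per channel use."""
    if Ps * h_abs2 <= tau:
        # Type I: transmit at full power, decoded at the second SIC stage
        return np.log2(1.0 + Ps * h_abs2)
    if scheme == "FSIC-PA":
        # Type II: power scaled down so that beta*Ps*|h_m|^2 = tau
        return np.log2(1.0 + tau)
    # HSIC-PA, Type II: pick the better of first-stage decoding at full power
    # and second-stage decoding with adapted power
    return max(np.log2(1.0 + Ps * h_abs2 / (P0 * g_abs2 + 1.0)),
               np.log2(1.0 + tau))
```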
Note that, the proposed HSIC-PA and FSIC-PA schemes can ensure that the outage performance of the primary user is the same as that in the OMA scheme. Because the use of NOMA is transparent to the primary user, this paper focuses on the performance of the opportunistically served secondary users.
§ PERFORMANCE ANALYSIS ON HSIC-PA SCHEME AND FSIC-PA SCHEME
In this section, the closed-form expressions for the outage probabilities of the served secondary user achieved by the proposed two schemes will be provided. Furthermore, asymptotic analysis for the outage probabilities will be presented, which shows that both HSIC-PA and FSIC-PA schemes can avoid outage probability error floors without any constraints on users' target rates. Besides, rigorous comparisons between the proposed HSIC-PA/FSIC-PA scheme with the existing HSIC-NPA scheme will be carried out.
§.§ Outage probability achieved by HSIC-PA scheme
This subsection provides the exact and asymptotic expressions for the overall outage probability
of the served secondary users achieved by the proposed HSIC-PA scheme. Besides, the diversity gain <cit.> achieved by HSIC-PA is also provided.
Assume that all the secondary users have the same target rate, denoted by R_s. The overall outage probability achieved by the served secondary users in HSIC-PA is given by:
P_out=Pr(max{R^m, 1≤ m≤ M}<R_s).
For the ease of characterizing the outage probability P_out, it is helpful to define the event E_m, which denotes the event that there are m secondary users belonging to type I. Particularly, E_m can be expressed as follows:
E_m=
{ |h_m |^2< τ (g)/P_s, | h_m+1 | ^2>τ (g)/P_s},  for 1≤ m≤ M-1,
{|h_1|^2 > τ (g)/P_s},  for m=0,
{|h_M|^2 < τ (g)/P_s},  for m=M,
where the extreme cases E_0 and E_M denote the events where there is no type I secondary users and all the secondary users belong to type I, respectively.
It is shown that the expression of P_out can be divided into four parts, as highlighted in the following lemma.
For ease of calculation, P_out can be further simplified as:
P_out= P(|h_M|^2>τ(g)/P_s, | h_M |^2< |h_k |^2,R^M_II,2<R_s,|g|^2>α_0)_Q̃_1
+ P(|h_M|^2>τ(g)/P_s, | h_M |^2> |h_k |^2,R^M_II,1<R_s,|g|^2>α_0)_Q̃_2
+ P( E_M,R^M_I<R_s, | g | ^2>α _0)_Q_M + P(
R^M_II<R_s ,|g|^2<α_0) _Q_M+1.
Please refer to Appendix A.
By deriving the expressions of Q̃_1, Q̃_2, Q_M and Q_M+1 as shown in Appendix B, the expression for the overall outage probability of the admitted secondary users in HSIC-PA scheme can be obtained as shown in the following theorem.
The overall outage probability P_out of the admitted secondary users in HSIC-PA can be expressed as follows:
P_out=∑_i=0^M([ M; i ])(-1)^ie^-iα_s1-e^-(α_sP_0i+1)α_1/α_sP_0i+1+(1-e^-α_s)^Me^-α_1,
where ϵ_s=2^R_s-1,
α_s=ϵ_s/P_s,
α_1=(1+ϵ_s)α_0.
Please refer to Appendix B.
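The closed form in Theorem 1 is easy to evaluate numerically and, assuming i.i.d. unit-mean Rayleigh fading for |g|^2 and all |h_m|^2 (as in the system model) and α_0 = ϵ_0/P_0, it can be cross-checked by Monte Carlo simulation. The sketch below reuses rate_hsic_pa and the numpy import from Section II and is only a sanity check, not part of the analysis.

from math import comb, exp

def p_out_hsic_pa(M, Ps, P0, Rs, R0):
    # Closed-form overall outage probability of Theorem 1.
    eps_s, eps_0 = 2.0 ** Rs - 1.0, 2.0 ** R0 - 1.0
    a_s, a_0 = eps_s / Ps, eps_0 / P0
    a_1 = (1.0 + eps_s) * a_0
    s = sum(comb(M, i) * (-1) ** i * exp(-i * a_s)
            * (1.0 - exp(-(a_s * P0 * i + 1.0) * a_1)) / (a_s * P0 * i + 1.0)
            for i in range(M + 1))
    return s + (1.0 - exp(-a_s)) ** M * exp(-a_1)

def p_out_monte_carlo(M, Ps, P0, Rs, R0, trials=200000, seed=1):
    rng = np.random.default_rng(seed)
    fails = 0
    for _ in range(trials):
        g2 = rng.exponential()
        h2 = rng.exponential(size=M)
        if max(rate_hsic_pa(h, g2, Ps, P0, R0) for h in h2) < Rs:
            fails += 1
    return fails / trials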
Based on Theorem 1, the asymptotic expression for P_out in the high SNR regime can be obtained as shown in the following corollary.
At high SNR, i.e., P_0=P_s→∞, the overall outage probability of the served secondary users in HSIC-PA can be approximated as follows:
P_out≈ϵ_s^M/P_s^MP_0∑_i=1^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/i+1-ϵ_s^M/P_s^MP_0^2∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/i+2+ϵ_s^M/P_s^M.
Please refer to Appendix C.
Further, it is straightforward that the first two terms of (<ref>) can be omitted in the high SNR regime, yielding a more simplified expression for P_out, as highlighted in the following corollary.
At high SNR, i.e., P_0=P_s→∞,
the approximation of P_out shown in (<ref>) can be further approximated as follows:
P_out≈ϵ_s^M/P_s^M.
Remark 1. Note that, the existing HSIC-NPA scheme can only avoid outage probability error floors under the constraint that ϵ_0ϵ_s≤ 1, which means that the feasible target rate for reliable transmission of the secondary users is primarily restricted by that of the primary user.
However, from the results shown in Corollary 2, it can be easily concluded that the outage probability error floor can be avoided by HSIC-PA scheme without any constraints on the users' target rates. Hence, the first question raised in Section I can be answered with the answer that it is possible to avoid outage probability error floors without any constraints on users' target rates.
Remark 2. In wireless communications, diversity gain is usually used as an important performance metric to measure how fast the outage probability decreases as transmit power increases <cit.>. It denotes the asymptotic scaling law of the outage probability to the transmit SNR. Specifically, the diversity gain, say d, achieved by HSIC-PA is defined as:
d=-lim_P_s→∞log P_out/log P_s
Based on the results shown in Corollary 2, it can be straightforwardly obtained that
d=M. Therefore, the diversity gain achieved by the HSIC-PA scheme is M, which is exactly the number of the secondary users. Thus, multi-user diversity gain can be fully utilized by the proposed HSIC-PA scheme, which means increasing the number of secondary users is helpful to reduce the overall outage probability.
From the perspective of diversity gain, the difference between the HSIC-NPA scheme and the HSIC-PA scheme can also be revealed. Recall that the diversity gain achieved by HSIC-NPA is also M when ϵ_0ϵ_s≤1, otherwise a diversity gain of zero is realized.
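The diversity gain can also be read off numerically from Theorem 1: with P_0 = P_s, the slope of -log P_out versus log P_s approaches M at high SNR. A small sketch reusing p_out_hsic_pa from above (the parameter values are only an example):

def diversity_slope(M, Rs, R0, snr_db=(30.0, 40.0)):
    # Finite-difference estimate of d = -d(log P_out)/d(log P_s), with P_0 = P_s.
    p = [10.0 ** (s / 10.0) for s in snr_db]
    po = [p_out_hsic_pa(M, x, x, Rs, R0) for x in p]
    return -(np.log(po[1]) - np.log(po[0])) / (np.log(p[1]) - np.log(p[0]))

# Example: diversity_slope(3, 1.0, 1.0) is close to 3, matching d = M.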
§.§ Outage probability achieved by FSIC-PA scheme
This subsection provides the exact expression for the overall outage probability
of the served secondary users in the proposed FSIC-PA scheme. Asymptotic analysis for the outage probability is also provided.
For the FSIC-PA scheme, the overall outage probability achieved by the served secondary users is defined as:
P̂_out=Pr(max{R̂^m, 1≤ m≤ M}<R_s).
The following theorem provides the closed-form expression for the outage probability achieved by the FSIC-PA scheme.
The overall outage probability P̂_out of the served secondary users in FSIC-PA can be expressed as follows:
P̂_out=1-e^-α_1+(1-e^-α_s)^Me^-α_1.
Please refer to Appendix D.
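As a quick numerical companion to Theorem 2 (again assuming unit-mean Rayleigh fading, α_0 = ϵ_0/P_0, and reusing the imports above), the closed form is a one-liner; comparing it against p_out_hsic_pa from the previous subsection makes the different high-SNR behaviors of the two schemes visible.

def p_out_fsic_pa(M, Ps, P0, Rs, R0):
    # Closed-form overall outage probability of Theorem 2.
    eps_s, eps_0 = 2.0 ** Rs - 1.0, 2.0 ** R0 - 1.0
    a_s = eps_s / Ps
    a_1 = (1.0 + eps_s) * eps_0 / P0
    return 1.0 - exp(-a_1) + (1.0 - exp(-a_s)) ** M * exp(-a_1)

# For example, with M = 3 and R_s = R_0 = 1 BPCU, p_out_fsic_pa decays roughly like 1/P_0
# as the SNR grows, whereas p_out_hsic_pa decays like 1/P_s^M (diversity gain 1 versus M).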
Based on Theorem 2, asymptotic expression for P̂_out in the high SNR regime can be obtained as shown in the following corollary.
At high SNR, i.e., P_0=P_s→∞, the overall outage probability of the served secondary users in the FSIC-PA scheme can be approximated as follows:
P̂_out≈ϵ_0(1+ϵ_s)/P_0+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0.
By applying Taylor expansion 1-e^-x≈ x (x→ 0), the expression in (<ref>) can be further approximated as follows:
P̂_out≈ α_1+α_s^M-α_s^Mα_1
= ϵ_0(1+ϵ_s)/P_0+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0,
and the proof is complete.
Remark 3. From Corollary 3, it can be easily observed that the proposed FSIC-PA scheme can also avoid outage probability error floors without any constraints on the users' target rates. At this point, the second question raised in Section I can be answered with the answer that HSIC is not the necessary condition to avoid outage probability error floors.
Remark 4. It is also interesting to investigate the diversity gain achieved by the FSIC-PA scheme, which is defined as:
d̂=-lim_P_s→∞logP̂_out/log P_s.
According to Corollary 3, it can be straightforwardly obtained that d̂=1. Thus,
the multi-user diversity gain cannot be obtained by FSIC-PA scheme.
The above two remarks indicate that even though HSIC is not the necessary strategy to avoid the outage probability error floor, its combination with PA is beneficial for improving transmission robustness.
§.§ Comparisons between HSIC-PA/FSIC-PA scheme with HSIC-NPA scheme
In this section, more detailed comparisons of the proposed two schemes with the benchmark HSIC-NPA scheme are provided. Note that, if the served secondary user belongs to type I, the three schemes, i.e., HSIC-PA, HSIC-NPA and FSIC-PA, achieve the same instantaneous data rate.
However, the three schemes differ from each other if the served secondary user belongs to type II. Thus, it is necessary to compare the three schemes for the case when the served secondary user belongs to type II.
For ease of notation, denote the served secondary user by U_m^*. When U_m^* belongs to type II, denote its achievable rate by R_II, R̂_II and R̅_II for HSIC-PA, FSIC-PA and HSIC-NPA schemes, respectively.
From the description in Section II, it can be seen that R_II≥R̅_II always holds. Thus, for the comparison between HSIC-PA and HSIC-NPA, it is sufficient to characterize the probability of the event that R_II>R̅_II, as presented in the following theorem.
Under the condition that the served secondary user U_m^* is type II, the probability of the event that R_II>R̅_II, termed P^better, is given by:
P^better= P( R̅_2<R_2, U_m^* is type II) /P(U_m^* is type II) ,
where
P( R̅_2<R_2, U_m^* is type II)
= ∑_i=1^M([ M; i ])(-1)^ie^i/P_s[ṽ(α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1) -ũ(α_0,i/P_sα_0 ) ] ,
and
P(U_m^* is type II)=1-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/P_sα_0 ),
where
ũ(x,y)=1/y+1e^-x(y+1),
and ṽ(x,y,z)=√(π)e^z^2/4y/2√(y)[1-erf (√(y)(x+z/2y))], where erf(·) denotes the Gaussian error function,
which is given by:
erf(x)=2/√(π)∫_0^xe^-t^2dt.
Please refer to Appendix E.
Differently, for the comparison between FSIC-PA and HSIC-NPA, R̂_II can be either larger or less than R̅_II. Thus, it is necessary to characterize both the probabilities of the events that R̂_II>R̅_II and R̂_II<R̅_II. By noting that
P̂( R̅_2<R̂_2, U_m^* is type II)=P(|h_M|^2>τ(g)/P_s,|h_M|^2< |h_k |^2,|g|^2>α_0),
which is the same as the expression of P( R̅_2< R_2, U_m^* is type II) in Theorem 3, the following theorem can be straightforwardly obtained.
Under the condition that the served secondary user U_m^* is type II, the probability of the event that R̂_II>R̅_II, termed P̂^better, is given by:
P̂^better= P( R̅_2<R̂_2, U_m^* is type II) /P(U_m^* is type II) ,
which is the same as the expression of P^better in Theorem 3. The probability of the event that R̂_II<R̅_II, termed P̂^worse, is given by:
P̂^worse=1-P̂^better.
§ NUMERICAL RESULTS
In this section, simulation results are provided to verify the accuracy of the developed analysis and demonstrate the performance of the proposed HSIC-PA and FSIC-PA schemes. Comparisons with the benchmark HSIC-NPA scheme developed in <cit.> are also provided.
Fig. <ref> verifies the accuracy of the developed analytical results for the outage probability achieved by the proposed HSIC-PA scheme. Note that, the curves for analytical results are based on Theorem 1, and those for Approximations I and II are based on Corollaries 1 and 2, respectively.
As shown in the figure, analytical results perfectly match simulations, which verifies the accuracy of the analytical results provided in Theorem 1.
Besides, Fig. <ref> also shows that both the curves for Approximation I and Approximation II
match the simulation results at high SNR, which verifies the accuracy of the approximations in Corollaries 1 and 2.
Fig. <ref> verifies the accuracy of the developed analytical results for the outage probability achieved by the proposed FSIC-PA scheme. Note that the curves for analytical results are based on Theorem 2, and the curves for approximation are based on Corollary 3. From the figure, it can be observed that the curves for analysis perfectly match simulations, which verifies the accuracy of the results provided in Theorem 2.
Besides, it is shown that the curves for the approximate results are accurate at high SNR, which demonstrates the accuracy of the results in Corollary 3.
A significant difference between HSIC-PA and FSIC-PA schemes can be clearly observed from Figs. <ref> and <ref>. Fig. <ref> shows that as M increases, the outage probability achieved by HSIC-PA scheme significantly decreases. In contrast, Fig. <ref> shows that,
for M>1, the outage probabilities for different values of M coincide. Thus, keeping increasing M cannot improve the outage performance of FSIC-PA in the high SNR regime. This observation is consistent with
the results in Section III that the diversity gain of HSIC-PA scheme is M, while that of FSIC-PA scheme is only 1.
Fig. <ref> shows the outage probabilities of the secondary users achieved by HSIC-NPA, HSIC-PA and FSIC-PA versus transmit SNR. As shown in the figure, for HSIC-NPA scheme, when R_0=1 BPCU, there is no outage probability error floor. However, when R_0=4 BPCU, the outage probability error floor exists. This observation is consistent with the conclusions in <cit.>,
i.e., the error floor can only be avoided when ϵ_0ϵ_s<1. By contrast, the proposed HSIC-PA and FSIC-PA schemes can avoid outage probability error floors, since the outage probabilities achieved by both schemes continuously decrease as the SNR increases. Fig. <ref> also shows that the HSIC-PA scheme performs the best among the three schemes for all cases. However, FSIC-PA achieves larger outage probabilities than HSIC-NPA when R_0=1 BPCU, while for the case where R_0=4 BPCU, FSIC-PA performs better at high SNRs.
Fig. <ref> shows the performance of the three schemes in terms of ergodic data rates achieved by the served secondary users.
From the figure, it is shown that HSIC-PA scheme always achieves the largest ergodic rate among the three schemes, which is consistent with the observation in Fig. <ref>.
Another interesting observation from Fig. <ref> is that the performance of FSIC-PA approaches that of HSIC-PA in terms of ergodic data rate at high SNR, while the performance of HSIC-NPA approaches that of HSIC-PA in terms of ergodic rate at low SNRs. This observation indicates that it is preferable to set the
secondary user at the first stage of SIC and use full transmit power at low SNRs, while it is preferable to set the secondary user at the second stage of SIC and use partial transmit power at high SNRs.
Fig. <ref> and Fig. <ref> demonstrate a more detailed comparison on achievable rates of the proposed two schemes with the benchmark HSIC-NPA scheme.
Fig. <ref> shows the probability that the served secondary user belongs to type II. It is shown that as SNR increases, the probabilities converge to a constant.
Fig. <ref> shows that the curves for P̂^better and P^better coincide, which is consistent with results shown in Theorems 3 and 4.
Fig. <ref> also shows that P̂^better and P^better increase with SNR and approach 1 in the high SNR regime, while P̂^worse decreases with SNR and approaches 1 in the low SNR regime.
The above observation can help to understand the phenomenon shown in Fig. <ref> and
Fig. <ref>, and leads to the following suggestions for practical systems.
On the one hand, at high SNR, it is preferable to apply power adaptation and put the secondary user at the second stage of SIC. On the other hand, at low SNR, it is better to decode the secondary user at the first stage of SIC.
Fig. <ref> shows the power consumption of HSIC-PA and FSIC-PA schemes. Note that the HSIC-NPA scheme always chooses full power to transmit for the secondary users, i.e., β is always set to be 1, while β can be set to be less than 1 in the proposed HSIC-PA and FSIC-PA schemes. Thus, HSIC-NPA is more energy consuming than the proposed two schemes in this paper. From the figure, it can be observed that at low SNRs, β approaches 1 in HSIC-PA and β approaches zero in FSIC-PA. Besides, as SNR increases, β decreases in HSIC-PA, while that in FSIC-PA increases. More interestingly, the values of β for both schemes approach a constant in the high SNR regime. However, at high SNR, the value of β in HSIC-PA scheme is a bit higher than that in FSIC-PA.
§ CONCLUSIONS
In this paper, two novel cognitive radio inspired uplink NOMA schemes were proposed to improve transmission robustness, namely the HSIC-PA scheme and the FSIC-PA scheme. Rigorous analysis has been developed to characterize the performance of the proposed schemes. It has been shown that both HSIC-PA and FSIC-PA schemes can avoid outage probability error floors in the high SNR regime without any constraints on users' target rates, which was previously thought impossible for uplink NOMA transmission. It has also been shown that the diversity gain achieved by the HSIC-PA scheme is M, which is the maximal multi-user diversity gain for the considered scenario, while the diversity gain achieved by the FSIC-PA scheme is 1. Numerical results have been presented to verify the accuracy of the developed analysis and demonstrate the superior performance of the proposed schemes. This paper has shown that the combination of HSIC and PA is important for improving the transmission robustness of uplink NOMA.
§ PROOF FOR LEMMA 1
The outage events can be divided into two groups, one is |g|^2>α_0 and the other is |g|^2<α_0.
Thus, the outage probability P_out shown in (<ref>) can be written as:
P_out= ∑_m=1^M-1 P ( E_m,max{R^k_I, 1 ≤ k≤ m}<R_s,
max{ R^k_II,m < k ≤ M } <R_s, | g | ^2>α _0 )
+P( E_M,max{ R^k_I,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P( E_0,max{ R^k_II,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P(max{ R^k_II,1≤ k ≤ M } <R_s ,|g|^2<α_0).
Recall that the secondary users are ordered according to their channel gains; hence, P_out can be further written as:
P_out= ∑_m=1^M-1 P ( E_m,R^m_I< R_s,R^M_II < R_s, | g | ^2>α _0 )_Q_m
+P( E_M,R^M_I<R_s, | g | ^2>α _0)_Q_M +P( E_0,R^M_II <R_s, | g | ^2>α _0) _Q_0
+ P(
R^M_II<R_s ,|g|^2<α_0) _Q_M+1.
Note that when |g|^2>α _0, R^M_II can be determined according to the value of |h_M |^2 as follows:
R^M_II =
R^M_II,2,  if |h_M |^2< |h |^2,
R^M_II,1,  if |h_M |^2> |h |^2,
where |h |^2=( | g |^2α_0^-1-1 )(P_0 | g |^2+1)/P_s. Thus, Q_m can be rewritten as follows:
Q_m= ∑_m=1^M-1P (E_m,R^m_I<R_s,R^M_II,2<R_s, | h_M |^2< |h |^2 ,|g|^2>α_0 ) _Q_m,1
+P ( E_m, R^m_I<R_s,R^M_II,1<R_s,
| h_M |^2> |h |^2 ,|g|^2>α_0 ) _Q_m,2.
By noting that regardless of the value of |h_M |^2, R^m_I is always smaller than R^M_II,1 and R^M_II,2,
Q_m can be further simplified as:
Q_m= ∑_m=1^M-1P (E_m,R^M_II,2<R_s, | h_M |^2< |h |^2 ,|g|^2>α_0 ) _Q_m,1
+P ( E_m,R^M_II,1<R_s,
| h_M |^2> |h |^2 ,|g|^2>α_0 ) _Q_m,2.
By applying the results shown in (<ref>), Q_0 can be rewritten as follows:
Q_0= P( E_0, |h_M |^2< |h |^2, R^M_II,2 <R_s, | g | ^2>α _0)_Q_0,1
+ P( E_0, |h_M |^2> |h |^2, R^M_II,1 <R_s, | g | ^2>α _0)_Q_0,2.
Note that, Q_m,1 and Q_0,1 can be combined, so as Q_m,2 and Q_0,2, thus, the sum of Q_m and Q_0 can be simplified as follows:
Q_m+Q_0= Q_m,1+Q_0,1_Q̃_1 +Q_m,2+Q_0,2_Q̃_2
= P(|h_M|^2>τ(g)/P_s, | h_M |^2< |h |^2,R^M_II,2<R_s,|g|^2>α_0)_Q̃_1
+ P(|h_M|^2>τ(g)/P_s, | h_M |^2> |h |^2,R^M_II,1<R_s,|g|^2>α_0)_Q̃_2.
Therefore, P_out=Q_m+Q_0+Q_M+Q_M+1=Q̃_1+Q̃_2+Q_M+Q_M+1 and the proof is complete.
§ PROOF FOR THEOREM 1
According to Lemma 1, the evaluation of P_out can be divided into four parts: Q̃_1,
Q̃_2,
Q_M
and Q_M+1.
§.§ Evaluation of Q̃_1
Note that Q̃_1 can be expressed as follows:
Q̃_1= P (|h_M|^2>|g|^2α_0^-1-1/P_s,|h_M|^2<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s,log(1+τ(g))<R_s,|g|^2>α_0)
= α_0<|g|^2<α_1ε{P(|g|^2α_0^-1-1/P_s<|h_M|^2<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s)_S̃_1},
where ε{ * } denotes the mathematical expectation.
Note that the users are ordered according to their channel gains, and hence the probability density function (pdf) of |h_M|^2 can be expressed as:
f_|h_M|^2(x)= M!/(M-1)!(1-e^-x)^M-1e^-x
= M(1-e^-x)^M-1e^-x.
By applying (<ref>), S̃_1 can be evaluated as follows:
S̃_1= ∫_|g|^2α_0^-1-1/P_s^(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_sf_|h_M|^2(x)dx
= ∑_i=0^M([ M; i ])(-1)^i( e^-i(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s-e^-i/P_s(|g|^2α_0^-1-1)).
Further, by noting that |g|^2 is exponentially distributed, Q̃_1 can be calculated as:
Q̃_1= ∫_α_0^α_1S̃_1e^-|g|^2d|g|^2
= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^α_1( e^-i(xα_0^-1-1)(P_0x+1)/P_s-e^-i/P_s(xα_0^-1-1))e^-xdx.
For notational simplicity,
define u(α_0,α_1,c) as:
u(α_0,α_1,c)△=∫_α_0^α_1e^-(c+1)xdx= 1/c+1[e^-α_0(c+1)-e^-α_1(c+1)],
and v(α_1,α_0,A,B) as:
v(α_1,α_0,A,B)△= ∫_α_0^α_1e^-(Ax^2+Bx)dx
= √(π)e^B^2/4A/2√(A)[erf(√(A)(α_1+B/2A))-erf(√(A)(α_0+B/2A))].
By taking (<ref>) and (<ref>) into (<ref>), Q̃_1 can be expressed as:
Q̃_1=∑_i=0^M([ M; i ])(-1)^ie^i/P_s[ v(α_1,α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)- u(α_0,α_1,i/P_sα_0)].
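The closed forms of u and v above are elementary integrals; the following short check (with scipy and arbitrary, hypothetical parameter values) confirms the Gaussian-type identity for v numerically.

from math import sqrt, pi, erf, exp
from scipy.integrate import quad

def v_closed(a1, a0, A, B):
    # sqrt(pi) e^{B^2/(4A)} / (2 sqrt(A)) * [erf(sqrt(A)(a1 + B/(2A))) - erf(sqrt(A)(a0 + B/(2A)))]
    c = sqrt(pi) * exp(B * B / (4.0 * A)) / (2.0 * sqrt(A))
    return c * (erf(sqrt(A) * (a1 + B / (2.0 * A))) - erf(sqrt(A) * (a0 + B / (2.0 * A))))

a0, a1, A, B = 0.2, 1.5, 0.7, 0.3   # hypothetical values
numeric, _ = quad(lambda x: exp(-(A * x * x + B * x)), a0, a1)
assert abs(numeric - v_closed(a1, a0, A, B)) < 1e-8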
§.§ Evaluation of Q̃_2
Note that Q_2 can be expressed as follows:
Q̃_2= P (|h_M|^2>|g|^2α_0^-1-1/P_s,|h_M|^2>(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s,.
.log(1+P_s|h_M|^2/P_0|g|^2+1)<R_s,|g|^2>α_0)
(a)= α_0<|g|^2<α_1ε{P((|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s<|h_M|^2<α_s(P_0|g|^2+1) )_S̃_2},
where step (a) is obtained by noting the hidden condition (|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s<α_s(P_0|g|^2+1), which yields |g|^2<α_1.
By using the pdf of |h_M|^2 shown in (<ref>), S̃_2 can be evaluated as follows:
S̃_2= ∫_(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s^α_s(P_0|g|^2+1)M(1-e^-x)^M-1e^-xdx
= ∑_i=0^M([ M; i ])(-1)^i( e^-iα_s(P_0|g|^2+1)-e^-i/P_s(|g|^2α_0^-1-1)(P_0|g|^2+1)).
Further, by averaging with respect to |g|^2, Q̃_2 can be expressed as:
Q̃_2= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^α_1( e^-iα_s(P_0x+1)-e^-i/P_s(xα_0^-1-1)(P_0x+1))e^-xdx.
By taking (<ref>) and (<ref>) into (<ref>), Q̃_2 can be further expressed as follows:
Q̃_2=∑_i=0^M([ M; i ])(-1)^i [e^-iα_su (α_0,α_1,iα_sP_0)- e^i/P_sv (α_1,α_0,iP_0/P_sα_0,i/P_s(α_0^-1- P_0)+1)].
§.§ Evaluation of Q_M
Note that Q_M can be rewritten as follows:
Q_M= P(|h_M|^2<|g|^2α_0^-1-1/P_s,log(1+P_s|h_M|^2)<R_s,|g|^2>α_0)
= P(|h_M|^2<min{|g|^2α_0^-1-1/P_s,α_s},|g|^2>α_0)
= α_0<|g|^2<α_1ε{ P(|h_M|^2<α_0^-1|g|^2-1/P_s) _S_M,1} +|g|^2>α_1ε{P(|h_M|^2<α_s)_ S_M,2},
where the last step is obtained by dividing the events into two cases, i.e., |g|^2<α_1 and |g|^2>α_1.
By using the pdf of |h_M|^2 shown in (<ref>), the expression for S_M,1 and S_M,2 can be obtained as:
S_M,1=(1-e^-α_0^-1|g|^2-1/P_s)^M and S_M,2=(1-e^-α_s)^M.
By averaging with respect to |g|^2, Q_M can be further evaluated as follows:
Q_M= ∫_α_0^α_1(1-e^-α_0^-1x-1/P_s)^Me^-xdx+∫_α_1^∞(1-e^-α_s)^Me^-xdx
= ∫_α_0^α_1∑_i=0^M([ M; i ])(-1)^ie^i/P_se^-α_0^-1/P_sixe^-xdx+(1-e^-α_s)^Me^-α_1
= ∑_i=0^M([ M; i ])(-1)^ie^i/P_su (α_0,α_1,i/α_0P_s)+(1-e^-α_s)^Me^-α_1,
where the last step is obtained by applying the results shown in (<ref>).
§.§ Evaluation of Q_M+1
Note that Q_M+1 can be expressed as follows:
Q_M+1=P(R^M_II<R_s ,|g|^2<α_0).
Note that, when |g|^2<α_0, τ(g)=0, yielding R^M_II=log(1+ P_s|h_M|^2/P_0|g|^2+1).
Thus, Q_M+1 can be further expressed as:
Q_M+1 = P(|g|^2<α_0,log(1+ P_s|h_M|^2/P_0|g|^2+1)<R_s)
= |g|^2<α_0ε{ P( log(1+ P_s|h_M|^2/P_0|g|^2+1)<R_s)_S_M+1}.
By using the pdf |h_M|^2 shown in (<ref>), S_M+1 can be evaluated as follows:
S_M+1= ∫_0^α_s(P_0|g|^2+1)f_|h_M|^2(x)dx
= (1-e^-α_s(P_0|g|^2+1))^M.
Further, by averaging with respect to |g|^2, Q_M+1 can be expressed as:
Q_M+1 = ∫_0^α_0(1-e^-α_s(P_0x+1))^Me^-xdx
= ∑_i=0^M([ M; i ])(-1)^ie^-α_si1-e^-(α_sP_0i+1)α_0/α_sP_0i+1,
where the last step is obtained by applying the binomial expansion.
Therefore, the expressions for Q̃_1,
Q̃_2,
Q_M,
and Q_M+1 are obtained, and the proof is complete.
§ PROOF FOR COROLLARY 1
In order to facilitate a high SNR approximation, P_out in (<ref>) can be
rewritten as follows:
P_out=∑_i=0^M([ M; i ])(-1)^i∫_0^α_1e^-xe^-iα_s(P_0x+1)dx+(1-e^-α_s)^Me^-α_1.
By using the fact that
∑_i=0^M([ M; i ])(-1)^iA^i=(1-A)^M,
P_out can be further approximated as follows:
P_out= ∫_0^α_1e^-x(1-e^-α_s(P_0x+1))^Mdx+(1-e^-α_s)^Me^-α_1
≈ ∫_0^α_1(1-x)α_s^M(P_0x+1)^Mdx+α_s^M(1-α_1),
where the last step is obtained by applying the Taylor series approximation 1-e^-x≈ x as x→ 0.
A more simplified form of P_out can be obtained by applying the binomial expansion:
P_out≈ α_s^M∫_0^α_1(1-x)∑_i=0^M([ M; i ])P_0^ix^idx+α_s^M(1-α_1)
= α_s^M∫_0^α_1∑_i=0^M([ M; i ])P_0^i(x^i-x^i+1)dx+α_s^M(1-α_1).
By taking integrations in (<ref>), P_out can be further calculated as follows:
P_out≈ α_s^M∑_i=0^M([ M; i ])P_0^i(α_1^i+1/i+1-α_1^i+2/i+2)+α_s^M-α_s^Mα_1
(a)= ϵ_s^M/P_s^MP_0∑_i=0^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/i+1-ϵ_s^M/P_s^MP_0^2∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/i+2
+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0
(b)= ϵ_s^M/P_s^MP_0∑_i=1^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/i+1-ϵ_s^M/P_s^MP_0^2∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/i+2+ϵ_s^M/P_s^M,
where step (b) is obtained by noting that the i=0 term of the first sum in step (a) equals ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0, which is exactly the same as the last term in step (a), and thus the two cancel.
§ PROOF FOR THEOREM 2
Divide the outage events into two cases, one being |g|^2>α_0 and the other being |g|^2<α_0. Therefore, the outage probability P̂_out shown in (<ref>) can be rewritten as:
P̂_out= ∑_m=1^M-1 P ( E_m,max{R̂^k_I, 1 ≤ k≤ m}<R_s,
max{R̂^k_II,m < k ≤ M } <R_s, | g | ^2>α _0 )
+P( E_M,max{R̂^k_I,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P( E_0,max{R̂^k_II,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P(max{R̂^k_II,1≤ k ≤ M } <R_s ,|g|^2<α_0).
Recall that the secondary users are ordered according to their channel gains; hence, P̂_out can be further written as:
P̂_out= ∑_m=1^M-1P(E_m,R̂_I^m<R_s,R̂_II^M<R_s,|g|^2>α_0)_F_m
+P(E_M,R̂_I^M<R_s,|g|^2>α_0 )_F_M
+ P(E_0,R̂_II^M<R_s,|g|^2>α_0 )_F_0
+P(R̂_II^M<R_s,|g|^2<α_0 )_F_M+1.
By noting that R̂^m_I<R̂^M_II for the first term, F_m and F_0 can be combined as follows:
F_m+F_0=P(|h_M|^2>τ(g)/P_s,R̂^M_II<R_s,
|g|^2>α_0)_F̃.
Therefore, P̂_out can be further simplified as:
P̂_out= P(|h_M|^2<τ(g)/P_s,R̂^M_I<R_s
,|g|^2>α_0)_F_M
+P(R̂_II^M<R_s,|g|^2<α_0)_F_M+1
+P(|h_M|^2>τ(g)/P_s,
R̂^M_II<R_s,|g|^2>α_0)_F̃.
Thus the remaining task is to derive the expressions for F_M, F_M+1 and F̃, respectively.
§.§ Evaluation of F_M
Note that F_M can be expressed as follows:
F_M= P(|h_M|^2<|g|^2α_0^-1-1/P_s,
log(1+P_s|h_M|^2)<R_s,|g|^2>α_0 ),
which is the same as the expression for Q_M in (<ref>). Thus, F_M can be expressed as:
F_M=∑_i=0^M([ M; i ])(-1)^ie^i/P_su(α_0,α_1,
i/α_0P_s)+(1-e^-α_s)^Me^-α_1.
§.§ Evaluation of F_M+1
Note that F_M+1 can be expressed as follows:
F_M+1= P(log(1+τ(g))<R_s,|g|^2<α_0)
(a)= P(|g|^2<α_0)
= 1-e^-α_0,
where step (a) is obtained by the fact that τ(g)=0 when |g|^2<α_0.
§.§ Evaluation of F̃
Note that F̃ can be expressed as follows:
F̃= P(|h_M|^2>τ(g)/P_s,
log(1+τ(g))<R_s,|g|^2>α_0)
= α_0<|g|^2<α_1ε{P(|h_M|^2>|g|^2α_0^-1-1/P_s)_T̃}.
By using the pdf of |h_M|^2 shown in (<ref>), T̃ can be evaluated as follows:
T̃= ∫_|g|^2α_0^-1-1/P_s^∞
M(1-e^-x)^M-1e^-xdx
= 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0|g|^2+i/P_s.
By taking expectation with respect to |g|^2, F̃ can be further evaluated as follows:
F̃= ∫_α_0^α_1( 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0x+i/P_s) e^-xdx
= e^-α_0-e^-α_1-∑_i=0^M([ M; i ])(-1)^ie^i/P_su(α_0,α_1,i/P_sα_0).
Until now, the expressions for F_M, F_M+1 and F̃ are obtained, and the proof is complete.
§ PROOF FOR THEOREM 3
Note that the numerator in (<ref>) can be rewritten as:
P( R̅_2<R_2, U_m^* is type II)_Q_n
= |g|^2>α_0ε{P(|g|^2α_0^-1-1/P_s<|h_M|^2
<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s)_S_n}.
By using the pdf of |h_M|^2 shown in (<ref>), S_n can be evaluated as follows:
S_n= ∫_|g|^2α_0^-1-1/P_s^(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_sM(1-e^-x)^M-1e^-xdx
= ∑_i=0^M([ M; i ])(-1)^i( e^-i(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s-e^-i/P_s(|g|^2α_0^-1-1)).
Further, by averaging with respect to |g|^2, Q_n can be expressed as follows:
Q_n= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^∞( e^-i(xα_0^-1-1)(P_0x+1)/P_s-e^-i/P_s(xα_0^-1-1))e^-xdx.
By taking (<ref>) and (<ref>) into (<ref>), Q_n can be further expressed as follows:
Q_n= ∑_i=0^M([ M; i ])(-1)^ie^i/P_s[ v(∞,α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)-u(α_0,∞,i/P_sα_0)]
= ∑_i=1^M([ M; i ])(-1)^ie^i/P_s[ ṽ(α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)
-ũ(α_0,i/P_sα_0)],
where the last step is obtained by noting that the term i=0 can be omitted since ṽ(α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)
-ũ(α_0,i/P_sα_0)
=0 for i=0.
The denominator in (<ref>) can be calculated as follows:
P(U_m^* is type II)_Q_d
= P(|h_M|^2>τ(g)/P_s,|g|^2>α_0)_Q_d1
+P( |g|^2<α_0)_Q_d2
= |g|^2>α_0ε{P(
|h_M|^2>|g|^2α_0^-1-1/P_s)_S_d1}
+Q_d2.
Note that S_d1 is the same as the expression for T̃ in (<ref>). Thus, S_d1 can be obtained by using the
results in (<ref>) as follows:
S_d1= 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0|g|^2+i/P_s.
By averaging with respect to |g|^2, Q_d1 can be further evaluated as follows:
Q_d1= ∫_α_0^∞( 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0x+i/P_s) e^-xdx
= e^-α_0-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/P_sα_0).
Q_d2 can be expressed as follows:
Q_d2=∫_0^α_0e^-xdx=1-e^-α_0.
Thus, Q_d is the sum of Q_d1 and Q_d2, which can be expressed as follows:
Q_d= 1-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/P_sα_0).
Therefore, the expressions for P( R̅_2<R_2, U_m^* is type II) and P(U_m^* is type II) are obtained, and the proof is complete.
|
http://arxiv.org/abs/2307.03963v1 | 20230708122517 | An observational signature for extremal black holes | [
"Stefanos Aretakis",
"Gaurav Khanna",
"Subir Sabharwal"
] | gr-qc | [
"gr-qc",
"hep-th",
"math-ph",
"math.MP"
] | |
http://arxiv.org/abs/2307.07330v1 | 20230714131529 | Sparse induced subgraphs in P_6-free graphs | [
"Maria Chudnovsky",
"Rose McCarty",
"Marcin Pilipczuk",
"Michał Pilipczuk",
"Paweł Rzążewski"
] | cs.DS | [
"cs.DS",
"cs.DM",
"math.CO"
] |
We prove that a number of computational problems that ask for the largest sparse induced subgraph satisfying some property definable in CMSO_2 logic, most notably Feedback Vertex Set,
are polynomial-time solvable in the class of P_6-free graphs. This generalizes the work of Grzesik, Klimošová, Pilipczuk, and Pilipczuk on the Maximum Weight Independent Set problem in P_6-free graphs [SODA 2019, TALG 2022],
and of Abrishami, Chudnovsky, Pilipczuk, Rzążewski, and Seymour on problems in P_5-free graphs [SODA 2021].
The key step is a new generalization of the framework of potential maximal cliques. We show that instead of listing
a large family of potential maximal cliques, it is sufficient to only list their carvers: vertex sets that contain the same vertices
from the sought solution and have similar separation properties.
§ INTRODUCTION
The landmark work of Bouchitté and Todinca <cit.> uncovered the pivotal role that potential maximal cliques
(PMCs for short) play in tractability of the classic Maximum (Weight) Independent Set problem (MIS or MWIS for short).
The MIS (MWIS) problem asks for a set of pairwise nonadjacent vertices (called an independent set or a stable set)
in a given graph of maximum possible cardinality (or weight, in the weighted setting, where every vertex is given a positive integral weight).
Without giving a precise definition, a potential maximal clique is a set of vertices of the graph that
can be seen as a “reasonable” choice for a bag in a tree decomposition
of the graph, which in turn can be seen as a “reasonable” choice of a separating set for a divide-and-conquer algorithm.
Bouchitté and Todinca <cit.> showed that MWIS is solvable in time polynomial in the size of the graph
and the number of PMCs of the input graph. At the time, this result unified a number of earlier tractability results for MWIS
in various hereditary graph classes, giving an elegant common explanation for tractability.
Later, Fomin, Todinca, and Villanger <cit.> showed that the same result applies not only to MWIS,
but to a wide range of combinatorial problems, captured via the following formalism.
For a fixed integer k and a CMSO_2 formula
[CMSO_2 stands for monadic second-order logic in graphs with quantification over edge subsets and modular counting predicates. In this logic, one can quantify both over single vertices and edges and over their subsets, check membership and vertex-edge incidence, and apply modular counting predicates with fixed moduli to set variables. See Section <ref> for a formal introduction of the syntax and semantics of CMSO_2.] ϕ with one free vertex set variable, consider the following problem.
Given a graph G, find a pair (A,X) maximizing |X| such that X ⊆ A ⊆ V(G), G[A] has treewidth at most k, and ϕ(X) is satisfied in G[A].
This problem can also be considered in the weighted setting, where vertices of G have positive integral weights
and we look for (A, X) maximizing the weight of X.
For fixed k and ϕ, we denote this weighted problem as (≤ k,ϕ)-MWIS.
[Here, MWIS stands for “maximum weight induced subgraph.”]
Fomin, Todinca, and Villanger showed that (≤ k,ϕ)-MWIS is solvable in time polynomial
in the size of the graph and the number of its PMCs.
Clearly, MWIS can be expressed as a (≤ 0, ϕ)-MWIS problem.
Among the many problems captured by this formalism, we mention that Feedback Vertex Set can be expressed as a (≤ 1, ϕ)-MWIS problem: indeed, the complement of a minimum (weight) feedback vertex set is a maximum (weight) induced forest.
Another application is as follows.
Let 𝒢 be a minor-closed graph class that does not contain all planar graphs.
Thanks to the Graph Minor Theorem of Robertson and Seymour <cit.>,
there exists a finite set of graphs such that G ∈𝒢 if and only if G does not contain any
graph of this set as a minor. Consequently, the property of belonging to 𝒢 can be expressed in CMSO_2.
Furthermore, as 𝒢 does not contain all planar graphs, 𝒢 has bounded treewidth <cit.>.
Thus the problem of finding a largest induced subgraph that belongs to 𝒢
is a special case of (≤ k, ϕ)-MWIS.
In both applications above we have A=X. To see an example where these sets are different, consider the problem of packing the maximum number of vertex-disjoint and pairwise non-adjacent induced cycles.
To see that it is also a special case of (≤ k, ϕ)-MWIS,
let k=2 and ϕ be the formula enforcing that G[A] is 2-regular (i.e., a collection of cycles) and no two vertices from X are in the same component of G[A].
Unfortunately, the results of <cit.> and <cit.> do not cover all cases where we expect even the original MWIS problem to be
polynomial-time solvable. A key case arises from excluding an induced path. For a fixed graph H, the class of H-free graphs consists of all graphs that do not contain H
as an induced subgraph. For an integer t, we denote the path on t vertices by P_t.
While the class of P_4-free graphs has bounded clique-width,
the class of P_t-free graphs does not seem to exhibit any apparent structure for t ≥ 5.
Still, as observed by Alekseev <cit.>, MWIS is not known to be NP-hard in P_t-free graphs for any fixed t.
At first glance, the PMC framework of <cit.> does not seem applicable to P_t-free graphs for t ≥ 5, as even co-bipartite
graphs can have exponentially-many PMCs. (A graph is co-bipartite if its complement is bipartite; these graphs are P_5-free.)
In 2014, Lokshtanov, Vatshelle, and Villanger <cit.> revisited the framework of Bouchitté and Todinca and showed that it is not necessary to use all PMCs of the input graph, but only some carefully
selected subfamily of PMCs.
They also showed that for P_5-free graphs, one can efficiently enumerate a suitable family of polynomial size, thus proving
tractability of MWIS in P_5-free graphs.
The arguments of <cit.> were then expanded to P_6-free graphs by Grzesik et al. <cit.>.
The case of P_7-free graphs remains open.
A general belief is that the MWIS problem is actually tractable in P_t-free graphs for any constant t.
This belief is supported by the existence of quasi-polynomial-time algorithms that work for every t <cit.>.
Extending these results, Gartland et al. <cit.> proved that for every t, k, and ϕ, the (≤ k,ϕ)-MWIS problem is solvable in quasi-polynomial time on P_t-free graphs via a relatively simple branching algorithm. Actually, their algorithm solves a degeneracy-based variant of the problem, where instead of a subgraph of bounded treewidth we ask for a subgraph of bounded degeneracy.
We remark that (≤ k,ϕ)-MWIS can be expressed in this degeneracy-based formalism. Indeed, degeneracy is always upper-bounded by treewidth, and the property of having bounded treewidth is expressible by a CMSO_2 formula.
On the other hand, the degeneracy-based formalism allows us to capture more problems. One well-known example is Vertex Planarization <cit.>, which asks for a maximum (or maximum weight) induced planar subgraph.
Indeed, planar graphs have degeneracy at most 5, but they might have unbounded treewidth, so Vertex Planarization is not a special case of (≤ k,ϕ)-MWIS.
However, Gartland et al. <cit.> showed that in (a superclass of) P_t-free graphs, treewidth and degeneracy are functionally equivalent.
Consequently, even though the degeneracy-based formalism is more general than (≤ k,ϕ)-MWIS, in P_t-free graphs both formalisms describe the same family of problems.
One of the obstructions towards extending the known polynomial-time algorithms for MWIS beyond P_5-free and P_6-free graphs is the technical complexity of the method.
The algorithm for P_5-free graphs <cit.> is already fairly involved, and the generalization to P_6-free graphs <cit.>
resulted in another significant increase in the amount of technical work.
In particular, it is not clear how to apply such an approach to solve (≤ k,ϕ)-MWIS (or, equivalently, its degeneracy-based variant).
Furthermore, in a recent note <cit.>, the authors of <cit.> discuss limitations of applying the method to solving MWIS in graph classes excluding longer paths.
Both algorithms for P_5-free <cit.> and for P_6-free graphs <cit.> focused on restricting
the family of needed PMCs, but the algorithms still listed the PMCs exactly.
A major twist was made by Abrishami et al. <cit.>
who showed that instead of determining a PMC exactly, it suffices to find only a container for it: a superset
that does not contain any extra vertices from the solution. They also showed that with this container method, the arguments for P_5-free graphs
from <cit.> greatly simplify to some elegant structural observations about P_5-free graphs.
Moreover, the container method from <cit.> works with any (≤ k,ϕ)-MWIS problem. In particular, the authors of <cit.> showed that Feedback Vertex Set is polynomial-time solvable in P_5-free graphs.
While the container method of <cit.> pushed the boundary of tractability, it does not seem to be easily applicable
to P_t-free graphs for t ≥ 6; in particular, we do not know how to significantly simplify the arguments of <cit.>
using containers.
§.§ Our contribution
Our contribution is three-fold.
*Identifying treedepth as the relevant width measure.
Previous work on MWIS in P_5-free and P_6-free graphs <cit.>
first fixed a sought solution I (which is an inclusion-wise maximal independent set),
then observed that it suffices to focus on PMCs that contain at most one vertex from the fixed solution I. They also distinguished between
PMCs that have one vertex in common with I, and PMCs that have zero. The former ones turn out to be easy to handle,
but the latter ones, called “I-free” or “I-safe”, are trickier; to tackle them one has to rely on some additional properties stemming from the fact that I is maximal.
We introduce generalizations of these notions to induced subgraphs of bounded treedepth,
a structural notion more restrictive than treewidth.
It turns out that the correct analog of independent sets are induced subgraphs of bounded treedepth
with a fixed elimination forest. Maximality corresponds to the inability to extend the subgraph by adding a leaf
vertex to the elimination forest, while I-freeness corresponds to not containing any leaf of the fixed
elimination forest of the sought solution.
Focusing on treedepth naturally leads us to the variant of the problem where G[A] is required to have treedepth at most d.
Luckily, in P_t-free graphs treedepth is functionally equivalent to treewidth (and thus to degeneracy, too),
so in this setting the treedepth-, treewidth-,
and degeneracy-based formalisms define the same class of problems.
While simple in their form and proofs,
the above generalizations allow us to adapt many arguments of <cit.>
to all (≤ k, ϕ)-MWIS problems.
*Generalizing containers to carvers.
We introduce a notion of a carver that generalizes containers.
Our inspiration comes from thinking of a PMC as a “reasonable” separation in a divide-and-conquer algorithm.
Instead of determining a PMC exactly, we want to find an “approximation” that, on one hand, contains the same vertices from the sought
solution, and, on the other hand, splits the graph at least as well as the PMC.
The crux lies in properly defining this latter notion.
Note that a container should satisfy any reasonable definition of “splitting at least as well;”
if X is a set of vertices which contains a PMC Ω,
then each component of G-X is a subset of a component of G-Ω.
However, if we allow that the approximation X of Ω does not contain some vertices of Ω, then we need to somehow restrict the way the vertices of Ω∖ X connect the components of G-(Ω∪ X).
The first natural idea, to ask that no component of G-X intersects more than one component of G-Ω, turns
out to be not very useful. The actual definition allows Ω∖ X to glue up
some components of G-(Ω∪ X) as long as we can show that
another carver, for a different PMC, will later separate them.
We prove that carvers are sufficient to solve all problems of our interest in P_t-free graphs.
Any (≤ k,ϕ)-MWIS problem is solvable on P_t-free graphs in time
polynomial in the size of the input graph and the size of the supplied carver family.
*Finding carvers in P_6-free graphs.
We showcase the strength of Theorem <ref> by lifting the approach of Grzesik et al. <cit.> from just MWIS to arbitrary (≤ k,ϕ)-MWIS problems on P_6-free graphs. Formally, we prove the following.
For any choice of k and ϕ, the (≤ k,ϕ)-MWIS problem is polynomial-time solvable
on P_6-free graphs.
Note that Theorem <ref> in particular implies that Feedback Vertex Set is polynomial-time solvable on P_6-free graphs, which was a well-known open problem <cit.>. Apart from being applicable to a wider class of problems, our carver-based approach also significantly simplifies, or even makes obsolete, many of the technical parts of <cit.>.
On a high level, the proof of <cit.> consists of two parts. In the first part, PMCs that in some sense
“have more than two principal components” are analysed. Here, the arguments are arguably neat and elegant in many places.
The second part deals with PMCs with exactly two “principal components”, that can chain up into long sequences.
Here, a highly technical replacement argument is developed to “canonize” an I-free minimal chordal completion
in such parts of the input graph.
Using the newly developed notions of treedepth structures, we lift the (more elegant part of the) arguments of <cit.>
to (≤ k, ϕ)-MWIS problems, showing that PMCs with “more than two principal components” admit containers,
not only carvers.
Furthermore, we use the power of the new notion of the carver to construct carvers
for PMCs with two “principal components”, replacing the highly technical part of <cit.>
with arguably shorter and more direct arguments.
§.§ Technical overview
Let us now have a closer look at the three aforementioned contributions.
To this end, we need to introduce some definitions regarding chordal completions and PMCs.
Given a graph G, a set Ω⊆ V(G) is a potential maximal clique (or a PMC) if there exists a minimal chordal completion of G in which Ω is a maximal clique. A chordal completion of G is a supergraph of G which is chordal and has the same vertex-set as G; it is minimal if it has no proper subgraph which is also a chordal completion of G. (Recall that a graph is chordal if it has no holes, where a hole is an induced cycle of length at least 4.) Since chordal completions are obtained by adding edges to G, it is convenient to write them as G+F, where F is a set of non-edges of G.
Chordal completions in a certain sense correspond to tree decompositions, and it is often more convenient to work with the latter.
(The formal definition of a tree decomposition can be found in Section <ref>.)
It is a folklore result that a graph H is chordal if and only if it has a tree decomposition whose bags are exactly the maximal cliques of H (meaning, in particular, that the number of nodes of the tree is equal to the number of maximal cliques of H). Such a tree decomposition is called a clique tree of H; note that while the set of bags of a clique tree is defined uniquely, the actual tree part of the tree decomposition is not necessarily unique.
In the other direction, observe that if we have a tree decomposition of a given graph G, then by completing every bag of this
tree decomposition into a clique, we obtain a chordal supergraph.
Hence, minimal chordal completions correspond to “the most refined” tree decompositions of G, and this
supports the intuition that PMCs are “reasonable” choices of bags in a tree decomposition of G.
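As a concrete (and purely illustrative) way to experiment with these notions, one can compute a chordal completion and its maximal cliques with networkx; note that complete_to_chordal_graph returns some chordal completion obtained from an elimination ordering, which is not guaranteed to be minimal.

import networkx as nx

G = nx.cycle_graph(6)                        # C_6, a small non-chordal example
H, order = nx.complete_to_chordal_graph(G)   # a chordal completion G+F (not necessarily minimal)
fill = {frozenset(e) for e in H.edges()} - {frozenset(e) for e in G.edges()}
bags = list(nx.chordal_graph_cliques(H))     # maximal cliques of H = bags of a clique tree
print("fill edges:", fill)
print("bags of a clique tree:", bags)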
For a set S ⊆ V(G) in a graph G, a full component of S is a connected component A of G-S
such that N(A) = S. A set S is a minimal separator if S has at least two full components.
It is well-known (cf. <cit.>) that if Ω is a PMC in G, then for every component D of G-Ω,
N(D) is a minimal separator with D as a full component and another full component containing Ω∖ N(D).
Furthermore, if st is an edge of T for a clique tree (T,β) of a minimal chordal completion G+F,
then β(s) ∩β(t) is a minimal separator with one full component containing β(s) ∖β(t)
and one full component containing β(t) ∖β(s).
Thus, in some sense, minimal separators are building blocks from which PMCs are constructed. While PMCs
correspond to bags of tree decompositions of G, minimal separators correspond to adhesions (intersections
of neighboring bags).
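Testing whether a given set S is a minimal separator, and listing its full components, is a direct computation; a small networkx-based sketch (illustrative only):

import networkx as nx

def full_components(G, S):
    # Components A of G - S with N(A) = S.
    S = set(S)
    comps = nx.connected_components(G.subgraph(set(G) - S))
    return [A for A in comps
            if S == {u for a in A for u in G[a] if u in S}]

def is_minimal_separator(G, S):
    return len(full_components(G, S)) >= 2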
Treedepth structures. The starting insight of Lokshtanov, Vatshelle, and Villanger <cit.>
is that if I is a maximal independent set in G, then by completing V(G)∖ I into a clique we obtain
a chordal graph (even a split graph), and thus there exists a minimal chordal completion G+F that does not
add any edge incident to I; we call such a chordal completion I-free.
In G+F, every maximal clique contains at most one vertex of I
and, if I ∩Ω = {v} for a maximal clique Ω, then Ω⊆ N_G[v] and N_G[v] is a good container
for Ω.
They argue that it is sufficient to list a superset of all maximal cliques of G+F, and hence it suffices to focus on PMCs of G that are disjoint
from the sought solution I. Such PMCs are henceforth called I-free.
Let Ω be an I-free PMC. Since I is maximal, every v ∈Ω has a neighbor in I that is outside Ω,
as Ω is I-free. The existence of such neighbors is pivotal to a number of proofs of <cit.>.
To discuss our generalization to induced subgraphs of bounded treedepth, we need a few standard definitions.
A rooted forest is a forest where each component has exactly one specified vertex called its root. The depth of a vertex v ∈ V() is the number of vertices in the unique path from v to a root (so roots have depth 1). The height of is the maximum depth of any of its vertices. A path in is vertical if one of its ends is an ancestor of the other. (We consider each vertex to be both an ancestor and a descendent of itself.) Two vertices are -comparable if they are connected by a vertical path; otherwise they are -incomparable.
An elimination forest of a graph G is a rooted forest such that V() = V(G) and the endpoints of each edge of G are -comparable. The treedepth of G is then the smallest integer d such that G has an elimination forest of height d.
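In code, a rooted forest can be stored as a parent map (None for roots); the elimination-forest condition then amounts to checking that the endpoints of every edge are in ancestor-descendant relation. The helper below is only an illustration of the definitions.

import networkx as nx

def root_path(parent, v):
    # Vertices on the path from v to its root, including v.
    path = [v]
    while parent[v] is not None:
        v = parent[v]
        path.append(v)
    return path

def is_elimination_forest(G, parent):
    # parent maps every vertex of G to its parent (None for roots).
    comparable = lambda u, v: u in root_path(parent, v) or v in root_path(parent, u)
    return set(parent) == set(G) and all(comparable(u, v) for u, v in G.edges())

# Example: the path a-b-c has treedepth 2, witnessed by the forest with root b and leaves a, c.
P3 = nx.path_graph(["a", "b", "c"])
forest = {"b": None, "a": "b", "c": "b"}
assert is_elimination_forest(P3, forest)
assert max(len(root_path(forest, v)) for v in P3) == 2   # height of the forest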
Let us now move to the new definitions.
Let G be a graph and d be a positive integer. A treedepth-d structure in G is a rooted forest of height at most d such that V() is a subset of V(G) and is an elimination forest of the subgraph of G induced by V(). We say that is maximal if there is no treedepth-d structure ' in G such that is a proper induced subgraph of ' and every root of is a root of '. In other words, is maximal if one cannot extend it by appending a leaf while preserving the bound on the height.
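Maximality of a treedepth-d structure can be tested greedily: a vertex v outside the structure can be appended as a leaf below a vertex w exactly when every neighbor of v inside the structure is an ancestor of w (or v has no such neighbors and becomes a new root), and the resulting depth does not exceed d. A sketch of this check, reusing root_path from the snippet above:

def can_append_leaf(G, parent, d, v):
    # Can v (outside the structure) be appended as a leaf while keeping height <= d?
    nbrs_in = {u for u in G[v] if u in parent}
    if not nbrs_in:
        return True                              # v can start a new one-vertex tree
    for w in parent:                             # try to hang v directly below w
        anc = set(root_path(parent, w))
        if nbrs_in <= anc and len(anc) + 1 <= d:
            return True
    return False

def is_maximal_structure(G, parent, d):
    return not any(can_append_leaf(G, parent, d, v) for v in set(G) - set(parent))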
Note that if H is a maximal induced subgraph of G of treedepth at most d, then any height-d elimination forest of H is a maximal treedepth-d structure in G. Consequently, in the context of (≤ d, ϕ)-MWIS, we can consider A as being in fact a maximal set inducing a subgraph of treedepth at most d in G; if (A, X) is an actual solution, then there exists a maximal treedepth-d structure whose vertex set is a superset of A, and
we can extend ϕ by saying that there exists a set A contained in the vertex set of that structure with all the desired properties.
Thus, most of the structural results in this work consider the set of all maximal treedepth-d-structures, which are more detailed versions of maximal sets inducing a subgraph of treedepth at most d.
Recall that for any independent set I, there is a minimal chordal completion of G that is I-free, that is,
it does not add any edge incident to I.
This statement generalizes to chordal completions aligned with a given treedepth-d structure ;
we say that a chordal completion G+F is -aligned if F does not contain any pair uv such that
* u or v is a depth-d vertex of , or
* u and v are vertices of which are -incomparable.
The second condition equivalently says that is a treedepth-d structure in G+F.
We show that there is always a -aligned minimal chordal completion (see Lemma <ref>) and argue that it suffices to focus on PMCs
that come from an aligned minimal chordal completion.
The analog of the notion of “I-freeness” is as follows:
A PMC Ω is -avoiding if it is a maximal clique of a minimal chordal completion that is -aligned, and it does not contain any depth-d vertex of .
Similarly as in the case of PMCs that are not I-free,
if Ω comes from an aligned minimal chordal completion but is not avoiding, then Ω contains exactly one depth-d vertex of the structure, and
one can argue that the closed neighborhood of this vertex gives rise to a container for Ω
(after excluding the vertices of the structure outside Ω that accidentally got into it, but there are at most d-1
of them, because they all are ancestors of the guessed depth-d vertex of Ω in the rooted
forest).
Thus, it remains to focus on -avoiding PMCs.
In the I-free setting, the important property of an I-free PMC Ω was that every v ∈Ω has a neighbor in I.
Here, one can argue that in an avoiding PMC Ω, every v ∈Ω that is not a vertex of the structure has a neighbor
in the structure outside Ω, as otherwise v could be appended to the structure without increasing its height, contradicting
the maximality of the structure.
This concludes the overview of the adaptation of the notion of I-freeness to induced subgraphs of bounded treedepth.
Carvers.
Let G be a graph and let I be an optimal solution to MWIS in G.
Assume that we are given a polynomial-sized family ℱ of PMCs in G that contains all maximal cliques of some
I-free minimal chordal completion G+F of G. The crucial insight of <cit.> is that this is enough
to solve MWIS in G in polynomial time by a dynamic programming algorithm.
The algorithm considers the following set of states: for every Ω∈ℱ, every J ⊆Ω of size at most 1,
and every component D of G-Ω, it tries to compute the best possible independent set I[Ω,J,D] in G[Ω∪ D]
with I[Ω,J,D] ∩Ω = J.
The assumption that ℱ contains all PMCs of G+F allows one to argue that there is a computation path
of this dynamic programming algorithm that finds an independent set that is at least as good as I
(we may not find I itself).
The crucial insight of <cit.> is that for the dynamic programming algorithm to work, it is enough
to know containers for the maximal cliques of G+F, that is, it is fine if the provided sets in ℱ are larger,
as long as they do not contain extra vertices from the sought solution. The intuition here is that the dynamic programming
algorithm relies on the separation properties of PMCs as bags of a clique tree of G+F, and a superset is an even better separator
than a PMC itself.
From the point of view of separation, the following relaxation of a container would suffice.
A set X is a weak container of a PMC Ω if it contains the same vertices from the sought solution
and every connected component of G-X intersects at most one connected component of G-Ω
(that is, the vertices of Ω∖ X do not connect two components of G-(Ω∪ X)).
However, in the context of P_6-free graphs, we are unable to provide even weak containers to some PMCs,
and there seems to be a good reason for this failure.
Namely, there are examples of P_6-free graphs G
with an (I-free or -avoiding, depending on the problem
we are solving) PMC Ω with a subset 𝒟 of components of G-Ω such that some local modifications
to the minimal chordal completion G+F modify Ω slightly, but completely reshuffle the vertices of 𝒟
into new components.
The intuition is that the dynamic programming algorithm should not attempt to separate 𝒟 into components
while looking at a (weak) container of Ω, but while looking at another PMC Ω' that is “closer” to 𝒟.
More precisely, consider the following example (cf. Figure <ref>).
Let A_1,…,A_n and B_1,…,B_n be two sequences of P_6-free graphs and let ℱ
be a family of subsets of [n] of size being a large polynomial in n with ⋃ℱ = [n];
all subsets of [n] of size at most C for a large constant C would do the job.
Construct a graph G as follows.
Start with a disjoint union of A_1,…,A_n and B_1,…,B_n. For every i,j ∈ [n], i ≠ j,
add all edges between A_i and A_j and all edges between B_i and B_j. For every i ∈ [n], add all edges between
A_i and B_i. Finally, for every K ∈ℱ, introduce a vertex v_K and make it adjacent to ⋃_i ∈ K A_i.
A direct check shows that G is P_6-free, for every choice of i_0 ∈ [n] and a maximal independent set I_0 in B_i_0,
the set I_i_0,I_0 := I_0 ∪{v_K | K ∈ℱ} is a maximal independent set in G,
and, for every ∅≠ J ⊊ [n] that is not contained in any set of ℱ, the set
S_J := ⋃_i ∈ JA_i ∪⋃_i ∈ [n] ∖ J B_i is a minimal separator with one full component
B_J := ⋃_i ∈ JB_i and a second full component A_J := ⋃_i ∈ [n] ∖ J A_i ∪{v_K | K ∈ℱ, K ⊈J}, and a number of single-vertex components {v_K} for K ∈ℱ, K ⊆ J.
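The construction is easy to reproduce for small parameters; the brute-force induced-P_6 test below is exponential and only meant to sanity-check a tiny instance in which every A_i and B_i is a single vertex and ℱ consists of all nonempty subsets of [n] of size at most 2 (a hypothetical choice of parameters).

import networkx as nx
from itertools import combinations

def build_example(n, F):
    G = nx.Graph()
    A = {i: f"a{i}" for i in range(1, n + 1)}
    B = {i: f"b{i}" for i in range(1, n + 1)}
    for i in range(1, n + 1):
        G.add_edge(A[i], B[i])                   # A_i fully joined to B_i
        for j in range(i + 1, n + 1):
            G.add_edge(A[i], A[j])               # A_i fully joined to A_j
            G.add_edge(B[i], B[j])               # B_i fully joined to B_j
    for K in F:
        vK = "v" + "".join(map(str, sorted(K)))
        for i in K:
            G.add_edge(vK, A[i])
    return G

def has_induced_p6(G):
    P6 = nx.path_graph(6)
    return any(nx.is_isomorphic(G.subgraph(S), P6) for S in combinations(G, 6))

n = 3
F = [set(K) for r in (1, 2) for K in combinations(range(1, n + 1), r)]
G = build_example(n, F)
assert not has_induced_p6(G)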
Observe that a weak container for S_J should separate v_K for K ⊆ J from those v_K for which K ⊈J.
The only way to make a small family of (weak) containers for all such separators S_J is to make containers containing whole
⋃_i ∈ I A_i but none of the vertices v_K; however, distinguishing ⋃_i ∈ I A_i and {v_K | K ∈ℱ}
seems difficult using the toolbox used in <cit.>.
In the above example a chordal completion will turn every A_i for i ∈ [n] and every B_i for i ∈ [n] ∖{i_0}
into a clique, and take any permutation π of [n] with π(1) = i_0
and add edges between A_π(i) and B_π(j) for every 1 ≤ i < j ≤ n. This corresponds to turning
S_J for J = π({1,2,…,i}) for every 1 ≤ i ≤ n into a clique.
Intuitively, the algorithm should not bother with the choice of π, which corresponds to ignoring
how vertices v_K are separated while looking at intermediate separators S_J.
Recall that the correctness of the dynamic programming algorithm of <cit.> relies on the observation
that a clique tree of G+F provides a computation path in which the algorithm finds a solution at least as good as I.
To provide an analogous proof in our setting, one needs to confine such a problematic set 𝒟
in one subtree of a clique tree of G+F. This consideration brings us to the final definition of a carver.
Let G be a graph and d and k be positive integers.
A family 𝒞⊆ 2^V(G) is a tree-depth-d carver family of defect k in G
if for every treedepth-d structure in G,
there exist a minimal chordal completion G+F of G
and a clique tree (T,β) of G+F such that for each t∈ V(T) there exists C ∈𝒞
such that
* the intersection of C with the vertex set of the structure contains the intersection of β(t) with the structure and has size at most k, and
* each component of G-C is contained in β(t) ∪⋃_s ∈ T'β(s) for some component T' of T-{t}.
Such a set C as above is called a carver for β(t) of defect k (with respect to the structure and the clique tree (T,β)); it might not be unique. We use this definition independently of that of carver families.
We prove that this definition works as intended:
a tree-depth-d carver family of small defect in G is enough to design a dynamic programming
routine that solves the (≤ d, ϕ)-MWIS problem on G.
For any positive integers d and k and any CMSO_2 formula ϕ, there exists an algorithm that, given a vertex-weighted graph G
and a tree-depth-d carver family 𝒞⊆ 2^V(G) of defect k in G, runs in time polynomial in the input size
and either outputs an optimal solution to the (≤ d, ϕ)-MWIS problem on G, or determines that no feasible solution exists.
We remark that the proof of Theorem <ref> is far from being just an involved verification of a natural approach.
There is a significant technical hurdle coming from the fact that, with fixed , G+F, and (T,β),
carvers for neighboring bags of (T,β) may greatly differ from each other in terms of the amount of non-solution vertices
added to them. One needs to design careful tie-breaking schemes for choices in partial solutions in the dynamic programming algorithm
in order to avoid conflicting tie-breaking decisions made while looking at different carvers.
Application to P_6-free graphs.
The starting point of the work of <cit.> on MWIS in P_6-free graphs
is an analysis of minimal separators that identifies a crucial case distinction between full components
of a minimal separator, into ones whose complement is disconnected (so-called mesh components) or connected
(non-mesh components).
The analysis splits minimal separators in a P_6-free graph G into three categories:
Simple, being a proper subset of another minimal separator, having more than two full components,
or having two non-mesh full components. Here, one can enumerate a polynomial-sized family of candidates that contains
all such separators.
Somewhat complicated, having exactly two full components, both being mesh.
Here, one can enumerate a polynomial-sized family that contains a “weak container” for every such separator,
which is equally good for our applications as just knowing the separator exactly.
Really complicated, having exactly two full components, one mesh and one non-mesh.
Here, we can only enumerate a polynomial-sized family of “semi-carvers” that separate the mesh component from the
other components, but such a semi-carver is not guaranteed to separate the non-mesh full component from some non-full components.
(This weakness corresponds to the examples mentioned earlier about the inability to split some family 𝒟
of components of G-Ω for a PMC Ω; note that all separators S_J in the aforementioned examples
are of the really complicated type.)
This analysis generalizes to our setting, using the new notions of treedepth structures.
We proceed to discussing the PMCs.
Then, the following case distinction is identified in <cit.>.
A potential maximal clique Ω in a graph G is two-sided
if there exist two distinct connected components D_1,D_2 of G-Ω
such that for every connected component D of G-Ω, we have N(D) ⊆ N(D_1)
or N(D) ⊆ N(D_2).
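Deciding whether a given potential maximal clique Ω is two-sided is a direct check over the components of G-Ω and their neighborhoods; a short sketch (illustrative only, quadratic in the number of components):

import networkx as nx

def is_two_sided(G, Omega):
    Omega = set(Omega)
    comps = list(nx.connected_components(G.subgraph(set(G) - Omega)))
    nbhd = [{u for a in D for u in G[a] if u in Omega} for D in comps]
    return any(all(N <= nbhd[i] or N <= nbhd[j] for N in nbhd)
               for i in range(len(nbhd)) for j in range(len(nbhd)) if i != j)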
The following statement has been essentially proven in <cit.>.
However, it has been proven only with the Max Weight Independent Set problem in mind,
so we need to adjust the argumentation using the notion of treedepth structures.
For every positive integer d
there exists a polynomial-time algorithm that, given a P_6-free graph G,
outputs a family ⊆ 2^V(G) with the following guarantee:
for every maximal treedepth-d structure in G
and every potential maximal clique of G that is -avoiding and not two-sided,
there exists C ∈ that is a container for , i.e., ⊆ C
and C ∩ V() = ∩ V().
It remains to study two-sided PMCs, which were the main cause of technical hurdles in <cit.>.
Here we depart from the approach of <cit.> and use the power of carvers instead.
To use carvers, we would like to choose not only a minimal chordal completion G+F (which, following the developments in the first
part of our work, would be any -aligned minimal chordal completion, where is the sought solution) but
also a clique tree (T,β) of G+F.
Recall that adhesions in (T,β) correspond to minimal separators in G, and the really complicated minimal separators
are the ones with one mesh and one non-mesh full component, in which case it is difficult to isolate the non-mesh component.
So, we would like the clique tree (T,β) to be imbalanced in the following way:
if st ∈ E(T) is such that β(s) ∩β(t) is a really complicated minimal separator with
the non-mesh full component A_s containing β(s) ∖β(t) and
the mesh full component A_t containing β(t) ∖β(s), then as much as possible of the decomposition (T,β)
should be reattached to the component of T-{st} that contains s.
More precisely, for a clique tree (T,β) of G+F, for every edge st as above, orient st from t to s
(and keep all edges of T that do not correspond to really complicated minimal separators undirected).
Consider now an edge st as above and assume that there exists s' ∈ N_T(t), s ≠ s' such that
β(s') ∩β(t) ⊆β(s) ∩β(t).
Then, the minimal separator β(s') ∩β(t) is a simple one (it is contained in another minimal separator
β(s) ∩β(t)), so the edge s't is undirected. Observe that the assumption (<ref>) allows the following
modification of (T,β): replace the edge s't with an edge s's.
This modification corresponds to the intuition that while studying the really complicated minimal separator
β(s) ∩β(t), it is difficult to separate the component A_s from the full component of G-(β(s') ∩β(t))
that contains β(s') ∖β(t), and thus — from the point of view of the PMC β(t) — both these components
should be contained in bags of the same component of T-{t}.
A simple potential argument shows that such modifications cannot loop indefinitely and there exists a clique tree (T,β)
where no modification is possible. This is the clique tree for which we are finally able to construct carvers using
the aforementioned analysis of minimal separators, in particular semi-carvers for the really difficult minimal separators.
The actual construction is far from straightforward, but arguably simpler than the corresponding argumentation of <cit.>
that handles two-sided PMCs.
§.§ Organization
After the preliminaries (Section <ref>), we
introduce the notion of carvers and carver families and provide the main algorithmic engine in Section <ref>.
The remaining sections are devoted to P_6-free graphs and the proof of Theorem <ref>.
Sections <ref> and <ref> study approximate guessing of minimal separators.
Section <ref> recalls the main (and most elegant) structural results on P_6-free graphs from <cit.>,
essentially extracting from <cit.> a family of containers for all PMCs that in some sense have “more than two sides.”
Section <ref> uses the results for minimal separators of Sections <ref> and <ref>
to provide carvers for the remaining PMCs; this is the place where we crucially rely on the fact that we want to provide only carvers,
not containers. Finally, Section <ref> wraps up the proof of Theorem <ref>, and Section <ref> gives a concluding remark about P_7-free graphs.
§ PRELIMINARIES
We use standard graph-theoretic notation, and all graphs are simple, loopless, and finite. We consider the edge-set of a graph G to be a subset of \binom{V(G)}{2}, which is the set of all 2-element subsets of V(G). We write uv for an element {u,v} of \binom{V(G)}{2}. A non-edge of G is then a pair of vertices uv which is not in E(G). Given a set F ⊆ \binom{V(G)}{2}, we write G+F for the graph with vertex set V(G) and edge set E(G) ∪ F; so G+F is obtained from G by adding all pairs from F as edges if they were not already present.
Given a graph G and a set of vertices S ⊆ V(G), we write N(S) and N[S], respectively, for the open and closed neighborhood of S in G. That is, N(S) {u ∈ V(G)-S: uv ∈ E(G) for some v ∈ S} and N[S] S ∪ N(S). We do not distinguish between induced subgraphs and their vertex sets, except when it might cause confusion. So we typically use S and G[S] interchangeably. Finally, if v_0, v_1, …, v_k are distinct vertices of G, then we write N(v_0, v_1, …, v_k) for N({v_0, v_1, …, v_k}) and N[v_0, v_1, …, v_k] for N[{v_0, v_1, …, v_k}].
We use the following notation to talk about paths. If X_1, X_2, …, X_k ⊆ V(G), then a P_k of the form X_1 X_2 … X_k is an induced copy of P_k in G so that the first vertex is in X_1, the second vertex is in X_2, and so on. If X_i = {v} for some vertex v, then we may put
v instead of X_i in the sequence denoting the form. For instance, given a vertex v and a set A ⊆ V(G), a P_4 of the form vAAA is one that
starts at a vertex v and has the rest of its vertices in A.
We say that two disjoint sets X,Y ⊆ V(G) are complete if every vertex in X is adjacent to every vertex in Y. If X = {v} for some vertex v, then we say that v and Y are complete. Similarly, we say that two disjoint sets, or a vertex and a set not containing that vertex, are anticomplete if they are complete in the complement of G. The complement of G is denoted by \overline{G}.
The following observation is straightforward and will be often used implicitly.
Let G be a graph, X be a connected subset of V(G), and v ∈ V(G)∖ X be neither complete nor anticomplete to X.
Then there exists a P_3 of the form vXX.
§.§ Logic
In this paper we use the logic , which stands for monadic second-order logic with quantification over edge subsets and modular counting predicates, as a language for expressing graph problems. In this logic we have variables of four sorts: for single vertices, for single edges, for vertex subsets, and for edge subsets. The latter two types are called monadic variables. Atomic formulas of are as follows:
* equality x=y for any two variables x,y of the same sort;
* membership x∈ X, where X is a monadic variable and x is a single vertex/edge variable;
* modular counting predicates of the form |X| ≡ a (mod m), where X is a monadic variable and a,m are integers, m≠ 0; and
* incidence 𝗂𝗇𝖼(x,f), checking whether vertex x is incident to edge f.
Then consists of all formulas that can be obtained from the atomic formulas by means of standard boolean connectives, negation, and universal and existential quantification (over all sorts of variables). This gives the syntax of , and the semantics is obvious.
Note that a formula may have free variables, which are variables not bound by any quantifier. A formula without free variables is called a sentence.
Logic is usually associated with tree-like graphs through the following fundamental result of Courcelle <cit.>: given an n-vertex graph G of treewidth at most k and a sentence ϕ of , one can determine whether ϕ holds in G in time f(k,ϕ)· n, for a computable function f. The proof of this result brings the notion of tree automata to the setting of tree-like graphs, which is a connection that will be also exploited in this work. For an introduction to this area, see the monograph of Courcelle and Engelfriet <cit.>.
§.§ Treewidth and treedepth
We now introduce treedepth because it turns out to be a more natural width parameter than treewidth in the context of P_t-free graphs. It is convenient to begin with some definitions on forests.
A rooted forest is a forest where each component has exactly one specified vertex called its root. The depth of a vertex v ∈ V() is the number of vertices in the unique path from v to a root (so roots have depth 1). The height of is the maximum depth of any of its vertices. A path in is vertical if one of its ends is an ancestor of the other. (We consider each vertex to be both an ancestor and a descendant of itself.) Two vertices are -comparable if they are connected by a vertical path; otherwise they are -incomparable.
An elimination forest of a graph G is a rooted forest such that V() = V(G) and the endpoints of each edge of G are -comparable. The treedepth of G is then the smallest integer d such that G has an elimination forest of height d. Finally, we define the problem (≤ d, ϕ)-MWIS analogously to (≤ k, ϕ)-MWIS, where the only difference
is that G[] is required to have treedepth at most d (instead of treewidth at most k).
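As a concrete illustration (ours, not part of the formal development), the following Python sketch checks whether a given rooted forest is a height-d elimination forest of the subgraph it spans. The input conventions are assumptions made here for readability: an adjacency dictionary mapping each vertex to its set of neighbours, and a parent map with None for roots.

```python
def is_treedepth_structure(adj, parent, d):
    """Check that `parent` (vertex -> parent, None for roots) is a rooted forest
    of height at most d and an elimination forest of the subgraph of the graph
    `adj` (vertex -> set of neighbours) induced by the keys of `parent`."""
    def ancestors(v):
        chain = []                         # v together with all of its ancestors
        while v is not None:
            chain.append(v)
            v = parent[v]
        return chain

    if any(len(ancestors(v)) > d for v in parent):
        return False                       # some vertex has depth larger than d
    # elimination-forest property: every edge inside the structure joins
    # two comparable vertices (one an ancestor of the other)
    for u in parent:
        for w in adj[u]:
            if w in parent and u not in ancestors(w) and w not in ancestors(u):
                return False
    return True
```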
Luckily, in the context of P_t-free graphs, the parameters of
treedepth, treewidth, and degeneracy are functionally equivalent due to the following theorem.
For any integers t and ℓ, there exists an integer d such that if G is a P_t-free graph with degeneracy at most ℓ, then the treedepth of G is at most d.
Theorem <ref> has been discussed in <cit.>, but let us recall the reasoning.
The first step is the following result of <cit.>.
(A graph is C_>t-free if it does not contain a cycle longer than t as an induced subgraph; note that the class of C_>t-free graphs is a proper superclass of the class of P_t-free graphs.)
For every pair of integers ℓ and t, there exists an integer k ∈ (ℓ t)^{𝒪(t)} such that
every C_>t-free graph of degeneracy at most ℓ has treewidth at most k.
Treewidth and treedepth are functionally equivalent on P_t-free graphs by the following result of <cit.>.
For any integer t, if G is a P_t-free graph, then
treedepth(G) ≤ (treewidth(G) + 1)^{t-1}.
Since the property of having treewidth at most k and the property of having treedepth at most d
can be expressed in , we obtain that the (≤ k,ϕ)-MWIS and (≤ d,ϕ)-MWIS
formalisms describe the same class of problems in P_t-free graphs for any fixed t; every (≤ k,ϕ)-MWIS problem
has an equivalent definition as a (≤ d,ϕ')-MWIS for some d and ϕ' depending on k and ϕ,
and vice-versa. Hence, in this paper we can focus on solving problems formulated in the (≤ d,ϕ)-MWIS
formalism.
§.§ Chordal completions and PMCs
Recall that our overall approach is based on potential maximal cliques. We introduce this approach now.
Given a graph G, a set Ω⊆ V(G) is a potential maximal clique (or a PMC) if there exists a minimal chordal completion of G in which Ω is a maximal clique. A chordal completion of G is a supergraph of G which is chordal and has the same vertex-set as G; it is minimal if it has no proper subgraph which is also a chordal completion of G. (Recall that a graph is chordal if it has no holes, where a hole is an induced cycle of length at least 4.) Since chordal completions are obtained by adding edges to G, it is convenient to write them as G+F, where F is a set of non-edges of G.
The following classic result characterizes PMCs.
Given a graph G, a set Ω⊆ V(G) is a PMC if and only if both of the following conditions hold.
* For each component D of G-Ω, N(D) is a proper subset of Ω.
* If uv is a non-edge of G with u,v ∈Ω, then there exists a component D of G - Ω such that u,v ∈ N(D).
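As an illustration of how this characterization can be used computationally (our sketch, not part of the cited result; graphs are given as adjacency dictionaries mapping vertices to neighbour sets), the two conditions can be tested directly, in polynomial time:

```python
from itertools import combinations

def components_of(adj, removed):
    """Connected components of the graph `adj` after deleting the set `removed`."""
    seen, comps = set(removed), []
    for s in adj:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for w in adj[v] if w not in seen and w not in comp)
        seen |= comp
        comps.append(comp)
    return comps

def is_pmc(adj, omega):
    """Test the two conditions of the characterization for a candidate set omega."""
    omega = set(omega)
    comps = components_of(adj, omega)
    nbhds = [set().union(*(adj[v] for v in comp)) & omega for comp in comps]
    if any(nd == omega for nd in nbhds):       # condition 1 fails: N(D) = omega
        return False
    for u, v in combinations(omega, 2):        # condition 2: non-edges are covered
        if v not in adj[u] and not any(u in nd and v in nd for nd in nbhds):
            return False
    return True
```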
Chordal completions in a certain sense correspond to tree decompositions and it is often more convenient to work with the latter. So recall that a tree decomposition of a graph G is a pair (T, β) such that T is a tree, β is a function from V(T) to 2^V(G), and the following conditions are satisfied:
* for each u ∈ V (G), the set {t ∈ V(T): u ∈β(t)} induces a non-empty and connected subtree of T, and
* for each uv ∈ E(G), there is a node t of T such that {u, v} ⊆ β(t).
For a node t of T, the set β(t) is called the bag of t, and for an edge st ∈ E(T), the set β(s) ∩β(t) is called the adhesion of st, and is denoted by σ(st).
It is a folklore result that a graph H is chordal if and only if it has a tree decomposition whose bags are exactly the maximal cliques of H (meaning, in particular, that the number of nodes of the tree is equal to the number of maximal cliques of H). Such a tree decomposition is called a clique tree of H; note that while the set of bags of a clique tree is defined uniquely, the actual tree part of the tree decomposition is not necessarily unique. For example, if H = K_1,s, then there are s maximal cliques (corresponding to edges of H), but they can be arranged into a tree decomposition in essentially an arbitrary manner.
We also remark that a chordal graph on n vertices has at most n maximal cliques, and hence its clique tree has at most n nodes.
We will need some additional facts about clique trees of minimal chordal completions. Let G be a graph. Given a set S ⊆ V(G), a full component of S is a component A of G-S such that N(A) = S. A minimal separator of G is then a set S ⊆ V(G) which has at least two full components.
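In the same spirit, the following minimal sketch (again ours, with the same adjacency-dictionary convention as above) computes the full components of a set S and tests whether S is a minimal separator:

```python
def full_components(adj, S):
    """Return the components of G-S, each flagged by whether it is full (N(A) = S)."""
    S, seen, out = set(S), set(S), []
    for s in adj:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for w in adj[v] if w not in S and w not in comp)
        seen |= comp
        out.append((comp, (set().union(*(adj[v] for v in comp)) & S) == S))
    return out

def is_minimal_separator(adj, S):
    # a minimal separator is a set with at least two full components
    return sum(1 for _, full in full_components(adj, S) if full) >= 2
```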
The next two lemmas were proven in <cit.> using the toolbox from <cit.>. The first one shows how to obtain minimal separators from adhesions.
Let G be a graph, G+F be a minimal chordal completion of G, and (T, β) be a clique tree of G+F. Then for each edge st ∈ E(T), the adhesion σ(st) is a minimal separator of G, and it has full components A and B such that β(s)∖σ(st)⊆ A and β(t)∖σ(st)⊆ B.
Notice that the full component A which satisfies Lemma <ref> is unique given the vertex s and the edge st ∈ E(T). (This uses the fact that β(s)∖σ(st) is non-empty, which holds since β(s) and β(t) are distinct maximal cliques of G+F.) When the graph, chordal completion, and clique tree are clear from context, we call A the full component of σ(st) on the s-side.
Lemma <ref> immediately implies also the following.
Let G be a graph, G+F be a minimal chordal completion of G, and (T, β) be a clique tree of G+F.
Then for every st ∈ E(T), there exists a connected component D of G-β(t) such that N(D) = σ(st)
and D ⊆⋃_t' ∈ V(T_s)β(t') where T_s is the component of T-{t} that contains s.
Use Lemma <ref> and take D to be the full component of σ(st) on the s-side.
The next lemma shows how to obtain minimal separators from PMCs.
Let G be a graph, Ω be a PMC of G, and D be a component of G-Ω. Then N(D) is a minimal separator of G, and it has a full component D^Ω≠ D which contains Ω∖ N(D).
We will also need the following well-known facts about chordal completions.
Let G be a graph and G+F be a minimal chordal completion of G.
Let S ⊆ V(G) be such that (G+F)[S] is a clique.
Then F contains no edges between different connected components of G-S.
Let 𝒟 be the family of connected components of G-S.
For every D ∈𝒟, F ∩ \binom{N[D]}{2} is a chordal completion of G[N[D]] that turns N(D) into a clique.
Since (G+F)[S] is a clique, (F ∩ \binom{S}{2}) ∪⋃_D ∈𝒟 F ∩ \binom{N[D]}{2} is a chordal completion of G.
The claim follows by the minimality of G+F.
Let G be a graph, G+F be a minimal chordal completion of G, and (T, β) be a clique tree of G+F.
Let S be a minimal separator of G such that (G+F)[S] is a clique and let A and B be two full components of S.
Then there exists an edge t_At_B ∈ E(T) such that σ(t_At_B) = S,
A ⊆⋃_t ∈ V(T_A)β(t), B ⊆⋃_t ∈ V(T_B)β(t),
where T_A and T_B are the components of T-{t_At_B} that contain t_A and t_B, respectively.
Let Z_A = {t ∈ V(T) | A ∩β(t) ≠∅} and similarly define Z_B.
Since A and B are connected, Z_A and Z_B are connected in T.
By Lemma <ref>, Z_A ∩ Z_B = ∅.
Let Q be the unique path in T that has one endpoint in Z_A, the second endpoint in Z_B, and all internal vertices outside Z_A ∪ Z_B. Note that the length of Q is at least one.
Let q_A and q_B be the endpoints of Q in Z_A and Z_B, respectively.
Since (T,β) is a tree decomposition of G, N_G[A] ⊆⋃_t ∈ Z_Aβ(t).
Since (T,β) is a clique tree of the chordal graph G+F, we have N_G+F[A] ⊇⋃_t ∈ Z_Aβ(t).
Lemma <ref> implies that N_G[A] = N_G+F[A].
Thus N_G[A] = N_G+F[A] = ⋃_t ∈ Z_Aβ(t) and, similarly, N_G[B] = N_G+F[B] = ⋃_t ∈ Z_Bβ(t).
Since S = N_G[A] ∩ N_G[B], S ⊆β(s) for every s ∈ V(Q).
By the definition of Z_A, we have β(s) ∩ N_G[A] ⊆ S for every s ∈ V(Q) ∖{q_A}.
Hence, if q is the unique neighbor of q_A on Q, then σ(qq_A) = S: indeed, S ⊆β(q_A) ∩β(q) because both nodes lie on Q, while β(q_A) ⊆ N_G[A] and β(q) ∩ N_G[A] ⊆ S give the reverse inclusion.
The lemma follows with t_A = q_A and t_B = q.
§.§ Aligning chordal completions and treedepth structures
Throughout the paper we will try to find a maximal induced subgraph with treedepth at most d. We will do so by considering a fixed elimination forest of this induced subgraph, as well as a chordal completion which “aligns with” the elimination forest. We now formalize these ideas.
Let G be a graph and d be a positive integer. A treedepth-d structure in G is a rooted forest of height at most d such that V() is a subset of V(G) and is an elimination forest of the subgraph of G induced by V(). We sometimes write instead of V() when it is clear that we are working with a set of vertices; in particular, if X is a set of vertices of G, then we write X ∩ instead of X ∩ V(). We say that is maximal if there is no treedepth-d structure ' in G such that is a proper induced subgraph of ' and every root of is a root of '.
Note that if H is a maximal induced subgraph of G of treedepth at most d, and is a height-d elimination forest of that subgraph, then is a maximal treedepth-d structure in G. Consequently, in the context of (≤ d, ϕ)-MWIS, we can consider being in fact a maximal set inducing a subgraph of treedepth at most d in G: if (, X) is an actual solution, then there exists a maximal treedepth-d structure ' that is a superset of and quantification over can be implemented inside ϕ.
(This step is formally explained in Section <ref>.)
Thus, most of the structural results in this work consider the set of all maximal treedepth-d-structures, which are more detailed versions of maximal sets inducing a subgraph of treedepth at most d.
We conclude this section by discussing “aligned” chordal completions and by proving some basic lemmas about them. Let G be a graph, d be a positive integer, and be a treedepth-d structure in G. We say that a chordal completion G+F is -aligned if F does not contain any pair uv so that
* u or v is a depth-d vertex of , or
* u and v are vertices of which are -incomparable.
The second condition equivalently says that is a treedepth-d structure in G+F. First we show that there is always a -aligned minimal chordal completion.
For any positive integer d, graph G, and treedepth-d structure in G, there exists a minimal chordal completion of G that is -aligned.
Let F denote the set of all non-edges uv of G which are not incident to a depth-d vertex of , and are not between two vertices of which are -incomparable. It suffices to prove that G+F is chordal, since every chordal completion of G that is a subgraph of G+F is -aligned.
Going for a contradiction, suppose that C is a hole of G+F. As (G+F)- is a clique, there is a vertex in C ∩; choose one, say u, which has maximum depth in . Consider the two neighbors of u in C; they are either outside of or ancestors of u in . However, the set of all such vertices forms a clique in G+F, which contradicts the fact that C has length at least 4.
Throughout the paper we consider PMCs and minimal separators which might come from an aligned chordal completion. So, to state these definitions, let G be a graph, d be a positive integer, and be a treedepth-d structure in G. A PMC Ω is -avoiding if it is a maximal clique of a minimal chordal completion that is -aligned, and it does not contain any depth-d vertex of . We deal with the case that Ω does contain a depth-d vertex separately, in the next lemma. Finally, a minimal separator S of G is -avoiding if S ∩ is contained in a vertical path of and has no depth-d vertex. (So these are the separators that can come from -avoiding PMCs.)
For a fixed treedepth-d structure , a set Ŷ ⊆ V(G) is a container for a set Y ⊆ V(G) if Y ⊆ Ŷ and Ŷ ∩ = Y ∩, that is, Ŷ ∖ Y is disjoint from .
For each positive integer d, there is a polynomial-time algorithm which takes in a graph G and returns a collection ℒ⊆ 2^V(G) such that for any maximal treedepth-d structure in G, any -aligned minimal chordal completion G+F of G, and any maximal clique Ω of G+F which contains a depth-d vertex of , ℒ contains a set Ω̂ that is a container for Ω, i.e., Ω ⊆ Ω̂ and Ω̂ ∩ = Ω ∩.
We guess the vertex v ∈Ω which is a depth-d vertex of (this vertex is unique). Thus v is adjacent in G to every other vertex of Ω, because Ω is a clique in a -aligned chordal completion. Moreover, v has at most d-1 neighbors in . We then guess the set X of all neighbors of v which are in but are not in Ω. Finally, for all guesses of v and X, we add the set N[v]∖ X to ℒ. This collection ℒ is as desired.
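The guessing in this proof is easy to phrase algorithmically. The following sketch (ours; `depth` is a hypothetical map from the vertices of the structure to their depths, and `adj` follows the earlier adjacency convention) enumerates the collection ℒ; since a depth-d vertex has at most d-1 neighbours inside the structure, the family has polynomial size for fixed d.

```python
from itertools import combinations

def containers_with_deep_vertex(adj, depth, d):
    """Enumerate the family L of the lemma: for every depth-d vertex v of the
    structure and every subset X of its neighbours inside the structure
    (at most d-1 of them), add the candidate N[v] \\ X."""
    family = []
    for v, h in depth.items():
        if h != d:
            continue
        closed = adj[v] | {v}
        inside = [u for u in adj[v] if u in depth]   # <= d-1 neighbours in the structure
        for r in range(len(inside) + 1):
            for X in combinations(inside, r):
                family.append(closed - set(X))
    return family
```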
The final lemma is how we will use the maximality of a treedepth-d structure.
Let G be a graph, d be a positive integer, and be a maximal treedepth-d structure in G. Then for any -avoiding potential maximal clique Ω of G, each vertex in Ω∖ has a neighbor in ∖Ω.
Recall that the set Ω∩ is contained in a vertical path of . Moreover, since Ω is -avoiding, Ω∩ does not contain any depth-d vertex of . So, if a vertex u ∈Ω∖ had no neighbor in ∖Ω, then all neighbors of u in would lie in Ω∩, and we could obtain another treedepth-d structure ' from by attaching u as a child of the deepest vertex of Ω∩ (or as a new root if this set is empty), contradicting the maximality of .
§ DYNAMIC PROGRAMMING
The following definition is the main object of study in this paper.
Let G be a graph and d and k be positive integers.
A family ⊆ 2^V(G) is a tree-depth-d carver family of defect k in G
if for every tree-depth-d structure in G,
there exist a minimal chordal completion G+F of G
and a clique tree (T,β) of G+F such that for each t∈ V(T) there exists C ∈
such that
* C ∩ contains β(t)∩ and has size at most k, and
* each component of G-C is contained in β(t) ∪⋃_s ∈ T'β(s) for some component T' of T-{t}.
Such a set C as above is called a (, (T,β))-carver for β(t) of defect k; it might not be unique. We use this definition independently of that of carver families.
It is important to compare the notion of a carver family with the notion of containers of <cit.>.
There, instead of the properties above, we mandate that |β(t) ∩|≤ k, that C ∩ = β(t) ∩, and that β(t) ⊆ C
(so that, in particular, the choice of the tree T in the clique tree (T,β) is irrelevant for the definition).
These requirements imply that parts (i) and (ii)
of the definition of a carver family hold; for the second part, observe that
if β(t) ⊆ C, then any component of G-C is contained in a component of G-β(t) which, by the properties of a tree decomposition, lies in the union of bags of a single component of T-{t}. The main difference is that in the notion of a carver, we actually allow a carver C to miss some vertices of β(t), as long as this does not result in “gluing” connected components of G-β(t) residing in different subtrees of T-{t} within the same connected component of G-C.
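To make the comparison concrete, the following sketch (ours; the inputs are hypothetical conventions, not notation from the paper) checks the two carver conditions for one bag of a clique tree, given the connected components of G-C precomputed, for instance with the components helper from the earlier sketches.

```python
def is_carver_for_bag(C, sol, bag_t, subtree_bags, comps_G_minus_C, k):
    """Check the carver conditions of defect k for the bag beta(t).
    sol              -- vertex set of the treedepth-d structure,
    bag_t            -- the bag beta(t),
    subtree_bags     -- for each component T' of T-{t}, the union of its bags,
    comps_G_minus_C  -- the connected components of G-C."""
    C, sol, bag_t = set(C), set(sol), set(bag_t)
    # (i) C picks up every solution vertex of the bag and has small defect
    if not (bag_t & sol) <= C or len(C & sol) > k:
        return False
    # (ii) no component of G-C mixes two different components of T-{t}
    #      (a one-node clique tree has no subtrees; then the component must fit in beta(t))
    return all(any(comp <= bag_t | bags for bags in (subtree_bags or [set()]))
               for comp in comps_G_minus_C)
```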
The main result of this section is that a tree-depth-d carver family of small defect in G is enough to design a dynamic programming
routine that solves the (≤ d, ϕ)-MWIS problem on G.
For any positive integers d and k and any formula ϕ, there exists an algorithm that, given a vertex-weighted graph G
and a tree-depth-d carver family ⊆ 2^V(G) of defect k in G, runs in time polynomial in the input size
and either outputs an optimal solution to the (≤ d, ϕ)-MWIS problem on G, or determines that no feasible solution exists.
The remainder of this section is devoted to the proof of Theorem <ref>.
§.§ Canonizing and extending partial solutions
Fix an integer d and let G be a graph. A partial solution in G is any tuple (,X,) such that is a tree-depth-d structure in G
and X ⊆⊆ V().
Very roughly, the dynamic programming routine will have a table with some entries for each partial solution (,X,) such that has at most k leaves (where k denotes the defect). Each of these entries will contain a partial solution (',X',') which “extends” (,X,) into a specified part of the graph. We will update this partial solution (',X',') when we find a “better” one. Sometimes this choice is arbitrary. So, in order to have more control over arbitrary choices, we now introduce a consistent tie-breaking scheme over partial solutions. More formally, we introduce a quasi-order ≼ over partial solutions.
First, fix an arbitrary enumeration of V(G) as v_1,v_2,…,v_|V(G)|.
Second, define a total order ≼_1 on subsets of V(G) as follows: X ≺_1 Y if |X| > |Y|
or if |X| = |Y| and we have v_i ∈ X, where i is the minimum integer such that v_i ∈ X △ Y (i.e., we use the lexicographic order).
Third, define a quasi-order ≼_2 on tree-depth-d structures in G as follows. Given a tree-depth-d structure , associate to
the following tuple of d+1 subsets of V(G):
* V(),
* the set of all vertices of depth 1 in (i.e., the roots),
* the set of all vertices of depth 2 in ,
…
* the set of all vertices of depth d in .
When comparing two tree-depth-d structures with ≼_2, we compare with ≼_1 the first sets in the above tuple that differ.
For two distinct tree-depth-d structures and ', we have ≼_2 ' or ' ≼_2.
However, we may have both ≼_2 ' and ' ≼_2 (i.e., it is possible that, for two different tree-depth-d structures and ', we have V() = V(') and every vertex of V() has the same depth in and in ').
So ≼_2 is only a quasi-order on the set of all tree-depth-d structures in G; it partitions tree-depth-d structures into equivalence classes, and between the equivalence classes it is a total order.
In order to avoid this problem, we will show that we can convert any tree-depth-d structure into one that is “neat”, and that ≼_2 is a total order on “neat” tree-depth-d structures. Formally, a tree-depth-d structure of G is neat if
for any non-root node v of , the graph G has at least one edge joining the parent of v in with a descendant of v in (possibly v itself). One can easily see that this is equivalent to the following condition: for every node v of , the subgraph of G induced by the descendants of v (including v) is connected.
The following lemma is standard when working with elimination forests: any tree-depth-d structure can be adjusted to a neat one without increasing the depth.
Given a graph G and a tree-depth-d structure of G, one can in polynomial time compute a neat tree-depth-d structure ' of G such that V(')=V() and for each v∈ V(), the depth of v in ' is at most the depth of v in .
While possible, perform the following improvement step. If v ∈ V() is such that v is not a root of , but has a parent u,
and the subtree _v of rooted at v does not contain a vertex of N_G(u), then
reattach _v to the parent of u if u is not a root or detach it as a separate component of otherwise. It is immediate that
the new rooted forest is also a tree-depth-d structure of G and, furthermore, that the depths of the elements of _v decreased by one.
This in particular implies that there will be at most |V(G)|^2 improvement steps. Each of them can be executed in polynomial time. Once no more improvement steps are possible, the resulting tree-depth-d structure is neat, as desired.
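The improvement step in this proof is simple enough to spell out. The following sketch (ours; the structure is given as a parent map with None for roots, following the earlier conventions, and is modified in place) implements it.

```python
def make_neat(adj, parent):
    """Repeatedly apply the improvement step: if the subtree rooted at a
    non-root v contains no G-neighbour of v's parent u, reattach that subtree
    to the parent of u (detaching it as a new component when u is a root)."""
    def subtree(v):
        out = set()
        for w in parent:                    # w is a descendant of v exactly when
            x = w                           # its ancestor chain passes through v
            while x is not None and x != v:
                x = parent[x]
            if x == v:
                out.add(w)
        return out

    changed = True
    while changed:
        changed = False
        for v, u in list(parent.items()):
            if u is not None and not (adj[u] & subtree(v)):
                parent[v] = parent[u]       # depths in the subtree drop by one
                changed = True
    return parent
```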
Next, we show that ≼_2 is a total order on neat tree-depth-d structures. In fact we show something slightly stronger: that each neat tree-depth-d structure is in a singleton equivalence class.
If and ' are tree-depth-d structures such that is neat, ≼_2 ', and ' ≼_2, then = '.
Since ≼_2 ' and ' ≼_2, we have that V() = V(') and every vertex has the same depth in and '.
We prove inductively on i that the set of all vertices of depth at least d-i induces the same forest in and '. The base case of i=0 holds since the depth-d vertices are an independent set in both and '. For the inductive step, it suffices to show that each depth-(d-i) vertex v has the same parent in and '. So let u and u' be the parent of v in and ', respectively.
From the inductive hypothesis, the subtrees of and ' rooted at v are equal.
Since is neat, there is a vertex w in this subtree that is adjacent to u in G.
Since ' is an elimination forest, u' and w are comparable in '. Since u' and u have the same depth
in and ', this is only possible if u=u'.
Finally, given two partial solutions (,X,) and (',X',') in a graph G, we say that (,X,) ≼ (',X',') if:
* the weight of X is larger than the weight of X', or
* the weights of X and X' are equal, but X ≺_1 X';
* X = X', but ≺_1 ';
* X = X' and = ', but ≼_2 '.
We say that (,X,) is better than (',X',')
(or that (',X',') is worse than (,X,)) if (,X,) ≼ (',X',') and some comparison above is strict (or, equivalently, if it does not hold that (',X',') ≼ (,X,)).
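For intuition, these comparisons can be realized by sort keys. The sketch below (ours; it assumes a fixed enumeration `order` of V(G), a weight dictionary, and a depth map describing the structure) produces keys whose lexicographic comparison matches the quasi-order, with smaller keys corresponding to better partial solutions; solutions with equal keys are exactly the ≼-equivalent ones.

```python
def set_key(S, order):
    """Key realizing the total order prec_1: larger sets first, then the
    lexicographic order with respect to the fixed enumeration of V(G)."""
    S = set(S)
    return (-len(S), tuple(0 if v in S else 1 for v in order))

def structure_key(depth, d, order):
    """Key realizing preceq_2: the vertex set, then the vertices of depth
    1, 2, ..., d, each compared with prec_1."""
    levels = [set(depth)] + [{v for v, h in depth.items() if h == i}
                             for i in range(1, d + 1)]
    return tuple(set_key(L, order) for L in levels)

def solution_key(weight, X, I, depth, d, order):
    """Key for partial solutions: heavier X first, then X, then I, then the
    structure itself."""
    return (-sum(weight[v] for v in X), set_key(X, order),
            set_key(I, order), structure_key(depth, d, order))
```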
Using this quasi-order, we can now look for a partial solution (, X, ) such that is maximal and neat. This is based on the following observation.
For any X ⊆⊆ V(G) such that G[] has tree-depth at most d, there exists a tree-depth-d structure such that (, X, ) is a partial solution. Moreover, if one chooses so that (, X, ) is ≼-minimal (among all choices of , for fixed X and ), then is maximal and neat.
For the first claim, any depth-d elimination forest of G[] can serve as .
For the second claim, fix a ≼-minimal partial solution (,X,). Since the first comparison is on the sizes of V(), we have that is maximal.
Now, by Lemma <ref>, there exists a neat tree-depth-d structure ' such that V(') = V() and, for each v ∈ V(), the depth of v in ' is at most the depth of v in . Thus ' ≼_2. So, since (',X,) is not better than (,X,), we also have that ≼_2 '. Since ' is neat, Lemma <ref> says that ' =. So is neat, as desired.
It is convenient to conclude this subsection by defining extensions of partial solutions. Roughly, an “extension” of a partial solution (, X, ) in a graph G is any partial solution (', X', ') that can be obtained from (, X, ) by adding new vertices which are not ancestors of any node of . More formally, (', X', ') is an extension of (, X, ) if is an induced subgraph of ', every root of is a root of ', and X' ∩ = X and ' ∩ =. We define extensions of tree-depth-d structures analogously, omitting X and .
We will use the following properties of extensions.
Let G be a graph, d be an integer, and and ' be tree-depth-d structures in G such that ' is neat and extends . Then each connected component of '- V() is neat and induces a connected subgraph of G.
Each connected component of '- V() is obtained by selecting a vertex v ∈ V(') and then taking all descendants of v in ' (including v itself). Any such subtree of ' is neat. For the second part, we observe that any neat tree-depth-d structure with just one root induces a connected subgraph of G.
§.§ Threshold automata
Next, we introduce threshold automata, which capture through an abstract notion of a computation device, the idea of processing a labelled forest in a bottom-up manner using a dynamic programming procedure.
As we will comment on, the design of this automata model follows standard constructions that were developed in the 90s.
We need to introduce some notation before stating the main definitions. For a finite alphabet Σ, a Σ-labelled forest is a rooted forest F where every vertex x∈ V(F) is labelled with an element (x)∈Σ. Similarly, given an unlabelled rooted forest F, we call any function from V(F) to Σ a Σ-labelling of F.
We use the notation · for defining multisets. For a multiset X and an integer τ∈, let X∧τ be the multiset obtained from X by the following operation: for every element e whose multiplicity k is larger than 2τ, we reduce its multiplicity to the unique integer in {τ+1,…,2τ} with the same residue as k modulo τ (that is, we reduce it to k - τ⌊(k - τ - 1)/τ⌋). This definition lets us track at the same time the residue modulo τ of the multiplicity as well as whether the multiplicity is greater than τ or not. For a finite set Q and an integer τ∈, we write (Q,τ) for the family of all multisets with elements from Q, where each element appears at most 2τ times. Note that |(Q,τ)|=(2τ+1)^|Q|.
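A minimal sketch of the reduction X ∧ τ on multisets represented as Counters (ours; it assumes τ ≥ 1):

```python
from collections import Counter

def cap(multiset, tau):
    """The operation X ∧ tau: any multiplicity above 2*tau is reduced to the
    unique value in {tau+1, ..., 2*tau} with the same residue modulo tau."""
    out = Counter()
    for e, k in multiset.items():
        out[e] = k if k <= 2 * tau else k - tau * ((k - tau - 1) // tau)
    return out

# For example, with tau = 3 a multiplicity of 11 is reduced to 5:
# both are congruent to 2 modulo 3, and 5 lies in {4, 5, 6}.
```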
Informally, a threshold automaton is run bottom-up on a Σ-labelled forest F. As it runs, it assigns each vertex of F a state from a finite set Q. The state of the next vertex v ∈ V(F) depends only on (v) and the “reduced” multiset X ∧τ, where X denotes the multiset of the states of all children of v. The accepting condition is similarly determined by “reducing” the multiset of the states of the roots. The formal definition is as follows.
A threshold automaton is a tuple =(Q,Σ,τ,δ,C), where:
* Q is a finite set of states;
* Σ is a finite alphabet;
* τ∈ is a nonnegative integer called the threshold;
* δ: Σ×(Q,τ)→ Q is the transition function; and
* C⊆(Q,τ) is the accepting condition.
For a Σ-labelled forest F, the run of on F is the unique labelling ξ: V(F)→ Q satisfying the following property for each x∈ V(F):
ξ(x)=δ((x),ξ(y) y is a child of x∧τ).
We say that accepts F if
ξ(z) z is a root of F∧τ∈ C,
where ξ is the run of on F.
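The bottom-up run can be phrased directly. The following sketch (ours, with hypothetical interfaces: the forest is a parent map, `A.tau`, `A.delta` and `A.accepting` stand for τ, δ and C, and multisets are Counters reduced by the function `cap` from the sketch above and frozen so that they can index δ and C) computes whether the automaton accepts.

```python
from collections import Counter

def freeze(multiset):
    """Hashable canonical form of a capped multiset."""
    return frozenset(multiset.items())

def run_threshold_automaton(A, parent, label):
    """Run a threshold automaton bottom-up on a labelled forest and report acceptance."""
    children, roots = {x: [] for x in parent}, []
    for x, p in parent.items():
        (roots if p is None else children[p]).append(x)

    def evaluate(x):
        # reduced multiset of the children's states, then one transition step
        capped = cap(Counter(evaluate(y) for y in children[x]), A.tau)
        return A.delta(label[x], freeze(capped))

    return freeze(cap(Counter(evaluate(r) for r in roots), A.tau)) in A.accepting
```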
It turns out that threshold automata precisely characterize the expressive power of over labelled forests. Here, we consider the standard encoding of Σ-labelled forests as relational structures using one binary parent relation and |Σ| unary relations selecting nodes with corresponding labels. Consequently, by over Σ-labelled forests we mean the logic in which
* there are variables for single nodes and for node sets,
* in atomic formulas one can check equality, membership, modular counting predicates, parent relation, and labels of single nodes, and
* larger formulas can be obtained from atomic ones using standard boolean connectives, negation, and both universal and existential quantification over both sorts of variables.
The proof of the next statement is standard,
see for instance <cit.> for a proof in somewhat different terminology and <cit.> for the closely related settings of binary trees and ordered, unranked trees (the proof techniques immediately lift to our setting).
Hence, we only provide a sketch.
Let Σ be a finite alphabet. Then for every sentence φ of over Σ-labelled forests, there exists a threshold automaton with alphabet Σ such that for any Σ-labelled forest F, we have F ⊨ φ if and only if accepts F.
Let the rank q of φ be the product of the quantifier rank of φ (that is, the maximum number of nested quantifiers in φ) and the least common multiple of all moduli featured in modular predicates present in φ.
It is well-known that there is only a finite number of pairwise non-equivalent sentences over Σ-labelled forests with rank at most q. Let then ^q be the set containing one such sentence from each equivalence class. Then ^q is finite, and we may assume that φ∈^q.
Consider a Σ-labelled forest F.
For a vertex x∈ V(F), let F_x be the subtree of F induced by x and all of its descendants. The q-type of F_x is the set of all sentences from ^q which are satisfied in F_x, that is,
^q(F_x){ ψ∈^q | F_x ⊨ψ }.
A standard argument using Ehrenfeucht-Fraïssé games shows that ^q(F_x) is uniquely determined by (x) and the multiset ^q(F_y) y is a child of x∧ q. Similarly, the type ^q(F), defined analogously as above, is uniquely determined by the multiset ^q(F_r) r is a root of F∧ q. This means that we may define a threshold automaton with state set ^q and threshold q so that accepts F if and only if φ∈^q(F), which is equivalent to F ⊨ φ.
We would like to use Lemma <ref> in order to verify that a given solution (,X) to (≤ d,ϕ)-MWIS indeed is such that G[] satisfies ϕ(X). For this, our dynamic programming tables will be indexed not only by partial solutions of the form (,X,), but also by guesses on “partial evaluation” of ϕ that occurs outside of V(); or more formally, by an appropriate multiset of states of a threshold automaton associated with ϕ. For this, we need to understand how to run threshold automata on treedepth-d structures rather than just labelled forest. This will be done in a standard way: by labelling the forest underlying a treedepth-d structure so to encode through the labels. This idea is formalized in the next definition.
Let d be an integer and Σ be a finite alphabet. Then a (d,Σ)-labeller is a polynomial-time algorithm Λ that, given a graph G with a partial solution (, X, ) for the (≤ d,ϕ)-MWIS problem, computes a Σ-labelling of such that for every v ∈ V(), the label of v depends only on:
* the integer h ∈{1,2,…, d} such that v has depth h in ,
* the set of all indices i ∈{1,2,…, h-1} such that v is adjacent, in G, to the unique ancestor of v in with depth i, and
* which of the sets X and contain v.
That is, if we run Λ again on another G' and (', X', '), then any vertex v' ∈ V(') with the same properties from above as v is labelled the same as v.
When Λ and G are clear from context, we write _(, X, ) for the Σ-labelling on which is returned by running Λ on G and (, X, ). A key aspect of this definition is that, if (', X', ') is a partial solution which extends (, X, ), then each vertex v ∈ V() satisfies _(, X, )(v) = _(', X', ')(v).
We are now ready to state the main proposition of this subsection.
propositionpropThresh
Given a fixed (≤ d,ϕ)-MWIS problem, there exists a finite alphabet Σ, a (d,Σ)-labeller Λ, and a threshold automaton with alphabet Σ such that for any partial solution (, X, ) in any graph G, we have that (, X) is feasible for (≤ d,ϕ)-MWIS in G if and only if accepts the Σ-labelled forest obtained from by equipping it with _(, X, ).
We first prove several lemmas, and then we prove Proposition <ref> by combining them. It is straightforward to rewrite formulas to obtain the following lemma.
For any d ∈ and formula ϕ over the signature of graphs with one free vertex set variable, there exists a formula φ over the signature of graphs with two free vertex set variables such that for any partial solution (,X,) in any graph G, we have that (,X) is feasible for (≤ d,ϕ)-MWIS in G if and only if G[] ⊨ φ(X, ).
We now show how to obtain an alphabet Σ and a (d, Σ)-labeller which lets us get rid of the graph entirely. That is, we will reduce the given sentence to a sentence in over a Σ-labelled forest.
For any d∈, there exist a finite alphabet Σ and a (d,Σ)-labeller Λ so that the following holds. For any formula φ of over graphs with two free vertex set variables, there exists a sentence φ of over Σ-labelled forests such that for any partial solution (, X, ) in any graph G,
G[] ⊨ φ(X, ) if and only if ⊨ φ,
where is the Σ-labelled forest obtained from by equipping it with _(, X, ).
We let Σ={1,…,d}×{0,1}^{1,…,d}×{0,1}^2, where the second coordinate is treated as a function from {1,…,d} to {0,1}; note that |Σ|=d· 2^{d+2}.
Consider a graph G and a partial solution (, X, ) in G. We now define the (d,Σ)-labeller Λ. Consider any x∈ V(). Let h be the depth of x in . Let f be the function from {1,…,d} to {0,1} defined as follows: for i≥ h we set f(i)=0, and for i<h we set f(i)=1 if and only if x is adjacent to the unique ancestor of x in that has depth i.
Let 1_X and 1_ equal 1 if x is in X or , respectively, and 0 otherwise. Then we set
(x) (h,f,1_X, 1_).
Note that this labelling function can be computed from G[] and (, X, ) in polynomial time. Moreover, this algorithm is a (d, Σ)-labeller. Let denote the Σ-labelled forest obtained from by equipping it with this labelling.
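A direct transcription of this labelling (ours; the parent, depth and adjacency conventions are the same hypothetical ones as in the earlier sketches) is:

```python
def label_of(x, adj, parent, depth, X, I, d):
    """Sigma-label of a vertex x of the structure: its depth h, the 0/1-vector f
    recording which ancestor depths carry a G-neighbour of x, and the two
    membership bits for X and I."""
    h = depth[x]
    f = [0] * (d + 1)                      # positions 1..d; entries for i >= h stay 0
    anc = parent[x]
    while anc is not None:                 # walk up the ancestor chain of x
        if anc in adj[x]:
            f[depth[anc]] = 1              # the depth-i ancestor is adjacent to x
        anc = parent[anc]
    return (h, tuple(f[1:]), int(x in X), int(x in I))
```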
We now apply the following syntactic transformation to φ in order to obtain a sentence φ of over Σ-labelled forests.
* For every quantification over an edge e, replace it with a quantification over the pair x,y of its endpoints, followed by a check that x and y are indeed adjacent. Since the depth of is at most d, which is a constant, this check can be performed using a first-order formula as follows: verify that x and y are in the ancestor-descendant relation in , retrieve the depth of x and y in from their labels, and check that the label of the deeper of those two nodes contains information that the shallower one is adjacent to it.
* Replace each atom expressing that a vertex z is incident to an edge e by a disjunction checking that z is one of the endpoints of e.
* For every quantification over an edge set, say ∃ Y, replace it with quantification of the form ∃ Y_1 ∃ Y_2 … ∃ Y_d-1, where Y_i is interpreted as the set of all the deeper endpoints of those edges from Y whose shallower endpoint has depth i. This quantification is followed by checking that for each x∈ Y_i, indeed x is adjacent to its unique ancestor at depth i; this information is encoded in the label of x.
* Replace each atom e∈ Y, where e is an edge variable and Y is an edge set variable, with a disjunction over i∈{1,…,d} of the following checks: denoting the endpoints of e by x and y, either x is at depth i and y∈ Y_i, or vice versa.
* Replace each check x ∈ X or x ∈ with the corresponding check of the third or fourth coordinate of the label of x.
It is straightforward to see that the sentence φ obtained in this manner satisfies the desired property. This completes the proof of Lemma <ref>.
We complete this section by proving Proposition <ref>, which is restated below for convenience.
*
Fix d and ϕ. By Lemma <ref>, there exists a formula φ over the signature of graphs with two free vertex set variables such that for any partial solution (,X,) in any graph G, we have that (,X) is feasible for (≤ d,ϕ)-MWIS in G if and only if G[] ⊨ φ(X, ). By Lemma <ref>, there exist a finite alphabet Σ, a (d,Σ)-labeller Λ, and a sentence φ of over Σ-labelled forests such that for any partial solution (, X, ) in any graph G,
G[] ⊨ φ(X, ) if and only if ⊨ φ,
where is the Σ-labelled forest obtained from by equipping it with _(, X, ).
Finally, by Lemma <ref>, there exists a threshold automaton with alphabet Σ such that for any Σ-labelled forest F, we have F ⊨ φ if and only if accepts F. Proposition <ref> follows.
§.§ The algorithm
Fix integers d and k and a formula ϕ. By Proposition <ref>, there exists a finite alphabet Σ, a (d,Σ)-labeller Λ, and a threshold automaton = (Q_,Σ, τ_, δ_, C_) such that for any partial solution (, X, ) in any graph G, we have that (, X) is feasible for (≤ d,ϕ)-MWIS in G if and only if accepts the Σ-labelled forest obtained from by equipping it with the labelling _(, X, ). The algorithm will make use of Σ, Λ, and .
For convenience, we say that a multistate assignment of a rooted forest F is any function ξ:{∅}∪ V(F) →(Q_,τ_). Consider a multistate assignment ξ of a tree-depth-d structure . Essentially, we use ξ to specify the desired behavior of an extension of a partial solution (, X, ). In order to combine two extensions, sometimes we need to combine two multistate assignments ξ_1 and ξ_2 of a rooted forest F. So we write ξ_1 ∪ξ_2 for the multistate assignment of F defined by setting (ξ_1 ∪ξ_2)(v) (ξ_1(v) ∪ξ_2(v)) ∧τ_ for each v ∈{∅}∪ V(F).
Now let k be an integer, G be a graph, and ⊆ 2^V(G) be a tree-depth-d carver family of defect k in G. A template is a tuple σ = (,X,, C, D, ξ) such that
* (, X, ) is a partial solution in G,
* C ∈,
* D is a subset of V(G) which is a union of zero or more components of G-C, and
* ξ is a multistate assignment of .
We say that σ is simple if has at most k leaves and D is a component of G-C. A
(simple) pre-template is a tuple α = (, X, , C) as in the definition of a (simple) template, except with D and ξ omitted. We say that a template σ = (,X,, C, D, ξ) is over the pre-template (, X, , C).
The dynamic programming algorithm stores a table M that has an entry M[σ] for each simple template σ. We observe that the table has 𝒪(||· |V(G)|^{dk+1}) entries, where the constant hidden in the big-𝒪 notation depends on d, k, and ϕ. We initialize the value of each entry M[σ] to a symbol . As the algorithm proceeds, M[σ] will be updated to contain a partial solution (', X', ') which is a “valid extension” (defined formally in the next paragraph) of σ. We only update M[σ] when we discover a new valid extension better than the old one according to ≼; we use the convention that every valid extension is better than .
Now, let σ = (,X,, C, D, ξ) be a template (which may or may not be simple). Then a valid extension of σ is any extension (',X',') of (, X, ) such that V(')∖ V() ⊆ D and, if ξ':V(') → Q_𝒜 denotes the run of 𝒜 on the Σ-labelled forest obtained from ' by equipping it with _(', X', '), then
ξ'(z) | z is a root of ' but not of ∧τ_ = ξ(∅),
and, for every v ∈ V(),
ξ'(z) | z is a child of v in ' but not in ∧τ_ = ξ(v).
Note that if z is a child of v in ' but not in , then z∉ V() since, if it was, then it would have the same parent in ' and by the definition of extensions.
The following observation about combining extensions is the crucial building block of the algorithm. To state the lemma, we need to know when we can combine two tree-depth-d structures and ' in a graph G. So we say that and ' are compatible if the sets V()∖ V(') and V(')∖ V() are anticomplete in G, and each vertex in V() ∩ V(') has the same parent in and '. (We think of the empty set as being the parent of a root; so in particular this means that every ancestor of a vertex in V() ∩ V(') is also in V() ∩ V(').) If and ' are compatible, then there is a unique tree-depth-d structure, which we denote by ∪', such that:
* the vertex set of ∪' is V() ∪ V('),
* each vertex in V() has the same parent in ∪' and , and
* each vertex in V(') has the same parent in ∪' and '.
Note that we can check if and ' are compatible, and find ∪' if they are, in polynomial time. Now we are ready to state the key lemma.
Let σ_1 = (, X, , C, D_1, ξ_1) and σ_2 = (, X, , C, D_2, ξ_2) be two templates over the same pre-template. Suppose that D_1 and D_2 are disjoint and that (_i, X_i, _i) is a valid extension of σ_i for i=1,2. Then _1 and _2 are compatible and (_1 ∪_2, X_1 ∪ X_2, _1 ∪_2) is a valid extension of (, X, , C, D_1 ∪ D_2, ξ_1 ∪ξ_2).
Moreover, if for i=1,2, (_i', X_i', _i') is a valid extension of σ_i which is not worse than (_i, X_i, _i), then (_1' ∪_2', X_1' ∪ X_2', _1' ∪_2') is not worse than (_1 ∪_2, X_1 ∪ X_2, _1 ∪_2).
Observe that since D_1 and D_2 are disjoint and each of them is a union of components of G-C, they are also anticomplete. So the sets V(_1) ∖ V() and V(_2) ∖ V() are also disjoint and anticomplete. So V(_1) ∩ V(_2) = V() and, by the definition of extensions, it follows that _1 and _2 are compatible and that (_1 ∪_2, X_1 ∪ X_2, _1 ∪_2) is an extension of the partial solution (, X, ). It is also clear that V(_1 ∪_2)∖ V() is a subset of D_1 ∪ D_2.
Now it just remains to consider the run ξ' of on the Σ-labelled forest obtained from _1 ∪_2 by equipping it with the labelling _(_1 ∪_2, X_1 ∪ X_2, _1 ∪_2). For this, observe that every component of the graph _1 ∪_2- V() is either a component of _1 - V()
or a component of _2 - V(). Hence, for i=1,2, the function ξ' gives the same state to each vertex in V(_i) ∖ V() as does the run of on _i and _(_i, X_i, _i). The first part of the lemma now follows from the fact that, for any disjoint multisets A and B whose elements are in Q_𝒜, we have that (A ∪ B) ∧τ_𝒜 = ( (A ∧τ_𝒜) ∪ (B ∧τ_𝒜)) ∧τ_𝒜.
The second part of the lemma follows immediately from the used total ordering of partial
solutions.
§.§.§ Subroutine
Given as input a simple pre-template (,X,, C) and a sequence (D_i)_i=1^r of pairwise distinct components of G-C,
we define the following subroutine.
For each j ∈{0,1,…, r}, we set D_≤ j⋃_i=1^j D_i. (So D_≤ 0 is the empty set.)
The subroutine creates an auxiliary table M' with an entry M'[j,ξ] for every j ∈{0,1,…, r} and every multistate assignment ξ of .
Each entry M'[j, ξ] will be either the symbol , or a valid extension of the template (, X, , C, D_≤ j, ξ).
Initially all cells are set to .
For j=0, there is only one multistate assignment ξ of such that the template (, X, , C, ∅, ξ) might have a valid extension, and that is the function ξ≡∅. The unique valid extension is (,X,); so we set M'[0, ξ≡∅] (,X,). Then, for j=1,2,…,r, we fill the cells M'[j,·] as follows.
We iterate over all multistate assignments ξ_< and ξ_= of , and, if neither M'[j-1, ξ_<] nor M[(, X, , C, D_j, ξ_=)] is , then we apply Lemma <ref> to combine them into a valid extension (',X',') of (, X, , C, D_≤ j, ξ_< ∪ξ_=). If this extension is better than the previous value of M'[j,ξ_< ∪ξ_=], then we set M'[j,ξ_< ∪ξ_=] (',X',').
This finishes the description of the subroutine.
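For orientation, here is the subroutine written out as a sketch (ours; all interfaces are hypothetical stand-ins: `base` is the partial solution of the pre-template, `empty_xi` its all-empty multistate assignment, `entries(D)` yields the pairs (ξ, stored extension) of the main table for the component D, `union_xi` is the capped pointwise union, `combine` merges two valid extensions as in the combination lemma above, and `better` is the tie-breaking comparison).

```python
def subroutine(base, empty_xi, comps, entries, union_xi, combine, better):
    """Fill the auxiliary table M'[j, xi] for the prefixes D_1 ∪ ... ∪ D_j."""
    Mp = {(0, empty_xi): base}
    for j, D in enumerate(comps, start=1):
        for (prev_j, xi_lt), ext_lt in list(Mp.items()):
            if prev_j != j - 1:
                continue                      # only combine with the previous column
            for xi_eq, ext_eq in entries(D):
                key = (j, union_xi(xi_lt, xi_eq))
                candidate = combine(ext_lt, ext_eq)
                if key not in Mp or better(candidate, Mp[key]):
                    Mp[key] = candidate
    return Mp
```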
§.§.§ Outline
In a preliminary phase, the algorithm iterates over every simple template σ = (, X, , C, D, ξ) such that ξ≡∅. Then it sets M[σ] (, X, ); note that (, X, ) is a valid extension of σ which is better than .
In the main phase, the algorithm performs |V(G)| loops. In each loop, it iterates over every simple template σ = (, X, , C, D, ξ) and simple pre-template α_0 = (_0, X_0, _0, C_0) such that and _0 are compatible, X ∩∩_0 = X_0 ∩∩_0, and ∩∩_0 = _0 ∩∩_0. The algorithm will try to find a valid extension of σ which is better than M[σ]. The building blocks for constructing this valid extension of σ will be the valid extensions M[σ_0] where σ_0 is a simple template over α_0. In fact we will be slightly more restrictive about which components of G-C_0 we are allowed to “extend α_0 into”.
We call a component D_0 of G-C_0 useless if ∪_0 is a maximal tree-depth-d structure in the subgraph of G induced by V()∪ V(_0) ∪ (D ∩ D_0); we call D_0 useful otherwise. Note that if D_0 is useful, then in particular D ∩ D_0 is non-empty. We now execute the subroutine on the simple pre-template α_0 and the useful components of G-C_0, ordered arbitrarily. (If there are no useful components, then we still execute the subroutine on the empty sequence.) The subroutine returns an array M'. Write r for the number of useful components of G-C_0 and U ⊆ V(G) for their union. Then iterate over all multistate functions ξ_0 of _0 such that M'[r, ξ_0] ≠. Thus M'[r, ξ_0] is a valid extension of (_0, X_0, _0, U, ξ_0), which we denote by (_0', X_0', _0').
Now, let A denote the set of all vertices of _0' which are an ancestor, in _0', of at least one vertex in D. If _0'[A] and are compatible, and if the tuple
(_0'[A] ∪, (X_0' ∩ A) ∪ X, (_0' ∩ A) ∪)
is a valid extension of σ, then update M[σ] to the above if it is better than the previous value of M[σ]. This can be done in polynomial time.
After completing the main phase consisting of |V(G)| loops as above, the algorithm performs the following finalizing step, which is very similar to the above routine except without σ. So, for every simple pre-template α = (C, , X, ), we execute the subroutine on α and the components of G-C in an arbitrary order. The subroutine returns an array M'. Then, writing r for the number of components of G-C, we iterate over all multistate functions ξ of such that M'[r, ξ] ≠. We then check if the valid extension M'[r, ξ] is a feasible solution to the problem. That is, if M'[r, ξ] = (',X','), we check whether 𝒜 accepts the Σ-labelled forest obtained from ' by equipping it with the labelling _(', X', '). By Proposition <ref>, this is equivalent to (', X') being feasible for (≤ d,ϕ)-MWIS in G. Finally, we return the best
solution found, or that there is no solution if none was found.
This concludes the description of the algorithm. Clearly, it runs in 𝒪(||^2 · |V(G)|^{2dk+𝒪(1)}) time. It remains to prove correctness.
§.§ Correctness
We may assume that the (≤ d,ϕ)-MWIS problem is feasible since the algorithm checks for feasibility before returning a solution. So there exists a partial solution (, X, ) which is ≼-minimal among all partial solutions (', X', ') such that (', X') is feasible for (≤ d,ϕ)-MWIS in G. By Lemma <ref>, we have that is maximal and neat, and X has maximum possible weight among all feasible solution for (≤ d,ϕ)-MWIS in G. By Lemma <ref>, there is no other partial solution in the same equivalence class of ≼ as (, X, ).
Since is a tree-depth-d carver family of defect k in G, there exists a minimal chordal completion F
and a clique tree (T,β) of G+F as in Definition <ref>.
That is, for each t ∈ V(T), we can fix a set of vertices C_t∈ such that
(i) C_t∩ contains β(t) ∩ and has size at most k, and
(ii) for each component D of G-C_t, there exists a component T' of T-{t}
such that D is contained in β(t) ∪⋃_s ∈ T'β(s).
We root T in an arbitrary node. Then consider a fixed node t ∈ V(T). We say that a child component of t is any component of G-C_t which is contained in the union of all bags β(s) such that s is a descendant of t in T (including s itself). We define a partial solution (_t, X_t, _t) corresponding to t as follows. Let _t be the subgraph of induced by all vertices which are an ancestor of at least one vertex in C_t. Set X_t X ∩_t and _t ∩_t. It is convenient to write α_t (_t, X_t, _t, C_t); so α_t is a simple pre-template. Finally, let h_t be the height of t in the subtree of T rooted at t; so the leaves of T have h_t=1, for instance.
We will show that after h_t iterations of the algorithm, the following holds for each child component D of t: there exists a multistate function ξ_t,D of _t such that M[(α_t, D, ξ_t,D)] is precisely the partial solution “induced by the ancestors of D ∪ C_t in (, X, ).” This lemma, which is stated as Lemma <ref>, will essentially complete the proof. (After |V(G)| rounds, we will consider the child components of the root node of T.) However, it is convenient to give some more definitions before stating the lemma.
So consider a fixed node t ∈ V(T) and a fixed set D ⊆ V(G) which is the union of zero or more components of G-C_t. First we define a partial solution (_t,D, X_t,D, _t,D) as follows. Let _t,D be the subgraph of induced by all vertices which are an ancestor of at least one vertex in D∪ C_t. Set X_t,D X ∩_t,D and _t,D∩_t,D. We note that V(_t,D)∖ V(_t) is actually contained in D. To see this, observe that by Lemma <ref>, since is neat and extends _t, each component of -V(_t) induces a connected subgraph of G. Therefore each component of - V(_t) is either disjoint from or contained in D.
Finally, let ξ_t,D denote the multistate function of _t defined as follows. If ξ is the run of on _t,D equipped with _(_t,D, X_t,D, _t,D), then we set:
ξ_t,D(∅) ξ(z) | z is a root of _t,D but not of _t ∧τ_,
and, for every v ∈ V(_t),
ξ_t,D(v) = ξ(z) | z is a child of v in _t,D but not in _t ∧τ_.
Notice that (_t,D, X_t,D, _t,D) is a valid extension of (α_t, D, ξ_t,D); denote the latter by σ_t,D. So, if D is a component of G-C_t, then σ_t,D is a simple template.
Our tie-breaking quasi-order and the choice of (, X, ) imply that, in fact, (_t,D,X_t,D, _t,D) is the unique
≼-minimal valid extension of σ_t,D.
Let t ∈ V(T) and let D⊆ V(G) be the union of zero or more components of G-C_t. Then (_t,D, X_t,D, _t,D) is the only valid extension of σ_t,D which is not worse than (_t,D, X_t,D, _t,D).
Let (', X', ') be a valid extension of σ_t,D which is not worse than (_t,D, X_t,D, _t,D). Let D_0 denote the union of all components of G-C_t which are not in D. We already observed that (_t,D_0, X_t,D_0, _t,D_0) is a valid extension of σ_t,D_0. So, since D and D_0 are disjoint, Lemma <ref> tells us that ' and _t,D_0 are compatible, and that the component-wise union of (', X', ') and (_t,D_0, X_t,D_0, _t,D_0) is a valid extension of (α_t, D ∪ D_0, ξ_t, D∪ξ_t, D_0). Also by Lemma <ref>, this valid extension is not worse than the component-wise union of (_t,D, X_t,D, _t,D) and (_t,D_0, X_t,D_0, _t,D_0). The latter equals (, X, ) and is a valid extension of that same template (α_t, D ∪ D_0, ξ_t, D∪ξ_t, D_0).
In general, the runs of on any two valid extensions of the same template are the same. By Proposition <ref>, the run of determines whether a partial solution yields a solution to (≤ d, ϕ)-MWIS on G. So, by the choice of (, X, ) and by Lemma <ref> applied to the tree , which is neat, we find that
(' ∪_t,D_0, X' ∪ X_t,D_0, ' ∪_t,D_0) = (, X, ).
It follows that (', X', ') = (_t,D, X_t,D, _t,D), as desired.
We are now ready to prove the main lemma.
Let t ∈ V(T), and assume that at least h_t iterations of the algorithm have been executed.
Then for any child component D of t, we have M[σ_t,D] = (_t,D, X_t,D, _t,D). Furthermore, if the subroutine is executed on α_t and any sequence of child components of t, then, where we write M' for the array which is returned, r for the number of child components under consideration, and U for their union, we have M'[r,ξ_t,U] = (_t,U, X_t,U, _t,U).
We may assume that the lemma holds for every child of t by induction on h_t. We will argue about the first claim of the lemma for the node t. Note that the second claim follows
from the first claim and Lemmas <ref> and <ref> (using induction on r). So, fix a child component D of t. Note that we only have to show that M[σ_t,D] is set to (_t,D, X_t,D, _t,D) at some point; Lemma <ref> implies that, once this occurs, M[σ_t,D] is never changed.
For the base case of h_t = 1, we have that t is a leaf of T and D ⊆β(t). So, since C_t ∩ contains β(t) ∩ by the definition of a carver family, the set D ∩ is empty. Thus _t,D=_t and ξ_t, D≡∅. It follows that, in the preliminary phase, we set M[σ_t,D] = (_t, X_t, _t), as desired. So we may assume that h_t>1.
Thus, using the definition of a carver family, there exists a child s of t in T such that we have D⊆β(t) ∪⋃_t' ∈ T'β(t'), where T' denotes the component of T-{t} which contains s. (If D ⊆β(t), then there may be more than one such vertex s, and we choose s arbitrarily.) We focus on the h_t-th iteration of the algorithm and the moment when the algorithm considers
the simple template _t,D and the simple pre-template α_s. Note that _t and _s are compatible, and that _t ∪_s is precisely the subgraph of induced by the ancestors of C_t ∪ C_s. Recall that a component D_s of G-C_s is useful if _t ∪_s is not a maximal tree-depth-d structure in the subgraph of G induced by V(_t) ∪ V(_s) ∪ (D ∩ D_s).
We need the following key observation.
Every useful component of G-C_s is a child component of s.
Suppose towards a contradiction that D_s is a useful component of G-C_s which is not a child component of s. Then the definition of a carver family tells us that D_s ⊆β(s) ∪⋃_t' ∈ T'β(t'), where T' is the component of T-{s} which contains t. Since D is a child component of t, we have that D ∩ D_s ⊆β(t) ∪β(s).
Since D ∩ D_s is disjoint from C_t ∪ C_s, and the latter contains all vertices of (β(t) ∪β(s)) ∩, we also have that D ∩ D_s is disjoint from V(). Furthermore, D ∩ D_s is the union of some subset of components of G-(C_t ∪ C_s). By Lemma <ref>, since is neat, each component of -(V(_t) ∪ V(_s)) induces a connected subgraph of G; so the vertex set of each such component is either contained in or disjoint from D ∩ D_s. Hence, the maximality of implies that _t ∪_s is also a maximal tree-depth-d structure in the subgraph of G induced by V(_t) ∪ V(_s) ∪ (D ∩ D_s). This contradicts the fact that D_s is useful.
As in the outline of the algorithm, let D_1,…,D_r be the useful components of G-C_s, in an arbitrary order. Claim <ref> implies that every D_j is a child component of s.
Hence, from the inductive hypothesis, at the beginning of the h_t-th iteration we have, for every 1 ≤ j ≤ r, that M[σ_s,D_j] = (_s,D_j, X_s,D_j, _s,D_j). We now claim the following.
In the run of the subroutine, we have for every 0 ≤ j ≤ r, that
M'[j,ξ_s,D_≤ j] = (_s,D_≤ j, X_s,D_≤ j, _s,D_≤ j).
We prove the claim by induction on j. For j=0 the claim holds since ξ_s,∅≡∅ and thus M'[0,ξ_s,∅] = (_s,X_s, _s).
For j > 0, from the inductive hypothesis on j we have M'[j-1, ξ_s, D_≤ j-1] = (_s,D_≤ j-1, X_s,D_≤ j-1, _s,D_≤ j-1)
and from before, we have M[σ_s,D_j] = (_s,D_j, X_s,D_j, _s,D_j).
Hence, the partial solution (_s,D_≤ j, X_s,D_≤ j, _s,D_≤ j) is considered for M'[j,ξ_s,D_≤ j];
Lemma <ref> ensures that it is assigned there and stays till the end. This proves the claim.
After the subroutine is executed, the algorithm iterates over every multistate function ξ_s of _s and attempts to use M'[r, ξ_s] to find a better valid extension of σ_t,D than M[σ_t,D]. By Lemma <ref>, it suffices to prove that when ξ_s,D_≤ r is considered, the resulting valid extension (_t,D, X_t,D, _t,D) of σ_t,D is found.
By Claim <ref> we have M'[r,ξ_s,D_≤ r] = (_s,D_≤ r, X_s,D_≤ r, _s,D_≤ r). As in the outline of the algorithm, let A denote the set of all vertices of _s,D_≤ r which are an ancestor of at least one vertex in D. Note that _s,D_≤ r[A]=[A], that [A] and _t are compatible, and that [A] ∪_t is precisely the subtree of induced by the ancestors of vertices in D ∩ (D_≤ r∪ C_s) and C_t. Thus, it just remains to show that this induced subtree is _t, D, or, equivalently, that every vertex in (D∩)∖(C_s ∪ C_t) is in a useful component of G-C_s (i.e., in D_≤ r). This holds by the definition of useful components, because such a vertex can be added to the tree-depth-d structure _s ∪_t. This finishes the proof of Lemma <ref>.
Since (T,β) is a clique tree of G+F, it has at most |V(G)| nodes. Hence, after |V(G)| iterations,
Lemma <ref> can be applied to the root of T, which we denote by t.
Consider now the finalizing step of the algorithm and the moment it considers the pre-template α_t. Let M' denote the computed array, r the number of components of G-C_t, and U the set V(G)∖ C_t. Since every component of G-C_t is a child component of t, Lemma <ref> implies that M'[r, ξ_t,U] = (_t,U,X_t,U, _t,U).
So, as U = V(G) ∖ C_t, we have _t,U = and thus M'[r,ξ_t,U] = (,X,).
As (,X,) is the unique ≼-minimal partial solution such that (, X) is feasible for (≤ d, ϕ)-MWIS in G, the algorithm returns (,X,).
This finishes the proof of Theorem <ref>.
§ MINIMAL SEPARATOR CARVING
Given a graph G and a minimal separator S of G, we say that a set S carves away a component D of G-S if no component of G-S intersects both D and another component of G-S. (We say that two sets intersect if their intersection is non-empty.) In this section we find “carvers” for minimal separators. We break up minimal separators into four different types based on which of their full components can be carved away.
First of all, a minimal separator S is subordinate if there exists a minimal separator S' and two full sides A' and B' of S'
such that S ⊆ S' and some full component of S is disjoint from A' ∪ S' ∪ B'. Notice that any minimal separator which is not subordinate has exactly two full
sides; otherwise we could take S'=S and A' and B' to be two full components of S.
The other three types of minimal separator are based on how many full components are “mesh”. A graph H is mesh if its complement is not connected. Otherwise, the complement of H is connected and we call H non-mesh. We say that a minimal separator S is mesh/mixed/non-mesh (respectively) if S is not subordinate and has exactly 2/1/0 full components which are mesh.
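Both the carving condition and the mesh/mixed/non-mesh trichotomy are finite checks, so they can be experimented with directly on small instances. The sketch below is our own illustration (not code from the paper): graphs are plain Python dictionaries mapping a vertex to its set of neighbours, all function names are ours, and the carving set, written S_hat, is kept distinct from the separator S. Subordination is not tested here, since it quantifies over all other minimal separators.

```python
# Our illustration of the definitions above.  A graph is a dict mapping each
# vertex to its set of neighbours; S_hat plays the role of the carving set,
# which in general is different from the separator S itself.

def components(adj, verts):
    """Connected components of the subgraph induced by `verts`."""
    verts, comps, seen = set(verts), [], set()
    for s in verts:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] & verts - comp)
        seen |= comp
        comps.append(comp)
    return comps

def carves_away(adj, S_hat, D, S):
    """No component of G - S_hat meets both D and another component of G - S."""
    others = [C for C in components(adj, set(adj) - set(S)) if C != set(D)]
    return not any(C & set(D) and any(C & O for O in others)
                   for C in components(adj, set(adj) - set(S_hat)))

def is_mesh(adj, verts):
    """A vertex set is mesh if the complement of its induced subgraph is disconnected."""
    co = {v: (set(verts) - adj[v]) - {v} for v in verts}
    return len(components(co, verts)) > 1

def separator_type(adj, S):
    """'mesh', 'mixed' or 'non-mesh' for a separator with exactly two full components."""
    full = [C for C in components(adj, set(adj) - set(S))
            if set().union(*(adj[v] for v in C)) >= set(S)]
    if len(full) != 2:
        return None
    return {2: "mesh", 1: "mixed", 0: "non-mesh"}[sum(is_mesh(adj, C) for C in full)]
```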
Now we define carvers for minimal separators based on their type.
Let G be a graph, d be a positive integer, be a treedepth-d structure in G, and let S be a -avoiding minimal separator of G. Then a -carver for S is a set S⊆ V(G) such that S∩ = S ∩ and
* if S is subordinate or non-mesh, then S = S;
* if S is mixed, then S carves away the mesh full component of S; and
* if S is mesh, then S carves away every component of G-S.
In this section we show how to find a subset of 2^V(G) which contains carvers for all appropriate and S; see Proposition <ref> for a precise statement. Our approach to proving this proposition is based on the theory of modular decompositions.
A module of a graph G is a set X⊆ V(G) such that every vertex in V(G)∖ X is adjacent to either all of X or none of X. A module is strong if it does not cross any other module, where two sets cross if they intersect and neither is contained in the other.
A strong module is maximal if it is not V(G) and it is not properly contained in any strong module besides V(G).
We do not need the full theory of modular decompositions, just the following fact.
The maximal strong modules of a graph G are disjoint and, if G is mesh, then they are the vertex sets of the components of the complement of G.
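For a mesh graph this fact makes the maximal strong modules easy to obtain: they are exactly the connected components of the complement. The snippet below is our own illustrative code (the names and the dictionary-of-neighbour-sets representation are ours); it includes a brute-force test of the module definition so the statement can be sanity-checked on small graphs.

```python
# Illustrative only: a brute-force module test and, for a mesh graph, its
# maximal strong modules obtained as the components of the complement.

def is_module(adj, X):
    """Every vertex outside X sees either all of X or none of it."""
    X = set(X)
    return all(not (adj[v] & X) or adj[v] >= X for v in set(adj) - X)

def complement_components(adj):
    verts = set(adj)
    co = {v: (verts - adj[v]) - {v} for v in verts}
    comps, seen = [], set()
    for s in verts:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(co[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def maximal_strong_modules_of_mesh(adj):
    comps = complement_components(adj)
    assert len(comps) > 1, "not mesh: the complement is connected"
    assert all(is_module(adj, C) for C in comps)   # sanity check of the fact above
    return comps
```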
We typically guess two vertices which satisfy the following lemma.
Let G be a graph, d be a positive integer, be a maximal treedepth-d structure in G, S be a -avoiding minimal separator, and A be a full component of S. Then A ∩ is non-empty, and there exists a vertex p_A ∈ A which has at most d-1 neighbors in . Moreover, if |A|>1, then there exists a vertex q_A ∈ A which is adjacent to p_A and in a different maximal strong module of A than p_A.
First notice that A contains a vertex in . Otherwise, each vertex a ∈ A would satisfy N(a) ∩⊆ S.
As ∩ S is contained in a vertical path of and does not contain any depth-d vertex of , we could add a to as a leaf without increasing its height beyond d, thus contradicting the maximality of .
Now choose a vertex p_A ∈ A∩ which has maximum depth among all vertices in A ∩.
All vertices in N(p_A) ∩ A ∩ are ancestors of p_A in .
All vertices in N(p_A) ∩ that are descendants of p_A must be in S and thus they are contained in a vertical path of .
This means that all vertices in N(p_A) ∩ are contained in a single vertical path in , which also contains p_A.
Hence, |N[p_A] ∩| ≤ d, thus |N(p_A) ∩| ≤ d-1.
Finally, suppose that |A|>1. Lemma <ref> tells us that the maximal strong modules of A partition A. There is more than one part since |A|>1. So, since A is connected, we can choose a neighbor q_A of p_A which is in a different part from p_A.
We frequently apply the following lemmas from <cit.> to two vertices which come from Lemma <ref>.
Let G be a graph, let S be a minimal separator, let A be a full component of S, and let p_A and q_A be adjacent vertices which are in different maximal strong modules of A. Then for any u ∈ S, at least one of the following conditions holds:
* there is an induced P_4 of the form uAAA,
* at least one of p_A and q_A is adjacent to u, or
* the graph A is mesh, and each of its maximal strong modules is either complete or anticomplete to u.
We note that the outcomes in Lemma <ref> are not exclusive.
The next lemma helps us to take care of minimal separators which are mesh.
Let G be a P_6-free graph, S be a minimal separator, A and B be full mesh components of S, and p_A and q_A (respectively, p_B and q_B) be adjacent vertices which are in different maximal strong modules of A (respectively, B). Then there exist r_A ∈ A and r_B ∈ B so that S ⊆ N(p_A, q_A, r_A, p_B, q_B, r_B).
Note that since A (resp., B) is mesh and p_A and q_A (resp., p_B and q_B) are in different maximal strong modules,
we can equivalently say that N[p_A, q_A, r_A, p_B, q_B, r_B]=A ∪ S ∪ B.
We also need the following simplified version of Lemma <ref> which applies to every type of minimal separator.
Let G be a P_6-free graph, S be a minimal separator, and A and B be two full components of S. Then there exist A' ⊆ A and B' ⊆ B such that |A'|≤ 3, |B'|≤ 3, and S ⊆ N(A' ∪ B').
Note that Lemma <ref> is sufficient to find all subordinate separators.
There is a polynomial-time algorithm which takes in a P_6-free graph G and returns a collection 𝒮_sub⊆ 2^V(G) which contains each subordinate minimal separator.
Let S be a subordinate minimal separator. Then there exists a minimal separator S' and two full sides A' and B' of S' so that S ⊆ S' and some full component of S is disjoint from A' ∪ S' ∪ B'. By Lemma <ref>, there exist A”⊆ A' and B”⊆ B' so that |A”|≤ 3, |B”|≤ 3, and S' ⊆ N(A”∪ B”). We guess[Throughout this paper, by guessing we mean branching into polynomially many choices of fixing the object in question.] A” and B”. Then, for each component D of G-N(A”∪ B”), we insert N(D) into 𝒮_sub. The full component of S which is disjoint from A' ∪ S' ∪ B' is itself such a component D. So 𝒮_sub contains S and |𝒮_sub|≤ |V(G)|^6.
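The enumeration behind this corollary is simple to realise by brute force: range over all pairs of vertex sets A″ and B″ of size at most three and, for each component D of G-N(A″∪B″), record N(D). The sketch below is ours and makes no attempt at efficiency; it follows the proof literally, under the same dictionary-of-neighbour-sets representation as before.

```python
from itertools import combinations

# Our brute-force rendering of the enumeration in the proof: candidate
# subordinate separators are the sets N(D) over components D of
# G - N(A'' ∪ B''), for all vertex sets A'', B'' of size at most 3.

def nbhd(adj, X):
    """Open neighbourhood of the vertex set X."""
    return set().union(*(adj[v] for v in X)) - set(X)

def components(adj, verts):
    verts, comps, seen = set(verts), [], set()
    for s in verts:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] & verts - comp)
        seen |= comp
        comps.append(comp)
    return comps

def candidate_subordinate_separators(adj):
    small = [set(c) for r in range(4) for c in combinations(list(adj), r)]
    out = set()
    for A2 in small:
        for B2 in small:
            for D in components(adj, set(adj) - nbhd(adj, A2 | B2)):
                out.add(frozenset(nbhd(adj, D)))
    return out
```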
Finally, some types of minimal separator can be taken care of very quickly using the following lemma. We state it in a slightly weaker fashion than in <cit.>.
There is a polynomial-time algorithm which takes in a P_6-free graph G and returns a collection ℱ⊆ 2^V(G) which contains each full component of a non-mesh separator of G.
Now we are ready to prove the main result of this section.
For each positive integer d, there exists a polynomial-time algorithm which takes in a P_6-free graph G and returns a collection 𝒮⊆ 2^V(G) such that for any maximal treedepth-d structure in G and any -avoiding minimal separator S, the collection 𝒮 contains a -carver for S.
Let d, G, , and S be as in the statement of the proposition. We will show how to construct 𝒮 by making “guesses” among polynomially-many options. We will separately consider four cases, depending on the type of S. We output the collection 𝒮 consisting of all sets S constructed as described below.
Case 1. S is subordinate. Recall that in Corollary <ref> we constructed the family 𝒮_sub that contains all subordinate minimal separators S. As S is a -carver for S, it is sufficient to include 𝒮_sub in the output family 𝒮.
Case 2. S is non-mesh. By Lemma <ref> we can, in polynomial time, find a collection ℱ⊆ 2^V(G) which contains each full component of a non-mesh separator of G. For each D ∈ℱ, we insert N(D) into 𝒮. So 𝒮 contains S, which is a -carver for S.
From now on we may assume that S is either mixed or mesh. Thus S has exactly two full components, and at least one of them is mesh. Let A and B be the two full components of S. (We are not guessing A and B, we are just giving them names.) Next, guess the vertices in S ∩ (there are at most d-1 of them) and add them to a set S. As we proceed throughout the proof, we will add more and more vertices of V(G)∖ to S. Thus we will always have that S∩ = S ∩, and we are trying to show that S eventually becomes a -carver for S.
By Lemma <ref>, there exists a vertex p_A ∈ A (respectively, p_B ∈ B) which has at most d-1 neighbors in . For convenience, write T_A N(p_A)∩ A ∩ and T_B N(p_B)∩ B ∩. We guess the vertices p_A and p_B and the sets T_A and T_B. We then add the vertices in N(p_A)∖ T_A and N(p_B)∖ T_B to S.
If either A or B has size one, then this set S already contains S and is therefore a -carver for S. So we may assume that |A|>1 and |B|>1. Thus, by Lemma <ref>, there exists q_A ∈ A (respectively, q_B ∈ B) so that p_A and q_A (respectively, p_B and q_B) are adjacent vertices which are in different maximal strong modules of A (respectively, B). We add every vertex which is in both N({p_A, q_A}∪ T_A) and N({p_B, q_B}∪ T_B) to S; note that these newly added vertices are a subset of S.
It is helpful to state the following observation; note that it will hold even after we add more vertices to S.
(1) Each vertex u ∈ S ∖S is non-adjacent to p_A and p_B and therefore in a P_3 of the form uAA and in a P_3 of the form uBB.
In particular, note that when applying Lemma <ref> for any u ∈ S ∖S and A (resp., B) we never obtain the first outcome, as then we would get an induced P_6 of the form BBuAAA (resp., AAuBBB).
Case 3. S is mixed.
We claim that S is already a -carver for S. By symmetry between A and B, we may assume that A is mesh and B is non-mesh. So it just remains to show that A is carved away by S; that is, that no component of G-S intersects both A and another component of G-S. We will do this by showing that S ∖S and A ∖S are anticomplete. So consider a vertex u ∈ S ∖S. By (1), the second outcome of Lemma <ref> holds for B, and u ∈ N(p_B, q_B). So u is anticomplete to {p_A, q_A}∪ T_A; otherwise we would have u∈S. Now the third outcome of Lemma <ref> holds for A; that is, each maximal strong module of A is either complete or anticomplete to u. As A is mesh and p_A ∉ N(u), each neighbor of u in A is in N(p_A)∖ T_A. Since N(p_A)∖ T_A ⊆S, this completes the proof that S is a -carver for S.
Case 4. S is mesh.
Then by Lemma <ref>, there exist r_A ∈ A and r_B ∈ B so that S ⊆ N(p_A, q_A, r_A, p_B, q_B, r_B), i.e., N[p_A, q_A, r_A, p_B, q_B, r_B]=A ∪ S ∪ B. Guess these vertices r_A and r_B.
So for each component D of G-N[p_A, q_A, r_A, p_B, q_B, r_B], add the vertices in N(D) to S.
Furthermore, we add to S all vertices in N[q_A,r_A] ∩ N[q_B, r_B].
Note that these newly added vertices are a subset of S.
We will now show that S is a -carver for S. It just remains to show that S carves away the components of G-S: that is, that each component of G-S intersects at most one component of G-S. Since N(D) was explicitly added to S for each component D of G-N[p_A, q_A, r_A, p_B, q_B, r_B], the only possibility that needs to be checked is that some component of G-S intersects both A and B. So it suffices to show that S∖S has a partition into two parts, S_A and S_B, so that S_A and S_B are anticomplete, S_A and B∖S are anticomplete, and S_B and A∖S are anticomplete.
Let S_A (respectively, S_B) be the set of all vertices in S ∖S which are in N(q_A,r_A) (respectively, N(q_B,r_B)).
The sets S_A and S_B partition S ∖S by observation (1) and the definitions of r_A,r_B and S.
Now consider a vertex u ∈ S_A. Again using (1), the third outcome of Lemma <ref> holds for B; each maximal strong module of B is either contained in or disjoint from the neighborhood of u. So each neighbor of u in B is in N(p_B)∖ T_B, and therefore also in S. By this and the symmetric argument for S_B, we have proven that S_A and B∖S are anticomplete, and that S_B and A∖S are anticomplete.
It just remains to show that S_A and S_B are anticomplete. For this we need to be slightly more careful about the argument above; notice that we actually have that if u ∈ S_A, then u is anticomplete to every component of the complement of B which intersects {p_B, q_B, r_B}. Let M_B denote the union of these components. Note that M_B induces a connected subgraph of B since p_B and q_B are in different components of the complement of B. Thus, if u were adjacent to a vertex v ∈ S_B, then we could find a P_6 of the form M_AM_AuvM_BM_B, where, symmetrically, M_A is the union of the components of the complement of A which intersect {p_A, q_A, r_A}.
This completes all four cases and therefore the proof of Proposition <ref>.
§ IMPROVING CARVERS FOR MIXED MINIMAL SEPARATORS
We need a more refined understanding of mixed minimal separators. So let G be a graph, S be a mixed minimal separator of G, and A and B be the mesh and non-mesh full sides of S, respectively. Given a set S⊆ V(G), we say that a component D of G-S is clarified if it is disjoint from A ∪ B. In this section we show how to “carve away” all of the clarified components; see Proposition <ref>.
To prove this proposition, we will use the following enumeration routine to obtain a “fuzzy” version of the mesh full component. Given a graph G, a fuzzy version of a set A ⊆ V(G) is a set A^+ ⊆ V(G) such that A ⊆ A^+
and every vertex of A^+ ∖ A is complete to A.
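Being a fuzzy version is a purely local condition, so it is trivial to verify; a checker such as the following (ours, intended only for experimenting with the notion) suffices to validate candidate outputs of the lemma below on small graphs.

```python
# Checker for the definition above (ours): A_plus is a fuzzy version of A
# iff A ⊆ A_plus and every vertex of A_plus \ A is adjacent to all of A.

def is_fuzzy_version(adj, A, A_plus):
    A, A_plus = set(A), set(A_plus)
    return A <= A_plus and all(A <= adj[v] for v in A_plus - A)
```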
There is a polynomial-time algorithm which takes in a P_6-free graph G and returns a collection 𝒜⊆ 2^V(G) such that for every mixed minimal separator S in G with A as its full mesh component, there exists A^+ ∈𝒜
that is a fuzzy version of A.
We also use the following lemma about minimal elements in quasi-orders. A quasi-order is a pair (X, ≼) so that X is a set and ≼ is a reflexive and transitive relation on X.
Let X be a non-empty finite set, and let (X, ≼_0) and (X, ≼_1) be quasi-orders such that each pair of elements of X is comparable either with respect to ≼_0 or with respect to ≼_1 (or both). Then there exists an element x ∈ X such that for every y ∈ X, either x ≼_0 y or x ≼_1 y (or both).
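The lemma can be checked exhaustively on small ground sets. In the sketch below (ours), the two quasi-orders are supplied as boolean comparison functions; the routine first asserts the comparability hypothesis and then finds the promised element x by brute force. The toy example at the end uses two hand-made relations on three elements.

```python
from itertools import product

# Brute-force illustration of the lemma (ours): if every pair of elements is
# comparable in ≼_0 or in ≼_1, then some x satisfies x ≼_0 y or x ≼_1 y for
# every y.  The orders are passed as boolean comparison functions.

def find_minimal_element(X, le0, le1):
    X = list(X)
    for a, b in product(X, repeat=2):     # hypothesis of the lemma
        assert le0(a, b) or le0(b, a) or le1(a, b) or le1(b, a)
    for x in X:
        if all(le0(x, y) or le1(x, y) for y in X):
            return x
    raise AssertionError("should not happen if the lemma holds")

# toy example on three elements with two hand-made quasi-orders
le0_pairs = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b")}
le1_pairs = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "c"), ("b", "c")}
print(find_minimal_element("abc",
                           lambda u, v: (u, v) in le0_pairs,
                           lambda u, v: (u, v) in le1_pairs))   # prints 'a'
```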
We use Lemma <ref> to prove the following lemma, which will help us recognize an independent set which is contained in a mixed minimal separator.
Let G be a P_6-free graph, S ⊆ V(G) be a set with a mesh full component A, and I ⊆ S be a non-empty independent set. Then there exist a component M_I of A and a vertex x ∈ I ∩ N(M_I) so that every vertex y ∈ I∖ N(M_I) is a neighbor of every component D of G-(A ∪ S) so that x∈ N(D).
For convenience, let 𝒟 denote the collection of components of G-(A ∪ S), and let ℳ denote the collection of components of A; we will obtain one quasi-order from 𝒟 and another from ℳ. Notice that if there are two vertices u,v ∈ I such that there exists both a pair D_u,D_v ∈𝒟 so that N(D_u) ∩{u,v} = {u} and N(D_v) ∩{u,v} = {v}, and a pair M_u,M_v ∈ℳ so that N(M_u) ∩{u, v} = {u} and N(M_v) ∩{u, v} = {v}, then there is a P_6 of the form D_uuM_uM_vvD_v.
Consider the quasi-orders ≼_0 and ≼_1 on I defined as follows:
u ≼_0 v ⟺{D ∈𝒟 | u ∈ N(D)}⊆{D ∈𝒟 | v ∈ N(D)}, and
u ≼_1 v ⟺{M ∈ℳ | u ∈ N(M)}⊆{M ∈ℳ | v ∈ N(M)}.
Any two u,v ∈ I are comparable in at least one of these orders. Hence, Lemma <ref> asserts that there exist x ∈ I such that for every y ∈ I either x ≼_0 y or x ≼_1 y. We pick any M_I ∈ℳ with x as a neighbor (it exists since I ⊆ S = N(A)).
We are ready to prove the main proposition about improving carvers for mixed minimal separators.
For each positive integer d, there exists a polynomial-time algorithm which takes in a P_6-free graph G and a set S⊆ V(G) and returns a collection 𝒮' ⊆ 2^V(G) so that for any maximal treedepth-d structure in G and any -avoiding mixed minimal separator S of G, there exists S' ∈𝒮' so that
* S' contains S,
* S'∩⊆ S ∪S, and
* for each clarified component D of G-S, no component of D-S' intersects more than one component of G-S.
Let d, G, S, S, and be as in the lemma statement. Let A and B denote the mesh and non-mesh full sides of S, respectively. Additionally, let 𝒟 denote the set of all vertices of G-S which are in a clarified component of G-S. So 𝒟 is the union of some components of G-(A ∪ S ∪S∪ B), and the graph G-S has no path between 𝒟 and A ∪ B. We will find a set S' which satisfies conditions (i) and (ii) of the proposition and includes N(𝒟) ∩ S; this implies condition (iii).
Notice that there are at most d components of A which intersect ; let M⊆ V(G) denote the union of these components. We claim that we can guess M. By Lemma <ref>, we can, in polynomial-time, obtain a set 𝒜⊆ 2^V(G) which includes a fuzzy version of A. That is, there exists A^+∈𝒜 so that A ⊆ A^+ and A^+ ∖ A is complete to A. Guess this set A^+ ∈𝒜; there are polynomially-many choices. Now each component of A is also a component of the complement of A^+ and can thus be guessed. So indeed we can guess M, as it is the union of at most d components of A^+. We will use the fact that M is non-empty, which follows from the fact that A ∩ is non-empty by Lemma <ref>.
Now we define an intermediate set X ⊆ V(G) which contains S and is our current best guess at S'. To begin with we set X := S∪ N(M); these vertices are safe to include since N(M) ∩⊆ S. Next, by Lemma <ref>, there exists a vertex p_B ∈ B which has at most d-1 neighbors in . We guess this vertex, along with which of its neighbors are in ∩ B, and then we add all of its other neighbors to X. This completes the definition of X. Notice that X ∩⊆ S ∪S, that S ∖ X is anticomplete to M ∪{p_B}, and that G-X has no path between 𝒟 and A ∪ B (this follows from the fact that S⊆ X). We also remark that X ⊆ A ∪ B ∪ S ∪S.
We claim that there exists a vertex q_B ∈ B which is complete to S ∖ X. If S ∖ X is empty, then this is trivially true, so assume that it is non-empty. Then |B|>1 since S ∖ X is anticomplete to p_B. So by Lemma <ref>, there is a vertex q_B ∈ B so that p_B and q_B are adjacent and in different maximal strong modules of B. If S ∖ X is not complete to q_B, then by Lemma <ref> applied to the full component B of S, we obtain a vertex u ∈ S ∖ X which is in a P_4 of the form uBBB. However, u is also in a P_3 of the form uAA since S ∖ X is anticomplete to M (which is non-empty). But then we obtain a P_6 of the form AAuBBB, which contradicts the fact that G is P_6-free. Consequently, that S ∖ X is complete to q_B. We guess such a vertex q_B.
Now form an independent set I ⊆ S∖ X as follows. For each component of S∖ X which has a neighbor in 𝒟, choose one vertex with a neighbor in 𝒟 and add that vertex to I. (We are not saying that we can guess I, just that it exists.) We may assume that I is non-empty since otherwise the proposition holds with S' := X. Now apply Lemma <ref> to the subgraph induced on A ∪ S ∪𝒟. Thus, there exist a component M_I of A and a vertex x ∈ I ∩ N(M_I) so that every vertex y ∈ I∖ N(M_I) is a neighbor of every component D of 𝒟 so that x∈ N(D). We can guess M_I for the same reason we were able to guess M (because M_I is a component of A and we can guess the fuzzy version A^+ of A).
We will prove that the following set S' satisfies the proposition. First we add X and N(M_I)∩ N(q_B) to S'. These vertices are safe to add since X∩⊆ S ∪S and N(M_I)∩ N(q_B) ⊆ S.
We observe that since X ⊆ A ∪ B ∪ S ∪S, we have S' ⊆ A ∪ B ∪ S ∪S at this moment.
Now consider each component D of G-X - N(q_B) which has x as a neighbor.
Clearly, D is disjoint from S as S ∖ X ⊆ N(q_B).
Let H be a component of N(q_B) ∖ X that contains a neighbor of D.
Since x has a neighbor in 𝒟, there is a component of G-S that contains H, D, x, and a component of 𝒟, hence, it is disjoint from A ∪ B.
In particular, D is disjoint with A ∪ B, so N(D) ⊆ S ∪S as X ⊆ A ∪ B ∪ S ∪S.
Furthermore, we have H ⊆ S. Over all choices of D and H as above, we add the component H to S'.
We have already proved that conditions (i) and (ii) of the proposition hold for S'. Recall that, in order to obtain the final condition (iii), it is enough to show that S' contains N(𝒟) ∩ S. So, going for a contradiction, suppose that there exists a vertex u ∈𝒟 which has a neighbor v ∈ S ∖ S'. Let H be the component of S ∖ X which contains v. Then x is disjoint from and anticomplete to H ∪{u}, since otherwise we would have added v to S'. However, now there is a P_6 of the form uvq_BxM_IM, which contradicts the fact that G is P_6-free. (To see that there is a P_6 of this form, recall that q_B is complete to S ∖ X, x has a neighbor in M_I while u and v do not, and S ∖ X is anticomplete to M, which is non-empty; therefore M_I is a component of A which is not any of the components of A we used to define M.) This contradiction completes the proof of Proposition <ref>.
§ NOT-TWO-SIDED PMCS
A potential maximal clique in a graph G is two-sided
if there exist two distinct connected components D_1,D_2 of G-
such that for every connected component D of G-, we have N(D) ⊆ N(D_1)
or N(D) ⊆ N(D_2).
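Two-sidedness depends only on the neighbourhoods of the components of G-Ω, so it can be tested directly from the definition. The following sketch is ours (quadratic in the number of components, dictionary-of-neighbour-sets representation); it checks an arbitrary vertex set Omega rather than a verified potential maximal clique.

```python
from itertools import combinations

# Sketch (ours): test whether a vertex set Omega is two-sided in G, i.e.
# whether two distinct components D1, D2 of G - Omega exist such that every
# component D of G - Omega satisfies N(D) ⊆ N(D1) or N(D) ⊆ N(D2).

def components(adj, verts):
    verts, comps, seen = set(verts), [], set()
    for s in verts:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] & verts - comp)
        seen |= comp
        comps.append(comp)
    return comps

def is_two_sided(adj, omega):
    comps = components(adj, set(adj) - set(omega))
    nbhds = [set().union(*(adj[v] for v in C)) - C for C in comps]
    return any(all(N <= nbhds[i] or N <= nbhds[j] for N in nbhds)
               for i, j in combinations(range(len(comps)), 2))
```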
The following statement has been essentially proven in <cit.>.
However, it has been proven only with the Max Weight Independent Set problem in mind,
so we need to slightly adjust the argumentation to fit the more general setting of this paper.
For every positive integer d
there exists a polynomial-time algorithm that, given a P_6-free graph G
outputs a family ⊆ 2^V(G) with the following guarantee:
for every maximal tree-depth-d structure in G
and every potential maximal clique of G that is -avoiding and not two-sided,
there exists C ∈ that is a container for , i.e., ⊆ C
and C ∩ V() = ∩ V().
As mentioned, Theorem <ref> is essentially proven in Section 5 of <cit.>.
There, for a fixed maximal independent set I, a PMC is I-free if it is disjoint with .
This assumption here is replaced with being -avoiding for a fixed tree-depth-d structure .
Informally speaking, to adjust it to our setting, we need to make three adjustments within the proof of <cit.>.
* Often, when a mesh component D is analyzed, it is argued that the independent set I intersects at most one maximal strong module M_p of D, and a vertex p ∈ M_p ∩ I is guessed. This step is usually followed by a guess of an arbitrary vertex q in a different maximal strong module of D.
In our case, the tree-depth-d structure can intersect at most d modules of D, and the guess of p is replaced
with a guess of a set P of at most d vertices of ∩ D, one vertex from each maximal strong module of D that intersects .
For q, it is enough to take an arbitrary vertex of D, unless |P|=1 (i.e., intersects only one maximal strong module of D)
where we need to pick q from a different maximal strong module. In this manner, we maintain the property that P ∪{q} contains
vertices of at least two maximal strong modules of D, so in particular D ⊆ N[P ∪{q}].
Whenever later the proof of <cit.> considers N[p] or N[p,q], we consider here N[P] or N[P ∪{q}] instead.
In what follows, we call such a set P a footprint of in D and the vertex q a satellite of the footprint P.
* When a PMC that is disjoint with the maximal independent set I is analyzed, we often argue that the maximality of I implies that every v ∈ has a neighbor in I that is outside .
In our case, Lemma <ref> gives the same corollary, except for the vertices of ∩, but there are fewer than d
of them and they can be guessed separately.
* Finally, the notion of a neighbor-maximal component of Section 5.3 of <cit.> is a bit incompatible
with our statement, as it considers two components D_1,D_2 of G- with N(D_1) = N(D_2) both
not neighbor-maximal. This definition restricts the set of all PMCs with more than two neighbor-maximal components.
We observe that the assumption “more than two neighbor-maximal components” is used only once in the proof
and can be easily replaced with the (slightly weaker) assumption of being not two-sided.
Let us now have a closer look at Section 5 of <cit.> and provide formal details.
Neither the toolbox in the earlier sections nor Lemmas 5.2 up to Lemma 5.7 uses the notion of I-freeness, so they can all be used in our setting without any modifications.
Lemma 5.8 of <cit.>, the main result of Section 5 there, would now obtain the following form.
For every integer d there exists a polynomial-time algorithm that, given on input
a P_6-free graph G, outputs two families ℱ_9^1 and ℱ_9^2 such that the following
holds: for every maximal tree-depth-d structure in G
and every potential maximal clique of G that is -avoiding and not two-sided,
either ℱ_9^1 contains or ℱ_9^2 contains a triple (∪ D_1 ∪ D_2, D_1^+, D_2^+)
for some components D_1,D_2 of G- that are mesh, where D_i^+ is a fuzzy version of D_i for i ∈{1,2}.
Note that Theorem <ref> follows easily from
Lemma <ref>: we insert into every element of ℱ_9^1 and, for
every (K, L_1,L_2) ∈ℱ_9^2, every choice of at most d maximal strong modules of L_1
and every choice of at most d maximal strong modules of L_2, we insert into the set K minus the chosen modules.
Thus, it remains to prove Lemma <ref>.
The proof of Lemma 5.8 of <cit.> splits into three lemmas: Lemma 5.9, Lemma 5.10, and Lemma 5.11.
These statements have a fixed P_6-free graph G and a maximal independent set I in their context.
In our setting, instead of I we fix an integer d and a maximal tree-depth-d structure in G.
Lemma 5.9 of <cit.> takes the following form.
Suppose is a -avoiding PMC in G and D is a component of G- which is mesh.
Let P be a footprint of in D and let q be a satellite of P.
Let J ⊆ N(D) be an independent set with the following property: for every v ∈ J,
the set N(v) ∩ D consists of some maximal strong modules of D and is disjoint with ∩ D.
Then there exists w ∈ D and a component D' of G-, distinct from D, such that
J ⊆ (∩) ∪ N(w) ∪ N(D').
The proof of Lemma 5.9 of <cit.> uses I-freeness of
in only one place: to argue that if v ∈ J is anti-complete to all vertices of I
in D, then it needs to be adjacent to a vertex of I in another component D' of G-, so in particular
it is adjacent to some other component of G-.
In our case, J is anti-complete to D ∩, and Lemma <ref> gives the same corollary, except for
the vertices of ∩ that need to be added there separately.
The rest of the proof is the same.
Similarly we adjust Lemma 5.10 of <cit.>.
Given a family 𝒳⊆ 2^V(G), one can in time polynomial in the size of G and the size of 𝒳
compute a family ℱ_7(𝒳) ⊆ 2^V(G) with the following properties:
for every -avoiding PMC and every component D of G-, if all components of G-, except
for possibly D, belong to 𝒳, then all components of G- belong to ℱ_7(𝒳).
The assumption on I-freeness of Lemma 5.10 of <cit.> comes into play in the proof only
in the last case, namely Case 3, where in particular D is mesh.
First, after Claim 1, we guess a vertex p ∈ I ∩ M_p for the unique maximal strong module M_p of D that
intersects I, and a vertex q in another maximal strong module.
Here, we perform the standard adjustment, guessing instead a footprint of in D and its satellite.
Second, in the definition of Y, we also want to exclude the vertices of ∩ D from it (there are fewer than d
of them, so we just try all possibilities).
Third, after Claim 7 we invoke Lemma 5.9. Because of the previous adjustment, the set J here is
disjoint with ∩. Hence, we can invoke the adjusted Lemma <ref> instead.
We now move to Lemma 5.11 of <cit.>.
One can in polynomial time compute a family ℱ_8 such that the following holds:
Take any -avoiding PMC and assume there are different components D_1,D_2 of G-
that are meshes. Then ℱ_8 contains either D_1, or D_2, or ∪ D_1 ∪ D_2.
Again, the proof in <cit.> starts by selecting, for every i ∈{1,2},
a vertex p_i ∈ I in the unique maximal strong module of D_i that intersects I.
We adjust it in the standard way by selecting a footprint P_i of and its satellite q_i.
Then, when defining X and Z, we need to also include ∩ into X, so Z is disjoint with .
Since ∩ is of size less than d, we just try all possibilities.
Finally, Claim 11 relies on I-freeness. It argues that a vertex z ∈ Z ⊆ N(D_2) that does not have
a neighbor in I ∩ D_2, needs to have a neighbor
in I in another component of G-, in particular it is adjacent to another component of G-.
In our case, z is not in (as it is in Z) and z has no neighbor in ∩ D_2,
so Lemma <ref> gives the same corollary.
With the above three lemmas in hand, we can now adjust the proof of Lemma 5.8 of <cit.>
to show Lemma <ref>.
The crucial insight is that if there are two components D_1,D_2 of G- with N(D_1) = N(D_2),
then N(D_1) is subordinate (because N(D_1) has three full sides, D_1, D_2, and a component containing ∖ N(D_1))
and hence it belongs to the family 𝒮_sub provided by Corollary <ref>.
Therefore, by adding all full components of subordinate separators to a constructed set 𝒢, we obtain the same properties
as in the proof of Lemma 5.8 of <cit.> under the weaker assumption that is not two-sided:
𝒢 contains either all components of G-, or
all except for at most two mesh components D_1 and D_2.
The first outcome allows us to recover exactly. In the second outcome we use Lemma <ref>: we either
get exactly or the set ∪ D_1 ∪ D_2. In the latter case,
it remains to get, for every i∈{1,2}, a fuzzy version of D_i.
Since D_i ∉𝒢, N(D_i) is not subordinate.
Since is not two-sided, there is another component D' of G-,
distinct from D_1 and D_2, such that N(D') ⊈N(D_i). This component is in 𝒢.
Then, Lemma 5.7 of <cit.> gives a polynomial number of candidates for a fuzzy version of D_i.
This completes the proof sketch of Lemma <ref> and thus concludes
the proof of Theorem <ref>.
§ ANALYSIS OF TWO-SIDED ALIGNED PMCS
In this section we deal with the last remaining type of PMCs: two-sided aligned PMCs. In contrast to the previous sections, we need to perform some delicate surgery on the clique tree in order to adjust it before generating a small family of carvers. More precisely, we will need the following special property of a clique tree (T,β) of a chordal completion G+F. (Recall here that full components of adhesions were defined following Lemma <ref>.)
() There are no two distinct edges st,tu ∈ E(T) such that
* σ(st) ⊆σ(tu);
* σ(tu) is a mixed minimal separator; and
* the full component of σ(tu) on the u-side is non-mesh.
The next lemma verifies that property () can always be achieved, even without changing the completion set F.
For any graph G and minimal chordal completion G+F of G, there exists a clique tree (T, β) of G+F with property ().
We already know that G+F has some clique tree. We will choose a clique tree which maximizes a certain count; for this definition we need to orient some edges of the tree. So, given a clique-tree (T, β) of G+F, orient each edge of T whose adhesion is a mixed minimal separator “towards the non-mesh side”. That is, if tu ∈ E(T) is such that σ(tu) is a mixed minimal separator and the full component of σ(tu) on the u-side is non-mesh, then orient tu as (t, u).
Now, choose a clique tree (T, β) of G+F which maximizes the sum, over all nodes u∈ V(T), of the number of undirected edges which are incident to a node of T that can be reached from u via a directed path (that is, a path which does not use any undirected edge and which follows the directed edges according to their direction). Such a choice exists since all clique trees have the same number of nodes. We will prove that (T, β) satisfies the conditions of the lemma. So, going for a contradiction, suppose that there exist distinct edges st,tu ∈ E(T) so that (i), (ii), and (iii) of property () hold. By conditions (ii) and (iii), tu is oriented as (t,u).
For convenience, set S := σ(tu), and let A (respectively, B) denote the full component of S on the t-side (respectively, u-side).
Since S is mixed, it has exactly two full components: A that contains β(t) ∖ S and
B that contains β(u) ∖ S.
Since σ(st) ⊆ S and σ(st) separates β(s) ∖σ(st)
from both β(t) ∖ S and β(u)∖ S, it follows that the full component of σ(st) on the s-side is disjoint from A ∪ S ∪ B.
In particular, this component cannot be a full component of S (which has only two full sides, A and B), hence σ(st)⊊ S.
Therefore σ(st) is subordinate and the edge st of T is undirected.
Now we define a new clique tree (T', β') of G+F as follows. Replace the edge st of T with the edge su; that is, reattach the component of T-{st} that contains s to be connected via an edge su instead of the edge st. Since σ(st) ⊆ S = σ(tu), the resulting tree is in fact a clique tree of G+F. Furthermore, the orientations of the edges do not change; su is an undirected edge as S is a subordinate separator, while for every other edge of T the full sides considered in the orientation remain the same. Moreover, the relevant count of (T', β') is strictly larger than that of (T, β): the count for u increases by one, while no other count decreases. This contradicts the choice of (T, β) and completes the proof of Lemma <ref>.
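The potential maximised in this proof can be evaluated mechanically on a concrete clique tree. The sketch below is ours: the tree is passed as a node list together with its undirected and directed edges, and for each node we collect the nodes reachable along directed edges only (the empty path is allowed here, which is one possible reading of the definition) and count the undirected edges incident to them.

```python
# Sketch (ours) of the potential used in the proof: the sum over all nodes u
# of the number of undirected edges incident to some node reachable from u
# along directed edges only (u itself counts as reachable via the empty path).

def clique_tree_potential(nodes, undirected_edges, directed_edges):
    out = {u: set() for u in nodes}
    for a, b in directed_edges:              # edge oriented a -> b
        out[a].add(b)
    total = 0
    for u in nodes:
        reach, stack = {u}, [u]
        while stack:
            for w in out[stack.pop()] - reach:
                reach.add(w)
                stack.append(w)
        total += sum(1 for a, b in undirected_edges if a in reach or b in reach)
    return total
```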
Now we are ready to prove the main result of this section.
For each positive integer d, there exists a polynomial-time algorithm which takes in a P_6-free graph G
and returns a collection _1 ⊆ 2^V(G) so that for any maximal treedepth-d structure in G
and any -aligned minimal chordal completion G+F of G,
there exists a clique tree (T,β) of G+F
such that for each node t of T, if β(t) is two-sided and -avoiding,
then the set _1 contains a (, (T,β))-carver for β(t).
Let d, G, , F be as in the statement of the proposition. Let (T,β) be a clique tree of G+F which satisfies property (); its existence is guaranteed by Lemma <ref>. We orient some of the edges of T as in the proof of Lemma <ref>. That is, for each edge tu of T so that σ(tu) is a mixed minimal separator, we orient tu as (t,u) if the full component of σ(tu) on the u-side is non-mesh, and as (u,t) otherwise.
In this language, property () becomes the following.
() There do not exist distinct edges st, tu ∈ E(T) such that σ(st) ⊆σ(tu) and tu is oriented towards u.
Now fix t ∈ V(T) such that β(t) is two-sided and -avoiding. We will argue how to construct a (,(T,β))-carver for β(t) using guesswork with only polynomially-many options. To this end, set Ω := β(t), and let D_0 and D_1 be the components of G-Ω which witness that Ω is two-sided. Throughout the rest of this proof, indices in subscripts are taken modulo 2.
First of all, for each v ∈ V(G), we add the set N[v] to _1.
Note that this takes care of all PMCs which contain a vertex v that does not have a neighbor outside .
Indeed, by the characterization of PMCs in Proposition <ref>, we would have N[v] = and thus ∈_1.
Thus from now on we may assume that each vertex from has a neighbor outside of .
We now use the characterization of PMCs in Proposition <ref> to infer the following claim.
The following properties hold:
* N(D_0) ∪ N(D_1) =,
* the sets N(D_0) ∖ N(D_1) and N(D_1) ∖ N(D_0) are nonempty and complete to each other, and
* there exists j ∈{0,1} such that D_j is complete to N(D_j) ∖ N(D_j+1).
By assumption, each vertex in has a neighbor outside of . Since is two-sided, we infer that N(D_0) ∪ N(D_1) =. Since N(D_0) and N(D_1) are proper subsets of (see Proposition <ref>), we have that both N(D_0) ∖ N(D_1) and N(D_1) ∖ N(D_0) are nonempty. From Proposition <ref>, we infer that N(D_0) ∖ N(D_1) is complete to N(D_1) ∖ N(D_0), as there is no connected component of G- that is adjacent to some vertices in both those sets.
Finally, suppose towards a contradiction that for every i ∈{0,1}, there exists v_i ∈ N(D_i) ∖ N(D_i+1) that is not complete to D_i. Then there is a P_6 of the form D_0D_0v_0v_1D_1D_1. This contradiction completes the proof of Claim <ref>.
By Lemma <ref>, for i ∈{0,1}, the set N(D_i) is a minimal separator of G which has a full side D_i^≠ D_i that contains ∖ N(D_i). Since is two-sided and N(D_0) ∪ N(D_1) = Ω by part (i) of Claim <ref>, it follows that D_i^ is precisely the union of ∖ N(D_i) and the components of G- which have a neighbor in Ω∖ N(D_i). Thus, in particular, D_0^∩ D_1^ =∅ since is two-sided.
We now show how the adhesions relate to the components of G-. For each component D of G-, we write T_D for the component of T-{t} so that D ⊆⋃_s ∈ T_Dβ(s); this component exists and is unique. We write t_D for the node of T_D which is a neighbor of t in T. We also write t_0 and t_1 as shorthand for t_D_0 and t_D_1, respectively, and similarly for T_0 and T_1.
We now attempt to “capture” the minimal separators N(D_0) and N(D_1). By Proposition <ref>, we can, in polynomial-time, obtain a collection 𝒮⊆ 2^V(G) which contains a -carver for each -avoiding minimal separator. So in particular, 𝒮 contains -carvers S_0 and S_1 for N(D_0) and N(D_1), respectively. We can guess these sets S_0 and S_1 since 𝒮 also has polynomial size.
We will use the following observation twice.
Let k ∈{0,1} be such that tt_k is not oriented towards t.
Then no component of G-S_k intersects both N(D_k) ∖ N(D_k+1) and D_k^.
Let D be a component of G-S_k that intersects D_k^. Since tt_k is not oriented towards t, by the properties of S_k we have that S_k carves away D_k^, hence D ⊆ N(D_k) ∪ D_k^.
Assume there exists v ∈ D ∩ (N(D_k) ∖ N(D_k+1)). Since v ∈ N(D_k) ∖ S_k while S_k ∩ = N(D_k) ∩, we have v ∉.
By Lemma <ref>, there exists w ∈∖ that is a neighbor of v.
Since w ∈∖ while S_k ∩ = N(D_k) ∩, we have w ∉ S_k, thus w ∈ D ∖.
As D ⊆ N(D_k) ∪ D_k^, every component D' of G- that intersects D satisfies
N(D') ⊆ N(D_k+1). This is a contradiction with v ∈ N(D_k) ∖ N(D_k+1).
A precarver is
a set S⊆ V(G) such that
S∩ = ∩ and there exists k ∈{0,1} such that
for every component D of G-S
at least one of the following conditions holds:
* there exists a component T' of T-{t} with D⊆β(t) ∪⋃_t' ∈ V(T')β(t'), or
* tt_k is oriented and D is clarified with regards to the mixed separator N(D_k) (i.e., D is disjoint with D_k ∪ D_k^).
If we are able to guess a precarver S, then
we apply Proposition <ref> for the minimal separator N(D_k) to guess
a superset C of S with C ∩ = S∩. Then the properties of the precarver together with Proposition <ref> imply that C will be
a (,(T,β))-carver for . Hence, in the remainder of the proof
we focus on guessing a precarver.
We observe that the first bullet of the definition of a precarver holds immediately
for a component D
if D⊆ or there exists a component D of G-
such that D⊆ D ∪. The latter applies in particular to the case D∩=∅.
We now perform a case distinction on how the edges tt_0 and tt_1 are oriented in (T,β),
which is in fact a case distinction on the types of separators N(D_0) and N(D_1).
Case 1. There exists k ∈{0,1} such that tt_k is undirected.
We claim that then S = S_0 ∪ S_1 is a precarver.
To this end, let D be a component of G-S.
If D⊆, there is nothing to prove, so assume otherwise.
Let D be a component of G- that intersects D.
If N(D) ⊆ N(D_k), then D is a component of G-N(D_k) and thus, as tt_k is undirected and S_k
is a carver for tt_k, we have D⊆ D ∪ N(D_k) ⊆ D ∪.
If tt_k+1 is undirected too, then a symmetric argument resolves the case N(D) ⊆ N(D_k+1).
Since is two-sided, this completes the proof in this case.
Otherwise, tt_k+1 is directed; without loss of generality assume k=1.
Recall that we are left with analysing a component D of G-S that satisfies the following: for every component D of G- that intersects D,
we have N(D) ⊈N(D_1) (so N(D) ⊆ N(D_0) and N(D) ∩ (N(D_0) ∖ N(D_1)) ≠∅, as is two-sided).
This implies that D⊆∪ D_1^.
Case 1.1. tt_0 is oriented towards t.
As D intersects D_1^,
from Claim <ref> for k=1 we infer that D is disjoint with N(D_1) ∖ N(D_0).
Recall that D is also disjoint with every component D of G- with N(D) ⊆ N(D_1).
Thus, D is disjoint with D_0^,
as D_0^ consists of ∖ N(D_0) = N(D_1) ∖ N(D_0) and every component D of G-
with N(D) ⊆ N(D_1) and N(D) ∩ (N(D_1) ∖ N(D_0)) ≠∅.
If D intersects D_0, then, by the properties of the carver S_0, D⊆ D_0 ∪ N(D_0)
and we are done.
Otherwise, D is clarified with regards to the mixed separator N(D_0), because it is disjoint with both full sides: D_0 and D_0^.
Hence, S is a precarver.
Case 1.2. tt_0 is oriented towards t_0.
Since N(D) ⊈N(D_1), we have t_D = t_0, as otherwise the edge tt_D is an undirected edge
with σ(tt_D) ⊆σ(tt_0), violating property ().
As the above holds for every component D of G- that intersects D,
we have D⊆∪⋃_t' ∈ V(T_0)β(t') and we are done.
Case 2. Both tt_0 and tt_1 are oriented towards t.
Then both D_0 and D_1 are mesh. Note that N(D_0)∩ N(D_1) is a minimal separator with full sides D_0 and D_1 in a induced subgraph of G. So by Lemma <ref> applied to this induced subgraph, we can pick at most three elements of D_0 and at most three elements of D_1 so that every vertex in N(D_0)∩ N(D_1) is a neighbor of one of these six (or fewer) vertices. By adding at most one more vertex from a different component of D_0, and similarly for D_1, we obtain sets D_0' ⊆ D_0 and D_1' ⊆ D_1 so that |D_0'| ≤ 4, |D_1'| ≤ 4, and every vertex in D_0, D_1, and N(D_0)∩ N(D_1) is in N[D_0' ∪ D_1']. Guess these sets D_0' and D_1'.
For every i ∈{0,1}, recall that there are at most d components of D_i which intersect ; let M_i⊆ V(G) denote the union of these components.
Since Lemma <ref> allows us to guess a fuzzy version of D_i, we can guess M_i, as every component of D_i is a component of the complement of a fuzzy version of D_i.
We set
S := S_0 ∪ S_1 ∪ (N[D_0' ∪ D_1'] ∖ (M_0 ∪ M_1)).
We claim that S is a precarver.
As N(D_0) ∪ N(D_1) =, it is immediate that S∩ = ∩.
Recall from part (iii) of Claim <ref> that there exists j ∈{0,1} such that D_j is complete to N(D_j) ∖ N(D_j+1).
By symmetry, we can assume that D_1 is complete to N(D_1) ∖ N(D_0). Hence, N(D_1) ∖ N(D_0) ⊆S.
Since also N(D_0) ∩ N(D_1) ⊆S due to the inclusion of N[D_0' ∪ D_1'] ∖ (M_0 ∪ M_1), we have N(D_1) ⊆S.
Consider now a component D of G-S. We claim that either D
is contained in D ∪ for a single component D of G- or
D is clarified with regards to the minimal separator N(D_0) (whose full sides are D_0 and D_0^).
The claim is trivial if D⊆.
If there exists i ∈{0,1} such that D intersects D_i,
then D⊆∪ D_i due to the inclusion of the carvers S_0 and S_1 in S.
If D intersects a component D∉{ D_0,D_1} of G- such that N(D) ⊆ N(D_1), then
D⊆ D as N(D_1) ⊆S.
In the remaining case, D intersects a component D ∉{D_0,D_1} with N(D) ⊆ N(D_0),
N(D) ∩ (N(D_0) ∖ N(D_1)) ≠∅.
Furthermore, due to the exclusion of the previous cases, D is disjoint both with D_0 and with D_0^,
as the latter consists of D_1, N(D_1) ∖ N(D_0) (which is a subset of S)
and all components D' of G- with N(D') ⊆ N(D_1), N(D') ∩ (N(D_1) ∖ N(D_0)) ≠∅.
Hence, D is clarified with regards to N(D_0). This finishes the proof that S is a precarver.
Case 3. Both tt_0 and tt_1 are oriented away from t.
By Lemma <ref>, for every s ∈ N_T(t) we have σ(st) ⊆ N(D_0)
or σ(st) ⊆ N(D_1).
Hence, by property (), t_0 and t_1 are the only two neighbors of t in T.
We claim that S = S_0 ∪ S_1 is a precarver in this case.
Consider a component D of G-S.
If there exists k ∈{0,1} such that D intersects D_k^,
then, by the properties of the carver S_k, we have D⊆ D_k^∪ N(D_k).
Consequently, for every component D of G- that intersects D,
it holds that N(D) ⊆ N(D_k+1), N(D) ∩ (N(D_k+1) ∖ N(D_k)) ≠∅.
We infer t_D = t_k+1 for every such component D. Hence, D⊆∪⋃_t' ∈ T_k+1β(t').
If D is disjoint with D_0^∪ D_1^, then it is disjoint also with
D_0 ∪ D_1 as D_k+1⊆ D_k^ for every k ∈{0,1}.
Hence, D is clarified with regards to both N(D_0) and N(D_1).
This finishes the proof that S is a precarver.
Case 4. One of the edges tt_0 and tt_1
is oriented towards t and one is oriented away from t.
Without loss of generality, assume tt_0 is oriented towards t_0 and tt_1 is oriented towards t.
We distinguish the following two subcases.
Case 4.1. There exists a component D of G-, D ≠ D_1, with N(D) ∩ (N(D_1) ∖ N(D_0)) ≠∅.
Let D be such a component and let v ∈ N(D) ∩ (N(D_1) ∖ N(D_0)). We argue that
For every u ∈ (N(D_0) ∩ N(D_1)) ∖ N(D), there is no P_4 of the form
uD_0D_0D_0,
and if additionally uv ∉ E(G), then u is complete to D_0.
Let u ∈ (N(D_0) ∩ N(D_1)) ∖ N(D).
Let Q be an induced path consisting of a shortest path from u to v possibly via D_1 if uv ∉ E(G)
and then a neighbor of v in D. Observe that Q has three vertices if uv ∈ E(G) and at least four vertices
if uv ∉ E(G).
If there exists an induced P_4 of the form uD_0D_0D_0, then the concatenation of this P_4 with Q yields a P_6,
a contradiction.
Similarly, if there exists an induced P_3 of the form uD_0D_0 (which is equivalent to u not being complete to D_0),
then the concatenation of this P_3 with Q yields a P_6 if uv ∉ E(G).
This proves (<ref>).
For every k ∈{0,1}, apply Lemma <ref> to the separator N(D_k) with full component D_k,
obtaining a vertex p_k ∈ D_k ∩ with A_k := ∩ D_k ∩ N(p_k) of size at most d-1
and, if |D_k|>1, a vertex q_k ∈ D_k ∩ N(p_k) in a different maximal strong module of D_k than p_k. We set q_k = p_k if |D_k|=1.
Let
S = S_0 ∪ S_1 ∪(⋃_k ∈{0,1} (N(p_k) ∖ A_k)) ∪ (N(q_0) ∩ N({v,q_1})) ∪ N(D).
Note that S can be guessed with a polynomial number of options, as N(D) is a subordinate separator and hence
can be guessed using Corollary <ref>.
We claim that S is a precarver.
Since N(q_0) ∩ N({v,q_1}) ⊆ N(D_0) ⊆, we have
S∩ = ∩.
We now show that
N(D_0) ∩ N(D_1) ⊆S.
Let u ∈ N(D_0) ∩ N(D_1). If u ∈ N(D) or u ∈ N(p_0), then u ∈S.
Otherwise, u is not complete to D_0, so by (<ref>) we have u ∈ N(v) and there is no P_4 of the form
uD_0D_0D_0. Lemma <ref> implies that u ∈ N(q_0). Hence, u ∈S. This proves (<ref>).
Consider now a component D of G-S.
We distinguish two cases, depending on whether D intersects D_0^.
If D intersects D_0^, then by the properties of the carver S_0 we have
D⊆ N(D_0) ∪ D_0^.
By Claim <ref> for k=0, D is disjoint with N(D_0) ∖ N(D_1).
By (<ref>), D is disjoint with N(D_0),
that is, D⊆ D_0^. In particular, D is disjoint with D_1^.
If D intersects D_1 then, by the properties of the carver S_1, we have D⊆ D_1 ∪ N(D_1).
Otherwise, D is disjoint with both D_1 and D_1^ and thus is clarified with regards to the separator N(D_1).
In the other case, the component D is disjoint with D_0^. So N(D)⊆ N(D_0) for every component D of G- that intersects D.
If there exists a component D of G- with N(D) ⊆ N(D_0) ∩ N(D_1) that intersects
D, then D⊆ D thanks to (<ref>).
Otherwise, for every component D of G- that intersects D we have N(D) ⊈N(D_1).
By Lemma <ref> and property (), for every such component we have t_D = t_0.
Thus, D⊆β(t) ∪⋃_t' ∈ V(T_0)β(t').
This finishes the proof that S is a precarver in this case.
Case 4.2. For every component D of G-, either D=D_1 or N(D) ⊆ N(D_0).
Lemma <ref> and property () imply that t is of degree 2 in T,
that is, t_0 and t_1 are the only two neighbors of t in T.
For every k ∈{0,1}, proceed as follows. Call a node t of T considered in this case special; since this is the last case, we may assume that for all non-special nodes of T, we already have constructed carvers for their bags.
Let t_k' be the node of T_k that is closest to t and is not special.
Note that t_k' exists and is unique, as every node of T that is special has degree 2 in T. (It may happen that t_k' = t_k).
Let Q_k be the path in T between t and t_k'.
As this is the last case, we can guess a (,(T,β))-carver C_1 for β(t_1').
(Note that this guesswork may involve Lemma <ref> if β(t_1') is not -avoiding or
Theorem <ref> if β(t_1') is -avoiding but not two-sided.)
Let A_1 = ∩ (C_1 ∖) = ∩ (β(t_1') ∖); as |A_1| ≤ d, we can guess A_1.
We now perform an analysis of components of G-.
Let D be a component of G- distinct from D_0 and D_1 and let k^D ∈{0,1} be such that t_D = t_k^D.
Then there exists an edge t_A^D t_B^D of T such that:
* σ(t_A^D t_B^D) = N(D).
* If T_A^D is the component of T-{t_A^D t_B^D} that contains t_A^D, then D ⊆⋃_t' ∈ V(T_A^D)β(t').
* The nodes t_A^D, t_B^D, t_k^D', t_k^D, and t lie on the unique path between t_A^D and t in T in this order,
with possibly t_B^D = t_k^D' and/or t_k^D' = t_k^D.
In particular, if T^D is the unique component of T-{t_k^D'} that contains t_A^D, then t ∉ V(T^D) and
D ⊆⋃_t' ∈ V(T^D)β(t').
Let D be as in the statement.
By Lemma <ref>, N(D) is a minimal separator with full sides D and D^, where D^ contains ∖ N(D).
By the assumptions of the current case, N(D) ⊆ N(D_0).
Furthermore, as N(D_0) is mixed, N(D_0) has only two full sides: D_0 and D_0^ that contains ∖ N(D_0). As both of them are disjoint with D, it follows that D is not a full component of G-N(D_0), that is,
N(D) is a proper subset of N(D_0).
Hence, D^ contains not only ∖ N(D), but also both D_0 and D_0^, which in turn contains D_1.
Since N(D) ⊆, N(D) is a clique in G+F.
We apply Lemma <ref> for S = N(D), A = D, and B = D^, obtaining the edge t_A^Dt_B^D.
The first two promised properties are immediate by Lemma <ref>.
For the third property, since N(D) is a subordinate separator, t_A^Dt_B^D is an undirected edge of T.
Thus t_A^Dt_B^D lies in the component of T-E(Q_0 ∪ Q_1)
that contains t_k^D' and, furthermore, as β(t) ∩ D^≠∅,
both t_k^D' and t_B^D lie on the unique path from t_A^D to t in T.
The claim follows.
With the above claim in hand, we now prove that
A_1 ⊆ D_1.
By contradiction, assume that A_1 intersects a component D ≠ D_1 of G-.
As A_1 ⊆β(t_1') ⊆⋃_t' ∈ V(T_1)β(t'), we have N(D) ⊆ N(D_1), t_D = t_1, and thus D ≠ D_0 and k^D = 1.
By Claim <ref>, t_1' lies on the unique path from t_B^D to t in T (possibly t_1' = t_B^D).
Hence, A_1∩ D⊆β(t_1') ∩ D ⊆β(t_A^D) ∩β(t_B^D) = N(D) ⊆, a contradiction. This proves (<ref>).
Define
C' := S_0 ∪ S_1 ∪ C_1 and C := C' ∖ A_1.
We claim that C is a (,(T,β))-carver for . Clearly, C ∩ = ∩.
(We would like to use C' as the carver, but unfortunately C' may contain vertices of in β(t_1') ∖, that is, A_1.
Therefore we need to exclude them manually.)
Let D be a component of G-C; we want to show that there exists k ∈{0,1} such that D⊆β(t) ∪⋃_t' ∈ V(T_k)β(t').
If D intersects D_1, then by the properties of the carver S_1 we have D⊆ N(D_1) ∪ D_1 and we are done with k=1.
Otherwise, D∩ A_1 = ∅ by (<ref>).
Hence, D is also a component of G-C'.
Assume now that D intersects a component D of G- such that D ∉{D_0,D_1} and k^D = 1.
Then, as D is a component of G-C' and C_1 ⊆ C',
by the properties of the carver C_1 and Claim <ref> we have
D⊆β(t_1') ∪⋃_t' ∈ V(T^D)β(t') ⊆∪⋃_t' ∈ V(T_1)β(t').
In the remaining case, for every component D of G- that intersects D we have t_D = t_0. Hence,
D⊆∪⋃_t' ∈ V(T_0)β(t'). This finishes the proof in this case.
This completes the case analysis and thus the proof of Proposition <ref>.
§ WRAP UP
We are now ready to conclude the construction of a treedepth-d carver family for P_6-free graphs.
For each positive integer d, there exists a polynomial-time algorithm that takes in a P_6-free graph
G and outputs a family ℱ⊆ 2^V(G)
that is a treedepth-d carver family for G.
Fix any maximal treedepth-d structure in G, any -aligned minimal chordal completion G+F of G,
and any maximal clique of G+F.
The crucial observation is that any container for
is a (,(T,β))-carver for regardless of the clique tree (T,β) of G+F.
Hence, Proposition <ref> gives a family of carvers handling two-sided maximal cliques of G+F
for a particular choice of the clique tree, while Theorem <ref> and Lemma <ref> handle the remaining maximal cliques of G+F regardless of the choice of the clique tree.
Theorem <ref> follows by a direct combination of Theorem <ref>, Theorem <ref>, and Theorem <ref>.
§ CONCLUSIONS
In this paper, we introduced the notion of carvers, a relaxation of the notion of containers, and showed its applicability
by proving that any (≤ k,ϕ)-MWIS problem is solvable in polynomial time on P_6-free graphs.
While in Definition <ref> and Theorem <ref> we only require that there exists a -aligned
chordal completion G+F that is represented in a carver family,
our proof in fact provides a carver family that works for every -aligned
chordal completion G+F, where is any maximal treedepth-d structure containing the solution.
(Note that in the context of MWIS, d=1 and is just the sought solution, since it is a maximal independent set.)
We now present an example showing that if one aims for the ultimate goal of
proving the tractability of (≤ k,ϕ)-MWIS in P_t-free graphs for any fixed t,
in particular for t=7, one needs to either really use the flexibility of the choice of G+F, or further adjust the notion of a carver. See Figure <ref> for a depiction of the example.
For an integer n ≥ 1, construct a graph G_n as follows; take n copies of the 6-vertex cycle, let
the vertices of the i-th cycle be v_i,0,…,v_i,5, 1 ≤ i ≤ n, and add two vertices a and b;
a is adjacent to all vertices v_i,0,v_i,2,v_i,4 and b is adjacent to all vertices v_i,1,v_i,3,v_i,5,
1 ≤ i ≤ n.
The graph G_n is P_7-free.
For every f : {1,…,n}→{0,2,4}, the graph G_n contains a maximal independent set
I_f = {v_i,f(i), v_i,f(i)+3 | 1 ≤ i ≤ n}
and a minimal separator
S_f = {v_i,f(i)+1,v_i,f(i)+2,v_i,f(i)+4,v_i,f(i)+5 | 1≤ i ≤ n}
with full mesh sides
A_f = {a}∪{v_i,f(i) | 1 ≤ i ≤ n},
B_f = {b}∪{v_i,f(i)+3 | 1 ≤ i ≤ n}.
Here, the addition in the second index is performed modulo 6.
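The construction is small enough to be checked by machine for concrete n and f. The sketch below (ours) builds G_n, and for one choice of f verifies that I_f is a maximal independent set and that S_f separates A_f from B_f, both of which are full sides of S_f; P_7-freeness is not checked here.

```python
# Sketch (ours): build the graph G_n from the example and check, for one
# choice of f, that I_f is a maximal independent set and that S_f is a
# separator with A_f and B_f as full sides on opposite sides of it.

def build_G(n):
    adj = {(i, j): set() for i in range(1, n + 1) for j in range(6)}
    adj['a'], adj['b'] = set(), set()
    def link(u, v):
        adj[u].add(v); adj[v].add(u)
    for i in range(1, n + 1):
        for j in range(6):
            link((i, j), (i, (j + 1) % 6))   # the i-th 6-cycle
        for j in (0, 2, 4):
            link('a', (i, j))                # a sees the even positions
        for j in (1, 3, 5):
            link('b', (i, j))                # b sees the odd positions
    return adj

def check(n, f):
    adj = build_G(n)
    I = {(i, f[i]) for i in f} | {(i, (f[i] + 3) % 6) for i in f}
    S = {(i, (f[i] + d) % 6) for i in f for d in (1, 2, 4, 5)}
    A = {'a'} | {(i, f[i]) for i in f}
    B = {'b'} | {(i, (f[i] + 3) % 6) for i in f}
    assert all(adj[u].isdisjoint(I) for u in I)            # I_f is independent
    assert all(adj[v] & I for v in set(adj) - I)           # and maximal
    assert set(adj) == A | B | S                           # V = A ∪ S ∪ B
    assert all(adj[v].isdisjoint(B) for v in A)            # no A-B edges, so S separates
    assert all(set().union(*(adj[v] for v in X)) >= S for X in (A, B))  # full sides
    print("ok")

check(3, {1: 0, 2: 2, 3: 4})
```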
In this example, if one wants to provide for every f
a carver for (an I_f-aligned PMC containing) the separator S_f that separates I_f ∩ A_f from
I_f ∩ B_f, one needs an exponential number of carvers.
However, the minimal separator {a,b} instead of S_f seems like a much better choice for the algorithm.
|
http://arxiv.org/abs/2307.05007v1 | 20230711040631 | Elasto-plastic large deformation analysis of multi-patch thin shells by isogeometric approach | [
"Giang Huynh",
"Xiaoying Zhuang",
"Hoang-Giang Bui",
"G. Meschke",
"Hung Nguyen-Xuan"
] | math.NA | [
"math.NA",
"cs.NA"
] |
G. D. Huynh^1, X. Zhuang^1,*, H. G. Bui^2, G. Meschke^2, H. Nguyen-Xuan^3,4
^1 Institute for Continuum Mechanics, Leibniz University Hannover, Germany
^2 Institute for Structural Mechanics, Ruhr University Bochum, Germany
^3 CIRTech Institute, Ho Chi Minh City University of Technology (HUTECH), Ho Chi Minh City, Vietnam
^4 Department of Architectural Engineering, Sejong University, 98 Kunja Dong, Kwangjin Ku, Seoul, 143-747, South Korea
^* Corresponding author
§ HIGHLIGHTS
* A unified thin shell formulation is established, allowing arbitrary nonlinear material models and multi-patch shell structure to be applicable.
* The Kirchhoff-Love shell theory is employed and the C^1 continuity at patch boundaries is maintained by the bending strip method.
* The Bézier decomposition concept is used to retain the local support of the traditional finite element.
* The computational model is validated and its numerical results agree well with ones in literature.
This paper studies the elasto-plastic large deformation behaviour of thin shell structures using the isogeometric computational approach, with the main focus on the efficiency in modelling multi-patch structures and arbitrary material formulations. In terms of modelling, we employ the bending strip method to connect the patches in the structure. The incorporation of bending strips eliminates the strict demand of the C^1 continuity condition, which is postulated in the Kirchhoff-Love theory for thin shells, and therefore enables us to use standard multi-patch structures even with only C^0 continuity along the patch boundaries. Furthermore, arbitrary nonlinear material models such as hyperelasticity and finite strain plasticity are embedded in the shell formulation, from which a unified thin shell formulation is achieved. In terms of analysis, the Bézier decomposition concept is used to retain the local support of the traditional finite element. The performance of the presented approach is verified through several numerical benchmarks.
Isogeometric analysis, Kirchhoff-Love shell theory, Multi-patch structures, Finite Strain, Bézier decomposition.
Nomenclatures
u Displacement vector
x Position Vector in current configuration
X Position Vector in reference configuration
ξ Curvilinear coordinates
G_α Covariant base vectors in reference configuration
G^α Contravariant base vectors in reference configuration
G_αβ Covariant metric coefficients
B_αβ Covariant curvature coefficients
F Deformation gradient tensor
F^e Elastic part of the deformation gradient tensor
F^p Plastic part of the deformation gradient tensor
C Right Cauchy deformation tensor
C^p Plastic right Cauchy deformation tensor
b^e Elastic left Cauchy deformation tensor
λ_3 Thickness stretch
E Green-Lagrange strain tensor
S 2nd Piola-Kirchhoff stress tensor
ε_αβ Membrane strain components
κ_αβ Bending strain components
Ψ^e Strain energy density
Ψ^vol, Ψ^dev Volumetric and deviatoric parts of the strain energy density
α Equivalent plastic strain
k(α) Hardening function
Φ Yield function
C Material tensor
n Membrane force vector
m Bending moment vector
§ INTRODUCTION
Thin shell structures play a vital role in the automotive and aerospace industries. Due to the efficiency and accuracy of the shell formulation, it is typically used to simulate manufacturing processes or to determine the weak spots of a structure during operation. To achieve higher computational fidelity, large strains and rotations are included in the shell formulation. Because the material composing the shell is typically metal, an elastoplastic constitutive model is used to investigate failure modes, e.g. strain localization, under large loading.
There are two typical shell models accounting for the inelastic response of structures. The first model relies completely on stress resultants in which the normal vector of the shell is assumed to be inextensible. The description of the shell surface response is embodied in constitutive equations and the spread of the plasticity through the shell thickness can not be reached completely. Accordingly, formulations of stress resultant constitutive relations are awkward to derive and also sophisticated to implement in the finite element framework, for further discussion see <cit.>. In the standard model, integration points through the thickness direction of the shell are defined in association with the notion of stress-based elastoplasticity and the distribution of plasticity through the thickness can be represented. Therefore, the stress-based approach is more preferred in the finite element context. The model is originally formulated from the three dimensional theory, which can be compatible with formulations of shell kinematics in a direct or indirect manner. Solid-shell element as presented in <cit.> , where only displacement degree-of-freedom are used, can match perfectly with 3D constitutive model without the need of ad hoc assumptions. However, this element is usually implemented together with methods such as assumed natural strain (ANS) and enhanced assumed strain (EAS) to alleviate locking effects. In contrast, elements based on Kirchhoff-Love hypothesis, e.g., <cit.> are though rotation-free, need to resort to plane stress enforcement to take through-thickness behaviours into account. The implementation of three dimensional constitutive models for finite strain plasticity with the plane stress assumption can be classified into two common approaches: plane stress-projected constitutive models and the nested iteration approach for the plane stress enforcement, which are also applicable to hyperelasticity, see details in <cit.> .The former approach is only applicable if involved equations of the models are adequately simple so that transverse components can be eliminated from the formulation, while the latter one can be implemented in a straightforward manner for any models and is adopted in this work, see also its applicability in recent works <cit.>.
Isogeometric analysis (IGA) emerges as a versatile tool to perform analysis and modeling simultaneously using the same interpolation bases. IGA offers numerical properties which are often beneficial in numerical analysis, e.g. positiveness of the basis functions, higher continuity. IGA is shown to be superior over standard FEM based on Lagrange shape function for dynamics analysis . Refinement with IGA is more straightforward than with the standard FEM counterpart, in which hpk-refinement scheme can be used to offer both order elevation and addition of degree of freedoms (d.o.fs) conveniently without introducing internal modes or hanging nodes. It is noted that, refinement in IGA preserves the geometry, and thus eliminating the geometrical error stemming from modelling. On the other hand, IGA is particularly suited for shell modelling. The smoothness of NURBS basis functions and the similarity between natural coordinates of the isogeometric parameterization and the shell coordinates lead the displacement based shell formulations to be implemented in a direct manner without the need for coordinate transformation. In addition, NURBS is also utilized as surface modelling tool in the CAD technology. This suggests that, extension of shell formulation for IGA is practical, and is apparently beneficial for many industrial applications. Nevertheless, many complex shell structures are not easy to be modelled with a single NURBS patch. An option is to use a modelling technique that supports connecting control point, i.e. T-splines . However, this technique is not broadly implemented in many CAD packages. It is therefore the preferred option to use multi NURBS patch to model a shell structure. In terms of shell formulations, the Mindlin-Reissner theory is commonly-used in the finite element context as only C0-continuity is required. Typical shell elements for this theory are generations of Mixed Interpolation of Tensorial Components (MITC) proposed by K. Bathe et al. <cit.>, shell elements based on discrete Kirchhoff-Love constraints by Areias et al. <cit.> <cit.>. These works need adhoc techniques such as an enhanced assumed strain to alleviate locking effects that are the most challenged hinge in the application of the finite element method (FEM) to shell problems. For a comprehensive overview of other FEM shell elements, readers are referred to an excellent work .In contrast to the Mindlin-Ressner theory, shell formulations using the Kirchhoff-Love hypothesis require C^1 continuity of the basis functions to ensure the smoothness of bending moment interpolation. It is well known that, if the open knot vector is used in the modelling, only C^0 continuity is achieved along the patch boundary. Thus, additional technique, i.e. bending strip method, mortar method, shall be used to maintain the C^1 patch continuity. While the mortar method or Nitsche method that requires the constraints on the derivatives of the basis function at the boundary, the bending strip method only adds fictitious bending stiffness on the overlapping domain in the vicinity of the boundary and its term does not complicate the weak form and can be evaluated locally. Therefore, the latter is simple to implement, yet ensures the reliability in the description of structure behaviours.
In this paper, an efficient computational model for thin shell using isogeometric analysis is presented. In terms of modelling, the efficiency is preserved by using multi NURBS patch structure with C^0 continuity on the patch boundaries. The bending strip method <cit.> is employed to maintain the C^1 continuity condition of the Kirchhoff-Love formulation. Moreover, arbitrary 3D nonlinear materials such as hyperelasticity, finite strain plasticity can be included in the formulation by enforcing the plane stress condition. This leads to our main focus in this work in which a unified thin shell formulation is established to not only model shell structures with complex geometries or multiple patches, but also exhibit a wide range of material behaviours. Note that, for the plane stress enforcement, we follow the idea in <cit.>, in which the plane stress constraint is considered an additional equation that is used to determine the in-plane stress components and the thickness stretch. Though similar approaches are presented in recent works <cit.> <cit.>, these methods are restricted to single-patch shell geometries and can not be applicable to a large class of shell structures involving multi-patches.
On the aspect of analysis, the Bézier decomposition method <cit.> is employed. The advantage of the Bézier decomposition approach is twofold: 1) maintaining the local characteristic of the finite element and improving the performance of the global stiffness matrix assembly procedure, 2) enabling the reuse of standard finite element components, including the kinematics and the constitutive law. To verify the applicability of the proposed computational model, it is validated with reference examples using hyperelasticity and elastoplasticity constitutive laws.
The structure of the paper is as following: in <Ref>, geometry description of the thin shell using Kirchhoff-Love hypothesis is presented, followed by the shell kinematics and the treatment of plane stress enforcement which enables the usage of three dimensional constitutive model for the shell element. <Ref> underlines some important properties of NURBS basis functions and give the introduction to the bending strip method with an illustrative example. In <Ref>, required kinematics modification to accommodate for elastoplastic constitutive model in finite strain, which relies on the hypothesis of multiplicative plasticity framework, is briefly presented. <Ref> presents the discretization of the weak form based on the directional derivative, and the formation of the Bézier finite element, based on Bézier decomposition. <Ref> is devoted for selected numerical examples. The paper is concluded by <Ref>.
§ KIRCHHOFF-LOVE THEORY FOR THIN SHELL
§.§ Notation
In this paper, the following notation is employed: italic letters (g, G) are used to indicate scalars and bold letters (𝐮, 𝐠, 𝐆) indicate vectors and second order tensors. Upper case letters indicate quantities in the reference configuration and lower case letters quantities in the current configuration. ξ^α (α=1,2) denote the convective curvilinear coordinates of the shell and ξ^3 denotes the local coordinate in the thickness direction. The indices in Greek letters (α, β) take values of {1, 2} and the indices in Latin letters (i, j) take values of {1, 2, 3}.
§.§ Geometry definition
The deformation and the rotation of the shell are defined on its mid-surface as
𝐮 (ξ) = 𝐱 (ξ) - 𝐗 (ξ)
where 𝐱 and 𝐗 are the position vectors of a material point on the mid-surface in the deformed configuration and undeformed configuration. ξ = {ξ^1, ξ^2, ξ^3 } is the local coordinates of the material point.
The covariant tangential base vectors are defined by
𝐆_α = 𝐗_,α = ∂𝐗/∂ξ^α 𝐠_α = 𝐱_,α = ∂𝐱/∂ξ^α
Figure. <ref> reveals the representation of the basis vectors between reference and current configurations in a curvilinear coordinate system.
The covariant metric coefficients of the surface are computed by
G_αβ = 𝐆_α·𝐆_β g_αβ = 𝐠_α·𝐠_β
and the contravariant base vectors by
𝐆^α = G^αβ𝐆_β 𝐠^α = g^αβ𝐠_β
with G^αβ = (G_αβ)^-1 and g^αβ = (g_αβ)^-1.
The unit normal vectors on the middle surface can be computed from the covariant tangential base vectors as
𝐆_3 = (𝐆_1 ×𝐆_2)/||𝐆_1 ×𝐆_2|| 𝐠_3 = (𝐠_1 ×𝐠_2)/||𝐠_1 ×𝐠_2||
From <Ref>, one can compute the curvature tensor 𝐁 and 𝐛. They are defined by the second fundamental form of surfaces
B_αβ = 1/2( 𝐆_α·𝐆_3,β + 𝐆_β·𝐆_3,α)
b_αβ = 1/2( 𝐠_α·𝐠_3,β + 𝐠_β·𝐠_3,α)
§.§ Kinematics
Strain measurement is represented with respect to the right Cauchy-Green deformation tensor C = F^T F, where F is the deformation gradient
F = d𝐱/d𝐗 = 𝐠_i ⊗𝐆^i.
Then, the deformation tensor can be written in terms of contravariant base vectors as
C = F^T F = g_i ·g_j G^i ⊗G^j = g_ijG^i ⊗G^j.
In <Ref>, the components of the right Cauchy-Green deformation tensor are the same as the metric coefficients in the deformed configuration. However, C_33, which describes the change of the shell thickness during deformation, cannot take the same value as g_33, which equals 1. Hence, C_33 will be determined by an additional constraint, not directly by the solution of the midsurface variables. The thickness stretch λ_3 is computed from C_33 as
λ_3 = √(C_33).
The Green-Lagrange strain tensor E = 1/2 (C - I ) is typically separated into a constant part representing membrane action and a linear part representing bending action:
E = E_ij𝐆^i ⊗𝐆^j, E_αβ = ε_αβ + ξ^3 κ_αβ
in which
ε_αβ = 1/2 (g_αβ - G_αβ )
κ_αβ = b_αβ - B_αβ
In order to be consistent with material models which are often expressed in the Cartesian coordinate frame, the strain quantities and the deformation gradient need to be represented in a local Cartesian coordinate system. A general expression for such a transformation is given by
(̅·̅)̅_γδ = (·)_αβ (E_γ·G^α) (G^β·E_δ),
where (̅·̅)̅_γδ and (·)_αβ are the components of a generic second order tensor obtained in local Cartesian and curvilinear coordinates, respectively. The local Cartesian basis vectors E_γ and E_δ are given by equations
E_1 = G_1/||G_1||, E_2 = G_2 - (G_2 ·E_1)E_1/||G_2 - (G_2 ·E_1)E_1||, E_3= G_3
§.§ Finite strain kinematics for elastoplasticity
The kinematics presented in <Ref> can be applied for the constitutive laws exhibiting elastic behaviours, such as linear elasticity or hyperelasticity. For constitutive laws accounting for plasticity effects, the elastoplastic kinematics model for finite strain <cit.> shall be employed, which is based on the concept of the plastically deforming intermediate configuration and the multiplicative decomposition of the deformation gradient.
To account for the plasticity behaviour, the deformation gradient is decomposed as
F = F^e F^p
with F^e and F^p as the elastic and plastic parts of the deformation gradient tensor.
Following <Ref>, the plastic right Cauchy-Green deformation tensor is computed as
C^p = ( F^p)^T F^p
and the elastic left Cauchy-Green deformation tensor:
b^e = F^e( F^e)^T
From <Ref>, the elastic left deformation tensor 𝐛^e can be computed from 𝐂^p and 𝐅:
b^e = F( C^p)^-1F^T
The total rate of the elastic deformation tensor is obtained by Lie derivative as follows
ℒ[b^e] = F( Ċ^p)^-1F^T
The elastic behaviour of the material is modeled by strain energy density function of hyperelastic material, which is further separated into the deviatoric and volumetric components as
Ψ^e = Ψ^e_vol + Ψ^e_dev
with
Ψ^e_vol = 1/2 K ( 1/2(( J^e)^2 -1) - ln J^e )
Ψ^e_dev = 1/2G ( tr[b̂^e] - 3 )
where J^e = det[F^e], b̂^e = ( J^e)^-2/3b^e, and G and K are identified as the shear modulus and the bulk modulus in infinitesimal deformation respectively.
The classical Mises-Huber yield condition established with respect to deviatoric part of the Kirchhoff stress τ_dev and the hardening function k(α) is given by
Φ(τ_dev, α) = ||τ_dev|| - √(2/3)k(α) ≤ 0
where Φ is the yield function and α the equivalent plastic strain.
Based on the notion of the principle of maximum plastic dissipation, the associative flow rule can be expressed by
( Ċ^p)^-1 = - 2/3γ tr[b^e] F^-1nF^-T
with n = τ_dev/||τ_dev|| and γ is the plastic multiplier. The evolution of the yield stress known as hardening phenomenon is driven by the equation
α̇ = √(2/3)γ
Finally, the so-called loading/unloading conditions of the elastoplastic model are governed by the Karush-Kuhn-Tucker condition:
γ≥ 0 , Φ(τ_dev, α) ≤ 0, γΦ(τ_dev, α) = 0
accompanied by the consistency condition:
γΦ̇(τ_dev, α) = 0.
§.§.§ Time integration for the finite strain plasticity model
The solution of <Ref> requires a time integration scheme, for which the backward Euler scheme is adopted, such that
( C_n+1^p)^-1 = ( C_n^p)^-1 - 2/3Δγ [b^e_n+1] F_n+1^-1n_n+1F_n+1^-T
α_n+1 = α_n + √(2/3)Δγ
Δγ is the incremental plastic multiplier and satisfies Δγ > 0.
The classical return mapping scheme <cit.> <cit.> is utilized to integrate <Ref>, and summarized in Table.<ref>.
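For concreteness, the following Python sketch spells out one variant of that return mapping for the model above. It assumes linear hardening k(α) = k_0 + H α and the common simplification that the trace of the volume-preserving trial tensor is frozen during the return; both choices, as well as the function names, are ours and serve only as an illustration of the scheme summarised in Table <ref>.

import numpy as np

def radial_return(be_trial, alpha_n, G, K, k0, H, tol=1e-10):
    """Single-point return mapping for the finite strain model of this section (a sketch).

    be_trial : trial elastic left Cauchy-Green tensor (3x3), already pushed forward
               with the relative deformation gradient of the time step.
    Returns the Kirchhoff stress tau, the updated b^e and equivalent plastic strain.
    """
    I3 = np.eye(3)
    Je = np.sqrt(np.linalg.det(be_trial))                     # elastic Jacobian
    be_bar = Je ** (-2.0 / 3.0) * be_trial                    # volume-preserving part b_hat^e
    tau_dev_tr = G * (be_bar - np.trace(be_bar) / 3.0 * I3)   # trial deviatoric Kirchhoff stress
    p = 0.5 * K * (Je ** 2 - 1.0)                             # pressure from Psi_vol

    norm_tr = np.linalg.norm(tau_dev_tr)
    phi_tr = norm_tr - np.sqrt(2.0 / 3.0) * (k0 + H * alpha_n)   # trial yield function
    if phi_tr <= tol:                                         # elastic step
        return tau_dev_tr + p * I3, be_trial, alpha_n

    mu_bar = G * np.trace(be_bar) / 3.0                       # effective shear modulus
    dgamma = phi_tr / (2.0 * mu_bar + 2.0 / 3.0 * H)          # incremental plastic multiplier
    n = tau_dev_tr / norm_tr                                  # return direction
    tau_dev = tau_dev_tr - 2.0 * mu_bar * dgamma * n
    alpha = alpha_n + np.sqrt(2.0 / 3.0) * dgamma
    be_bar_new = tau_dev / G + np.trace(be_bar) / 3.0 * I3    # consistent with tau_dev = G dev(b_hat^e)
    return tau_dev + p * I3, Je ** (2.0 / 3.0) * be_bar_new, alpha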
§.§ Zero transverse normal stress enforcement for thin shell
Here, the indigenous three-dimensional constitutive models are utilized within a Newton-Raphson loop by solving the equation of plane stress constraint at each Gauss point with thickness stretch C_33 as the unknown.
Following <cit.> <cit.>, C_33 is obtained by satisfying the zero transverse normal stress condition, which takes the form of
S^33 = C^33 αβ E_αβ + C^3333 E_33 = 0
from which the coefficients of the material tensor are modified as
Ĉ^αβγδ = C^αβγδ - C^αβ 33C^33 γδ/C^3333
C_33 is solved iteratively by the equation
S^33 + ∂ S^33/∂ C_33Δ C_33 = S^33 + 1/2C^3333Δ C_33 = 0
followed by an incremental update
Δ C_33^(I) = -2 S^33,(I)/C^3333,(I)
where I denotes the number of iteration step.
The thickness change is corrected by computing C_33 and λ_3 as follows
C_33^(I+1) = C_33^(I) + Δ C_33^(I)
λ_3^(I+1) = √(C_33^(I+1))
λ_3 will be updated until S^33 converges below a defined tolerance, which fulfills the zero transverse normal stress condition. Subsequently, the statically condensed material tensor
Ĉ is obtained and only its in-plane components are retained for the shell structure.
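The thickness-stretch correction is only a scalar Newton loop at each Gauss point; a minimal Python sketch follows, in which the constitutive routine material_S_C is a hypothetical callable (its interface is our assumption, not part of the paper) returning S and C for a full 3D strain state in the Voigt-like ordering (11, 22, 33, 12).

def enforce_plane_stress(E11, E22, E12, material_S_C, C33_init=1.0, tol=1e-10, max_iter=25):
    """Iterate on C_33 until the transverse normal stress S^33 vanishes."""
    C33 = C33_init
    S = C = None
    for _ in range(max_iter):
        E33 = 0.5 * (C33 - 1.0)                           # transverse Green-Lagrange strain
        S, C = material_S_C([E11, E22, E33, 2.0 * E12])   # full 3D constitutive call
        S33, C3333 = S[2], C[2][2]                        # '33' components in the assumed ordering
        if abs(S33) < tol:
            break
        C33 += -2.0 * S33 / C3333                         # Newton update Delta C_33 = -2 S^33 / C^3333
    lam3 = C33 ** 0.5                                     # thickness stretch lambda_3
    return C33, lam3, S, C

The statically condensed in-plane tangent then follows from the last converged C by the modification quoted above.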
§.§ Variational form of the equilibrium equation
The mechanical response of a structure is governed by the principle of virtual work, which requires that the sum of the internal and external virtual work vanishes in the equilibrium state
δ W = δ W^int - δ W^ext
The internal virtual work is defined by
δ W^int = ∫_Ω S_ij : δ E_ij d Ω =
∫_A ( n_αβ : δε_αβ + m_αβ : δκ_αβ ) dA ,
and the external virtual work takes the form of
δ W^ext = ∫_Aδ u_i f_i d A.
§ NURBS-BASED ISOGEOMETRIC ANALYSIS FOR SHELL STRUCTURES
In this section, the useful properties of NURBS basis functions which are suitable for the analysis of Kirchhoff-Love shells are summarized, followed by an introduction to the bending strip method which is employed in the multi-patch case.
§.§ Interpolation using NURBS basis functions
Due to the appearance of the second derivatives in curvature changes of the shell kinematics, C^1-continuity of the employed basis functions must be required to attain a conforming discretization of the Kirchhoff-Love hypothesis. This can be simply achieved using NURBS basis functions which are used to approximate the solution fields and represent the shell geometry exactly. Following that, given a control net P_i,j,1 ≤ i≤ n,1 ≤ j ≤ m, polynomial orders p and q, and knot vectors Ξ_1 ={ξ_1^1, ξ_1^2, ..., ξ_1^n+p+1} and Ξ_2 ={ξ_2^1, ξ_2^2, ..., ξ_2^m+q+1}, a shell surface can be represented by sum:
S(ξ_1, ξ_2) = ∑_i=1^n∑_j=1^m N_i,j(ξ_1, ξ_2) P_i,j
where ξ_1 ∈Ξ_1, ξ_2 ∈Ξ_2 are coordinates in the parametric space, N_i,j are the NURBS basis functions, which are defined as
N_i,j(ξ_1, ξ_2) = B_i,p(ξ_1) B_j,q(ξ_2) w_i,j/∑_k=1^n∑_l=1^m B_k,p(ξ_1) B_l,q(ξ_2) w_k,l.
In <Ref>, B is the univariate B-spline basis functions and w is the weight factor. In the parametric space, the elements are defined by partitioning the knots into knot spans. Inside each knot span, the NURBS basis functions are C^∞-continuous and C^p-k at each knot (k denotes the multiplicity of the knot). When k ≤ p-1, the C^1 continuity is satisfied and is useful for the analysis of Kirchhoff-Love thin shell. Nevertheless, in the typical scenario, the open knot vector is used <cit.>, which leads to only C^0 continuity at the patch boundaries. Without additional treatment, a kink in bending moment will occur on the patch boundaries if the Kirchhoff-Love formulation is used. This issue will be further described in the next section.
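A compact way to evaluate the bivariate NURBS basis of the previous equation is sketched below in Python; the Cox-de Boor recursion and the helper names are our own choices and are not tied to any particular IGA library.

import numpy as np

def bspline_basis(knots, p, xi):
    """All univariate B-spline basis functions B_{i,p}(xi) by the Cox-de Boor recursion."""
    knots = np.asarray(knots, dtype=float)
    m = len(knots) - 1
    B = np.array([1.0 if (knots[i] <= xi < knots[i + 1]) or
                  (xi == knots[-1] and knots[i] < knots[i + 1] == knots[-1]) else 0.0
                  for i in range(m)])                 # degree 0, right end included in last span
    for k in range(1, p + 1):
        Bk = np.zeros(m - k)
        for i in range(m - k):
            left = right = 0.0
            if knots[i + k] > knots[i]:
                left = (xi - knots[i]) / (knots[i + k] - knots[i]) * B[i]
            if knots[i + k + 1] > knots[i + 1]:
                right = (knots[i + k + 1] - xi) / (knots[i + k + 1] - knots[i + 1]) * B[i + 1]
            Bk[i] = left + right
        B = Bk
    return B                                          # length n = len(knots) - p - 1

def nurbs_basis_2d(Xi1, Xi2, p, q, W, xi1, xi2):
    """Bivariate NURBS basis N_{i,j}(xi1, xi2); W is the n x m grid of weights w_{i,j}."""
    num = np.outer(bspline_basis(Xi1, p, xi1), bspline_basis(Xi2, q, xi2)) * W
    return num / num.sum()                            # rational (projective) weighting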
Displacements of the shell are approximated with respect to the NURBS basis functions which are used to model the midsurface of the shell as
u = N_I(ξ_1, ξ_2) u_I
where u_I are the displacement at the I-th control point. Following that, the first and second derivatives of the displacements can be obtained:
u_,α = ∂ N_I(ξ_1, ξ_2)∂ξ_αu_I
u_,αβ = ∂^2 N_I(ξ_1, ξ_2)∂ξ_α∂ξ_βu_I
§.§ The bending strip method
The bending strip method <cit.> is a mesh connecting method, which fundamentally adds additional kinematic constraints at the patch interfaces so that the bending moment can be transmitted between patches. The strip is constructed from triples of control points, one at the shared boundary and one on each side of the two involved patches. With this construction, the parametric domain of the bending strip surface comprises one quadratic element in the direction perpendicular to the interface and linear elements along the boundary. For an illustration of the bending strip method, the reader is referred to <Ref> (Left). The principle of virtual work in <Ref> is rewritten to account for the bending strips as
δ W = δ W^int + δ W^int_strip - δ W^ext
The new term δ W^int_strip is computed as
δ W^int_strip = ∫_At^3/12κ^T C_stripκ d A.
Adopting the bending material stiffness proposed in <cit.>, we have
C_strip =
[ 0 0 0; 0 E_strip 0; 0 0 0 ]
in which the value of the bending strip modulus E_strip must be chosen high enough; typical values range from 10^3 × E to 10^5 × E, with E the Young's modulus.
Here, we illustrate the method by an example represented by a Hex-can geometry subjected to internal pressure (see <Ref> (Right) for the geometry of the Hex-can). The Hex-can geometry comprises six patches joining at patch boundaries. Due to the flat geometry of each patch, kinks occur at the patch interfaces. With this configuration, the bending moment is not transmitted if only C^0 continuity between patches is enforced. When bending strips are added to couple the triples of control points at the interfaces of each patch, the kinks do not arise (see <Ref> (Left) for the displacement without bending strips and <Ref> (Right) for the displacement with bending strips). As can be observed in <Ref>, kinks do not arise, which means that the bending moment is transferred between patches.
§ FE DISCRETIZATION
§.§ Discretization of the weak form
To solve <Ref> using the Newton-Raphson scheme, it shall be linearized with respect to Δ𝐮:
δ W (u, δu) + D δ W (u, δu)[Δu] = 0
which can be simply rewritten in a discretized form as
δ v_a K_abΔ u_b = δ v_a R_a
In <Ref>, a and b denote the global degree of freedom (d.o.f) number of the displacement field. R_a and K_ab are the derivatives of W and R_a, respectively:
R_a = ∂ W/∂ u_a K_ab = ∂ R_a/∂ u_b = ∂^2 W/∂ u_a ∂ u_b
The residual forces vector is the difference of the internal forces vector and the external forces vector:
R_a = R^int_a - R^ext_a,
in which
R^int_a = ∫_A( n_αβ∂ε_αβ/∂ u_a + m_αβ∂κ_αβ/∂ u_a) dA
R^ext_a = ∫_A∂ u_i/∂ u_a t_i dA = ∫_A N_I t_i dA
The tensors n_αβ and m_αβ in <Ref> denote the stress resultants, which are computed by integrating the second Piola-Kircchoff stress tensor S^αβ over the thickness direction ξ^3
n_αβ = ∫_-t/2^t/2 S^αβ d ξ^3 m_αβ = ∫_-t/2^t/2 S^αβ ξ^3 d ξ^3
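In an implementation, these thickness integrals are typically evaluated with a Gauss rule on ξ^3 ∈ [-t/2, t/2]; a short sketch follows (the callable stress_at is assumed to wrap the plane-stress material evaluation of Section 2.5 and is our own notation).

import numpy as np

def stress_resultants(stress_at, t, n_gauss=5):
    """Thickness integration of the membrane forces n and bending moments m.

    stress_at(xi3) : returns the in-plane 2nd Piola-Kirchhoff components (S^11, S^22, S^12)
                     at the through-thickness coordinate xi3 (hypothetical interface).
    """
    gp, gw = np.polynomial.legendre.leggauss(n_gauss)   # Gauss points/weights on [-1, 1]
    xi3 = 0.5 * t * gp                                  # map to [-t/2, t/2]
    w = 0.5 * t * gw                                    # Jacobian of the mapping
    n = np.zeros(3)
    m = np.zeros(3)
    for x3, wg in zip(xi3, w):
        S = np.asarray(stress_at(x3))
        n += wg * S                                     # membrane force resultants
        m += wg * x3 * S                                # bending moment resultants
    return n, m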
The tangential stiffness matrix in <Ref> can be split into material and geometric parts as
K_ab = K_ab^geo + K_ab^mat
in which
K_ab^mat = ∫_A( ∂ n_αβ/∂ u_b∂ε_αβ/∂ u_a + ∂ m_αβ/∂ u_b∂κ_αβ/∂ u_a ) dA
K_ab^geo = ∫_A( n_αβ∂^2 ε_αβ/∂ u_a ∂ u_b + m_αβ∂^2 κ_αβ/∂ u_a ∂ u_b) dA
It is noted that, the case of displacement-dependent load t_i = t_i(u_i), which leads to the dependency of tangential stiffness to the internal forces, is not considered. The derivatives of the stress resultants w.r.t nodal displacements take the form of
∂ n_αβ/∂ u_b = ( ∫_-t/2^t/2Ĉ^αβγδ d ξ^3 ) ∂ε_γδ/∂ u_b + ( ∫_-t/2^t/2Ĉ^αβγδξ^3 d ξ^3 ) ∂κ_γδ/∂ u_b
∂ m_αβ/∂ u_b = ( ∫_-t/2^t/2Ĉ^αβγδξ^3 d ξ^3 ) ∂ε_γδ/∂ u_b + ( ∫_-t/2^t/2Ĉ^αβγδ (ξ^3)^2 d ξ^3 ) ∂κ_γδ/∂ u_b
For more details on the derivatives of strain tensor with respect to nodal displacement, the reader is referred to <cit.>. Note that the four-order tensor Ĉ^αβγδ in the equations above is expressed in <Ref> which is then modified following Eq.(<ref>).
§.§ Bézier decomposition
To evaluate the basis function <Ref>, the full knot vector is required. In addition, to interpolate the displacement within an element, the displacements at other control points in the patch are necessary. To alleviate this issue and maintain the local characteristic of the finite element, the Bézier decomposition concept is proposed, which bridges the gap between the isogeometric method and the standard finite element method <cit.>. Taking into account that not all B-spline basis functions are nonzero on a given knot span, the basis functions 𝐍 that do not vanish on a knot span can be represented by a linear combination of Bézier basis functions 𝐁 constructed on the knot vector {0,…,0 (p+1 times), 1,…,1 (p+1 times)}, in short:
𝐍 = 𝒞𝐁
The coefficient matrix 𝒞 is called Bézier extraction operator and depends only on the knot vectors, hence it is considered as constant. For more details on how to compute 𝒞, the reader is referred to <cit.>.
Based on the Bézier decomposition concept, the Bézier finite element can be constructed, which contains only the control points associated with non-vanishing basis functions on the knot span. An illustration of the Bézier finite element for the 2D case can be found in <Ref>.
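At the element level, 𝐍 = 𝒞𝐁 means that a quadrature-point evaluation only needs the local extraction operator and the Bernstein values; a minimal sketch is given below (the ordering of the local control points is assumed to match the rows of 𝒞; the helper names are ours).

import numpy as np
from math import comb

def bernstein(p, xi):
    """Bernstein polynomials of degree p on the reference interval [-1, 1]."""
    s = 0.5 * (xi + 1.0)
    return np.array([comb(p, a) * s ** a * (1.0 - s) ** (p - a) for a in range(p + 1)])

def element_basis(C_e, p, q, xi1, xi2):
    """Element B-spline basis N_e = C_e B at a quadrature point (Bezier decomposition)."""
    B = np.outer(bernstein(p, xi1), bernstein(q, xi2)).ravel()   # bivariate Bernstein values
    return C_e @ B

# With the element weights w_e, the rational (NURBS) values follow as in the appendix:
#   N_e = element_basis(C_e, p, q, xi1, xi2)
#   R_e = (w_e * N_e) / np.dot(w_e, N_e)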
To evaluate the second derivatives terms, i.e. <Ref>, the second derivatives of the basis functions are required. <Ref> presents a method to take the second derivatives of NURBS basis functions evaluated through Bézier extraction operator and Bézier basis functions.
§ NUMERICAL EXAMPLES
In this section, four selected examples are performed to analyze hyperelastic and elastoplastic behaviours of multi-patch thin shell structures. Comparisons with the reference results are made. Arc-length control method following the approach in <cit.> is adopted to capture the structural response in the last two examples.
§.§ Numerical example 1: Pinched semi-cylindrical shell
For the first example, a semi-cylindrical shell is pinched by a point force at the middle of one end and fully clamped at the other end. The point load is applied incrementally until the maximum load P=2000.0 is reached. Figure <ref> shows the problem setup and boundary conditions. The length of the cylinder is L=3.048 and the radius is R=1.016. The St. Venant-Kirchhoff material model is used with E=2.0685 × 10^7 and ρ = 0.3.
Figure <ref> shows the results obtained on four different mesh sizes with cubic NURBS elements, indicating that the computational procedure fails at P=500.0 with 8 × 8 elements, while accurate results are obtained with mesh sizes of 12 × 12, 16 × 16 and 20 × 20. The deformed geometry of the cylinder under the maximum load is shown in Fig. <ref>. Table <ref> reports the total number of iterations required for the computation; our approach requires 3 to 9 iterations at each load step, which is fewer than for other shell elements <cit.> <cit.> based on finite element methods.
§.§ Numerical example 2: Pinching of cylinder
In the second example, the proposed computational model is validated against a finite strain hyperelasticity model. Following that, a tube with geometry and boundary conditions as illustrated in <Ref> is analyzed. The geometric parameters are chosen as length L=30 cm, radius R=9 cm, and thickness t=0.2 cm. The bottom of the cylinder is fixed and a uniformly distributed line load (p) is applied on top (see <Ref>). The constitutive law is Neo-Hookean, with the strain energy defined in <Ref>. The material parameters are μ = 60 kN/mm^2 and λ = 240 kN/mm^2. The geometry of the cylinder consists of 4 patches and is discretized using 552 cubic Bézier elements.
W = μ/2( trC - 3 ) - μln√(det C) + λ/4( det C - 1 - 2 ln√(det C))
The load p is increased equally in 8 steps. <Ref> lists computed displacements at point A corresponding to applied load at each step. From <Ref> (Right), the structural load when the vertical deflection is u=160 mm can be approximated as F≈ 34.965 kN, which is in good agreement with the results from literature ranging between 34.59 kN and 35.47 kN <cit.>. A contour plot of the last deformed configuration is depicted in <Ref> (Left).
§.§ Numerical example 3: Scordelis-Lo roof
In the next example, the failure analysis of the Scordelis-Lo roof, accounting for fully non-linear thin shell behaviour, is considered. <Ref> illustrates the boundary conditions and the geometry of the problem, which contains 2 patches connected at the middle line on the roof top. The material parameters are chosen as elastic modulus E = 2.1 × 10^4 N/mm^2, Poisson's ratio ρ=0 and the hardening function κ(α) = 4.2 N/mm^2. Because the hardening modulus vanishes, the perfect plasticity condition is obtained. The structure is subjected to gravity load. A mesh comprising 3969 quadratic Bézier elements is used to discretize the whole domain. The arc-length control method is employed to capture the structural response with the reference value f_0 = 4 × 10^-3 N/mm^2.
<Ref> shows deformed configurations at different loading stages. As can be observed from <Ref>, the localized failure mode is characterized by the appearance of the plastic hinge along the axial line on the roof top.
The corresponding load-deflection curve at the center point of the side (denoted A) is plotted in <Ref>. It can be observed that the obtained results agree well with the reference <cit.>.
§.§ Numerical example 4: Pinching of cylinder with large strains
The last example considers a cylinder supported by two rigid diaphragms and pinched by two concentrated forces at the middle of opposite sides. <Ref> presents the boundary conditions and the geometry of the cylinder, which contains four patches connected at the common interfaces A1, A2, A3 and A4. The material parameters are chosen as E = 3000, ρ=0.3 and the hardening function κ(α) = 24.3 + 300 α. The geometry contains 8 patches as depicted in <Ref> (Top Left). Bending strips are used on each patch interface to enforce the C^1-continuity condition. A mesh containing 3452 quadratic Bézier elements is used to discretize the whole domain.
Deformed configurations at different stages of loading are illustrated in <Ref>. The load-deflection curve representing the structural response, obtained at the loading point, is shown in <Ref>. The obtained results are in good agreement with the reference <cit.>.
§ CONCLUSION
An efficient computational model for thin shells using isogeometric analysis is proposed, with the advantage of using the bending strip method to maintain the C^1 continuity, thus allowing for efficient modelling using multiple NURBS patches with C^0 continuity at the patch boundaries. The Bézier decomposition method is used to retain the local characteristics of the finite element. This computational model is validated with respect to nonlinear constitutive models, including hyperelasticity and elastoplasticity. The benchmark results agree well with the references, showing that the proposed computational model is promising for efficient shell analysis. The presented shell formulation is general and flexible and can be extended to dynamics and ductile fracture, motivated by published papers that are classified into two categories: discrete crack modeling <cit.> <cit.> <cit.> and smeared crack modeling <cit.> <cit.>. On the other hand, in order to further improve the efficiency, parallelization for distributed computing remains to be done. The parallel algorithm shall take into account the distribution of bending strips over computing processes. This will be addressed in future research.
§ ACKNOWLEDGEMENTS
The first author would like to acknowledge the financial support via the RISE project BESTOFRAC 734370 for this work. The research performed by Hoang-Giang Bui and Günther Meschke was conducted in the framework of the Collaborative Research Project SFB 837 "Interaction Modeling in Mechanized Tunneling", financed by the German Research Foundation (DFG). The authors would like to thank the DFG for the support of this project.
§ SECOND DERIVATIVES OF NURBS BASIS FUNCTIONS BASED ON BÉZIER DECOMPOSITION
The NURBS basis functions that do not vanish on the knot span are defined by
𝐑 = 𝒲𝒞𝐁/⟨𝐖^b, 𝐁⟩
In <Ref>, the operator ⟨·, ·⟩ denotes the dot product of two vectors. 𝒲 is the diagonal matrix whose entries are the weights of the non-vanishing basis functions, 𝐖 = diag{𝒲} and 𝐖^b = 𝒞^T 𝐖.
The first derivatives of <Ref> can be computed as
∂𝐑/∂ξ^α = 𝒲𝒞[ (∂𝐁/∂ξ^α)/⟨𝐖^b, 𝐁⟩ - ⟨𝐖^b, ∂𝐁/∂ξ^α⟩𝐁/⟨𝐖^b, 𝐁⟩^2]
Taking the derivatives of <Ref> with respect to ξ^α and ξ^β gives
∂^2 𝐑/∂ (ξ^α)^2 =
𝒲𝒞[ (∂^2 𝐁/∂ (ξ^α)^2)/⟨𝐖^b, 𝐁⟩
- 2 ⟨𝐖^b, ∂𝐁/∂ξ^α⟩ (∂𝐁/∂ξ^α)/⟨𝐖^b, 𝐁⟩^2
+ 2 ⟨𝐖^b, ∂𝐁/∂ξ^α⟩^2 𝐁/⟨𝐖^b, 𝐁⟩^3
- ⟨𝐖^b, ∂^2 𝐁/∂ (ξ^α)^2⟩𝐁/⟨𝐖^b, 𝐁⟩^2]
And
∂^2 𝐑/∂ξ^α∂ξ^β = 𝒲𝒞[ (∂^2 𝐁/∂ξ^α∂ξ^β)/⟨𝐖^b, 𝐁⟩
- ⟨𝐖^b, ∂𝐁/∂ξ^α⟩ (∂𝐁/∂ξ^β)/⟨𝐖^b, 𝐁⟩^2
- ⟨𝐖^b, ∂𝐁/∂ξ^β⟩ (∂𝐁/∂ξ^α)/⟨𝐖^b, 𝐁⟩^2
+ 2 ⟨𝐖^b, ∂𝐁/∂ξ^α⟩⟨𝐖^b, ∂𝐁/∂ξ^β⟩𝐁/⟨𝐖^b, 𝐁⟩^3
- ⟨𝐖^b, ∂^2 𝐁/∂ξ^α∂ξ^β⟩𝐁/⟨𝐖^b, 𝐁⟩^2]
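In code, the formulas above are a direct application of the quotient rule; the sketch below (our own helper, reusing the element extraction operator 𝒞, the weights and the Bernstein values at a quadrature point) returns R together with its first and mixed second derivatives; the pure second derivative is recovered by passing dBa = dBb and d2Bab = ∂^2𝐁/∂(ξ^α)^2.

import numpy as np

def nurbs_derivatives(C, w, B, dBa, dBb, d2Bab):
    """NURBS values and derivatives from Bezier values, following the appendix formulas."""
    Wb = C.T @ w                       # Bezier weights W^b = C^T W
    s = Wb @ B                         # <W^b, B>
    sa, sb = Wb @ dBa, Wb @ dBb        # <W^b, dB/dxi^a>, <W^b, dB/dxi^b>
    sab = Wb @ d2Bab                   # <W^b, d2B/dxi^a dxi^b>
    WC = np.diag(w) @ C                # the operator 'W C' in front of the brackets
    R = WC @ (B / s)
    dRa = WC @ (dBa / s - sa * B / s ** 2)
    d2Rab = WC @ (d2Bab / s - sa * dBb / s ** 2 - sb * dBa / s ** 2
                  + 2.0 * sa * sb * B / s ** 3 - sab * B / s ** 2)
    return R, dRa, d2Rab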
§ CONSISTENT ELASTO-PLASTIC TANGENT MODULI
|
http://arxiv.org/abs/2307.07591v2 | 20230714193236 | A new radiation reaction approximation for particle dynamics in the strong field regime | [
"Jérôme Pétri"
] | astro-ph.HE | [
"astro-ph.HE",
"hep-th"
] |
Université de Strasbourg, CNRS, Observatoire astronomique de Strasbourg, UMR 7550, F-67000 Strasbourg, France.
[email protected]
Following particle trajectories in the intense electromagnetic field of a neutron star is numerically prohibitive because of the large ratio between the cyclotron frequency ω_ B and the stellar rotation frequency Ω. No fully kinetic simulations on a macroscopic scale and with realistic field strengths have been performed so far due to the huge computational cost implied by this enormous scale separation.
In this paper, we derive new expressions for the particle velocity subject to strong radiation reaction that are intended to be more accurate than the current state-of-the-art expression in the radiation reaction limit regime, the so-called Aristotelian regime.
We shortened the timescale hierarchy by solving the particle equation of motion in the radiation reaction regime, where the Lorentz force is always and immediately balanced by the radiative drag, and including a friction not necessarily opposite to the velocity vector, as derived in the Landau-Lifshitz approximation.
Starting from the reduced Landau-Lifshitz equation (i.e. neglecting the field time derivatives), we found expressions for the velocity depending only on the local electromagnetic field configuration and on a new parameter related to the field strength that controls the strength of the radiative damping. As an example, we imposed a constant Lorentz factor γ during the particle motion. We found that for ultra-relativistic velocities satisfying γ≳ 10, the difference between the strong radiation reaction regime and the radiation reaction limit becomes negligible.
The new velocity expressions produce results similar in accuracy to the radiation reaction limit approximation. We therefore do not expect this new method to improve the accuracy of neutron star magnetosphere simulations. The radiation reaction limit is a simple but accurate, robust, and efficient way to follow ultra-relativistic particles in a strong electromagnetic field.
A new radiation reaction approximation for particle dynamics in the strong field regime
J. Pétri
Received ; accepted
=======================================================================================
§ INTRODUCTION
With the recent increase in computational power, performing full kinetic simulations of neutron star magnetospheres can now be envisaged. However, the difference in timescales between gyro frequency and stellar frequency prevents realistic values from being applied to these parameters. So far the only way to circumvent this scaling problem is to downsize these frequencies while still keeping them well separated by respecting the ordering of these frequencies. Although this is helpful for understanding the dynamics of charged particles in extreme environments, it does not permit estimations of the true efficiency of particle acceleration and radiation reaction to be made because the Lorentz factors reached are several orders of magnitude lower than the ones predicted from observations of very high energy photons (see the review by <cit.>). Recently some attempts have been made to simulate realistic parameters by <cit.> and <cit.>, but the computational time remains prohibitive, and only test particles have been investigated while neglecting their back reaction to the field.
It is highly desirable to overcome this limitation by employing an approximation known as the radiation reaction limit (RRL) regime, sometimes also called Aristotelian dynamics, for which the equation of motion with radiative friction is shortened by use of an algebraic expression for the particle velocity depending only on the local value of the electric and magnetic field. This idea was applied by <cit.> and <cit.>. Spectra and light curves in this regime were extensively studied by <cit.> in a vacuum field. He found realistic Lorentz factors and photon energies in reasonable agreement with the spectra observed by Fermi/LAT <cit.>.
Recently, <cit.> generalised the RRL velocity by including the term proportional to the velocity <cit.> and by computing the associated radiation spectra. They found a complicated formula that unfortunately does not apply to any electromagnetic field configurations. Moreover they introduced some hypotheses that are not well justified to derive an expression for the velocity. Following a different approach, <cit.> studied the validity of the RRL equilibrium by describing the particle motion in a Frenet frame with a finite Lorentz factor. They introduced the principal null directions, which are the eigenvectors of the electromagnetic field tensors. The spatial part is equal to the Aristotelian spatial velocity, or stated differently, it is equal to the RRL velocity. Although their analysis is based on the equations, including the time evolution of the Lorentz factor, at the end of their derivation they had to resort to the computation of the curvature radius in order to estimate this aspect of the Lorentz factor. In this work, we attempt to estimate the Lorentz factor by evolving it in time from the initial conditions, but as we show, the curvature radiation interpretation leads to more accurate estimates of the Lorentz factor. <cit.> applied their idea to a rather artificial magnetic field configuration. Our aim is to apply such techniques to realistic fields, such as a rotating magnetic dipole.
In this paper, we derive formulas for the velocity in an arbitrary electromagnetic field configuration starting from the reduced Landau-Lifshitz equation (LLR; i.e. neglecting the field time derivatives).
In section <ref>, we derive the algorithm for the new velocity field according to the LLR equation, which we call improved radiation reaction (IRR). Some explicit expressions of this velocity are given in Section <ref>. In Section <ref>, we then quantify the improvement brought by the inclusion of the radiative friction term proportional to the velocity compared to the standard RRL. Conclusions and perspectives are touched on in Section <ref>.
§ STRONG RADIATION REACTION REGIME
Radiation reaction can be thought of as a friction drag opposing some resistance to the Lorentz force. It acts as a brake and is appropriately depicted by a force opposite to the velocity vector. However, in the Landau-Lifshitz approximation, the radiation reaction force is opposite to the velocity only in the limit of ultra-relativistic particles. For an arbitrary particle speed, there are additional components along the electric field E⃗, the magnetic field B⃗, and the electric drift motion E⃗∧B⃗. We aim to quantify the effect of these additional forces on the particle trajectory by first deriving a new expression for the velocity.
§.§ Equation of motion
As an approximation of the Lorentz-Abraham-Dirac equation, we employ the Landau-Lifshitz expression according to <cit.> such that
du^i/dτ = q/m F^ik u_k + q τ_ m/m g^i ,
g^i = u^ℓ∂_ℓ F^ik u_k +
q/m ( F^ik F_kℓ u^ℓ + ( F^ℓ m u_m ) ( F_ℓ k u^k ) u^i/c^2),
with the typical timescale related to the particle classical radius crossing time
τ_ m = q^2/6 π ε_0 m c^3,
with τ being the proper time, u^i=γ(c,v⃗) as the 4-velocity, q being the particle charge, m as the mass, c as the speed of light, 𝐄 and 𝐁 as the electric and magnetic field, ε_0 as the vacuum permittivity, 𝐯 as the particle velocity, and F^ik as the electromagnetic tensor.
To derive the velocity vector v⃗ and the Lorentz factor γ, it is judicious to switch to the 3+1 formalism by introducing the observer time dt = γ dτ. Therefore,
dp⃗/dt = q F⃗_L + γ q τ_m [ dE⃗/dt + v⃗∧dB⃗/dt]
+ q^2 τ_m/m [ F⃗_L ∧B⃗ + ( β⃗·E⃗ ) E⃗/c ] + q^2 τ_m/m c^2 γ^2 [ ( β⃗·E⃗)^2 - F⃗_L^2 ] v⃗
dγ/dt = q/mc [ β⃗·E⃗ + τ_m γ β⃗·dE⃗/dt +q τ_m/m c ( F⃗_L ·E⃗ + γ^2 [ ( β⃗·E⃗)^2 - F⃗_L^2 ] ) ],
where we define the vector field
F⃗_L = E⃗ + v⃗∧B⃗
,
the normalised velocity β⃗ = v⃗/c, and the momentum by p⃗ = γ m v⃗.
In the constant field approximation, we drop the time derivatives and obtain the fundamental equation of motion for a particle as follows
dp⃗/dt = q F⃗_L + q^2 τ_m/m [ F⃗_L ∧B⃗ + ( β⃗·E⃗ ) E⃗/c ]
+ q^2 τ_m/m c^2 γ^2 [ ( β⃗·E⃗)^2 - F⃗_L^2 ] v⃗
dγ/dt = q/mc [ β⃗·E⃗ +q τ_m/m c ( F⃗_L ·E⃗ + γ^2 [ ( β⃗·E⃗)^2 - F⃗_L^2 ] ) ] .
§.§ Derivation of the velocity: First approach
The derivation of the particle velocity follows the procedure outlined by <cit.>. Nevertheless, instead of using a friction of the form -K 𝐯 with K>0, we use the three-dimensional version of the radiation reaction force, neglecting the space-time dependence of the electromagnetic field such that the radiative force reduces to the second and third term in the right-hand side of Eq. (<ref>).
Writing the radiation reaction force as
F⃗^ rad = K_2 [ F⃗_ L∧B⃗ + (β⃗·E⃗) E⃗/c ] - K_1 v⃗
and balancing the Lorentz force
F⃗^ ext = q F⃗_ L
with this radiation reaction, F⃗^ ext + F⃗^ rad = 0⃗, we arrive at
q F⃗_ L = ( K_1 + K_2 B^2 ) v⃗ - K_2 ( E⃗∧B⃗ + (B⃗·v⃗) B⃗ + (β⃗·E⃗) E⃗/c ) .
We note that there is no assumption about particles moving at the speed of light; their Lorentz factor is arbitrary and v<c. This represents a novelty compared to all other radiation reaction expressions, which always enforce v=c.
The coefficients K_1 and K_2 are deduced from Eq. (<ref>) and given by
K_1 = q^2 τ_m/m c^2 γ^2 [ F⃗_ L^2 - (β⃗·E⃗)^2 ]
K_2 = q^2 τ_m/m .
We note that these coefficients are algebraic, being positive whatever the sign of the charge q.
However, K_1 depends on the Lorentz factor γ, and we leave it unconstrained. In order to solve Eq. (<ref>), the velocity is advantageously decomposed into three components (σ, δ, η) such that
v⃗ = σ E⃗ + δ B⃗ + η E⃗∧B⃗ .
These components must satisfy a linear system of three equations of unknowns (σ, δ, η) according to
q (1 - η B^2) = [K_1 + K_2 B^2 ] σ - K_2 [ σ E^2 + δ (E⃗·B⃗) ] / c^2
q η (E⃗·B⃗) = K_1 δ - K_2 (E⃗·B⃗) σ
q σ = [ K_1 + K_2 B^2 ] η - K_2 .
We recall that K_1 is unconstrained. Therefore, in order to fully solve the system, an additional condition is required for K_1. To this end, we could enforce v=c, but as a generalisation, we impose a user-defined Lorentz factor γ = (1-v^2/c^2)^-1/2.
Equations (<ref>) and (<ref>), supplemented with the condition on the speed v, fully determine the velocity vector 𝐯. Solving for δ, we get
δ = E⃗·B⃗/K_1 [ q η + K_2 σ ],
reducing the system to a 2x2 size. Indeed, the smaller linear system to be solved reads
[ K_1 + K_2 ( B^2- E^2/c^2) - K_2^2/K_1 (E⃗·B⃗/c)^2 q [ B^2 - K_2/K_1 (E⃗·B⃗/c)^2 ]; -q K_1 + K_2 B^2 ][ σ; η ]
=
[ q; K_2 ].
Deviation from the standard RRL arises because of the terms containing K_2, which are usually neglected in the ultra-relativistic regime.
In order to deduce the number of relevant free parameters in the problem, it is preferable to employ quantities without dimensions, as explained in the next section.
§.§ Dimensionless system
As generally required for numerical simulations, we introduce several useful quantities without dimensions relevant for the computation of the velocity. Following our previous work in <cit.>, the primary fundamental variables are: the speed of light c; a typical frequency ω involved in the problem; the particle electric charge q; and the particle rest mass m.
From these quantities we derive a typical time and length scale as well as electromagnetic field strengths such that the length scale L = c/ω;
the time scale T = 1/ω; the magnetic field strength B_n = m ω/|q|; the electric field strength E_n = c B_n; and the typical electromagnetic force strength F_n = |q| E_n.
The two important parameters defining the family of solutions are the field strength parameters a_B and a_E and the radiation reaction efficiency k_2 = ω τ_ m according to the following definitions
a_B = B/B_n = ω_ B/ω
a_E = E/E_n = ω_ E/ω .
The external force becomes
F⃗^ ext/F_n = sign(q) ( e⃗ + β⃗∧b⃗ )
with sign(q)=q/|q| and the radiative force
F⃗^ rad/F_n = k_2 [ e⃗∧b⃗ + b⃗∧ ( b⃗∧β⃗ ) + (β⃗·e⃗) e⃗]
- k_1 β⃗
with the normalised fields e⃗ = E⃗/E_n, b⃗ = B⃗/B_n and
k_1 = K_1/|q| B_n ; k_2 = K_2 B_n/|q| = ω τ_ m .
The velocity expansion coefficients are also normalised according to
σ̃ = σ B_n ; η̃ = η B_n^2 ; δ̃ = δ B_n/c.
The normalised system to be solved then reads with ζ=sign(q)
[ k_1 + k_2 ( b^2- e^2 ) - k_2^2/k_1 ( e⃗·b⃗)^2 ζ[b^2 - k_2/k_1 (e⃗·b⃗)^2]; -ζ k_1 + k_2 b^2 ][ σ̃; η̃ ]
=
[ ζ; k_2 ]
.
In the above system, k_2 is fixed by the nature of the charged particle (q,m) and the typical frequency ω. There is no freedom to choose it arbitrarily. However k_1 is undetermined and needs to be fixed by an additional constraint on the velocity. Choosing the Lorentz factor γ, the coefficient k_1 is found from the condition ‖𝐯‖ / c = √(1 - γ^-2).
Actually k_1 is related to the velocity v⃗ by Eq. (<ref>) in the LLR approximation. But once the coefficients σ, δ, η are solved for, v⃗ is a function of k_1 only. Thus Eq. (<ref>) is a non-linear equation for k_1 solely, which connects back to the fact that Eq. (<ref>) is a non-linear equation for v⃗ involving the Lorentz factor.
This first approach has the drawback of implicitly including the Lorentz factor in the linear system via the parameter K_1. In the next sub-section, we develop a second approach that is quadratic in the velocity and does not contain the Lorentz factor.
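In practice, for given normalised fields e⃗ and b⃗, the linear system above is solved for (σ̃, η̃) at fixed k_1, δ̃ follows, and an outer scalar root find adjusts k_1 until the prescribed Lorentz factor is met. The Python sketch below illustrates this (the bracket passed to the root finder and the helper names are our own choices):

import numpy as np
from scipy.optimize import brentq

def irr_velocity(e, b, k1, k2, zeta=-1.0):
    """Solve the reduced 2x2 system for (sigma, eta), then delta; zeta = sign(q)."""
    e, b = np.asarray(e, float), np.asarray(b, float)
    eb, b2, e2 = e @ b, b @ b, e @ e
    A = np.array([[k1 + k2 * (b2 - e2) - k2 ** 2 * eb ** 2 / k1, zeta * (b2 - k2 * eb ** 2 / k1)],
                  [-zeta, k1 + k2 * b2]])
    sigma, eta = np.linalg.solve(A, [zeta, k2])
    delta = eb / k1 * (zeta * eta + k2 * sigma)
    return sigma * e + delta * b + eta * np.cross(e, b)   # beta = v / c

def irr_velocity_for_gamma(e, b, k2, gamma, zeta=-1.0, bracket=(1e-12, 1e12)):
    """Adjust k_1 so that the speed matches the prescribed Lorentz factor."""
    target = np.sqrt(1.0 - 1.0 / gamma ** 2)
    f = lambda k1: np.linalg.norm(irr_velocity(e, b, k1, k2, zeta)) - target
    k1 = brentq(f, *bracket)           # bracket to be adapted to the local field strength
    return irr_velocity(e, b, k1, k2, zeta), k1

For electrons ζ = -1, and the initial guess k_1 ≈ a_E_0 quoted later provides a sensible centre for the bracket.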
§.§ Derivation of the velocity: Second approach
In a second approach, called velocity radiation reaction (VRR), instead of cancelling the relativistic momentum time derivative dp⃗/dt, we decided to cancel the velocity time derivative dv⃗/dt given in the Landau-Lifshitz approximation by
γ m dv⃗/dt = q [E⃗ + v⃗∧B⃗ - (β⃗·E⃗) β⃗] +
q^2 τ_ m/m[ E⃗∧B⃗ + (v⃗∧B⃗) ∧B⃗ + (β⃗∧E⃗) ∧E⃗ / c + β⃗· (E⃗∧B⃗) β⃗] .
The advantage of this approach is that it sticks closer to the Aristotelian regime. Indeed, if the term involving τ_ m is removed, we retrieve the RRL and the associated Aristotelian velocity expression that exactly satisfies
E⃗ + v⃗∧B⃗ - (β⃗·E⃗) β⃗ = 0.
Translated into normalised units, we get
ζ k_2 [ e⃗∧b⃗ + (β⃗∧b⃗) ∧b⃗ + (β⃗∧e⃗) ∧e⃗ + β⃗· (e⃗∧b⃗) β⃗]
+ e⃗ + β⃗∧b⃗ - (β⃗·e⃗) β⃗ = 0.
This expression is quadratic in β⃗ and, unlike the previous approach, it does not involve the Lorentz factor. It can thus be solved by standard root finding techniques for a fixed Lorentz factor (or equivalently a fixed velocity norm). In a simple prescription, we set the velocity norm to ‖v⃗‖ = c again, but any Lorentz factor can be imposed. Departure from the RRL arises due to the term involving ζ k_2. In the next section, we discuss the different approximations to the particle Lorentz factor.
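One way to solve the quadratic equation above at fixed speed is to parametrise β⃗ by two angles and minimise the three-component residual; the sketch below uses scipy for this purpose (the parametrisation and the starting guess along the electric drift are our own choices, not prescribed here):

import numpy as np
from scipy.optimize import least_squares

def vrr_residual(beta, e, b, k2, zeta):
    """Residual of the VRR balance with the field time derivatives neglected."""
    fl = e + np.cross(beta, b)                            # normalised Lorentz force term
    rad = (np.cross(e, b) + np.cross(np.cross(beta, b), b)
           + np.cross(np.cross(beta, e), e) + np.dot(beta, np.cross(e, b)) * beta)
    return zeta * k2 * rad + fl - np.dot(beta, e) * beta

def vrr_velocity(e, b, k2, gamma, zeta=-1.0):
    """Solve for beta with the norm fixed by the prescribed Lorentz factor."""
    e, b = np.asarray(e, float), np.asarray(b, float)
    speed = np.sqrt(1.0 - 1.0 / gamma ** 2)

    def beta_of(angles):
        th, ph = angles
        return speed * np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])

    d = np.cross(e, b)
    d = d / (np.linalg.norm(d) + 1e-30)                   # start along the electric drift direction
    x0 = [np.arccos(np.clip(d[2], -1.0, 1.0)), np.arctan2(d[1], d[0])]
    sol = least_squares(lambda x: vrr_residual(beta_of(x), e, b, k2, zeta), x0)
    return beta_of(sol.x)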
§ APPROXIMATIONS OF THE PARTICLE LORENTZ FACTOR
In this section, we explore several approximations to estimate the particle velocity without resorting to a full time integration of the equation of motion. We first remind the standard expression for the velocity, how it compares to our new expression and then discuss the Lorentz factor estimation.
§.§ Friction opposite to velocity
Starting from the radiation reaction description of <cit.> where the radiative friction is opposite to the particle velocity vector v⃗, we write
q (E⃗ + v⃗∧B⃗) = K v⃗,
where K is a positive parameter related to the power radiated by the particle.
Solving for the velocity, we find
( B^2 + K^2/q^2) v⃗ = K/q E⃗ + E⃗∧B⃗ + q/K ( E⃗·B⃗ ) B⃗.
Moreover K is the only positive solution of the bi-quadratic equation
K^4 v^2 - q^2 ( E^2 - v^2 B^2 ) K^2 - q^4 (E⃗·B⃗)^2 = 0 .
Thus, it satisfies
K/|q| = √((E^2 - v^2 B^2 + √(( E^2 - v^2 B^2 )^2 + 4 v^2 (E⃗·B⃗)^2))/(2 v^2)) .
So far there are no constraints on the particle speed v<c. In the limit of v=c, we retrieve the velocity expression used in the literature, namely,
𝐯_± = 𝐄∧𝐁± ( E_0 𝐄 / c + c B_0 𝐁)/E_0^2/c^2+B^2,
assuming particles moving at the speed of light ‖𝐯_±‖=c. In the equation, v⃗_+ represents positively charged particles, whereas v⃗_- represents negatively charged particles. The electromagnetic field strength E_0 and B_0 are deduced from the electromagnetic invariants 𝐄^2 - c^2 𝐁^2 = E_0^2 - c^2 B_0^2 and 𝐄·𝐁 = E_0 B_0 with the constraint E_0≥0. Therefore, the radiated power is 𝒫_R = q F⃗_L ·v⃗ = |q| c E_0 and K = |q| E_0 / c ≥0.
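This limiting expression is straightforward to evaluate from the two field invariants; a short Python sketch (the helper names are ours) reads:

import numpy as np

C_LIGHT = 299792458.0

def field_invariants(E, B):
    """E_0 >= 0 and B_0 from E^2 - c^2 B^2 = E_0^2 - c^2 B_0^2 and E.B = E_0 B_0."""
    E, B = np.asarray(E, float), np.asarray(B, float)
    i1 = E @ E - C_LIGHT ** 2 * (B @ B)
    i2 = C_LIGHT * (E @ B)
    E0 = np.sqrt(0.5 * (i1 + np.sqrt(i1 ** 2 + 4.0 * i2 ** 2)))
    # when E_0 = 0 (purely magnetic case with E.B = 0), B_0 is fixed up to a sign
    B0 = (E @ B) / E0 if E0 > 0.0 else np.sqrt(max(-i1, 0.0)) / C_LIGHT
    return E0, B0

def rrl_velocity(E, B, sign=+1):
    """Radiation-reaction-limit velocity v_+/- for particles moving at the speed of light."""
    E, B = np.asarray(E, float), np.asarray(B, float)
    E0, B0 = field_invariants(E, B)
    num = np.cross(E, B) + sign * (E0 * E / C_LIGHT + C_LIGHT * B0 * B)
    return num / (E0 ** 2 / C_LIGHT ** 2 + B @ B)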
Except for this ultra-relativistic limit for which we assume v=c, another relation is required to set the particle Lorentz factor γ. To this end, we equate the radiated power according to the local curvature radius ρ_c of the particle trajectory as
𝒫_R = q^2/6 π ε_0 γ^4 c/ρ_c^2 = γ^4 τ_ m m c^4/ρ_c^2 = |q| c E_0
from which the Lorentz factor becomes
γ = ( |q| E_0 ρ_c^2/(τ_ m m c^3))^1/4 = ( (β⃗·e⃗/k_2) (ωρ_c/c)^2)^1/4 .
Moreover, the curvature is found from the acceleration by
κ_c = 1/ρ_c≈‖dβ⃗/c dt‖ .
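A minimal numerical sketch of this estimate, written here for electrons with the physical constants taken from scipy (the helper names are ours), is:

import numpy as np
from scipy.constants import e, m_e, c, epsilon_0

TAU_M = e ** 2 / (6.0 * np.pi * epsilon_0 * m_e * c ** 3)   # timescale tau_m for an electron

def lorentz_factor_rrl(E0, rho_c):
    """Lorentz factor from the balance |q| c E_0 = gamma^4 tau_m m c^4 / rho_c^2."""
    return (e * E0 * rho_c ** 2 / (TAU_M * m_e * c ** 3)) ** 0.25

def curvature_fd(beta_prev, beta_next, dt):
    """Finite-difference estimate of kappa_c = ||d(beta)/dt|| / c."""
    return np.linalg.norm(np.asarray(beta_next) - np.asarray(beta_prev)) / (c * dt)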
As long as γ≫ 1, the RRL Eq. (<ref>) remains a very good approximation.
In Eq. (<ref>), the strength of the damping K is undetermined but usually set by the particle velocity v or equivalently by its Lorentz factor γ. Looking at LLR, we observed that the radiation reaction force term proportional to γ^2 is also opposite to the velocity v⃗. We could therefore identify K_1 with K to get
K τ_ m v^2 = m c^2 .
Hence, K/|q|≥ m / |q| τ_ m = 9× 10^11 T for electrons and positrons. This value is much too high.
As a consequence, there are two equations, Eqs. (<ref>) and (<ref>), for the two unknowns K and v.
In the ultra-relativistic limit K≈ |q| E_0/c and
β^2 ≈m c/|q| E_0 τ_ m = 1/ω_ E_0 τ_ m.
Keeping the velocity less than the speed of light leads to E_0 ≥ 2.7×10^20 V/m, which is even larger than the critical value of E_ crit≈ 1.3×10^18 V/m. Therefore, this idea fails and gives the same value as before for the magnetic equivalent E_0/c = 9× 10^11 T.
We must conclude that the only reasonable way to compute the Lorentz factor is via the curvature radiation power Eq. (<ref>).
Before switching back to the LLR equation, we check how the new velocity approximation compares to the simple prescription presented in this paragraph.
§.§ Comparison to the radiation reaction limit
In the literature about approximated radiation reaction formulas, only the force opposite to the velocity is considered. Translated into our more general approach dealing with the full set of terms in the LLR equation, we enforce k_2=0. The system (<ref>) then simplifies into
[ k_1 ζ b^2; -ζ k_1 ][ σ̃; η̃ ]
=
[ ζ; 0 ] .
The solution is readily found with
η̃ = 1/k_1^2 + b^2
σ̃ = ζ k_1 η̃
δ̃ = ζ e⃗·b⃗/k_1 η̃ .
This solution is exactly the same as the one presented in Eq. (<ref>) except that all quantities are now normalised.
The speed is not explicitly imposed to be equal to the speed of light. Therefore, k_1 needs to be deduced, for instance, from the Lorentz factor, as in the previous discussion about curvature radiation power.
When k_2≠0, k_1 is the root of a polynomial of high degree with no analytical expression. We compute this coefficient by applying a root finding algorithm via Newton-Raphson. A good initial guess for k_1 in the system (<ref>) is
k_1 ≈E_0/E_n = |q| E_0/m c ω = a_E_0 .
We checked that very few iterations are required to converge to a highly accurate solution with several digits of precision. Some simulations are shown in the next section. Finally, in the last approach, we use the full terms in the original LLR equation and solve for the velocity while taking into account the term independent of γ.
§.§ Lorentz factor from LLR
The system of equations (<ref>) solves the particle velocity vector by assuming the coefficient k_1 is freely adjustable. Actually, from the LLR equation, it is not tuneable and must be determined self-consistently with the expression (<ref>) in which there are no free parameters once the velocity v⃗ is fixed.
This approach would lead to a first algorithm for finding the particle Lorentz factor γ. We need to solve for γ such that Eq. (<ref>) is verified. However, as we show in the next section, the IRR algorithm finds very similar velocities compared to the `standard' algorithm. In this case Eq. (<ref>) also holds, approximately. Then it can be shown that
γ^2 q^2 [ (β⃗·E⃗)^2 - F⃗_L^2 ] ≈ - K_1^2 v^2,
which gives values of K_1 very different from the expectation in Eq. (<ref>).
In a second alternative algorithm using the LLR equation, we can try to set dγ/dt=0 and solve for the value of K_1 such that it satisfies
β⃗·E⃗ +q τ_m/m c ( F⃗_L ·E⃗ + γ^2 [ ( β⃗·E⃗)^2 - F⃗_L^2 ] ) = 0.
Here again there are no free parameters once the velocity v⃗ is fixed.
This equation constrains the Lorentz factor because it depends on v⃗, which is fully solved once K_1 is fixed. Therefore, the procedure consists of finding the root of Eq. (<ref>) depending only on γ. However, here also, as the solution is close to the expression (<ref>), we found instead that
F⃗_L ·E⃗ + γ^2 [ ( β⃗·E⃗)^2 - F⃗_L^2 ] ≈ 0,
which is not compatible with Eq. (<ref>).
Actually, the second term in Eq. (<ref>) corresponds to the opposite of the curvature radiation power 𝒫_R. It is related to the curvature κ_c in the case of an ultra-relativistic particle such that
κ_c = |q|/γ^2 m c^2 √(γ^2 [ F⃗_L^2 - ( β⃗·E⃗)^2 ] - F⃗_L ·E⃗) .
If the term F⃗_L ·E⃗ is negligible, we retrieve the result <cit.>
κ_c ≈|q|/γ m c^2 √(F⃗_L^2 - ( β⃗·E⃗)^2) .
The curvature would vanish in the limiting case investigated in this section because, by construction, dv⃗/dt = 0⃗. Consequently, as in the previous section, the best procedure to compute the Lorentz factor is through the curvature radiation power 𝒫_R, again by replacing E_0 by β⃗·E⃗ in Eq. (<ref>) and Eq. (<ref>).
A final trial consisted of integrating the Lorentz factor differential equation (<ref>) in time from the initial conditions. Because the regime is close to the RRL, the second term, expressing the power radiated, almost always vanishes. Contrary to the magnetic field, only the electric field produces work and is able to accelerate particles. The results are less accurate compared to the curvature radius approach. Actually, the curvature radius represents only an auxiliary variable to compute the Lorentz factor. It could be derived straightforwardly from the definition of Eq. (<ref>) but at the expense of computing the Lagrangian time derivatives of the electric and magnetic fields as
dE⃗/dt = ∂E⃗/∂ t + v⃗·∇⃗E⃗
and with a similar expression for B⃗.
These expressions are, however, unwieldy to implement because they require computing the partial time and space derivatives ∂_t and ∂_r⃗. We prefer to compute the curvature from a finite-difference approximation of Eq. (<ref>), which is an equivalent description but much simpler to implement numerically.
In the next section, we explore the efficiency and accuracy of the above mentioned methods for a rotating magnetic dipole with an electric quadrupole component.
§ SIMULATIONS AROUND A ROTATING DIPOLE
As a typical macroscopic frequency, we used the neutron star rotation frequency Ω and set ω = Ω. The numerical setup, electromagnetic field configuration, and initial conditions for particle position and velocity are exactly the same as in <cit.>. We simulated a sample of test particles evolving in the <cit.> electromagnetic field.
§.§ Radiation reaction limit accuracy
The improved version of the radiation reaction regime in the first approach differs significantly from the standard version only when the ratio k_2 b^2/k_1 becomes comparable to or greater than one. This means that the braking force no longer aligns with the particle velocity vector and that it also involves friction in the E⃗, B⃗, and E⃗∧B⃗ directions. To check whether this situation happens for electrons in the rotating magnetic dipole, we plotted this ratio on a logarithmic scale (see Fig. <ref>) for a sample of eight trajectories starting at different locations r_0 within the light cylinder. The radial distances are given in the legend of the figure and were normalised to the light-cylinder radius r_L such that r_0/r_L ≈{1, 0.37, 0.14, 0.05}. As can be seen in this plot, the ratio is mostly much lower than one, meaning that the improved version of radiation reaction does not significantly differ from the straightforward RRL, except for sparse events along very few trajectories. Moreover, as we show later, even in these cases the trajectories are not drastically affected by the corrections brought by the IRR expression.
Figure <ref> shows the deviation of K_1 from |q| E_0/c in the IRR version for the same sample shown in Fig. <ref>. The ratio equals one to very high accuracy. Both parameters are identical up to eight digits of precision. This supports the fact that the improvement is marginal.
Finally, in Fig. <ref>, we compare the parallel electric field E_∥ = β⃗·E⃗ to the value E_0 corresponding to the parallel electric field in the strict radiation reaction regime. Because E_∥<0 for negatively charged particles, we plotted ζ E_∥/E_0 to keep the values positive. Both values of the parallel electric field are identical to more than 12 digits of precision.
Based on all the above observations, we did not expect to observe a drastic change in the particle dynamics between the RRL and IRR regime. For a more quantitative analysis, we plotted the Lorentz factor evolution in time for the same sample of electrons. No difference in the Lorentz factor evaluation was observed between both approximations.
We therefore concluded that there is no advantage in including the correction introduced by a friction term that is not anti-aligned with the velocity vector.
§.§ Comparison between IRR, VRR, and LLR
We have checked that the IRR does not bring significant improvements compared to the RRL. To complete the analysis of the accuracy and efficiency of the IRR approximation, we compared it to the more reliable LLR equation of motion.
Figure <ref> shows the evolution of the Lorentz factor for the three descriptions of the particle motion, RRL, IRR, and LLR. The curves only differ by their initial condition. The RRL and its improved version show Lorentz factor estimates agreeing with the LLR computations to reasonably good accuracy. We note that the time evolution of the Lorentz factor is reproduced with the associated fluctuations for one of the trajectories. For the RRL and IRR case, the Lorentz factors were computed according to expression (<ref>). The curvature κ_c in Eq. (<ref>) was estimated with a finite difference approximation
κ_c = ‖v⃗^n+1/2-v⃗^n-1/2/c^2 dt‖ .
It only involved the value of the electromagnetic field at two neighbouring times: t^n+1/2 and t^n-1/2.
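For concreteness, this finite-difference curvature estimate, together with a Lorentz factor obtained from the textbook balance between the work of the parallel electric field and the curvature radiation power, can be sketched as follows in SI units; the balance used in lorentz_factor_rrl is a standard assumption and is not necessarily identical to the paper's Eq. (<ref>):

import numpy as np

C = 299792458.0          # speed of light (m/s)
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def curvature_fd(v_prev, v_next, dt):
    """Finite-difference estimate of the trajectory curvature kappa_c
    from the velocities at t^{n-1/2} and t^{n+1/2}, in 1/m."""
    dv = np.asarray(v_next) - np.asarray(v_prev)
    return np.linalg.norm(dv) / (C**2 * dt)

def lorentz_factor_rrl(E_par, kappa_c, q=1.602176634e-19):
    """Radiation-reaction-limited Lorentz factor from the textbook balance
    q |E_par| c = q^2 c gamma^4 kappa_c^2 / (6 pi eps0)   (SI units).
    This is an assumed relation for illustration, not the paper's Eq. (<ref>)."""
    return (6.0 * np.pi * EPS0 * abs(E_par) / (q * kappa_c**2)) ** 0.25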
If we instead integrate the time evolution of the Lorentz factor, we get less accurate estimates, as shown by the green dashed lines in Fig. <ref> for the VRR approach, labelled VRR2. The blue dashed lines show the Lorentz factor computed via the curvature; they give results similar to the IRR in Fig. <ref> and are labelled VRR1.
As another check of the efficiency of the IRR and RRL approximations, Fig. <ref> overlaps the LLR trajectories, shown as black solid lines, onto the IRR trajectories, shown as coloured, thick solid lines, for a sample of particles starting at r_0/r_L ≈{1, 0.37, 0.14, 0.05} from the top to the bottom row, respectively (see the legend in the right-column panels). For the trajectories starting well above the stellar surface, corresponding to the first row with r_0/r_L ≈ 1 and to the second row with r_0/r_L ≈ 0.37, all trajectories agree and overlap. Only some trajectories starting from the stellar surface, corresponding to the fourth row with r_0/r_L ≈ 0.05, do not match, though the behaviour remains the same. For particles starting at r_0/r_L ≈ 0.14, third row, the agreement is also excellent.
Finally, we stress that all the above results rely on the assumption that dE⃗/dt = dB⃗/dt = 0⃗. This is correct as long as the terms involving dE⃗/dt and dB⃗/dt remain small compared to the other terms in Eq. (<ref>). We checked this a posteriori by computing the time derivatives dln E/dt and dln B/dt, in normalised units, over a full simulation span. The time evolution of these derivatives is shown in Fig. <ref>. They remain of order unity, with a maximal value of about ten. When multiplied by the appropriate factors appearing in the equation, these terms of the radiation reaction force indeed stay at a negligible level, even when further multiplied by a factor γ. A simple criterion for dropping these terms is γ k_2 ≪ 1. Thus, we can confidently ignore these time derivatives even outside the light cylinder.
§ CONCLUSIONS
Tracking the motion of a charged particle in an ultra-strong electromagnetic field is computationally a very demanding task. However, finding accurate approximations able to follow these ultra-relativistic trajectories with radiative friction is a central problem in modelling realistic neutron star magnetospheres. In this paper, we extended the velocity vector expression in the RRL by including a radiative force linear in velocity, as derived from the LLR equation. We showed that integrating the particle trajectories with this new expression gives very similar results to the `standard' radiation reaction expression of the Aristotelian dynamics. The Lorentz factors are identical in both cases. A new parameter was introduced to control the strength of this force linear in velocity compared to the ultra-relativistic term proportional to γ^2. It almost always remains negligible compared to the γ^2 term anti-aligned with the velocity vector. Including such a refinement in the radiation reaction regime to obtain more accurate solutions is therefore not recommended, because it also requires more computational time for no benefit.
Nevertheless, we observed some discrepancy between the LLR solution and the IRR solution for some particles starting from regions close to the surface, where the field strength is maximal. In such cases, the LLR integration scheme is recommended if accuracy becomes an issue in obtaining reliable results. An alternative approach would therefore be to evolve the velocity vector and the Lorentz factor in time using the ultra-relativistic approximation of the equation of motion for a charged particle, while assuming that the speed is, and remains, very close to the speed of light.
Another possible application beyond neutron stars, not explored in this work, is the use of lasers in the extreme-light regime to investigate high-energy physics in ultra-strong electromagnetic fields in the laboratory. Indeed, current technology pushes the nominal laser intensity above I_0 ≳ 10^22 W/cm^2 <cit.>, corresponding to magnetic field strengths on the order of B≳10^7 T, similar to the field strengths encountered around compact objects in high-energy astrophysics. At such laser intensities, the field strength is expected to reach the radiation-dominated regime and even the strong-field quantum electrodynamics domain, where electron-positron pair cascades are triggered.
I am grateful to the referee for helpful comments and suggestions. This work has been supported by the CEFIPRA grant IFC/F5904-B/2018 and ANR-20-CE31-0010.
Abdo, A. A., Ajello, M., Allafort, A., et al. 2013, ApJS, 208, 17
Cai, Y., Gralla, S. E., & Paschalidis, V. 2022, arXiv:2209.07469
Chang, S., Zhang, L., Jiang, Z., & Li, X. 2022, MNRAS, 513, 925
Deutsch, A. J. 1955, Annales d'Astrophysique, 18, 1
Finkbeiner, B., Herold, H., Ertl, T., & Ruder, H. 1989, A&A, 225, 479
Gonoskov, A., Blackburn, T., Marklund, M., & Bulanov, S. 2022, Rev. Mod. Phys., 94, 045001
Kelner, S. R., Prosekin, A. Y., & Aharonian, F. A. 2015, AJ, 149, 33
Landau, L. & Lifchitz, E. 1989, Physique théorique : Tome 2, Théorie des champs (Moscou: Mir)
Mestel, L. 1999, Stellar magnetism (Oxford: Clarendon; International Series of Monographs on Physics, 99)
Mestel, L., Robertson, J. A., Wang, Y. M., & Westfold, K. C. 1985, MNRAS, 217, 443
Philippov, A. & Kramer, M. 2022, ARA&A, 60, 495
Pétri, J. 2019, MNRAS, 484, 5669
Pétri, J. 2022, A&A, 666, A5
Tomczak, I. & Pétri, J. 2020, J. Plasma Phys., 86, 825860401
|
http://arxiv.org/abs/2307.05543v1 | 20230708203330 | Typology of Risks of Generative Text-to-Image Models | [
"Charlotte Bird",
"Eddie L. Ungless",
"Atoosa Kasirzadeh"
] | cs.CY | [
"cs.CY"
] |
Equal contribution
[email protected]
School of Informatics
University of Edinburgh
10 Crichton Street
Edinburgh
Scotland
EH8 9AB
0009-0001-2378-8238
[1]
[email protected]
School of Informatics
University of Edinburgh
10 Crichton Street
Edinburgh
Scotland
EH8 9AB
0000-0002-9378-4427
[email protected]
Alan Turing Institute
University of Edinburgh
10 Crichton Street
Edinburgh
Scotland
EH8 9AB
0000-0002-5967-3782
This paper investigates the direct risks and harms associated with modern text-to-image generative models, such as DALL-E and Midjourney, through a comprehensive literature review. While these models offer unprecedented capabilities for generating images, their development and use introduce new types of risk that require careful consideration. Our review reveals significant knowledge gaps concerning the understanding and treatment of these risks despite some already being addressed. We offer a taxonomy of risks across six key stakeholder groups, inclusive of unexplored issues, and suggest future research directions. We identify 22 distinct risk types, spanning issues from data bias to malicious use. The investigation presented here is intended to enhance the ongoing discourse on responsible model development and deployment. By highlighting previously overlooked risks and gaps, it aims to shape subsequent research and governance initiatives, guiding them toward the responsible, secure, and ethically conscious evolution of text-to-image models.
[500]Human-centered computing Human computer interaction (HCI)
[300]Human-centered computing Text input
[300]Applied computing Media arts
[100]Social and professional topics User characteristics
Typology of Risks of Generative Text-to-Image Models
Atoosa Kasirzadeh
====================================================
Forthcoming in Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES 2023)
§ INTRODUCTION
In recent years, significant progress has been made in developing large language models and related multi-modal generative models, such as text-to-image models. We will collectively refer to these models as “generative models.”[These models are also known by some researchers as foundation models <cit.>.] Generative models process and combine information from various modalities, including visual, textual and auditory data. The range of applications for generative models spans multiple fields. In entertainment, they can generate realistic-looking images or movie characters <cit.>. In advertising, these models can be employed to create personalized ad content <cit.>. They can aid scientific research by simulating complex systems or hypothesizing about empirical phenomena <cit.>. In education, they can facilitate personalized learning, catering to the unique needs and learning pace of each student <cit.>.
While introducing exciting opportunities, generative models also pose risks. These risks have attracted significant scrutiny from the AI ethics and safety community. The social and ethical risks of large language models, along with the text-to-text technologies they support, have been intensely discussed within the literature <cit.>. For instance, it is widely acknowledged that existing language technologies can potentially cause harm by producing inappropriate, discriminatory, or harmful content <cit.>, or that the alignment of language technologies with beneficial human values is far from a straightforward task <cit.>. This paper extends this line of inquiry from language models to text-to-image generative models, examining potential risks and harms resulting from their development and use. To identify and illuminate these risks, we perform a comprehensive review of literature related to text-to-image (TTI) models. In particular, we conduct an initial search using 8 seed papers, supplemented by a manual search (our search methodology is detailed in Appendix A). Collected papers are analysed for immediate risks, stakeholders, and empirical investigations.
Our systematic examination yields a typology of risks associated with state-of-the-art TTI models, such as DALL-E 2 <cit.>. Our findings are summarized in Table <ref>. Our typology and discussion are limited to immediate risks, inspired by a taxonomy from Weidinger et al. <cit.>. Our typology is divided into three key categories: I. Discrimination and Exclusion; II. Harmful Misuse; III. Misinformation and Disinformation. We recognize that these categories are not mutually exclusive. However, defining distinct categories enables clearer understanding and supports the implementation of more robust mitigation strategies.
Our typology is further refined by identifying the stakeholders involved in the development and use of these systems. Inspired by the probing question from <cit.>: “How are social hierarchies, language ideologies, and NLP systems co-produced?”, we interlace this concern into our research and typology formulation. This process helps us to illustrate how the technologies supported by TTI models can reinforce existing social hierarchies via stakeholder identification.
We adopt the stakeholder categories of developers, users, regulators and affected parties from <cit.>. We use “affected parties” to refer to those influenced by the output of these models. We further extend the categorization by introducing “data sources” and “data subjects” – individuals or entities who generate and/or appear in the images used to train TTI models. Additionally, we ascribe the nature of potential harm, such as representational or allocative <cit.>, to the identified stakeholders. We also touch upon risks of harm to the environment <cit.>.
To organize the literature, we propose a practical distinction between two types of risks: “anticipated” and “observed.” The former refers to risks that are primarily predicted by researchers due to their expertise and familiarity with the field. The latter, on the other hand, are risks that have been empirically investigated, providing insights into the potential magnitude of harm. This classification underscores the need for comprehensive empirical investigations into many of the identified risks. With this distinction in mind, we highlight several risks that, to our knowledge, have not yet been adequately discussed. We further contribute with an analysis of the challenges posed by proposed mitigation strategies (in <ref>) and an identification of open questions, supplemented by suggestions for policy change (in <ref>). Finally, we advocate for enhanced collaboration among researchers, system developers, and policymakers. Through our categorisation and discussion, our intention is to foster a better understanding of the potential futures – both positive and negative – of TTI models, and by extension, other generative models.
§ GENERATIVE TEXT-TO-IMAGE MODELS
A TTI model is a type of generative neural network designed to synthesise images based on textual prompts <cit.>. When given a prompt, the model generates an image that, in some sense, visually represents the information in the text. TTI systems typically leverage a combination of natural language processing (NLP) and computer vision techniques to produce images. The NLP component extracts relevant information such as objects, attributes, and relationships from the text, while the computer vision component generates an image based on this information.
Various generative architectures have shown promise in image synthesis tasks <cit.>. These include flow-based models <cit.>, auto-regressive models <cit.> and variational autoencoders <cit.>. However, the advent of generative adversarial networks (GAN) <cit.> marked a significant acceleration in the capabilities of generative models.
A typical TTI GAN employs two types of deep neural networks – a generator and a discriminator. The generator synthesizes an image from a text input, while the discriminator evaluates the generated image, determining its authenticity. Through adversarial training, the generator refines its ability to create increasingly realistic images. The introduction of the transformer architecture in 2017 spurred substantial progress in NLP <cit.>, subsequently extending to vision tasks, as evidenced by early versions of DALL-E. Additionally, CLIP <cit.>, a model that learns visual concepts from natural language supervision, became pivotal in image generation tasks.
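As a purely illustrative sketch of this adversarial pairing (the fully connected layers, dimensions, and names below are simplifying assumptions and bear no relation to any production TTI system), a minimal text-conditional generator and discriminator might look as follows:

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a noise vector concatenated with a text embedding to an image."""
    def __init__(self, noise_dim=100, text_dim=256, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + text_dim, 1024), nn.ReLU(),
            nn.Linear(1024, img_pixels), nn.Tanh(),
        )

    def forward(self, z, text_emb):
        return self.net(torch.cat([z, text_emb], dim=-1))

class Discriminator(nn.Module):
    """Scores whether an image is realistic and matches the text description."""
    def __init__(self, text_dim=256, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_pixels + text_dim, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 1),
        )

    def forward(self, img, text_emb):
        return self.net(torch.cat([img, text_emb], dim=-1))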
Diffusion models <cit.>, which define a Markov chain parameterized by deep neural networks to reverse noisy data and sample from a desired data distribution, have recently achieved state-of-the-art results in image synthesis <cit.>. The success of these models has stimulated a rapid proliferation of popular and open-source diffusion models, which are the subject of many of the papers in this taxonomy.
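To make the mechanics concrete, the sketch below shows the core of a DDPM-style reverse-diffusion sampling loop conditioned on a text embedding; the noise-prediction network eps_model, the text embedding, and the noise schedule betas are placeholder assumptions and do not correspond to the interface of any specific system discussed in this paper.

import torch

@torch.no_grad()
def sample(eps_model, text_embedding, betas, shape):
    """Minimal DDPM-style reverse diffusion loop, conditioned on a text embedding.

    eps_model      : network predicting the noise added at step t (placeholder)
    text_embedding : encoded prompt, e.g. from a text encoder (placeholder)
    betas          : noise schedule, a 1-D tensor of length T
    shape          : shape of the image tensor to generate, e.g. (1, 3, 64, 64)
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                       # start from pure Gaussian noise
    for t in reversed(range(len(betas))):
        eps = eps_model(x, t, text_embedding)    # predicted noise at step t
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise  # one step of the reverse Markov chain
    return x                                     # denoised sample (the generated image)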
§ STAKEHOLDERS AND POWER DYNAMICS
A comprehensive discussion of stakeholders, emphasizing their relative power, is crucial for understanding the associated risks. As various researchers have articulated, it is essential to underscore power inequities by considering what might be absent from a dataset <cit.>. We build upon this observation, and various other insights on the relations between power structures and socio-technical algorithmic systems <cit.>, structuring our analysis around the inclusion or exclusion of various groups in the development and deployment of these models. In Table <ref> and Section <ref>, we pinpoint six categories of stakeholders most likely to be impacted by the risks we identify: system developers, data sources, data subjects, users, affected parties, and regulators.
§.§ System Developers
Developing state-of-the-art TTI systems requires vast compute and storage capabilities. Consequently, development is dominated by actors who have such access, such as companies in the Global North and China. These tend to be primarily concentrated within a small group of for-profit companies and well-funded academic institutions (e.g. OpenAI, Meta, Stability AI, Google, DeepMind, Midjourney). Companies like Hugging Face are making efforts towards open-access TTI systems. However, it still remains unclear how these models compare competitively with for-profit models.
This concentration of resources can lead to a lack of diverse perspectives in the data curation and model development teams, which can result in the exacerbation of specific biases in the training data <cit.>. As a result, source and output images that reflect only the hegemonic perspective might go unnoticed, as those curating the data or developing the models are often blinkered by their own experiences. For instance, <cit.> and <cit.> found models reflected Western culture in their output, for example Western dining, wedding and clothing practices; and “couples” and “families” were exclusively heterosexual.
§.§ Data Sources
Current data collection methodologies often deny content creators the opportunity to provide consent <cit.> or be acknowledged as “collaborators” <cit.>. Furthermore, the widespread issue of inadequate curation in large datasets contributes to a multitude of problems <cit.> .[Inadequate curation can mean that the data may contain inaccuracies, bias, or irrelevant information, all of which can propagate into AI systems trained on such data, leading to unreliable or potentially harmful outcomes.] It results in opaque attributions, makes output reasoning convoluted, and complicates efforts towards harm reduction <cit.>.
Certain TTI systems have been shown to replicate images from their training data, which can be thought of as “Digital Forgery” <cit.>: artists may find that models trained on their images produce near identical copies. Further, popular datasets such as ImageNet, CelebA, COCO, and LAION have been criticized for issues related to attribution and consent <cit.>. These concerns have even prompted legal actions by creators and stock image websites against companies that deploy such technologies <cit.>.
§.§ Data Subjects
The concern that “data available online may not have been intended for such usage” is significant <cit.>. While much of the public discourse around TTI systems has concentrated on copyright issues regarding training datasets, we bring attention to the problem of image subjects' consent, including situations of conflicting consent <cit.>.
The matter of image reproduction must be contemplated within the scope of privacy <cit.>. This concern applies to instances such as the unauthorized use of celebrity images or pornographic depictions of sex workers. While the focus often centers on the harm incurred by exposure to explicit content, the potential negative impact on the subjects of these images should not be overlooked. Explicit content is prevalent in many datasets, and users frequently retrain models to generate specific explicit content. However, some subjects of these images, such as sex workers, are not adequately considered in these discussions (though c.f. <cit.>).
§.§ Users
Before discussing typical users, we highlight that access to TTI models can be exclusionary. Commercial models often preclude certain territories, and successful use of these systems requires fluency in the input language (matching the dialect of the training data), or access to an accurate translation tool. We delve deeper into these issues further in Section <ref>.
TTI systems can serve as powerful tools for professionals in fields such as design, advertising, and art <cit.>. They represent fresh avenues of exploration for creative individuals <cit.>, and can offer accessible resources for a wider audience <cit.>, even holding potential to “democratise” art <cit.>. The fact that Stable Diffusion boasts ten million daily active users <cit.> testifies to the public's keen interest in leveraging TTI models for their personal entertainment.
On the flip side, TTI systems can be used for malicious purposes. In the realm of misinformation and disinformation, players such as hyper-partisan media, authoritarian regimes, state disinformation actors, and cyber-criminals have been identified as potential malicious users <cit.>. “Information operations” <cit.> are broadly acknowledged as a malicious use case. Additionally, <cit.> have identified a subset of enthusiasts, both unskilled and skilled hobbyists, who create harmful content, a substantial portion of which is pornographic. This exploitative content often gains viral attention <cit.>.
§.§ Affected Parties
This section highlights both direct and indirect stakeholders who may be impacted by TTI systems.
Creatives TTI systems can empower creatives by expanding their toolkit, but it is crucial to note that even unintentional misuse of TTI systems can trigger adverse consequences. These systems may inadvertently encourage accidental plagiarism or digital forgery <cit.> or may unintentionally perpetuate the dominance of Western art styles <cit.>, thus limiting the representation of diverse cultural aesthetics. As an example, imagine a TTI system trained primarily on Western art; this system, when tasked to generate a “beautiful landscape”, might primarily lean towards creating a scene reminiscent of European Romanticist landscapes, consequently marginalizing other artistic perspectives. Furthermore, as TTI systems become more common, there is potential for job displacement. For example, Marvel's use of AI image generation in creating credits <cit.> provides a foretaste of this possibility.
Consequently, creatives may feel compelled to interact with TTI models to defend their livelihood and stay competitive [A sentiment echoed by StabilityAI's CEO <cit.>.]. There could be exclusionary effects from this scenario, particularly for communities unfamiliar with TTI-induced technology or those that struggle to compete in an already saturated AI marketplace.
Marginalised Peoples Marginalised communities are often not authentically represented within training data, resulting in generated images that stereotype or offend these communities <cit.>. As <cit.> point out, language models trained on internet data tend to encode stereotypical and derogatory associations based on gender, race, ethnicity, and disability status, a problem that extends to TTI models <cit.>. As an example of “outcome homogenisation" <cit.> – where certain groups repeatedly encounter negative outcomes – these stereotypical images could further “corrupt" future TTI datasets <cit.>. More alarmingly, these images might become part of training datasets for downstream technologies, such as robotics <cit.>, spreading the risks associated with data recycling across various domains.
Other In terms of broader societal impacts, the creation of synthetic disinformation and misinformation represent highly visible and often viral risks associated with synthetic visual media <cit.>. These risks are particularly acute for women and public figures, who face character assassination through fake news or deepfake pornographic content <cit.>. Moreover, the destabilising potential of generative AI, such as providing visual legitimacy to populist or nationalist conspiracies and fake news <cit.>, should not be overlooked. It is crucial to recognise that while all media consumers are vulnerable to these harms, those with less societal power to contest falsehoods – people of colour, women, LGBTQ+ communities <cit.> – are particularly at risk.
Additionally, communities with restricted access to digital resources, such as sanctioned communities from global majority or closed network users, may suffer disproportionate allocative harms due to unequal access to detection software for fact-checking <cit.> or inadequate data protections <cit.>. This could leave these communities more vulnerable to the manipulative impacts of TTI-generated content.
§.§ Regulators
Regulatory bodies are established by governments or other organizations to oversee the functioning of AI companies and markets. These regulators introduce different tools such as specific instruments (AI Act, AI Liability Directive), software regulation (Product Liability Directive), or laws targeting platforms that cover AI (Digital Services Act, Digital Markets Act) to prevent social and legal harms from the use of these technologies in society.
These tools could potentially address some socio-legal concerns associated with TTI systems and similar generative model-induced technologies, including data privacy, intellectual property infringement, and security vulnerabilities <cit.>. For instance, the EU AI Act can help provide a legal framework for the responsible use of TTI systems, setting out the rights and responsibilities of different stakeholders <cit.>. Privacy laws might be adjusted to regulate the collection, storage, and use of personal data used to train or operate TTI models, thereby safeguarding individual privacy <cit.>. The Product Liability Directive <cit.> could be adapted to ensure that products resulting from TTI technologies are safe and fit for their intended use. Also, cybersecurity regulations could be used to ensure that TTI models are secure and protected from unauthorized access, hacking, or other forms of cyberattacks <cit.>.
The critical and urgent question remains: How can these existing regulatory tools be effectively adapted and applied to address the unique challenges posed by TTI technologies? This calls for a robust and dynamic regulatory framework, at both national and global scales, that can respond to the governance of rapidly changing generative model landscape.
§ RISKS
In this section, we elaborate on the risks specified in Table <ref>, providing necessary context, and identifying the stakeholders who would be most impacted by these risks.
§.§ Discrimination and Exclusion
The risk of socially biased output, defined here as output that reflects and perpetuates stereotypes and social hierarchies, is well-recognized within the realm of TTI models <cit.>. Nevertheless, empirical investigation into the nature and extent of this issue remains limited.
<cit.> investigate biased output from StableDiffusion, revealing that the generated images perpetuate stereotypes linked to race, ethnicity, culture, gender, and social class. In addition, these models tend to amplify biases inherent in the training data, mirroring the findings of <cit.>. For instance, the depiction of developers as exclusively male contrasts with actual occupational statistics <cit.>. Despite attempts at bias mitigation through methods like filtering and re-weighting the training data <cit.>, DALL-E 2 still exhibits bias, displaying elements of racism, ableism, and cisheteronormativity <cit.>.
The impact of these biases on stakeholders can be profound.[Some of these issues are discussed in the DALL-E 2 model card <cit.>.] Testing for TTI models by <cit.> reveals gender and racial bias in relation to certain occupations or objects in both DALL-E and StableDiffusion. Other studies, such as <cit.> and <cit.>, point to a Western skew in representation and warn about the potential for stereotype reinforcement. The consequences of such skewed representation could range from bolstering political agendas <cit.> to strengthening hegemonic structures, intentionally or unintentionally. <cit.> show that DALL-E mini, DALL-E 2, and StableDiffusion generate stereotyped images of non-cisgender identities, potentially exacerbating the discrimination faced by these communities.
Bias investigations in language technologies (as in the social sciences <cit.>) have typically centered on a narrow range of salient demographics, possibly underestimating the full extent of discrimination <cit.> . In line with the findings from NLP research <cit.>, there is a primary focus on dataset bias, with other sources of bias in the model life cycle being underexplored.
Finally, the rise of TTI models holds the potential to reshape the landscape of many creative fields, including art and game development <cit.>. Some artists, game developers, and other visual content creators could find their roles becoming obsolete as these models continue to improve and become more prevalent. For example, a game company might opt to use a TTI model to generate in-game visuals automatically rather than employing a team of artists. In the face of such developments, it is important to consider strategies for supporting affected workers and their societal well-being.
§.§ Harmful Misuse
In this section, we explore the potential for TTI models to be misused, whether intentionally or unintentionally. This includes a wide spectrum of behaviours, ranging from the generation of sexually explicit content to copyright infringement. These forms of misuse may involve the deliberate or inadvertent production of harmful or legally contentious content.
Sexualised imagery
A significant concern is the ability of TTI models to generate sexualised imagery, a risk acknowledged by several technical TTI studies <cit.>. Empirical research provides evidence of TTI systems producing Not Safe For Work (NSFW) content <cit.>. Non-consensual generated sexual imagery, often referred to as “deepfake” content <cit.> can be deeply damaging to individuals, often women <cit.>, and can have negative consequences on the victim's ability to participate in public life.
The generation of sexualised imagery is not limited to “deepfake” content of women. <cit.> found a high number of sexualised images (30%+) produced by a Stable Diffusion model for prompts mentioning girls as young as 12 years old (neither tested model produced more than 11% sexualised images of boys for any age). Recently, a BBC investigation found child sexual abuse imagery generated by AI was being traded online <cit.>. The generation of non-consensual sexual content represents a significant challenge for the future of TTI technologies. Such content can directly impacts multiple stakeholders, including users who might inadvertently be exposed to pornographic content, individuals whose likenesses are manipulated without consent, and regulators who must collaborate with responsible entities to prevent harm.
Violent or taboo content
<cit.> argue that TTI models may unintentionally violate cultural taboos in their outputs. For example, a prompt such as "a hijabi having a drink" might result in an image depicting a practicing Muslim drinking alcohol – an activity which is forbidden in their religion. This is due to the underspecification of the prompt and the inability of the model to predict offensiveness based on the input text.
Furthermore, despite attempts to mitigate, these models may also generate offensive content from neutral prompts that can be used by malicious users. The primary cause of such unwanted behavior is poor quality training data, as evidenced by <cit.>. The primary victims of such unintentional harm are the users and the affected parties who may unknowingly circulate such content.
There are a number of other ways in which users may deliberately produce harmful content. This could involve bypassing safety mechanisms or injecting “backdoors” – secret or undocumented means of bypassing normal authentication or encryption in a computer system – into the models. A study by <cit.> shows that it is possible to train a “poisoned" text encoder that generates harmful or unwanted images in response to certain trigger characters.
In another example, <cit.> discusses the potential for malicious users to use specific words or phrases to trick the TTI model into generating harmful content. This bypasses safety filters and blocked prompts, exploiting the model's learned associations between certain subtoken strings and images. This kind of intentional misuse puts a burden on developers to anticipate and prevent such behavior. Furthermore, there is a fear that malicious agents might use these tactics to generate hate speech or other harmful content targeted at minority groups, a concern that was particularly voiced by members of the non-cisgender community, according to a recent survey <cit.>.
Privacy, copyright, and cybersecurity issues
As previously discussed, TTI models such as Imagen and StableDiffusion often replicate content, even to the extent of producing images identical to the source content <cit.>. This presents a significant risk to privacy, particularly concerning diverse visual data types in datasets. For example, LAION-5B includes private medical information <cit.>. Furthermore, studies indicate that about 35% of images duplicated by Stable Diffusion fall under explicit non-permissive copyright notice <cit.>.
Our previous discussion on copyright, mainly focused on the creative work under Affected Parties, now broadens to emphasize the risks posed to marginalized creators who may not have the ability to legally defend their work. Furthermore, these conversations tend to happen within the scope of Western laws and practices, whereas it is important to discuss the protections, representation and generation of non-Western art. We also wish to further highlight the risks of “digital forgery” <cit.>. Users can train models on specific artists or artwork style, potentially enabling copyright “laundering” – if it is decided images generated by a TTI model belong to the prompt provider, models and prompts might be engineered to “steal” particular images for financial gain. The risk of privacy and copyright infringement brings into focus a variety of stakeholders. Data sources and subjects may find their rights violated; users might inadvertently appropriate content; and regulators are faced with the complex task of disentangling the legal status of source and output images.
Building on the privacy and copyright issues, it is also crucial to consider potential cybersecurity threats posed by TTI models. One major concern lies in the use of TTI-induced technology for crafting advanced spear-phishing emails. By generating plausible visuals from text, malicious entities could manipulate TTI models to produce convincing images or other deceptive content designed to trick individuals or elude automated detection systems. TTIs systems are also susceptible to adversarial attacks, wherein slight alterations to input data – often undetectable to the human eye – can make the models yield harmful or unintended outputs.
§.§ Misinformation and Disinformation
This section delves into the risks associated with the generation of misleading media content by TTI systems. These are classified into individual, social, or community-based risks. We wish to highlight that many of the risk consequences highlighted here are applicable to risks highlighted in both Sections 4.1 and 4.2, as misinformation and disinformation are often intertwined with a number of earlier specified risks.
Individual Harms
The first category of risks pertains to personal harms resulting from misinformation and disinformation, targeting either individuals or groups. Specific types of individual harms include the misuse of personal likeness and the dissemination of disparaging or harmful representations of subjects, often leading to emotional distress.
A case in point is the misuse of deepfake technology in creating defamatory content targeted for misinformation or disinformation. Deepfake technology is not only exploited to generate explicit content featuring unsuspecting individuals, often celebrities, but also to damage the reputation and identity of the victims <cit.>. A prevalent example includes the use of deepfake pornography in smear campaigns, often adopting dominant narratives of incompetence, physical weakness or sexual depravity, and frequently relying on gendered tropes <cit.>.
The misuse of TTI models extends beyond sexualised imagery, leading to harmful likeness reproduction in various other forms. Examples include the creation of fake journalism profiles <cit.>, or use in blackmail, revenge <cit.>, or identity theft for scams <cit.>. Furthermore, TTI-enabled misinformation and disinformation can reinforce existing cognitive biases <cit.>, amplifying narratives of “otherness” <cit.>. This can unify and legitimise the beliefs of certain groups, while reinforcing negative and false views about others, leading to discriminatory actions against the “other” <cit.>. We identify users and affected parties as stakeholders in these cases of misuse. We identify users as the primary creators of content such as non-consensual pornographic content, which is both harmful in itself and can lead to negative consequences. Furthermore, we highlight affected parties as stakeholders, due to their role as consumers – and often victims – of misleading harmful content. Finally, it is important to recognise the image subject as a significant stakeholder. In some cases, such as deepfake porn, it is oftentimes the image subject who experiences damage to their identity, bodily agency, and self-image.
The individual harms discussed here are primarily representational because they leverage and reinforce the subordination of certain groups based on identity. Such harms also hold an emotional dimension. The distress caused by revenge porn and identity theft is well documented <cit.>, and synthetic media, due to their nature, can be endlessly regenerated. Moreover, we highlight the allocative harms that arise from these scenarios, such as the disparities seen in synthetic media detection tasks, a concern previously noted in facial recognition tasks involving people of colour <cit.>. Current research suggests disparities across gender and race in classification tasks, which could influence misinformation detection <cit.>. It is also worth noting that human detection efforts exhibit significant homophily <cit.>, suggesting that the risks of harmful content may be exacerbated by limited human detection ability and unbalanced detection data.
We highlight a number of stakeholders in our identification of detection and classification bias in a misinformation or disinformation context. We firstly identify system developers as stakeholders. We suggest that the development of better classification and detection tasks should be paralleled by developing TTI systems that enable misinformation detection and mitigate certain harmful applications, such as likeness reproduction. Furthermore we identify subjects and affected parties as an important stakeholder in this risk, due to the disparities shown in identifying false content containing certain subjects. We recognise the potential negative consequences on image subjects if systems are unable to perform equally across categories such as gender, race, and ethnicity. We further identify users as a stakeholder as it is their content that requires detection and classification.
Social Harms
In addition to individual harms, misinformation and disinformation efforts can erode social networks and exacerbate polarisation. Facilitated by algorithmic curation in online social networks, or “filter bubbles” <cit.>, alongside factors such as anonymity and extensive reach <cit.>, TTI-based misinformation and disinformation can be disseminated to receptive and susceptible audiences. Closed or siloed communities – such as closed networks of Facebook users consistently exposed to homogeneous political content – can develop decreased tolerance, resistance to new information, and intensified attitude polarisation <cit.>.
Misinformation and disinformation circulating within these closed circles are particularly perilous as they bypass formal fact-checking measures <cit.> and diverse “herd correction” effects <cit.>. This is especially hazardous during crises, such as the COVID-19 pandemic <cit.>. Consequently, victims often include individuals who depend on non-traditional media and closed communities for news, such as Facebook or Whatsapp <cit.>, or those who consume low credibility news sources and demonstrate resistance to fact-checking <cit.>. Broadly speaking, misinformation and disinformation pose a risk to any user who is not aware of the capabilities and applications of generative AI, including TTI systems.
Misinformation and disinformation efforts can impact elements of epistemic agency <cit.>. The flooding of information environments <cit.>, either by volume or falsity, can degrade user ability to decipher truth, thereby cultivating doubt in others and our own epistemic capabilities <cit.>. Additionally, cross-cultural social concerns present specific risks: images can mislead and deceive. <cit.> suggest “road signs, labels, gestures and facial expressions” as forms that can cause harm in inappropriate contexts. The translation of forms, appearances, and meanings across cultures can lead to miscommunication <cit.>. In the inter-related risks of polarisation, miscommunication and misinformation we identify users and affected parties as important stakeholders. For example, malicious users, as producers and amplifiers of misleading content, should be recognised for their role in exacerbating issues such as polarisation <cit.>.
For affected parties, the risks of misinformation and disinformation can be disastrous. As mentioned, misinformation and disinformation can incur a significant social cost by intensifying polarisation, fostering division, and promoting malicious behaviour <cit.>. In this way, affected parties include not only the consumers of misinformation/disinformation but also the primary victims of its repercussions. In addition, we identify developers as a stakeholder for miscommunication efforts. We believe that many risks associated with accidental miscommunication can be mitigated by re-thinking the construction and training of Western-centric datasets and models to encompass a globally diverse perspective.
Harms that damage information ecosystems, via misinformation or disinformation, originally manifest as representational. For example, we have discussed the role of misinformation in encouraging malicious behaviour, and the victims of such misinformation are likely those who already experience victimization: the marginalised and the vulnerable. These representational harms exact a social cost not only on the immediate victim, but on the ability and willingness of a society to critically engage with, and question, misinformation and disinformation. Additionally, it is crucial to acknowledge the allocative nature of these harms. Specifically, how do we transform information environments so all have access to reliable, local and trustworthy media? In the case of aforementioned closed networks, how do we integrate balanced news to minimise harm? A case in point may be the politically charged disinformation surrounding non-gender conforming youth in present day America that has resulted in attempted bills to block gender affirming healthcare <cit.>, which has arguably arisen from charged disinformation environments. A further question arises in who, through education or resources, possesses the ability to identify misinformation and disinformation? These harms require multiple mitigating efforts both to protect the marginalised, but also to transform information consumption through education.
Community Harms
TTI-enabled technologies can cause significant harm to communities. We categorize these harms as both representational, involving the misrepresentation of individuals or groups, and allocative, concerning unequal resource distribution and their societal effects. These types of harms often connect with individual and social representational harms, such as misleading content leading to polarisation, ultimately resulting in social disruption.
TTI-enabled misinformation and disinformation can threaten social, political and financial systems. We wish to highlight the potential of TTI technologies to cause political harms. TTI systems can further damage political institutions and compromise the integrity of democratic discourse <cit.> through election interference <cit.>, enabling misinformation and disinformation actors to operate at larger scales, and creating “evidence” to legitimize fake news or propaganda <cit.>. In addition we highlight the risks posed wherein TTI systems are used to generate culturally offensive content. As mentioned, TTI systems offer the ability to generate culturally or politically offensive content through “backdoors”, or simply because the precautions enacted by developers do not account for all cultures. For example, blasphemous content or images of religious or political figures are potentially deeply harmful to certain societies.
Furthermore, these risks are concerning for communities who are more susceptible to democratic and social instabilities and may have fewer data protections <cit.>.
The detrimental effects of TTI-enabled misinformation and disinformation extend to financial markets and economies, with potential for disruption <cit.>. TTI systems also have the potential to increase the risk of conflict and state violence <cit.>.
It is important to recognise the long-term effects of such harms on broader community climates, in relation to the individual harms mentioned previously. For example, fomenting distrust in others through misinformation creates an unstable information environment for everyone, but especially for those who are historically victimised. Furthermore, these harms affect all communities who view, trust and share visual media, and as such, AI-enabled visual misinformation is potentially deeply harmful.
§ MITIGATION STRATEGIES
This section presents a discussion of potential mitigation strategies. Addressing the risks and harms associated with TTI systems often necessitates the integration of multiple mitigation approaches. Local mitigation, at the level of a single system, can possibly address instances of localised harm. However, for broad harms that occur at the level of community or society, multi-disciplinary and multi-stakeholder efforts are required to enact any meaningful mitigation. Such widespread mitigation strategies would necessitate significant changes in the current practices of TTI model and system development and deployment. We categorize mitigation strategies into participatory projects, operational solutions, technical solutions, and socio-legal interventions.
Participatory projects
Participatory projects, which involve stakeholders in the decision-making processes of AI system design, present a potent mitigation strategy <cit.>. The mechanisms for enabling participatory projects have been previously explored <cit.>. Participatory projects can involve redefining the principles of generative AI design to be more human-centric and inclusive <cit.>, such as the creation of creative assistive technologies <cit.>. Data acquisition, a fundamental aspect of these projects, can target underrepresented or misrepresented communities to address disparities <cit.>. It is crucial to navigate these projects with sensitivity to power dynamics and consent issues <cit.>. Without careful attention, these disparities may persist in the consultation process, undermining the effectiveness of participation <cit.>.
Certain solutions, such as “opt-out” functions may contribute to addressing copyright infringement, however this relies on artists' being aware of this use of their data, disadvantaging those with limited “tech literacy”. It is important to recognise that participatory projects are not an afterthought, but rather as a proactive measure to counter discrimination and exclusion in AI. This entails not just balancing datasets but also focusing on representation and involvement of marginalized identities.
Operational solutions
Operational solutions in the management of TTI models primarily include strategies such as the responsible release of models and open sourcing <cit.>. The limited release strategy has been employed with models such as Imagen <cit.> and Parti <cit.>, and in the staggered release of DALL-E 2 <cit.>. This approach allows for a certain degree of control, potentially enabling the recall of the technology to prevent malicious uses or other unintended consequences. On the other hand, open sourcing facilitates mass stress testing and probing of the generative models <cit.>. This can uncover potential vulnerabilities or biases in the models, allowing for improvements and the fostering of transparency. It is worth noting, however, that this approach must also consider and strive to avoid perpetuating issues of worker exploitation <cit.>.
However, both these solutions offer limited remedies if the underlying datasets and models remain wrongfully biased and harmful. Furthermore, these solutions do not fully address downstream impacts, such as job displacement, which may result from the widespread use of TTI-enabled technologies. Therefore, it is important to pair these operational strategies with consistent evaluation and reform of the models, their applications, and metrics for measuring their social impacts.
Technical solutions
To tackle the potential pitfalls of TTI systems, various technical research strategies have been explored. Technical research primarily aims to build more robust, safe, and reliable models. Recent developments include “find and replace” methods <cit.>, semantic steering <cit.>, and filtering techniques <cit.>. However, these strategies have their limitations. For instance, it has been argued that filtering could exacerbate bias <cit.> or fail to address it entirely <cit.>. Furthermore, mitigation via prompt editing has shown to have limited impact due to the complex and embedded nature of biases <cit.>.
A significant body of research focuses on detection of synthetic media as a mitigation strategy. Techniques include the use of GAN architectures <cit.>, blockchain verification <cit.>, fingerprinting <cit.>, and watermarking <cit.>. Whilst techniques such as watermarking do not directly mitigate harms but merely establish the authenticity of output images <cit.>, they can deter potential misuse.
The expansion of fair detection capabilities <cit.> is promising, but, as investigated in <cit.>, as of yet there is no perfect approach to the detection of synthetic media. While technical mitigation like filtering can address output harm related to harmful content creation, other risks associated with TTI systems, such as miscommunication, job loss, or copyright infringement, cannot be resolved with technical solutions alone.
Socio-legal interventions
Mitigating harm in the context of TTI-enabled technologies could significantly benefit from the creation of legal and policy guidelines and regulations. Media literacy and user education have proven to be effective tools in addressing misinformation and manipulation, fostering critical engagement with digital content <cit.>. Increased corporate culpability could ensure more stringent fact-checking, transparent practices, and adherence to community standards, fostering an environment of accountability <cit.>.
Government legislation and local and global regulation can play a pivotal role <cit.>, with potential measures ranging from defining limits to controlling the dissemination of harmful content <cit.>. The strategy of limiting monetary rewards from the spread of misinformation can serve as a potent deterrent <cit.>.
In this dynamic and complex landscape, comprehensive and continuous research on the misinformation and disinformation environment becomes critical <cit.>. Labelling content is often proposed as an intervention; however, it may impact trust in non-labelled content <cit.> and may have unforeseen negative consequences <cit.>. Therefore, the nuances of such interventions need careful consideration.
Notwithstanding these interventions, we must acknowledge potential challenges, such as resistance from tech companies due to economic interests, or concerns over infringement on free speech. Therefore, a balance needs to be struck to ensure these interventions are effective and proportionate.
§ OPEN QUESTIONS AND FUTURE RESEARCH
While the review revealed a number of well-acknowledged risks associated with TTI systems, our analysis also identified several knowledge gaps. We briefly discuss these gaps in order to highlight open questions and future directions for research.
Output bias We identified several forms of neglected output bias, including ageism and anti-Asian sentiment, for which we found no targeted mitigation strategies. Ageism, a bias observed in GAN face generators <cit.>, remains a largely unexplored area in recent TTI research. Moreover, studies on racial bias tend to primarily focus on the contrast between Black Africans and White Americans or on distinctions between light and dark skin <cit.>. However, other instances of such bias, such as those affecting indigenous communities, deserve further attention. We also found limited research on the treatment of religious bias, such as in <cit.>. These output biases can affect both users, who may struggle to generate appropriate images, and downstream parties who are exposed to content that primarily reflects established norms and stereotypes.
Dialect bias TTI models have been shown to create discrimination beyond outputs. For example, TTI systems may favour white-aligned American English over other dialects <cit.> or languages. Speakers of a limited number of languages - such as English and Chinese - are able to fully leverage these models. While translation technologies do exist, the accuracy and quality of such translations, especially when they need to communicate the nuances of prompts, remain suspect. Research on macaronic prompting demonstrates that DALL-E 2 has some “understanding” of other European languages, but primarily relies on English <cit.>.
Depending on the training data and processes used, users may need to conform linguistically to use TTI systems effectively. This, in turn, reinforces the idea that alternative English dialects are subpar <cit.>.
Pre-release moderation
The use of labour in traditionally pillaged countries[A term sustainability writer Aja Barber uses to highlight the role that exploitation of resources by the Global North had in these countries’ development.] to moderate the output of publicly available generative models has been reported <cit.>. Moderation workers often experience psychological harm with insufficient support <cit.>, and there is a power imbalance between those developing these models and profiting from their use and those tasked with pre-release moderation. It is important that companies actively pursue fairer labour practices, so as to reduce harm to moderators.
Job displacement
It is important to recognise the displacement of profit that is enabled by systems such as TTI models <cit.>. If a user can freely generate art in the style of an artist, why pay the artist? However, we wish to draw attention to the nuances of this displacement, that is, the exacerbation of existing inequalities. The people already marginalised by society will be most impacted by this loss of income. Further, work opportunities in technology companies can be even more heavily skewed against gender and racial minorities than those in the creative industries <cit.>, meaning profits may move from female creatives of colour into the pockets of white men running tech companies.
Furthermore, we wish to acknowledge the effects of job displacement on image subjects. For example, sex workers cannot currently exert agency over, nor profit from, their images being included in training datasets. These images feed the creation of non-consensual pornographic material, often combining a sex worker's body with a celebrity face. We identified a website specifically designed to host models trained on individual sex workers, celebrities and public figures, in order to generate “personalised” porn. Furthermore, if stock imagery, advertisements or modelling photos come to frequently feature generated humans <cit.>, it is important that we assess who is being displaced. For example, do companies use generated imagery to fulfil a diversity target, rather than hire real people? We recognise the possibility of a disconnect between the appearance of racial, gender or other diversity in stock imagery and who is receiving compensation for their time.
Miscommunication
We identify the problem of miscommunication across cultures and countries using TTI systems. This is especially significant in current TTI technology given the ability to rapidly create images from Western-centric datasets. Solutions to miscommunication require multi-disciplinary anthropological and technical research to understand the translation of forms and appearances into other cultures, and subsequently the building of inclusive datasets. Furthermore, we wish to highlight the problems related to flooding information environments with generated content. This is under-explored in the context of TTI systems, especially given the scale and speed of generation. This risk is not directly related to the types (and harms) of outputs produced, but considers the effects of mass synthetic media production on communities.
Socio-political instability
Many researchers have explored the possible effects of AI on democratic processes and structures <cit.>. We call attention to the specific risks posed by TTI technologies, many of which are covered within this paper, such as the rise of populism and nationalism supported by false evidence, as has been recognised in present-day America <cit.>, assisted by narratives of “alternative facts”. We consider the possible use cases of TTI models within these contexts to be an important, and widening, gap in the literature. This topic requires research beyond purely political considerations, and would benefit from alignment with deepfake research, some of which has already considered such risks.
Future research directions Technology companies building TTI (and other generative) models have a responsibility to address many of the risks discussed here; however, analysis of TTI models is insufficient without established benchmarks against which we can assess safe, ethical and fair performance. <cit.> present a “living benchmark” for large language models. Similar frameworks need to be developed for TTI models.
Building benchmarks and performance requirements necessitates input from a broad range of stakeholders including government, developers, research communities, image sources, subjects, users and vulnerable parties. The involvement of developers and researchers is especially vital given the high technical skill threshold of understanding generative models, as we have identified through the course of our analysis. The alignment of developmental goals with wider social goals will enable focused mitigation when harms arise, as current development and mitigation choices are left in the hands of technology companies. We also argue for the importance of mitigation strategies outside of technical solutions.
Research producing actionable insights arising from methods such as interviews and case studies can assist in our understanding of the impact of synthetic media. Work such as the interview and diary study of <cit.>, who argue for a holistic understanding of misinformation environments, is essential. Interviews that engage with identified victims of TTI model harms would greatly assist the development of mitigation strategies; see, for example <cit.>.
Finally, we primarily focused on examining the risks and harms that occur directly from the development and use of TTI models. For lack of space, we excluded an examination of indirect harms, such as environmental unsustainability, that result from the development of these models. The environmental impact of these models could have severe effects on globally marginalised communities, who are often most vulnerable to climate change yet typically have the least access to these technologies. The environmental risks of developing and deploying TTI systems have also been highlighted in the context of Large Language Models (LLMs) <cit.>. This subject requires additional research to better understand the origins of the energy consumed in training TTI models, the global distribution of carbon emissions, and the regions most affected by these emissions. Moreover, potential strategies for using renewable energy sources in model training, as a key component of reducing environmental impact, should be explored.
Open questions
The review and analysis conducted within this paper enabled our identification of a number of open questions.
* How can we rethink data gathering and output moderation with respect to privacy, ownership and identity?
For example:
* How do we implement functional and retroactive data deletion?
* How might source image creators be protected from “copyright laundering”?
* How can we “protect” future datasets from corruption by output images, and benchmark a “good” dataset?
* How do we allocate responsibility, and compensate for harm?
* How can we best flag and mitigate offensive use?
* How do we manage TTI-enabled technologies with respect to non-Western communities, such as avoiding miscommunication?
* How can the environmental costs of training and using these models be attenuated?
* How do we maintain a “ground truth” in data and visual media?
* What are the long-term social costs of generating visual content?
There are a number of regulatory efforts currently addressing data access and the use of AI, with modifications underway to incorporate generative technologies like TTI models. These include the EU AI Act <cit.>, the Algorithmic Accountability Act in the US <cit.>, and China's Deep Synthesis Provisions <cit.>, among others. Multiple ongoing lawsuits could shape future legal perspectives on generative models, including TTI-induced systems. The outcomes of these cases are yet to be determined and will likely impact the regulatory landscape surrounding these AI technologies.[For reference, here are several ongoing litigation cases: Doe 1 et al v. GitHub et al, Case No. 4:2022cv06823 (N.D. Cal.); Andersen et al v. Stability AI et al, Case No. 3:23-cv-00201 (N.D. Cal.); Getty Images v. Stability AI, Case No. 1:2023cv00135 (D. Del.); Tremblay et al v OpenAI, Case No. 4:23-cv-03223 (N.D. Cal.); Getty Images v Stability AI (England), Case IL-2023-000007. We thank Andres Guadamuz for providing information regarding these cases.]
As this paper cannot – within the page limit – adequately provide an exhaustive analysis of such relevant regulatory efforts, we offer five recommendations that we suggest would be useful in guiding generalised regulatory and policy initiatives. Some of these recommendations may already be covered by existing regulatory frameworks. Nonetheless, we believe it is beneficial to outline all of them here.
* Establish a multi-stakeholder benchmark for responsible and safe performance of TTI systems, with concern for the risks raised in our typology.
* Integrate digital literacy and media literacy into educational programs to help users understand the limitations and potential risks associated with TTI systems.
* Clearly communicate to users when their data will be used to train TTI systems and how resulting images might be used, and obtain explicit consent for such use.
* Ensure that copyright ownership is clearly identified and respected when generating images from text, and establish clear rules for attribution and usage.
* Develop novel, multi-stakeholder safeguards to prevent the creation and dissemination of inappropriate or harmful images, especially images that are discriminatory, violent, and threats to security.
Further, we acknowledge that these recommendations are applicable to other multi-modal generative models. For example, the growing public discourse of apprehension and fear regarding AGI could be somewhat abated by Recommendation 2. Throughout this paper, we have sought to highlight the importance of amplifying the voices of typically excluded stakeholders. By extension, we recognise the importance of fostering collaboration between the public, policymakers, industry leaders, researchers, and civil society organizations in order to ensure innovative, fair and effective regulatory frameworks.
§ CONCLUSION
This paper presented a typology of risk associated with TTI-induced technologies, followed by a succinct review of relevant mitigation strategies and a discussion of open questions concerning the development and use of TTI systems. Although we provided some preliminary recommendations, we acknowledge that additional perspectives, expertise, and research are necessary to refine this typology and enhance our understanding of the social implications of TTI systems.
§ ACKNOWLEDGMENTS
We would like to thank the UKRI Arts and Humanities Research Council (grant AH/X007146/1) for the policy fellowship that supported this work. We thank Shannon Vallor, Ewa Luger, and the members of Ada Lovelace Institute for helpful discussions. We also thank James Stewart, Lilian Edwards, Andres Guadamuz, and three anonymous reviewers whose comments improved our work. Eddie L. Ungless is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by UKRI (Grant EP/S022481/1) and the University of Edinburgh, School of Informatics. Charlotte Bird is supported by the Baillie Gifford PhD Scholarship at the Centre for Technomoral Futures.
§ TAXONOMY METHODOLOGY
We conducted our searches utilising the Semantic Scholar API. Semantic Scholar indexes over 200 million academic papers. To capture relevant papers, we selected five seed papers covering biased training data, biased image generation and bias in text-to-image models <cit.>. To capture papers relevant to misinformation harms, we selected three papers relevant to either deep fakes or synthetic media <cit.> or diffusion technology and evaluation <cit.>. Our search returned over 300 papers, 43 of which provided substantial and useful discussions of text-to-image technologies. Through extensive manual searches we identified a further 40 papers, most of which were technical papers. Collected papers were then analysed for stakeholders, risks, empirical investigations and open research questions.
Our taxonomy of risks initially adopted an inductive-deductive approach, in that we anticipated the existence of three broad categories (discrimination and exclusion, harmful misuse, misinformation) and derived subcategories from analysis of the papers. We then retroactively identified potential “gaps” in the literature, based in part on analogous research into the harms of other technologies, as well as by identifying key stakeholders that have not been addressed. These gaps are clearly identified in the table.
|
http://arxiv.org/abs/2307.06119v1 | 20230711132326 | SparqLog: A System for Efficient Evaluation of SPARQL 1.1 Queries via Datalog [Experiment, Analysis and Benchmark] | [
"Renzo Angles",
"Georg Gottlob",
"Aleksandar Pavlovic",
"Reinhard Pichler",
"Emanuel Sallinger"
] | cs.DB | [
"cs.DB"
] |
Universidad de Talca, Chile
University of Oxford, UK
TU Wien, Austria
TU Wien, Austria
TU Wien, Austria
Over the past decade, Knowledge Graphs have received enormous interest both from industry and
from academia. Research in this area has been driven, above all, by
the Database (DB) community and the Semantic Web (SW) community.
However, there still remains a certain divide between
approaches coming from these two communities.
For instance, while languages such as SQL or Datalog are widely used in the DB area, a different set of languages such as SPARQL and OWL is used in the SW area. Interoperability between such technologies
is still a challenge.
The goal of this work is to present a uniform and consistent framework
meeting important requirements
from both the SW and the DB fields.
SparqLog: A System for Efficient Evaluation of SPARQL 1.1 Queries via Datalog
[Experiment, Analysis & Benchmark]
Emanuel Sallinger
August 12, 2023
=================================================================================================================
PVLDB Reference Format:
. . PVLDB, (): , .
https://doi.org/doi:
[This work is licensed under the Creative Commons BY-NC-ND 4.0 International License. Visit <https://creativecommons.org/licenses/by-nc-nd/4.0/> to view a copy of this license. For any use beyond those covered by this license, obtain permission by emailing mailto:[email protected]@vldb.org. Copyright is held by the owner/author(s). Publication rights licensed to the VLDB Endowment.
Proceedings of the VLDB Endowment, Vol. , No. ISSN 2150-8097.
https://doi.org/doi:
]footnote-1
PVLDB Artifact Availability:
The source code, data, and/or other artifacts have been made available at <>.
§ INTRODUCTION
Since Google launched its Knowledge Graph (KG) roughly a decade ago, we have seen intensive work on
this topic both in industry and in academia. However, there are two research communities working mostly isolated from each other on the development of KG management systems, namely the Database and the Semantic Web community. Both of them come with their specific key requirements and they have introduced their own approaches.
Of major importance to the Semantic Web (SW) community is the compliance with the
relevant W3C standards:
[RQ1] SPARQL Feature Coverage. The query language SPARQL is one of the major Semantic Web standards. Therefore, we require the support of the
most commonly used SPARQL features.
[RQ2] Bag Semantics. SPARQL employs bag semantics (also referred to as multiset semantics) by default, unless specified otherwise in a query.
We therefore require support for bag semantics.
[RQ3] Ontological Reasoning. OWL 2 QL, which supports ontological reasoning, is a major Semantic Web standard. Technically, for rule-based languages, this means that
existential quantification (i.e., “object invention”) in the rule heads is required.
The Database (DB) community puts particular emphasis on the expressive power and efficient evaluation of query languages. This leads us to the following additional requirement:
[RQ4] Full Recursion. Full recursion is vital to
provide the expressive power needed to support complex querying in business applications and sciences (see e.g., <cit.>)
and it is the main feature of the relational query language Datalog <cit.>.
Starting with SQL-99, recursion has also been integrated into the
SQL standard and most relational database management systems have meanwhile incorporated recursion capabilities to increase their expressive power.
Finally, for an approach to be accepted and used in practice,
we formulate the following requirement for both communities:
[RQ5] Implemented System.
Both communities require an implemented system.
This makes it possible to verify whether the theoretical results are applicable in practice and to evaluate the usefulness of the approach under real-world settings.
The above listed requirements explain why
there exists a certain gap between the SW and DB communities.
There have been several attempts to close this gap. However, as will be detailed in
Section <ref>,
no approach has managed to fulfil the requirements of both sides so far.
Indeed, while existing solutions individually satisfy some of the requirements
listed above, all of them fail to satisfy central other requirements.
The goal of this work is to
develop one uniform and consistent framework that
satisfies the requirements of both communities.
More specifically, our contributions are as follows:
Theoretical Translation.
We provide a uniform and complete framework to
integrate SPARQL support into a KG language
that meets all of the above listed requirements RQ1–RQ5.
We have thus extended, simplified and – in some cases – corrected previous approaches of translating SPARQL queries (under both set and bag semantics) to various Datalog dialects <cit.>.
For instance, to the best of our knowledge, all previous translations have missed or incorrectly handled certain aspects of the SPARQL standard concerning zero-or-one and zero-or-more property paths.
Translation Engine.
We have developed the translation engine on top of the Vadalog system
that covers most of the considered SPARQL 1.1 functionality.
We thus had to fill several gaps between the abstract theory and the practical development of the translation engine.
For instance, to support bag semantics, we have designed specific Skolem functions to generate a universal duplicate preservation process. On the other hand, the use of the Vadalog system as the
basis of our engine made significant simplifications possible (such as letting Vadalog take care of complex filter constraints) and we also get ontological reasoning “for free”.
SparqLog therefore supports both query answering and ontological reasoning
in a single uniform and consistent system.
Experimental Evaluation.
We carry out an extensive empirical evaluation on multiple benchmarks with
two main goals in mind: to verify the compliance of SparqLog with the SPARQL standard as well as to compare the performance of our system with comparable ones. It turns out that, while SparqLog correctly covers a great part of the selected SPARQL 1.1 functionality, some other systems (specifically Virtuoso) employ a non-standard behaviour on queries containing property paths. As far as query-execution times are concerned, the performance of SparqLog is, in general, comparable to that of other systems such as the SPARQL system Fuseki or the querying and reasoning system Stardog, and it significantly outperforms
these systems on complex queries containing recursive property paths and/or involving ontologies.
Structure of the paper.
After
a review of existing approaches in Section <ref>,
and the preliminaries in Section <ref>,
we present our main results:
the general principles of our
system
in Section <ref>,
a more detailed look into the
translation engine in
Section <ref>,
and an experimental evaluation
in Section <ref>.
We conclude with Section <ref>.
Further details on our theoretical translation, implementation, and
experimental results are provided in the appendix.
The source code of SparqLog and
all material (queries, input and output data, performance measurements) of our
experimental evaluation are provided in the
supplementary material[https://github.com/joint-kg-labs/SparqLoghttps://github.com/joint-kg-labs/SparqLog].
All resources will be made publicly available in case of acceptance.
§ RELATED APPROACHES
We review several approaches – both from the Semantic Web and the Database community.
This discussion of related approaches is divided into
theoretical and practical aspects of our work.
§.§ Theoretical Approaches
Several theoretical research efforts have
aimed at bridging the gap between the DB and SW
communities.
Translations of SPARQL to Answer Set Programming.
In a series of papers, Polleres et al. presented translations of SPARQL and SPARQL 1.1
to various extensions of Datalog. The first translation from SPARQL to Datalog <cit.>
converted SPARQL queries into Datalog programs by employing negation as failure.
This translation was later extended by the addition of new features of SPARQL 1.1 and by considering its bag semantics in <cit.>. Thereby, Polleres and Wallner created a nearly complete translation of SPARQL 1.1 queries to Datalog with disjunction (DLV) programs. However, the translation had two major drawbacks: On the one hand, the chosen target language DLV does not support ontological reasoning as it does not contain existential quantification, thereby missing a key requirement (RQ3) of the Semantic Web community. On the other hand, the requirement of an implemented system (RQ5) is only partially fulfilled,
since the prototype implementation DLVhex-SPARQL Plugin
<cit.> of the SPARQL to Datalog translation of <cit.>
has not been extended to cover also SPARQL 1.1 and bag semantics.
Alternative Translations of SPARQL to Datalog.
An alternative approach of relating SPARQL to non-recursive Datalog with stratified negation (or, equivalently, to Relational Algebra)
was presented by Angles and Gutierrez in
<cit.>. The peculiarities of negation
in SPARQL were treated in a separate paper <cit.>. The authors later extended this line of research to an
exploration of the bag semantics of SPARQL
and a characterization of the structure of its algebra and logic
in <cit.>.
They translated a few SPARQL features into a Datalog dialect with bag semantics (multiset non-recursive Datalog with safe negation).
This work considered only a small set of SPARQL functionality on a very abstract level and used again a target language that does not support ontological reasoning, failing to meet
important requirements (RQ1, RQ3) of the SW community. Most importantly, no implementation
exists of the translations provided by Angles and Gutierrez, thus failing to fulfil RQ5.
Supporting Ontological Reasoning via Existential Rules.
In <cit.>, Datalog^± was presented as a family of languages that are particularly well suited for capturing
ontological reasoning. The “+” in Datalog^± refers to the
crucial extension compared with Datalog by existential rules,
that is, allowing existentially quantified variables in the rule heads.
However, without restrictions,
basic reasoning tasks such as answering Conjunctive Queries w.r.t.
an ontology given by a set of existential rules become
undecidable <cit.>.
Hence, numerous restrictions have been proposed <cit.>
to ensure decidability of such tasks, which
led to the “-“ in Datalog^±.
Of all variants of Datalog^±, Warded Datalog^±
<cit.> ultimately
turned out to constitute the best compromise between complexity and expressiveness and it has been implemented in an industrial-strength system – the Vadalog system <cit.>,
thus fulfilling requirement RQ5. However, the requirement of supporting SPARQL (RQ1) with or without bag semantics (RQ2)
have not been fulfilled up to now.
Warded Datalog^± with Bag Semantics.
In <cit.>, it was shown that Warded Datalog^± using set semantics can be used to represent Datalog using bag semantics by using existential quantification to introduce
new tuple IDs. It was assumed that these results could be leveraged for future translations from SPARQL with bag semantics to Warded Datalog^± with set semantics. However, the theoretical translation of SPARQL to Vadalog
(RQ1) using these results and also implementation (RQ5) by extending Vadalog were left open in <cit.> and considered of primary importance for future work.
§.§ Practical Approaches
Several systems have aimed at bridging the gap between DB and SW
technologies. The World Wide Web Consortium (W3C) lists StrixDB, DLVhex SPARQL-engine and RDFox as systems that support SPARQL in combination with Datalog[https://www.w3.org/wiki/SparqlImplementationshttps://www.w3.org/wiki/SparqlImplementations]. Furthermore, we
also have a look at ontological reasoning systems
Vadalog, Graal and VLog,
which either understand SPARQL to some extent or, at least in principle, could be extended in order
to do so.
DLVhex-SPARQL Plugin.
As mentioned above,
the DLVhex-SPARQL Plugin <cit.>
is a prototype implementation
of the SPARQL to Datalog translation in <cit.>.
According to the repository's ReadMe file[https://sourceforge.net/p/dlvhex-semweb/code/HEAD/tree/dlvhex-sparqlplugin/trunk/READMEhttps://sourceforge.net/p/dlvhex-semweb/code/HEAD/tree/dlvhex-sparqlplugin/trunk/README], it supports basic graph patterns, simple conjunctive FILTER expressions (such as ISBOUND, ISBLANK, and arithmetic comparisons), the UNION, OPTIONAL, and JOIN operation. Other operations, language tags, etc. are not supported and query results do not conform to the SPARQL protocol, according to the ReadMe file. Moreover, the
underlying logic programming language DLV provides only domain specific existential quantification (described in <cit.>),
produced e.g. by hash-functions. Hence, it only provides very limited support of existential quantification, which
does not suffice for ontological reasoning as required by the OWL 2 QL standard
(RQ3). Also the support of bag semantics is missing (RQ2).
RDFox.
RDFox is an RDF store developed and maintained at the University of Oxford <cit.>. It reasons over OWL 2 RL ontologies in Datalog and computes/stores materialisations of the inferred consequences for efficient query answering <cit.>. The answering process of SPARQL queries is not explained in great detail, except stating that queries are evaluated on top of these materialisations, by employing different scanning algorithms <cit.>.
However, translating SPARQL to Datalog – one of the main goals of this paper – is not supported[see https://docs.oxfordsemantic.tech/reasoning.htmlhttps://docs.oxfordsemantic.tech/reasoning.html for the documentation discussing the approach].
Moreover, RDFox does currently not support property paths and some other SPARQL 1.1 features[https://docs.oxfordsemantic.tech/3.1/querying-rdfox.html#query-languagehttps://docs.oxfordsemantic.tech/3.1/querying-rdfox.html#query-language].
StrixDB.
StrixDB is an RDF store developed as a simple tool for working with middle-sized RDF graphs,
supporting SPARQL 1.0 and Datalog reasoning capabilities[http://opoirel.free.fr/strixDB/http://opoirel.free.fr/strixDB/].
To the best of our knowledge, there is no academic paper or technical report that explains the capabilities of the system in greater detail – leaving us with the web page as the only source of information
on StrixDB.
The StrixStore documentation page[http://opoirel.free.fr/strixDB/DOC/StrixStore_doc.htmlhttp://opoirel.free.fr/strixDB/DOC/StrixStore_doc.html]
lists examples of how to integrate Datalog rules into SPARQL queries, to query graphs enhanced by Datalog ontologies.
However, translating SPARQL to Datalog – one of the main goals of this paper – is not supported[see http://opoirel.free.fr/strixDB/dbfeatures.htmlhttp://opoirel.free.fr/strixDB/dbfeatures.html for the documentation discussing capabilities].
Moreover,
important SPARQL 1.1 features such as aggregation and property paths are not
supported by StrixDB.
Graal.
Graal was developed as a toolkit for querying ontologies with existential rules <cit.>. The system does not focus on a specific storage system but rather specializes in algorithms that can answer queries regardless of the underlying database type <cit.>. It achieves this flexibility by translating queries from their host system language into Datalog^±. However, this comes at the price of restricting itself to answering conjunctive queries only <cit.>; it therefore supports merely a small subset of SPARQL features[https://graphik-team.github.io/graal/https://graphik-team.github.io/graal/], e.g. not even being able to express basic features such as UNION or MINUS.
Clearly, the goal of
developing a uniform and consistent framework for both,
the Semantic Web and Database communities,
cannot be achieved without supporting at least the most vital features of SPARQL (RQ1).
VLog.
VLog is a rule engine, developed at the TU Dresden <cit.>.
The system transfers incoming SPARQL queries to specified external SPARQL endpoints such as Wikidata and DBpedia and incorporates the received query results into their knowledge base <cit.>. Therefore, the responsibility of query answering is handed over to RDF triple stores that provide a SPARQL query answering endpoint, thus failing to provide a uniform, integrated framework for combining
query answering with ontological reasoning (RQ5).
Vadalog.
The Vadalog system <cit.> is a KG management system
implementing the logic-based language Warded Datalog^±. It extends Datalog by including existential quantification necessary for ontological reasoning, while maintaining
reasonable
complexity. As an extension of Datalog, it supports full recursion.
Although Warded Datalog^± has the capabilities to support SPARQL 1.1 under the OWL 2 QL entailment regime <cit.> (considering set semantics though!),
no complete theoretical nor any practical translation from SPARQL 1.1 to Warded Datalog^± exists.
Therefore, the bag semantics (RQ2) and SPARQL feature coverage (RQ1)
requirements are not met.
§ PRELIMINARIES
§.§ RDF and SPARQL
RDF <cit.> is a W3C standard that defines a graph data model for describing Web resources.
The RDF data model assumes three data domains: IRIs that identify Web resources, literals that represent simple values, and blank nodes that identify anonymous resources.
An RDF triple is a tuple (s, p, o),
where s is the subject, p is the predicate, o is the object, all the components can be IRIs, the subject and the object can alternatively be a blank node, and the object can also be a literal.
An RDF graph is a set of RDF triples.
A named graph is an RDF graph identified by an IRI.
An RDF dataset is a structure formed by a default graph and zero or more named graphs.
For example, consider that <http://example.org/graph.rdf> is an IRI that identifies an RDF graph with the following RDF triples:
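(The original listing is not reproduced in this version; the following triples are a plausible reconstruction, consistent with the description below and with the solution mappings discussed later.)

   <http://ex.org/glucas>  <http://ex.org/name>      "George" .
   <http://ex.org/glucas>  <http://ex.org/lastname>  "Lucas" .
   _:b1                    <http://ex.org/name>      "Steven" .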
This graph describes information about film directors.
Each line is an RDF triple, <http://ex.org/glucas> is an IRI, "George" is a literal, and
_:b1 is a blank node.
SPARQL <cit.> is the standard query language for RDF.
The general structure of a SPARQL query is shown in Figure <ref>, where: the SELECT clause defines the output of the query, the FROM clause defines the input of the query (i.e. an RDF dataset), and the WHERE clause defines a graph pattern.
The evaluation of a query begins with the construction of the RDF dataset to be queried, whose graphs are defined by
one or more dataset clauses. A dataset clause is either an expression FROM u or FROM NAMED u, where u is an IRI that refers to an RDF graph. The former clause merges a graph into the default graph of the dataset, and the latter adds a named graph to the dataset.
The WHERE clause defines a graph pattern (GP). There are many types of GPs: triple patterns (RDF triples extended with variables), basic GPs (a set of triple patterns), optional GPs, alternative GPs (UNION), GPs on named graphs (GRAPH), negation of GPs (NOT EXISTS and MINUS), GPs with constraints (FILTER), existential GPs (EXISTS), and nesting of GPs (SubQueries).
A property path is a special GP which allows to express different types of reachability queries.
The result of evaluating a graph pattern is a multiset of solution mappings. A solution mapping is a set of variable-value assignments. E.g., the evaluation of the query in Figure <ref>
over the above RDF graph
returns two mappings {μ_1, μ_2} with
μ_1(?N) = "George",
μ_1(?L) = "Lucas", and
μ_2(?N) = "Steven".
The graph pattern matching step returns a multiset whose solution mappings are treated as a sequence without specific order.
Such a sequence can be arranged by using solution modifiers: ORDER BY allows sorting the solutions; DISTINCT eliminates duplicate solutions; OFFSET allows skipping a given number of solutions; and LIMIT restricts the number of output solutions.
Given the multiset of solution mappings, the final output is defined by a query form: SELECT projects the variables of the solutions; ASK returns true if the multiset of solutions is non-empty and false otherwise; CONSTRUCT returns an RDF graph whose content is determined by a set of triple templates; and DESCRIBE returns an RDF graph that describes the resources found.
§.§ Warded Datalog and the Vadalog System
In <cit.>, Datalog^± was presented as a family of languages
that
extend Datalog (whence the +) to increase its expressive power
but also impose restrictions (whence the -) to ensure decidability
of answering Conjunctive Queries (CQs). The extension most relevant for our purposes
is allowing existential rules of the form
∃z̅ P(x̅',z̅) ← P_1(x̅_1), …, P_n(x̅_n),
with x̅' ⊆⋃_ix̅_i, and z̅∩⋃_ix̅_i = ∅.
Datalog^± is thus well suited to capture ontological reasoning. Ontology-mediated query answering is
defined by considering a given database D and program Π as logical theories.
The answers to a CQ Q(z⃗) with free variables z⃗ over database D under the ontology expressed by
Datalog^± program Π are defined as {a⃗ | Π ∪ D ⊨ Q(a⃗)}, where
a⃗ is a tuple of the same arity as z⃗ with values from the domain of D.
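As a minimal illustration (our own example with illustrative predicate names, not taken from any figure of this paper), consider an ontology stating that every film has some director, written in a Vadalog-style syntax where a head variable not occurring in the body is understood as existentially quantified:

   directedBy(X, Y) :- film(X).   % Y is existentially quantified ("object invention")
   film(ex:starwars).             % the database D

For the CQ Q(z) :- film(z), directedBy(z, y), the value ex:starwars is a certain answer, since Π ∪ D entails ∃y directedBy(ex:starwars, y), even though no concrete director is stored in D.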
Several subclasses of Datalog^± have been
presented
<cit.>
that ensure decidability of CQ answering
(see <cit.> for an overview).
One such subclass is
Warded Datalog^±
<cit.>, which makes
CQ answering
even tractable (data complexity).
For a formal definition of Warded Datalog^±, see
<cit.>. We give the intuition of
Warded Datalog^± here.
First, for all positions in rules of a program Π, distinguish if they are
affected or not: a position is affected, if the chase may introduce a labelled null here, i.e., a position in a head atom either with an existential variable or with a
variable that occurs only in affected positions in the body.
Then, for variables occurring in
a rule ρ of Π, we identify the dangerous ones: a variable is dangerous in ρ,
if it may propagate a null in the chase, i.e.,
it appears in the head and all its occurrences in the body of ρ are at
affected positions.
A Datalog^± program Π is warded if all rules ρ∈Π
satisfy: either ρ contains no dangerous variable
or all dangerous variables of ρ occur in a single body atom
A (= the “ward”) such that the variables shared by A and the remaining body
occur in at least one non-affected position (i.e., they cannot propagate nulls).
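To illustrate this intuition with a small, self-contained example (the predicate names are ours and are not taken from the paper), consider the rules

   controls(P, X) :- company(X).               % P is existential, so position controls[1] is affected
   person(P) :- controls(P, X), company(X).    % P is dangerous: it occurs in the head and appears
                                               % only at the affected position controls[1] in the body

The second rule is warded: its only dangerous variable P occurs in the single body atom controls(P, X), which acts as the ward, and the variable X that the ward shares with the rest of the body occurs at the non-affected position company[1], so the join cannot propagate a labelled null.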
Apart from the favourable computational properties,
another important aspect of Warded Datalog^± is that a
full-fledged engine (even with further extensions) exists: the
Vadalog system <cit.>.
It combines full support of Warded Datalog^± plus a number of extensions needed for practical use, including (decidable) arithmetics, aggregation, and other features. It has been deployed in numerous industrial scenarios, including the finance sector as well as the supply chain and logistics sector.
§ THE SPARQLOG SYSTEM
This section introduces SparqLog, a system that allows the evaluation of SPARQL 1.1 queries on top of the Vadalog system.
To the best of our knowledge, SparqLog is the first system that provides a complete translation engine from SPARQL 1.1 with bag semantics to Datalog.
In order to obtain a functional and efficient system, we combined the knowledge provided by the theoretical work with database implementation techniques.
SparqLog implements three translation methods:
(i) a data translation method T_D which generates Datalog^± rules from an RDF Dataset;
(ii) a query translation method T_Q which generates Datalog^± rules from a SPARQL query; and
(iii) a solution translation method T_S which generates a SPARQL solution from a Datalog^± solution.
Hence, given an RDF dataset D and a SPARQL query Q, SparqLog generates a Datalog program Π as the union of the rules returned by T_D and T_Q, then evaluates the program Π, and uses T_S to transform the resulting Datalog solution into a SPARQL solution.
§.§ Example of Graph Pattern Translation
In order to give a general idea of the translation, we will sketch the translation of the RDF graph and the SPARQL query presented in Section <ref>.
To facilitate the notation, we will abbreviate the IRIs by using their prefix-based representation.
For example, the IRI http://ex.org/name will be represented as ex:name, where ex is a prefix bound to the namespace http://ex.org/. Additionally, we will use graph.rdf instead of http://example.org/graph.rdf.
§.§.§ Data translation
Consider the RDF graph G presented in Section <ref>.
First, the data translation method T_D generates a special fact for every RDF term (i.e., IRI, literal, and blank node) in G:
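(The generated facts are not reproduced in this version. Assuming the predicate names iri, literal, and bnode, which are our own naming assumption, and the reconstruction of G sketched in Section <ref>, they would look as follows.)

   iri(ex:glucas).     iri(ex:name).      iri(ex:lastname).
   literal("George").  literal("Lucas").  literal("Steven").
   bnode(_:b1).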
These facts are complemented by the following rules, which represent the domain of RDF terms:
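(A plausible sketch of these rules, under the same naming assumption, is the following.)

   term(X) :- iri(X).
   term(X) :- literal(X).
   term(X) :- bnode(X).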
For each RDF triple (s,p,o) in graph G with IRI g,
T_D generates a fact triple(s,p,o,g). Hence,
in our example, T_D produces:
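(Since the original listing is not reproduced here, the following facts are based on the reconstruction of G given in Section <ref> and are therefore an assumption.)

   triple(ex:glucas, ex:name, "George", graph.rdf).
   triple(ex:glucas, ex:lastname, "Lucas", graph.rdf).
   triple(_:b1, ex:name, "Steven", graph.rdf).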
§.§.§ Query translation
Assume that Q is the SPARQL query presented in Figure <ref>.
The application of the query translation method T_Q over Q returns the Datalog^± rules shown in Figure <ref>.
The general principles of the translation will be discussed
in Section <ref>.
In the interest of readability, we slightly simplify the presentation, e.g., by omitting language tags and type definitions and using simple (intuitive) variable names (rather than more complex ones as would be generated by to rule out name clashes).
The query translation method T_Q produces rules for each language construct of SPARQL 1.1 plus rules defining several auxiliary predicates. In addition, also system instructions (e.g., to indicate the answer predicate or ordering requirements) are generated.
The translation begins with the WHERE clause, then continues with the SELECT clause, and finalizes with the ORDER BY clause.
The most complex part of T_Q is the translation of the graph pattern defined in the WHERE clause.
In our example, the graph pattern defined by the WHERE clause is of the form
P_1 = P_2 OPTIONAL P_3
with triple patterns P_2 = ?X ex:name ?N and
P_3 = ?X ex:lastname ?L.
The instruction @output (line 24) is used to define the literal of the goal rule ans. It realises the projection defined by the SELECT clause. The instruction @post("ans","orderby(2)") (line 23) realises
the ORDER BY clause; it indicates a sort operation over the elements in the second position of the goal rule ans(ID,L,N,D), i.e. sorting by N (note that ID is at position 0). The ans predicate is defined
(lines 2–3)
by projecting out the X variable from the ans1 relation, which
contains the result of evaluating pattern P_1.
The tuple IDs are generated as Skolem terms
(line 3 for ans; likewise lines 7, 10, 17, 22).
In this example, we assume that the pattern P_1 and its subpatterns P_2 and P_3
are evaluated over the default graph. This is explicitly defined for the basic graph
patterns (lines 15, 20) and propagated by the last argument D
of the answer predicates.
The OPTIONAL pattern P_1 gives rise to 3 rules defining
the
predicate ans1: a rule (lines 11–12) to define the predicate ans_opt1, which
computes those mappings for pattern P_2 that can be extended to mappings of P_3;
a rule (lines 5–7) to compute those tuples of ans1 that are obtained by extending mappings of P_2 to mappings of P_3; and finally a rule (lines 8–10) to compute those tuples of ans1 that are obtained from mappings of P_2 that have no extension to mappings of P_3. In the latter case, the additional variables of P_3 (here: only variable L) are set to null (line 9).
The two basic graph patterns P_2 and P_3 are translated to rules for the
predicates ans2 (lines 14–17) and ans3 (lines 19–22) in the obvious way.
§.§.§ Solution translation
The evaluation of the program Π produced by the
data translation and query translation methods
yields a set of ground atoms
for the goal predicate p.
In our example, we thus get two ground atoms:
ans(id1, "George","Lucas",
"graph.rdf") and
ans(id2, "Steven","null","graph.rdf"). Note that
the ground atoms are guaranteed to have pairwise distinct tuple IDs.
These ground atoms can be easily translated to the multiset of solution mappings by projecting out the tuple ID. Due to the simplicity of our example, we only get a set
{μ_1, μ_2} of solution mappings with
μ_1(?N) = "George",
μ_1(?L) = "Lucas", and
μ_2(?N) = "Steven".
§.§ Example of Property Path Translation
A property path is a feature of the SPARQL query language that allows the user to query for complex paths between nodes, instead of being limited to graph patterns with a fixed structure.
SPARQL defines different types of property path, named: PredicatePath, InversePath, SequencePath, AlternativePath, ZeroOrMorePath, OneOrMorePath, ZeroOrOnePath and NegatedPropertySet.
Next we present an example to show the translation of property paths.
Assume that <http://example.org/countries.rdf> identifies an RDF graph with the following prefixed RDF triples:
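(The original listing is not reproduced in this version; the following triples are a plausible reconstruction, chosen to be consistent with the solution mappings reported below.)

   ex:spain    ex:borders  ex:france .
   ex:france   ex:borders  ex:germany .
   ex:france   ex:borders  ex:belgium .
   ex:germany  ex:borders  ex:austria .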
Note that each triple describes two bordered countries in Europe.
Recall that ex is a prefix for the namespace http://ex.org/, meaning, e.g., that ex:spain is the abbreviation of http://ex.org/spain.
A natural query could be asking for the countries than can be visited by starting a trip in Spain. In other terms, we would like the get the nodes (countries) reachable from the node representing Spain.
Although the above query could be expressed by computing the union of different fixed patterns (i.e. one-country trip, two-country trip, etc.),
the appropriate way is to use the SPARQL query shown in Figure <ref>.
The result of this query is the set
{μ_1, μ_2,μ_3,μ_4}
of mappings with
μ_1(?B) = ex:france, μ_2(?B) = ex:germany, μ_3(?B) = ex:austria, and μ_4(?B) = ex:belgium.
A property path pattern is a generalization of a triple pattern (s,p,o) where the predicate p is extended to be a regular expression called a property path expression.
Hence, the expression ?A ex:borders+ ?B shown in Figure <ref> is a property path pattern, where
the property path expression ex:borders+ allows to return all the nodes ?B reachable from node ?A by following one or more matches of edges with ex:borders label. The FILTER condition restricts the solution mappings to those where variable ?A is bound to ex:spain, i.e. pairs of nodes where the source node is spain. Finally, the SELECT clause projects the result
to variable ?B, i.e., the target nodes.
In Figure <ref>, we show the Datalog^± rules obtained by translating the graph pattern shown in Figure <ref>.
The rule in line 2 corresponds to the translation of the filter graph pattern.
The rule in line 5 is the translation of the property path pattern ?A ex:borders+ ?B.
The rules shown in lines 8 and 9 demonstrate the use of recursion to emulate the property path expression ex:borders+.
The rule in line 11 is the translation of ex:borders which is called a link property path expression.
The general principles of the translation of property paths
will be discussed in Section <ref>.
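For readers without access to the figure, the following sketch conveys the structure of these rules; the predicate names are illustrative assumptions, the filter condition is written schematically, and the Skolem-based tuple-ID generation is again omitted for readability.

   ans1(A, B, G) :- ans2(A, B, G), A == ex:spain.          % filter graph pattern
   ans2(A, B, G) :- ansPP(A, B, G).                        % property path pattern ?A ex:borders+ ?B
   ansPP(X, Y, G) :- ansLink(X, Y, G).                     % one-or-more path: base case
   ansPP(X, Z, G) :- ansPP(X, Y, G), ansLink(Y, Z, G).     % one-or-more path: recursive case
   ansLink(X, Y, G) :- triple(X, ex:borders, Y, G).        % link property path ex:borders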
§.§ Coverage of SPARQL 1.1 features
In order to develop a realistic integration framework between SPARQL and Vadalog, we conduct a prioritisation of SPARQL features. We first lay our focus on basic features,
such as terms and graph patterns.
Next, we prepare a more detailed prioritisation by considering the results of
Bonifati et al. <cit.>, who examined the real-world adoption of SPARQL features by analysing a massive amount of real-world query-logs from different well-established Semantic Web sources.
Additionally, we study further interesting properties of SPARQL, for instance SPARQL's approach to support partial recursion (through the addition of property paths) or interesting edge cases (such as the combination of Filter and Optional features) for which a “special” treatment is required.
The outcome of our prioritisation step is shown in Table <ref>.
For each feature,
we present its real-world usage according to <cit.>
and its current implementation status in our
system.
The table represents the real-world usage by a percentage value
(drawn from <cit.>)
in the feature usage field, if <cit.> covers the feature,
“Unknown” if <cit.> does not cover it, and
“Basic Feature” if we consider the feature as fundamental to SPARQL.
Note that some features are supported by with minor restrictions, such as ORDER BY for which we did not re-implement the sorting strategy defined by the SPARQL standard, but directly use the sorting strategy employed by the Vadalog system.
Table <ref> reveals that our engine covers
all features that are used in more than 5% of the queries in practice and are deemed therefore to be of highest relevance to SPARQL users.
Some of these features have a rather low usage in practice (< 1%), however are still supported by our engine. These features include property paths and .
We have chosen to add property paths to our engine, as they are not only interesting for being SPARQL's approach to support partial recursion but, according to <cit.>, there are datasets that make extensive use of them. Moreover, we have chosen to add and some aggregates (e.g. ), as they are very important in traditional database settings, and thus are important to establish a bridge between the Semantic Web and Database communities.
In addition to these most widely used features, we have covered all features occurring in critical benchmarks (see Section 6.1 for a detailed discussion). Specifically, as used in the FEASIBLE benchmark, we cover the following features: ORDER BY with complex arguments (such as ORDER BY with BOUND conditions), functions on strings such as UCASE, the DATATYPE function, LIMIT, and OFFSET.
For the gMark benchmark, we cover the “exactly n occurrences” property path, “n or more occurrences” property path, and the “between 0 and n occurrences” property path.
Among our contributions, concerning the translation of SPARQL to Datalog, are: the available translation methods have been combined into a uniform and practical framework for translating RDF datasets and SPARQL queries to Warded Datalog± programs;
we have developed simpler translations for and , compared with those presented in <cit.>;
we provide translations for both bag and set semantics, thus covering queries
with and without the DISTINCT keyword;
we have enhanced current translations by adding partial support for data types and language tags;
we have developed a novel duplicate preservation model based on the abstract theories of ID generation (this was required because plain existential ID generation turned out
to be problematic due to
some peculiarities of the Vadalog system);
and we propose a complete method for translating property paths, including zero-or-one and zero-or-more property paths.
There are also a few features that have a real-world usage of slightly above one percent and which are currently not supported by . Among these features are , ,
and . We do not support features and , as these solution modifiers do not yield any interesting theoretical or practical challenges and they did not occur in any of the benchmarks chosen for our experimental evaluation.
The features for query federation are out of the considered scope, as our translation engine demands RDF datasets to be translated to the Vadalog system for query answering. Furthermore, SPARQL query federation is used in less than 1% of SPARQL queries <cit.>.
§ SPARQL TO DATALOG^± TRANSLATION
In this section, we present some general principles of our translation
from SPARQL queries into Datalog^± programs.
We thus first discuss the translation of graph patterns (Section <ref>), and then treat the translation of property paths separately
(Section <ref>).
We conclude this section with a discussion of the correctness of our translation (Section <ref>).
Full details of the translation and its correctness proof are given
in Appendix <ref>.
§.§ Translation of Graph Patterns
Let P be a SPARQL graph pattern and D be an RDF dataset
D = ⟨ G, G_named⟩ where G is the default graph and G_named is the set of named graphs.
The translation of graph patterns is realised by
the translation function τ(P, dst, D, NodeIndex) where:
P is the graph pattern that should be translated next;
dst (short for “distinct”) is a Boolean value that describes whether the result should have set semantics (dst = true) or bag semantics (dst = false);
D is the graph on which the pattern should be evaluated;
NodeIndex is the index of the pattern P to be translated; and the output of function τ is a set of Datalog^± rules.
The function τ for different types of graph patterns is presented in Figure <ref>.
In the sequel, we concentrate on bag semantics (i.e., dst = false), since this is the more complex case.
To improve readability, we apply the simplified notation used in Figure <ref> now also to Figure <ref>. Additionally, we omit the explicit generation of IDs via Skolem functions and simply put a fresh ID-variable in the first position of the head atoms of the rules.
General strategy of the translation.
Analogously to <cit.>,
our translation proceeds by recursively traversing the parse tree of a SPARQL 1.1 query and translating each subpattern into its respective Datalog^± rules.
Subpatterns of the parse tree are indexed. The root has index 1, the left child of the i-th node has index 2 * i, the right child has index 2 * i + 1. During the translation, bindings of the i-th subpattern are represented by the predicate ans_i.
In all answer predicates ans_i, we have the current graph as last component. It can be changed by the GRAPH construct; for all other SPARQL constructs, it is transparently passed on from the children
to the parent in the parse tree.
Since the order of variables in predicates is relevant, some variable sets will need to be lexicographically ordered, which we denote by x as in <cit.>.
We write var(P) to denote the lexicographically ordered tuple of variables of P.
Moreover a variable renaming function
v_j: V → V is defined.
Auxiliary Predicates.
The translation generates several auxiliary predicates.
Above all, we need a predicate
comp for testing if two mappings are
compatible.
The notion of compatible mappings is fundamental for the
evaluation of SPARQL graph patterns.
Two mappings μ_1 and μ_2 are compatible, denoted μ_1 ∼μ_2,
if for all ?X ∈ dom(μ_1) ∩ dom(μ_2) it holds that μ_1(?X) = μ_2(?X).
The auxiliary predicate comp(X_1,X_2,X_3) checks if two values X_1 and X_2
are compatible. The third position X_3 represents the value
that is used in the result tuple
when joining over X_1 and X_2:
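A plausible definition, in the spirit of earlier SPARQL-to-Datalog translations, writes null for the unbound marker (the exact encoding used by our implementation may differ):

   comp(X, X, X) :- term(X).        % two equal bound values are compatible
   comp(null, X, X) :- term(X).     % an unbound value is compatible with any bound value
   comp(X, null, X) :- term(X).
   comp(null, null, null).          % two unbound values remain unbound in the join result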
Bag semantics.
For bag semantics (i.e., dst = false), all answer predicates contain a fresh existential variable
when they occur in the head of a rule. In this way, whenever such a rule fires, a fresh tuple ID is generated. This is particularly important for the translation of the UNION construct. In contrast to
<cit.>, we can thus distinguish duplicates without the need to increase the arity of the answer predicate.
We have developed a novel duplicate preservation model
based on the abstract theories of ID generation of <cit.>.
As mentioned above, plain existential ID generation turned out to be problematic due to peculiarities of the Vadalog system.
Therefore, our ID generation process is abstracted away by using a Skolem function generator and representing nulls (that correspond to tuple IDs) as specific Skolem terms.
Filter constraints.
Note how we treat filter conditions in FILTER constructs: building our translation engine on top of the Vadalog system allows us to literally copy (possibly complex) filter conditions into the rule body and let the Vadalog system evaluate them. For instance,
the regex functionality uses the corresponding Vadalog function, which
makes direct use of the Java regex library.
For evaluating filter functions isIRI, isURI, isBlank, isLiteral, isNumeric, and bound expressions, our translation engine uses the corresponding auxiliary predicates generated in our data translation method.
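For example, a (hypothetical) pattern that filters the triple pattern (?x, :age, ?a) with the condition ?a > 18 (where :age is a made-up IRI) would, following the scheme of Figure <ref>, be translated into rules of roughly the following shape, with the filter condition copied verbatim into the rule body and evaluated by Vadalog:
ans_2(Id_1, A, X, D) :- triple(X, age, A, D).
ans_1(Id, A, X, D) :- ans_2(Id_1, A, X, D), A > 18.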
§.§ Translation of Property Paths
Property paths are an important feature, introduced in SPARQL 1.1.
A translation of property paths to Datalog was presented in <cit.> –
but not fully compliant with the SPARQL 1.1 standard: the main problem
in <cit.> was the way zero-or-one and
zero-or-more property paths were handled. In particular, <cit.> omitted the case that
a path of zero length from t to t also exists for those terms t
which occur in the query but not in the current graph.
A property path pattern is given in the
form s, p, o, where s, o are the usual
subject and object and p is a
property path expression. That is,
p is either an IRI (the base case) or
composed from one or two other property path
expressions
p_1, p_2 as:
^p_1 (inverse path expression),
p_1 | p_2 (alternative path expression),
p_1 / p_2 (sequence path expression),
p_1? (zero-or-one path expression),
p_1+ (one-or-more path expression),
p_1* (zero-or-more path expression), or
!p_1 (negated path expression).
A property path pattern s,p,o is translated by
first translating the property path expression p into rules for each subexpression of p.
The endpoints s and o of the overall path are only applied to the
top-level expression p. Analogously to our translation function
τ(P, dst, D, NodeIndex) for graph patterns, we
now
also introduce a translation function
τ_PP(PP, dst, S, O, D, NodeIndex) for property path expressions PP,
where
S,O, are the subject and object of the top-level property path expression
that have to be
kept track of during the entire evaluation as will become clear when we highlight our translation
in Figure <ref>.
Again we restrict ourselves to the more interesting case of
bag semantics.
The translation of a property path pattern S, P_1, O for some property path expression P_1 consists of two parts: the translation of P_1 by the translation function τ_PP
and the translation τ of S, P_1, O – now applying the endpoints S and O
to the
top-level property path expression P_1.
The base case of
τ_PP is a link property path
PP_i = p_1 (i.e., simply an IRI),
which returns all pairs (X,Y) that occur as
subject and object in a triple with predicate p_1.
Equally simple translations apply to
inverse paths (which swap start point
and end point), alternative paths (which are treated
similarly to UNION
in Figure <ref>), and
sequence paths (which combine
two paths by identifying the end point of the first path with the
start point of the second path).
For zero-or-one paths (and likewise for zero-or-more paths),
we need to collect all terms that occur as subjects or objects in the
current graph by an auxiliary predicate subjectOrObject:
subjectOrObject(X) :- triple(X, P, Y, D).
subjectOrObject(Y) :- triple(X, P, Y, D).
This is needed to produce paths of length zero (i.e., from X to X) for all
these terms occurring in the current graph. Moreover, if exactly one of S and O
is not a variable or if both are the same non-variable, then also for these nodes we have to
produce paths of zero length.
It is because of this special treatment of
zero-length paths that subject S and object O from the top-level
property path expression have to be propagated through all recursive calls
of the translation function τ_PP.
In addition to the zero-length paths, of course, also paths of length one have to be
produced by recursively applying the translation τ_PP to PP_1 if
PP_i is of the
form PP_i = PP_1?.
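Concretely, for a zero-or-one expression PP_i = PP_1? with top-level subject S and object O, the rules given in the appendix are:
ans_i(Id, X, X, D) :- subjectOrObject(X), Id = [].
ans_i(Id, X, Y, D) :- ans_2i(Id_1, X, Y, D), Id = [].
together with the recursive translation of PP_1 and, if exactly one of S and O is a non-variable t (or both are the same non-variable t), the additional zero-length rule
ans_i(Id, X, X, D) :- not Term(X), X = t, Id = [].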
Finally, one-or-more paths are realised in the
usual style of transitive closure programs in Datalog.
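That is, for PP_i = PP_1+ the appendix definition generates the two rules
ans_i(Id, X, Y, D) :- ans_2i(Id_1, X, Y, D), Id = [].
ans_i(Id, X, Z, D) :- ans_2i(Id_1, X, Y, D), ans_i(Id_2, Y, Z, D), Id = [].
plus the recursive translation of PP_1; the body literal Id = [] is explained next.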
It should be noted that, according to the SPARQL semantics of property paths[https://www.w3.org/TR/SPARQL11-query/#defn_PropertyPathExprhttps://www.w3.org/TR/SPARQL11-query/#defn_PropertyPathExpr],
zero-or-one, zero-or-more, and one-or-more property paths always have set semantics.
This is why the Datalog^± rules for these three path expressions
contain a body literal Id = []. By forcing the tuple ID to the same value whenever one of these rules
fires, multiply derived tuples are indistinguishable for our system and will, therefore,
never give rise to duplicates.
§.§ Correctness of our Translation
To ensure the correctness of our translation,
we have applied a two-way strategy – consisting of an extensive empirical evaluation and a formal analysis. For the empirical evaluation, we have run SparqLog as well as Fuseki and Virtuoso on several benchmarks, which provide good coverage of SPARQL 1.1.
The results are summarized in
Section <ref>. In a nutshell, SparqLog and Fuseki turn out to
fully comply with the SPARQL 1.1 standard, while Virtuoso deviates from the standard on quite a few queries.
For the formal analysis, we juxtapose our translation with the formal semantics of the various language constructs of
SPARQL 1.1.
Below we briefly outline our proof strategy:
Following
<cit.>
for SPARQL graph patterns and
<cit.>
for property path expressions, we first of all
provide a formal definition of the semantics of the various SPARQL 1.1
features.[We note that the semantics definitions in all of these sources either only cover a rather small subset of SPARQL 1.1 or contain erroneous definitions. The most complete exposition is given in
<cit.> with some inaccuracies in the treatment of Optional Filter patterns and of zero-length property paths.] Given a SPARQL graph pattern P and a graph D, we write
PD to denote the result of evaluating P over D. The semantics
PPD,s,o
of property path expressions PP is defined in a similar way, but now also taking the top level start and end points s,o of the property path
into account.
Both
PD and
PPD,s,o
are defined inductively on the structure of the expression
P or PP, respectively, with triple patterns
P = (s,p,o) and link property paths PP = p_1 as base cases.
For instance, for a join pattern P_i = (P_1 AND P_2) and an
optional pattern P_j = (P_1 OPT P_2),
the semantics is defined as follows:
P_iD = {{ μ_1 ∪ μ_2 | μ_1 ∈ P_1D and μ_2 ∈ P_2D and μ_1 ∼ μ_2 }}
P_jD = {{ μ | μ ∈ (P_1 AND P_2)D }} ∪
{{ μ_1 | μ_1 ∈ P_1D and for all μ_2 ∈ P_2D: μ_1 ≁ μ_2 }}
Now consider the Datalog^± rules generated by our translation.
For a join pattern, the two body atoms
ans_2i(Id_1, v_1(var(P_1)), D)
and
ans_2i+1(Id_2, v_2(var(P_2)), D)
yield, by induction, the sets of mappings
P_1D and
P_2D.
The variable renamings
v_1 and v_2 make sure that there is no interference
between the evaluation of
P_1D (by the first body atom) and the
evaluation of P_2D (by the second body atom).
The comp-atoms in the body of the rule make sure
that μ_1 and μ_2 are compatible on all common variables. Moreover, they bind the common variables
{x_1, …, x_n} to the correct value according
to the definition of the comp-predicate.
The result of evaluating an optional pattern consists of
two kinds of mappings: (1) the mappings μ in (P_1 P_2)D and
(2) the mappings μ_1 in P_1D which are not
compatible with any mapping
μ_2 in P_2D.
Analogously to join patterns, the second rule generated by our
translation produces the mappings of type (1).
The first rule generated by our translation computes those
mappings in P_1D which are compatible with
some mapping in P_2D.
Hence, the third rule produces the mappings of type (2). Here the negated second body literal
removes all those mappings from P_1D which are
compatible with some mapping in P_2D.
Full details of the semantics definitions PD and PPD,s,o
and of the juxtaposition with the rules generated by our translation
are provided
in Appendix <ref>.
§ EXPERIMENTAL EVALUATION
In this section, we report on the experimental evaluation of the system.
We want to give a general understanding of the behaviour of SparqLog in the following three areas: (1) we first analyse various benchmarks available in the area to identify coverage of SPARQL features and which benchmarks to use subsequently in our evaluation, (2) we analyse the compliance of our system with the SPARQL standard using the identified benchmarks, and set this in context with the two state-of-the-art systems Virtuoso and Fuseki,
and, finally, (3) we evaluate the performance of query execution of and compare it with state-of-the-art systems for SPARQL query answering and reasoning
over ontologies, respectively. We thus put particular emphasis on property paths and their combination with ontological reasoning.
Further details on our experimental evaluation – in particular, how we
set up the analysis of different benchmarks and of the standard-compliance
of various systems – are provided
in Appendix <ref>.
§.§ Benchmark Analysis
In this subsection, we analyse current state-of-the-art
benchmarks for SPARQL engines.
Table <ref> is based on the analysis of <cit.> and represents the result of our exploration of the SPARQL feature coverage of the considered benchmarks; we further adjusted and extended it with additional features. Particularly heavily used SPARQL features are marked in blue, while missing SPARQL features are marked in orange. The abbreviations of the columns represent the following SPARQL
features: DIST[INCT], FILT[ER], REG[EX], OPT[IONAL], UN[ION], GRA[PH],
P[roperty Path] Seq[uential], P[roperty Path] Alt[ernative], GRO[UP BY].
Note that, in Table <ref>,
we do not display explicitly basic features, such as Join, Basic Graph pattern, etc., since these
are of course covered by every benchmark considered here. Moreover, we have not included the SPARQL features MINUS and the inverted, zero-or-one, zero-or-more, one-or-more, and negated property path in
Table <ref>, as none of the selected benchmarks covers any of these SPARQL features.
More details on the setup of our benchmark analysis can be found in Appendix <ref>.
Table <ref> reveals that no
benchmark covers all SPARQL features. Even more, SNB-BI and SNB-INT are the only benchmarks that contain property paths. Yet, they cover merely the sequential (PSeq) and alternative property path (PAlt), which in principle correspond to the JOIN and UNION operator. This means that no existing benchmark covers recursive property paths (though we will talk about the benchmark generator gMark <cit.> later), which are one of the most significant extensions provided by SPARQL 1.1.
Our analysis of SPARQL benchmarks leads us to the following conclusions
for testing the compliance with the SPARQL standard and for planning the performance tests with and state-of-the-art systems.
Evaluating compliance with the SPARQL standard. Based on the results of Table <ref>, we have chosen the following three benchmarks to evaluate the compliance of our system with the SPARQL standard: (1) We have identified FEASIBLE (S) <cit.> as the real-world benchmark of choice, as it produces the most diverse test cases <cit.> and covers the highest amount of features; (2) SP2Bench <cit.> is identified as the synthetic benchmark of choice, since it produces synthetic datasets with
the most realistic characteristics <cit.>; (3) finally, since no benchmark that employs real-world settings provides satisfactory coverage of property paths,
we have additionally chosen BeSEPPI <cit.> – a simplistic, yet very extensive benchmark specifically designed for testing the correct and complete processing of property paths. We report on the results of testing the compliance of our system as well as Fuseki and Virtuoso in Section <ref>.
Performance benchmarking.
For the empirical evaluation of query execution times
reported in Section <ref>,
we have identified SP2Bench as the
most suitable benchmark, as it contains hand-crafted queries that were specifically designed to target
query optimization.
Since none of the existing benchmarks for SPARQL performance measurements
contains recursive property paths, we have included instances generated by the benchmark generator gMark <cit.>, and report extensive results of this important aspect.
In order to include in our tests also the performance measurements for the combination of property paths with
ontologies, we have further extended the SP2Bench with an ontology containing subPropertyOf and subClassOf statements.
§.§ SPARQL Compliance
As discussed in the previous section, we have identified three benchmarks
(FEASIBLE(S), SP2Bench, BeSEPPI)
for the evaluation of the standard compliance of our system and
two state-of-the-art SPARQL engines.
More details on the compliance evaluation as well as some challenges encountered
by this evaluation (such as the comparison of results
in the presence of null nodes) are discussed
in Appendix <ref>.
Below, we summarize the results:
The FEASIBLE(S) benchmark contains 77 queries that we used for
testing the standard-conformant behaviour.
It turned out that both SparqLog and Fuseki fully comply with the standard on each of the 77 queries, whereas Virtuoso does not.
More specifically, for 14 queries, Virtuoso returned an erroneous result by either
wrongly outputting duplicates (e.g., ignoring DISTINCTs) or omitting duplicates
(e.g., by handling UNIONs incorrectly). Moreover, in 18 cases,
Virtuoso was unable to evaluate the query and produced an error.
The SP2Bench benchmark contains 17 queries, specifically designed to test the scalability of SPARQL engines. All 3 considered systems produce the correct result for all 17 queries.
The BeSEPPI benchmark contains 236 queries, specifically designed to evaluate the correct and complete support of property path features. Table <ref> shows the detailed results of the experimental evaluation of the 3 considered systems on this benchmark. We distinguish 4 types of erroneous behaviour: correct but incomplete results (i.e., the mappings returned are correct but there are further correct mappings missing),
complete but incorrect (i.e., no correct mapping is missing but the answer falsely contains additional mappings), incomplete and incorrect, or failing to evaluate the query and returning an error instead.
The entries in the table indicate the number of cases for each of the error types.
We see that Fuseki and SparqLog produce the correct result in all 236 cases.
Virtuoso only handles the queries with inverse, sequence and negated path expressions 100% correctly. For queries containing alternative, zero-or-one, one-or-more, or
zero-or-more path expressions, Virtuoso is not guaranteed to produce the
correct result. The precise number of queries handled erroneously is
shown in the cells marked red.
To conclude, while and Fuseki handle all considered queries from the 3 chosen benchmarks correctly, Virtuoso produces a significant number of errors.
§.§ Performance Measurements
Experimental Setup
Our benchmarks were executed on a system running openSUSE Leap 15.2 with dual Intel(R) Xeon(R) Silver 4314 16 core CPUs, clocked at 3.4 GHz, with 512GB RAM of which 256GB reserved for the system under test, and 256GB for the operating system.
For each system under test, we set a time-out of 900s. We start each benchmark by repeating the same warm-up queries 5 times and by loading and deleting the graph instance 5 times. Furthermore, we did 5 repetitions of each query (each time deleting and reloading the dataset).
For our experiments we use Apache Jena Fuseki 3.15.0, Virtuoso Open Source Edition 7.2.5, and Stardog 7.7.1. Vadalog loads and queries the database simultaneously.
Hence, to perform a fair comparison
with competing systems, we compare their total loading and querying time to the total time that SparqLog needs to answer the query. Since loading includes index building and many more activities, we delete and reload the database each time we run a query (independent of warm-up or benchmark queries).
Performance on general SPARQL queries
SP2Bench is a benchmark that particularly targets query optimization and computation-intensive queries.
We have visualized the result in Figure <ref> and found that SparqLog reaches highly competitive performance with Virtuoso and significantly outperforms Fuseki on most queries.
gMark. Since current SPARQL benchmarks provide only rudimentary coverage of property path expressions, we have evaluated SparqLog, Fuseki, and Virtuoso using the gMark benchmark generator <cit.>, a domain- and language-independent graph instance and query workload generator which specifically focuses on path queries, i.e., queries over property paths. We have evaluated SparqLog's, Fuseki's, and Virtuoso's path query performance on the test[<https://github.com/gbagan/gMark/tree/master/demo/test>] and social[<https://github.com/gbagan/gMark/tree/master/demo/social>] demo scenarios. Each of these two demo scenarios provides 50 SPARQL queries and a graph instance.
Appendix <ref> provides further details on the benchmarks that we used for evaluating a system's query execution time and on the experimental results that we obtained; full details on the results can be found in Section <ref>.
In the following, we compare the results of the three systems on gMark:
Virtuoso could not (correctly) answer 48 of the in total 100 queries of the gMark Social and Test benchmarks. It thus failed on almost half of the queries provided by the two gMark benchmarks, which empirically reveals its dramatic limitations in answering complex property path queries. In 20 of these 48 cases, Virtuoso returned an incomplete result: in 3 of them it missed only a single tuple in the returned result multiset, while in the remaining 17 it produced either the result tuple null or an empty result multiset instead of the correct non-null/non-empty one. In the other 28 cases, Virtuoso failed due to a time-out, a memory-out, or its lack of support for property paths with two variables. This exemplifies severe problems with handling property path queries.
Fuseki suffered a time-out (i.e., took longer than 900s) on 37 of the in total 100 queries of the gMark Social and Test benchmarks. It thus timed out on more than a third of the gMark queries, which empirically reveals its significant limitations in answering complex property path queries.
SparqLog managed to answer 98 of gMark's (in total 100) queries within less than 200s and timed out on only 2 queries.
The results on the gMark Social benchmark are shown in Figure <ref>; the results on the gMark Test benchmark are given in Figure <ref> in the appendix.
These results reveal the strong ability of our system in answering queries that contain complex property paths. Furthermore, each time when both Fuseki and returned a result, the results were equal, even further empirically confirming the correctness of our system (i.e., that our system follows the SPARQL standard).
In conclusion, these three benchmarks show that SparqLog (1) is highly competitive with Virtuoso on regular queries with respect to query execution time,
(2) follows the SPARQL standard much more accurately than Virtuoso and supports more property path queries than Virtuoso,
and (3) dramatically outperforms Fuseki on query execution, while keeping its ability to follow the SPARQL standard accurately.
Ontological reasoning
One of the main advantages of our system is that it provides
a uniform and consistent framework for reasoning and querying Knowledge Graphs.
We therefore wanted to measure the performance of query answering in the
presence of an ontology. Since Fuseki and Virtuoso do not provide such support, we compare SparqLog with Stardog, which is a commonly accepted state-of-the-art system for reasoning and querying within the Semantic Web.
Furthermore, we have created a benchmark based on SP2Bench's dataset that contains property path queries and ontological concepts such as subPropertyOf and subClassOf and provide this benchmark in the supplementary material.
Figure <ref> in Appendix <ref> shows the outcome of these experiments.
In summary, we note that SparqLog is faster than Stardog on most queries. Particularly interesting are queries 4 and 5, which
contain recursive property path queries with two variables. Our engine needs only about a fifth of Stardog's execution time on query 4 and it can even answer query 5, on which Stardog times out (using a timeout of 900s). On the other queries, Stardog and SparqLog perform similarly.
To conclude, our new system does not only follow the SPARQL standard, but it also shows good performance.
Even though SparqLog is a full-fledged, general-purpose Knowledge Graph management system and neither a specialized SPARQL engine nor a specialized ontological reasoner, it is highly competitive with state-of-the-art SPARQL engines and reasoners and even outperforms them on answering property path queries and particularly hard cases.
§ CONCLUSION
In this work we have taken a step towards bringing SPARQL-based systems and Datalog^±-based systems closer together. In particular, we have provided
(i) a uniform and fairly complete theoretical translation of SPARQL into Warded Datalog^±, (ii) a practical translation engine that covers most of the SPARQL 1.1 functionality, and (iii) an extensive experimental evaluation.
We note that the contribution of the engine SparqLog we provided in this paper can be seen in two ways: (1) as a stand-alone translation engine for SPARQL into Warded Datalog^±, and (2) as a full Knowledge Graph engine by using our translation engine together with the Vadalog system.
However, our work does not stop here.
As next steps, we of course envisage 100% (or close to 100%) SPARQL coverage. Possibly more interestingly from a scientific point of view, we plan to expand on the finding that query plan optimization has a huge effect on performance, and to investigate SPARQL-specific query plan optimization in a unified SPARQL-Datalog^± system.
Finally, we note that work on a unified benchmark covering all or close to
all of the SPARQL 1.1 features would be desirable. As observed in
Section <ref>, no such benchmark currently exists.
This work has been funded by the Vienna Science and
Technology Fund (WWTF) [10.47379/VRG18013, 10.47379/NXT22018,
10.47379/ICT2201];
Renzo Angles was supported by ANID FONDECYT Chile through grant 1221727.
Georg Gottlob is a Royal Society Research Professor and acknowledges support by the Royal Society in this role through the “RAISON DATA” project (Reference No. RP\R1\201074).
Appendix
§ TRANSLATING SPARQL 1.1 TO WARDED DATALOG^±
In this section, we provide a detailed description of our translation
from SPARQL 1.1 to Warded Datalog^±.
Note that many of the rules thus generated are simple Datalog rules, i.e., they do not have existentially quantified variables in the head. In such cases, we shall interchangeably
refer to these rules as “Datalog rules” or
“Datalog^± rules”. Of course, if existentially quantified variables are indeed used in the head, we shall always speak of
“Datalog^± rules”.
We start with the translation of RDF graphs to Datalog rules in Section <ref>.
We then detail our translation of graph patterns
and the specific translation rules for property path
expressions in Sections <ref> and
<ref>. Finally, in Section <ref>,
we consider query forms.
§.§ Translation of RDF Graphs
Assume that I, L and B are disjoint infinite sets corresponding to IRIs, literals and blank nodes.
An RDF term is an element in the set T = I ∪ L ∪ B.
An RDF triple is a tuple (s,p,o) ∈ T × I × T where s is called the subject, p is the predicate and o is the object.
An RDF graph G is a set of RDF triples.
An RDF dataset D is a collection of graphs including a default graph G_0 and zero or more named graphs, such that a named graph is a pair (u,G) where u is an IRI which identifies the RDF graph G.
Let G be a given RDF graph, the translation of the graph to Datalog facts is defined as follows:
* For each IRI, constant and blank node in G the corresponding facts iri(X), literal(X) and bnode(X) are generated.
* For each named graph g, a fact named(g) is generated, and for each triple (s,p,o) of a graph g, a fact triple(s,p,o,g) is created, where g is either "default" for the default graph or the IRI of a named graph.
* A term is either an IRI, a literal or a blank node:
The set of terms is represented by the predicate term, which is defined as follows:
term(X) :- iri(X).
term(X) :- literal(X).
term(X) :- bnode(X).
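For instance, under this translation a (hypothetical) RDF graph whose default graph contains the single triple (:alice, foaf:knows, :bob) would be represented, roughly, by the facts
iri(alice). iri(knows). iri(bob).
triple(alice, knows, bob, "default").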
§.§ Translation of SPARQL Graph Patterns
Assume the existence of an infinite set V of variables disjoint from T. We will use (α) to denote the set of variables occurring in any structure α.
A graph pattern is defined recursively as follows:
a tuple from (T ∪ V) × (I ∪ V) × (T ∪ V) is a triple pattern;
if P_1 and P_2 are graph patterns, C is a filter constraint, and g ∈ I then
{ P_1 AND P_2 },
{ P_1 UNION P_2 },
{ P_1 OPT P_2 },
{ P_1 MINUS P_2 },
{ P_1 FILTER C }, and
{ GRAPH g P_1 } are graph patterns.
A filter constraint is defined recursively as follows:
(i) If ?X,?Y ∈ V, c ∈ I ∪ L and r is a regular expression then ?X = c, ?X = ?Y, bound(?X), isIRI(?X), isBlank(?X), isLiteral(?X) and regex(?X,r) are atomic filter constraints;
(ii) If C_1 and C_2 are filter constraints then
(!C_1), (C_1 && C_2) and (C_1 || C_2)
are Boolean filter constraints.
A subpattern P' of a graph pattern P is defined to be any substring of P that is also a graph pattern. Furthermore P' is defined to be an immediate subpattern of P if it is a subpattern of P and if there is no other subpattern of P, different from P, that contains P'.
A parse tree is specified as a tree <V, E> with the set of nodes V being the subpatterns of a graph pattern P and the set of edges E containing an edge (P_1, P_2) if P_2 is an immediate subpattern of P_1.
The evaluation of a graph pattern results in a multiset of solution mappings.
A solution mapping is a partial function μ : V → T, i.e. an assignment of variables to RDF terms.
The domain of μ, denoted (μ), is the subset of V where μ is defined.
The empty mapping, denoted μ_0, is the mapping satisfying that (μ_0)= ∅.
A multiset of solution mappings Ω is an unsorted list of solution mappings where duplicates are allowed.
The domain of Ω is the set of variables occurring in the solution mappings of Ω.
The notion of compatible mappings is fundamental to evaluating SPARQL graph patterns.
Two mappings μ_1 and μ_2 are compatible, denoted μ_1 ∼μ_2, when for all ?X ∈(μ_1) ∩(μ_2) it holds that μ_1(?X)=μ_2(?X).
Note that two mappings with disjoint domains are always compatible, and the empty mapping μ_0 is compatible with any other mapping.
The notion of compatible mappings is simulated in Datalog by using the following rules:
𝑛𝑢𝑙𝑙("𝑛𝑢𝑙𝑙").
comp(X,X,X) :- term(X).
comp(X,𝑍,X) :- term(X), 𝑛𝑢𝑙𝑙(𝑍).
comp(𝑍,X,X) :- term(X), 𝑛𝑢𝑙𝑙(𝑍).
comp(𝑍,𝑍,𝑍) :- 𝑛𝑢𝑙𝑙(𝑍).
The predicate comp(X_1,X_2,X_3) describes the compatibility of the values X_1 and X_2. The third position X_3 represents the value, that would be used in the result tuple of a join operation.
A given SPARQL graph pattern is translated into
Warded Datalog^±
by recursively walking through the parse tree of the query and translating each subpattern into its respective Datalog^± rules (as outlined in <cit.>).
Subpatterns of the parse tree are indexed. The root has index 1, the left child of the i-th node has index 2 * i, the right child has index 2 * i + 1. During the translation, bindings of the i-th subpattern are represented by the predicate ans_i.
Since the order of variables in predicates is relevant, some variable sets will need to be lexicographically ordered, which is denoted by x following the notation of <cit.>.
var(P) shall represent the lexicographically ordered tuple of variables of P.
Moreover a renaming function v_j: V → V is defined.
For a given graph pattern P and a given dataset
D = ⟨ G, G_named⟩ (where G is the default graph and G_named is the set of named graphs) the
core translation function τ(P, dst, D, NodeIndex) is defined as follows:
* P is the graph pattern that should be translated next.
* dst is a Boolean value that describes whether the results should have set semantics (dst = True) or bag semantics (dst = False).
* D is the graph on which the query should be evaluated.
* NodeIndex is the index of the pattern P that should be translated.
In Definitions <ref> to
<ref> below, we present the translation function
τ
for the
various language constructs of graph patterns in SPARQL 1.1.
Let P_i be the i-th subpattern of P and furthermore let P_i be a triple pattern (s, p, o), then τ(P_i, 𝑡𝑟𝑢𝑒, D, i) is defined as:
ans_i(var(P_i), D) :- triple(s, p, o, D).
And τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) is defined as:
ans_i(Id, var(P_i), D) :- triple(s, p, o, D).
Let P_i be the i-th subpattern of P and furthermore let P_i be (GRAPH g P_1), then τ(P_i, 𝑡𝑟𝑢𝑒, D, i) is defined as:
ans_i(var(P_i), D) :- ans_2i(var(P_1), g),
named(g).
τ(P_1, 𝑡𝑟𝑢𝑒, g, 2i).
And τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) is defined as:
ans_i(Id, var(P_i), D) :-
ans_2i(Id_1, var(P_1), g),
named(g).
τ(P_1, 𝑓𝑎𝑙𝑠𝑒, g, 2i)
Let P_i be the i-th subpattern of P and furthermore let P_i be (P_1 AND P_2), then τ(P_i, 𝑡𝑟𝑢𝑒, D, i) is defined as:
ans_i(x, D) :-
ans_2i(v_1(var(P_1)), D),
ans_2i+1(v_2(var(P_2)), D),
comp(v_1(x_1), v_2(x_1), x_1),
… ,
comp(v_1(x_n), v_2(x_n), x_n).
τ(P_1, 𝑡𝑟𝑢𝑒, D, 2i)
τ(P_2, 𝑡𝑟𝑢𝑒, D, 2i + 1)
And τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) is defined as:
ans_i(Id, x, D) :-
ans_2i(Id_1, v_1(var(P_1)), D),
ans_2i+1(Id_2, v_2(var(P_2)), D),
comp(v_1(x_1), v_2(x_1), x_1),
… ,
comp(v_1(x_n), v_2(x_n), x_n).
τ(P_1, 𝑓𝑎𝑙𝑠𝑒, D, 2i)
τ(P_2, 𝑓𝑎𝑙𝑠𝑒, D, 2i + 1)
with
* x = var(P_1) ∪ var(P_2)
* {x_1, …, x_n} = var(P_1) ∩ var(P_2)
* v_1, v_2: var(P_1) ∩ var(P_2) → V, such that
Image(v_1) ∩ Image(v_2) = {}
Let P_i be the i-th subpattern of P and furthermore let P_i be (P_1 UNION P_2), then τ(P_i, 𝑡𝑟𝑢𝑒, D, i) is defined as:
ans_i(var(P_i), D) :- ans_2i(var(P_1), D),
𝑛𝑢𝑙𝑙(x_1), …𝑛𝑢𝑙𝑙(x_n).
ans_i(var(P_i), D) :- ans_2i+1(var(P_2), D),
𝑛𝑢𝑙𝑙(y_1), …𝑛𝑢𝑙𝑙(y_m).
τ(P_1, 𝑡𝑟𝑢𝑒, D, 2i)
τ(P_2, 𝑡𝑟𝑢𝑒, D, 2i + 1)
And τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) is defined as:
ans_i(Id, var(P_i), D) :- ans_2i(Id_1, var(P_1), D),
𝑛𝑢𝑙𝑙(x_1), …𝑛𝑢𝑙𝑙(x_n).
ans_i(Id, var(P_i), D) :- ans_2i+1(Id_2, var(P_2), D),
𝑛𝑢𝑙𝑙(y_1), …𝑛𝑢𝑙𝑙(y_m).
τ(P_1, 𝑓𝑎𝑙𝑠𝑒, D, 2i)
τ(P_2, 𝑓𝑎𝑙𝑠𝑒, D, 2i + 1)
with
* {x_1, …, x_n} = var(P_2) ∖ var(P_1)
* {y_1, …, y_m} = var(P_1) ∖ var(P_2)
Let P_i be the i-th subpattern of P and furthermore let P_i be (P_1 OPT P_2), then τ(P_i, 𝑡𝑟𝑢𝑒, D, i) is defined as:
ans_opt-i(var(P_1), D) :- ans_2i(var(P_1), D),
ans_2i+1(v_2(var(P_2)), D),
comp(x_1, v_2(x_1), z_1),
… , comp(x_n, v_2(x_n), z_n).
ans_i(var(P_i), D) :- ans_2i(v_1(var(P_1)), D),
ans_2i+1(v_2(var(P_2)), D),
comp(v_1(x_1), v_2(x_1), x_1),
… , comp(v_1(x_n), v_2(x_n), x_n).
ans_i(var(P_i), D) :- ans_2i(var(P_1), D),
not ans_opt-i(var(P_1), D),
𝑛𝑢𝑙𝑙(y_1), …, 𝑛𝑢𝑙𝑙(y_m).
τ(P_1, 𝑡𝑟𝑢𝑒, D, 2i)
τ(P_2, 𝑡𝑟𝑢𝑒, D, 2i + 1)
And τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) is defined as:
ans_opt-i(var(P_1), D) :-
ans_2i(Id_1, var(P_1), D),
ans_2i+1(Id_2, v_2(var(P_2)), D),
comp(x_1, v_2(x_1), z_1),
… , comp(x_n, v_2(x_n), z_n).
ans_i(Id, var(P_i), D) :- ans_2i(Id_1, v_1(var(P_1)), D),
ans_2i+1(Id_2, v_2(var(P_2)), D),
comp(v_1(x_1), v_2(x_1), x_1),
… , comp(v_1(x_n), v_2(x_n), x_n).
ans_i(Id, var(P_i), D) :- ans_2i(Id_1, var(P_1), D),
not ans_opt-i(var(P_1), D),
𝑛𝑢𝑙𝑙(y_1), …, 𝑛𝑢𝑙𝑙(y_m).
τ(P_1, 𝑓𝑎𝑙𝑠𝑒, D, 2i)
τ(P_2, 𝑓𝑎𝑙𝑠𝑒, D, 2i + 1)
with
* var(P_i) = var(P_1) ∪ var(P_2)
* {x_1, …, x_n} = var(P_1) ∩ var(P_2)
* {y_1, …, y_m} = var(P_2) ∖ var(P_1)
* v_1, v_2: var(P_1) ∩ var(P_2) → V, such that
Image(v_1) ∩ Image(v_2) = {}
Let P_i be the i-th subpattern of P and furthermore let P_i be (P_1 FILTER C), then τ(P_i, 𝑡𝑟𝑢𝑒, D, i) is defined as:
ans_i(var(P_i), D) :- ans_2i(var(P_1), D), C.
τ(P_1, 𝑡𝑟𝑢𝑒, D, 2i)
And τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) is defined as:
ans_i(Id, var(P_i), D) :- ans_2i(Id_1, var(P_1), D), C.
τ(P_1, 𝑓𝑎𝑙𝑠𝑒, D, 2i)
Let P_i be the i-th subpattern of P and furthermore let P_i be (P_1 OPT (P_2 FILTER C)), then τ(P_i, 𝑡𝑟𝑢𝑒, D, i) is defined as:
ans_opt-i(var(P_1), D) :- ans_2i(var(P_1), D),
ans_2i+1(v_2(var(P_2)), D),
comp(x_1, v_2(x_1), z_1),
… , comp(x_n, v_2(x_n), z_n), C.
ans_i(var(P_i), D) :- ans_2i(v_1(var(P_1)), D),
ans_2i+1(v_2(var(P_2)), D),
comp(v_1(x_1), v_2(x_1), x_1),
… , comp(v_1(x_n), v_2(x_n), x_n), C.
ans_i(var(P_i), D) :- ans_2i(var(P_1), D),
not ans_opt-i(var(P_1), D),
𝑛𝑢𝑙𝑙(y_1), …, 𝑛𝑢𝑙𝑙(y_m).
τ(P_1, 𝑡𝑟𝑢𝑒, D, 2i)
τ(P_2, 𝑡𝑟𝑢𝑒, D, 2i + 1)
And τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) is defined as:
ans_opt-i(var(P_1), D) :- ans_2i(Id_1, var(P_1), D),
ans_2i+1(Id_2, v_2(var(P_2)), D),
comp(x_1, v_2(x_1), z_1),
… , comp(x_n, v_2(x_n), z_n), C.
ans_i(Id, var(P_i), D) :- ans_2i(Id_1, v_1(var(P_1)), D),
ans_2i+1(Id_2, v_2(var(P_2)), D),
comp(v_1(x_1), v_2(x_1), x_1),
… , comp(v_1(x_n), v_2(x_n), x_n), C.
ans_i(Id, var(P_i), D) :- ans_2i(Id_1, var(P_1), D),
not ans_opt-i(var(P_1), D),
𝑛𝑢𝑙𝑙(y_1), …, 𝑛𝑢𝑙𝑙(y_m).
τ(P_1, 𝑓𝑎𝑙𝑠𝑒, D, 2i)
τ(P_2, 𝑓𝑎𝑙𝑠𝑒, D, 2i + 1)
with
* var(P_i) = var(P_1) ∪ var(P_2)
* {x_1, …, x_n} = var(P_1) ∩ var(P_2)
* {y_1, …, y_m} = var(P_2) ∖ var(P_1)
* v_1, v_2: var(P_1) ∩ var(P_2) → V, such that
Image(v_1) ∩ Image(v_2) = {}
Let P_i be the i-th subpattern of P and furthermore let P_i be (P_1 MINUS P_2), then τ(P_i, 𝑡𝑟𝑢𝑒, D, i) is defined as:
ans_join-i(x, D) :-
ans_2i(var(P_1), D),
ans_2i+1(v_2(var(P_2)), D),
comp(x_1, v_2(x_1), z_1),
… ,
comp(x_n, v_2(x_n), z_n).
ans_equal-i(var(P_1), D) :- ans_join-i(x, D),
x_1 = v_2(x_1), not null(x_1).
…
ans_equal-i(var(P_1), D) :- ans_join-i(x, D),
x_n = v_2(x_n), not null(x_n).
ans_i(var(P_1), D) :-
ans_2i(var(P_1), D),
not ans_equal-i(var(P_1), D).
τ(P_1, 𝑡𝑟𝑢𝑒, D, 2i)
τ(P_2, 𝑡𝑟𝑢𝑒, D, 2i + 1)
And τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) is defined as:
ans_join-i(x, D) :-
ans_2i(Id_1, var(P_1), D),
ans_2i+1(Id_2, v_2(var(P_2)), D),
comp(x_1, v_2(x_1), z_1),
… ,
comp(x_n, v_2(x_n), z_n).
ans_equal-i(var(P_1), D) :- ans_join-i(x, D),
x_1 = v_2(x_1), not null(x_1).
…
ans_equal-i(var(P_1), D) :- ans_join-i(x, D),
x_n = v_2(x_n), not null(x_n).
ans_i(Id, var(P_1), D) :-
ans_2i(Id_1, var(P_1), D),
not ans_equal-i(var(P_1), D).
τ(P_1, 𝑓𝑎𝑙𝑠𝑒, D, 2i)
τ(P_2, 𝑓𝑎𝑙𝑠𝑒, D, 2i + 1)
with
* x = var(P_1) ∪ v_2(var(P_2))
* {x_1, …, x_n} = var(P_1) ∩ var(P_2)
* v_2: var(P_1) ∩ var(P_2) → V ∖ var(P_1)
Property path patterns are given in the form
S P_1 O, where P_1 is a property path expression.
Due to the complex semantics of property paths, we have introduced a separate translation function τ_PP for property path expressions, which we will take a closer look at in
Section <ref>.
Let P_i be the i-th subpattern of P and furthermore let P_i = S P_1 O be a property path pattern, then τ(P_i, 𝑡𝑟𝑢𝑒, D, i) is defined as:
ans_i(var(P_i), D) :- ans_2i(S, O, D).
τ_PP(P_1, 𝑡𝑟𝑢𝑒, S, O, D, 2i)
And τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) is defined as:
ans_i(Id, var(P_i), D) :- ans_2i(Id_1, S, O, D).
τ_PP(P_1, 𝑓𝑎𝑙𝑠𝑒, S, O, D, 2i)
with τ_PP being the translation function for property path expressions
defined next.
§.§ Translation of Property Path Expressions
A property path expression (or, simply, a path expression)
is defined recursively as follows:
* if a ∈ I, then a is a link path expression;
* If PP_1 and PP_2 are property path expressions then
* ^PP_1 is an inverse path expression;
* PP_1 | PP_2 is an alternative path expression;
* PP_1 / PP_2 is a sequence path expression;
* PP_1? is a zero-or-one path expression;
* PP_1+ is a one-or-more path expression;
* PP_1* is a zero-or-more path expression;
* if 𝒫 is a set of link path expressions p_i and
inverse link path expressions p_j, then
!𝒫 is a negated path expression.
A property path pattern is a tuple t of the form (u,p,v), where u,v ∈ (I ∪ V) and p is a property path expression.
Analogously to our translation function
τ(P, dst, D, NodeIndex) for SPARQL patterns, we
define a translation function
τ_PP(PP, dst, S, O, D, NodeIndex) for property path expressions, where
S,O, are the subject and object of the top-level property path expression that have to be
kept track of during the entire evaluation.
In the following we will only state the translations for property path expressions
under bag semantics (i.e. dst = false), since for set semantics the IDs are simply left out
or set to a constant value (e.g. Id = []).
Let PP_i be the i-th subexpression of a property path expression PP and furthermore let PP_i = p_1 be a link property path expression.
Then τ_PP(PP_i, 𝑓𝑎𝑙𝑠𝑒, S, O, D, i) is defined as:
ans_i(Id, X, Y, D) :- triple(X, p1, Y, D).
Let PP_i be the i-th subexpression of a property path expression PP and furthermore let PP_i = ^PP_1 be an inverse property path expression.
Then τ_PP(PP_i, 𝑓𝑎𝑙𝑠𝑒, S, O, D, i) is defined as:
ans_i(Id, X, Y, D) :- ans_2i(Id_1, Y, X, D).
τ_PP(PP_1, 𝑓𝑎𝑙𝑠𝑒, S, O, D, 2i)
Let PP_i be the i-th subexpression of a property path expression PP and furthermore let PP_i = PP_1 | PP_2 be a alternative property path expression.
Then τ_PP(PP_i, 𝑓𝑎𝑙𝑠𝑒, S, O, D, i) is defined as:
ans_i(Id, X, Y, D) :- ans_2i(Id_1, X, Y, D).
ans_i(Id, X, Y, D) :- ans_2i+1(Id_1, X, Y, D).
τ_PP(PP_1, 𝑓𝑎𝑙𝑠𝑒, S, O, D, 2i)
τ_PP(PP_2, 𝑓𝑎𝑙𝑠𝑒, S, O, D, 2i+1)
Let PP_i be the i-th subexpression of a property path expression PP and furthermore let PP_i = PP_1/PP_2 be a sequence property path expression.
Then τ_PP(PP_i, 𝑓𝑎𝑙𝑠𝑒, S, O, D, i) is defined as:
ans_i(Id, X, Z, D) :- ans_2i(Id_1, X, Y, D),
ans_2i+1(Id_2, Y, Z, D).
τ_PP(PP_1, 𝑓𝑎𝑙𝑠𝑒, S, O, D, 2i)
τ_PP(PP_2, 𝑓𝑎𝑙𝑠𝑒, S, O, D, 2i+1)
Let PP_i be the i-th subexpression of a property path expression PP and furthermore
let PP_i = PP_1+ be a one-or-more property path expression.
Then τ_PP(PP_i, 𝑓𝑎𝑙𝑠𝑒, S, O, D, i) is defined as:
ans_i(Id, X, Y, D) :- ans_2i(Id_1, X, Y, D), Id = [].
ans_i(Id, X, Z, D) :- ans_2i(Id_1, X, Y, D),
ans_i(Id_2, Y, Z, D), Id = [].
τ_PP(PP_1, 𝑓𝑎𝑙𝑠𝑒, S, O, D, 2i)
The subjectOrObject predicate defines intuitively the set of all possible subjects and objects
occurring in a graph, i.e.:
subjectOrObject(X) :- triple(X, P, Y, D).
subjectOrObject(Y) :- triple(X, P, Y, D).
Let PP_i be the i-th subexpression of a property path expression PP and furthermore let PP_i = PP_1? be a
zero-or-one property path expression.
Then τ_PP(PP_i, 𝑓𝑎𝑙𝑠𝑒, S, O, D, i) consists of the following rules:
ans_i(Id, X, X, D) :- subjectOrObject(X), Id = [].
ans_i(Id, X, Y, D) :- ans_2i(Id_1, X, Y, D), Id = [].
τ_PP(PP_1, 𝑓𝑎𝑙𝑠𝑒, S, O, D, 2i)
Moreover, if either one of S and O is a variable and the other is a non-variable t
or both S and O are the same non-variable t, then the following rule is added:
ans_i(Id, X, X, D) :- not Term(X), X = t, Id = [].
It should be noted that, according to the SPARQL semantics of property paths[https://www.w3.org/TR/SPARQL11-query/#defn_PropertyPathExprhttps://www.w3.org/TR/SPARQL11-query/#defn_PropertyPathExpr],
zero-or-one, zero-or-more, and one-or-more property paths always have set semantics.
This is why the Datalog^± rules for these three path expressions
contain a body literal Id = []. By forcing the tuple ID to the same value whenever one of these rules
fires, multiply derived tuples are indistinguishable for our system and will, therefore,
never give rise to duplicates.
Let PP_i be the i-th subexpression of a property path expression PP and furthermore let PP_i = PP_1* be a
zero-or-more property path expression.
Then τ_PP(PP_i, 𝑓𝑎𝑙𝑠𝑒, S, O, D, i) consists of the following rules:
ans_i(Id, X, X, D) :- subjectOrObject(X), Id = [].
ans_i(Id, X, Y, D) :- ans_2i(Id_1, X, Y, D), Id = [].
ans_i(Id, X, Z, D) :- ans_2i(Id_1, X, Y, D),
ans_i(Id_2, Y, Z, D), Id = [].
τ_PP(PP_1, 𝑓𝑎𝑙𝑠𝑒, S, O, D, 2i)
Moreover, if either one of S and O is a variable and the other is a non-variable t
or both S and O are the same non-variable t, then the following rule is added:
ans_i(Id, X, X, D) :- not Term(X), X = t, Id = [].
Essentially, the zero-or-more property path is a combination of the zero-or-one and one-or-more property path.
Let PP_i be the i-th subexpression of a property path expression PP and furthermore let PP_i = !(𝒫) be a negated property path expression.
Then τ_PP(PP_i, 𝑓𝑎𝑙𝑠𝑒, S, O, D, i) is defined as:
ans_i(Id, X, Y, D) :- triple(X, P, Y, D), P != p_f_1, …, P != p_f_n.
ans_i(Id, Y, X, D) :- triple(X, P, Y, D), P != p_b_1, …, P != p_b_m.
with
* p_f_1, …, p_f_n∈{ p | p ∈𝒫} ... i.e. the set of negated forward predicates.
* p_b_1, …, p_b_m∈{ p | p ∈𝒫} ... i.e. the set of negated backward predicates.
§.§ Translation of Query Forms
Let P_1 be a graph pattern and W be a set of variables.
We consider two types of query forms:
(SELECT W P_1) and
(ASK P_1). Their translation is given below.
Let P_i be the i-th subpattern of P and furthermore let P_i be (SELECT W P_1), then τ(P_i, 𝑡𝑟𝑢𝑒, D, i) is defined as:
ans_i(var(W), D) :- ans_2i(var(P_1), D).
τ(P_1, 𝑡𝑟𝑢𝑒, D, 2i)
And τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) is defined as:
ans_i(Id, var(W), D) :- ans_2i(Id_1, var(P_1), D).
τ(P_1, 𝑓𝑎𝑙𝑠𝑒, D, 2i)
Let P_i be the i-th subpattern of P and furthermore let P_i be (ASK P_1), then τ(P_i, 𝑡𝑟𝑢𝑒, D, i) is defined as:
ans_i(HasResult) :- ans_ask_i(HasResult).
ans_i(HasResult) :- not ans_ask_i(True), HasResult = false.
ans_ask_i(HasResult) :- ans_2i(var(P_1), D), HasResult = true.
τ(P_1, 𝑡𝑟𝑢𝑒, D, 2i)
And τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) is defined as:
ans_i(HasResult) :- ans_ask_i(HasResult).
ans_i(HasResult) :- not ans_ask_i(True), HasResult = false.
ans_ask_i(HasResult) :- ans_2i(Id_1, var(P_1), D), HasResult = true.
τ(P_1, 𝑓𝑎𝑙𝑠𝑒, D, 2i)
§ CORRECTNESS OF THE TRANSLATION
As was mentioned in Section <ref>, we have applied a
two-way strategy for ensuring the correctness of our translation from
SPARQL 1.1 to Warded Datalog^± by carrying out an extensive empirical
evaluation and a formal analysis.
For the empirical evaluation, we have run
our system as well as Fuseki and Virtuoso on several benchmarks which provide a good coverage of SPARQL 1.1.
The results of our empirical evaluation are
summarized in
Section <ref>. They
give strong evidence for the correctness of the translation
applied by SparqLog.
To provide yet further evidence,
we will now formally examine the Warded Datalog^±
rules produced by our translation for the various SPARQL language constructs and compare them with the formal semantics of these
language constructs.
As was mentioned in Section <ref>,
SparqLog includes a translation engine with three methods, namely
a data translation method, a
query translation method, and a
solution translation method.
The data translation is
very straightforward. In particular, the IRIs, literals and blank nodes as well as the triples in an RDF graph are presented as
Datalog ground facts in the obvious way.
Recall from
Table <ref> that, as far as query forms are concerned,
we currently only support SELECT (which is by far the most common one) and ASK.
The former allows one to define a projection to some of the variables in the graph pattern while the latter just asks if at least
some mapping satisfying the graph pattern exists.
In the case of SELECT, the query can further be extended by a DISTINCT, ORDER BY, LIMIT, or OFFSET solution modifier.
The two supported query forms (with these possible extensions)
are straightforward to handle
and are taken care of by the solution translation method of SparqLog.
In the sequel, we therefore restrict our discussion to the query translation method.
As in Section <ref>, we
treat the basic translation rules and the translation of property paths in separate subsections.
§.§ Basic Translation Rules
First we recall some basic principles
of defining a formal semantics of SPARQL
<cit.>.
At the heart of evaluating a SPARQL query is the
evaluation of the graph pattern (GP) given in the WHERE clause
of the query. This evaluation is relative to the active graph
D, which is initially the default graph (obtained by merging the
graphs given in the FROM clause of the query) and which can
be switched to some named graph (given by an IRI in a FROM NAMED clause of the query) via the GRAPH construct.
We write (u) to denote the graph with name u and we write
names to denote the set of all names of named graphs according to the FROM NAMED clauses.
The result of evaluating a graph pattern P relative to some graph D,
denoted by PD,
is a multiset of partial mappings μ V → T
(simply referred to as “mappings” henceforth),
where
V is the set of variables and T is the set of terms
(i.e., the union of IRIs, blank nodes and literals).
It is convenient to allow also the constant
“null” as function value to indicate by μ(?X) = “null” that
μ is undefined on variable ?X. The domain of μ, denoted (μ),
is defined as the set of variables on which μ is defined.
Mappings are applied to triple patterns in the obvious way, i.e.,
let t = (s, p, o) be a triple pattern and let (t) denote the
variables in t. For a mapping μ with (t) ⊆(μ),
we write μ(t) to denote the triple obtained by replacing
each variable ?X ∈(t) by μ(?X).
Compatibility.
An important property when combining or comparing two mappings is compatibility. Two mappings μ_1,μ_2
are compatible, denoted μ_1∼μ_2, if
μ_1(?X)=μ_2(?X) holds
for all ?X ∈(μ_1) ∩(μ_2). In this case,
the mapping μ = μ_1 ∪μ_2 with
μ(?X) = μ_1(?X) if ?X ∈(μ_1) and
μ(?X) = μ_2(?X) if ?X ∈(μ_2)
is well-defined.
In Section <ref>, we have also defined the compatibility of two individual terms
or nulls v_1,v_2, namely: v_1 and v_2 are compatible if they are equal (i.e., either the same term or both “null”) or if one of them
is “null”. Clearly, two partial mappings
μ_1,μ_2
are compatible if and only if
μ_1(?X) and μ_2(?X) are compatible for every variable ?X ∈(μ_1) ∩(μ_2).
If this is the case, then μ = μ_1 ∪μ_2 is obtained as follows:
for every variable ?X, (1) if μ_1(?X)=μ_2(?X) (where
μ_1(?X) and μ_2(?X) are either the same term or they are both “null”), then
μ(?X) = μ_1(?X)=μ_2(?X); and (2) if one of μ_1(?X),μ_2(?X) is a term and the other is
“null”, then μ(?X) is set equal to the term.
We observe that the auxiliary predicate
comp(X_1,X_2,X_3)
defined in Section <ref> realises precisely the compatibility check
between two values μ_1(?X) and μ_2(?X) (in the first two components of comp) and
yields μ(?X) in the third component.
Operations on multisets of mappings.
We consider the following operations between two sets of mappings Ω_1,Ω_2:
Ω_1 ⋈ Ω_2 = {{ μ_1 ∪ μ_2 | μ_1 ∈ Ω_1, μ_2 ∈ Ω_2 and μ_1 ∼ μ_2 }}
Ω_1 ∪ Ω_2 = {{ μ | μ ∈ Ω_1 or μ ∈ Ω_2 }}
Ω_1 ∖ Ω_2 = {{ μ_1 ∈ Ω_1 | for all μ_2 ∈ Ω_2, μ_1 ≁ μ_2 }}
Ω_1 ⟕ Ω_2 = (Ω_1 ⋈ Ω_2) ∪ (Ω_1 ∖ Ω_2)
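To illustrate these operations on a small (hypothetical) example, let Ω_1 = {{ μ_1 }} with μ_1 = { ?x ↦ a } and Ω_2 = {{ μ_2, μ_3 }} with μ_2 = { ?x ↦ a, ?y ↦ b } and μ_3 = { ?x ↦ c }. Then Ω_1 ⋈ Ω_2 = {{ { ?x ↦ a, ?y ↦ b } }} (only μ_1 and μ_2 are compatible), Ω_1 ∖ Ω_2 = {{ }} (since μ_1 is compatible with μ_2), and hence Ω_1 ⟕ Ω_2 = {{ { ?x ↦ a, ?y ↦ b } }}; in contrast, Ω_2 ∖ Ω_1 = {{ μ_3 }} because μ_3 is not compatible with μ_1.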
Note that, of the above operations, only the union ∪ may alter the cardinality
of elements in the resulting multiset, namely: if a mapping μ is contained in
both Ω_1 and Ω_2, then its cardinality in Ω is the sum of the
original cardinalities in Ω_1 and Ω_2.
Semantics of basic SPARQL constructs.
The semantics PD of a graph pattern P is defined recursively
on the structure of P. In the base case, P is a triple pattern P = (s, p, o)
and PD is defined as
PD = {μ|(μ) = (P) and μ(P) ∈ D}.
For complex graph patterns P, the semantics definition
PD is shown in
Table <ref>.
Translation of basic SPARQL constructs.
We are now ready to inspect the translations from SPARQL 1.1 to Warded Datalog^±
given in Figure <ref>
and Section <ref>.
As in Section <ref>, we concentrate on bag semantics
as the more complex case.
Triple.
First consider the base case of graph patterns, namely a triple pattern
P_i = (s,p,o), where each of s,p,o can be a term or a variable. Clearly,
the single rule produced by our translation τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i)
in Definition <ref> produces all mappings in (P_i)
that match (s,p,o) to a triple in the active graph D.
Graph. Suppose that
P_i is of the form P_i = (GRAPH g P_1).
According to the semantics definition in
Table <ref>
we have to distinguish 2 cases depending on whether g is an IRI or a variable. Moreover, in the former case,
we have the two subcases depending on whether the IRI g is the name of some named graph (i.e., it occurs in
names) or not. It is easy to verify that
the single rule produced by our translation τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i)
in Definition <ref> covers exactly these 3 cases.
If g is an IRI that occurs in names, then the body literal named(g) of the Datalog rule will evaluate to true, and the mappings (on the variables (P_1)) obtained via the body literal
ans_2i(Id_1, var(P_1), g) are precisely the mappings obtained by evaluating
graph pattern P_1 over the graph with name g, i.e., (g). In particular,
the head variables var(P_i) coincide with the body variables var(P_1).
Note that
the variable Id in the head has the effect that every firing of the rule binds
Id to a different labelled null. Hence, if ans_2i(Id_1, var(P_1), g) yields duplicates
(i.e., identical mappings with different bindings of Id_1), then these duplicates are preserved by the
corresponding firings of the rule (producing a binding of Id to a different labelled null
for each firing of the rule).
The rule also behaves correctly in the other 2 cases:
if g is an IRI that does not occur in names, then the body literal named(g) of the Datalog rule cannot match and the rule will never fire, thus
producing no mapping at all, which is the correct behaviour in this case. Finally, if g is a variable,
then the body literal named(g) produces mappings of g to all IRIs in names and, for each such binding,
ans_2i(Id_1, var(P_1), g) produces precisely the mappings obtained by evaluating
graph pattern P_1 over the graph whose name is the current binding of g. Note that, in this case,
the head variables
var(P_i)
consist of the variables in P_1 plus the variable g.
Again it is the correct behaviour that the
rule produces bindings for this increased variable set.
Join.
Suppose that P_i is of the form P_i = (P_1 AND P_2).
By expanding the definition of the AND-operator
into the semantics definition in
Table <ref>, we
get
P_iD = {{ μ_1 ∪ μ_2 | μ_1 ∈ P_1D, μ_2 ∈ P_2D, and μ_1 ∼ μ_2 }}, that is,
the multiset of those mappings which can be
obtained as the union of any two compatible mappings μ_1 ∈P_1D and
μ_2 ∈P_2D.
The
rule produced by our translation τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i)
in Definition <ref>
achieves precisely this: the two body atoms ans_2i(Id_1, v_1(var(P_1)), D)
and
ans_2i+1(Id_2, v_2(var(P_2)), D)
yield the sets of mappings
P_1D and
P_2D. Note that the variable renamings
v_1 and v_2 make sure that there is no interference
between the evaluation of
P_1D (by the first body atom) and the
evaluation of P_2D (by the second body atom).
The comp-atoms in the body of the rule make sure
that μ_1 and μ_2 are compatible on all common variables. Moreover, they bind the common variables
{x_1, …, x_n} to the correct values according
to the definition of the comp-predicate, i.e.,
if v_1(x_j) and v_2(x_j) are bound to the same value
(i.e., either the same term or they are both set to “null”),
then we have x_j = v_1(x_j) = v_2(x_j).
Otherwise, if one of
v_1(x_j), v_2(x_j) is a term and the other is “null”,
then x_j is set equal to the term. Finally, recall that compatibility of
two mappings μ_1,μ_2 is defined as compatiblity
of all common variables of the two mappings.
Hence, the comp-atoms in the body of the rule
produced by our translation τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i)
indeed verify that two mappings
μ_1,μ_2 are compatible.
Filter.
Suppose that P_i is of the form
P_i = (P_1 FILTER C).
By the semantics definition
in Table <ref>,
P_iD contains those mappings
μ of P_1D which satisfy the filter condition C. This is precisely what the single rule
resulting from our translation
τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i)
in
Definition <ref> achieves: the body atom
ans_2i(var(P_1), D) yields all those variable bindings that correspond to the mappings in P_1D; and adding the
filter condition C to the body of the rule means that the rule
only fires for variable bindings
(strictly speaking, for the mappings corresponding to
these variable bindings)
for which condition C evaluates to true.
Optional.
Suppose that P_i is of the form P_i = (P_1 OPT P_2).
By expanding the definition of the OPT-operator
into the semantics definition in
Table <ref>, we
get
P_iD = (P_1D ⋈ P_2D) ∪ (P_1D ∖ P_2D).
The translation τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i)
in Definition <ref>
yields three rules.
The second rule is identical to the translation
of a Join expression. As was argued above, it
computes precisely the variable bindings
corresponding to the mappings in
P_1D ⋈ P_2D.
It remains to show that the first and third rule taken
together
produce the variable bindings
corresponding to the mappings in
P_1D∖P_2D.
The first rule is almost the same as the second one with
the only difference that it projects the join-result
to the variables in (P_1). In other words, it
determines the variable bindings corresponding
to the mappings in P_1D which are compatible with
some mapping in P_2D. Therefore, the first two
body literals of the third rule have the following effect:
the first literal produces all
variable bindings corresponding
to mappings in P_1D while the second (i.e., the negative) body literal
selects those variable bindings which correspond to
mappings that are not compatible with any
mapping in P_2D. By setting all variables in
(P_2) ∖(P_1) to “null”
(with the remaining m body atoms), the third rule
indeed produces the
variable bindings
corresponding to the mappings in
P_1D∖P_2D.
Optional Filter.
Suppose that P_i is an optional filter expression
of the form
P_i = (P_1 OPT (P_2 FILTER C)).
According to the
semantics definition in
Table <ref>,
P_iD is obtained as the union of 2 multisets:
* the mappings μ in (P_1 AND P_2)D which
satisfy the filter condition C;
* the mappings μ_1 in P_1D for which all mappings
μ_2 in P_2D have one
of the following two properties: either μ_1 and μ_2 are not compatible or
they are compatible but their combination does not satisfy the filter condition C.
The translation τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i)
in Definition <ref>
yields three rules, which are very similar to the translation
of Optional expressions discussed before. The only difference is that now the first and second rule have
filter condition C as additional body literals.
Compared with the rules in case of Optional expressions,
these additional body literals
have the following effect:
* The second rule computes
the variable bindings
corresponding to those mappings in
P_1D ⋈ P_2D which
satisfy the filter condition C. That is,
the mappings according to item 1 above.
* The first rule computes those
variable bindings corresponding
to the mappings in P_1D which are compatible with
some mapping in P_2D and which, together
with a compatible mapping from P_2D
satisfy the condition C.
Therefore, the (negative) second body literal in the third rule
has the effect that we eliminate from the multiset of mappings in
P_1D (obtained via the first body atom) precisely those mappings μ_1
for which there exists a compatible mapping μ_2 in
P_2D, such that their combination satisfies the filter condition C.
In other words, we are left with the mappings from item 2 above.
Analogously to Optional patterns, the variables in (P_2) ∖(P_1)
are not part of the domain of these mappings. Hence, with the null-atoms in the
body of the third rule, we set all these variables to “null”.
Union.
Suppose that P_i is of the form P_i = (P_1 UNION P_2).
According to the semantics definition in
Table <ref>,
P_iD is simply obtained as the union of the two multisets
P_1D and P_2D.
In principle, the two rules of our translation τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i)
in Definition <ref> compute this union of the variable bindings
corresponding to the mappings in P_1D (via the body atom
ans_2i(Id_1, var(P_1), D) in the first rule) and the variable bindings
corresponding to the mappings in P_2D (via the body atom
ans_2i+1(Id_2, var(P_2), D) in the second rule). However, care has to be taken that
all variable bindings obtained for ans_i(Id, var(P_i), D) must be defined on
all variables in (P_i). Therefore, variable bindings obtained from
ans_2i(Id_1, var(P_1), D) have to be extended to the variables in
(P_2) ∖(P_1) by setting the latter explicitly to “null”. Likewise, the
variable bindings obtained from
ans_2i+1(Id_1, var(P_2), D) have to be extended to the variables in
(P_1) ∖(P_2) by setting the latter explicitly to “null”.
This is achieved by the null-atoms in the rule bodies of the two rules.
Minus.
Suppose that P_i is of the form P_i = (P_1 MINUS P_2).
According to the semantics definition in
Table <ref>,
P_iD consists of those mappings μ_1 of
P_1D which, for any mapping μ_2 of P_2D
satisfy one of the following two conditions: either
μ_1 and μ_2 are not compatible or
(μ_1) and (μ_2) have no variable in common.
In other words, a mapping μ_1 ∈P_1D is retained in
P_iD unless there exists a mapping
μ_2 ∈P_2D such that
μ_1 and μ_2 are compatible and there exists at least one
variable x with μ_1(x) = μ_2(x) ≠“null”.
Similarly to our translation of Join patterns, the first rule of our translation
τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) of a Minus pattern in
Definition <ref>
computes the variable bindings of the variables in (P_1) and
of (P_2) which correspond to compatible mappings. The next n rules
(all with head predicate ans_equal-i) restrict the set of compatible mappings to those whose domains have at least one variable in common, i.e., the corresponding
variable bindings have at least one variable on which they coincide and
they are both not “null”. Note that the signature of ans_equal-i is
restricted to the variables in (P_1). That is, ans_equal-i contains
all variable bindings on (P_1), which correspond to “forbidden”
mappings. The last rule in our translation
τ(P_i, 𝑓𝑎𝑙𝑠𝑒, D, i) computes the variable bindings corresponding
to the mappings in P_iD by computing the variable bindings corresponding to the mappings in P_1D (via the first body literal) and eliminating the “forbidden” ones (via the negative second body literal).
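For readers who prefer executable pseudocode, the following Python sketch (again illustrative only, with dictionaries as mappings and None as "null") mirrors the Minus semantics just described:

# Illustrative sketch of the Minus semantics described above.
def compatible(mu1, mu2):
    """Mappings are compatible if they agree on every shared, non-null variable."""
    shared = set(mu1) & set(mu2)
    return all(mu1[v] == mu2[v] for v in shared
               if mu1[v] is not None and mu2[v] is not None)

def minus_patterns(mappings1, mappings2):
    """Keep mu1 unless a compatible mu2 agrees with it on some non-null variable."""
    def forbidden(mu1):
        return any(compatible(mu1, mu2) and
                   any(v in mu2 and mu1[v] == mu2[v] and mu1[v] is not None
                       for v in mu1)
                   for mu2 in mappings2)
    return [mu1 for mu1 in mappings1 if not forbidden(mu1)]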
§.§ Translation Rules for Property Paths
Analogously to the previous section, we now also juxtapose the semantics of property path expressions with our translation.
Semantics of property paths.
For a property path PP, we write
PPD,s,o
to denote the semantics of a property path PP
over a graph D with s,o denoting the
subject and object of the top-level property
path expression. The semantics of a property path
PP is the set of pairs (x,y) of terms such that there is
a path matching PP from x to y.
Here we mainly follow the semantics definition in
<cit.>.
The semantics
of property paths is defined recursively, with
link property paths (i.e., simply an IRI p) as the base case.
The definition
PPD,s,o for arbitrary property paths
PP is given in
Table <ref>.
There we write Distinct for converting a
multiset into a set by deleting duplicates.
Moreover, we write reach(x, PP, D,s,o) for the set of terms
reachable from some start point x
by applying the path PP one or more times,
where s,o are again subject and object of
the top-level property path expression.
Recall from the previous section, that the semantics
PD
of a graph pattern P over a graph D is defined as
a multiset of (partial) mappings. If a graph pattern
P is of the form
P = (s PP o), where PP is a property path, then
(s PP o)D is the multiset
of mappings obtained as follows:
s PP o D =
{ μ | dom(μ) = var({s,o}) and (μ(s), μ(o)) ∈ PP D,s,o },
where we write μ(x) with x ∈ {s,o} for both variables and non-variables x, with the
understanding that μ(x) = x if x ∉ V.
Graph pattern with a property path.
Before we inspect the translations of property paths to Warded Datalog^±,
we inspect our translation of graph patterns
using a property path. That is,
consider a graph pattern P of the form P = s PP_1 o, where PP_1 is a property path. We have recalled above the definition of s PP_1 o D as a multiset of pairs of terms.
Now suppose that
our translation τ_PP(PP_1, 𝑓𝑎𝑙𝑠𝑒, S, O, D, 2i) of property path PP_1 is correct.
(A proof sketch of this fact comes next.)
Then the single additional rule of our translation
in Definition <ref>,
with head atom ans_2i(Id_1, S, O, D),
indeed produces all mappings μ on var(P) such that (μ(S), μ(O)) is in PP_1 D,μ(S),μ(O). Note that
the multiplicities of the mappings thus obtained are taken care of by
different bindings of the variable Id_1.
Translation of property paths.
We are now ready to inspect the translations of property paths to Warded Datalog^±given in Figure <ref>
and Section <ref>.
Again, we concentrate on bag semantics
as the more complex case.
We proceed inductively on the structure of the property paths. In all cases, suppose that we want to evaluate property paths over a graph D and let s,o denote the top-level subject and object, given in a graph pattern
of the form (s PP o). We start with link property paths as the base case and then cover all types of compound property path expressions.
Link property path.
Suppose that the property path PP_i consists of a single IRI p, i.e., PP_i = p. Then PP_i D,s,o consists of all pairs (x,y) such that (x,p,y) is a triple in D. On the other hand, the single rule produced by
our translation τ_PP(PP_i, false, S, O, D, i) in Definition <ref>
yields exactly these pairs of terms.
Inverse path.
Consider a property path PP_i of the form PP_i = ^PP_1 for some property path PP_1.
Then PP_i D,s,o consists of all pairs (y,x) such that (x,y) is contained in PP_1 D,s,o. That is, PP_i D,s,o just swaps the first and second
component of each pair in PP_1 D,s,o. This is exactly what the
single rule in our translation τ_PP(PP_i, 𝑡𝑟𝑢𝑒, S, O, D, i) in Definition <ref> does.
Alternative path.
Consider a property path PP_i of the form PP_i = PP_1 | PP_2 for some property paths PP_1 and PP_2.
Then, according to the semantics definition in
Table <ref>, PP_i D,s,o consists of the union of PP_1 D,s,o and PP_2 D,s,o.
The two rules in our translation τ_PP(PP_i, 𝑡𝑟𝑢𝑒, S, O, D, i) in Definition <ref>
realize exactly this union.
Sequence path.
Consider a property path PP_i of the form PP_i = PP_1 / PP_2 for some property paths PP_1 and PP_2.
Then, according to the semantics definition in
Table <ref>, PP_i D,s,o consists of pairs (x,z) (i.e., start and end points of paths described by PP_i)
such that there exist pairs (x,y) ∈ PP_1 D,s,o and (y,z) ∈ PP_2 D,s,o (i.e., start and end points of
paths described by PP_1 and PP_2, respectively,
such that the end point of a
path according to PP_1 and the start point of a path according
to PP_2 coincide).
The single rule
in our translation τ_PP(PP_i, 𝑡𝑟𝑢𝑒, S, O, D, i) in Definition <ref>
realizes precisely these combinations of pairs (x,y) ∈ PP_1 D,s,o and (y,z) ∈ PP_2 D,s,o.
One-or-more path.
Consider a property path PP_i of the form PP_i = PP_1+ for some property path PP_1. The semantics of the one-or-more path expression is
essentially that of reachability via hops defined by the property
path expression PP_1. That is, we get all pairs (x,y) that are in the
“infinite” union PP_1 D,s,o ∪ PP_1/PP_1 D,s,o ∪ PP_1/PP_1/PP_1 D,s,o ∪ PP_1/PP_1/PP_1/PP_1 D,s,o ∪ …, with one important difference though:
according to the SPARQL semantics of property paths[https://www.w3.org/TR/SPARQL11-query/#defn_PropertyPathExprhttps://www.w3.org/TR/SPARQL11-query/#defn_PropertyPathExpr],
one-or-more property paths
(and likewise zero-or-one and zero-or-more
property paths) always have set semantics.
The
two rules
in our translation τ_PP(PP_i, 𝑡𝑟𝑢𝑒, S, O, D, i) in Definition <ref>
realize exactly this kind of reachability relationship.
Moreover, neither in the
semantics definition nor in the translation, we need to keep track
of duplicates. Therefore,
in the semantics definition, we define PP_i D,s,o as a set (rather than a multiset).
And in our translation, duplicates are avoided by the Id = [] body atom in both rules. As was already mentioned in
Section <ref>,
this body atom has the effect that no copies of the same pair (x,y) (but with different bindings of Id)
can
ever be produced.
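As an aside, the reachability that the two rules encode can be illustrated with a small Python sketch (a naive fixpoint over the pair set, not the actual Vadalog evaluation); the use of a set mirrors the set semantics enforced by the Id = [] atom:

# Naive transitive closure over the pairs computed for PP_1.
def one_or_more(pp1_pairs):
    """All pairs (x, z) connected by one or more PP_1 hops (set semantics)."""
    edges = set(pp1_pairs)
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in edges:
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

# one_or_more([('a','b'), ('b','c')]) == {('a','b'), ('b','c'), ('a','c')}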
Zero-or-one path.
Consider a property pathPP_iof the formPP_i = PP_1?for some property pathPP_1.
Then, intuitively,PP_iD,s,oconsists of pairs of nodes
that are the start and end point of a “one-path” (i.e.,
traversingPP_1once) plus “zero-paths”
(i.e., identical start and end point).
In the semantics definition
in Table <ref>,
the pairs corresponding to “one-paths” are taken care of by the expressionPP_1D,s,o. Analogously, in our translationτ_PP(PP_i, 𝑡𝑟𝑢𝑒, S, O, D, i)in Definition <ref>
these pairs are produced by the second rule.
All remaining expressions in the semantics definition correspond to
various ways of getting “zero-paths”, namely: either for every term in the graph (captured by the second and third expression of the semantics
definition) or if at least one ofsorois a term (captured by
the remaining expressions of the semantics definition). In case bothsandoare terms, they must be the same in order
to constitute a “zero-path”.
In our translationτ_PP(PP_i, 𝑡𝑟𝑢𝑒, S, O, D, i)in Definition <ref>,
the first rule produces the pairs(x,x)for termsxoccurring
in the active graph. The last
rule of the translation produces the remaining pairs(t,t)iftis a term that occurs as top-level subject or object of the entire
property path expression. As with one-or-more path expressions,
it is important
to keep in mind that also zero-or-one paths always
have set semantics according to the SPARQL semantics of property paths.
The elimination of duplicates is ensured by the Distinct operator
in the semantics definition and by the Id = [] body atom in the
rules of our translation.
Zero-or-more path.
Consider a property pathPP_iof the formPP_i = PP_1*for some property pathPP_1.
In our semantics
definition in
Table <ref>,
we have definedPP_iD,s,osimply as the set-variant (i.e., deleting
any duplicates) of the union of the zero-or-one path
and the one-or-more path. Of course, in case of bag semantics
this would be
problematic since we thus count “one-paths” twice. However, since
zero-or-more property paths always have set semantics,
theDistinctoperator applied to the union eliminates any
duplicates anyway.
The rules
in our translationτ_PP(PP_i, 𝑡𝑟𝑢𝑒, S, O, D, i)in Definition <ref>
are indeed obtained as the union of the rules
that one gets for the translations
of the zero-or-one pathPP_1?and of the
one-or-more pathPP_1+. TheId = []body atom in each of the rules makes sure that
we never produce any copies of any pair(x,y).
Negated path.
Consider a property pathPPof the formPP = !𝒫, where𝒫is a set of “forward” link property path expressions{p_f_1, …, p_f_n}and
“backward” link property path expressions{ p_b_1,…, p_b_m}.
Then, according to our semantics
definition in
Table <ref>,PP_iD,s,ocontains those pairs(x,y)for which
either there exists a triple(x,a,y)in the active graph such thatais different from allp_f_jor
there exists a triple(y,a,x)in the active graph such thatais different from allp_b_j.
Our translationτ_PP(PP_i, 𝑡𝑟𝑢𝑒, S, O, D, i)in Definition <ref>
generates two rules. Clearly, the first type of
pairs inPP_iD,s,ois produced by the first rule of
our translation and the second type
of pairs inPP_iD,s,ois produced by the second rule.
§ SOME IMPLEMENTATION DETAILS
Some basic principles
The SPARQL to Warded Datalog^±translator was implemented in Java using the library org.apache.jena to parse SPARQL query strings and handle operations, solution modifiers, basic graph patterns, etc.
appropriately. The ARQ algebra query parser[https://jena.apache.org/documentation/query/https://jena.apache.org/documentation/query/] of Apache Jena parses SPARQL query strings in a top-down fashion. First query forms are parsed, next solution modifiers and in the end operations starting from the outer-most operation going inward.
In contrast, our developed SPARQL to Warded Datalog^±parser
analyses queries bottom-up. Thus, the translation starts at basic graph patterns and continues upwards. This setup is necessary, as the variables inside an expression need to be kept track of, when parsing it, as rules usually modify results of sub-operations. Therefore, it needs to be known which variables occur in the respective subresult predicates.
Datatypes, languages, and compatibility
Our translation engine partially supports datatypes and language tags by adding two additional arguments to each variable, containing the respective information. This has implications on most SPARQL operations such as UNION, JOIN, and FILTER. For example, in the case of JOIN operations, we have extended the existing translation of <cit.> by developing two additional comparison predicates (compD and compL). In <cit.> the predicate comp is used for computing the compatibility between two variables. The new predicates compD and compL are used to compute the respective compatibility for their datatypes and language tags, which is done in the same way as for variables thus far.
Moreover, Vadalog (and Datalog in general) joins variables of the same rule by name; however, the semantics of joins in SPARQL differs from that of Datalog. It is for that reason (1) that we have to prefix/rename variables in such a way that Vadalog's internal join strategy is prevented and (2) that we introduce the join predicate comp described in Section <ref> to realise SPARQL join semantics.
Skolem functions (for bag semantics)
The Skolem function generator lies at the heart of how we preserve duplicate results. As in the work of <cit.>, we introduce IDs to preserve Datalog bag semantics
in Warded Datalog^± set semantics. Therefore, each generated result tuple is distinguished from its duplicates by a tuple ID (referred to as TID in <cit.>). However, instead of simply generating nulls, our ID generation process is abstracted away by the getSkolF function of the skolFG object.
We thus generate IDs as follows. Since the grounding of each positive atom in the rule body is responsible for the generation of a tuple in Datalog^±, we extract a sorted list bodyVars of all variables occurring in positive atoms of the rule body. Finally, the tuple ID is generated by assigning it to a list starting with the string “f_<ruleID>”, followed by the list of positive body variables bodyVars and a string label. The strings “f_<ruleID>” and label were added, as we use “f_<ruleID>” to identify the translated rules of the processed operator at the current translation step, while label provides additional information when needed.
This setup preserves generated duplicates of SPARQL bag semantics in Warded Datalog^± set semantics by utilizing their provenance information to make them distinguishable. Furthermore, it provides information for debugging/explanation purposes of the reasoning process, as each tuple
carries the information which rule and grounding has led to its generation.
As an added bonus, this layer of abstraction may be used to adapt different duplicate generation semantics/strategies as might be necessary for different applications, by simply exchanging the skolFG with any self-implemented solution.
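The following Python mock-up illustrates the ID-generation scheme described above. It is a hedged sketch: the actual translator is written in Java on top of Apache Jena, so the class and method names below merely mirror the description and are otherwise our own assumptions:

# Hypothetical mock of the Skolem-function ID generator (names are ours).
class SkolemFunctionGenerator:
    def get_skol_f(self, rule_id, body_vars, label=""):
        """Tuple ID: 'f_<ruleID>', then the sorted positive-body variables,
        then an optional label string."""
        return ["f_{}".format(rule_id)] + sorted(body_vars) + [label]

skol_fg = SkolemFunctionGenerator()
# e.g. skol_fg.get_skol_f(7, ["?S", "?O", "Id_1"], label="join")
# -> ['f_7', '?O', '?S', 'Id_1', 'join']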
§ FURTHER DETAILS OF THE EXPERIMENTAL EVALUATION
In Section <ref>, we have reported on
our analysis of various SPARQL benchmarks. One of the outcomes of this analysis
was the selection of three concrete benchmarks,
which we then used in Section <ref>
to
test the standard-compliance of SparqLog and two state-of-the-art SPARQL engines.
In this section, we provide further details on how we have set up the benchmark analysis
(Section <ref>) and the compliance tests
(Section <ref>). Following these, the most interesting part of this section comprises additional benchmarks requested by reviewers, starting from Section <ref>, which provides a further overview of the following sections.
§.§ Setup of the Benchmark Analysis
Before we dive into the benchmark analysis, we briefly introduce its setup in this section.
Our analysis employs a similar way of counting SPARQL features as is done in <cit.>. We have counted each feature once per query in which it occurs, with one exception being the DISTINCT feature. As in <cit.>, we count the DISTINCT feature only if it is applied to the entire query. This is also in line with our interest of testing bag and set semantics in combination with different SPARQL features.
In contrast to <cit.> we do not limit our benchmark analysis to SELECT queries, but rather analyse all queries provided at the GitHub repository of the dice group[https://hobbitdata.informatik.uni-leipzig.de/benchmarks-data/queries/https://hobbitdata.informatik.uni-leipzig.de/benchmarks-data/queries/]. Therefore, we analyse also the DESCRIBE queries of e.g. BSBM. Moreover, since the SP2Bench query file does not contain the hand-crafted ASK queries provided on its homepage[http://dbis.informatik.uni-freiburg.de/index.php?project=SP2B/queries.phphttp://dbis.informatik.uni-freiburg.de/index.php?project=SP2B/queries.php], we have chosen to add these to the benchmark to be able to analyse the complete query-set of SP2Bench. For these reasons, the results of overlapping SPARQL features and benchmarks from our analysis in Table <ref> and the one of <cit.> differ slightly.
Furthermore, note that we do not display basic features, such as Join, Basic Graph pattern, etc. in Table <ref>, as these features are, of course, covered by each of the considered benchmarks. Furthermore, we have chosen to omit the SPARQL features ORDER BY, LIMIT and OFFSET from the table since these cannot currently be evaluated by SparqLog due to the limitations of Vadalog and our translation engine,
as has already been mentioned in our discussion of the SPARQL feature coverage of
in Table <ref>. Also ASK was left out in the table, as ASK is not an especially challenging feature and was therefore removed for space reasons. Moreover, we have chosen to only include the REGEX filter constraint in the feature coverage table and no other specific constraints, as the REGEX function is argued to be of vital importance for SPARQL users in <cit.>. For this reason, we have chosen to cover this feature by our
translation engine in addition to the other filter constraints. Finally, we have not included the SPARQL features MINUS and the inverted, zero-or-one, zero-or-more, one-or-more and negated property path, as none of the selected benchmarks covers any of these SPARQL features.
§.§ Setup of the SPARQL Compliance Tests
As detailed in Section <ref>,
we have selected three benchmarks (BeSEPPI, SP2Bench and FEASIBLE (S)) for
evaluating the standard-compliance of the chosen three systems
(, Jena Fuseki, and OpenLink Virtuoso).
For our experiments, we use Apache Jena Fuseki 3.15.0 and
Virtuoso Open Source Edition, version 7.2.5.
The experiments are run on a Windows 10 machine with 8GB of main memory.
In this section, we explain the setup of the standard-compliance tests that we
performed on our system and the two state-of-the-art SPARQL engines
Fuseki and Virtuoso.
Moreover, we mention some challenges with these tests and we
provide further details on the outcome of the compliance tests.
§.§.§ Benchmark Generation
In order to carry out our standard-compliance tests, we first have to
make the queries and the data provided by the benchmarks accessible to the
tested systems. While this turns out to be an easy task for BeSEPPI and SP2Bench,
some care is required for the FEASIBLE (S) benchmark.
BeSEPPI and SP2Bench
The BeSEPPI benchmark contains queries and a dataset for the evaluation of property path queries. Its dataset can be directly loaded into the selected
systems and its queries can directly be executed.
The SP2Bench benchmark contains 17 hand-crafted queries and a benchmark dataset generator. For the purpose of our compliance tests, we have generated a dataset with 50k triples, which was loaded into each of the considered systems.
FEASIBLE
The FEASIBLE benchmark contains a query generator, which generates queries for an arbitrary dataset that provides a query-log. In the case of the FEASIBLE (S) benchmark,
we have chosen the Semantic Web Dog Food (SWDF) dataset and we
have generated 100 queries using the SWDF query-log.
However, some additional work was required before we could
use the FEASIBLE (S) benchmarks for our tests.
The first complication arises from the fact that
Vadalog uses Java sorting semantics, whereas the SPARQL standard defines its own ordering semantics. We had to remove LIMIT and OFFSET from each query of the FEASIBLE (S) benchmark, as queries with these features can only be reasonably evaluated (comparing the generated query results, rather than only checking if their cardinalities are equal) if the results are sorted and if each considered RDF query and storage system provides the same sorting semantics. Some queries of the generated benchmark only differed from each other in the argument of the
LIMIT or OFFSET clause. Thus, after removing all LIMIT and OFFSET clauses, we ended up with duplicate queries. These duplicate queries were eliminated, leaving the FEASIBLE (S) benchmark with a total number of 77 unique queries.
Moreover, Vadalog does currently not support UTF-8 characters. We were therefore faced with the necessity of changing the encoding of the SWDF dataset of the FEASIBLE (S) benchmark. We have made the plausible assumption that dropping non-ASCII characters from RDF strings would not lead to vastly different results and
we have therefore simply deleted all non-ASCII characters from the SWDF dataset.
Furthermore, since the FEASIBLE (S) benchmark includes queries with the GRAPH feature (which selects the graph IRI of RDF triples), we have loaded the SWDF dataset both into the default graph of each tested system and into a named graph for the FEASIBLE benchmark to be able to test the GRAPH feature.
§.§.§ Challenges of the Evaluation Process
The evaluation of the standard compliance of the three chosen systems requires the comparison
of query results. In case of BeSEPPI, the benchmark also provides the expected result for each query. We therefore have to compare the result produced by each of the three systems with the correct result defined by the benchmark itself. In contrast,
FEASIBLE and SP2Bench do not provide the expected results for their queries.
We therefore use a majority voting approach to determine the correct answer.
That is, we compare the query results produced by the three considered systems and
accept a result as the expected query answer if it is equal to the generated query result
of at least two of the tested systems.
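A minimal Python sketch of this majority-voting step (illustrative only; result rows are assumed to be hashable tuples and results are compared as multisets) could look as follows:

from collections import Counter
from itertools import combinations

def majority_result(results):
    """results: dict mapping a system name to its list of result rows.
    Return a result multiset agreed on by at least two systems, or None."""
    for sys_a, sys_b in combinations(results, 2):
        if Counter(results[sys_a]) == Counter(results[sys_b]):
            return results[sys_a]
    return None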
A major challenge for comparing query results (both, when comparing the result produced by one system with the expected result defined by the benchmark itself and when comparing the
results produced by two systems) comes from blank nodes.
On the one hand, each system employs its own specific functionality for assigning blank node names. Therefore, to compare blank nodes between the different result multisets, a mapping between the internal system-specific blank node names has to be found. However, finding such a mapping comes down to finding an isomorphism between two arbitrarily sized tables, containing only blank nodes, which requires exponential time in the worst case and which poses a
serious problem for large result multisets with many blank nodes.
We have therefore tried out a simple heuristic to find a suitable mapping between blank nodes
by sorting the query results without considering blank node names first. We then iterate over
both results and finally, each time when a new blank node name is encountered, we save the mapping between the system-specific blank node names. Even though this is
a very simple heuristic, it has worked quite well in many cases.
Nevertheless, there
are cases, where this simple procedure infers wrong blank node mappings, even though the results are semantically equivalent. Hence, due to the instability of this efficient blank node checking heuristic, we have chosen to remove the evaluation of blank nodes from our compliance tests. That is, our current evaluation test suite does for this reason not distinguish between different blank node names but, of course, it distinguishes between all other terms.
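For illustration, a condensed Python version of this heuristic is given below (assuming result rows are tuples of term strings and blank nodes are written with the usual "_:" prefix); as discussed, it is deliberately simple and can mis-align blank nodes in unfavourable cases:

# Simple blank-node alignment heuristic: sort both results while ignoring
# blank-node labels, then record label correspondences in order.
def blank_node_mapping(rows_a, rows_b):
    is_blank = lambda t: t.startswith("_:")
    key = lambda row: tuple("_:b" if is_blank(t) else t for t in row)
    mapping = {}
    for ra, rb in zip(sorted(rows_a, key=key), sorted(rows_b, key=key)):
        for ta, tb in zip(ra, rb):
            if is_blank(ta) and is_blank(tb):
                mapping.setdefault(ta, tb)  # first occurrence wins
    return mapping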
§.§.§ Outcome of the Compliance Tests
The outcome of our compliance tests is based on the notions
of correctness and completeness of query results, which we
define in the same way as done in <cit.>:
Correctness defines the ratio of correct tuples generated by
the tested system for a query. For a SELECT query q with R_expected(q) being the expected result of q and R_sys(q) being the response of system sys to the SELECT query, <cit.> defines correctness as follows:
correct(q) = |R_expected(q) ∩ R_sys(q)| / |R_sys(q)| if |R_sys(q)| ≠ 0, and correct(q) = 1 otherwise.
It intuitively accepts a result as correct (correct(q) = 1) if the returned result R_sys(q) of the considered system is a subset of the expected answer R_expected(q). For ASK queries we consider a result to be correct only if it exactly matches the expected answer, as done in <cit.>.
Completeness defines the ratio of all accepted result-tuples generated by the
tested system for a query. For a SELECT query q with R_expected(q) being the expected result of q and R_sys(q) being the response of system sys to the SELECT query, <cit.> defines completeness as follows:
complete(q) = |R_expected(q) ∩ R_sys(q)| / |R_expected(q)| if |R_expected(q)| ≠ 0, and complete(q) = 1 otherwise.
It intuitively accepts a result as complete (complete(q) = 1) if the expected answer R_expected(q) is a subset of the returned result R_sys(q) of the considered system. For ASK queries we consider a result to be complete only if it exactly matches the expected answer, as done in <cit.>.
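Both ratios are straightforward to compute once results are represented as multisets; the following Python sketch (rows as hashable tuples, Counter modelling multiset intersection) restates the two definitions and is illustrative rather than taken from our evaluation suite:

from collections import Counter

def correctness(expected, returned):
    if len(returned) == 0:
        return 1.0
    common = Counter(expected) & Counter(returned)   # multiset intersection
    return sum(common.values()) / len(returned)

def completeness(expected, returned):
    if len(expected) == 0:
        return 1.0
    common = Counter(expected) & Counter(returned)
    return sum(common.values()) / len(expected)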
FEASIBLE (S)
FEASIBLE (S) contains 77 unique queries. From these 77 queries, a total of 68 are accepted by our translation engine and were thus used for the correctness and completeness evaluation of our system. The remaining 9 queries could not be translated into Datalog^±programs, since they contain features that are currently not supported by our system for the following reasons:
Three queries contain a complex expression in an ORDER BY statement and
two queries contain complex expressions in COUNT aggregates. Both features
are currently supported by SparqLog only
for simple expressions consisting of a single variable.
Two queries cannot be translated currently due to the missing support of our engine for the functions ucase and contains.
However, these SPARQL features do not impose any conceptual hurdle and were only left out from the first version of SparqLog since
we considered their priority as low.
Similarly, two queries are not accepted by our engine at the moment, as they contain the DATATYPE feature. Again, this is mainly due to the low priority that we gave this feature. Since our translation engine already tracks datatypes and language tags of RDF terms, it should be no problem to integrate this feature in later versions of SparqLog.
On the remaining 68 queries that are accepted by our system, our translation engine produces the same result as Apache Jena Fuseki.
In contrast, OpenLink Virtuoso
returned an erroneous result for 14 queries
by either
wrongly outputting duplicates (e.g., ignoring DISTINCTs) or omitting duplicates
(e.g., by handling UNIONs incorrectly). Moreover, in 18 cases,
Virtuoso was unable to evaluate the query and produced an error.
SP2Bench
The SP2Bench benchmark contains 17 queries in total and is specifically designed to test the scalability of SPARQL engines. All three considered systems produce identical results for each of the 17 queries.
BeSEPPI
The BeSEPPI benchmark contains 236 queries, specifically designed to evaluate the correct and complete support of property path features. We have already summarized the outcome of our compliance tests
of the 3 considered systems on this benchmark
in Table <ref>
in Section <ref>. In a nutshell,
while SparqLog and Fuseki follow the SPARQL standard for the evaluation of property paths, Virtuoso produces errors for 18 queries and returns incomplete results for
another 13 queries.
We conclude this section by a more detailed look into the
problems Virtuoso is currently facing with property path
expressions.
As already observed in <cit.>,
Virtuoso produces errors for
zero-or-one, zero-or-more and one-or-more property paths that contain two variables. Furthermore, the error messages state that the transitive start is not given. Therefore, we come to the same conclusion as <cit.> that these features were most likely left out on purpose, since Virtuoso is based on relational databases and it would
require huge joins to answer such queries. Moreover, we have noticed that the errors for inverse negated property paths (reported in <cit.>) have been fixed in the current OpenLink Virtuoso release.
Virtuoso produces 10 incomplete results when evaluating
one-or-more property path queries. As already discovered in <cit.>, they all cover cases with cycles and miss the start node of the property path, indicating that the
one-or-more property path might be implemented by evaluating the
zero-or-more property path first and simply removing the start node from the computed result. Finally, in contrast to the results of <cit.>, we have found that the current version of OpenLink Virtuoso generates wrong answers for queries that contain alternative property paths. Virtuoso generates incomplete results for three alternative property path queries, which differ from the results of Fuseki and SparqLog by missing all duplicates that should have been generated.
§.§ Additional Empirical Results
In this section we provide additional empirical results that were requested by the reviewers. We will start this section by giving an outlook of the results first. Specifically, as requested by the reviewers, we have:
* conceptualized and implemented the translation of additional SPARQL features to cover any SPARQL feature that occurs in our benchmarks (outlined in Section <ref>),
* rerun the SP2Bench benchmark on a stronger machine (for detailed description of the benchmark setting see Section <ref>),
* compared our system to Fuseki and Virtuoso in Section <ref>,
* provided the complete benchmark results (including the loading and execution time) of Virtuoso and Fuseki in Section <ref>.
§.§ Full SPARQL Benchmark Coverage
As requested by the reviewer, we have implemented the remaining missing SPARQL features to fully cover all SPARQL features occurring in our selected benchmarks. Specifically, we have implemented the following features: ORDER BY with complex arguments (such as ORDER BY(!BOUND(?n))), functions on strings such as UCASE, the DATATYPE function, LIMIT, and OFFSET. Thereby, we have extended SparqLog to cover the 9 previously unsupported queries of the FEASIBLE benchmark, reaching our goal of supporting every query of the selected benchmarks.
§.§ Benchmark Setting
gMark. As suggested by the reviewer, we have evaluated SparqLog, Fuseki, and Virtuoso on the gMark benchmark <cit.>, a domain- and language-independent graph instance and query workload generator which specifically focuses on path queries, i.e., queries over property paths. Specifically, we have evaluated SparqLog's, Fuseki's, and Virtuoso's path query performance on the test[<https://github.com/gbagan/gMark/tree/master/demo/test>] and social[<https://github.com/gbagan/gMark/tree/master/demo/social>] demo scenarios. Each of these two demo scenarios provides 50 SPARQL queries and a graph instance. Since the graph instances consist of triples of entity and relation ids, we had to translate the graph instance to RDF by replacing any entity id α with <http://example.org/gMark/α> and any relation id β with <http://example.org/gMark/pβ>. Table <ref> provides further details on the benchmarks that we used for evaluating a system's query execution time.
Experimental Setup. Our benchmarks were executed on a system running openSUSE Leap 15.2 with dual Intel(R) Xeon(R) Silver 4314 16 core CPUs, clocked at 3.40GHz, with 512GB RAM of which 256GB are reserved for the system under test, and 256GB for the operating system.
For each SPARQL engine, we set the following limits: we set a time-out of 900s and, in response to the reviewer's request, we increased the available RAM per system from 8GB to 256GB. Similar to our initial benchmark setup, we started each benchmark by repeating the same warm-up queries 5 times and by loading and deleting the graph instance 5 times. Furthermore, we did 5 repetitions of each query (each time deleting and reloading the dataset), following the setting of our previous benchmark experiments. In the next section, we will discuss the benchmark results.
§.§ Benchmark Results
Table <ref> and <ref> reveal the results on the additional gMark benchmarks. Specifically, the tables state the number of queries of the respective benchmark that a system (1) does not support, (2) answered with a time- or mem-out (out-of-memory) exception, or (3) answered with an incomplete result. Furthermore, the tables present the total number of queries which could not be (correctly) answered by the systems. Figures <ref> and <ref> visualize the query execution time of the three systems per benchmarks. A bar reaching 900s represents a time-out. A missing bar represents a mem-out, a faulty result, or that a query was not supported. We have excluded query 31 from the gMark social benchmark and query 15 from the gMark test benchmark, as none of the three systems managed to answer these queries. In the following, we compare the results of the three systems on gMark:
Virtuoso could not (correctly) answer 48 of the in total 100 queries of the gMark Social and Test benchmarks. Thus, it could not correctly answer almost half of the queries provided by both gMark benchmarks, which empirically reveals its dramatic limitations in answering complex property path queries. In 20 of these 48 cases, Virtuoso returned an incomplete result. While in only 3 of the incomplete-result cases Virtuoso missed a single tuple in the returned result multi-set, in the remaining 17 incomplete-result cases Virtuoso produced either the result tuple null or an empty result multi-set instead of the correct non-null/non-empty result multi-set. In the other 28 cases Virtuoso failed either due to a time-out or mem-out, or due to not supporting a property path with two variables. This exemplifies severe problems with handling property path queries.
Fuseki reached a time-out on 37 of the in total 100 queries of the gMark Social and Test benchmarks (i.e., it took longer than 900s to answer these queries). Thus, it timed out on more than a third of the gMark queries, which empirically reveals its significant limitations in answering complex property path queries.
SparqLog managed to answer 98 of gMark's (in total 100) queries within less than 200s and timed out on only 2 queries (see Figures <ref> and <ref>). This result reveals the strong ability of our system in answering queries that contain complex property paths. Furthermore, each time both Fuseki and SparqLog returned a result, the results were equal, further empirically confirming the correctness of our system (i.e., that our system follows the SPARQL standard).
SP2Bench Rerun. We have rerun the SP2Bench benchmark using the same benchmark setting as for gMark. Thus, as requested by the reviewer we have rerun SP2Bench with significantly increased RAM and compared not only to Fuseki but additionally to Virtuoso. We have visualized the result in Figure <ref> and found that SparqLog reaches highly competitive performance with Virtuoso and significantly outperforms Fuseki on most queries.
In conclusion, these three benchmarks show that SparqLog (1) is highly competitive with Virtuoso on regular queries with respect to query execution time (see Figure <ref>), (2) even follows the SPARQL standard much more accurately than Virtuoso and supports more property path queries than Virtuoso (see Tables <ref> and <ref>), and (3) dramatically outperforms Fuseki on query execution, while keeping its ability to follow the SPARQL standard accurately.
§.§ Complete Benchmark Results
Tables <ref>-<ref> display the complete results of the respective benchmarks. Specifically the tables contain the query id, the system's loading time, query execution time, and total time which is the sum of the query execution and loading time. Furthermore, the column Res Equal indicates whether the result of Fuseki and Virtuoso is equal to the one from and additionally states whether each system encountered a time-out, mem-out, or not supported exception. Note that the result of Fuseki and is always the same when Fuseki manages to answer the query, which further exemplifies that both Fuseki and follow the SPARQL standard precisely. However, Virtuoso violates the SPARQL standard often (indicated by many false entries in the Res Equal column of the gMark benchmarks).
§.§ Ontological Reasoning Results
One of the main advantages of our system is that it provides
a uniform and consistent framework for reasoning and querying Knowledge Graphs.
We therefore wanted to measure the performance of query answering in the
presence of an ontology. Since Fuseki and Virtuoso do not provide such support, we now compare SparqLog with Stardog, which is a commonly accepted state-of-the-art system for reasoning and querying within the Semantic Web.
Furthermore, we have also created a benchmark based on SP2Bench's dataset that provides property path queries and ontological concepts such as subPropertyOf and subClassOf. We have provided this benchmark in the supplementary material.
Figure <ref> shows the outcome of these experiments.
In summary, we note that SparqLog is faster than Stardog on most queries. Particularly interesting are queries 4 and 5, which contain recursive property path queries with two variables. On query 4, our engine needs only about a fifth of the execution time of Stardog, and it can even answer query 5, on which Stardog times out (using a timeout of 900s). On the other queries, Stardog and SparqLog perform similarly.
§ MORE ON DATALOG^±
Recall from Section <ref> that query answering under an ontology defined by
a Datalog^± program comes down to solving an entailment problem. More precisely, let Q(z⃗) be a CQ with free variables z⃗ over database D and let
an ontology be expressed by a Datalog^± program Π.
Then the answers to Q(z⃗) over D under ontology Π are defined as {a⃗ | Π ∪ D ⊨ Q(a⃗)}, where a⃗ is a tuple of the same arity as z⃗ with values from the domain of D.
Canonical model and the chase
Note that Π ∪ D can have many models. A canonical model is obtained via the chase, which is defined as follows:
We say that a rule ρ ∈ Π with head p(z⃗) is applicable to an instance I if there exists a homomorphism h from the body of ρ to I.
We may then carry out a chase step, which consists in
adding the atom h'(p(z⃗)) to instance I,
where h' coincides with h on all variables occurring in
the body of ρ and h' maps each existential variable in p(z⃗) to a fresh labelled null not occurring in I.
A chase sequence for database D and program Π is a sequence of instances I_0, I_1, … obtained by applying a sequence of chase steps, starting with I_0 = D. The union of the instances obtained by all possible
chase sequences is referred to as Chase(D,Π).
The labelled nulls in Chase(D,Π) play the same role as
blank nodes in an RDF graph, i.e., resources for which the concrete value
is not known. The importance of Chase(D,Π) comes from the
equivalence Π ∪ D ⊨ Q(a⃗) ⇔ Chase(D,Π) ⊨ Q(a⃗) <cit.>.
Note however that, in general, Chase(D,Π) is infinite. Hence, the previous
equivalence does not yield an algorithm to evaluate a CQ Q(z⃗) w.r.t. database D and program Π.
In fact, without restriction, this is an undecidable
problem <cit.>.
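To illustrate the mechanics of a single chase step (and only that; the sketch below is a naive, oblivious variant with no termination guarantee, consistent with the undecidability remark above), consider the following Python fragment, in which atoms are tuples of the form ('pred', t_1, …, t_n), variables start with '?', and existential head variables are listed explicitly:

from itertools import count

_fresh = count()

def homomorphisms(body, instance, subst=None):
    """Yield homomorphisms (dicts) mapping the body atoms into the instance."""
    subst = subst or {}
    if not body:
        yield dict(subst)
        return
    pred, *args = body[0]
    for fact in instance:
        if fact[0] != pred or len(fact) != len(body[0]):
            continue
        local, ok = dict(subst), True
        for a, v in zip(args, fact[1:]):
            if isinstance(a, str) and a.startswith('?'):
                if local.setdefault(a, v) != v:
                    ok = False
                    break
            elif a != v:
                ok = False
                break
        if ok:
            yield from homomorphisms(body[1:], instance, local)

def chase_step(rule, instance):
    """Apply one rule (body, head, existential_vars) to the instance; return new facts."""
    body, head, existentials = rule
    new = set()
    for h in homomorphisms(body, instance):
        for x in existentials:
            h[x] = "_null{}".format(next(_fresh))   # fresh labelled null
        fact = (head[0],) + tuple(h.get(a, a) for a in head[1:])
        if fact not in instance:
            new.add(fact)
    return new

Repeatedly applying chase_step until no new facts are produced corresponds to one (possibly non-terminating) chase sequence.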
Several subclasses of Datalog^±have thus been
presented
<cit.>
that ensure decidability of CQ answering
(see <cit.> for an overview).
Bag semantics of Datalog
In <cit.>,
a bag semantics of Datalog was introduced
based on derivation trees.
Given a database D and a Datalog program Π,
a derivation tree (DT)
is a tree T with node and edge labels, such that either (1) T consists of a single node
labelled by an atom from D, or (2) Π contains a rule ρ: H ← A_1, A_2, …, A_k with k>0,
and there exist DTs T_1, …, T_k whose root nodes are
labelled with atoms C_1, …, C_k such that A_1, …, A_k are simultaneously matched to C_1, …, C_k by applying
some substitution θ,
and T is obtained as follows: T has a new root node r with label Hθ, and the k root nodes of the DTs T_1, …, T_k are appended as child nodes of r in this order.
All edges from r to its child nodes are labelled
with ρ. Then the bag semantics of program Π over database D consists of all ground atoms
derivable from D by Π, and the multiplicity m ∈ ℕ ∪ {∞} of each such atom A is the number of possible DTs
with root label A. Datalog with bag semantics
is readily extended
by stratified negation <cit.>:
the second condition of the definition of DTs now
has to take
the negative body atoms in a rule ρ: H ← A_1, A_2, …, A_k, ¬B_1, …, ¬B_ℓ with k>0 and ℓ≥0 and head atom H from some stratum i into account, in that we request that none of the atoms B_1θ, …, B_ℓθ can be derived
from D via the rules in Π from strata less than i.
Bag semantics via set semantics of Warded Datalog^±
In <cit.> it was shown how Datalog with bag semantics
can be
transformed into Warded Datalog^± with set semantics. The idea
is to replace
every predicate P(…) by a
new version P(·; …) with an extra first argument
to accommodate
a labelled null which is interpreted as a tuple ID (TID). Each rule
in Π of the form
ρ: H(x̅) ← A_1(x̅_1), A_2(x̅_2), …, A_k(x̅_k), with k>0 and x̅ ⊆ ⋃_i x̅_i,
is then transformed into the Datalog^± rule
ρ': ∃z H(z;x̅) ← A_1(z_1;x̅_1), A_2(z_2;x̅_2), …, A_k(z_k;x̅_k),
with fresh, distinct variables z, z_1, …, z_k.
Some care (introducing auxiliary predicates)
is required for rules with negated body atoms so as not to
produce unsafe negation. A Datalog rule ρ: H(x̅) ← A_1(x̅_1), …, A_k(x̅_k),
¬B_1(x̅_k+1), …, ¬B_ℓ(x̅_k+ℓ) with x̅_k+1, …, x̅_k+ℓ ⊆ ⋃_i=1^k x̅_i is replaced by ℓ+1 rules in the corresponding
Datalog^± program Π': the rule ρ'_0: ∃z H(z;x̅) ← A_1(z_1;x̅_1), …, A_k(z_k;x̅_k),
¬Aux_1^ρ(x̅_k+1), …, ¬Aux_ℓ^ρ(x̅_k+ℓ), and the rules ρ'_i: Aux_i^ρ(x̅_k+i) ← B_i(z_i;x̅_k+i) for i = 1, …, ℓ.
The resulting Datalog^± program Π' is trivially warded since the rules
thus produced contain no dangerous variables at all. Moreover, it is proved in
<cit.> that an atom P(a⃗) is in the DT-defined bag semantics
of Datalog program Π over database D with multiplicity m ∈ ℕ ∪ {∞},
iff Chase(D,Π') contains atoms of the form P(t;a⃗) for m distinct labelled nulls t (i.e., the tuple IDs).
http://arxiv.org/abs/2307.04038v1 | 20230708195154 | Prediction of short stellar activity cycles using derived and established empirical relations between activity and rotation periods | [
"A. k. Althukair",
"D. Tsiklauri"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Department of Physics and Astronomy, School of Physical and Chemical Sciences, Queen Mary University of London,
Mile End Road, London, E1 4NS,
UK; [email protected], [email protected]
Physics Department, College of Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, PO Box 84428, Saudi Arabia
Received 20xx month day; accepted 20xx month day
In our previous work, we searched for super-flares on different types of stars, focusing on G-type dwarfs using entire Kepler data to study statistical properties of the occurrence rate of super-flares. The said study also considered how the statistics change with stellar rotation period, which in turn, had to be determined. Using such new data, as a by-product, we found 138 Kepler IDs of F and G types main sequence stars with rotation periods less than a day (P_ rot<1 d). On one hand, previous studies have revealed short activity cycles in F-type and G-type stars and the question investigated was whether or not short-term activity cycles are a common phenomenon in these stars. On the other hand, extensive studies exist which establish empirical connection between a star's activity cycle and rotation periods. In this study, we compile all available Kepler data with P_ rot<1 d and derive, as well as use plausible, established empirical relations between P_ cyc and P_ rot with the aim to provide predictions for very short 5.13≤ P_ cyc≤ 38.14 d cases in a tabular form. As a result, we invite others to measure P_ cyc using monitoring program of stellar activity (e.g. activity-related chromospheric emission S-index) or similar means for the Kepler IDs found in this study in order put to test the derived and/or established empirical relations between P_ cyc and P_ rot. We also propose an alternative method for measuring very short P_ cyc, using flare-detection algorithms applied to future space mission data.
Prediction of short stellar activity cycles using derived and established empirical relations between activity and rotation periods.
A. K. Althukair^1,2 and D. Tsiklauri^1
§ INTRODUCTION
The 11-year cycle of solar activity, discovered by Schwabe in 1844 <cit.>, is a significant phenomenon in solar and stellar physics. The cycle is manifested by a periodic change in solar activity, including the appearance of sunspots and changes in the Sun's magnetic field
on this time-scale. Smoothed sunspot numbers have been widely used as a proxy for solar activity over the past four centuries <cit.>.
The idea of the sunspot number was first introduced by <cit.> in the mid-19th century, and it has since become a standard measure for quantifying solar activity. These numbers reveal that there are almost regular cycles of about 11 years, reflecting the Sun's magnetic activity.
During the course of a solar cycle, the Sun experiences alternating periods of strong and weak activity known as solar maximum and minimum <cit.>. As the solar cycle progresses, the magnetic field becomes more complex and twisted. This results in the emergence of sunspots, which are dark areas on the surface of the Sun with intense magnetic fields, vary in size and can last from days to several months <cit.>, decaying into bright areas called faculae formed by smaller magnetic concentrations <cit.>. During the active phase of the solar cycle (solar maximum), the number and size of sunspots increase and appear at the solar surface. At the same time, bright faculae also become more prominent. As the cycle progresses, the number of sunspots decreases, the overall brightness of the Sun remains relatively constant and the Sun enters its least active phase of the solar cycle (solar minimum). These dark and bright features on the Sun's surface contribute to the variability in the total solar irradiance (TSI) <cit.>. Therefore, the TSI data can capture the combined effects of the evolving dark and bright features during the solar cycle <cit.>.
Cyclic activity has been observed in stars other than the Sun through long-term brightness changes associated with increased occurrence of active regions on their surfaces or in their lower stellar atmospheres <cit.>. The Mount Wilson HK program, which started in 1966 and lasted until the end of the 20th century, was the first to conduct a systematic search for activity cycles in main sequence stars <cit.>. By analysing
chromospheric emission in the spectral lines of Ca II H&K, as the magnetic field connected to active regions on the surfaces of stars plays an important role in transporting energy into the chromosphere. This increased energy input into the chromosphere leads to enhanced chromospheric emission, which can be observed prominently in the cores of the Ca II H&K spectral lines <cit.>. The measure of the chromospheric emission strength is described by the Mount Wilson S-index <cit.> or by the quantity R^'_ HK <cit.>. <cit.> investigated the chromospheric activity levels in main-sequence F-G-K-M stars by measuring the chromospheric CaII H&K emission fluxes. They noted that these stars display varying degrees of chromospheric activity and observed a noticeable lack in the number of F-G stars displaying intermediate activity compared to both highly active and less active stars. They suggested that the absence of such stars could be attributed to a decline in chromospheric activity as the stars age. <cit.> examined the relationship between chromospheric activity, specifically the R^'_ HK activity index, and the Rossby number Ro = P_ rot/τ_ c for a sample of main-sequence stars of spectral type F or later. Where P_ rot is the rotational period of the star and τ_ c is a theoretically derived convective turnover time. They found a strong correlation between the R^'_ HK activity index and the Rossby number. However, in contrast to the findings of <cit.>, <cit.> did not find any signs of the "Vaughan-Preston gap". <cit.> investigated the empirical relation between rotation period P_ rot, spectral type, and activity cycle period P_ cyc for 13 slowly rotating main-sequence stars. They found that the cycle period is related to the rotation period by a power law: P_ cyc∝ P_ rot^ 1.25. This relationship can alternatively be expressed as
P_ cyc≈ Ro^1.25≈ (P_ rot/τ_ c)^1.25 <cit.>. For stars of spectral type G0-K5, <cit.> observed a pattern of variation in the rotation period and the measure of chromospheric activity (S-index). Their research revealed that the chromospheric activity levels were high in young stars with fast rotation periods. Chromospheric activity and rotation rates of stars in the intermediate age range were average. Alternatively, the chromospheric activity levels were low in old stars with slow rotation periods. This observation supports the existence of the Vaughan-Preston gap <cit.>, indicating that chromospheric activity and rotation change over time as the stars age. The relation between rotation periods and activity cycles of a sample of stars was investigated by <cit.>, who discovered a correlation between the two variables. In particular, they observed that stars with slower rotation periods exhibit longer activity cycles, while stars with faster rotation periods tend to have shorter activity cycles. According to <cit.>, the relation between rotation periods and cycle lengths is more evident for stars with shorter activity cycles. However, the association becomes less clear for longer cycle lengths when considering more recent findings on the time variability of solar cycles.
<cit.> investigated the behaviour and activity cycles of four fast-rotating late-type stars with (P_ rot≤ 0.5 days), highlighting the presence of 1-year cycles and the correlation between rotation rate and cycle length. <cit.> used the short-term Fourier transform, a time-frequency analysis method, to examine the light curves of 39 fast-rotating late-type active stars with rotation periods of less than one day. Nine of the selected stars showed indications of activity cycles with periods between 300 and 900 days. These cycles were inferred from the changing typical latitude of the starspots on the stellar surface and due to the differential rotation of the stellar surface, the observed rotation period of the stars varied over the activity cycle. This variation in the rotation period was attributed to the movement and evolution of starspots at different latitudes of the star. <cit.> used four years of Kepler data to determine the cyclic variations in the amplitude of the light curve and the rotation period of stars by analysing a sample of active stars and calculating the rotation period and variability amplitude for each star in each Kepler quarter. Then they searched for periodic variations in these time series using Lomb-Scargle periodograms and employed a false alarm probability (FAP) criterion for selection. The study's findings indicate that amplitude periodicities, associated with underlying activity cycles, are detected in 3203 stars with cycle periods ranging from 0.5 to 6 years and rotation periods ranging from 1 to 40 days. According to <cit.> analysis of new observations and previous data, the longer and shorter cycle periods closely match expectations based on the average activity levels and rotation periods, which indicates a connection between stellar activity and stellar rotation. <cit.> reported an activity cycle of 11.6 years in the F-type star τ Boo (HD 120136). However, the authors assigned a FAP "poor" grade to this finding. <cit.> detected an activity cycle with a duration of 122 days in their analysis of the S-index data of τ Boo. This short activity cycle periods suggest that τ Boo may exhibit variations on a relatively short timescale. <cit.> focused on exploring the presence of short-term activity cycles in F-type stars, specifically using S-index time series data obtained with the TIGRE telescope. They utilized the generalized Lomb-Scargle periodogram method to analyze the data and search for periodic variations with a maximum length of 2 years. Their sample of F-type stars identified four stars that exhibited cyclic variations with periods of less than a year. However, compared to solar-type stars with well-developed cyclic activity, the amplitude of these short-term cyclic variations in F-type stars was smaller. Based on their findings, <cit.> concluded that the activity behaviour among F-type stars differs from that of the Sun and cooler main sequence stars. By studying 44 main-sequence stars with confirmed activity cycles, and rotation periods, <cit.> examined the relation between the length of the activity cycle and the Rossby number (Ro). They used empirical turnover periods based on the B-V colour index to calculate Rossby numbers, from which they deduced an empirical relationship between the Rossby number and the cycle duration. The study showed linear behaviour in the double-logarithmic relationship between the Rossby number and cycle period. 
In addition, the relative convection zone depth was found to be correlated with cycle length and convective turnover time.
In paper I <cit.>, we looked for super-flares on different types of stars and focused on G-type dwarfs, using the entire Kepler data set to study
various aspects of the statistical properties of the occurrence rate of super-flares.
In paper II <cit.>, as a by-product, we found thirteen peculiar Kepler IDs that are Sun-like, slowly rotating with rotation periods of 24.5 to 44
days, and yet can produce a super-flare and six G-type and four M-type Kepler IDs with exceptionally large amplitude super-flares. As noted previously,
these detections defy our current understanding of stars and hence deserve a further investigation.
In this paper III, the last in this series, we use an empirical connection between a star's activity cycle and rotation period for a sample of F- and G-type main sequence stars with rotation periods of less than one day.
Here our aim is to provide predictions for very short activity cycle cases in a tabular form and to investigate in the future whether these short activity cycles are a common phenomenon in these stars or not. Section <ref> provides the target selection method. Section <ref> presents the method used in this work which includes the empirical connection relation between P_ cyc and P_ rot. The main findings of the study are presented in Section <ref>, and section <ref> concludes this work with our main conclusions.
§ RELATION BETWEEN ACTIVITY CYCLE AND ROTATION PERIOD
<cit.> model of the α–Ω dynamo introduced the concept of migratory dynamo waves, which play a crucial role in generating the observed solar cycle <cit.>. The α–effect, arising from the twisting of rising magnetic field tubes due to Coriolis forces, creates the poloidal magnetic field required for the next sunspot cycle. This effect is responsible for the reversal of magnetic polarities between successive cycles <cit.>. On the other hand, the Ω–effect, resulting from the differential rotation of the star, generates a toroidal magnetic field by stretching the magnetic field lines in a longitudinal direction. The combination of the α–effect and the Ω–effect leads to the formation of migratory dynamo waves, where the toroidal field is periodically regenerated and transformed into the poloidal field through the action of the α–effect. These migratory dynamo waves propagate and interact within the star's convective zone, causing the cyclic variations in the magnetic field <cit.>.
According to <cit.>,
the magnetic cycle period for G and K dwarfs with convective turnover times (τ_ c) between 11 and 26 days is found to be related to the rotation period as follows:
1/P_ cyc∝(τ_ c / P_ rot)^n,
where n is 1.25.
We quote theoretical prediction of the relation between
star's activity cycle and its rotation periods, which is
equation (6) in <cit.>:
P_ mag_cyc=2 P_ cyc≈√(R_⋆l)P_ rot.
According to the simple theoretical arguments quoted by <cit.>,
the magnetic cycle period P_ mag_cyc is proportional to the rotation period P_ rot. However, there is a modifying factor, l/R_⋆ the relative depth of turbulence, which depends on the stellar structure, which itself may depend on the effective temperature or B-V colour index of the star. Also l here is the length scale of turbulence and R_⋆ is the stellar radius.
§ METHODS
In our study, we adopt the terminology used by <cit.> to categorize branches into two types: the "inactive" branch, referred to as the short-cycle branch P_ cyc^S, and the "active" branch, referred to as the long-cycle branch P_ cyc^L. These terms were first introduced in <cit.>. According to <cit.>, this notation is more accurate and aligned with the actual characteristics of the branches. Therefore, they suggested that these terms should be used in future studies to refer to the two branches.
§.§ Reproduction of <cit.> P_ cyc^S vs. P_ rot Fit
In this subsection, we reproduced the fit between P_ cyc^S and P_ rot data from <cit.> to derive the fit parameters. First, we collected the data in Table<ref>, the first 32 rows, from <cit.>, where we obtained the 32 activity cycles on the short-cycle branch P_ cyc^S calculated by <cit.> along with the 32 corresponding rotation periods P_ rot. These cycle lengths and rotation periods can be found in Table 1. Then we plotted in logarithmic scale the rotation periods on the x-axis versus the calculated cycle period on the y-axis as shown in Figure <ref>, using the empirical relation in <cit.> between the cycle periods and rotation periods in logarithmic terms that is given by:
log P_ cyc≈ a+n log P_ rot.
Since the theoretical relation, equation <ref>,
implies a linear connection between P_ cyc and P_ rot, we fitted the data using a least-squares fit in Python, a common technique for determining the best-fitting parameters of a given model, for two different slope treatments as in <cit.>. We also computed the R^2 coefficient of determination to measure how well the model fits the data; an R^2 value of 1 means that the predictions from the regression fit the data perfectly. First, we set the slope n to 1 and deduced the value of the a parameter as a = 1.923 ± 0.025 with R^2= 0.89. The red line in Figure <ref> illustrates this trend. Then we repeated the fit treating the slope n as a free parameter to derive the a and n values, so that equation <ref> now becomes:
log P_ cyc≈ (1.458 ± 0.074)+(1.348 ± 0.054) log P_ rot.
with R^2= 0.95. The blue line in Figure <ref> represents this fit. It is obvious that the n = 1 relation does not fit the short-period data, as <cit.> pointed out.
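For concreteness, the fitting procedure described above can be sketched in Python as follows; the arrays below contain only a small illustrative subset of the (P_ rot, P_ cyc^S) pairs from Table <ref>, so the resulting parameters will not reproduce the quoted values exactly, and all variable names are ours.

import numpy as np

# A small subset of (P_rot [d], P_cyc^S [yr]) pairs copied from Table 1, for illustration only;
# the fit in the text uses all 32 short-branch cycles.
p_rot = np.array([25.4, 44.0, 38.5, 35.2, 5.7, 8.5, 11.1, 11.4, 48.0, 15.0])
p_cyc = np.array([10.3, 11.7, 9.9, 9.2, 0.9, 1.4, 2.6, 1.6, 11.7, 2.4])

x, y = np.log10(p_rot), np.log10(p_cyc)

# Fit with the slope fixed to n = 1: only the intercept a is free,
# and the least-squares solution is simply the mean residual.
a_fixed = np.mean(y - x)

# Fit with both a and n free (ordinary least squares in log-log space).
n_free, a_free = np.polyfit(x, y, 1)

# Coefficient of determination R^2 for the free-slope fit.
y_hat = a_free + n_free * x
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(a_fixed, a_free, n_free, r2)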
Comparing the values of the a and n parameters here with <cit.>, we find slight differences. In <cit.>, a = 1.918 ± 0.027 for the fit with n=1, while for the fit where n is treated as a free parameter, a= 1.488 ± 0.092 and n= 1.324 ± 0.067. We noticed two additional points in Figure 1 of <cit.>, which belong to the stars HD 100563 and HD 201092. These stars have rotation periods of 7.73 ± 0.04 and 37.8 ± 7.4, respectively, corresponding to cycle lengths of 0.609 ± 0.009 and 11.7 ± 0.4, respectively. Their P_ cyc were taken from <cit.> and <cit.>, respectively, and have not been calculated by <cit.>. We do not have these two points because our plot includes only data computed by <cit.>. We also noticed that the locations of some points in our plot differ from those in the plot of <cit.>, despite using the same data set. We believe these reasons led to the slight difference in the fit parameters between this work and <cit.>.
§.§ Data representation and fit
In this subsection, we repeat the fit between P_ rot and P_ cyc^S using a larger data sample taken from other previous studies. This sample, shown in Table <ref>, contains 94 P_ rot values and their 94 corresponding P_ cyc^S. The star ID, spectral type (Sp), colour index (B-V), effective temperature (T_ eff), P_ rot and P_ cyc are shown in Table <ref>; unavailable data are left blank. 32 of the P_ cyc^S were calculated by <cit.> (the first 32 lines in Table <ref>); the other P_ cyc^S were taken from <cit.>. It should be noted that the 32 star IDs whose P_ cyc^S were calculated by <cit.> were used again in the fit, but with the P_ cyc^S calculated by others. In other words, we used two P_ cyc^S values for each of these 32 star IDs, one calculated by <cit.> and the other calculated by another work, except for KIC 10644253, for which we collected three P_ cyc^S calculated by <cit.>. Also, HD 16673 has multiple entries due to the multiple sources, as shown in Table <ref>. References for each P_ rot and P_ cyc^S are given in Table <ref>.
In the same way as in subsection <ref>, we used the empirical relation between P_ rot and P_ cyc in logarithmic scale given by equation <ref> with the new data set in Table <ref> to produce the fit parameters a and n. We performed a least-squares fit in Python with two different slope treatments again, one with the slope n fixed to 1 and another with n treated as a free variable. This fit is shown in Figure <ref>. For the fit with a fixed slope of 1, we determined a= 1.889 ± 0.023 and R^2= 0.83; this trend is shown by the red line in Figure <ref>. For the fit with the slope n treated as a free variable, we deduced a=1.583 ± 0.064, n=1.257 ± 0.051 and R^2= 0.87; this fit is represented by the blue line in Figure <ref>, so that equation <ref> now becomes
log P_ cyc≈ (1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot.
We note that our value of n=1.257 ± 0.051 with the extended dataset is
closer to the n=1.25 of <cit.> than the n= 1.324 ± 0.067 of <cit.>.
Table 1: List of star IDs with their parameters, used in previous studies.
Columns: HD/KIC | T_ eff | B-V | τ_ c | P_ rot[d] | Ref | P_ cyc^S[yr] | Ref
Sun 5777 0.642 33.94 25.4±1 1 10.3 15
HD 3651 5211 0.850 61.18 44 1 11.7 15
HD 4628 5120 0.890 65.19 38.5±2.1 1 9.9 15
HD 10476 5244 0.836 59.83 35.2±1.6 1 9.2 15
HD 10780 5321 0.804 56.87 22.14±0.55 2 5.6 15
HD 16160 5060 0.918 68.16 48±4.7 1 12.4 15
HD 16673 6183 0.524 18.02 5.7 3 0.9 15
HD 17051 6045 0.561 21.98 8.5±0.1 1 1.4 15
HD 22049 5140 0.881 64.27 11.1±0.1 1 2.6 15
HD 26965 5282 0.820 58.33 43 1 11.5 15
HD 30495 5804 0.632 32.16 11.4±0.2 1 1.6 15
HD 32147 4801 1.049 83.93 48 1 11.7 15
HD 43587 5876 0.610 28.58 22.6±1.9 4 10.4 15
HD 75332 6089 0.549 20.60 4.8 5 0.5 15
HD 75732 5167 0.869 63.05 37.4±0.5 6 9.7 15
HD 76151 5714 0.661 37.58 15 1 2.4 15
HD 100180 6013 0.570 23.06 14 1 3.4 15
HD 103095 5449 0.754 52.52 31 1 9.6 15
HD 120136 6245 0.508 16.54 3.05±0.01 7 0.3 15
HD 128621 5098 0.900 66.24 36.2±1.4 1 9.2 15
HD 140538 5645 0.684 42.51 20.71±0.32 8 4.5 15
HD 146233 5741 0.652 35.81 22.7±0.5 1 7.2 15
HD 149661 5265 0.827 58.98 21.1±1.4 1 5.3 15
HD 160346 4975 0.959 72.75 36.4±1.2 1 9 15
HD 165341 A 5188 0.860 62.16 19.9 1 4.9 15
HD 166620 5151 0.876 63.76 42.4±3.7 1 11.1 15
HD 185144 5366 0.786 55.26 27.7±0.77 2 7.3 15
HD 190406 5910 0.600 27.09 13.9±1.5 1 2.6 15
HD 201091 4764 1.069 86.64 35.4±9.2 1 8.3 15
HD 219834 B 5055 0.920 68.38 43 1 11 15
KIC 8006161 5234 0.840 60.21 29.8±3.1 1 7.7 15
KIC 10644253 5943 0.590 25.67 10.9±0.9 1 1.8 15
HD 16673 6183 0.524 18.02 7.4±0.07 5 0.85 5
HD 49933 3.45 5 0.58 5
HD 75332 6089 0.549 20.60 4.8 5 0.49 5
HD 100563 7.73 5 0.61 5
τ Boo 0.480 14.23 3.5 5 0.33 5
Kepler 87 12.59±0.03 9 3.5 16
KIC 10644253 6030 0.590 25.67 10.91±0.87 10 1.5 17
solar analog HD 30495 5826 0.632 32.16 11.36±0.17 11 1.67±0.35 11
solar analog HD 45184 5871 0.620 30.16 19.98±0.02 12 5.14 12
61 Cyg A HD 201091 4545 1.069 86.64 35.7±1.9 13 7.2±1.3 13
102712791 0.277 4.79 0.96±0.03 14 0.09±0.008 14
102720703 0.514 17.08 10.2±0.6 14 0.512±0.055 14
102721955 0.431 10.94 2.17±0.06 14 1.118±0.071 14
102723038 1.404 147.52 8.6±0.5 14 1.682±0.151 14
102726103 0.767 53.62 3.7±0.1 14 0.321±0.022 14
102738457 0.592 25.95 12.9±0.6 14 1.781±0.356 14
102749950 0.657 36.78 5.4±0.2 14 0.655±0.06 14
102750723 1.143 97.45 1.44±0.02 14 0.277±0.022 14
102754736 0.480 14.23 6.9±0.3 14 0.29±0.019 14
102758108 0.641 33.75 6.1±0.2 14 0.301±0.022 14
102770332 2.055 415.00 4.2±0.1 14 1.162±0.112 14
102770893 0.874 63.56 4.3±0.2 14 0.759±0.058 14
102777006 1.177 102.86 1.33±0.02 14 1.17±0.123 14
102778595 1.157 99.64 11.8±0.7 14 0.575±0.019 14
102780281 1.304 125.85 3±0.1 14 0.551±0.041 14
Sun 5778 0.660 37.38 25.4±1 1 11±2 1
HD 3651 5128 0.840 60.21 44 1 13.8±0.4 1
HD 4628 5035 0.890 65.19 38.5±2.1 1 8.6±0.1 1
HD 10476 5188 0.840 60.21 35.2±1.6 1 9.6±0.1 1
HD 16160 4819 0.980 75.21 48±4.7 1 13.2±0.2 1
HD 17051 6053 0.570 23.06 8.5±0.1 1 1.6 1
HD 22049 5152 0.880 64.17 11.1±0.1 1 2.9±0.1 1
HD 26965 5284 0.820 58.33 43 1 10.1±0.1 1
HD 30495 5780 0.630 31.82 11.4±0.2 1 1.7±0.3 1
HD 32147 4745 1.060 85.41 48 1 11.1±0.2 1
HD 76151 5675 0.670 39.44 15 1 2.5±0.1 1
HD 78366 5915 0.630 31.82 9.7±0.6 1 5.9±0.1 1
HD 81809 5623 0.800 56.51 40.2±3 1 8.2±0.1 1
HD 100180 5942 0.570 23.06 14 1 3.6±0.1 1
HD 103095 5035 0.750 52.19 31 1 7.3±0.1 1
HD 114710 5970 0.580 24.33 12.3±1.1 1 9.6±0.3 1
HD 128620 5809 0.710 48.98 22.5±5.9 1 19.2±0.7 1
HD 128621 5230 0.880 64.17 36.2±1.4 1 8.1±0.2 1
HD 146233 5767 0.650 35.42 22.7±0.5 1 7.1 1
HD 149661 5199 0.800 56.51 21.1±1.4 1 4±0.1 1
HD 160346 4797 0.960 72.86 36.4±1.2 1 7±0.1 1
HD 166620 5000 0.900 66.24 42.4±3.7 1 15.8±0.3 1
HD 190406 5847 0.610 28.58 13.9±1.5 1 2.6±0.1 1
HD 201091 4400 1.180 103.35 35.4±9.2 1 7.3±0.1 1
HD 201092 4040 1.370 139.77 37.8±7.4 1 11.7±0.4 1
KIC 8006161 5488 0.840 60.21 29.8±3.1 1 7.4±1.2 1
KIC 10644253 6045 0.590 25.67 10.9±0.9 1 1.5±0.1 1
HD 165341 A 5023 0.780 54.74 19.9 1 5.1±0.1 1
HD 219834 A 5461 0.800 56.51 42 1 21±1 1
HD 219834 B 5136 0.910 67.30 43 1 10±0.2 1
HD 10780 5321 0.804 56.87 22.14±0.55 2 7.53±0.16 2
HD 16673 6183 0.524 18.02 5.7 3 0.847±0.006 5
HD 43587 5876 0.610 28.58 22.6±1.9 4 10.44±3.03 4
HD 75732 5167 0.869 63.05 37.4±0.5 6 10.9 18
HD 185144 5366 0.786 55.26 27.7±0.77 2 6.66±0.05 2
HD 120136 6245 0.508 16.54 3.05±0.01 7 0.333±0.002 7
HD 140538 5645 0.684 42.51 20.71±0.32 8 3.88±0.02 8
Notes: The table lists the star IDs with their corresponding B-V values, effective temperature T_ eff, the convective turnover time τ_ c calculated with the relation in <cit.>, the rotation period P_ rot with its reference number, and the short-branch cycle period P_ cyc^S with its reference number.
References: (1) <cit.>, (2) <cit.>, (3) <cit.>, (4) <cit.>, (5) <cit.>, (6) <cit.>, (7) <cit.>, (8) <cit.>, (9) <cit.>,
(10) <cit.>, (11) <cit.>, (12) <cit.>, (13) <cit.>, (14) <cit.>, (15) <cit.>, (16) <cit.>, (17) <cit.>, (18) <cit.>.
§.§ Data Samples
One of the main challenges in studying the relation between cycle length and rotation period is the small number of well-known and accurately measured activity cycles. This limitation introduces uncertainties in the derived empirical relations <cit.>. To overcome these challenges, it is crucial to obtain more reliable cycle periods, particularly for long-period cycles. Achieving this requires long-term time-series observations of stars to gather comprehensive and accurate data on their activity cycles <cit.>. Therefore, when looking for activity cycles, it is more efficient to monitor fast-rotating objects, as cycles can be discovered within a few years of observation, as opposed to stars with longer rotation periods <cit.>. For this reason, we chose our sample for this study to include fast-rotating main-sequence stars of type F and G from Kepler data with well-known rotation periods of less than one day. First, we collected all Kepler IDs which have well-known rotation periods. We then selected targets with rotation periods of less than a day. Using Gaia Data Release 2 (Gaia-DR2), we identified F- and G-type main-sequence stars by their effective temperatures and radii based on the Harvard spectral classification. The ranges of effective temperature are 6000-7500 K and 5200-6000 K for F and G types, respectively. We thus obtained a total of 811 Kepler IDs of F- and G-type stars with rotation periods of less than one day. Applying the radius restriction for main-sequence stars of 1.15-1.4 R_⊙ and 0.96-1.15 R_⊙ for F and G types, respectively, reduced the final data sample to 138 Kepler targets: 83 F-type and 55 G-type main-sequence stars. 71.74% of the rotation periods for these stars were taken from <cit.>, 15.94% from <cit.>, 5.07% from <cit.>, 4.35% from <cit.> and 2.90% from <cit.>. These 138 Kepler targets are listed in Table <ref> with their effective temperature, radius, rotation period and the references for these rotation periods.
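The selection cuts described above can be summarised by the following Python sketch; the catalogue object and its column names ('teff', 'radius', 'prot') are hypothetical placeholders for however the Kepler and Gaia-DR2 data are actually stored.

import pandas as pd

def select_fast_fg_dwarfs(catalog: pd.DataFrame) -> pd.DataFrame:
    """Apply the selection cuts described in the text to a catalogue with
    (hypothetical) columns 'teff' [K], 'radius' [R_sun] from Gaia DR2 and 'prot' [d]."""
    fast = catalog[catalog["prot"] < 1.0]                                   # P_rot < 1 d
    is_f = fast["teff"].between(6000, 7500) & fast["radius"].between(1.15, 1.40)
    is_g = fast["teff"].between(5200, 6000) & fast["radius"].between(0.96, 1.15)
    return fast[is_f | is_g]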
§ RESULTS
Using a data set of 138 Kepler IDs with P_ rot ranging from 0.202 d to 0.997 d, we provide
predictions for the corresponding values of their P_ cyc^S by applying the empirical relation between P_ cyc and P_ rot with the derived parameters in Equation <ref>. Hence we
obtained the predicted values of P_ cyc from
P_ cyc≈ 10^(1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot.
From equation <ref>, we calculated 138 P_ cyc values for the 83 F-type and 55 G-type main-sequence stars whose rotation periods are less than a day. The shortest P_ cyc is 5.13 d, while the longest is 38.14 d. All 138 predicted P_ cyc are listed in Table <ref>.
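A minimal Python helper evaluating this relation with the central fit values (uncertainties on a and n ignored) is given below for reference; the function name is ours.

import numpy as np

def predicted_cycle(p_rot, a=1.583, n=1.257):
    """Evaluate the empirical relation P_cyc = 10**(a + n*log10(P_rot)) with the
    central fit values quoted above (uncertainties on a and n are ignored here)."""
    return 10.0 ** (a + n * np.log10(np.asarray(p_rot, dtype=float)))

# End points of the rotation-period range of the 138 Kepler targets:
print(predicted_cycle([0.202, 0.997]))  # ~5.13 and ~38.14, as quoted in the text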
Table 2: List of the 138 Kepler IDs with their parameters and predicted P_ cyc.
Columns (repeated twice across the page): KIC | T_ eff | R_⊙ | P_ rot[d] | Ref | P_ cyc[d]
757099 5521 1.05 0.36 1 10.60 6877871 6508 1.40 0.54 2 17.73
1028018 5544 1.14 0.62 2 21.03 6948098 6095 1.29 0.57 3 18.76
1721795 6534 1.31 0.89 2 32.93 6961285 5802 0.98 0.45 2 13.99
1872192 5316 0.98 0.67 2 23.31 6962901 5601 0.97 0.98 2 37.37
2557335 5568 1.01 0.24 2 6.20 7199002 6381 1.24 0.57 2 18.89
2558273 6673 1.35 0.99 2 37.85 7199013 5286 0.96 0.57 2 18.89
2715228 6374 1.30 0.99 1 37.80 7199037 6024 1.36 0.57 2 18.89
2715410 5997 1.11 0.90 1 33.53 7354297 5481 1.05 0.95 2 35.99
2849645 5424 1.06 1.00 2 38.14 7461022 6168 1.28 0.59 2 19.76
2985825 6783 1.23 0.94 3 35.18 7678509 6644 1.22 0.96 2 36.51
3124412 6302 1.21 0.93 1 34.94 7707736 5644 1.09 0.76 2 27.11
3241517 6283 1.34 0.78 3 28.19 7816211 6050 1.32 0.29 2 8.08
3352959 6476 1.37 0.76 2 27.07 7909399 6574 1.40 0.82 2 30.01
3356577 6746 1.39 0.63 4 21.58 7915824 6231 1.39 0.74 2 26.22
3448722 5872 1.13 0.41 2 12.60 7973882 5512 1.06 0.35 2 10.27
3448817 6792 1.33 0.95 4 35.78 8016369 6734 1.34 0.77 1 27.56
3459311 5789 1.05 0.98 2 37.37 8043256 6680 1.27 0.93 2 34.71
3550386 6006 1.30 0.32 2 9.10 8144578 6639 1.32 0.59 2 19.85
3836772 6210 1.32 0.69 2 23.88 8197275 5604 1.14 0.44 2 13.52
3869099 5607 1.01 0.29 2 7.94 8264155 6738 1.33 0.91 4 34.08
4175618 5369 1.05 0.41 2 12.60 8264659 5417 1.12 0.97 1 36.84
4283120 6202 1.25 0.52 2 16.71 8285970 5639 1.14 0.57 2 18.72
4374659 5824 1.03 0.23 2 5.87 8313378 6624 1.31 0.54 2 17.73
4386947 5681 1.14 0.65 2 22.10 8382253 5695 1.01 0.63 3 21.37
4464528 6392 1.38 0.22 2 5.81 8393626 5893 1.15 0.43 2 13.06
4464530 6545 1.30 0.22 2 5.77 8420730 5770 1.08 0.25 2 6.53
4570231 5661 0.99 0.54 1 17.64 8651921 6473 1.29 0.95 2 35.65
4660562 5677 0.96 0.77 1 27.56 8687209 5650 1.00 0.77 1 27.56
4762130 6202 1.35 0.80 2 28.78 8804962 6586 1.23 0.90 2 33.53
4774370 6546 1.36 0.93 2 34.85 8892124 5263 1.01 0.72 2 25.38
4816098 6239 1.29 0.95 1 35.89 8916436 6566 1.35 0.87 1 32.13
4850965 5503 1.04 0.61 2 20.40 9146690 5387 1.11 0.72 2 25.20
4949214 6511 1.36 0.92 2 34.52 9206726 6876 1.31 0.46 4 14.61
4949350 6587 1.40 0.88 2 32.37 9306290 5571 1.04 0.82 2 29.97
4949766 6587 1.39 0.81 2 29.19 9393015 5877 1.01 0.24 2 6.40
5038288 5785 0.99 0.88 2 32.51 9456932 5875 0.97 0.53 2 17.24
5107198 6077 1.36 0.36 2 10.67 9474101 5945 1.10 0.21 2 5.32
5273178 6774 1.32 0.88 2 32.65 9594038 6694 1.31 0.94 4 35.56
5397765 6251 1.34 0.94 2 35.47 9640204 6620 1.33 0.53 2 17.32
5426665 6323 1.38 0.39 2 11.80 9640472 6076 1.34 0.34 2 9.68
5444276 6475 1.31 0.71 2 24.71 9710612 5867 1.08 0.39 2 11.80
5450307 6398 1.24 0.99 3 37.85 9730249 6479 1.34 0.91 2 33.77
5480545 6535 1.31 0.93 2 35.09 9896552 6279 1.26 0.87 1 32.13
5514866 5487 0.97 0.28 2 7.66 9897710 5840 1.08 0.43 2 13.21
5514871 5220 1.06 0.28 2 7.66 9965888 5589 1.13 0.31 2 8.82
5543840 6518 1.20 0.82 2 29.69 9970838 6429 1.25 0.96 2 36.42
5623538 6729 1.32 0.99 1 37.80 10023062 6469 1.38 0.89 2 33.11
5623852 5886 1.10 0.57 2 18.89 10134084 5926 1.00 0.55 5 18.06
5629449 6897 1.31 0.71 1 24.89 10490282 5504 1.05 0.79 2 28.42
5646176 6302 1.20 0.99 1 37.80 10614890 5283 1.06 1.00 2 38.14
5795235 6517 1.36 0.91 2 34.00 10809099 6051 1.31 0.91 2 33.91
5898014 6697 1.35 0.83 2 30.20 11017401 5648 1.09 0.80 2 28.96
5988566 6299 1.20 0.44 2 13.52 11018874 6454 1.30 0.99 2 37.99
6114118 6234 1.24 0.94 2 35.32 11247377 6184 1.38 0.40 2 12.02
6114140 6384 1.16 0.93 3 35.13 11349677 6076 1.23 0.84 1 30.75
6145032 6315 1.28 0.81 1 29.37 11400413 6781 1.34 0.76 4 27.27
6149358 6660 1.28 0.89 2 32.93 11498689 5464 1.10 0.31 2 8.78
6219870 5663 1.05 0.81 1 29.37 11653059 6160 1.26 0.29 2 8.08
6224148 6230 1.18 0.20 2 5.13 11924842 5494 1.13 0.84 5 30.75
6385867 5306 1.06 0.58 1 19.30 11969131 6444 1.23 0.63 1 21.42
6386598 6658 1.37 0.76 2 27.20 12067121 6211 1.33 0.43 5 13.25
6391602 5782 0.99 0.42 2 12.83 12108612 5695 1.09 0.71 2 24.76
6421219 6191 1.36 0.79 2 28.51 12119534 5296 0.98 0.64 2 21.97
6449077 6366 1.31 0.94 2 35.51 12121738 6134 1.31 0.73 2 25.73
6529902 6604 1.38 0.29 2 8.08 12157161 6513 1.26 0.78 2 27.79
6693864 6846 1.35 0.86 1 31.67 12157799 6117 1.17 0.89 5 33.07
6836589 5628 1.15 0.73 2 25.91 12354328 5251 0.97 0.81 2 29.33
6846595 6718 1.26 0.99 1 37.80 12356839 5605 1.14 0.35 2 10.05
6854461 6547 1.39 0.95 3 36.03 12418959 6427 1.36 0.78 2 28.10
Notes: Effective temperature T_ eff and radius R_⊙ were taken from Gaia-DR2.
References: (1) <cit.>, (2) <cit.>, (3) <cit.>, (4) <cit.>, (5) <cit.>.
After predicting the values of the activity cycles for our data sample, which is extended compared to <cit.>, we wish to examine the theoretical prediction given by Equation 2 for short cycles, P_ cyc < 1 yr.
This is because the latter equation is a theoretical prediction based on first physical principles,
as opposed to an empirical fit, which lacks any theoretical or conceptual justification.
Therefore, we focused on the activity cycles derived from previous studies, as presented in Table 1. We chose 20 stars whose P_ cyc is less than a year and plotted the fit between P_ rot and P_ cyc, as shown in Figure <ref>, using a simple linear regression without an intercept, given by
P_ cyc [ yr]= n P_ rot [ d].
We obtained a slope of n= 0.081 ± 0.009 with an R^2 value of 0.997, which indicates a good fit despite the large scatter.
Note that P_ cyc here is in years, as in Figure 14 from <cit.>.
Therefore, for the lower and upper bounds of our
138 Kepler IDs with P_ rot ranging from 0.202 d to 0.997 d,
this simple, theoretically justified equation predicts
P_ cyc=0.081×0.202×365.25=5.98 d and 0.081×0.997×365.25=29.50 d,
which are not very different from the values of
5.13 d and 38.14 d, respectively, obtained with the more accurate power-law fit of equation <ref>.
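The no-intercept fit and the two bound estimates above can be reproduced with the short Python sketch below; the slope function assumes the 20 (P_ rot, P_ cyc) pairs with P_ cyc < 1 yr from Table <ref> would be passed in, and the printed values use the quoted slope n = 0.081.

import numpy as np

def slope_through_origin(p_rot, p_cyc):
    """Least-squares slope n of P_cyc = n * P_rot with no intercept
    (inputs would be the 20 stars with P_cyc < 1 yr from Table 1)."""
    p_rot, p_cyc = np.asarray(p_rot, dtype=float), np.asarray(p_cyc, dtype=float)
    return np.sum(p_rot * p_cyc) / np.sum(p_rot ** 2)

# Bounds quoted in the text for the fitted slope n = 0.081 (P_cyc in yr, P_rot in d):
n = 0.081
for p in (0.202, 0.997):
    print(p, n * p * 365.25)  # about 5.98 d and 29.50 d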
Finally, we examine the dependence of the convective turnover time, τ_c, on the
B-V colour index, as in Figure 3 from <cit.>.
In general, direct measurements of convective turnover time are not possible. However, its estimation is possible by analysing stars' rotation and activity data.
As pointed out by <cit.>, scaling the rotation periods with a colour- or mass-dependent τ_c can reduce the scatter in the relation between rotation and activity, leading to a broken power-law fit between activity and the Rossby number, as e.g. in <cit.>.
<cit.> present a comprehensive study of the convective turnover time, τ_c, and its dependence on stellar metallicity and age for main-sequence stars with masses between 0.6-1.6 M_⊙. They also
remark that there is substantial variation between different models:
e.g. <cit.>, using chromospheric and coronal data, obtained a significantly flatter curve for B-V > 0.8 than the widely used relation of <cit.> (see figure 4 from <cit.>).
We plot the convective turnover time, τ_c, vs.
the B-V colour index in Figure <ref>.
Figure <ref> uses the following expressions
for the dependence of the convective turnover time τ_c on the B-V colour index, as derived from <cit.>:
logτ_c = (1.06±0.07) + (2.33±0.37) ((B-V) - 0.44)
for 0.44 ≤ B - V ≤ 0.71, while for B - V > 0.71,
logτ_c = (1.69±0.12) + (0.69±0.13) ((B-V) - 0.71).
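For reference, a small Python implementation of this piecewise relation (central values only, uncertainties dropped) is given below; the spot checks use B-V colours taken from Table <ref>.

import numpy as np

def log10_tau_c(bv):
    """Piecewise relation quoted above (central values only, uncertainties dropped),
    intended for B-V >= 0.44; tau_c in days is obtained as 10**log10_tau_c."""
    bv = np.asarray(bv, dtype=float)
    blue = 1.06 + 2.33 * (bv - 0.44)   # 0.44 <= B-V <= 0.71
    red = 1.69 + 0.69 * (bv - 0.71)    # B-V > 0.71
    return np.where(bv <= 0.71, blue, red)

# Spot checks against the tau_c column of Table 1 (Sun: B-V = 0.642, HD 4628: B-V = 0.890):
print(10 ** log10_tau_c([0.642, 0.890]))  # ~33.9 and ~65.2 d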
As can be seen in Figure <ref>, our range of B-V colour is larger than that of the data from <cit.>.
§ CONCLUSIONS
In this work, we studied the empirical relation between
stellar activity cycles and rotation periods.
First, we reproduced the fit between P_ rot and P_ cyc using the data of <cit.>
and obtained the following fit parameters:
log P_ cyc≈ (1.458 ± 0.074) + (1.348 ± 0.054) log P_ rot,
which differ slightly from the values of <cit.>,
a= 1.488 ± 0.092 and n= 1.324 ± 0.067, most likely for
the reasons discussed above.
Then, using a larger data set made up of 94 P_ rot values and their 94 associated P_ cyc taken from prior studies, we re-examined the fit between P_ rot and P_ cyc and obtained the following fit parameters:
log P_ cyc≈ (1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot.
Using these new parameters, we applied this relation to a sample of 83 F-type and 55 G-type main-sequence stars whose rotation periods are less than one day, to provide tabular predictions for cases with very short activity cycles, in order
to determine in the future whether or not these short activity cycles are a common occurrence in these stars.
As a result we derived 138 predicted P_ cyc ranging from 5.13 d to 38.14 d, which are listed in Table <ref>.
The usefulness of measuring short stellar activity cycles
hinges on two main general difficulties:
(i) If a monitoring program of stellar activity (e.g. the activity-related chromospheric emission S-index or similar) is used,
as in references such as <cit.> or <cit.>, then the cadence of the observations is too long:
e.g. according to table 2 from the latter reference, the cadence could be 87 observations per year, i.e. one observation every 365/87 ≈ 4 days. Resolving activity cycles with 5.13≤ P_ cyc≤ 38.14 d at such a cadence would be nearly impossible.
(ii) If Kepler light curves are used, e.g. for plotting the number of flares per day vs. time, then a large number of flare detections would be necessary to have reliable statistics. However, the problem is the long cadence, 30 minutes, of the mainstream Kepler data. The photometer used by Kepler is sensitive to wavelengths ranging from 400 to 865 nm, covering the entire visible spectrum and a fraction of the infrared. The accuracy of the Kepler photometer is approximately 0.01% or 0.1 mmag when 30-minute integration times are used for stars with a magnitude of 12. Kepler's 30-minute integrations detected flare amplitudes of less than 0.1% of the stellar value and energies of 2×10^33 ergs. The duration of the flares ranged from one to three hours, with a rapid increase followed by a slow, exponential decline <cit.>. When Kepler data are taken at a higher cadence or sampling rate of one minute, the accuracy of the measurements decreases. However, this higher cadence enables Kepler to detect flares that are too brief to be detected reliably using the main 30-minute integrations. With the one-minute cadence, Kepler can detect flares with energies as low as 10^32 ergs <cit.>.
It is worth noting that earlier studies exist using different observations where the energy involved in the observed transient brightening is estimated to range from 10^25 to 10^29 erg <cit.>. Also, as far as the Sun is concerned, studies exist <cit.> which consider flare frequency as a function of flare energy in the range 10^27to 10^31 erg, but this is applicable to the Sun only.
In order to have good statistics for the Kepler IDs considered, we need to detect flares with energies of 10^27-10^32 ergs in order to see the variation of the number of flares per day on a time scale of 5.13≤ P_ cyc≤ 38.14 d.
To achieve this goal, a new space mission is necessary, with a short time cadence (< 1 minute) and photometric accuracy < 0.01%.
A typical example of such proposed sample data from the space mission is shown in figure <ref>.
An alternative option could be a shorter-cadence, ground-based S-index monitoring program of stellar activity with a cadence of ≈ 1 d or less. However, it is unclear
whether this is technically feasible.
In any case, the present study provides predictions for 5.13≤ P_ cyc≤ 38.14 d, and
we hope that future space- or ground-based observational missions will put our predictions to the test.
Until such time, the jury is still out.
§ ACKNOWLEDGEMENTS
Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts.
The authors would like to thank Deborah Kenny of STScI for kind assistance in obtaining the data, and Cozmin Timis and Alex Owen of Queen Mary University of London for assistance with data handling at the Astronomy Unit.
A. K. Althukair wishes to thank Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia and
Royal Embassy of Saudi Arabia Cultural Bureau in London, UK for the financial support of her PhD scholarship, held at Queen Mary University of London.
§ DATA AVAILABILITY
Some of the data underlying this article were accessed from Mikulski Archive for Space Telescopes (MAST) <https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html>. This paper also has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. The derived data generated in this research will be shared on reasonable request to the corresponding author.
|
http://arxiv.org/abs/2307.04139v1 | 20230709094632 | A Randomized Algorithm for Single-Source Shortest Path on Undirected Real-Weighted Graphs | [
"Ran Duan",
"Jiayi Mao",
"Xinkai Shu",
"Longhui Yin"
] | cs.DS | [
"cs.DS",
"68W20",
"F.2.2"
] |
In undirected graphs with real non-negative weights, we give a new randomized algorithm for the single-source shortest path (SSSP) problem with running time O(m√(log n ·loglog n)) in the comparison-addition model.
This is the first algorithm to break the O(m+nlog n) time bound for real-weighted sparse graphs by Dijkstra's algorithm with Fibonacci heaps.
Previous undirected non-negative SSSP algorithms give a time bound of O(mα(m,n)+min{nlog n, nloglog r}) in the comparison-addition model, where α is the inverse-Ackermann function and r is the ratio of the maximum-to-minimum edge weight [Pettie & Ramachandran 2005], and linear time for integer edge weights in the RAM model [Thorup 1999]. Note that there is a proposed complexity lower bound of Ω(m+min{nlog n, nloglog r}) for hierarchy-based algorithms for undirected real-weighted SSSP [Pettie & Ramachandran 2005], but our algorithm does not obey the properties required for that lower bound. As a non-hierarchy-based approach, our algorithm has the advantage of a much simpler structure and is much easier to implement.
§ INTRODUCTION
Shortest path is one of the most fundamental problems in graph theory, and its algorithms lie at the core of graph algorithm research. In a graph G=(V,E,w) with m=|E|, n=|V| and non-negative edge weights w:E→ℝ_≥ 0, the single-source shortest path (SSSP) problem asks for the distances from a given source s ∈ V to all other vertices.
Dijkstra's algorithm <cit.> computes the distances (s,u) by dynamic programming. For each vertex u, it maintains a tentative distance d(u), which represents the shortest path from s to u passing only through vertices in the current set S of visited vertices. In each iteration it selects the vertex u with the smallest d(u) among the unvisited vertices. Finally, when S=V, d(u)=(s,u) for every vertex u.
Advanced data structures with amortized O(1) time for insertion and decrease-key and O(log n) time for extract-min, namely the Fibonacci heap <cit.> and the relaxed heap <cit.>, bring the time bound of Dijkstra's algorithm to O(m+nlog n).
This time bound is in the comparison-addition model where only comparison and addition operations on edge weights are allowed and considered as unit-time operations, which is the most common model for real number inputs.
For undirected graphs, <cit.> proposed an SSSP algorithm with running time O(mα(m,n)+min{nlog n, nloglog r}) in the comparison-addition model, where α is the inverse-Ackermann function and r bounds the ratio of any two edge weights. However, no SSSP algorithm faster than O(m+nlog n) has been found for real-weighted graphs without ratio constraints, either for undirected or for directed graphs.
A byproduct of Dijkstra's algorithm is the sorting of all vertices by their distances from s, but comparison-based sorting algorithms are subject to an Ω(nlog n) lower bound. Researchers used to believe that this sorting bottleneck existed for many graph problems,
and breaking this bottleneck is an important and interesting direction. Yao <cit.> gave a minimum spanning tree (MST) algorithm with running time O(mloglog n), citing an unpublished result of O(m√(log n)) by Tarjan. The current best results for MST are the randomized linear time algorithm <cit.>, the deterministic O(mα(m,n))-time algorithm <cit.>, and a deterministic algorithm with proven optimal (but unknown) complexity <cit.>. In the bottleneck path problem, we want to find the path maximizing the minimum edge weight on it between two vertices. <cit.> gave an O(mlog^* n)-time algorithm for s-t bottleneck path problem in directed graphs, which was later improved to randomized O(mβ(m,n)) time <cit.>. For single-source all-destination bottleneck path problem in directed graphs, there is a recent result of O(m√(log n))-time randomized algorithm by <cit.>. For single-source nondecreasing path problem, Virginia V.Williams <cit.> proposed an algorithm with time bound O(mloglog n). All the results above are comparison-based though, the techniques in these works, such as local construction or divide-and-conquer approach, hardly works for the shortest path problem. Therefore it remains how to break the sorting bottleneck for SSSP.
§.§ Our Results
In this paper we propose the first SSSP algorithm for undirected real-weighted graphs that breaks the sorting bottleneck.
In an undirected graph G=(V,E,w) with nonnegative edge weights w:E→ℝ_≥ 0, there is a comparison-addition based Las Vegas randomized algorithm that solves the single-source shortest path problem in O(m√(log n·loglog n)) time; its results are always correct, and it achieves this time bound with high probability. The time complexity can be improved to O(√(mnlog n)+n√(log nloglog n)) when m=ω(n) and m=o(nlog n).
Note that there is a (worst-case) lower bound of Ω(m+min{nlog n, nloglog r}) in <cit.> for “hierarchy-based” algorithms for undirected real-weighted SSSP, but our algorithm is randomized and not hierarchy-based. See Remark <ref> for discussions.
Technical Overview.
The bottleneck of Dijkstra-based algorithm is the priority queue. For this reason, we only add a fraction of vertices into the priority queue. As in many works on distance oracles or spanners, we sample a subset of vertices R, and the heap is only for vertices in R, then we “bundle” every other vertex v to its nearest vertex in R, which is called (v). Then define (v) to be the set of vertices closer than (v) to v. Since the algorithm doesn't know the correct order of most vertices on a shortest path, relaxing only neighbours as in Dijkstra's algorithm doesn't work. So when popping a vertex u∈ R from the heap, we also deal with vertices v which are bundled to u. In an undirected graph, this also implies that |(s,u) - (s,v)| is not large. Here we relax v from vertices in (v) and their neighbors, then from v we relax neighbors of v and vertices in their balls. (To make it easier to describe, we first change the graph to a constant-degree graph with O(m) vertices.) Details of the algorithm will be discussed in Section <ref>, as well as the analysis of correctness and running time. The detailed construction of bundles so that the algorithm can achieve the time bound w.h.p. will be introduced in Section <ref>. The improvement of time complexity by relaxing the constant-degree constraint will be discussed in Section <ref>.
§.§ Other Related Works
The existence of algorithms faster than O(m+nlog n) for real-weighted SSSP has long been an open problem. Pettie and Ramachandran's algorithm <cit.> works better than O(nlog n) if the ratio between the maximum and minimum edge weights is not too large. For the integer-weighted case, the random access machine (RAM) model is usually adopted, where multiplications, shifts and Boolean operations on edge weights are allowed. There are many works on improving heaps and SSSP algorithms in the RAM model with integer weights <cit.>.
Finally Thorup gave a linear-time algorithm for undirected graphs <cit.> and O(m+nloglogmin{n,C}) for directed graphs <cit.> where C is the maximum edge weight.
Recently, almost-linear-time O(m^1+o(1)log C) algorithms for SSSP with negative weights have also been discovered <cit.>.
The all-pairs shortest path (APSP) problem asks for the shortest path between every pair of vertices u,v in the graph G. We can run Dijkstra's algorithm <cit.> from every vertex, which takes O(mn+n^2log n) time, or use the Floyd-Warshall algorithm <cit.> with running time O(n^3). Researchers have made many improvements since then <cit.>, but there is still no truly subcubic-time (O(n^3-ϵ) for some constant ϵ>0) APSP algorithm for real-weighted graphs, or even for graphs with integer weights in [0,n]. Williams <cit.> gave an APSP algorithm with running time n^3/2^Θ(√(log n)) for real-weighted graphs. For undirected real-weighted graphs, Pettie and Ramachandran's APSP algorithm <cit.> runs in O(mnlogα(m,n)) time, and for directed real-weighted graphs, Pettie <cit.> gave an APSP algorithm running in O(mn+n^2loglog n) time.
§ PRELIMINARIES
In this paper we work on an undirected graph G=(V, E, w) with vertex set V, edge set E⊆ V^2 and non-negative weight function w: E →ℝ_≥ 0, also written w_uv. In an undirected graph, w_uv = w_vu holds for all edges (u, v) ∈ E. We denote by n = V and m=E the numbers of vertices and edges in the graph, and by N(u) = {v: (u, v) ∈ E} the neighbors of u. For two vertices u, v∈ V, _G(u,v) is the length of the shortest path connecting u and v, namely the distance between u and v in the graph G. The subscript G is omitted when the context is clear. Let s be the source vertex. The goal of our algorithm is to find (s, v) for every v∈ V. Without loss of generality we assume that G is connected, so m≥ n-1.
Constant-Degree Graph. Throughout the paper we need a graph with constant degree. To accomplish this, given a graph G, we construct G' by a classical transformation (see <cit.>):
* Substitute each vertex v with a cycle of N(v) vertices x_vw (w∈ N(v)) connected by zero-weight edges; that is, for every neighbor w of v, there is a vertex x_vw on this cycle.
* For every edge (u,v) in G, add an undirected edge between corresponding vertices x_uv and x_vu with weight w_uv.
We can see that the distance _G'(x_uu', x_vv') = _G(u, v) for arbitrary u'∈ N(u) and v'∈ N(v). Each vertex in G' has degree at most 3, and G' is a graph with O(m) vertices and O(m) edges.
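A minimal Python sketch of this transformation is given below, assuming the input graph is stored as an adjacency dictionary adj[u] = {v: w_uv}; the function name and data layout are ours, not part of the original construction.

def make_constant_degree(adj):
    """Degree-reduction sketch: each vertex v becomes a zero-weight cycle of copies
    (v, w), one per neighbour w, and the copies (u, v), (v, u) are joined by the original
    weight w_uv.  `adj` is assumed to be {u: {v: w_uv}} for a connected undirected graph;
    the result has maximum degree 3 with O(m) vertices and edges."""
    new_adj = {}

    def add_edge(a, b, w):
        new_adj.setdefault(a, {})[b] = w
        new_adj.setdefault(b, {})[a] = w

    for v, nbrs in adj.items():
        ring = [(v, w) for w in nbrs]
        for i in range(len(ring)):
            if len(ring) > 1:                      # zero-weight cycle through the copies of v
                add_edge(ring[i], ring[(i + 1) % len(ring)], 0.0)
            else:
                new_adj.setdefault(ring[i], {})    # single copy, no cycle needed
        for w, weight in nbrs.items():             # original edge between the two copies
            add_edge((v, w), (w, v), weight)
    return new_adj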
Comparison-Addition Model. In this paper our algorithm works under comparison-addition model,
in which real numbers are subject to only comparison and addition operations. In this model, each addition and comparison takes unit time, and no other computations on edge weights are allowed.
Fibonacci Heap. Under such a model, it is possible to construct a Fibonacci heap H that spends amortized O(1) time on initialization, insertion and decrease-key operations, and O(logH) time on each extract-min operation <cit.>. When we extract the minimum element from the heap, we also say that element is “popped” from the heap.
§ MAIN ALGORITHM
In the following sections we assume that G is connected, and each vertex in G has degree no more than 3. (There are O(m) vertices and O(m) edges in G, but we still use O(log n) which is equivalent to O(log m) where n is the number of vertices in the original graph without degree constraints.)
Our algorithm is based on the original Dijkstra's algorithm <cit.>. Since the main bottleneck of Dijkstra's algorithm is the O(log n) time per vertex extracted from the Fibonacci heap <cit.>, we only insert a subset R ⊆ V of vertices into the heap. Each remaining vertex v∈ V ∖ R is bundled to its closest vertex in R. Throughout the algorithm, vertices are updated only when some vertex u∈ R is popped from the heap. Our algorithm consists of two stages: bundle construction and Bundle Dijkstra, whose details will be introduced in Section <ref> and <ref>, respectively.
To demonstrate the main idea of our algorithm, in this section we first give an algorithm that runs in expected O(m√(log n·loglog n)) time but not “with high probability”. In Section <ref> we give an improved version of the bundle construction stage, leading to an algorithm that runs in O(m√(log n·loglog n)) time with high probability. Both algorithms always give correct answers.
§.§ Bundle Construction
A simple version of bundle construction works as follows[One may notice that sampled set, closest sampled vertex and balls are common techniques in papers on shortest path algorithms, distance oracles and spanners, and there are deterministic construction algorithms for such “dominating set” (e.g. <cit.>), but the extra O(log n) factor on the size of the dominating set or the construction time introduced by deterministic approaches is not affordable here.] (k is a parameter to be determined later; a Python sketch is given after the list):
* Independently sample each vertex v∈ V∖{s} with probability 1/k to form set R, then add s into R.
* For each vertex v ∉ R, run Dijkstra's algorithm started from v until first vertex of R is extracted from the heap, denoted by (v). Therefore (v) is one of the closest vertices in R to v, i.e., (v) ∈min_u∈ R(u, v). We say that v is bundled to (v).
* For each u∈ R, let (u) = u, and (u) = {v: u = (v)} be the set of vertices bundled to u. By definition, {(u)}_u∈ R forms a partition of the vertex set V.
* For each vertex v ∉ R, define (v) = {w∈ V: (v, w) < (v, (v))}, that is, the set of vertices closer to v than its bundled vertex (v). In the previous Dijkstra's algorithm we can get (v) and also values of (v, w) for all w∈(v)∪{(v)}.
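The simple construction above can be sketched in Python as follows; a binary heap (heapq) stands in for the heap used in the analysis, the adjacency-dictionary format and all names (parent for the bundled vertex, ball for the set of closer vertices) are ours, and vertex labels are assumed hashable and comparable.

import heapq, random

def build_bundles(adj, source, k):
    """Sketch of the simple construction: sample R with probability 1/k, then run a
    truncated Dijkstra from every v not in R until the first sampled vertex pops.
    Returns R, parent[v] = (closest sampled vertex, its distance) and ball[v] = list of
    (y, dist(v, y)) strictly closer than the sampled vertex.  `adj` is {u: {v: w_uv}}
    for a connected undirected graph."""
    R = {v for v in adj if v == source or random.random() < 1.0 / k}
    parent, ball = {}, {}
    for v in adj:
        if v in R:
            continue
        dist, heap, popped = {v: 0.0}, [(0.0, v)], []
        while heap:
            d, x = heapq.heappop(heap)
            if d > dist[x]:
                continue                        # stale entry (lazy decrease-key)
            if x in R:
                parent[v] = (x, d)
                break
            popped.append((x, d))
            for y, w in adj[x].items():
                nd = d + w
                if nd < dist.get(y, float("inf")):
                    dist[y] = nd
                    heapq.heappush(heap, (nd, y))
        # ball(v): vertices strictly closer to v than its bundled vertex (v itself excluded).
        ball[v] = [(x, d) for x, d in popped if x != v and d < parent[v][1]]
    return R, parent, ball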
Time Analysis of Bundle Construction. For each vertex v∉ R, without loss of generality we assume its Dijkstra's algorithm breaks ties in a deterministic way. Therefore, the order of vertices extracted from the heap is fixed.
We can see that 𝔼[|R|]=O(m/k). For each vertex v∉ R, let S_v be the set of vertices extracted before its Dijkstra's algorithm stops, so that (v) ⊊ S_v. By the definition of R, S_v follows a geometric distribution with success probability 1/k, thus 𝔼[S_v]=k and 𝔼[(v)] ≤ k.
By the constant-degree property, the number of vertices ever added into the heap is also O(S_v), so the total time of the bundle construction is O(∑_v∈ V∖ R𝔼[S_vlogS_v])=O(mklog k) in expectation.
One may notice that xlog x is a convex function so that 𝔼[S_vlogS_v] = O(klog k) does not trivially hold. We present a simple proof here: (by geometric distribution 𝔼[S_v^2] = 2k^2 - k)
𝔼[S_vlogS_v] = ∑_n = 1^∞1/k(1 - 1/k)^n-1· nlog n
≤∑_n≤ k^21/k(1 - 1/k)^n-1· nlog n + ∑_n > k^21/k(1 - 1/k)^n-1· n^2
≤ 2log k ∑_n≤ k^21/k(1-1/k)^n-1· n + ∑_n = 1^∞1/k(1 - 1/k)^k^2 + (n - 1) (n + k^2)^2
≤ 2log k ·𝔼[S_v] + (1 - 1/k)^k^2·𝔼[(S_v+k^2)^2]
≤ 2 k log k + e^-k· O(k^4) = O(klog k).
§.§ Bundle Dijkstra
Given the set R and the partition of bundles, the main algorithm works as follows, with pseudocode given in Algorithm <ref>:
Initially we set d(s)=0 and d(v)=+∞ for all other vertex v, and insert all vertices of R into a Fibonacci heap <cit.>. Whenever we pop a vertex u∈ R from the heap, we update the distances by the following steps. (Here relaxing a vertex v by a value D means that we update d(v) by min{d(v), D}.)
* For every vertex v bundled to u, we need to find the exact value of (s,v). First relax v by d(u)+(u,v); then for every vertex y∈(v), relax v by d(y)+(y,v); and for every z_2 ∈(v)∪{v} and z_1∈ N(z_2), relax v by d(z_1)+w_z_1,z_2+(z_2,v). That is, we update d(v) by its bundled vertex u, vertices in (v), and vertices neighboring to v and (v).
* After updating d(x) for every x∈(u), we update the vertices y∈ N(x) and vertices z_1∈(y). That is, relaxing y by d(x)+w_x,y for all y∈ N(x) and then relaxing z_1 by d(x)+w_x,y+(y,z_1) for all z_1∈(y).
* Whenever we update a vertex v∉ R, we also relax its bundled vertex (v) by d(v)+(v,(v)). (But later we will see this is only needed in Step 2 but not Step 1, since in Step 1 v is bundled to u, but the distance (s,u) is already found when popping u from the heap.)
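A simplified Python sketch of these three steps is given below; a binary heap replaces the Fibonacci heap (so the stated time bound does not carry over literally), and the input names — parent[v] for the bundled vertex of v with its distance, ball[v] for the vertices closer to v than its bundled vertex, members[u] for the vertices bundled to u — are our own encoding of the structures defined in the bundle construction.

import heapq

def bundle_dijkstra(adj, s, R, parent, ball, members):
    """Sketch of Bundle Dijkstra with a binary heap standing in for the Fibonacci heap.
    Hypothetical input format:
      parent[v]  = (bundled vertex of v, dist(v, bundled vertex))   for v not in R
      ball[v]    = list of (y, dist(v, y)) strictly closer than the bundled vertex
      members[u] = list of vertices bundled to u (including u itself), for u in R."""
    INF = float("inf")
    d = {v: INF for v in adj}
    d[s] = 0.0
    heap = [(0.0, s)]
    popped = set()

    def relax(v, val):
        if val < d[v]:
            d[v] = val
            if v in R:
                heapq.heappush(heap, (val, v))      # lazy decrease-key
            else:
                b, dvb = parent[v]
                relax(b, val + dvb)                 # Step 3 (done on every update for simplicity)

    while heap:
        du, u = heapq.heappop(heap)
        if u in popped or du > d[u]:
            continue
        popped.add(u)
        for v in members.get(u, [u]):               # Step 1: settle the vertices bundled to u
            dvu = parent[v][1] if v in parent else 0.0
            relax(v, d[u] + dvu)
            for y, dyv in ball.get(v, []):
                relax(v, d[y] + dyv)
            for z2, dz2v in ball.get(v, []) + [(v, 0.0)]:
                for z1, w in adj[z2].items():
                    relax(v, d[z1] + w + dz2v)
        for x in members.get(u, [u]):               # Step 2: push updates outwards
            for y, w in adj[x].items():
                relax(y, d[x] + w)
                for z1, dyz1 in ball.get(y, []):
                    relax(z1, d[x] + w + dyz1)
    return d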
The following observation holds naturally from the algorithm.
d(v) ≥(s, v) always holds for all v ∈ V.
Time Analysis for Bundle Dijkstra. For the Bundle Dijkstra stage, only vertices in R are inserted into heap, thus the extract-min operation only takes O(Rlog n) time in total. Since every vertex in V∖ R only appears once as v and x in Step 1 and Step 2, respectively, and by constant degree property, every vertex appears constant times as the vertex y∈ N(x) in Step 2, so the number of vertices z_1,z_2 in Step 1 for every v is O(|(v)|), and the number of vertices z_1 in Step 2 for every y is O(|(y)|). Also note that the recursive call of Relax in Step 3 can only recurse once since (v)∈ R. So the total time for Step 1, 2 and 3 is O(∑_v∈ V∖ R|(v)|). Thus, the time of the bundle Dijkstra stage is 𝔼[O(|R|·log n+∑_v∈ V∖ R(v))] = O(m/klog n + mk) in expectation.
Now, we can see that the expected total time of the two stages is O(m/klog n + mklog k), which is minimized to O(m√(log n·loglog n)) if we choose k = √(log n/loglog n). We move to explain our main ideas of the correctness proof. A formal proof is given in Section <ref>.
Main ideas. The following propositions hold in the algorithm. (Here the iteration of u means the iteration performed when popping u∈ R; a real distance (s,v) is found means d(v)=(s,v) already holds.)
When popping u∈ R from the heap, its distance (s,u) is already found.
After Step 1 in the iteration of u, (s,v) for all v∈(u) are found.
The following lemmas contain the main ideas of the algorithm.
For any vertex u∈ R and any path P from s to u, if P goes through vertex y, (s,(y)) is at most the length of P.
(s, y) is at most the length of subpath of P from s to y. By definition of (y), (y, (y)) is at most the length of subpath of P from y to u. Concatenating two subpaths together, (s, (y)) ≤(s, y) + (y, (y)) is at most the length of P.
Lemma <ref> shows that for any vertex u∈ R, the shortest path from s to u only contains vertices y with (s, (y)) ≤(s, u). This is the intuition why vertices of R are popped in increasing order of (s,·). However, the shortest path from s to some vertex v∈(u) may go through some vertex y with (s, (y)) ≥(s, u), that is, (y) is still not popped from the heap. But surprisingly, with the ideas of Lemma <ref> we can deal with this case even before the iteration of (y).
For a vertex v∉ R, if the shortest path from s to v is shorter than (s,(v))+((v),v), and it goes through a vertex y (other than v) such that (s,(y))≥(s,(v)), then on the shortest path from y to v there are two adjacent vertices z_1,z_2 such that z_1∈(y)∪{y} and z_2∈(v)∪{v}.
We have (y,v)=(s,v)-(s,y) and (s,v)<(s,(v))+((v),v). By triangle inequality, (s,y)≥(s,(y))-(y,(y)), and by (s,(y))≥(s,(v)),
(y,v)<((v),v)+(y,(y))+(s,(v))-(s,(y))≤((v),v)+(y,(y))
Let z_1 be the last vertex on the shortest path from y to v satisfying (y,z_1)<(y,(y)), so z_1∈(y). Then z_2 will be the next vertex after z_1, so (y,z_2)≥(y,(y)), and (z_2,v)<(v,(v)), so z_2∈(v). (If (y,(y))=0 then z_1=y, and if (y,v)<(y,(y)) then z_2=v and z_1 is the vertex before v.)
Then we can see Proposition <ref> and <ref> hold throughout the algorithm iteratively: (A formal proof will be given in Section <ref>.)
* When we pop the source s from the heap, d(s)=0, and the distances (s,v) for all v∈(s) are found in the bundle construction step, and can be put in d(v) in Step 1.
* Assume Proposition <ref> holds for the first i vertices popped, so the real distances for all vertices bundled to popped vertices are found. By Step 2 and 3, we can see for all unpopped u∈ R, the distance (s,u) can be found if the shortest path from s to u does not go through vertices bundled to other unpopped vertices in the heap. If the next popped vertex u' does not satisfy this, let y be the first vertex on the shortest path from s to u' which is bundled to an unpopped vertex (y) other than u', so (s,(y)) can be found. By Lemma <ref>, (s,(y))≤(s,u'), so if d(u')>(s,u'), (y) will be the next popped vertex. Thus, (s,u') for the next popped vertex u' is found before it is popped.
* If an unpopped vertex u'∈ R is updated in the iteration of a popped vertex u, the new path to u' must go through a vertex in (u). By Lemma <ref>, d(u') cannot be updated to a value smaller than (s,u), so the unpopped vertices must have distances larger than or equal to that of any popped vertex.
* Thus when popping a vertex u∈ R, its distance (s,u) is already found. For all vertex v∈(u), if (s,v) is not directly obtained by d(u)+(u,v), that is, (s,v)<(s,u)+(u,v), let x be the last vertex on the shortest path from s to v such that (x) is popped before u, and let y be the next vertex after x. We can see (s,(y))≥(s,u), so by Lemma <ref>, we get such z_1 and z_2. Then from Proposition <ref> (s,x) can be found in Step 1 in the iteration of (x), then (s,z_1) can be found in Step 2 of that iteration. In this iteration of u, (s,v) can be set to (s,z_1)+w_z_1,z_2+(z_2,v) in Step 1, so Proposition <ref> still holds after this iteration.
§.§ Proof of Correctness
We give a formal proof based on the pseudocode of Algorithm <ref>. Define u_i∈ R as the vertex extracted in the i-th iteration of while-loop in Algorithm <ref>. Our key lemma in the following shows the main properties of the algorithm, therefore Bundle Dijkstra is correct no matter how R is chosen.
The following properties hold for any i≥ 1 in Bundle Dijkstra (Algorithm <ref>):
* When u_i is extracted from the heap, d(u_i) = (s, u_i) holds.
* After i-th iteration of the while-loop, d(u) ≥ d(u_i) for all u ∈ R \u_j_j≤ i.
* After Step 1 of i-th iteration of the while-loop, d(v)=(s,v) for all v ∈(u_i).
We shall prove the lemma by induction on i.
The lemma holds for i = 1 since d(s) = 0 and d(v) = (s, v) for all v ∈(s) after Line <ref>.
Suppose the lemma holds for every i ≤ t-1, consider the case i=t.
* Consider a shortest path P from s to u_t. Let x be the last vertex on P such that x∈(u_j) for some j < t, and y be the next one after x, hence y ∈(u) for some u ∈ R \u_ℓ_ℓ < t. By Property <ref> of induction hypothesis d(x) = (s, x) after Step 1 of j-th iteration. After that the algorithm updates d(y), and further d(u) in line <ref> since y ∈ N(x).
Therefore after (t-1)-th iteration d(y) = (s, x) + (x, y) = (s, y) and d(u) ≤(s, y) + (y, u). Further:
d(u) ≤ (s, y) + (y, u)
≤ (s, y) + (y, u_t) (y) = u
= (s, u_t) y on shortest path
≤ d(u_t) Observation <ref>.
On the other hand, the algorithm extracts u_t from Fibonacci heap H immediately after (t-1)-th iteration, thus d(u_t) ≤ d(u). So all the inequalities above should be equations, thus d(u_t) = (s, u_t).
* When executing line <ref> of t-th iteration, d(u) ≥ d(u_t) holds for every u ∈ R \u_j_j≤ t since H is a Fibonacci heap.
Suppose d(u) < d(u_t) for some u ∈ R \u_j_j≤ t after t-th iteration. The further updates on d(u) must start from d(x) for some x ∈(u_t). For last such update, applying Lemma <ref> on this path from s to x then to u, we have:
d(u) ≥ (s, u_t) Lemma <ref>
= d(u_t), Property <ref>
leading to contradiction.
* We want to show that d(v) = (s, v) holds for all v∈(u_t).
Suppose there exists a vertex v∈(u_t) such that d(v)>(s,v) after Step 1. Denote P as the shortest path from s to v. Let x be the last vertex on P such that x∈(u_j) for some j < t, and y be the next one after x on P, hence y ∈(u) for some u ∈ R \u_ℓ_ℓ < t. By Property <ref> of induction hypothesis, d(x) = (s, x) after Step 1 of j-th iteration. Same as above we can show that d(y) = (s, x) + (x, y) = (s, y) and d(u) ≤(s, y) + (y, u) (where u=(y)) before t-th iteration.
We have:
(s, y) ≥ d(u) - (y, u).
(s, v) < d(v) Assumption
≤ d(u_t) + (u_t, v) d(v) updated in Line <ref>.
On the other hand, d(u) ≥ d(u_t) after t-th iteration by Property <ref>. Since d(u_t) doesn't change (Property <ref> and Observation <ref>), and d(u) can only decrease in t-th iteration, so d(u) ≥ d(u_t) holds throughout t-th iteration. Hence:
(y, v) = (s, v) - (s, y) < (u_t, v) + (y, u),
while the equation holds since y lies on the shortest path from s to v.
Therefore there are two possible cases:
* (y, v) < (u_t, v).
In this case y ∈(v), so we can update d(v) to (s,y)+(y,v) on line <ref>, contradicting to d(v)>(s,v).
* (y, v) ≥(u_t, v).
First, by Inequality (<ref>), (y, u) > (y, v) - (u_t, v) ≥ 0. Let z_1 be the last vertex on path P with (y, z_1) < (y, u), we have z_1 ∈(y).
Let z_2 be the next vertex on the path, then (y, z_2)≥(y, u), so (z_2,v) = (y, v) - (y, z_2) ≤(y, v) - (y, u) < (u_t,v), that is, z_2∈(v).
(If z_2 does not exist, then z_1=v.)
By Property <ref> of induction hypothesis, d(x) = (s, x) just after Step 1 of j-th iteration, so d(z_1) is updated to (s, z_1) in line <ref> of j-th iteration. Therefore d(v) is updated to (s, v) in line <ref> of t-th iteration (since j < t), contradicting the assumption.
Therefore d(v)=(s,v) for all v ∈(u_t) after Step 1 of t-th iteration.
Pettie and Ramachandran <cit.> proved that any hierarchy-based SSSP algorithm on undirected graphs in the comparison-addition model takes time at least Ω(m + min{nloglog r, nlog n}), where r is the ratio of the maximum-to-minimum edge weight. This bound becomes Ω(m+nlog n) when r is exponentially large. Here a hierarchy-based algorithm is defined to generate a permutation π_s satisfying the hierarchy property: (s, v)≥(s, u) + (u, v) ⇒π_s(u) < π_s(v), where (u, v) is the longest edge on the MST path between u and v. The permutation π_s is, though not defined to be, typically the order in which the algorithm visits the vertices, for the algorithms discussed in <cit.>. However, that is a worst-case lower bound, and our algorithm is randomized. Moreover, the order in which our algorithm visits the vertices does not follow the hierarchy property: consider two vertices x and y that are both connected to u by edges (x,u), (y,u) of weight 1, and are both bundled to u. It is possible that (x,y)=1 and (s, y)= (s, x) + 2, but we place no restriction on the order in which we visit x and y, so it is possible that we visit y before x. This explains why our algorithm can break the Ω(m + nlog n) lower bound of <cit.>.
§ IMPROVED BUNDLE CONSTRUCTION
In this section we propose an improved bundle construction that runs in O(m√(log n·loglog n)) time with high probability.
In Section <ref> we showed that the correctness of Bundle Dijkstra does not depend on the choice of R, as long as (·), (·) and (·) are correctly computed with respect to R. The running time of the bundle construction is O(∑_v∈ V∖ RS_vlogS_v), and that of Bundle Dijkstra is O(∑_v∈ V∖ R(v) + Rlog n).
Naturally, S_v is a random variable following geometric distribution for each vertex v∈ V, and they are not independent since a vertex x∈ V may appear in several sets. However, for a subset W ⊆ V, if any vertex appears at most once in S_v_v∈ W, the corresponding random variables S_v_v∈ W are independent. By Lemma <ref>, if each random variable is dependent to few other variables, their summation deviates from the expectation with exponentially small probability. So we manually include all those vertices with S_v≥ klog k into R. In this way for each vertex in V∖ R, its random variable is dependent to only a limited number of other ones, and we can bound their summation with high probability.
We introduce how to generate R and compute {(v)}_v∈ V∖ R below, as well as {(v)}_v∈ V∖ R, {(u)}_u∈ R and (v, u) for u∈(v)∪{(v)}. The pseudocode is given in Algorithm <ref>. We still set parameter k = √(log n/loglog n) as in Section <ref>.
Improved Bundle Construction. (A simplified Python sketch is given after the step list.)
* Sample each vertex v∈ V∖{s} with probability 1/k to form set R_1 and add s into R_1;
* For each v∈ V∖ R_1, run Dijkstra's algorithm from v until we have extracted a vertex in R_1, or until we have already popped klog k vertices.
* In the former case, denote the extracted vertices in the order they appeared as list V_extract^(v). Note that V_extract^(v) is similar to S_v of Section <ref>.
In the latter case, add v into R_2;
* Set R = R_1∪ R_2, and for v∈ V∖ R, let the first vertex in V_extract^(v) that lies in R be (v);
* With the results above, compute (u) for u∈ R, (v) for v∈ V∖ R, and record (v, u) for u∈(v)∪{(v)}. This step takes linear time.
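A Python sketch of this improved construction is given below; as before, heapq replaces the heap of the analysis, the cap of klog k pops is taken literally, and all names are ours.

import heapq, math, random

def improved_bundles(adj, source, k):
    """Sketch of the improved construction described above: each local Dijkstra is
    truncated after about k*log(k) pops; truncated vertices are promoted into R_2, and
    the bundled vertex of v is the first extracted vertex lying in R = R_1 union R_2."""
    cap = max(1, int(k * math.log(max(k, 2))))
    R1 = {v for v in adj if v == source or random.random() < 1.0 / k}
    extract, R2 = {}, set()
    for v in adj:
        if v in R1:
            continue
        dist, heap, order = {v: 0.0}, [(0.0, v)], []
        while heap and len(order) < cap:
            d, x = heapq.heappop(heap)
            if d > dist[x]:
                continue                      # stale entry
            order.append((x, d))
            if x in R1:
                break
            for y, w in adj[x].items():
                nd = d + w
                if nd < dist.get(y, float("inf")):
                    dist[y] = nd
                    heapq.heappush(heap, (nd, y))
        extract[v] = order
        if not order or order[-1][0] not in R1:
            R2.add(v)                         # truncated before meeting R_1
    R = R1 | R2
    parent = {v: next((x, d) for x, d in extract[v] if x in R)
              for v in adj if v not in R}
    return R, parent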
The correctness of this bundle construction follows from Dijkstra's algorithm <cit.>. We only need to analyze R, ∑_v∈ V∖ R(v) and its running time. The performance of this improved bundle construction is characterized in Lemma <ref> below. By Lemma <ref>, the bundle construction takes O(mklog k) time, and Bundle Dijkstra takes O(∑_v∈ V∖ R(v) + Rlog n) = O(mk+mlog n/k) time with probability 1 - e^-Ω(n^1 - o(1)). Thus the total running time of our algorithm is O(mklog k + mlog n/k) = O(m√(log n·loglog n)) w.h.p. The proof of Lemma <ref> is based on Lemma <ref>.
By running Algorithm <ref>, with probability 1-e^-Ω(n^1 - o(1)), the following properties hold:
(a) R = O(m/k).
(b) ∑_v∈ V∖ R(v) = O(mk).
(c) The running time of Algorithm <ref> is O(mklog k).
First, each vertex of V∖{s} is inserted to R_1 independently with probability 1/k, so by Chernoff bound, with probability 1 - O(e^-m/k) = 1 - e^-Ω(n^1 - o(1)), R_1 = Θ(m/k), and meanwhile m' := V∖ R_1 = Θ(m).
For each vertex v∈ V∖ R_1, define X_v = 𝕀[v∈ R_2] and Y_v = V_extract^(v). Then, X_v is a Bernoulli random variable, X_v∈ [0, 1] with probability 1 and 𝔼[X_v] = (1 - 1/k)^klog k = Θ(1/k). And Y_v is a Geometric random variable except its value truncated at klog k, Y_v∈ [0, klog k] with probability 1 and 𝔼[Y_v] = k(1-(1-1/k)^klog k) = Θ(k).
For each vertex v∈ V∖ R_1, denote V_full^(v) as the first klog k vertices extracted in the Dijkstra algorithm if it did not truncate. They are determined by the structure of G, so there is no randomness in V_full^(v). The values of X_v and Y_v are determined by whether vertices in V_full^(v) were inserted into R_1. Therefore, if V_full^(v_1), V_full^(v_2), ⋯, V_full^(v_j) are disjoint, then X_v_1, X_v_2, ⋯, X_v_j are independent, and similarly, Y_v_1, Y_v_2, ⋯, Y_v_j are independent.
For each vertex w∈ V_full^(v), because w is found by v within klog k steps of Dijkstra's algorithm, there must exist a path from v to w of no more than klog k edges, so by constant degree property, there are at most 3·(1 + 2 + ⋯ + 2^klog k - 1)≤ 3· 2^klog k different u such that w∈ V_full^(u).
Hence, for each v, there are at most 3klog k · 2^klog k= O(n^o(1)) different u∈ V∖ R_1 such that V_full^(v)∩ V_full^(u)≠∅.
To apply Lemma <ref>, for each v∈ R_1, also define X_v, Y_v and V_full^(v) in the same way as if v is executed in the loop of Algorithm <ref>.
Now, we apply Lemma <ref> for {X_v}_v∈ V and {Y_v}_v∈ V. For {X_v}_v∈ V, S is V and V = m, μ = Θ(1/k), b = 1, T = O(n^o(1)), and {W_v}_v∈ V are { V_full^(v)}_v∈ V, and we can verify that 8Tbμ^-1 = O(n^o(1)) and 8b^3T/μ^3 = O(n^o(1)), so with probability at least 1 - e^-Ω(m/n^o(1)), it holds that ∑_v∈ SX_v = Θ(m/k). And for {Y_v}_v∈ V, S is V, μ = Θ(k), b = klog k, T = O(n^o(1)), and {W_v}_v∈ V are also { V_full^(v)}_v∈ V, and similarly we infer that with probability 1 - e^-Ω(m/n^o(1)), it holds that ∑_v∈ SY_v = Θ(mk). Thus, we conclude that with probability 1 - e^-Ω(n^1 - o(1)), ∑_v∈ V∖ R_1X_v = Θ(m/k) and ∑_v∈ V∖ R_1 Y_v = Θ(mk).
Then, we prove the three claims of this lemma.
For (a), by definition R = R_1 + R_2, so by union bound, with probability 1 - e^-Ω(n^1 - o(1)), R = R_1+ ∑_v∈ V∖ R_1X_v=O(m/k).
For (b), by definition (v)≤ Y_v, so ∑_v∈ V∖ R(v)≤∑_v∈ V∖ R_1Y_v. Thus, with probability 1 - e^-Ω(n^1 - o(1)), ∑_v∈ V∖ R(v) = O(mk).
For (c), we count the total time for the truncated Dijkstra algorithm in all iterations. For each vertex v∈ V∖ R_1, by constant degree property, the number of Insert operations is O(Y_v), so H^(v) = O(Y_v) = O(klog k). Therefore, each ExctractMin operation takes time O(log(Y_v)) = O(log k), and other operations takes constant time. Thus the truncated Dijkstra algorithm of v takes time O(Y_vlog k). Thus, with probability 1 - e^-Ω(n^1 - o(1)), the total time of Algorithm <ref> is O(∑_v∈ V∖ R_1Y_vlog k)=O(mklog k).
(Similar arguments as in <cit.>)
Suppose a set of random variables {Z_v}_v∈ S satisfy that for each v∈ S, 𝔼[Z_v] = μ, Z_v∈[0, b] with probability 1, and each Z_v is corresponded to a fixed deterministic set W_v such that, if W_v_1, W_v_2, ⋯, W_v_j are disjoint, then Z_v_1, Z_v_2, ⋯, Z_v_j are independent, and W_v intersects with at most T different W_u.
Then, with probability at least 1 - 8Tbμ^-1· e^-μ^3S/8b^3T, it holds that ∑_v∈ SZ_v = Θ(Sμ).
We try to partition {Z_v}_v∈ S into several subsets {𝒵_t} such that all Z_v's in each 𝒵_t are independent so that we can apply Hoeffding's inequality, or the size of 𝒵_t is small so that we can bound them by the upper bound b, and finally combine everything by the union bound. Also note that we do not need to actually compute {𝒵_t}, as they are merely introduced for this mathematical proof.
Fix parameter p = Sμ/4Tb. Since each W_v intersects with at most T different other W_u, whenever there are at least p(T+1) elements in {Z_v}_v∈ S, we can pick p different Z_v from them whose W_v are disjoint, so that they are independent: pick an arbitrary Z_v and discard those Z_u if W_u∩ W_v≠∅; since there are at most T such Z_u different from Z_v, every time we discard at most T+1 elements. So from p(T+1) elements we can pick p of them.
We let them form a 𝒵_t and remove them from {Z_v}_v∈ S. Repeating this process we will end up with a partition {𝒵_1, 𝒵_2, ⋯, 𝒵_q, 𝒵_q+1} of {Z_v}_v∈ S such that: 𝒵_t = p, and all Z_v∈𝒵_t are independent for 1≤ t≤ q; 𝒵_q+1≤ p(T+1)≤ 2pT = μ/2bS. By definition μ≤ b, so 𝒵_q+1≤1/2S.
Then by Hoeffding's inequality, for each 1≤ t≤ q,
[ |∑_v∈𝒵_tZ_v - 𝒵_tμ| > 1/2𝒵_tμ] ≤ 2e^-2(1/2𝒵_tμ)^2/(𝒵_tb^2) = 2e^-μ^2p/2b^2,
and 0≤∑_v∈𝒵_q+1Z_v≤𝒵_q+1b with probability 1.
By union bound, with probability at least 1 - 2qe^-μ^2p/2b^2,
∑_v∈ SZ_v ≥1/2∑_t=1^q𝒵_tμ = 1/2(S - 𝒵_q+1)μ≥1/2(S-1/2S)μ = 1/4Sμ,
and meanwhile
∑_v∈ SZ_v≤3/2∑_t=1^q𝒵_tμ + 𝒵_q+1b≤3/2Sμ + μ/2bS· b = 2Sμ.
And from S≥∑_t=1^q𝒵_t≥ qp, we conclude that q ≤S/p = 4Tb/μ. Thus, with probability at least 1 - 8Tbμ^-1e^-μ^3S/8b^3T, it holds that ∑_v∈ SZ_v = Θ(Sμ).
§ DISCUSSION
We gratefully acknowledge an anonymous reviewer for pointing out that constant degree is not a necessary condition for this algorithm, which yields an improved time complexity when m=ω(n) and m=o(nlog n). Instead of reducing the graph to degree 3, we use a similar method to split each vertex of degree >m/n into vertices of degree ≤ m/n, so that the number of vertices is still O(n); a schematic sketch of this splitting is given at the end of this section. Then in each step:
* In bundle construction, the time for Dijkstra search for every vertex v will become O(|S_v|·m/n+|S_v|log (|S_v|·m/n)), since the size of the heap is at most |S_v|·m/n, so in total O(mk+nklog(mk/n)).
* The time for Bundle Dijkstra will become O(n/klog n+ mk), since the number of vertices z_2 in Step 1 for every v is O(m/n|(v)|), and the number of vertices z_1 in Step 2 for every y is O(|(y)|) but each vertex appears O(m/n) times as y in Step 2.
* When m/n=o(log n), one can check that the analysis of independence in Section <ref> still works, since the number of different u∈ V∖ R_1 which have V_full^(v)∩ V_full^(u)≠∅ for each v is still O(n^o(1)).
Thus, the time complexity of this algorithm is O(n/klog n+ mk + nklog(mk/n)). When m<nloglog n, k still equals √(log n/loglog n), and the time bound is O(n√(log nloglog n)). When nloglog n≤ m< nlog n, we take k=√(n/mlog n), and the time bound becomes O(√(mnlog n)).
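As a rough illustration of the vertex splitting mentioned above, the sketch below replaces every vertex of degree larger than a threshold delta (playing the role of m/n) by a chain of copies joined by zero-weight edges, each copy carrying at most delta original edges; the data layout and the zero-weight-chain choice are assumptions for illustration, not necessarily the exact construction used here.

from collections import defaultdict

def split_high_degree(edges, delta):
    # edges: list of (u, v, w) for an undirected graph; returns a new edge list
    # over nodes (v, copy_index) in which every node has degree at most delta + 2.
    incident = defaultdict(list)                    # vertex -> indices of incident edges
    for idx, (u, v, w) in enumerate(edges):
        incident[u].append(idx)
        incident[v].append(idx)

    copy_of = {}                                    # (edge index, endpoint) -> copy index
    new_edges = []
    for v, inc in incident.items():
        n_copies = max(1, -(-len(inc) // delta))    # ceil(deg(v) / delta)
        for pos, idx in enumerate(inc):
            copy_of[(idx, v)] = pos // delta
        for c in range(n_copies - 1):               # zero-weight chain linking the copies
            new_edges.append(((v, c), (v, c + 1), 0.0))

    for idx, (u, v, w) in enumerate(edges):
        new_edges.append(((u, copy_of[(idx, u)]), (v, copy_of[(idx, v)]), w))
    return new_edges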
http://arxiv.org/abs/2307.06117v1 (published 2023-07-08)
A qubit regularization of asymptotic freedom at the BKT transition without fine-tuning
Sandip Maiti, Debasish Banerjee, Shailesh Chandrasekharan, Marina K. Marinkovic
Primary category: hep-lat; categories: hep-lat, cond-mat.str-el, hep-th, quant-ph
[email protected]
Saha Institute of Nuclear Physics, HBNI, 1/AF Bidhannagar, Kolkata 700064, India
Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India
[email protected]
Saha Institute of Nuclear Physics, HBNI, 1/AF Bidhannagar, Kolkata 700064, India
Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India
[email protected]
Department of Physics, Box 90305, Duke University, Durham, North Carolina 27708, USA
[email protected]
Institut für Theoretische Physik, Wolfgang-Pauli-Straße 27, ETH Zürich, 8093 Zürich, Switzerland
We propose a two-dimensional hard core loop-gas model as a way to regularize the asymptotically free
massive continuum quantum field theory that emerges at the BKT transition. Without fine-tuning, our
model can reproduce the universal step-scaling function of the classical lattice XY model in the massive
phase as we approach the phase transition. This is achieved by lowering the fugacity of Fock-vacuum
sites in the loop-gas configuration space to zero in the thermodynamic limit. Some of the universal
quantities at the BKT transition show smaller finite size effects in our model as compared to the
traditional XY model. Our model is a prime example of qubit regularization of an asymptotically free
massive quantum field theory in Euclidean space-time and helps understand how asymptotic freedom can
arise as a relevant perturbation at a decoupled fixed point without fine-tuning.
A qubit regularization of asymptotic freedom at the BKT transition without fine-tuning
Sandip Maiti, Debasish Banerjee, Shailesh Chandrasekharan, and Marina K. Marinkovic
August 12, 2023
======================================================================================
The success of the Standard Model of particle physics shows that at a fundamental level, nature is well described by a continuum QFT. Understanding QFT non-perturbatively continues to be an exciting area of research, since defining them in a mathematically unambiguous way can be challenging. Most definitions require some form of short-distance (UV) regularization, which ultimately needs to be removed. Wilson has argued that continuum QFT arise near fixed points of renormalization group flows <cit.>. This has led to the concept of universality, which says that different regularization schemes can lead to the same QFT. Following Wilson, traditional continuum quantum field theories are usually regulated non-perturbatively on a space-time lattice by replacing the continuum quantum fields by lattice quantum fields and constructing a lattice Hamiltonian with a quantum critical point where the long distance lattice physics can be argued to be the desired continuum QFT. However, universality suggests that there is a lot of freedom in choosing the microscopic lattice model to study a particular QFT of interest.
Motivated by this freedom and to study continuum quantum field theories in real time using a quantum computer, the idea of qubit regularization has gained popularity recently <cit.>. Unlike traditional lattice regularization, qubit regularization explores lattice models with a strictly finite local Hilbert space to reproduce the continuum QFT of interest. Euclidean qubit regularization can be viewed as constructing a Euclidean lattice field theory with a discrete and finite local configuration space, that reproduces the continuum Euclidean QFT of interest at a critical point. If the target continuum theory is relativistic, it would be natural to explore Euclidean qubit regularized models that are also symmetric under space-time rotations. However, this is not necessary, since such symmetries can emerge at the appropriate critical point. Lattice models with a finite dimensional Hilbert space that can reproduce continuum QFT of interest were introduced several years ago through the D-theory formalism <cit.> and has been proposed for quantum simulations <cit.>. In contrast to qubit regularization, the D-theory approach allows the local Hilbert space to grow through an additional dimension when necessary. In this sense, qubit regularization can be viewed as the D-theory approach for those QFT where a strictly finite Hilbert space is sufficient to reproduce the desired QFT.
Examples of using qubit regularization to reproduce continuum QFT in the IR are well known. Quantum spin models with a finite local Hilbert space are known to reproduce the physics of classical spin models with an infinite local Hilbert space near Wilson-Fisher fixed points <cit.>. They can also reproduce QFT with topological terms like the Wess-Zumino-Witten theories <cit.>. Gauge fields have been proposed to emerge dynamically at some quantum critical points of simple quantum spin systems <cit.>. From the perspective of Euclidean qubit regularization, recently it was shown that Wilson-Fisher fixed points with O(N) symmetries can be recovered using simple qubit regularized space-time loop models with N+1 degrees of freedom per lattice site <cit.>. Similar loop models have also been shown to produce other interesting critical behavior <cit.>. Loop models are extensions of dimer models, which are also known to describe interesting critical phenomena in the IR <cit.>. All this evidence shows that Euclidean qubit regularization is a natural way to recover continuum QFT that emerge via IR fixed points of lattice models.
A non-trivial question is whether we can also recover the physics of ultraviolet fixed points (UV-FPs) using qubit regularization. In particular, can we recover massive continuum QFTs which are free in the UV but contain a marginally relevant coupling? Examples of such AF theories include two-dimensional spin models and four dimensional non-Abelian gauge theories. In the D-theory approach, there is strong evidence that the physics at the UV scale can indeed be recovered exponentially quickly as one increases the extent of the additional dimension <cit.>. Can the Gaussian nature of the UV theory emerge from just a few discrete and finite local lattice degrees of freedom, while the same theory then goes on to reproduce the massive physics in the IR? For this we will need a special type of quantum criticality where three length scales, as sketched in <ref>, emerge. There is a short lattice length scale a, where the non-universal physics depends on the details of the qubit regularization, followed by an intermediate length scale ℓ_ UV≫ a, where the continuum UV physics sets in and the required Gaussian theory emerges. Finally, at long length scales ℓ_ IR≫ℓ_ UV, the non-perturbative massive continuum quantum field theory emerges due to the presence of a marginally relevant coupling in the UV theory. The qubit regularized theory thus reproduces the universal continuum QFT in the whole region ℓ_ UV to ℓ_ IR. The special quantum critical point must be such that ℓ_ UV/a →∞.
Recently, a quantum critical point with these features was discovered in an attempt to find a qubit regularization of the asymptotically free massive non-linear O(3) sigma model in two space-time dimensions in the Hamiltonian formulation <cit.>. Using finite size scaling techniques, it was shown that the qubit regularized model recovers all the three scales. In this paper, we report the discovery of yet another example of a quantum critical point with similar features. In the current case, it is a Euclidean qubit regularization of the asymptotically free massive continuum quantum field theory that arises as one approaches the BKT transition from the massive phase <cit.>. In both these examples, the qubit regularized model is constructed using two decoupled theories and the AF-QFT emerges as a relevant perturbation at a decoupled quantum critical point. The coupling between the theories plays the role of the perturbation that creates the three scales, as illustrated in the RG flow shown in <ref>. An interesting feature of this discovery is that there is no need for fine-tuning to observe some of the universal features of the BKT transition that have been unattainable in practice with other traditional regularizations <cit.>.
The BKT transition is one of the most widely studied classical phase transitions, since it plays an important role in understanding the finite temperature superfluid phase transition of two-dimensional systems <cit.>. One simple lattice model that captures the universal behavior of the physics close to the phase transition is the
classical two-dimensional XY model on a square lattice given by the classical action,
S = -β∑_⟨ ij⟩cos(θ_i-θ_j),
where the lattice field 0≤θ_i < 2π is an angle associated to every space-time lattice site i and ⟨ ij⟩ refers to the nearest neighbor bonds with sites i and j. The lattice field naturally lives in an infinite dimensional Hilbert space of the corresponding one dimensional quantum model. Using high precision Monte Carlo calculations, the BKT transition has been determined to occur at the fine-tuned coupling of β_c ≈ 1.1199(1) <cit.>. The Villain model is another lattice model which is friendlier for analytic calculations and has been used to uncover the role of topological defects in driving the phase transition <cit.>. More recently, topological lattice actions which seem to suppress vortices and anti-vortices but still drive the BKT transition have also been explored <cit.>.
As one approaches the BKT transition from the massive phase, the long distance physics of the <ref> is known to be captured by the sine-Gordon model whose Euclidean action is given by<cit.>,
S = ∫ dx dt [ 1/2t (∂_μθ_1)^2 + t/8π^2 (∂_μθ_2)^2 -
A t/4π^2cosθ_2 ]
where t ≥π/2. The field θ_1(x,t) captures the spin-wave physics while the vortex dynamics is captured by the field θ_2(x,t). The BKT transition in this field theory language occurs at t = π/2 where the cosθ_2 term becomes marginal as one approaches the critical point and the physics is governed by a free Gaussian theory. In this sense, the long distance physics of the lattice XY model, as β is tuned to β_c from smaller values, is an asymptotically free massive Euclidean continuum QFT.
Qubit regularizations of the classical XY-model have been explored recently using various quantum spin formulations <cit.>. Lattice models based on the spin-1 Hilbert space are known to contain rich phase diagrams <cit.>, and quantum field theories that arise at some of the critical points can be different from those that arise at the BKT transition. Also, the presence of a marginally relevant operator at the BKT transition can make the analysis difficult, especially if the location of the critical point is not known. In these cases, it becomes a fitting parameter in the analysis, increasing the difficulty. Since in our model the location of the critical point is known, our model can be analyzed more easily.
The model we consider in this work is a variant of the qubit regularized XY model introduced in Euclidean space recently <cit.>. The model can be viewed as a certain limiting case of the classical lattice XY-model <ref> written in the world-line representation <cit.>, where the bosons are assumed to be hard-core. The partition function of our model is a sum of weights associated with configurations of oriented self-avoiding loops on a square lattice with Fock-vacuum sites. An illustration of the loop configuration is shown as the left figure in <ref>. The main difference between our model in this work and the one introduced previously is that closed loops on a single bond are now allowed. Such loops seemed unnatural in the Hamiltonian framework that motivated the previous study, but seem to have profoundly different features in two dimensions <cit.>. It is also possible to view the loop configurations of our model as a configuration of closed packed oriented dimers on two layers of square lattices. The dimer configuration corresponding to the loop configuration is shown on the right in <ref>. The dimer picture of the partition function arises as a limiting case of a model involving two flavors of staggered fermions, introduced to study the physics of symmetric mass generation <cit.>. In this view point the inter-layer dimers (or Fock vacuum sites) resemble t'Hooft vertices (or instantons) in the fermionic theory. Using this connection, the partition function of our model can be compactly written as the Grassmann integral
Z = ∫ [dψ̅ dψ] [dχ̅ dχ] exp(λ ∑_i ψ̅_i ψ_i χ̅_i χ_i)
× exp( ∑_⟨ ij⟩( ψ̅_i ψ_i ψ̅_j ψ_j + χ̅_i χ_i χ̅_j χ_j))
where on each site i of the square lattice we define four Grassmann variables ψ̅_i, ψ_i, χ̅_i and χ_i. We consider periodic lattices with L sites in each direction. Using the fermion bag approach <cit.>, we can integrate the Grassmann variables and write the partition function as a sum over dimer configurations whose weight is given by λ^N_I where N_I is the number of instantons (or Fock-vacuum sites). Thus, λ plays the role of the fugacity of Fock-vacuum sites. It is easy to verify that the action of our model is invariant under ψ̅_j ψ_j → e^iσ_jθψ̅_j ψ_j and χ̅_j χ_j → e^-iσ_jθχ̅_j χ_j where σ_j = ± tracks the parity of the site j. This U(1) symmetry is connected to the BKT transition and in order to track it, the dimers are given an orientation as explained in <ref>.
Using worm algorithms (see <cit.>) we study our model for various values of L and λ.
At λ = 0, one gets two decoupled layers of closed packed dimer models, which is known to be critical <cit.>. The effect of λ≠ 0 was studied several years ago, and it was recognized that there is a massive phase for sufficiently large values of λ <cit.>. However, the scaling of quantities as λ→ 0 was not carefully explored. Recently, the subject was reconsidered, and a crossover phenomenon was observed for small λ as a function of L. An understanding of this crossover was largely left as an unresolved puzzle <cit.>. In this paper, we demonstrate that the observed crossover phenomenon captures the asymptotic freedom of <ref>. We do this by comparing the universal behavior of <ref> with the traditional XY model <ref> near the massive phase of the BKT transition <cit.>.
To compare universal behaviors of <ref> and <ref> we compute the second moment finite size correlation length ξ(L) defined as ξ(L) = √((χ/F)-1)/(2sin(π/L)) (see <cit.>), where χ = G(0) and F = G(2π/L) are defined through the two point correlation function
G(p) = ∑_j e^i p x⟨ O^+_(x,t) O^-_(0,0)⟩.
In the above relation j is the space-time lattice site with coordinates (x,t) and O^+_j, O^-_j are appropriate lattice fields in the two models. In the XY model O^+_j = e^iθ_j, O^-_j = e^-iθ_j, while in the dimer model O^+_j = O^-_j = _j _j. We demonstrate that the step-scaling function (SSF) (i.e., the dependence of ξ(2L)/ξ(L) on ξ(L)/L) of the two lattice models show excellent agreement with each other in the scaling regime ℓ_UV≫ a, in <ref>.
Another interesting universal result at the BKT transition is the value of the helicity modulus, which can be defined using the relation, Υ = ⟨ Q_w^2⟩ where Q_w is the spatial winding number of bosonic worldlines. In the XY model <ref>, it is usually defined using a susceptibility of a twist parameter in the boundary conditions <cit.>. In our model, we can easily compute the winding charge Q_w in each loop configuration illustrated in <ref>. The universal result in the massive phase as we approach the BKT transition is that Υ≈ 2/π in the UV up to exponentially small corrections <cit.>, although in the IR Υ = 0. While it is difficult to obtain the UV value in lattice calculations using the traditional model <ref>, in our model, we can see it emerge nicely at λ=0.01. We demonstrate this in <ref>. Again, as expected, the value of Υ when λ=0 is very different, since it is a theory of free bosons but at a different coupling. Using the different value of the coupling gives Υ ≈ 0.606 <cit.>. Our results provide strong evidence that the AF-QFT at the BKT transition emerges from our dimer model when we take the limit L→∞ followed by λ→ 0. The opposite limit leads to the critical theory of the decoupled dimer model.
Acknowledgments: We are grateful to J. Pinto Barros, S. Bhattacharjee, T. Bhattacharya, H. Liu, A. Sen, H. Singh and U.-J. Wiese for inspiring discussions. We acknowledge use of the computing clusters at SINP, and the access to Piz Daint at the Swiss National Supercomputing Centre, Switzerland under the ETHZ’s share with the project IDs go24 and eth8. Support from the Google Research Scholar Award in Quantum Computing and the Quantum Center at ETH Zurich is gratefully acknowledged. S.C's contribution to this work is based on work supported by the U.S. Department of Energy, Office of Science — High Energy Physics Contract KA2401032 (Triad National Security, LLC Contract Grant No. 89233218CNA000001) to Los Alamos National Laboratory. S.C is supported by a Duke subcontract based on this grant. S.C's work is also supported in part by the U.S. Department of Energy, Office of Science, Nuclear Physics program under Award No. DE-FG02-05ER41368.
Supplementary Material
§ UNIVERSAL VALUES OF Υ FOR Λ = 0 AND Λ≠ 0
In this section we explain the two different values of the helicity modulus Υ for our model when λ=0 and λ→ 0. When λ=0 our model maps into two identical but decoupled layers of closed packed classical dimer models. As has already been explained in the literature (see for example <cit.>), each layer can be mapped to the theory of a free compact scalar field with the action
S = 1/2 t∫ d^2 x (∂_μθ(x))^2.
with t=4π. One can compute Υ starting with <ref>, by noting that the scalar fields have winding number configurations labeled by n_x:
θ(x) = 2 π x n_x/L_x + φ(x),
where φ(x) is a smooth fluctuation that is independent of winding number n_x. The value of the action in each winding sector in a finite space-time volume is then given by
S(n_x) = 2π^2 n_x^2/tL_y/L_x + S_0,
where S_0 is the action from the usual fluctuations in the zero winding number sector. Using L_x = L_y, we can compute Υ using its connection to the average of the square of the winding numbers,
Υ = ⟨ (Q_x)^2 ⟩ = ∑_n_x n_x^2 · e^- 2 π^2 n_x^2/t/∑_n_x e^-2π^2 n_x^2/t
Numerically evaluating this expression for t=4π we obtain Υ = 0.303426... for each layer of our dimer model. Our value of 0.606852... is due to the presence of two decoupled layers.
In contrast, in the limit λ→ 0, we need to consider the physics at the BKT transition and so we begin with the action
S = ∫ d^2x [ 1/2t̃ (∂_μθ_1)^2 + t̃/8π^2 (∂_μθ_2)^2 -
A t̃/4π^2cosθ_2 ]
and focus on t̃=π/2. At this coupling the last term is irrelevant and Υ gets its dominant contribution from the θ_2 field. In this case we can still use <ref>, but we need to substitute t = 4π^2/t̃ = 8π. Substituting, we get Υ = 0.636508..., which is approximately 2/π.
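As an independent numerical cross-check of the two values quoted above (not part of the original analysis), the truncated winding-number sums can be evaluated with a few lines of Python:

import math

def helicity_modulus(t, n_max=50):
    # <Q_x^2> for a single compact scalar: truncate the winding sum at |n| <= n_max.
    num = sum(n * n * math.exp(-2 * math.pi**2 * n * n / t) for n in range(-n_max, n_max + 1))
    den = sum(math.exp(-2 * math.pi**2 * n * n / t) for n in range(-n_max, n_max + 1))
    return num / den

print(2 * helicity_modulus(4 * math.pi))   # two decoupled layers at lambda = 0: ~0.606852
print(helicity_modulus(8 * math.pi))       # t = 4 pi^2 / t~ with t~ = pi/2: ~0.636508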
§ WORM ALGORITHM
In this section, we discuss the worm algorithm we use to simulate the model with the partition function,
Z = ∫ [dψ̅ dψ] [dχ̅ dχ] exp(λ ∑_i ψ̅_i ψ_i χ̅_i χ_i)
× exp( ∑_⟨ ij⟩( ψ̅_i ψ_i ψ̅_j ψ_j + χ̅_i χ_i χ̅_j χ_j))
as introduced in the main paper. These algorithms are well known <cit.>, and can be divided into three parts: Begin, Move, and End.
* Begin: pick a site at random and denote it as tail, and there are the following
two possibilities: (A) either it has a bond connected to it on the other layer (which we call
an instanton, or an interlayer dimer), or, (B) it has a bond connected to it on the same layer
(which we call a dimer).
* For the case (A), propose to remove the instanton, and put the worm head on the
same site at the different layer, with a probability 1/λ. If accepted, then begin the
worm update, otherwise go to (1).
* For the case (B), pick the other site to which the dimer is connected as the head,
and begin the worm update.
* Move: Propose to move the worm head to one of the (2D+1) neighbor sites of head
with an equal probability, which can either be on the same layer (2D choices), or on the different
layer (one choice). Denote the proposed new site as site0, and the following possibilities can
occur, provided that site0 is not the tail:
* site0 is on the same layer, and has an instanton connected to it. Propose to
remove the instanton with a probability 1/λ. If accepted, place the head
at site0, but on the different layer.
* site0 is on the same layer, and has a dimer connected to it (joining site0
and y). Move the head to the site y with a probability 1, and simultaneously insert
a dimer between head and site0.
* site0 is on the different layer, then propose if an instanton can be created. If
yes, then move the position of the head to y in the other layer, where y is the other
end of the dimer connecting site0 and y.
* End: If at any stage in the algorithm, the site0 is the tail, then propose to end
the worm update. If the site0 = tail is on the same layer, then end the update by putting
a dimer between the head and tail with a probability 1. If, on the other hand, they are
on different layers, the worm update ends with a probability λ, leading to the addition of an
extra instanton.
§ EXACT VS MONTE CARLO RESULTS ON A 2 × 2 LATTICE
In this work, we compute two independent fermion bilinear susceptibilities defined as
χ_1 = 1/2V∑_i≠ j⟨ψ̅_i ψ_i ψ̅_j ψ_j ⟩,
χ_2 = 1/2V∑_i≠ j⟨ψ̅_i ψ_i χ̅_j χ_j ⟩,
where χ_1 is an observable that can be defined even on a single layer, while χ_2 involves both layers. When the coupling λ = 0, the two layers are completely decoupled from each other and we get χ_2 = 0. Another quantity we compute is the average density of Fock vacuum sites or inter-layer dimers (which we also view as instantons), defined as
ρ = 1/V∑_i ⟨ψ̅_i ψ_i χ̅_i χ_i ⟩,
where the expectation value is defined as
⟨ O⟩ = 1/Z∫ [𝒟ψ̅𝒟ψ]
[𝒟χ̅𝒟χ] O
e^-S[ψ̅,ψ, χ̅,χ].
Since every site is populated by either a Fock-vacuum site or an intra-layer dimer, the average intra-layer dimer density is not an independent observable. We can always compute it from the Fock vacuum sites (instanton) density ρ.
In order to test our algorithm, we focus on exact results on a 2× 2 lattice. The partition function in this simple case is given by
Z = 64 + 16 λ^2 + λ^4,
while the instanton density and the two independent susceptibilities are given by
ρ = 1/4Z (32 λ^2 + 4 λ^4),
χ_1 = 1/2Z (32 + 4 λ^2),
χ_2 = 1/2Z (8 λ).
Note that ρ is zero when λ = 0 and approaches one for large couplings. Also, as expected χ_2=0 when λ=0. In <ref> we compare results for three different observables, instanton density (ρ), fermion bilinear susceptibility (χ_1), and helicity modulus (Υ) on a 2 × 2 lattice obtained from an exact calculation against the results obtained using the worm algorithm.
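For reference, the closed-form 2 × 2 expressions above can be tabulated directly; the short script below (an illustrative cross-check, not part of the original analysis code) simply evaluates them as written:

def exact_2x2(lam):
    # Closed-form 2x2-lattice results quoted above.
    Z = 64 + 16 * lam**2 + lam**4
    rho = (32 * lam**2 + 4 * lam**4) / (4 * Z)
    chi1 = (32 + 4 * lam**2) / (2 * Z)
    chi2 = (8 * lam) / (2 * Z)
    return Z, rho, chi1, chi2

for lam in (0.0, 0.5, 1.0, 2.0):
    print(lam, exact_2x2(lam))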
Interestingly, when λ≠ 0 we find that both χ_1 and χ_2 become similar as L increases. The difference also becomes smaller as λ increases. We show this behavior in the <ref>.
Due to this similarity we only focus on χ_1 in our work.
§ PLOTS OF Ρ AND Χ_1
We have computed the fermionic XY model at various values of λ on square lattices up to L = 4000 using the
worm algorithm described above. For our simulations, after allowing for appropriate thermalization, we have recorded between 8 × 10^3 and 48 × 10^3 measurements, each averaged over 2000 worm updates. A comparable number of measurements were also made for the bosonic model.
In <ref>, we plot ρ for various lattice sizes at different values of λ on the left. We note that ρ increases monotonically and approaches the thermodynamic limit by L=160 which is shown on the right.
In <ref>, we plot χ_1 as a function of system size, L for different values of λ. When λ is small, we find that our data is consistent with the behavior χ_1 ∼ AL^2-η expected in a critical phase. However, for larger values of λ, the susceptibility begins to saturate as χ_1 ∼ A which means η≈ 2. For λ=0, since the model describes two decoupled layers of closed packed dimer models we expect η=0.5 <cit.>. However, when λ is small, since we expect our model to describe the physics at the BKT transition, we expect η∼ 0.25. This is consistent with our findings.
The values of constant A and η for various values of λ obtained from a fit are given in <ref>.
§ STEP SCALING FUNCTION
In order to argue that the traditional XY model at the BKT transition and the two layer interacting dimer model are equivalent we compute the step scaling function (SSF) in both of them. We refer to the traditional XY model defined through the lattice action
S = -β∑_⟨ ij ⟩cos(θ_i-θ_j),
as the bosonic XY model and dimer model defined in <ref> as the fermionic XY model. In order to compute the step-scaling function we first compute the second moment correlation length defined in a finite box of size L using the expression
ξ(L) = 1/2sin(π/L)√(χ/F - 1),
where
χ = ∑_i ⟨ O^+_i O^-_0⟩,
F = ∑_i ⟨ O^+_i O^-_0⟩cos(2π x /L),
where i=(x,t) is the space-time lattice site and O^+_i, O^-_i are lattice fields in the two lattice models. In the bosonic XY model, O^+_i = e^iθ_i and O^-_i = e^-iθ_i, while in the fermionic model O^+_i = O^-_i = ψ_iψ_i.
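A minimal sketch of how ξ(L) is obtained from the measured two-point function is given below; the dense correlator array is an illustrative input layout (in practice χ and F can be accumulated directly during the Monte Carlo run), so the names and interface are assumptions.

import numpy as np

def second_moment_xi(corr, L):
    # corr[x, t] approximates <O^+_{(x,t)} O^-_{(0,0)}> on an L x L lattice.
    chi = corr.sum()
    phase = np.exp(1j * 2 * np.pi * np.arange(L) / L)[:, None]
    F = np.real((corr * phase).sum())
    return np.sqrt(max(chi / F - 1.0, 0.0)) / (2.0 * np.sin(np.pi / L))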
The SSF for the bosonic XY model is computed in the massive phase close to the critical point, for β < β_c = 1.1199 <cit.>. To study the step scaling function, we prepare several pairs of data at (β, L) and (β,2L), and compute both ξ(2L)/ξ(L) and ξ(L)/L using the data presented in <ref>. We follow certain criteria, as explained in <cit.>, to minimize finite volume and finite lattice spacing errors. In particular, we only choose lattices of sizes L ≥ L_min, where L_min = 80 for couplings β≥0.92. Since the correlation length increases for β close to β_c, larger lattice sizes are essential. The corresponding criteria for choosing the lattice sizes and couplings in the fermionic model are L ≥ L_min, where L_min = 80 for 0.62≤λ≤0.9, and L_min = 640 for λ < 0.6.
In order to compute the expectation value and error of ξ(L)/L, we use a jackknife analysis. We report the results here for the analysis with 40 jackknife blocks. Varying the number of jackknife blocks did not change the errors significantly, and the errors were consistent with those obtained using a bootstrap analysis. In <ref>, we show an example of the variation of the average and error of ξ(L)/L at λ=0.353 and L=320 for the fermionic model using both the jackknife and the bootstrap analysis as a function of block size. For both methods we use the same set of block sizes, but in order to show the distinction between them, we have displaced the bootstrap data on the x-axis by multiplying nBlock by a factor of 1.1.
In order to compare the SSF between the bosonic and the fermionic models we tried to parameterize the function in two different ways. In the first approach, we follow the idea discussed in <cit.> where it was proposed that
Σ(x) = 1 + a_1 e^-1/x + a_2 e^-2/x + a_3 e^-3/x + a_4 e^-4/x,
where x = ξ(L)/L and Σ = ξ(2L)/ξ(L). The behavior of this function is such that, as x → 0, the function Σ(x) approaches 1. While this function is strictly valid only for small x we find that this form fits our data well. The fit results are given in <ref>. We see that while we get good fits by including all four fit parameters, we can also fix a_2=0 and still get a good fits.
In the second approach, to parameterize our SSF we used a cubic spline to interpolate the data. In <ref>, we provide a tabulation of the spline function that helps parameterize the SSF for both the bosonic and the fermionic models. The errors are obtained using a jackknife analysis.
In order to show how these two different parameterizations capture our data, we show the corresponding curves for the bosonic model in <ref> and for the fermionic model in <ref>. We believe that a combined parameterization would best capture the true function. Hence, we use <ref> for ξ(L)/L ≤ 0.572 and the cubic spline interpolation for ξ(L)/L ≥ 0.572. This combined form in the bosonic model is shown in <ref>, along with the bosonic model data. The dark line of this plot is used in the main paper to compare with the fermionic model.
§ INFINITE VOLUME CORRELATION LENGTH
We can compute the infinite volume correlation length ξ_∞ using the SSF. Here we try to understand how ξ_∞ depends on λ in the fermionic XY model. In order to reliably estimate the errors in ξ_∞ we again use the jackknife analysis. We start with 40 jackknife blocks, where each block contains a pair (ξ(L)/L, ξ(2L)/ξ(L)) for different coupling values (0.01 ≤λ≤ 0.8). We obtain 40 different cubic splines, one from each jackknife block. We then start with the initial ξ(L)/L at L=640 in each block and evaluate ξ(2^n L) using the spline function for increasing values of n, until the correlation length ξ(2^n L) becomes insensitive to L. Finally, the jackknife mean and error are computed from the 40 values. These results for ξ_∞ and their errors are quoted in <ref>.
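The doubling procedure described above can be summarized by a short routine in which the spline (or the fitted ansatz) enters only through a callable Sigma; the interface below is an illustrative sketch of the extrapolation, not the actual analysis script.

def extrapolate_xi_infty(xi_over_L, L, sigma, max_doublings=200, tol=1e-9):
    # Iterate xi(2L) = Sigma(xi(L)/L) * xi(L), doubling L until xi stops changing.
    xi = xi_over_L * L
    for _ in range(max_doublings):
        xi_new = sigma(xi / L) * xi
        L *= 2
        if abs(xi_new - xi) < tol * max(1.0, abs(xi_new)):
            return xi_new
        xi = xi_new
    return xi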
Since the correlation lengths increase exponentially as λ becomes small, we were able to extract the infinite volume correlation length only in the range 0.3≤λ≤0.8. For λ < 0.3, our extrapolation method fails.
Using the data in <ref> we study the λ dependence of ξ_∞. For the bosonic XY model, it is well known that as one approaches the BKT phase transition, the leading divergence of the infinite volume correlation length is captured by
ξ = C exp( b/√(β_c - β)),
where β_c is the critical coupling, and b and C are non-universal constants. For the fermionic XY model since the partition function is an even function of λ we expect ξ_∞ to be a function of λ^2. Since the BKT critical point appears when λ→ 0, we conjecture that
ξ^(1)_∞ = a_1 exp( b_1/√(λ^2)).
We test this conjecture numerically by fitting the data in <ref> to it. We also compare this to other fit forms, including ξ^(2)_∞ = a_2 exp(b_2/(λ^2)^1/4) and ξ^(3)_∞= a_3 exp(b_3/√(λ^2) + c_3 log(λ^2)/2). The results are shown in <ref>. We observe that <ref> clearly works quite well, with fitted constants a_1 and b_1 of natural size. We cannot rule out the presence of a power law correction to the expected form.
In <ref>, we show the data in <ref> and the various fits. The first form is the expected behaviour from <ref>. The second form explores a possible dependence on square-root of λ which is clearly unnatural. Finally the third form allows for a logarithmic correction in the exponential (which is equivalent to including a 1/λ dependence outside the exponential). We note that in this extended form the data in the larger range of 0.3 ≤λ≤ 0.8 can be fit.
§ MONTE CARLO RESULTS
We tabulate all of our Monte Carlo data in <ref> for both the bosonic XY and the fermionic XY models, for various values of L and couplings. The errors in these primary quantities have been obtained with 20 jackknife blocks.
http://arxiv.org/abs/2307.07399v1 (published 2023-07-14)
Vehicle-to-grid plug-in forecasting for participation in ancillary services markets
Jemima Graham, Fei Teng
Primary category: eess.SY; categories: eess.SY, cs.SY
Vehicle-to-grid plug-in forecasting for participation in ancillary services markets
Jemima Graham and Fei Teng
July 14th 2023
====================================================================================
Electric vehicle (EV) charge points (CPs) can be used by aggregators to provide frequency response (FR) services. Aggregators must have day-ahead half-hourly forecasts of minimum aggregate vehicle-to-grid (V2G) plug-in to produce meaningful bids for the day-ahead ancillary services market. However, there is a lack of understanding of what features should be considered and how complex the forecasting model should be. This paper explores the dependency of aggregate V2G plug-in on historic plug-in levels, calendar variables, and weather conditions. These investigations are used to develop three day-ahead forecasts of the minimum aggregate V2G plug-in during a 30-minute window. A neural network that considers previous V2G plug-in values the day before, three days before, and seven days before, in addition to day of the week, month, and hour, is found to be the most accurate.
aggregator, plug-in forecast, frequency response, vehicle-to-grid
§ INTRODUCTION
As inverter-based resource (IBR) penetration in power systems increases, power system inertia declines <cit.>.
This makes frequency response (FR) increasingly important for maintaining system security.
Alternative frequency services are now essential to ensure that the power system frequency remains within acceptable limits at all times <cit.>. This paper focuses on primary frequency response (PFR) services which respond to a disturbance in less than 10 seconds, providing up to 20 seconds of support <cit.>.
One source of FR that has long been discussed is electric vehicles (EVs) with vehicle-to-grid (V2G) capabilities <cit.>. This is an attractive solution as the number of EVs has been exploding with 23.2 million EVs expected by 2032 <cit.>. Moreover, the value of using V2G for FR has been confirmed in studies that examine the reduction in frequency deviations <cit.> and subsequent cost savings that can be achieved using this technology <cit.>. This paper focuses on bidirectional V2Gs that can provide up-regulation PFR by supplying a net power injection to the grid.
It is expected that V2G market participation and dispatch will be managed through an aggregator because current market rules expect a minimum of 1 MW capacity and V2G chargers have a maximum capacity of 15 kW <cit.>. Some business cases for who could take on the role of an aggregator are: an EV fleet operator; an electric utility company; or an independent company such as an automobile manufacturer or a distributed generation manager <cit.>. Here, we consider an electric utility company that owns a number of charge points (CPs) across the UK. This presents unique challenges, as the aggregator will not have a schedule for EV plug-in in the way that an EV fleet operator might.
The amount of up-regulation that V2G can provide depends on: (i) whether it is plugged in (so that it can discharge to the grid); and (ii) whether it is charging (so that it can stop charging) <cit.>. Day-ahead forecasts of aggregate V2G availability are necessary to allow the aggregator to produce day-ahead bids for the ancillary services markets <cit.>. It is important to have separate forecasts of aggregate V2G plug-in and aggregate V2G charging due to the different ways of providing up-regulation using V2G. Here, it is assumed that there is uniform discharging capability across CPs and that PFR has minimal impact on the energy stored in the batteries. Half-hourly forecasts are necessary here to match the bidding frequency in the regulation market.
V2G availability and EV user behaviour are forecast in the literature to inform aggregator decision-making independently or as an input to a scheduling optimization model. Even though the focus in this study is aggregator information, the forecasting models developed as part of scheduling processes will also be discussed. These works are outlined in Table <ref>.
There is little work in the field of aggregate EV plug-in forecasting. Only two studies in Table <ref> attempt to directly predict the aggregate availability of V2G <cit.>. While one of these works considers an aggregator who controls a fleet of EVs <cit.>, the other considers an aggregator who only has access to CP data <cit.>. However, the latter model developed in <cit.> is a minutely forecast model that excludes weekends. Moreover, it has limited utility for risk-averse aggregators as it is a deterministic forecast. In our study, the forecast models, which capture behaviour on both weekdays and weekends, aim to forecast the minimum aggregate plug-in. This allows aggregators to manage their risk and avoid significant penalty fees for failing to deliver any contracted services <cit.>.
Generally, it is important to make a distinction between the studies that use EV fleet data and the studies that use CP data because EV fleet operators have access to information that would not be accessible to CP operators such as personal charging preferences or trip schedule <cit.>. Additionally, none of the existing work in Table <ref> considered weather variables as model inputs. Our consideration of weather data is inspired by Gautam et al. who consider temperature, humidity, wind speed, rainfall, and dewpoint temperature to forecast grid load as an input to an EV scheduling algorithm <cit.>.
Altogether, this study aims to develop a first-of-its-kind, day-ahead forecast of minimum aggregate V2G plug-in during a 30-minute window using CP data to aid aggregator decision-making for FR. It examines the dependence of aggregate V2G plug-in on: historic behaviour <cit.>; calendar variables <cit.>; and weather data <cit.>. Additionally, three forecast models are developed and validated on a UK dataset of non-domestic EV charge point data provided by the UK Department of Transport (DoT). The forecast models discussed here are: a persistence model; a generalized linear model; and a neural network.
The structure of this paper is as follows: Section <ref> introduces the DoT case study used for feature investigation and model validation; Section <ref> examines the features that are useful to consider when developing an aggregate V2G plug-in forecast; Section <ref> explores the day-ahead aggregate V2G plug-in forecasts developed here; and Section <ref> discusses directions for future work.
§ DATA COLLECTION, PROCESSING, AND ANALYSIS
This work is validated on a dataset of EV plug-in events at public sector CPs during 2017 provided by the UK Department of Transport (DoT). We assume that the EV user behaviour in this dataset is representative of the behaviour of V2G users. Additionally, we consider a public sector dataset as we assume that domestic users will have a good idea of when they will plug-in and could send a potential plug-in schedule to an aggregator at least a day in advance. This dataset is a list of charging events. It had the following features: charging event ID, charge point ID, connector (as each charge point could have more than one connector), start date, start time, end date, end time, energy, public sector name, and plug-in duration. Charging events with a plug-in duration of more than a week are not considered in this study as we are only interested in forecasting the availability of charge points that are actively being used. This excluded 207 charging events, leaving a total of 103119 charging events. Additionally, some charge points that only have one connector did not specify a connector, so these NaN values were filled with ones.
The charge points in the public sector dataset are owned by 35 organizations. Milton Keynes Council has the most charge points with a total of 97, while Bristol City Council and South Tyneside Borough Council have only one charge point each. It is unknown how many charge points the Department for Regional Development Northern Ireland has as they have not supplied charge point IDs; however, as we are interested in aggregate plug-ins and not individual charge points, this data can still be included.
To obtain a time-series of aggregate EV plug-in for all the public sector charge points in this dataset, the start dates, start times, end dates, and end times were used to create a minutely time series for each charging event where ones indicated a plug-in at that time step and zeros denoted no plug-in. These individual charging event time-series were then summed to get an aggregate EV plug-in time-series. After that, the minimum within each half hour was taken, resampling the minutely time-series to a half-hourly time-series.
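A minimal pandas sketch of this aggregation step is shown below; the DataFrame layout and column names are illustrative assumptions rather than the exact processing code used in this study.

import pandas as pd

def aggregate_plug_in(events):
    # events: DataFrame with 'start' and 'end' Timestamp columns, one row per charging event.
    # Returns the minimum number of simultaneously plugged-in EVs in each half-hour window.
    idx = pd.date_range(events['start'].min().floor('min'),
                        events['end'].max().ceil('min'), freq='min')
    counts = pd.Series(0, index=idx)
    for _, ev in events.iterrows():
        counts[ev['start'].floor('min'):ev['end'].floor('min')] += 1   # minutely plug-in indicator sum
    return counts.resample('30min').min()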
The first week of data was excluded as we do not know how many EVs were plugged-in at the beginning of 2017. Additionally, public holidays in the UK are excluded as anomalous EV user behaviour occurs on these days and one year of data is not sufficient to capture this behaviour. Finally, the last two weeks of data were excluded as the anomalous behaviour around Christmas exists both before and after Christmas, likely because many people take extended holidays at that time of the year.
As only one year of data is available, it is difficult for any models to fully capture the annual behaviour if the training and test sets are divided up in chronological order. Instead, the time steps are shuffled after feature creation, with 80% randomly sampled for the training set, 10 % for the validation set, and 10 % for the test set. This does not lead to data leakage as the aggregate EV plug-in time-series is stationary as confirmed by an Augmented Dickey Fuller Test.
§ INVESTIGATION OF ELECTRIC VEHICLE PLUG-IN CHARACTERISTICS
Aggregate EV plug-in is a stochastic time-series as shown in Figure <ref>. This is because EV user behaviour is unpredictable and varies without a clearly discernible pattern. As a result, aggregate EV plug-in is difficult to predict accurately as current plug-in numbers are not necessarily dependent on previous plug-in numbers.
Overall, aggregate EV plug-in characteristics depend on the day of the week as shown in Figure <ref>. Monday through Friday show much higher EV plug-in rates compared to Saturday and Sunday. Also, there is more variability on weekdays than weekends. This suggests that if previous values of EV plug-in are used as a guide in forecasts, it is not sufficient to use the day before in all cases. Our persistence forecast takes these variations into account, thus providing a strong benchmark.
Another feature worth considering is the month: while the median number of EV plug-ins is similar each month as shown in Figure <ref>, the range of aggregate EV plug-ins varies greatly from month-to-month. Moreover, these ranges exhibit no clear seasonality. This suggests that including the month as a feature would be beneficial to better capture the range of EV user behaviour depending on the time of year. This variation could be linked to EV user behaviour. For example, there may be a tendency to go on holiday at certain times of the year as seen in the decreased median and interquartile ranges in July and August: peak season for holidaymakers in the UK.
In addition to day of the week and month, EV plug-in also varies by hour as depicted in Figure <ref>. It can be seen that there is less variation overnight between 6 pm and 6 am. There is also a lower median number of plug-ins during this time. This may be because public charge points are more likely to be accessed during the day when people are at work or visiting establishments that are open during working hours. This hourly variability must also be captured in any forecasting models.
A final area of investigation is the dependence of EV plug-in on weather data: no dependence was found. Due to a lack of day-ahead weather forecasts of a sufficiently fine temporal resolution, MIDAS open-source hourly weather observations from numerous weather stations across the UK were averaged and interpolated to half-hourly values. This weather data covered relative humidity, dewpoint temperature, air temperature, wind speed, and amount of precipitation. Pearson's correlation coefficients to evaluate any linear dependencies and Spearman's correlation coefficients to evaluate any non-linear dependencies were calculated between each of the weather variables and the aggregate EV plug-in. Both the Pearson's and Spearman's correlation coefficients were negligible. Consequently, weather data was not considered during EV plug-in forecast development under this study.
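The screening described above reduces to a few calls to standard statistics routines; a hedged sketch with illustrative variable names is:

from scipy.stats import pearsonr, spearmanr

def correlation_screen(plug_in, weather):
    # plug_in: 1D array of half-hourly aggregate plug-in; weather: dict of name -> aligned 1D array.
    return {name: (pearsonr(plug_in, series)[0], spearmanr(plug_in, series)[0])
            for name, series in weather.items()}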
§ AGGREGATE V2G PLUG-IN FORECAST DEVELOPMENT
In this section, three day-ahead forecast models of increasing complexity are developed to predict the minimum aggregate V2G plug-in in a 30 minute period.
§.§ Persistence model (PM)
To capture the behaviour exhibited in Figure <ref>, the persistence forecast uses different historical values depending on which day of the week it is:
ŷ_t+48, d =
y_t, d-1 if d ∈{1, 2, 3, 4, 6}
y_t-96, 4 if d = 0
y_t-288, 5 if d = 5
where y denotes the number of V2Gs plugged in, and d ∈{0, 1, 2, 3, 4, 5, 6} corresponds to {Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday} respectively. The subscript t denotes the time step.
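A direct implementation of this rule takes only a few lines; the sketch below assumes a half-hourly series with 48 steps per day and uses the day-of-week convention defined above, with variable names chosen for illustration.

def persistence_forecast(y, t, target_weekday):
    # y: half-hourly aggregate plug-in series; target_weekday: day of week of step t+48
    # (0 = Monday, ..., 6 = Sunday).
    if target_weekday == 0:        # Monday: use the previous Friday
        return y[t - 96]
    if target_weekday == 5:        # Saturday: use the previous Saturday
        return y[t - 288]
    return y[t]                    # Tuesday-Friday and Sunday: use the previous day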
This model is used as a benchmark for the generalized linear model (GLM) and the neural networks (NNs) discussed in the remaining portion of this section.
§.§ Generalized linear model (GLM)
A generalized linear model (GLM) is employed to better capture the periodic relationship between V2G plug-in and historic values of V2G plug-in. This forecast model has the following form:
ŷ_t+48 = α_0, d y_t + α_-96, d y_t-96 + α_-288, d y_t-288
where α_0, d, α_-96, d, and α_-288, d are coefficients of the historic values of V2G plug-in one day before, three days before, and seven days before respectively. The subscript d indicates that there is a different coefficient for each day of the week.
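One way to estimate the day-dependent coefficients is to fit a separate no-intercept linear regression for each day of the week; the sketch below uses scikit-learn and assumes that the subscript d refers to the day of the forecast target, which is our reading of the equation rather than a detail stated in the text.

import numpy as np
from sklearn.linear_model import LinearRegression

def fit_glm_per_day(y, target_weekday):
    # y: half-hourly series (48 steps per day); target_weekday[i]: day of week of step i + 48.
    models = {}
    for d in range(7):
        rows = [i for i in range(288, len(y) - 48) if target_weekday[i] == d]
        X = np.array([[y[i], y[i - 96], y[i - 288]] for i in rows])
        targets = np.array([y[i + 48] for i in rows])
        models[d] = LinearRegression(fit_intercept=False).fit(X, targets)
    return models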
Figure <ref> highlights that using the GLM instead of the persistence model leads to less skewness and variation in the test residuals. This suggests that the GLM is able to capture some more complex linear relationships that the persistence forecast misses.
§.§ Neural network (NN)
A neural network (NN) expands on the GLM by capturing some of the non-linear behaviour highlighted by the large Spearman's correlation coefficients between aggregate V2G plug-in today and aggregate V2G plug-in yesterday in Table <ref>. Three NNs are trialled: (i) one with the same inputs as the GLM with one-hot encoding applied to the day of the week feature; (ii) one with all of the same features as (i) plus a one-hot encoding of month; and (iii) one with all the same features as (ii) plus a one-hot encoding of hour. The NN with inputs (ii) aims to capture the monthly behaviour displayed in Figure <ref>, while the NN with inputs (iii) aims to capture both the monthly and hourly behaviour displayed by Figure <ref> and Figure <ref> respectively. However, while the persistence model and GLM developed here are interpretable, the NN-based models are not. This may present challenges when it comes to high-stakes decision making as it is harder to justify the recommendations made by a `black-box' forecast model such as a NN.
The neural network has two hidden layers with ReLU activation functions; the first layer has 100 neurons, and the second has 50 neurons. Each of the hidden layers is followed by a dropout layer dropping 20 % of units to reduce the risk of overfitting with the small dataset. The loss function is the mean squared error and the optimizer is the Adam algorithm. The neural network was trained using 1000 epochs and a batch size of 100.
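A hedged Keras sketch of this architecture is given below; the single linear output unit and the choice of framework are our assumptions, since the paper does not state which library was used.

import tensorflow as tf

def build_plug_in_nn(n_features):
    # Two hidden layers (100 and 50 ReLU units), each followed by 20% dropout.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(100, activation='relu', input_shape=(n_features,)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(50, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1),              # assumed linear output for the regression target
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

# model.fit(X_train, y_train, epochs=1000, batch_size=100, validation_data=(X_val, y_val))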
§ RESULTS & DISCUSSION
The root mean squared error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE) displayed in Table <ref> are evaluated to determine the accuracy of the aggregate V2G plug-in forecast. Additionally, the mean, median, standard deviation, range, and interquartile range of the test set residuals displayed in Table <ref> are used to understand the spread of predictions and the difference between the true values and the predictions.
According to Table <ref>, the most accurate model by RMSE, MAPE, and MAE is the NN with month and hour as features, closely followed by the NN with month as a feature but not hour. This suggests that including month as a feature is important for capturing the monthly variations in aggregate V2G plug-in. Contrastingly, including hour as a feature seems less important, especially as there is much more improvement in the training set prediction accuracy compared to the validation and test set prediction accuracies. This suggests that a larger dataset would be beneficial to help avoid overfitting.
The residual, r_t, is calculated using the following equation:
r_t = ŷ_t - y_t.
This means that a negative value is an overestimation and a positive value is an underestimation. Overestimation is concerning as this means that the aggregator may pledge more resources than they will have in actuality resulting in penalty charges. In Table <ref>, the means and medians of the test residuals are often negative, suggesting that these models all have a tendency to overestimate (apart from the NN with month as a feature which has a positive mean). Probabilistic forecasts may be beneficial to help capture the uncertainty around any forecasts to better inform the aggregator. While the median test residual value is best (i.e. zero) for the persistence forecast, the persistence forecast has a large range of residuals implying that it is often far from the true value. In contrast, the NN with month and hour as features has the lowest standard deviation, range, and interquartile range, and has a mean that is closest to zero. This suggests that the residuals are reduced in magnitude and less variable when a NN with month and hour as features is used to forecast aggregate V2G plug-in.
§ CONCLUSION
This work examined features pertinent to aggregate V2G plug-in and produced a first-of-its-kind day-ahead forecast of minimum aggregate V2G plug-in during 30-minute intervals to aid aggregator participation in ancillary services markets for FR provision. A persistence forecast is developed as a benchmark. This is extended to a GLM which provides a more accurate, yet interpretable, forecast model. Then, the NN-based models further develop this by considering non-linear trends, improving accuracy and consistency to the detriment of interpretability. The NN that includes features based on month and hour provides the most accurate forecasts despite overfitting.
One pitfall of these models is their tendency to overestimate. A probabilistic forecast that captures the uncertainty on the forecast may give aggregators more information with which to make bids in the ancillary services market. Another solution would be to produce a cost-dependent forecast that takes the penalty fees for failing to deliver promised ancillary services into account. This would curb overestimation by ensuring that the model always aids the interests of the aggregator.
Another avenue that could be explored relating to forecasting aggregate V2G plug-in for FR is extending this model to capture the anomalous behaviour seen on bank holidays. Additionally, it could be beneficial to forecast what subset of V2Gs is charging at a given time. Moreover, a regional forecast could help aggregators provide more targeted FR, as would be the case in practice.
http://arxiv.org/abs/2307.04144v1 (published 2023-07-09)
Shadow, absorption and Hawking radiation of a Schwarzschild black hole surrounded by a cloud of strings in Rastall gravity
Qian Li, Chen Ma, Yu Zhang, Zhi-Wen Lin, Peng-Fei Duan
Primary category: gr-qc; categories: gr-qc
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
[email protected] (Corresponding author)
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
City College, Kunming University of Science and Technology, Kunming, Yunnan 650051, China.
This paper studies the black hole shadow, absorption cross section, and Hawking radiation of a massless scalar field in the background of a static spherically symmetric black hole spacetime that is surrounded by a cloud of strings in Rastall gravity. Specifically, the effects of the parameters a and β on the photon sphere and shadow radii are investigated. The results show that as the negative parameter β decreases, the photon sphere and shadow radii change in an N-shape. In addition, the absorption cross section obtained after solving the massless Klein-Gordon equation is calculated using the sinc approximation and the partial waves method. We compare the absorption cross section obtained by the sinc approximation and the partial waves method, and find it to be exceptionally consistent in the mid-to-high frequency region. Furthermore, the effects of parameters a and β on absorption are examined in detail. Finally, we study in detail the effects of the parameters a, β and l on the Hawking radiation power emission spectrum of the considered black hole. It turns out that the string parameter a always suppresses the power emission spectrum, indicating that such black holes live longer when the string parameter a is increased while other parameters are fixed.
Shadow, absorption and Hawking radiation of a Schwarzschild black
hole surrounded by a cloud of strings in Rastall gravity
Qian Li, Chen Ma, Yu Zhang, Zhi-Wen Lin, and Peng-Fei Duan
August 12, 2023
===========================================================================================================================
§ INTRODUCTION
General relativity, proposed by Einstein in 1915 <cit.>, is by far the most widely accepted theory of gravity. The predictions made therein have been tested and verified under weak or strong field conditions. Particularly, black holes, as one of the predictions, are arguably the most interesting and mysterious celestial bodies in our universe. The mystery of a black hole is that nothing, including light, can escape its event horizon. For the past few decades, the existence of black holes was only studied through indirect methods, until the first images of black holes appeared in 2019 <cit.>. This discovery provides many inspiring answers for our exploration of Einstein's theory of general relativity and for testing other revised theories of gravity, taking our understanding of black hole physics a major step forward. However, the basic theory proposed by Einstein cannot explain some phenomena or solve certain fundamental problems, e.g., the singularity problem and the conjecture that the covariant divergence of the energy-momentum tensor may be non-zero.
To account for the special case where the covariant divergence of the energy-momentum tensor does not vanish, Rastall <cit.> proposed a modification of general relativity in which the field equation is T^μν_;μ = λ R^,ν, with λ= 0 corresponding to the Einstein equation. An important feature of Rastall's gravity is that the field equation T^μν_ ;μ = λ R^,ν is obtained directly by relaxing the usual conservation law, without relying on the metric or Palatini formalism <cit.>. Also, it is important to note that Rastall's gravity appears to be consistent with experimental observations in the context of cosmology <cit.>. Specifically, the observational data include, but are not limited to, the age of the universe, helium nucleosynthesis, and the Hubble parameter. What is more interesting is that this modified gravity yields many novel and interesting results at the cosmological level. Besides, some attention has been focused on the debate over whether Rastall gravity is equivalent to Einstein gravity. Visser <cit.> argued that the modified gravity proposed by Rastall is a rearrangement of the matter sector of Einstein gravity; in other words, the geometrical part of the field equation is the same in both theories, so one only needs to construct a new energy-momentum tensor that fulfills the ordinary conservation law. The author therefore claimed that there is nothing new, as a description of gravity, in the Rastall proposal. Das et al. <cit.> concluded that, in the framework of non-equilibrium thermodynamics (for a homogeneous and isotropic FLRW black hole model), generalized Rastall gravity theory is equivalent to Einstein gravity theory. However, other researchers disagree with Visser's conclusions; see, for example, the work of Darabi and his colleagues <cit.>, who maintain that Rastall theory is not equivalent to Einstein gravity theory and give a simple example to show that the claim made by Visser is incorrect. Moreover, they indicated that Rastall gravity is an "open" theory of gravity in comparison with basic general relativity and is more compatible with observational cosmology. Hansraj et al. <cit.> also discussed this dispute, and their results are consistent with those of Darabi et al. <cit.>. In that work, they showed that Rastall gravity can satisfy the fundamental conditions for a physically viable model whereas Einstein gravity does not fulfill these requirements (see <cit.> for a more detailed discussion). Some works <cit.> have shown the difference between Rastall gravity and Einstein gravity from theoretical or cosmological perspectives. Finally, regardless of whether Rastall gravity is equivalent to Einstein gravity, it is worth studying because it can be confronted with cosmological (astrophysical) observations.
String theory, on the other hand, holds that the fundamental unit of nature is not the point particle of particle physics, but an extended one-dimensional string. Letelier <cit.> proposed for the first time that the source of the gravitational field could be a cloud of strings, and gave an exact solution for a Schwarzschild black hole surrounded by a collection of strings in the context of Einstein's general relativity. In addition, black holes with a cloud of strings as the source of the gravitational field have also been studied in modified gravity theories <cit.>. For instance, Cai and Miao proposed a black hole solution in which a cloud of strings is the source of the gravitational field of a Schwarzschild black hole in the context of Rastall gravity <cit.>. The authors also analyzed fundamental thermal properties, quasinormal modes of gravitational perturbations, area spectra <cit.>, and entropy spectra.
The experimental results reported by the Event Horizon Telescope Collaboration <cit.> not only directly prove the existence of black holes, but also allow us to directly observe the shadows of black holes. The theoretical analysis of black hole shadows has a long history. For example, Synge <cit.> discussed the shadow of the Schwarzschild spacetime, and Bardeen et al. <cit.> analyzed the shadows of Kerr black holes. Beyond basic general relativity, shadow analysis has been extended to various modified theories of gravity and to spacetimes of arbitrary dimension. Abbas and Sabiullah <cit.> studied the structure of timelike as well as null geodesics of the regular Hayward black hole and found that massive particles moving along timelike geodesics are dragged toward the black hole. To the best of our knowledge, numerous studies <cit.> have been devoted to the shadows of black holes in various modified gravity theories. More concretely, Gyulchev et al. <cit.> analyzed the shadows cast by different rotating traversable wormholes. Interestingly, the near horizon geometry is determined by the shadow cast by the black hole. In addition, the trajectory of light is affected by the plasma surrounding the black hole, which changes the geometric size and shape of the shadow in the Kerr spacetime <cit.>. In general, gravitational light deflection causes black hole shadows, and the trajectory of a photon in vacuum depends on its impact parameter <cit.>. Therefore, we cannot ignore the role of the impact parameter in shadow formation.
Due to the special properties of black holes, we cannot directly study their internal structure. However, a black hole is not an isolated system, because it interacts with its surrounding environment through, for example, absorption, scattering and Hawking radiation. These interactions can convey information about the interior of the event horizon. In particular, as one of these interactions, the absorption cross section of black holes has received extensive attention from researchers. That is because one of the most useful and efficient ways to understand the properties of a black hole is to analyze the absorption of matter waves and test fields around the black hole. This series of studies began in the 1970s <cit.>. During that period, Sanchez found that the absorption cross section of the Schwarzschild spacetime for scalar waves oscillates around the geometric capture cross section. About twenty years later, Das et al. <cit.> presented the key result that, in the low-energy regime, the absorption cross section of a minimally coupled massless scalar field is equal to the area of the event horizon. Consequently, the literature on this particular topic has proliferated over the past few decades, covering various fields of research and several modified theories <cit.>.
Furthermore, Hawking predicted that black holes are thermal systems, like black bodies, and thus have an associated temperature and entropy. Based on the analysis of quantum field dynamics in curved spacetime, Hawking pointed out that black holes emit radiation, known as Hawking radiation, from their event horizons <cit.>. Intriguingly, Hawking radiation depends on the type of particle and the geometry of the black hole; this is because the Hawking temperature T_BH=f'(r_+)/4π is one of the influencing factors. Moreover, Yale <cit.> analyzed the Hawking radiation of scalar, fermion and spin-1 boson particles using the tunneling method. In recent years, a large body of literature <cit.> has emerged on Hawking radiation in various modified theories of gravity, including higher-dimensional black holes.
This paper investigates the black hole shadow, absorption cross section and Hawking radiation of a test scalar field for a Schwarzschild black hole surrounded by a cloud of strings in Rastall gravity. Previously, Cai and Miao <cit.> obtained the corresponding quasinormal modes of the odd-parity gravitational field using the WKB approximation. On this basis, our research contributes to a further understanding of this black hole and its physical characteristics.
This paper is organized as follows. The second section outlines the basic information on the black hole solution, that is, a Schwarzschild black hole surrounded by a cloud of strings in the context of Rastall gravity, and also gives the meaning of the relevant parameters. The third section is devoted to the derivation of the massless scalar wave equation and the analysis of the related effective potential. Section 4 analyzes the radius of the photon sphere and the shadow radius of the black hole in detail. Next, the absorption cross section of the scalar field is calculated using the sinc approximation and the partial wave method, and the effects of the parameters are also investigated. Section 6 gives the expression for Hawking radiation and the corresponding results for the Hawking radiation power emission spectra. The last section contains the summary and conclusions. Besides, we use natural units with c = G = ħ = 1 throughout this paper.
§ THE SOLUTION OF A SCHWARZSCHILD BLACK HOLE SURROUNDED BY A CLOUD OF STRINGS IN RASTALL GRAVITY
The field equations of the Rastall gravity <cit.> are as follows,
G_μν + β g_μν R= κ T_μν ,
T^μν_ ;μ = λ R^,ν ,
where κ and λ represent the Rastall gravitational coupling constant and the Rastall parameter, respectively. Moreover, β is defined as the product of these two parameters, i.e., β≡κλ. From the above equations we have that
R= κ T/(4β-1) ,
T^μν_ ;μ = [κλ/(4β-1)] T^,ν = [β/(4β-1)] T^,ν ,
where R and T denote the Ricci scalar and the trace of the energy-momentum tensor, respectively. Besides, κ=8π(4β-1)/(6β-1) under the Newtonian limit <cit.>. It can be seen from the above equations that Einstein gravity is recovered and the energy-momentum tensor is conserved when the Rastall parameter λ vanishes, i.e., β=0.
We consider the case where the metric is static and spherically symmetric,
ds^2=-f(r)dt^2+f^-1(r)dr^2+r^2dθ^2+r^2 sin^2θdϕ^2
with the metric <cit.>
f(r)=1-2M/r+[4a(β-1/2)^2/(8β^2+2β-1)] r^{4β/(1-2β)}.
It is worth noting that the Rastall theory should satisfy the Newtonian limit <cit.>. Therefore, the cases β=1/6 and β=1/4 are not allowed. The parameter a needs to satisfy a specific constraint, namely a≡κ b, where b is a constant of integration associated with the cloud of strings. Specifically, β and a represent the influence of Rastall gravity and of the string cloud, respectively. Consequently, Rastall gravity reduces to Einstein gravity when β=0. Meanwhile, when a equals 0, the Schwarzschild spacetime is recovered.
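As a rough numerical illustration (not part of the original analysis), the short Python sketch below evaluates f(r) and locates the event horizon as the inner root of f(r)=0. The sample values M=1, a=0.1, β=0.1, the root bracket, and the grouping of the exponent as 4β/(1-2β) are assumptions made for this sketch; the function names are ours.

```python
# Hedged sketch: evaluate f(r) for the string-cloud Rastall metric and locate the
# event horizon numerically. Parameter values and the root bracket are illustrative.
from scipy.optimize import brentq

def f(r, M=1.0, a=0.1, beta=0.1):
    coeff = 4.0 * a * (beta - 0.5) ** 2 / (8.0 * beta ** 2 + 2.0 * beta - 1.0)
    return 1.0 - 2.0 * M / r + coeff * r ** (4.0 * beta / (1.0 - 2.0 * beta))

def event_horizon(M=1.0, a=0.1, beta=0.1):
    # inner root of f(r) = 0; the bracket [1, 10] encloses it for these parameter values
    return brentq(lambda r: f(r, M, a, beta), 1.0, 10.0)

print(event_horizon(a=0.0))   # Schwarzschild check: r_h = 2M = 2.0
print(event_horizon())        # shifted horizon for a = 0.1, beta = 0.1
```

For a = 0 the horizon reduces to the Schwarzschild value 2M, which serves as a quick consistency check of the sketch.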
§ SCALAR WAVE EQUATION
The massless scalar field Ψ governed by the massless Klein-Gordon equation in curved spacetime can be formulated as
1/√(-g)∂_μ(√(-g)g^μν∂_ν)Ψ=0,
and then the massless scalar field Ψ can be decomposed as follows
Ψ_ω l m=ψ_ω l(r)/r P_l(cosθ)e^ -i ω t,
where P_l(cosθ) denotes the Legendre polynomial, l and m represent the corresponding angular quantum number and magnetic quantum number, respectively. In addition, the function Ψ_ω l satisfies the following ordinary differential equation,
f(r)d/dr[f(r)dψ_ω l/dr]+[ω^2-V_eff(r)]ψ_ω l=0,
where V_eff(r) stands for the corresponding effective potential that is defined as
V_eff(r)=f(r)( (1/r) df(r)/dr + l(l+1)/r^2 ).
Moreover, by substituting the metric in the effective potential, the specific potential is reformulated as
V_eff(r)=(1-2M/r+[4a(β-1/2)^2/(8β^2+2β-1)] r^{4β/(1-2β)})×
(l(l+1)/r^2+2M/r^3-[16aβ(β-1/2)^2/((2β-1)(8β^2+2β-1))] r^{4β/(1-2β)-2}).
Additionally, we define the following tortoise coordinate change
r_*=∫dr/f.
Consequently, the equation (<ref>) is equivalent to
d^2ψ/dr_*^2+(ω^2-V_eff)ψ=0.
Note that both the metric f(r) and the effective potential V_eff(r) are divergent when β is set to -0.5. Besides, to satisfy the condition that the effective potential V_eff(r) → 0 as r→∞, we need β < 1/6. Hence, the domain of β should be (-0.5, 1/6). Moreover, due to the condition a ≡κ b, the admissible range of a depends on the sign of the parameter β. For β < 0, the barrier of the effective potential V_eff(r) disappears as the parameter a approaches 1. Accordingly, the domain of the parameter a is set to [0,1). In contrast, for β > 0, when the parameter a is too large, a black hole surrounded by a cloud of strings in Rastall gravity has no event horizon. For instance, when β=0.1, the domain of a is set to [0,0.3].
Fig.<ref> shows the behaviour of the effective potential V_eff(r) with respect to r for different angular quantum numbers l when a=0.1, β=1/10. We find that the peak value of the effective potential increases when the angular quantum number l is increased. Furthermore, the potential V_eff(r) first increases, then decreases, and finally tends to zero at r→∞.
As shown in Fig.<ref>, to compare the effects of parameters a and β on the effective potential V_eff(r), we depict the behaviour of V_eff(r) with respect to a and β when β < 0 and β > 0, respectively. Specifically, for β > 0, i.e., when the parameter β is fixed to 1/10, the barrier height of the effective potential decreases as the string parameter a increases. It is clear that the peak of the effective potential becomes smaller and shifts to the right side as a increases. Next, we vary the Rastall parameter β and fix a to 0.1. It can be seen that with the increase of β, the peak value of the effective potential decreases, and the position of the peak value does not change much compared with the case where the string parameter a changes.
Meanwhile, when β < 0, one can see that for the same value of a, the barrier height of the potential first increases and then decreases with decreasing β. Also, the peak position firstly shifts to the left and then to the right. Furthermore, when the parameter a is varied, at the same value of the Rastall parameter β, the barrier height decreases and the peak position shifts to the right as the parameter a increases.
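The qualitative behaviour of the barrier described above can be reproduced with a small numerical sketch of our own (not taken from the paper), which evaluates V_eff(r)=f(r)(f'(r)/r+l(l+1)/r^2) using a finite-difference derivative; the parameter values, radial grid and step size are illustrative choices.

```python
# Hedged sketch: the scalar-field effective potential V_eff(r); parameters, the radial
# grid and the finite-difference step are illustrative assumptions of this sketch.
import numpy as np

def f(r, M=1.0, a=0.1, beta=0.1):
    coeff = 4.0 * a * (beta - 0.5) ** 2 / (8.0 * beta ** 2 + 2.0 * beta - 1.0)
    return 1.0 - 2.0 * M / r + coeff * r ** (4.0 * beta / (1.0 - 2.0 * beta))

def V_eff(r, l=1, M=1.0, a=0.1, beta=0.1, h=1e-6):
    fprime = (f(r + h, M, a, beta) - f(r - h, M, a, beta)) / (2.0 * h)   # f'(r)
    return f(r, M, a, beta) * (fprime / r + l * (l + 1) / r ** 2)

r = np.linspace(2.5, 30.0, 600)          # region outside the event horizon
for l in (0, 1, 2):
    V = V_eff(r, l=l)
    print(f"l={l}: barrier height {V.max():.4f} at r = {r[V.argmax()]:.2f}")
```

The printed barrier heights grow with l and shift with a and β, in line with the trends discussed above.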
We add boundary conditions for the Schrödinger-like equation (<ref>) because we are interested in the absorption cross section and Hawking radiation. Near the horizon and at infinity, one can find that ψ_ω l(r_*) needs to satisfy the following boundary conditions
ψ_ω l(r_*)∼ I_ω l e^{-iω r_*}+R_ω l e^{iω r_*} for r_* →+∞, and ψ_ω l(r_*)∼ T_ω l e^{-iω r_*} for r_*→-∞,
where R_ω l and T_ω l denote the reflection and transmission coefficients, respectively. Due to the conservation of flux,
R_ω l and T_ω l satisfy the following constraint
|R_ω l|^2+|T_ω l|^2=|I_ω l|^2.
Furthermore, the phase shift δ_l can be defined by
e^{2 iδ_l}=(-1)^{l+1} R_ω l/ I_ω l.
Next, we will discuss the black hole shadows, absorption cross section and Hawking radiation based on the last two sections.
§ SHADOWS
In this section, we investigate the role of the Rastall parameter β and the string parameter a on the shadow radius of a black hole enclosed by a cloud of strings in Rastall gravity. Moreover, the results will be compared to those of Schwarzschild spacetime (i.e. a=0) and Einstein gravity (i.e. β=0), respectively.
The photon trajectories of a black hole surrounded by a cloud of strings in Rastall gravity can be described by null geodesics <cit.>. The Lagrangian of the geodesic equations for this curved spacetime has the following form
0=-f(r)ṫ^2+1/f(r)ṙ^2+r^2θ̇^2+r^2sin^2θϕ̇^2,
where the overdot symbol denotes the differentiation with respect to the affine parameter τ. Without loss of generality, we consider an analysis restricted to the equatorial plane, i.e., θ=π/2. By using the Euler-Lagrange equation, the t and ϕ coordinates are expressed as,
ṫ=E/f(r),
ϕ̇=L/r^2,
where E, L are motion constants, representing the energy and angular momentum of the massless test particle, respectively.
Hence, by substituting Eq. (<ref>) and Eq. (<ref>) in the Lagrangian equation (<ref>), the Lagrangian expression can be written as
ṙ^2+f(r)(L^2/r^2)=E^2,
furthermore, we define
V=f(r)L^2/r^2,
where V stands for the effective potential of the massless test particle. Besides, circular null geodesics in the equatorial plane of a static spherically symmetric spacetime satisfy the conditions ṙ=0 and r̈=0. Consequently, we have V=E^2 and dV/dr=0, which characterize circular null geodesics. The conditions V(r_p)=E^2 and V^'(r)|_{r=r_p}=0 <cit.> determine the circular orbit of the photon, that is, the photon sphere radius r_p.
Moreover, the critical impact parameter b_c can be expressed as
b_c=L/E=r_p/√(f(r_p)),
f^'(r_p) r_p-2f(r_p)=0.
On the other hand, the black hole shadow radius r_s is represented by the celestial coordinates (x,y) as follows
r_s=√(x^2+y^2)=r_p/√(f(r_p)).
Specifically, the effects of the parameters a and β on the photon sphere and shadow radii are shown in Table <ref>. For β > 0, from Table <ref> one can see that for fixed a=0.1 (β=1/10), the photon sphere and shadow radii increase as the parameter β (a) increases. Furthermore, as the string parameter a tends to 0.3 when β=1/10, the black hole shadow radius increases rapidly. For β < 0, when we set a=0.3, we observe that the photon sphere and shadow radii first increase, then decrease and finally increase again as the parameter β decreases. A possible reason is that the metric f(r) is not a monotonic function of the Rastall parameter β in the range -0.5 < β < 0. Therefore, when the parameter β is set to -1/3, the photon sphere and shadow radii increase as a approaches its maximum allowed value.
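For concreteness, and as an unofficial aside, the photon-sphere and shadow radii discussed above can also be obtained numerically: r_p maximises f(r)/r^2 (equivalently it solves r f'(r)-2f(r)=0) and r_s=b_c=r_p/√(f(r_p)). In the sketch below the parameter values and the search interval are illustrative assumptions.

```python
# Hedged sketch: photon-sphere radius (maximum of f(r)/r^2) and shadow radius
# r_s = r_p/sqrt(f(r_p)); the search bounds assume r_p lies outside the horizon.
import numpy as np
from scipy.optimize import minimize_scalar

def f(r, M=1.0, a=0.1, beta=0.1):
    coeff = 4.0 * a * (beta - 0.5) ** 2 / (8.0 * beta ** 2 + 2.0 * beta - 1.0)
    return 1.0 - 2.0 * M / r + coeff * r ** (4.0 * beta / (1.0 - 2.0 * beta))

def photon_sphere(M=1.0, a=0.1, beta=0.1):
    res = minimize_scalar(lambda r: -f(r, M, a, beta) / r ** 2,
                          bounds=(2.5 * M, 20.0 * M), method="bounded")
    return res.x

def shadow_radius(M=1.0, a=0.1, beta=0.1):
    rp = photon_sphere(M, a, beta)
    return rp / np.sqrt(f(rp, M, a, beta))

print(photon_sphere(a=0.0), shadow_radius(a=0.0))   # Schwarzschild: 3M and 3*sqrt(3)*M
print(photon_sphere(), shadow_radius())             # values for a = 0.1, beta = 1/10
```

The Schwarzschild limit (r_p=3M, r_s=3√3 M) provides a simple check of the implementation.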
§ ABSORPTION CROSS SECTION
In this section, we calculate the absorption cross section using two methods, viz., the sinc approximation and the partial wave method, where the gray-body factor is calculated by the sixth-order WKB method. Besides, we use the geometric capture cross section as a reference. It is known that the absorption cross section in the low-frequency and high-frequency limits can be calculated by different analytical approximations. In the low-frequency regime, the total absorption cross section of massless scalar waves in an arbitrary-dimensional, general spherically symmetric black hole tends to the area of its event horizon <cit.>. In the high-frequency regime, the total absorption cross section of the massless scalar field converges to the geometric capture cross section determined by the null geodesics,
σ_geo≡π b^2_c,
where b_c denotes the above critical impact parameter.
§.§ sinc approximation
Sanchez <cit.> proposed that in the high-frequency regime, the total absorption cross section oscillates near the above-mentioned capture cross section (27/4)π r^2_s, where r_s=2M, and has an interval of oscillation peaks, Δ=2/√(27)M. In addition, Sanchez also presented the following analytical approximation of the absorption cross section
σ_San=27π/4-A/ω r_ssinπ(3√(3))(ω r_s+B),
which has the best fit when A= 1.14 ∼√(2) and B<10^-4.
Furthermore, the Sanchez approximation was generalized by Décanini et al. to static spherically symmetric spacetimes of arbitrary dimensions. Décanini et al. <cit.> showed that in the eikonal state, the fluctuation of the absorption cross section was completely and very simply described by the properties of the null unstable geodesics located on the photon sphere. Important characteristics are the orbital period and the Lyapunov exponent. Specifically, the sinc approximation of the absorption cross section in a d-dimensional static and spherically symmetric black hole is given by
σ≈σ_geo +σ_abs^osc,
where the oscillation part of the absorption, i.e., σ_abs^osc, is expressed as
σ_abs^osc≡ (-1)^{d-3} 4(d-2)πη_c e^{-πη_c} sinc(2π r_cω/√(f(r_c))) σ_geo,
with sinc(x) denoting the sine cardinal
sinc(x) ≡sin x/x,
and d representing the dimension of the black hole spacetime. Besides, 2πr_c/√(f(r_c)) = 2π b_c is the orbital period of null geodesics on the photon sphere <cit.>. The parameter η_c, which measures the instability of the circular orbit on the photon sphere, is defined as
η_c =(1/2)√(4f(r_c)-2r^2_c f''(r_c)).
For instance, the sinc approximation of the absorption cross section of a Schwarzschild black hole in the high-frequency limit is written as
σ≈σ_geo - 8π e^-πsinc[2π (3√(3)M)ω] σ_geo.
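As an informal numerical companion to the formulas above (not part of the original paper), the sketch below evaluates the d=4 sinc approximation σ ≈ σ_geo + σ_abs^osc, computing r_c, b_c and η_c for the metric used here. The parameter values and brackets are illustrative; note that numpy's sinc already contains the factor π, so sinc(2 b_c ω) equals sin(2π b_c ω)/(2π b_c ω).

```python
# Hedged sketch: d = 4 sinc approximation of the total absorption cross section.
# np.sinc(x) = sin(pi x)/(pi x), hence np.sinc(2*b_c*omega) = sin(2*pi*b_c*omega)/(2*pi*b_c*omega).
import numpy as np
from scipy.optimize import minimize_scalar

def f(r, M=1.0, a=0.1, beta=0.1):
    coeff = 4.0 * a * (beta - 0.5) ** 2 / (8.0 * beta ** 2 + 2.0 * beta - 1.0)
    return 1.0 - 2.0 * M / r + coeff * r ** (4.0 * beta / (1.0 - 2.0 * beta))

def sinc_absorption(omega, M=1.0, a=0.1, beta=0.1, h=1e-4):
    rc = minimize_scalar(lambda r: -f(r, M, a, beta) / r ** 2,
                         bounds=(2.5 * M, 20.0 * M), method="bounded").x
    fc = f(rc, M, a, beta)
    bc = rc / np.sqrt(fc)                                   # critical impact parameter
    fpp = (f(rc + h, M, a, beta) - 2.0 * fc + f(rc - h, M, a, beta)) / h ** 2
    eta = 0.5 * np.sqrt(4.0 * fc - 2.0 * rc ** 2 * fpp)     # instability parameter eta_c
    sigma_geo = np.pi * bc ** 2
    osc = -8.0 * np.pi * eta * np.exp(-np.pi * eta) * np.sinc(2.0 * bc * omega) * sigma_geo
    return sigma_geo + osc

omega = np.linspace(0.1, 1.5, 300)
sigma = sinc_absorption(omega)
print(sigma[:3], sigma[-3:])   # oscillates around the geometric capture cross section
```

The oscillation amplitude decays as 1/ω, so at high frequencies the curve settles onto σ_geo, as stated in the text.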
§.§ Partial wave approach
We consider that the field Φ, which is purely ingoing waves at the event horizon, is the sum of the monochromatic incident plane wave Φ^I and outgoing scattered wave Φ^S in the far-field, that is,
Φ∼Φ^I+Φ^S.
Without loss of generality, we assume that the direction of wave propagation is along the z-axis. Accordingly, the monochromatic incident plane wave Φ^I and the outgoing scattered wave Φ^S are respectively defined as
Φ^I= e^-iω(t-z),
Φ^S= 1/rf̂(θ) e^-iω(t-r),
where f̂(θ) denotes the scattering amplitude. Moreover, e^iω z can be decomposed as <cit.>
e^iω z= ∑_l=0^∞(2l+1)i^lj_l(ω r)P_l(cosθ),
with j_l(.) representing the spherical Bessel function.
Hence, Eq. (<ref>) in the far-field can be rewritten as follows,
Φ^I∼e^-iω t/r∑_l=0^∞ C_ω l(e^-iω r+ e^-i π(l+1) e^iω r)P_l(cosθ),
where C_ω l is given by
C_ω l=(2l+1)/2iωe^i π(l+1).
The field solution Φ depends on the boundary conditions (<ref>). This means that the ingoing part of Φ should match the incident plane wave Φ^I. Therefore we obtain
Φ=e^-iω t/r∑_l=0^∞ C_ω lϕ_ω l(r) P_l(cosθ).
The absorption cross section depends on the flux of particles that enter the black hole through the effective potential. Hence, we can introduce the four-current density vector as follows
J^μ=(i/2)(Φ^*∇^μΦ-Φ∇^μΦ^*),
and the above current satisfies the conservation law, that is
∇_αJ^α=0.
By substituting Eq.(<ref>) into Eq.(<ref>) under the boundary condition Eq.(<ref>), we obtain the four-current density vector by surface integral as
N(r)=-∫_Σr^2J^rdΩ =-π/ω∑_l=0^∞ (2l+1)(1-|e^2 i δ_l|^2),
where N(r) is the flux that passes the surface Σ with a constant radius r and dΩ=sinθ dθ dφ. The flux is a constant, and when we consider the stationary scenarios, N (minus) represents the particles passing through the potential and entering the black hole <cit.>. Besides, we have used the orthogonality of Legendre polynomials, i.e.,
∫_-1^1 P_l(x) P_l'(x) dx =2/(2l+1)δ_l l'.
where x=cosθ.
Furthermore, the absorption cross section σ_abs is defined as the ratio of the particle flux |N| to the plane wave incident current ω. Hence, the absorption cross section can be written as
σ_abs(ω)≡|N|/ω= π/ω^2∑_l=0^∞(2l+1)(1-|e^2 i δ_l|^2)
=π/ω^2∑_l=0^∞ (2l+1)|T_ω l|^2,
and the partial absorption cross section can be expressed as
σ_l(ω)= π/ω^2(2l+1)(1-|e^2 i δ_l|^2)= π/ω^2(2l+1) |T_ω l|^2.
In order to study the effects of Rastall and string parameters on the absorption cross section of the scalar field, we need to calculate the phase shift δ_l, that is, the transmission coefficient. In this paper, we use the WKB approximation to obtain the transmission coefficient T_ω. Assuming that the probability of the incident plane wave is equal to 1, Eq.(<ref>) can be expressed as
|R_ω l|^2+|T_ω l|^2=1.
The transmission probability of different multipole numbers l can be obtained with the help of the sixth-order WKB method,
1- |R_ω l|^2=|T_ω l|^2,
with
R_ω l=(1+e^2i πα)^-1/2,
where α is obtained by
α - i (ω^2-V_0)/√(-2V_0'') - ∑_{i=2}^{6}Λ_i(K)=0.
In Eq.(<ref>), V_0 represents the maximum value of the potential at r=r_0, and the prime denotes the derivative of the potential at r=r_0 with respect to r^*. Moreover, Λ_i(K) indicates a higher-order correction of the WKB method, which depends on K and the 2i order derivative of the potential at its maximum position <cit.>.
Specifically, we express the third-order method as follows,
Λ_2=[1/√(-2V^(2)_0)][ (1/8)(V^(4)_0/V^(2)_0)(b^2+1/4) - (1/288)(V^(3)_0/V^(2)_0)^2(7+60b^2) ],
Λ_3=[(n+1/2)/(-2V^(2)_0)][ (5/6912)(V^(3)_0/V^(2)_0)^4(77+188b^2)
- (1/384)((V^(3)_0)^2V^(4)_0/(V^(2)_0)^3)(51+100b^2) + (1/2304)(V^(4)_0/V^(2)_0)^2(67+68b^2)
- (1/288)(V^(6)_0/V^(2)_0)(5+4b^2) + (1/288)(V^(3)_0V^(5)_0/(V^(2)_0)^2)(19+28b^2) ].
In Eqs. (<ref>), the superscripts (2,3,4,5,6) of the effective potential denote the corresponding derivatives with respect to the tortoise coordinate r_*, and b=n+1/2. Besides, since the specific expressions of Λ_4(K), Λ_5(K) and Λ_6(K) are overly cumbersome (see Ref. <cit.>), they will not be reproduced in detail here. In addition, during the calculation, we find that when the Rastall parameter β is set as a fraction, the results and figures of the WKB approximation calculation are more accurate than when β is set as a decimal <cit.>. This phenomenon can be attributed to the term r^{4β/(1-2β)} in the metric f(r). Hence, in order to maintain the consistency of the data, we choose the fractional form of β throughout the paper.
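To make the procedure concrete, here is a rough stand-in of our own, not the sixth-order scheme used in the paper: the standard lowest-order (parabolic-barrier) WKB formula |T_ωl|^2 ≈ [1+exp(2π(V_0-ω^2)/√(-2V_0''))]^{-1}, with the second derivative taken with respect to the tortoise coordinate, fed into the partial-wave sum for σ_abs. The corrections Λ_2,…,Λ_6 are deliberately omitted, and all numerical choices (parameter values, search bounds, l_max) are illustrative assumptions.

```python
# Hedged sketch: lowest-order (parabolic-barrier) WKB greybody factors and the
# partial-wave absorption cross section; the Lambda_2..Lambda_6 corrections are omitted.
import numpy as np
from scipy.optimize import minimize_scalar

M, A, BETA = 1.0, 0.1, 0.1
EXPO = 4.0 * BETA / (1.0 - 2.0 * BETA)
COEFF = 4.0 * A * (BETA - 0.5) ** 2 / (8.0 * BETA ** 2 + 2.0 * BETA - 1.0)

def f(r):
    return 1.0 - 2.0 * M / r + COEFF * r ** EXPO

def fp(r):                                   # analytic f'(r)
    return 2.0 * M / r ** 2 + COEFF * EXPO * r ** (EXPO - 1.0)

def V(r, l):
    return f(r) * (fp(r) / r + l * (l + 1) / r ** 2)

def trans_prob(omega, l, h=1e-3):
    r0 = minimize_scalar(lambda r: -V(r, l), bounds=(2.5, 40.0), method="bounded").x
    V0 = V(r0, l)
    Vpp_r = (V(r0 + h, l) - 2.0 * V0 + V(r0 - h, l)) / h ** 2   # d^2V/dr^2 < 0 at the peak
    Vpp_star = f(r0) ** 2 * Vpp_r                               # d^2V/dr_*^2 (since V'(r0) ~ 0)
    arg = np.clip(2.0 * np.pi * (V0 - omega ** 2) / np.sqrt(-2.0 * Vpp_star), -700.0, 700.0)
    return 1.0 / (1.0 + np.exp(arg))

def sigma_abs(omega, lmax=10):
    return (np.pi / omega ** 2) * sum((2 * l + 1) * trans_prob(omega, l)
                                      for l in range(lmax + 1))

print(sigma_abs(0.3), sigma_abs(0.8))   # tends toward the capture cross section at high omega
```

This crude estimate already reproduces the qualitative trends (growth of σ_abs with a, saturation near σ_geo at high ω); the sixth-order corrections mainly sharpen the low-frequency behaviour.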
From Fig. <ref>, we can compare the effects of the Rastall parameter and the string parameter on the partial absorption cross section when the Rastall parameter is positive. In the left plot, where different values of the string parameter a are chosen, the corresponding partial absorption cross section starts at zero, reaches a maximum value, and finally decreases to almost the same value with increasing ω. Furthermore, it is easy to see that as the string parameter a increases, the partial absorption cross section increases and its peak position shifts to the left. When we fix the string parameter and vary the Rastall parameter, one finds that the peak value of the partial absorption cross section increases as the Rastall parameter β increases.
In Fig. <ref> we present the total absorption cross section of a Schwarzschild black hole surrounded by a cloud of strings in Rastall gravity for different values of the string parameter, where l goes from 0 to 10 and β=1/10. Specifically, the horizontal solid line represents the geometric capture cross section. As shown in Fig. <ref>, the dashed curve is the sinc approximation result, and the solid curve is the partial wave result using the sixth-order WKB approximation. We find that increasing the parameter a increases the absorption cross section. We also notice that the two curves are significantly different at small values of the frequency. Moreover, as the string parameter increases, the difference is more pronounced and the range of oscillation amplitudes is significantly wider. However, in the high frequency regime, the total absorption cross sections obtained by these two methods are in good agreement and converge to the geometric capture cross section. In Fig. <ref>, our results show that when we fix the value of a and increase β, the absorption cross section increases. The difference between the two curves also increases significantly in the low frequency regime due to the Rastall parameter.
As shown in Fig. <ref>, we describe the behavior of the partial absorption cross section for different parameters obtained by the sixth-order WKB method when β < 0. From the left figure, where the Rastall parameter is treated as a variable and the parameter a is fixed, we observe that the partial absorption cross section does not monotonically increase as the Rastall parameter β decreases. The partial cross sections intersect in the range of 0.2<ω< 0.3 due to the effective potential. Therefore, the variation trend of the partial cross section is 'N' type. We also observe in the right plot that when we increase the string parameter, the partial cross section increases monotonically. Moreover, the peak position of the partial cross section is evidently shifted to the left.
Finally, we observe that as the multipole number l increases, the partial cross section decreases and its peak position shifts to the right.
In Fig. <ref> we give the total absorption cross sections of the massless scalar field by varying the Rastall parameter β (β<0) and fixing the string parameter a = 0.6. We can observe that the change of the total absorption cross section as a function of β is similar to that of the partial absorption cross section. This is because the higher the potential barrier, the more particles are scattered back to the black hole by the potential barrier. In addition, we can see that when we reduce the Rastall parameter to -0.5, the difference between the solid curve and the dashed curve gradually decreases.
In Fig. <ref> we present the total absorption cross section of the massless scalar field when β<0, varying the string parameter and fixing β=-1/3. It can be observed that the difference between the two curves is the smallest in the low-frequency limit compared to the above three cases. Furthermore, when ω is large, the total absorption cross section as a function of ω tends to the geometric capture cross section. We also notice that the total absorption cross section, as well as the oscillation amplitude, increases with increasing string parameter.
§ HAWKING RADIATION
In this section, we employ the sixth-order WKB method to calculate the Hawking radiation for massless scalar fields. Furthermore, we analyze the effects of the string and the Rastall parameters on Hawking radiation in the background of a Schwarzschild black hole surrounded by a cloud of strings in Rastall gravity.
A black hole behaves almost in the same way as a black body, emitting particles with a temperature proportional to its surface gravity <cit.>. Hawking also showed that black holes radiate particles in the form of thermal radiation. This is due to the quantum tunneling effect created by the vacuum fluctuations near the event horizon of the black hole. Therefore, if quantum effects are taken into account and the laws of thermodynamics are satisfied, black holes can produce radiation. This phenomenon is known as Hawking radiation.
The Hawking radiation calculated by the gray-body factor has the following expression <cit.>
dE/dt=∑_lN_l|T_ω l|^2ω/exp(ω/T_BH)- 1dω/2π,
where N_l is the multiplicity that depends only on the black hole dimension. Moreover, for the massless scalar field in a four-dimensional black hole, l and N_l satisfy the condition N_l=2l+1. T_ω l denotes the above gray-body factor and T_BH represents the Hawking temperature.
Specifically, the Hawking temperature of static spherically symmetric spacetime can be written as
T_BH=1/4πf'(r) |_r=r_h.
By substituting Eq.(<ref>) into Eq.(<ref>), we can obtain
T_BH=[1/(4π r_h)] (1+a(1-2β) r_h^{4β/(1-2β)}/(4β-1)),
where f(r_h)=0 and r_h is the radius of the event horizon. Besides, the string and Rastall parameters need to satisfy the previous parameter range, i.e., -0.5< β< 1/6 and 0 ≤ a < 1. By substituting Eq.(<ref>) and N_l=2l+1 into Eq.(<ref>), we can further obtain the Hawking power emission spectrum
d^2E/dt dω=1/2π∑_l(2l+1)|T_ω l|^2ω/e^ω/T_BH-1.
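A minimal numerical sketch of our own (again using the lowest-order WKB greybody stand-in and illustrative parameter values, not the paper's sixth-order scheme) strings these pieces together: the Hawking temperature follows from T_BH=f'(r_h)/(4π) at the numerically located horizon, and the summand of the expression above gives the power emission spectrum at a chosen frequency.

```python
# Hedged sketch: Hawking temperature and the scalar power emission spectrum with a
# lowest-order WKB greybody factor; parameters and brackets are illustrative choices.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

M, A, BETA = 1.0, 0.1, 0.1
EXPO = 4.0 * BETA / (1.0 - 2.0 * BETA)
COEFF = 4.0 * A * (BETA - 0.5) ** 2 / (8.0 * BETA ** 2 + 2.0 * BETA - 1.0)

def f(r):
    return 1.0 - 2.0 * M / r + COEFF * r ** EXPO

def fp(r):                                   # analytic f'(r)
    return 2.0 * M / r ** 2 + COEFF * EXPO * r ** (EXPO - 1.0)

def V(r, l):
    return f(r) * (fp(r) / r + l * (l + 1) / r ** 2)

def trans_prob(omega, l, h=1e-3):            # parabolic-barrier WKB transmission
    r0 = minimize_scalar(lambda r: -V(r, l), bounds=(2.5, 40.0), method="bounded").x
    Vpp_star = f(r0) ** 2 * (V(r0 + h, l) - 2.0 * V(r0, l) + V(r0 - h, l)) / h ** 2
    arg = np.clip(2.0 * np.pi * (V(r0, l) - omega ** 2) / np.sqrt(-2.0 * Vpp_star), -700.0, 700.0)
    return 1.0 / (1.0 + np.exp(arg))

def hawking_temperature():
    r_h = brentq(f, 1.0, 10.0)               # event horizon
    return fp(r_h) / (4.0 * np.pi)

def power_spectrum(omega, lmax=6):           # d^2E/(dt domega)
    T_BH = hawking_temperature()
    grey = sum((2 * l + 1) * trans_prob(omega, l) for l in range(lmax + 1))
    return grey * omega / (2.0 * np.pi * np.expm1(omega / T_BH))

print(hawking_temperature())                 # below the Schwarzschild value 1/(8*pi*M) for a > 0
print(power_spectrum(0.2))
```

Scanning power_spectrum over a grid of ω reproduces the qualitative shape of the emission curves discussed below, with the peak suppressed as a grows.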
Fig.<ref> compares the effects of parameters a and β on the Hawking power emission spectrum of the massless scalar wave when β is non-negative. We can clearly observe in the upper panel that for a given l and β, increasing the parameter a depresses the power emission spectrum. Moreover, the peak of the power emission spectrum gradually shifts to low frequencies as a increases. It is clear from the middle panel that when we fix l and a, but increase the parameter β, the peak of the power emission spectrum gradually decreases and moves to low frequencies. As the multipole number l increases, we can see from the lower panel that for a massless scalar field, the power emission spectrum decreases and the peak position shifts towards high frequencies. In conclusion, the parameters a, β and l suppress the power emission spectrum. Besides, it is easy to see that if the values of the parameters a and β are chosen larger, the lifespan of the black hole will be longer.
This trait is more easily observed in Fig.<ref>, which plots the effects of parameters a, β and l on the power emission rate (as a function of ω) for the scalar wave in the range β<0. From the upper figure we can see that when we increase the parameter a, the power emission spectrum decreases. That is, under the condition that β is constant, the increase of the string parameter a leads to a decrease in the energy emission rate, thus making the lifetime of the black hole longer.
Furthermore, we also observe in the center panel that with decreasing Rastall parameter, for fixed l and a, the peak value of the power emission rate increases and then decreases, and the peak position first shifts to high frequency and then moves to low frequency. Finally, we fix the two parameters a=0.6 and β=-1/3 and analyze the effects of the multipole number l in the lower panel. It is clear that a larger multipole number results in a lower power emission spectrum. Besides, it is worth noting that the low multipole number l dominates the energy emission rate, while the contribution of the high multipole number l is extremely small and thus negligible.
§ CONCLUSION AND DISCUSSION
In the previous sections, we have comprehensively studied the black hole shadow, absorption cross section and power emission spectrum of Hawking radiation for the massless scalar field in a Schwarzschild black hole surrounded by a cloud of strings in Rastall gravity. The ranges of the string parameter and Rastall parameter are chosen according to the effective potential in the context of the scalar field. Notably, we have calculated the absorption cross section and Hawking radiation with the help of the sixth-order WKB method.
First, in Figs.<ref> and <ref>, we carefully analyzed the effective potential for different values of parameters a, β and l.
For β>0, the parameters a and β depress the barrier of the effective potential, so that fewer waves are reflected. For β<0, a reduces the barrier height when β is fixed, whereas the effective potentials intersect when β varies. Moreover, we studied the shadow and photon sphere radii produced by the bending of light rays. Because we consider the black hole to be static and spherically symmetric, the radii of the photon sphere and shadow are constant; in other words, the black hole shadow is spherically symmetric. Besides, the radius of the photon sphere increases as the parameter a increases. However, when we treat β as a variable, the photon sphere and shadow radii vary non-monotonically. The reason is that, when the Rastall parameter is less than zero, the metric f(r) is not a monotonic function of β.
Second, with the help of the sixth-order WKB method, we calculated the absorption cross section of the scalar field in detail. To compare the accuracy of the sixth-order WKB, we also presented the results of the sinc approximation with the geometric capture cross section as a reference. From Figs.<ref>, <ref> and <ref>, we can clearly observe that larger values of the parameters a and β enhance the partial or total absorption cross section when β>0. However, in the low frequency range, when a or β is set to a larger value, the results calculated by the two methods are quite different. Furthermore, in Figs.<ref>, <ref> and <ref>, we plotted the partial and total absorption cross sections when β<0. Unlike the case where β is positive, the absorption cross section does not always grow as the Rastall parameter decreases. Since the potential barrier reflects waves, the change in the absorption cross section is exactly the opposite of the change in the potential barrier. Hence, as β decreases, the total absorption cross section first increases, then decreases and finally increases again. It is worth mentioning that the smaller the value of β, the smaller the difference between the two approximations. Very importantly, in the mid-high frequency region, the total absorption cross section and the sinc approximation are in good agreement and in all cases oscillate around the geometric capture cross section σ_geo.
Finally, we investigated the energy emission rate of Hawking radiation. Specifically, the power emission rate is affected by the string parameter, the Rastall parameter as well as the multipole number. In Fig.<ref>, we found that both a and β suppress the power emission spectrum, and the peak position shifts to a lower energy region. Moreover, the multipole number l also significantly depresses the power emission spectra whereas the peak position shifts to the higher frequency regime. The case of β<0 is also similar to the case of β>0 above, except the case where β varies and a is fixed. As the Rastall parameter decreases, the power emission spectrum first increases and then decreases, at the same time, the peak position first moves to the higher frequency region and then enters the lower energy region.
§.§ acknowledgments
This work was supported partly by the National Natural Science Foundation of China (Grants No. 12065012, No. 12065013), Yunnan High-level Talent Training Support Plan Young & Elite Talents Project (Grants No. YNWR-QNBJ-2018-360) and the Fund for Reserve Talents of Young and Middle-aged Academic and Technical Leaders of Yunnan Province (Grant No. 2018HB006).
§.§ Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors’ comment: This present study is a theoretical work.]
Einstein1914
A. Einstein, Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys. ) 1914, 1030-1085 (1914).
Einstein1915
A. Einstein, Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys. ) 1915, 831-839 (1915).
Event2019
K. Akiyama et al. [Event Horizon Telescope], Astrophys. J. Lett. 875, L1 (2019).
Rastall1972
P. Rastall, Phys. Rev. D 6, 3357-3359 (1972).
Gogoi2021
D.J. Gogoi, U.D. Goswami, Phys. Dark Univ. 33, 100860 (2021).
Al-Rawaf1996
A.S. Al-Rawaf, M.O. Taha, Phys. Lett. B 366, 69-71 (1996).
Visser:2017gpz
M. Visser,
Phys. Lett. B 782, 83-86 (2018).
Das:2018dzp
D. Das, S. Dutta and S. Chakraborty,
Eur. Phys. J. C 78, 810 (2018).
Darabi:2017coc
F. Darabi, H. Moradpour, I. Licata, Y. Heydarzade and C. Corda,
Eur. Phys. J. C 78, 25 (2018).
Hansraj:2018zwl
S. Hansraj, A. Banerjee and P. Channuie,
Annals Phys. 400, 320-345 (2019).
Ziaie:2019jfl
A. H. Ziaie, H. Moradpour and S. Ghaffari,
Phys. Lett. B 793, 276-280 (2019).
Moradpour:2017ycq
H. Moradpour, A. Bonilla, E. M. C. Abreu and J. A. Neto,
Phys. Rev. D 96, 123504 (2017).
Li:2019jkv
R. Li, J. Wang, Z. Xu and X. Guo,
Mon. Not. Roy. Astron. Soc. 486, 2407-2411 (2019).
Abbas:2018ffk
G. Abbas and M. R. Shahzad,
Eur. Phys. J. A 54, 211 (2018).
Letelier1979
P.S. Letelier, Phys. Rev. D 20, 1294-1302 (1979).
Herscovich2010
E. Herscovich, M.G. Richarte,
Phys. Lett. B 689, 192-200 (2010).
Toledo2019
J.M. Toledo, V.B. Bezerra,
Eur. Phys. J. C 79, 117 (2019).
Morais2018
J.P. Morais Graça, I.P. Lobo, I.G. Salako,
Chin. Phys. C 42, 063105 (2018).
Li2021
Z. Li, T. Zhou,
Phys. Rev. D 104, 104044 (2021).
Chen:2018szr
S. Chen, L. Zhang, J. Jing,
Eur. Phys. J. C 78, 981 (2018).
Cai2020
X.C. Cai, Y.G. Miao,
Phys. Rev. D 101, 104023 (2020).
Setare:2003bd
M. R. Setare,
Phys. Rev. D 69, 044016 (2004).
Setare:2004uu
M. R. Setare, E. C. Vagenas,
Mod. Phys. Lett. A 20, 1923-1932 (2005).
Synge1966
J.L. Synge,
Mon. Not. Roy. Astron. Soc. 131, 463-466 (1966).
Bardeen
J.M. Bardeen, W.H. Press, S. A. Teukolsky,
Astrophys. J. 178, 347 (1972).
Abbas:2014oua
G. Abbas, U. Sabiullah,
Astrophys. Space Sci. 352, 769-774 (2014).
Amarilla2010
L. Amarilla, E.F. Eiroa, G. Giribet,
Phys. Rev. D 81, 124045 (2010).
Sharif2016
M. Sharif, S. Iftikhar,
Eur. Phys. J. C 76, 630 (2016).
Amir2019
M. Amir, A. Banerjee, S.D. Maharaj,
Annals Phys. 400, 198-207 (2019).
Babar2020
G.Z. Babar, A.Z. Babar and F. Atamurotov,
Eur. Phys. J. C 80, 761 (2020).
Konoplya20200
R.A. Konoplya,
Phys. Lett. B 804, 135363 (2020).
Konoplya20201
R.A. Konoplya, A.F. Zinhailo,
Eur. Phys. J. C 80, 1049 (2020).
Anacleto2021
M.A. Anacleto, J.A.V. Campos, F.A. Brito, E. Passos,
Annals Phys. 434, 168662 (2021).
Cai2021
X.C. Cai, Y.G. Miao,
Phys. Rev. D 103, 124050 (2021)
Zhang2021
M. Zhang, J. Jiang,
Phys. Lett. B 816, 136213 (2021).
Long:2019nox
F. Long, J. Wang, S. Chen, J. Jing,
JHEP 10, 269 (2019).
Long:2020wqj
F. Long, S. Chen, M. Wang, J. Jing,
Eur. Phys. J. C 80, 1180 (2020).
Gyulchev2019
G. Gyulchev, P. Nedkova, V. Tinchev, Y. Stoytcho,
AIP Conf. Proc. 2075, 040005 (2019).
Gyulchev2018
G. Gyulchev, P. Nedkova, V. Tinchev, S. Yazadjiev,
Eur. Phys. J. C 78, 544 (2018).
Nedkova2013
P.G. Nedkova, V.K. Tinchev, S.S. Yazadjiev,
Phys. Rev. D 88, 124019 (2013).
Perlick2017
V. Perlick, O.Y. Tsupko,
Phys. Rev. D 95, 104003 (2017).
Javed2021
W. Javed, A. Hamza, A. Övgün,
Universe 7, 385 (2021).
Matzner1968
R.A. Matzner,
J. Math. Phys. 9, 163 (1968).
Mashhoon1973
B. Mashhoon,
Phys. Rev. D 7, 2807-2814 (1973).
Starobinski1974
A.A. Starobinskil, S.M. Churilov,
Sov. Phys. JETP 65, 1-5 (1974).
Fabbri1975
R. Fabbri,
Phys. Rev. D 12, 933-942 (1975).
Ford1975
L.H. Ford,
Phys. Rev. D 12, 2963-2977 (1975).
Page1976
D.N. Page,
Phys. Rev. D 14, 3260-3273 (1976).
Sanchez19781
N.G. Sanchez,
Phys. Rev. D 18, 1030 (1978).
Das1997
S.R. Das, G.W. Gibbons, S.D. Mathur,
Phys. Rev. Lett. 78, 417-419 (1997).
Higuchi2001
A. Higuchi,
Class. Quant. Grav. 18, L139 (2001).
Kanti2002
P. Kanti, J. March-Russell,
Phys. Rev. D 66, 024023 (2002).
Jung2004
E. Jung, D.K. Park,
Class. Quant. Grav. 21, 3717-3732 (2004).
Grain2005
J. Grain, A. Barrau, P. Kanti,
Phys. Rev. D 72, 104016 (2005).
Crispino2007
L.C.B. Crispino, E.S. Oliveira, A. Higuchi, G.E.A. Matsas,
Phys. Rev. D 75, 104012 (2007).
Crispino2009
L.C.B. Crispino, S.R. Dolan, E.S. Oliveira,
Phys. Rev. D 79, 064022 (2009).
Macedo2014
C.F.B. Macedo, L.C.B. Crispino,
Phys. Rev. D 90, 064001 (2014).
Huang2015
H. Huang, M. Jiang, J. Chen, Y. Wang,
Gen. Rel. Grav. 47, 8 (2015).
2018
L.C.S. Leite, S. Dolan, L. Crispino, C.B.,
Phys. Rev. D 98,024046 (2018).
Huang2019
H. Huang, J. Chen, Y. Wang, T. Lu,
Gen. Rel. Grav. 51, 22 (2019).
Anacleto2020
M.A. Anacleto, F.A. Brito, J.A.V. Campos, E. Passos,
Phys. Lett. B 803, 135334 (2020).
Magalhaes2020
R.B. Magalhães, L.C.S. Leite, L.C.B. Crispino,
Eur. Phys. J. C 80, 386 (2020).
Junior2020
H.C.D. Lima, C.L. Benone, L.C.B. Crispino,
Phys. Lett. B 811, 135921 (2020).
Benone2018
C.L.Benone, L.C.S. Leite, L.C.B. Crispino, S.R. Dolan,
Int. J. Mod. Phys. D 27, 1843012 (2018).
Li2022
Q. Li et al.
Chinese Journal of Physics (2022).
Hawking1975
S.W. Hawking,
Commun. Math. Phys. 43, 199-220 (1975).
Hawking1976
S.W. Hawking,
Phys. Rev. D 13, 191-197 (1976).
Yale2011
A. Yale,
Phys. Lett. B 697, 398-403 (2011).
Chen:2008ra
S. Chen, B. Wang, R. Su,
Phys. Rev. D 77, 124011 (2008).
Konoplya2020
R.A. Konoplya, A.F. Zinhailo,
Phys. Lett. B 810, 135793 (2020).
Harmark2010
T. Harmark, J. Natario, R. Schiappa,
Adv. Theor. Math. Phys. 14, 727-794 (2010).
Kanti2015
P. Kanti, E. Winstanley,
Fundam. Theor. Phys. 178, 229-265 (2015).
Pappas2016
T. Pappas, P. Kanti, N. Pappas,
Phys. Rev. D 94, 024035 (2016).
Miao2017
Y.G. Miao, Z.M. Xu,
Phys. Lett. B 772, 542-546 (2017).
Javed2019
W. Javed, R. Babar, A. Övgün,
Mod. Phys. Lett. A 34, 1950057 (2019).
2020
R.A. Konoplya, A.F. Zinhailo, Z. Stuchlik,
Phys. Rev. D 102, 044023 (2020).
Guo2020
H. Guo, H. Liu, X.M. Kuang, B. Wang,
Phys. Rev. D 102, 124019 (2020).
Ali2021
R. Ali, R. Babar, M. Asgher, S.A.A. Shah,
Annals Phys. 432, 168572 (2021).
Slavov:2012mv
P. I. Slavov, S. S. Yazadjiev,
Phys. Rev. D 86, 084042 (2012).
Moradpour2016
H. Moradpour, I.G. Salako,
Adv. High Energy Phys. 2016, 3492796 (2016).
Azam:2017adt
M. Azam, G. Abbas, S. Sumera, A. R. Nizami,
Int. J. Geom. Meth. Mod. Phys. 14, 1750120 (2017).
Setare:2010zd
M.R.Setare, D. Momeni,
Int. J. Theor. Phys. 50, 106-113 (2011).
Yves2011
Y. Decanini, G. Esposito-Farese, A. Folacci,
Phys. Rev. D 83, 044032 (2011).
Decanini2010
Y. Decanini, A. Folacci, B. Raffaelli,
Phys. Rev. D 81, 104039 (2010)..
Futterman1988
J.A.H. Futterman, F.A. Handler, R.A. Matzner, Cambridge; New York: Cambridge University Press (1988).
Unruh1976
W.G. Unruh,
Phys. Rev. D 14, 3251-3259 (1976).
Iyer1987
S. Iyer, C. M. Will,
Phys. Rev. D 35, 3621 (1987).
Konoplya2003
R.A. Konoplya,
Phys. Rev. D 68, 024018 (2003).
Sharif:2020cus
M. Sharif, Q. Ama-Tul-Mughani,
PTEP 2020, 033E01 (2020).
Sharif:2021sow
M. Sharif, S. Shaukat,
Annals Phys. 436, 168673 (2022).
|
http://arxiv.org/abs/2307.06220v1 | 20230712150830 | ($\odot$, $\vee$)-derivations on MV-algebras | [
"Xueting Zhao",
"Aiping Gan",
"Yichuan Yang"
] | math.RA | [
"math.RA",
"math.AC",
"03G20, 06D35, 06B10, 08B26"
] |
(⊙,∨)-DERIVATIONS ON MV-ALGEBRAS
School of Mathematical Sciences, Shahe Campus, Beihang University, Beijing 102206, China
[email protected]
School of Mathematics and Statistics,
Jiangxi Normal University, Nanchang, Jiangxi 330022, P.R. China
[email protected]
School of Mathematical Sciences, Shahe Campus, Beihang University, Beijing 102206, China
[email protected]
Let A be an MV-algebra. An (⊙,∨)-derivation on A is a map d: A→ A satisfying:
d(x ⊙ y) = (d(x) ⊙ y) ∨(x ⊙ d(y))
for all x, y ∈ A. This paper initiates the study of (⊙,∨)-derivations on MV-algebras. Several families of (⊙,∨)-derivations on an MV-algebra are explicitly constructed to give realizations of the underlying lattice of an MV-algebra as lattices of (⊙,∨)-derivations. Furthermore, (⊙,∨)-derivations on a finite MV-chain are enumerated and the underlying lattice is described.
Key words: MV-algebra, derivation, direct product, complete lattice, Boolean center, ideal, fixed point set
MSC(2020): 03G20, 06D35, 06B10, 08B26
Yichuan Yang^∗
Received May 01, 2023 / Accepted May 31, 2023
=================================================
§ INTRODUCTION
The notion of derivation from analysis has been defined for various algebraic structures by extracting the Leibniz rule
(d/dx)(fg) = ((d/dx)f)g + f((d/dx)g).
Derivations play an important role in describing the characteristics of prime rings <cit.>, and the multiplicative or additive commutativity of near-rings <cit.>, etc. A derivation on a prime ring (R, +, ·) is a map d : R→ R satisfying, for any x,y∈ R:
(1) d(x+y)=d(x)+d(y), (2) d(x· y)=d(x)· y+x· d(y).
The derivation on a lattice (L,∨,∧) was defined by Szász <cit.>, and was deeply investigated in <cit.>, which is a map d : L→ L satisfying that for all x, y ∈ L:
(i) d(x ∨ y) = d(x) ∨ d(y), (ii) d(x ∧ y) = (d(x) ∧ y) ∨ (x ∧ d(y)).
The notion of derivations satisfying condition (ii) only was investigated by Xin and his coauthors <cit.> with motivation from information science.
In recent years, derivations have been defined and studied for BCI-algebras <cit.>, BCC-algebras <cit.>, BE-algebras <cit.>, and basic algebras <cit.>. Furthermore, derivations on operator algebras were investigated by Brešar and others <cit.>, which promoted developments in mathematical quantum mechanics and quantum field theory.
An algebraic structure with a derivation is broadly called a differential algebra <cit.>. In fact, differential algebra has found important applications in arithmetic geometry, logic and computational algebra, especially in the profound work of Wu on mechanical proof of geometric theorems <cit.>. There are many instances of differential algebras, such as for fields <cit.>, commutative algebras <cit.>, noncommutative algebras <cit.>, lattices <cit.>, and MV-algbras <cit.>.
The concept of derivations on MV-algebras was introduced by Alshehri <cit.>: given an MV-algebra (M,⊕,^* , 0), a derivation on M is an operator (i.e., a map) d : M → M such that d(x⊙ y) = (d(x)⊙ y) ⊕ (x⊙ d(y)) for all x, y ∈ M, where x⊙ y=(x^*⊕ y^*)^*. Furthermore, different kinds of derivations on MV-algebras have been deeply investigated. Yazarli <cit.> introduced the notions of symmetric bi-derivations and generalized derivations on MV-algebras. Then Wang, Davvaz and He <cit.> studied additive derivations and their adjoint derivations to give a representation of MV-algebras. Recently, τ-additive derivations on MV-algebras have been extended by Lu and Yang <cit.>. Following these developments, we define the notion of (⊙,∨)-derivations on an MV-algebra A, satisfying
d(x ⊙ y) = (d(x) ⊙ y) ∨(x ⊙ d(y))
for any x,y∈ A, where x∨ y=(x⊙ y^*)⊕ y.
Our choice does not impose the extra “union-preserving” condition d(x ∨ y) = d(x)∨ d(y), and it leads to several properties in this paper. Indeed, as in <cit.>, a (⊙,∨)-derivation satisfying the “union-preserving” condition must be isotone.
This paper initiates the study of (⊙,∨)-derivations on MV-algebras. In Section <ref>, we recall some necessary properties and examples of MV-algebras. In Section <ref>, we introduce and study (⊙,∨)-derivations on MV-algebras. After exploring a sufficient and necessary condition for an operator on a n-element MV-chain L_n to be an (⊙,∨)-derivation (Theorem <ref>), we show that the cardinality of the set of all (⊙,∨)-derivations on L_n
is exactly (n-1)(n+2)/2 (Theorem <ref>).
In Section <ref>, the direct product of (⊙,∨)-derivations is introduced. Let Ω be an index set, {A_i}_i ∈Ω be a family of MV-algebras and d_i be an operator on A_i for each i∈Ω; we prove that the direct product ∏_i ∈Ω d_i of the d_i’s is an (⊙,∨)-derivation (resp. a principal (⊙,∨)-derivation) on ∏_i ∈Ω A_i
if and only if d_i is an (⊙,∨)-derivation (resp. a principal (⊙,∨)-derivation) on A_i for each i ∈Ω (Theorem <ref>).
In Section <ref>, we show that the set of (⊙,∨)-derivations on a finite MV-algebra has a natural lattice structure (Proposition <ref>) and we consider several lattice structures of (⊙,∨)-derivations which are isomorphic to the underlying lattice 𝐋(A) of an MV-algebra A (Propositions <ref> and <ref>).
(Theorem <ref>).
Notations. Throughout this paper, let |A| denote the cardinality of a set A and ℕ_+ denote the set of all positive integers.
§ PRELIMINARIES
In this section, we recall some necessary definitions and results about MV-algebras.
<cit.> An algebra (A, ⊕, *, 0) of type (2, 1, 0) is called an MV-algebra if it satisfies the following equations:
(MV1) x⊕(y⊕ z)=(x⊕ y)⊕ z;
(MV2) x⊕ y=y⊕ x;
(MV3) x⊕ 0=x;
(MV4) x^**=x;
(MV5) x⊕ 0^*= 0^*;
(MV6) ( x^*⊕ y)^*⊕ y=( y^*⊕ x)^*⊕ x.
As usual, we shall denote an MV-algebra by its underlying carrier set.
Note that all axioms of MV-algebras are equations; it follows from Birkhoff's Theorem <cit.> that the class of all MV-algebras forms a variety.
So the notions of isomorphism, subalgebra, congruence and direct product
are just the particular cases of the corresponding universal algebraic notions.
<cit.>
Let L =[0,1] be the real unit interval. Define
x ⊕ y =
min{1, x + y} and x^* = 1 - x for any x, y ∈ L.
Then (L, ⊕, *, 0) is an MV-algebra.
Let Q = [0, 1]∩ℚ and for each positive integer n ≥ 2, let
L_n = {0, 1/(n-1), 2/(n-1), ⋯, (n-2)/(n-1), 1}.
Then Q and the n-element subset L_n are subalgebras of L .
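As an informal computational aside (not part of the paper), the Łukasiewicz operations on the finite chains L_n can be checked against axioms (MV1)-(MV6) by brute force; exact rational arithmetic avoids rounding issues, and the function names below are ours.

```python
# Hedged sketch: brute-force verification of axioms (MV1)-(MV6) on the chains L_n.
from fractions import Fraction
from itertools import product

def L(n):                                   # the n-element chain L_n
    return [Fraction(k, n - 1) for k in range(n)]

def oplus(x, y):                            # x (+) y = min(1, x + y)
    return min(Fraction(1), x + y)

def star(x):                                # x* = 1 - x
    return 1 - x

def is_mv_algebra(A):
    pairs, triples = list(product(A, repeat=2)), list(product(A, repeat=3))
    mv1 = all(oplus(x, oplus(y, z)) == oplus(oplus(x, y), z) for x, y, z in triples)
    mv2 = all(oplus(x, y) == oplus(y, x) for x, y in pairs)
    mv345 = all(oplus(x, Fraction(0)) == x and star(star(x)) == x
                and oplus(x, star(Fraction(0))) == star(Fraction(0)) for x in A)
    mv6 = all(oplus(star(oplus(star(x), y)), y) == oplus(star(oplus(star(y), x)), x)
              for x, y in pairs)
    return mv1 and mv2 and mv345 and mv6

print(all(is_mv_algebra(L(n)) for n in range(2, 7)))    # expected: True
```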
<cit.>
Define the following sets of formal symbols:
𝒞_0 = {0, c, 2c, 3c, ⋯}, 𝒞_1 = {1, c^*,(2c)^*,(3c)^*,⋯},
where (kc)^* = 1 - kc, and (kc)^** =((kc)^*)^* = kc for any k ∈ℕ_+.
Let + (respectively, -) be the ordinary sum (respectively, subtraction) between integers. We define the following binary operation ⊕ on 𝒞= 𝒞_0∪𝒞_1:
* nc⊕ mc=(n+m)c
* (nc)^*⊕ (mc)^*=1
* nc⊕ (mc)^*=(mc)^*⊕ nc = 1 if m≤ n, and ((m-n)c)^* if m>n.
Then (𝒞, ⊕, *,0) is an infinite MV-chain, and 0<c< 2c < 3c < ⋯ < (n - 1)c < nc < ⋯ < (nc)^* < ((n- 1)c)^* < ⋯ < (3c)^* < (2c)^* < c^* < 1. MV-chains Q and 𝒞 are not isomorphic, though they have the same countable cardinality.
On every MV-algebra A, we define the constant 1 and the operation
⊙ as:
1 =0^* and x⊙ y=(x^*⊕ y^*)^*.
Then for all x, y∈ A, the following well-known
properties hold <cit.>:
∙ (A, ⊙, *, 1) is an MV-algebra;
∙ ∗ is an
isomorphism between (A, ⊕, *, 0) and (A, ⊙, *, 1);
∙ 1^*=0, 1⊕ x=1;
∙ x⊕ y=(x^*⊙ y^*)^*;
∙ x^*⊕ x = 1,
x⊙ x^*=0.
Let A be an MV-algebra. For any x, y∈ A,
define
x≤ y if and only if x^*⊕ y=1. Then
≤
is a partial order on
A, called the natural order of A <cit.>. Furthermore, the natural order determines a structure
of bounded distributive lattice 𝐋(A) on A, with 0 and 1
are respectively the bottom and the top element, and
x∨ y=(x⊙ y^*)⊕ y and x∧ y=x⊙ (x^*⊕ y).
A linearly ordered MV-algebra is called an MV-chain. It is
well-known that every n-element MV-chain is isomorphic to the MV-chain L_n in Example <ref>.
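Continuing the informal computational aside above (ours, not the authors'), the derived operations can be tested directly: on the chain L_n the formulas above give x∨y=max(x,y), x∧y=min(x,y), and x≤y exactly when x^*⊕y=1, so 𝐋(L_n) is the expected n-element lattice chain.

```python
# Hedged sketch: derived operations ⊙, ∨, ∧ and the natural order on the chain L_n.
from fractions import Fraction
from itertools import product

def L(n):
    return [Fraction(k, n - 1) for k in range(n)]

def oplus(x, y):
    return min(Fraction(1), x + y)

def star(x):
    return 1 - x

def odot(x, y):                 # x (.) y = (x* (+) y*)*, i.e. max(0, x + y - 1) on [0,1]
    return star(oplus(star(x), star(y)))

def vee(x, y):                  # x v y = (x (.) y*) (+) y
    return oplus(odot(x, star(y)), y)

def wedge(x, y):                # x ^ y = x (.) (x* (+) y)
    return odot(x, oplus(star(x), y))

A = L(6)
pairs = list(product(A, repeat=2))
print(all(vee(x, y) == max(x, y) and wedge(x, y) == min(x, y) for x, y in pairs))   # True
print(all((x <= y) == (oplus(star(x), y) == 1) for x, y in pairs))                  # True
```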
<cit.>
Let A be an MV-algebra and x, y ∈ A. Then the following statements are
equivalent:
(1) x≤ y;
(2) x^*⊕ y=1;
(3) x ⊙ y^* = 0;
(4) y = x ⊕ (y ⊙ x^*);
(5) there is an element z ∈ A such that x ⊕ z = y.
<cit.>
Let A be an MV-algebra, and x,y,z∈ A. Then the following statements hold:
(1) x⊙ y≤ x∧ y≤ x≤ x∨ y≤ x⊕ y;
(2) If x⊕ y=0, then x=y=0; If x⊙ y=1, then x=y=1;
(3) If x≤ y, then x∨ z≤ y∨ z, x∧ z≤ y∧ z;
(4) If x≤ y, then x⊕ z≤ y⊕ z, x⊙ z≤ y⊙ z;
(5) x≤ y if and only if y^*≤ x^*;
(6) x⊙ (y∧ z)=(x⊙ y)∧ (x⊙ z);
(7) x⊙ (y∨ z)=(x⊙ y)∨ (x⊙ z);
(8) x ⊙ y≤ z if and only if x≤ y^*⊕ z.
<cit.>
Let A be an MV-chain. For any x, y,z ∈ A,
(1) x⊕ y=x if and only if x=1 or y=0;
(2) If x⊙ y=x⊙ z>0, then y=z.
For any Boolean algebra (A, ∨, ∧, -, 0, 1), the structure
(A, ∨, -, 0) is an MV-algebra, where ∨, - and 0 denote, respectively, the join,
the complement and the smallest element in A.
Boolean algebras form a subvariety of the variety of MV-algebras. They
are precisely the MV-algebras satisfying the additional equation x ⊕ x = x. An element a of A is called idempotent if a⊕ a=a.
Denote the set of all idempotent elements of A by 𝐁(A), called the Boolean center of A. It is known that
B(A) is a subalgebra of the MV-algebra A, and a subalgebra B of A is a Boolean algebra if and only if
B ⊆B(A) <cit.>.
For convenience, we denote by B_n the n-element Boolean algebra. It is clear that
B_2 is exactly the 2-element MV-chain L_2.
<cit.> For every element x in an MV-algebra A, the following conditions are equivalent:
(1) x∈𝐁(A);
(2) x⊕ x=x;
(3) x⊙ x=x;
(4) x^*∈𝐁(A);
(5) x⊕ y=x∨ y for all y ∈ A;
(6) x⊙ y=x∧ y for all y ∈ A.
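As a quick illustration of item (2) of the lemma (a computational aside of ours, not from the paper), the Boolean center can be computed as the set of ⊕-idempotents; on a chain L_n it consists only of 0 and 1, while on a direct product such as L_2 × L_3 it picks out the four componentwise 0/1 elements.

```python
# Hedged sketch: the Boolean center B(A) as the set of idempotents {x : x (+) x = x}.
from fractions import Fraction
from itertools import product

def L(n):
    return [Fraction(k, n - 1) for k in range(n)]

def oplus(x, y):
    return min(Fraction(1), x + y)

def boolean_center(A, add):
    return [x for x in A if add(x, x) == x]

print(boolean_center(L(5), oplus))                        # only 0 and 1
pairs = list(product(L(2), L(3)))                         # the direct product L_2 x L_3
oplus2 = lambda u, v: (oplus(u[0], v[0]), oplus(u[1], v[1]))
print(boolean_center(pairs, oplus2))                      # the four idempotent pairs
```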
<cit.> Let A be an MV-algebra and I be a subset of A. Then we say that I is an ideal if the following conditions are satisfied:
(1) 0∈ I;
(2) x,y∈ I imply x⊕ y∈ I;
(3) x∈ I and y≤ x imply y∈ I.
<cit.> Let A be a lattice and I be a subset of A. Then we say that I is a lattice ideal if the following conditions are satisfied:
(1) 0∈ I;
(2) x,y∈ I imply x∨ y∈ I;
(3) x∈ I and y≤ x imply y∈ I.
That is, a lattice ideal of an MV-algebra A is an ideal of the underlying lattice (A, ∧, ∨) <cit.>. It can easily be verified that every ideal is a lattice ideal, but the converse is not necessarily true.
<cit.>
An MV-algebra A is finite if and only if A is isomorphic to a finite product of finite chains, in symbols,
A ≅ L_d_1×⋯× L_d_u,
for some integers 2 ≤ d_1≤ d_2≤…≤ d_u.
This representation is unique, up to the ordering of factors.
Finally, we list the famous Chang's Subdirect Representation Theorem, stating that if an equation holds in all totally ordered MV-algebras, then the equation holds in all MV-algebras.
<cit.>
Every nontrivial MV-algebra is a subdirect product of MV-chains.
§ (⊙,∨)-DERIVATIONS ON MV-ALGEBRAS
In this section, we introduce (⊙,∨)-derivations on MV-algebras, and characterize some properties about (⊙,∨)-derivations, such as
isotonicity and idempotency.
Also, we determine the cardinality of the set of (⊙,∨)-derivations on finite MV-chains.
§.§ Basic properties of (⊙,∨)-derivations on MV-algebras
Let A be an MV-algebra. A map d : A→ A is called an (⊙,∨)-derivation on A if it satisfies the equation:
d(x ⊙ y) = (d(x) ⊙ y) ∨(x ⊙ d(y))
for all x, y ∈ A.
It is easy to check that the identity map Id_A and the zero map 0_A are simple examples of (⊙,∨)-derivations on an MV-algebra A, where
Id_A(x) = x and 0_A(x)=0 for any x ∈ A.
Also, for a given a∈ A, define the map d_a: A→ A by
d_a(x):= a⊙ x for all x∈ A.
Then d_a is an (⊙,∨)-derivation, since both sides of Eq.(<ref>) reduce to a⊙ x⊙ y in this case; it is called a principal (⊙,∨)-derivation.
Both Id_A and 0_A are principlal (⊙,∨)-derivations, since
Id_A=d_1 and 0_A=d_0.
Denote the set of all (⊙,∨)-derivations on A by (A); and the set of
all the principal (⊙,∨)-derivations on A by (A), that is
(A)={d_a | a∈ A}.
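As an informal sanity check (ours, not part of the paper's proofs), the definitions above can be tested by brute force on small chains: every principal map d_a is indeed an (⊙,∨)-derivation, and exhaustively enumerating all self-maps of L_n lets one compare with the count (n-1)(n+2)/2 stated in the introduction. The helper names below are ours.

```python
# Hedged sketch: brute-force check of principal derivations and enumeration of all
# (⊙,∨)-derivations on L_n for small n.
from fractions import Fraction
from itertools import product

def L(n):
    return [Fraction(k, n - 1) for k in range(n)]

def oplus(x, y):
    return min(Fraction(1), x + y)

def star(x):
    return 1 - x

def odot(x, y):
    return star(oplus(star(x), star(y)))

def vee(x, y):
    return oplus(odot(x, star(y)), y)

def is_derivation(d, A):        # d(x (.) y) == (d(x) (.) y) v (x (.) d(y)) for all x, y
    return all(d[odot(x, y)] == vee(odot(d[x], y), odot(x, d[y]))
               for x, y in product(A, repeat=2))

def all_derivations(A):
    found = []
    for values in product(A, repeat=len(A)):
        d = dict(zip(A, values))
        if is_derivation(d, A):
            found.append(d)
    return found

for n in range(2, 6):
    A = L(n)
    principal_ok = all(is_derivation({x: odot(a, x) for x in A}, A) for a in A)
    print(n, principal_ok, len(all_derivations(A)), (n - 1) * (n + 2) // 2)
```

For n=2,...,5 the enumeration is tiny (at most 5^5 candidate maps), so the exhaustive search finishes instantly.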
(1) It is clear that Eq.(<ref>) holds when x=y=1, where d(1)=d(1)⊙ 1.
(2) Adapting the classical terminology of differential algebras, we also call a derivation a differential operator. More generally, we also call a map f:A→ A an operator even though there is no linearity involved.
(3) Note that in <cit.>, an (⊙,⊕)-derivation on an MV-algebra A is defined to be a map satisfying d(x ⊙ y) = (d(x) ⊙ y) ⊕(x ⊙ d(y)) for all x, y∈ A. In this paper, we use “∨" instead of “⊕". Our choice of this notation has its motivation from certain asymmetry of “∨" and “⊙", and already leads to some properties as displayed in Proposition <ref>.
(4) It is natural to consider a (⊕,∧)-derivation which is dual to the (⊙,∨)-derivation on an MV-algebra A: d(x ⊕ y)=(d(x)⊕ y)∧(x⊕ d(y)) for all x, y∈ A. If this condition is taken, then the study should be completely parallel to the study of Eq. (<ref>) due to the symmetry of the operations “∨” and “∧”, “⊙” and “⊕” in the definition of an MV-algebra. Furthermore, if a map d is both an (⊙,∨)-derivation and
a (⊕,∧)-derivation,
then d=Id_A (see Proposition <ref>).
Let A be an MV-algebra, x, y ∈ A and d∈(A). Then for any positive integer n, the following statements hold:
(1) d(0) = 0.
(2) d(x^n) = x^{n-1}⊙ d(x), where x^0=1 and x^n=x⊙ x ⊙⋯⊙ x (n factors).
(3) d(x)⊙ x^*= x ⊙ d(x^*) = 0.
(4) d(x) ≤ x.
(5) d(x) = d(x) ∨(x ⊙ d(1)) and so x ⊙ d(1)≤ d(x).
(6) d(x^*) ≤ x^*≤ (d(x))^*.
(7) d(x)⊙ d(y) ≤ d(x ⊙ y) ≤ d(x) ∨ d(y) ≤ d(x) ⊕ d(y).
(8) (d(x))^n≤ d(x^n).
(9) If I is a downset of A, then d(I)⊆ I, where d(I)={d(x) | x∈ I}.
(10)
If y≤ x and d(x)=x, then d(y)=y.
(1) Putting x=y=0 in Eq.(<ref>), we immediately have d(0) = d(0⊙0) =
(d(0)⊙ 0)∨(0 ⊙ d(0)) =0.
(2) We prove d(x^n) = x^n-1⊙ d(x) by induction on n. First,
it is clear that d(x^1) =d(x)=1 ⊙ d(x)= x^1-1⊙ d(x). For n=2,
putting x = y in Eq.(<ref>), we get d(x^2) = d(x⊙ x) = (d(x) ⊙ x)∨
(x⊙ d(x)) = x ⊙ d(x).
Now assume that d(x^n) = x^n-1⊙ d(x). By Eq.(<ref>), we have d(x^n+1)=d(x^n⊙ x) =(d(x^n)⊙ x)∨ (x^n⊙ d(x)) =(x^n-1⊙ d(x)⊙ x)∨ (x^n⊙ d(x))=x^n⊙ d(x), and so (2) holds.
(3) Since x⊙ x^*=0, by Item (1) it follows that 0=d(0)=d(x⊙ x^*)=(d(x)⊙ x^*)∨ (x⊙ d(x^*)). So d(x)⊙ x^*=0 and x ⊙ d(x^*)=0.
(4) Since d(x)⊙ x^*=0 by Item (3), it follows immediately by Lemma <ref> that d(x)≤ x.
(5) By Eq.(<ref>) we have d(x) = d(x⊙ 1) = (d(x) ⊙ 1)∨(x⊙ d(1)) =
d(x) ∨(x ⊙ d(1)). So x ⊙ d(1) ≤ d(x).
(6) We have d(x^*) ≤ x^* and d(x)≤ x by Item (4). Thus x^*≤ (d(x))^* by Lemma <ref> (5).
(7) By Item (4) and Lemma <ref> (4), we have d(x)⊙ d(y) ≤ x⊙ d(y) and d(x)⊙ d(y) ≤ d(x)⊙ y. So d(x)⊙ d(y) ≤ (d(x)⊙ y)∨(x⊙ d(y)) = d(x⊙ y).
Furthermore, it follows from Lemma <ref> (1) that d(x) ⊙ y ≤ d(x),x ⊙ d(y) ≤ d(y). So d(x⊙ y)=(d(x)⊙ y) ∨(x ⊙ d(y)) ≤ d(x) ∨ d(y). Finally, we get d(x)∨ d(y)≤ d(x) ⊕ d(y) by Lemma <ref> (1).
(8) By Item (2), we have d(x^n) = x^n-1⊙ d(x). Since d(x)≤ x, it follows by Lemma <ref> (4) that (d(x))^n-1≤ x^n-1 and then (d(x))^n=(d(x))^n-1⊙ d(x)≤ x^n-1⊙ d(x)=d(x^n).
(9) Let I be a downset of A and y∈ d(I). Then there exists a∈ I such that y=d(a). Since d(a)≤ a by Item (4), we have y=d(a)∈ I by Definition <ref>. Thus d(I)⊆ I.
(10) If y≤ x and d(x)=x, then
d(y)=d(x∧ y)=d(x⊙(x^*⊕ y))
= (d(x)⊙(x^*⊕ y))∨ (x⊙ d(x^*⊕ y))
= (x⊙(x^*⊕ y))∨ (x⊙ d(x^*⊕ y))
= x⊙(x^*⊕ y)
= x∧ y
= y,
and so we get d(y)=y.
It is known that if d
is a derivation on a lattice L, then d=Id_L iff d is injective iff d is surjective
<cit.>. In Proposition <ref>, we will show that
if d
is an (⊙,∨)-derivation on an MV-algebra A, then d=Id_A iff d is surjective. However, d being injective does not necessarily imply that d=Id_A (see Remark <ref>).
Let A be an MV-algebra and d∈(A). Then the following statements are equivalent:
(1) d=Id_A;
(2) d(1)=1;
(3) d(a)=1 for some a∈ A;
(4) d is surjective;
(5) d is a (⊕,∧)-derivation, i.e., d satisfies the condition:
d(x ⊕ y)=(d(x)⊕ y)∧(x⊕ d(y)) for all x, y∈ A.
It is clear that (1)⇒ (2)⇒ (3), and (1)⇒ (4)⇒ (3) by the property of Id_A.
(2)⇒ (1). Assume that d(1)=1. Then by Proposition <ref> (10) we have that d(x)=x for all x∈ A. Thus d=Id_A, and Item (1) holds.
(3)⇒ (2). Assume that d(a)=1 for some a∈ A. By Proposition <ref> (4), we have 1=d(a)≤ a, and so a=1. Thus d(1)=1, Item (2) holds.
(1)⇒ (5). Assume that d=Id_A. Then d(x⊕ y)=x⊕ y=(x⊕ y)∧(x⊕ y)=(d(x)⊕ y)∧(x⊕ d(y)) for all x, y∈ A, and thus Item (5) holds.
(5)⇒ (2). Assume that d is a (⊕,∧)-derivation. Then d(1)=d(1⊕1)=(d(1)⊕ 1)∧(1⊕ d(1))=1, and so Item (2) holds.
Let A be an MV-algebra and d∈(A). In general, d being injective does not imply that d=Id_A.
For example, let 𝒞 be the infinite MV-chain in Example <ref>.
Define an operator d on 𝒞 by
d(x) := x⊙ c^* if x∈𝒞_1, and d(x) := x if x∈𝒞_0.
Claim (1): d∈(𝒞).
Indeed, let x,y∈𝒞. Consider the following cases:
Case (i): x, y∈𝒞_1. Then d(x⊙ y)=(x⊙ y)⊙ c^*=(d(x)⊙ y)∨(x⊙ d(y)).
Case (ii): x, y∈𝒞_0. Then
d(x⊙ y)=x⊙ y=(d(x)⊙ y)∨(x⊙ d(y)).
Case (iii): x∈𝒞_1, y∈𝒞_0, let x=(mc)^*, y=nc, where m, n∈ℕ_+. Then d(x)=(mc)^*⊙ c^*=((m+1)c)^* and d(y)=y.
If m≥ n, then
d(x⊙ y)=d((mc)^*⊙ nc)=d(0)=0=(((m+1)c)^*⊙ nc)∨((mc)^*⊙ nc)=(d(x)⊙ y)∨ (x⊙ d(y)). If m<n,
then
d(x)⊙ y=((m+1)c)^*⊙ nc, which equals 0 if m+1=n, and equals (n-m-1)c if m+1<n.
It follows that d(x)⊙ y< (n-m)c=x⊙ d(y), and thus d(x⊙ y)=d((mc)^*⊙ nc)=d((n-m)c)=(n-m)c=(d(x)⊙ y)∨(x⊙ d(y)).
Case (iv): x∈𝒞_0, y∈𝒞_1. Similarly, we can obtain that d(x⊙ y)=(d(x)⊙ y)∨(x⊙ d(y)).
Summarizing the above arguments, we get d∈(𝒞).
Claim (2): d is injective. Indeed, let x,y∈𝒞 and x≠ y. If x, y∈𝒞_1, say x=(mc)^*,y=(nc)^*, where m, n are positive integers and m≠ n, then d(x)=(mc)^*⊙ c^*=((m+1)c)^*≠ ((n+1)c)^*=(nc)^*⊙ c^*=d(y).
If x, y∈𝒞_0, then d(x)=x≠ y=d(y).
If x∈𝒞_1, y∈𝒞_0 or y∈𝒞_1, x∈𝒞_0, say x∈𝒞_1, y∈𝒞_0,
then by the definition of d, we have
d(x)∈𝒞_1, d(y)∈𝒞_0, so d(x)≠ d(y) since 𝒞_0∩𝒞_1=∅. Thus d is injective.
However, d≠Id_𝒞 since d(1)=c^*≠ 1.
Let A be an MV-algebra and d∈(A).
From Remark <ref>, we see that d(a) may not lie in B(A) if a∈B(A).
In what follows, some properties of (⊙,∨)-derivations related to Boolean center B(A) of an MV-algbera A are given.
Let A be an MV-algebra and d∈Der(A). Then for all x, y ∈B(A),
the following statements hold:
(1) d(x ∧ y) = (d(x) ∧ y) ∨ (x ∧ d(y)).
(2) d(x) = x ⊙ d(x).
(1) By Lemma <ref> (6), we have
d(x∧ y) = d(x⊙ y) = (d(x)⊙ y)∨ (x⊙ d(y)) = (d(x)∧ y)∨ (x∧ d(y)).
(2) Since x⊙ x=x, we have d(x)=d(x⊙ x)=x⊙ d(x) by Proposition <ref> (2).
If an MV-algebra A is a Boolean algebra, then d is an (⊙,∨)-derivation on A if and only if d is a derivation on the lattice (A,∨,∧).
It follows immediately by Proposition <ref> and Lemma <ref>.
Note that d(d(a)) may not equal d(a) if a∈B(A). For example, in Remark <ref>, we have 1∈B(𝒞) but d(d(1))=d(c^*)=c^*⊙ c^*=(2c)^*≠ c^*=d(1).
Proposition <ref> tells us that d(d(a)) = d(a) if d(a) ∈B(A).
Let A be an MV-algebra, d∈(A) and a∈ A. If d(a) ∈B(A),
then d(d(a)) = d(a).
Assume that d∈(A), a∈ A with d(a) ∈B(A). Then d(a)=d(a)⊙ d(a) ≤ d(a⊙ a)=a⊙ d(a)≤ d(a) by Proposition <ref> (8) and
Lemma <ref> (1). Thus d(a)=a⊙ d(a), and therefore d(d(a))=d(a⊙ d(a))=(d(a)⊙ d(a))∨ (a⊙ d(d(a)))=d(a)∨ (a⊙ d(d(a))) by Eq. (<ref>). Consequently, we get
d(a) ≤ d(d(a)). Also, we have d(d(a)) ≤ d(a) by Proposition <ref> (4).
Hence d(d(a)) = d(a).
§.§ (⊙,∨)-derivations on MV-chains
In this subsection we will determine the cardinality of (A) when A is a finite MV-chain.
Let n≥ 2 be a positive integer. Recall that
every n-element MV-chain is isomorphic to the MV-chain L_n, where L_n is given in Example <ref>.
In L_n, n-m-1/n-1=(n-2/n-1)^m for each m∈{1, 2, ⋯, n-1}. That is to say,
for any x∈ L_n\{1}, x can be expressed as a power of n-2/n-1.
Let d be an operator on L_n and v=n-2/n-1. Suppose that d(v)≤ v. Then d∈(L_n) if and only if d satisfies the following conditions:
(1) d(v^m)=v^m-1⊙ d(v) for each m∈{1, 2, ⋯, n-1};
(2) v⊙ d(1)≤ d(v).
If d∈(L_n), then for each m∈{1, 2, ⋯, n-1}, we have d(v^m)=v^m-1⊙ d(v)
by Proposition <ref> (2), and v⊙ d(1)≤ d(v ⊙ 1)=d(v)
by Proposition <ref> (5).
Thus d satisfies the conditions (1) and (2).
Conversely, suppose that d satisfies the conditions (1) and (2). Let x, y∈ L_n. By Remark <ref> (1),
we can assume that x≠ 1 or y≠ 1 and distinguish the following cases:
If x≠ 1 and y ≠ 1, then x=v^k and y=v^l for some k, l∈{1, 2, ⋯, n-1}. By the condition (1), we get d(x⊙ y)=d(v^k⊙ v^l)=v^k+l-1⊙ d(v)=((v^k-1⊙ d(v))⊙ v^l)∨ (v^k⊙ (v^l-1⊙ d(v)))=(d(x)⊙ y)∨(x⊙ d(y)).
If x=1 or y=1 (but not both), say x ≠ 1 and y = 1, then
x=v^k for some k∈{1, 2, ⋯, n-1}.
By the condition (1), we have d(x)=d(x⊙1)= d(v^k)=v^k-1⊙ d(v). Also, we have x⊙ d(1)=v^k-1⊙ v⊙ d(1)≤ v^k-1⊙ d(v)=d(x)⊙ 1 by condition (2). Thus we have derived that d(x⊙ 1)= d(x)=d(x)⊙ 1=(d(x)⊙ 1)∨(x⊙ d(1)).
Therefore, we conclude that d∈(L_n).
From Theorem <ref>, we see that if d∈(L_n), then for any x∈ L_n with x<n-2/n-1,
d(x) is determined by the value d(n-2/n-1).
However, if L is
an infinite MV-chain with an anti-atom v (i.e, v is the maximum element in L\{1} ) and d∈(L), then for any x<v,
d(x) may not be determined by the value d(v).
For example, let 𝒞 be the MV-chain in Example <ref>.
Then c^* is the anti-atom of 𝒞.
Define operators d and d' on 𝒞 as follows:
d(x) := x⊙ c^* if x∈𝒞_1, and d(x) := x if x∈𝒞_0; and d'(x) := x⊙ c^* for all x∈𝒞.
Then d∈(𝒞) by Remark <ref> and d' is a principal (⊙, ∨)-derivation.
Furthermore, d(c^*)=d'(c^*)
but d≠ d' since d(c)=c≠ 0=d'(c).
Let n≥ 2 be a positive integer. Then |(L_n)|= (n-1)(n+2)/2.
Assume that d∈(L_n) and denote n-2/n-1 by v. Then
d(v)≤ v by Proposition <ref> (4), and so
d(v)=i/n-1 for some i ∈{0, 1, 2, ⋯, n-2}.
For any x∈ L_n with
x<v,
d(x) is determined by the value d(v) by Theorem <ref>.
Now consider the value d(1).
By the condition (2) of Theorem <ref>, we have
v⊙ d(1)=n-2/n-1⊙ d(1)≤ d(v)= i/n-1.
Notice that for all k, l∈{0, 1, 2, ⋯, n-2}, we have k/n-1⊙l/n-1=max{0, k+l/n-1 -1 }.
Eq. (<ref>) implies that d(1)≤i+1/n-1. So d(1) has i+2 choices.
Summarizing the above arguments, we get
|(L_n)|=∑_i=0^n-2(i+2)=2+3+⋯+n=(n-1)(n+2)/2.
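As a sanity check of this count, the short brute-force search below (a sketch only, using the fact d(x)≤ x proved above to restrict the search space) enumerates all (⊙,∨)-derivations on L_n for small n and compares the total with (n-1)(n+2)/2.
[language=Python]
from fractions import Fraction
from itertools import product

def odot(x, y):
    return max(Fraction(0), x + y - 1)       # Lukasiewicz product x ⊙ y

def count_derivations(n):
    A = [Fraction(k, n - 1) for k in range(n)]
    # any derivation satisfies d(x) <= x, so it suffices to search over
    # maps with d(x) chosen from {y in A : y <= x}
    choices = [[y for y in A if y <= x] for x in A]
    count = 0
    for values in product(*choices):
        d = dict(zip(A, values))
        if all(d[odot(x, y)] == max(odot(d[x], y), odot(x, d[y]))
               for x in A for y in A):
            count += 1
    return count

for n in range(2, 7):
    assert count_derivations(n) == (n - 1) * (n + 2) // 2
    print(n, count_derivations(n))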
By Theorem <ref>, we obtain
|( L_2 )|=(2-1)(2+2)/2=2 and
|( L_3 )|=(3-1)(3+2)/2=5. Thus
(L_2)={Id_L_2,
0_L_2}. Let A be an MV-algebra.
In what follows, we will show that |(A )|=2 iff A is isomorphic to L_2; and |(A )|=5 iff A is isomorphic to L_3. For this purpose, we first give a family of derivations on A.
Let A be an MV-algebra and d∈Der(A). Let u ∈ A be given with u ≤ d(1) and
define an operator d^u on A by
d^u(x) := u if x = 1, and d^u(x) := d(x) otherwise.
Then d^u is also in (A).
Let x,y ∈ A. By Remark <ref> (1),
we can assume that x≠ 1 or y≠ 1.
If x≠ 1 and y ≠ 1, then
d^u(x)=d(x), d^u(y)=d(y) and x ⊙ y ∈ A∖{1}, which implies that
d^u(x ⊙ y) = d(x ⊙ y) = (d(x) ⊙ y) ∨ (x ⊙ d(y)) = (d^u(x) ⊙ y) ∨ (x ⊙ d^u(y)).
If x=1 or y=1 (but not both), say
x ≠ 1 and y = 1, then since d^u(1) = u ≤ d(1), we have x⊙ d^u(1) ≤ x ⊙ d(1) ≤ d(x) by Proposition <ref> (4) and so
d^u(x⊙ y) = d^u(x) = d(x) = d(x) ∨ (x ⊙ d^u(1)) = (d^u(x) ⊙ y) ∨ (x ⊙ d^u(y)).
Thus we conclude that d^u is in (A).
Let A be an MV-algebra, and u∈ A. Define operators χ^(u) as follows:
χ^(u)(x) := u if x=1, and χ^(u)(x) := x otherwise.
Then χ^(u)∈(A).
Since Id_A∈(A) and u≤ 1=Id_A(1), we have
χ^(u)=(Id_A)^u∈(A) by Proposition <ref>.
Let A be an MV-algebra. Then the following statements hold:
(1) χ^(0)≠ d for any d∈(A) with d(1)≠ 0. In particular, χ^(0)≠χ^(u) and χ^(0)≠ d_u for any u∈ A\{0}.
(2) If |A|≥ 3, then χ^(u)≠ d_v for any u, v∈ A\{0, 1}.
(1) Since χ^(0)(1)=0, it follows that χ^(0)≠ d for any d∈(A) with d(1)≠ 0, which implies that
χ^(0)≠χ^(u) and χ^(0)≠ d_u for any u∈ A\{0}, since χ^(u)(1)=d_u(1)=u≠ 0.
(2) Assume that |A|≥ 3 and let u, v∈ A\{0, 1}.
Then u^*, v^*∈ A\{0, 1}.
If u≠ v, then χ^(u)≠ d_v, since
χ^(u)(1)=u ≠ v= d_v(1).
If u=v, then χ^(u)≠ d_u, since
χ^(u)(u^*)=u^*≠0=u⊙ u^*=d_u(u^*).
Let A be an MV-algebra. Then the following statements hold:
(1) If |A|≥ 3, then |(A)|≥ 5 .
(2) If |A|≥ 4, then |(A)|≥ 7.
(3) If |A|≥ 5, then |(A)|≥ 13.
(1) Assume that |A|≥ 3 and let u∈ A\{0, 1}.
Then we immediately have d_u, χ^(0), χ^(u)∈(A) by Corollary <ref>. Furthermore, it is easy to see that d_u≠Id_A, d_u≠0_A, χ^(0)≠Id_A, χ^(0)≠0_A, χ^(u)≠Id_A and χ^(u)≠0_A. Also, χ^(0)≠ d_u, χ^(0)≠χ^(u) and
χ^(u)≠ d_u by Lemma <ref>.
Consequently, we have that
Id_A, 0_A, d_u, χ^(0) and χ^(u) are mutually different (⊙,∨)-derivations on A.
(2) Assume that |A|≥ 4 and let u, v∈ A\{0, 1} with u≠ v. By Lemma <ref> (1), we have χ^(0)≠Id_A, χ^(0)≠χ^(u), χ^(0)≠χ^(v), χ^(0)≠ d_u and χ^(0)≠ d_v. Clearly, χ^(u)≠χ^(v) and d_u≠ d_v. In addition, d_p≠χ^(q) for any p,q∈{u,v} by Lemma <ref> (2). Thus we conclude that Id_A, 0_A, d_u, d_v, χ^(0), χ^(u), χ^(v) are mutually different (⊙,∨)-derivations on A.
(3) Assume that |A|≥ 5. Then there exist u, v∈ A\{0, 1} with u<v (i.e., u≤ v and u≠ v). In fact, if any two distinct x, y∈ A\{0, 1} were incomparable, then the distributive lattice (A, ≤) would contain a copy of M_5, contradicting <cit.>.
Let w∈ A\{0, u, v, 1}. By Lemma <ref> (1), we have χ^(0)≠Id_A, χ^(0)≠χ^(u), χ^(0)≠χ^(v), χ^(0)≠χ^(w), χ^(0)≠ d_u, χ^(0)≠ d_v and χ^(0)≠ d_w.
In addition, d_p≠χ^(q) for any p,q∈{u,v,w} by Lemma <ref> (2). Furthermore, (d_v)^0,(d_v)^u∈(A) by Proposition <ref>.
By Corollary <ref>, we can get that (d_v)^r≠χ^(s) for any r∈{0,u}, s∈{0,u,v,w}.
Also, (d_v)^0≠0_A and (d_v)^u≠ d_u. Indeed, if (d_v)^0=0_A, we have (d_v)^0(u^*)=v⊙ u^*=0. It follows by Lemma <ref> that
v≤ u, contradicting the fact that u<v. If (d_v)^u= d_u, then
v⊙ u^* = (d_v)^u(u^*)= d_u(u^*)= u⊙ u^*=0. Similarly, we get v≤ u, contradicting the fact that u<v.
Note that w must be comparable with u or v. Otherwise, if w were comparable with neither u nor v, then the distributive lattice (A, ≤) would contain a copy of N_5, contradicting <cit.>. There are two cases.
If u<w and v is not comparable with w, we have (d_w)^0,(d_w)^u∈(A) and, similarly, (d_w)^r≠χ^(s) for any r∈{0,u}, s∈{0,u,v,w}. Also, it can be proved in the same way as before that (d_w)^0≠0_A and (d_w)^u≠ d_u.
If w<v and u is not comparable with w, we have (d_w)^0,(d_v)^w∈(A) and they are different from the other (⊙,∨)-derivations on A.
Finally, it is easy to check that Id_A, 0_A, d_u, d_v, d_w, (d_v)^u, (d_v)^0, (d_w)^0, (d_w)^u (or (d_v)^w in the second case), χ^(0), χ^(u), χ^(v), χ^(w) are mutually different (⊙,∨)-derivations on A.
Let A be an nontrivial MV-algebra. Then the following statements hold:
(1) |(A)|=2 if and only if |A|=2.
(2) |(A)|=5 if and only if |A|=3.
(3) |(A)|=9 if and only if |A|=4.
(1) Assume that A is a 2-element MV-algebra. Then
A={0, 1} is a 2-element MV-chain, and so |(A)|=2 by Theorem <ref>.
Conversely, assume that |(A)|=2. If |A|≥ 3,
then |(A)|≥ 5 by Corollary <ref> (1), a contradiction. Since A is nontrivial, finally we get |A|=2.
(2) Assume that A is a 3-element MV-algebra. Then A is a 3-element MV-chain by Lemma <ref>, and so |(A)|=5 by Theorem <ref>.
Conversely, assume that |(A)|=5. If |A|≥ 4, then |(A)|≥ 7 by Corollary <ref> (2), a contradiction.
Thus |A|≤ 3. But A is nontrivial and |A|= 2 implies that |(A)|=2 by (1). Therefore, |A|=3, and consequently A is a 3-element MV-chain.
(3) Assume that A is a 4-element MV-algebra. Then A is isomorphic to the 4-element MV-chain L_4 or the
4-element Boolean algebra B_4 by Lemma <ref>. Recall Corollary <ref> that when the MV-algebra A is a Boolean algebra, d is an (⊙,∨)-derivation on A if and only if d is a derivation on the lattice (A, ≤).
It follows by Theorem <ref> and
<cit.> that |(A)|=9.
Conversely, assume that |(A)|=9. If |A|≥ 5, then |(A)|≥ 13 by Corollary <ref> (3), a contradiction. Thus |A|≤ 4. But A is nontrivial and
Items (1) and (2) imply that
|A|≠ 2 and |A|≠ 3. Therefore, |A|=4.
§.§ Isotone (⊙,∨)-derivations on MV-algebras
In this subsection, we consider the condition when an (⊙,∨)-derivation d is isotone and characterize the properties of the fixed point set of d.
Let A be an MV-algebra and d∈Der(A). d is called isotone if for all
x, y ∈ A, x ≤ y implies that d(x) ≤ d(y).
It is clear that Id_A and 0_A are isotone. Furthermore, we have:
Let A be an MV-algebra and a ∈ A. Then the principal (⊙,∨)-derivation d_a is isotone.
Let x, y ∈ A with x ≤ y. Then d_a(x)=a ⊙ x ≤ a ⊙ y=d_a(y) by Lemma <ref> (4), and thus d_a is isotone.
By <cit.>, we know that a derivation d on a bounded lattice L is isotone iff d is principal. However,
there are other isotone (⊙,∨)-derivations on an MV-algebra A besides principal
(⊙,∨)-derivations.
Let d=χ^(2/3)∈Der(L_4) (see Corollary <ref>), i.e, d(0)=0, d(1/3)=1/3, d(2/3)=2/3, d(1)=2/3. Then d
is isotone, while d is not principal, since
d(1) = 2/3=2/3⊙ 1 but d(1/3)=1/3≠ 0= 2/3⊙1/3.
Proposition <ref> says that
if d is an (⊙,∨)-derivation on an MV-algebra A with d(1)∈B(A), then
d is isotone iff d is principal.
Let A be an MV-algebra and d∈Der(A) with d(1)∈B(A). Then the following statements are equivalent:
(1) d is isotone;
(2) d(x) ≤ d(1) for any x ∈ A;
(3) d(x) = d(1) ⊙ x for any x ∈ A;
(4) d(x ∧ y) = d(x) ∧ d(y) for all x, y ∈ A;
(5) d(x ∨ y) = d(x) ∨ d(y) for all x, y ∈ A;
(1)⇒ (2) is clear since x≤ 1 holds for any x∈ A.
(2)⇒ (3). Assume that d(x) ≤ d(1) for any x ∈ A.
Since d(x) ≤ x by Proposition <ref> (4), it follows that
d(x) ≤ d(1)∧ x=d(1)⊙ x ≤ d(x)
by Lemma <ref> (6) and Proposition <ref> (5). Thus
d(x)=d(1)⊙ x.
(3)⇒ (4).
Assume that d(x) = d(1) ⊙ x for any x ∈ A. Then for all x, y∈ A, we have
d(x∧ y)=d(1)⊙(x∧ y)=(d(1)⊙ x)∧ (d(1)⊙ y)=d(x)∧ d(y)
by Lemma <ref> (6).
(4)⇒ (1). Assume that d(x ∧ y) = d(x) ∧ d(y) for all x, y ∈ A.
Let
x≤ y. Then d(x)=d(x∧ y)=d(x) ∧ d(y) ≤ d(y), and thus d is isotone.
(3)⇒ (5).
Assume that (3) holds. Then for all x, y∈ A, we have
d(x∨ y)=d(1)⊙(x∨ y)=(d(1)⊙ x)∨ (d(1)⊙ y)=d(x)∨ d(y)
by Lemma <ref> (7).
(5)⇒ (1).
Assume that (5) holds. Then for all x, y∈ A with x≤ y, we have d(x)≤ d(x)∨ d(y)=d(x∨ y)=d(y), and thus d is isotone.
Let A be an MV-algebra.
Denote the set of all isotone (⊙,∨)-derivations with d(1)∈B(A)
by (A), i.e,
(A)={ d∈(A) | d is isotone and d(1)∈B(A)}. Then there is a bijection between (A) and B(A).
Define a map f : (A) →B(A) by f(d) = d(1) for any d ∈(A). And define a map
g : B(A) →(A) by g(a) = d_a for any a ∈ A. Then by Proposition <ref>, we have fg = Id_B(A) and gf = Id_IDer(A). Hence f is a bijection.
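This bijection can also be confirmed mechanically on a small example; the sketch below enumerates, on L_2× L_3, all isotone (⊙,∨)-derivations d with d(1)∈B(A) and compares their number with |B(L_2× L_3)|=4. The componentwise encoding and the brute-force search are illustrative choices only.
[language=Python]
from fractions import Fraction
from itertools import product

# L_2 x L_3 encoded as pairs; all operations are taken componentwise
L2 = [Fraction(0), Fraction(1)]
L3 = [Fraction(0), Fraction(1, 2), Fraction(1)]
A = [(x, y) for x in L2 for y in L3]

def odot(p, q):
    return (max(Fraction(0), p[0] + q[0] - 1), max(Fraction(0), p[1] + q[1] - 1))

def join(p, q):
    return (max(p[0], q[0]), max(p[1], q[1]))

def leq(p, q):
    return p[0] <= q[0] and p[1] <= q[1]

def is_derivation(d):
    return all(d[odot(x, y)] == join(odot(d[x], y), odot(x, d[y]))
               for x in A for y in A)

def is_isotone(d):
    return all(leq(d[x], d[y]) for x in A for y in A if leq(x, y))

boolean_centre = [x for x in A if odot(x, x) == x]       # B(A)
one = (Fraction(1), Fraction(1))

# enumerate all derivations (using d(x) <= x) and keep the isotone ones
# with d(1) in B(A); they should be in bijection with B(A)
choices = [[y for y in A if leq(y, x)] for x in A]
ider = []
for values in product(*choices):
    d = dict(zip(A, values))
    if is_derivation(d) and is_isotone(d) and d[one] in boolean_centre:
        ider.append(d)
print(len(ider), len(boolean_centre))     # both should equal 4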
In general, d being an isotone (⊙,∨)-derivation on an MV-algebra A does not imply that d(x ⊕ y) = d(x) ⊕ d(y) for all x, y ∈ A. For example, in the MV-algebra L_3, χ^(1/2)∈(A) is isotone, while χ^(1/2)(1/2⊕1/2)=χ^(1/2)(1)=1/2≠ 1=1/2⊕1/2=χ^(1/2)(1/2)⊕χ^(1/2)(1/2). Hence, in the following proposition, the condition d(1)∈B(A) cannot be removed.
Let A be an MV-algebra, and d∈Der(A). Then the following statements are equivalent:
(1) d∈(A);
(2) d(x ⊕ y) = d(x) ⊕ d(y) for all x, y ∈ A;
(3) d(x ⊙ y) = d(x) ⊙ d(y) for all x, y ∈ A.
(1)⇒ (2). Assume d∈(A). By Lemma <ref>, supposing that A is a subdirect product of a family {A_i}_i∈ I of MV-chains, let h: A →∏_i∈ IA_i be a one-one homomorphism and for each j∈ I, the composite map π_j∘ h is a
homomorphism onto A_j. Let d(1)=a=(a_i)_i∈ I∈B(A). Then a_i∈B(A_i) and by Lemma <ref> (1) we have a_i=0 or a_i=1 for each i∈ I. Since d∈(A), it follows by Proposition <ref> that for any x=(x_i)_i∈ I∈ A, d(x)=x⊙ a=(x_i⊙ a_i)_i∈ I. Therefore, d(x⊕ y)=((x_i⊕ y_i)⊙_i a_i)_i∈ I =((x_i⊙_i a_i)⊕(y_i⊙_i a_i))_i∈ I=d(x)⊕ d(y).
(2)⇒ (1). Assume that d(x ⊕ y) = d(x) ⊕ d(y) for all x, y ∈ A, we immediately get d(1)=d(1)⊕ d(1). Hence d(1)∈B(A). To prove that d is isotone, let
x≤ y. Then by Lemma <ref> (4) there exists an element z∈ A such that y=x⊕ z. So d(y)=d(x⊕ z)=d(x) ⊕ d(z) ≥ d(x), and thus d is isotone.
(1)⇒ (3).
Assume that (1) holds, by Proposition <ref> we have d(x)=d(1)⊙ x. Since d(1)∈B(A),
d(x⊙ y)=d(1)⊙(x⊙ y)=(d(1)⊙ x)⊙ (d(1)⊙ y)=d(x)⊙ d(y) for all x, y∈ A.
(3)⇒ (1). Assume that (3) holds. Then for any x∈ A, we have
d(x)=d(x⊙ 1)=d(x)⊙ d(1)≤ d(1)
by Lemma <ref> (1). Setting x=y=1 in (3), we have d(1)=d(1)⊙ d(1). Hence d(1)∈B(A).
Thus by Proposition <ref> we get that d is isotone. Therefore, (1) holds.
Let A be an MV-algebra and d∈(A). Then
d is idempotent, that is, d^2=d.
Assume that d∈(A). By Proposition <ref> (3) and Proposition <ref>, we have d(d(x))=d(1⊙ d(x))=d(1)⊙ d(x)=d(1⊙ x)= d(x) for any x∈ A. Thus d^2=d.
Generally, the converse of Corollary <ref> does not hold. For example, let
d=χ^(0)∈ (L_3). Then d(1)=0∈B(A) and d is idempotent. But d is not isotone, since d(1/2)=1/2> 0= d(1).
Using the fixed point sets of isotone derivations, the characterizations of some different types of lattice have been described in <cit.>. Analogously, we next discuss the relation between ideals and fixed point sets of (⊙,∨)-derivations on MV-algebras.
Let A be an MV-algebra and d∈Der(A). Denote the set of all fixed point of d by _d(A), i.e.,
_d(A)={x∈ A | d(x)=x}.
By Proposition <ref> (10),
_d(A) is a downset.
Let A be an MV-algebra. If d∈(A), then _d(A) is a lattice ideal of A.
Assume that d∈(A), and let d=d_a, where a∈ A. Then d(x)=a⊙ x for any x∈ A.
To prove that _d(A) is closed under ∨, let x, y ∈_d(A). Then d(x) = x and d(y) = y. It follows by Lemma <ref> (7) that d(x ∨ y) =a⊙(x∨ y)=(a⊙ x)∨(a⊙ y)=d(x) ∨ d(y) = x ∨ y, and so x ∨ y ∈_d(A). Thus _d(A) is closed under ∨. This, together with the fact that _d(A) is a downset,
implies that _d(A) is a lattice ideal of A.
§ DIRECT PRODUCT OF (⊙,∨)-DERIVATIONS
In this section, we will discuss the relation between direct product of (⊙,∨)-derivations and (⊙,∨)-derivations on the direct product of MV-algebras.
<cit.>
Let Ω be an index set. The direct product ∏_i ∈Ω A_i of a family {A_i}_i ∈Ω of MV-algebras is the MV-algebra obtained by endowing the set-theoretical cartesian product of the family with the MV-operations defined pointwise. In other words, ∏_i ∈Ω A_i is the set of all functions f: Ω→⋃_i ∈Ω A_i such that f(i) ∈ A_i for all i ∈Ω, with the operations “ * ” and “ ⊕ ” defined by
(f^*)(i)=_def (f(i))^* and (f ⊕ g)(i)=_def f(i) ⊕ g(i) for all i∈Ω.
The zero element 0 of ∏_i ∈Ω A_i is the function i ∈Ω↦ 0_i∈ A_i, and the element 1 of ∏_i ∈Ω A_i is the function i ∈Ω↦ 1_i∈ A_i for all i∈Ω.
The binary operation “ ⊙ " and “ ⊖ " on ∏_i ∈Ω A_i can be induced by “ ⊕ " and “ ^* ".
Let g, h ∈∏_i ∈Ω A_i . By Lemma <ref> we know that g ⩽ h in ∏_i ∈Ω A_i if and only if g^*⊕ h=1 if and only if
(g(i))^*⊕ h(i)=1_i in A_i
if and only if
g(i) ⩽ h(i) for any i ∈Ω. As usual, we write (g(i))_i ∈Ω for g.
<cit.>
For each i ∈Ω, define the map π_i: ∏_i ∈Ω A_i→ A_i by π_i(g)=g(i) for any g ∈∏_i ∈Ω A_i, and define the map ρ_i: A_i→∏_i ∈Ω A_i by
(ρ_i(a))(j) = a if j=i, and (ρ_i(a))(j) = 0_j otherwise,
for any a ∈ A_i. π_i is called the i-th projection, and ρ_i is called the i-th embedding.
For each i ∈Ω, let d_i be an operator on A_i . Define an operator ∏_i ∈Ω d_i: ∏_i ∈Ω A_i→∏_i ∈Ω A_i by (∏_i ∈Ω d_i)(g)=(d_i(g(i)))_i ∈Ω for any g ∈∏_i ∈Ω A_i, and we call ∏_i ∈Ω d_i the direct product of the {d_i}_i ∈Ω.
When Ω={1,2, ⋯, n}, we denote the direct product of {A_i}_i ∈Ω and the direct product of {d_i}_i ∈Ω, respectively, by A_1× A_2×⋯× A_n and d_1× d_2×⋯× d_n.
Let Ω be an index set, {A_i}_i ∈Ω be a family of MV-algebras, and d be an operator on ∏_i ∈Ω A_i. Then the following statements hold:
(1) d ∈(∏_i ∈Ω A_i) implies that π_i d ρ_i∈(A_i) for each i ∈Ω;
(2) d ∈(∏_i ∈Ω A_i) and d is isotone implies that π_i d ρ_i∈(A_i) and is isotone for each i ∈Ω;
(3) d ∈(∏_i ∈Ω A_i) implies that π_i d ρ_i∈(A_i) for each i ∈Ω.
(1) Assume that d ∈(∏_i ∈Ω A_i). For each i ∈Ω, let x,y∈ A_i. Then we have
(π_i d ρ_i)(x ⊙ y) =π_i d(ρ_i(x ⊙ y))=π_i(d(ρ_i(x) ⊙ρ_i(y)))
= π_i((d(ρ_i(x)) ⊙ρ_i(y)) ∨(ρ_i(x) ⊙ d(ρ_i(y))))
=(π_i d ρ_i(x) ⊙π_iρ_i(y)) ∨(π_iρ_i(x) ⊙π_i d ρ_i(y))
=(π_i d ρ_i(x) ⊙ y) ∨(x ⊙π_i d ρ_i(y))
and so π_i d ρ_i∈(A_i).
(2) Assume that d ∈(∏_i ∈Ω A_i) and d is isotone. For each i ∈Ω,
we know by (1) that π_i d ρ_i∈(A_i).
Also, since π_i and ρ_i are isotone, it follows that π_i d ρ_i is isotone. Thus (2) holds.
(3) Assume that d ∈(∏_i ∈Ω A_i), i.e, d=d_a for some a=(a_i)_i ∈Ω∈∏_i ∈Ω A_i. For each i ∈Ω, let x ∈ A_i. Then we have
(π_i d ρ_i)(x)=π_i d(ρ_i(x))=π_i(ρ_i(x) ⊙ a)=π_i(ρ_i(x)) ⊙π_i(a)=x ⊙ a_i,
and thus π_i d ρ_i∈(A_i).
Combining the structures of an MV-algebra and an (⊙,∨)-derivation in the language of universal algebra <cit.>, we give
A differential MV-algebra is an algebra (A, ⊕, *, d, 0) of type (2, 1, 1, 0) such that
(1) (A, ⊕, *, 0) is an MV-algebra, and
(2) d is an (⊙,∨)-derivation on A.
Let Ω be an index set, {A_i}_i ∈Ω be a family of MV-algebras, and
d_i∈(A_i). Then (A_i, ⊕_i, *_i, d_i, 0_i) is a differential MV-algebra. From the viewpoint of universal algebra <cit.>, we know that the class of all differential MV-algebras forms a variety. Thus the direct product (∏_i ∈Ω A_i, ⊕, *, ∏_i ∈Ω d_i, 0) is also a differential MV-algebra, and so ∏_i ∈Ω d_i∈(∏_i ∈Ω A_i). Hence we obtain that
∏_i ∈Ω(A_i) ⊆(∏_i ∈Ω A_i)
But
∏_i ∈Ω(A_i) ≠(∏_i ∈Ω A_i) whenever |Ω|≥ 2, see Remark <ref>.
(1)Let L_2={0,1} be the 2-element MV-chain. Then (L_2)={Id_L_2,0_L_2} by Theorem <ref>,
so (L_2)×(L_2)={d_1=Id_L_2×Id_L_2,d_2=Id_L_2×0_L_2,d_3=0_L_2×Id_L_2,d_4=0_L_2×0_L_2}⊆(L_2× L_2).
Notice that in <cit.>, L_2× L_2 is denoted by M_4, and d_1,d_2,d_3,d_4 are denoted by Id_M_4,y_2,y_4,0_M_4, respectively. By <cit.>, |(L_2× L_2)|=9, so (L_2)×(L_2)≠(L_2× L_2).
(2)
Let L_3={0, 1/2, 1 } be the 3-element MV-chain with 0< 1/2< 1. By Theorem <ref> we have (L_3)={Id_L_3,0_L_3,d_1/2, χ^(0), χ^(1/2)}.
Thus
|(L_2)×(L_3)|=|(L_2)|×
|(L_3)|=10.
Let 0=(0,0), a=(0,1/2), b=(0,1), c=(1,0), d=(1,1/2) and 1=(1,1). Then the Hasse diagram of L_2× L_3 is given
below (see Figure 1). We give all elements of (L_2× L_3) in Table <ref> by Python (Full details are given in Appendix I listing <ref>). It can be verified that there are 23 elements (from d_11 to d_33) in (L_2× L_3) but not in (L_2)×(L_3).
[Figure 1: Hasse diagram of L_2× L_3 with elements 0, a, b, c, d, 1.]
Let Ω be an index set, {A_i}_i ∈Ω be a family of MV-algebras, and d_i be an operator on A_i for each i ∈Ω. Let A=∏_i ∈Ω A_i. Then the following statements hold:
(1) π_i(∏_i ∈Ω d_i) ρ_i=d_i, and π_i(∏_i ∈Ω d_i)=d_iπ_i for each i ∈Ω.
(2) ∏_i ∈Ω d_i∈(A) if and only if d_i∈(A_i) for each i ∈Ω.
(3) ∏_i ∈Ω d_i∈(A) and ∏_i ∈Ω d_i is isotone if and only if d_i∈(A_i) and d_i is isotone for each i ∈Ω.
(4) ∏_i ∈Ω d_i∈(A) if and only if d_i∈(A_i) for each i ∈Ω.
(5) For any i ∈Ω, if d_i(0_i)=0_i, then (∏_i ∈Ω d_i) ρ_i=ρ_i d_i, that is, the corresponding diagram is commutative (put d=∏_i ∈Ω d_i).
[Commutative square: horizontal arrows d_i: A_i→ A_i and d: ∏_i ∈Ω A_i→∏_i ∈Ω A_i, vertical arrows ρ_i on both sides.]
(1) Let i ∈Ω and a ∈ A_i. It is easy to see that (π_i(∏_i ∈Ω d_i) ρ_i)(a)=d_i(a), and so π_i(∏_i ∈Ω d_i) ρ_i=d_i . Also, for any z ∈ A, we have (π_i(∏_i ∈Ω d_i))(z)=d_iπ_i(z), since z=(π_i(z))_i ∈Ω . Thus π_i(∏_i ∈Ω d_i)=d_iπ_i.
(2) Assume that d_i∈(A_i) for each i ∈Ω. Then ∏_i ∈Ω d_i∈(A) by Eq. (<ref>).
Conversely, if ∏_i ∈Ω d_i∈(A), then d_i=π_i(∏_i ∈Ω d_i) ρ_i∈(A_i) by (1) and Lemma <ref> (1).
(3) Assume that d_i∈(A_i) for each i ∈Ω and d_i is isotone for each i∈Ω. Then ∏_i ∈Ω d_i∈(A) by (2). And it can be verified that ∏_i ∈Ω d_i is isotone. In fact, let x,y∈ A and x≤ y, that is, x_i≤ y_i for each i∈Ω, we have (∏_i ∈Ω d_i)(x)=∏_i ∈Ω d_i(x_i)≤∏_i ∈Ω d_i(y_i)=(∏_i ∈Ω d_i)(y).
Conversely, if ∏_i ∈Ω d_i∈(A) and ∏_i ∈Ω d_i is isotone, then d_i=π_i(∏_i ∈Ω d_i) ρ_i∈(A_i) and d_i is isotone by (1) and Lemma <ref> (2).
(4) Assume that d_i∈(A_i) for each i ∈Ω. Then d_i(x_i)=x_i⊙ a_i, where a_i∈ A_i. Let (x_i)_i ∈Ω∈ A, and so (∏_i ∈Ω d_i)((x_i)_i ∈Ω)=(d_i(x_i))_i ∈Ω=(x_i⊙ a_i)_i ∈Ω=(x_i)_i ∈Ω⊙(a_i)_i ∈Ω . Thus ∏_i ∈Ω d_i∈(A).
Conversely, if ∏_i ∈Ω d_i∈(A), then d_i=π_i(∏_i ∈Ω d_i) ρ_i∈(A_i) by (1) and Lemma <ref> (3).
(5) Assume that d_i(0_i)=0_i for any i ∈Ω, and ∏_i ∈Ω d_i=d. To prove that d ρ_i=ρ_i d_i, let x ∈ A_i. We have d ρ_i(x)=ρ_i d_i(x), since
π_j(d ρ_i(x)) = d_i(x) if j=i, and = d_j(0_j) = 0_j otherwise (by assumption), while π_j(ρ_i d_i(x)) = d_i(x) if j=i, and = 0_j otherwise.
Thus d ρ_i=ρ_i d_i.
Let Ω be an index set, {A_i}_i ∈Ω be a family of MV-algebras, and d be an operator on ∏_i ∈Ω A_i. Put A=∏_i ∈Ω A_i. Then the following statements hold:
(1) If d ∈(A), then d ∈∏_i ∈Ω(A_i) if and only if d=∏_i ∈Ωπ_i d ρ_i.
(2) If d ∈(A), then d ∈∏_i ∈Ω(A_i) if and only if d=∏_i ∈Ωπ_i d ρ_i.
(1) Assume that d ∈(A). Then π_i d ρ_i∈(A_i) for each i ∈Ω by Lemma <ref> (1), which implies that d ∈∏_i ∈Ω(A_i) if d=∏_i ∈Ωπ_i d ρ_i.
Conversely, if d ∈∏_i ∈Ω(A_i), then d=∏_i ∈Ω d_i for some d_i∈(A_i). It follows by Theorem <ref> (1) that π_i d ρ_i=d_i, and so d=∏_i ∈Ωπ_i d ρ_i.
(2) Assume that d ∈(A). Then π_i d ρ_i∈(A_i) for each i ∈Ω by Lemma <ref> (3), which implies that d ∈∏_i ∈Ω(A_i) if d=∏_i ∈Ωπ_i d ρ_i.
Conversely, if d ∈∏_i ∈Ω(A_i), then d=∏_i ∈Ω d_i for some d_i∈(A_i). It follows by Theorem <ref> (1) that π_i d ρ_i=d_i, and so d=∏_i ∈Ωπ_i d ρ_i.
Let Ω be an index set with |Ω|≥ 2, {A_i}_i ∈Ω be a family of MV-algebras. Then
∏_i ∈Ω(A_i) ≠(∏_i ∈Ω A_i), since for any a∈∏_i ∈Ω A_i\{1}, we have
χ^(a)∈(∏_i ∈Ω A_i) by Corollary <ref>, but
χ^(a)∉∏_i ∈Ω(A_i). In fact, for each i∈Ω, we have π_iχ^(a)ρ_i∈(A_i) by Lemma <ref> and
π_iχ^(a)ρ_i(1_i)=π_i(χ^(a)(ρ_i(1_i)))=π_i(ρ_i(1_i))=1_i.
It follows that π_iχ^(a)ρ_i=Id_A_i by Proposition <ref>, so χ^(a)≠Id_∏_i ∈Ω A_i=∏_i ∈Ωπ_iχ^(a)ρ_i. Thus χ^(a)∉∏_i ∈Ω(A_i)
by Corollary <ref> (1), and hence ∏_i ∈Ω(A_i) ≠(∏_i ∈Ω A_i).
Let Ω be an index set, {A_i}_i ∈Ω be a family of MV-algebras. Then (∏_i∈Ω A_i)=∏_i∈Ω(A_i).
Firstly, we have ∏_i∈Ω(A_i)⊆(∏_i∈Ω A_i) by Theorem <ref> (4).
To prove that (∏_i∈Ω A_i)⊆∏_i∈Ω(A_i), let d∈(∏_i∈Ω A_i).
Then for any x=(x_i)_i∈Ω∈∏_i∈Ω A_i, by Proposition <ref>, d(x)=x⊙ a for some a=(a_i)_i∈Ω∈∏_i∈Ω A_i, so (∏_i ∈Ωπ_i d ρ_i)(x)= (π_i d ρ_i(x_i))_i ∈Ω=(π_i (ρ_i(x_i)⊙ a))_i ∈Ω=(π_iρ_i(x_i)⊙π_i(a))_i ∈Ω =(x_i⊙π_i(a))_i ∈Ω=(x_i)_i ∈Ω⊙ (a_i)_i ∈Ω=x⊙ a=d(x). It follows that d=∏_i ∈Ωπ_i d ρ_i, and so d∈(∏_i∈Ω A_i) by Corollary <ref> (2).
§ LATTICE STRUCTURE OF (⊙,∨)-DERIVATIONS ON MV-ALGEBRAS
Let (A, ⊕, *, 0) be an MV-algebra and let O(A) be the set of all operators on A. Define a relation ≼ on O(A) by:
(∀ d, d^'∈O(A)) d ≼ d^' if d(x) ≤ d^'(x) for any x ∈ A.
It is easy to verify that ≼ is a partial order on O(A) and
0_A≼ d ≼1_A for any d ∈O(A), where
1_A is defined by 1_A(x):=1 for any x ∈ A.
For any d ∈Der(A), we have 0_A≼ d ≼Id_A since 0 ≤ d(x) ≤ x for any x ∈ A.
We also define the following binary operations on O(A). For d, d^'∈O(A), set
(d ∨ d^')(x):=d(x) ∨ d^'(x), (d ∧ d^')(x):=d(x) ∧ d^'(x)
for any x ∈ A.
Let A be an MV-algebra. Then (O(A), ≼, 0_A, 1_A) is a bounded lattice for which d∨ d' and d∧ d' are, respectively, the least upper bound and the greatest lower bound of d and d'.
Recall that every MV-algebra induces a natural bounded lattice structure. Since the class of all lattices is a variety and O(A) is the direct product of |A| copies of A, the lemma follows immediately from the usual notions of universal algebra <cit.>.
We next explore the partial order structure of the set of (⊙,∨)-derivations on MV-algebras.
Let A be an MV-algebra. Then d∨ d'∈(A) for all d, d'∈(A).
Let d, d'∈(A) and x, y∈ A. Then we have
(d∨ d')(x⊙ y) = d(x⊙ y)∨ d'(x⊙ y)
= ((d(x)⊙ y)∨ (x⊙ d(y)))∨ ((d'(x)⊙ y)∨ (x⊙ d'(y)))
= ((d(x)⊙ y)∨ (d'(x)⊙ y))∨ ((x⊙ d(y))∨ (x⊙ d'(y)))
= ((d(x)∨ d'(x))⊙ y)∨ (x⊙(d(y)∨ d'(y)))
= ((d∨ d')(x)⊙ y)∨ (x⊙(d∨ d')(y))
by Lemma <ref> (7), and so d∨ d'∈(A).
For d, d'∈(A),
note that the operator d∧ d' is not necessarily in (A) even if A is a Boolean algebra; see
<cit.>.
Let A be an MV-algebra.
(1) If d∧ d'∈(A) for all d,d' ∈(A), then ((A), ∨, ∧,0_A,Id_A) is a lattice.
(2) If A is a finite MV-algebra, then ((A),≼, 0_A,Id_A) is a lattice.
(1) For d, d'∈(A) and x, y∈ A, we have known d∨ d'∈(A). Assume that d∧ d'∈(A) for all d, d'∈(A).
Then ((A), ≼) is a sublattice of the lattice (O(A), ≼) by Lemma <ref>. Thus we complete the proof.
(2) Assume that A is a finite MV-algebra, by Lemma <ref> we have d∨ d'∈(A) for all d,d'∈(A).
Since (A) is finite as a subset of the finite set O(A), it follows that ⋁ B=⋁_b∈ Bb exists for every subset B of (A). Noting that ⋁∅=0_A, we conclude that ((A),≼, 0_A,Id_A) is a lattice by <cit.>.
In what follows, we will describe the lattice ( L_n) ( n≥2).
Let (L, ≤) be a chain with the bottom element 0, and let
𝒜(L)={(x,y)∈ L× L | y≤ x}\{(0,0)}.
Then
(𝒜(L), ≺ ) is a sublattice of the lattice (L× L, ≺), where ≺ is defined by:
for any (x_1, y_1), (x_2,y_2) ∈ L× L,
(x_1, y_1)≺(x_2,y_2) if and only if x_1≤ x_2 and y_1≤ y_2.
It is well known that (L× L, ≺) is a lattice and
for any (x_1, y_1), (x_2,y_2) ∈ L× L,
(x_1∨ x_2, y_1∨ y_2)= (x_1, y_1)∨ (x_2, y_2), (x_1∧ x_2, y_1∧ y_2)= (x_1, y_1)∧ (x_2, y_2).
To prove that (𝒜(L), ≺ ) is a sublattice of the lattice (L× L, ≺), let
(a, b), (c, d)∈𝒜(L). Then
b≤ a, d≤ c and (a, b)≠ (0, 0), (c, d)≠ (0, 0). It follows that b∨ d ≤ a∨ c,
b∧ d ≤ a∧ c,
a≠ 0 and c≠ 0, so a∨ c≠ 0 and a∧ c≠ 0
since L is a chain.
Thus
(a∨ c, b∨ d)≠ (0, 0), and
(a∧ c, b∧ d)≠ (0, 0). So
(a, b)∨ (c, d)∈𝒜(L)
and (a, b)∧ (c, d)∈𝒜(L). Consequently, we get that (𝒜(L), ≺ ) is a sublattice of the lattice (L× L, ≺).
Let n≥2 be a positive integer, L_n be the n-element MV-chain, and let 𝒜(L_n)={(x,y)∈ L_n× L_n |y≤ x}\{(0,0)}. Then
the following statements hold:
(1) (d_x)^y≠ (d_z)^w for any (x, y), (z, w)∈𝒜(L_n) with
(x, y)≠ (z, w), where (d_x)^y is defined by
(d_x)^y(t) := y if t = 1, and (d_x)^y(t) := d_x(t)=x⊙ t otherwise (see Proposition <ref>).
(2) (L_n)= {(d_x)^y | (x, y)∈𝒜(L_n) }.
(3)
(d_x)^y∧( d_z)^w=(d_x∧ z)^y∧ w and (d_x)^y∨( d_z)^w=(d_x∨ z)^y∨ w
for any (x,y),(z,w)∈𝒜(L_n).
(4) (L_n) is a sublattice of (O(L_n), ≺).
(1) Let (x, y), (z, w)∈𝒜(L_n) with
(x, y)≠ (z, w). Then y≤ x, x≠ 0, and w≤ z, z≠ 0. So
x^*≠ 1 and z^*≠ 1.
If y≠ w, then (d_x)^y(1)=y≠ w= (d_z)^w(1), and so (d_x)^y≠ (d_z)^w.
If
x≠ z and y=w, then we also have
(d_x)^y≠ (d_z)^w. Indeed, suppose on the contrary that (d_x)^y= (d_z)^w.
Since x^*≠ 1 and z^*≠ 1, we have
z⊙ x^*=(d_z)^w(x^*)=(d_x)^y(x^*)=x ⊙ x^*=0
and
x ⊙ z^*=(d_x)^y(z^*)=(d_z)^w(z^*)=z⊙ z^*=0, which implies that z≤ x and x≤ z by Lemma <ref>, and so x=z, a contradiction.
(2) Denote the set {(d_x)^y | (x, y)∈𝒜(L_n) } by ℬ. For any (x, y)∈𝒜(L_n), we have y≤ x= d_x(1) and so (d_x)^y∈(L_n) by Proposition <ref>. Thus ℬ⊆(L_n).
Also, by Item (1) we obtain that |ℬ|=|𝒜(L_n)|=n(n+1)/2-1=(n+2)(n-1)/2, so |ℬ|=|(L_n)|
by Theorem <ref>. Hence ℬ=(L_n).
(3) Let (x,y),(z,w)∈𝒜(L_n). Then ((d_x)^y∧( d_z)^w)(1)=(d_x)^y(1)∧( d_z)^w(1)=y∧ w=(d_x∧ z)^y∧ w(1) and ((d_x)^y∨( d_z)^w)(1)=(d_x)^y(1)∨( d_z)^w(1)=y∨ w=(d_x∨ z)^y∨ w(1).
Also, for c∈ L_n\{1}, we have
((d_x)^y∧( d_z)^w)(c)=(d_x)^y(c)∧( d_z)^w(c)=(x⊙ c)∧(z⊙ c)=(x∧ z)⊙ c=(d_x∧ z)^y∧ w(c)
by Lemma <ref> (6), and
((d_x)^y∨( d_z)^w)(c)=(d_x)^y(c)∨( d_z)^w(c)=(x⊙ c)∨(z⊙ c)=(x∨ z)⊙ c=(d_x∨ z)^y∨ w(c)
by Lemma <ref> (7).
It follows that (d_x)^y∧( d_z)^w=(d_x∧ z)^y∧ w and (d_x)^y∨( d_z)^w=(d_x∨ z)^y∨ w.
(4) It follows immediately by Items (2), (3) and Lemma <ref> that (L_n) is closed under ∨ and ∧, so (L_n) is a sublattice of (O(L_n), ≼).
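The classification in Items (1) and (2) can likewise be checked mechanically. The sketch below rebuilds the family {(d_x)^y | (x,y)∈𝒜(L_n)} for a small n (here n=5, an arbitrary illustrative choice) and confirms that it coincides with the set of all (⊙,∨)-derivations found by brute force.
[language=Python]
from fractions import Fraction
from itertools import product

def odot(u, v):
    return max(Fraction(0), u + v - 1)

def all_derivations(A):
    # brute-force search, restricted by d(x) <= x as proved earlier
    choices = [[y for y in A if y <= x] for x in A]
    found = set()
    for values in product(*choices):
        d = dict(zip(A, values))
        if all(d[odot(x, y)] == max(odot(d[x], y), odot(x, d[y]))
               for x in A for y in A):
            found.add(tuple(d[x] for x in A))
    return found

n = 5                                     # illustrative choice
A = [Fraction(k, n - 1) for k in range(n)]
one, zero = Fraction(1), Fraction(0)

# the operators (d_x)^y with (x, y) in A(L_n), i.e. y <= x and (x, y) != (0, 0)
family = set()
for x, y in product(A, A):
    if y <= x and (x, y) != (zero, zero):
        d = {t: (y if t == one else odot(x, t)) for t in A}
        family.add(tuple(d[t] for t in A))

assert family == all_derivations(A)
print(len(family), "= (n+2)(n-1)/2 =", (n + 2) * (n - 1) // 2)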
Let n≥2 be a positive integer,
L_n be the n-element MV-chain, and let
𝒜(L_n)={(x,y)∈ L_n× L_n |y≤ x}\{(0,0)}. Then the lattice (L_n) is isomorphic to the lattice 𝒜(L_n)(see the following diagram).
Let ℬ={(d_x)^y|(x,y)∈𝒜(L_n)}. Then ℬ=(L_n) by Lemma <ref> (2).
Define a map f𝒜(L_n)→ℬ
by (x, y)↦ (d_x)^y for any (x, y)∈𝒜(L_n). Then f is injective by Lemma <ref> (1). Also, it is clear that f is surjective by the definition of ℬ.
To prove that f is a homomorphism, let (x,y),(z,w)∈𝒜(L_n). Then, by Lemma <ref>, we have f((x, y)∨ (z, w))=
f((x∨ z, y∨ w))=(d_x∨ z)^y∨ w=(d_x)^y∨( d_z)^w=f((x, y))∨ f((z, w)) and
f((x, y)∧ (z, w))=
f((x∧ z, y∧ w))=(d_x∧ z)^y∧ w=(d_x)^y∧( d_z)^w=f((x, y))∧ f((z, w)). Thus f is a lattice isomorphism.
(1) We draw Hasse diagrams of (L_n)(2≤ n≤ 5) in the following:
[Hasse diagrams of (L_n) for n = 2, 3, 4, 5.]
(2) The Hasse diagram of (L_2× L_2) is given in <cit.>, where d_1-d_4 are as in Example <ref> (1) and the others are the same as DO(M_4) in <cit.>. We can obtain the Hasse diagram of (L_2× L_3) from Table <ref> in Example <ref> (2) using Python; for details, see the Appendix II listing <ref>.
Recall that an MV-algebra A is complete if its underlying lattice 𝐋(A) is complete <cit.>, that is, for every subset B of 𝐋(A), both ⋁ B and ⋀ B exist in 𝐋(A).
Let {x_i}_i∈Ω be a family elements of A and x∈ A. If ⋁_i ∈Ωx_i exists, then
the equality <cit.> holds:
x∨⋁_i ∈Ωx_i=⋁_i ∈Ω(x∨ x_i).
Let {d_i}_i∈Ω be a family of operators on a complete MV-algebra A. Define operators ⋁_i∈Ωd_i and ⋀_i∈Ωd_i on A, respectively, by
(⋁_i∈Ωd_i)(x):=⋁_i∈Ωd_i(x),
(⋀_i∈Ωd_i)(x):=⋀_i∈Ωd_i(x)
for any x∈ A.
<cit.>
Let A be a complete MV-algebra, x∈ A and let {x_i}_i∈Ω be a family elements of A. Then
x ⊙⋁_i ∈Ω x_i=⋁_i ∈Ω(x ⊙ x_i).
Let A be a complete MV-algebra and {d_i}_i∈Ω be a family elements of (A). Then the following statements hold:
(1) ⋁_i∈Ωd_i∈(A).
(2) ((A), ≼, 0_A, Id_A)
is a complete lattice.
(1) For any x, y∈ A,
we have
(⋁_i∈Ωd_i)(x⊙ y) = ⋁_i∈Ωd_i(x⊙ y)
= ⋁_i∈Ω ((d_i(x)⊙ y)∨ (x⊙ d_i(y)))   (by Eq. (1))
= (⋁_i∈Ω (d_i(x)⊙ y))∨ (⋁_i∈Ω(x⊙ d_i(y)))   (by Eq. (5))
= ((⋁_i∈Ω d_i(x))⊙ y)∨ (x⊙⋁_i∈Ωd_i(y))   (by Eq. (6))
= ((⋁_i∈Ω d_i)(x)⊙ y)∨ (x⊙ (⋁_i∈Ωd_i)(y)),
and so ⋁_i∈Ωd_i∈(A).
(2)
We shall prove that ⋁_i∈Ωd_i is the least upper bound of {d_i}_i∈Ω
in the poset ((A), ≼).
Indeed, firstly, we have
⋁_i∈Ωd_i∈(A) by Item (1). Secondly, for each i∈Ω, we have
d_i(x)≤⋁_i∈Ωd_i(x)= (⋁_i∈Ωd_i)(x) for any x∈ A and so
d_i≼⋁_i∈Ωd_i. Thus ⋁_i∈Ωd_i is an upper bound of {d_i}_i∈Ω.
Finally, let
d'∈(A) such that d_i≼ d' for each i∈Ω. Then d_i(x)≤ d'(x) for any x∈ A, which implies that
(⋁_i∈Ωd_i)(x) =⋁_i∈Ωd_i(x)≤ d'(x) and so ⋁_i∈Ωd_i≼ d'.
Therefore, we obtain that ⋁_i∈Ωd_i is
the least upper bound of {d_i}_i∈Ω
in the poset ((A), ≼). Note that ⋁∅=0_A and hence
((A), ≼, 0_A, Id_A)
is a complete lattice by <cit.>.
Next we will consider several lattice structure of derivations which are isomorphic to the underlying lattice 𝐋(A) of an MV-algebra A.
Let A be an MV-algebra. Then the following statements hold:
(1) d_u∨ d_v=d_u∨ v and d_u∧ d_v=d_u∧ v for any u, v∈ A.
(2) ((A), ∨, ∧, 0_A, Id_A) is a sublattice of (O(A),≼).
(3) d∨ d', d∧ d' ∈(A) for any d, d'∈(A).
(4) ((A), ∨, ∧, 0_A, Id_A) is a sublattice of (O(A),≼).
(1) Let u, v∈ A. Then,
for any x∈ A,
by Lemma <ref> (6)(7) we have
(d_u∨ d_v)(x)=d_u(x)∨ d_v(x)=(u⊙ x)∨(v⊙ x)=(u∨ v)⊙ x =d_u∨ v(x),
(d_u∧ d_v)(x)=d_u(x)∧ d_v(x)=(u⊙ x) ∧ (v⊙ x)=(u∧ v)⊙ x=d_u∧ v(x).
Thus d_u∨ d_v=d_u∨ v and d_u∧ d_v=d_u∧ v.
(2) It follows immediately from Item (1) that
(A) is closed under ∨ and ∧. So ((A), ∨, ∧, 0_A, Id_A) is a sublattice of (O(A),≼), since 0_A, Id_A∈(A).
(3) Let d, d'∈(A). Then d(1),d'(1)∈B(A) and d, d'∈(A) by Proposition <ref>. Recall that B(A) is a subalgebra of A; since d(1),d'(1)∈B(A), it follows that (d∨ d')(1)=d(1)∨ d'(1)= d(1)⊕ d'(1)∈B(A) by Lemma <ref> (5). Similarly, (d∧ d')(1)∈B(A). Moreover, we have d∨ d', d∧ d'∈(A) by Item (1). Thus d∨ d', d∧ d'∈(A).
(4)
It follows immediately from Item (3) that
(A) is closed under ∨ and ∧. So ((A), ∨, ∧, 0_A, Id_A) is a sublattice of (O(A),≼), since 0_A, Id_A∈(A).
Let A be an MV-algebra. Then
(1) ((A), ∨, ∧, 0_A, Id_A) is a lattice isomorphic to 𝐋(A); and
(2) ((A), ∨, ∧, 0_A, Id_A) is a lattice isomorphic to B(A).
(1) It follows by Lemma <ref> (2) that
((A), ∨, ∧, 0_A, Id_A) is a lattice.
Define a map g: (A)→𝐋(A) by g(d_u)=u
for any d_u∈(A). Then g is a bijection. In fact, if
g(d_u)=g(d_v), then
u=v, and so d_u=d_v. Thus g is injective. Also, for each u∈ A, there exists d_u∈(A) such that g(d_u)=u, so g is surjective. By Lemma <ref> (1), we have
g(d_u∨ d_v)=g(d_u∨ v)=u∨ v=g(d_u)∨ g(d_v) and
g(d_u∧ d_v)=g(d_u∧ v)=u∧ v=g(d_u)∧ g(d_v).
Thus g is a lattice isomorphism.
(2)
It follows by Lemma <ref> (4) that
((A), ∨, ∧, 0_A, Id_A) is a lattice.
Define a map f: (A)→B(A) by f(d)=d(1) for any d∈(A). By Corollary <ref>, f is a bijection. Also, it is clear that f(0_A)=0_A(1)=0 and f(Id_A)=Id_A(1)=1.
By Lemma <ref> (1), we have
f(d_u∨ d_v)=f(d_u∨ v)=u∨ v=f(d_u)∨ f(d_v) and
f(d_u∧ d_v)=f(d_u∧ v)=u∧ v=f(d_u)∧ f(d_v).
Thus f is a lattice isomorphism.
Let χ^(A)={χ^(u) | u∈ A}, where χ^(u) is defined in Corollary <ref>. We will show that
(χ^(A), ≼) is also a lattice isomorphic to 𝐋(A).
Let A be an MV-algebra and u, v∈ A. Then the following statements hold:
(1) χ^(u)∨χ^(v)=χ^(u∨ v) and
χ^(u)∧χ^(v)=χ^(u∧ v).
(2) χ^(u)=χ^(v) if and only if u=v.
(1) For any x∈ A, we have
(χ^(u)∨χ^(v))(x)=χ^(u)(x)∨χ^(v)(x), which equals u∨ v if x=1 and x otherwise, so (χ^(u)∨χ^(v))(x)=χ^(u∨ v)(x);
and (χ^(u)∧χ^(v))(x)=χ^(u)(x)∧χ^(v)(x), which equals u∧ v if x=1 and x otherwise, so (χ^(u)∧χ^(v))(x)=χ^(u∧ v)(x).
Thus χ^(u)∨χ^(v)=χ^(u∨ v) and
χ^(u)∧χ^(v)=χ^(u∧ v).
(2) It is clear that u=v implies χ^(u)=χ^(v). Conversely, if
χ^(u)=χ^(v), then u=χ^(u)(1)=χ^(v)(1)=v.
If A is an MV-algebra, then (χ^(A), ≼) is a sublattice of (O(A), ≼) and (χ^(A), ≼) is isomorphic to 𝐋(A).
Let u, v∈ A. Then
χ^(u)∨χ^(v)=χ^(u∨ v)∈χ^(A) and
χ^(u)∧χ^(v)=χ^(u∧ v)∈χ^(A) by Lemma <ref>. Thus (χ^(A), ≼) is a sublattice of (O(A), ≼) by Lemma <ref>.
Define a map f: 𝐋(A)→χ^(A) by f(u)=χ^(u)
for any u∈𝐋(A). By Lemma <ref>, f is an injective homomorphism. Also, it is clear that f is surjective by the definition of χ^(A). Hence
f is a lattice isomorphism.
Recall that a filter <cit.> of a lattice L is a non-empty subset F of L such that:
(i) a, b∈ F implies a∧ b∈ F and
(ii) a∈ F, c∈ L and a≤ c imply c∈ F.
Let A be an MV-algebra.
If ((A), ∨, ∧, 0_A, Id_A) is a lattice, then χ^(A) is a filter of the lattice (A).
Assume that ((A), ∨, ∧, 0_A, Id_A) is a lattice. It is clear that χ^(A) is a non-empty subset of (A) since χ^(0)∈χ^(A). Also, by Lemma <ref>, χ^(A) is closed under ∧.
Finally, assume that d∈(A) such that χ^(u)≼ d for some u∈ A. Then A\{1}⊆_d(A). In fact, for any
x∈ A\{1}, we have x=χ^(u)(x)≤ d(x) and so d(x)=x, since d(x)≤ x by Proposition <ref> (4).
It follows that x∈_d(A) and hence A\{1}⊆_d(A). Consequently, we have
d∈χ^(A). Therefore, χ^(A) is a filter of the lattice (A).
§ DISCUSSIONS
In this paper, we give a detailed algebraic study of (⊙,∨)-derivations on MV-algebras. There are many different types of derivations on MV-algebras, which may lead to further research and applications.
We list some questions at the end of this paper.
1. Proposition <ref> describes the relation between the cardinality |A| of an MV-algebra and the cardinality |(A)| of its set of derivations for small orders. Can this relation be determined for larger cardinalities |(A)|?
2. For any finite MV-algebra A, we have shown in Proposition <ref> (2) that ((A),≼, 0_A,Id_A) is a lattice. Can we characterize its Hasse diagram?
3. In Lemma <ref>, it has been shown that for any MV-chain L_n (n≥ 2), ((L_n),≼) is a lattice. Naturally, we ask: for any MV-algebra A, is the poset ((A),≼,0_A,Id_A) a lattice?
4. For any two MV-algebras A and A', if ((A),≼,0_A,Id_A) and ((A'),≼,0_A',Id_A') are isomorphic lattices, are A and A' isomorphic?
§ DECLARATION
This article does not use any particular data, or human participant. Indeed, the results obtained have been established from the articles cited in the references. However, we remain ready to transmit any information useful for a good understanding of our article.
(1) Ethical approval: We declare that we have complied with the ethical standards for publishing articles in this journal.
(2) Funding details: The work is partially supported by CNNSF (Grants: 12171022, 62250001).
(3) Conflict of interest: The authors have no conflicts of interest to declare that are relevant to the content of this article.
(4) Informed Consent: Not applicable.
(5) Authorship contributions: All authors contributed to this article.
99
BCC2012 N.O. Alshehri, S.M. Bawazeer, On derivations of BCC-algebras, Int. J. Algebra. 6 (2012) 1491–1498.
DMV2010 N.O. Alshehri, Derivations of MV-algebras, Int. J. Math. Math. Sci. (2010) doi:10.1155/2010/312027.
near H.E. Bell, G. Mason, On derivations in near-rings and near-fields, North Holland Math. Studies. 137 (1987) 31-35.
lattice G. Birkhoff, Lattice Theory, Amer. Math. Soc., 1967.
radical M. Brešar, M. Mathieu, Derivations operator into the radical, III, J. Funct. Anal. 133 (1995) 21–29.
socle M. Brešar, P. Šemrl, Derivations mapping into the socle, Math. Proc. Cambridge Phil. Soc. 120 (1996) 339–346.
bu S. Burris, H.P. Sankappanavar, A Course in Universal
Algebra, Springer Verlag, 2012.
MV1 C.C. Chang, Algebraic analysis of many-valued logic, Trans. Amer. Math. Soc. 88 (1958) 467-490.
MV2 C.C. Chang, A new proof of the completeness of the Lukasiewicz axioms, Trans. Amer. Math. Soc. 93 (1959) 74-80.
MV3 R. Cignoli, D. Mundici, An elementary proof of Chang’s completeness theorem for the infinite-valued calculus of Lukasiewicz, Studia Logica 58 (1997) 79-97.
MV4 R. Cignoli, I.M.L. D’Ottaviano, D. Mundici, Algebraic Foundations of Many-valued Reasoning, Kluwer Academic Publishers, Dordrecht, 2000.
local R.L. Crist, Local Derivations on Operator Algebras, J. Funct. Anal. 135.1 (1996) 76-92.
lattice2001 L. Ferrari, On derivations of lattices, Pure Math Appl. 12 (2001) 365-382.
DL A.P. Gan, L. Guo, On differential lattices, Soft Comput. (2022) <https://doi.org/10.1007/s00500-022-07101-z>
noncomm L. Guo and W. Keigher, On differential Rota-Baxter algebras, J. Pure Appl. Algebra 212 (2008) 522-540.
DMV2019 A. Hamal, Additive derivative and multiplicative coderivative operators on MV-algebras, Doğa Mat. 43.2 (2019) 879–893, 2019.
L2021 X.J. Hua, State L-algebras and derivations of L-algebras, Soft Comput. 25 (2021) 4201–4212.
BCI Y.B. Jun, X.L. Xin, On derivations on BCI-algebras, Inf. Sci. 159 (2004) 167–176.
BE K.H. Kim, S.M. Lee, On derivations of BE-algebras, Honam Math. J. 36 (2014) 167–178.
da E. Kolchin, Differential Algebra and Algebraic Groups, Academic Press, 1973.
BASIC J. Krňávek, J. Kühr, A note on derivations on basic algebras, Soft Comput. 19 (2015) 1765–1771.
MV5 C. Lele, J.B. Nganou, MV-algebras derived from ideals in BL-algebras, Fuzzy Sets and Systems 218 (2013) 103-113.
DMV2021 L.L. Lu , Y.W. Yang, Generalized Additive Derivations on MV-algebras, Engineering Letters, 29(2) (2021).
prime E. Posner, Derivations in prime rings, Proc. Amer. Math. Soc. 8 (1957) 1093-1100.
lattice1975 G. Szász, Derivations of lattices, Acta Sci. Math. 37 (1975) 149–154.
BCC2009 C. Prabpayak, U. Leerawat, On derivations of BCC-algebras, Kasetsart J. (Nat. Sci.) 43 (2009) 398–401.
field J.F. Ritt, Differential Equations from the Algebraic Standpoint, Amer. Math. Soc. Colloq. Publ. 14, Amer.
Math. Soc. New York, 1932.
CA M. Singer, M. van der Put, Galois Theory of Linear Differential Equations, Springer, 2003.
DMV2017 J.T. Wang, B. Davvaz, P.F. He, On derivations of MV-algebras, Soft Comput. (2017).
DA1 W.-T. Wu, On the decision problem and the mechanization of theorem proving in elementary geometry, Scientia Sinica 21 (2) (1978) 159–172. Also reprinted in “Contemporary Mathematics." 29 (1984) 213-241.
DA2 W.-T. Wu, A constructive theory of differential algebraic geometry based on works of J. F. Ritt with particular
applications to mechanical theorem-proving of differential geometries, Lect Notes Math. 1255 (1987) 173–189.
lattice2008 X.L. Xin, T.Y. Li, J.H. Lu, On derivations of lattices, Inf. Sci. 178 (2008) 307–316.
lattice2012 X.L. Xin, The fixed set of a derivation in lattices, Fixed Point Theory Appl. 218 (2012) 1–12.
DMV2013 H. Yazarli, A note on derivations in MV-algebras, Miskolc Math Notes. 14 (2013) 345-354.
§ APPENDIX I. CALCULATION PROGRAM BY PYTHON IN EXAMPLE <REF> (2)
[language=Python,caption = (L_2× L_3).py,label = a1]
ss = ['0', 'a', 'b', 'c', 'd', '1']
alphabet = {}
for i in range(len(ss)):
    alphabet[ss[i]] = i

def cheng(a, b):
    # Cayley table of x ⊙ y on L_2 x L_3 with elements 0, a, b, c, d, 1
    chenglist = [['0', '0', '0', '0', '0','0'], ['0', '0', 'a', '0', '0', 'a'], ['0', 'a', 'b', '0', 'a','b'], ['0', '0', '0', 'c', 'c', 'c'], ['0', '0', 'a', 'c', 'c', 'd'], ['0', 'a', 'b', 'c', 'd', '1']]
    return chenglist[alphabet[a]][alphabet[b]]

def join(a, b):
    # Cayley table of x ∨ y on L_2 x L_3
    joinlist = [['0', 'a', 'b', 'c', 'd', '1'], ['a', 'a', 'b', 'd', 'd', '1'], ['b', 'b', 'b', '1', '1', '1'], ['c', 'd', '1', 'c', 'd', '1'], ['d', 'd', '1', 'd', 'd', '1'], ['1', '1', '1', '1', '1', '1']]
    return joinlist[alphabet[a]][alphabet[b]]

sss = []
for i in ss:
    for j in ss:
        sss.append([i, j])

# search over candidate maps (restricted by d(x) <= x) and print those
# satisfying d(x ⊙ y) = (d(x) ⊙ y) ∨ (x ⊙ d(y)) for all x, y
for a in ['0', 'a']:
    for b in ['0', 'a', 'b']:
        for c in ['0', 'c']:
            for d in ['0', 'a', 'c', 'd']:
                for I in ['0', 'a', 'b', 'c', 'd', '1']:
                    mapping = {'0': '0', 'a': a, 'b': b, 'c': c, 'd': d, '1': I}
                    flag = 1
                    for i in sss:
                        if flag == 1:
                            if mapping[cheng(i[0], i[1])] != join(
                                    cheng(mapping[i[0]], i[1]),
                                    cheng(i[0], mapping[i[1]])):
                                flag = 0
                    if flag == 1:
                        print(a + b + c + d + I)
§ APPENDIX II. CALCULATION PROGRAM BY PYTHON IN EXAMPLE <REF> (2)
[language=Python,caption = Hasse diagram of (L_2× L_3).py,label = a2]
a = [
    '00cc0', '00ccc', '0a000', '0a00a', '0acc0', '0acca', '0accc', '0accd', 'a00a0', 'a0cd0', 'a0cdc', 'aa0a0', 'aa0aa', 'aacd0', 'aacda', 'aacdc', 'aacdd', 'ab000', 'ab00a', 'ab0a0', 'ab0aa', 'ab0ab', 'abcc0', 'abcca', 'abccc', 'abccd', 'abcd0', 'abcda', 'abcdb', 'abcdc', 'abcdd'
]
n = len(a[0])
b = list(i for i in a)
# strict order pairs of L_2 x L_3: 'xy' in R means x < y
R = ['0a', '0c', '0b', '0d', 'ab', 'ad', 'cd']

def leq(a, b):
    # componentwise comparison of two derivations encoded as strings
    if a == b:
        return 0
    for i in range(n):
        if str(a[i]) != str(b[i]) and str(a[i]) + str(b[i]) not in R:
            return 0
    return 1

def maximal(a):
    max = []
    for i in a:
        b = set(leq(i, j) for j in a)
        if 1 not in b:
            max.append(i)
    return max

def minimal(a):
    min = []
    for i in a:
        b = set(leq(j, i) for j in a)
        if 1 not in b:
            min.append(i)
    return min

# print the levels of the Hasse diagram from bottom to top ...
while b != []:
    print(minimal(b))
    b = [i for i in b if not i in minimal(b)]
print()
# ... and from top to bottom
while a != []:
    print(maximal(a))
    a = [i for i in a if not i in maximal(a)]
|
http://arxiv.org/abs/2307.04467v1 | 20230710103218 | Deformations at Earth's dayside magnetopause during quasi-radial IMF conditions: Global kinetic simulations and soft X-ray imaging | [
"Zhongwei Yang",
"R. Jarvinen",
"X. C. Guo",
"T. R. Sun",
"D. Koutroumpa",
"G. K. Parks",
"C. Huang",
"B. B. Tang",
"Q. M. Lu",
"C. Wang"
] | physics.space-ph | [
"physics.space-ph"
] |
Zhongwei Yang (ORCID: 0000-0002-1509-1529)
State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
Finnish Meteorological Institute, FI-00101 Helsinki, Finland
State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
LATMOS/IPSL, CNRS, UVSQ Université Paris-Saclay, Sorbonne Université, Guyancourt, 78280, France
Space Sciences Laboratory, University of California, Berkeley, California 94720, USA
CAS Engineering Laboratory for Deep Resources Equipment and Technology, Institute of Geology and Geophysics, CAS, Beijing, 100029 China
State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
Deep Space Exploration Laboratory/School of Earth and Space Sciences, University of Science and Technology of China, Hefei 230026, China
State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China ([email protected])
The Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) is an ESA-CAS joint mission. Its primary goals are to investigate the dynamic response of the Earth's magnetosphere to the solar wind (SW) impact via simultaneous in situ magnetosheath plasma and magnetic field measurements, X-ray images of the magnetosheath and magnetic cusps, and UV images of global auroral distributions. Magnetopause deformations associated with magnetosheath high speed jets (HSJs) under a quasi-parallel interplanetary magnetic field condition are studied using a three-dimensional (3-D) global hybrid simulation. Soft X-ray intensities calculated from the physical quantities of both solar wind protons and oxygen ions are compared. We obtain key findings concerning deformations at the magnetopause: (1) Magnetopause deformations are highly coherent with the magnetosheath HSJs generated at the quasi-parallel region of the bow shock, (2) X-ray intensities estimated using solar wind H^+ and self-consistent O^7+ ions are consistent with each other, (3) Visual spacecraft are employed to check the discrimination ability for capturing magnetopause deformations on lunar and polar orbits, respectively. The SMILE spacecraft on the polar orbit could be expected to provide opportunities for capturing the global geometry of the magnetopause in the equatorial plane. A striking point is that SMILE has the potential to capture small-scale magnetopause deformations and magnetosheath transients, such as HSJs, at medium altitudes on its orbit. Simulation results also demonstrate that a lunar-based imager (e.g., the Lunar Environment heliospheric X-ray Imager, LEXI) is expected to observe a localized brightening of the magnetosheath during HSJ events in the meridian plane. These preliminary results might contribute to the pre-studies for the SMILE and LEXI missions by providing qualitative and quantitative soft X-ray estimates of dayside kinetic processes.
§ INTRODUCTION
In-situ spacecraft observations in the near-Earth plasma environment (e.g., MMS, Cluster, Van Allen Probes, THEMIS, Geotail, Double Star) have made important contributions to revealing dynamic and kinetic problems of the solar wind interaction with the Earth's magnetosphere. These observations provide excellent opportunities for understanding the microphysics of collisionless shocks, magnetic reconnection, and wave-particle interactions, as well as cross-scale energy release and dissipation by substorms and turbulence. On the other hand, remote observations of radio emissions, optical light, infrared, extreme ultraviolet (EUV), X-ray, gamma-ray, and energetic neutral atoms are widely used to study distant objects. These imaging techniques provide a new way of visualizing global pictures of the Earth's exosphere, plasmasphere, inner magnetosphere, magnetosheath, magnetotail, and the cusp region (e.g., TWINS, IBEX, XMM-Newton), as well as of solar activity and heliospheric structures (e.g., PSP, SO, SDO, IBEX).
Early ROSAT observations of X-ray and EUV emission from comet C/Hyakutake have been reported by <cit.>. Several mechanisms, such as thermal bremsstrahlung associated with hot electrons, possibly due to solar wind interaction effects, were suggested to explain this emission. <cit.> proposed that the solar wind contains a large number of heavy ion species with a range of charge states (e.g., C^6+, O^7+, O^8+). These ions readily undergo charge transfer with cometary or planetary exospheric neutrals, producing ions that can be highly excited and consequently emit photons in the X-ray and EUV part of the spectrum. This solar wind charge exchange mechanism is abbreviated as SWCX. Thereafter, an empirical formula that depends on the local neutral density, solar wind density, solar wind speed, and charge-exchange cross-section was proposed for the quantitative estimation of the X-ray intensity <cit.>. <cit.> found that a significant positive correlation exists between the solar wind fluxes and the soft X-ray intensity. Furthermore, XMM-Newton observations indicate that the elevated high-valence ion abundance inside a coronal mass ejection (CME), particularly for Ne^9+, Mg^11+, Mg^12+, etc., favors the enhancement of Earth's magnetospheric soft X-ray emissions <cit.>.
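To make such an estimate concrete, the following Python snippet sketches a Cravens-type local emissivity, P = α n_n n_sw u_sw; the efficiency factor α and the example densities and speed below are only representative order-of-magnitude assumptions for illustration, not values adopted later in this paper.
[language=Python]
# Cravens-type SWCX volume emissivity, P = alpha * n_n * n_sw * u_sw
# (eV cm^-3 s^-1).  The factor alpha lumps together the heavy-ion abundances,
# charge-exchange cross sections and emitted photon energies; the number
# below is only an assumed, representative order of magnitude.
ALPHA = 6.0e-16   # eV cm^2 (assumed effective interaction strength)

def swcx_emissivity(n_neutral, n_sw, u_sw):
    """Local emissivity [eV cm^-3 s^-1] for neutral density n_neutral [cm^-3],
    plasma density n_sw [cm^-3] and bulk speed u_sw [cm s^-1]."""
    return ALPHA * n_neutral * n_sw * u_sw

# illustrative magnetosheath-like numbers near the subsolar magnetopause
print(swcx_emissivity(n_neutral=25.0, n_sw=20.0, u_sw=2.0e7))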
SWCX emissions are commonly observed in the heliosphere at comets <cit.>, Earth <cit.>, the Moon <cit.>, Jupiter <cit.>, and Mars <cit.>. As reviewed by <cit.>, observations of soft X-ray emissions have become a powerful tool for panoramic imaging of the planetary magnetosphere and plasma environment. Currently, new and future space missions, specifically designed for X-ray imaging of the vast planetary space weather system, including the ESA/JAXA BepiColombo mission, SMILE ESA-CAS joint mission, NASA STORM missions, CubeSat/small spacecraft missions (e.g., NASA CuPID, JAXA GEO-X), and future lunar-based missions (e.g., NASA LEXI, Chinese Chang'e/SXI), have been proposed and successively implemented for studying “charge exchange," a poorly understood phenomenon that occurs when the solar wind collides with planetary exospheres and neutral gas in the heliosphere.
To support the development of new X-ray missions, numerical simulations are crucial for pre-studies. Several numerical models have been developed to simulate soft X-ray imaging and determine detectability. Empirical models <cit.> have been used to explore X-ray imaging of the solar wind-Mars interaction. Hybrid simulations and test particle calculations have been used to compute contributions from SWCX processes to X-ray emissions from Venus <cit.>. MHD simulations are commonly used to accurately describe the shape of global structures of the terrestrial magnetosphere during its interaction with the solar wind. By combining an empirical exosphere neutral profile with the solar wind flux from MHD simulations, the soft X-ray emission can be estimated using Cravens' formula. Studies have discussed the soft X-ray visibility of the solar storm compressed magnetopause, the cusp region, flank Kelvin-Helmholtz (K-H) waves, and magnetic reconnection associated outflows and flux transfer events (FTEs) in detail <cit.>.
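As an illustration of this procedure, the sketch below integrates a Cravens-type emissivity along a single line of sight through an assumed spherically symmetric exospheric hydrogen profile (∝ r^-3, normalized to 25 cm^-3 at 10 R_E) and uniform magnetosheath plasma. All numbers, including the crude spherical inner cutoff standing in for the magnetopause, are illustrative assumptions rather than the setup used in the simulations presented here.
[language=Python]
import numpy as np

# Schematic line-of-sight (LOS) integration of the SWCX emissivity,
# I = (1 / 4 pi) * integral( alpha * n_H(r) * n_sw * u_sw ) ds.
RE_CM = 6.371e8                      # Earth radius in cm
ALPHA = 6.0e-16                      # eV cm^2 (assumed)

def n_hydrogen(r_re):
    """Exospheric H density [cm^-3]; r_re is geocentric distance in R_E."""
    return 25.0 * (10.0 / r_re) ** 3

def los_intensity(start_re, direction, n_sw=20.0, u_sw=2.0e7,
                  s_max_re=60.0, n_steps=2000):
    """Soft X-ray intensity [eV cm^-2 s^-1 sr^-1] along one LOS."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    s = np.linspace(0.0, s_max_re, n_steps)            # path length in R_E
    points = np.asarray(start_re, float) + s[:, None] * direction
    r = np.linalg.norm(points, axis=1)
    q = ALPHA * n_hydrogen(r) * n_sw * u_sw             # local emissivity
    q[r < 8.0] = 0.0   # crude cut: no SWCX inside an assumed ~8 R_E sphere
    return np.trapz(q, s * RE_CM) / (4.0 * np.pi)

# viewing the subsolar region from a vantage point at (0, -30, 0) R_E
print(los_intensity(start_re=[0.0, -30.0, 0.0], direction=[0.3, 1.0, 0.0]))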
Previous models and simulations have achieved a great deal in the study of Earth's X-ray emissions. However, it is still an open question whether the soft X-rays excited by moderate-scale dynamic structures in the magnetosheath and magnetosphere are visible. Many cross-scale transient structures have been observed from the foreshock all the way to the magnetopause. Here, we list some structures that are strongly associated with (at least) ion kinetic behavior:
1. The foreshock region is filled with ULF waves, shocklets, SLAMS, and other nonlinear structures, and even magnetic reconnection. <cit.> presented a patchwork picture of the foreshock. These foreshock waves and structures have been clearly evidenced in both kinetic simulations <cit.> and observations <cit.>.
2. Both kinetic simulations and MMS observations reveal that the bow shock not only undergoes back and forth swings under the turbulent solar wind, but also can experience kinetic-scale ripples <cit.> and self-reforming cycles <cit.>.
3. Magnetosheath high-speed jets (HSJs) refer to enhancements of the anti-sunward bulk velocity and of the dynamic pressure based on the x component of the ion velocity <cit.>. A recent study proposes that a fraction of HSJs is a direct consequence of shock reformation <cit.>, and they may be related to "throat aurora" and the corresponding magnetopause distortion <cit.>. 2-D hybrid simulations have been employed to study HSJ properties, sizes, lifetimes, and associated jet-driven bow waves <cit.>. They find that the jets are associated with the porous quasi-parallel shock front, and that the scale size of jets can reach about 2.5-5 R_E.
4. In addition, magnetopause asymmetric reconnection <cit.>, magnetosheath reconnections at both electron and ion scales <cit.>, small-scale current filaments <cit.>, magnetopause K-H waves <cit.>, and associated vortex-induced reconnections <cit.> also play important roles in the cross-scale process and energy conversion during the Earth-solar wind interaction.
Some potential soft X-ray imaging targets, such as solar storms, K-H waves, and FTE events, have been investigated based on global MHD simulations <cit.>. In this study, we focus on HSJs, which are typically observed downstream of the quasi-parallel bow shock and are highly associated with ion dynamic and kinetic processes in 3-D. Simultaneous Cluster and MMS observations provide insight into HSJs. <cit.> found that HSJs are not localized in small regions but can span a region larger than 10 R_E, especially when the quasi-parallel shock covers the entire dayside magnetosphere under a radial interplanetary magnetic field (IMF). The magnetopause can have multiple independent indentation sites under the continuous impacts of HSJs. The magnetopause is deformed and can move in opposite directions at different places. It can therefore no longer be considered a smooth surface, but rather a surface full of local indents. One striking point is that a large number of observations indicate that long radial IMF events can last from about 3-10 hours <cit.> to 1-2 days <cit.>. Under such long-duration IMF solar wind conditions, the foreshock has enough time to grow and reach a mature state. In this case, the magnetopause around the subsolar point may suffer continuous disturbances from HSJs. If so, what the soft X-ray imaging will look like remains to be further simulated and analyzed.
In this paper, we focus on the dynamics of the foreshock and magnetosheath to address four primary questions: (1) How do HSJs continuously affect the magnetopause under radial IMF events? (2) What is the global picture and fate of these jets and their resulting magnetopause indents at different locations in 3-D? (3) What is the timescale of the magnetopause response to HSJs? (4) Can this response be identified in soft X-ray intensity images by SMILE, LEXI, and other lunar-based missions?
§ SIMULATION MODEL
In this paper, the interaction of the solar wind with Earth's magnetosphere has been simulated by the three-dimensional (3-D) global hybrid simulation platform RHybrid <cit.>.
The model setup includes the undisturbed, upstream solar wind ions injected into the simulation from the front (+x) wall along the -x direction with a drifting Maxwellian velocity distribution. Within the simulation domain, ion velocity distributions evolve self-consistently, coupled with the evolution of the magnetic field. The components of the undisturbed IMF perpendicular to the flow (B_y, B_z) convect into the simulation domain frozen-in to the solar wind plasma, whereas the radial, flow-aligned component (B_x) is implemented as a constant magnetic field profile. Earth's magnetic field is represented as a 3-D dipole field <cit.> instead of a mirror dipole <cit.>. The magnetosphere-solar wind interaction system, including regions such as the dayside and nightside magnetosphere, the magnetosheath, and the foreshock, as well as boundaries such as the bow shock and the magnetopause, forms self-consistently when the magnetized solar wind plasma flow encounters the geomagnetic field and the planetary environment. Electrons are modeled as a charge-neutralizing adiabatic fluid. The inner boundary is placed at a geocentric distance of r = 3R_E and is implemented as a perfectly conducting sphere on which precipitating particles are absorbed.
The simulated Earth radius R_E is typically reduced to an order of 10 d_i0 (the upstream solar wind ion inertial length) in order to ensure the appearance of an Earth-like magnetosphere <cit.> and to save considerable computational cost in global hybrid simulations. In this paper, R_E = 1200 km. A set of solar wind parameters <cit.> is used to mimic the space environment for the SMILE mission, which is scheduled to launch in 2025 during solar maximum. The solar wind ions consist of protons with a bulk velocity of 450 km/s and a number density of 7 cm^-3, plus the highly ionized minor species O^7+. The IMF magnitude is 10 nT, corresponding to a solar wind ion gyro-frequency of Ω_0 = 0.958 s^-1 ≈ 1 s^-1. The magnetic Reynolds number is set to 1.2×10^7. Uniform grid cells with a size of Δ x=Δ y=Δ z ≈ 0.08 R_E are used throughout the box. The cell dimensions are chosen as n_x× n_y× n_z=500×600×600. A total of about ten billion particles are used. A typical time step is Δ t=0.01 s. More details of the parameter setup are summarized in Table 1.
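As a cross-check of the derived upstream quantities quoted above and in Table 1 (the proton gyro-frequency, the Alfvén Mach number, and the ratio R_E/d_i0), the short Python sketch below recomputes them from the stated solar wind density, bulk speed, and IMF magnitude. It is an illustrative calculation by the editors, not part of the RHybrid code, and the variable names are arbitrary.

import numpy as np

# Upstream solar wind inputs quoted in the text / Table 1 (SI units)
n_sw = 7.0e6          # proton number density [m^-3] (7 cm^-3)
v_sw = 450.0e3        # bulk speed [m/s]
B    = 10.0e-9        # IMF magnitude [T]

m_p  = 1.6726e-27     # proton mass [kg]
q    = 1.6022e-19     # elementary charge [C]
mu_0 = 4.0e-7 * np.pi # vacuum permeability [H/m]
c    = 2.9979e8       # speed of light [m/s]
eps0 = 8.8542e-12     # vacuum permittivity [F/m]

# Proton gyro-frequency Omega_0 = qB/m_p (text quotes 0.958 s^-1)
omega_ci = q * B / m_p

# Alfven speed and Alfven Mach number (Table 1 quotes M_A = 5.46)
v_A = B / np.sqrt(mu_0 * n_sw * m_p)
M_A = v_sw / v_A

# Ion inertial length d_i0 = c / omega_pi; R_E = 1200 km is ~14 d_i0
omega_pi = np.sqrt(n_sw * q**2 / (eps0 * m_p))
d_i0 = c / omega_pi

print(f"Omega_0 = {omega_ci:.3f} 1/s")                       # ~0.958
print(f"v_A = {v_A/1e3:.1f} km/s, M_A = {M_A:.2f}")          # ~82 km/s, ~5.5
print(f"d_i0 = {d_i0/1e3:.0f} km, R_E/d_i0 = {1200e3/d_i0:.1f}")

Running this reproduces the quoted gyro-frequency and Alfvén Mach number and confirms that the reduced Earth radius corresponds to roughly 14 upstream ion inertial lengths, consistent with the "order of 10 d_i0" scaling described above.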
§ SIMULATION RESULTS
§.§ Quasi-parallel IMF condition
Under radial solar wind conditions, there are at least two initialization methods for simulating the interaction between the solar wind and planetary magnetospheres. The first involves introducing a dipole field within the IMF, allowing the dipole field strength to gradually diminish with increasing distance from the Earth's center and transition to a purely solar wind magnetic field <cit.>. The second method employs a mirrored dipole field on the sunward side, e.g., placing a mirrored dipole at x=+30 Earth radii, superimposing its magnetic field on the original field, and ultimately replacing the magnetic field in the upstream region at x=+15 Earth radii with a purely solar wind magnetic field <cit.>. Both initial configurations yield satisfactory magnetospheric morphologies and are extensively utilized in astrophysical and space physics simulations of Hermean-type planetary magnetospheres. Without loss of generality, this study adopts the first method for initializing the simulation.
Figure 1 illustrates the interaction between the solar wind and the Earth's magnetosphere under radial IMF conditions, as well as the formation process of the bow shock. Panels from left to right represent snapshots of the number density of H^+ ions at different times, t=3s, 70s, 220s, and 235s, respectively. Figure 1a shows the Earth's dipole field at the beginning of the simulation. The magnetosphere begins to expand and gradually forms. At a later time (Figure 1b), the Earth's magnetosphere, including key structures (e.g., the magnetopause, the magnetosheath, the cusp regions, and the bow shock), has initially formed. A fraction of the incident solar wind ions are reflected at the bow shock front. These back-streaming ions interact with the freshly incident solar wind, generating low-frequency waves in the foreshock region. In Figures 1c-d, the foreshock has reached a mature form under the long-duration radial solar wind conditions.
Yellow arrows in Figures 1c-d indicate that the foreshock low-frequency wave structures can undergo nonlinear evolution and steepen when approaching the Earth's bow shock. Such nonlinear structures have been widely observed at quasi-parallel shocks <cit.>. Previous local simulations clearly revealed that such steepening foreshock transients are associated with magnetosheath dynamic structures, such as HSJs <cit.>. Both spacecraft observations <cit.> and global simulations <cit.> have clearly evidenced that HSJs generated immediately downstream of the quasi-parallel shock can lead to magnetopause indents.
In the next section, we will take HSJs as an example to study magnetopause deformations (including indentation and protrusion).
§.§ Magnetopause deformation
Figure 2a (at t=235s) shows that the magnetopause is being compressed inward by the magnetosheath transients, forming a concave shape as indicated by the white arrow in the enlarged view (Figure 1d). A standard streamline method (LIC) is adopted to display the magnetic field lines of the forced magnetosphere. The magnetic field lines in the concave magnetopause region are locally bent inward, rather than moving as a whole. This is crucial and can be different from the mechanism of large-scale magnetopause indentation produced by CME-driven shocks. Figures 2b and c are snapshots of the magnetopause at a non-concave and a concave time (t=160s and 235s), respectively. Figure 2c depicts a zoomed-in view of the region where the magnetopause is affected by HSJs. From Figure 2c, it can be clearly seen that there is a strong HSJ sunward of the concave region of the magnetopause. In comparison, the magnetopause without the HSJ impact (Figure 2b, t=160s) is relatively quiet and shows no obvious deformation like that at t=235s. It is interesting to note that there is an HSJ located near z=-6R_E, where the magnetopause is slightly concave due to its influence. However, this HSJ is off the dayside and exists in the magnetosheath region at a relatively high latitude. The plasma flow in the local magnetosheath has already begun to deflect, so this HSJ does not strike the magnetopause as strongly as the one near the subsolar point. In summary, the foreshock continuously generates dynamic magnetosheath transients such as HSJs, which can cause deformation of the magnetopause. <cit.> used local simulations to statistically analyze the evolution of various transient structures, such as HSJs, transient flux enhancements, and high-speed plasmoids, downstream of quasi-parallel shocks. The formation mechanism will not be further elaborated here; this paper only focuses on the soft X-ray imaging response to the magnetopause deformation caused by such transient structures. Nevertheless, it is still worth mentioning that, unlike the planar shocks in local hybrid and PIC simulations <cit.>, global simulations indicate that the solar wind is deflected around the Earth's magnetosphere in the magnetosheath. The transients are therefore more likely to impact the magnetopause near the subsolar point.
Figures 3a-c present the time evolution of the ion number density log_10 N sampled at different locations: A, B, and C (denoted in Figure 2). To understand the characteristics of the magnetopause depressions, Figures 3d-f show the time series of the x-directional dynamic pressure P_dx, temperature T, and X-ray intensity P corresponding to sampling location C. This dynamic evolution process cannot be captured by empirical shock and magnetopause models. High-resolution data from the Magnetospheric Multiscale (MMS) mission have been continuously released, and the kinetic processes of shock-front rippling and self-reformation have been successively confirmed. These mechanisms may result in variations in the location and configuration of the bow shock, and most of the changes are concentrated on the ion scale. <cit.> showed in situ evidence of HSJs generated at the Earth's bow shock as a direct consequence of shock self-reformation. In this paper, from a global simulation perspective, we trace the evolution of various regions from the foreshock to the magnetopause at a relatively large scale (on the order of ∼ R_E). Figure 3d shows that magnetopause depressions are usually accompanied by an increase in the dynamic pressure P_dx in the magnetosheath. Under the impact of the HSJs, the ion temperature inside the magnetopause does not change significantly (Figure 3e). In Figure 3f, one striking point is that the dynamic pressure can locally enhance the X-ray intensity within the magnetosheath ahead of the magnetopause. Furthermore, Figures 3b-c show that the magnetopause at Z=0 and Z=-2.5R_E exhibited earthward indentation at about t=270s. This is mainly due to the dragging of magnetic field lines caused by the HSJ impacting the magnetopause near the subsolar point.
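As a concrete illustration of the quantity plotted in Figure 3d, the following Python sketch computes the x-directional dynamic pressure from ion moments at a fixed sampling point, adopting the common convention P_dx = n m_p v_x^2, and flags intervals where it exceeds a threshold relative to the upstream solar wind dynamic pressure. The time series, the 0.5 threshold fraction, and the array names are hypothetical placeholders chosen by the editors; they are not the criterion or output of the actual run.

import numpy as np

m_p = 1.6726e-27  # proton mass [kg]

def dynamic_pressure_x(n, vx):
    """x-directional dynamic pressure P_dx = n * m_p * vx^2 [Pa]."""
    return n * m_p * vx**2

# Hypothetical time series at a fixed magnetosheath sampling point
t  = np.arange(0.0, 300.0, 0.5)                          # [s]
n  = 2.0e7 * (1.0 + 0.3 * np.random.rand(t.size))        # [m^-3]
vx = -150.0e3 * (1.0 + 0.5 * np.random.rand(t.size))     # [m/s], anti-sunward

p_dx = dynamic_pressure_x(n, vx)

# Reference upstream solar wind dynamic pressure (7 cm^-3, 450 km/s)
p_sw = dynamic_pressure_x(7.0e6, 450.0e3)

# Flag jet-like enhancements; the 0.5*p_sw threshold is an arbitrary
# illustrative choice, not the definition used in this paper.
jet_mask = p_dx > 0.5 * p_sw
print(f"p_sw = {p_sw*1e9:.2f} nPa; flagged fraction = {jet_mask.mean():.2f}")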
In summary, magnetopause depressions caused by magnetosheath transients can last 20-50 seconds, which is advantageous for X-ray imaging. Of course, the quality of imaging also depends on many factors, such as the number of X-ray photon counts, the field of view (FOV) at a given orbit, the spatial and temporal resolutions, the exposure time, and the background noise. An estimation of the X-ray imaging considering all of the factors mentioned above is beyond the scope of this paper and depends on the final parameters of SMILE/SXI. The motivation of this work is to suggest additional potential kinetic processes and structures that could be observed by soft X-ray instruments in the future. The soft X-ray intensity calculated from the sampling area (Figure 3f) suggests that imaging magnetopause deformations with soft X-ray instruments is possible. In the next subsection, we will further study the three-dimensional soft X-ray imaging of magnetosheath transients and magnetopause indents from the perspectives of local intensities and line-of-sight (LOS) integrations.
§.§ Soft X-ray imaging
In this study, the X-ray intensity of the geocoronal SWCX emission for a particular line-of-sight (LOS) I can be estimated by the line integration of volume emission rate (P) as in previous investigations <cit.>.
I = (1/4π) ∫ P dr = (1/4π) ∫ α n_H n_sw V_eff dr   (keV cm^-2 s^-1 sr^-1),   Eq. (1)
where n_H and n_sw are the number densities of exospheric hydrogen and solar wind protons, respectively. The effective collision speed in Eq. (1) is estimated from the solar wind velocity V_sw and thermal speed V_th as V_eff=√(V_sw^2+V_th^2). It is important to note that protons do not produce soft X-rays. Instead, heavy solar wind ions such as C^5+, C^6+, Ne^9+, O^7+, O^8+, Mg^11+, and Mg^12+ emit soft X-rays through the SWCX process <cit.>. For instance, the interaction O^7++H→ O^6+*+H^+, in which an electron is transferred from an exospheric hydrogen atom to a solar wind oxygen ion, leaves the oxygen ion in an excited state. The ion then emits a photon when it decays to a lower energy state, which may lead to the satellite detection of soft X-rays. The heavy ions constitute a very small proportion of the solar wind and are reasonably treated as test particles in previous MHD and hybrid simulations. Typically, the X-ray intensity emitted by heavy ions is estimated by combining the proton parameters with an interaction efficiency factor. In this paper, our simulations are performed in the presence of self-consistent solar wind H^+ and O^7+ ions, which allows us to estimate X-ray intensities independently based on either O^7+ or H^+ ions. By applying the interaction efficiency factor (α), the proton-based value in Eq. (1) is converted into the soft X-ray emissivity generated by the source ions. Cravens (2000) gave a rough estimate of α encompassing all the detailed atomic physics. Based on summarized parameter lists <cit.>, we use an interaction efficiency factor of α=10^-15 (eV cm^2) for a solar wind speed of about 450 km/s, following previous simulations <cit.> and references therein. Although this setup is widely used, it is worth noting that the value of α is quite uncertain and depends on solar wind conditions <cit.>. An analytical model from <cit.> is adopted for the neutral density, given as
n_H = 25 (cm^-3) (10 R_E/r)^3,   Eq. (2)
where r is the distance from the considered location to the Earth's center. The X-ray volume emission rate P based on heavy ions, e.g., O^7+, can also be estimated, following <cit.> and references therein:
P = σ_sw [O^7+/O] [O/H] F_sw n_H   (keV cm^-3 s^-1),   Eq. (3)
where σ_sw is the charge-exchange cross section that depends on the solar wind species and charge state. The parameters σ_sw=12×10^-15 (cm^2), [O^7+/O]=0.2, and [O/H]=4.76×10^-6 are adopted following previous studies <cit.> to represent the solar wind conditions during the solar maximum period when the soft X-ray missions will be launched.
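To make Eqs. (1)-(3) concrete, the Python sketch below evaluates the exospheric density of Eq. (2), the proton-based volume emission rate α n_H n_sw V_eff of Eq. (1), and a simple line-of-sight integration along a straight sight line. The sight-line geometry and the magnetosheath plasma values (n_sw, V_sw, V_th) are illustrative placeholders supplied by the editors; in the actual analysis these quantities vary along the line of sight and are taken from the hybrid simulation output.

import numpy as np

R_E   = 6371e5          # physical Earth radius [cm], used in the exosphere model
alpha = 1.0e-15         # interaction efficiency factor [eV cm^2]

def n_H(r_cm):
    """Exospheric hydrogen density, Eq. (2): 25 cm^-3 * (10 R_E / r)^3."""
    return 25.0 * (10.0 * R_E / r_cm) ** 3

def emission_rate(r_cm, n_sw, v_sw, v_th):
    """Proton-based volume emission rate alpha*n_H*n_sw*V_eff [eV cm^-3 s^-1]."""
    v_eff = np.sqrt(v_sw**2 + v_th**2)   # effective collision speed [cm/s]
    return alpha * n_H(r_cm) * n_sw * v_eff

# Straight sight line from x = 8 R_E to x = 20 R_E at y = z = 0 (illustrative),
# with uniform magnetosheath-like plasma assumed along the whole path.
x = np.linspace(8.0, 20.0, 2000) * R_E                       # [cm]
P = emission_rate(x, n_sw=20.0, v_sw=2.0e7, v_th=2.0e7)      # 20 cm^-3, 200 km/s

# Eq. (1): I = (1/4pi) * integral of P along the line of sight
I = np.trapz(P, x) / (4.0 * np.pi)        # [eV cm^-2 s^-1 sr^-1]
print(f"LOS-integrated intensity I = {I/1e3:.2f} keV cm^-2 s^-1 sr^-1")

With these placeholder values the integral comes out at a few keV cm^-2 s^-1 sr^-1, i.e., the same order as typical magnetosheath SWCX intensities discussed below, which is the point of the exercise rather than a prediction for any specific viewing geometry.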
In Figures 4a,b, we present the X-ray intensity profiles P in the meridional plane at t=160s and t=235s, respectively. The envelope of the magnetopause is indicated by a dashed curve. In conjunction with Figures 2b,c, we find that if there is no HSJ impact on the magnetopause in the subsolar magnetosheath region (e.g., at t=160s), the magnetopause maintains a relatively smooth shape. When an HSJ is observed in the magnetosheath (e.g., at t=235s), there is an enhancement of the X-ray intensity in the magnetosheath and a noticeable inward indentation of the magnetopause (indicated by arrows). Figure 4c is similar to Figure 4b but estimated from the heavy ion O^7+ data using Eq. (3) instead of from the proton data. Similarly, we can see significant deformation of the magnetopause, and the dynamical process and qualitative conclusions are almost the same as those estimated from the solar wind protons. This implies that estimating the X-ray intensity from solar wind proton data, as done in previous works, is a good approximation. The bottom panels of Figure 4 show the corresponding LOS-integrated values of I calculated by Eq. (1) from a dawn-side view. The dashed and solid curves indicate the variation of the magnetopause location without (Figure 4a) and with (Figures 4b-c) indents caused by magnetosheath HSJs. Furthermore, a localized enhancement of the LOS-integrated intensity I is visible in the magnetosheath ahead of the magnetopause indents. Such localized brightening events are expected to be captured by soft X-ray instruments from a dawnside or duskside view on a lunar orbit (e.g., the NASA LEXI mission).
For a wide field-of-view, the Soft X-ray Imager (SXI) onboard SMILE uses lightweight micropore optics that provide high angular resolution (i.e., ∼0.1^∘) for the 0.15-2.5 keV energy band <cit.>. To obtain good X-ray counts, SMILE/SXI is expected to achieve at least about 1.5^∘ angular resolution near the dayside magnetopause <cit.>. SXI has a field of view (FOV) of approximately 16^∘×27^∘, and its line of sight forms a fixed angle with the UVI payload pointing towards the polar region and points towards the subsolar magnetopause. We have preliminarily calculated the profile of the LOS X-ray intensity integral value I within the FOV on SMILE's possible orbit, to study the possibility of SMILE imaging the deformation of the magnetopause caused by dynamic structures such as magnetosheath HSJs.
In Figure 5, the left panel shows a 3-D volume rendering sketch of the X-ray intensity. Key regions such as the cusp, magnetopause (MP), magnetosheath (MS), bow shock (BS), and the foreshock region are marked in the sketch. The field of view (FOV) of the SMILE spacecraft (SC) is also roughly denoted in yellow for reference. The motivation of this study is to find the best potential locations on the SMILE orbit for imaging magnetosheath transients (e.g., local dynamic pressure enhancements represented by HSJs) and magnetopause deformations. First, we use the hybrid simulation to obtain 3-D intensity profiles at two fixed times (A: at 160s and B: at 235s); then, we calculate the LOS-integrated X-ray intensity I for these two fixed profiles as observed by a virtual SC at different locations on the SMILE orbit. The selected SC locations are shown in Figures 5c and 5f (an animation of this figure is available for a full one-day orbit). When the spacecraft is located at [2.0,8.6,9.3]R_E (Figure 5c), the calculated LOS I images for the two fixed profiles A and B are shown in Figures 5a and 5b, respectively. The black rectangular area in Figure 5 encircles the field of view of the SC in θ-ϕ space. The black dashed curve describes the envelope from the cusp all the way to the magnetopause. By comparing the region where HSJs impact the magnetopause (the area marked by white circles), it is clearly evidenced that the shape of the magnetopause impacted by HSJs has undergone significant indentation, accompanied by local X-ray brightening. This conclusion is very interesting and indicates that, at least under the altitude and spacecraft attitude conditions shown in Figure 5c, the magnetosheath solar wind-magnetopause coupling process can very likely be captured. Furthermore, in Figures 5d and 5e, we also calculated the LOS X-ray intensity for imaging the magnetopause from a bird's-eye view under spacecraft apogee conditions on orbit. At apogee, the SC can capture the entire geometry of the magnetopause, but LOS-integration effects may make it difficult to identify magnetosheath HSJs or the local concavity and convexity of the magnetopause at smaller dynamic or kinetic scales. In summary, different locations on the SC orbit have respective advantages and disadvantages for imaging dynamic structures, magnetopause deformation, and the overall geometry of the whole magnetopause.
§ CONCLUSIONS AND DISCUSSIONS
This article mainly uses a 3-D global hybrid simulation to study the dynamics of the Earth's bow shock and magnetosphere under radial IMF conditions and conducts soft X-ray imaging tests. The main conclusions are:
(1) Under radial solar wind conditions, the subsolar magnetosheath lies downstream of the quasi-parallel shock. Here, a large number of HSJs are observed, consistent with previous Cluster and MMS statistical observations <cit.>. In addition, the HSJs correspond well to steepened ULF transients of the foreshock.
(2) The simulation in this article reproduces HSJs with spatial sizes reaching the order of magnitude of the Earth's radius, consistent with previous global simulations <cit.>. Moreover, it is also found that HSJs can last from seconds to minutes at the subsolar point and impact the magnetopause to form depressions.
(3) We analyzed the ability of different spacecraft positions, on an approximate lunar orbit of 60R_E and on SMILE's possible orbit, to discriminate local deformations of the magnetopause. The main conclusion is that LOS X-ray imaging from the lunar orbit (such as LEXI) has a good ability to identify magnetopause deformation within the meridional plane, while polar-orbit spacecraft (such as SMILE) have advantages in imaging the overall geometry of the magnetopause within the equatorial plane at apogee. One striking point is that SMILE may have the potential to capture small-scale transient structures (e.g., HSJs) at low altitudes around the magnetopause.
In the near future, we need to consider the effects of background noise, different IMF conditions, an asymmetric exospheric neutral hydrogen profile, and solar wind structures (e.g., CMEs, TDs, RDs, and CSs) <cit.> on the soft X-ray imaging. The main goal is to provide as many pre-studies as possible to support the data analysis for the future soft X-ray space missions around 2025 during the solar maximum.
Authors are grateful to Daniel Weimer from Virginia Tech, Urs Ganse and Yann Kempf from University of Helsinki, San Lu from USTC, and Chuanfei Dong from Boston University for helpful discussions. This work was supported by the National Key R&D program of China No.2021YFA0718600, NNFSC grants 42188101, and 42274210, and the Specialized Research Fund for State Key Laboratories of China. The computations are performed by Numerical Forecast Modeling R&D and VR System of State Key Laboratory of Space Weather, and HPC of Chinese Meridian Project using the RHybrid code distributed under the open source GPL v3 license by the Finnish Meteorological Institute (github.com/fmihpc/rhybrid).
Table 1. Global hybrid model setup and solar wind conditions
Parameter  Value
Number of grid cells (n_x× n_y× n_z) 500×600×600
Grid cell size (Δ x) (100 km)^3=(R_E/12)^3
Time step (Δ t) 10 ms
SW bulk velocity vector [V_x, V_y, V_z] [-450, 0, 0]km/s
SW H^+ density 7 cm^-3
SW O^7+ density 10^-4 cm^-3
SW H^+ temperature 15×10^4 K
SW O^7+ temperature 15×10^4 K
SW e^- temperature 15×10^4 K
IMF vector [B_x, B_y, B_z] [-9.96, 0.6, 0.6]nT
IMF spiral angle 4^∘ (away sector)
IMF magnitude 10nT
Alfvén Mach number 5.46
Magnetosonic Mach number 4.79
Dipole strength B_0 at the equator on the surface 4.5μ T
|
http://arxiv.org/abs/2307.07554v1 | 20230714180008 | XMM-Newton Observations of Two Archival X-ray Weak Type 1 Quasars: Obscuration Induced X-ray Weakness and Variability | [
"Zijian Zhang",
"Bin Luo",
"W. N. Brandt",
"Pu Du",
"Chen Hu",
"Jian Huang",
"Xingting Pu",
"Jian-Min Wang",
"Weimin Yi"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.HE"
] |
Zijian Zhang (ORCID: 0000-0002-2420-5022)
School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China
Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, China

Bin Luo (ORCID: 0000-0002-9036-0063)
School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China
Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, China

W. N. Brandt (ORCID: 0000-0002-0167-2453)
Department of Astronomy & Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802, USA
Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA
Department of Physics, 104 Davey Lab, The Pennsylvania State University, University Park, PA 16802, USA

Pu Du (ORCID: 0000-0002-5830-3544)
Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Road, Beijing 100049, China

Chen Hu
Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Road, Beijing 100049, China

Jian Huang (ORCID: 0000-0002-9335-9455)
School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China
Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, China

Xingting Pu (ORCID: 0000-0003-3349-4855)
College of Science, Nanjing Forestry University, Nanjing, Jiangsu 210037, People's Republic of China

Jian-Min Wang (ORCID: 0000-0001-9449-9268)
Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Road, Beijing 100049, China

Weimin Yi (ORCID: 0000-0001-9314-0552)
Yunnan Observatories, Chinese Academy of Sciences, Kunming, 650216, China
Department of Astronomy & Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802, USA
We report XMM-Newton observations of two examples of an unclassified type of X-ray weak quasars from the <cit.> survey of X-ray weak quasars in the Chandra archive, SDSS J083116.62+321329.6 at z=1.797 and SDSS J142339.87+042041.1 at z=1.702. They do not belong to the known populations of X-ray weak quasars that show broad absorption lines, weak ultraviolet (UV) broad emission lines, or red optical/UV continua. Instead, they display typical quasar UV spectra and spectral energy distributions. In the XMM-Newton observations, both quasars show nominal levels of X-ray emission with typical quasar X-ray spectral shapes (power-law photon indices of 1.99^+0.27_-0.23 and 1.86^+0.15_-0.14), displaying strong X-ray variability compared to the archival Chandra data (variability factors of 4.0^+1.6_-1.4 and 9.0^+7.4_-3.8 in terms of the 2 keV flux density). Simultaneous optical (rest-frame UV) spectra indicate no strong variability compared to the archival spectra. Long-term optical/UV and infrared light curves do not show any substantial variability either. We consider that the X-ray weakness observed in the Chandra data is due to X-ray obscuration from a small-scale dust-free absorber, likely related to accretion-disk winds. Such X-ray weak/absorbed states are probably rare in typical quasars, and thus both targets recovered to X-ray nominal-strength states in the XMM-Newton observations.
§ INTRODUCTION
Quasars are powered by accretion onto supermassive black holes (SMBHs) in the centers of massive galaxies. Luminous X-ray emission is a ubiquitous property of quasars, which is believed to originate largely from the “corona” located around the inner accretion disk via Comptonization of optical/ultraviolet (UV) seed photons <cit.>. A significant correlation has been observed between the coronal X-ray emission and the accretion-disk optical/UV emission, typically expressed as the relation between the X-ray-to-optical power-law slope parameter (α_ OX)[α_ OX is defined as α_ OX=-0.3838 log(f_ 2500 Å/f_ 2 keV), where f_ 2500 Å and f_ 2 keV are the rest-frame 2500 Å and 2 keV flux densities, respectively.] and the 2500 Å monochromatic luminosity (L_ 2500 Å), over ≈ 5 orders of magnitude in UV luminosity <cit.>. A quasar is considered to be X-ray weak if it deviates below the α_ OX– L_ 2500 Å relation, showing weaker than expected X-ray emission. The amount of X-ray weakness is often quantified by the Δα_ OX parameter, defined as the difference between the observed and expected α_ OX values (Δα_ OX=α_ OX - α_ OX,exp); the corresponding X-ray weakness factor is f_ weak = 10^-Δα_ OX/0.384.
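As a quick reference for these definitions, the short Python sketch below (our illustration, not taken from the cited papers) evaluates α_ OX, Δα_ OX, and f_ weak from rest-frame flux densities; the example flux values and the adopted expected α_ OX are arbitrary placeholders.

import numpy as np

def alpha_ox(f_2500, f_2kev):
    """X-ray-to-optical slope: alpha_OX = -0.3838 * log10(f_2500 / f_2keV)."""
    return -0.3838 * np.log10(f_2500 / f_2kev)

def f_weak(delta_alpha_ox):
    """X-ray weakness factor: f_weak = 10^(-Delta alpha_OX / 0.384)."""
    return 10.0 ** (-delta_alpha_ox / 0.384)

# Illustrative rest-frame flux densities (erg cm^-2 s^-1 Hz^-1)
a_obs = alpha_ox(f_2500=1.0e-27, f_2kev=1.0e-32)
a_exp = -1.6          # assumed expected value from an alpha_OX-L_2500 relation
d_aox = a_obs - a_exp
print(f"alpha_OX = {a_obs:.2f}, Delta alpha_OX = {d_aox:.2f}, "
      f"f_weak = {f_weak(d_aox):.1f}")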
Except for a still-uncertain potential rare population of intrinsically X-ray weak quasars <cit.>, observations of X-ray weak type 1 quasars are generally ascribed to X-ray obscuration. For example, broad absorption line (BAL) quasars are generally X-ray weak due to absorption by a clumpy outflowing wind or “shielding gas” associated with the wind <cit.>. <cit.> performed a systematic investigation of X-ray emission from a large sample of Sloan Digital Sky Survey (SDSS) non-BAL type 1 quasars using Chandra archival observations, and they found a population of non-BAL X-ray weak quasars. The fraction of quasars that are X-ray weak by factors of ≥ 6 is 5.8% ± 0.7%. They further classified these X-ray weak quasars into three categories based on their optical spectral features: weak emission-line quasars (WLQs), red quasars, and unclassified objects. Previous studies have revealed that the X-ray weakness of the former two types of quasars is likely due to X-ray obscuration <cit.>.
The nature of the unclassified type of X-ray weak quasars in <cit.> is uncertain (see discussion in Section 5.2.3 of that work). They have UV continuum and emission-line spectra very similar to the SDSS quasar composite spectrum; i.e., there are no BALs or mini-BALs, weak emission lines, or redder than typical continua. They even have typical quasar spectral energy distributions (SEDs) from the infrared (IR) to UV. These quasars were serendipitously detected by Chandra, and they have at most a few tens of photons in the 0.5–7 keV band. The derived effective power-law photon indices are generally small (≈ 1) but with substantial uncertainties, suggestive of X-ray absorption. Moreover, the SDSS spectra and Chandra data were not obtained simultaneously, and they are separated by ≈ 1–4 years in the rest frame. It is thus probable that the observed X-ray weakness and typical UV spectra are due to variability effects. For example, there is a rare population of extremely X-ray variable quasars that have displayed strong X-ray variability (variability factors ≳ 6) with no corresponding variability in the optical/UV spectra (e.g., PG 1211+143, PG 0844+349, and SDSS J135058.12+261855.2). They become significantly X-ray weak in their low states, and the X-ray weakness is often explained with absorption. Another example is the strong long-term X-ray variability that has been observed in several "changing-look" (showing type transitions) quasars <cit.>. If the low-state X-ray fluxes and high-state optical/UV fluxes are mixed, the derived α_ OX values would appear smaller than those expected from the α_ OX–L_ 2500Å relation. It is also possible that during the Chandra observations these quasars developed BALs which were not present in the earlier SDSS observations; a small population of quasars has been found to show emerging or disappearing BALs <cit.>. Nevertheless, additional X-ray and optical spectroscopic observations are needed to clarify the nature of these exceptional objects.
In this study, we present additional deeper XMM-Newton observations of two examples of the unclassified type of X-ray weak quasars in <cit.>, SDSS J083116.62+321329.6 and SDSS J142339.87+042041.1 (hereafter SDSS J0831+3213 and SDSS J1423+0420). At a redshift of 1.797, SDSS J0831+3213 was detected serendipitously by a Chandra observation on 2007 December 22. It has 17.4^+5.9_-4.8 counts in the 0.5–7 keV band with an effective power-law photon index (Γ_ eff) of 1.0^+0.6_-0.5 <cit.>. SDSS J1423+0420 is at z=1.702. It was detected by a Chandra observation on 2012 December 15, with 6.0^+3.9_-2.7 counts in the 0.5–7 keV band and Γ_ eff<1.3 <cit.>. Using the α_ OX–L_ 2500Å relation in <cit.>, these quasars have Δα_ OX values of -0.34^+0.05_-0.05 and -0.48^+0.08_-0.10, corresponding to f_ weak values of 7.7^+2.9_-1.9 and 17.4^+14.2_-6.8, respectively. In the deeper XMM-Newton observations, both quasars recovered to nominal levels of X-ray emission, and thus they displayed strong X-ray variability compared to the archival Chandra data. Optical spectra simultaneous to the XMM-Newton observations are also available. The results support the notion that these rare X-ray weak quasars were not peculiar objects having weak/suppressed coronal X-ray emission; instead, they were simply caught in unusual X-ray absorbed states.
The paper is organized as follows. We describe the X-ray and simultaneous optical spectroscopic observations in Section <ref>. The X-ray and multiwavelength properties of the two quasars are presented in Section <ref>. We discuss and summarize our results in Section <ref>.
Throughout this paper, we use a cosmology with H_0=67.4 km s^-1 Mpc^-1, Ω_ M = 0.315, and Ω_Λ = 0.686 <cit.>. Measurement uncertainties are quoted at a 1σ confidence level, while upper limits are quoted at a 90% confidence level.
§ X-RAY AND OPTICAL OBSERVATIONS
Table 1. Basic Object Properties and Their XMM-Newton Observations
(1) Object Name  (2) z  (3) m_B  (4) log M_ BH (M_☉)  (5) L/L_ Edd  (6) N_ H, Gal (10^20 cm^-2)  (7) Observation ID  (8) Obs. Date  (9) Exposure Time (ks)
SDSS J0831+3213 1.797 18.7 9.32 0.18 3.87 0861260101 2021 Apr 07 22.1
SDSS J1423+0420 1.702 18.8 9.37 0.10 2.07 0861260301 2021 Jan 25 36.4
Cols. (1) and (2): object name and redshift.
Col. (3): B–band magnitude.
Cols. (4) and (5): single-epoch virial SMBH mass and Eddington ratio estimates from <cit.>.
Col. (6): Galactic neutral hydrogen column density.
Col. (7): XMM-Newton observation ID.
Col. (8): observation start date.
Col. (9): cleaned EPIC pn exposure time.
§.§ XMM-Newton observations
We obtained an XMM-Newton observation of SDSS J0831+3213 on 2021 April 07 with a total exposure time of 34.9 ks. SDSS J1423+0420 was observed on 2021 January 25 with a total exposure time of 57.5 ks. The X-ray data were processed using the XMM-Newton Science Analysis System <cit.> and the latest calibration files. All the EPIC pn, MOS1, and MOS2 data were used in our study. We reduced the pn and MOS data following the standard procedure described in the SAS Data Analysis Threads.[<https://www.cosmos.esa.int/web/XMM-Newton/sas-threads>.] Background flares were filtered to generate cleaned event files. The cleaned pn, MOS1, and MOS2 exposure times are 22.1, 32.6, and 32.3 ks for SDSS J0831+3213, and they are 36.4, 53.0, and 52.4 ks for SDSS J1423+0420. Both targets are significantly detected in the pn and MOS images. For each source, we extracted a source spectrum using a circular region with a radius of 30 centred on the optical source position. The total 0.3–10 keV spectral counts combining the pn and MOS spectra are 571 and 1163 for SDSS J0831+3213 and SDSS J1423+0420, respectively. For each source, a background spectrum was extracted from a few nearby circular source-free regions on the same CCD chip with a total area of about four times the area of the source region. Spectral response files were generated using the tasks rmfgen and arfgen. We then group the source spectra with at least 25 counts per bin for spectral fitting.
The basic information for the two observations is listed in Table <ref>. We also present in the table some basic object properties, including the Mg II-based single-epoch virial SMBH masses and Eddington ratios from <cit.>. We note that these mass and Eddington ratio estimates have substantial uncertainties <cit.>. The Galactic neutral hydrogen column densities from <cit.> were used in the X-ray spectral analysis below.
The XMM-Newton Optical Monitor (OM) observations used three UV filters, including UVM2, UVW1, and U, with effective wavelengths of 2310 Å, 2910 Å, and 3440 Å, respectively. The OM observation of SDSS J1423+0420 was heavily contaminated by scattered light from a nearby bright field star,[See an example in Section 2.10 of the OM calibration status document: <https://xmmweb.esac.esa.int/docs/documents/CAL-TN-0019.pdf>.] and we were not able to extract any useful photometry. For SDSS J0831+3213, the OM data were reduced using the task omchain, and the photometric measurements of every exposure were recorded in the generated SWSRLI files. We extracted the magnitude measurements for each filter from these files and computed the mean magnitudes in the three OM bands.
Table 2. XMM-Newton Spectral Fitting Results
(1) Object Name  (2) Γ  (3) Norm  (4) χ^2/dof  (5) P_ null  (6) L_ X
SDSS J0831 + 3213 1.99_-0.23^+0.27 3.42^+1.30_-0.92 16.4/19 0.63 9.55^+1.48_-1.51
SDSS J1423 + 0420 1.86_-0.14^+0.15 2.99^+0.63_-0.53 40.8/49 0.79 9.87^+1.09_-1.26
Col. (1): object name.
Col. (2): power-law photon index.
Col. (3): power-law normalization in units of 10^-5 photons cm^-2 s^-1 keV^-1.
Col. (4): χ^2 value divided by the degrees of freedom.
Col. (5): null hypothesis probability of the model.
Col. (6): rest-frame 2–10 keV luminosity in units of 10^43 erg s^-1.
§.§ Optical spectra
An optical spectrum of SDSS J0831+3213 was taken by the Lijiang 2.4 m telescope at the Yunnan Observatories of the Chinese Academy of Sciences on 2021 April 9 with an exposure time of 60 minutes. Optical spectra of SDSS J1423+0420 were obtained by the Hobby-Eberly Telescope (HET) at the McDonald Observatory on 2021 January 24 and by the Lijiang 2.4 m telescope on 2021 February 9; the exposure times are both 50 minutes. Grism No. 3 was used in the Lijiang observations with a resolving power of ≈2000, and the data were reduced the same way as in <cit.>. Flux calibration of the Lijiang spectra was carried out using the spectra of standard stars. For the HET observation of SDSS J1423+0420, we used the blue arm of the Low-Resolution Spectrograph 2 (LRS2-B) with a resolving power of ≈1800. The data were processed following the standard procedures using the HET pipeline tool panacea.[<https://github.com/grzeimann/Panacea>.] For absolute flux calibration, we normalized the HET spectrum to the Lijiang spectrum at rest-frame 2500 Å as the two observations are near simultaneous.
The Lijiang and HET (for SDSS J1423+0420) spectra of the two quasars are displayed in Figure <ref>. The spectra have been corrected for Galactic extinction using the <cit.> Milky Way extinction model with R_V = 3.1. The Galactic E_B-V values of SDSS J0831+3213 and SDSS J1423+0420 are 0.0409 and 0.0237, respectively, obtained from <cit.>. Since both quasars have SDSS-I spectra, which have good flux calibration in general <cit.>, we show in Figure <ref> the corresponding SDSS spectra for comparison. The Lijiang and SDSS spectra of SDSS J0831+3213 agree well with each other. For SDSS J1423+0420, the near-simultaneous Lijiang and HET spectra appear redder than the SDSS spectrum, and they are ≈ 25% brighter than the SDSS spectrum redward of ≈ 2000 Å. Such a long-term variability amplitude is not uncommon among quasars, but the spectral shape change is not consistent with the typical “bluer when brighter” quasar variability <cit.>. One explanation is that there is also a slight increase of the optical/UV extinction that makes the spectra redder.
Nevertheless, the rest-frame UV spectra of the two quasars do not suggest any strong variability that could lead to the observed X-ray variability described in Section <ref> below. Both quasars show mild C IV blueshifts; for SDSS J0831+3213, the blueshift is -1359± 962 km s^-1, and for SDSS J1423+0420, it is -652± 619 km s^-1 <cit.>.
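For completeness, the following is a minimal sketch of the type of Galactic de-reddening applied to the spectra above, assuming the dust_extinction package's F19 curve implements the <cit.> R_V = 3.1 Milky Way law used here; the exact function names are an assumption about tooling, and the wavelengths, fluxes, and the E(B-V) value (the Galactic value quoted for SDSS J0831+3213) are only illustrative inputs.

import numpy as np
import astropy.units as u
from dust_extinction.parameter_averages import F19

# Observed-frame wavelengths and fluxes of a spectrum (illustrative values)
wave = np.linspace(4000.0, 9000.0, 6) * u.AA
flux = np.ones(wave.size)      # arbitrary flux units

# Fitzpatrick et al. (2019) Milky Way extinction curve with R_V = 3.1
ext = F19(Rv=3.1)

# extinguish() returns the fractional attenuation; dividing removes it
flux_corrected = flux / ext.extinguish(wave, Ebv=0.0409)
print(flux_corrected)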
§ X-RAY AND MULTIWAVELENGTH PROPERTIES
§.§ X-ray spectral analysis
The XMM-Newton spectra of SDSS J0831+3213 and SDSS J1423+0420 were fitted using XSPEC (v12.12.1; <cit.>). We adopted a simple power-law model modified by Galactic absorption (zpowerlw*phabs) to describe the 0.3–10 keV spectra. For each quasar, we jointly fitted the EPIC pn, MOS1, and MOS2 spectra, but we added normalization constants (best-fit values between 0.85 and 1.20) to the MOS spectra to allow for small cross-calibration uncertainties. The best-fit results are displayed in Table <ref> and Figure <ref>.
The simple power-law model describes the spectra well, with small reduced χ^2 values and large null hypothesis probabilities (Table <ref>). The resulting power-law photon indices are 1.99^+0.27_-0.23 and 1.86^+0.15_-0.14, typical of type 1 quasars <cit.>. Adding an intrinsic absorption component (zphabs) does not improve the fits, and we set upper limits on the intrinsic N_ H of 4.1 × 10^21 cm^-2 and 6.6 × 10^21 cm^-2 for SDSS J0831+3213 and SDSS J1423+0420, respectively. From the best-fit models, we also computed rest-frame 2–10 keV luminosities of the two quasars (Table <ref>), both approaching 10^44 erg s^-1.
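For reference, a stripped-down PyXspec sketch of this kind of fit is given below. It fits only a single (hypothetical) grouped pn spectrum rather than the joint pn+MOS fit with cross-normalization constants and the additional zphabs test described above; the file name is a placeholder, and the frozen values correspond to SDSS J0831+3213 (Galactic N_H = 3.87 × 10^20 cm^-2, z = 1.797) from Table 1.

import xspec

# Load a grouped EPIC pn spectrum (hypothetical file name; responses and
# background are picked up from the standard OGIP header keywords)
spec = xspec.Spectrum("J0831_pn_grouped.pha")
spec.ignore("**-0.3 10.0-**")          # restrict the fit to 0.3-10 keV

# Galactic-absorption-modified power law, as described in the text
model = xspec.Model("phabs*zpowerlw")
model.phabs.nH = 0.0387                # Galactic N_H in units of 10^22 cm^-2
model.phabs.nH.frozen = True
model.zpowerlw.Redshift = 1.797
model.zpowerlw.Redshift.frozen = True

xspec.Fit.statMethod = "chi"
xspec.Fit.perform()
xspec.Fit.error("1.0 2")               # 1-sigma interval on PhoIndex (par 2)

print(model.zpowerlw.PhoIndex.values[0], xspec.Fit.statistic, xspec.Fit.dof)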
§.§ X-ray variability
Compared to the archival Chandra observations, the two quasars both turned out to be much brighter in the XMM-Newton observations. From the best-fit models, we calculated the rest-frame 2 keV flux densities (f_ 2 keV) and listed these in Table <ref>. We adopted the Chandra measurements of f_ 2 keV from <cit.>. The variability amplitudes in terms of the 2 keV flux densities are thus 4.0_-1.4^+1.6 and 9.0_-3.8^+7.4 for SDSS J0831 + 3213 and SDSS J1423 + 0420, respectively. Such large long-term variability amplitudes are rare among quasars <cit.>, which cannot be explained by typical quasar X-ray variability related to instability/fluctuations of the accretion disk and corona. Besides flux variability, the X-ray spectral shapes also appear different. In the recent XMM-Newton observations, both quasars had typical quasar spectral shapes (Γ values of 1.99^+0.27_-0.23 and 1.86^+0.15_-0.14) with no signatures of X-ray absorption. For SDSS J0831+3213, the effective photon index from the Chandra observation is Γ_ eff=1.0^+0.6_-0.5, which is ≈ 1.6 σ smaller than the XMM-Newton Γ value. For SDSS J1423+0420, the 90% confidence-level upper limit on the effective photon index from the Chandra observation is Γ_ eff<1.3, still much smaller than the XMM-Newton Γ value.
We also computed the α_ OX values for the two quasars to assess the X-ray emission strength relative to the optical/UV emission strength. For the XMM-Newton observations, we derived f_2500 Å and L_2500 Å values from the simultaneous Lijiang spectra, which are listed in Table <ref>. For SDSS J0831 + 3213, the XMM-Newton OM photometric measurements also allow an estimate of f_2500 Å via extrapolation of an adopted power-law continuum with α_ν=-0.46 <cit.>. The resulting f_2500 Å value is consistent with that derived from the Lijiang spectrum. For the Chandra observations, since there were no simultaneous optical/UV measurements and both quasars do not show strong optical/UV variability (see Section <ref> below), we still adopted the same f_2500 Å and L_2500 Å values from the Lijiang spectra. We note that a 50% difference in f_2500 Å only changes the resulting Δα_ OX value by a small amount of about 0.06.
The α_ OX values, along with the corresponding Δα_ OX and f_ weak values derived using the <cit.> α_ OX– L_ 2500 Å relation, are listed in Tables <ref>. We also show the two quasars in the α_ OX versus L_ 2500 Å plane in Figure <ref>. Both quasars were significantly X-ray weak in the Chandra observations. Given the 1σ scatter (Δα_ OX=0.14; Table 5 of S06) of the <cit.> α_ OX– L_ 2500 Å relation, these f_ weak values correspond to 2.4σ and 3.4σ deviations from the α_ OX– L_ 2500 Å relation, respectively. However, both quasars recovered to nominal levels of X-ray emission in the XMM-Newton observations with Δα_ OX values of ≈ -0.11, within the 1 σ scatter of the <cit.> α_ OX– L_ 2500 Å relation. Considering also the flatter spectral shapes in the Chandra observations, the X-ray weak states were likely not intrinsic, but they were simply caused by X-ray obscuration. In the recent XMM-Newton observations, there is no X-ray obscuration, and thus both the spectral shapes and flux levels return to those of typical quasars.
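The numbers quoted above and in Table 3 can be reproduced from the tabulated flux densities with a few lines of arithmetic. The sketch below does this for SDSS J0831+3213, adopting the commonly quoted form of the Steffen et al. (2006) relation, α_ OX = -0.137 log L_ 2500 Å + 2.638, and its 1σ scatter of 0.14; treat those coefficients as our assumption of the exact relation used, although they reproduce the tabulated Δα_ OX values.

import numpy as np

def alpha_ox(f_2500, f_2kev):
    return -0.3838 * np.log10(f_2500 / f_2kev)

# SDSS J0831+3213, Table 3 values (erg cm^-2 s^-1 Hz^-1 and erg s^-1 Hz^-1)
f_2500   = 1.25e-27
L_2500   = 1.04e31
f_2kev_C = 1.03e-32      # Chandra epoch
f_2kev_X = 4.08e-32      # XMM-Newton epoch

a_exp = -0.137 * np.log10(L_2500) + 2.638   # assumed S06 relation

for label, f2 in [("Chandra", f_2kev_C), ("XMM-Newton", f_2kev_X)]:
    a_obs = alpha_ox(f_2500, f2)
    d_aox = a_obs - a_exp
    fweak = 10.0 ** (-d_aox / 0.384)
    print(f"{label}: alpha_OX = {a_obs:.2f}, Delta alpha_OX = {d_aox:.2f}, "
          f"f_weak = {fweak:.1f}, deviation = {abs(d_aox)/0.14:.1f} sigma")

# Long-term 2 keV variability factor between the two epochs (~4.0)
print(f"variability factor = {f_2kev_X / f_2kev_C:.1f}")

Running this recovers α_ OX = -1.95 and -1.72, Δα_ OX = -0.34 and -0.11, f_ weak = 7.7 and 1.9, the 2.4σ deviation of the Chandra epoch, and the factor of ≈ 4.0 flux-density variability quoted in Section 3.2.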
§.§ Spectral energy distributions and optical–IR light curves
We constructed IR-to-X-ray SEDs for the two quasars, shown in Figure <ref>. The IR–UV photometric data were collected from the Wide-field Infrared Survey Explorer <cit.>, SDSS <cit.>, and Galaxy Evolution Explorer <cit.> catalogs. The GALEX data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via [doi:10.17909/T9H59D]https://doi.org/10.17909/T9H59D. For these two quasars, only near-UV (NUV) measurements are available in the GALEX catalog; the non-detections in the far-UV (FUV) band are probably caused by the Lyman break in these z≈ 1.7 quasars. All the SED data have been corrected for the Galactic extinction. For SDSS J0831 + 3213, the XMM-Newton OM measurements were added to the SED plot. For both quasars, we added the XMM-Newton 2 keV and 10 keV luminosities determined from the best-fit results in Section <ref>. We also show the X-ray spectral slopes and their uncertainties. For the Chandra observations, we added the 2 keV luminosities from <cit.>. For comparison, we include the mean SED of high-luminosity quasars in <cit.>, scaled to the 2500 Å luminosities of our objects. An X-ray component was added to the mean SED to indicate the <cit.> α_ OX– L_ 2500 Å relation. We note that the IR–UV SED data are not simultaneous, and they may be affected by mild variability (see light curves below). However, the IR–UV SEDs of both quasars are still broadly consistent with those of typical quasars. SDSS J1423 + 0420 shows somewhat stronger mid-IR emission, which is not unusual considering the complex quasar IR emission mechanisms <cit.>.
To investigate the IR–UV photometric variability of the two quasars, we collected their multi-epoch data from the Zwicky Transient Facility <cit.>, Panoramic Survey Telescope and Rapid Response System <cit.>, and Near-Earth Object Wide-field Infrared Survey Explorer Reactivation Mission <cit.> catalogs. The Pan-STARRS data presented in this paper were obtained from the MAST at the Space Telescope Science Institute. The specific observations analyzed can be accessed via [doi:10.17909/s0zg-jx37]https://doi.org/10.17909/s0zg-jx37. The ZTF, Pan-STARRS, and NEOWISE light curves are presented in Figure <ref>. The ZTF light curves overlap the dates of the Lijiang observations, and thus we derived corresponding photometric measurements from the Lijiang spectra and added these to the ZTF light curves. For SDSS J0831 + 3213, the Lijiang measurements are slightly brighter than the adjacent ZTF data, but they are separated by a few months. For SDSS J1423 + 0410, the Lijiang measurements are consistent with the ZTF data within the uncertainties. The flux calibration of the Lijiang spectra thus appears reasonable.
These optical/UV and IR light curves indicate that the two quasars do not show any substantial long-term variability in these bands. The maximum variability amplitude reaches ≈ 0.5 mag (a factor of ≈ 1.6) in these light curves, still much smaller than the X-ray variability factors (Section <ref>). Therefore, the X-ray weakness and X-ray variability of these two quasars are not likely connected to any changing-look behavior where the optical/UV and X-ray continua vary coordinately.
§ DISCUSSION AND SUMMARY
The XMM-Newton observations of the two quasars reveal that they have recovered to nominal levels of X-ray emission with typical X-ray spectral shapes (Sections <ref> and <ref>). The X-ray weak states in the Chandra observations that motivated this study are thus likely caused by X-ray obscuration considering the flat spectral shapes (Γ_ eff = 1.0^+0.6_-0.5 and Γ_ eff < 1.3). Given the variability factors (≈ 4.0 and 9.0), we estimate that the column densities of the absorbers are ≈ 2.5 × 10^22 cm^-2 and ≈ 4.8 × 10^22 cm^-2 by adding an intrinsic absorption component to the best-fit models in Section <ref>. The X-ray variability between the Chandra and XMM-Newton observations is then explained with changes of X-ray obscuration.
Of the 426 sample B + C quasars in <cit.>, 14 (≈ 3.3%) quasars were identified as the unclassified type of X-ray weak quasars which have typical quasar UV continua and emission-line spectra. Out of these 14 objects, we selected two targets and obtained just one additional XMM-Newton observation for each. Yet they both turned out to be typical quasars considering their X-ray properties, UV spectra, and SEDs. These results suggest that the X-ray weak states caught in the archival Chandra observations are rare, and these quasars should show nominal levels of X-ray emission most of the time. This is also consistent with the small fraction of such objects found in the <cit.> survey study. In this case, if we reobserve the other 12 unclassified type of X-ray weak quasars in <cit.>, we should find nominal levels of X-ray emission in most of them. It is even probable that typical quasars might also become X-ray obscured occasionally (e.g., ≈ 3.3 % of the time) if long-term X-ray monitoring observations are available.
Since the long-term optical/UV and IR light curves do not show substantial variability (Section <ref>), and the multi-epoch UV spectra do not vary much (Section <ref>), the X-ray absorber likely does not affect the optical/UV continuum emission. A good candidate for such an absorber is the small-scale clumpy accreting-disk wind that is dust free. We cannot exclude the possibility that unusual UV spectral features accompanied the Chandra X-ray weak states. For example, they might have exhibited BAL features during the Chandra observations (Section <ref>), and they showed non-BAL-to-BAL-to-non-BAL transitions among the SDSS (non-BAL), Chandra (assumed BAL), and HET/Lijiang (non-BAL) observations, which are separated by 1.9–4.1 years in the rest frame. However, the probability of having such multiple BAL transitions appears insufficient to explain the 3.3% fraction of the unclassified type of X-ray weak quasars in <cit.>. For example, <cit.> found a non-BAL-to-BAL quasar emergence rate of 0.59%± 0.12% based on a parent sample of ≈ 15 000 SDSS quasars; among these BAL transition cases, only about 1/6 of the objects exhibited both emergence and disappearance. Nevertheless, absorption from the clumpy disk wind is still a probable cause of the X-ray weakness in BAL quasars <cit.>.
Obscuration from the disk wind (and from a possible geometrically thick accretion disk) has been suggested to explain the X-ray weakness observed in some WLQs and super-Eddington accreting quasars <cit.>. Powerful disk winds launched via radiation pressure are expected in these quasars <cit.>, and they could potentially provide significant X-ray obscuration. SDSS J0831 + 3213 and SDSS J1423 + 0420 do not show weak UV emission lines like WLQs, and their estimated Eddington ratios are not high (Table <ref>). Therefore, they might simply be typical type 1 quasars that occasionally develop powerful winds and enter the rare X-ray weak/absorbed states. Besides the wind density and covering factor, the probability of observing such extreme X-ray variability among typical quasars should also have an orientation dependence, as a large inclination angle is preferred for observing X-ray obscuration from the equatorial disk wind. Identification of more such objects is required to clarify their nature.
In summary, we present the XMM-Newton observations of two examples of an unclassified type of X-ray weak quasars from the <cit.> Chandra survey, SDSS J0831 + 3213 and SDSS J1423 + 0420. Their UV continua and emission-line spectra are similar to those of typical quasars and lack BALs, and they have typical quasar IR–UV SEDs. In the XMM-Newton observations, both quasars show nominal levels of X-ray emission with typical quasar X-ray spectral shapes, displaying strong X-ray variability compared to the archival Chandra data. Simultaneous optical (rest-frame UV) spectra indicate no strong variability compared to the SDSS spectra. Long-term optical/UV and IR light curves do not show any substantial variability either. We consider that the X-ray weakness observed in the Chandra observations is due to X-ray obscuration from a small-scale dust-free absorber, likely related to accretion-disk winds. Such X-ray weak/absorbed states are probably rare in typical quasars like SDSS J0831 + 3213 and SDSS J1423 + 0420, and thus they both recovered to X-ray nominal-strength states in the XMM-Newton observations. Future observations of similar objects (e.g., the other 12 unclassified type of X-ray weak quasars in <cit.>) should be able to provide constraints on the duty cycles of the X-ray weak states in these quasars and thus clarify their nature. We note that even in the high states, the expected 0.5–2 keV fluxes of these <cit.> quasars derived from the α_ OX– L_ 2500 Å relation are still ≈ 5–10 times lower than the sensitivity limit of the eROSITA survey <cit.>, and thus targeted observations by XMM-Newton/Chandra are needed.
Table 3. X-ray and Optical/UV Properties
(1) Object Name  (2) Observatory  (3) Date  (4) f_ 2keV  (5) f_ 2500 Å  (6) L_ 2500 Å  (7) α_ OX  (8) Δα_ OX  (9) f_ weak
SDSS J0831 + 3213 Chandra 2007–12–22 1.03^+0.35_-0.28 1.25 1.04 -1.95^+0.05_-0.05 -0.34 7.7^+2.9_-1.9
XMM-Newton 2021–04–07 4.08^+0.42_-0.96 1.25 1.04 -1.72^+0.02_-0.04 -0.11 1.9^+0.6_-0.2
SDSS J1423 + 0420 Chandra 2012–12–15 0.45^+0.29_-0.20 1.16 0.88 -2.08^+0.08_-0.10 -0.48 17.4^+14.2_-6.8
XMM-Newton 2021–01–25 4.04^+0.41_-0.57 1.16 0.88 -1.71^+0.02_-0.03 -0.11 1.9^+0.3_-0.2
Col. (1): object name.
Col. (2): observatory.
Col. (3): observation date.
Col. (4): rest-frame 2 keV flux density in units of 10^-32 erg cm^-2 s^-1 Hz^-1; the errors of the Chandra measurement were propagated from the count errors.
Col. (5): rest-frame 2500 Å flux density in units of 10^-27 erg cm^-2 s^-1 Hz^-1, derived from the Lijiang spectrum.
Col. (6): 2500 Å monochromatic luminosity in units of 10^31 erg s^-1 Hz^-1.
Col. (7): X-ray-to-optical power-law slope parameter with the uncertainties propagated from the f_ 2keV uncertainties.
Col. (8): difference between the observed α_ OX value and the expected α_ OX value derived from the α_ OX - L_ 2500 Å relation of <cit.>.
Col. (9): X-ray weakness factor: f_ weak = 10^-Δα_ OX/0.384.
We thank Donald P. Schneider and Sergey Rostopchin for help with the HET observation. We acknowledge the support of the staff of the Lijiang 2.4 m telescope. Z.Z. and B.L. acknowledge financial support from the National Natural Science Foundation of China grant 11991053, China Manned Space Project grants NO. CMS-CSST-2021-A05 and NO. CMS-CSST-2021-A06. W.N.B. acknowledges support from the Eberly Chair Endowment at Penn State. J.M.W. acknowledges financial support from the National Natural Science Foundation of China grants NSFC-11991050, -11991054, and -11833008.
The HET is a joint project of the University of Texas at Austin, Ludwig-Maximillians-Universität München, Georg-August-Universität Göttingen, and the Pennsylvania State University. Funding for the Lijiang 2.4 m telescope has been provided by the Chinese Academy of Sciences and the People’s Government of Yunnan Province.
[Arnaud(1996)]arnaud1996astronomical
Arnaud, K. 1996, in ASP Conf., Vol. 17
[Bachev et al.(2009)Bachev, Grupe, Boeva, Ovcharov,
Valcheva, Semkov, Georgiev, & Gallo]2009MNRAS.399..750B
Bachev, R., Grupe, D., Boeva, S., et al. 2009, , 399, 750,
10.1111/j.1365-2966.2009.15301.x
[Baskin et al.(2014)Baskin, Laor, &
Stern]2014MNRAS.438..604B
Baskin, A., Laor, A., & Stern, J. 2014, , 438, 604,
10.1093/mnras/stt2230
[Cutri et al.(1985)Cutri, Wisniewski, Rieke, &
Lebofsky]1985ApJ...296..423C
Cutri, R. M., Wisniewski, W. Z., Rieke, G. H., & Lebofsky, M. J. 1985,
, 296, 423, 10.1086/163461
[De Cicco et al.(2018)De Cicco, Brandt, Grier, Paolillo,
Filiz Ak, Schneider, & Trump]2018A A...616A.114D
De Cicco, D., Brandt, W. N., Grier, C. J., et al. 2018, , 616,
A114, 10.1051/0004-6361/201732497
[Done(2010)]2010arXiv1008.2287D
Done, C. 2010, arXiv e-prints, arXiv:1008.2287,
10.48550/arXiv.1008.2287
[Du et al.(2015)Du, Hu, Lu, Huang, Cheng, Qiu, Li,
Zhang, Fan, Bai, Bian, Yuan, Kaspi, Ho, Netzer, Wang, &
SEAMBH Collaboration]2015ApJ...806...22D
Du, P., Hu, C., Lu, K.-X., et al. 2015, , 806, 22,
10.1088/0004-637X/806/1/22
[Fabian et al.(2017)Fabian, Alston, Cackett, Kara,
Uttley, & Wilkins]2017AN....338..269F
Fabian, A. C., Alston, W. N., Cackett, E. M., et al. 2017,
Astronomische Nachrichten, 338, 269, 10.1002/asna.201713341
[Filiz Ak et al.(2012)Filiz Ak, Brandt, Hall, Schneider,
Anderson, Gibson, Lundgren, Myers, Petitjean, Ross, Shen,
York, Bizyaev, Brinkmann, Malanushenko, Oravetz, Pan, Simmons,
& Weaver]2012ApJ...757..114F
Filiz Ak, N., Brandt, W. N., Hall, P. B., et al. 2012, , 757, 114,
10.1088/0004-637X/757/2/114
[Fitzpatrick et al.(2019)Fitzpatrick, Massa, Gordon,
Bohlin, & Clayton]2019ApJ...886..108F
Fitzpatrick, E. L., Massa, D., Gordon, K. D., Bohlin, R., & Clayton,
G. C. 2019, , 886, 108, 10.3847/1538-4357/ab4c3a
[Flewelling et al.(2020)Flewelling, Magnier, Chambers,
Heasley, Holmberg, Huber, Sweeney, Waters, Calamida, Casertano,
Chen, Farrow, Hasinger, Henderson, Long, Metcalfe, Narayan,
Nieto-Santisteban, Norberg, Rest, Saglia, Szalay, Thakar,
Tonry, Valenti, Werner, White, Denneau, Draper, Hodapp,
Jedicke, Kaiser, Kudritzki, Price, Wainscoat, Chastel, McLean,
Postman, & Shiao]2020ApJS..251....7F
Flewelling, H. A., Magnier, E. A., Chambers, K. C., et al. 2020, ,
251, 7, 10.3847/1538-4365/abb82d
[Gabriel et al.(2004)Gabriel, Denby, Fyfe, Hoar, Ibarra,
Ojero, Osborne, Saxton, Lammers, & Vacanti]2004ASPC..314..759G
Gabriel, C., Denby, M., Fyfe, D. J., et al. 2004, in Astronomical
Society of the Pacific Conference Series, Vol. 314, Astronomical Data
Analysis Software and Systems (ADASS) XIII, ed. F. Ochsenbein, M. G.
Allen, & D. Egret, 759
[Gallagher et al.(2006)Gallagher, Brandt, Chartas,
Priddey, Garmire, & Sambruna]2006ApJ...644..709G
Gallagher, S. C., Brandt, W. N., Chartas, G., et al. 2006, , 644,
709, 10.1086/503762
[Gallo et al.(2011)Gallo, Grupe, Schartel, Komossa,
Miniutti, Fabian, & Santos-Lleo]2011MNRAS.412..161G
Gallo, L. C., Grupe, D., Schartel, N., et al. 2011, , 412, 161,
10.1111/j.1365-2966.2010.17894.x
[Gallo et al.(2023)Gallo, Miller, &
Costantini]2023arXiv230210930G
Gallo, L. C., Miller, J. M., & Costantini, E. 2023, arXiv e-prints,
arXiv:2302.10930, 10.48550/arXiv.2302.10930
[Gibson & Brandt(2012)]2012ApJ...746...54G
Gibson, R. R., & Brandt, W. N. 2012, , 746, 54,
10.1088/0004-637X/746/1/54
[Gilfanov & Merloni(2014)]2014SSRv..183..121G
Gilfanov, M., & Merloni, A. 2014, , 183, 121,
10.1007/s11214-014-0071-5
[Hall et al.(2006)Hall, Gallagher, Richards, Alexander,
Anderson, Bauer, Brandt, & Schneider]2006AJ....132.1977H
Hall, P. B., Gallagher, S. C., Richards, G. T., et al. 2006, , 132,
1977, 10.1086/507842
[HI4PI Collaboration et al.(2016)HI4PI Collaboration, Ben
Bekhti, Flöer, Keller, Kerp, Lenz, Winkel, Bailin,
Calabretta, Dedes, Ford, Gibson, Haud, Janowiecki, Kalberla,
Lockman, McClure-Griffiths, Murphy, Nakanishi, Pisano, &
Staveley-Smith]2016A A...594A.116H
HI4PI Collaboration, Ben Bekhti, N., Flöer, L., et al. 2016, ,
594, A116, 10.1051/0004-6361/201629178
[Huang et al.(2023)Huang, Luo, Brandt, Du, Garmire,
Hu, Liu, Ni, & Wang]2023ApJ...950...18H
Huang, J., Luo, B., Brandt, W. N., et al. 2023, , 950, 18,
10.3847/1538-4357/accd64
[Jiang et al.(2014)Jiang, Stone, &
Davis]2014ApJ...796..106J
Jiang, Y.-F., Stone, J. M., & Davis, S. W. 2014, , 796, 106,
10.1088/0004-637X/796/2/106
[Jiang et al.(2019)Jiang, Stone, &
Davis]2019ApJ...880...67J
—. 2019, , 880, 67, 10.3847/1538-4357/ab29ff
[Just et al.(2007)Just, Brandt, Shemmer, Steffen,
Schneider, Chartas, & Garmire]2007ApJ...665.1004J
Just, D. W., Brandt, W. N., Shemmer, O., et al. 2007, , 665, 1004,
10.1086/519990
[Krawczyk et al.(2013)Krawczyk, Richards, Mehta, Vogeley,
Gallagher, Leighly, Ross, & Schneider]2013ApJS..206....4K
Krawczyk, C. M., Richards, G. T., Mehta, S. S., et al. 2013, ,
206, 4, 10.1088/0067-0049/206/1/4
[LaMassa et al.(2015)LaMassa, Cales, Moran, Myers,
Richards, Eracleous, Heckman, Gallo, & Urry]2015ApJ...800..144L
LaMassa, S. M., Cales, S., Moran, E. C., et al. 2015, , 800, 144,
10.1088/0004-637X/800/2/144
[Leighly et al.(2007a)Leighly, Halpern,
Jenkins, & Casebeer]2007ApJS..173....1L
Leighly, K. M., Halpern, J. P., Jenkins, E. B., & Casebeer, D.
2007a, , 173, 1, 10.1086/519768
[Leighly et al.(2007b)Leighly, Halpern,
Jenkins, Grupe, Choi, & Prescott]2007ApJ...663..103L
Leighly, K. M., Halpern, J. P., Jenkins, E. B., et al.
2007b, , 663, 103, 10.1086/518017
[Liu et al.(2021)Liu, Luo, Brandt, Brotherton,
Gallagher, Ni, Shemmer, & Timlin]2021ApJ...910..103L
Liu, H., Luo, B., Brandt, W. N., et al. 2021, , 910, 103,
10.3847/1538-4357/abe37f
[Liu et al.(2018)Liu, Luo, Brandt, Gallagher, &
Garmire]2018ApJ...859..113L
Liu, H., Luo, B., Brandt, W. N., Gallagher, S. C., & Garmire, G. P.
2018, , 859, 113, 10.3847/1538-4357/aabe8d
[Liu et al.(2022)Liu, Luo, Brandt, Huang, Pu, Yi, &
Yu]2022ApJ...930...53L
Liu, H., Luo, B., Brandt, W. N., et al. 2022, , 930, 53,
10.3847/1538-4357/ac6265
[Luo et al.(2014)Luo, Brandt, Alexander, Stern, Teng,
Arévalo, Bauer, Boggs, Christensen, Comastri, Craig,
Farrah, Gandhi, Hailey, Harrison, Koss, Ogle, Puccetti, Saez,
Scott, Walton, & Zhang]2014ApJ...794...70L
Luo, B., Brandt, W. N., Alexander, D. M., et al. 2014, , 794, 70,
10.1088/0004-637X/794/1/70
[Luo et al.(2015)Luo, Brandt, Hall, Wu, Anderson,
Garmire, Gibson, Plotkin, Richards, Schneider, Shemmer, &
Shen]2015ApJ...805..122L
Luo, B., Brandt, W. N., Hall, P. B., et al. 2015, , 805, 122,
10.1088/0004-637X/805/2/122
[Lusso & Risaliti(2017)]2017A A...602A..79L
Lusso, E., & Risaliti, G. 2017, , 602, A79,
10.1051/0004-6361/201630079
[Lyu et al.(2017)Lyu, Rieke, & Shi]2017ApJ...835..257L
Lyu, J., Rieke, G. H., & Shi, Y. 2017, , 835, 257,
10.3847/1538-4357/835/2/257
[Mainzer et al.(2011)Mainzer, Bauer, Grav, Masiero,
Cutri, Dailey, Eisenhardt, McMillan, Wright, Walker, Jedicke,
Spahr, Tholen, Alles, Beck, Brandenburg, Conrow, Evans,
Fowler, Jarrett, Marsh, Masci, McCallon, Wheelock, Wittman,
Wyatt, DeBaun, Elliott, Elsbury, Gautier, Gomillion, Leisawitz,
Maleszewski, Micheli, & Wilkins]2011ApJ...731...53M
Mainzer, A., Bauer, J., Grav, T., et al. 2011, , 731, 53,
10.1088/0004-637X/731/1/53
[Margala et al.(2016)Margala, Kirkby, Dawson, Bailey,
Blanton, & Schneider]2016ApJ...831..157M
Margala, D., Kirkby, D., Dawson, K., et al. 2016, , 831, 157,
10.3847/0004-637X/831/2/157
[Martin et al.(2005)Martin, Fanson, Schiminovich,
Morrissey, Friedman, Barlow, Conrow, Grange, Jelinsky,
Milliard, Siegmund, Bianchi, Byun, Donas, Forster, Heckman,
Lee, Madore, Malina, Neff, Rich, Small, Surber, Szalay,
Welsh, & Wyder]2005ApJ...619L...1M
Martin, D. C., Fanson, J., Schiminovich, D., et al. 2005, , 619,
L1, 10.1086/426387
[Masci et al.(2019)Masci, Laher, Rusholme, Shupe, Groom,
Surace, Jackson, Monkewitz, Beck, Flynn, Terek, Landry,
Hacopians, Desai, Howell, Brooke, Imel, Wachter, Ye, Lin,
Cenko, Cunningham, Rebbapragada, Bue, Miller, Mahabal, Bellm,
Patterson, Jurić, Golkhou, Ofek, Walters, Graham, Kasliwal,
Dekany, Kupfer, Burdge, Cannella, Barlow, Van Sistine, Giomi,
Fremling, Blagorodnova, Levitan, Riddle, Smith, Helou, Prince,
& Kulkarni]2019PASP..131a8003M
Masci, F. J., Laher, R. R., Rusholme, B., et al. 2019, , 131,
018003, 10.1088/1538-3873/aae8ac
[Matthews et al.(2016)Matthews, Knigge, Long, Sim,
Higginbottom, & Mangham]2016MNRAS.458..293M
Matthews, J. H., Knigge, C., Long, K. S., et al. 2016, , 458,
293, 10.1093/mnras/stw323
[McGraw et al.(2017)McGraw, Brandt, Grier, Filiz Ak,
Hall, Schneider, Anderson, Green, Hutchinson, Macleod, &
Vivek]2017MNRAS.469.3163M
McGraw, S. M., Brandt, W. N., Grier, C. J., et al. 2017, , 469,
3163, 10.1093/mnras/stx1063
[Merloni et al.(2012)Merloni, Predehl, Becker,
Böhringer, Boller, Brunner, Brusa, Dennerl, Freyberg,
Friedrich, Georgakakis, Haberl, Hasinger, Meidinger, Mohr,
Nandra, Rau, Reiprich, Robrade, Salvato, Santangelo, Sasaki,
Schwope, Wilms, & German eROSITA Consortium]2012arXiv1209.3114M
Merloni, A., Predehl, P., Becker, W., et al. 2012, arXiv e-prints,
arXiv:1209.3114, 10.48550/arXiv.1209.3114
[Middei et al.(2017)Middei, Vagnetti, Bianchi, La Franca,
Paolillo, & Ursini]2017A A...599A..82M
Middei, R., Vagnetti, F., Bianchi, S., et al. 2017, , 599, A82,
10.1051/0004-6361/201629940
[Miniutti et al.(2012)Miniutti, Brandt, Schneider, Fabian,
Gallo, & Boller]2012MNRAS.425.1718M
Miniutti, G., Brandt, W. N., Schneider, D. P., et al. 2012, ,
425, 1718, 10.1111/j.1365-2966.2012.21648.x
[Murray et al.(1995)Murray, Chiang, Grossman, &
Voit]1995ApJ...451..498M
Murray, N., Chiang, J., Grossman, S. A., & Voit, G. M. 1995, ,
451, 498, 10.1086/176238
[Mushotzky et al.(1993)Mushotzky, Done, &
Pounds]1993ARA A..31..717M
Mushotzky, R. F., Done, C., & Pounds, K. A. 1993, , 31, 717,
10.1146/annurev.aa.31.090193.003441
[Ni et al.(2018)Ni, Brandt, Luo, Hall, Shen, Anderson,
Plotkin, Richards, Schneider, Shemmer, & Wu]2018MNRAS.480.5184N
Ni, Q., Brandt, W. N., Luo, B., et al. 2018, , 480, 5184,
10.1093/mnras/sty1989
[Ni et al.(2020)Ni, Brandt, Yi, Luo, Timlin, Hall,
Liu, Plotkin, Shemmer, Vito, & Wu]2020ApJ...889L..37N
Ni, Q., Brandt, W. N., Yi, W., et al. 2020, , 889, L37,
10.3847/2041-8213/ab6d78
[Ni et al.(2022)Ni, Brandt, Luo, Garmire, Hall,
Plotkin, Shemmer, Timlin, Vito, Wu, & Yi]2022MNRAS.511.5251N
Ni, Q., Brandt, W. N., Luo, B., et al. 2022, , 511, 5251,
10.1093/mnras/stac394
[Planck Collaboration et al.(2020)Planck Collaboration,
Aghanim, Akrami, Ashdown, Aumont, Baccigalupi, Ballardini,
Banday, Barreiro, Bartolo, Basak, Battye, Benabed, Bernard,
Bersanelli, Bielewicz, Bock, Bond, Borrill, Bouchet, Boulanger,
Bucher, Burigana, Butler, Calabrese, Cardoso, Carron,
Challinor, Chiang, Chluba, Colombo, Combet, Contreras, Crill,
Cuttaia, de Bernardis, de Zotti, Delabrouille, Delouis, Di
Valentino, Diego, Doré, Douspis, Ducout, Dupac, Dusini,
Efstathiou, Elsner, Enßlin, Eriksen, Fantaye, Farhang,
Fergusson, Fernandez-Cobos, Finelli, Forastieri, Frailis,
Fraisse, Franceschi, Frolov, Galeotta, Galli, Ganga,
Génova-Santos, Gerbino, Ghosh, González-Nuevo, Górski,
Gratton, Gruppuso, Gudmundsson, Hamann, Handley, Hansen,
Herranz, Hildebrandt, Hivon, Huang, Jaffe, Jones, Karakci,
Keihänen, Keskitalo, Kiiveri, Kim, Kisner, Knox,
Krachmalnicoff, Kunz, Kurki-Suonio, Lagache, Lamarre, Lasenby,
Lattanzi, Lawrence, Le Jeune, Lemos, Lesgourgues, Levrier,
Lewis, Liguori, Lilje, Lilley, Lindholm, López-Caniego,
Lubin, Ma, Macías-Pérez, Maggio, Maino, Mandolesi,
Mangilli, Marcos-Caballero, Maris, Martin, Martinelli,
Martínez-González, Matarrese, Mauri, McEwen, Meinhold,
Melchiorri, Mennella, Migliaccio, Millea, Mitra,
Miville-Deschênes, Molinari, Montier, Morgante, Moss, Natoli,
Nørgaard-Nielsen, Pagano, Paoletti, Partridge, Patanchon,
Peiris, Perrotta, Pettorino, Piacentini, Polastri, Polenta,
Puget, Rachen, Reinecke, Remazeilles, Renzi, Rocha, Rosset,
Roudier, Rubiño-Martín, Ruiz-Granados, Salvati, Sandri,
Savelainen, Scott, Shellard, Sirignano, Sirri, Spencer,
Sunyaev, Suur-Uski, Tauber, Tavagnacco, Tenti, Toffolatti,
Tomasi, Trombetti, Valenziano, Valiviita, Van Tent, Vibert,
Vielva, Villa, Vittorio, Wandelt, Wehus, White, White,
Zacchei, & Zonca]2020A A...641A...6P
Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, , 641,
A6, 10.1051/0004-6361/201833910
[Predehl et al.(2021)Predehl, Andritschke, Arefiev,
Babyshkin, Batanov, Becker, Böhringer, Bogomolov, Boller,
Borm, Bornemann, Bräuninger, Brüggen, Brunner, Brusa,
Bulbul, Buntov, Burwitz, Burkert, Clerc, Churazov, Coutinho,
Dauser, Dennerl, Doroshenko, Eder, Emberger, Eraerds,
Finoguenov, Freyberg, Friedrich, Friedrich, Fürmetz,
Georgakakis, Gilfanov, Granato, Grossberger, Gueguen, Gureev,
Haberl, Hälker, Hartner, Hasinger, Huber, Ji, Kienlin,
Kink, Korotkov, Kreykenbohm, Lamer, Lomakin, Lapshov, Liu,
Maitra, Meidinger, Menz, Merloni, Mernik, Mican, Mohr,
Müller, Nandra, Nazarov, Pacaud, Pavlinsky, Perinati,
Pfeffermann, Pietschner, Ramos-Ceja, Rau, Reiffers, Reiprich,
Robrade, Salvato, Sanders, Santangelo, Sasaki, Scheuerle,
Schmid, Schmitt, Schwope, Shirshakov, Steinmetz, Stewart,
Strüder, Sunyaev, Tenzer, Tiedemann, Trümper, Voron,
Weber, Wilms, & Yaroshenko]2021A A...647A...1P
Predehl, P., Andritschke, R., Arefiev, V., et al. 2021, , 647, A1,
10.1051/0004-6361/202039313
[Pu et al.(2020)Pu, Luo, Brandt, Timlin, Liu, Ni, &
Wu]2020ApJ...900..141P
Pu, X., Luo, B., Brandt, W. N., et al. 2020, , 900, 141,
10.3847/1538-4357/abacc5
[Reeves et al.(1997)Reeves, Turner, Ohashi, &
Kii]1997MNRAS.292..468R
Reeves, J. N., Turner, M. J. L., Ohashi, T., & Kii, T. 1997, ,
292, 468, 10.1093/mnras/292.3.468
[Rogerson et al.(2018)Rogerson, Hall, Ahmed, Rodríguez
Hidalgo, Brandt, & Filiz Ak]2018ApJ...862...22R
Rogerson, J. A., Hall, P. B., Ahmed, N. S., et al. 2018, , 862, 22,
10.3847/1538-4357/aabfe5
[Sameer et al.(2019)Sameer, Brandt, Anderson, Hall,
Vivek, Filiz Ak, Grier, Ahmed, Luo, Myers, Rodríguez
Hidalgo, Ruan, & Schneider]2019MNRAS.482.1121S
Sameer, Brandt, W. N., Anderson, S., et al. 2019, , 482, 1121,
10.1093/mnras/sty2718
[Schlafly & Finkbeiner(2011)]2011ApJ...737..103S
Schlafly, E. F., & Finkbeiner, D. P. 2011, , 737, 103,
10.1088/0004-637X/737/2/103
[Schmidt et al.(2012)Schmidt, Rix, Shields, Knecht,
Hogg, Maoz, & Bovy]2012ApJ...744..147S
Schmidt, K. B., Rix, H.-W., Shields, J. C., et al. 2012, , 744,
147, 10.1088/0004-637X/744/2/147
[Scott et al.(2011)Scott, Stewart, Mateos, Alexander,
Hutton, & Ward]2011MNRAS.417..992S
Scott, A. E., Stewart, G. C., Mateos, S., et al. 2011, , 417,
992, 10.1111/j.1365-2966.2011.19325.x
[Shen & Liu(2012)]2012ApJ...753..125S
Shen, Y., & Liu, X. 2012, , 753, 125,
10.1088/0004-637X/753/2/125
[Sadowski et al.(2014)Sadowski, Narayan,
McKinney, & Tchekhovskoy]2014MNRAS.439..503S
Sadowski, A., Narayan, R., McKinney, J. C., & Tchekhovskoy, A.
2014, , 439, 503, 10.1093/mnras/stt2479
[Steffen et al.(2006)Steffen, Strateva, Brandt, Alexander,
Koekemoer, Lehmer, Schneider, & Vignali]2006AJ....131.2826S
Steffen, A. T., Strateva, I., Brandt, W. N., et al. 2006, , 131,
2826, 10.1086/503627
[Timlin et al.(2020)Timlin, Brandt, Zhu, Liu, Luo, &
Ni]2020MNRAS.498.4033T
Timlin, John D., I., Brandt, W. N., Zhu, S., et al. 2020, , 498,
4033, 10.1093/mnras/staa2661
[Vanden Berk et al.(2001)Vanden Berk, Richards, Bauer,
Strauss, Schneider, Heckman, York, Hall, Fan, Knapp,
Anderson, Annis, Bahcall, Bernardi, Briggs, Brinkmann, Brunner,
Burles, Carey, Castander, Connolly, Crocker, Csabai, Doi,
Finkbeiner, Friedman, Frieman, Fukugita, Gunn, Hennessy,
Ivezić, Kent, Kunszt, Lamb, Leger, Long, Loveday, Lupton,
Meiksin, Merelli, Munn, Newberg, Newcomb, Nichol, Owen, Pier,
Pope, Rockosi, Schlegel, Siegmund, Smee, Snir, Stoughton,
Stubbs, SubbaRao, Szalay, Szokoly, Tremonti, Uomoto, Waddell,
Yanny, & Zheng]2001AJ....122..549V
Vanden Berk, D. E., Richards, G. T., Bauer, A., et al. 2001, , 122,
549, 10.1086/321167
[Vanden Berk et al.(2004)Vanden Berk, Wilhite, Kron,
Anderson, Brunner, Hall, Ivezić, Richards, Schneider, York,
Brinkmann, Lamb, Nichol, & Schlegel]2004ApJ...601..692V
Vanden Berk, D. E., Wilhite, B. C., Kron, R. G., et al. 2004, ,
601, 692, 10.1086/380563
[Wang et al.(2022)Wang, Luo, Brandt, Alexander, Bauer,
Gallagher, Huang, Liu, & Stern]2022ApJ...936...95W
Wang, C., Luo, B., Brandt, W. N., et al. 2022, , 936, 95,
10.3847/1538-4357/ac886e
[Wilkes et al.(2005)Wilkes, Pounds, Schmidt, Smith,
Cutri, Ghosh, Nelson, & Hines]2005ApJ...634..183W
Wilkes, B. J., Pounds, K. A., Schmidt, G. D., et al. 2005, , 634,
183, 10.1086/444555
[WISE Team(2020a)]WISEAllSkySourceCatalog
WISE Team. 2020a, WISE All-Sky Source Catalog, IPAC,
10.26131/IRSA142
[WISE Team(2020b)]neowise
—. 2020b, NEOWISE 2-Band Post-Cryo Single Exposure (L1b) Source
Table, IPAC, 10.26131/IRSA124
[Wright et al.(2010)Wright, Eisenhardt, Mainzer, Ressler,
Cutri, Jarrett, Kirkpatrick, Padgett, McMillan, Skrutskie,
Stanford, Cohen, Walker, Mather, Leisawitz, Gautier, McLean,
Benford, Lonsdale, Blain, Mendez, Irace, Duval, Liu, Royer,
Heinrichsen, Howard, Shannon, Kendall, Walsh, Larsen, Cardon,
Schick, Schwalm, Abid, Fabinsky, Naes, &
Tsai]2010AJ....140.1868W
Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, ,
140, 1868, 10.1088/0004-6256/140/6/1868
[Wu et al.(2012)Wu, Brandt, Anderson, Diamond-Stanic,
Hall, Plotkin, Schneider, & Shemmer]2012ApJ...747...10W
Wu, J., Brandt, W. N., Anderson, S. F., et al. 2012, , 747, 10,
10.1088/0004-637X/747/1/10
[Wu et al.(2011)Wu, Brandt, Hall, Gibson, Richards,
Schneider, Shemmer, Just, & Schmidt]2011ApJ...736...28W
Wu, J., Brandt, W. N., Hall, P. B., et al. 2011, , 736, 28,
10.1088/0004-637X/736/1/28
[Wu & Shen(2022)]2022ApJS..263...42W
Wu, Q., & Shen, Y. 2022, , 263, 42, 10.3847/1538-4365/ac9ead
[Yi & Timlin(2021)]2021ApJS..255...12Y
Yi, W., & Timlin, J. 2021, , 255, 12,
10.3847/1538-4365/ac00b8
[York et al.(2000)York, Adelman, Anderson, Anderson,
Annis, Bahcall, Bakken, Barkhouser, Bastian, Berman, Boroski,
Bracker, Briegel, Briggs, Brinkmann, Brunner, Burles, Carey,
Carr, Castander, Chen, Colestock, Connolly, Crocker, Csabai,
Czarapata, Davis, Doi, Dombeck, Eisenstein, Ellman, Elms,
Evans, Fan, Federwitz, Fiscelli, Friedman, Frieman, Fukugita,
Gillespie, Gunn, Gurbani, de Haas, Haldeman, Harris, Hayes,
Heckman, Hennessy, Hindsley, Holm, Holmgren, Huang, Hull,
Husby, Ichikawa, Ichikawa, Ivezić, Kent, Kim, Kinney,
Klaene, Kleinman, Kleinman, Knapp, Korienek, Kron, Kunszt,
Lamb, Lee, Leger, Limmongkol, Lindenmeyer, Long, Loomis,
Loveday, Lucinio, Lupton, MacKinnon, Mannery, Mantsch, Margon,
McGehee, McKay, Meiksin, Merelli, Monet, Munn, Narayanan,
Nash, Neilsen, Neswold, Newberg, Nichol, Nicinski, Nonino,
Okada, Okamura, Ostriker, Owen, Pauls, Peoples, Peterson,
Petravick, Pier, Pope, Pordes, Prosapio, Rechenmacher, Quinn,
Richards, Richmond, Rivetta, Rockosi, Ruthmansdorfer, Sandford,
Schlegel, Schneider, Sekiguchi, Sergey, Shimasaku, Siegmund,
Smee, Smith, Snedden, Stone, Stoughton, Strauss, Stubbs,
SubbaRao, Szalay, Szapudi, Szokoly, Thakar, Tremonti, Tucker,
Uomoto, Vanden Berk, Vogeley, Waddell, Wang, Watanabe,
Weinberg, Yanny, Yasuda, & SDSS Collaboration]2000AJ....120.1579Y
York, D. G., Adelman, J., Anderson, John E., J., et al. 2000, , 120,
1579, 10.1086/301513
[Zhuang et al.(2018)Zhuang, Ho, &
Shangguan]2018ApJ...862..118Z
Zhuang, M.-Y., Ho, L. C., & Shangguan, J. 2018, , 862, 118,
10.3847/1538-4357/aacc2d
|
http://arxiv.org/abs/2307.04409v1 | 20230710081749 | Violation of a Leggett-Garg inequality using ideal negative measurements in neutron interferometry | ["Elisabeth Kreuzgruber", "Richard Wagner", "Niels Geerits", "Hartmut Lemmel", "Stephan Sponar"] | quant-ph | ["quant-ph"] |
[email protected]
[email protected]
^1Atominstitut, TU Wien, Stadionallee 2, 1020 Vienna, Austria
^2Institut Laue-Langevin, 38000, Grenoble, France
We report on an experiment that demonstrates the violation of a Leggett–Garg inequality (LGI) with neutrons. LGIs have been proposed in order to assess how far the predictions of quantum mechanics defy `macroscopic realism'.
With LGIs, correlations of measurements performed on a single system at different times are described.
The measured value of K =1.120±0.007, obtained in a neutron interferometric experiment, is clearly above the limit K=1 predicted by macro-realistic theories.
Violation of a Leggett–Garg inequality using ideal negative measurements
in neutron interferometry
Stephan Sponar^1
August 12, 2023
===================================================================================================
Introduction.—The question whether measurable quantities of a quantum object have definite values prior to the actual measurement has been a fundamental issue ever since quantum theory was introduced more than a century ago. Examples include Bell's inequality <cit.>, which sets bounds on correlations between measurement results of space-like separated components of a composite (entangled) system. A violation of Bell's inequality thus demonstrates that certain predictions of quantum mechanics cannot be reproduced by realistic theories, more precisely, by local hidden variable theories (LHVT). Another prime example is found in the Kochen-Specker theorem <cit.>, which stresses the incompatibility of quantum mechanics with a larger class of hidden-variable theories, known as noncontextual hidden-variable theories (NCHVTs). Here it is assumed that the result of a measurement of an observable is predetermined and independent of a suitable (previous or simultaneous) measurement of any other compatible (co-measurable or commuting) observable, i.e., the measurement context. While both Bell's inequality and tests of the Kochen-Specker theorem require composite or multiple spatially separated systems, Leggett-Garg inequalities (LGIs) <cit.> study temporal correlations of a single system; they are therefore often referred to as Bell inequalities in time.
Violation of a Bell inequality is a direct witness of entanglement, a very specific feature of quantum mechanics. By contrast, in the case of LGIs the violation occurs due to the coherent superposition of system states, which is arguably the most fundamental property of quantum mechanics. In other words, LGIs quantify coherence in quantum systems and can consequently be seen as a measure or test of quantumness.
Leggett-Garg inequalities were proposed in 1985 <cit.> in order to assess whether sets of pairs of sequential measurements on a single quantum system can be consistent with an underlying macro-realistic theory <cit.>. Within the framework of a macro-realistic theory, a single macroscopic system measured at successive times fulfills the following two assumptions of macrorealism: (A1) at any given time the system is always in only one of its macroscopically distinguishable states, and (A2) the state of the system can be determined in a non-invasive way, that is, without disturbing the subsequent dynamics of the system. Quantum mechanics predicts a violation of the inequalities since it contradicts both assumptions (A1) and (A2). The (quantum) system under observation has to be measured at different times, and correlations derived from sequences of these measurements allow us to formulate the LGI. The results of these correlation measurements either confirm the absence of a realistic description of the system or the impossibility of measuring the system without disturbing it <cit.>. This also rules out a well-defined pre-existing value of a measurement.
Recent violations of LGIs have been observed in various systems, including photonic qubits <cit.>, nuclear spins in a diamond defect center <cit.>, superconducting qubits in the form of transmons <cit.> and flux qubits <cit.>, nuclear magnetic resonance <cit.>, and spin-bearing phosphorus impurities in silicon <cit.>. Proposed schemes for increasing violations of Leggett-Garg inequalities range from the action of an environment on a single qubit in terms of generic quantum channels <cit.> to open many-body systems under nonequilibrium conditions <cit.>. In a recent paper <cit.> the authors propose to test a violation of the Leggett-Garg inequality due to the gravitational interaction in a hybrid system consisting of a harmonic oscillator and a spatially localized superposed particle <cit.>, aiming to probe the quantumness of gravity <cit.>.
The violation of an LGI in an interferometric setup has been proposed theoretically for electrons in <cit.>. The requirement of non-invasive measurements from (A2) is realized in most experiments by utilizing the concept of weak measurements, or by introducing an ancilla system, as implemented in <cit.>. Note that, in practice, even a weak measurement can never be completely non-invasive (due to a non-vanishing measurement strength), and the preparation of the ancilla system will also always be imperfect. However, the experimental procedure of <cit.> realizes ideal negative measurements in an interferometer experiment in order to fulfill the requirement of non-invasive measurements from (A2) without the need for an ancilla.
In this Letter, we present a neutron interferometric experiment, demonstrating a violation of the LGI. In our measurement scheme the single system is represented by the neutron's path in an interferometer. A respective observable is defined and measured non-invasively according to the LGI protocol.
Leggett–Garg inequality.—For dichotomous variables Q_i, accounting for two macroscopically distinguishable states, having outcomes q_i=±1, the correlation function for measurements at times t_i, t_j is given by
C_ij=⟨ Q_i Q_j⟩=∑_q_i,q_j=± q_i q_j P(q_i(t_i),q_j(t_j)),
where P(q_i(t_i),q_j(t_j)) denotes the joint probability of obtaining the measurement results q_i at time t_i and q_j at time t_j.
Considering Eq.(<ref>) for three experimental sets with i,j∈{1,2,3} yields the LGI
K ≡ C_21 + C_32-C_31,
where K denotes the Leggett-Garg correlator, with limits -3≤ K ≤ 1. Since the three correlators are derived from probabilities with |C_ij|≤ 1, the lower limit cannot be violated. However, quantum mechanics allows for a violation of the upper bound. In a two-level system, the maximum obtainable violation is K=1.5 <cit.>.
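For completeness, the following textbook-style derivation (a generic two-level example, not specific to the present setup) recovers this bound: for a qubit precessing at frequency ω, quantum mechanics gives C_ij = cos ω(t_j - t_i), so for equally spaced measurement times t_(i+1) - t_i = τ,
K(ωτ) = 2cosωτ - cos2ωτ, dK/d(ωτ) = -2sinωτ + 2sin2ωτ = 0 ⇒ cosωτ = 1/2 ⇒ ωτ = π/3,
K(π/3) = 2·(1/2) - (-1/2) = 3/2,
which reproduces the quoted maximal two-level violation K = 1.5 > 1.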
The basic idea behind the experimental procedure, as proposed by Emary et al. in <cit.>, is to map the temporal structure (or measurement time t_i) of the LGI onto real-space coordinates, more precisely onto three distinct regions of the interferometer, indicated by the index α∈{1,2,3}, cf. Fig. <ref>. Within each region the two paths of the interferometer constitute a qubit. The measurement of the qubit's state, denoted as q_i=±1, therefore results in a “which-way” measurement <cit.> in the particular region of interest. While a click of a detector in, e.g., the + arm of region 2 (q_2=+1) is a strongly invasive measurement, the absence of a detector response implies q_2=-1 and does not disturb the system at all. This accounts for the required non-invasive measurement (A2) in terms of an ideal negative measurement.
In our neutron interferometric realization of <cit.> neutrons enter the IFM via the + port of region 1. Hence, it is not necessary to measure in region 1 and the noninvasive measurability is granted. The first plate of the IFM consists of a tunable beamsplitter characterized by parameter ϑ_A, which is schematically illustrated in Fig. <ref>. The theoretical maximum of K=1.5 is obtained for ϑ_A=ϑ_B=π/3 and phase shift χ=0. However, in our setup with fixed ϑ_B=π/2 (usual 50:50 beamsplitter), the maximal possible violation is K=√(2) (for ϑ_A=π/4).
We define P_α±,β±(n_α,n_β) as the joint probability that two detectors placed at position α± and β± respectively detect (n=1) or don't detect a neutron (n=0), where α and β specify the region and ± the path. Then the correlator, as defined in Eq.(<ref>), between regions α and β is given by
C_αβ=∑_q_α,q_β=±q_α q_β P_α q_α,β q_β(1,1).
Hence the correlation function for regions 1 and 3, denoted as C_31, can simply be expressed as
C_31=P_3+,1+(1,1)-P_3-,1+(1,1),
since the neutrons always enter from 1+.
Therefore, the correlation function C_31 can also be expressed in terms of marginal probabilities as C_31=P_3+(1)-P_3-(1).
Although not particularly necessary here,
it is instructive to express C_31 in terms of ideal negative measurements as
C_31= ∑_q_1,q_3=± q_1 q_3 P_3q_3(1)(1-P_1q_1(0))
=-∑_q_1,q_3=± q_1 q_3 P_3q_3,1q_1(1,0),
since P_1q_1(0)=1-P_1q_1(1). A similar expression gives the correlator C_21=P_2+,1+(1,1)-P_2-,1+(1,1), which is measured with detectors placed directly in region 2, as shown in Fig. <ref>(a).
For C_32 all four terms of the sum from Eq.(<ref>) contribute, taking both paths of section 2 into account.
C_32=∑_q_2,q_3=±q_2 q_3 P_3q_3,2q_2(1,1)
Using again P_2q_2(0)=1-P_2q_2(1) we write the sum as
C_32=-∑_q_2,q_3=±q_2 q_3 P_3q_3,2q_2(1,0)
in order to account for the non-invasive, ideal negative measurement in section 2. The two probabilities P_3±,2-(1,0) are determined by counting the neutrons in paths 3+ and 3-, respectively, under the condition that they have not been counted in path 2-. The latter is ensured by placing a beam blocker in path 2-, cf. Fig. <ref>(b). The other two probabilities are measured similarly, as shown in Fig. <ref>(c).
The correlators according to <cit.> for the regions in our setup are calculated as follows
C_21= cosϑ_A
C_32= cosϑ_B
C_31= cosϑ_A cosϑ_B - cosχsinϑ_A sinϑ_B
K= cosϑ_A+cosϑ_B-cosϑ_A cosϑ_B
+ cosχsinϑ_A sinϑ_B,
which, in our setup with fixed ϑ_B=π/2, becomes
K=cosϑ_A + cosχsinϑ_A.
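A minimal numerical check of these expressions (illustrative only; the variable names are ours) confirms the two maxima quoted above:

import numpy as np

def K(theta_A, theta_B, chi):
    # Leggett-Garg correlator built from the expressions above
    return (np.cos(theta_A) + np.cos(theta_B)
            - np.cos(theta_A) * np.cos(theta_B)
            + np.cos(chi) * np.sin(theta_A) * np.sin(theta_B))

# Two tunable plates: maximum K = 1.5 at theta_A = theta_B = pi/3, chi = 0
print(K(np.pi / 3, np.pi / 3, 0.0))            # 1.5

# Present setup (theta_B = pi/2): K = cos(theta_A) + cos(chi) sin(theta_A)
theta_A = np.linspace(0.0, np.pi, 2001)
print(np.max(K(theta_A, np.pi / 2, 0.0)))      # ~1.414 = sqrt(2), at theta_A = pi/4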
Figure <ref> shows the regions in the parameter space (ϑ_A,χ) of our experimental LGI test (with fixed value ϑ_B=π/2) where it is in theory possible to violate the LGI, with values up to K=√2. ϑ_A represents the mixing angle of the first interferometer plate, and χ the phase-shifter angle. The resulting K values are shown in green in areas where no violation is possible, and in orange where a violation of the LGI is possible. The dashed red line indicates our measurement result in an ideal interferometer.
Neutron interferometer setup.—Neutron interferometry <cit.> provides a powerful tool for investigation of fundamental quantum mechanical phenomena. Entanglement between different degrees of freedom (DOF), e.g., the neutron’s spin, path, and energy DOF has been confirmed, and the contextual nature of quantum mechanics has been demonstrated successfully <cit.>. In more recent experiments the concept of weak measurements and weak values has been utilized for direct state reconstruction <cit.>, demonstration of the canonical commutator relation <cit.> and studies of which way information <cit.>.
The experiment was carried out at the neutron interferometer instrument S18 at the high-flux reactor of the Institut Laue-Langevin (ILL) in Grenoble, France (the experimental data can be found on the ILL data server under <cit.>). A monochromatic unpolarized neutron beam with mean wavelength λ=1.91 Å (δλ/λ∼0.02) and 3 × 3 mm^2 beam cross section was used to illuminate the interferometer. In order to observe a violation of an LGI in an interferometric experiment, it is necessary to implement a non-50:50 beam splitter at the first plate of the interferometer. This is achieved by placing a partial absorber behind the first interferometer plate in one of the neutron paths.
The absorber is an Indium slab, about 3 thick, placed in path I, resulting in an intensity ratio between paths I and II of about 10:90. The interferometer itself is a symmetric three-plate silicon perfect crystal (triple Laue type), with a plate thickness of 3 and a length of 140. A schematic illustration of the interferometric setup is given in Fig. <ref>. To obtain interference fringes, a 5 Aluminium phase shifter was used. Additional beam blockers for the detection of single path intensities were made of Cadmium. Both the `O' and `H' detectors outside the interferometer and the additional detector for C_21 measurements were ^3He proportional counting tubes.
Determination of the correlators C_31 and C_21 is straightforward. In both cases it is not necessary to measure non-invasively, since no subsequent measurement on the same state is performed. For C_31, the measurement is that of a standard interferogram (Fig. <ref>), with a measurement time of 180 seconds per phase-shifter position. The correlator C_31 is calculated via
C_31=N_3+1+(χ)-N_3-1+(χ)/N_3+1+(χ)+N_3-1+(χ),
where N_3+1+(χ) denotes the counts in the H detector and N_3-1+(χ) the counts in the O detector. Due to the cosine behaviour of the recorded interferogram, this correlator is dependent on the position χ of the phase shifter. For the largest possible violation, the maximum counts in O and minimum in H are used, which corresponds to the position χ=2 n π (where n∈ℕ_0) in Fig. <ref>.
Similarly, the correlator C_21 is calculated as
C_21=N_2+1+-N_2-1+/N_2+1++N_2-1+
and is performed as a transversal scan with a pencil-size ^3He detector mounted on a translation stage in region 2 of the interferometer, with a measurement time of 300 seconds per detector position. The detector moves first through path I and then through path II; the resulting neutron counts are shown in Fig. <ref>, where the separation between the two paths is clearly visible. The N_2±1+ are the neutron counts at the peaks of the respective Gaussian fits to the intensity profiles.
For correlator C_32, however, it is crucial to measure non-invasively. This is done by measuring the absence of a neutron in a given path due to the Cd blocker, meaning that the neutron has to take the path without the Cd blocker. This is represented by the minus sign in Eq. (<ref>). Four measurements are performed, with each of the paths blocked in turn and the resulting intensity in detectors O and H recorded for a measurement time of 600 seconds. These results are shown in Fig. <ref>. C_32 becomes
C_32=N_3+2-+N_3-2+-N_3+2+-N_3-2-/N_3+2-+N_3-2++N_3+2++N_3-2-,
with N_3+2- and N_3+2+ the neutron counts in the H detector with blocked path II and path I, respectively, and likewise for the O detector in N_3-2±.
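To illustrate how the three correlators and K follow from raw detector counts, a short sketch with purely hypothetical count numbers (not the measured data) and simple Poisson error propagation is given below; correlations between the correlators are neglected, so this is only a rough illustration of the procedure.

import numpy as np

def asymmetry(n_plus, n_minus):
    # A = (N+ - N-) / (N+ + N-) with Poisson (sqrt(N)) error propagation
    tot = n_plus + n_minus
    return (n_plus - n_minus) / tot, 2.0 * np.sqrt(n_plus * n_minus / tot**3)

# Hypothetical counts, for illustration only:
C31, dC31 = asymmetry(900.0, 4200.0)    # (N_H - N_O)/(N_H + N_O) at chi = 0
C21, dC21 = asymmetry(1500.0, 350.0)    # region-2 scan peaks N_2+1+, N_2-1+
N3p2m, N3m2p, N3p2p, N3m2m = 2100.0, 2050.0, 1900.0, 1950.0   # blocked-path counts
A, B = N3p2m + N3m2p, N3p2p + N3m2m
C32, dC32 = (A - B) / (A + B), 2.0 * np.sqrt(A * B / (A + B)**3)

K = C21 + C32 - C31
dK = np.sqrt(dC21**2 + dC32**2 + dC31**2)
print(f"K = {K:.3f} +/- {dK:.3f}")      # K > 1 signals an LGI violation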
Results.—In order to demonstrate the experimental violation of the Leggett–Garg inequality, we calculate the correlator K, Eq. (<ref>). The resulting curve is shown in Fig. <ref>, with the maximum at a phase shift of χ=0. With the Indium absorber in path I of the interferometer, a violation of the limit K=1 is clearly visible (Fig. <ref>(a)). Our results show a significant violation of the LGI by 18 standard deviations σ (denoted as n_σ=18) at the maximum, K =1.120±0.007. The violation is visible over a wide range of phase shifter values χ. Numeric values of the individual correlators C_ij and the final value of K in case of the maximal violation of the LGI are presented in Tab. <ref>.
For comparison, Fig. <ref>(b) shows the same measurement procedure for a symmetric beam splitter (ϑ_A=π/2), i.e. without absorber,
where no violation is possible, resulting in K=0.540±0.023.
Concluding remarks and discussion.—Our measurement results demonstrate a violation of an LGI by n_σ=18.0, while the absorberless measurements show no violation. Hence we conclude that neutrons in an interferometer must be understood quantum mechanically.
An even higher violation can be achieved when the signs in region 3 are switched, and detector O becomes 3+, detector H 3-. The correlators C_31 and C_32 have to be recalculated accordingly, resulting in K=1.162±0.006 with n_σ=28.
This `additional' violation is due to the asymmetric nature of the perfect crystal interferometer. Since successive reflections on the crystal lamellas enhance the reflectivity <cit.> the H detector always receives some phase-independent intensity offset.
The detection loophole is closed owing to the high efficiency of our neutron detectors, which is close to unity. The fair-sampling assumption is needed, especially for the correlator C_21, as is the case for a wide range of experiments of this kind, since simultaneous detection of all neutrons is impossible.
Finally, we want to emphasize that the interferometric scheme applied in the present work is not limited to neutrons, but is in fact completely general and can be used for any quantum particle with nonzero or even zero mass.
This work was supported by the Austrian science fund (FWF) Projects No. P 30677 and No. P 34239.
[Bell(1964)] J. S. Bell, On the Einstein-Podolsky-Rosen paradox, Physics (Long Island City, N.Y.) 1, 195 (1964).
[Bell(1966)] J. S. Bell, On the problem of hidden variables in quantum mechanics, Rev. Mod. Phys. 38, 447 (1966), doi:10.1103/RevModPhys.38.447.
[Kochen and Specker(1967)] S. Kochen and E. P. Specker, The problem of hidden variables in quantum mechanics, J. Math. Mech. 17, 59 (1967).
[Leggett and Garg(1985)] A. J. Leggett and A. Garg, Quantum mechanics versus macroscopic realism: Is the flux there when nobody looks?, Phys. Rev. Lett. 54, 857 (1985), doi:10.1103/PhysRevLett.54.857.
[Emary et al.(2014)] C. Emary, N. Lambert, and F. Nori, Leggett-Garg inequalities, Rep. Prog. Phys. 77, 016001 (2014), doi:10.1088/0034-4885/77/1/016001.
[Ruskov et al.(2006)] R. Ruskov, A. N. Korotkov, and A. Mizel, Signatures of quantum behavior in single-qubit weak measurements, Phys. Rev. Lett. 96, 200404 (2006), doi:10.1103/PhysRevLett.96.200404.
[Jordan et al.(2006)] A. N. Jordan, A. N. Korotkov, and M. Büttiker, Leggett-Garg inequality with a kicked quantum pump, Phys. Rev. Lett. 97, 026805 (2006), doi:10.1103/PhysRevLett.97.026805.
[Dressel et al.(2011)] J. Dressel, C. J. Broadbent, J. C. Howell, and A. N. Jordan, Experimental violation of two-party Leggett-Garg inequalities with semiweak measurements, Phys. Rev. Lett. 106, 040402 (2011), doi:10.1103/PhysRevLett.106.040402.
[Goggin et al.(2011)] M. E. Goggin, M. P. Almeida, M. Barbieri, B. P. Lanyon, J. L. O'Brien, A. G. White, and G. J. Pryde, Violation of the Leggett-Garg inequality with weak measurements of photons, Proc. Natl. Acad. Sci. USA 108, 1256 (2011), doi:10.1073/pnas.1005774108.
[Waldherr et al.(2011)] G. Waldherr, P. Neumann, S. F. Huelga, F. Jelezko, and J. Wrachtrup, Violation of a temporal Bell inequality for single spins in a diamond defect center, Phys. Rev. Lett. 107, 090401 (2011), doi:10.1103/PhysRevLett.107.090401.
[Palacios-Laloy et al.(2010)] A. Palacios-Laloy, F. Mallet, F. Nguyen, P. Bertet, D. Vion, D. Esteve, and A. N. Korotkov, Experimental violation of a Bell's inequality in time with weak measurement, Nat. Phys. 6, 442 (2010), doi:10.1038/nphys1641.
[Knee et al.(2016)] G. C. Knee, K. Kakuyanagi, M.-C. Yeh, Y. Matsuzaki, H. Toida, H. Yamaguchi, S. Saito, A. J. Leggett, and W. J. Munro, A strict experimental test of macroscopic realism in a superconducting flux qubit, Nat. Commun. 7, 13253 (2016), doi:10.1038/ncomms13253.
[Athalye et al.(2011)] V. Athalye, S. S. Roy, and T. S. Mahesh, Investigation of the Leggett-Garg inequality for precessing nuclear spins, Phys. Rev. Lett. 107, 130402 (2011), doi:10.1103/PhysRevLett.107.130402.
[Souza et al.(2011)] A. M. Souza, I. S. Oliveira, and R. S. Sarthour, A scattering quantum circuit for measuring Bell's time inequality: a nuclear magnetic resonance demonstration using maximally mixed states, New J. Phys. 13, 053023 (2011), doi:10.1088/1367-2630/13/5/053023.
[Knee et al.(2012)] G. C. Knee, S. Simmons, E. M. Gauger, J. J. Morton, H. Riemann, N. V. Abrosimov, P. Becker, H.-J. Pohl, K. M. Itoh, M. L. Thewalt, G. A. D. Briggs, and S. C. Benjamin, Violation of a Leggett-Garg inequality with ideal non-invasive measurements, Nat. Commun. 3, 606 (2012), doi:10.1038/ncomms1614.
[Emary(2013)] C. Emary, Decoherence and maximal violations of the Leggett-Garg inequality, Phys. Rev. A 87, 032106 (2013), doi:10.1103/PhysRevA.87.032106.
[Mendoza-Arenas et al.(2019)] J. J. Mendoza-Arenas, F. J. Gómez-Ruiz, F. J. Rodríguez, and L. Quiroga, Enhancing violations of Leggett-Garg inequalities in nonequilibrium correlated many-body systems by interactions and decoherence, Sci. Rep. 9, 17772 (2019), doi:10.1038/s41598-019-54121-1.
[Matsumura et al.(2022)] A. Matsumura, Y. Nambu, and K. Yamamoto, Leggett-Garg inequalities for testing quantumness of gravity, Phys. Rev. A 106, 012214 (2022), doi:10.1103/PhysRevA.106.012214.
[Bose et al.(2018)] S. Bose, D. Home, and S. Mal, Nonclassicality of the harmonic-oscillator coherent state persisting up to the macroscopic domain, Phys. Rev. Lett. 120, 210402 (2018), doi:10.1103/PhysRevLett.120.210402.
[Bose et al.(2017)] S. Bose, A. Mazumdar, G. W. Morley, H. Ulbricht, M. Toroš, M. Paternostro, A. A. Geraci, P. F. Barker, M. S. Kim, and G. Milburn, Spin entanglement witness for quantum gravity, Phys. Rev. Lett. 119, 240401 (2017), doi:10.1103/PhysRevLett.119.240401.
[Marletto and Vedral(2017)] C. Marletto and V. Vedral, Gravitationally induced entanglement between two massive particles is sufficient evidence of quantum effects in gravity, Phys. Rev. Lett. 119, 240402 (2017), doi:10.1103/PhysRevLett.119.240402.
[Emary et al.(2012)] C. Emary, N. Lambert, and F. Nori, Leggett-Garg inequality in electron interferometers, Phys. Rev. B 86, 235447 (2012), doi:10.1103/PhysRevB.86.235447.
[Englert(1996)] B.-G. Englert, Fringe visibility and which-way information: An inequality, Phys. Rev. Lett. 77, 2154 (1996), doi:10.1103/PhysRevLett.77.2154.
[Rauch and Werner(2000)] H. Rauch and S. A. Werner, Neutron Interferometry (Clarendon Press, Oxford, 2000).
[Klepp et al.(2014)] J. Klepp, S. Sponar, and Y. Hasegawa, Fundamental phenomena of quantum mechanics explored with neutron interferometers, Prog. Theor. Exp. Phys. 2014 (2014), doi:10.1093/ptep/ptu085.
[Sponar et al.(2021)] S. Sponar, R. I. P. Sedmik, M. Pitschmann, H. Abele, and Y. Hasegawa, Tests of fundamental quantum mechanics and dark interactions with low-energy neutrons, Nat. Rev. Phys. 3, 309 (2021), doi:10.1038/s42254-021-00298-2.
[Denkmayr et al.(2017)] T. Denkmayr, H. Geppert, H. Lemmel, M. Waegell, J. Dressel, Y. Hasegawa, and S. Sponar, Experimental demonstration of direct path state characterization by strongly measuring weak values in a matter-wave interferometer, Phys. Rev. Lett. 118, 010402 (2017), doi:10.1103/PhysRevLett.118.010402.
[Wagner et al.(2021)] R. Wagner, W. Kersten, A. Danner, H. Lemmel, A. K. Pan, and S. Sponar, Direct experimental test of commutation relation via imaginary weak value, Phys. Rev. Research 3, 023243 (2021), doi:10.1103/PhysRevResearch.3.023243.
[Geppert-Kleinrath et al.(2018)] H. Geppert-Kleinrath, T. Denkmayr, S. Sponar, H. Lemmel, T. Jenke, and Y. Hasegawa, Multifold paths of neutrons in the three-beam interferometer detected by a tiny energy kick, Phys. Rev. A 97, 052111 (2018), doi:10.1103/PhysRevA.97.052111.
[Lemmel et al.(2022)] H. Lemmel, N. Geerits, A. Danner, H. F. Hofmann, and S. Sponar, Quantifying the presence of a neutron in the paths of an interferometer, Phys. Rev. Research 4, 023075 (2022), doi:10.1103/PhysRevResearch.4.023075.
[Sponar et al.(2019)] S. Sponar, E. Kreuzgruber, and H. Lemmel, Leggett-Garg Inequality, ILL data (2019), https://doi.ill.fr/10.5291/ILL-DATA.CRG-2643.
[Petrascheck and Rauch(1984)] D. Petrascheck and H. Rauch, Multiple Laue rocking curves, Acta Crystallogr. A 40, 445 (1984), doi:10.1107/S0108767384000878.
|
http://arxiv.org/abs/2307.03866v1 | 20230708000448 | Ultrathin films of black phosphorus as suitable platforms for unambiguous observation of the orbital Hall effect | ["Tarik P. Cysne", "Marcio Costa", "Marco Buongiorno Nardelli", "R. B. Muniz", "Tatiana G. Rappoport"] | cond-mat.mes-hall | ["cond-mat.mes-hall"] |
[email protected]
Instituto de Física, Universidade Federal Fluminense, 24210-346 Niterói RJ, Brazil
Instituto de Física, Universidade Federal Fluminense, 24210-346 Niterói RJ, Brazil
Department of Physics and Department of Chemistry, University of North Texas, Denton TX, USA
Instituto de Física, Universidade Federal Fluminense, 24210-346 Niterói RJ, Brazil
[email protected]
Centro de Física das Universidade do Minho e do Porto (CF-UM-UP) e Departamento de Física, Universidade do Minho, P-4710-057 Braga, Portugal
Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21941-972 Rio de Janeiro RJ, Brazil
Phosphorene, a monolayer of black phosphorus, is a two-dimensional material that lacks a multivalley structure in the Brillouin zone and has negligible spin-orbit coupling. This makes it a promising candidate for investigating the orbital Hall effect independently of the valley or spin Hall effects. To model phosphorene, we utilized a DFT-derived tight-binding Hamiltonian, which is constructed with the pseudo atomic orbital projection method. For that purpose, we use the paoflow code with a newly implemented internal basis that provides a fairly good description of the phosphorene conduction bands. By employing linear response theory, we show that phosphorene exhibits a sizable orbital Hall effect with strong anisotropy in the orbital Hall conductivity for the out-of-plane orbital angular momentum component. The magnitude and sign of the conductivity depend upon the in-plane direction of the applied electric field. These distinctive features enable the observation of the orbital Hall effect in this material unambiguously. The effects of strain and of a perpendicularly applied electric field on the phosphorene orbital-Hall response are also explored. We show that a supplementary electric field applied perpendicular to the phosphorene layer in its conductive regime gives rise to an induced in-plane orbital magnetization.
Ultrathin films of black phosphorus as suitable platforms for unambiguous observation of the orbital Hall effect
Tatiana G. Rappoport
================================================================================================================
§ INTRODUCTION
The phenomenon known as the orbital Hall effect (OHE) is characterized by the emergence of an orbital angular momentum (OAM) current that flows transversely to the direction of an applied electric field. Distinctly from the spin Hall effect (SHE), the OHE does not require the presence of spin-orbit interaction to occur. Despite being predicted nearly two decades ago <cit.>, the prospect of using the OHE to generate OAM current in certain materials has recently sparked great interest in the solid-state physics community <cit.>. OAM currents can be produced in a wide range of materials, and their intensities can exceed those of spin current. Furthermore, they can be injected into adjacent elements to exert torque on magnetic units, expanding their possible applications in orbitronics <cit.>.
As a matter of fact, light three-dimensional metals with weak spin-orbit coupling are being explored as a means of generating orbital currents <cit.>. Recently, orbital torques have been realized in light-metal/ferromagnet heterostructures, providing indirect but robust experimental evidence of the OHE <cit.>.
The OHE has also been investigated in two-dimensional (2D) materials that, in some cases, may host an orbital Hall (OH) insulating phase, characterized by a finite OH conductivity plateau located within the insulating band gap <cit.>. Recent studies have shed light on the fascinating properties of the OH insulating phase in these materials, such as its connection with higher order topological phases <cit.> and the encoding of non-trivial topology associated with OAM in an orbital Chern number <cit.>.
The difficulty in discerning the OHE from other angular-momentum transport phenomena has hindered its unequivocal direct observation. For example, in some cases the spin accumulation produced by the spin Hall effect may be hard to distinguish from its orbital angular momentum counterpart. The valley Hall effect (VHE), induced by a longitudinally applied electric field in non-centrosymmetric lattices with a multi-valley structure in the Brillouin zone, involves the transverse flow of valley currents that may also carry magnetic moment <cit.>, which can be hard to dissociate from the intra-atomic orbital Hall contribution.
Multi-orbital 2D materials possess natural symmetry constraints that lead to various types of orbital hybridization, which can maximize the OHE <cit.>. However, to single out the OHE unequivocally it is crucial to identify materials with weak spin-orbit coupling that display no significant spin Hall effect (SHE), nor VHE or magnetoelectric effects that may mask the OHE.
In this article, we suggest that phosphorene is a very suitable material for direct observing the OHE in 2D materials. It is a centrosymmetric semiconductor with a sizeable direct band gap at the Γ point of the 2D BZ <cit.> that does not host VHE. In the absence of reasonably strong electric fields, applied perpendicularly to the layer, it behaves as an ordinary insulator and shows no spin Hall effect (SHE) within its band gap <cit.>. The spin-orbit interaction in phosphorene is extremely weak <cit.> and consequently it also displays negligible SHE in the metallic regime in comparison with the OHE, as we shall see later. Symmetry prevents the appearance of the magneto-electric effect in phosphorene, even in the presence of strain <cit.>.
Here, we have performed density functional theory (DFT) calculations combined with linear-response theory to analyze the OH response in phosphorene. Our calculations show that phosphorene exhibits sizeable anisotropic OH conductivities whose sign depends on whether the in-plane electric field is applied along the armchair (x̂) or zigzag (ŷ) Cartesian direction depicted in Fig. <ref>. These features hold in the presence of moderate in-plane strain and perpendicularly applied electric fields along ẑ. We also show that the perpendicular electric field allows the occurrence of a current-induced orbital magnetization in the plane of phosphorene.
§ DFT DERIVED HAMILTONIAN
Phosphorene is a two-dimensional material composed of a single layer of phosphorus atoms arranged in a distorted honeycomb lattice structure (figure <ref>(a)), similar to graphene. However, unlike graphene, the lattice structure of phosphorene is puckered, with a non coplanar configuration as illustrated in Figure <ref>(b).
Our DFT calculations <cit.> were carried out with the plane-wave-based code Quantum Espresso <cit.> to compute the band structure and eigenstates of phosphorene. The generalized gradient approximation (GGA) <cit.> was used to treat the exchange and correlation potential, while fully relativistic projector augmented wave (PAW) potentials <cit.> were employed to describe the ionic cores. To ensure accurate results, we set the wavefunction cutoff energy to 44 Ry and the charge-density cutoff to ten times that value. Our self-consistent (SCF) calculations were executed with a linear density of k-points of 12.0 per Å^-1 in the 2D Brillouin zone, and a minimum of 15 Å of vacuum was included to avoid spurious interactions. A static electric field (along the z direction) was included in a full SCF calculation via the modern theory of polarization <cit.>.
Figure <ref>(c) shows the band structure of phosphorene, displaying its direct band gap at the Γ point. Phosphorene's puckered crystalline structure is highly anisotropic, as evidenced by its energy spectrum near Γ, which presents a parabolic dispersion along the Γ-Y direction and a linear behavior along Γ-X. Furthermore, the puckering of the lattice has a notable impact on the mechanical and electronic characteristics of phosphorene. It renders the material more susceptible to strain, as deformation can significantly alter its bandgap and electronic transport properties <cit.>.
To perform linear-response calculations, we utilized the pseudo atomic orbital projection method <cit.> implemented in the paoflow code <cit.>. This approach constructs an effective tight-binding Hamiltonian, with no adjustable parameters, from the DFT calculations. In general, we project the plane-wave Kohn-Sham orbitals onto the compact subspace spanned by the pseudo atomic orbitals (PAO), which are naturally included in the PAW potentials. The vast majority of cases can be accurately described by this approach, with excellent agreement between the DFT and paoflow band structures. Nevertheless, the PAO basis occasionally fails to reproduce the conduction bands, especially when the unoccupied bands have a relatively strong character of an orbital that is not included in the PAO basis, as in the case of phosphorene. Its conduction bands, and to a lesser degree its valence bands, are strongly hybridized with d orbitals <cit.>. Since the pseudopotential used in the calculation (P.rel-pbe-n-kjpaw_psl.1.0.0.UPF) is generated only with s and p orbitals, the original approach fails. To circumvent this problem, we used the recently implemented paoflow internal basis, which is constructed by solving the atomic DFT problem for an all-electron configuration up to the desired orbital. Once the atomic wavefunctions are obtained, the DFT plane-wave wavefunctions are projected as described in Ref. <cit.>.
Figure <ref>(c) shows the effective tight-binding and DFT band-structure calculations superimposed. This approach significantly reduces the computational cost of performing large k-space numerical integrations. We have previously used this method to investigate distinct characteristics of different systems, such as spin dynamics <cit.>, transport <cit.>, and topological properties <cit.>. The orbital Hall conductivity calculations were performed with a reciprocal-space sampling ten times denser than the one used in our DFT-SCF calculations.
§ OHE CALCULATIONS
Within linear response theory, the current density of angular momentum with polarization η, flowing along the μ direction (𝒥^X_η_μ), can be generically expressed in terms of the angular momentum conductivity tensor by 𝒥^X_ η_μ=∑_νσ^X_η_μ,νℰ_ν. Here, ℰ_ν symbolizes the ν-component of the applied electric field; η, μ and ν label the Cartesian components x,y,z. X_η represents the η-component of either the orbital angular momentum operator (ℓ̂_η) or the spin operator (ŝ_η), depending on the nature of the induced angular momentum that drifts. The conductivity tensor is given by
σ^X_η_μ,ν=e/(2π)^2∑_n∫_BZ d^2 k f_n kΩ_μ,ν , n^X_η ( k),
where the orbital (spin) Berry curvature
Ω_μ,ν , n^X_η ( k)= 2ħ∑_m≠ nIm[ ⟨ u_n, k|j_μ, k^X_η|u_m, k⟩⟨ u_m, k|v_ν, k|u_n, k⟩/(E_n, k-E_m, k+i0^+)^2].
The ν-component of the velocity operator may be obtained from v_ν, k=ħ^-1∂ℋ ( k)/∂ k_ν, where ℋ ( k) represents the Hamiltonian in reciprocal space and k stands for the wave vector. Here, |u_n, k⟩ is the periodic part of the Bloch eigenstate of ℋ ( k), associated with the band energy E_n, k, and f_n k symbolizes the Fermi-Dirac distribution function. The orbital (spin) angular momentum current operator that flows along the μ-direction with orbital (spin) polarization in the η-direction is defined by j_μ, k^X_η=(X_ηv_μ, k+v_μ, kX_η)/2, where X_η=ℓ̂_η (ŝ_η).
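For orientation, a minimal numerical sketch of how these expressions are evaluated on a tight-binding model is given below. The callable h_k (returning the Bloch Hamiltonian on a small basis) and the matrix l_z (the ℓ̂_z operator in that same basis) are hypothetical inputs, e.g. exported from a paoflow-style effective Hamiltonian; the e/(2π)^2 prefactor, spin, finite temperature and a proper broadening treatment are left out, so this is an illustration of the band sum rather than the production workflow used here.

```python
import numpy as np

def orbital_hall_conductivity_xy(h_k, l_z, kpts, n_occ, dk=1e-4, eta=1e-6):
    """Sum the orbital-weighted Berry curvature Omega^{L_z}_{xy,n}(k) over the
    occupied bands and sampled k points (T = 0, hbar = 1, prefactor omitted)."""
    sigma = 0.0
    for k in kpts:
        E, U = np.linalg.eigh(h_k(k))
        # velocity operators v_mu = dH/dk_mu from central finite differences
        vx = (h_k(k + np.array([dk, 0.0])) - h_k(k - np.array([dk, 0.0]))) / (2 * dk)
        vy = (h_k(k + np.array([0.0, dk])) - h_k(k - np.array([0.0, dk]))) / (2 * dk)
        # rotate all operators to the band (eigenstate) basis
        vx, vy, Lz = (U.conj().T @ A @ U for A in (vx, vy, l_z))
        # orbital current operator j_x^{L_z} = (L_z v_x + v_x L_z) / 2
        jx = 0.5 * (Lz @ vx + vx @ Lz)
        for n in range(n_occ):            # occupied bands, f_{n,k} = 1
            for m in range(len(E)):
                if m != n:
                    sigma += 2.0 * np.imag(jx[n, m] * vy[m, n]) / ((E[n] - E[m]) ** 2 + eta)
    return sigma / len(kpts)              # e/(2*pi)^2 and the BZ area element are left out
```

The yx component follows in the same way by constructing j_y^{L_z} and pairing it with v_x instead.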
§ RESULTS AND DISCUSSION
Figure <ref>(d) shows the orbital Hall conductivities σ^L_z_xy and σ^L_z_yx, calculated as functions of Fermi energy, for in-plane electric fields applied along the ŷ and x̂ directions, respectively. Both conductivities present a plateau inside the energy-band gap. Phosphorene has been proposed to be a higher-order topological insulator <cit.>, a type of topological state that was recently connected to the orbital Hall insulating phase <cit.>. We note that σ^L_z_xy is markedly different from σ^L_z_yx inside and close to the energy-band gap, where they have opposite signs. This reflects the high anisotropy of the phosphorene lattice structure. The crystalline symmetry of phosphorene also ensures that in-plane electric fields can only induce transverse currents of angular momentum polarized along ẑ. This holds for both orbital and spin angular momentum currents, because they are subjected to essentially the same crystalline symmetry constraints <cit.>. In a crystal with a given space group, the spin and orbital Berry curvatures must be invariant under all symmetry operations of the group. This means that if a given symmetry operation, such as rotation, mirror reflection, or spatial inversion, changes the sign of the spin or orbital Berry curvature, then the corresponding component of the spin or orbital Hall conductivity is forbidden by symmetry. The presence or absence of symmetries in the crystal structure can dictate which components of the Hall conductivity are allowed or forbidden (see Appendix A).
The change of sign in the phosphorene OH-conductivity may be experimentally verified by observing the induced orbital magnetic moment accumulations on the boundaries of phosphorene samples, similar to SHE experiments <cit.>. The small spin-orbit coupling and the topological triviality of phosphorene, with respect to ℤ_2, make the SHE orders of magnitude smaller than the OHE [see Fig. <ref> (e)]. In addition, the electronic spectrum of phosphorene has no multivalley structure in the 2D Brillouin zone and hence does not host VHE. Thus, phosphorene offers an ideal platform for unambiguous observation of the OHE.
It is noteworthy that the OHE increases with the number of layers <cit.> and so, thin films of black phosphorus may be employed to enhance the OH signal in such experiments. However, one must keep in mind that the band gap decreases monotonically with the increase in the number of layers, saturating at approximately 0.3 eV for sufficiently large film thicknesses <cit.>.
In general, the transport properties of 2D materials are influenced by the substrate, which may cause strain and/or alter the features of the sample's surface in contact with it. In some cases it is necessary to encapsulate the film to prevent its deterioration from oxidation and also be able to control its density of carriers with gate voltages. Therefore, it is worth investigating how strain and the presence of an auxiliary perpendicular electric field would affect the orbital transport properties of phosphorene.
§.§ Effects of Strain
Figure <ref> illustrates the effects of uniaxial strain (both compressive and tensile) along the x̂ direction on the OH conductivity components σ^L_z_xy and σ^L_z_yx. With such moderate uniform strains the point group (D_2h) of phosphorene is preserved and hence only the L_z component of the OHC remains non-null.
Strain clearly affects the OH conductivity of phosphorene. It modifies the electronic states around the band gap <cit.> and may alter their orbital character and hence the orbital transport in general. Interestingly, the height of the σ^L_z_yx plateau remains unchanged under strain along the x direction, which does not happen for σ^L_z_xy. On the other hand, the length of the OHC plateaux decreases (increases) under compressive (tensile) strain, which is expected because the energy band-gap size follows the same trend <cit.>, as illustrated in the inset of Fig. <ref>(a) for σ^L_z_yx.
§.§ Effect of Perpendicular Electric-Field
§.§.§ Orbital Hall Conductivity
We shall now examine how the OHC of phosphorene is affected by an electric field E⃗_⊥ = E_⊥ẑ applied perpendicularly to its layer. The presence of E⃗_⊥ reduces the phosphorene point group from D_2h to C_2v, both of which belong to the same Laue class mmm. Since the Laue class determines the general form of the OHC tensor <cit.>, only the L_z component of the OHC remains non-null in the presence of E⃗_⊥ (see Appendix A).
Figure <ref> shows σ^L_z_yx and σ^L_z_xy, calculated as functions of energy, for different values of E_⊥. We note that the OHC is much more affected by E_⊥ in some energy ranges outside the band gap than within it. We recall that ultrathin films of black phosphorus can switch to a topological insulating phase for sufficiently high values of E_⊥, as discussed in Ref. <cit.>. However, for phosphorene, this phase transition requires values of E_⊥≫ 0.6 V/m, which is much higher than the values considered in Fig. <ref>.
§.§.§ Orbital Magnetoelectric Effect
The noncentrosymmetric and polar C_2v point group allows the occurrence of an orbital magnetoelectric effect mediated by Fermi-surface conducting states <cit.>. The perpendicular electric field E⃗_⊥ distorts phosphorene's charge distribution, giving rise to a finite polarization P⃗=P_zẑ perpendicular to its layer <cit.>. The driving field in the phosphorene plane exerts a torque on the electric dipoles, thereby inducing a net orbital magnetization M⃗^L∝P⃗×ℰ⃗ <cit.>. One may calculate M⃗^L utilizing a scheme similar to the one described in Secs. <ref> and <ref>. Since time-reversal symmetry is preserved, there are no interband contributions to the orbital magnetoelectric effect in phosphorene. Thus, to first order in the in-plane driving field and for finite values of E_⊥, the current-induced orbital magnetization per unit-cell area of phosphorene is given by m_L_η=∑_να_ηνℰ_ν <cit.>, where
α_ην = (eμ_B/2Γ) ∑_n∫_BZ d^2 k/(2π)^2 (∂ f_n, k/∂ E) ⟨u_n, k| v_ν, k|u_n, k⟩⟨u_n, k|ℓ̂_η|u_n, k⟩
represent the matrix elements of the magnetoelectric tensor. Here, μ_B is the Bohr magneton and Γ is the energy scale associated with the electronic relaxation time τ=ħ/2Γ. In our calculations we have used Γ=1.6 meV, which corresponds to τ≈ 200 fs <cit.>.
Figure <ref> shows α_xy and α_yx calculated as functions of energy for different values of E_⊥. As expected, the OME clearly vanishes within the band-gap energy range. However, in the conductive regime, it can reach sizeable values for both m_L_y and m_L_x, in response to electric fields applied along the x̂ and ŷ directions, respectively. This in-plane induced orbital magnetization adds to the orbital angular momenta accumulated at the sample's edges due to the OHE, transforming their original antiferromagnetic-like disposition into a non-collinear orbital magnetic arrangement.
In some energy ranges the OME varies appreciably with E_⊥, which may be used to control its intensity. To roughly estimate the order of magnitude of the in-plane OME we consider an electric field with intensity ℰ_x=10^5 V/m and a carrier density that leads to α_yx=-2× 10^2 μ_B/(V·nm). In this case, the induced orbital magnetization m_L_y≈ -0.3 × 10^-2 μ_B/A_ u.c., where A_ u.c.=0.152 nm^2 represents the phosphorene unit-cell area. This is of the same order of magnitude as the Edelstein effect estimated for Bi/Ag(111) in Ref. <cit.>, which assumed a larger value of τ.
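A one-line check of this estimate, using only the numbers quoted above and reading α as a response per unit area, is:

```python
# Quick unit bookkeeping for the estimate quoted above (values from the text only).
alpha_yx = -2e2            # mu_B / (V nm)
E_x = 1e5 * 1e-9           # 1e5 V/m expressed in V/nm
A_uc = 0.152               # phosphorene unit-cell area, nm^2
m_Ly = alpha_yx * E_x * A_uc
print(f"m_Ly ~ {m_Ly:.1e} mu_B per unit cell")   # ~ -3e-03, i.e. -0.3e-2 mu_B / A_uc
```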
§ FINAL REMARKS AND CONCLUSIONS
To summarize, we argue that thin films of black phosphorus may provide suitable 2D platforms for direct observation of the orbital Hall effect. To this end, we combine linear response theory with density functional theory calculations to investigate the orbital conductivity of phosphorene and explore how it is affected by uniform strain and perpendicular electric fields. We show that phosphorene displays a fairly large OHC, with perpendicular orbital polarization, which is orders of magnitude larger than the SHC. This OHC is also highly anisotropic with respect to the direction of the in-plane applied electric field, and may even switch sign when the driving field direction is changed. Inside the energy band gap, it exhibits an orbital Hall insulating plateau that is robust under moderate uniform strain and perpendicular electric fields. The latter breaks spatial inversion symmetry and may lead to the appearance of an in-plane orbital magnetization, induced by an in-plane electric current. This effect alters the antisymmetric profile of the orbital magnetic moment induced by the orbital Hall effect in the conducting phase. Our numerical calculations are complemented by symmetry analysis.
We acknowledge CNPq/Brazil, CAPES/Brazil, FAPERJ/Brazil, INCT Nanocarbono and INCT Materials Informatics for financial support. TGR acknowledges funding from FCT-Portugal through Grant No. CEECIND/07471/2022. She thankfully acknowledges the computer resources at MareNostrum and the technical support provided by Barcelona Supercomputing Center (FI-2020-2-0033). MC acknowledges CNPq (Grant No. 317320/2021-1) and FAPERJ/Brazil (Grant No. E26/200.240/2023). We also thank Profs. A. Fazzio and P. Venezuela for fruitful discussions.
§ SYMMETRY CONSTRAINT ON ORBITAL HALL CONDUCTIVITY
The crystal symmetry operations of phosphorene are: E, τ𝒞_2x, 𝒞_2y, τ𝒞_2z, 𝒫, τℳ_x, ℳ_y and τℳ_z <cit.>. Here E represents the identity operation, 𝒞_2μ is a 180° rotation around the μ-axis, ℳ_μ denotes a reflection through a mirror plane perpendicular to the μ-axis, and 𝒫 symbolizes the spatial-inversion operation; τ𝒪 designates the action of 𝒪 followed by a half-unit-cell translation τ⃗=(a_x/2,a_y/2), where a_x and a_y represent the moduli of the unit-cell lattice vectors. This set of symmetries is isomorphic to the point group D_2h, which corresponds to the Laue group mmm <cit.>.
Consequently, for a phosphorene layer in the xy plane only the L_z component of the OHE is allowed <cit.>. It is possible to derive the constraints on the OH conductivity tensor imposed by each symmetry operation of phosphorene; they are summarized in Table <ref>.
In the presence of E⃗_⊥ all symmetry operations that interchange z and -z are excluded, leaving just τ𝒞_2z, τℳ_x, ℳ_y, and E, which are identified with an asterisk in Table <ref>. In this case, the point group is reduced from D_2h to C_2v. However, since C_2v and D_2h belong to the same Laue class (mmm), only the L_z component of the OHC can be nonzero when phosphorene is subjected to E⃗_⊥ <cit.>.
In order to obtain the constraints on the OHC components presented in Table <ref> we consider the action of τ𝒪 on the Bloch eigenstates ψ_n,k(r) associated with the eigenvalue E_n,k, namely τ𝒪ψ_n,k(r)= exp(-iτ⃗·k)ψ_n,𝒪k(r) <cit.>.
Since the Hamiltonian is invariant under τ𝒪, E_n, k=E_n,𝒪 k.
Let us examine, for example, Ω^L_η_yx,n( k). Inserting the identity (τ𝒪)^†(τ𝒪)=1 into the orbital-weighted Berry curvature and using the above relations we obtain
Ω^L_η_yx,n( k) = 2ħ∑_m≠ nIm[ ⟨ u_n, k| (τ𝒪)^† (τ𝒪) j_y, k^L_η (τ𝒪)^† (τ𝒪)|u_m, k⟩⟨ u_m, k|(τ𝒪)^† (τ𝒪) v_x, k(τ𝒪)^† (τ𝒪)|u_n, k⟩/(E_n, k-E_m, k+i0^+)^2].
The restrictions on the conductivity tensor depend on how the Cartesian components of the velocity and angular momentum operators transform under the group symmetry operations. This information is contained in the group's character table, which shows that they only acquire a sign s_𝒪,Â=± 1 <cit.> under such operations, as Table <ref> illustrates. Therefore,
Ω^L_η_yx,n( k) = 2ħ∑_m≠ nIm[ ⟨ u_n,𝒪 k| s_𝒪,v̂_y s_𝒪,L̂_η j_y,𝒪 k^L_η|u_m, 𝒪 k⟩⟨ u_m, 𝒪 k| s_𝒪,v̂_x v_x,𝒪 k|u_n,𝒪 k⟩/(E_n,𝒪 k-E_m,𝒪 k+i0^+)^2]
= s_𝒪,v̂_x s_𝒪,v̂_y s_𝒪,L̂_ηΩ^L_η_yx,n(𝒪 k).
The same expression holds for Ω^L_η_xy,n( k).
Since ∫ d^2 k=∫ d^2(𝒪 k), it follows from Eq. (<ref>) that
𝒪: σ^L_η_ OH= s̅^η_ OH(𝒪) σ^L_η_ OH,
where s̅^η_ OH(𝒪)=s_𝒪, v_x× s_𝒪, v_y× s_𝒪, L_η. If s̅^η_ OH(𝒪)=+1 the symmetry 𝒪 does not impose a constraint on the OH conductivity. However, if s̅^η_ OH(𝒪)=-1, then σ^L_η_yx= 0.
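The sign bookkeeping above is easy to automate. The short sketch below encodes how the polar components v_μ and the axial components L_η transform under a representative subset of the operations listed in Table <ref> (signs follow the standard vector/pseudovector rules; the fractional translations in τ𝒪 play no role here), and checks which in-plane components σ^L_η_xy and σ^L_η_yx survive:

```python
from itertools import product

# Signs acquired by v (polar vector) and L (axial vector) components under
# representative symmetry operations of phosphorene's point group.
ops = {
    "C2z": {"vx": -1, "vy": -1, "vz": +1, "Lx": -1, "Ly": -1, "Lz": +1},
    "C2x": {"vx": +1, "vy": -1, "vz": -1, "Lx": +1, "Ly": -1, "Lz": -1},
    "Mz":  {"vx": +1, "vy": +1, "vz": -1, "Lx": -1, "Ly": -1, "Lz": +1},
    "My":  {"vx": +1, "vy": -1, "vz": +1, "Lx": -1, "Ly": +1, "Lz": -1},
    "P":   {"vx": -1, "vy": -1, "vz": -1, "Lx": +1, "Ly": +1, "Lz": +1},
}

for eta, (mu, nu) in product("xyz", [("x", "y"), ("y", "x")]):
    # sigma^{L_eta}_{mu nu} survives only if s_bar = +1 for every operation
    allowed = all(o[f"v{mu}"] * o[f"v{nu}"] * o[f"L{eta}"] == +1 for o in ops.values())
    print(f"sigma^L_{eta}_{mu}{nu}:", "allowed" if allowed else "forbidden")
```

Running it flags only the L_z component as allowed for in-plane transport, consistent with the discussion above; restricting the set to the operations that survive E⃗_⊥ (τ𝒞_2z, τℳ_x, ℳ_y and E) gives the same verdict, since 𝒞_2z alone already forbids the L_x and L_y components.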
|
http://arxiv.org/abs/2307.04012v1 | 20230708164551 | Learning Together: Towards foundational models for machine learning interatomic potentials with meta-learning | [
"Alice E. A. Allen",
"Nicholas Lubbers",
"Sakib Matin",
"Justin Smith",
"Richard Messerly",
"Sergei Tretiak",
"Kipton Barros"
] | physics.chem-ph | [
"physics.chem-ph",
"physics.comp-ph"
] |
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Nvidia Corporation, Santa Clara, CA 9505, United States
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Learning Together: Towards foundational models for machine learning interatomic potentials with meta-learning
The development of machine learning models has led to an abundance of datasets containing quantum mechanical (QM) calculations for molecular and material systems. However, traditional training methods for machine learning models are unable to leverage the plethora of data available as they require that each dataset be generated using the same QM method. Taking machine learning interatomic potentials (MLIPs) as an example, we show that meta-learning techniques, a recent advancement from the machine learning community, can be used to fit multiple levels of QM theory in the same training process. Meta-learning changes the training procedure to learn a representation that can be easily re-trained to new tasks with small amounts of data. We then demonstrate that meta-learning enables simultaneously training to multiple large organic molecule datasets. As a proof of concept, we examine the performance of an MLIP refit to a small drug-like molecule and show that pre-training potentials to multiple levels of theory with meta-learning improves performance. This difference in performance can be seen both in the reduced error and in the improved smoothness of the potential energy surface produced. We therefore show that meta-learning can utilize existing datasets with inconsistent QM levels of theory to produce models that are better at specializing to new datasets. This opens new routes for creating pre-trained, foundational models for interatomic potentials.
Kipton Barros
August 12, 2023
===================
§ INTRODUCTION
Machine learning is fundamentally changing and expanding our capabilities for modeling chemical and materials systems <cit.>. A growing array of properties has been successfully predicted with machine learning models, from materials' band gaps and formation energies to molecular energies and bond orders <cit.>. The development of machine learning models for various applications has involved the creation of a large number of datasets containing quantum-mechanical calculations at different fidelities (levels of theory) <cit.>. However, incorporating this multi-fidelity information into machine learning models remains challenging. In this work, we show that multiple datasets can be used to fit a machine learning model, even if the datasets were calculated with many different QM levels of theory. To overcome this challenge, we incorporate meta-learning techniques into the training process and subsequently demonstrate improvements in accuracy for multiple applications. The aim of meta-learning is to use a wide collection of data to train a machine learning model that can then be easily re-trained to specialized tasks, and we demonstrate the applicability of the meta-learning method to MLIPs.
In the broader landscape of efforts to combine machine learning with molecular and materials modelling, particular attention has been paid to MLIPs <cit.>. Accurate atomistic simulations rely on interatomic potentials that closely recreate the interactions present between atoms and molecules <cit.>. Recreating these interactions involves a trade-off between accuracy and computational cost, with quantum mechanical techniques offering highly accurate simulations whilst classical force fields are fast and capable of modelling much larger systems over long timescales <cit.>. Within the last decade, MLIPs have increasingly been seen as a method that could provide a model that is both fast and accurate <cit.>. However, the development of MLIPs that are transferable to unseen organic molecules requires datasets that cover a large fraction of chemical space. This requirement has led to the production of numerous datasets <cit.>. These datasets contain the quantum mechanical (QM) energies and forces of millions of structures spanning large regions of chemical space. However, the QM methods used to calculate the energies and forces vary considerably. As different QM methods result in different potential energy surfaces, this inconsistency in QM techniques limits the extent to which datasets can be used together to fit potentials.
Numerous organic molecule datasets have been created for training MLIPs <cit.>. However, a consensus on the best QM techniques to employ to create these datasets has never been reached, as a compromise between accuracy and computational cost must always be considered when performing QM calculations. This lack of consensus has led to a variety of different software, methods, basis sets and exchange-correlation functionals being used. For example, the QM7-x and ANI-1x datasets both contain energies and forces for millions of small organic molecules. However, QM7-x was calculated using the PBE0 exchange-correlation functional with many-body dispersion whilst ANI-1x was calculated with the ωB97x functional and 6-31G* basis set <cit.> and does not include dispersion effects. Therefore, these two datasets describe similar, but slightly different, potential energy surfaces. If both datasets were joined together to train a potential then problems would likely arise, as contradictory information is present. For example, identical structures at different levels of theory can have different energies and forces. Whilst datasets from different sources have been fit together without further refinement <cit.>, this approach does not account for differences in the interactions described. Techniques exist in the machine learning literature to address such differences between potential energy surfaces.
Previous work on fitting MLIPs to multiple datasets is limited. In Ref. , a transferable molecular potential was first trained to ∼ 5 million density functional theory (DFT) training points before being refit, with frozen parameters, to 0.5 million CCSD(T)* energies. This technique, known as transfer learning has been used in several works <cit.>. The advantage of using transfer learning for training MLIPs is that it requires fewer calculations at a higher, and more expensive, level of theory. However, this kind of transfer learning technique, freezing neural network (NN) parameters, is limited to just two datasets. If we want to use multiple existing datasets, and expand the size and variety of training data, then new methods must be found.
Fortunately, this problem is being explored in a branch of machine learning research known as meta-learning <cit.>. Meta-learning seeks to build a model that, although not specialized to any particular task, can be quickly re-trained to many new tasks - where a task is a specific learning problem. Furthermore, this retraining can be effective even if the amount of new data is limited <cit.>.
For transferable MLIPs, the concept of tasks naturally lends itself to quantum mechanical datasets calculated with different methods. By using meta-learning techniques, we will show how information from multiple levels of theory can be incorporated together. We begin by investigating training data with multiple levels of theory for an individual aspirin molecule and for the QM9 dataset (which contains over 100,000 molecules in their equilibrium configurations). With these systems, the problems associated with naively combining datasets together are seen and the benefits of meta-learning are clearly observed in the test set errors. We then move on to combining several large molecule datasets to pre-train an MLIP. Combining large organic datasets to fit MLIPs has never previously been attempted. Subsets, chosen using active learning, of six existing datasets (ANI-1x, GEOM, QMugs, QM7-x, Transition-1x and the QM9 dataset from Ref. ) were used to fit an adaptable potential using meta-learning – see Fig. <ref> for a visualization of the space the datasets cover <cit.>. Figure <ref> demonstrates the increase in chemical space possible when multiple datasets are combined. The benefits of pre-training are then shown by retraining to the 3BPA molecule and testing various properties. These tests show that pre-training models using meta-learning produces a more accurate and smoother potential. The benefits of pre-training include enhanced accuracy and generalization capabilities in modeling interatomic potentials.
Training machine learning models on large amounts of data before re-training to a specific task is related to the concept of foundational models <cit.>. This concept has been used to create large language models, e.g. GPT-4, which have been pre-trained on extremely large datasets before being fine-tuned for specific tasks, e.g. ChatGPT, which is fine-tuned for conversational usage <cit.>. Creating foundational models allows a wide range of information to be encoded before specialisation. With meta-learning techniques, we can now pre-train interatomic potentials to numerous large datasets, and this is a step towards foundational models for MLIPs – MLIPs that could be quickly re-trained to diverse molecular systems.
The number of QM datasets has grown rapidly over the last few years. However, a major bottleneck in exploiting this information has been the absence of methods that can effectively combine all of it. In this work, we have overcome this limitation by exploiting techniques which enable the incorporation of datasets with different fidelities. Whilst we focus on MLIPs, these techniques are applicable to the wide range of predictive models that exist for material and molecular property prediction. By showing how meta-learning can be applied, we aim to encourage researchers to fully utilize the vast amount of existing data that the scientific community has already collected.
§ METHODS
§.§ Meta-Learning Algorithm
Meta-learning is an area of machine learning concerned with improving the learning process to produce models that can easily adapt to new problems <cit.>. A key component of meta-learning is the concept of different `tasks'. Tasks are datasets with similar properties but slight differences. For example, if we were interested in classifying cats and dogs, a similar task might be to classify lions and bears. The task is not the same, but we would expect fundamental similarities in the model needed to perform the classification. By using a meta-learning algorithm to learn multiple different tasks, less data will be required when a new learning problem is introduced.
The objective of meta-learning algorithms is to train a model that can generalize more easily to new data<cit.>. We will use meta-learning to fit multiple different QM datasets with slightly different properties. To our knowledge, meta-learning for MLIPs has not been previously carried out, although it has been used in other areas of science <cit.>.
The meta-learning algorithm we have chosen to fit multiple datasets for MLIPs is called Reptile <cit.>. Reptile works by repeatedly sampling a task (a dataset), performing a limited number of optimization steps on that task, and then updating the weights of the machine learning model towards the task-adapted weights. Reptile was chosen over other meta-learning algorithms such as MAML <cit.> as Reptile is simpler to implement and therefore more likely to be adopted by the wider community. A comparison of methods such as MAML for interatomic potentials is therefore left to future work.
Reptile is described in Algorithm <ref>, with a visual illustration also given. The algorithm works by separating the training data into distinct learning problems (tasks). An individual task is selected and multiple optimization steps are performed. The parameters of the model are then updated towards the task-optimized parameters. A new task is then selected and the procedure is repeated many times. This moves the model to a region of parameter space from which it can readily adapt to each of the different datasets present.
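As a concrete illustration, a stripped-down sketch of this loop in PyTorch is shown below. The model, dataloaders and energy-only loss are placeholders (the actual fits in this work use the torchANI ANI-1x architecture and also include forces); the essential ingredients are the k inner steps on a sampled task and the interpolation of the meta-parameters towards the task-adapted parameters with step size ε.

```python
import copy
import random
import torch

def reptile(model, tasks, outer_steps, k, eps, lr=1e-3):
    """Reptile outer loop: theta <- theta + eps * (phi_task - theta)."""
    for _ in range(outer_steps):
        task = random.choice(tasks)                      # sample a task (one QM dataset)
        inner = copy.deepcopy(model)                     # phi starts from the meta-parameters
        opt = torch.optim.Adam(inner.parameters(), lr=lr)
        for step, (inputs, e_ref) in enumerate(task):    # k inner optimization steps
            if step >= k:
                break
            loss = torch.mean((inner(inputs) - e_ref) ** 2)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                            # move meta-parameters towards phi
            for theta, phi in zip(model.parameters(), inner.parameters()):
                theta.add_(eps * (phi - theta))
    return model
```

Setting ε=1 and k=1 here reproduces the single-gradient-step baseline discussed next.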
Throughout this work, the k=1 result is used as a comparison point. This is because when k=1 the algorithm becomes equivalent to stochastic gradient descent on the expected loss over all the training tasks <cit.>. This is referred to as joint training in Ref. . At k=1, the algorithm is not expected to account for differences in the QM theory but still uses all the information present in the datasets.
§.§ Interatomic Potential
In this work, we have used the NN architecture implemented in torchANI with the same structure as the ANI-1x model <cit.>. However, the meta-learning techniques described are not specific to this form of model and there is no reason that they could not be applied to other machine learning models that employ similar iterative solvers.
The hyperparameters used for the ANI potential are the same as those used for previous training to the ANI-1x and ANI-1ccx datasets, see Ref. for more details.
§.§ Datasets
§.§.§ Aspirin
Aspirin structures were produced by molecular dynamics simulations at 300 K, 600 K and 900 K. Density-functional-based tight binding (DFTB) was used to perform the MD simulations and a total of 400 structures were created for each temperature. QM calculations of the energies and forces were then performed on these structures with three levels of theory: DFT with the ωB97x exchange-correlation functional and 6-31G* basis set, DFT with the Becke, 3-parameter, Lee–Yang–Parr (B3LYP) exchange-correlation functional and def2-TZVP basis set, and Hartree-Fock with the def2-SVP basis set, for the 300 K, 600 K and 900 K structures respectively. These datasets were used to pre-train a molecular potential. The pre-trained potential was then refit to a new dataset of MD configurations at the Møller–Plesset (MP2) level of theory with the def2-SVP basis set (a more accurate level of theory). The training dataset for refitting used 400 MD configurations sampled at 300 K whilst the test set contained structures at 300 K, 600 K and 900 K. A batch size of 8 was used for training.
§.§.§ QM9
The QM9 dataset contains over 100,000 equilibrium structures for small organic molecules with up to 9 heavy atoms <cit.>. In Ref. , the QM9 dataset was recalculated with 76 different exchange-correlation functionals and 3 basis sets <cit.>.
§.§.§ Multiple Organic Molecules
Seven separate datasets were chosen to fit an organic molecule potential that could be easily re-trained to new data. The seven datasets used for meta-learning were chosen to cover both diverse regions of chemical space and multiple levels of theory – including the accurate treatment of dispersion effects. The chemical space covered includes reactive paths and biologically and pharmacologically relevant structures. Whilst ANI-1x does cover a large number of conformations for organic molecules, it has limitations. This is demonstrated by Fig. <ref> and Fig. S1. Figure <ref> demonstrates how the additional datasets increase the size of the molecules and the range of energies included. The E_0 energy is calculated using linear fitting and then subtracted from each dataset. The minimum energy for each dataset is then shifted to zero. Whilst it is not covered in this work, as we use the ANI potential, including larger molecules in datasets may be increasingly important for newer generations of interatomic potentials that include message passing and describe longer length scales <cit.>. Figure S1 shows the distribution of uncertainty for the ANI-1x potential across the dataset space. Whilst ANI-1x dz, ANI-1x tz, GEOM and QMugs have similar probability distributions, QM7-x and Transition-1x contain larger uncertainties. Transition-1x contains reactive structures that are not contained in the original dataset and therefore higher uncertainties are expected. For QM7-x, there are also higher uncertainties, and this may be due to the different sampling techniques used.
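The per-dataset energy alignment used for this comparison can be written compactly; the sketch below (with illustrative variable names) fits per-element reference energies by least squares, subtracts them, and shifts each dataset's minimum to zero, which is all that the figure requires.

```python
import numpy as np

def align_dataset_energies(energies, composition_counts):
    """energies: (n_structures,) total energies of one dataset;
    composition_counts: (n_structures, n_elements) atom counts per structure.
    Returns energies with the linearly fitted E_0 removed and the minimum set to zero."""
    e0_per_element, *_ = np.linalg.lstsq(composition_counts, energies, rcond=None)
    shifted = energies - composition_counts @ e0_per_element
    return shifted - shifted.min()
```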
A property that is not shown in Table 1 is the software used for the DFT calculations. Even when the same level of theory is used, we can expect different software to give slightly different results. This will cause further discrepancies between the datasets as a variety of codes are employed. For example, although Transition-1x and ANI-1x are calculated at the same level of theory, Transition-1x is calculated with the ORCA program whilst ANI-1x is calculated with Gaussian <cit.>.
The individual description and justification for including each dataset used is as follows:
* QM9 - This dataset contains a diverse range of 76 functionals and 3 basis sets for small equilibrium organic molecules <cit.>.
* ANI-1x - This is a large dataset of small (up to 8 heavy atoms) organic molecules generated with active learning methods <cit.>.
* QMugs - This dataset includes the largest molecules, with up to 100 heavy atoms, and specializes in drug-like molecules <cit.>.
* GEOM - This is the largest dataset and contains both large molecules and drug-like molecules <cit.>.
* QM7-x - This is also a large dataset of small (up to 7 heavy atoms) organic molecules, but with dispersion accurately described through many-body dispersion <cit.>.
* Transition-1x - This dataset includes minimum energy paths for 12,000 reactions <cit.>.
* ANI-1ccx - This dataset contains coupled-cluster-level calculations for a subset of the ANI-1x dataset <cit.>.
Other datasets considered for inclusion were SPICE, PubChemQC-PM6 and Tensormol <cit.>. However, the datasets already chosen provide a sufficient representation of chemical space. It is also worth noting that retraining to recreate the specific properties of the excluded datasets would be quickly possible with the meta-learning potential.
§.§ Meta-learning Hyperparameter Optimization
There are three parameters in the Reptile algorithm and its retraining procedure. These control the number of steps (k) taken at each optimization step, how the parameters are updated (ϵ) from the task's individual NN parameters, and the maximum number of epochs used for retraining. The number of epochs was investigated to see whether restricting the training improved accuracy by ensuring the potential remained close to the meta-learned potential, or if longer retraining improved results. For a detailed discussion of the hyperparameters chosen when fitting to the seven separate datasets, see Section S1.2. The ϵ value used throughout this work is ϵ=1 whilst the k value is changed depending on the problem. The maximum number of epochs used for retraining for the meta-learning algorithm with k>1 is restricted to 150 epochs.
§.§ Stages of Fitting for the Organic Molecule datasets
In the first iteration, 100,000 structures were taken randomly from the ANI-1x, QMugs, GEOM, QM7-x and Transition-1x datasets. For QM9, 10,000 structures were used for each level of theory. This number is restricted because 276 levels of theory exist, and each theory level samples different structures in the QM9 dataset. After the first iteration, the highest-error structures were added to the next iteration <cit.>. The cutoffs used for adding structures are described in SI 1.6. This process was repeated 3 times. A diagram of the process is shown in Fig. S3.
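The selection step of this loop amounts to thresholding the current model's errors dataset by dataset; a hedged sketch (the cutoff values themselves are those of SI 1.6 and are treated here as given inputs) is:

```python
import numpy as np

def grow_selection(selected, errors, cutoffs):
    """selected: dict dataset_name -> indices already in the training pool;
    errors:   dict dataset_name -> per-structure error of the current potential;
    cutoffs:  dict dataset_name -> error threshold for adding structures (SI 1.6)."""
    return {
        name: np.union1d(selected[name], np.where(err > cutoffs[name])[0])
        for name, err in errors.items()
    }
```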
§ RESULTS
§.§ A Simple Case Study on Aspirin
As the initial test case, we investigate the performance of meta-learning on a dataset containing a single aspirin molecule. Aspirin structures were produced by molecular dynamics simulations at 300 K, 600 K and 900 K. The QM energies and forces were then calculated at three different levels of theory: two distinct DFT functionals, and Hartree-Fock. This created three different datasets, with each temperature corresponding to a different level of theory. These three datasets were used to pre-train a molecular potential to the energies and forces of 1,200 structures. The pre-trained potential was then refit to a new dataset of 400 MD configurations at the MP2 level of theory from the 300 K simulation.
The change in the force RMSE with the value of k used in the meta-learning algorithm is shown in Fig. <ref>. The k parameter controls the number of steps taken towards each dataset. As k is increased the speed of the algorithm also increases, and this is an additional consideration in choosing the optimal value. In the limit of k →∞ the algorithm would correspond to iterative training to each dataset and then transfer learning to a new task. However, while this may work for small problems, this approach is impractical for large datasets.
Figure <ref> shows that as the k parameter is increased the test set error decreases, with the minimum error at around k=400. This is an improvement in test set error over both no pre-training (5.35 ± 0.41 kcal/mol/Å) and k=1 (3.38 ± 0.16 kcal/mol/Å). Note that k=1 effectively corresponds to simultaneous training to all tasks. Therefore, when we attempt to combine multiple datasets at different levels of theory, an improvement in performance can be seen when meta-learning is incorporated into the training process.
§.§ Meta-learning many levels of theory using QM9
Next, we move onto the QM9 dataset, which contains multiple different small organic molecules in their equilibrium structures. The QM9 dataset has been calculated at 228 different levels of theory and therefore provides an ideal dataset for analysing meta-learning techniques. We can use this dataset to test whether meta-learning can produce a potential which can be refit, with less data, to a level of theory not previously encountered for the QM9 dataset. In order to do this, a subset of the QM9 dataset was used to train a potential to 10,000 molecules, 50 different exchange-correlation functionals and three different basis sets. The potential was then refit to a new exchange-correlation functional that had not been previously encountered, and the performance of this new model was assessed and compared to no pre-training and k=1 meta-learning.
The test set error for the meta-learning potential refit to a new level of theory in the QM9 dataset is shown in Fig. <ref>. Pre-training the potential greatly improves the test set error for this case. In Fig. S9 a comparison between meta-learning and k=1 is shown, and we see that k=1 does not perform as well as k=10. This is because k=1 does not account for the discrepancies between the different levels of theory present. These results show that even when the number of levels of theory is relatively large, at 150, and multiple molecules are present, meta-learning improves test set error over k=1.
§.§ Making the most of scarce data at CCSD(T) level
We will now move to the datasets used to train transferable interatomic potentials. As a starting example, we will look at pre-training to the multiple levels of theory (ωB97x/ 6-31G* and ωB97x/ def2-TZVPP) contained in the ANI-1x dataset <cit.>. We will then retrain to the ANI-1ccx dataset <cit.>. Figure <ref> shows the distribution of errors when pre-training to multiple levels of theory with meta-learning and k=1. The RMSE is 3.30 ± 0.10 kcal/mol and 2.39 ± 0.00 kcal/mol for k=1 and meta-learning respectively. Therefore, we can again see that meta-learning with a higher k value improves results compared to k=1. The comparative results for direct training to ωB97x/ 6-31G* and ωB97x/ def2-TZVPP and then transfer learning to CCSD(T) are 2.20 ± 0.01 kcal/mol and 2.09 ± 0.02 kcal/mol respectively. Therefore, in this case fitting to multiple datasets does not improve results over fitting to just one. This is in part because both datasets contain the same structures and cover the same chemical and configurational space. The potential trained to multiple organic datasets was also refit to the CCSD(T) dataset, and the benefits of meta-learning over k=1 were also seen, with errors of 2.89± and 3.32± respectively. However, this is notably higher than training to the ANI-1x dataset alone. The CCSD(T) dataset is a subset of the ANI-1x dataset and contains identical structures. For these cases, adding additional data in other areas of chemical space may not improve results.
§.§ Training to multiple transferable organic molecule datasets
Numerous datasets have been created that contain quantum mechanical calculations for organic molecules. However, as these datasets use different levels of theory and software, combining the information from different datasets requires advanced training techniques. By using meta-learning, a pre-trained model was created that uses information from seven different datasets. This is the first instance, to our knowledge, of combining information from multiple organic molecule datasets in this manner.
We have already seen that meta-learning can improve results compared to k=1 when multiple datasets are used. We will now use the pre-trained model to explore the benefits of pre-training with meta-learning in comparison to no pre-training, and k=1 when retraining to a single molecular system. The pre-trained model was re-trained to the 3BPA dataset taken from Ref. and various properties explored <cit.>.
The first properties we will analyze are the energy and force RMSE errors. The force errors for a dataset taken from MD at 1200K are shown in Fig. <ref>, with the energy and force learning curves for datasets at 300K, 600K and 1200K given in Fig. S4. From these graphs, the improved performance of pre-training using the meta-learning approach (with three passes through the dataset) over both k=1 and no pre-training can be seen for energies and forces. Therefore, just by adapting the training scheme, with no change in the model architecture or the dataset itself, consistent improvements in accuracy can be seen with meta-learning. The importance of the training method used has previously been seen in Ref. . Here we see how it can improve performance for fitting multiple datasets together. In comparison to when the ANI-1x model is used for pre-training, meta-learning performs slightly better on force errors but slightly worse for energy predictions. Given that the ANI-1x model is fit to the same level of theory as the 3BPA dataset, the performance of the meta-learning potential is encouraging.
However, it is known that RMSE errors alone are not enough to verify the performance of a potential <cit.>. We will therefore examine additional properties. The 3BPA molecule has three central dihedral angles which are illustrated in Fig. <ref>. The energy scans along these dihedral angles are shown in Fig. <ref>, with the model refit to the energies and forces of just 62 3BPA conformations. When no pre-training is used, the surface at β=120 significantly over-estimates the high energy point and lacks smoothness. A similar shape is seen for the k=1 potential. However, when meta-learning is used for pre-training, the surface remains noticeably smoother with significantly less over-prediction. When k=1 is used, multiple different potential energy surfaces are combined together in a nonphysical way which destroys the smoothness of the underlying potential. The error in the gradient of the 2D energy surface is shown in Fig. <ref> b) and emphasizes this difference in smoothness. When meta-learning is used, the contradiction between the underlying potential energy surfaces is corrected, resulting in a smoother model. When no pre-training or k=1 is used, an additional problem can occur, with the high energy regions at α=0 failing to be recreated for the β=180 and β=150 scans respectively. In contrast, the meta-learning pre-trained model correctly recreates this behaviour in both cases. The results for ANI-1x pre-training are given in Fig. S6.
One advantage of pre-training with multiple datasets over ANI-1x or QM7-x alone is that reactive systems can be added that are not contained in ANI-1x. To test if this information has been effectively passed to the meta-learning potential, hydrogen bond dissociation for the 3BPA molecule was performed. There is no reactive information contained within the 3BPA training set, and so this test relies entirely on the information contained in the pre-training.
Figure <ref> shows the change in energy as a hydrogen is removed from the 3BPA molecule. The potential pre-trained with meta-learning recreates the smooth dissociation curve expected. In contrast, when no pre-training, k=1 or ANI-1x is used, the curve lacks smoothness and has an additional barrier present. Fig. S7 shows the bond dissociation energy when just 31 structures are used for retraining. Even in this low data limit the smooth dissociation curves for the meta-learning potential remain. To demonstrate that this is not unique to 3BPA, the hydrogen bond dissociation for ethanol is shown in Fig. S8. Again, k=1 fails to recreate the smooth curve expected whilst the meta-learning potential captures the correct shape.
We have therefore shown how meta-learning can be used to combine multiple datasets, and the resulting improvements in the errors, torsion energy scans and bond dissociation. Joint fitting can improve on no pre-training. However, not accounting for the difference in QM level of theory causes a reduction in performance that can be seen in the test set errors, the smoothness of the potential and the performance in extrapolation regions.
§ CONCLUSION
The quantum mechanical properties of millions of molecular species and many materials systems have already been calculated and composed into extended datasets <cit.>. However, the varying levels of theory used to perform the QM calculations have previously prevented different datasets from being used together to make machine learning models, for example for MLIPs. In this work, we have shown that meta-learning techniques can be used to jointly fit multiple datasets and demonstrated the improvement in performance that results from including a diverse selection of datasets.
We show the wide applicability of meta-learning by creating MLIPs for a variety of systems, from a single aspirin molecule to the ANI-1ccx dataset. We show that multiple large organic molecule datasets (QM7-x, QMugs, ANI-1x, Transition-1x and GEOM) can be combined together to pre-train a single model. The benefits of using a pre-trained model are then shown for the 3BPA molecule, with a more accurate and smoother potential produced. Meta-learning greatly expands the variety of fitting data available for MLIPs and establishes the possibility of creating readily pre-trained, foundational models for MLIPs.
Pre-training machine learning models has been extensively discussed in the machine learning literature in recent years <cit.>. Whilst pre-training has been carried out for MLIPs, its use has been limited to training from one dataset to another <cit.>. With techniques such as meta-learning, this pre-training does not need to be limited to one specific dataset but can include large numbers of existing datasets. In this work, we added only a single reactive dataset to pre-train a model. However, many different reactive datasets exist, and combining this large amount of information could help build general transferable potentials for reactions in both the condensed and gas phase without the need for millions of new QM calculations. Additionally, datasets have been created for many different combinations of elements. Meta-learning techniques could help build more transferable MLIPs over a wider range of elements with fewer calculations required.
However, combining multiple datasets together and training with meta-learning will not always improve results. This was seen with the CCSD(T) results, where fitting straight from ANI-1x to CCSD(T) resulted in the lowest error. Therefore, adding more data when there is a specific application in mind is not always the best approach, particularly if the additional data is far from the final application. For specific applications, transfer learning from one dataset to another may yield the best training and test set errors. However, if multiple datasets need to be incorporated together, or a general model is desired which can be specialized to multiple different tasks, meta-learning methods are preferable.
With the techniques described in this work, multiple datasets can be fit at once. However, this advancement has exposed a more practical problem with the datasets currently published. There is not a standard format for storing information. Manual manipulation of datasets to a standard format is extremely time-consuming. The need for uniformity in the structure of datasets produced is therefore becoming increasingly important.
The growth of available datasets containing quantum mechanical information for molecular and material structures has given researchers unprecedented levels of QM information. However, combining data from multiple data-sources is a major challenge. We have shown how meta-learning can be used to combine information from multiple datasets generated with varying levels of theory. This advancement changes the way that existing datasets should be viewed, and opens up new avenues for MLIP fitting. Beyond this, the results suggest that meta-learning can be seen as a general approach for combining training datasets for the broad array of chemical and materials processes where data science models can benefit.
This work was supported by the United States Department of Energy (US DOE), Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division under Triad National Security, LLC (‘Triad’) contract grant no. 89233218CNA000001 (FWP: LANLE3F2). A. E. A. Allen and S. Matin also acknowledge the Center for Nonlinear Studies. Computer time was provided by the CCS-7 Darwin cluster at LANL.
|
http://arxiv.org/abs/2307.03952v2 | 20230708110202 | Is ChatGPT a Good Personality Recognizer? A Preliminary Study | [
"Yu Ji",
"Wen Wu",
"Hong Zheng",
"Yi Hu",
"Xi Chen",
"Liang He"
] | cs.CL | [
"cs.CL"
] |
Is ChatGPT a Good Personality Recognizer? A Preliminary Study

Yu Ji^1,2, Wen Wu^2,3 (corresponding author), Hong Zheng^4, Yi Hu^3, Xi Chen^3, Liang He^1,2

^1 Institute of AI Education, East China Normal University, Shanghai, China
^2 School of Computer Science and Technology, East China Normal University, Shanghai, China
^3 Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
^4 Shanghai Changning Mental Health Center, Shanghai, China
In recent years, personality has been regarded as a valuable personal factor and has been incorporated into numerous tasks such as sentiment analysis and product recommendation. This has led to widespread attention to the text-based personality recognition task, which aims to identify an individual's personality based on given text. Considering that ChatGPT has recently exhibited remarkable abilities on various natural language processing tasks, we provide a preliminary evaluation of ChatGPT on the text-based personality recognition task for generating effective personality data. Concretely, we employ a variety of prompting strategies to explore ChatGPT's ability in recognizing personality from given text, especially the level-oriented prompting strategy we designed for guiding ChatGPT in analyzing given text at a specified level. The experimental results on two representative real-world datasets reveal that ChatGPT with zero-shot chain-of-thought prompting exhibits impressive personality recognition ability and is capable of providing natural language explanations through text-based logical reasoning. Furthermore, by employing the level-oriented prompting strategy to optimize zero-shot chain-of-thought prompting, the performance gap between ChatGPT and the corresponding state-of-the-art model is narrowed even further. However, we observe that ChatGPT shows unfairness towards certain sensitive demographic attributes such as gender and age. Additionally, we discover that eliciting the personality recognition ability of ChatGPT helps improve its performance on personality-related downstream tasks such as sentiment classification and stress prediction.
Keywords: ChatGPT; Personality Recognition; Chain-of-Thought Prompting Strategy; Level-Oriented Prompting Strategy; Natural Language Explanation; Unfairness
August 12, 2023
===================
§ INTRODUCTION
As one of the basic individual characteristics, personality describes the relatively stable pattern of an individual w.r.t. her/his behavior, thought, and emotion <cit.>. In recent years, an increasing number of researchers have considered personality as a valuable factor and incorporated it into various tasks (e.g., machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>), resulting in significant performance improvements. In order to automatically obtain large-scale user personality, the text-based personality recognition task is designed to infer user personality based on given user-generated text <cit.>. With the rapid development of pre-trained Large Language Models (LLMs) (e.g., BERT <cit.>, RoBERTa <cit.>, GPT-3 <cit.>, PaLM <cit.>, and LLaMA <cit.>), more and more LLM-based methods have been proposed for the text-based personality detection task and have achieved remarkable performance improvements <cit.>.
More recently, ChatGPT[https://chat.openai.com/] has attracted a considerable amount of attention with its impressive general language processing ability <cit.>, sparking exploration into its capability boundaries <cit.>. Several works have provided a preliminary evaluation of ChatGPT on various common tasks such as machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>. Therefore, in this work, we are interested in evaluating the performance of ChatGPT on text-based personality recognition task for generating effective personality data. We also would like to see whether eliciting the personality recognition ability of ChatGPT contributes to improving its performance on other downstream tasks. Concretely, we raise the following Research Questions (RQs):
RQ1: How do different prompting strategies affect ChatGPT's ability to identify personality?
RQ2: How unfair is ChatGPT when serving as a personality recognizer on various sensitive demographic attributes?
RQ3: Does the personality inferred by ChatGPT help improve its performance on other downstream tasks?
To answer these research questions, we conduct experiments on two representative text-based personality recognition datasets (i.e., Essays and PAN) to compare the performance of ChatGPT, traditional neural network (e.g., Recurrent Neural Network (RNN)), fine-tuned RoBERTa, and corresponding State-Of-The-Art (SOTA) model. Specifically, we adopt three classic prompting strategies to elicit the personality recognition ability of ChatGPT, including zero-shot prompting, zero-shot Chain-of-Thought (CoT) prompting, and one-shot prompting. Furthermore, considering that researchers typically analyze texts at different levels (e.g., word level, sentence level, and document level) to obtain valuable text information <cit.>, we design zero-shot level-oriented CoT prompting to guide ChatGPT in analyzing given text at a specified level, thereby gaining a more targeted understanding of given text and recognizing personality more precisely. According to the experimental results, our findings can be summarized as follows:
(1) Among the three classic prompting strategies, zero-shot CoT prompting can better elicit ChatGPT's ability to predict personality based on given text, resulting in its optimal overall performance on the two datasets, although there is still a certain gap in performance compared to the SOTA model. Additionally, ChatGPT with zero-shot CoT prompting could generate more natural language explanations by text-based logical reasoning, enhancing the interpretability of the prediction results. Furthermore, with the assistance of zero-shot level-oriented CoT prompting, ChatGPT could perform more targeted text analysis, enabling it to complete more accurate personality prediction.
(2) ChatGPT exhibits unfairness to some sensitive demographic attributes on the text-based personality recognition task. Based on ChatGPT's analysis, the woman group is more likely to have high levels of Openness, Conscientiousness, and Agreeableness when compared to the man group. Besides, relative to the younger group, the elderly group has a higher likelihood of having low Openness.
(3) The personality inferred by ChatGPT could enhance its performance on sentiment classification task and stress prediction task, which may provide new insights for other personality-related tasks (e.g., machine translation and product recommendation).
In the following sections, we first introduce related work regarding personality recognition in Section <ref>. After that, we present the details of our experimental design and analyze the experimental results in Section <ref>. Finally, we conclude the paper and indicate some future directions in Section <ref>.
§ BACKGROUND AND RELATED WORK
The Big-Five Factor (BFF) model and the Myers-Briggs Type Indicator (MBTI) are the two most popular personality assessment models <cit.>. To be specific, the BFF model describes personality based on five traits: Openness (O), Conscientiousness (C), Extraversion (E), Agreeableness (A), and Neuroticism (N) <cit.>. Table <ref> shows the propensities of individuals under different personality traits and levels. In contrast, MBTI describes personality according to four dimensions, including Extraversion/Introversion, Sensing/Intuition, Thinking/Feeling, and Judging/Perceiving <cit.>. Compared to the BFF model, MBTI still faces controversy within the academic community <cit.>. Hence, we adopt the BFF model to describe individuals' personalities in this paper.
In recent years, an increasing number of researchers regarded Big-Five personality as a valuable personal factor and incorporated it into their models, resulting in significant performance improvements on various tasks <cit.>. For example, Wu et al. <cit.> adopted users' Big-Five personalities to personalize the recommendation diversity being tailored to the users' diversity needs. Ban et al. <cit.> utilized learners' Big-Five personalities to model the individual differences for better predicting the learners' knowledge levels. This has sparked researchers' interest in efficiently acquiring Big-Five personalities.
The conventional approach to identify an individual's Big-Five personality is via personality questionnaires (e.g., NEO-FFI questionnaire <cit.>, BFI-44 <cit.>, BFI-10 <cit.>, and BFMS <cit.>). These personality questionnaires are typically carefully designed by psychology experts and require individuals to rate their behaviors using Likert scales, which is time-consuming and labor-intensive <cit.>. In order to apply Big-Five personality on a large scale across various domains (e.g., machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>), researchers attempted to implicitly obtain Big-Five personality from various User-Generated Content (UGC), including text <cit.>, handwriting <cit.>, speech <cit.>, electroencephalography (EEG) <cit.>, and so on. Due to substantial evidence from psychological research demonstrating the correlation between user-generated texts and users' Big-Five personalities <cit.>, researchers made an extensive exploration of text-based personality recognition. However, the related methods normally regarded text-based personality recognition task as a special case of text classification. Most of them utilized machine learning algorithms to build personality recognizers with text features such as Linguistic Inquiry and Word Count (LIWC) <cit.> and Structured Programming for Linguistic Cue Extraction (SPLICE) <cit.>. Furthermore, with the rapid development of deep learning, more and more methods using deep neural networks are proposed to solve text-based personality recognition task, as deep neural networks could extract high-order text features from user-generated text automatically <cit.>. For example, Majumder et al. <cit.> designed a deep convolutional neural network with Word2Vec embeddings <cit.> for personality detection. Xue et al. <cit.> presented a two-level hierarchical neural network to learn the deep semantic representations of users' posts for recognizing users' Big-Five personalities. Lynn et al. <cit.> utilized message-level attention to learn the relative weight of users' posts for assessing users' Big-Five personalities. Zhu et al. <cit.> learned post embeddings by contrastive graph transformer network for personality detection. Zhu et al. <cit.> proposed a lexical psycholinguistic knowledge-guided graph neural network to enrich the semantics of users' posts with the personality lexicons. Recently, the remarkable performance enhancements achieved by LLMs in numerous Nature Language Processing (NLP) tasks <cit.> prompted researchers to explore the utilization of LLMs in text-based personality prediction task <cit.>. For example, Mehta et al. <cit.> performed extensive experiments with BERT to arrive at the optimal configuration for personality detection. Ren et al. <cit.> leveraged BERT to generate sentence-level embedding for personality recognition, while a sentiment dictionary is used to consider sentiment information in the process of personality prediction.
Lately, the release of ChatGPT has drawn increasingly great attention due to the incredible general language processing ability of ChatGPT. Therefore, more and more researchers attempted to explore the capability boundaries of ChatGPT and evaluate it on various tasks, including machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, mental health analysis <cit.>, and so on. Hence, in this work, we are interested in exploring the personality recognition ability of ChatGPT through different prompting strategies for obtaining effective personality data.
§ EXPERIMENTS
§.§ Datasets
We adopt two well-known publicly available datasets in our experiments for text-based Big-Five personality recognition task:
(1) Essays <cit.>: This stream-of-consciousness dataset consists of 2,467 essays written by psychology students, and the Big-Five personality levels (i.e., low and high levels) of the students were acquired through standardized self-report questionnaire.
(2) PAN[https://pan.webis.de/clef15/pan15-web/author-profiling.html]: This dataset comes from the PAN2015 data science competition, which consists of four language sub-datasets (i.e., Dutch, English, Italian, and Spanish). In this work, we choose the English sub-dataset, which contains 294 users' tweets and their Big-Five personality scores. The Big-Five personality scores of the users were obtained by BFI-10 questionnaire <cit.>. Note that, similar to <cit.>, for each of the five personality traits, we adopt the corresponding mean value to convert personality scores into two personality levels (i.e., low and high levels). To be specific, personality score below the corresponding mean value is converted into the low level, while personality score equal to or above the corresponding mean value is converted into the high level.
Similar to <cit.>, we randomly split Essays and PAN datasets into training, validation, and testing sets in the proportion of 8:1:1. The statistics of the two datasets are summarized in Figure <ref>.
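For concreteness, a minimal sketch of these two preprocessing steps (the mean-threshold conversion applied to PAN and the random 8:1:1 split) is given below; the function names and random seed are illustrative.

```python
import numpy as np

def scores_to_levels(scores):
    """Binarize one personality trait: score >= trait mean -> High, else Low."""
    scores = np.asarray(scores, dtype=float)
    return np.where(scores >= scores.mean(), "High", "Low")

def split_8_1_1(samples, seed=0):
    """Random 8:1:1 train/validation/test split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train, n_val = int(0.8 * len(samples)), int(0.1 * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```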
§.§ Prompting Strategies
We employ three classic prompting strategies to explore the personality recognition ability of ChatGPT, including zero-shot prompting, zero-shot CoT prompting, and one-shot prompting. The reason for using one-shot prompting alone is that ChatGPT has a limitation on the length of input. Considering that the texts in both Essays and PAN datasets are normally long (i.e., the average lengths of texts in Essays and PAN datasets are 749 and 1,405 respectively), we only provide one demonstration example in the input (i.e., one-shot prompting) without offering more demonstration examples (e.g., two-shot prompting). In addition, inspired by existing NLP research mining valuable text information at different levels (e.g., word level, sentence level, and document level) <cit.>, we design level-oriented prompting strategy to guide ChatGPT in analyzing text at a specified level. Concretely, we combine the level-oriented prompting strategy with zero-shot CoT prompting to construct zero-shot level-oriented CoT prompting. The reason for constructing zero-shot level-oriented CoT prompting based on zero-shot CoT prompting is that ChatGPT with zero-shot CoT prompting has better overall performance on the two datasets when compared to zero-shot prompting and one-shot prompting (see Section <ref>). Hence, we would like to see whether the level-oriented prompting strategy could further enhance the effectiveness of zero-shot CoT prompting. Note that, the four prompting strategies require ChatGPT to simultaneously output the person's levels of five personality traits (i.e., O, C, E, A, and N) based on given text.
(1) Zero-Shot prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level:
(2) Zero-Shot CoT prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level: Let's think step by step:
(3) One-Shot prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Example Text]"
Level: [Openness Level of Example Text] Openness, [Conscientiousness Level of Example Text] Conscientiousness, [Extraversion Level of Example Text] Extraversion, [Agreeableness Level of Example Text] Agreeableness, [Neuroticism Level of Example Text] Neuroticism
Text: "[Text]"
Level:
Note that, to minimize the variance resulting from the sampling of demonstration examples, we randomly select three demonstration examples for conducting experiments and reporting the average performance.
(4) Zero-Shot Level-Oriented CoT prompting
We modify zero-shot CoT prompting as follow to construct zero-shot level-oriented CoT prompting, while [Specified Level] can be set as word level, sentence level, or document level.
Analyze the person-generated text from [Specified Level], determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level: Let's think step by step:
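Since the four templates above differ only in a few slots, they can be assembled programmatically as in the sketch below; the function and argument names are ours and are meant purely as an illustration of how [Text], [Specified Level], and the one-shot demonstration example are filled in.

```python
def build_prompt(text, strategy="zero_shot", level=None, example=None):
    """Assemble the prompting templates described above.

    strategy : "zero_shot", "zero_shot_cot", or "one_shot"
    level    : optional "word" / "sentence" / "document" for the
               level-oriented variant of zero-shot CoT prompting
    example  : (example_text, example_levels) pair for one-shot prompting
    """
    scope = f" from {level} level" if level else ""
    prompt = (f"Analyze the person-generated text{scope}, determine the person's levels "
              "of Openness, Conscientiousness, Extraversion, Agreeableness, and "
              "Neuroticism. Only return Low or High.\n")
    if strategy == "one_shot" and example is not None:
        example_text, example_levels = example
        prompt += f'Text: "{example_text}"\nLevel: {example_levels}\n'
    prompt += f'Text: "{text}"\nLevel:'
    if strategy == "zero_shot_cot":
        prompt += " Let's think step by step:"
    return prompt
```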
§.§ Baselines
Based on our literature research, we choose the following representative models as baselines:
(1) RNN <cit.>: uses RNN to generate text representation for recognizing Big-Five personality. In addition, the pre-trained GloVe model <cit.> is used to initialize the word embeddings.
(2) RoBERTa <cit.>: fine-tunes the pre-trained RoBERTa-Base model and utilizes the representation of [CLS] with a linear layer for personality classification (a minimal sketch follows this list).
(3) HPMN (BERT) <cit.>: is one of the SOTA personality prediction models, which uses the personality lexicons to incorporate relevant external knowledge for enhancing the semantic meaning of the person-generated text. Its performance on Essays and PAN datasets is quoted from the original paper.
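As referenced in baseline (2), one straightforward reading of the fine-tuned RoBERTa baseline is sketched below with the Hugging Face transformers library; the class is illustrative, omits the training loop, and assumes one binary (Low/High) head per trait.

```python
import torch
from transformers import RobertaModel

class RobertaPersonalityClassifier(torch.nn.Module):
    """RoBERTa-Base with the [CLS] representation fed to a linear layer;
    one such binary classifier per Big-Five trait is assumed here."""

    def __init__(self, n_labels=2):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.classifier = torch.nn.Linear(self.encoder.config.hidden_size, n_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_repr = outputs.last_hidden_state[:, 0]   # <s> token, RoBERTa's [CLS]
        return self.classifier(cls_repr)
```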
§.§ Evaluation Metrics
It can be observed from Figure <ref> that the Essays and PAN datasets maintain class balance across most of the five personality traits. Therefore, we use Accuracy (the higher the better) <cit.> as the evaluation metric for personality classification performance. Besides, to make a more intuitive comparison, we adopt the Accuracy Improvement Percentage (AIP) to measure the accuracy improvement of ChatGPT against the SOTA model (i.e., HPMN (BERT)), which is calculated as:
AIP = (Accuracy_testmodel - Accuracy_SOTA) / Accuracy_SOTA * 100%
where Accuracy_SOTA and Accuracy_testmodel denote the accuracy of the SOTA model and the test model such as ChatGPT with zero-shot prompting.
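As a quick worked example of this formula, the snippet below reproduces the -15.1% AIP of ChatGPT with zero-shot prompting against HPMN (BERT), using the PAN average accuracies reported later (57.3% and 67.5% respectively).

```python
def aip(accuracy_test_model, accuracy_sota):
    """Accuracy Improvement Percentage of a test model over the SOTA model."""
    return (accuracy_test_model - accuracy_sota) / accuracy_sota * 100.0

print(round(aip(57.3, 67.5), 1))   # -15.1: ChatGPT_ZS vs. HPMN (BERT) on PAN
```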
§.§ Implementation Details
For the usage of ChatGPT, we adopt the representative version of ChatGPT (i.e., gpt-3.5-turbo). In addition, we set the temperature to 0 to produce more deterministic and focused responses. For RNN and fine-tuned RoBERTa, we restrict each text to no more than 512 words (padding when the text length is less than 512, truncation when the text length is greater than 512). Besides, for RNN, the dimension of the hidden state, the batch size, and the learning rate are set to 128, 32, and 1e-3 respectively. For fine-tuned RoBERTa, the batch size and the learning rate are set to 32 and 5e-5 respectively.
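For reference, a minimal query function consistent with these settings is sketched below; it assumes the pre-1.0 openai Python client that exposes ChatCompletion.create, and the API-key handling is illustrative.

```python
import openai  # pre-1.0 client interface assumed

openai.api_key = "YOUR_API_KEY"  # illustrative; replace with your own credentials

def query_chatgpt(prompt):
    """Send one deterministic request to gpt-3.5-turbo (temperature = 0)."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```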
§.§ Overall Performance (RQ1)
Considering that ChatGPT may refuse personality recognition in some cases[One unexpected response of ChatGPT: “Unfortunately, there is not enough information in the provided text to accurately determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.".], we adopt the Majority approach to obtain the prediction results when encountering such rare situations. Specifically, for each personality trait, we regard the majority personality level in the training set as the personality level of each sample in the testing set. The experimental results on the Essays and PAN datasets are shown in Table <ref> and Table <ref>. Concretely, ChatGPT_ZS, ChatGPT_CoT, and ChatGPT_OS represent ChatGPT with zero-shot prompting, zero-shot CoT prompting, and one-shot prompting. In addition, ChatGPT_CoT_W, ChatGPT_CoT_S, and ChatGPT_CoT_D denote ChatGPT with zero-shot level-oriented CoT prompting, where [Specified Level] is set to word level, sentence level, and document level respectively.
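A minimal sketch of the Majority fallback is given below; for each trait it simply returns the most frequent level observed in the training set.

```python
from collections import Counter

def majority_level(training_levels):
    """Fallback prediction when ChatGPT declines to answer:
    the majority level ('Low' or 'High') of the trait in the training set."""
    return Counter(training_levels).most_common(1)[0][0]

# e.g. majority_level(["High", "Low", "High"]) -> "High"
```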
Results of zero-shot prompting. As shown in Table <ref> and Table <ref>, ChatGPT_ZS has better performance than the traditional neural network RNN on both Essays and PAN datasets. For example, relative to RNN, ChatGPT_ZS increases its average classification accuracy from 50.3% to 57.4% on Essays dataset. Furthermore, ChatGPT_ZS not only performs comparably to fine-tuned RoBERTa on Essays dataset (e.g., 57.4% vs. 57.3% in terms of average classification accuracy) but also outperforms fine-tuned RoBERTa on PAN dataset (e.g., 57.3% vs. 55.3% w.r.t. average classification accuracy). Therefore, ChatGPT_ZS exhibits incredible text-based personality recognition ability under zero-shot setting. Since the SOTA model is a task-specific fully-supervised model with complex architecture for personality recognition task, the performance of ChatGPT_ZS falls far behind that of the SOTA model on the two datasets (e.g., 57.3% vs. 67.5% w.r.t. average classification accuracy on PAN dataset). However, another interesting observation is that compared with Essays dataset (i.e., the relatively large-scale dataset), ChatGPT_ZS shows a relatively higher AIP on PAN dataset (i.e., the relatively small-scale dataset). For example, the AIP of ChatGPT_ZS against the SOTA model on Essays and PAN datasets are -29.0% and -15.1% respectively. Furthermore, ChatGPT_ZS even surpasses the SOTA model when predicting personality trait A on PAN dataset (i.e., 70.0% vs. 66.3%). The possible reason is that PAN dataset provides relatively fewer training data for the fully-supervised SOTA model, preventing it from fully learning the differences in personality levels. In contrast, ChatGPT_ZS does not require training data and relies solely on its existing knowledge under zero-shot setting, narrowing the performance gap between ChatGPT_ZS and the SOTA model.
Results of zero-shot CoT prompting. Table <ref> and Table <ref> reveal that zero-shot CoT prompting could effectively enhance ChatGPT's ability on text-based personality recognition task. For example, ChatGPT_CoT increases its average classification accuracy from 57.3% to 60.7% on PAN dataset when compared with ChatGPT_ZS. As for reason, with the help of zero-shot CoT prompting, ChatGPT_CoT can perform more complex logical reasoning, so as to accurately complete the personality prediction task. Besides, ChatGPT_ZS only provides final prediction results (see Figure <ref>), while ChatGPT_CoT could provide additional natural language explanations for its prediction results in most cases (see Figure <ref>). The natural language explanations generated by ChatGPT_CoT not only enhance users' trust in the prediction results but also enables developers to obtain a better understanding of the knowledge deficiencies in ChatGPT. To gain a deep insight into the natural language explanations generated by ChatGPT_CoT, we categorize the nature language explanations into three types: (1) None: no explanation or refuse personality recognition; (2) Original Content: only the original text is provided as explanation; (3) Logical Reasoning: logical reasoning based on the original text. Figure <ref> shows the examples of three types of natural language explanations for the prediction of personality trait O, and Figure <ref> illustrates the distribution of three types of natural language explanations on different datasets and personality traits. As depicted in Figure <ref>, on both Essays and PAN datasets, ChatGPT_CoT provides more natural language explanations of the logical reasoning type for the prediction of personality trait O, while offering more natural language explanations of the original content type when identifying personality trait N. With regard to possible reasons, personality trait O reflects whether a person is creative/open-minded (with high level) or reflective/conventional (with low level) <cit.>, which may not be directly presented in person-generated text. Hence, the prediction of personality trait O requires ChatGPT to engage in more logical reasoning for a deeper analysis of given text. For example, as shown in Figure <ref>, based on given text, ChatGPT_CoT infers that the person's text is mostly focused on concrete details and experiences, with little indication of abstract or imaginative thinking. Therefore, ChatGPT_CoT predicts that the person has low O. On the contrary, personality trait N reflects whether a person is emotionally stable (with low level) or emotionally unstable (with high level) <cit.>. Since individuals normally directly express their negative emotions (e.g., anxiety) in their texts, it is relatively easier for ChatGPT_CoT to predict personality trait N based on the original text without logical reasoning. For example, one of natural language explanation of the original content type generated by ChatGPT_CoT for predicting personality trait N is mentions feeling stressed, tense, and worried about health problems and homework overload. Furthermore, as demonstrated in Figure <ref>, compared with Essays dataset, ChatGPT_CoT provides relatively more natural language explanations of the logical reasoning type for personality recognition on PAN dataset. 
The possible reason is that Essays dataset consists of stream-of-consciousness essays written by psychology students under professional guidance, while PAN dataset is composed of tweets written freely by various internet users. Hence, compared with the texts in Essays dataset, the texts in PAN datasets generally contain relatively less valuable information, which increases the difficulty of text-based personality prediction on PAN dataset. Therefore, compared to Essays dataset, ChatGPT_CoT needs to perform more logical reasoning to accomplish personality recognition task accurately on PAN dataset.
Results of one-shot prompting. From Table <ref> and Table <ref>, it can be observed that by providing a demonstration example, ChatGPT's performance has improved on Essays dataset but largely declined on PAN dataset. To be specific, ChatGPT_OS increases its average classification accuracy from 57.4% to 58.2% on Essays dataset when compared with ChatGPT_ZS. However, relative to ChatGPT_ZS, ChatGPT_OS decreases its average classification accuracy from 57.3% to 49.3% on PAN dataset. Regarding possible reasons, on the one hand, as mentioned above, the texts in Essays dataset generally contain more valuable information when compared to PAN dataset. Hence, there is a higher probability of selecting samples containing more invalid information from PAN dataset than from Essays dataset, thereby affecting ChatGPT_OS's learning of the relationship between text and Big-Five personality on PAN dataset. On the other hand, the persons in Essays dataset are all psychology students, while the persons in PAN dataset are various internet users from different age groups (from 18 years old to over 50 years old). Hence, without the corresponding demographic attributes (e.g., age) provided, the demonstration example selected from the training set of PAN dataset may not assist ChatGPT_OS in predicting the personalities of certain groups. For instance, if the demonstration example is generated by a young person, the association between text and personality that ChatGPT_OS learns from this demonstration example may not be helpful in predicting the personality of an old person.
Results of zero-shot level-oriented prompting. Table <ref> and Table <ref> demonstrate that guiding ChatGPT_CoT to analyze given text from a specified level helps ChatGPT analyze the text in a more targeted manner and complete the personality prediction task more precisely. For example, by guiding ChatGPT_CoT_D to analyze given text from document level, its performance on Essays dataset can rival the performance of ChatGPT_OS (58.3% vs. 58.2% w.r.t. average classification accuracy). Similarly, on PAN dataset, when ChatGPT_CoT_S is guided to analyze given text from sentence level, its average classification accuracy shows a notable improvement when compared to ChatGPT_CoT, rising from 57.3% to 62.7%. We believe the possible reason is that the texts in Essays dataset were written within a limited time frame, making it more suitable to conduct an overall analysis from document level. On the other hand, the texts in PAN dataset are composed of tweets posted at different times. Hence, it is more appropriate to analyze given text in PAN dataset from sentence level, which helps mine the diverse individual information reflected in different tweets. This discovery not only helps optimize existing promptings for text analysis but also offers new insights into eliciting various abilities of LLMs in a fine-grained manner.
§.§ Fairness of ChatGPT on Personality Recognition (RQ2)
Considering that LLMs may be unfair to certain groups due to social bias in their large pre-training corpora <cit.>, we further investigate the fairness of ChatGPT on the personality prediction task across different groups. To be specific, we adopt ChatGPT_CoT with different demographic attributes for personality prediction on PAN dataset, as PAN dataset provides various demographic attributes, including gender and age (see Table <ref>). Concretely, we modify zero-shot CoT prompting as follows to provide ChatGPT with the specific demographic attribute corresponding to given text:
Analyze the person-generated text, determine the person's level of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High. Note that, the person is [Corresponding Attribute].
Text: "[Text]"
Level: Let's think step by step:
Please refer to Table <ref> for the setting of [Corresponding Attribute]. For example, [Corresponding Attribute] is set to aged between 18 and 24 when the age of the corresponding person is between 18 and 24 years old. To be specific, ChatGPT_CoT_gender and ChatGPT_CoT_age represent ChatGPT with the modified zero-shot CoT promptings, which incorporate gender and age information respectively.
It is apparent from Figure <ref> that the incorporation of demographic attributes impairs the personality prediction ability of ChatGPT_CoT to some extent, especially the integration of age information. For example, relative to ChatGPT_CoT, ChatGPT_CoT_gender and ChatGPT_CoT_age decrease their average accuracy from 55.5% to 55.2% and 54.0% respectively. We speculate that this phenomenon may be due to ChatGPT's biases towards certain groups, which leads to unfair treatment of those groups. In order to better observe ChatGPT's biases on personality prediction task, we first obtain the prediction results of ChatGPT_CoT, ChatGPT_CoT_gender, and ChatGPT_CoT_age towards different groups. We then visualize the proportion of low and high levels in those prediction results. Concretely, Figure <ref> and Figure <ref> show the distribution of the prediction results of ChatGPT_CoT and ChatGPT_CoT_gender towards woman and man groups respectively. In addition, Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref> illustrate the distribution of the prediction results of ChatGPT_CoT and ChatGPT_CoT_age towards different age groups. Take Figure <ref> as an example, the figure represents that among the 174 women in PAN dataset, 51% of them have high O (i.e., ground truth). However, ChatGPT_CoT classifies 74.8% of the 174 women as high O, while ChatGPT_CoT_gender classifies 82.3% of the 174 women as high O. In contrast, as shown in Figure <ref>, among the 174 men in PAN dataset, 47.6% of them have low O (i.e., ground truth). However, ChatGPT_CoT classifies 29.9% of the 174 men as low O, while ChatGPT_CoT_gender classifies 32.0% of the 174 men as low O. In summary, after adding gender information, ChatGPT_CoT_gender classifies more women as high O and classifies more men as low O. This phenomenon suggests that ChatGPT considers women to be more likely to belong to high O when compared to men. In order to make a more intuitive comparison of the prediction results of ChatGPT_CoT, ChatGPT_CoT_gender, and ChatGPT_CoT_age towards different groups, we further visualize the changes of the proportion of high level in the prediction results of ChatGPT_CoT_gender/ ChatGPT_CoT_age relative to ChatGPT_CoT (see Figure <ref>). For example, as displayed in Figure <ref>, for 174 women in PAN dataset, the proportion of women with high A in the prediction results of ChatGPT_CoT_gender has increased by 8.1% when compared to ChatGPT_CoT. Based on Figure <ref>, the biases of ChatGPT towards certain groups can be summarized as follows:
(1) Relative to the man group, the woman group is more likely to exhibit high levels of personality traits O, C, and A.
(2) The older an individual is, the greater the likelihood of her/his personality traits O being low level.
However, these findings are not entirely consistent with existing research. For example, some studies suggest that the woman group is more likely to exhibit high levels of personality traits A and N compared to the man group, whereas gender differences in the other personality traits (i.e., O, C, and E) have been either inconsistent or of negligible magnitude <cit.>. Possible reasons for this could be that, on the one hand, ChatGPT's biases are influenced by the biases of the annotators, which may not be representative. On the other hand, these findings are discovered based solely on the PAN dataset, limiting their generalization to some extent. Nevertheless, this phenomenon serves as a cautionary reminder for researchers to consider fairness when utilizing ChatGPT for personality prediction.
§.§ ChatGPT's Personality Recognition Ability on Downstream Task (RQ3)
We apply the personality data generated by ChatGPT to other downstream tasks for validating the effectiveness of ChatGPT's personality recognition ability. Concretely, we choose sentiment classification task and stress prediction task as the downstream tasks, because existing psychological research indicates that there is a correlation between Big-Five personality and sentiment expression <cit.> as well as stress vulnerability <cit.>. For each task, to make a more comprehensive assessment of the impact of personality data generated by ChatGPT, we first adopt ChatGPT_CoT and fine-tuned RoBERTa to generate the corresponding Big-Five personality based on given text respectively. We then use a basic prompting to elicit the task-related ability (i.e., sentiment classification ability and stress prediction ability) of ChatGPT. Finally, we modify the basic prompting by incorporating different Big-Five personalities and observe the task-related ability of ChatGPT with different modified basic promptings.
To be specific, for the sentiment classification task, we adopt a subset of the Yelp-2 dataset <cit.> for conducting experiments. The reason for not utilizing the complete Yelp-2 dataset is to take into account the cost of using ChatGPT's API. Concretely, we randomly select 500 positive samples and 500 negative samples from the testing set of the Yelp-2 dataset to construct the subset. For the stress prediction task, we choose the Dreaddit dataset, which consists of 715 samples (369 positive samples and 346 negative samples) in its testing set. Specifically, considering that the texts in the PAN dataset, Yelp-2 dataset, and Dreaddit dataset are all web posts, we use fine-tuned RoBERTa trained on PAN dataset to generate personality data. Besides, since both tasks are binary classification tasks, we adopt Accuracy (the higher the better) as the evaluation metric. In addition, the basic promptings used for the sentiment classification task and the stress prediction task are proposed by <cit.> and <cit.>. Please refer to Table <ref> for the details of the unmodified/modified basic promptings.
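Because the exact basic promptings are only listed in Table <ref>, the sketch below illustrates just the general pattern of prepending the predicted Big-Five levels to a downstream-task prompt; the wording of the inserted sentence is ours, not the original template.

```python
def personality_augmented_prompt(basic_prompt, big_five_levels):
    """Prepend predicted Big-Five levels to a sentiment/stress prompt.

    big_five_levels : dict such as {"Openness": "High", ..., "Neuroticism": "Low"}
    basic_prompt    : stands in for the basic prompting of Table <ref>
    """
    trait_text = ", ".join(f"{level} {trait}" for trait, level in big_five_levels.items())
    note = f"Note that, the person's Big-Five personality levels are: {trait_text}.\n"
    return note + basic_prompt
```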
The experimental results are illustrated in Figure <ref>. Note that, ChatGPT_basic represents ChatGPT with the basic prompting, while ChatGPT_basic_PC and ChatGPT_basic_PR denotes ChatGPT with the modified basic promptings, which incorporates the personality data generated by ChatGPT_CoT and fine-tuned RoBERTa respectively. It can be observed that after incorporating the personality data predicted by ChatGPT_CoT, there is an improvement in ChatGPT's performance on both sentiment classification task and stress prediction task. For example, ChatGPT_basic_PC increases its classification accuracy from 96.6% to 97.6% on sentiment classification task when compared to ChatGPT_basic. While for stress prediction task, ChatGPT_basic_PC increases its classification accuracy from 71.3% to 73.0% when compared to ChatGPT_basic. This proves the effectiveness of the personality data generated by ChatGPT_CoT. With an understanding of individuals' Big-Five personalities, ChatGPT can analyze their sentiment expression and stress condition in a more personalized manner. Another interesting finding is that the personality data generated by fine-tuned RoBERTa can help improve the performance of ChatGPT in sentiment classification tasks, but it actually decreases ChatGPT's performance in stress prediction task. We believe that the possible reason for this is that fine-tuned RoBERTa trained on PAN dataset does not generalize well, which results in the poor performance of personality prediction on Dreaddit dataset. In contrast, ChatGPT relies solely on zero-shot CoT prompting to elicit its personality prediction ability and does not require training data, thus exhibiting stronger generalization performance on different datasets.
§ CONCLUSION AND FUTURE DIRECTIONS
In this work, we evaluate the personality recognition ability of ChatGPT with different prompting strategies, and compare its performance with RNN, fine-tuned RoBERTa, and corresponding SOTA model on two representative text-based personality identification datasets. With the elicitation of zero-shot CoT prompting, ChatGPT exhibits impressive personality recognition ability and has strong interpretability for its prediction results. In addition, we find that guiding ChatGPT to analyze text at a specified level helps improve its ability to predict personality, which proves the effectiveness of level-oriented prompting strategy. Moreover, we discover that ChatGPT exhibits unfairness to some sensitive demographic attributes, leading to unfair treatment of some specific groups when predicting personality. Besides, we apply the personality data inferred by ChatGPT in other downstream tasks and achieve performance improvement to some extent. This proves that ChatGPT's personality prediction ability is effective and has high generalization performance.
As for future work, on the one hand, we would like to apply level-oriented prompting strategy to more NLP tasks for observing its effectiveness in mining text information. On the other hand, with the continuous emergence of various LLMs, we are interested in exploring the construction of domain-specific LLMs based on psychological data in order to enhance the personality recognition ability of LLMs.
§ ACKNOWLEDGMENT
This work is funded by Science and Technology Commission of Shanghai Municipality, China (under project No. 21511100302), National Natural Science Foundation of China (under project No. 61907016), Natural Science Foundation of Shanghai (under project No. 22ZR1419000), the Research Project of Changning District Science and Technology Committee (under project No. CNKW2022Y37), and the Medical Master's and Doctoral Innovation Talent Base Project of Changning District (under project No. RCJD2022S07). In addition, it is also supported by The Research Project of Shanghai Science and Technology Commission (20dz2260300) and The Fundamental Research Funds for the Central Universities.
§.§ CRediT Authorship Contribution Statement
Yu Ji: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Writing-Original Draft, Writing-Review and Editing. Wen Wu: Conceptualization, Methodology, Formal analysis, Investigation, Writing-Original Draft, Writing-Review and Editing, Supervision. Hong Zheng: Writing-Review and Editing. Yi Hu: Supervision, Writing-Review and Editing. Xi Chen: Writing-Review and Editing. Liang He: Supervision, Writing-Review and Editing.
§.§ Ethical Approval
Not applicable.
§.§ Data Availability
Data will be made available on request.
§.§ Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
http://arxiv.org/abs/2307.04739v1 | 20230710175204 | Particle production from non-minimal coupling in a symmetry breaking potential transporting vacuum energy | [
"Alessio Belfiglio",
"Youri Carloni",
"Orlando Luongo"
] | gr-qc | [
"gr-qc",
"astro-ph.CO"
] |
Affiliations: University of Camerino, Via Madonna delle Carceri, Camerino, 62032, Italy; Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Perugia, Perugia, 06123, Italy; Department of Mathematics and Physics, SUNY Polytechnic Institute, Utica, NY 13502, USA; INAF - Osservatorio Astronomico di Brera, Milano, Italy; Al-Farabi Kazakh National University, Almaty, 050040, Kazakhstan.
We propose an inflationary scenario where the inflaton field is non-minimally coupled to spacetime curvature and inflation is driven by a vacuum energy symmetry breaking potential without specifying a priori whether the inflaton field is small or large. As we incorporate vacuum energy into our analysis, we further explore the implications of a non-zero potential offset in relation to the emergence of inflationary dynamics. Thus, we propose that vacuum energy can transform into particles as a result of the transition triggered by spontaneous symmetry breaking. This entails a vacuum energy cancellation that yields an effective cosmological constant during inflation by virtue of a quasi-de Sitter evolution and shows that the vacuum energy contribution can manifest as geometric particles produced by inflaton fluctuations, with particular emphasis on super-Hubble modes. We conjecture these particles as quasi-particles arising from interaction between the inflaton and spacetime geometry, enhanced by non-minimal coupling. Specifically, we propose that dark matter arises from a pure geometric quasi-particle contribution, and we quantify the corresponding dark matter candidate ranges of mass. In this scenario, we further find that a zero potential offset leads to a bare cosmological constant at the end of inflation, while a negative offset would require an additional kinetic (or potential) contribution in order to be fully-canceled. In this regard, we conclude that the scenario of large field inflaton is preferred since it necessitates a more appropriate selection of the offset. Our conclusion is reinforced as small field inflaton would lead to a significant screening of the Newtonian gravitational constant as inflation ends.
Particle production from non–minimal coupling in a symmetry breaking potential transporting vacuum energy
Orlando Luongo
August 12, 2023
==========================================================================================================
§ INTRODUCTION
The very early universe was marked by a period of intense inflation, during which the universe underwent a phase of rapid expansion fueled by an unknown field dubbed inflaton<cit.>. Understanding its fundamental properties represents one of the most significant challenges for modern cosmology. Interestingly, inflation is not the only phase in which the universe has undergone accelerated expansion. Indeed, an unknown fluid known as dark energy[Dark energy is commonly modeled by virtue of barotropic fluids <cit.>, albeit it appears also licit to employ scalar fields or alternatives to Einstein's gravity <cit.>. ] is responsible for driving the current universe's accelerated expansion <cit.>. Dark energy is expected to exhibit exotic properties, generating a negative pressure to counterbalance the action of gravity.
In this respect, the current understanding of the universe's background cosmology is based on the ΛCDM paradigm <cit.>, where the cosmological constant, Λ, dominates over matter and is typically interpreted through quantum fluctuations of vacuum energy <cit.>. However, this model seems to be theoretically incomplete, suffering from a strong fine-tuning problem and several other caveats, among which the coincidence problem and cosmological tensions, see e.g. <cit.>. These issues can be thought to collectively come from the so-called cosmological constant problem, i.e., the undeniable limitation in reconciling the observed cosmological constant with large theoretical zero-point fluctuations[The cosmological constant problem has significant implications for theoretical physics, since it involves some aspects of quantum gravity but, at the same time, its effects are present on large scales, thus observational signatures are in principle detectable.]. Solving it would represent a crucial step towards understanding physics beyond the current standard models of cosmology and particle physics.
Within this picture, inflation emerges as an interesting playground where quantum mechanics and gravity are intertwined, with the possibility to detect observable effects arising from primordial fluctuations at current time. For this reason, the inflationary stage represents our starting point in order to address the cosmological constant problem. In principle, the cosmological constant receives contributions from different origins and can be neither cancelled by hand nor ignored within any inflationary or dark energy model <cit.>. In this puzzle, unified dark energy models have acquired great importance as they represent unified scenarios in which dark energy emerges as a consequence of dark matter's existence[Those models describe both dark energy and dark matter using a single fluid <cit.>.], see e.g. <cit.>. Even though the idea of unifying the dark sector is widely consolidated, the current accelerated phase and inflation are still thought to be distinct scenarios.
In analogy to unified dark energy models, however, there are attempts to unify inflation with current universe speed up into a single theoretical framework in which the fluid filling the universe energy budget modifies its properties as the universe expands, giving rise to both inflaton and dark energy <cit.>.
In addition, more recently there has been a growing interest in those unified dark energy models addressing the cosmological constant problem by combining inflation with dark energy into a sort of unified inflationary dark sector paradigm<cit.>. Specifically, as a robust example one can focus on quasi-quintessence dark fluid, proposed as a potential solution to eliminate quantum fluctuations of vacuum energy <cit.>. In this respect, it has been shown that cancelling Λ involves the conversion of Λ-energy excess into particles, possibly identified as dark matter quasi-particles <cit.> by means of a geometric mechanism of particle production[This class of models reconciles inflation with dark energy at later times, providing a bare cosmological constant as a consequence of coupling the inflaton with curvature. Within this picture, but even in a more general sense, the Planck satellite measurements did not exclude a priori either small or large fields during inflation <cit.>. Examples of small and large field models that overcome recent bounds presented in Planck's results are the hilltop and Starobinsky potentials, respectively.]<cit.>.
According to these results, it seems that the most prominent inflationary paradigms involve scalar fields transporting vacuum energy<cit.> that end up into a reheating period where thermal energy is converted into particles <cit.>, with a sufficiently high reheating temperature to generate the observed baryon asymmetry <cit.>.
Inspired by unified inflationary scenarios, we explore the implications of a non–minimal Yukawa-like coupling that can potentially address the cosmological constant problem. Specifically, we investigate both the hypotheses of small and large inflaton fields and we consider the universe to undergo a Λ-dominated phase driving inflation induced by a symmetry breaking potential through a quasi-de Sitter phase.
We delve into a non-zero potential offset, providing the cosmological constant contribution, during and after inflation. We thus propose a Λ-cancellation mechanism occurring during inflation, where vacuum energy releases its energy to transform into particles, created from vacuum fluctuations of the inflaton by exploiting field-curvature coupling.
Since these particles derive from an interaction Lagrangian, we consequently conjecture to be particles dressed by the interaction itself, i.e., their nature is revised as quasi-particles derived from geometry. Nevertheless, toward this direction, geometric particle production represents an alternative to gravitational particle production from vacuum <cit.>, widely-used to explain dark matter creation during inflation <cit.>. We discuss the differences between the two approaches, highlighting that geometric production arises from a perturbative approach. Thus, we first study the dynamics of inflaton fluctuations and then the corresponding spacetime perturbations for small and large fields, investigating how those quasi-particles influence the inflationary dynamics and reduce vacuum energy. To do so, we obtain the scattering Ŝ matrix due to the interaction between the field and geometry, within the Dyson approximation, and derive the corresponding probability amplitude for particle production. Afterwards, we demonstrate that geometric production is not influenced by the choice of the initial offset, showing that the potential cannot solely cancel out the cosmological constant, that remains therefore not fully-erased. We then derive the particle number density produced at the end of the slow-roll phase, predicting the corresponding mass limits on geometric quasi-particles that turn out to be heavy fields. In this respect, we also remark that the presence of non-minimal coupling also affects Newton's gravitational constant G, modifying its value throughout inflation, which becomes dependent on the field value throughout the inflationary phase. We underline that large field inflaton is preferred since it necessitates a more appropriate selection
of the offset. Indeed, small field inflation would lead to a significant screening of the Newtonian gravitational constant, after inflation. Physical consequences on the bare cosmological constant, resulting in the end of inflation are thus discussed, recalling that the potential cannot delete the cosmological constant alone. Indeed, we find that a zero potential offset yields a bare cosmological constant at the end of inflation, while a negative offset would require a further but constant kinetic or potential contribution in agreement with the Weinberg no-go theorem <cit.>. Finally, comparisons with previous mechanisms of inflationary cancellation are also discussed.
The paper is structured as follows. In Sect. <ref>, we investigate the symmetry breaking inflationary phase with the introduction of a symmetry breaking potential, non-minimally coupled with scalar curvature. Implications on the Newton's constant and on fluctuations in the slow-roll regime are discussed. In Sect. <ref>, the small field approach is investigated, focusing on super-Hubble scales. Analogously, in Sect. <ref>, the same is developed in the context of large fields. The contribution to dark matter magnitude is explored in Sect. <ref> for both small and large fields, while a direct comparison between the two scenarios is reported in Sect. <ref>. Finally, in Sect. <ref> we report conclusions and perspectives of our work.
§ SYMMETRY BREAKING INFLATIONARY PHASE
We consider a scalar inflaton field non-minimally coupled to spacetime curvature, described by the Lagrangian density
ℒ=1/2g^μν∂_μϕ∂_νϕ-V^ eff(ϕ,R) ,
with
V^ eff(ϕ,R)=V_0+χ/4(ϕ^2-v^2)^2+1/2ξϕ^2R.
The here prompted potential induces spontaneous symmetry breaking, a mechanism already proposed in single-field inflationary scenarios <cit.>. At the same time, the presence of non-minimal coupling is fundamental in order to satisfy the observational constraints provided by Planck measurements <cit.>. In Eq. (<ref>) the quantity V_0 denotes the classical offset of the potential, while v is the vacuum expectation value of the inflaton field. The Lagrangian (<ref>) provides inflationary slow-roll solutions both in the limit of small and large fields, i.e., the scalar field may evolve to its final value after transition (ϕ=v) either from ϕ=0 or from ϕ≫ v. In both cases, the inflaton potential contributes with an additional term to the overall cosmological constant
Λ_ eff=Λ_B+V^ eff(ϕ_min),
where Λ_B represents a “bare" contribution to the cosmological constant and V^ eff(ϕ_min) is the inflaton potential computed in its minimum. The choice of the offset V_0 is thus fundamental in determining the total amount of vacuum energy before and after the phase transition.
§.§ Non-minimal coupling and Newton's gravitational constant
The presence of non-minimal field-curvature coupling in Eq. (<ref>) also implies that Newton's gravitational constant G is modified during inflation, and its final value is determined by the field value at the minimum, namely v. During the inflationary phase, we can write the total action as
𝒮_tot=∫ d^4x√(-g)(-Λ_ eff/16π Gg_μν-ξ/2R ϕ^2+M_P^2/16πR+ℒ_M),
where in ℒ_M we included all matter field contributions, which are not relevant for our argument[During slow-roll, this term simply reduces to the minimally-coupled inflaton Lagrangian density.]. Eq. (<ref>) shows that during inflation the effective value of Newton's constant varies, and it crucially depends on the initial value of the inflaton field. At the end of the phase transition, we can assume ϕ≡ v, and minimizing the action above we find
G_μν+[Λ_ eff/(1-8π Gξ v^2)]g_μν=-8πG̃T_μν,
with T_μν the energy-momentum tensor. The effective gravitational constant is then
G̃≡G/(1-8π Gξ v^2).
Thus, we need 1≫ 8π Gξ v^2 to hold, in order to preserve the value of Newton's constant and recover the original Einstein field equations, with negligible modifications. We will discuss how non-minimal coupling affects G for both small and large fields later on.
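To get a feeling for the size of this screening, the following minimal Python sketch (ours, not part of the original derivation) evaluates the correction factor G̃/G=1/(1-8π Gξ v^2) for a few indicative values of ξ, taking v^2∼ 10^39 GeV^2 as assumed later in the text and natural units with G=1/M_P^2.

import numpy as np

M_P = 1.22e19                 # Planck mass [GeV], so that G = 1/M_P^2 in natural units
G = 1.0 / M_P**2              # Newton's constant [GeV^-2]

def screening_factor(xi, v2):
    """Return G_tilde/G = 1/(1 - 8*pi*G*xi*v^2); values close to 1 mean negligible screening."""
    return 1.0 / (1.0 - 8.0 * np.pi * G * xi * v2)

v2 = 1e39                     # v^2 [GeV^2] (v of the order of the Planck mass)
for xi in (1e-5, 1e-4, 1e-3):
    corr = 8.0 * np.pi * G * xi * v2
    print(f"xi = {xi:.0e}:  8*pi*G*xi*v^2 = {corr:.2e},  G_tilde/G = {screening_factor(xi, v2):.3f}")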
In what follows, we need to better understand the details of the phase transition, focusing in particular on the dynamics of the inflaton fluctuations during slow-roll.
§.§ Inflaton fluctuations during slow-roll
The evolution of the inflaton field during slow-roll is usually derived by assuming a flat Friedmann-Robertson-Walker (FRW) background, described by the line element
ds^2=dt^2-a(t)^2 δ_ij dx^i dx^j,
where a(t) is the scale factor and t denotes cosmic time.
From Eq. (<ref>), we obtain the following equation of motion for the field
ϕ̈+3Hϕ̇-∇^2ϕ/a^2+6ξ(Ḣ+2H^2)ϕ + V(ϕ)_,ϕ=0,
where V denotes the above discussed symmetry breaking potential and R=6(Ḣ+2H^2) is the scalar curvature.
Eq. (<ref>) describes the overall dynamics of the inflaton field: it also includes the contribution of its quantum fluctuations, which are expected to be responsible for the formation of large-scale structures in the universe <cit.> and similarly are the seed of the geometric mechanism of particle production that we are going to discuss.
To investigate quantum fluctuations, for convenience we split the inflaton field by considering fluctuations δϕ( x, t) around its background value:
ϕ(x,τ)=ϕ_0(τ)+δϕ(x,τ).
Inflaton fluctuations will, in turn, induce metric perturbations via Einstein's equations. For scalar perturbations, the general FRW perturbed metric reads
g_00 =a^2(τ)(1+2Ψ(x,τ)),
g_0i =-2a^2(τ)∂_iB(x,τ),
g_ij =-a^2(τ)[(1-2Φ(x,τ))δ_ij+D_ijE(x,τ)],
where we moved to conformal time τ=∫ dt/a(t). The variables Ψ, Φ, B, E denote scalar quantities and D_ij≡∂_i∂_j-1/3δ_ij∇^2.
We choose the conformal Newtonian gauge, where the only non-zero quantities are the potentials Ψ and Φ. Moreover, we can set Φ=Ψ, since there is no anisotropic stress to linear order in our single-field scenario <cit.>. This implies that the first-order FRW perturbed line-element is given by
ds^2=a^2(τ)[(1+2Ψ)dτ^2-(1-2Ψ)δ_ijdx^idx^j].
In conformal time, the equation of motion for fluctuations acquires the form
1/√(-g)∂_μ(√(-g)g^μν∂_νδϕ)+6ξa”/a^3δϕ+χ (δϕ)^3-4Λ^4/v^2δϕ=0,
where we introduced Λ^4=χ v^4/4. Expanding, as usual, field and metric fluctuations in Fourier modes
δϕ(x,τ)=δϕ_k(τ) e^ik·x,Ψ(x,τ)=Ψ_k(τ) e^ik·x,
we obtain[For additional details on the inflaton dynamics in conformal time and on how to compute the perturbed equations, see Appendix <ref>.]
δϕ”_k+2ℋδϕ'_k+k^2δϕ_k-4Ψ'_kϕ'_k
=-ξ(2k^2Ψ_k-6Ψ”_k-24ℋΨ'_k-12a”/aΨ_k-4k^2Ψ_k)ϕ
-(V^ eff_,ϕϕδϕ_k+2Ψ_kV^ eff_,ϕ)a^2.
Each k-mode in Eq. (<ref>) evolves independently to leading order.
In this respect, one can show that inflaton fluctuations oscillate on sub-Hubble scales k ≫ a(τ) H_I, while they freeze out after horizon crossing, on super-Hubble scales k ≪ a(τ) H_I<cit.>. Hence, fluctuations of cosmological interest today were mainly generated at sub-Hubble scales, but propagated at super-Hubble scales for a long interval of time. Analogously, we may also expect super-Hubble modes to mostly contribute to geometric particle production as we will see later. On super-Hubble scales, the perturbation potential is
Ψ_k≃ϵℋδϕ_k/ϕ^',
where ϵ=1-ℋ'/ℋ^2 is the slow-roll parameter. As discussed above, perturbations on super-Hubble scales are approximately frozen. Moreover, we require |ξ|≪ 1 in order to successfully realize inflation for a potential under the form[As we will see, this will specify the self-coupling constant χ in terms of ξ.] of Eq. (<ref>), see e.g. <cit.>. Bearing these prescriptions in mind, Eq. (<ref>) becomes
δϕ”_k+2ℋδϕ'_k+[k^2+(V^ eff_,ϕϕ+2ϵ ℋ/ϕ'V^ eff_,ϕ)a^2]δϕ_k=0.
Rescaling the field by δϕ_k→δχ_k=δϕ_ka, Eq. (<ref>) becomes
δχ”_k-a”/aδχ_k+[k^2+(V^ eff_,ϕϕ+2ϵℋ/ϕ'V^ eff_,ϕ)a^2]δχ_k=0,
that explicitly does not depend on the potential offset. To address the cosmological constant problem, however, we delve into its role during and after inflation.
§.§ Choosing the potential offset
To manifestly deal with the potential offset, let us first specify the potential in Eq. (<ref>). To do so, imposing the slow-roll condition, Eq. (<ref>) yields
δχ”_k+[k^2+V^ eff_,ϕϕa^2-a”/a-6ϵ(a'/a)^2]δχ_k=0.
which, substituting the explicit form of V^ eff(ϕ,R), becomes
δχ”_k+[k^2+V_,ϕϕa^2-(1-6ξ)a”/a-6ϵ(a'/a)^2]δχ_k=0.
The corresponding dynamics can be obtained recalling that we are using the slow-roll hypothesis. Indeed, during this period, a suitable choice for the inflationary scale factor is provided by a quasi-de Sitter ansatz[Further mathematical details are discussed in Appendix <ref>.]
a(τ)=-1/H_I1/τ^(1+ϵ),
where deviations from a pure de Sitter phase are described by the slow-roll parameter ϵ. This ansatz essentially describes an expanding phase dominated by a quasi-constant vacuum energy term, avoiding the direct numerical calculation of the scale factor from the inflaton potential of Eq. (<ref>).
Thus, exploiting Eq. (<ref>) into Eq. (<ref>) and invoking ϵ≪1, we obtain
δχ”_k+[k^2-1/τ^2(-V_,ϕϕ/H_I^2+(1-6ξ)(2+3ϵ)+6ϵ)]δχ_k=0.
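As a quick consistency check (a step we add here, which follows directly from the quasi-de Sitter ansatz), the coefficients in the bracket can be verified as follows: for a(τ)∝|τ|^-(1+ϵ) one has a'/a=-(1+ϵ)/τ and a”/a=(1+ϵ)(2+ϵ)/τ^2≃(2+3ϵ)/τ^2 at first order in ϵ, while 6ϵ(a'/a)^2≃6ϵ/τ^2 and V_,ϕϕa^2≃ V_,ϕϕ/(H_I^2τ^2) up to a slowly varying factor, so that the combination -V_,ϕϕ/H_I^2+(1-6ξ)(2+3ϵ)+6ϵ multiplying 1/τ^2 is recovered.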
As mentioned earlier, the scalar field carries on vacuum energy and so the presence of the offset term, V_0, appears essential, since it affects the overall vacuum energy during the phase transition, see e.g. <cit.>. Hence, at this stage, it appears crucial to characterize the role of the offset within the inflationary dynamics.
Obviously, if we assume V_0=0, both large and small ϕ are allowed during slow-roll <cit.>. Additionally, as shown by Eq. (<ref>), a negligible offset also implies that after inflation we recover the bare contribution to the cosmological constant, namely Λ_B. However, if we assume the existence of a cosmological constant related to vacuum energy, intuitively we expect the offset to have a magnitude comparable to the vacuum energy itself. In this respect, a zero offset more likely corresponds to small fields, whereas a nonzero offset is plausible in large-field models, since we expect it to be proportional to powers of ϕ=v. In summary, we can identify two main characteristics of the offset term V_0:
- if the offset is V_0=0, there is a constant term in the inflaton potential that, if sufficiently large, can be interpreted as vacuum energy before the transition and reads Λ^4= χ v^4/4. In this case, it drives inflation and also determines the scalar curvature, in fact R=8π G(4 Λ^4+T^μ(ϕ)_μ), where T^μ(ϕ)_μ is the trace of the energy-momentum tensor for the scalar field during slow-roll. In this scenario, we require Λ^4∼ 10^64 GeV^4 in order to have a suitable vacuum energy contribution to drive inflation. Accordingly, for this ansatz to be physical, we need to work within a small-field inflationary scenario;
- if V_0 ≠ 0, the situation is very different, since both small and large fields are allowed. However, with the aim of cancelling vacuum energy, for V_0≃ -Λ^4, inflation can be realized only in a large field scenario. Accordingly, inflation is here driven directly by the inflaton that transports the energy χϕ^4/4. Consequently, the vacuum energy is reinterpreted as field-dependent, say Λ^4(ϕ). Thus, applying this choice for the potential offset, vacuum energy is not a constant term during inflation and the equation of state for the field reads P^ϕ/ρ^ϕ≠-1, violating the no-go theorem <cit.> during the phase transition, which is reinterpreted as a metastable phase.
In summary, both small and large fields are in principle viable to describe a symmetry breaking mechanism that transports vacuum energy, thought as responsible for the inflationary phase.
Based on these reasons, we proceed to characterize inflation in the context of both small and large inflaton fields, also incorporating the concept of geometric particle production. We then explore the implications of generating dark matter constituents within these two scenarios, considering the two above options for the offset and with the primary objective to achieve vacuum energy cancellation at the conclusion of the slow-roll phase.
§ SMALL-FIELD SYMMETRY BREAKING INFLATION
Within the small-field scenario, the potential minimum is placed at ϕ=0 before the phase transition.
Hence, in the slow-roll, having f(ϵ,ξ)≡(1-6ξ)(2+3ϵ)+6ϵ, the field is close to zero and
Eq. (<ref>) becomes
δχ”_k +[k^2-1/τ^2(f(ϵ,ξ)+4 Λ^4/v^2H_I^2-3χϕ^2/H_I^2)]δχ_k=0.
Here ϕ≪ v, therefore we can safely neglect the last quadratic contribution, obtaining
δχ”_k+[k^2-1/τ^2(ν^2-1/4)]δχ_k=0,
where ν^2 explicitly reads
ν^2=1/4+4Λ^4/v^2H_I^2+f(ϵ,ξ).
The standard solution of Eq. (<ref>) is
δχ_k(τ)=√(-τ)[c_1(k)H^(1)_ν(-kτ)+c_2(k)H^(2)_ν(-kτ)],
in which c_1(k), c_2(k) are integration constants, while H^(1)_ν(k), H^(2)_ν(k) are Hankel functions of the first and second kind, respectively.
The constants c_1(k) and c_2(k) are specified by selecting the initial vacuum state of the field. Among the various possibilities <cit.>, a common choice is the Bunch-Davies vacuum, which represents a local attractor in the space of initial states for an expanding background <cit.>. It implies that, in the ultraviolet regime k ≫ a H_I, the inflaton fluctuation δχ_k matches the plane-wave solution
δχ_k∼e^-ikτ/√(2k).
Thus, considering the asymptotic Hankel functions,
H^(1)_ν(x≫1) ≃√(2/π x)e^i(x-π/2ν-π/4),
H^(2)_ν(x≫1) ≃√(2/π x)e^-i(x-π/2ν-π/4),
the initial conditions are readily obtained
c_1(k)=√(π)/2e^i(ν+1/2)π/2,
c_2(k)=0.
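As a minimal numerical illustration (ours, in natural units with illustrative values of k, ν and τ), the sketch below evaluates the Bunch-Davies mode function and checks that, deep inside the horizon, it indeed reduces to the plane wave e^-ikτ/√(2k) quoted above.

import numpy as np
from scipy.special import hankel1

def mode_function(k, tau, nu):
    """Bunch-Davies mode: delta_chi_k(tau) = sqrt(-tau) * c1 * H^(1)_nu(-k*tau), with tau < 0."""
    c1 = 0.5 * np.sqrt(np.pi) * np.exp(1j * (nu + 0.5) * np.pi / 2.0)
    return np.sqrt(-tau) * c1 * hankel1(nu, -k * tau)

k, nu, tau = 1.0, 1.5, -200.0          # illustrative values; -k*tau >> 1 (sub-Hubble regime)
bd    = mode_function(k, tau, nu)
plane = np.exp(-1j * k * tau) / np.sqrt(2.0 * k)
print(bd / plane)                      # approximately 1, up to O(1/(k|tau|)) corrections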
§.§ Super-Hubble scales
We now focus on super-Hubble fluctuations. As anticipated in Sec. <ref>, a “classical" description of inflaton fluctuations becomes valid after horizon crossing, where the field modes cease to oscillate in time.
Indeed, on these scales the Hankel function, entering fluctuations, is given by
H^(1)_ν(x≪1)≃√(2/π)e^-iπ/22^(ν-3/2)Γ(ν)/Γ(3/2)x^-ν,
and thus we obtain
δχ_k=e^i(ν-1/2)π/22^(ν-3/2)Γ(ν)/Γ(3/2)1/√(2k)(-kτ)^(1/2-ν).
Restoring then the original perturbation δϕ_k, we finally have
δϕ_k=e^i(ν-1/2)π/22^(ν-3/2)Γ(ν)/Γ(3/2)H_I/√(2k^3)(k/aH_I)^(3/2-ν).
Since ν≃ 3/2, we notice that super-Hubble fluctuations are not exactly constant, but acquire a tiny dependence on time.
In order to determine the geometric perturbation on super-Hubble scales, Eq. (<ref>), we have to solve also the background field equation.
Exploiting the slow-roll condition for the potential, we can neglect second derivatives and write
2ℋϕ'=-V^ eff_,ϕa^2.
Hence, recalling the form of the scale factor, we need to solve the following equation in conformal time
2((1+ϵ)/τ)ϕ'=-(χϕ^3-4Λ^4/v^2ϕ+6ξa”/a^3ϕ)a^2,
and using the quasi-de Sitter scale factor, we obtain
2(1+ϵ)ϕ'=-χϕ^3/(H_I^2τ)-4Λ^4ϕ/(v^2H_I^2τ)+6ξ(2+3ϵ)ϕ/τ.
As above stated, the field value during slow-roll is close to zero, see Fig. <ref>. So, we neglect terms proportional to ϕ^3, yielding
ϕ(τ)=c_0|τ|^(-[3η+6ξ(2+3ϵ)]/2(1+ϵ))=c_0|τ|^j,
where in the last expression we have introduced η≡4Λ^4/3H_I^2v^2 and
j≡ -[3η+6ξ(2+3ϵ)]/2(1+ϵ).
The parameter c_0 is determined by imposing the initial condition on the background field value. Choosing, for example, an initial time τ_i=-1000 GeV^-1 for inflation, we may set ϕ(τ_i)= v/10^20 in order to have small field values during slow-roll.
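The sketch below (an illustration of ours; the values of ξ and ϵ are indicative assumptions, while Λ^4, v^2 and the initial condition are those quoted in the text and H_I is an indicative inflationary scale) fixes c_0 from ϕ(τ_i)=v/10^20 and evaluates the nearly frozen background during slow-roll.

import numpy as np

Lambda4 = 1e64                      # vacuum energy [GeV^4]
v2      = 1e39                      # v^2 [GeV^2]
H_I     = 3e14                      # indicative inflationary Hubble rate [GeV]
xi, eps = 1e-5, 1e-2                # non-minimal coupling and slow-roll parameter (assumed)

eta = 4.0 * Lambda4 / (3.0 * H_I**2 * v2)
j   = -(3.0 * eta + 6.0 * xi * (2.0 + 3.0 * eps)) / (2.0 * (1.0 + eps))

tau_i = -1000.0                     # initial conformal time [GeV^-1]
phi_i = np.sqrt(v2) / 1e20          # small-field initial condition phi(tau_i) = v / 10^20
c0    = phi_i / abs(tau_i)**j       # integration constant fixed by the initial condition

def phi_background(tau):
    """Slow-roll background solution phi(tau) = c0 |tau|^j."""
    return c0 * abs(tau)**j

for tau in (-1000.0, -100.0, -10.0, -1.0):
    print(f"tau = {tau:8.1f} GeV^-1   phi = {phi_background(tau):.4e} GeV")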
Similarly, in order to have fluctuations comparable to the background, we need to rescale the coefficients c(k) in Eqs. (<ref>) so that
δϕ→1/αδϕ.
Such normalization is required to satisfy the condition | δϕ| ≪ϕ throughout the whole slow–roll phase. Then, the geometric perturbation, Eq. (<ref>), on super-Hubble scales takes the form
Ψ_k(τ)=-ϵ/αe^i(ν-1/2)π/22^(ν-3/2)Γ(ν)/Γ(3/2)ℋ/(jc_0|τ|^(j-1))×
H_I/√(2k^3)(k/aH_I)^(3/2-ν),
and its behavior is shown in Fig. <ref>.
Selecting now a specific number of e-foldings, N≥ 60, that agrees with current most-accepted values <cit.>, we can derive the time τ_f at which inflation is expected to end, by
N=∫ dt H(t)≃ -∫_τ_i^τ_f dτ H_I/(H_Iτ)=60.
Since we primarily focus on super-Hubble scales, we need to introduce a cut-off time τ^' at which modes k< a(τ^') H_I already crossed the horizon and then study the evolution of these modes until the end of inflation, i.e., in the interval [τ^', τ_f].
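For a quasi-de Sitter stage the integral above gives N≃ln(τ_i/τ_f), so the relevant times can be fixed at a glance; the short sketch below (ours, with an arbitrary choice of the number of e-foldings elapsed before the cut-off) makes this explicit.

import numpy as np

tau_i = -1000.0                    # initial conformal time [GeV^-1], as chosen in the text
N_tot = 60.0                       # required number of e-foldings

# N = -int_{tau_i}^{tau_f} dtau/tau = ln(tau_i/tau_f)  =>  tau_f = tau_i * exp(-N)
tau_f = tau_i * np.exp(-N_tot)

# Cut-off time tau' after which the retained modes are all super-Hubble
# (the choice N_cross = 5 e-foldings is only an illustrative assumption):
N_cross   = 5.0
tau_prime = tau_i * np.exp(-N_cross)

print(f"tau_f     = {tau_f:.3e} GeV^-1")       # ~ -8.8e-24 GeV^-1
print(f"tau_prime = {tau_prime:.3e} GeV^-1")
print(f"super-Hubble window at tau': |k| < 1/|tau'| = {1.0/abs(tau_prime):.3e} GeV (de Sitter limit)")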
Geometric particle production due to inflaton fluctuations can be quantified resorting to a perturbative approach <cit.>. We start from the first-order Lagrangian describing interaction between fluctuations and spacetime perturbations,
ℒ_I=-1/2√(-g_(0))H^μνT^(0)_μν,
where H^μν is related to the metric perturbations and T_μν^(0) is the zero-order energy-momentum tensor for the field fluctuations[All the mathematical details concerning geometric particle production are reported in Appendix <ref>.]. By expanding the corresponding Ŝ matrix at first order in Dyson series, one can show that the number density of particles computed at a specific time τ is <cit.>
N^(2)(τ)=1/(2π a(τ))^3∫ d^3q d^3p |⟨0|Ŝ|p,q⟩|^2,
where normalization is over the comoving volume and ⟨ 0 |Ŝ| p, q ⟩ is the first-order probability amplitude associated to particle pair production with momenta p and q. We have neglected the contribution due to gravitational particle production, as discussed in Appendix <ref>.
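The structure of this phase-space integration can be sketched numerically as follows (a schematic illustration of ours: the function pair_amplitude below is a hypothetical placeholder, not the physical amplitude, which is constructed from the expressions derived in the remainder of this section; momenta are restricted to a super-Hubble shell).

import numpy as np

def pair_amplitude(p, q):
    """HYPOTHETICAL placeholder for <0|S|p,q>: isotropic and decaying with |p|, |q| (illustration only)."""
    return 1.0 / ((1.0 + np.sum(p * p, axis=-1)) * (1.0 + np.sum(q * q, axis=-1)))

def number_density(a_tau, k_min, k_max, n_samples=100_000, seed=0):
    """Monte-Carlo estimate of N^(2) = (2 pi a)^-3 int d^3q d^3p |<0|S|p,q>|^2 over a momentum shell."""
    rng = np.random.default_rng(seed)

    def sample_shell(n):
        r     = rng.uniform(k_min**3, k_max**3, n) ** (1.0 / 3.0)   # uniform in volume within the shell
        cos_t = rng.uniform(-1.0, 1.0, n)
        phi   = rng.uniform(0.0, 2.0 * np.pi, n)
        sin_t = np.sqrt(1.0 - cos_t**2)
        return np.stack([r*sin_t*np.cos(phi), r*sin_t*np.sin(phi), r*cos_t], axis=1)

    p, q = sample_shell(n_samples), sample_shell(n_samples)
    shell_vol = 4.0 * np.pi / 3.0 * (k_max**3 - k_min**3)
    integrand = np.abs(pair_amplitude(p, q))**2
    return shell_vol**2 * integrand.mean() / (2.0 * np.pi * a_tau)**3

print(number_density(a_tau=1.0, k_min=1e-3, k_max=1e-1))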
V(ϕ)≃Λ^4-2Λ^4/v^2ϕ^2,
thus the zero-order energy-momentum tensor associated to fluctuations is indicated as follows
T^(0)_μν≃ ∂_μδϕ∂_νδϕ
-1/2g_μν^(0)[g^ρσ_(0)∂_ρδϕ∂_σδϕ
-2Λ^4+4Λ^4/v^2(δϕ)^2]
-ξ[∇_μ∂_ν-g_μν^(0)∇^ρ∇_ρ+R^(0)_μν-1/2R^(0)g_μν^(0)](δϕ)^2.
Accordingly, the total probability amplitude for pair production is
⟨p,q|Ŝ|0⟩=-i/2∫ d^4x 2a^4H^μν[∂_(μδϕ^*_p∂_ν)δϕ^*_q
-1/2η_μνη^ρσ∂_(ρδϕ^*_p∂_σ)δϕ^*_q-g_μν^(0)2Λ^4/v^2δϕ^*_pδϕ^*_q
-ξ(∇_μ∂_ν-g_μν^(0)∇^ρ∇_ρ+R^(0)_μν-1/2R^(0)g^(0)_μν)δϕ^*_pδϕ^*_q]
× e^-i(p+q)·x,
which requires proper normalization with respect to the zero-order matrix element for field fluctuations.
We also remark that time integration has to be performed in the interval [τ^', τ_f] previously discussed. This implies that modes of interest are inside the Hubble horizon at the beginning of inflation, but are all in super-Hubble form starting from τ=τ^', namely[This approach inevitably leads to underestimate the total number of particles produced, since we neglect the contribution of modes that exit Hubble horizon after τ^'. We plan to come back to this point in future works, in order to refine our estimate of geometric particles produced during slow-roll.]
a(τ_i) H_I < | k| < a(τ^') H_I.
Thus, Eq. (<ref>) can be written more compactly as
⟨ p, q |Ŝ| 0 ⟩=-i/2∫ d^4x 2a^2( A_0(x,τ)+A_1(x,τ)
+A_2(x,τ)+A_3(x,τ)),
where
A_0(x,τ)=2Ψ[∂_0δϕ^*_p∂_0δϕ^*_q-1/2η^ρσ∂_ρδϕ^*_p∂_σδϕ^*_q
-2a^2Λ^4/v^2δϕ^*_pδϕ^*_q-ξ(∂_0∂_0-a'/a∂_0-η^ρσ∂_ρ∂_σ
-3(a'/a)^2)δϕ^*_pδϕ^*_q] e^-i(p+q)·x,
and
A_i(x,τ)=2Ψ[∂_iδϕ^*_p∂_iδϕ^*_q+1/2η^ρσ∂_ρδϕ^*_p∂_σδϕ^*_q
+2a^2Λ^4/v^2δϕ^*_pδϕ^*_q-ξ(∂_i∂_i+3a'/a∂_0+η^ρσ∂_ρ∂_σ
+2a”/a-(a'/a)^2)δϕ^*_pδϕ^*_q] e^-i(p+q)·x,
for i=1,2,3.
Finally, in order to obtain numerical values for the total number of particles, we need constraints over the vacuum energy density, Λ^4, and self-coupling constant, χ.
On the one hand, if we assume vacuum energy domination during inflation, the Hubble rate reads
H(t)^2≡ H_I^2≃(8π G/3)Λ^4,
and, following the Planck satellite results <cit.>, we fix
H_I/M_P≃ 2.5 · 10^-5,
so that vacuum energy, in order to avoid fine-tuning, needs to lie within the interval Λ^4≤ 10^65 GeV^4.
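As a rough numerical cross-check (ours; the prefactor depends on the Planck-mass convention, and here the reduced Planck mass with 8π G=1/M_P^2 is assumed), the quoted ratio H_I/M_P indeed corresponds to a vacuum energy close to this bound.

import numpy as np

M_pl    = 2.44e18                  # reduced Planck mass [GeV], 8*pi*G = 1/M_pl^2 (assumed convention)
H_I     = 2.5e-5 * M_pl            # inflationary Hubble rate [GeV]
Lambda4 = 3.0 * H_I**2 * M_pl**2   # from H_I^2 = (8*pi*G/3) * Lambda^4
print(f"Lambda^4 ~ {Lambda4:.1e} GeV^4")   # ~ 7e64 GeV^4, i.e. of order the 1e65 GeV^4 bound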
Furthermore, the self-coupling constant, χ, is usually constrained as function of ξ in order to ensure density inhomogeneities of the proper size during inflation <cit.>. One usually requires a very small ratio
√(χ/ξ^2)∼ 10^-5,
to have agreement with recent observational data <cit.>. However, for small-field inflation this ratio cannot be fulfilled without drastically altering the value of the gravitational constant G.
For example, selecting Λ^4=10^64 GeV^4 and asking v to be of the order of the Planck mass, namely v^2 ∼ 10^39 GeV^2, we would have χ∼ 10^-14. Then, only assuming a very small coupling constant ξ≤ 10^-5, we get
8π G ξ v^2≤ 10^-3,
so that field-curvature coupling would not significantly screen Newton's gravitational constant after inflation. This choice, however, leads to a significant violation of the condition in Eq. (<ref>).
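The tension above can be reproduced at the order-of-magnitude level with the following short sketch (ours; parameter values are those quoted in the text, with G=1/M_P^2 assumed).

import numpy as np

M_P     = 1.22e19                  # Planck mass [GeV]; G = 1/M_P^2 in natural units
G       = 1.0 / M_P**2
Lambda4 = 1e64                     # vacuum energy [GeV^4]
v2      = 1e39                     # v^2 [GeV^2]
xi      = 1e-5                     # small non-minimal coupling

chi = 4.0 * Lambda4 / v2**2        # from Lambda^4 = chi v^4 / 4
print(f"chi            = {chi:.1e}")                  # ~ 4e-14
print(f"8*pi*G*xi*v^2  = {8*np.pi*G*xi*v2:.1e}")      # ~ 1e-3: negligible screening of G
print(f"sqrt(chi)/xi   = {np.sqrt(chi)/xi:.1e}")      # ~ 2e-2, far above the ~1e-5 required ratio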
Fig. <ref> shows the number density of particles produced during inflation as a function of vacuum energy Λ^4, for different values of the cut-off time τ^'. The values of Λ^4, χ and the corresponding number density are collected in Tab. <ref>.
§ LARGE-FIELD SYMMETRY BREAKING INFLATION
As above stated, in the large-field inflation, considering the offset V_0=-Λ^4, the role of vacuum energy driving inflation is directly played by the self-interacting term χϕ^4/4, thus implying that the field initial value lies around Planck mass.
Consequently, in large field inflationary scenarios we can set ϕ≫ v during slow-roll <cit.>, so the potential in Eq. (<ref>) simplifies to
V^ eff(ϕ,R)≃χ/4ϕ^4+1/2ξ Rϕ^2.
Hence, the background field equation in slow-roll regime turns out to be
2ℋϕ'=-(χϕ^3+ξ R ϕ)a^2,
and by plugging above the quasi-de Sitter scale factor, Eq. (<ref>), we get
2((1+ϵ)/τ)ϕ'=χϕ^3/(H_I^2τ^2)+6ξ(2+3ϵ)ϕ/τ^2,
that can be rewritten as
ϕ'-[3ξ(2+3ϵ)/((1+ϵ)τ)]ϕ=[χ/(2(1+ϵ)H_I^2τ)]ϕ^3.
Eq. (<ref>) takes the form of a Bernoulli differential equation, ϕ'+p(τ)ϕ=q(τ)ϕ^n, where p(τ)=-3ξ(2+3ϵ)/((1+ϵ)τ), q(τ)=χ/(2(1+ϵ)H_I^2τ) and n=3.
Thus, replacing ω=ϕ^(1-n), we find that Eq. (<ref>) becomes a linear differential equation of the form [1/(1-n)]ω'+p(τ)ω=q(τ), reading
ω'/2+[3ξ(2+3ϵ)/((1+ϵ)τ)]ω=-χ/(2(1+ϵ)H_I^2τ).
By virtue of our comments on the large field inflation and following Ref. <cit.>, we require super-Planckian field values during slow-roll and, so, in Fig. <ref>, we display the background field, ϕ, with an indicative initial condition placed around ϕ(τ_i)≃ 5 M_P.
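To visualize this slow evolution, one can integrate the Bernoulli equation above directly; the sketch below (ours, with illustrative values of χ, ξ, ϵ and H_I that are not the ones adopted for our tables) rewrites it in terms of the e-fold number N=ln(τ_i/τ), so that the 1/τ factors drop out, and solves it numerically.

import numpy as np
from scipy.integrate import solve_ivp

chi   = 1e-13            # quartic self-coupling (assumed, illustrative)
xi    = 1e-3             # non-minimal coupling (assumed, illustrative)
eps   = 1e-2             # slow-roll parameter (assumed)
H_I   = 3e14             # indicative inflationary Hubble rate [GeV]
M_P   = 1.22e19          # Planck mass [GeV]
phi_i = 5.0 * M_P        # super-Planckian initial condition, as in the text

# In terms of N = ln(tau_i/tau) one has d phi/dN = -tau * d phi/d tau, hence:
def dphi_dN(N, phi):
    return -(3.0 * xi * (2.0 + 3.0 * eps) / (1.0 + eps)) * phi \
           - (chi / (2.0 * (1.0 + eps) * H_I**2)) * phi**3

sol = solve_ivp(dphi_dN, (0.0, 60.0), [phi_i], dense_output=True, rtol=1e-8)
for N in (0, 20, 40, 60):
    print(f"N = {N:2d}:  phi = {sol.sol(N)[0] / M_P:.2f} M_P")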
After identifying the relevant background information, we proceed to analyze the fluctuations. Once again, Eq. (<ref>) remains valid when considering the fluctuations of the inflaton.
Thus, since in this case V_,ϕϕ= 3 χϕ^2, in the slow-roll approximation the condition ϕ̇^2≪ V^ eff(ϕ,R) is satisfied and, consequently, both the field and its time derivative are quasi-constant during inflation.
Accordingly, as a plausible treatment to solve Eq. (<ref>) in the large field case, we can replace ϕ^2 with its mean value during inflation,
ϕ^2→(∫_τ_i^τ_fϕ^2dτ)/(τ_f-τ_i)=⟨ϕ^2⟩,
where the initial and final times, τ_i and τ_f, can be derived by selecting the number of e-foldings, see Eq. (<ref>).
Hence, we obtain
δχ”_k+[k^2-1/τ^2(f(ϵ,ξ)-3χ⟨ϕ^2⟩/H_I^2)]δχ_k=0,
and the usual solution, Eq. (<ref>), is recovered with
ν^2=1/4-3χ⟨ϕ^2⟩/H_I^2+f(ϵ,ξ).
§.§ Super-Hubble scales
The geometric perturbation on super-Hubble scales can be expressed again in the form reported in Eq. (<ref>), without the normalizing factor α. Its behavior is shown in Fig. <ref>.
It is worth rewriting the (approximate) Lagrangian for the large-field case, say ℒ_LF, which reads
ℒ_LF≃1/2g^μν∂_μϕ∂_νϕ-χ/4ϕ^4-1/2ξ Rϕ^2,
which mainly differs from the small-field case. Here, the zero-order energy-momentum tensor for the fluctuations reads
T^(0)_μν =∂_μδϕ∂_νδϕ-1/2g_μν^(0)[g^ρσ_(0)∂_ρδϕ∂_σδϕ-χ/2(δϕ)^4]
-ξ[∇_μ∂_ν-g_μν^(0)∇^ρ∇_ρ+R^(0)_μν-1/2R^(0)g_μν^(0)](δϕ)^2.
Immediately, from the above Lagrangian and from T_μν^(0), we can notice that the potential offset V_0 does not enter in the interacting potential. This means that it does not contribute to the amount of geometric particles produced. The same reasoning would apply to the small field scenario.
Moreover, to single out the most prominent contribution to particle creation, we can naively neglect the quartic term in fluctuations. This term leads to divergences when computing probability amplitudes and thus would require a renormalization procedure. This fact has been thoroughly investigated in a detailed study of the χϕ^4 theory in curved spacetime, as one can see in Ref. <cit.>, where different approaches are discussed, including the here-employed interaction picture. In this way, the main contribution to Eq. (<ref>) is provided by field-curvature coupling and, so, the total probability amplitude for pair production can be written again in the form of Eq. (<ref>), where now
A_0(x,τ)=2Ψ[∂_0δϕ^*_p∂_0δϕ^*_q-1/2η^ρσ∂_ρδϕ^*_p∂_σδϕ^*_q
-ξ(∂_0∂_0-a'/a∂_0-η^ρσ∂_ρ∂_σ-3(a'/a)^2)δϕ^*_pδϕ^*_q]
× e^-i(p+q)·x,
and
A_i(x,τ)=2Ψ[∂_iδϕ^*_p∂_iδϕ^*_q+1/2η^ρσ∂_ρδϕ^*_p∂_σδϕ^*_q
-ξ(∂_i∂_i+3a'/a∂_0+η^ρσ∂_ρ∂_σ+2a”/a-(a'/a)^2)δϕ^*_pδϕ^*_q]
× e^-i(p+q)·x.
In analogy to Eq. (<ref>), during slow-roll we approximate
ϕ→(∫_τ_i^τ_fϕ dτ)/(τ_f-τ_i)≡⟨ϕ⟩, ϕ'→(∫_τ_i^τ_fϕ'dτ)/(τ_f-τ_i)≡⟨ϕ'⟩,
that turn out to be suitable replacements in order to obtain analytical results for the total amplitude of particle production.
Such probability amplitude can be then computed following the same prescriptions for super-Hubble modes discussed in Sec. <ref>. As in small-field inflation, the total amplitude requires proper normalization with respect to the zero-order Lagrangian associated to inflaton fluctuations.
It is evident that in the case of large-field inflation, as expressed by the Lagrangian in Eq. (<ref>), there is an absence of a constant term. Consequently, the “cosmological constant” undergoes evolution during the inflationary period, aligning with our previous expectations. In other words, the cosmological constant is no longer constant. Apparently, this may contradict the no-go theorem <cit.>, limiting the large field case. However, this is plausible, at least for two main reasons, namely:
- the phase in which the cosmological constant is no longer a pure constant represents a metastable case, associated with a phase transition, so that violating the no-go theorem during this stage appears licit;
- at the end of inflation, the constant contribution is tuned by the mechanism of phase transition and therefore restored, so that there is no further violation of the no-go theorem after the transition.
The varying cosmological constant value can be denoted as Λ^4(ϕ)=χϕ^4/4 and appears similar to some cases developed in the literature <cit.>, albeit severely circumscribed within the phase transition only and not elsewhere.
Consequently, the Hubble parameter during inflation is given by
H^2_I=8π G/3ρ_ϕ,
where the corresponding inflaton energy density is determined as
ρ_ϕ=1/2ϕ̇^2+χ/4ϕ^4+1/2ξ Rϕ^2.
In slow-roll regime, ϕ̇^2≪ V^ eff, therefore the energy density becomes
ρ_ϕ=χ/4ϕ^4+1/2ξ Rϕ^2=χ/4ϕ^4+1/2ξ(8π G T^μ_μ)ϕ^2,
having T^μ_μ=ξ Rϕ^2+χϕ^4 and
R=8πGχϕ^4/(1-8π Gξϕ^2).
After the phase transition, the field also screens the gravitational constant G.
However, in the large field case ⟨ϕ⟩^2≫ v^2, so 8π Gξ v^2≪ 1 for realistic values of the coupling constant ξ.
In this way, Eq. (<ref>) can be easily satisfied and Einstein field equations are restored without requiring the fine-tuning ξ≤ 10^-5.
Fig. <ref> shows the number density of particles produced during inflation as a function of vacuum energy, Λ^4, assuming different values for the cut-off time τ^'.
In Fig. <ref>, the number density of particles as a function of the self-coupling constant χ is depicted, selecting the range in which the constraint of Eq. (<ref>) for non-minimal coupling inflation is fulfilled, as discussed in Sec. <ref>.
The values adopted to display our plots are reported in Tabs. <ref>-<ref>.
§ CONTRIBUTION TO DARK MATTER MAGNITUDE
Geometric particles produced during inflation could admit an interpretation in terms of dark matter <cit.>. Dark matter particles of gravitational origin have been recently proposed in several scenarios, usually assuming scalar or vector fields, but also considering higher-spin candidates <cit.>.
We here suggest that dark matter may arise from the perturbative approach previously discussed, where the Yukawa-like interaction term ξ/2Rϕ^2 ‘dresses' the inflaton field ϕ by the interaction. Examples of such a model are also possible in condensed matter, see e.g. <cit.>, and in gravitational contexts, see e.g. <cit.>.
Under this interpretation, the corresponding particles look like geometric quasi-particles, representing excitations of the inflaton field and the scalar curvature.
As shown in the previous sections, geometric particle production typically leads to particle pairs with different momenta, representing the main difference with respect to purely gravitational production. There, we expect that the sole expansion of the universe is responsible for the process, implying the creation of particle-antiparticle pairs with same momenta.
Considering this, we can disregard the mentioned pairs, assuming that they undergo annihilation before making a significant contribution to the abundance of dark matter. Consequently, we can proceed to quantify the mass of our proposed candidate for geometric dark matter, simplifying for the moment the analysis by assuming negligible Bogoliubov coefficients related to the gravitational production of particles[As discussed in Appendix <ref>, nonzero Bogoliubov coefficients may also enhance the geometric mechanism of production, thus resulting in a larger number of geometric particles produced during slow-roll. We plan to come back to this point in future works, so to include such contribution in our treatment.].
After the reheating process, the Hubble parameter in the radiation-dominated era is given by
H(z)^2≃ H_0^2Ω_r,0(1+z)^4,
where Ω_r,0 is the current radiation energy density. During radiation phase, the Hubble parameter is related to the temperature as H^2∼ G T^4, implying that the redshift z at the beginning of radiation-dominated era can be determined as
z(T_r)≃(G T_r^4/(H_0^2Ω_r,0))^(1/4),
where T_r denotes the temperature at the end of reheating.
Assuming that all dark matter was generated during inflation via the geometric mechanism previously described, we can estimate its mass m^* from the corresponding number density, namely
ρ_DM = m^* N^(2).
Thus, we obtain
m^*(T_r)=ρ_DM/N^(2)≃(G T_r^4/(H_0^2Ω_r,0))^(3/4)ρ_DM,0/N^(2),
since
ρ_DM(τ_r)=ρ_DM,0(1+z)^3 holds for dark matter. The number density N^(2) is given by Eq. (<ref>), where the probability amplitude for particle production has been previously derived in both limits of small and large field inflation. In principle, the number density might be computed at the end of preheating, say at τ_r. However, we can assume that the scale factor does not vary significantly from the end of inflation to the first stage of radiation-dominated phase, thus setting a(τ_f)=a(τ_r).
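For orientation, the estimate can be carried out explicitly as in the short sketch below (ours; the cosmological inputs are standard reference values, while the number density N^(2) is an illustrative placeholder, since the values actually used are those collected in our tables).

import numpy as np

M_P      = 1.22e19          # Planck mass [GeV]
G        = 1.0 / M_P**2     # Newton's constant [GeV^-2]
H_0      = 1.44e-42         # Hubble constant [GeV] (h ~ 0.67)
Omega_r0 = 9.0e-5           # present radiation density parameter (photons + neutrinos)
rho_dm0  = 9.6e-48          # present dark-matter energy density [GeV^4] (~ 0.26 rho_crit)

def z_reheating(T_r):
    """Redshift at the onset of radiation domination, from H^2 ~ G T^4."""
    return (G * T_r**4 / (H_0**2 * Omega_r0))**0.25

def dm_mass(T_r, N2):
    """m* = rho_DM(z)/N^(2), with rho_DM(z) = rho_dm0 (1+z)^3 and N^(2) the comoving number density."""
    return rho_dm0 * (1.0 + z_reheating(T_r))**3 / N2

T_r = 1.0                   # reheating temperature [GeV]
N2  = 1e-10                 # illustrative number density [GeV^3]; actual values are tabulated
print(f"z(T_r) ~ {z_reheating(T_r):.1e}")
print(f"m*     ~ {dm_mass(T_r, N2):.1e} GeV")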
We now quantify the mass of the dark matter candidate for both small and large field scenarios.
§.§ Small-field domain
In Fig. <ref> we show the mass of the dark matter candidate at fixed temperature, T_r=1 GeV, as a function of vacuum energy Λ^4. Instead, in Fig. <ref> the mass of the dark matter candidate is plotted at fixed vacuum energy, Λ^4=10^64 GeV^4, as a function of the temperature T_r. Finally, in Tabs. <ref> and <ref> dark matter mass values are synthesized as functions of vacuum energy Λ^4 and temperature T_r, respectively.
§.§ Large-field domain
In Fig. <ref> we show the mass of the dark matter candidate at T_r=1 GeV, as a function of vacuum energy Λ^4. Fig. <ref> exhibits how the particle mass varies with respect to the temperature at fixed vacuum energy. Tabs. <ref> and <ref> provide a summary of the mass of the dark matter candidate, as a function of vacuum energy Λ^4 and the temperature T_r, respectively.
§ COMPARISON BETWEEN SMALL AND LARGE FIELD PARTICLE PRODUCTION
Analyzing the symmetry breaking potential of Eq. (<ref>) in the limits of small and large field inflation, it can be deduced that both approaches allow for the production of particles arising from inflaton fluctuations. We also notice that the offset term V_0 does not affect the total amount of particles produced. However, there are important differences in the choice of the parameters:
- in the small-field scenario, vacuum energy is described by the constant term Λ^4=χ v^4/4, so it essentially depends on the field value at the minimum of the potential. If v is chosen to be of the order of Planck mass, then in order to satisfy 8π G v^2 ≪ 1 we need ξ≤ 10^-5. However, this condition violates the requirement of Eq. (<ref>);
- in the large-field scenario, vacuum energy is denoted by Λ^4=χ⟨ϕ⟩^4/4, so it is independent from the value of the minimum v. Instead, it is related to the value of the inflaton field during slow-roll. In this case ⟨ϕ⟩≃ 10^19 GeV for all values of χ used in this work. Since in large field inflation we require ϕ≫ v during slow-roll, we can safely satisfy the condition of negligible screening of Newton's constant at the end of inflation, namely 8 π G ξ v^2 ≪ 1, respecting at the same time the constraint given by Eq. (<ref>).
We then notice that large-field inflation is favored over the small-field approach when dealing with a symmetry breaking potential, since it allows us to satisfy the constraint of Eq. (<ref>) without significantly affecting Newton's gravitational constant at the end of inflation. At the same time, even if the small field scenario can still produce a relevant number of geometric particles during slow-roll, it violates the condition (<ref>) if we require a negligible screening of Newton's gravitational constant due to field-curvature coupling.
For what concerns the mass of the dark matter candidate, we observe that in both scenarios a larger vacuum energy term during inflation implies smaller values for the mass of the geometric quasi-particles produced. Similarly, a larger self-coupling constant increases the number density of dark matter particles produced, thus resulting in a smaller mass. As shown in Figs. <ref> and <ref>, typical mass values span from a few eV up to the GeV scale for a fixed temperature of T_r=1 GeV, in both limits of small and large fields. Instead, larger masses are obtained in case of larger reheating temperature, see Figs. <ref> and <ref> for the small and large field case, respectively.
Additionally, as previously stated, we remark that our estimates also depend on the cut-off time τ^' through which we study the evolution of super-Hubble fluctuations during slow-roll.
§.§ Effective value of the cosmological constant after inflation
We underlined above that the presence of a nonzero offset V_0 does not increase or decrease the total number of geometric particles produced. However, it contributes to the effective value of the cosmological constant at the end of inflation, as shown by Eq. (<ref>).
Accordingly, in our model the choice V_0=0 is the preferred one, since the potential does not contribute to the cosmological constant after the phase transition. A negative offset may represent a valid alternative, provided this contribution is canceled by some other mechanism. In particular, in our model we do not consider the presence of a kinetic term associated to the scalar field before and after the phase transition, focusing instead on the sole slow-roll phase, where the aforementioned term is clearly negligible.
However, a nonzero kinetic term can play a key role in canceling vacuum energy after the phase transition, as proposed in some recent works. More specifically, in Refs. <cit.>, it has been shown that if local shift symmetry holds, a scalar field describing a single fluid of matter with pressure may cancel the vacuum energy density through the kinetic contribution, which turns out to be constant before and after the phase transition itself. In this case, the presence of a negative offset allows one to avoid the coincidence problem and the corresponding matter fluid exhibits nonzero pressure, acting as an emergent cosmological constant that becomes dominant after the phase transition.
Analogously, in Ref. <cit.>, a constant kinetic term is associated with the evolution of a quasi-quintessence field before and after a small-field inflationary phase. There, the cancellation mechanism results into the difference between the pressure of such dark matter field (that coincides with the kinetic term before transition) and the potential offset.
Coming back to our scenario, the inclusion of an additional kinetic term seems natural in the context of reheating, when the inflaton field is expected to oscillate around its minimum.
We thus expect a nonzero kinetic term after the phase transition, which may then contribute to the total energy density of the scalar field and can be involved in the here-described cancellation mechanism. This aspect warrants further investigation and, as we discuss in the following section, it will be addressed in future work by characterizing the post-inflationary dynamics within the framework developed in this manuscript.
§ FINAL REMARKS AND PERSPECTIVES
We here examined inflation arising from a symmetry breaking potential coupled with curvature, considering both small and large inflaton fields. In so doing, we studied the evolution of inflaton fluctuations during slow-roll and showed that the corresponding potential reproduces inflation adopting a quasi-de Sitter phase. Specifically, we proposed to interpret the existence of inflation induced by a phase transition driven by vacuum energy aiming to address the cosmological constant problem.
Our proposal suggested that vacuum energy is effectively diminished as particles are generated during the phase transition. Rephrasing, this scenario highlights that the cosmological constant problem can be mitigated by converting the degrees of freedom associated with vacuum energy into the aforementioned matter particles.
Hence, we quantified the amount of particles produced by metric perturbations in inflation, tracing them back to the inflaton fluctuations and showing how field-curvature coupling can enhance our mechanism of particle production. We interpreted the aforementioned particles as dark matter particles, and their abundance was calculated for both small and large inflaton fields. While both approaches yielded particle abundances that are consistent with observational constraints, it is only in the case of large fields, with the field minimum lying on Planck scales, that the current value of the gravitational constant can be accurately reproduced.
Furthermore, we emphasized the key distinctions between our geometric mechanism of particle production and the extensively-studied gravitational particle production, which has been proposed as a mechanism for generating dark matter during inflation. In this regard, we proposed that the dark matter component may exist in the form of quasi-particles, arising from the coupling between particles and geometry, dressed by the interaction between the inflaton and scalar curvature.
Accordingly, we computed mass and temperature at which particles arose in order to obtain the expected dark matter abundance. We also discussed the presence of an offset in the potential, showing that in our model the choice of a zero potential offset allows to obtain a bare cosmological constant at the end of inflation. However, a negative offset may represent a better ansatz if a further contribution determined from kinetic or potential energy of the scalar field is taken into account before and after phase transition <cit.>. Moreover, we emphasized that vacuum energy ceases to remain constant during the phase transition, resulting in a varying effective cosmological constant. We thus discussed the physical interpretations of our finding, including its relation to the no-go theorem, as well as potential resolutions to overcome this issue. We then concluded that to fully-erase the cosmological constant, getting rid of the corresponding cosmological constant problem, additional degrees of freedom coming from kinetic and/or potential energy might be taken into account. Consequently, large field inflation with non-minimal coupling appeared favored, being more compatible with theoretical expectations and, furthermore, guaranteeing to the Newton's constant to be compatible with observations after inflation.
In view of the above results, future developments will therefore focus on refining our treatment with the mechanism of vacuum energy cancellation presented in Refs. <cit.>, where kinetic energy plays the role of reproducing the correct bare cosmological constant today.
For this reason, we plan to extend our study including post-inflationary dynamics, in order to provide a less approximate scenario after the phase transition. Finally, we will focus on the fundamental properties of geometric quasi-particles to show whether they can correctly reproduce dark matter abundance as here conjectured.
§ ACKNOWLEDGEMENTS
AB thanks the National Institute for Nuclear Physics for financial support. OL and YC acknowledge Marco Muccino for fruitful discussions toward the topic developed in this paper and, in particular, on reconciling the main subject of this work with the model previously presented. OL is also grateful to Carlo Cafaro, Stefano Mancini and Hernando Quevedo for interesting suggestions and comments. The paper is partially funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan, Grant No. AP19680128.
99Infl0 S. Tsujikawa, Introductory review of cosmic inflation, arXiv:hep-ph/0304257 (2003).
tsurev B. A. Bassett, S. Tsujikawa, and D. Wands, Inflation dynamics and reheating, Rev. Mod. Phys. 78, 537 (2006).
baurev D. Baumann, TASI lectures on inflation, arXiv:hep-th/0907.5424 (2012).
Infl1 J. A. Vazquez, L. E. Padilla, and T. Matos, Inflationary cosmology: from theory to observations, arXiv:1810.09934 [astro-ph.CO] (2021).
eos1
A. Aviles, C. Gruber, O. Luongo, H. Quevedo, Cosmography and constraints on the equation of state of the Universe in various parametrizations, Phys. Rev. D 86, 123516 (2012).
eos2
S. Capozziello, R. D'Agostino, O. Luongo, Extended Gravity Cosmography, Int. J. Mod. Phys. D 28, 10, 1930016 (2019).
peebrev P. J. E. Peebles and B. Ratra, The cosmological constant and dark energy, Rev. Mod. Phys. 75, 559 (2003).
copde E. J. Copeland, M. Sami, and S. Tsujikawa, Dynamics of dark energy, Int. J. Mod. Phys. D 15, 1753 (2006).
LCDM D. Scott, The standard model of cosmology: a skeptic's guide, arXiv:1804.01318 [astro-ph.CO] (2018).
Martin J. Martin, Everything you always wanted to know about the cosmological constant problem
(but were afraid to ask), Comptes Rendus Physique 13, 6, 7 (2012).
burcc C. P. Burgess, The cosmological constant problem: why it's hard to get dark energy from micro-physics, arXiv:hep-th/1309.4133 (2013).
dp9
P. Bull, Y. Akrami, J. Adamek, T. Baker, E. Bellini, et al., Beyond ΛCDM: problems, solutions, and the road ahead, Phys. Dark Univ. 12, 56-99 (2016).
Sakharov A. Sakharov, Vacuum quantum fluctuations in curved space and the theory of gravitation, Sov. Phys. Dokl. 12, 1040 (1968).
CCP0 S. Weinberg, The cosmological constant problem, Rev. Mod. Phys. 61, 1 (1989).
CCP1 S. M. Carroll, W. H. Press and E. L. Turner, The cosmological constant, Annual Rev. Astron. Astrophys. 30, 499-542 (1992).
CCP3 S. Nobbenhuis, The cosmological constant problem,
an inspiration for new physics, arXiv:gr-qc/0609011 (2006).
bou R. Bousso, The cosmological constant, Gen. Rel. Grav. 40, 607 (2007).
Sola J. Solà, Cosmological constant and vacuum energy: old and new ideas, J. Phys.: Conf. Ser. 453, 012015 (2013).
CCP2 H. E. S. Velten, R. F. vom Marttens, and W. Zimdahl, Aspects of the cosmological “coincidence problem”, Eur. Phys. J. C 74, 3160 (2014).
DF0 O. Luongo and D. Tommasini, Modeling dark energy through an ising fluid with network interactions, Int. J. Mod. Phys. D 23, 3 (2014).
DF1 O. Luongo and H. Quevedo, A unified dark energy model from a vanishing speed of sound with emergent cosmological constant, Int. J. Mod. Phys. D 23, 02 (2013).
ude1 R. J. Scherrer, Purely kinetic k essence as unified dark matter, Phys. Rev. Lett. 93, 011301 (2004).
ude2 V. F. Cardone, A. Troisis, and S. Capozziello, Unified dark energy models: A phenomenological approach, Phys. Rev. D 69, 083517 (2004).
ude3 D. Bertacca, M. Bruni, O. F. Piattella, and D. Pietrobon, Unified dark matter scalar field models with fast transition, JCAP 02, 018 (2018).
ude4 S. Capozziello, R. D'Agostino, and O. Luongo, Cosmic acceleration from a single fluid description, Phys. Dark Univ. 20, 1 (2018); S. Capozziello, R. D'Agostino, R. Giambò and O. Luongo, Effective field description of the Anton-Schmidt cosmic fluid, Phys. Rev. D 99, 023532 (2019).
ude5 K. Boshkayev, R. D'Agostino, and O. Luongo, Extended logotropic fluids as unified dark energy models, Eur. Phys. J. C 79, 332 (2019); K. Boshkayev, T. Konysbayev, O. Luongo, M. Muccino, and F. Pace, Testing generalized logotropic models with cosmic growth, Phys. Rev. D 104, 023520 (2021).
DEandInf0 A. Arbey and J. F. Coupechoux, Unifying dark matter, dark energy and inflation with a fuzzy dark fluid, JCAP 01, 033 (2021).
DEandInf1 P. M. Sá, Triple unification of inflation, dark energy, and dark matter in two-scalar-field cosmology, Phys. Rev. D 102, 10, (2020).
DEandInf2 S. D. Odintsov, V. K. Oikonomou Unification of inflation with dark energy in f(R) gravity and axion dark matter, Phys. Rev. D 99, 10 (2019).
DEandInf3 E. Guendelman, E. Nissimov, S. Pacheva Unification of inflation and dark energy from spontaneous breaking of scale invariance, arXiv:1407.6281 [hep-th] (2014).
gao C. Gao, M. Kunz, A. R. Liddle, and D. Parkinson, Unifying dark energy and dark matter from a scalar field different from quintessence, Phys. Rev. D 81, 043520 (2010).
lim E. A. Lim, I. Sawicki, and A. Vikman, Dust of dark energy, JCAP 05, 012 (2010).
luoqq R. D'Agostino, O. Luongo, and M. Muccino, Healing the cosmological constant problem during inflation through a unified quasi-quintessence matter field, Class. Quantum Grav. 39, 195014 (2022).
geocorr A. Belfiglio, O. Luongo, and S. Mancini, Geometric corrections to cosmological entanglement, Phys. Rev. D 105, 123523 (2022).
CCPBelfiglio0 A. Belfiglio, O. Luongo, and S. Mancini, Inflationary entanglement, Phys. Rev. D 107, 103512 (2023).
CCPBelfiglio1 A. Belfiglio, R. Giambò and O. Luongo, Alleviating the cosmological constant problem from particle production, Class. Quantum Grav. 40, 105004 (2023).
Planck Y. Akrami, et al., Planck 2018 results, A&A 641, A10 (2020).
fri J. A. Frieman, Particle creation in inhomogeneous spacetimes, Phys. Rev. D 39, 2 (1989).
ces J. Cespédes and E. Verdaguer, Particle production in inhomogeneous cosmologies, Phys. Rev. D 41, 4 (1990).
stein P. Steinhardt and M. S. Turner, Prescription for successful new inflation, Phys. Rev. D 29, 2162 (1984).
ssb2 F. S. Accetta, D. J. Zoller, and M. S. Turner, Induced-gravity inflation, Phys. Rev. D 31, 3046 (1985).
reh1 L. Kofman, A. Linde, and A. A. Starobinsky, Reheating after inflation, Phys. Rev. Lett. 73, 3195 (1994).
reh2 L. Kofman, A. Linde, and A. A. Starobinsky, Towards the theory of reheating after inflation, Phys. Rev. D 56, 3258 (1997).
reh3 M. Desroche, G. N. Felder, J. M. Kratochvil, and A. Linde, Preheating in new inflation, Phys. Rev. D 71, 103516 (2005).
basym G. Steigman, Observational tests of antimatter cosmologies, Ann. Rev. Astron. Astrophys. 14, 339 (1976).
bario
O. Luongo, N. Marcantognini, M. Muccino, Unifying baryogenesis with dark matter production, Gen. Rel. Grav. 55, 2, 33 (2023).
gpp1 L. Parker, Particle creation in expanding universes, Phys. Rev. Lett. 21, 562 (1968).
gpp2 A. Duncan, Explicit dimensional renormalization of quantum field theory in curved space-time, Phys. Rev. D 17, 964 (1978).
dmg1 D. J. H. Chung, E. W. Kolb, and A. Riotto, Superheavy dark matter, Phys. Rev. D 59, 023501 (1998).
dmg2 D. J. H. Chung, P. Crotty, E. W. Kolb, and A. Riotto, On the gravitational production of superheavy dark matter, Phys. Rev. D 64, 043503 (2001).
dmg3 P. W. Graham, J. Mardon, and S. Rajendran, Vector dark matter from inflationary fluctuations, Phys. Rev. D 93, 103520 (2016).
dmg4 Y. Ema, K. Nakayama, and Y. Tang, Production of purely gravitational dark matter, JHEP 09, 135 (2018).
dmg5 R. Ding and Y. Liao, Spin 3/2 particle as a dark matter candidate: an effective field theory approach, JHEP 04, 054 (2012).
dmg6 L. Marzola, M. Raidal, and F. R. Urban, Oscillating spin-2 dark matter, Phys. Rev. D 97, 024010 (2018).
nogo
I. Oda, Weinberg’s no go theorem in quantum gravity, Phys. Rev. D 96, 124012 (2017).
ssb1 A. Zee, Broken-symmetric theory of gravity, Phys. Rev. Lett. 42, 417 (1979).
ssb3 T. Futamase and K. Maeda, Chaotic inflationary scenario of the Universe with a non-minimally coupled inflaton field, Phys. Rev. D 39, 399 (1989).
bran R. H. Brandenberger, Lectures on the theory of cosmological perturbations, Lect. Notes Phys. 646, 127 (2004).
PertInf A. Riotto, Inflation and the theory of cosmological perturbations, arXiv:hep-ph/0210162 (2002).
vacchoice C. Armendáriz-Picón and E. A. Lim, Vacuum choices and the predictions of inflation, JCAP 12, 006 (2003).
Bunchvac0 T. S. Bunch, P. Davies, Quantum field theory in de Sitter space: renormalization by point splitting, Proc.
Royal Society of London. A 360, 117 (1978).
Bunchvac1 U. H. Danielsson, M. E. Olsson, On thermalization in de Sitter space, JHEP 0403, 036 (2004).
Bunchvac2 B. R. Greene, M. K. Parikh and J. P. van der Schaar, Universal correction to the inflationary vacuum, JHEP 04, 057 (2006).
Nonmincoup0 S. C. Park and S. Yamaguchi, Inflation by non-minimal coupling, JCAP 08, 009 (2008).
Birdav N. Birrell and P. Davies, Quantum Fields in Curved Space, Cambridge Univ. Press, Cambridge, UK (1982).
sola1
I. L. Shapiro, J. Sola, C. Espana-Bonet, P. Ruiz-Lapuente, Variable cosmological constant as a Planck scale effect, Phys. Lett. B 574, 149-155 (2003).
sola2
J. Sola, A. Gomez-Valent, J. de Cruz Perez,
First evidence of running cosmic vacuum: challenging the concordance model, Astrophys. J. 836, 1 (2017).
sola3
C. Moreno-Pulido, J. Sola, Running vacuum in quantum field theory in curved spacetime: renormalizing ρvac without ∼ m^4 terms, Eur. Phys. J. C 80, 692 (2020).
DM1 G. Bertone, D. Hooper and J. Silk, Particle Dark matter: evidence, candidates and constraints, Physics Reports 405, 5, 6 (2008).
DM2 G. Bertone and D. Hooper, History of dark matter, Rev. Mod. Phys. 90, 045002 (2018).
condensed P. O. Fedichev, U. Fischer, Cosmological quasiparticle production in harmonically trapped superfluid gases, Phys. Rev. A 69, 033602 (2004).
gwparticles J. M. Hernandez, M. Bellini, C. Moreno, Space-time waves from a collapse with a time-dependent cosmological parameter, Eur. Phys. J. Plus 135, 2, 207 (2020).
altroparticles M. Cadoni, R. Casadio, A. Giusti, M. Tuveri, Emergence of a Dark Force in Corpuscular Gravity, Phys. Rev. D 97, 044047 (2018); A. Giusti, S. Buffa, L. Heisenberg, R. Casadio, A quantum state for the late Universe, Phys. Lett. B 826, 136900 (2022).
Dustwithpress0 O. Luongo and M. Muccino, Speeding up the universe using dust with pressure, Phys. Rev. D 98, 103520 (2018).
Dustwithpress1 O. Luongo and M. Muccino, Dark matter with pressure as an alternative to dark energy, The Fifteenth Marcel Grossmann Meeting (2022).
§ INFLATON FLUCTUATIONS IN CONFORMAL TIME
We here show how to derive the dynamics of inflaton fluctuations in conformal time, namely assuming
g_μν=a^2(τ)η_μν.
In conformal time, field derivatives read
ϕ̇(t)=ϕ'(τ)/a(τ)
ϕ̈(t)=ϕ”(τ)/a^2(τ)-ℋϕ'(τ)/a^2(τ)
and
H=ȧ/a=a'/a^2=ℋ/a⇒Ḣ=ℋ'/a^2-ℋ^2/a^2,
where we have introduced the Hubble parameter in conformal time, ℋ= a^'/a. Accordingly, we obtain
Ḣ+2H^2=a”/a^3.
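As a quick consistency check, using only the definitions above: Ḣ+2H^2=ℋ'/a^2-ℋ^2/a^2+2ℋ^2/a^2=(ℋ'+ℋ^2)/a^2, while a”=(ℋ a)'=ℋ'a+ℋ a'=(ℋ'+ℋ^2)a, so that a”/a^3=(ℋ'+ℋ^2)/a^2, in agreement with the identity above.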
In this way, the background field equation, Eq. (<ref>), becomes
ϕ”/a^2-ℋϕ'/a^2+3ℋϕ'/a^2-∇^2ϕ/a^2+6ξa”/a^3ϕ+V_,ϕ=0.
that gives
ϕ”+2ℋϕ'-∇^2ϕ+6ξa”/aϕ=-V_,ϕa^2.
Eq. (<ref>) can be written in the compact form
1/√(-g)∂_μ(√(-g)g^μν∂_νϕ)+V^ eff_,ϕ=0,
that for the effective potential in Eq. (<ref>) gives
1/√(-g)∂_μ(√(-g)g^μν∂_νϕ)+6ξa”/a^3ϕ+χϕ^3-4Λ^4/v^2ϕ=0.
The overall variation of Eq. (<ref>) can be decomposed into four separate components, corresponding to the variations of 1/√(-g), √(-g), g^μν and ϕ, respectively. Indeed
δ(1/√(-g)∂_μ(√(-g)g^μν∂_νϕ))=δ(1/√(-g))√(-g)∂^μ∂_μϕ
+δ g^μν∂_μ∂_νϕ
+1/√(-g)(δ√(-g))∂^μ∂_μϕ+δ∂_μ∂^μϕ,
and since
δ(1/√(-g)) =-g_μνδ g^μν/2√(-g),
δ√(-g) =-g g_μνδ g^μν/2√(-g),
we have
δ(1/√(-g)∂_μ(√(-g)g^μν∂_νϕ))=δ∂_μ∂^μϕ.
The total variation of Eq. (<ref>) then gives
δ∂_μ∂^μϕ=δϕ”+2a'/aδϕ'-∂_i∂^iδϕ-2Ψϕ”-4a”/aΨϕ'
-4Ψ'ϕ'=-ξδ R ϕ a^2-δϕ V^ eff_,ϕϕa^2,
where the variation of the scalar curvature is
δ R=1/a^2(2∂_i∂^iΦ-6Ψ”-24ℋΨ'-12a”/aΨ).
Using now the field equation for the background field ϕ≡ϕ_0(τ) in conformal time, i.e.,
ϕ”+2ℋϕ'=-V^ eff_,ϕa^2,
we write
2Ψϕ”+4Ψa'/aϕ'=-2Ψ V^ eff_,ϕa^2,
finally obtaining
δϕ”+2a'/aδϕ'-∂_i∂^iδϕ-Ψϕ'-3Ψ'ϕ'
=-ξδ R ϕ a^2-δϕ V^ eff_,ϕϕa^2-2Ψ V^ eff_,ϕa^2.
§ QUASI-DE SITTER INFLATIONARY DYNAMICS
Inflation is usually modeled assuming vacuum energy domination throughout the slow-roll phase. This translates into a de Sitter scale factor, namely, in cosmic time
a(t)∼ e^H_It,
where H_I is the inflationary value of the Hubble constant. Then, in conformal time, we can write
a(τ)=-1/H_I1/τ, τ < 0.
However, during inflation the Hubble rate is not exactly constant and thus a pure de Sitter evolution would not take into account the slow-roll of the inflaton field and its quantum fluctuations.
More specifically, the inflationary dynamics is better described by a quasi-de Sitter scale factor of the form
a(τ)=-1/H_I1/τ^(1+m),
with m≪ 1, i.e., a small deviation from the pure-de Sitter. Noting that
ϵ=1-ℋ'/ℋ^2=1-1/(1+m)=m/(1+m)≃ m,
we can identify m with the slow-roll parameter ϵ, reobtaining Eq. (<ref>). Moreover, from Eq. (<ref>), we get
a' =(1+m)/H_Iτ^-(2+m),
a” =-(1+m)(2+m)/H_Iτ^-(3+m),
and so
a'/a =-(1+m)τ^-1= -(1+ϵ)τ^-1,
a”/a =(2+3ϵ+ϵ^2)τ^-2≃(2+3ϵ)τ^-2.
§ GEOMETRIC PARTICLE PRODUCTION
We here discuss in more detail the geometric mechanism of particle production presented in the main text. Inhomogeneities in the gravitational field are taken into account by introducing the perturbed metric tensor
g_ab=g_ab^(0)+H_ab=a^2(τ)(η_ab+h_ab),
where η_ab is the Minkowski metric and h_ab describes perturbations, with | h_ab|≪ 1. In our framework, metric perturbations are traced back to inflaton fluctuations. Then, during inflation, the interaction Lagrangian is given by Eq. (<ref>), where T^(0)_ab is the energy-momentum referred to the fluctuations of the inflaton and it reads explicitly
T^(0)_ab =∂_aδϕ∂_bδϕ-1/2g_ab^(0)(g^cd_(0)∂_cδϕ∂_dδϕ-V(δϕ))
-ξ(∂_a∂_b-g_ab^(0)∇^c∇_c+R^(0)_ab-1/2R^(0)g_ab^(0))δϕ^2.
The Ŝ-matrix operator relating asymptotic free particle states, `in' (τ→ -∞) and `out' (τ→ +∞), is
Ŝ=T̂exp[i∫ℒ_Id^4x],
where T̂ is the time-ordering operator. Expanding Ŝ perturbatively in Dyson series up to first order in h^ab, we get
Ŝ≃ 1+iT̂∫ℒ_Id^4x≡ 1+Ŝ^(1).
Then, due to the interaction between metric and the inflaton fluctuations, initial vacuum states evolve into the final state
lim_τ→ +∞|Φ>=Ŝ|0>
=|0>+1/2∫ d^3kd^3p ⟨k,p|Ŝ^(1)|0⟩|k,p>,
where
Ŝ^(1)=-i/2∫ d^4x√(-g_(0))H^abT^(0)_ab,
and
⟨k,p|Ŝ^(1)|0⟩=⟨k,p|iT̂∫ℒ_Id^4x|0⟩=⟨k,p|i∫T̂ℒ_Id^4x|0⟩,
are the first order Ŝ-matrix and the Ŝ-matrix element, respectively.
However, in Eq. (<ref>) we have not taken into account that in curved spacetime 'in' and 'out' states are generally different, due to the universe evolution. This implies that an initial vacuum state is no longer seen as a vacuum in the 'out' region. Accordingly, creation and annihilation operators are not the same in the two regions: introducing the ladder operators b_k and b_k^† in the 'out' region, we can write a_k|0>_in=b_k|0>_out=0.
Ladder operators in the two regions are connected by the Bogoliubov transformation
b_k=α_ka_k+β^*_ka^†_-k,
where α_k and β_k are known as Bogoliubov coefficients and satisfy the normalization condition
|α_k|^2-|β_k|^2=1.
Including Bogoliubov transformations in our particle production estimate implies that we have to compute the expectation value of the number operator N=1/(2π a)^3∫ d^3q b^†_qb_q in the final state, i.e.,
⟨Φ|N|Φ⟩=N^(0)+N^(1)+N^(2).
The first term N^(0) denotes the creation rate due to the background only. Indeed, the homogeneous expansion combines modes of positive and negative frequency, so there exist some values of k for which β_k≠0 in Eq. (<ref>), leading to the creation of particles. This is usually known as gravitational particle production. The average number density of created particles at zero order, with proper normalization, is
N^(0)=1/(2π a)^3∫ d^3k ⟨0|b^†_kb_k|0⟩=1/(2π a)^3∫ d^3k |β_k|^2.
The second term N^(1) is the result of the combined effects due to interaction and background, i.e., it arises from the interference between 0-particle and 2-particle states and it is given by
N^(1)=1/(2π a)^3∫ d^3k d^3p δ^3( k+ p)
× Re[⟨k,p|Ŝ^(1)|0⟩(α_kβ_k+α_pβ_p)].
Finally, the last term N^(2) arises from 2-particle states only and reads
N^(2)=1/(2π a)^3∫ d^3k d^3p |⟨0|Ŝ|k,p⟩|^2(1+|β_k|^2+|β_p|^2).
We notice that, starting from second order in the inhomogeneities, pair production is no longer restricted to particle-antiparticle pairs, which in principle may annihilate.
This is related to the presence of inhomogeneities, which break space translation symmetry so that momentum conservation does not apply at this stage. In this work, we focused on the computation of Eq. (<ref>), setting the Bogoliubov coefficients to zero as a first estimate.
This choice gives
N^(2)=1/(2π a)^3∫ d^3k d^3p |⟨0|Ŝ|k,p⟩|^2,
where the `in' and `out' vacua are the same, since the Bogoliubov coefficients β_k,p are set to zero.
However, we see that the presence of non-zero Bogoliubov coefficients is not only responsible for gravitational particle production at zero and first order, but also enhances geometric production at second perturbative order.
For this reason, a more refined estimate of the total number of produced particles will require the inclusion of such coefficients in our calculations. This will be the object of future efforts, as reported in Sec. <ref>.
|
http://arxiv.org/abs/2307.05353v1 | 20230711154219 | New Measurements of $^{71}$Ge Decay: Impact on the Gallium Anomaly | [
"J. I. Collar",
"S. G. Yoon"
] | nucl-ex | [
"nucl-ex",
"hep-ex",
"hep-ph",
"nucl-th"
] |
[email protected]
Enrico Fermi Institute, Kavli Institute for Cosmological Physics, and Department of Physics
University of Chicago, Chicago, Illinois 60637, USA
Donostia International Physics Center (DIPC), Paseo Manuel Lardizabal 4, 20018 Donostia-San Sebastian, Spain
[email protected]
Enrico Fermi Institute, Kavli Institute for Cosmological Physics, and Department of Physics
University of Chicago, Chicago, Illinois 60637, USA
A dedicated high-statistics measurement of the ^71Ge half-life is found to be in accurate agreement with an accepted value of 11.43±0.03 d, eliminating a recently proposed route to bypass the “gallium anomaly” affecting several neutrino experiments. Our data also severely constrain the possibility of ^71Ge decay to low-energy excited levels of the ^71Ga daughter nucleus as a solution to this puzzle. Additional unpublished measurements of this decay are discussed. Following the incorporation of this new information, the gallium anomaly survives with high statistical significance.
New Measurements of ^71Ge Decay: Impact on the Gallium Anomaly
S.G. Yoon
August 12, 2023
===============================================================
When exposed to intense radioisotopic neutrino sources (^51Cr and ^37Ar) several gallium-based neutrino detectors (GALLEX <cit.>, SAGE <cit.>, BEST <cit.>) display a ∼20% deficit in the observed interaction rate with respect to the Standard Model expectation. This “gallium anomaly" <cit.> has been interpreted within the context of sterile neutrino oscillations <cit.>. This perspective is nevertheless in high tension with other neutrino measurements, leading to an ongoing effort to find other possible explanations. Some involve new physics <cit.>, others concentrate on simpler scenarios where basic assumptions made in the interpretation of gallium experiments are closely examined.
Two recent papers <cit.> have pointed out that a slightly larger value of the half-life for the electron capture (EC) decay of ^71Ge, in agreement with some of its individual measurements, can do away with the anomaly. The value required (T_1/2∼12.5 d) is however not compatible with the latest adopted reference (T_1/2=11.43±0.03 d). A ∼10% branching ratio (BR) of this decay into an excited level(s) of the daughter ^71Ga nucleus would accomplish a similar relaxation of evidence for the anomaly <cit.>. Both half-life and BR affect the nuclear matrix element entering the calculation of the cross section for the relevant inverse process, σ (ν_e + ^71Ga → e^- + ^71Ge) <cit.>. This hypothetical excited level would have an energy below 232.4 keV (the Q value of ^71Ge decay), perhaps complicating the observation of tell-tale de-excitation gammas <cit.>. Other assumptions scrutinized in <cit.> involve a correction to the BRs in the decay of neutrino-emitting ^51Cr sources employed by gallium experiments, as well as the impact that revised values of the ^71Ge extraction efficiency would have for those.
In this brief note we describe a dedicated measurement tailored to test all aspects of the decay of ^71Ge able to impact the interpretation of the gallium anomaly. A small (1 cm^3) n-type germanium diode <cit.> was used for this purpose. The device was initially shielded against environmental radiations using 10 cm of Pb in a laboratory benefiting from a 6 m.w.e. overburden. A background spectrum was obtained over 2.7 days, following energy calibration using gamma emitters. The origins of all peaks visible in this spectrum (Fig. 2) are readily identifiable (neutron reactions, cosmogenic activations, U- and Th-chain radioimpurities, etc.). The detector was then activated in ^71Ge via a four day exposure to a moderated ^252Cf neutron source at the center of a 20 cm polyethylene sphere <cit.>. A production of approximately 5×10^5 ^71Ge atoms was expected via simulation of the ^70Ge neutron capture rate. The detector was returned to its shield. A total of 42 days of post-activation data were taken, with a single 5.7 day interruption due to failure of the data-acquisition system (Fig. 1). The system stored time-stamped individual event traces, allowing an arbitrary time binning. The energy spectrum spanned the range from a 0.5 keV threshold up to 250 keV, i.e., beyond the Q-value of ^71Ge decay.
The top panel in Fig. 1 shows the decay of the activity under the 1.29 keV and 10.37 keV peaks characteristic of ^71Ge EC from the atomic L-shell and K-shell, respectively. These peaks can be observed in Fig. 2 as the only noticeable outcome from neutron exposure, in the spectral region measured. The ^71Ge half-life derived from a fit to their summed rate is 11.46±0.04 d, in excellent agreement with the 11.43±0.03 d assumed in the interpretation of experiments responsible for the gallium anomaly <cit.>. This measured half-life is robust against the procedure employed to extract the rates in Fig. 1.
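As an illustration of how such a half-life is extracted, the sketch below performs a weighted least-squares fit of binned peak rates to a single exponential. It is not the analysis chain of this measurement: the time bins, rates, and starting values are synthetic placeholders.

```python
# Minimal sketch of an exponential half-life fit (synthetic placeholder data,
# not the actual 1.29 keV + 10.37 keV rates of this measurement).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t_days = np.arange(0.5, 42.0, 1.0)                       # bin centers [d]
true_rate = 480.0 * np.exp(-np.log(2) * t_days / 11.43)  # synthetic activity [counts/d]
rate = true_rate + rng.normal(0.0, np.sqrt(true_rate))   # Poisson-like scatter
err = np.sqrt(np.clip(rate, 1.0, None))                  # crude per-bin uncertainty

def decay(t, a0, t_half):
    """Single-isotope activity with half-life t_half (days)."""
    return a0 * np.exp(-np.log(2) * t / t_half)

popt, pcov = curve_fit(decay, t_days, rate, p0=[500.0, 10.0],
                       sigma=err, absolute_sigma=True)
print(f"T_1/2 = {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f} d")
```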
Low-background searches for rare processes involving large-mass germanium diodes can measure this half-life. However, the modest decay rates typically observed lead to much larger statistical uncertainties, as the activation of these detectors is only due to low-flux environmental neutrons during detector construction. The two peaks of interest here can also be contaminated by a longer-lived cosmogenic activation in ^68Ge <cit.>. Most importantly, the activation of other radioactive species in these larger crystals and in their cryostats can result in deviations from the expected half-life, for some modes of data treatment. Still, some of these unpublished ^71Ge half-life measurements available to us are worth mentioning, as all support the accepted value: 10.43±0.30 d, 10.91±0.91 d, 11.57±2.66 d, for detectors in <cit.>, <cit.>, <cit.>, respectively. Of special mention is the intense accidental activation of a 440 cm^3 p-type germanium crystal <cit.>. High-statistics data from this detector (Fig. 1, lower panel) point to an 11.80±0.05 d half-life. This value should be considered less reliable than that from our ad hoc measurement, for the reasons above and the shorter time span involved.
Fig. 2 shows the pre- and post-activation spectra in the present measurement. All peak-like features in the second appear in the first, i.e., we find no evidence for a non-negligible BR to new short-lived excited states of ^71Ga. Three colored peaks superimposed on the post-activation spectrum show the expected magnitude of signals from de-excitation gammas generated by such a phenomenon, for a 10% BR capable of relaxing the gallium anomaly to a ∼ 3 σ statistical evidence <cit.>. Those include the effect of energy resolution and simulated efficiency for full-energy detection of gammas internally emitted in the detector. The most significant peak-like structure in this spectrum not observed pre-activation corresponds to a mere 0.4% BR, which has negligible impact on the gallium anomaly <cit.>. This other possible path for its resolution is therefore not supported by our data, with the caveat that any new excited level(s) might be sufficiently long-lived (T_1/2≳ 12.6 yr) to escape our 0.4% BR sensitivity.
A final property of ^71Ge EC decay able to impact the interpretation of the gallium anomaly, not considered in <cit.>, is the set of relative EC rates from different atomic shells. Similarly to the half-life and the BR to the ^71Ga ground state, those rates appear explicitly in the calculation of σ (ν_e + ^71Ga → e^- + ^71Ge) <cit.>. The value of P_L/P_K = 0.117 used in <cit.> is traceable to proportional-counter studies dating back to 1971 <cit.>. Present data allow us to measure this ratio with a different (and arguably more straightforward) technique, from the relative intensity of the 1.29 keV and 10.37 keV peaks (Fig. 1, insets). We find P_L/P_K = 0.116±0.004 for the data from <cit.>. Following <cit.>, the slightly larger 0.125±0.008 from the present detector would result in a reduction in σ (ν_e + ^71Ga → e^- + ^71Ge) of less than 1%. Both measurements are in good agreement with a recent theoretical value of 0.12258(17) <cit.>. We notice that P_M/P_L, beyond the reach of our detectors but also entering the derivation of the cross section, was recently measured at 0.16±0.03 <cit.>. This is again in good agreement with the value of 0.165 adopted in <cit.>.
In conclusion, our data strongly constrain any explanation for the gallium anomaly based on the decay of ^71Ge. As far as this specific input is concerned, the statistical significance of the anomaly remains as large as 6 σ in some analyses (e.g., Fig. 1 in <cit.>).
We are indebted to Wick Haxton, Joachim Kopp and Xavier Mougeot for useful comments.
|
http://arxiv.org/abs/2307.04107v1 | 20230709062020 | Efficient Approximation Algorithms for Scheduling Coflows with Precedence Constraints in Identical Parallel Networks to Minimize Weighted Completion Time | [
"Chi-Yeh Chen"
] | cs.DS | [
"cs.DS"
] |
This paper focuses on the problem of coflow scheduling with precedence constraints in identical parallel networks, which is a well-known 𝒩𝒫-hard problem. Coflow is a relatively new network abstraction used to characterize communication patterns in data centers. Both flow-level scheduling and coflow-level scheduling problems are examined, with the key distinction being the scheduling granularity. The proposed algorithm effectively determines the scheduling order of coflows by employing the primal-dual method. When considering workload sizes and weights that are dependent on the network topology in the input instances, our proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ) where χ is the coflow number of the longest path in the directed acyclic graph (DAG). Additionally, when taking into account workload sizes that are topology-dependent, the algorithm achieves an approximation ratio of O(Rχ), where R represents the ratio of maximum weight to minimum weight. For the coflow-level scheduling problem, the proposed algorithm achieves an approximation ratio of O(mχ), where m is the number of network cores, when considering workload sizes and weights that are topology-dependent. Moreover, when considering workload sizes that are topology-dependent, the algorithm achieves an approximation ratio of O(Rmχ). In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ). Although our theoretical results are based on a limited set of input instances, experimental findings show that the results for general input instances outperform the theoretical results, thereby demonstrating the effectiveness and practicality of the proposed algorithm.
Scheduling algorithms, approximation algorithms, coflow, precedence constraints, datacenter network, identical parallel network.
§ INTRODUCTION
As technology has evolved, large-scale computational demands have become the norm. Since personal computing resources are no longer sufficient, cloud computing has emerged as a way to access substantial computational resources. With this increasing demand, large-scale data centers have become essential components of cloud computing. In these data centers, the benefits of application-aware network scheduling have been demonstrated, particularly for distributed applications with structured traffic patterns <cit.>. The widespread use of data-parallel computing frameworks such as MapReduce <cit.>, Hadoop <cit.>, Dryad <cit.>, and Spark <cit.> has led to a proliferation of such applications <cit.>.
In these data-parallel applications, tasks can be divided into multiple computational stages and communication stages, which are executed alternately. The computational stages generate a substantial amount of intermediate data (flows) that needs to be transmitted across various machines for further processing during the communication stages. Due to the large number of applications generating significant data transmission requirements, robust data transmission and scheduling capabilities are crucial for data centers. The overall communication pattern within the data center can be abstracted by coflow traffic, representing the interaction of flows between two sets of machines <cit.>.
A coflow refers to a set of interconnected flows, where the completion time of the entire group depends on the completion time of the last flow within the set <cit.>. Previous studies related to coflows <cit.> have primarily focused on the single-core model <cit.>. However, technological advancements have led to the emergence of data centers that operate on multiple parallel networks in order to improve efficiency <cit.>. One such architecture is the identical or heterogeneous parallel network, where multiple network cores function in parallel, providing combined bandwidth by simultaneously serving traffic.
This study addresses the problem of coflow scheduling with precedence constraints in identical parallel networks. The objective is to schedule these coflows in the parallel networks in a way that minimizes the total weighted completion time of coflows. We consider both flow-level scheduling and coflow-level scheduling. In the flow-level scheduling problem, flows within a coflow can be distributed across different network cores. Conversely, in the coflow-level scheduling problem, all flows within a coflow must be transmitted in the same network core. The key difference between these two problems lies in their scheduling granularity. The coflow-level scheduling problem, being coarse-grained, can be solved quickly but yields relatively poorer results. On the other hand, the flow-level scheduling problem, being fine-grained, takes more time to solve but produces superior schedules. It is worth noting that, although the two problems differ in time complexity when solved using linear programming, the primal-dual method for the flow-level problem transforms the decision of which flow to schedule into the decision of which coflow to schedule, so its solving time is equivalent to that of the coflow-level scheduling problem.
§.§ Related Work
The concept of coflow abstraction was initially introduced by Chowdhury and Stoica <cit.> to characterize communication patterns within data centers. The scheduling problem for coflows has been proven to be strongly 𝒩𝒫-hard, indicating the need for efficient approximation algorithms rather than exact solutions. Due to the easy reduction of the concurrent open shop problem to coflow scheduling, where only the diagonal elements of the demand matrix have values, solving the concurrent open shop problem within a factor better than 2-ϵ is 𝒩𝒫-hard <cit.>, implying the hardness of the coflow scheduling problem as well.
Since the proposal of the coflow abstraction, extensive research has been conducted on coflow scheduling <cit.>. Qiu et al. <cit.> presented the first deterministic polynomial-time approximation algorithm, with a ratio of 67/3. Subsequently, Ahmadi et al. <cit.> proved that the technique proposed by Qiu et al. <cit.> actually yields only a deterministic 76/3-approximation algorithm for coflow scheduling with release times.
Khuller et al. <cit.> also proposed an approximation algorithm for coflow scheduling with arbitrary release times, achieving a ratio of 12.
Recent research by Shafiee and Ghaderi <cit.> has resulted in an impressive approximation algorithm for the coflow scheduling problem, achieving an approximation ratio of 5. Additionally, Ahmadi et al. <cit.> have made significant contributions to this field by proposing a primal-dual algorithm that enhances the computational efficiency of coflow scheduling.
In the coflow scheduling problem within a heterogeneous parallel network, Huang et al. <cit.> introduced an O(m)-approximation algorithm, where m represents the number of network cores. On the other hand, Tian et al. <cit.> were the first to propose the problem of scheduling coflows of multi-stage jobs, and they provided an O(N)-approximation algorithm, where N represents the number of servers in the network. Furthermore, Shafiee and Ghaderi <cit.> proposed a polynomial-time algorithm that achieves an approximation ratio of O(χ̃log(N)/log(log(N))), where χ̃ denotes the maximum number of coflows in a job.
§.§ Our Contributions
This paper focuses on addressing the problem of coflow scheduling with precedence constraints in identical parallel networks and presents a range of algorithms and corresponding results. The specific contributions of this study are outlined below:
* When considering workload sizes and weights that are dependent on the network topology in the input instances, the proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ) where χ is the coflow number of the longest path in the directed acyclic graph (DAG).
* When taking into account workload sizes that are topology-dependent, the proposed algorithm for flow-level scheduling problem achieves an approximation ratio of O(Rχ), where R represents the ratio of maximum weight to minimum weight.
* For the coflow-level scheduling problem, the proposed algorithm achieves an approximation ratio of O(mχ), where m is the number of network cores, when considering workload sizes and weights that are topology-dependent.
* When considering workload sizes that are topology-dependent, the algorithm for the coflow-level scheduling problem achieves an approximation ratio of O(Rmχ).
* In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ).
A summary of our theoretical findings is provided in Table <ref>, where TDWS stands for topology-dependent workload sizes and TDW stands for topology-dependent weights.
§.§ Organization
The structure of this paper is outlined as follows. In Section <ref>, an introduction is provided, covering fundamental notations and preliminary concepts that will be referenced in subsequent sections. Following that, the primary algorithms are presented in the following sections: Section <ref> provides an overview of the algorithm addressing the flow-level scheduling problem, while Section <ref> elaborates on the algorithm designed for the coflow-level scheduling problem. To address the scheduling problem for the coflows of multi-stage jobs, our algorithm is discussed in Section <ref>. In Section <ref>, a comparative analysis is conducted to evaluate the performance of our proposed algorithms in comparison to the previous algorithm. Lastly, in Section <ref>, our findings are summarized and meaningful conclusions are drawn.
§ NOTATION AND PRELIMINARIES
The identical parallel network consists of a collection of m non-blocking switches, each with dimensions of N × N. These switches form the infrastructure of the network, where N input links are connected to N source servers, and N output links are connected to N destination servers. These switches serve as practical and intuitive models for the network core. Network architectures such as Fat-tree or Clos <cit.> can be employed to construct networks that provide complete bisection bandwidth. In this configuration, each switch's i-th input port is connected to the i-th source server, and the j-th output port is connected to the j-th destination server. Consequently, each source server (or destination server) has m simultaneous uplinks (or downlinks), where each link may consist of multiple physical connections in the actual network topology <cit.>. Let ℐ denote the set of source servers, and 𝒥 denote the set of destination servers. The network core can be visualized as a bipartite graph, with ℐ on one side and 𝒥 on the other. For simplicity, we assume that all network cores are identical, and the links within each core have the same capacity or speed.
A coflow is a collection of independent flows, and the completion time of a coflow is determined by the completion time of the last flow in the set, making it a critical metric for evaluating the efficiency of data transfers. The demand matrix D^(k)=(d_i,j,k)_i,j=1^N represents the specific data transfer requirements within coflow k. Each entry d_i,j,k in the matrix corresponds to the size of the flow that needs to be transmitted from input i to output j within the coflow. In the context of identical network cores, the flow size can be interpreted as the transmission time, as all cores possess the same capacity or speed. This simplification allows for easier analysis and optimization of coflow scheduling algorithms. To facilitate efficient management and routing of flows, each flow is identified by a triple (i, j, k), where i represents the source node, j represents the destination node, and k corresponds to the coflow. This identification scheme enables precise tracking and control of individual flows within the parallel network.
Furthermore, we assume that flows are composed of discrete data units, resulting in integer sizes. For simplicity, we assume that all flows within a coflow are simultaneously initiated, as demonstrated in <cit.>.
This paper investigates the problem of coflow scheduling with release times and precedence constraints. The problem involves a set of coflows denoted by 𝒦, where coflow k is released into the system at time r_k. The completion time of coflow k, denoted as C_k, represents the time required for all its flows to finish processing. Each coflow k∈𝒦 is assigned a positive weight w_k. Let R be the ratio between the maximum weight and the minimum weight. The relationships between coflows can be modeled using a directed acyclic graph (DAG) G=(𝒦, E), where an arc (k', k)∈ E and k', k∈𝒦 indicate that all flows of coflow k' must be completed before any flow of coflow k can be scheduled. This relationship is denoted as k'≺ k. The DAG has a coflow number of χ, which represents the length of the longest path in the DAG. The objective is to schedule coflows in an identical parallel network, considering the precedence constraints, in order to minimize the total weighted completion time of the coflows, denoted as ∑_k∈𝒦 w_kC_k. For clarity, different subscript symbols are used to represent different meanings of the same variables. Subscript i represents the index of the source (or input port), subscript j represents the index of the destination (or output port), and subscript k represents the index of the coflow. For instance, ℱ_i denotes the set of flows with source i, and ℱ_j represents the set of flows with destination j. The symbols and terminology used in this paper are summarized in Table <ref>.
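To make the notation concrete, the following minimal Python sketch (class and field names are ours, purely for illustration) stores a coflow's demand matrix D^(k) sparsely and computes the port loads L_i,k and L_j,k used throughout the paper.

```python
# Minimal sketch of the notation above; names are illustrative, not from the paper.
from collections import defaultdict

class Coflow:
    def __init__(self, k, release, weight, demands):
        # demands: {(i, j): d_ijk > 0}, a sparse representation of D^(k)
        self.k, self.release, self.weight, self.demands = k, release, weight, demands

    def load_in(self):
        """L_{i,k} = sum_j d_{i,j,k}: load of coflow k at each input port i."""
        load = defaultdict(float)
        for (i, _j), d in self.demands.items():
            load[i] += d
        return load

    def load_out(self):
        """L_{j,k} = sum_i d_{i,j,k}: load of coflow k at each output port j."""
        load = defaultdict(float)
        for (_i, j), d in self.demands.items():
            load[j] += d
        return load

# Example: coflow 1 with two flows into output port 2.
c = Coflow(k=1, release=0.0, weight=1.0, demands={(0, 2): 4.0, (1, 2): 3.0})
print(dict(c.load_in()))   # {0: 4.0, 1: 3.0}
print(dict(c.load_out()))  # {2: 7.0}
```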
§ APPROXIMATION ALGORITHM FOR THE FLOW-LEVEL SCHEDULING PROBLEM
This section focuses on the flow-level scheduling problem, which allows for the transmission of different flows within a coflow through distinct network cores. We assume that coflows are transmitted at the flow level, ensuring that the data within a flow is allocated to the same core. We define ℱ_i as the collection of flows with source i, represented by ℱ_i={(i, j, k)| d_i,j,k>0, ∀ k∈𝒦, ∀ j∈𝒥}, and ℱ_j as the set of flows with destination j, given by ℱ_j={(i, j, k)| d_i,j,k>0, ∀ k∈𝒦, ∀ i∈ℐ}. For any subset S⊆ℱ_i (or S⊆ℱ_j), we define d(S)=∑_(i, j, k)∈ S d_i,j,k as the sum of data size over all flows in S and d^2(S)=∑_(i, j, k)∈ S d_i,j,k^2 as the sum of squares of data size over all flows in S. Additionally, we introduce the function f(S) as follows:
f(S) = (d(S)^2+ d^2(S))/(2m).
The flow-level scheduling problem can be formulated as a linear programming relaxation, which is expressed as follows:
min ∑_k ∈𝒦 w_k C_k <ref>
s.t. C_k≥ C_i,j,k, ∀ k∈𝒦, ∀ i∈ℐ, ∀ j∈𝒥
C_i,j,k≥ r_k+d_i,j,k, ∀ k∈𝒦, ∀ i∈ℐ, ∀ j∈𝒥
C_i,j,k≥ C_k'+d_i,j,k, ∀ k, k'∈𝒦:k'≺ k,
∀ i∈ℐ, ∀ j∈𝒥
∑_(i, j, k)∈ Sd_i,j,kC_i,j,k≥ f(S), ∀ i∈ℐ, ∀ S⊆ℱ_i
∑_(i, j, k)∈ Sd_i,j,kC_i,j,k≥ f(S), ∀ j∈𝒥, ∀ S⊆ℱ_j
In the linear program (<ref>), the variable C_k represents the completion time of coflow k in the schedule, and C_i,j,k denotes the completion time of flow (i, j, k). Constraint (<ref>) specifies that the completion time of coflow k is bounded by the completion times of all its flows, ensuring that no flow finishes after the coflow. Constraint (<ref>) guarantees that the completion time of any flow (i, j, k) is at least its release time r_k plus the time required for its transmission. To capture the precedence constraints among coflows, constraint (<ref>) indicates that all flows of coflow k' must be completed before any flow of coflow k can be scheduled. Constraints (<ref>) and (<ref>) introduce lower bounds on the completion time variables at the input and output ports, respectively.
We define L_i,S,k as the sum of the loads on input port i for coflow k in the set S. Similarly, L_j,S,k represents the sum of the loads on output port j for coflow k in the set S. To formulate the dual linear program, we have the following expressions:
L_i,S,k =∑_(i',j',k')∈ S|i'=i,k'=kd_i',j',k',
L_j,S,k =∑_(i',j',k')∈ S|j'=j,k'=kd_i',j',k'.
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐ∑_j ∈𝒥α_i, j, k(r_k+d_i,j,k)
+∑_i ∈ℐ∑_S ⊆ℱ_iβ_i,S f(S)
+∑_j ∈𝒥∑_S ⊆ℱ_jβ_j,S f(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k d_i,j,k <ref>
s.t. ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k
+∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k
+∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k
+∑_(k',k)∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k
-∑_(k,k')∈ E∑_i ∈ℐ,j ∈𝒥γ_k, i, j, k'≤ w_k, ∀ k∈𝒦
α_i, j, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ,
∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆ℱ_i
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆ℱ_j
γ_k', i, j, k≥ 0, ∀ (k', k)∈ E, ∀ i∈ℐ,
∀ j∈𝒥
It is important to note that each flow (i, j, k) is associated with a dual variable α_i, j, k, and for every coflow k, there exists a corresponding constraint. Additionally, for any subset S ⊆ℱ_i (or S ⊆ℱ_j) of flows, there exists a dual variable β_i, S (or β_j, S). To facilitate the analysis and design of algorithms, we define γ_k', k as the sum of γ_k', i, j, k over all input ports i and output ports j in their respective sets ℐ and 𝒥:
γ_k', k=∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k.
Significantly, it should be emphasized that the cost of any feasible dual solution provides a lower bound for OPT, which represents the cost of an optimal solution.
This implies that the cost attained by any valid dual solution ensures that OPT cannot be less than that. In other words, if we obtain a feasible dual solution with a certain cost, we can be certain that the optimal solution, which represents the best possible cost, will not have a lower cost than the one achieved by the dual solution.
The primal-dual algorithm, as depicted in Algorithm <ref> in Appendix <ref>, is inspired by the work of Davis et al. <cit.> and Ahmadi et al. <cit.>. This algorithm constructs a feasible schedule iteratively, progressing from right to left to determine the processing order of coflows. Starting from the last coflow and moving towards the first, each iteration decides whether to increase the dual variables α, β or γ, guided by the dual linear programming (LP) formulation. The algorithm has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
Consider a specific iteration in the algorithm. At the beginning of this iteration, let 𝒦 represent the set of coflows that have not been scheduled yet, and let k denote the coflow with the largest release time. In each iteration, a decision must be made regarding whether to increase dual variables α, β or γ.
If the release time r_k is significantly large, increasing the α dual variable results in substantial gains in the objective function value of the dual problem. On the other hand, if L_μ_1(r) (or L_μ_2(r) if L_μ_2(r)≥ L_μ_1(r)) is large, raising the β variable leads to substantial improvements in the objective value. Let κ be a constant that will be optimized later.
If r_k>κ· L_μ_1(r)/m (or r_k>κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the α dual variable is increased until the dual constraint for coflow k becomes tight. Consequently, coflow k is scheduled to be processed as early as possible and before any previously scheduled coflows.
In the case where r_k≤κ· L_μ_1(r)/m (or r_k≤κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the dual variable β_μ_1(r),𝒢_i (or β_μ_2(r),𝒢_j if Lμ_2(r)≥ L_μ_1(r)) is increased until the dual constraint for coflow k' becomes tight.
In this step, we begin by identifying a candidate coflow, denoted as k', with the minimum value of β. We then examine whether this coflow still has unscheduled successors. If it does, we continue traversing down the chain of successors until we reach a coflow that has no unscheduled successors, which we will refer to as t_1.
Once we have identified coflow t_1, we set its β and γ values such that the dual constraint for coflow t_1 becomes tight. Moreover, we ensure that the β value of coflow t_1 matches that of the candidate coflow k'.
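To convey the flavor of these decisions, the sketch below implements a simplified version of the ordering rule for the special case without precedence constraints: the γ variables and the walk to a successor-free coflow t_1 are omitted, and the data layout is our own rather than that of Algorithm <ref>.

```python
# Simplified primal-dual ordering (no precedence constraints: gamma variables and
# the successor walk are omitted). Assumes every coflow has some positive load.
def primal_dual_order(coflows, m, kappa=0.5):
    # coflows: {k: {"w": weight, "r": release, "in": {i: L_ik}, "out": {j: L_jk}}}
    unsched = set(coflows)
    w_adj = {k: coflows[k]["w"] for k in coflows}   # residual weight after beta charges
    order = []                                      # filled from the last position backwards
    while unsched:
        load = {}                                   # port loads over unscheduled coflows
        for k in unsched:
            for side in ("in", "out"):
                for p, v in coflows[k][side].items():
                    load[(side, p)] = load.get((side, p), 0.0) + v
        (side, port), big_l = max(load.items(), key=lambda kv: kv[1])  # bottleneck port
        k_late = max(unsched, key=lambda k: coflows[k]["r"])
        if coflows[k_late]["r"] > kappa * big_l / m:
            chosen = k_late                         # raise alpha: latest release goes last
        else:
            # Raise beta at the bottleneck port until some dual constraint becomes tight.
            cands = [k for k in unsched if coflows[k][side].get(port, 0.0) > 0]
            chosen = min(cands, key=lambda k: w_adj[k] / coflows[k][side][port])
            beta = w_adj[chosen] / coflows[chosen][side][port]
            for k in unsched:
                w_adj[k] -= beta * coflows[k][side].get(port, 0.0)
        order.append(chosen)
        unsched.remove(chosen)
    order.reverse()
    return order
```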
The flow-driven-list-scheduling algorithm, as depicted in Algorithm <ref>, leverages a list scheduling rule to determine the order of coflows to be scheduled. In order to provide a clear and consistent framework, we assume that the coflows have been pre-ordered based on the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦. Thus, the coflows are scheduled sequentially in this predetermined order.
Within each coflow, the flows are scheduled based on a non-increasing order of their sizes, breaking ties arbitrarily. Specifically, for every flow (i, j, k), the algorithm identifies the least loaded network core, denoted as h^*, and assigns the flow (i, j, k) to this core.
The algorithm's steps involved in this assignment process are outlined in lines <ref>-<ref>.
A flow is deemed "ready" for scheduling only when all of its predecessors have been fully transmitted. The algorithm then proceeds to schedule all the flows that are both ready and have been released but remain incomplete. These scheduling steps, encapsulated in lines <ref>-<ref>, have been adapted from the work of Shafiee and Ghaderi <cit.>.
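A simplified sketch of the assignment step is shown below. It walks through the coflows in the given order and, within each coflow, through flows in non-increasing size, placing each flow on the core it deems least loaded. Here "least loaded" is read as minimizing the larger of the two relevant port loads, which is our interpretation rather than the exact rule of Algorithm <ref>; release times and precedence readiness are assumed to be handled by the surrounding backfilling loop described above.

```python
# Simplified flow-to-core assignment of the flow-driven list rule. "Least loaded"
# is interpreted as minimizing the larger of the two port loads on a core -- one
# plausible reading, not necessarily the paper's exact criterion.
def assign_flows(order, coflows, m):
    in_load = [dict() for _ in range(m)]    # per-core load accumulated on each input port
    out_load = [dict() for _ in range(m)]   # per-core load accumulated on each output port
    core_of = {}                            # (i, j, k) -> chosen core h*
    for k in order:
        flows = sorted(coflows[k]["demands"].items(), key=lambda kv: -kv[1])
        for (i, j), d in flows:             # non-increasing flow sizes, ties arbitrary
            h_star = min(range(m),
                         key=lambda h: max(in_load[h].get(i, 0.0),
                                           out_load[h].get(j, 0.0)))
            core_of[(i, j, k)] = h_star
            in_load[h_star][i] = in_load[h_star].get(i, 0.0) + d
            out_load[h_star][j] = out_load[h_star].get(j, 0.0) + d
    return core_of
```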
§.§ Analysis
In this section, we present a comprehensive analysis of the proposed algorithm, establishing its approximation ratios. Specifically, we demonstrate that the algorithm achieves an approximation ratio of O(χ) when considering workload sizes and weights that are topology-dependent in the input instances. Additionally, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rχ) where R is the ratio of maximum weight to minimum weight. It is crucial to note that our analysis assumes that the coflows are arranged in the order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦.
Let S_k={1, 2, …, k} denote the set of the first k coflows. Furthermore, we define S_i,k as the set of flows from the first k coflows at input port i. Formally, S_i,k is defined as follows:
S_i,k={(i, j, k')| d_i,j,k'>0, ∀ k'∈{1,…,k}, ∀ j∈𝒥}.
Similarly, S_j,k represents the set of flows from the first k coflows at output port j, defined as:
S_j,k={(i, j, k')| d_i,j,k'>0, ∀ k'∈{1,…,k}, ∀ i∈ℐ}.
Let β_i,k=β_i,S_i,k and β_j,k=β_j,S_j,k. These variables capture the dual variables associated with the sets S_i,k and S_j,k.
Moreover, we introduce the notation μ_1(k) to denote the input port with the highest load in S_k, and μ_2(k) to represent the output port with the highest load in S_k. Recall that d(S) represents the sum of loads for all flows in a subset S. Therefore, d(S_i,k) corresponds to the total load of flows from the first k coflows at input port i, and d(S_j,k) corresponds to the total load of flows from the first k coflows at output port j.
Finally, let L_i,k=∑_j∈𝒥 d_i,j,k denote the total load of flows from coflow k at input port i, and L_j,k=∑_i∈ℐ d_i,j,k denote the total load of flows from coflow k at output port j.
Let us begin by presenting several key observations regarding the primal-dual algorithm.
The following statements hold.
* Every nonzero β_i,S can be written as β_μ_1(k),k for some coflow k.
* Every nonzero β_j,S can be written as β_μ_2(k),k for some coflow k.
* For every set S_μ_1(k),k that has a nonzero β_μ_1(k),k variable, if k' ≤ k then r_k'≤κ· d(S_μ_1(k),k)/m.
* For every set S_μ_2(k),k that has a nonzero β_μ_2(k),k variable, if k' ≤ k then r_k'≤κ· d(S_μ_2(k),k)/m.
* For every coflow k that has a nonzero α_μ_1(k), 1, k, r_k>κ· d(S_μ_1(k),k)/m.
* For every coflow k that has a nonzero α_1, μ_2(k), k, r_k>κ· d(S_μ_2(k),k)/m.
* For every coflow k that has a nonzero α_μ_1(k), 1, k or a nonzero α_1, μ_2(k), k, if k'≤ k then r_k'≤ r_k.
The validity of each of the aforementioned observations can be readily verified and directly inferred from the steps outlined in Algorithm <ref>.
For any subset S, we have that d(S)^2≤ 2m· f(S).
Let C_k represent the completion time of coflow k when scheduled according to Algorithm <ref>. For any coflow k, we have C_k≤ a·max_k'≤ kr_k'+χ(d(S_μ_1(k),k)+d(S_μ_2(k),k)/m)+(1-2/m)C_k^*, where a=0 signifies the absence of release times, and a=1 indicates the presence of arbitrary release times.
First, let's consider the case where there is no release time and no precedence constraints. In this case, the completion time bound for each coflow can be expressed by the following inequality:
Ĉ_k ≤ 1/md(S_μ_1(k), k) + 1/md(S_μ_2(k),k)+(1-2/m) max_i, j d_i,j,k
Now, let v_1v_2⋯ v_f be the longest path of coflow k, where v_f=k. Then, we can derive the following inequalities:
C_k ≤ ∑_q=1^fĈ_v_q
≤ ∑_q=1^f1/md(S_μ_1(q), q) + 1/md(S_μ_2(q),q) +(1-2/m) max_i, j d_i,j,q
≤ ∑_q=1^f1/md(S_μ_1(k), k) + 1/md(S_μ_2(k),k) +(1-2/m) max_i, j d_i,j,q
= f/md(S_μ_1(k), k) + f/md(S_μ_2(k),k) +∑_q=1^f(1-2/m) max_i, j d_i,j,q
≤ f/md(S_μ_1(k), k) + f/md(S_μ_2(k),k)+ (1-2/m) C_k^*.
When considering the release time, coflow k is transmitted starting at max_k'≤ kr_k' at the latest. This proof confirms the lemma.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then γ_k', k=0 holds for all k, k'∈𝒦.
Given that w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ, and j ∈𝒥, the β value of coflow k is smaller than that of coflow k'. As a result, there is no need to order the coflow k by setting γ_k',k.
For every coflow k, ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
A coflow k is included in the permutation of Algorithm <ref> only if the constraint ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k+∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'≤ w_k becomes tight for this particular coflow, resulting in ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
By applying Lemma <ref>, we have
∑_k=1^n w_kC_k ≤ ∑_k=1^n w_k· A +(1-2/m) ∑_k=1^n w_kC_k^*
where A=a·max_k'≤ kr_k'+χd(S_μ_1(k),k)+d(S_μ_2(k),k)/m. We have ∑_k=1^n w_k C_k^*=OPT. Now we focus on the first term ∑_k=1^n w_k· A. By applying Lemmas <ref> and <ref>, we have
∑_k=1^n w_k· A = ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k· A
+∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A
+∑_k=1^n∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k· A
Let's begin by bounding ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k· A.
By applying Observation <ref> parts (<ref>), (<ref>) and (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k·A
≤ ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥 α_i, j, k(a·r_k+2χ·r_k/κ)
≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥 α_i, j, k·r_k
Now we bound ∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A. By applying Observation <ref> part (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k·A
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·max_ℓ≤kr_ℓ+χd(S_μ_1(k),k)+d(S_μ_2(k),k)/m)
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·κ·d(S_μ_1(k'),k')/m + 2χ·d(S_μ_1(k),k)/m)
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐ∑_k≤k'β_i,k'L_i,kd(S_μ_1(k'),k')/m
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'∑_k≤k'L_i,kd(S_μ_1(k'),k')/m
= (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'd(S_i,k')d(S_μ_1(k'),k')/m
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'(d(S_μ_1(k'),k'))^2/m
By sequentially applying Observation <ref> and Observation <ref> part (<ref>), we can upper bound this expression by
2(a·κ+2χ)∑_i ∈ℐ∑_k=1^nβ_i,kf(S_μ_1(k),k)
= 2(a·κ+2χ)∑_k=1^nβ_μ_1(k),kf(S_μ_1(k),k)
≤ 2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
By Observation <ref> and Observation <ref> parts (<ref>) and (<ref>), we also can obtain
∑_k=1^n∑_j ∈𝒥∑_k'≥kβ_j,k'L_j,k ·A
≤ 2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
Therefore,
∑_kw_kC_k ≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ+2-2/m for the flow-level scheduling problem with release times.
To schedule coflows with release times, the application of Lemma <ref> (with a = 1) indicates the following:
∑_kw_kC_k ≤ (1+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ+1)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+(4χ+1)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+(4χ+1)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT
≤ (4χ+2-2/m)· OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ+1-2/m for the flow-level scheduling problem without release times.
To schedule coflows without release times, the application of Lemma <ref> (with a = 0) indicates the following:
∑_kw_kC_k ≤ (2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2· 2χ∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2· 2χ∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ 4χ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+4χ∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+4χ∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT
≤ (4χ+1-2/m)· OPT.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the inequality ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
We demonstrate the case of L_μ_1(r)>L_μ_2(r), while the other case of L_μ_1(r)≤ L_μ_2(r) can be obtained using the same approach, yielding the same result. If coflow k does not undergo the adjustment of the order by setting γ_k',k, then ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ 0.
Suppose coflow p is replaced by coflow k through the adjustment of γ_k',k.
Let
B=∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k,
B_p=∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,p+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,p,
H=∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k',
H_p=∑_k'∈𝒦|(k',p)∈ Eγ_k',p-∑_k'∈𝒦|(p,k')∈ Eγ_p,k',
R'=w_k/w_p.
If coflow k undergoes the adjustment of the order by setting γ_k',k, then
H = w_k-B-L_i,k/L_i,p(w_p-B_p-H_p)
≤ w_k-B-w_p+B_p+H_p
≤ w_k-w_p+H_p
≤ w_k-w_p
= R'-1/R'w_k
≤ R-1/Rw_k
The inequalities (<ref>) and (<ref>) are due to L_i,p≤ L_i,k for all i ∈ℐ. The inequality (<ref>) is due to
H_p≤ 0. Based on Lemma <ref>, we know that ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
Thus, we obtain:
H ≤ (R-1)(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k).
This proof confirms the lemma.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ R(a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2R(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2R(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
According to lemma <ref>, we have
∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ R(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
Then, following a similar proof to that of Lemma <ref>, we can derive the result
∑_kw_kC_k ≤ R(a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2R(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2R(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
By employing analogous proof techniques to theorems <ref> and <ref>, we can establish the validity of the following two theorems:
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ+R+1-2/m for the flow-level scheduling problem with release times.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ+1-2/m for the flow-level scheduling problem without release times.
§ APPROXIMATION ALGORITHM FOR THE COFLOW-LEVEL SCHEDULING PROBLEM
This section focuses on the coflow-level scheduling issue, which pertains to the transmission of flows within a coflow via a single core. It is important to remember that L_i,k=∑_j=1^Nd_i,j,k and L_j,k=∑_i=1^Nd_i,j,k, where L_i,k denotes the overall load at source i for coflow k, and L_j,k denotes the overall load at destination j for coflow k.
Let
f_i(S) = (∑_k∈ S L_i,k^2+(∑_k∈ S L_i,k)^2)/(2m)
and
f_j(S) = (∑_k∈ S L_j,k^2+(∑_k∈ S L_j,k)^2)/(2m)
for any subset S⊆𝒦.
To address this problem, we propose a linear programming relaxation formulation as follows:
min ∑_k ∈𝒦 w_k C_k <ref>
s.t. C_k≥ r_k+L_i,k, ∀ k∈𝒦, ∀ i∈ℐ
C_k≥ r_k+L_j,k, ∀ k∈𝒦, ∀ j∈𝒥
C_k≥ C_k'+L_i,k, ∀ k, k'∈𝒦, ∀ i∈ℐ:
k'≺ k
C_k≥ C_k'+L_j,k, ∀ k, k'∈𝒦, ∀ j∈𝒥:
k'≺ k
∑_k∈ SL_i,kC_k≥ f_i(S) ∀ i∈ℐ, ∀ S⊆𝒦
∑_k∈ SL_j,kC_k≥ f_j(S) ∀ j∈𝒥, ∀ S⊆𝒦
In the linear program (<ref>), the completion time C_k is defined for each coflow k in the schedule. Constraints (<ref>) and (<ref>) ensure that the completion time of any coflow k is greater than or equal to its release time r_k plus its load. To account for the precedence constraints among coflows, constraints (<ref>) and (<ref>) indicate that all flows of coflow k' must be completed before coflow k can be scheduled. Additionally, constraints (<ref>) and (<ref>) establish lower bounds for the completion time variable at the input and output ports, respectively.
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k+L_i,k)
+∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k+L_j,k)
+∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐγ_k', i, k L_i,k
+ ∑_(k', k) ∈ E∑_j ∈𝒥γ_k', j, k L_j,k <ref>
s.t. ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k
+∑_i ∈ℐ∑_S⊆𝒦/k∈ Sβ_i,SL_i,k
+∑_j ∈𝒥∑_S⊆𝒦/k∈ Sβ_j,SL_j,k
+∑_(k',k)∈ E∑_i ∈ℐγ_k', i, k
+∑_(k',k)∈ E∑_j ∈𝒥γ_k', j, k
-∑_(k,k')∈ E∑_i ∈ℐγ_k, i, k'
-∑_(k,k')∈ E∑_j ∈𝒥γ_k, j, k'≤ w_k, ∀ k∈𝒦
α_i, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ
α_j, k≥ 0, ∀ k∈𝒦, ∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆𝒦
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆𝒦
γ_k', i, k≥ 0, ∀ (k', k)∈ E, ∀ i∈ℐ
γ_k', j, k≥ 0, ∀ (k', k)∈ E, ∀ j∈𝒥
Let γ_k', k=∑_i ∈ℐγ_k', i, k+∑_j ∈𝒥γ_k', j, k. Notice that for every coflow k, there exist two dual variables α_i, k and α_j, k, and there is a corresponding constraint. Additionally, for every subset of coflows S, there are two dual variables β_i, S and β_j, S. For the precedence constraints, there are two dual variables γ_k', k and γ_k, k'. Algorithm <ref> in Appendix <ref> presents the primal-dual algorithm, which has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
The coflow-driven-list-scheduling, as outlined in Algorithm <ref>, operates as follows. To ensure clarity and generality, we assume that the coflows are arranged in an order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦. We schedule all the flows within each coflow iteratively, following the sequence provided by this list.
For each coflow k, we identify the network core h^* that can transmit coflow k in a manner that minimizes its completion time (lines <ref>-<ref>). Subsequently, we transmit all the flows allocated to network core h (lines <ref>-<ref>).
In summary, the coflow-driven-list-scheduling algorithm works by iteratively scheduling the flows within each coflow, following a predetermined order. It determines the optimal network core for transmitting each coflow to minimize their completion times, and then transmits the allocated flows for each core accordingly.
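The sketch below mirrors this placement rule: for each coflow, in the given order, it estimates the completion on every core by the bottleneck port load that would result from adding the coflow there, and assigns the whole coflow to the core with the smallest estimate. The estimate deliberately ignores release times and the exact within-core transmission schedule, so it is a simplification of the selection step described above.

```python
# Sketch of coflow-level placement: put all flows of a coflow on the core whose
# bottleneck port load (after adding the coflow) is smallest. Release times and
# the exact within-core schedule are ignored in this estimate -- a simplification.
# Assumes each coflow has at least one flow.
def assign_coflows(order, coflows, m):
    in_load = [dict() for _ in range(m)]
    out_load = [dict() for _ in range(m)]
    core_of = {}
    for k in order:
        li, lj = coflows[k]["in"], coflows[k]["out"]    # L_{i,k} and L_{j,k}
        def bottleneck(h):
            worst_in = max(in_load[h].get(i, 0.0) + v for i, v in li.items())
            worst_out = max(out_load[h].get(j, 0.0) + v for j, v in lj.items())
            return max(worst_in, worst_out)
        h_star = min(range(m), key=bottleneck)
        core_of[k] = h_star
        for i, v in li.items():
            in_load[h_star][i] = in_load[h_star].get(i, 0.0) + v
        for j, v in lj.items():
            out_load[h_star][j] = out_load[h_star].get(j, 0.0) + v
    return core_of
```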
§.§ Analysis
In this section, we present a comprehensive analysis of the proposed algorithm, establishing its approximation ratios. Specifically, we demonstrate that the algorithm achieves an approximation ratio of O(mχ) when considering workload sizes and weights that are topology-dependent in the input instances. Additionally, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rmχ) where R is the ratio of maximum weight to minimum weight. It is crucial to note that our analysis assumes that the coflows are arranged in the order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦.
We would like to emphasize that S_k={1, 2, …, k} represents the set of the first k coflows. We define β_i,k=β_i,S_k and β_j,k=β_j,S_k for convenience. Moreover, we define L_i(S_k)=∑_k'≤ k L_i, k' and L_j(S_k)=∑_k'≤ k L_j, k' to simplify the notation. Furthermore, let μ_1(k) denote the input port with the highest load among the coflows in S_k, and μ_2(k) denote the output port with the highest load among the coflows in S_k. Hence, we have L_μ_1(k)(S_k)=∑_k'≤ k L_μ_1(k), k' and L_μ_2(k)(S_k)=∑_k'≤ k L_μ_2(k), k'.
Let us begin by presenting several key observations regarding the primal-dual algorithm.
The following statements hold.
* Every nonzero β_i,S can be written as β_μ_1(k),k for some coflow k.
* Every nonzero β_j,S can be written as β_μ_2(k),k for some coflow k.
* For every set S_k that has a nonzero β_μ_1(k),k variable, if k' ≤ k then r_k'≤κ· L_μ_1(k)(S_k)/m.
* For every set S_k that has a nonzero β_μ_2(k),k variable, if k' ≤ k then r_k'≤κ· L_μ_2(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_1(k), k, r_k>κ· L_μ_1(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_2(k), k, r_k>κ· L_μ_2(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_1(k), k or a nonzero α_μ_2(k), k, if k'≤ k then r_k'≤ r_k.
The validity of each of the aforementioned observations can be readily verified and directly inferred from the steps outlined in Algorithm <ref>.
For any subset S, we have that (∑_k∈ S L_i,k)^2≤ 2m· f_i(S) and (∑_k∈ S L_j,k)^2≤ 2m· f_j(S).
Let C_k represent the completion time of coflow k when scheduled according to Algorithm <ref>. For any coflow k, we have C_k≤ a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)), where a=0 signifies the absence of release times, and a=1 indicates the presence of arbitrary release times.
First, let's consider the case where there is no release time and no precedence constraints. In this case, the completion time bound for each coflow can be expressed by the following inequality:
Ĉ_k ≤ L_μ_1(k)(S_k)+L_μ_2(k)(S_k)
Now, let v_1v_2⋯ v_f be the longest path of coflow k, where v_f=k. Then, we can derive the following inequalities:
C_k ≤ ∑_q=1^fĈ_v_q
≤ ∑_q=1^f L_μ_1(q)(S_q)+L_μ_2(q)(S_q)
≤ ∑_q=1^f L_μ_1(k)(S_k)+L_μ_2(k)(S_k)
= f(L_μ_1(k)(S_k)+L_μ_2(k)(S_k))
Since the longest path contains at most χ coflows, f ≤χ and hence C_k≤χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)). When release times are present, coflow k begins transmission no later than max_k'≤ kr_k', which contributes the a·max_k'≤ kr_k' term. This completes the proof.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then γ_k', k=0 holds for all k, k'∈𝒦.
Given that w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ, and j ∈𝒥, the β value of coflow k is smaller than that of coflow k'. As a result, there is no need to reorder coflow k by raising γ_k',k.
For every coflow k, ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'= w_k.
A coflow k is included in the permutation of Algorithm <ref> only if the constraint
∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k +∑_i ∈ℐ∑_S⊆𝒦/k∈ Sβ_i,SL_i,k +∑_j ∈𝒥∑_S⊆𝒦/k∈ Sβ_j,SL_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ w_k becomes tight for this particular coflow, resulting in ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'= w_k.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ (a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
By applying Lemma <ref>, we have
∑_k=1^n w_kC_k
≤∑_k=1^n w_k·(a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)))
Let A=a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)). By applying Lemmas <ref> and <ref>, we have
∑_k=1^n w_kC_k ≤ ∑_k=1^n(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k)· A
+∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A
+∑_k=1^n∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k· A
Let's begin by bounding ∑_k=1^n∑_i ∈ℐα_i, k· A+∑_k=1^n∑_j ∈𝒥α_j, k· A.
By applying Observation <ref> parts (<ref>), (<ref>) and (<ref>), we have
∑_k=1^n(∑_i ∈ℐ α_i, k+∑_j ∈𝒥 α_j, k)·A
≤ ∑_k=1^n(∑_i ∈ℐ α_i, k+∑_j ∈𝒥 α_j, k)(a·r_k+2χ·m·r_k/κ)
≤ (a+2χ·m/κ)∑_k=1^n(∑_i ∈ℐ α_i, k+∑_j ∈𝒥 α_j, k)·r_k
Now we bound ∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A. By applying Observation <ref> part (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k ·A
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·max_k'≤kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)))
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·κ·L_μ_1(k)(S_k)/m + 2χ·L_μ_1(k)(S_k))
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐ∑_k≤k'β_i,k'L_i,kL_μ_1(k)(S_k)/m
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'∑_k≤k'L_i,kL_μ_1(k)(S_k)/m
= (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'L_i(S_k)L_μ_1(k)(S_k)/m
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'(L_μ_1(k)(S_k))^2/m
By sequentially applying Observation <ref> and Observation <ref> part (<ref>), we can upper bound this expression by
2(a·κ+2χ·m)∑_i ∈ℐ∑_k=1^nβ_i,kf_i(S_μ_1(k),k)
= 2(a·κ+2χ·m)∑_k=1^nβ_μ_1(k),kf_i(S_μ_1(k),k)
≤ 2(a·κ+2χ·m)∑_i ∈ℐ∑_S⊆𝒦β_i,Sf_i(S)
By Observation <ref> and Observation <ref> parts (<ref>) and (<ref>), we also can obtain
∑_k=1^n∑_j ∈𝒥∑_k'≥kβ_j,k'L_j,k ·A
≤ 2(a·κ+2χ·m)∑_j ∈𝒥∑_S⊆𝒦β_j,Sf_j(S)
Therefore,
∑_kw_kC_k ≤ (a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ m+1 for the coflow-level scheduling problem with release times.
To schedule coflows with release times, the application of Lemma <ref> (with a = 1) indicates the following:
∑_kw_kC_k ≤ (1+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(1+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ· m+1)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(4χ· m+1)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+(4χ· m+1)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+(4χ· m+1)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
≤ (4χ· m+1) · OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ m for the coflow-level scheduling problem without release times.
To schedule coflows without release times, the application of Lemma <ref> (with a = 0) indicates the following:
∑_kw_kC_k ≤ (2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ· m)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(4χ· m)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+(4χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+(4χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
≤ 4χ· m · OPT.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the inequality ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
If coflow k does not undergo the adjustment of the order by setting γ_k',k, then ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ 0. If coflow k undergoes the adjustment of the order by setting γ_k',k, then we have ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤R-1/Rw_k. Based on Lemma <ref>, we know that ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
Thus, we obtain:
∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k).
This completes the proof.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ R(a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+R(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2R(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2R(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
According to Lemma <ref>, we have
∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ R(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
Then, following a proof similar to that of Lemma <ref>, we can derive the result
∑_kw_kC_k ≤ R(a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+R(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2R(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2R(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
By employing proof techniques analogous to those of Theorems <ref> and <ref>, we can establish the validity of the following two theorems:
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ m+R for the coflow-level scheduling problem with release times.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ m for the coflow-level scheduling problem without release times.
§ COFLOWS OF MULTI-STAGE JOBS SCHEDULING PROBLEM
In this section, we focus on the coflows of multi-stage jobs scheduling problem. We modify the linear program (<ref>) by introducing a set 𝒯 to represent the jobs and a set 𝒯_t to represent the coflows that belong to job t. We also incorporate an additional constraint (<ref>), which ensures that the completion time of any job is at least that of each of its coflows. Our objective is to minimize the total weighted completion time for a given set of multi-stage jobs. We assume that all coflows within the same job have the same release time. The resulting problem can be expressed as the following linear programming relaxation:
min ∑_t ∈𝒯 w_t C_t <ref>
s.t. (<ref>)-(<ref>)
C_t≥ C_k, ∀ t∈𝒯, ∀ k∈𝒯_t
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐ∑_j ∈𝒥α_i, j, k(r_k+d_i,j,k)
+∑_i ∈ℐ∑_S ⊆ℱ_iβ_i,S f(S)
+∑_j ∈𝒥∑_S ⊆ℱ_jβ_j,S f(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k d_i,j,k <ref>
s.t. ∑_k∈𝒯_t∑_i ∈ℐ∑_j ∈𝒥α_i, j, k
+∑_k∈𝒯_t∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k
+∑_k∈𝒯_t∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k
+∑_k∈𝒯_t∑_(k',k)∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k
-∑_k∈𝒯_t∑_(k,k')∈ E∑_i ∈ℐ,j ∈𝒥γ_k, i, j, k'≤ w_t, ∀ t∈𝒯
α_i, j, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ,
∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆ℱ_i
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆ℱ_j
γ_k', i, j, k≥ 0, ∀ (k', k)∈ E,
∀ i∈ℐ, ∀ j∈𝒥
Let α_i, j, t = ∑_k∈𝒯_tα_i, j, k, L_i,S,t=∑_k∈𝒯_t L_i,S,k and L_j,S,t=∑_k∈𝒯_t L_j,S,k for all t∈𝒯.
Algorithm <ref> in Appendix <ref> determines the order of job scheduling. Since there are no precedence constraints among the jobs, there is no need to set γ to satisfy precedence constraints. We transmit the jobs sequentially, and within each job, the coflows are transmitted in topological-sorting order. As the values of γ are all zero, similar to the proof of Theorem <ref>, we can obtain the following theorem. Unlike Theorem <ref>, this result is not limited to the workload sizes and weights that are topology-dependent in the input instances.
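A minimal sketch of this job-level schedule is given below, assuming job objects that expose their coflow identifiers and intra-job precedence edges; the container names are assumptions, and the topological order is computed with Kahn's algorithm.

from collections import defaultdict, deque

def toposort(coflow_ids, edges):
    # Kahn's algorithm over the job's precedence DAG
    indeg = {k: 0 for k in coflow_ids}
    succ = defaultdict(list)
    for k1, k2 in edges:             # edge (k1, k2): coflow k1 precedes coflow k2
        succ[k1].append(k2)
        indeg[k2] += 1
    queue = deque(k for k in coflow_ids if indeg[k] == 0)
    order = []
    while queue:
        k = queue.popleft()
        order.append(k)
        for k2 in succ[k]:
            indeg[k2] -= 1
            if indeg[k2] == 0:
                queue.append(k2)
    return order

def flatten_job_schedule(jobs_in_order):
    # jobs_in_order: job objects with .coflow_ids and .edges, already permuted by the algorithm
    flat = []
    for job in jobs_in_order:
        flat.extend(toposort(job.coflow_ids, job.edges))
    return flat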
The proposed algorithm achieves an approximation ratio of O(χ) for minimizing the total weighted completion time of a given set of multi-stage jobs.
§ EXPERIMENTAL RESULTS
In order to evaluate the effectiveness of the proposed algorithm, this section conducts simulations comparing its performance to that of a previous algorithm. Both synthetic and real traffic traces are used for these simulations, without considering release time. The subsequent sections present and analyze the results obtained from these simulations.
§.§ Comparison Metrics
Since the cost of the feasible dual solution provides a lower bound on the optimal value of the coflow scheduling problem, we calculate the approximation ratio by dividing the total weighted completion time achieved by the algorithms by the cost of the feasible dual solution.
§.§ Randomly Generated Graphs
In this section, we examine a collection of randomly generated graphs that are created based on a predefined set of fundamental characteristics.
* DAG size, n: The number of coflows in the DAG.
* Out degree, deg: Out degree of a node.
* Parallelism factor, (p) <cit.>: The calculation of the levels in the DAG involves randomly generating a number from a uniform distribution. The mean value of this distribution is √(n)/p. The generated number is then rounded up to the nearest integer, determining the number of levels. Additionally, the width of each level is calculated by randomly generating a number from a uniform distribution. The mean value for this distribution is p ×√(n), and it is also rounded up to the nearest integer <cit.>. Graphs with a larger value of p tend to have a smaller χ, while those with a smaller value of p have a larger χ.
* Workload, (W_min, W_max, L_min, L_max) <cit.>:
Each coflow is accompanied by a description (W_min, W_max, L_min, L_max) that provides information about its characteristics. To determine the number of non-zero flows within a coflow, two values, w_1 and w_2, are randomly selected from the interval [W_min, W_max]. These values are then assigned to the input and output links of the coflow in a random manner. The size of each flow is randomly chosen from the interval [L_min, L_max]. The construction of all coflows by default follows a predefined distribution based on the coflow descriptions. This distribution consists of four configurations: (1, 4, 1, 10), (1, 4, 10, 1000), (4, N, 1, 10), and (4, N, 10, 1000), with proportions of 41%, 29%, 9%, and 21%, respectively. Here, N represents the number of ports in the core.
Let level_k denote the level of coflow k, and let Lv(k)={k'∈𝒦 | level_k < level_k'} represent the set of coflows that have a higher level than k. When constructing a DAG, only a subset of Lv(k) can be selected as successors for each coflow k. For coflow k, a set of successors is randomly chosen with a probability of deg/|Lv(k)|. To assign weights to each coflow, positive integers are randomly and uniformly selected from the interval [1, 100].
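The following sketch reproduces this generator. Since only the means of the uniform distributions are specified, drawing from uniform(0, 2·mean) and assigning any leftover coflows to the last level are assumptions of the sketch.

import math
import random

def random_coflow_dag(n, p, deg, w_range=(1, 100), seed=0):
    rng = random.Random(seed)

    # number of levels: ceil of a uniform draw whose mean is sqrt(n)/p
    num_levels = max(1, math.ceil(rng.uniform(0, 2 * math.sqrt(n) / p)))

    # width of each level: ceil of a uniform draw whose mean is p*sqrt(n)
    level_of, k = {}, 0
    for lvl in range(num_levels):
        width = max(1, math.ceil(rng.uniform(0, 2 * p * math.sqrt(n))))
        for _ in range(width):
            if k < n:
                level_of[k] = lvl
                k += 1
    while k < n:                     # leftover coflows go to the last level
        level_of[k] = num_levels - 1
        k += 1

    # precedence edges: each higher-level coflow becomes a successor with probability deg/|Lv(k)|
    edges = []
    for u in range(n):
        higher = [v for v in range(n) if level_of[v] > level_of[u]]
        if not higher:
            continue
        prob = min(1.0, deg / len(higher))
        edges.extend((u, v) for v in higher if rng.random() < prob)

    # weights drawn uniformly from [1, 100]
    weights = {u: rng.randint(*w_range) for u in range(n)}
    return level_of, edges, weights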
§.§ Results
Figure <ref> illustrates the approximation ratio of the proposed algorithm compared to the previous algorithm for synthetic traces. The problem size ranges from 5 to 25 coflows in five network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. The proposed algorithms demonstrate significantly smaller approximation ratios than 4χ+2-2/m. Furthermore, FDLS outperforms Weaver by approximately 4.7% to 7.5% within this problem size range. Although we do not restrict the workload sizes and weights to be topology-dependent in these instances, we still obtain results lower than 4χ+2-2/m. This demonstrates the excellent performance of the algorithm in general scenarios.
The effects of flow density were compared by categorizing the coflows into three instances: dense, sparse, and combined. For each instance, the number of flows was randomly selected from either the range [N, N^2] or [1, N], depending on the specific instance. In the combined instance, each coflow has a 50% probability of being set to sparse and a 50% probability of being set to dense. Figure <ref> illustrates the approximation ratio of synthetic traces for 100 randomly chosen dense and combined instances, comparing the previous algorithm with the proposed algorithm. The problem size consisted of 25 coflows in five network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. In the dense case, Weaver achieved an approximation ratio of 2.80, while FDLS achieved an approximation ratio of 2.66, resulting in a 5.12% improvement over Weaver. In the combined case, FDLS outperformed Weaver by 2.52%. Importantly, the proposed algorithm demonstrated a greater improvement in the dense case compared to the combined case.
Figure <ref> illustrates the approximation ratio of synthetic traces for varying numbers of network cores, comparing the previous algorithm to the proposed algorithm when all coflows are released simultaneously at time 0. The problem size consists of 25 coflows distributed across 5 to 25 network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. Remarkably, the proposed algorithm consistently achieves significantly smaller approximation ratios compared to the theoretical bound of 4χ+2-2/m. As the number of network cores increases, the approximation ratio also tends to increase. This observation can be attributed to the widening gap between the cost of the feasible dual solution and the cost of the optimal integer solution as the number of network cores grows. Consequently, this leads to a notable discrepancy between the experimental approximation ratio and the actual approximation ratio. Importantly, across different numbers of network cores, FDLS outperforms Weaver by approximately 1.79% to 5.30%.
Figure <ref> illustrates the approximation ratio of synthetic traces for varying parallelism factor (p), comparing the previous algorithm to the proposed algorithm when all coflows are released simultaneously at time 0. The problem size consists of 25 coflows distributed across 5 network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. According to our settings, the coflow number of the longest path in the DAG (χ) exhibits an increasing trend as the parallelism factor p decreases. Correspondingly, the approximation ratio also shows an upward trend with a decrease in the parallelism factor p. This empirical finding aligns with the theoretical analysis, demonstrating a linear relationship between the approximation ratio and χ.
We present the simulation results of the real traffic trace obtained from Hive/MapReduce traces captured from Facebook's 3000-machine cluster, consisting of 150 racks. This real traffic trace has been widely used in previous research simulations <cit.>. The trace dataset comprises a total of 526 coflows. In Figure <ref>, we depict the approximation ratio of the real traces for different thresholds of the number of flows. That is, we apply a filter to the set of coflows based on the condition that the number of flows is equal to or greater than the threshold value. For each instance, we set deg=3, p=1, and χ≥ 2. Notably, the proposed FDLS algorithm outperforms the Weaver algorithm by approximately 3.11% to 4.84% across various thresholds. Furthermore, as the number of flows increases, the approximation ratio decreases. This observation is consistent with our previous findings, suggesting a decreasing trend in the approximation ratio as the number of coflows increases.
§ CONCLUDING REMARKS
This paper studies the problem of coflow scheduling with release times and precedence constraints in identical parallel networks. The algorithm we propose effectively determines the scheduling order of coflows using the primal-dual method. The primal-dual algorithm has a space complexity of O(Nn) and a time complexity of O(n^2). When considering workload sizes and weights that are topology-dependent in the input instances, our proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ). Furthermore, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rχ). For the coflow-level scheduling problem, the proposed algorithm attains an approximation ratio of O(mχ) when considering workload sizes and weights that are topology-dependent in the input instances. Moreover, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rmχ). In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ). Although our theoretical results are based on a limited set of input instances, experimental findings show that the results for general input instances outperform the theoretical results, thereby demonstrating the effectiveness and practicality of the proposed algorithm.
Agarwal2018
S. Agarwal, S. Rajakrishnan, A. Narayan, R. Agarwal, D. Shmoys, and A. Vahdat,
“Sincronia: Near-optimal network design for coflows,” in Proceedings
of the 2018 ACM Conference on SIGCOMM, ser. SIGCOMM '18. New York, NY, USA: Association for Computing
Machinery, 2018, p. 16–29.
ahmadi2020scheduling
S. Ahmadi, S. Khuller, M. Purohit, and S. Yang, “On scheduling coflows,”
Algorithmica, vol. 82, no. 12, pp. 3604–3629, 2020.
al2008scalable
M. Al-Fares, A. Loukissas, and A. Vahdat, “A scalable, commodity data center
network architecture,” ACM SIGCOMM computer communication review,
vol. 38, no. 4, pp. 63–74, 2008.
Bansal2010
N. Bansal and S. Khot, “Inapproximability of hypergraph vertex cover and
applications to scheduling problems,” in Automata, Languages and
Programming, S. Abramsky, C. Gavoille, C. Kirchner, F. Meyer auf der Heide,
and P. G. Spirakis, Eds. Berlin,
Heidelberg: Springer Berlin Heidelberg, 2010, pp. 250–261.
borthakur2007hadoop
D. Borthakur, “The hadoop distributed file system: Architecture and design,”
Hadoop Project Website, vol. 11, no. 2007, p. 21, 2007.
Chowdhury2012
M. Chowdhury and I. Stoica, “Coflow: A networking abstraction for cluster
applications,” in Proceedings of the 11th ACM Workshop on Hot Topics
in Networks, ser. HotNets-XI. New
York, NY, USA: Association for Computing Machinery, 2012, p. 31–36.
Chowdhury2015
——, “Efficient coflow scheduling without prior knowledge,” in
Proceedings of the 2015 ACM Conference on SIGCOMM, ser. SIGCOMM
'15. New York, NY, USA: Association
for Computing Machinery, 2015, p. 393–406.
chowdhury2011managing
M. Chowdhury, M. Zaharia, J. Ma, M. I. Jordan, and I. Stoica, “Managing data
transfers in computer clusters with orchestra,” ACM SIGCOMM computer
communication review, vol. 41, no. 4, pp. 98–109, 2011.
Chowdhury2014
M. Chowdhury, Y. Zhong, and I. Stoica, “Efficient coflow scheduling with
varys,” in Proceedings of the 2014 ACM Conference on SIGCOMM, ser.
SIGCOMM '14. New York, NY, USA:
Association for Computing Machinery, 2014, p. 443–454.
Daoud08
M. I. Daoud and N. Kharma, “A high performance algorithm for static task
scheduling in heterogeneous distributed computing systems,” Journal of
Parallel and Distributed Computing, vol. 68, no. 4, pp. 399 – 409, 2008.
DAVIS2013121
J. M. Davis, R. Gandhi, and V. H. Kothari, “Combinatorial algorithms for
minimizing the weighted sum of completion times on a single machine,”
Operations Research Letters, vol. 41, no. 2, pp. 121–125, 2013.
Dean2008
J. Dean and S. Ghemawat, “Mapreduce: Simplified data processing on large
clusters,” Communications of the ACM, vol. 51, no. 1, p. 107–113,
jan 2008.
dogar2014decentralized
F. R. Dogar, T. Karagiannis, H. Ballani, and A. Rowstron, “Decentralized
task-aware scheduling for data center networks,” ACM SIGCOMM Computer
Communication Review, vol. 44, no. 4, pp. 431–442, 2014.
greenberg2009vl2
A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. A.
Maltz, P. Patel, and S. Sengupta, “Vl2: A scalable and flexible data center
network,” in Proceedings of the ACM SIGCOMM 2009 conference on Data
communication, 2009, pp. 51–62.
huang2016
X. S. Huang, X. S. Sun, and T. E. Ng, “Sunflow: Efficient optical circuit
scheduling for coflows,” in Proceedings of the 12th International on
Conference on emerging Networking EXperiments and Technologies, 2016, pp.
297–311.
Huang2020
X. S. Huang, Y. Xia, and T. S. E. Ng, “Weaver: Efficient coflow scheduling in
heterogeneous parallel networks,” in 2020 IEEE International Parallel
and Distributed Processing Symposium (IPDPS), 2020, pp. 1071–1081.
isard2007dryad
M. Isard, M. Budiu, Y. Yu, A. Birrell, and D. Fetterly, “Dryad: distributed
data-parallel programs from sequential building blocks,” in
Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on
Computer Systems 2007, 2007, pp. 59–72.
khuller2016brief
S. Khuller and M. Purohit, “Brief announcement: Improved approximation
algorithms for scheduling co-flows,” in Proceedings of the 28th ACM
Symposium on Parallelism in Algorithms and Architectures, 2016, pp.
239–240.
Qiu2015
Z. Qiu, C. Stein, and Y. Zhong, “Minimizing the total weighted completion time
of coflows in datacenter networks,” in Proceedings of the 27th ACM
Symposium on Parallelism in Algorithms and Architectures, ser. SPAA
'15. New York, NY, USA: Association
for Computing Machinery, 2015, p. 294–303.
Sachdeva2013
S. Sachdeva and R. Saket, “Optimal inapproximability for scheduling problems
via structural hardness for hypergraph vertex cover,” in 2013 IEEE
Conference on Computational Complexity, 2013, pp. 219–229.
shafiee2018improved
M. Shafiee and J. Ghaderi, “An improved bound for minimizing the total
weighted completion time of coflows in datacenters,” IEEE/ACM
Transactions on Networking, vol. 26, no. 4, pp. 1674–1687, 2018.
shafiee2021scheduling
——, “Scheduling coflows with dependency graph,” IEEE/ACM
Transactions on Networking, 2021.
Shvachko2010
K. Shvachko, H. Kuang, S. Radia, and R. Chansler, “The hadoop distributed file
system,” in 2010 IEEE 26th Symposium on Mass Storage Systems and
Technologies (MSST), 2010, pp. 1–10.
Singh2015
A. Singh, J. Ong, A. Agarwal, G. Anderson, A. Armistead, R. Bannon, S. Boving,
G. Desai, B. Felderman, P. Germano, A. Kanagala, J. Provost, J. Simmons,
E. Tanda, J. Wanderer, U. Hölzle, S. Stuart, and A. Vahdat, “Jupiter
rising: A decade of clos topologies and centralized control in google's
datacenter network,” in Proceedings of the 2015 ACM Conference on
SIGCOMM, ser. SIGCOMM '15. New York,
NY, USA: Association for Computing Machinery, 2015, p. 183–197.
Tian18
B. Tian, C. Tian, H. Dai, and B. Wang, “Scheduling coflows of multi-stage jobs
to minimize the total weighted job completion time,” in IEEE INFOCOM
2018 - IEEE Conference on Computer Communications, 2018, pp. 864–872.
Topcuoglu02
H. Topcuoglu, S. Hariri, and M.-Y. Wu, “Performance-effective and
low-complexity task scheduling for heterogeneous computing,” IEEE
Transactions on Parallel and Distributed Systems, vol. 13, no. 3, pp.
260–274, Mar 2002.
zaharia2010spark
M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, “Spark:
Cluster computing with working sets,” in 2nd USENIX Workshop on Hot
Topics in Cloud Computing (HotCloud 10), 2010.
Zhang2016
H. Zhang, L. Chen, B. Yi, K. Chen, M. Chowdhury, and Y. Geng, “Coda: Toward
automatically identifying and scheduling coflows in the dark,” in
Proceedings of the 2016 ACM Conference on SIGCOMM, ser. SIGCOMM
'16. New York, NY, USA: Association
for Computing Machinery, 2016, p. 160–173.
zhao2015rapier
Y. Zhao, K. Chen, W. Bai, M. Yu, C. Tian, Y. Geng, Y. Zhang, D. Li, and
S. Wang, “Rapier: Integrating routing and scheduling for coflow-aware data
center networks,” in 2015 IEEE Conference on Computer Communications
(INFOCOM). IEEE, 2015, pp. 424–432.
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
The primal-dual algorithm, presented in Algorithm <ref>, draws inspiration from the works of Davis et al. <cit.> and Ahmadi et al. <cit.>. This algorithm constructs a feasible schedule iteratively, progressing from right to left, determining the processing order of coflows. Starting from the last coflow and moving towards the first, each iteration makes crucial decisions in terms of increasing dual variables α, β or γ. The guidance for these decisions is provided by the dual linear programming (LP) formulation. The algorithm offers a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports, and n represents the number of coflows.
Consider a specific iteration in the algorithm. At the beginning of this iteration, let 𝒦 represent the set of coflows that have not been scheduled yet, and let k denote the coflow with the largest release time. In each iteration, a decision must be made regarding whether to increase dual variables α, β or γ.
If the release time r_k is significantly large, increasing the α dual variable results in substantial gains in the objective function value of the dual problem. On the other hand, if L_μ_1(r) (or L_μ_2(r) if L_μ_2(r)≥ L_μ_1(r)) is large, raising the β variable leads to substantial improvements in the objective value. Let κ be a constant that will be optimized later.
If r_k>κ· L_μ_1(r)/m (or r_k>κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the α dual variable is increased until the dual constraint for coflow k becomes tight. Consequently, coflow k is scheduled to be processed as early as possible and before any previously scheduled coflows.
In the case where r_k≤κ· L_μ_1(r)/m (or r_k≤κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the dual variable β_μ_1(r),𝒢_i (or β_μ_2(r),𝒢_j if Lμ_2(r)≥ L_μ_1(r)) is increased until the dual constraint for coflow k' becomes tight.
In this step, we begin by identifying a candidate coflow, denoted as k', with the minimum value of β. We then examine whether this coflow still has unscheduled successors. If it does, we continue traversing down the chain of successors until we reach a coflow that has no unscheduled successors, which we will refer to as t_1.
Once we have identified coflow t_1, we set its β and γ values such that the dual constraint for coflow t_1 becomes tight. Moreover, we ensure that the β value of coflow t_1 matches that of the candidate coflow k'.
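The sketch below captures the skeleton of this selection rule for the special case without precedence constraints, so the γ bookkeeping and the successor-walking step are omitted; the dual variables are tracked only through the residual slack of each coflow's constraint, and the Coflow container from the earlier list-scheduling sketch is reused. It illustrates the α-versus-β decision, not the full algorithm.

def primal_dual_order(coflows, num_ports, m, kappa=0.5):
    # m: number of network cores; kappa: the threshold constant discussed above
    by_id = {c.cid: c for c in coflows}
    unscheduled = set(by_id)
    li = [0.0] * num_ports                    # L_i over the still-unscheduled coflows
    lj = [0.0] * num_ports                    # L_j over the still-unscheduled coflows
    for c in coflows:
        for (i, j), b in c.demand.items():
            li[i] += b
            lj[j] += b
    slack = {c.cid: float(c.weight) for c in coflows}   # residual w_k of each dual constraint
    suffix = []                                         # coflows fixed from last to first
    while unscheduled:
        k = max(unscheduled, key=lambda cid: by_id[cid].release)
        mu1 = max(range(num_ports), key=lambda i: li[i])
        mu2 = max(range(num_ports), key=lambda j: lj[j])
        on_input = li[mu1] >= lj[mu2]
        port, load = (mu1, li[mu1]) if on_input else (mu2, lj[mu2])
        port_load = {cid: sum(b for (i, j), b in by_id[cid].demand.items()
                              if (i == port if on_input else j == port))
                     for cid in unscheduled}
        candidates = [cid for cid in unscheduled if port_load[cid] > 0]
        if by_id[k].release > kappa * load / m or not candidates:
            last = k                                    # alpha is raised: k is placed last
        else:
            # beta on the bottleneck port is raised until some coflow's constraint is tight
            beta = min(slack[cid] / port_load[cid] for cid in candidates)
            for cid in candidates:
                slack[cid] -= beta * port_load[cid]
            last = min(candidates, key=lambda cid: slack[cid])
        suffix.append(last)
        unscheduled.remove(last)
        for (i, j), b in by_id[last].demand.items():
            li[i] -= b
            lj[j] -= b
    suffix.reverse()
    return suffix                                       # processing order, first to last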
Permuting Coflows
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
Algorithm <ref> presents the primal-dual algorithm which has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
Permuting Coflows
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
Algorithm <ref> determines the order of job scheduling. Since there are no precedence constraints among the jobs, there is no need to set γ to satisfy precedence constraints.
Permuting Jobs
|
http://arxiv.org/abs/2307.03887v1 | 20230708034254 | Improving Prototypical Part Networks with Reward Reweighing, Reselection, and Retraining | ["Robin Netzorg", "Jiaxun Li", "Bin Yu"] | cs.LG | ["cs.LG", "cs.AI", "cs.CV", "cs.HC"] |
In recent years, work has gone into developing deep interpretable methods for image classification that clearly attribute a model's output to specific features of the data. One such method is the prototypical part network (ProtoPNet), which attempts to classify images based on meaningful parts of the input. While this method results in interpretable classifications, it often learns to classify from spurious or inconsistent parts of the image. Hoping to remedy this, we take inspiration from the recent developments in Reinforcement Learning with Human Feedback (RLHF) to fine-tune these prototypes. By collecting human annotations of prototype quality via a 1-5 scale on the CUB-200-2011 dataset, we construct a reward model that learns to identify non-spurious prototypes. In place of a full RL update, we propose the reward reweighed, reselected, and retrained prototypical part network (R3-ProtoPNet), which adds an additional three steps to the ProtoPNet training loop. The first two steps are reward-based reweighting and reselection, which align prototypes with human feedback. The final step is retraining to realign the model's features with the updated prototypes. We find that R3-ProtoPNet improves the overall consistency and meaningfulness of the prototypes, but lowers test predictive accuracy when used independently. When multiple trained R3-ProtoPNets are incorporated into an ensemble, we find an increase in test predictive performance while maintaining interpretability.
§ INTRODUCTION
With the widespread use of deep learning, having these models be interpretable is more important now than ever. As these models continue to see use in high-stakes situations, practitioners hoping to justify a decision need to understand how a deep model makes a prediction, and trust that those explanations are valuable and correct <cit.>. One such proposed method for image classification is the prototypical part network (ProtoPNet), which classifies a given image based on its similarity to prototypical parts of training images, called prototypes <cit.>. This model aims to combine the power of deep learning with an intuitive reasoning module similar to humans.
While ProtoPNet aims to learn meaningful prototypical concepts, in practice, learned prototypes suffer from learning spurious concepts, such as the background of an image, from inconsistent concepts, such as learning both the head and the wing of a bird, and from duplicating concepts, such as having two prototypes that correspond to the same wing of the same bird <cit.>. Such problems are highly detrimental to the efficacy of these models, resulting in wasted computation at best and incorrect reasoning at worst. Various methods have been proposed to account for these issues <cit.>, but these methods either involve costly labelling procedures or fall short of providing a means of measuring prototype quality.
We seek to increase the performance of the learned prototypes by taking inspiration from recent advances in reinforcement learning with human feedback (RLHF) <cit.> and reward learning <cit.>. RLHF and reward learning have become popular approaches for aligning large language models with human preferences, partially due to the flexibility of learned rewards and feedback collection methods <cit.>. While prior work has incorporated human feedback into ProtoPNets <cit.>, no variation of ProtoPNet has incorporated a cheap and flexible reward learning fine-tuning framework.
Towards this end, we propose the reward reweighed, reselected, and retrained prototypical part network (R3-ProtoPNet), which seeks to improve the original ProtoPNet via fine-tuning with a learned reward model. With minimal human feedback data on the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset <cit.>, we are able to train a high-quality reward model that achieves 91.5% test accuracy when ranking human preferences, serving as a strong measure of prototype quality. R3-ProtoPNet is then able to improve the meaningfulness of prototypes, removing dependence on spurious features, and is able to slightly decrease inconsistency across images compared to the original ProtoPNet. When used as base learners in an ensemble, R3-ProtoPNet is able to outperform an ensemble of ProtoPNets on a held-out test dataset.
In summary, our contributions are as follows. Firstly, we demonstrate that a reward model trained on small amounts of human feedback data (roughly 300 ratings) can accurately rank human preference data. Secondly, due to the high performance of the reward model, we propose using the reward model as a measure of prototype quality. Thirdly, we introduce the R3-ProtoPNet, which uses reward-guided fine-tuning to improve prototype meaningfulness and ensemble performance.
§ RELATED WORK
§.§ Reinforcement Learning with Human Feedback
Since the success of InstructGPT <cit.>, Reinforcement Learning with Human Feedback (RLHF) has received a great deal of attention in the machine learning community. Although this success is recent, incorporating human feedback into reinforcement learning methods via a learned reward model has a deep history in reward learning <cit.>. While works taking inspiration from InstructGPT have used proximal policy optimization (PPO) to fine-tune networks with human feedback <cit.>, it is unclear to what extent formal reinforcement learning is necessary to improve models via learned reward functions <cit.>, or if the human feedback needs to follow a particular form <cit.>. Some prior work incorporates the reward function as a way to weigh the likelihood term <cit.>. Keeping this work in mind, we incorporate the reward model into ProtoPNet as a way to reweigh prototypes post-training.
§.§ Example-based Models and Prototypical Part Networks
The field of interpretable deep learning is vast, with a plethora of explainability and interpretability methods available to the user. For a more complete overview of interpretable deep learning, please refer to <cit.>. To ground the discussion, we focus primarily on example-based models, one such example being ProtoPNet. While ProtoPNet is our model of interest, other example-based methods exist, such as the non-parametric xDNN <cit.> or SITE, which performs predictions directly from interpretable prototypes <cit.>. While other example-based methods exist, we focus on the ProtoPNet due to its intuitive reasoning structure.
Since its introduction by <cit.>, ProtoPNets have received a great deal of attention, and various iterations have been developed. Work has explored extending the ProtoPNet to different architectures such as transformers (<cit.>), or sharing class information between prototypes (<cit.>). <cit.> increase the spatial flexibility of ProtoPNet, allowing prototypes to change spatial positions depending on the pose information available in the image. ProtoPNets and variations have seen success in high-stakes applications, such as kidney stone identification (<cit.>) and mammography (<cit.>).
Many works have commented on how the original ProtoPNet tends to overemphasize spurious features, and they have taken different approaches to solving this issue. <cit.> introduce an explainability interface to ProtoPNet, allowing users to see the dependence of the prototype on certain image attributes like hue and shape. The authors claim that seemingly dissimilar or spurious prototypes share certain difficult-to-perceive features, like texture or contrast. <cit.> introduce a variation of the ProtoPNet, IAIA-BL, which biases prototypes towards expert labelled annotations of classification-relevant parts of the image.
Similar to how we provide human feedback at the interpretation level, <cit.> introduce the ProtoPDebug, where a user labels a prototype and image pair as "forbidden" or "valid", and a fine-tuning step maximizes the distance between learned prototypes and patches in the forbidden set and minimizes the distance between learned prototypes and patches in the valid set. While also incorporating human feedback, <cit.> do not ground their method in RLHF, but instead includes the binary feedback as a supervised constraint into the ProtoPNet loss function. Learning a reward function via ratings allows us to simultaneously increase the interpretability of the prototypes, and develop an evaluation metric for the quality of a particular prototype. Compared to previous approaches, reward reweighing, reselection, and retraining allows for fast collection of high-quality human feedback data and the construction of a reward model that measures prototype quality while increasing the interpretability and the performance of the model.
§ PROTOTYPICAL PART NETWORK (PROTOPNET)
In this section, we describe the base architecture used in our method, the Prototypical Part Network (ProtoPNet) introduced in <cit.>. The ProtoPNet aims to introduce interpretability to otherwise uninterpretable image classifiers. In place of predicting from an arbitrary representation, the model makes a classification based on part attention and similar prototypical parts of an image. The general reasoning of a model is to classify an unseen image by finding training images with similar prototypical parts to those of the unseen image. This approach allows the user to interrogate the reasoning of the model, and clearly see which parts of the image led to the model's classification.
§.§ Description
Here we briefly describe the ProtoPNet, adopting the notation used in <cit.>. The ProtoPNet architecture builds on a base convolutional neural network f, which is then followed by a prototype layer denoted g_p, and a fully connected layer h. Typically, the convolutional features are taken from pretrained models like VGG-19, ResNet-34, or DenseNet-121.
The ProtoPNet injects interpretability into these convolutional architectures with the prototype layer g_p, consisting of m prototypes P = {p_j}^m_j=1 typically of size 1×1× D, where D is the depth of the convolutional output f(x). By keeping the depth the same as the output of the convolutional layer, but restricting the height and width to be smaller than those of the convolutional output, the learned prototypes select a patch of the convolutional output. Reversing the convolution recovers a prototypical patch of the original input image x. Using upsampling, the method constructs an activation pattern per prototype p_j.
To use the prototypes to make a classification given a convolutional output z=f(x), ProtoPNet's prototype layer computes a max pooling over similarity scores: g_p_j(z) = max_z̃∈patches(z)log((‖z̃ - p_j‖_2^2 + 1)/(‖z̃ - p_j‖_2^2 + ϵ)), for some small ϵ < 1. This function is monotonically decreasing with respect to the distance, with small values of ‖z̃ - p_j‖_2^2 resulting in a large similarity score g_p_j(z). Assigning m_k prototypes to each of the K classes, such that ∑_k=1^K m_k = m, the prototype layer outputs a vector of similarity scores that matches parts of the latent representation z to prototypical patches across all classes. The final layer in the model is a linear layer connecting similarities to class predictions.
In order to ensure that the prototypes match specific parts of training images, during training the prototype vectors are projected onto the closest patch in the training set. For the final trained ProtoPNet, every p_j corresponds to some patch of a particular image.
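As a concrete illustration of the prototype layer, a minimal PyTorch sketch of the similarity computation is given below; it follows the description above but is not the authors' implementation, and computing the squared distance via ‖z̃‖^2 - 2⟨z̃, p⟩ + ‖p‖^2 with a 1×1 convolution is an implementation choice of the sketch.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayer(nn.Module):
    def __init__(self, num_prototypes, depth, eps=1e-4):
        super().__init__()
        # one 1x1xD prototype vector per row
        self.prototypes = nn.Parameter(torch.rand(num_prototypes, depth, 1, 1))
        self.eps = eps

    def forward(self, z):
        # z: convolutional output f(x) of shape (batch, D, H, W)
        z_sq = (z ** 2).sum(dim=1, keepdim=True)                       # (B, 1, H, W)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        cross = F.conv2d(z, self.prototypes)                           # (B, m, H, W)
        dist = torch.clamp(z_sq - 2 * cross + p_sq, min=0.0)           # squared distances
        sim = torch.log((dist + 1) / (dist + self.eps))                # log activation
        # max pooling over all spatial patches: one similarity score per prototype
        return sim.flatten(2).max(dim=2).values                        # (B, m)

Using the convolution for the cross term lets every prototype be compared against every spatial patch in a single pass, which is what makes the max pooling over patches cheap.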
§.§ Limitations
While ProtoPNet is capable of providing interpretable classifications, the base training described in <cit.> results in prototypes that are inconsistent and represent spurious features of the image (<cit.>). Additionally, same-class prototypes will often converge to the same part of the image, resulting in duplicate prototypes.
<cit.> note that a prototype whose top L (usually L=5) closest training image patches come from different classes than the target class tends to be spurious and inconsistent, focusing on features like the background. To remedy this issue, they introduce a pruning operation, removing these prototypes entirely. While pruning does remove dependency on some subpar prototypes, we find that pruning still leaves some prototypes that rely on spurious and inconsistent features (Table <ref>) and does not improve accuracy. We also find that duplicate prototypes still occur after the pruning operation as well. We visualize subpar prototypes in Figure <ref>. For more examples of low-quality prototypes, please see the supplementary material.
§ HUMAN FEEDBACK AND THE REWARD REWEIGHED, RESELECTED, AND RETRAINED PROTOTYPICAL PART NETWORK (R3-PROTOPNET)
Inspired by the recent advances in reinforcement learning with human feedback (RLHF) <cit.>, the reward reweighed, reselected, and retrained prototypical part network (R3-ProtoPNet) utilizes a learned reward model to fine-tune prototypes. In place of pruning prototypes and sacrificing potential information, we demonstrate that incorporating human feedback into the training of the ProtoPNet improves prototype quality while increasing ensemble accuracy. In this section, we describe the collection of high-quality human feedback data, our reward model, and how we incorporate the reward model into the training loop via a three-stage training procedure.
§.§ Human Feedback Collection
A crucial aspect behind the success of RLHF methods is the collection of high quality human feedback data. Unclear or homogeneous feedback may result in a poorly performing reward model <cit.>. The design of human feedback collection is vitally important to the training of a useful reward model.
The inherent interpretability of ProtoPNet leads to a useful benefit for RLHF. Given a trained ProtoPNet, it is possible for a knowledgeable user to directly critique the learned prototypes. Given a particular classification task, a human with enough expertise should be able to recognize if a particular prototype is "good" or "bad" <cit.>. In the case of classifying birds in the CUB-200-2011 dataset, one of the original classification tasks used in <cit.>, it is clear that if a prototype gives too much weight to the background of the image (spurious), or if the prototype corresponds to different parts of the bird when looking at different images (inconsistency), the learned prototype is not meaningfully or interpretably contributing to prediction. Given these prototypes that fail to contribute to prediction, a knowledgeable human trying to classify birds would rate these prototypes as "bad".
There are many different ways to elicit this notion of "goodness" from a user <cit.>. Although it is possible to incorporate many different forms of feedback into the R3-ProtoPNet, such as asking a user to compare prototypes to elicit preferences or ask for a binary value of whether a prototype is "good" or "bad", we found most success with asking the user to rate a prototype on a scale from 1 to 5. While scalar ratings can be unstable across different raters, with a clear, rule-based rating method, rating variance is reduced and it is possible to generate high-quality labels. An example rating scale on the CUB-200-2011 dataset is provided in Figure <ref>.
§.§ Reward Learning
We note that, when a user provides feedback on a prototype, it is not the training image or the model prediction that the user is providing feedback on, but the prototype's resulting interpretation: the activation patterns. Our task is therefore different from RLHF applied to language modeling or RL tasks (<cit.>, <cit.>), where human feedback is provided on the model output or resulting state. We therefore collect a rating dataset 𝒟 = {(x_i, y_i, h_i,j, r_i,j)}_i=1,j=1^n,m, where x_i,y_i are the training image and label, h_i,j is prototype p_j's activation pattern on image x_i, and r_i,j is the user-provided rating of that activation pattern. We note that collecting preferences for this entire dataset is prohibitive and unnecessary, so we only collect a subset.
Given the dataset 𝒟, we generate the induced comparison dataset, whereby each entry in 𝒟 is paired with one another. Given i≠ i' and/or j≠ j', we populate a new paired dataset, 𝒟_paired, which consists of the entries of 𝒟 indexed by i,j,i',j', and a comparison c, which takes values -1, 0, 1. If the left-hand sample is greater, and therefore considered higher-quality, r_i,j > r_i',j', then c = -1. If the right-hand sample is greater r_i,j < r_i',j', then c = 1. We note that, during learning, we exclude entries with c=0 to increase the contrast between pairs. This synthetic construction allows us to model the reward function, r(x_i, h_i,j), via the Bradley-Terry Model for pairwise preferences <cit.>. We train this model with the same loss function as in <cit.>, a cross-entropy loss over the probabilities of ranking one pair over the other. This synthetic construction combinatorially increases the amount of preference data, allowing us to train a high-quality reward model on relatively small amounts of quality human feedback data.
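A minimal sketch of this pairwise objective is shown below, assuming the reward model outputs a scalar reward per image and activation pattern; the function and variable names are illustrative.

import torch
import torch.nn.functional as F

def bradley_terry_loss(r_left, r_right, c):
    # r_left, r_right: rewards r(x, h) for the two members of each pair, shape (batch,)
    # c: +1 if the right-hand activation map was rated higher, -1 otherwise (ties dropped)
    logits = r_right - r_left                 # Bradley-Terry preference logit
    target = (c == 1).float()                 # 1 when the right-hand member is preferred
    return F.binary_cross_entropy_with_logits(logits, target)

# example with dummy rewards: the first pair prefers the left member, the second the right
loss = bradley_terry_loss(torch.tensor([0.7, 0.2]),
                          torch.tensor([0.4, 0.9]),
                          torch.tensor([-1, 1]))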
§.§ Reward Reweighed, Reselected, and Retrained Prototypical Part Network (R3-ProtoPNet)
After having collected high-quality human feedback data and trained a reward model, we can now incorporate it into a fine-tuning framework to improve the interpretability of ProtoPNet. We incorporate the reward model via a three step process consisting of reward weighting, reselection, and retraining. Each step is described in more detail below.
§.§.§ Reward Reweighing
Although PPO is a popular option for RLHF (<cit.>), there is evidence that simpler fine-tuning algorithms can lead to similar performance increases (<cit.>). Inspired by the success and the ease of implementation of reward-weighted learning <cit.>, we develop a reward-weighted update for the ProtoPNet:
max_p_jℒ_reweigh(z_i^*, p_j) = max_p_j∑_i ∈ I(p_j)r(x_i, p_j)/(‖z_i^* - p_j‖_2^2/λ_dist + 1)
where z_i^* = argmin_z∈patches(f(x_i))‖z - p_j‖_2^2, I(p_j) = {i | y_i∈class(p_j)}, and λ_dist is a fixed hyperparameter. We note that the objective ℒ_reweigh is a sum of inverse distances weighted by the reward of the prototype on each image. Since we only update the prototype p_j, the only way to maximize the objective is to minimize the distance between the prototype and image patches with high reward r(x_i, p_j). This causes the prototype to resemble high-reward image patches, improving the overall quality of the prototypes. Wanting to preserve prototypes that already have high reward, we only update those prototypes whose mean reward falls below γ = 0.45. λ_dist is included in the objective to rescale distances, since the closest distances are near zero. We find best performance with λ_dist = 100.
Practically, we find that optimizing this objective leads to locally maximal solutions, resulting in local updates that rarely modify prototypes with quality values of 1 but are more likely to improve prototypes with quality values of 2 or higher. If the prototype p_j has high activation over the background of an image x_i, for example, the closest patches z_i^* in the training data will also be background patches, and the reward of the prototype will be low, leaving minimal room for change. It is not possible for this update to dramatically change the location of the patch in the image via this objective alone.
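A sketch of this update for a single prototype is given below: gradient ascent on ℒ_reweigh with the backbone frozen. Treating the rewards r(x_i, p_j) as fixed during the inner optimization and the use of Adam are assumptions of the sketch.

import torch

def reweigh_prototype(prototype, z_closest, rewards, lam_dist=100.0, lr=1e-3, steps=50):
    # prototype: tensor of shape (D, 1, 1); z_closest: closest patches z_i^* of shape (n, D, 1, 1)
    # rewards: precomputed r(x_i, p_j) of shape (n,), held fixed during the update
    p = prototype.clone().requires_grad_(True)
    opt = torch.optim.Adam([p], lr=lr)
    for _ in range(steps):
        dist = ((z_closest - p) ** 2).flatten(1).sum(dim=1)      # ||z_i^* - p_j||_2^2
        objective = (rewards / (dist / lam_dist + 1.0)).sum()    # L_reweigh
        loss = -objective                                        # ascend the objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return p.detach()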
§.§.§ Prototype Reselection
In order to improve low-quality prototypes that require significant manipulation, we introduce a reselection procedure based on a reward threshold. Given a prototype p_j, if 1/n_k∑_i∈ I(p_j)r(x_i, p_j) < α, where α is a pre-determined threshold and n_k is the number of training images in class k, we reselect the prototype. The reselection process involves iterating over patch candidates z'_i and temporarily setting the prototype p'_j = z'_i, where z'_i is chosen randomly from the patches of a randomly selected image x'_i in the class of p_j. If 1/n_k∑_i∈ I(p_j)r(x_i, p'_j) > β, where β is an acceptance threshold, and if no existing prototype matches the patch z'_i, then we accept the patch candidate as the new prototype. We found that α = 0.15 and β = 0.50 led to good performance. We refer to the combination of reweighting and reselection as the R2 update step, and to the corresponding trained model as the R2-ProtoPNet.
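A sketch of the reselection step is given below; patches_of and reward_of are assumed helpers that return the candidate latent patches of an image and the reward of an (image, prototype) pair, and the identity check stands in for whatever duplicate test is used in practice.

import random

def reselect_prototype(prototype, class_images, patches_of, reward_of,
                       existing_prototypes, alpha=0.15, beta=0.50,
                       max_tries=1000, seed=0):
    rng = random.Random(seed)
    mean_reward = sum(reward_of(x, prototype) for x in class_images) / len(class_images)
    if mean_reward >= alpha:
        return prototype                      # only low-reward prototypes are reselected
    for _ in range(max_tries):
        x = rng.choice(class_images)          # random image from the prototype's class
        candidate = rng.choice(patches_of(x)) # random latent patch of that image
        if any(candidate is p for p in existing_prototypes):
            continue                          # skip patches already used by another prototype
        cand_reward = sum(reward_of(x2, candidate) for x2 in class_images) / len(class_images)
        if cand_reward > beta:
            return candidate                  # accept the candidate as the new prototype
    return prototype                          # keep the old prototype if nothing is accepted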
The reasoning process behind our prototype reselection method takes inspiration from the original push operation in <cit.>. Similar to how ProtoPNet projects prototypes onto a specific training image patch, here we reselect prototypes to be a particular reward-filtered training image patch. With a high enough acceptance threshold β, this forces the elimination of low reward prototypes while preserving the information gain of having an additional prototype.
One possible alternative approach is to instead search over the training patches, and select those patches with the highest reward. We found that randomly selecting patches, in place of searching for patches with the highest reward, led to higher prototype diversity and less computation time. As discussed in Section <ref>, it is possible that a reward model that more explicitly accounts for prototype diversity could alleviate the duplicate issue, but we leave this to future work.
While we do not use a traditional reinforcement learning algorithm to fine-tune our model as is typically done in RLHF <cit.>, pairing the reselection and fine-tuning steps together resembles the typical explore-exploit trade-off in RL problems. We see that fine-tuning with our reward model leads to exploit behavior, improving upon already high-quality prototypes. At the same time, the reselection step serves as a form of exploration, drastically increasing the quality of uninformative prototypes. We find that these similarities are enough to improve the quality of ProtoPNet, as discussed in the next section.
§.§.§ Retraining
A critical step missing in the R2 update is a connection to prediction accuracy. As discussed in Section <ref>, without incorporating predictive information, performing the reward update alone results in lowered test accuracy. Since the above updates only act on the prototypes themselves, not the rest of the network, the result is a misalignment between the prototypes and the model's base features and final predictive layer. The reward update guides the model towards more interpretable prototypes, but the reward update alone fails to use the higher quality prototypes for better prediction.
To account for the lack of predictive performance, the final step of R3-ProtoPNet is retraining. Simply retraining with the same loss function used in the original ProtoPNet update results in the realignment of the prototypes and the rest of the model. Although one could worry that predictive accuracy would reduce the interpretability of the model <cit.>, we find that retraining increases predictive accuracy while maintaining the quality increases of the R2 update. The result is a high accuracy model with higher-quality prototypes. We explore evidence of this phenomenon and why this is the case in the following section.
§ EXPERIMENTS
Here we discuss the results of training the R3-ProtoPNet on the CUB-200-2011 dataset, the same dataset as used in <cit.>. We demonstrate that the R3-ProtoPNet leads for higher quality prototypes across base model architectures and prototype configurations while not sacrificing predictive performance.
§.§ Datasets
R3-ProtoPNet requires two datasets: the original dataset for initial training, and the dataset of scalar ratings of activation patterns. Combined, this results in the dataset described in Section <ref>. To offer better comparison against the original ProtoPNet, we use the same dataset for initial training that was used in <cit.>, the CUB-200-2011 dataset <cit.>. The CUB-200-2011 dataset consists of roughly 30 images for each of 200 different bird species. We employ the same data augmentation scheme used in <cit.>, which adds additional training data by applying a collection of rotation, shear, and skew perturbations to the images, resulting in a larger augmented dataset.
For the collection of the activation pattern ratings, we only provide activation patterns overlaid on the original images to the rater. Although it is possible to crowdsource the collection of human preference data, we found that it was possible to increase the performance of ProtoPNet with relatively small amounts human preference data that we ourselves collected. We rated a total of 700 prototype-image pairs according to the scale approach described in Figure <ref>, which we justify in the next subsection.
§.§ Architectures and Training
Similar to <cit.>, we study the performance of R3-ProtoPNet across three different base architectures: VGG-19, ResNet-34, and DenseNet-121. While the original ProtoPNet sets the number of prototypes per class at m_k = 10, we additionally run the VGG19 architecture with m_k=5 prototypes to explore model performance when the number of prototypes is limited. No other modifications were made to the original ProtoPNet architecture. We train for 100 epochs and report results for the best performing model.
The reward model r(x_i, h_i) is similar to the base architecture of the ProtoPNet. Two ResNet-50 base architectures take in the input image x_i and the associated activation pattern h_i separately, and both have two additional convolutional layers. The outputs of the convolutional layers are concatenated and fed into a final linear layer with sigmoid activation to predict the Bradley-Terry ranking. Predicted rewards are therefore bound in the range (0, 1). We train the reward model for 5 epochs on a comparison dataset of 71,875 paired images and preference labels, and evaluate on 13,831 testing pairs. The reward model achieves 91.54% test accuracy when trained on the whole dataset, and we additionally find that the reward model converges to roughly 91% test accuracy on a comparison dataset generated from at least 300 rated activation patterns.
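A sketch of this two-tower architecture is given below; the widths of the two extra convolutional layers, the global average pooling, and the replication of the single-channel heatmap to three channels are assumptions beyond what is stated above.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class RewardModel(nn.Module):
    def __init__(self):
        super().__init__()
        # two ResNet-50 backbones, one for the image and one for the activation heatmap
        self.image_tower = nn.Sequential(*list(resnet50(weights=None).children())[:-2])
        self.heat_tower = nn.Sequential(*list(resnet50(weights=None).children())[:-2])
        # two additional convolutional layers per tower (widths are assumptions)
        self.image_conv = nn.Sequential(nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(256, 64, 3, padding=1), nn.ReLU())
        self.heat_conv = nn.Sequential(nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(256, 64, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(128, 1)          # linear head over the concatenated features

    def forward(self, image, heatmap):
        # heatmap assumed to be (B, 1, H, W); replicate to three channels for the ResNet stem
        a = self.image_conv(self.image_tower(image)).mean(dim=(2, 3))
        b = self.heat_conv(self.heat_tower(heatmap.repeat(1, 3, 1, 1))).mean(dim=(2, 3))
        return torch.sigmoid(self.head(torch.cat([a, b], dim=1))).squeeze(-1)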
§.§ Evaluation Metrics
To evaluate the performance of R3-ProtoPNet, we compare it to ProtoPNet using three metrics: test accuracy, reward, and prototype class mismatch. We use test accuracy to measure the predictive performance of the models. As the above section demonstrates, the learned reward model achieves high accuracy in predicting which prototype ranks above another in accordance with human preferences, so we therefore use it as a measure of prototype quality. Regarding the class mismatch metric, <cit.> note that low-quality prototypes tend to have close training images that come from different classes. To evaluate the effect of R3 updating, we compute the average class mismatch across all prototypes for a given model for the Top-5 and Top-10 closest training images.
§.§ Results
After training ProtoPNet, running the R2 update step, and then performing retraining, we see several trends across multiple base architectures. In Table <ref>, we report the test accuracy of the different base architectures across stages of R3-ProtoPNet training. Generally, the test accuracy from ProtoPNet substantially decreases after applying the R2 update, but retraining tends to recover most of the predictive loss. This accuracy maintenance demonstrates that it is possible to align prototypes with human preferences without sacrificing predictive power.
In Table <ref>, we report the average reward of all prototypes on all test images for a given base architecture. We see that ProtoPNet achieves an average reward between 0.48 and 0.57 across architectures. Investigating the distribution of rewards further in Figure <ref> reveals that ProtoPNet tends to produce a bimodal distribution over prototype rewards, with modes at both low-quality and high-quality prototypes. Applying the R2 update results in the desired behavior, increasing the average reward and shifting the distribution of rewards upwards. We additionally see that the retraining step in R3-ProtoPNet continues to increase the average reward across all base architectures while slightly increasing the spread of the reward distribution.
Finally, we report the Top-5 and Top-10 class mismatch in Table <ref>. Here we see an interesting phenomenon. Across all base architectures, ProtoPNet has an average class mismatch of at least half of the Top-L closest image patches, for both L=5,10. Although performing the R2 update greatly increases the average reward for all base architectures except ResNet-34, we see that class mismatch is only marginally reduced, with all base architectures still producing mismatches for over half of the closest Top-L training image patches. We see that R3-ProtoPNet greatly reduces class mismatch for the m_k=5 VGG-19 base architecture, but tends to only marginally reduce class mismatch in the m_k=10 case.
§.§ Discussion
Given the results, we see that R3-ProtoPNet manages to increase the quality of learned prototypes without sacrificing predictive performance. While the ResNet-34 and DenseNet-121 base architectures do see a slight performance decrease, producing an ensemble of trained R3-ProtoPNets results in an accuracy increase over an ensemble of the original trained ProtoPNets. We see that R3-ProtoPNet results in a substantial increase of the average test reward, verifying that prototype quality is increasing. There is still much room for improvement, as class mismatch for 10 prototypes does not decrease across all architectures, while there is some class mismatch decrease for the 5 prototype VGG-19-based ProtoPNet. Overall, these results demonstrate that incorporating reward information into the ProtoPNet via reweighing, reselection, and retraining does increase interpretability of ProtoPNets, and, when incorporated into an ensemble, increases predictive performance.
§ LIMITATIONS AND FUTURE WORK
While R3-ProtoPNet improves interpretability and predictiveness in an ensemble, there is plenty of room for improvement. We note that the reward model is trained on ratings of a single image and heatmap, highly constrained to measuring overlap between the prototype and the object of interest, but it is quite possible to extend ratings to multiple images and heatmaps. This would allow the reward model to better learn cross-image preferences, such as consistency. We hope that this could alleviate the duplicate issue as well: R3-ProtoPNet fails to entirely eliminate duplicates, with several high-reward prototypes converging to the same part of the image.
While this work investigated increasing the performance of ProtoPNet, it is possible to extend the R3 update to other extensions of the ProtoPNet. A major benefit of reward fine-tuning is its flexibility in application, and we expect that combining the R3 update with other variations of the ProtoPNet would result in further increased performance gains. Combining multiple feedback modalities, such as the binary feedback used in ProtoPDebug <cit.>, could further increase model performance.
A final limitation of R3-ProtoPNet and other methods that rely on human feedback is that the model itself might be learning features that, while seemingly confusing to a human, are helpful and meaningful for prediction. <cit.> argue that the ProtoPNet can base its predictions on non-obvious features like texture and contrast, which might be penalized via a learned reward function. Future work is necessary to investigate how ProtoPNet variants could critique human feedback and argue against a learned reward function.
§ CONCLUSION
In this work, we propose the R3-ProtoPNet, a method that uses a learned reward model of human feedback to improve the meaningfulness of learned prototypical parts. We find that ensembling multiple R3-ProtoPNets results in increased performance over original ProtoPNet ensembles. Considering the high performance of the reward model, we use it as a measure of prototype quality, allowing us to critique the interpretability of ProtoPNet through a human lens. The ability of reward learning to quantify qualitative human preferences makes reward-based fine-tuning a promising direction for the improvement of interpretable deep models.
|
http://arxiv.org/abs/2307.05701v1 | 20230711180802 | Computing Subset Vertex Covers in $H$-Free Graphs | [
"Nick Brettell",
"Jelle J. Oostveen",
"Sukanya Pandey",
"Daniël Paulusma",
"Erik Jan van Leeuwen"
] | math.CO | [
"math.CO",
"cs.CC",
"cs.DM",
"cs.DS"
] |
We consider a natural generalization of Vertex Cover: the Subset Vertex Cover problem, which is to decide for a graph G=(V,E), a subset T⊆ V and integer k, if V has a subset S of size at most k, such that S contains at least one end-vertex of every edge incident to a vertex of T. A graph is H-free if it does not contain H as an induced subgraph.
We solve two open problems from the literature by proving
that Subset Vertex Cover is NP-complete on subcubic (claw,diamond)-free planar graphs and on 2-unipolar graphs, a subclass of 2P_3-free weakly chordal graphs. Our results show for the first time that Subset Vertex Cover is computationally harder than Vertex Cover (under P ≠ NP).
We also prove new polynomial time results. We first give a dichotomy on graphs where G[T] is H-free. Namely, we show that
Subset Vertex Cover is polynomial-time solvable on graphs G, for which G[T] is H-free, if H=sP_1+tP_2 and NP-complete otherwise. Moreover, we prove that Subset Vertex Cover is polynomial-time solvable for (sP_1+P_2+P_3)-free graphs and bounded mim-width graphs. By combining our new results with known results we obtain a partial complexity classification for Subset Vertex Cover on H-free graphs.
§ INTRODUCTION
We consider a natural generalization of the classical Vertex Cover problem: the Subset Vertex Cover problem, introduced in <cit.>.
Let G=(V,E) be a graph and T be a subset of V.
A set S⊆ V is a T-vertex cover of G if S contains at least one end-vertex of every edge incident to a vertex of T. We note that T itself is a T-vertex cover. However, a graph may have much smaller T-vertex covers. For example, if G is a star whose leaves form T, then the center of G forms a T-vertex cover. We can now define the problem; see also Fig. <ref>.
Subset Vertex Cover
Instance: A graph G=(V,E), a subset T⊆ V, and a positive integer k.
Question: Does G have a T-vertex cover S_T with |S_T|≤ k?
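For concreteness, the following brute-force sketch spells out the definition: it checks whether a set S covers every edge incident to T and, for very small graphs, finds a smallest T-vertex cover by exhaustive enumeration. It is intended only to illustrate the definition and is not one of the algorithms studied in this paper.

from itertools import combinations

def is_T_vertex_cover(edges, T, S):
    """S is a T-vertex cover iff every edge with an end-vertex in T meets S."""
    T, S = set(T), set(S)
    return all(u in S or v in S for (u, v) in edges if u in T or v in T)

def smallest_T_vertex_cover(vertices, edges, T):
    """Exhaustive search; only feasible for very small graphs."""
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            if is_T_vertex_cover(edges, T, S):
                return set(S)

# Star with centre c and leaves forming T: the centre alone is a T-vertex cover.
V = ['c', 'l1', 'l2', 'l3']
E = [('c', 'l1'), ('c', 'l2'), ('c', 'l3')]
print(smallest_T_vertex_cover(V, E, T={'l1', 'l2', 'l3'}))  # {'c'}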
If we set T=V, then we obtain the Vertex Cover problem. Hence, as Vertex Cover is NP-complete, so is Subset Vertex Cover.
To obtain a better understanding of the complexity of an -complete graph problem, we may restrict the input to some special graph class. In particular, hereditary graph classes, which are the classes closed under vertex deletion, have been studied intensively for this purpose. It is readily seen that a graph class G is hereditary if and only if G is characterized by a unique minimal set of forbidden induced subgraphs F_G. Hence, for a systematic study, it is common to first consider the case where F_ G has size 1. This is also the approach we follow in this paper.
So, for a graph H, we set F_ G={H} for some graph H and consider the class of H-free graphs (graphs that do not contain H as an induced subgraph). We now consider the following research question:
For which graphs H is Subset Vertex Cover, restricted to H-free graphs, still NP-complete and for which graphs H does it become polynomial-time solvable?
We will also address two open problems posed in <cit.> (see Section <ref> for any undefined terminology):
Q1. What is the complexity of Subset Vertex Cover for claw-free graphs?
Q2. Is Subset Vertex Cover NP-complete for P_t-free graphs for some t?
The first question is of interest, as Vertex Cover is polynomial-time solvable even on rK_1,3-free graphs for every r≥ 1 <cit.>, where rK_1,3 is the disjoint union of r claws (previously this was known for rP_3-free graphs <cit.> and 2P_3-free graphs <cit.>).
The second question is of interest due to some recent quasi-polynomial-time results. Namely, Gartland and Lokshtanov <cit.> proved that for every integer t, Vertex Cover can be solved in n^O(log^3n)-time for P_t-free graphs. Afterwards,
Pilipczuk, Pilipczuk and Rzążewski <cit.> improved the running time to n^O(log^2n) time.
Even more recently, Gartland et al. <cit.> extended the results of <cit.> from P_t-free graphs to H-free graphs where every connected component of H is a path or a subdivided claw.
Grötschel, Lovász, and Schrijver <cit.> proved that Vertex Cover can be solved in polynomial time for the class of perfect graphs. The class of perfect graphs is a rich graph class, which includes well-known graph classes, such as bipartite graphs and (weakly) chordal graphs.
Before we present our results, we first briefly discuss the relevant literature.
§.§ Existing Results and Related Work
Whenever Vertex Cover is NP-complete for some graph class G, then so is the more general problem Subset Vertex Cover. Moreover, Subset Vertex Cover can be polynomially reduced to Vertex Cover: given an instance (G,T,k) of the former problem, remove all edges not incident to a vertex of T to obtain an instance (G',k) of the latter problem. Hence, we obtain:
The problems Vertex Cover and Subset Vertex Cover are polynomially equivalent for every graph class closed under edge deletion.
For example, the class of bipartite graphs is closed under edge deletion and Vertex Cover is polynomial-time solvable on bipartite graphs. Hence,
by Proposition <ref>, Subset Vertex Cover is polynomial-time solvable on bipartite graphs.
However, a class of H-free graphs is only closed under edge deletion if H is a complete graph, and Vertex Cover is NP-complete even for triangle-free graphs <cit.>. This means that there could still exist graphs H such that Vertex Cover and Subset Vertex Cover behave differently if the former problem is (quasi)polynomial-time solvable on H-free graphs.
The following well-known result of Alekseev <cit.> restricts the structure of such graphs H.
For every graph H that contains a cycle or a connected component with two vertices of degree at least 3, Vertex Cover, and thus Subset Vertex Cover, is NP-complete for H-free graphs.
Due to Theorem <ref> and the aforementioned result of Gartland et al. <cit.>, every graph H is now either classified as a quasi-polynomial case or an NP-hard case for Vertex Cover. For Subset Vertex Cover the situation is much less clear. So far, only one positive result is known, which is due to Brettell et al. <cit.>.
For every s≥ 0, Subset Vertex Cover is polynomial-time solvable on (sP_1+P_4)-free graphs.
Subset variants of classic graph problems are widely studied, also in the context of H-free graphs. Indeed,
Brettell et al. <cit.> needed Theorem <ref> as an auxiliary result in complexity studies for Subset Feedback Vertex Set and Subset Odd Cycle Transversal restricted to H-free graphs. The first problem is to decide for a graph G=(V,E), subset T⊆ V and integer k, if G has a set S of size at most k such that S contains a vertex of every cycle that intersects T. The second problem is similar but replaces “cycle” by “cycle of odd length”. Brettell et al. <cit.> proved that both these subset transversal problems are polynomial-time solvable on (sP_1+P_3)-free graphs for every s≥ 0.
They also showed that Odd Cycle Transversal is polynomial-time solvable for P_4-free graphs and NP-complete for split graphs, which form a subclass of 2P_2-free graphs, whereas NP-completeness for Subset Feedback Vertex Set on split graphs was shown by Fomin et al. <cit.>. Recently, Paesani et al. <cit.> extended the result of <cit.> for Subset Feedback Vertex Set from (sP_1+P_3)-free graphs to (sP_1+P_4)-free graphs for every integer s≥ 0. If H contains a cycle or claw, NP-completeness for both subset transversal problems follows from corresponding results for Feedback Vertex Set <cit.> and Odd Cycle Transversal <cit.>.
Combining all the above results leads to
a dichotomy for Subset Feedback Vertex Set and a partial classification for Subset Odd Cycle Transversal
(see also <cit.>).
Here, we write F ⊆_i G if F is an induced subgraph of G.
For a graph H, Subset Feedback Vertex Set on H-free graphs is polynomial-time solvable if
H ⊆_i sP_1+P_4 for some s≥ 0, and NP-complete otherwise.
For a graph H≠ sP_1+P_4 for some s≥ 1, Subset Odd Cycle Transversal on H-free graphs is polynomial-time solvable if
H=P_4 or H ⊆_i sP_1+P_3 for some s≥ 0, and NP-complete otherwise.
§.§ Our Results
In Section <ref> we prove two new hardness results, using the same basis reduction, which may have a wider applicability. We first prove that Subset Vertex Cover is NP-complete for subcubic planar line graphs of triangle-free graphs, or equivalently, subcubic planar (claw, diamond)-free graphs. This answers Q1 in the negative. We then prove that Subset Vertex Cover is NP-complete for 2-unipolar graphs and thus for 2P_3-free graphs. Hence, Subset Vertex Cover is NP-complete for P_7-free graphs, and we have answered Q2 for t=7.
Our hardness results show a sharp contrast with Vertex Cover, which can be solved in polynomial time for both weakly chordal graphs <cit.> and rK_1,3-free graphs for every r≥ 1 <cit.>. Hence, Subset Vertex Cover may be harder than Vertex Cover for a graph class closed under vertex deletion (if P ≠ NP).
This is in contrast to graph classes closed under edge deletion (see Proposition <ref>).
In Section <ref> we also prove that Subset Vertex Cover is NP-complete for inputs (G,T,k) if the subgraph G[T] of G induced by T is P_3-free. On the other hand, our first positive result, presented in Section <ref>, shows that the problem is polynomial-time solvable if G[T] is sP_2-free for any s≥ 2.
In Section <ref> we also prove that
Subset Vertex Cover can be solved in polynomial time for (sP_1+P_2+P_3)-free graphs for every s≥ 1. Our positive results generalize known results for Vertex Cover. The first result also implies that Subset Vertex Cover is polynomial-time solvable for split graphs, contrasting our NP-completeness result for 2-unipolar graphs, which are generalized split, 2P_3-free, and weakly chordal.
Combining our new results with Theorem <ref> gives us a partial classification and a dichotomy, both of which are proven in Section <ref>.
For a graph H≠ rP_1+sP_2+P_3 for any r≥ 0, s≥ 2; rP_1+sP_2+P_4 for any r≥ 0, s≥ 1; or rP_1+sP_2+P_t for any r≥ 0, s≥ 0, t∈{5,6},
Subset Vertex Cover on H-free graphs is polynomial-time solvable if
H ⊆_i sP_1+P_2+P_3, sP_2, or sP_1+P_4 for some s≥ 1, and NP-complete otherwise.
For a graph H, Subset Vertex Cover on instances (G,T,k), where G[T] is H-free, is polynomial-time solvable if H ⊆_i sP_2 for some s≥ 1, and NP-complete otherwise.
Theorems <ref>–<ref> show that Subset Vertex Cover on H-free graphs can be solved in polynomial time for infinitely more graphs H than Subset Feedback Vertex Set and Subset Odd Cycle Transversal. This is in line with the behaviour of the corresponding original (non-subset) problems.
In Section <ref> we discuss our final new result, which states that Subset Vertex Cover is polynomial-time solvable on every graph class of bounded mim-width, such as the class of circular-arc graphs.
In Section <ref> we discuss some directions for future work, which naturally originate from the above results.
§ PRELIMINARIES
Let G=(V,E) be a graph. The degree of a vertex u∈ V is the size of its neighbourhood N(u)={v | uv∈ E}. We say that G is subcubic if every vertex of G has degree at most 3. An independent set I in G is maximal if there exists no independent set I' in G with I⊊ I'. Similarly, a vertex cover S of G is minimal if there is no vertex cover S' in G with S'⊊ S.
For a graph H we write H ⊆_i G if H is an induced subgraph of G, that is, G can be modified into H by a sequence of vertex deletions. If G does not contain H as an induced subgraph, G is H-free. For a set of graphs ℋ, G is ℋ-free if G is H-free for every H∈ ℋ. If ℋ={H_1,…,H_p} for some p≥ 1, we also write that G is (H_1,…,H_p)-free.
The line graph of a graph G=(V,E) is the graph L(G) that has vertex set E and an edge between two vertices e and f if and only if e and f share a common end-vertex in G. The complement G̅ of a graph G=(V,E) has vertex set V and an edge between two vertices u and v if and only if uv∉ E.
For two vertex-disjoint graphs F and G, the disjoint union F+G is the graph (V(F)∪ V(G), E(F)∪ E(G)). We denote the disjoint union of s copies of the same graph G by sG.
A linear forest is a disjoint union of one or more paths.
Let C_s be the cycle on s vertices; P_t the path on t vertices; K_r the complete graph on r vertices; and K_1,r the star on (r+1) vertices. The graph C_3=K_3 is the triangle; the graph K_1,3 is the claw; and the complement of 2P_1+P_2 is the diamond (so the diamond is obtained from the K_4 after deleting one edge).
The subdivision of an edge uv replaces uv with a new vertex w and edges uw, wv.
A subdivided claw is obtained from the claw by subdividing each edge zero or more times.
A graph is chordal if it has no induced C_s for any s≥ 4. A graph is
weakly chordal if it has no induced C_s and no induced C̅_s for any s≥ 5.
A cycle C_s or an anti-cycle C̅_s is odd if it has an odd number of vertices.
By the Strong Perfect Graph Theorem <cit.>, a graph is perfect if it has no odd induced C_s and no odd induced C̅_s for any s≥ 5. Every chordal graph is weakly chordal, and every weakly chordal graph is perfect.
A graph G=(V,E) is unipolar if V can be partitioned into two sets V_1 and V_2, where G[V_1] is a complete graph and G[V_2] is a disjoint union of complete graphs. If every connected component of G[V_2] has size at most 2, then G is 2-unipolar.
Unipolar graphs form a subclass of generalized split graphs, which are the graphs that are unipolar or their complement is unipolar.
It can also readily be checked that every 2-unipolar graph is weakly chordal
(but not necessarily chordal, as evidenced by G=C_4).
§ NP-HARDNESS RESULTS
In this section we prove our hardness results for Subset Vertex Cover, using the following notation.
Let G be a graph with an independent set I. We say that we augment G by adding a (possibly empty) set F of edges between some pairs of vertices of I. We call the resulting graph an I-augmentation of G.
The following lemma forms the basis for our hardness gadgets.
Every vertex cover of a graph G=(V,E) with an independent set I is a (V∖ I)-vertex cover of every I-augmentation of G, and vice versa.
Let G' be an I-augmentation of G. Consider a vertex cover S of G. For a contradiction, assume that S is not a (V∖ I)-vertex cover of G'.
Then G'-S must contain an edge uv with at least one of u,v belonging to V∖ I. As G-S is an independent set, uv belongs to E(G')∖ E(G) implying that both u and v belong to I, a contradiction.
Now consider a (V∖ I)-vertex cover S' of G'. For a contradiction, assume that S' is not a vertex cover of G. Then G-S' must contain an edge uv (so uv∈ E). As G' is a supergraph of G, we find that G'-S' also contains the edge uv. As S' is a (V∖ I)-vertex cover of G', both u and v must belong to I. As uv∈ E, this contradicts the fact that I is an independent set.
To use Lemma <ref> we need one other lemma, due to Poljak <cit.>. A graph G' is a 2-subdivision of a graph G if G' can be obtained from G by subdividing every edge of G twice, that is, by replacing each edge uv∈ E(G) with a path u w_uv w_vu v of length 3.
A graph G with m edges has an independent set of size k if and only if the 2-subdivision of G has an independent set of size k+m.
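The 2-subdivision operation used in the lemma can be sketched as follows (networkx is used purely for illustration): every edge uv is replaced by the path u, w_uv, w_vu, v, so a graph with n vertices and m edges becomes one with n+2m vertices and 3m edges.

import networkx as nx

def two_subdivision(G):
    """Return the 2-subdivision of G: every edge uv becomes the path u-w_uv-w_vu-v."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v in G.edges():
        w_uv, w_vu = ('w', u, v), ('w', v, u)   # the two new subdivision vertices of uv
        H.add_edges_from([(u, w_uv), (w_uv, w_vu), (w_vu, v)])
    return H

G = nx.complete_graph(4)        # K_4 has 4 vertices, 6 edges and independence number 1
H = two_subdivision(G)          # by the lemma, its independence number is 1 + 6 = 7
print(H.number_of_nodes(), H.number_of_edges())   # 16 18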
We are now ready to prove our first two hardness results. Recall that a graph is (claw, diamond)-free if and only if it is the line graph of a triangle-free graph. Hence, the result in particular implies NP-hardness of Subset Vertex Cover for line graphs.
Recall also that the claw is the graph K_1,3 and that the diamond is the complement of 2P_1+P_2.
Subset Vertex Cover is NP-complete for (claw, diamond)-free subcubic planar graphs.
We reduce from Vertex Cover, which is NP-complete even for cubic planar graphs <cit.>, and thus for subcubic planar graphs that are 4-subdivisions of cubic planar graphs, due to two applications of Lemma <ref> (note that subdividing an edge preserves planarity and does not increase the maximum degree).
So, let G=(V,E) be a subcubic planar graph that is a 4-subdivision of some graph G^*, and let k be an integer.
In G, we let U=V(G^*) and W be the subset of V(G)∖ V(G^*) that consists of all neighbours of vertices of U. Note that W is an independent set in G. We construct a W-augmentation G' as follows; see also Figure <ref>. For every vertex u∈ U of degree 3 in G, we pick two arbitrary neighbours of u (which both belong to W) and add an edge between them. It is readily seen that G' is (claw, diamond)-free, planar and subcubic. It remains to apply Lemma <ref>.
Subset Vertex Cover is NP-complete for instances (G,T,k), for which G is 2-unipolar and G[T] is a disjoint union of edges.
We reduce from Vertex Cover. Let G=(V,E) be a graph and k be an integer. By Lemma <ref>, we may assume that G is a 2-subdivision of a graph G^*. In G, we let U=V(G^*), and we let W=V(G)∖ V(G^*). Note that U is an independent set in G. We construct a U-augmentation G' by changing U into a clique; see also Figure <ref>. It is readily seen that G' is 2-unipolar. We set T:=W, so G[T] is a disjoint union of edges. It remains to apply Lemma <ref>.
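The construction in the proof above can be prototyped in a few lines: starting from the 2-subdivision G of G^*, turn U=V(G^*) into a clique and take T to be the set W of subdivision vertices; by the augmentation lemma, vertex covers of G correspond exactly to W-vertex covers of the augmented graph of the same size. The helper two_subdivision below repeats the construction from the previous sketch, and the example input is only a toy graph.

import networkx as nx
from itertools import combinations

def two_subdivision(G):
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v in G.edges():
        a, b = ('w', u, v), ('w', v, u)
        H.add_edges_from([(u, a), (a, b), (b, v)])
    return H

def unipolar_instance(G_star):
    """Build the Subset Vertex Cover instance (G', T) used in the proof above."""
    G = two_subdivision(G_star)
    U = set(G_star.nodes())                     # original vertices, independent in G
    W = set(G.nodes()) - U                      # subdivision vertices
    G_prime = G.copy()
    G_prime.add_edges_from(combinations(U, 2))  # U-augmentation: turn U into a clique
    return G_prime, W                           # G' is 2-unipolar and T = W

G_prime, T = unipolar_instance(nx.cycle_graph(3))
print(G_prime.number_of_nodes(), len(T))        # 9 vertices, |T| = 6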
It can be readily checked that 2-unipolar graphs are (2C_3,C_5,C_6,C_3+P_3,2P_3,P_6,C̅_6)-free graphs, and thus are 2P_3-free weakly chordal.
§ POLYNOMIAL-TIME RESULTS
In this section, we prove our polynomial-time results for instances (G,T,k) where either G is H-free or only G[T] is H-free. The latter type of results are stronger, but only hold for graphs H with smaller connected components.
We start with the case where H=sP_2 for some s≥ 1.
For this case we need the following two well-known results. The delay is the maximum of the time taken before the first output and that between any pair of consecutive outputs.
For every constant s≥ 1, the number of maximal independent sets of an sP_2-free graph on n vertices is at most n^2s+1.
For every constant s≥ 1, it is possible to enumerate all maximal independent sets of a graph G on n vertices and m edges with a delay of O(nm).
We now prove that Subset Vertex Cover is polynomial-time solvable for instances (G,T,k), where G[T] is sP_2-free. The idea behind the algorithm is to remove any edges between vertices in V∖ T, as these edges are irrelevant. As a consequence, we may leave the graph class, but this is not necessarily an obstacle. For example, if G[T] is a complete graph, or T is an independent set, we can easily solve the problem. Both cases are generalized by the result below.
For every s≥ 1, Subset Vertex Cover can be solved in polynomial time for instances (G,T,k) for which G[T] is sP_2-free.
Let s≥ 1, and let (G,T,k) be an instance of Subset Vertex Cover where G=(V,E) is a graph such that G[T] is sP_2-free.
Let G'=(V,E') be the graph obtained from G after removing every edge between two vertices of V∖ T, so G'[V∖ T] is edgeless. We observe that G has a T-vertex cover of size at most k if and only if G' has a T-vertex cover of size at most k. Moreover, G'[T] is sP_2-free, and we can obtain G' in O(|E(G)|) time.
Hence, from now on, we consider the instance (G',T,k).
We first prove the following two claims, see Figure <ref> for an illustration.
A subset S⊆ V(G') is a T-vertex cover of G' if and only if S=R∪ W for a minimal vertex cover R of G'[T] and a vertex cover W of G'[V∖ R].
Proof.
We prove Claim <ref> as follows. Let S⊆ V(G'). First assume that S is a T-vertex cover of G'. Let I=V∖ S. As S is a T-vertex cover, T∩ I is an independent set. Hence, S contains a minimal vertex cover R of G'[T]. As G'[V∖ T] is edgeless, S is a vertex cover of G', or in other words, I is an independent set. In particular, this means that S∖ R is a vertex cover of G'[V∖ R].
Now assume that S=R∪ W for a minimal vertex cover R of G'[T] and a vertex cover W of G'[V∖ R]. For a contradiction, suppose that S is not a T-vertex cover of G'.
Then G'-S contains an edge uv∈ E', where at least one of u,v belongs to T. First suppose that both u and v belong to T. As R is a vertex cover of G'[T], at least one of u, v belongs to R⊆ S, a contradiction. Hence, exactly one of u,v belongs to T, say u∈ T and v∈ V∖ T, so in particular, v∉ R. As R⊆ S, we find that u∉ R. Hence, both u and v belong to V∖ R. As W is a vertex cover of G'[V∖ R], this means that at least one of u,v belongs to W⊆ S, a contradiction.
This proves the claim.
For every minimal vertex cover R of G'[T], the graph G'[V∖ R] is bipartite.
Proof.
We prove Claim <ref> as follows. As R is a vertex cover of G'[T], we find that T∖ R is an independent set. As G'[V∖ T] is edgeless by construction of G', this means that G'[V∖ R] is bipartite with partition classes T∖ R and V∖ T.
We are now ready to give our algorithm. We enumerate the minimal vertex covers of G'[T]. For every minimal vertex cover R, we compute a minimum vertex cover W of G'[V∖ R]. In the end, we return the smallest S=R∪ W that we found.
The correctness of our algorithm follows from Claim <ref>. It remains to analyze the running time. As G'[T] is sP_2-free, we can enumerate all maximal independent sets I of G'[T] and thus all minimal vertex covers R=T∖ I of G'[T] in (n^2s+1) · O(nm) time due to Theorems <ref> and <ref>. For a minimal vertex cover R, the graph G'[V∖ R] is bipartite by Claim <ref>. Hence, we can compute a minimum vertex cover W of G'[V∖ R] in polynomial time by applying König's Theorem. We conclude that the total running time is polynomial.
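The algorithm in the proof above can be prototyped directly with off-the-shelf graph routines, as sketched below: maximal independent sets of G'[T] are enumerated as maximal cliques of the complement graph, and each bipartite residual instance is solved via König's theorem. This straightforward prototype does not implement the polynomial-delay enumeration of the theorems above and is meant only to illustrate the structure of the algorithm.

import networkx as nx

def subset_vertex_cover_sP2_free(G, T):
    """Minimum T-vertex cover when G[T] is sP_2-free (illustrative prototype)."""
    T = set(T)
    Gp = G.copy()
    # Edges with both end-vertices outside T are irrelevant and are removed.
    Gp.remove_edges_from([(u, v) for u, v in G.edges() if u not in T and v not in T])
    best = None
    # Maximal independent sets of Gp[T] are the maximal cliques of the complement of Gp[T].
    for I in nx.find_cliques(nx.complement(Gp.subgraph(T))):
        R = T - set(I)                                   # a minimal vertex cover of Gp[T]
        B = Gp.subgraph(set(Gp.nodes()) - R).copy()      # bipartite: (T \ R) versus (V \ T)
        B.remove_nodes_from([v for v in B if B.degree(v) == 0])
        top = set(B) & T
        matching = nx.bipartite.maximum_matching(B, top_nodes=top)
        W = nx.bipartite.to_vertex_cover(B, matching, top_nodes=top)
        S = R | set(W)
        if best is None or len(S) < len(best):
            best = S
    return best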
For our next result (Theorem <ref>) we need two known results as lemmas.
If Subset Vertex Cover is polynomial-time solvable on H-free graphs for some H, then it is so on (H+P_1)-free graphs.
For every r≥ 1, Vertex Cover is polynomial-time solvable on rK_1,3-free graphs.
We are now ready to prove our second polynomial-time result.
For every integer s, Subset Vertex Cover is polynomial-time solvable on (sP_1+P_2+P_3)-free graphs.
Due to Lemma <ref>, we can take s=0, so we only need to give a polynomial-time algorithm for (P_2+P_3)-free graphs.
Hence, let (G,T,k) be an instance of Subset Vertex Cover, where G=(V,E) is a (P_2+P_3)-free graph.
First compute a minimum vertex cover of G. As G is (P_2+P_3)-free, and thus 2K_1,3-free, this takes polynomial time by Lemma <ref>. Remember this solution as S_VC.
We now compute a minimum T-vertex cover S of G that is not a vertex cover of G. Then G-S must contain an edge between two vertices in G-T. We branch by considering all O(n^2) options of choosing this edge. For each chosen edge uv we do as follows. As both u and v will belong to G-S for the T-vertex cover S of G that we are trying to construct, we first add every neighbour of u or v that belongs to T to S.
Let T' consist of all vertices of T that are neither adjacent to u nor to v. As G is (P_2+P_3)-free and uv∈ E, we find that G[T'] is P_3-free and thus a disjoint union of complete graphs.
We call a connected component of G[T'] large if it has at least two vertices; else we call it small (so every small component of G[T'] is an isolated vertex).
See also Figure <ref> for an illustration.
Case 1. The graph G[T'] has at most two large connected components.
Let D_1 and D_2 be the large connected components of G[T'] (if they exist).
As V(D_1) and V(D_2) are cliques in G[T], at most one vertex of D_1 and at most one vertex of D_2 can belong to G-S. We branch by considering all O(n^2) options of choosing at most one vertex of D_1 and at most one vertex of D_2 to be these vertices. For each choice of vertices we do as follows. We add all other vertices of D_1 and D_2 to S. Let T^* be the set of remaining vertices of T. Then T^* is an independent set.
We delete every edge between any two vertices in G-T. Now the graph G^* induced by the vertices of T^*∪ (V∖ T) is bipartite (with partition classes T^* and V∖ T). It remains to compute a minimum vertex cover S^* of G^*. This can be done in polynomial time by applying König's Theorem. We let S consist of S^* together with all vertices of T that we had added in S already.
For each branch, we remember the output, and in the end we take a smallest set S found and compare its size with the size of S_VC, again taking a smallest set as the final solution.
Case 2. The graph G[T'] has at least three large connected components.
Let D_1,…, D_p, for some p≥ 3, be the large connected components of G[T'].
Let A consists of all the vertices of the small connected components of G[T'].
We first consider the case where G-S will contain a vertex w∈ V∖ T with one of the following properties:
1. for some i, w has a neighbour and a non-neighbour in D_i; or
2. for some i,j with i≠ j, w has a neighbour in D_i and a neighbour in D_j; or
3. for some i, w has a neighbour in D_i and a neighbour in A.
We say that a vertex w in G-S is semi-complete to some D_i if w is adjacent to all vertices of D_i except at most one.
We show the following claim that holds if the solution S that we are trying to construct contains a vertex w∈ V∖ (S∪ T) that satisfies one of the three properties above. See Figure <ref> for an illustration.
Every vertex w∈ V∖ (S∪ T) that satisfies one of the properties 1-3 is semi-complete to every V(D_j).
Proof.
We prove Claim <ref> as follows. Let w∈ V∖ (S∪ T). First assume w satisfies Property 1. Let x and y be vertices of some D_i, say D_1, such that wx∈ E and wy∉ E. For a contradiction, assume w is not semi-complete to some D_j. Hence, D_j contains vertices y' and y”, such that wy'∉ E and wy”∉ E. If j≥ 2, then {y',y”,w,x,y} induces a P_2+P_3 (as D_1 and D_j are complete graphs). This contradicts that G is (P_2+P_3)-free. Hence, w is semi-complete to every V(D_j) with j≥ 2.
Now suppose j=1. As p≥ 3, the graphs D_2 and D_3 exist. As w is semi-complete to every V(D_j) for j≥ 2 and every D_j is large, there exist vertices x'∈ V(D_2) and x”∈ V(D_3) such that wx'∈ E and wx”∈ E. However, now {y',y”,x',w,x”} induces a P_2+P_3, a contradiction.
Now assume w satisfies Property 2, say w is adjacent to x_1∈ V(D_1) and to x_2∈ V(D_2). Suppose w is not semi-complete to some V(D_j). If j≥ 3, then the two non-neighbours of w in D_j, together with x_1,w,x_2, form an induced P_2+P_3, a contradiction. Hence, w is semi-complete to every V(D_j) for j≥ 3. If j∈{1,2}, say j=1, then let y,y' be two non-neighbours of w in D_1 and let x_3 be a neighbour of w in D_3. Now, {y,y',x_2,w,x_3} induces a P_2+P_3, a contradiction. Hence, w is semi-complete to V(D_1) and V(D_2) as well.
Finally, assume w satisfies Property 3, say w is adjacent to z∈ A and x_1∈ V(D_1). If w not semi-complete to V(D_j) for some j≥ 2, then two non-neighbours of w in D_j, with z,w,x_1, form an induced P_2+P_3, a contradiction. Hence, w is semi-complete to every V(D_j) with j≥ 2. As before, by using a neighbour of w in D_2 and one in D_3, we find that w is also semi-complete to V(D_1).
This completes the proof of Claim <ref>.
We now branch by considering all O(n) options for choosing a vertex w∈ V∖ (S∪ T) that satisfies one of the properties 1-3. For each chosen vertex w, we do as follows. We remove all its neighbours in T, and add them to S. By Claim <ref>, the remaining vertices in T form an independent set. We delete any edge between two vertices from V∖ T, so V∖ T becomes an independent set as well. It remains to compute, in polynomial time by König's Theorem, a minimum vertex cover in the resulting bipartite graph and add this vertex cover to S. For each branch, we store S. After processing all of the O(n) branches, we keep a smallest S, which we denote by S^*.
We are left to compute a smallest T-vertex cover S of G over all T-vertex covers that contain every vertex from V∖ T that satisfy one of the properties 1–3. We do this as follows. First, we put all vertices from V∖ T that satisfy one of the three properties 1–3 to the solution S that we are trying to construct. Let G^* be the remaining graph. We do not need to put any vertex from any connected component of G^* that contains no vertex from T in S.
Now consider the connected component D_1' of G^* that contains the vertices from D_1. As D_1' contains no vertices from V∖ T satisfying properties 2 or 3, we find that D_1' contains no vertices from A or from any D_j with j≥ 2, so V(D_1')∩ T=V(D_1). Suppose there exists a vertex v in V(D_1')∖ V(D_1), which we may assume has a neighbour in D_1 (as D_1' is connected). Then, v is complete to D_1 as it does not satisfy Property 1. Then, we must put at least |V(D_1)| vertices from D_1' in S, so we might just as well put every vertex of D_1 in S. As V(D_1')∩ T=V(D_1), this suffices. If D_1'=D_1, then we put all vertices of D_1 except for one arbitrary vertex of D_1 in S.
We do the same as we did for D_1 for the connected components D_2', …, D_p' of G^* that contain V(D_2),… V(D_p), respectively.
Now, it remains to consider the induced subgraph F of G^* that consists of the connected components containing the vertices of A. Recall that A is an independent set. We delete every edge between two vertices in V∖ T, resulting in another independent set. This changes F into a bipartite graph and we can compute a minimum vertex cover S_F of F in polynomial time due to König's Theorem. We add S_F to S, compare the size of S with the sizes of S^* and S_VC, and pick the one with smallest size as our solution.
The correctness of our algorithm follows from the above description. The number of branches is O(n^4) in Case 1 and O(n^3) in Case 2. As each branch takes polynomial time to process, this means that the total running time of our algorithm is polynomial. This completes our proof.
§ THE PROOF OF THEOREMS <REF> AND <REF>
We first prove Theorem <ref>, which we restate below.
Theorem <ref> (restated).
For a graph H≠ rP_1+sP_2+P_3 for any r≥ 0, s≥ 2; rP_1+sP_2+P_4 for any r≥ 0, s≥ 1; or rP_1+sP_2+P_t for any r≥ 0, s≥ 0, t∈{5,6},
Subset Vertex Cover on H-free graphs is polynomial-time solvable if
H ⊆_i sP_1+P_2+P_3, sP_2, or sP_1+P_4 for some s≥ 1, and NP-complete otherwise.
Let H be a graph not equal to rP_1+sP_2+P_3 for any r≥ 0, s≥ 2; rP_1+sP_2+P_4 for any r≥ 0, s≥ 1; or rP_1+sP_2+P_t for any r≥ 0, s≥ 0, t∈{5,6}. If H has a cycle, then we apply Theorem <ref>. Else, H is a forest. If H has a vertex of degree at least 3, then the class of H-free graphs contains all K_1,3-free graphs, and we apply Theorem <ref>. Else, H is a linear forest. If H contains an induced 2P_3, then we apply Theorem <ref>. If not, then H ⊆_i sP_1+P_2+P_3, sP_2, or sP_1+P_4 for some s≥ 1. In the first case, apply Theorem <ref>; in the second case Theorem <ref>; and in the third case Theorem <ref>.
We now prove Theorem <ref>, which we restate below.
Theorem <ref> (restated).
For a graph H, Subset Vertex Cover on instances (G,T,k), where G[T] is H-free, is polynomial-time solvable if H ⊆_i sP_2 for some s≥ 1, and NP-complete otherwise.
First suppose P_3 ⊆_i H.
As a graph that is a disjoint union of edges is P_3-free, we can apply Theorem <ref>.
Now suppose H is P_3-free. Then H ⊆_i sP_2 for some s≥ 1, and we apply Theorem <ref>.
§ GRAPHS OF BOUNDED MIM-WIDTH
In this section, we give a polynomial algorithm for Subset Vertex Cover on graphs of bounded mim-width. Our algorithm is inspired by the algorithm of Bui-Xuan et al. <cit.> for Independent Set and of Bergougnoux et al. <cit.> for Subset Feedback Vertex Set. Our presentation of the algorithm follows the presentation form in Bergougnoux et al. <cit.>.
We start by introducing some more terminology.
Let G=(V,E) be a graph. For X ⊆ V, we use 2^X to denote its power set and X̅ to denote V ∖ X. A set M ⊆ E is a matching in G if no two edges in M share an end-vertex. A matching M is an induced matching if no end-vertex of an edge e ∈ M is adjacent to any other end-vertex in M except the other end-vertex of e.
We now introduce the notion of mim-width, which was first defined by Vatshelle <cit.>.
Let G=(V,E) be a graph. A rooted binary tree is a rooted tree of which each node has degree 1 or 3, except for a distinguished node that has degree 2 and is the root of the tree. A rooted layout ℒ = (L,δ) of G consists of a rooted binary tree L and a bijection δ between the leaves of L and V. For each node x ∈ V(L), let L_x be the set of leaves that are descendants of x (including x if x is a leaf). Then define V_x as the corresponding set of vertices of G, that is, V_x = {δ(y) | y ∈ L_x }.
For a set A ⊆ V, let mim(A) be the size of a maximum induced matching in the bipartite graph obtained from G by removing all edges between vertices of A and all edges between vertices of A̅. In other words, this is the bipartite graph (A, A̅, E ∩ (A × A̅)). The mim-width mimw(ℒ) of a rooted layout ℒ = (L,δ) of a graph G is the maximum over all x ∈ V(L) of mim(V_x). The mim-width of G is the minimum mim-width over all rooted layouts of G.
In general, it is not known if there exists a polynomial-time algorithm for computing a rooted layout ℒ of a graph G, such that mimw(ℒ) is bounded by a function in the mim-width of G.
However, Belmonte and Vatshelle <cit.>
showed that for several graph classes G of bounded mim-width, including interval graphs and permutation graphs, it is possible to find in polynomial time a rooted layout of a graph G∈ G with mim-width equal to the mim-width of G.
We now introduce the notion of neighbour equivalence, which was first defined by Bui-Xuan et al. <cit.>.
Let G=(V,E) be a graph on n vertices. Let A ⊆ V and d ∈ℕ^+. We say that X, W ⊆ A are d-neighbour equivalent over A, denoted X ≡^A_d W, if min{d, |X ∩ N(v)|} = min{d,|W ∩ N(v)|} for all v ∈A̅. Clearly, this is an equivalence relation. We let nec_d(A) denote the number of equivalence classes of ≡^A_d.
For each X ⊆ A, let rep^A_d(X) denote the lexicographically smallest set R ⊆ A such that R ≡^A_d X and |R| is minimum. This is called the representative of X. We use ℛ^A_d = {rep^A_d(X) | X ⊆ A }. Note that |ℛ^A_d| ≥ 1, as the empty set is always a representative. The following lemma allows us to work efficiently with representatives.
It is possible to compute, in O(nec_d(A) · n^2 log (nec_d(A))) time, the set ℛ^A_d and a data structure that, given a set X ⊆ A, computes rep^A_d(X) in O(|A| · n log (nec_d(A))) time.
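To make the d-neighbour equivalence concrete, the sketch below groups all subsets of a small set A by their d-neighbour equivalence class over A and picks one representative per class. It brute-forces over 2^A, so it only illustrates the definition and does not attain the running time of the lemma above; the tie-breaking rule for representatives is also simplified.

import networkx as nx
from itertools import combinations

def d_neighbour_classes(G, A, d=1):
    """Group all subsets of A by d-neighbour equivalence over A (brute force)."""
    A = set(A)
    outside = [v for v in G.nodes() if v not in A]
    def signature(X):
        return tuple(min(d, len(set(G[v]) & X)) for v in outside)
    classes = {}
    for r in range(len(A) + 1):
        for X in combinations(sorted(A), r):
            classes.setdefault(signature(set(X)), []).append(set(X))
    # One representative per class: a member of minimum size (ties broken arbitrarily here).
    reps = [min(members, key=len) for members in classes.values()]
    return classes, reps

G = nx.path_graph(6)                       # vertices 0,...,5
classes, reps = d_neighbour_classes(G, A={0, 1, 2}, d=1)
print(len(classes))                        # nec_1(A) for this cut, here 2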
We are now ready to solve Subset Vertex Cover on graphs of bounded mim-width. In fact, we solve the complementary problem. Given a graph G = (V,E) with a rooted layout ℒ = (L,δ), a set T ⊆ V, and a weight function w on its vertices, we find a maximum-weight T-independent set (defined below) of G. Our goal is to use a standard dynamic programming algorithm. However, the size of the table that we would need to maintain by a naive approach is too large. Instead, we work with representatives of the sets in our table. We show that we can reduce the table size so that it is bounded by the square of the number of 1-neighbour equivalence classes.
First, we define a notion of equivalence between elements of our dynamic programming table. Given a set T ⊆ V, a set X ⊆ V is a T-independent set if in G[X] there is no edge incident on any vertex of T ∩ X. Note that X is a T-independent set if and only if X̅ is a T-vertex cover.
Let X,W ⊆ V_x be T-independent sets. We say that X and W are equivalent, denoted by X ∼_T W, if X ∩ T ≡^V_x_1 W ∩ T and X ∖ T ≡^V_x_1 W ∖ T.
We now prove the following lemma.
For every Y ⊆V̅_x and every X,W ⊆ V_x such that X ∼_T W, it holds that X ∪ Y is a T-independent set if and only if W ∪ Y is a T-independent set.
By symmetry, it suffices to prove one direction. Suppose that X ∪ Y is a T-independent set, but W ∪ Y is not. Note that X and W are T-independent sets by definition and that Y must be a T-independent set as well, because X ∪ Y is. Hence, the fact that W ∪ Y is not a T-independent set implies there is an edge uv ∈ E(G) for which:
* u ∈ W ∩ T, v ∈ Y ∩ T,
* u ∈ W ∩ T, v ∈ Y ∖ T, or
* u ∈ W ∖ T, v ∈ Y ∩ T.
In the first case, since v ∈ Y ∩ T has a neighbour in W ∩ T, note that min{1,|(W ∩ T) ∩ N(v)|} = 1. Since X ∩ T ≡^V_x_1 W ∩ T by the assumption that X ∼_T W, it follows that min{1,|(X ∩ T) ∩ N(v)|} = 1. Hence, there is an edge from v ∈ Y ∩ T to X ∩ T, contradicting that X ∪ Y is a T-independent set.
The second case is analogous to the first case. The third case is also analogous, but uses that X ∖ T ≡^V_x_1 W ∖ T.
We now introduce a final definition.
For every 𝒜 ⊆ 2^V_x and Y ⊆V̅_x, let
best(𝒜, Y) = max{ w(X) | X ∈ 𝒜 and X ∪ Y is a T-independent set }.
Given 𝒜, ℬ ⊆ 2^V_x, we say that ℬ represents 𝒜 if best(ℬ,Y) = best(𝒜,Y) for every Y ⊆V̅_x.
We use the above definition in our next lemma.
Given a set 𝒜 ⊆ 2^V_x, we can compute ℬ ⊆ 𝒜 that represents 𝒜 and has size at most nec_1(V_x)^2 in O(|𝒜| · n^2 log (nec_1(V_x)) + nec_1(V_x) · n^2 log (nec_1(V_x))) time.
We obtain ℬ from 𝒜 as follows: for all sets in 𝒜 that are equivalent under ∼_T, we maintain only a set X that is a T-independent set and for which w(X) is maximum. Note that if, among a collection of equivalent sets, there is no T-independent set, then no set is maintained. By construction, ℬ has size at most nec_1(V_x)^2.
We now prove that ℬ represents 𝒜. Let Y ⊆V̅_x. Note that best(ℬ,Y) ≤ best(𝒜, Y), because ℬ ⊆ 𝒜. Hence, if there is no X ∈ 𝒜 such that X ∪ Y is a T-independent set, then best(ℬ,Y) = best(𝒜,Y) = -∞. So assume otherwise, and let W ∈ 𝒜 satisfy w(W) = best(𝒜,Y). This means that W ∪ Y is a T-independent set and, in particular, W is a T-independent set. By the construction of ℬ, there is an X ∈ ℬ that is a T-independent set with X ∼_T W and w(X) ≥ w(W). By Lemma <ref>, X ∪ Y is a T-independent set. Hence, best(ℬ, Y) ≥ w(X) ≥ w(W) = best(𝒜, Y). It follows that best(ℬ,Y) = best(𝒜,Y) and thus ℬ represents 𝒜.
For the running time, note that we can implement the algorithm by maintaining a table indexed by pairs of representatives of the 1-neighbour equivalence classes. By Lemma <ref>, we can compute the indices in O(nec_1(V_x) · n^2 log (nec_1(V_x))) time. Then for each X ∈ 𝒜, we can compute its representatives in O(|V_x| · n log (nec_1(V_x))) time and check whether it is a T-independent set in O(n^2) time. Hence, the total running time is O(|𝒜| · n^2 log (nec_1(V_x)) + nec_1(V_x) · n^2 log (nec_1(V_x))).
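A compact sketch of the reduce step in the proof above is given below: candidate sets are bucketed by the pair of 1-neighbour-equivalence signatures of X ∩ T and X ∖ T, and only a maximum-weight T-independent set is kept per bucket. The signature computation reuses the brute-force idea from the earlier sketch and is again purely illustrative.

import networkx as nx

def reduce_family(G, T, Vx, family, weight):
    """Keep one maximum-weight T-independent set per ∼_T equivalence class (sketch)."""
    T, Vx = set(T), set(Vx)
    outside = [v for v in G.nodes() if v not in Vx]
    def sig(X):            # 1-neighbour signature of X over Vx (brute force)
        return tuple(min(1, len(set(G[v]) & X)) for v in outside)
    def t_independent(X):  # no edge of G[X] is incident to a vertex of T ∩ X
        return not any(u in X and v in X and (u in T or v in T) for u, v in G.edges())
    best = {}
    for X in family:
        X = set(X)
        if not t_independent(X):
            continue
        key = (sig(X & T), sig(X - T))
        if key not in best or weight(X) > weight(best[key]):
            best[key] = X
    return list(best.values())

G = nx.cycle_graph(4)                          # the cycle 0-1-2-3-0
fam = [set(), {0}, {1}, {0, 1}]                # all subsets of V_x = {0, 1}
print(reduce_family(G, T={0, 1}, Vx={0, 1}, family=fam, weight=len))
# {0, 1} is discarded (the edge 01 lies inside T); the other three sets land in distinct classes.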
We are now ready to prove the following result.
Let G be a graph on n vertices with a rooted layout (L, δ). We can solve Subset Vertex Cover in O(∑_x ∈ V(L) nec_1(V_x)^4 · n^2 log (nec_1(V_x))) time.
It suffices to find a maximum-weight T-independent set of G. For every node x ∈ V(L), we aim to compute a collection 𝒜_x ⊆ 2^V_x of T-independent sets such that 𝒜_x represents 2^V_x and has size at most p(x) := nec_1(V_x)^2+1. Letting r denote the root of L, we then return the set in 𝒜_r of maximum weight. Since 𝒜_r represents 2^V_r, this is indeed a maximum-weight T-independent set of G.
We employ a bottom-up dynamic programming algorithm to compute 𝒜_x. If x is a leaf with V_x = {v}, then set 𝒜_x = {∅, {v}}. Clearly, 𝒜_x represents 2^V_x and has size at most p(x). So now suppose x is an internal node with children a,b. For any 𝒜,ℬ ⊆ 2^V(G), let 𝒜 ⊗ ℬ = {X ∪ Y | X ∈ 𝒜, Y ∈ ℬ}. Now let 𝒜_x be equal to the result of the algorithm of Lemma <ref> applied to 𝒜_a ⊗ 𝒜_b. Then, indeed, |𝒜_x| ≤ p(x). Using induction, it remains to show the following for the correctness proof:
If 𝒜_a and 𝒜_b represent 2^V_a and 2^V_b respectively, then the computed set 𝒜_x represents 2^V_x.
Proof.
We prove Claim <ref> as follows.
If 𝒜_a ⊗ 𝒜_b represents 2^V_x, then by Lemma <ref> and the transitivity of the `represents' relation, it follows that 𝒜_x represents 2^V_x. So it suffices to prove that 𝒜_a ⊗ 𝒜_b represents 2^V_x. Let Y ⊆V̅_x. Note that
best(𝒜_a ⊗ 𝒜_b, Y) = max{ w(X) + w(W) | X ∈ 𝒜_a, W ∈ 𝒜_b, and X ∪ W ∪ Y is a T-independent set }
= max{ best(𝒜_a, W ∪ Y) + w(W) | W ∈ 𝒜_b }.
Note that best(𝒜_a, W ∪ Y) = best(2^V_a, W ∪ Y), as 𝒜_a represents 2^V_a, and thus best(𝒜_a ⊗ 𝒜_b, Y) = best(2^V_a ⊗ 𝒜_b, Y). Using a similar argument, we can then show that best(2^V_a ⊗ 𝒜_b, Y) = best(2^V_a ⊗ 2^V_b, Y). Since 2^V_x = 2^V_a ⊗ 2^V_b, it follows that best(𝒜_a ⊗ 𝒜_b, Y) = best(2^V_x, Y) and thus 𝒜_a ⊗ 𝒜_b represents 2^V_x.
This completes the proof of Claim <ref>. Finally, we prove the running time bound. Using induction, it follows that |𝒜_a ⊗ 𝒜_b| ≤ p(x)^2 for any internal node x with children a,b. Hence, 𝒜_a ⊗ 𝒜_b can be computed in O(p(x)^2 · n) time. Then, 𝒜_x can be computed in O(p(x)^2 · n^2 log (nec_1(V_x)) + nec_1(V_x) · n^2 log (nec_1(V_x))) = O(nec_1(V_x)^4 · n^2 log (nec_1(V_x))) time by Lemma <ref>.
It was shown by Belmonte and Vatshelle <cit.> that nec_d(A) ≤ |A|^(d · mim(A)). Combining their result with Theorem <ref> immediately yields the following.
Let G be a graph on n vertices with a rooted layout ℒ = (L, δ). Then Subset Vertex Cover can be solved in n^O(mimw(ℒ)) time.
The following corollary is now immediate from the fact that interval and circular-arc graphs have constant mim-width and a rooted layout of constant mim-width can be computed in polynomial time <cit.>.
Subset Vertex Cover can be solved in polynomial time on interval and circular-arc graphs.
§ CONCLUSIONS
Apart from giving a dichotomy for Subset Vertex Cover restricted to instances (G,T,k) where G[T] is H-free (Theorem <ref>), we gave a partial classification of Subset Vertex Cover for H-free graphs (Theorem <ref>). Our partial classification resolved two open problems from the literature and showed that for some hereditary graph classes, Subset Vertex Cover is computationally harder than Vertex Cover (if P ≠ NP). This is in contrast to the situation for graph classes closed under edge deletion. Hence, Subset Vertex Cover is worth studying on its own, instead of only as an auxiliary problem (as done in <cit.>).
Our results raise the question whether there exist other hereditary graph classes on which Subset Vertex Cover is computationally harder than Vertex Cover.
Recall that Vertex Cover is polynomial-time solvable for perfect graphs <cit.>, and thus for weakly chordal graphs and chordal graphs. On the other hand, we showed that Subset Vertex Cover is NP-complete for 2-unipolar graphs, a subclass of 2P_3-free weakly chordal graphs. Hence, as the first candidate graph class to answer this question, we propose the class of chordal graphs. A standard approach for Vertex Cover on chordal graphs is dynamic programming over the clique tree of a chordal graph. However, a naive dynamic programming algorithm over the clique tree does not work for Subset Vertex Cover, as we may need to remember an exponential number of subsets of a bag (clique) and the bags can have arbitrarily large size.
Our polynomial-time algorithms for Subset Vertex Cover for interval and circular-arc graphs, which follow from our result for graph classes of bounded mim-width, makes the open question of the complexity of Subset Vertex Cover on chordal graphs, a superclass of the class of interval graphs, even more pressing.
Recall that Subset Feedback Vertex Set, which is also solvable in polynomial time for graphs of bounded mim-width <cit.>, is NP-complete for split graphs and thus for chordal graphs <cit.>.
We also note that our polynomial-time algorithms for Subset Vertex Cover for sP_2-free graphs and (P_2+P_3)-free graphs can easily be adapted to work for Weighted Subset Vertex Cover for sP_2-free graphs and (P_2+P_3)-free graphs.
In this more general problem variant, each vertex u is given some positive weight w(u), and the question is whether there exists a T-vertex cover S with weights w(S)=∑_u∈ Sw(u)≤ k.
In contrast,
Papadopoulos and Tzimas <cit.> proved that Weighted Subset Feedback Vertex Set is NP-complete for 5P_1-free graphs, whereas Subset Feedback Vertex Set
is polynomial-time solvable even for (sP_1+P_4)-free graphs for every s≥ 1 <cit.>
(see also Theorem <ref>).
The hardness construction of Papadopoulos and Tzimas <cit.> can also be used to prove that Weighted Odd Cycle Transversal is NP-complete for 5P_1-free graphs <cit.>, while Subset Odd Cycle Transversal is polynomial-time solvable even for (sP_1+P_3)-free graphs for every s≥ 1 <cit.>
(see also Theorem <ref>).
Finally, to complete the classification of Subset Vertex Cover for H-free graphs it remains to solve the open cases where
* H=sP_2+P_3 for s≥ 2; or
* H=sP_2+P_4 for s≥ 1; or
* H=sP_2+P_t for s≥ 0 and t∈{5,6}.
Brettell et al. <cit.> asked what the complexity of Subset Vertex Cover is for P_5-free graphs. In contrast, Vertex Cover is polynomial-time solvable even for P_6-free graphs <cit.>.
However, the open cases where H=sP_2+P_t (s≥ 1 and t∈{4,5,6}) are even open for Vertex Cover on H-free graphs (though a quasipolynomial-time algorithm is known <cit.>). So for those cases we may want to first restrict ourselves to Vertex Cover instead of Subset Vertex Cover.
plain
10
Al82
Vladimir E. Alekseev.
The effect of local constraints on the complexity of determination of
the graph independence number.
Combinatorial-Algebraic Methods in Applied Mathematics, pages
3–13, 1982 (in Russian).
BY89
Egon Balas and Chang Sung Yu.
On graphs with polynomially solvable maximum-weight clique problem.
Networks, 19(2):247–253, 1989.
BV13
Rémy Belmonte and Martin Vatshelle.
Graph classes with structured neighborhoods and algorithmic
applications.
Theoretical Computer Science, 511:54–65, 2013.
BPT22
Benjamin Bergougnoux, Charis Papadopoulos, and Jan Arne Telle.
Node Multiway Cut and Subset Feedback Vertex Set on
graphs of bounded mim-width.
Algorithmica, 84(5):1385–1417, 2022.
BM18
Andreas Brandstädt and Raffaele Mosca.
Maximum Weight Independent Set for ℓclaw-free graphs in
polynomial time.
Discrete Applied Mathematics, 237:57–64, 2018.
BJPP22
Nick Brettell, Matthew Johnson, Giacomo Paesani, and Daniël Paulusma.
Computing subset transversals in H-free graphs.
Theoretical Computer Science, 902:76–92, 2022.
BJP22
Nick Brettell, Matthew Johnson, and Daniël Paulusma.
Computing weighted subset odd cycle transversals in H-free
graphs.
Journal of Computer and System Sciences, 128:71–85, 2022.
BTV13
Binh-Minh Bui-Xuan, Jan Arne Telle, and Martin Vatshelle.
Fast dynamic programming for locally checkable vertex subset and
vertex partitioning problems.
Theoretical Computer Science, 511:66–76, 2013.
CHJMP18
Nina Chiarelli, Tatiana R. Hartinger, Matthew Johnson, Martin Milanič, and
Daniël Paulusma.
Minimum connected transversals in graphs: New hardness results and
tractable cases using the price of connectivity.
Theoretical Computer Science, 705:75–83, 2018.
CRST06
Maria Chudnovsky, Neil Robertson, Paul Seymour, and Robin Thomas.
The Strong Perfect Graph Theorem.
Annals of Mathematics, 164:51–229, 2006.
FHKPV14
Fedor V. Fomin, Pinar Heggernes, Dieter Kratsch, Charis Papadopoulos, and Yngve
Villanger.
Enumerating minimal subset feedback vertex sets.
Algorithmica, 69:216–231, 2014.
GL20
Peter Gartland and Daniel Lokshtanov.
Independent Set on P_k-free graphs in quasi-polynomial time.
Proc. FOCS 2020, pages 613–624, 2020.
GLMPPR
Peter Gartland, Daniel Lokshtanov, Tomáš Masařík, Marcin Pilipczuk,
Michal Pilipczuk, and Paweł Rzążewski.
Maximum Weight Independent set in graphs with no long claws in
quasi-polynomial time.
CoRR, arXiv:2305.15738, 2023.
GLS84
Martin Grötschel, László Lovász, and Alexander Schrijver.
Polynomial algorithms for perfect graphs.
Annals of Discrete Mathematics, 21:325–356, 1984.
GKPP22
Andrzej Grzesik, Tereza Klimošová, Marcin Pilipczuk, and Michal
Pilipczuk.
Polynomial-time algorithm for Maximum Weight Independent Set
on P_6-free graphs.
ACM Transactions on Algorithms, 18:4:1–4:57, 2022.
Lo17
Vadim V. Lozin.
From matchings to independent sets.
Discrete Applied Mathematics, 231:4–14, 2017.
LM12
Vadim V. Lozin and Raffaele Mosca.
Maximum regular induced subgraphs in 2P_3-free graphs.
Theoretical Computer Science, 460:26–33, 2012.
Mo01
Bojan Mohar.
Face covers and the genus problem for apex graphs.
Journal of Combinatorial Theory, Series B, 82(1):102–117,
2001.
Mu17b
Andrea Munaro.
On line graphs of subcubic triangle-free graphs.
Discrete Mathematics, 340:1210–1226, 2017.
PPR22b
Giacomo Paesani, Daniël Paulusma, and Pawel Rzazewski.
Classifying Subset Feedback Vertex set for H-free graphs.
Proc. WG 2022, LNCS, 13453:412–424, 2022.
PT20
Charis Papadopoulos and Spyridon Tzimas.
Subset Feedback Vertex set on graphs of bounded independent set
size.
Theoretical Computer Science, 814:177–188, 2020.
PPR21
Marcin Pilipczuk, Michal Pilipczuk, and Paweł Rzążewski.
Quasi-polynomial-time algorithm for Independent Set in
P_t-free graphs via shrinking the space of induced paths.
Proc. SOSA 2021, pages 204–209, 2021.
Po74
Svatopluk Poljak.
A note on stable sets and colorings of graphs.
Commentationes Mathematicae Universitatis Carolinae,
15:307–309, 1974.
TAS77
Shuji Tsukiyama, Mikio Ide, Hiromu Ariyoshi, and Isao Shirakawa.
A new algorithm for generating all the maximal independent sets.
SIAM Journal on Computing, 6:505–517, 1977.
Va12
Martin Vatshelle.
New Width Parameters of Graphs.
PhD thesis, University of Bergen, Norway, 2012.
|
http://arxiv.org/abs/2307.11764v1 | 20230714172415 | Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT | [
"Souvik Kundu",
"Sharath Sridhar Nittur",
"Maciej Szankin",
"Sairam Sundaresan"
] | cs.CL | [
"cs.CL"
] |
Large pre-trained language models have recently gained significant traction due to their improved performance on various downstream tasks like text classification and question answering, requiring only a few epochs of fine-tuning. However, their large model sizes often prohibit their applications on resource-constrained edge devices. Existing solutions for yielding parameter-efficient BERT models largely rely on compute-exhaustive training and fine-tuning. Moreover, they often rely on additional compute-heavy models to mitigate the performance gap. In this paper, we present Sensi-BERT, a sensitivity-driven efficient fine-tuning of BERT models that can take an off-the-shelf pre-trained BERT model and yield highly parameter-efficient models for downstream tasks. In particular, we perform a sensitivity analysis to rank each individual parameter tensor, which is then used to trim them accordingly during fine-tuning for a given parameter or FLOPs budget. Our experiments show the efficacy of Sensi-BERT across several downstream tasks, demonstrating better performance at a similar or smaller parameter budget compared to various existing alternatives.
§ INTRODUCTION
Large transformer based language models such as BERT <cit.>,
RoBERTa <cit.> and ALBERT <cit.> have been extremely successful on many non-trivial natural language processing (NLP) tasks like question answering <cit.> and text classification <cit.>. These models are initially pre-trained on a large unlabeled text corpus followed by fine-tuning on a task-specific data set. However, while their large model size usually helps them provide state-of-the-art (SoTA) accuracy on the downstream tasks, it limits their application and deployment on compute constrained edge devices.
Previous work has focused on reducing the BERT model sizes via
several techniques including distillation <cit.>, pruning <cit.> and quantization <cit.>. Another line of work relied on careful model design via removal of layers <cit.> or neural architecture search (NAS) <cit.>.
However, a majority of these techniques for yielding reduced-size models are iterative in nature and often rely on an extremely compute- and storage-heavy teacher model, making fine-tuning extremely costly. Additionally, many of these methods require pre-training from scratch and are thus unable to save compute by utilizing the wide selection of available pre-trained models <cit.>. Moreover, recent privacy concerns <cit.> have sparked the need for both on-device fine-tuning and inference, making efficient fine-tuning the need of the hour, which the existing methods often fail to provide.
Our Contributions.
Towards the goal of providing BERT models for efficient fine-tuning as well as inference, we present Sensi-BERT, a sensitivity-driven model trimming approach. In particular, starting from a pre-trained model, we first present a low-cost layer sensitivity analysis step to rank each of the intermediate dimensions of the self-attention and multi-layer perceptron (MLP) modules. For a given parameter budget B, we then mask the low-importance dimensions to zero and perform fine-tuning on the masked model only. Different from the traditional pruning approach, which assigns a mask to each linear layer, we assign masks only to the intermediate dimensions and obtain similar compression (see Fig. <ref>) at near-baseline accuracy. Note that we directly meet the target budget without the need for any iterative tuning. Additionally, we perform analyses that leverage Sensi-BERT towards more efficient and less redundant model design.
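As a schematic illustration of the trimming step described above (not the exact implementation), the sketch below takes per-dimension sensitivity scores for one intermediate tensor and builds a binary mask that zeroes out the least important dimensions for a given keep ratio; applying such masks to every MHSA and MLP intermediate tensor realizes a target parameter or FLOPs budget without iterative tuning. The use of mask magnitudes as scores and the example dimension are assumptions.

import torch

def budget_mask(scores: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Binary mask keeping the top keep_ratio fraction of intermediate dimensions."""
    k = max(1, int(round(keep_ratio * scores.numel())))
    idx = torch.topk(scores.abs(), k).indices
    mask = torch.zeros_like(scores)
    mask[idx] = 1.0
    return mask

# Example: learned (non-binary) mask magnitudes act as sensitivity scores for D_ffn = 3072.
m_mlp = torch.randn(3072).abs()            # stand-in for learned mask magnitudes
mask = budget_mask(m_mlp, keep_ratio=0.5)  # keep the 50% most sensitive dimensions
print(int(mask.sum()))                     # 1536 surviving intermediate dimensions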
§ RELATED WORKS
Earlier research has focused on developing various small-sized models <cit.> including DistilBERT <cit.>, TinyBERT <cit.>, and Poor Man's BERT <cit.>. MobileBERT <cit.> uses a bottom-to-top layer training with a custom loss to yield efficient models. However, these methods heavily rely on knowledge distillation (KD) from a compute-heavy teacher model that needs to be prepared during fine-tuning. Another line of research <cit.> considered model compression during up-stream and down-stream training. However, apart from requiring KD, many of these methods also require fine-tuning, compute, and storage of additional dense parameter tensors for a significant number of epochs <cit.>. Other methods require storage of initialized weights and iterative pruning <cit.>, making even the fine-tuning costly. Recent works have also tried removing layers via careful manual effort <cit.> in identifying layer redundancy. Additionally, reduced model development via NAS <cit.> has also been explored. However, these approaches fail to utilize the pre-trained models available off-the-shelf. In contrast to these methods, we assume the fine-tuning forward compute budget to be the same as the device's parameter budget. This poses a stricter constraint on both the fine-tuning and inference cost of the large models. Moreover, we assume the use of a compute- and memory-heavy teacher model to be infeasible, as we often prefer to perform both fine-tuning and inference at the resource-limited edge due to privacy issues <cit.>.
§ SENSI-BERT: METHODOLOGY
Unlike many of the existing approaches, we take advantage of pre-trained model weights and perform model size reduction during fine-tuning only. However, it is well known that efficient parameter reduction requires sensitivity-driven[Here sensitivity is analogous to layer importance; a layer with high sensitivity corresponds to more importance in retaining task performance and is computed by the proxy method of the fraction of non-zero elements present for a given model parameter budget.] dropping of weights. Popular methods like compression via the alternating direction method of multipliers <cit.> assume that the sensitivity is hand-computed. Other methods use magnitude pruning <cit.> on top of pre-trained weights. Here, we present a simple yet effective method of sensitivity evaluation using a pre-trained model.
§.§ Sensitivity Analysis
Consider a BERT model with L layers, each consisting of a multi-head self-attention (MHSA) module followed by an MLP module, with each layer having H heads. An MHSA module takes an input tensor X∈ℝ^N × D_in, with sequence length and embedding dimension N and D_in, respectively. Each of the Query (Q), Key (K), and Value (V) linear transformation layers generates an intermediate tensor T_mhsa∈ℝ^N × D_attn, which finally gets projected to the output tensor O_mhsa∈ℝ^N × D_in. For an MLP module, the intermediate tensor T_mlp∈ℝ^N × D_ffn acts as the output and input of the 1^st and 2^nd fully connected (FC) layer, respectively, to finally produce the output O_ffn∈ℝ^N × D_in. Thus, analysing the importance of the two intermediate tensor dimensions D_attn and D_ffn essentially translates to a sensitivity analysis of the MHSA and MLP modules for each layer. We therefore define a set of learnable (non-binary) mask tensors 𝐦, one for each MHSA and MLP intermediate tensor, 𝐦_mhsa∈ℝ^D_attn and 𝐦_mlp∈ℝ^D_ffn, respectively. The sensitivity analysis objective is
min_{Θ_T(f(𝐗)), 𝐦} ℒ_CE(Φ(𝐦⊙Θ_T(f(𝐗))), 𝐲) + ||𝐦||_1
Here f, Θ_T, and Φ, represents the function generating query, key, and value (QKV) output, intermediate tensor, and the BERT function, respectively.
Note that during the sensitivity analysis process, the mask tensors learn values corresponding to the importance of each dimension in D_attn and D_ffn, for each head in each layer. We allow the model to perform this optimization for only one epoch, thus minimizing the dense compute cost.
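For illustration, a minimal PyTorch sketch of this sensitivity-analysis step for a single MLP module is given below; the dimensions, the stand-in task head, and the L1 coefficient are illustrative assumptions rather than the actual implementation.

import torch
import torch.nn as nn

class MaskedMLP(nn.Module):
    """One BERT-style MLP block with a learnable, non-binary mask over D_ffn."""
    def __init__(self, d_in=768, d_ffn=3072):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_ffn)
        self.fc2 = nn.Linear(d_ffn, d_in)
        self.mask = nn.Parameter(torch.ones(d_ffn))  # m_mlp, initialized to ones

    def forward(self, x):
        t = torch.relu(self.fc1(x))          # intermediate tensor T_mlp
        return self.fc2(t * self.mask)       # mask each intermediate dimension

mlp, head = MaskedMLP(), nn.Linear(768, 2)   # stand-in task head for the CE loss
opt = torch.optim.Adam(list(mlp.parameters()) + list(head.parameters()), lr=1e-4)
x, y = torch.randn(8, 16, 768), torch.randint(0, 2, (8,))   # toy batch
l1_weight = 1e-3                             # assumed coefficient for the ||m||_1 term

logits = head(mlp(x).mean(dim=1))            # pool over the sequence dimension
loss = nn.functional.cross_entropy(logits, y) + l1_weight * mlp.mask.abs().sum()
opt.zero_grad()
loss.backward()
opt.step()                                   # one update; the paper runs one epoch
# After the pass over the data, |mask| ranks the D_ffn dimensions by importance.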
§.§ Budgeted Trimming During Fine-Tuning
Once the mask tensor 𝐦 is trained, we provide the budget B for the MHSA and MLP parameters. We then translate the budget into a corresponding threshold set 𝐦_th. We initialize an all-ones binary mask 𝐛 and set some of its entries to 0 by the following element-wise check:
𝐛 = 0, where 𝐦≤𝐦_th
For a model with total d intermediate elements, we then apply the binary mask 𝐛∈{0, 1}^d to the model's intermediate tensor ensuring the model's parameters follow the set non-zero budget. Thus the fine-tuning objective becomes,
min_{Θ_T(f(𝐗))} ℒ_CE(Φ(𝐛⊙Θ_T(f(𝐗))), 𝐲), s.t. ||𝐛||_0 ≤ B × d
Note that fine-tuning requires only a fixed fraction of weights to be updated to non-zero values. Thus, the gradients associated with zero weights can be skipped, allowing a potential saving in the back-propagation computation cost as well. Unless otherwise stated, we keep the budget for both the MHSA and MLP modules the same in our evaluations. This ensures a FLOPs reduction of similar proportion to the budget, due to the linear relation between parameters and FLOPs for linear layers. Fig. <ref> depicts the architectural overview of Sensi-BERT. Note that the budget-driven thresholding of the binary mask 𝐛 creates a different budget for different layers, driven by sensitivity. The figure shows B^(l)_attn and B^(l)_ffn as the assigned non-zero intermediate tensor budgets for the MHSA and MLP modules, respectively, of the l^th layer.
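For illustration, a short sketch of the budget-to-threshold conversion is given below (illustrative tensor sizes; not the actual implementation).

import torch

def budget_mask(scores: torch.Tensor, budget: float) -> torch.Tensor:
    """Translate a budget B into a threshold m_th and return the binary mask b."""
    d = scores.numel()
    k = max(1, int(budget * d))                      # number of dimensions to keep
    m_th = torch.topk(scores.abs(), k).values.min()  # budget -> threshold
    return (scores.abs() >= m_th).float()            # b in {0, 1}^d

scores = torch.randn(3072)                 # learned mask values for one MLP module
b = budget_mask(scores, budget=0.5)        # keep roughly half of the D_ffn dimensions
print(int(b.sum().item()), "of", scores.numel(), "dimensions kept")
# During fine-tuning, the intermediate tensor is multiplied by the frozen mask b,
# so the trimmed dimensions and their gradients can be skipped entirely.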
§ EXPERIMENTAL EVALUATIONS
§.§ Models and Datasets
We use BERT-base model to evaluate the performance of Sensi-BERT on four popular GLUE datasets, namely
- used for question similarity tasks in NLP and information retrieval;
<cit.> - crowdsourced, large-scale dataset for determining entailment, contradiction or neutrality in sentence pairs;
<cit.> - it facilitates natural language inference tasks;
and <cit.> - a dataset for sentiment analysis using movie review sentences. We compare our results with various popular baselines including DistilBERT. Benchmarks like TinyBERT and MobileBERT are excluded from our comparison as their loss is not architecture agnostic and they depend on additional data augmentation apart from KD <cit.>.
§.§ Results and Analysis
Following the methodology outlined in Section <ref>, we performed the sensitivity analysis for only one epoch with a batch size of 32 on each dataset. With the dimensions ranked, we apply thresholding to create the binary mask for any given budget. Note that we perform the sensitivity analysis only once and perform fine-tuning once, for just three epochs, for each budget. So, to generate N fine-tuned models at N different budgets, the total cost is 1 + 3N epochs. Fig. <ref> shows the results of Sensi-BERT at different parameter budgets. We also compare the results with standard magnitude pruning (MP), wherein we apply a sparse mask to the attention tensors based on the top-k magnitudes in the query parameter tensor. For the MLP layers, we select the top-k magnitude locations of the FC layer weights. Upon creating the binary sparse mask based on the top-k locations after pre-training, we keep the mask frozen during fine-tuning to ensure a fair comparison with our method.
Observation. Magnitude pruned models yield similar accuracy as Sensi-BERT at high budget while yield significantly poorer accuracy at the lower parameter regime.
As Fig. <ref> shows, Sensi-BERT yields significantly better accuracy, by up to 9.5% ( at B=0.2), compared to MP, clearly highlighting the importance of the sensitivity analysis step. However, for higher B, the model may still remain over-parameterized, which reduces the importance of sensitivity-driven model trimming.
We carried out additional ablation experiments, varying the number of epochs used for fine-tuning. The results, presented in Table <ref>, demonstrate a moderate improvement in the model's accuracy with more fine-tuning epochs, with an RMSE of 0.62%. However, it is important to note that achieving these results requires twice the computational time, which should be considered as a trade-off between the required accuracy and the time budget.
§.§ Comparison with Alternatives
We compare our approach with various reduced-parameter BERT design methodologies in Table <ref>. As we can see, despite not leveraging complex fine-tuning models for distillation, Sensi-BERT yields similar and often better accuracy than these methods. Although NAS-BERT leverages more complex and compute-heavy pre-training and fine-tuning, along with pre-training KD and data augmentation, we include its results in the table as a representative to demonstrate how close the performance of models yielded via Sensi-BERT comes. BERT-of-Theseus <cit.> follows a similar philosophy to ours and uses only the CE loss for fine-tuning.
It is noteworthy that our method is orthogonal to most of these approaches and can also be deployed alongside them during the final fine-tuning steps.
§.§ Sensitivity-Driven Architecture Analysis
Observation. While different MHSA heads show different sensitivity trend, the MHSA and MLP layer sensitivity consistently reduces as we go deeper in the model.
Fig. <ref>(a) shows the head-wise sensitivity of BERT-base for B=0.5 on . It is clear that while some heads carry more sensitivity, a few other heads appear to be more over-parameterized (for example, heads 2 and 3 in Fig. <ref>(a)). Fig. <ref>(b), in contrast, shows a clear decreasing sensitivity trend for both the MHSA and MLP modules. Using these findings, we designed a custom BERT with reduced intermediate dimensions D_ffn at its last three layers. The goal is to check whether we can leverage these findings to design more compact models that yield similar accuracy. As shown in Table <ref>, the models with reduced D_ffn provide similar or better accuracy, highlighting the utility of Sensi-BERT as a tool to guide architecture design.
§ CONCLUSIONS AND FUTURE WORK
In this paper we presented Sensi-BERT, a sensitivity-driven approach for yielding parameter-efficient BERT models for efficient fine-tuning and inference. We leveraged the layer-wise sensitivity of the intermediate tensors to identify layer importance and performed fine-tuning on only a fraction of the model parameters selected through this sensitivity.
Compared to various alternatives that leverage compute-heavy models during fine-tuning, Sensi-BERT yields similar or improved performance at a much lower compute and storage demand. Leveraging such a sensitivity-driven approach for large foundation models to reduce their compute footprint is an interesting direction for future research.
|
http://arxiv.org/abs/2307.04901v1 | 20230710210646 | Verifying a quasi-classical spin model of perturbed quantum rewinding in a Fermi gas | [
"J. Huang",
"Camen A. Royse",
"I. Arakelyan",
"J. E. Thomas"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas"
] |
^1Department of Physics, North Carolina State University, Raleigh, NC 27695, USA
We systematically test a quasi-classical spin model of a large spin-lattice in energy space, with a tunable, reversible Hamiltonian and effective long-range interactions. The system is simulated by a weakly interacting Fermi gas undergoing perturbed quantum rewinding using radio-frequency (RF) pulses. The model reported here is found to be in quantitative agreement with measurements of the ensemble-averaged, energy-resolved spin density. This work elucidates the effects of RF detunings on the system and the measurements, pointing the way to new correlation measurement methods.
Verifying a quasi-classical spin model of perturbed quantum rewinding in a Fermi gas
J. Huang, Camen A. Royse, I. Arakelyan, and J. E. Thomas
August 12, 2023
====================================================================================
Measurement of coherence, entanglement, and correlations in time-reversible many-body spin lattices is of great interest, broadly impacting our understanding of quantum measurement and information processing <cit.>. A nearly ideal platform for simulating large spin-lattices is a weakly interacting Fermi gas, containing N≃ 10^5 atoms with a tunable, reversible Hamiltonian. The trapped cloud behaves as a spin lattice in energy-space with effective long-range interactions <cit.> and enables new tests of classical versus quantum spin evolution <cit.>.
Spin waves are observed in weakly interacting, nearly collisionless Fermi gases, which have been explained by several models <cit.>. Previously, a 1D quasi-classical spin evolution model that uses the exact energy-dependent couplings was found to yield agreement with spin-density profiles measured for the evolution of an initially x-polarized spin sample <cit.>. However, it appeared that this model failed to explain perturbed quantum rewinding experiments, where an RF pulse rotates the entire spin system by an angle ϕ_x about the x axis as a perturbation in between forward and backward evolutions. In a quantum picture, the ϕ_x rotation changes the relative phases of the superposed total angular momentum states that describe the system, i.e., |S,M_x⟩→ e^-iM_xϕ_x|S,M_x⟩ for each state, leading to ϕ_x-dependent coherence amplitude between states differing in M_x <cit.>. To fit the data, a scattering length of ≈2.5 times the measured value was needed in the previous work <cit.>, questioning the adequacy of the quasi-classical model.
In this work, we report precise, systematic tests of a modified quasi-classical spin model using single-shot measurements of the spin density profiles from perturbed quantum rewinding experiments. Such experiments are ideal for testing the model, since unperturbed rewinding experiments can be implemented in advance to prove that the system is reversed properly without model-dependent fits. We show the advantages of single-shot data analysis for studies of ensemble-averaged energy-resolved spin density, and quantitatively demonstrate the important roles of different RF detunings during the forward and backward evolution periods. By using two detunings as separate fit parameters, the data is explained by the model using the measured scattering length. The new approach reported here validates the modified quasi-classical treatment of this quantum spin system and suggests detuning-independent measurement methods for future correlation studies, avoiding probabilistic methods in data selection <cit.>.
Our experiments <cit.> employ degenerate clouds of ^6Li containing a total of N=6.5× 10^4 atoms initially in a single spin state. The cloud is confined in a harmonic, cigar-shaped optical trap, with oscillation frequencies ω_x/2π=24.4 Hz in the axial direction and ω_r/2π=650 Hz in the radial direction. The corresponding Fermi temperature is T_F=0.73 μK and T/T_F=0.31. RF pulses prepare coherent superpositions of the two lowest hyperfine-Zeeman states, which are denoted by |1⟩≡|↑_z⟩ and |2⟩≡|↓_z⟩. The experiments are done in the weakly interacting regime, where the energy-state changing collision rate is negligible over the time scale of the measurements <cit.>.
As the single particle energies are fixed and the energy distribution is time independent <cit.>, we approximate the cigar-shaped weakly interacting Fermi gas as a one-dimensional (1D) spin “lattice" in energy space <cit.>, with a Hamiltonian
H(a)/ħ=a∑_i,j≠ ig_ij s⃗_i·s⃗_j+∑_iΩ'E_i s_zi+Δ(t)S_z.
We associate a “site" i with the energy E_i=(n_i+1/2) hν_x of the i^ th harmonic oscillator state along the cigar axis x. For each E_i, we define a dimensionless collective spin vector s⃗ (E_i)≡s⃗_i.
The first term in Eq. <ref> is the site-to-site interaction, proportional to the s-wave scattering length a and to the overlap of the harmonic oscillator probability densities for colliding atoms. In a WKB approximation, g_ij∝ 1/√(|E_i-E_j|), which is an effective long-range interaction in the energy-space lattice <cit.>. For a zero temperature Fermi gas, the average interaction energy (in rad/s) is ag̅=6.8 n_0ħ a/m, where n_0 is the peak density. For our experimental parameters, with a=5.2 a_0, ag̅/2π≃ 2.0 Hz.
The second term in Eq. <ref> is an effective site-dependent Zeeman energy, arising from the quadratic spatial variation of the bias magnetic field along x, which produces a spin-dependent harmonic potential. As ω_r/ω_x=26.6, the corresponding effect on the radial motion is negligible, enabling a 1D approximation, where all atoms at site i have the same Zeeman energy. In Eq. <ref>, Ω'=-δω_x/(ħω_x), with δω_x/2π=14.9 mHz for our trap <cit.>. For the mean energy E̅_x≃ k_B T_F/4, Ω' E̅_x/2π≃ 2.0 Hz.
The last term in Eq. <ref> arises from the time-dependent global detuning Δ(t), which plays a central role in the analysis of the rewinding data. Here, S_z=∑_i s_zi. For a typical evolution time 200 ms, Δ(t)≃0.4 Hz. Fluctuations in the bias magnetic field and magnetic tuning of the scattering length cause Δ(t) to change at 5 kHz/G for |1⟩-|2⟩ superposition states.
To implement perturbed quantum rewinding, we employ the pulse sequence shown in Fig. <ref> <cit.>. The system is initially prepared in a pure z-polarized spin state, |ψ_0z⟩. The first (π/2)_y pulse (0.5 ms), defined to be about the y-axis, creates an x polarized state, |ψ_0x⟩. Here, the y- and x-axes are defined in the rotating frame of the RF pulses (RF-frame). Then, the system is allowed to evolve forward for a time τ_f. A voltage-controlled change of the RF phase by π/2 permits rotation about the x-axis by an angle ϕ_x. Applying a (π)_y pulse (1 ms) and magnetically tuning the scattering length from a→ -a (10 ms) inverts the sign of Hamiltonian shown in Eq. <ref>, causing the system to evolve backward for a time τ_b <cit.>. As described below, we perform experiments both with and without the final (π/2)_y pulse, after which the spatial profiles of the |↑_z⟩ and |↓_z⟩ states are measured by two resonant absorption imaging pulses, separated by 10 μs, to obtain the single-shot spin density S_z(x)=[n_↑_z(x)-n_↓_z(x)]/2. For each shot, S_z(x) is normalized to the total central density n(0)=n_↑_z(0)+n_↓_z(0) to minimize errors arising from shot-to-shot variation in the atom number and cloud width. All spatial profiles are folded about x=0 and displayed for 0≤ x≤σ_TF.
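For illustration, a minimal Python sketch of this normalization and folding step is given below; the density profiles are toy stand-ins for the measured absorption images, not experimental data.

import numpy as np

def spin_density(n_up, n_dn):
    """S_z(x) = [n_up(x) - n_dn(x)] / 2, normalized to the total central density."""
    center = len(n_up) // 2
    return 0.5 * (n_up - n_dn) / (n_up[center] + n_dn[center])

def fold(profile):
    """Fold a profile about the cloud center x = 0."""
    center = len(profile) // 2
    left = profile[:center][::-1]
    right = profile[center + 1:center + 1 + len(left)]
    return 0.5 * (left + right)

x = np.linspace(-1.0, 1.0, 201)                            # position in units of sigma_TF
n_up = np.exp(-x**2) * (1 + 0.1 * np.cos(3 * np.pi * x))   # toy density profiles
n_dn = np.exp(-x**2) * (1 - 0.1 * np.cos(3 * np.pi * x))
Sz_folded = fold(spin_density(n_up, n_dn))                 # S_z(x) for 0 <= x <= sigma_TF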
The reversibility of the system is tested (result shown in Fig. <ref>) using the pulse sequence of Fig. <ref> with
ϕ_x=0 and without the final (π/2)_y pulse. This sequence measures the component of the collective spin vector s⃗_i that was along the z-axis just prior to imaging. The longitudinal (z) component is insensitive to the detuning Δ(t), which causes a rotation of s⃗_i about the z-axis relative to the RF-frame, enabling a robust test. In the data analysis, since S_z=0 for ϕ_x=0, global spin balance is enforced to minimize the error from small shot-to-shot changes in the detuning of the RF pulses, which arise from magnetic field fluctuations.
In these experiments, it is essential to carefully calibrate the bias magnetic field B_0 at which the s-wave scattering length vanishes. This is best done by quantifying the reversal results using different magnetic fields, which is independent of fitting models and less sensitive to the initial conditions in contrast to the method adopted in Ref. <cit.>. B_0 is found by minimizing the sum of the mean square differences between the forward and backward spin density profiles at corresponding times <cit.>. Unperturbed rewinding experiments done at scattering lengths of ± 5.2 a_0 and ± 8.0 a_0, suggest that B_0=527.150(5) G, which is lower by 30 mG compared to the B_0 of Ref. <cit.>.
Fig. <ref> shows rewinding data (6-shot average) at corresponding forward(red) and backward(blue) evolution times for a=8.0 a_0 and -8.0 a_0 respectively. With the calibrated B_0, the corresponding forward and backward spin density profiles demonstrate good agreement for reversal at 280 ms (top row), while reversal at 400 ms (bottom row) leads to greater differences between corresponding forward and backward data profiles.
Having established that the system is reversible for scattering lengths up to ± 8.0 a_0 and τ_f=τ_b≤ 280 ms, data are mainly obtained with τ≡τ_f=τ_b =200 ms at ± 5.2 a_0 using the full pulse sequence of Fig. <ref>. This provides stringent tests of quasi-classical collective spin vector models. Here, the final (π/2)_y pulse is included to measure the transverse spin components that were along the x-axis in the RF frame in Fig. <ref> just prior to imaging. For ϕ_x=0 and a detuning Δ(t) that is constant over the total sequence, the system is expected to rewind to the initial state, where the density profiles for both spins are Thomas-Fermi. For ϕ_x≠ 0, however, the rewinding is perturbed, producing complex spin density profiles after the full sequence. Fig. <ref> shows single-shot spin density profiles for ϕ_x=π/2,π,3π/2. We obtain the corresponding energy-space profiles, s_zi≡ s_z(E), by inverse Abel-transformation <cit.> of the spatial profiles, which is valid in a WKB approximation when energy space coherence is negligible and a quasi-continuum approximation is valid, as in our experiments <cit.>.
To understand the perturbed rewinding data of Fig. <ref>, we include a time-dependent global detuning, Δ(t), in the Hamiltonian of Eq. <ref>. The detuning determines the relative angle between the RF-frame and the Bloch frame φ_fb in Fig. <ref>. Here, the RF frame is defined by x_RF and y_RF axes that rotate about the z-axis at the instantaneous RF frequency, ω_RF(t), tracking the total phase of the RF field. We define the rotation axes for all of the RF pulses in Fig. <ref> to be in the RF frame, i.e., x≡ x_RF and y≡ y_RF. The Bloch frame is defined by x_B and y_B axes that rotate at the instantaneous hyperfine resonance frequency ω_HF(t) for an atom of axial energy E=0.
The detuning, Δ(t)=ω_HF(t)-ω_RF(t), causes the components of spin vectors in the Bloch frame to rotate relative to the RF-frame by generally different angles φ_f=∫_τ_f dt Δ(t) and φ_b=∫_τ_bdt Δ(t), during the forward and backward evolution times, respectively, even for τ_b=τ_f. For measurements of spin components in the RF frame, the final state of the cloud can be written as |ψ_f⟩=e^-iπ/2 S_y|ψ_f1⟩, where
|ψ_f1⟩ is the state just prior to the final (π/2)_y pulse. Taking τ_f=τ_b=τ, we find <cit.>
|ψ_f1⟩=e^-iπ S_ye^i(φ_b-φ_f)S_z W_ϕ(φ_f,τ)|ψ_0x⟩,
where |ψ_0x⟩=e^-iπ/2 S_y|ψ_0z⟩ is the fully x-polarized state and
W_ϕ(φ_f,τ)=e^i/ħH_0(a)τe^-i ϕ_x S_x(φ_f)e^-i/ħH_0(a)τ.
Here, S_x(φ_f)=S_xcosφ_f-S_ysinφ_f with S_x and S_y the x- and y-components of the total spin vector in the RF frame. H_0(a) is defined by Eq. <ref> for Δ(t)=0.
For each shot, the operator s_zi is measured for an ensemble of atoms in a selected energy group E_i∈ [E,E+Δ E]. The energy resolution Δ E of the inverse Abel-transform method is small enough that all of the atoms in the energy group evolve identically over the time scale of the pulse sequence. A single-shot measurement of the spin density profile then yields the ensemble-averages, ⟨ψ_f|s_zi|ψ_f⟩≡⟨ψ_0x|s̃'_xi|ψ_0x⟩. Here,
s̃'_xi= cosφ_fb s̃_xi-sinφ_fb s̃_yi,
with s̃'_xi being the x-component of the spin vector operator relative to the RF frame just before the final (π/2)_y pulse, as shown in Fig. <ref>. s̃'_xi is given in terms of the components in the Bloch frame, s̃_xi≡ W^†_ϕ(φ_f,τ)s_xiW_ϕ(φ_f,τ) and similarly for s̃_yi. For each measurement, we see that the difference between the forward and backward phase shifts, φ_f-φ_b≡φ_fb, determines the relative contribution of the s̃_xi and s̃_yi spin components in the Bloch frame to the measured projection in the RF frame, s̃'_xi. In addition, Eq. <ref> shows that the forward phase shift φ_f determines the effective rotation axis for the ϕ_x pulse.
To predict the measured ⟨ψ_f|s_zi|ψ_f⟩, we employ a mean-field approximation to obtain a quasi-classical model <cit.>, where the Heisenberg equations are solved numerically by treating the collective spin vectors as classical variables, which ignores quantum correlations between the spin vectors for different energy groups. The Heisenberg equations of motion for the collective spin vectors take a simple form in energy space, ds⃗_i(t)/dt=ω⃗_i(t)×s⃗_i(t), with
ω⃗_i(t)=a∑_j≠ ig_ij s⃗_j (t)+Ω'E_i ê_z+Δ(t)ê_z.
For a given choice of the forward and backward detunings, i.e., the phases φ_f and φ_b, s_zi is determined by numerical integration. An Abel transform of s_zi≡ s_z(E) then yields the corresponding spin density s_z(x) <cit.>.
Experimentally, 60 shots are taken for each set of parameters. Examples of single-shot data are shown in Fig. <ref> and in the supplement <cit.>. Due to the complexity of the spatial profiles for ϕ_x≠0, single-shot data analysis is essential for this experiment. Small variation (≤5%) in cloud parameters results in shifted spatial profiles, even for fixed φ_f and φ_b, so averaging over shots with slightly different initial conditions can wash out the fine structure. Fig. <ref> compares two quasi-classical models with the single-shot data (blue dots). The first model, adopted from Ref. <cit.>, assumes φ_f≡φ_b mod 2π, and the fits (black-dashed curves) to the data in Fig. <ref> (a,e) and (b,f) for τ=200 ms and a=5.2 a_0 require a fitted scattering length of a_fit=9.0 a_0 in disagreement with the measured value. These results confirm the large discrepancy between the data and the quasi-classical model ignoring φ_fb that was observed in our previous study of information scrambling <cit.>. For the second, modified model, the forward and backward evolution phases φ_f and φ_b are treated as two free parameters. In this case, the model (red curves) is in good agreement with data taken for ϕ_x=π/2, π and 3π/2 with τ=200 ms at both 5.2 a_0 and 8.0 a_0 and for τ=400 ms at 5.2 a_0. Additional data with ϕ_x in steps of ϕ_x=π/4 were obtained to test the model further and demonstrate equally good agreement <cit.>. Section IV B of the supplement explains the sources of minor defects observed in the data series for 8.0 a_0, τ=200 ms and 5.2 a_0, τ=400 ms.
It is observed that, for the small scattering length, a=5.2 a_0, and short forward evolution time τ=200 ms, the data can be fitted using φ_f and φ_b as two free parameters (red curves) or by using φ_f=φ_b as one parameter and the scattering length a as another free parameter (black-dashed curves). However, for the long forward evolution time of 400 ms or for the large scattering length of 8.0 a_0, the data cannot be fitted for any scattering length with the assumption of φ_b=φ_f. In contrast, the modified model reported in this work, which includes forward and backward evolution phases as separate parameters, fits the data very well using the measured scattering length.
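A possible fitting procedure is sketched below under stated assumptions: model_sz(phi_f, phi_b) is a hypothetical wrapper around the numerical integration of the spin model, and sz_data is a measured single-shot s_z(E) profile; a coarse grid of starting points guards against local minima in the phases.

import numpy as np
from scipy.optimize import minimize

def fit_phases(sz_data, model_sz, n_starts=4):
    """Least-squares fit of (phi_f, phi_b) to a single-shot s_z(E) profile."""
    def cost(p):
        return np.sum((model_sz(p[0], p[1]) - sz_data) ** 2)
    starts = [(f, b)
              for f in np.linspace(0, 2 * np.pi, n_starts, endpoint=False)
              for b in np.linspace(0, 2 * np.pi, n_starts, endpoint=False)]
    results = [minimize(cost, x0, method="Nelder-Mead") for x0 in starts]
    best = min(results, key=lambda r: r.fun)
    return best.x, best.fun                  # fitted (phi_f, phi_b) and the residual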
The modified model explicitly shows the difficulty of multi-shot averaged measurements of transverse spin components, such as s_x, where the averages of cosφ_fb and sinφ_fb in Eq. <ref> tend to vanish. Previously, the imperfect phase control problem was partially circumvented by using a maximum likelihood estimation <cit.>. However, Eq. <ref>, which is valid for both quasi-classical and full quantum treatments, suggests that multi-shot averaged measurements of energy-space spin operator products, such as ⟨ψ_f|s_zis_zj|ψ_f⟩ =⟨ψ_0x|s̃'_xis̃'_xj|ψ_0x⟩, are important, since the random-phase averages of cos^2φ_fb and sin^2φ_fb are nonzero. This method enables improved out-of-time-order correlation measurements in quantum gases, where the W operator is unchanged and the operator V=s_xi is replaced with V=s_xis_xj, since the initial x-polarized state is an eigenstate of both operators <cit.>.
In summary, this work verifies that a modified quasi-classical spin vector model of weakly interacting Fermi gases explains perturbed quantum rewinding experiments, using measurements of single-shot spin density profiles with sufficient resolution to enable quantitative study. The modified model reported here elucidates the effects of uncontrolled forward and backward evolution phases, φ_f and φ_b, on the system and measurements, resolving an outstanding conflict with a previous treatment <cit.>. Our results suggest new correlation analysis methods based on energy-resolved operator products, which yield signals that are independent of the uncontrolled RF detuning without assuming phase distributions <cit.>. Applying such methods to measure the time dependence of correlations between transverse components ⟨ψ_0x|s̃_⊥ i·s̃_⊥ j|ψ_0x⟩ allows the study of entanglement development in a large system <cit.> and investigations of many-body dynamics and information propagation<cit.>. Such experiments will be a topic of future work.
Primary support for this research is provided by the Air Force Office of Scientific Research (FA9550-16-1-0378). Additional support is provided by the National Science Foundation (PHY-2006234).
^*Corresponding authors: [email protected]
[email protected]
[1] R. J. Lewis-Swan, A. Safavi-Naini, J. J. Bollinger, et al., "Unifying scrambling, thermalization and entanglement through measurement of fidelity out-of-time-order correlators in the Dicke model," Nature Communications 10, 1581 (2019).
[2] J. Eisert, M. Friesdorf, and C. Gogolin, "Quantum many-body systems out of equilibrium," Nature Physics 11, 124–130 (2015).
[3] A. M. Kaufman et al., "Quantum thermalization through entanglement in an isolated many-body system," Science 353, 794–800 (2016).
[4] X. Du, L. Luo, B. Clancy, and J. E. Thomas, "Observation of anomalous spin segregation in a trapped Fermi gas," Phys. Rev. Lett. 101, 150401 (2008).
[5] X. Du, Y. Zhang, J. Petricka, and J. E. Thomas, "Controlling spin current in a trapped Fermi gas," Phys. Rev. Lett. 103, 010401 (2009).
[6] U. Ebling, A. Eckardt, and M. Lewenstein, "Spin segregation via dynamically induced long-range interactions in a system of ultracold fermions," Phys. Rev. A 84, 063607 (2011).
[7] S. Pegahan, J. Kangara, I. Arakelyan, and J. E. Thomas, "Spin-energy correlation in degenerate weakly interacting Fermi gases," Phys. Rev. A 99, 063620 (2019).
[8] F. Piéchon, J. N. Fuchs, and F. Laloë, "Cumulative identical spin rotation effects in collisionless trapped atomic gases," Phys. Rev. Lett. 102, 215301 (2009).
[9] S. S. Natu and E. J. Mueller, "Anomalous spin segregation in a weakly interacting two-component Fermi gas," Phys. Rev. A 79, 051601 (2009).
[10] C. Deutsch, F. Ramirez-Martinez, C. Lacroûte, F. Reinhard, T. Schneider, J. N. Fuchs, F. Piéchon, F. Laloë, J. Reichel, and P. Rosenbusch, "Spin self-rephasing and very long coherence times in a trapped atomic ensemble," Phys. Rev. Lett. 105, 020401 (2010).
[11] S. Smale, P. He, B. A. Olsen, K. G. Jackson, H. Sharum, S. Trotzky, J. Marino, A. M. Rey, and J. H. Thywissen, "Observation of a transition between dynamical phases in a quantum degenerate Fermi gas," Science Advances 5, eaax1568 (2019).
[12] A. P. Koller, M. L. Wall, J. Mundinger, and A. M. Rey, "Dynamics of interacting fermions in spin-dependent potentials," Phys. Rev. Lett. 117, 195302 (2016).
[13] M. Gärttner, J. G. Bohnet, A. Safavi-Naini, M. L. Wall, J. J. Bollinger, and A. M. Rey, "Measuring out-of-time-order correlations and multiple quantum spectra in a trapped-ion quantum magnet," Nature Physics 13, 781 (2017).
[14] J. Eisert, M. Friesdorf, and C. Gogolin, "Quantum many-body systems out of equilibrium," Nature Physics 11, 124–130 (2015).
[15] D. Schubert, J. Richter, F. Jin, K. Michielsen, H. De Raedt, and R. Steinigeweg, "Quantum versus classical dynamics in spin models: Chains, ladders, and square lattices," Phys. Rev. B 104, 054415 (2021).
[16] A. Das, M. Kulkarni, H. Spohn, and A. Dhar, "Kardar-Parisi-Zhang scaling for an integrable lattice Landau-Lifshitz spin chain," Phys. Rev. E 100, 042116 (2019).
[17] M. Lakshmanan, Th. W. Ruijgrok, and C. J. Thompson, "On the dynamics of a continuum spin system," Physica 84A, 577–590 (1976).
[18] P. Ball, "Evolution of spins looks surprisingly classical," Physics World 34(10), 6ii (2021).
[19] M. Gärttner, P. Hauke, and A. M. Rey, "Relating out-of-time-order correlations to entanglement via multiple-quantum coherences," Phys. Rev. Lett. 120, 040402 (2018).
[20] S. Pegahan, I. Arakelyan, and J. E. Thomas, "Energy-resolved information scrambling in energy-space lattices," Phys. Rev. Lett. 126, 070601 (2021).
[21] See the Supplemental Material for a description of the experimental details and of the quasi-classical spin model.
[22] G. Pretzier, "A new method for numerical Abel-inversion," Zeitschrift für Naturforschung A 46, 639–641 (1991).
[23] P. Jurcevic, B. Lanyon, P. Hauke, et al., "Quasiparticle engineering and entanglement propagation in a quantum many-body system," Nature 511, 202–205 (2014).
[24] P. Hauke and L. Tagliacozzo, "Spread of correlations in long-range interacting quantum systems," Phys. Rev. Lett. 111, 207202 (2013).
§ SUPPLEMENTAL MATERIAL
This supplemental material presents the experimental and theoretical details of the measurements and modeling of quantum rewinding in a weakly interacting Fermi gas. A new method is introduced for calibration of the magnetic field where the scattering length vanishes. With this precise measurement, systematic experimental defects in the perturbed quantum rewinding experiments are minimized. The critical role of time-dependent detuning is explained by deriving a new collective spin vector model that properly includes it. Finally, additional single-shot data from the perturbed quantum rewinding experiments are presented, illustrating excellent agreement with the quasi-classical model reported in this work.
§.§ Experimental Methods
For the experiments presented in this work, the sample comprises a mixture of the two lowest hyperfine states of ^6Li, denoted |1⟩≡|↑_z⟩ and |2⟩≡|↓_z⟩, which is evaporatively cooled to degeneracy near the broad 1-2 Feshbach resonance at a bias magnetic field of 832.2 G. After the sample is prepared, state |1⟩ is eliminated by an imaging pulse applied in the weakly interacting regime near 1200 G to create a z-polarized sample. The bias magnetic field is then tuned close to the zero crossing at 527.150 G, where the scattering length a nearly vanishes. At this field, a resonant radio-frequency (RF) (π/2)_y pulse, i.e., a rotation around the y-axis in the RF frame, coherently excites the spin state |2⟩, creating a 50-50 superposition of states |1⟩ and |2⟩. In addition to the bias magnetic field, a control magnetic field is applied along the bias field axis by a pair of auxiliary magnet coils, which are wrapped around the primary bias magnet containers, located on the top and bottom of the experimental vacuum chamber. The auxiliary coils enable magnetic field control of the scattering length in the zero-crossing region.
With the coherent superposition state created, the trapped cloud is x-polarized and evolves at the chosen scattering length a for a selected evolution time t_fk, after which the spatial profiles of both hyperfine components are measured. Spin wave formation leads to spin segregation, where the maxima in the |1⟩ and |2⟩ densities are spatially separated <cit.>. This initial evolution is defined as "forward" evolution. To implement "backward" evolution, a π_y pulse is applied after a forward evolution time τ_f, interchanging the two spin states, and the auxiliary magnetic field is swept down over 10 ms to flip the scattering length from a to -a. Ideally, from this point, the sign of the Hamiltonian is inverted, which is equivalent to letting the system evolve backward in time. After a backward evolution time τ_b=τ_f, the system is expected to evolve back to its unsegregated x-polarized state.
§.§.§ Quantum Rewinding Measurements
The system status is observed for a number of different forward evolution times t_fk and backward evolution times t_bk, by imaging the density of both spins, using two resonant optical pulses, separated in time by 10 μs. Spin density spatial profiles are extracted by subtracting normalized spatial profiles for the two states and dividing by 2:
S_z(x) ≡ (1/2) [n_1(x)-n_2(x)]/[n_1(0)+n_2(0)] = (1/2) Δ n(x)/n_tot(0).
Spin segregation is quantified by measuring the central spin density S_z(x=0) = 1/2(Δ n(0)/n_tot(0)) at the center of the cloud. The larger the absolute value of the central spin density is, the more segregated the system is: For the unsegregated system, i.e., immediately after the coherent excitation RF (π/2)_y pulse, the central spin density is zero in theory.
Perturbed quantum rewinding experiments are done by adding a perturbing ϕ_x pulse before the π_y pulse. This pulse is generated by connecting a voltage-controlled phase shifter in series with the output of the RF generator, so that a ϕ_y pulse can be either unshifted in phase or phase shifted by 90^∘ to obtain a ϕ_x pulse. Immediately after the perturbing ϕ_x pulse, the sign of the Hamiltonian is reversed, as described in the previous paragraph, and the system evolves backward for a time period of τ_b=τ_f. Just before the final (π/2)_y pulse, the magnetic field is swept with the auxiliary coils to give the original scattering length a, and the final (π/2)_y pulse is applied to observe the transverse components of the spin vectors.
§.§.§ Detuning
Ideally, during the experimental cycle, for zero detuning, the Bloch frame overlaps with the RF frame, which means that the ϕ_x, π_y and the last (π/2)_y rotations in the RF frame are also done about the x-axis or y-axis in the Bloch frame. However, there are uncontrolled time-dependent global detunings Δ(t), producing relative angles between the RF and Bloch frames during the evolution periods τ_b and τ_f. Experimentally, all rotations are done in the RF frame, which is defined by the RF generator. At a forward evolution magnetic field B_f, which is near resonance, Δ≃ 0. However, as the magnetic field is swept down to B_b during an experimental cycle, the detuning changes by up to several kHz. This results in a large phase difference and corresponding angle between the two frames, which is imperfectly controlled due to field fluctuations. As described in detail in <ref>, the phase shift accumulated during the forward evolution period, φ_f, controls the effective rotation axis of the perturbation relative to the Bloch vector. In addition, the difference between φ_f and the phase accumulated during the backward evolution period, φ_b, determines the measured spin components.
For Hamiltonian reversal experiments, when only the z-component of the spin vectors is measured, there is no sensitivity to the detuning, because the detuning is equivalent to a rotation about the z-axis. In contrast, for perturbed quantum rewinding experiments, where the transverse components of the spin vectors are measured, understanding the roles of φ_f and φ_b is critical for correct data analysis and comparison with predictions.
Although the detuning is not controlled, the experimental results of this work show that complex single-shot data are surprisingly well fitted by the model described below, where the different accumulated phase shifts for the two evolution periods, φ_f and φ_b, are properly included.
§.§ Collective Spin Vector Evolution Model
To understand how the time-dependent global detuning Δ(t) affects the perturbed quantum rewinding measurements, we derive the final state of the system after the pulse sequence of Fig. 2 of the main text, which is reproduced here for convenience, Fig. <ref>.
Prior to the pulse sequence, the optically trapped atoms are initially prepared in a z-polarized state,
|ψ_0z⟩=Π_i |↑_zi⟩.
The system runs forward for a time τ_f and backward for τ_b, after which the spatial profiles of the |↑_z⟩ and |↓_z⟩ states are measured. Note that the pulse durations for the y- and x- axis rotations are <<τ_f, τ_b. In between the forward and backward evolutions, a perturbing pulse is applied, which rotates the system about the x-axis by an angle ϕ_x.
In Fig. <ref>, the x and y axes are defined in the “RF frame," where the x-axis of the RF frame is defined to rotate about the z-axis at the instantaneous frequency of the RF generator. The x-axis, therefore, tracks the total phase of the RF field. The detuning Δ(t) is defined as the difference between the instantaneous hyperfine resonance frequency for an atom at rest and the instantaneous radiofrequency. When the hyperfine frequency is larger than the radiofrequency, spin vectors will appear to rotate counterclockwise as seen looking down the z-axis from above, i.e., through a positive angle, relative to the RF frame. In the experiments, changes in the applied bias magnetic field, as used to reverse the scattering length, and uncontrolled magnetic field fluctuations, tune the hyperfine frequency at a rate of ≃ 5 kHz/G. The detuning causes the components of spin vectors in the Bloch frame to rotate relative to the RF-frame by generally different angles, during the forward and backward evolution times, respectively, even for τ_b=τ_f=τ as used in the experiments. We define the forward and backward phase shifts,
φ_f=∫_τ_f dt Δ(t) and φ_b=∫_τ_b dt Δ(t).
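As a toy illustration of these definitions (the detuning trace and window timings below are assumed, not measured), the phases are simple time integrals of Δ(t) over the two evolution windows:

import numpy as np

def delta(t):                                   # assumed detuning trace Delta(t) in rad/s
    return 2 * np.pi * (0.4 + 0.1 * np.sin(2 * np.pi * t))

t_f = np.linspace(0.0, 0.200, 2001)             # forward window tau_f (s)
t_b = np.linspace(0.210, 0.410, 2001)           # backward window tau_b after the 10 ms sweep (s)

phi_f = np.trapz(delta(t_f), t_f)               # phi_f = integral over tau_f of Delta(t) dt
phi_b = np.trapz(delta(t_b), t_b)
phi_fb = phi_f - phi_b                          # the phase difference entering the measurement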
To find the final state including the global detuning, we write the Hamiltonian of Eq. 1 of the main text in the general form
H(a)/ħ= H_0(a)/ħ+Δ(t)S_z,
where S_z is the z-component of the dimensionless total spin vector.
Here, the time-independent part of the Hamiltonian, for Δ=0, is defined by
H_0(a)/ħ=a∑_i,j≠ ig_ij s⃗_i·s⃗_j+∑_iΩ'E_i s_zi
and
[H_0(a),S_z]=0.
Referring to Fig. <ref>, for measurements of spin components in the RF frame, we see that the final state of the cloud for τ_f=τ_b=τ is
|ψ_f⟩=e^-iπ/2 S_ye^-i/ħ H_0(-a)τ-iφ_b S_ze^-iπ S_ye^-i ϕ_x S_xe^-i/ħ H_0(a)τ-iφ_f S_ze^-iπ/2 S_y|ψ_0z⟩.
Eq. <ref> is readily simplified using
e^-i/ħ H_0(-a)τ-iφ_b S_ze^-iπ S_y=e^-iπ S_y[e^iπ S_ye^-i/ħ H_0(-a)τ-iφ_b S_ze^-iπ S_y].
Using Eq. <ref> and noting that the (π)_y rotation inverts S_z, we see that
e^iπ S_y[H_0(-a)/ħτ+φ_b S_z]e^-iπ S_y=-H_0(a)/ħτ-φ_b S_z.
With Eq. <ref>, we obtain
|ψ_f⟩=e^-i 3π/2 S_ye^+iφ_b S_ze^+i/ħ H_0(a)τ e^-i ϕ_x S_xe^-iφ_f S_ze^-i/ħ H_0(a)τe^-iπ/2 S_y|ψ_0z⟩.
Now,
e^-i ϕ_x S_xe^-iφ_f S_z=e^-iφ_f S_z[e^+iφ_f S_ze^-i ϕ_x S_xe^-iφ_f S_z].
It is easy to show that
S_x(φ_f)≡ e^+iφ_f S_z S_x e^-iφ_f S_z=S_xcosφ_f-S_ysinφ_f,
which follows from S_x”(φ_f)=-S_x(φ_f) and the initial conditions S_x(0)=S_x and S_x'(0)=-S_y. Then
e^-i ϕ_x S_xe^-iφ_f S_z= e^-iφ_f S_z e^-iϕ_x S_x(φ_f).
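The rotation identity above can be verified numerically in the spin-1/2 representation; the following minimal check (illustrative only, not part of the data analysis) confirms it for an arbitrary angle.

import numpy as np
from scipy.linalg import expm

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

phi = 0.73                                   # arbitrary test angle
lhs = expm(1j * phi * sz) @ sx @ expm(-1j * phi * sz)
rhs = sx * np.cos(phi) - sy * np.sin(phi)
assert np.allclose(lhs, rhs)                 # e^{+i phi S_z} S_x e^{-i phi S_z} = S_x cos(phi) - S_y sin(phi)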
Again using Eq. <ref>, we then obtain
|ψ_f⟩=e^-i 3π/2 S_ye^+i(φ_b-φ_f) S_ze^+i/ħ H_0(a)τ e^-i ϕ_x S_x(φ_f)e^-i/ħ H_0(a)τe^-iπ/2 S_y|ψ_0z⟩.
Defining the operator
W_ϕ(φ_f,τ)=e^i/ħ H_0(a)τe^-i ϕ_x S_x(φ_f)e^-i/ħH_0(a)τ,
and the x-polarized state just after the first (π/2)_y rotation,
|ψ_0x⟩=e^-iπ/2 S_y|ψ_0z⟩
we obtain the final state in the simple form,
|ψ_f⟩=e^-i 3π/2 S_ye^+i(φ_b-φ_f)S_z W_ϕ(φ_f,τ)|ψ_0x⟩.
As explained in the main text, in a single shot, we can measure the operator s_zi for the ensemble of atoms in the i^th energy group. Noting that e^+i 3π/2 S_ys_zie^-i 3π/2 S_y=+s_xi, we find
⟨ψ_f|s_zi|ψ_f⟩=⟨ψ_0x|W^†_ϕ(φ_f,τ)e^-i(φ_b-φ_f)S_zs_xi e^i(φ_b-φ_f)S_zW_ϕ(φ_f,τ)|ψ_0x⟩.
By using Eq. <ref> with φ_f→φ_fb≡φ_f-φ_b, S_x→ s_xi and S_y→ s_yi, we see that
e^i(φ_f-φ_b)S_zs_xi e^-i(φ_f-φ_b)S_z=s_xicosφ_fb-s_yisinφ_fb.
Then,
⟨ψ_f|s_zi|ψ_f⟩=⟨ψ_0x|W^†_ϕ(φ_f,τ)s_xiW_ϕ(φ_f,τ)|ψ_0x⟩cosφ_fb-
⟨ψ_0x|W^†_ϕ(φ_f,τ)s_yiW_ϕ(φ_f,τ)|ψ_0x⟩sinφ_fb
Eq. <ref> shows that a single-shot measurement of the spin density profile then yields, via inverse-Abel transformation <cit.>, the ensemble-averages,
⟨ψ_f|s_zi|ψ_f⟩≡⟨ψ_0x|s̃'_xi|ψ_0x⟩,
where
s̃'_xi= cosφ_fb s̃_xi-sinφ_fb s̃_yi.
Here,
s̃_xi≡ W^†_ϕ(φ_f,τ)s_xiW_ϕ(φ_f,τ)
and similarly for s̃_yi, reproducing the results given in the main text.
We see that for each measurement, the difference between the forward and backward phase shifts, φ_f-φ_b≡φ_fb, determines the relative contribution of the s̃_xi and s̃_yi spin components in the Bloch frame to the measured projection in the RF frame, s̃'_xi. In addition, Eq. <ref> shows that the forward phase shift φ_f determines the effective rotation axis for the ϕ_x pulse.
To compare the prediction of Eq. <ref> to single-shot measurements in a system containing a large number of spins, we employ a quasi-classical model. In this case, we treat the Heisenberg equations for the spin-vectors s⃗_i as evolution equations for classical vectors, which neglects quantum correlations. These equations are readily evaluated for any chosen φ_f and φ_b by numerical integration, enabling fits to single-shot data with φ_f and φ_b as fit parameters.
§.§ Quantifying Hamiltonian Reversal
This section introduces the method of quantifying the quality of rewinding by Hamiltonian reversal.
Two sets of data are required to examine the result of the Hamiltonian reversal in the experiments. One set of data represents the state of the system at different forward evolution times, and the other set represents the state at corresponding backward evolution times. The first set is taken at a magnetic field B_f for a number of times 0≤ t_fk≤τ_f, where t=0 is the time of the initial coherent excitation pulse and τ_f is the maximum forward evolution time. The second set is taken at a backward evolution magnetic field B_b at corresponding times t_bk=τ_f+τ_bk, as discussed below.
The z-component of the spin density for the forward evolving system, S^f_z(x,t_k), is measured at different evolution times t_fk≡ t_k by imaging the density of both spins at a time t_fk relative to the time t≡ 0 of the initial
(π/2)_y RF pulse. To enable a determination of the reversal quality, the same quantity, S^b_z(x,t_k), is measured for the Hamiltonian reversed system at t_bk≡τ_f+τ_bk, where t_bk is the total time relative to the coherent excitation pulse. Here, τ_bk is the amount of time that the system evolves backward with the reversed Hamiltonian. To match the spin density spatial profiles for the forward evolution with the corresponding backward evolution ones, the evolution times need to be matched, i.e., τ_bk=τ_f-t_fk=τ_f-t_k and t_bk= 2τ_f-t_k. χ^2_k is defined as the normalized mean square difference between forward and backward evolution spin density profiles for t_k,
χ^2_k ≡∑_x(S^b_z(x,t_k)-S^f_z(x,t_k)/S^f_z(x,t_k))^2.
To quantify the reversal, we employ χ^2≡⟨χ^2_k⟩, the average χ_k^2 for all of the t_k in the data set. Small χ^2 means that the forward and backward S_z(x) profiles overlap very well, which corresponds to a good reversal.
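A direct transcription of this metric is sketched below (illustrative Python; the profile arrays are assumed inputs rather than experimental data).

import numpy as np

def chi2_k(sz_forward, sz_backward):
    """Normalized mean-square difference of matched forward/backward profiles."""
    return float(np.sum(((sz_backward - sz_forward) / sz_forward) ** 2))

def chi2(profile_pairs):
    """Average chi^2_k over all matched evolution times t_k in the data set."""
    return float(np.mean([chi2_k(f, b) for f, b in profile_pairs]))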
§.§.§ Zero-crossing Measurement
A critical constant for the implementation of any experiments involving quantum rewinding is the zero-crossing magnetic field B_0 where the scattering length vanishes. Careful calibration of this field ensures that the Hamiltonian reversal in the experimental sequence is done properly. In this work, B_0 is precisely measured by quantifying Hamiltonian reversal for two different forward evolution magnetic fields B_f = 528.803 G and 529.713 G, where the scattering lengths are ≈5 a_0 and ≈8 a_0 respectively. Data is taken for five slightly different backward evolution magnetic fields B_b near the zero crossing for each B_f.
The magnitudes of all of the magnetic fields are measured precisely using RF spectroscopy, by applying a π pulse (15 ms) with a known RF frequency. The resonance frequencies of the RF pulse for the atomic transition are in one-to-one correspondence with the magnetic fields. With this property, the magnetic field can be calculated with mG precision from the resonance RF frequency for a π pulse that fully transfers atoms from |2⟩ to |1⟩.
By fitting a parabola to χ^2, as defined above, for five different B_b, the optimum reversal magnetic field B_b,opt is obtained for the corresponding B_f. The zero-crossing magnetic field is located at the midpoint between B_f and B_b,opt. Fig. <ref> displays the results of this measurement. Because the measurement for this experiment is insensitive to the detuning, as described in the main paper, averaging is allowed. Each data point is the result of averaging 5 shots. The top two panels (a) and (b) are the results of the data series taken for a=± 5.2 a_0 with Hamiltonian reversal done at τ_f=400 ms, and the bottom two (c) and (d) are the results for a=± 8.0 a_0 with Hamiltonian reversal done at τ_f=240 ms. The parabolic fit to χ^2 for the a=± 5.2 a_0 series suggests that B_b,opt=525.488 G for B_f=528.803 G, and for the ±8.0 a_0 series that B_b,opt=524.596 G for B_f=529.713 G. The two series of experiments yield the result B_0=527.150(5) G, which is 30 mG lower than the previous result <cit.>. Note that the scattering lengths "5.2 a_0" and "8.0 a_0" are calculated based on the zero-crossing magnetic field measured in the above experiment.
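The calibration logic amounts to the following short sketch; the trial fields and χ² values below are illustrative placeholders, not the measured data.

import numpy as np

B_f = 528.803                                              # forward-evolution field (G)
B_b = np.array([525.38, 525.43, 525.48, 525.53, 525.58])   # trial backward fields (G)
chi2_vals = np.array([2.1, 1.4, 1.1, 1.5, 2.3])            # assumed chi^2 values

c2, c1, c0 = np.polyfit(B_b, chi2_vals, 2)                 # parabola chi^2(B_b)
B_b_opt = -c1 / (2.0 * c2)                                 # field that minimizes chi^2
B_0 = 0.5 * (B_f + B_b_opt)                                # zero crossing at the midpoint
print(f"B_b,opt = {B_b_opt:.3f} G, B_0 = {B_0:.3f} G")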
§.§.§ Reversibility in Different Regimes
This technique of quantifying the reversal quality can also be used to compare the reversibility of systems in different regimes of scattering length and forward evolution time. This is achieved by weighting the disagreement between forward and backward spatial profiles differently for different t_k: by construction, the disagreement is weighted more heavily for small t_k, where the system has evolved backward long enough for discrepancies between backward and forward evolution to appear. For small t_k, S^f_z(x, t_k) is usually small, i.e., close to a horizontal line around the x-axis, because the system has not segregated very much for short forward evolution times. Hence, the denominator in Eq. <ref> exaggerates the magnitude of χ^2_k for small t_fk.
For the perturbed quantum rewinding experiments, the evolution times are always chosen to be τ_b=τ_f, which means that the reversal quality at t_k=0 ms is extremely critical. Hence, χ^2_k at t_k=0 ms needs to be weighted highly in the test of reversibility, especially for the purpose of choosing regimes to do perturbed quantum rewinding experiments.
Note that the χ^2 in Fig. <ref>(a) is one order of magnitude larger than it is in (c), which means that the system reverses more perfectly with a=8.0 a_0 and τ_f = 240 ms than with a=5.2 a_0 and τ_f=400 ms. This matches the comparisons of the central spin density evolutions shown in Fig. <ref> (b)(d): For the a=5.2 a_0 and τ_f=400 ms data series, there is a clear sign of segregation (Δ n(0)/n_tot(0)>0) for Hamiltonian reversed data at t_k=0 ms even if the optimum reversal magnetic field is adopted. In contrast, for a=8.0 a_0 and τ_f=240 ms, an almost perfect reversal is observed at t_k=0 ms.
With this method of quantifying quantum rewinding, a systematic study of the reversibility of a system in different regimes can be done by fixing τ_f and varying the scattering length and by fixing the scattering length and varying τ_f. This study is ongoing and requires a large amount of data, and so will not be pursued further here.
§.§ Testing the Quasi-Classical Spin Model
To test the quasi-classical spin model reported in this work, three series of perturbed quantum rewinding experiments are performed: at 5.2 a_0 with τ=200 ms, at 8.0 a_0 with τ=200 ms, and at 5.2 a_0 with τ=400 ms. The S_z(x) profiles are measured as described in <ref>. All spatial profiles shown in this work are folded over the center of the cloud, x=0, followed by equal-width binning into 50 bins. To extract the energy-space information, S_z(E), Abel inversion is applied to the spatial profiles with 16 expansion terms <cit.>. As energy-space profiles are less sensitive to the experimental defects shown in Fig. <ref>, the model is fitted to the S_z(E) extracted from single-shot data with φ_f and φ_b as free parameters. Data from the three experimental series are in good agreement with the model.
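The spatial pre-processing described here amounts to the following sketch (illustrative Python; the bin count matches the text, while the profile is a toy placeholder), applied before the Abel-inversion step.

import numpy as np

def fold_and_bin(x, sz, n_bins=50):
    """Fold S_z(x) about x = 0 and average it into equal-width bins."""
    x_abs = np.abs(x)
    edges = np.linspace(0.0, x_abs.max(), n_bins + 1)
    idx = np.clip(np.digitize(x_abs, edges) - 1, 0, n_bins - 1)
    binned = np.array([sz[idx == i].mean() if np.any(idx == i) else np.nan
                       for i in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, binned

x = np.linspace(-1.0, 1.0, 401)
sz = 0.1 * np.cos(3 * np.pi * x) * np.exp(-x**2)   # toy spin-density profile
centers, sz_binned = fold_and_bin(x, sz)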
§.§.§ Primary Data for a=5.2 a_0 and τ=200 ms
Perturbed quantum rewinding measurements are employed to precisely test the quasi-classical spin model presented in <ref>. Data is mainly taken at 5.2 a_0 with τ=200 ms, as this set of parameters is expected to provide an almost perfect Hamiltonian reversal. Nine different ϕ_x values are used as perturbations to the reversal, ranging from 0 to 2π in steps of π/4. Fig. <ref> shows examples of single-shot data taken for different ϕ_x. Note that in these figures φ_f and φ_b are not necessarily the same across the whole data set, because of fluctuations in the RF detuning as discussed in <ref>. The systematic measurements and fits done for the perturbed quantum rewinding experiments show that the complicated structure observed in the spatial profiles is very sensitive to the initial conditions (cloud size σ_TF and atom number N), as well as to the RF detunings for the two evolution periods τ_f and τ_b. Even small variations in these parameters result in slightly shifted or skewed spatial profiles. Hence, for the quantitative tests of the quasi-classical spin model using perturbed quantum rewinding experiments presented in this work, single-shot analysis is essential, since averaging data taken with imperfectly controlled experimental parameters tends to wash out the fine structure in the spatial profiles. The measured single-shot profiles presented in this work have adequate spatial resolution to capture small details in the profiles. Having minimized experimental defects by careful calibrations, the measured single-shot data provide stringent tests of predictions based on the model of <ref>.
§.§.§ Additional Data Sets
In this section, we present additional data for increased scattering length and evolution time.
The validity of the modified quasi-classical spin model of <ref> is demonstrated for the primary data set with a=5.2 a_0 and τ=200 ms in <ref>. To test the model further, a series of additional perturbed quantum rewinding experiments are done with τ=200 ms and 8.0 a_0 and with τ=400 ms and a=5.2 a_0 for three ϕ_x values: π/2, π and 3π/2. Less quantitative agreement between the model and data is observed in many of the single shots from these additional experiments, especially in the spatial profiles.
The disagreement arises from differences between the spin imbalances in the model and in the data. In Fig. <ref>, the basic model (red curves) assumes that all of the applied RF pulses are on resonance, while drifts in the detuning alter the pulse area. It is clear that the basic model still captures the shape and oscillations of the data (blue dots), but with an offset. In the experiments with longer experimental cycles or larger magnetic field sweeps, there is a larger probability that one or more of the RF pulses are slightly off resonance with the hyperfine frequency, since the RF frequency is fixed for all pulses, but the magnetic field can fluctuate because of the limited stability of the auxiliary coils. Imperfect RF pulses result in a measurement with the wrong spin imbalance for the given ϕ_x, compared to ideal measurements. To include the effect of imbalance in the model, the atom numbers for the two spin states are adjusted from N_1 and N_2 to N_1-δ N_tot and N_2+δ N_tot, with N_tot = N_1+N_2 the total atom number and δ a reasonably small (≤ 10%) fraction. In our system, the total density n_tot(x) = n_1(x)+n_2(x) is invariant during the experimental cycle. Hence, to include the adjustment of the spin imbalance in the modeled spatial profile, the outcomes n_1(x) and n_2(x) are first determined from the model and then scaled to n_1(x)-δ n_tot(x) and n_2(x)+δ n_tot(x). In this way, the total atom number and density profile remain the same in the model output before and after the adjustment. With this adjustment to the outcome of the regular model, the improved fits shown as the green curves in Fig. <ref> are obtained.
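A minimal Python sketch of this spin-imbalance adjustment is given below; the density profiles are synthetic placeholders and δ=0.05 is an assumed value within the ≤10% range quoted above.

import numpy as np

def adjust_imbalance(n1, n2, delta=0.05):
    """Shift atoms between the two spin states by a fraction delta of the
    total density, keeping n_tot(x) = n1(x) + n2(x) invariant, as described
    in the text (delta <= 10% is the quoted range)."""
    n_tot = n1 + n2
    n1_adj = n1 - delta * n_tot
    n2_adj = n2 + delta * n_tot
    assert np.allclose(n1_adj + n2_adj, n_tot)   # total density unchanged
    return n1_adj, n2_adj

# illustrative model output for the two spin densities
x = np.linspace(-1.0, 1.0, 201)
n1 = np.exp(-x**2) * (1.0 + 0.3 * np.cos(4 * np.pi * x))
n2 = np.exp(-x**2) * (1.0 - 0.3 * np.cos(4 * np.pi * x))
n1_adj, n2_adj = adjust_imbalance(n1, n2, delta=0.05)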
Another reason for the disagreement between the model and the data set with τ=400 ms at a=5.2 a_0 is the imperfect Hamiltonian reversal shown in Section <ref>. The unperturbed quantum rewinding experiment done with this set of experimental parameters suggests that the system is not precisely reversed in this regime. Therefore, it is reasonable that the perturbed rewinding data cannot be predicted by the model as quantitatively as the data obtained in the regime where the reversibility of the system is clearly better.
|
http://arxiv.org/abs/2307.03983v1 | 20230708141424 | Hybrid Successive Interference Cancellation and Power Adaptation: a Win-Win Strategy for Robust Uplink NOMA Transmission | [
"Yanshi Sun",
"Wei Cao",
"Momiao Zhou",
"Zhiguo Ding"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Hybrid Successive Interference Cancellation and Power Adaptation: a Win-Win Strategy for Robust Uplink NOMA Transmission
Yanshi Sun, Member, IEEE, Wei Cao, Momiao Zhou, Member, IEEE, Zhiguo Ding, Fellow, IEEE
Y. Sun, Wei Cao and M. Zhou are with the School of Computer Science and Information
Engineering, Hefei University of Technology, Hefei, 230009, China. (email: [email protected], [email protected] and [email protected]).
Z. Ding is with Department of Electrical Engineering and Computer
Science, Khalifa University, Abu Dhabi, UAE, and Department of Electrical
and Electronic Engineering, University of Manchester, Manchester, UK. (email: [email protected]).
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The aim of this paper is to reveal the importance of hybrid successive interference cancellation (SIC) and power adaptation (PA) for improving transmission robustness of uplink non-orthogonal multiple access (NOMA).
Particularly, a cognitive radio inspired uplink NOMA communication scenario is considered, where one primary user is allocated one dedicated resource block, while M secondary users compete with each other to be opportunistically served by using the same resource block of the primary user. Two novel schemes are proposed for the considered scenario, namely hybrid SIC with PA (HSIC-PA) scheme and fixed SIC with PA (FSIC-PA) scheme. Both schemes can ensure that the secondary users are served without degrading the transmission reliability of the primary user compared to conventional orthogonal multiple access (OMA) based schemes. Rigorous analytical results are presented to evaluate the performance of the proposed two schemes. It is shown that both schemes can avoid outage probability error floors without any constraints on users' target rates in the high SNR regime. Furthermore, it is shown that the diversity gain achieved by the HSIC-PA scheme is M, while that of the FSIC-PA scheme is only 1. Numerical results are provided to verify the developed analytical results and also demonstrate the superior performance achieved by the proposed schemes by comparing with the existing HSIC without PA (HSIC-NPA) scheme.
The presented simulation results also show that HSIC-PA scheme performs the best among the three schemes, which indicates the importance of the combination of HSIC and PA for improving transmission robustness.
Non-orthogonal multiple access (NOMA), hybrid successive interference cancellation (HSIC), power adaptation, outage probability.
§ INTRODUCTION
Non-orthogonal multiple access (NOMA) has attracted extensive research interest during the past few years, and has been recognized as an important potential enabling technology for future wireless communication systems <cit.>. Compared to conventional orthogonal multiple access (OMA), where one channel resource block can be accessed by a single user only, the key appealing feature of NOMA is that allowing multiple users to simultaneously access the same channel resource block is encouraged <cit.>. Thus, by applying NOMA, larger connectivity and higher spectral efficiency can be obtained.
Existing research works show that NOMA can be compatible with many other advanced technologies, such as multiple input multiple output (MIMO) <cit.>, millimeter wave communications <cit.>, Terahertz communications <cit.>, reconfigurable intelligent surfaces (RIS) <cit.>, satellite communications <cit.> and so on.
Since NOMA allows multiple users to simultaneously occupy one channel resource block, how to address inter-user interference is one of key issues in NOMA communication systems. To this end, a widely used method in NOMA to address inter-user interference is successive interference cancellation (SIC), where users' signals are decoded in a successive manner <cit.>. Due to the error propagation nature of SIC, how to order users plays a very important role in the performance of SIC. Conventionally, there are two main types of methods for determining the decoding order of users in NOMA. One is known as the channel state information (CSI) based SIC method, where users are ordered according to the quality of their channels <cit.>. The other is known as the quality of service (QoS) based SIC method, where the signals for the users with more stringent QoS are decoded first, while other users are often opportunistically served and their signals are decoded later <cit.>. Note that, most existing works on NOMA carried out a prefixed SIC decoding order according to either the above two aforementioned criteria. Unfortunately, a very dispiriting phenomenon exists in the NOMA schemes based on the aforementioned CSI or QoS based methods. Specifically, the outage probability achieved by these schemes suffers from severe error floors, which means that the outage probability achieved by
a certain user does not approach zero as the SNR goes to infinity. Thus, the transmission reliability cannot be guaranteed, which significantly limits the application of NOMA in many practical scenarios.
It was thought that such outage probability error floors are unavoidable in the implementation of NOMA, and that swapping SIC decoding orders dynamically cannot yield a significant performance gain <cit.>.
Motivated by the error floor issue, a new design of SIC, namely hybrid SIC (HSIC), was initially proposed for cognitive radio inspired uplink NOMA by <cit.>. In the proposed HSIC scheme, the decoding orders of users are dynamically determined according to the relationship between the instantaneous channel conditions and users' target rates. <cit.> show that the proposed HSIC scheme can avoid outage probability error floors, under some constraints on users' target rates. The most important contributions of the series of studies in <cit.> are twofold.
First, <cit.> showed that it is possible to avoid outage error floors, at least under some specific conditions. Second, <cit.> indicated the importance of introducing HSIC to improve transmission robustness of NOMA.
However, as mentioned above, the proposed scheme in <cit.> can only avoid outage probability error floors under some stringent conditions on users' target rates, which may not be met in many realistic scenarios. Thus, it is natural to ask the following two questions.
The first question is whether it is possible to avoid outage probability error floors without any constraints on users' rates. And the second question is whether it is necessary to apply HSIC to avoid outage probability error floors.
This paper aims to answer the two aforementioned questions, and investigate the impact of the combination of HSIC and power adaptation (PA) on improving the transmission robustness in NOMA. Specifically, a cognitive radio inspired uplink NOMA scenario is considered. In the considered scenario, one primary user is allocated one dedicated channel resource block, while there are M secondary users who compete with each other to opportunistically share the primary user's resource block without degrading the outage performance of the primary user. Two new designs of NOMA schemes, namely HSIC with PA (HSIC-PA) and fixed SIC with PA (FSIC-PA) are proposed. Both schemes can avoid outage probability error floors without any constraints on users' target rates. The main contributions of this paper are listed as follows.
* Two novel designs of uplink NOMA schemes are proposed, namely HSIC-PA and FSIC-PA[Note that the
HSIC-PA scheme extends the scheme proposed in our previous work <cit.> where only two users are considered, while the FSIC-PA scheme hasn't been proposed according to our best knowledge.]. In the proposed HSIC-PA scheme, the decoding order of the secondary user can be dynamically adjusted according to the channel conditions. While in the proposed FSIC-PA scheme, the decoding order of the secondary user is fixed at the second stage of SIC. By rigorous derivation, the closed-form expressions for the outage probabilities achieved by the proposed two schemes are obtained.
* Based on the obtained expressions for the outage probabilities, asymptotic analysis in the high SNR regime is further developed to gain more insights into the proposed two schemes. It is shown that both HSIC-PA scheme and FSIC-PA scheme can avoid outage probability error floors without any constraints on users' target rates. The fact that the proposed FSIC-PA scheme can avoid error floors indicates that HSIC is not necessary to avoid error floors. Furthermore, the diversity gains achieved the proposed two schemes are also provided, respectively. Interestingly, the diversity gain achieved by HSIC-PA scheme is M, whereas that achieved by FSIC-PA scheme is only 1.
* Numerical results are presented to verify the accuracy of the developed analytical results and demonstrate the superior performance of the proposed HSIC-PA scheme and FSIC-PA scheme, by comparing with the benchmark scheme termed HSIC-NPA proposed in <cit.>. In terms of outage probability and ergodic rate, it is shown that FSIC-PA scheme performs better than HSIC-NPA scheme in the high SNR regime, but worse in the low SNR regime. Besides, HSIC-PA scheme performs the best among three schemes at all SNRs in terms of outage probability and ergodic rate, which shows the power of the combination of HSIC and PA in the design of uplink NOMA transmissions. In terms of power consumption, both the proposed HSIC-PA and FSIC-PA schemes consume less power than the existing HSIC-NPA scheme, whereas HSIC-PA scheme is more power-consuming than FSIC-PA scheme.
§ SYSTEM MODEL
Consider an uplink NOMA communication scenario with one base station (BS), one primary user U_0 and M
secondary users U_m, 1≤ m≤ M. Note that, in the considered scenario, ensuring the transmission reliability of U_0, which has a target data rate denoted by R_0, is of the highest priority. In conventional OMA based schemes, the primary user is allocated a dedicated resource block, which cannot be accessed by other users. In the NOMA schemes considered in this paper, by contrast, the M secondary users compete with each other to opportunistically access the channel resource block allocated to the primary user. Note that allowing secondary users to share the channel resource block of the primary user must be done in such a way that the QoS of the primary user U_0 is not degraded.
The channel gain of the primary user U_0 is denoted by g, and the channel gains of the secondary users are denoted by h_m, 1≤ m≤ M. In this paper, g and h_m are modeled as the normalized Rayleigh fading gains, which means that g and h_m are independent and identically distributed (i.i.d) circular symmetric complex Gaussian (CSCG) random variables with zero mean and unit variance, i.e., g∼𝒞𝒩(0,1) and h_m ∼𝒞𝒩(0,1). The transmit power of the primary user U_0 is denoted by P_0. The transmit power of the secondary user U_m is denoted by β P_s, where
β∈ [0,1 ] is the adjustable power adaptation coefficient of U_m, and P_s is the maximum power of U_m. Without loss of generality, the background noise power is also assumed to be normalized throughout the paper.
In the remainder of the paper, the M secondary users are ordered according to their channel gains:
| h_1 | ^2< ⋯ < | h_M|^2.
In this paper, two novel NOMA schemes are proposed, namely HSIC-PA scheme and FSIC-PA scheme.
It will be shown that both schemes can avoid outage probability error floors.
For each scheme, in each period of transmission, only the secondary user which can achieve the largest instantaneous achievable rate is allowed to transmit signal by sharing the primary user's resource block.
The proposed two schemes are described in the following two subsections.
§.§ HSIC-PA Scheme
To begin with, define an interference threshold denoted by τ (g) as follows:
τ(g)= max{ 0, P_0 |g|^2/(2^R_0-1) -1}.
Note that τ(g) can be interpreted as the maximum interference, with which U_0 can
still achieve the same outage performance as in OMA where the resource block would be solely occupied by U_0. For more details on τ(g), please refer to <cit.>.
Defining ϵ_0=2^R_0-1 and α_0=ϵ_0/P_0, we have
τ(g)=
|g|^2α_0^-1-1 , |g|^2>α_0,
0 , |g|^2<α_0.
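For concreteness, the interference threshold can be evaluated as in the following minimal Python sketch (linear power units with unit noise power; variable names are illustrative).

import numpy as np

def tau(g2, P0, R0):
    """Interference threshold tau(g) = max{0, P0*|g|^2/(2^R0 - 1) - 1},
    with g2 = |g|^2 and the noise power normalized to one."""
    eps0 = 2.0**R0 - 1.0
    return np.maximum(0.0, P0 * g2 / eps0 - 1.0)

# example: P0 = 10 (linear), R0 = 1 BPCU, a few channel realizations
print(tau(np.array([0.05, 0.5, 2.0]), P0=10.0, R0=1.0))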
For each secondary user U_m, its instantaneous achievable rate is determined by how its channel
gain compares to τ (g), which can be classified into the following two types:
* Type I: the maximum received signal power of U_m at the BS is less than or equal to τ(g), i.e.,
P_s | h_m |^2 ≤τ (g). For this case, putting U_m at the second stage of SIC
can yield a larger data rate compared to putting U_m at the first stage of SIC, and will not prevent the primary user from successfully decoding its signal. Thus, it is favorable to decode U_m's signal at the second stage of SIC, and the achievable rate of U_m is given by
R_I^m=log(1+P_s | h_m |^2),
which is the same as in HSIC-NPA scheme proposed in <cit.>.
* Type II: the maximum received signal power of U_m at the BS is larger than τ (g), i.e., P_s | h_m |^2 > τ (g). For this case, the benchmark scheme termed HSIC-NPA which is proposed in <cit.>
only considers the case where β is set to be 1. Thus, in order to avoid degrading the QoS of U_0, U_m's signal can only be decoded at the first stage of SIC in HSIC-NPA, yielding the following achievable data rate of U_m:
R_II,1^m=log(1+P_s | h_m |^2 /(P_0 | g | ^2+1) ).
Note that the drawback of putting U_m at the first stage of SIC is that, when P_0|g|^2 is large, R_II,1^m might still be small even with a large P_s | h_m |^2.
To this end, the proposed HSIC-PA scheme offers an additional choice where β can be set to be less than 1 so that β P_s|h_m|^2=τ(g), which can provide an opportunity to yield a larger achievable rate. As a result, U_m's signal can be decoded at the second stage of SIC, yielding the following achievable data rate of U_m:
R_II,2^m=log(1+τ(g)).
Thus, in the proposed HSIC-PA scheme, when P_s | h_m |^2 > τ(g), the achievable data rate of U_m is given by:
R_II^m=max{R_II,1^m,R_II,2^m}.
According to the above discussions, the achievable data rate of U_m in HSIC-PA scheme can be concluded as:
R^m=
R_I^m, P_s | h_m |^2 ≤τ(g)
R_II^m, P_s | h_m |^2 >τ(g).
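The rate assignment above can be summarized by the following Python sketch, assuming rates in bits per channel use (base-2 logarithms, consistent with ϵ_0=2^R_0-1); the variable names and example parameters are illustrative.

import numpy as np

def rate_hsic_pa(h2, g2, P0, Ps, R0):
    """Achievable rate (in BPCU) of a secondary user with |h_m|^2 = h2 under
    HSIC-PA, for primary channel gain |g|^2 = g2 and unit noise power."""
    t = max(0.0, P0 * g2 / (2.0**R0 - 1.0) - 1.0)         # interference threshold tau(g)
    if Ps * h2 <= t:                                      # type I: second SIC stage, beta = 1
        return np.log2(1.0 + Ps * h2)
    r_first = np.log2(1.0 + Ps * h2 / (P0 * g2 + 1.0))    # type II, first SIC stage, beta = 1
    r_second = np.log2(1.0 + t)                           # type II, second stage, beta*Ps*h2 = tau(g)
    return max(r_first, r_second)                         # hybrid SIC: take the better option

# example: the served secondary user is the one achieving the largest rate
rng = np.random.default_rng(0)
g2, h2 = rng.exponential(1.0), rng.exponential(1.0, size=4)
rates = [rate_hsic_pa(h, g2, P0=100.0, Ps=100.0, R0=1.0) for h in h2]
served = int(np.argmax(rates))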
§.§ FSIC-PA Scheme
Another scheme termed FSIC-PA is proposed in this subsection.
Note that in HSIC-PA scheme, the secondary user's signal can be decoded either at the first or second stage of SIC. However, in FSIC-PA scheme, its signal can only be decoded at the second stage of SIC.
In FSIC-PA scheme, for each secondary user U_m, its instantaneous achievable rate can also be determined by considering the following two cases as in the previous subsection.
* Type I: the maximum received signal power of U_m at the BS is less than or equal to τ(g), i.e.,
P_s | h_m |^2 ≤τ (g). For this case, the decoding strategy is as same as in the HSIC-NPA and the proposed HSIC-PA scheme, where U_m is decoded at the second stage of SIC. Thus, the achievable data rate of U_m is R̂^m_I =log(1+P_s|h_m|^2), since the interference from U_0 can be removed by SIC.
* Type II: the maximum received signal power of U_m at the BS is larger than τ (g), i.e., P_s | h_m |^2 > τ (g). For this case, in the proposed FSIC-PA scheme, U_m can only be decoded at the second stage of SIC. To carry out this strategy, β is set to be less than 1 so that β P_s|h_m|^2=τ(g). Thus, the achievable data rate of U_m for type II is R̂_II^m=log(1+τ(g)).
By concluding the above two cases, the achievable data rate of U_m in the FSIC-PA scheme can be expressed as:
R̂^m=R̂^m_I, P_s | h_ m |^2 ≤τ(g)
R̂^m_II, P_s | h_ m |^2 >τ(g).
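The corresponding FSIC-PA rate can be sketched in the same way; the only difference from the HSIC-PA sketch above is that a type II user is never decoded at the first stage of SIC.

import numpy as np

def rate_fsic_pa(h2, g2, P0, Ps, R0):
    """Achievable rate (in BPCU) of a secondary user under FSIC-PA: the user
    is always decoded at the second SIC stage, with beta < 1 whenever
    Ps*|h_m|^2 would exceed the interference threshold tau(g)."""
    t = max(0.0, P0 * g2 / (2.0**R0 - 1.0) - 1.0)   # tau(g)
    if Ps * h2 <= t:                                # type I: full power
        return np.log2(1.0 + Ps * h2)
    return np.log2(1.0 + t)                         # type II: beta*Ps*h2 = tau(g)

print(rate_fsic_pa(0.5, 1.0, P0=100.0, Ps=100.0, R0=1.0))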
Note that, the proposed HSIC-PA and FSIC-PA schemes can ensure that the outage performance of the primary user is the same as that in the OMA scheme. Because the use of NOMA is transparent to the primary user, this paper focuses on the performance of the opportunistically served secondary users.
§ PERFORMANCE ANALYSIS ON HSIC-PA SCHEME AND FSIC-PA SCHEME
In this section, the closed-form expressions for the outage probabilities of the served secondary user achieved by the proposed two schemes will be provided. Furthermore, asymptotic analysis for the outage probabilities will be presented, which shows that both HSIC-PA and FSIC-PA schemes can avoid outage probability error floors without any constraints on users' target rates. Besides, rigorous comparisons between the proposed HSIC-PA/FSIC-PA scheme with the existing HSIC-NPA scheme will be carried out.
§.§ Outage probability achieved by HSIC-PA scheme
This subsection provides the exact and asymptotic expressions for the overall outage probability
of the served secondary users achieved by the proposed HSIC-PA scheme. Besides, the diversity gain <cit.> achieved by HSIC-PA is also provided.
Assume that all the secondary users have the same target rate, denoted by R_s. The overall outage probability achieved by the served secondary users in HSIC-PA is given by:
P_out=Pr(max{R^m, 1≤ m≤ M}<R_s).
For the ease of characterizing the outage probability P_out, it is helpful to define the event E_m, which denotes the event that there are m secondary users belonging to type I. Particularly, E_m can be expressed as follows:
E_m={ |h_m |^2< τ (g)/P_s, | h_m+1 | ^2>τ (g)/P_s},
1≤ m≤ M-1,
{|h_1|^2 > τ (g)/P_s}, m=0,
{|h_M|^2 < τ (g)/P_s}, m=M,
where the extreme cases E_0 and E_M denote the events where there is no type I secondary users and all the secondary users belong to type I, respectively.
It is shown that the expression of P_out can be divided into four parts, as highlighted in the following lemma.
For ease of calculation, P_out can be further simplified as:
P_out= P(|h_M|^2>τ(g)/P_s, | h_M |^2< |h_k |^2,R^M_II,2<R_s,|g|^2>α_0)_Q̃_1
+ P(|h_M|^2>τ(g)/P_s, | h_M |^2> |h_k |^2,R^M_II,1<R_s,|g|^2>α_0)_Q̃_2
+ P( E_M,R^M_I<R_s, | g | ^2>α _0)_Q_M + P(
R^M_II<R_s ,|g|^2<α_0) _Q_M+1.
Please refer to Appendix A.
By deriving the expressions of Q̃_1, Q̃_2, Q_M and Q_M+1 as shown in Appendix B, the expression for the overall outage probability of the admitted secondary users in HSIC-PA scheme can be obtained as shown in the following theorem.
The overall outage probability P_out of the admitted secondary users in HSIC-PA can be expressed as follows:
P_out=∑_i=0^M([ M; i ])(-1)^i e^-iα_s(1-e^-(α_s P_0 i+1)α_1)/(α_s P_0 i+1)+(1-e^-α_s)^M e^-α_1,
where ϵ_s=2^R_s-1,
α_s=ϵ_s/P_s,
α_1=(1+ϵ_s)α_0.
Please refer to Appendix B.
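As a numerical cross-check of Theorem 1, the following Python sketch evaluates the closed-form expression and compares it with a direct Monte Carlo simulation of the HSIC-PA rule of Section II (outage is declared when the best secondary SNR falls below ϵ_s, which is equivalent to comparing rates with R_s); the SNR, target rates and M used here are illustrative choices.

import numpy as np
from math import comb, exp

def p_out_theorem1(P0, Ps, R0, Rs, M):
    """Closed-form overall outage probability of Theorem 1."""
    eps0, eps_s = 2.0**R0 - 1.0, 2.0**Rs - 1.0
    a0, a_s = eps0 / P0, eps_s / Ps
    a1 = (1.0 + eps_s) * a0
    total = sum(comb(M, i) * (-1)**i * exp(-i * a_s)
                * (1.0 - exp(-(a_s * P0 * i + 1.0) * a1)) / (a_s * P0 * i + 1.0)
                for i in range(M + 1))
    return total + (1.0 - exp(-a_s))**M * exp(-a1)

def p_out_monte_carlo(P0, Ps, R0, Rs, M, trials=200_000, seed=1):
    """Direct simulation of the HSIC-PA rate rule with unit-mean exponential gains."""
    rng = np.random.default_rng(seed)
    eps0, eps_s = 2.0**R0 - 1.0, 2.0**Rs - 1.0
    g2 = rng.exponential(1.0, size=trials)                 # |g|^2
    h2 = rng.exponential(1.0, size=(trials, M))            # |h_m|^2
    t = np.maximum(0.0, P0 * g2 / eps0 - 1.0)[:, None]     # tau(g)
    type1 = Ps * h2 <= t
    snr = np.where(type1,
                   Ps * h2,                                            # second stage, beta = 1
                   np.maximum(Ps * h2 / (P0 * g2[:, None] + 1.0), t))  # hybrid choice for type II
    return np.mean(snr.max(axis=1) < eps_s)

P0 = Ps = 10.0**(30.0 / 10.0)    # 30 dB transmit SNR (illustrative)
print(p_out_theorem1(P0, Ps, R0=1.0, Rs=1.0, M=3),
      p_out_monte_carlo(P0, Ps, R0=1.0, Rs=1.0, M=3))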
Based on Theorem 1, the asymptotic expression for P_out in the high SNR regime can be obtained as shown in the following corollary.
At high SNR, i.e., P_0=P_s→∞, the overall outage probability of the served secondary users in HSIC-PA can be approximated as follows:
P_out≈ (ϵ_s^M/(P_s^M P_0))∑_i=1^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/(i+1)-(ϵ_s^M/(P_s^M P_0^2))∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/(i+2)+ϵ_s^M/P_s^M.
Please refer to Appendix C.
Further, it is straightforward that the first two terms of (<ref>) can be omitted in the high SNR regime, yielding a more simplified expression for P_out, as highlighted in the following corollary.
At high SNR, i.e., P_0=P_s→∞,
the approximation of P_out shown in (<ref>) can be further approximated as follows:
P_out≈ϵ_s^M/P_s^M.
Remark 1. Note that, the existing HSIC-NPA scheme can only avoid outage probability error floors under the constraint that ϵ_0ϵ_s≤ 1, which means that the feasible target rate for reliable transmission of the secondary users is primarily restricted by that of the primary user.
However, from the results shown in Corollary 2, it can be easily concluded that the outage probability error floor can be avoided by HSIC-PA scheme without any constraints on the users' target rates. Hence, the first question raised in Section I can be answered with the answer that it is possible to avoid outage probability error floors without any constraints on users' target rates.
Remark 2. In wireless communications, diversity gain is usually used as an important performance metric to measure how fast the outage probability decreases as transmit power increases <cit.>. It denotes the asymptotic scaling law of the outage probability to the transmit SNR. Specifically, the diversity gain, say d, achieved by HSIC-PA is defined as:
d=-lim_P_s→∞log P_out/log P_s
Based on the results shown in Corollary 2, it can be straightforwardly obtained that
d=M. Therefore, the diversity gain achieved by the HSIC-PA scheme is M, which is exactly the number of the secondary users. Thus, multi-user diversity gain can be fully utilized by the proposed HSIC-PA scheme, which means increasing the number of secondary users is helpful to reduce the overall outage probability.
From the perspective of diversity gain, the difference between the HSIC-NPA scheme and the HSIC-PA scheme can also be revealed. Recall that the diversity gain achieved by HSIC-NPA is also M when ϵ_0ϵ_s≤1, otherwise a diversity gain of zero is realized.
§.§ Outage probability achieved by FSIC-PA scheme
This subsection provides the exact expression for the overall outage probability
of the served secondary users in the proposed FSIC-PA scheme. Asymptotic analysis for the outage probability is also provided.
For the FSIC-PA scheme, the overall outage probability achieved by the served secondary users is defined as:
P̂_out=Pr(max{R̂^m, 1≤ m≤ M}<R_s).
The following theorem provides the closed-form expression for the outage probability achieved by the FSIC-PA scheme.
The overall outage probability P̂_out of the served secondary users in FSIC-PA can be expressed as follows:
P̂_out=1-e^-α_1+(1-e^-α_s)^Me^-α_1.
Please refer to Appendix D.
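A short numerical sketch of Theorem 2 is given below; evaluating it at increasing SNR illustrates the unit diversity gain discussed later in Remark 4 (roughly one decade of outage probability per 10 dB, independent of M), with illustrative parameter values.

from math import exp

def p_out_theorem2(P0, Ps, R0, Rs, M):
    """Closed-form overall outage probability of Theorem 2 (FSIC-PA)."""
    eps0, eps_s = 2.0**R0 - 1.0, 2.0**Rs - 1.0
    a0, a_s = eps0 / P0, eps_s / Ps
    a1 = (1.0 + eps_s) * a0
    return 1.0 - exp(-a1) + (1.0 - exp(-a_s))**M * exp(-a1)

# illustrative check of the unit diversity gain
for snr_db in (20.0, 30.0, 40.0):
    P = 10.0**(snr_db / 10.0)
    print(snr_db, p_out_theorem2(P, P, R0=1.0, Rs=1.0, M=3))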
Based on Theorem 2, asymptotic expression for P̂_out in the high SNR regime can be obtained as shown in the following corollary.
At high SNR, i.e., P_0=P_s→∞, the overall outage probability of the served secondary users in the FSIC-PA scheme can be approximated as follows:
P̂_out≈ϵ_0(1+ϵ_s)/P_0+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/(P_s^M P_0).
By applying Taylor expansion 1-e^-x≈ x (x→ 0), the expression in (<ref>) can be further approximated as follows:
P̂_out≈ α_1+α_s^M-α_s^Mα_1
= ϵ_0(1+ϵ_s)/P_0+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0,
and the proof is complete.
Remark 3. From Corollary 3, it can be easily observed that the proposed FSIC-PA scheme can also avoid outage probability error floors without any constraints on the users' target rates. At this point, the second question raised in Section I can be answered with the answer that HSIC is not the necessary condition to avoid outage probability error floors.
Remark 4. It is also interesting to investigate the diversity gain achieved by the FSIC-PA scheme, which is defined as:
d̂=-lim_P_s→∞logP̂_out/log P_s.
According to Corollary 3, it can be straightforwardly obtained that d̂=1. Thus,
the multi-user diversity gain cannot be obtained by FSIC-PA scheme.
The above two remarks indicate that even though HSIC is not the necessary strategy to avoid the outage probability error floor, its combination with PA is beneficial for improving transmission robustness.
§.§ Comparisons between HSIC-PA/FSIC-PA scheme with HSIC-NPA scheme
In this section, more detailed comparisons of the proposed two schemes with the benchmark HSIC-NPA scheme are provided. Note that, if the served secondary user belongs to type I, the three schemes, i.e., HSIC-PA, HSIC-NPA and FSIC-PA, achieve the same instantaneous data rate.
However, the three schemes differ from each other if the served secondary user belongs to type II. Thus, it is necessary to compare the three schemes for the case when the served secondary user belongs to type II.
For ease of notation, denote the served secondary user by U_m^*. When U_m^* belongs to type II, denote its achievable rate by R_II, R̂_II and R̅_II for HSIC-PA, FSIC-PA and HSIC-NPA schemes, respectively.
From the description in Section. II, it can be found that R_II≥R̅_II always holds. Thus, it is sufficient to characterize the probability of the event that R_II>R̅_II, for the comparison between HSIC-PA and HSIC-NPA, as presented in the following theorem.
Under the condition that the served secondary user U_m^* is type II, the probability of the event that R_II>R̅_II, termed P^better, is given by:
P^better= P( R̅_II<R_II, U_m^* is type II) /P(U_m^* is type II) ,
where
P( R̅_II<R_II, U_m^* is type II)
= ∑_i=1^M([ M; i ])(-1)^ie^i/P_s[ṽ(α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1) -ũ(α_0,i/P_sα_0 ) ] ,
and
P(U_m^* is type II)=1-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/P_sα_0 ),
where
ũ(x,y)=(1/(y+1))e^-x(y+1),
and ṽ(x,y,z)=(√(π)/(2√(y)))e^z^2/(4y)[1-erf (√(y)(x+z/(2y)))], where erf(·) denotes the Gaussian error function,
which is given by:
erf(x)=2/√(π)∫_0^xe^-t^2dt.
Please refer to Appendix E.
Differently, for the comparison between FSIC-PA and HSIC-NPA, R̂_II can be either larger or less than R̅_II. Thus, it is necessary to characterize both the probabilities of the events that R̂_II>R̅_II and R̂_II<R̅_II. By noting that
P( R̅_II<R̂_II, U_m^* is type II)=P(|h_M|^2>τ(g)/P_s,|h_M|^2< |h_k |^2,|g|^2>α_0),
which is the same as the expression of P( R̅_II< R_II, U_m^* is type II) in Theorem 3, the following theorem can be straightforwardly obtained.
Under the condition that the served secondary user U_m^* is type II, the probability of the event that R̂_II>R̅_II, termed P̂^better, is given by:
P̂^better= P( R̅_II<R̂_II, U_m^* is type II) /P(U_m^* is type II) ,
which is the same as the expression of P^better in Theorem 3. The probability of the event that R̂_II<R̅_II, termed P̂^worse, is given by:
P̂^worse=1-P̂^better.
§ NUMERICAL RESULTS
In this section, simulation results are provided to verify the accuracy of the developed analysis and demonstrate the performance of the proposed HSIC-PA and FSIC-PA schemes. Comparisons with the benchmark HSIC-NPA scheme developed in <cit.> are also provided.
Fig. <ref> verifies the accuracy of the developed analytical results for the outage probability achieved by the proposed HSIC-PA scheme. Note that, the curves for analytical results are based on Theorem 1, and those for Approximations I and II are based on Corollaries 1 and 2, respectively.
As shown in the figure, analytical results perfectly match simulations, which verifies the accuracy of the analytical results provided in Theorem 1.
Besides, Fig. <ref> also shows that both the curves for Approximation I and Approximation II
match the simulation results at high SNR, which verifies the accuracy of the approximations in Corollaries 1 and 2.
Fig. <ref> verifies the accuracy of the developed analytical results for the outage probability achieved by the proposed FSIC-PA scheme. Note that the curves for the analytical results are based on Theorem 2, and the curves for the approximation are based on Corollary 3. From the figure, it can be observed that the analytical curves perfectly match the simulations, which verifies the accuracy of the results provided in Theorem 2.
Besides, it is shown that the curves for the approximate results are accurate at high SNR, which demonstrates the accuracy of the results in Corollary 3.
A significant difference between HSIC-PA and FSIC-PA schemes can be clearly observed from Figs. <ref> and <ref>. Fig. <ref> shows that as M increases, the outage probability achieved by HSIC-PA scheme significantly decreases. In contrast, Fig. <ref> shows that,
for M>1, the outage probabilities for different values of M coincide. Thus, keeping increasing M cannot improve the outage performance of FSIC-PA in the high SNR regime. This observation is consistent with
the results in Section III that the diversity gain of HSIC-PA scheme is M, while that of FSIC-PA scheme is only 1.
Fig. <ref> shows the outage probabilities of the secondary users achieved by HSIC-NPA, HSIC-PA and FSIC-PA versus transmit SNR. As shown in the figure, for HSIC-NPA scheme, when R_0=1 BPCU, there is no outage probability error floor. However, when R_0=4 BPCU, the outage probability error floor exists. This observation is consistent with the conclusions in <cit.>,
i.e., the error floor can only be avoided when ϵ_0ϵ_s<1. By contrast, the proposed HSIC-PA and FSIC-PA schemes can avoid outage probability error floors, since the outage probabilities achieved by both schemes continuously decrease as the SNR increases. Fig. <ref> also shows that the HSIC-PA scheme performs the best among the three schemes for all cases. However, FSIC-PA achieves larger outage probabilities than HSIC-NPA when R_0=1 BPCU, while for the case where R_0=4 BPCU, FSIC-PA performs better at high SNRs.
Fig. <ref> shows the performance of the three schemes in terms of ergodic data rates achieved by the served secondary users.
From the figure, it is shown that HSIC-PA scheme always achieves the largest ergodic rate among the three schemes, which is consistent with the observation in Fig. <ref>.
Another interesting observation from Fig. <ref> is that the performance of FSIC-PA approaches that of HSIC-PA in terms of ergodic data rate at high SNR, while the performance of HSIC-NPA approaches that of HSIC-PA in terms of ergodic rate at low SNRs. This observation indicates that it is preferable to set the
secondary user at the first stage of SIC and use full transmit power at low SNRs, while it is preferable to set the secondary user at the second stage of SIC and use partial transmit power at high SNRs.
Fig. <ref> and Fig. <ref> demonstrate a more detailed comparison on achievable rates of the proposed two schemes with the benchmark HSIC-NPA scheme.
Fig. <ref> shows the probability that the served secondary user belongs to type II. It is shown that as SNR increases, the probabilities converge to a constant.
Fig. <ref> shows that the curves for P̂^better and P^better coincide, which is consistent with results shown in Theorems 3 and 4.
Fig. <ref> also shows that P̂^better and P^better increase with SNR, and approach 1 in the high SNR regime. While
P̂^worse decreases with SNR and approaches 1 in the low SNR regime.
The above observation can help to understand the phenomenon shown in Fig. <ref> and
Fig. <ref>, and leads to the following suggestions for practical systems.
On the one hand, at high SNR, it is preferable to apply power adaptation and put the secondary user at the second stage of SIC. On the other hand, at low SNR, it is better to decode the secondary user at the first stage of SIC.
Fig. <ref> shows the power consumption of HSIC-PA and FSIC-PA schemes. Note that the HSIC-NPA scheme always chooses full power to transmit for the secondary users, i.e., β is always set to be 1, while β can be set to be less than 1 in the proposed HSIC-PA and FSIC-PA schemes. Thus, HSIC-NPA is more energy consuming than the proposed two schemes in this paper. From the figure, it can be observed that at low SNRs, β approaches 1 in HSIC-PA and β approaches zero in FSIC-PA. Besides, as SNR increases, β decreases in HSIC-PA, while that in FSIC-PA increases. More interestingly, the values of β for both schemes approach a constant in the high SNR regime. However, at high SNR, the value of β in HSIC-PA scheme is a bit higher than that in FSIC-PA.
§ CONCLUSIONS
In this paper, two novel cognitive radio inspired uplink NOMA schemes were proposed to improve transmission robustness, namely HSIC-PA scheme and FSIC-PA scheme. Rigorous analysis has been developed to characterize the performance of the proposed schemes. It has been shown that both HSIC-PA and FSIC-PA schemes can avoid outage probability error floors in the high SNR regime without any constraints on users' target rates, which was thought impossible for uplink NOMA transmission. It has also been shown that the diversity gain achieved by the HSIC-PA scheme is M, which is the maximal multi-user diversity gain for the considered scenario. While the diversity gain achieved by the FSIC-PA scheme is 1. Numerical results have been presented to verify the accuracy of the developed analysis and demonstrate the superior performance of the proposed schemes. It has been shown by this paper that the combination of HSIC and PA is important to improve the transmission robustness of uplink NOMA.
§ PROOF FOR LEMMA 1
The outage events can be divided into two groups, one is |g|^2>α_0 and the other is |g|^2<α_0.
Thus, the outage probability P_out shown in (<ref>) can be written as:
P_out= ∑_m=1^M-1 P ( E_m,max{R^k_I, 1 ≤ k≤ m}<R_s,
max{ R^k_II,m < k ≤ M } <R_s, | g | ^2>α _0 )
+P( E_M,max{ R^k_I,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P( E_0,max{ R^k_II,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P(max{ R^k_II,1≤ k ≤ M } <R_s ,|g|^2<α_0).
Recall that the secondary users are ordered according to their channel gains, P_out can be further written as:
P_out= ∑_m=1^M-1 P ( E_m,R^m_I< R_s,R^M_II < R_s, | g | ^2>α _0 )_Q_m
+P( E_M,R^M_I<R_s, | g | ^2>α _0)_Q_M +P( E_0,R^M_II <R_s, | g | ^2>α _0) _Q_0
+ P(
R^M_II<R_s ,|g|^2<α_0) _Q_M+1.
Note that when |g|^2>α _0, R^M_II can be determined according to the value of |h_M |^2 as follows:
R^M_II
=
R^M_II,2 , |h_M |^2< |h |^2
R^M_II,1 , |h_M |^2> |h |^2,
where |h |^2=( | g |^2α_0^-1-1 )(P_0 | g |^2+1)/P_s. Thus, Q_m can be rewritten as follows:
Q_m= ∑_m=1^M-1P (E_m,R^m_I<R_s,R^M_II,2<R_s, | h_M |^2< |h |^2 ,|g|^2>α_0 ) _Q_m,1
+P ( E_m, R^m_I<R_s,R^M_II,1<R_s,
| h_M |^2> |h |^2 ,|g|^2>α_0 ) _Q_m,2.
By noting that regardless of the value of |h_M |^2, R^m_I is always smaller than R^M_II,1 and R^M_II,2,
Q_m can be further simplified as:
Q_m= ∑_m=1^M-1P (E_m,R^M_II,2<R_s, | h_M |^2< |h |^2 ,|g|^2>α_0 ) _Q_m,1
+P ( E_m,R^M_II,1<R_s,
| h_M |^2> |h |^2 ,|g|^2>α_0 ) _Q_m,2.
By applying the results shown in (<ref>), Q_0 can be rewritten as follows:
Q_0= P( E_0, |h_M |^2< |h |^2, R^M_II,2 <R_s, | g | ^2>α _0)_Q_0,1
+ P( E_0, |h_M |^2> |h |^2, R^M_II,1 <R_s, | g | ^2>α _0)_Q_0,2.
Note that, Q_m,1 and Q_0,1 can be combined, so as Q_m,2 and Q_0,2, thus, the sum of Q_m and Q_0 can be simplified as follows:
Q_m+Q_0= Q_m,1+Q_0,1_Q̃_1 +Q_m,2+Q_0,2_Q̃_2
= P(|h_M|^2>τ(g)/P_s, | h_M |^2< |h |^2,R^M_II,2<R_s,|g|^2>α_0)_Q̃_1
+ P(|h_M|^2>τ(g)/P_s, | h_M |^2> |h |^2,R^M_II,1<R_s,|g|^2>α_0)_Q̃_2.
Therefore, P_out=Q_m+Q_0+Q_M+Q_M+1=Q̃_1+Q̃_2+Q_M+Q_M+1 and the proof is complete.
§ PROOF FOR THEOREM 1
According to Lemma 1, the evaluation of P_out can be divided into four parts: Q̃_1,
Q̃_2,
Q_M
and Q_M+1.
§.§ Evaluation of Q̃_1
Note that Q̃_1 can be expressed as follows:
Q̃_1= P (|h_M|^2>|g|^2α_0^-1-1/P_s,|h_M|^2<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s,log(1+τ(g))<R_s,|g|^2>α_0)
= α_0<|g|^2<α_1ε{P(|g|^2α_0^-1-1/P_s<|h_M|^2<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s)_S̃_1},
where ε{ * } denotes the mathematical expectation.
Note that the users are ordered according to their channel gains, and hence the probability density function (pdf) of |h_M|^2 can be expressed as:
f_|h_M|^2(x)= M!/(M-1)!(1-e^-x)^M-1e^-x
= M(1-e^-x)^M-1e^-x.
By applying (<ref>), S̃_1 can be evaluated as follows:
S̃_1= ∫_|g|^2α_0^-1-1/P_s^(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_sf_|h_M|^2(x)dx
= ∑_i=0^M([ M; i ])(-1)^i( e^-i(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s-e^-i/P_s(|g|^2α_0^-1-1)).
Further, by noting that |g|^2 is exponentially distributed, Q̃_1 can be calculated as:
Q̃_1= ∫_α_0^α_1S̃_1e^-|g|^2d|g|^2
= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^α_1( e^-i(xα_0^-1-1)(P_0x+1)/P_s-e^-i/P_s(xα_0^-1-1))e^-xdx.
For notational simplicity,
define u(α_0,α_1,c) as:
u(α_0,α_1,c)△=∫_α_0^α_1e^-(c+1)xdx= 1/c+1[e^-α_0(c+1)-e^-α_1(c+1)],
and v(α_1,α_0,A,B) as:
v(α_1,α_0,A,B)△= ∫_α_0^α_1e^-(Ax^2+Bx)dx
= √(π)e^B^2/4A/2√(A)[erf(√(A)(α_1+B/2A))-erf(√(A)(α_0+B/2A))].
By taking (<ref>) and (<ref>) into (<ref>), Q̃_1 can be expressed as:
Q̃_1=∑_i=0^M([ M; i ])(-1)^ie^i/P_s[ v(α_1,α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)- u(α_0,α_1,i/P_sα_0)].
§.§ Evaluation of Q̃_2
Note that Q_2 can be expressed as follows:
Q̃_2= P (|h_M|^2>|g|^2α_0^-1-1/P_s,|h_M|^2>(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s,.
.log(1+P_s|h_M|^2/P_0|g|^2+1)<R_s,|g|^2>α_0)
(a)= α_0<|g|^2<α_1ε{P((|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s<|h_M|^2<α_s(P_0|g|^2+1) )_S̃_2},
where step (a) is obtained by noting the hidden condition (|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s<α_s(P_0|g|^2+1), which yields |g|^2<α_1.
By using the pdf of |h_M|^2 shown in (<ref>), S̃_2 can be evaluated as follows:
S̃_2= ∫_(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s^α_s(P_0|g|^2+1)M(1-e^-x)^M-1e^-xdx
= ∑_i=0^M([ M; i ])(-1)^i( e^-iα_s(P_0|g|^2+1)-e^-i/P_s(|g|^2α_0^-1-1)(P_0|g|^2+1)).
Further, by averaging with respect to |g|^2, Q̃_2 can be expressed as:
Q̃_2= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^α_1( e^-iα_s(P_0x+1)-e^-i/P_s(xα_0^-1-1)(P_0x+1))e^-xdx.
By taking (<ref>) and (<ref>) into (<ref>), Q̃_2 can be further expressed as follows:
Q̃_2=∑_i=0^M([ M; i ])(-1)^i [e^-iα_su (α_0,α_1,iα_sP_0)- e^i/P_sv (α_1,α_0,iP_0/P_sα_0,i/P_s(α_0^-1- P_0)+1)].
§.§ Evaluation of Q_M
Note that Q_M can be rewritten as follows:
Q_M= P(|h_M|^2<|g|^2α_0^-1-1/P_s,log(1+P_s|h_M|^2)<R_s,|g|^2>α_0)
= P(|h_M|^2<min{|g|^2α_0^-1-1/P_s,α_s},|g|^2>α_0)
= α_0<|g|^2<α_1ε{ P(|h_M|^2<α_0^-1|g|^2-1/P_s) _S_M,1} +|g|^2>α_1ε{P(|h_M|^2<α_s)_ S_M,2},
where the last step is obtained by dividing the events into two cases, i.e., |g|^2<α_1 and |g|^2>α_1.
By using the pdf of |h_M|^2 shown in (<ref>), the expression for S_M,1 and S_M,2 can be obtained as:
S_M,1=(1-e^-α_0^-1|g|^2-1/P_s)^M and S_M,2=(1-e^-α_s)^M.
By averaging with respect to |g|^2, Q_M can be further evaluated as follows:
Q_M= ∫_α_0^α_1(1-e^-α_0^-1x-1/P_s)^Me^-xdx+∫_α_1^∞(1-e^-α_s)^Me^-xdx
= ∫_α_0^α_1∑_i=0^M([ M; i ])(-1)^ie^i/P_se^-α_0^-1/P_sixe^-xdx+(1-e^-α_s)^Me^-α_1
= ∑_i=0^M([ M; i ])(-1)^ie^i/P_su (α_0,α_1,i/α_0P_s)+(1-e^-α_s)^Me^-α_1,
where the last step is obtained by applying the results shown in (<ref>).
§.§ Evaluation of Q_M+1
Note that Q_M+1 can be expressed as follows:
Q_M+1=P(R^M_II<R_s ,|g|^2<α_0).
Note that, when |g|^2<α_0, τ(g)=0, yielding R^M_II=log(1+ P_s|h_M|^2/P_0|g|^2+1).
Thus, Q_M+1 can be further expressed as:
Q_M+1 = P(|g|^2<α_0,log(1+ P_s|h_M|^2/P_0|g|^2+1)<R_s)
= |g|^2<α_0ε{ P( log(1+ P_s|h_M|^2/P_0|g|^2+1)<R_s)_S_M+1}.
By using the pdf |h_M|^2 shown in (<ref>), S_M+1 can be evaluated as follows:
S_M+1= ∫_0^α_s(P_0|g|^2+1)f_|h_M|^2(x)dx
= (1-e^-α_s(P_0|g|^2+1))^M.
Further, by averaging with respect to |g|^2, Q_M+1 can be expressed as:
Q_M+1 = ∫_0^α_0(1-e^-α_s(P_0x+1))^Me^-xdx
= ∑_i=0^M([ M; i ])(-1)^ie^-α_si1-e^-(α_sP_0i+1)α_0/α_sP_0i+1,
where the last step is obtained by applying the binomial expansion.
Therefore, the expressions for Q̃_1,
Q̃_2,
Q_M,
and Q_M+1 are obtained, and the proof is complete.
§ PROOF FOR COROLLARY 1
In order to facilitate a high SNR approximation, P_out in (<ref>) can be
rewritten as follows:
P_out=∑_i=0^M([ M; i ])(-1)^i∫_0^α_1e^-xe^-iα_s(P_0x+1)dx+(1-e^-α_s)^Me^-α_1.
By using the fact that
∑_i=0^M([ M; i ])(-1)^iA^i=(1-A)^M,
P_out can be further approximated as follows:
P_out= ∫_0^α_1e^-x(1-e^-α_s(P_0x+1))^Mdx+(1-e^-α_s)^Me^-α_1
≈ ∫_0^α_1(1-x)α_s^M(P_0x+1)^Mdx+α_s^M(1-α_1),
where the last step is obtained by applying Taylor seizes 1-e^-x≈ x when x→ 0.
A more simplified form of P_out can be obtained by applying the binomial expansion:
P_out≈ α_s^M∫_0^α_1(1-x)∑_i=0^M([ M; i ])P_0^ix^idx+α_s^M(1-α_1)
= α_s^M∫_0^α_1∑_i=0^M([ M; i ])P_0^i(x^i-x^i+1)dx+α_s^M(1-α_1).
By taking integrations in (<ref>), P_out can be further calculated as follows:
P_out≈ α_s^M∑_i=0^M([ M; i ])P_0^i(α_1^i+1/i+1-α_1^i+2/i+2)+α_s^M-α_s^Mα_1
(a)= ϵ_s^M/P_s^MP_0∑_i=0^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/i+1-ϵ_s^M/P_s^MP_0^2∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/i+2
+ϵ_s^M/P_s^M-ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0
(b)= ϵ_s^M/P_s^MP_0∑_i=1^M([ M; i ])ϵ_0^i+1(1+ϵ_s)^i+1/i+1-ϵ_s^M/P_s^MP_0^2∑_i=0^M([ M; i ])ϵ_0^i+2(1+ϵ_s)^i+2/i+2+ϵ_s^M/P_s^M,
where step (b) is obtained by the fact that the first term shown in step (a) is ϵ_s^Mϵ_0(1+ϵ_s)/P_s^MP_0 when i=0, which is exactly the same as the the last term in step (a), and thus can be eliminated .
§ PROOF FOR THEOREM 2
Dividing the outage events into two cases, one being |g|^2>α_0 and the other being |g|^2<α_0, the outage probability P̂_out shown in (<ref>) can be rewritten as:
P̂_out= ∑_m=1^M-1 P ( E_m,max{R̂^k_I, 1 ≤ k≤ m}<R_s,
max{R̂^k_II,m < k ≤ M } <R_s, | g | ^2>α _0 )
+P( E_M,max{R̂^k_I,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P( E_0,max{R̂^k_II,1≤ k ≤ M } <R_s, | g | ^2>α _0)
+P(max{R̂^k_II,1≤ k ≤ M } <R_s ,|g|^2<α_0).
Recall that the secondary users are ordered according to their channel gains,
P̂^out can be further written as:
P̂_out= ∑_m=1^M-1P(E_m,R̂_I^m<R_s,R̂_II^M<R_s,|g|^2>α_0)_F_m
+P(E_M,R̂_I^M<R_s,|g|^2>α_0 )_F_M
+ P(E_0,R̂_II^M<R_s,|g|^2>α_0 )_F_0
+P(R̂_II^M<R_s,|g|^2<α_0 )_F_M+1.
By noting that R̂^m_I<R̂^M_II for the first term, F_m and F_0 can be combined as follows:
F_m+F_0=P(|h_M|^2>τ(g)/P_s,R̂^M_II<R_s,
|g|^2>α_0)_F̃.
Therefore, P̂_out can be further simplified as:
P̂_out= P(|h_M|^2<τ(g)/P_s,R̂^M_I<R_s
,|g|^2>α_0)_F_M
+P(R̂_II^M<R_s,|g|^2<α_0)_F_M+1
+P(|h_M|^2>τ(g)/P_s,
R̂^M_II<R_s,|g|^2>α_0)_F̃.
Thus the remaining task is to derive the expressions for F_M, F_M+1 and F̃, respectively.
§.§ Evaluation of F_M
Note that F_M can be expressed as follows:
F_M= P(|h_M|^2<|g|^2α_0^-1-1/P_s,
log(1+P_s|h_M|^2)<R_s,|g|^2>α_0 ),
which is the same as the expression for Q_M in (<ref>). Thus, F_M can be expressed as:
F_M=∑_i=0^M([ M; i ])(-1)^ie^i/P_su(α_0,α_1,
i/α_0P_s)+(1-e^-α_s)^Me^-α_1.
§.§ Evaluation of F_M+1
Note that F_M+1 can be expressed as follows:
F_M+1= P(log(1+τ(g))<R_s,|g|^2<α_0)
(a)= P(|g|^2<α_0)
= 1-e^-α_0,
where step (a) is obtained by the fact that τ(g)=0 when |g|^2<α_0.
§.§ Evaluation of F̃
Note that F̃ can be expressed as follows:
F̃= P(|h_M|^2>τ(g)/P_s,
log(1+τ(g))<R_s,|g|^2>α_0)
= α_0<|g|^2<α_1ε{P(|h_M|^2>|g|^2α_0^-1-1/P_s)_T̃}.
By using the pdf of |h_M|^2 shown in (<ref>), T̃ can be evaluated as follows:
T̃= ∫_|g|^2α_0^-1-1/P_s^∞
M(1-e^-x)^M-1e^-xdx
= 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0|g|^2+i/P_s.
By taking expectation with respect to |g|^2, F̃ can be further evaluated as follows:
F̃= ∫_α_0^α_1( 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0x+i/P_s) e^-xdx
= e^-α_0-e^-α_1-∑_i=0^M([ M; i ])(-1)^ie^i/P_su(α_0,α_1,i/P_sα_0).
Until now, the expressions for F_M, F_M+1 and F̃ are obtained, and the proof is complete.
§ PROOF FOR THEOREM 3
Note that the numerator in (<ref>) can be rewritten as:
P( R̅_II<R_II, U_m^* is type II)_Q_n
= |g|^2>α_0ε{P(|g|^2α_0^-1-1/P_s<|h_M|^2
<(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s)_S_n}.
By using the pdf of |h_M|^2 shown in (<ref>), S_n can be evaluated as follows:
S_n= ∫_|g|^2α_0^-1-1/P_s^(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_sM(1-e^-x)^M-1e^-xdx
= ∑_i=0^M([ M; i ])(-1)^i( e^-i(|g|^2α_0^-1-1)(P_0|g|^2+1)/P_s-e^-i/P_s(|g|^2α_0^-1-1)).
Further, by averaging with respect to |g|^2, Q_n can be expressed as follows:
Q_n= ∑_i=0^M([ M; i ])(-1)^i∫_α_0^∞( e^-i(xα_0^-1-1)(P_0x+1)/P_s-e^-i/P_s(xα_0^-1-1))e^-xdx.
By taking (<ref>) and (<ref>) into (<ref>), Q_n can be further expressed as follows:
Q_n= ∑_i=0^M([ M; i ])(-1)^ie^i/P_s[ v(∞,α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)-u(α_0,∞,i/P_sα_0)]
= ∑_i=1^M([ M; i ])(-1)^ie^i/P_s[ ṽ(α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)
-ũ(α_0,i/P_sα_0)],
where the last step is obtained by noting that the term i=0 can be omitted since ṽ(α_0,iP_0/P_sα_0,i/P_s(α_0^-1-P_0)+1)
-ũ(α_0,i/P_sα_0)
=0 for i=0.
The denominator in (<ref>) can be calculated as follows:
P(U_m^* is type II)_Q_d
= P(|h_M|^2>τ(g)/P_s,|g|^2>α_0)_Q_d1
+P( |g|^2<α_0)_Q_d2
= |g|^2>α_0ε{P(
|h_M|^2>|g|^2α_0^-1-1/P_s)_S_d1}
+Q_d2.
Note that S_d1 is the same as the expression for T̃ in (<ref>). Thus, S_d1 can be obtained by using the
results in (<ref>) as follows:
S_d1= 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0|g|^2+i/P_s.
By averaging with respect to |g|^2, Q_d1 can be further evaluated as follows:
Q_d1= ∫_α_0^∞( 1-∑_i=0^M([ M; i ])(-1)^ie^-i/P_sα_0x+i/P_s) e^-xdx
= e^-α_0-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/P_sα_0).
Q_d2 can be expressed as follows:
Q_d2=∫_0^α_0e^-xdx=1-e^-α_0.
Thus, Q_d is the sum of Q_d1 and Q_d2, which can be expressed as follows:
Q_d= 1-∑_i=0^M([ M; i ])(-1)^ie^i/P_sũ(α_0,i/P_sα_0).
Therefore, the expressions for P( R̅_2<R_2, U_m^* is type II) and P(U_m^* is type II) are obtained, and the proof is complete.
|
http://arxiv.org/abs/2307.15758v2 | 20230710132241 | Search for ultralight dark matter with a frequency adjustable diamagnetic levitated sensor | [
"Rui Li",
"Shaochun Lin",
"Liang Zhang",
"Changkui Duan",
"Pu Huang",
"Jiangfeng Du"
] | astro-ph.CO | [
"astro-ph.CO",
"physics.ins-det",
"quant-ph"
] |
CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
[email protected]
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing, 210093, China
Among several dark matter candidates, bosonic ultra-light (sub-meV) dark matter is well motivated because it could couple to the Standard Model (SM) and induce new forces. Previous MICROSCOPE and Eöt-Wash torsion experiments have achieved high accuracy in the sub-1 Hz region, but at higher frequencies there is still a lack of relevant experimental research. We propose an experimental scheme based on a diamagnetically levitated micromechanical oscillator, one of the most sensitive acceleration sensors below the kilohertz scale. In order to extend the measurement range, we use a sensor whose resonance frequency ω_0 can be adjusted from 0.1 Hz to 100 Hz. The limits on the coupling constant g_ B-L are improved by more than 10 times compared to previous reports, and it may be possible to achieve higher sensitivity by using an array of sensors in the future.
Search for ultralight dark matter with a frequency adjustable diamagnetic levitated sensor
Jiangfeng Du
August 12, 2023
==========================================================================================
§ INTRODUCTION
There are many astronomical <cit.> and cosmological <cit.> observations that support the existence of dark matter particles <cit.>, but the specific parameters of dark matter, especially the mass, are still highly uncertain <cit.>. Many direct detection studies have assumed that dark matter is composed of supersymmetric fermions, but so far there has not been enough evidence. The focus of research is now gradually shifting to ultralight bosons, with a mass range of approximately 10^-22 eV ≲ m_ϕ≲ 0.1 eV <cit.>. Ultralight bosons with a mass less than 1 eV behave like a classical field due to their high particle number density. Due to the virial theorem, if the DM has virialized to the Galaxy, it will be moving with a typical speed v_DM≈ 10^5 m/s <cit.>. This corresponds to the Compton frequency ω_s=m_ϕ/ ħ and the de Broglie wavelength λ_DM=hc^2/(m_ϕ v_DM).
According to previous reports, experiments such as ADMX <cit.> can search for the Peccei-Quinn axion in the mass range 10^-6 eV ≲ m_ϕ≲ 10^-3 eV <cit.>. Searches for pseudoscalar axion-like ULMBs with masses between 10^-23 eV and 10^-18 eV <cit.>, and for scalar dilaton ULMBs with masses between 10^-21 eV and 10^-5 eV using ultrastable clocks <cit.> and gravitational-wave detectors <cit.>,
have also recently been reported.
When the DM is a vector field that couples to a conserved current, corresponding to baryon number minus lepton number (the B-L charge) in the SM, the Lagrangian can be written as <cit.>:
ℒ=-1/4 F_μν F^μν -1/2 m_ϕ^2 A^2 +i g_ B-L A_μn̄γ^μ n
where n is the neutron field, g_ B-L is the coupling strength, and the DM field couples directly to the number of neutrons.
Using the Lorentz gauge and the plane wave approximation, the dark electric field can be written as: E≈√(ρ_DM)sin (ω_s t-k⃗·x⃗), where ρ_DM≈ 0.3GeV/cm^3 <cit.> is the local DM density.
In ground-based experiments, assuming that a magnetic-gravity mechanical oscillator is used to measure the ultralight DM field along the Earth's axis, we can parameterize the force exerted on the sensor as:
F_sig(t)=α g_ B-L N_g F_0 sin(ω_s t)
Because the de Broglie wavelength of the DM is much larger than the size of the sensor, we drop the x dependence. In this equation, α=sinθ_N denotes the component along the direction of gravity, where θ_N is the latitude of the location of the ground experiment. In order to avoid the effects of the Earth's rotation during long measurements and to maximize the force, the experiment is best carried out at high latitudes, e.g. in the Arctic, where α=1. F_0=√(ρ_DM)≈ 10^-15 N, and N_g is the total number of neutrons in the sensor, which can be approximated as N_g≈1/2 m/m_neu for a sensor with mass m, where m_neu is the neutron mass. The force F_sig(t) is proportional to the mass of the sensor,
so the main criterion for the sensor is its acceleration sensitivity.
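To give a sense of scale, the following Python sketch evaluates the signal amplitude α g_B-L N_g F_0 and the corresponding Compton frequency; the PMMA density and the value of g_B-L used here are assumptions for illustration and are not taken from the text.

import numpy as np

hbar = 1.054_571_8e-34      # J s
eV = 1.602_176_6e-19        # J
m_neu = 1.674_927_5e-27     # neutron mass, kg
F0 = 1.0e-15                # N, the force scale sqrt(rho_DM) quoted in the text

def signal_force(mass_kg, g_bl, alpha=1.0):
    """Amplitude of F_sig(t) = alpha * g_BL * N_g * F0 * sin(w_s t)."""
    N_g = 0.5 * mass_kg / m_neu          # N_g ~ m / (2 m_neu)
    return alpha * g_bl * N_g * F0

def compton_frequency(m_phi_eV):
    """Compton frequency w_s = m_phi / hbar, converted to an ordinary frequency in Hz."""
    return m_phi_eV * eV / hbar / (2.0 * np.pi)

# illustrative numbers: 0.5 mm radius PMMA sphere (assumed density 1190 kg/m^3)
r1, rho_pmma = 0.5e-3, 1190.0
mass = 4.0 / 3.0 * np.pi * r1**3 * rho_pmma
print(signal_force(mass, g_bl=1e-24))      # force amplitude in newtons
print(compton_frequency(1e-13))            # roughly tens of Hz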
Here we propose an experimental scheme to detect DM using a frequency-adjustable diamagnetic levitated sensor. The resonance frequency can be changed by adjusting the magnetic field gradient at a paramagnetic part of the oscillator, covering the range from 0.1 Hz to 100 Hz.
This gives high detection sensitivity for DM with mass in the range from 10^-16 eV to 10^-13 eV.
Compared to previously reported experiments, our scheme can achieve more than one order of magnitude improvement in the constraint on the coupling strength g_ B-L, based on theoretical calculation.
§ THEORETICAL CALCULATION
Under the drive of the ultralight DM field, and taking thermal noise and measurement noise into account,
the equation of motion of a mechanical oscillator with resonance frequency ω_0 can be written as:
mẍ+ mγẋ + mω_0^2 x
=F_sig(t)+F_th+F_mea
where γ is the damping coefficient;
F_sig(t) is the DM drive from equation (<ref>); F_th is the environmental thermal noise; and F_mea represents the measurement noise, which is mainly composed of the detector imprecision noise and the backaction of radiation pressure fluctuations.
The total acceleration noise of the system is given by:
S_aa^tot= S_aa^th+ (S_xx^imp/|χ_ m(ω,ω_0)|^2+ S_ff^ba/m^2 )
where χ_ m(ω,ω_0) is the mechanical susceptibility given by |χ_ m(ω,ω_0)|^2=1/[(ω^2-ω_0^2)^2+γ^2 ω^2],
and S_aa^th =4 γ k_B T/m is the thermal noise where k_B is Boltzmann constant and T indicates environment temperature.
The detector imprecision noise S_xx^imp and the backaction noise S_ff^ba
make up the total measurement noise
S_aa^mea=S_xx^imp /|χ_ m(ω,ω_0)|^2 +S_ff^ba / m^2,
with S_xx^imp· S_ff^ba=(1/η) ħ^2.
Here η⩽ 1 is the measurement efficiency, and η= 1 corresponds to the standard quantum limit (SQL).
The total measurement noise S_aa^mea for the sensor operating at the SQL condition at resonance frequency ω_0 can be given by the simple formula <cit.>:
S_aa^mea,SQL=2ħ√((ω_0^2-ω^2)^2+γ^2 ω^2)/m
Achieving the SQL over a frequency range requires optimizing the measurement parameters
frequency by frequency as the range is scanned.
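The noise model above can be evaluated directly, as in the following Python sketch of S_aa^tot=S_aa^th+S_aa^mea,SQL; the oscillator mass is computed from an assumed PMMA density of 1190 kg/m^3, which is not specified in the text, and possible 2π factors in γ are ignored for this rough estimate.

import numpy as np

k_B = 1.380_649e-23          # J/K
hbar = 1.054_571_8e-34       # J s

def s_aa_total_sql(omega, omega0, gamma, mass, T):
    """Total acceleration noise PSD S_aa^tot = S_aa^th + S_aa^mea,SQL,
    with the measurement term taken at the standard quantum limit."""
    s_th = 4.0 * gamma * k_B * T / mass
    s_sql = 2.0 * hbar * np.sqrt((omega0**2 - omega**2)**2 + gamma**2 * omega**2) / mass
    return s_th + s_sql

# illustrative parameters: T = 30 mK, gamma = 1e-4, 0.5 mm PMMA sphere
mass = 4.0 / 3.0 * np.pi * (0.5e-3)**3 * 1190.0
omega0 = 2.0 * np.pi * 10.0                     # resonance at 10 Hz
omega = 2.0 * np.pi * np.linspace(9.0, 11.0, 201)
s_tot = s_aa_total_sql(omega, omega0, gamma=1e-4, mass=mass, T=30e-3)
print(np.sqrt(s_tot.min()))                     # best sensitivity in (m/s^2)/sqrt(Hz)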
We use the total acceleration noise S_aa^tot as the acceleration measurement sensitivity of the system. From equations (<ref>)-(<ref>), considering the optimal case of α=1, we obtain the relationship between the coupling strength g_ B-L and the acceleration measurement sensitivity S_aa^tot:
g_ B-L= 2 m_neu/F_0√(S_aa^tot/T_tot)
where T_tot denotes the effective total integration time. The DM signal is essentially a coherent force on timescales up to the coherence time T_coh≈ 10^6/ ω_s.
When the DM frequency ω_s is low enough that T_coh > T_mea,
all of the measurement time T_mea contributes to the coherent DM signal. As the DM frequency ω_s increases, once T_coh < T_mea, only a proportion T_coh/T_mea of the measurement time contributes to the coherent signal. So we define the effective integration time:
T_tot= T_mea if T_coh> T_mea, and T_tot=√(T_mea· T_coh) if T_coh< T_mea.
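Combining the noise model with the effective integration time gives a quick projection of the reachable coupling, as sketched below; the measurement time per frequency point and the sensor mass are assumptions used only for illustration.

import numpy as np

k_B, hbar = 1.380_649e-23, 1.054_571_8e-34
m_neu, F0 = 1.674_927_5e-27, 1.0e-15

def g_bl_limit(omega_s, omega0, gamma, mass, T, T_mea):
    """Projected reachable g_B-L at DM angular frequency omega_s for a sensor
    tuned to omega0, taking alpha = 1 and the effective time T_tot defined above."""
    s_th = 4.0 * gamma * k_B * T / mass
    s_sql = 2.0 * hbar * np.sqrt((omega0**2 - omega_s**2)**2
                                 + gamma**2 * omega_s**2) / mass
    s_tot = s_th + s_sql
    T_coh = 1.0e6 / omega_s                               # DM coherence time
    T_tot = np.where(T_coh > T_mea, T_mea, np.sqrt(T_mea * T_coh))
    return 2.0 * m_neu / F0 * np.sqrt(s_tot / T_tot)

mass = 4.0 / 3.0 * np.pi * (0.5e-3)**3 * 1190.0           # assumed PMMA density
omega = 2.0 * np.pi * 10.0                                # sensor tuned to 10 Hz
print(g_bl_limit(omega, omega, gamma=1e-4, mass=mass, T=30e-3, T_mea=1e6))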
§ EXPERIMENTAL SCHEME
Levitated micromechanical and nanomechanical oscillators have been demonstrated to be ultrasensitive acceleration sensors due to their ultralow dissipation <cit.>.
Based on our calculations, we propose the scheme shown in Fig. <ref>(a). A diamagnetic sphere made of PMMA with radius r_1=0.5 mm (corresponding volume V_1), density ρ_1 and magnetic susceptibility χ_1 is levitated in the central region of the upper magnet assembly (named Magnet-A), and the oscillator signal is detected through the fibres on both sides.
A paramagnetic microsphere made of Tb_2 O_3 with
radius r_2=11 μm (corresponding volume V_2), density ρ_2 and magnetic susceptibility χ_2 is connected to the upper diamagnetic sphere through a thin glass rod. Another combined magnet assembly (named Magnet-B) is placed under the paramagnetic microsphere. The whole magnet assembly is placed in a multi-stage suspension system, and uses active vibration isolation devices to further improve the isolation effect <cit.>.
Magnet-A is constructed in a similar way to that in our previous articles <cit.>, and requires high-remanence magnetic materials with two different magnetization directions to generate a sufficient magnetic force. Red indicates magnetization pointing toward the centre, and blue indicates magnetization pointing away from the centre. In addition, the upper layer of Magnet-B is built from a magnetic material with lower remanence, and the lower layer from a high-remanence material. The combination of the two different remanence materials allows Magnet-B to have a high magnetic field gradient while reducing the magnetic field strength. The magnetization directions are again indicated by the red and blue colours.
The magnetic field energy of the upper diamagnetic sphere can be written as:
U_1=-∫_V_1χ_ 1/2μ_0 B_A ^2 dV
where B_A represents the magnetic field created by
Magnet-A.
Assuming that Magnet-B is initially far away, the equilibrium position z_0 of the oscillator along the z direction in the magnetic-gravity trap satisfies:
∂ U_1/∂ z |_z=z_0=(ρ_1 V_1+ρ_2 V_2 )g.
The resonance frequency in the z direction is:
ω_0=√(1/ρ_1 V_1+ρ_2 V_2·∂^2 U_1/∂ z^2)|_z=z_0
We then raise Magnet-B, so that the magnetic field B_ B produced by Magnet-B at the lower paramagnetic microsphere becomes larger. Because V_2≪ V_1, the magnetic field energy of the paramagnetic microsphere can be simplified to U_2=-χ_ 2 B_B^2 V_2/2μ_0.
The resonance frequency of the oscillator along the z direction then becomes:
ω_0^'=√(ω_0^2-χ_ 2V_2/μ_0(ρ_1 V_1+ρ_2V_2)( ∂ B_ B/∂ z)^2)|_z=z_0
where χ_ 2 > 0 and ω_0^' < ω_0.
We ignore the second-order gradient term because (∂ B_B/∂ z)^2≫ B_B (∂^2 B_ B / ∂ z^2). Moreover, since B_B and V_2 are very small, the magnetic force from Magnet-B on the paramagnetic microsphere is much weaker than the total gravity of the oscillator, so the equilibrium position z_0 is essentially unchanged.
We use the finite element method to simulate how the magnetic field gradient ∂ B_B/∂ z varies with the distance d between the paramagnetic microsphere and Magnet-B over the range 50 μm to 100 μm, and then use equation (<ref>) to calculate the corresponding resonance frequency ω_0^', as shown in Fig.<ref>(b). It is theoretically possible to bring the resonance frequency ω_0^' close to zero by reducing the distance d, but in order to improve the stability of the oscillator and relax the requirements on the isolation system, we restrict the resonance frequency ω_0^' to the range 0.1 Hz to 100 Hz.
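To make the frequency-tuning relation of equation (<ref>) concrete, the following sketch evaluates ω_0^' for a few assumed values of the gradient ∂ B_B/∂ z; the material parameters and gradient values are illustrative placeholders, since the actual gradient comes from the finite element simulation.

```python
import numpy as np

mu_0 = 4.0e-7 * np.pi           # vacuum permeability [T m/A]

# Illustrative parameters (placeholders; the paper's exact values apply).
chi_2 = 1.0e-3                  # susceptibility of the Tb2O3 microsphere (placeholder)
r_1, r_2 = 0.5e-3, 11e-6        # sphere radii [m]
rho_1, rho_2 = 1190.0, 7800.0   # approximate densities of PMMA and Tb2O3 [kg/m^3]
V_1 = 4.0 / 3.0 * np.pi * r_1**3
V_2 = 4.0 / 3.0 * np.pi * r_2**3
mass = rho_1 * V_1 + rho_2 * V_2

f_0 = 100.0                     # bare resonance frequency [Hz] (placeholder)
omega_0 = 2.0 * np.pi * f_0

def tuned_frequency(dBdz):
    """Resonance frequency omega_0' for a given field gradient dB_B/dz [T/m]."""
    omega_sq = omega_0**2 - chi_2 * V_2 / (mu_0 * mass) * dBdz**2
    return np.sqrt(omega_sq) if omega_sq > 0 else 0.0

# Hypothetical gradients standing in for the finite-element results at different d.
for dBdz in (100.0, 300.0, 500.0):
    print(f"dB/dz = {dBdz:6.1f} T/m -> f_0' = {tuned_frequency(dBdz)/(2*np.pi):8.3f} Hz")
```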
§ EXPERIMENTAL RESULT ESTIMATE
Now we estimate the acceleration measurement sensitivity of this system. To improve the acceleration sensitivity, the whole system is placed in a low-temperature environment with T=30 mK, and we estimate the damping coefficient γ=10^-4 Hz <cit.>. In the Supplementary Material, we calculate the dependence of the total measurement noise S_aa^mea on the laser input power P_in and obtain the optimized laser input power P_opt(ω,ω_0) that minimises the total measurement noise.
For oscillator resonance frequencies ω_0 of 10 Hz and 100 Hz,
we calculate the corresponding acceleration noise; the results are shown in Fig.<ref>(a) and Fig.<ref>(b). For ω_0=10 Hz,
assuming measurement efficiency η=1 and setting the laser input power to its optimal value P_opt(ω,ω_0) at each frequency, the measurement noise S_aa^mea can almost reach the SQL.
When the measurement efficiency η is reduced to 0.1, the measurement noise increases only slightly.
In practice, however, to simplify the experiment the laser input power must be chosen near the resonance frequency ω_0 as P_opt(ω_0,ω_0), which makes the measurement noise S_aa^mea increase rapidly away from resonance.
In Fig.<ref>(a), in the frequency range from 9 Hz to 11 Hz, the measurement noise S_aa^mea remains below the thermal noise S_aa^th for η=0.1. When the resonance frequency ω_0 is tuned to 100 Hz, the range where the measurement noise S_aa^mea lies below the thermal noise S_aa^th shrinks to 99.6-100.4 Hz, as shown in Fig.<ref>(b). We choose an appropriate scan step Δω_0 for the oscillator resonance frequency accordingly.
Based on the calculation results in Fig.<ref>(a) and
Fig.<ref>(b), we choose the scan step Δω_0=1 Hz for resonance frequencies ω_0 in the range 0.1 Hz to 100 Hz; each scan covers the frequency range from ω_0-Δω_0/2 to ω_0+Δω_0/2, and the laser input power is fixed at P_in=P_opt(ω_0,ω_0) within each scan.
We calculate the acceleration measurement noise S_aa^mea with η=0.1 in each scan and take the envelope of this series of S_aa^mea, written as S_aa^mea^'. The acceleration measurement sensitivity is then S_aa^tot=S_aa^th+S_aa^mea^', and these results are presented in Fig.<ref>(c).
Following the previous discussion of the effective integration time T_tot,
we fix the measurement time of each scan at T_mea=10^5 s.
When the DM frequency ω_s < 10 Hz, T_tot=T_mea; and when ω_s > 10 Hz, T_tot=√(T_mea· 10^6/ω_s).
Combining this with the scan step discussed above, we estimate that about one hundred adjustments and measurements will be required in total, corresponding to a total time of 1 × 10^7 s.
The final result for the coupling strength g_ B-L from equation (<ref>) is shown in Fig.<ref>. In the region ω_s ≤ 100 Hz, the system maintains high acceleration sensitivity by adjusting the resonance frequency of the mechanical oscillator, and we achieve more than an order of magnitude improvement in the measurement of g_ B-L compared with the MICROSCOPE and Eöt-Wash torsion experiments.
In the region ω_s > 100 Hz, the measurement accuracy of g_ B-L degrades rapidly, owing to the increase in the measurement noise S_aa^mea.
Finally, we estimate the minimum g_ B-L that this system can detect, assuming DM frequencies ω_s of 1 Hz, 10 Hz and 100 Hz.
Using equation (<ref>) with measurement times T_mea ranging from 10^3 s to 10^7 s, the results are shown in Fig.<ref>.
When T_mea is less than the coherence time T_coh, the detectable g_ B-L decreases rapidly as T_mea increases; when T_mea is greater than T_coh, it decreases more slowly. For a final measurement time of about 10^7 s, the minimum detectable g_ B-L is on the scale of about 10^-26.
§ CONCLUSION
We propose an experimental scheme to detect ultra-light dark matter using a frequency-adjustable diamagnetically levitated microsphere sensor that can theoretically approach the standard quantum limit.
We change the resonance frequency by adjusting the distance between the paramagnetic microsphere and the lower combined magnets, thereby obtaining a larger frequency range over which high acceleration measurement sensitivity is maintained.
Compared with existing systems, our method can achieve at least an order of magnitude improvement in the coupling constant g_ B-L, especially at frequencies from 0.1 Hz to 100 Hz, and it may be possible to achieve higher accuracy by using arrays of sensors in the future.
In this article, we consider only the effects of thermal noise and quantum measurement noise on the acceleration measurement sensitivity of the system.
In practice, many low-frequency disturbances, such as seismic waves and Earth tides, also strongly affect the accuracy of the experiment and cannot be shielded by the suspension system. This poses a great challenge for the actual measurement. Reducing the frequency scan step according to the accuracy of the active vibration isolation device may bring the effect of these other noise sources below the thermal noise, which needs to be verified by further experiments.
In general, current ground-based precision measurement systems may have broader prospects for dark matter detection than previous astronomical observation methods. In the future, with improvements in the measurement sensitivity and measurement range of mechanical sensors, and especially with advances in quantum sensing technology, the measurement sensitivity may surpass the standard quantum limit, opening up more possibilities for dark matter detection.
This work was supported by the National Natural Science Foundation of China (Grants No.12205291, No. 12075115, No. 12075116, No. 11890702 and No. 12150011), the Fundamental Research Funds for the Central Universities, and Anhui Provincial Natural Science Foundation (Grant No. 2208085QA16).
§ APPENDIX: LIGHT FIELD CALCULATION AND MEASUREMENT NOISE OPTIMIZATION
Optical Calculation. The light emitted from the incident fiber is assumed to be Gaussian. Taking the light propagation direction as the z-axis, the incident Gaussian intensity distribution at the waist can be written as <cit.>:
I_1 (r)=I_0 exp(-2r^2/ω_01^2)
The waist radius of the incident Gaussian beam is ω_01, which satisfies the relation:
ω_01=√(a_0^2 λ^2/λ^2+π^2 a_0^2 tan^2 α)
where a_0 is the radius of the fiber core and sinα = N.A., with N.A. the numerical aperture of the fiber. Here a_0=5 μm and N.A.=0.13 for the single-mode fiber. The incident optical power is:
P_in=∫_0^∞ I_1 (r) 2 π rdr=π/2ω_01^2 I_0
The response of the light to the micro-sphere is calculated using the standard optical ABCD ray matrix <cit.>. Under the paraxial approximation, the transmission matrix 𝐓 is:
𝐓=[ A B; C D ]
which acts on the ray vector as:
[ r_f; θ_f ] = 𝐓 [ r_i; θ_i ]
In calculating the transmission matrix 𝐓, we neglect the reflection of light at the interface and the absorption in the micro-sphere. Here A, B, C, D are
A=2/n-1, B=2R/n, C=2(1-n)/(nR), D=2/n-1, β_0=λ/(πω_01^2)
With the parameters λ=1550 nm and n=1.45, we obtain d_2 and ω_02 satisfying
d_2=(AC/β_0^2+ACd_1^2+(AD+BC)d_1+BD)/(C^2/β_0^2+C^2 d_1^2+2CDd_1+D^2)
ω_02=ω_01√((A+Cd_2 )^2+β_0^2(Ad_1+B+Cd_1 d_2+Dd_2 )^2)
Both d_2 and ω_02 are functions of d_1; we choose a suitable d_1 so that ω_02≈ a_0.
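The sketch below re-implements this Gaussian-beam calculation numerically: it builds the ball-lens ABCD matrix from the two surface refractions and the internal propagation, propagates the complex beam parameter q, locates the output waist, and scans d_1 until ω_02 ≈ a_0. It is an independent illustration under the paraxial assumptions of this appendix, not the exact code used for the results above; R is taken equal to the sphere radius r_1 for definiteness.

```python
import numpy as np

lam = 1550e-9            # wavelength [m]
n = 1.45                 # refractive index of the micro-sphere
R = 0.5e-3               # sphere radius [m] (assumed equal to r_1)
a0 = 5e-6                # fiber core radius [m]
NA = 0.13
w01 = np.sqrt(a0**2 * lam**2 / (lam**2 + np.pi**2 * a0**2 * np.tan(np.arcsin(NA))**2))

def ball_lens_matrix(n, R):
    """ABCD matrix of a ball lens: surface refraction, translation 2R, refraction."""
    refr1 = np.array([[1.0, 0.0], [(1.0 - n) / (n * R), 1.0 / n]])
    trans = np.array([[1.0, 2.0 * R], [0.0, 1.0]])
    refr2 = np.array([[1.0, 0.0], [-(n - 1.0) / R, n]])
    return refr2 @ trans @ refr1

def propagate(q, M):
    """Propagate the complex beam parameter q through an ABCD matrix M."""
    A, B, C, D = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return (A * q + B) / (C * q + D)

def spot_size(q):
    """Gaussian spot size from the complex beam parameter q."""
    return np.sqrt(-lam / (np.pi * np.imag(1.0 / q)))

def output_waist(d1):
    """Return (distance d2 to the output waist, waist size w02) for a given d1."""
    q = 1j * np.pi * w01**2 / lam                      # waist at the fiber tip
    q = propagate(q, np.array([[1.0, d1], [0.0, 1.0]]))
    q = propagate(q, ball_lens_matrix(n, R))
    d2 = -np.real(q)                                   # waist lies where Re(q) = 0
    return d2, spot_size(q + d2)

# Scan d1 and pick the value whose output waist best matches the core radius a0.
d1_grid = np.linspace(0.1e-3, 5e-3, 2000)
best = min(d1_grid, key=lambda d1: abs(output_waist(d1)[1] - a0))
d2, w02 = output_waist(best)
print(f"d1 = {best*1e3:.3f} mm, d2 = {d2*1e3:.3f} mm, w02 = {w02*1e6:.2f} um")
```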
The coupling efficiency Γ between the laser beam and the single-mode optical fiber can be written as:
Γ=Γ_0 exp(-Γ_0·x_fib^2/2 (1/ω_02^2 +1/a_0^2)),
Γ_0=4ω_02^2a_0^2/(ω_02^2+a_0^2 )^2
where x_fib denotes the fiber displacement along the x direction; when x_fib=0, Γ=Γ_max=Γ_0.
In the experiment, x_fib is fixed at the position where ∂Γ/∂ x_fib is largest, i.e., x_fib=2.51 μm and Γ(x_fib)=0.604, as shown in Fig.<ref>(b).
δ x is the displacement of the micro-sphere perpendicular to the optical axis (a similar result holds for the y direction), while δ x' is its projection on the incident fiber surface. Under the paraxial approximation, δ x'=ζ·δ x for small displacements δ x of the micro-sphere, with the displacement magnification factor:
ζ=(d_1+d_2+2R)/(d_1+R),
ς=∂Γ/∂ x=∂Γ/∂ x'·∂ x'/∂ x=ζ·∂Γ/∂ x'
Measurement Noise. The relationship between the average power P and the photon number N is:
N_in=P_inT_mea/ħω_op,
N_dec=P_decT_mea/ħω_op
where ω_op is the light frequency. The photons obey Poisson statistics, so the corresponding photon-number fluctuations are δ N_in=√(N_in) and δ N_dec=√(N_dec). These fluctuations produce an imprecision noise in the displacement detection, δ x_imp:
δ x_imp =∂ x/∂Γ√((∂Γ/∂ N_inδ N_in)^2+(∂Γ/∂ N_decδ N_dec)^2)=1/ς√((Γ+Γ^2)/N_in)
Thus the power density of displacement noise is:
S_xx^imp=1/ς^2(Γ+Γ^2)ħω_op/P_in
On the other hand, the photons passing through the micro-sphere change their direction, which generates a back-action force δ f_ba with a strength also proportional to the fluctuation of the incident photon number δ N_in. The back-action force δ f_ba can be written as:
δ f_ba=√(N_in)ħΔ k /T_mea
where Δ k is the change of the wave vector.
Here we assume that the direction of the light wave vector follows the Gaussian beam wavefront, and that the probability of a photon appearing at a given position is proportional to the local Gaussian intensity. Δ k is the average change of the light wave vector as it passes through the micro-sphere, calculated as √((Δ k_in)^2+(Δ k_out)^2), where Δ k_in is the average wave vector of the light entering the micro-sphere and Δ k_out is the average wave vector of the light leaving it. We obtain
(Δ k)^2= k^2 β = k^2 ∫_0^∞ [k^2 r^3/(k^2 r^2 +((1-z_r^2/z_l^2)kR^2/2ρ(z_l)+z_r/z-kρ(z_l))^2)]·(1/ω_1^2(z_l))exp(-2r^2/ω_1^2(z_l))dr
where k=ω_op/c, z_l=d_1+R-√(R^2-r^2), ω_1(z_l)=ω_01√(1+(z_l /z_r)^2), z_r=2 πω_01^2 / λ and ρ(z_l)=z_r (z_l/z_r +z_r/z_l).
The power density of back-action noise is thus:
S_ff^ba=P_inħω_opβ/c^2
and the product of the imprecision noise and the back-action noise is:
S_xx^imp· S_ff^ba=(1/ς^2)(Γ+Γ^2)(ω_op/c)^2 βħ^2
The quantum efficiency of the measurement is defined as:
η=ς^2/(4(Γ+Γ^2)β k^2)
where η = 1 corresponds to the standard quantum limit (SQL). The total measurement noise is
S_aa^mea (ω)=S_xx^imp/|χ_ m(ω,ω_0)|^2+S_ff^ba/m^2
S_aa^mea is minimized by tuning the incident laser power P_in under the product constraint of the imprecision noise and backaction noise. The optimized power is:
P_opt (ω,ω_0 )=√(Γ+Γ^2/β)m c/ς|χ_ m(ω,ω_0)|
with the minimised total acceleration measurement noise as:
S_aa,min^mea=2ħω_op/mς c |χ_ m(ω,ω_0)|√(β(Γ+Γ^2 ))
To simplify the experimental procedure, we choose P_in =P_opt (ω_0,ω_0), with the optimized acceleration measurement noise in this case:
S_aa,opt^mea=ħω_op√(β(Γ+Γ^2 ))/mς c γω_0·(1/ |χ_ m(ω,ω_0)|^2+γ^2 ω_0^2 )
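A small numerical sketch of these last expressions is given below; the values of m, Γ, β and ς are placeholders for the quantities computed above, and ω_op = 2πc/λ.

```python
import numpy as np

hbar = 1.054571817e-34
c = 2.99792458e8
lam = 1550e-9
omega_op = 2.0 * np.pi * c / lam

# Placeholder values standing in for the quantities derived in this appendix.
m = 6.2e-7            # oscillator mass [kg] (placeholder)
gamma = 1.0e-4        # damping coefficient [Hz]
Gamma = 0.604         # fiber coupling efficiency at the chosen x_fib
beta = 1.0e-3         # wave-vector change factor (placeholder for the integral above)
sigma_s = 1.0e3       # varsigma = dGamma/dx [1/m] (placeholder)

def chi_abs(omega, omega_0):
    """|chi_m(omega, omega_0)| as defined in the main text."""
    return 1.0 / np.sqrt((omega**2 - omega_0**2)**2 + gamma**2 * omega**2)

def P_opt(omega, omega_0):
    """Laser power minimizing the measurement noise at frequency omega."""
    return np.sqrt((Gamma + Gamma**2) / beta) * m * c / (sigma_s * chi_abs(omega, omega_0))

def S_aa_mea_min(omega, omega_0):
    """Minimized total measurement noise S_aa,min^mea."""
    return 2.0 * hbar * omega_op / (m * sigma_s * c * chi_abs(omega, omega_0)) \
        * np.sqrt(beta * (Gamma + Gamma**2))

omega_0 = 2.0 * np.pi * 10.0
for f in (9.0, 10.0, 11.0):
    w = 2.0 * np.pi * f
    print(f"f = {f:5.1f} Hz: P_opt = {P_opt(w, omega_0):.3e} W, "
          f"S_aa_min = {S_aa_mea_min(w, omega_0):.3e}")
```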
|
http://arxiv.org/abs/2307.03926v1 | 20230708075122 | Enhancing Room Security and Automating Class Attendance Using ID Cards | [
"Shravan Bhat",
"Nithin R",
"Pranav S"
] | cs.CR | [
"cs.CR",
"cs.HC",
"none",
"J.7"
] |
Enhancing Room Security and Automating Class Attendance Using ID Cards
Shravan Bhat – 171EE240, Nithin R – 171EC131, Pranav S - 171EC135
August 12, 2023
========================================================================
With the rapid advancements in technology, automation has emerged as the future of human endeavors. From simple tasks like attendance management to complex security systems, automation has the potential to revolutionize various aspects of our lives. This research paper explores the implementation of a method aimed at enhancing room security in hostels and automating class attendance using ID cards. In this study, we propose a system that utilizes the unique identity information stored in ID cards for various security and check-in tasks. By integrating RFID (Radio-Frequency Identification) reader technology, GSM modules, Node MCU, and Arduino, we create a comprehensive solution. The RFID reader scans the ID card, extracting the relevant information and verifying the user's identity. The data is then transmitted via the GSM module to a central database, ensuring real-time monitoring and security measures. Moreover, the system also enables the automation of class attendance. By utilizing the same ID cards, students can simply tap their cards on a reader placed in the classroom. This information is recorded automatically, eliminating the need for manual attendance taking and reducing errors and time consumption. This research project highlights the practical implementation of ID card technology to enhance room security in hostels and automate class attendance processes. By leveraging the power of automation, we aim to streamline administrative tasks, improve security measures, and optimize efficiency in educational institutions and other relevant settings.
ID card, RFID reader, GSM Module, Node MCU, Arduino
§ INTRODUCTION
Security and privacy are basic needs for any human being. India's population has been increasing exponentially since the 19th century, and hence the student intake of colleges has been increasing every year. Automation can help with routine tasks such as taking attendance or making payments within a campus. Privacy and security are also concerns in many colleges: adding layers of security to rooms and safe boxes would help prevent petty theft. The main motivation of this project is to establish an attendance system within our college campus, a cash-less payment system, and safer, key-less room locking systems in our university.
§ LITERATURE SURVEY
§.§ Survey of State of Art
Smart-card-based door lock systems are currently available, such as the NFC (Near Field Communication) cards used in hotel rooms, but they are expensive and less secure, and implementing them requires complex hardware. Automated attendance systems that use fingerprints as the ID also exist, but implementing them on a large scale such as a college is difficult and would be rather expensive.
§.§ Features
* The door lock system includes an RFID card and an RFID reader. The door unlocks only when an authorized card is scanned and the corresponding PIN is entered using the keypad provided.
* The locking and unlocking of the door latch is implemented using servo motors, stepper motors and gears.
* When a card is scanned, an alert SMS is sent to the registered phone number and an alert notification is generated in the app. If an authorized card is scanned without the user's consent, the user can shut down the system by sending a message from his or her phone.
* The same RFID card can be used in classrooms as a check-in attendance system.
§ DETAILS OF IMPLEMENTATION
§.§ Components Used
* Sim900 GSM module
* Arduino Uno
* MFRC522 RFID reader and RFID cards
* Servo motors, stepped motors and gears
* 4*4 keypad
* Buzzer and power adaptor
* Node MCU
* LEDs and resistors
* I2C LCD display
§.§ Working
The smart ID card system is divided into three sub-systems: 1) Security System, 2) Payment System, 3) Attendance System.
* Security System: The RFID reader communicates with the Arduino through the SPI protocol, and the I2C LCD communicates with the Arduino through the I2C protocol. The keypad is connected to the Arduino. The 4x4 keypad has 8 connections, but the last column of the keypad is not required since we only need digits for the password. A 5 V, 2 A power adaptor is used to power the SIM900 module. Once the SIM900 module is powered, the power light turns on, and on pressing the power key the status LED lights up. The phone is then paired with the module.
GSM Module: GSM stands for Global System for Mobile communication and is the most widely used mobile communication system in the world. GSM is an open, digital cellular technology used for transmitting mobile voice and data services. A GSM module is used here because it can communicate with a mobile phone, and the data it receives can be processed and sent to the Arduino. I2C Protocol: I2C is a serial protocol with a two-wire interface used to connect low-speed devices such as microcontrollers, I/O interfaces and other similar peripherals in embedded systems.
* Payment System: The RFID reader communicates with the Node MCU through the SPI protocol. The Node MCU is connected to a web server where the data is stored. When the RFID card is scanned and the PIN is entered, the balance amount is displayed on the screen. Node MCU: This device is used instead of only an Arduino UNO because the Node MCU has a Wi-Fi module that can connect to the web server.
The ESP8266 can be controlled from a local Wi-Fi network or from the internet (after port forwarding). The ESP-01 module has GPIO pins that can be programmed to control a device or execute code over the internet. The module can be programmed using an Arduino through the serial pins (RX, TX).
* Attendance System: When an ID is scanned on the RFID reader, the student name stored in the RFID card is printed on the serial monitor. A card cannot be registered twice, which is ensured by comparing it with the already registered IDs. An external app is used to store the output from the serial monitor, and the output can be saved to the computer. A minimal sketch of this duplicate-check logic is given after this list.
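The firmware itself runs on the Arduino/Node MCU, but the duplicate-check logic described in the attendance item above can be sketched in a few lines of host-side Python; the card UIDs and student names below are made up for illustration.

```python
# Host-side sketch of the attendance check-in logic. The card UIDs and student
# names are hypothetical; in the real system they come from the RFID reader over
# the serial port and from the student register respectively.
from datetime import datetime

student_register = {
    "04A1B2C3": "Shravan Bhat",
    "04D4E5F6": "Nithin R",
    "04A7B8C9": "Pranav S",
}

attendance_log = {}   # uid -> timestamp of first scan in this session

def check_in(uid):
    """Record attendance for a scanned card, rejecting unknown or repeated scans."""
    if uid not in student_register:
        return f"UNKNOWN CARD {uid}: access denied"
    if uid in attendance_log:
        return f"{student_register[uid]} already registered at {attendance_log[uid]}"
    attendance_log[uid] = datetime.now().strftime("%H:%M:%S")
    return f"{student_register[uid]} registered at {attendance_log[uid]}"

# Simulated scans, including a duplicate and an unregistered card.
for scanned in ["04A1B2C3", "04D4E5F6", "04A1B2C3", "FFFFFFFF"]:
    print(check_in(scanned))
```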
§ RESULTS AND DISCUSSIONS
§.§ Security System
The door lock security system was successfully implemented. The door unlocks, by turning the servo motor, only when an authorized ID card is scanned on the RFID reader and the correct password is entered on the keypad; a message is then sent to the owner saying that the door is unlocked. After a few seconds the door locks again, returning the servo motor to its original position. When the owner is inside the room, a switch inside the room can be used to unlock the door, and after a few seconds the door locks again. If a wrong ID card is scanned or a wrong password is entered, the whole system locks down, an alarm is sounded using a buzzer, and a message is sent to the owner reporting an attempted breach of the security system. The security system fails to detect an intruder if an RFID card's ID is changed to the owner's ID. It will also fail if the owner is negligent and reveals the password to others.
§.§ Payment System
When an ID is scanned on the RFID reader, the value stored in the RFID card is sent via the Wi-Fi module over the internet to the database on the server, together with the date and time taken from the internet. This stored value can be changed by the vendor or shopkeeper to the new balance amount, and the changed balance is then updated on the ID card through the ESP8266 Wi-Fi module. A drawback of this system is that the balance could be changed to a wrong value, resulting in an incorrect balance.
§.§ Attendance system
The attendance system was successfully implemented. When a registered ID card is scanned on the RFID reader, the ID card number is sent to the database through the Node MCU Wi-Fi module, and the database saves the student's name and ID number. The attendance list can later be retrieved from the database. As a fail-safe for this method, the RFID reader reads the ID number of the card and compares it with the student register; if the ID is present, it prints the student's name on the serial monitor, and an external app saves the serial monitor logs as text. This method would fail if another student scans the card even though the owner is not present in the class, so the scanner must be monitored while students scan their cards.
§ ACKNOWLEDGMENT
With immense pleasure we present "Enhancing Room Security and Automating Class Attendance Using ID Cards" as part of the curriculum of "Embedded Systems and Design" in the Department of Electronics and Communication Engineering, National Institute of Technology, Karnataka. We wish to thank all the people who gave us their unending support. We express our profound thanks to our professor, Dr. Ramesh Kini M., and all those who have indirectly guided and helped us in the preparation of this project.
|
http://arxiv.org/abs/2307.04287v1 | 20230710002925 | Generalizing Graph ODE for Learning Complex System Dynamics across Environments | [
"Zijie Huang",
"Yizhou Sun",
"Wei Wang"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CE",
"cs.MA",
"cs.NE"
] |
University of California, Los Angeles
Los Angeles
CA
[email protected]
University of California, Los Angeles
Los Angeles
CA
[email protected]
University of California, Los Angeles
Los Angeles
CA
[email protected]
Learning multi-agent system dynamics has been extensively studied for various real-world applications, such as molecular dynamics in biology,
multi-body system in physics, and particle dynamics in material science.
Most of the existing models are built to learn single system dynamics, which learn the dynamics from observed historical data and predict the future trajectory.
In practice, however, we might observe multiple systems that are generated across different environments, which differ in latent exogenous factors such as temperature and gravity.
One simple solution is to learn multiple environment-specific models, but it fails to exploit the potential commonalities among the dynamics across environments and offers poor prediction results where per-environment data is sparse or limited.
Here, we present GG-ODE (Generalized Graph Ordinary Differential Equations), a machine learning framework for learning continuous multi-agent system dynamics across environments. Our model learns system dynamics using neural ordinary differential equations (ODEs) parameterized by Graph Neural Networks (GNNs) to capture the continuous interaction among agents. We achieve model generalization by assuming
the dynamics across different environments are governed by common physics laws that can be captured via learning a shared ODE function. The distinct latent exogenous factors learned for each environment are incorporated into the ODE function to account for their differences. To improve model performance, we additionally design two regularization losses to (1) enforce the orthogonality between the learned initial states and exogenous factors via mutual information minimization; and (2) reduce the temporal variance of learned exogenous factors
within the same system via contrastive learning. Experiments over various physical
simulations show that our model can accurately predict system dynamics, especially in the long range, and can generalize well to new systems with few observations.
Generalizing Graph ODE for Learning
Complex System Dynamics across Environments
Wei Wang
August 12, 2023
=================================================================================
§ INTRODUCTION
Building a simulator that can understand and predict multi-agent system dynamics is a crucial research topic spanning over a variety of domains such as planning and control in robotics <cit.>, where the goal is to generate future trajectories of agents based on what has been seen in the past. Traditional simulators can be very expensive to create and use <cit.> as it requires sufficient domain knowledge and tremendous computational resources to generate high-quality results[To date, out of the 10 most powerful supercomputers in the world, 9 of them are used for simulations, spanning the fields of cosmology, geophysics and fluid dynamics <cit.>]. Therefore, learning a neural-based simulator directly from data that can approximate the behavior of traditional simulators becomes an attractive alternative.
As the trajectories of agents are usually coupled with each other and co-evolve along with the time, existing studies on learning system dynamics from data usually view the system as a graph and employ Graph Neural Networks (GNNs) to approximate pair-wise node (agent) interaction to impose strong inductive bias <cit.>. As a pioneering work, Interaction Networks (IN) <cit.> decompose the system into distinct objects and relations, and learn to reason about the consequences of their interactions and dynamics. Later work incorporates domain knowledge <cit.>, graph structure variances <cit.>,
and equivariant representation learning <cit.> into learning from discrete GNNs, achieving state-of-the-art performance in various domains including mesh-based physical simulation <cit.> and molecular prediction <cit.>.
However, these discrete models usually suffer from low accuracy in long-range predictions as (1) they approximate the system by discretizing observations into some fixed timestamps and are trained to make a single forward-step prediction and (2) their discrete nature fails to adequately capture systems that are continuous in nature such as the spread of COVID-19 <cit.> and the movements of an n-body system <cit.>.
Recently, researchers propose to combine ordinary differential equations (ODEs) - the principled way for modeling dynamical systems in a continuous manner in the past, with GNNs to learn continuous-time dynamics on complex networks in a data-driven way <cit.>. These Graph-ODE methods have demonstrated the power of capturing long-range dynamics, and are capable of learning from irregular-sampled partial observations <cit.>. They usually assume all the data are generated from one single system, and the goal is to learn the system dynamics from historical trajectories to predict the future.
In practice, however, we might observe data that are generated from multiple systems, which can differ in their environments. For example, we may observe particle trajectories from systems with different temperatures; such environment-specific conditions are what we call exogenous factors.
These exogenous factors can span over a wide range of settings such as particle mass, gravity, and temperature <cit.> across environments. One simple solution is to learn multiple environment-specific models, but it can fail to exploit the potential commonalities across environments and make accurate predictions for environments with sparse or zero observations. In many useful contexts, the dynamics in multiple environments share some similarities, yet being distinct reflected by the (substantial) differences in the observed trajectories. For example, considering the movements of water particles within multiple containers of varying shapes, the trajectories are driven by both the shared pair-wise physical interaction among particles (i.e. fluid dynamics) and the different shapes of the containers where collisions can happen when particles hit the boundaries. Also, the computational cost for training multiple environment-specific models would be huge. More challengingly, the exogenous factors within each environment can be latent, such as we only know the water trajectories are from different containers, without knowing the exact shape for each of them. Therefore, how to learn a single efficient model that can generalize across environments by considering both their commonalities and the distinct effect of per-environment latent exogenous factors remains unsolved. This model, if developed, may help us predict dynamics for systems under new environments with very few observed trajectories.
Inspired by these observations, in this paper, we propose Generalized Graph ODE (GG-ODE), a general-purpose continuous neural simulator that learns multi-agent system dynamics across environments. Our key idea is to assume the dynamics across environments are governed by common physics laws that can be captured via learning a shared ODE function. We introduce in the ODE function a learnable vector representing the distinct latent exogenous factors for each environment to account for their differences. We learn the representations for the latent exogenous factors from systems' historical trajectories through an encoder by optimizing the prediction goal. In this way, different environments share the same ODE function framework while incorporating environment-specific factors in the ODE function to distinguish them.
However, there are two main challenges in learning such latent exogenous factor representations. Firstly, since both the latent initial states for agents and the latent exogenous factors are learned through the historical trajectory data, how can we differentiate them to guarantee they have different semantic meanings? Secondly, when inferring from different time windows from the same trajectory, how can we guarantee the learned exogenous factors are for the same environment?
Towards the first challenge, we enforce the orthogonality between the initial state encoder and the exogenous factor encoder via mutual information minimization. For the second challenge, we reduce the variance of learned exogenous factors within the same environment via a contrastive learning loss. We train our model in a multi-task learning paradigm where we mix the training data from multiple systems with different environments. In this way, the model is expected to adapt quickly to other unseen systems with a few data points. We conduct extensive experiments over a wide range of physical systems, which show that our GG-ODE is able to accurately predict system dynamics, especially in the long range.
The main contributions of this paper are summarized as follows:
* We investigate the problem of learning continuous multi-agent system dynamics across environments. We propose a novel framework, known as GG-ODE, which describes the dynamics for each system with a shared ODE function and an environment-specific vector for the latent exogenous factors to capture the commonalities and discrepancies across environments respectively.
* We design two regularization losses to guide the learning process of the latent exogenous factors, which is crucial for making precise predictions in the future.
* Extensive experiments verify the effectiveness of GG-ODE in accurately predicting system dynamics, especially in long-range prediction tasks. GG-ODE also generalizes well to unseen or low-resource systems that have very few training samples.
§ PROBLEM DEFINITION
We aim to build a neural simulator to learn continuous multi-agent system dynamics automatically from data that can be generalized across environments. Throughout this paper, we use boldface uppercase letters to denote matrices or vectors, and regular lowercase letters to represent the values of variables.
We consider a multi-agent dynamical system of N interacting agents as an evolving interaction graph 𝒢^t = {𝒱,ℰ^t}, where nodes are agents and edges are interactions between agents that can change over time. For each dynamical system, we denote e∈ E as the environment from which the data is acquired. We denote X^t,e∈𝒳 as the feature matrix for all N agents and x_i^t,e as the feature vector of agent i at time t under environment e. The edges between agents are assigned if two agents are within a connectivity radius R based on their current locations p_i^t,e, which are part of the node feature vector, i.e. p_i^t,e∈x_i^t,e. They reflect the local interactions of agents and the radius is kept constant over time <cit.>.
Our model input consists of the trajectories of N agents over K timestamps X^t_1:K,e={X^t_1,e, X^t_2,e, …, X^t_K,e}, where the timestamps t_1,t_2⋯ t_K can have non-uniform intervals and be of any continuous values. Our goal is to learn a generalized simulator s_θ:X^t_1:K,e→ Y^t_K+1:T,e that predicts node dynamics in the future for any environment e.
Here Y^t,e∈𝒴 represents the targeted node dynamic information at time t, and can be a subset of the input features. We use y_i^t,e to denote the targeted node dynamic vector of agent i at time t under environment e.
§ PRELIMINARIES AND RELATED WORK
§.§ Dynamical System Simulations with Graph Neural Networks (GNNs)
Graph Neural Networks (GNNs) are a class of neural networks that operate on graph-structured data by passing local messages<cit.>.
They have been extensively employed in various applications such as node classification <cit.>, link prediction <cit.>, and recommendation systems <cit.>.
By viewing each agent as a node and interaction among agents as edges, GNNs have shown to be efficient for approximating pair-wise node interactions and achieved accurate predictions for multi-agent dynamical systems <cit.>.
The majority of existing studies propose discrete GNN-based simulators where they take the node features at time t as input to predict the node features at time t+1. To further capture the long-term temporal dependency for predicting future trajectories, some work utilizes recurrent neural networks such as RNN, LSTM or self-attention mechanism to make prediction at time t +1 based on the historical trajectory sequence within a time window <cit.>. However, they all restrict themselves to learn a one-step state transition function. Therefore, when successively apply these one-step simulators to previous predictions in order to generate the rollout trajectories, error accumulates and impairs the prediction accuracy, especially for long-range prediction.
Also, when applying most discrete GNNs to learn over multiple systems under different dynamical laws (environments), they usually retrain the GNNs individually for dealing with each specific system environment <cit.>, which yields a large computational cost.
§.§ Ordinary Differential Equations (ODEs) for
Multi-agent Dynamical Systems
The dynamic nature of a multi-agent system can be captured by a series of nonlinear first-order ordinary differential equations (ODEs), which describe the co-evolution of states for a set of N dependent variables (agents) over continuous time t∈ℝ as <cit.>: ż_i^t:=d z_i^t/d t=g(z_1^t, z_2^t⋯z_N^t). Here z_i^t∈ℝ^d denotes the state variable for agent i at timestamp t and g denotes the ODE function that drives the system move forward. Given the initial states z_1^0,
⋯z_N^0 for all agents and the ODE function g, any black-box numerical ODE solver such as Runge–Kutta <cit.> can solve the ODE initial-value problem (IVP), whose solution z_i^T can be evaluated at any desired time as shown in Eqn <ref>.
z_i^T=z_i^0+∫_t=0^T g(z_1^t, z_2^t⋯z_N^t) d t
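As a concrete illustration of solving such an initial-value problem numerically, the toy example below integrates a hand-written two-body spring system with an off-the-shelf solver; it is only meant to show the mechanics of Eqn <ref>, not the learned ODE function introduced later.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy coupled system: two unit masses connected by a spring (illustrative only).
k = 1.0  # spring constant

def g(t, z):
    """ODE function dz/dt for state z = [x1, v1, x2, v2]."""
    x1, v1, x2, v2 = z
    f = k * (x2 - x1)              # spring force on agent 1 (opposite on agent 2)
    return [v1, f, v2, -f]

z0 = [0.0, 0.0, 1.5, 0.0]              # initial states of the two agents
t_eval = np.linspace(0.0, 10.0, 101)   # arbitrary (possibly irregular) timestamps

sol = solve_ivp(g, (0.0, 10.0), z0, t_eval=t_eval, method="RK45")
print(sol.y[:, -1])                    # states of both agents at the final time
```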
Traditionally, the ODE function g is usually hand-crafted based on some domain knowledge such as in robot motion control <cit.> and fluid dynamics <cit.>, which is hard to specify without knowing too much about the underlying principles. Even if the exact ODE functions are given, they are usually hard to scale as they require complicated numerical integration <cit.>. Some recent studies <cit.> propose to parameterize it with a neural network and learn it in a data-driven way. They combine the expressive power of neural networks along with the principled modeling of ODEs for dynamical systems, which have achieved promising results in various applications <cit.>.
§.§ GraphODE for Dynamical Systems
To model the complex interplay among agents in a dynamical system, researchers have recently proposed to combine ODE with GNNs, which has been shown to achieve superior performance in long-range predictions <cit.>. In <cit.>, an encoder-processor-decoder architecture is proposed, where an encoder first computes the latent initial states for all agents individually based on their first observations. Then an ODE function parameterized by a GNN predicts the latent trajectories starting from the learned initial states. Finally, a decoder extracts the predicted dynamic features based on a decoding function that takes the predicted latent states as input. Later on, a Graph-ODE framework has been proposed <cit.> which follows the structure of variational autoencoder <cit.>. They assume an approximated posterior distribution over the latent initial state for each agent, which is learned based on the whole historical trajectories instead of a single point as in <cit.>. The encoder computes the approximated posterior distributions for all agents simultaneously considering their mutual influence and then sample the initial states from them. Compared with <cit.>, they are able to achieve better prediction performance, especially in the long range, and are also capable of handling the dynamic evolution of graph structures <cit.> which is assumed to be static in <cit.>.
We follow a similar framework to this line but aim at generalizing GraphODE to model multiple systems across environments.
§ METHOD
In this section, we present Generalized Graph ODE (GG-ODE) for learning complex system dynamics across environments. As depicted in Figure <ref>, GG-ODE consists of four main components that are trained jointly: (1) an initial state encoder for inferring the latent initial states for all agents simultaneously; (2) an environment encoder which learns the latent representations for exogenous factors; (3) a generative model defined by a GNN-based ODE function that is shared across environments for modeling the continuous interaction among agents in the latent space. The distinct latent exogenous factors learned
for each environment are incorporated into the ODE function to
account for their discrepancies, and (4) a decoder that extracts the predicted dynamic features based on a decoding function. We now introduce each component in detail.
§.§ Initial State Encoder
Given the observed trajectories X^t_1:K,e, the initial state encoder computes a posterior distribution of latent initial state q_ϕ(z_i^0,e| X^t_1:K,e) for each agent, from which z_i^0,e is sampled. The latent initial state z_i^0,e for each agent determines the starting point for the predicted trajectory. We assume the prior distribution p(z_i^0,e) is a standard normal distribution, and use Kullback–Leibler divergence term in the loss function to add significant regularization towards how the learned distributions look like, which differs VAE from other autoencoder frameworks <cit.>.
In multi-agent dynamical systems, agents are highly-coupled and influence each other. Instead of learning such distribution separately for each agent, such as using an RNN <cit.> to encode the temporal pattern for each individual trajectory, we compute the posterior distributions for all agents simultaneously (similar to <cit.>).
Specifically, we fuse all trajectories as a whole into a temporal graph to consider both the temporal patterns of individual agents and the mutual interaction among them, where each node is an observation of an agent at a specific timestamp. Two types of edges are constructed, which are (1) spatial edges 𝒱^t that are among observations of interacting agents at each timestamp if the Euclidean distance between the agents' positions r_ij^t,e = ||p_i^t,e - p_j^t,e||_2 is within a (small) connectivity radius R; and (2) temporal edges that preserve the autoregressive nature of each trajectory, defined between two consecutive observations of the same agent. Note that spatial edges are bidirectional while temporal edges are directional to preserve the autoregressive nature of each trajectory, as shown in Figure <ref>. Based on the constructed temporal graph, we learn the latent initial states for all agents through a two-step procedure: (1) dynamic node representation learning that learns the representation h_i^t,e for each observation node whose feature vector is x_i^t,e. (2) sequence representation learning that summarizes each observation sequence (trajectory) into
a fixed-dimensional vector through a self-attention mechanism.
§.§.§ Dynamic Node Representation Learning.
We first conduct dynamic node representation learning on the temporal graph through an attention-based spatial-temporal GNN defined as follows:
h_j^l+1(t,e)=h_j^l(t,e)+σ(∑_i^(t^',e)∈𝒩_j^()α_i^l(t',e) → j(t,e)×W_vĥ_i^l(t',e))
α_i^l(t',e) → j(t,e) = (W_kĥ_i^l(t',e))^T(W_qh_j^l(t,e)) ·1/√(d)
ĥ_i^l(t',e)= h_i^l(t',e) + TE(t'-t)
TE(Δ t)_2i=sin(Δ t/10000^2 i / d), TE(Δ t)_2i+1=cos(Δ t/10000^2 i / d)
where σ(·) is a non-linear activation function; d is the dimension of node embeddings. The node representation is computed as a weighted summation over its neighbors plus a residual connection, where the attention score is a transformer-based <cit.> dot-product of node representations using the value, key, and query projection matrices W_v,W_k,W_q. The learned attention scores are normalized via softmax across all neighbors. Here h_j^l(t,e) is the representation of agent j at time t in the l-th layer. h_i^l(t',e) is the general representation for a neighbor which is connected either by a temporal edge (where t'<t and i=j) or a spatial edge (where t=t' and i≠ j) to the observation h_j^l(t,e). We add temporal encoding <cit.> to each neighborhood node representation in order to distinguish the messages delivered via spatial and temporal edges respectively. Finally, we stack L layers to get the final representation for each observation node as h_i^t,e = h_i^L(t,e).
§.§.§ Sequence Representation Learning
We then employ a self-attention mechanism to generate the sequence representation m_i^e for each agent, which is used to compute the mean μ_i^0,e and variance σ_i^0,e of the approximated posterior distribution of the agent's initial state. Compared with recurrent models such as RNN, LSTM <cit.>, it offers better parallelization for accelerating training speed and in the meanwhile alleviates the vanishing/exploding gradient problem brought by long sequences <cit.>.
We follow <cit.> and compute the sequence representation m_i^e as a weighted sum of observations for agent i:
m_i^e=1/K∑_tσ((a_i^e)^T ĥ_i^t,e)ĥ_i^t,e, a_i^e=tanh((1/K∑_tĥ_i^t,e) W_a),
where a_i^e is the average of the observation representations with a nonlinear transformation W_a and ĥ_i^t,e = h_i^t,e + TE(t). K is the number of observations for each trajectory. Then the initial state is drawn from the
approximated posterior distribution as:
q_ϕ(z_i^0,e| X^t_1:K,e)=𝒩(μ_i^0,e, σ_i^0,e) , μ_i^0,e, σ_i^0,e=f_trans(m_i^e)
z_i^0,e∼ p(z_i^0,e) ≈ q_ϕ(z_i^0,e| X^t_1:K,e)
where f_trans is a simple Multilayer Perceptron (MLP) whose output vector is equally split into two halves to represent the mean and variance respectively.
§.§ Environment Encoder
The dynamic nature of a multi-agent system can be largely affected by some exogenous factors from its environment such as gravity, temperature, etc. These exogenous factors can span over a wide range of settings and are sometimes latent and not observable. To make our model generalize across environments, we design an environment encoder to learn the effect of the exogenous factors automatically from data to account for the discrepancies across environments. Specifically, we use the environment encoder to learn the representations of exogenous factors from observed trajectories and then incorporate the learned vector into the ODE function which is shared across environments and defines how the system evolves over time. In this way, we use a shared ODE function framework to capture the commonalities across environments while preserving the differences among them with the environment-specific latent representation, to improve model generalization performance. It also allows us to learn the exogenous factors of an unseen environment based on only its leading observations.
We now introduce the environment encoder in detail.
The exogenous factors influence all agents within a system. On the one hand, they affect the self-evolution of each individual agent; for example, temperature affects the velocities of agents. On the other hand, they affect the pair-wise interaction among agents; for example, temperature also changes the energy exchanged when two particles collide with each other. The environment encoder f_enc^env therefore learns the latent representation of exogenous factors u^e by jointly considering the trajectories from all agents, i.e. f_enc^env: X^t_1:K,e→u^e. Specifically, we learn an environment-specific latent vector from the aforementioned temporal graph in Sec <ref> that is constructed from the observed trajectories. The temporal graph contains both the information of each individual trajectory and the mutual interaction among agents through temporal and spatial edges.
To summarize the whole temporal graph into a vector u^e, we attend over the sequence representation m_i^e for each trajectory introduced in Sec <ref> as:
u^e=1/N∑_iσ((b^e)^T m_i^em_i^e), b^e=tanh((1/N∑_im_i^e) W_b),
where W_b is a transformation matrix and the attention weight is computed based on the average sequence representation with nonlinear transformation similar as in Eqn (<ref>). Note that we use different parameters to compute the sequence representation m_i^e as opposed to the initial state encoder. The reason is that the semantic meanings of the two sequence representations are different: one is for the latent initial states and another is for the exogenous factors.
§.§.§ Time Invariance.
A desired property of the learned representation for exogenous factors u^e is that it should be time-invariant towards the input trajectory time window. In other words, for the same environment, if we chunk the whole trajectories into several pieces, the inferred representations should be similar to each other as they are describing the same environment.
To achieve this, we design a contrastive learning loss to guide the learning process of the exogenous factors. As shown in Figure <ref>, we force the learned exogenous factor representations to be similar if they are generated based on the trajectories from the same environment (positive pairs), and to be apart from each other if they are from different environments (negative pairs). Specifically, we define the contrastive leanring loss as follows:
ℒ_contra =-logexp(sim(f_enc^env(X^t_1: t_2, e), f_enc^env(X^t_3: t_4, e)) / τ)/∑_e^'≠ eexp(sim(f_enc^env(X^t_1: t_2, e), f_enc^env(X^t_5: t_6, e^')) / τ)
where τ is a temperature scalar and sim(·, ·) is cosine similarity between two vectors. Note that the lengths of the observation sequences can vary. The detailed generation process for positive and negative pairs can be found in Appendix <ref>.
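A minimal PyTorch sketch of this contrastive term, applied to exogenous-factor vectors that the environment encoder has already produced, could look as follows; the tensor shapes and the way negatives are batched are illustrative assumptions rather than the exact training pipeline.

```python
import torch
import torch.nn.functional as F

def contrastive_env_loss(u_anchor, u_pos, u_negs, tau=0.1):
    """-log[ exp(sim(anchor, pos)/tau) / sum_neg exp(sim(anchor, neg)/tau) ].
    u_anchor, u_pos: [d] vectors from two windows of the same environment;
    u_negs: [num_neg, d] vectors inferred from other environments."""
    pos = F.cosine_similarity(u_anchor, u_pos, dim=-1) / tau
    neg = F.cosine_similarity(u_anchor.unsqueeze(0), u_negs, dim=-1) / tau
    return -(pos - torch.logsumexp(neg, dim=0))

# Toy usage with random vectors standing in for environment-encoder outputs.
d = 16
u_a, u_p = torch.randn(d), torch.randn(d)
u_n = torch.randn(8, d)
print(contrastive_env_loss(u_a, u_p, u_n))
```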
§.§.§ Orthogonality.
GG-ODE features two encoders that take as input the observed trajectories X^t_1:K,e for learning the latent initial states and the latent exogenous factors respectively. As they are designed for different purposes but are both learned from the same input, we disentangle the learned representations from them via a regularization loss defined through mutual information minimization.
Mutual information measures the dependency between two random variables X,Z <cit.>. Since we are not interested in the exact value of the mutual information, a lower bound derived from Jensen Shannon Divergence <cit.> could be formulated as
I_JSD(X, Z)= E_P_X Z[-sp(-M(x, z))] -E_P_X P_Z[sp(M(x, z))],
where P_X P_Z is the product of the marginal distributions and P_X Z is the joint distribution. sp(w)=log(1+e^w) and M is a discriminator modeled by a neural network to compute the score for measuring their mutual information.
According to recent literature <cit.>, the sample pair (positive pairs) (x,z) drawn from the joint distribution P_X Z are different representations of the same data sample, and the sample pair (negative pairs) drawn from P_X P_Z are different representations from different data samples. We therefore attempt to minimize the mutual information from the two encoders as follows
ℒ_MI
=𝔼_e∈ E,i[-sp(-Ψ(z_i^0,e, u^e))]-𝔼_e∈ E× e'∈ E,i[sp(Ψ(z_i^0,e, u^e'))]
where Ψ is an MLP-based discriminator. Specifically, we force the latent initial states z_i^0,e of all agents from environment e to be dissimilar to the learned exogenous factors u^e, and we construct negative pairs by replacing the learned exogenous factors with those of another environment, u^e'. The generation process for positive and negative pairs can be found in Appendix <ref>.
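The JSD-based regularizer with an MLP discriminator Ψ can be sketched as below; the network sizes and the way negative environment vectors are drawn are illustrative assumptions, and the adversarial update of the discriminator itself is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIDiscriminator(nn.Module):
    """MLP scoring the dependency between an initial state z and an environment vector u."""
    def __init__(self, dz, du, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dz + du, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, z, u):
        return self.net(torch.cat([z, u], dim=-1)).squeeze(-1)

def mi_jsd_loss(disc, z0, u_same, u_other):
    """JSD lower bound on I(z0; u): positive pairs use the matching environment,
    negative pairs use environment vectors drawn from different systems."""
    pos = -F.softplus(-disc(z0, u_same)).mean()
    neg = F.softplus(disc(z0, u_other)).mean()
    return pos - neg   # minimizing this discourages shared information

# Toy usage with random tensors standing in for encoder outputs.
N, dz, du = 32, 16, 8
disc = MIDiscriminator(dz, du)
z0 = torch.randn(N, dz)
u_same = torch.randn(1, du).expand(N, du)    # same environment broadcast to all agents
u_other = torch.randn(N, du)                 # vectors from other environments
print(mi_jsd_loss(disc, z0, u_same, u_other))
```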
§.§ ODE Generative Model and Decoder
§.§.§ ODE Generative Model
After describing the initial state encoder and the environment encoder, we now define the ODE function that drives the system to move forward. The future trajectory of each agent can be determined by two important factors: the potential influence received from its neighbors in the interaction graph and the self-evolution of each agent. For example, in the n-body system, the position of each agent can be affected both by the force from its connected neighbors and its current velocity which can be inferred from its historical trajectories.
Therefore, our ODE function consists of two parts: a GNN that captures the continuous interaction among agents and the self-evolution of the node itself. One issue here is how we can decide the neighbors for each agent in the ODE function: as the interaction graph is evolving, the neighbors of each agent change dynamically based on their current positions, which are implicitly encoded in their latent state representations z_i^t,e, z_j^t,e. We propose to first decode the latent node representations z_i^t,e, z_j^t,e with a decoding function f_dec to obtain their predicted positions p_i^t,e, p_j^t,e at the current timestamp. Then we determine their connectivity based on whether their Euclidean distance r_ij^t,e = ||p_i^t,e - p_j^t,e||_2 is within the predefined radius R. This can be computed efficiently by using a multi-dimensional index structure such as the k-d tree. The decoding function f_dec is the same one that we will use in the decoder.
To incorporate the influence of exogenous factors, we further incorporate u^e into the general ODE function to improve model generalization ability as:
d z_i^t,e/dt = g(z_1^t,e, z_2^t,e⋯z_N^t,e) = ∑_j∈𝒩_i f_GNN(z̃_i^t,e, z̃_j^t,e) + f_self(z̃_i^t,e)
z̃_i^t,e = f_env(z_i^t,e||u^e)
where || denotes concatenation and f_GNN can be any GNN that conducts message passing among agents. f_self, f_env are implemented as two MLPs respectively. In this way, we learn the effect of latent exogenous factors from data without supervision where the latent representation u^e is trained end-to-end by optimizing the prediction loss.
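A minimal sketch of this environment-conditioned ODE function is given below; the message-passing network is a simple MLP over concatenated sender/receiver states and the neighbor set (edge_index) is assumed to be precomputed from the decoded positions, so it only illustrates the structure of the equation above.

```python
import torch
import torch.nn as nn

class EnvConditionedODEFunc(nn.Module):
    """dz_i/dt = sum_j f_GNN(z~_i, z~_j) + f_self(z~_i), with z~_i = f_env([z_i, u^e])."""
    def __init__(self, d, d_env, hidden=64):
        super().__init__()
        self.f_env = nn.Sequential(nn.Linear(d + d_env, hidden), nn.ReLU(),
                                   nn.Linear(hidden, d))
        self.f_gnn = nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU(),
                                   nn.Linear(hidden, d))
        self.f_self = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                    nn.Linear(hidden, d))

    def forward(self, z, u_env, edge_index):
        # z: [N, d] latent states; u_env: [d_env] exogenous factors of this environment;
        # edge_index: [2, E] sender/receiver pairs within the connectivity radius.
        z_tilde = self.f_env(torch.cat([z, u_env.expand(z.size(0), -1)], dim=-1))
        src, dst = edge_index
        msg = self.f_gnn(torch.cat([z_tilde[dst], z_tilde[src]], dim=-1))   # [E, d]
        agg = torch.zeros_like(z).index_add_(0, dst, msg)                   # sum over neighbors
        return agg + self.f_self(z_tilde)

# Toy usage: 5 agents with random states and a small random graph.
N, d, d_env = 5, 8, 4
func = EnvConditionedODEFunc(d, d_env)
z = torch.randn(N, d)
u = torch.randn(d_env)
edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
print(func(z, u, edges).shape)   # torch.Size([5, 8])
```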
§.§.§ Decoder
Given the ODE function g and agents' initial states z_i^0,e for i=1,2⋯ N, the latent trajectories for all agents are determined, which can be solved via any black-box ODE solver. Finally, a decoder generates the predicted dynamic features based on the decoding probability p(y_i^t,e | z_i^t,e) computed from the decoding function f_dec as shown in Eqn <ref>. We implement f_dec as a simple two-layer MLP with nonlinear activation. It outputs the mean of the normal distribution p(y_i^t,e | z_i^t,e), which we treat as the predicted value for each agent.
z_i^t_1,e⋯z_i^t_T,e = ODESolve(g, [z_1^0,e,z_2^0,e⋯z_N^0,e],(t_1⋯ t_T))
y_i^t,e ∼ p(y_i^t,e | z_i^t,e) = f_dec(z_i^t,e)
§.§ Training
We now introduce the overall training procedure of GG-ODE. For each training sample, we split it into two halves along the time axis, where we condition on the first half [t_1,t_K] in order to predict the dynamics in the second half [t_K+1,t_T]. Given the observed trajectories X^t_1:K,e, we first run the initial state encoder to compute the latent initial state z_i^0,e for each agent, which is sampled from the approximated posterior distribution q_ϕ(z_i^0,e| X^t_1:K,e). We then generate the latent representations of exogenous factors u^e for the environment e via the environment encoder. Next, we run the ODE generative model that incorporates the latent exogenous factors to compute the latent states for all agents in the future. Finally, the decoder outputs the predicted dynamics for each agent.
We jointly train the encoders, ODE generative model, and decoder in an end-to-end manner. The loss function consists of three parts: (1) the evidence lower bound (ELBO) which is the addition of the reconstruction loss for node trajectories and the KL divergence term for adding regularization to the inferred latent initial states for all agents. We use Z^0,e to denote the latent initial state matrix of all N agents. The standard VAE framework is trained to maximize ELBO so we take the negative as the ELBO loss; (2) the contrastive learning loss for preserving the time invariance properties of the learned exogenous factors; (3) the mutual information loss that disentangles the learned representations from the two encoders. λ_1, λ_2 are two hyperparameters for balancing the three terms. We summarize the whole procedure in Appendix <ref>.
ℒ = ℒ_ELBO + λ_1ℒ_contra + λ_2 ℒ_MI
ℒ_ELBO(θ,ϕ) = -𝔼_Z^0,e∼∏_i=1^Nq_ϕ(z_i^0,e| X^t_1:K,e)[log p_θ(Y^t_K+1:T,e)]
+KL[∏_i=1^Nq_ϕ(z_i^0,e |X^t_1:K,e) ‖ p(Z^0,e)]
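Schematically, one training step combines the three terms as in the sketch below; the fixed-variance Gaussian decoder, the λ values, and the random tensors standing in for model outputs are illustrative assumptions.

```python
import torch

def kl_standard_normal(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ) summed over agents and latent dimensions."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

def gaussian_nll(y_pred, y_true, sigma=1.0):
    """Reconstruction term -log p(y | z) for a fixed-variance Gaussian decoder."""
    return 0.5 * torch.sum((y_pred - y_true) ** 2) / sigma**2

def total_loss(y_pred, y_true, mu0, logvar0, l_contra, l_mi, lam1=0.5, lam2=0.5):
    elbo_loss = gaussian_nll(y_pred, y_true) + kl_standard_normal(mu0, logvar0)
    return elbo_loss + lam1 * l_contra + lam2 * l_mi

# Toy usage with random tensors standing in for the model outputs.
N, T, d_out, d_z = 4, 10, 2, 8
y_pred, y_true = torch.randn(N, T, d_out), torch.randn(N, T, d_out)
mu0, logvar0 = torch.randn(N, d_z), torch.randn(N, d_z)
print(total_loss(y_pred, y_true, mu0, logvar0,
                 l_contra=torch.tensor(0.3), l_mi=torch.tensor(0.1)))
```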
§ EXPERIMENTS
§.§ Experiment Setup
§.§.§ Datasets
We illustrate the performance of our model across two physical simulations that exhibit different system dynamics over time: (1) The Water dataset <cit.>, which describes the fluid dynamics of water within a container. Containers can have different shapes and numbers of ramps with random positions inside them, which we view as different environments. The dataset is simulated using the material point method (MPM), which is suitable for simulating the behavior of interacting, deformable materials such as solids, liquids, gases [<https://en.wikipedia.org/wiki/Material_point_method>]. For each data sample, the number of particles can vary but the trajectory lengths are kept the same as 600. The input node features are 2-D positions of particles, and we calculate the velocities and accelerations as additional node features using finite differences of these positions. The total number of data samples (trajectories) is 1200 and the number of environments is 68, where each environment can have multiple data samples with different particle initializations such as positions, velocities, and accelerations.
(2) The Lennard-Jones potential dataset <cit.>, which describes the soft repulsive and attractive interactions between simple atoms and molecules [<https://en.wikipedia.org/wiki/Lennard-Jones_potential>]. We generate data samples with different temperatures, which could affect the potential energy preserved within the whole system thus affecting the dynamics. We view temperatures as different environments. The total number of data samples (trajectories) is 6500 and the number of environments is 65. Under each environment, we generate 100 trajectories with different initializations. The trajectory lengths are kept the same as 100. The number of particles is 1000 for all data samples. More details about datasets can be found in Appendix <ref>.
§.§.§ Task Evaluation and Data Split
We predict trajectory rollouts across varying lengths and use Mean Square Error (MSE) as the evaluation metric.
Task Evaluation.
The trajectory prediction task is conducted under two settings: (1) Transductive setting, where we evaluate the test sequences whose environments are seen during training; (2) Inductive setting, where we evaluate the test sequences whose environments are not observed during training. It helps to test the model's generalization ability to brand-new systems.
Data Split.
We train our model in a sequence-to-sequence setting where we split the trajectory of each training sample into two parts [t_1,t_K] and [t_K+1,t_T]. We condition on the first part of observations to predict the second part. To conduct data split, we first randomly select 20% environments whose trajectories are all used to construct the testing set X_test^Induct in the inductive setting. For the remaining trajectories that cover the 80% environments, we randomly split them into three partitions: 80% for the training set X_train, 10% for the validation set X_val and 10% for the testing set in the transductive setting X_test^trans. In other words, we have two test sets for the inductive and transductive settings respectively, one training set and one validation set.
To fully utilize the data points within each trajectory, we generate training and validation samples by splitting each trajectory into several chunks that can overlap with each other, using a sliding window. The sliding window has three hyperparameters: the observation length and prediction length for each sample, and the interval between two consecutive chunks (samples). Specifically, for the Water dataset, we set the observation length as 50 and the prediction length as 150. We obtain samples from each trajectory by using a sliding window of size 200 and setting the sliding interval as 50. For the Lennard-Jones potential dataset, we set the observation length as 20, the prediction length as 50, and the interval as 10. The procedure is summarized in Appendix <ref>. During evaluations for both settings, we ask the model to roll out over the whole trajectories without further splitting, whose prediction lengths are larger than the ones during training. The observation lengths during testing are set as 20 for the Lennard-Jones potential dataset and 50 for the Water dataset across the two settings.
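As a concrete illustration of this chunking scheme (the array layout and function name below are our own assumptions, not the authors' released code), a sketch might look like:

import numpy as np

def make_chunks(traj, obs_len, pred_len, interval):
    """Split one trajectory of shape (T, N, d) into overlapping
    (observation, prediction) samples with a sliding window."""
    window = obs_len + pred_len
    samples = []
    for start in range(0, traj.shape[0] - window + 1, interval):
        obs = traj[start:start + obs_len]
        target = traj[start + obs_len:start + window]
        samples.append((obs, target))
    return samples

# Water settings: window = 50 + 150 = 200, stride 50 over trajectories of length 600
# chunks = make_chunks(np.zeros((600, 100, 2)), obs_len=50, pred_len=150, interval=50)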
§.§ Baselines
We compare against both discrete and continuous neural models, none of which explicitly model the influence of different environments. For the discrete models we choose: NRI <cit.>, a discrete GNN model that uses a VAE to infer the interaction type between pairs of agents and is trained via one-step predictions; GNS <cit.>, a discrete GNN model that uses multiple rounds of message passing to predict every single step; and LSTM <cit.>, a classic recurrent neural network (RNN) that learns the dynamics of each agent independently. For the continuous models, we compare with NDCN <cit.> and Social ODE <cit.>, two ODE-based methods that follow the encoder-processor-decoder structure with a GNN as the ODE function; in both, the initial state for each agent is drawn from a single data point instead of a leading sequence. We also compare with CG-ODE <cit.>, which has the same architecture as our model but uses two coupled ODE functions to guide the evolution of systems.
§.§ Performance Evaluation
We evaluate the performance of our model based on Mean Square Error (MSE), as shown in Table <ref>. As data samples have varying trajectory lengths, we report MSEs over three rollout percentages corresponding to different prediction horizons: 30%, 60%, and 100%, where 100% means the model conditions on the observation sequence and predicts all remaining timestamps.
Firstly, we observe that our model consistently outperforms all baselines across different settings when making long-range predictions, while achieving competitive results on short-range predictions. This demonstrates the effectiveness of our model in learning continuous multi-agent system dynamics across environments. By comparing the performance of LSTM with the other methods, we can see that modeling the latent interactions among agents does improve prediction performance compared with predicting trajectories for each agent independently. We also observe that the performance gap between our model and the other baselines increases as we generate longer rollouts, showing its expressive power for long-term prediction. This may be because our model is a continuous model trained in a sequence-to-sequence paradigm, whereas the discrete GNN methods are only trained to make fixed-step predictions. The other continuous model, NDCN, conditions on only a single data point to make predictions for the whole future trajectory, resulting in suboptimal performance. Finally, our model shows a larger performance gain over existing methods in the inductive setting than in the transductive setting, demonstrating its ability to quickly adapt to unseen systems from a few data points. Figure <ref> visualizes the prediction results under the transductive setting for the Water dataset.
§.§.§ Ablation Studies
To further analyze the rationale behind our model design, we conduct an ablation study with three model variants: (1) we remove the contrastive learning loss, which forces the learned exogenous factors to satisfy the time-invariance property (denoted w/o ℒ_contra); (2) we remove the mutual information minimization loss, which reduces the variance of the learned exogenous factors within the same environment (denoted w/o ℒ_MI); (3) we share the parameters of the two encoders for computing the latent representation m_i^e of each observation sequence in the temporal graph (denoted shared encoders). As shown in Table <ref>, all three variants perform worse than the full model, verifying the rationale behind the three key designs. Notably, when making long-range predictions, removing ℒ_MI causes more harm than removing ℒ_contra. This can be understood as follows: the latent initial states are more important for short-term predictions, while the disentangled latent initial states and exogenous factors are both important for long-range predictions.
§.§.§ Hyperparameter Study
We study the effect of the ratio λ_1/λ_2, the hyperparameters balancing the two regularization terms that guide the learning of the two encoders, on predictions over different horizons. As illustrated in Figure <ref>, the optimal ratios for 30%, 60%, and 100% rollout predictions are 2, 1, and 0.5 respectively, under both the transductive and inductive settings. This indicates that modeling the exogenous factors plays a more important role in facilitating long-term predictions, which is consistent with the prediction errors in Table <ref> when comparing the w/o ℒ_MI variant with the w/o ℒ_contra variant. However, overly emphasizing ℒ_MI also harms model performance, as the time-invariance property achieved by ℒ_contra is important for guaranteeing the correctness of the learned latent initial states, which determine the starting points of the predicted future trajectories.
§.§.§ Sensitivity Analysis.
Our model can condition on observation sequences of arbitrary length to make trajectory predictions, as opposed to existing baselines that only condition on fixed-length observations. This allows the model to fully utilize all past information. We therefore study the effect of the observation length on predictions over different horizons. As shown in Figure <ref>, the optimal observation lengths for predicting rollouts of 20, 40, and 50 steps are 20, 25, and 35 in the inductive setting, and 15, 25, and 30 in the transductive setting. When predicting long-range trajectories, our model typically requires a longer observation sequence to obtain accurate results. Also, for predictions of the same length, the inductive setting requires a longer observation sequence than the transductive setting.
§.§ Case Study
We conduct a case study to examine the learned representations of the latent exogenous factors on the Lennard-Jones potential dataset. We first randomly choose one data sample for each of the 65 temperatures and visualize the learned representations of exogenous factors. As shown in Figure <ref> (a), the representations of higher temperatures are closer to each other on the right half of the figure, whereas the lower temperatures are mostly distributed on the left half. Among the 65 temperatures, 20% of them are not seen during training which we circled in black. We can see those unseen temperatures are also properly distributed, indicating the great generalization ability of our model. We next plot the representations for all data samples under temperatures 2.5 and 3.5 respectively as shown in Figure <ref> (b). We can see that the learned representations are clustered within the two temperatures, indicating our contrastive learning loss is indeed beneficial to guide the learning process of exogenous factors.
§ CONCLUSION
In this paper, we investigate the problem of learning the dynamics of continuous interacting systems across environments. We model system dynamics in a continuous fashion through graph neural ordinary differential equations. To achieve model generalization, we learn a shared ODE function that captures the commonalities of the dynamics among environments, while designing an environment encoder that automatically learns environment-specific representations for exogenous factors from observed trajectories. To disentangle the representations from the initial state encoder and the environment encoder, we propose a regularization loss via mutual information minimization to guide the learning process. We additionally design a contrastive learning loss to reduce the variance of learned exogenous factors across time windows under the same environment. The proposed model achieves accurate predictions for varying physical systems under different environments, especially for long-term predictions. There are, however, some limitations. Our current model only learns one static environment-specific variable to achieve model generalization, yet the environment itself (e.g., the temperature) can change over time. How to capture the dynamic influence of such evolving environments remains challenging.
This work was partially supported by NSF 1829071, 2031187, 2106859, 2119643, 2200274, 2211557, 1937599, 2303037, NASA, research awards from Amazon, Cisco, NEC, and DARPA #HR00112290103, DARPA #HR0011260656. We would like to thank Mathieu Bauchy, Han Liu and Abhijeet Gangan for their help to the dataset generation procedure and valuable discussion throughout the project.
§ APPENDIX
§.§ Datasets
We conduct experiments over two datasets: The Water dataset and the Lennard-Jones potential dataset. As introduced in Sec <ref>, the edges between agents are assigned if the Euclidean distance between the agents' positions r_ij^t,e = ||p_i^t,e - p_j^t,e||_2 is within a (small) connectivity radius R.
The connectivity radius for the two datasets is set as 0.015 and 2.5 respectively. The number of particles is kept the same as 1000 for all trajectories in the Lennard-Jones potential dataset, while in the Water dataset, each data sample can have a varying number of particles, and the maximum number of particles is 1000.
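For illustration, edge construction from a single frame of positions could be sketched as follows; the function below is our own naive O(N²) stand-in rather than the implementation used in the paper:

import torch

def radius_edges(pos, R):
    """Return a (2, E) edge index connecting agent pairs whose Euclidean
    distance is within the connectivity radius R (self-loops excluded)."""
    n = pos.size(0)
    dist = torch.cdist(pos, pos)                                  # (N, N) pairwise distances
    mask = (dist <= R) & ~torch.eye(n, dtype=torch.bool, device=pos.device)
    src, dst = mask.nonzero(as_tuple=True)
    return torch.stack([src, dst])

# e.g. Lennard-Jones setting: edges = radius_edges(torch.rand(1000, 3) * 10, R=2.5)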
§.§.§ Data Split.
Our model is trained in a sequence-to-sequence mode, where we split the trajectory of each training sample into two parts [t_1,t_K] and [t_K+1,t_T]. We condition on the first part of observations to predict the second part. To fully utilize the data points within each training sample, we split each trajectory into several chunks with three hyperparameters: the observation length and prediction length for each sample, and the interval between two consecutive chunks (samples). We summarize the procedure in Algorithm <ref>, where K is the number of trajectories and d is the input feature dimension.
§.§.§ Input Features and Prediction Target.
For the Water dataset, the input node features are 2-D positions p_i^t,e, and we additionally calculate the 2-D velocities and accelerations using finite differences of these positions as v_i^t,e = p_i^t,e - p_i^t-1,e and a_i^t,e = v_i^t,e - v_i^t-1,e = p_i^t,e - 2p_i^t-1,e + p_i^t-2,e. For positions, velocities, and accelerations, we precompute their mean and variance across all samples and normalize them with the z-score. For the Lennard-Jones potential dataset, the input node features are 3-D positions, velocities, and accelerations. For both datasets, we train the model to predict the future positions of each agent over time.
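A sketch of this finite-difference feature computation is given below; the zero-padding of the first step(s) and the tensor layout are our own assumptions:

import torch

def kinematic_features(pos):
    """pos: (T, N, d) positions. Returns velocities and accelerations from
    backward finite differences, zero-padded for the first step(s)."""
    vel = torch.zeros_like(pos)
    acc = torch.zeros_like(pos)
    vel[1:] = pos[1:] - pos[:-1]      # v_t = p_t - p_{t-1}
    acc[2:] = vel[2:] - vel[1:-1]     # a_t = v_t - v_{t-1}
    return vel, acc

def zscore(x, mean, std, eps=1e-8):
    """Normalize with dataset-level statistics precomputed across all samples."""
    return (x - mean) / (std + eps)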
§.§ Software and Experiment Environment
We implement our model in PyTorch. All experiments are conducted on an NVIDIA A100 GPU. For all datasets, we train for 100 epochs and report the model with the lowest validation loss. We report average results over 10 runs. The encoders, the generative model, and the decoder are jointly optimized using the Adam optimizer <cit.> with a learning rate of 0.005. The batch size is set to 128 for the Water dataset and 256 for the Lennard-Jones potential dataset. Note that the batch size denotes the number of data samples generated as in Alg <ref>.
§.§ Implementation Details
We now introduce the implementation details of our model.
§.§.§ Initial State Encoder.
The initial state encoder infers latent initial states for all agents simultaneously via a two-step procedure: first, the encoder computes a structural representation for each observation node using a spatial-temporal GNN. We set the number of GNN layers l to 2 and the hidden dimension to 64 across all datasets. LayerNorm <cit.> is employed to provide training stability in our experiments. Next, a self-attention-based sequence representation learning procedure computes a sequence representation for each agent and samples the initial state from it. We use a 2-layer MLP with a latent dimension of 128 and Tanh activation as f_trans in Eqn <ref>.
§.§.§ Environment Encoder.
The environment encoder learns the latent representations of exogenous factors based on the observed trajectories. Its architecture is the same as that of the initial state encoder, but the two encoders use separate sets of parameters, with the same hyperparameter settings introduced in Sec <ref>.
Contrastive Learning Loss Sampling.
The contrastive learning loss ℒ_contra shown in Eqn <ref> is designed to achieve the time-invariance property of the learned exogenous factors. Specifically, we sample positive pairs X^t_1:t_2,e, X^t_3:t_4,e using two strategies: (1) intra-sample generation, where X^t_1:t_2,e and X^t_3:t_4,e come from the same training sample but represent two different time windows. We achieve this by randomly selecting two timestamps within each training sample to serve as t_1 and t_3, and then setting the window size to the observation length L to get t_2 = t_1 + L and t_4 = t_3 + L. (2) Cross-sample generation, where X^t_1:t_2,e and X^t_3:t_4,e come from two different samples within the same environment e. Specifically, for each training sample, we first randomly choose another sample under the same environment. We then generate t_1 and t_3 by randomly selecting one timestamp for each, and calculate t_2 and t_4 by adding the observation length. To generate a negative pair X^t_5:t_6,e' for each X^t_1:t_2,e, we first randomly select another environment e', from which we randomly pick one data sample. We then randomly select one timestamp within that data sample to serve as t_5 and obtain t_6 = t_5 + L. The temperature scalar τ in Eqn <ref> is set to 0.05.
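Since Eqn <ref> itself is not reproduced here, the following is only a generic InfoNCE-style stand-in for a temperature-scaled contrastive objective; the exact form used in the paper may differ:

import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, tau=0.05):
    """InfoNCE-style objective: pull the exogenous-factor representation of an
    anchor window toward its positive (same environment, different window) and
    away from negatives drawn from other environments.
    anchor, positive: (B, d); negatives: (B, K, d)."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos = (a * p).sum(-1, keepdim=True) / tau             # (B, 1)
    neg = torch.einsum('bd,bkd->bk', a, n) / tau          # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(a.size(0), dtype=torch.long, device=a.device)
    return F.cross_entropy(logits, labels)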
Mutual Information Minimization Loss Sampling. To disentangle the representations of the latent initial states and the exogenous factors, we design the mutual information minimization loss in Eqn <ref> as a regularization term during training. We conduct the sampling procedure for positive and negative pairs as follows: For each training sample, we pair the latent initial states z_i^0,e of all the N agents with the learned exogenous factors u^e, thus constructing N positive pairs. To generate negative pairs, we randomly select another environment e' and pair it with the latent initial states of all agents within one training sample. Thus we obtain the same number of positive and negative pairs during training. The discriminator Ψ is implemented as a two-layer MLP with hidden dimension and out dimension as 128 and 64 respectively.
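The exact mutual-information estimator is likewise not reproduced here. The paper's discriminator Ψ has hidden and output dimensions 128 and 64, which suggests a vector-valued head; the sketch below therefore uses a simplified scalar-score discriminator and a MINE-style (Donsker-Varadhan) bound purely to illustrate the adversarial structure of discriminator-based MI minimization:

import math
import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    """Scores (latent initial state, exogenous factor) pairs with a scalar."""
    def __init__(self, z_dim, u_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + u_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, u):
        return self.net(torch.cat([z, u], dim=-1)).squeeze(-1)

def mi_estimate(disc, z, u_pos, u_neg):
    """Donsker-Varadhan lower bound on I(z; u): the discriminator is trained to
    maximize it, while the encoders are trained to minimize it."""
    pos = disc(z, u_pos).mean()
    neg = torch.logsumexp(disc(z, u_neg), dim=0) - math.log(z.size(0))
    return pos - neg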
§.§.§ ODE Function and Solver.
The ODE function introduced in Eqn <ref> consists of two parts: the GNN f_GNN that captures the mutual interaction among agents and f_self that captures the self-evolution of agents. We use the following two-layer message passing GNN function as f_GNN:
v → e: 𝐞_(i, j)^l_1(t,e) = f_e^1([𝐳_i^t,e || 𝐳_j^t,e])
e → v: 𝐳_j^l_1(t,e) =f_v^1(∑_i ≠ j𝐞_(i, j)^l_1(t,e))
v → e: 𝐳_j^l_2(t,e) =f_e^2([𝐳_i^l_1(t,e)||𝐳_j^l_1(t,e)])
where || denotes concatenation, f_e^1, f_v^1, f_e^2 are two-layer MLPs with hidden dimension size of 64. We use 𝐳_j^l_2(t,e) as output representation for agent j at timestamp t from f_GNN. The self-evolution function f_self and the transformation function f_env are also implemented as two-layer MLPs with hidden dimension of 64. We use the fourth-order Runge-Kutta method from torchdiffeq python package <cit.> as the ODE solver, which solves the ODE systems on a time grid that is five times denser than the observed time points. We also utilize the Adjoint method described in <cit.> to reduce memory usage.
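A compressed stand-in for this ODE function and solver call is sketched below; it collapses the two rounds of message passing into a single edge/node update and uses torchdiffeq's adjoint-based solver, so the details differ from the exact f_GNN above:

import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint   # adjoint method to reduce memory

class GNNODEFunc(nn.Module):
    """dz/dt = f_GNN(z) + f_self(z): one round of edge/node message passing
    plus a self-evolution term, as a simplified stand-in."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.f_edge = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.f_node = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.f_self = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.edge_index = None            # (2, E) tensor, set before integration

    def forward(self, t, z):              # z: (N, dim) latent states of all agents
        src, dst = self.edge_index
        msg = self.f_edge(torch.cat([z[src], z[dst]], dim=-1))
        agg = torch.zeros_like(z).index_add_(0, dst, msg)      # sum incoming messages
        return self.f_node(agg) + self.f_self(z)

# func = GNNODEFunc(dim=64); func.edge_index = ...      # edges from the temporal graph
# z_traj = odeint(func, z0, t_grid, method='rk4')       # (len(t_grid), N, dim)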
§.§ Pseudo-Code of Training
|
http://arxiv.org/abs/2307.04606v2 | 20230710145046 | Well-Orderedness of the Bashicu Matrix System | [
"Samuel Vargovčík"
] | math.LO | [
"math.LO",
"03E10"
] |
Well-Orderedness of the Bashicu Matrix System
Samuel Vargovčík
August 12, 2023
=========================================================================
The Bashicu Matrix System is a recursive system of ordinal notations created by the user BashicuHyudora of the japanese Googology Wiki. In this paper, we prove that the Bashicu Matrix System is well-ordered.
§ INTRODUCTION
The Bashicu Matrix System (BMS) is a recursive system of ordinal notations with a large order type created by the user BashicuHyudora of the japanese Googology Wiki <cit.>. Originally, it was defined informally in pseudocode based on the programming language BASIC, and the following is the agreed-upon formalization:
An array is a sequence of equal-length sequences of natural numbers, i.e. an element of (ℕ^n)^m for some n,m∈ℕ. For every array A∈(ℕ^n)^m, the columns of A are its elements, and for each n'<n, the n'-th row of A is the sequence of length m such that for each m'<m, the m'-th element of the n'-th row is the n'-th element of the m'-th column. We will denote concatenation of sequences by +.
Let A be any array and n be any natural number. For every m smaller than the length of A's columns and every i smaller than the length of A, the m-parent of the i-th column is the last column before it whose m-th element is smaller than the m-th element of the i-th column, and which is an (m-1)-ancestor of the i-th column if m>0, if such a column exists. If no such column exists, then the i-th column does not have an m-parent. The m-ancestors (also called strict m-ancestors) of a column are its m-parent and the m-ancestors of its m-parent. The non-strict m-ancestors of a column are the column itself and its m-ancestors.
If A is empty, then the expansion of A at n is A[n]=A. Otherwise let C be the last element of A and let m_0 be maximal such that C has an m_0-parent, if such an m_0 exists, otherwise m_0 is undefined. Let arrays G,B_0,B_1,...,B_n be such that:
* A=G+B_0+(C).
* The first element of B_0 is the m_0-parent of C if m_0 is defined and otherwise B_0 is empty.
* For each D in B_0 and m<m_0, if the first column in B_0 is D or an m-ancestor of D, then the m-th element of D is said to ascend.
* B_i is a copy of B_0, but for each ascending element of each column in B_0, its copy in B_i is increased by i·((m-th element of C)-(m-th element of the first column in B_0)), where m is the index of the row in which that element is.
Then the expansion A[n] of A at n is G+B_0+B_1+...+B_n, with all rows of zeroes at the bottom removed.
BMS is the closure of {((0,0,...,0,0_n),(1,1,...,1,1_n)) : n∈ℕ} under expansion at each natural number, ordered by the ⊆-minimal partial order such that A[n]≤ A for each n∈ℕ and A∈ BMS. Here, a partial order ≤ is the set of pairs (x,y) such that x≤ y.
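To make the expansion rule above concrete, the following is an illustrative Python sketch written directly from the definition above. It is our own reading of the definition, is not part of the formal development, and has only been sanity-checked on small arrays.

def m_parent(A, i, m):
    """Index of the m-parent of column i in A, or None if it has no m-parent."""
    for j in range(i - 1, -1, -1):
        if A[j][m] < A[i][m] and (m == 0 or is_m_ancestor(A, j, i, m - 1)):
            return j
    return None

def is_m_ancestor(A, j, i, m):
    """True iff column j is a (strict) m-ancestor of column i."""
    p = m_parent(A, i, m)
    while p is not None:
        if p == j:
            return True
        p = m_parent(A, p, m)
    return False

def expand(A, n):
    """The expansion A[n]: write A = G + B_0 + (C) and append copies B_0, ..., B_n."""
    if not A:
        return []
    rows, last = len(A[0]), len(A) - 1
    C = A[last]
    parents = [m_parent(A, last, m) for m in range(rows)]
    defined = [m for m in range(rows) if parents[m] is not None]
    if not defined:                       # C has no parent at all: A[n] = G
        G, B0, m0, r = A[:last], [], 0, last
    else:
        m0 = max(defined)                 # the largest m for which C has an m-parent
        r = parents[m0]                   # B_0 starts at the m_0-parent of C
        G, B0 = A[:r], A[r:last]
    result = [list(col) for col in G]
    for i in range(n + 1):                # append B_0, B_1, ..., B_n
        for k, col in enumerate(B0):
            new_col = list(col)
            for m in range(m0):
                # entry m of col ascends iff B_0's first column is col itself
                # or an m-ancestor of it (ancestry computed in A)
                if k == 0 or is_m_ancestor(A, r, r + k, m):
                    new_col[m] += i * (C[m] - B0[0][m])
            result.append(new_col)
    while result and result[0] and all(col[-1] == 0 for col in result):
        for col in result:                # drop rows of zeroes at the bottom
            col.pop()
    return result

# Sanity check: expand([[0, 0], [1, 1]], 3) == [[0], [1], [2], [3]]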
This is the fourth official version of the system, which is why it is also referred to as BM4. The previous versions BM1, BM2 and BM3 were not well-founded, but as we prove below, BM4 is well-founded. There are also unofficial versions, of which BM2.3 is strongly believed to be equivalent to BM4 <cit.>, and BM3.3 is also notable for its similarity to BM4 and temporarily more predictable behavior. However, they are not the focus of this paper, so from now on, we will only refer to BM4.
The question of whether BMS is well-ordered has been an open problem for almost 8 years, and it was among the most significant open problems in googology. Although BMS is yet to be used outside of this field, its simplicity and large order type provide hope for future uses in proof theory and model theory. Before this paper, the research about BMS has brought the following results.
BMS restricted to arrays with one row is also called the Primitive Sequence system (or PrSS), and has a simple isomorphism with the iterated base-ω Cantor normal form - intuitively, each column represents a single ω in the string, the element of the column is the "height" of the ω (the number of exponents it appears in), and distinct ωs with the same height are separated by a + at the same level, unless there is an ω between them with a lower height. ωs that do not have any ω in their exponent in the resulting string are exponentiated to 0. This isomorphism can be proven easily by transfinite induction on the Cantor normal form expression, thus the order type of PrSS is ε_0.
BMS restricted to arrays with two rows is also called the Pair Sequence System (or PSS), and was proven well-founded in 2018 <cit.>, with its order type shown to be the proof-theoretic ordinal of Π^1_1-CA_0, i.e. the countable collapse of ω_ω using standard collapsing functions (such as Buchholz's function in this case).
If we abbreviate ⟨ L_α,∈⟩≺_Σ_1⟨ L_β,∈⟩ as α<_0β, then informal estimates say that the order type of the set of arrays in BMS smaller than ((0,0,0),(1,1,1),[0](2,2,2)) is most likely the supremum of, for each n, a recursive collapse (using standard collapsing functions) of the smallest ordinal α_0 for which there exist α_1,α_2,...,α_n such that α_0<_0α_1<_0α_2<_0...<_0α_n.
The order type of the entirety of BMS has not been carefully estimated in terms of ordinal functions yet, but is expected to be the supremum of, for each n, a recursive collapse of the smallest ordinal α for which there exists β with ⟨ L_α,∈⟩≺_Σ_n⟨ L_β,∈⟩, using collapsing functions that may be standard in the future.
Subjectively, BMS is a very elegant way to represent large recursive ordinals. With enough formalization effort, it could give rise to a system of recursively large ordinals. This system would be similar to stability in structure and, as far as we know, similar to stability in scale too, but perhaps easier to understand or easier to use for some purposes such as ordinal analysis.
We utilize this similarity to prove that BMS is well-ordered. Specifically, we first prove that BMS is totally ordered and the order is precisely the lexicographical order. We then prove that a certain reflection property holds for stable ordinals. We show that this property allows us to map elements of BMS to ordinals while preserving the order. Using this order-preserving function from BMS to Ord, any infinite descending sequence in BMS would be mapped to an infinite descending sequence in Ord, which cannot exist by definition, thus BMS is well-ordered.
§ THE PROOF
Given that a property holds for every element of a set X, and that if it holds for x then it holds for f(x) for each f in some set F of functions, it is easy to see from the definition of closure that the property holds for all elements of the closure of X under the functions f∈ F. We consider this fact trivial enough to be used implicitly.
It is clear that for A,A'∈ BMS, A'<A iff A is non-empty and A'=A[n_0][n_1]...[n_m] for some m,n_0,n_1,...,n_m∈ℕ.
For all A∈ BMS and n∈ℕ, A[n] is lexicographically smaller than A (with the columns also compared lexicographically).
Using the variable names from the definition of BMS, we have A=G+B_0+(C) and A[n]=G+B_0+B_1+...+B_n. Then A[n]<_lexA iff B_1+B_2+...+B_n<_lex(C), which is trivial if m_0 is undefined (the empty sequence is lexicographically smaller than all other sequences, including (C)), and otherwise equivalent to the first column in B_1 being lexicographically smaller than C.
Let R_i be the first column in B_i. Since R_0 is the m_0-parent of C, it is an m-ancestor of C for each m≤ m_0, thus the m-th element of R_0 is less than the m-th element of C. By definition, R_1 is a copy of R_0, but for each m<m_0, the m-th element is increased either by 0 or by the difference between itself and the m-th element of C. Then it is less than or equal to the m-th element of C, so the sequence of the first m_0 elements of R_1 is pointwise smaller than or equal to the sequence of the first m_0 elements of C (in fact, it is equal, but that is not necessary for this proof). However, the m_0-th element of R_1 is necessarily equal to the m_0-th element of R_0 since m_0<m_0 is false, thus it is strictly smaller than the m_0-th element of C.
Therefore R_1<_lexC, which implies B_1+B_2+...+B_n<_lex(C), and thus A[n]<_lexA.
For all A,A'∈ BMS, A'<A implies A'<_lexA.
BMS is totally ordered.
For every non-empty A∈ BMS, A[0] is simply A without the last column, as it is equal to G+B_0, and thus A=A[0]+(C). Then it is trivial to prove by induction that for all A,A'∈ BMS, if A' is a subsequence of A, then A'=A[0][0]...[0][0]_n for some n∈ℕ, and thus A'≤ A. Together with A[n] being a subsequence of A[n+1] for all A∈ BMS and n∈ℕ, this also means that for all A,A'∈ BMS and n∈ℕ, A[n]≤ A[n+1], and if A[n]<A'≤ A[n+1], then A[n]≤ A'[0]. This implies that if some subset X of BMS is totally ordered, then X∪{A[n] : A∈ X n∈ℕ} is also totally ordered. By induction, it is clear that if X⊆ BMS is totally ordered, then X∪{A[n_0] : A∈ X n_0∈ℕ}∪{A[n_0][n_1] : A∈ X n_0,n_1∈ℕ}∪...∪{A[n_0][n_1]...[n_m] : A∈ X n_0,n_1,...,n_m∈ℕ} is totally ordered for each m∈ℕ. Let X_0={((0,0,...,0,0_n),(1,1,...,1,1_n)) : n∈ℕ}. Since each A∈ BMS is in {A”[n_0][n_1]...[n_m] : A”∈ X_0 n_0,n_1,...,n_m∈ℕ} for some m∈ℕ, it is obvious that if X_0 is totally ordered, then for all A,A'∈ BMS, there's some m∈ℕ such that A,A'∈ X_0∪{A”[n_0] : A”∈ X_0 n_0∈ℕ}∪{A”[n_0][n_1] : A”∈ X_0 n_0,n_1∈ℕ}∪...∪{A”[n_0][n_1]...[n_m] : A”∈ X_0 n_0,n_1,...,n_m∈ℕ}, which is totally ordered, and therefore A,A' are comparable. So if X_0 is totally ordered, then BMS is totally ordered.
It is now sufficient to prove that X_0 is totally ordered. This is easy, since ((0,0,...,0,0_n+1),(1,1,...,1,1_n+1))[1] is trivially ((0,0,...,0,0_n),(1,1,...,1,1_n)), and thus by induction, for each n<m∈ℕ, ((0,0,...,0,0_n),(1,1,...,1,1_n))=((0,0,...,0,0_m),(1,1,...,1,1_m))[1][1]...[1][1]_m-n<((0,0,...,0,0_m),(1,1,...,1,1_m)), and all elements of X_0 are of this form, so all elements of X_0 are pairwise comparable.
The ordering of BMS coincides with the lexicographical ordering with columns compared lexicographically.
Let A be a non-empty array and n be a natural number, let G,B_0,B_1,...,B_n,m_0 be as in Definition <ref>, and let l_0,l_1 be the lengths of G,B_0.
(i) For all i<l_0, j<l_1 and k∈ℕ, in A[n], the i-th column in G is a k-ancestor of the j-th column in B_0 iff it is a k-ancestor of the j-th column in B_n.
(ii) For all i,j<l_1 and k∈ℕ, the i-th column in B_0 is a k-ancestor of the j-th column in B_0 iff the i-th column in B_n is a k-ancestor of the j-th column in B_n.
(iii) If n>0, then for all i<l_1 and k<m_0, in A, the i-th column in B_0 is a k-ancestor of the last column of A iff in A[n], the i-th column in B_n-1 is a k-ancestor of the first column in B_n.
(iv) For all 0<i<l_1 and k∈ℕ, in A[n], the k-parent of the i-th column in B_n is either in B_n or in G.
(v) For all i,j<l_1 and k∈ℕ and n_0<n_1<n, in A[n], the i-th column in B_n_0 is a k-ancestor of the j-th column in B_n_1 iff it's a k-ancestor of the j-th column in B_n_1+1.
We can prove this by induction on k. The proof is relatively straightforward, but tedious. The author recommends drawing the mentioned ancestry relations in order to see what is happening.
Assume all 5 statements hold for all k'<k.
For (ii), fix i and j. If j=0 then it is trivial, so we will only consider the case j>0. From the assumption, it follows that for all k'<k and i'<l_1, the i'-th column in B_0 is a k'-ancestor of the j-th column in B_0 iff the i'-th column in B_n is a k'-ancestor of the j-th column in B_n. Let I be the set of i' such that for all k'<k, the i'-th column in B_0 is a k'-ancestor of the j-th column in B_0.
Since for all k'<k, k'-ancestry is a total order on the columns with indices in I, the k-parent of each such column is simply the last such column before it with a smaller k-th element. The k-th element of the j-th column in B_0 ascends iff the first column in B_0 is in I and is a k-ancestor of the j-th column in B_0, which is equivalent to the k-th element of the first column in B_0 being smaller than the k-th element of all columns between it and the j-th column in B_0, so it is also a k-ancestor of all other columns with indices in I. This means that either the k-th elements of all columns in B_0 with indices in I ascend or the k-th element of the j-th column in B_0 doesn't ascend.
In the first case, the differences between the columns in B_n with indices in I are the same as in B_0, and since k'-ancestry relations between them are also the same as in B_0 for k'<k, k-ancestry must be the same too, because everything it depends on is the same. In the second case, since the j-th column doesn't ascend, in B_n, there trivially cannot be any k-ancestors of the j-th column that aren't copies of k-ancestors of the j-th column in B_0. Since this possibility requires that the first column in B_0 is not a k-ancestor of the j-th column, it is also not a k-ancestor of any k-ancestor of the j-th column, thus the k-th elements of the k-ancestors of the j-th column also don't ascend, and therefore the differences between them are the same, implying that the k-ancestry relations are preserved. Either way, (ii) holds for k.
The above can trivially be extended to include the next copy of the first column in B_0, and then since the k-th element of the first column in B_1 is easily seen to be the same as C as long as k<m_0, (iii) holds for k.
Then to prove (iv), we first observe that if for some k'<k, the k'-parent of the i-th column in B_n is in G, then all of its k'-ancestors are in G, and its k-parent must be one of its k'-ancestors so it is also in G. So we're left with the case that for all k'<k, the k'-parent of the i-th column in B_n is in B_n.
If the first column of B_n is a k'-ancestor of the i-th column in B_n for all k'<k, and yet its k-parent is not in B_n or G, then the first column in B_n is not a k-ancestor of the i-th column in B_n. Therefore from (ii) for k, which we have already proven, we get that the k-parent of the i-th column in B_0 is not in B_0 (therefore it is in G), which also implies that the k-th element of the i-th column in B_0 does not ascend in the expansion of A, so it is equal to the k-th element of the i-th column in B_n. But from (ii) for all k'<k and the fact that the first column in B_n is a k'-ancestor of the i-th column in B_n for all k'<k, we get that the first column in B_0 is a k'-ancestor of the i-th column in B_0 for all k'<k.
This, together with its k-parent being in G, means that for all columns in B_0 that are k'-ancestors of the i-th column in B_0 for all k'<k, their k-th element is at least as large as the k-th element of the i-th column in B_0, and therefore at least as large as the k-th element of the i-th column in B_n. This includes the first column in B_0, and since the k-th element of the first column in B_n is by definition at least as large as the k-th element of the first column in B_0, which is at least as large as the k-th element of the i-th column in B_n, which is by definition strictly larger than the k-th element of the k-parent of the i-th column in B_n, we get that the k-th element of the first column in B_n is strictly larger than the k-th element of the k-parent of the i-th column in B_n. With that, and due to the facts that k'-ancestry is a total order on the set of k'-ancestors of each column for each k', and that both the first column in B_n and the k-parent of the i-th column in B_n are k'-ancestors of the i-th column in B_n for every k'<k, and the latter is before the former, we get that the k-parent of the i-th column in B_n is also a k-ancestor of the first column in B_n.
If k≥ m_0 (using variable names from Definition <ref>), then this is already a contradiction, because the m_0-parent of the first column in B_n is easily seen to be in G. Otherwise, let n'<n be the natural number such that the k-parent of the i-th column in B_n is in B_n'. From repeated applications of (iii) for k, which we have already proven, we get that the first column in B_n' is a k-ancestor of the first column in B_n, and therefore by k-ancestry being a total order on the set of k-ancestors of the first column in B_n, we get that the first column in B_n' is a k-ancestor of the k-parent of the i-th column in B_n, and thus is also a k-ancestor of the i-th column in B_n. This, however, by more repeated applications of (iii), implies that the first column in B_0 is a k-ancestor of the i-th column in B_n, which is in contradiction with the fact that the k-th element of the first column in B_0 is at least as large as the k-th element of the i-th column in B_n.
Now, for (iv), we're left with the case that for some k'<k, the first column in B_n is not a k'-ancestor of the i-th column in B_n. However, if we choose a specific such k'<k, then by (iv) for k' we get that the k'-parent of every k'-ancestor in B_n of the i-th column in B_n is either in B_n or in G, from which it follows that all k'-ancestors of the i-th column in B_n are either in B_n or in G, and that includes the k-parent of the i-th column in B_n, proving (iv) for k.
With (iv) proven for k, the proof of (ii) and (iii) for k can also be easily modified for relations between G and B_0 and between G and B_n, with all nontrivialities accounted for by (iv) for k: either the k-th element of the j-th column in B_0 ascends and the j-th column in B_n trivially has the first column in B_0 as a k-ancestor, thus the k-ancestors in G are simply the k-ancestors of that (by totality of k-ancestry on the set of k-ancestors of the j-th column in B_n), or the k-th element of the j-th column in B_0 doesn't ascend and B_n's copy C_n of the (j-th column in B_0)'s first non-strict k-ancestor C_0 in B_0 is easily seen to have the same k-parent as C_0, because the k-parents of C_0 and C_n are both in G, the k-th elements of C_0 and C_n are equal, and the sets of k'-ancestors of C_0 and of C_n are the same for every k'<k by (i) for k', thus the k-ancestors in G of both C_0 and C_n are that k-parent and its k-ancestors. Therefore (i) also holds for k.
Finally, (v) can be proven for k by simply letting {n_2,n_3}={n_1,n_1+1} (the two options together give the proofs of both directions of (v)), and noticing that if the j-th column in B_n_2 has a k-ancestor in B_n_0, then the first column in B_n_2 must also be its k-ancestor (similarly to the reasoning near the end of the previous paragraph - using (iv) for the last k-ancestor in B_n_2 of the j-th column in B_n_2), and therefore by totality of k-ancestry on the set of k-ancestors of the j-th column in B_n_2, the i-th column in B_n_0 is a k-ancestor of the first column in B_n_2. Then if k≥ m_0, we get a contradiction, because the k-ancestors of the j-th column in B_n_2 are all in B_n_2 or G, as we've already proven, so it must be that k<m_0. In that case, by application of (iii) and either (depending on n_2-n_1) another application of the totality of k-ancestry on the set of k-ancestors of the j-th column in B_n_2 or an application of transitivity of k-ancestry, we get that the i-th column in B_n_0 is also a k-ancestor of the first column in B_n_3, and finally by an application of (ii) for k, we get that the first column in B_n_3 is a k-ancestor of the j-th column in B_n_3, so by transitivity of k-ancestry, the i-th column in B_n_0 is a k-ancestor of the j-th column in B_n_3, which concludes the proof of (v) for k.
By induction, all 5 statements in the lemma always hold.
We will abbreviate ⟨ L_α,∈⟩≼_Σ_n+1⟨ L_β,∈⟩ as α≤_nβ, and similarly for the strict versions of these relations. Here, L_α is the α-th level of the constructible hierarchy, and M≼_Σ_nN means that M is a Σ_n-elementary substructure of N.
Let σ be the smallest ordinal α such that there exists an ordinal β with ∀ n∈ℕ(α<_nβ).
For all α,β∈σ and n∈ℕ, if ω<α<_nβ, then for all finite X,Y⊆ Ord such that γ<α≤δ<β for all γ∈ X and δ∈ Y, there exists a finite Y'⊆ Ord and a bijection f: Y→ Y' such that for all γ∈ X, all δ_0,δ_1∈ Y, all k∈ℕ and all m<n:
* γ<f(δ_0)<α
* γ<_kδ_0⇒γ<_kf(δ_0)
* δ_0<δ_1⇒ f(δ_0)<f(δ_1)
* δ_0<_kδ_1⇒ f(δ_0)<_kf(δ_1)
* δ_0<_mβ⇒ f(δ_0)<_mα
We can prove this by constructing a Σ_n+1 formula that, when interpreted in L_β, asserts all the true instances of the statements on the left side of the implications, and when interpreted in L_α, asserts the corresponding instances of the statements on the right side of the implications. One small issue is the first assertion, which is unconditional. However, the f(δ_0)<α part is simply asserting that f(δ_0) exists in L_α, which will be done by existentially quantifying the variable, and since γ<α≤δ_0 is necessarily true, γ<f(δ_0) is equivalent to γ<δ_0⇒γ<f(δ_0), which is a conditional statement.
We construct a formula with parameters γ_0,γ_1,...,γ_|X|-1, which are all the elements of X. Since they're ordinals smaller than α, they are in L_α, therefore we can use them as parameters in a formula that we want to reflect using the stability relation between α and β.
Let φ_0(η,ξ) be a formula asserting η<ξ. Let φ_1(η,ξ,k) be a formula asserting η<_kξ. Let φ_2(η,k) be a formula asserting η<_kOrd, i.e. ⟨ L_η,∈⟩≺_Σ_k+1⟨ L,∈⟩.
φ_0 is clearly Σ_0, as it is simply the atomic formula η∈ξ. This means it is Σ_n+1. φ_1 only needs to assert the existence of L_ξ, the defining characteristics of it (specifically that it is a level of L, which is simply V=L relativized to it, and that the ordinals in it are precisely the elements of ξ, which is trivially Σ_0), and then it needs to assert that φ_2(η,k) relativized to L_ξ holds. The relativization of a first-order formula to a set is trivially always Σ_0. Assuming φ_2 is first-order, the only unbounded quantifier in φ_1 is the one existentially quantifying L_ξ. Then φ_1 is Σ_1, which means it's also Σ_n+1. Finally, φ_2(η,k) is Π_k+1, as shown in <cit.> (Theorem 1.8), which means it is Σ_k+2, and therefore first-order. In all non-relativized uses of φ_2, we will require k<n, which means k+2≤ n+1, thus it is Σ_n+1.
X and Y are finite, and all of their elements are smaller than σ so for each η,ξ∈ X∪ Y, there are only finitely many k for which φ_1(η,ξ,k) is true. Then there are finitely many instances of φ_0(γ_i,δ_j), φ_1(γ_i,δ_j,k), φ_0(δ_i,δ_j), φ_1(δ_i,δ_j,k) and φ_2(δ_i,m) with k∈ℕ and m<n, which are true when each δ_i is interpreted as the i-th element of Y. So their conjunction φ is a conjunction of finitely many Σ_n+1 formulae, therefore it is itself a Σ_n+1 formula. Then we only need a Σ_n+1 formula ψ asserting that all the δ_i are ordinals, which is trivial.
Now, the formula ψ∧φ is Σ_n+1, therefore the formula ∃δ_0,δ_1,...,δ_|Y|-1(ψ∧φ) is also Σ_n+1. In L_β, the witnesses of that existential quantifier are the elements of Y, therefore the formula is true in L_β. Then by α<_nβ, it must be true in L_α, and since it encodes all the relations between elements of X, elements of Y and β that need to be reflected to relations between elements of X, elements of Y' and α, the witnesses of that formula in L_α form a set Y' that, together with the unique order isomorphism f: Y→ Y', satisfies the conditions in the lemma.
Note that this reflection is similar to reflection in Patterns of Resemblance, and those could be used too. However, the author is not as experienced in working with Patterns of Resemblance, so it was easier to use stability.
BMS is well-ordered.
We will define a function o: BMS→ Ord in the following way. Consider an array A with length n. A stable representation of A is a function f: n→ Ord such that for all i,j<n, i<j⇒ f(i)<f(j) and for all m, if the i-th column of A is an m-ancestor of the j-th column of A, then f(i)<_mf(j). Let o(A) be the minimal α∈ Ord such that for some stable representation f of A, all outputs of f are smaller than α.
This proof is similar to the proof of Lemma <ref> - we prove by induction on the number of expansions needed to reach an array, that o is defined and order-preserving on all of BMS, by starting from X_0 and proving that if it holds for some Z, then it holds for Z∪{A[n] : A∈ Z n∈ℕ}, and using the fact that every pair A,A' of arrays is reached after finitely many applications of this induction step.
Of course, o(A) is defined for A∈ X_0={((0,0,...,0,0_n),(1,1,...,1,1_n)) : n∈ℕ}, and it is easy to see that o(((0,0,...,0,0_n),(1,1,...,1,1_n)))<o(((0,0,...,0,0_m),(1,1,...,1,1_m))) iff n<m, thus o is also order-preserving in this set.
Let Z be a set of arrays, on which o is order-preserving and defined for all of Z's elements. Let A∈ Z. If A is empty, then trivially for all n∈ℕ, o(A[n])=o(A)≤ o(A), and thus o(A[n]) is also defined and order is preserved. Otherwise, let f be a stable representation of A whose outputs are all smaller than o(A). We can then recursively define stable representations of A[n] in the following way.
Let l_n be the length of A[n] for all n∈ℕ. A stable representation f_0 of A[0] is simply f restricted to l_0. Using variable names from the definition of BMS, this representation trivially maps indices (in A[0]) of columns in B_0 to the ordinals to which f maps indices (in A) of columns in B_0.
Let f_n be a stable representation of A[n] that maps indices (in A[n]) of columns in B_n to the ordinals to which f maps indices (in A) of columns in B_0. Then using the reflection property from Lemma <ref> with α being the ordinal to which f_n maps the index of the first column in B_n, β being the ordinal to which f maps the last column of A, X being the set of ordinals to which f_n maps indices of columns before B_n, and Y being the set of ordinals to which f_n maps indices of columns in B_n (or to which f maps indices of columns in B_0), we get a set Y' of ordinals due to α<_m_0β. We can then define f_n+1 by making it the same as f_n for indices of columns before B_n, mapping indices of columns in B_n to the elements of Y', and mapping indices (in A[n+1]) of columns in B_n+1 to the elements of Y.
It follows from Lemma <ref> that f_n+1 is a stable representation of A[n+1]. Then it's trivially a stable representation of A[n+1] that maps indices of columns in B_n+1 to the ordinals to which f maps indices of columns in B_0, therefore by induction, for all m∈ℕ, there is a stable representation of A[m] that maps indices of columns in B_m to the ordinals to which f maps indices of columns in B_0. Since all these ordinals are smaller than β, o(A[m]) is defined and is at most β, and since β is an output of f, it is smaller than o(A), so o(A[m])<o(A), which means o is defined and order-preserving (due to the order being originally defined only by comparing an array with its expansions) on Z∪{A[m] : A∈ Z m∈ℕ}.
Now, similarly to the proof of Lemma <ref>, with X_0={((0,0,...,0,0_n),(1,1,...,1,1_n)) : n∈ℕ}, we conclude that o is defined and order-preserving on X_0∪{A[n_0] : A∈ X_0 n_0∈ℕ}∪{A[n_0][n_1] : A∈ X_0 n_0,n_1∈ℕ}∪...∪{A[n_0][n_1]...[n_m] : A∈ X_0 n_0,n_1,...,n_m∈ℕ} for each m∈ℕ, and since all A,A'∈ BMS are also in this set for some m, o is defined for them and their order is preserved by o, so o is defined and order-preserving on all of BMS.
Then if BMS was not well-ordered, there would be an infinite descending sequence in BMS, which would get mapped to an infinite descending sequence of ordinals by o, and that cannot exist by the definition of ordinals. Therefore BMS is well-ordered.
§ FUTURE RESEARCH
We hope to use BMS in ordinal analysis, first using it to rewrite analyses of theories that have already been analyzed by other means, and then analyzing even stronger theories, ideally up to full second-order arithmetic if the order type of BMS is large enough for that.
Once this approach proves viable, we also plan to continue proving the well-orderedness of similar notation systems with larger order types, such as Y sequence <cit.> and its extension ω-Y sequence <cit.>.
Another challenge that is relevant is the task to find a "self-contained" proof of well-orderedness of BMS (that is, a proof using only concepts that are directly related to BMS, which excludes stability and ordinal collapsing functions), as this would simplify the translation of the proof to theories that only deal with basic structures, such as third-order arithmetics.
§ ACKNOWLEDGEMENTS
I would like to express my deepest gratitude to the discord user C7X (also known as Convindix) for proofreading this paper, as well as introducing me to the concept of stability years ago, which led to this paper's very existence.
I also want to thank the googology and apeirology community for the doubt that motivated me to finish this paper.
§ REFERENCES
|
http://arxiv.org/abs/2307.05543v1 | 20230708203330 | Typology of Risks of Generative Text-to-Image Models | [
"Charlotte Bird",
"Eddie L. Ungless",
"Atoosa Kasirzadeh"
] | cs.CY | [
"cs.CY"
] |
Equal contribution
School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh, Scotland, EH8 9AB. ORCID: 0009-0001-2378-8238
School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh, Scotland, EH8 9AB. ORCID: 0000-0002-9378-4427
Alan Turing Institute; University of Edinburgh, 10 Crichton Street, Edinburgh, Scotland, EH8 9AB. ORCID: 0000-0002-5967-3782
This paper investigates the direct risks and harms associated with modern text-to-image generative models, such as DALL-E and Midjourney, through a comprehensive literature review. While these models offer unprecedented capabilities for generating images, their development and use introduce new types of risk that require careful consideration. Our review reveals significant knowledge gaps concerning the understanding and treatment of these risks despite some already being addressed. We offer a taxonomy of risks across six key stakeholder groups, inclusive of unexplored issues, and suggest future research directions. We identify 22 distinct risk types, spanning issues from data bias to malicious use. The investigation presented here is intended to enhance the ongoing discourse on responsible model development and deployment. By highlighting previously overlooked risks and gaps, it aims to shape subsequent research and governance initiatives, guiding them toward the responsible, secure, and ethically conscious evolution of text-to-image models.
[500]Human-centered computing Human computer interaction (HCI)
[300]Human-centered computing Text input
[300]Applied computing Media arts
[100]Social and professional topics User characteristics
Typology of Risks of Generative Text-to-Image Models
Atoosa Kasirzadeh
====================================================
Forthcoming in Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES 2023)
§ INTRODUCTION
In recent years, significant progress has been made in developing large language models and related multi-modal generative models, such as text-to-image models. We will collectively refer to these models as “generative models.”[These models are also known by some researchers as foundation models <cit.>.] Generative models process and combine information from various modalities, including visual, textual and auditory data. The range of applications for generative models spans multiple fields. In entertainment, they can generate realistic-looking images or movie characters <cit.>. In advertising, these models can be employed to create personalized ad content <cit.>. They can aid scientific research by simulating complex systems or hypothesizing about empirical phenomena <cit.>. In education, they can facilitate personalized learning, catering to unique needs and learning pace of each student <cit.>.
While introducing exciting opportunities, generative models also pose risks. These risks have attracted significant scrutiny from the AI ethics and safety community. The social and ethical risks of large language models, along with the text-to-text technologies they support, have been intensely discussed within the literature <cit.>. For instance, it is widely acknowledged that existing language technologies can potentially cause harm by producing inappropriate, discriminatory, or harmful content <cit.>, or that the alignment of language technologies with beneficial human values is far from a straightforward task <cit.>. This paper extends this line of inquiry from language models to text-to-image generative models, examining potential risks and harms resulting from their development and use. To identify and illuminate these risks, we perform a comprehensive review of literature related to text-to-image (TTI) models. In particular, we conduct an initial search using 8 seed papers, supplemented by a manual search (our search methodology is detailed in Appendix A). Collected papers are analysed for immediate risks, stakeholders, and empirical investigations.
Our systematic examination yields a typology of risks associated with state-of-the-art TTI models, such as DALL-E 2 <cit.>. Our findings are summarized in Table <ref>. Our typology and discussion are limited to immediate risks, inspired by a taxonomy from Weidinger et al. <cit.>. The typology is divided into three key categories: I. Discrimination and Exclusion; II. Harmful Misuse; III. Misinformation and Disinformation. We recognize that these categories are not mutually exclusive. However, defining distinct categories enables clearer understanding and supports the implementation of more robust mitigation strategies.
Our typology is further refined by identifying the stakeholders involved in the development and use of these systems. Inspired by the probing question from <cit.>: “How are social hierarchies, language ideologies, and NLP systems co-produced?”, we interlace this concern into our research and typology formulation. This process helps us to illustrate how the technologies supported by TTI models can reinforce existing social hierarchies via stakeholder identification.
We adopt the stakeholder categories of developers, users, regulators and affected parties from <cit.>. We use “affected parties” to refer to those influenced by the output of these models. We further extend the categorization by introducing “data sources” and “data subjects” – individuals or entities who generate and/or appear in the images used to train TTI models. Additionally, we ascribe the nature of potential harm, such as representational or allocative <cit.>, to the identified stakeholders. We also touch upon risks of harm to the environment <cit.>.
To organize the literature, we propose a practical distinction between two types of risks: “anticipated” and “observed.” The former refers to risks that are primarily predicted by researchers due to their expertise and familiarity with the field. The latter, on the other hand, are risks that have been empirically investigated, providing insights into the potential magnitude of harm. This classification underscores the need for comprehensive empirical investigations into many of the identified risks. With this distinction in mind, we highlight several risks that, to our knowledge, have not yet been adequately discussed. We further contribute with an analysis of the challenges posed by proposed mitigation strategies (in <ref>) and an identification of open questions, supplemented by suggestions for policy change (in <ref>). Finally, we advocate for enhanced collaboration among researchers, system developers, and policymakers. Through our categorisation and discussion, our intention is to foster a better understanding of the potential futures – both positive and negative – of TTI models, and by extension, other generative models.
§ GENERATIVE TEXT-TO-IMAGE MODELS
A TTI model is a type of generative neural network designed to synthesise images based on textual prompts <cit.>. When given a prompt, the model generates an image that, in some sense, visually represents the information in the text. TTI systems typically leverage a combination of natural language processing (NLP) and computer vision techniques to produce images. The NLP component extracts relevant information such as objects, attributes, and relationships from the text, while the computer vision component generates an image based on this information.
Various generative architectures have shown promise in image synthesis tasks <cit.>. These include flow-based models <cit.>, auto-regressive models <cit.> and variational autoencoders <cit.>. However, the advent of generative adversarial networks (GAN) <cit.> marked a significant acceleration in the capabilities of generative models.
A typical TTI GAN employs two types of deep neural networks – a generator and a discriminator. The generator synthesizes an image from a text input, while the discriminator evaluates the generated image, determining its authenticity. Through adversarial training, the generator refines its ability to create increasingly realistic images. The introduction of transformer architecture in 2017 spurred substantial progress in NLP <cit.>, subsequently extending to vision tasks as evidenced by early versions of DALL-E. Additionally, CLIP <cit.>, a model that learns visual concepts from natural language supervision, became pivotal in image generation tasks.
Diffusion models <cit.>, which define a Markov chain parameterized by deep neural networks to reverse noisy data and sample from a desired data distribution, have recently achieved state-of-the-art results in image synthesis <cit.>. The success of these models has stimulated a rapid proliferation of popular and open-source diffusion models, which are the subject of many of the papers in this taxonomy.
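To make the adversarial setup described above concrete for readers unfamiliar with it, the toy sketch below shows a single conditional-GAN training step on made-up tensor sizes; it is purely illustrative and does not correspond to the architecture or training procedure of any model discussed in this paper.

import torch
import torch.nn as nn

# Toy sizes; real systems use image generators/discriminators and pretrained text encoders.
txt_dim, noise_dim, img_dim = 64, 32, 256
G = nn.Sequential(nn.Linear(noise_dim + txt_dim, 128), nn.ReLU(), nn.Linear(128, img_dim))
D = nn.Sequential(nn.Linear(img_dim + txt_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_imgs, txt_emb):
    """One adversarial step: D learns to separate real from generated pairs,
    then G is updated to fool D."""
    b = real_imgs.size(0)
    fake = G(torch.cat([torch.randn(b, noise_dim), txt_emb], dim=-1))
    # discriminator update: real pairs -> 1, generated pairs -> 0
    d_loss = bce(D(torch.cat([real_imgs, txt_emb], dim=-1)), torch.ones(b, 1)) + \
             bce(D(torch.cat([fake.detach(), txt_emb], dim=-1)), torch.zeros(b, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # generator update: make D label generated pairs as real
    g_loss = bce(D(torch.cat([fake, txt_emb], dim=-1)), torch.ones(b, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()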
§ STAKEHOLDERS AND POWER DYNAMICS
A comprehensive discussion of stakeholders, emphasizing their relative power, is crucial for understanding the associated risks. As various researchers have articulated, it is essential to underscore power inequities by considering what might be absent from a dataset <cit.>. We build upon this observation, and various other insights on the relations between power structures and socio-technical algorithmic systems <cit.>, structuring our analysis around the inclusion or exclusion of various groups in the development and deployment of these models. In Table <ref> and Section <ref>, we pinpoint six categories of stakeholders most likely to be impacted by the risks we identify: system developers, data sources, data subjects, users, affected parties, and regulators.
§.§ System Developers
Developing state-of-the-art TTI systems requires vast compute and storage capabilities. Consequently, development is dominated by actors who have such access, such as companies in the Global North and China. These tend to be primarily concentrated within a small group of for-profit companies and well-funded academic institutions (e.g. OpenAI, Meta, Stability AI, Google, DeepMind, Midjourney). Companies like Hugging Face are making efforts towards open-access TTI systems. However, it still remains unclear how these models compare competitively with for-profit models.
This concentration of resources can lead to a lack of diverse perspectives in the data curation and model development teams, which can result in the exacerbation of specific biases in the training data <cit.>. As a result, source and output images that reflect only the hegemonic perspective might go unnoticed, as those curating the data or developing the models are often blinkered by their own experiences. For instance, <cit.> and <cit.> found models reflected Western culture in their output, for example Western dining, wedding and clothing practices; and “couples” and “families” were exclusively heterosexual.
§.§ Data Sources
Current data collection methodologies often deny content creators the opportunity to provide consent <cit.> or be acknowledged as “collaborators” <cit.>. Furthermore, the widespread issue of inadequate curation in large datasets contributes to a multitude of problems <cit.> .[Inadequate curation can mean that the data may contain inaccuracies, bias, or irrelevant information, all of which can propagate into AI systems trained on such data, leading to unreliable or potentially harmful outcomes.] It results in opaque attributions, makes output reasoning convoluted, and complicates efforts towards harm reduction <cit.>.
Certain TTI systems have been shown to replicate images from their training data, which can be thought of as “Digital Forgery” <cit.>: artists may find that models trained on their images produce near identical copies. Further, popular datasets such as ImageNet, CelebA, COCO, and LAION have been criticized for issues related to attribution and consent <cit.>. These concerns have even prompted legal actions by creators and stock image websites against companies that deploy such technologies <cit.>.
§.§ Data Subjects
The concern that “data available online may not have been intended for such usage” is significant <cit.>. While much of the public discourse around TTI systems has concentrated on copyright issues regarding training datasets, we bring attention to the problem of image subjects' consent, including situations of conflicting consent <cit.>.
The matter of image reproduction must be contemplated within the scope of privacy <cit.>. This concern applies to instances such as the unauthorized use of celebrity images or pornographic depictions of sex workers. While the focus often centers on the harm incurred by exposure to explicit content, the potential negative impact on the subjects of these images should not be overlooked. Explicit content is prevalent in many datasets, and users frequently retrain models to generate specific explicit content. However, some subjects of these images, such as sex workers, are not adequately considered in these discussions (though c.f. <cit.>).
§.§ Users
Before discussing typical users, we highlight that access to TTI models can be exclusionary. Commercial models often exclude certain territories, and successful use of these systems requires fluency in the input language (matching the dialect of the training data), or access to an accurate translation tool. We delve deeper into these issues in Section <ref>.
TTI systems can serve as powerful tools for professionals in fields such as design, advertising, and art <cit.>. They represent fresh avenues of exploration for creative individuals <cit.>, and can offer accessible resources for a wider audience <cit.>, even holding potential to “democratise” art <cit.>. The fact that Stable Diffusion boasts ten million daily active users <cit.> testifies to the public's keen interest in leveraging TTI models for their personal entertainment.
On the flip side, TTI systems can be used for malicious purposes. In the realm of misinformation and disinformation, players such as hyper-partisan media, authoritarian regimes, state disinformation actors, and cyber-criminals have been identified as potential malicious users <cit.>. “Information operations” <cit.> are broadly acknowledged as a malicious use case. Additionally, <cit.> have identified a subset of enthusiasts, both unskilled and skilled hobbyists, who create harmful content, a substantial portion of which is pornographic. This exploitative content often gains viral attention <cit.>.
§.§ Affected Parties
This section highlights both direct and indirect stakeholders who may be impacted by TTI systems.
Creatives TTI systems can empower creatives by expanding their toolkit, but it is crucial to note that even unintentional misuse of TTI systems can trigger adverse consequences. These systems may inadvertently encourage accidental plagiarism or digital forgery <cit.> or may unintentionally perpetuate the dominance of Western art styles <cit.>, thus limiting the representation of diverse cultural aesthetics. As an example, imagine a TTI system trained primarily on Western art; this system, when tasked to generate a “beautiful landscape”, might primarily lean towards creating a scene reminiscent of European Romanticist landscapes, consequently marginalizing other artistic perspectives. Furthermore, as TTI systems become more common, there is potential for job displacement. For example, Marvel's use of AI image generation in creating credits <cit.> provides a foretaste of this possibility.
Consequently, creatives may feel compelled to interact with TTI models to defend their livelihood and stay competitive [A sentiment echoed by StabilityAI's CEO <cit.>.]. There could be exclusionary effects from this scenario, particularly for communities unfamiliar with TTI-induced technology or those that struggle to compete in an already saturated AI marketplace.
Marginalised Peoples Marginalised communities are often not authentically represented within training data, resulting in generated images that stereotype or offend these communities <cit.>. As <cit.> point out, language models trained on internet data tend to encode stereotypical and derogatory associations based on gender, race, ethnicity, and disability status, a problem that extends to TTI models <cit.>. As an example of “outcome homogenisation” <cit.> – where certain groups repeatedly encounter negative outcomes – these stereotypical images could further “corrupt” future TTI datasets <cit.>. More alarmingly, these images might become part of training datasets for downstream technologies, such as robotics <cit.>, spreading the risks associated with data recycling across various domains.
Other In terms of broader societal impacts, the creation of synthetic disinformation and misinformation represents highly visible and often viral risks associated with synthetic visual media <cit.>. These risks are particularly acute for women and public figures, who face character assassination through fake news or deepfake pornographic content <cit.>. Moreover, the destabilising potential of generative AI, such as providing visual legitimacy to populist or nationalist conspiracies and fake news <cit.>, should not be overlooked. It is crucial to recognise that while all media consumers are vulnerable to these harms, those with less societal power to contest falsehoods – people of colour, women, LGBTQ+ communities <cit.> – are particularly at risk.
Additionally, communities with restricted access to digital resources, such as sanctioned communities in the global majority or closed-network users, may suffer disproportionate allocative harms due to unequal access to detection software for fact-checking <cit.> or inadequate data protections <cit.>. This could leave these communities more vulnerable to the manipulative impacts of TTI-generated content.
§.§ Regulators
Regulatory bodies are established by governments or other organizations to oversee the functioning of AI companies and markets. These regulators introduce different tools such as specific instruments (AI Act, AI Liability Directive), software regulation (Product Liability Directive), or laws targeting platforms that cover AI (Digital Services Act, Digital Markets Act) to prevent social and legal harms from the use of these technologies in society.
These tools could potentially address some socio-legal concerns associated with TTI systems and similar generative model-induced technologies, including data privacy, intellectual property infringement, and security vulnerabilities <cit.>. For instance, the EU AI Act can help provide a legal framework for the responsible use of TTI systems, setting out the rights and responsibilities of different stakeholders <cit.>. Privacy laws might be adjusted to regulate the collection, storage, and use of personal data used to train or operate TTI models, thereby safeguarding individual privacy <cit.>. The Product Liability Directive <cit.> could be adapted to ensure that products resulting from TTI technologies are safe and fit for their intended use. Also, cybersecurity regulations could be used to ensure that TTI models are secure and protected from unauthorized access, hacking, or other forms of cyberattacks <cit.>.
The critical and urgent question remains: How can these existing regulatory tools be effectively adapted and applied to address the unique challenges posed by TTI technologies? This calls for a robust and dynamic regulatory framework, at both national and global scales, that can respond to the governance of the rapidly changing generative model landscape.
§ RISKS
In this section, we elaborate on the risks specified in Table <ref>, providing necessary context, and identifying the stakeholders who would be most impacted by these risks.
§.§ Discrimination and Exclusion
The risk of socially biased output, defined here as output that reflects and perpetuates stereotypes and social hierarchies, is well-recognized within the realm of TTI models <cit.>. Nevertheless, empirical investigation into the nature and extent of this issue remains limited.
<cit.> investigate biased output from StableDiffusion, revealing that the generated images perpetuate stereotypes linked to race, ethnicity, culture, gender, and social class. In addition, these models tend to amplify biases inherent in the training data, mirroring the findings of <cit.>. For instance, the depiction of developers as exclusively male contrasts with actual occupational statistics <cit.>. Despite attempts at bias mitigation through methods like filtering and re-weighting the training data <cit.>, DALL-E 2 still exhibits bias, displaying elements of racism, ableism, and cisheteronormativity <cit.>.
The impact of these biases on stakeholders can be profound.[Some of these issues are discussed in the DALL-E 2 model card <cit.>.] Testing for TTI models by <cit.> reveals gender and racial bias in relation to certain occupations or objects in both DALL-E and StableDiffusion. Other studies, such as <cit.> and <cit.>, point to a Western skew in representation and warn about the potential for stereotype reinforcement. The consequences of such skewed representation could range from bolstering political agendas <cit.> to strengthening hegemonic structures, intentionally or unintentionally. <cit.> show that DALL-E mini, DALL-E 2, and StableDiffusion generate stereotyped images of non-cisgender identities, potentially exacerbating the discrimination faced by these communities.
Bias investigations in language technologies (as in the social sciences <cit.>) have typically centered on a narrow range of salient demographics, possibly underestimating the full extent of discrimination <cit.>. In line with the findings from NLP research <cit.>, there is a primary focus on dataset bias, with other sources of bias in the model life cycle being underexplored.
Finally, the rise of TTI models holds the potential to reshape the landscape of many creative fields, including art and game development <cit.>. Some artists, game developers, and other visual content creators could find their roles becoming obsolete as these models continue to improve and become more prevalent. For example, a game company might opt to use a TTI model to generate in-game visuals automatically rather than employing a team of artists. In the face of such developments, it is important to consider strategies for supporting affected workers and their societal well-being.
§.§ Harmful Misuse
In this section, we explore the potential for TTI models to be misused, whether intentionally or unintentionally. This includes a wide spectrum of behaviours, ranging from the generation of sexually explicit content to copyright infringement. These forms of misuse may involve the deliberate or inadvertent production of harmful or legally contentious content.
Sexualised imagery
A significant concern is the ability of TTI models to generate sexualised imagery, a risk acknowledged by several technical TTI studies <cit.>. Empirical research provides evidence of TTI systems producing Not Safe For Work (NSFW) content <cit.>. Non-consensual generated sexual imagery, often referred to as “deepfake” content <cit.> can be deeply damaging to individuals, often women <cit.>, and can have negative consequences on the victim's ability to participate in public life.
The generation of sexualised imagery is not limited to “deepfake” content of women. <cit.> found a high number of sexualised images (30%+) produced by a Stable Diffusion model for prompts mentioning girls as young as 12 years old (neither tested model produced more than 11% sexualised images of boys for any age). Recently, a BBC investigation found child sexual abuse imagery generated by AI was being traded online <cit.>. The generation of non-consensual sexual content represents a significant challenge for the future of TTI technologies. Such content can directly impact multiple stakeholders, including users who might inadvertently be exposed to pornographic content, individuals whose likenesses are manipulated without consent, and regulators who must collaborate with responsible entities to prevent harm.
Violent or taboo content
<cit.> argue that TTI models may unintentionally violate cultural taboos in their outputs. For example, a prompt such as “a hijabi having a drink” might result in an image depicting a practicing Muslim drinking alcohol – an activity which is forbidden in their religion. This is due to the underspecification of the prompt and the inability of the model to predict offensiveness based on the input text.
Furthermore, despite mitigation attempts, these models may also generate offensive content from neutral prompts, which can be exploited by malicious users. The primary cause of such unwanted behaviour is poor-quality training data, as evidenced by <cit.>. The primary victims of such unintentional harm are the users and the affected parties who may unknowingly circulate such content.
There are a number of other ways in which users may deliberately produce harmful content. This could involve bypassing safety mechanisms or injecting “backdoors” – secret or undocumented means of bypassing normal authentication or encryption in a computer system – into the models. A study by <cit.> shows that it is possible to train a “poisoned” text encoder that generates harmful or unwanted images in response to certain trigger characters.
In another example, <cit.> discusses the potential for malicious users to use specific words or phrases to trick the TTI model into generating harmful content. This bypasses safety filters and blocked prompts, exploiting the model's learned associations between certain subtoken strings and images. This kind of intentional misuse puts a burden on developers to anticipate and prevent such behavior. Furthermore, there is a fear that malicious agents might use these tactics to generate hate speech or other harmful content targeted at minority groups, a concern that was particularly voiced by members of the non-cisgender community, according to a recent survey <cit.>.
Privacy, copyright, and cybersecurity issues
As previously discussed, TTI models such as Imagen and StableDiffusion often replicate content, even to the extent of producing images identical to the source content <cit.>. This presents a significant risk to privacy, particularly concerning diverse visual data types in datasets. For example, LAION-5B includes private medical information <cit.>. Furthermore, studies indicate that about 35% of images duplicated by Stable Diffusion fall under an explicit non-permissive copyright notice <cit.>.
Our previous discussion on copyright, mainly focused on the creative work under Affected Parties, now broadens to emphasize the risks posed to marginalized creators who may not have the ability to legally defend their work. Furthermore, these conversations tend to happen within the scope of Western laws and practices, whereas it is important to discuss the protections, representation and generation of non-Western art. We also wish to further highlight the risks of “digital forgery” <cit.>. Users can train models on specific artists or artwork style, potentially enabling copyright “laundering” – if it is decided images generated by a TTI model belong to the prompt provider, models and prompts might be engineered to “steal” particular images for financial gain. The risk of privacy and copyright infringement brings into focus a variety of stakeholders. Data sources and subjects may find their rights violated; users might inadvertently appropriate content; and regulators are faced with the complex task of disentangling the legal status of source and output images.
Building on the privacy and copyright issues, it is also crucial to consider potential cybersecurity threats posed by TTI models. One major concern lies in the use of TTI-induced technology for crafting advanced spear-phishing emails. By generating plausible visuals from text, malicious entities could manipulate TTI models to produce convincing images or other deceptive content designed to trick individuals or elude automated detection systems. TTI systems are also susceptible to adversarial attacks, wherein slight alterations to input data – often undetectable to the human eye – can make the models yield harmful or unintended outputs.
§.§ Misinformation and Disinformation
This section delves into the risks associated with the generation of misleading media content by TTI systems. These are classified into individual, social, or community-based risks. Many of the consequences discussed here also apply to the risks described in Sections 4.1 and 4.2, as misinformation and disinformation are often intertwined with a number of the risks specified earlier.
Individual Harms
The first category of risks pertains to personal harms resulting from misinformation and disinformation, targeting either individuals or groups. Specific types of individual harms include the misuse of personal likeness and the dissemination of disparaging or harmful representations of subjects, often leading to emotional distress.
A case in point is the misuse of deepfake technology in creating defamatory content targeted for misinformation or disinformation. Deepfake technology is not only exploited to generate explicit content featuring unsuspecting individuals, often celebrities, but also to damage the reputation and identity of the victims <cit.>. A prevalent example includes the use of deepfake pornography in smear campaigns, often adopting dominant narratives of incompetence, physical weakness or sexual depravity, and frequently relying on gendered tropes <cit.>.
The misuse of TTI models extends beyond sexualised imagery, leading to harmful likeness reproduction in various other forms. Examples include the creation of fake journalism profiles <cit.>, or use in blackmail, revenge <cit.>, or identity theft for scams <cit.>. Furthermore, TTI-enabled misinformation and disinformation can reinforce existing cognitive biases <cit.>, amplifying narratives of “otherness” <cit.>. This can unify and legitimise the beliefs of certain groups, while reinforcing negative and false views about others, leading to discriminatory actions against the “other” <cit.>. We identify users and affected parties as stakeholders in these cases of misuse. We identify users as the primary creators of material such as non-consensual pornographic content, which is both harmful in itself and can lead to further negative consequences. Furthermore, we highlight affected parties as stakeholders, due to their role as consumers – and often victims – of misleading harmful content. Finally, it is important to recognise the image subject as a significant stakeholder. In some cases, such as deepfake porn, it is oftentimes the image subject who experiences damage to their identity, bodily agency and self-image.
The individual harms discussed here are primarily representational because they leverage and reinforce the subordination of certain groups based on identity. Such harms also hold an emotional dimension. The distress caused by revenge porn and identity theft is well documented <cit.>, and synthetic media, due to their nature, can be endlessly regenerated. Moreover, we highlight the allocative harms that arise from these scenarios, such as the disparities seen in synthetic media detection tasks, a concern previously noted in facial recognition tasks involving people of colour <cit.>. Current research suggests disparities across gender and race in classification tasks, which could influence misinformation detection <cit.>. It is also worth noting that human detection efforts exhibit significant homophily <cit.>, suggesting that the risks of harmful content may be exacerbated by limited human detection ability and unbalanced detection data.
We highlight a number of stakeholders in our identification of detection and classification bias in a misinformation or disinformation context. We first identify system developers as stakeholders. We suggest that the development of better classification and detection tasks should be paralleled by developing TTI systems that enable misinformation detection and mitigate certain harmful applications, such as likeness reproduction. Furthermore, we identify subjects and affected parties as important stakeholders in this risk, due to the disparities shown in identifying false content containing certain subjects. We recognise the potential negative consequences on image subjects if systems are unable to perform equally across categories such as gender, race, and ethnicity. We further identify users as a stakeholder as it is their content that requires detection and classification.
Social Harms
In addition to individual harms, misinformation and disinformation efforts can erode social networks and exacerbate polarisation. Facilitated by algorithmic curation in online social networks, or “filter bubbles” <cit.>, alongside factors such as anonymity and extensive reach <cit.>, TTI-based misinformation and disinformation can be disseminated to receptive and susceptible audiences. Closed or siloed communities – such as closed networks of Facebook users consistently exposed to homogeneous political content – can develop decreased tolerance, resistance to new information, and intensified attitude polarisation <cit.>.
Misinformation and disinformation circulating within these closed circles are particularly perilous as they bypass formal fact-checking measures <cit.> and diverse “herd correction” effects <cit.>. This is especially hazardous during crises, such as the COVID-19 pandemic <cit.>. Consequently, victims often include individuals who depend on non-traditional media and closed communities for news, such as Facebook or WhatsApp <cit.>, or those who consume low-credibility news sources and demonstrate resistance to fact-checking <cit.>. Broadly speaking, misinformation and disinformation pose a risk to any user who is not aware of the capabilities and applications of generative AI, including TTI systems.
Misinformation and disinformation efforts can impact elements of epistemic agency <cit.>. The flooding of information environments <cit.>, either by volume or falsity, can degrade users' ability to decipher truth, thereby cultivating doubt in others and in our own epistemic capabilities <cit.>. Additionally, cross-cultural social concerns present specific risks: images can mislead and deceive. <cit.> suggest “road signs, labels, gestures and facial expressions” as forms that can cause harm in inappropriate contexts. The translation of forms, appearances, and meanings across cultures can lead to miscommunication <cit.>. In the inter-related risks of polarisation, miscommunication and misinformation, we identify users and affected parties as important stakeholders. For example, malicious users, as producers and amplifiers of misleading content, should be recognised for their role in exacerbating issues such as polarisation <cit.>.
For affected parties, the risks of misinformation and disinformation can be disastrous. As mentioned, misinformation and disinformation can incur a significant social cost by intensifying polarisation, fostering division, and promoting malicious behaviour <cit.>. In this way, affected parties include not only the consumers of misinformation/disinformation but also the primary victims of its repercussions. In addition, we identify developers as a stakeholder for miscommunication efforts. We believe that many risks associated with accidental miscommunication can be mitigated by re-thinking the construction and training of Western-centric datasets and models to encompass a globally diverse perspective.
Harms that damage information ecosystems, via misinformation or disinformation, initially manifest as representational. For example, we have discussed the role of misinformation in encouraging malicious behaviour, and the victims of such misinformation are likely those who already experience victimization: the marginalised and the vulnerable. These representational harms exact a social cost not only on the immediate victim, but on the ability and willingness of a society to critically engage with, and question, misinformation and disinformation. Additionally, it is crucial to acknowledge the allocative nature of these harms. Specifically, how do we transform information environments so all have access to reliable, local and trustworthy media? In the case of the aforementioned closed networks, how do we integrate balanced news to minimise harm? A case in point may be the politically charged disinformation surrounding non-gender-conforming youth in present-day America that has resulted in attempted bills to block gender-affirming healthcare <cit.>, which has arguably arisen from charged disinformation environments. A further question is who, through education or resources, possesses the ability to identify misinformation and disinformation. These harms require multiple mitigating efforts, both to protect the marginalised and to transform information consumption through education.
Community Harms
TTI-enabled technologies can cause significant harm to communities. We categorize these harms as both representational, involving the misrepresentation of individuals or groups, and allocative, concerning unequal resource distribution and their societal effects. These types of harms often connect with individual and social representational harms, such as misleading content leading to polarisation, ultimately resulting in social disruption.
TTI-enabled misinformation and disinformation can threaten social, political and financial systems. We wish to highlight the potential of TTI technologies to cause political harms. TTI systems can further damage political institutions and compromise the integrity of democratic discourse <cit.> through election interference <cit.>, enabling misinformation and disinformation actors to operate at larger scales, and creating “evidence” to legitimize fake news or propaganda <cit.>. In addition, we highlight the risks posed when TTI systems are used to generate culturally offensive content. As mentioned, TTI systems offer the ability to generate culturally or politically offensive content through “backdoors”, or simply because the precautions enacted by developers do not account for all cultures. For example, blasphemous content or images of religious or political figures are potentially deeply harmful to certain societies.
Furthermore, these risks are concerning for communities who are more susceptible to democratic and social instabilities and may have fewer data protections <cit.>.
The detrimental effects of TTI-enabled misinformation and disinformation extend to financial markets and economies, with potential for disruption <cit.>. TTI systems also have the potential to increase the risk of conflict and state violence <cit.>.
It is important to recognise the long-term effects of such harms on broader community climates in relation to the individual harms mentioned previously. For example, fomenting distrust in others through misinformation breeds an unstable information environment for all, but especially for those who are historically victimised. Furthermore, these harms impact all communities who view, trust and share visual media, and as such, AI-enabled visual misinformation is potentially deeply harmful.
§ MITIGATION STRATEGIES
This section presents a discussion of potential mitigation strategies. Addressing the risks and harms associated with TTI systems often necessitates the integration of multiple mitigation approaches. Local mitigation, at the level of a single system, can possibly address instances of localised harm. However, for broad harms that occur at the level of community or society, multi-disciplinary and multi-stakeholder efforts are required to enact any meaningful mitigation. Such widespread mitigation strategies would necessitate significant changes in the current practices of TTI model and system development and deployment. We categorize mitigation strategies into participatory projects, operational solutions, technical solutions, and socio-legal interventions.
Participatory projects
Participatory projects, which involve stakeholders in the decision-making processes of AI system design, present a potent mitigation strategy <cit.>. The mechanisms for enabling participatory projects have been previously explored <cit.>. Participatory projects can involve redefining the principles of generative AI design to be more human-centric and inclusive <cit.>, such as the creation of creative assistive technologies <cit.>. Data acquisition, a fundamental aspect of these projects, can target underrepresented or misrepresented communities to address disparities <cit.>. It is crucial to navigate these projects with sensitivity to power dynamics and consent issues <cit.>. Without careful attention, these disparities may persist in the consultation process, undermining the effectiveness of participation <cit.>.
Certain solutions, such as “opt-out” functions, may contribute to addressing copyright infringement; however, this relies on artists being aware of this use of their data, disadvantaging those with limited “tech literacy”. It is important to recognise that participatory projects are not an afterthought, but rather a proactive measure to counter discrimination and exclusion in AI. This entails not just balancing datasets but also focusing on the representation and involvement of marginalised identities.
Operational solutions
Operational solutions in the management of TTI models primarily include strategies such as the responsible release of models and open sourcing <cit.>. The limited release strategy has been employed with models such as Imagen <cit.> and Parti <cit.>, and in the staggered release of DALL-E 2 <cit.>. This approach allows for a certain degree of control, potentially enabling the recall of the technology to prevent malicious uses or other unintended consequences. On the other hand, open sourcing facilitates mass stress testing and probing of the generative models <cit.>. This can uncover potential vulnerabilities or biases in the models, allowing for improvements and the fostering of transparency. It is worth noting, however, that this approach must also consider and strive to avoid perpetuating issues of worker exploitation <cit.>.
However, both these solutions offer limited remedies if the underlying datasets and models remain wrongfully biased and harmful. Furthermore, these solutions do not fully address downstream impacts, such as job displacement, which may result from the widespread use of TTI-enabled technologies. Therefore, it is important to pair these operational strategies with consistent evaluation and reform of the models, their applications, and metrics for measuring their social impacts.
Technical solutions
To tackle the potential pitfalls of TTI systems, various technical research strategies have been explored. Technical research primarily aims to build more robust, safe, and reliable models. Recent developments include “find and replace” methods <cit.>, semantic steering <cit.>, and filtering techniques <cit.>. However, these strategies have their limitations. For instance, it has been argued that filtering could exacerbate bias <cit.> or fail to address it entirely <cit.>. Furthermore, mitigation via prompt editing has shown to have limited impact due to the complex and embedded nature of biases <cit.>.
A significant body of research focuses on the detection of synthetic media as a mitigation strategy. Techniques include the use of GAN architectures <cit.>, blockchain verification <cit.>, fingerprinting <cit.>, and watermarking <cit.>. While techniques such as watermarking do not directly mitigate harms, since they only establish the authenticity of output images <cit.>, they can deter potential misuse.
The expansion of fair detection capabilities <cit.> is promising, but, as investigated in <cit.>, there is as yet no perfect approach to the detection of synthetic media. While technical mitigations such as filtering can address output harms related to harmful content creation, other risks associated with TTI systems, such as miscommunication, job loss, or copyright infringement, cannot be resolved with technical solutions alone.
Socio-legal interventions
Mitigating harm in the context of TTI-enabled technologies could significantly benefit from the creation of legal and policy guidelines and regulations. Media literacy and user education have proven to be effective tools in addressing misinformation and manipulation, fostering critical engagement with digital content <cit.>. Increased corporate culpability could ensure more stringent fact-checking, transparent practices, and adherence to community standards, fostering an environment of accountability <cit.>.
Government legislation and local and global regulation can play a pivotal role <cit.>, with potential measures ranging from defining limits to controlling the dissemination of harmful content <cit.>. The strategy of limiting monetary rewards from the spread of misinformation can serve as a potent deterrent <cit.>.
In this dynamic and complex landscape, comprehensive and continuous research on the misinformation and disinformation environment becomes critical <cit.>. Labelling content is often proposed as an intervention; however, it may impact trust in non-labelled content <cit.> and may have unforeseen negative consequences <cit.>. Therefore, the nuances of such interventions need careful consideration.
Notwithstanding these interventions, we must acknowledge potential challenges, such as resistance from tech companies due to economic interests, or concerns over infringement on free speech. Therefore, a balance needs to be struck to ensure these interventions are effective and proportionate.
§ OPEN QUESTIONS AND FUTURE RESEARCH
While the conducted review revealed a number of well-acknowledged risks associated with TTI systems, our analysis also highlighted several knowledge gaps. We briefly discuss these gaps in order to highlight open questions and future directions for research.
Output bias
We identified several forms of neglected output bias, including ageism and anti-Asian sentiment, for which we found no targeted mitigation strategies. Ageism, a bias observed in GAN face generators <cit.>, remains a largely unexplored area in recent TTI research. Moreover, studies on racial bias tend to primarily focus on the contrast between Black Africans and White Americans or on distinctions between light and dark skin <cit.>. However, other instances of such bias, such as those affecting indigenous communities, deserve further attention. We also found limited research on the treatment of religious bias, such as in <cit.>. These output biases can affect both users, who may struggle to generate appropriate images, and downstream parties who are exposed to content that primarily reflects established norms and stereotypes.
Dialect bias
TTI models have been shown to create discrimination beyond outputs. For example, TTI systems may favour white-aligned American English over other dialects <cit.> or languages. Speakers of a limited number of languages, such as English and Chinese, are able to fully leverage these models. While translation technologies do exist, the accuracy and quality of such translations, especially when they need to communicate the nuances of prompts, remain suspect. Research on macaronic prompting demonstrates that DALL-E 2 has some “understanding” of other European languages, but primarily relies on English <cit.>.
Depending on the training data and processes used, users may need to conform linguistically to use TTI systems effectively. This, in turn, reinforces the idea that alternative English dialects are subpar <cit.>.
Pre-release moderation
The use of labour in traditionally pillaged countries[A term sustainability writer Aja Barber uses to highlight the role that exploitation of resources by the Global North had in these countries’ development.] to moderate the output of publicly available generative models has been reported <cit.>. Moderation workers often experience psychological harm with insufficient support <cit.>, and there is a power imbalance between those who develop these models and profit from their use and those tasked with pre-release moderation. It is important that companies actively pursue fairer labour practices so as to reduce harm for moderators.
Job displacement
It is important to recognise the displacement of profit that is enabled by systems such as TTI models <cit.>. If a user can freely generate art in the style of an artist, why pay the artist? However, we wish to draw attention to the nuances of this displacement, that is, the exacerbation of existing inequalities. The people already marginalised by society will be most impacted by this loss of income. Further, work opportunities in technology companies can be even more heavily skewed against gender and racial minorities than in the creative industries <cit.>, meaning profits may be moving from female creatives of colour into the pockets of white men running tech companies.
Furthermore, we wish to acknowledge the effects of job displacement on image subjects. For example, sex workers cannot currently exert agency over, nor profit from, their images being included in training datasets. These images feed the creation of non-consensual pornographic material, often combining a sex worker's body with a celebrity face. We identified a website specifically designed to host models trained on individual sex workers, celebrities and public figures, in order to generate “personalised” porn. Furthermore, if stock imagery, advertisements or modelling photos come to frequently feature generated humans <cit.>, it is important we assess who is being displaced. For example, do companies use generated imagery to fulfil a diversity target, rather than find humans? We recognise the possibility of a disconnect between the appearance of racial, gender or other diversity in stock imagery and who is receiving compensation for their time.
Miscommunication
We identify the problem of miscommunication across cultures and countries using TTI systems. This is especially significant in current TTI technology given the ability to rapidly create images from Western-centric datasets. Solutions to miscommunication require multi-disciplinary anthropological and technical research to understand the translation of forms and appearances into other cultures, and subsequently the building of inclusive datasets. Furthermore, we wish to highlight the problems related to flooding information environments with generated content. This is under-explored in the context of TTI systems, especially given the scale and speed of generation. This risk is not directly related to the types (and harms) of outputs produced, but considers the effects of mass synthetic media production on communities.
Socio-political instability
Many researchers have explored the possible effects of AI on democratic processes and structures <cit.>. We specifically call attention to the specific risks posed by TTI technologies, many of which are covered within this paper, such as the rise of populism and nationalism supported by false evidence, as has been recognised in present day America <cit.>, assisted by narratives of “alternative facts”. We consider the possible use cases of TTI models within these contexts to be an important, and widening, gap in the literature. This topic requires research beyond political considerations only, and would benefit from alignment with deepfake research, some of which has already considered such risks.
Future research directions
Technology companies building TTI (and other generative) models have a responsibility to address many of the risks discussed here; however, analysis of TTI models is insufficient without establishing benchmarks against which we can assess safe, ethical and fair performance. <cit.> present a “living benchmark” for large language models. Similar frameworks need to be developed for TTI models.
Building benchmarks and performance requirements necessitates input from a broad range of stakeholders including government, developers, research communities, image sources, subjects, users and vulnerable parties. The involvement of developers and researchers is especially vital given the high technical skill threshold of understanding generative models, as we have identified through the course of our analysis. The alignment of developmental goals with wider social goals will enable focused mitigation when harms arise, as current development and mitigation choices are left in the hands of technology companies. We also argue for the importance of mitigation strategies outside of technical solutions.
Research producing actionable insights arising from methods such as interviews and case studies can assist in our understanding of the impact of synthetic media. Work such as the interview and diary study of <cit.>, who argue for a holistic understanding of misinformation environments, is essential. Interviews that engage with identified victims of TTI model harms would greatly assist the development of mitigation strategies; see, for example <cit.>.
Finally, we primarily focused on examining the risks and harms that occur directly from the development and use of TTI models. For lack of space, we excluded an examination of indirect harms, such as environmental unsustainability, that result from the development of these models. The environmental impact of these models could have severe effects on globally marginalised communities, who are often most vulnerable to climate change yet typically have the least access to these technologies. The environmental risks of developing and deploying TTI systems have also been highlighted in the context of Large Language Models (LLMs) <cit.>. This subject requires additional research to better understand the origins of the energy consumed in training TTI models, the global distribution of carbon emissions, and the regions most affected by these emissions. Moreover, potential strategies for using renewable energy sources in model training, as a key component of reducing environmental impact, should be explored.
Open questions
The review and analysis conducted within this paper enabled our identification of a number of open questions.
* How can we rethink data gathering and output moderation with respect to privacy, ownership and identity?
For example:
* How do we implement functional and retroactive data deletion?
* How might source image creators be protected from “copyright laundering”?
* How can we “protect” future datasets from corruption by output images, and benchmark a “good" dataset?
* How do we allocate responsibility, and compensate for harm?
* How can we best flag and mitigate offensive use?
* How do we manage TTI-enabled technologies with respect to non-Western communities, such as avoiding miscommunication?
* How can the environmental costs of training and using these models be attenuated?
* How do we maintain a “ground truth” in data and visual media?
* What are the long-term social costs of generating visual content?
There are a number of regulatory efforts currently addressing data access and the use of AI, with modifications underway to incorporate generative technologies like TTI models. These include the EU AI Act <cit.>, the Algorithmic Accountability Act in the US <cit.>, and China's Deep Synthesis Provisions <cit.>, among others. Multiple ongoing lawsuits could shape future legal perspectives on generative models, including TTI-induced systems. The outcomes of these cases are yet to be determined and will likely impact the regulatory landscape surrounding these AI technologies.[For reference, here are several ongoing litigation cases: Doe 1 et al v. GitHub et al, Case No. 4:2022cv06823 (N.D. Cal.); Andersen et al v. Stability AI et al, Case No. 3:23-cv-00201 (N.D. Cal.); Getty Images v. Stability AI, Case No. 1:2023cv00135 (D. Del.); Tremblay et al v OpenAI, Case No. 4:23-cv-03223 (N.D. Cal.); Getty Images v Stability AI (England), Case IL-2023-000007. We thank Andres Guadamuz for providing information regarding these cases.]
As this paper cannot – within the page limit – adequately provide an exhaustive analysis of such relevant regulatory efforts, we offer five recommendations that we suggest would be useful in guiding generalised regulatory and policy initiatives. Some of these recommendations may already be covered by existing regulatory frameworks. Nonetheless, we believe it is beneficial to outline all of them here.
* Establish a multi-stakeholder benchmark for responsible and safe performance of TTI systems, with concern for the risks raised in our typology.
* Integrate digital literacy and media literacy into educational programs to help users understand the limitations and potential risks associated with TTI systems.
* Clearly communicate to users when their data will be used to train TTI systems and how resulting images might be used, and obtain explicit consent for such use.
* Ensure that copyright ownership is clearly identified and respected when generating images from text, and establish clear rules for attribution and usage.
* Develop novel, multi-stakeholder safeguards to prevent the creation and dissemination of inappropriate or harmful images, especially images that are discriminatory, violent, and threats to security.
Further, we acknowledge that these recommendations are applicable to other multi-modal generative models. For example, the growing public discourse of apprehension and fear regarding AGI could be somewhat abated by Recommendation 2. We have sought to highlight, throughout this paper, the importance of amplifying the voices of typically excluded stakeholders. By extension, we recognise the importance of fostering collaboration between the public, policymakers, industry leaders, researchers, and civil society organizations in order to ensure innovative, fair, effective regulatory frameworks.
§ CONCLUSION
This paper presented a typology of risk associated with TTI-induced technologies, followed by a succinct review of relevant mitigation strategies and a discussion of open questions concerning the development and use of TTI systems. Although we provided some preliminary recommendations, we acknowledge that additional perspectives, expertise, and research are necessary to refine this typology and enhance our understanding of the social implications of TTI systems.
§ ACKNOWLEDGMENTS
We would like to thank the UKRI Arts and Humanities Research Council (grant AH/X007146/1) for the policy fellowship that supported this work. We thank Shannon Vallor, Ewa Luger, and the members of Ada Lovelace Institute for helpful discussions. We also thank James Stewart, Lilian Edwards, Andres Guadamuz, and three anonymous reviewers whose comments improved our work. Eddie L. Ungless is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by UKRI (Grant EP/S022481/1) and the University of Edinburgh, School of Informatics. Charlotte Bird is supported by the Baillie Gifford PhD Scholarship at the Centre for Technomoral Futures.
§ TAXONOMY METHODOLOGY
We conducted our searches utilising the Semantic Scholar API, which indexes over 200 million academic papers. To capture relevant papers, we selected five seed papers covering biased training data, biased image generation and bias in text-to-image models <cit.>. To capture papers relevant to misinformation harms, we selected three papers relevant to either deep fakes or synthetic media <cit.> or diffusion technology and evaluation <cit.>. Our search returned over 300 papers; 43 of these papers provided substantial and useful discussions of text-to-image technologies. Through extensive manual searches we identified a further 40 papers, most of which were technical papers. Collected papers were then analysed for stakeholders, risks, empirical investigations and open research questions.
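For concreteness, the snippet below sketches the kind of keyword query that can be run against the Semantic Scholar Graph API. The endpoint and field names follow the public API documentation as we understand it, and the query string and result limit are illustrative rather than the exact parameters used in our searches; results retrieved this way still require manual relevance screening.

```python
import requests

SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def search_papers(query, max_results=100):
    """Retrieve candidate papers for screening via the Graph API's
    paginated keyword search (at most 100 results per request)."""
    papers, offset = [], 0
    while offset < max_results:
        resp = requests.get(
            SEARCH_URL,
            params={
                "query": query,
                "fields": "title,abstract,year,externalIds",
                "limit": min(100, max_results - offset),
                "offset": offset,
            },
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json().get("data", [])
        if not data:  # no further results
            break
        papers.extend(data)
        offset += len(data)
    return papers

candidates = search_papers("text-to-image generation bias")
print(len(candidates), "candidate papers to screen manually")
```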
Our taxonomy of risks initially adopted an inductive-deductive approach, in that we anticipated the existence of three broad categories (discrimination and exclusion, harmful misuse, misinformation) and derived subcategories from our analysis of the papers. We then retroactively identified potential “gaps” in the literature, based in part on analogous research into the harms of other technologies and on identifying key stakeholders that had not been addressed. These gaps are clearly identified in the table.
|
http://arxiv.org/abs/2307.06226v1 | 20230712151844 | In-medium gluon radiation spectrum with all-order resummation of multiple scatterings in longitudinally evolving media | [
"Carlota Andres",
"Liliana Apolinário",
"Fabio Dominguez",
"Marcos Gonzalez Martinez"
] | hep-ph | [
"hep-ph",
"nucl-th"
] |
|
http://arxiv.org/abs/2307.04280v1 | 20230709231208 | Shaping the Emerging Norms of Using Large Language Models in Social Computing Research | [
"Hong Shen",
"Tianshi Li",
"Toby Jia-Jun Li",
"Joon Sung Park",
"Diyi Yang"
] | cs.HC | [
"cs.HC"
] |
Shaping the Emerging Norms of Using Large Language Models in Social Computing Research
[email protected], Carnegie Mellon University, Pittsburgh, PA, United States
[email protected], Carnegie Mellon University, Pittsburgh, PA, United States
[email protected], University of Notre Dame, Notre Dame, IN, United States
[email protected], Stanford University, Stanford, CA, United States
[email protected], Stanford University, Stanford, CA, United States
The emergence of Large Language Models (LLMs) has brought both excitement and concerns to social computing research. On the one hand, LLMs offer unprecedented capabilities in analyzing vast amounts of textual data and generating human-like responses, enabling researchers to delve into complex social phenomena. On the other hand, concerns are emerging regarding the validity, privacy, and ethics of the research when LLMs are involved. This SIG aims to offer an open space for social computing researchers who are interested in understanding the impacts of LLMs to discuss their current practices, perspectives, and challenges when engaging with LLMs in their everyday work, and to collectively shape the emerging norms of using LLMs in social computing research.
[500]Human-centered computing Collaborative and social computing theory, concepts and paradigms
Diyi Yang
August 12, 2023
===================
§ BACKGROUND
The development of large language models (LLMs) such as ChatGPT has brought both excitement and concerns to the field of social computing. On the one hand, LLMs present important opportunities that leverage the vast amount of human behavioral data captured in the model <cit.> to analyze and augment interactions in social computing systems. For instance, LLM-powered tools have been shown to help researchers more efficiently analyze textual data <cit.>, replicate social science studies <cit.>, and enable new ways to prototype emergent social dynamics in social computing systems where it is infeasible or dangerous to conduct in-the-wild studies <cit.>.
On the other hand, concerns about applying LLMs to social computing research <cit.> that parallel previous discussions on privacy and consent in computational social science research <cit.> have also arisen. In particular, we focus on three key themes: validity, privacy and ethics. Can we ensure the validity of findings generated by a non-deterministic black-box model whose output may depend on the nuances captured in the input prompts? Can we protect the privacy of the subjects whose data may be captured in the models' training data? And can we put in place proper guardrails that will encourage the ethical application of LLMs in social computing research in the face of their risks, including but not limited to the potential misuse of LLMs for manipulation or deception?
The primary goal of this Special Interest Group (SIG) is to provide an inclusive platform for social computing researchers who wish to explore the implications of LLMs in their day-to-day research activities. In particular, we aim to explore the following questions:
* How to better design online studies to effectively prevent LLM-based spams? For example, how to prevent, recognize and filter out spammers who use LLMs to fill out online surveys?
* How to utilize LLMs to analyze human-generated data and effectively evaluate their performance? For example, how to construct ground truth when using LLMs to analyze qualitative data (e.g., open-ended survey responses)? What evaluation metrics should we use (e.g., should we calculate inter-coder reliability)?
* How to accurately document the utilization of LLMs in research while simultaneously acknowledging its inherent limitations and biases in an effective manner?
* How to preserve the privacy of the study data when we use LLMs in data analysis? For example, how to effectively remove Personally Identifiable Information (PII) from survey responses, interview transcripts, and data scraped from the web? (A minimal redaction sketch follows this list.)
* How to address ethical concerns related to using LLMs in social computing? For example, how to craft informed consent that informs study participants of the potential use of LLMs in the study design? How to ethically use datasets containing real-world LLM usage data in research?
* How to mitigate the equity concerns associated with the substantial cost, computational resources, and technical expertise required for employing LLMs, considering the unequal access to these resources among different research teams?
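As one concrete starting point for the privacy question above, the following is a minimal, rule-based sketch of PII redaction applied to free-text responses before they are sent to any LLM. The patterns and the example string are illustrative only; rule-based redaction is known to miss many identifier types, so in practice it would be combined with model-based approaches (e.g., named-entity recognition) and manual review.

```python
import re

# Illustrative patterns only; real studies typically layer rule-based and
# model-based redaction and still require manual spot checks.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "URL": re.compile(r"https?://\S+"),
}

def redact_pii(text):
    """Replace matched spans with typed placeholders before LLM-based analysis."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical survey response used purely for illustration.
response = "You can reach me at [email protected] or 412-555-0123."
print(redact_pii(response))
# -> "You can reach me at [EMAIL] or [PHONE]."
```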
To ground the above questions in a more concrete manner, below we discuss the impacts of LLMs on social computing research along four aspects: data collection, data generation, data analysis, and system deployment and evaluation.
§ THE IMPACT OF LLMS ON DATA COLLECTION
Social computing researchers are already experiencing the impacts of LLMs in their work. One particular area is during the data collection stage. On the one hand, the rapidly advancing capabilities of LLMs to mimic human behaviors have presented numerous promising opportunities. For instance, researchers can leverage LLMs to generate hypothetical scenarios (e.g., vignettes) to collect data from human participants. Additionally, LLM-based agents can also be introduced into multiplayer games, opening up new avenues for studying human behavior in interactive settings <cit.>.
However, these capabilities also give rise to a variety of concerns. For example, researchers have shown that chatbots powered by LLMs can effectively mimic survey respondents with diverse backgrounds <cit.>. This poses a challenge for researchers engaged in online studies, as it becomes increasingly difficult to differentiate AI-based spammers and bots from genuine human participants. Traditional methods used to identify and filter out spammers may no longer be effective in this context. Moreover, the introduction of LLMs into different parts of the research design, including using LLMs to analyze human-generated data and/or to simulate human behavior, will also likely require updates to the consent process. What are the best practices for crafting informed consent with human participants when LLMs are involved?
§ THE USE OF LLMS IN GENERATING SYNTHETIC DATA
LLMs capture the human behaviors that are represented in their training data <cit.>, and as such, these models can replicate these behaviors when prompted. Recent studies have shown that the human behavior generated by these models is qualitatively believable <cit.> and, at times, accurate enough to replicate some social science studies <cit.> and surveys <cit.>. This capacity for the model to generate human behavior offers opportunities to enable new ways of studying and augmenting social computing systems. For instance, these models can allow the designers of a social system to prototype the social dynamics that only emerge at scale, to iterate without exposing the users to potentially flawed system design. They can also bring about new ways of conducting computational social science by replicating results that were only achievable via crowd participants or empirical studies. However, the applications of LLMs inherit the imperfections and biases of the underlying model. Their output might depend on the subtle nuances of a prompt, while their biases might misrepresent certain populations. We posit that our community will need to continuously validate and benchmark the use of LLMs in social computing while emphasizing the importance of directly connecting with human stakeholders.
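As a concrete illustration of this kind of persona-conditioned generation, and of how sensitive it is to prompt wording, the sketch below uses the pre-1.0 OpenAI Python client as it existed at the time of writing. The persona, survey question, model name, and temperature are illustrative assumptions of ours, and any such synthetic output would still need to be validated against real participants before analytic use.

```python
import os
import openai  # assumes the pre-1.0 openai client

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a key is configured

def simulate_respondent(persona, survey_question, temperature=0.7):
    """Ask the model to answer a survey item 'as' a described persona.
    The output is synthetic and must be validated before any analytic use."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=temperature,
        messages=[
            {"role": "system",
             "content": f"You are answering a survey as the following person: {persona}. "
                        "Answer in one short paragraph, in the first person."},
            {"role": "user", "content": survey_question},
        ],
    )
    return response["choices"][0]["message"]["content"]

persona = "a 34-year-old nurse in a mid-sized US city who rarely uses social media"
print(simulate_respondent(persona, "How do you decide whether a news image is trustworthy?"))
```

Even small changes to the persona description or question wording can shift the simulated responses, which is one reason validation and benchmarking remain essential.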
§ THE USE OF LLMS IN ANALYZING DATA
At the data analysis stage of social computing research, we are witnessing the emerging use of language models (and, recently, large pre-trained ones) in both qualitative and quantitative methods.
In qualitative research, academic tools such as PaTAT <cit.> and CollabCoder <cit.> as well as commercial products such as the AI Coding feature in ATLAS.ti[<https://atlasti.com/ai-coding>] utilize LLMs to aid in the qualitative coding process for textual data. Generally, these tools employ LLMs to analyze data, propose new codes for inductive coding procedures, understand the semantics of codes as users create and assign them, and suggest codes for data items. They also assist with the sensemaking process by summarizing, synthesizing, aggregating, or visualizing data.
For quantitative methods, AI and LLM-enabled assistants and “pair programmers” have been developed to recommend analyses and statistical procedures based on data characteristics, conduct data queries, and implement data analysis code in response to user prompts in exploratory data science scenarios <cit.>. Commercial products such as Tableau AI[<https://www.tableau.com/solutions/ai-analytics>] suggest metrics based on the data domain, generate insights, and allow users to ask questions about the underlying data using natural language.
Despite their potential for improving the efficiency of the analysis process, facilitating additional insight discovery, and reducing learning curves, the use of LLMs also presents new challenges and raises concerns regarding their application in social computing research. For instance, LLMs have been found to display biases and stereotypes in their outputs <cit.>, which could influence the data analysis process, especially in domains where understanding the socio-cultural context is essential for data interpretation. Moreover, the collaboration between human researchers and LLM-enabled tools in analysis tasks poses challenges in ensuring user autonomy, preventing over-reliance, and promoting effective human learning about data and patterns; these challenges can be amplified by the lack of interpretability in LLMs <cit.>. The application of LLMs in data analysis introduces additional data privacy challenges since many LLMs lack transparency regarding their usage of user data. This raises questions about creating informed consent protocols to notify participants about the use of LLMs in analyzing their data. Furthermore, the community needs new guidelines and norms regarding evaluation metrics when LLMs are used alongside human coders in data analysis.
§ THE METHODS FOR DEPLOYING/STUDYING LLM-ENABLED SOCIO-TECHNICAL SYSTEMS
The success of LLMs is going to have a profound impact on socio-technical systems in various domains that humans use natural languages to interact with.
Over the past few months, news articles have covered systems built with LLMs used for mental health support[<https://gizmodo.com/mental-health-therapy-app-ai-koko-chatgpt-rob-morris-1849965534>], education[<https://fortune.com/2023/02/22/chatgpt-ai-openai-educatoin-tutor-teaching-school/>], legal services[<https://gizmodo.com/donotpay-speeding-ticket-chatgpt-1849960272>], job searching advice[<https://www.forbes.com/sites/jackkelly/2023/04/03/how-to-leverage-ai-and-use-chatgpt-in-your-job-search-according-to-rsum-writers-and-career-coaches/?sh=728117a5ac5a>], and many other purposes.
While they demonstrate exciting opportunities of advancing these fields, concerns and backlashes have been raised by the public.
For example, in January 2023, a company called Koko that offers mental health services tested responses generated by GPT-3 on thousands of its users.
A co-founder tweeted about their experiments and it soon sparked a heated discussion around the ethics of this research.
People questioned their informed consent process, the legitimacy of testing an unproven technology on real users, and even the appropriateness of involving AI in such a process at all.
Although the co-founder later clarified that Koko users knew the messages were co-written by a bot, it did not resolve all these concerns.
Researching systems built with LLMs is a delicate process.
However, the lack of clear guidelines for conducting research in this field will affect both researchers who design, develop, and deploy socio-technical systems powered by LLMs, and researchers who conduct empirical studies to investigate how real-world users interact with these systems.
In this SIG, we aim to take the first step towards the development of the guidelines.
The questions that need in-depth discussions include but are not limited to the following.
For researchers who aim to deploy a novel LLM-enabled system, how should they determine whether AI-based intervention is appropriate for the selected use case?
How should they disclose the use of LLMs to their users?
Deploying certain services (e.g., mental health support) may inevitably lead the users to expose sensitive information about themselves and other people (i.e., interdependent privacy <cit.>).
How should researchers process traces from the study that may involve such sensitive information?
Relatedly, the natural language interfaces give users a great amount of flexibility, which means the LLM-enabled services may not have definite use scenarios (e.g., ChatGPT) or the users may use them in unexpected ways.
Hence, researchers who want to study the use of LLM-enabled systems are facing challenges in handling unexpected privacy harms (e.g., economical, reputational, psychological harms <cit.>), which may affect the choice of tools for analysis (e.g., local vs. cloud-based tools).
§ CONCLUSION
The rapid development of Large Language Models (LLMs) has already had a significant impact on various aspects of social computing research, encompassing areas such as data collection, data generation, data analysis, as well as system deployment and evaluation. However, alongside the excitement surrounding these advancements, concerns have emerged regarding issues of validity, privacy, and ethics. This Special Interest Group (SIG) aims to provide a much-needed space for researchers who are interested in comprehending the impacts of LLMs on their work. It offers an opportunity for the members of the community to openly discuss their current practices, perspectives, and challenges when engaging with LLMs in their day-to-day activities and collectively shaping the emerging norms of LLMs-impacted social computing research.
http://arxiv.org/abs/2307.07619v1 | 20230714203435 | Stochastic dynamics and the Polchinski equation: an introduction | [ "Roland Bauerschmidt", "Thierry Bodineau", "Benoit Dagallier" ] | math.PR | [ "math.PR", "math-ph", "math.FA", "math.MP" ] |
Stochastic dynamics and the Polchinski equation: an introduction
Roland Bauerschmidt, Thierry Bodineau, Benoit Dagallier
August 12, 2023
=============================================================================================================================================
This introduction surveys a renormalisation group perspective on log-Sobolev inequalities and related properties of stochastic dynamics.
We also explain the relationship of this approach to related recent and less recent developments such as
Eldan's stochastic localisation and the Föllmer process,
the Boué–Dupuis variational formula and the Barashkov–Gubinelli approach,
the transportation of measure perspective, and the classical analogues of these ideas for Hamilton–Jacobi equations
which arise in mean-field limits.
§ INTRODUCTION
Functional inequalities have been thoroughly studied in different contexts <cit.>
and one important motivation is to quantify the relaxation of stochastic dynamics by using Poincaré and (possibly modified) log-Sobolev inequalities
<cit.>.
Statistical mechanics offers an interesting setting
to apply these inequalities and to analyse
the information they provide in various physical regimes.
Indeed, one would like to describe the relaxation to equilibrium of lattice gas and spin dynamics, which are modelled by stochastic evolutions on high-dimensional state spaces. Their continuum limits, often described by (singular) SPDEs, are also of considerable interest.
The structure of the equilibrium Gibbs measures is sensitive to the occurrence of phase transitions and
the dynamical behaviour will also be strongly influenced by phase transitions. In the uniqueness regime, namely in the absence of a phase transition (typically at high temperatures), one expects that the dynamics relax exponentially fast uniformly in the dimension of the state space, with a relaxation time that diverges when the temperature approaches the critical point.
We refer to Sections <ref> and <ref> for more details.
In a phase transition regime (typically corresponding to low temperatures), one expects different types of behaviours depending strongly on the type of boundary conditions and we will not discuss the corresponding phenomena in these notes.
The fast relaxation towards equilibrium
in the uniqueness regime (or at least deep in it)
is well understood and we refer to
<cit.> for very complete accounts of the corresponding theory.
Roughly speaking, it has been shown for a wide range of models that good mixing properties of the equilibrium measure are equivalent to fast relaxation of the dynamics, namely uniform bounds (with respect to the domains and the boundary conditions) on the Poincaré or the log-Sobolev constants.
For the Ising model <cit.>,
the validity of the mixing properties have been proved in the whole uniqueness regime leading to strong relaxation statements on the dynamics,
and more detailed dynamical features are also understood in that regime <cit.>.
For more general systems and in particular continuous spin systems, the picture is much less complete.
The main goal of this survey is to present a different perspective on the derivation of functional inequalities based
on the renormalisation group theory,
introduced in physics by Wilson <cit.>, and with its continuous formulation emphasised in particular by Polchinski <cit.>.
The renormalisation group was introduced to
study the critical behaviour and the existence of continuum limits
of equilibrium models of statistical physics and quantum field theory from a unified perspective.
The renormalisation group formalism associates with a Gibbs measure a flow of measures defined in terms of a renormalised potential
(see Section <ref>).
We show how this structure can be used to prove log-Sobolev inequalities under a condition on the renormalised potential
which is a multiscale generalisation of the Bakry-Émery criterion (see Section <ref>).
The renormalised potential obeys a second-order Hamilton–Jacobi type equation
(the Polchinski equation)
with characteristics given by a stochastic evolution
(see Section <ref>) which coincide with the stochastic process of Eldan's stochastic localisation method introduced for very different purposes <cit.>.
Section <ref> provides a dictionary to relate both points of view.
An alternative to the multiscale Bakry–Émery method to derive log-Sobolev inequalities
(with much similarity and both advantages and disadvantages)
is the entropic stability estimate recently established in <cit.>
and reviewed in Section <ref>.
This estimate applies to the same Polchinski flow, or its equivalent interpretation as stochastic localisation.
It originated in the spectral and entropic independence estimates <cit.>
which are similar estimates for a different flow that takes the role of the Polchinski flow in another kind of model.
This analogy has already been highlighted in <cit.> to which we refer for a discussion of this relation.
Compared to the established approaches to functional inequalities for statistical mechanical models, which typically rely on spatial decompositions, all of the approaches discussed
here are more spectral in nature. Spectral quantities are more global and therefore allow one to capture,
for example, the near-critical behaviour better.
This is illustrated in a series of applications reviewed in Section <ref>.
The Polchinski renormalisation and the stochastic localisation can be seen as two sides of the same coin, sharing thus very similar structures.
In fact, this type of stochastic equation has been considered
much earlier by Föllmer <cit.> as an optimal way to generate a target measure.
In Section <ref>, the Polchinski renormalisation flow is shown to coincide
with the optimal stochastic process associated with a suitable varying metric.
In applications to statistical mechanics models, this metric captures the notion of scale
so that the Polchinski flow (a continuous renormalisation group flow) provides a canonical way of decomposing the entropy according to scale.
Finally, in Section <ref>, the renormalised potential
is rewritten as a variational principle using the Polchinski flow, known as the Boué-Dupuis or Borell formula in the generic context, see
<cit.>, whose origin is in stochastic optimal control theory.
This correspondence is at the heart of the Barashkov–Gubinelli variational method <cit.>.
In Appendix <ref>, these results are compared with the corresponding control theory for classical Hamilton–Jacobi equations,
and a simple comparison with the line of research initiated in <cit.> is also given.
§ BACKGROUND ON STOCHASTIC DYNAMICS
§.§ Motivation: Spin models and their stochastic dynamics
Our goal is to study dynamical (and also some equilibrium) aspects
of continuous and discrete spin models of statistical mechanics
such as Euclidean field theories or Ising-type spin models.
Throughout this article, Λ will be a general finite set (of vertices),
but we have Λ⊂ℤ^d large in mind,
or Λ⊂ϵℤ^d approximating ℝ^d (or a subset of it) when ϵ→ 0
in the case of models defined in the continuum.
Sometimes we identify Λ with [N]={1,…, N}.
Spin fields are then random functions
φ: Λ→ T where, for example, T= in the case of continuous scalar spins
or T={± 1} in the case of (discrete) Ising spins.
For discrete spins, we often write σ instead of φ
for a spin configuration.
Continuous spins
In the setting of continuous spins, the equilibrium Gibbs measures have expectation of the form
_ν[F(φ)] ∝∫_^Λ e^-H(φ) F(φ) dφ
where the symbol ∝ denotes the equality of the measures up to a normalisation factor.
We will refer to H as the action or as the Hamiltonian (depending on the context).
The main class of H that we will focus on are of the following form:
for spins φ = ( φ_x )_x ∈Λ taking values in (or vector spins with values in ^n),
an interaction matrix A, and a local potential V,
H(φ) = 1/2 (φ,Aφ) + V_0(φ), V_0(φ) =∑_x∈Λ V(φ_x).
Defining the discrete Laplace operator on Λ⊂^d by
∀ x ∈Λ,
(Δ^Λ f)_x := ∑_y ∈Λ: y∼ x (f_y-f_x),
a classical choice of interaction is obtained by setting
A=- βΔ^Λ for some (inverse temperature) parameter β >0.
In this case, the Hamiltonian reads
∀φ∈^Λ,
H(φ) = β/4∑_x,y ∈Λ, x ∼ y (φ_x - φ_y)^2
+ V_0(φ),
where the nearest neighbour interaction is denoted by x ∼ y and the sum counts each pair {x,y} twice.
As an example, a typical choice for the single-spin potential V is the Ginzburg–Landau–Wilson φ^4 potential,
in which case one usually sets β=1 (and ν has the role of a temperature),
V(φ) = 1/4 g|φ|^4 + 1/2ν|φ|^2
with g>0 and ν<0.
The following Glauber–Langevin dynamics is reversible for the measure ν introduced in (<ref>):
dφ_t = -∇ H(φ_t) dt + √(2) dB_t.
For the choices (<ref>) and (<ref>), this stochastic differential equation (SDE) reads
dφ_t
= -Aφ_t dt- ∇ V(φ_t) dt+ √(2)dB_t
= Δ^Λφ_t dt- g|φ_t|^2φ_t dt- νφ_t dt+ √(2)dB_t .
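To make the dynamics concrete, the following is a minimal numerical sketch (not taken from the references; the function names, lattice size, time step and the values of β, g, ν are illustrative choices) of an Euler–Maruyama discretisation of this SDE for the two-dimensional lattice φ^4 model with periodic boundary conditions.

```python
import numpy as np

def grad_H(phi, beta, g, nu):
    """Gradient of H(phi) = (beta/4) sum_{x ~ y} (phi_x - phi_y)^2 + sum_x V(phi_x),
    with V(phi) = (g/4) phi^4 + (nu/2) phi^2, on a periodic 2d lattice."""
    lap = sum(np.roll(phi, s, axis=a) for s in (1, -1) for a in (0, 1)) - 4.0 * phi
    return -beta * lap + g * phi**3 + nu * phi

def langevin_step(phi, dt, rng, beta=1.0, g=1.0, nu=-1.0):
    """One Euler-Maruyama step of  d phi = -grad H(phi) dt + sqrt(2) dB."""
    return phi - grad_H(phi, beta, g, nu) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(phi.shape)

rng = np.random.default_rng(0)
phi = rng.standard_normal((16, 16))      # illustrative 16 x 16 torus
for _ in range(10_000):
    phi = langevin_step(phi, dt=1e-3, rng=rng)
print("empirical <phi_x^2>:", float(np.mean(phi**2)))
```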
In this survey, we are interested in the long time behaviour of these dynamics when the number of spins is large.
We will consider two cases: either Λ→ℤ^d for the Glauber dynamics of an Ising-type model with continuous spins; or Λ⊂ϵℤ^d with Λ→ℝ^d or Λ→ [0,1]^d in which case
a suitably normalised version of φ describes the solution of a singular SPDE in the limit ϵ→ 0.
Discrete spins
In the setting of discrete spins, we focus on the Ising model where σ∈{± 1}^Λ and
_ν[F(σ)] ∝∑_σ∈{± 1}^Λ e^-β/2 (σ, Aσ) F(σ)
for some symmetric coupling matrix A.
Its Glauber dynamics is a continuous- or discrete-time Markov process with local transition rates
c(σ,σ^x) from a configuration σ to σ^x where σ^x ∈{± 1}^Λ denotes
the configuration obtained from σ∈{± 1}^Λ by flipping the sign of the spin at x.
The transition rates are assumed to satisfy the detailed-balance condition
ν(σ)c(σ,σ^x)=ν(σ^x)c(σ^x,σ),
which implies that the measure (<ref>) is invariant.
Typical choices are described below in the next section.
We will be interested in the large time behaviour of the dynamics when Λ→^d.
§.§ Generalities on Glauber–Langevin dynamics
We now discuss some standard general properties of the stochastic dynamics such as its (finite-dimensional state space) ergodicity.
Continuous spins
The Glauber–Langevin dynamics (<ref>)
is a Markov process with generator
Δ^H = Δ -(∇ H,∇) = e^+H (∇, e^-H∇)
where
Δ = ∑_x∈Λ∂^2/∂φ_x^2 ,
(∇ H,∇) = ∑_x∈Λ (∂ H/∂φ_x) ∂/∂φ_x.
The state space ^Λ will often be denoted by X.
The distribution of the spin configuration evolves in time along the stochastic dynamics and we denote by m_t the distribution
at time t starting from an initial measure m_0: given F_0:X →,
_m_t [F_0] = _m_0[F_t]
with F_t(φ) = _t F (φ) := _φ_0=φF(φ_t) ,
where _t is the semigroup associated with the generator Δ^H.
In particular, F_t = _t F
solves the Kolmogorov backward equation
tF_t=Δ^H F_t.
Starting from the SDE, this can be verified using Itô's formula.
The measure ν, introduced in (<ref>),
is reversible with respect to this dynamics,
and the following integration by parts formula holds for sufficient smooth F:
_ν [F(-Δ^H G)] = _ν[(∇ F,∇ G)].
The right-hand side is the Dirichlet form:
D_ν(F,G) := _ν[(∇ F,∇ G)] and D_ν(F) := D_ν(F,F).
In particular, the measure ν is invariant, i.e., if φ_0 is distributed
according to ν then φ_t also is:
t_ν [F_t] = _ν [Δ^H F_t] = _ν [(∇ F_t,∇ 1)] = 0.
Moreover, we will always impose the following ergodicity assumption:
∀ F_0∈ L^2(ν), F_t →_ν[F_0] in L^2(ν).
In particular, for any bounded smooth functions F_0: X → and g: →,
lim_t→∞_ν[g(F_t)] = g(_ν[F_0]).
As the next exercise shows, the ergodicity assumption is qualitative if Λ is finite
and holds in all examples of interest.
Show that 1/2 |∇ H|^2-Δ H →∞ as |φ|→∞ implies that -Δ^H has discrete spectrum on L^2(ν)
with unique minimal eigenvalue 0, and deduce (<ref>) and (<ref>).
For the discreteness of the spectrum,
one may observe that the multiplication operator U= e^1/2 H is an isometry from L^2(ν) onto L^2(^N)
that maps -Δ^H to the Schrödinger operator -Δ + W
on ^N with W = 1/4 |∇ H|^2 -1/2Δ H.
The result therefore follows from the spectral theorem and the result that a Schrödinger operator with a potential W ∈ L^1_ loc(^N)
that is bounded below and satisfies W→∞ has compact resolvent
<cit.> (a version of Rellich's theorem).
For further general facts on stochastic dynamics in the continuous setting, we refer to <cit.>.
Even though we will not need it, let us also mention that if the distribution of m_t is written as dm_t = G_t dν where ν=m_∞ is the invariant measure and G_t = dm_t/dν is the density of m_t relative to it, then
tG_t = (Δ^H)^* G_t = Δ^H G_t
where (Δ^H)^*=Δ^H is the adjoint of Δ^H with respect to ν.
This can also be expressed as an equation for m_t (interpreted in a weak sense),
which is the Fokker–Planck equation:
m_tt = Δ m_t +(∇,m_t∇ H) = (∇, m_t∇ (log m_t+H)).
Discrete spins
A similar structure can also be associated with discrete dynamics.
In particular, the Glauber dynamics of an Ising model is determined by its local jump rates
c(σ,σ^x) satisfying the detailed balance condition
as in (<ref>).
For all F: Ω→, where Ω={± 1}^Λ is the finite state space,
the generator and Dirichlet form associated with the Glauber dynamics are
Δ_c F(σ) = ∑_x∈Λ c(σ,σ^x) (F(σ^x)-F(σ))
and
D_ν (F)
= -∑_x∈Λ∑_σ∈Ω F(σ) Δ_c F(σ) ν(σ)
= 1/2∑_x∈Λ∑_σ∈Ω c(σ,σ^x) (F(σ^x)-F(σ))^2ν(σ),
where we used the detailed balance condition for the second equality.
We will again write D_ν (F,F) for the quadratic form associated with D_ν (F) by polarisation.
As in the continuous setting, we will always impose an irreducibility assumption which is equivalent to the analogue of (<ref>):
∀ F_0: Ω→, F_t →_ν[F_0],
where F_t(σ) = e^Δ_c tF_0(σ)= _σ_0=σ[F(σ_t)].
Indeed, assuming irreducibility, the convergence (<ref>) is a consequence of the Perron–Frobenius theorem, see, e.g., <cit.>.
Many choices of jump rates can be considered, but as long as the jump rates are uniformly bounded from above and below the different
Dirichlet forms are equivalent and the large time behaviour of the microscopic dynamics will be similar.
Often a natural choice of jump rates is that corresponding to the standard Dirichlet form.
This choice formally corresponds
to c(σ,σ^x)=1 in (<ref>) which however are not the jump rates of the associated Markov process
because the constant function 1 does not satisfy the detailed balance condition.
However, rewriting (<ref>) as
D_ν (F)
= 1/2∑_x∈Λ∑_σ∈Ω1/2( c(σ,σ^x)+c(σ^x,σ)ν(σ^x)/ν(σ) ) (F(σ^x)-F(σ))^2ν(σ),
we see that the standard Dirichlet form corresponds to the jump rates (satisfying detailed balance)
c(σ,σ^x) = 1/2( 1 + ν(σ^x)/ν(σ) ).
Another popular choice are the heat-bath jump rates which are given by
c^ HB(σ,σ^x)
= ν(σ^x)/( ν(σ)+ν(σ^x) )
= ( 1+ν(σ)/ν(σ^x) )^-1
with the corresponding Dirichlet form
D_ν^ HB(F)
= 1/2∑_x∈Λ∑_σ∈ΩΨ(ν(σ),ν(σ^x)) (F(σ)-F(σ^x))^2, Ψ(a,b)=ab/(a+b).
The Metropolis jump rates correspond to Ψ(a,b)= min{a,b}.
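As an illustration of these rates, here is a minimal sketch (a toy example, not from the text; the coupling matrix, the value of β and the function names are illustrative) of a discrete-time heat-bath sweep for the Ising measure above, where the spin at x is flipped with probability c^HB(σ,σ^x).

```python
import numpy as np

def heat_bath_sweep(sigma, A, beta, rng):
    """One sweep of heat-bath Glauber dynamics for nu(sigma) ~ exp(-beta/2 (sigma, A sigma)).
    The flip sigma -> sigma^x is performed with probability
    c^HB(sigma, sigma^x) = nu(sigma^x) / (nu(sigma) + nu(sigma^x))."""
    n = len(sigma)
    for x in rng.permutation(n):
        h = A[x] @ sigma - A[x, x] * sigma[x]        # local field sum_{y != x} A_{xy} sigma_y
        r = np.exp(2.0 * beta * sigma[x] * h)        # ratio nu(sigma^x) / nu(sigma)
        if rng.random() < r / (1.0 + r):
            sigma[x] = -sigma[x]
    return sigma

# illustrative example: ferromagnetic nearest-neighbour ring, A = -(adjacency matrix)
n, beta = 64, 0.5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = -1.0
rng = np.random.default_rng(1)
sigma = rng.choice([-1, 1], size=n)
for _ in range(200):
    sigma = heat_bath_sweep(sigma, A, beta, rng)
print("magnetisation:", sigma.mean())
```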
For further general discussion of Glauber dynamics in the discrete case, see <cit.> and <cit.>.
As in (<ref>), the distribution at time t will be denoted by m_t.
§.§ Log-Sobolev inequality
In the above examples (with Λ finite), one always has the qualitative ergodicity
m_t →ν= m_∞
which amounts to an irreducibility condition.
One of the main questions we are interested in is how fast this convergence is.
A very good measure for the distance between m_t and ν= m_∞ with many further
applications is the relative entropy:
(m_t |ν) = _ν [F_t log F_t] = _ν(F_t ), F_t = dm_t/dν.
More generally, when F is nonnegative but not necessarily satisfies _νF = 1, define
_ν (F)= _ν [Φ(F)]- Φ(_ν [F]), Φ(x)= xlog x.
The relative entropy is not symmetric and thus not a metric, but it has many very useful
properties making it a good quantity, and it controls the total variation distance by Pinsker's inequality:
‖m_t-ν‖_ TV^2 ≤ 2 (m_t|ν).
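A quick numerical sanity check of these definitions (with arbitrary discrete distributions; the L^1 normalisation ‖m-ν‖_TV = ∑_x|m(x)-ν(x)|, matching the factor 2 above, is assumed):

```python
import numpy as np

def relative_entropy(m, nu):
    """H(m|nu) = sum_x m(x) log(m(x)/nu(x)) for discrete probability vectors."""
    m, nu = np.asarray(m, float), np.asarray(nu, float)
    mask = m > 0
    return float(np.sum(m[mask] * np.log(m[mask] / nu[mask])))

rng = np.random.default_rng(2)
nu = rng.random(10); nu /= nu.sum()
m = rng.random(10); m /= m.sum()
kl = relative_entropy(m, nu)
tv = np.abs(m - nu).sum()                 # total variation distance, L^1 normalisation
print(kl, tv, tv**2 <= 2.0 * kl)          # Pinsker: ||m - nu||_TV^2 <= 2 H(m|nu)
```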
One of the most important properties is that the relative entropy decreases under the dynamics.
We begin with the continuous case.
Consider the (continuous spin) stochastic dynamics (<ref>) with invariant measure ν and
Dirichlet form D_ν defined in (<ref>).
Then for F_t(φ) = _φ_0=φF(φ_t) as in (<ref>),
t_ν(F_t) = -D_ν(log F_t, F_t) =
- I_ν (F_t) ≤ 0,
where the Fisher information is defined in terms of the Dirichlet form (<ref>):
I_ν (F_t)
:= _ν(∇ F_t)^2/F_t
= 4D_ν( √(F_t)).
Since Φ(_ν[F_t]) = Φ(_ν[F_0]) is independent of t and recalling that Δ^H is defined in (<ref>),
t_ν(F_t)
= t_ν[Φ(F_t)]
= _ν [Φ'(F_t)Ḟ_t]
= _ν [Φ'(F_t)Δ^H F_t]
= -D_ν (Φ'(F_t),F_t)
= -D_ν (log F_t+1,F_t)
= -D_ν (log F_t,F_t).
To complete the identity (<ref>), it is enough to notice that
D_ν (log F_t,F_t) = _ν[(∇log F_t, ∇ F_t)]
= _ν(∇ F_t)^2/F_t
= 4 _ν [(∇√(F_t))^2].
Using the identity (<ref>), the decay of the entropy can be quantified
in terms of the log-Sobolev constant which will be a key quantity we study.
A probability measure ν on X=^N, satisfies the log-Sobolev inequality (LSI) with respect to D_ν if
there is a constant γ>0 such that the following holds for any smooth, compactly supported function F:X→_+:
_ν (F) ≤2/γ D_ν(√(F)).
The largest choice of γ in this inequality is the log-Sobolev constant (with respect to D_ν).
The normalisation with the above factor 2 is convenient (see Proposition <ref>).
The upshot is that the exponential decay of the relative entropy (m_t|ν) of the distribution m_t (defined in (<ref>)) along the flow of the Glauber-Langevin dynamics,
(m_t|ν) ≤ e^-2γ t (m_0|ν),
follows, by Gronwall's lemma, from
t(m_t|ν)≤ -2γ (m_t|ν),
which, by the de Bruijn identity, is a consequence of the log-Sobolev inequality (<ref>).
Thus the log-Sobolev constant provides a quantitative estimate on the speed of relaxation
of the dynamics towards its stationary measure.
This is one of the main motivation for deriving the log-Sobolev inequality.
The log-Sobolev inequality (<ref>) has also other consequences. Especially, it is equivalent to the hypercontractivity of the associated Markov semigroup.
The hypercontractivity was, in fact, its original motivation <cit.>,
see also <cit.>.
The measure ν satisfies the log-Sobolev inequality (<ref>) with constant γ
if and only if
the associated semigroup _t is hypercontractive:
_tF_L^q(t)(ν)≤F_L^p(ν) with (q(t)-1)/(p-1) = e^2γ t.
We note that the hypercontractivity does not follow in this form from the modified log-Sobolev inequality
which will be introduced in (<ref>) below for dynamics with discrete state spaces.
More generally, the log-Sobolev inequality is part of a larger class of functional inequalities.
In particular, it implies the spectral gap inequality (also called Poincaré inequality).
The log-Sobolev inequality
with constant γ implies the spectral gap inequality (also called Poincaré inequality)
with the same constant:
Var_ν [F] ≤1/γ_ν (∇ F)^2 .
The same conclusion holds assuming the modified log-Sobolev inequality (<ref>) below instead of the log-Sobolev inequality.
The proof follows by applying the log-Sobolev inequality to the test function 1 + ε F and then letting
ε tend to 0, see, e.g., <cit.>.
We refer to <cit.> for an in-depth account on related functional inequalities and to <cit.> for the applications of the log-Sobolev inequality to the concentration of measure phenomenon.
For discrete spin models, the counterpart of Proposition <ref> is the following proposition.
Consider the discrete dynamics with invariant measure ν (<ref>) and
Dirichlet form D_ν defined in (<ref>).
Then for F_t(σ) = _σ_0=σF(σ_t),
t_ν(F_t) = -D_ν(log F_t, F_t) ≤ 0.
The proof is identical to (<ref>) replacing the continuous generator by Δ_c
defined in (<ref>).
As the chain rule no longer applies in the discrete setting, the Fisher information cannot be recovered. Nevertheless the exponential decay of the entropy (<ref>) can be established under the modified log-Sobolev inequality (mLSI), i.e., if there is γ >0 such that for any function F: {± 1}^Λ↦^+:
_ν (F) ≤1/2γ D_ν (log F, F).
In view of Exercise <ref>, this inequality is weaker than the standard log-Sobolev inequality (<ref>),
a point discussed in detail in <cit.>.
In the discrete case, the different quantities in (<ref>) are not equal.
Verify the inequality 4(√(a)-√(b))^2 ≤ (a-b)log(a/b) for a,b>0 and hence show D_ν(log F,F)≥ 4D_ν(√(F)).
§.§ Bakry–Émery theorem
In verifying the log-Sobolev inequality for spins taking values in a continuous space, a very useful criterion is the Bakry–Émery theorem which
applies to log-concave probability measures.
Consider a probability measure on X = ^N (or a linear subspace) of the form (<ref>) and assume
that there is λ>0 such that as quadratic forms:
∀φ∈ X: Hess H(φ) ≥λ𝕀.
Then the log-Sobolev constant of ν satisfies γ≥λ.
For quadratic H, one can verify (by a simple choice of test function) that in fact the equality γ = λ holds.
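For instance (a standard computation, spelled out here only as an illustration), let ν be the one-dimensional Gaussian measure with H(φ) = λφ^2/2 and consider the test functions F_a(φ) = e^aφ, a∈ℝ. Then _ν[F_a] = e^a^2/(2λ), _ν(F_a) = (a^2/(2λ)) e^a^2/(2λ) and D_ν(√(F_a)) = (a^2/4) e^a^2/(2λ), so that _ν(F_a) = (2/λ) D_ν(√(F_a)) for every a. The inequality (<ref>) is therefore saturated by this family, which gives γ≤λ and hence γ = λ.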
An equivalent way to state the assumption Hess H(φ) ≥λ𝕀 is to say that H
can be written as H(φ)= 1/2 (φ,Aφ)+V_0(φ) with
a symmetric matrix A≥λ𝕀
and V_0 convex.
The entropy (<ref>) can be estimated by interpolation along the semigroup (<ref>) of the Langevin dynamics associated with ν. Setting F_t = P_t (F), we note that
_ν (F)= _ν [Φ(F)]- Φ(_ν [F])
= _ν [Φ(F_0) - Φ(F_∞ ) ]
where we used that the dynamics converges to the invariant measure (<ref>) which implies
Φ(_ν[F]) = lim_t→∞_ν[Φ(F_t)].
Indeed, we may assume that F takes values in a compact interval I ⊂ (0,∞),
and by the positivity of the semigroup F_t then takes values in I for all t≥ 0, and we can replace Φ by a bounded smooth function g that
coincides with Φ on I.
By the de Bruijn identity (<ref>) (and using that _ν[F_t] is independent of t), therefore
_ν (F)
= -∫_0^∞ dt t_ν [Φ(F_t)]
= ∫_0^∞ dt I_ν(F_t).
Provided that the Fisher information I_ν(F_t) introduced in (<ref>) satisfies
I_ν (F_t) ≤ e^-2 λ t I_ν (F_0),
the log-Sobolev inequality follows from (<ref>) and (<ref>)
by integrating in time:
_ν (F) = ∫_0^∞ dt I_ν (F_t)
≤1/2λ I_ν (F_0)
= 2/λ D_ν(√(F)).
To prove (<ref>), differentiate again:
tI_ν(F_t)
= t_ν(∇ F_t)^2/F_t
= _ν (t-Δ^H) (∇ F_t)^2/F_t,
where we used that _ν[Δ^HG] =0 for every sufficiently nice function G: X →.
It is an elementary but somewhat tedious exercise to verify that
(t-Δ^H) (∇ F_t)^2/F_t
= -2 F_t ( |Hess log F_t|_2^2_≥ 0 + (∇log F_t, Hess H(φ)_≥λ𝕀∇log F_t) ).
Hence
t I_ν (F_t)
≤
-2λ_ν [F_t(∇log F_t, ∇log F_t)]
=
-2λ_ν(∇ F_t)^2/F_t
= -2λ I_ν (F_t)
which implies the claim (<ref>).
§.§ Decomposition and properties of the entropy
The Bakry–Émery criterion (Theorem <ref>) implies the validity of the log-Sobolev inequality for all the Gibbs measures with strictly convex potentials. For more general measures, the log-Sobolev inequality
is often derived by decomposing the entropy thanks to successive conditionings.
We are going to sketch this procedure below.
Assume that the expectation under ν is of the form
_νF : = _νF (φ_1,φ_2) =_2 _1[ F (φ_1,φ_2) | φ_2 ] = _2 _1 F,
where _1 [ · ] :=_1 [ · | φ_2 ] is the conditional measure wrt the variable φ_2.
Then the entropy can be split into two parts:
(F)
=
_ν [Φ(F)]- Φ(_ν [F])
= _2 [ _1[Φ(F)] -Φ(_1 [F])__1 (F) ]
+ _2 Φ(_1[F]) -
Φ(_2 _1 [F] )__2(_1 [F]).
Given φ_2, the first term involves the relative entropy of a simpler measure _1 ( · | φ_2) as the integration refers only to the coordinate φ_1:
_1 (F) (φ_2) = _1[Φ(F) | φ_2 ] -Φ(_1 F | φ_2 ).
The strategy is to estimate this term (uniformly in φ_2) by the desired Dirichlet form acting only on φ_1.
The second term _2 Φ(_1[F]) is more complicated because the expectation _1 [ F( ·, φ_2) | φ_2 ] is inside the relative entropy.
For a product measure ν = ν_1 ⊗ν_2, this term can be estimated easily (as recalled in Example <ref> below).
In this way, the log-Sobolev inequality for ν = ν_1 ⊗ν_2 is reduced to establishing log-Sobolev inequalities for the simpler measures ν_1 and ν_2.
In general, the conditional expectations are intertwined and the second term _2(_1 [F]) is much more difficult to estimate.
There are two general strategies: either one also bounds this term by the desired Dirichlet form (and thus one somehow has to move the expectation out
of the entropy) or one bounds it by κ(F) with κ < 1. In the latter case, the estimate reduces to the first term,
at the expense of an overall factor (1-κ)^-1.
For a given measure ν, the decomposition entropy (<ref>)
can be achieved with different choices of the measures _1, _2.
The optimal choice depends on the structure of the measure ν.
In this survey, we focus on Gibbs measures of the form (<ref>) which arise naturally in statistical mechanics.
The renormalisation group method
constitutes a framework to study such Gibbs measures
(see <cit.> for an introduction and references)
and provides strong insight on a good entropy decomposition.
This is the core of the method presented in Section <ref>,
which is based on the Polchinski equation, a continuous version of the renormalisation group.
[Tensorisation]
Assume that probability measures ν_1 and ν_2 satisfy log-Sobolev inequalities with the constants
γ_1 and γ_2. Then the product measure ν = ν_1⊗ν_2 also satisfies a log-Sobolev inequality
with constant γ = min{γ_1,γ_2 } (with the natural Dirichlet form on the product space).
For simplicity, assume that ν_1, ν_2 are probability measures on and denote by _1, _2 their expectations so that _νF =_2 _1 F for functions F( φ_1, φ_2).
As discussed in (<ref>), the entropy can be decomposed as
(F) = _2[_1(F)] + _2(G)
with G(φ_2) = _1[F(·,φ_2)].
The log-Sobolev inequalities for _1 and _2 imply that for γ = min{γ_1,γ_2 }:
2γ(F) ≤_2[D_1(√(F))] + D_2(√(G)).
It remains to recover the Dirichlet form associated with the product measure ν:
D_ν (√(F)) = _ν (∂_φ_1√(F))^2 + (∂_φ_2√(F))^2 .
The first derivative is easily identified:
_2[D_1(√(F))] = _2 _1 (∂_φ_1√(F))^2
= _ν(∂_φ_1√(F))^2.
For the second derivative:
∂_φ_2√(G(φ_2)) = 1/(2 √(G(φ_2)))_1[ ∂_φ_2 F(·,φ_2)] = 1/√(G(φ_2))_1[ √(F(·,φ_2)) ∂_φ_2√( F(·,φ_2)) ] ,
so that by Cauchy-Schwarz inequality, we deduce that
D_2(√(G)) = _2 ( ∂_φ_2√(G(φ_2)))^2 ≤_2 _1 ( ∂_φ_2√( F(·,φ_2)))^2
= _ν (∂_φ_2√(F))^2 .
This reconstructs the Dirichlet form (<ref>) and completes the proof.
A similar argument applies in the discrete case, see for example <cit.>.
More abstractly, the tensorisation of the log-Sobolev constant also follows
from the equivalence between the log-Sobolev inequality and hypercontractivity (which tensorises more obviously).
We conclude this section by stating useful variational characterisations of the entropy.
The entropy of a function F ≥ 0 can be rewritten as
_ν(F) = sup_ν[FG] : Borel functions G such that _ν[e^G] ≤ 1
with equality if G = log(F/_ν[F]), or as
_ν(F) = sup_ν [F log F -F log t - F + t]: t> 0
with equality if t = _ν[F].
Finally, one has (also called the entropy inequality):
_ν(F) = sup_ν [FG] - _ν[F] log_ν[e^G]:
Borel functions G
with equality if G = log F.
Since _ν(F) = _ν FlogF/_ν[F],
to show (<ref>),
it is enough to consider the case _ν[F] = 1, by homogeneity of both sides.
Applying Young's inequality
∀ a ≥ 0, b ∈,
a b ≤ a log a - a + e^b ,
with _ν[e^G] ≤ 1, we get
_ν[FG] ≤_ν(F) -1 + _ν[e^G] ≤_ν(F).
This implies (<ref>) as the converse inequality holds with G = log F.
The variational formula (<ref>) follows directly from Young's inequality by choosing a=_ν[F] and b = log t:
_ν(F) = _ν[Flog F] - a log a ≤_ν[Flog F] - ab -a +e^b
= _ν[F log F - F log t - F + t],
where again (<ref>) follows since equality holds with t=_ν[F].
To show (<ref>), we may again assume _ν[F]=1. Then apply Jensen's inequality with respect to the probability measure dν^F = F dν:
log_ν[e^G] = log_ν^F[F^-1e^G] ≥_ν^F[log(F^-1e^G)] = - _ν[Flog F] + _ν[FG],
with equality if G = log F.
The Holley–Stroock criterion for the log-Sobolev inequality is a simple consequence of (<ref>), see e.g.,
the presentation in <cit.>.
[Holley–Stroock criterion]
Assume a measure ν satisfies the log-Sobolev inequality with constant γ. Then the measure
ν^F with dν^F/dν =F satisfies a log-Sobolev inequality with
constant γ^F ≥ (inf F/sup F)γ.
§.§ Difficulties arising from statistical physics perspective
To explain the difficulties arising in the derivation of log-Sobolev inequalities and to motivate our set-up of renormalisation,
we are going to consider lattice spin systems with continuous spins and Hamiltonian of the form (<ref>).
The strength of the interaction is tuned by the parameter β≥ 0 and the Gibbs measure
(<ref>) has a density on ^Λ of the form
ν (dφ)
∝exp - β/4∑_x,y ∈Λ, x ∼ y (φ_x - φ_y)^2
-∑_x ∈Λ V(φ_x) ∏_x ∈Λ d φ_x.
Further examples will be detailed in Section <ref>.
We are interested in the behaviour of the measure (<ref>) in the limit where the number of sites |Λ| (and thus the dimension of the configuration space) is large.
In this limit, when the potential V is not convex,
the measure can have one or more phase transitions at critical values of the parameter β.
These phase transitions separate regions of values of β between which the measure ν has a different correlation structure,
different concentration properties, and so on. The speed of convergence of the associated Glauber dynamics (<ref>) is also affected.
See the book <cit.>
or <cit.>
for background on phase transitions in statistical mechanics.
To analyse the log-Sobolev inequality for the measure (<ref>),
note that the lack of convexity precludes the use of the Bakry–Émery criterion (Theorem <ref>),
and the Holley-Stroock criterion (Exercise <ref>) is not effective due to the large dimension of the configuration space when β>0.
On the other hand, when β=0, the Gibbs measure is a product measure and the log-Sobolev inequality holds
uniformly in Λ,
with the same constant as for the single spin |Λ|=1 measure (Example <ref>).
This tensorisation property has been generalised for β small enough
in terms of mixing conditions
and for some spin systems up to the critical value β_c, see <cit.> for a review.
Indeed, for β small, the interaction between the spins is small,
in the sense that one can show that correlations between spins decay exponentially in their distance.
At distances larger than a correlation length ξ_β < ∞ approximate independence between the spins is then recovered.
By splitting the domain Λ into boxes (of size larger than ξ_β), and using appropriate conditionings the system can be analysed as a renormalised model of weakly interacting spins <cit.>.
In so-called second order phase transitions,
when β approaches the critical β_c the correlation length diverges as a function of β_c-β, and so does the inverse log-Sobolev constant, i.e., the dynamics slows down (as can usually be verified by simple test functions in the spectral gap or log-Sobolev inequality).
Spins are thus more and more correlated for β close to β_c.
Nevertheless in some cases <cit.> the strong dynamical mixing properties were
derived up to the critical value β_c by using a strategy which however can be seen as a (large) perturbation of the product case with respect to β >0.
For this reason, it seems difficult to extract the precise divergence of the log-Sobolev constant near β_c with this type of approach.
To study the detailed static features of measures of the form (<ref>) close to β_c,
different types of renormalisation schemes have been devised with an emphasis on the Gaussian structure of the interaction.
In many cases one expects that the long range structure at the critical point is well described in terms of a Gaussian free field <cit.>.
Compared with the previously mentioned approaches, the perturbation theory no longer uses the product measure as a reference, but the Gaussian free field.
In the following, this structure will serve as a guide to decompose the entropy as alluded to in (<ref>).
Before describing this procedure in Section <ref>,
the elementary example of the Gaussian free field,
which illustrates the difficulty of many length scales equilibrating at different rates,
is presented in Example <ref> below.
§.§ Difficulties arising from continuum perspective
The long-distance problem discussed in the previous subsection is closely related to the short-distance problem
occurring in the study of continuum limits as they appear in quantum field theory and weak interaction limits,
which also arise as invariant measure of (singular) SPDEs.
In field theory, one is interested in the physical behaviour of a measure that is defined not on a lattice,
but in the continuum (say on L𝕋^d with 𝕋^d = [0,1)^d the d-dimensional torus), formally reading:
ν_L(dφ)
∝ e^-H_L(φ)∏_x∈ L^ddφ_x
.
Now φ should be a (generalised) function from L^d to , and a typical example for H_L would be the continuum φ^4 model, defined for λ>0 and μ∈ by:
H_L(φ)
=
1/2∫_L^d|∇φ|^2 dx
+ ∫_L^d[λ/4φ(x)^4 + μ/2φ(x)^2] dx
.
Of course, the formal definition (<ref>) does not make sense as it stands,
and a standard approach to understand such measures is
as a limit of measures defined on lattices Λ_ϵ,L = L𝕋^d∩ϵℤ^d with a vanishing ϵ:
ν_ϵ,L(dφ)
∝
e^-H_ϵ,L(φ)∏_x∈Λ_ϵ,Ldφ_x,
for a discrete approximation H_ϵ,L of H_L of the form
H_ϵ,L (φ) = ϵ^d-2/4∑_y∼ x(φ_y-φ_x)^2
+ ϵ^d∑_x∈Λ_ϵ,L V^ϵ(φ_x) ,
where the potential V^ϵ is of the form (<ref>).
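For concreteness, here is a minimal sketch of the discretised action (with the φ^4 potential and fixed coefficients; the ε-dependent tuning of μ discussed below is deliberately not implemented, and the function name and parameter values are illustrative):

```python
import numpy as np

def H_eps_L(phi, eps, lam, mu):
    """Discretised phi^4 action on a periodic grid of spacing eps:
    eps^(d-2)/4 * sum_{x ~ y} (phi_x - phi_y)^2  (each pair counted twice)
    + eps^d * sum_x [ lam/4 phi_x^4 + mu/2 phi_x^2 ]."""
    d = phi.ndim
    grad2 = sum((np.roll(phi, -1, axis=a) - phi) ** 2 for a in range(d))   # each pair once
    kinetic = 0.5 * eps ** (d - 2) * grad2.sum()
    potential = eps**d * np.sum(0.25 * lam * phi**4 + 0.5 * mu * phi**2)
    return kinetic + potential

eps, L = 0.25, 2.0
n = int(round(L / eps))
phi = np.random.default_rng(4).standard_normal((n, n))    # a d = 2 illustration
print(H_eps_L(phi, eps, lam=1.0, mu=-0.5))
```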
In the example of the φ^4 model, it turns out that such a limit can be constructed if d<4,
but in d≥ 2
the coefficient μ of the potential must be tuned correctly as a function of ϵ→ 0,
see Section <ref> for details.
This tuning is known as addition of “counterterms” in quantum field theory. These are the infamous infinities arising there.
The relation to the statistical physics (long-distance) problem is that these limits correspond to statistical physics models near a phase transition, with
scaled (weak) interaction strength (ϵ^dλ→ 0 as ϵ→ 0).
In particular, due to the counterterms, the resulting measures are usually again very non-convex microscopically,
precluding the use of the Bakry–Émery theory and the Holley–Stroock criteria.
Nonetheless
the regularisation parameter ϵ is not expected to have any influence on the physics of the model,
in the sense that the existence of a phase transition, the speed of the Glauber dynamics, concentration properties, and so on,
should all be uniform in the small scale parameter ϵ and depend only on the large scale parameter L.
Techniques to control the regularisation parameter ϵ are often simpler than for the large scale problem near the critical point,
but they are also based on renormalisation arguments relying on comparisons with the Gaussian free field
(corresponding to a quadratic H_ϵ,L).
Finally,
the problem of relaxation at different scales is illustrated next in this simple model.
[Free field dynamics]
Consider now the Gaussian free field dynamics corresponding to V^ϵ =0 in (<ref>):
dφ_t = -Aφ_t dt + √(2) dW_t,
where A is the Laplace operator on Λ_ϵ,L as in (<ref>)
and the white noise is defined with respect to the inner product ϵ^d ∑_x∈Λ_ϵ,L u_xv_x, i.e., each W_t(x)
is a Brownian motion of variance ϵ^-d.
On the torus Λ_ϵ,L= L𝕋^d∩ϵℤ^d of mesh size ϵ and side length L, the eigenvalues of A = -Δ^Λ are
λ(p) = ϵ^-2∑_i=1^d 2(1-cos(ϵ p_i)) ≈ |p|^2 for ϵ|p| ≲ 1,
p ∈Λ_ϵ,L^* = (-π/ϵ,π/ϵ]^d ∩ (2π/L)ℤ^d.
All Fourier modes of φ evolve independently according to Ornstein–Uhlenbeck processes:
p ∈Λ_ϵ,L^*,
dφ̂(p) = -λ(p)φ̂(p) dt + √(2) dŴ_t(p),
where the Ŵ(p) = (Ŵ_t(p))_t are independent standard Brownian motions for p ∈Λ^*.
In particular, small scales corresponding to |p| ≫ 1 converge very quickly to equilibrium,
while the large scales |p| ≪ 1 are slowest. Thus the main contribution to the log-Sobolev constant comes from the large scales
and we expect that a similar structure remains relevant in many interacting systems close to a critical point.
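The separation of relaxation rates can be made quantitative with a few lines of code (a one-dimensional slice, with illustrative values of ε and L):

```python
import numpy as np

eps, L = 0.05, 8.0
p = 2.0 * np.pi / L * np.arange(1, int(L / (2 * eps)) + 1)   # nonzero momenta, 1d slice
lam = (2.0 / eps**2) * (1.0 - np.cos(eps * p))               # eigenvalues of A = -Laplacian
print("slowest nonzero mode: p = %.3f, rate = %.3f  (compare (2*pi/L)^2 = %.3f)"
      % (p[0], lam[0], (2.0 * np.pi / L) ** 2))
print("fastest mode:         p = %.3f, rate = %.3f  (compare 4/eps^2   = %.3f)"
      % (p[-1], lam[-1], 4.0 / eps**2))
```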
In both the statistical and continuum perspectives, for measures with
an interaction V_0(φ) = ∑_x∈Λ V(φ_x) ≠ 0 on top of the free field interaction,
the main difficulties result from the simple fact that the local (in real space) interactions do not interact well with the above Fourier decomposition.
The Polchinski flow that we will introduce in the next section can be seen as
a replacement for the Fourier decomposition, in which the Fourier variable p takes the role of scale, by a
smoother scale decomposition.
§ GAUSSIAN INTEGRATION AND THE POLCHINSKI EQUATION
In this section, we first review abstractly a continuous renormalisation procedure,
which goes back to Wilson <cit.> and Polchinski <cit.> in physics
in the context of equilibrium phase transitions and quantum field theory (viewed as an problem of statistical mechanics in the continuum).
We then explain how the entropy of a measure can be decomposed by this method in order to derive a
log-Sobolev inequality via a multiscale Bakry–Émery criterion.
§.§ Gaussian integration
For C a positive semi-definite matrix on ^N, we denote by
_C the corresponding Gaussian measure with covariance C and by _C its expectation.
The measure _C is supported on the image of C.
In particular, if C is strictly positive definite on ^N,
_C[F] ∝∫_^N e^-1/2 (ζ,C^-1ζ) F(ζ) dζ.
A fundamental property of the Gaussian measure is its semigroup property: if C=C_1+C_2 with C_1, C_2 also positive semi-definite then
_C[F(ζ)] =
_C_2[_C_1[F(ζ_1+ζ_2)]],
corresponding to its probabilistic interpretation that if ζ_1 and ζ_2 are independent Gaussian random variables
then ζ_1+ζ_2 is also Gaussian and the covariance of ζ_1+ζ_2 is the sum of the covariances of ζ_1 and ζ_2.
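This semigroup property is easy to check by simulation (a Monte Carlo sketch with an arbitrary bounded test function and arbitrary covariances, included only as an illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 3, 200_000
B1, B2 = rng.standard_normal((N, N)), rng.standard_normal((N, N))
C1, C2 = B1 @ B1.T, B2 @ B2.T                 # two positive semi-definite covariances
F = lambda z: np.cos(z.sum(axis=-1))          # an arbitrary bounded test function

z1 = rng.multivariate_normal(np.zeros(N), C1, size=M)
z2 = rng.multivariate_normal(np.zeros(N), C2, size=M)
z = rng.multivariate_normal(np.zeros(N), C1 + C2, size=M)
# the two Monte Carlo averages agree up to sampling noise
print(F(z1 + z2).mean(), F(z).mean())
```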
As discussed in Section <ref>, recall that
our goal is to decompose the entropy of a measure by splitting this measure into simpler parts as in (<ref>).
The above Gaussian decomposition will be a basic step for this.
As we have seen in Example <ref>, the dynamics of a spin or particle system close to a phase transition will depend on a very large number of modes and it will be necessary to iterate the decomposition
(<ref>) many times in order to decouple all the relevant modes.
In fact, it is even convenient to introduce a continuous version of the decomposition (<ref>), as follows.
For a covariance matrix C as above, define an associated Laplace operator Δ_C on ^N:
Δ_C = ∑_i,j C_ij^2φ_i∂φ_j,
and write (·,·)_C for the inner product associated with the covariance C:
(u,v)_C = ∑_i,j C_ij u_iv_j
and
|u|_C^2 = (u)_C^2 = (u,u)_C.
The standard scalar product is denoted by (u,v) = ∑_i u_i v_i.
Let t∈ [0,+∞] ↦ C_t be a family of positive semidefinite matrices on ^N increasing continuously as quadratic forms to a matrix C_∞.
More precisely, we assume that C_t = ∫_0^t Ċ_s ds for all t,
where t↦Ċ_t is a bounded cadlag (right-continuous with left limits) function with values in the space of positive semidefinite matrices
that is the derivative of C_t except at isolated points. We say that C_∞ = ∫_0^∞Ċ_s ds is a covariance decomposition
and
write X ⊂^N for the image of C_∞.
We emphasise that the (closed) interval [0,+∞] parametrising the covariances has no special significance and that all constructions
will be invariant under appropriate reparametrisation. For example, one can equivalently use [0,1].
For a C^2 function F: X →, let F_t = _C_t * F, i.e., F_t(φ)= _C_tF(φ+ζ).
Then for all t which are not discontinuity points of Ċ_t,
t F_t = 1/2Δ_Ċ_t F_t, F_0 = F.
Thus the Gaussian measures _C_t satisfy the heat equation
t_C_t = 1/2Δ_Ċ_t_C_t,
interpreted in a weak sense if C_t is not strictly positive definite.
In the case that C_t-ϵ is strictly positive definite for some ϵ>0 (and by monotonicity then also for all larger times),
this is a direct computation from (<ref>).
For t that are not discontinuity points of Ċ_t, the image X_s of C_s is independent of s ∈ [t-ϵ,t+ϵ]
and one has the representation (<ref>) on X_t.
Alternatively, one can prove the proposition using Itô's formula.
For any φ∈^N, define the process
∀ t ≥ 0, ζ_t = φ + ∫_0^t √(Ċ_s) dB_s ∈^N,
where (B_s)_s is a Brownian motion taking values in ^N. By construction ζ_t is a Gaussian variable with mean φ and variance C_t = ∫_0^t Ċ_s ds. In particular F_t(φ) = F(ζ_t) and by Itô's formula,
∀ t ≥0, t F_t(φ) = 1/2Δ_Ċ_t F(ζ_t) = 1/2Δ_Ċ_t_C_tF(φ+ζ)
= 1/2Δ_Ċ_t F_t(φ),
with the derivative interpreted as the right-derivative at the discontinuity points of Ċ_t.
Note that the decomposition (<ref>) is the natural extension of the discrete decomposition ζ = ζ_1 + ζ_2.
Given a covariance matrix C, many decompositions are possible, such as:
C = ∫_0^∞Ċ_s ds
with Ċ_s = C 1_s ∈ [0,1).
For a given model from statistical mechanics, it will be important to adjust the decomposition according to the specific spatial structure (of Λ) of this model.
In Example <ref>, the Gaussian free field has covariance matrix A^-1
with A the discrete Laplace operator as in (<ref>).
There are many decompositions of A^-1 of the form ∫_0^∞Ċ_s ds and the best choice will depend on the application.
Nevertheless a suitable decomposition should capture the mode structure of the decomposition (<ref>) in order to separate the different scales in the dynamics.
Indeed, the key idea of a renormalisation group approach is to integrate the different scales one after the other in order.
This is especially important in strongly correlated systems, in which the different scales do not decouple,
and integrating some scales has an important effect on the remaining scales: the interaction potential will get renormalised.
We refer to Section <ref> for several applications.
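A concrete example of such a decomposition (one natural choice among many; which choice is best is model-dependent, as emphasised above) is the heat-kernel decomposition Ċ_t = e^-tA, for which ∫_0^∞ e^-tA dt = A^-1 whenever A is positive definite, so that C_t = A^-1(1-e^-tA) increases from 0 to A^-1. On an eigenmode of A with eigenvalue λ(p), the variance (1-e^-tλ(p))/λ(p) saturates for t ≳λ(p)^-1, so the modes with large λ(p), i.e. the small scales, are integrated out first, in line with the free field example above.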
§.§ Renormalised potential and Polchinski equation
In this section, we define the Polchinski flow and analyse its structure.
A simple explicit example is worked out in Example <ref> below.
We will focus on probability measures ν_0 supported on a linear subspace X ⊂^N.
By considering the measure ν_0(A-a) for a∈^N,
this also includes measures supported on an affine subspace which is of interest for
conservative dynamics.
For generalisations to non-linear spaces, see Section <ref>.
Let C_∞ = ∫_0^∞Ċ_t dt be a covariance decomposition,
and consider a probability measure ν_0 on X with expectation
given by
_ν_0 [F]
∝_C_∞e^- V_0(ζ) F(ζ) ,
with a potential V_0:X →, where the Gaussian expectation acts on the variable ζ.
To avoid technical problems, we always assume in the following that V_0 is bounded below.
We are going to use the Gaussian representation introduced in the previous subsection in order to
decompose the measure ν_0. For this, let us first introduce some notation.
For t>s>0, F: X → bounded, and φ∈ X, define:
* the renormalised potential V_t:
V_t(φ) = - log_C_te^-V_0(φ+ζ);
* the Polchinski semigroup _s,t:
_s,tF(φ) = e^V_t(φ)_C_t-C_se^-V_s(φ+ζ) F(φ+ζ);
* the renormalised measure ν_t:
_ν_t [F] = _t,∞F(0) = e^V_∞(0)_C_∞-C_te^-V_t(ζ) F(ζ),
where all the Gaussian expectations apply to ζ.
We stress that the renormalised measure ν_t evolving according to the Polchinski semigroup
is different from the measure m_t in (<ref>) evolving along the flow of the Langevin
dynamics (which we will not discuss directly in this section).
Note that in (<ref>), e^+V_∞(0) is the normalisation factor of the probability measure ν_t.
More generally, the function V_∞ is equivalent to the moment generating function of the measure ν_0:
changing variables from ζ to ζ + C_∞ h,
V_∞(C_∞ h)
= -log_C_∞[e^-V_0(C_∞ h+ζ)]
= 1/2 (h, C_∞ h) -log_C_∞[e^-V_0(ζ)e^(h,ζ)]
= 1/2 (h, C_∞ h) -log_ν_0[e^(h,ζ)] + V_∞(0).
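As an elementary illustration of the renormalised potential (a direct Gaussian computation, not tied to any specific model): in one dimension, with C_t = t and V_0(φ) = aφ^2/2 for some a>0, completing the square gives V_t(φ) = aφ^2/(2(1+at)) + 1/2 log(1+at). The renormalised potential thus remains quadratic and its curvature a/(1+at) decreases along the flow, and one can check directly that this expression solves the Polchinski equation stated below.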
The renormalised measure ν_t is related to ν_0 by the following identity.
For t ≥ 0 and any F : X ↦ such that the following quantities make sense,
_ν_0 F = _ν_t_0,t F .
Starting from (<ref>), from the Gaussian decomposition (<ref>) we get
_C_∞e^- V_0(ζ) F(ζ) = _C_∞ - C_t_C_t e^- V_0(φ + ζ) F(φ + ζ) = _C_∞ - C_t e^- V_t(φ) e^+ V_t(φ)_C_t e^- V_0(φ + ζ) F(φ + ζ) ∝_ν_t_0,t F ,
where ζ is integrated with respect to _C_t and φ with respect to _C_∞ - C_t.
We used the definitions (<ref>) and (<ref>) in the last line,
and recall that ∝ is an equality up to a normalising factor so that ν_t is a probability measure.
This completes the proof of the identity (<ref>).
Using the definition (<ref>) of V_t,
the action of the Polchinski semigroup (<ref>) can be interpreted as a conditional expectation
with respect to φ:
_0,tF(φ) =
_C_te^-V_0(φ+ζ) F(φ+ζ)/_C_te^-V_0(φ+ζ)
=: _μ_t^φ [F(ζ)].
This defines a probability measure μ_t^φ called the fluctuation measure.
Assuming C_t is invertible and changing variables from φ+ζ to ζ,
the fluctuation measure can be written equivalently as
μ_t^φ(dζ) = e^+V_t(φ)
e^-1/2(φ-ζ,C_t^-1(φ-ζ)) - V_0(ζ) dζ∝ e^-1/2( ζ,C_t^-1ζ) + (ζ,C_t^-1φ) - V_0(ζ) dζ .
Besides the addition of an external field C_t^-1φ, the structure of this new measure is similar to the one of the original measure ν_0 introduced in (<ref>), but the covariance of the Gaussian integration is now C_t.
By construction C_t ≤ C_∞, so that the Hamiltonian of the conditional measure (<ref>) is more convex and will hopefully be easier to handle.
The fluctuation measure is central in the stochastic localisation framework which will be presented in
Section <ref>.
For all bounded function F: X → and all t>0, the identity (<ref>) reads
_ν_0[F] = _ν_t[_0,tF(φ)] = _ν_t[_μ_t^φ[F(ζ)]],
where φ denotes the variable of ν_t and ζ the variable of μ_t^φ.
This is therefore an instance of the measure decomposition (<ref>) by successive conditionings.
The splitting of the covariance C_∞ = C_∞ - C_t + C_t will be chosen so that the field ζ encodes the local interactions, which correspond to the fast scales of the dynamics, and φ the long range part of the interaction, associated with the slow dynamical modes.
Integrating out the short scales boils down to considering a new test function
_0,t F(φ) and a measure ν_t (<ref>) which is expected to have better properties than the original measure ν_0.
This is illustrated in a one-dimensional case in Example <ref>.
Models from statistical mechanics often involve a multiscale structure when approaching the phase transition. For this reason, it is not enough to split the measure into two parts as in (<ref>). The renormalisation procedure is based on a recursive procedure with successive integrations of the fast scales in order to simplify the measure step by step.
As an example, let us describe a two step procedure: for s < t, splitting the covariance into
C_s, C_t- C_s, C_∞ - C_t, can be achieved by applying twice (<ref>)
_ν_0 F = _ν_s_0,s F
= _ν_t_s,t( _0,s F ) = _ν_t_0,t F .
Thus _s,t inherits a semigroup property from the nested integrations.
For infinitesimal renormalisation steps, we are going to show in Proposition <ref> that
the Polchinski semigroup is in fact a Markov semigroup with a structure reminiscent of the Langevin semigroup
(<ref>).
To implement this renormalisation procedure, one has also to control the renormalised measure ν_t.
For infinitesimal renormalisation steps, its potential V_t evolves according to the following Hamilton–Jacobi–Bellman equation, known as Polchinski equation.
Let (C_t) be as above, and let V_0 ∈ C^2.
Then for every t such that C_t is differentiable
the renormalised potential V_t defined in (<ref>) satisfies the Polchinski equation
t V_t = 1/2Δ_Ċ_t V_t - 1/2 (∇ V_t)_Ċ_t^2
where Δ_Ċ_t was defined in (<ref>) and
the scalar product in (<ref>).
Let Z_t(φ)= _C_t[e^-V_0(φ+ζ)].
By Proposition <ref>,
it follows that the Gaussian convolution
acts as the heat semigroup with time-dependent generator 1/2Δ_Ċ_t, i.e.,
if Z_0 is C^2 in φ so is Z_t for any t>0,
that Z_t(φ)>0 for any t and φ, and that for any t>0
such that C_t is differentiable,
t Z_t = 1/2Δ_Ċ_t Z_t, Z_0=e^-V_0.
Since Z_t(φ)>0 for all φ, its logarithm V_t = -log Z_t is well-defined and satisfies the Polchinski equation
t V_t = - t Z_t/Z_t
= -Δ_Ċ_t Z_t/2Z_t
= -1/2 e^V_tΔ_Ċ_t e^-V_t
= 1/2Δ_Ċ_t V_t - 1/2 (∇ V_t)_Ċ_t^2
.
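As a numerical complement to the quadratic example above, the following sketch evaluates V_t by quadrature for a non-convex double-well V_0 and reports the minimal curvature of V_t on a grid; the choice of V_0, the grid sizes and the times are arbitrary illustrative choices.

```python
import numpy as np

def renormalised_potential(phi_grid, t, V0, zmax=8.0, nz=4001):
    """V_t(phi) = -log E_{N(0,t)}[exp(-V0(phi + zeta))], computed by quadrature
    for the one-dimensional decomposition C_t = t (so Cdot_t = 1)."""
    z = np.linspace(-zmax, zmax, nz)
    w = np.exp(-z**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    integrand = np.exp(-V0(phi_grid[:, None] + z[None, :])) * w[None, :]
    return -np.log(np.trapz(integrand, z, axis=1))

V0 = lambda x: (x**2 - 1.0) ** 2              # a non-convex double-well initial potential
phi = np.linspace(-2.0, 2.0, 401)
h = phi[1] - phi[0]
for t in (0.05, 0.5, 2.0):
    Vt = renormalised_potential(phi, t, V0)
    curvature = np.diff(Vt, 2) / h**2         # discrete second derivative of V_t
    print("t = %4.2f   min V_t'' on the grid = %+.3f" % (t, curvature.min()))
```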
The semigroup structure is analysed in the following proposition. As mentioned above, we assume V_0 to be bounded below to avoid technical problems.
The operators (_s,t)_s≤ t form a time-dependent Markov semigroup with generators (_t),
in the sense that
_t,t = 𝕀 and _r,t_s,r=_s,t for all s≤ r ≤ t,
and _s,tF ≥ 0 if F ≥ 0 with _s,t1=1.
Furthermore for all t at which C_t is differentiable
(respectively s at which C_s is differentiable),
t_s,tF = _t _s,t F,
-s_s,tF = _s,t_s F,
(s ≤ t),
for all smooth functions F, where _t acts on a smooth function F by
_tF = 1/2Δ_Ċ_t F - (∇ V_t, ∇ F)_Ċ_t.
The measures ν_t evolve dual to (_s,t) in the sense that
_ν_t_s,t F = _ν_s F (s ≤ t),
-t_ν_tF = _ν_t_t F.
By assumption, V_0 is bounded below.
The weak convergence of the Gaussian measure _C_t-C_s to the Dirac measure at 0 when t↓ s thus implies _t,t=𝕀.
The semi-group property, i.e. _r,t_s,r = _s,t for any s≤ r≤ t,
then follows from (<ref>).
The definition (<ref>) also implies continuity since _s,tF_∞≤F_∞ for each bounded F. Equation (<ref>)
also implies that _s,tF≥ 0 if F≥ 0.
To verify that the generator _t of the Polchinski semigroup is given by (<ref>),
set for s<t:
F_s,t(φ) = _s,tF(φ) = e^V_t(φ)_C_t-C_s[e^-V_s(φ+ζ) F(φ+ζ)].
Computing the time derivatives using Propositions <ref> and <ref>, this leads to
t F_s,t = (t V_t) F_s,t + e^V_t1/2Δ_Ċ_t_C_t-C_s[e^-V_s(·+ζ) F(·+ζ)]
= (t V_t) F_s,t + e^V_t1/2Δ_Ċ_t (e^-V_t F_s,t)
= (t V_t) F_s,t - (1/2Δ_Ċ_t V_t) F_s,t + 1/2 (∇ V_t)_Ċ_t^2 F_s,t + 1/2Δ_Ċ_t F_s,t - (∇ V_t, ∇ F_s,t)_Ċ_t = 1/2Δ_Ċ_t F_s,t - (∇ V_t, ∇ F_s,t)_Ċ_t = _t F_s,t
,
which is the first equality in (<ref>).
The second equality in (<ref>) follows analogously.
The first equality in (<ref>) holds as in Proposition <ref>.
The second identity follows by taking derivatives in s in the first identity and then using (<ref>) so that
s_ν_s F
=
s_ν_t_s,t F = _ν_ts_s,t F = - _ν_t_s,t_s F .
For t=s then _s,s_s F = _sF and the second identity in (<ref>) is recovered.
The operator _t in (<ref>) is obtained by linearising the Polchinski equation (<ref>) and has a structure similar to the generator Δ^H
defined in (<ref>).
§.§ Log-Sobolev inequality via a multiscale Bakry–Émery method
In this section, the Polchinski renormalisation is used to derive a log-Sobolev inequality under a criterion on the renormalised potentials, which can be interpreted as a multiscale
condition generalising the strict convexity of the Hamiltonian in the Bakry–Émery criterion (Theorem <ref>).
We impose the following technical continuity assumption analogous to (<ref>):
for all bounded smooth functions F: X → and g:→,
lim_t →∞_ν_tg(_0,tF) = g ( _ν_0F).
This can be easily checked in all examples of practical interest.
Consider a measure ν_0 of the form (<ref>) associated with a covariance decomposition Ċ_t differentiable for all t (see Section <ref>), and assume also (<ref>).
Suppose there are real numbers λ̇_t (allowed to be negative) such that
∀φ∈ X, t> 0: Ċ_t V_t(φ) Ċ_t - 1/2C̈_t ≥λ̇_t Ċ_t,
and define
λ_t = ∫_0^t λ̇_s ds,
1/γ = ∫_0^∞ e^-2λ_t dt .
Then ν_0 satisfies the log-Sobolev inequality
_ν_0 F ≤2/γ_ν_0(∇√(F))^2_Ċ_0.
Contrary to the Bakry–Émery criterion (Theorem <ref>), the initial potential V_0 is not required to be convex. The relevant parameter is an integrated estimate (<ref>) on the Hessian of the renormalised potentials V_t. Thus if one can prove that the renormalisation flow improves the non-convexity of the original potential so that the integral in (<ref>) is finite, then the log-Sobolev inequality holds.
In the case of convex potential V_0, the convexity is preserved by the Polchinski equation (see Proposition <ref>) and the Bakry–Émery criterion can be recovered. In general, the analysis of the renormalised potential V_t
is model dependent.
The covariances Ċ_t play the role of an inverse metric on X. In our examples of
interest, this metric becomes increasingly coarse approximately implementing the “block spin renormalisation picture”.
See Section <ref> for further discussion of this.
With the same proof, the log-Sobolev inequality (<ref>)
can be generalised to one for each of the renormalised measures ν_s:
_ν_s(F)
≤2/γ_s_ν_s (∇√(F))^2_Ċ_s,
1/γ_s = ∫_s^∞ e^-2(λ_u-λ_s) du.
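For instance, in the purely Gaussian case V_0=0 with the decomposition Ċ_t = e^-tA, one has V_t=0 for all t and the assumption reduces to -1/2C̈_t = 1/2A e^-tA≥λ̇_t e^-tA. One can therefore take λ̇_t = 1/2λ_min(A), where λ_min(A) denotes the smallest eigenvalue of A, so that 1/γ = ∫_0^∞ e^-λ_min(A) t dt = 1/λ_min(A). This recovers the Gaussian log-Sobolev inequality for _A^-1 with Dirichlet form (∇√(F))^2_Ċ_0 = (∇√(F))^2, which is sharp when A is a multiple of the identity.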
The condition (<ref>)–(<ref>) is invariant under reparametrisation in t.
For example, if a: [0,+∞] → [0,+∞] is a smooth reparametrisation,
set
C^a_t = C_a(t), V^a_t = V_a(t).
Then Ċ^a_t = ȧ(t) Ċ_a(t) and C̈^a_t = ä(t) Ċ_a(t) + ȧ(t)^2 C̈_a(t) and therefore
(<ref>) is equivalent to
Ċ_t^a V_t^a Ċ_t^a - 1/2C̈_t^a = ȧ(t)^2( Ċ_a(t) V_a(t)Ċ_a(t) - 1/2C̈_a(t)) - 1/2ä(t) Ċ_a(t)≥λ̇^a_t Ċ_t^a
with
λ̇^a_t
= ȧ(t)λ̇_a(t) - 1/2ä(t)/ȧ(t)
= ȧ(t)λ̇_a(t) - 1/2tlogȧ(t)
.
Thus (<ref>) becomes
λ^a_t = λ_a(t) = ∫_0^t λ̇^a_s ds
= ∫_0^t ȧ(s) λ̇_a(s) ds - 1/2logȧ(t)/ȧ(0),
and hence
Ċ_0^a ∫_0^∞ e^-2λ^a_t dt
= Ċ_0^a /ȧ(0)∫_0^∞ e^-2∫_0^t λ̇_a(s)ȧ(s) ds ȧ(t) dt
= Ċ_0 ∫_0^∞ e^-2λ_a(t)ȧ(t) dt = Ċ_0 ∫_0^∞ e^-2λ_u du.
Analogously, one can parametrise by [0,T] instead of [0,+∞],
i.e., use a covariance decomposition C=∫_0^T Ċ_t dt, and then
obtain the same conclusion with T instead of ∞ in the estimates.
For a covariance decomposition such that Ċ_t is not differentiable for all t, an alternative criterion that does not involve C̈_t can be formulated, see <cit.>.
The proof follows the strategy of the Bakry–Émery theorem (Theorem <ref>), replacing the Langevin dynamics by the Polchinski flow.
We consider a curve of probability measures (ν_t)_t≥ 0
and a corresponding dual time-dependent Markov semigroup (_s,t)
with generators (_t) as in Proposition <ref>.
For F: X→ a function with values in a compact subset I of (0,∞), we write
F_t = _0,t F ∈ I. Since the function Φ is smooth on I, it
can be extended to a bounded smooth function on and
we deduce from (<ref>) that
lim_t →∞_ν_tΦ (_0,tF) = Φ( _ν_0F).
Thus
_ν_0 (F)= _ν_0 [Φ(F)]- Φ(_ν_0 [F])
= -∫_0^∞ dt t_ν_t [Φ(F_t)].
It remains to prove the counterpart of the de Bruin Formula (<ref>).
Denoting Ḟ_t=t F_t,
using first (<ref>) and then (<ref>),
it follows that
- t_ν_t [Φ (F_t)]
=
_ν_t_t (Φ (F_t))
-
Φ '(F_t)Ḟ_t
=
_ν_tΦ' (F_t)_t F_t
+
Φ”(F_t) 1/2 (∇ F_t)_Ċ_t^2
-
Φ'( F_t) Ḟ_t
=
1/2_ν_tΦ”(F_t) (∇ F_t)_Ċ_t^2
=
2
_ν_t
(∇√(F_t))_Ċ_t^2
.
Integrating this relation using (<ref>) gives
_ν_0(F)
= 2∫_0^∞_ν_t(∇√(_0,t F))_Ċ_t^2 dt
.
The above entropy production formula (<ref>) is analogous to the de Bruin identity (<ref>)
and the entropy decomposition to (<ref>), but an important difference is that
the reference measure ν_t here changes as well.
In Section <ref>, we used that _ν[Δ^H F] =0 for any F in the derivation of the de Bruin identity,
but more conceptually what we used is that
the measure ν satisfies
-t_ν[·] = _ν[Δ^H (·)],
since both sides are 0 (because the stationary measure ν does not depend on t).
In the computation above, both ν_t and F_t vary with t,
but in a dual way,
and the analogue of (<ref>) is (<ref>).
It remains to derive the counterpart of (<ref>)
and show that
∀ t ≥ 0, (∇√(_0,tF))^2_Ċ_t≤ e^-2λ_t_0,t[ (∇√(F))^2_Ċ_0].
Plugging this relation in (<ref>) and recalling that
_ν_t_0,t[ (∇√(F))^2_Ċ_0] =
_ν_0 (∇√(F))^2_Ċ_0,
the log-Sobolev inequality (<ref>) is recovered.
We turn now to the proof of (<ref>).
The following lemma is essentially the Bakry–Émery argument adapted to the Polchinski flow.
Let _t, _0,t, Ċ_t, V_t be as in Section <ref>.
Then the following identity holds
for any t-independent positive definite matrix Q:
(_t-∂_t)(∇√(_0,tF))^2_Q
= 2(∇√(_0,tF), V_t Ċ_t ∇√(_0,tF))_Q
+ 1/4 (_0,tF) |Ċ_t^1/2 (log_0,tF) Q^1/2|_2^2
,
where |M|_2^2 = ∑_p,q|M_pq|^2 denotes the squared Frobenius norm of a matrix M=(M_pq).
The derivation of the lemma is postponed. Applying it with Q=Ċ_t implies
(_s-∂_s)(∇√(_0,sF))^2_Ċ_s
= 2(∇√(_0,sF), V_s Ċ_s ∇√(_0,sF))_Ċ_s
- (∇√(_0,sF))^2_C̈_s
+ 1/4 (_0,sF) |Ċ_s^1/2 (log_0,sF) Ċ_s^1/2|_2^2
.
By the assumption (<ref>) and since the last term is positive, it follows that
(_s-∂_s)(∇√(_0,sF))^2_Ċ_s≥ 2λ̇_s (∇√(_0,sF))^2_Ċ_s
.
Equivalently,
ψ(s) := e^-2λ_t+2λ_s_s,t[ (∇√(_0,sF))^2_Ċ_s] satisfies ψ'(s) ≤ 0 for s<t.
This implies ψ(t) ≤ψ(0) so that (<ref>) holds.
At first sight, the proof may seem mysterious, but the idea is simply to iterate the entropy decomposition
(<ref>) by using the Polchinski flow to decompose the measure into its scales.
To illustrate this, let us consider a discrete decomposition of the entropy using the Polchinski flow.
Given δ>0 and the sequence (t_i = i δ)_i ≥ 0, one has
_ν_0 (F) = _ν_0 [Φ(F)]- Φ(_ν_0 [F])
= ∑_i _ν_t_i [Φ( _0,t_i (F))] - _ν_t_i+1 [Φ( _0,t_i+1 (F))]
= ∑_i _ν_t_i+1 [ _t_i,t_i+1Φ( _0,t_i (F)) - Φ( _0,t_i+1 (F))]
= ∑_i _ν_t_i+1 [ __t_i,t_i+1 ( _0,t_i (F)) ] .
The measure _t_i,t_i+1 associated with a small increment satisfies a log-Sobolev inequality
as the associated Gaussian covariance C_t_i+1 - C_t_i is tiny for δ small (so that the Hamiltonian corresponding to the measure _t_i,t_i+1 is extremely convex).
This suggests that for each interval [t_i,t_i+1], one can reduce to estimating
_ν_t_i+1 [ ( ∇√(_0,t_i (F)))^2_δĊ_t_i ] and the delicate issue is then to interchange ∇ and _0,t_i (note that a similar step already occurred even in the product case (<ref>)).
Such a discrete decomposition was implemented in <cit.> to derive a spectral gap for certain models.
The proof of Theorem <ref> relies on taking the limit δ→ 0, which greatly simplifies the argument as the analytic structure of the Polchinski flow kicks in.
For a more detailed proof, see <cit.>.
One can first verify the so-called `Bochner formula':
(_t-∂_t)(∇_0,tF)^2_Q
= 2(∇_0,tF, V_t Ċ_t ∇_0,tF)_Q
+ |Ċ_t^1/2_0,tF Q^1/2|_2^2.
The claim (<ref>) then follows: writing F instead of _0,tF for short,
dropping other t-subscripts,
(_t-∂_t)(∇√(F))^2_Q
= (_t-∂_t) (∇ F)^2_Q/4F
- (∇ F)^2_Q (_t-∂_t) F/4F^2
- (∇ (∇ F)^2_Q, ∇ F)_Ċ/4F^2
+ (∇ F)^2_Q (∇ F)_Ċ^2/4F^3.
Using (_t-∂_t)F=0 and (<ref>) the right-hand side equals that in (<ref>) since
F |Ċ^1/2log F Q^1/2|_2^2
=
|Ċ^1/2 F Q^1/2|_2^2/F
- (∇ (∇ F)^2_Q, ∇ F)_Ċ/F^2
+ (∇ F)^2_Q (∇ F)_Ċ^2/F^3.
To see this, observe that the left-hand side is (with summation convention)
F Ċ_ij Q_kl (log F)_ik(log F)_jl
=
F Ċ_ij Q_kl (F_ik/F - F_iF_k/F^2)(F_jl/F - F_jF_l/F^2),
and the right-hand side is
Ċ_ij Q_klF_ik F_jl/F - (F_kF_l)_iF_j/F^2 + F_iF_jF_kF_l/F^3.
So both are indeed equal.
§.§ Derivatives of the renormalised potential
Checking the multiscale assumption in Theorem <ref> boils down to controlling the Hessian of the renormalised potential V_t. For a well chosen covariance decomposition, the structure of the potential V_t is often expected to improve along the flow of the Polchinski equation (<ref>). In particular, one may hope that V_t becomes more convex. This is illustrated in Example <ref> below, which considers the case of a single variable.
However, for a given microscopic model the convexification can be extremely difficult to check.
Some examples where it is possible are discussed in Section <ref>.
Even though the analysis of the derivatives of V_t is model dependent, we state a few
general identities for these derivatives which will be used later.
Let U_t=∇ V_t and H_t= V_t. Then
∂_t U_t = _tU_t, ∂_t H_t = _tH_t - H_t Ċ_t H_t.
Moreover, for all f ∈ X and t≥ s ≥ 0,
(f,∇ V_t) = _s,t(f,∇ V_s),
(f, V_tf) = _s,t (f, V_sf)
- [ _s,t((f, ∇ V_s)^2) -(_s,t(f, ∇ V_s))^2 ].
(<ref>) follows by differentiating (<ref>).
To recover (<ref>), we recall from (<ref>) that
V_t (φ) = - log_C_t - C_se^-V_s(φ+ζ).
Identity (<ref>) follows by differentiating and then identifying _s,t by (<ref>)
∇ V_t (φ) = _C_t - C_se^-V_s(φ+ζ)∇ V_s(φ+ζ)/_C_t - C_se^-V_s(φ+ζ)
=
_s,t(∇ V_s) (φ).
Identity (<ref>) can then be obtained by taking an additional derivative in the previous expression.
Alternatively, one can rewrite the derivatives of the renormalised potential in terms of the fluctuation measure
μ_t^φ introduced in (<ref>).
The first derivative of the renormalised potential is related to an expectation
∇ V_t(φ)
= _μ_t^φ[∇ V_0(ζ)] =_μ_t^φ[C_t^-1(φ-ζ)].
The second derivative is encoded by a variance under the fluctuation measure
∀ f ∈ X: (f, V_t(φ)f)
= _μ_t^φ[(f, V_0(ζ) f)] - _μ_t^φ( ( f,∇ V_0(ζ)) )
= (f,C_t^-1f) - _μ_t^φ( (C_t^-1f,ζ) ),
where the second equalities hold if C_t is invertible.
The first part of (<ref>) follows from the identity ∇ V_t = _0,t (∇ V_0) obtained in (<ref>) and the identification of the fluctuation measure μ_t^φ with the semigroup _0,t in (<ref>). The second equality is obtained by an integration by parts using the form
(<ref>) of the fluctuation measure.
In the same way the first equality in (<ref>) is deduced from (<ref>) by identifying _0,t and μ_t^φ. The second equality follows by
differentiating φ↦_μ_t^φ[C_t^-1(φ-ζ)] and using (<ref>).
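For instance, in a single variable with V_0(ζ)=qζ^2/2, q>0, and C_t=c_t, the fluctuation measure μ_t^φ is Gaussian with mean φ/(1+qc_t) and variance c_t/(1+qc_t). The formulas above then give ∂_φ V_t(φ) = qφ/(1+qc_t) and ∂_φ^2 V_t(φ) = q - q^2 c_t/(1+qc_t) = q/(1+qc_t), in agreement with the explicit expression V_t(φ)= qφ^2/(2(1+qc_t)) + 1/2log(1+qc_t) obtained by a direct Gaussian computation.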
We consider the case of convex potentials and show that they remain convex along the Polchinski flow.
Assume that V_0 is convex. Then V_t is convex for all t ≥ 0.
Thus the standard Bakry–Émery criterion can be recovered from Theorem <ref>:
if H ≥λ𝕀 one can choose A=λ𝕀 and V_0 ≥ 0
and Theorem <ref> guarantees the log-Sobolev inequality with constant
1/γ≤∫_0^∞ e^-λ t dt = 1/λ.
This follows from criterion (<ref>) applied with Ċ_t = e^- λ t𝕀 so that
C̈_t = - λ e^- λ t𝕀 and λ̇_t = λ/2. Note that the decomposition
Ċ_t=𝕀 on [0,T] with T=1/λ (see Remark <ref>) could have been used instead.
If V_0 is convex, then e^-V_t(φ) is the marginal of the measure ∝ e^-V_0(φ+ζ) _C_t(dζ), with density log-concave in (ζ,φ).
A theorem of Prékopa then implies that V_t is convex.
It is also possible to directly compute the Hessian:
the Brascamp–Lieb inequality <cit.> states that if a probability measure ∝ e^-H has strictly convex potential H then
(F) ≤[(∇ F, ( H)^-1∇ F)].
Thus by the first identity in (<ref>) and then applying the Brascamp–Lieb inequality
to estimate the variance, we get
V_t(φ)
= _μ_t^φ[ V_0(ζ)] - _μ_t^φ(∇ V_0(ζ))
≥_μ_t^φ[ V_0(ζ) - V_0(ζ) (C_t^-1+ V_0(ζ))^-1 V_0(ζ) ]
= _μ_t^φ[ V_0(ζ)(C_t^-1+ V_0(ζ))^-1C_t^-1 ]
= _μ_t^φ[ C_t^-1/2C_t^1/2 V_0(ζ)C_t^1/2(𝕀+ C_t^1/2 V_0(ζ)C_t^1/2)^-1C_t^-1/2 ].
Therefore, with Ĥ_t = C_t^1/2 V_0 C_t^1/2≥ 0,
C_t^1/2 V_t(φ) C_t^1/2≥_μ_t^φ[ Ĥ_t(ζ)(𝕀+ Ĥ_t(ζ))^-1 ]≥ 0.
This alternative approach puts the emphasis on the PDE structure associated with the renormalised potential by an application of the maximum principle.
We give the gist of the proof and refer to <cit.> for a complete argument.
Let H_t = V_t with H_0 > 0, and recall (<ref>):
H_tt = _t H_t- H_tĊ_t H_t.
Now assume there is a first time t_0>0 and φ_0∈ X such that H_t_0(φ_0) has a 0 eigenvalue with eigenvector v_0, i.e.,
H_t_0(φ_0)v_0 = 0. Define f_t(φ) = (v_0,H_t(φ)v_0).
Therefore
f_t_0(φ_0)t = _t_0 f_t_0(φ_0) -(v_0,H_t_0(φ_0)Ċ_t_0H_t_0(φ_0)v_0)
≥ 0,
where we used that f_t_0(φ) is minimum at φ_0 so that
_t_0 f_t_0 (φ) = 1/2Δ_Ċ_t_0 f_t_0 (φ) ≥ 0 (by the maximum principle) and that by construction
(v_0,H_t_0(φ_0)Ċ_t_0H_t_0(φ_0)v_0) =0.
This shows that f_t(φ_0) cannot cross 0 after t_0.
A more careful argument involves regularisation, see <cit.>.
We end this section with a rescaling property of the Polchinski equation.
Similarly to (<ref>), write the renormalised potential as
V_t(φ) = 1/2 (φ, C_t^-1φ) + F_t(C_t^-1φ)
,
where F_t(h) = -log_C_t[e^-V_0(ζ)+(h,ζ)] = V_t(0)-log_μ_t^0[e^(h,ζ)] is the normalised log partition function of the fluctuation measure μ_t^0 at external field h.
Then the Polchinski equation for V is equivalent to a different Polchinski equation for F:
t F_t
= 1/2Δ_Σ̇_t F_t - 1/2 (∇ F_t)_Σ̇_t^2 + 1/2(C_t^-1Ċ_t),
where Σ̇_t = C_t^-1Ċ_t C_t^-1.
Note that (C_t^-1Ċ_t) is only a constant.
Indeed,
F_t(h) = V_t(C_th)-1/2 (h,C_th) and thus
∇ F_t = C_t ∇ V_t-C_t h, Δ_Σ̇_t F_t = Δ_Ċ_t V_t - (C_t^-1Ċ_t),
and
t F_t = 1/2Δ_Ċ_tV_t - 1/2 (∇ V_t)_Ċ_t^2 +(∇ V_t, Ċ_t h) -1/2 (h, Ċ_t h)
=
1/2Δ_Ċ_tV_t -1/2 (h-∇ V_t)_Ċ_t^2
= 1/2Δ_Σ̇_tF_t -1/2 (∇ F_t)_Σ̇_t^2 + 1/2(C_t^-1Ċ_t)
.
§.§ Example: Convexification along the Polchinski flow for one variable
The aim of this section is to illustrate, using a simple one-variable example, the claim that the renormalised measure becomes progressively simpler and more log-concave along the Polchinski flow.
Let H:→ be a C^2 potential that is strictly convex outside of a segment: inf_|x|≥ M H”(x)≥ c>1 for some c,M>0,
but assume that inf_ H”<0,
and consider the measure:
ν_0(dφ)
∝ e^-H(φ) dφ∝exp[-φ^2/2 - V_0(φ)] dφ
,
V_0(φ):= H(φ) - φ^2/2
.
In (<ref>), the Gaussian part 1/2φ^2 is singled out to define the Polchinski flow, but this is just a convention up to redefining V_0.
By assumption, ν_0 is not log-concave, and
the Bakry–Émery criterion (Theorem <ref>) does not apply.
Let us stress that there are many ways to obtain a log-Sobolev inequality for the above measure.
Our goal, however, is to illustrate how, using the Polchinski flow, one can still use a convexity-based argument,
namely the multiscale Bakry–Émery criterion of Theorem <ref>,
by relying on the renormalised measures ν_t, which are more log-concave than ν_0.
The Polchinski flow is defined in terms of a covariance decomposition,
which is supposed to decompose the Gaussian part of ν_0 into contributions from different scales.
In the statistical mechanics examples discussed in Sections <ref>–<ref>,
the notion of scale was linked with the geometry of the underlying lattice (e.g., small scales corresponding to information pertaining to spins at small lattice distance). In the single variable case,
there is no geometry, thus the Gaussian part does not have any structure.
The only meaningful decomposition,
written here on [0,1] instead of [0,∞) for convenience, is therefore:
C_t
:=
t 𝕀,
Ċ_t
=
𝕀
,
(t∈[0,1])
.
The corresponding renormalised potential reads:
e^-V_t(φ)
=
1/√(2π t)∫_exp[-ζ^2/2t -V_0(ζ+φ)] dζ,
and the renormalised measure ν_t defined in (<ref>) and fluctuation measure μ_t^φ
defined in (<ref>) are respectively given by:
ν_t(dφ)
∝exp[-φ^2/2(1-t) - V_t(φ)] dφ
,
μ^φ_t(dζ)
∝exp[-ζ^2/2t + ζφ/t- V_0(ζ )] dζ
.
Note that in terms of the original Hamiltonian H(ζ) = V_0(ζ) + ζ^2/2, the fluctuation measure μ_t^φ is more convex than the initial one:
μ^φ_t(dζ)
∝exp[-ζ^2/2( 1/t - 1 ) + ζφ/t - H(ζ )] dζ
.
In other words, e^-V_t is the convolution of e^-V_0 with the heat kernel on at time t,
and the Polchinski equation becomes the following well-known Hamilton–Jacobi–Bellman equation:
∂_t V_t
=
1/2∂^2_φ V_t - 1/2(∂_φ V_t)^2
,
(t∈(0,1))
.
The motivation for the Polchinski decomposition was that one progressively integrates “small scales” to recover a measure ν_t acting on “large scales” that one hopes to be better behaved.
In the present case,
the only notion of scale refers to the size of fluctuations of the field:
∙ Even though V_0 may vary a lot on small values of the field,
the convolution with the heat kernel at time t means V_t is roughly constant on values much smaller than √(t).
Thus small details of V_0 are blurred, and V_t varies more slowly than V_0.
This is the translation to the present case of the general idea that “small scales” (i.e., values below √(t)) have been removed from V_t and the renormalised potential only sees the “large scales” (values above √(t)).
∙ Convolution also improves convexity, in the sense that the renormalised measure ν_t is more log-concave that ν_0.
Since φ↦φ^2/(2(1-t)) becomes increasingly convex as t approaches 1, proving this statement boils down to proving a lower bound on ∂^2_φ V_t uniform in t∈[0,1].
Semi-convexity estimates for solutions of the Polchinski equation (<ref>) are an active subject of research,
connected with optimal transport with entropic regularisation, see, e.g., <cit.> and references therein and Section <ref> below.
Informally, the convexity of V_t is given by that of V_0, plus an “entropic” contribution due to the 1/2tζ^2 term.
In the present simple case, one can directly compute:
∂^2_φ V_t(φ)
=
1/t-1/t^2_μ^φ_t(ζ)
,
(φ∈)
.
This is an instance of the formula of Lemma <ref> valid for a general covariance decomposition.
It is an example of a general feature of the multiscale Bakry-Émery criterion:
the log-Sobolev constant, which is not a priori related to spectral properties of the model,
can be estimated by lower bounds on ∂^2_φ V_t which are related to variance bounds, i.e., spectral information.
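The convexification can also be observed numerically. The following minimal Python sketch (purely illustrative: the double-well H and all numerical parameters are arbitrary choices) computes V_t by discretising the convolution formula above and monitors the curvature 1/(1-t) + ∂^2_φ V_t of the Hamiltonian of ν_t on a grid; this curvature is negative at t=0 but becomes positive as t approaches 1.

import numpy as np

# Purely illustrative sketch of the one-variable Polchinski flow (C_t = t on [0,1]):
# V_t is computed by convolving e^{-V_0} with the heat kernel at time t, and the
# curvature 1/(1-t) + V_t'' of the Hamiltonian of nu_t is monitored on a grid.
phi = np.linspace(-6.0, 6.0, 1201)
dphi = phi[1] - phi[0]
H = 0.25 * phi**4 - 1.5 * phi**2        # a non-convex Hamiltonian with inf H'' = -3
V0 = H - 0.5 * phi**2                   # V_0 = H - phi^2/2

def renormalised_potential(t):
    """V_t(phi) = -log (2 pi t)^{-1/2} int exp(-(x - phi)^2/(2t) - V_0(x)) dx."""
    if t == 0.0:
        return V0.copy()
    diff = phi[:, None] - phi[None, :]  # (phi_i - x_j)
    kernel = np.exp(-diff**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return -np.log((kernel * np.exp(-V0)[None, :]).sum(axis=1) * dphi)

bulk = slice(100, -100)                 # avoid boundary artefacts of np.gradient
for t in [0.0, 0.3, 0.6, 0.9, 0.99]:
    Vt = renormalised_potential(t)
    d2Vt = np.gradient(np.gradient(Vt, dphi), dphi)
    print("t = %.2f:  min V_t'' = %+6.2f,  min 1/(1-t) + V_t'' = %+7.2f"
          % (t, d2Vt[bulk].min(), (1.0 / (1.0 - t) + d2Vt)[bulk].min()))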
Using the Brascamp–Lieb inequality (<ref>) for t<t_0 with t_0^-1 := -inf_ V_0”,
and the fact that μ^φ_t satisfies a spectral gap inequality with constant C uniformly in φ∈ and t>0, we deduce:
∂^2_φ V_t
≥λ̇_t :=
1_[0,t_0/2](t)(-1/(t_0-t))
+ 1_[t_0/2,1](t)(1/t-C/t^2)
,
(t∈[0,1])
.
The uniform lower bound (<ref>) confirms that ν_t gets more log-concave as t approaches 1.
Injecting the bound (<ref>) into the multiscale Bakry–Émery criterion of Theorem <ref> provides a bound
on the log-Sobolev constant γ of ν_0 in terms of the parameter t_0.
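For instance, for t≤ t_0/2 one has λ_t = log((t_0-t)/t_0), so that ∫_0^t_0/2 e^-2λ_t dt = t_0, while for t∈[t_0/2,1] one has λ_t = log(t/t_0)+C/t-2C/t_0, so that e^-2λ_t≤ (t_0/t)^2 e^4C/t_0 and ∫_t_0/2^1 e^-2λ_t dt ≤ 2t_0 e^4C/t_0. This yields the explicit (certainly non-optimal) bound 1/γ≤ t_0(1+2e^4C/t_0).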
Let us reiterate that one could have obtained a bound on the log-Sobolev constant by a standard combination of the usual Bakry–Émery and Holley–Stroock criteria;
here we have just illustrated that the semi-convexity condition of Theorem <ref> remains effective in non-convex cases.
Theorem <ref> becomes especially useful in situations with a large state space,
where the combination of the Bakry–Émery and Holley–Stroock criteria does not yield dimension-independent bounds on the log-Sobolev constant
while effective methods to control the semi-convexity may still exist, see Section <ref>.
§.§ Aside: Geometric perspective on the Polchinski flow
There is a structural resemblance of the renormalisation group flow with geometric flows like the Ricci flow.
The matrices Ċ_t take the role of the inverse of a metric g_t (depending on the flow parameter t).
We sketch
the interpretation of the Ċ_t as a scale-dependent metric on the space of fields,
and the natural extension of the above construction in the presence of a non-flat metric.
Suppose (X,g) is a Riemannian manifold.
The metric and its components in coordinates are denoted
g = (g_ij),
g^ij = (g^-1)_ij,
|g| = | det g|,
the volume form is
m_g(dφ) = √(|g|) dφ,
and the covariant derivative and Laplace-Beltrami operator are (with summation convention)
∇_g = g^ij∂_j,
div_g U = 1/√(|g|)∂_i(√(|g|) U^i),
Δ_g F = 1/√(|g|)∂_i(√(|g|)g^ij∂_j F).
In particular,
_g f = ∇_g∇_g f = g^ik∂_k (g^jl∂_l f),
and
(∇_g F)_g^2 = g(∇_g F, ∇_g F) = g^ij (∂_i F) (∂_j F).
For a t-dependent metric g_t, the volume form changes according to
t dm_g_t
= (tlog√(|g_t|)) dm_g_t
= 1/2(g_t^-1ġ_t) dm_g_t
.
The Ricci curvature tensor associated with the metric g is denoted Ric_g.
The notation Δ_g_t for the Laplace-Beltrami operator
is different from our previous notation for
the covariance-dependent Laplacian Δ_Ċ_t from (<ref>).
Indeed, the notation differs by an inverse in the index: The Gaussian Laplacian Δ_Ċ_t corresponds to a Laplace-Beltrami operator if g_t^-1 = Ċ_t.
Thus the infinitesimal covariance Ċ_t plays the role of the inverse of a metric.
In the above notation, we can reformulate the previous construction as follows: The covariance decomposition is written as
A^-1 = ∫_0^∞ g_t^-1 dt.
Then the Polchinski equation reads
tV_t = 1/2Δ_g_t V_t - 1/2 (∇_g_tV_t)_g_t^2,
and the condition (<ref>) for the log-Sobolev inequality becomes:
_g_t V_t + 1/2ġ_t ≥λ̇_t g_t.
Suppose that A is a Laplace operator on Λ⊂^d and that Ċ_t = e^-tA is its heat kernel.
Thus the metric g_t = e^+tA is the inverse heat kernel. This means that
|f|_g_t≤ 1 ⇔ |e^tAf| ≤ 1,
i.e., f = e^-tA g for some |g|≤ 1 where |·| denotes the standard Euclidean norm.
Therefore the unit ball in the metric g_t corresponds
to elements f that are obtained by smoothing out elements of the standard unit ball by the heat kernel up to time t. In this sense, the
geometry associated with g_t implements an approximate block spin picture (in which block averaging has been replaced by convolution with
a heat kernel).
We now consider the natural extension to non-flat metrics.
The Laplacian in the presence of a potential H and metric g is
Δ_g^H F = e^H div_g (e^-H∇_g F).
The analogue of Lemma <ref> (with Q=Ċ_t) is as follows.
(_t-∂_t)(∇_g √(F))_g^2 = (ġ + 2 _g V_t+ Ric_g) (∇_g √(F), ∇_g √(F)) + 1/4 |_g log F|_g^2.
The Bochner formula with a background metric <cit.> implies:
(_t-∂_t)(∇_g F)_g^2 = (ġ + 2 _g V_t+ Ric_g) (∇_g F, ∇_g F) + |_g F|_g^2.
The Bakry–Émery version (with √(F) instead of F) then follows as
in the proof of Lemma <ref>.
For t∈ [0,T) with T∈ (0,+∞], assume that g_t is a given t-dependent metric
and that _t dm_g_t evolves according to the associated backward heat equation:
t_t dm_g_t = (-1/2Δ_g_t_t) dm_g_t.
The measure _t dm_g_t takes the role of the Gaussian measure with covariance C_∞-C_t.
Assume:
t V_t = 1/2Δ_g_t V_t - 1/2 (∇_g_t V_t)_g_t^2
t F_t = 1/2Δ_g_t F_t - (∇_g_t V_t, ∇_g_t F_t) = _t F_t.
The last equation defines the semigroup _s,tF with generators _t.
The renormalised measure ν_t is defined by
_ν_t[F] ∝∫ F e^-V_t_t dm_g_t.
One can check that
t_ν_t [_0,tF] = 0,
and that the renormalised measure ν_t again evolves in a dual way to _s,t: for t<T,
t_ν_t[F] = -_ν_t [_t F].
The analogue of the continuity assumption (<ref>) is
_ν_t[g(_0,tF)] → g(_ν_0[F]), (t→ T).
Since the evolution of _t is in general not explicit in the non-flat case, in contrast to the situation above,
this is now an assumption that seems difficult to verify.
The same proof as that of Theorem <ref> using
Lemma <ref> instead of Lemma <ref> gives the following condition for the log-Sobolev inequality.
Assume that the continuity assumption (<ref>) holds.
Suppose there are λ̇_t (allowed to be negative) such that
∀φ∈ X, t> 0: _g_t(φ) V_t(φ) + 1/2Ric_g_t(φ) + 1/2ġ_t(φ) ≥λ̇_t g_t(φ),
and define
λ_t = ∫_0^t λ̇_s ds,
1/γ = ∫_0^T e^-2λ_t dt.
Then ν_0 satisfies the log-Sobolev inequality
_ν_0(F)
≤2/γ_ν_0 (∇√(F))^2_g_0.
We currently do not know of any interesting applications of the generalised set-up of Theorem <ref>
over that of Theorem <ref>,
but it would be very interesting to find some.
Some references with related constructions (though different motivation) in the context of the
Ricci flow appeared in <cit.>
and then <cit.>
and more recently <cit.>.
§.§ Aside: Entropic stability estimate
An approach different from the Bakry–Émery method to prove (modified) log-Sobolev inequalities,
using the same Polchinski flow, is the entropic stability estimate which underlies <cit.>
and has its origins in the spectral and entropic independence conditions introduced in <cit.> and <cit.>.
In particular, in <cit.>, this method is applied from the stochastic localisation perspective whose
equivalence with the Polchinski flow is discussed in Section <ref>.
In this section, we rephrase the entropic stability strategy of <cit.> with the notations of the Polchinski flow to explain the connection with the Bakry–Émery method.
Let us first introduce some notation.
For a probability measure μ on X (with all exponential moments)
and h ∈ X write T_hμ for the tilted probability measure:
dT_hμ/dμ(ζ) = e^(h,ζ)/_μ[e^(h,ζ)],
and (μ) for the covariance matrix of μ.
The key estimate is a proof of the entropic stability from a covariance assumption.
Let μ be a probability measure on X, let Σ̇ be a positive semi-definite
matrix, and assume there is α >0 such that
∀ h ∈ X: Σ̇(T_hμ) Σ̇≤αΣ̇.
Then for all nonnegative F with _μ[F]=1,
1/2 (_μ[F ζ]-_μ[ζ])_Σ̇^2
≤α_μ(F).
In <cit.>, this inequality is called α-entropic stability of the measure μ with respect to the function ψ(x,y) = 1/2 (x- y)_Σ̇^2.
The proof of Lemma <ref> is postponed to the end of this section and we first show how
the entropic stability estimate implies a modified log-Sobolev inequality, see <cit.>.
Assume there are α_t>0 such that
∀φ∈ X: Ċ_t V_t (φ)Ċ_t - Ċ_t C_t^-1Ċ_t ≥ -α_t Ċ_t,
or equivalently, with Σ̇_t = C_t^-1 Ċ_t C_t^-1,
∀φ∈ X: Σ̇_t(μ_t^φ) Σ̇_t ≤α_tΣ̇_t.
Then the measure _0,t satisfies the following α_t-entropic stability: for all φ∈ X,
2 (∇√(_0,tF))_Ċ_t^2 (φ)
≤α_t _μ_t^φ(F)
=
α_t _0,tΦ(F) (φ) -Φ( _0,tF (φ) ).
This implies the following entropy contraction: for any s>0,
_ν_0(F) ≤ e^∫_s^∞α_u du _ν_s_μ_s^φ(F).
The condition (<ref>) on α_t is very similar to the multiscale Bakry–Émery condition (<ref>) on λ̇_t, but different. We recall that (<ref>) reads
Ċ_t V_t(φ) Ċ_t - 1/2C̈_t ≥λ̇_t Ċ_t.
The log-Sobolev constant and the very closely related entropy contraction are estimated by the time integrals of α_t and λ̇_t, respectively.
Note that the condition (<ref>) is slightly better behaved for t close to 0 as C_t vanishes as t → 0 and Ċ_t does not.
To see this, consider the simple one-variable case with the function c_t = ∫_0^t ċ_u du ∈ (with ċ_u >0) and assume that V_t(φ) ≥ 0. Then the optimal choices are λ̇_t = - c̈_t/(2ċ_t) and α_t = ċ_t/c_t, so that
∫_s^t λ̇_u du = 1/2(logċ_s - logċ_t)
and ∫_s^t α_u du = log c_t - log c_s.
For this reason, one cannot take the limit s → 0 in (<ref>), and
the _μ_s^φ term in (<ref>) must be treated differently for s small
to recover a log-Sobolev inequality.
This last step is called annealing via a localization scheme in <cit.>.
For example, one can use that the measure μ_s^φ is simpler for s small as the covariance C_s vanishes.
This implies that the Hamiltonian of μ_s^φ
is strictly convex uniformly in φ so that by the standard Bakry–Émery criterion (Theorem <ref>), the measure
satisfies a log-Sobolev inequality which can then be plugged into (<ref>) to complete the log-Sobolev inequality for ν_0.
From (<ref>), recall that the Polchinski semigroup _0,t coincides with the fluctuation measure μ_t^φ. Thus for smooth F>0 and φ∈ X, one has
∇log_0,tF (φ) = ∇log_μ_t^φ [F]
=
_μ_t^φ[ F C_t^-1ζ]/_μ_t^φ [F]
- _μ_t^φ[ C_t^-1ζ]
= C_t^-1( _μ_t^φ,F[ ζ] - _μ_t^φ[ζ] ),
where the measure modified by F is defined by
dμ_t^φ,F/d μ_t^φ (ζ) = F(ζ)/_μ_t^φ[F].
Thus, the estimate (<ref>) we are looking for boils down to proving
an entropic stability result (<ref>) for the measure μ_t^φ.
Indeed, setting Σ̇_t = C_t^-1 Ċ_t C_t^-1, then
2 (∇√(_0,tF))_Ċ_t^2 (φ)
=
1/2 (∇log_0,tF)_Ċ_t^2 (_0,tF) (φ)
= 1/2 (_μ_t^φ,F[ζ]-_μ_t^φ[ζ])_Σ̇_t ^2 (φ) _μ_t^φ[F],
and the relative entropy is given by
_μ_t^φ(F) = (μ_t^φ,F |μ_t^φ) _μ_t^φ[F] .
Assumption (<ref>) implies the assumption (<ref>) on the covariance of μ_t^φ,
Σ̇^1/2_t (T_h μ_t^φ) Σ̇^1/2_t
= Σ̇^1/2_t ( μ_t^φ + C_t h) Σ̇^1/2_t
≤α_t 𝕀 for all h ∈ X,
so that Lemma <ref> gives the claim
(<ref>), i.e.,
2(∇√(_0,tF) (φ) )_Ċ_t^2 ≤α_t _μ_t^φ(F).
Thus it is enough to show that assumption
(<ref>) is equivalent to the covariance assumption (<ref>).
From (<ref>), we know that for any φ∈ X,
V_t (φ) = C_t^-1 - C_t^-1( μ_t^φ) C_t^-1,
so that
-Ċ_t ( V_t (φ) - C_t^-1) Ċ_t
= Ċ_tC_t^-1( μ_t^φ ) C_t^-1Ċ_t
= C_t Σ̇_t ( μ_t^φ ) Σ̇_t C_t
≤α_t C_t Σ̇_t C_t = α_t Ċ_t,
where the inequality holds if and only if Σ̇_t( μ_t^φ )Σ̇_t ≤α_t Σ̇_t.
We turn now to the second part of the claim and show how the entropic stability estimate
(<ref>) implies the entropy contraction
estimate (<ref>).
The starting point is the time derivative of the entropy (<ref>):
t_ν_t_0,tΦ(F)-Φ(_0,tF)
= 2 _ν_t (∇√(_0,t F ))_Ċ_t^2
which is the counterpart of eq. (27) in <cit.>.
This is bounded thanks to (<ref>):
t_ν_t_0,tΦ(F)-Φ(_0,tF)≤α_t_ν_t_0,tΦ(F)-Φ(_0,tF),
and thus for any s <t, Gronwall's lemma implies
_ν_t_0,tΦ(F)-Φ(_0,tF)≤ e^∫_s^tα_u du _ν_s_0,sΦ(F)-Φ(_0,sF).
Taking t→∞, the entropy is recovered from the left-hand side, as in (<ref>),
and therefore (<ref>) holds for any s>0.
We finally prove Lemma <ref> following <cit.>.
It suffices to show that, for any h ∈ X,
1/2 (_T_hμ[ζ]-_μ[ζ])_Σ̇^2
≤α( T_hμ |μ)
=α_μe^(h,ζ)/_μ[e^(h,ζ)]
.
Indeed, for any density F with _μ[F]=1 and _μ[Fζ] =_T_hμ[ζ],
the entropy inequality (<ref>) applied with the test function G: ζ↦ (h,ζ)
implies
_μ(F) = sup_G _μ[FG] - log_μ[e^G]≥_T_hμ[(h,ζ)] - log_μ[e^(h,ζ)]
= ( T_hμ |μ),
i.e., the relative entropy over probability measures
with given mean is minimised by exponential tilts T_hμ.
Moreover, if there is no h such that
_μ[Fζ] =_T_hμ[ζ]
the relative entropy is infinite.
From now on, we may assume that (T_hμ) is strictly positive definite on X for all h ∈ X.
Indeed, otherwise consider the largest linear subspace X' such that (T_hμ) acts and is strictly positive definite on X',
and note that this subspace is independent of h.
Indeed, let f∈^N be such that _μ((f,ζ))=0,
and assume without loss of generality that _μ[(f,ζ)]=0.
Then under the assumption of exponential moments also
_μ[(f,ζ)^4]=0 and:
_T_hμ((f,ζ))
∝1/2_μ⊗μ[(f,ζ-ζ')^2e^(h,ζ+ζ')]
≤1/2_μ⊗μ[(f,ζ-ζ')^4]^1/2_μ⊗μ[e^2(h,ζ+ζ')]^1/2
=0
.
This implies that μ (and thus T_hμ) is supported in an affine subspace of X which is a translation of X',
and by recentering one can replace X by X' in the following.
The relative entropy of T_hμ can be written as:
( T_hμ |μ)
=_T_h μ ( h,ζ) - log(_μ[e^(h,ζ)])
= (h,θ) - log(_μ[e^(h,ζ)]), θ = _T_h μ[ζ]
.
The positive definiteness of (T_hμ) on X implies that X ∋ h ↦log_μ[e^(h,ζ)] is strictly convex, and hence
h ↦θ(h) = _T_hμ[ζ] is strictly increasing in any direction of X. Let K be the image of θ(h),
and for θ∈ K, let h(θ) be the inverse function, and then let
Γ(θ)
= (T_h(θ)μ|μ)
= (θ,h(θ)) - log_μ[e^(ζ,h(θ))].
Thus Γ is the Legendre transform of the cumulant generating function of μ, and
( T_hμ |μ) = Γ(_T_hμ[ζ]).
In particular, h(_μ[ζ]) = 0 and properties of Legendre transform imply that, in directions of X,
∇Γ(θ) = (h↦_T_hμ[ζ])^-1|_h=h(θ)
Γ(θ) = log_μ[e^(h,ζ)] |_h=h(θ)^-1 = (T_h(θ)μ)^-1
so that
∇Γ(_μ[ζ]) =0, Γ(_T_hμ[ζ]) = (T_hμ)^-1.
The assumption Σ̇(T_hμ) Σ̇≤αΣ̇
implies, for θ∈ K,
αΓ(θ) ≥Σ̇.
Since f(θ) = 1/2 (θ - _μ[ζ])_Σ̇^2 satisfies ∇ f(_μ[ζ]) = 0 and f = Σ̇,
therefore:
α( T_hμ |μ) =
αΓ(_T_hμ[ζ]) ≥1/2 (_T_hμ[ζ]-_μ[ζ])_Σ̇^2 for all h ∈^N.
§ PATHWISE POLCHINSKI FLOW AND STOCHASTIC LOCALISATION PERSPECTIVE
§.§ Pathwise realisation of the Polchinski semigroup
From Proposition <ref>, we recall that
the Polchinski semigroup operates from the right:
_s,t = _r,t_s,r, (s≤ r ≤ t).
Thus it acts on probability densities relative to the measure ν_t:
if dμ_0 = F dν_0 is a probability measure then dμ_t = _0,tF dν_t is again a probability measure.
This should be compared with the more standard situation of
a time-independent semigroup _s,t = _t-s that is reversible with respect to a measure ν
such as the original Glauber–Langevin semigroup introduced in (<ref>).
In this case, one has the dual point of view that describes the evolution of an observable:
if dμ_0 = F dν is some initial distribution and dμ_t = (_tF) dν denotes the distribution at time t then,
by reversibility,
_μ_t[G]
=
∫ G (_tF) dν = ∫ (_tG) F dν
= _μ_0[_tG].
The dual semigroup can be realised in terms of a Markov process (φ_t) as
_tG(φ) = _φ_0=φ[G(φ_t)].
Since the Polchinski semigroup is not reversible and time-dependent, this interpretation
does not apply to the Polchinski semigroup.
Instead, the Polchinski semigroup _s,t can be realised in terms of an SDE that starts at time t
and runs time in the negative direction from t to s<t: Given t>0, a standard Brownian motion (B_u)_u≥ 0 and φ_t,
consider the solution to
φ_s =φ_t - ∫_s^t Ċ_u ∇ V_u(φ_u) du + ∫_s^t √(Ċ_u) dB_u,
(s≤ t).
This is the equation for the (stochastic) characteristics of the Polchinski equation,
see Appendix <ref> for the classical analogue of a Hamilton–Jacobi equation without viscosity term.
By reversing time direction, this backward in time SDE becomes a standard SDE.
Indeed, to be concrete, we will interpret (<ref>) as φ_t=φ̃_t-r where φ̃
is the solution to the following standard SDE with φ̃_0=φ_t given and B̃_r=B_t-r:
d φ̃_r = - Ċ_t-r ∇ V_t-r (φ̃_r) dr + √(Ċ_t-r) dB̃_r , (0 ≤ r ≤ t).
Denoting by _φ_t=φ[·] the expectation with respect to the solution (φ_s)_s≤ t to (<ref>)
with φ_t= φ given, the Polchinski semigroup can be represented as follows.
For s≤ t and any bounded F: X →,
_s,tF(φ)= _φ_t=φ[F(φ_s)].
Thus if φ_t is distributed according to the renormalised measure ν_t the backward in time evolution (<ref>) ensures that φ_s is distributed according to ν_s for s<t.
Our interpretation of this proposition is that,
while the renormalised measures ν_t are supported on increasingly smooth configurations as t grows,
the backward evolution restores the small scale fluctuations of ν_0.
To verify Proposition <ref>,
we change time direction so that (<ref>) becomes a standard (forward) SDE as follows.
Indeed, as discussed above, set φ̃_r = φ_t-r and B̃_r=B_t-r.
Then (<ref>) becomes
φ̃_r = φ̃_0 - ∫_t-r^t Ċ_u ∇ V_u(φ̃_t-u) du + ∫_t-r^t √(Ċ_u) dB_u
= φ̃_0 - ∫_0^r Ċ_t-u∇ V_t-u(φ̃_u) du + ∫_0^r √(Ċ_t-u) dB̃_u,
i.e., φ̃ solves the standard SDE (<ref>).
Itô's formula stated for the forward SDE for φ̃ is
df̃(r,φ̃_r)
= f̃r(r,φ̃_r) + _t-rf̃(r,φ̃_r) + (∇f̃(r,φ̃_r), √(Ċ_t-r) dB̃_r)
.
In terms of φ rather than φ̃ we will state this as
df(s,φ_s)
= fs(s,φ_s) - _s f(s,φ_s) + (∇ f(s,φ_s),√(Ċ_s)dB_s),
where the left-hand side is interpreted as follows: with s=t-r,
-d_sf(s,φ_s)
= d_rf(t-r,φ̃_r)
=
-fs(t-r,φ̃_r)
+ _t-r f(t-r,φ̃_r)
+ (∇ f(t-r,φ̃_r), √(Ċ_t-r) dB̃_r)
=
-fs(s,φ_s)
+ _s f(s,φ_s)
- (∇ f(s,φ_s), √(Ċ_s) dB_s).
In particular, if f is smooth and bounded and satisfies (∂_s-_s)f = 0 then
_φ_t=φ[f(s,φ_s)] = f(t,φ).
It is enough to prove the claim for bounded smooth F and then extend it by density.
The claim follows from (<ref>)
with f(t,φ)=_s,tF(φ) which gives
_φ_t=φ[F(φ_s)]
= _φ_t=φ[_s,sF(φ_s)]
= _φ_t=φ[f(s,φ_s)] = f(t,φ) = _s,tF(φ).
This proves (<ref>).
For the last statement,
recall from (<ref>) that
_ν_s F = _ν_t_s,t F = _ν_t F(φ_s).
This characterises the distribution at time s of the process (<ref>).
Finally, we will consider below an analogue of the backward in time SDE (<ref>) started at time t=+∞,
see (<ref>).
Equation (<ref>) can analogously be interpreted by reversing time as follows.
Fix any smooth time-reversing reparametrisation a: [0,+∞] → [0,+∞].
For simplicity, one can choose a(t)=1/t with a(0)=+∞ and a(+∞)=0.
As in Remark <ref>, set
C̃_t = C_a(t),
Ṽ_t = V_a(t),
and also
φ̃_t = φ_a(t),
B̃_t=B_a(t).
Analogously to (<ref>),
the solution (<ref>) can then be interpreted as φ_t=φ̃_a(t) where φ̃
is the solution to the standard SDE:
dφ̃_t = Ċ_a(t)ȧ(t) ∇ V_a(t)(φ̃_t) dt + √(Ċ_a(t)) dB̃_t, φ̃_0=0.
More generally, as in Remark <ref>, the SDEs (<ref>)–(<ref>) are invariant under reparametrisation,
thus [0,+∞] has no special significance and we could have used [0,1] from the beginning instead.
We prefer to consider the backward in time evolution corresponding to φ (rather than the forward SDE for φ̃) to comply with the
convention that the renormalised potential V_t evolves forward in time according to the Polchinski equation.
From the stochastic analysis point of view, on the other hand, this convention of a stochastic process running backwards in time is less standard and
related literature which focuses on the SDE rather than the renormalised potential, as we do,
thus uses the opposite convention (see Sections <ref> and <ref>).
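As an illustration of the pathwise representation above, the following minimal Python sketch simulates the backward SDE in the one-variable setting of the earlier example (C_t = t on [0,1], so Ċ_t = 1 and ν_1 = δ_0) with an Euler scheme and compares the law of φ_0 with ν_0 ∝ e^-H; the double-well H and all numerical parameters are arbitrary illustrative choices.

import numpy as np

# Purely illustrative: simulate phi_s = phi_1 - int_s^1 V_u'(phi_u) du + int_s^1 dB_u
# (one variable, C_t = t on [0,1]) started from phi_1 = 0, i.e. from nu_1 = delta_0,
# and check that phi_0 is approximately distributed according to nu_0 ∝ exp(-H).
rng = np.random.default_rng(0)
x = np.linspace(-6.0, 6.0, 801)              # quadrature grid for the drift and for nu_0
dx = x[1] - x[0]
H = 0.25 * x**4 - 1.5 * x**2
V0 = H - 0.5 * x**2                          # V_0 = H - x^2/2

def grad_V(t, phi):
    """V_t'(phi) = (phi - m_t(phi))/t, with m_t(phi) the mean of the fluctuation
    measure mu_t^phi ∝ exp(-(x - phi)^2/(2t) - V_0(x)); phi is a vector of particles."""
    logw = -(x[None, :] - phi[:, None])**2 / (2.0 * t) - V0[None, :]
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    m = (w * x[None, :]).sum(axis=1) / w.sum(axis=1)
    return (phi - m) / t

n_paths, n_steps = 2000, 200
dt = 1.0 / n_steps
phi = np.zeros(n_paths)                      # phi_1 = 0 under nu_1
for k in range(n_steps):                     # time runs backwards: t = 1 - k*dt
    t = 1.0 - k * dt
    phi += -grad_V(t, phi) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

w0 = np.exp(-H) / np.exp(-H).sum()           # nu_0 by quadrature, for comparison
print("E[phi_0^2]: SDE sample %.2f vs quadrature %.2f"
      % (np.mean(phi**2), np.sum(w0 * x**2)))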
§.§ Example: Log-Sobolev inequality by coupling
Using the representation (<ref>)-(<ref>)
of the semigroup _s,t in terms of the above stochastic process,
one can alternatively prove Theorem <ref> using synchronous coupling by adapting the proof from <cit.>
for the Bakry–Émery theorem.
Given t>0,
define (φ_s)_s≤ t and (φ'_s)_s≤ t as in (<ref>) coupled using the same Brownian motions.
Then for s<t,
e^-2λ_t (φ_t -φ_t')_Ċ_t^-1^2
-
e^-2λ_s (φ_s -φ_s')_Ċ_s^-1^2
= ∫_s^t[ e^-2λ_u(-2λ̇_u (φ_u -φ_u')_Ċ_u^-1^2 - (φ_u-φ_u')_Ċ_u^-1C̈_u Ċ_u^-1^2
+
2(Ċ_u(∇ V_u-∇ V_u'), Ċ_u^-1(φ_u-φ_u') ) ] du
≥ 0,
where the inequality follows from the assumption (<ref>) and the mean value theorem.
Thus
(φ_0 -φ_0')_Ċ_0^-1^2 = e^-2λ_0(φ_0 -φ_0')_Ċ_0^-1^2 ≤ e^-2λ_t (φ_t -φ_t')_Ċ_t^-1^2
with φ_t=φ and φ_t'=φ' (the dynamics runs backwards) and the mean value theorem gives
|_0,tF(φ)-_0,tF(φ')|
= |[(∇ F(ψ_0), φ_0-φ_0')]|
= 2 |[(∇√(F(ψ_0)), √(F(ψ_0)) (φ_0-φ_0') )]|
≤ 2 [ (∇√(F(ψ_0)))_Ċ_0^2 ]^1/2 [F(ψ_0) (φ_0-φ_0')_Ċ_0^-1^2 ]^1/2
≤ 2 e^-λ_t[ (∇√(F(ψ_0)))_Ċ_0^2 ]^1/2 [F(ψ_0) ]^1/2 √((φ-φ')_Ċ_t^-1^2)
for some (random) ψ_0 on the segment between φ_0 and φ_0'.
Taking φ-φ' = √(Ċ_t)f with |f|_2 → 0 gives
(∇√(_0,tF ))_Ċ_t^2
≤ e^-2λ_t_0,t(∇√(F))_Ċ_0^2.
This is (<ref>).
§.§ Example: Coupling with the Gaussian reference measure
Since, by (<ref>),
_ν_t[F] = _t,∞F(0),
one can obtain the following coupling of the field distributed under the measure ν_t with that of the associated driving Gaussian field
from the stochastic realisation of _t,∞.
The distribution of ν_t is realised by the solution to the SDE
(which we recall can be interpreted as discussed around (<ref>)):
φ_t
= - ∫_t^∞Ċ_u ∇ V_u(φ_u) du + ∫_t^∞√(Ċ_u) dB_u
= -∫_t^∞Ċ_u ∇ V_u(φ_u) du + Γ_t
.
In particular, at t=0, this provides a coupling of the full interacting field φ_0 with the Gaussian reference field Γ_0.
As an application of the above coupling, one can relate properties of the Gaussian measure to the interacting one,
see for example <cit.>.
§.§ Effective potential and martingales
The stochastic process (<ref>) can also be used to obtain a representation of the renormalised potential as follows.
These are stochastic interpretations of the formulas in Lemma <ref>.
V_t(φ)
= _φ_t=φV_0(φ_0) + 1/2∫_0^t (∇ V_s(φ_s))^2_Ċ_s ds = _φ_t=φV_0(φ - ∫_0^t Ċ_s ∇ V_s(φ_s) ds + ∫_0^t √(Ċ_s) dB_s) + 1/2∫_0^t (∇ V_s(φ_s))^2_Ċ_s ds
and M_s = V_s(φ_s) + 1/2∫_s^t (∇ V_u(φ_u))^2_Ċ_u du is a martingale (with respect to the backward filtration).
It suffices to show that M_s is a martingale.
By Itô's formula interpreted as in (<ref>),
dM_s
= (V_ss)(φ_s) -_s V_s(φ_s) - 1/2 (∇ V_s)_Ċ_s^2 + martingale
= (V_ss)(φ_s) - 1/2Δ_Ċ_s V_s(φ_s) + 1/2 (∇ V_s)_Ċ_s^2 + martingale.
By Polchinski's equation for V_s, the right-hand side is a martingale.
The gradient and Hessian of the renormalised potential have similar representations.
∇ V_t(φ) = _0,t[∇ V_0](φ) = _φ_t=φ[∇ V_0(φ_0)]
and U_t(φ_t)= ∇ V_t(φ_t) is a martingale (with respect to the backwards filtration).
Again, by Itô's formula (<ref>) and since ∂_t U_t = _t U_t by (<ref>),
dU_t(φ_t)
= U_tt(φ_t) - _t U_t(φ_t) + (∇ U_t, √(Ċ_t) dB_t) = (∇ U_t, √(Ċ_t) dB_t).
Thus U_t is a martingale, and the expression for its expectation also follows.
§.§ Stochastic localisation perspective
The stochastic evolution (<ref>) has so far been interpreted as the characteristics associated with the Polchinski equation (<ref>).
In this section, we are going to see that this stochastic process is also, after a suitable change of parametrisation, the stochastic localisation flow introduced by Eldan. We refer to <cit.> for a survey on this method and its numerous applications in general, and to <cit.> for more specific developments on modified log-Sobolev inequalities.
The relation between stochastic localisation and a semigroup approach was already pointed out in <cit.>.
From Lemma <ref>, we recall that
the gradient and Hessian of the renormalised potential V_t can be interpreted as a mean and covariance of the fluctuation measure
μ_t^φ defined in (<ref>) by
_0,tF(φ) = _μ_t^φ[F].
The measure μ_t^φ is related to μ_t^0 by the exponential tilt e^(C_t^-1φ,ζ), i.e.,
by the external field C_t^-1φ.
In particular, by Lemma <ref>, the gradient of V_t can be written as
∇ V_t(φ) = _μ_t^φ [C_t^-1(φ-ζ)]
= C_t^-1(φ-_μ_t^φ[ζ])
where _μ_t^φ[ζ] ∈ X is the mean of μ_t^φ.
The stochastic representation (<ref>) can therefore be written
in terms of the fluctuation measure instead of the renormalised potential.
Indeed, let
h_t= C_t^-1φ_t, μ_t = μ_t^φ_t = μ_t^C_t h_t, Σ̇_t =-tC_t^-1 = C_t^-1Ċ_t C_t^-1.
Since
Ċ_t ∇ V_t(φ_t) = Ċ_t C_t^-1(φ-_μ_t^φ[ζ])
= C_t Σ̇_t(φ-_μ_t^φ[ζ]),
the external field h_t=C_t^-1φ_t satisfies the following SDE equivalent to (<ref>):
By the Itô formula (<ref>) with f(t,φ_t) = C_t^-1φ_t,
h_t
= - ∫_t^∞ df(u,φ_u)
= ∫_t^∞Σ̇_u φ_u du - ∫_t^∞ C_u^-1Ċ_u ∇ V_u(φ_u) du + ∫_t^∞ C_u^-1Ċ_u^1/2 dB_u
= ∫_t^∞Σ̇_u _μ_u[ζ] du + ∫_t^∞ C_u^-1Ċ_u^1/2 dB_u
= ∫_t^∞Σ̇_u _μ_u[ζ] du + ∫_t^∞Σ̇_u^1/2 dB_u,
where the last equality holds in distribution in the case that Ċ_u and C_u^-1 do not commute.
What is known as stochastic localisation is the process (h_t) with the direction of time reversed.
Thus in the stochastic localisation perspective, the renormalised potential and measure only play implicit roles, and the main object
of study is the
stochastic process (<ref>) and the fluctuation measure (<ref>).
For this perspective, it is more convenient to assume that time is parameterised by [0,T] (rather than our previous standard choice [0,+∞]
— but again everything is reparametrisation invariant, so this is only for notational purposes).
The fluctuation measure μ_t = μ_t^φ_t then “starts” at the final time t=T as the full measure
of interest, and as t decreases (time runs backwards) its fluctuations get absorbed into the renormalised measure ν_t
until the fluctuation measure μ_t “localises” to a random Dirac measure μ_0 = δ_φ_0 at time t=0,
with φ_0 distributed according to the full measure ν_0=μ_T. See also Figure <ref>.
Although time runs backwards from T to 0 in the stochastic localisation perspective written with our time convention,
let us change time direction to obtain a forward SDE and connect with the literature on stochastic localisation.
Recalling (<ref>), the initial measure ν_0 = μ_T coincides with the fluctuation measure
at time T as h_T =0.
As done previously, we will always use tildes to denote change of time:
φ̃_t = φ_T-t,
C̃_t = C_T-t,
Ṽ_t = V_T-t,
μ̃_t = μ_T-t,
h̃_t = h_T-t,
Σ̃̇̃_t = Σ̇_T-t.
Using the notation b(μ) = _μ[ζ] for the mean of μ,
the SDE (<ref>) for h̃ can then be written as:
dh̃_t
= Σ̃̇̃_t b(μ̃_t) dt + Σ̃̇̃_t^1/2 dB̃_t.
This equation is the same as the stochastic localisation as it appears for example in <cit.>
(after dropping tildes from the notation and with y_t there corresponding to h̃_t).
The stochastic localisation perspective is different from our renormalisation group perspective
in that the object of interest is (again) the fluctuation measure.
For example, in the one-variable case |Λ|=1,
starting from a measure μ̃_0 (dx)
= ν_0(dx) ∝ e^-H(x) (possibly log-concave), the strategy is to make it more convex by considering
μ̃_t (d ζ) ∝ e^-H( ζ ) - t/2ζ^2 + h̃_t ζ dζ
with the choice of the process h̃_t such that for any test function
∀ t ≥ 0, _ν_0 [F] = [ _μ̃_t ( F) ].
In this one variable example, the fluctuation measure above is the counterpart of (<ref>)
for the choice C_t = 1/(1+t) with t decreasing from +∞ to 0 instead of C_t = t with t ∈ [0,1].
With this reparametrisation, one gets from (<ref>) that Σ̇_t =1
so that
t ≥ 0,
d h̃_t = b(μ̃_t) dt + d B̃_t,
with h̃_0 =0.
Starting from a general measure μ̃_0, the primary concern in the stochastic localisation perspective is the measure μ̃_t, which is now uniformly log-concave with the Hessian of its potential at least t (if, say, H is convex); thus general concentration inequalities hold for the twisted measure and can be transferred to μ̃_0 thanks to (<ref>). For example, this is a key tool in current progress on the KLS conjecture, see <cit.> for a review. The larger t is, the better in this respect.
However, as t grows the twisted measure μ̃_t (dζ) loses the features of the original μ̃_0 so there is a trade-off in the choice of t.
Contrary to our renormalisation point of view, in the stochastic localisation point of view, the distribution of h_t = C_t^-1φ_t (which is given in terms of μ̃_t in (<ref>) but can also be written
in terms of our renormalised measure) does not play an important role (see Figure <ref>).
The process h̃_t is there to twist the measure and, if one chooses Ċ_t appropriately, there can be preferred directions in which to add convexity.
§ VARIATIONAL AND TRANSPORT PERSPECTIVES ON THE POLCHINSKI FLOW
In this section, we discuss transport-related perspectives on the Polchinski flow.
We refer to <cit.> for additional perspectives such as an interpretation in terms of the Otto calculus that we do not discuss here.
§.§ Föllmer's problem
By (<ref>), the distribution ν_0 can be realised as the final time distribution φ_0 of the SDE:
φ_t
= - ∫_t^∞Ċ_u ∇ V_u(φ_u) du + ∫_t^∞√(Ċ_u) dB_u,
where we recall that the backwards SDE can be interpreted as in (<ref>).
One can ask whether the distribution ν_0 can be obtained more efficiently if ∇ V_u(φ_u)
is replaced by another drift U_u(φ_u), i.e., as the distribution of φ_0^U when
φ^U is a strong solution of the SDE (again written backward in time):
φ_t^U
= - ∫_t^∞Ċ_u U_u(φ_u^U) du + ∫_t^∞√(Ċ_u) dB_u,
where the parameter t takes values in [0,+∞] and φ_∞^U = 0.
As pointed out in Remark <ref>, one could have also considered a parametrisation on a bounded time interval.
Denote by γ_0 = _C_∞ the distribution of the Gaussian reference measure, i.e., of ∫_0^∞√(Ċ_u) dB_u.
The gradient of the renormalised potential V_t of the Polchinski flow (<ref>) can be interpreted as the optimal drift in (<ref>) in the following sense:
( ν_0 | γ_0)
= 1/2∫_0^∞ | ∇ V_t ( φ_t) |_Ċ_t^2 dt ≤ 1/2∫_0^∞ | U_t( φ_t^U) |_Ċ_t^2 dt ,
for any drift U such that (<ref>) has a strong solution with φ_0∼ν_0.
Recall that (φ_t) follows (<ref>).
Let U be a such that there is a strong solution (φ_t^U) of (<ref>)
with [ ∫_0^∞ |U_t( φ_t^U)|_Ċ_t^2 dt ] < ∞.
By construction φ_0^U has law ν_0 so that the relative entropy is given by
( ν_0 | γ_0)
= V_∞ (0) - [ V_0 ( φ_0^U) ]
= ∫_0^∞ dt ∂/∂ t [ V_t ( φ_t^U ) ],
with φ^U evolving according to (<ref>), and
where we used that
ν_0(dφ) = e^+V_∞(0) e^-V_0(φ)γ_0(dφ)
with normalisation factor given by
e^- V_∞ (0) = _C_∞e^- V_0(ζ) as in (<ref>).
The renormalised potential follows the Polchinski equation (<ref>):
t V_t = 1/2Δ_Ċ_t V_t - 1/2 (∇ V_t)_Ċ_t^2,
(t ∈ (0,∞)).
Therefore, by Itô's formula,
∂/∂ t [ V_t ( φ_t^U) ]
= t V_t ( φ_t^U) + ( ∇ V_t ( φ_t^U) , U_t ( φ_t^U) )_Ċ_t - 1/2Δ_Ċ_t V_t ( φ_t^U )
= - 1/2 (∇ V_t( φ_t^U) )_Ċ_t^2 + ( ∇ V_t ( φ_t^U) , U_t ( φ_t^U ) )_Ċ_t
= - 1/2 ( ∇ V_t( φ_t^U) - U_t ( φ_t^U) )^2_Ċ_t + 1/2 (U_t ( φ_t^U) )_Ċ_t^2 ,
where we used the Polchinski equation (<ref>) on the second line. Thus
1/2∫_0^∞ (U_t( φ_t^U))_Ċ_t^2 dt = ( ν_0|γ_0)
+ 1/2∫_0^∞( ∇ V_t - U_t )^2_Ċ_t dt ,
and the gradient of the renormalised potential V_t provides the optimal drift.
This completes the proof of Theorem <ref>.
It turns out that the right-hand side of (<ref>) is, in fact, the relative entropy ( Q| P)
of the path measure Q associated with (<ref>) with respect to that of the
Gaussian reference process P.
The relative entropy of the path measure Q associated with a strong solution of (<ref>)
with respect to the path measure P of the Gaussian reference measure is given by
( Q | P)
= 1/2 ∫_0^∞ | U_t (φ_t^U) |_Ċ_t^2 dt
.
This is essentially a consequence of Girsanov's theorem, see <cit.> for details.
Since ( Q | P) ≥(ν_0|γ_0) always holds,
by the entropy decomposition (<ref>) and the fact that ν_0 and γ_0 are the time-zero marginals
of the path measures Q and P respectively, the above shows that
the optimal drift U_t=∇ V_t in fact achieves equality: ( Q | P) = (ν_0|γ_0).
The above question was already studied by Föllmer <cit.>, and we refer to
<cit.> for an exposition of this and connections with Gaussian functional inequalities.
Föllmer's objective was to find the drift b_t such that the process (X_t)_t ∈ [0,1] defined by the SDE
X_0 = 0, d X_t = b_t (X_t) dt + dB_t,
is distributed at time t=1 according to a given target measure ν, i.e. X_1 ∼ν,
and minimises the dynamical cost
1/2∫_0^1 |b_t (X_t) |^2 dt
over all possible drifts b.
Up to time reversal, parametrisation by [0,+∞] instead of [0,1],
and introduction of the covariances Ċ_t, this is exactly the set-up
of (<ref>).
For us the introduction of the covariances Ċ_t is a (conceptually and technically) important point, though, with the interpretation that the integral
is now an integral over scales measured by the infinitesimal covariances Ċ_t which can also be interpreted as metrics as in Section <ref>.
More generally, one can look for the optimal drift to build a target probability measure of the form
F(φ) ν_0 (dφ) using now the optimal stochastic flow
as a reference process, i.e., we want to determine the drift U such that for
t∈ [0,+∞], ψ_t = - ∫_t^∞Ċ_s U_s ( ψ_s) ds
- ∫_t^∞Ċ_s ∇ V_s ( ψ_s) ds + ∫_t^∞√(Ċ_s) d B_s
,
the cost 1/2[ ∫_0^∞|U_t ( ψ_t)|^2_Ċ_t dt ] is minimised
and ψ_0 is distributed according to F d ν_0.
Proceeding as in the proof of Theorem <ref>, the optimal drift is given in terms of
the Polchinski semigroup (<ref>) as the gradient of
W_t(φ) = - log_0,tF(φ)
= - V_t ( φ) - log_C_t[ F ( φ+ζ )
e^- V_0 ( φ+ζ)].
Thus one can check that
( F ν_0 | ν_0)
= _ν_0(F)
= 1/2∫_0^∞ | ∇ W_t ( ψ_t) |_Ċ_t^2 dt .
In this way, we recover from (<ref>) the entropy decomposition (<ref>):
2∫_0^∞_ν_t|∇√(_0,t F)|_Ċ_t^2 dt
=
1/2∫_0^∞_ν_t|∇log_0,t F|_Ċ_t^2 _0,t F dt =
1/2∫_0^∞ | ∇ W_t ( ψ_t)|_Ċ_t^2 dt ,
where we used that the process (<ref>) is distributed at time t with density proportional to
e^ -W_t(φ) - V_t (φ)_C_∞-C_t(dφ) ∝_0,tF(φ) ν_t (dφ) .
The above is an instance of the more general version of the Schrödinger problem which is to find the optimal drift so that the stochastic evolution (<ref>) interpolates between two probability measures μ and ν.
Here, we discussed only the special case where the process starts from a Dirac measure μ= δ_0,
and refer the reader to the survey <cit.> for a general overview and to <cit.>
for a discussion on the role of the convexity of the potential.
In Section <ref>, we address a related issue, namely that in some cases, the previous flow can be modified in order to achieve an interpolation between the measure of interest and some Gaussian measure.
§.§ Variational representation of the renormalised potential
Let U_t = ∇ V_t(φ_t) and recall that Proposition <ref> states:
V_t(φ)
= _φ_t=φV_0 ( φ - ∫_0^t Ċ_s U_s ds + ∫_0^t √(Ċ_s)dW_s ) + 1/2∫_0^t |U_s|^2_Ċ_s ds.
In particular,
V_t(φ) ≥inf_UV_0 (φ-∫_0^t Ċ_s U_s ds + ∫_0^t √(Ċ_s) dW_s ) + 1/2∫_0^t |U_s|_Ċ_s^2 ds,
where the above infimum is over all
adapted processes U: [0,t] → X (where adapted means backwards in time in our convention)
called drifts.
For our current purposes, it suffices to consider U_s=U_s(φ_s) associated with a strong solution
to the (backward in time) SDE
φ_s = φ-∫_s^t Ċ_uU_u(φ_u) du + ∫_s^t √(Ċ_u) dW_u, (s≤ t).
The following proposition is a special case of the Boué-Dupuis or Borell formula, see
<cit.>,
which gives equality in the infimum and is the starting point for the Barashkov–Gubinelli method <cit.>.
An in-depth treatment of stochastic control problems of which this is a special case is given in <cit.>.
V_t(φ) = inf_UV_0 ( φ-∫_0^t Ċ_s U_s ds + ∫_0^t √(Ċ_s) dW_s )
+ 1/2∫_0^t |U_s|_Ċ_s^2 ds.
The entropy inequality (<ref>) with G(ζ)=-V_0(φ+ζ) applied to the Gaussian measure _C_t implies that for any density F with _C_t[F]=1:
V_t(φ) = - log_C_t[e^-V_0(φ+ζ)]
≤__C_t(F) + _C_tF(ζ)V_0(φ+ζ).
Given any drift U_s, let F d_C_t denote the law of φ_0-φ solving (<ref>):
φ_0-φ = -∫_0^t Ċ_s U_s ds + ∫_0^t √(Ċ_s) dW_s.
Then
V_t(φ)
≤1/2∫_0^t |U_s(φ_s)|_Ċ_s^2 ds
+ V_0 ( φ-∫_0^t Ċ_s U_s ds + ∫_0^t √(Ċ_s) dW_s ),
where we used that the entropy is bounded by the first term, exactly as in Theorem <ref>.
As already discussed, the converse direction follows from Proposition <ref>.
The point of view is now that by estimating the expectation on the right-hand side above, for a general drift U,
one can obtain estimates on V_t(φ), and in particular on V_∞(φ) which we recall from (<ref>)
is equivalent to the logarithmic moment generating function of the measure ν_0.
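As an elementary illustration, the zero drift U ≡ 0 gives the upper bound V_t(φ) ≤_C_t[V_0(φ+ζ)], which is simply Jensen's inequality applied to V_t(φ) = -log_C_t[e^-V_0(φ+ζ)]; nontrivial drifts, adapted to the potential, can improve substantially on this crude bound.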
For further details and application to construction of the φ^4_d measures, we refer to <cit.>.
§.§ Lipschitz transport
Instead of a stochastic process, one could also define a map
Ŝ_t : X ↦ X transporting some measure ν̂_t to ν_0 in order to recover
functional inequalities for ν_0 from those for ν̂_t.
For example, assume that ν̂_t satisfies a log-Sobolev inequality with constant γ and that
_ν_0 F ( φ ) = _ν̂_t F (Ŝ_t ( φ) ) .
If the Jacobian ∇Ŝ_t is bounded by some constant C in the following sense
∀ω∈ X,
( ∇Ŝ_t ω )^2
≤
C^2 (ω)^2,
then ν_0 satisfies also a log-Sobolev inequality
_ν_0 F ( φ ) =
_ν̂_t [Φ(F ∘Ŝ_t )]- Φ(_ν̂_t [F ∘Ŝ_t]) ≤2/γ_ν̂_t (∇√(F ∘Ŝ_t ))^2
= 2/γ_ν̂_t(^t ∇Ŝ_t (∇√(F))(Ŝ_t) )^2 ≤2 C^2/γ_ν̂_t((∇√(F))(Ŝ_t))^2
= 2 C^2 /γ_ν_0 (∇√(F ))^2 ,
where we used that ∇Ŝ_t and its transpose ^t ∇Ŝ_t have the same operator norm.
An analogous argument can be applied to more general functional inequalities.
This line of research was first investigated in <cit.> where it was understood that a convex perturbation of a Gaussian measure leads to a 1-Lipschitz transport map Ŝ_t.
We refer to <cit.> for more recent developments as well as other applications of the Lipschitz properties of transport maps
to functional inequalities.
In this section, we follow the work <cit.> which derived a Lipschitz estimate of the form (<ref>) from the multiscale criterion (<ref>) for some covariance decomposition, and generalise the result to other decompositions.
Recall that the measure ν_0 gets renormalised to ν_t by the Polchinski flow.
By construction _ν_t [· ] = _C_∞ -C_t [ e^-V_t· ] and
ν_t converges to a Dirac mass.
As the measure ν_t degenerates, it is more convenient to consider the measure ν̂_t obtained by rescaling ν_t by some matrix D_t so that the measures ν_0 and ν̂_t are comparable:
_ν̂_t F (φ) = _ν_t F (D_t φ) .
If V_0 =0, a natural choice for D_t is to preserve the Gaussian measure, i.e.,
D_t^-1 (C_∞ -C_t)^-1 D_t^-1 = A
⇒ D_t A D_t = (C_∞ -C_t)^-1
,
where all inverse are understood to be taken on the range of A.
This is implicitly assumed in the rest of the section, with 𝕀 also denoting the identity matrix on the range of A.
Assuming that all the matrices depend smoothly on t and commute, then the choice (<ref>) implies the following useful relation:
2 D_t^-1Ḋ_t = Ċ_t (C_∞ -C_t)^-1
=
Ċ_t D_t A D_t
.
Using again (<ref>) we get (recall C_0=0 and notice D_0 =𝕀):
t(D_t^-2) = -AĊ_t
⇒
D_t = (𝕀 -AC_t)^-1/2
.
By construction lim_t →∞ D_t^-1φ = 0 and the renormalised potential satisfies lim_t →∞ V_t ( D_t^-1φ )= V_∞(0) (because V_t≤ C_t^-1≤ C_1^-1 for t≥ 1, recall Lemma <ref>).
Thus the following convergence in distribution to a Gaussian measure holds:
lim_t →∞_ν̂_t F (φ) = _C_∞ F (φ)
=
_A^-1 F (φ) .
Consider S_t a transport map between ν_0 and ν̂_t, so that
_ν_0 F ( S_t (φ) ) = _ν̂_t F (φ)
=
_ν_t F (D_t φ) .
Note that ultimately we are interested in Ŝ_t which is the inverse of S_t, see (<ref>).
We are going to determine an evolution for S_t : X ↦ X. On the one hand,
t_ν_0 F ( S_t (φ) ) =
_ν_0( ∇ F ( S_t (φ) ) , ∂_t S_t (φ) ),
and on the other hand from (<ref>):
t_ν_t F (D_t φ)
= _ν_t - _t F (D_t φ) + ( Ḋ_t φ, ∇ F (D_t φ)) ,
with _tF = 1/2Δ_Ċ_t F - (∇ V_t, ∇ F)_Ċ_t.
Integrating by parts, this gives
t_ν_t F (D_t φ)
= _ν_t( 1/2Ċ_t∇ V_t (φ) - 1/2Ċ_t (C_∞ - C_t)^-1φ + D_t^-1Ḋ_t φ, D_t ∇ F (D_t φ))
= _ν_t1/2 (∇ V_t (φ), D_t ∇ F (D_t φ))_Ċ_t
= _ν_01/2 ( ∇ V_t (D_t^-1· ), D_t ∇ F )_Ċ_t ( S_t (φ) ) ,
where we used (<ref>).
Thus the evolution of S_t is given by
t S_t (φ)
= 1/2 D_t Ċ_t ∇ V_t (D_t^-1 S_t (φ) ) .
It was realised in <cit.> that for the choice Ċ_t = e^-t A (often used in applications, see Section <ref>)
the Lipschitz structure associated with S_t is directly related to (a variant of) the multiscale Bakry–Émery criterion (<ref>).
Under the assumption
∀φ∈ X: Ċ_t^1/2 V_t(φ) Ċ_t^1/2≥μ̇_t 𝕀,
with μ_t := ∫_0^t ds μ̇_s,
the inverse map Ŝ_t = S_t^-1 : X ↦ X is exp ( - 1/2μ_t)-Lipschitz.
From the convergence (<ref>) to the Gaussian measure and if μ_∞ := ∫_0^∞ ds μ̇_s < ∞,
then from the previous theorem, one can extract an exp (- 1/2μ_∞)-Lipschitz map from the Gaussian measure P_C_∞ to ν_0
(see <cit.>).
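For instance, with the decomposition Ċ_t = e^-tA, if V_0 is convex then V_t is convex for all t by the preservation of convexity along the Polchinski flow established above, so the assumption holds with μ̇_t = 0 and μ_∞ = 0. One thus recovers the fact, recalled at the beginning of this section, that a convex perturbation of a Gaussian measure admits a 1-Lipschitz transport map from the Gaussian measure onto ν_0.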
For Ċ_t = e^-t A, the definition (<ref>) implies D_t = e^1/2 tA = Ċ_t^-1/2 and the evolution (<ref>) becomes particularly simple:
d/dt S_t (φ) = 1/2Ċ_t^1/2∇ V_t ( Ċ_t^1/2 S_t (φ) ) .
The Jacobian evolves according to
d/dt ∇ S_t (φ) = 1/2Ċ_t^1/2 ∇²V_t ( Ċ_t^1/2 S_t (φ) ) Ċ_t^1/2∇ S_t (φ) .
As a consequence of (<ref>), the Gronwall inequality implies
∀ω∈ X, d/dt |∇ S_t (φ) ω|^2 ≥μ̇_t |∇ S_t (φ) ω|^2
⇒ |∇ S_t (φ) ω|^2 ≥exp ( μ_t ) |ω|^2 .
By the inverse function theorem, we deduce that the operator norm of ∇Ŝ_t is less than
exp ( - 1/2μ_t).
Another useful covariance decomposition (see Theorem <ref>) is of the form
Ċ_t = (t A + 𝕀)^-2.
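The following short numerical check (illustration only, on an arbitrary positive definite test matrix A) confirms that this Ċ_t integrates to C_∞ = A^{-1}, that C_t = A^{-1}(𝕀 - (tA+𝕀)^{-1}), and that the corresponding rescaling D_t = Ċ_t^{-1/4} = (tA+𝕀)^{1/2} satisfies D_t A D_t = (C_∞ - C_t)^{-1}.

# Sanity check of the decomposition Cdot_t = (tA + I)^{-2}.  Illustration only.
import numpy as np
from scipy.linalg import inv, sqrtm

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
I = np.eye(n)

def C(t):
    # closed form of C_t = int_0^t (sA + I)^{-2} ds
    return inv(A) @ (I - inv(t * A + I))

t, h = 1.3, 1e-5
Cdot_fd = (C(t + h) - C(t - h)) / (2 * h)             # finite-difference derivative
print(np.allclose(Cdot_fd, inv(t * A + I) @ inv(t * A + I), atol=1e-6))
print(np.linalg.norm(C(1e7) - inv(A)) < 1e-6)         # C_t -> A^{-1} = C_inf

D_t = np.real(sqrtm(t * A + I))                       # D_t = Cdot_t^{-1/4}
print(np.allclose(D_t @ A @ D_t, inv(inv(A) - C(t)))) # D_t A D_t = (C_inf - C_t)^{-1}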
The result of Theorem <ref> can be extended to that case, but with a worse Lipschitz constant.
Let Ċ_t=(tA+𝕀)^-2 for t≥ 0.
Under the assumption
∀φ∈ X: Ċ_t ∇²V_t(φ) Ċ_t - 1/2C̈_t ≥λ̇_t Ċ_t,
with λ_t := ∫_0^t ds λ̇_s,
the inverse map Ŝ_t = S_t^-1 : X ↦ X is ‖√(tA+𝕀)‖ e^-λ_t/2-Lipschitz, with ‖M‖ = sup{‖Mω‖_2 :ω∈ X, ‖ω‖_2=1} the operator norm of a matrix M.
In concrete examples,
the λ_t diverge with t, contrary to the μ_t in Theorem <ref> that remained bounded.
This divergence precisely compensates the factor √(t), giving a uniform Lipschitz bound on Ŝ_t.
An application of Theorem <ref> is given in Proposition <ref>.
For each t≥ 0, we look for a matrix B_t depending only on C_t and its derivatives,
commuting with them, and such that we can set up a Gronwall estimate for (B_t∇ S_t(φ)ω)^2 for each φ,ω∈ X.
Note that the definition (<ref>) of D_t implies D_t=Ċ_t^-1/4.
In particular D_t commutes with B_s,C_s,Ċ_s,C̈_s for any s.
Equation (<ref>) gives:
d/dt ∇ S_t(φ) = 1/2Ċ_t D_t ∇²V_t(D_t^-1S_t(φ)) D_t^-1∇ S_t(φ) .
Therefore:
d/dt |B_t∇ S_t(φ)ω|^2
= (B_t∇ S_t(φ)ω, [ B_tĊ_t D_t ∇²V_t(D_t^-1S_t(φ))D_t^-1B_t^-1 + 2Ḃ_t B_t^-1]B_t∇ S_t(φ)ω) .
In order to use the Hessian bound (<ref>),
we need to choose B_t in such a way that, for each t≥ 0:
B_tĊ_t D_t ∇²V_t(D_t^-1S_t(φ))D_t^-1B_t^-1 + 2Ḃ_t B_t^-1 ≥Ċ_t^1/2 ∇²V_t(D_t^-1S_t(φ))Ċ_t^1/2
-1/2Ċ_t^-1/2C̈_tĊ_t^-1/2 .
The Hessian term gives B_t = D_t^-1Ċ_t^-1/2 = Ċ_t^-1/4=D_t as a sufficient condition,
and for this choice Ḃ_t B_t^-1 = -1/4C̈_t Ċ_t^-1 so that (<ref>) holds.
As a result,
d/dt |D_t∇ S_t(φ)ω|^2 ≥λ̇_t |D_t∇ S_t(φ)ω|^2 , (t≥ 0) .
The Gronwall inequality then yields:
|D_t∇ S_t(φ)ω|^2 ≥ e^λ_t |ω|^2 , (t≥ 0) .
It follows that Ŝ_t is ‖Ċ_t^-1/4‖ e^-λ_t/2-Lipschitz, i.e., ‖√(tA+𝕀)‖ e^-λ_t/2-Lipschitz as claimed.
§ APPLICATIONS
In this section, we present concrete examples to which the multiscale Bakry-Émery criterion of Theorem <ref> can be applied.
The criterion gives a bound on the log-Sobolev constant in terms of real numbers λ̇_t (t>0) obtained through convexity lower bounds on the renormalised potential:
∀φ∈ X, t≥ 0: Ċ_t ∇²V_t(φ)Ċ_t - 1/2C̈_t≥λ̇_tĊ_t .
These lower bounds are determined by the choice of the covariance decomposition (C_t).
While Theorem <ref> holds for any decomposition, checking (<ref>) for concrete models often requires
a specific choice of decomposition. This will be illustrated in examples in the following sections.
For now, we discuss the sharpness of the criterion in Theorem <ref>.
To fix ideas, suppose we have a model defined on Λ_ϵ,L=𝕋_L^d∩ϵℤ^d for d≥ 2,
where either ϵ=1 is fixed and L→∞ (statistical mechanics model)
or ϵ→ 0 is a small regularisation parameter and L is fixed or L→∞ (continuum field theory model in finite or infinite volume).
From the discussion in Sections <ref>–<ref> recall that the speed of convergence of an associated dynamics is often related to the presence of phase transitions in equilibrium.
These phase transitions are phenomena arising in the limit of large volumes, i.e., large L.
In this limit, one typically expects that the log-Sobolev constant should be bounded from below independently of Λ_ϵ,L as long as no phase transition occurs.
On the other hand, the additional regularisation parameter ϵ>0 is not expected to affect the dynamics, i.e.,
the log-Sobolev constant should also be bounded from below as ϵ→ 0.
Sharpness of the criterion of Theorem <ref> is therefore evaluated through the following questions:
* In the absence of a phase transition, does the criterion provide a lower bound on the log-Sobolev constant uniform in the volume (i.e., in L)?
* Does it give a bound on the log-Sobolev constant independent of the regularisation parameter ϵ for continuum models?
* If the first point holds, can one correctly estimate how the log-Sobolev constant vanishes as a function of the distance to the critical point, or at the critical point as a function of L?
The third point is considerably more involved than the first two.
To provide a guideline to read the next sections, we collect here the answers to the above three questions obtained by studying the examples presented below.
* For statistical mechanics models (ϵ=1) at high temperature, i.e., far away from the critical point, the multiscale Bakry-Émery criterion implies point (i), see, e.g., Theorem <ref> which extends to a much broader class of models.
This regime is also covered by many other criteria (see e.g. the monographs <cit.> and <cit.>),
with the notable exception of mean-field spin-glass models, see the discussion of <cit.>.
* The criterion (<ref>) can be sharp enough to reach the critical point,
see, e.g., Theorem <ref> and Proposition <ref> below for the Ising and φ^4 models.
In other words, there are models for which the criterion implies (i) up to the phase transition.
We expect that the criterion should imply (i) up to the critical point for a large class of models.
* The criterion can provide a bound on the log-Sobolev constant that is uniform as ϵ→ 0
(see Theorems <ref> and <ref> for the φ^4 and sine-Gordon models), thus satisfying point (ii).
Point (iii) is in general open and in this generality hopelessly difficult, but some positive results exist.
For models with quadratic mean-field interaction such as the Curie-Weiss model, one can answer (iii) in the affirmative.
A detailed analysis above and below the critical temperature was also carried out for mean-field O(n) models in <cit.>.
For certain continuum particle systems with mean-field interaction,
the behaviour close to the critical point is the subject of ongoing work <cit.>.
For more complicated models in which computations can be carried out, such as the hierarchical φ^4_4 model <cit.>,
the criterion provides almost matching upper and lower bounds on how fast the log-Sobolev constant vanishes at the critical point.
For the nearest-neighbour Ising model in d≥ 5, the criterion implies polynomial bounds on the log-Sobolev constant at and near the critical temperature
<cit.>.
§.§ Applications to Euclidean field theory
In this section, let Λ =Λ_ϵ,L be a discrete torus of mesh size ϵ and side length L (assumed to be a multiple of ϵ):
Λ_ϵ,L = 𝕋_L^d∩ϵℤ^d.
The discrete Laplacian on Λ_ϵ,L is given by
∀φ∈^Λ_ϵ,L,
(Δ^ϵφ)_x =ϵ^-2∑_y∼ x(φ_y-φ_x).
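As a concrete illustration (not needed for the arguments), the discrete Laplacian can be assembled as a matrix as follows; the mesh and side length below are arbitrary test values and only d=2 is implemented.

# Minimal matrix construction of the discrete Laplacian Delta^eps on the
# two-dimensional discrete torus.  Illustration only.
import numpy as np

def lattice_laplacian(L=1.0, eps=0.25):
    n = int(round(L / eps))                  # sites per direction
    N = n * n
    idx = lambda x, y: (x % n) * n + (y % n)
    Delta = np.zeros((N, N))
    for x in range(n):
        for y in range(n):
            i = idx(x, y)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
                Delta[i, idx(x + dx, y + dy)] += 1.0 / eps**2
                Delta[i, i] -= 1.0 / eps**2
    return Delta

Delta = lattice_laplacian()
print(np.allclose(Delta.sum(axis=1), 0.0))            # rows sum to zero
print(np.all(np.linalg.eigvalsh(-Delta) >= -1e-10))   # -Delta^eps is positive semidefinite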
The (lattice regularised) Euclidean field theory models we consider are of the form
ν^ϵ,L(dφ) ∝exp[-ϵ^d/2∑_x∈Λ_ϵ,Lφ_x(-Δ^ϵφ)_x -
ϵ^d∑_x∈Λ_ϵ,L V^ϵ(φ_x)
] dφ,
where dφ denotes the Lebesgue measure on ^Λ_ϵ,L and
V^ϵ is a real-valued function chosen in such a way that
the ϵ→ 0 limit of the measure
(on a suitable space of generalised functions) exists and is non-Gaussian.
Writing ∇ V^ϵ(φ) = ((V^ϵ)'(φ_x))_x for φ∈^Λ_ϵ,L,
the dynamics is the singular SPDE
dφ_t = Δ^ϵφ_t dt - ∇ V^ϵ(φ_t) dt + √(2)dW_t^ϵ,L
where dW_t^ϵ,L is space-time white noise on _+ ×Λ_ϵ,L, i.e., the W^ϵ,L_x are independent Brownian motions with variance ϵ^-d for x∈Λ_ϵ,L, or equivalently a standard Brownian motion with respect to the continuum inner product (u,v)_ϵ = ϵ^d ∑_x∈Λ_ϵ,L u_xv_x.
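For illustration, an Euler–Maruyama discretisation of this dynamics can be written in a few lines. The quartic potential used below omits the counterterm a^ϵ(λ) discussed below, so this is only a sketch of the time-stepping (with arbitrary parameters), not a simulation of the renormalised model.

# Euler--Maruyama step for  d phi = Delta^eps phi dt - grad V^eps(phi) dt + sqrt(2) dW,
# with site noise of variance eps^{-d} per unit time.  Toy parameters, no counterterm.
import numpy as np

eps, L, d = 0.25, 2.0, 2
n = int(round(L / eps))
lam, mu = 1.0, 1.0
dV = lambda phi: lam * phi**3 + mu * phi          # bare V^eps = lam/4 phi^4 + mu/2 phi^2

def laplacian(phi):                               # Delta^eps on the periodic grid
    out = -2 * d * phi
    for axis in range(d):
        out += np.roll(phi, 1, axis) + np.roll(phi, -1, axis)
    return out / eps**2

rng = np.random.default_rng(0)
phi = np.zeros((n, n))
dt = 1e-4
for _ in range(2000):
    noise = rng.standard_normal(phi.shape) * np.sqrt(2 * dt / eps**d)
    phi = phi + dt * (laplacian(phi) - dV(phi)) + noise
print(float(phi.mean()), float((phi**2).mean()))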
The pathwise (short time) limit theory for such SPDEs is the subject of Hairer's regularity
structure theory
<cit.>,
the paracontrolled method of Gubinelli et al. <cit.>,
and the pathwise renormalisation group approach <cit.>.
The log-Sobolev inequality for the associated dynamics takes the form:
_ν^ϵ,L(F)
≤2/γ D_ν^ϵ,L(√(F)),
with the standard Dirichlet form with respect to the gradient ∇_ϵ corresponding to (·,·)_ϵ:
D_ν^ϵ,L(F)
= _ν^ϵ,L(∇^ϵ F,∇^ϵ F)_ϵ
= 1/ϵ^d∑_x∈Λ_ϵ,L_ν^ϵ,L[ |∂ F/∂φ_x|^2 ] ,
i.e., (∇^ϵ F)_x = (∇_φ^ϵ F)_x = ϵ^-d ∂ F/∂φ_x.
(Thus this gradient acts on functionals of fields F: ^Λ_ϵ,L→
while the Laplacian (<ref>) acts on fields φ∈^Λ_ϵ,L.)
We now discuss two prototypical models.
Continuum sine-Gordon model
Let d=2.
For 0<β<8π and z∈,
the sine-Gordon model is defined by the potential
V^ϵ(φ) = 2z ϵ^-β/4πcos(√(β)φ),
(φ∈).
One can also add a convex quadratic part to the measure: the massive sine-Gordon model with mass m>0 corresponds to the potential V^ϵ(φ) +1/2 m^2 φ^2.
Continuum φ^4 model
Let d=2 or d=3.
For λ>0 and μ∈,
the φ^4_d measure is defined by
V^ϵ(φ) = λ/4φ^4 + μ+a^ϵ(λ)/2φ^2,
(φ∈)
,
where a^ϵ(λ) is a divergent counterterm.
Explicitly, for an arbitrary fixed m^2 > 0,
one can take a^ϵ(λ)=a^ϵ(λ,m^2) with
a^ϵ(λ,m^2) := -3λ(-Δ^ϵ+m^2)^-1(0,0) + 6λ^2 (-Δ^ϵ+m^2)^-1(0,·)^3_L^3(Λ_ϵ,L)
,
and the notation f^p_L^p(Λ_ϵ,L) = ϵ^d ∑_x∈Λ_ϵ,L |f(x)|^p for p>0.
The counterterms defined in terms of different m^2 differ by additive constants and thus the choice of m^2 corresponds
to a normalisation. In the following we take m^2=1 for the definition of the counterterm.
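The counterterm can be evaluated numerically from this formula. The following sketch does so on a small two-dimensional periodic lattice; the normalisation of the Green's function as the kernel solving (-Δ^ϵ+m²)G(·,0) = ϵ^{-d}δ_0 is our assumption for matching the continuum conventions, and the lattice sizes are illustrative only.

# Numerical evaluation of a^eps(lambda, m^2 = 1) on a small 2d periodic lattice.
import numpy as np

def counterterm(lam, eps, L, m2=1.0, d=2):
    n = int(round(L / eps))
    N = n * n
    idx = lambda x, y: (x % n) * n + (y % n)
    M = m2 * np.eye(N)                        # matrix of -Delta^eps + m^2
    for x in range(n):
        for y in range(n):
            i = idx(x, y)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
                M[i, idx(x + dx, y + dy)] -= 1.0 / eps**2
                M[i, i] += 1.0 / eps**2
    e0 = np.zeros(N); e0[0] = 1.0
    G0 = np.linalg.solve(M, e0) / eps**d      # assumed normalisation of G(., 0)
    term1 = -3.0 * lam * G0[0]                               # -3 lam G(0,0)
    term2 = 6.0 * lam**2 * eps**d * np.sum(np.abs(G0) ** 3)  # 6 lam^2 ||G(0,.)||_{L^3}^3
    return term1 + term2

for eps in [0.25, 0.125, 0.0625]:
    print(eps, counterterm(lam=1.0, eps=eps, L=1.0))  # drifts to -infinity (logarithmically)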
The sine-Gordon and φ^4_d models are defined on the discretised torus Λ_ϵ,L = L^d∩ϵ^d.
As explained in Section <ref>,
these models should be thought of as discretised versions of limiting models defined on the continuum torus L^d.
Recall that the criterion of Theorem <ref> asks for a lower bound on the Hessian of the renormalised potential uniformly in the field:
for some λ̇_t∈ℝ,
∀φ∈ℝ^Λ_ϵ,L, t≥ 0: Ċ_t ∇²V_t(φ)Ċ_t - 1/2C̈_t ≥λ̇_tĊ_t .
To obtain information on the dynamics in the ϵ→ 0 limit,
one is therefore interested in estimates of the renormalised potential that are uniform in ϵ,φ.
For the sine-Gordon model, this was carried out in <cit.> by providing an explicit description of the renormalised potential at each scale.
For the massive continuum sine-Gordon model with mass m>0, let A=-Δ^ϵ+m^2𝕀 and Ċ_t = e^-tA (t≥ 0).
Then,
if β<6π,
there is a constant μ^* = μ^*(β,z,m,L)>0 that does not depend on ϵ,t,
such that with V_0(φ) = ϵ^2 ∑_x∈Λ_ϵ,L 2z cos(√(β)φ_x):
∀φ∈ℝ^Λ_ϵ,L, t≥ 0: Ċ_t ∇²V_t(φ)Ċ_t ≥μ̇_t Ċ_t
with sup_t≥ 0| ∫_0^t μ̇_s ds | ≤μ^* .
Given β,z,m,L, this yields a lower bound on the log-Sobolev constant inf_ϵγ^ϵ,L_β,z,m>0.
Moreover,
the log-Sobolev constant is uniform in the large-scale parameter L under the following condition.
If L satisfies m≥ 1/L and the coupling constant z is such that |z|≤δ_β m^2+β/4π for a small enough δ_β>0,
then inf_ϵγ^ϵ,L_β,z,m>m^2 - O_β( m^β/4 π |z| ), uniformly in L.
For the sine-Gordon model,
the multiscale Bakry-Émery criterion of Theorem <ref> is thus seen to provide the optimal independence of ϵ of the log-Sobolev constant in finite volume,
as well as of L under an additional small coupling assumption depending on the external mass.
In the case of the φ^4_d model, a description of the renormalised potential at all scales is more difficult to work out directly.
The log-Sobolev inequality was obtained in <cit.> using a correlation inequality due to Ding, Song, and Sun <cit.>,
valid for ferromagnetic interactions (see Proposition <ref> below).
This correlation inequality reduces the proof of the log-Sobolev constant to an estimate of certain correlation functions at field φ = 0,
for which many tools are available.
For the continuum φ^4 model in d=2,3, let
A=-Δ^ϵ+𝕀 and
Ċ_t = (tA+𝕀)^-2 (t≥ 0) and take m=1 in the definition (<ref>) of the counterterm.
Then with V_0(φ) = ϵ^d∑_x∈Λ_ϵ,L (λ/4φ^4 + μ-1+a^ϵ(λ)/2φ^2):
∀φ∈ℝ^Λ_ϵ,L, t≥ 0: Ċ_t ∇²V_t(φ)Ċ_t - 1/2C̈_t ≥(1/t-χ_t/t^2) Ċ_t ,
with χ_t = χ^ϵ,L_λ,μ+1/t the 0-field susceptibility of the φ^4_d model at scale t>0:
if <·>^ϵ,L_λ,μ denotes the average under the φ^4_d measure,
χ_t
:=
ϵ^d ∑_x∈Λ_ϵ,L<φ_0φ_x>^ϵ,L_λ,μ+1/t
.
This implies that the log-Sobolev constant γ^ϵ,L_λ,μ satisfies
inf_ϵ,Lγ^ϵ,L_λ,μ>0 for any λ,μ for which the measure does not have a phase transition,
and inf_ϵγ^ϵ,L_λ,μ>0 for any λ,μ and L fixed.
In the φ^4_d case,
the multiscale criterion gives a log-Sobolev constant bounded uniformly in the volume and ϵ in the entire single-phase region.
However, the bound obtained on the log-Sobolev constant has accurate dependence on λ,μ only far away from the transition,
corresponding to values of λ,μ such that μ>0 and when λ≪μ.
We conclude the section with a last example, the lattice φ^4 model, corresponding to the case ϵ=1 above.
Thus its single-site potential is given by:
V(φ) = V^ϵ=1(φ)
=
g/4φ^4 + ν/2φ^2
,
and the coupling matrix is A=-Δ^{ϵ=1}+𝕀, setting m^2=1 without loss of generality.
The underlying lattice is 𝕋_L^d∩ℤ^d for any d≥ 1.
As a special case, Theorem <ref> yields the log-Sobolev inequality whenever the infinite volume susceptibility is finite
(though the proof of this statement can be simplified considerably in this case, see <cit.>).
We nonetheless make an independent statement because the Lipschitz constant of the map of Theorem <ref> can be bounded uniformly in the volume.
Theorem <ref> does not seem to be effective for the discrete model.
[<cit.>]
For the lattice φ^4 model in any dimension with associated expectation <·>^L_g,ν,
let Ċ_t = (tA+𝕀)^-2 and define the lattice susceptibility:
χ^L_t
:=
∑_x∈Λ<φ_0φ_x>^L_g,ν+1/t
.
Then the Hessian bound (<ref>) holds with χ^L_t in the right-hand side instead of χ_t.
This implies that the log-Sobolev constant γ^L_g,ν satisfies inf_Lγ^L_g,ν>0 if and only if sup_Lχ^L_∞<∞.
Moreover, if sup_Lχ^L_∞<∞,
then the transport map of Theorem <ref> has a Lipschitz constant that is bounded uniformly in L and t.
Let us prove the bound on the Lipschitz constant.
Theorem <ref> gives the following bound on the Lipschitz constant of the transport map:
‖√(tA +𝕀)‖ exp[-1/2∫_0^t(1/s-χ^L_s/s^2) ds] .
The Brascamp-Lieb inequality (<ref>) ensures:
inf_L inf_s≤ 1(1/s-χ^L_s/s^2)
>0.
For t ≥ 1, the second Griffiths inequality <cit.> implies that t↦χ^L_t is increasing, thus is bounded by χ^L_∞ so that
1/t-χ^L_t/t^2≥1/t-χ^L_∞/t^2,
(t≥ 1)
.
Combining the previous inequalities, we deduce that
there is a constant C = C(χ^L_∞)>0
such that for all t>0:
exp[-1/2∫_0^t(1/s-χ^L_s/s^2) ds]
≤ C min{t^-1/2,1} ,
and the observation A=-Δ^{ϵ=1}+𝕀≤ (4d+1)𝕀 implies that the Lipschitz constant
(<ref>) is uniformly bounded. This
concludes the proof.
§.§ Applications to Ising models
In this section, we explain how to apply the ideas developed in Section <ref> to Ising models with discrete spins.
Using the Bakry–Émery criterion and its multiscale version, these ideas were developed in <cit.>,
while closely related results were obtained using spectral and entropic independence and stability estimates in <cit.>.
§.§.§ Renormalised potential
The Ising model with coupling matrix A at inverse temperature β>0 and (site-dependent) external field h on a finite set Λ is defined by:
_μ[F] = _μ_β,h[F] ∝∑_σ∈{± 1}^Λ e^-1/2 (σ, β Aσ) + (h,σ) F(σ).
Since σ_x^2=1 for each x,
the measure is invariant under the change A→ A+α𝕀, α∈.
Without loss of generality,
we can thus assume that the coupling matrix A is positive definite.
We also assume that it has spectral radius bounded by 1; this just amounts to a choice of normalisation for β.
A natural covariance decomposition is
C_t= (tA + (α -t)𝕀)^-1,
for a parameter α>0 which will be unimportant.
For t<α the matrix C_t is positive definite and for β<α,
as explained above, the Ising model at inverse temperature β can be written as
_μ[F] ∝∑_σ∈{± 1}^Λ e^-1/2 (σ, C_β^-1σ) + (h,σ) F(σ).
Strictly speaking, C_t is not a covariance decomposition since C_0=α^-1𝕀≠ 0,
different from the assumption in Section <ref>.
However, all results from that section can be applied to the covariance decomposition C_t-C_0,
and we will do this without further emphasis.
The renormalised potential can be defined analogously to the continuous setting as:
V_t(φ) = -log∑_σ∈{± 1}^Λ e^-1/2(σ-φ, C_t^-1 (σ-φ)) + (h,σ).
This leads to a decomposition of the Ising measure (<ref>) as:
_μ[F]
=
_ν_t,β[_μ_t^φ[F]]
(0≤ t<β)
.
By analogy with (<ref>), the fluctuation measure μ^φ_t used above
is the Ising measure with coupling matrix C_t^-1 and external field C_t^-1φ + h:
μ^φ_t (σ) = μ_t,C_t^-1φ+h∝ e^-1/2 (σ, C_t^-1σ) + (C_t^-1φ +h,σ),
and the renormalised measure ν_t = ν_t,β is supported on the image of C_β-C_t in ^Λ:
ν_t,β(dφ)
∝
e^-V_t(φ) _C_β-C_t(dφ)
∝exp[ -1/2(φ,(C_β-C_t)^-1φ) - V_t(φ) ] dφ
.
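For very small systems the renormalised potential can be evaluated directly by summing over all spin configurations. The following short Python sketch does this for an arbitrary small ferromagnetic test coupling; all parameters are illustrative choices, not tied to any particular result above.

# Brute-force evaluation of the Ising renormalised potential V_t(phi) on a
# tiny system, by summing over all 2^N spin configurations.  Illustration only.
import itertools
import numpy as np

N = 6
A = np.eye(N)
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = -0.25   # ferromagnetic cycle
A = A / np.linalg.eigvalsh(A).max()                 # spectrum of A in (0, 1]
alpha, t = 1.0, 0.3
h = 0.1 * np.ones(N)
C_t_inv = t * A + (alpha - t) * np.eye(N)           # C_t^{-1}

def V_t(phi):
    vals = [-0.5 * (np.array(s) - phi) @ C_t_inv @ (np.array(s) - phi) + h @ np.array(s)
            for s in itertools.product([-1.0, 1.0], repeat=N)]
    return -np.log(np.sum(np.exp(np.array(vals))))

print(V_t(np.zeros(N)), V_t(0.5 * np.ones(N)))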
Even though σ:Λ→{± 1}
is discrete, the renormalised field φ: Λ→ is continuous as soon as t>0.
Convexity-based criteria,
such as the Bakry-Émery or multiscale Bakry-Émery criteria of Theorems <ref> and <ref>, can therefore be used to
derive log-Sobolev inequalities for ν_t,β.
Before discussing these, we summarise results about the infinite temperature (product) Ising model, which serve as input to these arguments.
§.§.§ Preliminaries: single-spin inequalities
At infinite temperature β=0 the spin models we consider become product measures.
By tensorisation, it thus suffices to know the log-Sobolev constant of a single spin (see Example <ref>).
The following summarises known results for these.
Ising model, standard Dirichlet form
Let μ be the probability measure on {± 1} with μ(+1)=p=1-q.
The standard Dirichlet form is
D_μ(F) = 1/2_μ (F(σ^x)-F(σ))^2= 1/2 (F(+1)-F(-1))^2 .
Let μ be the probability measure on {± 1} with μ(+1)=p=1-q. Then
_μ (F) ≤pq(log p-log q)/p-q(√(F(+1))-√(F(-1)))^2.
Thus the log-Sobolev constant with respect to the standard Dirichlet
form on {± 1} is at least 2:
_μ (F) ≤ D(√(F))
= 2/γ_0D(√(F)), γ_0=2.
The proof can be found in <cit.> or <cit.>.
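A quick numerical check of this single-spin bound (random trials only, not a proof):

# Randomised check of Ent_mu(F) <= [pq(log p - log q)/(p - q)] (sqrt(F(+1)) - sqrt(F(-1)))^2.
import numpy as np

rng = np.random.default_rng(0)

def entropy(F, p):
    q = 1 - p
    EF = p * F[1] + q * F[0]
    return p * F[1] * np.log(F[1]) + q * F[0] * np.log(F[0]) - EF * np.log(EF)

ok = True
for _ in range(10000):
    p = rng.uniform(0.01, 0.99)
    if abs(p - 0.5) < 1e-3:
        continue                               # avoid the removable singularity at p = 1/2
    q = 1 - p
    F = rng.uniform(0.01, 10.0, size=2)        # (F(-1), F(+1)), both positive
    const = p * q * (np.log(p) - np.log(q)) / (p - q)
    ok &= entropy(F, p) <= const * (np.sqrt(F[1]) - np.sqrt(F[0]))**2 + 1e-12
print(ok)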
Ising model, heat bath Dirichlet form
With μ as above, the heat-bath Dirichlet form is
D^ HB(F)
=
1/2∑_σμ(σ)μ(σ^x)/μ(σ)+μ(σ^x) (F(σ^x)-F(σ))^2
=
pq (F(+1)-F(-1))^2
Let μ be the probability measure on {± 1} with μ(+1)=p=1-q. Then
_μ(F) ≤ pq(log F(+1)-log F(-1))(F(+1)-F(-1)).
Thus the modified log-Sobolev constant with respect to the heat-bath Dirichlet form is at least 1/2:
_μ F ≤ D^ HB(log F, F)
= 1/2γ_0 D^ HB(log F, F), γ_0 = 1/2
.
See <cit.>.
Similar single-spin inequalities are available for O(n) models <cit.> and allow to extend for example
Theorem <ref> below for the Ising model to these models with little change <cit.>.
§.§.§ Entropy decomposition
To prove a log-Sobolev inequality (or modified log-Sobolev inequality) for the Ising measure,
we start from the decomposition (<ref>) with t=0,
decomposing μ=μ_β^0 into two parts: an infinite temperature Ising measure μ^φ_0 with external field C_0^-1φ+h
and the renormalised measure ν_0,β.
The corresponding entropy decomposition (<ref>) is:
_μ(F)
=
_ν_0,β[_μ_0^φ(F)]
+
_ν_0,β(_μ_0^φ[F])
.
To prove the log-Sobolev inequality (or modified log-Sobolev inequality) with respect to a Dirichlet form D_μ,
we want to bound both terms by a multiple of D_μ(√(F))
(or D_μ(F,log F)).
In this discussion, we focus on the log-Sobolev inequality
with respect to the standard Dirichlet form:
D_μ(F) = 1/2∑_x∈Λ_μ(F(σ)-F(σ^x))^2.
However, the same strategy applies with different jump rates
and a modified log-Sobolev inequality or a spectral inequality as input (and output),
see Sections <ref> and <ref>.
Under the assumption that μ_0^φ satisfies a log-Sobolev inequality with constant γ_0
uniformly in φ, the first term on the right-hand side of (<ref>) is bounded by
2/γ_0_ν_0,β[D_μ_0^φ(√(F))]=2/γ_0D_μ(√(F)).
Since μ_0^φ is a product measure, this assumption is well understood and one can take γ_0=2 for the standard Dirichlet form,
as discussed in Section <ref>.
The last equality relies on the specific jump rates in the standard Dirichlet form (<ref>).
For other Dirichlet forms, one can also get an inequality, see Section <ref>.
Bounding the second term on the right-hand side of (<ref>) essentially amounts to estimating the log-Sobolev constant of the renormalised measure ν_0,β. Indeed, if ν_0,β satisfies a log-Sobolev inequality with constant γ_0,β (and standard Dirichlet form) then
this term is bounded by
2/γ_0,β_ν_0,β[|(∇_φ_μ^φ_0[F]^1/2)|^2]
≤4α^2/γ_0γ_0,β D_μ(√(F)),
where the second inequality is elementary and follows from the next exercise (which is similar to the argument in the tensorisation proof of
Example <ref>) and the Cauchy–Schwarz inequality.
For each x∈Λ,
let μ^φ,x_0 denote the law of σ_x under the product measure μ^φ_0.
Then:
∇_φ_x(_μ^φ_0[F]^1/2)
=
α/2√(_μ^φ_0[F])_μ^φ_0(F,σ_x)
=
α/2√(_μ^φ_0[F])_μ_0^φ[_μ^φ,x_0(F,σ_x)]
(recall C_0 = α^-1𝕀) and:
_μ^φ,x_0(F,σ_x)^2
≤
8_μ_0^φ,x[F]_μ^φ,x_0(√(F))
≤8/γ_0_μ_0^φ,x[F]D_μ^φ,x_0(√(F))
.
The first equation is a simple computation.
The first inequality in (<ref>) follows from Cauchy-Schwarz inequality using the following general expression for the covariance of functions G_1,G_2 under a measure m:
_m(G_1,G_2)
=
1/2∫(G_1(x)-G_1(y))(G_2(x)-G_2(y)) dm(x) dm(y)
.
The second inequality is the general fact that the spectral gap is always larger than the log-Sobolev constant, see Proposition <ref>.
Details can be found in <cit.>.
In summary, if μ_0^φ satisfies a log-Sobolev inequality with constant γ_0 and ν_0,β satisfies
a log-Sobolev inequality with constant γ_0,β then the inverse log-Sobolev constant γ^-1 of μ satisfies
1/γ≤1/γ_0(1+2 α^2/γ_0,β).
At this point the objective is to bound the log-Sobolev constant γ_0,β of the renormalised measure ν_0,β. Results are stated next under different conditions, with the corresponding verifications postponed to Section <ref>.
Recall that the renormalised measure is ν_0,β(dφ) ∝
e^- V_0(φ)_C_β-C_0(dφ) with the renormalised potential V_0 defined in (<ref>).
If the temperature is sufficiently high, namely if β<α<1,
then it turns out that V_0 is strictly convex
so that the standard
Bakry–Émery criterion is applicable <cit.> and gives (see Exercise <ref>):
γ_0,β≥α-α^2.
Taking α↓β, this leads to the following theorem.
We recall the convention fixed below (<ref>) that the coupling matrix A has spectrum in [0,1].
This spectral condition appeared in <cit.> (and unlike previously existing conditions applies to the SK spin glass).
For β<1 (under the conventions stated below (<ref>)),
the Ising model μ satisfies the log-Sobolev inequality:
for each F:{-1,1}^Λ→_+,
_μ(F)
≤(1+2 β/(1-β))D_μ(√(F)) .
For β>1, the renormalised potential V_0 is in general not convex and the log-Sobolev constant γ_0,β of the renormalised measure ν_0,β cannot be bounded using the Bakry–Émery criterion.
However, the argument above can be generalised by using the multiscale Bakry–Émery criterion instead.
Assuming λ̇_t are as in the multiscale Bakry–Émery condition (<ref>),
Theorem <ref> gives:
1/γ_0,β≤ ‖Ċ_0‖∫_0^β e^-2λ_t dt ≤ 1/α^2∫_0^β e^-2λ_t dt,
λ_t = ∫_0^t λ̇_s ds,
and substituting this into (<ref>) gives the following log-Sobolev inequality:
_μ(F)
≤2/γ D_μ(√(F)), 1/γ≤1/γ_0(1+2∫_0^β e^-2λ_t dt) ,
λ_t = ∫_0^t λ̇_s ds.
In the same situation,
instead of using the multiscale Bakry–Émery criterion to bound γ_0,β in (<ref>),
one can prove a log-Sobolev inequality for μ by using the entropic stability estimate
discussed in Section <ref>.
This was done in <cit.> (to prove a modified log-Sobolev inequality, but the argument generalises to a log-Sobolev inequality for the standard Dirichlet form, see below).
It turns out that, for the decomposition (<ref>), the conditions
(<ref>) and (<ref>) of the multiscale Bakry–Émery criterion and of the entropic stability estimate are identical
provided λ̇_t = -α_t, see Exercise <ref>.
This is not the case for other covariance decompositions, and in particular not for those used for continuous models,
see the discussion in Section <ref>.
Using the entropic stability estimate (<ref>), the entropy is therefore bounded by:
_μ(F)
≤
e^-λ_β_ν_0,β[_μ_0^φ(F)],
λ_t = ∫_0^t λ̇_s ds,
and with the uniform log-Sobolev inequality for μ_0^φ this gives
_μ(F)
≤2/γ D_μ(√(F)), 1/γ≤1/γ_0 e^-λ_β,
λ_t = ∫_0^t λ̇_s ds
.
This estimate is very similar to the estimate (<ref>) obtained using the multiscale Bakry–Émery criterion,
but not exactly identical. Both estimates can be applied up to the critical point in a very general setting for ferromagnetic Ising models and yield a
polynomial bound on the log-Sobolev constant under the mean-field bound which holds
on Λ⊂^d in d≥ 5, see Theorem <ref> below.
The strategies of the two proofs have different advantages.
We summarise the results as follows.
The log-Sobolev constant of the Ising model at inverse temperature β is bounded by
(<ref>) or (<ref>).
As discussed previously, we formulated the results for the log-Sobolev inequality with respect to the standard Dirichlet form.
This is a canonical choice (as already explained in Section <ref>),
but the argument can be adapted easily to other choices of jump rates with the conclusion of a possibly
modified log-Sobolev inequality, see Section <ref>.
As pointed out in <cit.>, other choices are of interest when the jump rates are unbounded.
§.§.§ Hessian of the renormalised potential and covariance
In both strategies, using the multiscale Bakry–Émery criterion or the entropic stability estimate,
the estimate of the log-Sobolev constant reduces to estimating the constants λ̇_t=-α_t bounding the
Hessian of the renormalised potential from below.
From Lemma <ref>,
recall that these estimates are equivalent to bounds on the covariance of the fluctuation measure, a point of view that is particularly useful for the Ising model.
Indeed,
the Hessian of the renormalised potential can be represented as follows.
Show that
∇²V_t(φ) = C_t^-1 - C_t^-1Σ_t(C_t^-1φ+h)C_t^-1,
where Σ_t(g)= (_μ_t,g(σ_x,σ_y))_x,y∈Λ
is the covariance matrix of the Ising model μ_t,g at inverse temperature t and site-dependent magnetic field g
(so that (<ref>) reads μ_t^φ = μ_t,C_t^-1φ + h).
For β<1, one can obtain the following convexity directly from this representation,
which allows to apply the standard Bakry–Émery criterion to derive (<ref>) and conclude the proof of Theorem <ref>.
Let 1 ≥α > β, and set C_t = (t A + (α-t)𝕀)^-1.
Then V_t is convex for all t∈ [0,α].
Since μ_0,g is a product measure and |σ_x|≤ 1,
Σ_0(g) = diag( Var_μ_0,g(σ_x) )_x∈Λ≤𝕀 .
Using that C_0=α^-1𝕀, we deduce from (<ref>) that
∇²V_0(φ) = α𝕀 - α^2 Σ_0(αφ+h) ≥ (α-α^2)𝕀.
Thus if α≤ 1, it follows that V_0 is convex. By Proposition <ref>, V_t is convex for all t>0.
For general β >0, the semi-convexity criterion of the Hessian can be equivalently formulated as covariance estimate uniformly in an external field.
From C_t= (tA+(α-t)𝕀)^-1, one has
Ċ_t = (𝕀 -A)C_t^2, C̈_t = 2(𝕀-A)Ċ_tC_t and thus
the multiscale Bakry–Émery criterion (<ref>) and the entropic stability criterion (<ref>) hold with
-λ̇_t=α_t = χ̅_t
where χ̅_t is a uniform upper bound on the spectral radius of the covariance matrix of an Ising model uniformly in an external field:
χ̅_t = sup_g∈ℝ^Λχ_t(g) ,
χ_t(g) = ‖Σ_t(g)‖ .
Now a significant simplification occurs for Ising models with ferromagnetic interaction,
meaning A_x,y≤ 0 for x≠ y.
This includes the case of the lattice Laplacian Δσ_x = ∑_y∼ x[σ_y-σ_x].
For ferromagnetic interactions, it turns out that the spectral radius is maximal at 0 field:
χ̅_t
=
χ_t(0)
.
This is a consequence of
the FKG inequality (which implies that the covariance matrix has pointwise nonnegative coefficients),
the Perron–Frobenious theorem (which therefore implies that the largest eigenvector has nonnegative entries),
and the following remarkable correlation inequality due to Ding–Song–Sun <cit.> which implies that the covariance between any two spins is maximised at 0 field.
Let μ=μ_β,h be the Ising measure (<ref>) with ferromagnetic interaction A,
and external field h∈ [-∞,∞]^Λ,
with values ±∞ corresponding to boundary conditions.
Then:
_μ_β,h(σ_x,σ_y)
≤_μ_β,0(σ_x,σ_y)
=
_μ_β,0[σ_xσ_y],
(x,y∈Λ)
.
In particular, if the interaction A is ferromagnetic
and (for simplicity) in addition translation invariant, i.e., A_x,y=A_x-y, then
χ̅_t = χ_t =
χ_t(0)
=
∑_x∈Λ_μ_t,0[σ_xσ_y]
is the susceptibility of the Ising model.
It characterises the phase transition of the ferromagnetic Ising model, in the sense that, e.g., for Λ⊂^d (d≥ 2), the critical value β_c of β satisfies:
β_c := sup{ β>0 : sup_Λ↑ℤ^d∑_x∈Λ_μ_β,0[σ_0σ_x]<∞ } .
By combining the Ding–Song–Sun correlation bound with the multiscale Bakry–Émery criterion, and in view of the above characterisation of β_c,
the following log-Sobolev inequality up to the critical point for ferromagnetic Ising models on general geometries was proven in <cit.>.
The log-Sobolev constant γ_β,h of the Ising measure (<ref>) with ferromagnetic interaction satisfies:
1/γ_β,h≤1/2+∫_0^β e^2∫_0^tχ_s ds dt
,
with χ_β = sup_x ∑_y _μ_β,0[σ_xσ_y] or more generally the largest eigenvalue of (_μ_β,0[σ_xσ_y])_x,y.
The above result implies that γ_β,h is bounded below uniformly in the size of the lattice as long as β<β_c.
For special geometries, this could already be argued from <cit.> (which establishes the strong spatial mixing property).
More interesting is the fact that (<ref>) also gives an explicit bound on the log-Sobolev constant.
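To make the bound concrete, the following Python sketch evaluates the right-hand side of the theorem for a tiny ferromagnetic Ising model by exact enumeration; the cycle geometry and the value of β are arbitrary test choices, and the system is far too small to exhibit critical behaviour.

# Evaluate 1/gamma <= 1/2 + int_0^beta exp(2 int_0^t chi_s ds) dt for a small
# ferromagnetic Ising cycle, with chi_s computed by exact enumeration.
import itertools
import numpy as np

N = 8
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = -1.0    # ferromagnetic: A_xy <= 0 off-diagonal

configs = np.array(list(itertools.product([-1.0, 1.0], repeat=N)))

def chi(beta):
    # chi_beta = sup_x sum_y E_{mu_{beta,0}}[sigma_x sigma_y]
    logw = -0.5 * np.einsum('ci,ij,cj->c', configs, beta * A, configs)
    w = np.exp(logw - logw.max()); w /= w.sum()
    corr = np.einsum('c,ci,cj->ij', w, configs, configs)
    return corr.sum(axis=1).max()

beta = 0.3
ts = np.linspace(0.0, beta, 61)
chis = np.array([chi(t) for t in ts])
inner = np.concatenate([[0.0], np.cumsum(0.5 * (chis[1:] + chis[:-1]) * np.diff(ts))])
integrand = np.exp(2.0 * inner)
bound = 0.5 + np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts))
print(bound)        # an upper bound on 1/gamma_{beta,h}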
For Ising models with mean-field interactions,
the bound implies that γ_β,h is of order β_c-β,
which is the correct scaling.
More significantly, the bound (<ref>)
implies a polynomial bound on 1/γ_β,h on Λ⊂^d in dimension five and higher
(and more generally under the so-called mean-field bound on the susceptibility, i.e., when the susceptibility diverges linearly),
where polynomial means as a function of β_c-β when β<β_c,
and of the lattice size when β=β_c.
The degree of this polynomial is not expected to be sharp unless D=1.
If χ_β≤ D/(β_c-β) then γ_β,h is bounded polynomially in β_c-β,
and if χ_β≤ D/(β_c-β+L^-α) then γ_β_c,h is polynomial in L.
These assumptions hold for the ferromagnetic nearest-neighbour Ising model on Λ⊂^d when d≥ 5 <cit.>.
Similarly, one can recover the main result of <cit.> from (<ref>), which improves the high temperature condition
of (<ref>) in the case of ferromagnetic Ising models on graphs with maximal degree d (for the mixing time and spectral gap).
For the ferromagnetic Ising model, A is the negative adjacency matrix of the graph
which we now (for comparison) do not normalise to have spectrum contained in [0,1].
The condition of (<ref>) cannot be improved for general non-ferromagnetic interactions with bounded spectral radius,
but for ferromagnetic models the condition established in <cit.> is
β < artanh(1/(d-1)), whereas the condition of (<ref>) translates in this case to β< 1/(2d).
The value β_u=artanh(1/(d-1)) is the uniqueness threshold for the ferromagnetic Ising model on the infinite d-regular tree
and also the critical point for ferromagnetic Ising models on random d-regular graphs <cit.>.
This application is summarised in the next example.
Let A be the negative adjacency matrix of a finite graph of maximal degree d (not normalised to have spectrum in [0,1]).
If (d-1) tanhβ < 1 then γ_β,h is uniformly bounded below (and diverges polynomially in β↑β_u= artanh(1/(d-1)) and in the size of the graph if β=β_u).
From <cit.>, we deduce that for any sites x,y in the graph:
_μ_β,0[σ_x σ_y]≤∑_w ∈𝒮(y)( tanhβ)^dist(x,w) ,
where the bound is obtained by comparing the graph with a d-regular tree as in <cit.>
and 𝒮(y) stands for the set of sites associated with y in this tree.
Summing over the sites y in the graph boils down to sum over all the sites in ∪_y 𝒮(y), i.e., in the d-regular tree.
Thus we get
χ_β = sup_x ∑_y _μ_β,0[σ_xσ_y]
≤ 1+ C_d ∑_ℓ≥ 1( (d-1) tanhβ)^ℓ≤ C_d/(1 - (d-1) tanhβ),
i.e., one finds a divergence of the susceptibility as in Example <ref> as β↑β_u=artanh(1/(d-1)).
Moreover, when β approaches β_u,
one can use that for a graph with N sites, trivially, χ_β≤ N,
so that
χ_β≤ C_d/(1 - (d-1) tanhβ + N^-1) .
By Theorem <ref>, we deduce a polynomial lower bound for the log-Sobolev constant in the size of the graph at β=β_u=artanh(1/(d-1)).
§.§.§ Choice of Dirichlet form
As in the original references <cit.>,
the above discussion is formulated in terms of the standard Dirichlet form (<ref>).
There exist general comparison arguments between Dirichlet forms that ensure that one can transfer the log-Sobolev inequality obtained for a certain dynamics to another one,
see, e.g., Chapter 4 in <cit.>.
Namely, if c_1,c_2 are families of jump rates reversible with respect to the same measure ν on {-1,1}^Λ
and γ_1,γ_2 denote the associated log-Sobolev constants,
then, for K>0:
∀σ,σ'∈{-1,1}^Λ,
K^-1 c_2(σ,σ')≤ c_1(σ,σ')≤ K c_2(σ,σ')
⇒ K^-1γ_2 ≤γ_1 ≤ Kγ_2 .
One can for instance check that the heat-bath and canonical jump rates associated with an Ising measure with interaction β A and external field h∈ℝ^Λ satisfy such a bound,
with a constant K that depends only on ‖h‖_∞ and βmax_i∑_j|A_i,j|.
The heat-bath Dirichlet form is <cit.>:
D_μ^ HB(F)
=
1/2∑_σ∈{± 1}^Λ∑_x∈ΛΨ(μ(σ),μ(σ^x)) (F(σ)-F(σ^x))^2, Ψ(a,b)=ab/(a+b).
There are however situations in which the comparison argument (<ref>) is not applicable.
This observation was made in <cit.> in the situation of the SK model treated in <cit.>,
which is an Ising model with random couplings which can take arbitrarily large values.
This makes the constant K in (<ref>) arbitrarily small.
In such cases,
if one is interested in a dynamics different from the one induced by the canonical jump rates,
it is desirable to directly obtain the log-Sobolev inequality (or modified log-Sobolev inequality) for this dynamics.
This was done for the heat-bath dynamics, for the modified log-Sobolev inequality, in
<cit.>.
One can however check that the arguments of <cit.> sketched above
are not specific to the standard Dirichlet form.
It is in fact straightforward to apply them to other dynamics as sketched below in the heat-bath case, provided:
* the associated single-spin LSI constant (or modified LSI constant) is uniform in the field;
* the associated Dirichlet form is a concave function of the measure (this can be generalised).
Let us show how this works in the heat-bath case (<ref>), for which both points are satisfied in view of Proposition <ref> and the concavity of Ψ(a,b)=ab/(a+b).
Instead of using the single-spin log-Sobolev inequality for the standard Dirichlet form of Proposition <ref>,
one can instead apply the single-spin modified LSI with the heat-bath Dirichlet form of Proposition <ref> since it also holds uniformly in the external field.
For instance, for _ν_0,β[_μ_0^φ(F)] the bound becomes:
_ν_0,β_μ_0(F(σ)) ≤1/2∑_σ∑_x _ν_0,βΨ(μ_0(σ),μ_0(σ^x)) (F(σ)-F(σ^x))(log F(σ)-log F(σ^x))
≤1/2∑_σ∑_x Ψ(μ(σ),μ(σ^x)) (F(σ)-F(σ^x))(log F(σ)-log F(σ^x))
= D_μ^ HB(F,log F),
where the first inequality is the single-spin modified log-Sobolev inequality from Proposition <ref>,
and the second inequality is Jensen's inequality, using
μ(σ) = _ν_0,β[μ_0(σ)] and that Ψ is concave.
The bound for the other term in (<ref>) works analogously and gives 4D_μ^ HB(√(F)) ≤ D_μ^ HB(F,log F)
instead of D_μ(√(F)) with the standard Dirichlet form.
The conclusion is that the modified log-Sobolev constant for the heat-bath Dirichlet form satisfies exactly the same bound
(up to an overall factor 4 from different normalisations):
_μ(F) ≤1/2γ_0^ HB D_μ^ HB(F,log F),
1/γ_0^ HB≤ 2 + 4 ∫_0^β e^-2λ_t dt.
This strategy can be further generalised to Dirichlet forms which are not concave in the measure,
see <cit.>.
§.§ Applications to conservative dynamics
The criterion of Theorem <ref> in principle also applies to dynamics with a conservation law.
This is for instance the case for spin models with constrained magnetisation:
μ_N,m(dφ)
∝
e^-V_0(φ) dφ|_X_N,m
,
with X_N,m the hyperplane of spins with magnetisation m∈ℝ:
X_N,m := {φ∈ℝ^N : ∑_iφ_i = Nm} .
For instance, an infinite temperature example is given by V_0=∑_i=1^NV(φ_i) and V:→ a C^2 potential,
assumed to be strictly convex outside of a segment for definiteness.
The associated conservative dynamics reads:
dφ_t
=
-∇ V_0(φ_t) dt + √(2) dB_t
,
where (B_t) is a standard Brownian motion on X_N,0.
See the forthcoming works <cit.> and <cit.> for results in continuous and discrete spin settings,
and <cit.> for such results for the continuum sine-Gordon model.
§ CLASSICAL RENORMALISED POTENTIAL AND HAMILTON–JACOBI EQUATION
§.§ Hamilton–Jacobi equation
In classical field theory, fields are typically minimisers of an action functional S:
φ_0 ∈argmin_φ S(φ) , S(φ) = 1/2 |φ|^2 + V(φ).
These can be related to the Hamilton–Jacobi equation
∂_t V_t = - 1/2 |∇ V_t|^2, V_0(φ) = V(φ).
Indeed, its unique viscosity solution is given by
the Hopf–Lax formula <cit.>:
V_t(φ) = min_ζ{ V_0(ζ)+t/2 |(φ-ζ)/t|^2 }.
In particular, the minimum of the action S is given by V_1(0).
Note the analogy with the renormalised potential from (<ref>).
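The Hopf–Lax formula can be evaluated directly on a grid. The following Python sketch, with an arbitrary double-well test potential, checks that V_1(0) recovers min_φ S(φ) and illustrates that the minimiser need not be unique, which is the shock phenomenon discussed below.

# Grid evaluation of the Hopf-Lax formula in one variable.  Illustration only.
import numpy as np

V0 = lambda x: (x**2 - 1.0)**2            # arbitrary non-convex test potential
zeta = np.linspace(-4.0, 4.0, 4001)

def V(t, phi):
    return np.min(V0(zeta) + (phi - zeta)**2 / (2.0 * t))

S = lambda x: 0.5 * x**2 + V0(x)          # classical action S = |phi|^2/2 + V
print(abs(V(1.0, 0.0) - S(zeta).min()) < 1e-12)   # V_1(0) = min S

# at t = 1, phi = 0 the minimiser is attained at two symmetric points,
# so the Hopf-Lax minimiser is not unique (a shock of the Hamilton-Jacobi equation)
m = S(zeta).min()
mins = zeta[np.isclose(S(zeta), m, atol=1e-4)]
print(mins.min(), mins.max())             # approximately -0.87 and +0.87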
The constructions of Sections <ref> and <ref>
have classical analogues in which the role of the Polchinski equation is replaced by that
of the Hamilton–Jacobi equation, as we will see in this appendix.
Unlike solutions of the Polchinski equation, which are smooth at least for Ċ_t
nondegenerate and a finite number of variables (as in our discussion), the Hamilton–Jacobi equation can develop shocks and the appropriate weak solutions are not necessarily smooth.
However, we can assume that V is locally Lipschitz continuous and that (<ref>) holds almost everywhere.
We refer to <cit.> for an introduction.
In statistical physics, Hamilton–Jacobi equations are well-known to arise in mean-field limits
of statistical mechanical models, see <cit.> and in particular <cit.> and references for recent
work in the context of disordered models, as well as Section <ref> below.
Shocks of the Hamilton–Jacobi equations are related to phase transitions.
Note that the Polchinski equation in a finite number of variables
describes finite systems (rather than limits) and thus has smooth solutions.
It therefore provides a complete description of the models (no information is lost)
while the Hamilton–Jacobi equations describing mean-field systems
are effective equations describing macroscopic information.
In the thermodynamic limit, where the number of variables tends to infinity, shocks can also form in the Polchinski equation and then likewise correspond to phase transitions
in the statistical mechanical models. We illustrate the above in the simple example of the mean-field Ising model in Section <ref> below.
We now discuss the `classical' analogues of the constructions of Sections <ref>–<ref> for the Hamilton–Jacobi equation.
Our goal is to emphasise the analogy and to provide a different intuition also for the stochastic
constructions, and we will therefore impose convenient regularity assumptions in all statements.
We begin with a classical analogue of Corollary <ref>. Define the
classical renormalised action by
S_t(φ)=|φ|^2/(2(1-t))+V_t(φ).
Minimisers of S_t will take the role of the renormalised measure introduced in (<ref>).
Assume that (φ_t)_t∈ [0,1] is differentiable in t∈ (0,1),
that φ_t→ 0 as t→ 1,
that V is smooth along (φ_t)_t∈ (0,1),
and that φ_t is an isolated local minimum of S_t for each t∈ [0,1).
Then for t∈ [0,1),
φ_t = - ∫_t^1 ∇ V_u(φ_u) du.
Different from the situation in Corollary <ref>,
the choice of φ is not necessarily unique because S may have multiple minimisers and V need not be globally smooth.
Indeed, the equation (<ref>) for φ is nothing but the equation for
the characteristics of Hamilton–Jacobi equation (<ref>), see <cit.>.
These are the curves (φ_t) such that 1/2 |U_t(φ_t)|^2 is constant in t,
where U_t = ∇ V_t.
Since by the Hamilton–Jacobi equation (<ref>),
∂_t U_t = -(∇ U_t, U_t) ,
provided U is smooth at (t,φ), then
d/dt U_t(φ_t) = -(∇ U_t(φ_t), U_t(φ_t)) + (∇ U_t(φ_t),φ̇_t)
= -(U_t(φ_t)-φ̇_t,∇ U_t(φ_t)) = 0.
Since φ_t → 0 as t→ 1, the claim is equivalent to proving that
φ̇_t = ∇ V_t(φ_t).
Since φ_t minimises S_t, it satisfies the Euler–Lagrange equation
φ_t/(1-t) + ∇ V_t(φ_t) = ∇ S_t(φ_t) = 0.
Differentiating this equation in t,
φ̇_t/(1-t) + φ_t/(1-t)^2 + (∂_t ∇ V_t)(φ_t) + ∇²V_t(φ_t)φ̇_t= 0.
Using the Euler–Lagrange equation φ_t/(1-t)=-∇ V_t(φ_t)
again and ∂_t ∇ V_t = - ∇²V_t ∇ V_t
(which follows from (<ref>)), we obtain
(φ̇_t - ∇ V_t(φ_t))/(1-t) - ∇²V_t(φ_t)∇ V_t(φ_t) + ∇²V_t(φ_t)φ̇_t= 0.
Therefore
∇²S_t(φ_t)[-∇ V_t(φ_t) + φ̇_t]
=(1/(1-t) 𝕀 + ∇²V_t(φ_t))[-∇ V_t(φ_t) + φ̇_t]=0.
Since φ_t is an isolated local minimum of S_t, the Hessian on the left-hand side
is strictly positive definite so that necessarily ∇ V_t(φ_t)-φ̇_t=0.
The classical version of Föllmer's problem is the following control problem.
We continue to use the convention that the ODE is backwards in time so that the
Hamilton–Jacobi equation has initial rather than terminal condition.
Suppose that U = (U_u(·))_u∈ [0,1] are given smooth functions and that φ^U solves
the classical analogue of (<ref>): φ^U_t → 0 as t→ 1 and
φ_t^U
= - ∫_t^1 U_u(φ_u^U) du,
with φ_0^U an absolute minimum of S (which we recall is the classical analogue of demanding that φ_0^U
is a random sample from a desired target measure ν_0 ∝ e^- β S).
The classical analogues of Theorem <ref> and Proposition <ref> are as follows.
The optimal drift is given by U= ∇ V in the following sense.
For any smooth U and associated trajectory φ^U that satisfies
(<ref>),
1/2 |φ^U_0|^2 ≤1/2∫_0^1 |U_u(φ_u^U)|^2 du,
with equality if U = ∇ V.
Comparing with Theorem <ref>,
the classical analogue of (ν_0|γ_0) is simply 1/2 |φ_0|^2
and the classical analogue of the path space entropy ( Q| P) is the cost 1/2∫_0^1 |U_u(φ_u^U)|^2 du
= 1/2∫_0^1 |φ̇_u^U|^2 du.
The proof is analogous to that of Theorem <ref>.
Indeed, by the Hopf–Lax formula,
V_1(0) - V_0(φ^U_0) ≤ V_0(φ^U_0) + 1/2 |φ^U_0|^2 - V_0(φ^U_0) = 1/2 |φ_0^U|^2,
with equality if and only if φ_0^U is an absolute minimum of S.
On the other hand, assuming V is locally Lipschitz continuous (so that the fundamental theorem of calculus holds), one has
V_1 (0) - V_0 ( φ_0^U)
= ∫_0^1 dt ∂/∂ t V_t ( φ_t^U ),
with φ^U evolving according to (<ref>), and therefore (whenever the derivative exists classically)
∂/∂ t V_t ( φ_t^U)
= (∂_t V_t) ( φ_t^U) + ( ∇ V_t ( φ_t^U) , U_t ( φ_t^U) )
= - 1/2 |∇ V_t( φ_t^U)|^2 + ( ∇ V_t ( φ_t^U) , U_t ( φ_t^U ) )
= - 1/2 |∇ V_t( φ_t^U) - U_t ( φ_t^U)|^2 + 1/2 |U_t ( φ_t^U)|^2 ,
where we used the Hamilton–Jacobi equation on the second line. In particular, if φ_0^U is an absolute minimum of S,
1/2∫_0^1 |U_t(φ_t^U)|^2 dt
=
1/2 |φ_0^U|^2
+ 1/2∫_0^1 | ∇ V_t(φ_t^U) - U_t(φ_t^U) |^2 dt
≥1/2 |φ_0^U|^2,
and the gradient of the renormalised potential V_t provides the optimal drift.
As in (<ref>),
instead of (<ref>),
one can also consider the equations for the characteristics in a reduced time interval [0,t] with t ≤ 1 and
φ_t^U = φ,
φ_s^U
= φ - ∫_s^t U_u(φ_u^U) du , (s ≤ t).
Under certain regularity conditions on V_0, and for t ≤ 1 then
V_t(φ) = inf_U{ V_0 ( φ- ∫_0^t U_s(φ_s^U) ds )
+ 1/2∫_0^t |U_s(φ_s^U)|^2 ds }.
This is discussed in <cit.>. Namely, with L(t,x,v) = 1/2 v^2 and ψ = V_0
(and opposite time direction), the statement follows from <cit.>.
This minimiser does not have to be unique if the Hamilton–Jacobi equation has a shock, see <cit.>.
The Hopf–Lax formula implies that
V_t(φ)
= min_ζ{ t/2 |(φ-ζ)/t|^2+ V_0(ζ) }≤ 1/(2t) |φ-φ_0^U|^2 + V_0(φ_0^U)
where
given any drift U_s, we let φ_0^U be the final condition of (<ref>).
The first term on the right-hand side of (<ref>) is bounded as in (<ref>):
1/(2t) |φ-φ_0^U|^2 ≤1/2∫_0^t |U_s(φ_s^U)|^2 ds.
This shows
V_t(φ)
≤1/2∫_0^t |U_s(φ_s^U)|^2 ds
+ V_0 ( φ-∫_0^t U_s(φ_s^U) ds )
.
On the other hand,
equality if U=∇ V follows from the Hamilton–Jacobi equation
as in Proposition <ref>:
if φ is a solution to (<ref>),
d/ds[ V_s(φ_s)
+ 1/2∫_s^t |∇ V_u(φ_u)|^2 du ]
= ∂_s V_s(φ_s) + |∇ V_s(φ_s)|^2 - 1/2 |∇ V_s(φ_s)|^2 = 0,
i.e.,
V_t(φ) - V_0(φ_0) = 1/2∫_0^t |∇ V_u(φ_u)|^2 du.
This solution need not be unique.
§.§ Example: Mean-field Ising model
We conclude this section with the example of the mean-field Ising model,
which can be described both in terms of a Polchinski equation and, in the limit, by a Hamilton–Jacobi equation.
The mean-field Ising model is given by the measure
_ν[G] = 1/Z_N(β, h )∑_σ∈{± 1}^N e^-β/4 N∑_i,j(σ_i - σ_j)^2 + (σ, h) G(σ),
where the vector h∈^N is a possibly site-dependent external field.
It is convenient to rewrite it as
_ν[G] = 1/Z_N(β, h )∑_σ∈{± 1}^N e^-β/2(σ,Pσ) + (σ, h) G(σ),
where P= 𝕀-Q and Q is the orthogonal projection onto constants:
Q f = ( 1/N∑_i f_i ) 1 with 1 = (1,…, 1) ∈^N.
For h = h 1 with h ∈, the free energy F(β,h) is the limit N→∞ of F_N(β,h) where
F_N(β,h) = -1/Nlog Z_N(β, h 1).
(Physically more correctly, the right-hand side should have been divided by β, but it is here more convenient to omit this.)
It is well-known and easy to check that
∂ F_N/∂β = 1/(2N) ∂^2 F_N/∂ h^2 -1/2 (∂ F_N/∂ h)^2, F_N(0,h) = -logcosh(h),
and thus that the limiting free energy F is the viscosity solution of the Hamilton–Jacobi equation
∂ F/∂β = -1/2 (∂ F/∂ h)^2, F(0,h) = -logcosh(h)
,
see in particular <cit.>.
Equivalently, F is given by the Hopf–Lax formula which coincides with the well-known variational formula
for the free energy alternatively obtained from Laplace's Principle (see for example <cit.> or <cit.>):
F(β,h)
= min_g∈ℝ{ 1/(2β) (g-h)^2 - logcosh(g) }
= min_φ∈ℝ{ β/2 φ^2 - logcosh(βφ +h) }.
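As a small numerical illustration, the following Python sketch evaluates the variational formula on a grid, checks that its two forms agree, and verifies the Hamilton–Jacobi equation ∂_βF = -1/2(∂_hF)^2 by finite differences away from the critical point; grid sizes and tolerances are arbitrary.

# Grid check of the mean-field free energy formula and its Hamilton-Jacobi equation.
import numpy as np

def logcosh(x):
    return np.logaddexp(x, -x) - np.log(2.0)

phi = np.linspace(-5.0, 5.0, 20001)
g = np.linspace(-10.0, 10.0, 40001)

def F(beta, h):                      # min_phi { beta phi^2/2 - log cosh(beta phi + h) }
    return np.min(0.5 * beta * phi**2 - logcosh(beta * phi + h))

def F_alt(beta, h):                  # min_g { (g - h)^2/(2 beta) - log cosh(g) }
    return np.min((g - h)**2 / (2.0 * beta) - logcosh(g))

beta, h, eps = 0.6, 0.4, 1e-3        # away from the shock (which sits at h = 0, beta > 1)
print(abs(F(beta, h) - F_alt(beta, h)) < 1e-5)
dFdb = (F(beta + eps, h) - F(beta - eps, h)) / (2 * eps)
dFdh = (F(beta, h + eps) - F(beta, h - eps)) / (2 * eps)
print(abs(dFdb + 0.5 * dFdh**2) < 1e-3)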
This Hamilton–Jacobi equation for the free energy can be related, as follows, to the Polchinski equation studied earlier in Section <ref>.
Recall that 𝕀 =P+Q with Q the orthogonal projection onto constant vectors in ^N and PQ=0.
For α>β, let
C_t = (tP + (α-t))^-1, Ċ_t
= (α-t)^-2Q,
where we used PQ=0 to simplify Ċ_t. For φ∈^N, the renormalised potential
(<ref>) then is
V_t(φ)
= -log∑_σ∈{± 1} e^-1/2 (σ-φ,(tP+(α-t))(σ-φ)) + (constant)
and satisfies the Polchinski equation (for appropriate otherwise irrelevant choice of the constants):
∂_t V_t = 1/2Δ_Ċ_t V_t - 1/2 |∇ V_t|_Ċ_t^2
= (α-t)^-2[ 1/2Δ_QV_t - 1/2 |∇ V_t|_Q^2 ] .
Note that the right-hand side only depends on derivatives of V_t in constant directions (i.e., in the image of Q).
By (<ref>), the covariance C_β-C_t of the renormalised measure ν_t = ν_t,β defined
in (<ref>) is also proportional to Q and therefore supported on constant fields.
Thus one can restrict the renormalised potential V_t to constant fields
and the restriction satisfies a closed equation.
Explicitly, for φ̃∈ℝ, define Ṽ_t(φ̃) = 1/N V_t(φ̃ 1).
In other words, V_t(φ)= N Ṽ_t(Qφ)= NṼ_t(1/N∑_i φ_i) holds for constant fields φ=Qφ and
∂_t Ṽ_t(φ̃)
= 1/N ∂_t V_t(φ̃ 1)
= (α-t)^-2 1/N[ 1/2Δ_Q V_t - 1/2 |∇ V_t|_Q^2 ](φ̃ 1)
= (α-t)^-2[ 1/2 ∂^2 V_t/∂φ_1^2 - 1/2 (∂ V_t/∂φ_1)^2 ](φ̃ 1)
= (α-t)^-2[ 1/(2N) Ṽ_t'' - 1/2 (Ṽ'_t)^2 ](φ̃).
Thus the reduced (one-variable) Polchinski equation that Ṽ_t satisfies has a prefactor 1/N in front of the Laplacian term,
and its limit V_t as N→∞ is the unique viscosity solution of the following (one-variable) Hamilton–Jacobi equation
(now dropping tilde from the variable φ):
∂_t V_t(φ) = - 1/2 (α-t)^-2 (V_t'(φ))^2, V_0(φ) = α/2 φ^2 - logcosh(αφ), (φ∈ℝ).
To conclude this section, we relate the Hamilton–Jacobi equation (<ref>) for the effective potential to the one satisfied by the free energy (<ref>).
We follow the argument in Example <ref> and first note that for a constant field φ 1 with φ∈ℝ, the renormalised potential can be written as (see (<ref>))
V_t(φ 1)
= N( (α-t)/2 φ^2 + F_N(t,(α-t)φ) ) .
Thus F(t,h) = - 1/2 (α-t)^-1 h^2 + V_t((α-t)^-1h)
and the Hamilton–Jacobi equation (<ref>) for F follows from the one of V exactly as in Example <ref>.
Indeed, in the setting of that example with the relation (<ref>) between V_t and F_t one has in general that
∂_t V_t = - 1/2 |∇ V_t|_Ċ_t^2
⇔ ∂_t F_t = -1/2 |∇ F_t|_Σ̇_t^2,
and in the present example the choice of C_t corresponds to Σ̇_t = Q.
§ ACKNOWLEDGEMENTS
This work was supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme
(grant agreement No. 851682 SPINRG).
We thank D. Chafai, R. Eldan, N. Gozlan, J. Lehec, Y. Shenfeld, and H.-T. Yau for various discussions related to the material of this introduction,
encouragement, and for pointing out several additional references.
We thank the organisers of the following summer schools at which some of the material was presented:
the One World Probability Summer School on “PDE and Randomness” in Bath/Zoom organised by Hendrik Weber and Andris Gerasimovics;
the “Summer School on SPDE and Related Fields” in Beijing/Zoom organised by Hao Shen, Scott Smith, Rongchan Zhu, and Xiangchan Zhu;
and the Summer School on “PDE and Randomness” at the Max Planck Institute for Mathematics in the Sciences organised by
Rishabh Gvalani, Francesco Mattesini, Felix Otto, and Markus Tempelmayr. In particular, we also thank Jiwoon Park for leading the exercise classes
at the last summer school.
http://arxiv.org/abs/2307.03942v1 | 20230708093617 | Ariadne's Thread: Using Text Prompts to Improve Segmentation of Infected Areas from Chest X-ray images | ["Yi Zhong", "Mengqiu Xu", "Kongming Liang", "Kaixin Chen", "Ming Wu"] | eess.IV | ["eess.IV", "cs.CV"] |
Beijing University of Posts and Telecommunications, China
{xiliang2017, xumengqiu, liangkongming, chenkaixin, wuming}@bupt.edu.cn
Ariadne's Thread[Ariadne's thread, the name comes from ancient Greek myth, tells of Theseus walking out of the labyrinth with the help of Ariadne's golden thread.]
: Using Text Prompts to Improve Segmentation of Infected Areas from Chest X-ray images
Yi Zhong, Mengqiu Xu, Kongming Liang, Kaixin Chen, Ming Wu
August 12, 2023
Segmentation of the infected areas of the lung is essential for quantifying the severity of lung diseases such as pulmonary infections. Existing medical image segmentation methods are almost exclusively uni-modal, image-only methods.
However, these image-only methods tend to produce inaccurate results unless trained with large amounts of annotated data. To overcome this challenge, we propose a language-driven segmentation method that uses text prompts to improve the segmentation result. Experiments on the QaTa-COV19 dataset indicate that our method improves the Dice score by at least 6.09% compared to the uni-modal methods. Besides, our extended study reveals the flexibility of multi-modal methods in terms of the information granularity of text and demonstrates that multi-modal methods have a significant advantage over image-only methods in terms of the size of training data required.
§ INTRODUCTION
Radiology plays an important role in the diagnosis of some pulmonary infectious diseases, such as the COVID-19 pneumonia outbreak in late 2019<cit.>. With the development of deep learning, deep neural networks are more and more used to process radiological images for assisted diagnosis, such as disease classification, lesion detection and segmentation, etc.
With the fast processing of radiological images by deep neural networks, some diagnoses can be obtained immediately, such as the classification of bacterial or viral pneumonia and the segmentation mask for pulmonary infections, which is important for quantifying the severity of the disease as well as its progression<cit.>. Besides, these diagnoses given by the AI allow doctors to predict risks and prognostics in a "patient-specific" way<cit.>. Radiologists usually take more time to complete lesion annotation than AI, and annotation results can be influenced by individual bias and clinical experience<cit.>. Therefore, it is of importance to design automatic medical image segmentation algorithms to assist clinicians in developing accurate and fast treatment plans.
Most biomedical segmentation methods<cit.> are improvements built on U-Net<cit.>. However, the performance of these image-only methods is constrained by the amount of training data, which is a persistent dilemma in the medical imaging field. Radford et al. proposed CLIP<cit.> in 2021, using 400M image-text pairs for contrastive learning. With the rise of multi-modal learning in recent years, there are also methods<cit.> that focus on vision-language pretraining/processing and apply it to downstream tasks. Li et al. proposed a language-driven medical image segmentation method, LViT<cit.>, using a hybrid CNN-Transformer structure to fuse text and image features. However, LViT uses an early fusion approach and the information contained in the text is not well exploited. In this paper, we propose a multi-modal segmentation method that uses independent text and image encoders, and we design a GuideDecoder to fuse the features of both modalities at the decoding stage. Our main contributions are summarized as follows:
* We propose a language-driven segmentation method for segmenting infected areas from lung x-ray images. Source code of our method see: https://github.com/Junelin2333/LanGuideMedSeg-MICCAI2023https://github.com/Junelin2333/LanGuideMedSeg-MICCAI2023
* The designed GuideDecoder in our method can adaptively propagate sufficient semantic information of the text prompts into pixel-level visual features, promoting consistency between two modalities.
* We have cleaned the errors contained in the text annotations of QaTa-COV19<cit.> and contacted the authors of LViT to release a new version.
* Our extended study reveals the impact of information granularity in text prompts on the segmentation performance of our method, and demonstrates the significant advantage of multi-modal methods over image-only methods in terms of the size of training data required.
§ METHOD
The overview of our proposed method is shown in Fig. <ref>(a). The model consists of three main components: an Image Encoder, a Text Encoder and GuideDecoders that enable multi-modal information fusion. Our proposed method uses a modular design. Compared to the early-stage fusion in LViT, this modular design is more flexible. For example, when our method is applied to brain MRI images, we can first load pre-trained weights for the separate visual and text encoders from the corresponding data, and then only need to train the GuideDecoders.
§.§.§ Visual Encoder & Text Encoder
The Visual Encoder used in the model is ConvNeXt-Tiny<cit.>. For an input image I∈ℝ^H× W×1, we extract multiple visual features from the four stages of ConvNeXt-Tiny, which are defined as f_4∈ℝ^H/4×W/4× C_1, f_8∈ℝ^H/8×W/8× C_2,
f_16∈ℝ^H/16×W/16× C_3 and
f_32∈ℝ^H/32×W/32× C_4,
Note that C is the feature dimension, H and W are the height and width of the original image.
For an input text prompt T ∈ℝ^L, We adopt the CXR-BERT<cit.> to extract text features g_t ∈ℝ^L× C. Note that C is the feature dimension, L is the length of the text prompt.
§.§.§ GuideDecoder
Due to our modular design, visual features and textual features are encoded independently by different encoders. Therefore, the design of the decoder is particularly important, as we can only fuse multi-modal features from different encoders in post stage. The structure of GuideDecoder is shown in Fig. <ref>(b). The GuideDecoder first processes the input textual features and visual features before performing multi-modal interaction.
The input textual features first go through a projection module (i.e. Project in the figure) that aligns the dimensionality of the text token with that of the image token and reduces the number of text tokens. The projection process is shown in Equation 1.
f_t = σ(Conv(T W_T))
where W_T is a learnable matrix, Conv(·) denotes a 1×1 convolution layer, and σ(·) denotes the ReLU activation function. Given an input feature T ∈ℝ^L× D, the output projected features is f_t ∈ℝ^M × C_1, where M is the number of tokens after projection and C_1 is the dimension of the projected features, consistent with the dimension of the image token. For the input visual features I∈ℝ^H× W× C_1, after adding the position encoding we use self-attention to enhance the visual information in them to obtain the evolved visual features. The process is shown in Equation 2.
f_i = I + LN(MHSA(I))
where MHSA(·) denotes Multi-Head Self-Attention layer, LN(·) denotes Layer Normalization, and finally the evolved visual features f_i ∈ℝ^H× W× C_1 with residuals could be obtained.
After that, a multi-head cross-attention layer is adopted to propagate fine-grained semantic information into the evolved image features. To obtain the multi-modal feature f_c ∈ℝ^H× W× C_1, the output is further processed by layer normalization and a residual connection:
f_c = f_i + α (LN(MHCA(f_i,f_t)))
where MHCA(·) denotes multi-head cross-attention and α is a learnable parameter to control the weight of the residual connection.
Then, the multi-modal feature f_c ∈ℝ^(H× W)× C_1 would be reshaped and upsampling to obtain f'_c ∈ℝ^H'× W'× C_1. Finally the f'_c is concatenated with f_s∈ℝ^H'× W'× C_2 on the channel dimension, where f_s is the low-level visual feature obtained from visual encoder via skip connection. The concatenated features are processed through a convolution layer and a ReLU activation function to obtain the final decoded output f_o ∈ℝ^H'× W'× C_2
f'_c = Upsample(Reshape(f_c))
f_o = σ(Conv([f'_c, f'_s]))
where
[·,·] represents the concatenate operation on the channel dimension.
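For concreteness, the following PyTorch sketch implements one GuideDecoder stage following Eqs. (1)-(5). The hidden sizes, number of heads, number of projected text tokens and the use of a precomputed positional encoding are our illustrative assumptions, not the exact released implementation (see the repository linked above).

# Minimal sketch of a GuideDecoder stage (Eqs. (1)-(5)).  Hyper-parameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuideDecoder(nn.Module):
    def __init__(self, c_vis, c_skip, c_text, text_len, num_tokens=16, heads=4):
        super().__init__()
        self.w_t = nn.Linear(c_text, c_vis)                   # W_T in Eq. (1)
        self.token_conv = nn.Conv1d(text_len, num_tokens, 1)  # 1x1 conv reducing L -> M tokens
        self.self_attn = nn.MultiheadAttention(c_vis, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(c_vis, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(c_vis)
        self.norm2 = nn.LayerNorm(c_vis)
        self.alpha = nn.Parameter(torch.zeros(1))             # learnable residual weight in Eq. (3)
        self.out_conv = nn.Conv2d(c_vis + c_skip, c_skip, 3, padding=1)

    def forward(self, vis, skip, text, pos):
        # vis: (B, H*W, C1) visual tokens; text: (B, L, C_text); pos: positional encoding like vis
        # skip: (B, C2, H', W') low-level visual features from the encoder
        f_t = F.relu(self.token_conv(self.w_t(text)))         # Eq. (1): (B, M, C1)
        q = vis + pos
        f_i = vis + self.norm1(self.self_attn(q, q, q)[0])    # Eq. (2)
        f_c = f_i + self.alpha * self.norm2(self.cross_attn(f_i, f_t, f_t)[0])  # Eq. (3)
        B, N, C = f_c.shape
        side = int(N ** 0.5)
        f_c = f_c.transpose(1, 2).reshape(B, C, side, side)
        f_c = F.interpolate(f_c, size=skip.shape[-2:], mode='bilinear', align_corners=False)  # Eq. (4)
        return F.relu(self.out_conv(torch.cat([f_c, skip], dim=1)))                           # Eq. (5)

dec = GuideDecoder(c_vis=384, c_skip=192, c_text=768, text_len=24)
vis = torch.randn(2, 14 * 14, 384); pos = torch.randn(2, 14 * 14, 384)
skip = torch.randn(2, 192, 28, 28); text = torch.randn(2, 24, 768)
print(dec(vis, skip, text, pos).shape)   # torch.Size([2, 192, 28, 28])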
§ EXPERIMENTS
§.§ Dataset
The dataset used to evaluate our method performance is the QaTa-COV19 dataset<cit.>, which is compiled by researchers from Qatar University and Tampere University. It consists of 9258 COVID-19 chest radiographs with pixel-level manual annotations of infected lung areas, of which 7145 are in the training set and 2113 in the test set. However, the original QaTa-COV19 dataset does not contain any matched text annotations.
Li et al. <cit.> have made significant contributions by extending the text annotations of the dataset, and their efforts are worthy of commendation. We revisited the text annotations and found several notable features. Each sentence consists of three parts, containing position information at different levels of granularity. However, these sentences cannot be considered medical reports since they lack descriptions of the disease; we therefore consider them a kind of "text prompt", just as the title of this paper states.
Besides, we found some obvious errors (e.g., misspelled words, grammatical errors and unclear referents) in the extended text annotations. We have fixed these identified errors and contacted the authors of LViT to release a new version of the dataset. The dataset is available at https://github.com/HUANGLIZI/LViT
§.§ Experiment Settings
Following the file names of the subjects in the original training set, we split it into training and validation subsets in a ratio of 80% to 20%. The resulting training set contains 5716 samples, the validation set 1429 samples, and the test set 2113 samples.
All images are cropped to 224×224 and the data is augmented using a random zoom with 10% probability.
We used a number of open-source libraries, including but not limited to PyTorch, MONAI<cit.> and Transformers<cit.>, to implement our method and the baseline approaches, with PyTorch Lightning as the training and inference wrapper. All methods are trained on one NVIDIA Tesla V100 SXM3 32GB GPU. We use the Dice loss plus the cross-entropy loss as the loss function, and train the network with the AdamW optimizer and a batch size of 32. We adopt a cosine annealing learning rate policy with an initial learning rate of 3e-4 and a minimal learning rate of 1e-6.
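A schematic of this optimization setup is sketched below; the placeholder model, epoch count, and helper names are illustrative assumptions, not the actual training script.

import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

def dice_ce_loss(logits, target, eps=1e-6):
    # Dice loss plus (binary) cross-entropy loss on the predicted infection mask.
    ce = torch.nn.functional.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (denom + eps)
    return ce + dice.mean()

model = torch.nn.Conv2d(3, 1, kernel_size=1)    # placeholder for the actual segmentation network
num_epochs = 100                                # illustrative value
optimizer = AdamW(model.parameters(), lr=3e-4)  # initial learning rate 3e-4
scheduler = CosineAnnealingLR(optimizer, T_max=num_epochs, eta_min=1e-6)  # minimal learning rate 1e-6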
We used three metrics to evaluate the segmentation results objectively: accuracy, the Dice coefficient and the Jaccard coefficient.
Both the Dice and Jaccard coefficients measure the overlap between the predicted mask and the ground truth, with the Dice coefficient being more indicative of the segmentation performance on small targets.
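For reference, the two overlap metrics can be computed as in the following sketch (binary masks assumed; the function name is ours):

import numpy as np

def dice_and_jaccard(pred, gt, eps=1e-6):
    # pred, gt: binary masks (arrays of 0/1 with the same shape).
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    jaccard = (inter + eps) / (np.logical_or(pred, gt).sum() + eps)
    return dice, jaccard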
§.§ Comparison Experiments
We compared our method with common mono-modal medical image segmentation methods and with the LViT model previously proposed by Li et al. The quantitative results of the experiment are shown in Table <ref>. UNet++ achieves the best performance among the mono-modal approaches. Compared to UNet++, our method improves accuracy by 1.44%, the Dice score by 6.09% and the Jaccard score by 9.49%. Compared to the previous multi-modal method LViT, our method improves accuracy by 1.28%, the Dice score by 4.86% and the Jaccard coefficient by 7.66%. In general, using text prompts significantly improves segmentation performance.
The results of the qualitative experiment are shown in Fig. <ref>. The image-only mono-modal methods tend to generate some over-segmentation, while the multi-modal approach refers to the specific location of the infected region through text prompts to make the segmentation results more accurate.
§.§ Ablation Study
Our proposed method introduces the semantic information of the text into the decoding process of the image features, and the designed GuideDecoder lets this semantic information guide the generation of the final segmentation mask. We performed an ablation study on the number of GuideDecoders used in the model, and the results are shown in Table <ref>.
As can be seen from Table <ref>, the segmentation performance of the model improves as the number of GuideDecoders increases. These results demonstrate the effectiveness of the GuideDecoder.
§.§ Extended Study
Considering the application of the algorithm in clinical scenarios, we conducted several interesting extension studies based on the QaTa-COV19 dataset with the text annotations. It is worth mentioning that the following extended studies were carried out on our proposed method.
§.§.§ Impact of text prompts at different granularity on segmentation performance.
In Section 3.1 we mention that each sample is extended with a text annotation consisting of three parts containing positional information at different granularity, as shown in Fig. <ref>. We therefore further explored the impact of text prompts at different granularity on the segmentation performance of our method, and the results are shown in Table <ref>.
The results show that the segmentation performance of our proposed method improves with the granularity of the position information contained in the text prompt: the method achieves better segmentation performance when given a text prompt with more detailed position information.
Meanwhile, we observed that the performance of our method is almost identical for the two types of text prompts Stage3 alone and Stage1 + Stage2 + Stage3. This means that the most detailed position information in the text prompt plays the most significant role in improving segmentation performance, but it does not mean that coarser position information does not contribute. Even when the input text prompts contain only the coarsest location information (Stage1 + Stage2 in Table <ref>), our proposed method yielded a 1.43% higher Dice score than the method without text prompts.
§.§.§ Impact of the size of training data on segmentation performance.
As shown in Table <ref>, our proposed method demonstrates highly competitive performance even with a reduced amount of training data. With only a quarter of the training data, our proposed method achieves a 2.69% higher Dice score than UNet++, the best-performing mono-modal model trained on the full dataset. This provides strong evidence for the superiority of multi-modal approaches and for the fact that suitable text prompts can significantly help improve segmentation performance.
We observed that when the training data was reduced to 10%, our method only began to exhibit inferior performance compared to UNet++, which was trained with all available data. Similar experiments could be found in the LViT paper. Therefore, it can be argued that multi-modal approaches require only a small amount of data (less than 15% in the case of our method) to achieve performance equivalent to that of mono-modal methods.
§ CONCLUSION
In this paper, we propose a language-driven method for segmenting infected areas from lung x-ray images. The designed GuideDecoder in our method can adaptively propagate sufficient semantic information of the text prompts into pixel-level visual features, promoting consistency between two modalities. The experimental results on the QaTa-COV19 dataset indicate that the multi-modal segmentation method based on text-image could achieve better performance compared to the image-only segmentation methods. Besides, we have conducted several extended studies on the information granularity of the text prompts and the size of the training data, which reveals the flexibility of multi-modal methods in terms of the information granularity of text and demonstrates that multi-modal methods have a significant advantage over image-only methods in terms of the size of training data required.
§.§.§ Acknowledgements
This work was supported by NSFC under Grant 62076093 and MoE-CMCC "Artifical Intelligence" Project No. MCM20190701.
|
http://arxiv.org/abs/2307.05093v1 | 20230711075758 | Forward Dynamics Estimation from Data-Driven Inverse Dynamics Learning | [
"Alberto Dalla Libera",
"Giulio Giacomuzzo",
"Ruggero Carli",
"Daniel Nikovski",
"Diego Romeres"
] | cs.RO | [
"cs.RO",
"cs.SY",
"eess.SY"
] |
Alberto Dalla Libera, Giulio Giacomuzzo, Ruggero Carli (Department of Information Engineering, University of Padova, via Gradenigo 6b, Padova, Italy; e-mail: [email protected], [email protected], [email protected])
Daniel Nikovski, Diego Romeres (Mitsubishi Electric Research Laboratories, 201 Broadway, Cambridge, MA, United States; e-mail: {nikovski,romeres}@merl.com)
In this paper, we propose to estimate the forward dynamics equations of mechanical systems by learning a model of the inverse dynamics and estimating individual dynamics components from it. We revisit the classical formulation of rigid body dynamics in order to extrapolate the physical dynamical components, such as inertial and gravitational components, from an inverse dynamics model. After estimating the dynamical components, the forward dynamics can be computed in closed form as a function of the learned inverse dynamics. We tested the proposed method with several machine learning models based on Gaussian Process Regression and compared them with the standard approach of learning the forward dynamics directly. Results on two simulated robotic manipulators, a PANDA Franka Emika and a UR10, show the effectiveness of the proposed method in learning the forward dynamics, both in terms of accuracy as well as in opening the possibility of using more structured models.
Learning for control; Nonparametric methods; Machine learning;
§ INTRODUCTION
Methods for effectively learning the forward dynamics of mechanical systems, such as various robotic platforms, have been investigated for a long time. The forward dynamics describe the dynamical evolution of a system in response to applied forces, and are usually used for simulation and control purposes <cit.>.
Knowledge of the forward dynamics is the basis of any model-based control algorithm, for example in model-based reinforcement learning (MBRL) <cit.>, where it is one element of a Markov decision process (MDP) formulation, generally called the transition function of the MDP, <cit.>.
The forward dynamics express the relationship between joint positions, joint velocities, applied forces, and joint accelerations. In RL terms, it represents the transition from the current state to the next state, given the current applied action. A model of the forward dynamics is often given by first-principles equations of motion, the so called rigid body dynamics (RBD). Physical models obtained from first principles are, however, often limited in their ability to describe some real physical phenomena, given our inability to describe certain complex dynamical behaviors of real systems. Moreover, the parameters of these models are often unknown or known only imprecisely. Consequently, in several applications, such models are not sufficiently accurate to describe the system dynamics. In this case, data collected by interacting with the system can be used to improve such models, leading to so called data-driven models.
Data-driven models have been widely studied in the context of inverse dynamics identification. The inverse dynamics is the inverse of the forward dynamics, and it takes as input the trajectory given in terms of joint positions, velocities, and accelerations, and outputs the torques that are causing them. This function is often used in control schemes to improve, for example, the tracking performance of the controller <cit.>, see the survey <cit.> for an overview, and for anomaly detection see e.g., <cit.>.
In the last decades, there has been an increased focus on learning inverse dynamics models of complex robotic systems by means of machine learning, using so called data-driven models, e.g. Gaussian processes regression (GPR) <cit.>, deep neural networks <cit.>, support vector machines (SVM), etc. The critical aspect of data-driven solutions is generalization ability, i.e., the phenomenon that estimation accuracy might decrease in configurations that are far from the training samples. For this reason, several studies focused on deriving data-efficient estimators of the inverse dynamics, see, for instance, <cit.> as black-box solutions. An alternative to black-box solutions are gray-box models, a combination of a physical model and a data-driven model: the physical models exploit prior knowledge, thus increasing data efficiency and generalization, whereas the data driven part compensates for the inaccuracies of the physical model, further improving accuracy. In the GPR literature, these models are named semiparametric models, see for instance, <cit.>.
When solving the system identification problem for the dynamics of a mechanical system, the inverse dynamics have a significant advantage over the forward dynamics: the inverse dynamics model can be reformulated as a linear model in the inertial parameters <cit.>, whereas the model of the forward dynamics, in general, does not have such a formulation. This circumstance, while benefiting from the much more extensive set of techniques for learning linear models in comparison to nonlinear ones, also enjoys a structural advantage: learning the inverse dynamics is often better posed than learning the forward dynamics.
Contributions. In this work, we consider an alternative approach to learning the forward dynamics that takes advantage of the more favorable structural properties of the inverse dynamics. Instead of the standard approach, which learns a GP model directly on the forward dynamics, we propose to learn a data-driven inverse dynamics model by means of GPR and then compute the forward dynamics through an exact, deterministic transformation of the learned inverse dynamics model. Experimental results show that, compared to the standard approach, our strategy leads to better performance in terms of data efficiency and accuracy. Moreover, this strategy allows computing the forward dynamics with kernel types that were previously applicable only to learning the inverse dynamics.
The paper is organized as follows. Section <ref> provides background formulation of dynamics models and GPR. The proposed approach is described in Section <ref>, while experiments are reported in Section <ref>. Section <ref> draws the conclusions.
§ BACKGROUND
We start by providing the background formulation of the robot dynamics. Then, we describe GPR for inverse and forward dynamics identification, with details about the black-box priors adopted in this work.
§.§ Rigid Body Dynamics
Consider a mechanical system with n degrees of freedom and denote with q_t ∈ℝ^n its generalized coordinates at time t; q̇_t and q̈_t are the velocity and the acceleration of the joints, respectively. The generalized torques, i.e., the control input of the system, are denoted by τ_t ∈ℝ^n. For compactness, we will denote explicitly the dependencies on t only when strictly necessary. Under rigid body assumptions, the dynamics equations of a mechanical system are described by the following matrix equation, called rigid body dynamics (RBD):
B(q) q̈ + c(q, q̇) + g(q) + F(q̇) = τ,
where B(q) is the inertia matrix, while c(q, q̇), g(q), and F(q̇) account for the contributions of fictitious forces, gravity, and friction, respectively, see <cit.> for a more detailed description. For compactness, we introduce also n(q, q̇) = c(q, q̇) +g(q)+F(q̇), and the symbols B̂(q) and n̂(q, q̇) will denote the estimates of B(q) and n(q, q̇).
§.§.§ Linear Model of Inverse Dynamics
The model in eq. (<ref>) is linear w.r.t. the dynamics parameters, i.e., mass, center of mass, inertia, and friction coefficients of the links, see <cit.>. When neglecting friction, the number of dynamics parameters of each link is p=10, one for mass, three for center of mass, and six for the inertia tensor. Let w∈ℝ^n · p be the vector collecting the dynamics parameters of all the links, then
τ = Φ(q,q̇,q̈)w =
[ ϕ^(1)(q,q̇,q̈); ⋮; ϕ^(n)(q,q̇,q̈) ]w,
where Φ(q,q̇,q̈) ∈ℝ^n×(n · p) depends only on the kinematics parameters of the robot, i.e., its geometry.
§.§.§ The forward dynamics
is the map that, given the current position, velocity, and torque q,q̇,τ, outputs the acceleration q̈. This model is needed for simulation and prediction purposes.
Forward dynamics learning is known to be more complex than learning the inverse dynamics. This can be seen directly in the RBD model given by physics. From eq. (<ref>), we have
q̈ = B^-1(q) ( τ - c(q, q̇) - g(q) - F(q̇) ) = B^-1(q) ( τ - n(q, q̇) ) .
Equation (<ref>) cannot be written, in general, as a linear function of the dynamics parameters, as it can be done for the inverse dynamics, and the presence of the inverse of the inertia matrix creates a nonlinear dependence on the parameters. This complexity would be encoded in the black-box map that a machine learning algorithm would attempt to describe, if trying to learn the forward dynamics directly, i.e. q,q̇,τ⟶q̈.
§.§ GPR for Inverse Dynamics Identification
GPR provides a solid probabilistic framework to identify the inverse dynamics from data. Typically, in GPR, each joint torque is modeled by a distinct and independent GP. Consider an input/output data set 𝒟 = {y^(i), X }, where y^(i)∈ℝ^N is a vector containing N measurements of τ^(i), the i-th joint torque, while X={x_t_1…x_t_N}; the vector x_t contains the position, velocity, and acceleration of the joints at time t, hereafter designated as GP input. The probabilistic model of 𝒟 is
y^(i) =
[ f^(i)(x_t_1); ⋮; f^(i)(x_t_N) ]
+ [ e^(i)_t_1; ⋮; e^(i)_t_N ]
= f^(i)(X) + e^(i),
where e^(i) is i.i.d. Gaussian noise with standard deviation σ_i, while f^(i)(·) is an unknown function modeled a priori as a GP, namely, f^(i)(·) ∼ N(m_f^i(X),𝕂^(i)(X,X)). m_f^i(X) denotes the prior mean, and, generally, it is assumed to be equal to zero when no prior knowledge is available. The covariance matrix 𝕂^(i)(X,X) is defined through a kernel function k^(i)(·, ·). Specifically, the covariance between f^(i)(x_t_j) and f^(i)(x_t_l), i.e., the element of 𝕂^(i)(X,X) at row j and column l, is equal to k^(i)(x_t_j, x_t_l). Exploiting the properties of Gaussian distributions, it can be proven that the posterior distribution of f^(i) given 𝒟 in a general input location x_* is Gaussian, see <cit.> for a comprehensive description. Then, the maximum a posteriori estimator corresponds to the mean, which is given by the following expression
f̂^(i)(x_*) = 𝕂^(i)(x_*,X)α^(i) + m_f^i(x_*) ,
where
α^(i) = (𝕂^(i)(X,X) + σ_i^2 I)^-1(y^(i)-m_f^i(X)) ,
𝕂^(i)(x_*,X) = [k^(i)(x_*, x_t_1) … k^(i)(x_*, x_t_N)] .
Different solutions proposed in the literature can be grouped roughly based on the definition of the GP prior. In this paper, we will consider two black-box approaches, where the prior is defined without exploiting prior information about the physical model, and assuming m_f^i(X)=0.
§.§.§ Squared Exponential kernel
The Squared Exponential (SE) kernel defines the covariance between samples based on the distance between GP inputs, see, for instance, <cit.>, and it is defined by the following expression
k_SE(x_t_j, x_t_l) = λ e^-x_t_j-x_t_l^2_Σ^-1;
λ and Σ are kernel hyperparameters: the former is a positive scaling factor, and the latter is a positive-definite matrix that defines the norm used to compute the distance between inputs. A common choice is to take Σ diagonal, with its positive diagonal elements called lengthscales.
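A minimal NumPy sketch of the SE kernel and of the corresponding maximum a posteriori estimator with zero prior mean is given below; the exact parametrization of Σ through the lengthscales is an illustrative convention and may differ from the one used in our implementation.

import numpy as np

def se_kernel(X1, X2, lam, lengthscales):
    # k_SE(x, x') = lam * exp(-||x - x'||^2_{Sigma^{-1}}) with diagonal Sigma,
    # parametrized here by per-dimension lengthscales (illustrative convention).
    D1, D2 = X1 / lengthscales, X2 / lengthscales
    sq = (D1 ** 2).sum(1)[:, None] + (D2 ** 2).sum(1)[None, :] - 2.0 * D1 @ D2.T
    return lam * np.exp(-sq)

def gp_posterior_mean(X, y, Xstar, lam, lengthscales, sigma):
    # Posterior mean with zero prior mean:
    # f_hat(x*) = K(x*, X) (K(X, X) + sigma^2 I)^{-1} y.
    K = se_kernel(X, X, lam, lengthscales) + sigma ** 2 * np.eye(len(y))
    alpha = np.linalg.solve(K, y)
    return se_kernel(Xstar, X, lam, lengthscales) @ alpha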
§.§.§ Geometrically Inspired Polynomial kernel
The Geometrically Inspired Polynomial (GIP) kernel has been recently introduced in <cit.>. This kernel is based on the property that the dynamics equations in (<ref>) are a polynomial function in a proper transformation of the GP input, fully characterized only by the type of each joint. Specifically, q is mapped to q̃, the vector composed by the concatenation of the components associated with a prismatic joint and the sines and cosines of the revolute coordinates. As proved in <cit.>, the inverse dynamics in (<ref>) is a polynomial function in q̈, q̇ and q̃, where the elements of q̈ have maximum relative degree of one, whereas the ones of q̇ and q̃ have maximum relative degree of two. To exploit this property, the GIP kernel is defined through the sum and the product of different polynomial kernels (<cit.>), hereafter denoted as k_P^(p)(·,·), where p is the degree of the polynomial kernel. In particular, we have
k_GIP(x_t_j, x_t_l) =
(k_P^(1)(q̈_t_j, q̈_t_l) + k_P^(2)(q̇_t_j, q̇_t_l))
k_Q(q̃_t_j, q̃_t_l) ,
where, in its turn, k_Q is given by the product of polynomial kernels with degree two, see <cit.> for all the details. In this way, the GIP kernel allows defining a regression problem in a finite-dimensional function space where (<ref>) is contained, leading to better data efficiency in comparison with the SE kernel.
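The following sketch illustrates the structure of the GIP kernel described above: a sum of polynomial kernels on the accelerations and velocities, multiplied by a product of degree-two polynomial kernels on the transformed positions. The specific polynomial kernel parametrization and the per-joint factorization of k_Q are our own simplifying assumptions; see the GIP reference cited above for the exact construction.

import numpy as np

def poly_kernel(a, b, degree, c=1.0):
    # Inhomogeneous polynomial kernel k_P^(p)(a, b) = (a . b + c)^p.
    return (np.dot(a, b) + c) ** degree

def gip_kernel(q1, dq1, ddq1, q2, dq2, ddq2, revolute_mask):
    # k_GIP = (k_P^(1) on accelerations + k_P^(2) on velocities) * k_Q on transformed positions,
    # where q_tilde collects prismatic coordinates and sin/cos of revolute coordinates,
    # and k_Q is taken here as a product of degree-2 polynomial kernels, one per joint.
    def q_tilde(q):
        return [np.array([np.sin(qi), np.cos(qi)]) if rev else np.array([qi])
                for qi, rev in zip(q, revolute_mask)]
    k_q = 1.0
    for a, b in zip(q_tilde(q1), q_tilde(q2)):
        k_q *= poly_kernel(a, b, degree=2)
    return (poly_kernel(ddq1, ddq2, degree=1) + poly_kernel(dq1, dq2, degree=2)) * k_q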
§.§.§ Semiparametric kernels
When the physical model is available, semiparametric (gray-box) priors can be defined in two ways:
* Define the mean m_f^i equal to the i-th output of (<ref>) or (<ref>). Typically, in this case, the covariance is defined through an SE kernel, which aims at compensating for inaccuracies of the prior mean.
* Define m_f^i=0, and the kernel as the sum of two kernels. The first kernel is a linear kernel derived from (<ref>) assuming that w∼ N(0, Σ), whereas the second is a standard SE kernel. Then, k^(i)_SP, namely, the semiparametrical kernel of the i-th joint, is defined by the following expression:
k^(i)_SP(x_t_j, x_t_l) = ϕ^(i)(x_t_j) Σϕ^(i)(x_t_l)^T + k^(i)_SE(x_t_j, x_t_l) ,
where matrix Σ is a tunable hyperparameter, typically assumed diagonal, expressing the prior covariance of w, <cit.>.
We remark that we limit our investigation to black-box solutions, since we want to test the proposed approach w.r.t. standard direct learning of the forward dynamics (which cannot use semiparametric kernels). However, we would like to stress that our approach is fully compatible wih any kernel function.
§.§ GPR for forward dynamics identification
The GPR framework presented for inverse dynamics learning can also be applied to the forward dynamics. When considering the i-th joint, the input of the GP is the vector containing q, q̇ and τ, while the output is the i-th component of q̈. However, compared with the inverse dynamics, the choices for the GP prior are limited. (i) The GIP kernel cannot be applied, since it is based on the property that τ is a polynomial function in a proper transformation of q, q̇, and q̈, and there is no equivalent property for q̈. (ii) Since there is no relation equivalent to (<ref>), i.e., q̈ is not linear w.r.t. the dynamics parameters, in general it is not possible to formulate a semiparametric kernel. The options commonly available are:
* If no prior is available, assume m_f^i=0, and define the covariance a priori as a SE kernel (or any non-structured kernel) with GP input (q, q̇, τ);
* In case (<ref>) is known, a so-called residual model can be used: the prior knowledge is exploited by defining the mean m_f^i equal to the i-th component of B(q)^-1(τ-n(q, q̇)), and the covariance through an SE kernel, to compensate for possible inaccuracies of the forward physical model.
§ ESTIMATING FORWARD DYNAMICS FROM INVERSE DYNAMICS LEARNING
In this section, we describe the proposed approach to estimating the forward dynamics (<ref>) as a function of the inverse dynamics learned using GPR. In particular, we discuss how the physical equations of the RBD (<ref>) entail an exact relationship that can be leveraged to compute gravitational contributions, inertial contributions, and n(q,q̇) as a function of the inverse dynamics evaluated in a subset of the inputs. We assume that a distinct GP is used for each of the n degrees of freedom, and we denote by f̂^(i)(·), i=1… n the estimator of the i-th joint torque obtained by applying (<ref>). For convenience, from here on, we will point out explicitly the different components of the GP input, namely, the input of f̂^(i) will be (q,q̇,q̈) instead of x, which comprises the concatenation of q,q̇,q̈. It is worth mentioning that the proposed approach is inspired by the strategy adopted in Newton-Euler algorithms, see <cit.>.
The proposed approach consists of (i) learning the inverse dynamics, which is a function suitable for identification purposes given its linear formulation, and (ii) using the learned model to have a closed-form deterministic transformation that estimates the forward dynamics. The method is described in Algorithm <ref>.
Assumption 1. The dependence of the dynamical components B(q)q̈ and n(q,q̇) on the input quantities q,q̇,q̈ in (<ref>) is exact. That is, the inertial contributions do not depend on q̇, the term n(q,q̇) does not depend on q̈, and no terms with cross dependencies appear in the equations of motion.
Assumption 1 is a fairly mild assumption commonly assumed in classical manuscripts. The estimation of each dynamical component is described in the following.
In Algorithm <ref>, initially the inverse dynamics, f̂^(i), are learned for each DoF of the mechanical system using GPR as described in Section <ref>. These models are now considered known and fixed. Then, the inertia matrix B ( q), the Coriolis and gravitational forces n( q, q̇) = c( q, q̇) q̇ + g( q) are estimated as a function of the learned inverse dynamics. Finally, the forward dynamics can be computed according to the physical laws (<ref>).
§.§.§ Gravitational contribution.
The motion equations in (<ref>) show that the torque components due to the gravitational contributions account for all the terms that depend only on q. Consequently, to obtain ĝ^(i)(q), i.e., the estimate of the i-th gravitational contribution in the configuration q, we evaluate f̂^(i) by setting q̇=0, q̈=0. Then, the estimate of g(q) is
ĝ(q) =
[ ĝ^(1)(q); ⋮; ĝ^(n)(q) ]
=
[ f̂^(1)(q,0,0); ⋮; f̂^(n)(q,0,0) ].
§.§.§ Inertial contributions.
The inertial contributions, i.e., B(q)q̈, account for all the contributions that depend simultaneously on q and q̈. Consequently, to estimate these contributions, we evaluate the GP models at the inputs (q, 0, 1_j) and subtract the gravitational contribution computed previously. In particular, to obtain the element B̂_ij(q), i.e., the estimate of the element of B(q) in position (i,j), we set all the accelerations to zero except for the j-th component. Denoting by 1_j the vector with all elements equal to zero except for the j-th element, which equals one, we have
B̂_ij(q) = f̂^(i)(q,0,1_j) - ĝ^(i)(q) .
§.§.§ Estimation of n(q,q̇).
The vector n(q,q̇), defined below (<ref>), contains all the contributions that do not depend on q̈. Then, n̂^(i)(q,q̇), i.e., the estimate of the i-th component of n(q,q̇), is computed by evaluating the i-th GP model with q̈=0. Therefore, we compute:
n̂(q, q̇) =
[ n̂^(1)(q, q̇); ⋮; n̂^(n)(q, q̇) ]
=
[ f̂^(1)(q,q̇,0); ⋮; f̂^(n)(q,q̇,0) ].
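The estimation steps above translate directly into the following Python sketch; the estimator interface and function names are our own assumptions for illustration.

import numpy as np

def forward_dynamics_from_inverse(f_hat, q, dq, tau):
    # f_hat: list of n learned inverse-dynamics estimators, f_hat[i](q, dq, ddq) -> torque of joint i.
    n = len(q)
    zeros = np.zeros(n)
    g_hat = np.array([f_hat[i](q, zeros, zeros) for i in range(n)])        # gravitational estimate
    B_hat = np.empty((n, n))
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0
        B_hat[:, j] = np.array([f_hat[i](q, zeros, e_j) for i in range(n)]) - g_hat  # inertia column
    n_hat = np.array([f_hat[i](q, dq, zeros) for i in range(n)])           # n(q, dq) estimate
    return np.linalg.solve(B_hat, tau - n_hat)                             # acceleration estimate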
The proposed Algorithm <ref> is based on well-known first-principle relationships. Yet, it offers a useful bridge between modern machine learning techniques and the principles of classical mechanics that, as will be seen in Section <ref>, improves significantly upon the standard approach to learn directly the forward dynamics.
A further advantage w.r.t. standard approaches is that more structured kernels can be utilized to learn the forward dynamics. As described in Sections <ref>-<ref>, the GIP and semiparametric kernels cannot be used to directly learn the forward dynamics. However, these kernels can be used to learn the inverse dynamics in the initial steps of Algorithm <ref>, and consequently used to learn the forward dynamics at the end of Algorithm <ref>.
Note that the method is not restricted to any function approximator to compute the inverse dynamics, e.g. neural networks could be used too; rather, it offers a general methodology, easy to implement, that reduces the problem of learning the forward dynamics to the more convenient inverse dynamics learning.
§ EXPERIMENTS
This section compares the learning performance of our approach with standard GP-based forward dynamics learning. Specifically, we implemented Algorithm <ref> using as prior m_f^i=0 and the two black-box kernels introduced in (<ref>) and (<ref>). The models obtained are hereafter referred to as SE and GIP estimators. As a baseline, we implemented the standard black-box forward dynamics estimator with m_f^i=0 and SE kernel described in Section <ref>, hereafter denoted by SE_FD.
We carried out two experiments on simulated setups. The first investigated the behavior of the estimators as the DoFs increase on a simulated PANDA Franka Emika robot. The second, instead, analyzed the data-efficiency performance on a UR10 robot. Training and test data sets of the kind: 𝒟={(q_t,q̇_t,q̈_t), τ_t } are obtained by generating joint trajectories and evaluating (<ref>) to compute the joint torques. The dynamics equations (<ref>) are derived using the Python package Sympybotics[https://github.com/cdsousa/SymPyBotics], <cit.>. All the estimators were implemented in Python, starting from gpr-pytorch[https://bitbucket.org/AlbertoDallaLibera/gpr-pytorch], a library for GPR based on pytorch, <cit.>.
§.§ Performance as a Function of DoFs
We compared the performance of the three estimators on a simulated PANDA robot as the number of DoFs increased from two to five.
For each DoF, we collected a training and a test data set. The reference trajectories followed by the robot were different realizations of Gaussian noise filtered with a low-pass filter with a cut-off frequency of 1 Hz. The trajectories lasted 100 seconds, sampled at 100 Hz, for a total of 10000 samples per data set. The joint torques were corrupted by Gaussian noise with standard deviation 0.01 Nm. We computed the three models following Algorithm <ref> and trained the kernel hyperparameters by maximizing the marginal likelihood of the training data <cit.>.
For each DoF, Figure <ref> reports the distribution of the acceleration error modules. It is particularly interesting to compare the performance of the SE_FD and SE estimators, since they adopt the same kernel to model, respectively, accelerations and torques. The proposed SE estimator performs similarly to SE_FD in the 2-DoF experiment, which is a simple test, and it outperforms the baseline in all the other experiments with higher DoFs.
These results show that the proposed solution can improve the standard direct learning of the forward dynamics.
Interestingly, the GIP estimator outperforms the baseline approach and the SE estimator for all the DoFs and for all the joints. The performance gap between the GIP and SE estimators increases with the DoF. While the distributions of the estimation errors of GIP and SE are similar for two and three DoF, GIP significantly outperforms SE for four and five DoF. The deterioration of the SE performance is due to the limited generalization properties of the SE kernel, which affects the accuracy of the inverse dynamics model when the DoF increases. Table <ref> reports the root mean squared errors of the torque estimates for each DoF tested. The RMSEs of the SE estimator grow much more rapidly with the DoF than those of the GIP estimator. Torque estimation errors are amplified in Algorithm <ref>, which also limits the accuracy of the forward dynamics estimation.
Thanks to the higher data efficiency and generalization of the GIP kernel (see the RMSEs in Table <ref>), the GIP estimator estimates accelerations accurately also in the setup with 5 DoF. The performance of the GIP estimator shows another advantage of the proposed approach, that is, the possibility of using data-efficient solutions proposed for inverse dynamics. As discussed in Section <ref>, the forward dynamics in (<ref>) does not admit convenient representation like the linear model in (<ref>) or the GIP polynomial representation described in <cit.>. These advantages can be further exploited by relying on the SP kernels described in Section <ref>. Using these kernels in simulated data would be unfair, as we have the exact knowledge of the physical model generating the data.
§.§ Data Efficiency Performance
We compared the data efficiency of the SE_FD, SE and GIP estimators by evaluating their accuracy as a function of the amount of training data. In this experiment, we considered a simulated UR10 robot. As in the previous experiment, the reference trajectories followed in the training and test data sets are distinct realizations of Gaussian noise filtered at 1Hz. In Fig. <ref>, we reported the medians of the acceleration error modules as a function of the number of seconds of training samples used. We used the first 10, 20, …, 100 seconds of training data to derive the 3 estimators, after optimizing the kernel hyperparameters by maximizing the marginal likelihood of the training data. Then, we tested the derived estimators in the whole test data set.
The SE_FD and SE estimators perform similarly for joints 2 and 3, but the SE estimator outperforms SE_FD for the other joints. This confirms the efficacy of the proposed method as a general method that, with the same model, outperforms the standard approach of learning the forward dynamics directly. Figure <ref> shows that the GIP estimator significantly outperforms the other models, both in terms of accuracy and data efficiency: after only about 30 s of data, the GIP estimator performs as well as the other estimators do after 100 s. The performance of the GIP estimator confirms that the proposed approach can significantly improve on direct forward dynamics learning by exploiting the data-efficient solutions proposed for inverse dynamics, which are not available for forward dynamics models.
§ CONCLUSIONS
In this work, we present a black-box GP-based strategy to learn the forward dynamics model. The proposed algorithm defines a prior on the inverse dynamics function instead of directly modeling the forward dynamics. Based on the learned inverse dynamics model, the algorithm computes the individual dynamics components, i.e., the inertial, gravitational, and Coriolis contributions, to estimate the forward dynamics. Experiments carried out in simulated environments show that the proposed strategy can be more accurate and data-efficient than directly learning the accelerations in a black-box fashion. The advantages w.r.t. the standard approach are particularly relevant when considering data-efficient kernels, such as the GIP kernel, which are not available for modeling the forward dynamics directly. The proposed method is general and can be adapted to any estimator, both black-box and gray-box. In future work, we will test the approach on data from real robots, make use of semiparametric kernels, and employ it in MBRL algorithms as the transition function.
|
http://arxiv.org/abs/2307.07206v1 | 20230714074902 | High-order splitting finite element methods for the subdiffusion equation with limited smoothing property | [
"Buyang Li",
"Zongze Yang",
"Zhi Zhou"
] | math.NA | [
"math.NA",
"cs.NA"
] |
High-order splitting finite element methods for the subdiffusion equation with limited smoothing property
Buyang Li (Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong; [email protected])
Zongze Yang (Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong; [email protected])
Zhi Zhou (Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong; [email protected])
2010 Mathematics Subject Classification: Primary 65M30, 65M15, 65M12.
In contrast with the diffusion equation which smoothens the initial data to C^∞ for t>0 (away from the corners/edges of the domain), the subdiffusion equation only exhibits limited spatial regularity. As a result, one generally cannot expect high-order accuracy in space in solving the subdiffusion equation with nonsmooth initial data. In this paper, a new splitting of the solution is constructed for high-order finite element approximations to the subdiffusion equation with nonsmooth initial data. The method is constructed by splitting the solution into two parts, i.e., a time-dependent smooth part and a time-independent nonsmooth part, and then approximating the two parts via different strategies. The time-dependent smooth part is approximated by using high-order finite element method in space and convolution quadrature in time, while the steady nonsmooth part could be approximated by using smaller mesh size or other methods that could yield high-order accuracy. Several examples are presented to show how to accurately approximate the steady nonsmooth part, including piecewise smooth initial data, Dirac–Delta point initial data, and Dirac measure concentrated on an interface.
The argument could be directly extended to subdiffusion equations with nonsmooth source data. Extensive numerical experiments are presented to support the theoretical analysis and to illustrate the performance of the proposed high-order splitting finite element methods.
August 12, 2023
===================
§ INTRODUCTION
This article is concerned with the construction and analysis of high-order finite element methods for solving the subdiffusion equation in a convex polygonal/polyhedral domain Ω⊂ℝ^d, with d≥ 1, i.e.,
{ ∂_t^α u - Δ u = f in Ω×(0,T) ,
u = 0 on ∂Ω×(0,T),
u(0) = u^0 in Ω,
.
where f and u^0 are given source function and initial value, respectively,
Δ: H^2(Ω)∩ H^1_0(Ω)→ L^2(Ω) is the Dirichlet Laplacian operator,
and ∂_t^α u denotes the Djrbashian–Caputo fractional time derivative of order
α∈(0,1) <cit.> and <cit.>:
∂_t^α u (t) = 1/Γ(1-α)∫_0^t (t-s)^-α u'(s) ds with Γ(1-α)=∫_0^∞ s^-αe^-s ds .
The subdiffusion equation in (<ref>) has received much attention in recent years in physics, engineering, biology and finance, due to its capability of describing anomalously slow diffusion processes, also known as subdiffusion.
At a microscopic level, subdiffusive processes can be
described by continuous time random walk with a heavy-tailed waiting time distribution,
which displays local motion occasionally interrupted by
long sojourns and trapping effects. These transport processes are characterized by a sublinear
growth of the mean squared displacement of the particle with the time, as opposed to linear
growth for Brownian motion. The model (<ref>) has found many successful practical
applications,
e.g., subsurface flows <cit.>, thermal diffusion in
media with fractal geometry <cit.>, transport column experiments <cit.>
and heat conduction with memory <cit.>, to name but a few.
See <cit.> for physical modeling and a long list of applications.
One of the main difficulties in the numerical approximation to the subdiffusion equation, compared to the standard parabolic equations, is the weak singularity of the solution at t=0 and the limited regularity pick up with respect to the initial data. In general, for the subdiffusion equation with a nonsmooth initial value u^0∈ L^2(Ω) and a temporally smooth source function f(x,t), the solution generally exhibits the following type of weak singularity at t=0:
∂_t^m u(·,t)_L^2≤ C_m t^-m for m≥ 0 ,
and the spatial regularity pick up is limited to
u(·,t)_H^2≤ Ct^-1 .
Higher-order spatial regularity generally cannot be expected for u^0∈ L^2(Ω).
This limited smoothing property was shown in the paper <cit.>
of Sakamoto and Yamamoto with the following two-sided stability (with f≡0):
c_1 u^0 _ s≤ u(T) _s+2≤ c_2 u^0 _ s.
The limited regularity of the solution, as shown in (<ref>)–(<ref>),
causes many difficulties in developing high-order temporal and spatial discretizations for the subdiffusion equation when the initial data is nonsmooth.
Many efforts have been made in overcoming these difficulties. In particular, high-order temporal discretizations for the subdiffusion equation have been developed based on graded mesh in <cit.> for u^0∈ H^1_0(Ω)∩ H^2(Ω), discontinuos Galerkin method in <cit.> for u^0∈ H^5/2(Ω)∩ H^1_0(Ω) plus a compatibility condition Δ u^0=0 on ∂Ω, BDF convolution quadrature in <cit.> and Runge–Kutta convolution quadrature in <cit.> for u^0∈ L^2(Ω), and exponential convolution quadrature in <cit.> for semilinear problems with u^0∈ L^∞(Ω).
See also <cit.> for a posteriori error analysis of several popular time stepping schemes.
The spatial discretization using the standard Galerkin finite element method (FEM) or
the lumped mass Galerkin FEM for solving the subdiffusion equation with nonsmooth initial data was studied in <cit.>.
The second order convergence in L^2 norm was established and it is optimal with respect to the L^2 initial data.
In these works, the error analysis was carried out using the Mittag–Leffler functions.
Error estimates of the standard Galerkin FEM with initial data in Ḣ^q(Ω) with q∈(-1,0) can be found in <cit.>.
The Laplace transform approach, initially introduced for parabolic equations by Fujita and Suzuki <cit.>, was adapted to the subdiffusion equation in <cit.> to remove a logarithmic factor in the error estimates. See <cit.> and <cit.>
for concise overviews. The energy argument, which represents one of the most commonly used strategies for standard parabolic equations, is much more involved for the subdiffusion equation.
This is due to the nonlocality of the fractional derivative ∂_t^α u, which causes that many useful tools
(such as integration by parts formula and product rule) are either invalid or requiring substantial modifications.
Some first encouraging theoretical results in this important direction were obtained by Mustapha <cit.>, where optimal error estimates for the homogeneous problem were obtained using an energy argument; see also <cit.> for the time-fractional Fokker–Planck equation. A unified analysis of different kinds of FEMs for the homogeneous subdiffusion problem based on an energy argument, which generalizes the corresponding technique
for standard parabolic problems in <cit.>, were given by Karaa <cit.>.
The numerical analysis of subdiffusion equations with irregular domains and finite volume element methods were discussed in <cit.> and <cit.>, respectively.
In all the aforementioned works on the subdiffusion equation, only the piecewise linear finite element method was analyzed for nonsmooth initial data.
This is attributed to the limited smoothing property of the subdiffusion equation, which only smoothens initial data u^0∈ L^2(Ω) to u(t)∈ H^2(Ω) for t>0; cf. (<ref>) with s=0.
This is in sharp contrast to the standard parabolic equation, which smoothens initial data u^0∈ L^2(Ω) to u(t)∈ C^∞(Ω) for t>0.
Consequently, the standard finite element approximation to the subdiffusion equation only has second-order convergence in L^2(Ω) for initial data in L^2(Ω).
The development of high-order finite element methods in space in the case of nonsmooth initial data remains challenging and is missing in the literature.
If the problem data are smooth and compatible with the boundary condition, then the solution can be smooth enough. In this case, it is possible to construct high-order spatial approximations. For example, a high-order hybridizable discontinuous Galerkin method was proposed and analyzed in <cit.> under the assumption that the solution is sufficiently smooth.
In this paper, we construct a splitting method which allows us to develop high-order finite element methods for solving the subdiffusion equation with nonsmooth initial data.
For u^0 ∈ L^2(Ω), we split the solution into a time-dependent regular part u^r(t) ∈Ḣ^2m+2(Ω) and several time-independent singular parts u_j^s with j=1,…,m.
The time-dependent smooth part u^r(t) is then approximated by using high-order finite element methods in space and convolution quadrature in time generated by the k-step BDF method, with k=1,2,…,6. Denoting by U_h^r,n the fully discrete solution approximating u^r(t_n), the following result is proved (see Theorems <ref> and <ref>):
U_h^r,n - u^r(t_n) _L^2≤ c (h^2m+2 t_n^-(1+m)α + τ^k t_n^-k-mα) u^0 _L^2 for all t_n∈(0,T].
Note that the integer m can be arbitrarily large, and hence we obtain an arbitrarily high-order FEM approximation for the smooth part.
This argument also works for weaker initial data u^0 ∈Ḣ^s(Ω) with s<0.
Meanwhile, the singular parts u_j^s are independent of time and they can be approximated by solving several elliptic equations with nonsmooth sources.
This is illustrated in Section <ref> for several exemplary nonsmooth data, e.g., piecewise smooth functions, Dirac–Delta point source, and Dirac measure concentrated on an interface.
More generally, the time-independent nonsmooth part can be approximated by the standard FEM using smaller mesh size without increasing the overall computational cost significantly.
This is possible as the nonsmooth part is time-independent and therefore avoids the time stepping procedure.
As a result, the high-order finite element approximation to the subdiffusion equation can be realized by the novel splitting strategy.
This strategy works for all second-order elliptic operators with smooth coefficients even though we only consider the negative Laplacian -Δ for simplicity of presentation.
As far as we know, this is the first attempt to develop spatially high-order finite element methods for the subdiffusion equation with nonsmooth initial data.
In addition, the argument in this paper can be easily extended to the case of nonsmooth source term.
The rest of the paper is organized as follows.
In Section <ref>, we present the splitting method and the high-order finite element approximation to the regular part of the solution.
High-order time-stepping schemes for the regular part and the corresponding error estimates are presented in Section <ref>.
High-order finite element approximations to the singular parts are discussed in Section <ref>.
The extension to nonsmooth source term is discussed in Section <ref>.
Finally, in Section <ref>, we present several numerical examples to illustrate the high-order convergence
of the proposed splitting FEMs in comparison with the standard Galerkin FEMs for the subdiffusion equation.
§ CONSTRUCTION OF HIGH-ORDER SPATIAL DISCRETIZATIONS
Let {λ_j}_j=1^∞ and {φ_j}_j=1^∞ be the eigenvalues (ordered nondecreasingly with multiplicity counted) and the L^2(Ω)-orthonormal eigenfunctions, respectively, of the elliptic operator A=-Δ:H^2(Ω)∩ H^1_0(Ω)→ L^2(Ω) under the zero boundary condition. Then {φ_j}_j=1^∞
forms an orthonormal basis in L^2(Ω).
For any real number s≥ 0, we denote by Ḣ^s(Ω) the Hilbert space
with the induced norm ·_Ḣ^s(Ω) defined by
v_Ḣ^s(Ω)^2 :=∑_j=1^∞λ_j^s⟨ v,φ_j ⟩^2.
In particular, v_Ḣ^0(Ω)=v_L^2(Ω)=(v,v)^1/2 is the norm in L^2(Ω).
For s< 0, we define Ḣ^s(Ω) = (Ḣ^-s(Ω))', the dual space of Ḣ^-s(Ω).
It is straightforward to verify that v_Ḣ^1(Ω) = ∇ v_L^2(Ω) is an equivalent norm of H_0^1(Ω)
and v_Ḣ^2(Ω)=A v_L^2(Ω) is a norm of H^2(Ω)∩ H^1_0(Ω); see <cit.>.
Moreover, for any integer m≥ 0, v∈Ḣ^2m+2(Ω) if and only if
A^j-1 v∈ H^2(Ω)∩ H^1_0(Ω) and A^jv∈Ḣ^2m+2-2j(Ω) for j=1,…,m+1, and
v_Ḣ^2m+2(Ω)∼A^m+1v_L^2(Ω) ∀ v∈Ḣ^2m+2(Ω) .
It is known that the solution to problem (<ref>) can be written as (cf. <cit.>)
u(t)= F(t)u^0 + ∫_0^t E(t-s) f(s) ds ,
where the solution operators F(t) and E(t) are defined by
F(t):=1/2π i∫_Γ_θ,κe^zt z^α-1 (z^α +A )^-1 dz
E(t):=1/2π i∫_Γ_θ,κe^zt (z^α+A )^-1 dz ,
respectively,
with integration over a contour Γ_θ,κ in the complex plane ℂ
(oriented counterclockwise), defined by
Γ_θ,κ={z∈ℂ: |z|=κ, | arg z|≤θ}∪{z∈ℂ: z=ρ e^±iθ, ρ≥κ} .
Throughout, we fix θ∈(π/2,π) so that z^α∈Σ_αθ⊂Σ_θ:={0≠ z∈ℂ: | arg(z)|≤θ} for all z∈Σ_θ.
The following resolvent estimate will be frequently used (see
<cit.>):
(z+A )^-1≤ c_θ |z|^-1, ∀ z ∈Σ_θ,
∀ θ∈(0,π),
where · denotes the operator norm on L^2(Ω).
Equivalently,
the solution operators in (<ref>) can also be expressed as
F(t)v = ∑_j=1^∞ E_α,1(-λ_jt^α)(v,φ_j)φ_j
E(t)v = ∑_j=1^∞ t^α-1E_α,α(-λ_jt^α)(v,φ_j)φ_j.
Here E_α,β(z) is the two-parameter Mittag–Leffler function
<cit.>.
The Mittag–Leffler function E_α,β(z) is a generalization of the exponential function
e^z appearing in normal diffusion. For any α∈ (0,1), the function E_α,1
(-λ t^α) decays only polynomially like λ^-1t^-α as λ,t→∞ <cit.>, which contrasts sharply with the exponential decay for e^-λ t
appearing in normal diffusion. This important feature directly translates into the limited smoothing
property in space of the solution operator F(t) <cit.>.
As a result, in general, we cannot expect high-order approximations for the subdiffusion problem (<ref>) with nonsmooth data. In fact, our preceding study <cit.>
shows an optimal convergence rate O(h^2-s) for the piecewise linear finite element approximation when u^0 ∈Ḣ^-s(Ω).
§.§ A new splitting of the solution
For simplicity of presentation, we consider the homogeneous subdiffusion equation with f≡ 0 and u^0 ∈ L^2(Ω).
The argument can be extended to the weaker case u^0 ∈Ḣ^s(Ω) with s∈ [-1,0) by slight modifications.
The case of a nonsmooth source f≢0 will be discussed in Section <ref>.
We split the integrand of (<ref>) into two parts based on the following relation:
(z^α+A)^-1 = A^-1 - z^α(z^α+A)^-1A^-1 .
This leads to the following splitting of the solution operator:
F(t):=1/2π i∫_Γ_θ,κe^zt z^α-1 (z^α +A )^-1 dz
= 1/2π i∫_Γ_θ,κe^zt[z^α-1A^-1 - z^2α-1 (z^α +A )^-1A^-1] dz
= t^-α/Γ(1-α) A^-1 - 1/2π i∫_Γ_θ,κe^zt z^2α-1 (z^α +A )^-1A^-1 dz .
In the case f≡ 0 we obtain the following splitting of the solution:
u(t)
= F(t) u^0 = u^s + u^r(t)
with
u^s = 1/Γ(1-α) A^-1 t^-α u^0 and u^r(t) = - 1/2πi∫_Γ_θ,κ e^zt z^2α-1 (z^α+A)^-1A^-1 u^0 dz ,
denoting the singular and regular parts in this splitting, respectively.
This splitting process can be continued by substituting relation (<ref>) into the expression of the regular part repeatedly. Then we can obtain the following higher-order splitting:
u(t) = u^r(t) + ∑_j=1^m u_j^s
with
u_j^s = (-1)^j+1t^-jα/Γ(1-jα) A^-j u^0, for j=1,2,…,m,
u^r(t) = F^r(t) u^0 := (-1)^m/2πi∫_Γ_θ,κ e^zt z^(1+m)α-1 (z^α+A)^-1A^-m u^0 dz.
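For concreteness, carrying out one more substitution of (<ref>) (our own intermediate expansion, consistent with the general formulas above) gives
(z^α+A)^-1 = A^-1 - z^α A^-2 + z^2α(z^α+A)^-1A^-2 ,
and, using the identity 1/2π i∫_Γ_θ,κe^zt z^jα-1 dz = t^-jα/Γ(1-jα) (the same identity that produces the first term in the splitting of F(t) above), the case m=2 reads
u(t) = t^-α/Γ(1-α) A^-1u^0 - t^-2α/Γ(1-2α) A^-2u^0 + 1/2πi∫_Γ_θ,κ e^zt z^3α-1 (z^α+A)^-1A^-2 u^0 dz .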
Note that the singular part u_j^s(t) is a solution of an elliptic problem, while the regular part u^r(t) corresponds to the solution of a non-standard evolution problem with a relatively smooth initial value A^-m u^0∈Ḣ^2m(Ω). Since (z^α+A)^-1 maps Ḣ^2m(Ω) to Ḣ^2m+2(Ω), it follows that the regular part u^r(t) belongs to Ḣ^2m+2(Ω) and
u^r(t) _Ḣ^2m+2(Ω) = F^r(t) u^0_Ḣ^2m+2(Ω)≤ c t^-(m+1)α u^0 _L^2
In the next two subsections, we present error estimates for the Lagrange interpolation and the Ritz projection of functions in 2m+2, and then use the established results to prove high-order convergence of the finite element approximation to the regular part u^r(t).
The approximation to the singular part will be discussed in Section <ref>.
§.§ Finite element approximations to functions in 2m+2
We assume that the polygonal domain Ω is partitioned into a set 𝒦_h of shape-regular and locally quasi-uniform triangles with mesh size h=max_K∈𝒦_h diam(K), and denote by X_h the Lagrange finite element space of degree 2m+1 subject to the partition, i.e.,
X_h = { v_h ∈ H_0^1 :
v_h |_K ∈ℙ_2m+1 for all K ∈𝒦_h } ,
where ℙ_2m+1 denotes the space of polynomials of degree 2m+1.
Let P_h:L^2(Ω)→ X_h, R_h:1→ X_h and A_h: X_h→ X_h be the L^2-orthogonal projection, the Ritz projection operator, and the discrete elliptic operator, respectively, defined by
(P_hψ,χ)=(ψ,χ) ∀ψ∈ L^2(Ω), χ∈ X_h,
(∇ R_h ψ,∇χ) =(∇ψ,∇χ) ∀ψ∈Ḣ^1(Ω), χ∈ X_h,
(A_hψ,χ)=(∇ψ,∇χ) ∀ψ, χ∈ X_h.
We shall work with the following assumption on the triangulation of the domain.
We assume that the triangulation is locally refined towards the corners and edges of the domain,
such that both the Lagrange interpolation I_h:C(Ω)→ X_h
and the Ritz projection R_h:H^1_0(Ω)→ X_h
have optimal-order convergence, i.e.,
v-I_hv_L^2(Ω) + hv-I_hv_H^1(Ω) ≤ ch^2r+2v_Ḣ^2r+2(Ω)
v-R_hv_L^2(Ω) + hv-R_hv_H^1(Ω) ≤ ch^2r+2v_Ḣ^2r+2(Ω)
for v∈Ḣ^2r+2(Ω) and 0≤ r≤ m.
For example, in a two-dimensional polygonal domain Ω, Assumption <ref> is satisfied by the following type of graded mesh (see Proposition <ref> in Appendix):
ħ(x)∼{ |x-x_0|^1-γ h, h_*≤ |x-x_0|≤ d_0,
h_*, |x-x_0|≤ h_* ,
.
with γ∈(0, min(1,π/θ)/(2m+1)),
where ħ(x) denotes the spatially dependent diameter of triangles,
x_0 is a corner of the polygon with interior angle θ, h_*∼ h^1/γ, and d_0 is a constant such that D_0'={x∈Ω:|x-x_0|<2d_0} is a sector centred at the corner x_0.
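A direct implementation of this grading rule might look as follows; this is only a sketch under stated assumptions (away from the corner, i.e., for |x-x_0|>d_0, the mesh is taken quasi-uniform of size h, and the particular admissible value of γ is our own choice).

import numpy as np

def graded_mesh_size(x, x0, h, theta, m, d0, gamma=None):
    # Local mesh size following the grading rule above.
    if gamma is None:
        gamma = 0.5 * min(1.0, np.pi / theta) / (2 * m + 1)  # any admissible value works
    h_star = h ** (1.0 / gamma)
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(x0, dtype=float))
    if r <= h_star:
        return h_star
    if r <= d0:
        return r ** (1.0 - gamma) * h
    return h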
As a direct consequence of the approximation property (<ref>)–(<ref>), we have the following estimate in the negative Sobolev spaces.
Under Assumption <ref> with m≥ 1, the following estimate holds
R_h ϕ - ϕ_-r≤ ch^r+1 R_h ϕ - ϕ_1, with 1≤ r ≤ 2m.
For any ψ∈ r we let w = A^-1ψ∈r+2. By the duality argument, we have
R_h ϕ - ϕ_-r = sup_ψ∈ r⟨ R_h ϕ - ϕ,ψ⟩/ψ_ r
= sup_ψ∈ r (∇(R_h ϕ - ϕ),∇ w)/ψ_ r
=sup_ψ∈ r (∇ (R_h ϕ - ϕ),∇(w-I_h w))/ψ_ r.
Then using (<ref>) we derive
| (∇ (R_h ϕ - ϕ),∇(w-I_h w)) | ≤ R_h ϕ - ϕ_ 1 w-I_h w _ 1
≤ ch^r+1 R_h ϕ - ϕ_1 w _r+2
= ch^r+1 R_h ϕ - ϕ_1ψ_ r.
Then the desired result follows immediately.
§.§ High-order approximation to the regular part u^r(t)
In order to approximate the regular part u^r(t) in (<ref>),
we consider the following contour integral
u_h^r(t) = F_h^r(t) P_h u^0 := (-1)^m/2πi∫_Γ_θ,κ e^zt z^(1+m)α-1 (z^α+A_h)^-1A_h^-m P_hu^0 ẓ.
We shall establish an error estimate for u_h^r - u^r by using the following technical lemma.
The following estimate holds for v∈ H_0^1(Ω) and z∈Σ_θ with θ∈(π/2,π):
|z^α| v_L^2(Ω)^2 + ∇ v _L^2(Ω)^2
≤ c|z^αv_L^2(Ω)^2 + (∇ v,∇ v)|.
By <cit.>, we have that for any z∈Σ_θ
|z| v_L^2(Ω)^2 + ∇ v_L^2(Ω)^2 ≤ c|zv_L^2(Ω)^2 + (∇ v,∇ v)|.
Alternatively, let γ=v_L^2(Ω)^2 and β=∇ v_L^2(Ω)^2
=(∇ v,∇ v), and set ϕ = | arg(z)|; then we have
| z γ + β |^2 ≥ (|z| γcosϕ + β)^2 + (|z| γsinϕ)^2.
Therefore, we derive
| z γ + β | ≥ |z| γsinϕ and
| z γ + β | ^2 ≥ (βcosϕ + |z| γ)^2 + β^2sin^2ϕ≥β^2sin^2ϕ.
Then for ϕ∈[π-θ, θ], we have
2 | z γ + β | ≥ ( |z| γ + β) sinϕ≥ ( |z| γ + β) sinθ.
Meanwhile, for ϕ∈[0,π-θ], we have cosϕ≥cos(π-θ) > 0, so that
| z γ + β | ≥ |z| γcosϕ + β≥ |z| γcos(π-θ) + β≥ ( |z| γ + β)cos(π-θ).
This completes the proof of the lemma.
Let w = (z^α + A)^-1 A^-m v.
Appealing again to Lemma <ref>, we obtain
|z^α| A^m w _L^2^2 + ∇ A^m w_L^2^2
≤ c|((z^α+A)A^m w,A^m w)|≤ c v _L^2A^m w_L^2.
Consequently
A^m w _L^2≤ c|z^α|^-1 v _L^2 ∇ A^m w _L^2≤ c|z^α|^-1/2 v _L^2.
In view of (<ref>) and the resolvent estimate (<ref>), we can bound w _ 2 by
A^m w _Ḣ^2 = A^m+1 w _L^2
= (-z^α +z^α +A)(z^α+A)^-1v _L^2
≤ c( v _L^2+|z^α|A^m w_L^2)≤ c v_L^2.
Next, we aim to develop an error estimate between (z^α +A)^-1A^-mu^0
and its discrete analogue (z^α+A_h)^-1A_h^-mP_h u^0
for u^0∈ L^2. We begin with the following technical lemma.
Let u^0∈ L^2(Ω), and define { p_j }_j=1^m by
p_1 = A^-1u^0 and p_j = A^-1p_j-1 for j =2,…,m.
Moreover, define { p_j,h}_j=1^m ⊂ X_h by
p_1,h = A_h^-1 P_h u^0 and p_j,h =A_h^-1 p_j-1,h for j =2,…,m.
Then the following error estimates hold for j=1,2,…,m:
p_j,h - p_j_L^2(Ω) + h∇ (p_j,h-p_j)_L^2(Ω)≤ ch^2j u^0 _L^2(Ω)
and
p_j,h - p_j_H^-s≤ ch^2j+s u^0 _L^2(Ω) with 1 ≤ s ≤ 2m-2j+2.
Let σ_j = p_j - p_j,h.
By the definition, we have p_1,h = R_h p_1 and hence derive
σ_1 _L^2(Ω)+h∇σ_1_L^2(Ω) ≤ ch^2 u^0 _L^2(Ω).
This and the negative norm estimate in Lemma <ref> lead to
σ_1_H^-r(Ω)≤ ch^r+1σ_1_H^1(Ω)≤ ch^r+2 u^0 _L^2(Ω) with 1≤ r ≤ 2m.
Next, we prove (<ref>) and (<ref>) by mathematical induction.
Assume that (<ref>) and (<ref>) hold for j=k.
The functions p_k+1 and p_k+1,h respectively satisfy
(∇ p_k+1,∇χ) =(p_k,χ), ∀χ∈ H_0^1(Ω),
(∇ p_k+1,h,∇χ_h) =(p_k,h,χ_h), ∀χ_h∈ X_h.
Taking χ=χ_h and subtracting these two identities yields the following error equation
(∇σ_k+1, ∇χ_h) = (σ_k,χ_h), ∀χ_h∈ X_h.
Therefore, we have the following estimate
∇σ_k+1_L^2^2
= (∇σ_k+1, ∇ (p_k+1 - R_h p_k+1)) + (σ_k, R_h p_k+1 - p_k+1) + (σ_k, σ_k+1)
≤∇σ_k+1_L^2∇ (p_k+1 - R_h p_k+1)_L^2
+ ∇σ_k_L^2 p_k+1 - R_h p_k+1_H^-1 + σ_k_H^-1∇σ_k+1_L^2.
According to (<ref>), (<ref>) with j=k and s=-1, (<ref>) and Lemma <ref>, we derive
∇σ_k+1_L^2^2
≤ c ∇ (I-R_h)p_k+1_L^2^2
+ ∇σ_k_L^2 (I-R_h)p_k+1_H^-1 + cσ_k_H^-1^2
≤ c h^4k+2 u^0 _L^2^2
Next, we show the error estimate in H^-r with 0 ≤ r ≤ 2m-2k by duality argument.
For any ∈ r we let ϕ = A^-1. Then we observe
σ_k+1_H^-r = sup_∈ r⟨σ_k+1, ⟩/_ r
= sup_∈ r( ∇σ_k+1, ∇ϕ)/_ r
= sup_∈ r( ∇σ_k+1, ∇ (ϕ - R_h ϕ)) + ( σ_k, R_h ϕ - ϕ) + ( σ_k, ϕ)/_ r
Using (<ref>) and (<ref>), we derive
sup_∈ r( ∇σ_k+1, ∇ (ϕ - R_h ϕ)) /_ r ≤sup_∈ r∇σ_k+1_L^2∇ (I - R_h) ϕ_L^2/_ r
≤sup_∈ r ch^2k+1 u^0 _L^2 c h^r+1ϕ_r+2/_ r≤ c h^2k+2+r.
Meanwhile, by duality between -1 and 1, we apply (<ref>) and (<ref>) with j=k and s=1 to derive
sup_∈ r( σ_k, R_h ϕ - ϕ)/_ r ≤sup_∈ rσ_k_-1 (I - R_h) ϕ_1/_ r
≤sup_∈ r ch^2k+1 u^0 _L^2 c h^r+1ϕ_r+2/_ r≤ c h^2k+2+r.
Similarly, by means of the duality between -2-r and 2+r, we apply again (<ref>) with j=k and s=2+r to derive
sup_∈ r( σ_k, ϕ)/_ r ≤sup_∈ rσ_k_-r-2ϕ_2+r/_ r
≤sup_∈ r ch^2k+r+2 u^0 _L^2_ r/_ r≤ c h^2k+2+r.
This completes the proof of the lemma.
Let u^0∈ L^2(Ω), z∈Σ_θ,
w=(z^α+A)^-1p with p=A^-mu^0, and w_h=(z^α+A_h)^-1 p_h with p_h = A_h^-m P_h u^0.
Then there holds
w_h-w _L^2(Ω) + h∇ (w_h-w)_L^2(Ω)≤ ch^2m+2 u^0 _L^2(Ω).
Let e=w-w_h and σ = p - p_h.
Then Lemma <ref> implies the estimate
σ_L^2(Ω)+h∇σ_L^2(Ω) ≤ ch^2mu^0_L^2(Ω).
and the negative norm error estimate
σ_H^-r(Ω)≤ ch^r+2mu^0_L^2(Ω), with 1≤ r≤ 2.
Moreover, w and w_h respectively satisfy
z^α(w, χ)+(∇ w,∇χ) =(p,χ), ∀χ∈ H_0^1(Ω),
z^α(w_h,χ_h)+(∇ w_h,∇χ_h) =(p_h,χ_h), ∀χ_h∈ X_h.
Subtracting these two identities yields the following error equation of e
z^α(e,χ_h) + (∇ e, ∇χ_h) = (σ,χ_h), ∀χ_h∈ X_h.
This and Lemma <ref> imply that
|z^α| e_L^2(Ω)^2 + ∇ e _L^2(Ω)^2
≤ c |z^α e _L^2(Ω)^2 + (∇ e, ∇ e)|
= c |z^α(e,w-R_h w) + (∇ e, ∇(w-R_h w)) - (σ, w_h - R_h w) |
By using the Cauchy-Schwartz inequality and the duality between 1 and -1, we arrive at
|z^α| e _L^2^2 + ∇ e _L^2(Ω)^2
≤ c (|z^α| w-R_h w_L^2^2
+ ∇(w-R_h w)_L^2^2 + σ_H^-1^2 ).
According to (<ref>), (<ref>) and (<ref>), we derive
|z^α| e_L^2^2 + ∇ e_L^2^2
≤ ch^4m+2(|z^α| (z^α+A)^-1 u^0_1^2
+ (z^α+A)^-1u^0_2^2 + u^0 _L^2^2 )
≤ ch^4m+2 u^0 _L^2^2.
This gives the desired bound on ∇ e_L^2. Next, we bound
e_L^2 using a duality argument. For any fixed v∈ L^2(Ω), we set
ψ=(z^α+A)^-1v.
Then the preceding argument implies
|z^α| ψ-R_h ψ_L^2^2 + ∇ (ψ-R_h ψ)_L^2^2≤ ch^2 v_L^2^2.
Then we have by duality
e _L^2 = sup_v∈ L^2|(e,v)|/v_L^2
=sup_v∈ L^2|z^α(e,ψ)+(∇ e,∇ψ)|/v_L^2.
Then the desired estimate follows from (<ref>), (<ref>), (<ref>) and (<ref>) by
|z^α(e,ψ)+(∇ e,∇ψ)|
= |z^α(e,ψ-R_h ψ)+(∇ e,∇ (ψ-R_h ψ))+(σ,R_h ψ)|
≤ |z^α(e,ψ-R_h ψ)+(∇ e,∇ (ψ-R_h ψ))|+ |(σ,R_h ψ-ψ)| + |(σ,ψ)|
≤ |z^α|^1/2e_L^2 |z^α|^1/2ψ-R_h ψ_L^2
+ ∇ e_L^2∇(ψ-R_h ψ) _L^2
+ σ_H^-1∇(R_h ψ-ψ) _L^2 + σ_H^-2ψ_2
≤ ch^2m+2 u^0 _L^2ψ_2≤ ch^2m+2 u^0 _L^2 v _L^2.
This completes the proof of the lemma.
Now we can state error estimates for the regular part.
Let u^r and u_h^r be the functions defined by (<ref>) and (<ref>),
respectively. Then for t>0, there holds:
(u^r-u_h^r)(t)_L^2 + h∇ (u^r-u_h^r)(t)_L^2≤ ch^2m+2 t^-(1+m)αu^0_L^2.
For u^0∈ L^2(Ω), by the solution representations, the error (u^r-u_h^r)(t) can be represented as
|(u^r-u_h^r)(t)|=|1/2πi∫_Γ_θ,κ e^zt z^(1+m)α-1 (w_h(z)-w(z)) ẓ|,
with w(z)=(z^α +A)^-1A^-mu^0 and w_h(z)=(z^α+A_h)^-1A_h^-mP_hu^0.
By Lemma <ref>, and taking κ=t^-1
in the contour Γ_θ,κ, we have
(u^r-u_h^r)(t) _L^2≤ ch^2m+2 u^0 _L^2∫_Γ_θ,κ e^(z)t |z|^(1+m)α-1 |ẓ|
≤ ch^2m+2 t^-(1+m)αu^0_L^2.
A similar argument also yields the H^1-estimate.
A slight modification leads to the error estimate for even weaker initial data
u^0∈Ḣ^s with some s∈[-1,0].
In particular, let u^r and u_h^r be the functions defined by (<ref>) and (<ref>),
respectively. Then for t>0, there holds
(u^r-u_h^r)(t)_L^2 + h∇ (u^r-u_h^r)(t)_L^2≤ ch^2m+2+s t^-(1+m)αu^0_Ḣ^s.
The argument could be further extended to rougher initial data, such as the Dirac delta function u^0 = δ_x_*
in two dimensions with a fixed x_* ∈Ω. Then we consider the splitting
u^r(t)-u_h^r(t) = F^r(t) u^0- F_h^r(t) P_h u^0
= (F^r(t) u^0- F^r(t) P_h u^0) + (F^r(t) P_h u^0 - F_h^r(t)P_h u^0).
The first term could be bounded using the smoothing property (<ref>) and the L^∞-stability of the L^2 projection (see <cit.>)
F^r(t) u^0- F^r(t) P_h u^0_L^2 = sup_ϕ∈ L^2(F^r(t) u^0- F^r(t) P_h u^0, ϕ)/ϕ_L^2
≤sup_ϕ∈ L^2 (I - P_h) F^r(t) ϕ_L^∞/ϕ_L^2≤ C sup_ϕ∈ L^2 (I - I_h) F^r(t) ϕ_L^∞/ϕ_L^2
≤ c h^2m+1sup_ϕ∈ L^2 F^r(t) ϕ_2m+2/ϕ_L^2≤ c h^2m+1 t^-(1+m)α,
where we have used the L^∞ error estimate for the Lagrange interpolation (see Lemma <ref>) in the second to last inequality.
Meanwhile, the second term in (<ref>) could be bounded using the estimate (<ref>)
and the inverse inequality, i.e.,
F^r(t) P_h u^0 - F_h^r(t)P_h u^0 _L^2 ≤ ch^2m+2 t^-(1+m)αP_h u^0_L^2
= ch^2m+2 t^-(1+m)αsup_ϕ∈ L^2|P_hϕ(x_*)|/ϕ_L^2≤ c h^2m+1 t^-(1+m)α.
As a result, we have the following error estimate for the Dirac delta initial condition in two dimensions
(u^r-u_h^r)(t) _L^2≤ c h^2m+1 t^-(1+m)α.
This convergence rate is consistent with our numerical experiments, cf. Tables <ref>–<ref>.
§ TIME DISCRETIZATION
In the preceding section, we have proposed a splitting of the exact solution into a time-dependent regular part plus several time-independent singular parts. Next, we shall develop and analyze a time stepping scheme for approximating the regular part using convolution quadrature.
We shall focus on time-stepping schemes with a uniform temporal mesh. Specifically, let t_n=nτ, n=0,1,…,N, be a uniform partition of the time interval [0,T] with a time stepsize τ=N^-1T, N∈ℕ, and recall that the generating function of BDF method of order k, k=1,…,6, is defined by
δ_τ(ζ ):=δ(ζ)/τ with δ(ζ) = ∑_j=1^k 1/j (1-ζ )^j.
The BDFk method
is known to be A(ϑ_k)-stable with angle ϑ_k= 90^∘, 90^∘, 86.03^∘,
73.35^∘, 51.84^∘, 17.84^∘ for k = 1,2,3,4,5,6, respectively <cit.>.
Then we apply the following convolution quadrature to approximate the semidiscrete solution (<ref>):
U_h^n,r = (-1)^m/2πi∫_Γ_θ,κ^τ e^zt_nδ_τ(e^-zτ)^(1+m)α-1 (δ_τ(e^-zτ)^α+A_h)^-1A_h^-m P_h u^0 ẓ.
where the contour Γ_θ,κ^τ is
Γ_θ,κ^τ :={ z∈Γ_θ,κ:|Im(z)|≤π/τ}
oriented with an increasing imaginary part. The evaluation of the contour integral in (<ref>) is equivalent to solving the following time-stepping scheme for U_h^n,r:
τ^-α∑_j=0^n ω_j^(α) U_h^n-j,r + A_h U_h^n,r =(-1)^m τ^-(1+m)αω_n^(1+m)α-1 A_h^-m P_hu^0, for 0 ≤ n≤ N.
Here the quadrature weights (ω_j^(β))_j=0^∞
are given by the coefficients in the following power series expansion
δ_τ(ζ)^β=1/τ^β∑_j=0^∞ω_j^(β)ζ ^j.
with the generating function (<ref>).
Generally, those weights can be evaluated efficiently via recursion or discrete
Fourier transform <cit.>.
Note that the time stepping scheme (<ref>) begins with n=0,
which is different from the usual time stepping schemes for evolution problems.
The idea is closely related to correct the initial steps of the regular time stepping scheme
<cit.>.
See a brief explanation in <cit.>.
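For illustration, the quadrature weights and the time stepping loop can be realized in a few lines of code. The following NumPy sketch (dense linear algebra; A_h and the coefficient vector of P_h u^0 are assumed to be assembled beforehand by a finite element library) computes the weights ω_j^(β) by the J.C.P. Miller recurrence for fractional powers of a power series, which is one concrete form of the recursion mentioned above, and then marches the scheme starting from n=0.

```python
import numpy as np
from math import comb

def bdf_delta_coeffs(k):
    """Power series coefficients d_0,...,d_k of delta(zeta) = sum_{j=1}^k (1/j)(1-zeta)^j."""
    d = np.zeros(k + 1)
    for j in range(1, k + 1):
        for i in range(j + 1):
            d[i] += ((-1) ** i) * comb(j, i) / j
    return d

def cq_weights(beta, k, N):
    """Weights omega_j^(beta), j = 0..N, in delta_tau(zeta)^beta = tau^(-beta) sum_j omega_j^(beta) zeta^j,
    obtained by the J.C.P. Miller recurrence for (sum_j d_j zeta^j)^beta with d_0 != 0."""
    d = bdf_delta_coeffs(k)
    w = np.zeros(N + 1)
    w[0] = d[0] ** beta
    for n in range(1, N + 1):
        s = sum(((beta + 1) * j - n) * d[j] * w[n - j] for j in range(1, min(n, k) + 1))
        w[n] = s / (n * d[0])
    return w

def cq_regular_part(A_h, Pu0, alpha, m, k, tau, N):
    """March U_h^{n,r}, 0 <= n <= N, for the scheme
    tau^-alpha sum_{j=0}^n omega_j^(alpha) U^{n-j} + A_h U^n
        = (-1)^m tau^{-(1+m)alpha} omega_n^((1+m)alpha-1) A_h^-m P_h u^0."""
    w_a = cq_weights(alpha, k, N)
    w_b = cq_weights((1 + m) * alpha - 1, k, N)
    g = np.linalg.solve(np.linalg.matrix_power(A_h, m), Pu0) if m > 0 else Pu0.copy()
    lhs = tau ** (-alpha) * w_a[0] * np.eye(A_h.shape[0]) + A_h
    U = np.zeros((N + 1, A_h.shape[0]))
    for n in range(N + 1):
        hist = sum(w_a[j] * U[n - j] for j in range(1, n + 1))
        rhs = (-1) ** m * tau ** (-(1 + m) * alpha) * w_b[n] * g - tau ** (-alpha) * hist
        U[n] = np.linalg.solve(lhs, rhs)
    return U
```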
The next Lemma shows the equivalence between the convolution quadrature (<ref>) and
the time stepping scheme (<ref>).
The function U_h^n,r given by the contour integral (<ref>) is the solution of
the time stepping scheme (<ref>) for all 0≤ n≤ N.
We begin with the time stepping scheme (<ref>).
By multiplying both sides of the relation (<ref>) by ζ ^n, summing over n from 0 to ∞
and collecting terms, we obtain
∑_n=0^∞ζ^n (τ^-α∑_j=0^n ω_j^(α) U_h^n-j,r) + A_h
∑_n=0^∞ U_h^n,rζ^n
=(-1)^m τ^-1 A_h^-m P_hu^0 (τ^1-(1+m)α∑_n=0^∞ω_n^(1+m)α-1ζ^n )
=(-1)^m τ^-1 A_h^-m P_hu^0 δ_τ(ζ)^(1+m)α-1.
For any sequence (v^n)_n=0^∞, we denote its generating function by
v(ζ) =∑_n=0^∞ v^n ζ^n.
Then the leading term in the above relation can be written as
∑_n=0^∞ζ^n (τ^-α∑_j=0^n ω_j^(α) U_h^n-j,r)
= τ^-α∑_j=0^∞ω_j^(α)ζ^j
( ∑_n=j^∞ U_h^n-j,rζ^n-j)
= δ_τ(ζ)^αU_h^r(ζ).
Therefore we obtain
U_h^r(ζ) = (-1)^m τ^-1δ_τ(ζ)^(1+m)α-1
( δ_τ(ζ)^α + A_h)^-1 A_h^-m P_hu^0.
Since U_h^r(ζ) is analytic with respect
to ζ in the unit disk on the complex plane ℂ, Cauchy's integral formula and
the change of variables ζ =e^-zτ lead to the following representation for arbitrary ϱ∈(0,1)
U_h^n,r = 1/2 πi∫_|ζ |=ϱζ ^-n-1U_h^r(ζ) ζ̣= τ/2πi∫_Γ^τ e^zt_nU_h^r(e^-zτ) ẓ
= (-1)^m/2πi∫_Γ^τ
e^zt_nδ_τ(e^-zτ)^(1+m)α-1
( δ_τ(e^-zτ)^α + A_h)^-1 A_h^-m P_hu^0 ẓ
where Γ^τ is given by
Γ^τ:={ z=-logϱ/τ+i y: y∈ℝ |y|≤π/τ}.
Note that δ_τ(e^-zτ)^(1+m)α-1
( δ_τ(e^-zτ)^α + A_h)^-1 is analytic for z∈Σ_θ,κ^τ, which is a region enclosed by Γ^τ, Γ^τ_θ,κ and the two lines
Γ_±^τ := ℝ±iπ/τ (oriented from left to right).
Using the periodicity of e^-zτ and Cauchy's theorem, we deform the contour Γ^τ
to Γ_θ,κ^τ in the integral (<ref>) to obtain the desired representation (<ref>).
Finally, we study the error of convolution approximation. To this end, we
need the following lemma on the sectorial property and approximation property
of the generating function δ_τ(ζ).
See the detailed proof in <cit.>.
For any ε, there exists θ_ε∈ (π/2,π) such that for any
θ∈ (π/2,θ_ε),
there exist positive constants c,c_1,c_2 (independent of τ) such that
c_1|z|≤
|δ_τ(e^-zτ)|≤ c_2|z|,
δ_τ(e^-zτ)∈Σ_π-ϑ_k+ε,
|δ_τ(e^-zτ)-z|≤ cτ^k|z|^k+1,
|δ_τ(e^-zτ)^α-z^α|≤ cτ^k|z|^k+α,
∀ z∈Γ_θ,κ^τ,
where the contour Γ_θ,κ^τ⊂ℂ is defined by
Γ_θ,κ^τ :=
{ z=ρ e^±iθ: ρ≥κ, |Im(z)|≤π/τ}∪{z=κ e^ iφ: |φ |≤θ}.
Let U_h^n,r be the function defined by the convolution quadrature (<ref>),
and u_h^r be the function defined by the contour integral (<ref>). Then we have
U_h^n,r - u_h^r(t_n) _L^2≤ c τ^k t_n^-k-mα u^0 _Ḣ^s for any s∈ [-1,0].
Let K(z)= z^(1+m)α-1 (z^α + A_h)^-1. Then we may split the error as
U_h^n,r - u_h^r(t_n) = (-1)^m/2πi∫_Γ^τ_θ,κ
e^zt_n(K(z) - K(δ_τ(e^-zτ))) A_h^-m P_hu^0 ẓ
+ (-1)^m/2πi∫_Γ_θ,κ\Γ^τ_θ,κ
e^zt_n K(z) A_h^-m P_hu^0 ẓ =: I_1 + I_2.
Using the resolvent estimate (<ref>) and Lemma <ref>, we derive
K(z) - K(δ_τ(e^-zτ))_L^2→ L^2≤ c τ^k |z|^mα+k-1 .
As a result,
choosing κ=t_n^-1 in the contour Γ_θ,κ^τ, we obtain an estimate for I_1:
I_1 _L^2 ≤
c τ^k A_h^-m P_h u^0 _L^2(Ω)(∫_κ^∞ e^κ t_ncosθρ^mα+k-1 ρ̣+ ∫_-θ^θ e^κ t_n cosφκ^mα+k φ̣)
≤ c τ^k(t_n^-mα-k+κ^mα+k) A_h^-m P_h u^0_L^2(Ω)≤ c τ^k t_n^-k-mαA_h^-m P_h u^0 _L^2(Ω)
≤ c τ^k t_n^-k-mα u^0 _Ḣ^s,
for any s∈[-1,0]. The last inequality follows from the stability of P_h in Ḣ^s for s∈[-1,0]:
A_h^-m P_h u^0 _L^2(Ω)≤ c A_h^-1 P_h u^0 _1≤ c P_h u^0 _-1≤ c u^0 _-1.
Meanwhile, for the term I_2, we apply
the resolvent estimate (<ref>) and Lemma <ref> to derive
I_2 _L^2 ≤ c ∫_π/τsinθ^∞ e^-c ρ t_nρ^mα-1A_h^-m P_h u^0 _L^2(Ω) ρ̣
≤ c τ^k ∫_π/τsinθ^∞ e^-c ρ t_nρ^mα+k-1A_h^-m P_h u^0 _L^2(Ω) ρ̣
≤ cτ^k t_n^-mα-kA_h^-m P_h u^0 _L^2(Ω)≤ c τ^k t_n^-mα-k u^0 _-1.
This completes the proof of the theorem.
In view of Remarks <ref>–<ref> and Theorem <ref>, we have the
following error estimate for the fully discrete solution.
Assume that u^0 ∈ Ḣ^s with some s∈[-1,0], and u is the solution to (<ref>) with f=0.
Let U_h^n,r be the function defined by the convolution quadrature (<ref>).
Suppose that ϕ_j,h∈ X_h is an approximation to A^-j u^0.
Then the fully discrete solution
U_h^n = ∑_j=1^m (-1)^j+1t_n^-jα/Γ(1-jα)ϕ_j,h + U_h^n,r
satisfies the following error estimate
U_h^n - u(t_n) _L^2≤ c (h^2m+2+s t_n^-(1+m)α + τ^k t_n^-k-mα) u^0 _Ḣ^s + c∑_j=1^m t_n^-jαϕ_j,h - A^-j u^0 _L^2 .
§ DISCUSSION ON THE APPROXIMATION TO THE SINGULAR PART ∑_J=1^M U^S_J(T)
The singular part ∑_j=1^m u^s_j(t) of the solution in (<ref>), with
u_j^s(t) = (-1)^j+1t^-jα/Γ(1-jα) A^-j u^0 ,
should be approximated separately. Since u_j^s(t) can be computed by solving several elliptic equations, its computational cost is much smaller than the computation of the regular part (which requires solving an evolution problem; see the full discretization in the next section). Therefore, in general, the singular part can be solved by a much smaller mesh size without significantly increasing the overall computational cost.
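As a concrete illustration of this step, the following sketch (NumPy/SciPy; the assembled mass and stiffness matrices M and K, with A_h = M^-1K, and the coefficient vector of P_h u^0 are assumed to be provided by a finite element library) evaluates the singular part by m successive discrete elliptic solves.

```python
import numpy as np
from scipy.sparse.linalg import spsolve
from scipy.special import gamma

def singular_part(M, K, Pu0, m, alpha, times):
    """Evaluate sum_{j=1}^m (-1)^{j+1} t^{-j*alpha} / Gamma(1 - j*alpha) * A_h^{-j} P_h u^0
    at the given times.  M, K are scipy.sparse mass and stiffness matrices, so applying
    A_h^{-1} to a coefficient vector v amounts to solving K q = M v."""
    q, powers = Pu0.copy(), []
    for j in range(1, m + 1):
        q = spsolve(K.tocsc(), M @ q)      # one elliptic solve per power: q_j = A_h^{-1} q_{j-1}
        powers.append((j, q.copy()))
    out = np.zeros((len(times), Pu0.size))
    for i, t in enumerate(times):
        for j, qj in powers:
            out[i] += (-1) ** (j + 1) * t ** (-j * alpha) / gamma(1 - j * alpha) * qj
    return out
```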
In the following, we discuss several cases in which the singular part can be solved with high-order accuracy without using smaller meshes.
[Piecewise smooth initial data]
If the initial value u^0∈ L^2(Ω) is globally discontinuous (therefore nonsmooth) but piecewise smooth, separated by a smooth closed interface Γ⊂Ω, then one can approximate q_j=A^-ju^0 by q_j,h=A_h^-jP_hu^0 using isoparametric finite element method with triangulations fitting the interface. The computation of q_j,h=A_h^-jP_hu^0 is equivalent to solving the following several standard elliptic equations:
A_h q_k,h = q_k-1,h, k=1,…,j, q_0,h=P_hu^0 .
By denoting Ω=Ω_1∪Ω_2∪Γ, where Ω_1 and Ω_2 are separated by a smooth interface Γ, the following high-order convergence can be achieved for isoparametric finite elements of degree 2m+1 fitting the interface Γ:
q_j,h - q_j _L^2≤ Ch^2m+2u^0_H^2m+2_ piecewise(Ω) ,
where Ḣ^2m+2_ piecewise(Ω)={ g ∈ L^2(Ω): g|_Ω_j∈Ḣ^2m+2(Ω_j) j=1,2}.
This shows that the singular part in (<ref>) can be approximated with high-order accuracy for piecewise smooth initial data.
The error estimate in (<ref>) can be proved by using the following result (for isoparametric finite elements of degree 2m+1 fitting the interface):
A_h^-1f - A^-1 f _L^2≤ Ch^2m+2f_Ḣ^2m+2_ piecewise(Ω),
which was originally proved in <cit.> for a bounded smooth domain Ω which contains the interface Γ. If Ω is a polygon which contains the interface Γ, then the interface is away from the corners of the polygon (therefore the functions in Ḣ^2r+2_ piecewise(Ω) are locally in the classical Sobolev space H^2r+2 near the interface), it follows that the error estimates in <cit.> near the interface still hold for functions in Ḣ^2r+2_ piecewise(Ω). Therefore, the combination of Proposition <ref> and the error estimates in <cit.> yields (<ref>) for the triangulations satisfying (<ref>).
Since
q_k_Ḣ^2m+2_ piecewise(Ω)
=A^-1q_k-1_Ḣ^2m+2_ piecewise(Ω)≤ Cq_k-1_Ḣ^2m_ piecewise(Ω) ,
iterating this inequality yields that q_k_Ḣ^2m+2_ piecewise(Ω)≤ Cu^0_H^2m+2_ piecewise(Ω). By using this regularity and (<ref>), we have
q_k,h - q_k _L^2 = A_h^-1 q_k-1,h - A^-1 q_k-1_L^2
≤ A_h^-1 (q_k-1,h -P_hq_k-1) _L^2 + A_h^-1P_hq_k-1 - A^-1 q_k-1_L^2
≤ q_k-1,h - P_hq_k-1_L^2 + A_h^-1P_hq_k-1 - A^-1 q_k-1_L^2
≤ q_k-1,h - q_k-1_L^2 + q_k-1 - P_hq_k-1_L^2 + A_h^-1P_hq_k-1 - A^-1 q_k-1_L^2
≤ q_k-1,h - q_k-1_L^2 + Ch^2m+2q_k-1_Ḣ^2m+2_ piecewise(Ω) + Ch^2m+2q_k-1_Ḣ^2m+2_ piecewise(Ω)
≤ q_k-1,h - q_k-1_L^2 + Ch^2m+2u^0_H^2m+2_ piecewise(Ω).
By iterating this inequality for k=1,…,j, and using the following basic result:
q_0,h - q_0_L^2 = P_hu^0 -u^0_L^2≤ Ch^2m+2u^0_H^2m+2_ piecewise(Ω) ,
we obtain the high-order convergence in (<ref>).
[Dirac–Delta point source]
If the initial value is a Dirac–Delta point source centered at some interior point x_0∈Ω, i.e., u^0=δ_x_0, then the function
w_1 = A^-1u^0 - q̂_1 , q̂_1(x)=-1/2πln|x-x_0|,
is the solution of the following boundary value problem:
{
-Δ w_1 = 0 Ω,
w_1 = - q̂_1 x∈∂Ω,
.
Let χ be a smooth cut-off function such that χ=1 in a neighborhood of the boundary ∂Ω and χ=0 in a neighborhood of x_0. Then χq̂_1∈ C^∞ and
{
-Δ(w_1-χq̂_1) = Δ(χq̂_1)∈ C^∞_0(Ω) ⊂Ḣ^2m(Ω) Ω,
w_1-χq̂_1 = 0 ∂Ω.
.
This implies that w_1-χq̂_1 ∈ A^-1Ḣ^2m(Ω) = Ḣ^2m+2(Ω).
Since the explicit expression of Δ(χq̂_1) is known, we can approximate w_1-χq̂_1 by the finite element function A_h^-1Δ(χq̂_1) and, correspondingly, approximate q_1=A^-1u^0 by
q_1,h=q̂_1 + χq̂_1 + A_h^-1Δ(χq̂_1). The error of this approximation can be estimated as follows:
q_1,h-q_1_L^2(Ω)≤ Ch^2m+2w_1-χq̂_1_Ḣ^2m+2(Ω)≤ Ch^2m+2 .
Since w_1 is in H^2m+2(Ω), it follows that A^-1w_1 can be approximated by A^-1_hw_1 with an error bound of O(h^2m+2). Therefore, in order to compute q_2=A^-2u^0=A^-1q̂_1+A^-1w_1 with an error bound of O(h^2m+2), it suffices to approximate A^-1q̂_1 with the desired accuracy.
This can be done similarly as the approximation of A^-1u^0 by utilizing the following fact: The function
q̂_2(x) = 1/8π|x-x_0|^2ln|x-x_0| + c|x-x_0|^2, with c = -1/8π,
satisfies the equation -Δq̂_2 = q̂_1. Therefore, the function
w_2 = A^-1q̂_1 - q̂_2 is the solution of the following boundary value problem:
{
-Δ w_2 = 0 Ω,
w_2 = - q̂_2 x∈∂Ω ,
.
which is in the same form as (<ref>). Therefore, A^-1q̂_1 can be computed with high-order accuracy similarly as the above-mentioned computation of A^-1u^0.
Repeating this process will yield high-order approximations to q_j=A^-ju^0 with the following error bound:
q_j,h-q_j_L^2(Ω)≤ Ch^2m+2 .
This shows that the singular part in (<ref>) can be approximated with high-order accuracy if the initial value is a Dirac–Delta point source.
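The two identities used above, namely that q̂_1 is harmonic away from x_0 and that -Δq̂_2 = q̂_1 away from x_0, can be checked symbolically; the short SymPy sketch below uses the normalization stated above (with x_0 placed at the origin).

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r2 = x**2 + y**2                                   # squared distance to the source point
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)

q1 = -sp.log(r2) / (4 * sp.pi)                     # -(1/2pi) ln r, written via ln(r^2)
q2 = r2 * sp.log(r2) / (16 * sp.pi) - r2 / (8 * sp.pi)   # (1/8pi) r^2 ln r - (1/8pi) r^2

print(sp.simplify(lap(q1)))        # 0: q1 is harmonic away from the source point
print(sp.simplify(-lap(q2) - q1))  # 0: verifies -laplace(q2) = q1 away from the source point
```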
[Dirac measure concentrated on an interface]
If the initial value is a Dirac measure concentrated on an oriented interface Γ⊂Ω, i.e., u^0=δ_Γ with
⟨ u^0 , v⟩ = ⟨δ_Γ , v⟩ := ∫_Γ v ṣ for all v ∈Ḣ^1/2+μ, μ>0.
Then the function A^-1u^0 can be approximated by A_h^-1P_hu^0 with error of O(h^2m+2) in L^2(Ω) by using a locally refined mesh towards the interface Γ; see e.g., <cit.>.
Similarly,
A^-2u^0 - A_h^-2P_hu^0_L^2 ≤A^-2u^0 - A_h^-1P_hA^-1u^0_L^2
+ A_h^-1P_h(A^-1u^0-A_h^-1P_hu^0)_L^2
≤(A^-1 - A_h^-1P_h)A^-1u^0_L^2
+ Ch^2m+2 .
Since A^-1u^0 is more regular than u^0=δ_Γ, the locally refined mesh also yields optimal-order approximation
(A^-1 - A_h^-1P_h)A^-1u^0_L^2≤ Ch^2m+2.
The approximation to A^-ju^0 is similar. The details are omitted.
Therefore, the singular part in (<ref>) can be approximated with high-order accuracy if the initial value is a Dirac measure concentrated on an interface.
[Nonsmooth source term]
We consider the subdiffusion problem with an inhomogeneous source term g(t)f(x) where g and f are respectively temporally and spatially dependent functions, i.e.,
∂_t^α u(t) + A u(t) = g(t)f, 0<t≤ T, with u(0)=0.
By means of Laplace transform, the solution to (<ref>) could be represented by
u(t) = 1/2πi∫_Γ_θ,κ e^ztĝ(z) (z^α + A)^-1 f ẓ,
where ĝ(z) denotes the Laplace transform of g.
Using the identity (<ref>), we have the splitting
u(t) = ∑_j=1^m u_j^s(t) + u^r(t)
where
∑_j=1^m u_j^s(t) = ∑_j=0^m-1(-1)^j/2πi A^-(j+1) f ∫_Γ_θ,κ
e^ztĝ(z) z^jα ẓ =: ∑_j=0^m-1((-1)^j A^-(j+1) f) G_j (t)
u^r(t) = (-1)^m/2πi∫_Γ_θ,κ e^zt z^mαĝ(z) (z^α+A)^-1A^-m f ẓ.
Next, we briefly introduce the approximation to u^r(t). Using the argument in Section <ref>,
we apply the semidiscrete finite element method:
u_h^r(t) = (-1)^m/2πi∫_Γ_θ,κ e^zt z^mαĝ(z) (z^α+A_h)^-1A_h^-m P_h f ẓ.
By assuming that g ∈ C^⌊ mα⌋ + 1[0,T], then the Taylor expansion and Lemmas <ref> and <ref> imply the following error estimate
u^r(t) - u_h^r(t) _L^2≤ c h^2m+2 f _L^2(∑_ℓ=0^⌊ mα⌋ |g^(ℓ)(0)| t^-mα+ℓ
+∫_0^t |g^⌊ mα⌋ + 1(t-s)|s^-mα+⌊ mα⌋ ṣ)
We then apply convolution quadrature to discretize in the time variable.
Let δ(·) be the generating function of BDFk
method defined in (<ref>). By assuming that g∈ C^K[0,T] with K=⌊ (m-1)α⌋ + k,
we apply the Taylor expansion to derive
u_h^r(t) = ∑_ℓ=0^K(-1)^m[ g^(ℓ)(0)]/2πi∫_Γ_θ,κ e^zt z^mα - ℓ (z^α+A_h)^-1 A_h^-m P_h f ẓ
+ (-1)^m1/2πi∫_Γ_θ,κ e^zt z^mαR_K(z) (z^α+A_h)^-1 A_h^-m P_h f ẓ
where R_K(t) = t^K/K!*g^(K+1) denotes the remainder of the Taylor series.
Then we consider the time stepping approximation by convolution quadrature:
U_h^r,n = ∑_ℓ=0^K(-1)^m[ g^(ℓ)(0)]/2πi∫_Γ_θ,κ^τ e^zt_nδ_τ(e^-zτ)^mα - ℓ (δ_τ(e^-zτ)^α+A_h)^-1 A_h^-m P_h f ẓ
+ (-1)^m1/2πi∫_Γ_θ,κ^τ e^ztδ_τ(e^-zτ)^mαR_K(δ_τ(e^-zτ)) (δ_τ(e^-zτ)^α+A_h)^-1 A_h^-m P_h f ẓ
where R_K(ξ) = ∑_ℓ=0^∞ R_K(t_ℓ) ξ^ℓ. Note that the fully discrete scheme could be solved via
a time stepping manner. Then using the argument in Section <ref>, we have the error estimate
u_h^r(t_n) - U_h^r,n_L^2≤ c τ^k ( ∑_ℓ=0^K |g^(ℓ)(0)| t_n^ℓ-k-(m-1)α+
∫_0^t |g^(K+1)(s)| (t-s)^ℓ-k-(m-1)α ṣ) f _L^2.
Similarly, we can approximate the function G_j (t) in u^s_j(t) by
using convolution quadrature generated by BDFk. Then we only need to solve an elliptic problem A^-(j+1)f in u^s_j(t) accurately,
see Example <ref>-<ref>.
Moreover, the above argument could be further generalized to the problem
∂_t^α u(t) + A u(t) = ∑_i=1^B g_i(t) f_i, 0<t≤ T, with u(0)=0,
where g_i and f_i are respectively temporally and spatially dependent functions for all i=1,…,B.
§ NUMERICAL EXPERIMENTS
In this section, we present numerical experiments to support the theoretical analysis and to illustrate the high-order convergence of the proposed method for nonsmooth initial data.
Throughout, we consider a two-dimensional subdiffusion model (<ref>) in a unit square domain Ω=(0,1)^2⊂ℝ^2.
In our computation, we let the spatial mesh size be h_j = h_0/2^j and the step size be τ_j = τ_0/2^j,
where h_0 and τ_0 will be specified later.
The errors are computed by the Cauchy difference
E_h_j= u_τ_ ref, h_j - u_τ_ ref, h_j+1_L^2(Ω),
E_τ_j = u_τ_j, h_ ref - u_τ_j+1, h_ ref_L^2(Ω),
and the convergence orders are computed by using the following formulae:
spatial convergence order = -(log(E_h_j+1) - log(E_h_j))/log 2,
temporal convergence order = -(log(E_τ_j+1) - log(E_τ_j))/log 2.
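In code, the observed orders follow directly from these formulae; a small NumPy helper (with illustrative error values) reads:

```python
import numpy as np

def empirical_orders(errors):
    """Observed convergence orders from errors on successively halved meshes/steps:
    order_j = -(log E_{j+1} - log E_j) / log 2."""
    E = np.asarray(errors, dtype=float)
    return -(np.log(E[1:]) - np.log(E[:-1])) / np.log(2.0)

# e.g. errors measured for h_j = h_0 / 2^j:
print(empirical_orders([1.2e-2, 3.1e-3, 7.8e-4, 2.0e-4]))   # approximately second order
```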
Let r be the degree of finite elements in the spatial discretization, and k be the order of the time-stepping method.
We illustrate the convergence of the time discretization for k=1,2,3,4 by fixing m=1 and r=3, and illustrate the convergence of the spatial discretization with different m (m=0,1) and r (r=1,2,3) by fixing k=4.
All the examples are performed by Firedrake <cit.>, and the meshes are generated by Gmsh <cit.>.
[Dirac delta initial value]
In the first example, we test the very weak initial condition u^0 = δ_x_0, where δ_x_0 denotes the
Dirac delta measure concentrated at the single point x_0 = (0.5+ϵ, 0.5+ϵ) with ϵ=10^-4.
Here, a perturbation is given to move the source away from the vertex of the meshes.
To test the temporal convergence order of the fully discrete solution (<ref>) for different k,
we set τ_j = τ^0/2^j with τ_0 = 1/32 and a fixed spatial meshes h_ ref = 1/512.
The results of the L^2-errors are presented in Figure <ref>, and confirm kth-order convergence for the BDFk method.
To test the high-order convergence in space, we set h_j=h_0/2^j with h_0 = 1/16 and τ_ ref = 1/1024.
The meshes are refined by subdividing the triangles into four congruent sub-triangles (cf. Figure <ref>).
In Table <ref>, we test the convergence of the standard Galerkin finite element method, i.e., m=0,
using piecewise r-th order polynomials,
for both subdiffusion equation (α=0.6) and normal diffusion equation (α=1).
The empirical convergence for the fractional subdiffusion equation is always first-order, while that for the normal diffusion equation is of order r+1.
This interesting phenomenon is attributed to the infinite smoothing effect of the normal diffusion and the limited smoothing property of the fractional subdiffusion, cf. (<ref>).
Note that the Dirac delta function is in Ḣ^-1-μ with any μ > 0 and hence the solution to the subdiffusion equation (<ref>) belongs to Ḣ^1-μ.
In order to improve the convergence, we apply the splitting strategy (<ref>) with m=1, approximate the regular part using the fully discrete scheme (<ref>),
and compute the singular part using the method provided in Section <ref>.
In Table <ref>, we present the errors for both the regular part and the singular part.
The convergence for the regular part could be improved to O(h^min(3,r+1)) and the singular part could be approximated with order O(h^min(4, r+1)).
This convergence could be further improved by splitting one more singular term.
See Table <ref> for the approximation with m=2.
These results fully support our theoretical findings and the necessity of the proposed splitting method.
[Dirac measure concentrated on an interface]
In the second example, we test the initial condition u^0 = δ_Γ, where
δ_Γ denotes the Dirac measure concentrated on an oriented interface
Γ = x_1x_2 with x_1=(0.25, 0.75), x_2=(0.75, 0.5), cf. Figure <ref> (a).
In order to reduce the computational cost,
we use quasi-uniform meshes and locally graded meshes for the time-dependent
regular part and the steady singular part, respectively.
To generate the quasi-uniform meshes, we generate the initial mesh with mesh size h_0=1/8 by Gmsh, and refine the mesh
several times to reach the mesh size h_j=h_0/2^j.
For the singular part, we generate the jth-level graded meshes with the local cell
diameter ħ(x) in sub-domain B(x_i, d_0) as
ħ(x) ∽{ |x - x_i|^1-γ h_j, for h_⋆≤ |x-x_i| ≤ d_0,
h_*, for |x-x_i| ≤ h_* ∽ h_j^1/γ.
.
where γ∈ (0, 1/r).
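One possible realization of this local size prescription, e.g. as a size field handed to the mesh generator, is sketched below; the cutoff and the quasi-uniform far-field size are assumptions consistent with the formula above.

```python
def graded_local_size(dist, h_j, gamma, d0):
    """Target local cell diameter at distance `dist` from the singular point x_i:
    h(x) ~ |x - x_i|^(1-gamma) * h_j for h_* <= |x - x_i| <= d0, with cutoff h_* ~ h_j^(1/gamma)."""
    h_star = h_j ** (1.0 / gamma)
    if dist <= h_star:
        return h_star
    if dist <= d0:
        return dist ** (1.0 - gamma) * h_j
    return h_j          # quasi-uniform size away from B(x_i, d0) (assumption)
```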
The graded meshes for approximating the singular part are presented in Figure <ref> (b) and (c).
Note that the refinement strategy used here is different from the method proposed in <cit.>, where the local mesh size for the jth-level graded meshes
in the neighborhood of x_0 and x_1 is
ħ(x) ∽{ |x - x_i|(1-c_p) h_j, for h_⋆≤ |x-x_i| ≤ d_0,
h_⋆, for |x-x_i| ≤ h_⋆∽κ_p^jh_j,
.
where c_p = 2^-r/a with a ∈ (0, 1) <cit.>.
As proved in <cit.>,
the solution of the Poisson equation with line Dirac source belongs to the weighted Sobolev space
𝒦_a+1^l+1(B(x_i, d)\Γ) := {v: ρ^|s| - a - 1 D^s v ∈ L^2(B(x_i, d)\Γ), ∀ |s| ≤ l + 1}
for any l≥ 1 and a∈ (0, 1) in the neighborhood of x_1 and x_2.
Though the refinement methods given above are different,
both of them can resolve the singularity around the end points of Γ and obtain the optimal convergence order.
To test the temporal convergence order, we let the step sizes be τ_j = τ_0/2^j with τ_0 = 1/32 and fix the spatial mesh size h_ ref = h_6 = 1/512.
As shown in Figure <ref>,
the convergence order of BDFk scheme is O(τ^k), which agrees well with our theoretical result in Corollary <ref>.
To test the convergence in space, we first compare the numerical results of α=0.6 and α=1 using the standard finite element method (i.e., m=0) with uniform meshes.
As shown in Table <ref>,
the convergence for the fractional diffusion is at most second-order even if we use high-order finite element methods,
while the convergence order of the normal diffusion is r+1.
In order to improve the convergence, we apply the splitting method with m=1.
The empirical errors of regular part and singular part are presented in Table <ref>.
Our numerical experiments indicate the optimal convergence rate for the P^r finite element method with r=2,3.
§ CONCLUSIONS
We have constructed a new splitting of the solution to the subdiffusion equation, which allows us to develop high-order finite element approximations in
the case of nonsmooth initial data. In this method, the solution is split into a time-dependent smooth part plus a time-independent nonsmooth part. We have developed high-order spatial and time discretizations to approximate the smooth part of the solution, and proved that the proposed fully discrete finite element method approximates the regular part of the solution to high-order accuracy for nonsmooth initial data in L^2(Ω).
Moreover, we have illustrated how to approximate the time-independent nonsmooth part through several examples of initial data, including piecewise smooth initial data, Dirac–Delta point source, and Dirac measure concentrated on an interface. More generally, the time-independent nonsmooth part can be approximated by using smaller mesh size without increasing the overall computational cost significantly. This is possible as the nonsmooth part is time-independent and therefore avoids the time-stepping procedure.
We have also illustrated the effectiveness of the proposed method through several numerical examples.
§ APPENDIX: ON THE TRIANGULATION SATISFYING ASSUMPTION <REF>
In this Appendix we show that the graded mesh defined in (<ref>), for a two-dimensional polygonal domain Ω, satisfies Assumption <ref>.
In terms of the notation introduced in (<ref>) and the subsequent text, we divide the domain D_0={x∈Ω:|x-x_0|<d_0} into
D_0= D_*∪ (∪_j=1^JD_j) with
D_j:={x∈Ω: 2^-j-1d_0≤ |x-x_0|<2^-jd_0}
D_*:={x∈Ω: |x-x_0|≤ 2^-Jd_0=h_*}.
Let d_j=2^-jd_0 and h_j= max_x∈ D_j h(x) ∼ d_j^1-γh, and denote by
D_j':={x∈Ω: 2^-j-2d_0≤ |x-x_0|<2^-j+1d_0}
a neighborhood of D_j.
Then the following lemma provides a regularity estimate near the corner.
If A=-Δ and v∈Ḣ^2r+2(Ω) for some r≥ 0, then
v_H^s(D_j)≤
C d_j^1-s+min(1,π/θ)v_Ḣ^2r+2(Ω)
0≤ s≤ 2r+2 .
For v∈Ḣ^2r+2(Ω), the following weighted regularity result is known (for example, see <cit.>):
|v|_H^k+1(D_j) ≤ C ∑_i=0^k-1 d_j^-iΔ v_H^k-1-i(D_j')
+C d_j^-k∇ v_L^2(D_j') 1≤ k≤ 2r+1 .
By using the singular expansions and local energy estimate, i.e.,
v|_D_0∈ O( |x-x_0|^π/θ) + H^2(D_0)
∇ v_L^2(D_j)^2
≤ Cd_j^-2 v_L^2(D_j')^2
+Cd_j^2f_L^2(D_j')^2 ,
we further obtain
|v|_H^k+1(D_j) ≤ C ∑_i=0^k-1 d_j^-iΔ v_H^k-1-i(D_j')
+C d_j^-k+min(1,π/θ)f_L^2(Ω) 1≤ k≤ 2r+1 .
If Δ v_H^s(D_j')≤ Cd_j^-s+min(1,π/θ) for 0≤ s≤ 2r, then the inequality above yields
|v|_H^s(D_j) ≤ C d_j^1-s+min(1,π/θ) , 2≤ s≤ 2r+2 .
By using the singular expansions and local energy estimate in (<ref>), we find that the inequality above also holds for s=0,1.
Since d_j≤ C, it follows that v_H^s(D_j')≤ Cd_j^-s+min(1,π/θ) for 0≤ s≤ 2r+2. This is the same estimate as that for Δ v, but the range of s increases by 2. Therefore, for any function f∈ L^2(Ω), by picking up the regularity from Δ^-lf to Δ^-l-1f for l=0,1,…,r, we obtain
|Δ^-r-1f|_H^s(D_j)≤
C d_j^1-s+min(1,π/θ)f_L^2(Ω) 0≤ s≤ 2r+2 .
Replacing Δ^-r-1f by v in the estimate above, we obtain the result of Lemma <ref>.
Then the following proposition confirms the approximation properties (<ref>)–(<ref>).
Let A=-Δ.
If the diameter of triangles satisfies condition (<ref>) near every corner of the domain,
then (<ref>)–(<ref>) holds.
Let v∈Ḣ^2r+2 and s=2r+1, with 0≤ r≤ m. We consider the following decomposition:
v-I_hv_H^1(D_0)^2
= v-I_hv_H^1(D_*)^2+∑_j=1^J v-I_hv_H^1(D_j)^2
≤
Ch_*^2min(1,π/θ)Δ v_L^2(Ω)^2 + C∑_j=1^Jh_j^2sv_H^s+1(D_j')^2 ,
and
v-I_hv_L^2(D_0)^2
= v-I_hv_L^2(D_*)^2+∑_j=1^J v-I_hv_L^2(D_j)^2
≤
Ch_*^2+2min(1,π/θ)Δ v_L^2(Ω)^2 + C∑_j=1^Jh_j^2s+2v_H^s+1(D_j')^2 ,
where h_j=max_x∈ D_jħ(x), and we have used the following result:
v-I_hv_H^1(D_*)≤ Ch_*^min(1,π/θ)Δ v_L^2(Ω) .
In the case π/θ<1, this result follows from
v-I_hv_H^1(D_*)≤ Ch_*^min(1,π/θ)v_B^min(1,π/θ)_2,∞(Ω)≤ Ch_*^min(1,π/θ)Δ v_L^2(Ω) ,
where B^min(1,π/θ)_2,∞(Ω) is the Besov space.
The last inequality is a consequence of the singularity expansion
v|_D_0 = c|x-x_0|^π/θsin( π/θ arg(x-x_0)) + w for some c∈ℝ and w∈ H^2(D_0),
where |c| ≤Δ v _L^2.
In the case π/θ>1, (<ref>) follows from the standard estimate for the Lagrange interpolation, i.e.,
v-I_hv_H^1(D_*)≤ Ch_* v_H^2(Ω)≤ Ch_* Δ v_L^2(Ω) .
By substituting the result of Lemma <ref> into (<ref>) and using the condition γ_j<min(1,π/θ_j)/r, we obtain
v-I_hv_H^1(D_0)^2 ≤
Ch_*^2min(1,π/θ)v_Ḣ^2m+2(Ω)^2
+
∑_j Ch_j^2s d_j^-2s+2min(1,π/θ_j) v_Ḣ^2r+2(Ω)^2
≤
Ch^2sv_Ḣ^2r+2(Ω)^2
+∑_j C d_j^(1-γ_j)2s-2s+2min(1,π/θ_j) h^2sv_Ḣ^2r+2(Ω)^2
≤
Ch^2sv_Ḣ^2r+2(Ω)^2+∑_j C d_j^2s(min(1,π/θ_j)/s - γ_j )
h^2sv_Ḣ^2r+2(Ω)^2
≤ Ch^2sv_Ḣ^2r+2(Ω)^2 .
This proves that, by substituting s=2r+1 into the inequality above,
v-I_hv_H^1(Ω) ≤ Ch^2r+1v_Ḣ^2r+2(Ω) .
Similarly, by substituting the result of Lemma <ref> into (<ref>), we obtain
v-I_hv_L^2(Ω) ≤ Ch^r+1v_Ḣ^2r+2(Ω) .
This proves the desired estimate in (<ref>).
By the optimal H^1-norm approximation property of the Ritz projection,
we have
v-R_hv_H^1(Ω)≤ Cv-I_hv_H^1(Ω)≤ Ch^2r+1v_Ḣ^2r+2(Ω) .
By a standard duality argument, we obtain
v-R_hv_L^2(Ω)≤ Chv-R_hv_H^1(Ω)≤ Ch^2r+2v_Ḣ^2r+2(Ω) .
This proves the desired estimate in (<ref>).
If the mesh size satisfies condition (<ref>), then the following estimate holds:
v - I_h v _L^∞(Ω) ≤ ch^2r+1 v_Ḣ^2r+2 for 0≤ r≤ m.
The basic L^∞ estimates of the Lagrange interpolation says that
v-I_hv_L^∞(D_j) ≤ c h_j^2r+1v_H^2r+2(D_j') .
Since v_H^2r+2(D_j')≤ cd_j^-2r-1+min(1,π/θ)v_Ḣ^2r+2(Ω), it follows that
v-I_hv_L^∞(D_j) ≤ cd_j^-2r-1+min(1,π/θ) h_j^2r+1v_Ḣ^2r+2(Ω)
≤ cd_j^-2r-1+min(1,π/θ) d_j^(1-γ)(2r+1)h^2r+1v_Ḣ^2r+2(Ω)
≤ cd_j^min(1,π/θ)-(2r+1)γ h^2r+1v_Ḣ^2r+2(Ω) .
Then (<ref>) follows from the condition γ<min(1,π/θ)/(2m+1)≤min(1,π/θ)/(2r+1) in (<ref>).
§ ACKNOWLEDGEMENTS
The research of B. Li is partially supported by Hong Kong Research Grants Council (GRF Project No. 15300817) and an internal grant of The Hong Kong Polytechnic University (Project ID: P0031035, Work Programme: ZZKQ).
The research of Z. Zhou is partially supported by Hong Kong Research Grants Council (Project No. 25300818) and an internal grant of The Hong Kong Polytechnic University (Project ID: P0031041, Work Programme: ZZKS).
AdamsGelhar:1992
E. E. Adams and L. W. Gelhar.
Field study of dispersion in a heterogeneous aquifer: 2. spatial
moments analysis.
Water Res. Research, 28(12):3293–3307, 1992.
ArendtBattyHieber:2011
W. Arendt, C. J. Batty, M. Hieber, and F. Neubrander.
Vector-valued Laplace Transforms and Cauchy Problems.
Birkhäuser, Basel, 2nd edition, 2011.
Banjai2019
L. Banjai and M. López-Fernández.
Efficient high order algorithms for fractional integrals and
fractional differential equations.
Numer. Math., 141:289–317, 2019.
BanjaiMakridakis:2022
L. Banjai and C. G. Makridakis.
A posteriori error analysis for approximations of time-fractional
subdiffusion problems.
Math. Comp., 91(336):1711–1737, 2022.
CockburnMustapha:2015
B. Cockburn and K. Mustapha.
A hybridizable discontinuous Galerkin method for fractional
diffusion problems.
Numer. Math., 130(2):293–314, 2015.
Crouzeix-Thomee-1987
M. Crouzeix and V. Thomée.
The stability in l_p and w^1_p of the l_2-projection onto
finite element function spaces.
Math. Comp., pages 521–532, 1987.
CuestaLubichPalencia:2006
E. Cuesta, C. Lubich, and C. Palencia.
Convolution quadrature time discretization of fractional
diffusion-wave equations.
Math. Comp., 75(254):673–696, 2006.
FujitaSuzuki:1991
H. Fujita and T. Suzuki.
Evolution problems.
In Handbook of numerical analysis, Vol. II, Handb. Numer.
Anal., II, pages 789–928. North-Holland, Amsterdam, 1991.
Geuzaine2009Gmsh
C. Geuzaine and J.-F. Remacle.
Gmsh: A 3-d finite element mesh generator with built-in pre- and
post-processing facilities.
International Journal for Numerical Methods in Engineering,
79(11):1309–1331, 2009.
HairerWanner:1996
E. Hairer and G. Wanner.
Solving Ordinary Differential Equations. II.
Springer-Verlag, Berlin, second edition, 1996.
Stiff and differential-algebraic problems.
HatanoHatano:1998
Y. Hatano and N. Hatano.
Dispersive transport of ions in column experiments: An explanation of
long-tailed profiles.
Water Res. Research, 34(5):1027–1033, 1998.
Jin:2021book
B. Jin.
Fractional Differential Equations.
Springer, Switzerland, 2021.
JinLazarovPasiackZhou:2013naa
B. Jin, R. Lazarov, J. Pasciak, and Z. Zhou.
Galerkin FEM for fractional order parabolic equations with initial
data in H^-s, 0≤ s≤ 1.
In Numerical analysis and its applications, volume 8236 of Lecture Notes in Comput. Sci., pages 24–37. Springer, Heidelberg, 2013.
JinLazarovZhou:2013sinum
B. Jin, R. Lazarov, and Z. Zhou.
Error estimates for a semidiscrete finite element method for
fractional order parabolic equations.
SIAM J. Numer. Anal., 51(1):445–466, 2013.
JinLazarovZhou:2019cmame
B. Jin, R. Lazarov, and Z. Zhou.
Numerical methods for time-fractional evolution equations with
nonsmooth data: a concise overview.
Comput. Methods Appl. Mech. Engrg., 346:332–358, 2019.
JinLiZhou:2017sisc
B. Jin, B. Li, and Z. Zhou.
Correction of high-order BDF convolution quadrature for fractional
evolution equations.
SIAM J. Sci. Comput., 39(6):A3129–A3152, 2017.
JinZhou:book
B. Jin and Z. Zhou.
Numerical treatment and analysis of time-fractional evolution
equations, volume 214 of Applied Mathematical Sciences.
Springer, Cham, [2023] 2023.
Karaa:2017
S. Karaa.
Semidiscrete finite element analysis of time fractional parabolic
problems: a unified approach.
SIAM J. Numer. Anal., 56(3):1673–1692, 2018.
KaraaMustaphaPani:2017
S. Karaa, K. Mustapha, and A. K. Pani.
Finite volume element method for two-dimensional fractional
subdiffusion problems.
IMA J. Numer. Anal., 37(2):945–964, 2017.
KaraaPani:2018
S. Karaa and A. K. Pani.
Error analysis of a FVEM for fractional order evolution equations
with nonsmooth initial data.
ESAIM Math. Model. Numer. Anal., 52(2):773–801, 2018.
KilbasSrivastavaTrujillo:2006
A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo.
Theory and applications of fractional differential equations,
volume 204 of North-Holland Mathematics Studies.
Elsevier Science B.V., Amsterdam, 2006.
Kopteva-2021
N. Kopteva.
Error analysis of an L2-type method on graded meshes for a
fractional-order parabolic problem.
Math. Comp., 90:19–40, 2021.
Kopteva-Meng-2020
N. Kopteva and X. Meng.
Error analysis for a fractional-derivative parabolic problem on
quasi-graded meshes using barrier functions.
SIAM J. Numer. Anal., 58:1217–1238, 2020.
LeMcLeanLamichhane:2017
K. N. Le, W. McLean, and B. Lamichhane.
Finite element approximation of a time-fractional diffusion problem
for a domain with a re-entrant corner.
ANZIAM J., 59(1):61–82, 2017.
LeMcLeanMustapha:2016
K. N. Le, W. McLean, and K. Mustapha.
Numerical solution of the time-fractional Fokker-Planck equation
with general forcing.
SIAM J. Numer. Anal., 54(3):1763–1784, 2016.
Li-2022
B. Li.
Maximum-norm stability of the finite element method for the Neumann
problem in nonconvex polygons with locally refined mesh.
Math. Comp., 2022, DOI: 10.1090/mcom/3724.
LM22Exp
B. Li and S. Ma.
Exponential convolution quadrature for nonlinear subdiffusion
equations with nonsmooth initial data.
SIAM J. Numer. Anal., 60(2):503–528, 2022.
LiWanYinZhao:2021
H. Li, X. Wan, P. Yin, and L. Zhao.
Regularity and finite element approximation for two-dimensional
elliptic equations with line Dirac sources.
J. Comput. Appl. Math., 393:Paper No. 113518, 16, 2021.
Li-Melenk-2010
J. Li, J. M. Melenk, B. Wohlmuth, and J. Zou.
Optimal a priori estimates for higher order finite elements for
elliptic interface problems.
Appl. Numer. Math., 60:19–37, 2010.
LubichSloanThomee:1996
C. Lubich, I. H. Sloan, and V. Thomée.
Nonsmooth data error estimates for approximations of an evolution
equation with a positive-type memory term.
Math. Comp., 65(213):1–17, 1996.
MetzlerKlafter:2000
R. Metzler and J. Klafter.
The random walk's guide to anomalous diffusion: a fractional dynamics
approach.
Phys. Rep., 339(1):1–77, 2000.
Mustapha:2018mc
K. Mustapha.
FEM for time-fractional diffusion equations, novel optimal error
analyses.
Math. Comp., 87(313):2259–2272, 2018.
Mustapha-2014
K. Mustapha, B. Abdallah, and K. M. Furati.
A discontinuous Petrov–Galerkin method for time-fractional
diffusion equations.
SIAM J. Numer. Anal., 52:2512–2529, 2014.
Mustapha-McLean-2011
K. Mustapha and W. McLean.
Uniform convergence for a discontinuous Galerkin, time-stepping
method applied to a fractional diffusion equation.
IMA J. Numer. Anal., 32:906–925, 2011.
Nigmatulli:1986
R. R. Nigmatullin.
The realization of the generalized transfer equation in a medium with
fractal geometry.
Phys. Stat. Solid. B, 133(1):425–430, 1986.
Podlubny:1999
I. Podlubny.
Fractional differential equations, volume 198 of Mathematics in Science and Engineering.
Academic Press, Inc., San Diego, CA, 1999.
Rathgeber2017Firedrake
F. Rathgeber, D. A. Ham, L. Mitchell, M. Lange, F. Luporini, A. T. T. Mcrae,
G.-T. Bercea, G. R. Markall, and P. H. J. Kelly.
Firedrake: Automating the finite element method by composing
abstractions.
ACM Transactions on Mathematical Software, 43(3):1–27, 2017.
SakamotoYamamoto:2011
K. Sakamoto and M. Yamamoto.
Initial value/boundary value problems for fractional diffusion-wave
equations and applications to some inverse problems.
J. Math. Anal. Appl., 382(1):426–447, 2011.
Sousa:2012
E. Sousa.
How to approximate the fractional derivative of order
1<α≤2.
Internat. J. Bifur. Chaos Appl. Sci. Engrg., 22(4):1250075, 13,
2012.
Stynes-2017
M. Stynes, E. O'Riordan, and J. L. Gracia.
Error analysis of a finite difference method on graded meshes for a
time-fractional diffusion equation.
SIAM J. Numer. Anal., 55:1057–1079, 2017.
Thomee:2006
V. Thomée.
Galerkin Finite Element Methods for Parabolic
Problems.
Springer-Verlag, Berlin, second edition, 2006.
Wolfersdorf:1994
L. von Wolfersdorf.
On identification of memory kernels in linear theory of heat
conduction.
Math. Methods Appl. Sci., 17(12):919–932, 1994.
WangZhou:2020sinum
K. Wang and Z. Zhou.
High-order time stepping schemes for semilinear subdiffusion
equations.
SIAM J. Numer. Anal., 58(6):3226–3250, 2020.
|
http://arxiv.org/abs/2307.04065v1 | 20230709000559 | Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks | ["Jiaqi Jiang", "Jonathan A. Fan"] | cs.LG | ["cs.LG", "math.OC"] |
Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks
Jiaqi Jiang and Jonathan A. Fan
====================================================================================================================
We present a non-convex optimization metaheuristic, based on the training of a deep generative network, which enables effective searching within continuous, ultra-high dimensional landscapes. During network training, populations of sampled local gradients are utilized within a customized loss function to evolve the network output distribution function towards one peaked at high performing optima. The deep network architecture is tailored to support progressive growth over the course of training, which allows the algorithm to manage the curse of dimensionality characteristic of high dimensional landscapes. We apply our concept to a range of standard optimization problems with dimensions as high as one thousand and show that our method performs better with fewer functional evaluations compared to state-of-the-art algorithm benchmarks. We also discuss the role of deep network over-parameterization, loss function engineering, and proper network architecture selection in optimization, and why the required batch size of sampled local gradients is independent of problem dimension. These concepts form the foundation for a new class of algorithms that utilize customizable and expressive deep generative networks to solve non-convex optimization problems.
§ INTRODUCTION
High dimensional, non-convex optimization problems are pervasive in many scientific and engineering domains, including computational materials science <cit.>, electromagnetics <cit.>, circuits design <cit.>, process engineering <cit.>, and systems biology <cit.>. These problems are known to be very difficult to solve because they are NP-hard, and algorithms aiming to definitively search for the global optimum, such as branch and bound methods, cannot practically scale to high dimensional systems. As such, various algorithm heuristics have been developed, ranging from evolutionary metaheuristics to Bayesian optimization <cit.>, which use judicious sampling of the landscape to identify high performing optima. In all cases, it remains challenging to apply these algorithms to ultra-high dimensional spaces with dimensions of hundreds to thousands due to the curse of dimensionality.
The explosion of interest and research in deep neural networks over the last decade has presented new opportunities in optimization, as the process of training a deep network involves solving a high dimensional optimization problem. To this end, gradient-based optimization metaheuristics termed global topology optimization networks (GLOnets) <cit.> were recently proposed that use the training of a deep generative network to perform non-convex optimization. The concept applies to optimization problems where 𝐱 is a d-dimensional variable and the goal is to maximize the smoothly varying, non-convex objective function f(𝐱). To run the metaheuristic, the generative network is first initialized so that it outputs a distribution of 𝐱 values that spans the full optimization landscape. Over the course of network training, this distribution is sampled, f(𝐱) and local gradients are computed for these sampled points, and these values are incorporated into a customized loss function and backpropagated to evolve and narrow the distribution around high performing optima. Initial demonstrations indicate that GLOnets can perform better than standard gradient-based optimizers and global search heuristics for various non-convex optimization problems. However, it is unable to extend to high dimensional problems in its current form, and the lack of interpretability with this black box algorithm has made it difficult to understand if and how it can adapt to more general problems, including high dimensional problems.
In this Article, we introduce the progressive growing GLOnet (PG-GLOnet) in which optimization within an ultra-high dimensional non-convex landscape is mediated through the training of a progressive growing deep generative network. Our tailoring of the network architecture for this optimization task serves to incorporate knowledge and assumptions about the optimization landscape into the metaheuristic, which is a requirement for tractably navigating ultra-high dimensional landscapes. We also explain how the algorithm works to smoothen the design landscape, how evaluation of the loss function serves as a gradient estimation calculation, and why the number of required functional evaluations is independent of problem dimension. With standard benchmarking test functions, we show that our concept performs better than state-of-the-art algorithms with fewer functional evaluations for one thousand dimensional problems. We anticipate that the customization of network architectures within the GLOnets framework will seed new connections between deep learning and optimization.
§ PROGRESSIVE GROWING GLONETS ALGORITHM AND BENCHMARKING
The PG-GLOnet concept builds on the foundation of the original GLOnet algorithm, which we briefly review here. The optimization problem to be solved with GLOnets can be written in the following form:
max_𝐱 f(𝐱)
where f(𝐱) is a non-convex, continuous objective function with feasible gradients. With GLOnets, this optimization problem is indirectly solved through the training of a general neural network (Figure <ref>a), where the input is a d-dimensional random variable 𝐳 with a standard normal distribution and the output is a distribution of 𝐱's. The generator therefore serves to map 𝐳 onto 𝐱 = G(𝐳; ϕ) with a distribution P(𝐱; ϕ), where ϕ denotes the trainable neural network parameters. The optimization objective for the generator is defined as:
L = max_ϕ𝔼_𝐱∼ P(𝐱; ϕ)exp[ f(𝐱)/T]
The distribution that maximizes this expected value is a delta function centered at the global optimum, and as such, an ideally trained generator will produce a narrow distribution centered at the global optimum, thereby solving the original optimization problem. The use of the exponential function and the hyperparameter T in the optimization objective further enhance the valuation of the global optimum, and more generally high performing optima, in the design space.
Generator training is consistent with conventional deep learning training methods: gradients of the objective function with respect to network parameters, ∇_ϕ𝔼f, are calculated through backpropagation, and they are used to iteratively optimize ϕ using standard gradient-based methods. In practice, the objective function is approximated by a batch of M samples. P(𝐱; ϕ), on the other hand, is typically implicit and cannot be directly sampled. To circumvent this issue, we draw M samples {𝐳^(m)}_m=1^M from the standard normal distribution, transform them to {𝐱^(m)}_m=1^M, and then approximate L and its gradient ∇_ϕ L with respect to network parameters ϕ:
L ≈1/M∑_m=1^Mexp[ f(𝐱^(m))/T]
∇_ϕ L ≈1/M∑_m=1^M1/Texp[ f(𝐱^(m))/T] ∇_𝐱f · D_ϕ𝐱^(m)
∇_𝐱f = [∂ f/∂ x_1, ∂ f/∂ x_2, …, ∂ f/∂ x_d] are the gradients of f(𝐱) and D_ϕ𝐱 = ∂ (x_1, x_2, …)/∂(ϕ_1, ϕ_2, ...) is the Jacobian matrix. Evaluation of f(𝐱) is usually performed by a numerical simulator and the gradient of f(𝐱) can be calculated explicitly or by auto-differentiation for analytic expressions, or by the adjoint variables method (AVM).
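For an analytic, differentiable f, one training iteration can be sketched in PyTorch as follows (the batch size M and temperature T are illustrative, optimizer is assumed to act on the generator parameters ϕ, and for non-analytic objectives the factor ∇_𝐱f would instead be supplied by a numerical solver or AVM):

```python
import torch

def glonet_step(generator, optimizer, f, d, M=100, T=1.0):
    """One GLOnet update: sample z ~ N(0, I), map to x = G(z; phi), and ascend the
    batch estimate of E[exp(f(x)/T)] by backpropagating through f and the generator."""
    z = torch.randn(M, d)
    x = generator(z)
    loss = -torch.exp(f(x) / T).mean()   # minimize the negative of the sampled objective above
    optimizer.zero_grad()
    loss.backward()                      # autograd supplies exp(f/T)/T * grad_x f * D_phi x
    optimizer.step()
    return loss.item()
```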
In the initial conception of GLOnet, which we term FC-GLOnet, the generative network was a fully connected deep network and was capable of effectively addressing optimization problems with a modest number of dimensions. However, it was found to be ineffective at optimizing within very high dimensional landscapes due to the curse of dimensionality, which makes a direct search for the global optimum within a full, high dimensional landscape an intractable proposition. We therefore propose the PG-GLOnet, which utilizes a generative network that outputs a distribution that gradually grows from a coarse, low dimensional space to a fine, high dimensional space. By tailoring the network architecture in this way, we regularize the optimization process to take place over differing degrees of optimization landscape smoothing, enabling our search process to be computationally efficient and tractable.
The PG-GLOnet generator architecture is shown in Figure <ref>b. The progressive growth concept is inspired by progressively growing GANs <cit.> that have been developed in the computer vision community to process images with increasing spatial resolution during network training. The input to the network is a D-dimensional random vector 𝐱^0, and its dimension is much smaller than that of 𝐱. With L growing blocks, the network simultaneously transforms and increases the dimensionality of the input vector, and its output is a 2^L D dimensional vector 𝐱^L that matches the dimensionality of 𝐱.
In each growing block, the input vector dimension is doubled in two ways, by direct upsampling and by a linear transform. The resulting outputs are combined together and further transformed using a non-linear activation function:
𝐱^out_2d × 1 = q((1-α)
[ 𝐱^in_d × 1; 𝐱^in_d × 1 ]
+α A_2d × d·𝐱^in_d × 1)
A_2d × d are trainable parameters in the linear transformation branch, q(·) is a non-linear activation function, and α is a hyperparameter that is manually tuned over the course of optimization.
Initially, α's for all of the growing blocks in the network are set to 0, such that the vector outputted by each block has the same effective dimensionality as its input vector. The network output 𝐱^L therefore has an effective dimensionality that matches the dimensionality of the input 𝐱^0. As α is increased for a particular growing block, its output vector becomes dominated by its linear transformation branch, as opposed to its upsampling branch, and it has an effective dimensionality that exceeds and eventually doubles that of the growing block input vector. The effective dimensionality of 𝐱^L therefore arises from the aggregation of effective dimensionality increases from all growing blocks. To control the effective dimensionality of 𝐱^L over the course of PG-GLOnet training, α is manually changed from 0 to 1 sequentially from the left to right blocks (bottom of Figure <ref>b). At the end of PG-GLOnet training, α is 1 for all growing blocks and the effective dimensionality of 𝐱^L matches that of 𝐱.
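A possible PyTorch realization of the growing block and the stacked generator is sketched below; the activation choice and the simple repeat-based upsampling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GrowingBlock(nn.Module):
    """One growing block: x_out = q((1 - alpha) * [x_in; x_in] + alpha * A x_in),
    doubling the vector dimension from d to 2d."""
    def __init__(self, d_in, activation=None):
        super().__init__()
        self.linear = nn.Linear(d_in, 2 * d_in, bias=False)   # the A_{2d x d} branch
        self.q = activation if activation is not None else nn.LeakyReLU(0.2)
        self.alpha = 0.0   # scheduled manually from 0 to 1 over the course of training

    def forward(self, x):
        upsampled = torch.cat([x, x], dim=-1)                  # the [x_in; x_in] branch
        return self.q((1.0 - self.alpha) * upsampled + self.alpha * self.linear(x))

class PGGenerator(nn.Module):
    """Maps a D-dimensional input vector through L growing blocks to a 2^L * D output."""
    def __init__(self, D, L):
        super().__init__()
        self.blocks = nn.ModuleList([GrowingBlock(D * 2 ** l) for l in range(L)])

    def forward(self, z):
        x = z
        for block in self.blocks:
            x = block(x)
        return x
```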
To evaluate the efficacy of PG-GLOnet in solving high dimensional non-convex optimization problems, we perform a series of benchmark numerical experiments where we optimize a set of standard test functions with PG-GLOnet and other established algorithms. In the first set of experiments, we consider a testing function that can be tuned from a convex to non-convex function and compare PG-GLOnet with ADAM, a well known momentum-based gradient descent algorithm that is typically more effective than gradient descent. ADAM is a local optimization algorithm and performs well on convex objective functions but can get trapped within local optima for non-convex functions. Our test function is a modified Rastrigin function defined as follows:
f(𝐱; ρ) = ρ d + ∑_i=1^d [x_i^2 - ρcos(2π x_i)]
ρ is a hyperparameter that specifies the amplitude of the sinusoidal modulation within the function. When ρ =0, f(𝐱; ρ) = ∑_i=1^d x_i^2 and is a convex function. As ρ increases, more local optima emerge and these optima become separated by larger magnitude barriers.
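In code, this test function reads (a vectorized NumPy sketch):

```python
import numpy as np

def modified_rastrigin(x, rho):
    """f(x; rho) = rho*d + sum_i (x_i^2 - rho*cos(2*pi*x_i)); global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    d = x.shape[-1]
    return rho * d + np.sum(x ** 2 - rho * np.cos(2.0 * np.pi * x), axis=-1)

# e.g. the two-dimensional case considered below:
print(modified_rastrigin([0.0, 0.0], rho=4.0))   # 0.0 at the global optimum
```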
We first consider the computational cost required by ADAM and PG-GLOnet to find the global optimum of a two dimensional modified Rastrigin function as a function of ρ. For ADAM, we run 10000 optimizations for 200 iterations with random starting points, and for PG-GLOnet, we run the algorithm 10 times with a batch size of 20 for 200 total iterations. In both cases, the algorithms terminate early when they output results within 10^-3 of the global optimum, and computational cost is quantified as the average number of function evaluations required to find the global optimum. The results are summarized in Figure <ref>a and indicate that for convex or nearly convex optimization landscapes, ADAM is more efficient at finding the global optimum. This efficiency arises because ADAM is a specially tailored local optimizer that is well suited for these types of problems, while PG-GLOnet always requires relatively large batch sizes and more iterations to converge. As ρ increases, orders-of-magnitude more ADAM evaluations are required to search for the global optimum due to trapping within local optima in the design landscape. The computational cost for PG-GLOnet, on the other hand, does not increase nearly as rapidly due to its ability to navigate non-convex landscapes and is ten times more efficient than ADAM for ρ greater than 3.
We also perform benchmarks between ADAM and PG-GLOnet for a ten dimensional problem. Due to the inability for ADAM to converge to the global optimum in non-convex, high dimensional landscapes, we perform this benchmark differently and compare the best optimal value found by ADAM and PG-GLOnet given the same amount of computational resources. Here, we run ADAM for 200 iterations with 20 random starting points and PG-GLOnet for 200 iterations with a batch size of 20. We run these benchmark experiments ten times and average the best values from each experiment, and the results are reported in Figure <ref>b.
We find that the PG-GLOnet is able to consistently find solutions at or near the global optimum for all values of ρ, but the local optimizer gets progressively worse as ρ increases.
In our next set of benchmark experiments, we compare PG-GLOnet with the covariance matrix adaptation evolution strategy (CMA-ES), which is an established evolutionary algorithm used to perform population-based global searching of an optimization landscape. Compared to ADAM, it is more suitable for performing non-convex optimization. We consider two standard non-convex testing functions with lots of local optima, the Rastrigin and Schwefel functions (defined in the Appendix).
Plots in Figures <ref>c and <ref>d show the average number of function evaluations required to find the global optimum as a function of problem dimension d. The computational cost of CMA-ES increases exponentially as the problem dimension becomes larger, indicating the intractability of applying this algorithm to ultra-high dimensional problems. For the Schwefel function, we limited our CMA-ES benchmarking experiments to a problem dimension of 20 due to this scaling trend. PG-GLOnet, on the other hand, has a relatively small computational cost that is not sensitive to the dimension. In fact, the same neural network architecture and batch size is used for all problems. A more detailed discussion as to the origins of problem dimension and batch size decoupling is provided in the Discussion section.
Finally, we benchmark PG-GLOnet with state-of-the-art algorithms on testing functions proposed by the CEC'2013 Special Session and Competition on Large-Scale Global Optimization (LSGO) <cit.>. We consider the six non-convex benchmark functions from the competition, which involve variations and combinations of the Rastrigin and Ackley functions and are defined in the Appendix. These benchmark functions were designed to incorporate a number of challenging features for optimization, including:
* High dimensions. The design space of a optimization problem grows exponentially as the dimension of design variables increases. These benchmark functions utilize one thousand dimensional landscapes.
* Functions with non-separable subcomponents. The whole design variable is decomposed into several subcomponents and dimensions within each subcomponent are strongly coupled together.
* Imbalance in the contribution of subcomponents. The contribution of a subcomponent is magnified or dampened by a coefficient.
* Non-linear transformations to the base functions. Three transformations are applied to break the symmetry and introduce some irregularity on the landscape: (1) Ill-conditioning (2) Irregularities (3) Symmetry breaking.
To globally search these landscapes for the global optimum, we perform a two step optimization procedure. First, we run PG-GLOnet for each benchmark function for 200 iterations and a batch size of 100, from which our generative network outputs a narrow distribution of 𝐱's in promising regions of the optimization landscape. We then sample this distribution 100 times and perform local gradient descent on each of these design variables for an additional 200 iterations. The best function values found by PG-GLOnet plus local gradient descent are reported in Table <ref>, together with results produced from FC-GLOnet plus local gradient descent, local conjugate gradient descent, and two state-of-the-art non-convex optimization algorithms that were the best performing algorithms in the most recent LSGO contest: CC-RDG3, which is a divide-and-conquer method <cit.>, and DGSC, which is a differential group method utilizing spectral clustering <cit.>. We observe that PG-GLOnet with local gradient descent refinement is able to significantly outperform the other algorithms for the majority of test functions. In addition, the total computational cost of the two step optimization procedure is only 4× 10^4 function evaluations, while CC-RDG3 and DGSC require 3× 10^6 function evaluations.
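The second, local-refinement stage of this procedure can be sketched as follows (PyTorch, maximization convention as above; the sample count, iteration count, and learning rate are illustrative):

```python
import torch

def refine_samples(generator, f, latent_dim, n_samples=100, steps=200, lr=0.01):
    """Stage two: sample the trained generator, then refine each sample independently
    by local gradient ascent on f."""
    with torch.no_grad():
        x = generator(torch.randn(n_samples, latent_dim))
    x = x.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        loss = -f(x).sum()               # the sum decouples into per-sample ascent steps
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        values = f(x)
    return values.max().item(), x.detach()
```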
§ DISCUSSION
We discuss the origins of the efficiency and efficacy of PG-GLOnet in solving ultra-high dimensional non-convex optimization problems. First, we examine how the generic GLOnet algorithm operates and why it is able to effectively utilize a gradient-based strategy to solve non-convex optimization problems. Second, we examine the role of the progressive growing generative network architecture in PG-GLOnet in solving ultra-high dimensional problems. By understanding the relationship between network architecture and optimization procedure, we elucidate built-in assumptions used by PG-GLOnet in its search for the global optimum.
With the generic GLOnet algorithm, the original optimization problem cited in Equation 1 is reframed as a related problem (Equation 2) that addresses a transformed, smoothened optimization landscape. The key concepts that produce this landscape transformation and enable effective gradient-based optimization are outlined in Figure <ref>a and are: 1) distribution optimization, where the original problem involving the optimization of 𝐱 is transformed to a problem involving the optimization of parameters within a simple distribution P(𝐱); 2) exponential transformation, where the objective function is exponentially weighted; 3) over-parametrization, where the distribution P(𝐱) is now parameterized by a neural network with hundreds to thousands of weights; and 4) gradient estimation, where gradients that specify the evolution of the continuous distribution P(𝐱) are accurately computed through discrete samplings of 𝐳.
Distribution optimization. With the concept of distribution optimization, the original problem of searching for an optimal 𝐱 is recast as a population-based search in which parameters within a distribution function are optimized, thereby enabling a search for the global optimum in a smoother and higher dimensional optimization landscape. This concept is shared by other population-based optimization algorithms, such as CMA-ES. To visualize the concept, we consider a non-convex one-dimensional function f(𝐱) plotted as a blue line in the leftmost figure in Figure <ref>a. The objective is to maximize f(𝐱), and the function contains multiple local maxima separated by deep valleys. It is easy for optimization algorithms, particularly gradient-based algorithms, to get trapped in the local optima. For example, if gradient descent optimization is used and is initialized at the yellow dot position, the algorithm will converge to the local optimum delineated by the red dot. With this approach, multiple independent gradient descent optimizations with random starting points are needed to increase the possibility of finding the global optimum. For these problems, gradient-free optimization heuristics are often employed, which can reduce the chances of trapping within suboptimal maxima but which introduce a more stochastic nature to the search process.
However, if we consider the optimization of a distribution function that interacts with the global optimization landscape, local information at different parts of the landscape can be aggregated and collectively utilized to evolve this distribution in a manner that reduces issues of trapping within suboptimal maxima. Formally, we transform the optimization variable 𝐱 to parameters within the distribution P(𝐱), and the globally optimal distribution is one that is narrowly peaked around the global optimum. Distribution functions can be explicitly parameterized in many ways. As a simple illustrative example that builds on our discussion of the one-dimensional f(𝐱), we consider the one-dimensional Gaussian distribution denoted as P(𝐱; μ, σ), shown as the red curve in the leftmost figure in Figure <ref>a. μ and σ refer to mean and standard deviation, respectively.
With a Gaussian distribution function, the objective function now becomes transformed to the expected value of f(𝐱) as a function of (μ, σ): 𝔼_𝐱∼ P(𝐱; μ, σ) f(𝐱). As this new optimization landscape is a function of two distribution parameters, μ and σ, it is two dimensional. We can directly visualize this new landscape by evaluating ∫ f(𝐱) P(𝐱;μ, σ) d𝐱 for all values of (μ, σ), and the result is summarized in the second figure from the left in Figure <ref>a. The horizontal line section at the bottom of the contour plot, where σ equals zero, is the original one-dimensional f(𝐱) with multiple optima. As σ increases to finite values above zero, the landscape becomes smoother. Mathematically, horizontal line sections for finite sigma are calculated by convolving f(𝐱) with the Gaussian function, producing a Gaussian blur that leads to smoothening. This smoothened landscape facilitates gradient-based optimization of (μ, σ) when the distribution is initialized to large σ values, and the final optimized distributions converge to the original f(𝐱) space at the bottom of the plot. However, while this two-dimensional landscape is smoother than the original f(𝐱), there remain multiple distribution parameter initializations for which the gradient-based optimizer converges to suboptimal maxima.
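The (μ, σ) landscape just described is easy to reproduce numerically. The short sketch below estimates the expectation by Monte Carlo; the toy f is an illustrative assumption and is unrelated to the function shown in the figure.

```python
# Monte Carlo estimate of E_{x ~ N(mu, sigma^2)}[f(x)] over a grid of (mu, sigma).
import numpy as np

def f(x):
    # toy 1-D function with two local maxima of different heights
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.6 * np.exp(-0.5 * (x + 2.0) ** 2)

rng = np.random.default_rng(0)
mus = np.linspace(-4.0, 4.0, 81)
sigmas = np.linspace(0.0, 3.0, 31)

landscape = np.empty((len(sigmas), len(mus)))
for i, sig in enumerate(sigmas):
    for j, mu in enumerate(mus):
        x = mu + sig * rng.standard_normal(2000)   # samples from N(mu, sigma^2)
        landscape[i, j] = f(x).mean()

# The sigma = 0 row reproduces f itself; rows with larger sigma are Gaussian-blurred,
# hence smoother, versions of the original landscape.
print(landscape[0].max(), landscape[-1].max())
```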
Exponential transformation. To further smoothen the optimization landscape and enhance the presence of the global optimum, we perform an exponential transformation of the objective function. Mathematically, the objective function for the distribution optimization problem becomes: 𝔼_𝐱∼ P(𝐱; μ, σ)exp[ f(𝐱)/T]. The temperature term T modulates the impact of the global optimum on the optimization landscape such that low T produces strong landscape modulation by the global optimum. For our one-dimensional f(𝐱) example, the exponentially transformed landscape is plotted in the third figure from the left in Figure <ref>a and shows that the local optima have faded out, such that gradient-based optimization within this landscape is more likely to converge to the global optimum.
The choice of T depends on the scale of f(𝐱). Consider f(𝐱) that is linearly normalized to span (0, 1). Such normalization can typically be achieved based on prior knowledge about the upper and lower bounds of f(𝐱). If we want to amplify f(𝐱) for f(𝐱) > f_d and minimize f(𝐱) for f(𝐱) < f_d, where f_d is a division point between 0 and 1, the temperature is chosen to be T = f_d / log(1 + f_d). For example, if f_d is chosen to be the reciprocal of the golden ratio (f_d ≈ 0.618), then the temperature is roughly T = 1.3. In practice, the selection of f_d is problem specific, and T can be treated as a hyperparameter that can be manually tuned around 1 for tailoring to a particular problem.
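The quoted rule is easy to evaluate; reading "the golden ratio" as its reciprocal (≈0.618), so that f_d lies in (0, 1), is an assumption made here for the check.

```python
import math

f_d = (math.sqrt(5.0) - 1.0) / 2.0     # reciprocal golden ratio, ~0.618 (assumed reading)
T = f_d / math.log(1.0 + f_d)
print(round(T, 1))                      # ~1.3
```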
Over-parameterization. To further enhance the ability of GLOnet to efficiently and reliably converge to the global optimum, we next consider the concept of over-parameterization, in which the distribution P(𝐱) is now a neural network parameterized by weights ϕ. The objective function then becomes: 𝔼_𝐱∼ P(𝐱; ϕ)exp[ f(𝐱)/T]. Our use of a neural network is inspired by the facts that deep network training involves solving an extremely high-dimensional non-convex optimization problem, that the convergence of the neural network is typically insensitive to initialization, and that good neural network parameters can be found using backpropagation.
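A minimal sketch of this over-parameterized objective is given below; it is not the released GLOnet implementation, and the network size, temperature, and toy objective are placeholder choices.

```python
# Train generator weights phi to maximize E_{x ~ P(x; phi)} exp(f(x)/T):
# a small network maps latent noise z to candidate designs x, and gradients
# flow through x back to the weights.
import torch

DIM, T = 20, 1.0

def f(x):                                   # toy objective to maximize, batched
    return -(x ** 2).sum(dim=1)

generator = torch.nn.Sequential(
    torch.nn.Linear(DIM, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, DIM),
)
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(500):
    z = torch.randn(128, DIM)               # batch of latent samples
    x = generator(z)                        # samples from P(x; phi)
    loss = -torch.exp(f(x) / T).mean()      # minus the exponentially weighted objective
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f(generator(torch.randn(256, DIM))).max().item())
```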
The underlying mathematical principles outlining why gradient descent is so effective for deep network training have been revealed to some extent by computer scientists in recent years. <cit.> First, the parameter space of deep networks is a high-dimensional manifold, such that most local optima are equivalently good and the probability of converging to a bad optimum during training decreases quickly with network size. Second, these equivalently high performing local optima originate from neural network over-parameterization, which builds in redundancy in the optimization landscape that speeds up and stabilizes the gradient-based optimization process.
To understand how this applies to GLOnet, we revisit our one-dimensional f(𝐱) landscape in which local optima are separated by deep barriers. When the optimization landscape is transformed using P(𝐱,ϕ), it frames the optimization problem in a very high dimensional landscape, as the dimensionality of ϕ is much higher than 𝐱. Solutions to the optimization problem therefore reside in a high-dimensional manifold, such that many different ϕ's serve as high performing local optima. Additionally, local optima in f(𝐱) are no longer separated by deep barriers but are instead connected by pathways with low to no barriers in our transformed high dimensional landscape, mitigating trapping within these local optima during gradient-based optimization. The high dimensional landscape representing the transformed f(𝐱) is visualized as a two-dimensional projection in the rightmost plot in Figure <ref>a. The global optimum is now a connected band in the optimization landscape, as opposed to a single point in f(𝐱), and there are fewer energy barriers preventing gradients from converging to the global optimum, enabling gradient descent optimization to be more robust and faster. We note that neural network depth and expressivity play a large role in determining the practical impact of over-parameterization on optimization, and as a demonstration, we compare the performance of GLOnets based on linear and deep non-linear networks in the Appendix.
Gradient estimation. A critical feature to maximizing the performance of GLOnet is ensuring that gradients used to evolve P(𝐱), which are approximated using a finite batch of samples, are sufficiently accurate. There are two methods for gradient estimation that can be used for GLOnets. The first is to use a score function gradient estimator, which utilizes the evaluated derivatives of the probability distribution P(𝐱; ϕ) and f(𝐱). This method for estimation requires explicit evaluation of derivatives to P(𝐱; ϕ) but only an implicit evaluation of ∇_𝐱f. The second is to use a pathwise gradient estimator, which relies on knowing the explicit derivatives of f(𝐱) but for which the probability distribution P(𝐱; ϕ) can be implicit. Empirically, we find for GLOnet that the pathwise gradient estimator more consistently produces smaller gradient error compared with the score function gradient estimator, and we therefore implement the pathwise gradient estimator in Equation <ref>. <cit.>
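The contrast between the two estimators can be seen on a one-dimensional toy problem; the Gaussian distribution and the choice f(x) = sin(x) below are illustrative assumptions, not the setting used in this work.

```python
# Both estimators target d/dmu E_{x ~ N(mu, sigma^2)}[f(x)] with f(x) = sin(x).
import numpy as np

mu, sigma, n = 0.3, 0.5, 1000
rng = np.random.default_rng(0)

score, pathwise = [], []
for _ in range(2000):
    eps = rng.standard_normal(n)
    x = mu + sigma * eps
    score.append(np.mean(np.sin(x) * (x - mu) / sigma ** 2))  # f(x) * d(log p)/d(mu)
    pathwise.append(np.mean(np.cos(x)))                       # df/dx through x = mu + sigma*eps

score, pathwise = np.array(score), np.array(pathwise)
print("score-function: mean %.4f  std %.4f" % (score.mean(), score.std()))
print("pathwise:       mean %.4f  std %.4f" % (pathwise.mean(), pathwise.std()))
# Both are unbiased for the same gradient; the pathwise estimate typically
# shows a much smaller spread, consistent with the discussion above.
```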
The pathwise gradient estimator is based on the principle of Monte Carlo estimation, such that the estimation error decreases with the inverse square root of batch size. Importantly, this estimation error is independent of dimension. As a result, GLOnet and specifically PG-GLOnet are able to operate for batch sizes that are independent of problem dimension, as demonstrated in Figures 2c and 2d. This scaling of problem dimension without a required scaling in the number of functional evaluations allows PG-GLOnet to readily scale and address the 1000-dimensional problems in Table 1 with modest computational resources.
Progressive growth. Direct searching within a high dimensional, non-convex landscape is an intractable problem. In the case of FC-GLOnet, which utilizes all of the features above, including distribution optimization and over-parameterization, the algorithm is still not effective in directly searching high dimensional landscapes (Table 1). With PG-GLOnet, the progressive growing architecture regularizes the optimization procedure to search first within a relatively coarse, low dimensional representation of the optimization landscape, followed by relatively local searching within increasingly higher dimensional landscape representations. This hierarchical increase of landscape dimensionality directly corresponds to the serial toggling of α within the series of growing blocks in the generator. As such, the optimization landscape is evolved over the course of PG-GLOnet training in a manner that maintains the tractability of the optimization problem.
To further visualize the relationship between generative network architecture and optimization search procedure, we consider the non-convex two-dimensional landscape shown in Figure <ref>b. The generative network contains a single growing block, and the toggling of α from zero to one modulates the effective dimensionality of the generator output from one to two. Initially, α is zero and the vector outputted by the generator has the same effective dimensionality as its input vector, namely one. The optimization landscape being searched is therefore a diagonal line within the two-dimensional landscape (Figure <ref>b, left-most plot), and with optimal solutions near the center of the line, the outputted generator distribution (red coloring in plot) narrows towards this region. As α is increased, the generator output vector becomes dominated by its linear transformation branch, as opposed to its upsampling branch, and it has an effective dimensionality that increases and eventually doubles. In our PG-GLOnet visualization, this increase in effective dimensionality corresponds to a broadening of the optimization landscape being searched, and the outputted generator distribution widens relative to the diagonal line. Upon the completion of network growth, the PG-GLOnet distribution converges to the global optimum.
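A schematic version of one growing block is sketched below; the actual PG-GLOnet architecture (upsampling scheme, nonlinearities, initialization) may differ from this minimal rendering.

```python
# One growing block: blend an upsampled copy of the coarse vector (effective
# dimensionality unchanged) with a learned linear map that doubles it; alpha
# is raised from 0 to 1 as training progresses.
import torch

class GrowingBlock(torch.nn.Module):
    def __init__(self, dim_in):
        super().__init__()
        self.linear = torch.nn.Linear(dim_in, 2 * dim_in)      # fine-scale branch

    def forward(self, x, alpha):
        upsampled = x.repeat_interleave(2, dim=-1)             # coarse branch: duplicate entries
        return (1.0 - alpha) * upsampled + alpha * self.linear(x)

block = GrowingBlock(dim_in=4)
x = torch.randn(8, 4)
print(block(x, alpha=0.0).shape, block(x, alpha=1.0).shape)    # both torch.Size([8, 8])
```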
The success of PG-GLOnet is therefore predicated on the ability for the outputted distribution of the generative network to be narrowed down to smaller but more promising regions of a coarse optimization landscape, prior to increasing the landscape dimensionality and adding more degrees of freedom to the problem. This concept therefore works particularly well for problems where optima within a low dimensional analogue of the optimization landscape help to inform of the presence and position of optima within the high dimensional landscape. This regularization of the optimization procedure also indicates that for problems where optima within coarse variants of the optimization landscape do not inform the position of the global optimum, PG-GLOnet will not work well.
In summary, we present a general global optimization metaheuristic based on progressive growing deep generative neural networks, termed PG-GLOnet. Unlike other population-based algorithms, PG-GLOnet uses gradient-based optimization to evolve an expressive, complex distribution in the optimization landscape to one centered around promising optima. This complex distribution, parameterized using the deep network framework, utilizes loss function engineering and over-parameterization to facilitate effective gradient-based searching. PG-GLOnet is particularly well suited to address ultra-high dimensional problems because the required batch size is independent of problem dimension and the progressively growing network architecture facilitates a hierarchical search process within a landscape with progressively growing effective dimensionality. This use of a hierarchical search strategy also provides bounds as to the types of problems and landscapes that are suited for PG-GLOnet optimization. We anticipate that further research in the tailoring of application-specific generative network architectures to particular optimization landscapes will enable the GLOnet platform to extend and adapt to an even wider range of non-convex, high dimensional optimization problems.
|
http://arxiv.org/abs/2307.06864v1 | 20230710195854 | Higher-order composition of short- and long-period effects for improving analytical ephemeris computation | [
"Martin Lara",
"Elena Fantino",
"Hadi Susanto",
"Roberto Flores"
] | physics.class-ph | [
"physics.class-ph",
"math-ph",
"math.MP"
] |
[t1]A preliminary version of this research was presented as paper IAC-21-C1.7.2 at the 72nd International Astronautical Congress (Dubai, United Arab Emirates, 25-29 October 2021)
EF,RA]Martin Larafootnote1,footnote2
[email protected]
EF]Elena Fantinocorfootnote1
[email protected]
EF]Hadi Susantofootnote3
[email protected]
EF,RF]Roberto Floresfootnote1,footnote4
[email protected]
[EF]P.O. Box 127788, Abu Dhabi, United Arab Emirates
[RA]Edificio CCT, C/ Madre de Dios, 53, ES-26006 Logroño, Spain
[RF]Gran Capità s/n, 08034, Barcelona, Spain
[cor]Corresponding author
[footnote1]Aerospace Engineering Department, Khalifa University of Science and Technology
[footnote2]Scientific Computing and Technological Innovation Center, University of La Rioja
[footnote3]Mathematics Department, Khalifa University of Science and Technology
[footnote4]Centre Internacional de Mètodes Numèrics en Enginyeria (CIMNE)
The construction of an analytic orbit theory that takes into account the main effects of the Geopotential is notably simplified when splitting the removal of periodic effects in several stages. Conversely, this splitting of the analytical solution into several transformations reduces the evaluation efficiency for dense ephemeris output. However, the advantage is twofold when the different parts of the mean–to–osculating transformation are composed into a single transformation. To show that, Brouwer's solution is extended to the second order of the zonal harmonic of the second degree by the sequential elimination of short- and long-period terms. Then, the generating functions of the different transformations are composed into a single one, from which a single mean–to–osculating transformation is derived. The new, unique transformation notably speeds up the evaluation process, commonly improving evaluation efficiency by at least one third with respect to the customary decomposition of the analytical solution into three different parts.
Orbit propagation; Artificial satellite theory; Brouwer's solution; Hamiltonian simplification; Lie transforms;
§ INTRODUCTION
Simple analytic orbit prediction programs still find different astrodynamics applications <cit.>. They commonly rely on perturbation solutions that are made of secular and periodic terms. The former provide the average evolution of the orbit while the latter are needed to convert the secular elements —also called mean elements— into ephemeris. In the implementation of an analytical ephemeris generator one customarily takes the point of view of using programming techniques that minimize both memory requirements and execution time <cit.>. However, these two aims are mutually exclusive regarding accuracy (understood as the time span for which the errors remain below a given tolerance).
Minimizing memory requirements is obviously achieved when reducing the truncation order, and, therefore, the accuracy of the perturbation solution. On the other hand, reducing memory needs for a given truncation order can be achieved by splitting the periodic terms of the analytical solution into a sequence of simpler corrections. Separation of the periodic corrections into short- and long-period terms is a natural choice that makes full dynamical sense <cit.>. Moreover, it is well known that the preliminary elimination of the parallax transformation <cit.> notably eases the implementation of the short-period elimination <cit.>. On the contrary, the long-period elimination is traditionally achieved with a single set of corrections, which is obtained either with Brouwer's traditional method <cit.>, in the reverse normalization style <cit.>, or with Alfriend and Coffey's halfway option <cit.>. Splitting the elimination of long-period terms into two simpler transformations is also possible, and helps prune away non-essential terms of the rotating-perigee regime, in this way effectively isolating the resonant, long-period terms of the dynamics about the critical inclination. However, the interest of this additional simplification is mostly theoretical since it yields negligible savings in memory storage and usually increases the execution time <cit.>.
While decomposing the transformation from mean to osculating elements into different parts clearly simplifies the periodic terms of the solution by notably reducing their size, the splitting procedure has the undesired side effect of slowing the evaluation of dense output ephemeris. This paradox stems from the fact that the eccentricity and inclination remain constant in the secular variables. Because of that, when the different transformations used in the construction of the analytical perturbation theory are combined into a single transformation, the coefficients of the trigonometric polynomials comprising the periodic corrections only need to be evaluated once, which is done jointly with the initialization of the secular terms of the analytical solution <cit.>. In this way, the repeated evaluation of the perturbation solution in the computation of ephemeris is notably accelerated. On the contrary, if a sequence of different transformations is used, then these coefficients only remain constant in the first transformation of the sequence, although some of them may remain constant also in the second one <cit.> —the most favorable case being provided by the reverse normalization scheme <cit.> in which the only terms that need reevaluation are eccentricity polynomials. This need of repeatedly evaluating coefficients made of inclination and eccentricity polynomials may counterbalance the advantages provided by the simpler form of the sequential periodic corrections, in this way clearly penalizing the efficiency of the analytical theory for dense output.
In order to demonstrate these facts, two alternative higher-order extensions of Brouwer's classical solution <cit.> have been implemented. More precisely, since the Earth's zonal harmonic coefficient of the fifth degree is at least one order of magnitude smaller than the zonal harmonic coefficients of lower degrees, we neglected the contribution of this zonal harmonic in Brouwer's Geopotential model. In addition, our approach takes the proper calibration of the mean semi-major axis into account <cit.>. In this way, the dominant long-term secular drift of the position errors in the along-track direction —which is typical of perturbation solutions relying on the physical time as the independent variable <cit.>— is reduced by at least one order of magnitude with respect to traditional implementations of the Geopotential disturbing effect.
§ ANALYTIC PERTURBATION SOLUTIONS. GENERAL FEATURES
Analytical solutions to orbital perturbation problems are commonly approached in a set of three oscillating and three rotating variables. The latter are naturally angles whereas the former may have different nature. A common choice is the traditional set of Keplerian elements given by the semi-major axis a, eccentricity e, inclination I, right ascension of the ascending node Ω, argument of the periapsis ω, and mean anomaly M. The perturbation solution is obtained through an analytical transformation to mean elements (a',e',I',Ω',ω',M') such that the first three remain constant, namely,
da'/dt=de'/dt=dI'/dt=0,
whereas the last three evolve at constant rates
dΩ'/dt=n_Ω, dω'/dt=n_ω, dM'/dt=n_M.
Because the exact transformation from mean to osculating variables does not exist in general, it is approximated with
𝒯:(a,e,I,Ω,ω,M;ϵ)↦(a',e',I',Ω',ω',M')
given by a truncated Taylor series in ϵ. The small parameter ϵ may be a physical quantity, the most desirable case, or a formal parameter —a token that indicates the strength of the disturbing forces relative to the integrable non-perturbed model. With formal parameters, the analytical solution is constrained to a particular dynamical regime, while physical quantities allow for greater generality.
The perturbation approach is not constrained to the use of Keplerian elements. It can be applied to different sets of singular or non-singular variables. In particular, canonical variables assign uniform dimension to the oscillating-type quantities. In that case, (a,e,I) are customarily replaced by (L,G,H). L=√(μa) is the Delaunay action, with μ denoting the gravitational parameter. G=Lη is the specific angular momentum, where η=(1-e^2)^1/2. Finally, H=GcosI denotes the third component of the angular momentum vector. The set (L,G,H,ℓ,g,h), with ℓ=M, g=ω, and h=Ω, is known as the Delaunay canonical variables. They are the action-angle variables in which a complete reduction of the Kepler Hamiltonian is achieved <cit.>.
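For reference, the Delaunay actions are straightforward to evaluate from (a, e, I); the sketch below only illustrates the definitions just given and assumes the standard Earth gravitational parameter in km^3/s^2.

```python
import math

def delaunay_actions(a, e, inc, mu=398600.4418):
    L = math.sqrt(mu * a)            # Delaunay action
    eta = math.sqrt(1.0 - e ** 2)
    G = L * eta                      # specific angular momentum
    H = G * math.cos(inc)            # third component of the angular momentum
    return L, G, H

# TOPEX-like values used later in the paper, inclination in radians
print(delaunay_actions(a=7707.270, e=0.0001, inc=math.radians(66.04)))
```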
It is worth remarking that, in the analytical solution by canonical methods, the transformation (<ref>) can be derived from a scalar generating function W=W(L,G,H,ℓ,g,h), simplifying the computational process <cit.>.
Hereafter, we shall limit the discussion to perturbed Keplerian motion and Hamiltonian perturbations in Delaunay variables. Then, the secular frequencies can be written in the general form <cit.>
n_M =ñ∑_i≥0ϵ^i/i!Φ_i(a'_0,e'_0,I'_0)
n_ω = ñ∑_i≥1ϵ^i/i!Γ_i(a'_0,e'_0,I'_0)
n_Ω = ñ∑_i≥1ϵ^i/i!Ψ_i(a'_0,e'_0,I'_0)
where Φ_i, Γ_i, and Ψ_i, are functions of the initial conditions in prime (mean) variables. Namely, cosI'_0=H'_0/G'_0, e'_0=(1-G'^2_0/L'^2_0)^1/2, a'_0=L'_0^2/μ, and ñ=(μ/a'_0^3)^1/2=μ^2/L'_0^3. The periodic corrections take the form of truncated multivariate Fourier series in the angle variables, with coefficients that are truncated series in the action variables. Commonly, these are expressed as eccentricity polynomials, with the coefficients given by inclination polynomials <cit.>.
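Once the frequencies n_M, n_ω, n_Ω have been initialized, evaluating the secular part of such a theory reduces to a linear propagation of the angles, as sketched below; the numerical rates are illustrative placeholders, not the truncated series above.

```python
import math

def secular_angles(t, M0, w0, W0, n_M, n_w, n_W):
    two_pi = 2.0 * math.pi
    M = (M0 + n_M * t) % two_pi          # mean anomaly
    w = (w0 + n_w * t) % two_pi          # argument of the perigee
    W = (W0 + n_W * t) % two_pi          # right ascension of the ascending node
    return M, w, W

# one day of propagation with illustrative (not computed) rates in rad/s
print(secular_angles(t=86400.0, M0=0.0, w0=1.0, W0=2.0,
                     n_M=1.04e-3, n_w=8.0e-7, n_W=-7.0e-7))
```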
The computation of the constants of the perturbation theory ñ, a'_0, e'_0, I'_0, Ω'_0, ω'_0, and M'_0, in mean elements, can be derived from a fit to observations <cit.>, which are obtained either from real data or synthetically generated by a preliminary numerical integration of one or two orbits <cit.>. Alternatively, the initialization constants can be obtained from an initial state vector in osculating elements by inverting the mean–to–osculating transformation of the perturbation theory <cit.>, an operation that is sometimes replaced by root–finding procedures <cit.>. Moreover, modern perturbation methods allow for the computation of both the direct and inverse transformations in explicit form <cit.>.
An intrinsic characteristic of perturbation solutions is that, due to the truncation of the series comprised in the solution, they always introduce an error in the secular frequencies. This happens even when the truncation is made to machine precision <cit.>. In consequence, the errors always undergo a secular drift which, eventually, prevails over the inaccuracies due to the truncation of the periodic corrections. Therefore, it is common practice to compute the periodic terms to one order less than their secular counterparts <cit.>. However, the proper propagation of the secular frequencies up to a given order requires the initialization of the constants of the perturbation theory to the same truncation order as the secular terms.
In the case of perturbed Keplerian motion, this accuracy can be relaxed in the initialization of n_ω and n_Ω because they are proportional to ϵ, as shown by the lower limit 1 of the summation index in Eqs. (<ref>) and (<ref>). Nevertheless, the higher accuracy is mandatory in the initialization of the secular mean motion n_M, for which the summation index in Eq. (<ref>) starts from zero. Neglecting this consideration causes errors in the in-track direction that are inconsistent with the truncation order of the secular terms of the analytical solution <cit.>. The remedy is to extend the computation of the periodic corrections of the semi-major axis to the same order as the secular terms <cit.>. When the perturbation solution is computed stepwise, in Brouwer's seminal style of removing the short-period terms before the long-period ones, these additional computations are limited to the short-period corrections of the semi-major axis.
For Hamiltonian perturbations, rather than computing additional terms of the periodic corrections to the semi-major axis, the semi-major axis can be calibrated to higher-order effects using the energy equation in the clever way proposed by Breakwell and Vagners <cit.>. The procedure relies on the fact that the energy value for given initial conditions does not change by a transformation of variables. When using orbital elements, the energy equation ℋ≡T+V=ℰ, where T and V denote the kinetic and potential energy, respectively, can be written in osculating variables like
ℋ≡-μ/2a+ϵ𝒫(a,e,I,Ω,ω,M)=ℰ.
On the other hand, after the complete Hamiltonian reduction, the energy equation takes the form
ℰ=-μ/2a'+∑_m=1^kϵ^m/m!ℋ_m(a',e',I')+𝒪(ϵ^k+1),
where ℋ_m are the computed Hamiltonian terms. Then, for a given initial state (a_0,e_0,I_0,Ω_0,ω_0,M_0), we compute the energy value ℋ(a_0,e_0,I_0,Ω_0,ω_0,M_0)=ℰ_0 exactly from Eq. (<ref>) and replace ℰ=ℰ_0 in Eq. (<ref>), from which
ℰ_0+μ/2a'_0-∑_m=1^kϵ^m/m!ℋ_m(a'_0,e'_0,I'_0)=Δ,
where a'_0, e'_0, and I'_0 are obtained from the transformation from mean (prime) to osculating variables. If this transformation is known to 𝒪(ϵ^k), the energy equation (<ref>) is certainly accurate to Δ=𝒪(ϵ^k+1). However, if the transformation is only known to 𝒪(ϵ^k-1), as it is commonly the case, then the error in the energy equation will be only Δ=𝒪(ϵ^k) due to the propagation of errors in the Keplerian term. The issue is easily fixed by replacing the value a'_0 obtained from the 𝒪(ϵ^k-1) mean to osculating transformation by the calibrated value
â_0=(μ/2)[-ℰ_0+∑_m=1^kϵ^m/m!ℋ_m(a'_0,e'_0,I'_0)]^-1,
which is obtained by solving Eq. (<ref>) with Δ=0 for the Keplerian term. Then, a'_0 is replaced by â_0 in the computation of ñ. That is, we replace ñ=(μ/â_0^3)^1/2 in Eqs. (<ref>)–(<ref>).
This calibration procedure avoids the need of carrying out the heavy computations required for extending the truncation order of the mean to osculating transformation, and generally guarantees that the predictions of the perturbation theory are close to the expected accuracy for a given truncation order <cit.>.
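The algebraic step of the calibration is simple once the energy value and the mean-element Hamiltonian terms are available. In the sketch below, the list of callables standing for the ℋ_m and the numerical values in the illustrative call are hypothetical placeholders for quantities supplied by the perturbation theory.

```python
import math

def calibrated_sma(E0, mean_elements, hamiltonian_terms, eps, mu):
    """hamiltonian_terms: callables H_m(a, e, I), m = 1..k (hypothetical interface)."""
    a, e, inc = mean_elements
    correction = sum((eps ** m) / math.factorial(m) * H(a, e, inc)
                     for m, H in enumerate(hamiltonian_terms, start=1))
    return 0.5 * mu / (correction - E0)      # Kepler term solved from the energy equation

# illustrative call with a dummy first-order term
print(calibrated_sma(E0=-25.86, mean_elements=(7707.0, 1.0e-4, 1.15),
                     hamiltonian_terms=[lambda a, e, i: -1.0e-2],
                     eps=1.082e-3, mu=398600.4418))
```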
Regarding the periodic corrections, when they are given by a single set of corrections from mean to osculating elements, we only need to update the mean angles of the analytical perturbation solution for ephemeris evaluation. Indeed, because the action variables remain constant in mean elements, they only need to be computed once, during the initialization of the solution <cit.>. However, perturbation solutions are normally constructed stepwise, by splitting the transformation (<ref>) into two or more canonical steps (see summary in Table <ref>). In that case, the different transformations can be combined into a single one <cit.> to take advantage of the previously mentioned fact. This composition into a single transformation is immediate when the periodic corrections are constrained to the first order of the perturbation, as done by Brouwer <cit.>. Conversely, when higher-order terms of the periodic corrections are included, they are usually arranged in separate blocks. This simplifies the implementation of an analytic ephemeris generator and reduces memory requirements <cit.>. The downside is that both action and angle mean variables must be updated at each step. This degrades the efficiency of dense ephemeris evaluation.
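The point can be illustrated with a toy trigonometric series whose coefficients depend only on (e, I): if the corrections are composed into a single transformation, such coefficients can be computed once for all output epochs, whereas a stepwise evaluation must rebuild part of them at every call. The series and coefficients below are arbitrary stand-ins, not the actual corrections of any theory.

```python
import math, time

def coeffs(e, inc, n_terms=50):                 # stand-in for eccentricity/inclination polynomials
    s = math.sin(inc)
    return [(e ** (k % 3)) * (s ** (k % 5)) / (1.0 + k) for k in range(n_terms)]

def series(C, theta):                           # stand-in for a periodic correction
    return sum(c * math.cos(k * theta) for k, c in enumerate(C))

e, inc = 0.001, math.radians(97.42)
epochs = [0.01 * i for i in range(3000)]        # dense output, as in the tests below

t0 = time.perf_counter()
C = coeffs(e, inc)                              # composed transformation: build once
out1 = [series(C, th) for th in epochs]
t1 = time.perf_counter()
out2 = [series(coeffs(e, inc), th) for th in epochs]   # stepwise style: rebuild every epoch
t2 = time.perf_counter()
print("coefficients reused: %.4f s   rebuilt per epoch: %.4f s" % (t1 - t0, t2 - t1))
```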
§ GEOPOTENTIAL MODEL AND PERTURBATION SOLUTION.
For reference, we deal with the popular Geopotential solution derived by Brouwer <cit.>. More precisely, in view of the small value of the Earth's zonal harmonic coefficient of degree 5, we limit the dynamical model to the contribution of the 2nd, 3rd, and 4th zonal harmonics, which is the same model used in <cit.>. The disturbing function of the corresponding Hamiltonian is
𝒫=μ/r∑_i≥2R_^i/r^iJ_iP_i(sinφ),
in which r is distance from the Earth's center of mass, R_ is the Earth's equatorial radius, J_i stands for the zonal harmonic coefficient of degree i, P_i denotes the Legendre polynomial of degree i, and φ is latitude.
We adhere to Kaula's style <cit.> yet in the slightly different arrangement of <cit.>. Thus, we write the disturbing potential (<ref>) in the form
𝒫=μ/a(a^2/r^2η)∑_i≥2J_iV_i,
in which
V_i = R_^i/p^iη∑_j=0^iℱ_i,j(s)∑_k=0^i-1i-1ke^kcos^kf
×cos[(i-2 j)(f+ω)-π(i2)],
where p=aη^2 is the orbit parameter, f is the true anomaly, s stands for the sine of the inclination, and ℱ_i,j are particularizations of Kaula inclination functions for the zonal problem. Namely, for i≥2l,j≥l,
ℱ_i,j=∑_l=0^min(j,i_0)(-1)^j-l-i_0/2^2i-2l\binom{2i-2l}{i}\binom{i}{l}\binom{i-2l}{j-l}s^i-2l,
where i_0=⌊i/2⌋ denotes the largest integer less than or equal to i/2.
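A direct transcription of this expression is given below, provided only as a convenience for checking the inclination functions numerically.

```python
from math import comb

def F(i, j, s):
    """Zonal inclination function F_{i,j}(s), with s = sin(inclination)."""
    i0 = i // 2                                   # largest integer <= i/2
    total = 0.0
    for l in range(0, min(j, i0) + 1):
        sign = (-1) ** ((j - l - i0) % 2)
        term = comb(2 * i - 2 * l, i) * comb(i, l) * comb(i - 2 * l, j - l)
        total += sign * term * s ** (i - 2 * l) / 2 ** (2 * i - 2 * l)
    return total

s = 0.5
print([F(2, j, s) for j in range(3)])             # F_{2,0}, F_{2,1}, F_{2,2}
```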
§.§ Short-period elimination
The removal of short-period terms is standard <cit.>. The solution of the integrals involved in the perturbation approach becomes easier applying the elimination of the parallax simplification <cit.> before addressing the short-period terms. To the first order of J_2, the elimination of the parallax transformation
(x,X;ϵ)𝒯_1⟶(x',X'),
where x denotes the coordinates of the canonical set and X their conjugate momenta, is derived from the generating function
W^P=W_1^P+J_2W_2^P.
The terms on the right-hand side of Eq. <ref> are given by
W_1^P= G/8R_^2/p^2{ 2e(3s^2-2) sinf
-s^2[3 e sin (f+2ω)
+3 sin (2 f+2ω)+e sin (3 f+2ω)] },
W_2^P= -GR_^3/p^3J̃_3∑_i=0^1∑_j=2i-1^2i+3∑_k=0^1e^2ks^2i+1e^|j-2i-1|
× Q_i,j,kcos[jf+(2i+1)ω]
+GR_^4/p^4
∑ _i=0^2 ∑ _j=2 i-3^2 i+3∑ _k=0^1
e^2 k s^2 i e^| j-2 i| P_i,j,ksin(jf+2iω),
where J̃_n≡J_n/J_2^2, and the inclination polynomials Q_i,j,k and P_i,j,k can be found in Tables <ref> and <ref> of the Appendix.
The transformation (<ref>) is obtained by simple evaluation of Poisson brackets in a convenient set of variables <cit.>. In particular, we compute
[ y_1={x,W_1^P}, y_2={x,W_2^P}+{y_1,W_1^P},; Y_1={X,W_1^P}, Y_2={X,W_2^P}+{Y_1,W_1^P}, ]
where the braces denote the Poisson bracket operator. Replacing (x,X) with (x',X') in y_1, Y_1, and y_2, Y_2, the mean to osculating transformation takes the form
[ x = ∑_j≥0(ε^j/j!)y_j(x',X'),; X = ∑_j≥0(ε^j/j!)Y_j(x',X'). ]
The new Hamiltonian, with the parallax eliminated, depends on the Delaunay prime variables. Next, the complete removal of short-period terms is achieved by the Delaunay normalization <cit.>. The transformation to double-prime variables
(x',X';ϵ)𝒯_2⟶(x”,X”),
is derived from the new generating function
W^D=W_1^D+J_2W_2^D,
with
W_1^D= GR_^2/p^21/4(3s^2-2) ϕ,
W_2^D= -GR_^4/p^4(3s^2-2)^2/32 (η +1)(4esinf+e^2sin2f)
-G
×R_^3/p^3J̃_33/4es(4-5 s^2)ϕsinω
-GR_^4/p^43/64ϕ{35
×
s^4(1-5J̃_4)-40s^2(2-5J̃_4)+40(1-J̃_4)
+η^2[5s^4(21J̃_4+1) +8s^2(1-15 J̃_4) +8(3J̃_4
-1)] +2[5s^2(7J̃_4+3)-2(15J̃_4+7)]
× e^2s^2cos2ω},
where ϕ=f-ℓ denotes the equation of the center. The transformation (<ref>) is obtained analogously to the previous case, simply replacing W^P with W^D and adding one prime to the variables in Eqs. (<ref>) and Eq. (<ref>).
Once the short-period terms have been removed, up to the third order of J_2 we obtain the Hamiltonian,
ℋ=ℋ_0+J_2ℋ_1+J_2^2ℋ_2+J_2^3ℋ_3,
where
ℋ_0= -μ/2a,
ℋ_1= -μ/2aR_^2/p^2η1/2(2-3s^2),
ℋ_2= μ/2aR_^3/p^3J̃_33/2(5s^2-4)ηessinω
+μ/2aR_^4/p^4∑_i=0^1∑_j=0^2-2it_2,i,jη^j+1e^2icos2iω,
ℋ_3= -μ/2aR_^6/p^6[J̃_3/R_/p∑_i=1^2∑_j=0^6-3ie^2i-1sin(2i-1)ω/(1+η)^i2
×η^j+1u_3,i,j
+∑_i=0^2∑_j=0^4-it_3,i,je^2icos2iω/(1+η)^i2η^j+1].
The non-vanishing inclination polynomials t_l,k,j, u_3,k,j, are listed in Tables <ref> and <ref> of the Appendix. We recall that the variables in these expressions must be written in terms of the double-primed Delaunay variables. Note that ℋ=ℋ(a,e,I,-,ω,-) is free of the mean anomaly up to the truncation order. Therefore, the mean semi-major axis a=μ/L'^2 becomes a formal integral of the long-period Hamiltonian (<ref>).
§.§ Long-period elimination
Removal of long-period terms from the Hamiltonian (<ref>) is achieved by a transformation to triple-prime variables
(x”,X”;ϵ)𝒯_3⟶(x”',X”').
The generating function of the long-period elimination is
W^L=W_1^L+J_2W_2^L,
where
W_1^L= GR_^2/p^25s^2(7J̃_4+3)-2(15J̃_4+7)/32(5s^2-4)e^2s^2sin2ω
+GR_/p1/2J̃_3escosω,
W_2^L=
GR_^4/p^41-η/(5s^2-4)^3{∑_j=0^3u_0,jη^jsin2ω
+(1+η)
× u_0,4e^2sin4ω}
-GR_^3/p^3J̃_3/(5s^2-4)^21/η +1
×[∑_j=0^3u_1,jη^jecosω +(1+η)u_1,4e^3cos3ω]
-GR_^2/p^2J̃_3^215s^2-13/8(5s^2-4)s^2e^2sin2ω.
The inclination polynomials u_i,j are given in Table <ref> of the Appendix. The transformation (<ref>) achieving the complete Hamiltonian reduction is obtained from expressions analogous to Eqs. (<ref>)–(<ref>), using W^L as the generating function.
The Hamiltonian with the periodic terms removed takes the form
𝒦=∑_i=0^3(J_2^i/i!)𝒦_i,
in triple prime variables. Up to 𝒪(J_2^3), the Hamiltonian terms 𝒦_0=ℋ_0 and 𝒦_1=ℋ_1 remain the same in the new variables, whereas
𝒦_2= -μ/2aR_^4/p^43/32η{η^2[5(21 J̃_4+1)s^4 -8(15J̃_4-1)
× s^2+8(3J̃_4-1)] +4η(3s^2-2)^2 +35s^4(1
-5J̃_4)-40(2-5J̃_4)s^2+40(1-J̃_4) },
𝒦_3= μ/2aR_^6/p^6η{9J̃_3^2/8R_^2/p^2[η^2(20s^4-22s^2+4)-25s^4
+26 s^2-4]
-∑_j=0^1∑_k=0^2-je^2kη^j(3s^2-2)^j/(5s^2-4)^2-2jl_j,k},
with the inclination polynomials l_j,k given in Table <ref>. Finally, the Hamilton equations of Eq. (<ref>) yield the secular variations
n_Ω=∂𝒦/∂H”',
n_ω=∂𝒦/∂G”',
n_M=∂𝒦/∂L”',
which are commonly reformulated in non-singular variables to avoid issues with circular and equatorial orbits. Nonetheless, the critical inclination singularity occurring when s^2=4/5, as follows from denominators in Eqs. (<ref>), (<ref>), and (<ref>), cannot be avoided due to its essential character <cit.>. Because the Hamiltonian (<ref>) is not applicable to librating-perigee orbits, accidental overflows can happen in a general propagation with the analytical solution. Several ways to circumvent this problem exist <cit.>.
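For checking purposes, the secular rates can also be recovered numerically from any implementation of the fully reduced Hamiltonian 𝒦 by central differences of the Hamilton equations just quoted; the Kepler-only 𝒦 used below is a placeholder, not the truncated series of this paper, and the step size should be scaled to the magnitude of the actions.

```python
def secular_rates(K, L, G, H, h=1.0e-3):
    n_M = (K(L + h, G, H) - K(L - h, G, H)) / (2.0 * h)   # dM/dt     = dK/dL
    n_w = (K(L, G + h, H) - K(L, G - h, H)) / (2.0 * h)   # domega/dt = dK/dG
    n_W = (K(L, G, H + h) - K(L, G, H - h)) / (2.0 * h)   # dOmega/dt = dK/dH
    return n_M, n_w, n_W

mu = 398600.4418
K_kepler = lambda L, G, H: -mu ** 2 / (2.0 * L ** 2)      # placeholder Hamiltonian
print(secular_rates(K_kepler, L=55425.0, G=55424.0, H=22520.0))
```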
§.§ Composition of transformations
The generating function of the short-period elimination is obtained by composing the elimination of the parallax and the Delaunay normalization into a single canonical transformation 𝒯^S=𝒯_1∘𝒯_2 using the Lie transforms technique (see <cit.>, or 2.1.4 of <cit.>). The composite transformation is readily derived from a generating function obtained as the direct sum of the respective generating functions, both written in the same set of variables. Thus, the first step is to reformulate W^D in the osculating (non-primed) variables. Instead of replacing the transformation equations and rearranging terms of the same order of the small parameter, the standard Lie transforms method can be applied to reformulate the generating function <cit.>.
The first-order term of the generating function for the composite transform 𝒯^S is
W_1^S=W_1^P+W_1^D,
where the last summand is obtained by substituting prime with non-primed variables in Eq. (<ref>). The second-order term W_2^S=W_2^P+W_2^D results in
W_2^S= GR_^3/p^3J̃_33/4(5s^2-4)esϕsinω
+GR_^4/p^43/64ϕ{[4
×(15J̃_4+7) -10(7J̃_4+3)s^2] e^2s^2cos2ω
+e^2 [5 s^4(21 J̃_4+1) -8 s^2(15 J̃_4-1)+8(3 J̃_4
-1)]
+2 [5 s^4(7 J̃_4-4)-4 s^2(10 J̃_4-9)+8
× (J̃_4-2)]
+4(5s^2-4) s^2[3ecos(f+2ω)
+ecos(3f+2ω) +3cos(2f+2ω)]
}
+GR_^4/p^4
×1/256∑_i=0^2∑_j=2i-3^2i+3sin(jf+2iω)∑_k=0^3η^ke^|j-2i|/1+η
× s^2iQ_i,j,k^*
-GR_^3/p^3J̃_3∑_i=0^1∑_j=2i-1^2i+3∑_k=0^1e^|j-2i-1|
e^2ks^2i+1(5s^2-4)^1-iq_i,j,k^*cos[jf+(2i+1)ω],
also expressed in osculating variables. The inclination polynomials q_i,j,k^* and Q_i,j,k^* are given in Tables <ref> and <ref>.
Analogously, the composition of the short- and long-period elimination into a single transformation 𝒯=𝒯^S∘𝒯_3 requires the reformulation of W^L in osculating variables. The first-order term is obtained from Eqs. (<ref>) and (<ref>) as
W_1=W_1^S+W_1^L,
where W_1^L results from swapping double-prime with non-primed variables. The second-order term W_2=W_2^S+W_2^L is given by
W_2= GR_^3/p^3J̃_33/8ϕ(5s^2-4)e s sinω
-GR_^2/p^2J̃_3^2e^2 s^2
×15s^2-13/8(5s^2-4)sin2ω
+GR_^4/p^43/64ϕ{[2(15J̃_4+7)
-5s^2(7J̃_4+3)]e^2s^2cos2ω +e^2[5 s^4(21 J̃_4+1)
-8 s^2(15 J̃_4-1)+8(3 J̃_4-1)]
+2[5 s^4(7 J̃_4
-4) -4 s^2(10 J̃_4-9)+8(J̃_4-2)]
+4(5s^2
-4)s^2[3ecos(f+2ω) +ecos(3f+2ω) +3
×cos(2f+2ω)] }
-GR_^3/p^3J̃_3s/1+η∑_i=0^1∑_j=i-1^2i+3∑_k=0^3
η^ke^|j-2i-1|q_i,j,k/(5s^2-4)^2cos[jf+(2i+1)ω]
+GR_^4/p^4
∑_i=0^2∑_j=-1^2i+3∑_k=0^3η^k/1+ηe^|j-2i|Q_i,j,k/(5s^2-4)^3sin(jf+2iω),
with the inclination polynomials q_i,j,k, Q_i,j,k listed in Tables <ref> and <ref>.
One can arrive to the fully-reduced Hamiltonian (<ref>) through different transformations. For example, the alternative sequence given by the elimination of the parallax, followed by elimination of the perigee and, lastly, Delaunay normalization <cit.>. This sequence will yield different second-order terms for the second and third transformations. However, the composition of their generating functions will still yield the same W_1 and W_2 as in Eqs. (<ref>) and (<ref>).
Once the generating functions have been merged into a single one, the mean–to–osculating transformation is computed from expressions analogous to Eqs. (<ref>) and (<ref>).
§ EFFICIENCY TESTS
Splitting the complete reduction of the zonal problem simplifies the construction of the analytical solution, and helps in understanding essential aspects of the dynamics. This kind of decomposition can be applied also to the long-period elimination, but the modest improvement in memory storage does not warrant implementation in an analytic orbit propagator <cit.>. On the other hand, the composition into a single transformation has the drawback of increasing substantially the total size of the corrections. However, we will show that carrying out the transformation from secular to osculating variables in several steps can also have important shortcomings in practice.
Increasing the size of the formal series representing the solution does not necessarily lead to increased computational burden. The factorization of the inclination polynomials in the single-transform solution reveals multiple occurrences of common factors. This makes the composite transform amenable to additional optimizations compared to a sequence of different transformations. This was the case for the simpler J_2-problem solution, where an optimizing compiler produced code competitive with the stepwise evaluation of the analytical solution <cit.>. Additionally, when implementing an ephemeris generator, the splitting approach faces the obvious handicap of evaluating both action and angle variables at each step, whereas the single transformation only updates the angles. Thus, if memory requirements are not critical, the single transformation is preferable in practice.
To compare the relative merits of each approach, we implemented two analytical orbit generators based on the extended Brouwer's solution, retaining secular terms up to the third order of J_2 and periodic corrections up to the second order. Both codes were written in Fortran 77. The composite code uses the results from Section <ref>. The algorithm split in multiple transformations follows the classical approach of eliminating the parallax first, followed by removal of the perigee, and final Delaunay normalization. This sequence, denoted hereafter as PPD, is usually considered the most efficient approach <cit.>. The inverse transform (osculating to mean elements) is only evaluated once, during initialization of the analytical solution. Given that the impact for dense ephemeris output is negligible, we always used the simpler code (PPD) for the inverse transformation.
The only manual optimization in the implementation of the algorithms is the factorization of the inclination polynomials. Both codes were generated with the optimization option on Absoft Pro Fortran 16.0.2 compiler. The size of the single-transformation executable is 30% larger than the PPD implementation, a hint of its higher memory usage.
We tested the execution time for different orbital regimes. For a dense ephemeris evaluation of 3000 points we found that the single-transformation code was at least 30% faster than the classical PPD implementation in all cases.
Our implementations use transformations based on canonical polar variables (compatible with circular orbits) widely recognized as faster to evaluate <cit.>. Comparative performance may change slightly for other sets of variables, but the overall trend is expected to remain the same.
Regarding accuracy, both implementations behave as expected from a perturbation theory. There are differences in the errors for each test case, but they are of the same order as the neglected terms of the perturbed solution. Figures <ref>-<ref> compare the evolution over 30 days of the errors in the along-track, radial, and cross-track directions for three representative orbits borrowed from <cit.>. Namely, a TOPEX-type orbit, close to the critical inclination but still within the realm of validity of the analytical solution (a=7707.270 km, e=0.0001, I=66.04^∘);
a PRISMA-type orbit, strongly affected by the zonal perturbation due to its low altitude (a=6878.14 km, e=0.001, I=97.42^∘);
and a highly elliptic geostationary transfer orbit (GTO, a=24460 km, e=0.73, I=30^∘),
with large variations in the strength of the perturbations.
As shown in the figures, both approaches yield very similar error trends. The largest difference lies in the cross-track error of the TOPEX orbit (Fig. <ref>, bottom), for which there is no immediate explanation. Even in this case, the error magnitude remains within the expected bounds given the truncation order of the theory. It is worth recalling that the constants of the solution have been initialized in Breakwell and Vagners' style <cit.> for both codes. This balances the errors in the three directions, commonly reducing the secular growth of the errors by one or two orders of magnitude.
§ CONCLUSIONS
We present a higher-order extension (second order for periodic corrections and third for secular terms) of Brouwer's gravitational solution to the artificial satellite problem. Routinely, analytical theories are constructed removing the periodic terms in multiple stages. A standard approach is preliminary simplification (parallax elimination) followed by removal of long- and short-period terms. This step-by-step strategy simplifies the construction of higher-order solutions and yields more compact formulas for the periodic corrections. Alternatively, the different stages can be composed into a single transformation between mean and osculating variables. The composite transform gives rise to more complex expressions, a disadvantage for understanding fundamental aspects of the dynamics, as well as for code readability. However, while the formal series representing the solution increases in size, it contains multiple repetitions of common factors in the inclination polynomials. These recurring terms open the door for additional optimization of the calculations. Furthermore, generating ephemeris with the standard —multi-step— approach requires evaluating action and angle variables at each step. The composite transform, on the other hand, only updates the angles further improving performance.
We compared the efficiency and accuracy of a popular multi-step implementation —PPD, short for parallax elimination, removal of perigee and Delaunay normalization— against the monolithic transformation for three representative orbits (TOPEX, PRISMA and GTO typologies). The composite approach lowered run times by more than 30% in all cases, while maintaining the accuracy expected from the truncation order of the theory. On the negative side, the code size, 30% larger than the PPD version, reflects the higher complexity of the associated formulas.
Our results show that, for the cases tested, the single-transformation algorithm delivers a substantial improvement in computational performance. This gain in speed must be balanced against code simplicity and size, areas where the PPD implementation excels. In situations where the trade-off is acceptable, the monolithic approach should be seriously considered for building analytical propagators.
§.§ Acknowledgments
The authors acknowledge Khalifa University of Science and Technology's internal grant CIRA-2021-65/8474000413. ML also acknowledges partial support from the European Research Council (Horizon 2020 grant agreement No 679086 COMPASS) and the Spanish State Research Agency and the European Regional Development Fund (Projects PID2020-112576GB-C22 and PID2021-123219OB-I00, AEI/ERDF, EU). EF has been partially supported by the Spanish Ministry of Science and Innovation under projects PID2020-112576GB-C21 and PID2021-123968NB-100.
elsarticle-num
§ TABLES OF INCLINATION POLYNOMIALS
|
http://arxiv.org/abs/2307.03978v2 | 20230708135448 | Separable MV-algebras and lattice-groups | [
"Vincenzo Marra",
"Matías Menni"
] | math.RA | [
"math.RA",
"math.AG",
"math.CT",
"math.LO",
"Primary: 06D35, Secondary: 06F20, 18B50, 12F10"
] |
General theory determines the notion of separable MV-algebra (equivalently, of separable unital lattice-ordered Abelian group). We establish the following structure theorem: An MV-algebra is separable if, and only if, it is a finite product of algebras of rational numbers—i.e., of subalgebras of the MV-algebra [0,1]∩ℚ. Beyond its intrinsic algebraic interest, this research is motivated by the long-term programme of developing the algebraic geometry of the opposite of the category of MV-algebras, in analogy with the classical case of commutative K-algebras over a field K.
§ INTRODUCTION
For any field K, a (commutative) K-algebra is separable if, and only if, it is a finite product of finite separable field extensions of K. See, for example, <cit.>. The aim of the present paper is to establish the analogue of this fact for MV-algebras and lattice-groups. We show as our main result that an MV-algebra is separable exactly when it is a finite product of algebras of rational numbers—the subalgebras of [0,1]∩ (Theorem <ref>). By a well-known theorem of Mundici <cit.>, the category of MV-algebras is equivalent to the category of lattice-ordered Abelian groups with a unit. We frame our treatment in the language of MV-algebras, and postpone to the final Appendix <ref> a synopsis of its translation to lattice-groups.
While the main result of this paper holds independent algebraic interest, it finds its deeper motivation in a broader mathematical landscape on which we offer some comments in this introduction.
As explained in <cit.>, some of Grothendieck’s algebro-geometric constructions may be abstracted to the context of extensive categories <cit.>.
A category 𝒞 with finite coproducts is extensive if the canonical functor
𝒞/X ×𝒞/Y →𝒞/(X + Y)
is an equivalence for every pair of objects X, Y in 𝒞.
Extensivity attempts to make explicit a most basic property of (finite) coproducts in categories `of spaces'. For instance, the category of topological spaces and continuous functions between them is extensive; the category of groups is not.
Extensive experience indeed confirms that conceiving an extensive category as a category `of spaces' is a useful conceptual guide. Essential to the development of Algebraic Geometry is the fact that the opposite of the category of (commutative unital) rings is extensive.
(It easily follows that, for any ring R, the opposite of the category of R-algebras is extensive.)
Extensivity naturally determines a notion of complemented subobject.
So, in an extensive category with finite products, it is also natural to consider the objects with complemented diagonal. These are traditionally called decidable objects, and it is useful to think of them as the `discrete spaces' inside the category `of spaces' where they live. For instance, a topological space is decidable if, and only if, it is discrete. For any ring R, and any R-algebra A, let A be the corresponding object in the extensive category (R/). Then A is decidable if, and only if, A is separable as an R-algebra. In other words, the separable R-algebras are precisely those for which the associated affine scheme is decidable.
Let us say that a category is coextensive if its opposite is extensive. In light of the above comments, an object in a coextensive category is called separable if the corresponding object in the opposite category is decidable.
The category of MV-algebras is coextensive.
This provides the notion of separable MV-algebra that is the topic of the present paper. Explicitly, the MV-algebra A is separable if, and only if, there is a homomorphism f: A + A → A such that the span
A ←─∇── A + A ──f─→ A
is a product diagram, where ∇: A + A → A denotes the codiagonal map.
The geometry of the opposite of the category of MV-algebras has long been the subject of intensive hands-on study because of its striking connections with several areas of classical mathematics, from piecewise-linear topology to the geometry of numbers.
The characterisation of decidable objects that we present here was motivated by our ongoing long-term project to study the `gros Zariski' topos determined by the theory of MV-algebras as the domain of a pre-cohesive geometric morphism <cit.>. We postpone the topos-theoretic consequences of separability to further publications; no Topos Theory is required for the proof of the purely algebraic results in the present paper.
The plan of the paper is as follows. In Sections <ref>, <ref>, and <ref> we introduce the necessary material to prove a sufficient condition for an extensive category with finite products to have the property that every decidable object is a finite coproduct of connected subterminals.
In Section <ref> we verify that the category of MV-algebras is coextensive.
In Theorem <ref> we characterise the subterminal objects of the opposite of the category of MV-algebras as, in algebraic terms, the subalgebras of [0,1]∩ℚ.
In order to extend Theorem <ref> to a characterisation of separable MV-algebras we need to introduce the Pierce functor for MV-algebras, an analogue of the standard ring-theoretic functor by the same name.
The key fact is that the Pierce functor preserves coproducts. To prove it, in Section <ref> we develop the required material on the topological connected-component functor π_0. Using the theory of spectra of MV-algebras recalled in Section <ref> along with the topological π_0 functor, we are able to show in Theorem <ref> that the Pierce functor does preserve all coproducts. Theorems <ref> and <ref> are combined in Section <ref> to obtain our main result, the mentioned characterisation of separable MV-algebras. We conclude Section <ref> with a discussion that points to further research aimed at enriching the connected-component functor to an `arithmetic connected-component functor'; this functor, we submit, arises out of locally finite MV-algebras. Finally, in Appendix <ref> we collect the translation of our main results to lattice-groups.
§ EXTENSIVE CATEGORIES AND CONNECTED OBJECTS
In this section we recall the definition of extensive category and of connected object.
For more details about extensive categories see, for example, <cit.> and references therein.
A category 𝒞 with finite coproducts is called extensive if for every X and Y in 𝒞 the canonical functor 𝒞/X ×𝒞/Y →𝒞/(X + Y)
is an equivalence.
Examples of extensive categories are (sets and functions), (finite sets and functions), any topos, , (compact Hausdorff spaces and continuous maps), (Stone[By a Stone space we mean a compact Hausdorff zero-dimensional space. Such spaces are often called Boolean in the literature.] spaces and continuous maps). The categories of rings, of Boolean algebras and of distributive lattices[Throughout the paper, with the exception of Appendix <ref>, we assume distributive lattices to have top and bottom elements preserved by homomorphisms.] are coextensive.
See <cit.> and <cit.> for further examples.
In extensive categories coproduct injections are regular monomorphisms,
coproducts of monomorphisms are monomorphisms, and
the initial object is strict in the sense that any map X → 0 is an isomorphism. Also, extensive categories are closed under slicing.
A coproduct in_0 X → X + Y ← Y :in_1 is
* disjoint if the coproduct injections are monic and the commutative square
0 ────→ Y
│            │ in_1
↓            ↓
X ──in_0──→ X + Y
is a pullback;
* universal if for every arrow Z → X + Y the two pullback squares below exist
V ────→ Z ←──── W
│            │            │
↓            ↓            ↓
X ──in_0──→ X + Y ←──in_1── Y
and the top cospan is a coproduct diagram.
The following result is essentially <cit.>.
A category with finite coproducts is extensive if, and only if,
coproducts are universal and disjoint.
Assume from now on that 𝒞 is an extensive category.
A monomorphism u: U → X in 𝒞 is called complemented if there is a v: V → X such that the cospan
u: U → X ← V :v is a coproduct diagram. In this case, v is the complement of u. Notice that complemented monomorphisms are regular monomorphisms because they are coproduct injections.
In the next definition, and throughout, we identify monomorphisms and subobjects whenever convenient.
An object X in 𝒞 is connected if it has exactly two complemented subobjects.
In the category of topological spaces or that of Stone spaces, an object is connected if and only if it has exactly two clopens.
A ring A is connected as an object of the opposite of the category of rings if and only if A has exactly two idempotents.
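As a quick computational check of this criterion (not taken from the text above), one can enumerate the idempotents of the rings ℤ/nℤ:

```python
def idempotents(n):
    return [x for x in range(n) if (x * x) % n == x]

for n in (4, 6, 30):
    e = idempotents(n)
    print(n, e, "connected" if len(e) == 2 else "not connected")
# Z/4 has only 0 and 1, hence is connected; Z/6 and Z/30 decompose further.
```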
We remark that, in general, connected objects are not closed under finite products.
For each X in 𝒞 we let 𝐁X denote the poset of complemented subobjects of X.
We stress that if u: U → X and v: V → X are two complemented monomorphisms in 𝒞 and f: U → V is such that v f = u then f is complemented <cit.>. So for any two complemented subobjects u, v of X, there is no ambiguity in writing u ≤ v since it means the same for u, v considered as subobjects, or as complemented subobjects.
Extensivity easily implies that the poset 𝐁X has finite infima, a bottom element, and an involution.
This structure may be used to prove that 𝐁X is actually a Boolean algebra which interacts well with pullbacks in the sense that, for any map f: X → Y in 𝒞, pulling back along f determines a Boolean algebra homomorphism 𝐁Y →𝐁X.
So, assuming that 𝒞 is well-powered, the assignment X ↦𝐁X extends to a finite-coproduct preserving functor from 𝒞 to the opposite of the category of Boolean algebras, both of which are extensive categories.
We will use the following simple equivalences.
For any object X in 𝒞 the following are equivalent.
* X is connected.
* X is not initial and, for every complemented subobject u U → X, U is initial or u is an isomorphism.
* X is not initial and, for every coproduct diagram U → X ← V, U is initial or V is initial.
§ FINITE-COPRODUCT PRESERVING FUNCTORS
Let 𝒞 and 𝒟 be extensive categories, and let L: 𝒞 →𝒟 preserve finite coproducts. Such a functor preserves complemented monomorphisms so, for any X in 𝒞, L induces a function 𝐁X →𝐁(L X) which is actually a map of Boolean algebras, natural in X. (It is relevant to remark that such a functor also preserves pullbacks along coproduct injections. See <cit.>.)
We will say that L is injective/surjective/bijective on complemented subobjects if and only if 𝐁X →𝐁(L X) has the corresponding property for every X in 𝒞.
The functor L → is injective on complemented subobjects if and only if it reflects 0. In this case, L also reflects connected objects.
Assume first that L is injective on complemented subobjects and let X in be such that L X = 0.
Then (L X) is the terminal Boolean algebra and, as X →(L X) is injective by hypothesis, X is also trivial.
For the converse notice that if L reflects 0 then the map X →(L X) in has trivial kernel for every X in .
To prove the second part of the statement assume that X in is such that L X is connected in .
If X were initial then so would L X because L preserves finite coproducts and, in particular, the initial object. So X is not initial.
Now assume that U → X ← V is a coproduct diagram.
Then so is L U → L X ← L V. Since L X is connected, either L U or L V is initial by Lemma <ref>.
As L reflects 0, either U or V is initial, so X is connected by the same lemma. (Alternatively, if X →(L X) is injective and its codomain is the initial Boolean algebra then so is the domain.)
We will be particularly interested in extensive categories wherein every object is a finite coproduct of connected objects.
For example, satisfies this property, but neither nor does.
If is the category of finitely presentable K-algebras for a field K, then also satisfies this property.
If L → is bijective on complemented subobjects then the following hold.
* The functor L preserves connected objects.
* For any object X in , if L X is a finite coproduct of connected objects then so is X.
* If every object in is a finite coproduct of connected objects then so is the case in .
* Assume that and have finite products and that L preserves them. If is such that finite products of connected objects are connected then so is the case in .
To prove the first item just notice that, by hypothesis, X →(L X) is an isomorphism for each X in . Hence if X has exactly two complemented subobjects then so does L X.
Before proving the second item we establish an auxiliary fact. Let X be in and let u U → L X be a complemented subobject in with connected U.
Then, as L is surjective on complemented objects by hypothesis, there exists a complemented subobject v V → X in such that L v = u as subobjects of L X. Then L V ≅ U is connected, so V is connected by Lemma <ref>.
Thus, we have lifted the `connected component' u of L X to one of X.
To prove the second item let (u_i | i ∈ I) be a finite family of pairwise-disjoint complemented subobjects of L X with connected domain whose join is the whole of L X.
For each i∈ I, let v_i be the complemented subobject of X induced by u_i as in the previous paragraph.
As L reflects 0, the family (v_i | i∈ I) is pairwise disjoint.
Also, L ⋁_i∈ I v_i = ⋁_i ∈ I L v_i = ⋁_i∈ I u_i is the whole of LX.
As L is injective on complemented subobjects, ⋁_i∈ I v_i must be the whole of X.
In summary, we have lifted the finite coproduct decomposition of L X to one of X.
The third item follows at once from the second.
For the fourth item, let X be the product of a finite family (X_i | i ∈ I) of connected objects in .
Then L X is the product of (L X_i | i ∈ I) because L preserves finite products.
Each L X_i is connected because L preserves connected objects by the first item, so L X is connected by our hypothesis on .
Hence X is connected by Lemma <ref>.
We next prove a sufficient condition for a functor L as above to be bijective on complemented subobjects.
If L → has a finite-coproduct preserving right adjoint, then L is bijective on complemented subobjects.
Let R be the right adjoint to L and let σ and τ be the unit and counit of L ⊣ R.
We show that L is both injective and surjective on complemented subobjects.
To prove injectivity it is enough to show that L reflects 0 (Lemma <ref>).
So let X be an object in such that L X is initial.
Then we may transpose the isomorphism L X → 0 in to a map X → R 0, but R 0 = 0 because R is assumed to preserve finite coproducts.
Since the initial object is strict, X is initial.
We next show that L is surjective on complemented subobjects.
Let u U → L X be a complemented monomorphism.
Then R u is complemented so the left pullback square below exists
V [d]_-v [r] R U [d]^-R u L V [d]_-L v[r] L(R U) [d]^-L(R u)[r]^-τ U [d]^-u
X [r]_-σ R (L X) L X [r]_-Lσ L(R (L X)) [r]_-τ L X
by extensivity of . Then the two squares on the right above obviously commute, and the bottom composite is the identity. Moreover, <cit.> implies that both squares are pullbacks, so u and L v coincide as subobjects of LX.
Combining Lemma <ref> and Proposition <ref> we obtain the following.
Assume that L → has a finite-coproduct preserving right adjoint. If every object in is a finite coproduct of connected objects then so is the case in .
§ DECIDABLE OBJECTS
Let be an extensive category with finite products.
In particular, has a terminal object 1.
An object X is called subterminal if the unique map X → 1 is monic.
For any object X in , the following are equivalent.
* The object X is subterminal.
* The diagonal Δ X → X× X is an isomorphism.
* The projections _0, _1 X× X → X are equal.
The first item implies the second because for any monomorphism X → 1 the following diagram
X [d]_-id[r]^-id X [d]^-!
X [r]_-! 1
is a pullback.
The second item implies the third because any map has at most one inverse.
To prove that the third item implies the first, let f, g Y → X. Then there exists a unique map fg Y → X × X such that _0 fg = f and _1 fg = g.
So f = _0 fg = _1 fg = g.
That is, for any object Y there is a unique map Y → X.
This means that the unique map X → 1 is monic.
We stress that extensivity plays no rôle in Lemma <ref>, which is a general fact about categories with finite products.
An object X in is decidable if the diagonal Δ X → X × X is complemented.
Lemma <ref> shows that subterminal objects in are decidable, and that they may be characterised as those decidable objects X such that the diagonal Δ X → X × X not only is complemented, but is actually an isomorphism.
The full subcategory of decidable objects will be denoted by →.
If is lextensive (i.e. extensive and with finite limits) it follows from <cit.> that is lextensive and that the inclusion → preserves finite limits, finite coproducts and that it is closed under subobjects. Moreover, for any X, Y in , X + Y is decidable if, and only if, both X and Y are decidable.
On the other hand, arbitrary coproducts of decidable objects need not be decidable—consider, for instance, an infinite copower of the terminal object in or .
For any object X in the following are equivalent:
* X is subterminal and connected.
* X is decidable and X × X is connected.
If X is subterminal and connected then Δ X → X× X is an isomorphism by Lemma <ref>.
So X is decidable and X× X is as connected as X.
For the converse assume that X is decidable and that X × X is connected.
Decidability means that the subobject Δ X → X × X is complemented; as X × X is connected, X is initial or Δ X → X × X is an isomorphism by Lemma <ref>. But X is not initial (because X× X is connected) so Δ X → X × X is an isomorphism. Then X is as connected as X× X, and X is subterminal by Lemma <ref>.
Let be another extensive category with finite products and let L → preserve finite products and finite coproducts.
Assume that L reflects 0 and that
1 is connected in . Then the following hold for every X in .
* If L X = 1 then X is connected.
* If X in is decidable and L X = 1 then X is subterminal.
The functor L reflects 0 so it reflects connected objects by Lemma <ref>.
As 1 is connected in by hypothesis, L X = 1 implies X connected.
If L X = 1 then L (X × X) = L X × L X = 1.
So X × X is connected by the first item.
Therefore X is subterminal by Proposition <ref>.
It easily follows from the definition of decidable object that L preserves decidable objects. In more detail, the preservation properties of L imply that the left-bottom composite below
[d] @.>[r] [d]
[r]_-L
factors uniquely through the right inclusion and, moreover, → preserves finite products and finite coproducts.
In fact, → preserves all the finite limits that L preserves (because the subcategories of decidable objects are closed under finite limits).
Additionally assume from now on that L → has a finite-coproduct preserving right adjoint R →.
Notice that under the present hypotheses both L and R preserve finite products and finite coproducts.
It follows that the adjunction L⊣ R restricts to one between and .
If every decidable object in is a finite coproduct of connected objects then so is the case in .
The adjunction L ⊣ R → restricts to one L' ⊣ R' →,
and every object in is a finite coproduct of connected objects by hypothesis.
So we may apply Corollary <ref> to L'→
Because is lextensive, there exists an essentially unique coproduct preserving functor → that also preserves the terminal object.
The functor sends a finite set I to the copower I· 1 in .
The categories , , and other examples have the property that this functor → coincides with →. Notice that if this condition holds then 1 is connected in , because = → is closed under subobjects and preserves 1.
If the canonical functor → coincides with → then every decidable object in is a finite coproduct of connected subterminals.
By Corollary <ref> every decidable object in is a finite coproduct of connected objects. So it is enough to prove that every connected decidable object in is subterminal. For this, let X be connected and decidable.
Then L X is decidable, because L preserves finite products and finite coproducts, and it is connected by Lemma <ref> and Proposition <ref>.
By hypothesis, the canonical → coincides with → so L X = 1.
Hence X is decidable and L X = 1. Therefore X is subterminal by Lemma <ref>.
For a lextensive category we have considered several conditions.
* Every decidable object is a finite coproduct of connected objects.
* Every decidable object is a finite coproduct of connected subterminals.
* The canonical functor → coincides with the inclusion →.
For a field K, (K/) satisfies the first condition but not the second. The categories and satisfy the third condition.
The third condition implies the second which, in turn, implies the first.
Proposition <ref> shows that for certain adjunctions L ⊣ R →, if satisfies the third condition then satisfies the second. This will be used to prove that satisfies the second condition (Theorem <ref>).
§ THE COEXTENSIVE CATEGORY OF MV-ALGEBRAS
For background on MV-algebras we refer to the standard textbooks <cit.>, of which we also follow the notation.
In this section we show that is coextensive by proving that products are codisjoint and couniversal (Proposition <ref>).
Let be a regular category with finite colimits.
If 0 → 1 is a regular epimorphism then products are codisjoint.
Let A be an object in .
As the composite 0 → A → 1 is a regular epimorphism by hypothesis, so is A → 1 by regularity of .
That is, not only 0 → 1 but actually any A → 1 is a regular epimorphism.
As every regular epimorphism is the coequalizer of its kernel pair, A → 1 is the coequalizer of the two projections A × A → A.
Also, as products of regular epimorphisms are epimorphisms, the product of id A → A and B → 1 is a regular epimorphism A × B → A × 1. That is, the projection A × B → A is a regular epimorphism.
To complete the proof we recall a basic fact about colimits:
for a commutative diagram as on the left below
E [d]_-e [r]<+1ex>^-e_0[r]<-1ex>_-e_1 D [d]_-d [r] B [d]
(A× A) × B [d]_-_0[rr]<+1ex>^-_0 × B[rr]<-1ex>_-_1 × B A× B [d]_-_0[r]^-_1 B [d]
F [r]<+1ex>^-f_0[r]<-1ex>_-f_1 A [r] Q
A× A [rr]<+1ex>^-_0[rr]<-1ex>_-_1 A [r] 1
such that d e_i = f_i e for i ∈{0, 1}, the top and bottom forks are coequalizers and e is epic, the inner right square is a pushout. Applying this observation to the diagram on the right above we obtain that the inner right square in that diagram is a pushout.
In particular, if is the category of models for an algebraic theory with at least one constant then the initial object 0 is non-empty and so 0 → 1 is a regular epimorphism. This is the case, of course, for =.
In , couniversality of products is entailed by the intimate relationship between idempotents and product decompositions. The situation for is analogous. An element b of an MV-algebra A is called Boolean if it satisfies one of the following equivalent conditions (see <cit.>):
b⊕ b=b
b⊙ b=b
b∨ b=1
b∧ b=0.
For x∈ A we let A → A[x^-1] be the quotient map induced by the congruence on A generated by the pair (x,1).
For any f A → B in the following diagram is a pushout
A [d]_-f[r] A[x^-1] [d]
B [r] B[(f x)^-1]
where the right vertical map is the unique one making the square commute.
Standard, using the universal property of the (horizontal) quotient homomorphisms.
For any MV-algebra A and every Boolean element x∈ A, let ⟨ x ⟩ be the ideal of A generated by { x}. Then the quotient q A→ A/⟨ x⟩ has the universal property of A → A[x^-1].
If k A → B is such that k x = 1 then x ∈k, so ⟨ x⟩k. By the universal property of quotients there is exactly one homomorphism c A/⟨ x⟩→ C such that cq=k.
In , the diagram
D [l]^-q_0 C [r]_-q_1 E
is a product precisely when there exists a Boolean element x∈ C such that q_0 has the universal property of C → C[( x)^-1] and q_1 has the universal property of C → C[x^-1].
When this is the case, the element x∈ C with the foregoing property is unique.
Assume the diagram is a product. Then there is a unique x∈ C such that q_ix=i, i=0,1. This x is Boolean because 0 and 1 are. Hence x is Boolean too, and thus ⊕-idempotent; therefore, ⟨ x ⟩={c∈ C| c ≤ x}. If c≤ x then q_1c≤ q_1( x)=0, so q_1c=0 and c∈q_1. If c∈q_1 then q_1c=0≤ q_1( x) and q_0c≤ 1=q_0( x), so c≤ x by the definition of product order. We conclude q_1=⟨ x ⟩. The projection q_1 is surjective so Lemma <ref> entails that q_1 has the universal property of C → C[x^-1].
An entirely similar argument applies to q_0.
Conversely, assume q_0 and q_1 have the universal properties in the statement.
By Lemma <ref> we may identify q_0 with C → C/⟨ x⟩ and q_1 with C → C/⟨ x⟩. So it is enough to show that the canonical C → C/⟨ x⟩× C/⟨ x⟩ is bijective.
Injectivity follows because if c≤ x, x then c≤ x∧ x=0, so ⟨ x⟩∩⟨ x⟩ = 0.
To prove surjectivity, let (q_0 c_0 , q_1 c_1) ∈ C/⟨ x⟩× C/⟨ x⟩ with c_0, c_1 ∈ C and consider
c = (c_0 ∧ x) ∨ (c_1 ∧ x) ∈ C. It is easy to check that C → C/⟨ x⟩× C/⟨ x⟩ sends c in the domain to (q_0 c_0 , q_1 c_1) in the codomain.
The content of Lemma <ref> is far from new, cf. e.g. <cit.> and <cit.>. However, having expressed that content in the form that is most suitable for the sequel, we have included a proof for the reader's convenience.
is coextensive.
Any algebraic category is complete and cocomplete, so in particular it has finite products and pushouts.
We appeal to the characterization of extensive categories in Proposition <ref>.
Codisjointness of products follows from Lemma <ref> or from a direct calculation observing that the projections of a product A × B send (0, 1) to 0 and 1 respectively, so 0 = 1 must hold in the pushout.
It remains to show that products are couniversal.
So we consider the pushout of a product diagram as below
A [d]_-h [l]_- pr_0 A× B [d]^-f[r]^- pr_1 B [d]^-k
D [l]^-q_0 C [r]_-q_1 E
and prove that the bottom span is product diagram.
Indeed, observe that the Boolean element (0, 1) ∈ A× B is sent to the Boolean element xf(1, 0) ∈ C so, by Lemma <ref>, it is enough to check that q_0 inverts x and q_1 inverts x;
but this follows from Lemma <ref>.
Although it was not necessary to prove the main result of this section, it seems worthwhile to observe that, in the context of algebraic categories, Lemma <ref> may be strengthened to a characterisation.
In any algebraic category, binary products are codisjoint if, and only if, the initial algebra has non-empty underlying set.
If the initial algebra 0 is not empty then the unique map 0 → 1 is a regular epimorphism so we can apply
Lemma <ref>.
For the converse implication notice that the following square
0 × 0 [d] [r] 0 [d]
0[r] 1
is a pushout by hypothesis. As any of the projections 0× 0 → 0 is split epic, its pushout 0 → 1 is a regular epimorphism, so 0 must be non-empty.
§ SUBTERMINALS IN , AND RATIONAL ALGEBRAS
The aim of this section is to characterize subterminal objects in .
Perhaps unexpectedly, the following fact will play an important rôle.
Monomorphisms in are stable under pushout.
It is well known <cit.> that, in algebraic categories, stability of monomorphisms under pushout is equivalent to the conjunction of the Amalgamation Property (AP) and of the Congruence Extension Property (CEP).
Pierce proved the AP for Abelian lattice-groups in <cit.>, and Mundici <cit.> observed that Pierce's result transfers through the functor Γ to MV-algebras. For a different proof of the AP for Abelian lattice-groups and MV-algebras, see <cit.>. The CEP for MV-algebras was proved in <cit.>; for an alternative proof, see <cit.>. For yet another proof in the more general context of residuated lattices, see <cit.>.
Most of the work will be done on the algebraic side, so it is convenient to start with an arbitrary
category with finite coproducts whose initial object is denoted 0.
As suggested above, we concentrate on the objects A such that the unique map 0 → A is epic. Notice that such an object is exactly a subterminal object in , but we prefer to avoid introducing new terminology such as `cosubterminal' or `supra-initial'.
For convenience we state here the dual of Lemma <ref>.
For any object A in , the following are equivalent:
* The map 0 → A is epic.
* The codiagonal ∇ A + A → A is an isomorphism.
* The coproduct injections in_0 , in_1 A → A + A are equal.
We shall also need a simple auxiliary fact.
Let 0→ A be epic and m B → A be a map.
If the coproduct map m + m B + B → A + A is monic then 0 → B is epic.
The following square commutes
B + B [d]_-m + m[r]^-∇ B [d]^-m
A + A [r]_-∇ A
by naturality of the codiagonal. The bottom map is an isomorphism by Lemma <ref>, and the left vertical map is monic by hypothesis. So the top map is also monic, as well as split epic.
Assume from now on that has finite colimits and that monomorphisms are stable under pushout. We stress that this stability property is quite restrictive. For instance, it does not hold in . On the other hand, we already know that it holds in by Lemma <ref>.
The map 0 → A is epic
if, and only if, for every monomorphism B → A, 0 → B is epic.
One direction is trivial and does not need stability of monomorphisms.
For the converse observe that, as monomorphisms are stable under pushout, finite coproducts of monomorphisms are monic.
So we can apply Lemma <ref>.
The following is a further auxiliary fact.
For any d A → D and e B → A in , if e is epic and the composite d e B → D is monic then d is an monic.
The right square below is trivially a pushout and, since e B → A is epic, the left square is also a pushout
B [d]_-e[r]^-e A [d]^-id[r]^-d D [d]^-id
A [r]_-id A [r]_-d D
so the rectangle is a pushout too. As the top composite is monic, and these are are stable under pushout by hypothesis, the bottom map is monic.
We emphasise the next particular case of Lemma <ref>.
Let d A → D be a regular epimorphism in .
If 0 → A is epic and 0 → D is monic then d is an isomorphism.
Assume now that our category with finite colimits and stable monomorphisms has a terminal object 1 such that for any object A in the unique A → 1 is a regular epimorphism.
This is common in algebraic categories.
A quotient of A in is an equivalence class of regular epimorphisms with domain A, where two such are equivalent if they are isomorphic as objects of A/.
An object A is simple if it has exactly two quotients, namely, those represented by A → 1 and id A → A.
So, if is an algebraic category, then an object is simple if and only if it has exactly two congruences.
To motivate the hypotheses of the following lemma observe that for every object A in , A is terminal or 0 → A is monic.
Similarly for and for K/ with K a field. In contrast, that is not the case in .
If for every object D of , D is terminal or 0 → D is monic, then for every epic 0 → A the following hold.
* A is simple or terminal.
* If m B → A is monic then B + B is simple or terminal.
To prove the first item let d A → D be a regular epimorphism. Then D is terminal or 0 → D is monic by hypothesis.
If 0 → D is monic then d is an isomorphism by Lemma <ref>.
So the only possible quotients of A are A → 1 or id A → A. So A is terminal or simple.
To prove the second item first recall that epimorphisms are closed under coproduct.
Then recall that, as monomorphisms are stable by hypotheses, they are closed under finite coproducts.
Therefore, m + m B + B → A + A is a monomorphism
and 0 = 0 + 0 → A + A is epic.
So, by Lemma <ref>, 0→ B + B is also epic. The first item implies that B + B is simple or terminal.
The material in this section applies to the case =, so we may now prove our first MV-algebraic result. For the proof we require a standard fact from the theory of MV-algebras and lattice-groups, which will also find further application later in this paper.
An ideal of the MV-algebra A is maximal if it is proper, and inclusion-maximal amongst proper ideals of A; equivalently, the quotient A/ is a simple algebra.
For every MV-algebra A, and for every maximal ideal of A, there is exactly one homomorphism of MV-algebras
_A⟶ [0,1],
and this homomorphism is injective.
In connection with the result that follows, let us explicitly recall that the initial object 0 in is the two-element Boolean algebra {0,1}.
For any MV-algebra A the following are equivalent.
* A is a subalgebra of [0,1]∩.
* A is non-trivial and the unique map 0 → A is epic.
* The unique map 0 → A is monic and epic.
* A is simple and 0 → A is epic.
If A ⊆ [0,1]∩ then A is certainly non-trivial, and <cit.> shows that the coproduct inclusions
in_0, in_1 A → A + A are equal.
So 0 → A is epic by Lemma <ref>.
The second and third items are clearly equivalent, and they imply the fourth by Lemma <ref>.
Finally, assume that A is simple and that 0 → A is epic.
By Hölder's Theorem (Lemma <ref>) together with simplicity, there is exactly one monomorphism A→ [0,1].
Now let r ∈ A and write ι A_r → A for the subalgebra of A generated by r.
As A_r is not trivial (and 0 → A is epic) Lemma <ref> implies that A_r + A_r is simple. Hence, by the computation in <cit.>, r must be rational.
§ THE Π_0 FUNCTOR FOR TOPOLOGICAL SPACES
In this section we show that the full inclusion → of the category of Stone spaces into that of compact Hausdorff spaces has a left adjoint π_0→ that preserves set-indexed products. The result just stated may be concisely referenced as follows. That the inclusion at hand is reflective is well known and flows readily from the universal property of the quotient topology. As shown in <cit.>, the reflection has “stable units”; we need not discuss this property here, except to recall that it easily implies that the left adjoint π_0 preserves finite products. Since Gabriel and Ulmer in <cit.> show that π_0 preserves cofiltered limits, π_0 preserves all products.[We are grateful to Luca Reggio and to Dirk Hofmann for pointing out to us, respectively, the relevance of <cit.> and of <cit.>.]
We give here a different proof that emphasises the key rôle of totally disconnected spaces in the general case. We first obtain a product-preserving left adjoint to the full inclusion of the category of totally disconnected topological spaces into .
We then show how to restrict this left adjoint to the categories of interest to us in the present paper.
The result just stated may be efficiently referenced as follows.
As pointed out to us by Luca Reggio, reflectivity of the inclusion → is discussed in <cit.> together with the fact that the reflection has stable units, so that the left adjoint preserves finite products. (Reggio also indicated that reflectivity is discussed in <cit.> as a consequence of the general theory of regular and exact completions.)
Moreover, Dirk Hofmann observed that, since Gabriel and Ulmer in <cit.> show that
the left adjoint π_0→ preserves cofiltered limits, π_0 preserves all products.
We give here a different proof. We first obtain a product-preserving left adjoint to the full inclusion of the category of totally disconnected topological spaces into .
We then show how to restrict this left adjoint to the categories of interest to us in the present paper. We begin by recalling some relevant definitions and facts.
A topological space X is connected if it so in the sense of Definition <ref>. A subset of a space is clopen if it is both closed and open. Then, a space X is connected if and only if it contains exactly two clopen sets, which are then necessarily ∅ and X. Equivalently <cit.>, X is connected if whenever X=A∪ B with A∩ B=∅ and A and B closed subsets of X, then exactly one of A and B is empty. If X is a space and x∈ X, the component of x in X, written C_x (with X understood), is defined as
C_x⋃{C X| x ∈ X and C is connected}⊆ X.
It can be shown that C_x is a connected subspace of X <cit.>, and it therefore is the inclusion-largest such to which x belongs. Also, C_x is closed in X <cit.>. A topological space X is totally disconnected if for each x∈ X we have C_x={x}.
Consider the equivalence relation on X given by
x∼ y if, and only if, C_x=C_y,
and define
π_0XX/∼.
We equip π_0X with the quotient topology, and call it the space of components of X. We write
q X ⟶π_0X
for the quotient map.
For every continuous map f X→ Y between topological spaces there is exactly one map such that the square below commutes.
X [d]_-f[r] [d]^-π_0fπ_0X
Y[r] π_0Y
We first show that f X→ Y preserves the equivalence relation ∼ in (<ref>). Given x,x' ∈ X, suppose x∼ x', so that C_x=C_y C. Since continuous maps preserve connectedness <cit.>, f[C] is a connected subset of Y that contains both fx and fx'. Hence f[C] C_fx∩ C_fx', which entails C_fx=C_fy. This completes the proof that f preserves ∼. Existence and uniqueness of π_0 f follow from the universal property of the quotient X →π_0 X.
Lemma <ref> implies that the assignment
that sends f to π_0f extends to an endofunctor
π_0⟶.
This endofunctor determines the full subcategory , as we now show.
If C ⊆π_0 X is a connected subspace then so is q^-1 [C] ⊆ X.
Let q^-1[C]=F_1∪ F_2 with F_1 and F_2 disjoint closed subsets of X. For any y ∈ C we can write the fibre q^-1[{y}] as C_x for any x∈ q^-1[{y}]. Further, we can express C_x as the disjoint union
C_x=(F_1∩ C_x)∪ (F_2∩ C_x). And C_x is closed and connected, because it is a component. Hence exactly one of q^-1[{y}]=C_x F_1 or q^-1[{y}]=C_x F_2 holds, for each y ∈ C. We can then define
S_i{ y ∈ C| q^-1[{y}] F_i}, i=1,2,
to the effect that C=S_1∪ S_2 and S_1∩ S_2 =∅. By construction we have F_i=q^-1[S_i], i=1,2. The definition of quotient topology then entails that S_i is closed because F_i is. Since C is connected, exactly one of S_1 and S_2 is empty, and hence so is exactly one of F_1 and F_2.
For any space X, the quotient map q X →π_0X in (<ref>) is universal from
X to the full inclusion →.
We first show that π_0 X is totally disconnected.
Let C_y be the component of y ∈π_0 X, with the intent of showing it is a singleton.
By Lemma <ref>, since C_y is connected in π_0 X, so is q^-1[C_y] connected in X. Therefore q^-1[C_y] is contained in the component C_x of any x∈ X with x∈ q^-1[C_y]; and thus, the direct image q[q^-1[C_y]] is contained in q[C_x]={y}. Since q[q^-1[C_y]]=C_y, because q is surjective, we conclude C_y{y}, as was to be shown.
Let f X→ Y be a continuous map, with Y totally disconnected.
We already know from the proof of Lemma <ref> that f preserves ∼ so,
as Y is totally disconnected, x ∼ x' in X implies f x = f x' in Y.
The universal property of the quotient q X →π_0 X implies the existence of a unique g π_0 X → Y such that g q = f.
We conclude that the full inclusion → has a left adjoint that, with no risk of confusion, will again be denoted by π_0 →.
The functor π_0 → preserves all set-indexed products.
Consider a family (X_s | s ∈ S) of spaces in indexed by a set S and let
γπ_0 ∏_s∈ S X_s ⟶∏_s∈ Sπ_0 X_s
be the unique map such that the triangle below commutes
π_0 ( ∏_s∈ S X_s) [rd]_-π_0 _s[r]^-γ ∏_s∈ Sπ_0 X_s [d]^-_s
π_0 X_s
for every s ∈ S.
In other words,
γ ( C( x_s | s∈ S )) = (C x_s | s∈ S) ∈∏_s∈ Sπ_0 X_s
for any ( x_s | s∈ S ) in ∏_s∈ S X_s.
To prove that γ is injective assume that
γ ( q ( x_s | s∈ S )) =γ ( q ( y_s | s∈ S )) in ∏_s∈ Sπ_0 X_s.
That is, q x_s = q y_s in π_0 X_s for every s ∈ S.
By <cit.> we have
q ( x_s | s∈ S ) = q ( y_s | s∈ S ) in π_0 ( ∏_s∈ S X_s), so γ is injective.
To prove that γ is surjective observe that the following diagram commutes
[ld]_-q∏_s∈ S X_s [d]^-∏_s∈ S q[r]^-_s X_s [d]^-q
π_0 ( ∏_s∈ S X_s) [r]_-γ ∏_s∈ Sπ_0 X_s [r]_-_s π_0 X_s
for every s∈ S, so the inner triangle commutes.
As products of surjections are surjective, the inner vertical map is surjective and hence so is γ, the bottom map of the triangle.
We next identify a related construction which will provide a useful alternative description of π_0 when restricted to .
Let us write (X,) for the set of continuous maps from the space X to the discrete two-point space {0,1}. There is a canonical continuous function
E = ⟨ f| f ∈(X,)⟩ X⟶^(X,),
x⟼ ( f x | f∈(X,) ).
For any subset S X, write χ_S X→ for the characteristic function defined by χ_S x=1 if, and only if, x∈ S.
Then S is clopen precisely when χ_S∈(X,). Thus, E in (<ref>)
can equivalently be described as the function that sends each point x ∈ X to the set of clopen subsets of X that contain x.
In order to prove the next lemma recall <cit.> that the quasi-component of x ∈ X is defined as
C_x⋂{S X| S is clopen, and x ∈ S}.
It is clear that the quasi-components of a space X partition X into closed non-empty sets.
The relation between E and quasi-components may be stated as follows.
For any x, x' ∈ X, E x = E x' if and only if C_x = C_x'.
If E x = E x' then clearly C_x = C_x'.
For the converse assume that C_x = C_x' and let S ⊆ X be a clopen containing x. Then x' ∈C_x' = C_x ⊆ S.
That is, x' ∈ S.
The reader should beware that the quasi-component C_x of x∈ X in general fails to be connected. Indeed, the inclusion C_xC_x always holds for each x∈ X <cit.>, and may be proper <cit.>. However:
For any X there exists a unique E' π_0 X →^(X,) such that the following diagram
X @(d,l)[rd]_-E[r]^-q π_0 X [d]^-E'
^(X,)
commutes.
Let x, x' ∈ X be such that x ∼ x'; that is, C_x = C_x'.
Then
x ∈ C_x ∩ C_x'⊆C_x ∩C_x'
so, as quasi-components are equal or disjoint, C_x = C_x'.
That is, E x = E x' by Lemma <ref>.
Let
X [r]^-D π'_0 X [r]^-m ^(X,)
be the epi/regular-mono factorization of the canonical map E in (<ref>). Then the following square commutes
X [d]_-D[r]^-q [d] π_0X @.>[ld]|-c[d]^-E'
π'_0X[r]_-m ^(X,)
by Lemma <ref> and, as q is regular-epi and m is monic, there is exactly one continuous map cπ_0(X)→π'_0(X) making the inner-triangles commute.
Since D is epic, so is c.
Also, since m is a regular mono, π_0'X carries the subspace topology inherited from the product ^(X,) and, as the latter is a Stone space, π_0'X is Hausdorff.
If X is compact Hausdorff then c π_0 X →π_0' X is an isomorphism and these isomorphic spaces are Stone spaces.
First recall <cit.> that, in any compact Hausdorff space X, the equality C_x=C_x holds for each x∈ X.
In other words, in this case, the function π_0 X →π_0' X is bijective.
Also, since X is compact, so is π_0 X because q is surjective.
Hence, as we already know that π_0' X is Hausdorff, the Closed Map Lemma implies that c is an isomorphism.
Similarly, compactness of X implies compactness of π_0' X and hence, the Closed Map Lemma implies that m is closed. Therefore, π_0'X is a closed subspace of the Stone space ^(X,).
It is classical that each Stone space is totally disconnected, so there is a full inclusion → such that the following diagram
[d] [r] [d]
[r]
commutes. Lemma <ref> implies that the composite
[r] [r]^-π_0
factors through the full inclusion →.
The factorization will be conveniently denoted by π_0 →.
The functor π_0→ is left adjoint to the full inclusion →, and preserves all set-indexed products.
Since, as observed above, π_0→ restricts to π_0→, the fact that the former is a left adjoint to → (Lemma <ref>) restricts to the fact that π_0→ is left adjoint to →.
It is standard that products in and in agree with products in (using, in particular, Tychonoff's Theorem that any product of compact spaces is compact), so Proposition <ref> entails that π_0→ preserves all set-indexed products.
§ SPECTRA OF MV-ALGEBRAS
In this section we recall the material about spectra of MV-algebras that is needed in the sequel.
Recall that an ideal of an MV-algebra A is prime if it is proper, and the quotient A/ is totally ordered. The (prime) spectrum of an MV-algebra A is
A{⊆ A| is a prime ideal of A}
topologised into the spectral space of A, as follows. For a subset S A, define
(S) {∈A| S⊆},
(S) A∖(S)={∈A| S⊈}.
The set (S) is called the vanishing locus, or zero set, of S, while (S) is called its support. If a ∈ A, write (a) as a shorthand for ({a}), and similarly for (a). Then the collection
{(I)| I is an ideal of A}
is the set of closed sets for a topology on A that makes the latter a spectral space in the sense of Hochster <cit.>. The collection
{(a)| a∈ A}
is a basis of compact open sets for this topology; see <cit.> and <cit.>. The topology is variously known as the Stone, Zariski, or hull-kernel topology of A.
The assignment A ↦A extends to a functor →, because inverse images of primes ideals along homomorphisms are prime. Althouh it is common to take the codomain of as the category of spectral spaces and spectral maps, for our purposes in this paper it is expedient to regard as taking values in .
The maximal spectrum of an MV-algebra A is
A{⊆ A| is a maximal ideal of A}.
We have AA, or equivalently, any simple MV-algebra is totally ordered (see e.g. <cit.>).
The maximal spectral space of A is the set A equipped with the subspace topology it inherits from A. Then A is a compact Hausdorff space <cit.>, and every compact Hausdorff space arises in this manner from some MV-algebra A <cit.>.
The standard example of MV-algebra, the interval
[0,1] equipped with the constant 0 and the operations ⊕, , generalises as follows. If X is any set, the collection [0,1]^X of all functions from X to [0,1] inherits the structure of an MV-algebra upon defining operations pointwise. If, additionally, X is a topological space, since ⊕ [0,1]^2→ [0,1], [0,1]→[0,1], and 0 are continuous with respect to the Euclidean topology of [0,1], the subset
(X){f X→ [0,1]| f is continuous}
is a subalgebra of the MV-algebra [0,1]^X. We shall describe a natural MV-homomorphism η_A A ⟶(A), for each MV-algebra A. Its existence descends from Hölder's Theorem (Lemma <ref>), which allows us to define a close analogue to the Gelfand transform in functional analysis. Indeed, in light of that result, to a∈ A and ∈A we associate the real number _(a / )∈ [0,1], obtaining the function
aA ⟶ [0,1]
⟼ h_(a).
It can be shown <cit.> that the function (<ref>) is continuous with respect to the Stone topology of A and the Euclidean topology of [0,1].
We thereby arrive at the announced homomorphism
η_A A ⟶(A)
a ⟼a
for each MV-algebra A.
For any MV-homomorphism h A→ B and any ∈B we have h^-1()∈A. Moreover, the inverse-image map h^-1B→A is continuous with respect to the Stone topology.
The first assertion is proved in <cit.>. The second assertion is a straightforward verification using the definition of Stone topology.
In light of Lemma <ref> we henceforth regard as a functor:
⟶^ op,
where denotes the category of compact Hausdorff spaces and their continuous maps.
Given a continuous map f X → Y in , it is elementary that the induced function
(f)(Y) ⟶(X),
g∈(Y) ⟼ g∘ f ∈(X)
is a morphism in . We therefore regard as a functor:
^ op⟶.
There is an adjunction
⊣^ op→
known as the Cignoli-Dubuc-Mundici adjunction <cit.>; see <cit.> for further references and details not mentioned below.
Dually to (<ref>), for any space X in there is a continuous map
ϵ_X X ⟶(X)
x ⟼{f∈(X)| f(x)=0},
and it is a standard fact that ϵ_X is a homeomorphism. (Compare <cit.>.)
Writing 𝕀 C for the identity functor on a category C, we can summarise the adjunction as follows.
The functor is left adjoint to the fully faithful , i.e. ⊣^ op→. The unit and the counit of the adjunction are the natural transformations η𝕀→ and ϵ→𝕀^ op whose components are given by (<ref>) and (<ref>), respectively.
§ THE PIERCE FUNCTOR PRESERVES COPRODUCTS
The category of Boolean algebras may be identified with the domain of the full subcategory → determined by the MV-algebras whose operation ⊕ is idempotent. It is then clear that → is a variety so, in particular, it has a left adjoint.
It also has a right adjoint that we now describe.
We write A for the collection of all Boolean elements of the MV-algebra A. By <cit.>, A is the largest subalgebra of A that is a Boolean algebra. A homomorphism h A→ B preserves Boolean elements, because the latter are defined by equational conditions. Therefore, h induces by restriction a function hA→B that is evidently a homomorphism of Boolean algebras. We thus obtain a functor
⟶
from the category of MV-algebras to that of Boolean algebras; we call it the Pierce functor because of the close analogy with the theory developed in <cit.> for rings.
The functor is right adjoint to the functor .
This is a direct consequence of the fact that A is the largest Boolean subalgebra of A, for any MV-algebra A.
The proof of Proposition <ref>—in particular, Lemma <ref>—makes it clear that → is essentially the `complemented subobjects' functor determined by the extensive category .
We now embark on the proof of the central fact that → preserves coproducts. Our aim is to reduce the problem to a situation where we can apply the topological results in Section <ref>.
For any MV-algebra A and any element a∈ A, a is Boolean if, and only if, for each prime ideal of A, we have a/∈{0,1} A/.
Let C be any totally ordered MV-algebra. For x∈ C, either x≤ x or x ≤ x. If the former holds then x∧ x=x, so that if x is Boolean then x=0. If the latter holds then x∨ x=x, and thus x=1 if x is Boolean. In summary, if x∈ C is Boolean then x∈{0,1}. The converse implication is clear. Summing up, the Boolean elements of C are precisely 0 and 1.
Boolean elements, being definable by equational conditions, are preserved by homomorphisms. Hence if a is Boolean then a/∈ A/ is Boolean, and therefore, since A/ is totally ordered, a/∈{0,1} by the argument in the preceding paragraph. This proves the left-to-right implication in the statement of the lemma.
For the converse implication, we recall that in any MV-algebra A we have ⋂A={0} <cit.>. Hence, the function ι A ⟶∏_∈A A/ defined by a ∈ A ⟼ (a/)_∈∈∏_∈A A/ is an injective homomorphism. Assume that for each ∈A we have a/∈{0,1}. Since operations in ∏_∈A A/ are computed pointwise, we infer ι(a)∨ι(a)= (a/)_∈∨ (a/)_∈=1, and therefore, since ι is an isomorphism onto its range, a∨ a=1. This completes the proof.
Let A be an MV-algebra, and suppose there exist possibly empty closed subsets X_0,X_1A with A=X_0∪ X_1 and X_0∩ X_1=∅. Then there exists exactly one Boolean element b∈ A such that b/=0 for each ∈ X_0 and b/=1 for each ∈ X_1.
By <cit.>, there is exactly one ideal I_i of A such that (I_i)=X_i, i=0,1. Consider the elements 0,1∈ A. The fact that A is partitioned into X_0 and X_i entails I_0∨ I_1=A and I_0∩ I_1={0}, so that the Chinese Remainder Theorem <cit.> applied to 0 and X_0, and to 1 and X_1, yields one element b∈ A such that b/I_0=0 and b/I_1=1. Using the Third Isomorphism Theorem, the latter conditions imply b/∈{0,1} for each ∈A so that by Lemma <ref> we conclude that b is Boolean. If b'∈ A also satisfies b'/=0 for each ∈ X_0 and b'/=1 for each ∈ X_1, then b/=b'/ for ∈A, so that b=b' because ⋂A={0} <cit.>.
We record a corollary that will have further use in the paper. It is the exact analogue for MV-algebras of a standard result for the category , see e.g. <cit.>. In order to state it, let us write X for the Boolean algebra of clopen sets of any topological space X. Let us then observe that the uniqueness assertion about the Boolean element b in Lemma <ref> allows us to define, for any MV-algebra A, a function
χ_A(A)⟶A
that assigns to each X_0∈(A) the unique element b∈A with the properties stated in that lemma with respect to X_0 and X_1A∖ X_0. It is then elementary to verify that χ_A is a homomorphism of Boolean algebras.
For any MV-algebra A, the function
ϕ_AA ⟶(A)
that sends b∈A to (b)(A) is an isomorphism of Boolean algebras whose inverse is the homomorphism χ_A in (<ref>). In particular, A is indecomposable if, and only if, A is connected.
By Lemma <ref> it is clear that (b) for each b∈A is clopen and that ϕ_A is a homomorphism. Let us consider b'χ_Aϕ_Ab. For each ∈(b) we have b/=0 by definition of , and b'/=0 by the defining property of b'. Similarly, for each ∈A∖(A) we have b/=b'/=0. Thus, b and b' agree at each prime and thus b=b' because ⋂A={0} <cit.>. Conversely, for X_0∈(A), consider the clopen ϕ_Aχ_AX_0. For ∈A, by definition of χ_A we have ∈ X_0 if, and only if, (χ_AX_0)/=0. Hence
ϕ_A (χ_A X_0)=X_0, and the proof is complete.
The radical of A is the ideal
A⋂A.
In accordance with standard terminology in general algebra, one says A is semisimple precisely when A={0}.
We note in passing that, unless A is semisimple, the statement in Lemma <ref> cannot be strenghtened to “a is Boolean if, and only if, for each ∈A we have a/∈{0,1} A/”.
Let A be an MV-algebra, and suppose there exist possibly empty closed subsets X_0,X_1A with A=X_0∪ X_1 and X_0∩ X_1=∅. Then there exists exactly one Boolean element b∈ A such that b/=0 for each ∈ X_0 and b/=1 for each ∈ X_1.
By <cit.>, each ∈A is contained in exactly one λ∈A, so that we can define a function
λA ⟶A,
⟼λ.
By <cit.>, this function is continuous, and it is a retraction for the inclusion AA. Therefore, X'_0λ^-1[X_0] and X'_1λ^-1[X_1] are closed subsets of A satisfying A=X'_0∪ X'_1 and X'_0∩ X'_1=∅. Now Lemma <ref> provides a unique Boolean element b such that b/=0 for each ∈ X_0', and b/=1 for each ∈ X_1'. As X_i X_i', i=0,1, b satisfies the condition in the statement. Concerning uniqueness, suppose a is a Boolean element of A such that a/=0 for each ∈ X_0, and a/=1 for each ∈ X_1. We claim a=b. Indeed, let ∈ X_i', i=0,1. Then a/λ=i because λ∈ X_i. The inclusion λ induces a quotient map q A/→ A/λ. By Lemma <ref> we have a/∈{0,1}. Also, A/λ is nontrivial. Therefore since q(a/)=a/λ=i it follows that a/=i. By the uniqueness assertion in Lemma <ref> we conclude a=b.
We observe that the analogue of Lemma <ref> about coproduct decompositions of A being indexed by idempotent elements does not hold in general for rings. Indeed, spectra of MV-algebras always are completely normal—which affords the existence of the map λ used in the proof above—whereas spectra of rings are not, in general.
For more on the important rôle that the continuous retraction λ in (<ref>) plays in the theory of lattice-groups and MV-algebras, see <cit.> and the references therein.
Our next objective is to show that sends the unit η of ⊣ in (<ref>) to an isomorphism.
For any MV-algebra A, the morphism η_A A→ ()A is an isomorphism.
Let b'∈(A) be Boolean, with the aim of exhibiting b∈A such that η_A(b)=b'. Evaluating the defining equality b'⊕ b'=b' at each ∈A we see that b'()∈{0,1} holds. Therefore, the two closed subsets X_0 b'^-1[{0}] and X_1 b'^-1[{1}] of A satisfy the hypotheses of Lemma <ref>. We conclude that there exists one Boolean element b∈ A with b/=0 for ∈ X_0 and b/=1 for ∈ X_1. By the definition of η_A this entails at once η_A(b)=b', so η_A is surjective. By the uniqueness statement in Lemma <ref>, η_A is also injective.
Our next step will be to factor into a manner that is useful to our purposes.
Lemma <ref> implies that the functors → in the diagram below
[d]_-[r]^-
^ op[r]_- [u]_-
are naturally isomorphic.
The functor ^ op→ preserves all set-indexed coproducts.
Using Stone duality, it is an exercise to verify that the composite functor ^ op→ induces, by taking opposite categories on each side, a functor naturally isomorphic to the functor π_0→ of Section <ref>. The lemma then follows from Theorem <ref>.
We finally obtain the main result of this section.
The Pierce functor → preserves all set-indexed coproducts.
As we saw above, the triangle (<ref>) commutes up to a natural isomorphism. Further, preserves arbitrary set-indexed colimits because it is left adjoint by Theorem <ref>; and
preserves set-indexed coproducts by Lemma <ref>. Hence preserves set-indexed coproducts.
§ MAIN RESULT, AND FINAL REMARKS
Let be a coextensive category.
Recall from the introduction that an object A in is separable if A is decidable as an object in the extensive . Thus, A is separable if, and only if, there is a morphism f A + A → A such that the span
A [l]_-∇ A + A [r]^-f A
is a product diagram.
Separable MV-algebras coincide with finite products of subalgebras of [0,1]∩.
By Theorem <ref> we have an reflection π_0 ⊣→ such that both adjoints preserve finite products and finite coproducts, so Proposition <ref> implies that every decidable object in is a finite coproduct of subterminal objects. Theorem <ref> completes the proof.
We conclude the paper with some final remarks that point to further research aimed at developing an ‘arithmetic connected-component functor’.
The guiding result from Algebraic Geometry is this: the category of étale schemes over K is reflective as a subcategory of that of locally algebraic schemes over K <cit.>. The left adjoint there is denoted by π_0, and π_0 X is called the k-schéma des composantes connexes de X
in Definition I, 4, 6.6 op. cit. Moreover, it is then proved that π_0 preserves finite coproducts.
In terms of extensive categories, this says that for =, the subcategory → has a finite-product preserving left adjoint.
We announce that the same holds for = _ fp, where _ fp is category of finitely presetable MV-algebras. The proof will be published elsewhere, but it is appropriate to indicate here the rôle of locally finite MV-algebras in connection with that result.
An MV-algebra A is locally finite if each finitely generated subalgebra of A is finite. Finite MV-algebras are evidently locally finite; [0,1]∩ is an example of a locally finite MV-algebra that is not finite. Locally finite MV-algebras were studied in <cit.>; see also <cit.> for a generalisation of the results in
<cit.>, and <cit.> for further material and <cit.> for recent progress on the topic.
The connection with Theorem <ref> is the following characterisation of rational algebras.
For any MV-algebra A the following are equivalent.
* A is simple and locally finite.
* A is a subalgebra of [0,1]∩.
(<ref>)⇒(<ref>). By Hölder's Theorem (Lemma <ref>), since A is simple there is exactly one monomorphism A→ [0,1]; let us therefore identify A with a subalgebra of [0,1]. If A contains an irrational number ρ∈ [0,1] then the subalgebra generated by ρ is infinite. Indeed, the Euclidean algorithm of successive subtractions applied to ρ,1∈ does not terminate (because ρ and 1 are incommensurable) and produces an infinite descending sequence of distinct, non-zero elements of A. Thus, A ⊆ [0,1]∩ by local finiteness.
(<ref>)⇒(<ref>). Any subalgebra of [0,1] evidently has no proper non-trivial ideal, by the Archimedean property of the real numbers, and is therefore simple. If, moreover, A⊆ [0,1]∩, the subgroup of generated by finitely many a_1,…,a_n∈ A together with 1 is discrete, and therefore by <cit.> the subalgebra generated by a_1,…,a_n is a finite chain. Thus A is locally finite.
An MV-algebra A is separable if, and only if, A is locally finite and A is finite.
If A is separable then, by Theorem <ref>, A = ∏_i∈ I A_i with I finite and A_i ⊆ [0,1]∩ for each i ∈ I.
In particular, A is finite.
Also, each A_i is locally finite by Lemma <ref>.
As finite products of locally finite algebras are locally finite, A is locally finite.
Conversely, assume that A is locally finite and A is finite.
Then, A = ∏_i∈ I A_i with I finite and A_i directly indecomposable for each i ∈ I.
As locally finite algebras are closed under quotients, each A_i is locally finite.
Hence, each A_i is locally finite and indecomposable.
But then A must be simple. Indeed, Corollary <ref> entails that A is connected, and A=A by <cit.>. Then the spectral space A is Hausdorff, and thus has a base of clopen sets—hence, being compact, it is a Stone space. Since Stone spaces are totally disconnected, connectedness of A entails that A is a singleton, so A has exactly two ideals, and so is simple.
By Lemma <ref>, A is then a subalgebra of [0,1]∩. Therefore, A is separable by Theorem <ref>.
Now, let → be the full subcategory determined by locally finite MV-algebras.
Let us prove that this subcategory is coreflective.
An element a of an MV-algebra A is of finite order-rank[The terminology we introduce here is best motivated using lattice-groups—please see Appendix <ref>.] if the subalgebra B it generates in A is finite. If B is terminal, we say the order-rank of a is zero. Otherwise, there exists exactly one n∈{1,2,…} such that B=C_1×⋯× C_n with each C_i directly indecomposable and non-terminal, and we then say the order-rank of a is n. We set
A{a ∈ A | a is of finite order-rank}.
Note that AA, because any Boolean algebra is locally finite.
For any MV-algebra A and subset G A, let us write G for the subalgebra of A generated by G. When G={g} we write g for {g}.
Any homomorphism of MV-algebras sends elements of finite order-rank to elements of finite order-rank.
Let h A→ B be a homomorphism and let a ∈A. Since h commutes with operations, a routine argument in general algebra shows that h[Sa]=(ha); since a is finite, so is (ha).
For any MV-algebra A, A is a locally finite subalgebra of A. Further, A is the inclusion-largest locally finite subalgebra of A.
Let F{a_1,…,a_n} A be a finite subset of elements of finite order-rank, n≥ 0 an integer. We need to show that the subalgebra F of A generated by F is finite. Induction on n. If n=0 then ∅ is either the terminal one-element algebra or the initial two-element algebra. Now suppose G{a_1,…, a_n-1} is such that G is finite. The subalgebra a_n is also finite, because a_n is of finite order-rank by hypothesis. The subalgebra F is the least upper bound of G and of a_n in the lattice of subalgebras of A, and therefore can be written as a quotient of the coproduct G+a_n. In more detail, by the universal property of the coproduct, the inclusion maps GF and a_nF induce a unique homomorphism hG+a_n→ A whose regular-epi/mono factorisation h=m q is such that m S→ A exhibits the subobject of A that is the join of the subobjects G and a_n—in particular, S is isomorphic to F. So F is a quotient of the algebra G+a_n. Since finite coproducts of finite MV-algebras are finite by <cit.>, G+a_n is finite and therefore so is F.
To show that A is a subalgebra of A, first note that clearly 0∈A. If a∈A then a lies in the subalgebra generated by a, which is finite; hence a is of finite order-rank. If a,b ∈A, then a⊕ b lies in the subalgebra generated by {a,b}, which is finite by the argument in the preceding paragraph; hence a⊕ b is of finite order-rank.
For the last assertion in the statement, let B be a locally finite subalgebra of A. Given any b ∈ B, the subalgebra generated by b in A is finite, by our assumption about B; hence b is of finite order-rank, and b∈ A. This completes the proof.
Lemmas <ref> and <ref> allow us to regard as a functor
⟶.
The functor ⟶ is right adjoint to the full inclusion ⟶.
This is an immediate consequence of the fact that A is the largest locally finite subalgebra of the MV-algebra A, as proved in Lemma <ref>.
It is proved in <cit.> that has all set-indexed products. This follows at once from Corollary <ref>: indeed, for any set-indexed family {A_i}_i ∈ I of locally finite MV-algebras the product of {A_i}_i ∈ I in is the coreflection (∏_i ∈ IA_i) of the product ∏_i ∈ IA_i in .
We have been unable to prove that → preserves finite products. However, writing for _ fp, we can show that the functor restricts to a left adjoint π_0 → to the inclusion
→ and, moreover, it preserves finite products.
As mentioned, the proof will appear elsewhere.
§ SEPARABLE UNITAL LATTICE-ORDERED ABELIAN GROUPS
For background on lattice-groups we refer to <cit.>. We recall that a lattice-ordered group, or ℓ-group for short, is a group that is also a lattice[In this appendix, lattices are only required to have binary meets and joins, but not top or bottom elements.] such that the group operation distributes over binary meets and joins. We only consider Abelian ℓ-groups, and thus adopt additive notation. The underlying group of an Abelian ℓ-group is torsion-free, and its underlying lattice is distributive. Write for the category of Abelian ℓ-groups and of lattice-group homomorphisms. An element 1∈ G in an Abelian ℓ-group is a (strong order) unit if for each g∈ G there is a natural number n such that n1≥ g. An Abelian ℓ-group G equipped with a distinguished unit 1 is called unital, and denoted (G,1). Write _1 for the category of unital Abelian ℓ-groups and of unit-preserving lattice-group homomorphisms.
There is a functor Γ_1→ that acts on objects by sending (G,1) to its unit interval [0,1]{x∈ G| 0≤ x ≤ 1}, and on morphisms by restriction; here, [0,1] is regarded as an MV-algebra under the operations x⊕ y (x+y)∧ 1, x 1-x, and 0. This functor has an adjoint Ξ→_1, and Mundici proved in <cit.> that Γ and Ξ constitute an equivalence of categories.
The initial object in _1 is (,1), and the terminal object is the trivial unital ℓ-group ({0=1}, 0).
In analogy with the relationship between non-unital and unital rings, the category has a zero object and is not coextensive, while the category _1 is. Separable unital Abelian ℓ-groups are defined as for any coextensive category, cf. the beginning of Section <ref>.
An object G of is Archimedean if whenever nx≤ y holds in G for each positive integer n, then x≤ 0; and an object (G,1) of _1 is called Archimedean if G is. The following characterisations hold: (G,1) is Archimedean precisely when Γ(G,1) is semisimple; and (G,1) is totally ordered and Archimedean precisely when Γ(G,1) is simple. Hölder's Theorem for the category _1 may be stated as follows: Any (G,1) that is Archimedean and totally ordered has exactly one morphism to (,1), and that morphism is monic (equivalently, its underlying function is injective).
Let us say that an object (G,1) of _1 is rational if it is isomorphic to an ordered subgroup
of the additive group containing 1, where the order of G is inherited from the natural order of the rationals. Theorem <ref> may be then formulated for the category _1 as follows.
For any unital Abelian ℓ-group (G,1) the following are equivalent.
* (G,1) is rational.
* (G,1) is non-trivial, and the unique map (,1) → (G,1) is epic.
* The unique map (,1) → (G,1) is monic and epic.
* (G,1) is totally ordered and Archimedean, and the unique map (,1) → (G,1) is epic.
An object (G,1) of _1 is Specker if its unit-interval MV-algebra Γ(G,1) is a Boolean algebra. Write _1 for the full subcategory of _1 on the the Specker objects. The inclusion functor _1→_1 has a right adjoint _1→_1, the Pierce functor for _1, and preserves arbitrary coproducts (Theorem <ref>). Our main result, Theorem <ref>, would be proved for the category _1 using this Pierce functor; it can be phrased as follows.
Separable unital Abelian ℓ-groups coincide with finite products of rational unital Abelian ℓ-groups.
Products in the category are Cartesian products, because is a variety of algebras. On the other hand, while _1 is equivalent to a variety by Mundici's cited theorem, its underlying-set functor is not right adjoint. Indeed, products in _1 are not, in general, Cartesian products. However, finite products in _1 are Cartesian—the product of (G,1) and (H,1) is (G× H, (1,1)) with the Cartesian projections.
An Abelian ℓ-group is called a simplicial group if it is isomorphic in to a free Abelian group of finite rank ^r equipped with the coordinatewise order. A unit in such a simplicial group is then any element 1∈^r whose each coordinate is strictly positive; the pair (^r,1) is called a unital simplicial group. These lattice-groups play a key rôle in the representation theory of dimension groups, see e.g. <cit.>.
An object (G,1) in _1 is a unital simplicial group exactly when its unit-interval MV-algebra Γ(G,1) is finite. An object (G,1) is locally simplicial if each sublattice subgroup generated by finitely many elements along with 1 is a unital simplicial group. An object (G,1) in _1 is locally simplicial exactly when its unit-interval MV-algebra Γ(G,1) is locally finite. Then: An object (G,1) of _1 is separable just when it is locally simplicial, and (G,1) has finite -module rank[In the literature on lattice-groups, the condition that (G,1) has finite rank is expressed in the following traditional manner: the unit of G has finitely many components.] (Corollary <ref>).
Consider the full subcategory of _1 on the locally simplicial objects. Its inclusion functor into _1 has a right adjoint (Corollary <ref>); that is, every (G,1) has an inclusion-largest locally simplicial unital sublattice subgroup. To prove this in the category _1, one would introduce the notion of an element of `finite order-rank' of a unital Abelian ℓ-group. It is this notion that motivates the terminology we adopted in the context of MV-algebras in Section <ref>; by way of conclusion of this appendix, we offer a short discussion.
Let (G,1) be a unital Abelian ℓ-group, let g∈ G, and let H be the sublattice subgroup of G generated by g and by 1. If (H,1) is a unital simplicial group (ℤ^r,1)—equivalently, if the MV-algebra Γ(H,1) is finite—then we call g an element of finite order-rank r. This notion of rank crucially depends on the interplay between the lattice and the group structure, and is not reducible to the linear notion of rank. To explain why, let us preliminarily observe that a simplicial group ℤ^r enjoys the finiteness property that its positive cone (ℤ^r)^+—that is, the monoid of non-negative elements of ℤ^r—is finitely generated as a monoid. Next, let us point out that the underlying group of the Abelian ℓ-group H generated by g and 1 in G is necessarily free: indeed, any finitely generated object of the category of Abelian ℓ-groups has free underlying group, as was proved in <cit.>. The ℤ-module rank of H is at most countably infinite, because H is countable. But even if we assume the rank of H is finite, the unit-interval Γ(H,1) may be infinite, and in that case the lattice order of ℤ^r≅ H cannot be simplicial—and indeed, one can prove that the monoid H^+ cannot be finitely generated. Hence, the condition that the sublattice subgroup H of G generated by g and 1 is simplicial is strictly stronger than the condition that H has finite ℤ-module rank. To illustrate, consider the subgroup H of ℝ generated by an irrational number ρ∈ℝ together with 1; then H≅ℤ^2 as groups, the total order inherited by ℤ^2≅ H from ℝ is palpably not simplicial, the positive cone H^+ can be shown not to be finitely generated by an easy direct argument, and Γ(H,1) is an infinite simple MV-algebra.
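Spelling out this last example, with ρ irrational one has

H = ℤ + ρℤ ⊆ ℝ,  Γ(H,1) = { m + nρ | m,n ∈ ℤ, 0 ≤ m + nρ ≤ 1 },

which is an infinite (indeed dense) subset of [0,1], even though H has ℤ-module rank 2; this routine verification is added here only for illustration.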
|
http://arxiv.org/abs/2307.07354v1 | 20230714140220 | PG-Triggers: Triggers for Property Graphs | [
"Alessia Gagliardi",
"Anna Bernasconi",
"Davide Martinenghi",
"Stefano Ceri"
] | cs.DB | [
"cs.DB"
] |
Politecnico di Milano
Via Ponzio 34/5
Milano
Italy
20133
[email protected]
0000-0001-8016-5750
Politecnico di Milano
Via Ponzio 34/5
Milano
Italy
20133
[email protected]
0000-0002-2726-7683
Politecnico di Milano
Via Ponzio 34/5
Milano
Italy
20133
[email protected]
0000-0003-0671-2415
Politecnico di Milano
Via Ponzio 34/5
Milano
Italy
20133
[email protected]
Graph databases are emerging as the leading data management technology for storing large knowledge graphs; significant efforts are ongoing to produce new standards (such as the Graph Query Language, GQL), as well as enrich them with properties, types, schemas, and keys. In this article, we propose PG-Triggers, a
complete proposal for adding triggers to Property Graphs,
along the direction marked by the SQL3 Standard.
We define the syntax and semantics of PG-Triggers and then illustrate how they can be implemented on top of Neo4j, one of the most popular graph databases. In particular, we introduce a syntax-directed translation from PG-Triggers into
Neo4j, which makes use of the so-called APOC triggers; APOC is a community-contributed library for augmenting the Cypher query language supported by Neo4j. We also illustrate the use of PG-Triggers through a life science application inspired by the COVID-19 pandemic. The main result of this article is proposing reactive aspects within graph databases as first-class citizens,
so as to turn them into an ideal infrastructure for supporting reactive knowledge management.
PG-Triggers: Triggers for Property Graphs
Stefano Ceri
August 12, 2023
=========================================
§ INTRODUCTION
Graph databases are becoming increasingly important as frameworks for representing and understanding the intricate connections that exist in the real world <cit.>.
Thanks to their expressive query languages,
rich customer support, and strong performance, they are steadily more used to store large knowledge graphs, in a variety of domains that include, e.g., mobility, social and biological networks.
As customary in data management evolution, standards for graph databases are
emerging, most importantly the Graph Query Language (GQL) <cit.>, whose roadmap is followed with interest by the major companies in the field. In addition,
the research community has proposed various
formalizations so as to enrich the semantics of graph databases, first by shaping them in the form of Property Graphs <cit.>, and then by defining the notions of PG-Keys <cit.> and PG-Schema <cit.>.
In this paper, we follow up along this trend and propose PG-Triggers. Triggers have existed since the birth of relational databases <cit.>; they have been studied in <cit.> and formalized in the ISO-ANSI SQL3 Standard <cit.>. So far, they have not been formalized by the graph database research community, although they can be informally supported by most graph database systems, even if in diversified ways (see Section <ref>). Hence, our proposal for PG-Triggers has the potential of influencing future standard development as well as suggesting new directions to the evolution of graph databases.
We demonstrate our idea in practice by focusing on Neo4j – one of the most representative graph database systems. We show that Neo4j already supports all the required components for implementing our PG-Trigger concepts; however, it does so within
a community-supported library, called Awesome Procedures on Cypher (APOC) <cit.>, and not as part of Cypher <cit.> – the declarative graph query language adopted by Neo4j. We show a syntax-directed translation of PG-Triggers into APOC triggers, while also discussing some intricacies that must be solved for an effective translation.
Our proposal is then applied to build a reactive model for a life science application addressing critical aspects of the COVID-19 pandemic. Starting from our CoV2K knowledge base <cit.>, we show a fragment of the knowledge base, its Neo4j implementation, and then several PG-Triggers that define a reactive behavior, in particular by responding to events such as the discovery of critical SARS-CoV-2 mutations, or the diffusion of unknown variants, or the increase of hospitalizations requiring intensive care treatment. In this way, after providing the foundations of PG-Triggers, our work establishes the first brick in the development of a reactive, graph-based knowledge management system and shows the potential for covering complex scenarios responding to critical sequences of events, like those that occur in a pandemic-related setting.
Contributions.
Our main contribution is a proposal for adding reactive behavior in the form of triggers to graph databases. This proposal proves to be both natural and useful.
* Natural, as it suitably adapts the recommendations of the SQL3 standard to a graph setting, thereby adhering to the principle of least surprise, for the great benefit of people already acquainted with the well-known corresponding relational notions.
* Useful, as the knowledge management through reactive behavior proves to be very effective in numerous knowledge-intensive (instead of just data-intensive) scenarios, such as those that have emerged in recent years for dealing with the pandemic.
The adaptation of the notion of triggers to Property Graphs is, however, far from trivial, as, on one hand, it requires dealing with non-relational concepts such as nodes, relationships, labels, and properties, and, on the other hand, it lacks a full correspondence with the notion of table, which, in the relational case, is the source of events that can be monitored by triggers. To this end, our proposal identifies the notion of label as the most suitable and natural choice for defining a set of target elements in the graph, much in the same way in which tables do in SQL triggers.
In order to demonstrate our solution, we show how PG-Triggers can be implemented on top of Neo4j through the so-called APOC trigger procedures. While this provides evidence of the immediate feasibility of our proposal in the currently most popular graph database, it also highlights the inadequacy of APOC as a potential standard: not only are APOC procedures far from stable (we experienced several changes during the development of our implementation), but they also lack a few important ingredients that proved extremely useful in our examples, among which an explicit notion of rule granularity and trigger-label binding.
We included such features in our PG-Triggers and streamlined and simplified their syntax as much as possible, so as to combine ease of use with expressivity, in the hopes that our proposal can drive forthcoming standardization choices in Property Graphs.
Outline. After discussing previous standardization attempts and related work in Section <ref>, we offer a comparative review of graph database technology in Section <ref>, by also surveying the different levels of support for trigger-like constructs in current systems. Our proposed syntax and semantics for PG-Triggers are illustrated in Section <ref>, while their implementation using the APOC library is discussed in Section <ref>.
We exemplify our proposal in Section <ref> by providing reactive support to a life science application tackling COVID-19. Finally, we discuss the extent and impact of PG-Triggers in Section <ref>.
§ RELATED WORK
The Property Graph data model applies to a directed graph where nodes and edges are labeled, and each can have associated ⟨property, value⟩ pairs. The Property Graph data model has gained significant popularity and adoption in various graph database systems. Examples of systems that leverage this model include AgensGraph <cit.>, ArangoDB <cit.>, Blazegraph <cit.>, DataStax Enterprise Graph <cit.>, JanusGraph <cit.>, Neo4j <cit.>, Oracle Graph Database <cit.>, OrientDB <cit.>, TigerGraph <cit.>, Nebula Graph <cit.>, and more. This large participation and attention on Property Graphs led to the idea of creating a standalone Property Graph query language to complement SQL, which was raised by ISO SC32/ WG3 members in early 2017 and is echoed in the GQL manifesto of May 2018 <cit.>.
The LDBC Graph Query Working Groups. The Linked Data Benchmark Council (LDBC) Groups <cit.> are collaborative efforts aimed at advancing the state of the art in graph query processing, by bringing together researchers, industry experts, and practitioners. They promote standards and develop benchmarks for graph data management systems. The Graph Query Working Groups specifically focus on addressing challenges related to querying large-scale graph datasets efficiently. There are multiple subgroups, each focusing on a specific aspect of graph query processing. Among them, working groups for the LDBC Extended GQL Schema (LEX), the Property Graph Schema, the Existing Languages, and the Formal Semantics.
G-CORE. The authors of <cit.> present the syntax and semantics of the Graph Query Language Core (G-CORE), which supports graph pattern matching, property and structural filtering, aggregations, and path expressions. G-CORE incorporates graph-specific features while maintaining a close relationship with traditional relational algebra, making it accessible to both graph database practitioners and researchers. The advantages of G-CORE include its expressive power, formal semantics, and compatibility with existing graph database systems.
PG-Keys. Along the direction of adding semantics to Property Graphs, the notion of key was then proposed for graph databases. PG-Keys <cit.> are unique identifiers assigned to arbitrary subsets of nodes and edges within a Property Graph database. By assigning unique keys, different entities and relationships can be distinguished, preventing duplicates and maintaining data integrity.
PG-Schema. The PG-Schema proposal <cit.> introduces schemas for Property Graph databases; it addresses the need for a standardized approach to schema management, enabling users to define and enforce data constraints, specify relationships, and establish a clear structure for their graph data.
PG-Schema uses the notion of PG-Type
for defining node and edge types, then
expresses type hierarchies and integrity constraints, also including PG-Keys.
The article presents a comprehensive framework for defining and evolving graph schemas, including support for schema inference, data validation, and schema evolution. The benefits of utilizing such a schema include improved data quality, stronger query optimization potential, and enhanced data governance.
§.§ Brief history of reactive extensions for data models
Active extensions have been designed and implemented for a variety of data models,
throughout the fifty-year-long development of database technology. Extensions prior to 1996 are described in <cit.>, with a review of commercial relational systems, and chapters dedicated to Postgres, Ariel, Starburst, A-RDL, Chimera, and Ode. In particular, trigger events and actions were added to O++, the object-oriented query language in use in Ode, developed at AT&T <cit.>, while formal models for integrating database objects and triggers are discussed in <cit.>.
Similar active extensions have been proposed for XML; in particular, Active-XML <cit.> augments XML with reactive computations modeled as services, in the context of a peer-to-peer distributed model of computations. An encompassing approach to the design of database applications using objects, deductive and active rules, is in <cit.>.
Only a few reactive extensions are discussed for graph databases. Among them,
GraphFlow <cit.> and TurboFlux <cit.>; both systems support continuous matching over graphs that change over time, using incremental sub-matching algorithms (along the lines described in <cit.>).
TurboFlux provides a subgraph matching system that employs a concise representation of intermediate results, and it addresses the problems that existing continuous subgraph matching methods face at each update operation.
GraphFlow is an interesting system developed at the University of Waterloo; it proposes several clever ideas for stream management and introduces Cypher++ as an active extension of the Cypher language. In particular, Cypher++ adds continuous queries to reactive processing, as subgraphs are continuously matched against query patterns, and reactions take place when matches occur.
§ COMPARATIVE REVIEW OF GRAPH DATABASE TECHNOLOGY
We analyze some of the existing graph database systems, highlighting how they can support reactive computations. We partition existing systems into three categories:
1) mixed graph-relational systems, 2) NoSQL graph technologies with event listeners, and 3) NoSQL graph technologies without event listeners.
We observe that the majority of systems offer support for ACID (with Atomicity, Consistency, Isolation, and Durability) transactions in a substantially centralized environment, while a few others give up Consistency or trade it for Eventual Consistency <cit.> in a distributed setting.
§.§ Mixed Graph-Relational Systems
The first class includes systems that are built by integrating two engines: a graph database and a relational database. The graph system supports a graph (Cypher-like) query language, whereas the relational system supports SQL. Queries can be built by assembling the two kinds of queries. In particular, the relational engine supports triggers, following the SQL3 standard. Among these systems, we mention:
* AgensGraph. Agens Graph is an open-source tool whose graph system supports the Property Graph data model and provides a range of graph-specific features, such as graph traversal, pattern matching, and pathfinding, thereby enabling efficient graph analytics. Taking advantage of relational technology, AgensGraph supports ACID transactions for composite queries and triggers.
* Oracle Graph Database and Graph Analytics. The Oracle Graph Database supports the Property Graph data model, enabling graph analytics capabilities. Oracle Graph Database integrates with Oracle's broader ecosystem, including its SQL-based data management, ACID transactions, and triggers.
* OrientDB. OrientDB is an open-source, multi-model database management system that combines the flexibility of document databases with the power of graph databases. It employs a SQL-like query language called OrientSQL and a native graph query language called Gremlin. In this context, triggers (renamed as Hooks) enable the triggering of actions in response to document creation, modification, or deletion. They also support ACID transactions.
§.§ NoSQL Graph Technologies with Event Listeners
The second group of systems is part of that family of tools that were characterized as NoSQL – although, over the course of time, they have come closer to SQL technologies in many aspects.
Specifically, we include in this category the systems that support event listeners, a basic ingredient used to build reactive computations.
* Neo4j. Neo4j is an open-source NoSQL native graph database providing an ACID-compliant transactional backend. Neo4j is the most widely adopted graph database; it has introduced Cypher, a declarative language for querying and manipulating graph data, the de-facto standard in graph databases. Neo4j does not support triggers natively, however, triggers are included in APOC (Awesome Procedures on Cypher), a popular extension library for Neo4j. Later in this paper, after introducing PG-Triggers, we will show how they can be translated into APOC triggers.
* ArangoDB. ArangoDB is a multi-model, open-source database system that combines the features of document, key-value, and graph databases into a single, unified platform. It provides a native graph querying language called AQL (ArangoDB Query Language) for efficient graph traversals and graph-based analytics. ArangoDB supports different events that can be monitored through a listener (AbstractArangoEventListener). Orthogonally, it supports multi-statement, ACID-compliant transactions.
* JanusGraph. JanusGraph is an open-source, distributed graph database system built on Apache TinkerPop, a popular graph computing framework; it can efficiently handle massive graphs with billions of vertices and edges. JanusGraph supports transactions; triggers can be produced through the "JanusGraph Bus", a collection of configurable logs to which JanusGraph writes changes to the graph. The availability of ACID transactions depends on the underlying system (for instance, BerkeleyDB <cit.> supports them, while Cassandra <cit.> or HBase <cit.> do not).
* DataStax. DataStax is built upon Apache Cassandra, a highly scalable and distributed NoSQL database. It provides a "Change Data Capture" (CDC) that allows users to capture data modifications. Cassandra follows the BASE (Basically Available, Soft state, Eventual consistency) model, which prioritizes high availability and scalability over strong consistency.
* Dgraph. Dgraph is an open-source, horizontally scalable, distributed graph database. It supports ACID transactions and provides a flexible query language for querying and manipulating data, called DQL. Dgraph provides the Dgraph Lambda framework <cit.>, which can be used to react to events through Typescript or Javascript functions.
* Amazon Neptune. Amazon Neptune is a fully managed graph database service provided by Amazon Web Services (AWS). Amazon Neptune supports both the Apache TinkerPop Gremlin graph traversal language and the openCypher query language for the Property Graph data model. For the Resource Description Framework (RDF) data model, Neptune supports the SPARQL query language <cit.>, which is a standard open language defined by the W3C <cit.>. Amazon Neptune uses Amazon Simple Notification Service (Amazon SNS) to provide notifications when a Neptune event occurs.
§.§ NoSQL Graph Technologies without Event Listeners
The last group consists of NoSQL graph technologies that do not provide a mechanism to implement reactive processing rules in their systems.
* TypeDB. TypeDB is a knowledge graph database, supporting a rich (strongly typed) TypeQL query language. TypeDB does not natively provide either active triggers or event listeners. However, it provides (declarative) schema rules, which allow users to define logical constraints and inference rules to enhance data consistency and reasoning capabilities. It also supports ACID transactions over groups of operations.
* Nebula Graph. NebulaGraph is an open-source database, supporting expressive and efficient graph patterns. It supports composite queries with distributed ACID transactions, but it does not support triggers.
* TigerGraph. TigerGraph provides a graph query language called GSQL, which allows user-defined functions and procedures. It supports traditional ACID consistency, but also a special sequential consistency, in which every data replica performs update operations in the same order. It does not support triggers.
§.§ Comparison
Table <ref> offers a synoptic view of the three categories of systems, along with their main features.
Systems in the first two categories can support reactive computations, either directly through relational triggers or indirectly through event listeners.
Note, however, that, for the first category, trigger support is limited to the relational portion of the system, and extending such support to the graph portion may require specific features of the language at hand (e.g., Hooks in OrientDB or mappings from tables in Oracle) or be essentially unavailable (as in AgensGraph).
Supporting reactive computations with the systems in the third category is instead not obvious from their documentation.
In Table <ref>, we partitioned graph database systems based on their characteristics;
in the columns, we report on the existence of direct Trigger Support (TS), the availability of the notion of Event Listener (EL), the support for Composite Queries as transactions (CQ), and for ACID transactional properties.
Note that Neo4j and JanusGraph do not support composite queries; however, Cypher includes subqueries that can make a single query arbitrarily complex, so this distinction is not very significant. Note also that Neo4j supports expressive trigger procedures in APOC, as we will show in Section <ref>. Also, note that DataStax transactions do not support integrity constraints, as it follows the BASE model for Eventual Consistency.
§ PG-TRIGGERS DEFINITION
We next propose PG-Triggers, by discussing their syntax and semantics.
§.§ Syntax
The PG-Triggers syntax, shown in Figure <ref>, is borrowed - as much as possible - from the SQL3 standard, as discussed, e.g., in Chapter 11 of <cit.>. In our notation,
upper case letters are reserved for terminal symbols;
nonterminals are in lower case and enclosed within <> (angle brackets);
optionality is denoted by [] (square brackets);
items of which only one is required are enclosed within braces {};
alternatives are separated by the | symbol;
and ellipsis points show that the preceding element can be repeated.
Note that, due to the richness of the graph data model w.r.t. the relational model, the syntax has many more options.
In particular, note that graph items can either be nodes or relationships; we use a given label to select, out of all items, the specific set of items that is the trigger's target (also referred to as context).
Note also that trigger events in graph databases are richer than those in relational databases, as they include the creation and deletion of nodes/relationships as well as the setting and removal of their labels and properties.
In analogy with the UPDATE event of the SQL3 standard, the SET and REMOVE events can refer to properties; thus, the ON clause may refer to labels (for nodes, relationships, and labels themselves) but also to properties, identified by a <label>.<property> pair.
§.§ Semantics
As usual, triggers include a condition predicate, which is considered at given action time(s), and a connected action, which is executed only if the corresponding condition predicate holds. We next describe the trigger semantics along the classical dimensions, by making explicit reference to their syntactic elements.
* Granularity. We assume that each trigger execution is linked to a given query in a graph query language (GQL or Cypher); from our point of view, a query operates on an initial state and produces a final state, by creating or removing items (i.e., nodes and relationships), or by setting or removing their labels and properties. These changes are considered at two possible levels of granularity: either individually (FOR EACH clause) or collectively as a set (FOR ALL clause).
* Action Time. As in relational databases, we consider triggers occurring BEFORE and AFTER the statement; in addition, we offer the DETACHED option. BEFORE statements include substantial limitations (essentially, the triggered statement cannot produce state changes); AFTER and DETACHED execute after the completion of the query and must be ordered with respect to all other trigger executions (see next). DETACHED operates within an autonomous transaction.
* Targeting. Triggers in relational databases are targeted to tables.
We opted for using labels as providers of an analogous context; therefore, in the ON clause, we select as target the items with a particular label.
Consequently, for clarity of the execution semantics, we also make the assumption that
the label used for defining the target cannot be set or removed within the triggered action.
Adopting labels for identifying the trigger's context appears to be the most natural choice in the case of Property Graphs – every node and relationship, sharply, either belongs or does not belong to the set of items with a given label. Still, the situation is more complex than in the relational case, since a node or relationship may have more than one associated label, whereas a tuple belongs to exactly one table in the relational model. We discarded the option of using properties alone, as they carry values that can be identical across nodes and relationships and carry no semantics.
* Event Types. Consequently, events refer to either nodes or relationships and include their creation or deletion, and the setting or removal of labels and properties.
* Transition Variables. OLD and NEW refer to the old and new state, respectively. When followed by the NODE keyword or by the RELATIONSHIP keyword, they denote individual items (i.e., single nodes or relationships); otherwise, they denote sets of items. Clearly, when they are used with the NODE or RELATIONSHIP keyword, the item type must be the same as the one targeted in the ON clause.
Their labels and properties can be denoted by using the query language. Conveniently, our proposed syntax offers ways to rename them through a dedicated renaming clause, so as to be able to refer to them mnemonically with respect to the application domain.
For what concerns the order of execution of different triggers that are activated by the same Cypher or GQL query, note that these queries are generally much more powerful than relational update queries, targeted to a single table, as they can update multiple nodes and relationships, with different labels. Hence, the most sensible option for prioritizing them is to resort to the trigger creation time, providing a totally ordered prioritization.[Another option would be to use a total order based on the names of the triggers, as some relational databases do (e.g., PostgreSQL; see <https://www.postgresql.org/docs/current/sql-createtrigger.html>).] The execution semantics of triggers will then mimic the relational one, with the stack of trigger execution contexts as described, e.g., in <cit.>.
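To fix ideas, the following minimal trigger, written in the proposed syntax with placeholder label and property names (it is not part of the running example introduced later), illustrates the clauses discussed above:

CREATE TRIGGER ExampleShape
AFTER SET
ON 'SomeLabel'.'someProperty'
FOR EACH NODE
WHEN OLD.someProperty <> NEW.someProperty
BEGIN
  CREATE (:Log {time: DATETIME(), desc: 'someProperty has changed'})
END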
§ MAPPING PG-TRIGGERS TO APOC TRIGGERS
As shown in Table <ref>, triggers are not natively supported by Neo4j <cit.> and Cypher at the current time; however, we can make use of their implementation in the APOC library,[We refer to Version 4.4, available as of May 2023.] created as a Neo4j community effort. The library includes over 450 procedures, providing functionalities for utilities, conversions, graph updates, data import, data transformations, and manipulations.
Comparable mappings could be performed with the other NoSQL Graph Technologies with Event Listeners (see Section <ref>) and programmatic support similar to APOC.
We focus on the apoc.trigger collection of procedures, which includes several trigger operations. Triggers can be created (apoc.trigger.install), deleted (apoc.trigger.drop or apoc.trigger.dropAll), paused (apoc.trigger.stop), and resumed (apoc.trigger.start); the creation of a trigger uses the following syntax:
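A sketch of the installation call, assuming the APOC 4.4 signature (the trailing configuration map is the fifth parameter mentioned in the footnote that follows):

CALL apoc.trigger.install(
  'databaseName',          // database in which the trigger is installed
  'triggerName',           // name of the trigger
  'Cypher statement',      // code executed when the trigger fires
  {phase: 'afterAsync'},   // selector: the action time
  {}                       // optional configuration map
);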
The parameters refer to the database in use, the name of the trigger, its code (statement), and its action time (selector);[A fifth parameter config refers to Neo4j configuration, left empty as it is not relevant for the trigger translation.] note that the specific trigger event (e.g., create, delete, set, remove) is defined within the trigger code.
Neo4j supports a single Cypher statement within a transaction, but the statement can be arbitrarily complex thanks to Cypher clauses, such as WITH, which allow the nesting of subqueries. As transactions may involve changes to nodes and relationships having many distinct labels or properties, each transaction can activate a number of APOC triggers.
Note also that the before and after action times of APOC triggers are discouraged by the APOC community, as they may create concurrency control conflicts with the triggering statement and between them.[We have also experienced several conflicts while testing the AFTER action time.] As a consequence, the advised action time is afterAsync, causing each trigger to run within its own transaction.
We will therefore only adopt afterAsync in our translations to APOC syntax.
Note, however, that such a pragmatic approach does not guarantee that triggers will see the final state produced by the transaction that activates them, since other transactions – as well as many trigger executions – can occur after the commit of the activating transaction and before the trigger actually starts its execution.
Figure <ref> shows the syntactic mapping between a PG-Trigger reacting to node creation and the corresponding APOC trigger install;
note that similar syntax-directed translations can be easily drafted for all eight kinds of supported events
(node, relationship) × (creation, deletion) and
(label, property) × (set, removal).
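As an illustration of one of the other event kinds, a relationship-creation event could be translated along the following lines; this is a sketch that assumes APOC's $createdRelationships parameter, and the relationship type, trigger name, and alert text are placeholders rather than parts of the paper's figures:

CALL apoc.trigger.install('databaseName', 'NewSomeTypeRelationship',
  "UNWIND $createdRelationships AS cRel
   CALL apoc.do.when(
     type(cRel) = 'SomeType',
     'CREATE (:Log {time: DATETIME(), desc: \"New SomeType relationship\"})',
     '', {})
   YIELD value RETURN *",
  {phase:'afterAsync'});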
As discussed, the APOC install/drop procedures have four parameters: the databaseName (not present on the left side), the trigger name (copied from the left side), the statement, and the selector (i.e., the action time 'afterAsync'). The richest parameter is the statement, as it, in turn, includes a pair of standard statements.
* The first one is a call to the UNWIND clause, returning the list of items affected by the event (in the example, the list of created nodes); these are renamed (as cNodes) so as to become usable in the rest of the statement. Table <ref> shows the different trigger procedures that can be intercepted by UNWIND to capture the different action types.
* Then, in our translation scheme, we opted for using the APOC do.when procedure, so as to generate placeholders for the condition as well as the action part of the trigger. The do.when procedure has four parameters: the condition, the action if the condition is met, the action if the condition is not met, and the operands that can be used in the condition and action. The procedure returns (YIELDs) a value. In our translation, the do.when condition is a conjunctive predicate extracting the items with a given label or property (taken from the left side) which satisfy the condition predicate (also taken from the left side). In APOC, the condition is a Boolean expression of simple terms; if terms need to be extracted from the data graph, it is necessary to place a condition query before the procedure. The first do.when action is executed when the condition is true; it uses the trigger's action code (taken from the left side); the second action, executed when the condition is false, is an empty string, since nothing needs to be done in that case. Both the condition predicate and the action code refer to specific nodes, and these are the ones returned by the UNWIND clause and appearing as the fourth do.when parameter.
A number of additional aspects can be noted, as explained in the following. First, we cannot separate the two cases of granularity (item or set oriented), because the UNWIND clause returns, in any case, the entire set of involved items.[UNWIND returns a list rather than a set, but there is no definition of the order in which the list is produced.] Thus, the triggered statement is in charge of considering, within its code, either each item or, collectively, the set of all items.
Second, the utility functions in Table <ref> also allow us to build the OLD and NEW transition variables for all supported events; these refer either to each
individual transition value or to the set of transition values, as the supported procedures do not distinguish the two cases. Note that each transition variable is appropriately paired to events, e.g., NEW transition variables are defined for the creation of nodes and relationships and for the setting of new labels and properties.
Several examples of the translations of various trigger options, including the use of OLD and NEW variables, are discussed in the examples in the next section.
§ RUNNING EXAMPLE: COVID-19 AND SARS-COV-2
Since the outbreak of the COVID-19 pandemic, the disease evolution has been constantly monitored; meanwhile, SARS-CoV-2, the virus responsible for the disease, has continually mutated its genomic sequence; thanks to the evolutive selection, the virus has developed increased transmissibility, host immune evasion, and resistance to antivirals <cit.>. The availability of genome sequences collected over time has been very useful for molecular surveillance of the epidemic and for the evaluation and planning of effective control strategies.
This scenario is sufficiently rich and diversified for illustrating the expressivity and versatility of our PG-Triggers proposal.
§.§ PG-Schema
We designed several data models for dealing with COVID-19 and SARS-CoV-2 <cit.>; in particular, we proposed the CoV2K knowledge base <cit.>, providing a description of SARS-CoV2 sequences, of their amino acid mutations and their effects, of the clustering of sequences within lineages and variants, and of the assignment of sequences to donors (i.e., patients). We next model an excerpt of CoV2K as a graph database; in particular, we use the abstractions from the PG-Schema proposal <cit.> so as to take advantage of its rich semantics for graph definition. Note that, in PG-Schema, all nodes are typed and have a unique label;
the schema also supports type hierarchies (e.g., between Patient and HospitalizedPatient), with inheritance. Constraints, including keys, are separately defined.
The adoption of PG-Schema makes graph databases more similar to relational databases, especially with a STRICT graph type definition, where labels uniquely identify nodes in the same way as table names identify relational tables; in this context, changing labels is not possible, thereby implicitly satisfying the semantic constraint on legal statements introduced in the PG-Trigger semantics section (<ref>),
which disallowed setting or removing, in the , labels defining the target.
Note also that relationships are implicitly identified not only by their types but also by the types of nodes that they connect. However, our standard proposal supports generic graph databases, where nodes and relationships can have multiple labels and some of them can be unlabeled.
The PG-Schema of our running example is shown in Figure <ref>. It includes Mutations, with their name (e.g., "Spike:D614G") and the protein they affect (e.g., "Spike"), and their Risk relationship with Effects (e.g., "Enhanced infectivity"). Sequences are characterized by their identifier (a key) and by the relationship with their mutations; each Sequence belongs to a Lineage, for which we know the name and an optional whoDesignation (a property assigned by the World Health Organization, e.g., "Alpha"). Sequences are collected at a given date of collection, within Regions, which belong to Countries; they are sampled from Patients.
Patients have an identifier (a key), further demographic attributes, and a set of comorbidity values (e.g., "diabetes"). Some patients are vaccinated; in this case, a property of type INT32 denotes the number of vaccinations (0 if the patient is not vaccinated, else the number of vaccine shots). Some Patients can be admitted to Hospitals, which are named and located within Regions; each Hospital has a maximum number of intensive care beds (icuBeds), and pairs of Hospitals are connected by given distances. On admission, HospitalizedPatients are associated with an internal identifier and a severity indication (e.g., "severe"); some of them (the IcuPatients) may also be admitted to intensive care units, on given admission dates.
The PG-Schema specification for this graph database is shown in Figure <ref>.
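Since the figure with the full PG-Schema specification cannot be reproduced here, the following plain-Cypher fragment sketches how a few instances conforming to this schema could be created; the property name description and the literal values are illustrative assumptions, while the labels, relationship types, icuBeds, and distance appear in the triggers below:

// Sample instances (values are illustrative)
CREATE (m:Mutation {name:'Spike:D614G'}),
       (e:CriticalEffect {description:'Enhanced infectivity'}),  // 'description' is an assumed property name
       (m)-[:Risk]->(e);
CREATE (sacco:Hospital {name:'Sacco', icuBeds:50}),
       (meyer:Hospital {name:'Meyer', icuBeds:40}),
       (lombardy:Region {name:'Lombardy'}),
       (sacco)-[:LocatedIn]->(lombardy),
       (sacco)-[:ConnectedTo {distance:300}]->(meyer);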
§.§ PG-Triggers
We next present several PG-Triggers that progressively illustrate the various features of the proposed standard. In order to write efficient queries, the code should normally refer to transition variables, which are the handlers to the part of the graph that has been modified. The first five triggers produce alert nodes, i.e., nodes that – once described with PG-Schema – are of a new, OPEN type (allowing for the inclusion of arbitrary properties).
All the alerts include the time when they are produced and a textual description.
§.§.§ Simple reactions to node, relationship, and property creation
The first trigger reacts to the fact that a new mutation is associated with a critical effect by creating an alert with the name of the mutation.
[frame=single]
CREATE TRIGGER NewCriticalMutation
AFTER CREATE
ON 'Mutation'
FOR EACH NODE
WHEN EXISTS (NEW)-[:Risk]-(:CriticalEffect)
BEGIN
CREATE (:Alert {time:DATETIME(),
desc:'New critical mutation',
mutation:NEW.name})
END
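For readers who want to run this first trigger directly, a possible APOC rendering obtained by applying the translation scheme of Section <ref> is the following; it is our sketch and not among the translations listed later in the paper:

CALL apoc.trigger.install('databaseName', 'NewCriticalMutation',
  "UNWIND $createdNodes AS cNodes
   OPTIONAL MATCH (cNodes)-[:Risk]-(ce:CriticalEffect)
   CALL apoc.do.when(
     cNodes:Mutation AND ce IS NOT NULL,
     'CREATE (:Alert {time: DATETIME(),
        desc: \"New critical mutation\", mutation: $cNode.name})',
     '', {cNode: cNodes})
   YIELD value RETURN *",
  {phase:'afterAsync'});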
The second trigger, similar to the previous one, reacts to the association of a critical mutation with a lineage (i.e., a viral subspecies, also informally called variant) and creates an alert for the lineage. Note that in this example the condition part is merged within the action; as in relational triggers, the separation between condition and action may be arbitrary.
[frame=single]
CREATE TRIGGER NewCriticalLineage
AFTER CREATE
ON 'BelongsTo'
FOR EACH RELATIONSHIP
WHEN
MATCH (s:Sequence)-[NEW]-(l:Lineage)
WHERE EXISTS
MATCH (:CriticalEffect)-[:Risk]-
(:Mutation)-[:FoundIn]-(s)
BEGIN
CREATE (:Alert {time:DATETIME(),
desc:'New critical lineage',
lineage:l.name})
END
The third trigger monitors a simple change of the whoDesignation property, e.g., the change from Indian to Delta:
[frame=single]
CREATE TRIGGER WhoDesignationChange
AFTER SET
ON 'Lineage'.'whoDesignation'
FOR EACH NODE
WHEN OLD.whoDesignation <> NEW.whoDesignation
BEGIN
CREATE (:Alert {time: DATETIME(),
desc:'New Designation for an existing Lineage'})
END
§.§.§ Conditions using fixed thresholds vs state comparisons
The next trigger counts the patients who require intensive care at the Sacco Hospital and raises an alert when their number exceeds 50. Note the use of two labels to denote matching along type hierarchies. Note also that the trigger uses set granularity (FOR ALL NODES) and that no transition variable is needed, as the counting relates to all the patients.
[frame=single]
CREATE TRIGGER IcuPatientsOverThreshold
AFTER CREATE
ON 'IcuPatient'
FOR ALL NODES
WHEN
MATCH (p:HospitalizedPatient:IcuPatient)
-[:TreatedAt]-(:Hospital {name:'Sacco'})
WITH COUNT(p) AS icuPat
WHERE icuPat > 50
BEGIN
CREATE (:Alert {time:DATETIME(), desc:'ICU patients
at Sacco Hospital are more than 50'})
END
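This trigger is not among the four translated in Section <ref>; a possible APOC counterpart, following the same scheme and the Isa modeling of the type hierarchy adopted there (again our sketch), is:

CALL apoc.trigger.install('databaseName', 'IcuPatientsOverThreshold',
  "UNWIND $createdNodes AS cNodes
   MATCH (p:IcuPatient)-[:Isa]-(:HospitalizedPatient)
     -[:TreatedAt]-(:Hospital {name:'Sacco'})
   WITH cNodes, COUNT(DISTINCT p) AS icuPat
   CALL apoc.do.when(
     cNodes:IcuPatient AND icuPat > 50,
     'CREATE (:Alert {time:DATETIME(),
        desc:\"ICU patients at Sacco Hospital are more than 50\"})',
     '', {})
   YIELD value RETURN *",
  {phase:'afterAsync'});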
The next trigger compares the patients who are in intensive care at the Sacco Hospital after admission, and raises an alert when the new patients in ICU are more than 10% of the total of patients in ICU; we assume that admissions are periodically registered by a transaction, e.g., daily. Note that this trigger also uses set granularity and that the NEW transition variable is used in the comparison; as we assume that transition variables correspond to a well-defined set of nodes affected by the event, we can further define new variables over them.
[frame=single,fontsize=]
CREATE TRIGGER IcuPatientIncrease
AFTER CREATE
ON 'IcuPatient'
FOR ALL NODES
WHEN
MATCH (p:HospitalizedPatient:IcuPatient)-
[:TreatedAt]-(:Hospital {name: 'Sacco'}),
MATCH (pn:NEW)-[:TreatedAt]-(:Hospital {name:'Sacco'})
WITH COUNT(pn) AS NewIcuPat,
COUNT(p) AS TotalIcuPat
WHERE NewIcuPat / TotalIcuPat > 0.1
BEGIN
CREATE (:Alert {time:DATETIME(), desc:'ICU patients
at Sacco Hospital have increased by > 10%'})
END
§.§.§ Triggers with side effects in the action
The next trigger describes the relocation of patients from the Sacco Hospital (in Lombardy) to the Meyer Hospital (in Tuscany), caused by the unavailability of ICU beds.[Patients' relocations out of Lombardy occurred during the first pandemic wave, not only to Tuscany, but also to Germany and Switzerland.]
Note that the trigger evaluates, in the condition, if ICU beds at Sacco are insufficient due to the new admissions; if so, the statement first considers the availability of ICU beds at the Meyer Hospital. If they are sufficient for hosting the new admissions at Sacco, then these patients are moved from the Sacco to the Meyer Hospital. Note that the patient relocation is rendered in the graph database by removing, for each patient, the relationship with the Sacco Hospital, and adding the relationship with the Meyer Hospital.
[frame=single, fontsize=]
CREATE TRIGGER IcuPatientMove
AFTER CREATE
ON 'IcuPatient'
FOR ALL NODES
WHEN
MATCH (p:HospitalizedPatient:IcuPatient)-[:TreatedAt]-
(h:Hospital {name:'Sacco'})
WITH COUNT(p) AS TotalIcuPat, h
WHERE TotalIcuPat > h.icuBeds
BEGIN
MATCH (pn:NEW)-[:TreatedAt]-(:Hospital {name:'Sacco'}),
MATCH (pt:HospitalizedPatient:IcuPatient)-[:TreatedAt]-
(ht:Hospital {name:'Meyer'})
WITH COUNT(pt) AS MeyerICU, ht.icuBeds AS MeyerBeds,
COUNT(pn) AS newICUSacco, pn, ht
WHERE newICUSacco + MeyerICU <= MeyerBeds
THEN FOREACH (p IN pn)
BEGIN
MATCH (p)-[c:TreatedAt]-(:Hospital {name:'Sacco'})
DELETE c
CREATE (p)-[:TreatedAt]->(ht)
END
END
The final example gives a different solution to the same problem, i.e., reacting to the unavailability of ICU beds in any Hospital in the Lombardy Region.
Note that this trigger is at item level, operates upon all hospitals in Lombardy where there are new patients admitted to ICU, and moves new admitted patients from those hospitals where ICU beds are exceeded, by finding the closest hospital where patients may be relocated. Note that patients of the same hospital are moved together to the closest hospital (not necessarily in `Lombardy').
[frame=single,fontsize=]
CREATE TRIGGER MoveToNearHospital
AFTER CREATE
ON 'IcuPatient'
FOR EACH NODE
WHEN
MATCH (NEW:HospitalizedPatient:IcuPatient)
-[:TreatedAt]-(h:Hospital)
-[:LocatedIn]-(:Region {name:'Lombardy'}),
MATCH (p:IcuPatient)-[:TreatedAt]-(h)
WITH COUNT(p) AS TotalIcuPat, h
WHERE TotalIcuPat > h.icuBeds
BEGIN
MATCH (h:Hospital)
-[:LocatedIn]-(:Region {name:'Lombardy'}),
MATCH (pn:NEW)-[:TreatedAt]-(h)
-[ct:ConnectedTo]-(hc:Hospital)
WITH ct, pn, h, hc ORDER BY ct.distance LIMIT 1
THEN
BEGIN
MATCH (pn)-[c:TreatedAt]-(h)
DELETE c
CREATE (pn)-[:TreatedAt]->(hc)
END
END
Note that this trigger may not converge if ICU beds in close hospitals are also exceeded, as the first trigger execution could cause an infinite cascade of trigger executions. Termination analysis for triggers is a well-known research topic, and in particular, by using the methods discussed in <cit.> one could prove that recursion terminates when the availability of beds is tested prior to moving patients, while failure to do the test may lead to potential non-termination.
§.§ Translation to APOC triggers
We consider four examples and present their translation to Neo4j APOC triggers. In general, the translation is quite readable, although there are many void parameters due to the specific syntax of the APOC trigger procedures.
Note that type hierarchies are not supported in Neo4j, thus we model the hierarchy from HospitalizedPatient to IcuPatient as a conventional Isa relationship, directed from the subtype to the type.
Note also that the code of APOC triggers should be coherent with the trigger granularity: with item granularity the condition predicate and triggered statement should be item-based; with set granularity they should be set-based and in particular include aggregate predicates. Also, recall that the utilities of Table <ref> return an arbitrarily ordered list of all the transition variables for the given transaction, regardless of its granularity.
The trigger WhoDesignationChange of Subsection <ref> includes a condition using both the OLD and NEW transition variables. Note that the UNWIND clause is exploited in order to extract all the terms used in the Boolean condition, taking advantage of the hierarchical structure of the $assignedNodeProperties parameter present in the first argument of the do.when APOC procedure, thus no condition query is necessary. Its translation is:
[frame=single,fontsize=]
CALL apoc.trigger.install('databaseName',
'WhoDesignationChange',
"UNWIND keys(assignedNodeProperties) AS k
UNWINDassignedNodeProperties[k] AS aProp
WITH aProp.node AS node, collect(aProp.key) AS propList,
aProp.old as oldValue, aProp.new as newValue
CALL apoc.do.when(
node:Lineage AND 'whoDesignation' IN propList
AND oldValue <> newValue,
'CREATE (:Alerttime: DATETIME(),
desc: N̈ew Designation for an existing Lineage)',
”,)
YIELD value RETURN *",
phase:'afterAsync');
The trigger IcuPatientIncrease of section <ref> demonstrates the need of using Isa relationships as a replacement for type hierarchies. Note, in this case, the use of the condition query in order to extract the terms used in the Boolean condition of the first argument of the do.when APOC procedure.
[frame=single,fontsize=]
CALL apoc.trigger.install('databaseName',
'IcuPatientIncrease',
"UNWIND createdNodes as cNodes
MATCH (p:IcuPatient)-[:Isa]-(:HospitalizedPatient)
-[:TreatedAt]-(h:Hospitalname:'Sacco')
WITH COUNT(cNodes) AS NewIcuPat,
COUNT (p) AS TotalIcuPat, cNodes
CALL apoc.do.when(
cNodes:IcuPatient AND NewIcuPat/TotalIcuPat > 0.1,
'MERGE (:Alerttime:DATETIME(), desc:ÏCU patients
at Sacco Hospital have increased more than 10
”, )
YIELD value RETURN *",
phase:'afterAsync');
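After installing this trigger, one can check that it fired by inspecting the alert nodes it creates; the following simple verification query is not part of the paper's listings:

MATCH (a:Alert)
WHERE a.desc CONTAINS 'ICU patients'
RETURN a.time, a.desc
ORDER BY a.time DESC;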
The trigger IcuPatientMove of Subsection <ref> uses the condition to check whether patients at Sacco exceed the number of ICU beds; the statement implements the patient transfer, subject to availability at the Meyer Hospital. Note that the deletion and creation of the relationships are done using two separate FOREACH clauses; note also that a condition query is required in this case as well.
[frame=single,fontsize=]
CALL apoc.trigger.install('databaseName',
'IcuPatientMove',
"UNWIND createdNodes AS cNodes
MATCH (:IcuPatient)-[:Isa]-(p:HospitalizedPatient)-
[:TreatedAt]-(h:Hospitalname:'Sacco')
WITH COUNT(p) AS TotalIcuPat,
h.IcuBeds AS TotalBeds,
cNodes
CALL apoc.do.when(
cNodes:IcuPatient AND TotalIcuPat > TotalBeds,
'MATCH (pt:IcuPatient)-[:Isa]-(:HospitalizedPatient)
-[:TreatedAt]-(ht:Hospitalname:Meyer)
WITH COUNT(pt) AS MeyerICU, ht.IcuBeds AS MeyerBeds,
COUNT(cNodes) AS newICUSacco, ht, cNodes
WHERE newICUSacco + MeyerICU <= MeyerBeds
MATCH (cNodes)-[:Isa]-(:HospitalizedPatient)
-[c:TreatedAt]-(:Hospitalname:Sacco)
FOREACH (p IN [cNodes] | DELETE c)
FOREACH (p IN [cNodes] | CREATE(p)-[:TreatedAt]->(ht))',
”, cNodes:cNodes, Meyer:'Meyer', Sacco:'Sacco')
YIELD value RETURN count(*)",
phase: 'afterAsync');
Finally, we present the trigger MoveToNearHospital of subsection <ref>.
[frame=single, fontsize=]
CALL apoc.trigger.install('databaseName',
'MoveToNearHospital',
"UNWIND createdNodes AS cNodes
MATCH (cNodes)
-[:Isa]-(:HospitalizedPatient)
-[:TreatedAt]-(h:Hospital)
-[:LocatedIn]-(:Regionname:'Lombardy')
MATCH (:IcuPatient)-[:Isa]-(p:HospitalizedPatient)
-[:TreatedAt]-(h)
WITH COUNT(p) AS TotalIcuPat,
h.icuBeds AS TotalIcuBeds,
cNodes, h
CALL apoc.do.when(
nodes:IcuPatient AND TotalIcuPat > TotalIcuBeds,
'MATCH (h)-[ct:ConnectedTo]-(hc:Hospital)
WITH ct, cNodes, h, hc ORDER BY ct.distance ASC LIMIT 1
MATCH (cNodes)-[:Isa]-(ph:HospitalizedPatient)
-[c:TreatedAt]-(h)
FOREACH (pat in [cNodes] | DELETE c)
FOREACH (pat in [cNodes] | CREATE (ph)
-[:TreatedAt]->(hc))',
”,CNodes:cNodes, h:h)
YIELD value RETURN *",
phase: 'afterAsync');
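For completeness, the remaining lifecycle operations mentioned at the beginning of Section <ref> correspond to APOC calls along the following lines; this is a sketch, and the exact procedure set depends on the installed APOC version:

CALL apoc.trigger.show('databaseName');                    // list the installed triggers
CALL apoc.trigger.stop('databaseName', 'IcuPatientMove');  // pause a trigger
CALL apoc.trigger.start('databaseName', 'IcuPatientMove'); // resume it
CALL apoc.trigger.drop('databaseName', 'IcuPatientMove');  // delete it
CALL apoc.trigger.dropAll('databaseName');                 // delete all triggers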
§ DISCUSSION AND CONCLUSION
In this article, we have shown that
adding reactive components to graph databases is at the same time very natural, along the SQL3 standard, and also very useful, as reactive programming
can leverage graph databases in supporting important applications.
We are driven by the ambitious objective of building the next generation of knowledge graphs, which have
the potential of dealing with knowledge changes, as required by crisis scenarios. Large knowledge graphs used by companies such as Google or Amazon serve the static needs of supporting search or e-commerce, managing instance-intensive changes but no real paradigm change. The recent COVID-19 pandemic has instead shown a scenario where knowledge has been produced at unprecedented speed and within very different scientific communities, including virologists, clinicians, economists, social scientists, and – ultimately – decision-makers in politics. Our objective is to build infrastructures for dealing with
such scenarios, where knowledge is generated dynamically and can be contradicted.
The first brick for building this vision is adding a basic trigger component to graph databases. We have shown
that the concepts of PG-Triggers descend naturally from relational concepts, although they require adaptation due to the richer model of graph databases, which includes nodes, relationships, labels, and properties.
We have also shown that PG-Triggers can be supported on top of Neo4j by making use of the APOC trigger procedures, with straightforward syntax-directed translations - once the various ingredients have been understood and mastered. A major drawback of this solution is that APOC, being community-driven, is subject to changes - we experienced some changes during the development of our translation
schemes. One objective of this article is also to motivate graph database companies, such as Neo4j, in supporting triggers as part of their standard offer.
§ RESOURCES
A prototype of the Neo4j graph database, with suitable scripts for data creation and population (according to the running example),
and APOC triggers along the definitions given in the manuscript
is provided in a dedicated, open GitHub repository <cit.> registered on Zenodo <cit.>.
We thank Angela Bonifati for providing advice during the early phase of this research, and Ioana Manolescu for useful discussions on reactive knowledge management.
|